Custom video source for WebRTC on Android - java

Overview
I would like to use a custom video source to live stream video via the WebRTC Android implementation. If I understand correctly, the existing implementation only supports the front- and back-facing cameras on Android phones. The following classes are relevant in this scenario:
Camera1Enumerator.java
VideoCapturer.java
PeerConnectionFactory
VideoSource.java
VideoTrack.java
Currently, to use the front-facing camera on an Android phone, I do the following:
// enumerate Camera1 devices (false = do not capture to a texture)
CameraEnumerator enumerator = new Camera1Enumerator(false);
VideoCapturer videoCapturer = enumerator.createCapturer(deviceName, null);
// false = this source is not a screencast
VideoSource videoSource = peerConnectionFactory.createVideoSource(false);
videoCapturer.initialize(surfaceTextureHelper, this.getApplicationContext(), videoSource.getCapturerObserver());
VideoTrack localVideoTrack = peerConnectionFactory.createVideoTrack(VideoTrackID, videoSource);
My scenario
I have a callback handler that receives the video buffer as a byte array from the custom video source:
public void onReceive(byte[] videoBuffer, int size) {}
How can I send this byte-array buffer? I'm not sure about the solution, but I think I will have to implement a custom VideoCapturer.
Existing questions
This question might be relevant, though I'm not using the libjingle library, only the native WebRTC Android package.
Similar questions/articles:
for the iOS platform, but unfortunately the answers didn't help
for the native C++ platform
an article about a native implementation

There are two possible solutions to this problem:
Implement a custom VideoCapturer and create a VideoFrame from the byte[] stream data in the onReceive handler. There is actually a very good example, FileVideoCapturer, which implements VideoCapturer (a minimal sketch of this approach follows the example below).
Simply construct a VideoFrame from an NV21Buffer, which is created from our byte-array stream data. Then we only need to use our previously created VideoSource to capture this frame. Example:
public void onReceive(byte[] videoBuffer, int size, int width, int height) {
    // timestamps must be in nanoseconds and monotonically increasing
    long timestampNS = TimeUnit.MILLISECONDS.toNanos(SystemClock.elapsedRealtime());
    // wrap the NV21 bytes; the last argument is an optional release callback
    NV21Buffer buffer = new NV21Buffer(videoBuffer, width, height, null);
    VideoFrame videoFrame = new VideoFrame(buffer, 0 /* rotation */, timestampNS);
    // hand the frame to the source that feeds the local VideoTrack
    videoSource.getCapturerObserver().onFrameCaptured(videoFrame);
    videoFrame.release();
}
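For reference, here is a minimal sketch of option 1. Only the VideoCapturer interface methods are from the WebRTC API; the class name ByteArrayCapturer and the pushFrame helper are illustrative:

import java.util.concurrent.TimeUnit;
import android.content.Context;
import android.os.SystemClock;
import org.webrtc.*;

public class ByteArrayCapturer implements VideoCapturer {
    private CapturerObserver capturerObserver;

    @Override
    public void initialize(SurfaceTextureHelper helper, Context context, CapturerObserver observer) {
        this.capturerObserver = observer;
    }

    @Override public void startCapture(int width, int height, int framerate) {}
    @Override public void stopCapture() throws InterruptedException {}
    @Override public void changeCaptureFormat(int width, int height, int framerate) {}
    @Override public void dispose() {}
    @Override public boolean isScreencast() { return false; }

    // call this from the onReceive handler; not part of the VideoCapturer interface
    public void pushFrame(byte[] nv21, int width, int height) {
        long timestampNs = TimeUnit.MILLISECONDS.toNanos(SystemClock.elapsedRealtime());
        VideoFrame frame = new VideoFrame(new NV21Buffer(nv21, width, height, null), 0, timestampNs);
        capturerObserver.onFrameCaptured(frame);
        frame.release();
    }
}

The capturer is wired up like the camera capturer in the question: pass it to initialize(...) along with the VideoSource's CapturerObserver.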

Related

Get ByteBuffer from Image for TensorFlow Lite Model

I am creating an Android app to run on Google Glass Enterprise Edition 2 that does real-time face recognition. I am using CameraX as my camera API and TensorFlow Lite (TFLite) as my classification model. However, the TFLite model input requires a ByteBuffer, which I am unable to produce from the image retrieved from CameraX.
How do I get my Image from CameraX into the ByteBuffer class for my TFLite model?
CameraX Image Analysis: Reference
val imageAnalysis = ImageAnalysis.Builder()
    .setTargetResolution(Size(640, 360))
    .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
    .build()
imageAnalysis.setAnalyzer(AsyncTask.THREAD_POOL_EXECUTOR, ImageAnalysis.Analyzer { imageProxy ->
    val rotationDegrees = imageProxy.imageInfo.rotationDegrees
    val mediaImage = imageProxy.image
    if (mediaImage != null) {
        val image = InputImage.fromMediaImage(mediaImage, rotationDegrees)
        /* Classify the Image using TensorFlow Lite Model */
    }
    imageProxy.close() // analysis stalls if proxies are never closed
})
TensorFlow Model Sample Code
val model = FaceRecognitionModel.newInstance(context)
// Creates inputs for reference.
val inputFeature0 = TensorBuffer.createFixedSize(intArrayOf(1, 224, 224, 3), DataType.FLOAT32)
inputFeature0.loadBuffer(byteBuffer)
// Runs model inference and gets result.
val outputs = model.process(inputFeature0)
val outputFeature0 = outputs.outputFeature0AsTensorBuffer
// Releases model resources if no longer used.
model.close()
Try using the TensorImage class from the TensorFlow Lite Support Library.
Roughly, you can follow these steps.
1. Convert the Image object into a Bitmap. There should be other Stack Overflow questions on how to do this (e.g., this answer).
2. Create a TensorImage object from the Bitmap object using the TensorImage.fromBitmap() factory.
3. Call the getBuffer() method on the TensorImage object to get the underlying ByteBuffer.
You might also want to do some image pre-processing, in case the image from CameraX doesn't exactly match the format expected by the model. For this, you can explore the ImageProcessor utility.
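A rough sketch of those steps in Java (the 224x224 size and the 0-to-255 normalization are assumptions taken from the model code above, not requirements of the library):

import org.tensorflow.lite.support.common.ops.NormalizeOp;
import org.tensorflow.lite.support.image.ImageProcessor;
import org.tensorflow.lite.support.image.TensorImage;
import org.tensorflow.lite.support.image.ops.ResizeOp;
import java.nio.ByteBuffer;

// bitmap: a Bitmap converted from the CameraX image as in step 1
ImageProcessor processor = new ImageProcessor.Builder()
        .add(new ResizeOp(224, 224, ResizeOp.ResizeMethod.BILINEAR))
        .add(new NormalizeOp(0f, 255f)) // scales pixel values to [0, 1]
        .build();
TensorImage tensorImage = processor.process(TensorImage.fromBitmap(bitmap));
ByteBuffer byteBuffer = tensorImage.getBuffer(); // feed this to inputFeature0.loadBuffer(...)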
I did some digging and applied the findings to solve my problem.
The image I get from CameraX is in YUV, while I trained my model on 224x224 RGB images. To suit my case, I first convert the image to an RGB Bitmap, then crop it to 224x224, and finally convert the Bitmap to a ByteBuffer.
The TFLite model accepted the converted RGB ByteBuffer and processed it, returning a TensorBuffer.
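One common way to do the YUV-to-Bitmap step is to go through YuvImage and JPEG. The sketch below assumes the YUV_420_888 planes have already been repacked into a single NV21 byte array (nv21Bytes, width, and height are placeholders):

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.ImageFormat;
import android.graphics.Rect;
import android.graphics.YuvImage;
import java.io.ByteArrayOutputStream;

// compress the NV21 frame to JPEG, then decode it back as an RGB Bitmap
YuvImage yuv = new YuvImage(nv21Bytes, ImageFormat.NV21, width, height, null);
ByteArrayOutputStream out = new ByteArrayOutputStream();
yuv.compressToJpeg(new Rect(0, 0, width, height), 100, out);
Bitmap rgb = BitmapFactory.decodeByteArray(out.toByteArray(), 0, out.size());
// scale/crop to the model's expected input size
Bitmap input = Bitmap.createScaledBitmap(rgb, 224, 224, true);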

Android Java - programmatically capture background before app window opens [duplicate]

Possible Duplicate: How to programmatically take a screenshot on Android?
How can I capture the Android device screen content and make an image file from the snapshot data? Which API should I use, and where can I find related resources?
BTW:
not a camera snapshot, but the device screen
Use the following code:
Bitmap bitmap;
View v1 = MyView.getRootView();
v1.setDrawingCacheEnabled(true);
bitmap = Bitmap.createBitmap(v1.getDrawingCache());
v1.setDrawingCacheEnabled(false);
Here MyView is the View we want to capture. You can also get the drawing cache of any View this way (without getRootView()).
There is another way: if the root view is a ScrollView, it's better to use the following code.
LayoutInflater inflater = (LayoutInflater) this.getSystemService(LAYOUT_INFLATER_SERVICE);
// activity_main is the UI (xml) file used in our Activity class; FrameLayout is its root view
FrameLayout root = (FrameLayout) inflater.inflate(R.layout.activity_main, null);
root.setDrawingCacheEnabled(true);
// pass the id of the root layout (here, my FrameLayout's id)
Bitmap bitmap = getBitmapFromView(this.getWindow().findViewById(R.id.frameLayout));
root.setDrawingCacheEnabled(false);
Here is the getBitmapFromView() method
public static Bitmap getBitmapFromView(View view) {
    // Define a bitmap with the same size as the view
    Bitmap returnedBitmap = Bitmap.createBitmap(view.getWidth(), view.getHeight(), Bitmap.Config.ARGB_8888);
    // Bind a canvas to it
    Canvas canvas = new Canvas(returnedBitmap);
    // Get the view's background
    Drawable bgDrawable = view.getBackground();
    if (bgDrawable != null)
        // has a background drawable, so draw it on the canvas
        bgDrawable.draw(canvas);
    else
        // no background drawable, so draw a white background on the canvas
        canvas.drawColor(Color.WHITE);
    // draw the view on the canvas
    view.draw(canvas);
    // return the bitmap
    return returnedBitmap;
}
It will capture the entire screen, including content hidden in your ScrollView.
UPDATED AS ON 20-04-2016
There is another, better way to take a screenshot. Here I take a screenshot of a WebView:
WebView w = new WebView(this);
w.setWebViewClient(new WebViewClient() {
    public void onPageFinished(final WebView webView, String url) {
        new Handler().postDelayed(new Runnable() {
            @Override
            public void run() {
                // measure and lay out the WebView at its full content height
                webView.measure(View.MeasureSpec.makeMeasureSpec(
                        View.MeasureSpec.UNSPECIFIED, View.MeasureSpec.UNSPECIFIED),
                        View.MeasureSpec.makeMeasureSpec(0, View.MeasureSpec.UNSPECIFIED));
                webView.layout(0, 0, webView.getMeasuredWidth(), webView.getMeasuredHeight());
                webView.setDrawingCacheEnabled(true);
                webView.buildDrawingCache();
                Bitmap bitmap = Bitmap.createBitmap(webView.getMeasuredWidth(),
                        webView.getMeasuredHeight(), Bitmap.Config.ARGB_8888);
                Canvas canvas = new Canvas(bitmap);
                webView.draw(canvas);
                if (bitmap != null) {
                    try {
                        String filePath = Environment.getExternalStorageDirectory().toString();
                        File file = new File(filePath, "/webviewScreenShot.png");
                        OutputStream out = new FileOutputStream(file);
                        bitmap.compress(Bitmap.CompressFormat.PNG, 50, out);
                        out.flush();
                        out.close();
                        bitmap.recycle();
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            }
        }, 1000);
    }
});
Hope this helps..!
AFAIK, all of the current methods to capture a screenshot of Android use the /dev/graphics/fb0 framebuffer. This includes ddms. Reading from this stream does require root. ddms uses adbd to request the information, so root is not required, as adb has the permissions needed to request the data from /dev/graphics/fb0.
The framebuffer contains 2+ "frames" of RGB565 images. If you are able to read the data, you have to know the screen resolution to know how many bytes are needed for one image. Each pixel is 2 bytes, so if the screen resolution is 480x800, you have to read 768,000 bytes for the image, since a 480x800 RGB565 image has 384,000 pixels.
For newer Android platforms, one can execute the system utility screencap in /system/bin to get a screenshot without root permission.
You can try /system/bin/screencap -h to see how to use it under adb or any shell.
By the way, I think this method is only good for a single snapshot; if we want to capture multiple frames for screen playback, it will be too slow. I don't know if there exists any other approach for faster screen capture.
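A minimal sketch of invoking the utility from Java (this only works from a process with the necessary permissions, such as an adb shell, not from a regular app; the output path is a placeholder):

try {
    // run the screencap binary and write a PNG to the given path
    Process p = Runtime.getRuntime().exec(
            new String[]{"/system/bin/screencap", "-p", "/sdcard/screencap.png"});
    int exitCode = p.waitFor(); // 0 indicates success
} catch (IOException | InterruptedException e) {
    e.printStackTrace();
}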
[Based on Android source code:]
At the C++ side, the SurfaceFlinger implements the captureScreen API. This is exposed over the binder IPC interface, returning each time a new ashmem area that contains the raw pixels from the screen. The actual screenshot is taken through OpenGL.
For the system C++ clients, the interface is exposed through the ScreenshotClient class, defined in <surfaceflinger_client/SurfaceComposerClient.h> for Android < 4.1; for Android > 4.1 use <gui/SurfaceComposerClient.h>
Before JB, to take a screenshot in a C++ program, this was enough:
ScreenshotClient ssc;
ssc.update();
With JB and multiple displays, it becomes slightly more complicated:
ssc.update(
android::SurfaceComposerClient::getBuiltInDisplay(
android::ISurfaceComposer::eDisplayIdMain));
Then you can access it:
do_something_with_raw_bits(ssc.getPixels(), ssc.getSize(), ...);
Using the Android source code, you can compile your own shared library to access that API, and then expose it through JNI to Java. To create a screenshot from your app, the app has to have the READ_FRAME_BUFFER permission.
But even then, apparently you can create screen shots only from system applications, i.e. ones that are signed with the same key as the system. (This part I still don't quite understand, since I'm not familiar enough with the Android Permissions system.)
Here is a piece of code for JB 4.1 / 4.2:
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <utils/RefBase.h>
#include <binder/IBinder.h>
#include <binder/MemoryHeapBase.h>
#include <gui/ISurfaceComposer.h>
#include <gui/SurfaceComposerClient.h>

static void do_save(const char *filename, const void *buf, size_t size) {
    int out = open(filename, O_RDWR | O_CREAT, 0666);
    int len = write(out, buf, size);
    printf("Wrote %d bytes to %s.\n", len, filename);
    close(out);
}

int main(int ac, char **av) {
    android::ScreenshotClient ssc;
    if (ssc.update(
            android::SurfaceComposerClient::getBuiltInDisplay(
                android::ISurfaceComposer::eDisplayIdMain)) == NO_ERROR) {
        printf("Captured: w=%d, h=%d, format=%d\n",
               ssc.getWidth(), ssc.getHeight(), ssc.getFormat());
        do_save(av[1], ssc.getPixels(), ssc.getSize());
    } else {
        printf("Screenshot capture failed\n");
    }
    return 0;
}
You can try the following library: the Android Screenshot Library (ASL) enables you to programmatically capture screenshots from Android devices without root access privileges. Instead, ASL utilizes a native service running in the background, started via the Android Debug Bridge (ADB) once per device boot.
According to this link, it is possible to use ddms in the tools directory of the Android SDK to take screen captures.
To do this within an application (and not during development), there are also applications that do so. But as @zed_0xff points out, it certainly requires root.
The framebuffer seems the way to go, but it will not always contain 2+ frames as mentioned by Ryan Conrad. In my case it contained only one. I guess it depends on the frame/display size.
I tried to read the framebuffer continuously, but it seems to return a fixed number of bytes per read. In my case that is 3,410,432 bytes, which is enough to store a display frame of 854x480 RGBA (3,279,360 bytes). Yes, the frame output from fb0 is RGBA on my device. This will most likely vary from device to device, which will be important for you when decoding it.
On my device the /dev/graphics/fb0 permissions are such that only root and users in the graphics group can read fb0. graphics is a restricted group, so you will probably only be able to access fb0 on a rooted phone using the su command.
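As a sketch, reading one frame from the framebuffer on such a device could look like the code below (root required; the 480x854 size and the RGBA layout are the values observed above and will differ on other devices):

// read exactly one RGBA_8888 frame from the framebuffer device
int width = 480, height = 854;
byte[] raw = new byte[width * height * 4];
FileInputStream fb = new FileInputStream("/dev/graphics/fb0");
int read = 0;
while (read < raw.length) {
    int n = fb.read(raw, read, raw.length - read);
    if (n < 0) break;
    read += n;
}
fb.close();
// wrap the raw RGBA bytes in a Bitmap for further processing
Bitmap bmp = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
bmp.copyPixelsFromBuffer(ByteBuffer.wrap(raw));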
Android apps have a user id (uid) of app_## and a group id (gid) of app_##.
The adb shell has uid shell and gid shell, which has many more permissions than an app. You can check those permissions at /system/permissions/platform.xml.
This means you will be able to read fb0 in the adb shell without root, but you will not be able to read it from within the app without root.
Also, declaring the READ_FRAME_BUFFER and/or ACCESS_SURFACE_FLINGER permissions in AndroidManifest.xml does nothing for a regular app, because these only work for 'signature' apps.
If you want to do screen capture from Java code in an Android app, AFAIK you must have root privileges.

codename one mediaplayer not working on android device

I am creating a media player app which is supposed to stream mp3 files from a remote URL. The problem is that everything works fine on the Codename One simulator but not on an actual Android device. I want the app to show native player controls like on the simulator. Below is my code and screenshots:
try {
    video = MediaManager.createMedia(sample_url, true);
    Display.getInstance().callSerially(() -> {
        if (mp != null) {
            mp.getMedia().cleanup();
        }
        Image samp = theme.getImage("sample.png");
        Label samlabel = new Label();
        samlabel.setIcon(samp);
        mp = new MediaPlayer(video);
        mp.setAutoplay(false);
        video.setNativePlayerMode(true);
        sample.add(BorderLayout.CENTER, BorderLayout.centerAbsolute(samlabel));
        sample.add(BorderLayout.SOUTH, mp);
        //songDetails.add(mp);
    });
} catch (IOException err) {
    Log.e(err); // createMedia can throw IOException
}
The first image is the simulator screenshot and the second image is the actual Android device screenshot.
It's unclear from your post whether this is an mp3, which is audio and doesn't have media controls, or an actual video. The MediaPlayer class is strictly for video, and you passed true to indicate that this is a video file, so I'll treat it as such.
Notice that if this is an audio file, you need to add/create your own controls and shouldn't use the MediaPlayer class.
We recently defined behaviors for native media control rendering as explained here.
Just use:
video.setVariable(Media.VARIABLE_NATIVE_CONTRLOLS_EMBEDDED, true);

How to extract a frame from video using Java

Is there any solution to "extract a frame from a video file in Java using the core library, without importing external libraries"?
For example, I saw Image, BufferStrategy, and BufferCapabilities in the Java AWT libraries.
The Java Media Framework (JMF) API enables audio, video, and other time-based media operations without the use of any third-party library.
Seeking frames inside a movie with JMF.
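A rough sketch of frame grabbing with JMF (the file URL and frame number are placeholders):

import javax.media.Buffer;
import javax.media.Manager;
import javax.media.MediaLocator;
import javax.media.Player;
import javax.media.control.FrameGrabbingControl;
import javax.media.control.FramePositioningControl;
import javax.media.format.VideoFormat;
import javax.media.util.BufferToImage;
import java.awt.Image;

Player player = Manager.createRealizedPlayer(new MediaLocator("file:video.mov"));
FramePositioningControl fpc = (FramePositioningControl)
        player.getControl("javax.media.control.FramePositioningControl");
FrameGrabbingControl fgc = (FrameGrabbingControl)
        player.getControl("javax.media.control.FrameGrabbingControl");
fpc.seek(42); // jump to frame 42
Buffer buf = fgc.grabFrame();
// convert the raw buffer into an AWT Image
BufferToImage converter = new BufferToImage((VideoFormat) buf.getFormat());
Image frame = converter.createImage(buf);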
Xuggler is a good third-party library, widely used.
I think you should use Xuggler from here, or you can find it in Maven.
In the GitHub repository there is a sample under demos:
DecodeAndCaptureFrames.java
According to this answer to another question, you can do it without external libraries by leveraging JavaFX.
Quoting the original answer below:
You can use the snapshot() method of MediaView. First connect a MediaPlayer to a MediaView component, then use mediaPlayer.seek() to seek the video position. Then you can use the following code to extract the image frame:
int width = mediaPlayer.getMedia().getWidth();
int height = mediaPlayer.getMedia().getHeight();
WritableImage wim = new WritableImage(width, height);
MediaView mv = new MediaView();
mv.setFitWidth(width);
mv.setFitHeight(height);
mv.setMediaPlayer(mediaPlayer);
mv.snapshot(null, wim);
try {
    ImageIO.write(SwingFXUtils.fromFXImage(wim, null), "png", new File("/test.png"));
} catch (Exception s) {
    System.out.println(s);
}

Update photo in flex without blinking

I am trying to simulate a live view using a Canon camera.
I am interacting with the camera using the Canon SDK; I get an image every short interval in order to simulate video frame by frame. This works fine. I am using Java for the backend and sending the images through BlazeDS to Flex.
The problem is not getting the image; the problem is that when I load a new image using something like:
image.source = my_new_image;
the new image is loaded, but it produces a short white blink that ruins the video.
So I would like to know if there is a way to update an image in Flex that avoids the blinking problem, or if I could stream video from Java and pick it up in Flex.
Thanks in advance!
The easy way is to use a technique called double buffering, with two Loaders: one for the image which is visible, and one for the image which is being loaded and is invisible. When an image finishes loading, its loader becomes visible, the other becomes invisible, and the process repeats.
In terms of efficiency, it would be better to at least use a socket connection to the server for transferring the image bytes, preferably in AMF format since it has little overhead. This is all fairly possible in BlazeDS with some scripting.
For better efficiency you may try using a real-time frame or video encoder on the server, however decoding the video on the client will be challenging. For best performance it will be better to use the built-in video decoder and a streaming server such as Flash Media Server.
UPDATE (example script):
This example loads images over HTTP. A more efficient approach would be to use an AMF socket (mentioned above) to transfer the image, then use Loader.loadBytes() to display it.
private var loaderA:Loader;
private var loaderB:Loader;
private var foregroundLoader:Loader;
private var backgroundLoader:Loader;

public function Main()
{
    loaderA = new Loader();
    loaderB = new Loader();
    foregroundLoader = loaderA;
    backgroundLoader = loaderB;
    loadNext();
}

private function loadNext():void
{
    trace("loading");
    backgroundLoader.contentLoaderInfo.addEventListener(Event.COMPLETE, loaderCompleteHandler);
    backgroundLoader.load(new URLRequest("http://www.phpjunkyard.com/randim/randim.php?type=1"));
}

private function loaderCompleteHandler(event:Event):void
{
    trace("loaded");
    var loaderInfo:LoaderInfo = event.target as LoaderInfo;
    var loader:Loader = loaderInfo.loader;
    loader.contentLoaderInfo.removeEventListener(Event.COMPLETE, loaderCompleteHandler);
    // swap the visible (foreground) and hidden (background) loaders
    if (contains(foregroundLoader))
        removeChild(foregroundLoader);
    var temp:Loader = foregroundLoader;
    foregroundLoader = backgroundLoader;
    backgroundLoader = temp;
    addChild(foregroundLoader);
    loadNext();
}
