I've currently got a semi-working screenshare application, which does the following:
Client:
Captures the user's screen using a Robot
Detects which pixels have changed since the last screenshot
Sends the difference image through a compression stream
Server:
Decompresses the difference image
Overlays the difference image onto the user's last screen
Now, this works, but it's incredibly slow. It takes 200ms to process a 1920x1080 screen.
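For reference, the capture-and-diff step currently looks roughly like this (a trimmed-down sketch; everything apart from java.awt.Robot and BufferedImage is illustrative, not my exact code):

```java
import java.awt.AWTException;
import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.Toolkit;
import java.awt.image.BufferedImage;

public class ScreenDiff {

    private final Robot robot;
    private int[] lastPixels; // pixels of the previous capture, or null on the first frame

    public ScreenDiff() throws AWTException {
        this.robot = new Robot();
    }

    // Returns a difference image: unchanged pixels become fully transparent,
    // changed pixels keep their new colour.
    public BufferedImage captureDiff() {
        Rectangle screen = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
        BufferedImage shot = robot.createScreenCapture(screen);
        int w = shot.getWidth(), h = shot.getHeight();
        int[] pixels = shot.getRGB(0, 0, w, h, null, 0, w);

        BufferedImage diff = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
        int[] out = new int[pixels.length];
        for (int i = 0; i < pixels.length; i++) {
            if (lastPixels == null || pixels[i] != lastPixels[i]) {
                out[i] = pixels[i] | 0xFF000000; // changed pixel: keep it, force opaque
            }                                    // unchanged pixel: stays 0 (transparent)
        }
        diff.setRGB(0, 0, w, h, out, 0, w);
        lastPixels = pixels;
        return diff;
    }
}
```

From profiling the stages separately (capture, diff, compress, send), most of the 200ms seems to come from the capture and compression rather than the pixel comparison itself; Robot.createScreenCapture() alone is known to be slow.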
What other techniques could I use to make this more efficient? Also, is there any way I could encode the sequence of images into an H.264 stream or something similar? Are there any useful libraries that can compare the two images faster, possibly on the GPU instead of the CPU?
For performance reasons, I ditched the Python-OpenCV/FFmpeg solution and moved to Java.
But to my surprise, I am not able to find any Java solution as complete as what is available in Python. I tried using vlcj, but again it mostly gives a command-line kind of interface; I am not able to find any callback mechanism for reading and analyzing individual frames.
I also tried using Java sockets, but wasn't able to do anything beyond establishing a connection with an IP camera streaming H.264 video over RTSP.
Note: it will be running in a server environment, so we don't want to display any frames; we just need to run certain other operations on them.
Please guide me in the right direction.
If you want to get access to the video frame buffer while media is playing you have a couple of options.
I'll assume you are using vlcj 4.x+, which is current at the time of writing.
First, you can use an EmbeddedMediaPlayer with a CallbackVideoSurface.
You can use the MediaPlayerFactory to create your video surface.
When you create your video surface, it requires a RenderCallback implementation that you provide.
Create the embedded media player as normal, and set your video surface on it via mediaPlayer.videoSurface().set().
It is this render callback implementation class that will be called back by VLC with raw video frame data in the form of a ByteBuffer backed by native memory. You can then do your analysis on the data in this byte buffer.
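A minimal sketch of this first approach, assuming vlcj 4.x (package layout and exact signatures have shifted between minor versions, so treat this as a starting point rather than a definitive implementation; the RTSP URL is hypothetical):

```java
import java.nio.ByteBuffer;

import uk.co.caprica.vlcj.factory.MediaPlayerFactory;
import uk.co.caprica.vlcj.player.base.MediaPlayer;
import uk.co.caprica.vlcj.player.embedded.EmbeddedMediaPlayer;
import uk.co.caprica.vlcj.player.embedded.videosurface.CallbackVideoSurface;
import uk.co.caprica.vlcj.player.embedded.videosurface.callback.BufferFormat;
import uk.co.caprica.vlcj.player.embedded.videosurface.callback.BufferFormatCallback;
import uk.co.caprica.vlcj.player.embedded.videosurface.callback.RenderCallback;
import uk.co.caprica.vlcj.player.embedded.videosurface.callback.format.RV32BufferFormat;

public class FrameGrabber {

    public static void main(String[] args) throws InterruptedException {
        MediaPlayerFactory factory = new MediaPlayerFactory();
        EmbeddedMediaPlayer mediaPlayer = factory.mediaPlayers().newEmbeddedMediaPlayer();

        BufferFormatCallback bufferFormatCallback = new BufferFormatCallback() {
            @Override
            public BufferFormat getBufferFormat(int sourceWidth, int sourceHeight) {
                return new RV32BufferFormat(sourceWidth, sourceHeight); // 32-bit RGB
            }
            @Override
            public void allocatedBuffers(ByteBuffer[] buffers) {
                // called once the native buffers exist; nothing needed here
            }
        };

        RenderCallback renderCallback = new RenderCallback() {
            @Override
            public void display(MediaPlayer mediaPlayer, ByteBuffer[] nativeBuffers,
                                BufferFormat bufferFormat) {
                // Called by VLC once per decoded frame, on a native thread.
                // nativeBuffers[0] is the raw pixel data - analyse it here,
                // but return quickly or you will stall playback.
            }
        };

        CallbackVideoSurface videoSurface = factory.videoSurfaces()
            .newVideoSurface(bufferFormatCallback, renderCallback, true);
        mediaPlayer.videoSurface().set(videoSurface);

        mediaPlayer.media().play("rtsp://192.168.1.10/stream"); // hypothetical camera URL
        Thread.currentThread().join(); // keep the demo alive while frames arrive
    }
}
```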
The second approach is to look instead at the CallbackMediaPlayerComponent class - this class aims to make it very easy for you to get an out-of-the-box working media player and provides a way for you to plug in only the bits you want to customise. In this case you plug in your render callback implementation to do your analysis.
There are examples in the vlcj source code at the github project page that show all of this. One of the examples processes this buffer to dynamically convert the video to greyscale, but obviously you can do anything you want with the frame data.
The method is named "onDisplay()" but you do not have to actually display the video anywhere if you're only interested in performing some analysis.
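A rough sketch of the component approach, under the same vlcj 4.x assumption (again, the URL is hypothetical):

```java
import java.nio.ByteBuffer;

import uk.co.caprica.vlcj.player.base.MediaPlayer;
import uk.co.caprica.vlcj.player.component.CallbackMediaPlayerComponent;
import uk.co.caprica.vlcj.player.embedded.videosurface.callback.BufferFormat;

public class AnalysisComponent extends CallbackMediaPlayerComponent {

    @Override
    protected void onDisplay(MediaPlayer mediaPlayer, ByteBuffer[] nativeBuffers,
                             BufferFormat bufferFormat) {
        // One decoded frame per call; run your analysis on nativeBuffers[0].
        // There is no need to render anything if you only care about the data.
    }

    public static void main(String[] args) throws InterruptedException {
        new AnalysisComponent().mediaPlayer().media().play("rtsp://192.168.1.10/stream");
        Thread.currentThread().join(); // keep the demo alive while frames arrive
    }
}
```

The component wires up the factory, media player and video surface for you, which is why it needs so much less code than the first approach.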
This is the extent of what vlcj can provide if you want to access the video frame data.
I'm a 17-year-old trying to start developing some Android games. I've used LibGDX once before and found it a pretty effective tool, so I'm using it again now.
The game I'm making is a choice-based, interactive game where you make a choice and then the next scenario happens based on it, and it goes on and on until your character dies or you win. I expect to have around 200 scenarios by the time I'm done, and currently have around 160.
The problem I'm having is that each of these scenarios is basically a "card," with a picture, a scenario description and 2 options below it. Each of these images is pretty big, and if I scale the card images down they start looking pixelated on the phone screen. I'm worried that my game will reach 100mb in images alone, and then with sound effects and everything else it might be like 200mb. This seems pretty inefficient, and I don't want potential players to shy away from the game just because of its size if they don't have enough room on their phone...
Am I doing something wrong? I apologize for this inexperienced question, I'm really new to Android development.
That isn't too big for modern games, but you will need to keep the assets out of the APK and either download them when needed or use what Google already provides for exactly this problem: APK expansion files.
https://developer.android.com/google/play/expansion-files.html
These are some steps that can help you reduce your APK size:
Use only specific drawable directories
Add only the specific image size needed for each drawable directory (drawable-mdpi, drawable-xhdpi, drawable-xxhdpi, etc.). You can try removing drawable directories that are potentially unused by your users, like drawable-mdpi and drawable-xhdpi, and keeping only a hi-res one like drawable-xxhdpi. Or you can try using only the base drawable directory for all the images, so your app ships one image for all device types.
Resize your images
Compress your images
If your images are PNG files, you can compress them without a noticeable change using pngquant. In fact, it can reduce your image sizes significantly (often as much as 70%) while preserving full alpha transparency. Or you can try pngcrush (I rarely use this one).
I am writing an Android app which needs to draw a huge number of pictures aligned in a grid. The plan is to draw about 2000 pictures (building up a mosaic). Each of these pictures has a size of 1024x1024. Of course, I ran into heavy efficiency problems there.
Right now I am using OpenGL ES 2.0 for drawing. I have one thread loading the images from resources into bitmap objects and passing them to a central pool. The rendering thread of OpenGL then takes an image from this pool, loads the image data from the bitmap to the GPU and draws it at the right cell of a built up grid.
This is horribly slow. I measured some timings for this procedure using the Android emulator: about 60ms for loading a resource into a bitmap object, and about 15ms for loading the bitmap into the GPU and drawing it.
I don't have a clear idea of how to speed this up to at least something like 30 FPS. My ideas at the moment are to use an external library for loading bitmaps, and maybe the NDK, to speed things up. I also know about texture atlases and batching up my draws; I could implement this, but as my problems are with loading images from resources at runtime, I haven't tried it for now.
Maybe someone can give me advice on how to do this efficiently.
Thank you everyone.
First of all, I just want to make something clear, since I think your title is a bit misleading - according to your description, your real problem is not drawing but loading. You did not really say whether you have a drawing problem (you mentioned timings for loading and for drawing).
You did not mention all the requirements - for example, how soon do you need to draw all of them? Maybe you can load the first X as part of the app loading, so that when the app starts those textures are already created and can be drawn, and in the meantime load another X in the dedicated thread, and so on...
Another approach would be to have low-resolution and high-resolution images, load the low-resolution ones first, and replace them later (on demand, for example). I'm not sure if this is reasonable memory-wise for 2000 images, but you can check that.
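If the images live in resources, Android's BitmapFactory can do the low-resolution pass cheaply via inSampleSize - a sketch (the helper class is mine, the decode options are standard Android API):

```java
import android.content.res.Resources;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

public final class ProgressiveDecode {

    // inSampleSize = 8 turns a 1024x1024 resource into roughly 128x128,
    // which decodes (and uploads to the GPU) far faster than the original.
    public static Bitmap decodeLowRes(Resources res, int resId) {
        BitmapFactory.Options opts = new BitmapFactory.Options();
        opts.inSampleSize = 8;
        return BitmapFactory.decodeResource(res, resId, opts);
    }
}
```

Upload the low-resolution bitmap to a texture first so you have something to draw, then decode the full-size resource in the background and re-upload over the same texture id when it's ready.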
Also, I would suggest taking a look at Rajawali - it is a nice framework with good examples, including for multiple textures (there is also an examples app on Google Play). Perhaps it could help.
I have a game where users complete assignments by making a picture that is then sent to the backend. Before sending, the image is resized to limit the amount of data that has to be transferred. This all works fine.
Now I want to extend this with movie clips. Movie clips are a lot bigger than pictures, especially if you don't compress them. The problem is that I have no clue how to do this.
So the main question is: how can I change my app so that the user records a video and the app then compresses it to make the file smaller? Are there libraries around to do this? Or is there something in Android itself I can use?
One approach that works is to use ffmpeg to do the compression.
There are some well-used ffmpeg libraries that allow you to include ffmpeg via a wrapper and then use standard ffmpeg syntax to perform the compression within your app.
See this one which includes examples:
https://github.com/WritingMinds/ffmpeg-android-java
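A rough sketch of what that looks like with the wrapper above (the class and method names are from the project's documented pattern as I recall it, so double-check them against the README; the x264 CRF/preset values are just example settings):

```java
import android.content.Context;

import com.github.hiteshsondhi88.libffmpeg.ExecuteBinaryResponseHandler;
import com.github.hiteshsondhi88.libffmpeg.FFmpeg;

public class ClipCompressor {

    // Re-encode the recorded clip with x264; a higher -crf value means a
    // smaller file at lower quality (28 is a reasonable starting point).
    public static void compress(Context context, String inputPath, String outputPath) {
        String[] cmd = { "-i", inputPath, "-vcodec", "libx264",
                         "-crf", "28", "-preset", "veryfast", outputPath };
        FFmpeg ffmpeg = FFmpeg.getInstance(context); // binary must be loaded once via loadBinary()
        try {
            ffmpeg.execute(cmd, new ExecuteBinaryResponseHandler() {
                @Override
                public void onSuccess(String message) {
                    // outputPath now holds the compressed clip, ready to upload
                }
                @Override
                public void onFailure(String message) {
                    // compression failed; fall back to the original or retry
                }
            });
        } catch (Exception e) {
            // the library throws if another ffmpeg command is already running
        }
    }
}
```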
Note that video compression is power- and battery-intensive and takes time, so you may want to limit the clip size if you expect users to use this functionality regularly.
I'm interested in doing image processing in Java with frames collected from a network video adapter. The first challenge is finding network video adapters/cameras which don't require an ActiveX control (and therefore IE) for PTZ control. Then the issue is how to do still image grabs from network video adapters which only make MP4 available.
Does anyone know of some Java friendly network video cameras and adapters?
Anyone know of some Java code to control PTZ on a network camera?
Two ways in Java that I know of. The first (and the one I currently recommend) is the LTI-Civil project. The second is to use Xuggler, which uses FFmpeg webcam code behind the scenes.