I have a game where users complete an assignment by taking a picture that is then sent to the backend. Before sending it to the backend, the image is resized to limit the amount of data that has to be transferred. This all works fine.
Now I want to extend this with movie clips. Movie clips are a lot bigger than pictures, especially if you don't compress them. The problem is that I have no clue how to do this.
So the main question is: how can I change my app so that the user records a video, which is then compressed to make the file smaller? Are there libraries around to do this? Or is there something in Android itself to use?
One approach that works is to use ffmpeg to do the compression.
There are some well-used ffmpeg libraries that allow you to include ffmpeg via a wrapper and then use standard ffmpeg syntax to perform the compression within your app.
See this one which includes examples:
https://github.com/WritingMinds/ffmpeg-android-java
Note that video compression is power and battery intensive, and takes time, so you may want to limit the clip size if you expect users to use this functionality regularly.
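As a rough sketch of what this looks like with that wrapper (its API has changed a little between releases, and the paths and quality settings here are placeholder choices, so check the project README):

```java
import android.content.Context;

import com.github.hiteshsondhi88.libffmpeg.ExecuteBinaryResponseHandler;
import com.github.hiteshsondhi88.libffmpeg.FFmpeg;
import com.github.hiteshsondhi88.libffmpeg.exceptions.FFmpegCommandAlreadyRunningException;

public class ClipCompressor {

    /**
     * Re-encodes a recorded clip at lower quality to shrink it before upload.
     * Assumes ffmpeg.loadBinary(...) has already completed successfully.
     */
    public static void compress(Context context, String inputPath, final String outputPath) {
        FFmpeg ffmpeg = FFmpeg.getInstance(context);
        String[] cmd = {
                "-i", inputPath,
                "-vcodec", "libx264",
                "-crf", "28",           // higher CRF = smaller file, lower quality
                "-preset", "ultrafast", // favour speed over compression ratio on a phone
                outputPath
        };
        try {
            ffmpeg.execute(cmd, new ExecuteBinaryResponseHandler() {
                @Override
                public void onSuccess(String message) {
                    // upload the compressed file at outputPath
                }

                @Override
                public void onFailure(String message) {
                    // fall back to uploading the original, or surface an error
                }
            });
        } catch (FFmpegCommandAlreadyRunningException e) {
            // the wrapper runs only one ffmpeg command at a time
        }
    }
}
```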
For performance reasons, I ditched the Python-OpenCV/FFmpeg solution and moved to Java.
But to my surprise, I am not able to find any solution as good and complete as what we have in Python. I tried using vlcj, but again it gives more of a command-line kind of interface. I am not able to find any callback mechanism for reading and analyzing all the frames.
I also tried using Java sockets, but wasn't able to do anything more than establish a connection with an IP camera streaming H.264 video over RTSP.
Note: it will be running in a server environment, so we don't want to show any frames; we just need to run certain other operations on them.
Please guide me in the right direction.
If you want to get access to the video frame buffer while media is playing you have a couple of options.
I'll assume you are using vlcj 4.x+, which is current at the time of writing.
First, you can use an EmbeddedMediaPlayer with a CallbackVideoSurface.
You can use the MediaPlayerFactory to create your video surface.
When you create your video surface, it requires a RenderCallback implementation that you provide.
Create the embedded media player as normal, and invoke mediaPlayer.videoSurface().set() to set your video surface.
It is this render callback implementation class that will be called back by VLC with raw video frame data in the form of a ByteBuffer backed by native memory. You can then do your analysis on the data in this byte buffer.
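As a rough sketch of that first approach (based on vlcj 4.x - exact callback signatures have shifted slightly between 4.x releases, and the RTSP URL is just a placeholder):

```java
import java.nio.ByteBuffer;

import uk.co.caprica.vlcj.factory.MediaPlayerFactory;
import uk.co.caprica.vlcj.player.base.MediaPlayer;
import uk.co.caprica.vlcj.player.embedded.EmbeddedMediaPlayer;
import uk.co.caprica.vlcj.player.embedded.videosurface.CallbackVideoSurface;
import uk.co.caprica.vlcj.player.embedded.videosurface.callback.BufferFormat;
import uk.co.caprica.vlcj.player.embedded.videosurface.callback.BufferFormatCallback;
import uk.co.caprica.vlcj.player.embedded.videosurface.callback.RenderCallback;
import uk.co.caprica.vlcj.player.embedded.videosurface.callback.format.RV32BufferFormat;

public class FrameAnalyser {

    public static void main(String[] args) throws InterruptedException {
        MediaPlayerFactory factory = new MediaPlayerFactory();
        EmbeddedMediaPlayer mediaPlayer = factory.mediaPlayers().newEmbeddedMediaPlayer();

        CallbackVideoSurface surface = factory.videoSurfaces().newVideoSurface(
            new BufferFormatCallback() {
                @Override
                public BufferFormat getBufferFormat(int sourceWidth, int sourceHeight) {
                    // ask VLC to deliver 32-bit RGBA frames at the source resolution
                    return new RV32BufferFormat(sourceWidth, sourceHeight);
                }

                @Override
                public void allocatedBuffers(ByteBuffer[] buffers) {
                    // invoked when the native buffers are (re)allocated; nothing to do here
                }
            },
            new RenderCallback() {
                @Override
                public void display(MediaPlayer mediaPlayer, ByteBuffer[] nativeBuffers,
                                    BufferFormat bufferFormat) {
                    // raw frame data in native memory - analyse it here, but do not
                    // keep a reference to the buffer after this method returns
                    ByteBuffer frame = nativeBuffers[0];
                    // ... your frame analysis ...
                }
            },
            true // lock the native buffers while the render callback runs
        );

        mediaPlayer.videoSurface().set(surface);
        mediaPlayer.media().play("rtsp://camera-host/stream"); // placeholder URL

        Thread.currentThread().join(); // keep the JVM alive while frames arrive
    }
}
```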
The second approach is to look instead at the CallbackMediaPlayerComponent class - this class aims to make it very easy for you to get an out-of-the-box working media player and provides a way for you to plug in only the bits you want to customise. In this case you plug in your render callback implementation to do your analysis.
There are examples in the vlcj source code at the github project page that show all of this. One of the examples processes this buffer to dynamically convert the video to greyscale, but obviously you can do anything you want with the frame data.
The method is named "onDisplay()" but you do not have to actually display the video anywhere if you're only interested in performing some analysis.
This is the extent of what vlcj can provide if you want to access the video frame data.
I have to create such a file, on Android, from a video that is on http:
So I have the following questions:
Is there any way to use ffmpeg on Android?
How much will it cost me to create this file? Assuming that we do not use the WiFi connection but the mobile data connection, and that the video lasts 2 hours, how much data would that use, in megabytes?
I'm not sure exactly what you want to create - whether it is combining multiple images into a video or combining multiple videos into a single grid - but as a quick note either way: if you can do it server side, you will have the luxury of more processing power, less concern about battery usage, and possibly fewer bandwidth issues too.
Assuming you do want to do it on Android, then one way to use ffmpeg which I have found works well is via a wrapper program. One well-supported example is here:
https://github.com/WritingMinds/ffmpeg-android-java
There may be newer ones now also.
I've currently got a semi-working screenshare application, which does the following:
Client:
Captures the user's screen using a Robot
Detects which pixels have changed since the last screenshot (the diff step sketched below)
Sends the difference image through a compression stream
Server:
Decompresses the difference image
Overlays the image over the user's last screen
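For reference, the client's capture-and-diff step looks roughly like this (a simplified sketch, not my exact code):

```java
import java.awt.AWTException;
import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.Toolkit;
import java.awt.image.BufferedImage;

public class ScreenDiff {

    private final Robot robot;
    private int[] previous; // pixels of the previously captured frame

    public ScreenDiff() throws AWTException {
        robot = new Robot();
    }

    /** Captures the screen and returns a diff image; unchanged pixels are transparent. */
    public BufferedImage nextDiff() {
        Rectangle bounds = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
        BufferedImage frame = robot.createScreenCapture(bounds);
        int w = frame.getWidth();
        int h = frame.getHeight();
        int[] current = frame.getRGB(0, 0, w, h, null, 0, w);

        int[] out = new int[current.length];
        if (previous == null) {
            out = current; // first frame: everything counts as changed
        } else {
            for (int i = 0; i < current.length; i++) {
                // keep only changed pixels; unchanged ones stay fully transparent
                out[i] = current[i] != previous[i] ? current[i] : 0;
            }
        }
        previous = current;

        BufferedImage diff = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
        diff.setRGB(0, 0, w, h, out, 0, w);
        return diff;
    }
}
```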
Now, this works, but it's incredibly slow. It takes 200ms to process a 1920x1080 screen.
What other techniques could I use to make this more efficient? Also, is there any way I could just encode multiple images into an H.264 stream or something similar? Are there any useful libraries that can compare the two images faster, possibly on the GPU instead of the CPU?
I'm currently adding a music player to my Android app. It will contain 8 albums, with 12 songs in each one. So I'm thinking about the best way to do this. Should I store the MP3 songs in the app, which will make their playback faster and won't require internet access? Or is that too heavy, and would calling an MP3 URL be a better idea?
Thank you
There are a few things to consider when evaluating your options.
If you ship the songs together with the app, you will greatly increase its size. Most probably you will go over the 50 MB limit, and you will have to implement APK expansion files, which are downloaded when the app is first started. Implementing this is not very hard, but it can take some time.
If you implement a custom download for the songs, you can get better control over what is downloaded, optimizing bandwidth usage, download times, etc. Still, it would be more difficult to implement than the standard APK expansion files, and you will have to deal with a lot of additional download logic - pausing/resuming, dealing with insufficient storage space, etc.
My advice is to go with an APK expansion file, which is downloaded when the app is first started and will manage all the download complexity out of the box. If you need more fine-grained control over the download - go custom.
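If you do go the expansion route, note that the main .obb file ends up at a well-known path, so locating it at runtime is simple. A small sketch (the versionCode parameter is whichever version the expansion was uploaded for):

```java
import java.io.File;

import android.content.Context;

public final class ExpansionFiles {

    /**
     * Locates the main APK expansion file using the documented naming
     * convention: main.<versionCode>.<packageName>.obb
     */
    public static File mainExpansionFile(Context context, int versionCode) {
        File obbDir = context.getObbDir(); // e.g. <shared-storage>/Android/obb/<package>/
        String name = "main." + versionCode + "." + context.getPackageName() + ".obb";
        return new File(obbDir, name);
    }
}
```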
Good luck.
I'm writing an accompaniment application that continuously needs to play specific notes (or even chords). I have a thread running that figures out which note I need to play, but I have no idea where to begin regarding the actual playback. I know I can create an AudioTrack and write a sine wave to it, but for this project a simple tone won't cut it. So I'm guessing I either need to use MIDI (can Android do that?) or to somehow take a sample and change its pitch on the fly, but I don't know if that's even possible.
All I can say is to check out pitch-shifting (which you seem to have heard of) and SoundPool (which would require some recordings of your own), plus these two links:
Audio Playback Rate in Android
Programmatically increase the pitch of an array of audio samples
The second link seems to have more info.
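To illustrate the SoundPool route: its play() call takes a playback rate between 0.5 and 2.0, which shifts the pitch (and speed) of a sample by up to an octave either way. A rough sketch, assuming you have recorded a single reference note (R.raw.note_c4 here is a hypothetical resource in your app):

```java
import android.content.Context;
import android.media.AudioAttributes;
import android.media.SoundPool;

public class NotePlayer {

    private final SoundPool soundPool;
    private final int sampleId;

    public NotePlayer(Context context) {
        soundPool = new SoundPool.Builder()
                .setMaxStreams(4)
                .setAudioAttributes(new AudioAttributes.Builder()
                        .setUsage(AudioAttributes.USAGE_MEDIA)
                        .build())
                .build();
        // in real code, wait for SoundPool.OnLoadCompleteListener before playing
        sampleId = soundPool.load(context, R.raw.note_c4, 1);
    }

    /** Plays the sample shifted by the given number of semitones (-12 .. +12). */
    public void playShifted(int semitones) {
        // equal-tempered pitch ratio; SoundPool clamps the rate to [0.5, 2.0]
        float rate = (float) Math.pow(2.0, semitones / 12.0);
        soundPool.play(sampleId, 1f, 1f, 1, 0, rate);
    }
}
```

Note that rate-shifting a sample also changes its duration, so for wider pitch ranges or constant-length notes you would need a true pitch-shifting algorithm (as in the second link) or multiple recorded samples.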