Android: Can we actually save ImageReader's acquireLatestImage()? - java

I'm currently working with the Camera2 API and have created a new ImageReader.OnImageAvailableListener object. Of course, it has to implement the onImageAvailable(ImageReader reader) method. The only thing I want is to acquire the latest image from this reader and save it, but unfortunately, I just can't get it to work. I have read a lot of source code and visited different StackOverflow topics, but couldn't find the answer. I'm now at the point where I have to ask: can this Image object actually be saved as an image file to the phone's storage? Here is the method:
@Override
public void onImageAvailable(ImageReader reader) {
    Image image = reader.acquireLatestImage();
    ByteBuffer buffer = image.getPlanes()[0].getBuffer();
    buffer.rewind();
    byte[] bytes = new byte[buffer.capacity()];
    buffer.get(bytes);
    save(bytes);
    image.close();
}
The save() method only opens a FileOutputStream and writes the bytes to it; that part is working. The problem is that I only get a black image, and it has a really small size.
The format of the image is JPEG; this is how I configured my ImageReader instance previously.
I even tried to convert it to different formats, like from NV21 to JPEG and so on, but it didn't work out. What am I missing here?

Here's a class I've used to extract a Bitmap from an Image.
import android.graphics.Bitmap
import android.graphics.BitmapFactory
import android.media.Image
import java.io.IOException
import java.io.InputStream
import java.nio.ByteBuffer

// IMAGE_WIDTH / IMAGE_HEIGHT: the dimensions the ImageReader was configured with (defined elsewhere)
class ImagePreprocessor {
    private var rgbFrameBitmap: Bitmap? = Bitmap.createBitmap(IMAGE_WIDTH, IMAGE_HEIGHT,
            Bitmap.Config.ARGB_8888)

    fun preprocessImage(image: Image?): Bitmap? {
        if (image == null) {
            return null
        }
        check(rgbFrameBitmap!!.width == image.width) { "Invalid size width" }
        check(rgbFrameBitmap!!.height == image.height) { "Invalid size height" }
        // The JPEG payload sits entirely in plane 0, so it can be decoded as a stream
        val bb = image.planes[0].buffer
        rgbFrameBitmap = BitmapFactory.decodeStream(ByteBufferBackedInputStream(bb))
        return rgbFrameBitmap
    }

    private class ByteBufferBackedInputStream(internal var buf: ByteBuffer) : InputStream() {
        @Throws(IOException::class)
        override fun read(): Int {
            return if (!buf.hasRemaining()) {
                -1
            } else buf.get().toInt() and 0xFF // mask to honor the unsigned-byte contract of read()
        }

        @Throws(IOException::class)
        override fun read(bytes: ByteArray, off: Int, len: Int): Int {
            if (!buf.hasRemaining()) {
                return -1
            }
            val count = Math.min(len, buf.remaining())
            buf.get(bytes, off, count)
            return count
        }
    }
}
I've used this to set the bitmap directly on an ImageView but it should be possible to use the compress method on the Bitmap to save it to file. Note that this only works for JPEG since the data is in a single plane.
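A minimal sketch of that saving step, in Java to match the rest of the thread (the helper name, output file, and quality value are my own choices, not part of the class above):

import android.graphics.Bitmap;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

// Hypothetical helper: persists a decoded Bitmap as a JPEG file.
public static void saveBitmap(Bitmap bitmap, File outFile) throws IOException {
    try (FileOutputStream fos = new FileOutputStream(outFile)) {
        bitmap.compress(Bitmap.CompressFormat.JPEG, 90, fos); // quality 90 is arbitrary
    }
}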

Have you tried the sample camera2 app, Camera2Basic? It saves JPEGs with code very similar to yours, and should work fine.
It's possible your other camera setup code has a bug, not the saving part itself. So see if the sample works, and then compare what it does to your code.
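For reference, the JPEG-saving path in Camera2Basic boils down to roughly the following (a simplified sketch, not the sample's exact code; note that it sizes the array with buffer.remaining() rather than buffer.capacity()):

import android.media.Image;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;

// Simplified sketch of a JPEG ImageSaver: for a JPEG-format ImageReader,
// the entire compressed file sits in plane 0.
static void saveJpeg(Image image, File file) throws IOException {
    ByteBuffer buffer = image.getPlanes()[0].getBuffer();
    byte[] bytes = new byte[buffer.remaining()];
    buffer.get(bytes);
    try (FileOutputStream output = new FileOutputStream(file)) {
        output.write(bytes);
    } finally {
        image.close();
    }
}

If the sample produces a correct image, the difference is almost certainly in the capture session or ImageReader configuration rather than in code like this.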

Related

Decode h264 video to java.awt.image.BufferedImage in java

I am trying to make an AirPlay server in Java with this library. I am able to start the server and connect to it, and I am getting video input; however, the input is in h264 format, and when I tried decoding it with JCodec it always says it needs an SPS/PPS, and I don't know how to create/find this with just a byte[]. This is the onVideo method, which is pretty much just copy-pasted from some websites:
@Override
public void onVideo(byte[] video) {
    try {
        videoFileChannel.write(ByteBuffer.wrap(video));
        ByteBuffer bb = ByteBuffer.wrap(video);
        H264Decoder decoder = new H264Decoder();
        decoder.addSps(List.of(ByteBuffer.wrap(video)));
        Picture out = Picture.create(1920, 1088, ColorSpace.YUV420);
        var real = decoder.decodeFrame(bb, out.getData());
        // decoder.decodeFrame prints "[WARN] . (:0): Skipping frame as no SPS/PPS have been seen so far..."
        // in the console and returns null => NullPointerException in the next line
        var img = AWTUtil.toBufferedImage(real.createCompatible());
        // ...
    } catch (IOException e) {
        e.printStackTrace();
    }
}
Edit: I've uploaded a ("working") version to GitHub, but the decoded image is discolored and doesn't update all pixels, so when something is on the screen and the frame changes, that something can still be left on the image.
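For anyone hitting the same wall: the warning means the decoder has never seen the SPS/PPS parameter sets (NAL unit types 7 and 8). A quick sketch for checking whether they are present in the raw stream at all (the helper is hypothetical, not part of JCodec):

// Scan an H.264 Annex B byte stream and print the NAL unit types found.
// Start codes are 0x000001 (the 0x00000001 form just has one extra leading zero);
// the NAL type is the low 5 bits of the byte that follows. 7 = SPS, 8 = PPS.
static void dumpNalTypes(byte[] data) {
    for (int i = 0; i + 3 < data.length; i++) {
        if (data[i] == 0 && data[i + 1] == 0 && data[i + 2] == 1) {
            int nalType = data[i + 3] & 0x1F;
            System.out.println("NAL type " + nalType + " at offset " + i);
        }
    }
}

If types 7 and 8 never show up, the sender is transmitting the parameter sets out of band (AirPlay mirroring typically does, in an avcC-style configuration record), and they have to be extracted from there and handed to the decoder separately, rather than wrapping the whole frame as the code above does.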

How to get an image's orientation information in android 10?

Since Android 10 there are some changes in accessing media files. After going through the documentation https://developer.android.com/training/data-storage/shared/media I have been able to load the media content into a bitmap, but I didn't get the orientation information. I know there are some restrictions on the location information of the image, but do these EXIF restrictions also affect orientation information? If there is any other way to get an image's orientation information, please let me know. The code I am using is given below (which always returns 0 - the value for undefined). Thank you.
ContentResolver resolver = getApplicationContext().getContentResolver();
try (InputStream stream = resolver.openInputStream(selectedFileUri)) {
    loadedBitmap = BitmapFactory.decodeStream(stream);
    ExifInterface exif = new ExifInterface(stream);
    orientation = exif.getAttributeInt(ExifInterface.TAG_ORIENTATION, ExifInterface.ORIENTATION_NORMAL);
}
BitmapFactory.decodeStream consumes the whole stream and closes it.
You should open a new stream before trying to read the EXIF data.
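A minimal sketch of that fix, with one stream per read (reusing selectedFileUri from the question):

// decodeStream() exhausts and closes the stream, so give the EXIF read its own.
ContentResolver resolver = getApplicationContext().getContentResolver();
Bitmap loadedBitmap;
int orientation;
try (InputStream bitmapStream = resolver.openInputStream(selectedFileUri)) {
    loadedBitmap = BitmapFactory.decodeStream(bitmapStream);
}
try (InputStream exifStream = resolver.openInputStream(selectedFileUri)) {
    ExifInterface exif = new ExifInterface(exifStream);
    orientation = exif.getAttributeInt(ExifInterface.TAG_ORIENTATION,
            ExifInterface.ORIENTATION_NORMAL);
}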
Firstly, since the framework ExifInterface behaves differently across SDK versions, use the AndroidX ExifInterface library.
Secondly, ExifInterface reads and writes EXIF tags in various image file formats. Supported for reading: JPEG, PNG, WebP, HEIF, DNG, CR2, NEF, NRW, ARW, RW2, ORF, PEF, SRW, RAF.
But you are using it on a bitmap, and a Bitmap does not have any EXIF headers. You already threw away any EXIF data when you loaded the Bitmap from wherever it came from. Use ExifInterface on the original source of the data, not the Bitmap.
You can try the following code to get the info; remember to use the original stream.
public static int getExifRotation(Context context, Uri imageUri) throws IOException {
    if (imageUri == null) return 0;
    InputStream inputStream = null;
    try {
        inputStream = context.getContentResolver().openInputStream(imageUri);
        ExifInterface exifInterface = new ExifInterface(inputStream);
        int orientation = exifInterface.getAttributeInt(ExifInterface.TAG_ORIENTATION, ExifInterface.ORIENTATION_UNDEFINED);
        switch (orientation) {
            case ExifInterface.ORIENTATION_ROTATE_90:
                return 90;
            case ExifInterface.ORIENTATION_ROTATE_180:
                return 180;
            case ExifInterface.ORIENTATION_ROTATE_270:
                return 270;
            default:
                return ExifInterface.ORIENTATION_UNDEFINED;
        }
    } finally {
        if (inputStream != null) {
            inputStream.close();
        }
    }
}
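Once you have the degrees, a hypothetical follow-up helper (my addition, not part of the answer above) can apply them to the decoded bitmap:

import android.graphics.Bitmap;
import android.graphics.Matrix;

// Rotates a bitmap by the degrees returned from getExifRotation().
public static Bitmap applyRotation(Bitmap source, int degrees) {
    if (degrees == 0) return source;
    Matrix matrix = new Matrix();
    matrix.postRotate(degrees);
    return Bitmap.createBitmap(source, 0, 0, source.getWidth(), source.getHeight(), matrix, true);
}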

Is there any way to make image compression and saving faster on Android?

The situation
I have to show a 200-350 frame animation in my application. The images have a 500x300ish resolution. If the user wants to share the animation, I have to convert it to video. For the conversion I am using an ffmpeg command:
ffmpeg -y -r 1 -i /sdcard/videokit/pic00%d.jpg -i /sdcard/videokit/in.mp3 -strict experimental -ar 44100 -ac 2 -ab 256k -b 2097152 -ar 22050 -vcodec mpeg4 -b 2097152 -s 320x240 /sdcard/videokit/out.mp4
To convert images to video, ffmpeg wants actual files, not Bitmaps or byte[].
Problem
Compressing bitmaps to image files takes too much time. Converting 210 images takes about 1 minute to finish on an average device (HTC One M7). Converting the image files to mp4 takes about 15 seconds on the same device. Altogether the user has to wait about 1.5 minutes.
What I have tried
I changed the compression format from PNG to JPEG (the 1.5 minute result is achieved with JPEG compression (quality=80); with PNG it takes about 2-2.5 minutes) - success.
I tried to find out how to pass byte[] or Bitmap to ffmpeg - no success.
QUESTION
Is there any way (a library, even a native one) to make the saving process faster?
Is there any way to pass byte[] or Bitmap objects (I mean a PNG file decompressed to an Android Bitmap class object) to ffmpeg's video-creating method?
Is there any other working library which will create an mp4 (or any format supported by the main social networks) from byte[] or Bitmap objects in about 30 seconds (for 200 frames)?
You can convert the Bitmap (or byte[]) to YUV format quickly using renderscript (see https://stackoverflow.com/a/39877029/192373). You can pass these YUV frames to the ffmpeg library (as halfelf suggests), or use the built-in native MediaCodec, which uses dedicated hardware on most devices (but whose compression options are less flexible than all-software ffmpeg's).
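A minimal sketch of the MediaCodec route (configuration only; a real encoder also needs the input/output buffer loop and a MediaMuxer, and the bitrate, frame rate, and I-frame interval below are arbitrary example values):

import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import java.io.IOException;

// Configure a hardware H.264 encoder for 500x300 frames.
static MediaCodec createEncoder() throws IOException {
    MediaFormat format = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, 500, 300);
    format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
            MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420Flexible);
    format.setInteger(MediaFormat.KEY_BIT_RATE, 2_000_000);
    format.setInteger(MediaFormat.KEY_FRAME_RATE, 25);
    format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);
    MediaCodec encoder = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC);
    encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
    encoder.start();
    return encoder;
}

You would then feed it the YUV frames from the renderscript conversion and drain the encoded output into a MediaMuxer.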
There are two steps slowing us down: compressing the images to PNG/JPG and writing them to disk. Both can be skipped if we code directly against the ffmpeg libs instead of calling the ffmpeg command. (There are other improvements too, such as GPU encoding and multithreading, but those are much more complicated.)
Some approaches to coding it:
Use only the C/C++ NDK for the Android part. FFmpeg will happily work there. But I guess that's not an option here.
Build it from scratch with Java JNI. I don't have much experience here; I only know that this can link Java to C/C++ libs.
Use a Java wrapper. Luckily, I found javacpp-presets. (There are others too, but this one is good enough and up to date.)
This library includes a good example ported from dranger's famous ffmpeg tutorial, though it is a demuxing one.
We can try to write a muxing one, following ffmpeg's muxing.c example:
import java.io.*;

import org.bytedeco.javacpp.*;
import static org.bytedeco.javacpp.avcodec.*;
import static org.bytedeco.javacpp.avformat.*;
import static org.bytedeco.javacpp.avutil.*;
import static org.bytedeco.javacpp.swscale.*;

public class Muxer {
    public class OutputStream {
        public AVStream Stream;
        public AVCodecContext Ctx;
        public AVFrame Frame;
        public SwsContext SwsCtx;

        public void setStream(AVStream s) {
            this.Stream = s;
        }

        public AVStream getStream() {
            return this.Stream;
        }

        public void setCodecCtx(AVCodecContext c) {
            this.Ctx = c;
        }

        public AVCodecContext getCodecCtx() {
            return this.Ctx;
        }

        public void setFrame(AVFrame f) {
            this.Frame = f;
        }

        public AVFrame getFrame() {
            return this.Frame;
        }

        public OutputStream() {
            Stream = null;
            Ctx = null;
            Frame = null;
            SwsCtx = null;
        }
    }

    public static void main(String[] args) throws IOException {
        Muxer t = new Muxer();
        OutputStream VideoSt = t.new OutputStream();
        AVOutputFormat Fmt = null;
        AVFormatContext FmtCtx = new AVFormatContext(null);
        AVCodec VideoCodec = null;
        AVDictionary Opt = null;
        SwsContext SwsCtx = null;
        AVPacket Pkt = new AVPacket();
        int InLineSize[] = new int[1];
        String FilePath = "/path/xxx.mp4";

        avformat_alloc_output_context2(FmtCtx, null, null, FilePath);
        Fmt = FmtCtx.oformat();

        AVCodec codec = avcodec_find_encoder_by_name("libx264");
        av_format_set_video_codec(FmtCtx, codec);
        VideoCodec = avcodec_find_encoder(Fmt.video_codec());
        VideoSt.setStream(avformat_new_stream(FmtCtx, null));
        AVStream stream = VideoSt.getStream();
        VideoSt.getStream().id(FmtCtx.nb_streams() - 1);
        VideoSt.setCodecCtx(avcodec_alloc_context3(VideoCodec));
        VideoSt.getCodecCtx().codec_id(Fmt.video_codec());
        VideoSt.getCodecCtx().bit_rate(5120000);
        VideoSt.getCodecCtx().width(1920);
        VideoSt.getCodecCtx().height(1080);
        AVRational fps = new AVRational();
        fps.den(25); fps.num(1);
        VideoSt.getStream().time_base(fps);
        VideoSt.getCodecCtx().time_base(fps);
        VideoSt.getCodecCtx().gop_size(10);
        VideoSt.getCodecCtx().max_b_frames(1); // allow one B-frame, for example
        VideoSt.getCodecCtx().pix_fmt(AV_PIX_FMT_YUV420P);
        if ((FmtCtx.oformat().flags() & AVFMT_GLOBALHEADER) != 0)
            VideoSt.getCodecCtx().flags(VideoSt.getCodecCtx().flags() | AV_CODEC_FLAG_GLOBAL_HEADER);
        avcodec_open2(VideoSt.getCodecCtx(), VideoCodec, Opt);

        VideoSt.setFrame(av_frame_alloc());
        VideoSt.getFrame().format(VideoSt.getCodecCtx().pix_fmt());
        VideoSt.getFrame().width(1920);
        VideoSt.getFrame().height(1080);
        av_frame_get_buffer(VideoSt.getFrame(), 32);

        // pts is an unsigned long in C, so use at least a long (or even BigInteger) here
        long nextpts = 0;

        av_dump_format(FmtCtx, 0, FilePath, 1);
        avio_open(FmtCtx.pb(), FilePath, AVIO_FLAG_WRITE);
        avformat_write_header(FmtCtx, Opt);

        int[] got_output = { 0 };
        while (still_has_input) { // still_has_input: placeholder for your own loop condition
            // convert or directly copy your byte[] into VideoSt.Frame here
            // The AVFrame structure has two important data fields:
            // AVFrame.data (uint8_t*[]) and AVFrame.linesize (int[])
            // data holds the pixel values in some format and linesize is the size of each picture line.
            // For example, if the format is RGB, linesize should have 3 valid values equaling
            // `image_width * 3`, and data will point to three arrays containing RGB values.
            // But I guess we'll need swscale() here to convert the pixel format from RGB
            // to yuv420p (or another yuv-family format).
            Pkt = new AVPacket();
            av_init_packet(Pkt);
            VideoSt.getFrame().pts(nextpts++);
            avcodec_encode_video2(VideoSt.getCodecCtx(), Pkt, VideoSt.getFrame(), got_output);
            av_packet_rescale_ts(Pkt, VideoSt.getCodecCtx().time_base(), VideoSt.getStream().time_base());
            Pkt.stream_index(VideoSt.getStream().index());
            av_interleaved_write_frame(FmtCtx, Pkt);
            av_packet_unref(Pkt);
        }

        // get delayed frames
        for (got_output[0] = 1; got_output[0] != 0;) {
            Pkt = new AVPacket();
            av_init_packet(Pkt);
            avcodec_encode_video2(VideoSt.getCodecCtx(), Pkt, null, got_output);
            if (got_output[0] > 0) {
                av_packet_rescale_ts(Pkt, VideoSt.getCodecCtx().time_base(), VideoSt.getStream().time_base());
                Pkt.stream_index(VideoSt.getStream().index());
                av_interleaved_write_frame(FmtCtx, Pkt);
            }
            av_packet_unref(Pkt);
        }

        // write the trailer, which finalizes the mp4 index so the file is playable
        av_write_trailer(FmtCtx);

        // free C structs
        avcodec_free_context(VideoSt.getCodecCtx());
        av_frame_free(VideoSt.getFrame());
        avio_closep(FmtCtx.pb());
        avformat_free_context(FmtCtx);
    }
}
For porting C code, normally several changes should be made (a small example of the first rule follows this list):
Mostly the work is replacing every C struct member access (. and ->) with Java getters/setters.
There are also many C address-of operators (&); just delete them.
Change the C NULL macro and the C++ nullptr pointer to the Java null object.
C code often checks the bool result of an int type in if, for, while; in Java you have to compare it with 0.
There may be other API changes too, but as long as you refer to the javacpp-presets docs it'll be fine.
Note that I omitted all error handling code here. It will be needed in real development/production.
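For instance, here are the porting rules side by side (the field names come from FFmpeg's AVCodecContext; the exact Java signatures are whatever your javacpp-presets version generates):

// Fragment assuming 'codec' from avcodec_find_encoder(...)
AVCodecContext ctx = avcodec_alloc_context3(codec);       // C: c = avcodec_alloc_context3(codec);
ctx.bit_rate(400000);                                     // C: c->bit_rate = 400000;  (setter replaces ->)
int w = ctx.width();                                      // C: int w = c->width;     (getter replaces ->)
int ret = avcodec_open2(ctx, codec, (AVDictionary) null); // C: avcodec_open2(c, codec, &opt);  (& is dropped)
if (ret < 0) {                                            // C: if (avcodec_open2(...)) — compare with 0 in Java
    throw new RuntimeException("avcodec_open2 failed: " + ret);
}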
I really don't want to advertise, but using PKZIP and its SDK may be a good solution. PKZIP compresses files by up to 95%, as they say.
The Smartcrypt SDK is available in all major programming languages, including C++, Java, and C#, and can be used to encrypt both structured and unstructured data. Changes to existing applications typically consist of two or three lines of code.

Swift equivalent of inputStream

I am converting my Android app into an iOS app using Swift 2.0 and a Parse backend. I would just like to know the equivalent of this code:
Code
InputStream rawData = (InputStream) new URL(https_url).getContent();
Bitmap UniqueQRCode = BitmapFactory.decodeStream(rawData);
ByteArrayOutputStream stream = new ByteArrayOutputStream();
// Compress image to lower quality scale 1 - 100
UniqueQRCode.compress(Bitmap.CompressFormat.PNG, 100, stream);
byte[] image = stream.toByteArray();
It is better to do an asynchronous call on iOS. That will lead to more responsive applications.
Here is a simple example to download an image from the web and display it in a UIImageView:
class ViewController: UIViewController {
    @IBOutlet weak var imageView: UIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()
        if let url = NSURL(string: "https://mozorg.cdn.mozilla.net/media/img/firefox/new/header-firefox-high-res.d121992bf56c.png") {
            let task = NSURLSession.sharedSession().dataTaskWithRequest(NSURLRequest(URL: url)) { (data, response, error) -> Void in
                if error == nil && data != nil { // Needs better error handling based on what your server returns
                    if let image = UIImage(data: data!) {
                        dispatch_async(dispatch_get_main_queue()) {
                            self.imageView.image = image
                        }
                    }
                }
            }
            task.resume()
        }
    }
}
If you run this on iOS 9 then you do need to set NSAllowsArbitraryLoads to YES in the application's Info.plist. There are lots of postings about that in more detail.
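For reference, that setting looks like this in Info.plist (this disables App Transport Security globally; scoping exceptions to specific domains is the safer long-term option):

<key>NSAppTransportSecurity</key>
<dict>
    <key>NSAllowsArbitraryLoads</key>
    <true/>
</dict>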

Encode a video into h264 using bufferedimages?

I'm attempting to translate a large set of BufferedImages (pre-saved images created on the fly by my application) into a video using Java, and hopefully a library that can help with the process.
I've explored a number of different options, such as jcodec (there was no documentation on how to use it), Xuggler (I couldn't get it to run due to compatibility issues with JDK 5 and its related libraries), and a number of other libraries that had very poor documentation.
I'm trying to find a library that I can use with Java that (1) creates h264 videos by writing BufferedImages frame by frame and (2) has documentation, so that I can actually figure out how to use the damn thing.
Any ideas on what I should be looking into?
If pure Java source code exists somewhere that can achieve this, I would be VERY interested in seeing it, because I would love to see how the person achieved the functionality and how I could use it!
Thanks in advance...
Here's how you can do it with JCodec:
public class SequenceEncoder {
    private SeekableByteChannel ch;
    private Picture toEncode;
    private RgbToYuv420 transform;
    private H264Encoder encoder;
    private ArrayList<ByteBuffer> spsList;
    private ArrayList<ByteBuffer> ppsList;
    private CompressedTrack outTrack;
    private ByteBuffer _out;
    private int frameNo;
    private MP4Muxer muxer;

    public SequenceEncoder(File out) throws IOException {
        this.ch = NIOUtils.writableFileChannel(out);

        // Transform to convert between RGB and YUV
        transform = new RgbToYuv420(0, 0);

        // Muxer that will store the encoded frames
        muxer = new MP4Muxer(ch, Brand.MP4);

        // Add video track to muxer
        outTrack = muxer.addTrackForCompressed(TrackType.VIDEO, 25);

        // Allocate a buffer big enough to hold output frames
        _out = ByteBuffer.allocate(1920 * 1080 * 6);

        // Create an instance of encoder
        encoder = new H264Encoder();

        // Encoder extra data ( SPS, PPS ) to be stored in a special place of
        // MP4
        spsList = new ArrayList<ByteBuffer>();
        ppsList = new ArrayList<ByteBuffer>();
    }

    public void encodeImage(BufferedImage bi) throws IOException {
        if (toEncode == null) {
            toEncode = Picture.create(bi.getWidth(), bi.getHeight(), ColorSpace.YUV420);
        }

        // Perform conversion
        transform.transform(AWTUtil.fromBufferedImage(bi), toEncode);

        // Encode image into H.264 frame, the result is stored in '_out' buffer
        _out.clear();
        ByteBuffer result = encoder.encodeFrame(_out, toEncode);

        // Based on the frame above form correct MP4 packet
        spsList.clear();
        ppsList.clear();
        H264Utils.encodeMOVPacket(result, spsList, ppsList);

        // Add packet to video track
        outTrack.addFrame(new MP4Packet(result, frameNo, 25, 1, frameNo, true, null, frameNo, 0));
        frameNo++;
    }

    public void finish() throws IOException {
        // Push saved SPS/PPS to a special storage in MP4
        outTrack.addSampleEntry(H264Utils.createMOVSampleEntry(spsList, ppsList));

        // Write MP4 header and finalize recording
        muxer.writeHeader();
        NIOUtils.closeQuietly(ch);
    }
}
jcodec now (jcodec-0.1.9.jar) includes a SequenceEncoder that directly permits writing BufferedImages to a video stream.
I spent a while fixing the default import of this new class in Eclipse. After removing the first import, attempting to create my own class using Stanislav's code (as I said above, I could not locate some of the classes), and reimporting, I spotted the mistake:
import org.jcodec.api.awt.SequenceEncoder;
//import org.jcodec.api.SequenceEncoder;
The second one is completely deprecated, with no documentation pointing to the first.
The corresponding method is then:
private void saveClip(Trajectory traj) {
    //See www.tutorialspoint.com/androi/android_audio_capture.htm
    //for audio cap ideas.
    SequenceEncoder enc;
    try {
        enc = new SequenceEncoder(new File("C:/Users/WHOAMI/today.mp4"));
        for (int i = 0; i < BUFF_COUNT; ++i) {
            BufferedImage image = buffdFramToBuffdImage(frameBuff.get(i));
            enc.encodeImage(image);
        }
        enc.finish();
    } catch (IOException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
}
