I'm using OpenCV 4.5.2 with Java bindings and trying to create a video from a set of frames.
class OpenCVVideoWriter(
    size: com.domain.util.model.Size,
    frameRate: Float,
    outputFile: File
) {
    private val videoWriter = VideoWriter(
        outputFile.absolutePath,
        VideoWriter.fourcc('M', 'J', 'P', 'G'),
        frameRate.toDouble(),
        Size(size.width.toDouble(), size.height.toDouble()),
        true
    )

    // Reused for every frame; expects rows * cols * 3 bytes of BGR data.
    private val currentFrame: Mat = Mat(size.height, size.width, CvType.CV_8UC3)

    init {
        if (!videoWriter.isOpened) {
            throw OpenCVException("Failed to open VideoWriter")
        }
    }

    fun writeFrame(data: ByteArray) {
        currentFrame.put(0, 0, data)
        videoWriter.write(currentFrame)
    }

    fun release() {
        videoWriter.release()
        currentFrame.release()
    }

    private companion object {
        init {
            System.loadLibrary(Core.NATIVE_LIBRARY_NAME)
        }
    }
}
The frames themselves are OK: I've dumped the ByteArrays passed to VideoWriter as *.bmp files, and all of them look fine.
But the final video is flipped vertically (not rotated).
I've used the same code (with a slightly older version of OpenCV) in an older project, and it worked correctly there, but I can't test it with that project again since I no longer have access to it.
UPD. If I write a frame with OpenCV's Imgcodecs.imwrite("/tmp/frame0.jpg", currentFrame), it's flipped too.
How can I fix it?
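Since Imgcodecs.imwrite shows the same flip, the Mat is already upside down before it reaches the VideoWriter, which points at the input bytes using bottom-up row order (as BMP data does). A minimal sketch of a workaround, assuming that is the cause, is to flip each frame vertically before writing:

fun writeFrame(data: ByteArray) {
    currentFrame.put(0, 0, data)
    // flipCode = 0 flips around the x-axis, i.e. vertically; in-place use is fine here.
    Core.flip(currentFrame, currentFrame, 0)
    videoWriter.write(currentFrame)
}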
Related
I'm currently working with the Camera2 API and have created a new ImageReader.OnImageAvailableListener object, which of course has to implement the onImageAvailable(ImageReader reader) method. The only thing I want is to acquire the latest image from this reader and save it, but unfortunately I just can't get it to work. I have read a lot of source code and visited different StackOverflow topics, but couldn't find the answer. I'm now at the point where I have to ask: can this Image object actually be saved as an image file to the phone's storage? Here is the method:
@Override
public void onImageAvailable(ImageReader reader) {
    Image image = reader.acquireLatestImage();
    ByteBuffer buffer = image.getPlanes()[0].getBuffer();
    buffer.rewind();
    byte[] bytes = new byte[buffer.capacity()];
    buffer.get(bytes);
    save(bytes);
    image.close();
}
The save() method only opens a FileOutputStream and writes the bytes to it; that part is working. The problem is that I only get a black image, and it has a really small size.
The format of the image is JPEG; this is how I configured my ImageReader instance previously.
I even tried converting it to different formats, like from NV21 to JPEG and so on, but it didn't work out. What am I missing here?
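For reference, the reader was configured for JPEG roughly like this (a sketch; the dimensions and maxImages value here are placeholders, not my actual values):

import android.graphics.ImageFormat
import android.media.ImageReader
import android.os.Handler

// A JPEG-configured reader delivers frames whose single plane holds the
// already-encoded JPEG bytes.
fun createJpegReader(listener: ImageReader.OnImageAvailableListener, handler: Handler): ImageReader {
    val reader = ImageReader.newInstance(1920, 1080, ImageFormat.JPEG, /* maxImages = */ 2)
    reader.setOnImageAvailableListener(listener, handler)
    return reader
}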
Here's a class I've used to extract a Bitmap from an Image.
import android.graphics.Bitmap
import android.graphics.BitmapFactory
import android.media.Image
import java.io.IOException
import java.io.InputStream
import java.nio.ByteBuffer

class ImagePreprocessor {

    private var rgbFrameBitmap: Bitmap? = Bitmap.createBitmap(IMAGE_WIDTH, IMAGE_HEIGHT,
            Bitmap.Config.ARGB_8888)

    fun preprocessImage(image: Image?): Bitmap? {
        if (image == null) {
            return null
        }

        check(rgbFrameBitmap!!.width == image.width) { "Invalid size width" }
        check(rgbFrameBitmap!!.height == image.height) { "Invalid size height" }

        // The JPEG data lives in a single plane; decode it straight from the buffer.
        val bb = image.planes[0].buffer
        rgbFrameBitmap = BitmapFactory.decodeStream(ByteBufferBackedInputStream(bb))
        return rgbFrameBitmap
    }

    private class ByteBufferBackedInputStream(internal var buf: ByteBuffer) : InputStream() {

        @Throws(IOException::class)
        override fun read(): Int {
            return if (!buf.hasRemaining()) {
                -1
            } else {
                // Mask to an unsigned value; `and 0xFF.toByte()` would sign-extend.
                buf.get().toInt() and 0xFF
            }
        }

        @Throws(IOException::class)
        override fun read(bytes: ByteArray, off: Int, len: Int): Int {
            if (!buf.hasRemaining()) {
                return -1
            }
            val count = Math.min(len, buf.remaining())
            buf.get(bytes, off, count)
            return count
        }
    }

    companion object {
        // Expected input dimensions; adjust to match your ImageReader.
        private const val IMAGE_WIDTH = 640
        private const val IMAGE_HEIGHT = 480
    }
}
I've used this to set the bitmap directly on an ImageView, but it should be possible to use the compress method on the Bitmap to save it to a file. Note that this only works for JPEG, since the data is in a single plane.
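For example, a sketch of such a save helper (the quality value and file handling here are my own choices):

import android.graphics.Bitmap
import java.io.File
import java.io.FileOutputStream

// Writes the decoded bitmap back out as a JPEG file.
fun saveBitmap(bitmap: Bitmap, file: File) {
    FileOutputStream(file).use { out ->
        bitmap.compress(Bitmap.CompressFormat.JPEG, 90, out)
    }
}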
Have you tried the sample camera2 app, Camera2Basic? It saves JPEGs with code very similar to yours, and should work fine.
It's possible your other camera setup code has a bug, not the saving part itself. So see if the sample works, and then compare what it does to your code.
I want to find a face in an image from the camera, but the detector can't find any faces. My app takes a photo and saves it to a file.
Below is the code that creates the file and starts the camera; in onActivityResult I try to detect a face and save the file path to the Room database. It saves correctly and shows up in the RecyclerView as expected, but the face detector doesn't find any faces. How can I fix this?
private fun takePhoto() {
    val takePictureIntent = Intent(MediaStore.ACTION_IMAGE_CAPTURE)
    if (takePictureIntent.resolveActivity(activity?.packageManager!!) != null) {
        val photoFile: File
        try {
            photoFile = createImageFile()
        } catch (e: IOException) {
            error(e)
        }
        val photoURI = FileProvider.getUriForFile(activity?.applicationContext!!,
                "com.nasibov.fakhri.neurelia.fileprovider", photoFile)
        takePictureIntent.putExtra(MediaStore.EXTRA_OUTPUT, photoURI)
        takePictureIntent.putExtra("android.intent.extras.CAMERA_FACING", 1)
        startActivityForResult(takePictureIntent, PhotoFragment.REQUEST_TAKE_PHOTO)
    }
}

@Suppress("SimpleDateFormat")
private fun createImageFile(): File {
    val date = SimpleDateFormat("yyyyMMdd_HHmmss").format(Date())
    val fileName = "JPEG_$date"
    val filesDir = activity?.getExternalFilesDir(Environment.DIRECTORY_PICTURES)
    val image = File.createTempFile(fileName, ".jpg", filesDir)
    mCurrentImage = image
    return mCurrentImage
}

override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
    if (requestCode == REQUEST_TAKE_PHOTO && resultCode == Activity.RESULT_OK) {
        val bitmap = BitmapFactory.decodeFile(mCurrentImage.absolutePath)
        val frame = Frame.Builder().setBitmap(bitmap).build()
        val detectedFaces = mFaceDetector.detect(frame)
        mViewModel.savePhoto(mCurrentImage)
    }
}
The Android face detection API tracks faces in photos and videos using landmarks like the eyes, nose, ears, cheeks, and mouth.
Rather than detecting individual features, the API detects the face as a whole and then, if configured, detects the landmarks and classifications. Besides, the API can detect faces at various angles too.
https://www.journaldev.com/15629/android-face-detection
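A minimal sketch of setting up the detector described above (assuming the play-services-vision dependency is on the classpath; enabling landmarks and classifications is optional):

import android.content.Context
import android.graphics.Bitmap
import com.google.android.gms.vision.Frame
import com.google.android.gms.vision.face.Face
import com.google.android.gms.vision.face.FaceDetector

// Builds a detector, runs it on a bitmap, and releases it.
fun detectFaces(context: Context, bitmap: Bitmap): List<Face> {
    val detector = FaceDetector.Builder(context)
            .setTrackingEnabled(false)
            .setLandmarkType(FaceDetector.ALL_LANDMARKS)
            .setClassificationType(FaceDetector.ALL_CLASSIFICATIONS)
            .build()
    if (!detector.isOperational) return emptyList() // native libs not downloaded yet
    val faces = detector.detect(Frame.Builder().setBitmap(bitmap).build())
    detector.release()
    return (0 until faces.size()).map { faces.valueAt(it) }
}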
The Android SDK contains an API for face detection: the android.media.FaceDetector class. This class detects faces in an image. To detect faces, call the findFaces method of the FaceDetector class. findFaces returns the number of detected faces and fills the FaceDetector.Face[] array. Please note that findFaces supports only bitmaps in RGB_565 format at this time.
Each instance of the FaceDetector.Face class contains the following information:
Confidence that it’s actually a face – a float value between 0 and 1.
Distance between the eyes – in pixels.
Position (x, y) of the mid-point between the eyes.
Rotations (X, Y, Z).
Unfortunately, it doesn’t contain a framing rectangle that includes the detected face.
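Reading those fields from a detected face looks roughly like this (a sketch; `face` comes from FaceDetector.findFaces):

import android.graphics.PointF
import android.media.FaceDetector

// Pulls the fields listed above out of one detected face.
fun describe(face: FaceDetector.Face): String {
    val mid = PointF()
    face.getMidPoint(mid)                                   // mid-point between the eyes
    return "confidence=${face.confidence()} " +             // 0..1
            "eyesDistance=${face.eyesDistance()}px " +      // distance between the eyes
            "mid=(${mid.x}, ${mid.y}) " +
            "rotY=${face.pose(FaceDetector.Face.EULER_Y)}"  // rotations via EULER_X/Y/Z
}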
Here is sample source code for face detection. It implements a custom View that shows an image saved on the SD card and draws transparent red circles on the detected faces.
class Face_Detection_View extends View {
    private static final int MAX_FACES = 10;
    private static final String IMAGE_FN = "face.jpg";

    private Bitmap background_image;
    private FaceDetector.Face[] faces;
    private int face_count;

    // preallocate for onDraw(...)
    private PointF tmp_point = new PointF();
    private Paint tmp_paint = new Paint();

    public Face_Detection_View(Context context) {
        super(context);
        // Load an image from SD card
        updateImage(Environment.getExternalStorageDirectory() + "/" + IMAGE_FN);
    }

    public void updateImage(String image_fn) {
        // Set internal configuration to RGB_565
        BitmapFactory.Options bitmap_options = new BitmapFactory.Options();
        bitmap_options.inPreferredConfig = Bitmap.Config.RGB_565;

        background_image = BitmapFactory.decodeFile(image_fn, bitmap_options);
        FaceDetector face_detector = new FaceDetector(
                background_image.getWidth(), background_image.getHeight(),
                MAX_FACES);
        faces = new FaceDetector.Face[MAX_FACES];
        // The bitmap must be in 565 format (for now).
        face_count = face_detector.findFaces(background_image, faces);
        Log.d("Face_Detection", "Face Count: " + String.valueOf(face_count));
    }

    @Override
    public void onDraw(Canvas canvas) {
        canvas.drawBitmap(background_image, 0, 0, null);

        // Draw a transparent red circle over each detected face.
        for (int i = 0; i < face_count; i++) {
            FaceDetector.Face face = faces[i];
            tmp_paint.setColor(Color.RED);
            tmp_paint.setAlpha(100);
            face.getMidPoint(tmp_point);
            canvas.drawCircle(tmp_point.x, tmp_point.y, face.eyesDistance(),
                    tmp_paint);
        }
    }
}
The situation
I need to show a 200-350 frame animation in my application. The images have a resolution of roughly 500x300. If the user wants to share the animation, I have to convert it to video. For the conversion I am using an ffmpeg command:
ffmpeg -y -r 1 -i /sdcard/videokit/pic00%d.jpg -i /sdcard/videokit/in.mp3 -strict experimental -ar 44100 -ac 2 -ab 256k -b 2097152 -ar 22050 -vcodec mpeg4 -b 2097152 -s 320x240 /sdcard/videokit/out.mp4
To convert images to video, ffmpeg wants actual files, not Bitmap or byte[].
Problem
Compressing bitmaps to image files takes too much time. Converting 210 images takes about 1 minute on an average device (HTC One M7); converting the image files to mp4 takes about 15 seconds on the same device. Altogether, the user has to wait about 1.5 minutes.
What I have tried
I changed the compression format from PNG to JPEG (the 1.5-minute result is achieved with JPEG compression at quality 80; with PNG it takes about 2-2.5 minutes): success.
Tried to find out how to pass byte[] or a Bitmap to ffmpeg: no success.
QUESTION
Is there any way (a library, even a native one) to make the saving process faster?
Is there any way to pass byte[] or Bitmap objects (I mean a PNG file decompressed to an Android Bitmap class object) to an ffmpeg library video-creation method?
Is there any other working library that will create an mp4 (or any format supported by the main social networks) from byte[] or Bitmap objects in about 30 seconds (for 200 frames)?
You can convert a Bitmap (or byte[]) to YUV format quickly using RenderScript (see https://stackoverflow.com/a/39877029/192373). You can pass these YUV frames to the ffmpeg library (as halfelf suggests), or use the built-in native MediaCodec, which uses dedicated hardware on most devices (but whose compression options are less flexible than all-software ffmpeg).
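For the MediaCodec route, a minimal sketch of configuring a hardware H.264 encoder (assuming 320x240 input as in the ffmpeg command above; feeding the YUV frames in and muxing the output with MediaMuxer are omitted):

import android.media.MediaCodec
import android.media.MediaCodecInfo
import android.media.MediaFormat

// Configures a hardware-backed AVC encoder for 320x240 YUV input.
fun createEncoder(): MediaCodec {
    val format = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, 320, 240).apply {
        setInteger(MediaFormat.KEY_COLOR_FORMAT,
                MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420Flexible)
        setInteger(MediaFormat.KEY_BIT_RATE, 2_000_000)
        setInteger(MediaFormat.KEY_FRAME_RATE, 25)
        setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1)
    }
    val encoder = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC)
    encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE)
    encoder.start()
    // Queue input buffers with YUV frames, drain output into a MediaMuxer,
    // then call stop() and release().
    return encoder
}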
Two steps slow us down: compressing the images to PNG/JPG, and writing them to disk. Both can be skipped if we code directly against the ffmpeg libs instead of calling the ffmpeg command. (There are other improvements too, such as GPU encoding and multithreading, but those are much more complicated.)
Some approaches to coding:
Use only the C/C++ NDK for the Android part. FFmpeg will happily work there, but I guess that's not an option here.
Build it from scratch with Java JNI. I don't have much experience here; I only know that this can link Java to C/C++ libs.
Use a Java wrapper. Luckily I found javacpp-presets. (There are others too, but this one is good enough and up to date.)
This library includes a good example ported from the famous dranger ffmpeg tutorial, though it is a demuxing one.
We can try to write a muxing one, following ffmpeg's muxing.c example.
import java.io.*;
import org.bytedeco.javacpp.*;
import static org.bytedeco.javacpp.avcodec.*;
import static org.bytedeco.javacpp.avformat.*;
import static org.bytedeco.javacpp.avutil.*;
import static org.bytedeco.javacpp.swscale.*;

public class Muxer {

    public class OutputStream {
        public AVStream Stream;
        public AVCodecContext Ctx;
        public AVFrame Frame;
        public SwsContext SwsCtx;

        public void setStream(AVStream s) {
            this.Stream = s;
        }

        public AVStream getStream() {
            return this.Stream;
        }

        public void setCodecCtx(AVCodecContext c) {
            this.Ctx = c;
        }

        public AVCodecContext getCodecCtx() {
            return this.Ctx;
        }

        public void setFrame(AVFrame f) {
            this.Frame = f;
        }

        public AVFrame getFrame() {
            return this.Frame;
        }

        public OutputStream() {
            Stream = null;
            Ctx = null;
            Frame = null;
            SwsCtx = null;
        }
    }

    public static void main(String[] args) throws IOException {
        Muxer t = new Muxer();
        OutputStream VideoSt = t.new OutputStream();
        AVOutputFormat Fmt = null;
        AVFormatContext FmtCtx = new AVFormatContext(null);
        AVCodec VideoCodec = null;
        AVDictionary Opt = null;
        SwsContext SwsCtx = null;
        AVPacket Pkt = new AVPacket();
        int InLineSize[] = new int[1];
        String FilePath = "/path/xxx.mp4";
        boolean still_has_input = true; // pseudo-flag: true while you still have input frames

        avformat_alloc_output_context2(FmtCtx, null, null, FilePath);
        Fmt = FmtCtx.oformat();

        AVCodec codec = avcodec_find_encoder_by_name("libx264");
        av_format_set_video_codec(FmtCtx, codec);

        VideoCodec = avcodec_find_encoder(Fmt.video_codec());
        VideoSt.setStream(avformat_new_stream(FmtCtx, null));
        VideoSt.getStream().id(FmtCtx.nb_streams() - 1);
        VideoSt.setCodecCtx(avcodec_alloc_context3(VideoCodec));

        VideoSt.getCodecCtx().codec_id(Fmt.video_codec());
        VideoSt.getCodecCtx().bit_rate(5120000);
        VideoSt.getCodecCtx().width(1920);
        VideoSt.getCodecCtx().height(1080);
        AVRational fps = new AVRational();
        fps.den(25); fps.num(1);
        VideoSt.getStream().time_base(fps);
        VideoSt.getCodecCtx().time_base(fps);
        VideoSt.getCodecCtx().gop_size(10);
        VideoSt.getCodecCtx().max_b_frames(1);
        VideoSt.getCodecCtx().pix_fmt(AV_PIX_FMT_YUV420P);

        if ((FmtCtx.oformat().flags() & AVFMT_GLOBALHEADER) != 0)
            VideoSt.getCodecCtx().flags(VideoSt.getCodecCtx().flags() | AV_CODEC_FLAG_GLOBAL_HEADER);

        avcodec_open2(VideoSt.getCodecCtx(), VideoCodec, Opt);

        VideoSt.setFrame(av_frame_alloc());
        VideoSt.getFrame().format(VideoSt.getCodecCtx().pix_fmt());
        VideoSt.getFrame().width(1920);
        VideoSt.getFrame().height(1080);
        av_frame_get_buffer(VideoSt.getFrame(), 32);

        // pts is an unsigned long in C, so use (at least) a Java long,
        // or even BigInteger for very long streams.
        long nextpts = 0;

        av_dump_format(FmtCtx, 0, FilePath, 1);
        avio_open(FmtCtx.pb(), FilePath, AVIO_FLAG_WRITE);
        avformat_write_header(FmtCtx, Opt);

        int[] got_output = { 0 };
        while (still_has_input) {
            // Convert or directly copy your byte[] into VideoSt.Frame here.
            // The AVFrame structure has two important data fields:
            // AVFrame.data (uint8_t*[]) and AVFrame.linesize (int[]).
            // data holds the pixel values in some format and linesize is the size of each picture line.
            // For example, if the format is RGB, linesize should hold 3 valid values equal to
            // `image_width * 3`, and data will point to three arrays containing RGB values.
            // But I guess we'll need swscale here to convert the pixel format,
            // from RGB to yuv420p (or another format of the YUV family).
            Pkt = new AVPacket();
            av_init_packet(Pkt);

            VideoSt.getFrame().pts(nextpts++);
            avcodec_encode_video2(VideoSt.getCodecCtx(), Pkt, VideoSt.getFrame(), got_output);

            av_packet_rescale_ts(Pkt, VideoSt.getCodecCtx().time_base(), VideoSt.getStream().time_base());
            Pkt.stream_index(VideoSt.getStream().index());
            av_interleaved_write_frame(FmtCtx, Pkt);
            av_packet_unref(Pkt);
        }

        // Get the delayed frames by flushing the encoder with a null frame.
        for (got_output[0] = 1; got_output[0] != 0;) {
            Pkt = new AVPacket();
            av_init_packet(Pkt);

            avcodec_encode_video2(VideoSt.getCodecCtx(), Pkt, null, got_output);
            if (got_output[0] > 0) {
                av_packet_rescale_ts(Pkt, VideoSt.getCodecCtx().time_base(), VideoSt.getStream().time_base());
                Pkt.stream_index(VideoSt.getStream().index());
                av_interleaved_write_frame(FmtCtx, Pkt);
            }
            av_packet_unref(Pkt);
        }

        // Write the container trailer to finalize the file, then free the C structs.
        av_write_trailer(FmtCtx);
        avcodec_free_context(VideoSt.getCodecCtx());
        av_frame_free(VideoSt.getFrame());
        avio_closep(FmtCtx.pb());
        avformat_free_context(FmtCtx);
    }
}
When porting C code, several kinds of changes are normally needed (see the sketch after this list):
Most of the work is replacing every C struct member access (. and ->) with Java getters/setters.
There are also many C address-of operators (&); just delete them.
Change the C NULL macro and the C++ nullptr pointer to the Java null object.
C code often checks the boolean result of an int type in if, for, and while; in Java you have to compare it with 0 explicitly.
There may be other API changes too, but as long as you consult the javacpp-presets docs, it'll be fine.
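As a tiny illustration of these rules (a sketch against the same org.bytedeco.javacpp avutil bindings, written here in Kotlin, which follows the same getter/setter conventions):

import org.bytedeco.javacpp.avutil.*

fun main() {
    // C: AVFrame *frame = av_frame_alloc(); if (!frame) { ... }
    // The pointer declaration and the truthiness check become an explicit null test.
    val frame = av_frame_alloc()
    if (frame == null || frame.isNull) error("could not allocate frame")

    // C: frame->width = 1920;  ->  a setter method of the same name.
    frame.width(1920)
    frame.height(1080)
    frame.format(AV_PIX_FMT_YUV420P)

    // C: av_frame_free(&frame);  ->  the address-of operator is simply dropped.
    av_frame_free(frame)
}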
Note that I omitted all error handling code here. It would be needed in real development/production.
I really don't want to advertise, but using PKZIP and its SDK may be a good solution. PKZIP compresses files by up to 95%, as they say.
The Smartcrypt SDK is available in all major programming languages, including C++, Java, and C#, and can be used to encrypt both structured and unstructured data. Changes to existing applications typically consist of two or three lines of code.
Heading
I am converting my Android app into an iOS app using Swift 2.0 and a Parse backend. I would just like to know the equivalent of this code:
Code
InputStream rawData = (InputStream) new URL(https_url).getContent();
Bitmap UniqueQRCode = BitmapFactory.decodeStream(rawData);
ByteArrayOutputStream stream = new ByteArrayOutputStream();
// Compress image to lower quality scale 1 - 100
UniqueQRCode.compress(Bitmap.CompressFormat.PNG, 100, stream);
byte[] image = stream.toByteArray();
It is better to do an asynchronous call on iOS. That will lead to more responsive applications.
Here is a simple example to download an image from the web and display it in a UIImageView:
class ViewController: UIViewController {

    @IBOutlet weak var imageView: UIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()
        if let url = NSURL(string: "https://mozorg.cdn.mozilla.net/media/img/firefox/new/header-firefox-high-res.d121992bf56c.png") {
            let task = NSURLSession.sharedSession().dataTaskWithRequest(NSURLRequest(URL: url)) { (data, response, error) -> Void in
                if error == nil && data != nil { // Needs better error handling based on what your server returns
                    if let image = UIImage(data: data!) {
                        // UI updates must happen on the main queue
                        dispatch_async(dispatch_get_main_queue()) {
                            self.imageView.image = image
                        }
                    }
                }
            }
            task.resume()
        }
    }
}
If you run this on iOS 9 then you do need to set NSAllowsArbitraryLoads to YES in the application's Info.plist. There are lots of postings about that in more detail.
I have a whole bunch of images and need to filter out all that have human faces in them. Is there such a Java library that provides a single method that takes in an image as input and outputs yes or no?
You can do face detection with JavaCV. JavaCV is a Java wrapper for OpenCV. It doesn't provide true/false directly, but rather the location of the face in the picture. You could do something like:
// Imports assume the bytedeco JavaCV presets (older, OpenCV 2.x-style C API).
import static org.bytedeco.javacpp.opencv_core.*;
import static org.bytedeco.javacpp.opencv_objdetect.*;

public class FaceDetect {
    // Create memory for calculations
    CvMemStorage storage = null;
    // Create a new Haar classifier
    CvHaarClassifierCascade classifier = null;
    // List of classifiers
    String[] classifierName = {
            "./classifiers/haarcascade_frontalface_alt.xml",
            "./classifiers/haarcascade_frontalface_alt2.xml",
            "./classifiers/haarcascade_profileface.xml" };

    public FaceDetect() {
        // Allocate the memory storage
        storage = CvMemStorage.create();
        // Load the HaarClassifierCascade
        classifier = new CvHaarClassifierCascade(cvLoad(classifierName[0]));
        // Make sure the cascade is loaded
        if (classifier.isNull()) {
            System.err.println("Error loading classifier file");
        }
    }

    // 'Image' is the caller's own wrapper; getImage() is assumed to yield an OpenCV image.
    public boolean find(Image value) {
        // Clear the memory storage which was used before
        cvClearMemStorage(storage);
        if (!classifier.isNull()) {
            // Detect the objects and store them in the sequence
            CvSeq faces = cvHaarDetectObjects(value.getImage(), classifier,
                    storage, 1.1, 3, CV_HAAR_DO_CANNY_PRUNING);
            // Get the number of faces found.
            int total = faces.total();
            if (total > 0) {
                return true;
            }
        }
        return false;
    }
}