Webcam API: Maximizing the 720p capture - Java

I am attempting to capture a video recording through an external camera, a Logitech C922. Using Java, I can make this possible through the Webcam Capture API.
<dependency>
    <groupId>com.github.sarxos</groupId>
    <artifactId>webcam-capture</artifactId>
    <version>0.3.10</version>
</dependency>
<dependency>
    <groupId>xuggle</groupId>
    <artifactId>xuggle-xuggler</artifactId>
    <version>5.4</version>
</dependency>
However, for the life of me, I cannot make it record at 60 FPS. The video randomly stutters when stored, and is not smooth at all.
I can connect to the camera using the following code:
final List<Webcam> webcams = Webcam.getWebcams();
for (final Webcam cam : webcams) {
    if (cam.getName().contains("C922")) {
        System.out.println("### Logitech C922 cam found");
        webcam = cam;
        break;
    }
}
I set the size of the cam to the following:
final Dimension[] nonStandardResolutions = new Dimension[] { WebcamResolution.HD720.getSize() };
webcam.setCustomViewSizes(nonStandardResolutions);
webcam.setViewSize(WebcamResolution.HD720.getSize());
webcam.open(true);
And then I capture the images:
while (continueRecording) {
    // capture the webcam image
    final BufferedImage webcamImage = ConverterFactory.convertToType(webcam.getImage(),
            BufferedImage.TYPE_3BYTE_BGR);
    final Date timeOfCapture = new Date();
    // convert the image and store it
    final IConverter converter = ConverterFactory.createConverter(webcamImage, IPixelFormat.Type.YUV420P);
    final IVideoPicture frame = converter.toPicture(webcamImage,
            (System.currentTimeMillis() - start) * 1000);
    frame.setKeyFrame(false);
    frame.setQuality(0);
    writer.encodeVideo(0, frame);
}
My writer is defined as follows:
final Dimension size = WebcamResolution.HD720.getSize();
final IMediaWriter writer = ToolFactory.makeWriter(videoFile.getName());
writer.addVideoStream(0, 0, ICodec.ID.CODEC_ID_H264, size.width, size.height);
I am honestly not sure what in my code could be causing this. If I lower the resolution to 480p, I get no problems. Could the issue be with the codec I am using?

As some of the comments mentioned, introducing queues does solve the problem. Here is the general logic to perform the needed steps. Note that I've set up my code for a lower resolution, since it allows me to capture at 100 FPS. Adjust as needed.
A class that wires together the image capture and the image processing:
public class WebcamRecorder {

    final Dimension size = WebcamResolution.QVGA.getSize();
    final Stopper stopper = new Stopper();

    public void startRecording() throws Exception {
        final Webcam webcam = Webcam.getDefault();
        webcam.setViewSize(size);
        webcam.open(true);
        final BlockingQueue<CapturedFrame> queue = new LinkedBlockingQueue<CapturedFrame>();
        final Thread recordingThread = new Thread(new RecordingThread(queue, webcam, stopper));
        final Thread imageProcessingThread = new Thread(new ImageProcessingThread(queue, size));
        recordingThread.start();
        imageProcessingThread.start();
    }

    public void stopRecording() {
        stopper.setStop(true);
    }
}
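The Stopper class is not shown in the answer; given the setStop and isStop calls above, a minimal sketch would be a thread-safe boolean holder along these lines:

public class Stopper {

    // volatile so the recording thread sees the update made from stopRecording()
    private volatile boolean stop = false;

    public boolean isStop() {
        return stop;
    }

    public void setStop(final boolean stop) {
        this.stop = stop;
    }
}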
RecordingThread:
public void run() {
    try {
        System.out.println("## capturing images began");
        while (true) {
            final BufferedImage webcamImage = ConverterFactory.convertToType(webcam.getImage(),
                    BufferedImage.TYPE_3BYTE_BGR);
            final Date timeOfCapture = new Date();
            queue.put(new CapturedFrame(webcamImage, timeOfCapture, false));
            if (stopper.isStop()) {
                System.out.println("### signal to stop capturing images received");
                queue.put(new CapturedFrame(null, null, true));
                break;
            }
        }
    } catch (InterruptedException e) {
        System.out.println("### threading issues during recording:: " + e.getMessage());
    } finally {
        System.out.println("## capturing images end");
        if (webcam.isOpen()) {
            webcam.close();
        }
    }
}
ImageProcessingThread:
public void run() {
    writer.addVideoStream(0, 0, ICodec.ID.CODEC_ID_H264, size.width, size.height);
    try {
        int frameIdx = 0;
        final long start = System.currentTimeMillis();
        while (true) {
            final CapturedFrame capturedFrame = queue.take();
            if (capturedFrame.isEnd()) {
                break;
            }
            final BufferedImage webcamImage = capturedFrame.getImage();
            // convert the image and store it
            final IConverter converter = ConverterFactory.createConverter(webcamImage, IPixelFormat.Type.YUV420P);
            final long end = System.currentTimeMillis();
            final IVideoPicture frame = converter.toPicture(webcamImage, (end - start) * 1000);
            frame.setKeyFrame(frameIdx++ == 0);
            frame.setQuality(0);
            writer.encodeVideo(0, frame);
        }
    } catch (final InterruptedException e) {
        System.out.println("### threading issues during image processing:: " + e.getMessage());
    } finally {
        if (writer != null) {
            writer.close();
        }
    }
}
The way it works is pretty simple. The WebcamRecorder class creates a queue that is shared between the video capture and the image processing. The RecordingThread sends BufferedImages to the queue (in my case, wrapped in a POJO called CapturedFrame, which holds a BufferedImage). The ImageProcessingThread listens and pulls data from the queue. The loop only ends once it receives the signal that writing should stop.
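The CapturedFrame POJO is also not shown; given how it is used by the two threads, a minimal sketch could look like this (the accessor names match the calls above, the rest is an assumption):

import java.awt.image.BufferedImage;
import java.util.Date;

public class CapturedFrame {

    private final BufferedImage image;
    private final Date timeOfCapture;
    private final boolean end; // true only for the poison-pill frame that stops the consumer

    public CapturedFrame(final BufferedImage image, final Date timeOfCapture, final boolean end) {
        this.image = image;
        this.timeOfCapture = timeOfCapture;
        this.end = end;
    }

    public BufferedImage getImage() {
        return image;
    }

    public Date getTimeOfCapture() {
        return timeOfCapture;
    }

    public boolean isEnd() {
        return end;
    }
}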

Related

javacv merging image and mp3 creates an mp4 longer than the original mp3

I am using the following code to merge an mp3 and an image
IplImage ipl = cvLoadImage(path2ImageFile);
int height = ipl.height();
int width = ipl.width();
if (height % 2 != 0) height = height + 1;
if (width % 2 != 0) width = width + 1;
OpenCVFrameConverter.ToIplImage grabberConverter = new OpenCVFrameConverter.ToIplImage();
FFmpegFrameRecorder recorder = new FFmpegFrameRecorder(path2OutputFile, width, height);
FrameGrabber audioFileGrabber = new FFmpegFrameGrabber(path2AudioFile);
try
{
    audioFileGrabber.start();
    recorder.setVideoCodec(avcodec.AV_CODEC_ID_H264); //AV_CODEC_ID_VORBIS
    recorder.setAudioCodec(avcodec.AV_CODEC_ID_AAC); //AV_CODEC_ID_MP3 //AV_CODEC_ID_AAC
    recorder.setFormat("mp4");
    recorder.setAudioChannels(2);
    recorder.start();
    Frame frame = null;
    //audioFileGrabber.getFrameNumber()
    while ((frame = audioFileGrabber.grabFrame()) != null)
    {
        recorder.record(grabberConverter.convert(ipl));
        recorder.record(frame);
    }
    recorder.stop();
    audioFileGrabber.stop();
}
catch (Exception e)
{
    e.printStackTrace();
}
This produces an mp4, but it is about a minute and a half longer than the mp3.
Why does this happen?
Is there any way I can fix this?
This was a nice issue to solve. IMO, the issue comes down to two things:
The frame rate of the video needs to be the same as that of the audio, so that there is no increase in duration.
The audio frames should be written at their original timestamps.
Below is the final code which works for me:
import org.bytedeco.ffmpeg.global.*;
import org.bytedeco.javacv.*;

public class AudioExample {

    public static void main(String[] args) {
        String path2ImageFile = "WhatsApp Image 2018-12-29 at 21.54.58.jpeg";
        String path2OutputFile = "test.mp4";
        String path2AudioFile = "GMV_M_Brice_English_Greeting-Menus.mp3";
        FFmpegFrameGrabber imageFileGrabber = new FFmpegFrameGrabber(path2ImageFile);
        FFmpegFrameGrabber audioFileGrabber = new FFmpegFrameGrabber(path2AudioFile);
        try {
            audioFileGrabber.start();
            imageFileGrabber.start();
            FFmpegFrameRecorder recorder = new FFmpegFrameRecorder(path2OutputFile,
                    imageFileGrabber.getImageWidth(), imageFileGrabber.getImageHeight());
            recorder.setVideoCodec(avcodec.AV_CODEC_ID_H264); //AV_CODEC_ID_VORBIS
            recorder.setAudioCodec(avcodec.AV_CODEC_ID_AAC); //AV_CODEC_ID_MP3 //AV_CODEC_ID_AAC
            recorder.setSampleRate(audioFileGrabber.getSampleRate());
            recorder.setFrameRate(audioFileGrabber.getAudioFrameRate());
            recorder.setVideoQuality(1);
            recorder.setFormat("mp4");
            recorder.setAudioChannels(2);
            recorder.start();
            Frame frame = null;
            Frame imageFrame = imageFileGrabber.grab();
            while ((frame = audioFileGrabber.grab()) != null) {
                recorder.setTimestamp(frame.timestamp);
                recorder.record(imageFrame);
                recorder.record(frame);
            }
            recorder.stop();
            audioFileGrabber.stop();
            imageFileGrabber.stop();
        } catch (FrameRecorder.Exception | FrameGrabber.Exception e) {
            e.printStackTrace();
        }
    }
}
I removed the OpenCV image loading, since we can use another frame grabber to get the image as a frame as well. You can also see that the generated audio is the same length as the mp3.
Attempt #2
Since the first attempt used the same frame rate for the video as for the audio, which was causing some issues, I thought of fixing the video frame rate first and then interleaving the audio on top:
import org.bytedeco.ffmpeg.global.*;
import org.bytedeco.javacv.*;

public class AudioExample {

    public static void main(String[] args) {
        String path2ImageFile = "/Users/tarunlalwani/Downloads/WhatsApp Image 2018-12-29 at 21.54.58.jpeg";
        String path2OutputFile = "/Users/tarunlalwani/Downloads/test.mp4";
        String path2AudioFile = "/Users/tarunlalwani/Downloads/GMV_M_Brice_English_Greeting-Menus.mp3";
        FFmpegFrameGrabber imageFileGrabber = new FFmpegFrameGrabber(path2ImageFile);
        FFmpegFrameGrabber audioFileGrabber = new FFmpegFrameGrabber(path2AudioFile);
        try {
            audioFileGrabber.start();
            imageFileGrabber.start();
            FFmpegFrameRecorder recorder = new FFmpegFrameRecorder(path2OutputFile,
                    imageFileGrabber.getImageWidth(), imageFileGrabber.getImageHeight());
            recorder.setVideoCodec(avcodec.AV_CODEC_ID_H264); //AV_CODEC_ID_VORBIS
            recorder.setAudioCodec(avcodec.AV_CODEC_ID_AAC); //AV_CODEC_ID_MP3 //AV_CODEC_ID_AAC
            recorder.setSampleRate(audioFileGrabber.getSampleRate());
            int frameRate = 30;
            recorder.setFrameRate(frameRate);
            recorder.setFormat("mp4");
            recorder.setAudioChannels(2);
            recorder.start();
            Frame frame = null;
            Frame imageFrame = imageFileGrabber.grab();
            int frames = (int) ((audioFileGrabber.getLengthInTime() / 1000000f) * frameRate) + 1;
            int i = 0;
            // create the video using the fixed image and frame rate
            while (i++ < frames) {
                recorder.record(imageFrame);
            }
            // add audio at the given timestamps
            while ((frame = audioFileGrabber.grabFrame()) != null) {
                recorder.setTimestamp(frame.timestamp);
                recorder.record(frame);
            }
            recorder.stop();
            audioFileGrabber.stop();
            imageFileGrabber.stop();
        } catch (FrameRecorder.Exception | FrameGrabber.Exception e) {
            e.printStackTrace();
        }
    }
}

How to capture an RTSP livestream via Java?

I know there are several posts handling this problem here on Stack Overflow, but I didn't get a solution for my problem. I have an IP cam and I want to capture its livestream using Java. The problem is that I can only access the cam via RTSP. There is no normal http://xxx.xxx.xxx:xx access.
I use the OpenCV library, but unfortunately my program only works with the webcam of my laptop, because it is the default device.
protected void init()
{
    this.capture = new VideoCapture();
    this.faceCascade = new CascadeClassifier();
    this.absoluteFaceSize = 0;
}

/**
 * The action triggered by pushing the button on the GUI
 */
@FXML
protected void startCamera()
{
    // set a fixed width for the frame
    originalFrame.setFitWidth(600);
    // preserve image ratio
    originalFrame.setPreserveRatio(true);
    if (!this.cameraActive)
    {
        // disable setting checkboxes
        this.haarClassifier.setDisable(true);
        this.lbpClassifier.setDisable(true);
        // start the video capture
        //String URLName = "rtsp://username:password@rtsp:554/h264/ch1/main/av_stream";
        this.capture.open("rtsp://192.168.178.161:554/onvif1");
        // is the video stream available?
        if (this.capture.isOpened())
        {
            this.cameraActive = true;
            // grab a frame every 33 ms (30 frames/sec)
            Runnable frameGrabber = new Runnable() {
                @Override
                public void run()
                {
                    Image imageToShow = grabFrame();
                    originalFrame.setImage(imageToShow);
                }
            };
            this.timer = Executors.newSingleThreadScheduledExecutor();
            this.timer.scheduleAtFixedRate(frameGrabber, 0, 33, TimeUnit.MILLISECONDS);
            // update the button content
            this.cameraButton.setText("Stop Camera");
        }
        else
        {
            // log the error
            System.err.println("Failed to open the camera connection...");
        }
    }
}
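No answer is included in this excerpt, but one commonly suggested alternative for RTSP sources is JavaCV's FFmpeg-backed grabber (the same library used in the javacv question above), which does not depend on OpenCV finding a default device. A minimal sketch, reusing the stream URL from the question; everything else here is an assumption, not the original poster's code:

import org.bytedeco.javacv.FFmpegFrameGrabber;
import org.bytedeco.javacv.Frame;

public class RtspCapture {

    public static void main(String[] args) throws Exception {
        // stream URL taken from the question above
        FFmpegFrameGrabber grabber = new FFmpegFrameGrabber("rtsp://192.168.178.161:554/onvif1");
        grabber.setOption("rtsp_transport", "tcp"); // assumption: TCP is often more reliable than UDP here
        grabber.start();
        Frame frame;
        while ((frame = grabber.grab()) != null) {
            if (frame.image != null) {
                // hand the decoded video frame to a converter/renderer here,
                // e.g. a Java2DFrameConverter for Swing or JavaFXFrameConverter for JavaFX
            }
        }
        grabber.stop();
        grabber.release();
    }
}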

Multiprocessing using BlockingQueue with Android & OpenCV

I am trying to increase my frame rate using OpenCV for Android. While I get the process to run, I do not notice any change in my frame rate. I am wondering if there is something wrong with my logic. Also, after running for about 30 seconds it falls over with a buffer error.
What I attempted to do was use the main thread for the I/O display of the video and a second thread for object detection.
My main activity is the producer; it implements CvCameraViewListener2 and puts each frame on the blocking queue IN and takes frames off the blocking queue OUT.
I then have a Consumer runnable with the processing logic. This takes frames off the IN blocking queue, processes them and then puts them on the OUT queue.
public final class CameraActivity extends FragmentActivity
        implements CvCameraViewListener2 {

    Consumer consumer1;
    private BlockingQueue<Mat> inFrames = new LinkedBlockingQueue<Mat>(11);
    private BlockingQueue<Mat> outFrames = new LinkedBlockingQueue<Mat>(11);

    @Override
    public void onCameraViewStarted(final int width,
            final int height) {
        consumer1 = new Consumer(inFrames, outFrames);
        Thread thread1 = new Thread(consumer1);
        thread1.start();
    }

    @Override
    public void onCameraViewStopped() {
        consumer1.stopRunning();
        consumer2.stopRunning();
    }

    @Override
    public Mat onCameraFrame(final CvCameraViewFrame inputFrame) {
        final Mat rgba = inputFrame.rgba();
        // This is the producer of the blocking queue
        try {
            inFrames.put(inputFrame.rgba());
        } catch (InterruptedException e) {
        }
        try {
            return outFrames.take();
        } catch (InterruptedException e) {
            return rgba;
        }
    }
}
Consumer class:
public class Consumer implements Runnable {

    private final BlockingQueue<Mat> queueIn;
    private final BlockingQueue<Mat> queueOut;
    private boolean isRunning;

    public Consumer(BlockingQueue qIn, BlockingQueue qOut) {
        queueIn = qIn;
        queueOut = qOut;
        isRunning = true;
    }

    @Override
    public void run() {
        try {
            while (isRunning) {
                consume(queueIn.take());
            }
        } catch (InterruptedException ex) {
        }
    }

    void consume(Mat src) {
        Mat mIntermediateMat = new Mat(src.rows(), src.cols(), CvType.CV_8UC1);
        Mat dst = new Mat(src.size(), CvType.CV_8UC3);
        Mat mHsv = new Mat(src.size(), CvType.CV_8UC3);
        Mat mHsv2 = new Mat(src.size(), CvType.CV_8UC3);
        src.copyTo(dst);
        // Convert to HSV
        Imgproc.cvtColor(src, mHsv, Imgproc.COLOR_RGB2HSV, 3);
        // Remove all colors except the reds
        Core.inRange(mHsv, new Scalar(0, 86, 72), new Scalar(39, 255, 255), mHsv);
        Core.inRange(mHsv, new Scalar(150, 125, 100), new Scalar(180, 255, 255), mHsv2);
        Core.bitwise_or(mHsv, mHsv2, mHsv);
        // Reduce the noise so we avoid false circle detection
        Imgproc.GaussianBlur(mHsv, mHsv, new Size(7, 7), 2);
        // Find circles
        Imgproc.HoughCircles(mHsv, mIntermediateMat, Imgproc.CV_HOUGH_GRADIENT, 2.0, 100);
        // Find the largest circle
        int maxRadious = 0;
        Point pt = new Point(0, 0);
        if (mIntermediateMat.cols() > 0) {
            for (int x = 0; x < mIntermediateMat.cols(); x++) {
                double vCircle[] = mIntermediateMat.get(0, x);
                if (vCircle == null)
                    break;
                int radius = (int) Math.round(vCircle[2]);
                if (radius > maxRadious) {
                    maxRadious = radius;
                    pt = new Point(Math.round(vCircle[0]), Math.round(vCircle[1]));
                }
            }
            // Draw the largest circle in red
            int iLineThickness = 5;
            Scalar red = new Scalar(255, 0, 0);
            // draw the found circle (note: dst is only queued when a circle candidate exists)
            Core.circle(dst, pt, maxRadious, red, iLineThickness);
            try {
                queueOut.put(dst);
            } catch (InterruptedException e) {
            }
        }
    }

    public void stopRunning() {
        this.isRunning = false;
    }
}
This is the error I get after about 30 seconds
01-16 14:20:23.358 510-586/? E/InputDispatcher﹕ channel '428188e8 my.com/my.com.CameraActivity (server)' ~ Channel is unrecoverably broken and will be disposed!
01-16 14:20:23.508 10788-24667/? E/Surface﹕ queueBuffer: error queuing buffer to SurfaceTexture, -32
01-16 14:20:23.508 10788-24667/? E/NvOmxCamera﹕ Queue Buffer Failed. Skipping buffer.
01-16 14:20:23.508 10788-24667/? E/NvOmxCamera﹕ Dequeue Buffer Failed
01-16 14:20:23.578 10788-17442/? E/NvOmxCamera﹕ Already called release()
I would like to add that I'm running on a Nexus 7 (2012), which is quad-core.
Just a thought but could you try making a clone of the input rgba at this line:
inFrames.put(inputFrame.rgba().clone());
as you are adding and holding a reference to the last captured image in the queue and I think CvCamera might get upset if it can't release and reallocate the space; that might explain the crash.
In terms of speeding things up, presumably on a single-CPU device you will only gain speed if you start to skip processing of some of the frames that you are taking off the queue.
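A minimal sketch of that frame-skipping idea, as a replacement run() for the Consumer in the question (the backlog list and the drainTo call are my additions, not the original poster's code; it needs java.util.ArrayList and java.util.List imports). Note that onCameraFrame would then also have to stop blocking on outFrames.take() for every input, e.g. by using outFrames.poll() with the raw rgba as a fallback, since fewer frames come back out than go in:

@Override
public void run() {
    final List<Mat> backlog = new ArrayList<Mat>();
    try {
        while (isRunning) {
            Mat latest = queueIn.take();   // block until at least one frame arrives
            backlog.clear();
            queueIn.drainTo(backlog);      // pull everything else that queued up meanwhile
            if (!backlog.isEmpty()) {
                // skip the stale frames and only process the newest one
                latest = backlog.get(backlog.size() - 1);
            }
            consume(latest);
        }
    } catch (InterruptedException ex) {
    }
}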

How to refresh web-sourced images in a GUI

I've created a simple GUI application in Java that displays certain images sourced from the web in different sections.
What I'm having trouble doing is refreshing the images after a certain period of time (specifically, 40 seconds). I need the application to fetch each image from the web again and place it in one of the three JPanels every 40 seconds, for as long as the app is running.
This is the code used to display the images when the application starts up:
try {
    BufferedImage cameraOne = ImageIO.read(new URL("IMAGE LINK HERE"));
    BufferedImage cameraTwo = ImageIO.read(new URL("IMAGE LINK HERE"));
    BufferedImage cameraThree = ImageIO.read(new URL("IMAGE LINK HERE"));
    cPanel1 = new CameraPanel(new ImageIcon(cameraOne).getImage());
    cPanel2 = new CameraPanel(new ImageIcon(cameraTwo).getImage());
    cPanel3 = new CameraPanel(new ImageIcon(cameraThree).getImage());
} catch (MalformedURLException ex) {
    throw new Error("One of the cameras you are trying to connect to doesn't exist!");
} catch (IOException ex) {
    throw new Error("CCTV couldn't connect to the required camera feeds!");
}
This is the code I'm using to attempt to schedule the image refreshes:
final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
final Runnable refreshTask = new Runnable() {
    public void run() {
        // I need help here
    }
};
final ScheduledFuture<?> refreshHandler = scheduler.scheduleAtFixedRate(refreshTask, 10, 40, TimeUnit.SECONDS);
This is the CameraPanel class, as seen in the code used to display the images when the application starts up:
class CameraPanel extends JPanel {

    private Image img;

    public CameraPanel(BufferedImage cameraImage) {
        this(new ImageIcon(cameraImage).getImage());
    }

    public CameraPanel(Image img) {
        this.img = img;
        Dimension size = new Dimension(img.getWidth(null), img.getHeight(null));
        setPreferredSize(size);
        setMinimumSize(size);
        setMaximumSize(size);
        setSize(size);
        setLayout(null);
    }

    public void paintComponent(Graphics g) {
        g.drawImage(img, 0, 0, null);
    }
}
Any solutions? Thanks.
The full class I've created can be found at http://pastebin.com/nRNHHkfx
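No answer is included in this excerpt, but a minimal sketch of what the refreshTask body could do is below. It assumes a hypothetical setImage(Image) method on CameraPanel that stores the new image and calls repaint(); the network I/O stays on the scheduler thread, and only the Swing update is pushed onto the EDT via SwingUtilities.invokeLater:

final Runnable refreshTask = new Runnable() {
    public void run() {
        try {
            // re-fetch the images off the EDT, so the UI never blocks on the network
            final BufferedImage cameraOne = ImageIO.read(new URL("IMAGE LINK HERE"));
            final BufferedImage cameraTwo = ImageIO.read(new URL("IMAGE LINK HERE"));
            final BufferedImage cameraThree = ImageIO.read(new URL("IMAGE LINK HERE"));
            SwingUtilities.invokeLater(new Runnable() {
                public void run() {
                    // setImage is a hypothetical setter: store the image, then repaint()
                    cPanel1.setImage(cameraOne);
                    cPanel2.setImage(cameraTwo);
                    cPanel3.setImage(cameraThree);
                }
            });
        } catch (IOException ex) {
            System.err.println("Refresh failed: " + ex.getMessage());
        }
    }
};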

Capturing frames from a video file without playing it on screen

I am a beginner in the Java programming language. Recently I was given the task of capturing frames from a video file, and I have developed a program that does so, but only while the video is being played on screen in some player.
I developed the following program to do this:
public class Beginning implements Runnable {

    private Thread thread;
    private static long counter = 0;
    private final int FRAME_CAPTURE_RATE = 124;
    private Robot robot;

    public Beginning() throws Exception {
        robot = new Robot();
        thread = new Thread(this);
        thread.start();
    }

    public static void main(String[] args) throws Exception {
        Beginning beginning = new Beginning();
    }

    public void run() {
        for (;;) {
            try {
                Rectangle screenRect = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
                BufferedImage bufferedImage = robot.createScreenCapture(screenRect);
                ImageIO.write(bufferedImage, "png", new File("D:\\CapturedFrame\\toolImage" + counter + ".png"));
                counter++;
                Thread.sleep(FRAME_CAPTURE_RATE);
            } catch (Exception e) {
                System.err.println("Something fishy is going on...");
            }
        }
    }
}
My sir has told me to capture all the frames from any specified video without playing it on screen. Can anyone please suggest how I can do so?
Well, you can simply use OpenCV. Here is an example of it.
https://www.tutorialspoint.com/opencv/opencv_using_camera.htm
You can use any video in place of the camera in the VideoCapture class as:
VideoCapture capture = new VideoCapture(0);
instead of the above line of code, you can use
VideoCapture capture = new VideoCapture("/location/of/video/");
I hope this is what you are looking for.
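Putting that together, a minimal sketch of extracting every frame without any on-screen playback might look like the following. The file paths are placeholders, and it assumes the OpenCV native library is available on java.library.path:

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.videoio.VideoCapture;

public class FrameExtractor {

    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME); // load the OpenCV native library
        VideoCapture capture = new VideoCapture("/location/of/video.mp4");
        Mat frame = new Mat();
        long counter = 0;
        // read() decodes frames straight from the file; nothing is rendered on screen
        while (capture.read(frame)) {
            Imgcodecs.imwrite("D:\\CapturedFrame\\toolImage" + counter + ".png", frame);
            counter++;
        }
        capture.release();
    }
}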
