I have a grayscale .mkv video that I want to open with OpenCV in Java, but I get the following errors:
With return new VideoCapture(path, Videoio.CAP_FFMPEG);
Errors:
[ERROR:0@0.004] global /build/opencv/modules/videoio/src/cap_ffmpeg_impl.hpp (1108) open Could not find decoder for codec_id=61
[ERROR:0@0.004] global /build/opencv/modules/videoio/src/cap_ffmpeg_impl.hpp (1140) open VIDEOIO/FFMPEG: Failed to initialize VideoCapture
With return new VideoCapture(path, Videoio.CAP_DSHOW); there are no errors, but
video.isOpened() returns false.
With return new VideoCapture(path);
Errors:
[ERROR:0@0.005] global /build/opencv/modules/videoio/src/cap_ffmpeg_impl.hpp (1108) open Could not find decoder for codec_id=61
[ERROR:0@0.005] global /build/opencv/modules/videoio/src/cap_ffmpeg_impl.hpp (1140) open VIDEOIO/FFMPEG: Failed to initialize VideoCapture
[ WARN:0@0.122] global C:\build\master_winpack-bindings-win64-vc14-static\opencv\modules\videoio\src\cap_msmf.cpp (923) CvCapture_MSMF::initStream Failed to set mediaType (stream 0, (480x360 @ 1) MFVideoFormat_RGB24(codec not found)
I have installed OpenCV and added it as a dependency by following this video.
I have also tried adding ...\opencv\build\bin\opencv_videoio_ffmpeg455_64.dll to the native libraries, and also tried using this: System.load("path\\to\\opencv\\build\\bin\\opencv_videoio_ffmpeg455_64.dll");.
Full code:
public class Test {
    static {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        System.load("path\\to\\opencv\\build\\bin\\opencv_videoio_ffmpeg455_64.dll");
    }

    public static void main(String[] args) {
        List<Mat> frames = getVideoFrames(openVideoFile(args[0]));
        System.out.println(frames.size());
    }
}
// ... different class
public static VideoCapture openVideoFile(String path) {
    return new VideoCapture(path);
}

public static List<Mat> getVideoFrames(VideoCapture video) {
    List<Mat> frames = new ArrayList<>();
    Mat frame = new Mat();
    if (video.isOpened()) {
        while (video.read(frame)) {
            frames.add(frame.clone()); // clone: read() reuses the same Mat buffer
        }
        video.release();
    }
    return frames;
}
ffprobe result:
  Metadata:
    MAJOR_BRAND      : qt
    MINOR_VERSION    : 512
    COMPATIBLE_BRANDS: qt
    ENCODER          : Lavf56.40.101
  Duration: 00:01:05.83, start: 0.000000, bitrate: 1511 kb/s
  Stream #0:0(eng): Video: png (MPNG / 0x474E504D), rgb24(pc), 480x360 [SAR 1:1 DAR 4:3], 6 fps, 6 tbr, 1k tbn (default)
    Metadata:
      LANGUAGE        : eng
      HANDLER_NAME    : DataHandler
      ENCODER         : Lavc56.60.100 png
      DURATION        : 00:01:05.834000000
The error message indicates that OpenCV's FFmpeg plugin is not built with MPNG codec support. So you are essentially SOL if you want to get this task done with OpenCV alone (you can request that OpenCV support the codec, but it won't be a quick adaptation even if you succeed in convincing their devs to do so). Here are a couple of things I can think of as a non-Java/non-OpenCV person (I typically deal with Python/FFmpeg):
1a) If you have a control of the upstream of your data, change the video codec from MPNG to one with OpenCV support.
1b) Transcode the MKV file within your program to re-encode the video stream with a supported codec.
2) Run FFmpeg from Java as a subprocess on a separate thread (I think ProcessBuilder is the class you are interested in) and load the video data via the stdout pipe. FFmpeg can be called in the following manner:
ffmpeg -i <video_path> -f rawvideo -pix_fmt gray -an -
The stdout pipe will receive 480x360 bytes per frame. Read as many frames as you need at a time. If you need to limit which part of the video is decoded, you have to specify it in seconds using the -ss, -t, and/or -to options.
I'm assuming this video is grayscale as you mentioned (although ffprobe indicates the video is saved in RGB format). If you need RGB instead, use -pix_fmt rgb24; each frame will then be 480x360x3 bytes.
Once you have the image data in memory, there should be an OpenCV function to create an image object from in-memory data.
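As a minimal sketch of that approach (class and method names are mine, and I assume ffmpeg is on the PATH; the command line is the one above):

```java
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.List;

public class RawVideoPipe {

    // Launches ffmpeg (assumed to be on the PATH), writing raw 8-bit
    // grayscale frames to stdout, exactly as in the command line above.
    public static Process startFfmpeg(String videoPath) throws IOException {
        return new ProcessBuilder(
                "ffmpeg", "-i", videoPath,
                "-f", "rawvideo", "-pix_fmt", "gray", "-an", "-")
                .start();
    }

    // Reads fixed-size frames (width * height bytes for gray) until EOF;
    // a partial trailing frame is discarded.
    public static List<byte[]> readFrames(InputStream stdout, int frameSize) throws IOException {
        List<byte[]> frames = new ArrayList<>();
        DataInputStream in = new DataInputStream(stdout);
        byte[] frame = new byte[frameSize];
        while (true) {
            try {
                in.readFully(frame); // blocks until one whole frame arrived
            } catch (EOFException eof) {
                break;
            }
            frames.add(frame.clone()); // copy: the read buffer is reused
        }
        return frames;
    }
}
```

Each 480*360-byte array can then be wrapped into an OpenCV image, e.g. via new Mat(360, 480, CvType.CV_8UC1) followed by Mat.put(0, 0, frame) (that is the OpenCV-side function alluded to below).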
Related
The issue in this question has already been discussed elsewhere, e.g. for C++ and for Python.
The OpenCV documentation describes
ErrorCallback cv::redirectError(ErrorCallback errCallback,
                                void*  userdata = 0,
                                void** prevUserdata = 0)
How can this be used to e.g. filter out annoying messages?
An example is
[mjpeg @ 0x7fe5a696ea00] unable to decode APP fields: Invalid data found when processing input
from a Logitech USB webcam MJPEG stream, which is emitted on every single frame and is superfluous.
There is also a loglevel available. Unfortunately, the package org.opencv.utils only contains "Converters" but no logging as of OpenCV 3.4.8.
How could the loglevel be set from Java?
enum LogLevel {
LOG_LEVEL_SILENT = 0,
LOG_LEVEL_FATAL = 1,
LOG_LEVEL_ERROR = 2,
LOG_LEVEL_WARNING = 3,
LOG_LEVEL_INFO = 4,
LOG_LEVEL_DEBUG = 5,
LOG_LEVEL_VERBOSE = 6
}
Would Redirect System.out and System.err to slf4j help?
How can this be used to e.g. filter out annoying messages?
How could the loglevel be set from Java?
At this time (2020-01) it can't. Even if the API would be accessible from Java the bug https://github.com/opencv/opencv/issues/12780 would prevent it.
Would Redirect System.out and System.err to slf4j help?
No - see Junit test case below. The result is:
11:05:56.407 [main] DEBUG u.o.l.s.c.SysOutOverSLF4JInitialiser - Your logging framework class ch.qos.logback.classic.Logger should not need access to the standard println methods on the console, so you should not need to register a logging system package.
11:05:56.417 [main] INFO u.o.l.s.context.SysOutOverSLF4J - Replaced standard System.out and System.err PrintStreams with SLF4JPrintStreams
11:05:56.420 [main] INFO u.o.l.s.context.SysOutOverSLF4J - Redirected System.out and System.err to SLF4J for this context
11:05:56.421 [main] ERROR org.rcdukes.roi.TestROI - testing stderr via slf4j
[mjpeg @ 0x7f958b1b0400] unable to decode APP fields: Invalid data found when processing input
[mjpeg @ 0x7f958b027a00] unable to decode APP fields: Invalid data found when processing input
where the "unable to decode APP fields" part still shows up via some stderr magic.
@Test
public void testLogStderr() throws Exception {
    NativeLibrary.logStdErr();
    System.err.println("testing stderr via slf4j");
    NativeLibrary.load();
    VideoCapture capture = new VideoCapture();
    // Dorf Appenzell
    // String url = "http://213.193.89.202/axis-cgi/mjpg/video.cgi";
    // Logitech Cam on test car
    // url = "http://picarford:8080/?action=stream";
    File imgRoot = new File(testPath);
    File testStream = new File(imgRoot, "logitech_test_stream.mjpg");
    assertTrue(testStream.canRead());
    capture.open(testStream.getPath());
    Mat image = new Mat();
    capture.read(image);
    assertEquals(640, image.width());
    assertEquals(480, image.height());
    capture.release();
}
Funny side fact
According to the ffmpeg documentation
Log coloring can be disabled setting the environment variable AV_LOG_FORCE_NOCOLOR or NO_COLOR, or can be forced setting the environment variable AV_LOG_FORCE_COLOR. The use of the environment variable NO_COLOR is deprecated and will be dropped in a future FFmpeg version
But there seems to be no option to change the Logging level from an environment variable ...
I've written an Android app that (among other things) records videos, which are then uploaded to an AWS server.
On the server, I need to be able to determine the rotation of the videos. My research indicates that MP4Parser should be able to help me with that, but running:
FileDataSourceImpl fileDataSource = new FileDataSourceImpl("path/to/file");
IsoFile isoFile = new IsoFile(fileDataSource);
MovieBox moov = isoFile.getMovieBox();
for (Box b : moov.getBoxes()) {
    System.out.println(b);
}
on my file gives me the following output:
MovieHeaderBox[creationTime=Wed Jun 15 12:08:38 IDT 2016;modificationTime=Wed Jun 15 12:08:38 IDT 2016;timescale=1000;duration=2057;rate=1.0;volume=1.0;matrix=Rotate 0°;nextTrackId=2]
UserDataBox[]
TrackBox[]
If, within my app, I run the following Android code on the very same file:
MediaMetadataRetriever m = new MediaMetadataRetriever();
m.setDataSource("path/to/file");
String rotation = m.extractMetadata(MediaMetadataRetriever.METADATA_KEY_VIDEO_ROTATION);
System.out.println("Rotation is " + rotation);
I get:
06-15 12:07:36.741 27424-27424/... I/System.out: Rotation is 90
Why the discrepancy? And is there any pure Java (no native libraries) tool that I can use to get the same results on the server side, without the benefit of the native Android code?
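A likely explanation for the discrepancy: Android writes the 90° rotation into the track header box (trak/tkhd) matrix, while MovieHeaderBox shows the movie-level mvhd matrix, which stays at identity; MP4Parser's toString for TrackBox simply doesn't expand its children, hence the empty TrackBox[]. With MP4Parser you would drill down via moov.getBoxes(TrackBox.class) and trackBox.getTrackHeaderBox().getMatrix() (method names as in mp4parser 1.x; treat them as an assumption). As a pure-Java sketch, the first four 16.16 fixed-point entries (a, b, c, d) of that matrix map to degrees like this:

```java
public class TkhdRotation {
    // Maps the first four 16.16 fixed-point entries of a tkhd matrix
    // (a, b, c, d) to a rotation in degrees, or -1 if unrecognized.
    // A rotation by theta stores a=cos, b=sin, c=-sin, d=cos.
    public static int rotationDegrees(int a, int b, int c, int d) {
        final int ONE = 0x00010000; // 1.0 in 16.16 fixed point
        if (a == ONE  && b == 0    && c == 0    && d == ONE)  return 0;
        if (a == 0    && b == ONE  && c == -ONE && d == 0)    return 90;
        if (a == -ONE && b == 0    && c == 0    && d == -ONE) return 180;
        if (a == 0    && b == -ONE && c == ONE  && d == 0)    return 270;
        return -1; // flips or arbitrary transforms are not handled here
    }
}
```

mp4parser's Matrix class appears to offer ready-made constants (ROTATE_0, ROTATE_90, ...) you could compare against instead of decoding the fixed-point values yourself.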
Thanks,
Reuven
Is there a faster way to save plots directly to byte[], base64, or anything else that Java can read easily? I'm looking for ~1 ms or less.
This is what I have working so far, but it's too slow...
# PNG raw 50ms
library(Cairo)
library(png)
Cairo(filename="test",width=500,height=500)
plot(cars)
i = Cairo:::.image(dev.cur())
r = Cairo:::.ptr.to.raw(i$ref, 0, i$width * i$height * 4)
dim(r) = c(4, i$width, i$height)
r[c(1,3),,] = r[c(3,1),,]
p <- writePNG(r, raw())
# XML 4ms
library(svglite)
x <- xmlSVG({ plot(cars) })
So far I'm using a BufferedImage built from the R Cairo sample above.
Goal: getting plots as images into Java; the bridge is JRI (rJava).
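A hedged sketch of the Java receiving end (class and method names are mine): the PNG route decodes with ImageIO, while wrapping the raw 4-bytes-per-pixel Cairo buffer directly skips PNG encoding entirely, which is likely where most of the ~50 ms goes. The channel order of the raw buffer is an assumption; adjust it to whatever the Cairo snippet actually produces.

```java
import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import javax.imageio.ImageIO;

public class PlotBridge {

    // Decodes PNG bytes, e.g. the result of R's writePNG(r, raw()).
    public static BufferedImage fromPng(byte[] png) throws IOException {
        return ImageIO.read(new ByteArrayInputStream(png));
    }

    // Wraps raw 4-bytes-per-pixel data directly, skipping PNG entirely.
    // Assumes R, G, B, A byte order (an assumption -- see lead-in).
    public static BufferedImage fromRawRgba(byte[] rgba, int width, int height) {
        BufferedImage img = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                int i = (y * width + x) * 4;
                int argb = ((rgba[i + 3] & 0xFF) << 24) | ((rgba[i] & 0xFF) << 16)
                         | ((rgba[i + 1] & 0xFF) << 8) | (rgba[i + 2] & 0xFF);
                img.setRGB(x, y, argb);
            }
        }
        return img;
    }
}
```

If ~1 ms is the budget, the raw path is the more promising one to benchmark, since PNG compression on a 500x500 plot alone can eat most of that.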
I'm making a player that can play an MPEG-TS stream and display all its videos at once (for monitoring purposes) in one frame, using Xuggler for Java.
My problem is determining which programs (TV channels) this stream holds and which streams belong to each of them...
For example: audio stream 1 and video stream 3 belong to the program "BBC".
I already got this working for a .ts file by using MediaInfo (http://mediaarea.net/en/MediaInfo/) like so:
MediaInfo.exe -LogFile="log.txt" "some .ts file" .... which produces a log file like this:
Menu #2
ID : 1001 (0x3E9)
Menu ID : 1202 (0x4B2)
Duration : 13mn 33s
List : 2001 (0x7D1) (MPEG Video) / 3002 (0xBBA) (MPEG Audio, English)
Language : / English
Service name : NBN
Service provider : NILESAT
Service type : digital television
UTC 2006-03-28 00:00:00 : en:NBN / en:Nilesat / / / 99:00:00 / Running
and then I parsed that file in Java.
But I need to make this work for a live stream, and when I give MediaInfo a URL instead of a file it gives this error:
Libcurl library not found
I also tried VLC commands, but it turns out VLC doesn't have this option on the command line; it is only available in the GUI (show codec information)...
The player is already working and I have an extractor too... I just need this media info to work... any ideas?
EDIT: I found out that FFprobe, which is bundled with FFmpeg (http://www.ffmpeg.org/), can do the task,
but for some reason I can't read anything from its input stream.
Here's what the output looks like:
Input #0, mpegts, from 'D:\record ts\PBR_REC_20140426094852_484.ts':
  Duration: N/A, start: 6164.538011, bitrate: N/A
  Program 1201
    Metadata:
      service_name    : Arabica TV
      service_provider: Nilesat
    Stream #0:10[0x7db]: Video: mpeg2video (Main) ([2][0][0][0] / 0x0002), yuv420p(tv), 720x576 [SAR 16:15 DAR 4:3], max. 2348 kb/s, 25 fps, 25 tbr, 90k tbn, 50 tbc
    Stream #0:4[0xbcf]: Audio: mp2, 48000 Hz, stereo, s16p, 384 kb/s
  Program 1202
I tried this in Java:
try {
    ProcessBuilder processBuilder = new ProcessBuilder(
            "C:\\Users\\vlatkozelka\\Desktop\\ffmpeg-20140623-git-ca35037-win64-static\\bin\\ffprobe.exe",
            "-i", filename);
    Process process = processBuilder.start(); // start the process only once
    Scanner sc = new Scanner(process.getInputStream());
    while (sc.hasNext()) {
        System.out.println(sc.nextLine());
    }
} catch (IOException ex) {
    Logger.getLogger(ChannelDivider.class.getName()).log(Level.SEVERE, null, ex);
}
but sc.hasNext() just hangs as if there were no input.
Then I tried redirecting to a file with cmd using >, but it gave me a blank file.
However, trying both methods with ffprobe -h (the help command) does give output, which confuses me a lot: I see output in cmd but can't read it...
I just solved this, and hope someone might make use of it:
it turns out that FFprobe was writing to stderr, not stdout,
so instead of:
getInputStream()
I used:
getErrorStream()
Now all I have to do is parse that :)
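An alternative to switching streams is to merge them: ProcessBuilder.redirectErrorStream(true) folds stderr into stdout, so everything arrives on getInputStream(). A small generic helper as a sketch (the ffprobe path in the comment reuses the one from the question):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.List;

public class ProcOutput {

    // Runs a command with stderr merged into stdout and returns all lines,
    // so tools like ffprobe that log to stderr can be read from one stream.
    public static List<String> runMerged(String... command)
            throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder(command);
        pb.redirectErrorStream(true); // stderr -> same pipe as stdout
        Process p = pb.start();
        List<String> lines = new ArrayList<>();
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                lines.add(line);
            }
        }
        p.waitFor();
        return lines;
    }
    // e.g. runMerged("C:\\...\\bin\\ffprobe.exe", "-i", filename)
}
```

Merging loses the ability to tell the two streams apart, which is fine for parsing ffprobe's human-readable banner but not if you also pipe binary data through stdout.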
I'm currently using Xuggler to receive the video stream of an AR.Drone. The stream format is H.264 720p. I can decode and display the video using the following code, but the processor usage is very high (100% on dual-core 2ghz) and there is a huge delay in the stream that keeps increasing.
final IMediaReader reader = ToolFactory.makeReader("http://192.168.1.1:5555");
reader.setBufferedImageTypeToGenerate(BufferedImage.TYPE_3BYTE_BGR);

MediaListenerAdapter adapter = new MediaListenerAdapter() {
    public void onVideoPicture(IVideoPictureEvent e) {
        currentframe = e.getImage();
        // Draw frame
    }

    public void onOpenCoder(IOpenCoderEvent e) {
        videostreamopened = true;
    }
};
reader.addListener(adapter);

while (!stop) {
    try {
        reader.readPacket();
    } catch (RuntimeException re) {
        // Errors happen relatively often
    }
}
Using the Xuggler sample application resolves none of the problems, so I think my approach is correct. Also, when I decrease the resolution to 360p, the stream is real-time and everything works fine. Does anybody know whether these performance issues are normal, or what I have to do to avoid them? I am very new to this and have not been able to find information, so does anybody have suggestions?
By the way, I tried changing the bitrate without success. Calling reader.getContainer().getStream(0).getStreamCoder().setBitRate(bitrate); seems to be ignored...
Thanks in advance!
UPDATE:
I get many of these errors:
 9593 [Thread-7] ERROR org.ffmpeg - [h264 @ 0x7f12d40e53c0] mmco: unref short failure
39593 [Thread-7] ERROR org.ffmpeg - [h264 @ 0x7f12d40e53c0] number of reference frames (0+2) exceeds max (1; probably corrupt input), discarding one
39593 [Thread-15] ERROR org.ffmpeg - [h264 @ 0x7f12d40e53c0] reference overflow
39593 [Thread-15] ERROR org.ffmpeg - [h264 @ 0x7f12d40e53c0] decode_slice_header error
UPDATE 2: Changing the codec solves the above errors, but performance is still poor.