I'm currently using Xuggler to receive the video stream of an AR.Drone. The stream format is H.264 720p. I can decode and display the video using the following code, but CPU usage is very high (100% on a dual-core 2 GHz machine) and there is a huge delay in the stream that keeps increasing.
// currentframe, videostreamopened and stop are fields of the enclosing class
final IMediaReader reader = ToolFactory.makeReader("http://192.168.1.1:5555");
reader.setBufferedImageTypeToGenerate(BufferedImage.TYPE_3BYTE_BGR);

MediaListenerAdapter adapter = new MediaListenerAdapter()
{
    public void onVideoPicture(IVideoPictureEvent e)
    {
        currentframe = e.getImage();
        // Draw frame
    }

    public void onOpenCoder(IOpenCoderEvent e) {
        videostreamopened = true;
    }
};
reader.addListener(adapter);

while (!stop) {
    try {
        reader.readPacket();
    } catch (RuntimeException re) {
        // Errors happen relatively often
    }
}
Using the Xuggler sample application resolves none of the problems, so I think my approach is correct. Also, when I decrease the resolution to 360p, the stream is real-time and everything works fine. Does anybody know whether these performance issues are normal, or what I have to do to avoid them? I am very new to this and have not been able to find any information, so does anybody have suggestions?
By the way, I tried changing the bitrate without success. Calling reader.getContainer().getStream(0).getStreamCoder().setBitRate(bitrate); seems to be ignored...
Thanks in advance!
UPDATE:
I get many of these errors:
9593 [Thread-7] ERROR org.ffmpeg - [h264 @ 0x7f12d40e53c0] mmco: unref short failure
39593 [Thread-7] ERROR org.ffmpeg - [h264 @ 0x7f12d40e53c0] number of reference frames (0+2) exceeds max (1; probably corrupt input), discarding one
39593 [Thread-15] ERROR org.ffmpeg - [h264 @ 0x7f12d40e53c0] reference overflow
39593 [Thread-15] ERROR org.ffmpeg - [h264 @ 0x7f12d40e53c0] decode_slice_header error
UPDATE 2: Changing the codec solves the above errors, but performance is still poor.
Please tell me what's wrong; I'm new to working with HBase. When creating regions for HBase from a Java application, the error below occurs.
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.DoNotRetryRegionException): org.apache.hadoop.hbase.client.DoNotRetryRegionException: bc3ec95b447809887e3c198afe4d1084 is not OPEN; regionState={bc3ec95b447809887e3c198afe4d1084 state=CLOSING, ts=1651224527248, server=hbase-docker,16020,1651207907804}
Code as below:
byte[][] splits = getSplits(countSplits, countSlot);
for (byte[] byteSplit : splits) {
    byte[] regionName = admin.getRegions(tableName)
            .get(admin.getRegions(tableName).size() - 1)
            .getRegionName();
    admin.splitRegionAsync(regionName, byteSplit);
}
This code executes, but only 1 of the 20 required regions is created. After the first one is created, the error above occurs. What needs to be added? Any help is appreciated.
The problem is solved by adding a waiting time between operations.
.....
admin.splitRegionAsync(regionName, byteSplit);
Thread.sleep(30000);
.....
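For context, here is a minimal sketch of how that pause slots into the loop from the question (the 30-second value, getSplits and the region lookup are taken from the snippets above; the enclosing method is assumed to declare throws InterruptedException):

byte[][] splits = getSplits(countSplits, countSlot);
for (byte[] byteSplit : splits) {
    // Pick the most recently created region, as in the original loop
    byte[] regionName = admin.getRegions(tableName)
            .get(admin.getRegions(tableName).size() - 1)
            .getRegionName();
    admin.splitRegionAsync(regionName, byteSplit);
    // Let the previous split finish before requesting the next one;
    // otherwise the target region is still CLOSING/SPLITTING and the
    // DoNotRetryRegionException shown above is thrown
    Thread.sleep(30000);
}

Since splitRegionAsync returns a Future in the HBase 2.x Admin API, waiting on that future (for example with get()) instead of sleeping for a fixed time may be a more robust variant of the same idea, but the fixed pause above is what solved it here.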
I have a grayscale .mkv video which I want to open with OpenCV in Java, but I get the following errors:
With return new VideoCapture(path, Videoio.CAP_FFMPEG);
Errors:
[ERROR:0@0.004] global /build/opencv/modules/videoio/src/cap_ffmpeg_impl.hpp (1108) open Could not find decoder for codec_id=61
[ERROR:0@0.004] global /build/opencv/modules/videoio/src/cap_ffmpeg_impl.hpp (1140) open VIDEOIO/FFMPEG: Failed to initialize VideoCapture
With return new VideoCapture(path, Videoio.CAP_DSHOW); No errors, but
video.isOpened() is false
With return new VideoCapture(path);
Errors:
[ERROR:0@0.005] global /build/opencv/modules/videoio/src/cap_ffmpeg_impl.hpp (1108) open Could not find decoder for codec_id=61
[ERROR:0@0.005] global /build/opencv/modules/videoio/src/cap_ffmpeg_impl.hpp (1140) open VIDEOIO/FFMPEG: Failed to initialize VideoCapture
[ WARN:0@0.122] global C:\build\master_winpack-bindings-win64-vc14-static\opencv\modules\videoio\src\cap_msmf.cpp (923) CvCapture_MSMF::initStream Failed to set mediaType (stream 0, (480x360 @ 1) MFVideoFormat_RGB24(codec not found)
I have installed OpenCV and added it as a dependency using this video.
I have also tried adding ...\opencv\build\bin\opencv_videoio_ffmpeg455_64.dll to the native libraries, and also tried using this: System.load("path\\to\\opencv\\build\\bin\\opencv_videoio_ffmpeg455_64.dll");.
Full code:
public class Test {

    static {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        System.load("path\\to\\opencv\\build\\bin\\opencv_videoio_ffmpeg455_64.dll");
    }

    public static void main(String[] args) {
        List<Mat> frames = getVideoFrames(openVideoFile(args[0]));
        System.out.println(frames.size());
    }
}
//... different class
public static VideoCapture openVideoFile(String path) {
    return new VideoCapture(path);
}

public static List<Mat> getVideoFrames(VideoCapture video) {
    List<Mat> frames = new ArrayList<>();
    Mat frame = new Mat();
    if (video.isOpened()) {
        while (video.read(frame)) {
            frames.add(frame.clone()); // clone: read() reuses the same Mat for every frame
        }
        video.release();
    }
    return frames;
}
ffprobe result:
Metadata:
MAJOR_BRAND : qt
MINOR_VERSION : 512
COMPATIBLE_BRANDS: qt
ENCODER : Lavf56.40.101
Duration: 00:01:05.83, start: 0.000000, bitrate: 1511 kb/s
Stream #0:0(eng): Video: png (MPNG / 0x474E504D), rgb24(pc), 480x360 [SAR 1:1 DAR 4:3], 6 fps, 6 tbr, 1k tbn (default)
Metadata:
LANGUAGE : eng
HANDLER_NAME : DataHandler
ENCODER : Lavc56.60.100 png
DURATION : 00:01:05.834000000
The error message indicates that OpenCV's FFmpeg plugin is not built with MPNG codec support, so you are essentially out of luck getting this done with OpenCV alone (you can ask OpenCV to support the codec, but it won't be a quick addition even if you manage to convince their devs). Here are a couple of things I can think of as a non-Java/non-OpenCV person (I typically deal with Python/FFmpeg):
1a) If you have control of the upstream of your data, change the video codec from MPNG to one with OpenCV support.
1b) Transcode the MKV file within your program, re-encoding the video stream with a supported codec.
2) Create a thread and call ffmpeg from Java as a subprocess (I think ProcessBuilder is the class you are interested in) and load the video data via the stdout pipe. FFmpeg can be called in the following manner:
ffmpeg -i <video_path> -f rawvideo -pix_fmt gray -an -
The stdout pipe will receive 480x360 bytes per frame; read as many frames as you need at a time. If you need to limit which part of the video is decoded, specify it in seconds using the -ss, -t, and/or -to options.
I'm assuming the video content is grayscale as you mentioned (ffprobe indicates it is stored in RGB format). If you need RGB instead, use -pix_fmt rgb24 and expect three bytes per pixel in the frame data.
Once you have the image data in memory, there should be an OpenCV function to create an image object from in-memory data.
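To make the subprocess idea concrete, here is a minimal, untested Java sketch of that approach. The ffmpeg executable being on the PATH, the fixed 480x360 grayscale frame size, and the choice to collect each frame into an OpenCV Mat are assumptions taken from the discussion above, not a definitive implementation:

import java.io.DataInputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.opencv.core.CvType;
import org.opencv.core.Mat;

public class FfmpegPipeReader {
    public static List<Mat> readGrayFrames(String videoPath) throws IOException {
        int width = 480, height = 360; // from the ffprobe output above
        ProcessBuilder pb = new ProcessBuilder(
                "ffmpeg", "-i", videoPath,
                "-f", "rawvideo", "-pix_fmt", "gray", "-an", "-");
        pb.redirectError(ProcessBuilder.Redirect.INHERIT); // keep ffmpeg's log off the data pipe
        Process ffmpeg = pb.start();

        List<Mat> frames = new ArrayList<>();
        byte[] buffer = new byte[width * height]; // one raw grayscale frame
        try (DataInputStream in = new DataInputStream(ffmpeg.getInputStream())) {
            while (true) {
                try {
                    in.readFully(buffer); // block until a full frame has arrived
                } catch (IOException endOfStream) {
                    break; // ffmpeg closed the pipe (end of video or error)
                }
                Mat frame = new Mat(height, width, CvType.CV_8UC1);
                frame.put(0, 0, buffer); // copy the raw bytes into the Mat
                frames.add(frame);
            }
        }
        return frames;
    }
}

Mat.put is one way to cover the "create an image object from in-memory data" step; for color you would read width * height * 3 bytes per frame and use CV_8UC3 instead (and -pix_fmt bgr24 is usually more convenient than rgb24, since OpenCV expects BGR channel order).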
The issue of this question has already been discussed e.g. for
C++
Python
The OpenCV documentation describes
ErrorCallback cv::redirectError(ErrorCallback errCallback,
                                void*  userdata = 0,
                                void** prevUserdata = 0)
How can this be used to e.g. filter out annoying messages?
An example is
[mjpeg @ 0x7fe5a696ea00] unable to decode APP fields: Invalid data found when processing input
from a Logitech USB webcam MJPEG stream; it is produced for every single frame and is superfluous.
There is also a log level available. Unfortunately, the org.opencv.utils package only contains Converters and no logging API as of OpenCV 3.4.8.
How could the loglevel be set from Java?
enum LogLevel {
    LOG_LEVEL_SILENT  = 0,
    LOG_LEVEL_FATAL   = 1,
    LOG_LEVEL_ERROR   = 2,
    LOG_LEVEL_WARNING = 3,
    LOG_LEVEL_INFO    = 4,
    LOG_LEVEL_DEBUG   = 5,
    LOG_LEVEL_VERBOSE = 6
}
Would Redirect System.out and System.err to slf4j help?
How can this be made to use to e.g. filter out annoying messages?
How could the loglevel be set from Java?
At this time (2020-01) it can't. Even if the API were accessible from Java, the bug https://github.com/opencv/opencv/issues/12780 would prevent it from working.
Would Redirect System.out and System.err to slf4j help?
No - see the JUnit test case below. The result is:
11:05:56.407 [main] DEBUG u.o.l.s.c.SysOutOverSLF4JInitialiser - Your logging framework class ch.qos.logback.classic.Logger should not need access to the standard println methods on the console, so you should not need to register a logging system package.
11:05:56.417 [main] INFO u.o.l.s.context.SysOutOverSLF4J - Replaced standard System.out and System.err PrintStreams with SLF4JPrintStreams
11:05:56.420 [main] INFO u.o.l.s.context.SysOutOverSLF4J - Redirected System.out and System.err to SLF4J for this context
11:05:56.421 [main] ERROR org.rcdukes.roi.TestROI - testing stderr via slf4j
[mjpeg @ 0x7f958b1b0400] unable to decode APP fields: Invalid data found when processing input
[mjpeg @ 0x7f958b027a00] unable to decode APP fields: Invalid data found when processing input
where the "unable to decode APP fields" part still shows up, because the native FFmpeg code writes directly to the process's stderr (file descriptor 2) and thus bypasses the redirected Java System.err.
@Test
public void testLogStderr() throws Exception {
    NativeLibrary.logStdErr();
    System.err.println("testing stderr via slf4j");
    NativeLibrary.load();
    VideoCapture capture = new VideoCapture();
    // Dorf Appenzell
    // String url="http://213.193.89.202/axis-cgi/mjpg/video.cgi";
    // Logitech Cam on test car
    // url="http://picarford:8080/?action=stream";
    File imgRoot = new File(testPath);
    File testStream = new File(imgRoot, "logitech_test_stream.mjpg");
    assertTrue(testStream.canRead());
    capture.open(testStream.getPath());
    Mat image = new Mat();
    capture.read(image);
    assertEquals(640, image.width());
    assertEquals(480, image.height());
    capture.release();
}
Funny side fact
According to the ffmpeg documentation
Log coloring can be disabled setting the environment variable AV_LOG_FORCE_NOCOLOR or NO_COLOR, or can be forced setting the environment variable AV_LOG_FORCE_COLOR. The use of the environment variable NO_COLOR is deprecated and will be dropped in a future FFmpeg version
But there seems to be no option to change the logging level via an environment variable ...
I can't understand why I'm getting this error: java.lang.RuntimeException: Resource not found. I'm trying to make a simple 2D game using the Slick and LWJGL libraries. I followed this guide http://www.youtube.com/playlist?list=PLaNw_AbDFccGkU5gnFYquQ0PNQPmmD-Q7 and managed to build even a bit more on my own.
The thing is that I am receiving this error even though the image does exist at the specified location. The game runs completely fine and then suddenly quits with the error mentioned above.
The error:
Wed Nov 27 14:43:46 PST 2013 ERROR:Resource not found:
/home/tomtam/workspace/Game/gfx/world/object/blockgreen.png
java.lang.RuntimeException: Resource not found:
/home/tomtam/workspace/Game/gfx/world/object/blockgreen.png
at org.newdawn.slick.util.ResourceLoader.getResourceAsStream(ResourceLoader.java:69)
at org.newdawn.slick.opengl.InternalTextureLoader.getTexture(InternalTextureLoader.java:273)
at org.newdawn.slick.Image.<init>(Image.java:270)
at org.newdawn.slick.Image.<init>(Image.java:244)
at org.newdawn.slick.Image.<init>(Image.java:232)
at org.newdawn.slick.Image.<init>(Image.java:198)
at tomtam.game.object.BlockGreen.render(BlockGreen.java:18)
at tomtam.game.main.World.render(World.java:447)
at tomtam.game.state.PlayState.render(PlayState.java:76)
at org.newdawn.slick.state.StateBasedGame.render(StateBasedGame.java:207)
at org.newdawn.slick.GameContainer.updateAndRender(GameContainer.java:703)
at org.newdawn.slick.AppGameContainer.gameLoop(AppGameContainer.java:456)
at org.newdawn.slick.AppGameContainer.start(AppGameContainer.java:361)
at tomtam.game.main.Main.main(Main.java:36)
Wed Nov 27 14:43:46 PST 2013 ERROR:Game.render() failure - check the game code.
org.newdawn.slick.SlickException: Game.render() failure - check the game code.
at org.newdawn.slick.GameContainer.updateAndRender(GameContainer.java:706)
at org.newdawn.slick.AppGameContainer.gameLoop(AppGameContainer.java:456)
at org.newdawn.slick.AppGameContainer.start(AppGameContainer.java:361)
at tomtam.game.main.Main.main(Main.java:36)
So the error points to this part of the code (image):
public void render(GameContainer gc, Graphics g) throws SlickException
{
    super.render(gc, g);
    image = new Image("/home/tomtam/workspace/Game/gfx/world/object/blockgreen.png");
}
I am not a skilled programmer, but I guess this error happens because the image is reloaded nonstop on every render call, even though its location and contents haven't changed. That could produce some lag spikes, right..? So I tried changing it to:
try
{
    image = new Image("/home/tomtam/workspace/Game/gfx/world/object/blockgreen.png");
}
catch (RuntimeException npe)
{
}
I know it's a bad thing to do it like this; however, this way I am not receiving any errors and everything works fine for a while. After some time, some images start blinking, and the longer I wait, the less time they are shown, until they finally disappear.
The code is kinda long, but I can post it, just ask. Any help will be appreciated.
Usually a "java.lang.RuntimeException: Resource not found" occurs when your resources (images now) are not in your CLASSPATH and generally it is a classpath issue.
This may also help you.
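Beyond the classpath point, the blinking and eventually disappearing images described in the question are a classic symptom of constructing a new Image (and therefore a new OpenGL texture) on every render call until resources run out. Below is a minimal sketch of the usual pattern: load once, draw many times. The class shape, the init() hook and the relative gfx/... path are illustrative assumptions, not the asker's actual code:

import org.newdawn.slick.GameContainer;
import org.newdawn.slick.Graphics;
import org.newdawn.slick.Image;
import org.newdawn.slick.SlickException;

public class BlockGreen {

    private Image image;

    // Load the texture once (e.g. from your state's init()), not on every frame
    public void init() throws SlickException {
        // Relative path, resolved by Slick's ResourceLoader (classpath and working directory)
        image = new Image("gfx/world/object/blockgreen.png");
    }

    public void render(GameContainer gc, Graphics g) {
        image.draw(0, 0); // replace 0,0 with the block's actual position
    }
}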
I am getting an OutOfMemory exception in my app. I have taken a heap dump and analyzed it with MAT. While analyzing my app's memory usage I found the following suspects, but I am unable to understand the main cause behind them.
Please help me understand these leak suspects and what the relevant solution for them is.
Suspect 1
The thread org.apache.tomcat.util.threads.TaskThread @ 0x2bdf5ff8 "ajp-bio-9002"-exec-5 keeps local variables with total size 113,973,288 (50.72%) bytes.
The memory is accumulated in one instance of "org.apache.tomcat.util.threads.TaskThread" loaded by "org.apache.catalina.loader.StandardClassLoader @ 0x293b4488".
Thread Stack
"ajp-bio-9002"-exec-5
at java.util.Arrays.copyOf([CI)[C (Arrays.java:2882)
at java.lang.AbstractStringBuilder.expandCapacity(I)V (AbstractStringBuilder.java:100)
at java.lang.AbstractStringBuilder.append(C)Ljava/lang/AbstractStringBuilder; (AbstractStringBuilder.java:572)
at java.lang.StringBuffer.append(C)Ljava/lang/StringBuffer; (StringBuffer.java:320)
at org.apache.myfaces.renderkit.html.util.ReducedHTMLParser.consumeString(C)Ljava/lang/String; (ReducedHTMLParser.java:303)
at org.apache.myfaces.renderkit.html.util.ReducedHTMLParser.consumeAttrValue()Ljava/lang/String; (ReducedHTMLParser.java:327)
at org.apache.myfaces.renderkit.html.util.ReducedHTMLParser.parse()V (ReducedHTMLParser.java:579)
at org.apache.myfaces.renderkit.html.util.ReducedHTMLParser.parse(Ljava/lang/CharSequence;Lorg/apache/myfaces/renderkit/html/util/CallbackListener;)V (ReducedHTMLParser.java:66)
at org.apache.myfaces.renderkit.html.util.DefaultAddResource.parseResponse(Ljavax/servlet/http/HttpServletRequest;Ljava/lang/String;Ljavax/servlet/http/HttpServletResponse;)V (DefaultAddResource.java:699)
at org.apache.myfaces.webapp.filter.ExtensionsFilter.doFilter(Ljavax/servlet/ServletRequest;Ljavax/servlet/ServletResponse;Ljavax/servlet/FilterChain;)V (ExtensionsFilter.java:157)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(Ljavax/servlet/ServletRequest;Ljavax/servlet/ServletResponse;)V (ApplicationFilterChain.java:243)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(Ljavax/servlet/ServletRequest;Ljavax/servlet/ServletResponse;)V (ApplicationFilterChain.java:210)
at org.apache.catalina.core.StandardWrapperValve.invoke(Lorg/apache/catalina/connector/Request;Lorg/apache/catalina/connector/Response;)V (StandardWrapperValve.java:240)
at org.apache.catalina.core.StandardContextValve.invoke(Lorg/apache/catalina/connector/Request;Lorg/apache/catalina/connector/Response;)V (StandardContextValve.java:164)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(Lorg/apache/catalina/connector/Request;Lorg/apache/catalina/connector/Response;)V (AuthenticatorBase.java:462)
at org.apache.catalina.core.StandardHostValve.invoke(Lorg/apache/catalina/connector/Request;Lorg/apache/catalina/connector/Response;)V (StandardHostValve.java:164)
at org.apache.catalina.valves.ErrorReportValve.invoke(Lorg/apache/catalina/connector/Request;Lorg/apache/catalina/connector/Response;)V (ErrorReportValve.java:100)
at org.apache.catalina.valves.AccessLogValve.invoke(Lorg/apache/catalina/connector/Request;Lorg/apache/catalina/connector/Response;)V (AccessLogValve.java:562)
at org.apache.catalina.core.StandardEngineValve.invoke(Lorg/apache/catalina/connector/Request;Lorg/apache/catalina/connector/Response;)V (StandardEngineValve.java:118)
at org.apache.catalina.ha.session.JvmRouteBinderValve.invoke(Lorg/apache/catalina/connector/Request;Lorg/apache/catalina/connector/Response;)V (JvmRouteBinderValve.java:218)
at org.apache.catalina.ha.tcp.ReplicationValve.invoke(Lorg/apache/catalina/connector/Request;Lorg/apache/catalina/connector/Response;)V (ReplicationValve.java:333)
at org.apache.catalina.connector.CoyoteAdapter.service(Lorg/apache/coyote/Request;Lorg/apache/coyote/Response;)V (CoyoteAdapter.java:395)
at org.apache.coyote.ajp.AjpProcessor.process(Lorg/apache/tomcat/util/net/SocketWrapper;)Lorg/apache/tomcat/util/net/AbstractEndpoint$Handler$SocketState; (AjpProcessor.java:301)
at org.apache.coyote.ajp.AjpProtocol$AjpConnectionHandler.process(Lorg/apache/tomcat/util/net/SocketWrapper;Lorg/apache/tomcat/util/net/SocketStatus;)Lorg/apache/tomcat/util/net/AbstractEndpoint$Handler$SocketState; (AjpProtocol.java:183)
at org.apache.coyote.ajp.AjpProtocol$AjpConnectionHandler.process(Lorg/apache/tomcat/util/net/SocketWrapper;)Lorg/apache/tomcat/util/net/AbstractEndpoint$Handler$SocketState; (AjpProtocol.java:169)
at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run()V (JIoEndpoint.java:302)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Ljava/lang/Runnable;)V (ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run()V (ThreadPoolExecutor.java:908)
at java.lang.Thread.run()V (Thread.java:662)
Suspect 2
One instance of "java.lang.StringBuffer" loaded by "" occupies 59,216,088 (26.35%) bytes. The instance is referenced by org.apache.myfaces.renderkit.html.util.ReducedHTMLParser # 0x276990e8 , loaded by "org.apache.catalina.loader.WebappClassLoader # 0x29592038". The memory is accumulated in one instance of "char[]" loaded by "".
You can go to the "dominator_tree" tab of Memory Analyzer (MAT) and expand the TaskThread. This will show you the retained heap of all the objects in that TaskThread, which might help you find the part of your application code causing the issue.
It looks like org.apache.myfaces.renderkit.html.util.ReducedHTMLParser is the culprit. The javadoc for ReducedHTMLParser explains how it works: it buffers the entire HTML response in memory and then processes it. It looks like it is trying - and failing - to process a very large response.
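As a possible mitigation to verify against your MyFaces Tomahawk version: if memory serves, the buffering DefaultAddResource seen in the stack trace can be swapped for a streaming implementation through a web.xml context parameter, which avoids holding the whole response in a StringBuffer. The parameter and class names below are from memory and should be treated as an assumption to check in the Tomahawk documentation, not a confirmed fix:

<!-- Assumed Tomahawk configuration: switch AddResource handling to the
     streaming implementation so the full HTML response is no longer
     buffered and re-parsed in memory. Verify the names for your version. -->
<context-param>
    <param-name>org.apache.myfaces.ADD_RESOURCE_CLASS</param-name>
    <param-value>org.apache.myfaces.renderkit.html.util.StreamingAddResource</param-value>
</context-param>

Reducing the size of the generated HTML response would also shrink the buffer, whichever AddResource implementation is used.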