I'm currently investigating one of our Android applications for memory leaks and I found something that completely baffles me.
The DDMS heap monitor reports that the application is using 13 MB of a 20 MB heap, but a report pulled directly from the device says that the application is using nearly 700 MB!
Is this an issue with the device? Is DDMS wrong? How do I find out what is going into that 700 MB?
It is part of the system software.
The first screenshot is the output of adb shell dumpsys meminfo. The second screenshot looks like it is from procrank, which isn't part of standard Android; leastways, I can't find it quickly on Android 6.0.
(in the future, when somebody asks you 'what exactly is this "report"', feel free to cite actual commands)
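For reference, those two reports would typically be pulled with something like this (the package name is a placeholder):

    adb shell dumpsys meminfo com.example.yourapp
    adb shell procrank

dumpsys meminfo gives the per-process Dalvik/native breakdown; procrank reports PSS/RSS per process, but only on builds that actually ship it.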
Is this an issue with the device?
Probably not, though that's tough to say, since we do not know what the device is, what the app is, or much of anything outside of two digital camera photos.
Is DDMS wrong?
Probably not. Java code, whether running in Dalvik or ART, has a heap limit, and that's going to be well under 700 MB.
How do I find out what is going into that 700mb?
~600 MB of that will most likely be coming from native code (NDK libraries); a quick in-app check is sketched after the list below.
So, start by finding out what in your app is using native code. That could be your code, or it could come from third-party libraries (e.g., Fresco). Your choices then are:
Call (or implement) logic in those libraries to cap how much heap space they use, or
Get rid of them, or
See if there's a way of hooking up Valgrind or something else to NDK code to determine where and how those libraries are using so much system RAM
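As a first sanity check, you can compare the managed heap with what the native allocator reports from inside the app itself. A minimal sketch, assuming a normal Android project (the class name and log tag are arbitrary):

    import android.os.Debug;
    import android.util.Log;

    // Logs the Java heap limit/usage next to the native heap usage.
    public final class MemoryCheck {

        private MemoryCheck() {}

        public static void logHeapUsage() {
            long javaMax  = Runtime.getRuntime().maxMemory();      // the heap limit DDMS is tracking
            long javaUsed = Runtime.getRuntime().totalMemory()
                          - Runtime.getRuntime().freeMemory();
            long nativeUsed = Debug.getNativeHeapAllocatedSize();  // memory malloc'd by native code

            Log.d("MemoryCheck", String.format(
                    "java heap: %d/%d MB, native heap: %d MB",
                    javaUsed / (1024 * 1024),
                    javaMax / (1024 * 1024),
                    nativeUsed / (1024 * 1024)));
        }
    }

If the native figure dwarfs the Java figure, that points at the NDK libraries rather than anything DDMS can see.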
It's actually a Minecraft server. I have 16 GB of RAM on my desktop here, with a 4-core processor running at 3.0 GHz per core and 4 GB of video memory. It's a pretty beefy computer (especially back in the day), yet it can still hold its own as a gaming computer even today, running some pretty awesome Xbox One-era games.
Well, I'm trying to run a game server on this desktop (I know it can handle it). Problem is, the server runs, but I can't see any of the mobs (the NPC creatures of the world) on the map, yet I can hear them. I know they are there. I go around my map, hearing them, but not seeing them.
I looked in other places on the web regarding this issue and found that it's a memory issue (not enough memory). So I need to increase the memory for Java 8. Problem is, my server console says "Ignoring max memory--support removed in 8.0", meaning that even though I set the memory in the .bat file that runs the server, it ignores how much I am telling it to use... and this is annoying.
Okay, here are more details.
I entered a command on the server: /memory
The server reports that the max memory allocated to the server is only 1 GB! And I'm like WHAT!? Because I know, I KNOW, I have WAY more than that to offer my server! I need to increase that. So this is the issue.
To sum it up: Java 8 says it no longer supports max/min memory settings, so if I set it up in a .bat file to use 10,000 MB (10 GB) for my server when I run it, it ignores that... yet I need to force it to USE THAT AMOUNT. How do I do this?
In the Java Control Panel, on the Java tab, I have already set it in the field that is there by default.
So I'm not sure what else to do.
Seems to me it was dumb of Java to remove support for heap customization in 8; makes me miss Java 7, if you ask me.
So any idea how I can make this work?
Make sure your java arguments include -Xmx and -Xms.
Read this question/answer for further details: What are the Xms and Xmx parameters when starting JVMs?
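For a server started from a .bat file, that means putting the flags on the java command line itself. A sketch of what the launch line could look like (the jar name and the sizes are placeholders for whatever your setup uses):

    java -Xms2G -Xmx10G -jar minecraft_server.jar nogui

As far as I know, the values set in the Java Control Panel only apply to applets and Web Start applications, not to a server launched from a .bat file, so the flags have to go on this command line.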
I have a Java Webstart application that starts successfully with -Xmx1G, but fails to start with -Xmx2G. Some of my users really need 2G of heap.
This seems to be a problem with Java 8u60 only, because I have a report of someone launching successfully with Java 8u51.
The failure looks like this: I see the blue 'Java...' splash screen, and then after a few seconds, poof it's gone, before displaying the Java console and without producing any trace information in the expected place.
The failure occurs only on clients with less than 2G of memory available. But I am a little surprised that requesting a 'maximum' heap size could cause the application to fail so early and without any diagnostic information. We are dealing with a 'maximum' value, after all, not an 'initial' value. I have read in multiple places that the JVM is not supposed to do this.
But I also remembered reading that the 'initial', if unspecified, is based on the maximum. So, along with passing -Xmx2G, I tried passing -Xms512M, -Xms256M, and -Xms128M. But, this attempt to shrink the initial heap size did not help. I cannot get this thing to start with -Xmx2G!
Does anyone have any light to shed on this situation? A solution? A workaround? In the short term, I'll change to -Xmx1G, but, as I said at the beginning, I have some users that really need -Xmx2G. I'd like to avoid having two separate *.jnlp files, which would also entail having two separate *.jar files!
Turns out that this is exactly what Webstart on Java 8u60 does if the client machine does not have enough memory to satisfy -Xmx. It attempts to start, and then poof, it disappears without any indication as to what went wrong.
So, I will end up having to build my application in different configurations if I want to let the users with more memory allocate that memory to my application. This is because signing requires the *.jnlp file to be embedded in the *.jar file itself, and this embedded *.jnlp file must be an exact match of the *.jnlp file used to launch the application.
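For context, the heap request lives in the j2se element of the *.jnlp file; a minimal sketch (the version string, sizes, and jar name are placeholders for my real setup):

    <resources>
      <j2se version="1.8+" initial-heap-size="256m" max-heap-size="2g"/>
      <jar href="myapp.jar" main="true"/>
    </resources>

Since that same file has to be baked into the signed *.jar, a second configuration with a smaller max-heap-size also means signing a second *.jar.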
I'm trying to find a leak in my app that makes the device warm up even without intensive usage. Is there a way of seeing the number of times a method is called and the CPU time spent in it?
I'm asking because I don't remember its name now, but for C there is a tool that, after compiling the app with some special flags, gives that information.
Have a look at Profiling with Traceview and dmtracedump in the Android developer documentation.
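To capture a trace for Traceview, one option is to bracket the suspect code path with method tracing. A minimal sketch, assuming a standard Android app (the trace name and helper class are arbitrary):

    import android.os.Debug;

    // Wraps a suspect code path in method tracing; the resulting
    // .trace file (written to external storage) can be opened with
    // Traceview or converted with dmtracedump.
    public final class TraceHelper {

        private TraceHelper() {}

        public static void traceSuspectPath(Runnable suspectPath) {
            Debug.startMethodTracing("suspect-path");
            try {
                suspectPath.run();
            } finally {
                Debug.stopMethodTracing();
            }
        }
    }

The trace shows per-method call counts and CPU time, which is the kind of information you are describing.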
I am trying to reproduce a java.lang.OutOfMemoryError in JBoss 4, which one of our clients got, presumably from running their J2EE applications over days/weeks.
I am trying to find a way to make the webapp spit out java.lang.OutOfMemoryError in a matter of minutes (instead of days/weeks).
One thing that comes to mind is to write a Selenium script and have it bombard the webapp.
One other thing we could do is reduce the JVM heap size, but we would prefer not to, as we want to see the limit of our system.
Any suggestions?
ps: I don't have access to the source code, as we just provide a hosting service (of course I could decompile the class files...)
If you don't have access to the source code of the J2EE app in question, the options that come to mind are:
Reduce the amount of RAM available to the JVM. You've already identified this one and said you don't want to do it.
Create a J2EE app (it could probably just be a JSP), configure it to run within the same JVM as the target app, and have it allocate a ridiculous amount of memory. That will reduce the memory available to the target app, hopefully enough that it fails in the way you're trying to force; a sketch follows below.
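A minimal sketch of that second option, written as a servlet rather than a JSP (the class name and the 10 MB chunk size are arbitrary):

    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Each request pins another 10 MB in a static list, shrinking the
    // heap left for the target app deployed in the same JVM.
    public class MemoryHogServlet extends HttpServlet {

        private static final List<byte[]> HOARD = new ArrayList<byte[]>();

        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            HOARD.add(new byte[10 * 1024 * 1024]); // held forever, never released
            resp.getWriter().println("Now holding roughly " + (HOARD.size() * 10) + " MB");
        }
    }

Hit it repeatedly (with Selenium or just curl) until the target app starts failing its own allocations.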
Try using some profiling tools to investigate the memory leakage. It is also good to investigate heap dumps taken after the OOM happens, along with the logs. IMHO, reducing the memory is not the right way to investigate, because you may end up hitting issues that have nothing to do with the real production one.
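One way to get such a dump automatically (assuming a reasonably recent Sun/HotSpot JVM) is to add these flags to the JBoss JVM options; the dump path is a placeholder:

    -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/path/to/dumps

The resulting .hprof file can then be opened in a profiler or heap-analysis tool.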
Do both, but in a controlled fashion:
Reduce the available memory to the absolute minimum (using -Xms1M -Xmx2M, as an example, but I fear your app won't even load with such limitations)
Do controlled "nuclear irradiation": run Selenium scripts over each of the known working URLs before attacking the presumed guilty one.
Finally, unleash the power that shall not be raised: start VisualVM and any other monitoring software you can think of (DB execution is a usual suspect).
If you are using Sun Java 6, you may want to consider attaching to the application with jvisualvm in the JDK. This will allow you to do in-place profiling without needing to alter anything in your scenario, and may possibly immediately reveal the culprit.
If you don't have the source, you can decompile it, at least if you think the terms of use allow it and you live in a free country. You can use:
Java Decompiler or JAD.
In addition to all the others I must say that even if you can reproduce an OutOfMemory error, and find out where it occurred, you probably haven't found out anything worth knowing.
The trouble is that an OOM occurs when an allocation can not take place. The real problem however is not that allocation, but the fact that other allocations, in other parts of the code, have not been de-allocated (de-referenced and garbage collected). The failed allocation here might have nothing to do with the source of the trouble (no pun intended).
This problem is larger in your case as it might take weeks before trouble starts, suggesting either a sparsely used application, or an abnormal code path, or a relatively HUGE amount of memory in relation to what would be necessary if the code was OK.
It might be a good idea to ask around why this amount of memory is configured for JBoss and not something different. If it's recommended by the supplier, then maybe they already know about the leak and require this to mitigate the effects of the bug.
For these kinds of errors it really pays to have some idea of which code path the problem occurs in, so you can do targeted tests. And test with a profiler so you can see at run time which objects (Lists, Maps and such) are growing without shrinking.
That would give you a chance to decompile the correct classes and see what's wrong with them. (Closing or cleaning up in a try block rather than a finally block, perhaps.)
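For illustration, that last parenthetical is the kind of bug meant here; a hypothetical sketch (dataSource and doWork stand in for whatever the real app uses):

    import java.sql.Connection;
    import java.sql.SQLException;
    import javax.sql.DataSource;

    class ConnectionLeakExample {

        // Buggy: close() sits inside the try block, so any exception
        // thrown by doWork() skips it and the connection leaks.
        void leaky(DataSource dataSource) throws SQLException {
            Connection con = dataSource.getConnection();
            try {
                doWork(con);
                con.close();
            } catch (SQLException e) {
                // swallowed; the connection is still open
            }
        }

        // Fixed: the finally block runs on every path, success or failure.
        void safe(DataSource dataSource) throws SQLException {
            Connection con = dataSource.getConnection();
            try {
                doWork(con);
            } finally {
                con.close();
            }
        }

        private void doWork(Connection con) throws SQLException {
            // placeholder for real JDBC work
        }
    }

A profiler showing ever-growing connection or collection counts tells you which class to decompile and inspect for exactly this pattern.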
In any case, good luck. I think I'd prefer to find a needle in a haystack. When you find the needle you at least know you have found it:)
The root of the problem is most likely a memory leak in the webapp that the client is running. In order to track it down, you need to run the app with a representative workload with memory profiling enabled. Take some snapshots, and then use the profiler to compare the snapshots to see where objects are leaking. While source-code would be ideal, you should be able to at least figure out where the leaking objects are being allocated. Then you need to track down the cause.
However, if your customer won't release binaries so that you can run an identical system to what he is running, you are kind of stuck, and you'll need to get the customer to do the profiling and leak detection himself.
BTW - there is not a lot of point causing the webapp to throw an OutOfMemoryError. It won't tell you why it is happening, and without understanding "why" you cannot do much about it.
EDIT
There is no point 'measuring the limits' if the root cause of the memory leak is in the client's code. Assuming that you are providing a servlet hosting service, the best thing to do is to provide the client with instructions on how to debug memory leaks... and step out of the way. And if they have a support contract that requires you to (in effect) debug their code, they ought to provide you with the source code to do your job.
A Java application I support that runs on JRE 1.4.2_12 is hanging near midnight every night. I'd like to try and record as much profiling information as I can to discover if there is an issue in the JVM or external to the app.
I'd like to use HPROF to collect as much information as possible.
Is there a way to have HPROF dump its cpu sample and memory allocation report every minute instead of at the termination of the JVM?
Is there a different, more appropriate profiler that can collect information like this?
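For context, on a 1.4 JVM HPROF is enabled with -Xrunhprof; the kind of invocation I have in mind looks roughly like this (the main class is a placeholder and the option values are only examples):

    java -Xrunhprof:cpu=samples,interval=10,depth=8,file=app.hprof.txt com.example.MainClass

With options like these the report is normally written only when the JVM terminates, which is what I'm trying to avoid.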
Rather than relying on dump files, I would try hooking up a profiler to the VM and leaving it attached until the hang occurs. Then use the profiler to introspect the state of the threads.
The use of Java 1.4 is a minor issue here, since 1.4's debug interface is not great, but some profilers still support it. I can particularly recommend YourKit, which is commercial but offers an evaluation licence. It's the best profiler I've used, by some margin.
First things first: did you analyze the thread dump when your application hangs? A lot of the time that has enough information to troubleshoot a hanging java app...
Ctrl-Break in the process window on Windows, or kill -QUIT [pid] on Linux.
I would first try to determine whether it's actually your app or something else.
Are there any other apps on the box? If so, do any of them run batch jobs around midnight? Your app could be suffering from a lack of resources because of other things running on the box or chewing up bandwidth.
Was this always the case, or did it start recently? If it's new, look at what changed on the box as a whole, not just your own app.