I want to know how much memory is used by a specific piece of functionality in an application.
I am running a Java-based web application in Chrome (or any browser). It contains a form-approval feature that works like this:
Steps:
1. Select the record(s).
2. Set the status to "approved".
3. Click the submit button; the form is approved.
This works fine when fewer than 50 records are selected, but with around 500 records the form fails with a memory issue.
My question is: how do I measure the memory usage of the submit functionality, so that I can present exact, understandable figures?
Why not use a Java profiler such as jvisualvm? It lives in the bin directory of the JDK installation folder.
You can start your application and, at the same time, start monitoring the main thread through this profiling tool. Then take a heap snapshot whenever you want; this will not just tell you about memory usage by individual classes, it even helps you identify memory leaks.
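If you also want a number you can quote without profiler screenshots, a minimal in-code sketch (assuming you can call the server-side approval logic directly; approveRecords below is a hypothetical stand-in for your submit handler) compares heap usage before and after the operation:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

public class SubmitMemoryProbe {

    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();

        // Encourage a collection so the "before" reading is not dominated by garbage.
        System.gc();
        long before = memory.getHeapMemoryUsage().getUsed();

        approveRecords(500); // hypothetical stand-in for the submit functionality

        System.gc();
        long after = memory.getHeapMemoryUsage().getUsed();

        System.out.printf("Heap used by approving 500 records: ~%d KB%n",
                (after - before) / 1024);
    }

    // Placeholder for the real form-approval logic.
    private static void approveRecords(int count) {
        // ... select records, set status to approved, submit ...
    }
}

Keep in mind that System.gc() is only a hint to the JVM, so treat the resulting figure as an estimate and cross-check it against the profiler's heap snapshot.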
We have a major challenge that has been stumping us for months now.
A couple of months ago, we took over the maintenance of a legacy application, where the last developer to touch the code left the company several years ago.
This application needs to be more or less always online. It was developed many years ago, without staging or test environments and without a redundant infrastructure setup.
We're dealing with a legacy Java EJB application running on Payara application server (Glassfish derivative) on an Ubuntu server.
Within the last year or two, it has been necessary to restart Payara approximately once a week, and the Ubuntu server once a month.
This is due to a memory leak which slows down the application over a period of around a week. The GUI becomes almost entirely non-responsive, but a restart of Payara fixes this, at least for a while.
However after each Payara restart, there is still some kind of residual memory use. The baseline memory usage increases, thereby reducing the time between Payara restarts. Around every month, we thus do a full Ubuntu reboot, which fixes the issue.
Naturally we want to find the memory leak, but we are unable to run a profiler on the server because it is resource-intensive and would need to run for several days in order to capture the memory leak.
We have also tried several times to dump the heap using the "gcore" command, but it always results in a segfault, after which we need to reboot the Ubuntu server.
What other options / approaches do we have to figure out which objects in the heap are not being garbage collected?
I would try to clone the server in some way onto another system where you can perform tests without clients being affected. It could even be a system with fewer resources, if you want to trigger a resource-based problem sooner.
To be able to observe the memory leak without having to wait for days, I would create a load test, maybe with Apache JMeter, to compress a week's worth of accesses into a day, or even hours or minutes (whether that is feasible depends on the base load and on your server and network infrastructure).
First, set up the load test to issue a "regular" mix of requests, as seen in the wild. Once you can trigger the loss of responsiveness, you can try to find out whether specific requests are more likely to cause the leak than others. (It could also be that some basic component reused in nearly every call contains the leak, in which case you cannot single out "the" call with the leak.)
Then you can instrument this test server with a profiler.
As another approach (which you could pursue in parallel), you can use a static code inspection tool like SonarQube to analyze the source code for typical memory-leak patterns.
One other idea comes to mind, but it comes with many preconditions: if you have recorded typical scenarios for the backend calls, if you have enough development resources, and if it is a stateless web application where each call can be inspected more or less individually, then you could set up partial integration tests that simulate the incoming web calls, with database and file access but, if possible, without the application server, and record the increase in heap usage after each call (see the sketch below). Statistically, you might be able to identify the "bad" call this way. (This is something I would try only as a last resort.)
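A minimal sketch of such a heap-growth harness, assuming each recorded scenario can be replayed through a plain Java entry point (replayCall here is a hypothetical stand-in for invoking one recorded backend call):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.util.List;

public class LeakHunter {

    public static void main(String[] args) throws Exception {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        // Hypothetical scenario names recorded from production traffic.
        List<String> scenarios = List.of("approveForm", "searchRecords", "exportReport");

        for (String scenario : scenarios) {
            // Settle the heap before measuring (System.gc() is only a hint).
            System.gc();
            Thread.sleep(200);
            long before = memory.getHeapMemoryUsage().getUsed();

            // Replay the same call many times; a genuine leak should show up
            // as growth roughly proportional to the number of iterations.
            for (int i = 0; i < 1_000; i++) {
                replayCall(scenario);
            }

            System.gc();
            Thread.sleep(200);
            long after = memory.getHeapMemoryUsage().getUsed();
            System.out.printf("%-15s retained after 1000 calls: %d KB%n",
                    scenario, (after - before) / 1024);
        }
    }

    // Hypothetical: invoke one recorded backend call against DB/files,
    // bypassing the application server where possible.
    private static void replayCall(String scenario) {
        // ... simulate the incoming web call ...
    }
}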
Apart from heap dumps, have you tried any real-time application performance monitoring (APM) tool such as AppDynamics, or an open-source alternative like https://github.com/scouter-project/scouter?
An alternative approach would be to research known issues affecting your existing stack, e.g. Payara issues like https://github.com/payara/Payara/issues/4098, or issues with the Ubuntu patch level the app is currently running on.
You can use jmap, a utility bundled with the JDK, to check the memory. From the documentation:
jmap prints shared object memory maps or heap memory details of a given process or core file or a remote debug server.
For more information, see the documentation, or the Stack Overflow question How to analyse the heap dump using jmap in java.
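If it is easier to trigger the dump from inside the application than to run jmap against the process, a minimal sketch using the HotSpot diagnostic MXBean (a com.sun.management API, so HotSpot-specific) might look like this:

import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

public class HeapDumper {

    public static void main(String[] args) throws Exception {
        HotSpotDiagnosticMXBean diagnostic = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);

        // Write a binary heap dump; live=true restricts it to reachable objects.
        // Note: this fails if the target file already exists.
        diagnostic.dumpHeap("heap.hprof", true);
        System.out.println("Heap dump written to heap.hprof");
    }
}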
There is also a tool called jhat which can be used to analyze a Java heap.
From the documentation:
The jhat command parses a java heap dump file and launches a webserver. jhat enables you to browse heap dumps using your favorite webbrowser. jhat supports pre-designed queries (such as 'show all instances of a known class "Foo"') as well as OQL (Object Query Language) - a SQL-like query language to query heap dumps. Help on OQL is available from the OQL help page shown by jhat. With the default port, OQL help is available at http://localhost:7000/oqlhelp/
See the jhat documentation, or How to analyze the heap dump using jhat. (Note that jhat was removed in JDK 9; VisualVM or Eclipse MAT are modern alternatives.)
This might be a long shot of a question, but I have run into a very complicated issue and I am unsure how to solve it.
Long story short, we have a Java application running which uses JDBC to pull in data from a MySQL database on startup.
We had a meltdown: that database is no longer active and has been lost forever, and so has the data that went with it, which is internally very valuable.
However the data is still stored in the heap of the running JVM that pulled it in.
My only hope now is to somehow extract the data from the running JVM. In an ideal world I would be able to attach to it and have the flexibility to run code that could access the running classes.
So my questions today are:
Is my approach reasonable and possible?
If so, how can I attach to the JVM and 'inject' code?
Thank you for reading
It seems that what you want to use is the jmap command. jmap can be used to dump the heap of a running JVM into a file, which you can then analyze "off-line", using tools such as jhat or JVisualVM.
It allows you to do so without killing the JVM and/or injecting code into it, and since the heap dump file is "inert", you can analyze it at your leisure without fear of harming the running VM by probing it further. Admittedly, I haven't used it extensively, so I'm not sure exactly what its capabilities are, but theoretically, you could perhaps also use JVisualVM's OQL language to run automated sequences on data in the heap and dump it to files in a format you want.
See, for instance, this question for usage examples.
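For reference, a typical command sequence (assuming you can find the process id of the running JVM, e.g. with jps) would be:

jps -l
jmap -dump:format=b,file=heap.bin <pid>
jhat heap.bin

jhat then serves the dump on http://localhost:7000/ by default; alternatively, open heap.bin in JVisualVM or Eclipse MAT. Avoid the live option here, since restricting the dump to live objects triggers a collection first, and for data recovery you want the heap disturbed as little as possible.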
In a situation like this the Eclipse Memory Analyzer Tool can be a good solution. It works on heap dumps too and shows you which objects take up memory.
In addition to this it can show the content of objects / memory locations.
I have sometimes found that MAT goes beyond what VisualVM does, but perhaps a view like the following already helps you find your data:
(Screenshot of a made-up example: some custom objects shown together with their values in the heap dump.)
Perhaps you can even attach the Eclipse debugger to the running application. There is a trick where you can run custom code in a conditional breakpoint; that code could dump your data to disk.
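As a rough illustration of that trick (this is a snippet for the 'Conditional' field of an Eclipse breakpoint, not a standalone class; suspects stands for whatever in-scope variable holds your data):

// Eclipse: Breakpoint Properties -> check "Conditional", then paste something like:
try (java.io.PrintWriter out =
        new java.io.PrintWriter(new java.io.FileWriter("/tmp/rescued-data.txt", true))) {
    out.println(suspects); // 'suspects' is hypothetical: the variable you want rescued
} catch (java.io.IOException e) {
    e.printStackTrace();
}
return false; // never actually suspend the application

Because the condition always returns false, the application keeps running while the breakpoint quietly appends the data to a file each time it is hit.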
I have a Java Webstart application that starts successfully with -Xmx1G, but fails to start with -Xmx2G. Some of my users really need 2G of heap.
This seems to be a problem with Java 8u60 only, because I have a report of someone launching successfully with Java 8u51.
The failure looks like this: I see the blue 'Java...' splash screen, and then after a few seconds, poof it's gone, before displaying the Java console and without producing any trace information in the expected place.
The failure occurs only on those clients with less than 2G of memory available. But, I am a little surprised that requesting a 'maximum' heap size could cause the application to fail so early and without any diagnostic information. We are dealing with a 'maximum' value, after all, not an 'initial' value. I read in multiple places that the JVM is not supposed to do this.
But I also remembered reading that the 'initial', if unspecified, is based on the maximum. So, along with passing -Xmx2G, I tried passing -Xms512M, -Xms256M, and -Xms128M. But, this attempt to shrink the initial heap size did not help. I cannot get this thing to start with -Xmx2G!
Does anyone have any light to shed on this situation? A solution? A workaround? In the short term, I'll change to -Xmx1G, but, as I said at the beginning, I have some users that really need -Xmx2G. I'd like to avoid having two separate *.jnlp files, which would also entail having two separate *.jar files!
Turns out that this is exactly what Webstart on Java8u60 does if the client machine does not have enough memory to satisfy -Xmx. It attempts to start, and then poof, it disappears without any indication as to what went wrong.
So, I will end up having to build my application in different configurations if I want to let the users with more memory allocate that memory to my application. This is because signing requires the *.jnlp file to be included in the *.jar file itself, and this *.jnlp file must exactly match the *.jnlp file used to launch the application.
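For what it's worth, the heap request lives in the j2se element of the JNLP file, so the two build configurations would differ in a single attribute. A sketch (all names and URLs are placeholders):

<?xml version="1.0" encoding="UTF-8"?>
<jnlp spec="1.0+" codebase="https://example.com/app" href="launch.jnlp">
  <information>
    <title>Example App</title>
    <vendor>Example</vendor>
  </information>
  <resources>
    <!-- The only line that differs between the 1G and 2G builds: -->
    <j2se version="1.8+" initial-heap-size="256m" max-heap-size="2g"/>
    <jar href="application.jar" main="true"/>
  </resources>
  <application-desc main-class="com.example.Main"/>
</jnlp>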
I am trying to profile an application with jvisualvm. The application consists of a loop, in which data is loaded from a database and then some complex calculations are performed on the data. When a set of data is processed, the next set is loaded and calculated.
When I start my application and attach jvisualvm, I set up a filter on the CPU profiling page ("Start profiling from classes" and "Do not profile classes"), since I am not interested in anything related to database access or other input/output-related stuff.
The filter works - almost. My problem is, that the profiler reports most of the time is spent in sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(), even though sun.* is entered into the "Do not profile classes" filter. This is the only method in sun.* appearing in my profiling results.
Has anyone seen this before and knows how to get rid of it? The problem is that all other methods show up only with tiny amounts (<1%) in the "Self Time" column; most are displayed with 0%.
The jvisualvm version used is 1.3.2.
Thanks in advance,
Axel
Sounds like most of the time is spent waiting on the database. If you want to profile the rest of the stuff, you can either
stub the database so that it returns quickly, thus making the rest of your code take most of the time (a sketch follows after this list), or
use a better profiler such as YourKit or JProfiler (paid; they definitely support what you want) or TPTP (free, but I'm not sure how powerful it is)
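A minimal sketch of the stubbing idea, assuming the calculation loop depends on some data-access interface (RecordSource and its method are hypothetical names; substitute whatever abstraction your code uses):

import java.util.List;

// Hypothetical data-access abstraction used by the calculation loop.
interface RecordSource {
    List<double[]> nextBatch();
}

// Stub that returns canned data instantly, so profiling time is spent
// in the calculations rather than in JDBC calls.
class StubRecordSource implements RecordSource {
    private final List<double[]> canned =
            List.of(new double[] {1.0, 2.0, 3.0}, new double[] {4.0, 5.0, 6.0});

    @Override
    public List<double[]> nextBatch() {
        return canned;
    }
}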
Uncheck 'Profile new Runnables' on the CPU profiling page.
To answer your other question about "Self Time": you need to take a CPU snapshot of the profiled data. The snapshot contains total method time information.
[UPDATE: I forgot to add that this 30 sec. freezing problem only happens the first time I try to load a file from the server. Subsequent loads are very quick. Maybe some strange reverse DNS lookup? I am hosting on Google's appengine.]
I started a little project recently called http://www.chartle.net which is built around an applet.
Startup time is an important factor in the user's experience of an applet. I collect statistics and am shocked to often find very long startup times (a factor of 50 to 100 higher than necessary).
The applet starts in 1-3 seconds depending on the speed of your computer and connection. Still for some users it takes up to 100 sec.
I have mixed results from my own tests. Mostly it is very fast, but sometimes it freezes the browser for a long time and the Java console doesn't tell me why. My best guess is that it stalls when loading a saved chart.
Please help me figure this out - the best test is to open an already saved chart (click on one of the 'create' links at http://www.chartle.net/gallery).
Cheers,
Dieter
This is generic help rather than specific for your demo (which loaded fine for me in a few attempts).
Freezing applets
In the JDK bin directory there is a very handy programme called jstack. Refresh your browser window until it crashes and then run:
jstack *process_id*
This will give you the stack trace of any frozen Java process. If Java is not a separate process, then you can use the browser's process (e.g. for Opera).
The following few problems were/are common for me:
I recommend you use invokeLater rather than invokeAndWait in the init method (although you can't do this if you use the start/stop methods); see the sketch after this list
Opera's custom Java plugin acts very poorly...
Deadlocks caused by synchronized blocks and invokeAndWait calls
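As promised above, a minimal sketch of the invokeLater variant (a bare-bones JApplet; buildUI is a placeholder for your own setup code):

import javax.swing.JApplet;
import javax.swing.JLabel;
import javax.swing.SwingUtilities;

public class NonBlockingApplet extends JApplet {

    @Override
    public void init() {
        // Schedule UI construction on the event dispatch thread without
        // blocking the applet thread, unlike invokeAndWait, which waits
        // (and can deadlock if the EDT is busy or blocked).
        SwingUtilities.invokeLater(this::buildUI);
    }

    // Placeholder for the real UI setup.
    private void buildUI() {
        add(new JLabel("Ready"));
        validate();
    }
}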
Slow applets
Possibly the browser is fetching resources from the server, unable to use the jar file?
It may be that only the old plugin causes these problems. That would mean basically all people running on OS X, plus other users with Java prior to 1.6 update 10.
So, I would really appreciate it if people with such setups would watch their Java console and describe the first startup behaviour.
Cheers,
Dieter