I have some basic working knowledge of the YourKit Java profiler. I want to perform memory and CPU profiling of my Selenium WebDriver + TestNG framework, which contains a large number of tests in the form of PageObject classes and Test classes. I have looked for online resources that give some direction on how to do this, but could not find any.
Has anyone done memory and CPU profiling of WebDriver + TestNG tests? Is it possible, first of all, to do memory profiling of such Java applications? I need some direction.
Profiling tests is a tricky affair because of the number of factors involved, such as the effects of parallel test execution or the order in which tests run. You can get an overall picture of memory or CPU by attaching VisualVM (or even JConsole) to your test run. For statistics on individual tests, I believe you need a commercial profiler (I am not aware of any open-source tools that can do that). I prefer to avoid intrusive profilers for performance tests, so you can also get an overall picture of the CPU samples taken during the tests using HPROF.
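For example, HPROF's sampling mode can be enabled on the java command line (a sketch for the Java 5-8 HPROF agent; com.example.MyTestRunner is a placeholder for your test entry point, and with Maven Surefire the flag would instead go into the forked JVM's argLine):

    java -agentlib:hprof=cpu=samples,interval=10,depth=10 com.example.MyTestRunner

When the JVM exits, the samples land in java.hprof.txt, which you can inspect for the hottest stack traces.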
Having said that, most IDEs nowadays have pretty good profilers (or at least a profiler plug-in), and if you are only required to give a rough estimate of the CPU and memory numbers, you can use them as well.
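If all you need is that rough estimate broken down per test, even a plain TestNG listener can record a heap delta around each test. A minimal sketch (HeapUsageListener is my own name, and the numbers are only indicative, since GC can run at any time between the two readings):

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import org.testng.ITestResult;
    import org.testng.TestListenerAdapter;

    public class HeapUsageListener extends TestListenerAdapter {
        private final MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        private long heapBefore;

        @Override
        public void onTestStart(ITestResult result) {
            super.onTestStart(result);
            // used heap just before the test begins
            heapBefore = memory.getHeapMemoryUsage().getUsed();
        }

        @Override
        public void onTestSuccess(ITestResult result) {
            super.onTestSuccess(result);
            long delta = memory.getHeapMemoryUsage().getUsed() - heapBefore;
            System.out.printf("%s: heap delta ~%d KB%n", result.getName(), delta / 1024);
        }
    }

Register it via the @Listeners annotation or a <listener> entry in testng.xml.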
Related
I am considering running VisualVM against a production JVM to see what's going on there; it started to consume too much CPU for some reason.
It must not cause a JVM failure, so I'm trying to estimate all the risks.
The only issue I see on their site that could potentially bring the JVM down is related to class data sharing and the -Xshare JVM option, but AFAIK class sharing is not enabled in server mode and/or on x64 systems.
So is it really safe to run VisualVM against a production JVM? If it isn't, what risks should one consider, and how much load (CPU/memory) does running VisualVM against a JVM (and profiling with it) put on that JVM?
Thanks
AFAIK VisualVM can be used in production, but I would only use it on a server which is lightly loaded. What you could do is wait for the service to slow down, and later, when it's not used as much, test it to see whether some of the collections are surprisingly large. Or you could trigger a heap dump and analyze it offline.
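Triggering the dump itself is cheap; you can use jmap from the command line, or the HotSpot-specific diagnostic bean from inside the process. A sketch, assuming a HotSpot JVM (com.sun.management is not a portable API):

    import java.lang.management.ManagementFactory;
    import com.sun.management.HotSpotDiagnosticMXBean;

    public class HeapDumper {
        public static void main(String[] args) throws Exception {
            HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                    ManagementFactory.getPlatformMBeanServer(),
                    "com.sun.management:type=HotSpotDiagnostic",
                    HotSpotDiagnosticMXBean.class);
            // true = dump only live objects, which forces a full GC first
            bean.dumpHeap("heap.hprof", true);
        }
    }

You can then load heap.hprof into VisualVM or Eclipse MAT offline, so the analysis cost never touches the production box.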
And you can't get stats on method calls without significant overhead. Java 6 and 7 are better than Java 5, but profiling could still slow your application by 30%, even with a commercial profiler.
Actually, you can get some information without a lot of overhead by using stack dumps. There is even a script to help you do this at https://gist.github.com/851961
This type of profiling is the least intrusive that you can get.
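The same idea in plain Java, if you'd rather sample from inside the JVM than run jstack externally. A rough sketch (the one-second interval and sample count are arbitrary, and ThreadInfo.toString() truncates very deep stacks):

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;

    // Poor man's sampler: dump every thread's stack once a second and
    // post-process (or just eyeball) which methods turn up most often.
    public class StackSampler {
        public static void main(String[] args) throws InterruptedException {
            ThreadMXBean threads = ManagementFactory.getThreadMXBean();
            for (int i = 0; i < 60; i++) {
                for (ThreadInfo info : threads.dumpAllThreads(false, false)) {
                    System.out.print(info);
                }
                Thread.sleep(1000);  // sampling interval
            }
        }
    }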
I have noticed high memory and CPU usage during the Maven GWT build, especially during the compile phase. Memory usage just soars. I just want to know whether this is normal and whether anyone else is experiencing it.
My current JVM settings are -Xms64m -Xmx1280m -XX:MaxPermSize=512m.
I think it's normal, because the compilation phase in GWT is very resource-intensive. GWT provides a large library (in gwt-user.jar) that must be analyzed during compilation, plus a number of compiler optimizations that require a lot of memory and processing power. Thus, the GWT compiler uses a lot of memory internally.
Yes, it's normal. It derives from the awesome CPU utilization Google achieved when they wrote the gwtc command (gwtc = GWT compile).
I consider it a good thing, since the trade-off for the CPU would typically be memory usage, which is far more valuable to me.
(I do not work for Google :-))
The GWT compiler has a localWorkers setting that tells it how many cores to use. The more cores, the more memory it will use. If you are using the Eclipse plugin, it defaults to just one (I believe), but the Maven plugin defaults to using all the cores on your machine (i.e., if you have a quad core, it will use localWorkers 5).
Interestingly, I've been following the advice found here: http://josephmarques.wordpress.com/2010/07/30/gwt-compilation-performance/ which says that localWorkers 2 is an ideal setting for memory usage and speed. That way my machine doesn't lock up during the compile, and the speed difference is very minor.
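For reference, with the gwt-maven-plugin that setting lives in the plugin configuration. A sketch (parameter names may differ between plugin versions, so check the docs for yours):

    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>gwt-maven-plugin</artifactId>
      <configuration>
        <!-- cap the compiler at two parallel permutation workers -->
        <localWorkers>2</localWorkers>
        <!-- heap for the forked compiler JVM -->
        <extraJvmArgs>-Xmx1024m</extraJvmArgs>
      </configuration>
    </plugin>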
I have an application which runs a rather long analysis (lots of number crunching), so running it once takes about 3-4 hours, fully utilizing all of the cores. Now, I am pretty sure my code is not watertight, so I want to profile it and look for potential weak points.
I have been reading quite a bit about jvisualvm, and have played around with it a bit too. However, it appears that one chooses either CPU or memory profiling, while this article from Javalobby has an interesting quote where the author says:
I realise that both the CPU and Memory Profiling could have been done simultaneously, but for the purpose of this article I wanted to keep them separate.
Could anyone confirm or deny this? If it is possible, it would be very useful, so I don't have to profile over and over in different modes. If it's not possible, would it be possible to queue two different profiling analyses so I can run them overnight?
Thanks,
It is not possible to do CPU and memory profiling together, but you can switch between CPU and memory very easily, especially when using the 'Sampler' tab. In your case, I would start with just simple monitoring. Looking at the graphs, you should be able to tell whether you have a memory problem or not. If you do, I would try to fix that first and then turn my attention to CPU profiling.
I find that profilers tend to underestimate the cost of object allocation, so I usually enable memory profiling together with CPU profiling, as I feel this gives a more realistic CPU profiling result (even if I don't look at the memory profiling report).
If in doubt, I suggest you run the CPU profile with and without memory profiling; you can get very different results. In my experience it is worth optimising for both. ;)
BTW: I use YourKit, but I don't imagine VisualVM to be very different in this regard.
Why does the JVM require around 10 MB of memory for a simple Hello World program, while the CLR doesn't? What is the trade-off here, i.e., what does the JVM gain by doing this?
Let me clarify a bit, because I'm not conveying the question that is in my head. There is clearly an architectural difference between the JVM and CLR runtimes. The JVM has a significantly higher memory footprint than the CLR. I'm assuming there is some benefit to this overhead, otherwise why would it exist? I'm asking what the trade-offs are in these two designs. What benefit does the JVM gain from its memory overhead?
I guess one reason is that Java has to do everything itself (another aspect of platform independence). For instance, Swing draws its own components from scratch; it doesn't rely on the OS to draw them. That all has to take place in memory. Lots of stuff that Windows may do, but Linux does not (or does differently), has to be fully contained in Java so that it works the same on both.
Java also always insists that its entire library is "linked" and available. Since it doesn't use DLLs (they wouldn't be available on every platform), everything has to be loaded and tracked by Java itself.
Java even does a lot of its own floating-point math, since FPUs often give different results, which has been deemed unacceptable.
So if you think about all the stuff C# can delegate to the one OS it's tied to, versus all the stuff Java has to carry itself to compensate for OS differences, the difference in footprint should be expected.
I've run Java apps on two embedded platforms now. One was a spectrum analyzer, where it actually drew the traces; the other is set-top cable boxes.
In both cases, this minimum memory footprint hasn't been an issue. There HAVE been Java-specific issues; that just hasn't been one of them. The number of objects instantiated and Swing painting speed were bigger issues in these cases.
I don't know whether the initial memory footprint, or the footprint of a Hello World application, is important. The difference might be due to the number and sizes of the libraries that are loaded by the JVM / CLR. There can also be an amount of memory that is preallocated for garbage collection pools.
Every application that I know of uses a lot more than Hello World functionality, and will allocate and free memory thousands of times throughout its execution. If you are interested in the memory-utilization differences between the JVM and the CLR, here are a couple of links with good information:
http://benpryor.com/blog/2006/05/04/jvm-vs-clr-memory-allocation/
Memory Management Case study (JVM & CLR)
The Memory Management case study is in PowerPoint. A very interesting presentation.
It seems like Java is just using more virtual memory.
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
amwise 20598 0.0 0.5 22052 5624 pts/3 Sl+ 14:59 0:00 mono Test.exe
amwise 20601 0.0 0.7 214312 7284 pts/2 Sl+ 15:00 0:00 java Program
I made a test program in C# and one in Java that print the string "test" and wait for input. I believe that the resident set size (RSS) value more accurately reflects memory usage. The virtual memory usage (VSZ) is less meaningful.
As I understand it, applications can reserve a ton of virtual memory without actually using any real memory. For example, you can ask the VirtualAlloc function on Windows to either reserve or commit virtual memory.
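You can see the same distinction from inside the JVM. A minimal Java sketch: maxMemory() is the -Xmx ceiling (address space the JVM may claim), totalMemory() is the heap actually committed so far, and only the committed, touched pages show up as RSS:

    public class HeapNumbers {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            long mb = 1024 * 1024;
            System.out.printf("max   (reserved ceiling):  %d MB%n", rt.maxMemory() / mb);
            System.out.printf("total (committed heap):    %d MB%n", rt.totalMemory() / mb);
            System.out.printf("free  (committed, unused): %d MB%n", rt.freeMemory() / mb);
        }
    }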
EDIT:
Here is a pretty picture from my windows box:
(screenshot: http://awise.us/images/mem.png)
Each app was a simple printf followed by a getchar.
There is a lot of virtual memory usage by both Java and the CLR. The C version depends on just about nothing, so its memory usage is relatively tiny.
I doubt it really matters either way. Just pick whichever platform you are more familiar with and then don't write terrible, memory-wasting code. I'm sure it will work out.
EDIT:
This VMMap tool from Microsoft might be useful in figuring out where the memory is going.
The JVM counts all its shared libraries whether they use memory or not.
Task Manager is rather unreliable when it comes to reporting the memory consumption of programs; you should take it only as a rough guide.
The JVM loads lots of unnecessary core classes from rt.jar on each run. Unfortunately, the cross-dependencies between Java packages (java.lang <-> java.io) make it hard to do a partial runtime initialization. Not to mention that rt.jar itself is over 40 MB and needs a lot of time for lookup and decompression.
Java 6u10 and later seem to load things a bit more intelligently (there is jqs.exe, the Java Quick Starter service, which keeps the necessary data in memory for faster startup), and Java 7 is said to be better still.
Process Explorer on Windows reports Private Bytes correctly (private bytes are those memory regions that are not shared with any DLL).
A slightly bigger annoyance is that, after 10 years, the JVM still defaults to 64 MB of heap. It is really annoying to have to pass -Xmx almost every time (e.g. java -Xmx512m -jar app.jar) and to be unable to run demanding programs packaged as JARs with a simple double-click (unless I alter the command in the file-extension association).
The CLR is counted as part of the OS, so Task Manager doesn't report its memory consumption under the application's process.
I have a very large Java app. It runs on Tomcat and is your typical Spring/Hibernate webapp. It's easy for me to test the performance of database queries, since I can run those separately, but I have no idea how to look for Java bottlenecks on a stack like this. I tried Eclipse's TPTP profiler, but it really didn't seem to like my program, and I suspect that's because my program is too large. Does anyone have any advice on profiling a large webapp?
The VisualVM profiler that now comes with the JDK can be attached to running processes and may at least give an initial overview of the performance. It is based on the NetBeans profiler.
Try JProfiler. It's easy to integrate with Tomcat.
If you can get Tomcat and your application running in NetBeans, then you can use NetBeans' built-in profiler to test performance, memory usage, and so on. See the wiki page on Tomcat in NetBeans.
I have used YourKit to profile applications with an 8 GB heap and it worked quite well.
Check out JAMon. It's not a full profiler, but it's the best tool I can recommend for this kind of investigation. It's very easy to integrate with Spring. We use it in both test and live environments.
I've never found an easy way to do this, because there's typically so much going on that it's hard to get a clear overall picture. With things like Hibernate it's even harder, because the correct behavior may be to grab a big chunk of memory for cached data even though your app isn't really "doing anything", so another, genuinely memory-inefficient process may get swamped in the profile.
Are you profiling for memory, for speed, or just looking for poor performance in general? Try to test the processes you suspect are bad in isolation; it's certainly much easier.
JProbe and JProfiler are both good, and free demos are available. Testing inside an IDE complicates the memory picture; I've found it easier not to bother.
Try JProfiler. It has a trial license and is very full-featured. To use it, you'll have to:
Add the JProfiler agent as an argument to your java command
Start the program on the server
Start JProfiler and choose "Connect to an application running remotely"
Give it the port number and whatever host it's running on
All these are in the instructions that come with JProfiler, but the important part is that you'll connect through a host and port to your running app.
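For the first step, the agent option looks roughly like this (a sketch: the library path depends on your JProfiler version and platform, and 8849 is only the usual default port); for Tomcat you would typically put it into CATALINA_OPTS:

    export CATALINA_OPTS="-agentpath:/opt/jprofiler/bin/linux-x64/libjprofilerti.so=port=8849"
    ./catalina.sh start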
As for what to profile, I'm sure you have an idea of things that could be memory/CPU intensive - loading large data sets, sorting, even simple network I/O if it's done incorrectly. Do these things (it's great if you can automate load testing using some scripts that bang on your server) and collect a snapshot with JProfiler.
Then view the graphs at your leisure. Turn on CPU monitoring and watch where the CPU cycles are being spent. You'll be able to narrow down by percentage in each method call, so if you're using more than 1 or 2% of CPU in methods that you have source for, go investigate and see if you can make them less CPU intensive.
Same goes for memory. Disable all the CPU profiling, enable all the memory profiling, run the tests again and get your snapshot.
Rinse, repeat.
You might also take this time to read up on memory management and garbage collection. There's no better time to tune your garbage collection than when you're already profiling: http://java.sun.com/docs/hotspot/gc5.0/gc_tuning_5.html
Pay special attention to the part on eden/survivor object promotion. In web apps you get a lot of short-lived objects, so it often makes sense to grow the young generation at the expense of the tenured generation.
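A sketch of what that tuning can look like on the HotSpot command line (the sizes here are purely illustrative, not recommendations): -Xmn sets the young-generation size, -XX:SurvivorRatio sets the eden-to-survivor-space ratio, and -XX:+PrintGCDetails logs collections so you can verify the effect:

    java -Xms512m -Xmx512m -Xmn192m -XX:SurvivorRatio=8 -XX:+PrintGCDetails -jar myapp.jar

For Tomcat, the same flags go into JAVA_OPTS or CATALINA_OPTS rather than a java command of your own.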