I need to do a performance analysis of a Java EE web application and optimize the code.
Can you suggest ways of doing this?
To start with, I am checking the server logs.
Since your question is broad, the answer can only be general:
Whatever you want to improve, the first rule is to measure it. And always measure again after each attempted improvement!
Memory
Regarding memory optimizations, you should acquire heap dumps of the running application and analyze them. A very helpful tool for such an analysis is the Eclipse Memory Analyzer Tool (MAT).
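One way to grab a dump programmatically, without external tools, is the HotSpot diagnostic MBean; a minimal sketch (this is a HotSpot-specific API, and the output file name is chosen for illustration):

import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapDumper {
    public static void main(String[] args) throws Exception {
        // Look up the HotSpot diagnostic bean on the platform MBean server.
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        // "true" dumps only live objects (a GC runs first); the resulting
        // .hprof file can be opened directly in the Eclipse Memory Analyzer.
        bean.dumpHeap("heap.hprof", true);
    }
}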
Profiling
If you want to improve performance and reduce the runtime of your code, you should start with profiling; JVisualVM is a good tool for that. To put some load on a web application, JMeter can help.
Rules of Performance tuning
First measure to identify the bottlenecks, then pick the "biggest" offenders to optimize. After optimizing, measure again to verify your result. If you are still not happy, start over with measuring.
Know the real slow parts of your application
Before even starting to measure, you should identify exactly the situations in which your application is really slow; otherwise you might not notice a difference, or even "de-optimize".
Use a good Java profiler and figure out problem points like high memory usage, high CPU usage, etc.
Look at YourKit and/or JProfiler. You can use their trial versions for your case.
Multiple tools are available to do performance analysis.
You can use JMeter to do some load testing and see what performance you are getting. If you find that performance is bad for certain features, dig into those to find the bottlenecks.
You can use JProfiler to analyse the JVM of the web application.
Try an application monitoring tool like New Relic. It will tell you which server-side components have the slowest response times, and then let you drill down to which calls within the application consume the most resources. That should be a good start.
Related
I need to optimise a Java application. It makes some 3rd-party calls, and I need a good tool to accurately measure the time taken by the individual API calls.
To give an idea of the complexity:
The application takes a data source file containing 1 million rows and needs around one hour to complete the processing. As part of the processing, it makes some 3rd-party calls (including some network calls). I need to identify which calls are taking more time than others and, based on that, find a way to optimise the application.
Any suggestions would be appreciated.
I can recommend JVisualVM. It's a great monitoring/profiling tool that is bundled with the Oracle/Sun JDK. Just fire it up, connect to your application, and start the CPU profiling. You should get great histograms of where the time is spent.
Getting Started with VisualVM has a great screen-cast showing you how to work with it.
Another, more rudimentary, alternative is to go with the -Xprof command-line option:
-Xprof
Profiles the running program, and sends profiling data to standard output. This option is provided as a utility that is useful in program development and is not intended to be used in production systems.
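For example, a run with the option enabled might be started like this (MyMainClass is just a placeholder for your application's entry point):

java -Xprof MyMainClass

The flat profile is printed to standard output when the program exits.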
I've used YourKit a few times and was quite happy with it. However, I've never profiled a long-running operation.
Is the processing the same for each row? If so, the size of the input file doesn't really matter: you could profile a subset to figure out which calls are expensive.
Just wanted to mention the inspectIT tool. It recently became completely open source (https://github.com/inspectIT/inspectIT). It provides a complete and detailed call graph with contextual information, and there are many out-of-the-box sensors for database calls, HTTP monitoring, exceptions, etc.
Seems perfect for your use case.
Try OPNET's Panorama software product
It sounds like a normal profiler might not be the right tool in this case, since profilers are geared towards measuring the CPU time taken by the program being profiled rather than by the external APIs it calls, and they tend to incur a high overhead of their own and to collect a large amount of data that would probably overwhelm your system if left running for a long time.
If you really need to collect performance data over such a long time, and mainly for external calls, then Perf4J is probably a better tool.
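A minimal sketch of what such timing can look like with Perf4J's LoggingStopWatch (the tag and the wrapped call are illustrative, not from the original question):

import org.perf4j.LoggingStopWatch;
import org.perf4j.StopWatch;

public class TimedCall {
    // Wraps any external call in a Perf4J stop watch; the elapsed time
    // is logged under the given tag when stop() is called, and Perf4J's
    // log parser can later aggregate the timings per tag.
    static void timed(String tag, Runnable externalCall) {
        StopWatch watch = new LoggingStopWatch(tag);
        try {
            externalCall.run();
        } finally {
            watch.stop();
        }
    }

    public static void main(String[] args) {
        timed("thirdParty.networkCall", () -> {
            // hypothetical third-party/network call goes here
        });
    }
}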
In our office we use the YourKit profiler on a day-to-day basis. It's really lightweight and covers most of the performance-related use cases we have had.
But I have also used VisualVM. It's free and fast. You may first want to give VisualVM a try before going to YourKit (YourKit is not freeware).
VisualVM (part of the JDK) and Java 7 can produce detailed profiling.
I use the profiler in NetBeans (it is really brilliant and already built in, no need to install a plugin), or JVisualVM when not using NetBeans.
It's difficult to find all bottlenecks, deadlocks, and memory leaks in a Java application using unit tests alone.
I'd like to add some level of stress testing for my application. I want to test the limits of the application and determine how it reacts under high load.
I'd like to gauge the following:
Availability under high load
Performance under high load
Memory / CPU / Disk Usage under high load
Does it crash under high load, or does it react gracefully?
It would also be interesting to measure and contrast such characteristics under normal load.
Are there well-known, standard techniques to address stress testing?
I am looking for help / direction in setting up such an environment.
Ideally, I would like to run these tests regularly so that we can determine whether recent deliveries impact performance.
I am a big fan of JMeter. You can set up calls directly against the server just as users would make them. You can control the number of users (concurrent threads) and accesses. It can follow a workflow, scraping pertinent information from page to page. It takes 1 to 2 days to learn it well enough to be productive. (You can do the basics within an hour of downloading!)
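Once a test plan exists, it can also be run headless, which is handy for the regular, scheduled runs you mention; a typical non-GUI invocation (the file names are placeholders) looks like:

jmeter -n -t testplan.jmx -l results.jtl

Here -n runs without the GUI, -t names the test plan, and -l records the results for later analysis.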
As for seeing how all that affects the server, that is a tougher question. I have used professional tools from CA and IBM. (I am drawing a blank on the specific tool names, maybe due to PTSD!) I have used out-of-the-box JVM profilers. I have used native Linux and Windows tools. If you are not too concerned about profiling which parts of your application cause issues, then you can just use the native tools of your OS to monitor CPU/memory/IO.
One of our standard techniques is running stepped-ramp load tests to measure scalability.
There are two main approaches to application performance:
Performance tests and system tests
How do they differ? It's simple: it's about scope. A performance test's scope is limited and highly unrealistic. Example: testing the IncomingMessage handler of some app X. For this you would set up a test which sends messages to this handler on an X, Y, Z basis. This approach will help you pin down problems and measure the performance of individual, limited areas of your application.
This should now lead you to the question: am I to benchmark and performance-test each one of the components in my app individually? Yes, if you believe the component's behavior is critical and changes in newer versions are likely to introduce performance penalties. But if you want to get a feel for your application as a whole, the bunch of components interacting with each other, and see how the performance comes out, then you need a system test.
A system test will always try to replicate, as closely as possible, a customer's production environment. Here you can observe what the real-world performance of your app feels like and act accordingly to correct it.
So, in conclusion: set up a system test for your app and measure what you said you wanted to measure. Then stress the system as a whole and see how it reacts; you will be surprised by the outcome.
Finally, performance-test individually any critical components you have identified or would like to keep tracking in your app.
As a general guideline, when doing performance you should always:
1.- Get a baseline for the system on an idle state.
2.- Get a baseline for the system under normal expected load.
3.- Get a baseline for the system under stress conditions.
Keep in mind that normal-load results should be extrapolated to stress conditions, and a nice system will always be the one that scales linearly.
Hope this helps.
P.S. Tests, environment setup, and even data collection should be as fully automated as possible; this will help you run them on a regular basis and spend your time diagnosing performance problems rather than setting up the test.
As mentioned by others, tools like JMeter (and commercial tools like LoadRunner) can help you generate concurrent test load.
Many monitoring tools (some provided within the JDK, like Mission Control; other open-source/free tools like JavaMelody; and many commercial ones) can help you do generic monitoring of system resources (memory, CPU, network bandwidth) and JVM resources (heap, CPU, GC overhead, etc.).
But to really identify bottlenecks within your code, as well as in your application's other dependencies (external services invoked, DB queries/updates, etc.), quickly and easily, I recommend considering a good APM, i.e. Application Performance Monitoring, tool like AppDynamics or Dynatrace. They can help you pinpoint bottlenecks at the level of a specific request, highlight the slower parts of the app, and generate percentile metrics at the individual service endpoint or component/method level. They can be immensely useful if one is dealing with very high concurrent-user counts and stringent response-time NFRs, and they help uncover bottlenecks across the layers of your application. Many teams even configure these tools in production (expect 2-3% overhead, but worth it in my opinion for the benefits they provide): production logging is not at debug level by default, so once an error or slowness is observed, it is often extremely difficult to reproduce in lower environments or to debug without debug-level logs from the specific past duration.
There's no one tool to tackle this as far as I know, so build your own environment:
Load Injecting & Scripting: JMeter, SOAP UI, LoadUI
Scheduling Tests & Automation: Jenkins, Rundeck
Analytics on transaction data, resources, application performance logs: AppDynamics, ElasticSearch, Splunk
Profiling: AppDynamics, YourKit, Java Mission Control, VisualVM
Could you explain the steps involved in profiling a Java application, irrespective of whatever profiling tool is used? What are the best practices and the steps involved in profiling Java applications?
Experts, any links or documents are really appreciated.
Thanks.
Thanks. What I want to know is this: there are so many profilers available, but when we profile a Java application for OutOfMemoryErrors, memory leaks, etc., what steps do we need to go through? Let's say I am using VisualVM, which does have a profiler, and I am getting an OutOfMemoryError; my application is so huge that I don't know where exactly the problem is, and even the logger is of no use (just for the sake of assumption). In such a case, how can we figure out where exactly the problem is by using a profiler tool like VisualVM, and what steps do we need to take? Can we go directly to CPU and memory profiling, or do we first need to take a thread dump and analyse it, then create a heap dump and analyse it, and only then go for CPU and memory profiling? I am a little confused here, so please point me in the right direction, along with the steps involved in profiling a Java application to find memory leaks. I hope my question is clear.
Depending on why you need to profile your application, you have to decide what filters you will need. As mentioned in the comments, the question is very general; you should provide some more precise information to get help here.
Try the following link (in Eclipse):
An introduction to profiling Java applications
And check this list of open-source Java profilers.
I have used JProfiler and YourKit, but I was not satisfied with the output for actual performance tuning. We have since switched to JavaMelody. It helps with performance optimization not only in development but also in production. JavaMelody is very easy to integrate and configure, and in production you can enable or disable it just by updating web.xml.
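For reference, a minimal sketch of the web.xml wiring, based on JavaMelody's documented monitoring filter (the filter name is arbitrary; once deployed, the reports are served under the application's /monitoring URL):

<filter>
    <filter-name>javamelody</filter-name>
    <filter-class>net.bull.javamelody.MonitoringFilter</filter-class>
</filter>
<filter-mapping>
    <filter-name>javamelody</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>

Commenting these entries out and redeploying is what makes it easy to switch the monitoring off again.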
This series of articles should give you a good idea on how you go about serious performance investigation of a relatively complex Java application.
http://www.jinspired.com/solutions/case-studies/scala-compiler
How can we increase the performance of an application? My application is written using Java, Hibernate, and Servlets, and uses WSDL for web services. I have executed some of the tests on a Linux machine so that I can get a proper TPS figure for the execution.
But I am still not satisfied with the performance.
So, what steps should I try in order to increase the performance?
In addition to the above, I have run code coverage and used FindBugs on the code for each and every test and every service I have written.
Individual suggestions are invited.
Thanks.
Profile your application, and remove all of your bottlenecks.
In addition, or better yet beforehand, take a day or two and read as much from the Java Performance Tuning newsletters as you can digest.
You should monitor your application with a tool like VisualVM, JProfiler etc. to determine the performance bottleneck(s). It is pointless to tune the application without knowing where the actual performance problems are located.
In a professional environment, I suggest dynaTrace that can show you performance bottlenecks along the execution path. The tool can show you exactly where the application spends its time.
Is the performance related to disk I/O or network I/O? In a high-throughput system (from the DB's point of view), Hibernate might not be the best way to go. If you have a lot of writes, I would recommend using a different mechanism to write to the database; perhaps simply switching to plain JDBC might speed it up.
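As an illustration of that last suggestion, here is a minimal sketch of batched writes over plain JDBC (the table, column, and method names are invented for the example):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class BatchWriter {
    // Batching inserts over plain JDBC avoids the per-entity overhead
    // an ORM adds on write-heavy paths.
    static void writeMessages(Connection conn, Iterable<String> messages) throws SQLException {
        String sql = "INSERT INTO events (message) VALUES (?)"; // hypothetical table
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            int pending = 0;
            for (String message : messages) {
                ps.setString(1, message);
                ps.addBatch();
                if (++pending % 1000 == 0) {
                    ps.executeBatch(); // flush in chunks to bound memory use
                }
            }
            ps.executeBatch(); // flush the remainder
        }
    }
}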
Secondly, are your web services taking too long to get back with results? SOAP is not the fastest protocol, really; have you looked at something like REST, maybe coupled with JSON?
I am attempting to solve performance issues with a large and complex tomcat java web application. The biggest issue at the moment is that, from time to time, the memory usage spikes and the application becomes unresponsive. I've fixed everything I can fix with log profilers and Bayesian analysis of the log files. I'm considering running a profiler on the production tomcat server.
A Note to the Reader with Gentle Sensitivities:
I understand that some may find the very notion of profiling a production app offensive. Please be assured that I have exhausted most of the other options. The reason I am considering this is that I do not have the resources to completely duplicate our production setup on my test server, and I have been unable to cause the failures of interest on my test server.
Questions:
I am looking for answers which work either for a java web application running on tomcat, or answer this question in a language agnostic way.
What are the performance costs of profiling?
Any other reasons why it is a bad idea to remotely connect and profile a web application in production (strange failure modes, security issues, etc)?
How much does profiling affect the memory footprint?
Specifically are there java profiling tools that have very low performance costs?
Any java profiling tools designed for profiling web applications?
Does anyone have benchmarks on the performance costs of profiling with visualVM?
What size applications and datasets can visualVM scale to?
OProfile and its ancestor DCPI were developed for profiling production systems. The overhead for these is very low, and they profile your full system, including the kernel, so you can find performance problems in the VM and in the kernel and libraries.
To answer your questions:
Overhead: These are sampled profilers, that is, they generate timer or performance counter interrupts at some regular interval, and they take a look at what code is currently executing. They use that to build a histogram of where you spend your time, and the overhead is very low (1-8% is what they claim) for reasonable sampling intervals.
Take a look at this graph of sampling frequency vs. overhead for OProfile. You can tune the sampling frequency for lower overhead if the defaults are not to your liking.
Usage in production: The only caveat to using OProfile is that you'll need to install it on your production machine. I believe there's kernel support in Red Hat since RHEL3, and I'm pretty sure other distributions support it.
Memory: I'm not sure what the exact memory footprint of OProfile is, but I believe it keeps relatively small buffers around and dumps them to log files occasionally.
Java: OProfile includes profiling agents that support Java and that are aware of code running in JITs. So you'll be able to see Java calls, not just the C calls in the interpreter and JIT (a sketch of attaching the agent follows at the end of this answer).
Web Apps: OProfile is a system-level profiler, so it's not aware of things like sessions, transactions, etc. that a web app would have.
That said, it is a full-system profiler, so if your performance problem is caused by bad interactions between the OS and the JIT, or if it's in some third-party library, you'll be able to see that, because OProfile profiles the kernel and libraries. This is an advantage for production systems, as you can catch problems that are due to misconfigurations or particulars of the production environment that might not exist in your test environment.
VisualVM: Not sure about this one, as I have no experience with VisualVM
Here's a tutorial on using OProfile to find performance bottlenecks.
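To make OProfile aware of JIT-compiled Java code, the JVM is typically started with OProfile's JVMTI agent while the system-wide profiler (opcontrol, or the newer operf) runs alongside. A sketch, assuming the agent library's usual location (the exact path varies by distribution, and MyApp is a placeholder for your application):

java -agentpath:/usr/lib/oprofile/libjvmti_oprofile.so MyApp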
I've used YourKit to profile apps in a high-load production environment, and while there was certainly an impact, it was easily an acceptable one. YourKit makes a big deal of being able to do this in a non-invasive manner, such as by selectively turning off certain profiling features that are more expensive (it's a sliding scale, really).
My favourite aspect of it is that you can run the VM with the YourKit agent attached and it has zero performance impact; it's only when you connect the GUI and start profiling that it has an effect.
There is nothing wrong with profiling production apps. If you work on distributed applications, there are times when an OutOfMemoryError occurs in a very low-probability scenario that is very difficult to reproduce in a dev/stage/UAT environment.
You can try using custom profilers, but if you are in a hurry and plugging in or setting up a profiler on a production box would take time, you can also use the JVM itself to take a memory dump (the JVM's memory dump also gives you a thread dump).
You can activate automatic generation on the JVM command line by using the following option:
-XX:+HeapDumpOnOutOfMemoryError
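If the process is still up, a dump can also be taken on demand with the JDK's jmap tool (here <pid> stands for the Tomcat process id and the file name is just an example):

jmap -dump:live,format=b,file=heap.hprof <pid>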
The Eclipse Memory Analyzer project has a very powerful feature called "group by value", which makes it possible to build an object query and regroup the instances by a field value. This is useful when you have a lot of instances containing a smaller set of possible values and you want to see which values are used the most. This has really helped me understand some complex memory dumps, so I recommend you try it out.
You may also consider using the tooling of the modern HotSpot JVM: Java Flight Recorder and Java Mission Control. It is a set of tools that allows you to collect low-level runtime information with a CPU overhead of about 5% (I cannot prove that last figure; it is the statement of the Oracle engineer who presented the feature in a live demo).
You can use these tools as long as your application is running on a JDK 7u40 JVM or higher. To enable the runtime info collection, you need to start the JVM with particular flags:
By default, JFR is disabled in the JVM. To enable JFR, you must launch your Java application with the -XX:+FlightRecorder option. Because JFR is a commercial feature, available only in the commercial packages based on Java Platform, Standard Edition (Oracle Java SE Advanced and Oracle Java SE Suite), you also have to enable commercial features using the -XX:+UnlockCommercialFeatures option.
(Quoted http://docs.oracle.com/javase/8/docs/technotes/guides/jfr/about.html#sthref7)
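For instance, a run that records the first two minutes after startup to a file might be launched like this (MyApp and the file name are placeholders):

java -XX:+UnlockCommercialFeatures -XX:+FlightRecorder -XX:StartFlightRecording=duration=120s,filename=recording.jfr MyApp

The resulting recording.jfr can then be opened in Java Mission Control.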
I added this answer because this is a viable option for profiling in production, IMO.
There is also an Eclipse plugin that supports JFR and JMC and is capable of displaying the information in a user-friendly way.
The tools have improved vastly over the years. These days, most people with needs like these use a tool that hooks into Java's instrumentation API instead of the profiling API. Surely there are more examples, but New Relic and AppDynamics come to mind. Instrumentation-based solutions usually run as an agent in the JVM and constantly collect data. They report the data at a higher level (business transaction, web transaction, database transaction) than the old profiling approach and allow you to dig deeper (down to the method or line) if necessary. You can even set up monitoring and alerts, so you can track and alert on metrics like page load times and performance against SLAs. With these great tools, you really should have no reason to run a profiler in production any longer; the cost of running them is negligible.
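To make the mechanism concrete, here is a hedged, minimal sketch of such an agent built on java.lang.instrument (the class and jar names are illustrative; a real APM agent does far more):

import java.lang.instrument.Instrumentation;

// Packaged in a jar whose manifest declares "Premain-Class: MonitoringAgent"
// and attached at startup with -javaagent:agent.jar.
public class MonitoringAgent {
    public static void premain(String args, Instrumentation inst) {
        // A real agent registers a ClassFileTransformer here to weave
        // timing/tracing code into selected methods as classes are loaded.
        inst.addTransformer((loader, className, classBeingRedefined,
                             protectionDomain, classfileBuffer) -> {
            // Returning null leaves the class bytes unchanged; a real
            // transformer would return rewritten bytecode instead.
            return null;
        });
    }
}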