I am working on an enterprise application that previously used the MyEclipse tool for Java/Java EE development, EJB 2.1, and WAS 7.0. We recently migrated to EJB 3.1, WebSphere 8.5.5, and Eclipse Kepler. Since then we have noticed that the performance of the application has improved and the screens load faster.
The problem I am facing now is comparing the earlier application with the upgraded one and identifying the areas that led to the speedup. No performance metrics have ever been recorded for this application, so I have nothing to compare against.
My only idea so far is to deploy the pre-upgrade application on one box and the post-upgrade application on another and record the load time of all the screens. This is not as simple as it sounds, so I would like to know whether there are any tools or strategies for comparing two running applications that produce performance metrics for EJB method time, JSP load time, business-logic time, and database operations, giving a true benefit analysis of the upgrade.
Also, could the upgrade of the application server and the Integrated Development Environment (Eclipse Kepler) have contributed to this speedup?
If you still have both environments (WAS 7 and WAS 8.5.5) and some load scripts, I'd suggest using PMI (Performance Monitoring Infrastructure) in WAS. You can enable the metrics that interest you, set the data to be saved to a log, and run the tests against both environments. You will then be able to compare the gathered metrics for the two environments.
The other option is the free WebSphere Application Server Performance Tuning Toolkit, which can also be used to gather performance data. It is available either standalone (older version) or as a plugin to IBM Support Assistant (ISA).
Could the upgrade of the application server and Integrated Development Environment (Eclipse Kepler) have contributed to this speed?
Sure. WAS 8.5.5 is in general faster than v7.0. For example, it uses the gencon garbage collection policy by default, which in most cases is more efficient than optthruput.
The development environment has no impact on application runtime performance, but it may be more responsive during development, and that is why things feel 'faster' to you.
I think what you need is to benchmark both versions of the application and then compare them to see the improvement.
To compare both versions, follow the approach below:
Deploy both versions on identical hardware so you have separate instances of the two versions.
Identify workflows/scenarios in which you noticed the improvement, as well as scenarios that are important for your application (most used, heaviest, most important for the client, etc.).
Carry out performance/load tests on those scenarios against both versions.
Measure response times for all pages as well as system metrics, i.e. CPU, memory, paging, disk, etc. (a simple timing sketch follows this list).
Analyze the results from both versions and compare them.
If required, carry out a performance tuning and optimization round to improve the results.
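As a minimal sketch of the response-time measurement step, a crude single-user timer in plain Java could look like the following; the URLs are placeholders, and a real comparison would of course drive both boxes with a proper load-testing tool:

    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    // Crude single-user timing of a few key screens on the pre- and post-upgrade boxes.
    public class PageTimer {
        public static void main(String[] args) throws Exception {
            String[] pages = {
                "http://pre-upgrade-host:9080/app/login.jsp",   // placeholder URL
                "http://post-upgrade-host:9080/app/login.jsp"   // placeholder URL
            };
            int runs = 10;
            for (String page : pages) {
                long total = 0;
                for (int i = 0; i < runs; i++) {
                    long start = System.nanoTime();
                    HttpURLConnection con = (HttpURLConnection) new URL(page).openConnection();
                    InputStream in = con.getInputStream();
                    byte[] buf = new byte[8192];
                    while (in.read(buf) != -1) { /* drain the full response body */ }
                    in.close();
                    total += System.nanoTime() - start;
                }
                System.out.printf("%s: avg %.1f ms over %d runs%n", page, total / 1e6 / runs, runs);
            }
        }
    }

This only measures wall-clock page time from the client side; the PMI/profiler suggestions in the other answers are what break that time down into EJB, JSP, and database components.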
This was about strategy.
As for tools:
For system metrics, check Ganglia, Munin, Graphite, Carbon, sar, perfmon, or nmon (if it's a cluster, RRD-based tools like Ganglia or Munin are better; for a single-box instance, sar will do on Linux and perfmon on Windows).
For load testing, JMeter is a good option, but if you have enough funding go for LoadRunner, NeoLoad, or Rational Performance Tester; for the cloud, try BlazeMeter (see the example run after this list).
For Java EE-level analysis, IBM Health Center is available (in my opinion quite inefficient to use), as are JProfiler, YourKit, and jvisualvm.
For WAS, the Performance Monitoring Infrastructure is available; with the standard options it has low overhead, but if you increase the logging counters and levels it has a huge performance impact.
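For the load-testing step, a minimal non-GUI JMeter run might look like this (the test-plan and result file names are placeholders):

    jmeter -n -t upgrade-comparison.jmx -l results.jtl

Here -n runs without the GUI, -t points at the test plan, and -l writes the raw sample results you can later compare between the two environments.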
I hope things are clear now :)
We have a couple of custom portlet applications running inside Liferay Portal.
The solution is installed on the client's computer, which is entry-level (RAM <= 1 GB). Due to red tape, it is rather unlikely that the client will switch to higher-end computers in the short term.
The issue is that the applications are very slow.
What are some hints for optimizing the Liferay configuration (or the portlet applications) so that we can run decently on entry-level computers?
Or would it be a good move to switch the portlets to lighter portlet-container alternatives such as Apache Pluto or GateIn?
Or is running a portal like Liferay on entry-level computers simply not an option, and should we consider porting the existing portlets to separate standard Java web applications to achieve better performance?
Compare the price of tuning, minimizing the footprint, and measuring the result with the price of just one more gigabyte of RAM - which you might not even be able to purchase in that size any more.
Then compare the price of porting from a portal environment to standalone Java web applications: you can't even be sure that this will result in a lower footprint, as you'll have to redo quite a bit of functionality that Liferay provides out of the box - identity management, for example, or content management. This will take time (which equals money) that might be better spent on just a new server.
For ~40€/month you can get a hosted server, including network connectivity, power and even support, that is way more capable of serving an application like this than a server the size of a Raspberry Pi (<40€ total, I've seen Raspberry Pi hosting for less than 40€ per year).
I don't know what you mean by "red tape", but I'd say you're definitely aiming at the wrong target. While there is a point to tuning Liferay, I wouldn't go for this kind of optimization.
You don't mention the version you're using - with that hardware I'm assuming it's an ancient one. Before the current version, Liferay was largely monolithic. While you can configure quite a bit (caches, deactivating some functionality), those changes won't bring drastic advantages. The current version has been modularized, and you can remove components that you don't use, lowering the footprint - however, it has not been built for that size of infrastructure.
And when you're running the portal on that kind of hardware, you're not running the database and an extra webserver on the same box as well, right? This would be the first thing to change: Minimize everything that's running outside of Liferay on the same OS/Box.
We are implementing a bespoke third-party J2EE application on a six-server WebLogic cluster (latest versions of the Oracle products, running on SuSE). The supplier is suggesting that we schedule a restart of each WebLogic instance every week, on Monday morning at 3 am.
I'm no WebLogic expert and I can't seem to track down any best-practice guidelines on the subject of regular restarts, but I'm used to working in environments where clustered app server uptime is measured in much longer periods than 7 days...
My concern is that this is intended to mask issues in the J2EE app itself. Can anyone point me towards best-practice guidance related to WebLogic that I may have missed, or confirm that this may be a legitimate suggestion from the application vendor?
We don't always get perfect code, flawless applications, and the best programmers to work with; in fact, a lot of code is written by junior programmers at low cost. So it is reasonable to expect some bugs in these J2EE applications (depending on the OS patch level, the Java version, the application itself, etc.). Memory leaks are one of the problems that lead to a regular-restart policy, so the application does not go down during business hours. Other problems stay hidden and can't easily be found.
That's the reason to recommend restarting the application fortnightly, weekly, or even daily (I DO see some business Java applications restarted every night).
If you really want to troubleshoot the application, you could install an APM (application performance management) tool to help you find out why the application has memory leaks, unstable behavior, etc.
You can search on Google or read this URL as a starting point: http://en.wikipedia.org/wiki/Application_performance_management
I have a web application that runs quite slowly. Requests from one JSP page to another take a long time. I have to measure its performance and find the classes taking up the most time; in other words, I have to do an end-to-end analysis. Please advise me about free profiler tools for measuring the performance of a web-based application.
The one I have found is http://visualvm.java.net/features.html, but I am looking for free profilers aimed at Java EE web-based applications.
Also, what about JProfiler (http://www.ej-technologies.com/products/jprofiler/whatsnew72.html)? I think it is a good tool too, but not a free one.
My one and only recommendation for your requirements would be JavaMelody: http://code.google.com/p/javamelody/. It's great, free, and gives a clear overview of which methods - or even which SQL statements - take up most of the time.
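As a rough sketch of one way to wire it in (assuming the JavaMelody JAR is on the classpath and a Servlet 3.0+ container; the classic alternative is declaring the same filter in web.xml):

    import java.util.EnumSet;
    import javax.servlet.DispatcherType;
    import javax.servlet.FilterRegistration;
    import javax.servlet.ServletContextEvent;
    import javax.servlet.ServletContextListener;
    import javax.servlet.annotation.WebListener;

    // Registers the JavaMelody monitoring filter on every request at startup.
    @WebListener
    public class MonitoringSetup implements ServletContextListener {
        @Override
        public void contextInitialized(ServletContextEvent sce) {
            FilterRegistration.Dynamic reg = sce.getServletContext()
                    .addFilter("javamelody", net.bull.javamelody.MonitoringFilter.class);
            reg.addMappingForUrlPatterns(EnumSet.of(DispatcherType.REQUEST), true, "/*");
        }

        @Override
        public void contextDestroyed(ServletContextEvent sce) {
        }
    }

Once deployed, the reports are served by the filter itself, by default under the webapp's /monitoring path.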
You can try HttpWatch; they have a free version. It is useful for end-to-end measurement. You can combine it with Selenium to measure scenarios.
http://www.httpwatch.com/
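For example, a scenario-timing sketch with Selenium WebDriver might look roughly like this (the URL is a placeholder, and a ChromeDriver installation is assumed):

    import org.openqa.selenium.JavascriptExecutor;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    // Loads a page in a real browser and reads the browser's own navigation timing.
    public class ScenarioTimer {
        public static void main(String[] args) {
            WebDriver driver = new ChromeDriver();
            try {
                driver.get("http://localhost:8080/app/login.jsp");   // placeholder URL
                Long loadMs = (Long) ((JavascriptExecutor) driver).executeScript(
                        "return window.performance.timing.loadEventEnd"
                        + " - window.performance.timing.navigationStart;");
                System.out.println("Page load took " + loadMs + " ms");
            } finally {
                driver.quit();
            }
        }
    }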
JavaMelody gives a different perspective, from the server/thread/connection standpoint:
http://code.google.com/p/javamelody/
JMeter helps you simulate load:
http://jmeter.apache.org/
If you are using Eclipse, try Eclipse TPTP (Eclipse Test & Performance Tools Platform Project).
TPTP addresses the entire test and performance life cycle, from early testing to production application monitoring, including test editing and execution, monitoring, tracing and profiling, and log analysis capabilities. The platform supports a broad spectrum of computing systems including embedded, standalone, enterprise, and high-performance and will continue to expand support to encompass the widest possible range of systems.
I have a web application made entirely in Java. The webapp doesn't use any GUI/model framework; instead, it follows the Model-View-Controller pattern and is built only on the Servlet specification (Servlet 2.4).
The webapp has been under development since 2001, and it's very complex. Initially it was built to work with Tomcat 4.x/5.x; currently it runs on Tomcat 6.x. But we still have memory leaks.
In depth, the specifications of the webapp can be summarized as:
Uses the Servlet 2.4 specification
It doesn't use any framework
It doesn't use Java EE (no EJBs)
It's based on Java SE (with servlets)
Works only on IE 6+ (because of its age)
Infrastructure Specification
Currently, the webapp runs in three environments:
First
IBM Server (I don't remember exactly the model)
Intel Xeon 2.4 GHz
32GB RAM
1TB HDD
Tomcat (Version 6) is configured to use 8GB of RAM
Second
Dell Server
Intel Xeon 2.0 GHz
4GB RAM
500GB HDD
Tomcat (Version 5.5) is configured to use 1.5GB of RAM
Third
Dell Server
AMD Opteron 1214, 2.20 GHz
4GB RAM
320GB HDD
Tomcat (Version 6) is Configured to use 1.5GB of RAM
Database specification
The webapp uses SQL Server 2008 R2 Express Edition as its DBMS, except for the first server specification above, which uses SQL Server 2008 R2 Standard Edition. For connection pooling, the app uses Apache DBCP.
Problem
Well, it has very serious performance issues. The webapp slows down continually and frequently stops responding altogether. The only way to recover it is to restart the Apache Tomcat service.
During a performance audit, I found several programming issues (like database connections that are never closed and excessive use of the Vector collection [instead of ArrayList]).
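For illustration, the pattern that was missing around the JDBC code looks roughly like this (the DAO name, DataSource wiring, and query are placeholders):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import javax.sql.DataSource;

    // Always returns pooled connections to DBCP, even when a query fails.
    public class CustomerDao {
        private final DataSource dataSource;   // obtained from the DBCP pool elsewhere

        public CustomerDao(DataSource dataSource) {
            this.dataSource = dataSource;
        }

        public String findName(int id) throws SQLException {
            // try-with-resources (Java 7+) closes ResultSet, Statement, and Connection
            // in reverse order, so the pool never leaks connections.
            try (Connection con = dataSource.getConnection();
                 PreparedStatement ps = con.prepareStatement(
                         "SELECT name FROM customer WHERE id = ?")) {
                ps.setInt(1, id);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getString(1) : null;
                }
            }
        }
    }

On Java 6 the equivalent is a try/finally block that closes each resource explicitly.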
I want to know how I can improve the performance of the app, and which applications can help me monitor Tomcat's performance and the webapp's memory usage.
All suggestions are gladly accepted.
You could also try stagemonitor. It is an open-source performance monitoring library. It records request response times, JVM metrics, and request details, including a call stack (profile) of the methods called during the request, and more. Because of its low overhead, you can also use it in production.
The tuning procedure would be the following:
Identify slow requests with the Request Dashboard
Analyze the call stack of the request with the Request Detail Dashboard to find the slow methods
Dive into your code and try to optimize those slow methods
You can also correlate metrics like throughput or the number of sessions with response time or CPU usage
Analyze the heap with the JVM Memory Dashboard
Note: I am the developer of stagemonitor.
I would start with some tools that can help you profile the application. Since you are developing a webapp, start with Lambda Probe and JavaMelody.
The first step is to determine the conditions under which the app starts to behave oddly. Ask yourself a few questions:
Do the performance issues arise right after the application starts, or over time?
Are the performance issues correlated with the number of client requests?
What is the real performance problem - high load on the server or lack of memory? (Note that they are related, so check which one starts first.)
Are there any background processes performing massive operations? Are they scheduled to run at particular times?
Try to find some clues before going deep into code. It will help you to narrow down possible causes.
As Joshua Bloch states in his book "Effective Java", performance issues are rarely the result of minor mistakes in the source code (although, of course, misuse of Java constructs can lead to disaster). Usually the cause is bad system (API) architecture.
One last suggestion based on my experience: try not to assume that high memory consumption is something bad. Tomcat will use as much memory as the operating system and JVM let it (up to the max heap setting), and only when it needs more will it perform garbage collection. So a typical (healthy!) graph of memory consumption looks like a saw. If you are dealing with a memory leak, the graph keeps climbing indefinitely instead. This is the most commonly misunderstood aspect of memory leaks, so keep it in mind.
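If you want to watch that graph without attaching a profiler, a crude sketch like the one below is enough to see whether the used heap keeps climbing or falls back into the saw pattern (the 30-second interval and System.out destination are arbitrary choices):

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    // Logs the used heap every 30 seconds; a steadily rising baseline suggests a leak.
    public class HeapWatcher {
        public static void start() {
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
            scheduler.scheduleAtFixedRate(new Runnable() {
                @Override
                public void run() {
                    Runtime rt = Runtime.getRuntime();
                    long usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
                    System.out.println("Used heap: " + usedMb + " MB");
                }
            }, 0, 30, TimeUnit.SECONDS);
        }
    }

Tools like JavaMelody or VisualVM will of course draw the same curve for you.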
To be honest, we cannot help you much further. Those are just pointers; now you will have to do extensive research to figure out the cause :)
The general solution is to use a profiler, e.g. YourKit, with a realistic workload that reproduces the problem.
What I do first is a CPU-only profile, then a memory-only profile, and finally a CPU & memory profile at once (I then look at the CPU profile results).
YourKit can also monitor high-level operations such as Java EE resources and JDBC connections. I haven't tried these as I don't use them. ;)
It can be a good idea to improve efficiency even where it is not the cause of the problem, as it will reduce the amount of "noise" in these profiles and make your real issues more obvious.
You could try increasing the amount of memory available, but I suspect it will just delay the problem.
OK. I have seen huge Java applications run on smaller configurations than this. You should try the following:
First, connect a profiler to your application and see which parts take the most time. You can use JProfiler or Eclipse MAT (I personally prefer JProfiler). Also look at the objects taking up the most memory. This will help you narrow down the parts you need to rewrite to improve performance.
Once you have looked at the memory leaks, update your application to use a 64-bit JDK (assuming it does not already).
Take a look at your JVM arguments and optimize them (an illustrative set is shown below).
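As an illustration only (the values are arbitrary and need to be tuned to your heap size and Java version), a starting point for one of the Tomcat instances described above might be set in setenv.sh like this:

    # setenv.sh - illustrative values only, tune to your own heap and Java version
    CATALINA_OPTS="-Xms1536m -Xmx1536m -XX:+UseConcMarkSweepGC"
    CATALINA_OPTS="$CATALINA_OPTS -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp"
    CATALINA_OPTS="$CATALINA_OPTS -verbose:gc -Xloggc:/var/log/tomcat/gc.log"
    export CATALINA_OPTS

The heap-dump and GC-log options in particular cost almost nothing and give you exactly the data the profilers above need.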
You can try the open-source tool Webapp Watcher to identify where in the code the performance issue is.
You first have to add a filter to the webapp (as explained here) to record the metrics, and then import the logs into the WAW Analyzer tool and follow the steps described in the docs to find the potential performance issue in the code.
I am attempting to solve performance issues with a large and complex Tomcat Java web application. The biggest issue at the moment is that, from time to time, memory usage spikes and the application becomes unresponsive. I've fixed everything I can fix with log profilers and Bayesian analysis of the log files. I'm considering running a profiler on the production Tomcat server.
A Note to the Reader with Gentle Sensitivities:
I understand that some may find the very notion of profiling a production app offensive. Please be assured that I have exhausted most of the other options. The reason I am considering this is that I do not have the resources to completely duplicate our production setup on my test server, and I have been unable to cause the failures of interest on my test server.
Questions:
I am looking for answers that either work for a Java web application running on Tomcat, or answer this question in a language-agnostic way.
What are the performance costs of profiling?
Any other reasons why it is a bad idea to remotely connect and profile a web application in production (strange failure modes, security issues, etc)?
How much does profiling affect the memory footprint?
Specifically, are there Java profiling tools that have very low performance costs?
Are there any Java profiling tools designed for profiling web applications?
Does anyone have benchmarks on the performance costs of profiling with VisualVM?
What size applications and datasets can VisualVM scale to?
OProfile and its ancestor DPCI were developed for profiling production systems. The overhead for these is very low, and they profile your full system, including the kernel, so you can find performance problems in the VM and in the kernel and libraries.
To answer your questions:
Overhead: These are sampled profilers, that is, they generate timer or performance counter interrupts at some regular interval, and they take a look at what code is currently executing. They use that to build a histogram of where you spend your time, and the overhead is very low (1-8% is what they claim) for reasonable sampling intervals.
Take a look at this graph of sampling frequency vs. overhead for OProfile. You can tune the sampling frequency for lower overhead if the defaults are not to your liking.
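To make the sampling idea concrete, here is a toy Java sketch of the same principle; this is only an illustration of how a sampler builds its histogram, not how OProfile works internally (OProfile samples at the OS level using hardware counters):

    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    // Periodically samples all thread stacks and counts the topmost frame of each.
    public class ToySampler {
        private final Map<String, Integer> histogram = new ConcurrentHashMap<String, Integer>();

        public void start() {
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
            scheduler.scheduleAtFixedRate(new Runnable() {
                @Override
                public void run() {
                    for (StackTraceElement[] stack : Thread.getAllStackTraces().values()) {
                        if (stack.length == 0) continue;
                        String frame = stack[0].getClassName() + "." + stack[0].getMethodName();
                        Integer count = histogram.get(frame);
                        histogram.put(frame, count == null ? 1 : count + 1);   // single writer thread
                    }
                }
            }, 0, 10, TimeUnit.MILLISECONDS);
        }

        public Map<String, Integer> snapshot() {
            return new HashMap<String, Integer>(histogram);
        }
    }

The frames that show up most often in the histogram are, statistically, where the time goes - which is exactly the report a sampling profiler gives you.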
Usage in production: The only caveat to using OProfile is that you'll need to install it on your production machine. I believe there's kernel support in Red Hat since RHEL3, and I'm pretty sure other distributions support it.
Memory: I'm not sure what the exact memory footprint of OProfile is, but I believe it keeps relatively small buffers around and dumps them to log files occasionally.
Java: OProfile includes profiling agents that support Java and that are aware of code running in JITs. So you'll be able to see Java calls, not just the C calls in the interpreter and JIT.
Web Apps: OProfile is a system-level profiler, so it's not aware of things like sessions, transactions, etc. that a web app would have.
That said, it is a full-system profiler, so if your performance problem is caused by bad interactions between the OS and the JIT, or if it's in some third-party library, you'll be able to see that, because OProfile profiles the kernel and libraries. This is an advantage for production systems, as you can catch problems that are due to misconfigurations or particulars of the production environment that might not exist in your test environment.
VisualVM: Not sure about this one, as I have no experience with VisualVM.
Here's a tutorial on using OProfile to find performance bottlenecks.
I've used YourKit to profile apps in a high-load production environment, and while there was certainly an impact, it was easily an acceptable one. YourKit makes a big deal of being able to do this in a non-invasive manner, such as selectively turning off certain profiling features that are more expensive (it's a sliding scale, really).
My favourite aspect of it is that you can run the VM with the YourKit agent attached and it has zero performance impact; it's only when you connect the GUI and start profiling that it has an effect.
There is nothing wrong with profiling production apps. If you work on distributed applications, there are times when an OutOfMemoryError occurs in a very low-probability scenario that is very difficult to reproduce in a dev/stage/UAT environment.
You can try using custom profilers, but if you are in a hurry and plugging in or setting up a profiler on a production box would take time, you can also use the JVM itself to take a memory dump (a JVM heap dump also records the thread stacks).
You can activate automatic generation on the JVM command line by using the following option:
-XX:+HeapDumpOnOutOfMemoryError
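For an on-demand dump of an already running JVM, the standard JDK tools work as well (the PID and file names are placeholders):

    jmap -dump:format=b,file=/tmp/heap.hprof <pid>    # heap dump, readable by Eclipse MAT
    jstack <pid> > /tmp/threads.txt                    # thread dump

Note that jmap pauses the JVM while the dump is written, so on a large production heap expect a noticeable stop-the-world pause.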
The Eclipse Memory Analyzer project has a very powerful feature called "group by value", which makes it possible to build an object query and group the instances by a field value. This is useful when you have a lot of instances containing a smaller set of possible values and you want to see which values are used the most. This has really helped me understand some complex memory dumps, so I recommend you try it out.
You may also consider using the modern HotSpot JVM tools - Java Flight Recorder and Java Mission Control. They let you collect low-level runtime information with a CPU overhead of about 5% (I cannot prove that figure myself; it is the statement of an Oracle engineer who presented the feature in a live demo).
You can use these tools as long as your application is running on a 1.7u40 JVM or higher. To enable runtime info collection, you need to start the JVM with particular flags:
By default, JFR is disabled in the JVM. To enable JFR, you must launch your Java application with the -XX:+FlightRecorder option. Because JFR is a commercial feature, available only in the commercial packages based on Java Platform, Standard Edition (Oracle Java SE Advanced and Oracle Java SE Suite), you also have to enable commercial features using the -XX:+UnlockCommercialFeatures options.
(Quoted from http://docs.oracle.com/javase/8/docs/technotes/guides/jfr/about.html#sthref7)
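Putting those flags together, an invocation that starts a fixed-length recording at launch might look like this (the duration, file name, and main class are placeholders):

    java -XX:+UnlockCommercialFeatures -XX:+FlightRecorder \
         -XX:StartFlightRecording=duration=120s,filename=/tmp/recording.jfr \
         com.example.Main

The resulting .jfr file can then be opened and analyzed in Java Mission Control.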
I added this answer because this is a viable option for profiling in production, in my opinion.
There is also an Eclipse plugin that supports JFR and JMC and is capable of displaying the information in a user-friendly way.
The tools have improved vastly over the years. These days, most people with needs like these use a tool that hooks into Java's instrumentation API instead of the profiling API. Surely there are more examples, but NewRelic and AppDynamics come to mind. Instrumentation-based solutions usually run as an agent in the JVM and constantly collect data. They report the data at a higher level (business transaction, web transaction, database transaction) than the old profiling approach and allow you to dig deeper (down to the method or line) if necessary. You can even set up monitoring and alerts, so you can track/alert on metrics like page load times and performance against SLAs. With these great tools, you really should have no reason to run a profiler in production any longer. The cost of running them is negligible.
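Attaching such an agent is usually just a JVM argument; an illustrative, vendor-neutral form (the jar path is a placeholder, and each vendor documents its own exact flags and configuration file) is:

    java -javaagent:/path/to/apm-agent.jar -jar myapp.jar

The agent then instruments classes as they are loaded and reports its measurements to the vendor's dashboard.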