Tuning a WebService - Java

I have a web service created with CXF. In my service I run an application which is very time-consuming.
My application takes about 30 minutes to execute on its own, but inside the web service it takes about 1.5 hours.
Is there something I can do to make my service faster?

There isn't any good reason for such a difference (assuming you're doing the same work). You are going to have to work out what is different about the environment or the input parameters you are using.
Try turning on -verbose:gc; it may be that you have just about maxed out the heap on the servlet container and the JVM is spending its whole life running the garbage collector over and over again.
Note you can also see this with jvisualvm, which comes with the JDK for free.
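For example, if you can control how the container's JVM is started, GC logging can be switched on with something like this (the heap size and jar name are just placeholders for your own setup):
java -verbose:gc -Xmx512m -jar your-servlet-container.jar
If the log shows full collections happening back to back, giving the container more heap (-Xmx) is the first thing to try.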

I would have a look at the application with jvisualvm in the Sun JDK.
My guess is that you have too little memory in your web service container, and that all the time is spent garbage collecting.

There really isn't enough information here to solve the issue; you need to figure out what is going on using some sort of tracing/profiling mechanism. It could be a memory issue. I don't know how you're actually launching the app, but it could be that it's getting assigned a very low-priority thread, versus a higher (user) priority thread when launched by JUnit. The web service itself would time out long before half an hour passed, let alone 1.5 hours, so are you using an async service, or launching your own thread and/or process from the service? If it's a separate process, how much memory is being allocated to it?
Once you've gathered this information, you're probably well on your way to really getting your answer.
YMMV
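One cheap thing to check while you gather that information: if you do launch the long-running work on your own thread from the service, look at the priority it ends up with. A minimal sketch (runLongJob() is just a placeholder for your own code):

Thread worker = new Thread(new Runnable() {
    public void run() {
        runLongJob(); // placeholder for the actual time-consuming work
    }
});
worker.setPriority(Thread.NORM_PRIORITY); // make sure it isn't stuck at MIN_PRIORITY
worker.start();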

Related

Java server application slow after period of idleness (Windows)

I'm having trouble with a Jetty 9 server application that seems to go into some kind of resting state after a longer period of idleness. Normally the memory usage of the Java process is ~500 MB, but after being idle for some time it seems to drop down to less than 50 MB. The first request that comes in takes up to several seconds to respond, whereas requests are normally handled on the scale of tens of milliseconds. But after one or two requests the application seems to be back to its normal responsive state.
I'm running on the 32-bit Oracle Java 8 JVM. My JVM configuration is very basic:
java -server -jar start.jar
I was hoping that this issue might be solvable through JVM configuration. Does anyone know if there's any particular parameter to disable this type of behavior?
edit: Based on the comment from Ivan, I was able to identify the source of the issue. Turns out Windows was swapping parts of the Java process out to disk. See my own answer below for a description of my solution.
Based on the comment from Ivan, I was able to identify the source of the issue. Turns out Windows was swapping parts of the Java process out to disk. This was clearly visible when comparing the private working set to the commit size in the task manager.
My solution to this was two-fold. First, I made a simple scheduled job inside my server app that runs every minute and does a simple test run to make sure that the important services never go inactive for long periods. I'm hoping this should ensure that Windows doesn't regard the related pages as inactive.
Afterwards, I also noticed that the process was executing with "Below normal" priority. So I changed the script that starts the server to ensure that it runs with "High" priority going forward. This seems likely to affect swapping behavior and may very well have been enough to resolve the issue on its own, but I only found it after already deploying my first solution, so that remains unclear. In any case, everything seems to be working as it should now.
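For reference, the keep-alive job mentioned above can be as simple as a ScheduledExecutorService that touches the important services once a minute. This is only a sketch, and warmUpImportantServices() is a placeholder for whatever "simple test run" makes sense in your application:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class KeepAliveJob {

    public static void start() {
        ScheduledExecutorService keepAlive = Executors.newSingleThreadScheduledExecutor();
        keepAlive.scheduleAtFixedRate(new Runnable() {
            public void run() {
                // exercise the important code paths so Windows keeps their pages resident
                warmUpImportantServices();
            }
        }, 1, 1, TimeUnit.MINUTES);
    }

    private static void warmUpImportantServices() {
        // placeholder: call whatever cheap test run makes sense in your app
    }
}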

Java Web App has a high rate of CPU consumption

I'm new here and I'm not very good with CPU consumption and multithreading. But I was wondering why my web app is consuming so much CPU. What my program does is update values in the background so that users don't have to wait for the processing of the data and only need to fetch it upon request. The updating processes are scheduled tasks using the executor library, firing off 8 threads every 5 seconds to update my data.
Now I'm wondering why my application is consuming so much of the CPU. Is it because of bad code or because of a low-spec server? (2 cores, with 2 databases and 1 major application running alongside my web app)
Thank you very much for your help.
You need to profile your application to find out where the CPU is actually being consumed. Java has some basic profiling methods built in, or if your environment permits it, you could run the built-in "hprof" profiler:
java -Xrunhprof ...
(In reality, you probably want to set some extra options: Google "hprof" for more details.)
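For what it's worth, a typical invocation with the extra options filled in looks something like this (a sampling CPU profile written to a text file; the jar name is just an example):

java -Xrunhprof:cpu=samples,depth=10,file=profile.hprof.txt -jar your-app.jar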
The latter is easier in principle, but I mention the possibility of adding your own profiling routine because it's more flexible and you can do it e.g. in a Servlet environment where running another profiler is more cumbersome.
Paulo,
It is not possible for someone here to say whether the problem is that your code is inefficient or the server is under spec. It could be either or both of those, or something else.
You are going to need to do some research of your own:
Profile the code. This will allow you to identify where your webapp is spending most of its time.
Look at the OS-level stats that are available to you. This might tell you that the real problem is memory usage or disk I/O (one way to get a rough first view of this from inside the JVM is sketched after this list).
Look at the performance of the back-end database. Is it using a lot of CPU?
Once you have identified the area(s) where the CPU is being used, you need to figure out what the real cause of the problem is and work out how to fix it. And once you've got a potential fix implemented, you can rerun your profiling, etc., to see if it has helped.
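On the OS-level stats point, you can get a rough first view from inside the JVM itself using the standard management beans, before reaching for external tools. A small sketch using only standard java.lang.management APIs (the load average reads -1.0 on platforms where it isn't available, e.g. Windows):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.OperatingSystemMXBean;

public class QuickStats {

    public static void log() {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        // one-minute system load average and current heap usage
        System.out.println("load avg: " + os.getSystemLoadAverage()
                + ", heap used: " + (mem.getHeapMemoryUsage().getUsed() / (1024 * 1024)) + " MB");
    }
}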

Large CPU usage on a java process under Linux

I am running into trouble determining what is wrong with my software.
The situation is:
- The program is always running in the background and every X minutes performs some actions.
- Right now it is set to check a certain directory every minute and see if there are new files in it.
- If there are new files, they are processed and moved somewhere else.
- If not, it simply logs the event and goes idle again.
I assume that when new files appear, CPU usage can be somewhat high.
The problem is that, even if I don't put new files in the directory for many days, the CPU usage will rise to ~90% every minute when it checks for new entries, then after some seconds return to <1% usage.
The same process under Windows seems more stable, always staying at low CPU usage.
If I monitor the CPU activity over a month, I can see that the average CPU usage for my Java process keeps growing (without putting in new files to 'activate' the rest of the process), and I have to restart the process for it to return to lower CPU usage levels.
I really don't understand this behaviour, so I don't know what may be affecting it.
If the log file is somewhat 'big', like 10-20 MB, would it require that much CPU to log a new entry every minute?
If there are many libraries loaded on the classpath for this process, will the CPU usage be increased even though many of these libraries won't be used most of the time?
Excuse me if I haven't been very clear in my question; I am somewhat new to this.
Thanks everyone in advance, regards.
--edit--
I note your advice; I will do some monitoring and post some code/results to share with you and see what you can come up with!
I am really lost right now!
If your custom monitoring code is causing a problem, you could always use something standard like Apache Commons IO's FileAlterationMonitor. It's simple to implement and it might be faster than fixing your current code.
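Roughly, using it looks like this (a sketch assuming commons-io 2.x is on the classpath; the directory path and the one-minute interval are just examples matching the setup described above):

import java.io.File;
import org.apache.commons.io.monitor.FileAlterationListenerAdaptor;
import org.apache.commons.io.monitor.FileAlterationMonitor;
import org.apache.commons.io.monitor.FileAlterationObserver;

public class IncomingDirWatcher {

    public static void main(String[] args) throws Exception {
        FileAlterationObserver observer = new FileAlterationObserver(new File("/path/to/incoming"));
        observer.addListener(new FileAlterationListenerAdaptor() {
            @Override
            public void onFileCreate(File file) {
                // process the new file and move it somewhere else
                System.out.println("new file: " + file);
            }
        });
        // poll once a minute, same as the current setup
        FileAlterationMonitor monitor = new FileAlterationMonitor(60 * 1000, observer);
        monitor.start();
    }
}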
Are you talking about a simple console application or a Swing/AWT app?
Is the application run every minute by the underlying OS scheduler, or is it a simple server process?
If the process is run as a server, how do you launch the VM? (server VM or client VM: the -server switch on the command line)
You may also want to check your garbage collector; sometimes logging frameworks use up too many objects without releasing their references.
Regards
M.

Keep JVM running on iseries

We are calling a Java program on an iSeries machine and the first call to the program is quite slow. The following calls are fast, but if we wait a certain time the call is slow again.
How can I keep the JVM up and running or is there another way to solve this problem?
Thanks
The newest JVMs (IBM Technology for Java) are the fastest available. The typical problem is that if the JVM's own jars are cached in memory then it is quite fast to load; if not, they need to be loaded from disk as needed, which is quite slow. (There is actually an acceleration process for this under Windows.)
You could consider having a small script which simply reads through all the jars for the JVM every X seconds, or implement a "communicate with a daemon JVM through data queues" design, which is the traditional approach for this.
You might want to consider making your Java application a server that is running all the time; your native app can send and receive requests to the server using TCP or data queues.
That way the start-up cost for the server is a one-time thing and none of the users ever have to suffer through it.
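A bare-bones version of that always-on server could look like the sketch below; the port number and the request handling are placeholders, and on the iSeries you may well prefer data queues over sockets as mentioned above:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class AlwaysOnService {

    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(9000); // example port
        while (true) {
            Socket client = server.accept();
            BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()));
            PrintWriter out = new PrintWriter(client.getOutputStream(), true);
            String request = in.readLine();
            out.println(handle(request)); // the JVM stays loaded and warm between requests
            client.close();
        }
    }

    private static String handle(String request) {
        return "OK: " + request; // placeholder for the real work
    }
}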
Allocate more memory to the subsystem the JVM is running in.
Do not call a static method because by definition a static class doesn't have to stay in memory.
Do call methods with *this.
Good Luck

How to detect Out Of Memory condition?

I have an application running on WebSphere Application Server 6.0 and it crashes nearly every day because of an Out-Of-Memory error. From the verbose GC output it is certain there are memory leaks (many of them).
Unfortunately the application is provided by an external vendor and getting things fixed is a slow & painful process. As part of the process I need to gather the logs and heap dumps each time the OOM occurs.
Now I'm looking for some way to automate this. The fundamental problem is how to detect the OOM condition. One way would be to create a shell script which periodically searches for new heap dumps. That approach seems kind of dirty to me. Another approach might be to leverage JMX somehow, but I have little or no experience in this area and don't have much idea how to do it.
Or is there some kind of trigger/hook for this in WAS? Thank you very much for any advice!
You can pass the following arguments to the JVM on startup and a heap dump will be automatically generated on an OutOfMemoryError. The second argument lets you specify the path for the heap dump file. By using this at least you could check for the existence of a specific file to see if a heap dump has occurred.
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=<value>
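Put together on a plain command line it would look something like this (the dump directory is just an example; make sure it exists and has room for a full heap dump, and in WAS you would normally add these flags under the server's generic JVM arguments rather than on a java command line):

java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/dumps -jar your-app.jar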
I see two options if you want heap dumping automated but @Mark's solution with a heap dump on OOM isn't satisfactory.
You can use the MemoryMXBean to detect high memory pressure, and then programmatically create a heap dump if the usage (or usage delta) seems high (a rough sketch of this is below).
You can periodically get memory usage info and generate heap dumps with a cron'd shell script using jmap (works both locally and remotely).
It would be nice if you could have a callback on OOM, but, uhm, that callback probably would just crash with an OOM error. :)
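A rough sketch of the first option: poll the MemoryMXBean and, when heap usage crosses a threshold, write a heap dump through the HotSpot diagnostic MXBean. Note the diagnostic bean is HotSpot-specific (so its availability on your JVM is an assumption), and the threshold and file path are just examples:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;
import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapWatcher {

    public static void checkAndDump() throws Exception {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = mem.getHeapMemoryUsage();
        if ((double) heap.getUsed() / heap.getMax() > 0.9) { // example threshold: 90% of max heap
            HotSpotDiagnosticMXBean diag = ManagementFactory.newPlatformMXBeanProxy(
                    ManagementFactory.getPlatformMBeanServer(),
                    "com.sun.management:type=HotSpotDiagnostic",
                    HotSpotDiagnosticMXBean.class);
            diag.dumpHeap("/dumps/high-usage-" + System.currentTimeMillis() + ".hprof", true);
        }
    }
}

Run it from a scheduled task (or from the cron'd script in the second option) so a dump is captured before the crash rather than after it.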
Have you looked at JConsole ? It uses JMX to give you visibility of a variety of JVM metrics, including memory info. It would probably be worth monitoring your application using this to begin with, to get a feel for how/when the memory is consumed. You may find the memory is consumed uniformly over the day, or when using certain features.
Take a look at the detecting low memory section of the above link.
If you need you can then write a JMX client to watch the application automatically and trigger whatever actions required. JConsole will indicate which JMX methods you need to poll.
An alternative to waiting until the application has crashed may be to script a controlled restart, say every night, if you're optimistic that it can survive for twelve hours.
Maybe WebSphere can even do that for you!?
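If you do end up writing a JMX client to watch the application, connecting remotely and reading the same memory MBean that JConsole shows is not much code. A sketch, assuming remote JMX is enabled on the server; the host and port are placeholders:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class RemoteMemoryCheck {

    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://apphost:9999/jmxrmi"); // placeholder host/port
        JMXConnector connector = JMXConnectorFactory.connect(url);
        MBeanServerConnection connection = connector.getMBeanServerConnection();
        MemoryMXBean mem = ManagementFactory.newPlatformMXBeanProxy(
                connection, ManagementFactory.MEMORY_MXBEAN_NAME, MemoryMXBean.class);
        System.out.println("remote heap used: " + mem.getHeapMemoryUsage().getUsed());
        connector.close();
    }
}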
You could add a listener (session-scoped or application-scoped attribute listener) class that would be called each time a new object is added in session/application scope.
In this you can attempt to check the total memory used by the app (and log it), as well as request a GC run (note that invoking it does not mean the GC will always run).
(The above covers the logging part and GC based on usage growth.)
For scheduled GC:
In addition you can keep a timer task class that runs every few hours and requests a GC.
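For the scheduled part, a plain java.util.Timer is enough; keep in mind that System.gc() is only a hint the JVM may ignore. A sketch, with the four-hour interval as an example:

import java.util.Timer;
import java.util.TimerTask;

public class PeriodicGcTask {

    public static void schedule() {
        Timer timer = new Timer("gc-timer", true); // daemon thread so it won't block shutdown
        timer.scheduleAtFixedRate(new TimerTask() {
            public void run() {
                Runtime rt = Runtime.getRuntime();
                System.out.println("used memory (MB): "
                        + (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024));
                System.gc(); // only a request - the JVM may ignore it
            }
        }, 0, 4 * 60 * 60 * 1000L); // every 4 hours, as an example
    }
}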
Our experience with ITCAM has been less than stellar from the monitoring perspective. We dumped it in favor of CA Wily Introscope.
Have you had a look at the jvisualvm tool in the latest Java 6 JDKs?
It is great for inspecting running code.
I'd dispute that you need the heap dumps when the OOM occurs. Periodic gathering of the information over time should give a picture of what's going on.
As has been observed, various tools exist for analysing these problems. I have had success with ITCAM for WebSphere; as an IBMer I have ready access to that. We were very quickly able to identify the exact lines of code in our problem situation.
If there's any way you can get a tool of that nature then that's the way to go.
It should be possible to write a simple program to get the process list from the kernel and scan it to see if your WAS process is still running. On a Unix box you could probably whip up something in Perl in a few minutes (if you know Perl), not sure how difficult it would be under Windows. Run it as a scheduled task every five minutes or so, and if the process doesn't show up you could have it fork off another process that would deal with the heap dump and re-start WAS.
