GWT: High resource usage during mvn goals - java

I have noticed high memory and CPU usage during Maven GWT builds, especially during the compile phase. Memory usage just soars. I just want to know if this is normal and whether anyone else is experiencing it.
My current JVM setting is -Xms64m -Xmx1280m -XX:MaxPermSize=512m

I think it's normal, because GWT's compile phase is genuinely resource-intensive. GWT ships a large library (gwt-user.jar) that must be analyzed during compilation, and the compiler performs a number of optimizations that need a lot of memory and processing power. So the GWT compiler uses a lot of memory internally.

Yes, it's normal. It comes from the aggressive CPU utilization Google built into the gwtc command (gwtc = GWT Compile).
I consider it a good thing, since the usual trade-off for CPU is memory usage, which is far more valuable to me.
(I do not work for Google :-))

The GWT compiler has a localWorkers setting that tells it how many cores to use. The more cores, the more memory it will use. If you are using the Eclipse plugin, it defaults to just using one (I believe). But the Maven plugin defaults to using all cores on your machine (i.e. if you have a quad core, it will use localWorkers 4).
Interestingly, I've been following the advice found here: http://josephmarques.wordpress.com/2010/07/30/gwt-compilation-performance/ which says that localWorkers 2 is the sweet spot between memory usage and speed. That way my machine doesn't lock up during the compile, and the speed difference is very minor.
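If you want to pin that down in the build rather than rely on the plugin's default, the setting can go straight into the pom. A minimal sketch (assuming the org.codehaus.mojo gwt-maven-plugin; the values are just illustrative):
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>gwt-maven-plugin</artifactId>
  <configuration>
    <!-- cap compile parallelism: slightly slower, much friendlier on memory -->
    <localWorkers>2</localWorkers>
    <!-- give the forked compiler JVM an explicit heap cap -->
    <extraJvmArgs>-Xmx512m</extraJvmArgs>
  </configuration>
</plugin>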

Related

Scala debugging under IntelliJ is horribly slow

I'm currently tracing through a relatively small Scala program (Apache's Kafka) with IntelliJ 10.5.4, while at the same time running several other Java applications in another project. While the Java applications are doing just fine, the Scala debugging is horribly slow, a simple "Make Project" will take easily 2 minutes, and even basic cursor keystrokes take up to a second each to reflect on the screen.
I've used IntelliJ before with as many as 6 applications running at the same time, under multiple projects, and it doesn't have any problems at all. Hardware isn't an issue either; this is all running on a very recent MacBook Pro (256GB SSD, 8GB RAM, quad-core i7), under Java 1.6.0_31.
Are there any tips/tricks to making Scala perform decently while debugging under IntelliJ? Alternatively, what are people out there using for Scala debugging?
Try the latest IDEA 11 EAP with the latest Scala plugin. It works fine for me.
tl;dr
re: compile times, do you have FSC (the fast Scala compiler) enabled? It dramatically helps Scala compile times.
re: overall slowness, you probably need to tweak your JVM settings. Scala can be more memory intensive, so you may have to increase your -Xmx value to spend less time garbage collecting. Or lower it, if it's set dramatically high. Or change the garbage collector. For reference, here are mine:
<key>VMOptions</key>
<string>-ea -Xverify:none -Xbootclasspath/a:../lib/boot.jar -XX:+UseConcMarkSweepGC </string>
<key>VMOptions.i386</key>
<string>-Xms128m -Xmx512m -XX:MaxPermSize=250m -XX:ReservedCodeCacheSize=64m</string>
<key>VMOptions.x86_64</key>
<string>-Xms128m -Xmx800m -XX:MaxPermSize=350m -XX:ReservedCodeCacheSize=64m -XX:+UseCompressedOops</string>
Actually figuring it out
You will probably have to do some profiling to find out what the performance problem actually is. The simplest thing to start with would be to open Activity Monitor to see if memory is low, CPU utilization is high, or disk activity is high. This will at least give you a clue as to what's going on. You have an SSD, so disk I/O is probably not a problem.
You may need to profile IDEA itself, such as running it with YourKit Java Profiler. This could tell you something interesting, such as if IDEA is spending an excessive amount of time garbage collecting.
In general, though, Scala seems a bit more memory intensive than Java on IntelliJ. It's possible that you have -Xmx set too low, causing excessive garbage collections. Or maybe it's set too high, so that when it does garbage collect, you get pauses in the app. Changing which collector you're using may help with that, but with all the collectors there's a point of diminishing returns where setting -Xmx too high causes performance problems.
I found out that I had accidentally put a breakpoint in one of List's methods. It made IntelliJ super slow while debugging a Scala project, but when I removed the breakpoint IntelliJ became responsive again.
The list of breakpoints can be found by pressing Ctrl+Shift+F8.
Eclipse with http://scala-ide.org/download/current.html is working very well for me as of 2.0.1.
My stuff is maven based at the moment and the projects are combined Java and Scala. I can debug across language boundaries without problems.

Why does System.gc() seem to have no effect on some JVMs

I have been developing a small Java utility that uses two frameworks: Encog and Jetty to provide neural network functionality for a website.
The code is 'finished' in that it does everything it needs to do, but I have some problems with memory usage. When running on my development machine, memory usage fluctuates between about 4MB and 13MB while the application is doing things (training neural networks), and at most it uses about 18MB. This is very good usage, and I think it is due to the fact that I call System.gc() fairly regularly. I do this because processing time doesn't matter to me, but memory usage does.
So it all works fine on my machine, but as soon as I put it online on our server (shared Unix hosting with memory limits) it uses about 19MB to start with and rises to hundreds of MB when doing things. These are the same things I was doing in testing. The only way I have found to reduce the memory usage is to quit the application and restart it.
The only difference I can identify is the Java Virtual Machine it runs on. I don't know much about this, and I have tried to find out why it acts this way, but a lot of the documentation assumes deep knowledge of Java and virtual machines. Could someone please help me with some reasons why this may be happening, and perhaps some things to try to stop it?
I have looked at using GCJ to compile the application, but I don't know if this is something I should be putting a lot of time in to and whether it will actually help.
Thanks for the help!
UPDATE: Developing on Mac OS 10.6.3; the server is on a Unix OS, but I don't know which. (The server is from WebFaction.)
"I think it is due to the fact that I call System.gc() fairly regularly"
You should not do that, it's almost never useful.
A garbage collector works most efficiently when it has lots of memory to play with, so it will tend to use a large part of what it can get. I think all you need to do is to set the max heap size to something like 32MB with an -Xmx32m command line parameter - the default depends on whether the JVM believes it's running on a "server class" system, in which case it assumes that you want the application to use as much memory as it can in order to give better throughput.
BTW, if you're running a 64-bit JVM on the server, it will legitimately need more memory (usually about 30% more) than a 32-bit JVM, due to larger references.
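A quick way to see what the JVM actually accepted, independent of top or Task Manager, is to ask the runtime itself. A minimal sketch (class name is just illustrative); run it as java -Xmx32m HeapReport to confirm the cap took effect:
public class HeapReport {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024 * 1024;
        System.out.println("max heap:   " + rt.maxMemory() / mb + " MB");   // the -Xmx cap
        System.out.println("total heap: " + rt.totalMemory() / mb + " MB"); // currently committed
        System.out.println("free heap:  " + rt.freeMemory() / mb + " MB");  // committed but unused
    }
}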
Two points you might consider:
Calls to System.gc() can be disabled by a command-line parameter (-XX:+DisableExplicitGC), and I think the behaviour also depends on the GC algorithm the VM uses. Normally, invoking the GC should be left to the JVM.
As long as there is enough memory available for the JVM, I don't see anything wrong in using that memory to increase application and GC performance. As Michael Borgwardt said, you can restrict the amount of memory the VM uses at the command line.
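Putting both suggestions on one command line (MyClass standing in for your main class):
java -XX:+DisableExplicitGC -Xmx32m MyClass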
Also, you may want to look at what mode the JVM was started in when you deploy it online. My guess is it's a server VM.
Take a look at the differences between the two right here on Stack Overflow. Also, see which garbage collector is actually running on the actual deployment. See if you can tweak the GC behaviour or change the GC algorithm. See the -X options if it's a Sun JVM.
Basically the JVM takes the amount of memory it is allowed to as needed, in order to make the "new" operation as fast as possible (this is a science in itself).
So if you have a lot of objects being used, and then discarded, you will slowly and surely fill up the available memory. Then you can ask for garbage collection, but it is just a hint, and the JVM may choose not to listen.
So you need another mechanism to keep memory usage down. The typical approach is to limit the amount of memory with the -X options, but be careful, since the JVM you use on your PC may be very different from the one you deploy on, and the memory needs may therefore differ.
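You can watch the hint being honoured (or not) with a small test. This sketch is illustrative only (all names invented), and the numbers will vary by JVM and collector:
public class GcHint {
    public static void main(String[] args) {
        // Allocate ~64MB of garbage, then drop the references.
        byte[][] junk = new byte[64][];
        for (int i = 0; i < junk.length; i++) {
            junk[i] = new byte[1024 * 1024];
        }
        junk = null;
        Runtime rt = Runtime.getRuntime();
        long mb = 1024 * 1024;
        long before = (rt.totalMemory() - rt.freeMemory()) / mb;
        System.gc(); // a request, not a command; -XX:+DisableExplicitGC makes it a no-op
        long after = (rt.totalMemory() - rt.freeMemory()) / mb;
        System.out.println("used before: " + before + " MB, after: " + after + " MB");
    }
}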
Is there a deliberate requirement for low memory usage? If not, then just let it run and see how the JVM behaves. Use jvisualvm to attach and monitor.
Perhaps the server uses more memory because there is a higher load on your app, and so more threads are in use? Jetty will use a number of threads to spread out the load if there are a lot of requests. It's worth comparing the thread count on the server versus on your test machine.
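If threads do turn out to be the difference, Jetty's pool can be capped in code. A sketch, assuming the Jetty 7/8 API (in Jetty 9 the pool is passed to the Server constructor instead):
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.util.thread.QueuedThreadPool;

public class BoundedJetty {
    public static void main(String[] args) throws Exception {
        QueuedThreadPool pool = new QueuedThreadPool();
        pool.setMaxThreads(20); // fewer threads means fewer thread stacks held in memory
        Server server = new Server();
        server.setThreadPool(pool); // Jetty 7/8; in Jetty 9 use new Server(pool)
        // ... add connectors and handlers as usual, then:
        server.start();
        server.join();
    }
}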

Java using too much memory on Linux?

I was testing the amount of memory Java uses on Linux. When just starting up an application that does absolutely NOTHING, it already reports that 11MB is in use. When doing the same on a Windows machine, about 6MB is in use. These were measured with the top command and the Windows Task Manager. The VM I use on Linux is 1.6.0_11, and the HotSpot VM is Server 11.2. Starting the application with -client did not change anything.
Why does java take this much memory? How can I reduce this?
EDIT: I measure memory using the Windows Task Manager; on Linux I open a terminal and run top.
Also, I am only interested in how to reduce this, or whether I even CAN reduce it. I'll decide for myself whether a couple of megs is a lot or not. It's just that the 5MB difference between Windows and Linux is strange, and I want to know if I can get the same on Linux.
If you think 11MB is "too much" memory... you'd better avoid using Java entirely. Seriously, the JVM needs to do quite a lot of stuff (bytecode verifier, GC, loading all the essential classes), and in an age where average desktop machines have 4GB of RAM, keeping the base JVM overhead (and memory use in general) very low is simply not a design priority.
If you need your app to run on an embedded system (pretty much the only case where 11MB might legitimately be considered "too much"), then there are special JVMs designed for such systems that use less RAM, but at the cost of lacking many of the features and/or performance of mainstream JVMs.
You can control the heap size; otherwise default values will be used. Running java -X gives you an explanation of these switches, e.g.:
export JAVA_OPTS="-Xms6m -Xmx6m"
java ${JAVA_OPTS} MyClass
The question you might really be asking is, "Do the Windows Task Manager and Linux top report memory in the same way?" I'm sure there are others who can answer this better than I can, but I suspect you may not be making an apples-to-apples comparison.
Try using the jconsole application on each respective machine to do a more granular inspection. You'll find jconsole in your JDK's bin directory.
There is also a very extensive discussion of java memory management at http://www.ibm.com/developerworks/linux/library/j-nativememory-linux/
The short answer is that how memory is being allocated is a more complex question than just looking at a single figure at the top of a simplified system utility.
Both top and Task Manager report how much memory has been allocated to a process, not how much the process is actually using, so I would say it's not an apples-to-apples comparison. Regardless, in the age of gigs of memory, what's a couple of megs here or there on startup?
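On Linux you can at least put the two numbers side by side for a given process (<pid> standing in for your Java process id):
ps -o vsz,rss -p <pid>
VSZ is closer to the "allocated" figure; RSS is closer to what the process has actually touched.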
Linux and Windows are radically different operating systems and use RAM very differently. Windows kind of allocates as you go, while Linux caches more at once and prepares for the future, so that subsequent operations are smooth.
This explanation is not quite right, but it's close enough for you.

JVM design decision

Why does the JVM require around 10MB of memory for a simple hello world while the CLR doesn't? What is the trade-off here, i.e. what does the JVM gain by doing this?
Let me clarify a bit, because I'm not conveying the question that is in my head. There is clearly an architectural difference between the JVM and CLR runtimes. The JVM has a significantly higher memory footprint than the CLR. I'm assuming there is some benefit to this overhead, otherwise why would it exist? I'm asking what the trade-offs are in these two designs. What benefit does the JVM gain from its memory overhead?
I guess one reason is that Java has to do everything itself (another aspect of platform independence). For instance, Swing draws its own components from scratch; it doesn't rely on the OS to draw them. That all has to take place in memory. Lots of stuff that Windows may do but Linux does not (or does differently) has to be fully contained in Java so that it works the same on both.
Java also always insists that its entire library is "linked" and available. Since it doesn't use DLLs (they wouldn't be available on every platform), everything has to be loaded and tracked by Java.
Java even does a lot of its own floating point, since FPUs often give different results, which has been deemed unacceptable.
So if you think about all the stuff C# can delegate to the OS it's tied to, versus all the stuff Java has to do itself to compensate for OS differences, the difference should be expected.
I've run Java apps on two embedded platforms now. One was a spectrum analyzer where Java actually drew the traces; the other is set-top cable boxes.
In both cases, the minimum memory footprint hasn't been an issue. There HAVE been Java-specific issues; that just hasn't been one of them. The number of objects instantiated and Swing painting speed were the bigger issues in those cases.
I don't know that the initial memory footprint of a Hello World application is that important. The difference might be due to the number and sizes of the libraries loaded by the JVM / CLR. There can also be an amount of memory preallocated for garbage collection pools.
Every application that I know of uses a lot more than Hello World functionality, and will allocate and free memory thousands of times throughout its execution. If you are interested in the memory utilization differences of the JVM vs the CLR, here are a couple of links with good information:
http://benpryor.com/blog/2006/05/04/jvm-vs-clr-memory-allocation/
Memory Management Case study (JVM & CLR)
The Memory Management Case study is in PowerPoint. A very interesting presentation.
Seems like Java is just using more virtual memory.
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
amwise 20598 0.0 0.5 22052 5624 pts/3 Sl+ 14:59 0:00 mono Test.exe
amwise 20601 0.0 0.7 214312 7284 pts/2 Sl+ 15:00 0:00 java Program
I made a test program in C# and one in Java that print the string "test" and wait for input. I believe the resident set size (RSS) value more accurately shows the memory usage. The virtual memory usage (VSZ) is less meaningful.
As I understand it, applications can reserve a ton of virtual memory without actually using any real memory. For example, you can ask the VirtualAlloc function on Windows to either reserve or commit virtual memory.
EDIT:
Here is a pretty picture from my Windows box: http://awise.us/images/mem.png
Each app was a simple printf followed by a getchar.
Lots of virtual memory usage by Java and the CLR. The C version depends on just about nothing, so its memory usage is relatively tiny.
I doubt it really matters either way. Just pick whichever platform you are more familiar with and then don't write terrible, memory-wasting code. I'm sure it will work out.
EDIT:
This VMMap tool from Microsoft might be useful in figuring out where the memory is going.
The JVM counts all its shared libraries whether they use memory or not.
Task Manager is rather unreliable when it comes to reporting the memory consumption of programs; take it only as a guide.
The JVM loads lots of core classes from rt.jar on each run, many of them unnecessary. Unfortunately, the cross-dependencies between Java packages (java.lang <-> java.io) make it hard to do a partial runtime init. Not to mention that rt.jar itself is over 40MB and needs lots of time for lookup and decompression.
Java 6u10 and later seem to load things a bit smarter (there is jqs.exe, the Java Quick Starter service, which keeps the necessary data in memory for a faster startup), and Java 7 is said to be better still.
Process Explorer on Windows reports the Private Bytes correctly (private bytes are those memory regions which are not shared with any DLL).
A slightly bigger annoyance is that, after 10 years, the JVM still defaults to 64MB memory usage. It is really annoying to have to use -Xmx almost every time, and not to be able to run demanding programs in jars with a simple double-click (unless I alter the file extension association's command).
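(On Windows that alteration looks something like the following, with the path adjusted to your own JRE; shown purely as an illustration:)
ftype jarfile="C:\Program Files\Java\jre6\bin\javaw.exe" -Xmx512m -jar "%1" %*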
The CLR is counted as part of the OS, so Task Manager doesn't report its memory consumption under the application process.

How can I reduce Eclipse Ganymede's memory use?

I use the recent Ganymede release of Eclipse, specifically the distro for Java EE and web developers. I have installed a few additional plugins (e.g. Subclipse, Spring, FindBugs) and removed all the Mylyn plugins.
I don't do anything particularly heavy-duty within Eclipse, such as starting an app server or connecting to databases, yet for some reason, after several hours' use, I see that Eclipse is using close to 500MB of memory.
Does anybody know why Eclipse uses so much memory (leaky?), and more importantly, if there's anything I can do to improve this?
I don't know about Eclipse specifically; I use IntelliJ, which also suffers from memory growth (whether you're actively using it or not!). Anyway, in IntelliJ I couldn't eliminate the problem, but I did slow the memory growth down by playing with the runtime VM options. You could try resetting these in Eclipse and see if they make a difference.
You can edit the VM options in the eclipse.ini file in your eclipse folder.
I found that (in IntelliJ) the garbage collector settings had the most effect on how fast the memory grows.
My settings are:
-Xms128m
-Xmx512m
-XX:MaxPermSize=120m
-XX:MaxGCPauseMillis=10
-XX:MaxHeapFreeRatio=70
-XX:+UseConcMarkSweepGC
-XX:+CMSIncrementalMode
-XX:+CMSIncrementalPacing
(See http://piotrga.wordpress.com/2006/12/12/intellij-and-garbage-collection/ for an explanation of the individual settings.) As you can see, I'm more concerned with avoiding long pauses during editing than with actual memory usage, but you could use this as a start.
I don't think the JVM does a lot of garbage collection unless it has to (i.e. it's getting to its limits). Therefore it grabs all the memory it can get, probably up to the limit set in the eclipse.ini (the -Xmx argument, set to 512MiB here).
You can get a visual representation of the current heap status by checking 'Preferences' -> 'General' -> 'Show heap status'. It will create a small gauge in the status bar which also has a 'trash can' button you can use to trigger a manual garbage collection.
Just for information,
you can add
-Dcom.sun.management.jmxremote
to your eclipse.ini file, launch Eclipse, and then monitor its memory usage through jconsole.exe, found in your JDK installation:
C:\[jdk1.6.0_0x path]\bin\jconsole.exe
Choose 'Connection' / 'New connection' / 'eclipse' to monitor the memory used by Eclipse.
Also, always use the latest JVM to launch your Eclipse (that does not prevent you from using any other JDK to compile your projects within Eclipse).
The Ganymede Java EE plugins are absolutely huge when running in memory. Also, I've had bad experiences with FindBugs and its reliability over a long coding session.
If you can't live without these plugins though, then your only recourse is to start closing projects. If you limit the number of open projects in your workspace, the compiler (and FindBugs) will have less to worry about and your memory usage will drop tremendously.
I usually split up my workspaces by customer and then keep only the bare-minimum projects open within each workspace. Note that if you have a particularly large project (especially one with a lot of files checked by WST), it will not only chew through your memory but also cause a noticeable pause in responsiveness when compiling.
Eclipse by itself is pretty bloated, and every plugin you add only exacerbates the situation. It's still my favorite IDE, as it certainly isn't short on functionality, but if you're looking for a lightweight IDE, I'd suggest ditching Eclipse; it's pretty normal for it to run up half a gig of memory if you leave it running for a while.
Eclipse is a pretty bloated IDE. You can minimize that by turning off automatic project building under Project -> Build Automatically. It also helps to close any open projects you are not currently working on.
I'd call it bloated, but not leaky. (If it were leaky, it would climb and climb until something crashed.) As others have said, memory is cheap! It seems like a simple decision to me: spend a tiny bit on more memory vs. lose productivity because you don't have the memory budget to run Eclipse at 500MB.
Summarized rhetorical question: What is more valuable:
The productivity gained from using an IDE you know with the plug-ins you want, or
Spending $50-200 on some memory?
RAM is relatively cheap (not that this is an excuse for poor memory management). Unused memory is essentially WASTED memory. If you're hitting limits and the IDE is the problem, consider less multitasking, adjusting your memory requirements, or buying more. I wouldn't cripple Eclipse if that's your bread-and-butter IDE.
Instead of whining about how much memory Eclipse takes, just go ahead and analyze where the problem is. It might be just one plugin.
Check the blog post here:
"Analyzing memory consumption of Eclipse"
Regards,
Markus
I had a problem with the memory consumption of Java-based programs. I found that it could be related to the chosen JVM (in my case it was). Try running Eclipse with the -client switch.
On some operating systems (most Linux distros, I believe), the default option is the server VM, which will consume noticeably more memory when running applications with a GUI.
In my case the initial memory footprint went down from 300MB to 80MB.
Sorry for my crappy English. I hope I helped.
All Regards
Arkadiusz Jamrocha
Well, you don't specify which platform this occurs on. Memory management may vary depending on whether you're using Windows XP, Vista, Linux, OS X, ...
Usually, on my computer (WinXP with 1GB of RAM), Eclipse rarely takes more than 200MB, depending on the size of the open projects, the loaded plugins, and the ongoing action.
I usually give Eclipse 512 MB of RAM (using the -Xmx option of the JVM) and I don't have any memory problems with Ganymede. I upgraded to two GB of RAM a few months ago, and I can really recommend it. It makes a huge difference.
Eclipse generally keeps a lot of meta-data in memory to allow for all kinds of IDE gymnastics.
I have found that the default configuration of Eclipse works well for most purposes, and that includes a limit (either given explicitly or implicitly by the JVM) on how much memory can be consumed; Eclipse will stay within that.
Is there any particular reason you are concerned about memory usage?
