Scala debugging under IntelliJ is horribly slow

I'm currently tracing through a relatively small Scala program (Apache's Kafka) with IntelliJ 10.5.4, while at the same time running several other Java applications in another project. While the Java applications are doing just fine, the Scala debugging is horribly slow: a simple "Make Project" easily takes 2 minutes, and even basic cursor keystrokes take up to a second each to show up on the screen.
I've used IntelliJ before with as many as 6 applications running at the same time, under multiple projects, and it didn't have any problems at all. Hardware isn't an issue either; this is all running on a very recent MacBook Pro (256 GB SSD, 8 GB RAM, quad-core i7), under Java 1.6.0_31.
Are there any tips/tricks to making Scala perform decently while debugging under IntelliJ? Alternatively, what are people out there using for Scala debugging?

Try the latest IDEA 11 EAP with the latest Scala plugin. It works fine for me.

tl;dr
Re: compile times, do you have FSC (the fast Scala compiler) enabled? It dramatically improves Scala compile times.
Re: overall slowness, you probably need to tweak your JVM settings. Scala can be more memory intensive, so you may have to increase your -Xmx value to spend less time garbage collecting. Or lower it, if it's set dramatically too high. Or change the garbage collector. For reference, here are mine:
<key>VMOptions</key>
<string>-ea -Xverify:none -Xbootclasspath/a:../lib/boot.jar -XX:+UseConcMarkSweepGC </string>
<key>VMOptions.i386</key>
<string>-Xms128m -Xmx512m -XX:MaxPermSize=250m -XX:ReservedCodeCacheSize=64m</string>
<key>VMOptions.x86_64</key>
<string>-Xms128m -Xmx800m -XX:MaxPermSize=350m -XX:ReservedCodeCacheSize=64m -XX:+UseCompressedOops</string>
Actually figuring it out
You will probably have to do some profiling to find out what the performance problem actually is. The simplest thing to start with would be to open Activity Monitor to see if memory is low, CPU utilization is high, or disk activity is high. This will at least give you a clue as to what's going on. You have an SSD, so disk I/O is probably not a problem.
You may need to profile IDEA itself, such as by running it with YourKit Java Profiler. This could tell you something interesting, such as whether IDEA is spending an excessive amount of time garbage collecting.
In general, though, Scala seems a bit more memory intensive than Java on IntelliJ. It's possible that you have -Xmx set too low, causing excessive garbage collections. Or maybe it's set too high, so that when it does garbage collect, you get pauses in the app. Changing which collector you're using may help with that, but with all the collectors there's a point of diminishing returns where setting -Xmx too high causes performance problems.

I found out that I had accidentally put a breakpoint in one of List's methods. It made IntelliJ super slow while debugging a Scala project, but when I removed the breakpoint IntelliJ became normal again.
The list of breakpoints can be found by pressing Ctrl+Shift+F8.

Eclipse with the Scala IDE plugin (http://scala-ide.org/download/current.html) is working very well for me as of 2.0.1.
My stuff is maven based at the moment and the projects are combined Java and Scala. I can debug across language boundaries without problems.

Related

GWT: High resource usage during mvn goals

I have noticed high memory and CPU usage during mvn GWT goals, especially during the compile phase. Memory usage just soars. I just want to know if this is normal and whether anyone else is experiencing it.
My current JVM setting is -Xms64m -Xmx1280m -XX:MaxPermSize=512m
I think it's normal, because the compilation phase in GWT is very resource-intensive. GWT provides a large library (in gwt-user.jar) that must be analyzed during compilation, and a number of compiler optimizations that require a lot of memory and processing power. Thus, the GWT compiler uses a lot of memory internally.
Yes, it's normal. It derives from the awesome CPU utilization Google achieved when they wrote the gwtc command (gwtc = GWT Compile).
I consider it good, since the tradeoff for the CPU would typically be memory usage, which is far more valuable to me.
(I do not work for Google :-))
The GWT compiler has a localWorkers setting that tells it how many cores to use. The more cores, the more memory it will use. If you are using the Eclipse plugin, it defaults to just using one (I believe). But the Maven plugin defaults to using all cores on your machine (i.e. if you have a quad core, it will use localWorkers 5).
Interestingly, I've been following the advice found here: http://josephmarques.wordpress.com/2010/07/30/gwt-compilation-performance/ which says that localWorkers 2 is an ideal setting for memory usage and speed. That way my machine doesn't lock up during the compile, and the speed difference is very minor.
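For reference, a minimal sketch of what that setting looks like in a POM, assuming the Codehaus gwt-maven-plugin (the plugin version and the rest of the POM are omitted; localWorkers is the relevant parameter):
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>gwt-maven-plugin</artifactId>
  <configuration>
    <!-- limit the GWT compiler to 2 parallel permutation workers -->
    <localWorkers>2</localWorkers>
  </configuration>
</plugin>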

Eclipse not releasing memory in Java process on Linux

My Linux server needs to be able to handle 30+ Eclipse instances for developers. I did a quick test running 10 Eclipse instances. The Java process associated with each Eclipse instance started at around 200 MB RSS and increased to around 550 MB as more projects were loaded.
But the Java process doesn't seem to release memory after closing/deleting all projects within the Eclipse instances. I still see it using over 550 MB RSS.
How can I change Eclipse or Java settings so that the memory footprint shrinks when developers close projects or are idle for a while?
Thanks
You may want to experiment with these (and other) JVM tuning options to make the JVM less reluctant to return memory to the OS:
-XX:MaxHeapFreeRatio Maximum percentage of heap free after GC to avoid shrinking. Default is 70.
-XX:MinHeapFreeRatio Minimum percentage of heap free after GC to avoid expansion. Default is 40.
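For instance, a sketch of how these might be applied after the -vmargs line in eclipse.ini (the values here are illustrative starting points, not recommendations):
-vmargs
-XX:MinHeapFreeRatio=10
-XX:MaxHeapFreeRatio=30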
However, I suspect that you won't see the eclipse process shrink to anywhere near its initial size, since eclipse is a huge, complex application that probably lazy-loads (but does not unload, once used) a lot of classes and associated data structures.
I've never seen Java release memory.
I don't think you will get any value out of trying to get it to release memory with Eclipse; I've watched that little memory counter for YEARS and never once seen the allocated memory drop.
You might try one of these.
After each session, exit the JVM and restart.
Set your -Xmx lower.
Separate your instances into categories with high -Xmx and low -Xmx and let the user determine which one he wants.
As a side-thought, if it really mattered to you, you MIGHT be able to run multiple Eclipse instances under one VM. It would probably be WAY too much work (man-weeks to man-years), but if you could get it right you could reduce overhead by something like 150-200 MB per instance. The disadvantage would be that a VM crash (pretty rare these days) would kill everyone.
Testing this theory would be a matter of calling eclipse's main from within an existing JVM and trying to get it to display somewhere useful. The rest of the man-year is spent trying to figure out where they used evil static variables or singletons and changing them to something else.
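As a very rough sketch of that experiment, assuming org.eclipse.equinox.launcher.Main (the standard Equinox launcher entry point) is on the classpath; the workspace path is hypothetical, and this does nothing about the static-state problems described above:
object MultiEclipse {
  def main(args: Array[String]): Unit = {
    // Launch one Eclipse instance on its own thread inside this JVM,
    // pointing it at its own workspace via -data.
    val t = new Thread(new Runnable {
      def run(): Unit =
        org.eclipse.equinox.launcher.Main.main(Array("-data", "/home/dev/ws1")) // hypothetical path
    })
    t.start()
    t.join()
  }
}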
Switch the Java to use the G1 garbage collector with the HeapFreeRatio parameters. Use these options in eclipse.ini:
-XX:+UnlockExperimentalVMOptions
-XX:+UseG1GC
-XX:MinHeapFreeRatio=5
-XX:MaxHeapFreeRatio=25
Now when Eclipse eats up more than 1 GB of RAM for a complicated operation and drops back to 300 MB after garbage collection, the memory will be released back to the operating system.
I would suggest checking on garbage collection; setting the right options or even forcing GC periodically might increase the time until Eclipse's memory usage grows high.
The following link might be useful: http://www.eclipsezone.com/eclipse/forums/t93757.html

Experiences with escape analysis enabled on the JVM

I've just tried the -XX:+DoEscapeAnalysis option on a jdk6-u18 VM (on Solaris) and had a rather disappointing experience. I'm running a Scala application which has rather a lot of actors (20,000 of them). This is a recipe for garbage creation!
Typically the app can run with 256 MB of heap but generates huge amounts of garbage. In its steady state it:
spends 10% of its time in GC
generates >150 MB of garbage in <30 s, which then gets GC'd
I thought that escape analysis might help, so I enabled the option and re-ran the app. I found that the app became increasingly unable to clear away the garbage it had created, until it seemed eventually to spend the entire time doing GC and the app was "flatlining" at its full allocation.
At this point I should say that the app did not throw an OutOfMemoryError, which I would have expected. Perhaps JConsole (which I was using to perform the analysis) does not properly display GC stats with this option on (I'm not convinced)?
I then removed the option and restarted, and the app became "normal" again! Does anyone have any idea what might be going on?
1. Did escape analysis show up as enabled in JConsole? You need to make sure you're running the VM with the -server option. I assume you had this working, but I just thought I'd check.
2. I don't think escape analysis will help the situation with Scala actors. You might see a big gain if you do something like:
def act(): Unit = {
  val omgHugeObject = new OMGHugeObject()
  omgHugeObject.doSomethingCrazy()
}
In the example above, escape analysis could make it so omgHugeObject is allocated on the stack instead of the heap, and thus creates no garbage. I don't think it is likely that escape analysis will help with actors: their references will always "escape" to the actor subsystem.
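As a hedged illustration with the old scala.actors library (OMGHugeObject is the same hypothetical class as above): the message is handed to the receiver's mailbox, so its reference escapes the sender's scope and can never be stack-allocated.
import scala.actors.Actor._

// The mailbox retains a reference to the message until it is processed,
// so the object "escapes" no matter what the JIT can prove locally.
val receiver = actor {
  loop {
    react {
      case msg: OMGHugeObject => msg.doSomethingCrazy()
    }
  }
}
receiver ! new OMGHugeObject() // this reference escapes into the mailbox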
3. Are you on the most recent release of Scala? There was a memory leak that I believe was fixed in a recent version. This even caused Lift to spawn off its own actor library, which you might look into.
4. You might try the G1 garbage collector. You can enable it with:
-XX:+UnlockExperimentalVMOptions -XX:+UseG1GC
From the 6u18 release notes:
Note that Escape analysis-based optimization (-XX:+DoEscapeAnalysis) is disabled in 6u18. This option will be restored in a future Java SE 6 update.
I suggest you try increasing the new generation size, e.g. -XX:NewSize=96M -XX:NewRatio=3. Use JVisualVM (included in the JDK) with the Visual GC plugin to watch how the young and old spaces are utilised.

Running a JNI application in the Sun VM under Valgrind

The Sun JVM spits out a LOT of extra noise when run under Valgrind, which makes tracking memory problems in the application very challenging.
I'd like to find either a suppression file, or a VM runtime mode, that will strip out spurious memory errors in order to separate the wheat from the chaff in this situation. Any suggestions?
What about profiling this native code outside of the Java application? Usually the JNI code is a wrapper around some library that is not Java specific. Not sure if that is true for your specific case, but if it is then maybe the memory problems can be isolated by writing a plain C or C++ test framework around that library?
If your framework is in C++ then you might also be able to supply your own new and delete operators and track memory usage yourself. You'll have to collect statistics and process them with some scripts but it can work well.
I can't answer your posted question, but can you elaborate on what problem you're having?
In other words, can you tell us if it is...
In the JNI layer and not a JVM object scope issue?
A use of free'd memory?
A buffer underwrite/overwrite?
Other memory corruption?
I recently had to debug a Java/C app that had issues (30+ minutes into its run); it turned out it was using memory after it had been free'd. I tried dmalloc, a custom memory leak library of mine, and Valgrind, and none worked as I needed.
Eventually I created a simple set of wrappers around free, malloc, calloc, realloc that simply printed memory addresses and sizes to a file. After it aborted (within GDB) I could backtrack in time and figure out when the memory was free'd and where the references were that did not get removed.
If your issue is in C/C++ and you can trap the error in a debugger, this might work for you. Yes, it's tedious, but maybe no worse than sifting through megabytes of Valgrind output.
Hope that helps & good luck.
While not as spiffy as Valgrind (based on what I've read), you might try jmap and jhat. These tools let you take a snapshot of the running process and see what's going on. I've used this technique for simple memory leaks and it's worked well. However, if the memory problems are caused by non-JVM allocations, this won't help.
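For example (the PID 12345 is hypothetical; jhat then serves its analysis on http://localhost:7000 by default):
jmap -dump:format=b,file=heap.hprof 12345
jhat heap.hprof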

How can I reduce Eclipse Ganymede's memory use?

I use the recent Ganymede release of Eclipse, specifically the distro for Java EE and web developers. I have installed a few additional plugins (e.g. Subclipse, Spring, FindBugs) and removed all the Mylyn plugins.
I don't do anything particularly heavy-duty within Eclipse, such as starting an app server or connecting to databases, yet for some reason, after several hours' use, I see that Eclipse is using close to 500 MB of memory.
Does anybody know why Eclipse uses so much memory (leaky?), and more importantly, if there's anything I can do to improve this?
I don't know about Eclipse specifically; I use IntelliJ, which also suffers from memory growth (whether you're actively using it or not!). Anyway, in IntelliJ I couldn't eliminate the problem, but I did slow down the memory growth by playing with the runtime VM options. You could try adjusting these in Eclipse and see if they make a difference.
You can edit the VM options in the eclipse.ini file in your eclipse folder.
I found that (in IntelliJ) the garbage collector settings had the most effect on how fast the memory grows.
My settings are:
-Xms128m
-Xmx512m
-XX:MaxPermSize=120m
-XX:MaxGCPauseMillis=10
-XX:MaxHeapFreeRatio=70
-XX:+UseConcMarkSweepGC
-XX:+CMSIncrementalMode
-XX:+CMSIncrementalPacing
(See http://piotrga.wordpress.com/2006/12/12/intellij-and-garbage-collection/ for an explanation of the individual settings). As you can see, I'm more concerned with avoiding long pauses during editing than actual memory usage, but you could use this as a start.
I don't think the JVM does a lot of garbage collection unless it has to (i.e. it's getting to its limits). Therefore it grabs all the memory it can get, probably up to the limit set in the eclipse.ini (the -Xmx argument, set to 512MiB here).
You can get a visual representation of the current heap status by checking 'Preferences' -> 'General' -> 'Show heap status'. It will create a small gauge in the status bar which also has a 'trash can' button you can use to trigger a manual garbage collection.
Just for information,
you can add
-Dcom.sun.management.jmxremote
to your eclipse.ini file, launch Eclipse, and then monitor its memory usage through jconsole.exe, found in your JDK installation.
C:\[jdk1.6.0_0x path]\bin\jconsole.exe
Choose 'Connection / New Connection / eclipse' to monitor the memory used by Eclipse.
Always use the latest JVM to launch your Eclipse (that does not prevent you from using any other JDK to compile your project within Eclipse).
The Ganymede Java EE plugins are absolutely huge when running in memory. Also, I've had bad experiences with FindBugs and its reliability over a long coding session.
If you can't live without these plugins though, then your only recourse is to start closing projects. If you limit the number of open projects in your workspace, the compiler (and FindBugs) will have less to worry about and your memory usage will drop tremendously.
I usually split up my workspaces by customer and then keep only the bare-minimum projects open within each workspace. Note that if you have a particularly large project (especially one with a lot of files checked by WST), it will not only chew through your memory but also cause a noticeable pause in responsiveness when compiling.
Eclipse by itself is pretty bloated, and the more plugins you add, the more the situation is exacerbated. It's still my favorite IDE, as it certainly isn't short on functionality, but if you're looking for a lightweight IDE then I'd suggest ditching Eclipse; it's pretty normal for it to run up half a gig of memory if you leave it running for a while.
Eclipse is a pretty bloated IDE. You can minimize this by turning off automatic project building under Project -> Build Automatically. It also helps to close any open projects you are not currently working on.
I'd call it bloated, but not leaky. (If it were leaky, it would climb and climb until something crashed.) As others have said, memory is cheap! It seems like a simple decision to me: spend a tiny bit on more memory vs. lose productivity because you don't have the memory budget to run Eclipse at 500 MB.
Summarized rhetorical question: What is more valuable:
The productivity gained from using an IDE you know with the plug-ins you want, or
Spending $50-200 on some memory?
RAM is relatively cheap (not that this is an excuse for poor memory management). Unused memory is essentially WASTED memory. If you're hitting limits and the IDE is the problem, consider less multitasking, adjusting your memory requirements, or buying more RAM. I wouldn't cripple Eclipse if it's your bread-and-butter IDE.
Instead of whining about how much memory Eclipse takes, just go ahead and analyze where the problem is. It might be just one plugin.
Check the blog post here:
"analyzing memory consumption of eclipse"
Regards,
Markus
I had a problem with Java-based programs' memory consumption. I found that it could be related to the chosen JVM (in my case it was). Try running Eclipse with the -client switch.
On some operating systems (most Linux distros, I believe), the default option is the server VM, which will consume noticeably more memory when running applications with a GUI.
In my case, the initial memory footprint went down from 300 MB to 80 MB.
Sorry for my crappy English. I hope I helped.
All Regards
Arkadiusz Jamrocha
Well, you don't specify on which platform this occurs. The memory management may vary depending on whether you're using Windows XP, Vista, Linux, OS X, ...
Usually, on my computer (WinXP with 1 GB of RAM), Eclipse rarely takes more than 200 MB, depending on the size of the opened projects, the loaded plugins, and the ongoing action.
I usually give Eclipse 512 MB of RAM (using the -Xmx option of the JVM) and I don't have any memory problems with Ganymede. I upgraded to two GB of RAM a few months ago, and I can really recommend it. It makes a huge difference.
Eclipse generally keeps a lot of meta-data in memory to allow for all kinds of IDE gymnastics.
I have found that the default configuration of Eclipse works well for most purposes, and that includes a limit (either given explicitly or implicitly by the JVM) on how much memory can be consumed, and Eclipse will stay within that.
Is there any particular reason you are concerned about memory usage?
