Sometimes debugging our purely local desktop application in IDEA 2021.2.4 with Java 11 becomes incredibly slow, especially Step Over (F8) or, to a lesser extent, Step Into (F7). But sometimes even pressing Resume (F9) after hitting some breakpoints is extremely slow while the CPU fan runs at full power (what normally takes 1 s to start takes multiple minutes). Pausing usually halts in JDK code, e.g. class loading. I've already verified the breakpoints; there are just a handful of normal Java line breakpoints without any conditions. At least when pressing Resume, the other things mentioned in the IDEA FAQ, like toString() evaluation or Java type renderers, should not matter, should they? Even pausing, disabling all breakpoints and resuming makes no difference.
Any idea how to determine what is causing the CPU load?
Related
I've tried debugging my application in Android Studio and Eclipse and both yield the same result: very slow debugging (1-15 frames per second) on my Samsung Note 3. The funny part is that without a debugger attached to the application, the application runs just fine (60 frames per second).
I've removed all breakpoints in Android Studio (installed today from scratch) and in Eclipse; both show the same behavior. I've recreated the existing application project a few times, with no luck. I've also removed all expressions, with no luck.
I've done method tracing with and without a debugger. The times/invocation counts look very similar and there aren't any obvious outliers.
The moment I turn off debugging, the application instantly goes back up to 60fps.
I'm going to ask my friend to try it on his device (an HTC phone) and see if he can reproduce it. I should mention that I do think I'm somehow causing extra overhead with the debugger in my application, because when I created a simple OpenGL surface, it debugged just fine. I should also mention that I've tried 3 different USB cords, and all yield the same result.
My app is multi threaded and makes use of large object pools.
Any ideas?
If you're doing a lot of computation in Java code, this may be expected.
When running normally, Dalvik executes code with the "fast" interpreter and uses the JIT to compile any hot sections. When the debugger attaches, it runs exclusively in the "debug" interpreter, which is much slower.
Some operations, such as "step over", can be fairly painful. ("Run to line" should execute more quickly.)
Imagine you have a command-line application that takes an input file and does something with it. Now imagine you want to sample/profile this application. If it were Visual Studio, you would just select a profiling method (sampling/instrumentation) and VS would run the application for you, collecting data until the program completes. But as far as I can see there is no similar functionality in VisualVM. You have to run your application, then select it in VisualVM and then explicitly start sampling/profiling. The problem is that sometimes execution of the program with certain input data takes less time than it takes to set up VisualVM. Also, with such an approach there is no way to batch-profile the application. Someone has suggested starting the application in debug mode from Eclipse with a breakpoint set near the beginning of main(), then setting up VisualVM and continuing execution. But I suspect that running in Debug vs. Release mode has performance implications of its own.
Suggestions?
There is a new Startup Profiler plugin for VisualVM 1.3.6, which allows you to profile your application from its startup. See this article for additional information.
If the program does I/O, the Visual Studio sampler will not see the I/O because it is a "CPU Sampler" (even if nearly all of the time is spent waiting for I/O).
If you use Instrumentation, you won't see any line-level information because it only summarizes at the function level.
I use this technique.
If the program runs too quickly to sample, just put a temporary outer loop around it of, say, 100 or 1000 iterations.
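For example, a minimal sketch of such a harness, where doWork() is just a placeholder for whatever the program's main() currently does:

public class ProfilingHarness {
    public static void main(String[] args) {
        // Temporary outer loop so the sampler gets enough samples; remove it after profiling.
        for (int i = 0; i < 1000; i++) {
            doWork(args);
        }
    }

    private static void doWork(String[] args) {
        // ... the application's original logic goes here ...
    }
}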
The difference between Debug and Release mode will be next to nothing unless you are spending a good fraction of time in tight loops in your own code, where the loops do not contain any function calls, or unless you are doing data structure operations that do a lot of validation in the libraries.
If you are, then your samples will show that you are, and you will know that Release will make a speed difference.
As far as batch profiling is concerned, I don't. I just keep an eye on the program's overall throughput rate. If there is some input that seems to make it take too long, then I do the sampling procedure on the program with that input, see what the problem is, and fix it.
I am using IntelliJ IDEA 12.1.6 on (64-bit) Windows 7 Enterprise (I've also seen this same problem with IDEA 11, though not as often). I am running it under Java 7 (1.7.45).
When I run a program in the IDE, under the debugger (local debugging), everything is fine until I hit a breakpoint (these are plain old on-a-specific-line breakpoints, not method breakpoints or exception breakpoints). Once the breakpoint is hit, virtually all the time (though not always) my entire machine slows to a near halt. All keyboard operations (not just in IDEA) slow way down (they eventually do get processed so the events are buffered, not lost). Same for window operations (drag, minimize, raise, lower). Once the program resumes from the breakpoint everything goes completely back to normal until the next time a breakpoint is hit.
This is obviously really annoying as it makes debugging essentially impossible.
I've had Task Manager up and don't see anything strange. The CPU is not pegged, memory isn't maxed out, etc. My hard drive light isn't on.
Any ideas on what's going on and (more importantly) how to fix it?
It's hard to say what the problem is, but here are some things that might help:
Invalidate caches as described here--though be aware of the consequences.
Make sure you have only enabled those plugins you are actually using or likely to use.
Try to find out whether all breakpoints show the same problem or whether, for example, you only have issues with Java breakpoints but not JavaScript ones. If that is the case, it could give you a clue.
Well, it looks like my specific problem is IE-related.
The program I am debugging is the server-side of a webapp. (Sadly it is IE-only due to some weird stuff in the Ajax it is using -- I'm not responsible for that).
I have been setting my breakpoints, running the server under the debugger, and then doing stuff with the webapp in IE, which ultimately causes breakpoints to be hit. When a breakpoint is hit and I am single-stepping, everything remains OK as long as I do not use the keyboard at all. If I click the various single-step icons or select the single-step actions with the mouse from IDEA's "Run" menu, everything keeps working. But the moment I use F8 to single-step, or hit a key in any other program (like a mail program), the freeze happens.
It finally occurred to me (shame on me) to run IE on another computer while running the server in the IDEA debugger on my main computer. When I do that, everything is fine and I can use the keyboard in any application, as much as I want to, and the freeze does not happen.
So the problem seems to be IE causing weirdness when it has sent a request and is waiting for a response from the server. Still, there does seem to be some interaction with IDEA because when I run IE on a separate computer and hit a breakpoint on my computer, I can use the keyboard all I want on the computer where I am running IE without any problems at all.
I wrote a simple code in Java that uses the Robot class to move the mouse according to some conditions.
Although the code works nicely, there seems to be a 'lag' when other applications are running.
I think Java has some issues posting system messages.
Is there a workaround to avoid this?
Before you start thinking about reducing the lag, you must first understand its causes. I'll present the answer(s) in a fashion in which you can understand the "why" along with the "what to do".
By your description that the lag only occurs when other programs are running along with your robot, the most likely causes for the lag are:
Lack of system resources - Too many things running at the same time, consuming too much memory/processing power, force the OS to slow down some programs in order to be able to run the others.
What to do: You can try to optimize your code so that it uses less memory/processing power, thus reducing the cause of the lag, which implicitly reduces the lag itself. Unfortunately, though, it's hard to legally do the same for any 3rd-party programs, so the lag can hardly be completely removed if the concurrent applications are not yours.
Concurrency regarding a non-replicable, non-shareable component - One or more components that cannot be accessed by more than one process at a time and cannot be cloned into multiple instances need to be used by more than one running process. While one process takes control of such a component, any other process has no choice but to wait for it to be freed.
What to do: In this case, there is hardly any legal option other than reducing the concurrent processes' priority while increasing yours (effectively slowing them down so your program runs faster), or shutting them down completely.
How to do it: To increase your process's priority, this is the code to set it to 80% (the default is usually 50%); insert it at the start of your main():
Thread.currentThread().setPriority((int)(Thread.MAX_PRIORITY*0.8));
Note: You can make your process "never" let go of whatever components it needs by using Thread.MAX_PRIORITY without multiplying by 0.8, but that is not recommended, as it will pretty much pause any process that requires those components (practically the same as shutting them down while yours is running), and if your program hangs for whatever reason, so will they, as the components are never released.
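As a rough sketch of where that call would go, here is a hypothetical Robot-driven program (the mouse-moving loop is only a placeholder for your own conditions):

import java.awt.AWTException;
import java.awt.Robot;

public class MouseMover {
    public static void main(String[] args) throws AWTException, InterruptedException {
        // Raise this thread's priority to roughly 80% of the maximum
        // (MAX_PRIORITY is 10, so this sets 8; the default is NORM_PRIORITY, 5).
        Thread.currentThread().setPriority((int) (Thread.MAX_PRIORITY * 0.8));

        Robot robot = new Robot();
        for (int i = 0; i < 50; i++) {
            robot.mouseMove(100 + i, 200); // move according to your own conditions
            Thread.sleep(20);              // small pause between moves
        }
    }
}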
Is there any Java profiler that allows profiling short-lived applications? The profilers I found so far seem to work with applications that keep running until user termination. However, I want to profile applications that work like command-line utilities: they run and exit immediately. Tools like VisualVM or the NetBeans Profiler do not even recognize that the application was run.
I am looking for something similar to Python's cProfile, in that the profiler result is returned when the application exits.
You can profile your application using the JVM's built-in HPROF agent.
It provides two methods:
sampling the active methods on the stack
timing method execution using injected bytecode (BCI, bytecode injection)
Sampling
This method reveals how often methods were found on top of the stack.
java -agentlib:hprof=cpu=samples,file=profile.txt ...
Timing
This method counts the actual invocations of a method. The instrumenting code has been injected by the JVM beforehand.
java -agentlib:hprof=cpu=times,file=profile.txt ...
Note: this method will slow down the execution time drastically.
For both methods, the default filename is java.hprof.txt if the file= option is not present.
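For example, to sample a hypothetical command-line class MyTool processing input.txt (interval and depth are optional settings for the sampling interval in milliseconds and the recorded stack depth):

java -agentlib:hprof=cpu=samples,interval=10,depth=8,file=profile.txt MyTool input.txt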
Full help can be obtained using java -agentlib:hprof=help or can be found in Oracle's documentation.
Sun Java 6 has the java -Xprof switch that'll give you some profiling data.
-Xprof output cpu profiling data
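For instance, for a hypothetical main class MyTool, the flat profile should be printed to standard output as the program's threads exit:

java -Xprof MyTool input.txt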
A program running for 30 seconds is not short-lived. What you want is a profiler that can start your program instead of you having to attach to a running system. I believe most profilers can do that, but you would most likely prefer one integrated into an IDE. Have a look at NetBeans.
Profiling a short-running Java application has a couple of technical difficulties:
Profiling tools typically work by sampling the processor's SP or PC register periodically to see where the application is currently executing. If your application is short-lived, insufficient samples may be taken to get an accurate picture.
You can address this by modifying the application to run a number of times in a loop, as suggested by #Mike. You'll have problems if your app calls System.exit(), but the main problem is ...
The performance characteristics of a short-lived Java application are likely to be distorted by JVM warm-up effects. A lot of time will be spent in loading the classes required by your app. Then your code (and library code) will be interpreted for a bit, until the JIT compiler has figured out what needs to be compiled to native code. Finally, the JIT compiler will spend time doing its work.
I don't know if profilers attempt to compensate for JVM warm-up effects. But even if they do, these effects influence your application's real behavior, and there is not a great deal that the application developer can do to mitigate them.
Returning to my previous point ... if you run a short-lived app in a loop, you are actually doing something that modifies its normal execution pattern and removes the JVM warm-up component. So when you optimize the method that takes (say) 50% of the execution time in the modified app, that is really 50% of the time excluding JVM warm-up. If JVM warm-up accounts for (say) 80% of the execution time when the app is executed normally, you are actually optimizing 50% of the remaining 20% ... and that is not worth the effort.
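One rough way to gauge how much warm-up contributes for your particular app is to time several passes of the same work inside a single JVM; a minimal sketch, with doWork() again standing in for the real logic:

public class WarmupCheck {
    public static void main(String[] args) {
        for (int pass = 1; pass <= 5; pass++) {
            long start = System.nanoTime();
            doWork(args); // the application's real work
            long millis = (System.nanoTime() - start) / 1000000;
            // The first pass includes class loading, interpretation and JIT compilation;
            // later passes approximate steady-state performance.
            System.out.println("pass " + pass + ": " + millis + " ms");
        }
    }

    private static void doWork(String[] args) {
        // ... placeholder for the application's original logic ...
    }
}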
If it doesn't take long enough, just wrap a loop around it, an infinite loop if you like. That will have no effect on the inclusive time percentages spent either in functions or in lines of code. Then, given that it's taking plenty of time, I just rely on this technique. That tells which lines of code, whether they are function calls or not, are costing the highest percentage of time and would therefore gain the most if they could be avoided.
Start your application with profiling turned on, waiting for the profiler to attach. Any profiler that conforms to the Java profiling architecture should work. I've tried this with NetBeans's profiler.
Basically, when your application starts, it waits for a profiler to be attached before executing. So, technically, even the very first line of code can be profiled.
With this approach, you can profile all kinds of things, from threads, memory and CPU to method/class invocation times/durations...
http://profiler.netbeans.org/
The SD Java Profiler can capture statement block execution-count data no matter how short your run is. Relative execution counts will tell you where the time is spent.
You can use a measurement (metering) recording: http://www.jinspired.com/site/case-study-scala-compiler-part-9
You can also inspect the resulting snapshots: http://www.jinspired.com/site/case-study-scala-compiler-part-10
Disclaimer: I am the architect of JXInsight/OpenCore.
I suggest you try YourKit. It can profile from the start and dump the results when the program finishes. You have to pay for it, but you can get an eval license or use the EAP version without one (time-limited).
YourKit can take a snapshot of a profiling session, which can later be analyzed in the YourKit GUI. I use this feature to profile a short-lived command-line application I work on. See my answer to this question for details.