Throttling CPU from within Java

I have seen many questions in this (and other) forums with the same title, but none of them seemed to address exactly my problem. This is it:
I have got a JVM that eats all the CPU on the machine that hosts it. I would like to throttle it; however, I cannot rely on any throttling tool/technique external to Java, as I cannot make assumptions about where this VM will be run. For instance, I cannot use processor affinity, because if the VM runs on a Mac the OS won't make processor affinity available.
What I would need is an indication as to whether means exist within Java to ensure the thread does not take the full CPU.
I would like to point out straight away that I cannot use techniques based on alternating process execution and pauses, as suggested in some forums, because the thread needs to generate values continuously.
Ideally I'd like some means of, for instance, setting some VM or thread priority, or capping in some way the percentage of CPU consumed.
Any help would be much appreciated.

What I would need is an indication as to whether means exist within Java to ensure the thread does not take the full CPU.
There is no way that I know of to do this within Java except for tuning your application to use less CPU.
You could put some Thread.sleep(...); calls in your calculation methods. A profiler would help by showing you the hot loops/methods/etc.
Forking fewer threads would also reduce the CPU used: move to fixed-size thread pools, or lower the number of threads in your pools (a minimal sketch follows at the end of this answer).
It may not be CPU that is the problem but other resources. Watch your IO bandwidth for example. Slowing down your network or disk reads/writes might restore your server to proper operation.
From outside of the JVM you could use the Unix nice command to lower the priority of the running JVM so that it does not dominate the system. It will still get CPU when available, but other applications will get a larger share of it.
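To illustrate the fixed-size thread pool idea, here is a minimal sketch, assuming the work can be broken into submittable tasks (the class name, the pool size of 2, and the task body are made up for illustration):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class BoundedWorkers {
    public static void main(String[] args) {
        // Cap the number of worker threads so the process cannot
        // saturate every core; 2 is an arbitrary example value.
        ExecutorService pool = Executors.newFixedThreadPool(2);
        for (int i = 0; i < 100; i++) {
            final int task = i;
            pool.submit(() -> {
                // placeholder for the real calculation
                System.out.println("task " + task + " done");
            });
        }
        pool.shutdown();
    }
}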

I take it you want something more reliable than setting the threads' priorities?
If you want throttled execution of some code that is constantly generating values, you need to look into chunking up the work the thread(s) do, and coding in your own timer. For example, the java.util.Timer allows for scheduling execution at a fixed rate.
Any other technique will still consume as much CPU as is available (1 core per thread, assuming no locks preventing concurrent execution) when the scheduler doesn't have other tasks to prioritize ahead of yours.
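As a rough illustration of the java.util.Timer approach mentioned above, here is a minimal sketch that generates a bounded chunk of values per tick instead of spinning freely (the chunk size of 1000, the 100 ms period, and generateValue() are arbitrary placeholders):

import java.util.Timer;
import java.util.TimerTask;

public class ChunkedGenerator {
    public static void main(String[] args) {
        Timer timer = new Timer();
        timer.scheduleAtFixedRate(new TimerTask() {
            @Override
            public void run() {
                // Do a fixed amount of work per tick rather than looping without pause.
                for (int i = 0; i < 1000; i++) {
                    generateValue();
                }
            }
        }, 0, 100);
    }

    private static void generateValue() {
        // placeholder for the real value-producing code
    }
}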

The crux is simply that you said the thread "must generate values continuously", and if that is true in the extreme, then CPU saturation is actually the goal.
But, if you define "continuously" as X values per second, then there is room to work.
Because then you can run your process at 100% CPU, measure the number of values over time, and if you find that it generates more values than necessary (more than X/sec), you can insert pauses into the process as appropriate until the value rate reaches your desired goal.
The plan is to continually monitor and adjust the pauses to maintain your value rate over time. Your process will then take only as much CPU as necessary to meet your values/sec goal.
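A minimal sketch of that idea, assuming a target rate of X values per second (the target number, the field names, and generateValue() are illustrative, not from the question):

public class RateLimitedGenerator implements Runnable {
    private static final double TARGET_PER_SEC = 10_000; // "X": example target rate

    @Override
    public void run() {
        long produced = 0;
        long start = System.nanoTime();
        while (!Thread.currentThread().isInterrupted()) {
            generateValue();
            produced++;
            // If we are running ahead of the target rate, pause briefly so the
            // loop does not consume more CPU than it needs to.
            double elapsedSec = (System.nanoTime() - start) / 1_000_000_000.0;
            if (produced / elapsedSec > TARGET_PER_SEC) {
                try {
                    Thread.sleep(10);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }
    }

    private void generateValue() {
        // placeholder for the real value-producing code
    }
}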
Addenda:
If you have a benchmark of values/sec that you are happy with, then interjecting the sleeps will give "all the priority necessary" to the other applications, but still maintain your throughput. If, on the other hand, you don't have any solid requirement (that is, the requirement is "run as fast as possible when nothing else is running, with no actual requirement for ANY results if some other process dominates the CPU"), then that's truly a kernel issue of the host OS, and not something the JVM has any direct, portable mechanism to address.
On Unix systems, you have the nice(1) command to adjust process (not thread) priority, and Windows has its own mechanism. With these commands, you can knock the priority of your Java process down to just above "idle" (the default "process" that always runs when nothing else is running). But it's platform specific, as this is an inherently platform-specific problem. It may well be managed through platform-specific startup scripts that launch your Java program (or even a Java launcher that detects the platform and "does the right thing" before executing your actual code).
Most systems will allow you to lower your own process's priority, but few will let you raise it unless you're an admin/superuser or have whatever the appropriate role is on your host OS.

Check to see if you have any "tight loops" in your code.
while (true) {
    if (object.checkSomething()) {
        ...
    }
}
If you do, then you are burning the CPU cycles on millions of checks that are probably not that time critical. The JVM will oblige (because it doesn't know if the check is "important" or not) and you'll get 100% CPU.
If you find such loops, rewrite them like so
while (true) {
    if (object.checkSomething()) {
        ...
    }
    try {
        Thread.sleep(100);
    } catch (InterruptedException e) {
        // purposefully do nothing
    }
}
and the sleeping will voluntarily release the CPU within the loop, preventing it from running too quickly (and checking the condition too many times).

Really interesting thread. I found out Java does not provide means for doing what I want to do, and the only way to do this is from outside the JVM.
I ended up using nice to alter the scheduling priority in my test (Linux) environment; I will still need to find something similar for Windows-based OSs.
Everyone's intervention has been much appreciated.

Related

Java Identifying Source of Latency

I am working on a Java process which is conceptually rather simple. It is a single thread constantly fetching data from various sources and making decisions based on it. I have recently noticed a suspicious delay between two log lines, where I would not expect much processing to happen (tens of millis of delay versus an expectation of one to a few millis).
Since this suspicious delay is not always there, my first thought was that I did a poor job at minimizing the need for garbage collection, causing the JVM to pause execution at unwanted times.
While I still believe I did a poor job with that, it doesn't seem to be the cause. I have added the following JVM parameters: -Xlog:gc*=info,safepoint::timemillis,level,tags
and I see no pause between my suspicious log lines. Could there be other JVM pauses that these JVM params would not reveal?
Anyway, would Java pros have any recommendations for efficiently tracking down the source of this latency? Any (preferably free) tools I could use to monitor and understand what's happening?
Environment info: Linux 3.10, Java 11. The process in question is running on an isolated core; other than that I have not done much tuning.
Depending on where the information is coming from, the latency may be in fetching the information itself. For example, if it has to reach out to a server, then connecting to and querying the server may be introducing that latency; or maybe you are hitting your disk and are limited by the speed of your drive. Beyond that, I am not sure. Maybe add in some timing statements like these:
long time = System.currentTimeMillis();
foo();
System.err.println(System.currentTimeMillis() - time + "ms");

Will threading help improve efficiency in Java?

My application is supposed to have a "realtime with pause" functionality. The user can pause execution, do some things that modify what's going to happen, then unpause and let stuff happen. Stuff happens at regular intervals as specified by the user, can be slow, can be fast.
My goal at using threading here is to improve performance on multicore systems. The amount of data that the application is supposed to crunch at the time intervals is supposed to be arbitrarily large (I expect lots and lots of loops over collections, modifying object properties and generating random numbers, but precious little disk access). I don't want the application to be constrained by the capacity of a single core, if it can use more to run faster.
Will this actually work this way?
I've run some tests (made a program crunch numbers a lot, and looked at CPU usage during its activity), but it's not really conclusive - usage is certainly in the proximity of 100% on my dual core machine, but hardly ever 100%. Does a single-threaded (main only) Java application use all available cores for computation?
Does a single-threaded (main only) Java application use all available cores for computation?
No, it will normally use a single core.
Making a program do computations in parallel with multiple threads may make it faster, but it's not a magical solution for any kind of problem. Whether this is a suitable solution for your program depends on what your program is doing exactly, and if the algorithm can be parallelized. If, for example, you are doing lots of computations where the next computation depends on the result of the previous computation, then making it multi-threaded will not help a lot, because you can't do the computations at the same time - the next one first has to wait for the answer of the previous one. So, you first have to think about what computations in your program could be run in parallel.
Java has a lot of support for multi-threading. You can program with threads directly, or use an executor service, or use the fork/join framework. Whatever is appropriate depends on what exactly you want to do.
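As a rough sketch of the executor-service route, here is how a batch of independent computations could be spread across the available cores (the class name and the crunch() workload are placeholders, not from the question):

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelCrunch {
    public static void main(String[] args) throws Exception {
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);

        List<Callable<Long>> tasks = new ArrayList<>();
        for (int i = 0; i < cores; i++) {
            final int chunk = i;
            tasks.add(() -> crunch(chunk)); // each chunk is independent
        }

        long total = 0;
        for (Future<Long> f : pool.invokeAll(tasks)) {
            total += f.get(); // combine the partial results
        }
        pool.shutdown();
        System.out.println("total = " + total);
    }

    private static long crunch(int chunk) {
        // placeholder for the real number crunching
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) {
            sum += (long) chunk * i;
        }
        return sum;
    }
}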
Does a single-threaded (main only) Java application use all available cores for computation?
Not usually, but you can make use of some higher-level APIs in Java that use threads for you without your touching threads directly: most obviously fork/join and executors, less obviously the new Streams API on collections (i.e. parallelStream; a short example follows below).
In general, though, to make use of all cores you need to introduce some kind of concurrency. Further, it's really hard to just observe your OS monitor to see what is going on (especially with only 2 cores): your OS has other things going on (trying to manage itself, running your IDE, running crontab, running a browser to post to Stack Overflow ;).
Finally, just adding concurrency by itself may not help; you have to do it "right" for your code/algorithm.
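For example, a minimal parallelStream sketch over a collection (the squaring computation is just a stand-in for real work):

import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.LongStream;

public class ParallelStreamExample {
    public static void main(String[] args) {
        List<Long> values = LongStream.rangeClosed(1, 1_000_000)
                                      .boxed()
                                      .collect(Collectors.toList());
        // parallelStream() splits the work across the common fork/join pool,
        // typically using one thread per core.
        long sum = values.parallelStream()
                         .mapToLong(v -> v * v)
                         .sum();
        System.out.println(sum);
    }
}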
A Java thread will run on a single CPU at any one time. To use multiple CPUs, you need multiple threads.
Imagine that you have to do various tasks using your hands. You will do it slowly using one hand and more efficiently using both hands. Similarly, in Java or in any other language, multithreading provides the system with many hands. The good news is that you can have many threads doing different tasks. Running all operations on a single thread makes the program sluggish and sometimes unresponsive. A good practice is to do long-running tasks in a separate thread. For example, loading large chunks of data from a database should be done in a separate thread. Downloading data from the internet should also be done in a separate thread. What happens if you do long-running operations in the main thread? The program HANGS and will become unresponsive until the task completes, and the user will think that there is something wrong. I hope you get it.

Would process with more threads on JVM have more cpu time than process with one thread?

Based on that question about Linux, this is an effective way of hogging the CPU up until kernel 2.6.38. How about the JVM? Assume we have implemented a lock-free algorithm and all these threads are totally independent of each other. Will more threads help us gain more CPU time from the system?
The short answer is yes. More processes will also result in getting more CPU time.
The default assumption of a typical scheduler on a modern operating system is that anything that asks for CPU time intends to use the CPU to make useful forward progress and it's generally more important to make as much forward progress as possible than to be "fair". If you have some notion of fairness that's important to your particular workload, you can specifically configure it in most operating systems.
More threads will use more CPU time. However, you also get much more overhead, and you can end up getting less useful work done. For a CPU-bound process where your threads can work independently, the optimal number of threads is usually the number of CPUs you have, rarely more. For processes limited by a system resource, the optimal number can be one. For processes limited by something external to the system, you can actually gain by having more threads than CPUs, but it would be a mistake to assume this is always the case.
In short, is your goal to burn CPU, or is it to get something done?
Assume we have implemented the lock free algorithm, all these threads are totally independent from each other. Will more threads help us to gain more CPU time from the system?
Not exactly sure what you are asking. The JVM can certainly go above 100% and take over more than a single CPU if your threads use a lot of CPU. IO-bound applications, regardless of the number of threads, might spike over 100% but never sustain it. If your program is waiting on web connections or reading and writing to disk, it may max out the IO chain and not run much concurrent processing.
When the JVM forks a thread (depending on the architecture), it works with the OS to allocate an OS thread. In Linux these are called clones, and you can see them in the process table with the right args to ps. If you fork 100 threads in your JVM, there are 100 corresponding clones in the Linux kernel. If all of them are spinning (i.e. not waiting on some system resource), then the OS will give them all time slices of the available CPU, depending on priority and competition with other applications and system processes.
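A minimal sketch that forks a handful of spinning threads, which you could then observe as separate entries in the process/thread table (for example with ps -eLf on Linux); the thread count of 4 is an arbitrary example:

public class BusyThreads {
    public static void main(String[] args) {
        int threads = 4; // example value; each spinning thread can occupy one core
        for (int i = 0; i < threads; i++) {
            Thread t = new Thread(() -> {
                long counter = 0;
                while (true) {
                    counter++; // spin without waiting on any resource
                }
            }, "busy-" + i);
            t.start();
        }
    }
}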

How can I prevent my Java program from lagging while other applications are running?

I wrote a simple code in Java that uses the Robot class to move the mouse according to some conditions.
Although the code works nicely, there seems to be a 'lag' when other applications are running.
I think Java has some issues posting system messages.
Is there a workaround to avoid this?
Before you start thinking about reducing the lag, you must first understand its causes. I'll present the answer(s) in a fashion in which you can understand the "why" along with the "what to do".
By your description that the lag only occurs when other programs are running along with your robot, the most likely causes for the lag are:
Lack of system resources - Too many things running at the same time, consuming too much memory/processing power, thus making the OS slow down some programs in order to be able to run the others.
What to do: To try to fix such issues, you can optimize your code to make it use less memory/processing power, reducing the cause of the lag and thereby the lag itself. Unfortunately, though, it's hard to legally do the same for any 3rd-party programs, so the lag can hardly be completely removed if the concurrent applications are not yours.
Concurrency regarding a non-replicable, non-shareable component - One or more components that cannot be accessed by more than one process at a time, and that cannot be cloned into multiple instances, need to be used by more than one running process. While one process has control of such a component, the other processes have no choice but to wait for it to be freed.
What to do: In this case, there is hardly any legal method other than to reduce the concurrent processes' priority while increasing yours (effectively slowing them down so that your program runs faster), or to shut them down completely.
How to do it: To raise your program's priority, this is the code to set the current thread's priority to 80% of the maximum (the default is usually 50%); insert it at the start of your main():
Thread.currentThread().setPriority((int)(Thread.MAX_PRIORITY*0.8));
Note: You can set your thread to "never" let go of whatever components it needs by using Thread.MAX_PRIORITY without multiplying by 0.8, but that is not recommended, as it will pretty much pause any process that requires those components (much the same as shutting them down while yours is running), and if your program hangs, for whatever reason, so will they, as the components are never released.

Java performance Inconsistent

I have an interpreter written in Java. I am trying to test the performance results of various optimisations in the interpreter. To do this I parse the code and then repeatedly run the interpreter over it; this continues until I get 5 runs which differ by a very small margin (0.1s in the times below), and the mean is then taken and printed. No I/O or randomness happens in the interpreter. If I run the interpreter again, I get different run times:
91.8s
95.7s
93.8s
97.6s
94.6s
94.6s
107.4s
I have tried, to no avail, the server and client VM, the serial and parallel GC, large tables, and Windows and Linux. These runs are on a 1.6.0_14 JVM. The computer has no processes running in the background. So I am asking: what may be causing these large variations, or how can I find out what is?
The actual issue was caused by the fact that the program had to iterate to a fixed-point solution and the values were stored in a HashSet. The hashed values differed between runs, resulting in a different ordering, which in turn led to a change in the number of iterations needed to reach the solution.
"Wall clock time" is rarely a good measurement for benchmarking. A modern OS is extremely unlikely to "[have] no processes running in the background" -- for all you know, it could be writing dirty block buffers to disk, because it's decided that there's no other contention.
Instead, I recommend using ThreadMXBean to track actual CPU consumption.
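A minimal sketch of measuring per-thread CPU time with ThreadMXBean instead of wall-clock time (the runInterpreter() workload is a placeholder for the code being benchmarked):

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class CpuTimeBenchmark {
    public static void main(String[] args) {
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
        long before = bean.getCurrentThreadCpuTime(); // nanoseconds of CPU actually consumed
        runInterpreter();
        long after = bean.getCurrentThreadCpuTime();
        System.out.println("CPU time: " + (after - before) / 1_000_000 + " ms");
    }

    private static void runInterpreter() {
        // placeholder for the code being benchmarked
        long sum = 0;
        for (int i = 0; i < 50_000_000; i++) {
            sum += i;
        }
        if (sum == 42) System.out.println(); // keep the loop from being optimized away
    }
}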
Your variations don't look that large. It's simply the nature of the beast that there are other things running outside of your direct control, both in the OS and the JVM, and you're not likely to get exact results.
Things that could affect runtime:
if your test runs are creating objects (may be invisible to you, within library calls, etc) then your repeats may trigger a GC
Different GC algorithms and specifications will react differently and have different thresholds for incremental GC. You could try to run a System.gc() before every run, although the JVM is not guaranteed to collect when you call it (although it always has when I've played with it); see the sketch after this list. Depending on the size of your test, and how many iterations you're running, this may be an unpleasantly (and nearly uselessly) slow thing to wait for.
Are you doing any sort of randomization within your tests? e.g. if you're testing integers, values < |128| may be handled slightly differently in memory.
Ultimately I don't think it's possible to get an exact figure, probably the best you can do is an average figure around the cluster of results.
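A minimal sketch of a repeated-runs harness that requests a collection before each timed run, as suggested above (the workload() method is a placeholder, and System.gc() remains only a request):

import java.util.ArrayList;
import java.util.List;

public class RepeatedRuns {
    public static void main(String[] args) {
        List<Long> times = new ArrayList<>();
        for (int run = 0; run < 5; run++) {
            System.gc(); // request (but not guarantee) a collection before each run
            long start = System.nanoTime();
            workload();
            times.add((System.nanoTime() - start) / 1_000_000);
        }
        System.out.println("run times (ms): " + times);
    }

    private static void workload() {
        // placeholder for the interpreter run being measured
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 200_000; i++) {
            sb.append(i);
        }
    }
}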
The garbage collection may be responsible. Even though your logic is the same, it may be that the GC logic is being scheduled on external clock/events.
But I don't know that much about the JVM's GC implementation.
This seems like a significant variation to me, I would try running with -verbosegc.
You should be able to get the variation to much less than a second if your process has no IO, output or network of any significance.
I suggest profiling your application, there is highly likely to be significant saving if you haven't done this already.
