Need some help from the experts!
We have a project here (still in development) that needs to run 50 Java processes at the same time every 5 minutes (for now; that number will probably double or triple in the future). I set -Xmx50m for every process, and our server has only 4 GB of RAM, so I know that would really slow our server down. What I have in mind is to upgrade our RAM. My question is: do I have any other options to prevent our server from being slow when running that many Java processes?
Since you have 50 processes at -Xmx50m each, by your own assumption they need about 2.5 GB of heap alone (50 × 50 MB), before counting per-JVM overhead.
To keep your server from becoming slow, follow some best practices when setting the Java memory parameters, e.g. set -Xms and -Xmx to the same value, and pick values based on what each process actually uses. You can also profile the processes at runtime to make sure everything is OK.
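A minimal illustration of pinning the heap for each process (the jar name is a placeholder; the 50 MB figure comes from the question):

    java -Xms50m -Xmx50m -jar worker.jar

Setting -Xms equal to -Xmx avoids heap resizing at runtime, but note that each JVM still uses memory beyond the heap (metaspace, thread stacks, native buffers), so the real footprint per process will be higher than 50 MB.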
Do you know whether there exists a library for Java that makes it possible to get recent CPU usage, say for the last few seconds?
This library should work on different OSes (Mac, Linux, Windows) and be container-aware. Let's say our JVM runs in a container whose CPU is limited to 1000 ticks per period; then the CPU usage reported for the process should be relative to that limit.
With JMX / MBeans you can get that kind of info. A Java process will only run as fast as its CPU limit allows, so an absolute measurement taken on one machine doesn't carry over meaningfully to a different one.
To get a realistic picture, develop against Docker with CPU limits in place and check wall-clock time to validate the service's requirements under load tests, e.g. with Gatling: https://gatling.io/.
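A minimal sketch of the JMX route, using the com.sun.management extension of OperatingSystemMXBean (available on HotSpot/OpenJDK; whether the reported value is scaled to a container CPU quota depends on the JDK version, so verify on your target JDK):

    import com.sun.management.OperatingSystemMXBean;
    import java.lang.management.ManagementFactory;

    public class CpuUsageProbe {
        public static void main(String[] args) throws InterruptedException {
            OperatingSystemMXBean os =
                    (OperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean();
            while (true) {
                // Recent CPU usage of this JVM process as a value in [0.0, 1.0],
                // or a negative value if it is not yet available.
                double load = os.getProcessCpuLoad();
                System.out.printf("process cpu load: %.1f%%%n", load * 100);
                Thread.sleep(2_000);
            }
        }
    }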
I have a number of Java processes using OpenJDK 11 running on Windows Server 2019. The server has two physical processors and 36 total cores; it is an HP machine. When I start my processes, I see work allocated in Task Manager across all the cores. This is good. However, after the processes have run for some period of time (not a consistent amount of time), the machine begins to utilize only half the cores.
I am working off a few theories:
The JDK has some problem that is preventing it from consistently accessing all the cores.
Something with Windows Server 2019 is causing a problem, limiting Java from accessing all the cores.
There is a thermal management problem and one processor is getting too hot and the OS is directing all the processing to the other processor.
There is some issue with hyper-threading and the 'logical' processors that is causing the process to not be able to utilize all the cores.
I've tried searching for JDK issues and haven't found anything like this mentioned. I went down to the server and while it's running a little warm, it didn't appear excessively hot. I have not yet tried disabling hyper-threading. I have tried a number of parameters to force the JVM to use all the cores and indeed the process initially does use all the cores; I can see the activity in Task Manager.
Anyone have any thoughts? This is a really baffling problem and I'd appreciate any ideas.
UPDATE: I am able to make it use the other processor by using Task Manager to assign one of the java.exe processes to the other processor. This also works from the java invocation on the command line, with an argument specifying which socket to use.
Now that said, this feels like a hack. I don't see why I should have to manually assign a socket to each of my java processes; that job should be left to the OS. I'm still not sure exactly where the problem is, if it's the OS or what.
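For reference, this is roughly what the command-line workaround looks like on Windows; the NUMA node number, affinity mask, and jar name are just examples:

    start /NODE 1 /AFFINITY 0xF java -jar app.jar

One thing that may be worth checking: with hyper-threading enabled, 36 cores give 72 logical processors, which is more than the 64 that fit in a single Windows processor group, and a process that is not processor-group-aware can end up confined to one group.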
We have a Tomcat application running on a Debian 6.07 server.
Lately the CPU usage has been increasing gradually.
Using the top command I noticed that the CPU usage of the Java PID keeps increasing every day.
I need to restart Tomcat to bring it back to normal.
After restarting Tomcat, the Java CPU usage goes back to around 2%.
From that moment it increases every day, and I need to restart Tomcat every time it reaches around 40%.
Is there any way to fix this issue?
Thank you
It looks like you have a memory leak, or a thread that keeps consuming memory or CPU iteratively without freeing unused resources.
You can also use tools like a Java profiler (or any other Java auditing and profiling tool) to analyse what resources are being used and by whom (classes, threads, etc.).
Check out the following links for Java profiling tools:
https://blog.idrsolutions.com/2014/06/java-performance-tuning-tools/
http://www.infoq.com/articles/java-profiling-with-open-source
(if you can share more info I'll edit my answer properly)
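Before reaching for a full profiler, a quick way to see which thread inside Tomcat is burning the CPU; the PID (12345) and thread id (4321) below are placeholders:

    top -H -p 12345                         # list per-thread CPU usage inside the Tomcat process
    printf '%x\n' 4321                      # convert the busy thread id from top to hex -> 10e1
    jstack 12345 | grep -A 20 'nid=0x10e1'  # find that thread's stack trace in a thread dump

The stack trace usually points straight at the code (or the webapp) that is looping or leaking.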
I have a project where I have to send emails using the Amazon SES REST API. Amazon allows a number of concurrent connections based on the account; in my case it allows me to open 50 connections at the same time, which means I can send 50 emails/sec. To achieve this I am currently using Java Executor threads, where I throttle the rate to 50/sec. I have also implemented this with the Hibernate framework because I need to execute some SQL queries before sending the emails.
This Java program runs continuously in the background (it's a jar file). It takes around 512 MB of RAM, so my question is: can I use some other framework or a better threading approach to make it lighter? The SQL query I execute is only a select; update/delete/create queries are not used.
I am not good at Java, so maybe this sounds stupid.
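For context, a minimal sketch of the kind of throttling described above, without Hibernate (class and method names are made up; the 50/sec figure comes from the question):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class ThrottledMailer {
        private static final int MAX_PER_SECOND = 50; // the account's SES connection limit

        public static void main(String[] args) {
            ScheduledExecutorService ticker = Executors.newSingleThreadScheduledExecutor();
            ExecutorService workers = Executors.newFixedThreadPool(MAX_PER_SECOND);

            // Once per second, hand at most 50 send tasks to the worker pool.
            ticker.scheduleAtFixedRate(() -> {
                for (int i = 0; i < MAX_PER_SECOND; i++) {
                    workers.submit(ThrottledMailer::sendOneEmail);
                }
            }, 0, 1, TimeUnit.SECONDS);
        }

        private static void sendOneEmail() {
            // placeholder: run the SQL select and call the SES API here
        }
    }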
I guess the smallest possible framework to use would be plain JDBC.
This would limit your libraries to those in the JRE, plus the DB driver and maybe libraries for AWS / email. Depending on what else you need, selecting a compact profile might be worth investigating.
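A minimal sketch of the plain-JDBC route; the URL, credentials, table, and column names are all made up for illustration:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class RecipientLookup {
        // Placeholder connection details - substitute your own database and credentials.
        private static final String URL = "jdbc:mysql://localhost:3306/mailer";

        public static void lookupPending() throws SQLException {
            String sql = "SELECT email FROM recipients WHERE status = ?";
            try (Connection con = DriverManager.getConnection(URL, "user", "password");
                 PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setString(1, "PENDING");
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        String email = rs.getString("email");
                        // hand the address over to the email-sending code
                    }
                }
            }
        }
    }

Try-with-resources closes the connection, statement, and result set for you, and there is no session cache or proxying layer holding on to extra objects.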
Also check your memory settings:
If you set -Xms512m it's really not surprising your app uses 512m, is it?
Edit due to rephrased question
At your level of parallelism, most of your memory is consumed by objects, not by threads (well, threads are objects too, but small ones). Threads are fine the way they are in Java; you can run hundreds of them without them consuming the 500 MB of heap or more that you describe.
So the issue with 50 threads consuming 512 MB of your memory is more likely rooted in your code and your objects, not (only) in your threads.
In order to reduce the memory footprint, try the following:
Remove Hibernate. As you say, you only have a simple SQL select, so you don't need the overhead and the additional libraries.
Take a memory dump of your running app and analyse it. (MAT - Eclipse Memory Analyser tool comes to mind)
Check your other objects and how you use them. When you say "sending emails": how large are your emails? Might there be duplicate buffers due to a bad coding choice? Share the code for how you do it, and then we can have a look.
Try running without any memory options and see how the program runs on defaults.
Enable garbage collector logging and check the output.
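For the last point, a minimal sketch of the relevant flags (the jar name is a placeholder; the first form is for Java 8 and earlier, the second for Java 9 and later):

    java -verbose:gc -XX:+PrintGCDetails -Xloggc:gc.log -jar mailer.jar
    java -Xlog:gc*:file=gc.log -jar mailer.jar

The log shows how much of the heap actually survives each collection, which is a good hint of your real live-object footprint.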
I have a situation in which I need to create thousands of instances of a class from a third-party API. Each new instance creates a new thread. I start getting OutOfMemoryError once there are more than 1000 threads, but my application requires creating 30,000 instances. Each instance is active all the time. The application is deployed on a 64-bit Linux box with 8 GB of RAM, of which only 2 GB is available to my application.
The way the third party library works, I cannot use the new Executor framework or thread pooling.
So how can I solve this problem?
Note that using a thread pool is not an option. All threads are running all the time to capture events.
Since the memory size of the Linux box is not under my control: if I had the choice to have 25 GB available to my application on a 32 GB system, would that solve my problem, or would the JVM still choke?
Are there some optimal Java settings for the above scenario ?
The system uses Oracle Java 1.6 64 bit.
I concur with Ryan's Answer. But the problem is worse than his analysis suggests.
Hotspot JVMs have a hard-wired minimum stack size - 128k for Java 6 and 160k for Java 7.
That means that even if you set the stack size to the smallest possible value, you'd need roughly twice your allocated space just for thread stacks: 30,000 threads × 128 KB is about 3.7 GB, against the 2 GB available to your application.
In addition, having 30k native threads is liable to cause problems on some operating systems.
I put it to you that your task is impossible. You need to find an alternative design that does not require you to have 30k threads simultaneously. Alternatively, you need a much larger machine to run the application.
Reference: http://mail.openjdk.java.net/pipermail/hotspot-runtime-dev/2012-June/003867.html
I'd say give up now and figure out another way to do it. The default stack size is 512 KB; at 30k threads, that's 15 GB in stack space alone. To fit into 2 GB you'd need to cut it down to less than 64 KB per stack, and that leaves you with zero memory for the heap, including all the Thread objects, or for the JVM itself.
And that's just the most obvious problem you're likely to run into when running that many simultaneous threads in one JVM.
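For reference, the per-thread stack size is set with -Xss; a quick sanity check of the arithmetic (the class name is a placeholder):

    # 30,000 threads x 256 KB is roughly 7.3 GB of stack alone - still far beyond the 2 GB available
    java -Xss256k -Xmx512m EventCaptureApp

HotSpot will also refuse values below the hard-wired minimum mentioned in the previous answer, so the stacks cannot be squeezed much further.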
I think we are missing a lot of details, but would a distributed platform work? Each individual node would manage a range of your class instances. The nodes could run on different PCs or virtual machines and communicate with each other.
I had the same problem with an SNMP provider that required a thread for each outstanding get (I wanted to have tens of thousands of outstanding gets going on at once). Now that NIO exists I'd just rewrite the library myself if I had to do this again.
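For what it's worth, a minimal sketch of the selector-based approach such a rewrite would use: one thread servicing many non-blocking channels instead of one thread per outstanding request. The host, port, channel count, and protocol handling are all placeholders.

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.SocketChannel;

    public class SelectorSketch {
        public static void main(String[] args) throws IOException {
            Selector selector = Selector.open();

            // Register many channels with one selector instead of starting one thread each.
            for (int i = 0; i < 100; i++) {
                SocketChannel ch = SocketChannel.open();
                ch.configureBlocking(false);
                ch.connect(new InetSocketAddress("example.com", 80)); // placeholder endpoint
                ch.register(selector, SelectionKey.OP_CONNECT);
            }

            ByteBuffer buf = ByteBuffer.allocate(4096);
            while (true) {
                selector.select(); // blocks until at least one channel is ready
                for (SelectionKey key : selector.selectedKeys()) {
                    SocketChannel ch = (SocketChannel) key.channel();
                    if (key.isConnectable() && ch.finishConnect()) {
                        key.interestOps(SelectionKey.OP_READ); // connected, now wait for data
                    } else if (key.isReadable()) {
                        buf.clear();
                        int n = ch.read(buf);
                        // handle the response here, all on this single thread
                        if (n == -1) {
                            key.cancel();
                            ch.close();
                        }
                    }
                }
                selector.selectedKeys().clear();
            }
        }
    }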
You cannot solve it in "Java code" or configuration. Windows chokes at around 2,000-3,000 threads in my experience (this may have changed in later versions). When I was doing this I was surprised to find that Linux supported even fewer threads (around 1000).
When the system stops supplying threads, "Out of Memory" is the exception you should expect to see, so I'm sure that's what is happening; I started getting this exception long before I actually ran out of memory. Perhaps you could hack Linux somehow to support more, but I have no idea how.
Using the concurrent package will not help here. If you could switch over to "green" threads it might, but that might require recompiling the JVM (it would be nice if it were available as a command-line switch, but I really don't think it is).