Java Servlet Container on a small VPS

A while back I was using a Virtual Private Server (VPS) that had very limited RAM. I used it to host Jetty. It was so slow that it became completely unusable; I believe the main problem was memory-related. I switched the project over to PHP and the problems disappeared.
Needless to say, I'm very hesitant to try Java again in a VPS. Even though the RAM in my VPS is significantly higher, it seems like PHP is streamlined for low RAM. Has anyone tried a VPS with a Servlet container and had a lot of success? Could it have been something simple with my Java config? Is PHP usually the better choice for a small VPS deployment?

ServerFault may be a better place to ask this than here, but in my experience 128 MB is dreadfully low. I run a Tomcat instance on a Linode VPS with 1 GB of guaranteed memory and haven't had any issues. The particular site in my case also has very low traffic, so I can't vouch for it under heavy load.
The 'burst' signifies that your VPS may be given access to more than your 128 MB at times (depending on overall server usage). For a server instance, access to this memory should be considered unreliable, and you're better off assuming the worst-case scenario of having only 128 MB.
In other words, pay more for more memory =)
Edit:
Ask and ye shall receive: top reports 1025 MB of virtual memory and 416 MB reserved. It's by far the largest memory hog running on my VPS.
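If you do give Java another try on a small VPS, the first lever is pinning down the JVM's footprint, since by default it sizes itself to the machine. A minimal sketch for Tomcat, assuming the stock startup scripts; the sizes are illustrative guesses for a 128 MB slice, and -XX:MaxPermSize applies to pre-Java-8 JVMs:
# Cap heap and PermGen so the JVM can't balloon past the guaranteed RAM
export CATALINA_OPTS="-Xms32m -Xmx64m -XX:MaxPermSize=48m"
$CATALINA_HOME/bin/startup.sh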

Related

Tomcat committed virtual memory = more than 2x the Xmx setting

I'm having difficulty understanding (or ignoring) the current memory commitments (virtual memory) of my JVMs.
For better context:
We're running a closed-source Java application. We just host/run the application by deploying the delivered .war files, but overall we are responsible for running it (this arrangement, as obscure as it is, is non-negotiable ;) )
The OS is RHEL 7, Tomcat v8. Overall we're running several Tomcats with an Apache 2.2 in front of them as a load balancer.
We experience massive memory swapping. It seems like every time we give the server VM more RAM, it immediately gets consumed by the Tomcats. Currently we run 7 Tomcats with a total of 48 GB of RAM. The plan is to upgrade to 64 GB or even higher, but I fear this won't change a thing given Tomcat's hunger for memory...
For security reasons I've blacked out some internal paths/IPs/ports etc.
I do know that the memory consumption of a JVM consists of more than just the assigned heap space, but in my case Xmx is set to 4096m and committed memory is about 10 GB, as seen in the jconsole screenshot and also confirmed by top on the server.
And this just doesn't seem right. I mean, over 2x the heap space?
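One thing I still plan to try is the JVM's Native Memory Tracking, which should break committed memory down by category (heap, thread stacks, code cache, GC structures). A sketch, assuming a JDK 8 or later where NMT and jcmd are available; the PID is illustrative:
export CATALINA_OPTS="$CATALINA_OPTS -XX:NativeMemoryTracking=summary"   # before starting the Tomcat in question
jcmd 12345 VM.native_memory summary                                      # then query the running JVM for the breakdown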
I've also read some memory-related Java material, but as stated above, the application is closed source, which doesn't give me much leeway for debugging / trying things out.
And since I am relatively new to Tomcat, I figured I might as well turn to a more experienced community :)
So here are the main questions that came up:
Is there a certain setting I am missing which definitely caps the maximum committed memory size for a Tomcat?
Is the behaviour we're experiencing normal?
How should I talk with the devs about this problem? We've already done heap dumps, but they weren't as telling as we hoped. The heap is correctly sized and is almost always at around 50% utilization.
Am I missing something here?
I know it's hard to tell from a distance, especially if no source code can be provided.
But any help is appreciated :)
Thanks!
Alex

Use large pages in JBoss

I was going through the JBoss manual, where I read about LARGEPAGES.
We have allocated more than 10 GB to the heap.
I want to know whether we really get any benefit from using this, or whether there are any side effects.
Is there any tool or utility that will let us know the pages being created? Basically I want to analyse page-related performance.
What I have done so far:
1) Changed the local security policy in Windows, where JBoss is running, to include the user account that will be running JBoss. Do I need to do this where JBoss is running or where the database is running?
2) Added -XX:+UseLargePages -XX:LargePageSizeInBytes=2m to the JBoss startup script.
Is there anything else, apart from the above, that needs to be done or that I need to take care of?
Machine details:
a) Windows Server 8
b) Processor: Intel Xeon
In my opinion, a 10 GB heap for a web application server like JBoss is very large.
A big heap means longer full GC pauses, and as you know, a full GC freezes the world.
Is there any specific reason to use that much memory?
If you have a lot of memory in one box, I recommend running several JBoss instances on it and load balancing with a reverse proxy. It will be more efficient.
It depends on your OS how large pages are handled and whether they really help. For example, a processor up to Sandy Bridge can hold up to 8 large pages in the TLB cache, while the latest Haswell can hold up to 1024, which is obviously a big difference.
In some versions of Linux you have to reserve the large pages in advance and there may be an advantage in pre-allocating this memory.
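As a concrete illustration of that reservation step on Linux (a sketch only: the page count is illustrative, must cover the whole heap, and the allocation can fail on a fragmented box):
# reserve ~11 GB of 2 MB huge pages for a 10 GB heap; run as root
sysctl -w vm.nr_hugepages=5632
# confirm the kernel actually granted them
grep HugePages /proc/meminfo
# start the JVM with large pages enabled
java -Xms10g -Xmx10g -XX:+UseLargePages -jar app.jar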
However, I am not aware of any big performance advantage to using large pages in the JVM, especially for larger heaps and pre-Haswell processors.
BTW, 10 GB isn't that big these days. The limit for CompressedOops is around 30 GB, and anything up to that is a medium-sized heap IMHO.
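If you want to confirm compressed oops are still in effect at your heap size, HotSpot will tell you; a quick check (the grep target is the standard flag name):
java -Xmx10g -XX:+PrintFlagsFinal -version | grep UseCompressedOops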

Java web application really slow

I am using Rackspace as a hosting provider, on their Cloud Server hosting with the 256 MB plan.
I am using Geronimo 2.2 to run my java application.
The server starts up no problem and loads Geronimo quite fast. However, deploying my web application takes forever, and once it is deployed, it takes forever to navigate through pages.
I've been monitoring the server activity: the CPU is not very busy, but 60% of the memory is being used up. Could this be the problem?
If so, what are my options? Should I upgrade this cloud server to something with more RAM, or change hosting providers to better suit my needs?
Edit:
I should note that, even without my application deployed, just having Geronimo loaded, I sometimes get a connection timeout when I try to shut down Geronimo.
Also, the database is on the same server as the application (however, I wouldn't say it's query-intensive).
Update:
After what @matiu suggested, I tried running free -m, and this is the output I got:
                     total       used       free     shared    buffers     cached
Mem:                   239        232          6          0          0          2
-/+ buffers/cache:     229          9
Swap:                  509        403        106
This was a totally different result from running ps ux, which is how I got my previous 60% figure.
I also ran an iostat check: about 25% iowait, and the device is constantly reading and writing.
update:
Upgraded my hosting to 512 MB, and now it is up to speed! Something I should note: I had forgotten about Java's Permanent Generation memory, which Geronimo also uses. So it turns out I did need more RAM, and more RAM did solve my problem. (As expected.) Yay.
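For anyone else who lands here: once I remembered PermGen, it also helped to size it explicitly instead of relying on the default. A sketch, assuming the startup script honors JAVA_OPTS; the sizes are what I'd try on a 512 MB box (not measured values), and the geronimo.sh invocation is from memory:
export JAVA_OPTS="-Xmx256m -XX:PermSize=64m -XX:MaxPermSize=128m"
bin/geronimo.sh run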
I'm guessing you're running into 'swapping'.
As you'll know Linux swaps out some memory to disk. This is great for memory that's not accessed very much.
When Java starts eating heaps and heaps, Linux starts:
Swapping memory block A out to disk to make space to read in block B
Reading/writing block B
Swapping block B to disk to make space for some other block.
As disk is thousands of times slower than RAM, your machine grinds closer and closer to a halt as memory usage increases.
With 256 MB Cloud Servers you get 512 MB of Swap space.
Checking:
You can check if this is the case with free -m .. this page shows how to read the output:
Next I'd check with 'iostat 5' to see what the disk I/O rate on the swap partition is. A write rate of 300 blocks a second or more means you're almost dead in the water. You'd want to keep the swap partition's write rate below 50 blocks a second and the read rate below 500 blocks a second .. if possible both should be zero most of the time. Remember disk is thousands of times slower than RAM.
You can check if it's Java eating the ram by running top and hitting shift+m to order the processes by memory consumption.
If you want .. you can disable the swap partition with swapoff -a .. then open up the web console and hit the site a bit .. you'll soon see error messages in the console like 'OOM Killed process xxx' (OOM is Out Of Memory). If you see those, that's Linux trying to satisfy memory requests by killing processes. Once that happens, it's best to hard reboot.
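Putting those checks together, a quick diagnostic pass might look like this (a sketch .. device names and thresholds vary per box):
free -m     # little free memory in the '-/+ buffers/cache' row means real pressure
iostat 5    # watch the swap device's read/write rates over time
vmstat 5    # the si/so columns show pages swapped in/out per second
top         # then press shift+m to sort processes by memory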
Fixing:
If it's Java using the RAM .. this link might help.
I think the easy fix would be just to upgrade the size of the Cloud Server.
You may find a different Java runtime environment works better.
If you run it in a 32-bit chroot it may use less RAM.
You should consider running a virtual dedicated Linux server from somewhere like Linode.
You'd have to worry about how to start a Java service and about things like firewalls, etc., but once you get it right, you are in effect your own hosting provider, able to do anything an actual standalone Linux box can do.
As for memory, I wouldn't upgrade until you have evidence that you do not have enough. 60% being used up is less than 100% used up...
Java normally assumes that it can take whatever is assigned to it. Meaning, if you give it a max of 200 MB, it thinks it's OK to take 200 MB even though it's using much less.
There is a way to make Java use less memory: the -Xincgc incremental garbage collector. It actually gives chunks of memory back to the system when it no longer needs them. This is a bit of a well-kept secret; you won't see many people point it out...
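A sketch of what that looks like on the command line (the heap sizes and class name are illustrative, and note that -Xincgc was eventually deprecated in later JVMs):
java -Xms32m -Xmx128m -Xincgc MyApp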
In my experience, memory and CPU load on VPSes are closely related: when the application server takes up all available memory, CPU usage starts to skyrocket, eventually making the application inaccessible.
That is just a side effect, though; you really need to investigate where your problems originate!
If memory consumption is very high, there can be multiple causes:
It's normal: maybe you have reached a point where all processes (the application server, the applications within it, background processes, daemons, the operating system, etc.) together need that much memory. This is the least probable scenario.
A memory leak: this can happen due to a bug in a framework or library (not likely) or in your own code (possible). It can be monitored and solved.
A huge number of requests: each request takes both CPU and memory to process. Look at the correlation between requests per second and memory consumption; again, this can be monitored and resolved.
If you are interested in CPU usage:
Again, monitor requests to your application. For a constant request count, nothing extraordinary should happen.
One component may be exhausting most resources (maybe your database is installed on the same server and uses all the CPU power due to inefficient queries? The slow query log would help; see the sketch below.)
As you can see, it's not a trivial task, but there are tools that can help you out. I personally use JavaMelody and Probe.
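If the shared-database theory above fits, the slow query log is cheap to turn on. A MySQL sketch; MySQL itself is an assumption since the question never names the database, and the threshold is illustrative:
# log every statement that takes longer than 1 second
mysql -e "SET GLOBAL slow_query_log = 1; SET GLOBAL long_query_time = 1;"
# then watch the file named by the slow_query_log_file variable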

Tomcat memory and virtual hosts

I have a VPS on which I serve Tomcat 6.0.
500 MB of memory was enough when I had only two applications.
Last week I deployed another web application and created a new virtual host by editing Tomcat's server.xml.
But the server's responses slowed down, and Linux started to eat into swap.
When I increased the memory to 750 MB it became stable again.
But memory is not cheap, so I won't be happy paying for 250 MB of RAM for each additional application.
Is needing "250 MB of additional memory" for each web app normal?
Is there any solution to decrease this cost?
For example, would putting these applications' common libraries into Tomcat's shared folder have a positive impact on Tomcat memory and performance?
Note: the deployed applications are web applications that use Spring, Hibernate, SiteMesh and related libraries; the war files total about 30 MB.
Thanks.
It's unlikely that this memory is being consumed by the Spring / Hibernate / etc. classes themselves, so the size of their .jar files isn't going to matter much. Putting these libraries in Tomcat's shared library directory would save a bit of memory, in that only one copy of these classes would be loaded, but that won't save much.
Isn't it simply more likely that your applications are just using this much memory for data and so forth? You need to use a profiler to figure out what is consuming the heap first. Until you know the problem, there's not much use in pursuing particular solutions.
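If you do try the shared-library route on Tomcat 6, the mechanism is the shared class loader. A sketch, with an illustrative directory layout (the shared.loader property itself is standard in conf/catalina.properties):
mkdir -p $CATALINA_HOME/shared/lib
cp spring*.jar hibernate*.jar $CATALINA_HOME/shared/lib/
# then point the shared loader at it in conf/catalina.properties:
#   shared.loader=${catalina.home}/shared/lib/*.jar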
I think you need to measure the memory consumption of the application.
You can use JProfiler, or Java 6's built-in profiling tools such as VisualVM and jhat (the best way to start).
Check this link for more info:
http://java.sun.com/developer/technicalArticles/J2SE/monitoring/
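If you want to start even lighter than a full profiler, the JDK's own tools can produce and browse a heap dump (a sketch; the PID and file name are illustrative):
jmap -dump:format=b,file=heap.bin 12345    # snapshot the live heap
jhat heap.bin                              # browse it at http://localhost:7000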

Java using too much memory on Linux?

I was testing the amount of memory Java uses on Linux. When just starting up an application that does absolutely NOTHING, it already reports that 11 MB is in use. Doing the same on a Windows machine, about 6 MB is in use. These were measured with the top command and the Windows Task Manager. The JVM I use on Linux is 1.6.0_11, and the HotSpot VM is Server 11.2. Starting the application with -client did not change anything.
Why does java take this much memory? How can I reduce this?
EDIT: I measure memory using the Windows Task Manager; on Linux I open a terminal and run top.
Also, I am only interested in how to reduce this, or whether I even CAN reduce it. I'll decide for myself whether a couple of megs is a lot or not. It's just that the 5 MB difference between Windows and Linux is strange, and I want to know if I can get there on Linux too.
If you think 11 MB is "too much" memory... you'd better avoid using Java entirely. Seriously, the JVM needs to do quite a lot of stuff (bytecode verification, GC, loading all the essential classes), and in an age where average desktop machines have 4 GB of RAM, keeping the base JVM overhead (and memory use in general) very low is simply not a design priority.
If you need your app to run on an embedded system (pretty much the only case where 11 MB might legitimately be considered "too much"), there are special JVMs designed for such systems that use less RAM, but at the cost of lacking many of the features and/or performance of mainstream JVMs.
You can control the heap size; otherwise default values will be used. java -X gives you an explanation of these switches, e.g.:
JAVA_OPTS="-Xms6m -Xmx6m"
java $JAVA_OPTS MyClass
The question you might really be asking is, "Do Windows Task Manager and Linux top report memory in the same way?" I'm sure there are others who can answer this better than I, but I suspect you may not be making an apples-to-apples comparison.
Try using the jconsole application on each respective machine for a more granular inspection. You'll find jconsole in your JDK's bin directory.
There is also a very extensive discussion of java memory management at http://www.ibm.com/developerworks/linux/library/j-nativememory-linux/
The short answer is that how memory is allocated is more complex than the single figure at the top of a simplified system utility can convey.
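On the Linux side, one way to see that complexity directly is to dump the process's memory map instead of trusting top's single number (a sketch; the PID is illustrative):
pmap -x 12345    # per-mapping breakdown: heap, thread stacks, mapped jars, shared libraries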
Both top and Task Manager report how much memory has been allocated to a process, not how much the process is actually using, so it's not an apples-to-apples comparison. Regardless, in the age of gigabytes of memory, what's a couple of megabytes here or there at startup?
Linux and Windows are radically different operating systems and use RAM very differently. Windows allocates more or less as you go, while Linux caches more at once and prepares for the future, so that subsequent operations are smooth.
This explanation is not quite right, but it's close enough for you.
