We have created a Spring web application using:
Spring 3.1.0
Hibernate 3.5.4.Final
Tomcat 6.24
The application is reasonably heavy: we send about 1,000 contacts per user request.
We tested the application with 9 concurrent users issuing repeated requests and profiled it with VisualVM; the results are as follows:
Looking at the results, the high peaks correspond to the repeated requests and the lower points to when all requests have stopped. The first ~200 MB of memory never seems to be released at all. Is Spring really this heavy, or do I have a potential memory issue? The release version of this web app will potentially have to handle many more users.
I see similar results when testing on Tomcat 7 as well.
It's not a memory issue. The GC is smart enough to release objects once there are no references to them in your application. Make sure there are no global references to objects that could instead be local to a method. As your graph shows, objects are being released; the ~200 MB floor may simply be what PermGen and the JVM itself require, so you shouldn't worry.
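To make the "global reference" point concrete, here is a minimal, hypothetical sketch (the class and method names are invented for illustration) of how a static collection pins memory that a local variable would have released:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: a static collection retains objects for the JVM's
// lifetime, so the GC can never reclaim them, while a local reference
// becomes collectable as soon as the method returns.
public class ContactCache {
    // Global reference: grows on every request and is never cleared
    static final List<String> ALL_CONTACTS = new ArrayList<>();

    static void handleRequestLeaky(List<String> contacts) {
        ALL_CONTACTS.addAll(contacts); // retained forever
    }

    // Fix: keep the reference local; the copy is GC-eligible after return
    static int handleRequestLocal(List<String> contacts) {
        List<String> scratch = new ArrayList<>(contacts);
        return scratch.size();
    }
}
```

If a heap dump shows your contact objects reachable only from method frames, the sawtooth in VisualVM is normal; if they hang off a static field like the one above, that memory will never come back.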
I have an API (Spring Boot/Spring Web using Swagger) that has a throughput (TPS?) of 9.05 (I'm not sure how this is calculated, but it's displayed on a metrics page). The API gets hit thousands of times per hour, sometimes peaking at 9,000 calls. Average response time is approximately 2000-3000 ms. This is a simple API that accepts a POST request, queries a Postgres database, and returns the data as an HTTP response to the client. The API is containerized via Docker and runs on an ECS cluster on AWS (m5a.2xlarge instance):
Instance Size   vCPU   Memory (GiB)   Instance Storage (GiB)   Network Bandwidth (Gbps)   EBS Bandwidth (Mbps)
m5a.2xlarge     8      32             EBS-Only                 Up to 10                   Up to 2,880
I have Apache JMeter installed and am trying to mimic the production API calls in lower environments, so I can fine-tune the CPU and memory configuration of our Docker containers running in AWS Elastic Container Service (ECS).
I am currently running 5 threads, with a 1 second ramp-up and a 900 second duration -
Is there a systematic way to replicate the production traffic load in the lower environments, so I can correctly fine-tune CPU and memory?
As per the Performance Testing in Scaled Down Environments. Part One: The Challenges article:
An application’s underlying infrastructure is constructed of many different components such as caches, web servers, application servers and disks (I/O). Bandwidth and CDNs also play a role in its function and therefore have to be taken into consideration during scaling. Each component behaves differently in the application according to how it was configured and scaled. However, the tiered structure makes it difficult to calculate how each should be tested and scaled.
Furthermore, there are two ways to scale the application. Scaling-up adds supplementary resources, like CPUs and memory, to a single computer. Scaling-out clusters additional computers together as one system to generate combined computing power. All of these options make it almost impossible to estimate actual data from performance testing in a smaller environment.
So there is no formula for extrapolating the behaviour of a "lower environment" to a production-like environment. I would say you're quite limited in what you can do; for example:
Run a soak test; this way you will be able to detect memory leaks
Run a test with a profiler tool's telemetry enabled and inspect the longest-running functions, largest objects, garbage collection activity, etc.
Monitor the database for slow queries and inspect their query plans for optimization in case of high cardinality/cost
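As a starting point for the soak test above, the JMeter plan can be parameterised and run headless, so the same .jmx can be stepped up toward production request rates. The file name and property names below are illustrative, and assume the Thread Group reads them via `__P()` functions (e.g. `${__P(threads,5)}`):

```shell
# Illustrative headless JMeter run: properties control the load, so you can
# step threads/duration up toward the production rate without editing the plan
jmeter -n -t api-load.jmx \
  -Jthreads=5 -Jrampup=1 -Jduration=900 \
  -l results.csv -e -o report/
```

Re-running the same plan with increasing `-Jthreads` while watching container CPU/memory in ECS gives you a reproducible way to find the knee in the response-time curve.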
I have a spring boot application which I'm running inside docker containers in an openshift cluster. In steady state, there are N instances of the application (say N=5) and requests are load balanced to these N instances. Everything runs fine and response time is low (~5ms with total throughput of ~60k).
Whenever I add a new instance, the response time goes up briefly (up to ~70ms) and then comes back to normal.
I checked NewRelic JVM stats.
As you can see, whenever the app starts there is a GC mark-sweep, which I think is probably related to the initial high response times.
How can I avoid this? I'm using Java 8. Will using a different GC (G1) help or can I somehow tune my GC settings?
The JVM itself requires quite a lot of work when starting, and Spring Boot adds a lot of its own work and classes on top. Try to remove or switch off all unused features, since the autoconfiguration magic can cause a lot of unnecessary overhead.
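As for the question about switching to G1: on Java 8 you can opt in explicitly, and pre-sizing the heap avoids resize-driven collections right after startup. The values below are illustrative, not a recommendation; combine them with a warm-up/health-check period before the new instance receives load-balanced traffic:

```shell
# Illustrative Java 8 options: G1 collector, pause target, and a fixed heap
# (-Xms == -Xmx) so the JVM does not grow the heap under the first burst of load
JAVA_OPTS="-XX:+UseG1GC -XX:MaxGCPauseMillis=100 -Xms2g -Xmx2g"
```

Whether G1 actually beats the default collector here depends on heap size and allocation rate, so measure the startup spike in NewRelic before and after.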
Problem description: We have a web application which is used by 200-300 people per day. The application slows down two or three times a day at certain hours, with home page load time going from 6-7 seconds to 11-13 seconds. This application is deployed on JBoss AS 7.2. There are 4-5 other applications deployed on the same JBoss instance (same port number). These applications are web services (REST & SOAP web services used by other applications of the same company, which I am not aware of) that use the same database as the main application that is having the slowness issues. The application is built with the following technology stack:
Frontend: Angular JS, Angular UI, JqueryUI, JSON
Backend: Spring REST controllers, Java 7, JDBC
Database: Oracle 11g, PL/SQL
It's been only 4 months since the application's response time soared. We had a production release 4 months ago in which a lot of data filtering was added, based on certain parameters. This code is implemented in PL/SQL. Some filtering of data is also done in the front end. The response time has increased since this release. (Note: during this period the number of users and the amount of data have also increased significantly.)
So far I have tried to improve performance by minimising JavaScript files, reducing the downloaded DOM content from 2.8 MB to just 1.2 MB. I have also optimised some of the queries used for data filtering. I have been able to bring home page load time down to an average of 9-10 seconds, which is still quite a bit more than the client's expectation.
I would like to know how to tackle this kind of issue and what things I should bear in mind that might be causing this problem.
At present the production JVM configuration is -Xms 64 MB, -Xmx 256 MB. Will increasing the memory help?
Should I remove the PL/SQL code, rewrite it in Java, and use multithreading?
During peak time, CPU usage gets quite high, around 85-95 percent. The main tables are used by many applications (e.g. a cron job which calls a Java program to send email notifications). What can be done about this?
I have fixed this issue now. As per the comments and suggestions, I timed queries to the database, checked database logs, and monitored daily CPU usage. I did the same for the application server and analysed it using jvisualvm.
I did a bit of everything: minimising static content, optimising queries, removing unnecessary logging. The significant change, however, has come from JVM tuning (heap size -Xms 1024 MB, -Xmx 1536 MB, PermGen 512 MB, among other things). Performance has improved a lot, bringing average home page load time (after login) down from 10-13 seconds to 4-5 seconds.
There is still room for improvement on the database side; some queries and PL/SQL blocks have to be optimised. But it's nonetheless much better than before.
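For reference, the JVM tuning described in this answer corresponds to HotSpot flags roughly like the following (Java 7 era, where PermGen still exists; exact placement depends on how JBoss AS 7.2 is launched, e.g. via standalone.conf):

```shell
# Heap and PermGen sizing as described in the answer (Java 7 / JBoss AS 7.2)
JAVA_OPTS="-Xms1024m -Xmx1536m -XX:MaxPermSize=512m"
```

Setting -Xms close to -Xmx also avoids repeated heap-resize collections during the daily peak hours described in the question.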
I'm very much a Tomcat newbie, so I'm guessing that the answer to this is pretty straightforward, but Google is not being friendly to me today.
I have a Java web application mounted on Apache Tomcat. Whilst the application has a front page (for diagnostic purposes), the application is really all about a SOAP interface. No client will ever need to look up the server's web page. The clients send SOAP requests to the server, which parses the requests and then looks up results in a database. The results are then passed back to the clients, again over SOAP.
In its default configuration, Tomcat appears to queue requests. My experiment consisted of installing the client on two separate machines pointing at the same server and running a search at exactly the same time (well, one was 0.11 seconds after the other, but you get the picture).
How do I configure the number of concurrent request threads?
My ideal configuration would be to have X request threads, each of which recycles itself (i.e. calls destructor and constructor and recycles its memory allocation) every Y minutes, or after Z requests, whichever is the sooner. I'm told that one can configure IIS to do this (although I also have no experience with IIS), but how would you do this with Tomcat?
I'd like to be able to recycle threads because Tomcat seems to grab memory when a request comes in and not release it, which means I get occasional (but not consistent) Java heap space errors as we approach the memory limit (which I have already configured to be 1GB on a 2GB server). I'm not 100% sure whether this is due to a memory leak in my application, or just that the tools I'm using consume a lot of memory.
Any advice would be gratefully appreciated.
Thanks,
Rik
Tomcat, by default, can handle up to 150 concurrent HTTP requests - this is totally configurable and obviously varies depending on your server spec and application.
However, if your app has to handle 'bursts' of connections, I'd recommend looking into Tomcat's min and max "spare" threads. These are threads actively waiting for a connection. If there aren't enough waiting threads, Tomcat has to allocate more (which incurs a slight overhead), so you might see a delay.
Also, have a look at my answer to this question which covers how to configure the connector:
Tomcat HTTP Connector Threads
In addition, look at basic JVM tuning - especially in relation to heap allocation overhead and GC pause times.
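For illustration, the thread settings mentioned above live on the HTTP connector in Tomcat's conf/server.xml. The values below are only a sketch, not recommended numbers; tune them against your server spec and measured load:

```xml
<!-- Illustrative conf/server.xml snippet: maxThreads caps concurrent
     requests, minSpareThreads keeps idle workers ready for bursts,
     acceptCount is the queue for requests when all threads are busy -->
<Connector port="8080" protocol="HTTP/1.1"
           maxThreads="150"
           minSpareThreads="25"
           acceptCount="100"
           connectionTimeout="20000" />
```

Note that Tomcat does not offer the IIS-style "recycle a thread after Y minutes or Z requests" behaviour the question asks for; the usual approach is to fix the memory retention instead and let the pool reuse threads.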
I have a web application made entirely in Java. The webapp doesn't use any graphical/model framework; instead, it uses the Model-View-Controller pattern, built only on the Servlet specification (Servlet 2.4).
The webapp has been in development since 2001 and is very complex. Initially it was built to work with Tomcat 4.x/5.x; it currently runs on Tomcat 6.x. But we still have memory leaks.
In depth, the specifications of the webapp can be summarised as:
Uses the Servlet 2.4 specification
It doesn't use any framework
It doesn't use Java EE (no EJB)
It's based on Java SE (with servlets)
Works only on IE 6+ (because of its age)
Infrastructure Specification
Currently, the webapp runs in three environments:
First
IBM server (I don't remember the exact model)
Intel Xeon 2.4 GHz
32GB RAM
1TB HDD
Tomcat (Version 6) is configured to use 8GB of RAM
Second
Dell Server
Intel Xeon 2.0 GHz
4GB RAM
500GB HDD
Tomcat (Version 5.5) is configured to use 1.5GB of RAM
Third
Dell Server
AMD Opteron 1214 2.20 GHz
4GB RAM
320GB HDD
Tomcat (Version 6) is Configured to use 1.5GB of RAM
Database specification
The webapp uses SQL Server 2008 R2 Express Edition as its DBMS, except in the first environment described above, which uses SQL Server 2008 R2 Standard Edition. For connection pooling, the app uses Apache DBCP.
Problem
Well, it has very serious performance issues. The webapp slows down continually and many times denies service. The only way to recover the app is to restart the Apache Tomcat service.
During a performance audit, I found several programming issues (like database connections that are never closed, and excessive use of the Vector collection [instead of ArrayList]).
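The first audit finding (connections that are never closed) typically looks like the hypothetical sketch below, where FakeConnection stands in for a DBCP-pooled java.sql.Connection. On Java 7+ the fix is try-with-resources; on the older JVMs this app likely runs, the equivalent is try/finally:

```java
// Hypothetical sketch of the unclosed-connection bug found in the audit.
// FakeConnection stands in for a DBCP-pooled java.sql.Connection and
// counts how many connections are currently checked out.
public class LeakDemo {
    static int openConnections = 0;

    static class FakeConnection implements AutoCloseable {
        FakeConnection() { openConnections++; }
        @Override public void close() { openConnections--; }
    }

    static void query(boolean fail) {
        if (fail) throw new RuntimeException("query failed");
    }

    // Buggy pattern: close() is skipped whenever query() throws, so the
    // pool slowly drains until the app stops responding and needs a restart
    static void leaky(boolean fail) {
        FakeConnection con = new FakeConnection();
        query(fail);
        con.close();
    }

    // Fix: try-with-resources closes the connection on every exit path
    static void safe(boolean fail) {
        try (FakeConnection con = new FakeConnection()) {
            query(fail);
        }
    }
}
```

This matches the symptom described: a gradual slowdown followed by denial of service, recoverable only by restarting Tomcat (which resets the pool).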
I want to know how I can improve the performance of the app, and which applications can help me monitor Tomcat's performance and the webapp's memory usage.
All suggestions are gladly accepted.
You could also try stagemonitor. It is an open source performance monitoring library. It records request response times, JVM metrics, and request details, including a call stack (profile) of the methods called during the request, and more. Because of its low overhead, you can also use it in production.
The tuning procedure would be as follows:
Identify slow requests with the Request Dashboard
Analyze the stack trace of the request with the Request Detail Dashboard to find out about slow methods
Dive into your code and try to optimize those slow methods
You can also correlate some metrics like the throughput or number of sessions with the response time or cpu usage
Analyze the heap with the JVM Memory Dashboard
Note: I am the developer of stagemonitor.
I would start with some tools that can help you profile the application. Since you are developing a webapp, start with Lambda Probe and JavaMelody.
The first step is to determine the conditions under which the app starts to behave oddly. Ask yourself a few questions:
Do performance issues arise right after the application starts, or over time?
Are performance issues correlated with the quantity of client requests?
What is the real performance problem - high load on the server or lack of memory? (Note that they are related, so check which one starts first.)
Are there any background processes performing massive operations? Are they scheduled to run at particular times?
Try to find some clues before going deep into code. It will help you to narrow down possible causes.
As Joshua Bloch states in his book Effective Java, performance issues are rarely the effect of minor mistakes in source code (although, of course, misuse of Java constructs can lead to disaster). Usually the cause is bad system (API) architecture.
The last suggestion, based on my experience: try not to assume that high memory consumption is something bad. Tomcat will use as much memory as the operating system and JVM let it (up to the max settings), and only when it needs more will it perform garbage collection. So a typical (healthy!) graph of memory consumption looks like a saw tooth. If you are dealing with a memory leak, the graph will instead climb constantly and indefinitely. This is the most commonly misunderstood aspect of memory leaks, so keep it in mind.
To be honest - we cannot help you much further. Those are just pointers, now you will have to make extensive research to figure out the cause :)
The general solution is to use a profiler e.g. YourKit, with a realistic workload which reproduces the problem.
What I do first is a CPU-only profile, then a memory-only profile, and finally a CPU & memory profile at once (I then look at the CPU profile results).
YourKit can also monitor high-level operations such as Java EE resources and JDBC connections. I haven't tried these as I don't use them. ;)
It can be a good idea to improve efficiency even if it's not the cause of the problem, as it will reduce the amount of "noise" in these profiles and make your issues more obvious.
You could try increasing the amount of memory available, but I suspect it will just delay the problem.
OK, so I have seen huge Java applications run on lesser configurations. You should try the following:
First, connect a profiler to your application and see which parts take the most time. You can use JProfiler or Eclipse MAT (I personally prefer JProfiler). Also look at the objects consuming the most memory. This will help you narrow down the parts you need to rewrite to improve performance.
Once you have looked at the memory leaks, update your application to use a 64-bit JDK (assuming it does not already do so).
Take a look at your JVM arguments and optimize them.
You can try the open source tool WebApp Watcher to identify where in the code the performance issue is.
You first have to add a filter to the webapp (as explained here) to record metrics, then import the logs into the WAW Analyzer tool and follow the steps described in the doc to find the potential performance issue in the code.