I am using Tomcat 6.0.20 and have a number of web apps running on the server. Over time, approximately 3 days, the server needs restarting, otherwise it crashes and becomes unresponsive.
I have the following settings for the JVM:
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=c:\tomcat\Websites\private\mydomain\apache-tomcat-6.0.20\logs
This provides me with an .hprof file, which I have loaded into Java VisualVM. It identifies the following:
byte[] 37,206 Instances | Size 86,508,978
int[] 540,909 Instances | Size 55,130,332
char[] 357,847 Instances | Size 41,690,928
The list goes on, but how do I determine what is causing these issues?
I am using New Relic to monitor the JVM, and only one error seems to appear, but it's a recurring one: org.apache.catalina.connector.ClientAbortException. Is it possible that when a user session is aborted, any database connections or variables created are not being closed and are therefore left orphaned?
There is a function which is used quite heavily throughout each web app; I'm not sure if it has any bearing on the leak:
public static String replaceCharacters(String s)
{
    s = s.replaceAll(" ", " ");
    s = s.replaceAll(" ", "_");
    s = s.replaceAll("\351", "e");  // octal escape: é -> e
    s = s.replaceAll("/", "");
    s = s.replaceAll("--", "-");
    s = s.replaceAll("&", "and");
    s = s.replaceAll("__", "_");
    s = s.replaceAll(",", "");
    s = s.replaceAll(":", "");
    s = s.replaceAll("\374", "u");  // octal escape: ü -> u
    s = s.replaceAll("-", "_");
    s = s.replaceAll("\\+", "and");
    s = s.replaceAll("\"", "");
    s = s.replaceAll("\\[", "");
    s = s.replaceAll("\\]", "");
    s = s.replaceAll("\\*", "");
    return s;
}
Is it possible that when a user connection is aborted (e.g. the user closed the browser or left the site) variables, connections, etc. are not purged/released? But isn't GC supposed to handle that?
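For reference, GC only reclaims objects once nothing references them, and it never calls close() on JDBC resources. The usual defence is a finally block that runs on both the normal and the abort path. This is a generic sketch with a hypothetical stand-in resource class, not code from the apps above:

```java
// Sketch: a finally block guarantees cleanup even if request handling is
// aborted mid-way (e.g. a ClientAbortException is thrown).
// DbConnection is a hypothetical stand-in for a real JDBC Connection.
public class CleanupSketch {
    static class DbConnection implements AutoCloseable {
        boolean open = true;
        public void close() { open = false; }
    }

    static DbConnection handleRequest(boolean abort) {
        DbConnection conn = new DbConnection();
        try {
            if (abort) {
                throw new RuntimeException("client aborted");
            }
        } catch (RuntimeException e) {
            // swallowed for the sketch; a servlet would log this
        } finally {
            conn.close();  // runs on both the normal and the abort path
        }
        return conn;
    }

    public static void main(String[] args) {
        System.out.println(handleRequest(true).open);   // closed despite the abort
        System.out.println(handleRequest(false).open);  // closed on the normal path
    }
}
```

If the connection is closed in the finally block regardless of how the request ends, an aborted client cannot leave it orphaned.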
Below are my JVM settings:
-Dcatalina.base=c:\tomcat\Websites\private\mydomain\apache-tomcat-6.0.20
-Dcatalina.home=c:\tomcat\Websites\private\mydomain\apache-tomcat-6.0.20
-Djava.endorsed.dirs=c:\tomcat\Websites\private\mydomain\apache-tomcat-6.0.20\endorsed
-Djava.io.tmpdir=c:\tomcat\Websites\private\mydomain\apache-tomcat-6.0.20\temp
-Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager
-Djava.util.logging.config.file=c:\tomcat\Websites\private\mydomain\apache-tomcat-6.0.20\conf\logging.properties
-Dfile.encoding=UTF-8
-Dsun.jnu.encoding=UTF-8
-javaagent:c:\tomcat\Websites\private\mydomain\apache-tomcat-6.0.20\newrelic\newrelic.jar
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=c:\tomcat\Websites\private\mydomain\apache-tomcat-6.0.20\logs
-Dcom.sun.management.jmxremote.port=8086
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.authenticate=false
-Xms1024m
-Xmx1536m
Am I missing anything? The server has 3 GB of RAM.
Any help would be much appreciated :-)
... but how do I determine what is causing these issues?
You need to use a dump analyser that allows you to see what is making these objects reachable. Pick an object, and see what other object or objects refer to it ... and work backwards through the chains until you find either a "GC root" or some application-specific class that you recognise.
Here are a few references on analysing memory snapshots and memory profilers:
How do I analyze a .hprof file?
How to find memory leaks using visualvm
Solving OutOfMemoryError - Memory Profilers
Once you have identified that, you've gone most of the way to identifying the source of your storage leak.
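As a hypothetical illustration of what the end of such a reference chain often looks like: a static collection that is only ever added to keeps every byte[] it holds reachable from a GC root (the class itself), so the garbage collector can never reclaim them.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical leak pattern: the static 'cache' field is reachable from a
// GC root, so every byte[] added here stays reachable indefinitely.
public class LeakSketch {
    private static final List<byte[]> cache = new ArrayList<>();

    static void handleRequest() {
        cache.add(new byte[1024]);  // added on every request, never removed
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            handleRequest();
        }
        // A heap dump taken now would show 1000 byte[] instances whose
        // reference chains all end at LeakSketch.cache.
        System.out.println("retained arrays: " + cache.size());
    }
}
```

In a dump analyser, the "path to GC root" view for one of the byte[] instances would lead straight back to the offending field.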
That function has no direct bearing on the leak. It certainly won't cause it. (It could generate a lot of garbage String objects ... but that's a different issue.)
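If the allocation churn from that method ever becomes a concern: String.replaceAll() compiles its regex argument on every call, so precompiling the patterns once avoids repeated work and garbage. A sketch showing the idea with a few of the replacements (hypothetical class name; the remaining replacements would follow the same shape):

```java
import java.util.regex.Pattern;

// Sketch: precompiled Patterns do the regex-compilation work once per
// class load, instead of on every replaceAll() call.
public class Sanitizer {
    private static final Pattern SPACE = Pattern.compile(" ");
    private static final Pattern AMP   = Pattern.compile("&");
    private static final Pattern PAREN = Pattern.compile("[()]");

    public static String sanitize(String s) {
        s = SPACE.matcher(s).replaceAll("_");
        s = AMP.matcher(s).replaceAll("and");
        s = PAREN.matcher(s).replaceAll("");
        return s;
    }

    public static void main(String[] args) {
        System.out.println(sanitize("my page (draft) & notes"));  // my_page_draft_and_notes
    }
}
```

This changes allocation behaviour, not correctness; either version produces only short-lived garbage that a healthy GC handles fine.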
I migrated all projects to Tomcat 7.0.42 and my errors have disappeared. Our websites are far more stable and slightly faster, we are using less memory, and CPU usage is far better.
Start the server in a local dev environment and attach a profiler (YourKit, preferably). Take heap dumps periodically; you will see growth in byte[] objects, and the tool will let you connect those byte[] instances to the application class that is leaking them, which will help you identify the defect in your code.
Related
I have a Java app (JDK 13) running in a Docker container. Recently I moved the app to JDK 17 (OpenJDK 17) and found a gradual increase in memory usage by the Docker container.
During the investigation I found that the 'Serviceability' NMT memory category grows constantly (15 MB per hour). I checked the page https://docs.oracle.com/en/java/javase/17/troubleshoot/diagnostic-tools.html#GUID-5EF7BB07-C903-4EBD-A9C2-EC0E44048D37 but this category is not mentioned there.
Could anyone explain what this serviceability category means and what can cause such gradual increase?
Also, there are some additional new memory categories compared to JDK 13. Maybe someone knows where I can read details about them.
Here is the result of the command jcmd 1 VM.native_memory summary:
Native Memory Tracking:
(Omitting categories weighting less than 1KB)
Total: reserved=4431401KB, committed=1191617KB
- Java Heap (reserved=2097152KB, committed=479232KB)
(mmap: reserved=2097152KB, committed=479232KB)
- Class (reserved=1052227KB, committed=22403KB)
(classes #29547)
( instance classes #27790, array classes #1757)
(malloc=3651KB #79345)
(mmap: reserved=1048576KB, committed=18752KB)
( Metadata: )
( reserved=139264KB, committed=130816KB)
( used=130309KB)
( waste=507KB =0.39%)
( Class space:)
( reserved=1048576KB, committed=18752KB)
( used=18149KB)
( waste=603KB =3.21%)
- Thread (reserved=387638KB, committed=40694KB)
(thread #378)
(stack: reserved=386548KB, committed=39604KB)
(malloc=650KB #2271)
(arena=440KB #752)
- Code (reserved=253202KB, committed=76734KB)
(malloc=5518KB #23715)
(mmap: reserved=247684KB, committed=71216KB)
- GC (reserved=152419KB, committed=92391KB)
(malloc=40783KB #34817)
(mmap: reserved=111636KB, committed=51608KB)
- Compiler (reserved=1506KB, committed=1506KB)
(malloc=1342KB #2557)
(arena=165KB #5)
- Internal (reserved=5579KB, committed=5579KB)
(malloc=5543KB #33822)
(mmap: reserved=36KB, committed=36KB)
- Other (reserved=231161KB, committed=231161KB)
(malloc=231161KB #347)
- Symbol (reserved=30558KB, committed=30558KB)
(malloc=28887KB #769230)
(arena=1670KB #1)
- Native Memory Tracking (reserved=16412KB, committed=16412KB)
(malloc=575KB #8281)
(tracking overhead=15837KB)
- Shared class space (reserved=12288KB, committed=12136KB)
(mmap: reserved=12288KB, committed=12136KB)
- Arena Chunk (reserved=18743KB, committed=18743KB)
(malloc=18743KB)
- Tracing (reserved=32KB, committed=32KB)
(arena=32KB #1)
- Logging (reserved=7KB, committed=7KB)
(malloc=7KB #289)
- Arguments (reserved=1KB, committed=1KB)
(malloc=1KB #53)
- Module (reserved=1045KB, committed=1045KB)
(malloc=1045KB #5026)
- Safepoint (reserved=8KB, committed=8KB)
(mmap: reserved=8KB, committed=8KB)
- Synchronization (reserved=204KB, committed=204KB)
(malloc=204KB #2026)
- Serviceability (reserved=31187KB, committed=31187KB)
(malloc=31187KB #49714)
- Metaspace (reserved=140032KB, committed=131584KB)
(malloc=768KB #622)
(mmap: reserved=139264KB, committed=130816KB)
- String Deduplication (reserved=1KB, committed=1KB)
(malloc=1KB #8)
The detailed information about increasing part of memory is:
[0x00007f6ccb970cbe] OopStorage::try_add_block()+0x2e
[0x00007f6ccb97132d] OopStorage::allocate()+0x3d
[0x00007f6ccbb34ee8] StackFrameInfo::StackFrameInfo(javaVFrame*, bool)+0x68
[0x00007f6ccbb35a64] ThreadStackTrace::dump_stack_at_safepoint(int)+0xe4
(malloc=6755KB type=Serviceability #10944)
Update #1 from 2022-01-17:
Thanks to @Aleksey Shipilev for the help! We were able to find the cause of the issue; it is related to many ThreadMXBean#dumpAllThreads calls. Here is the MCVE, Test.java:
Run with:
java -Xmx512M -XX:NativeMemoryTracking=detail Test.java
and periodically check the Serviceability category in the result of
jcmd YOUR_PID VM.native_memory summary
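To track the number over time you can scrape the summary output. A small sketch of the parsing side only, run here against a captured line rather than a live jcmd process (in real use you would feed it jcmd's output, e.g. via ProcessBuilder):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch: extract the committed KB of the Serviceability category from a
// line of 'jcmd <pid> VM.native_memory summary' output.
public class NmtParse {
    private static final Pattern SERVICEABILITY =
        Pattern.compile("Serviceability \\(reserved=(\\d+)KB, committed=(\\d+)KB\\)");

    static long committedKb(String summaryLine) {
        Matcher m = SERVICEABILITY.matcher(summaryLine);
        return m.find() ? Long.parseLong(m.group(2)) : -1;
    }

    public static void main(String[] args) {
        // Sample line taken from the summary output above.
        String sample = "- Serviceability (reserved=31187KB, committed=31187KB)";
        System.out.println(committedKb(sample));  // 31187
    }
}
```

Logging that value once a minute makes the ~15 MB/hour growth easy to chart.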
Test.java:
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class Test {

    private static final int RUNNING = 40;
    private static final int WAITING = 460;

    private final Object monitor = new Object();
    private final ThreadMXBean threadMxBean = ManagementFactory.getThreadMXBean();
    private final ExecutorService executorService = Executors.newFixedThreadPool(RUNNING + WAITING);

    void startRunningThread() {
        executorService.submit(() -> {
            while (true) {
                // busy-spin: keeps the thread permanently RUNNABLE
            }
        });
    }

    void startWaitingThread() {
        executorService.submit(() -> {
            try {
                // wait() must be called while holding the monitor
                synchronized (monitor) {
                    monitor.wait();
                }
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        });
    }

    void startThreads() {
        for (int i = 0; i < RUNNING; i++) {
            startRunningThread();
        }
        for (int i = 0; i < WAITING; i++) {
            startWaitingThread();
        }
    }

    void shutdown() {
        executorService.shutdown();
        try {
            executorService.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Test test = new Test();
        Runtime.getRuntime().addShutdownHook(new Thread(test::shutdown));
        test.startThreads();
        for (int i = 0; i < 12000; i++) {
            ThreadInfo[] threadInfos = test.threadMxBean.dumpAllThreads(false, false);
            System.out.println("ThreadInfos: " + threadInfos.length);
            Thread.sleep(100);
        }
        test.shutdown();
    }
}
Unfortunately (?), the easiest way to know for sure what those categories map to is to look at OpenJDK source code. The NMT tag you are looking for is mtServiceability. This would show that "serviceability" are basically diagnostic interfaces in JDK/JVM: JVMTI, heap dumps, etc.
But the same kind of thing is clear from observing that the stack trace sample you are showing mentions ThreadStackTrace::dump_stack_at_safepoint -- that is something that dumps thread information, for example for jstack, heap dumps, etc. If you suspect a memory leak in that code, you might try to build an MCVE demonstrating it and submit a bug against OpenJDK, or show it to a fellow OpenJDK developer. You probably know better what your application is doing to cause thread dumps, so focus there.
That being said, I don't see any obvious memory leaks in StackFrameInfo, nor can I reproduce any leak with stress tests, so maybe what you are seeing is "just" thread dumping over larger and larger thread stacks. Or you captured it while a thread dump was happening. Or... It is hard to say without an MCVE.
Update: After playing with the MCVE, I realized that it reproduces with 17.0.1, but not with the mainline development JDK, JDK 18 EA, or JDK 17.0.2 EA. I tested with 17.0.2 EA before, so I was not seeing it, dang. Bisection between 17.0.1 and 17.0.2 EA shows it was fixed by the JDK-8273902 backport. 17.0.2 releases this week, so the bug should disappear after you upgrade.
One possible reason for some memory fluctuation would be another process using dynamic attach to attach to the JVM, debug the application, and transfer information about the application to the debugger. Serviceability is closely related to jdb (the Java debugger).
https://openjdk.java.net/groups/serviceability/
OpenJDK also documents this in detail:
Serviceability in HotSpot
The HotSpot Virtual Machine contains several technologies that allow its operation to be observed by another Java process:

The Serviceability Agent (SA). The Serviceability Agent is a Sun private component in the HotSpot repository that was developed by HotSpot engineers to assist in debugging HotSpot. They then realized that SA could be used to craft serviceability tools for end users since it can expose Java objects as well as HotSpot data structures both in running processes and in core files.

jvmstat performance counters. HotSpot maintains several performance counters that are exposed to external processes via a Sun private shared memory mechanism. These counters are sometimes called perfdata.

The Java Virtual Machine Tool Interface (JVM TI). This is a standard C interface that is the reference implementation of JSR 163 - Java Platform Profiling Architecture. JVM TI is implemented by HotSpot and allows a native code 'agent' to inspect and modify the state of the JVM.

The Monitoring and Management interface. This is a Sun private API that allows aspects of HotSpot to be monitored and managed.

Dynamic Attach. This is a Sun private mechanism that allows an external process to start a thread in HotSpot that can then be used to launch an agent to run in that HotSpot, and to send information about the state of HotSpot back to the external process.

DTrace. DTrace is the award-winning dynamic trace facility built into Solaris 10 and later versions. DTrace probes have been added to HotSpot that allow monitoring of many aspects of operation when HotSpot runs on Solaris. In addition, HotSpot contains a jhelper.d file that enables dtrace to show Java frames in stack traces.

pstack support. pstack is a Solaris utility that prints stack traces of all threads in a process. HotSpot includes support that allows pstack to show Java stack frames.
In my Node.js application, I'm using JDBC to connect to an Oracle database. I need to increase my Java heap space to prevent the following error:
java.lang.OutOfMemoryError: Java heap space
I know that there is a command-line option for setting the maximum Java heap size (-Xmx<size>), but the problem is that I don't explicitly run java; it happens inside my JDBC module (which depends on the java module), so I can't use that option.
So how can the Java heap size be configured in my case?
In short
I checked the source code of node-jdbc, and it's not possible at the moment.
In Detail
Refer to the file jinst.js:
var java = require('java');
...
module.exports = {
    ...
    addOption: function(option) {
        if (!isJvmCreated() && option) {
            java.options.push(option);
        } else if (isJvmCreated()) {
    ...
Refer to the files pool.js, connection.js, resultset.js:
var jinst = require("./jinst");
...
var java = jinst.getInstance();
...
if (!jinst.isJvmCreated()) {
    jinst.addOption("-Xrs");
}
You will see that it only sets the option -Xrs, even though the node module java gives the flexibility of adding any Java options.
Next Step
For the moment I'm not interested in this project, but if I were in your shoes I would create a pull request to the project https://github.com/CraZySacX/node-jdbc adding this option as a feature.
Cheers :)
I've read all "System resource exceeded" posts, but this is nothing like them.
I've spent the last 3 hours searching for a solution.
I don't have many connections / statements / resultsets and I always close all of them.
My code used to work but now I get the "System resource exceeded" exception, not during queries, but WHEN I TRY TO CONNECT.
I didn't change a thing in my code, but it doesn't work at the moment, except about 1 out of 10 times I try it. I tried changing some things in it, but it made no difference.
My Access files are 15 - 50 MB.
My code is:
private String accessFilePath;
private Connection myConnection;

public boolean connectToAccess(String myAccessFilePath) {
    accessFilePath = myAccessFilePath;
    // Get connection to database
    try {
        Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
        // Set properties for Unicode
        Properties myProperties = new Properties();
        myProperties.put("charSet", "windows-1253");
        myConnection = DriverManager.getConnection(
                "jdbc:odbc:driver={Microsoft Access Driver (*.mdb)};DBQ=" + accessFilePath,
                myProperties); // I get the exception here
    } catch (Exception ex) {
        System.out.println("Failed to connect to " + accessFilePath + " database\n" + ex.getMessage());
        return false;
    }
    return true;
}
What is now different from other times? Do Access files keep previous connections open? What can be wrong here?
OK, I found the solution.
At first I started a new Java project and copied the same code there.
I successfully connected to my files every time I tried in the new project.
Then it struck me, and I looked at my VM settings.
In my original program I had ASSIGNED TOO MUCH MEMORY TO THE VIRTUAL MACHINE, so there was no memory left for even a single connection to the files.
My settings were --> VM Options: -Xmx1536m -Xms768m (a little bit excessive)
I changed it to --> VM Options: -Xmx512m -Xms256m
And it worked. Thank you for your comments.
I hope this helps other people, because I spent many hours finding it.
While running the following code I am getting the error java.lang.OutOfMemoryError: Java heap space.
My code is:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class openofficeupdate {

    // Path of the database after renaming and extraction
    String databaseurl = "C:\\mydbdir\\location\\salesforce";

    openofficeupdate() throws ClassNotFoundException, SQLException {
        System.out.println("Entered into constructor");
        Connection connection = null;
        Statement statement = null;
        try {
            Class c = openofficeclass();
            System.out.println("Class name set");
            Connection cntn = createConnection(databaseurl);
            connection = cntn;
            System.out.println("connection created");
            Statement stmt = createStatement(cntn);
            statement = stmt;
            System.out.println("Statement created");
            executeQueries(stmt);
            System.out.println("Query executed");
            closeStatement(stmt);
            System.out.println("Statement closed");
            closeConnection(cntn);
            System.out.println("Connection closed");
        } catch (Exception e) {
            System.out.println(e);
            closeStatement(statement);
            System.out.println("Statement closed");
            closeConnection(connection);
            System.out.println("Connection closed");
        }
    }

    public static void main(String args[]) throws ClassNotFoundException, SQLException {
        new openofficeupdate();
    }

    private Class openofficeclass() throws ClassNotFoundException {
        return Class.forName("org.hsqldb.jdbcDriver");
    }

    private Connection createConnection(String databaseurl) throws SQLException {
        return DriverManager.getConnection("jdbc:hsqldb:file:" + databaseurl, "sa", "");
    }

    private Statement createStatement(Connection cntn) throws SQLException {
        return cntn.createStatement();
    }

    private void closeStatement(Statement stmt) throws SQLException {
        stmt.close();
    }

    private void closeConnection(Connection cntn) throws SQLException {
        cntn.close();
    }

    private void executeQueries(Statement stmt) throws SQLException {
        System.out.println("Going to execute query");
        int status = stmt.executeUpdate(
                "insert into \"Mobiles\" values(9874343210,123,'08:30:00','09:30:06')");
        System.out.println("Query executed with status " + status);
    }
}
I am using the NetBeans IDE... Is there any option there to control this kind of error?
If you go on increasing the heap without knowing the cause, there is a possibility that your problem might not be solved. So I suggest you find the root cause of the problem and solve it from there.
These are some of the free tools which can be used to analyze the heap and will help you get out of an OutOfMemoryError:
Visualgc :
Visualgc stands for Visual Garbage Collection Monitoring Tool, and you can attach it to your instrumented HotSpot JVM. The main strength of visualgc is that it displays all key data graphically, including class loader, garbage collection, and JVM compiler performance data.
The target JVM is identified by its virtual machine identifier, also called the vmid.
Jmap :
Jmap is a command-line utility that comes with JDK 6 and allows you to take a memory dump of the heap into a file. It's easy to use, as shown below:
jmap -dump:format=b,file=heapdump 6054
Here file specifies the name of the memory dump file ("heapdump") and 6054 is the PID of your Java process. You can find the PID by using "ps -ef", the Windows task manager, or the tool called "jps" (Java Virtual Machine Process Status Tool).
Jhat :
Jhat was earlier known as hat (the heap analyzer tool), but it is now part of JDK 6. You can use jhat to analyze a heap dump file created using "jmap". Jhat is also a command-line utility and you can run it from a cmd window as shown below:
jhat -J-Xmx256m heapdump
Here it will analyze the memory dump contained in the file "heapdump". When you start jhat it will read this heap dump file and then start listening on an HTTP port; just point your browser at the port where jhat is listening (7000 by default) and you can start analyzing the objects present in the heap dump.
Eclipse memory analyzer :
Eclipse Memory Analyzer (MAT) is a tool from the Eclipse Foundation to analyze Java heap dumps. It helps find classloader leaks and memory leaks and helps minimize memory consumption. You can use MAT to analyze heap dumps carrying millions of objects, and it also helps you extract memory-leak suspects.
VisualVM : VisualVM is a visual tool integrating several commandline JDK tools and lightweight profiling capabilities. Designed for both production and development time use, it further enhances the capability of monitoring and performance analysis for the Java SE platform.
YourKit
Courtesy : solution of java.lang.OutOfMemoryError in Java
In NetBeans 7.0:
You can right-click on the project and select Properties.
Click on the Run category and insert your configuration in VM Options.
For instance, in your case, paste -Xmx512m (512 MB), as suggested by rm5248.
You can read Playing with JVM / Java Heap Size for further information.
If you run out of heap space then you need to increase your heap size. The way to know how much memory you use is by running appropriate tests and profiling. That said, there are some simpler ways to get your app up and running... Here are three common things people do to fix or address memory issues with Java programs.
You can pass the -Xmx1028m argument as in the previous post. This increases the maximum heap size.
If you want to start off with a large memory footprint, you can optionally also pass -Xms1028m.
You can pass -XX:+AggressiveHeap if you don't know beforehand what your memory requirements will be. This is not the "official" best way to do things, but I find it works quite well when trying to run a new application whose memory requirements I'm not yet sure of.
Pass -Xmx512m to the JVM as an argument. That will increase your maximum heap size to 512 MB. See http://download.oracle.com/javase/7/docs/technotes/tools/windows/java.html for more information on how this works.
As for how to set it in NetBeans, I'm not sure; if I remember correctly there's a setting, when you run the program, for arguments to pass to the program, as well as arguments for the JVM.
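Whichever way the flag is passed, you can verify from inside the program that the JVM actually picked it up; a small sketch:

```java
// Sketch: print the heap limits the running JVM is using, so you can
// confirm that an -Xmx flag (however it was passed) reached the process.
public class HeapCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        System.out.println("max heap (bytes):   " + rt.maxMemory());    // roughly the -Xmx value
        System.out.println("total heap (bytes): " + rt.totalMemory());  // currently committed
        System.out.println("free in total:      " + rt.freeMemory());   // unused part of committed
    }
}
```

Running it with `java -Xmx512m HeapCheck` should report a max heap close to 512 MB (the JVM may round the figure slightly).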
Free RAM: my target metric.
Java: my tool of choice.
???: a good way to get the former using the latter.
Probably like this, using JNA (Java Native Access; note that Native.loadLibrary is JNA, not JNI), with a hand-declared Kernel32 interface:
Kernel32 lib = (Kernel32) Native.loadLibrary("kernel32", Kernel32.class);
Kernel32.MEMORYSTATUS mem = new Kernel32.MEMORYSTATUS();
lib.GetMem(mem);  // GetMem is a method on the hand-declared interface
System.out.println("Available physical memory " + mem.dwAvailPhys);
Difficult to do without resorting to non-portable or native libraries.
Something like
Runtime.getRuntime().freeMemory()
will only return the memory available to the JVM, which may not be the same as the system-wide available memory.
This page provides a good rundown.
http://blog.codebeach.com/2008/02/determine-available-memory-in-java.html
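On a HotSpot JVM you can also get at the physical numbers without shelling out, via the com.sun.management extension of OperatingSystemMXBean. This is exactly the non-portability caveat above: the cast fails on JVMs that do not provide the extension, and the methods shown are deprecated on recent JDKs.

```java
import java.lang.management.ManagementFactory;

// Non-portable sketch: com.sun.management.OperatingSystemMXBean is a
// HotSpot-specific extension of the standard platform MXBean.
public class SystemMemory {
    public static void main(String[] args) {
        com.sun.management.OperatingSystemMXBean os =
            (com.sun.management.OperatingSystemMXBean)
                ManagementFactory.getOperatingSystemMXBean();
        System.out.println("free physical (bytes):  " + os.getFreePhysicalMemorySize());
        System.out.println("total physical (bytes): " + os.getTotalPhysicalMemorySize());
        // Contrast with the JVM-only view mentioned above:
        System.out.println("JVM free heap (bytes):  " + Runtime.getRuntime().freeMemory());
    }
}
```

The first two figures describe the whole machine; the last one only describes the JVM's own heap.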
Another approach is to get the free RAM by executing the command free -m and then interpreting its output, as below:
Runtime runtime = Runtime.getRuntime();
BufferedReader br = new BufferedReader(
        new InputStreamReader(runtime.exec("free -m").getInputStream()));

String line;
String memLine = "";
int index = 0;
while ((line = br.readLine()) != null) {
    if (index == 1) {  // the second line is the "Mem:" row
        memLine = line;
    }
    index++;
}
// total used free shared buff/cache available
// Mem: 15933 3153 9683 310 3097 12148
// Swap: 3814 0 3814

List<String> memInfoList = Arrays.asList(memLine.split("\\s+"));
int totalSystemMemory = Integer.parseInt(memInfoList.get(1));
int totalSystemUsedMemory = Integer.parseInt(memInfoList.get(2));
int totalSystemFreeMemory = Integer.parseInt(memInfoList.get(3));

System.out.println("Total system memory in mb: " + totalSystemMemory);
System.out.println("Total system used memory in mb: " + totalSystemUsedMemory);
System.out.println("Total system free memory in mb: " + totalSystemFreeMemory);
I know of two projects that have purchased JNIWrapper and been happy with the result; both are Windows-based. When I embedded it in our current project, we wanted to know how much free RAM was available when users launched our app (Web Start), since there were lots of performance complaints which were hard to investigate (we suspected RAM issues). JNIWrapper helps us collect stats at startup about free RAM, total RAM, CPU, etc., so if a user group is complaining, we can check our stats to see if they have been given dodgy machines. Life-saving.