Facing a strange memory "leak" with Java 1.6

I'm trying to understand how my server's memory is being used.
When I look at the memory usage on the system via top, I get:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
6091 oper 20 0 2721m 1.4g 9180 S 0 11.4 6:13.42 java
10854 oper 20 0 2186m 1.1g 5104 S 1 9.1 114:52.15 java
9350 oper 20 0 2293m 971m 4892 S 0 8.0 40:15.68 java
9286 oper 20 0 2082m 800m 4852 S 0 6.6 31:31.44 java
10506 oper 20 0 1936m 711m 4900 S 0 5.9 49:09.64 java
8965 oper 20 0 1918m 680m 5076 S 0 5.6 106:53.10 java
All of those processes are Tomcat 6.0.20 instances running on JVM 1.6.0_26.
As we can see in the top report, one process is using 1.4 GB, another 1.1 GB... much more than expected.
When I open JConsole on the first process, the cumulative memory (heap plus non-heap) is around 200 MB; on the second it is 385 MB, and on the third 235 MB.
So my question is: where is the invisible memory?
Top - JConsole =
1.4 GB - 200 MB = 1.2 GB
1.1 GB - 385 MB = 715 MB
971 MB - 235 MB = 736 MB
800 MB - 173 MB = 627 MB
Does anyone have an idea?
Thanks a lot.
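(One way to break the gap down, as a diagnostic sketch: list the process's native memory mappings, which top counts but JConsole does not. The PID here is taken from the top output above.)
$ pmap -x 6091 | sort -k3 -n | tail -20   # 20 largest resident mappings: thread stacks, mapped JARs, native libraries, direct buffers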

Related

Hive Query Failing ( AMI - 3.11.0 , Hive- 0.13.1)

Diagnostic Messages for this Task:
Container [pid=3347,containerID=container_1490354262227_0013_01_000104] is running beyond physical memory limits. Current usage: 1.0 GB of 1 GB physical memory used; 1.5 GB of 5 GB virtual memory used. Killing container.
Dump of the process-tree for container_1490354262227_0013_01_000104:
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 3360 3347 3347 3347 (java) 7596 396 1537003520 262629 /usr/java/latest/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx864m -Djava.io.tmpdir=/mnt3/var/lib/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1490354262227_0013/container_1490354262227_0013_01_000104/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/mnt/var/log/hadoop/userlogs/application_1490354262227_0013/container_1490354262227_0013_01_000104 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 10.35.178.86 49938 attempt_1490354262227_0013_m_000004_3 104
|- 3347 2563 3347 3347 (bash) 0 1 115806208 698 /bin/bash -c /usr/java/latest/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx864m -Djava.io.tmpdir=/mnt3/var/lib/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1490354262227_0013/container_1490354262227_0013_01_000104/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/mnt/var/log/hadoop/userlogs/application_1490354262227_0013/container_1490354262227_0013_01_000104 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 10.35.178.86 49938 attempt_1490354262227_0013_m_000004_3 104 1>/mnt/var/log/hadoop/userlogs/application_1490354262227_0013/container_1490354262227_0013_01_000104/stdout 2>/mnt/var/log/hadoop/userlogs/application_1490354262227_0013/container_1490354262227_0013_01_000104/stderr
Container [pid=3347,containerID=container_1490354262227_0013_01_000104] is running beyond physical memory limits. Current usage: 1.0 GB of 1 GB physical memory used; 1.5 GB of 5 GB virtual memory used.
Looks like your process needs more memory than the defined limit allows.
You need to increase the container size:
SET hive.tez.container.size=4096;  -- value in MB
SET hive.auto.convert.join.noconditionaltask.size=1436549120;  -- value in bytes (~1370 MB)
Read more about this here.
If it is failing on the reducer:
Add distribute by <partition key> to the query. It will distribute the data between reducers, and as a result each reducer will create fewer partitions and consume less memory.
insert overwrite table items_s3_table PARTITION(w_id)
select pk, cId, fcsku, cType, disposition, cReferenceId, snapshotId, quantity, w_id
from items_dynamodb_table
distribute by w_id;
Try to decrease bytes per reducer. Decreasing this parameter will increase parallelism (the number of reducers) and may reduce memory consumption per reducer:
SET hive.exec.reducers.bytes.per.reducer=67108864;
Adjust memory settings if nothing helps.
For mappers:
SET mapreduce.map.memory.mb=4096;
SET mapreduce.map.java.opts=-Xmx3000m;
For reducers:
SET mapreduce.reduce.memory.mb=4096;
SET mapreduce.reduce.java.opts=-Xmx3000m;
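Note the headroom between each pair: a 4096 MB container with -Xmx3000m leaves roughly 1 GB for off-heap memory (PermGen/metaspace, thread stacks, native buffers), all of which counts against the container's physical-memory limit. The original failure illustrates the opposite: -Xmx864m inside a 1 GB container left almost no room for non-heap memory.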

Java "No space left on device" but there is enough space on disk?

I'm trying to write files to the file system in a Java program, but it fails with a 'No space left on device' IOException even though df -h (and df -i) says otherwise.
Here is the output of df -h:
Filesystem Size Used Avail Use% Mounted on
udev 5.9G 4.0K 5.9G 1% /dev
tmpfs 1.2G 1.4M 1.2G 1% /run
/dev/sda5 156G 139G 9.1G 94% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
none 5.0M 0 5.0M 0% /run/lock
none 5.9G 105M 5.8G 2% /run/shm
none 100M 28K 100M 1% /run/user
/dev/sda2 105G 102G 2.7G 98% /media/sam/System
/dev/sda3 176G 163G 14G 93% /media/sam/Sam //where I'm trying to write
and here is the output of df -i:
Filesystem Inodes IUsed IFree IUse% Mounted on
udev 1523763 552 1523211 1% /dev
tmpfs 1526458 584 1525874 1% /run
/dev/sda5 10362880 4750381 5612499 46% /
none 1526458 2 1526456 1% /sys/fs/cgroup
none 1526458 3 1526455 1% /run/lock
none 1526458 191 1526267 1% /run/shm
none 1526458 26 1526432 1% /run/user
/dev/sda2 3832820 1045155 2787665 28% /media/sam/System
/dev/sda3 35769712 22090147 13679565 62% /media/sam/Sam //where I'm trying to write
I am trying to write files to /media/sam/Sam/. I have tried restarting my machine and unmounting and remounting, but no luck. Also, in Java, if I create a new File object inside /media/sam/Sam and print the usage information, it prints 0, as below:
File tweetFile = new File( "/media/sam/Sam/" + "test" + ".txt" );
System.out.println( tweetFile.getTotalSpace() ); // prints 0
System.out.println( tweetFile.getFreeSpace()/1000000000 ); // prints 0
System.out.println( tweetFile.getUsableSpace()/1000000000 ); // prints 0
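(Side note: the zeros are expected here rather than a sign of a full disk. new File(...) does not create anything on disk, and the get*Space() methods return 0L when the path does not yet exist. Querying the existing mount point should report the real figures, as in this small sketch:)
File mount = new File( "/media/sam/Sam" );
System.out.println( mount.getTotalSpace()/1000000000 );  // total size of the partition, in GB
System.out.println( mount.getUsableSpace()/1000000000 ); // GB actually available to this process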
Any help is really appreciated, as I've been trying to fix this all day and I'm getting really frustrated.
EDIT
Here is the exact message thrown:
Exception: java.io.IOException: No space left on device
java.io.IOException: No space left on device
at java.io.UnixFileSystem.createFileExclusively(Native Method)
at java.io.File.createNewFile(File.java:1006)
at utils.FileUtil.writeToFile(FileUtil.java:64)
at experiments.usinglists.tagbasedeval.ToindexTweetsForAUser.main(ToindexTweetsForAUser.java:36)
In my case, the Java process was operating with a temp directory, using the default java.io.tmpdir (in my case /tmp).
I was expecting it to work on another partition. So maybe changing the default works:
-Djava.io.tmpdir=/media/sam/Sam/tmp
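For example (the jar name is hypothetical; note that the directory must already exist, since the JVM will not create it):
$ mkdir -p /media/sam/Sam/tmp
$ java -Djava.io.tmpdir=/media/sam/Sam/tmp -jar myapp.jar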

Launching JRE in Linux from a FAT32 USB

I have a Java application installed on a USB which the user should be able to run from any OS.
For this,
I'm packaging a JRE instance on the USB along with my application.
I'm using a FAT32 file-system on the USB.
However, the problem is, FAT32 has no concept of execute ("+x") permissions. While I can launch a shell script, like so:
$ sh /path/to/fat32-usb/helloWorld.sh
, and while I can launch a simple ELF binary, like so:
$ /lib64/ld-linux-x86-64.so.2 /path/to/fat32-usb/helloWorld
, I can't seem to launch the Java ELF program. I get these errors:
Error: could not find libjava.so
Error: Could not find Java SE Runtime Environment.
Before launching java, I've tried setting these environment variables as follows:
export JAVA_HOME=/path/to/fat32-usb/jre
export LD_LIBRARY_PATH="$JAVA_HOME/lib/amd64:.:$LD_LIBRARY_PATH"
export PATH="$JAVA_HOME/bin:.:$PATH"
I have also tried launching java from inside the $JAVA_HOME/bin directory. Finally, I've also tried copying all the libXXX.so's from $JAVA_HOME/lib/amd64/ to $JAVA_HOME/bin, hoping that they would get picked up from the current directory, ., somehow.
But nothing has worked.
EDIT
Here are the last few lines of strace output:
$ strace -vfo /tmp/java.strace /lib64/ld-linux-x86-64.so.2 /path/to/fat32-usb/jre ...
...
readlink("/proc/self/exe", "/lib/x86_64-linux-gnu/ld-2.17.so", 4096) = 32
write(2, "Error: could not find libjava.so", 32) = 32
write(2, "\n", 1) = 1
write(2, "Error: Could not find Java SE Ru"..., 50) = 50
write(2, "\n", 1) = 1
exit_group(2) = ?
EDIT2
And here is the output of ltrace (just a single line!):
$ ltrace -s 120 -e '*' -ifo /tmp/java.ltrace /lib64/ld-linux-x86-64.so.2 /path/to/fat32-usb/jre ...
30913 [0xffffffffffffffff] +++ exited (status 2) +++
EDIT 3
This is an ltrace excerpt around the loading of libjava.so by a Java on an ext4 partition (not the problem FAT32 partition), which loads fine:
5525 [0x7f7627600763] <... snprintf resumed> "/home/aaa/bbb/jdk1.7.0_40/lib/amd64/libjava.so", 4096, "%s/lib/%s/libjava.so", "/home/aaa/bbb/jdk1.7.0_40", "amd64") = 46
5525 [0x7f762760076d] libjli.so->access("/home/aaa/bbb/jdk1.7.0_40/lib/amd64/libjava.so", 0) = -1
5525 [0x7f762760078d] libjli.so->snprintf( <unfinished ...>
5525 [0x3085246bdb] libc.so.6->(0, 0x7fffffd8, 0x7f7627607363, 39) = 0
5525 [0x3085246be3] libc.so.6->(0, 0x7fffffd8, 0x7f7627607363, 39) = 0
5525 [0x7f762760078d] <... snprintf resumed> "/home/aaa/bbb/jdk1.7.0_40/jre/lib/amd64/libjava.so", 4096, "%s/jre/lib/%s/libjava.so", "/home/aaa/bbb/jdk1.7.0_40", "amd64") = 50
5525 [0x7f7627600797] libjli.so->access("/home/aaa/bbb/jdk1.7.0_40/jre/lib/amd64/libjava.so", 0) = 0
And this is the strace output of, again, the healthy, correctly loading java:
5952 readlink("/proc/self/exe", "/home/aaa/bbb/jdk1.7.0_40/bin/ja"..., 4096) = 34
5952 access("/home/aaa/bbb/jdk1.7.0_40/lib/amd64/libjava.so", F_OK) = -1 ENOENT (No such file or directory)
5952 access("/home/aaa/bbb/jdk1.7.0_40/jre/lib/amd64/libjava.so", F_OK) = 0
5952 open("/home/aaa/bbb/jdk1.7.0_40/jre/lib/amd64/jvm.cfg", O_RDONLY) = 3
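The traces point at the cause: java derives its installation root from readlink("/proc/self/exe"). In the healthy run that resolves to .../jdk1.7.0_40/bin/java, so the relative lookup of jre/lib/amd64/libjava.so succeeds; when launched through the loader it resolves to ld-2.17.so instead, and the lookup fails. A possible workaround, assuming the mount options are available on your distro, is to remount the stick with execute permission and run the java binary directly:
$ sudo mount -o remount,exec /path/to/fat32-usb
$ /path/to/fat32-usb/jre/bin/java -version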

How to check heap usage of a running JVM from the command line?

Can I check the heap usage of a running JVM from the command line? I mean the actual usage rather than the max amount allocated with -Xmx.
I need it to be command-line because I don't have access to a windowing environment, and I want to script based on the value. The application is running in the Jetty application server.
You can use jstat, like:
jstat -gc <pid>
Full docs here:
http://docs.oracle.com/javase/7/docs/technotes/tools/share/jstat.html
For Java 8 you can use the following command line to get the heap space utilization in kB:
jstat -gc <PID> | tail -n 1 | awk '{split($0,a," "); sum=a[3]+a[4]+a[6]+a[8]; print sum}'
The command basically sums up:
S0U: Survivor space 0 utilization (kB).
S1U: Survivor space 1 utilization (kB).
EU: Eden space utilization (kB).
OU: Old space utilization (kB).
You may also want to include the metaspace and the compressed class space utilization. In this case you have to add a[10] and a[12] to the awk sum.
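That is, with metaspace and compressed class space included:
jstat -gc <PID> | tail -n 1 | awk '{split($0,a," "); sum=a[3]+a[4]+a[6]+a[8]+a[10]+a[12]; print sum}'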
The whole procedure at once, based on @Till Schäfer's answer.
In KB...
jstat -gc $(ps axf | egrep "/bin/java" | egrep -v grep | awk '{print $1}') | tail -n 1 | awk '{split($0,a," "); sum=(a[3]+a[4]+a[6]+a[8]+a[10]); printf("%.2f KB\n",sum)}'
In MB...
jstat -gc $(ps axf | egrep "/bin/java" | egrep -v grep | awk '{print $1}') | tail -n 1 | awk '{split($0,a," "); sum=(a[3]+a[4]+a[6]+a[8]+a[10])/1024; printf("%.2f MB\n",sum)}'
"Awk sum" reference:
a[1] - S0C
a[2] - S1C
a[3] - S0U
a[4] - S1U
a[5] - EC
a[6] - EU
a[7] - OC
a[8] - OU
a[9] - PC
a[10] - PU
a[11] - YGC
a[12] - YGCT
a[13] - FGC
a[14] - FGCT
a[15] - GCT
Used for "Awk sum":
a[3] -- (S0U) Survivor space 0 utilization (KB).
a[4] -- (S1U) Survivor space 1 utilization (KB).
a[6] -- (EU) Eden space utilization (KB).
a[8] -- (OU) Old space utilization (KB).
a[10] -- (PU) Permanent space utilization (KB).
[Ref.: https://docs.oracle.com/javase/7/docs/technotes/tools/share/jstat.html ]
Thanks!
NOTE: Works with OpenJDK!
FURTHER QUESTION: Wrong information?
If you check memory usage with the ps command, you will see that the java process consumes much more...
ps -eo size,pid,user,command --sort -size | egrep -i "/bin/java" | egrep -v grep | awk '{ hr=$1/1024; printf("%.2f MB ",hr); for (x=4; x<=NF; x++) { printf("%s ",$x) }; print "" }'
UPDATE (2021-02-16):
According to the reference below (and @Till Schäfer's comment), ps can show the total memory reserved from the OS, while jstat can show the used space of heap and stack (adapted). So we see a difference between what the ps command reports and what the jstat command reports.
In our understanding, the most "realistic" figure is the ps output, since it tells us how much of the system's memory is effectively committed. jstat serves a more detailed analysis of how Java consumes the memory it has reserved from the OS.
[Ref.: http://www.openkb.info/2014/06/how-to-check-java-memory-usage.html ]
If you start execution with GC logging turned on, you get the info in a file.
Otherwise jmap -heap <pid> will give you what you want.
See the jmap doc page for more.
Please note that jmap should not be used in a production environment unless absolutely needed, as the tool halts the application in order to determine actual heap usage. Usually this is not desired in a production environment.
If you are using JDK 8 and above, use jcmd:
jcmd <pid> GC.heap_info
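Running jcmd with no arguments lists the JVMs it can see together with their PIDs, so finding the target is easy (PID below hypothetical):
$ jcmd
$ jcmd 6091 GC.heap_info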

Java profile tool without GUI

Is there a Java profiling tool that works without a GUI on Linux, just like top? I don't have permission to use tools like JProfiler or jvisualvm in remote mode.
Try this: http://java.sun.com/developer/technicalArticles/Programming/HPROF.html. You can query heap and CPU details.
You can use HPROF.
Command used: javac -J-agentlib:hprof=cpu=samples Hello.java
CPU SAMPLES BEGIN (total = 126) Fri Oct 22 12:12:14 2004
rank self accum count trace method
1 53.17% 53.17% 67 300027 java.util.zip.ZipFile.getEntry
2 17.46% 70.63% 22 300135 java.util.zip.ZipFile.getNextEntry
3 5.56% 76.19% 7 300111 java.lang.ClassLoader.defineClass2
4 3.97% 80.16% 5 300140 java.io.UnixFileSystem.list
5 2.38% 82.54% 3 300149 java.lang.Shutdown.halt0
6 1.59% 84.13% 2 300136 java.util.zip.ZipEntry.initFields
7 1.59% 85.71% 2 300138 java.lang.String.substring
8 1.59% 87.30% 2 300026 java.util.zip.ZipFile.open
9 0.79% 88.10% 1 300118 com.sun.tools.javac.code.Type$ErrorType.<init>
10 0.79% 88.89% 1 300134 java.util.zip.ZipFile.ensureOpen
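The sample above profiles the compiler itself (the -J prefix passes the agent flag through javac to its JVM). To profile an ordinary application, the same agent can be attached to java directly; the options are as documented for HPROF, the class name is hypothetical, and the report is written to java.hprof.txt on exit:
$ java -agentlib:hprof=cpu=samples,interval=20,depth=10 MyApp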
