PMD Memory Leak - java

I have a serious memory leak when using PMD in Eclipse on Windows. I import a bunch of large Maven projects. After m2e import and compilation, PMD starts up. It eventually runs in about 8-10 parallel Eclipse jobs for the roughly 50 projects in the workspace. The process size then grows endlessly until virtual memory on the machine is exhausted (at about 12 GB) and the PC completely freezes.
My memory config in eclipse.ini:
-Xms256m
-Xmx2G
-Xss2m
-Xverify:none
-XX:+UseG1GC
-XX:+UseStringDeduplication
-XX:MaxMetaspaceSize=1G
The 2 GB heap limit here doesn't seem to have much effect, so I suspect the allocated memory is not the Java heap but classloader metaspace or memory allocated by a native DLL.
I've got 32 GB RAM on my machine and am using Java 1.8.0_121.
Here's a vmmap snapshot shortly before the process gets dangerously large and I have to kill it:
I tried running jcmd PID GC.class_stats against the process, but I don't see the problem here, since total bytes is only ~1.4 GB:
1636:
Index Super InstBytes KlassBytes annotations CpAll MethodCount Bytecodes MethodAll ROAll RWAll Total ClassName
1 89 141004480 560 0 1296 7 149 2176 880 3464 4344 java.util.HashMap$Node
2 -1 131125856 480 0 0 0 0 0 24 584 608 [I
3 89 129135744 624 0 8712 94 4623 58024 12136 56304 68440 java.lang.String
4 -1 111470376 480 0 0 0 0 0 24 584 608 [B
5 -1 94462520 480 0 0 0 0 0 24 584 608 [C
6 -1 55019976 480 0 0 0 0 0 32 584 616 [Ljava.util.HashMap$Node;
7 -1 53828832 480 0 0 0 0 0 24 584 608 [Ljava.lang.Object;
8 89 51354560 504 0 9016 42 2757 23352 6976 26704 33680 java.net.URL
9 89 48028392 504 0 544 1 20 496 216 1520 1736 java.util.LinkedList$Node
10 28783 40910880 1000 0 6864 51 3951 35648 8664 35792 44456 java.util.HashMap
...snip...
48234 48225 0 608 0 288 2 10 288 160 1152 1312 sun.util.resources.en.CalendarData_en
48235 48225 0 608 0 360 2 27 304 200 1200 1400 sun.util.resources.en.CurrencyNames_en_US
48236 48225 0 608 0 288 2 10 288 160 1152 1312 sun.util.resources.en.LocaleNames_en
48237 48229 0 608 0 288 2 10 304 176 1152 1328 sun.util.resources.en.TimeZoneNames_en
48238 29013 0 520 0 272 2 5 592 160 1352 1512 sun.util.spi.CalendarProvider
48239 89 0 512 0 336 3 5 440 240 1184 1424 sun.util.spi.XmlPropertiesProvider
48240 89 0 560 0 440 5 16 760 488 1504 1992 sun.util.xml.PlatformXmlPropertiesProvider$EH
48241 89 0 528 0 1040 3 71 520 464 1840 2304 sun.util.xml.PlatformXmlPropertiesProvider$Resolver
48242 89 0 552 0 520 3 19 512 456 1392 1848 uescape.view.UnicodeEscapeView$1
48243 89 0 552 0 520 3 19 512 456 1392 1848 uescape.view.UnicodeEscapeView$2
1374367440 32457872 432408 90295960 502480 22001616 144854704 85034192 198366896 283401088 Total
485.0% 11.5% 0.2% 31.9% - 7.8% 51.1% 30.0% 70.0% 100.0%
Index Super InstBytes KlassBytes annotations CpAll MethodCount Bytecodes MethodAll ROAll RWAll Total ClassName
I don't have much experience profiling native processes on Windows. How can I determine what's endlessly allocating so much memory?
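One way to narrow this down, as a sketch (it assumes the Eclipse JVM can be restarted with an extra flag): Java 8 can account for its own non-heap allocations via Native Memory Tracking. Add this line to eclipse.ini:
-XX:NativeMemoryTracking=summary
Then, while the process is growing, take a baseline and diff against it:
jcmd PID VM.native_memory baseline
jcmd PID VM.native_memory summary.diff
If the growth shows up under categories such as Class, Thread or Internal, the memory is being allocated by the JVM itself (metaspace, thread stacks, NIO buffers); if the NMT totals stay flat while the vmmap size keeps climbing, the allocations are coming from native code outside the JVM's accounting.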

This is indeed a bug in the pmd-eclipse-plugin.
See https://github.com/pmd/pmd-eclipse-plugin/issues/52
The latest version 4.0.17.v20180801-1551 contains a fix.
It is available via the update site: https://dl.bintray.com/pmd/pmd-eclipse-plugin/updates/

Related

Solr Replication leaking some memory?

Recently we discovered that the JBoss process on our Linux server was killed by the OS due to high memory consumption (about 2.3 GB). Here is the dump:
RPC: fragment too large: 0x00800103
RPC: multiple fragments per record not supported
RPC: fragment too large: 0x00800103
RPC: multiple fragments per record not supported
RPC: fragment too large: 0x00800103
RPC: multiple fragments per record not supported
RPC: fragment too large: 0x00800103
RPC: multiple fragments per record not supported
RPC: fragment too large: 0x00800103
RPC: multiple fragments per record not supported
RPC: fragment too large: 0x00800103
RPC: multiple fragments per record not supported
RPC: fragment too large: 0x00800103
RPC: multiple fragments per record not supported
java invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0, oom_score_adj=0
java cpuset=/ mems_allowed=0
Pid: 11445, comm: java Not tainted 2.6.32-431.el6.x86_64 #1
Call Trace:
[<ffffffff810d05b1>] ? cpuset_print_task_mems_allowed+0x91/0xb0
[<ffffffff81122960>] ? dump_header+0x90/0x1b0
[<ffffffff8122798c>] ? security_real_capable_noaudit+0x3c/0x70
[<ffffffff81122de2>] ? oom_kill_process+0x82/0x2a0
[<ffffffff81122d21>] ? select_bad_process+0xe1/0x120
[<ffffffff81123220>] ? out_of_memory+0x220/0x3c0
[<ffffffff8112fb3c>] ? __alloc_pages_nodemask+0x8ac/0x8d0
[<ffffffff81167a9a>] ? alloc_pages_current+0xaa/0x110
[<ffffffff8111fd57>] ? __page_cache_alloc+0x87/0x90
[<ffffffff8111f73e>] ? find_get_page+0x1e/0xa0
[<ffffffff81120cf7>] ? filemap_fault+0x1a7/0x500
[<ffffffff8114a084>] ? __do_fault+0x54/0x530
[<ffffffff810afa17>] ? futex_wait+0x227/0x380
[<ffffffff8114a657>] ? handle_pte_fault+0xf7/0xb00
[<ffffffff8114b28a>] ? handle_mm_fault+0x22a/0x300
[<ffffffff8104a8d8>] ? __do_page_fault+0x138/0x480
[<ffffffff81527910>] ? thread_return+0x4e/0x76e
[<ffffffff8152d45e>] ? do_page_fault+0x3e/0xa0
[<ffffffff8152a815>] ? page_fault+0x25/0x30
Mem-Info:
Node 0 DMA per-cpu:
CPU 0: hi: 0, btch: 1 usd: 0
CPU 1: hi: 0, btch: 1 usd: 0
Node 0 DMA32 per-cpu:
CPU 0: hi: 186, btch: 31 usd: 178
CPU 1: hi: 186, btch: 31 usd: 30
Node 0 Normal per-cpu:
CPU 0: hi: 186, btch: 31 usd: 174
CPU 1: hi: 186, btch: 31 usd: 194
active_anon:113513 inactive_anon:184789 isolated_anon:0
active_file:21 inactive_file:0 isolated_file:0
unevictable:0 dirty:10 writeback:0 unstable:0
free:17533 slab_reclaimable:4706 slab_unreclaimable:8059
mapped:64 shmem:4 pagetables:3064 bounce:0
Node 0 DMA free:15696kB min:248kB low:308kB high:372kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15300kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
lowmem_reserve[]: 0 3000 4010 4010
Node 0 DMA32 free:41740kB min:50372kB low:62964kB high:75556kB active_anon:200648kB inactive_anon:216504kB active_file:20kB inactive_file:52kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:3072160kB mlocked:0kB dirty:8kB writeback:0kB mapped:168kB shmem:0kB slab_reclaimable:3720kB slab_unreclaimable:2476kB kernel_stack:512kB pagetables:516kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:108 all_unreclaimable? yes
lowmem_reserve[]: 0 0 1010 1010
Node 0 Normal free:12696kB min:16956kB low:21192kB high:25432kB active_anon:253404kB inactive_anon:522652kB active_file:64kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:1034240kB mlocked:0kB dirty:32kB writeback:0kB mapped:88kB shmem:16kB slab_reclaimable:15104kB slab_unreclaimable:29760kB kernel_stack:3704kB pagetables:11740kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:146 all_unreclaimable? yes
lowmem_reserve[]: 0 0 0 0
Node 0 DMA: 4*4kB 2*8kB 3*16kB 4*32kB 2*64kB 0*128kB 0*256kB 0*512kB 1*1024kB 1*2048kB 3*4096kB = 15696kB
Node 0 DMA32: 341*4kB 277*8kB 209*16kB 128*32kB 104*64kB 54*128kB 33*256kB 13*512kB 0*1024kB 1*2048kB 0*4096kB = 41740kB
Node 0 Normal: 2662*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 1*2048kB 0*4096kB = 12696kB
64603 total pagecache pages
64549 pages in swap cache
Swap cache stats: add 3763837, delete 3699288, find 1606527/1870160
Free swap = 0kB
Total swap = 1048568kB
1048560 pages RAM
67449 pages reserved
1061 pages shared
958817 pages non-shared
[ pid ] uid tgid total_vm rss cpu oom_adj oom_score_adj name
[ 419] 0 419 2662 1 1 -17 -1000 udevd
[ 726] 0 726 2697 1 1 -17 -1000 udevd
[ 1021] 0 1021 4210 40 1 0 0 vmware-guestd
[ 1238] 0 1238 23294 28 1 -17 -1000 auditd
[ 1254] 65 1254 112744 203 1 0 0 nslcd
[ 1267] 0 1267 62271 123 1 0 0 rsyslogd
[ 1279] 0 1279 2705 32 1 0 0 irqbalance
[ 1293] 32 1293 4744 16 1 0 0 rpcbind
[ 1311] 29 1311 5837 2 0 0 0 rpc.statd
[ 1422] 81 1422 5874 36 0 0 0 dbus-daemon
[ 1451] 0 1451 1020 1 0 0 0 acpid
[ 1460] 68 1460 9995 129 0 0 0 hald
[ 1461] 0 1461 5082 2 1 0 0 hald-runner
[ 1490] 0 1490 5612 2 1 0 0 hald-addon-inpu
[ 1503] 68 1503 4484 2 0 0 0 hald-addon-acpi
[ 1523] 0 1523 134268 53 0 0 0 automount
[ 1540] 0 1540 1566 1 0 0 0 mcelog
[ 1552] 0 1552 16651 27 1 -17 -1000 sshd
[ 1560] 0 1560 5545 26 0 0 0 xinetd
[ 1568] 38 1568 8202 33 0 0 0 ntpd
[ 1584] 0 1584 21795 56 0 0 0 sendmail
[ 1592] 51 1592 19658 32 0 0 0 sendmail
[ 1601] 0 1601 29324 21 1 0 0 crond
[ 1612] 0 1612 5385 5 1 0 0 atd
[ 1638] 0 1638 1016 2 0 0 0 mingetty
[ 1640] 0 1640 1016 2 1 0 0 mingetty
[ 1642] 0 1642 1016 2 0 0 0 mingetty
[ 1644] 0 1644 2661 1 1 -17 -1000 udevd
[ 1645] 0 1645 1016 2 0 0 0 mingetty
[ 1647] 0 1647 1016 2 1 0 0 mingetty
[ 1649] 0 1649 1016 2 1 0 0 mingetty
[25003] 0 25003 26827 1 1 0 0 rpc.rquotad
[25007] 0 25007 5440 2 1 0 0 rpc.mountd
[25045] 0 25045 5773 2 1 0 0 rpc.idmapd
[31756] 0 31756 43994 12 0 0 0 httpd
[31758] 48 31758 45035 205 0 0 0 httpd
[31759] 48 31759 45035 210 1 0 0 httpd
[31760] 48 31760 45035 201 1 0 0 httpd
[31761] 48 31761 45068 211 1 0 0 httpd
[31762] 48 31762 45068 199 0 0 0 httpd
[31763] 48 31763 45035 196 0 0 0 httpd
[31764] 48 31764 45068 191 1 0 0 httpd
[31765] 48 31765 45035 206 1 0 0 httpd
[ 1893] 0 1893 41344 2 0 0 0 su
[ 1896] 500 1896 26525 2 0 0 0 standalone.sh
[ 1957] 500 1957 570217 81589 0 0 0 java
[10739] 0 10739 41344 2 0 0 0 su
[10742] 500 10742 26525 2 0 0 0 standalone.sh
[10805] 500 10805 576358 77163 0 0 0 java
[13378] 0 13378 41344 2 0 0 0 su
[13381] 500 13381 26525 2 1 0 0 standalone.sh
[13442] 500 13442 561881 73430 1 0 0 java
Out of memory: Kill process 10805 (java) score 141 or sacrifice child
Killed process 10805, UID 500, (java) total-vm:2305432kB, anon-rss:308648kB, file-rss:4kB
It was shut down at about 04:00 in the morning, when there were no users and no activity on the server besides Solr replication. It was the master node that failed; our slave polls it every minute. Here is the replication config:
<requestHandler name="/replication" class="solr.ReplicationHandler" >
<lst name="master">
<str name="enable">${solr.enable.master:false}</str>
<str name="replicateAfter">commit</str>
<str name="replicateAfter">startup</str>
<str name="confFiles">schema.xml,stopwords.txt</str>
</lst>
<lst name="slave">
<str name="enable">${solr.enable.slave:false}</str>
<str name="masterUrl">${solr.master.url:http://localhost:8080/solr/cstb}</str>
<str name="pollInterval">00:00:60</str>
</lst>
</requestHandler>
Since there was no user activity, there were no index changes, and thus Solr should not actually have been doing anything (I assume).
Some other values from config file:
<indexDefaults>
<useCompoundFile>false</useCompoundFile>
<mergeFactor>10</mergeFactor>
<ramBufferSizeMB>32</ramBufferSizeMB>
<maxFieldLength>10000</maxFieldLength>
<writeLockTimeout>1000</writeLockTimeout>
<lockType>native</lockType>
</indexDefaults>
<mainIndex>
<useCompoundFile>false</useCompoundFile>
<ramBufferSizeMB>32</ramBufferSizeMB>
<mergeFactor>10</mergeFactor>
<unlockOnStartup>false</unlockOnStartup>
<reopenReaders>true</reopenReaders>
<deletionPolicy class="solr.SolrDeletionPolicy">
<str name="maxCommitsToKeep">1</str>
<str name="maxOptimizedCommitsToKeep">0</str>
</deletionPolicy>
<infoStream file="INFOSTREAM.txt">false</infoStream>
</mainIndex>
<queryResultWindowSize>20</queryResultWindowSize>
<queryResultMaxDocsCached>200</queryResultMaxDocsCached>
So, has anybody experienced a similar situation, or does anyone have any thoughts about it? We are using Solr 3.5.
You are running into a low memory condition that is causing Linux to kill off a high memory usage process:
Out of memory: Kill process 10805 (java) score 141 or sacrifice child
This is known as the out-of-memory killer, or OOM killer. Given that you are using only 512 MB of heap for the JVM (far too little, in my opinion, for a production Solr instance of any significant capacity), you don't have many options, as you cannot reduce the heap to free up more OS memory.
Things you can try:
Upgrade to a larger server with more memory. This would be my number one recommendation - you simply don't have enough memory available.
Move any other production code to another system. You did not mention whether you have anything else running on this server, but I would move anything I could elsewhere. There is not a lot to gain here, as I suspect your system is quite small to begin with, but every little bit helps.
Try tuning the OOM killer to be less strict. This is not easy to do, and I don't know how much you will gain given the small server size, but you can always experiment (see the example after the links below):
https://unix.stackexchange.com/questions/58872/how-to-set-oom-killer-adjustments-for-daemons-permanently
http://backdrift.org/how-to-create-oom-killer-exceptions
http://www.oracle.com/technetwork/articles/servers-storage-dev/oom-killer-1911807.html
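As a minimal illustration of the approach those links describe (using PID 10805 from your log as an example; the value does not survive a process restart, so it normally belongs in the startup script):
echo -1000 > /proc/10805/oom_score_adj
A score of -1000 exempts the process from the OOM killer entirely, which only pushes the problem onto some other process, so treat it as a stopgap rather than a fix.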

Obtaining EVDEV Event Code from raw bytes?

In a previous question, I asked how to interpret the event bytes from /dev/input/mice. I now realize that /dev/input/mice does NOT give me the information I need, as I am using a touchscreen with the stmpe-ts driver. It is set up under the evdev node /dev/input/event2, and using a personal program I built I can obtain the necessary bytes from that file. My only problem is translating those bytes into event codes. Using evtest, I get this output:
Input driver version is 1.0.1
Input device ID: bus 0x18 vendor 0x0 product 0x0 version 0x0
Input device name: "stmpe-ts"
Supported events:
Event type 0 (EV_SYN)
Event type 1 (EV_KEY)
Event code 330 (BTN_TOUCH)
Event type 3 (EV_ABS)
Event code 0 (ABS_X)
Value 2486
Min 0
Max 4095
Event code 1 (ABS_Y)
Value 1299
Min 0
Max 4095
Event code 24 (ABS_PRESSURE)
Value 0
Min 0
Max 255
Properties:
Testing ... (interrupt to exit)
I need those event codes, from the raw data obtained by reading directly from /dev/input/event2. This is as follows:
236 21 100 83 63 223 11 0 3 0 0 0 124 8 0 0 236 21 100 83 72 223 11 0 3 0 1 0 237 7
0 0 236 21 100 83 76 223 11 0 3 0 24 0 60 0 0 0 236 21 100 83 80 223 11 0 1 0 74
1 1 0 0 0 236 21 100 83 84 223 11 0 0 0 0 0 0 0 0 0 236 21 100 83 251 247 11 0 3 0 0 0
123 8 0 0 236 21 100 83 6 248 11 0 3 0 1 0 242 7 0 0 236 21 100 83 10 248 11 0 3 0 24
0 142 0 0 0 236 21 100 83 16 248 11 0 0 0 0 0 0 0 0 0 236 21 100 83 137 16 12 0 3 0 0
0 121 8 0 0 236 21 100 83 147 16 12 0 3 0 1 0 7 8 0 0 236 21 100 83 150 16 12 0 3 0 24
0 163 0 0 0 236 21 100 83 156 16 12 0 0 0 0 0 0 0 0 0
Is this even possible to do? If so, can someone help me out here? (Also, I've determined that a pattern occurs every 16 bytes or so, and that 236 and 237 are bytes indicating that the event is a touch event: 236 being touch without click, and 237 being touch with click.)
The output of evdev nodes is a series of struct input_event records, defined in linux/input.h:
struct input_event {
    struct timeval time;
    __u16 type;
    __u16 code;
    __s32 value;
};
So all you need to do is read the data into an array of those structs and then access each type/code as needed. I haven't done that from Java, but it's probably not that hard.
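As a rough, untested sketch of what that could look like in Java (assuming a 32-bit userspace, so each record is the 16 bytes you observed, stored little-endian; on a 64-bit system the timeval fields double in size and a record is 24 bytes):
import java.io.DataInputStream;
import java.io.FileInputStream;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class EvdevReader {
    // Size of struct input_event on a 32-bit system:
    // 4 (tv_sec) + 4 (tv_usec) + 2 (type) + 2 (code) + 4 (value)
    private static final int EVENT_SIZE = 16;

    public static void main(String[] args) throws Exception {
        try (DataInputStream in = new DataInputStream(new FileInputStream("/dev/input/event2"))) {
            byte[] record = new byte[EVENT_SIZE];
            while (true) {
                in.readFully(record); // read one struct input_event
                ByteBuffer buf = ByteBuffer.wrap(record).order(ByteOrder.LITTLE_ENDIAN);
                int tvSec  = buf.getInt();            // timeval seconds
                int tvUsec = buf.getInt();            // timeval microseconds
                int type   = buf.getShort() & 0xFFFF; // 0 = EV_SYN, 1 = EV_KEY, 3 = EV_ABS
                int code   = buf.getShort() & 0xFFFF; // 0 = ABS_X, 1 = ABS_Y, 24 = ABS_PRESSURE, 330 = BTN_TOUCH
                int value  = buf.getInt();            // reported position/pressure/key state
                System.out.printf("%d.%06d type=%d code=%d value=%d%n", tvSec, tvUsec, type, code, value);
            }
        }
    }
}
Reading requires permission on /dev/input/event2, and the loop blocks until the device produces the next event.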
evtest is free software btw, so you can look at the code and see what it does.
Also look at libevdev, it's MIT license so you don't get 'tainted' by looking at it.
http://www.freedesktop.org/wiki/Software/libevdev/

Preventing Linux Kernel from killing Java process with really large heap [closed]

Running Ubuntu 12.04.3 LTS with 32 cores and 244 GB RAM (the big Amazon EC2 memory instance) and Java 1.7u25.
My Java process is running with -Xmx226g.
I'm trying to create a really large local cache using CQEngine, and so far it's blazingly fast with 30,000,000 records. Of course I will add an eviction policy that allows garbage collection to clean up evicted objects, but I'm really trying to push the limits here :)
Looking at jvisualvm, the total heap is at about 180 GB, which dies 40 GB too soon. I should be able to squeeze out a bit more.
It's not that I don't want the kernel to kill a process if it runs out of resources, but I think it's killing it too early, and I want to squeeze the memory usage as much as possible.
The ulimit output is as follows...
ubuntu#ip-10-156-243-111:/var/log$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 1967992
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 1967992
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
The kern.log output is...
340 total pagecache pages
0 pages in swap cache
Swap cache stats: add 0, delete 0, find 0/0
Free swap = 0kB
Total swap = 0kB
63999984 pages RAM
1022168 pages reserved
649 pages shared
62830686 pages non-shared
[ pid ] uid tgid total_vm rss cpu oom_adj oom_score_adj name
[ 505] 0 505 4342 93 9 0 0 upstart-udev-br
[ 507] 0 507 5456 198 2 -17 -1000 udevd
[ 642] 0 642 5402 155 28 -17 -1000 udevd
[ 643] 0 643 5402 155 29 -17 -1000 udevd
[ 739] 0 739 3798 49 10 0 0 upstart-socket-
[ 775] 0 775 1817 124 25 0 0 dhclient3
[ 897] 0 897 12509 152 10 -17 -1000 sshd
[ 949] 101 949 63430 91 9 0 0 rsyslogd
[ 990] 102 990 5985 90 8 0 0 dbus-daemon
[ 1017] 0 1017 3627 40 9 0 0 getty
[ 1024] 0 1024 3627 41 10 0 0 getty
[ 1029] 0 1029 3627 43 6 0 0 getty
[ 1030] 0 1030 3627 41 3 0 0 getty
[ 1032] 0 1032 3627 41 1 0 0 getty
[ 1035] 0 1035 1083 34 1 0 0 acpid
[ 1036] 0 1036 4779 49 5 0 0 cron
[ 1037] 0 1037 4228 40 8 0 0 atd
[ 1045] 0 1045 3996 57 3 0 0 irqbalance
[ 1084] 0 1084 3627 43 2 0 0 getty
[ 1085] 0 1085 3189 39 11 0 0 getty
[ 1087] 103 1087 46916 300 0 0 0 whoopsie
[ 1159] 0 1159 20490 215 0 0 0 sshd
[ 1162] 0 1162 1063575 263 15 0 0 console-kit-dae
[ 1229] 0 1229 46648 153 4 0 0 polkitd
[ 1318] 1000 1318 20490 211 10 0 0 sshd
[ 1319] 1000 1319 6240 1448 1 0 0 bash
[ 1816] 1000 1816 70102543 62010032 4 0 0 java
[ 1947] 0 1947 20490 214 6 0 0 sshd
[ 2035] 1000 2035 20490 210 0 0 0 sshd
[ 2036] 1000 2036 6238 1444 13 0 0 bash
[ 2179] 1000 2179 13262 463 2 0 0 vi
Out of memory: Kill process 1816 (java) score 987 or sacrifice child
Killed process 1816 (java) total-vm:280410172kB, anon-rss:248040128kB, file-rss:0kB
The kern.log clearly states that it killed my process because it ran out of memory. But like I said, I think I can squeeze it a bit more. Are there any settings I need to change to allow me to use the 226 GB allocated to Java?

Load test memory problem

We are load testing our app running on IBM Portal Server on a Linux box, but we found that the "free" values from vmstat decrease steadily, even after the test. Looking at top, the "VIRT" values also increase steadily. Monitoring the Java heap usage, the initial heap size (1.5 GB) was never reached; usage rose slowly and steadily (with minor rises/drops within the test period) from about 6xx MB to about 1 GB until the test ended, at which point it dropped by a large amount, back to about 6xx MB. My questions are:
1. Is this result normal and OK?
2. Is the app's heap usage behaviour OK?
3. Is it normal that the "free" values of vmstat decrease steadily and the "VIRT" values of top increase steadily, without dropping after the test?
Below are top and vmstat output:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
14157 user01 17 0 2508m 1.2g 47m S 16.0 23.3 11:38.94 java
14157 user01 17 0 2508m 1.2g 47m S 16.9 23.3 11:49.08 java
14157 user01 17 0 2508m 1.2g 47m S 15.8 23.4 11:58.58 java
14157 user01 17 0 2509m 1.2g 47m S 13.0 23.5 12:06.36 java
14157 user01 17 0 2509m 1.2g 47m S 17.6 23.5 12:16.92 java
14157 user01 17 0 2509m 1.2g 47m S 16.9 23.6 12:27.09 java
14157 user01 17 0 2510m 1.2g 47m S 16.1 23.6 12:36.73 java
14157 user01 17 0 2510m 1.2g 47m S 14.5 23.7 12:45.43 java
...
14157 user01 17 0 2514m 1.2g 47m S 15.9 24.6 15:20.18 java
14157 user01 17 0 2514m 1.2g 47m S 16.2 24.6 15:29.88 java
14157 user01 17 0 2514m 1.2g 47m S 16.1 24.7 15:39.56 java
14157 user01 17 0 2515m 1.2g 47m S 19.5 24.7 15:51.28 java
14157 user01 17 0 2516m 1.2g 47m S 11.4 24.8 15:58.11 java
14157 user01 17 0 2515m 1.2g 47m S 14.7 24.8 16:06.91 java
14157 user01 17 0 2515m 1.2g 47m S 16.0 24.9 16:16.51 java
14157 user01 17 0 2515m 1.2g 47m S 16.1 24.9 16:26.15 java
14157 user01 17 0 2515m 1.2g 47m S 14.7 25.0 16:34.96 java
14157 user01 17 0 2516m 1.2g 47m S 11.8 25.0 16:42.03 java
...
14157 user01 17 0 2517m 1.3g 47m S 13.1 25.6 18:18.04 java
14157 user01 17 0 2517m 1.3g 47m S 17.8 25.6 18:28.75 java
14157 user01 17 0 2516m 1.3g 47m S 15.2 25.7 18:37.85 java
14157 user01 17 0 2517m 1.3g 47m S 13.5 25.7 18:45.93 java
14157 user01 17 0 2516m 1.3g 47m S 14.6 25.8 18:54.70 java
14157 user01 17 0 2517m 1.3g 47m S 14.6 25.8 19:03.47 java
14157 user01 17 0 2517m 1.3g 47m S 15.3 25.9 19:12.67 java
14157 user01 17 0 2517m 1.3g 47m S 16.6 25.9 19:22.64 java
14157 user01 17 0 2517m 1.3g 47m S 15.0 26.0 19:31.65 java
14157 user01 17 0 2517m 1.3g 47m S 12.4 26.0 19:39.09 java
...
14157 user01 17 0 2530m 1.4g 47m S 0.0 27.5 23:23.91 java
procs -----------memory---------- ---swap-- -----io---- -system-- -----cpu------
r b swpd free buff cache si so bi bo in cs us sy id wa st
1 0 2004 702352 571508 1928436 0 0 0 54 287 413 1 1 98 0 0
0 0 2004 702368 571528 1928416 0 0 0 12 280 379 0 0 100 0 0
...
24 0 2004 673988 572504 1948000 0 0 0 440 760 751 16 6 78 0 0
0 0 2004 671352 572540 1951048 0 0 0 477 1180 830 19 7 74 0 0
0 0 2004 674756 572572 1946904 0 0 0 380 604 650 13 3 84 0 0
1 0 2004 694208 572612 1928360 0 0 0 222 518 599 7 2 91 0 0
16 0 2004 692068 572640 1929360 0 0 0 539 1075 850 24 7 69 0 0
0 0 2004 689036 572680 1931376 0 0 0 292 978 781 14 6 81 0 0
...
0 0 2004 530432 579120 2007176 0 0 0 453 511 712 18 4 78 0 0
0 0 2004 528440 579152 2008172 0 0 0 200 436 652 10 2 87 0 0
0 0 2004 524352 579192 2010188 0 0 0 401 514 779 17 6 76 0 0
0 0 2004 524964 578208 2012200 0 0 0 514 475 696 15 3 82 0 0
0 0 2004 522484 578260 2013176 0 0 0 416 488 699 15 3 82 0 0
2 0 2004 521264 578300 2015192 0 0 0 368 501 728 14 5 80 0 0
0 0 2004 518400 578340 2016180 0 0 0 404 452 647 14 3 84 0 0
25 0 2004 517064 578368 2018208 0 0 0 414 497 752 15 3 82 0 0
...
0 0 2004 499312 578820 2029064 0 0 0 351 459 660 13 3 84 0 0
0 0 2004 496228 578872 2031068 0 0 0 260 473 701 15 5 80 0 0
0 0 2004 501360 578912 2026916 0 0 0 500 398 622 9 3 88 0 0
1 0 2004 499260 578948 2027908 0 0 0 262 436 638 13 2 85 0 0
1 0 2004 497964 578984 2028900 0 0 0 276 452 628 15 3 82 0 0
0 0 2004 497492 579024 2029888 0 0 0 200 384 548 7 2 91 0 0
0 0 2004 496620 579044 2030896 0 0 0 172 393 586 9 2 89 0 0
...
1 0 2004 357876 566000 2104592 0 0 0 374 510 736 18 6 76 0 0
23 0 2004 358544 566032 2105588 0 0 0 362 456 644 12 3 85 0 0
0 0 2004 376332 566084 2087032 0 0 0 353 441 614 13 3 84 0 0
0 0 2004 375888 566120 2088024 0 0 0 220 411 620 10 2 88 0 0
0 0 2004 375280 566156 2087988 0 0 0 224 408 586 7 2 91 0 0
16 0 2004 373092 566188 2090012 0 0 0 233 494 723 12 3 85 0 0
2 0 2004 369564 566236 2090992 0 0 0 455 475 714 14 5 80 1 0
...
r b swpd free buff cache si so bi bo in cs us sy id wa st
1 0 2004 235156 572776 2155384 0 0 0 8 282 396 0 0 100 0 0
0 0 2004 235132 572796 2155364 0 0 0 24 291 435 0 0 100 0 0
1 0 2004 234780 572828 2155332 0 0 0 101 292 474 1 5 94 0 0
0 0 2004 234804 572844 2155316 0 0 0 45 288 451 0 1 99 0 0
0 0 2004 234852 572856 2155304 0 0 0 12 283 409 0 0 100 0 0
Heap Usage:
The most likely cause of what you are seeing is that the -Xms and -Xmx values differ in your JAVA_OPTS. What appears to be happening is that the OS allocates memory to the JVM only as the JVM requests it. It is never bad practice to allocate all of the heap you will need to the JVM at start-up: set the two values to the same upper limit. With the two values equal, the JVM never has to request more memory from the OS and is free to do the work it needs to within its own memory space.
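For example, and this is only a sketch (the exact variable and script depend on how your portal server passes JVM options, and 1536m simply mirrors the 1.5 GB figure from the question):
JAVA_OPTS="$JAVA_OPTS -Xms1536m -Xmx1536m"
With the two values equal, the JVM sizes its heap once at start-up rather than expanding it during the test, which removes one source of the steady growth you observed.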
It is not uncommon to see the behavior you have observed; the JVM will continue to request more memory from the OS until the upper limit (-Xmx) is reached. If you are new to sizing the heap, or to other techniques for tuning the JVM, have a look at this guide.
On another note, top and vmstat will only show you so much of what is happening with the JVM's memory; what you are seeing is what the operating system has allocated to it. You will want to use other tools, such as jmap and jvisualvm, to see how the memory inside the JVM is behaving. These tools are a better benchmark for your application: they show the new and old generations, garbage collections, and other stats which are really important.
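For example, assuming a full HotSpot JDK (not just a JRE) on the box, which the jmap/jvisualvm suggestion already presumes, and using PID 14157 from your top output:
jmap -heap 14157
jstat -gcutil 14157 5000
jmap -heap prints the configured and currently used heap sizes, and jstat -gcutil samples the occupancy of each generation and the GC counts every five seconds, which tells you whether the growth you see in top is heap growth or something outside the heap.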

Dreaded Could not reserve enough space for object heap

I'm trying to get Solr up and running. At first I had JDK 1.6 working fine, and then Tomcat running fine too. All of a sudden, when trying to run Solr for the first time, I get the error message:
[root@78 bin]# ./java -version
Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine.
I've deleted Tomcat, deleted the JDK, and reinstalled the latest JRE, but I still get the error message, even when just trying to get the version number of Java.
top - 18:47:15 up 207 days, 13:50, 1 user, load average: 0.08, 0.03, 0.00
Tasks: 42 total, 1 running, 41 sleeping, 0 stopped, 0 zombie
Cpu(s): 5.0%us, 0.2%sy, 0.0%ni, 94.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 786432k total, 376656k used, 409776k free, 0k buffers
Swap: 0k total, 0k used, 0k free, 0k cached
The setup I've got is: dual-CPU, dual-core AMD Opteron, 512 MB RAM, 40 GB HDD.
I'm pretty much new to UNIX, so any help or advice would be really appreciated. Thanks, guys.
EDIT: The running processes are:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1 root 18 0 10332 636 600 S 0 0.1 0:08.28 init
1752 psaadm 15 0 176m 29m 17m S 0 3.9 0:03.76 httpsd
1785 psaadm 15 0 173m 24m 14m S 0 3.1 0:02.03 httpsd
5308 psaadm 15 0 174m 32m 21m S 0 4.2 0:02.70 httpsd
6107 apache 25 0 347m 47m 5616 S 0 6.2 1:48.26 httpd
11493 root 15 -4 12588 320 316 S 0 0.0 0:00.00 udevd
12105 root 15 0 60592 1224 676 S 0 0.2 0:00.00 sshd
13659 apache 15 0 345m 46m 4784 S 0 6.1 0:57.14 httpd
15855 root 15 0 21628 768 672 S 0 0.1 0:13.75 xinetd
15986 root 15 0 40848 592 536 S 0 0.1 0:00.38 couriertcpd
16086 root 18 0 33540 1184 1120 S 0 0.2 0:00.28 courierlogger
16117 root 21 0 40848 536 532 S 0 0.1 0:00.00 couriertcpd
16119 root 21 0 33544 1072 1068 S 0 0.1 0:00.00 courierlogger
16135 root 15 0 40848 592 536 S 0 0.1 0:03.09 couriertcpd
16137 root 18 0 33540 1184 1120 S 0 0.2 0:01.70 courierlogger
16154 root 18 0 40852 536 532 S 0 0.1 0:00.00 couriertcpd
16157 root 18 0 33540 1124 1120 S 0 0.1 0:00.00 courierlogger
16287 qmails 18 0 3832 512 428 S 0 0.1 2:03.49 qmail-send
16289 qmaill 18 0 3780 508 444 S 0 0.1 0:36.67 splogger
16290 root 18 0 3816 408 324 S 0 0.1 0:00.09 qmail-lspawn
16291 qmailr 17 0 3820 404 328 S 0 0.1 0:16.95 qmail-rspawn
16292 qmailq 18 0 3772 368 324 S 0 0.0 0:15.61 qmail-clean
17669 root 18 0 12592 1180 908 R 0 0.2 0:00.03 top
18190 root 15 0 318m 25m 9000 S 0 3.3 0:36.21 httpd
19687 apache 16 0 347m 47m 5764 S 0 6.2 1:10.59 httpd
19710 named 25 0 180m 2572 1744 S 0 0.3 0:03.06 named
19809 root 18 0 11908 1152 1148 S 0 0.1 0:00.01 mysqld_safe
20166 apache 15 0 347m 47m 5696 S 0 6.2 1:07.68 httpd
20340 mysql 15 0 303m 35m 5620 S 0 4.7 185:56.38 mysqld
23747 apache 15 0 412m 46m 5768 S 0 6.0 0:38.23 httpd
23791 root 15 0 166m 7504 4216 S 0 1.0 0:02.39 httpsd
23901 root 15 0 20836 616 548 S 0 0.1 3:37.38 crond
23926 root 18 0 46648 416 412 S 0 0.1 0:00.00 saslauthd
24084 root 18 0 46648 160 156 S 0 0.0 0:00.00 saslauthd
24297 root 15 0 96636 4032 3112 S 0 0.5 0:00.20 sshd
24302 root 18 0 12180 1804 1308 S 0 0.2 0:00.17 bash
24431 root 18 0 152m 1112 664 S 0 0.1 0:25.77 rsyslogd
24435 root 18 0 3784 336 332 S 0 0.0 0:00.00 rklogd
24537 apache 15 0 344m 45m 4364 S 0 5.9 0:35.93 httpd
It's a shared server, by the way.
free -m gives me:
total used free shared buffers cached
Mem: 768 367 400 0 0 0
-/+ buffers/cache: 367 400
Swap: 0 0 0
The JVM requires a certain amount of memory at start-up (configured via -Xms; the default is 32m, I believe). If it can't get it, it won't start.
So what else is running on your machine? I suspect you have very little free virtual memory.
Do you have $JAVA_OPTIONS set?
echo $JAVA_OPTIONS
Does the following command run without error?
java -Xmx8m -version
Just a simple reboot was needed; I can't for the life of me understand what happened or why.
