I want to detect the memory and CPU consumption of a particular app on Android (programmatically). Can anyone help me with it? I have tried the top method, but I want an alternative to it.
Any help will be appreciated, thanks :)
If you want to trace your memory usage in your app, there is the ActivityManager.getMemoryInfo() API.
CPU usage can be traced using the CpuStatsCollector API.
For a more informative memory usage overview, outside your app, you can use adb shell dumpsys meminfo <package_name|pid> [-d] for more specific memory usage statistics. For example, here is the command for the com.google.android.apps.maps process:
adb shell dumpsys meminfo com.google.android.apps.maps -d
Which gives you the following output:
** MEMINFO in pid 18227 [com.google.android.apps.maps] **
                   Pss  Private  Private  Swapped     Heap     Heap     Heap
                 Total    Dirty    Clean    Dirty     Size    Alloc     Free
                ------   ------   ------   ------   ------   ------   ------
  Native Heap    10468    10408        0        0    20480    14462     6017
  Dalvik Heap    34340    33816        0        0    62436    53883     8553
 Dalvik Other      972      972        0        0
        Stack     1144     1144        0        0
      Gfx dev    35300    35300        0        0
    Other dev        5        0        4        0
     .so mmap     1943      504      188        0
    .apk mmap      598        0      136        0
    .ttf mmap      134        0       68        0
    .dex mmap     3908        0     3904        0
    .oat mmap     1344        0       56        0
    .art mmap     2037     1784       28        0
   Other mmap       30        4        0        0
   EGL mtrack    73072    73072        0        0
    GL mtrack    51044    51044        0        0
      Unknown      185      184        0        0
        TOTAL   216524   208232     4384        0    82916    68345    14570
(output trimmed) More about it here.
Tracing memory usage on modern operating systems is a very complex task. See this question for more info.
To get your process ID:
int pid = android.os.Process.myPid();
To get CPU usage:
public String getCPUUsage(int pid) {
    try {
        // Filter the output of top for the given pid.
        String[] cmd = {
                "sh",
                "-c",
                "top -m 1000 -d 1 -n 1 | grep \"" + pid + "\""};
        Process p = Runtime.getRuntime().exec(cmd);
        BufferedReader reader = new BufferedReader(
                new InputStreamReader(p.getInputStream()));
        String line = reader.readLine();
        // line contains the process info
        return line;
    } catch (IOException e) {
        return null;
    }
}
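Since the question asks for an alternative to top, CPU time can also be read directly from /proc/<pid>/stat, where fields 14 and 15 are utime and stime in clock ticks. A minimal sketch in plain Java, assuming a Linux-style /proc layout (on Android you would pass android.os.Process.myPid() instead of ProcessHandle, which is not available there):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class ProcCpu {
    // Returns utime + stime clock ticks consumed so far by the given pid.
    // We split after the ')' so process names containing spaces do not
    // shift the fields; after the ')', parsing restarts at field 3 (state),
    // so utime (field 14) and stime (field 15) are indices 11 and 12.
    static long cpuTicks(int pid) throws IOException {
        String stat = new String(Files.readAllBytes(Paths.get("/proc/" + pid + "/stat")));
        String[] f = stat.substring(stat.lastIndexOf(')') + 2).split("\\s+");
        return Long.parseLong(f[11]) + Long.parseLong(f[12]);
    }

    public static void main(String[] args) throws Exception {
        int pid = (int) ProcessHandle.current().pid(); // on Android: android.os.Process.myPid()
        long before = cpuTicks(pid);
        Thread.sleep(1000);
        long delta = cpuTicks(pid) - before;
        // Divide by the clock tick rate (usually 100 Hz) for seconds of CPU time.
        System.out.println("ticks used in the last second: " + delta);
    }
}
```

Sampling the tick delta over an interval gives you the per-process CPU share without spawning top at all.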
Related
I'm studying Spring and Hibernate, and the IDE I'm using is Eclipse (after years of using IntelliJ). The problem is that every time I attempt to install Eclipse, either by using the installer or just downloading Eclipse JEE, I come across an installation error and can't move forward.
Here is the stack trace of the error.
PS: I've been investigating and can't find anything related.
/* -------------------------------------------------------------------------- */
Process: eclipse-inst [4686]
Path: /Volumes/VOLUME/Eclipse Installer.app/Contents/MacOS/eclipse-inst
Identifier: org.eclipse.oomph.setup.installer.product
Version: 1.17.0 (1.17.0.v20200610-0514)
Code Type: X86-64 (Native)
Parent Process: ??? [1]
Responsible: eclipse-inst [4686]
User ID: 501
Date/Time: 2020-08-14 08:06:40.452 -0400
OS Version: Mac OS X 10.16 (20A5343j)
Report Version: 12
Bridge OS Version: 5.0 (18P50347c)
Anonymous UUID: 4148D75A-3E08-F11D-ECE7-4B3A70E9672E
Sleep/Wake UUID: 912A59C5-0BFE-4085-94F6-2AB8CB24A2AF
Time Awake Since Boot: 14000 seconds
Time Since Wake: 1000 seconds
System Integrity Protection: enabled
Crashed Thread: 0 Dispatch queue: com.apple.main-thread
Exception Type: EXC_BAD_INSTRUCTION (SIGILL)
Exception Codes: 0x0000000000000001, 0x0000000000000000
Exception Note: EXC_CORPSE_NOTIFY
Termination Signal: Illegal instruction: 4
Termination Reason: Namespace SIGNAL, Code 0x4
Terminating Process: exc handler [4686]
Application Specific Information:
*** CFRelease() called with NULL ***
Thread 0 Crashed:: Dispatch queue: com.apple.main-thread
0 com.apple.CoreFoundation 0x00007fff297f593e CFRelease.cold.1 + 14
1 com.apple.CoreFoundation 0x00007fff2965110f CFRelease + 108
2 com.apple.JavaVM 0x00007fff2e436c61 MakeMatcher + 406
3 com.apple.JavaVM 0x00007fff2e435804 CreateJVMDetector + 38
4 com.apple.JavaVM 0x00007fff2e4375eb CheckForInstalledJavaRuntimes + 43
5 dyld 0x0000000015bcbebd ImageLoaderMachO::doModInitFunctions(ImageLoader::LinkContext const&) + 559
6 dyld 0x0000000015bcc2b8 ImageLoaderMachO::doInitialization(ImageLoader::LinkContext const&) + 40
7 dyld 0x0000000015bc6e78 ImageLoader::recursiveInitialization(ImageLoader::LinkContext const&, unsigned int, char const*, ImageLoader::InitializerTimingList&, ImageLoader::UninitedUpwards&) + 492
8 dyld 0x0000000015bc4d90 ImageLoader::processInitializers(ImageLoader::LinkContext const&, unsigned int, ImageLoader::InitializerTimingList&, ImageLoader::UninitedUpwards&) + 188
9 dyld 0x0000000015bc4e30 ImageLoader::runInitializers(ImageLoader::LinkContext const&, ImageLoader::InitializerTimingList&) + 82
10 dyld 0x0000000015bb6261 dyld::runInitializers(ImageLoader*) + 82
11 dyld 0x0000000015bc0769 dlopen_internal + 609
12 libdyld.dylib 0x00007fff6aa3826f dlopen_internal(char const*, int, void*) + 177
13 libdyld.dylib 0x00007fff6aa2690e dlopen + 28
14 com.apple.CoreFoundation 0x00007fff296ed352 _CFBundleDlfcnLoadBundle + 147
15 com.apple.CoreFoundation 0x00007fff29765ada _CFBundleLoadExecutableAndReturnError + 503
16 com.apple.CoreFoundation 0x00007fff2972db6e CFBundleGetFunctionPointerForName + 39
17 eclipse_1902.so 0x000000000df7a62e findSymbol + 62
18 eclipse_1902.so 0x000000000df78679 startJavaJNI + 89
19 eclipse_1902.so 0x000000000df74f44 _run + 5732
20 eclipse_1902.so 0x000000000df734c0 run + 432
21 org.eclipse.oomph.setup.installer.product 0x000000000de689f7 original_main + 1319
22 org.eclipse.oomph.setup.installer.product 0x000000000de693a7 main + 1655
23 libdyld.dylib 0x00007fff6aa36851 start + 1
Thread 1:
0 libsystem_pthread.dylib 0x00007fff6ac6e4b4 start_wqthread + 0
Thread 2:
0 libsystem_pthread.dylib 0x00007fff6ac6e4b4 start_wqthread + 0
Thread 3:
0 libsystem_pthread.dylib 0x00007fff6ac6e4b4 start_wqthread + 0
Thread 4:
0 libsystem_pthread.dylib 0x00007fff6ac6e4b4 start_wqthread + 0
Thread 0 crashed with X86 Thread State (64-bit):
rax: 0x00007fff29a06bc2 rbx: 0x0000000000000000 rcx: 0x0000000000000055 rdx: 0x0000000000000000
rdi: 0x0000000000000000 rsi: 0x00007ffee1d957f8 rbp: 0x00007ffee1d957d0 rsp: 0x00007ffee1d957c8
r8: 0x00000000000000a9 r9: 0x00000000000007fb r10: 0x0000000000002520 r11: 0x0000000000000060
r12: 0x0000000000000000 r13: 0x00006000026c8410 r14: 0x00006000039c8000 r15: 0x00006000008e9b40
rip: 0x00007fff297f593e rfl: 0x0000000000010246 cr2: 0x00006000013fa008
Logical CPU: 0
Error Code: 0x00000000
Trap Number: 6
External Modification Summary:
Calls made by other processes targeting this process:
task_for_pid: 0
thread_create: 0
thread_set_state: 0
Calls made by this process:
task_for_pid: 0
thread_create: 0
thread_set_state: 0
Calls made by all processes on this machine:
task_for_pid: 18440
thread_create: 0
thread_set_state: 0
VM Region Summary:
ReadOnly portion of Libraries: Total=657.9M resident=0K(0%) swapped_out_or_unallocated=657.9M(100%)
Writable regions: Total=620.9M written=0K(0%) resident=0K(0%) swapped_out=0K(0%) unallocated=620.9M(100%)
VIRTUAL REGION
REGION TYPE SIZE COUNT (non-coalesced)
=========== ======= =======
Activity Tracing 256K 1
CoreServices 172K 1
Foundation 16K 1
Kernel Alloc Once 8K 1
MALLOC 226.2M 31
MALLOC guard page 16K 4
MALLOC_NANO (reserved) 384.0M 1 reserved VM address space (unallocated)
STACK GUARD 56.0M 5
Stack 10.0M 5
VM_ALLOCATE 732K 12
__DATA 23.2M 280
__DATA_CONST 32K 1
__FONT_DATA 4K 1
__LINKEDIT 471.7M 7
__OBJC_RO 36.5M 1
__OBJC_RW 2483K 2
__TEXT 186.3M 279
__UNICODE 588K 1
mapped file 49.5M 8
shared memory 40K 4
=========== ======= =======
TOTAL 1.4G 646
TOTAL, minus reserved VM space 1.0G 646
Environment: Running macOS Big Sur (10.16) and java version "14.0.1" 2020-04-14.
Steps: Go to the Applications folder -> Right click on Eclipse/STS4 -> Click on Show Package Contents -> Click on Contents -> open info.plist -> Paste this code, replacing the tags in the file.
<array>
<!-- to use a specific Java version (instead of the platform's default) uncomment one of the following options, or add a VM found via $/usr/libexec/java_home -V -->
<string>-vm</string><string>/Library/Java/JavaVirtualMachines/jdk-14.0.1.jdk/Contents/Home/bin/java</string>
<string>-keyring</string>
<string>~/.eclipse_keyring</string>
</array>
STS4 builds from the macOS Catalina era don't work on macOS Big Sur.
Try spring-tool-suite-4-4.9.0.CI-B173-e4.18.0-macosx.cocoa.x86_64.dmg; you can download it from http://dist.springsource.com/snapshot/STS4/nightly-distributions.html
My solution
Step 1: Use an updated Java SDK, version "1.8.0_271"
Step 2: Download and install STS from spring-tool-suite-4-4.9.0.CI-B173-e4.18.0-macosx.cocoa.x86_64.dmg **
Step 3: Verify your shell (STS4 doesn't work with the zsh version); it may need to be /bin/bash. Follow the steps at https://support.apple.com/es-us/HT208050
** link: http://dist.springsource.com/snapshot/STS4/nightly-distributions.html
I want to analyze my app's memory. But when I use adb shell dumpsys meminfo myapp, it returns:
App Summary
Pss(KB)
------
Java Heap: 0
Native Heap: 0
Code: 0
Stack: 0
Graphics: 0
Private Other: 0
System: 0
TOTAL: 0 TOTAL SWAP PSS: 0
Objects
Views: 20 ViewRootImpl: 1
AppContexts: 3 Activities: 1
Assets: 2 AssetManagers: 2
Local Binders: 10 Proxy Binders: 16
Parcel memory: 6 Parcel count: 25
Death Recipients: 0 OpenSSL Sockets: 0
WebViews: 0
myapp
It has an Activities entry, but Pss is 0.
And when I dumpsys meminfo a system app like Settings:
App Summary
Pss(KB)
------
Java Heap: 3264
Native Heap: 5404
Code: 10172
Stack: 40
Graphics: 0
Private Other: 1916
System: 12551
TOTAL: 33347 TOTAL SWAP PSS: 0
Objects
Views: 110 ViewRootImpl: 1
AppContexts: 3 Activities: 1
Assets: 2 AssetManagers: 2
Local Binders: 29 Proxy Binders: 28
Parcel memory: 4 Parcel count: 18
Death Recipients: 0 OpenSSL Sockets: 0
WebViews: 0
settings
This question already has answers here:
is nice() used to change the thread priority or the process priority?
(3 answers)
Closed 1 year ago.
On a Unix system, you can run a process at lower CPU "priority" (pedantically, it does not change the thing that is called the priority, but rather influences what share of available CPU time is used, which is "priority" in the general sense) using the nice command:
nice program
And you could use that to run a JVM process:
nice java -jar program.jar
The Java program run by that JVM process will start multiple threads.
Does the nice change affect the scheduling of those Java threads? That is, will the Java threads have a lower CPU priority when run as
nice java -jar program.jar
than when run as
java -jar program.jar
In general, this will be system dependent, so I am interested in the Linux case.
According to what ps reports, niceness is applied to Java threads. I ran this quick test with a Java application that waits for user input:
Start process with : nice -n 19 java Main
Output of ps -m -l 20746
F S UID PID PPID C PRI NI ADDR SZ WCHAN TTY TIME CMD
0 - 1000 20746 10006 0 - - - 1739135 - pts/2 0:00 java Main
0 S 1000 - - 0 99 19 - - futex_ - 0:00 -
1 S 1000 - - 0 99 19 - - wait_w - 0:00 -
1 S 1000 - - 0 99 19 - - futex_ - 0:00 -
1 S 1000 - - 0 99 19 - - futex_ - 0:00 -
Start process with : nice -n 15 java Main
Output of ps -m -l 21488
F S UID PID PPID C PRI NI ADDR SZ WCHAN TTY TIME CMD
0 - 1000 21488 10006 0 - - - 1722494 - pts/2 0:00 java Main
0 S 1000 - - 0 95 15 - - futex_ - 0:00 -
1 S 1000 - - 0 95 15 - - wait_w - 0:00 -
1 S 1000 - - 0 95 15 - - futex_ - 0:00 -
1 S 1000 - - 0 95 15 - - futex_ - 0:00 -
The NI column seems to reflect what I passed to nice and the priority changes accordingly too. I got the process ID (20746, 21488) using jps.
Note that running jstack 21488 for example will not give the above information.
I ran the above on Ubuntu 16.04 LTS (64bit)
Actually, niceness is a property of the application according to POSIX.1. Here is a more detailed post: is nice() used to change the thread priority or the process priority?
Java is not special. It's just a process, and the OS sets its "niceness" the same way as with any other process.
On Linux, Java threads are implemented using native threads, so again, "niceness" is subject to OS controls in the same way as any other native thread.
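The ps experiment above can also be reproduced from inside the JVM: on Linux, the nice value is field 19 of /proc/<pid>/task/<tid>/stat, so a small program can list the niceness of each of its own threads. A minimal sketch, Linux-only and assuming the standard /proc stat layout:

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ThreadNice {
    // Nice value is field 19 of a stat file; after stripping everything up
    // to the ')', fields restart at 3 (state), so nice sits at index 16.
    static int niceOf(Path statFile) throws IOException {
        String stat = new String(Files.readAllBytes(statFile));
        String[] f = stat.substring(stat.lastIndexOf(')') + 2).split("\\s+");
        return Integer.parseInt(f[16]);
    }

    public static void main(String[] args) throws Exception {
        // Spawn an extra thread so there is more than one task to report on.
        Thread t = new Thread(() -> {
            try { Thread.sleep(2000); } catch (InterruptedException ignored) { }
        });
        t.start();
        try (DirectoryStream<Path> tasks =
                 Files.newDirectoryStream(Paths.get("/proc/self/task"))) {
            for (Path tid : tasks) {
                System.out.println("tid " + tid.getFileName()
                        + " nice=" + niceOf(tid.resolve("stat")));
            }
        }
    }
}
```

Run it as nice -n 15 java ThreadNice and every tid should report nice=15, matching what ps -m -l shows.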
I have started using Cassandra in the last few days and here is what I am trying to do.
I have about 2 million+ objects which maintain user profiles. I convert these objects to JSON, compress them, and store them in a blob column. The average compressed JSON size is about 10 KB. This is how my table looks in Cassandra:
Table:
dev.userprofile (uid varchar primary key, profile blob);
Select Query:
select profile from dev.userprofile where uid='';
Update Query:
update dev.userprofile set profile='<bytebuffer>' where uid = '<uid>'
Every hour, I get events from a queue which I apply to my userprofile objects. Each event corresponds to one userprofile object. I get about 1 million such events, so I have to update around 1M userprofile objects within a short time, i.e. update the object in my application, compress the JSON and update the Cassandra blob. I have to finish updating all 1 million user profile objects preferably in a few minutes, but I notice it's taking longer now.
While running my application, I notice that I can update around 400 profiles/second on average. I already see a lot of CPU iowait - 70%+ on the Cassandra instance. Also, the load is initially pretty high, around 16 (on an 8 vcpu instance), and then drops off to around 4.
What am I doing wrong? When I was updating smaller objects of size 2 KB, Cassandra operations/sec was much faster - I was able to get about 3,000 ops/sec. Any thoughts on how I should improve the performance?
<dependency>
<groupId>com.datastax.cassandra</groupId>
<artifactId>cassandra-driver-core</artifactId>
<version>3.1.0</version>
</dependency>
<dependency>
<groupId>com.datastax.cassandra</groupId>
<artifactId>cassandra-driver-extras</artifactId>
<version>3.1.0</version>
</dependency>
I just have one node of Cassandra set up on an m4.2xlarge AWS instance for testing:
Single node Cassandra instance
m4.2xlarge aws ec2
500 GB General Purpose (SSD)
IOPS - 1500 / 10000
nodetool cfstats output
Keyspace: dev
Read Count: 688795
Read Latency: 27.280683695439137 ms.
Write Count: 688780
Write Latency: 0.010008401811899301 ms.
Pending Flushes: 0
Table: userprofile
SSTable count: 9
Space used (live): 32.16 GB
Space used (total): 32.16 GB
Space used by snapshots (total): 0 bytes
Off heap memory used (total): 13.56 MB
SSTable Compression Ratio: 0.9984539538554672
Number of keys (estimate): 2215817
Memtable cell count: 38686
Memtable data size: 105.72 MB
Memtable off heap memory used: 0 bytes
Memtable switch count: 6
Local read count: 688807
Local read latency: 29.879 ms
Local write count: 688790
Local write latency: 0.012 ms
Pending flushes: 0
Bloom filter false positives: 47
Bloom filter false ratio: 0.00003
Bloom filter space used: 7.5 MB
Bloom filter off heap memory used: 7.5 MB
Index summary off heap memory used: 2.07 MB
Compression metadata off heap memory used: 3.99 MB
Compacted partition minimum bytes: 216 bytes
Compacted partition maximum bytes: 370.14 KB
Compacted partition mean bytes: 5.82 KB
Average live cells per slice (last five minutes): 1.0
Maximum live cells per slice (last five minutes): 1
Average tombstones per slice (last five minutes): 1.0
Maximum tombstones per slice (last five minutes): 1
nodetool cfhistograms output
Percentile SSTables Write Latency Read Latency Partition Size Cell Count
(micros) (micros) (bytes)
50% 3.00 9.89 2816.16 4768 2
75% 3.00 11.86 43388.63 8239 2
95% 4.00 14.24 129557.75 14237 2
98% 4.00 20.50 155469.30 17084 2
99% 4.00 29.52 186563.16 20501 2
Min 0.00 1.92 61.22 216 2
Max 5.00 74975.55 4139110.98 379022 2
Dstat output
---load-avg--- --io/total- ---procs--- ------memory-usage----- ---paging-- -dsk/total- ---system-- ----total-cpu-usage---- -net/total-
1m 5m 15m | read writ|run blk new| used buff cach free| in out | read writ| int csw |usr sys idl wai hiq siq| recv send
12.8 13.9 10.6|1460 31.1 |1.0 14 0.2|9.98G 892k 21.2G 234M| 0 0 | 119M 3291k| 63k 68k| 1 1 26 72 0 0|3366k 3338k
13.2 14.0 10.7|1458 28.4 |1.1 13 1.5|9.97G 884k 21.2G 226M| 0 0 | 119M 3278k| 61k 68k| 2 1 28 69 0 0|3396k 3349k
12.7 13.8 10.7|1477 27.6 |0.9 11 1.1|9.97G 884k 21.2G 237M| 0 0 | 119M 3321k| 69k 72k| 2 1 31 65 0 0|3653k 3605k
12.0 13.7 10.7|1474 27.4 |1.1 8.7 0.3|9.96G 888k 21.2G 236M| 0 0 | 119M 3287k| 71k 75k| 2 1 36 61 0 0|3807k 3768k
11.8 13.6 10.7|1492 53.7 |1.6 12 1.2|9.95G 884k 21.2G 228M| 0 0 | 119M 6574k| 73k 75k| 2 2 32 65 0 0|3888k 3829k
Edit
I switched to LeveledCompactionStrategy and disabled compression on sstables, but I don't see a big improvement:
There was a bit of improvement in profiles/sec updated - it's now 550-600 profiles/sec - but the CPU spikes remain, i.e. the iowait.
gcstats
Interval (ms) Max GC Elapsed (ms)Total GC Elapsed (ms)Stdev GC Elapsed (ms) GC Reclaimed (MB) Collections Direct Memory Bytes
755960 83 3449 8 73179796264 107 -1
dstats
---load-avg--- --io/total- ---procs--- ------memory-usage----- ---paging-- -dsk/total- ---system-- ----total-cpu-usage---- -net/total-
1m 5m 15m | read writ|run blk new| used buff cach free| in out | read writ| int csw |usr sys idl wai hiq siq| recv send
7.02 8.34 7.33| 220 16.6 |0.0 0 1.1|10.0G 756k 21.2G 246M| 0 0 | 13M 1862k| 11k 13k| 1 0 94 5 0 0| 0 0
6.18 8.12 7.27|2674 29.7 |1.2 1.5 1.9|10.0G 760k 21.2G 210M| 0 0 | 119M 3275k| 69k 70k| 3 2 83 12 0 0|3906k 3894k
5.89 8.00 7.24|2455 314 |0.6 5.7 0|10.0G 760k 21.2G 225M| 0 0 | 111M 39M| 68k 69k| 3 2 51 44 0 0|3555k 3528k
5.21 7.78 7.18|2864 27.2 |2.6 3.2 1.4|10.0G 756k 21.2G 266M| 0 0 | 127M 3284k| 80k 76k| 3 2 57 38 0 0|4247k 4224k
4.80 7.61 7.13|2485 288 |0.1 12 1.4|10.0G 756k 21.2G 235M| 0 0 | 113M 36M| 73k 73k| 2 2 36 59 0 0|3664k 3646k
5.00 7.55 7.12|2576 30.5 |1.0 4.6 0|10.0G 760k 21.2G 239M| 0 0 | 125M 3297k| 71k 70k| 2 1 53 43 0 0|3884k 3849k
5.64 7.64 7.15|1873 174 |0.9 13 1.6|10.0G 752k 21.2G 237M| 0 0 | 119M 21M| 62k 66k| 3 1 27 69 0 0|3107k 3081k
You can notice the CPU spikes.
My main concern is iowait before I increase the load further. Is there anything specific I should be looking for that's causing this? Because 600 profiles/sec (i.e. 600 reads + writes) seems low to me.
Can you try LeveledCompactionStrategy? With 1:1 reads/writes on large objects like this, the IO saved on reads will probably counter the IO spent on the more expensive compactions.
If you're already compressing the data before sending it, you should turn off compression on the table. It's breaking the data into 64 KB chunks which will be largely dominated by only 6 values that won't get much compression (as shown in the horrible SSTable Compression Ratio: 0.9984539538554672).
ALTER TABLE dev.userprofile
WITH compaction = { 'class' : 'LeveledCompactionStrategy' }
AND compression = { 'sstable_compression' : '' };
400 profiles/second is very, very slow though, and there may be some work to do on your client that could potentially be the bottleneck as well. If you have a load of 4 on an 8-core system, it may not be Cassandra slowing things down. Make sure you're parallelizing your requests and using them asynchronously; sending requests sequentially is a common issue.
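To illustrate the "parallelize and go asynchronous" point: instead of blocking on each update, keep a bounded number of requests in flight and throttle with a semaphore. In the sketch below, updateAsync is a hypothetical stand-in for the DataStax driver's session.executeAsync (it just simulates a round trip), but the submission pattern is the same:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class AsyncUpdates {
    // Hypothetical stand-in for session.executeAsync(statement); a real
    // client would get the driver's future back instead.
    static CompletableFuture<Void> updateAsync(int profileId, ExecutorService io) {
        return CompletableFuture.runAsync(() -> {
            try { TimeUnit.MILLISECONDS.sleep(2); }   // pretend network round trip
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }, io);
    }

    // Submit `total` updates with at most `concurrency` in flight at once.
    static int runUpdates(int total, int concurrency) throws InterruptedException {
        ExecutorService io = Executors.newFixedThreadPool(16);
        Semaphore inFlight = new Semaphore(concurrency);
        AtomicInteger done = new AtomicInteger();
        CountDownLatch latch = new CountDownLatch(total);
        for (int i = 0; i < total; i++) {
            inFlight.acquire();                       // backpressure, not unbounded queueing
            updateAsync(i, io).whenComplete((v, err) -> {
                inFlight.release();
                done.incrementAndGet();
                latch.countDown();
            });
        }
        latch.await();
        io.shutdown();
        return done.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("completed " + runUpdates(1000, 128) + " updates");
    }
}
```

The semaphore is the important part: it caps concurrent requests so the client neither serializes them nor floods the node, and the cap is the knob to tune against iowait.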
With larger blobs there is going to be an impact on GCs, so monitoring them and adding that information can be helpful. I would be surprised for 10 KB objects to affect it that much, but it's something to look out for and may require more JVM tuning.
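On the client side, GC activity can be spot-checked without external tools via the standard management beans; a minimal sketch (it reports whatever collectors the running JVM happens to expose):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcStats {
    // Print cumulative collection counts and total pause time for this JVM.
    public static void main(String[] args) {
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s: collections=%d, totalPauseMs=%d%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}
```

Logging these deltas alongside the profiles/sec numbers makes it easy to see whether GC pauses correlate with the throughput dips; on the server side, nodetool gcstats (shown above) gives the equivalent view.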
If that helps, from there I would recommend tuning the heap and upgrading to at least 3.7 or the latest in the 3.0 line.
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 9 years ago.
Running Ubuntu 12.04.3 LTS with 32 cores and 244 GB of RAM - it's the big Amazon EC2 memory instance - and Java 1.7u25.
My Java process is running with -Xmx226g.
I'm trying to create a really large local cache using CQEngine, and so far it's blazingly fast with 30,000,000 records. Of course I will add an eviction policy that will allow garbage collection to clean up old evicted objects, but I'm really trying to push the limits here :)
When looking at jvisualvm, the total heap is at about 180 GB, which dies 40 GB too soon. I should be able to squeeze out a bit more.
Not that I don't want the kernel to kill a process if it runs out of resources, but I think it's killing it too early and I want to squeeze the memory usage as much as possible.
The ulimit output is as follows...
ubuntu#ip-10-156-243-111:/var/log$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 1967992
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 1967992
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
The kern.log output is...
340 total pagecache pages
0 pages in swap cache
Swap cache stats: add 0, delete 0, find 0/0
Free swap = 0kB
Total swap = 0kB
63999984 pages RAM
1022168 pages reserved
649 pages shared
62830686 pages non-shared
[ pid ] uid tgid total_vm rss cpu oom_adj oom_score_adj name
[ 505] 0 505 4342 93 9 0 0 upstart-udev-br
[ 507] 0 507 5456 198 2 -17 -1000 udevd
[ 642] 0 642 5402 155 28 -17 -1000 udevd
[ 643] 0 643 5402 155 29 -17 -1000 udevd
[ 739] 0 739 3798 49 10 0 0 upstart-socket-
[ 775] 0 775 1817 124 25 0 0 dhclient3
[ 897] 0 897 12509 152 10 -17 -1000 sshd
[ 949] 101 949 63430 91 9 0 0 rsyslogd
[ 990] 102 990 5985 90 8 0 0 dbus-daemon
[ 1017] 0 1017 3627 40 9 0 0 getty
[ 1024] 0 1024 3627 41 10 0 0 getty
[ 1029] 0 1029 3627 43 6 0 0 getty
[ 1030] 0 1030 3627 41 3 0 0 getty
[ 1032] 0 1032 3627 41 1 0 0 getty
[ 1035] 0 1035 1083 34 1 0 0 acpid
[ 1036] 0 1036 4779 49 5 0 0 cron
[ 1037] 0 1037 4228 40 8 0 0 atd
[ 1045] 0 1045 3996 57 3 0 0 irqbalance
[ 1084] 0 1084 3627 43 2 0 0 getty
[ 1085] 0 1085 3189 39 11 0 0 getty
[ 1087] 103 1087 46916 300 0 0 0 whoopsie
[ 1159] 0 1159 20490 215 0 0 0 sshd
[ 1162] 0 1162 1063575 263 15 0 0 console-kit-dae
[ 1229] 0 1229 46648 153 4 0 0 polkitd
[ 1318] 1000 1318 20490 211 10 0 0 sshd
[ 1319] 1000 1319 6240 1448 1 0 0 bash
[ 1816] 1000 1816 70102543 62010032 4 0 0 java
[ 1947] 0 1947 20490 214 6 0 0 sshd
[ 2035] 1000 2035 20490 210 0 0 0 sshd
[ 2036] 1000 2036 6238 1444 13 0 0 bash
[ 2179] 1000 2179 13262 463 2 0 0 vi
Out of memory: Kill process 1816 (java) score 987 or sacrifice child
Killed process 1816 (java) total-vm:280410172kB, anon-rss:248040128kB, file-rss:0kB
The kern.log clearly states that it killed my process because it ran out of memory. But like I said, I think I can squeeze it a bit more. Are there any settings I need to change to allow me to use the 226 GB allocated to Java?