Given this heap dump:
size no. of obj class
515313696 2380602 char[]
75476832 614571 * ConstMethodKlass
57412368 2392182 java.lang.String
44255544 614571 * MethodKlass
33836872 70371 * ConstantPoolKlass
28034704 70371 * InstanceKlassKlass
26834392 349363 java.lang.Object[]
25853848 256925 java.util.HashMap$Entry[]
24224240 496587 * SymbolKlass
19627024 117963 byte[]
18963232 61583 * ConstantPoolCacheKlass
18373920 120113 int[]
15239352 634973 java.util.HashMap$Entry
11789056 92102 ph.com.my.class.Person
And only one class is from my app: ph.com.my.class.Person. The class definition is as follows:
public class Person {
private String f_name;
private String l_name;
}
In the heap dump, does the Person size (11789056) include the memory that the two String fields occupy? Or will f_name and l_name be counted under the String class instead (in this case, size 57412368)?
UPDATED - added a follow-up question:
So let's say each instance of:
f_name size is 30
l_name size is 20
Person size is 75
If there were 10 instances of Person, there would be:
10 * (30+20) = 500
10 * 75 = 750
Will the 500 be counted under String or char[]? And correspondingly, will the 750 be counted under Person?
The size of an object in the heap dump is the number of bytes allocated as a block on the heap to hold that instance. It never includes the bytes of the whole graph of objects reachable through it; in general, that could easily mean that the size of a single object is the entire heap. So in your case the Person size takes into account the two references, but not the String instances themselves. Note also that even the String size doesn't reflect the size of the represented text -- that is stored in a char[]. And char[] instances are shared between strings, so the story isn't that simple.
Each entry's count and size cover those objects alone. If you used -histo instead of -histo:live, this includes all objects on the heap, even ones that are no longer referenced.
Note: each String has a char[], and the JVM itself uses quite a few of these. A String's size is the size of the String object itself, not of its char[].
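If you want to see the shallow versus reachable sizes for yourself, the OpenJDK JOL (Java Object Layout) library can print both. A minimal sketch, assuming jol-core is on the classpath and a two-argument constructor is added to Person for illustration:
import org.openjdk.jol.info.ClassLayout;
import org.openjdk.jol.info.GraphLayout;
public class PersonSize {
    public static void main(String[] args) {
        Person p = new Person("John", "Doe"); // hypothetical constructor, for illustration only
        // Shallow size: object header plus the two reference fields; this is what the histogram counts
        System.out.println(ClassLayout.parseInstance(p).toPrintable());
        // Reachable size: includes the two Strings and their backing char[] arrays
        System.out.println("Reachable graph: " + GraphLayout.parseInstance(p).totalSize() + " bytes");
    }
}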
I was trying to measure the memory consumption of some code snippets. After some searching, I realized that ThreadMXBean.getThreadAllocatedBytes(long id) can be used for this, so I tested the method with the following code:
import java.lang.management.ManagementFactory;
import com.sun.management.ThreadMXBean; // the com.sun variant declares getThreadAllocatedBytes

ThreadMXBean threadMXBean = (ThreadMXBean) ManagementFactory.getThreadMXBean();
long id = Thread.currentThread().getId();
// new Long(0);
long beforeMemUsage = threadMXBean.getThreadAllocatedBytes(id);
long afterMemUsage = 0;
{
    // put the code you want to measure here
    for (int i = 0; i < 10; i++) {
        new Long(i);
    }
}
afterMemUsage = threadMXBean.getThreadAllocatedBytes(id);
System.out.println(afterMemUsage - beforeMemUsage);
I ran this code with different iteration counts in the for loop (0, 1, 10, 20, and 30). The results are as follows:
0 Long: 48 bytes
1 Long: 456 bytes
10 Long: 672 bytes
20 Long: 912 bytes
30 Long: 1152 bytes
The differences between 1 and 10, between 10 and 20, and between 20 and 30 are easy to explain, because the size of a Long object is 24 bytes (e.g., (912 - 672) / 10 = 24). But I was confused by the huge difference between 0 and 1.
Actually, I guessed this was caused by class loading, so I uncommented the new Long(0) line, and the results were as follows:
0 Long: 48 bytes
1 Long: 72 bytes
10 Long: 288 bytes
20 Long: 528 bytes
30 Long: 768 bytes
The results seem to confirm my guess. However, as I understand it, class structure information is stored in the Method Area, which is not part of the heap. As the Javadoc of ThreadMXBean.getThreadAllocatedBytes(long id) indicates, it returns the total amount of memory allocated in heap memory. Have I missed something?
The tested JVM version is:
java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
Thanks!
The first invocation of new Long(0) causes the resolution of the constant pool entry referenced by the new bytecode. While resolving CONSTANT_Class_info for the first time, the JVM loads the referenced class, java.lang.Long.
ClassLoader.loadClass is implemented in Java, and it can certainly allocate Java objects. For instance, the getClassLoadingLock method creates a new lock object and a new entry in parallelLockMap:
protected Object getClassLoadingLock(String className) {
Object lock = this;
if (parallelLockMap != null) {
Object newLock = new Object();
lock = parallelLockMap.putIfAbsent(className, newLock);
if (lock == null) {
lock = newLock;
}
}
return lock;
}
Also, when doing a class name lookup in the system dictionary, the JVM creates a new String object.
I used async-profiler to record all heap allocations the JVM makes when loading the java.lang.Long class, rendered as a clickable, interactive flame graph.
The graph includes 13 samples, one for each allocated object. The type of an allocated object is not shown, but it can easily be guessed from the context (the stack trace).
Green denotes Java stack frames;
yellow denotes VM stack frames.
Note that each java_lang_String::basic_create() (and similar) allocates two objects: an instance of java.lang.String and its backing char[] array.
The graph is produced by the following test program:
import one.profiler.AsyncProfiler;
public class ProfileHeapAlloc {
public static void main(String[] args) throws Exception {
AsyncProfiler profiler = AsyncProfiler.getInstance();
// Dry run to skip allocations caused by AsyncProfiler initialization
profiler.start("_ZN13SharedRuntime19dtrace_object_allocEP7oopDesci", 0);
profiler.stop();
// Real profiling session
profiler.start("_ZN13SharedRuntime19dtrace_object_allocEP7oopDesci", 0);
new Long(0);
profiler.stop();
profiler.execute("file=alloc.svg");
}
}
How to run:
java -Djava.library.path=/path/to/async-profiler -XX:+DTraceAllocProbes ProfileHeapAlloc
Here _ZN13SharedRuntime19dtrace_object_allocEP7oopDesci is the mangled name of the SharedRuntime::dtrace_object_alloc() function, which the JVM calls for every heap allocation whenever the DTraceAllocProbes flag is on.
I have the following methods:
public void dump(Integer value){
    // do something with value
}
public void dump(Integer[] values){
    // do something with values
}
I want to call dump(null). It doesn't matter which cast I choose, because both overloads work as intended with null:
dump((Integer[]) null);
dump((Integer) null);
Which one is better to use resource-wise (less RAM, CPU usage)?
I would think that Java would preallocate 4 bytes for a null Integer and 8 bytes for a null Double; is this true?
What about other (more complicated) objects that are nulled? If there is no constructor call, how are the nulls stored?
Which one is better to use resource-wise (less RAM, CPU usage)?
It won't make any measurable difference.
I would think that Java would preallocate 4 bytes for a null Integer and 8 bytes for a null Double; is this true?
No. Null is null. There is only one format. You are confusing objects and references here: null is a reference, and all references are the same size.
What about other (more complicated) objects that are nulled
There is no such thing as 'objects that are nulled'.
If there is no constructor call, how are the nulls stored?
As nulls.
The type of the cast doesn't affect the memory used by null. In a decompiled class:
aconst_null
checkcast #4 // class java/lang/Integer
invokevirtual #5 // Method dump:(Ljava/lang/Integer;)V
vs
aconst_null
checkcast #6 // class "[Ljava/lang/Integer;"
invokevirtual #7 // Method dump:([Ljava/lang/Integer;)V
so both are the same in terms of memory usage.
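If you want to reproduce this disassembly, here is a minimal sketch (the class name Dump is an assumption); compile it and run javap -c Dump:
public class Dump {
    public void dump(Integer value) { }
    public void dump(Integer[] values) { }

    public static void main(String[] args) {
        Dump d = new Dump();
        d.dump((Integer) null);   // overload chosen at compile time: dump(Integer)
        d.dump((Integer[]) null); // overload chosen at compile time: dump(Integer[])
    }
}
The casts exist only to pick the overload; at runtime, the null reference is passed bit-for-bit unchanged.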
I believe they cost the same, because you do not have actual objects in either case (neither an Integer object nor an Integer[] object, which would certainly have different sizes in memory).
What you have here is just a 4-byte reference that points to nothing, since it is null. From this point of view, there is no difference between the types you cast these nulls to.
No difference.
Nope. Java doesn't allocate any space at all for a null object, since null is by definition a reference to no object at all, and no object at all takes no space at all. The reference itself will always be 32 or 64 bits long.
What do you mean by "stored"? A reference variable holds a pointer value, which is always the same size irrespective of the type or size of the object it points to; it is the size of an address. I haven't looked, but I'll bet the null reference is a special address value, such as 0, that by definition points nowhere.
Casting a reference does not change the reference. It still has the exact same value, bit by bit, that it has without the cast.
The only exception to all this is the 32-bit reference optimization for 64-bit Java (compressed oops). Normally you'd expect all references in 64-bit Java to be, well, 64 bits long, but you can switch the ability to hold references in 32 bits on or off if certain assumptions about the program hold. Either way, once the JVM decides how wide a reference is, 32 or 64 bits, that width holds throughout the program.
So bottom line, no, casting a reference to some other type has no effect on the memory consumed by the reference.
The memory consumed by the object can go to zero if all references to it fall out of scope or become null.
All reference values (and both boxed Integers and arrays are reference values) have the same representation, so there is no difference in the call at all. If there is a difference in cost, it would be due to the implementation of the function that is called.
Which one is better to use resource-wise (less RAM, CPU usage)?
If you ever need to ask yourself this question, you can always write a simple test.
public class Main {
private static Integer x = 0;
private static final int n = 500000000;
public static void dump(Integer value){
x += 1;
}
public static void dump(Integer[] values){
x += 1;
}
public static void main(String[] args) {
long t = System.currentTimeMillis();
for (int i = 0; i < n; i++) {
dump((Integer[]) null);
}
long array_time = System.currentTimeMillis() - t;
t = System.currentTimeMillis();
for (int i = 0; i < n; i++) {
dump((Integer) null);
}
long int_time = System.currentTimeMillis() - t;
System.out.println("array_time: " + array_time + " ms");
System.out.println("int_time: " + int_time + " ms");
}
}
Output:
array_time: 2578 ms
int_time: 2045 ms
Not a significant difference.
I created an ArrayList of 1 million MyItem objects, and the memory consumed was 106 MB (checked via Task Manager). But after adding the same list to two more lists through the addAll() method, it takes 259 MB. My question is: I have added only references to the lists; no new objects are created after that first 1 million. Why does the memory consumption increase, even though LinkedList is used (which doesn't require contiguous memory blocks, so no reallocation should be needed)?
How can I achieve this efficiently? Data passes through various lists in my program and consumes more than 1 GB of memory. A similar scenario is presented above.
public class MyItem{
private String s;
private int id;
private String search;
public MyItem(String s, int id) {
this.s = s;
this.id = id;
}
public String getS() {
return s;
}
public int getId() {
return id;
}
public String getSearchParameter() {
return search;
}
public void setSearchParameter(String s) {
search = s;
}
}
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;
import java.util.Scanner;

public class Main{
public static void main(String args[]) {
List<MyItem> l = new ArrayList<>();
List<MyItem> list = new LinkedList<>();
List<MyItem> list1 = new LinkedList<>();
for (int i = 0; i < 1000000 ; i++) {
MyItem m = new MyItem("hello "+i ,i+1);
m.setSearchParameter(m.getS());
l.add(i,m);
}
list.addAll(l);
list1.addAll(l);
list1.addAll(list);
Scanner s = new Scanner(System.in);
s.next(); // just to keep the process from terminating
}
}
LinkedList is a doubly-linked list, so elements in the list are represented by nodes, and each node contains 3 references.
From Java 8:
private static class Node<E> {
E item;
Node<E> next;
Node<E> prev;
Node(Node<E> prev, E element, Node<E> next) {
this.item = element;
this.next = next;
this.prev = prev;
}
}
Since you use a lot of memory, you may not be running with compressed OOPs, so references might be 64-bit, i.e. 8 bytes each.
With an object header of 16 bytes plus 3 references of 8 bytes each, a node occupies 40 bytes. With 1 million elements, that would be 40 MB.
Two lists make 80 MB; then remember that Java memory is segmented into pools and objects (the nodes) get moved around, and your additional memory consumption of 153 MB now seems about right.
Note: An ArrayList would only use 8 bytes per element, not 40, and if you preallocate the backing array, which you can do since you know the size, you can save a lot of memory that way.
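For example, a minimal sketch of that preallocation, reusing the names from the question:
// Sizing the ArrayList at construction avoids repeated growth of the backing array.
List<MyItem> copy = new ArrayList<>(1_000_000);
copy.addAll(l); // copies only references, with no per-element Node objects, unlike LinkedList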
Any time you call LinkedList.addAll, behind the scenes it creates a LinkedList.Node for each added element, so here you created 3 million such nodes, which is not free. Indeed:
This object has 3 references. A reference is 4 bytes on a 32-bit JVM, and also on a 64-bit JVM with UseCompressedOops enabled (-XX:+UseCompressedOops), which is the default for heaps smaller than 32 GB in Java 7 and later; it is 8 bytes on a 64-bit JVM with UseCompressedOops disabled (-XX:-UseCompressedOops). So depending on your configuration, the references take 12 or 24 bytes.
Then we add the size of the object header, which is 8 bytes on a 32-bit JVM and 16 bytes on a 64-bit JVM. Depending on your configuration, that adds another 8 or 16 bytes.
So to summarize, a node takes:
20 bytes per instance on a 32-bit JVM
28 bytes per instance on a 64-bit JVM with UseCompressedOops enabled
40 bytes per instance on a 64-bit JVM with UseCompressedOops disabled
Since you call addAll three times with 1 million objects each on LinkedLists, this gives:
60 MB on a 32-bit JVM
84 MB on a 64-bit JVM with UseCompressedOops enabled
120 MB on a 64-bit JVM with UseCompressedOops disabled
The rest is probably objects not yet collected by the garbage collector. You should try calling System.gc() after loading your ArrayList to get the real size, and do the same after loading your LinkedLists.
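A minimal sketch of that measurement, assuming it runs inside the question's main method:
// System.gc() is only a hint to the JVM, so treat the figure as approximate.
System.gc();
long used = Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory();
System.out.println("Approx. used after GC: " + used + " bytes");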
If you want to get the size of a given object, you can use SizeOf.
If you use a 64-bit JVM and want to know whether UseCompressedOops is enabled, launch your java command in a terminal with only the -X options, adding -XX:+PrintFlagsFinal, and grep for UseCompressedOops. For example, if my command is java -Xms4g -Xmx4g -XX:MaxPermSize=4g -cp <something> <my-class>, I would launch java -Xms4g -Xmx4g -XX:MaxPermSize=4g -XX:+PrintFlagsFinal | grep UseCompressedOops. The beginning of the output should look like this:
bool UseCompressedOops := true {lp64_product}
...
In this case, the UseCompressedOops flag is enabled.
Consider the following two segments of Java code:
Integer x=new Integer(100);
Integer y=x;
Integer z=x;
System.out.println("Used memory (bytes): " +
(Runtime.getRuntime().totalMemory()-Runtime.getRuntime().freeMemory()));
for which the memory usage on my system was: Used memory (bytes): 287848
and
int a=100;
int b=a;
int c=a;
System.out.println("Used memory (bytes): " +
(Runtime.getRuntime().totalMemory()-Runtime.getRuntime().freeMemory()));
for which the memory usage on my system was: Used memory (bytes): 287872
and the following
Integer x=new Integer(100);
System.out.println("Used memory (bytes): " +
(Runtime.getRuntime().totalMemory()-Runtime.getRuntime().freeMemory()));
and
int a=100;
System.out.println("Used memory (bytes): " +
(Runtime.getRuntime().totalMemory()-Runtime.getRuntime().freeMemory()));
In both of the above cases, the memory usage on my system was exactly the same: Used memory (bytes): 287872
The statement
System.out.println("Used memory (bytes): " +
(Runtime.getRuntime().totalMemory()-Runtime.getRuntime().freeMemory()));
will display the total memory currently in use [total available memory minus currently free memory], in bytes.
I verified through the methods above that in the first case the memory usage (287848) was lower than in the second (287872), while in the remaining two cases it was exactly the same (287872). Of course, that is what one would expect: in the first case, y and z contain copies of the reference held in x, and all three (x, y, and z) point to the same object (location), which means the first case is better than the second; and the remaining two cases are equivalent statements with exactly the same memory usage (287872). But if that is so, then primitive data types in Java would seem useless and avoidable, even though they were designed for better memory usage and CPU utilization. So why do primitive data types exist in Java?
A question somewhat similar to this one was already posted here, but it did not cover this scenario.
I wouldn't pay attention to Runtime.freeMemory -- it's very ambiguous (does it include unused stack space? PermGen space? gaps between heap objects that are too small to be used?), and giving any precise measurement without halting all threads is impossible.
Integers are necessarily less space-efficient than ints, because just the reference to an Integer takes 32 bits (64 on a 64-bit JVM without compressed pointers).
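As a rough worked example (assuming a 64-bit HotSpot JVM with compressed oops): an int costs 4 bytes wherever it lives, while an Integer costs a 4-byte reference plus a separate heap object of about 16 bytes (a 12-byte header plus the 4-byte int value), i.e. roughly 20 bytes before alignment padding.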
If you really want to test it empirically, have many threads recurse deeply and then wait. As in
class TestThread extends Thread {
    private void recurse(int depth) {
        // unused locals: they exist only to enlarge each stack frame
        int a, b, c, d, e, f, g;
        if (depth < 100)
            recurse(depth + 1);
        for (;;) try {
            Thread.sleep(Long.MAX_VALUE);
        } catch (InterruptedException ignored) {}
    }

    @Override public void run() {
        recurse(0);
    }

    public static void main(String[] args) {
        for (int i = 0; i < 500; ++i)
            new TestThread().start();
    }
}
For a start, an Integer wraps an int, therefore Integer has to be at least as big as int.
From the docs (I really doubt this is necessary):
The Integer class wraps a value of the primitive type int in an object. An object of type Integer contains a single field whose type is int.
So obviously a primitive int is still being used.
Not only that, but objects have more overhead, and the most obvious piece is that when you use objects, your variable contains a reference to one:
Integer obj = new Integer(100);
int prim = 100;
i.e., obj stores a reference to an Integer object, which contains an int, whereas prim stores the value 100. That alone is enough to show that using Integer over int brings more overhead, and there is more overhead than just that.
The wrapper contains a primitive as a field, but it causes additional overhead because it's an object. The reference takes up space as well, but your example isn't really designed to show this.
The tests you designed aren't well suited for precise measurement, but since you used them, try this example instead:
public static void main(String[] args) {
int numInts = 100000;
Integer[] array = new Integer[numInts];
// int[] array = new int[numInts];
for(int i = 0; i < numInts; i++){
array[i] = i; //put some real data into the arrays using auto-boxing if needed
}
System.out.println("Used memory (bytes): " +
(Runtime.getRuntime().totalMemory()-Runtime.getRuntime().freeMemory()));
}
Now try it again, but uncomment the primitive line and comment out the wrapper line. You should see that the wrapper takes up much more memory.
In your first example, you have the equivalent of one Integer and two extra references.
Because Integer is an object, it is accessed through a reference and carries object overhead (header, methods, and so on).
By using int instead of Integer, you are copying the value 3 times.
You have a difference of 24 bytes, which is used for storing the headers and values of your extra 2 ints. That said, I wouldn't trust your test: the JVM can be somewhat unpredictable, and its garbage collection is quite dynamic. As for the memory required for a single Integer vs. an int, an Integer will take up more space because it is an object and thus carries more information.
Runtime.getRuntime().freeMemory(): taking a delta of this does not give you correct statistics, as there are many moving parts like garbage collection and other threads.
An Integer takes more memory than an int primitive.
Your test case is too simple to yield any conclusive result.
Any test case that takes less than 5 seconds doesn't mean anything.
You need to at least do something with the objects you create. The JVM can simply look at your code and not do anything at all, because your objects aren't ever used and you exit. (I can't say for certain what the JVM interpreter does, but the JIT will use escape analysis to optimize your entire test case into nothing.)
First of all, if you're looking for memory effectiveness, primitives are smaller because they are exactly the size they are. Wrapper objects are objects: they need to be garbage collected, and they carry fields and headers that have to be stored somewhere.
Primitives aren't "designed" to be more efficient; wrapper objects were designed to be more feature-friendly. You need primitives, because how else would you store a number?
If you really want to see the memory difference, take a real application. If you want to write one yourself, go ahead, but it will take some time. Alternatively, use a text editor to search-and-replace every single int declaration with Integer, and long with Long, etc., then look at the memory footprint. I wouldn't be surprised if your computer explodes.
From a programming point of view, use primitives when necessary and wrapper objects when necessary. When both are applicable, it's your preference; trust me, there aren't that many such cases.
http://www.javaspecialists.eu/archive/Issue193.html
This might help you understand/explore things a little bit more. An excellent article! Cheers!
If you look at the source code of java.lang.Integer, the value is stored as an int.
private int value;
Your test is not valid; that's all there is to it.
Proof:
When you run these tests, you'll get an AssertionError in the second test (because free memory drops, even if you stop resetting the memory field). If you try these tests with 10,000 loops, you'll get a StackOverflowError in both.
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.is;
import org.junit.Test;
public class TestRedundantIntegers {
private long memory;
@Test
public void whenRecursiveIntIsSet() {
memory = Runtime.getRuntime().totalMemory()-Runtime.getRuntime().freeMemory();
recurseInt(0, 100);
}
private void recurseInt(int depth, int someInt) {
int x = someInt;
assertThat(memory,is(Runtime.getRuntime().totalMemory()-Runtime.getRuntime().freeMemory()));
memory=Runtime.getRuntime().totalMemory()-Runtime.getRuntime().freeMemory();
if (depth < 1000)
recurseInt(depth + 1, x);
}
@Test
public void whenRecursiveIntegerIsSet() {
memory = Runtime.getRuntime().totalMemory()-Runtime.getRuntime().freeMemory();
recurseInteger(0, new Integer(100));
}
private void recurseInteger(int depth, Integer someInt) {
Integer x = someInt;
assertThat(memory,is(Runtime.getRuntime().totalMemory()-Runtime.getRuntime().freeMemory()));
memory=Runtime.getRuntime().totalMemory()-Runtime.getRuntime().freeMemory();
if (depth < 1000)
recurseInteger(depth + 1, x);
}
}
As for "where and when": use the non-primitive types where an Object is required, and the primitives everywhere else. For example, the types of a generic can't be primitive, so you can't use primitives with them. Even before generics were introduced, things like HashSet and HashMap couldn't store primitives.
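For instance, a small sketch of the boxing that generics force (standard library only; the class name is arbitrary):
import java.util.ArrayList;
import java.util.List;
public class BoxingDemo {
    public static void main(String[] args) {
        // List<int> would not compile: type arguments must be reference types,
        // so each int is auto-boxed to an Integer on insertion.
        List<Integer> list = new ArrayList<>();
        list.add(42);        // auto-boxing: effectively Integer.valueOf(42)
        int n = list.get(0); // auto-unboxing: effectively intValue()
        System.out.println(n);
    }
}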
The following code is causing an OutOfMemoryError: Java heap space for some 3 million rows.
The memory allocated to the JVM is 4 GB, using a 64-bit installation.
while (rs.next())
{
ArrayList<String> arrayList = new ArrayList<String>();
for (int i = 1; i <= columnCount; i++)
{
arrayList.add(rs.getString(i));
}
objOS.writeObject(arrayList);
}
The memory referenced by the ArrayList becomes eligible for garbage collection in each iteration of the while loop, and the JVM internally runs garbage collection before throwing an OutOfMemoryError due to heap space.
So why is the exception occurring?
Is objOS an ObjectOutputStream?
If so, then that's your problem: An ObjectOutputStream keeps a strong reference to every object that was ever written to it in order to avoid writing the same object twice (it will simply write a reference saying "that object that I wrote before with id x").
This means that you're effectively leaking all ArrayList instances.
You can reset that "cache" by calling reset() on your ObjectOutputStream. Since you don't seem to be making use of that cache between writeObject calls anyway, you could call reset() directly after the writeObject() call.
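A minimal sketch of that fix, reusing the loop from the question (rs, columnCount, and objOS as defined there):
while (rs.next())
{
    ArrayList<String> row = new ArrayList<String>();
    for (int i = 1; i <= columnCount; i++)
    {
        row.add(rs.getString(i));
    }
    objOS.writeObject(row);
    objOS.reset(); // drop the stream's back-references so each row can be garbage collected
}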
I agree with @Joachim.
The suggestion below is a myth:
In addition, it is recommended (as a good coding convention) not to declare any object inside a loop. Instead, declare it just before the loop starts and reuse the same reference for initialization. This makes your code use the same reference for each iteration and places less burden on the memory-release thread (i.e., garbage collection).
The Truth
I have edited this because I feel there may be many people who (like me before today) still believe that declaring an object inside a loop could harm memory management; that belief is wrong.
To demonstrate this, I have used the same code posted on Stack Overflow for this purpose.
Here is my code snippet:
package navsoft.advskill.test;
import java.util.ArrayList;
public class MemoryTest {
/**
* @param args
*/
public static void main(String[] args) {
/* Total number of processors or cores available to the JVM */
System.out.println("Available processors (cores): "
+ Runtime.getRuntime().availableProcessors());
/*
* Total amount of free memory available to the JVM
*/
long freeMemory = Runtime.getRuntime().freeMemory();
System.out.println("Free memory (bytes): "
+ freeMemory);
/*
* This will return Long.MAX_VALUE if there is no preset limit
*/
long maxMemory = Runtime.getRuntime().maxMemory();
/*
* Maximum amount of memory the JVM will attempt to use
*/
System.out.println("Maximum memory (bytes): "
+ (maxMemory == Long.MAX_VALUE ? "no limit" : maxMemory));
/*
* Total memory currently in use by the JVM
*/
System.out.println("Total memory (bytes): "
+ Runtime.getRuntime().totalMemory());
final int LIMIT_COUNTER = 1000000;
//System.out.println("Testing Only for print...");
System.out.println("Testing for Collection inside Loop...");
//System.out.println("Testing for Collection outside Loop...");
//ArrayList<String> arr;
for (int i = 0; i < LIMIT_COUNTER; ++i) {
//arr = new ArrayList<String>();
ArrayList<String> arr = new ArrayList<String>();
System.out.println("" + i + ". Occupied(OldFree - currentFree): "+ (freeMemory - Runtime.getRuntime().freeMemory()));
}
System.out.println("Occupied At the End: "+ (freeMemory - Runtime.getRuntime().freeMemory()));
System.out.println("End of Test");
}
}
The output clearly shows that there is no difference in how memory is occupied or freed whether you declare the object inside or outside the loop. So it is recommended to give the declaration as small a scope as possible.
My thanks to all the experts on Stack Overflow (especially @Miserable Variable) for guiding me on this.
Hope this clears your doubts too.