Is there any sizeof-like method in Java?

Is there any built-in method in Java to find the size of any datatype?
Is there any way to find size?

No. There is no such method in the standard Java SE class library.
The designers' view is that it is not needed in Java, since the language removes the need for an application1 to know about how much space needs to be reserved for a primitive value, an object or an array with a given number of elements.
You might think that a sizeof operator would be useful for people that need to know how much space their data structures take. However you can also get this information and more, simply and reliably using a Java memory profiler, so there is no need for a sizeof method.
Previous commenters made the point that sizeof(someType) would be more readable than 4. If you accept that readability argument, then the remedy is in your hands. Simply define a class like this ...
public class PrimitiveSizes {
    public static int sizeof(byte b)  { return 1; }
    public static int sizeof(short s) { return 2; }
    // etcetera
}
... and statically import it ...
import static PrimitiveSizes.*;
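A call site could then read like this (a hypothetical example, assuming PrimitiveSizes lives in a package named util so the static import resolves):
import static util.PrimitiveSizes.sizeof;

public class SizeofDemo {
    public static void main(String[] args) {
        // allocate room for ten shorts; reads better than the bare literal 2
        byte[] buffer = new byte[10 * sizeof((short) 0)];
        System.out.println(buffer.length); // 20
    }
}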
Or define some named constants; e.g.
public static final int SIZE_OF_INT = 4;
Or (Java 8 and later) use the Integer.BYTES constant, and so on.
Why haven't the Java designers implemented this in standard libraries? My guess is that:
they don't think there is a need for it,
they don't think there is sufficient demand for it, and
they don't think it is worth the effort.
There is also the issue that the next demand would be for a sizeof(Object o) method, which is fraught with technical difficulties.
The key word in the above is "they"!
1 - A programmer may need to know in order to design space efficient data structures. However, I can't imagine why that information would be needed in application code at runtime via a method call.

From the article in JavaWorld
A superficial answer is that Java does not provide anything like C's sizeof(). However,
let's consider why a Java programmer might occasionally want it.
A C programmer manages most datastructure memory allocations himself,
and sizeof() is indispensable for knowing memory block sizes to
allocate. Additionally, C memory allocators like malloc() do almost
nothing as far as object initialization is concerned: a programmer
must set all object fields that are pointers to further objects. But
when all is said and coded, C/C++ memory allocation is quite
efficient.
By comparison, Java object allocation and construction are tied
together (it is impossible to use an allocated but uninitialized
object instance). If a Java class defines fields that are references
to further objects, it is also common to set them at construction
time. Allocating a Java object therefore frequently allocates numerous
interconnected object instances: an object graph. Coupled with
automatic garbage collection, this is all too convenient and can make
you feel like you never have to worry about Java memory allocation
details.
Of course, this works only for simple Java applications. Compared with
C/C++, equivalent Java datastructures tend to occupy more physical
memory. In enterprise software development, getting close to the
maximum available virtual memory on today's 32-bit JVMs is a common
scalability constraint. Thus, a Java programmer could benefit from
sizeof() or something similar to keep an eye on whether his
datastructures are getting too large or contain memory bottlenecks.
Fortunately, Java reflection allows you to write such a tool quite
easily.
Before proceeding, I will dispense with some frequent but incorrect
answers to this article's question.
Fallacy: sizeof() is not needed because Java basic types' sizes are fixed
Yes, a Java int is 32 bits in all JVMs and on all platforms, but this
is only a language specification requirement for the
programmer-perceivable width of this data type. Such an int is
essentially an abstract data type and can be backed up by, say, a
64-bit physical memory word on a 64-bit machine. The same goes for
nonprimitive types: the Java language specification says nothing about
how class fields should be aligned in physical memory or that an array
of booleans couldn't be implemented as a compact bitvector inside the
JVM.
Fallacy: You can measure an object's size by serializing it into a byte stream and looking at the resulting stream length
The reason this does not work is because the serialization layout is
only a remote reflection of the true in-memory layout. One easy way to
see it is by looking at how Strings get serialized: in memory every
char is at least 2 bytes, but in serialized form Strings are UTF-8
encoded and so any ASCII content takes half as much space.

The Java Native Access library is typically used for calling native shared libraries from Java. Within this library there exist methods for determining the size of Java objects:
The getNativeSize(Class cls) method and its overloads will provide the size for most classes.
Alternatively, if your classes inherit from JNA's Structure class the calculateSize(boolean force) method will be available.
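A minimal sketch of the first approach, assuming the JNA library (com.sun.jna) is on the classpath; getNativeSize reports sizes as seen by native code, which for primitives matches the Java sizes:
import com.sun.jna.Native;

public class JnaSizeDemo {
    public static void main(String[] args) {
        // Sizes of the native equivalents of Java primitives (typically 4 and 8 bytes)
        System.out.println("int:    " + Native.getNativeSize(int.class));
        System.out.println("double: " + Native.getNativeSize(double.class));
    }
}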

You can do bit manipulation like the below to obtain the size of primitives. Note that these loops count bits, not bytes; divide by Byte.SIZE to get bytes:
public int sizeofInt() {
    int i = 1, j = 0;
    // shift the single set bit left until it overflows to zero, counting the shifts
    while (i != 0) {
        i = (i << 1);
        j++;
    }
    return j; // 32
}

public int sizeofChar() {
    char i = 1, j = 0;
    while (i != 0) {
        i = (char) (i << 1);
        j++;
    }
    return j; // 16
}

As mentioned here, you can get the size of primitive types through their wrapper classes.
e.g. for a long this could be Long.SIZE / Byte.SIZE from Java 1.5 onwards (as mentioned by zeodtr already), or Long.BYTES as of Java 8.

There is a contemporary way to do that for primitives: use the BYTES constant of the wrapper types.
System.out.println("byte " + Byte.BYTES);
System.out.println("char " + Character.BYTES);
System.out.println("int " + Integer.BYTES);
System.out.println("long " + Long.BYTES);
System.out.println("short " + Short.BYTES);
System.out.println("double " + Double.BYTES);
System.out.println("float " + Float.BYTES);
It results in,
byte 1
char 2
int 4
long 8
short 2
double 8
float 4

You can use Integer.SIZE / 8, Double.SIZE / 8, etc. for primitive types from Java 1.5.

The Instrumentation class has a getObjectSize() method; however, you shouldn't need to use it at runtime. The easiest way to examine memory usage is to use a profiler, which is designed to help you track memory usage.

EhCache provides a SizeOf class that will try to use the Instrumentation agent and will fall back to a different approach if the agent is not loaded or cannot be loaded (details here).
Also see the agent from Heinz Kabutz.

I decided to create an enum without following the standard Java conventions. Perhaps you like this.
public enum sizeof {
    ;
    public static final int FLOAT   = Float.SIZE / 8;
    public static final int INTEGER = Integer.SIZE / 8;
    public static final int DOUBLE  = Double.SIZE / 8;
}

Try java.lang.Instrumentation.getObjectSize(Object). But please be aware that
It returns an implementation-specific approximation of the amount of storage consumed by the specified object. The result may include some or all of the object's overhead, and thus is useful for comparison within an implementation but not between implementations. The estimate may change during a single invocation of the JVM.
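Using it requires a Java agent, because the JVM only hands an Instrumentation instance to agent code. A minimal sketch (the class and jar names here are hypothetical; the agent jar's manifest must declare Premain-Class: ObjectSizeAgent and the JVM must be started with -javaagent:objectsize.jar):
import java.lang.instrument.Instrumentation;

public class ObjectSizeAgent {
    private static volatile Instrumentation instrumentation;

    // Invoked by the JVM before main() when the agent is loaded via -javaagent
    public static void premain(String agentArgs, Instrumentation inst) {
        instrumentation = inst;
    }

    public static long sizeOf(Object o) {
        if (instrumentation == null) {
            throw new IllegalStateException("Agent not loaded; start the JVM with -javaagent:objectsize.jar");
        }
        return instrumentation.getObjectSize(o); // shallow, implementation-specific estimate
    }
}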

There's a class/jar available on SourceForge.net that uses Java instrumentation to calculate the size of any object. Here's a link to the description: java.sizeOf

Just some testing around it:
public class PrimitiveTypesV2 {

    public static void main(String[] args) {
        Class<?>[] typesList = {
            Boolean.class, Byte.class, Character.class, Short.class, Integer.class,
            Long.class, Float.class, Double.class, Boolean.TYPE, Byte.TYPE, Character.TYPE,
            Short.TYPE, Integer.TYPE, Long.TYPE, Float.TYPE, Double.TYPE
        };

        try {
            for (Class<?> type : typesList) {
                if (type.isPrimitive()) {
                    System.out.println("Primitive type:\t" + type);
                }
                else {
                    boolean hasSize = false;
                    java.lang.reflect.Field[] fields = type.getFields();
                    for (int count = 0; count < fields.length; count++) {
                        if (fields[count].getName().contains("SIZE")) hasSize = true;
                    }
                    if (hasSize) {
                        System.out.println("Bits size of type " + type + " :\t\t\t" + type.getField("SIZE").getInt(type));
                        double value = type.getField("MIN_VALUE").getDouble(type);
                        long longVal = Math.round(value);
                        if ((value - longVal) == 0) {
                            System.out.println("Min value for type " + type + " :\t\t" + longVal);
                            longVal = Math.round(type.getField("MAX_VALUE").getDouble(type));
                            System.out.println("Max value for type " + type + " :\t\t" + longVal);
                        }
                        else {
                            System.out.println("Min value for type " + type + " :\t\t" + value);
                            value = type.getField("MAX_VALUE").getDouble(type);
                            System.out.println("Max value for type " + type + " :\t\t" + value);
                        }
                    }
                    else {
                        System.out.println(type + "\t\t\t type without SIZE field.");
                    }
                } // if not primitive
            } // for typesList
        } catch (Exception e) { e.printStackTrace(); }
    } // main
} // class PrimitiveTypesV2

Not sure about older versions, but since version 1.8 the Java SDK provides the BYTES constant on the boxed (wrapper) classes of the primitive types.
BYTES ( = SIZE / Byte.SIZE )
import java.util.*;
import java.lang.*;
import java.io.*;

// The main method must be in a class named "Main".
class Main {
    public static void main(String[] args) {
        System.out.println("size of Integer: " + Integer.BYTES);
        System.out.println("size of Character: " + Character.BYTES);
        System.out.println("size of Short: " + Short.BYTES);
        System.out.println("size of Long: " + Long.BYTES);
        System.out.println("size of Double: " + Double.BYTES);
        System.out.println("size of Float: " + Float.BYTES);
    }
}
Here's a fiddle: https://www.mycompiler.io/view/0N19Y6cWL8F

I don't think it is in the Java API, but most data types that hold a number of elements have a size() method. You could easily write a function to check the size yourself.

Yes, in Java:
System.out.println(Integer.SIZE/8); //gives you 4.
System.out.println(Integer.SIZE); //gives you 32.
//Similary for Byte,Long,Double....

Related

Converting a binary string to integer using a basic mathematical operator

Main:
public class Main {
    public static void main(String[] args) {
        System.out.println(Convert.BtoI("10001"));
        System.out.println(Convert.BtoI("101010101"));
    }
}
Class:
public class Convert {
    public static int BtoI(String num) {
        Integer i = Integer.parseInt(num, 2);
        return i;
    }
}
So I was working on converters. I was struggling as I am new to Java, and my friend suggested using the Integer method, which works. However, which method would be most efficient to convert using the basic operators (e.g. logical, arithmetic, etc.)?
.... my friend suggested using integer method, which works.
Correct:
it works, and
it is the best way.
However, which method would be most efficient to convert using the basic operators (e.g. logical, arithmetic etc.)
If you are new to Java, you should not be obsessing over the efficiency of your code. You don't have the intuition.
You probably shouldn't optimize this even if you are experienced. In most cases, small-scale efficiencies are irrelevant, and you are better off using a profiler to validate your intuition about what is important before you start to optimize.
Even if this is a performance hotspot in your application, the Integer.parseInt code has (no doubt) already been well optimized. There is little chance that you could do significantly better using "primitive" operations. (Under the hood, the method is most likely already doing the same thing you would be doing.)
If you are just asking this because you are curious, take a look at the source code for the Integer class.
If you want to use basic arithmetic to convert binary numbers to integers then you can replace the BtoI() method within the class Convert with the following code.
public static int BtoI(String num) {
    int number = 0; // the number to store the result
    int power = 0;  // the current power of two

    // loop from the end to the start of the binary number
    for (int i = num.length() - 1; i >= 0; i--) {
        // if the digit encountered is 1, add 2^power to the result
        if (num.charAt(i) == '1')
            number += Math.pow(2, power);
        // increment the power for the next iteration
        power++;
    }
    // return the number
    return number;
}
A normal positional calculation is performed in the code above to get the result, e.g.
101 => 1*2^2 + 0*2^1 + 1*2^0 = 5
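If you would rather use shifts than Math.pow, a sketch of the same loop using only bit and arithmetic operations (assuming the input contains only '0' and '1' characters) could look like this:
public static int btoI(String num) {
    int number = 0;
    // walk left to right: shift the value accumulated so far and add the next bit
    for (int i = 0; i < num.length(); i++) {
        number = (number << 1) + (num.charAt(i) - '0');
    }
    return number;
}
For "10001" this yields 17, the same as Integer.parseInt("10001", 2).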

Does casting null to an array of objects cost more than casting to an object?

I have the following methods:
public void dump(Integer value) {
    // do something with value
}

public void dump(Integer[] values) {
    // do something with values
}
And I want to call dump(null). It doesn't matter which type I cast to, because both overloads work as intended with nulls:
dump((Integer[]) null);
dump((Integer) null);
Which one is better to use resource wise (less ram, cpu usage)?
I would think that java would preallocate 4 bytes for a null Integer and 8 bytes for Double, is this true?
What about other (more complicated) objects that are nulled, if no constructor call then how are the nulls stored?
Which one is better to use resource wise (less ram, cpu usage)?
It won't make any measurable difference.
I would think that java would preallocate 4 bytes for a null Integer and 8 bytes for Double, is this true?
No. Null is null. There is only one format. You are confusing objects and references here. null is a reference. All references are the same size.
What about other (more complicated) objects that are nulled
There is no such thing as 'objects that are nulled'.
if no constructor call then how are the nulls stored?
As nulls.
The type of the cast doesn't affect the memory used by null. In a decompiled class (javap -c output):
aconst_null
checkcast #4 // class java/lang/Integer
invokevirtual #5 // Method dump:(Ljava/lang/Integer;)V
vs
aconst_null
checkcast #6 // class "[Ljava/lang/Integer;"
invokevirtual #7 // Method dump:([Ljava/lang/Integer;)V
so both are the same in terms of memory usage.
I believe they cost the same, because you do not have an actual object in either case (neither an Integer object nor an Integer[] object, which would definitely have different sizes in memory).
What you have here is just a reference (typically 4 bytes) which points to nothing, as it is null. From this point of view there is no difference between the types to which you are casting these nulls.
No difference.
Nope. Java doesn't allocate any space at all for a null object, since null by definition is a reference to no object at all. No object at all takes no space at all. The reference itself will always be 32 or 64 bits long.
What do you mean by "stored"? A reference variable holds a pointer value, which is always the same size irrespective of the object it points to, whether that reference is null or not, and regardless of the type it references. It is the size of an address. I haven't looked, but I'll bet that the null reference is a special address value such as 0 that points nowhere by definition.
Casting a reference does not change the reference. It still has the exact same value, bit by bit, that it has without the cast.
The only exception to all this is the 32-bit optimization for 64-bit Java. Normally you'd expect all references in 64-bit Java to be, well, 64 bits long. But you can switch on or off the ability to hold references in 32 bits, if certain assumptions about the program hold. But either way, once the JVM decides how wide a reference is, 32 or 64 bits, that will hold true through the program.
So bottom line, no, casting a reference to some other type has no effect on the memory consumed by the reference.
The memory consumed by the object can go to zero if all references to it fall out of scope or become null.
All reference values (and both boxed Integer and arrays are reference values) have the same representation, so there is no difference in the call at all. If there is a difference in cost, it would be due to the implementation of the function that is called.
Which one is better to use resource wise (less ram, cpu usage)?
If you ever need to ask yourself this question, you can always write a simple test.
public class Main {
    private static Integer x = 0;
    private static final int n = 500000000;

    public static void dump(Integer value) {
        x += 1;
    }

    public static void dump(Integer[] values) {
        x += 1;
    }

    public static void main(String[] args) {
        long t = System.currentTimeMillis();
        for (int i = 0; i < n; i++) {
            dump((Integer[]) null);
        }
        long array_time = System.currentTimeMillis() - t;

        t = System.currentTimeMillis();
        for (int i = 0; i < n; i++) {
            dump((Integer) null);
        }
        long int_time = System.currentTimeMillis() - t;

        System.out.println("array_time: " + array_time + " ms");
        System.out.println("int_time: " + int_time + " ms");
    }
}
Output:
array_time: 2578 ms
int_time: 2045 ms
Not a significant difference.

Is there a performance impact when using Guava.Preconditions with concatenated strings?

In our code, we often check arguments with Preconditions:
Preconditions.checkArgument(expression, "1" + var + "3");
But sometimes, this code is called very often. Could this have a notable negative impact on performance? Should we switch to
Preconditions.checkArgument(expression, "%s%s%s", 1, var, 3);
?
(I expect the condition to be true most of the time; false means a bug.)
If you expect the check to not throw any exception most of the time, there is no reason to use the string concatenation. You'll lose more time concatenating (using .concat or a StringBuilder) before calling the method than doing it after you're sure you're throwing an exception.
Conversely, if you are throwing the exception, you're already on the slow path.
It's also worth mentioning that Guava uses a custom, faster formatter which accepts only %s. So the cost is actually closer to the {} placeholder handling of standard loggers (slf4j or Log4j 2). But as written above, this only matters when you're already on the slow path.
In any case, I would strongly recommend against either of your suggestions; I'd use this one instead:
Preconditions.checkArgument(expression, "1%s3", var);
Only put variables behind %s placeholders, not constants, to gain a marginal speedup.
In the case of string literal concatenation, the compiler should do this at compile time, so no runtime performance hit occurs. At least the standard JDK does this; it is not guaranteed by the specification (so some compilers may not optimize it).
In the case of variables, constant folding won't work, so there will be work at runtime. However, newer Java compilers replace string concatenation with a StringBuilder, which should be more efficient, as it is mutable, unlike String.
This should be faster than using a formatter, if the message is actually built. However, if you don't expect it to be needed very often, then it can be slower, as the concatenation always happens, even when the argument is true and the method does nothing.
Anyway, to wrap it up: I do not think it is worth rewriting the existing calls. However, in new code, you can use the formatter without doubts.
I wrote a simple test. Using the formatter is much faster, as suggested here. The difference in performance grows with the number of calls (the formatter's cost per call stays constant, O(1)). I guess garbage-collection time grows with the number of calls when plain string concatenation is used.
Here is one sample result:
started with 10000000 calls and 100 runs
formatter: 0.94 (mean per run)
string: 181.11 (mean per run)
Formatter is 192.67021 times faster. (this difference grows with number of calls)
Here is the code (Java 8, Guava 18):
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

import com.google.common.base.Preconditions;
import com.google.common.base.Stopwatch;

public class App {

    public static void main(String[] args) {
        int count = 10000000;
        int runs = 100;
        System.out.println("started with " + count + " calls and " + runs + " runs");

        Stopwatch stopwatch = Stopwatch.createStarted();
        run(count, runs, i -> fast(i));
        stopwatch.stop();
        float fastTime = (float) stopwatch.elapsed(TimeUnit.MILLISECONDS) / runs;
        System.out.println("fast: " + fastTime + " (mean per run)");

        stopwatch.reset();
        System.out.println("reset: " + stopwatch.elapsed(TimeUnit.MILLISECONDS));
        stopwatch.start();
        run(count, runs, i -> slow(i));
        stopwatch.stop();
        float slowTime = (float) stopwatch.elapsed(TimeUnit.MILLISECONDS) / runs;
        System.out.println("slow: " + slowTime + " (mean per run)");

        float times = slowTime / fastTime;
        System.out.println("Formatter is " + times + " times faster.");
    }

    private static void run(int count, int runs, Consumer<Integer> function) {
        for (int c = 0; c < count; c++) {
            for (int r = 0; r < runs; r++) {
                function.accept(r);
            }
        }
    }

    private static void slow(int i) {
        Preconditions.checkArgument(true, "var was " + i);
    }

    private static void fast(int i) {
        Preconditions.checkArgument(true, "var was %s", i);
    }
}

Where and when the use of primitive data types in Java is effectively appropriate?

Consider the following two segments of code in Java,
Integer x=new Integer(100);
Integer y=x;
Integer z=x;
System.out.println("Used memory (bytes): " +
(Runtime.getRuntime().totalMemory()-Runtime.getRuntime().freeMemory()));
The memory usage when tested on my system was: Used memory (bytes): 287848
and
int a=100;
int b=a;
int c=a;
System.out.println("Used memory (bytes): " +
(Runtime.getRuntime().totalMemory()-Runtime.getRuntime().freeMemory()));
The memory usage when tested on my system was: Used memory (bytes): 287872
and the following
Integer x=new Integer(100);
System.out.println("Used memory (bytes): " +
(Runtime.getRuntime().totalMemory()-Runtime.getRuntime().freeMemory()));
and
int a=100;
System.out.println("Used memory (bytes): " +
(Runtime.getRuntime().totalMemory()-Runtime.getRuntime().freeMemory()));
In both of the above cases, the memory usage was exactly the same when tested on my system: Used memory (bytes): 287872
The statement
System.out.println("Used memory (bytes): " +
(Runtime.getRuntime().totalMemory()-Runtime.getRuntime().freeMemory()));
will display the total memory currently in use (total available memory minus currently free memory), in bytes.
I have verified through the above method that in the first case the memory usage (287848) was lower than in the second one (287872), while in the remaining two cases it was exactly the same (287872). Of course it should be so, because in the very first case y and z contain a copy of the reference held in x, and all of them (x, y and z) point to the same object (location). That suggests the first case is better than the second one, while the remaining two cases are equivalent statements with exactly the same memory usage (287872).
If that is so, then the use of primitive data types in Java would seem useless and avoidable, even though they were designed for better memory usage and CPU utilization. So why do primitive data types in Java still exist?
A question somewhat similar to this one was already posted here but it did not have such a scenario.
That question is here.
I wouldn't pay attention to Runtime.freeMemory -- it's very ambiguous (does it include unused stack space? PermGen space? gaps between heap objects that are too small to be used?), and giving any precise measurement without halting all threads is impossible.
Integers are necessarily less space efficient than ints, because just the reference to the Integer takes 32 bits (64 for a 64-bit JVM without compressed pointers).
If you really want to test it empirically, have many threads recurse deeply and then wait. As in
class TestThread extends Thread {
    private void recurse(int depth) {
        int a, b, c, d, e, f, g;
        if (depth < 100)
            recurse(depth + 1);
        for (;;) try {
            Thread.sleep(Long.MAX_VALUE);
        } catch (InterruptedException ex) {}
    }

    @Override public void run() {
        recurse(0);
    }

    public static void main(String[] args) {
        for (int i = 0; i < 500; ++i)
            new TestThread().start();
    }
}
For a start, an Integer wraps an int, therefore Integer has to be at least as big as int.
From the docs (I really doubt this is necessary):
The Integer class wraps a value of the primitive type int in an
object. An object of type Integer contains a single field whose type
is int.
So obviously a primitive int is still being used.
Not only that but objects have more overhead, and the most obvious one is that when you're using objects your variable contains a reference to it:
Integer obj = new Integer(100);
int prim = 100;
i.e. obj stores a reference to an Integer object, which contains an int, whereas prim stores the value 100. That alone is enough to show that using Integer over int brings more overhead with it. And there's more overhead than just that.
The wrapper contains a primitive as a field, but it causes additional overhead because it's an object. The reference takes up space as well, but your example isn't really designed to show this.
The tests you designed aren't really well-suited for a precise measurement, but since you used them, try this example instead:
public static void main(String[] args) {
    int numInts = 100000;
    Integer[] array = new Integer[numInts];
    // int[] array = new int[numInts];
    for (int i = 0; i < numInts; i++) {
        array[i] = i; // put some real data into the array, using auto-boxing if needed
    }
    System.out.println("Used memory (bytes): " +
        (Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory()));
}
Now try it again, but uncomment the primitive line and comment out the wrapper line. You should see that the wrapper takes up much more memory.
In your first example, you have the equivalent of one integer and two extra references (pointers).
Because Integer is an Object, it has reference semantics and carries methods and object overhead.
By using int instead of Integer, you are copying the value 3 times.
You have a difference of 24 bytes, which is used for storing the headers and values of your extra 2 ints. Although I wouldn't trust your test: the JVM can be somewhat random, and its garbage collection is quite dynamic. As far as the memory required for a single Integer vs int goes, Integer will take up more space because it is an Object and thus contains more information.
Runtime.getRuntime().freeMemory(): taking a delta of this does not give you correct statistics, as there are many moving parts like garbage collection and other threads.
Integer takes more memory than the int primitive.
Your test case is too simple to give any conclusive result.
Any test case that takes less than 5 seconds doesn't mean anything.
You need to at least do something with these objects you are creating. The JVM can simply look at your code and just not do anything because your objects aren't ever used, and you exit. (Can't say for certain what the JVM interpreter does, but the JIT will use escape analysis to optimize your entire testcase into nothing)
First of all, if you're looking for memory efficiency, primitives are smaller simply because of what they are. The wrapper objects are objects and need to be garbage collected. They have plenty of fields and methods you can use, and those have to be stored somewhere...
Primitives aren't "designed" to be more efficient. Wrapper objects were designed to be more feature-friendly. You need primitives, because how else are you going to store a number?
If you really want to see the memory difference, take a real application. If you want to write it yourself, go ahead, but it'll take some time. Alternatively, use a text editor to search and replace every single int declaration with Integer, and long with Long, etc., and then take a look at the memory footprint. I wouldn't be surprised if you see your computer explode.
From a programming point of view, you need to use primitives when necessary and wrapper objects when necessary. Where either is applicable, it's your preference. Trust me, there aren't that many such cases.
http://www.javaspecialists.eu/archive/Issue193.html
This might help you understand/explore things a little bit more. An excellent article! Cheers!
If you look at the source code of java.lang.Integer, the value is stored as an int.
private int value;
Your test is not valid; that's all there is to it.
Proof:
When you run these tests you'll get an AssertionError in the second test (because the memory measurement changes, even if you stop resetting the memory field). If you run these tests with 10,000 recursion levels, you'll get a StackOverflowError in both.
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.is;

import org.junit.Test;

public class TestRedundantIntegers {

    private long memory;

    @Test
    public void whenRecursiveIntIsSet() {
        memory = Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory();
        recurseInt(0, 100);
    }

    private void recurseInt(int depth, int someInt) {
        int x = someInt;
        assertThat(memory, is(Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory()));
        memory = Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory();
        if (depth < 1000)
            recurseInt(depth + 1, x);
    }

    @Test
    public void whenRecursiveIntegerIsSet() {
        memory = Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory();
        recurseInteger(0, new Integer(100));
    }

    private void recurseInteger(int depth, Integer someInt) {
        Integer x = someInt;
        assertThat(memory, is(Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory()));
        memory = Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory();
        if (depth < 1000)
            recurseInteger(depth + 1, x);
    }
}
As for "where and when": use the non-primitive types where an Object is required, and the primitives everywhere else. For example, the types of a generic can't be primitive, so you can't use primitives with them. Even before generics were introduced, things like HashSet and HashMap couldn't store primitives.

Java: store to memory + numbers

I have a function:
private void fixTurn(int turn)
And then I have:
memory1 = memory1 + count;
Now, if turn is 2, I would like it to do:
memory2 = memory2 + count;
I tried this:
memory + turn = memory+turn + count;
But it will not work. Should I just go with an if statement?
No, you should use a collection of some form instead of having several separate variables. For example, you could use an array:
memory[turn] += count;
Numerical indexes in variable names are generally something to be avoided.
Wanting to access such variables via the index is usually the sign of a novice programmer who hasn't gotten the point of arrays - because an array is exactly that, a bunch of variables that can be accessed via an index:
memory[turn] = memory[turn] + count;
or, shorter (using a compound assignment operator):
memory[turn] += count;
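A fuller sketch of the array approach (NUM_TURNS and count are hypothetical stand-ins for whatever the original code uses):
class Game {
    private static final int NUM_TURNS = 10;          // hypothetical upper bound on the number of turns
    private final int[] memory = new int[NUM_TURNS];  // replaces memory1, memory2, ...
    private int count = 1;                            // stand-in for the original count field

    private void fixTurn(int turn) {
        // the turn number doubles as the index (0-based; shift by one if your turns start at 1)
        memory[turn] += count;
    }
}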
You have to write it as
memory += turn * count
You should rephrase your question, but I think you want to do something like this:
private void fixTurn(int turn) {
    if (turn == 1) {        // note: could be replaced by a switch
        memory1 += count;
    } else if (turn == 2) {
        memory2 += count;
    }
}
Edit: the solution proposed by Jon Skeet is better in terms of readability and adaptability, and I would recommend it more.
My polished crystal ball tells me that you have some sort of game that is organized in "turns", and you want to change something for a given turn ("fixTurn").
You may want to store the turns in a list. That's preferable to an array, because a list can grow (or shrink) and allows adding more and more "turns".
Assuming you have some class that models a turn and it's named Turn, declare the list like:
List<Turn> turns = new ArrayList<Turn>();
Then you can add turns to it:
turns.add(new Turn());
And now, if you have to change some parameter for a turn, do it like this:
private void fixTurn(int number) {
    Turn memory = turns.get(number);
    memory.setCount(memory.getCount() + count);
}
I am not very clear about your question, but I think this is what you are looking for:
memory += turn * count
This syntax is not allowed in Java:
memory + turn = memory+turn + count;
