I need to pass a dynamic list of primitives to a Java method. That could be (int, int, float) or (double, char) or whatever. I know that's not possible, so I was thinking of valid solutions to this problem.
Since I am developing a game on Android, where I want to avoid garbage collection as much as possible, I do not want to use any objects (e.g. because of auto boxing), but solely primitive data types. Thus a collection or array of primitive class objects (e.g. Integer) is not an option in my case.
So I was thinking about whether I could pass a class object to my method which would contain all the primitive values I need. However, that's not a solution to my problem either, because as I said, the list of primitives is variable. So if I went that way, inside the method I wouldn't know how to access this dynamic list of primitives (at least not without any conversion to objects, which is what I want to avoid).
Now I feel a bit lost here. I do not know of any other way in Java to solve my problem. I hope that's simply a lack of knowledge on my side. Does anyone know a solution that avoids converting to and from objects?
It would perhaps be useful to provide some more context and explain on exactly what you want to use this technique for, since this will probably be necessary to decide on the best approach.
Conceptually, you are trying to do something that is always difficult in any language that passes parameters on a managed stack. What do you expect the poor compiler to do? Either it lets you push an arbitrary number of arguments on the stack and access them with some stack pointer arithmetic (fine in C which lets you play with pointers as much as you like, not so fine in a managed language like Java) or it will need to pass a reference to storage elsewhere (which implies allocation or some form of buffer).
Luckily, there are several ways to do efficient primitive parameter passing in Java. Here is my list of the most promising approaches, roughly in the order you should consider them:
Overloading - have multiple methods with different primitive arguments to handle all the possible combinations. Likely to be the best / simplest / most lightweight option if there is a relatively small number of arguments. Also great performance, since the compiler will statically work out which overloaded method to call.
Primitive arrays - a good way of passing an arbitrary number of primitive arguments. Note that you will probably need to keep a primitive array around as a buffer (otherwise you will have to allocate it when needed, which defeats your objective of avoiding allocations!). If you use partially-filled primitive arrays you will also need to pass offset and/or count arguments along with the array (a minimal sketch of this appears after the list).
Pass objects with primitive fields - works well if the set of primitive fields is relatively well known in advance. Note that you will also have to keep an instance of the class around to act as a buffer (otherwise you will have to allocate it when needed, which defeats your objective of avoiding allocations!).
Use a specialised primitive collection library - e.g. the Trove library. Great performance and saves you having to write a lot of code as these are generally well designed and maintained libraries. Pretty good option if these collections of primitives are going to be long lived, i.e. you're not creating the collection purely for the purpose of passing some parameters.
NIO Buffers - roughly equivalent to using arrays or primitive collections in terms of performance. They have a bit of overhead, but could be a better option if you need NIO buffers for another reason (e.g. if the primitives are being passed around in networking code or 3D library code that uses the same buffer types, or if the data needs to be passed to/from native code). They also handle offsets and counts for you, which can be helpful.
Code generation - write code that generates the appropriate bytecode for the specialised primitive methods (either ahead of time or dynamically). This is not for the faint-hearted, but is one way to get absolutely optimal performance. You'll probably want to use a library like ASM, or alternatively pick a JVM language that can easily do the code generation for you (Clojure springs to mind).
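As a rough illustration of the first two options (all names below are made up for the example): overloads cover the known small combinations, and a pre-allocated buffer plus an offset/count pair covers the variable-length case without any per-call allocation.

public class PrimitivePassing {

    // Option 1: overloading for the known combinations; the compiler
    // resolves the right method statically, with no boxing.
    static void apply(int a, int b, float c) { /* ... */ }
    static void apply(double a, char b)      { /* ... */ }

    // Option 2: a long-lived primitive buffer reused for every call.
    private final float[] buffer = new float[64];

    void update() {
        int count = 0;
        buffer[count++] = 1.0f;
        buffer[count++] = 2.5f;
        consume(buffer, 0, count);   // pass offset and count alongside the array
    }

    static void consume(float[] values, int offset, int count) {
        for (int i = offset; i < offset + count; i++) {
            // work with values[i]; no boxing, no allocation
        }
    }
}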
There simply isn't a way. The only mechanism for a variable number of parameters in a method is the ... (varargs) operator, which allocates an array behind the scenes; and generics do not support primitives at all.
The only thing I can possibly think of would be a class like this:
class ReallyBadPrimitives {
    char[] chars;
    int[] ints;
    float[] floats;
}
And resize the arrays as you add to them. But that's REALLY, REALLY bad as you lose basically ALL referential integrity in your system.
I wouldn't worry about garbage collection - I would solve your problems using objects and autoboxing if you have to (or better yet, avoiding this "unknown set of input parameters" and get a solid protocol down). Once you have a working prototype, see if you run into performance problems, and then make necessary adjustments. You might find the JVM can handle those objects better than you originally thought.
Try to use the ... operator:
static int sum(int... numbers) {
    int total = 0;
    for (int i = 0; i < numbers.length; i++)
        total += numbers[i];
    return total;
}
You can use a BitSet, similar to a C++ bit field.
http://docs.oracle.com/javase/1.3/docs/api/java/util/BitSet.html
You could also cast all your primitives to double and then just pass in an array of doubles. The only trick there is that you can't use the boolean type.
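A minimal sketch of that idea (the class and method names are purely illustrative); note that booleans would need an encoding such as 0.0/1.0, and long values above 2^53 cannot be represented exactly in a double:

public class DoublePacking {

    // pre-allocated once and reused for every call, so no garbage is produced
    private final double[] args = new double[8];

    void call() {
        int n = 0;
        args[n++] = 42;      // int widens to double losslessly
        args[n++] = 3.5f;    // float widens to double
        args[n++] = 'x';     // char is passed as its numeric code
        process(args, n);
    }

    // the receiver casts each slot back to the type it expects
    static void process(double[] values, int count) {
        int a = (int) values[0];
        float b = (float) values[1];
        char c = (char) values[2];
    }
}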
Fwiw, something like sum(int... numbers) would not autobox the ints. It would create a single int[] to hold them, so there would be an object allocation; but it wouldn't be per int.
public class VarArgs {

    public static void main(String[] args) {
        System.out.println(variableInts(1, 2));
        System.out.println(variableIntegers(1, 2, 3));
    }

    private static String variableInts(int... args) {
        // args is an int[], and ints can't have getClass(), so this doesn't compile:
        // args[0].getClass();
        return args.getClass().toString() + " ";
    }

    private static String variableIntegers(Integer... args) {
        // args is an Integer[], and Integers do have getClass()
        args[0].getClass();
        return args.getClass().toString();
    }
}
output:
class [I
class [Ljava.lang.Integer;
In Java 8, there is a new method String.chars() which returns a stream of ints (IntStream) that represent the character codes. I guess many people would expect a stream of chars here instead. What was the motivation to design the API this way?
As others have already mentioned, the design decision behind this was to prevent the explosion of methods and classes.
Still, I personally think this was a very bad decision. Given that they did not want to make a CharStream (which is reasonable), I would have preferred different methods instead of chars():
Stream<Character> chars(), which gives a stream of boxed characters and has a slight performance penalty.
IntStream unboxedChars(), which would be used for performance-critical code.
However, instead of focusing on why it is done this way currently, I think this answer should focus on showing a way to do it with the API that we have gotten with Java 8.
In Java 7 I would have done it like this:
for (int i = 0; i < hello.length(); i++) {
    System.out.println(hello.charAt(i));
}
And I think a reasonable method to do it in Java 8 is the following:
hello.chars()
.mapToObj(i -> (char)i)
.forEach(System.out::println);
Here I obtain an IntStream and map it to an object via the lambda i -> (char) i; this automatically boxes the values into a Stream<Character>, and then we can do what we want, and we still get to use method references as a plus.
Be aware though that you must use mapToObj; if you forget and use map, nothing will complain, but you will still end up with an IntStream, and you may be left wondering why it prints the integer values instead of the characters.
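To make the pitfall concrete (hello is just a sample string):

String hello = "hello";

// Still an IntStream: the (char) cast is silently widened back to int,
// so the character codes are printed as numbers (104, 101, ...).
hello.chars()
     .map(i -> (char) i)
     .forEach(System.out::println);

// Stream<Character>: mapToObj boxes each value, so 'h', 'e', ... are printed.
hello.chars()
     .mapToObj(i -> (char) i)
     .forEach(System.out::println);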
Other ugly alternatives for Java 8:
If you stay in an IntStream and ultimately want to print the values, you can no longer use a method reference for printing:
hello.chars()
.forEach(i -> System.out.println((char)i));
Moreover, using a method reference to your own method does not work anymore! Consider the following:
private void print(char c) {
System.out.println(c);
}
and then
hello.chars()
.forEach(this::print);
This will give a compile error, because of a possibly lossy conversion from int to char.
Conclusion:
The API was designed this way to avoid adding a CharStream. I personally think the method should return a Stream<Character>, and the current workaround is to use mapToObj(i -> (char) i) on the IntStream to be able to work properly with characters.
The answer from skiwi covered many of the major points already. I'll fill in a bit more background.
The design of any API is a series of tradeoffs. In Java, one of the difficult issues is dealing with design decisions that were made long ago.
Primitives have been in Java since 1.0. They make Java an "impure" object-oriented language, since the primitives are not objects. The addition of primitives was, I believe, a pragmatic decision to improve performance at the expense of object-oriented purity.
This is a tradeoff we're still living with today, nearly 20 years later. The autoboxing feature added in Java 5 mostly eliminated the need to clutter source code with boxing and unboxing method calls, but the overhead is still there. In many cases it's not noticeable. However, if you were to perform boxing or unboxing within an inner loop, you'd see that it can impose significant CPU and garbage collection overhead.
When designing the Streams API, it was clear that we had to support primitives. The boxing/unboxing overhead would kill any performance benefit from parallelism. We didn't want to support all of the primitives, though, since that would have added a huge amount of clutter to the API. (Can you really see a use for a ShortStream?) "All" or "none" are comfortable places for a design to be, yet neither was acceptable. So we had to find a reasonable value of "some". We ended up with primitive specializations for int, long, and double. (Personally I would have left out int but that's just me.)
For CharSequence.chars() we considered returning Stream<Character> (an early prototype might have implemented this) but it was rejected because of boxing overhead. Considering that a String has char values as primitives, it would seem to be a mistake to impose boxing unconditionally when the caller would probably just do a bit of processing on the value and unbox it right back into a string.
We also considered a CharStream primitive specialization, but its use would seem to be quite narrow compared to the amount of bulk it would add to the API. It didn't seem worthwhile to add it.
The penalty this imposes on callers is that they have to know that the IntStream contains char values represented as ints and that casting must be done at the proper place. This is doubly confusing because there are overloaded API calls like PrintStream.print(char) and PrintStream.print(int) that differ markedly in their behavior. An additional point of confusion possibly arises because the codePoints() call also returns an IntStream but the values it contains are quite different.
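A quick illustration of that last point (the sample string is chosen arbitrarily): chars() yields one value per UTF-16 char, while codePoints() yields one value per Unicode code point, so the two differ as soon as the string contains a surrogate pair.

String s = "a\uD83D\uDE00";   // 'a' followed by U+1F600, an emoji outside the BMP

// chars(): the surrogate pair shows up as two separate values (D83D, DE00)
s.chars().forEach(c -> System.out.printf("%04X%n", c));

// codePoints(): the emoji is a single value (1F600)
s.codePoints().forEach(cp -> System.out.printf("%04X%n", cp));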
So, this boils down to choosing pragmatically among several alternatives:
We could provide no primitive specializations, resulting in a simple, elegant, consistent API, but which imposes a high performance and GC overhead;
we could provide a complete set of primitive specializations, at the cost of cluttering up the API and imposing a maintenance burden on JDK developers; or
we could provide a subset of primitive specializations, giving a moderately sized, high performing API that imposes a relatively small burden on callers in a fairly narrow range of use cases (char processing).
We chose the last one.
I have a class of which there may be many instances (on a mobile device), so I'm trying to minimize the size. One of my fields is a "DrawTarget" that indicates whether drawing operations are being ignored, queued to a path or drawn to the display. I would like it to take a single byte or less since there are only 3 possible values, but I would also like it to be friendly code so I don't have hard-coded numbers all over. One thought is to use an enum like:
public enum DrawTarget {
Invisible,
Path,
Canvas
}
But from what I read, a Java enum doesn't allow you to specify the memory layout -- I can't request that the enum values represent a byte-size value -- and I guess enum values end up being integer-sized values in Java.
So I thought about maybe making an implicit conversion operator in the enum... is this possible in Java? Or is my best option to implement something like this within the enum:
public static DrawTarget fromValue(byte value) {
    switch (value) {
        case 0:
            return Invisible;
        case 1:
            return Path;
        default:
            return Canvas;
    }
}
and then call DrawTarget.fromValue wherever I want to access the value?
Or should I just create a single-byte class since apparently (from what I read in my research on this) enums are basically just special classes in Java anyway?
public class DrawTarget {
    public static final byte Invisible = 0;
    public static final byte Path = 1;
    public static final byte Canvas = 2;
}
But how do I represent the value of an enum instance if I use that last solution? I still need a way to allow the "=" operator to accept one of the static fields of the class... like a conversion constructor or an assignment operator overload.
I suspect, however, that any class object, being a reference type, will take more than a byte for each instance. Is that true?
In Java, an enum is a class that has exactly as many instances as there are values. The instances are created at class (enum) loading time. Every place where you use an enum variable or an enum attribute, you actually use an ordinary reference to one of the existing enum objects (instances of an enum are never created after the enum is initialized).
This means that an enum reference costs as much as any other object reference, usually four bytes. Which is really, really little.
You don't know how much memory a byte field actually takes (really! remember that low-level memory management involves plenty of padding), so any "optimization" based on this may fail. On a given architecture a byte field might take as much memory as an int field (because it might be faster that way).
If you want to write good Java, use enum. Really. The only good reason not to use enums, would be if you had a whole array of values, like: drawTargets[] = new DrawTarget[100000];
If you insist on micro-optimizing, just use plain bytes and forget enums; public static final byte SOMETHING = 1; is fine for making comparisons (and sucks for debugging).
I have written Android programs for a long time and have never seen such micro-optimization pay off. Your case might be the one in a million, but I don't think it is.
Also, to make life simpler for all of us, please consider using Java conventions in Java code: enum instances and public static final fields should be named LIKE_THIS, attributes likeThis (not LikeThis!).
and I guess enum values end up being integer-sized values in Java.
No, enums are always classes in Java. So if you have a field of type DrawTarget, that will be a reference - either to null or to one of the three instances of DrawTarget. (There won't be any more instances than that; it's not like a new instance of DrawTarget is created every time you use it.)
I would go with the enum and then measure the memory usage - an enum is logically what you want, so take the normal approach of writing the simplest code that works and then testing the performance - rather than guessing at where bottlenecks might be.
You may want to represent the value as a single byte when serializing, and then convert it back to the enum when deserializing, but other than that I'd stick with the enum type throughout your code if possible.
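A minimal sketch of that idea (the toByte/fromByte names are just illustrative): keep the enum everywhere in code, and translate to and from a byte only at the serialization boundary.

public enum DrawTarget {
    INVISIBLE((byte) 0),
    PATH((byte) 1),
    CANVAS((byte) 2);

    private final byte value;

    // works because the byte values above match the declaration order
    private static final DrawTarget[] BY_VALUE = values();

    DrawTarget(byte value) { this.value = value; }

    public byte toByte() { return value; }

    public static DrawTarget fromByte(byte b) {
        return BY_VALUE[b];   // validate the range in real code
    }
}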
Unless Android has some special way of treating enum references, each reference to a DrawTarget will indeed take more than one byte in memory. Enums are classes, and enum instances are objects. So a reference to an enum instance takes the same amount of memory as any other object reference.
I wouldn't care much about it unless you have measured that this caused memory problems, though, and that reducing the size would have a significant impact.
What you get from enums, mainly, is type safety. If a method takes a DrawTarget as an argument, you (or coworkers) won't be able to pass anything other than one of the three instances of DrawTarget (or null). If you use a byte instead, the code is less clear, and anyone could pass any byte value instead of the three authorized byte values.
So, decide which is the most important for you, and choose the solution you prefer.
Your classes will only contain a reference to the enum. Only one instance of each enum will be created.
Aside from that, consider using polymorphism to implement the drawing behavior.
If the value of the enum is fixed, instantiate a different subclass for each object depending on its desired drawing behavior.
If the value changes often, you could keep a reference to the desired drawing strategy in the object. Refer to an object with an empty draw() method for objects that should not be drawn. Etc.
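A rough sketch of that strategy idea (all of the names here are made up): the object holds a reference to its current drawing behavior, and "not drawn" is simply a strategy with an empty draw() method. The strategy instances can be shared, so switching behavior does not allocate anything.

interface DrawStrategy {
    void draw();
}

final class DrawNothing implements DrawStrategy {
    public void draw() { /* intentionally empty: the object is not drawn */ }
}

final class DrawToPath implements DrawStrategy {
    public void draw() { /* queue the drawing operations to a path */ }
}

final class DrawToCanvas implements DrawStrategy {
    public void draw() { /* draw directly to the display */ }
}

class Shape {
    // one shared instance of each strategy can be reused by every Shape
    private DrawStrategy target = new DrawToCanvas();

    void setTarget(DrawStrategy target) { this.target = target; }

    void render() { target.draw(); }
}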
An enum is a special data type, not an ordinary class. Check the Oracle documentation for further details.
An enum type is a special data type that enables for a variable to be a set of predefined constants. The variable must be equal to one of the values that have been predefined for it.
I am writing a program which accepts 400 numbers of type long and will modify some of them depending on conditions at runtime and I want to know whether to use ArrayList<Long> or long[].
Which will be faster to use? I am thinking of using long[] because size is fixed.
When the size is fixed, long[] is faster, but it makes for a less maintainable API, because it does not implement the List interface.
Note a long[] is faster for 2 reasons:
It uses primitive longs and not boxed Longs (this also enables better cache performance, since the longs are allocated contiguously, while the Longs aren't guaranteed to be).
An array is a much simpler and more efficient data structure.
Nevertheless, for easier maintainability I would use a List<Long>, unless performance is very critical in this part of the program.
If you use this collection very often in a tight loop - and your profiler says it is indeed a bottleneck - I would then switch to a more efficient long[].
As far as the speed goes, it almost does not matter for a list of 400 items. If you need to grow your list dynamically, ArrayList<Long> is better; if the size is fixed, long[] may be better (and a bit faster, although again, you would probably not notice the difference in most situations).
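For the concrete case in the question (a fixed batch of 400 longs, some of which are modified in place depending on runtime conditions), a plain array version might look like this:

// 400 values, fixed size, modified in place: no boxing, no resizing
long[] values = new long[400];

for (int i = 0; i < values.length; i++) {
    if (values[i] % 2 == 0) {   // whatever the runtime condition happens to be
        values[i] += 10;        // modify in place, no new objects created
    }
}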
There are few things which haven't been mentioned in other answers:
A generic collection is in actuality a collection of Objects, or better said, this is what the Java compiler will make of it, whereas a long[] will always remain exactly what it is.
A consequence of the first bullet point is that if you do something that eventually puts something other than a Long into your collection, in certain situations the compiler will let it through (because the Java type system is not fully sound; for example, it will allow you to upcast and then re-cast to a completely disparate type).
A more general consequence of these two is that Java generics are half-baked, and in some less trivial cases, such as reflection or serialization, they may "surprise" you. It is in fact safer to use plain arrays than generics.
package tld.example;

import java.util.List;
import java.util.ArrayList;

class Example {

    static void testArray(long[] longs) {
        System.out.println("testArray");
    }

    static void testGeneric(List<Long> longs) {
        System.out.println("testGeneric");
    }

    @SuppressWarnings("unchecked")
    public static void main(String... arguments) {
        List<Long> fakeLongs = new ArrayList<Long>();
        List<Object> mischiefManaged = (List<Object>) (Object) fakeLongs;
        mischiefManaged.add(new Object());
        // this call succeeds even though we sneaked an Object into a List<Long>;
        // the wrong type goes completely unnoticed
        testGeneric(fakeLongs);

        long[] realLongs = new long[1];
        // this cast fails at runtime with a ClassCastException, because a long[]
        // is not an Object[], even though the compiler accepts the double cast
        Object[] forgedLongs = (Object[]) (Object) realLongs;
        forgedLongs[0] = new Object();
        testArray(realLongs);
    }
}
This example is a bit contrived because it is difficult to come up with a short convincing example, but trust me, in less trivial cases, when you have to use reflection and unsafe casts this is quite a possibility.
Now, you have to consider that besides what is reasonable, there is also tradition. Every community has its set of customs and traditions. There are a lot of superficial beliefs, such as those voiced here, e.g. when someone claims that implementing the List API is an unconditional good, and that if this does not happen, then it must be bad... This is not just a dominant opinion, it is what the overwhelming majority of Java programmers believe. After all, it doesn't matter that much, and Java as a language has plenty of other shortcomings... so, if you want to get through your job interview or simply avoid conflicts with other Java programmers, use Java generics, no matter the reason. But if you don't like it - well, perhaps just use some other language ;)
long[] is both much faster and takes much less memory. Read "Effective Java" Item 49: "Prefer primitive types to boxed primitives"
I think it is better to use an ArrayList because it is much easier to maintain in the long run. If in the future the size of your array needs to grow beyond 400, maintaining a long[] has a lot of overhead, whereas an ArrayList grows dynamically, so you don't need to worry about the increased size.
Also, deletion of elements is handled much better by an ArrayList than by a plain array (long[]), as it automatically shifts the remaining elements so that they still appear as an ordered sequence.
Plain arrays are worse at this.
Why not use one of the many list implementations that work on primitive types? TLongArrayList from the Trove library, for instance. It can be used much like Java's List but is backed by a long[] array, so you get the advantages of both sides.
HPPC has a short overview of some libraries: https://github.com/carrotsearch/hppc/blob/master/ALTERNATIVES.txt
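A rough sketch of what that could look like with Trove (the exact package and constructor may differ between versions, so treat this as illustrative):

import gnu.trove.list.array.TLongArrayList;

// Backed by a long[] internally, so the individual values are never boxed.
TLongArrayList values = new TLongArrayList(400);   // initial capacity
for (int i = 0; i < 400; i++) {
    values.add(i);                // stores a primitive long
}
values.set(10, 99L);              // modify in place
long tenth = values.get(10);      // reads back a primitive long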
Since Java 5, we've had boxing/unboxing of primitive types so that int is wrapped to be java.lang.Integer, and so on and so forth.
I see a lot of new Java projects lately (that definitely require a JRE of at least version 5, if not 6) that are using int rather than java.lang.Integer, though it's much more convenient to use the latter, as it has a few helper methods for converting to long values et al.
Why do some still use primitive types in Java? Is there any tangible benefit?
In Joshua Bloch's Effective Java, Item 5: "Avoid creating unnecessary objects", he posts the following code example:
public static void main(String[] args) {
    Long sum = 0L; // uses Long, not long
    for (long i = 0; i <= Integer.MAX_VALUE; i++) {
        sum += i;
    }
    System.out.println(sum);
}
and it takes 43 seconds to run. Changing the Long to a primitive long brings it down to 6.8 seconds... if that's any indication of why we use primitives.
The lack of native value equality is also a concern (.equals() is fairly verbose compared to ==)
for biziclop:
class Biziclop {

    public static void main(String[] args) {
        System.out.println(new Integer(5) == new Integer(5));
        System.out.println(new Integer(500) == new Integer(500));
        System.out.println(Integer.valueOf(5) == Integer.valueOf(5));
        System.out.println(Integer.valueOf(500) == Integer.valueOf(500));
    }
}
Results in:
false
false
true
false
EDIT Why does (3) return true and (4) return false?
Because they are two different objects. The 256 integers closest to zero [-128; 127] are cached by the JVM, so they return the same object for those. Beyond that range, though, they aren't cached, so a new object is created. To make things more complicated, the JLS demands that at least 256 flyweights be cached. JVM implementers may add more if they desire, meaning this could run on a system where the nearest 1024 are cached and all of them return true... #awkward
Autounboxing can lead to hard to spot NPEs
Integer in = null;
...
...
int i = in; // NPE at runtime
In most situations the null assignment to in is a lot less obvious than above.
Boxed types have poorer performance and require more memory.
Primitive types:
int x = 1000;
int y = 1000;
Now evaluate:
x == y
It's true. Hardly surprising. Now try the boxed types:
Integer x = 1000;
Integer y = 1000;
Now evaluate:
x == y
It's false. Probably. Depends on the runtime. Is that reason enough?
Besides performance and memory issues, I'd like to come up with another issue: The List interface would be broken without int.
The problem is the overloaded remove() method (remove(int) vs. remove(Object)). remove(Integer) would always resolve to calling the latter, so you could not remove an element by index.
On the other hand, there is a pitfall when trying to add and remove an int:
final int i = 42;
final List<Integer> list = new ArrayList<Integer>();
list.add(i); // add(Object)
list.remove(i); // remove(int) - Ouch!
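If the intent is actually to remove the value 42 rather than the element at index 42, one way is to force the remove(Object) overload explicitly:

final int i = 42;
final List<Integer> list = new ArrayList<Integer>();
list.add(i);                       // add(Object): the int is autoboxed

list.remove(Integer.valueOf(i));   // remove(Object): removes the value 42
// list.remove(i);                 // remove(int): would try to remove index 42 instead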
Can you really imagine a
for (int i = 0; i < 10000; i++) {
    // do something
}
loop with java.lang.Integer instead? A java.lang.Integer is immutable, so each increment round the loop would create a new java object on the heap, rather than just increment the int on the stack with a single JVM instruction. The performance would be diabolical.
I would really disagree that it's much more convenient to use java.lang.Integer than int. On the contrary. Autoboxing means that you can use int where you would otherwise be forced to use Integer, and the Java compiler takes care of inserting the code to create the new Integer object for you. Autoboxing is all about allowing you to use an int where an Integer is expected, with the compiler inserting the relevant object construction. It in no way removes or reduces the need for the int in the first place. With autoboxing you get the best of both worlds. You get an Integer created for you automatically when you need a heap-based Java object, and you get the speed and efficiency of an int when you are just doing arithmetic and local calculations.
Primitive types are much faster:
int i;
i++;
Integer (like all the Number classes, and also String) is an immutable type: once created it cannot be changed. If i were an Integer, then i++ would create a new Integer object - much more expensive in terms of memory and processor.
First and foremost, habit. If you've coded in Java for eight years, you accumulate a considerable amount of inertia. Why change if there is no compelling reason to do so? It's not as if using boxed primitives comes with any extra advantages.
The other reason is to assert that null is not a valid option. It would be pointless and misleading to declare the sum of two numbers or a loop variable as Integer.
There's the performance aspect too: while the performance difference isn't critical in many cases (though when it is, it's pretty bad), nobody likes to write code that could be written just as easily in a faster way we're already used to.
By the way, Smalltalk has only objects (no primitives), and yet they had optimized their small integers (using not all 32 bits, only 27 or such) to not allocate any heap space, but simply use a special bit pattern. Also other common objects (true, false, null) had special bit patterns here.
So, at least on 64-bit JVMs (with a 64 bit pointer namespace) it should be possible to not have any objects of Integer, Character, Byte, Short, Boolean, Float (and small Long) at all (apart from these created by explicit new ...()), only special bit patterns, which could be manipulated by the normal operators quite efficiently.
I can't believe no one has mentioned what I think is the most important reason:
"int" is so, so much easier to type than "Integer". I think people underestimate the importance of a concise syntax. Performance isn't really a reason to avoid them because most of the time when one is using numbers is in loop indexes, and incrementing and comparing those costs nothing in any non-trivial loop (whether you're using int or Integer).
The other given reason was that you can get NPEs but that's extremely easy to avoid with boxed types (and it is guaranteed to be avoided as long as you always initialize them to non-null values).
The other reason was that (new Long(1000)) == (new Long(1000)) is false, but that's just another way of saying that .equals has no syntactic support for boxed types (unlike the operators <, >, ==, etc.), so we come back to the "simpler syntax" reason.
I think Steve Yegge's non-primitive loop example illustrates my point very well:
http://sites.google.com/site/steveyegge2/language-trickery-and-ejb
Think about this: how often do you use function types in languages that have good syntax for them (like any functional language, Python, Ruby, and even C) compared to Java, where you have to simulate them using interfaces such as Runnable and Callable and anonymous classes.
A couple of reasons not to get rid of primitives:
Backwards compatibility.
If primitives were eliminated, old programs wouldn't even run.
JVM rewrite.
The entire JVM would have to be rewritten to support this new thing.
Larger memory footprint.
You'd need to store the value and the reference, which uses more memory. If you have a huge array of bytes, using byte is significantly smaller than using Byte.
Null pointer issues.
Declaring int i then doing stuff with i would result in no issues, but declaring Integer i and then doing the same would result in an NPE.
Equality issues.
Consider this code:
Integer i1 = 500;
Integer i2 = 500;
System.out.println(i1 == i2); // false: compares references, not values
It prints false (at least for values outside the small cache of boxed integers around zero). Operators would have to be overloaded, and that would result in a major rewrite of stuff.
Slow
Object wrappers are significantly slower than their primitive counterparts.
Objects are much more heavyweight than primitive types, so primitive types are much more efficient than instances of wrapper classes.
Primitive types are very simple: for example an int is 32 bits and takes up exactly 32 bits in memory, and can be manipulated directly. An Integer object is a complete object, which (like any object) has to be stored on the heap, and can only be accessed via a reference (pointer) to it. It most likely also takes up more than 32 bits (4 bytes) of memory.
That said, the fact that Java has a distinction between primitive and non-primitive types is also a sign of age of the Java programming language. Newer programming languages don't have this distinction; the compiler of such a language is smart enough to figure out by itself if you're using simple values or more complex objects.
For example, in Scala there are no primitive types; there is a class Int for integers, and an Int is a real object (that you can call methods on, etc.). When the compiler compiles your code, it uses primitive ints behind the scenes, so using an Int is just as efficient as using a primitive int in Java.
In addition to what others have said, primitive local variables are not allocated from the heap, but instead on the stack. But objects are allocated from the heap and thus have to be garbage collected.
It's hard to know what kind of optimizations are going on under the covers.
For local use, when the compiler has enough information to make optimizations excluding the possibility of the null value, I expect the performance to be the same or similar.
However, arrays of primitives are apparently very different from collections of boxed primitives. This makes sense given that very few optimizations are possible deep within a collection.
Furthermore, Integer has a much higher logical overhead compared with int: now you have to worry about whether or not int a = b + c; throws an exception.
I'd use the primitives as much as possible and rely on the factory methods and autoboxing to give me the more semantically powerful boxed types when they are needed.
int loops = 100000000;
long start = System.currentTimeMillis();
for (Long l = new Long(0); l < loops; l++) {
    //System.out.println("Long: " + l);
}
System.out.println("Milliseconds taken to loop '" + loops + "' times around Long: " + (System.currentTimeMillis() - start));

start = System.currentTimeMillis();
for (long l = 0; l < loops; l++) {
    //System.out.println("long: " + l);
}
System.out.println("Milliseconds taken to loop '" + loops + "' times around long: " + (System.currentTimeMillis() - start));
Milliseconds taken to loop '100000000' times around Long: 468
Milliseconds taken to loop '100000000' times around long: 31
On a side note, I wouldn't mind seeing something like this find its way into Java.
Integer loop1 = new Integer(0);
for (loop1.lessThan(1000)) {
...
}
Where the for loop automatically increments loop1 from 0 to 1000
or
Integer loop1 = new Integer(1000);
for (loop1.greaterThan(0)) {
...
}
Where the for loop automatically decrements loop1 from 1000 to 0.
Primitive types have many advantages:
Simpler code to write
Performance is better since you are not instantiating an object for the variable
Since they do not represent a reference to an object there is no need to check for nulls
Use primitive types unless you need to take advantage of the boxing features.
You need primitives for doing mathematical operations.
Primitives take less memory, as answered above, and perform better.
You should ask why the class/object type is required at all.
The reason for having object types is to make our lives easier when we deal with collections. Primitives cannot be added directly to a List/Map; rather, you need a wrapper class. Ready-made wrapper classes like Integer help you here, and they also have many utility methods, like Integer.parseInt(str).
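A small illustration of both points (wrapper types in collections, plus a utility method on the wrapper class):

List<Integer> scores = new ArrayList<Integer>();
scores.add(42);                        // the int 42 is autoboxed to an Integer
int first = scores.get(0);             // and unboxed again when read back

int parsed = Integer.parseInt("123");  // utility method on the wrapper class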
I agree with the previous answers that using primitive wrapper objects can be expensive.
But if performance is not critical in your application, using objects lets you avoid some overflow bugs. For example:
long bigNumber = Integer.MAX_VALUE + 2;
The value of bigNumber is -2147483647, and you would expect it to be 2147483649. It's a bug in the code that would be fixed by doing:
long bigNumber = Integer.MAX_VALUE + 2L; // note that '2' is a long literal now (2L).
And bigNumber would be 2147483649. These kinds of bugs are sometimes easy to miss and can lead to unknown behavior or vulnerabilities (see CWE-190).
If you use wrapper objects, the equivalent code won't compile.
Long bigNumber = Integer.MAX_VALUE + 2; // Not compiling
So it's easier to catch these kinds of issues by using primitive wrapper objects.
Your question is so answered already, that I reply just to add a little bit more information not mentioned before.
Because Java performs all mathematical operations on primitive types. Consider this example:
public static int sumEven(List<Integer> li) {
    int sum = 0;
    for (Integer i : li)
        if (i % 2 == 0)
            sum += i;
    return sum;
}
Here, the remainder (%) and compound addition (+=) operations cannot be applied to the Integer (reference) type directly, so the compiler performs unboxing and then does the operations.
So, be aware of how many autoboxing and unboxing operations happen in a Java program, since these operations take time.
Generally, it is better to keep arguments as reference types and results as primitive types.
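For comparison, a variant that stays in primitives as much as possible (the int[] overload is just one illustrative alternative):

// Unbox each element exactly once into a local int...
public static int sumEven(List<Integer> li) {
    int sum = 0;
    for (Integer boxed : li) {
        int i = boxed;            // single unboxing per element
        if (i % 2 == 0) {
            sum += i;             // pure primitive arithmetic from here on
        }
    }
    return sum;
}

// ...or avoid the boxed list entirely by accepting an int[].
public static int sumEven(int[] values) {
    int sum = 0;
    for (int i : values) {        // no boxing at all
        if (i % 2 == 0) {
            sum += i;
        }
    }
    return sum;
}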
The primitive types are much faster and require much less memory. Therefore, we might want to prefer using them.
On the other hand, current Java language specification doesn’t allow usage of primitive types in the parameterized types (generics), in the Java collections or the Reflection API.
When our application needs collections with a big number of elements, we should consider using arrays with as "economical" a type as possible.
*For detailed info see the source: https://www.baeldung.com/java-primitives-vs-objects
To be brief: primitive types are faster and require less memory than boxed ones