In Java 8, there is a new method String.chars() which returns a stream of ints (IntStream) that represent the character codes. I guess many people would expect a stream of chars here instead. What was the motivation to design the API this way?
As others have already mentioned, the design decision behind this was to prevent the explosion of methods and classes.
Still, I personally think this was a very bad decision. Given that they did not want to make a CharStream, which is reasonable, there should have been different methods instead of chars(); I would think of:
Stream<Character> chars(), which gives a stream of boxed characters and has a slight performance penalty.
IntStream unboxedChars(), which would be used for performance-critical code.
However, instead of focusing on why it is done this way currently, I think this answer should focus on showing a way to do it with the API that we actually got in Java 8.
In Java 7 I would have done it like this:
for (int i = 0; i < hello.length(); i++) {
    System.out.println(hello.charAt(i));
}
And I think a reasonable method to do it in Java 8 is the following:
hello.chars()
.mapToObj(i -> (char)i)
.forEach(System.out::println);
Here I obtain an IntStream and map it to an object via the lambda i -> (char)i; this automatically boxes it into a Stream<Character>, on which we can then do whatever we want, and still use method references as a plus.
Be aware though that you must use mapToObj; if you forget and use map instead, nothing will complain, but you will still end up with an IntStream, and you might be left wondering why it prints the integer values instead of the characters.
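To make that pitfall concrete, here is a minimal sketch (assuming hello is a String variable, as in the snippets above) contrasting map with mapToObj:
// map keeps the IntStream, so the numeric codes are printed:
hello.chars()
     .map(i -> (char) i)              // the char is widened straight back to an int
     .forEach(System.out::println);   // 104, 101, 108, 108, 111 for "hello"
// mapToObj boxes into a Stream<Character>, so the characters are printed:
hello.chars()
     .mapToObj(i -> (char) i)
     .forEach(System.out::println);   // h, e, l, l, o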
Other ugly alternatives for Java 8:
If you stay in an IntStream and ultimately want to print the values, you can no longer use a method reference for printing:
hello.chars()
.forEach(i -> System.out.println((char)i));
Moreover, using a method reference to your own method does not work anymore! Consider the following:
private void print(char c) {
    System.out.println(c);
}
and then
hello.chars()
.forEach(this::print);
This will give a compile error, because the implicit conversion from int to char is possibly lossy.
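If you do want to keep using your own print(char) method, one workaround (a sketch, not what the conclusion below recommends) is to make the narrowing cast explicit inside a lambda:
hello.chars()
     .forEach(i -> print((char) i));   // explicit cast, so no lossy-conversion error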
Conclusion:
The API was designed this way because they did not want to add a CharStream. I personally think the method should have returned a Stream<Character>, and the current workaround is to use mapToObj(i -> (char) i) on the IntStream to be able to work with the characters properly.
The answer from skiwi covered many of the major points already. I'll fill in a bit more background.
The design of any API is a series of tradeoffs. In Java, one of the difficult issues is dealing with design decisions that were made long ago.
Primitives have been in Java since 1.0. They make Java an "impure" object-oriented language, since the primitives are not objects. The addition of primitives was, I believe, a pragmatic decision to improve performance at the expense of object-oriented purity.
This is a tradeoff we're still living with today, nearly 20 years later. The autoboxing feature added in Java 5 mostly eliminated the need to clutter source code with boxing and unboxing method calls, but the overhead is still there. In many cases it's not noticeable. However, if you were to perform boxing or unboxing within an inner loop, you'd see that it can impose significant CPU and garbage collection overhead.
When designing the Streams API, it was clear that we had to support primitives. The boxing/unboxing overhead would kill any performance benefit from parallelism. We didn't want to support all of the primitives, though, since that would have added a huge amount of clutter to the API. (Can you really see a use for a ShortStream?) "All" or "none" are comfortable places for a design to be, yet neither was acceptable. So we had to find a reasonable value of "some". We ended up with primitive specializations for int, long, and double. (Personally I would have left out int but that's just me.)
For CharSequence.chars() we considered returning Stream<Character> (an early prototype might have implemented this) but it was rejected because of boxing overhead. Considering that a String has char values as primitives, it would seem to be a mistake to impose boxing unconditionally when the caller would probably just do a bit of processing on the value and unbox it right back into a string.
We also considered a CharStream primitive specialization, but its use would seem to be quite narrow compared to the amount of bulk it would add to the API. It didn't seem worthwhile to add it.
The penalty this imposes on callers is that they have to know that the IntStream contains char values represented as ints and that casting must be done at the proper place. This is doubly confusing because there are overloaded API calls like PrintStream.print(char) and PrintStream.print(int) that differ markedly in their behavior. An additional point of confusion possibly arises because the codePoints() call also returns an IntStream but the values it contains are quite different.
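A small sketch of that last point, using a string containing a character outside the Basic Multilingual Plane (stored as a surrogate pair):
String s = "a\uD83D\uDE00";                    // 'a' followed by U+1F600
System.out.println(s.chars().count());         // 3: UTF-16 code units, including both surrogates
System.out.println(s.codePoints().count());    // 2: code points, with U+1F600 kept whole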
So, this boils down to choosing pragmatically among several alternatives:
We could provide no primitive specializations, resulting in a simple, elegant, consistent API, but which imposes a high performance and GC overhead;
we could provide a complete set of primitive specializations, at the cost of cluttering up the API and imposing a maintenance burden on JDK developers; or
we could provide a subset of primitive specializations, giving a moderately sized, high performing API that imposes a relatively small burden on callers in a fairly narrow range of use cases (char processing).
We chose the last one.
Related
I need to modify a local variable inside a lambda expression in a JButton's ActionListener and since I'm not able to modify it directly, I came across the AtomicInteger type.
I implemented it and it works just fine but I'm not sure if this is a good practice or if it is the correct way to solve this situation.
My code is the following:
newAnchorageButton.addActionListener(e -> {
    AtomicInteger anchored = new AtomicInteger();
    anchored.set(0);
    cbSets.forEach(cbSet ->
        cbSet.forEach(cb -> {
            if (cb.isSelected())
                anchored.incrementAndGet();
        })
    );
    // more code where I use the 'anchored' variable...
});
I'm not sure if this is the right way to solve this since I've read that AtomicInteger is used mostly for concurrency-related applications and this program is single-threaded, but at the same time I can't find another way to solve this.
I could simply use two nested for-loops to go over those arrays, but I'm trying to reduce the method's cognitive complexity as much as I can according to the sonarlint vscode extension, and leaving those for-loops theoretically increases the method's complexity and therefore hurts its readability and maintainability.
Replacing the for-loops with lambda expressions reduces the cognitive complexity but maybe I shouldn't pay that much attention to it.
While it is safe enough in single-threaded code, it would be better to count them in a functional way, like this:
long anchored = cbSets.stream() // get a stream of the sets
.flatMap(List::stream) // flatten to list of cb's
.filter(JCheckBox::isSelected) // only selected ones
.count(); // count them
Instead of mutating an accumulator, we limit the flattened stream to only the ones we're interested in and ask for the count.
More generally, though, it is always possible to sum things up or generally aggregate the values without a mutable variable. Consider:
record Country(int population) { }

// assuming a List<Country> named countries
int totalPopulation = countries.stream()
        .mapToInt(Country::population)
        .reduce(0, Math::addExact);
Note: we never mutate any values; instead, we combine each successive value with the preceding one, producing a new value. One could use sum() but I prefer reduce(0, Math::addExact) to avoid the possibility of overflow.
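A minimal sketch of the difference (the second statement would throw instead of silently wrapping around):
int wrapped = IntStream.of(Integer.MAX_VALUE, 1).sum();                      // -2147483648, silent overflow
int checked = IntStream.of(Integer.MAX_VALUE, 1).reduce(0, Math::addExact);  // throws ArithmeticException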
and leaving those for-loops theoretically increases the method's complexity and therefore hurts its readability and maintainability.
This is obvious horsepuckey. x.forEach(foo -> bar) is not 'cognitively simpler' than for (var foo : x) bar; - you can map each AST node straight over from one to the other.
If a definition is being used to define complexity which concludes that one is significantly more complex than the other, then the only correct conclusion is that the definition is silly and should be fixed or abandoned.
To make it practical: yes, introducing AtomicInteger, while performance-wise it won't make one iota of difference, does make the code way more complicated. The mere presence of AtomicInteger in the code suggests that concurrency is relevant here. It isn't, so you'd have to add a comment to explain why you're using it. Comments are evil (they imply the code does not speak for itself, and they cannot be tested in any way). They are often the least evil option, but evil they are nonetheless.
The general 'trick' for keeping lambda-based code cognitively easy to follow is to embrace the pipeline (a short sketch follows the list):
You write some code that 'forms' a stream. This can be as simple as list.stream(), but sometimes you do some stream joining or flatmapping a collection of collections.
You have a pipeline of operations that operate on single elements in the stream and do not refer to the whole or to any neighbour.
At the end, you reduce (using collect, reduce, max - some terminator) such that the reducing method returns what you need.
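For instance, a minimal sketch of those three steps, assuming a hypothetical List<List<String>> named wordGroups:
List<String> upperCased = wordGroups.stream()    // 1. form the stream
        .flatMap(List::stream)                   // 1. flatten the collection of collections
        .map(String::toUpperCase)                // 2. per-element operation, no shared state
        .collect(Collectors.toList());           // 3. terminate with a reduction/collector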
The above model (and the other answer follows it precisely) tends to result in code that is as readable/complex as the 'old style' code, and rarely (but sometimes!) more readable, and significantly less complicated. Deviate from it and the result is virtually always considerably more complicated - a clear loser.
Not all for loops in Java fit the above model. If it doesn't fit, then trying to force that particular square peg into the round hole will take a lot of effort and almost always results in code that is significantly worse: either an order of magnitude slower or considerably more cognitively complicated.
It also means that it is virtually never 'worth' rewriting perfectly fine readable non-stream based code into stream based code; at best it becomes a percentage point more readable according to some personal tastes, with no significant universally agreed upon improvement.
Turn off that silly linter rule. The fact that it considers the above 'less' complex, and that it evidently determines that for (var foo : x) bar; is 'more complicated' than x.forEach(foo -> bar) is proof enough that it's hurting way more than it is helping.
I have the following to add to the two other answers:
Two general good practices in your code are in question:
Lambdas shouldn't be longer than 3-4 lines
Except in some precise cases, lambdas of stream operations should be stateless.
For #1, consider extracting the code of the lambda to a private method for example, when it's getting too long.
You will probably gain in readability, and you will also probably gain in better separating UI from business logic.
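For example, a hedged sketch of what such an extraction could look like for the listener in the question (onNewAnchorage and countSelected are made-up names):
newAnchorageButton.addActionListener(this::onNewAnchorage);

private void onNewAnchorage(ActionEvent e) {
    long anchored = countSelected();   // business logic lives in its own, testable method
    // ... update the UI with 'anchored' ...
}

private long countSelected() {
    return cbSets.stream()
            .flatMap(List::stream)
            .filter(JCheckBox::isSelected)
            .count();
}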
For #2, you are probably not concerned since you are working in a single thread at the moment, but streams can be parallelized, and they may not always execute exactly as you think they do.
For that reason, it's always better to keep the code stateless in stream pipeline operations. Otherwise you might be surprised.
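A minimal sketch of the kind of surprise meant here (values is an assumed List<Integer>):
// Stateful lambda: mutates a non-thread-safe list from a parallel stream; results may be wrong or non-deterministic.
List<Integer> sink = new ArrayList<>();
values.parallelStream().forEach(sink::add);

// Stateless alternative: let the terminal operation do the accumulating.
List<Integer> collected = values.parallelStream().collect(Collectors.toList());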
More generally, streams are very good, very concise, but sometimes it's just better to do the same with good old loops.
Don't hesitate to come back to classic loops.
When Sonar tells you that the complexity is too high, in fact, you should try to factorize your code: split into smaller methods, improve the model of your objects, etc.
In Java8 streams I can use the mapToInt method to create an IntStream, which will return OptionalInts for some actions (like findFirst). Why isn't there anything similar in Optional?
int i = Stream
.of("1") // just as an example
.mapToInt(Integer::parseInt) // mapToInt exists for streams
.findFirst() // this even returns an OptionalInt!
.getAsInt(); // quite handy
int j = Optional
.of("1") // same example
.map(Integer::parseInt) // no mapToInt available
.get().intValue(); // not as handy as for streams
Apparently a handful of additional methods will appear in Optional in Java 9. However, it's unlikely that mapToInt will be added. I discussed this problem several days ago on core-libs-dev. Here's Paul Sandoz's answer:
I don’t wanna go there, my response is transform Optional* into a *Stream. An argument for adding mapOrElseGet (notice that the primitive variants return U) is that other functionality can be composed from it.
And later:
I think it’s fine to to pollute OptionalInt etc with Optional but i want to avoid it for the other direction.
In general I think it's reasonable. The purpose of primitive streams is to improve performance when you process many primitive values. For Optional, however, the performance gain of using the primitive value is quite marginal, if any (there is a much bigger chance, compared to streams, that the extra boxing will be optimized out by the JIT compiler). Also, even though project Valhalla will not appear in Java 9, it's gradually moving forward, and it's possible that in Java 10 we will finally see generics over primitives, so these primitive optionals will become completely unnecessary. In this context, adding more interoperability between the object Optional and the primitive OptionalInt seems unnecessary.
It makes sense to have specializations in the Stream API as a stream may represent bulk operations processing millions of elements, thus the performance impact can be dramatic. But as far as I know, even this decision wasn’t without a controversy.
For an Optional, carrying at most one element, the performance impact is not justifying additional APIs (if there ever is an impact). It’s not quite clear whether OptionalInt, etc. are really necessary at all.
Regarding the convenience, I can’t get your point. The following works:
int j = Optional.of("1").map(Integer::parseInt).get();
your proposal is to add another API which allows rewriting the above statement as
int j = Optional.of("1").mapToInt(Integer::parseInt).getAsInt();
I don’t see how this raises the convenience…
But following the logic, with Java 9, you can write
int j = Optional.of("1").stream().mapToInt(Integer::parseInt).findFirst().getAsInt();
which raises this kind of “convenience” even more…
The following also works quite nicely:
int j = Optional
.of("1")
.map(Integer::parseInt)
.map(OptionalInt::of) // -> Optional<OptionalInt>
.orElse(OptionalInt.empty()) // -> OptionalInt
.getAsInt();
The trick is to map Optional<Integer> to Optional<OptionalInt> and then unwrap the inner OptionalInt. Thus, as far as the Optional is concerned, no primitive int is involved, so it works with Java generics.
One advantage of this approach is that it doesn't need autoboxing. That's an important point for me, as we enabled warnings for autoboxing in our project, primarily to prevent unnecessary boxing and unboxing operations.
I stumbled upon this problem and the solution while implementing a public method that returns an OptionalInt, where the implementation uses another method returning Optional<Something>.
I didn't want to return Optional<Integer> from my public method, so I searched for something like Optional.mapToInt just like you.
But after reading the responses of Holger and Tagir Valeev, I agree that it's perfectly reasonable to omit such a method in the JDK. There are enough alternatives available.
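For reference, a hedged sketch of the shape described above (Something, findSomething and weight() are made-up names, not part of any real API):
public OptionalInt findWeight(String key) {
    return findSomething(key)                       // Optional<Something>
            .map(s -> OptionalInt.of(s.weight()))   // Optional<OptionalInt>
            .orElse(OptionalInt.empty());           // OptionalInt, no boxed Integer escapes
}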
I'm reviewing the API changes for Java 8 and I noticed that the new methods in java.util.Arrays are not overloaded for all primitives. The methods I noticed are:
parallelSetAll
parallelPrefix
spliterator
stream
Currently these new methods only handle int, long, and double primitives.
int, long, and double are probably the most widely used primitives so it makes sense that if they had to limit the API that they would choose those three, but why did they have to limit the API?
To address the questions as a whole, and not just this particular scenario, I think we all want to know...
Why There's Interface Pollution in Java 8
For instance, in a language like C#, there is a set of predefined function types accepting any number of arguments with an optional return type (Func and Action, each going up to 16 parameters of different types T1, T2, T3, ..., T16). In JDK 8, by contrast, what we have is a set of different functional interfaces, with different names and different method names, whose abstract methods represent a subset of well-known function arities (i.e. nullary, unary, binary, ternary, etc.). And then we have an explosion of cases dealing with primitive types, and there are even other scenarios causing an explosion of yet more functional interfaces.
The Type Erasure Issue
So, in a way, both languages suffer from some form of interface pollution (or delegate pollution in C#). The only difference is that in C# they all have the same name. In Java, unfortunately, due to type erasure, there is no difference between Function<T1,T2> and Function<T1,T2,T3> or Function<T1,T2,T3,...Tn>, so evidently, we couldn't simply name them all the same way and we had to come up with creative names for all possible types of function combinations. For further reference on this, please refer to How we got the generics we have by Brian Goetz.
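As a side illustration of the general problem erasure creates (this is not the function-type case itself, just a minimal sketch of why same-looking generic signatures collide):
// These two overloads erase to the same signature, so the compiler rejects the pair:
void call(Consumer<String> c)  { }
// void call(Consumer<Integer> c) { }   // error: both methods have the same erasure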
Don't think the expert group did not struggle with this problem. In the words of Brian Goetz in the lambda mailing list:
[...] As a single example, let's take function types. The lambda
strawman offered at devoxx had function types. I insisted we remove
them, and this made me unpopular. But my objection to function types
was not that I don't like function types -- I love function types --
but that function types fought badly with an existing aspect of the
Java type system, erasure. Erased function types are the worst of
both worlds. So we removed this from the design.
But I am unwilling to say "Java never will have function types"
(though I recognize that Java may never have function types.) I
believe that in order to get to function types, we have to first deal
with erasure. That may, or may not be possible. But in a world of
reified structural types, function types start to make a lot more
sense [...]
An advantage of this approach is that we can define our own interface types with methods accepting as many arguments as we would like, and we could use them to create lambda expressions and method references as we see fit. In other words, we have the power to pollute the world with yet even more new functional interfaces. Also, we can create lambda expressions even for interfaces in earlier versions of the JDK or for earlier versions of our own APIs that defined SAM types like these. And so now we have the power to use Runnable and Callable as functional interfaces.
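For example, both of these pre-Java-8 SAM types can be targeted by lambdas directly:
Runnable task = () -> System.out.println("running");   // java.lang.Runnable, unchanged since 1.0
Callable<String> answer = () -> "42";                   // java.util.concurrent.Callable, from Java 5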
However, these interfaces become more difficult to memorize since they all have different names and methods.
Still, I am one of those wondering why they didn't solve the problem as in Scala, defining interfaces like Function0, Function1, Function2, ..., FunctionN. Perhaps, the only argument I can come up with against that is that they wanted to maximize the possibilities of defining lambda expressions for interfaces in earlier versions of the APIs as mentioned before.
Lack of Value Types Issue
So, evidently, type erasure is one driving force here. But if you are one of those wondering why we also need all these additional functional interfaces with similar names and method signatures whose only difference is the use of a primitive type, then let me remind you that in Java we also lack value types like those in a language such as C#. This means that the generic types used in our generic classes can only be reference types and not primitive types.
In other words, we can't do this:
List<int> numbers = asList(1,2,3,4,5);
But we can indeed do this:
List<Integer> numbers = asList(1,2,3,4,5);
The second example, though, incurs the cost of boxing and unboxing the wrapped objects back and forth to and from primitive types. This can become really expensive in operations dealing with collections of primitive values. So, the expert group decided to create this explosion of interfaces to deal with the different scenarios. To make things "less worse" they decided to only deal with three basic types: int, long and double.
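A small sketch of that boxing cost, reusing the numbers list from above: the first pipeline unboxes and re-boxes an Integer for every addition, while the second stays in the int domain after a single unboxing step per element:
int boxedSum = numbers.stream().reduce(0, Integer::sum);                  // boxed arithmetic per element
int primitiveSum = numbers.stream().mapToInt(Integer::intValue).sum();    // primitive specialization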
Quoting the words of Brian Goetz in the lambda mailing list:
[...] More generally: the philosophy behind having specialized
primitive streams (e.g., IntStream) is fraught with nasty tradeoffs.
On the one hand, it's lots of ugly code duplication, interface
pollution, etc. On the other hand, any kind of arithmetic on boxed ops
sucks, and having no story for reducing over ints would be terrible.
So we're in a tough corner, and we're trying to not make it worse.
Trick #1 for not making it worse is: we're not doing all eight
primitive types. We're doing int, long, and double; all the others
could be simulated by these. Arguably we could get rid of int too, but
we don't think most Java developers are ready for that. Yes, there
will be calls for Character, and the answer is "stick it in an int."
(Each specialization is projected to ~100K to the JRE footprint.)
Trick #2 is: we're using primitive streams to expose things that are
best done in the primitive domain (sorting, reduction) but not trying
to duplicate everything you can do in the boxed domain. For example,
there's no IntStream.into(), as Aleksey points out. (If there were,
the next question(s) would be "Where is IntCollection? IntArrayList?
IntConcurrentSkipListMap?) The intention is many streams may start as
reference streams and end up as primitive streams, but not vice versa.
That's OK, and that reduces the number of conversions needed (e.g., no
overload of map for int -> T, no specialization of Function for int
-> T, etc.) [...]
We can see that this was a difficult decision for the expert group. I think few would agree that this is elegant, but most of us would most likely agree it was necessary.
For further reference on the subject you may want to read The State of Value Types by John Rose, Brian Goetz, and Guy Steele.
The Checked Exceptions Issue
There was a third driving force that could have made things even worse, and it is the fact that Java supports two kinds of exceptions: checked and unchecked. The compiler requires that we handle or explicitly declare checked exceptions, but it requires nothing for unchecked ones. This creates an interesting problem, because the method signatures of most of the functional interfaces do not declare any checked exceptions. So, for instance, this is not possible:
Writer out = new StringWriter();
Consumer<String> printer = s -> out.write(s); //oops! compiler error
It cannot be done because the write operation throws a checked exception (i.e. IOException), but the signature of the Consumer method does not declare that it throws any exception at all. So, the only solution to this problem would have been to create even more interfaces, some declaring exceptions and some not (or to come up with yet another mechanism at the language level for exception transparency). Again, to make things "less worse", the expert group decided to do nothing in this case.
In the words of Brian Goetz in the lambda mailing list:
[...] Yes, you'd have to provide your own exceptional SAMs. But then
lambda conversion would work fine with them.
The EG discussed additional language and library support for this
problem, and in the end felt that this was a bad cost/benefit
tradeoff.
Library-based solutions cause a 2x explosion in SAM types (exceptional
vs not), which interact badly with existing combinatorial explosions
for primitive specialization.
The available language-based solutions were losers from a
complexity/value tradeoff. Though there are some alternative
solutions we are going to continue to explore -- though clearly not
for 8 and probably not for 9 either.
In the meantime, you have the tools to do what you want. I get that
you prefer we provide that last mile for you (and, secondarily, your
request is really a thinly-veiled request for "why don't you just give
up on checked exceptions already"), but I think the current state lets
you get your job done. [...]
So, it's up to us, the developers, to craft yet more interface explosions to deal with these on a case-by-case basis:
interface IOConsumer<T> {
    void accept(T t) throws IOException;
}

static <T> Consumer<T> exceptionWrappingBlock(IOConsumer<T> b) {
    return e -> {
        try { b.accept(e); }
        catch (Exception ex) { throw new RuntimeException(ex); }
    };
}
In order to do:
Writer out = new StringWriter();
Consumer<String> printer = exceptionWrappingBlock(s -> out.write(s));
Probably, in the future when we get Support for Value Types in Java and Reification, we will be able to get rid of (or at least no longer need to use) some of these multiple interfaces.
In summary, we can see that the expert group struggled with several design issues. The need, requirement or constraint to keep backward compatibility made things difficult; on top of that we have other important constraints like the lack of value types, type erasure and checked exceptions. If Java had the first and lacked the other two, the design of JDK 8 would probably have been different. So, we all must understand that these were difficult problems with lots of tradeoffs, and the EG had to draw a line somewhere and make decisions.
I am writing a program which accepts 400 numbers of type long and will modify some of them depending on conditions at runtime and I want to know whether to use ArrayList<Long> or long[].
Which will be faster to use? I am thinking of using long[] because size is fixed.
When the size is fixed, long[] is faster, but it makes for a less maintainable API, because it does not implement the List interface.
Note that a long[] is faster for two reasons (see the sketch after these points):
It uses primitive longs rather than boxed Long objects (which also enables better cache performance, since the longs are allocated contiguously, while the Longs are not guaranteed to be).
An array is a much simpler and more efficient data structure.
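A small sketch of the difference in practice (400 elements, as in the question):
// Primitive array: one contiguous block of 400 longs, summed without any unboxing.
long[] primitives = new long[400];
long sum = 0;
for (long v : primitives) {
    sum += v;
}

// Boxed list: a backing Object[] plus up to 400 separate Long objects; every read unboxes.
List<Long> boxed = new ArrayList<>(Collections.nCopies(400, 0L));
long boxedSum = 0;
for (Long v : boxed) {
    boxedSum += v;
}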
Nevertheless, for simpler maintainability I would use a List<Long>, unless performance is very critical in this part of the program.
If you use this collection very often in a tight loop - and your profiler says it is indeed a bottleneck - I would then switch to a more efficient long[].
As far as the speed goes, it almost does not matter for a list of 400 items. If you need to grow your list dynamically, ArrayList<Long> is better; if the size is fixed, long[] may be better (and a bit faster, although again, you would probably not notice the difference in most situations).
There are a few things which haven't been mentioned in the other answers:
A generic collection is in actuality a collection of Objects, or better said, that is what the Java compiler will make of it, while a long[] will always remain what it is.
A consequence of the first point is that if you do something that eventually puts something other than a Long into your collection, in certain situations the compiler will let it through (because Java's type system is unsound; for example, it will allow you to upcast and then re-cast to a completely disparate type).
A more general consequence of these two points is that Java generics are half-baked, and in some less trivial cases such as reflection or serialization they may "surprise" you. It is in fact safer to use plain arrays than generics.
package tld.example;

import java.util.List;
import java.util.ArrayList;

class Example {

    static void testArray(long[] longs) {
        System.out.println("testArray");
    }

    static void testGeneric(List<Long> longs) {
        System.out.println("testGeneric");
    }

    @SuppressWarnings("unchecked")
    public static void main(String... arguments) {
        List<Long> fakeLongs = new ArrayList<Long>();
        List<Object> mischiefManaged = (List<Object>) (Object) fakeLongs;
        mischiefManaged.add(new Object());
        // this call succeeds even though the list now contains a non-Long;
        // we sneaked a wrong type into this method and it went unnoticed
        testGeneric(fakeLongs);

        long[] realLongs = new long[1];
        // this cast fails at runtime with a ClassCastException,
        // even though the compiler accepts the source
        Object[] forgedLongs = (Object[]) (Object) realLongs;
        forgedLongs[0] = new Object();
        testArray(realLongs);
    }
}
This example is a bit contrived because it is difficult to come up with a short convincing example, but trust me, in less trivial cases, when you have to use reflection and unsafe casts this is quite a possibility.
Now, you have to consider that besides what is reasonable, there is tradition. Every community has a set of its customs and traditions. There are a lot of superficial beliefs, such as those voiced here, e.g. when someone claims that implementing the List API is an unconditional good, and if this does not happen, then it must be bad... This is not just a dominating opinion, this is what the overwhelming majority of Java programmers believe. After all, it doesn't matter that much, and Java, as a language, has a lot of other shortcomings... so, if you want to secure your job interview or simply avoid conflicts with other Java programmers, then use Java generics, no matter the reason. But if you don't like it - well, perhaps just use some other language ;)
long[] is both much faster and takes much less memory. Read "Effective Java" Item 49: "Prefer primitive types to boxed primitives"
I think it is better to use an ArrayList because it is much easier to maintain in the long run. In the future, if the size of your array grows beyond 400, maintaining a long[] has a lot of overhead, whereas an ArrayList grows dynamically so you don't need to worry about the increased size.
Also, deletion of elements is handled in a much better way by ArrayList than by static arrays (long[]), as it automatically reorganizes the remaining elements so that they still appear in order.
Static arrays are worse at this.
Why not use one of the many list implementations that work on primitive types? TLongArrayList, for instance. It can be used just like Java's List but is based on a long[] array, so you get the advantages of both sides.
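A rough usage sketch (this assumes the Trove library; the import path and method names are written from memory and should be treated as assumptions):
// import gnu.trove.list.array.TLongArrayList;

TLongArrayList values = new TLongArrayList(400);   // pre-sized, backed by a long[]
values.add(42L);                                    // takes a primitive long, no boxing
values.set(0, 43L);
long first = values.get(0);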
HPPC has a short overview of some libraries: https://github.com/carrotsearch/hppc/blob/master/ALTERNATIVES.txt