I need to know how to partially sort an array of primitive unique integers in descending order using the Stream API. For example, given an array like {1, 2, 3, 4, 5}, I want to get {5, 4, 3, 1, 2} - the 3 biggest elements first and then the rest. Is it even possible using streams? I checked the docs - there are the two methods skip and limit, but they change the stream content and work from the beginning of the array.
I can sort the whole array like
Arrays.stream(arr)
        .boxed()
        .sorted(Collections.reverseOrder())
        .mapToInt(Integer::intValue)
        .toArray();
but how do I make this sorting partial? I said Stream API because I want it to be written nicely.
Also, I intuitively feel that concat may have a go here. Another approach I could think of is to use a custom comparator limiting the number of sorted elements. What do you think?
P.S. I am not a Java expert.
Though the code is longer than the accepted answer's, it does a lot less sorting; for big arrays this will matter:
private static int[] partiallySorted(int[] input, int bound) {
    int[] result = new int[input.length];
    int i = -1;
    // min-heap holding the `bound` largest elements seen so far
    PriorityQueue<Integer> pq = new PriorityQueue<>(bound, Comparator.naturalOrder());
    for (int x : input) {
        pq.add(x);
        if (pq.size() > bound) {
            // the evicted element is not among the `bound` largest: append it to the tail
            int el = pq.poll();
            result[bound + ++i] = el;
        }
    }
    // drain the heap (ascending) into the head, filling it from right to left
    while (!pq.isEmpty()) {
        result[--bound] = pq.poll();
    }
    return result;
}
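For the example input from the question, a quick illustration of calling it (assuming the method above is in scope):

int[] input = {1, 2, 3, 4, 5};
System.out.println(Arrays.toString(partiallySorted(input, 3))); // [5, 4, 3, 1, 2]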
Here's an approach using streams.
int[] sortPartially(int[] inputArray, int limit) {
    Map<Integer, Long> maxValues = IntStream.of(inputArray)
            .boxed()
            .sorted(Comparator.reverseOrder())
            .limit(limit)
            .collect(Collectors.groupingBy(x -> x, LinkedHashMap::new, Collectors.counting()));
    IntStream head = maxValues.entrySet()
            .stream()
            .flatMapToInt(e -> IntStream.iterate(e.getKey(), i -> i)
                    .limit(e.getValue().intValue()));
    IntStream tail = IntStream.of(inputArray)
            .filter(x -> {
                Long remainingDuplication = maxValues.computeIfPresent(x, (y, count) -> count - 1);
                return remainingDuplication == null || remainingDuplication < 0;
            });
    return IntStream.concat(head, tail).toArray();
}
The above example of course sorts the entire input array, but it keeps the order of the unsorted elements stable.
Another stream example, using a priority queue (as others mentioned), reduces the runtime complexity:
Collection<Integer> sortPartially(int[] inputArray, int sortedPartLength) {
    Queue<Integer> pq = new PriorityQueue<>(sortedPartLength);
    Deque<Integer> result = IntStream.of(inputArray).boxed().map(x -> {
        pq.add(x);
        return pq.size() > sortedPartLength ? pq.poll() : null;
    }).filter(Objects::nonNull).collect(Collectors.toCollection(ArrayDeque::new));
    Stream.generate(pq::remove).limit(sortedPartLength).forEach(result::addFirst);
    return result;
}
If there are duplicates in the input array, the order of unsorted elements can change.
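For instance, an illustrative call of the Collection-returning variant above with the question's input:

int[] input = {1, 2, 3, 4, 5};
Collection<Integer> result = sortPartially(input, 3);
System.out.println(result); // [5, 4, 3, 1, 2]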
I need to know how to partially sort an array of primitive integers in descending order using Stream API.
There is no built-in tool that lets you do this in Java. Neither in the Stream API nor in the Collections API. You either need to implement it on your own or change your approach.
I said Stream API because I want it to be written nicely.
Using Java 8 Streams does not mean that your code will be written nicely. Streams are not universal tools. Sometimes they offer enhanced readability and sometimes you have to use something else.
Another approach I could think about - is to use a custom comparator limiting the number of sorted elements.
That can't be done, since Comparator does not know how many elements have been sorted. Simply counting the calls will not give you any meaningful information in this regard.
What I would suggest is implementing something like C++'s std::partial_sort, which is most likely based on the heap approach.
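As a rough illustration of that idea, a heap-based helper could look something like the sketch below (the method name is made up, and, like std::partial_sort, the unsorted tail ends up in unspecified order rather than in the original order the question asked for):

static int[] partialSortDescending(int[] input, int k) {
    // max-heap over all elements; PriorityQueue offers no linear-time heapify, so building it is O(n log n)
    PriorityQueue<Integer> heap = new PriorityQueue<>(Comparator.reverseOrder());
    for (int x : input) {
        heap.add(x);
    }
    int[] result = new int[input.length];
    for (int i = 0; i < k; i++) {
        result[i] = heap.poll(); // the k largest values, in descending order
    }
    int i = k;
    for (int x : heap) {
        result[i++] = x; // remaining elements, in unspecified order
    }
    return result;
}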
I would save the three largest elements in a set and then define my own comparator.
public static void main(String[] args) {
    int[] input = {1, 2, 3, 4, 5};
    Set<Integer> set = Arrays.stream(input).boxed()
            .sorted(Comparator.reverseOrder())
            .limit(3)
            .collect(Collectors.toSet());
    Comparator<Integer> customComp = (a, b) -> {
        if (set.contains(a) && set.contains(b)) { return a.compareTo(b); }
        else if (set.contains(a)) { return 1; }
        else if (set.contains(b)) { return -1; }
        else { return 0; }
    };
    int[] sorted = Arrays.stream(input).boxed()
            .sorted(customComp.reversed())
            .mapToInt(i -> i)
            .toArray();
    System.out.println(Arrays.toString(sorted)); // [5, 4, 3, 1, 2]
}
You won't be able to do this very nicely using streams. Here is one way to do it:
public static void main(String[] args) {
    Integer[] arr = {1, 2, 3, 4, 5};
    List<Integer> originalValues = new ArrayList<>(Arrays.asList(arr));
    ArrayList<Integer> list = new ArrayList<>();
    for (int i = 0; i < 3; i++) {
        originalValues.stream().max(Integer::compareTo).ifPresent(v -> {
            list.add(v);
            originalValues.remove(v);
        });
    }
    list.addAll(originalValues);
    System.out.println(list);
    // [5, 4, 3, 1, 2]
}
I have a list of Integer lists, like list1 = (1, 2, 3) and list2 = (0, 1).
My list of lists contains list1 and list2. It could contain more, but for this example I took only two lists.
The question is how to get the index of the list with the minimum size using a Java stream.
Here is my program; it works using only the for-loop method.
import java.util.ArrayList;

public class Example {

    public static void main(String[] args) {
        ArrayList<Integer> list1 = new ArrayList<>();
        list1.add(1); list1.add(2); list1.add(3);
        ArrayList<Integer> list2 = new ArrayList<>();
        list2.add(0); list2.add(1);
        ArrayList<ArrayList<Integer>> listOLists = new ArrayList<>();
        listOLists.add(list1);
        listOLists.add(list2);
        printTheIndexOfTheListWithTheMinSize(listOLists);
    }

    private static void printTheIndexOfTheListWithTheMinSize(ArrayList<ArrayList<Integer>> listOLists) {
        int minSize = listOLists.get(0).size();
        int minIndex = 0;
        int i = 0;
        for (ArrayList<Integer> list : listOLists) {
            if (list.size() < minSize) {
                minSize = list.size();
                minIndex = i;
            }
            i++;
        }
        System.out.println(minIndex);
    }
}
Could you please give me a hint how to do that using the Java stream API?
Note that I'm calling this method many times in a heavy calculation, so the answer should take that into consideration.
Not really elegant, because it requires boxing and unboxing, but...
Optional<Integer> minIndex =
        IntStream.range(0, list.size())
                 .boxed()
                 .min(Comparator.comparingInt(i -> list.get(i).size()));
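To actually print the index as the question's method does, one could then unwrap the Optional, for example:

System.out.println(minIndex.orElse(-1)); // -1 if the list of lists is empty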
One way to possibly do that would be using indexOf and Collections.min with a comparator as:
int minIndex = listOLists.indexOf(Collections.min(listOLists,
        Comparator.comparingInt(List::size)));
Stay away from solutions using indexOf. While they may allow you to write rather short code, this indexOf operation performs a content-based linear search, invoking equals on the list elements until a match is found.
While it might look like a trivial thing, as all sub-lists differ in size except for the matching element, most of Java 8's List implementations do not use the size to short-cut the comparison.
To illustrate the issue,
use the following helper class
class Counter {
    int count;

    @Override
    public boolean equals(Object obj) {
        count++;
        return super.equals(obj);
    }

    @Override
    public int hashCode() {
        return super.hashCode();
    }

    @Override
    public String toString() {
        return "equals invoked " + count + " times";
    }
}
and
Counter c = new Counter();
List<List<Counter>> list = Arrays.asList(
    new ArrayList<>(Collections.nCopies(10, c)),
    new ArrayList<>(Collections.nCopies(15, c)),
    new ArrayList<>(Collections.nCopies(7, c)),
    new ArrayList<>(Collections.nCopies(10, c))
);
Comparator<List<?>> cmp = Comparator.comparingInt(List::size);

System.out.println("using IntStream.range(0, list.size()).boxed()\r\n" +
                   "    .min(Comparator.comparing(list::get, cmp))");
int minIndex =
    IntStream.range(0, list.size()).boxed()
             .min(Comparator.comparing(list::get, cmp)).orElse(-1);
System.out.println("result " + minIndex + ", " + c);
c.count = 0;

System.out.println("\nusing list.indexOf(Collections.min(list, cmp))");
minIndex = list.indexOf(Collections.min(list, cmp));
System.out.println("result " + minIndex + ", " + c);
c.count = 0;

System.out.println("\nusing list.indexOf(list.stream().min(cmp).get())");
minIndex = list.indexOf(list.stream().min(cmp).get());
System.out.println("result " + minIndex + ", " + c);
it will print
using IntStream.range(0, list.size()).boxed()
.min(Comparator.comparing(list::get, cmp))
result 2, equals invoked 0 times
using list.indexOf(Collections.min(list, cmp))
result 2, equals invoked 14 times
using list.indexOf(list.stream().min(cmp).get())
result 2, equals invoked 14 times
in Java 8, showing that calling equals on any contained element is an unnecessary operation (see the first variant, derived from this answer), but one that is performed multiple times by the other variants. Now imagine what happens if we use larger lists and/or a larger number of lists and have an element type with a rather expensive equality test.
Note that for ArrayList this has been solved in JDK 11, but there are still list implementations left, like the ones returned by Collections.nCopies or Arrays.asList, which do not short-circuit, so it's generally preferable not to perform an entirely unnecessary content-based linear search.
Here's one way to go about it:
int index = listOLists.indexOf(listOLists.stream()
        .min(Comparator.comparingInt(List::size))
        .orElseGet(ArrayList::new));
or if you want to avoid the creation of an ArrayList when the source is empty then you could do:
int index = listOLists.isEmpty() ? -1 : listOLists.indexOf(listOLists.stream()
        .min(Comparator.comparingInt(List::size)).get());
An alternative that creates index/size arrays and finds the min by size:
IntStream.range(0, listOLists.size())
         .mapToObj(i -> new int[] { i, listOLists.get(i).size() })
         .min(Comparator.comparingInt(arr -> arr[1]))
         .map(arr -> arr[0])
         .ifPresent(System.out::println);
This will print the index of the min-sized list in listOLists.
Suppose I had the following code:
public Set<String> csvToSet(String src) {
    String[] splitted = src.split(",");
    Set<String> result = new HashSet<>(splitted.length);
    for (String s : splitted) {
        result.add(s);
    }
    return result;
}
so I need to transform an array into a Set.
And IntelliJ IDEA suggests replacing my for-each loop with a Collection.addAll one-liner, so I get:
...
Set<String> result = new HashSet<>(splitted.length);
result.addAll(Arrays.asList(splitted));
return result;
The complete inspection message is:
This inspection warns when calling some method in a loop (e.g. collection.add(x)) could be replaced with calling a bulk method (e.g. collection.addAll(listOfX)).
If checkbox "Use Arrays.asList() to wrap arrays" is checked, the inspection will warn even if the original code iterates over an array while bulk method requires a Collection. In this case the quick-fix action will automatically wrap an array with Arrays.asList() call.
From the inspection description it sounds like it works as expected.
If we refer to the top answer to a question about converting an array into a Set (How to convert an Array to a Set in Java), the same one-liner is suggested:
Set<T> mySet = new HashSet<T>(Arrays.asList(someArray));
Even though creating an ArrayList from an array is O(1), I do not like the idea of creating an additional List object.
Usually I trust IntelliJ inspections and assume it does not offer anything less efficient.
But today I am curious why both the top SO answer and IntelliJ IDEA (with default settings) recommend the same one-liner, which creates a useless intermediate List object, while there has also been Collections.addAll(destCollection, yourArray) since JDK 6.
The only reason I see for it is that both (the inspection and the answers) are simply old. If so, here is a reason to improve IntelliJ IDEA and give more votes to an answer proposing Collections.addAll() :)
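For reference, the Collections.addAll variant the question refers to would look like this in the same method (no intermediate List is created):

Set<String> result = new HashSet<>(splitted.length);
Collections.addAll(result, splitted);
return result;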
A hint as to why IntelliJ doesn't suggest replacing
Set<String> result = new HashSet<>(splitted.length);
result.addAll(Arrays.asList(splitted));
return result;
with the new HashSet<>(Arrays.asList(splitted)) one-liner is in the source code for HashSet(Collection):
public HashSet(Collection<? extends E> c) {
    map = new HashMap<>(Math.max((int) (c.size()/.75f) + 1, 16));
    addAll(c);
}
Note that the capacity of the set isn't the size of c.
As such, the change would not be semantically equivalent.
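Concretely, an illustrative comparison of how the two constructions size the set (not code from the answer):

// explicit capacity: splitted.length, so the set may still resize once it passes
// the load factor (0.75 * splitted.length elements)
Set<String> viaAddAll = new HashSet<>(splitted.length);
viaAddAll.addAll(Arrays.asList(splitted));

// copy constructor: capacity is max(size / 0.75 + 1, 16), chosen so that no resize
// is needed while adding the elements
Set<String> viaCopy = new HashSet<>(Arrays.asList(splitted));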
Don't worry about creating the List. It is really cheap. It's not free; but you would have to be using it in a really performance-critical loop to notice.
I wrote a small function to measure the performance of the three ways of adding the array to a HashSet; here are the results.
First, the base code used by all of them, which generates an array of maxSize values between 0 and 99:
int maxSize = 10000000; // 10M values
String[] s = new String[maxSize];
Random r = new Random();
for (int i = 0; i < maxSize; i++) {
    s[i] = "" + r.nextInt(100);
}
Then the benchmark function:
public static void benchmark(String name, Runnable f) {
    Long startTime = System.nanoTime();
    f.run();
    Long endTime = System.nanoTime();
    System.out.println("Total execution time for: " + name + ": " + (endTime - startTime) / 1000000 + "ms");
}
So the first way uses your code with a loop; for 10M values it took between 150ms and 190ms (I ran the benchmark several times for each method):
Main.benchmark("Normal loop ", () -> {
    Set<String> result = new HashSet<>(s.length);
    for (String a : s) {
        result.add(a);
    }
});
The second way uses result.addAll(Arrays.asList(s)); it took between 180ms and 220ms:
Main.benchmark("result.addAll(Arrays.asList(s)): ", () -> {
    Set<String> result = new HashSet<>(s.length);
    result.addAll(Arrays.asList(s));
});
The third way uses Collections.addAll(result, s); it took between 170ms and 200ms:
Main.benchmark("Collections.addAll(result, s); ", () -> {
    Set<String> result = new HashSet<>(s.length);
    Collections.addAll(result, s);
});
Now the explanation. In terms of runtime complexity they all run in O(N), meaning that for N values roughly N add operations are performed.
In terms of memory they are all O(N) as well; only the new HashSet is created.
Arrays.asList(someArray) does not create a new array; it just creates a new object that holds a reference to that array. You can see it in the Java code:
private final E[] a;

ArrayList(E[] array) {
    a = Objects.requireNonNull(array);
}
Besides that, all the addAll methods are going to do exactly what you did, a for-loop:
// addAll method for Collections.addAll(result, s);
public static <T> boolean addAll(Collection<? super T> c, T... elements) {
    boolean result = false;
    for (T element : elements)
        result |= c.add(element);
    return result;
}

// addAll method for result.addAll(Arrays.asList(s));
public boolean addAll(Collection<? extends E> c) {
    boolean modified = false;
    for (E e : c)
        if (add(e))
            modified = true;
    return modified;
}
To conclude: since the runtime difference is so small, IntelliJ simply suggests a clearer way to write the same thing with less code.
Say I have a list with elements (34, 11, 98, 56, 43).
Using Java 8 streams, how do I find the index of the minimum element of the list (e.g. 1 in this case)?
I know this can be done easily in Java using list.indexOf(Collections.min(list)). However, I am looking at a Scala like solution where we can simply say List(34, 11, 98, 56, 43).zipWithIndex.min._2 to get the index of minimum value.
Is there anything that can be done using streams or lambda expressions (say Java 8 specific features) to achieve the same result.
Note: This is just for learning purpose. I don't have any problem in using Collections utility methods.
import static java.util.Comparator.comparingInt;

int minIndex = IntStream.range(0, list.size()).boxed()
        .min(comparingInt(list::get))
        .get(); // or throw if empty list
As @TagirValeev mentions in his answer, you can avoid boxing by using IntStream#reduce instead of Stream#min, but at the cost of obscuring the intent:
int minIdx = IntStream.range(0, list.size())
        .reduce((i, j) -> list.get(i) > list.get(j) ? j : i)
        .getAsInt(); // or throw
You could do it like this:
int indexMin = IntStream.range(0, list.size())
        .mapToObj(i -> new SimpleEntry<>(i, list.get(i)))
        .min(comparingInt(SimpleEntry::getValue))
        .map(SimpleEntry::getKey)
        .orElse(-1);
If the list is a random access list, get is a constant-time operation. The API lacks a standard tuple class, so I used SimpleEntry from the AbstractMap class as a substitute.
So IntStream.range generates a stream of the list's indexes, from which you map each index to its corresponding value. Then you get the minimum element by providing a comparator on the values (the ones in the list). From there you map the Optional<SimpleEntry<Integer, Integer>> to an Optional<Integer>, from which you get the index (or -1 if the optional is empty).
As an aside, I would probably use a simple for-loop to get the index of the minimum value, as your combination of min / indexOf does 2 passes over the list.
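Such a plain loop might look like this (a quick sketch, assuming a non-empty random-access list):

int minIndex = 0;
for (int i = 1; i < list.size(); i++) {
    if (list.get(i) < list.get(minIndex)) {
        minIndex = i;
    }
}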
You might also be interested to check Zipping streams using JDK8 with lambda (java.util.stream.Streams.zip)
Here's two possible solutions using my StreamEx library:
int idx = IntStreamEx.ofIndices(list).minBy(list::get).getAsInt();
Or:
int idx = EntryStream.of(list).minBy(Entry::getValue).get().getKey();
The second solution is internally very close to the one proposed by @AlexisC. The first one is probably the fastest, as it does not use boxing (internally it's a reduce operation).
Without using third-party code, @Misha's answer looks the best to me.
Since this is for learning purposes, let's try to find a solution that doesn't just somehow use a stream, but actually works on the stream of our list. We also don't want to assume random access.
So, there are two ways to get a non-trivial result out of a stream: collect and reduce. Here is a solution that uses a collector:
class Minimum {
    int index = -1;
    int range = 0;
    int value;

    public void accept(int value) {
        if (range == 0 || value < this.value) {
            index = range;
            this.value = value;
        }
        range++;
    }

    public Minimum combine(Minimum other) {
        if (value > other.value) {
            index = range + other.index;
            value = other.value;
        }
        range += other.range;
        return this;
    }

    public int getIndex() {
        return index;
    }
}
static Collector<Integer, Minimum, Integer> MIN_INDEX = new Collector<Integer, Minimum, Integer>() {
    @Override
    public Supplier<Minimum> supplier() {
        return Minimum::new;
    }

    @Override
    public BiConsumer<Minimum, Integer> accumulator() {
        return Minimum::accept;
    }

    @Override
    public BinaryOperator<Minimum> combiner() {
        return Minimum::combine;
    }

    @Override
    public Function<Minimum, Integer> finisher() {
        return Minimum::getIndex;
    }

    @Override
    public Set<Collector.Characteristics> characteristics() {
        return Collections.emptySet();
    }
};
Writing a collector requires an annoying amount of code, but it can easily be generalized to support any comparable value. Also, calling the collector looks very idiomatic:
List<Integer> list = Arrays.asList(4,3,7,1,5,2,9);
int minIndex = list.stream().collect(MIN_INDEX);
If we change the accept and combine methods to always return a new Minimum instance (i.e. if we make Minimum immutable), we can also use reduce:
int minIndex = list.stream().reduce(new Minimum(), Minimum::accept, Minimum::combine).getIndex();
I sense large potential for parallelization in this one.
I've inherited a bunch of code that makes extensive use of parallel arrays to store key/value pairs. It actually made sense to do it this way, but it's sort of awkward to write loops that iterate over these values. I really like the new Java foreach construct, but it does not seem like there is a way to iterate over parallel lists using this.
With a normal for loop, I can do this easily:
for (int i = 0; i < list1.length; ++i) {
    doStuff(list1[i]);
    doStuff(list2[i]);
}
But in my opinion this is not semantically pure, since we are not checking the bounds of list2 during iteration. Is there some clever syntax similar to the for-each that I can use with parallel lists?
I would use a Map myself. But taking you at your word that a pair of arrays makes sense in your case, how about a utility method that takes your two arrays and returns an Iterable wrapper?
Conceptually:
for (Pair<K,V> p : wrap(list1, list2)) {
    doStuff(p.getKey());
    doStuff(p.getValue());
}
The Iterable<Pair<K,V>> wrapper would hide the bounds checking.
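A minimal sketch of such a wrapper (the names are illustrative; since the JDK has no Pair type, Map.Entry stands in for it here, and Java 8+ is assumed for the Iterable lambda):

static <K, V> Iterable<Map.Entry<K, V>> wrap(K[] keys, V[] values) {
    if (keys.length != values.length) {
        throw new IllegalArgumentException("arrays must have the same length");
    }
    return () -> new Iterator<Map.Entry<K, V>>() {
        private int i = 0;

        @Override
        public boolean hasNext() {
            return i < keys.length;
        }

        @Override
        public Map.Entry<K, V> next() {
            if (!hasNext()) {
                throw new NoSuchElementException();
            }
            Map.Entry<K, V> e = new AbstractMap.SimpleImmutableEntry<>(keys[i], values[i]);
            i++;
            return e;
        }
    };
}

The caller's loop would then use entry.getKey()/entry.getValue() instead of a Pair's accessors.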
From the official Oracle page on the enhanced for loop:
Finally, it is not usable for loops that must iterate over multiple collections in parallel. These shortcomings were known by the designers, who made a conscious decision to go with a clean, simple construct that would cover the great majority of cases.
Basically, you're best off using the normal for loop.
If you're using these pairs of arrays to simulate a Map, you could always write a class that implements the Map interface with the two arrays; this could let you abstract away much of the looping.
Without looking at your code, I cannot tell you whether this option is the best way forward, but it is something you could consider.
This was a fun exercise. I created an object called ParallelList that takes a variable number of typed lists, and can iterate over the values at each index (returned as a list of values):
public class ParallelList<T> implements Iterable<List<T>> {

    private final List<List<T>> lists;

    public ParallelList(List<T>... lists) {
        this.lists = new ArrayList<List<T>>(lists.length);
        this.lists.addAll(Arrays.asList(lists));
    }

    public Iterator<List<T>> iterator() {
        return new Iterator<List<T>>() {
            private int loc = 0;

            public boolean hasNext() {
                boolean hasNext = false;
                for (List<T> list : lists) {
                    hasNext |= (loc < list.size());
                }
                return hasNext;
            }

            public List<T> next() {
                List<T> vals = new ArrayList<T>(lists.size());
                for (int i = 0; i < lists.size(); i++) {
                    vals.add(loc < lists.get(i).size() ? lists.get(i).get(loc) : null);
                }
                loc++;
                return vals;
            }

            public void remove() {
                for (List<T> list : lists) {
                    if (loc < list.size()) {
                        list.remove(loc);
                    }
                }
            }
        };
    }
}
Example usage:
List<Integer> list1 = Arrays.asList(new Integer[] {1, 2, 3, 4, 5});
List<Integer> list2 = Arrays.asList(new Integer[] {6, 7, 8});
ParallelList<Integer> list = new ParallelList<Integer>(list1, list2);
for (List<Integer> ints : list) {
    System.out.println(String.format("%s, %s", ints.get(0), ints.get(1)));
}
Which would print out:
1, 6
2, 7
3, 8
4, null
5, null
This object supports lists of variable lengths, but clearly it could be modified to be more strict.
Unfortunately I couldn't get rid of one compiler warning on the ParallelList constructor: A generic array of List<Integer> is created for varargs parameters, so if anyone knows how to get rid of that, let me know :)
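(One possibility, assuming Java 7 or later: since the constructor only reads from the varargs array, annotating it with @SafeVarargs should suppress that warning at both the declaration and the call sites.)

@SafeVarargs
public ParallelList(List<T>... lists) {
    this.lists = new ArrayList<List<T>>(lists.length);
    this.lists.addAll(Arrays.asList(lists));
}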
You can use a second constraint in your for loop:
for (int i = 0; i < list1.length && i < list2.length; ++i) {
    doStuff(list1[i]);
    doStuff(list2[i]);
} //for
One of my preferred methods for traversing collections is the for-each loop, but as the Oracle tutorial mentions, when dealing with parallel collections you should use an iterator rather than the for-each.
The following was an answer by Martin v. Löwis in a similar post:
it1 = list1.iterator();
it2 = list2.iterator();
while (it1.hasNext() && it2.hasNext()) {
    value1 = it1.next();
    value2 = it2.next();
    doStuff(value1);
    doStuff(value2);
} //while
The advantage of the iterator is that it's generic, so if you don't know what collections are being used, use the iterator; otherwise, if you know your collections, then you know their length/size functions and the regular for loop with the additional constraint can be used here. (Note I'm being deliberately plural in this post, as an interesting possibility is that the collections used are different, e.g. one could be a List and the other an array.)
Hope this helped.
With Java 8, I use these to loop in the sexy way:
// parallel loop
public static <A, B> void loop(Collection<A> a, Collection<B> b, IntPredicate intPredicate, BiConsumer<A, B> biConsumer) {
    Iterator<A> ait = a.iterator();
    Iterator<B> bit = b.iterator();
    if (ait.hasNext() && bit.hasNext()) {
        for (int i = 0; intPredicate.test(i); i++) {
            if (!ait.hasNext()) {
                ait = a.iterator();
            }
            if (!bit.hasNext()) {
                bit = b.iterator();
            }
            biConsumer.accept(ait.next(), bit.next());
        }
    }
}

// nested loop
public static <A, B> void loopNest(Collection<A> a, Collection<B> b, BiConsumer<A, B> biConsumer) {
    for (A ai : a) {
        for (B bi : b) {
            biConsumer.accept(ai, bi);
        }
    }
}
Some examples, with these two lists:
List<Integer> a = Arrays.asList(1, 2, 3);
List<String> b = Arrays.asList("a", "b", "c", "d");
Loop within min size of a and b:
loop(a, b, i -> i < Math.min(a.size(), b.size()), (x, y) -> {
    System.out.println(x + " -> " + y);
});
Output:
1 -> a
2 -> b
3 -> c
Loop within max size of a and b (elements in shorter list will be cycled):
loop(a, b, i -> i < Math.max(a.size(), b.size()), (x, y) -> {
    System.out.println(x + " -> " + y);
});
Output:
1 -> a
2 -> b
3 -> c
1 -> d
Loop n times (elements will be cycled if n is bigger than the sizes of the lists):
loop(a, b, i -> i < 5, (x, y) -> {
    System.out.println(x + " -> " + y);
});
Output:
1 -> a
2 -> b
3 -> c
1 -> d
2 -> a
Loop forever:
loop(a, b, i -> true, (x, y) -> {
    System.out.println(x + " -> " + y);
});
Apply to your situation:
loop(list1, list2, i -> i < Math.min(list1.size(), list2.size()), (e1, e2) -> {
    doStuff(e1);
    doStuff(e2);
});
Simple answer: No.
You want sexy iteration and Java byte code? Check out Scala:
Scala for loop over two lists simultaneously
Disclaimer: This is indeed a "use another language" answer. Trust me, I wish Java had sexy parallel iteration, but no one started developing in Java because they want sexy code.
ArrayIterator lets you avoid indexing, but you can't use a for-each loop without writing a separate class or at least a function. As @Alexei Blue remarks, the official recommendation (at The Collection Interface) is: “Use Iterator instead of the for-each construct when you need to: … Iterate over multiple collections in parallel.”:
import static com.google.common.base.Preconditions.checkArgument;
import org.apache.commons.collections.iterators.ArrayIterator;
// …

checkArgument(array1.length == array2.length);
Iterator it1 = new ArrayIterator(array1);
Iterator it2 = new ArrayIterator(array2);
while (it1.hasNext()) {
    doStuff(it1.next());
    doOtherStuff(it2.next());
}
However:
Indexing is natural for arrays – an array is by definition something you index, and a numerical for loop, as in your original code, is perfectly natural and more direct.
Key-value pairs naturally form a Map, as @Isaac Truett remarks, so the cleanest approach would be to create maps for all your parallel arrays (so this loop would only be in the factory function that creates the maps), though this would be inefficient if you just want to iterate over them. (Use a Multimap if you need to support duplicates.)
If you have a lot of these, you could (partially) implement ParallelArrayMap<> (i.e., a map backed by parallel arrays), or maybe ParallelArrayHashMap<> (to add a HashMap if you want efficient lookup by key), and use that, which allows iteration in the original order. This is probably overkill though, but allows a sexy answer.
That is:
Map<T, U> map = new ParallelArrayMap<>(array1, array2);
for (Map.Entry<T, U> entry : map.entrySet()) {
    doStuff(entry.getKey());
    doOtherStuff(entry.getValue());
}
Philosophically, Java style is to have explicit, named types, implemented by classes. So when you say “[I have] parallel arrays [that] store key/value pairs.”, Java replies “Write a ParallelArrayMap class that implements Map (key/value pairs) and that has a constructor that takes parallel arrays, and then you can use entrySet to return a Set that you can iterate over, since Set implements Collection.” – make the structure explicit in a type, implemented by a class.
For iterating over two parallel collections or arrays, you want to iterate over an Iterable<Pair<T, U>>, which less explicit languages allow you to create with zip (which @Isaac Truett calls wrap). This is not idiomatic Java, however – what are the elements of the pair? See Java: How to write a zip function? What should be the return type? for an extensive discussion of how to write this in Java and why it's discouraged.
This is exactly the stylistic tradeoff Java makes: you know exactly what type everything is, and you have to specify and implement it.
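For illustration only, a bare-bones version of that hypothetical ParallelArrayMap might look like the following sketch: read-only, backed by the two arrays, with just entrySet() implemented (AbstractMap derives the rest) and iteration in array order:

import java.util.AbstractMap;
import java.util.AbstractSet;
import java.util.Iterator;
import java.util.Map;
import java.util.NoSuchElementException;
import java.util.Set;

class ParallelArrayMap<K, V> extends AbstractMap<K, V> {

    private final K[] keys;
    private final V[] values;

    ParallelArrayMap(K[] keys, V[] values) {
        if (keys.length != values.length) {
            throw new IllegalArgumentException("parallel arrays must have the same length");
        }
        this.keys = keys;
        this.values = values;
    }

    @Override
    public Set<Map.Entry<K, V>> entrySet() {
        return new AbstractSet<Map.Entry<K, V>>() {
            @Override
            public Iterator<Map.Entry<K, V>> iterator() {
                return new Iterator<Map.Entry<K, V>>() {
                    private int i = 0;

                    @Override
                    public boolean hasNext() {
                        return i < keys.length;
                    }

                    @Override
                    public Map.Entry<K, V> next() {
                        if (!hasNext()) {
                            throw new NoSuchElementException();
                        }
                        Map.Entry<K, V> e = new AbstractMap.SimpleImmutableEntry<>(keys[i], values[i]);
                        i++;
                        return e;
                    }
                };
            }

            @Override
            public int size() {
                return keys.length;
            }
        };
    }
}

(It assumes distinct keys; a real implementation would also want get/containsKey backed by a hash lookup, as described above.)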
//Do you think I'm sexy?
if (list1.length == list2.length) {
    for (int i = 0; i < list1.length; ++i) {
        doStuff(list1[i]);
        doStuff(list2[i]);
    }
}