MapReduce error: java.lang.IndexOutOfBoundsException: Index: 2, Size: 2 - Java

I connected Hadoop with Eclipse and start the job through the Eclipse plug-in; the MapReduce job completes successfully. But when I compile my code to a jar file and then run the job with the hadoop command, it throws the following error.
Error: java.lang.IndexOutOfBoundsException: Index: 1, Size: 1
    at java.util.ArrayList.rangeCheck(ArrayList.java:635)
    at java.util.ArrayList.get(ArrayList.java:411)
    at Combiner.reduce(Combiner.java:32)
    at Combiner.reduce(Combiner.java:1)
and my code is as follows:
import java.io.IOException;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.Iterator;
import java.util.PriorityQueue;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

public class Combiner extends MapReduceBase implements Reducer<Text, Text, Text, Text> {
    public void reduce(Text key, Iterator<Text> values, OutputCollector<Text, Text> output, Reporter reporter)
            throws IOException {
        int num = 3;
        Comparator<String> comparator = new MyComparator();
        PriorityQueue<String> queue = new PriorityQueue<String>(100, comparator);
        ArrayList<String> list = new ArrayList<String>();
        while (values.hasNext()) {
            String str = values.next().toString();
            queue.add(str);
        }
        while (!queue.isEmpty()) {
            list.add(queue.poll());
        }
        String getCar = "";
        for (int i = 0; i < num; i++) {
            getCar = getCar + list.get(i) + "\n";
        }
        output.collect(new Text(""), new Text(getCar));
    }

    public class MyComparator implements Comparator<String> {
        public int compare(String s1, String s2) {
            if (Long.parseLong(s1.split(",")[4]) > Long.parseLong(s2.split(",")[4])) {
                return 1;
            } else if (Long.parseLong(s1.split(",")[4]) < Long.parseLong(s2.split(",")[4])) {
                return -1;
            } else {
                return 0;
            }
        }
    }
}

This happens because your list has one element (Size: 1) and you ask for the second element (Index: 1; indexing starts from zero)! A simple System.out.println for each list element will help you get through...
Why do you set the number of elements to 3? If you know that it will always be 3 (unlikely), then change the list to an array of size 3. If you don't know that, then change num to list.size(), like:
for (int i = 0; i < list.size(); i++)
But before anything else, you should understand why you get these values for this key.
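If the intent really is to emit the top num entries whenever they exist, one way (just a sketch reusing the question's own num, list and output variables) is to cap the loop at both bounds, so at most num lines are emitted even when a key has fewer values:

// emit at most `num` entries, but never more than the list actually holds
int limit = Math.min(num, list.size());
StringBuilder getCar = new StringBuilder();
for (int i = 0; i < limit; i++) {
    getCar.append(list.get(i)).append("\n");
}
output.collect(new Text(""), new Text(getCar.toString()));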

Related

Transferring Value To Cleanup Function of the Reducer Not Working

I don't understand what my MapReduce job gives me as output. I have a .csv file as input in which the districts of a city are stored together with the age of each tree in each district.
In the combiner I try to get the oldest tree per district, while in my reducer I try to retrieve the district with the oldest tree in the city.
My problem is that while the reduce function gives me output values of 11, 12, 16, and 5, the cleanup function inside the reducer, which should return the last of those values (5), actually returns 9 (which is the last value that my reducer analyses).
I don't get what I missed.
Below is what I tried so far.
Mapper:
package com.opstty.mapper;
import org.apache.commons.io.output.NullWriter;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import java.io.IOException;
import java.util.StringTokenizer;

public class TokenizerMapper_1_8_6 extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private Text result = new Text();

    public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
        StringTokenizer itr = new StringTokenizer(value.toString(), ";");
        int i = 0;
        while (itr.hasMoreTokens()) {
            String arrondissement = itr.nextToken();
            if (i % 13 == 1 && !arrondissement.toString().equals("ARRONDISSEMENT")) {
                itr.nextToken(); itr.nextToken(); itr.nextToken();
                String annee = itr.nextToken();
                result.set(arrondissement);
                if (Double.parseDouble((String.valueOf(annee))) > 1000) {
                    context.write(result, new IntWritable((int) Double.parseDouble((String.valueOf(annee)))));
                    i += 3;
                }
            }
            i++;
        }
    }
}
Combiner:
package com.opstty.job;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class Compare extends Reducer<Text, IntWritable, Text, IntWritable> {
    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        List a = new ArrayList();
        int sum = 0;
        for (IntWritable val : values) {
            a.add(val.get());
        }
        Collections.sort(a);
        result.set((Integer) Collections.min(a));
        context.write(key, result);
    }
}
Reducer:
public class IntReducer6 extends Reducer<Text, IntWritable, Text, NullWritable> {
    private int max = 100000;
    private int annee = 0;
    int i = 0;
    private List a = new ArrayList();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        for (IntWritable value : values) {
            annee = value.get();
        }
        if (annee < max) {
            a.add(key);
            max = annee;
            context.write((Text) a.get(i), NullWritable.get());
            i++;
        }
    }

    @Override
    // only display the character which has the largest value
    protected void cleanup(Context context) throws IOException, InterruptedException {
        context.write((Text) a.get(a.size() - 1), NullWritable.get());
    }
}
Your approach to the reduce (and combiner) methods is a bit of overkill for such a simple task, as far as MapReduce jobs go. To be perfectly honest, this type of task doesn't seem to need a combiner at all, and it certainly cannot rely on the reducers' cleanup function, since that function is executed once for each reducer.
The main issue with your program is that it doesn't take into account how the reduce function operates: reduce is called separately for every key. This means that for your type of job the reduce function needs to be executed only once (for all the "keys", as we are going to see below) in order to find the district with the oldest tree.
With that in mind, the map function should arrange the data of each row of the input .csv file so that the key of every key-value pair is the same (in order to have the reduce function operate on all of the rows), while the value of each pair holds the district name and the age of one tree. So the mappers will generate key-value pairs where the NULL value is the key for all of them, and each value is a composite value storing a district name and a particular tree age, like so:
<NULL, (district, tree_age)>
As for the reduce function, it only needs to scan every value under that single NULL key (i.e. all of the pairs) and find the max tree age. Then the final output key-value pair will show the district with the oldest tree and the max tree age, like so:
<district_with_oldest_tree, max_tree_age>
To showcase my tested answer I took some liberties to simplify your program, mainly because the French(?)-named variables are a bit confusing to me, and because you overcomplicated things by using old-style helpers like StringTokenizer when more recent Hadoop releases support the more common Java data types.
First, since I can't see your input .csv file, I created my own, trees.csv, stored in a directory named trees. It has the following lines, with columns for district and tree age:
District A; 7
District B; 20
District C; 10
District C; 1
District B; 17
District A; 6
District A; 11
District B; 18
District C; 2
In my (all-put-in-one-file-for-the-sake-of-simplicity) program, the # character is used as a delimiter to separate the data within the composite values generated by the mappers, and the results are stored in a directory named oldest_tree. You can change this according to your needs or your own .csv input file.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;
import java.io.*;
import java.io.IOException;
import java.util.*;
import java.nio.charset.StandardCharsets;

public class OldestTree {
    /* input:  <byte_offset, line_of_dataset>
     * output: <NULL, (district, tree_age)>
     */
    public static class Map extends Mapper<Object, Text, NullWritable, Text> {
        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
            String row = value.toString();
            String[] columns = row.split("; ");   // split each row by the delimiter
            String district_name = columns[0];
            String tree_age = columns[1];

            // set NULL as key for the generated key-value pairs aimed at the reducers
            // and set the district with each of its trees age as a composite value,
            // with the '#' character as a delimiter
            context.write(NullWritable.get(), new Text(district_name + '#' + tree_age));
        }
    }

    /* input:  <NULL, (district, tree_age)>
     * output: <district_with_oldest_tree, max_tree_age>
     */
    public static class Reduce extends Reducer<NullWritable, Text, Text, IntWritable> {
        public void reduce(NullWritable key, Iterable<Text> values, Context context) throws IOException, InterruptedException {
            String district_with_oldest_tree = "";
            int max_tree_age = 0;

            // for all the values with the same (NULL) key,
            // aka all the key-value pairs...
            for (Text value : values) {
                // split the composite value by the '#' delimiter
                String[] splitted_values = value.toString().split("#");
                String district_name = splitted_values[0];
                int tree_age = Integer.parseInt(splitted_values[1]);

                // find the district with the oldest tree
                if (tree_age > max_tree_age) {
                    district_with_oldest_tree = district_name;
                    max_tree_age = tree_age;
                }
            }

            // output the district (key) with the oldest tree's year of planting (value)
            // to the output directory
            context.write(new Text(district_with_oldest_tree), new IntWritable(max_tree_age));
        }
    }

    public static void main(String[] args) throws Exception {
        // set the paths of the input and output directories in the HDFS
        Path input_dir = new Path("trees");
        Path output_dir = new Path("oldest_tree");

        // in case the output directory already exists, delete it
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        if (fs.exists(output_dir))
            fs.delete(output_dir, true);

        // configure the MapReduce job
        Job oldesttree_job = Job.getInstance(conf, "Oldest Tree");
        oldesttree_job.setJarByClass(OldestTree.class);
        oldesttree_job.setMapperClass(Map.class);
        oldesttree_job.setReducerClass(Reduce.class);
        oldesttree_job.setMapOutputKeyClass(NullWritable.class);
        oldesttree_job.setMapOutputValueClass(Text.class);
        oldesttree_job.setOutputKeyClass(Text.class);
        oldesttree_job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(oldesttree_job, input_dir);
        FileOutputFormat.setOutputPath(oldesttree_job, output_dir);
        oldesttree_job.waitForCompletion(true);
    }
}
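If you want to try this out, a typical way to run it and inspect the output would be something along these lines (assuming the class is packaged into a jar named, for example, oldesttree.jar, which is just an illustrative name, and that the trees directory is already in HDFS):

hadoop jar oldesttree.jar OldestTree
hadoop fs -cat oldest_tree/part-r-00000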
So the result of the program, stored in the oldest_tree directory (as seen through the Hadoop HDFS browser), is the district with the oldest tree together with that tree's age; for the sample input above, that is District B with 20.
Thanks for your help! @Coursal
So this is my mapper and my combiner (the same TokenizerMapper_1_8_6 and Compare classes already shown in the question above).
The goal: I have a csv file. In the map I get the district and the age of each tree in the city.
In my combiner I get the oldest tree per district,
and in my reducer I want to print the district which actually has the oldest tree in the city.
It's working perfectly, thanks!
The reason why I used a combiner is the way my teacher formulated the question:
Write a MapReduce job that displays the district where the oldest tree is. The mapper must extract the age and district of each tree.
The problem is, this information can't be used as keys and values (why?). You will need to define a subclass of Writable to contain both pieces of information.
The reducer should consolidate all this data and only output the district.
I'm totally new to map/reduce and I didn't really get what a subclass of Writable is. So I searched the net and found some topics about combiners and WritableCompare.
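For reference, a subclass of Writable along the lines the assignment hints at might look roughly like this. It is only a sketch; the class and field names (DistrictTreeWritable, district, treeAge) are made up for illustration. The important part is the no-arg constructor and serializing/deserializing the fields in the same order:

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Writable;

// Hypothetical composite value: one district name plus one tree age.
public class DistrictTreeWritable implements Writable {
    private String district;
    private int treeAge;

    public DistrictTreeWritable() {}   // Hadoop needs a no-arg constructor

    public DistrictTreeWritable(String district, int treeAge) {
        this.district = district;
        this.treeAge = treeAge;
    }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeUTF(district);        // serialize both fields
        out.writeInt(treeAge);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        district = in.readUTF();       // deserialize in the same order
        treeAge = in.readInt();
    }

    public String getDistrict() { return district; }
    public int getTreeAge() { return treeAge; }
}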

Can't import java.util.stream in java 8

I have a problem importing java.util.stream.*; compiling my code gives me a "cannot find symbol" error on stream().
This is my import list:
import java.util.stream.*;
import java.util.*;
import java.lang.String;
import java.util.Arrays;
import java.nio.file.*;
import java.io.IOException;
and this is the code I'm compiling:
List<Beverage> l = cantine.stream()
.filter(p -> p.name.equals(nam))
.collect(Collectors.toList());
IMPORTANT: I do know what a "cannot find symbol" error is, so please do not blindly close this question.
Full code for reference:
import java.util.stream.*;
import java.util.*;
import java.lang.String;
import java.util.Arrays;
import java.nio.file.*;
import java.io.IOException;

public class Enoteca {
    Map<String, Beverage> cantine;

    public Enoteca() {
        this.cantine = new HashMap<String, Beverage>();
    }

    public List<Beverage> byName(String nam) {
        List<Beverage> l = cantine.stream()
                .filter(p -> p.name.equals(nam))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Enoteca e = new Enoteca();
        for (String s : args) {
            Beverage b = new Beverage(s, "1987");
            e.cantine.put(s, b);
        }
        System.out.println(e.cantine);
    }
}

class Beverage {
    String name;
    String year;

    public Beverage(String name, String year) {
        this.name = name;
        this.year = year;
    }

    public String getName() {
        return name;
    }

    @Override
    public String toString() {
        return name + " " + year;
    }
}
The compiler is correct. Map does not have a stream() method. The collections returned by a Map's keySet(), values(), and entrySet() methods do, but Map itself does not.
Since you want a List<Beverage>, I’m guessing you want cantine.values().stream().
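A sketch of what byName could look like with that change, reusing the class's existing imports and also adding the return statement the method is currently missing:

public List<Beverage> byName(String nam) {
    return cantine.values().stream()               // stream over the map's values
            .filter(p -> p.getName().equals(nam))  // keep beverages with a matching name
            .collect(Collectors.toList());
}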

NullPointerException in MapReduce Sorting Program

I know that SortComparator is used to sort the map output by its keys. I have written a custom SortComparator to understand the MapReduce framework better. This is my WordCount class with the custom SortComparator class.
package bananas;
import java.io.FileWriter;
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.io.WritableComparator;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Partitioner;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context
                        ) throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {

        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values,
                           Context context
                           ) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static class MyPartitoner extends Partitioner<Text, IntWritable> {
        @Override
        public int getPartition(Text key, IntWritable value, int numPartitions) {
            return Math.abs(key.hashCode()) % numPartitions;
        }
    }

    public static class MySortComparator2 extends WritableComparator {
        protected MySortComparator2() {
            super();
        }

        @SuppressWarnings({ "rawtypes" })
        @Override
        public int compare(WritableComparable w1, WritableComparable w2) {
            return 0;
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setSortComparatorClass(MySortComparator2.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
But when I execute this, I get this error:
Error: java.lang.NullPointerException
at org.apache.hadoop.io.WritableComparator.compare(WritableComparator.java:157)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.compare(MapTask.java:1265)
at org.apache.hadoop.util.QuickSort.fix(QuickSort.java:35)
at org.apache.hadoop.util.QuickSort.sortInternal(QuickSort.java:87)
at org.apache.hadoop.util.QuickSort.sort(QuickSort.java:63)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(MapTask.java:1593)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:1482)
at org.apache.hadoop.mapred.MapTask$NewOutputCollector.close(MapTask.java:720)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:790)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
My custom SortComparator class looks fine to me. After mapping is done, MySortComparator2's compare method should receive Text keys as input, and since I am returning 0, no sorting will be done. This is what I expected to observe. I followed these tutorials:
http://codingjunkie.net/secondary-sort/
http://blog.zaloni.com/secondary-sorting-in-hadoop
http://www.bigdataspeak.com/2013/02/hadoop-how-to-do-secondary-sort-on_25.html
Thanks in advance; I would appreciate some help.
Actually, there is a problem with the MySortComparator2 constructor. The code should look like:
protected MySortComparator2() {
    super(Text.class, true);
}
where the first parameter is your key class and the second parameter's value ensures WritableComparator is instantiated in a way that WritableComparator.compare(byte[] b1, int s1, int l1, byte[] b2, int s2, int l2) can invoke MySortComparator2.compare(WritableComparable a, WritableComparable b).
You need to implement/override this method, too:
public int compare(byte[] b1, int s1, int l1, byte[] b2, int s2, int l2) {
    // per your desired no-sort logic
    return 0;
}
I think that your comparator is being constructed in such a way that the variables mentioned in the super implementation are null (and this is the method that's being called in support of the sort - not the method you wrote above). That's why you're getting the null pointer exception. By overriding the method with an implementation that doesn't use the variables, you can avoid the exception.
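Putting both fixes together, the corrected comparator might look roughly like this (a sketch that keeps the original "no sorting" intent):

public static class MySortComparator2 extends WritableComparator {
    protected MySortComparator2() {
        super(Text.class, true);   // register the key class and create key instances
    }

    @SuppressWarnings({ "rawtypes" })
    @Override
    public int compare(WritableComparable w1, WritableComparable w2) {
        return 0;                  // treat all keys as equal (no sorting)
    }

    @Override
    public int compare(byte[] b1, int s1, int l1, byte[] b2, int s2, int l2) {
        return 0;                  // byte-level path used during the sort/spill phase
    }
}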
As Chris Gerken said, you need to override this method while extending WritableComparator, or implement RawComparator instead of WritableComparator:
public int compare(byte[] b1, int s1, int l1, byte[] b2, int s2, int l2) {
    return 0;
}
And as you said, you wanted no sorting to be done, but if you return 0 then every time MapReduce tries to sort/compare, it sees every key as the same thing. So you will receive only one key-value pair, whose key will be the first key from the map task that finishes first, and whose value will be the total number of words in the input file. Hope you understand what I am saying. If your input is something like this:
why are rockets cylindrical
your reduce output will be:
why 4
since it treats everything as the same key. I hope this helps.
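Conversely, if you actually wanted the keys sorted in their natural (lexicographic) Text order, a comparator along these lines would do it (again, just a sketch):

public static class NaturalOrderComparator extends WritableComparator {
    protected NaturalOrderComparator() {
        super(Text.class, true);     // key class + instance creation for compare()
    }

    @SuppressWarnings({ "rawtypes", "unchecked" })
    @Override
    public int compare(WritableComparable w1, WritableComparable w2) {
        return w1.compareTo(w2);     // delegate to Text's natural ordering
    }
}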

Java sorting with comparator [duplicate]

This question already has answers here:
What does a "Cannot find symbol" or "Cannot resolve symbol" error mean?
(18 answers)
Closed 8 years ago.
Okay, look, I know this question has been answered time and time again, HOWEVER, I seriously can't figure out what I'm doing wrong.
Please help.
I have two classes: Machine.java, which is the object class, and InventoryTest.java, which holds the main method.
Everywhere I have read says this should work.
This is in the Machine.java class:
public static Comparator<Machine> machineIdNumberComparator = new Comparator<Machine>() {
    @Override
    public int compare(Machine s1, Machine s2) {
        String MachineID1 = s1.getMachineIdNumber();
        String MachineID2 = s2.getMachineIdNumber();
        //ascending order
        return MachineID1.compareTo(MachineID2);
    }
};
This is in the InventoryTest:
Collections.sort(arraylist, inventoryArray.machineIdNumberComparator);
This line comes up with problems: arraylist gets a "cannot find symbol" error and so does machineIdNumberComparator.
I need to be able to sort in 3 different ways.
Edit: this is how I created my ArrayList and add to it:
public List<Machine> inventoryArray = new ArrayList<>();
inventoryArray.add(new Machine(strMachineIdNumber, strManufacturer, strType, dblPowerOrCapacity, dblPrice));
The ArrayList is declared at the start, just under the public class declaration, where you declare all your variables.
/**
 *
 * @author stephankranenfeld
 */
import java.awt.*;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import java.awt.event.WindowAdapter;
import java.awt.event.WindowEvent;
import java.io.FileNotFoundException;
import java.io.FileReader;
import java.io.PrintWriter;
import java.util.ArrayList;
import java.util.List;
import java.util.Collections;
import java.util.Comparator;
import java.util.NoSuchElementException;
import java.util.Scanner;
import java.util.StringTokenizer;
import java.util.logging.Level;
import java.util.logging.Logger;
import javax.swing.*;
import javax.swing.JButton;
import javax.swing.JTextArea;

public class InventoryTest extends JFrame {
    private static final int FRAME_WIDTH = 800;
    private static final int FRAME_HEIGHT = 1500;
    private final JTextArea textArea;
    private final JButton addMachine;
    private final JButton removeMachine;
    private final JButton exitButton;
    private final JButton sortButton;
    final String START_TEXT = String.format("%-20s %-20s %-10s %15s %10s %n", "Machine ID Number:", "Manufacturer:", "Type:", "Power/Capacity:", "Price:"); // starting line of texted used in both display all and entre data buttons.
    final String DEVIDER = "-------------------------------------------------------------------------------------\n"; // a devider used multple times
    public List<Machine> inventoryArray = new ArrayList<>();

    /*
    junk that isn't needed i think
    */

    private void sort() {
        JComboBox sortBy = new JComboBox();
        sortBy.addItem("ID Number");
        sortBy.addItem("Manufacturer");
        sortBy.addItem("Type");
        int option = JOptionPane.showConfirmDialog(null, sortBy, "Sort by?", JOptionPane.OK_CANCEL_OPTION);
        if (option == JOptionPane.OK_OPTION) {
            //if id is selected sort by id
            //if manufacturer is selected sor by manufacturer
            //if type is selected sort by type
            Collections.sort(inventoryArray, Machine.machineIdNumberComparator);
        }
    }
}

public class Machine {
    /*
     * all the get and set methods
     */
    public static Comparator<Machine> machineIdNumberComparator = new Comparator<Machine>() {
        @Override
        public int compare(Machine s1, Machine s2) {
            String MachineID1 = s1.getMachineIdNumber();
            String MachineID2 = s2.getMachineIdNumber();
            //ascending order
            return MachineID1.compareTo(MachineID2);
        }
    };
The second argument of Collections.sort() should be Machine.machineIdNumberComparator
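In other words, something like the following (assuming inventoryArray is the List<Machine> you want to sort), since the comparator is a static member of the Machine class rather than of the list instance:

Collections.sort(inventoryArray, Machine.machineIdNumberComparator);
// or, on Java 8+:
inventoryArray.sort(Machine.machineIdNumberComparator);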

Sort() "cannot find symbol"

Okay, I've got an issue with the Collection.sort(myintarray); line... it says "cannot find symbol". I'm trying to make it sort the list so that the lowest number in the array comes first.
package uppgift.pkg1;

import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.Arrays;

/**
 * @author Fredrik
 *
 */
public class Uppgift1 {

    public static void main(String[] args) {
        // call the method
        skapaArray();
    }

    public static void skapaArray() {
        List<Integer> myintarray = new ArrayList<Integer>();
        int ints = 0;
        int size = 1;
        while (ints < size) {
            myintarray.add(ints);
            ints++;
        }
        Collection.sort(myintarray);
        System.out.println(myintarray.size());
    }
}
You should use Collections:
java.util.Collections;
and:
Collections.sort(myintarray);
It should be
Collections.sort(myintarray);
Sorting a collection in Java is easy: just use Collections.sort(Collection) to sort your values.
You can do
Collections.sort(myintarray);
instead of
Collection.sort(myintarray);
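Put together, a corrected version of the class would look roughly like this, unchanged apart from the import and the Collections.sort call:

package uppgift.pkg1;

import java.util.ArrayList;
import java.util.Collections;   // Collections (plural), the utility class
import java.util.List;

public class Uppgift1 {

    public static void main(String[] args) {
        skapaArray();
    }

    public static void skapaArray() {
        List<Integer> myintarray = new ArrayList<Integer>();
        int ints = 0;
        int size = 1;
        while (ints < size) {
            myintarray.add(ints);
            ints++;
        }
        Collections.sort(myintarray);          // sorts ascending, lowest number first
        System.out.println(myintarray.size());
    }
}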

Categories

Resources