I am working on Hadoop. My output is coming out twice as large as expected.
I am unable to understand why this is happening.
Please help me.
Below is the mapper class:
import java.io.File;
import java.io.IOException;
import java.util.Scanner;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapred.*;
public class StringMapper extends MapReduceBase implements Mapper<LongWritable, Text, Text, IntWritable>
{
//hadoop supported data types
private static IntWritable send;
private Text word;
//map method that performs the tokenizer job and framing the initial key value pairs
public void map(LongWritable key, Text value, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException
{
String line = value.toString();
String out="";
int count=0;
out+=Integer.toString(count);
send = new IntWritable(1);
word = new Text(out);
output.collect(word, send);
}
}
Below is the reducer class:
import java.io.IOException;
import java.util.Iterator;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapred.*;
public class StringReducer extends MapReduceBase implements Reducer<Text, IntWritable, Text, IntWritable>
{
//reduce method accepts the Key Value pairs from mappers, do the aggregation based on keys and produce the final output
public void reduce(Text key, Iterator<IntWritable> values, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException
{
int sum=0;
while(values.hasNext()){
sum=sum+values.next().get();
}
output.collect(key, new IntWritable(sum));
}
}
Sample input:
dashjdasdhashjfsda
dashjdasdhashjfsdadashjdasdhashjfsdadashjdasdhashjfsdadashjdasdhashjfsda
Sample output:
0 10
The output should be 0 5 instead of 0 10, because there are only five lines in my input.
Your program seems to be OK. I copied your code and ran it on my machine. It gives the correct output, i.e., 0 5.
If you are using Eclipse, create a new run configuration and also change your input directory.
Then it might work.
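One common cause of doubled counts is that the job is reading a duplicated copy of the input file, or mixing in leftovers from an earlier run. For reference, here is a minimal driver sketch for the old mapred API; the class name StringDriver and the args-based paths are my own invention, and only StringMapper and StringReducer come from the code above. It points the job at a single input directory and deletes any stale output directory first:
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class StringDriver {
    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(StringDriver.class);
        conf.setJobName("string count");

        conf.setMapperClass(StringMapper.class);
        conf.setReducerClass(StringReducer.class);
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);

        // point the job at exactly one input directory; a duplicate copy of the
        // input file inside that directory would double every count
        FileInputFormat.setInputPaths(conf, new Path(args[0]));

        // remove a stale output directory so old results cannot be mixed in
        Path out = new Path(args[1]);
        FileSystem fs = FileSystem.get(conf);
        if (fs.exists(out)) {
            fs.delete(out, true);
        }
        FileOutputFormat.setOutputPath(conf, out);

        JobClient.runJob(conf);
    }
}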
Related
I don't understand what my MapReduce job gives me as output. I have a .csv file as input where districts of a city are stored with the age of each tree for each district.
In the combiner I try to get the oldest tree per district, while in my reducer I try to retrieve the district with the oldest tree in the city.
My problem is that while the reduce function gives me output values of 11, 12, 16, and 5, the cleanup function inside the reducer, which should return the last of those values (5), actually returns 9 (which is the last value my reducer analyses).
I don't get what I missed.
Below is what I tried so far.
Mapper:
package com.opstty.mapper;
import org.apache.commons.io.output.NullWriter;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import java.io.IOException;
import java.util.StringTokenizer;
public class TokenizerMapper_1_8_6 extends Mapper<Object, Text, Text, IntWritable> {
private final static IntWritable one = new IntWritable(1);
private Text result = new Text();
public void map(Object key, Text value, Context context)
throws IOException, InterruptedException {
StringTokenizer itr = new StringTokenizer(value.toString(),";");
int i = 0;
while (itr.hasMoreTokens()) {
String arrondissement = itr.nextToken();
if(i%13==1 && !arrondissement.toString().equals("ARRONDISSEMENT")) {
itr.nextToken();itr.nextToken();itr.nextToken();
String annee = itr.nextToken();
result.set(arrondissement);
if(Double.parseDouble((String.valueOf(annee))) > 1000){
context.write(result, new IntWritable((int) Double.parseDouble((String.valueOf(annee)))));
i+=3;
}
}
i++;
}
}
}
Combiner:
package com.opstty.job;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
public class Compare extends Reducer<Text, IntWritable, Text, IntWritable> {
private IntWritable result = new IntWritable();
public void reduce(Text key, Iterable<IntWritable> values, Context context)
throws IOException, InterruptedException {
List a = new ArrayList();
int sum = 0;
for (IntWritable val : values) {
a.add(val.get());
}
Collections.sort(a);
result.set((Integer) Collections.min(a));
context.write(key, result);
}
}
Reducer:
public class IntReducer6 extends Reducer<Text, IntWritable, Text, NullWritable> {
private int max = 100000;
private int annee=0;
int i =0;
private List a = new ArrayList();
public void reduce(Text key, Iterable<IntWritable> values, Context context)
throws IOException, InterruptedException {
for (IntWritable value : values)
{
annee = value.get();
}
if(annee < max)
{
a.add(key);
max = annee;
context.write((Text) a.get(i), NullWritable.get());
i++;
}
}
@Override
// only display the character which has the largest value
protected void cleanup(Context context) throws IOException, InterruptedException {
context.write((Text) a.get(a.size()-1), NullWritable.get());
}
}
Your approach to the reduce (and combiner-reduce) methods is a bit of overkill for such a simple task, as far as MapReduce jobs go. To be perfectly honest, this type of task doesn't seem to need a combiner at all, and it certainly cannot rely on a cleanup function in the reducers, since that function is executed for each one of them.
The main issue with your program is that it doesn't account for how the reduce function operates: reduce is executed through a number of instances, one for each key value, or in simpler terms, the reduce function is called for every key separately. This means that for your type of job the reduce function needs to be executed only once (for all the "keys", as we will see below) in order to find the district with the oldest tree.
With that in mind, the map function should arrange the data of each row of the input .csv file so that the key of every key-value pair is the same (in order to have the reduce function operate on all of the rows), while the value of each pair holds the name of a district and the age of one tree. So the mappers will generate key-value pairs where the NULL value is the key for all of them, and each value is a composite value storing a district name and a particular tree age, like so:
<NULL, (district, tree_age)>
As for the reduce function, it only needs to scan every value under that single NULL key (i.e., all of the pairs) and find the maximum tree age. The final output key-value pair then shows the district with the oldest tree and that maximum tree age, like so:
<district_with_oldest_tree, max_tree_age>
To showcase my tested answer I took some liberties to simplify your program, mainly because the French(?)-named variables are a bit confusing to me and because things were overcomplicated by legacy helpers like StringTokenizer, when more recent Hadoop releases work just fine with plain Java types and methods.
First, since I can't see your input .csv file, I created my own, trees.csv, stored in a directory named trees, with the following lines in it, one column for the district and one for a tree's age:
District A; 7
District B; 20
District C; 10
District C; 1
District B; 17
District A; 6
District A; 11
District B; 18
District C; 2
In my (all-put-in-one-file-for-the-sake-of-simplicity) program, the # character is used as a delimiter to separate the data in the composite values generated by the mappers, and the results are stored in a directory named oldest_tree. You can change this according to your needs or your own .csv input file.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;
import java.io.*;
import java.io.IOException;
import java.util.*;
import java.nio.charset.StandardCharsets;
public class OldestTree
{
/* input: <byte_offset, line_of_dataset>
* output: <NULL, (district, tree_age)>
*/
public static class Map extends Mapper<Object, Text, NullWritable, Text>
{
public void map(Object key, Text value, Context context) throws IOException, InterruptedException
{
String row = value.toString();
String[] columns = row.split("; "); // split each row by the delimiter
String district_name = columns[0];
String tree_age = columns[1];
// set NULL as key for the generated key-value pairs aimed at the reducers
// and set the district with each of its trees age as a composite value,
// with the '#' character as a delimiter
context.write(NullWritable.get(), new Text(district_name + '#' + tree_age));
}
}
/* input: <NULL, (district, tree_age)>
* output: <district_with_oldest_tree, max_tree_age>
*/
public static class Reduce extends Reducer<NullWritable, Text, Text, IntWritable>
{
public void reduce(NullWritable key, Iterable<Text> values, Context context) throws IOException, InterruptedException
{
String district_with_oldest_tree = "";
int max_tree_age = 0;
// for all the values with the same (NULL) key,
// aka all the key-value pairs...
for(Text value : values)
{
// split the composite value by the '#' delimiter
String[] splitted_values = value.toString().split("#");
String district_name = splitted_values[0];
int tree_age = Integer.parseInt(splitted_values[1]);
// find the district with the oldest tree
if(tree_age > max_tree_age)
{
district_with_oldest_tree = district_name;
max_tree_age = tree_age;
}
}
// output the district (key) with the oldest tree's year of planting (value)
// to the output directory
context.write(new Text(district_with_oldest_tree), new IntWritable(max_tree_age));
}
}
public static void main(String[] args) throws Exception
{
// set the paths of the input and output directories in the HDFS
Path input_dir = new Path("trees");
Path output_dir = new Path("oldest_tree");
// in case the output directory already exists, delete it
Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(conf);
if(fs.exists(output_dir))
fs.delete(output_dir, true);
// configure the MapReduce job
Job oldesttree_job = Job.getInstance(conf, "Oldest Tree");
oldesttree_job.setJarByClass(OldestTree.class);
oldesttree_job.setMapperClass(Map.class);
oldesttree_job.setReducerClass(Reduce.class);
oldesttree_job.setMapOutputKeyClass(NullWritable.class);
oldesttree_job.setMapOutputValueClass(Text.class);
oldesttree_job.setOutputKeyClass(Text.class);
oldesttree_job.setOutputValueClass(IntWritable.class);
FileInputFormat.addInputPath(oldesttree_job, input_dir);
FileOutputFormat.setOutputPath(oldesttree_job, output_dir);
oldesttree_job.waitForCompletion(true);
}
}
So the result of the program, stored in the oldest_tree directory (as seen through the Hadoop HDFS browser), is the district with the oldest tree and that tree's age; for the sample input above this would be District B 20.
Thanks for your help, @Coursal!
So this is my mapper:
package com.opstty.mapper;
import org.apache.commons.io.output.NullWriter;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import java.io.IOException;
import java.util.StringTokenizer;
public class TokenizerMapper_1_8_6 extends Mapper<Object, Text, Text, IntWritable> {
private final static IntWritable one = new IntWritable(1);
private Text result = new Text();
public void map(Object key, Text value, Context context)
throws IOException, InterruptedException {
StringTokenizer itr = new StringTokenizer(value.toString(),";");
int i = 0;
while (itr.hasMoreTokens()) {
String arrondissement = itr.nextToken();
if(i%13==1 && !arrondissement.toString().equals("ARRONDISSEMENT")) {
itr.nextToken();itr.nextToken();itr.nextToken();
String annee = itr.nextToken();
result.set(arrondissement);
if(Double.parseDouble((String.valueOf(annee))) > 1000){
context.write(result, new IntWritable((int) Double.parseDouble((String.valueOf(annee)))));
i+=3;
}
}
i++;
}
}
}
and my combiner :
package com.opstty.job;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
public class Compare extends Reducer<Text, IntWritable, Text, IntWritable> {
private IntWritable result = new IntWritable();
public void reduce(Text key, Iterable<IntWritable> values, Context context)
throws IOException, InterruptedException {
List a = new ArrayList();
int sum = 0;
for (IntWritable val : values) {
a.add(val.get());
}
Collections.sort(a);
result.set((Integer) Collections.min(a));
context.write(key, result);
}
}
The goal is: I have a csv file. In the map I get the district and the age of each tree inside a city.
In my combiner I get the oldest tree per district,
and in my reducer I want to print the district which actually has the oldest tree of the city.
It's working perfectly, thanks!
The reason why I used a combiner is the way my teacher formulated the question:
Write a MapReduce job that displays the district where the oldest tree is. The mapper must extract the age and district of each tree.
The problem is, this information can't be used as keys and values (why?). You will need to define a subclass of Writable to contain both pieces of information.
The reducer should consolidate all this data and only output the district.
I'm totally new to MapReduce and I didn't really get what a subclass of Writable is. So I searched on the net and found some topics about combiners and WritableCompare.
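For what it's worth, a minimal sketch of the kind of Writable subclass the assignment hints at could look like this (the class name DistrictTree and its two fields are my own invention, assuming a String district and an int age); the mapper would emit it as the value under a single key, and the reducer would read both fields back out:
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Writable;

// Hypothetical composite value: a district name together with one tree's age.
public class DistrictTree implements Writable {
    private String district = "";
    private int treeAge;

    public DistrictTree() {} // no-arg constructor required by Hadoop's serialization

    public DistrictTree(String district, int treeAge) {
        this.district = district;
        this.treeAge = treeAge;
    }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeUTF(district);  // serialize the fields in a fixed order
        out.writeInt(treeAge);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        district = in.readUTF(); // deserialize in the same order
        treeAge = in.readInt();
    }

    public String getDistrict() { return district; }
    public int getTreeAge()     { return treeAge; }
}
If such a class ever needed to serve as a key rather than a value, it would have to implement WritableComparable and define a compareTo method as well.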
Good day. I don't know if my title is the best one, but I have this list:
201505011000######PEN DRIVE01470
201505011000#######NOTEBOOK11470
201605011000#######NOTEBOOK21471
201705011000#######NOTEBOOK21472
201705011000###GAVETA DE HD01472
201703011000###GAVETA DE HD01473
Where, for example, 201505 represents the year and the month,
after the # signs comes the product name, and at the end is the price: 01470 represents 14.70.
What I need to do is get the lowest price for each product and show the year and month of that price.
But I don't know how to do that; what I can show is the lowest price and the product.
Here is my program:
MAPPER
package pkg.produto;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import java.io.IOException;
public class MinProdutoMapper
extends Mapper<LongWritable, Text, Text, IntWritable> {
@Override
public void map(LongWritable key, Text value, Context context)
throws IOException, InterruptedException {
String line = value.toString();
String ano = line.substring(0, 6);
String produto = line.substring(13, 27);//Nome do produto
produto = produto.substring(produto.lastIndexOf("#") + 1);
//String produto_ano = ano+produto ;
int valor = Integer.parseInt(line.substring(27, 32));//Valor do produto
context.write(new Text(produto), new IntWritable(valor));
}
}
REDUCER
package pkg.produto;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
import java.io.IOException;
public class MinProdutoReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
@Override
public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
int minValue = Integer.MAX_VALUE;
for (IntWritable value : values) {
minValue = Math.min(minValue, value.get());
}
context.write(key, new IntWritable(minValue));
}
}
Can someone help?
You can take a look at secondary sort and then modify your solution. Here is one link with a code example that can help you solve your problem: secondary-sort
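If a full secondary sort feels like more machinery than this needs, a simpler alternative (not what the linked example does; just a sketch that reuses the fixed-width offsets from your mapper, with the hypothetical class name MinProdutoComPeriodo) is to keep the product as the key, pack the year/month into the value next to the price, and let the reducer remember which period produced the minimum:
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Hypothetical container class; only the nested mapper and reducer matter here.
public class MinProdutoComPeriodo {

    // The value carries "price;yearMonth" so the reducer can report when the minimum occurred.
    public static class Map extends Mapper<LongWritable, Text, Text, Text> {
        @Override
        public void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String line = value.toString();
            String anoMes = line.substring(0, 6);                       // e.g. 201505
            String produto = line.substring(13, 27);
            produto = produto.substring(produto.lastIndexOf("#") + 1);  // product name
            String valor = line.substring(27, 32);                      // price, e.g. 01470
            context.write(new Text(produto), new Text(valor + ";" + anoMes));
        }
    }

    public static class Reduce extends Reducer<Text, Text, Text, Text> {
        @Override
        public void reduce(Text key, Iterable<Text> values, Context context)
                throws IOException, InterruptedException {
            int minValue = Integer.MAX_VALUE;
            String minPeriod = "";
            for (Text value : values) {
                String[] parts = value.toString().split(";");
                int price = Integer.parseInt(parts[0]);
                if (price < minValue) {      // remember the cheapest price and its period
                    minValue = price;
                    minPeriod = parts[1];
                }
            }
            context.write(key, new Text(minPeriod + " " + minValue));
        }
    }
}
A proper secondary sort, as in the linked example, would instead put product and price into a composite key and add a grouping comparator and a partitioner, which is worth the extra work when the data per product is large.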
I know that the SortComparator is used to sort the map output by key. I have written a custom SortComparator to understand the MapReduce framework better. This is my WordCount class with the custom SortComparator class.
package bananas;
import java.io.FileWriter;
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.io.WritableComparator;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Partitioner;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
public class WordCount {
public static class TokenizerMapper
extends Mapper<Object, Text, Text, IntWritable>{
private final static IntWritable one = new IntWritable(1);
private Text word = new Text();
public void map(Object key, Text value, Context context
) throws IOException, InterruptedException {
StringTokenizer itr = new StringTokenizer(value.toString());
while (itr.hasMoreTokens()) {
word.set(itr.nextToken());
context.write(word, one);
}
}
}
public static class IntSumReducer
extends Reducer<Text,IntWritable,Text,IntWritable> {
private IntWritable result = new IntWritable();
public void reduce(Text key, Iterable<IntWritable> values,
Context context
) throws IOException, InterruptedException {
int sum = 0;
for (IntWritable val : values) {
sum += val.get();
}
result.set(sum);
context.write(key, result);
}
}
public static class MyPartitoner extends Partitioner<Text, IntWritable>{
@Override
public int getPartition(Text key, IntWritable value, int numPartitions) {
return Math.abs(key.hashCode()) % numPartitions;
}
}
public static class MySortComparator2 extends WritableComparator{
protected MySortComparator2() {
super();
}
@SuppressWarnings({ "rawtypes" })
@Override
public int compare(WritableComparable w1,WritableComparable w2){
return 0;
}
}
public static void main(String[] args) throws Exception {
Configuration conf = new Configuration();
Job job = Job.getInstance(conf, "word count");
job.setJarByClass(WordCount.class);
job.setSortComparatorClass(MySortComparator2.class);
job.setMapperClass(TokenizerMapper.class);
job.setCombinerClass(IntSumReducer.class);
job.setReducerClass(IntSumReducer.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(IntWritable.class);
FileInputFormat.addInputPath(job, new Path(args[0]));
FileOutputFormat.setOutputPath(job, new Path(args[1]));
System.exit(job.waitForCompletion(true) ? 0 : 1);
}
}
but when I execute this I get this error:
Error: java.lang.NullPointerException
at org.apache.hadoop.io.WritableComparator.compare(WritableComparator.java:157)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.compare(MapTask.java:1265)
at org.apache.hadoop.util.QuickSort.fix(QuickSort.java:35)
at org.apache.hadoop.util.QuickSort.sortInternal(QuickSort.java:87)
at org.apache.hadoop.util.QuickSort.sort(QuickSort.java:63)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(MapTask.java:1593)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:1482)
at org.apache.hadoop.mapred.MapTask$NewOutputCollector.close(MapTask.java:720)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:790)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
My custom SortComparator class looks fine to me. After mapping is done, MySortComparator2's compare method should receive Text keys as input, and since I am returning 0, no sorting will be done. This is what I expected to see/observe. I followed these tutorials:
http://codingjunkie.net/secondary-sort/
http://blog.zaloni.com/secondary-sorting-in-hadoop
http://www.bigdataspeak.com/2013/02/hadoop-how-to-do-secondary-sort-on_25.html
Thanks in advance; I would appreciate some help.
Actually, there is a problem with the MySortComparator2 constructor. The code should look like:
protected MySortComparator2() {
super(Text.class, true);
}
where the first parameter is your key class, and the second parameter's value ensures that WritableComparator is instantiated in a way that WritableComparator.compare(byte[] b1, int s1, int l1, byte[] b2, int s2, int l2) can invoke MySortComparator2.compare(WritableComparable a, WritableComparable b).
You need to implement/override this method, too:
public int compare(byte[] b1, int s1, int l1, byte[] b2, int s2, int l2) {
// per your desired no-sort logic
return 0;
}
I think that your comparator is being constructed in such a way that the variables mentioned in the super implementation are null (and that byte-level method is what's actually called in support of the sort, not the method you wrote above). That's why you're getting the NullPointerException. By overriding the method with an implementation that doesn't use those variables, you can avoid the exception.
As Chris Gerken said, you need to override this method too while extending WritableComparator, or implement RawComparator instead of extending WritableComparator.
public int compare(byte[] b1, int s1, int l1, byte[] b2, int s2, int l2) {
return 0;
}
And, as you said, you wanted no sorting to be done. But if you return 0, every time MapReduce tries to sort/compare keys it sees them as the same thing, so you will receive only one key-value pair: the first key from the map task that finishes first, with the total number of words in the input file as its value. I hope you understand what I am saying. If your input is something like this
why are rockets cylindrical
your reduce output will be
why 4
since it treats everything as the same key. I hope this helps.
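Putting the constructor fix and the byte-level override together, a corrected comparator might look like the following sketch; it still deliberately reports every pair of keys as equal, so the single-key behaviour described above is exactly what you should expect to see:
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.io.WritableComparator;

public class MySortComparator2 extends WritableComparator {

    protected MySortComparator2() {
        // registering the key class (and asking for instances to be created)
        // initializes the internal buffers that the byte-level compare relies on
        super(Text.class, true);
    }

    @SuppressWarnings("rawtypes")
    @Override
    public int compare(WritableComparable w1, WritableComparable w2) {
        return 0; // treat every key as equal: no ordering
    }

    @Override
    public int compare(byte[] b1, int s1, int l1, byte[] b2, int s2, int l2) {
        return 0; // byte-level version used during the sort/spill phase; same no-op ordering
    }
}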
I connected Hadoop with Eclipse and started the job through the Eclipse plug-in; the MapReduce job completed successfully. But when I compile my code into a jar file and then execute the job with the hadoop command, it throws the following errors:
Error: java.lang.IndexOutOfBoundsException: Index: 1, Size: 1
at java.util.ArrayList.rangeCheck(ArrayList.java:635)
at java.util.ArrayList.get(ArrayList.java:411)
at Combiner.reduce(Combiner.java:32)
at Combiner.reduce(Combiner.java:1)
and my code is as follows:
import java.io.IOException;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.Iterator;
import java.util.PriorityQueue;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;
public class Combiner extends MapReduceBase implements Reducer<Text,Text,Text,Text>{
public void reduce(Text key,Iterator<Text>values,OutputCollector<Text,Text>output,Reporter reporter)
throws IOException{
int num=3;
Comparator<String> comparator=new MyComparator();
PriorityQueue<String> queue=new PriorityQueue<String>(100,comparator);
ArrayList<String> list=new ArrayList<String>();
while(values.hasNext()){
String str=values.next().toString();
queue.add(str);
}
while(!queue.isEmpty()){
list.add(queue.poll());
}
String getCar="";
for(int i=0;i<num;i++){
getCar=getCar+list.get(i)+"\n";
}
output.collect(new Text(""), new Text(getCar));
}
public class MyComparator implements Comparator<String>{
public int compare(String s1,String s2){
if(Long.parseLong(s1.split(",")[4])>Long.parseLong(s2.split(",")[4])){
return 1;
}else if(Long.parseLong(s1.split(",")[4])<Long.parseLong(s2.split(",")[4])){
return -1;
}else{
return 0;
}
}
}
}
This happens because your list has one element (Size: 1) and you ask for the second element (Index: 1; indexing starts from zero)! A simple System.out.println for each list element will help you get through...
Why do you set the number of elements to 3? If you know that it will always be 3 (unlikely), then change the list to an array of size 3. If you don't know that, then change num to list.size(), like:
for(int i=0;i<list.size();i++)
But before anything else, you should understand why you get these values for this key.
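Building on that, a guarded version of the loop inside reduce() might look like the sketch below (num and list as in your code); it never asks for more elements than the list actually holds:
// emit at most 'num' entries, but never more than the list actually contains
int limit = Math.min(num, list.size());
StringBuilder getCar = new StringBuilder();
for (int i = 0; i < limit; i++) {
    getCar.append(list.get(i)).append("\n");
}
output.collect(new Text(""), new Text(getCar.toString()));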
I am comparing example Java code from the O'Reilly book Hadoop: The Definitive Guide (by Tom White, 3rd edition) and my own attempt at recreating/understanding it. The issue I am having is as follows:
The class from the book compiles just fine:
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
public class MaxTemperatureReducer
extends Reducer<Text, IntWritable, Text, IntWritable> {
@Override
public void reduce(Text key, Iterable<IntWritable> values,
Context context)
throws IOException, InterruptedException {
int maxValue = Integer.MIN_VALUE;
for (IntWritable value : values) {
maxValue = Math.max(maxValue, value.get());
}
context.write(key, new IntWritable(maxValue));
}
}
But when I try to test a portion of it on my own, I get the compile error of "int cannot be dereferenced:"
public class TestMinValue {
public static void main(String[] args){
int[] values = {1,2,3,4,5};
int maxValue = Integer.MIN_VALUE;
for(int value : values){
maxValue = Math.max(maxValue, value.get());
}
}
}
I am new to Java and would like to understand the difference; why is the example class working, but my snippet of it isn't?
The IntWritable type is a class, which has a get() method. You are using the primitive type int instead. Primitives do not have methods in Java.
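For comparison, a minimal sketch of the snippet fixed for primitives; dropping the .get() call is all that's needed, since value already is the int itself:
public class TestMinValue {
    public static void main(String[] args) {
        int[] values = {1, 2, 3, 4, 5};
        int maxValue = Integer.MIN_VALUE;
        for (int value : values) {
            // 'value' is already a primitive int, so there is no .get() to call
            maxValue = Math.max(maxValue, value);
        }
        System.out.println(maxValue); // prints 5
    }
}
The book's reducer compiles because its loop variable is an IntWritable object, so calling get() on it to unwrap the int is valid there.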