Here is an example:
Is it possible to have the same mapper run against multiple reducers at the same time? For example:
map output: {1:[1,2,3,4,5,4,3,2], 4:[5,4,6,7,8,9,5,3,3,2], 3:[1,5,4,3,5,6,7,8,9,1], and so on}
reducer1 : sum of all numbers
reducer2 : average of all numbers
reducer3 : mode of all numbers
all acting on the same keys, like
reducer1 output: {1:sum of values, 2:sum of values, and so on}
reducer2 output: {1:avg of values, 2: avg of values and so on}
reducer3 output: {1:mode of values, 2: mode of values, and so on}
and so on. Please let me know.
I really wanted to answer this for you, but it's already been asked: Hadoop one Map and multiple Reduce
No, it is not possible. But you can implement your own reducer that calculates all of those statistics for you and outputs them.
You can make a custom Writable for that.
http://www.cloudera.com/blog/2011/04/simple-moving-average-secondary-sort-and-mapreduce-part-3/
This actually does pretty much what you are looking for.
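For what it's worth, here is a minimal sketch of that single-reducer approach (the class and field names are mine, and it emits a tab-separated Text instead of a custom Writable):
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Hypothetical reducer: for each key it emits "sum \t average \t mode" as one Text value.
public class StatsReducer extends Reducer<IntWritable, IntWritable, IntWritable, Text> {

    @Override
    protected void reduce(IntWritable key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        long sum = 0;
        long count = 0;
        Map<Integer, Integer> frequencies = new HashMap<>();

        for (IntWritable value : values) {
            int v = value.get();
            sum += v;
            count++;
            frequencies.merge(v, 1, Integer::sum);
        }

        // Mode = the value with the highest frequency (ties broken arbitrarily).
        int mode = 0;
        int bestFrequency = -1;
        for (Map.Entry<Integer, Integer> e : frequencies.entrySet()) {
            if (e.getValue() > bestFrequency) {
                bestFrequency = e.getValue();
                mode = e.getKey();
            }
        }

        double average = (double) sum / count;
        context.write(key, new Text(sum + "\t" + average + "\t" + mode));
    }
}
If you prefer, you can bundle the three numbers into a custom Writable instead of a Text, as suggested above.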
Related
Let's say I have a dataset like below:
0,0,A
0,1,B
0,2,C
1,4,D
9,5,E
1,3,O
4,4,L
7,8,Z
I want to implement a MapReduce job that groups these into chunks of size M. Let's say M=4; then I want an output file like:
0,0,A;0,1,B;0,2,C;1,4,D
9,5,E;1,3,O;4,4,L;7,8,Z
I am worried that this might not be possible, because values in the reducer are grouped by a common key, which does not exist in this scenario.
I came across the following Apache Spark code snippet:
JavaRDD<String> lines = new JavaSparkContext(sparkSession.sparkContext()).textFile("src\\main\\resources\\data.txt");
JavaPairRDD<String, Integer> pairs = lines.mapToPair(s -> new Tuple2<>(s, 1));
System.out.println(pairs.collect());
JavaPairRDD<String, Integer> counts = pairs.reduceByKey((a, b) -> a + b);
System.out.println("Reduced data: " + counts.collect());
My data.txt is as follows:
Mahesh
Mahesh
Ganesh
Ashok
Abnave
Ganesh
Mahesh
The output is:
[(Mahesh,1), (Mahesh,1), (Ganesh,1), (Ashok,1), (Abnave,1), (Ganesh,1), (Mahesh,1)]
Reduced data: [(Ganesh,2), (Abnave,1), (Mahesh,3), (Ashok,1)]
While I understand how the first line of output is obtained, I don't understand how the second line is obtained, that is, how the JavaPairRDD<String, Integer> counts is formed by reduceByKey.
I found that the signature of reduceByKey() is as follows:
public JavaPairRDD<K,V> reduceByKey(Function2<V,V,V> func)
The signature of Function2.call() (http://spark.apache.org/docs/1.2.0/api/java/org/apache/spark/api/java/function/Function2.html#call(T1, T2)) is as follows:
R call(T1 v1, T2 v2) throws Exception
The explanation of reduceByKey() reads as follows:
Merge the values for each key using an associative reduce function. This will also perform the merging locally on each mapper before sending results to a reducer, similarly to a "combiner" in MapReduce. Output will be hash-partitioned with the existing partitioner/ parallelism level.
This explanation sounds somewhat confusing to me. Maybe there is something more to the functionality of reduceByKey(). By looking at the input and output of reduceByKey() and Function2.call(), I feel that reduceByKey() somehow sends values of the same key to call() in pairs, but that still isn't clear to me. Can anyone explain precisely how reduceByKey() and Function2.call() work together?
As its name implies, reduceByKey() reduces data based on the lambda function you pass to it.
In your example, this function is a simple adder: for a and b, return a + b.
The best way to understand how the result is formed is to imagine what happens internally. The ByKey() part groups your records based on their key values. In your example, you'll have 4 different sets of pairs:
Set 1: ((Mahesh, 1), (Mahesh, 1), (Mahesh, 1))
Set 2: ((Ganesh, 1), (Ganesh, 1))
Set 3: ((Ashok, 1))
Set 4: ((Abnave, 1))
Now, the reduce part will try to reduce the previous 4 sets using the lambda function (the adder):
For Set 1: (Mahesh, 1 + 1 + 1) -> (Mahesh, 3)
For Set 2: (Ganesh, 1 + 1) -> (Ganesh, 2)
For Set 3: (Ashok, 1) -> (Ashok, 1) (nothing to add)
For Set 4: (Abnave, 1) -> (Abnave, 1) (nothing to add)
Function signatures can sometimes be confusing, as they tend to be more generic.
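To make the link to Function2.call() explicit, here is the same adder written as an anonymous Function2, continuing the snippet from your question (this is just a sketch of what the lambda expands to):
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.function.Function2;

// Equivalent to pairs.reduceByKey((a, b) -> a + b):
JavaPairRDD<String, Integer> counts = pairs.reduceByKey(
        new Function2<Integer, Integer, Integer>() {
            @Override
            public Integer call(Integer a, Integer b) {
                // For (Mahesh,1), (Mahesh,1), (Mahesh,1), Spark calls
                // call(1, 1) -> 2 and then call(2, 1) -> 3, giving (Mahesh, 3).
                return a + b;
            }
        });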
I'm thinking that you probably understand groupByKey? groupByKey groups all values for a certain key into a list (or iterable) so that you can do something with that - like, say, sum (or count) the values. Basically, what sum does is to reduce a list of many values into a single value. It does so by iteratively adding two values to yield one value and that is what Function2 needs to do when you write your own. It needs to take in two values and return one value.
reduceByKey does the same as groupByKey, BUT it does what is called a "map-side reduce" before shuffling data around. Because Spark distributes data across many different machines to allow for parallel processing, there is no guarantee that data with the same key is placed on the same machine. Spark thus has to shuffle data around, and the more data that needs to be shuffled the longer our computations will take, so it's a good idea to shuffle as little data as possible.
In a map-side reduce, Spark will first sum all the values for a given key locally on the executors before it sends (shuffles) the result around for the final sum to be computed. This means that much less data - a single value instead of a list of values - needs to be sent between the different machines in the cluster, and for this reason reduceByKey is most often preferable to groupByKey.
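To see the difference in code, here is a sketch of both approaches, continuing the question's pairs RDD (the groupByKey version shuffles every single (name, 1) pair, while reduceByKey pre-sums within each partition first):
// groupByKey: ship all values for a key across the network, then sum them.
JavaPairRDD<String, Integer> countsViaGroup = pairs.groupByKey()
        .mapValues(values -> {
            int sum = 0;
            for (Integer v : values) {
                sum += v; // summing the 1s is the same as counting occurrences
            }
            return sum;
        });

// reduceByKey: sum locally within each partition first, then ship one partial
// sum per key per partition and combine those.
JavaPairRDD<String, Integer> countsViaReduce = pairs.reduceByKey((a, b) -> a + b);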
For a more detailed description, I can recommend this article :)
Suppose I have a big tsv file with this kind of information:
2012-09-22 00:00:01.0 249342258346881024 47268866 0 0 0 bo
2012-09-22 00:00:02.0 249342260934746115 1344951 0 0 4 ot
2012-09-22 00:00:02.0 249342261098336257 346095334 1 0 0 ot
2012-09-22 00:05:02.0 249342261500977152 254785340 0 1 0 ot
I want to implement a MapReduce job that enumerates five-minute time intervals and filters some information from the TSV input. The output file would look like this:
0 47268866 bo
0 134495 ot
0 346095334 ot
1 254785340 ot
The key is the number of the interval, e.g., 0 refers to the interval from 2012-09-22 00:00:00.0 to 2012-09-22 00:04:59.
I don't know if this problem doesn't fit the MapReduce approach or if I'm not thinking about it the right way. In the map function, I'm just passing the timestamp as the key and the filtered information as the value. In the reduce function, I count the intervals by using global variables and produce the output mentioned.
i. Does the framework determine the number of reducers automatically, or is it user defined? With one reducer, I think there is no problem with my approach, but I'm wondering whether a single reducer can become a bottleneck when dealing with really large files.
ii. How can I solve this problem with multiple reducers?
Any suggestions would be really appreciated!
Thanks in advance!
EDIT:
The first question is answered by @Olaf, but the second still gives me some doubts regarding parallelism. The output of my map function is currently this (I'm just passing the timestamp with minute precision):
2012-09-22 00:00 47268866 bo
2012-09-22 00:00 344951 ot
2012-09-22 00:00 346095334 ot
2012-09-22 00:05 254785340 ot
So in the reduce function I receive inputs where the key represents the minute when the information was collected and the values are the information itself, and I want to enumerate five-minute intervals beginning with 0. I'm currently using a global variable to store the beginning of the interval, and when a key goes past it I increment the interval counter (which is also a global variable).
Here is the code:
private long stepRange = TimeUnit.MINUTES.toMillis(5);
private long stepInitialMillis = 0;
private int stepCounter = 0;

@Override
public void reduce(Text key, Iterable<Text> values, Context context)
        throws IOException, InterruptedException {
    long millis = Long.valueOf(key.toString());
    if (stepInitialMillis == 0) {
        // First key seen: this is where the first interval starts.
        stepInitialMillis = millis;
    } else if (millis - stepInitialMillis > stepRange) {
        // Key falls outside the current interval: move on to the next one.
        stepCounter = stepCounter + 1;
        stepInitialMillis = millis;
    }
    for (Text value : values) {
        context.write(new Text(String.valueOf(stepCounter)),
                new Text(key.toString() + "\t" + value));
    }
}
So, with multiple reducers, my reduce function will run on two or more nodes, in two or more JVMs, and I will lose the control given by the global variables. I can't think of a workaround for my case.
The number of reducers depends on the configuration of the cluster, although you can limit the number of reducers used by your MapReduce job.
A single reducer would indeed become a bottleneck in your MapReduce job if you are dealing with any significant amount of data.
The Hadoop MapReduce engine guarantees that all values associated with the same key are sent to the same reducer, so your approach should work with multiple reducers. See the Yahoo! tutorial for details: http://developer.yahoo.com/hadoop/tutorial/module4.html#listreducing
EDIT: To guarantee that all values for the same time interval go to the same reducer, you would have to use some unique identifier of the time interval as the key. You would have to do it in the mapper. I'm reading your question again and, unless you want to somehow aggregate the data between the records corresponding to the same time interval, you don't need any reducer at all.
EDIT: As @SeanOwen pointed out, the number of reducers depends on the configuration of the cluster. Usually it is configured to between 0.95 and 1.75 times the maximum number of tasks per node times the number of data nodes. If the mapred.reduce.tasks value is not set in the cluster configuration, the default number of reducers is 1.
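For reference, a minimal sketch of setting the reducer count explicitly in the driver, assuming Hadoop 2's org.apache.hadoop.mapreduce API (older versions use new Job(conf) instead of Job.getInstance; the job name and the value 4 are just examples):
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

Configuration conf = new Configuration();
Job job = Job.getInstance(conf, "five-minute-intervals");
// Ask for 4 reduce tasks; if you never set this, the job falls back to the
// cluster default (mapred.reduce.tasks), which is 1 out of the box.
job.setNumReduceTasks(4);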
It looks like you want to aggregate some data by five-minute blocks. MapReduce with Hadoop works great for this sort of thing! There should be no need for any "global variables". Here is how I would set it up:
The mapper reads one line of the TSV. It grabs the timestamp, and computes which five-minute bucket it belongs in. Make that into a string, and emit it as the key, something like "20120922:0000", "20120922:0005", "20120922:0010", etc. As for the value that is emitted along with that key, just keep it simple to start with, and send on the whole tab-delimited line as another Text object.
Now that the mapper has determined how the data needs to be organized, it's the reducer's job to do the aggregation. Each reducer will get a key (one of the five-minute buckets), along with the list of all the lines that fit into that bucket. It can iterate over that list, and extract whatever it wants from it, writing output to the context as needed.
As for mappers, just let Hadoop figure that part out. As a starting point, set the number of reducers to the number of nodes you have in the cluster. It should run just fine.
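A rough sketch of that mapper/reducer pair (the date pattern, field positions, and class names are assumptions based on the sample data, so adjust them to your real format; the two classes would normally live in separate files or as static nested classes of the driver):
import java.io.IOException;
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Mapper: emit (five-minute bucket, whole TSV line).
class IntervalMapper extends Mapper<LongWritable, Text, Text, Text> {

    private static final long FIVE_MINUTES_MS = 5 * 60 * 1000L;
    private final SimpleDateFormat inFormat = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.S");
    private final SimpleDateFormat bucketFormat = new SimpleDateFormat("yyyyMMdd:HHmm");

    @Override
    protected void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
        String[] fields = line.toString().split("\t");
        try {
            Date timestamp = inFormat.parse(fields[0]);
            // Truncate down to the start of the five-minute bucket, e.g. "20120922:0000".
            long bucketStart = (timestamp.getTime() / FIVE_MINUTES_MS) * FIVE_MINUTES_MS;
            context.write(new Text(bucketFormat.format(new Date(bucketStart))), line);
        } catch (ParseException e) {
            // Skip malformed lines.
        }
    }
}

// Reducer: every line of one bucket arrives here; pull out whatever you need.
class IntervalReducer extends Reducer<Text, Text, Text, Text> {

    @Override
    protected void reduce(Text bucket, Iterable<Text> lines, Context context)
            throws IOException, InterruptedException {
        for (Text line : lines) {
            String[] fields = line.toString().split("\t");
            // Example filtering: keep the third column and the last column.
            context.write(bucket, new Text(fields[2] + "\t" + fields[fields.length - 1]));
        }
    }
}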
Hope this helps.
Just for learning, I tried to modify the word count example and added a partitioner. I understood that by writing a customized partitioner we can control the number of reduce tasks that get created. This is good.
But one question I am not able to understand: does the number of output files generated in HDFS depend on the number of reduce tasks, or on the number of reduce calls made within each reduce task?
(For each reduce task there can be many reduce calls.)
Let me know if any other detail is needed. The code is very basic, so I'm not posting it.
I think your perception that writing a customized partitioner controls the number of reduce tasks getting created is wrong. Please check the following explanation:
The partitioner actually determines which reducer a key and its list of values are sent to, based on the hash value of the key, as shown below.
public class HashPartitioner<K, V> extends Partitioner<K, V> {
    public int getPartition(K key, V value, int numReduceTasks) {
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }
}
Now, the number of output files generated depends on the number of reduce tasks that you have asked the job to run. So say you configured 3 reduce tasks for the job, and you wrote a custom partitioner that sends keys to only 2 of the reducers. In this case you will find an empty part-r-00002 output file for the third reducer, as it did not get any records after partitioning. This empty part file can be removed using LazyOutputFormat.
Ex: import org.apache.hadoop.mapreduce.lib.output.LazyOutputFormat;
LazyOutputFormat.setOutputFormatClass(job, TextOutputFormat.class);
I hope this clears your doubt.
Hi, I have written a MapReduce job which generically parses XML files. I am able to parse an XML file and get all the key-value pairs generated properly. I have 6 different keys and their corresponding values, so I am running 6 different reducers in parallel.
Now the problem I am facing is that the reducers put two different keys' values in the same file and the remaining 4 keys in individual files. In short, out of the 6 output files from the reducers, I get 4 files each with a single key's values, 1 file with two keys' values, and 1 file with nothing.
I tried doing research on Google and various forums; the only thing I concluded is that I need a partitioner to solve this problem. I am new to Hadoop, so can someone shed some light on this issue and help me solve it?
I am working on a pseudo-distributed cluster and using Java as the programming language. I am not able to share code here, but I have tried to describe the problem briefly.
Let me know if more information is needed, and thanks in advance.
Having only 6 keys for 6 reducers isn't the best utilisation of Hadoop; while it would be nice for each of the 6 to go to a separate reducer, it isn't guaranteed.
Keys cannot be split across reducers, so if you were to have fewer than 6 keys, only a subset of your reducers would have any work to do. You should consider rethinking your key assignment (and perhaps your input files' appropriateness for Hadoop) and perhaps use a scheme such that there are enough keys to be spread somewhat evenly amongst the reducers.
EDIT: I believe what you might be after is MultipleOutputFormat, which has the method generateFileNameForKeyValue(key, value, name), allowing you to generate a file to write out to per key rather than just one file per Reducer.
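A sketch of that idea with the old mapred API (the class name here is made up): subclass MultipleTextOutputFormat and override generateFileNameForKeyValue so each key gets its own file:
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.lib.MultipleTextOutputFormat;

// Routes each record to a file named after its key instead of part-00000, part-00001, ...
public class KeyBasedOutputFormat extends MultipleTextOutputFormat<Text, Text> {

    @Override
    protected String generateFileNameForKeyValue(Text key, Text value, String name) {
        // "name" is the default leaf file name (e.g. part-00000); use the key instead.
        return key.toString();
    }
}
You would then register it on the JobConf with conf.setOutputFormat(KeyBasedOutputFormat.class).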
By default, Hadoop uses HashPartitioner, which looks something like this:
public class HashPartitioner<K2, V2> implements Partitioner<K2, V2> {

    public void configure(JobConf job) {}

    /** Use {@link Object#hashCode()} to partition. */
    public int getPartition(K2 key, V2 value, int numReduceTasks) {
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }
}
The expression (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks returns a number between 0 and numReduceTasks - 1, so in your case the range is 0 to 5, since numReduceTasks = 6.
The catch is in that line itself: two such expressions may return the same number, and as a result two different keys can go to the same reducer.
For example,
("go".hashCode() & Integer.MAX_VALUE) % 6
will return you 4 and,
("hello".hashCode() & Integer.MAX_VALUE) % 6
will also return you 4.
So, what I would suggest here is that if you want to be sure that all your 6 keys get processed by 6 different reducers, you need to create your own partitioner to get what you desire.
Check out this link for creating a custom partitioner if you have any confusion. You specify your custom partitioner using the Job class, something like the following:
job.setPartitionerClass(YourPartitionerHere.class);
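For instance, a rough sketch of a partitioner that pins six known keys to six reducers might look like this (the key names and the Text value type are assumptions; use whatever your job actually emits):
import java.util.Arrays;
import java.util.List;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

// Pins each of the six known keys to its own reducer (run the job with 6 reduce tasks).
public class FixedKeyPartitioner extends Partitioner<Text, Text> {

    // Hypothetical key names; replace with the six keys your XML parsing produces.
    private static final List<String> KEYS =
            Arrays.asList("key1", "key2", "key3", "key4", "key5", "key6");

    @Override
    public int getPartition(Text key, Text value, int numReduceTasks) {
        int index = KEYS.indexOf(key.toString());
        // Fall back to hash partitioning for any unexpected key.
        return index >= 0 ? index % numReduceTasks
                          : (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }
}
You would pair it with job.setNumReduceTasks(6) so that each key really gets its own reducer.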
Hope this helps.