WordCount with guaranteed message processing - java

I am trying to run the WordCount example with guaranteed message processing.
There is one spout:
WSpout - emits random sentences with a msgID.
and two bolts:
SplitSentence - splits each sentence into words and emits them with anchoring.
WordCount - prints word counts.
What I want to achieve with the code below is that once all the word counting for a sentence is done, the spout tuple corresponding to that sentence is acknowledged.
I am acking with _collector.ack(tuple) only at the last bolt, WordCount. What I find strange is that although ack() is called in WordCount.execute(), the corresponding WSpout.ack() is never called; the tuple always fails after the default timeout.
I really don't understand what is wrong with the code. Please help me understand the problem.
Any help is appreciated.
Below is the complete code.
public class TestTopology {
public static class WSpout implements IRichSpout {
SpoutOutputCollector _collector;
Integer msgID = 0;
@Override
public void nextTuple() {
Random _rand = new Random();
String[] sentences = new String[] { "There two things benefit",
" from Storms reliability capabilities",
"Specifying a link in the",
" tuple tree is " + "called anchoring",
" Anchoring is done at ",
"the same time you emit a " + "new tuple" };
String message = sentences[_rand.nextInt(sentences.length)];
_collector.emit(new Values(message), msgID);
System.out.println(msgID + " " + message);
msgID++;
}
@Override
public void open(Map conf, TopologyContext context,
SpoutOutputCollector collector) {
System.out.println("open");
_collector = collector;
}
@Override
public void declareOutputFields(OutputFieldsDeclarer declarer) {
declarer.declare(new Fields("LINE"));
}
@Override
public void ack(Object msgID) {
System.out.println("ack ------------------- " + msgID);
}
@Override
public void fail(Object msgID) {
System.out.println("fail ----------------- " + msgID);
}
@Override
public void activate() {
// TODO Auto-generated method stub
}
@Override
public void close() {
}
@Override
public void deactivate() {
// TODO Auto-generated method stub
}
@Override
public Map<String, Object> getComponentConfiguration() {
// TODO Auto-generated method stub
return null;
}
}
public static class SplitSentence extends BaseRichBolt {
OutputCollector _collector;
public void prepare(Map conf, TopologyContext context,
OutputCollector collector) {
_collector = collector;
}
public void execute(Tuple tuple) {
String sentence = tuple.getString(0);
for (String word : sentence.split(" ")) {
System.out.println(word);
_collector.emit(tuple, new Values(word));
}
//_collector.ack(tuple);
}
public void declareOutputFields(OutputFieldsDeclarer declarer) {
declarer.declare(new Fields("word"));
}
}
public static class WordCount extends BaseBasicBolt {
Map<String, Integer> counts = new HashMap<String, Integer>();
@Override
public void execute(Tuple tuple, BasicOutputCollector collector) {
System.out.println("WordCount MSGID : " + tuple.getMessageId());
String word = tuple.getString(0);
Integer count = counts.get(word);
if (count == null)
count = 0;
count++;
System.out.println(word + " ===> " + count);
counts.put(word, count);
collector.emit(new Values(word, count));
}
@Override
public void declareOutputFields(OutputFieldsDeclarer declarer) {
declarer.declare(new Fields("word", "count"));
}
}
public static void main(String[] args) throws Exception {
TopologyBuilder builder = new TopologyBuilder();
builder.setSpout("spout", new WSpout(), 2);
builder.setBolt("split", new SplitSentence(), 2).shuffleGrouping(
"spout");
builder.setBolt("count", new WordCount(), 2).fieldsGrouping("split",
new Fields("word"));
Config conf = new Config();
conf.setDebug(true);
if (args != null && args.length > 0) {
conf.setNumWorkers(1);
StormSubmitter.submitTopology(args[0], conf,
builder.createTopology());
} else {
conf.setMaxTaskParallelism(3);
LocalCluster cluster = new LocalCluster();
cluster.submitTopology("word-count", conf, builder.createTopology());
Thread.sleep(10000);
cluster.shutdown();
}
}
}

WordCount extends BaseBasicBolt, which ensures that tuples are acked automatically IN THAT BOLT, as you stated in your comment. However, SplitSentence extends BaseRichBolt, which requires you to ack tuples manually. You're not acking, so the tuples time out.
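For reference, a minimal fix based on this answer: re-enable the commented-out ack in SplitSentence so every input tuple is acked once its anchored words have been emitted. (Alternatively, let SplitSentence extend BaseBasicBolt, which anchors and acks for you.)
public void execute(Tuple tuple) {
    String sentence = tuple.getString(0);
    for (String word : sentence.split(" ")) {
        // anchoring ties each word to the input tuple's tuple tree
        _collector.emit(tuple, new Values(word));
    }
    // without this ack the tuple tree never completes, so the spout's
    // fail() fires after the default timeout instead of ack()
    _collector.ack(tuple);
}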

Related

Why is Storm not replaying failed messages on the cluster at work, but replaying them in local cluster mode on my desktop?

Here is the code I am trying to execute. I am intentionally failing tuples in the bolt so that I can see failed messages being replayed by Storm. But it looks like this is not happening.
public static class FastRandomSentenceSpout extends BaseRichSpout {
SpoutOutputCollector _collector;
Random _rand;
private static final String[] CHOICES = {
"marry had a little lamb whos fleese was white as snow",
"and every where that marry went the lamb was sure to go"
};
@Override
public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
_collector = collector;
_rand = ThreadLocalRandom.current();
}
@Override
public void nextTuple() {
String sentence = CHOICES[_rand.nextInt(CHOICES.length)];
_collector.emit(new Values(sentence), sentence);
}
@Override
public void fail(Object id) {
System.out.println("RAVI: the failedObjectId = "+id);
_collector.emit(new Values(id), id);
}
@Override
public void declareOutputFields(OutputFieldsDeclarer declarer) {
declarer.declare(new Fields("sentence"));
}
}
Here are the details of the SplitSentence bolt, where I intentionally fail.
public static class SplitSentence extends BaseRichBolt
{
OutputCollector _collector;
@Override
public void prepare(Map conf,
TopologyContext context,
OutputCollector collector)
{
_collector = collector;
}
This is the function where the failure happens:
@Override
public void execute(Tuple tuple)
{
String sentence = tuple.getString(0);
System.out.println("sentence = "+sentence);
if(sentence.equals("marry had a little lamb whos fleese was white as snow"))
{
System.out.println("going to fail");
_collector.fail(tuple);
}
else
{
for (String word: sentence.split("\\s+")) {
_collector.emit(tuple, new Values(word, 1));
}
_collector.ack(tuple);
}
}
@Override
public void declareOutputFields(OutputFieldsDeclarer declarer) {
declarer.declare(new Fields("word", "count"));
}
}
This is the driver code:
public static void main(String[] args) throws Exception {
TopologyBuilder builder = new TopologyBuilder();
builder.setSpout("spout", new FastRandomSentenceSpout(), 4);
builder.setBolt("split", new SplitSentence(), 4).shuffleGrouping("spout");
Config conf = new Config();
conf.registerMetricsConsumer(
org.apache.storm.metric.LoggingMetricsConsumer.class);
String name = "wc-test";
if (args != null && args.length > 0) {
name = args[0];
}
conf.setNumWorkers(1);
StormSubmitter.submitTopologyWithProgressBar(name,
conf,
builder.createTopology());
}
It turns out it was due to a global setting in storm.yaml. The specific setting was:
topology.acker.executors: 0
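With topology.acker.executors set to 0, Storm acks every spout tuple as soon as it is emitted and never tracks the tuple tree, so the spout's fail() is never invoked and nothing is replayed. A minimal sketch of restoring acking for a single topology (Config.setNumAckers is the standard Storm API and overrides the storm.yaml value):
Config conf = new Config();
// run at least one acker executor so Storm tracks tuple trees and can
// invoke the spout's ack()/fail() callbacks on completion or timeout
conf.setNumAckers(1);
Alternatively, remove the line from storm.yaml or set it to 1 or more.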

In my MapReduce program, the reducer is not getting called by the driver

Following the MapReduce programming model, I wrote this program, where the driver code is as follows.
MY DRIVER CLASS
public class MRDriver extends Configured implements Tool
{
@Override
public int run(String[] strings) throws Exception {
if(strings.length != 2)
{
System.err.println("usage : <inputlocation> <outputlocation>");
System.exit(0);
}
Job job = new Job(getConf(), "multiple files");
job.setJarByClass(MRDriver.class);
job.setMapperClass(MRMapper.class);
job.setReducerClass(MRReducer.class);
job.setInputFormatClass(TextInputFormat.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(Text.class);
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(Text.class);
FileInputFormat.addInputPath(job, new Path(strings[0]));
FileOutputFormat.setOutputPath(job, new Path(strings[1]));
return job.waitForCompletion(true) ? 0 : 1;
//throw new UnsupportedOperationException("Not supported yet."); //To change body of generated methods, choose Tools | Templates.
}
public static void main(String[] args) throws Exception
{
Configuration conf = new Configuration();
System.exit(ToolRunner.run(conf, new MRDriver(), args));
}
}
MY MAPPER CLASS
class MRMapper extends Mapper<LongWritable, Text, Text, Text>
{
@Override
public void map(LongWritable key, Text value, Context context)
{
try
{
StringTokenizer iterator;
String idsimval = null;
iterator = new StringTokenizer(value.toString(), "\t");
String id = iterator.nextToken();
String sentival = iterator.nextToken();
if(iterator.hasMoreTokens())
idsimval = iterator.nextToken();
context.write(new Text("unique"), new Text(id + "_" + sentival + "_" + idsimval));
} catch (IOException | InterruptedException e)
{
System.out.println(e);
}
}
}
MY REDUCER CLASS
class MRReducer extends Reducer<Text, Text, Text, Text> {
String[] records;
HashMap<Long, String> sentiMap = new HashMap<>();
HashMap<Long, String> cosiMap = new HashMap<>();
private String leftIdStr;
private ArrayList<String> rightIDList, rightSimValList, matchingSimValList, matchingIDList;
private double leftVal;
private double rightVal;
private double currDiff;
private double prevDiff;
private int finalIndex;
Context newContext;
private int i;
public void reducer(Text key, Iterable<Text> value, Context context) throws IOException, InterruptedException {
for (Text string : value) {
records = string.toString().split("_");
sentiMap.put(Long.parseLong(records[0]), records[1]);
if (records[2] != null) {
cosiMap.put(Long.parseLong(records[0]), records[2]);
}
if(++i == 2588)
{
newContext = context;
newfun();
}
context.write(new Text("hello"), new Text("hii"));
}
context.write(new Text("hello"), new Text("hii"));
}
void newfun() throws IOException, InterruptedException
{
for (HashMap.Entry<Long, String> firstEntry : cosiMap.entrySet()) {
try {
leftIdStr = firstEntry.getKey().toString();
rightIDList = new ArrayList<>();
rightSimValList = new ArrayList<>();
matchingSimValList = new ArrayList<>();
matchingIDList = new ArrayList<>();
for (String strTmp : firstEntry.getValue().split(" ")) {
rightIDList.add(strTmp.substring(0, 18));
rightSimValList.add(strTmp.substring(19));
}
String tmp = sentiMap.get(Long.parseLong(leftIdStr));
if ("NULL".equals(tmp)) {
leftVal = Double.parseDouble("0");
} else {
leftVal = Double.parseDouble(tmp);
}
tmp = sentiMap.get(Long.parseLong(rightIDList.get(0)));
if ("NULL".equals(tmp)) {
rightVal = Double.parseDouble("0");
} else {
rightVal = Double.parseDouble(tmp);
}
prevDiff = Math.abs(leftVal - rightVal);
int oldIndex = 0;
for (String s : rightIDList) {
try {
oldIndex++;
tmp = sentiMap.get(Long.parseLong(s));
if ("NULL".equals(tmp)) {
rightVal = Double.parseDouble("0");
} else {
rightVal = Double.parseDouble(tmp);
}
currDiff = Math.abs(leftVal - rightVal);
if (prevDiff > currDiff) {
prevDiff = currDiff;
}
} catch (Exception e) {
}
}
oldIndex = 0;
for (String s : rightIDList) {
tmp = sentiMap.get(Long.parseLong(s));
if ("NULL".equals(tmp)) {
rightVal = Double.parseDouble("0");
} else {
rightVal = Double.parseDouble(tmp);
}
currDiff = Math.abs(leftVal - rightVal);
if (Objects.equals(prevDiff, currDiff)) {
matchingSimValList.add(rightSimValList.get(oldIndex));
matchingIDList.add(rightIDList.get(oldIndex));
}
oldIndex++;
}
finalIndex = rightSimValList.indexOf(Collections.max(matchingSimValList));
newContext.write(new Text(leftIdStr), new Text(" " + rightIDList.get(finalIndex) + ":" + rightSimValList.get(finalIndex)));
} catch (NumberFormatException nfe) {
}
}
}
}
What is the problem, and does it belong to the MapReduce program or to the Hadoop system configuration? Whenever I run this program, it only writes the mapper output into HDFS.
Inside your Reducer class you must override the reduce method. You are declaring a reducer method, which the framework never calls.
Try modifying your function inside the Reducer class:
@Override
public void reduce(Text key, Iterable<Text> value, Context context) throws IOException, InterruptedException {
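For completeness, a minimal sketch of the corrected method under the job's Text/Text output types; the loop body is a placeholder for the original sentiMap/cosiMap logic. The @Override annotation is what surfaces the mistake: the compiler rejects a method named reducer because it overrides nothing.
@Override
public void reduce(Text key, Iterable<Text> value, Context context)
        throws IOException, InterruptedException {
    for (Text record : value) {
        // the original aggregation logic goes here
        context.write(key, record);
    }
}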

Hadoop MapReduce reducer does not start

The map phase runs and then just quits without bothering with the reducer. The job alternately prints "Hello from mapper." and "Writing CellWithTotalAmount" and that's it. The output directory it creates is empty.
I've checked at least a dozen other "reducer won't start" questions and have not found an answer. I've checked that the output of map matches the input of reduce, that reduce uses Iterable, that the correct output classes have been set, etc.
Job config
public class HoursJob {
public static void main(String[] args) throws Exception {
if (args.length != 2) {
System.err.println("Usage: HoursJob <input path> <output path>");
System.exit(-1);
}
Job job = Job.getInstance();
job.setJarByClass(HoursJob.class);
job.setJobName("Hours job");
FileInputFormat.addInputPath(job, new Path(args[0]));
FileOutputFormat.setOutputPath(job, new Path(args[1]));
job.setMapperClass(HoursMapper.class);
job.setReducerClass(HoursReducer.class);
job.setMapOutputKeyClass(IntWritable.class);
job.setMapOutputValueClass(CellWithTotalAmount.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(NullWritable.class);
int ret = job.waitForCompletion(true) ? 0 : 1;
System.exit(ret);
}
}
Mapper
public class HoursMapper
extends Mapper<LongWritable, Text, IntWritable, CellWithTotalAmount> {
static double BEGIN_LONG = -74.913585;
static double BEGIN_LAT = 41.474937;
static double GRID_LENGTH = 0.011972;
static double GRID_HEIGHT = 0.008983112;
@Override
public void map(LongWritable key, Text value, Mapper.Context context)
throws IOException, InterruptedException {
System.out.println("Hello from mapper.");
String recordString = value.toString();
try {
DEBSFullRecord record = new DEBSFullRecord(recordString);
Date pickupDate = record.getPickup();
Calendar calendar = GregorianCalendar.getInstance();
calendar.setTime(pickupDate);
int pickupHour = calendar.get(Calendar.HOUR_OF_DAY);
int cellX = (int)
((record.getPickupLongitude() - BEGIN_LONG) / GRID_LENGTH) + 1;
int cellY = (int)
((BEGIN_LAT - record.getPickupLatitude()) / GRID_HEIGHT) + 1;
CellWithTotalAmount hourInfo =
new CellWithTotalAmount(cellX, cellY, record.getTotal());
context.write(new IntWritable(pickupHour), hourInfo);
} catch (Exception ex) {
System.out.println(
"Cannot parse: " + recordString + "due to the " + ex);
}
}
}
Reducer
public class HoursReducer
extends Reducer<IntWritable, CellWithTotalAmount, Text, NullWritable> {
@Override
public void reduce(IntWritable key, Iterable<CellWithTotalAmount> values,
Context context) throws IOException, InterruptedException {
System.out.println("Hello from reducer.");
int[][] cellRideCounters = getCellRideCounters(values);
CellWithRideCount cellWithMostRides =
getCellWithMostRides(cellRideCounters);
int[][] cellTotals = getCellTotals(values);
CellWithTotalAmount cellWithGreatestTotal =
getCellWithGreatestTotal(cellTotals);
String output = key + " "
+ cellWithMostRides.toString() + " "
+ cellWithGreatestTotal.toString();
context.write(new Text(output), NullWritable.get());
}
//omitted for brevity
}
Custom writable class
public class CellWithTotalAmount implements Writable {
public int cellX;
public int cellY;
public double totalAmount;
public CellWithTotalAmount(int cellX, int cellY, double totalAmount) {
this.cellX = cellX;
this.cellY = cellY;
this.totalAmount = totalAmount;
}
@Override
public void write(DataOutput out) throws IOException {
System.out.println("Writing CellWithTotalAmount");
out.writeInt(cellX);
out.writeInt(cellY);
out.writeDouble(totalAmount);
}
@Override
public void readFields(DataInput in) throws IOException {
System.out.println("Reading CellWithTotalAmount");
cellX = in.readInt();
cellY = in.readInt();
totalAmount = in.readDouble();
}
@Override
public String toString() {
return cellX + " " + cellY + " " + totalAmount;
}
}
I think there are a lot of exceptions being thrown in the reduce function, so the framework cannot complete the job properly.
public class HoursReducer
extends Reducer<IntWritable, CellWithTotalAmount, Text, NullWritable> {
@Override
public void reduce(IntWritable key, Iterable<CellWithTotalAmount> values,
Context context) throws IOException, InterruptedException {
System.out.println("Hello from reducer.");
try {
int[][] cellRideCounters = getCellRideCounters(values);
if (cellRideCounters[0].length > 0) { // check it before using it; more explanation below
CellWithRideCount cellWithMostRides =
getCellWithMostRides(cellRideCounters);
int[][] cellTotals = getCellTotals(values);
CellWithTotalAmount cellWithGreatestTotal =
getCellWithGreatestTotal(cellTotals);
String output = key + " "
+ cellWithMostRides.toString() + " "
+ cellWithGreatestTotal.toString();
context.write(new Text(output), NullWritable.get());
}
} catch (Exception e) {
e.printStackTrace();
return;
}
}
Add a try-catch to surface exceptions in the reduce function, and return from the function in the catch block.
Also add an if statement before calling getCellWithMostRides(..); I think the issue is in there. Fill in the if condition as you want: I made a guess and filled it according to that guess, so change it if it is not right for you.

Reduce method in Reducer class is not executing

In the code below, the reduce method inside the Reducer class is not executing. Please help me. In my reduce method I want to write the output to multiple files, so I have used MultipleOutputs.
public class DataValidation {
public static class Map extends Mapper<LongWritable, Text, Text, Text> {
int flag = 1;
boolean result;
private HashMap<String, FileConfig> fileConfigMaps = new HashMap<String, FileConfig>();
private HashMap<String, List<LineValidator>> mapOfValidators = new HashMap<String, List<LineValidator>>();
private HashMap<String, List<Processor>> mapOfProcessors = new HashMap<String, List<Processor>>();
protected void setup(Context context) throws IOException {
System.out.println("configure inside map class");
ConfigurationParser parser = new ConfigurationParser();
Config config = parser.parse(new Configuration());
List<FileConfig> file = config.getFiles();
for (FileConfig f : file) {
try {
fileConfigMaps.put(f.getName(), f);
System.out.println("quotes in" + f.isQuotes());
System.out.println("file from xml : " + f.getName());
ValidationBuilder builder = new ValidationBuilder();
// ProcessorBuilder constructor = new ProcessorBuilder();
List<LineValidator> validators;
validators = builder.build(f);
// List<Processor> processors = constructor.build(f);
mapOfValidators.put(f.getName(), validators);
// mapOfProcessors.put(f.getName(),processors);
} catch (Exception e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}
protected void map(LongWritable key, Text value, Context context)
throws IOException, InterruptedException {
// String filename = ((FileSplit) context.getInputSplit()).getPath()
// .getName();
FileSplit fs = (FileSplit) context.getInputSplit();
String fileName = fs.getPath().getName();
System.out.println("filename : " + fileName);
String line = value.toString();
String[] csvDataArray = null;
List<LineValidator> lvs = mapOfValidators.get(fileName);
flag = 1;
csvDataArray = line.split(",", -1);
FileConfig fc = fileConfigMaps.get(fileName);
System.out.println("filename inside fileconfig " + fc.getName());
System.out.println("quote values" + fc.isQuotes());
if (fc.isQuotes()) {
for (int i = 0; i < csvDataArray.length; i++) {
csvDataArray[i] = csvDataArray[i].replaceAll("\"", "");
}
}
for (LineValidator lv : lvs) {
if (flag == 1) {
result = lv.validate(csvDataArray, fileName);
if (result == false) {
String write = line + "," + lv.getFailureDesc();
System.out.println("write" + write);
System.out.println("key" + new Text(fileName));
// output.collect(new Text(filename), new Text(write));
context.write(new Text(fileName), new Text(write));
flag = 0;
if (lv.stopValidation(csvDataArray) == true) {
break;
}
}
}
}
}
protected void cleanup(Context context) {
System.out.println("clean up in mapper");
}
}
public static class Reduce extends Reducer<Text, Text, NullWritable, Text> {
protected void reduce(Text key, Iterator<Text> values, Context context)
throws IOException, InterruptedException {
System.out.println("inside reduce method");
while (values.hasNext()) {
System.out.println(" Nullwritable value" + NullWritable.get());
System.out.println("key inside reduce method" + key.toString());
context.write(NullWritable.get(), values.next());
// out.write(NullWritable.get(), values.next(), "/user/hadoop/"
// + context.getJobID() + "/" + key.toString() + "/part-");
}
}
}
public static void main(String[] args) throws Exception {
System.out.println("hello");
Configuration configuration = getConf();
Job job = Job.getInstance(configuration);
job.setJarByClass(DataValidation.class);
job.setMapperClass(Map.class);
job.setReducerClass(Reduce.class);
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(Text.class);
job.setOutputKeyClass(NullWritable.class);
job.setOutputValueClass(Text.class);
FileInputFormat.addInputPath(job, new Path(args[0]));
FileOutputFormat.setOutputPath(job, new Path(args[1]));
job.waitForCompletion(true);
}
private static Configuration getConf() {
return new Configuration();
}
}
You have not properly overridden the reduce method; your version takes an Iterator, so it only overloads reduce() instead of overriding it. Use this:
public void reduce(Text key, Iterable<Text> values,
Context context) throws IOException, InterruptedException
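For reference, a minimal sketch of the corrected override, matching the Reduce class's generics. Because the posted version takes an Iterator, the framework falls back to the default identity implementation, which is why only mapper-style output appears. Adding @Override turns the wrong signature into a compile error.
@Override
protected void reduce(Text key, Iterable<Text> values, Context context)
        throws IOException, InterruptedException {
    System.out.println("inside reduce method");
    for (Text value : values) {
        context.write(NullWritable.get(), value);
    }
}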

MQTT Client in Java - Starting my Listener in a Thread

I am using org.fusesource.mqtt (mqtt-client-1.0-20120208.162159-18-uber) and wrote a listener in Java based on the non-blocking example.
I use my listener class in the following way:
Listener mqList = new Listener("tcp://localhost:1883", "mytopic/#", "c:/test.log", true);
new Thread(mqList).start( );
This works perfectly.
If I create two instances/threads, conflicts seem to arise and I get a flood of connect/disconnect messages.
Here is the usage that fails:
Listener mqList = new Listener("tcp://localhost:1883", "mytopic/#", "c:/test.log", true);
new Thread(mqList).start( );
Listener mqList1 = new Listener("tcp://localhost:1883", "mytopic1/#", "c:/test1.log", true);
new Thread(mqList1).start( );
My Listener class is quite simple and I am puzzled why this does not work in multiple threads. Any ideas/hints?
Here is my class definition:
import org.fusesource.hawtbuf.Buffer;
import org.fusesource.hawtbuf.UTF8Buffer;
import org.fusesource.mqtt.client.*;
import java.io.IOException;
import java.util.ArrayList;
import java.util.concurrent.CountDownLatch;
import java.util.logging.*;
import java.io.*;
import java.net.URISyntaxException;
public class Listener implements Runnable{
private static final long DEFAULT_SLEEP_BEFORE_RE_ATTEMPT_IN_SECONDS = 5000;
private static final long DEFAULT_MAX_RE_ATTEMPT_DURATION_IN_SECONDS = 3600 * 3;
private long listenerSleepBeforeReAttemptInSeconds;
private long listenerMaxReAttemptDurationInSeconds;
private MQTT mqtt;
private ArrayList<Topic> topics;
private boolean listenerDebug;
private String listenerHostURI;
private String listenerTopic;
private String listenerLogFile;
private long listenerLastSuccessfulSubscription;
private Logger fLogger;
private String NEW_LINE = System.getProperty("line.separator");
public Listener(String listenerHostURI, String listenerTopic, String logFile, boolean debug) {
this(listenerHostURI, listenerTopic, logFile, DEFAULT_SLEEP_BEFORE_RE_ATTEMPT_IN_SECONDS, DEFAULT_MAX_RE_ATTEMPT_DURATION_IN_SECONDS, debug);
}
public Listener(String listenerHostURI, String listenerTopic, String logFile, long listenerSleepBeforeReAttemptInSeconds, long listenerMaxReAttemptDurationInSeconds, boolean debug) {
init(listenerHostURI, listenerTopic, logFile, listenerSleepBeforeReAttemptInSeconds, listenerMaxReAttemptDurationInSeconds, debug);
}
private void init(String listenerHostURI, String listenerTopic, String logFile, long listenerSleepBeforeReAttemptInSeconds, long listenerMaxReAttemptDurationInSeconds, boolean debug) {
this.listenerHostURI = listenerHostURI;
this.listenerTopic = listenerTopic;
this.listenerLogFile = logFile;
this.listenerSleepBeforeReAttemptInSeconds = listenerSleepBeforeReAttemptInSeconds;
this.listenerMaxReAttemptDurationInSeconds = listenerMaxReAttemptDurationInSeconds;
this.listenerDebug = debug;
initMQTT();
}
private void initMQTT() {
mqtt = new MQTT();
listenerLastSuccessfulSubscription = System.currentTimeMillis();
try {
fLogger = Logger.getLogger("eTactica.mqtt.listener");
FileHandler handler = new FileHandler(listenerLogFile);
fLogger.addHandler(handler);
} catch (IOException e) {
System.out.println("Logger - Failed");
}
try {
mqtt.setHost(listenerHostURI);
} catch (URISyntaxException e) {
stderr("setHost failed: " + e);
stderr(e);
}
QoS qos = QoS.AT_MOST_ONCE;
topics = new ArrayList<Topic>();
topics.add(new Topic(listenerTopic, qos));
}
private void stdout(String x) {
if (listenerDebug) {
fLogger.log(Level.INFO, x + NEW_LINE);
}
}
private void stderr(String x) {
if (listenerDebug) {
fLogger.log(Level.SEVERE, x + NEW_LINE);
}
}
private void stderr(Throwable e) {
if (listenerDebug) {
StringWriter sw = new StringWriter();
PrintWriter pw = new PrintWriter(sw);
e.printStackTrace(pw);
fLogger.log(Level.SEVERE, sw.toString() + NEW_LINE);
}
}
private void subscriptionSuccessful() {
listenerLastSuccessfulSubscription = System.currentTimeMillis();
}
private boolean tryToListen() {
return ((System.currentTimeMillis() - listenerLastSuccessfulSubscription) < listenerMaxReAttemptDurationInSeconds * 1000);
}
private void sleepBeforeReAttempt() throws InterruptedException {
stdout(String.format(("Listener stopped, re-attempt in %s seconds."), listenerSleepBeforeReAttemptInSeconds));
Thread.sleep(listenerSleepBeforeReAttemptInSeconds);
}
private void listenerReAttemptsOver() {
stdout(String.format(("Listener stopped since reattempts have failed for %s seconds."), listenerMaxReAttemptDurationInSeconds));
}
private void listen() {
final CallbackConnection connection = mqtt.callbackConnection();
final CountDownLatch done = new CountDownLatch(1);
/* Runtime.getRuntime().addShutdownHook(new Thread(){
@Override
public void run() {
setName("MQTT client shutdown");
stderr("Disconnecting the client.");
connection.getDispatchQueue().execute(new Runnable() {
public void run() {
connection.disconnect(new Callback<Void>() {
public void onSuccess(Void value) {
stdout("Disconnecting onSuccess.");
done.countDown();
}
public void onFailure(Throwable value) {
stderr("Disconnecting onFailure: " + value);
stderr(value);
done.countDown();
}
});
}
});
}
});
*/
connection.listener(new org.fusesource.mqtt.client.Listener() {
public void onConnected() {
stdout("Listener onConnected");
}
public void onDisconnected() {
stdout("Listener onDisconnected");
}
public void onPublish(UTF8Buffer topic, Buffer body, Runnable ack) {
stdout(topic + " --> " + body.toString());
ack.run();
}
public void onFailure(Throwable value) {
stdout("Listener onFailure: " + value);
stderr(value);
done.countDown();
}
});
connection.resume();
connection.connect(new Callback<Void>() {
public void onFailure(Throwable value) {
stderr("Connect onFailure...: " + value);
stderr(value);
done.countDown();
}
public void onSuccess(Void value) {
final Topic[] ta = topics.toArray(new Topic[topics.size()]);
connection.subscribe(ta, new Callback<byte[]>() {
public void onSuccess(byte[] value) {
for (int i = 0; i < value.length; i++) {
stdout("Subscribed to Topic: " + ta[i].name() + " with QoS: " + QoS.values()[value[i]]);
}
subscriptionSuccessful();
}
public void onFailure(Throwable value) {
stderr("Subscribe failed: " + value);
stderr(value);
done.countDown();
}
});
}
});
try {
done.await();
} catch (Exception e) {
stderr(e);
}
}
#Override
public void run() {
while (tryToListen()) {
initMQTT();
listen();
try {
sleepBeforeReAttempt();
} catch (InterruptedException e) {
stderr("Sleep failed:" + e);
stderr(e);
}
}
listenerReAttemptsOver();
}
}
A TCP port can have only one listener. That number in "tcp://localhost:1883" has to be unique for each listener. Somewhere, presumably (I'm not familiar with this specific API), you're probably starting a client with a port number too; the numbers must match between the client and the server.
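One more thing worth checking, offered as an assumption rather than part of the answer above: MQTT brokers drop an existing session when a second client connects with the same client ID, which produces exactly this kind of connect/disconnect churn when two listeners start together. The fusesource client lets you set an explicit ID per instance (the ID string below is illustrative):
// in initMQTT(), before connecting; make the ID unique per Listener instance
mqtt = new MQTT();
mqtt.setClientId("listener-" + listenerTopic.replace('/', '_').replace('#', 'x'));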
