How to publish an MQTT message to Kafka in Spring - Java

I am developing an MQTT connector for Kafka using Spring. Using the MQTT support provided by Spring Integration, messages are collected as follows.
Message handler:
@Bean
@ServiceActivator(inputChannel = "mqttInputChannel")
public MessageHandler handler() {
    return new MessageHandler() {
        @Override
        public void handleMessage(Message<?> message) throws MessagingException {
            String topic = message.getHeaders().get(MqttHeaders.RECEIVED_TOPIC).toString();
            if (topic.equals("myTopic")) {
                System.out.println("Mqtt data pub");
            }
            System.out.println(message.getPayload());
            if (topic == null) {
                topic = "mqttdata";
            }
            String tag = "test/vib";
            String name = KafkaMessageService.MQTT_PRODUCER;
            HashMap<String, Object> datalist = new HashMap<String, Object>();
            try {
                datalist = convertJSONstringToMap(message.getPayload().toString());
                System.out.println(datalist.get("mac"));
                // This is the part I cannot get right: publish() expects a HashMap<String, Object>[]
                counts = kafkaMessageService.publish(topic, name, tag, (HashMap<String, Object>[]) datalist);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    };
}

public static HashMap<String, Object> convertJSONstringToMap(String json) throws Exception {
    ObjectMapper mapper = new ObjectMapper();
    return mapper.readValue(json, new TypeReference<HashMap<String, Object>>() {});
}
The publish method:
public int publish(String topic, String producerName, String tag, HashMap<String, Object>[] datalist)
        throws NotMatchedProducerException, KafkaPubFailureException {
    KafkaProducerAdaptor adaptor = searchProducerAdaptor(producerName);
    if (adaptor == null) {
        throw new NotMatchedProducerException();
    }
    KafkaTemplate<String, Object> kafkaTemplate = adaptor.getKafkaTemplate();
    LocalDateTime currentDateTime = LocalDateTime.now();
    String receivedTime = currentDateTime.toString();
    ObjectMapper objectMapper = new ObjectMapper();
    String key = adaptor.getName();
    int counts = 0;
    for (HashMap<String, Object> data : datalist) {
        Map<String, Object> messagePacket = new HashMap<String, Object>();
        messagePacket.put("tag", tag);
        messagePacket.put("data", data);
        messagePacket.put("receivedtime", receivedTime);
        try {
            kafkaTemplate.send(topic, key, objectMapper.valueToTree(messagePacket)).get();
            logger.info("Sent message : topic=[" + topic + "],key=[" + key + "] value=[" + messagePacket + "]");
        } catch (Exception e) {
            logger.info("Unable to send message : topic=[" + topic + "],key=[" + key + "] message=[" + messagePacket + "] / due to : " + e.getMessage());
            throw new KafkaPubFailureException(e);
        }
        counts++;
    }
    return counts;
}
I don't know how to declare a HashMap<String, Object>[] as an instance and how to use it.
The code above was taken from the Spring documentation as-is, with some modifications.
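For reference, one way to declare and pass a HashMap<String, Object>[] (a minimal sketch only, reusing the kafkaMessageService and datalist names from the handler above): Java does not allow direct generic array creation, so you can create a raw HashMap[] and wrap the parsed map in it so the call matches the publish() signature.

// Sketch only: wrap the single parsed map in a one-element array so it matches
// publish(String, String, String, HashMap<String, Object>[]).
// Generic array creation (new HashMap<String, Object>[1]) is not allowed, hence
// the raw array and the unchecked-warning suppression.
@SuppressWarnings("unchecked")
HashMap<String, Object>[] dataArray = new HashMap[] { datalist };
int counts = kafkaMessageService.publish(topic, name, tag, dataArray);

If the MQTT payload can carry several records, collecting them in a List<HashMap<String, Object>> and calling list.toArray(new HashMap[0]) may be easier than managing the array by hand.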

Related

Transform JSON to another JSON structure

I have a case where I need to transform a response from the Dogs API into a different structure like this:
[
  {
    "breed": "pug",
    "sub_breed": []
  },
  {
    "breed": "ridgeback",
    "sub_breed": [
      {
        "breed": "rhodesian",
        "sub_breed": []
      }
    ]
  },
  {
    "breed": "doberman",
    "sub_breed": []
  },
  {
    "breed": "hound",
    "sub_breed": [
      {
        "breed": "Ibizan",
        "sub_breed": []
      },
      {
        "breed": "afghan",
        "sub_breed": []
      }
    ]
  }
]
I am confused after getting the response and don't know how to transform it.
Here is what I do, up to the point of getting the response:
public List<DogResponse> getDogs() {
    List<DogResponse> response = new ArrayList<DogResponse>();
    try {
        String url = "https://dog.ceo/api/breeds/list/all";
        RestTemplate restTemplate = new RestTemplate();
        ResponseEntity<String> result = restTemplate.getForEntity(url, String.class);
        ObjectMapper mapper = new ObjectMapper();
        Map<String, String> map = mapper.readValue(result.getBody().toString(), Map.class);
        System.out.println(mapper.writerWithDefaultPrettyPrinter().writeValueAsString(map.get("message")));
        for (Entry<String, String> entry : map.entrySet()) {
            String key = entry.getKey();
            String value = entry.getValue();
            System.out.println("key : " + key);
            System.out.println("val : " + value);
        }
    } catch (Exception e) {
        // TODO: handle exception
    }
    return response;
}
DogResponse
public class DogResponse {
    private String breed;
    private DogResponse sub_breed;

    public String getBreed() {
        return breed;
    }

    public void setBreed(String breed) {
        this.breed = breed;
    }

    public DogResponse getSub_breed() {
        return sub_breed;
    }

    public void setSub_breed(DogResponse sub_breed) {
        this.sub_breed = sub_breed;
    }
}
I am trying to use a Map, but it fails when I want to print the key and value; it shows nothing.
You should map the response to a List of DogResponse; you may have an issue because of the self-referencing (circular) structure.
List<DogResponse> dogs = mapper.readValue(jsonString, new TypeReference<List<DogResponse>>() {});
You can try this.
public List<DogResponse> getDogs() {
    List<DogResponse> response = new ArrayList<DogResponse>();
    try {
        String url = "https://dog.ceo/api/breeds/list/all";
        RestTemplate restTemplate = new RestTemplate();
        ResponseEntity<String> result = restTemplate.getForEntity(url, String.class);
        ObjectMapper mapper = new ObjectMapper();
        Map<String, Map<String, List<String>>> map = mapper.readValue(result.getBody().toString(), Map.class);
        System.out.println(mapper.writerWithDefaultPrettyPrinter().writeValueAsString(map.get("message")));
        Map<String, List<String>> innerMap = map.get("message");
        for (Entry<String, List<String>> entry : innerMap.entrySet()) {
            String key = entry.getKey();
            List<String> value = entry.getValue();
            System.out.println("key : " + key);
            System.out.println("val : " + value);
        }
    } catch (Exception e) {
        // TODO: handle exception
    }
    return response;
}
ResponseEntity<DogResponse> result = restTemplate.getForEntity(url, DogResponse.class);
This should work.
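As a follow-up, here is a minimal sketch of one way to build the target structure from the parsed "message" map. It assumes DogResponse is changed so that sub_breed is a List<DogResponse> rather than a single DogResponse, which is what the target JSON implies; the innerMap variable is the one from the answer above.

// Sketch only: convert Map<String, List<String>> (breed -> sub-breed names)
// into the nested breed/sub_breed structure shown in the question.
// Assumes setSub_breed(List<DogResponse>) instead of setSub_breed(DogResponse).
List<DogResponse> dogs = new ArrayList<>();
for (Entry<String, List<String>> entry : innerMap.entrySet()) {
    DogResponse dog = new DogResponse();
    dog.setBreed(entry.getKey());
    List<DogResponse> subBreeds = new ArrayList<>();
    for (String sub : entry.getValue()) {
        DogResponse subBreed = new DogResponse();
        subBreed.setBreed(sub);
        subBreed.setSub_breed(new ArrayList<>()); // leaf nodes get an empty list
        subBreeds.add(subBreed);
    }
    dog.setSub_breed(subBreeds);
    dogs.add(dog);
}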

Read from Splunk source and write to topic - writing the same record, not pulling latest records

The same record is being written to the topic; the latest records are not being pulled from Splunk. Time parameters are set in the start() method to pull the last one minute of data. Any inputs?
Currently I don't set an offset from the source. When poll() runs each time, does it look for the source offset and then poll? In the logs, can we have time as the offset?
@Override
public List<SourceRecord> poll() throws InterruptedException {
    List<SourceRecord> results = new ArrayList<>();
    Map<String, String> recordProperties = new HashMap<String, String>();
    while (true) {
        try {
            String line = null;
            InputStream stream = job.getResults(previewArgs);
            String earlierKey = null;
            String value = null;
            ResultsReaderCsv csv = new ResultsReaderCsv(stream);
            HashMap<String, String> event;
            while ((event = csv.getNextEvent()) != null) {
                for (String key : event.keySet()) {
                    if (key.equals("rawlogs")) {
                        recordProperties.put("rawlogs", event.get(key));
                        results.add(extractRecord(Splunklog.SplunkLogSchema(), line, recordProperties));
                        return results;
                    }
                }
            }
            csv.close();
            stream.close();
            Thread.sleep(500);
        } catch (Exception ex) {
            System.out.println("Exception occurred : " + ex);
        }
    }
}

private SourceRecord extractRecord(Schema schema, String line, Map<String, String> recordProperties) {
    Map<String, String> sourcePartition = Collections.singletonMap(FILENAME_FIELD, FILENAME);
    Map<String, String> sourceOffset = Collections.singletonMap(POSITION_FIELD, recordProperties.get(OFFSET_KEY));
    return new SourceRecord(sourcePartition, sourceOffset, TOPIC_NAME, schema, recordProperties);
}
@Override
public void start(Map<String, String> properties) {
    try {
        config = new SplunkSourceTaskConfig(properties);
    } catch (ConfigException e) {
        throw new ConnectException("Couldn't start SplunkSourceTask due to configuration error", e);
    }
    HttpService.setSslSecurityProtocol(SSLSecurityProtocol.TLSv1_2);
    Service service = new Service("splnkip", port);
    String credentials = "user:pwd";
    String basicAuthHeader = Base64.encode(credentials.getBytes());
    service.setToken("Basic " + basicAuthHeader);
    String startOffset = readOffset();
    JobArgs jobArgs = new JobArgs();
    if (startOffset != null) {
        log.info("-------------------------------task OFFSET!NULL ");
        jobArgs.setExecutionMode(JobArgs.ExecutionMode.BLOCKING);
        jobArgs.setSearchMode(JobArgs.SearchMode.NORMAL);
        jobArgs.setEarliestTime(startOffset);
        jobArgs.setLatestTime("now");
        jobArgs.setStatusBuckets(300);
    } else {
        log.info("-------------------------------task OFFSET=NULL ");
        jobArgs.setExecutionMode(JobArgs.ExecutionMode.BLOCKING);
        jobArgs.setSearchMode(JobArgs.SearchMode.NORMAL);
        jobArgs.setEarliestTime("+419m");
        jobArgs.setLatestTime("+420m");
        jobArgs.setStatusBuckets(300);
    }
    String mySearch = "search host=search query";
    job = service.search(mySearch, jobArgs);
    while (!job.isReady()) {
        try {
            Thread.sleep(500);
        } catch (InterruptedException ex) {
            log.error("Exception occurred while waiting for job to start: " + ex);
        }
    }
    previewArgs = new JobResultsPreviewArgs();
    previewArgs.put("output_mode", "csv");
    stop = new AtomicBoolean(false);
}
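No answer is recorded here, but regarding "can we have time as the offset": a common pattern in a Kafka Connect source task is to read the last committed source offset at startup (via the task's offset storage reader) and attach a timestamp as the offset of each emitted record, so the next poll resumes from that point. A minimal sketch, reusing FILENAME_FIELD, FILENAME and POSITION_FIELD from the code above; how readOffset() is implemented is an assumption, since its body is not shown in the question:

// Sketch only: read the last committed offset for this source partition at startup.
// 'context' is the SourceTaskContext field inherited from SourceTask.
private String readOffset() {
    Map<String, Object> offset = context.offsetStorageReader()
            .offset(Collections.singletonMap(FILENAME_FIELD, FILENAME));
    return offset == null ? null : (String) offset.get(POSITION_FIELD);
}

// When building a record, emit the event time (here simply "now", via java.time.Instant)
// as the offset value so it is committed alongside the record:
Map<String, String> sourcePartition = Collections.singletonMap(FILENAME_FIELD, FILENAME);
Map<String, String> sourceOffset = Collections.singletonMap(POSITION_FIELD, Instant.now().toString());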

DataFlow Apache Beam Java JdbcIO Read arguments issue

I am totally new to Apache Beam and Java.
I have been working with PHP for around 5 years, but I haven't worked in Java for the last 5 years :), and the Apache Beam SDK for Java is also new to me, so bear with me.
I would like to implement a pipeline where I get data from Google Pub/Sub, map the relevant fields into an array, then check against a MySQL DB to see whether the message belongs to a certain table, and after that send a call to our API that updates some data in our app DB. Another pipeline will enrich the data from Elasticsearch and insert it into BigQuery.
But at the moment I am stuck reading data from MySQL; I simply cannot adapt the argument into the PCollection using JdbcIO.
My plan is to check whether the value I get from Pub/Sub (the listid value) is present in the MySQL table.
Here is my code so far; any help will be appreciated.
Pipeline p = Pipeline.create(options);

org.apache.beam.sdk.values.PCollection<PubsubMessage> messages = p.apply(PubsubIO.readMessagesWithAttributes()
        .fromSubscription("*******"));

org.apache.beam.sdk.values.PCollection<String> messages2 = messages.apply("GetPubSubEvent",
        ParDo.of(new DoFn<PubsubMessage, String>() {
            @ProcessElement
            public void processElement(ProcessContext c) {
                Map<String, String> Map = new HashMap<String, String>();
                PubsubMessage message = c.element();
                String messageText = new String(message.getPayload(), StandardCharsets.UTF_8);
                JSONObject jsonObj = new JSONObject(messageText);
                String requestURL = jsonObj.getJSONObject("httpRequest").getString("requestUrl");
                String query = requestURL.split("\\?")[1];
                final Map<String, String> querymap = Splitter.on('&').trimResults().withKeyValueSeparator("=")
                        .split(query);
                JSONObject querymapJson = new JSONObject(querymap);
                int subscriberid = 0;
                int listid = 0;
                int statid = 0;
                int points = 0;
                String stattype = "";
                String requesttype = "";
                try {
                    subscriberid = querymapJson.getInt("emp_uid");
                } catch (Exception e) {
                }
                try {
                    listid = querymapJson.getInt("emp_lid");
                } catch (Exception e) {
                }
                try {
                    statid = querymapJson.getInt("emp_statid");
                } catch (Exception e) {
                }
                try {
                    stattype = querymapJson.getString("emp_stattype");
                    Map.put("stattype", stattype);
                } catch (Exception e) {
                }
                try {
                    requesttype = querymapJson.getString("type");
                } catch (Exception e) {
                }
                try {
                    statid = querymapJson.getInt("leadscore");
                } catch (Exception e) {
                }
                Map.put("subscriberid", String.valueOf(subscriberid));
                Map.put("listid", String.valueOf(listid));
                Map.put("statid", String.valueOf(statid));
                Map.put("requesttype", requesttype);
                Map.put("leadscore", String.valueOf(points));
                Map.put("requestip", jsonObj.getJSONObject("httpRequest").getString("remoteIp"));
                System.out.print("Hello from message 1");
                c.output(Map.toString());
            }
        }));
org.apache.beam.sdk.values.PCollection<String> messages3 = messages2.apply("Test",
        ParDo.of(new DoFn<String, String>() {
            @ProcessElement
            public void processElement(ProcessContext c) {
                System.out.println(c.element());
                System.out.print("Hello from message 2");
            }
        }));
org.apache.beam.sdk.values.PCollection<KV<String, String>> messages23 = messages2.apply(JdbcIO.<KV<String, String>>read()
        .withDataSourceConfiguration(JdbcIO.DataSourceConfiguration.create("org.apache.derby.jdbc.ClientDriver",
                "jdbc:derby://localhost:1527/beam"))
        .withQuery("select * from artist")
        .withRowMapper(new JdbcIO.RowMapper<KV<String, String>>() {
            @Override
            public KV<String, String> mapRow(java.sql.ResultSet resultSet) throws Exception {
                KV<String, String> kv = KV.of(resultSet.getString("label"), resultSet.getString("name"));
                return kv;
            }
        })
        .withCoder(KvCoder.of(StringUtf8Coder.of(), StringUtf8Coder.of())));

p.run().waitUntilFinish();
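No answer is recorded here, but for reference: JdbcIO.read() can only be applied at the pipeline root (PBegin), which is why applying it to messages2 does not compile. One way to look up a value coming from Pub/Sub is JdbcIO.readAll(), which takes a PCollection of parameters and runs a parameterized query per element. A minimal sketch, assuming messages2 carries the listid as a String; the MySQL URL and the artist table with a listid column are placeholders, not from the original code:

// Sketch only: look up each incoming listid in MySQL via a parameterized query.
// Driver class, JDBC URL, table and column names are assumptions for illustration.
org.apache.beam.sdk.values.PCollection<KV<String, String>> matches = messages2.apply(
        JdbcIO.<String, KV<String, String>>readAll()
                .withDataSourceConfiguration(JdbcIO.DataSourceConfiguration.create(
                        "com.mysql.cj.jdbc.Driver", "jdbc:mysql://localhost:3306/beam"))
                .withQuery("select label, name from artist where listid = ?")
                .withParameterSetter((element, preparedStatement) ->
                        preparedStatement.setString(1, element))
                .withRowMapper(resultSet ->
                        KV.of(resultSet.getString("label"), resultSet.getString("name")))
                .withCoder(KvCoder.of(StringUtf8Coder.of(), StringUtf8Coder.of())));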

get the image name CodeNameOne

I want to get a list of objects from the database. I'm 100% sure that I retrieve the data (so my PHP code seems to be good), but the list stays empty.
public ArrayList<Categorie> getListCategorie() {
    ArrayList<Categorie> listcategories = new ArrayList<>();
    ConnectionRequest con2 = new ConnectionRequest();
    con2.setUrl("http://localhost/pidev2017/selectcategorie.php");
    con2.addResponseListener(new ActionListener<NetworkEvent>() {
        @Override
        public void actionPerformed(NetworkEvent evt) {
            try {
                JSONParser j = new JSONParser();
                Map<String, Object> catefories = j.parseJSON(new CharArrayReader(new String(con2.getResponseData()).toCharArray()));
                List<Map<String, Object>> list = (List<Map<String, Object>>) catefories.get("Categorie");
                for (Map<String, Object> obj : list) {
                    Categorie categorie = new Categorie();
                    categorie.setId(Integer.parseInt(obj.get("id").toString()));
                    categorie.setNomCategorie(obj.get("nomCategorie").toString());
                    listcategories.add(categorie);
                }
            } catch (IOException ex) {
            }
        }
    });
    NetworkManager.getInstance().addToQueue(con2);
    return listcategories;
}
When I want to fetch my result "listcategories", I find that it is empty.
Change
NetworkManager.getInstance().addToQueue(con2);
to
NetworkManager.getInstance().addToQueueAndWait(con2);
It's possible that you are trying to read the result before the data has been fetched, since addToQueue is asynchronous while addToQueueAndWait blocks until the request completes.
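A minimal sketch of the suggested change in context (same method as above, just with the blocking call):

// Sketch only: block until the request completes so listcategories is populated
// before the method returns.
NetworkManager.getInstance().addToQueueAndWait(con2);
return listcategories;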

How to get a JavaDStream of an Object in Spark Kafka Connector?

I am using the Spark Kafka connector to fetch data from a Kafka cluster. From it, I am getting the data as a JavaDStream<String>. How do I get the data as a JavaDStream<EventLog>, where EventLog is a Java bean?
public static JavaDStream<EventLog> fetchAndValidateData(String zkQuorum, String group, Map<String, Integer> topicMap) {
    SparkConf sparkConf = new SparkConf().setAppName("JavaKafkaWordCount");
    JavaStreamingContext jssc = new JavaStreamingContext(sparkConf, new Duration(2000));
    JavaPairReceiverInputDStream<String, String> messages =
            KafkaUtils.createStream(jssc, zkQuorum, group, topicMap);
    JavaDStream<String> lines = messages.map(new Function<Tuple2<String, String>, String>() {
        @Override
        public String call(Tuple2<String, String> tuple2) {
            return tuple2._2();
        }
    });
    jssc.start();
    jssc.awaitTermination();
    return lines;
}
My goal is to save this data into Cassandra, where a table has the same specification as EventLog. The Spark Cassandra connector accepts JavaRDD<EventLog> in the insert statement, like this: javaFunctions(rdd).writerBuilder("ks", "event", mapToRow(EventLog.class)).saveToCassandra();. I want to get this JavaRDD<EventLog> from Kafka.
Use the overloaded createStream method where you can pass the key/value type and decoder classes.
Example:
createStream(jssc, String.class, EventLog.class, StringDecoder.class, EventLogDecoder.class,
kafkaParams, topicsMap, StorageLevel.MEMORY_AND_DISK_SER_2());
The above should give you a JavaPairDStream<String, EventLog>:
JavaDStream<EventLog> lines = messages.map(new Function<Tuple2<String, EventLog>, EventLog>() {
    @Override
    public EventLog call(Tuple2<String, EventLog> tuple2) {
        return tuple2._2();
    }
});
EventLogDecoder should implement kafka.serializer.Decoder. Below is an example of a JSON decoder:
public class EventLogDecoder implements Decoder<EventLog> {

    public EventLogDecoder(VerifiableProperties verifiableProperties) {
    }

    @Override
    public EventLog fromBytes(byte[] bytes) {
        ObjectMapper objectMapper = new ObjectMapper();
        try {
            return objectMapper.readValue(bytes, EventLog.class);
        } catch (IOException e) {
            // do something
        }
        return null;
    }
}
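To complete the picture for the Cassandra goal in the question, a minimal sketch: once you have the JavaDStream<EventLog> from above, each micro-batch can be written with foreachRDD using the exact connector call quoted in the question (keyspace "ks" and table "event" are the question's own names). Depending on the Spark version, foreachRDD may instead take a Function<JavaRDD<EventLog>, Void> that must return null.

// Sketch only. Assumes the static imports
//   com.datastax.spark.connector.japi.CassandraJavaUtil.javaFunctions
//   com.datastax.spark.connector.japi.CassandraJavaUtil.mapToRow
// and org.apache.spark.api.java.function.VoidFunction / org.apache.spark.api.java.JavaRDD.
lines.foreachRDD(new VoidFunction<JavaRDD<EventLog>>() {
    @Override
    public void call(JavaRDD<EventLog> rdd) {
        // Write this micro-batch of EventLog beans to the Cassandra table.
        javaFunctions(rdd).writerBuilder("ks", "event", mapToRow(EventLog.class)).saveToCassandra();
    }
});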
