Pubnub V4 Migration Callbacks - java

I am trying to update my code from PubNub SDK v3 to v4 and I am stuck at callbacks.
I have the following function which I would like to update:
void transmitMessage(String toID, JSONObject packet) {
    if (this.id == null) {
        mRtcListener.onDebug(new PnRTCMessage("Cannot transmit before calling Client.connect"));
    }
    try {
        JSONObject message = new JSONObject();
        message.put(PnRTCMessage.JSON_PACKET, packet);
        message.put(PnRTCMessage.JSON_ID, "");
        message.put(PnRTCMessage.JSON_NUMBER, this.id);
        this.mPubNub.publish(toID, message, new Callback() {
            @Override
            public void successCallback(String channel, Object message, String timetoken) {
                mRtcListener.onDebug(new PnRTCMessage((JSONObject) message));
            }

            @Override
            public void errorCallback(String channel, PubNubError error) {
                mRtcListener.onDebug(new PnRTCMessage(error.errorObject));
            }
        });
    } catch (JSONException e) {
        e.printStackTrace();
    }
}
The docs say one does not need to instantiate com.pubnub.api.Callback and that one should use the new SubscribeCallback class instead. I am not sure how to handle this: SubscribeCallback contains the methods status, message and presence, whereas I currently have a successCallback and an errorCallback method.

The code at https://www.pubnub.com/docs/android-java/api-reference-publish-and-subscribe#listeners should help you with this.
You can create listeners using the code below:
pubnub.addListener(new SubscribeCallback() {
    @Override
    public void status(PubNub pubnub, PNStatus status) {
        switch (status.getOperation()) {
            // let's combine unsubscribe and subscribe handling for ease of use
            case PNSubscribeOperation:
            case PNUnsubscribeOperation:
                // note: subscribe statuses never have traditional
                // errors, they just have categories to represent the
                // different issues or successes that occur as part of subscribe
                switch (status.getCategory()) {
                    case PNConnectedCategory:
                        // this is expected for a subscribe, this means there is no error or issue whatsoever
                    case PNReconnectedCategory:
                        // this usually occurs if subscribe temporarily fails but reconnects. This means
                        // there was an error but there is no longer any issue
                    case PNDisconnectedCategory:
                        // this is the expected category for an unsubscribe. This means there
                        // was no error in unsubscribing from everything
                    case PNUnexpectedDisconnectCategory:
                        // this is usually an issue with the internet connection, this is an error, handle appropriately
                    case PNAccessDeniedCategory:
                        // this means that PAM does not allow this client to subscribe to this
                        // channel and channel group configuration. This is another explicit error
                    default:
                        // More errors can be directly specified by creating explicit cases for other
                        // error categories of `PNStatusCategory` such as `PNTimeoutCategory`,
                        // `PNMalformedFilterExpressionCategory` or `PNDecryptionErrorCategory`
                }
            case PNHeartbeatOperation:
                // heartbeat operations can in fact have errors, so it is important to check first for an error.
                // For more information on how to configure heartbeat notifications through the status
                // PNObjectEventListener callback, consult <link to the PNConfiguration heartbeat config>
                if (status.isError()) {
                    // There was an error with the heartbeat operation, handle here
                } else {
                    // heartbeat operation was successful
                }
            default: {
                // Encountered unknown status type
            }
        }
    }

    @Override
    public void message(PubNub pubnub, PNMessageResult message) {
        String messagePublisher = message.getPublisher();
        System.out.println("Message publisher: " + messagePublisher);
        System.out.println("Message Payload: " + message.getMessage());
        System.out.println("Message Subscription: " + message.getSubscription());
        System.out.println("Message Channel: " + message.getChannel());
        System.out.println("Message timetoken: " + message.getTimetoken());
    }

    @Override
    public void presence(PubNub pubnub, PNPresenceEventResult presence) {
    }
});
Once you've subscribed to a channel as shown below, the listeners above will be called whenever a message or presence event is received.
pubnub.subscribe()
    .channels(Arrays.asList("my_channel")) // subscribe to channels
    .withPresence()                        // also subscribe to related presence information
    .execute();
Please note that we have recently launched new features with new types of listeners as well, all of which are listed in the link above.
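For the publish side of your transmitMessage, the v4 SDK replaces the three-argument publish with a builder plus PNCallback. A minimal sketch, reusing the names from your question (exactly how PnRTCMessage should wrap the result, and whether the JSONObject payload needs converting to a POJO or Gson JsonObject first, are assumptions here):

this.mPubNub.publish()
    .channel(toID)
    .message(message) // v4 serializes the payload itself; a plain POJO or JsonObject may work better than org.json.JSONObject
    .async(new PNCallback<PNPublishResult>() {
        @Override
        public void onResponse(PNPublishResult result, PNStatus status) {
            if (!status.isError()) {
                // roughly what successCallback did before
                mRtcListener.onDebug(new PnRTCMessage(message));
            } else {
                // roughly what errorCallback did before
                mRtcListener.onDebug(new PnRTCMessage(status.getErrorData().getInformation()));
            }
        }
    });

The status/message/presence methods of SubscribeCallback only cover the subscribe side; per-publish results arrive through this PNCallback instead.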

Related

Delaying messages from queue rabbitmq spring

I have a problem with delaying messages when sending them to a RabbitMQ consumer. I have set the x-delay header, but the problem is that I want the messages to reach the consumer one second apart. With the code below they are all sent at the same time, one second after starting the application. So how can I send them gradually, with a one-second gap between each other?
public void produce(Company company) {
    // for (Company company : companies) {
    amqpTemplate.convertAndSend(exchange, routingkey, company, new MessagePostProcessor() {
        @Override
        public Message postProcessMessage(Message message) throws AmqpException {
            message.getMessageProperties().setHeader("x-delay", 10000);
            return message;
        }
    });
    System.out.println("Send msg = " + company);
    // }
    // amqpTemplate.convertAndSend(exchange, routingkey, company);
}
In the main application I'm calling the produce method for each company in the company list (I also tried putting the loop inside the produce method, which didn't work).
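Since every message carries the same fixed x-delay of 10000 ms, they all become deliverable at the same moment. A sketch of one way to space them out, assuming the delayed-message exchange plugin is what honours x-delay and that the caller can hand over the whole list (produceAll is a made-up name; exchange and routingkey mirror the fields above):

public void produceAll(List<Company> companies) {
    int index = 0;
    for (Company company : companies) {
        // give each message its own, increasing delay: 1 s, 2 s, 3 s, ...
        final long delayMillis = (index + 1) * 1000L;
        amqpTemplate.convertAndSend(exchange, routingkey, company, message -> {
            message.getMessageProperties().setHeader("x-delay", delayMillis);
            return message;
        });
        index++;
    }
}

The delay is relative to the time of publishing, so publishing everything at startup with growing delays yields roughly one delivery per second.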

How to handle exceptions in SpringBoot ListenableFuture

So I have a Spring Boot endpoint controller that starts like this:
@RequestMapping(value = "/post", method = RequestMethod.POST, produces = MediaType.APPLICATION_JSON_VALUE)
public Response post(@Valid @RequestBody Message message) throws FailedToPostException {
    message.setRecieveTime(System.currentTimeMillis());
    return this.service.post(message);
}
And the post function:
public Response post(Message message) throws FailedToPostException {
    ListenableFuture<SendResult<String, Message>> future = kafkaTemplate.send("topicName", message);
    future.addCallback(new ListenableFutureCallback<SendResult<String, Message>>() {
        @Override
        public void onSuccess(SendResult<String, Message> result) {
            LOGGER.info("Post Finished. '{}' with offset: {}", message,
                    result.getRecordMetadata().offset());
        }

        @Override
        public void onFailure(Throwable ex) {
            LOGGER.error("Message Post Failed. '{}'", message, ex);
            long nowMillis = System.currentTimeMillis();
            int diffSeconds = (int) ((nowMillis - message.getRecieveTime()) / 1000);
            if (diffSeconds >= 10) {
                LOGGER.debug("timeout sending message to Kafka, aborting.");
                return;
            } else {
                post(message);
            }
        }
    });
    LOGGER.debug("D: " + Utils.getMetricValue("buffer-available-bytes", kafkaTemplate));
    return new Response("Message Posted");
}
As you can see, we are trying to make sure that if kafkaTemplate.send fails, we recursively invoke post(message) again for up to 10 seconds, until the producer memory buffer clears and the message gets through.
The problems are:
We want to be able to return a failure response to the endpoint's client (e.g. "Failed to acknowledge the message").
Is there any better way to handle exceptions from a Future in a piece of code like the one above?
Is there a way to avoid using a recursive function here? We did that because we wanted to attempt delivery of the message to Kafka for about 10 seconds before sending it as an email to look at.
Side note: I still haven't used the buffer-available-bytes attribute from kafkaTemplate.metrics(); I intend to use it to minimize the chance of this problem, but I still need to handle the above just in case of race conditions.
There are a few ways to do this, but I really like Spring Retry as a way to solve this kind of problem. It's a bit of pseudo code here, but if you need more specifics on how to do it, I could make things more explicit:
@Retryable(maxAttempts = 10, value = KafkaSendException.class)
public Response post(Message message) throws FailedToPostException {
    ListenableFuture<SendResult<String, Message>> future = kafkaTemplate.send("topicName", message);
    SendResult<String, Message> result;
    try {
        result = future.get(1, TimeUnit.SECONDS);
    } catch (SomeException ex) {
        LOGGER.error("Message Post Failed. '{}'", ex.getCause().getMessage(), ex);
        throw ex;
    }
    LOGGER.info("Post Finished. '{}' with offset: {}", message,
            result.getRecordMetadata().offset());
    return new Response("Message Posted");
}
This effectively does the same thing without recursion; I wouldn't recommend recursion for error handling.
The controller should then be able to turn the actual KafkaSendException into a nice error response with an @ExceptionHandler.
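A minimal sketch of such a handler, placed in the controller or a @ControllerAdvice (KafkaSendException and Response are the types assumed in the pseudocode above):

@ExceptionHandler(KafkaSendException.class)
@ResponseStatus(HttpStatus.SERVICE_UNAVAILABLE)
public Response handleKafkaSendFailure(KafkaSendException ex) {
    // retries were exhausted; report the failure back to the endpoint's client
    LOGGER.error("Failed to acknowledge the message", ex);
    return new Response("Failed to acknowledge the message");
}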

How to access the payload of the arrived message from the callback method (messageArrived) in the main method (Eclipse Paho)?

Problem statement: I am trying to automate an MQTT flow; for that I need to publish and subscribe to multiple topics, but in a sequential order. The tricky part is that the message received from the first publish contains a value which has to be passed to the next sub/pub commands.
For example:
Sub to topicA/abc
Pub to topicA/abc
Message received on topicA/abc is xyz
sub to topic topicA/xyz
pub to topic topicA/xyz
I am able to receive the message on the first topic, but I cannot figure out how to access the payload of the received message in the main method and attach it to the next topic for the next subscribe.
Is there a way to get the received message payload from the messageArrived callback method into the main method where the client instance is created?
Note: I am using a single client for publish and subscribe.
Kindly help me out, as I have run out of options and methods to try.
Edited: code snippet.
Main class:
public class MqttOverSSL {
    String deviceId;
    MqttClient client = null;

    public MqttOverSSL() {
    }

    public MqttOverSSL(String deviceId) throws MqttException, InterruptedException {
        this.deviceId = deviceId;
        MqttConnection mqttConObj = new MqttConnection();
        this.client = mqttConObj.mqttConnection();
    }

    public void getLinkCodeMethod() throws MqttException, InterruptedException {
        client.subscribe("abc/multi/" + deviceId + "/linkcode", 0);
        publish(client, "abc/multi/" + deviceId + "/getlinkcode", 0, "".getBytes());
    }
}
MQTT callback impl:
public class SimpleMqttCallBack implements MqttCallback {
    String arrivedMessage;

    @Override
    public void connectionLost(Throwable throwable) {
        System.out.println("Connection to MQTT broker lost!");
    }

    @Override
    public void messageArrived(String s, MqttMessage mqttMessage) throws Exception {
        arrivedMessage = mqttMessage.toString();
        System.out.println("Message received:\t" + arrivedMessage);
        linkCode(arrivedMessage);
    }

    @Override
    public void deliveryComplete(IMqttDeliveryToken iMqttDeliveryToken) {
        System.out.println("Delivery complete callback: Publish Completed " + Arrays.toString(iMqttDeliveryToken.getTopics()));
    }

    public void linkCode(String arrivedMessage) throws MqttException {
        System.out.println("String is " + arrivedMessage);
        Gson g = new Gson();
        GetCode code = g.fromJson(arrivedMessage, GetCode.class);
        System.out.println(code.getLinkCode());
    }
}
Publisher class:
public class Publisher {
    public static void publish(MqttClient client, String topicName, int qos, byte[] payload) throws MqttException {
        String time = new Timestamp(System.currentTimeMillis()).toString();
        log("Publishing at: " + time + " to topic \"" + topicName + "\" qos " + qos);
        // Create and configure a message
        MqttMessage message = new MqttMessage(payload);
        message.setQos(qos);
        // Send the message to the server, control is not returned until
        // it has been delivered to the server meeting the specified
        // quality of service.
        client.publish(topicName, message);
    }

    static private void log(String message) {
        boolean quietMode = false;
        if (!quietMode) {
            System.out.println(message);
        }
    }
}
OK, it's a little clearer what you are trying to do now.
Short answer: no, you cannot pass values back to the main method. MQTT is asynchronous, which means you have no idea when a message will arrive for a topic you subscribe to.
You need to update your code to check what the incoming message's topic is and then do whatever action you want with that response inside the messageArrived() handler. If you have a sequence of tasks to do, you may need to implement what is known as a state machine in order to keep track of where you are in the sequence.
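A rough sketch of that state-machine idea, using the topic names from the question (the class name, the enum and the way the client is handed in are made up for illustration; also note that long blocking work inside messageArrived should be avoided with the synchronous client):

import org.eclipse.paho.client.mqttv3.IMqttDeliveryToken;
import org.eclipse.paho.client.mqttv3.MqttCallback;
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class SequencedCallback implements MqttCallback {
    private enum Step { WAITING_FOR_LINKCODE, WAITING_FOR_NEXT }

    private final MqttClient client;
    private final String deviceId;
    private Step step = Step.WAITING_FOR_LINKCODE;

    public SequencedCallback(MqttClient client, String deviceId) {
        this.client = client;
        this.deviceId = deviceId;
    }

    @Override
    public void messageArrived(String topic, MqttMessage message) throws Exception {
        String payload = message.toString();
        switch (step) {
            case WAITING_FOR_LINKCODE:
                // use the payload of the first message to build the next topic
                client.subscribe("abc/multi/" + deviceId + "/" + payload, 0);
                client.publish("abc/multi/" + deviceId + "/" + payload, new MqttMessage("".getBytes()));
                step = Step.WAITING_FOR_NEXT;
                break;
            case WAITING_FOR_NEXT:
                // handle the second response here
                break;
        }
    }

    @Override
    public void connectionLost(Throwable cause) { }

    @Override
    public void deliveryComplete(IMqttDeliveryToken token) { }
}

The main method only connects, sets this callback and triggers the first sub/pub; everything that depends on a received payload happens inside the callback.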

Google Pub/Sub reuse existing subscription

I have created a Java Pub/Sub consumer relying on the following Pub/Sub doc:
public static void main(String... args) throws Exception {
    TopicName topic = TopicName.create(pubSubProjectName, pubSubTopic);
    SubscriptionName subscription = SubscriptionName.create(pubSubProjectName, "ssvp-sub");
    SubscriptionAdminClient subscriptionAdminClient = SubscriptionAdminClient.create();
    subscriptionAdminClient.createSubscription(subscription, topic, PushConfig.getDefaultInstance(), 0);

    MessageReceiver receiver =
        new MessageReceiver() {
            @Override
            public void receiveMessage(PubsubMessage message, AckReplyConsumer consumer) {
                System.out.println("Got message: " + message.getData().toStringUtf8());
                consumer.ack();
            }
        };

    Subscriber subscriber = null;
    try {
        subscriber = Subscriber.defaultBuilder(subscription, receiver).build();
        subscriber.addListener(
            new Subscriber.Listener() {
                @Override
                public void failed(Subscriber.State from, Throwable failure) {
                    // Handle failure. This is called when the Subscriber encountered a fatal error and is shutting down.
                    System.err.println(failure);
                }
            },
            MoreExecutors.directExecutor());
        subscriber.startAsync().awaitRunning();
        Thread.sleep(60000);
    } finally {
        if (subscriber != null) {
            subscriber.stopAsync();
        }
    }
}
It works well, but on every run it asks for a new subscription name, throwing a StatusRuntimeException:
io.grpc.StatusRuntimeException: ALREADY_EXISTS: Resource already exists in the project (resource=ssvp-sub).
(see SubscriptionName.create(pubSubProjectName, "ssvp-sub") line in my code snippet)
I found out that in the node.js client we can pass a "reuseExisting: true" option to reuse an existing subscription:
topic.subscribe('maybe-subscription-name', { reuseExisting: true }, function(err, subscription) {
    // subscription was "get-or-create"-ed
});
What option should I pass if I use the official Java Pub/Sub client?
<dependency>
    <groupId>com.google.cloud</groupId>
    <artifactId>google-cloud-pubsub</artifactId>
    <version>0.13.0-alpha</version>
</dependency>
The Java library does not have a method to allow one to call createSubscription with an existing subscription and not have an exception thrown. You have a couple of options, both of which involve using a try/catch block. The choice depends on whether or not you want to be optimistic about the existence of the subscription.
Pessimistic call:
try {
    subscriptionAdminClient.createSubscription(subscription,
                                               topic,
                                               PushConfig.getDefaultInstance(),
                                               0);
} catch (ApiException e) {
    if (e.getStatusCode() != Status.Code.ALREADY_EXISTS) {
        throw e;
    }
}
// You know the subscription exists and can create a Subscriber.
Optimistic call:
try {
    subscriptionAdminClient.getSubscription(subscription);
} catch (ApiException e) {
    if (e.getStatusCode() == Status.Code.NOT_FOUND) {
        // Create the subscription
    } else {
        throw e;
    }
}
// You know the subscription exists and can create a Subscriber.
In general, it is often the case that one would create the subscription prior to starting up the subscriber itself (via the Cloud Console or gcloud CLI), so you might even want to do the getSubscription() call and throw an exception no matter what. If a subscription got deleted, you might want to draw attention to this case and handle it explicitly as it has implications (like the fact that messages are no longer being stored to be delivered to the subscription).
However, if you are doing something like building a cache server that just needs to get updates transiently while it is up and running, then creating the subscription on startup could make sense.

Apache Camel creating Consumer component

I'm a newbie to Apache Camel. On HP NonStop there is a Receiver that receives events generated by the event manager, roughly like a stream. My goal is to set up a consumer endpoint which receives the incoming message and processes it through Camel.
The other endpoint simply needs to write the message to a log. From my study I understood that for the consumer endpoint I need to create my own component, and the configuration would look like:
from("myComp:receive").to("log:net.javaforge.blog.camel?level=INFO")
Here is my code snippet which receives messages from the event system.
Receive receive = com.tandem.ext.guardian.Receive.getInstance();
byte[] maxMsg = new byte[500]; // holds largest possible request
short errorReturn = 0;
do { // read messages from $receive until last close
    try {
        countRead = receive.read(maxMsg, maxMsg.length);
        String receivedMessage = new String(maxMsg, "UTF-8");
        // Here I need to hand the receivedMessage over to Camel
    } catch (ReceiveNoOpeners ex) {
        moreOpeners = false;
    } catch (Exception e) {
        moreOpeners = false;
    }
} while (moreOpeners);
Can someone give me some hints on how to make this a Consumer?
The 10,000-foot view is this:
You need to start out with implementing a component. The easiest way to get started is to extend org.apache.camel.impl.DefaultComponent. The only thing you have to do is override DefaultComponent::createEndpoint(..). Quite obviously what it does is create your endpoint.
So the next thing you need is to implement your endpoint. Extend org.apache.camel.impl.DefaultEndpoint for this. Override at the minimum DefaultEndpoint::createConsumer(Processor) to create your own consumer.
Last but not least you need to implement the consumer. Again, it is best to extend org.apache.camel.impl.DefaultConsumer. The consumer is where the code that generates your messages has to go. Through the constructor you receive a reference to your endpoint. Use the endpoint reference to create a new Exchange, populate it and send it on its way along the route. Something along the lines of
Exchange ex = endpoint.createExchange(ExchangePattern.InOnly);
setMyMessageHeaders(ex.getIn(), myMessagemetaData);
setMyMessageBody(ex.getIn(), myMessage);
getAsyncProcessor().process(ex, new AsyncCallback() {
    @Override
    public void done(boolean doneSync) {
        LOG.debug("Message was processed " + (doneSync ? "synchronously" : "asynchronously"));
    }
});
I recommend you pick a simple component (DirectComponent ?) as an example to follow.
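To make the three pieces concrete, here is a bare-bones sketch of the component and endpoint (MyComponent, MyEndpoint and MyConsumer are made-up names and would live in separate files; only the overridden methods matter):

import java.util.Map;
import org.apache.camel.Component;
import org.apache.camel.Consumer;
import org.apache.camel.Endpoint;
import org.apache.camel.Processor;
import org.apache.camel.Producer;
import org.apache.camel.impl.DefaultComponent;
import org.apache.camel.impl.DefaultEndpoint;

public class MyComponent extends DefaultComponent {
    @Override
    protected Endpoint createEndpoint(String uri, String remaining, Map<String, Object> parameters) throws Exception {
        // "myComp:receive" in the route resolves to this component, which builds the endpoint
        return new MyEndpoint(uri, this);
    }
}

public class MyEndpoint extends DefaultEndpoint {
    public MyEndpoint(String uri, Component component) {
        super(uri, component);
    }

    @Override
    public Producer createProducer() throws Exception {
        throw new UnsupportedOperationException("consumer-only endpoint");
    }

    @Override
    public Consumer createConsumer(Processor processor) throws Exception {
        // your DefaultConsumer subclass, e.g. the MessageConsumer shown below
        return new MyConsumer(this, processor);
    }

    @Override
    public boolean isSingleton() {
        return true;
    }
}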
Here is my own consumer component; adding it may help someone.
public class MessageConsumer extends DefaultConsumer {
    private final MessageEndpoint endpoint;
    private boolean moreOpeners = true;

    public MessageConsumer(MessageEndpoint endpoint, Processor processor) {
        super(endpoint, processor);
        this.endpoint = endpoint;
    }

    @Override
    protected void doStart() throws Exception {
        int countRead = 0; // number of bytes read
        do {
            countRead++;
            String msg = String.valueOf(countRead) + " " + System.currentTimeMillis();
            Exchange ex = endpoint.createExchange(ExchangePattern.InOnly);
            ex.getIn().setBody(msg);
            getAsyncProcessor().process(ex, new AsyncCallback() {
                @Override
                public void done(boolean doneSync) {
                    log.info("Message was processed " + (doneSync ? "synchronously" : "asynchronously"));
                }
            });
            // This is an echo server so echo request back to requester
        } while (moreOpeners);
    }

    @Override
    protected void doStop() throws Exception {
        moreOpeners = false;
        log.debug("Message processor is shutdown");
    }
}
