Google Pub/Sub reuse existing subscription - java

I have created a Java Pub/Sub consumer based on the official Pub/Sub documentation:
public static void main(String... args) throws Exception {
  TopicName topic = TopicName.create(pubSubProjectName, pubSubTopic);
  SubscriptionName subscription = SubscriptionName.create(pubSubProjectName, "ssvp-sub");
  SubscriptionAdminClient subscriptionAdminClient = SubscriptionAdminClient.create();
  subscriptionAdminClient.createSubscription(subscription, topic, PushConfig.getDefaultInstance(), 0);

  MessageReceiver receiver =
      new MessageReceiver() {
        @Override
        public void receiveMessage(PubsubMessage message, AckReplyConsumer consumer) {
          System.out.println("Got message: " + message.getData().toStringUtf8());
          consumer.ack();
        }
      };

  Subscriber subscriber = null;
  try {
    subscriber = Subscriber.defaultBuilder(subscription, receiver).build();
    subscriber.addListener(
        new Subscriber.Listener() {
          @Override
          public void failed(Subscriber.State from, Throwable failure) {
            // Handle failure. This is called when the Subscriber encountered a fatal error and is shutting down.
            System.err.println(failure);
          }
        },
        MoreExecutors.directExecutor());
    subscriber.startAsync().awaitRunning();
    Thread.sleep(60000);
  } finally {
    if (subscriber != null) {
      subscriber.stopAsync();
    }
  }
}
It works well, but on every run it demands a new subscription name, otherwise it throws a StatusRuntimeException:
io.grpc.StatusRuntimeException: ALREADY_EXISTS: Resource already exists in the project (resource=ssvp-sub).
(see the SubscriptionName.create(pubSubProjectName, "ssvp-sub") line in my code snippet)
I found out that in the Node.js client we can pass a reuseExisting: true option to reuse an existing subscription:
topic.subscribe('maybe-subscription-name', { reuseExisting: true }, function(err, subscription) {
  // subscription was "get-or-create"-ed
});
What option should I pass if I use the official Java Pub/Sub client?
<dependency>
  <groupId>com.google.cloud</groupId>
  <artifactId>google-cloud-pubsub</artifactId>
  <version>0.13.0-alpha</version>
</dependency>

The Java library does not have a method to allow one to call createSubscription with an existing subscription and not have an exception thrown. You have a couple of options, both of which involve using a try/catch block. The choice depends on whether or not you want to be optimistic about the existence of the subscription.
Pessimistic call:
try {
  subscriptionAdminClient.createSubscription(subscription,
      topic,
      PushConfig.getDefaultInstance(),
      0);
} catch (ApiException e) {
  if (e.getStatusCode() != Status.Code.ALREADY_EXISTS) {
    throw e;
  }
}
// You know the subscription exists and can create a Subscriber.
Optimistic call:
try {
  subscriptionAdminClient.getSubscription(subscription);
} catch (ApiException e) {
  if (e.getStatusCode() == Status.Code.NOT_FOUND) {
    // Create the subscription
  } else {
    throw e;
  }
}
// You know the subscription exists and can create a Subscriber.
In general, it is often the case that one would create the subscription prior to starting up the subscriber itself (via the Cloud Console or gcloud CLI), so you might even want to do the getSubscription() call and throw an exception no matter what. If a subscription got deleted, you might want to draw attention to this case and handle it explicitly as it has implications (like the fact that messages are no longer being stored to be delivered to the subscription).
However, if you are doing something like building a cache server that just needs to get updates transiently while it is up and running, then creating the subscription on startup could make sense.
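For example, a minimal "get-or-create" helper along the lines of the pessimistic call above (written against the 0.13.0-alpha API from the question; newer client versions use ProjectSubscriptionName and StatusCode.Code instead):
// Sketch: create the subscription, treating ALREADY_EXISTS as success.
static void ensureSubscription(SubscriptionAdminClient client,
                               SubscriptionName subscription,
                               TopicName topic) {
  try {
    client.createSubscription(subscription, topic, PushConfig.getDefaultInstance(), 0);
  } catch (ApiException e) {
    if (e.getStatusCode() != Status.Code.ALREADY_EXISTS) {
      throw e;
    }
    // Subscription already exists; safe to reuse it.
  }
}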

Related

Republish message to same queue with updated headers after automatic nack in Spring AMQP

I am trying to configure my Spring AMQP ListenerContainer to allow for a certain type of retry flow that's backwards compatible with a custom rabbit client previously used in the project I'm working on.
The protocol works as follows:
A message is received on a channel.
If processing fails, the message is nacked with the requeue flag set to false.
A copy of the message with additional/updated headers (a retry counter) is published to the same queue.
The headers are used for filtering incoming messages, but that's not important here.
I would like the behaviour to happen on an opt-in basis, so that more standardised Spring retry flows can be used in cases where compatibility with the old client isn't a concern, and the listeners should be able to work without requiring manual acking.
I have implemented a working solution, which I'll get back to below. Where I'm struggling is to publish the new message after signalling to the container that it should nack the current message, because I can't really find any good hooks after the nack or before the next message.
Reading the documentation, it feels like I'm looking for something analogous to the behaviour of RepublishMessageRecoverer used as the final step of a retry interceptor. The main difference in my case is that I need to republish immediately on failure, not as a final recovery step. I tried to look at the implementation of RepublishMessageRecoverer, but the many layers of indirection made it hard for me to understand where the republishing is triggered, and whether a nack happens before that.
My working implementation looks as follows. Note that I'm using an AfterThrowsAdvice, but I think an error handler could also be used with nearly identical logic.
/*
 * MyConfig.class, configuring the container factory
 */
@Configuration
public class MyConfig {

  @Bean
  // NB: bean name is important, overwrites the autoconfigured bean
  public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(
      ConnectionFactory connectionFactory,
      Jackson2JsonMessageConverter messageConverter,
      RabbitTemplate rabbitTemplate
  ) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    factory.setMessageConverter(messageConverter);
    // AOP
    var a1 = new CustomHeaderInspectionAdvice();
    var a2 = new MyThrowsAdvice(rabbitTemplate);
    Advice[] adviceChain = {a1, a2};
    factory.setAdviceChain(adviceChain);
    return factory;
  }
}
/*
 * MyThrowsAdvice.class, hooking into the exception flow from the listener
 */
public class MyThrowsAdvice implements ThrowsAdvice {

  private static final Logger logger = LoggerFactory.getLogger(MyThrowsAdvice.class);

  private final AmqpTemplate amqpTemplate;

  public MyThrowsAdvice(AmqpTemplate amqpTemplate) {
    this.amqpTemplate = amqpTemplate;
  }

  public void afterThrowing(Method method, Object[] args, Object target,
      ListenerExecutionFailedException ex) {
    var message = message(args);
    var cause = ex.getCause();
    // Opt in to the old protocol by throwing an instance of BusinessException in business logic.
    if (cause instanceof BusinessException) {
      /*
       * NB: Since we want to trigger execution after the current method fails
       * with an exception, we need to schedule it on another thread and delay
       * execution until the nack has happened.
       */
      new Thread(() -> {
        try {
          Thread.sleep(1000L);
          var messageProperties = message.getMessageProperties();
          var count = getCount(messageProperties);
          messageProperties.setHeader("xb-count", count + 1);
          var routingKey = messageProperties.getReceivedRoutingKey();
          var exchange = messageProperties.getReceivedExchange();
          amqpTemplate.send(exchange, routingKey, message);
          logger.info("Sent!");
        } catch (InterruptedException e) {
          logger.error("Sleep interrupted", e);
        }
      }).start();
      // NB: Produce the desired nack.
      throw new AmqpRejectAndDontRequeueException(
          "Business logic exception, message will be re-queued with updated headers", cause);
    }
  }

  private static long getCount(MessageProperties messageProperties) {
    try {
      Long c = messageProperties.getHeader("xb-count");
      return c == null ? 0 : c;
    } catch (Exception e) {
      return 0;
    }
  }

  private static Message message(Object[] args) {
    try {
      return (Message) args[1];
    } catch (Exception e) {
      logger.info("Bad cast parse", e);
      throw new AmqpRejectAndDontRequeueException(e);
    }
  }
}
Now, as you can imagine, I'm not particularly pleased with the indeterminism of scheduling a new thread with a delay.
So my question is simply: is there any way I could produce a deterministic solution to my problem using the provided hooks of the ListenerContainer?
Your current solution risks message loss, since you are publishing on a different thread after a delay. If the server crashes during that delay, the message is lost.
It would be better to publish immediately to another queue with a TTL and dead-letter configuration to republish the expired message back to the original queue.
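For illustration, such a retry queue declared with Spring AMQP's QueueBuilder might look like this (queue names and the TTL are placeholders):
// Sketch: messages sit in the retry queue until the TTL expires, then are
// dead-lettered through the default exchange back to the original queue.
@Bean
public Queue retryQueue() {
  return QueueBuilder.durable("my.queue.retry")
      .withArgument("x-message-ttl", 5000)                   // delay before redelivery
      .withArgument("x-dead-letter-exchange", "")            // default exchange
      .withArgument("x-dead-letter-routing-key", "my.queue") // route back to the original queue
      .build();
}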
Alternatively, using the RepublishMessageRecoverer with retries set to maxAttempts=1 should do what you need.
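A minimal sketch of that wiring in your container factory (exchange and routing key names are placeholders):
// Sketch: a stateless retry interceptor with a single attempt, so on failure
// the recoverer republishes the message (with error headers) immediately.
@Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(
    ConnectionFactory connectionFactory, RabbitTemplate rabbitTemplate) {
  SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
  factory.setConnectionFactory(connectionFactory);
  factory.setAdviceChain(RetryInterceptorBuilder.stateless()
      .maxAttempts(1) // no in-memory retries
      .recoverer(new RepublishMessageRecoverer(rabbitTemplate, "my.exchange", "my.routing.key"))
      .build());
  return factory;
}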

Pubnub V4 Migration Callbacks

I am trying to update my code from PubNub SDK v3 to v4 and I am stuck at callbacks.
I have following function which I would like to update:
void transmitMessage(String toID, JSONObject packet) {
  if (this.id == null) {
    mRtcListener.onDebug(new PnRTCMessage("Cannot transmit before calling Client.connect"));
  }
  try {
    JSONObject message = new JSONObject();
    message.put(PnRTCMessage.JSON_PACKET, packet);
    message.put(PnRTCMessage.JSON_ID, "");
    message.put(PnRTCMessage.JSON_NUMBER, this.id);
    this.mPubNub.publish(toID, message, new Callback() {
      @Override
      public void successCallback(String channel, Object message, String timetoken) {
        mRtcListener.onDebug(new PnRTCMessage((JSONObject) message));
      }
      @Override
      public void errorCallback(String channel, PubNubError error) {
        mRtcListener.onDebug(new PnRTCMessage(error.errorObject));
      }
    });
  } catch (JSONException e) {
    e.printStackTrace();
  }
}
The docs say one does not need to instantiate com.pubnub.api.Callback and should use the new SubscribeCallback class instead. I am not sure how to handle this: SubscribeCallback contains the methods status, message and presence, whereas currently I have a successCallback and an errorCallback.
The code at https://www.pubnub.com/docs/android-java/api-reference-publish-and-subscribe#listeners should help you with this.
You can create listeners using the code below:
pubnub.addListener(new SubscribeCallback() {
  @Override
  public void status(PubNub pubnub, PNStatus status) {
    switch (status.getOperation()) {
      // let's combine unsubscribe and subscribe handling for ease of use
      case PNSubscribeOperation:
      case PNUnsubscribeOperation:
        // note: subscribe statuses never have traditional
        // errors, they just have categories to represent the
        // different issues or successes that occur as part of subscribe
        switch (status.getCategory()) {
          case PNConnectedCategory:
            // this is expected for a subscribe, this means there is no error or issue whatsoever
          case PNReconnectedCategory:
            // this usually occurs if subscribe temporarily fails but reconnects. This means
            // there was an error but there is no longer any issue
          case PNDisconnectedCategory:
            // this is the expected category for an unsubscribe. This means there
            // was no error in unsubscribing from everything
          case PNUnexpectedDisconnectCategory:
            // this is usually an issue with the internet connection, this is an error, handle appropriately
          case PNAccessDeniedCategory:
            // this means that PAM does not allow this client to subscribe to this
            // channel and channel group configuration. This is another explicit error
          default:
            // More errors can be directly specified by creating explicit cases for other
            // error categories of `PNStatusCategory` such as `PNTimeoutCategory`,
            // `PNMalformedFilterExpressionCategory` or `PNDecryptionErrorCategory`
        }
      case PNHeartbeatOperation:
        // heartbeat operations can in fact have errors, so it is important to check first for an error.
        // For more information on how to configure heartbeat notifications through the status
        // PNObjectEventListener callback, consult <link to the PNCONFIGURATION heartbeat config>
        if (status.isError()) {
          // There was an error with the heartbeat operation, handle here
        } else {
          // heartbeat operation was successful
        }
      default: {
        // Encountered unknown status type
      }
    }
  }

  @Override
  public void message(PubNub pubnub, PNMessageResult message) {
    String messagePublisher = message.getPublisher();
    System.out.println("Message publisher: " + messagePublisher);
    System.out.println("Message Payload: " + message.getMessage());
    System.out.println("Message Subscription: " + message.getSubscription());
    System.out.println("Message Channel: " + message.getChannel());
    System.out.println("Message timetoken: " + message.getTimetoken());
  }

  @Override
  public void presence(PubNub pubnub, PNPresenceEventResult presence) {
  }
});
Once you've subscribed to a channel as shown below, the listeners above will be called whenever a message or presence event is received.
pubnub.subscribe()
.channels(Arrays.asList("my_channel")) // subscribe to channels
.withPresence() // also subscribe to related presence information
.execute();
Please note that we have recently launched new features with new types of listeners as well, all of which are listed in the link above.
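For the publish side of your transmitMessage method, the v4 equivalent of successCallback/errorCallback is a PNCallback passed to async(); a rough sketch (the PnRTCMessage handling is carried over from your v3 code):
this.mPubNub.publish()
    .channel(toID)
    .message(message)
    .async(new PNCallback<PNPublishResult>() {
      @Override
      public void onResponse(PNPublishResult result, PNStatus status) {
        if (!status.isError()) {
          // corresponds to the old successCallback
          mRtcListener.onDebug(new PnRTCMessage(message));
        } else {
          // corresponds to the old errorCallback
          mRtcListener.onDebug(new PnRTCMessage(status.getErrorData().toString()));
        }
      }
    });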

How to create channel events using Sawtooth Java SDK?

Hyperledger Sawtooth supports subscription to events in the Transaction Processor. However, is there a way to create application-specific events in the Transaction Processor, something like the Python example here: https://www.jacklllll.xyz/blog/2019/04/08/sawtooth/
ctx.addEvent(
    'agreement/create',
    [['name', 'agreement'],
     ['address', address],
     ['buyer name', agreement.BuyerName],
     ['seller name', agreement.SellerName],
     ['house id', agreement.HouseID],
     ['creator', signer]],
    null)
In the current Sawtooth Java SDK v0.1.2 the only override is
apply(TpProcessRequest, State)
without the context. However, the documentation here shows an addEvent(TpProcessRequest, Context) method: https://github.com/hyperledger/sawtooth-sdk-java/blob/master/sawtooth-sdk-transaction-processor/src/main/java/sawtooth/sdk/processor/TransactionHandler.java
So far I have managed to listen to sawtooth/state-delta events, however this gives me all state changes of that transaction family:
import sawtooth.sdk.protobuf.EventSubscription;
import sawtooth.sdk.protobuf.EventFilter;
import sawtooth.sdk.protobuf.ClientEventsSubscribeRequest;
import sawtooth.sdk.protobuf.ClientEventsSubscribeResponse;
import sawtooth.sdk.protobuf.ClientEventsUnsubscribeRequest;
import sawtooth.sdk.protobuf.Message;

EventFilter filter = EventFilter.newBuilder()
    .setKey("address")
    .setMatchString(nameSpace.concat(".*"))
    .setFilterType(EventFilter.FilterType.REGEX_ANY)
    .build();

EventSubscription subscription = EventSubscription.newBuilder()
    .setEventType("sawtooth/state-delta")
    .addFilters(filter)
    .build();

context = new ZContext();
socket = context.createSocket(ZMQ.DEALER);
socket.connect("tcp://sawtooth-rest:4004");

ClientEventsSubscribeRequest request = ClientEventsSubscribeRequest.newBuilder()
    .addSubscriptions(subscription)
    .build();

message = Message.newBuilder()
    .setCorrelationId("123")
    .setMessageType(Message.MessageType.CLIENT_EVENTS_SUBSCRIBE_REQUEST)
    .setContent(request.toByteString())
    .build();

socket.send(message.toByteArray());
Once the Message.MessageType.CLIENT_EVENTS_SUBSCRIBE_REQUEST is registered I get messages in a thread loop.
I was hoping that in the TransactionHandler I would be able to call addEvent() or create some type of event(s) that can then be subscribed to using the Java SDK.
Has anyone else tried creating custom events in Java on Sawtooth?
Here's an example of an event being added in Python. Java would be similar.
You add your custom-named event in your Transaction Processor:
context.add_event(event_type="cookiejar/bake", attributes=[("cookies-baked", amount)])
See https://github.com/danintel/sawtooth-cookiejar/blob/master/pyprocessor/cookiejar_tp.py#L138
Here are examples of event handlers written in Python and Go:
https://github.com/danintel/sawtooth-cookiejar/tree/master/events
Java would also be similar. Basically the logic in the event handler is:
Subscribe to the events you want to listen to
Send the request to the Validator
Read and parse the subscription response
In a loop, listen for the subscribed events
After exiting the loop (if ever), unsubscribe from events
For those who are trying to use the Java SDK for event publishing/subscribing: there is no direct API available. At least I couldn't find one, and I am using the 1.0 Docker images.
So to publish your events you need to publish directly to the sawtooth rest-api server. You need to take care of the following:
You need a context id, which is valid only per request. You get this from the request in your apply() method (code below), so make sure you publish the event during transaction processing, i.e. during the implementation of the apply() method.
The event structure will be as described in the docs here.
If the transaction is successful and the block is committed, you get the event in the event subscriber; otherwise it doesn't show up.
While creating a subscriber you need to subscribe to the sawtooth/block-commit event and add an additional subscription to your type of event, e.g. "myNS/my-event".
Sample Event Publishing code:
public void apply(TpProcessRequest request, State state) throws InvalidTransactionException, InternalError {
  // process your transaction first
  sawtooth.sdk.messaging.Stream eventStream = new Stream("tcp://localhost:4004"); // create this in the constructor of the class, NOT here
  List<Attribute> attrList = new ArrayList<>();
  Attribute attrs = Attribute.newBuilder().setKey("someKey").setValue("someValue").build();
  attrList.add(attrs);
  Event appEvent = Event.newBuilder().setEventType("myNS/my-event-type")
      .setData( <some ByteString here> )
      .addAllAttributes(attrList)
      .build();
  TpEventAddRequest addEventRequest = TpEventAddRequest.newBuilder()
      .setContextId(request.getContextId())
      .setEvent(appEvent)
      .build();
  Future sawtoothSubsFuture = eventStream.send(MessageType.TP_EVENT_ADD_REQUEST, addEventRequest.toByteString());
  try {
    System.out.println(sawtoothSubsFuture.getResult());
  } catch (Exception e) {
    e.printStackTrace();
  }
}
Then you subscribe to events as follows (inspired by the marketplace samples):
try {
  EventFilter eventFilter = EventFilter.newBuilder().setKey("address")
      .setMatchString(String.format("^%s.*", "myNamespace"))
      .setFilterType(FilterType.REGEX_ANY).build();
  // subscribe to sawtooth/block-commit
  EventSubscription deltaSubscription = EventSubscription.newBuilder()
      .setEventType("sawtooth/block-commit")
      .addFilters(eventFilter)
      .build();
  EventSubscription mySubscription = EventSubscription.newBuilder()
      .setEventType("myNS/my-event-type")
      .build(); // no filters added for my events
  ClientEventsSubscribeRequest subsReq = ClientEventsSubscribeRequest.newBuilder()
      .addLastKnownBlockIds("0000000000000000")
      .addSubscriptions(deltaSubscription)
      .addSubscriptions(mySubscription)
      .build();
  Future sawtoothSubsFuture = eventStream.send(MessageType.CLIENT_EVENTS_SUBSCRIBE_REQUEST,
      subsReq.toByteString());
  ClientEventsSubscribeResponse eventSubsResp = ClientEventsSubscribeResponse
      .parseFrom(sawtoothSubsFuture.getResult());
  System.out.println("eventSubsResp.getStatus() :: " + eventSubsResp.getStatus());
  if (eventSubsResp.getStatus().equals(ClientEventsSubscribeResponse.Status.UNKNOWN_BLOCK)) {
    System.out.println("Unknown block");
    // retry the connection if this happens, by calling this same method
  }
  if (!eventSubsResp.getStatus().equals(ClientEventsSubscribeResponse.Status.OK)) {
    System.out.println("Subscription failed with status " + eventSubsResp.getStatus());
    throw new RuntimeException("cannot connect");
  } else {
    isActive = true;
    System.out.println("Making active");
  }
  while (isActive) {
    Message eventMsg = eventStream.receive();
    EventList eventList = EventList.parseFrom(eventMsg.getContent());
    for (Event event : eventList.getEventsList()) {
      System.out.println("An event ::::");
      System.out.println(event);
    }
  }
} catch (Exception e) {
  e.printStackTrace();
}

How to handle exceptions in SpringBoot ListenableFuture

So I have a Spring Boot endpoint controller that starts like this:
@RequestMapping(value = "/post", method = RequestMethod.POST, produces = MediaType.APPLICATION_JSON_VALUE)
public Response post(@Valid @RequestBody Message message) throws FailedToPostException {
  message.setRecieveTime(System.currentTimeMillis());
  return this.service.post(message);
}
And the post function:
public Response post(Message message) throws FailedToPostException {
  ListenableFuture<SendResult<String, Message>> future = kafkaTemplate.send("topicName", message);
  future.addCallback(new ListenableFutureCallback<SendResult<String, Message>>() {
    @Override
    public void onSuccess(SendResult<String, Message> result) {
      LOGGER.info("Post Finished. '{}' with offset: {}", message,
          result.getRecordMetadata().offset());
    }

    @Override
    public void onFailure(Throwable ex) {
      LOGGER.error("Message Post Failed. '{}'", message, ex);
      long nowMillis = System.currentTimeMillis();
      int diffSeconds = (int) ((nowMillis - message.getRecieveTime()) / 1000);
      if (diffSeconds >= 10) {
        LOGGER.debug("timeout sending message to Kafka, aborting.");
        return;
      } else {
        post(message);
      }
    }
  });
  LOGGER.debug("D: " + Utils.getMetricValue("buffer-available-bytes", kafkaTemplate));
  return new Response("Message Posted");
}
As you can see, we are trying to make sure that if kafkaTemplate.send fails, we recursively invoke post(message) again for up to 10 seconds, until the producer memory buffer clears and the message gets through.
The problems are:
We want to be able to return a failure response to the endpoint's client (e.g. "Failed to acknowledge the message").
Is there any better way to handle exceptions from a Future in a piece of code like that above?
Is there a way to avoid using a recursive function here? We did that because we wanted to attempt delivery of the message to Kafka for about 10 seconds before sending it as an email to look at.
Side note: I still haven't used the buffer-available-bytes attribute from kafkaTemplate.metrics(). I intend to use it to reduce the chance of this problem, but the above still needs handling in case of race conditions.
There are a few ways to do this, but I really like Spring Retry as a way to solve this kind of problem. It's a bit of pseudo code here, but if you need more specifics on how to do it, I could make things more explicit:
@Retryable(maxAttempts = 10, value = KafkaSendException.class)
public Response post(Message message) throws FailedToPostException {
  ListenableFuture<SendResult<String, Message>> future = kafkaTemplate.send("topicName", message);
  SendResult<String, Message> result;
  try {
    result = future.get(1, TimeUnit.SECONDS);
  } catch (Exception ex) {
    LOGGER.error("Message Post Failed. '{}'", ex.getCause().getMessage(), ex);
    // wrap so that @Retryable sees the exception type it retries on
    throw new KafkaSendException(ex);
  }
  LOGGER.info("Post Finished. '{}' with offset: {}", message,
      result.getRecordMetadata().offset());
  return new Response("Message Posted");
}
This effectively does the same thing without recursion; I wouldn't recommend recursion for error handling.
The controller should be able to massage the actual KafkaSendException with a nice @ExceptionHandler.
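For instance, a sketch of such a handler (KafkaSendException being the custom exception assumed above):
// Sketch: translate the send failure into an error response for the client.
@ExceptionHandler(KafkaSendException.class)
public ResponseEntity<String> handleKafkaSendException(KafkaSendException ex) {
  return ResponseEntity.status(HttpStatus.SERVICE_UNAVAILABLE)
      .body("Failed to acknowledge the message");
}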

Is this a bad practice?

Is the following code considered a bad practice? Do you think it can be done otherwise?
The goal is to always update the status, either with success (i.e. the invocation of service.invoke(id) returns normally) or with failure...
@Autowired
private Service service;

public void onMessage(Message message) {
  String id = null;
  String status = "FAILED";
  try {
    id = ((TextMessage) message).getText();
    status = service.invoke(id); // can throw unchecked exception
  } catch (final JMSException e) {
    throw new RuntimeException(e);
  } finally {
    if (StringUtils.isNumeric(id)) {
      service.update(id, status);
    }
  }
}
It depends on your use case, i.e. whether you have to perform a step or not based on the previous step. Using finally will execute your second step regardless of which exception you receive.
I would recommend moving the second step outside the try...catch block, so that the update runs only on success or on an exception you expected; otherwise your method throws and exits without updating.
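A sketch of that restructuring, keeping the method's original shape (the explicit catch of the unchecked exception is my interpretation of the advice above):
public void onMessage(Message message) {
  String id = null;
  try {
    id = ((TextMessage) message).getText();
    String status = service.invoke(id); // can throw unchecked exception
    service.update(id, status);         // success path
  } catch (final JMSException e) {
    throw new RuntimeException(e);      // no usable id; nothing to update
  } catch (final RuntimeException e) {
    if (StringUtils.isNumeric(id)) {
      service.update(id, "FAILED");     // expected failure path, updated explicitly
    }
    throw e;
  }
}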
I think you should not implement the message listener interface directly; you should wire your listeners independently of Spring tech, just POJO-based, using <jms:listener-container> with <jms:listener>.
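For example, a minimal sketch of that XML wiring (bean, queue and method names are placeholders):
<!-- a plain POJO wired as a listener; no MessageListener implementation needed -->
<jms:listener-container connection-factory="connectionFactory" acknowledge="auto">
  <jms:listener destination="my.queue" ref="myPojoHandler" method="handle"/>
</jms:listener-container>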
