Hyperledger Sawtooth supports subscribing to events in the Transaction Processor. However, is there a way to create application-specific events in the Transaction Processor, something like in the Python example here: https://www.jacklllll.xyz/blog/2019/04/08/sawtooth/
ctx.addEvent(
'agreement/create',
[['name', 'agreement'],
['address', address],
['buyer name', agreement.BuyerName],
['seller name', agreement.SellerName],
['house id', agreement.HouseID],
['creator', signer]],
null)
In the current Sawtooth Java SDK v0.1.2 the only method to override is
apply(TpProcessRequest, State)
without the Context. However, the TransactionHandler documented here: https://github.com/hyperledger/sawtooth-sdk-java/blob/master/sawtooth-sdk-transaction-processor/src/main/java/sawtooth/sdk/processor/TransactionHandler.java
shows
addEvent(TpProcessRequest, Context)
So far I have managed to listen to sawtooth/state-delta events; however, this gives me all state changes of that transaction family:
import sawtooth.sdk.protobuf.EventSubscription;
import sawtooth.sdk.protobuf.EventFilter;
import sawtooth.sdk.protobuf.ClientEventsSubscribeRequest;
import sawtooth.sdk.protobuf.ClientEventsSubscribeResponse;
import sawtooth.sdk.protobuf.ClientEventsUnsubscribeRequest;
import sawtooth.sdk.protobuf.Message;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;
EventFilter filter = EventFilter.newBuilder()
.setKey("address")
.setMatchString(nameSpace.concat(".*"))
.setFilterType(EventFilter.FilterType.REGEX_ANY)
.build();
EventSubscription subscription = EventSubscription.newBuilder()
.setEventType("sawtooth/state-delta")
.addFilters(filter)
.build();
ZContext context = new ZContext();
ZMQ.Socket socket = context.createSocket(ZMQ.DEALER);
socket.connect("tcp://sawtooth-rest:4004");
ClientEventsSubscribeRequest request = ClientEventsSubscribeRequest.newBuilder()
.addSubscriptions(subscription)
.build();
Message message = Message.newBuilder()
.setCorrelationId("123")
.setMessageType(Message.MessageType.CLIENT_EVENTS_SUBSCRIBE_REQUEST)
.setContent(request.toByteString())
.build();
socket.send(message.toByteArray());
Once the Message.MessageType.CLIENT_EVENTS_SUBSCRIBE_REQUEST is registered, I receive messages in a thread loop.
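Roughly, the receive side of that loop looks like this (a sketch continuing the snippet above; EventList and Event are also from sawtooth.sdk.protobuf):
// First parse the subscription response, then keep reading CLIENT_EVENTS messages.
Message resp = Message.parseFrom(socket.recv());
if (resp.getMessageType() == Message.MessageType.CLIENT_EVENTS_SUBSCRIBE_RESPONSE) {
    ClientEventsSubscribeResponse subscribeResponse =
            ClientEventsSubscribeResponse.parseFrom(resp.getContent());
    System.out.println("Subscription status: " + subscribeResponse.getStatus());
}
while (true) {
    Message received = Message.parseFrom(socket.recv());
    if (received.getMessageType() == Message.MessageType.CLIENT_EVENTS) {
        EventList eventList = EventList.parseFrom(received.getContent());
        for (Event event : eventList.getEventsList()) {
            System.out.println(event);
        }
    }
}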
I was hoping that in the TransactionHandler I would be able to call addEvent() or create some type of event that can then be subscribed to using the Java SDK.
Has anyone else tried creating custom events in Java on Sawtooth?
Here's an example of an event being added in Python. Java would be similar.
You add your custom-named event in your Transaction Processor:
context.add_event(event_type="cookiejar/bake", attributes=[("cookies-baked", amount)])
See https://github.com/danintel/sawtooth-cookiejar/blob/master/pyprocessor/cookiejar_tp.py#L138
Here are examples of event handlers written in Python and Go:
https://github.com/danintel/sawtooth-cookiejar/tree/master/events
Java would also be similar. Basically the logic in the event handler is:
Subscribe to the events you want to listen to
Send the request to the validator
Read and parse the subscription response
In a loop, listen to the subscribed events
After exiting the loop (if ever), unsubscribe from events (a minimal sketch of this step follows below)
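For the unsubscribe step, a minimal Java sketch (not from the cookiejar examples; it reuses the ZMQ socket from the question and the ClientEventsUnsubscribeRequest class already imported there) could look like this:
// Unsubscribe from all event subscriptions on this connection.
ClientEventsUnsubscribeRequest unsubscribeRequest =
        ClientEventsUnsubscribeRequest.newBuilder().build();
Message unsubscribeMessage = Message.newBuilder()
        .setCorrelationId("124")
        .setMessageType(Message.MessageType.CLIENT_EVENTS_UNSUBSCRIBE_REQUEST)
        .setContent(unsubscribeRequest.toByteString())
        .build();
socket.send(unsubscribeMessage.toByteArray());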
For those who are trying to use the Java SDK for event publishing/subscribing: there is no direct API available. At least I couldn't find one, and I am using the 1.0 Docker images.
So to publish your events you need to send them directly to the validator (the code below connects to its ZMQ endpoint on port 4004). You need to take care of the following:
You need a context id, which is valid only per request. You get it from the request in your apply() method (code below). So make sure you publish the event during transaction processing, i.e. inside the implementation of the apply() method.
The event structure is as described in the docs here
If the transaction is successful and the block is committed, you get the event in the event subscriber; otherwise it doesn't show up.
While creating a subscriber you need to subscribe to the sawtooth/block-commit event and add an additional subscription to your type of event, e.g. "myNS/my-event".
Sample Event Publishing code:
public void apply(TpProcessRequest request, State state) throws InvalidTransactionException, InternalError {
    // process your transaction first
    sawtooth.sdk.messaging.Stream eventStream = new Stream("tcp://localhost:4004"); // create this in the constructor of the class, NOT here

    List<Attribute> attrList = new ArrayList<>();
    Attribute attrs = Attribute.newBuilder().setKey("someKey").setValue("someValue").build();
    attrList.add(attrs);

    Event appEvent = Event.newBuilder().setEventType("myNS/my-event-type")
            .setData( <some ByteString here> ).addAllAttributes(attrList).build();

    TpEventAddRequest addEventRequest = TpEventAddRequest.newBuilder()
            .setContextId(request.getContextId()).setEvent(appEvent).build();

    Future sawtoothSubsFuture = eventStream.send(MessageType.TP_EVENT_ADD_REQUEST, addEventRequest.toByteString());

    try {
        System.out.println(sawtoothSubsFuture.getResult());
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Then you subscribe to events like this (inspired by the marketplace samples):
try {
    EventFilter eventFilter = EventFilter.newBuilder().setKey("address")
            .setMatchString(String.format("^%s.*", "myNamespace"))
            .setFilterType(FilterType.REGEX_ANY).build();

    // subscribe to sawtooth/block-commit
    EventSubscription deltaSubscription = EventSubscription.newBuilder().setEventType("sawtooth/block-commit")
            .addFilters(eventFilter)
            .build();

    EventSubscription mySubscription = EventSubscription.newBuilder().setEventType("myNS/my-event-type")
            .build(); // no filters added for my events

    ClientEventsSubscribeRequest subsReq = ClientEventsSubscribeRequest.newBuilder()
            .addLastKnownBlockIds("0000000000000000")
            .addSubscriptions(deltaSubscription)
            .addSubscriptions(mySubscription)
            .build();

    Future sawtoothSubsFuture = eventStream.send(MessageType.CLIENT_EVENTS_SUBSCRIBE_REQUEST,
            subsReq.toByteString());
    ClientEventsSubscribeResponse eventSubsResp = ClientEventsSubscribeResponse
            .parseFrom(sawtoothSubsFuture.getResult());
    System.out.println("eventSubsResp.getStatus() :: " + eventSubsResp.getStatus());

    if (eventSubsResp.getStatus().equals(ClientEventsSubscribeResponse.Status.UNKNOWN_BLOCK)) {
        System.out.println("Unknown block ");
        // retry the connection if this happens by calling this same method
    }
    if (!eventSubsResp.getStatus().equals(ClientEventsSubscribeResponse.Status.OK)) {
        System.out.println("Subscription failed with status " + eventSubsResp.getStatus());
        throw new RuntimeException("cannot connect ");
    } else {
        isActive = true;
        System.out.println("Making active ");
    }

    while (isActive) {
        Message eventMsg = eventStream.receive();
        EventList eventList = EventList.parseFrom(eventMsg.getContent());
        for (Event event : eventList.getEventsList()) {
            System.out.println("An event ::::");
            System.out.println(event);
        }
    }
} catch (Exception e) {
    e.printStackTrace();
}
Related
I'm totally new to the Java Reactor API.
I use a WebClient to retrieve data from an external webservice, which I then map to a DTO of class "LearnDetailDTO".
But before sending back this DTO, I have to modify it with data I get from another webservice. For this, I chain the calls with flatMap(). I get my data from the second webservice, but my DTO is returned before it is modified with the new data.
My problem is: how to wait until all calls to the second webservice are finished and the DTO is modified before sending it back to the caller?
Here is my code:
class Controller {

    @GetMapping(value = "/learn/detail/", produces = MediaType.APPLICATION_JSON_VALUE)
    public Mono<LearnDetailDTO> getLearnDetail() {
        return getLearnDetailDTO();
    }

    private Mono<LearnDetailDTO> getLearnDetailDTO() {
        WebClient client = WebClient.create("https://my_rest_webservice.com");
        return client
                .get()
                .retrieve()
                .bodyToMono(LearnDetailDTO.class)
                .flatMap(learnDetailDTO -> {
                    LearnDetailDTO newDto = new LearnDetailDTO(learnDetailDTO);
                    for (GroupDTO group : newDto.getGroups()) {
                        String keyCode = group.getKeyCode();
                        for (GroupDetailDto detail : group.getGroupsDetailList()) {
                            adeService.getResourcesList(keyCode) // one asynchronous REST call to get resources
                                    .flatMap(resource -> {
                                        Long id = resource.getData().get(0).getId();
                                        return adeService.getEventList(id); // another asynchronous REST call to get an event list, using the resource from the previous call
                                    })
                                    .subscribe(event -> detail.setCreneaux(event.getData()));
                        }
                    }
                    return Mono.just(newDto);
                });
    }
}
I tried to block() my call to adeService.getEventList() instead of subscribe(), but I get the following error:
block()/blockFirst()/blockLast() are blocking, which is not supported
in thread reactor-http-nio-2
How can I be sure that my newDto object is complete before returning it?
You should not mutate objects in subscribe. The function passed to subscribe will be called asynchronously at an unknown time in the future.
subscribe should be considered a terminal operation that only serves to connect your pipeline to other parts of your system. It should not modify values inside the scope of your data stream.
What you want is a pipeline that collects all events and then maps them onto a DTO containing the collected events.
As a rule of thumb, your pipeline result must be composed of results accumulated in the operator chain. You should never have a subscribe in the middle of the operator chain, and you should never mutate an object from it.
I will provide a simplified example so you can take time to analyze the logic that reaches the goal: accumulating new values asynchronously into a single result. In this example, I've removed any notion of "detail" and connected groups directly to events, to simplify the overall code.
The snippet:
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
import java.time.Duration;
import java.util.ArrayList;
import java.util.List;
public class AccumulateProperly {
// Data object definitions
record Event(String data) {}
record Resource(int id) {}
record Group(String keyCode, List<Event> events) {
// When adding events, do not mutate the object directly. Instead, create a derived version
Group merge(List<Event> newEvents) {
var allEvents = new ArrayList<>(events);
allEvents.addAll(newEvents);
return new Group(keyCode, allEvents);
}
}
record MyDto(List<Group> groups) { }
static Flux<Resource> findResourcesByKeyCode(String keyCode) {
return Flux.just(new Resource(1), new Resource(2));
}
static Flux<Event> findEventById(int id) {
return Flux.just(
new Event("resource_"+id+"_event_1"),
new Event("resource_"+id+"_event_2")
);
}
public static void main(String[] args) {
MyDto dtoInstance = new MyDto(List.of(new Group("myGroup", List.of())));
System.out.println("INITIAL STATE:");
System.out.println(dtoInstance);
// Asynchronous operation pipeline
Mono<MyDto> dtoCompletionPipeline = Mono.just(dtoInstance)
.flatMap(dto -> Flux.fromIterable(dto.groups)
// for each group, find associated resources
.flatMap(group -> findResourcesByKeyCode(group.keyCode())
// For each resource, fetch its associated event
.flatMap(resource -> findEventById(resource.id()))
// Collect all events for the group
.collectList()
// accumulate collected events in a new instance of the group
.map(group::merge)
)
// Collect all groups after they've collected events
.collectList()
// Build a new dto instance from the completed set of groups
.map(completedGroups -> new MyDto(completedGroups))
);
// NOTE: block() is used here only because we are in a main method and I want to print
// the pipeline output before the program exits.
// Try to avoid block(). Return your Mono, or connect it to another Mono or Flux
// using an operator like flatMap.
dtoInstance = dtoCompletionPipeline.block(Duration.ofSeconds(1));
System.out.println("OUTPUT STATE:");
System.out.println(dtoInstance);
}
}
Its output:
INITIAL STATE:
MyDto[groups=[Group[keyCode=myGroup, events=[]]]]
OUTPUT STATE:
MyDto[groups=[Group[keyCode=myGroup, events=[Event[data=resource_1_event_1], Event[data=resource_1_event_2], Event[data=resource_2_event_1], Event[data=resource_2_event_2]]]]]
I am trying to configure my Spring AMQP ListenerContainer to allow for a certain type of retry flow that's backwards compatible with a custom rabbit client previously used in the project I'm working on.
The protocol works as follows:
A message is received on a channel.
If processing fails, the message is nacked with the requeue flag set to false
A copy of the message with additional/updated headers (a retry counter) is published to the same queue
The headers are used for filtering incoming messages, but that's not important here.
I would like the behaviour to happen on an opt-in basis, so that more standardised Spring retry flows can be used in cases where compatibility with the old client isn't a concern, and the listeners should be able to work without requiring manual acking.
I have implemented a working solution, which I'll get back to below. Where I'm struggling is publishing the new message after signalling to the container that it should nack the current message, because I can't really find any good hooks after the nack or before the next message.
Reading the documentation, it feels like I'm looking for something analogous to the behaviour of RepublishMessageRecoverer used as the final step of a retry interceptor. The main difference in my case is that I need to republish immediately on failure, not as a final recovery step. I tried to look at the implementation of RepublishMessageRecoverer, but the many layers of indirection made it hard for me to understand where the republishing is triggered, and whether a nack happens before that.
My working implementation looks as follows. Note that I'm using a ThrowsAdvice, but I think an error handler could also be used with nearly identical logic.
/*
 MyConfig.class, configuring the container factory
*/
@Configuration
public class MyConfig {

    @Bean
    // NB: bean name is important, overwrites the autoconfigured bean
    public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(
            ConnectionFactory connectionFactory,
            Jackson2JsonMessageConverter messageConverter,
            RabbitTemplate rabbitTemplate
    ) {
        SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory);
        factory.setMessageConverter(messageConverter);
        // AOP
        var a1 = new CustomHeaderInspectionAdvice();
        var a2 = new MyThrowsAdvice(rabbitTemplate);
        Advice[] adviceChain = {a1, a2};
        factory.setAdviceChain(adviceChain);
        return factory;
    }
}
/*
 MyThrowsAdvice.class, hooking into the exception flow from the listener
*/
public class MyThrowsAdvice implements ThrowsAdvice {

    private static final Logger logger = LoggerFactory.getLogger(MyThrowsAdvice.class);

    private final AmqpTemplate amqpTemplate;

    public MyThrowsAdvice(AmqpTemplate amqpTemplate) {
        this.amqpTemplate = amqpTemplate;
    }

    public void afterThrowing(Method method, Object[] args, Object target, ListenerExecutionFailedException ex) {
        var message = message(args);
        var cause = ex.getCause();
        // opt in to the old protocol by throwing an instance of BusinessException in the business logic
        if (cause instanceof BusinessException) {
            /*
             NB: Since we want to trigger execution after the current method fails
             with an exception, we need to schedule it in another thread and delay
             execution until the nack has happened.
            */
            new Thread(() -> {
                try {
                    Thread.sleep(1000L);
                    var messageProperties = message.getMessageProperties();
                    var count = getCount(messageProperties);
                    messageProperties.setHeader("xb-count", count + 1);
                    var routingKey = messageProperties.getReceivedRoutingKey();
                    var exchange = messageProperties.getReceivedExchange();
                    amqpTemplate.send(exchange, routingKey, message);
                    logger.info("Sent!");
                } catch (InterruptedException e) {
                    logger.error("Sleep interrupted", e);
                }
            }).start();
            // NB: Produce the desired nack.
            throw new AmqpRejectAndDontRequeueException("Business logic exception, message will be re-queued with updated headers", cause);
        }
    }

    private static long getCount(MessageProperties messageProperties) {
        try {
            Long c = messageProperties.getHeader("xb-count");
            return c == null ? 0 : c;
        } catch (Exception e) {
            return 0;
        }
    }

    private static Message message(Object[] args) {
        try {
            return (Message) args[1];
        } catch (Exception e) {
            logger.info("Bad cast parse", e);
            throw new AmqpRejectAndDontRequeueException(e);
        }
    }
}
Now, as you can imagine, I'm not particularly pleased with the indeterminism of scheduling a new thread with a delay.
So my question is simply, is there any way I could produce a deterministic solution to my problem using the provided hooks of the ListenerContainer ?
Your current solution risks message loss, since you are publishing on a different thread after a delay. If the server crashes during that delay, the message is lost.
It would be better to publish immediately to another queue with a TTL and dead-letter configuration to republish the expired message back to the original queue.
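A rough sketch of that wait queue, assuming Spring AMQP's QueueBuilder (the queue name, TTL and routing key are placeholders):
@Bean
public Queue retryWaitQueue() {
    // Messages published here expire after the TTL and are dead-lettered
    // back to the original queue via the default exchange.
    return QueueBuilder.durable("my-queue.retry.wait")
            .withArgument("x-message-ttl", 5000)
            .withArgument("x-dead-letter-exchange", "")
            .withArgument("x-dead-letter-routing-key", "my-queue")
            .build();
}
Your advice would then publish the updated copy to my-queue.retry.wait instead of scheduling a thread.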
Using the RepublishMessageRecoverer with retries set to maxAttempts=1 should do what you need.
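For example, a sketch of that wiring (the exchange and routing key names are placeholders):
@Bean
public RetryOperationsInterceptor retryInterceptor(RabbitTemplate rabbitTemplate) {
    return RetryInterceptorBuilder.stateless()
            .maxAttempts(1) // no in-memory retries; recover (republish) immediately
            .recoverer(new RepublishMessageRecoverer(rabbitTemplate, "error.exchange", "error.routing.key"))
            .build();
}
Set this interceptor on the container factory's advice chain in place of the custom ThrowsAdvice.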
I'm looking to use the MC|Brand channel on a Sponge Minecraft server.
When I try to use:
Sponge.getChannelRegistrar().getOrCreateRaw(plugin, channel).addListener((data, connection, side) -> {
if(side == Type.CLIENT) {
// do something
}
});
I'm getting this issue:
org.spongepowered.api.network.ChannelRegistrationException: Reserved channels cannot be registered by plugins
at org.spongepowered.server.network.VanillaChannelRegistrar.validateChannel(VanillaChannelRegistrar.java:71) ~[VanillaChannelRegistrar.class:1.12.2-7.3.0]
at org.spongepowered.server.network.VanillaChannelRegistrar.createRawChannel(VanillaChannelRegistrar.java:104) ~[VanillaChannelRegistrar.class:1.12.2-7.3.0]
at org.spongepowered.api.network.ChannelRegistrar.getOrCreateRaw(ChannelRegistrar.java:122) ~[ChannelRegistrar.class:1.12.2-7.3.0]
How can I fix it, just by using the channel? Is there an event for reserved MC channel messages?
I tried to register the channel exactly as Sponge does, but without the check that causes the issue.
To do that, I used Java reflection, like this:
RawDataChannel spongeChannel = null; // declare channel
try {
    // first, try the default channel registration (faster)
    spongeChannel = Sponge.getChannelRegistrar().getOrCreateRaw(plugin, channel);
} catch (ChannelRegistrationException e) { // error -> can't register
    try {
        // load classes
        Class<?> vanillaRawChannelClass = Class.forName("org.spongepowered.server.network.VanillaRawDataChannel");
        Class<?> vanillaChannelRegistrarClass = Class.forName("org.spongepowered.server.network.VanillaChannelRegistrar");
        Class<?> vanillaBindingClass = Class.forName("org.spongepowered.server.network.VanillaChannelBinding");
        // get the constructor of the raw channel
        Constructor<?> rawChannelConstructor = vanillaRawChannelClass.getConstructor(ChannelRegistrar.class, String.class, PluginContainer.class);
        spongeChannel = (RawDataChannel) rawChannelConstructor.newInstance(Sponge.getChannelRegistrar(), channel, plugin.getContainer()); // new channel instance
        // now register the channel
        Method registerChannel = vanillaChannelRegistrarClass.getDeclaredMethod("registerChannel", vanillaBindingClass); // get the method to register
        registerChannel.setAccessible(true); // it's a private method, so make it accessible
        registerChannel.invoke(Sponge.getChannelRegistrar(), spongeChannel); // run channel registration
    } catch (Exception exc) {
        exc.printStackTrace(); // reflection failed
    }
}
if (spongeChannel == null) // channel not registered
    return;
// my channel is now registered by one of the two available methods
spongeChannel.addListener((data, connection, side) -> { // my listener
    if (side == Type.CLIENT) {
        // do something
    }
});
If you get an error, especially when the reflection fails, I suggest checking for a newer version; the method may have changed its parameters or the class may have been moved.
You can find the Sponge code on their GitHub.
We are using Java RabbitMQ with Spring Boot in a distributed service architecture. One service gets an HTTP request and forwards it to an unknown queue for processing. At the same time it has to wait for a response on another queue before it can terminate the HTTP request. (It's a preview request that gets its work done by a renderer.)
There can be more than one instance of ServiceA (the HTTP interface) and ServiceB (the renderer), so with every preview message we also send a unique ID to be used as the routing key.
I'm having trouble with the BlockingQueueConsumer. Whenever I call consumer.nextMessage() I get the same message over and over again. This is doubly weird: for one, the message should have been acked and removed from the queue, and for another, the consumer shouldn't even receive it, since the unique ID we used is no longer bound to the queue. nextMessage() even returns before the renderer service is done and has sent its done message back.
Here's the simplified setup:
general
All services use a global DirectExchange for all messages
@Bean
public DirectExchange globalDirectExchange() {
    return new DirectExchange(EXCHANGE_NAME, false, true);
}
ServiceA (handles the HTTP request):
private Content requestPreviewByKey(RenderMessage renderMessage, String previewKey) {
    String renderDoneRoutingKey = UUID.randomUUID().toString();
    renderMessage.setPreviewDoneKey(renderDoneRoutingKey);

    Binding binding = BindingBuilder.bind(previewDoneQueue).to(globalDirectExchange)
            .with(renderDoneRoutingKey);
    try {
        amqpAdmin.declareBinding(binding);
        rabbitProducer.sendPreviewRequestToKey(renderMessage, previewKey);
        return getContentBlocking();
    } catch (Exception e) {
        logErrorIfDebug(type, e);
        throw new ApiException(BaseErrorCode.COMMUNICATION_ERROR, "Could not render preview");
    } finally {
        amqpAdmin.removeBinding(binding);
    }
}
private Content getContentBlocking() {
    BlockingQueueConsumer blockingQueueConsumer = new BlockingQueueConsumer(rabbitMqConfig.connectionFactory(),
            new DefaultMessagePropertiesConverter(), new ActiveObjectCounter<>(), AcknowledgeMode.AUTO, true, 1, PREVIEW_DONE_QUEUE);
    try {
        blockingQueueConsumer.start();
        Message message = blockingQueueConsumer.nextMessage(waitForPreviewMs);
        if (!StringUtils.isEmpty(message)) {
            String result = new String(message.getBody());
            return JsonUtils.stringToObject(result, Content.class);
        }
        throw new ApiException("Could not render preview");
    } catch (Exception e) {
        logError(e);
        throw new ApiException("Could not render preview");
    } finally {
        blockingQueueConsumer.stop();
    }
}
Service B
I'll spare you most of the code. My log says everything is going well, and as soon as it's done the service sends the correct message to the UUID key that was sent with the initial render request.
public void sendPreviewDoneMessage(Content content, String previewDoneKey) {
    String message = JsonUtils.objectToString(content);
    rabbitTemplate.convertAndSend(globalDirectExchange, previewDoneKey, message);
}
The whole thing works... once.
The real issue seems to be the consumer setup. Why do I keep getting the same (first) message from the queue when I use nextMessage()?
Doesn't creating and removing a Binding ensure that only messages bound to that routing key are received in that instance? And doesn't nextMessage() acknowledge the message and remove it from the queue?!
Thanks a lot for bearing with me, and even more for any helpful answer!
BlockingQueueConsumer is not designed to be used directly; it is a component of the SimpleMessageListenerContainer, which will take care of acking the message after it has been consumed by a listener (the container calls commitIfNecessary).
There may be other unexpected side effects of using this consumer directly.
I strongly advise using the listener container to consume messages.
If you just want to receive messages on demand, use a RabbitTemplate receive() or receiveAndConvert() method instead.
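For the on-demand case, a minimal sketch (assuming a configured RabbitTemplate, and reusing PREVIEW_DONE_QUEUE and waitForPreviewMs from your code):
private Content getContentBlocking() {
    // receive(queue, timeout) returns null if nothing arrives within the timeout
    Message message = rabbitTemplate.receive(PREVIEW_DONE_QUEUE, waitForPreviewMs);
    if (message == null) {
        throw new ApiException("Could not render preview");
    }
    return JsonUtils.stringToObject(new String(message.getBody()), Content.class);
}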
I have created a Java Pub/Sub consumer relying on the following Pub/Sub doc.
public static void main(String... args) throws Exception {
    TopicName topic = TopicName.create(pubSubProjectName, pubSubTopic);
    SubscriptionName subscription = SubscriptionName.create(pubSubProjectName, "ssvp-sub");
    SubscriptionAdminClient subscriptionAdminClient = SubscriptionAdminClient.create();
    subscriptionAdminClient.createSubscription(subscription, topic, PushConfig.getDefaultInstance(), 0);

    MessageReceiver receiver =
            new MessageReceiver() {
                @Override
                public void receiveMessage(PubsubMessage message, AckReplyConsumer consumer) {
                    System.out.println("Got message: " + message.getData().toStringUtf8());
                    consumer.ack();
                }
            };

    Subscriber subscriber = null;
    try {
        subscriber = Subscriber.defaultBuilder(subscription, receiver).build();
        subscriber.addListener(
                new Subscriber.Listener() {
                    @Override
                    public void failed(Subscriber.State from, Throwable failure) {
                        // Handle failure. This is called when the Subscriber encountered a fatal error and is shutting down.
                        System.err.println(failure);
                    }
                },
                MoreExecutors.directExecutor());
        subscriber.startAsync().awaitRunning();
        Thread.sleep(60000);
    } finally {
        if (subscriber != null) {
            subscriber.stopAsync();
        }
    }
}
It works well, but on every run it asks for a new subscription name by throwing a StatusRuntimeException:
io.grpc.StatusRuntimeException: ALREADY_EXISTS: Resource already exists in the project (resource=ssvp-sub).
(see the SubscriptionName.create(pubSubProjectName, "ssvp-sub") line in my code snippet)
I found out that in the Node.js client we can pass a "reuseExisting: true" option to reuse an existing subscription:
topic.subscribe('maybe-subscription-name', { reuseExisting: true }, function(err, subscription) {
// subscription was "get-or-create"-ed
});
What option should I pass if I use the official Java Pub/Sub client?
<dependency>
<groupId>com.google.cloud</groupId>
<artifactId>google-cloud-pubsub</artifactId>
<version>0.13.0-alpha</version>
</dependency>
The Java library does not have a method to allow one to call createSubscription with an existing subscription and not have an exception thrown. You have a couple of options, both of which involve using a try/catch block. The choice depends on whether or not you want to be optimistic about the existence of the subscription.
Pessimistic call:
try {
subscriptionAdminClient.createSubscription(subscription,
topic,
PushConfig.getDefaultInstance(),
0);
} catch (ApiException e) {
if (e.getStatusCode() != Status.Code.ALREADY_EXISTS) {
throw e;
}
}
// You know the subscription exists and can create a Subscriber.
Optimistic call:
try {
subscriptionAdminClient.getSubscription(subscription);
} catch (ApiException e) {
if (e.getStatusCode() == Status.Code.NOT_FOUND) {
// Create the subscription
} else {
throw e;
}
}
// You know the subscription exists and can create a Subscriber.
In general, it is often the case that one would create the subscription prior to starting up the subscriber itself (via the Cloud Console or gcloud CLI), so you might even want to do the getSubscription() call and throw an exception no matter what. If a subscription got deleted, you might want to draw attention to this case and handle it explicitly as it has implications (like the fact that messages are no longer being stored to be delivered to the subscription).
However, if you are doing something like building a cache server that just needs to get updates transiently while it is up and running, then creating the subscription on startup could make sense.