server failover with Quarkus Reactive MySQL Clients / io.vertx.mysqlclient - java

Does io.vertx.mysqlclient support server failover as it can be set up with MySQL Connector/J?
My application is based on Quarkus, using io.vertx.mutiny.mysqlclient.MySQLPool, which in turn is based on io.vertx.mysqlclient. If there is support for server failover in that stack, how can it be set up? I did not find any hints in the documentation or code.

No, it doesn't support failover.
You could create two clients and then use Mutiny failover methods to get the same effect:
MySQLPool client1 = ...
MySQLPool client2 = ...
private Uni<List<Data>> query(MySQLPool client) {
// Use client param to send queries to the database
}
Uni<List<Data>> results = query(client1)
.onFailure().recoverWithUni(() -> query(client2));
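For completeness, the two pools in this pattern would each point at a different server. A minimal sketch of creating them, where host names, credentials, and pool size are placeholder assumptions:

```java
import io.vertx.mutiny.core.Vertx;
import io.vertx.mutiny.mysqlclient.MySQLPool;
import io.vertx.mysqlclient.MySQLConnectOptions;
import io.vertx.sqlclient.PoolOptions;

Vertx vertx = Vertx.vertx();

// Primary server; all connection values are placeholders.
MySQLConnectOptions primary = new MySQLConnectOptions()
    .setHost("db-primary.example.com").setPort(3306)
    .setUser("app").setPassword("secret").setDatabase("mydb");

// Copy the options and only swap the host for the failover target.
MySQLConnectOptions secondary = new MySQLConnectOptions(primary)
    .setHost("db-secondary.example.com");

PoolOptions poolOptions = new PoolOptions().setMaxSize(5);
MySQLPool client1 = MySQLPool.pool(vertx, primary, poolOptions);
MySQLPool client2 = MySQLPool.pool(vertx, secondary, poolOptions);
```

Note that this is client-side failover per operation, not the transparent connection failover that Connector/J provides.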

Related

TCP server with Reactor Netty and Spring Boot

I'm very new to reactive programming. I have watched multiple tutorials on reactive programming, reactive Spring, and Project Reactor. I understand the concepts and the four interfaces of the Reactive Streams specification. However, searching around the internet, there are not enough examples to help me wrap my head around putting all of the theory and concepts I learned into practice.
I'm trying to create a backend TCP server for a game with Spring Boot and Reactor Netty. My server application will potentially have multiple TCP servers listening on multiple ports. I should probably use Reactor Kafka and have my servers talk to each other through Kafka. However, at this stage I don't want to deal with all that yet, because it would just make my learning curve steeper.
This is what I'm doing to create and start the TCP server. It is a class with a function that I can pass a port to, in order to spin up different TCP servers listening on different ports:
public void accept(String host, int port) {
TcpServer server =
TcpServer.create()
.port(port)
.option(ChannelOption.SO_BACKLOG, 1024)
.handle(acceptorObserver::onChannelMessage)
.doOnChannelInit(acceptorObserver::onChannelInit)
.doOnConnection(acceptorObserver::onClientConnect)
.doOnBound(acceptorObserver::onServerBound)
.doOnUnbound(acceptorObserver::onServerUnbound);
server.bindUntilJavaShutdown(Duration.ofSeconds(30), null);
}
My acceptorObserver is a class that has all the functions to handle the lifecycle of the TCP server. The only method I'm interested in is the handle method, where all inbound packets arrive after passing through the Netty pipelines.
public Publisher<Void> onChannelMessage(NettyInbound nettyInbound, NettyOutbound nettyOutbound) {
Mono<IPacketWriter> handshake =
Mono.create(sink ->
nettyOutbound.withConnection(conn -> {
Channel channel = conn.channel();
if (!channel.hasAttr(NettyAttributes.SOCKET)) {
logger.info("Client connected from {}", channel.remoteAddress().toString());
conn.onDispose().subscribe(null, null, () -> onClientDisconnected(conn));
NettySocket newSocket = new NettySocket(conn, (int) (Math.random() * Integer.MAX_VALUE), (int) (Math.random() * Integer.MAX_VALUE), version, patch, locale);
IPacketWriter handshakePacket = new PacketWriter().writeShort(version)
.writeString(patch)
.writeInt(newSocket.getIntSeqRecv())
.writeInt(newSocket.getIntSeqSend())
.writeByte(locale);
sink.success(handshakePacket);
channel.attr(NettyAttributes.SOCKET).set(newSocket);
sockets.put(newSocket.getId(), newSocket);
}
else {
logger.info("Received message from client {}", channel.remoteAddress().toString());
sink.success();
}
}));
return nettyOutbound.sendObject(handshake)
.then(nettyInbound.receiveObject().cast(PacketReader.class).mapNotNull(in -> {
logger.info("nettyInbound {}", BitOperator.readableByteArray(in.getBuffer()));
return in;
}).then());
}
Thanks to Violeta, who helped me with the above function. Every time I receive an inbound message, I check whether the Netty SocketChannel has my attribute; if it doesn't, that means it is a newly connected client and I send my handshake packet, otherwise the handshake Mono will be empty.
My goal is to aggregate all of the NettyInbound received objects into a stream in a different class, something like a CentralInboundPacketDispatcher class, and have different handlers subscribe to this stream, filter out the packets they care about, and handle them. Whenever a new client connects, there is a new NettyInbound stream. My problem is that I have no idea how to achieve this goal. From my googling, it seems that I need an "infinite stream", a.k.a. a hot stream? There really aren't a lot of examples around, or I'm very bad at googling. I would appreciate it if someone could point me in the right direction.
It would be very cool if I could do something like an HTTP REST server with Spring Boot, where I can use an annotation like @GetMapping to say: if the packet has this header, use this function.
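A hot, shared stream of the kind described could be sketched with Reactor's Sinks API; the class and method names below are hypothetical, matching the dispatcher idea in the question:

```java
import reactor.core.publisher.Flux;
import reactor.core.publisher.Sinks;

// Hypothetical central dispatcher: every connection's inbound packets are
// pushed into one hot sink, and handlers subscribe with their own filters.
public class CentralInboundPacketDispatcher {

    // multicast() makes this a hot stream: subscribers only see packets
    // emitted after they subscribe, regardless of which connection they came from.
    private final Sinks.Many<PacketReader> sink =
        Sinks.many().multicast().onBackpressureBuffer();

    // Called from onChannelMessage for each decoded inbound packet,
    // e.g. via .doOnNext(dispatcher::dispatch) on nettyInbound.receiveObject().
    public void dispatch(PacketReader packet) {
        sink.tryEmitNext(packet);
    }

    // Handlers subscribe here and filter for the packet headers they care about:
    // dispatcher.packets().filter(p -> p.getHeader() == SOME_OPCODE).subscribe(...)
    public Flux<PacketReader> packets() {
        return sink.asFlux();
    }
}
```

The annotation-driven routing idea would then be a layer on top: scan handler beans at startup and subscribe each one with a filter derived from its annotation.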

Appropriate Architecture for Akka WebSockets with Cluster Sharding

I am attempting to implement a way for users to connect to a specific websocket, which enables all connected clients to send and receive messages to all connected users. This can be thought of as a group chat where there is a dedicated websocket URL per chat room.
I have used a handleWebSocketMessages route and the following boilerplate code (source) for data distribution across connected users:
Pair<Sink<Message, NotUsed>, Source<Message, NotUsed>> sinkSourcePair =
MergeHub.of(Message.class)
.toMat(BroadcastHub.of(Message.class), Keep.both())
.run(actorSystem);
Sink<Message, NotUsed> hubSink = sinkSourcePair.first();
Source<Message, NotUsed> hubSource = sinkSourcePair.second();
Flow<Message, Message, NotUsed> broadcastFlow = Flow.fromSinkAndSource(hubSink, hubSource);
When a message arrives via the websocket, I want it to be registered by the cluster sharded Actor (entity), which I complete using EntityRef.ask.
Flow<Message, Message, NotUsed> incomingMessageFlow = Flow.of(Message.class);
Flow<Message, Message, NotUsed> recordMessageFlow = ...
Flow<Message, Message, NotUsed> broadcastFlow = Flow.fromSinkAndSource(hubSink, hubSource);
return handleWebSocketMessages(incomingMessageFlow.via(recordMessageFlow).via(broadcastFlow));
The above works fine for clients connected to a single websocket but I want my websockets to be associated with a sharded Actor based on the websocket URL (e.g. ws://localhost/my-group-chat/ws).
Where should I define my broadcast flow? I've tried several approaches:
to define it within Route for websocket handling (makes no sense as it is created new for every connection)
to include it in a sharded actor (fails because of serialization requirements between sharded actors)
to store a map of broadcast flows so that when it exists for a specific URL it is utilized and when it doesn't exist it is initialized. <- this one worked but I don't like it
I believe the broadcast flow should be assigned to the sharded actor, whereas the current map breaks this pattern in terms of using Akka Cluster Sharding.
I'd appreciate any ideas.
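For reference, the map-based approach (the third option above) can at least be made atomic per room with computeIfAbsent, so a hub is created exactly once per URL. This is a node-local sketch only; with Cluster Sharding the sharded entity would still have to forward messages between nodes, since a materialized MergeHub/BroadcastHub cannot be serialized:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

import akka.NotUsed;
import akka.japi.Pair;
import akka.http.javadsl.model.ws.Message;
import akka.stream.javadsl.BroadcastHub;
import akka.stream.javadsl.Flow;
import akka.stream.javadsl.Keep;
import akka.stream.javadsl.MergeHub;
import akka.stream.javadsl.Sink;
import akka.stream.javadsl.Source;

// One MergeHub/BroadcastHub pair per chat room, created on first use.
private final ConcurrentMap<String, Flow<Message, Message, NotUsed>> rooms =
    new ConcurrentHashMap<>();

// Returns the node-local broadcast flow for a room id taken from the URL.
private Flow<Message, Message, NotUsed> roomFlow(String roomId) {
    return rooms.computeIfAbsent(roomId, id -> {
        Pair<Sink<Message, NotUsed>, Source<Message, NotUsed>> pair =
            MergeHub.of(Message.class)
                .toMat(BroadcastHub.of(Message.class), Keep.both())
                .run(actorSystem);
        return Flow.fromSinkAndSource(pair.first(), pair.second());
    });
}
```

The hub stays materialized for the lifetime of the room; evicting idle rooms from the map would need a separate policy.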

vertx-redis-client 3.7.0: Is it cheap to create redis client on every http request

I am using vertx-redis-client in one of my projects. I am creating the Redis client like this:
private void createRedisClient(final Handler<AsyncResult<Redis>> redisHandler) {
Redis.createClient(vertx, AppSettings.REDIS_OPTIONS)
.connect(onConnect -> {
if (onConnect.succeeded()) {
System.out.println("Redis got connected");
Redis redisClient = onConnect.result();
redisHandler.handle(onConnect);
redisClient.exceptionHandler(e -> {
e.printStackTrace();
attemptReconnect(0, redisHandler);
});
} else {
onConnect.cause().printStackTrace();
redisHandler.handle(onConnect);
}
});
}
But I need to switch the Redis DB based on parameters in the REST API's input JSON. So, is it wise (performance-wise) to create a Redis client for every request and connect to the required DB? Or should I pool my Redis clients somehow?
It is not cheap at all.
If you have more than one Redis client, you should put them in some kind of concurrent map, and use atomic operations to get those clients depending on your parameters.
Creating a connection for every access to Redis is going to kill your application's performance.
Getting good performance from Redis is also about how well you design your data structures. Ideally, you should fetch (or write) all the data in a single call - for example, you could have all your keys in a single db and organize closely associated data with the same key, so that you can get your work done in a single HGET/HSET.
If that is not possible, I'd recommend that you create a pool of Redis clients that are already connected to the dbs that you will access. A single Redis client can have multiple connections open, since keep-alive is on by default.
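The concurrent-map idea can be sketched in plain Java. ClientCache and its factory are illustrative names; in the real application the factory would wrap Redis.createClient(...).connect(...):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.IntFunction;

/** Caches one client per Redis DB index; creation is atomic per key. */
final class ClientCache<C> {
    private final Map<Integer, C> clients = new ConcurrentHashMap<>();
    private final IntFunction<C> factory;

    ClientCache(IntFunction<C> factory) {
        this.factory = factory;
    }

    /** Returns the cached client for this DB, creating it at most once. */
    C forDb(int db) {
        // computeIfAbsent runs the factory at most once per db index,
        // even when many requests race for the same key.
        return clients.computeIfAbsent(db, factory::apply);
    }
}
```

Requests then look up the client by the DB index from the request JSON instead of connecting per request.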

Vert.x Proxy Service - handle routing on different machines

I have a webserver with 2 endpoints that I want to handle on different machines. They are independent, and when updating one I don't want to restart the other.
Router router = Router.router(vertx);
router.route("/api*").handler(BodyHandler.create());
router.post("/api/end_point_1").handler(new Handler1());
router.post("/api/end_point_2").handler(new Handler2());
How can I achieve this in Vert.x? I've been reading about Vert.x Service Proxy, but I am not quite sure how to apply it to Router.
What you're looking for is called a Vert.x cluster.
Your handlers would look something like this:
router.post("/api/end_point_1").handler(req -> {
// Extract data from request
// Package it into an object
// Send it over EventBus
vertx.eventBus().send("event1", data);
});
Now create another verticle in a separate application, which should do:
MessageConsumer<Object> consumer = vertx.eventBus().consumer("event1");
consumer.handler(o -> {
System.out.println("Got message: " + o.body());
});
Now, to run those separate JARs, follow this guide: http://vertx.io/docs/vertx-hazelcast/java/
I would simply package the code as two different JARs and deploy them independently. Then a load-balancer/API gateway/reverse-proxy would send the traffic to the right servers depending on the request URI.
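A minimal clustered startup for each JAR could look like the sketch below (Vert.x 3.x callback API, with vertx-hazelcast on the classpath; Handler1Verticle is a hypothetical verticle wrapping the event-bus consumer above):

```java
import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;

// Each application starts its own clustered Vert.x instance;
// the Hazelcast cluster manager discovers the other nodes,
// so eventBus().send(...) reaches consumers in the other JAR.
Vertx.clusteredVertx(new VertxOptions(), res -> {
    if (res.succeeded()) {
        Vertx vertx = res.result();
        vertx.deployVerticle(new Handler1Verticle()); // hypothetical verticle
    } else {
        res.cause().printStackTrace();
    }
});
```

With this in place, each endpoint's verticle can be redeployed independently, which addresses the "update one without restarting the other" requirement.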

Azure Service bus with AMQP - how to specify the session ID

I am trying to send messages to Service Bus using the AMQP Qpid Java library, and I am getting this error:
"SessionId needs to be set for all brokered messages to a Partitioned
Topic that supports Ordering"
My topic has "Enforce Message ordering" turned on (this is why I get this error, I guess).
When using the Azure Service Bus Java library (and not AMQP), I have this function:
this.entity.setSessionId(...);
When using the AMQP library, I do not see an option to set the session ID on the message I want to send.
Note that if I uncheck the "Enforce Message ordering" option, the message is sent successfully.
This is my code:
private boolean sendServiceBusMsg(MessageProducer sender, Session sendSession) {
try {
// generate message
BytesMessage createBytesMessage = (BytesMessage) sendSession.createBytesMessage();
createBytesMessage.setStringProperty(CAMPAIGN_ID, campaignKey);
createBytesMessage.setJMSMessageID("ID:" + bm.getMessageId());
createBytesMessage.setContentType(Symbol.getSymbol("application/octet-stream"));
/* message is the actual data I send / not seen here */
createBytesMessage.writeBytes(message.toByteArray());
sender.send(createBytesMessage);
return true;
} catch (JMSException e) {
e.printStackTrace(); // at minimum, log the failure instead of swallowing it
return false;
}
}
The SessionId property maps to the AMQP message's properties.group-id. The Qpid JMS client maps that to the JMSXGroupID property, so try the following:
createBytesMessage.setStringProperty("JMSXGroupID", "session-1");
As you guessed, a similar SO thread, Azure Service Bus topics partitioning, verified that disabling the Enforce Message Ordering feature by setting SupportOrdering to false can solve the issue, but that can't be done via the Azure Service Bus Java library because the property supportsOrdering is now private.
And you can try to set the Group property as @XinChen said using AMQP, per the content below from here:
Service Bus sessions, also called 'groups' in the AMQP 1.0 protocol, are unbounded sequences of related messages. Service Bus guarantees ordering of messages in a session.
Hope it helps.
