In my project I want to have multiple clients connecting to a service. I am using the Java RSocket implementation.
The service should maintain a state for each client. At this point I could manage the clients by some identifier, and I have already implemented that option. But I do not want to manage the sessions manually using strings.
So another idea is to identify the clients by the RSocket connection itself. Is there a way to use the RSocket channel to identify a specific client?
Imagine an example service and a couple of clients. Each client has an RSocket channel to the service up and running. Is there a way to identify these clients on the server side using the RSocket channel? It would be amazing if you could show a programmatic example of such behavior.
Thank you!
EDIT (describing the case in more detail)
Here is my example.
We currently have three CORBA objects that are used as demonstrated in the diagram:
LoginObject (a reference to which is retrieved via the NamingService). Clients can call a login() method to obtain a Session.
The Session object has various methods to query details about the current service context and, most importantly, to obtain a Transaction object.
The Transaction object can be used to execute various commands via a generic method that takes a commandName and a list of key-value pairs as parameters.
After the client has executed n commands, it can commit or roll back the transaction (also via methods on the Transaction object).
So here we use the Session object to execute transactions on our service.
Now we have decided to move away from CORBA to RSocket. Thus the RSocket microservice needs to be able to store the session's state; otherwise we can't know what is going to be committed or rolled back. Can this be done with just an individual Publisher for each client?
Here's an example I made the other day that will create a stateful RSocket using Netifi's broker:
https://github.com/netifi/netifi-stateful-socket
Unfortunately you'd need to build our develop branch locally to try it out (https://github.com/netifi/netifi-java) - there should be a release with the code by the end of the week if you don't want to build it locally.
I'm working on a pure RSocket example too, but if you want to see how it would work, take a look at the StatefulSocket class found in the example. It should give you a clue how to deal with the session in pure RSocket.
Regarding your other question about a transaction manager: you would need to tie your transaction to the Reactive Streams signals that are being emitted. If you receive a cancel or an onError you'd roll back, and if you receive an onComplete you would commit the transaction. There are side-effect methods on Flux/Mono that should make this easy to deal with. Depending on what you are doing you could also use BaseSubscriber, as it has hooks for the different Reactive Streams signals. A sketch of this idea is shown below.
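Here is a minimal sketch of that idea (Transaction and transactionManager are hypothetical placeholders, not part of RSocket; the inbound payloads are echoed back as responses for brevity):
@Override
public Flux<Payload> requestChannel(Publisher<Payload> payloads) {
    Transaction tx = transactionManager.begin(); // hypothetical transaction API
    return Flux.from(payloads)
            .doOnNext(tx::execute)         // apply each incoming command
            .doOnError(e -> tx.rollback()) // roll back on error
            .doOnCancel(tx::rollback)      // roll back if the client cancels
            .doOnComplete(tx::commit);     // commit when the stream completes
}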
Thanks,
Robert
An example of resuming connections, i.e. maintaining the state on the server, has landed in the rsocket-java repo:
https://github.com/rsocket/rsocket-java/commit/d47629147dd1a4d41c7c8d5af3d80838e01d3ba5
This resumes a whole connection, including whatever state is associated with each individual channel etc.
There is an rsocket-cli project that lets you try this out. Start and stop the socat process and observe the client and server progress:
$ socat -d TCP-LISTEN:5001,fork,reuseaddr TCP:localhost:5000
$ ./rsocket-cli --debug --resume --server -i cli:time tcp://localhost:5000
$ ./rsocket-cli -i client --stream --resume tcp://localhost:5001
From your description it looks like a channel will work best. I haven't used channels before so I can't really guarantee it (sorry), but I'd recommend you try something like this:
A transaction controller:
public class TransactionController implements Publisher<Payload> {

    private final List<Transaction> transactions = new ArrayList<>();

    @Override
    public void subscribe(Subscriber<? super Payload> subscriber) {
        // emit the channel's outbound payloads to the subscriber here
    }

    public void processPayload(Payload payload) {
        // handle transactions...
    }
}
And in your RSocket implementation, override the requestChannel method:
@Override
public Flux<Payload> requestChannel(Publisher<Payload> payloads) {
    // Create a new controller for each channel
    TransactionController controller = new TransactionController();
    Flux.from(payloads)
        .subscribe(controller::processPayload);
    return Flux.from(controller);
}
I'm trying to create a commit listener using the Java SDK to listen for commit events after submitting a transaction, but the listener is not responding.
I'm using the fabcar example.
// create a gateway connection
try (Gateway gateway = builder.connect()) {
    // get the network and contract
    Network network = gateway.getNetwork("mychannel");
    Contract contract = network.getContract("fabcar");

    FabcarCommitListener listener = new FabcarCommitListener();
    network.addCommitListener(listener, network.getChannel().getPeers(), "createCar");
}
The FabcarCommitListener:
public class FabcarCommitListener implements CommitListener {

    @Override
    public void acceptCommit(BlockEvent.TransactionEvent transactionEvent) {
        System.out.println("TX COMMITTED");
    }

    @Override
    public void acceptDisconnect(PeerDisconnectEvent peerDisconnectEvent) {
        System.out.println("peerDisconnected");
    }
}
Any ideas on how a commit listener works with the Java SDK?
A commit listener receives events only for a specific transaction invocation, not for all invocations of a given transaction name. Every transaction invocation has its own unique transaction ID, which you can obtain from the Transaction object prior to submitting:
https://hyperledger.github.io/fabric-gateway-java/release-2.2/org/hyperledger/fabric/gateway/Transaction.html#getTransactionId--
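For example (a hedged sketch; the car arguments are just the fabcar sample values):
Transaction transaction = contract.createTransaction("createCar");
// register the listener for this specific invocation's transaction ID
network.addCommitListener(new FabcarCommitListener(),
        network.getChannel().getPeers(),
        transaction.getTransactionId());
transaction.submit("CAR10", "VW", "Polo", "Grey", "Mary");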
By default, a transaction submit will also listen for the transaction to be committed by peers, so there is no need for your code to listen for transaction commits. There are several built-in strategies for determining when a transaction has been successfully committed, which you can select either (a sketch follows these options):
When connecting the Gateway: https://hyperledger.github.io/fabric-gateway-java/release-2.2/org/hyperledger/fabric/gateway/Gateway.Builder.html#commitHandler-org.hyperledger.fabric.gateway.spi.CommitHandlerFactory-
For a specific transaction invocation: https://hyperledger.github.io/fabric-gateway-java/release-2.2/org/hyperledger/fabric/gateway/Transaction.html#setCommitHandler-org.hyperledger.fabric.gateway.spi.CommitHandlerFactory-
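For instance, a hedged sketch of selecting a built-in strategy when connecting (the wallet and network config path are assumed to be set up as in the fabcar sample):
Gateway.Builder builder = Gateway.createBuilder()
        .identity(wallet, "user1")
        .networkConfig(networkConfigPath)
        .commitHandler(DefaultCommitHandlers.MSPID_SCOPE_ANYFORTX); // built-in strategy
try (Gateway gateway = builder.connect()) {
    // submit transactions as usual; commits are awaited per the chosen strategy
}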
If you want to implement your own custom logic for identifying whether a transaction has committed successfully, you can write your own custom commit handler implementation, and this implementation can use a commit listener to identify the commit and connection status of all the peers you care about. Here are sample commit handler and factory implementations that make use of commit listeners:
https://github.com/hyperledger/fabric-gateway-java/blob/release-2.2/src/test/java/org/hyperledger/fabric/gateway/sample/SampleCommitHandlerFactory.java
https://github.com/hyperledger/fabric-gateway-java/blob/release-2.2/src/test/java/org/hyperledger/fabric/gateway/sample/SampleCommitHandler.java
If you want to look at all the transactions committed to the blockchain, even if only to pick out certain ones you care about, then use a block listener:
https://hyperledger.github.io/fabric-gateway-java/release-2.2/org/hyperledger/fabric/gateway/Network.html#addBlockListener-java.util.function.Consumer-
From the block event you can navigate down through all the transactions included in the block.
Having said all this, both block listeners and commit listeners really deal with the mechanics of Fabric blockchains, that is, inspecting the transactions that have operated on the ledger and checking whether they were successfully committed. If you want to orchestrate business processes around transactional events then you should probably be using a contract event listener instead.
If you want to trigger some business process when a new car is created, implement your createCar transaction function so that it emits an event when it is committed:
https://hyperledger.github.io/fabric-chaincode-java/release-2.2/api/org/hyperledger/fabric/shim/ChaincodeStub.html#setEvent-java.lang.String-byte:A-
In your client application, simply listen for this event using a contract event listener:
https://hyperledger.github.io/fabric-gateway-java/release-2.2/org/hyperledger/fabric/gateway/Contract.html#addContractListener-java.util.function.Consumer-java.lang.String-
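A minimal client-side sketch (assuming the chaincode calls stub.setEvent("createCar", payload) as described above):
Consumer<ContractEvent> listener = contract.addContractListener(event -> {
    String payload = event.getPayload().map(String::new).orElse("");
    System.out.println("createCar event: " + payload);
}, "createCar");
// later, when no longer interested:
contract.removeContractListener(listener);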
You can use checkpointing to allow your client to resume listening for events at the last processed block position after a client application restart:
https://hyperledger.github.io/fabric-gateway-java/release-2.2/org/hyperledger/fabric/gateway/Contract.html#addContractListener-org.hyperledger.fabric.gateway.spi.Checkpointer-java.util.function.Consumer-java.lang.String-
I'm using the Play Framework for Java for an app. I am attempting to make it distributed by deploying copies of the application across several servers and having a configurable list of nodes in the database; the nodes must be able to communicate with each other. In MongoDB, the list is stored as JSON like so:
{
    "master": "host1.com:2678",
    "nodes": ["host2.com:2678", "host3.com:2678", "host4.com:2678"]
}
The code deployed on each server is identical, but the scheduler is enabled only on the master node and schedules particular work for nodes depending on how busy they are. The code is not provided here, as the specifics of the scheduler's operation aren't important for my question.
In order to know how busy they are, to schedule things and for other status updates, the nodes need to be able to communicate with each other.
Play Framework's Web Service client allows me to do this by making HTTP requests from one node to another, like so:
HttpResponse res = WS.url("http://host2.com").get();
But the idea is for specific HTTP requests (such as those used for scheduling) to be allowed only if they come from another one of the nodes (be it the master or a slave node), but not from a web browser, curl, etc. How do I do that securely? I can check the host of the incoming request or particular headers, but surely those are easy to forge?
If you want this to be enforced on all controllers, check out Play's allowed hosts filter; a rough config sketch is shown below.
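A hedged sketch of what the application.conf entry might look like (hostnames taken from your node list; the filter validates the Host header of incoming requests):
play.filters.hosts {
  allowed = ["host1.com", "host2.com", "host3.com", "host4.com"]
}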
If you want to enforce this filter on a specific controller/method, you can try something like this:
class MyController @Inject()(filter: AllowedHostsFilter) extends Controller {
  def get = filter.apply(Action.async { implicit request =>
    Future.successful(Ok)
  })
}
You could have a look at pac4j.org; they have a lot of options for implementing security features in Play.
You could maybe filter by IP address:
http://www.pac4j.org/1.9.x/docs/authenticators/ip.html
I need to implement RPC over STOMP, where the client runs as JavaScript in a browser, and the server side is implemented using Spring messaging capabilities.
While using @MessageMapping is fine for normal messaging, I find @SendToUser quite limiting for implementing RPC, because the client has a hard time understanding which reply is associated with which request when multiple simultaneous requests are made by the client.
Of course there is no problem when only one request is made and the client waits for its reply, but problems arise when the client has to keep track of multiple "open" RPC calls.
I've managed to make the system work mostly fine by associating an ID with every request: the client sends an id together with the message, and the server replies with a special message wrapper that contains this id, so the client is able to associate asynchronous replies with requests.
This works fine but has several limitations:
I have to develop code that understands this structure, which defeats the utility of having simple annotated methods
when the server-side code throws an exception, Spring's @MessageExceptionHandler gets called and the correct exception is returned to the client, but the request id is lost because the handler has no (easy) way to access it.
I know that with RabbitMQ we can add a "reply-to" header to every request that needs to be associated with a special reply (the RPC response), and this is implemented by creating a special temporary queue that the user is automatically subscribed to, but how can I use this scheme in Spring? Also, that would tie me to a specific broker.
How can I elegantly implement a correct RPC call in Spring that properly handles server-side exceptions?
I find this a general problem and I think Spring would benefit greatly from implementing it natively.
This is not exactly what you're asking for, but maybe you can attempt something like this:
Path variables in Spring WebSockets @SendTo mapping
You define an ID on your client and send it to the queue /user/queue/{myid}.
On the server side you will have a class that looks like this:
@MessageMapping("/user/queue/{myid}")
public void simple(@DestinationVariable("myid") String id, Object requestDto, Principal principal) {
    Object responseDto = handle(requestDto); // produce the reply (hypothetical handler)
    simpMessagingTemplate.convertAndSendToUser(principal.getName(), "/user/queue/" + id, responseDto);
}
This solution works on the same principle as the RabbitMQ solution you mention.
Hope this helps.
If you do not need the exception/reason on the client, but only want to know which message failed, you could send ack messages for successful messages. For successful messages you always have easy access to the message id / headers; by the absence of the ack message, the client knows which message has failed. A rough sketch follows below.
Of course this comes at the cost of sending all the ack messages and knowing the timeout for requests. Additional code is also required to keep track of outstanding requests on the client side, but this can be done in a middleware and ends up as an OK-ish dev experience for the business logic.
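A rough sketch of the ack approach (Ack, CommandDto, process(), and the request-id header are hypothetical names, not Spring APIs):
@MessageMapping("/command")
@SendToUser("/queue/acks")
public Ack handle(@Header("request-id") String requestId, CommandDto command) {
    process(command);          // business logic (hypothetical)
    return new Ack(requestId); // absence of this ack signals failure to the client
}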
My application must open a TCP socket connection to a server and listen for periodically incoming messages.
What are the best practices for implementing this in a Java EE 7 application?
Right now I have something like this:
@javax.ejb.Singleton
public class MessageChecker {

    @Asynchronous
    public void startChecking() {
        // set up things
        Socket client = new Socket(...);
        [...]
        // start a loop to retrieve the incoming messages
        while ((line = reader.readLine()) != null) {
            LOG.debug("Message from socket server: " + line);
        }
    }
}
The MessageChecker.startChecking() method is called from a @Startup bean in a @PostConstruct method.
@javax.ejb.Singleton
@Startup
public class Starter {

    @Inject
    private MessageChecker checker;

    @PostConstruct
    public void startup() {
        checker.startChecking();
    }
}
Do you think this is the correct approach?
Actually it is not working well. The application server (JBoss WildFly 8) hangs and no longer reacts to shutdown or redeployment commands. I have the feeling that it gets stuck in the while(...) loop.
Cheers
Frank
Frank, it is bad practice to do any I/O operations while you're in an EJB context, and the reason behind this is simple. When working in a cluster:
The EJB instances will inherently block each other while waiting on I/O connection timeouts and all other I/O-related timeouts. That is, if the connection does not simply block for an unspecified amount of time, in which case you will have to create another thread which scans for dead connections.
Only one of the EJBs will be able to connect and send/receive information; the others will just wait in line. This way your system will not scale. No matter how many EJBs you have in your cluster, only one will actually do its work.
Apparently you have already run into problems by doing that :) . JBoss 8 (WildFly) seems unable to properly create and destroy the bean.
Now, I know your bean is a @Singleton, so your architecture does not rely on transactionality, clustering, or distributing the reads from that socket. So you might be OK with that.
However :D , you are asking for a Java EE compliant way of solving this. Here is what should be done:
Redesign your solution to use JMS. It 'smells' like you are trying to provide async messaging functionality (send a message & wait for a reply). You might be using a synchronous protocol to do async messaging. Just give it a thought. (A sketch of this direction follows after this list.)
Create a JCA compliant adapter, which will be injected into your EJB as a @Resource:
You will have a connection pool configurable at AS level (so you can have different values for different environments)
You will have transactionality and rollback. Of course the rollback behavior will have to be coded by you.
You can inject it via a @Resource annotation.
There are some adapters out there, some might fit like a glove, some might be a bit overdesigned.
Oracle JCA Adapter
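A minimal sketch of the JMS direction mentioned above (the destination name jms/incomingMessages is an assumption):
@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationLookup", propertyValue = "jms/incomingMessages"),
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue")
})
public class IncomingMessageListener implements MessageListener {

    @Override
    public void onMessage(Message message) {
        // handle one incoming message; the container manages threading and transactions
    }
}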
Does anyone have a good tutorial or some advice on how to implement one's own XAResource? I need Spring's MailSender to be transactional, so that the mail will only be sent once the transaction commits, and it seems there isn't any existing transactional wrapper.
If you just need to wait for the commit, as you say in a comment, you can investigate using TransactionSynchronizationManager.registerSynchronization() to trigger email sending on commit.
You can use TransactionSynchronizationManager.registerSynchronization() (as gpeche mentioned) with a TransactionSynchronizationAdapter, which has a variety of methods that are called at various stages of the current transaction. I think the most suitable method for the question is afterCommit:
TransactionSynchronizationManager.registerSynchronization(new TransactionSynchronizationAdapter() {
    @Override
    public void afterCommit() {
        super.afterCommit();
        sendEmail();
    }
});
I doubt that it's possible to implement a true XAResource for SMTP. There would have to be transaction support on the resource manager (the SMTP server in this case), and I don't believe there is any. I would say your best bet is the 'last resource commit' pattern, which allows one non-XA resource to participate in an XA transaction. Search Google, there is plenty of info. Most Java EE servers support this.
One other option, next to the one mentioned by gpeche, is sending a transactional JMS message from within the transaction, and then letting the message listener (e.g. an MDB) send the email.
Another trick in EJB is scheduling a timer from within a transaction. The timer is also transactional and will only be started when the transaction commits. Simply use a timer with timeout = 0, so it fires immediately after the transaction commits. A rough sketch:
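(MailInfo is a hypothetical Serializable holder for the mail details.)
@Stateless
public class TransactionalMailer {

    @Resource
    private TimerService timerService;

    public void sendOnCommit(String to, String body) {
        // timeout 0: the timer fires right after the surrounding transaction commits,
        // and is never started if the transaction rolls back
        timerService.createSingleActionTimer(0, new TimerConfig(new MailInfo(to, body), false));
    }

    @Timeout
    void onTimeout(Timer timer) {
        MailInfo info = (MailInfo) timer.getInfo();
        // actually send the email here (e.g. via JavaMail / Spring MailSender)
    }
}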