For my code it is necessary to make multiple asynchronous requests from the same client to a Thrift service.
So I am using a non-blocking server and asynchronous clients (see the code below) to allow asynchronous calls, which means the execution of the code continues after the first call of the checkForPrime() method, which I call on the Thrift service.
This works as long as I only execute one call. If I make a second asynchronous call right after the first, I get the following error message:
Client is currently executing another method: Interfaces.PrimeCheck$AsyncClient$checkForPrime_call
    at org.apache.thrift.async.TAsyncClient.checkReady(TAsyncClient.java:78)
    at Interfaces.PrimeCheck$AsyncClient.checkForPrime(PrimeCheck.java:110)
    at ThriftClient.main(ThriftClient.java:40)
I need a smart solution to allow multiple calls, but they have to come from the same client. Any suggestions are welcome. Please don't hesitate to ask if you need further information.
org.apache.thrift.protocol.TBinaryProtocol.Factory factory = new TBinaryProtocol.Factory();
TAsyncClientManager manager;
TNonblockingSocket socket;
AsyncClient client;
try {
    manager = new TAsyncClientManager();
    socket = new TNonblockingSocket("localhost", 4711);
    client = new AsyncClient(factory, manager, socket);

    client.checkForPrime(5, resultHandler);
    client.checkForPrime(7, resultHandler);

    Thread.sleep(100);
} catch (IOException e2) ....
to allow asynchronous Calls, which means the execution of the code continues after the first call of the "checkForPrime()" Method,
Not quite. Asynchronous only means that the call completes asynchronously and you don't have to wait for the completion until necessary.
It does not imply that you can use the same client to make another parallel request. Some implementations may support this, but the current one does not.
Multiple outstanding calls require some bookkeeping, otherwise you will get lost with the responses:
call 1 made --->
call 2 made --->
response arrives <----
response arrives <----
Now, which call does the first response belong to: call 1 or call 2? Hard to say, it could be either one. Without more information, a multi-call client would have a hard time correlating the data.
The TAsyncClientManager handles that by restricting each client to only one pending call at a time.
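To see what that bookkeeping would involve, here is a minimal plain-Java sketch (independent of Thrift; all names are invented): each outgoing call is registered under a fresh correlation id, and incoming responses are matched back to their pending futures by that id.

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch: correlation-id bookkeeping that would let one
// client keep several calls in flight at once.
public class CorrelatingClient {
    private final AtomicLong nextId = new AtomicLong();
    private final Map<Long, CompletableFuture<String>> pending = new ConcurrentHashMap<>();

    // Caller side: register the call under a fresh id before sending.
    public CompletableFuture<String> call(String request) {
        long id = nextId.incrementAndGet();
        CompletableFuture<String> future = new CompletableFuture<>();
        pending.put(id, future);
        // ... send (id, request) over the wire here ...
        return future;
    }

    // Transport side: a response carries the id of the call it answers.
    public void onResponse(long id, String response) {
        CompletableFuture<String> future = pending.remove(id);
        if (future != null) {
            future.complete(response);
        }
    }
}
```

TAsyncClient sidesteps all of this by allowing only one pending call per client instance - checkReady() is precisely the guard that throws the exception shown in the question.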
it is necessary to call multiple asynchronous Requests from the same client
Why do you think it is necessary?
The client is only a mediator, a means of transport. If you send two emails, do you require that both emails follow the exact same path across the interwebs? No, because the relevant information the other side (the server) should rely on is in the message content, not at the transport level.
If, however, you need to store data at the client, you should store it in a dedicated place outside of the client instance. Either way, the fact that we deal with one or two client instances should not really matter.
Related
I have a service (ServiceA) with an endpoint to which clients can subscribe; after subscription, this service produces data continuously using server-sent events.
If this is important, I am using Project Reactor with Java.
It may be important, so I'll explain what this endpoint does. Every 15 seconds it fetches data from another service (ServiceB) and checks whether anything changed compared to the data it fetched 15 seconds ago. If so, it produces a new event with this data; if not, it sends nothing (so the payload to the client stays as small as possible).
Now, this application can have multiple clients connected at once and they all ask for the same data - it is not filtered by the user etc.
Is it sensible that this observable producing the output is shared between multiple clients?
Of course it would save us a lot of unnecessary calls to ServiceB, but I wonder if there are any contraindications to this approach. It is the first time I am writing a reactive program on the backend (coming from RxJS), and I don't know if this would cause concurrency problems or any other sort of problems.
The other benefit I can see is that a new client connecting would immediately be served the last received data from the ServiceB (it usually takes about 4s per call to retrieve this data).
I also wonder if it would be possible for this observable to call ServiceB only while there are subscribers - i.e. as long as there is at least one subscriber, keep calling the service; if there are none, stop calling it; when a new subscriber subscribes, start calling it again, but first serve that client the last fetched data (no matter how old or stale it may be).
Your SSE source can perfectly well be shared using the following pattern:
source.publish().refCount();
Note that you need to store the return value of that call and return that same instance to subsequent callers in order for the sharing to occur.
Once all subscribers unsubscribe, refCount will also cancel its subscription to the original source. After that the first subscriber to come in will trigger a new subscription to the source, which you should craft so that it fetches the latest data and re-initializes a polling cycle every 15s.
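As a concrete illustration, here is a minimal sketch of that setup (assuming reactor-core on the classpath; the class name and the ServiceB stub are invented). Instead of publish().refCount() it uses the closely related replay(1).refCount(), which additionally hands the most recent element to each late subscriber - covering the "serve the last fetched data immediately" wish. The shared instance is stored in a field, as noted above:

```java
import java.time.Duration;
import reactor.core.publisher.Flux;

public class SharedPoller {

    // Stand-in for the ~4 s call to ServiceB.
    static String fetchFromServiceB() {
        return "payload";
    }

    // One shared, hot source handed to every SSE subscriber:
    // - polls only while at least one subscriber exists (refCount)
    // - emits only when the data actually changed (distinctUntilChanged)
    // - replays the last value to each new subscriber immediately (replay(1))
    public static final Flux<String> SHARED_EVENTS =
            Flux.interval(Duration.ZERO, Duration.ofSeconds(15))
                .map(tick -> fetchFromServiceB())
                .distinctUntilChanged()
                .replay(1)
                .refCount();
}
```

Serving SHARED_EVENTS from the controller means every connected client shares one polling cycle against ServiceB.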
The method below runs on the main thread in the 'Controller' class. It sends a request packet to the server to get the device list.
public List<Device> getDeviceList() {
    networkServer.sendMsg(deviceListReqPacket);
    // wait till response returns. ???
}
This method runs on another thread, in the 'Server' class, which reads data from the server.
private void readDeviceList() {
    // read packet from socket
    List<nwkDeviceInfo_t> listDevice = networkServerDriver.getDeviceLists(packet);
}
What can I do to make the getDeviceList() method wait until the readDeviceList() method has constructed listDevice, and then get the listDevice object? I'm a little bit confused. Am I trying something that isn't possible, or am I going about it completely the wrong way?
If your instances (of the above classes) run on the same JVM but in different threads, use one of the blocking queues that come with Java.
If they communicate over a network (e.g. HTTP and such), the read will block until the write is done (and indeed received on the other side). So in that case, you already have the behaviour you want.
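For the same-JVM case, the hand-off can be sketched like this (all names invented; Device is replaced by String to keep it self-contained): the controller thread blocks on poll() with a timeout, while the socket-reading thread delivers the parsed list with put().

```java
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class DeviceListExchange {
    // Capacity 1: one outstanding device-list request at a time.
    private final BlockingQueue<List<String>> responses = new ArrayBlockingQueue<>(1);

    // Controller side: send the request, then block until the reader
    // thread delivers the parsed list (or give up after the timeout).
    public List<String> getDeviceList() throws InterruptedException {
        // networkServer.sendMsg(deviceListReqPacket);  // as in the question
        List<String> list = responses.poll(5, TimeUnit.SECONDS);
        if (list == null) {
            throw new IllegalStateException("timed out waiting for device list");
        }
        return list;
    }

    // Server-reader side: called from the socket-reading thread
    // once the packet has been parsed.
    public void deliverDeviceList(List<String> listDevice) throws InterruptedException {
        responses.put(listDevice);
    }
}
```

The timeout matters: without it, a lost response packet would block the main thread forever.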
I am trying to implement a Twitter-like service with a client, using Java. I am using Apache Thrift for RPC calls. The service uses a key-value store. I am trying to make the service fault-tolerant, with consistency and data replication in the key-value store.
For example: suppose at some point there are 10 servers running, with ids S1, S2, S3, etc., and one client calls put(key,value) on S1. S1 saves this value and calls the RPC put(key,value) on all the remaining servers for data replication. I want the server method to save the value and return success to the client, and also to start a thread with async calls on the remaining 9 servers, so that the client is not blocked during replication.
The auto generated code has Iface and AsyncIface and I have currently implemented the Iface in a ServerHandler class.
My goal is to expose a backend server to the client and have normal (blocking) calls between a client and a server and async calls between servers. There will be multiple client-server pairs running at a time.
I understand, the data-replication model is crude but I am trying to learn distributed systems.
Can someone please help me with an example of how I can achieve this?
Also, if you think my design is flawed and there are better ways in which I can achieve data replication using Apache Thrift, please do point that out.
Thank You.
A oneway method is asynchronous; any method not marked with oneway is synchronous.
exception OhMyGosh {
    1: string msg
}

service TwelfthNightOrWhatYouWill {

    // A oneway method is a "one shot" method. The server may execute
    // it asynchronously, depending on the server implementation.
    // Oneways can be very useful when used with messaging systems.
    // A oneway does NOT return anything, including exceptions.
    oneway void ImAsync(1: i32 foo, 2: string bar, 3: double baz)

    // Any method not marked with oneway is synchronous. Even if the call
    // does not return anything, it will still be a blocking call for the client.
    void ImSynchronous(1: i32 foo, 2: string bar) throws (1: OhMyGosh omg)
    i32 ImAsWell(1: double baz) throws (1: OhMyGosh omg)
    void MeToo()
}
Whether or not the server executes the oneway asynchronously with regard to the connection depends on which server implementation you use. A threaded or thread-pool server seems a good choice.
After the client has sent its oneway request, it will not wait for a reply from the server and will just continue in its execution flow. Technically, no recv_Xxxx() function is generated for a oneway, only the send_Xxxx() part.
If you need data sent back to the client, the best option is to set up a server in the client process as well, which seems the optimal choice in your particular use case to me. In cases where this is not possible (think HTTP) the typical workarounds are polling or long-running calls, however both techniques come with some disadvantages.
With apologies to W. Shakespeare
For my game, I have it running on two servers (one for the game, one for the login system). They both need to interact with each other, and sometimes, ask questions about the state of something else in the other server.
For this example, the game server will be asking the login server if a player is trying to log in:
public boolean isLoggingIn(int accountId) {
    // Form a packet to send.
    int retVal = sendData();
    return retVal > 0;
}
Obviously I'd use an int so information other than booleans can be returned.
My question is, how do I get this modal-style programming working? It would work just like JFileChooser's showOpenDialog() method.
Also, I should mention that more than one thread can call this method at once.
I assume by modal, you mean trying to block all actions except one. I strongly suspect that this style will lead you into trouble. Modal interaction is a form of locking and therefore not very tolerant to hangups and disconnects and such. To make it tolerant, you need timeouts and cleanup code for cases when someone entered a mode and then nothing further happened. (i.e they closed their laptop, or the game crashed, they unplugged the network cable etc).
If I were you I would instead try to think of things in terms of authentication and authorization.
The quick answer - you need to expose methods on both servers as RMI-capable, and simply invoke methods like you described.
You might find it useful to review the official Oracle RMI tutorial: http://docs.oracle.com/javase/tutorial/rmi/index.html
Although your design might be wrong - it's your design, and why not let you shoot yourself in the foot? ;)
Also, it's worth looking at Spring Security: http://static.springsource.org/spring-security/site/
If you use something like this on a thread that is supposed to handle other requests, it will hang all of those requests while it blocks waiting for a return value, especially if the latency between the game and login servers is high. What you certainly want instead is a callback, so that your thread can handle other requests while it waits for a response.
I see no reason to halt execution of a thread until a value is received. If you need the value for an operation afterwards, just move the code that follows the call you want to be "modal" into the callback. If you expect to send multiple requests while still waiting for a response, send a unique "responseId" from the requester's side that the responder includes in its response. Use the "responseId" as the key of a Map whose values are Runnables. When you receive a response, call remove on the Map with the responseId key and call run() on the Runnable value that is returned. MINA is supposed to be asynchronous, so it should not block waiting for a response packet.
If you have a really good reason to handle it all on the same thread, look into the java.util.concurrent package. I would implement it using a CountDownLatch with a count of 1: call await() after sending the request message, and call countDown() when MINA receives a response. You have to use an AtomicReference or an array of length 1 to hold the value received in the response, so that it can be read back in the waiting thread.
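That latch-based variant might look like this (a sketch; the class and method names are invented, and the MINA wiring is reduced to a plain method call):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

// One instance per outstanding request.
public class BlockingLoginCheck {
    private final CountDownLatch done = new CountDownLatch(1);
    private final AtomicReference<Integer> result = new AtomicReference<>();

    // Called by the receive callback when the login server's reply arrives.
    public void onResponse(int retVal) {
        result.set(retVal);
        done.countDown();
    }

    // Called by the requesting thread after sending the packet; blocks
    // until onResponse() fires or the timeout expires.
    public boolean awaitIsLoggingIn(long timeout, TimeUnit unit) throws InterruptedException {
        if (!done.await(timeout, unit)) {
            throw new IllegalStateException("no reply from login server");
        }
        return result.get() > 0;
    }
}
```

Note that the order does not matter: if the response arrives before await() is called, the latch is already open and await() returns immediately.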
PS, you still doing MapleStory work?
My system consists of a "proxy" class that receives "request" packets, marshals them and sends them over the network to a server, which unmarshals them, processes, and returns some "response packet".
My "submit" method on the proxy side should block until a reply to the request is received (packets have ids for identification and referencing purposes) or until a timeout is reached.
If I was building this in early versions of Java, I would likely implement in my proxy a collection of "pending messages ids", where I would submit a message, and wait() on the corresponding id (with a timeout). When a reply was received, the handling thread would notify() on the corresponding id.
Is there a better way to achieve this using an existing library class, perhaps in java.util.concurrency?
If I went with the solution described above, what is the correct way to deal with the potential race condition where a reply arrives before wait() is invoked?
The simple way would be to have a Callable that talks to the server and returns the Response.
// does not block
Future<Response> response = executorService.submit(makeCallable(request));
// wait for the result (blocks)
Response r = response.get();
Managing the request queue, assigning threads to the requests, and notifying the client code is all hidden away by the utility classes.
The level of concurrency is controlled by the executor service.
Every network call blocks one thread in there.
For better concurrency, one could look into using java.nio as well (but since you are talking to same server for all requests, a fixed number of concurrent connections, maybe even just one, seems to be sufficient).
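Filled in, the sketch above might look like this (all names invented; the network round trip is faked with a short sleep). It also answers the race-condition question: if the reply arrives before get() is called, the Future is simply already completed and get() returns immediately - there is no missed-notify hazard as with plain wait()/notify():

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class Proxy {
    // Level of concurrency = number of simultaneous outstanding requests.
    private final ExecutorService executor = Executors.newFixedThreadPool(4);

    // Stand-in for marshalling the request, sending it, and blocking on
    // the socket until the matching reply arrives.
    private static String talkToServer(String request) throws Exception {
        Thread.sleep(10); // pretend network round trip
        return "reply-to-" + request;
    }

    private Callable<String> makeCallable(String request) {
        return () -> talkToServer(request);
    }

    // Does not block; the returned Future blocks in get(),
    // optionally with a timeout.
    public Future<String> submit(String request) {
        return executor.submit(makeCallable(request));
    }

    public void shutdown() {
        executor.shutdown();
    }
}
```

A caller would use it as Future<String> f = proxy.submit("ping"); followed by f.get(2, TimeUnit.SECONDS) for the blocking wait with a timeout.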