I have been reading through the Java tutorial on RMI. I like the approach that is outlined here for implementing a remote interface:
http://download.oracle.com/javase/tutorial/rmi/implementing.html
What I would like to know are 2 things:
1) With regard to the executeTask method outlined in the aforementioned link, how would this design allow Remote Objects (tasks) to access some sort of global state if the ComputeEngine is just calling the execute method of a Task?
2) Would this design be suitable for a multi-threaded environment?
Thanks indeed.
Ad. 1: Please note that the remote client does not know anything about the ComputeEngine class, only the Compute interface. Also, the server implementation might change completely, but as long as the interface does not change, the client shouldn't notice. If you want to pass some context to the task coming from the remote client, do it at the interface layer:
public class ComputeEngine implements Compute {

    private GlobalContext globalContext = //...

    public <T> T executeTask(Task<T> t) {
        return t.execute(globalContext);
    }
}
This way each task has access to the globalContext and knows exactly what to expect from it (what the server's capabilities and context are). GlobalContext would be a JavaBean or, more likely, some service interface.
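As a rough sketch, the interface layer might then look something like this (the Foo type is only a placeholder for whatever getFoo() returns; in the tutorial the Task interface takes no parameter):
// (each interface would live in its own .java file)
import java.io.Serializable;
import java.rmi.Remote;
import java.rmi.RemoteException;

// The tutorial's remote interface, unchanged
public interface Compute extends Remote {
    <T> T executeTask(Task<T> t) throws RemoteException;
}

// Task is extended so the server can hand it the context when it runs;
// tasks travel over RMI, so they must be serializable one way or another
public interface Task<T> extends Serializable {
    T execute(GlobalContext globalContext);
}

// Whatever server-side capabilities you want to expose to tasks
public interface GlobalContext {
    Foo getFoo(); // Foo is a placeholder for some server-side service
}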
On the client side it might look like this:
Compute compute = //obtain RMI client stub somehow
compute.executeTask(new Task<String>() {
    public String execute(GlobalContext globalContext) {
        //Note that this code is executed on the server and
        //getFoo() is implemented on the server side. We only know its interface
        globalContext.getFoo();
        //...
        return "some result"; //a Task<String> must return a String
    }
});
Ad. 2: It will work with multiple clients calling the service concurrently. However, it is up to you to implement the server in a thread-safe manner. The example from the tutorial you mentioned is thread-safe, but my code using GlobalContext might not be. Please notice that several clients will use the same instance of globalContext concurrently, which might, but does not have to, cause issues. That's probably the most interesting part.
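For instance, building on the hypothetical interfaces sketched above, a GlobalContext implementation that stays safe under concurrent tasks could look like this (DefaultGlobalContext and the counter are purely illustrative):
import java.util.concurrent.atomic.AtomicLong;

// Shared by every task executing on the server, so its state must be thread-safe.
public class DefaultGlobalContext implements GlobalContext {

    private final AtomicLong tasksServed = new AtomicLong();
    private final Foo foo = new Foo(); // placeholder service, assumed to be thread-safe itself

    @Override
    public Foo getFoo() {
        tasksServed.incrementAndGet(); // atomic update, no external locking needed
        return foo;
    }
}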
And finally, remember that receiving an unknown Task from a remote client and running it on the server is very impressive, but not quite safe.
I am trying to integrate QFJ into a single-threaded application. At first I was trying to utilize QFJ with my own TCP layer, but I haven't been able to work that out. Now I am just trying to integrate an initiator. Based on my research into QFJ, I would think the overall design should be as follows:
The application will no longer be single-threaded, since the QFJ initiator will create threads, so some synchronization is needed.
Here I am using a SocketInitiator (I only handle a single FIX session), but I would expect a similar setup should I go for the threaded version later on.
There are 2 aspects to the integration of the initiator into my application:
Receiving side (fromApp callback): I believe this is straightforward, I simply push messages to a thread-safe queue consumed by my MainProcessThread.
Sending side: I'm struggling to find documentation on this front. How should I handle synchronization? Is it safe to call Session.sendToTarget() from the MainProcessThread? Or is there some synchronization I need to put in place?
As Michael already said, it is perfectly safe to call Session.sendToTarget() from multiple threads, even concurrently. But as far as I see it you only utilize one thread anyway (MainProcessThread).
The relevant part of the Session class is in the sendRaw() method:
private boolean sendRaw(Message message, int num) {
    // sequence number must be locked until application
    // callback returns since it may be effectively rolled
    // back if the callback fails.
    state.lockSenderMsgSeqNum();
    try {
        // ... some logic here
    } finally {
        state.unlockSenderMsgSeqNum();
    }
}
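So from your MainProcessThread you can simply call Session.sendToTarget() directly. A minimal sketch (the stored SessionID and the queue handling are assumptions about your setup):
import quickfix.Message;
import quickfix.Session;
import quickfix.SessionID;
import quickfix.SessionNotFound;

public class MainProcessThread implements Runnable {

    private final SessionID sessionID; // e.g. remembered when the session logged on

    public MainProcessThread(SessionID sessionID) {
        this.sessionID = sessionID;
    }

    // Safe to call from this thread, or from any other thread
    void send(Message message) {
        try {
            boolean queued = Session.sendToTarget(message, sessionID);
            if (!queued) {
                // session not logged on yet; decide how you want to handle that
            }
        } catch (SessionNotFound e) {
            // no session registered under this SessionID
        }
    }

    @Override
    public void run() {
        // consume messages from your thread-safe queue and call send(...) as needed
    }
}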
Other points:
Here I am using a SocketInitiator (I only handle a single FIX session), but I would expect a similar setup should I go for the threaded version later on.
Will you always use only one Session? If yes, then there is no use in utilizing the ThreadedSocketInitiator, since all it does is create a thread per Session.
The application will no longer be single threaded, since the QFJ initiator will create threads
As already stated in Use own TCP layer implementation with QuickFIX/J, you could try passing an ExecutorFactory. But this might not be applicable to your specific use case.
I am working on a Java library which has a singleton class with two methods - createTask() and addPointsToTask().
The library is meant to be used in any Java service which executes multiple requests.
The service should be able to call createTask only once during its processing of a single request. Any further calls to createTask in the same thread execution should fail. addPointsToTask can be called any number of times.
As a library owner, how can I restrict this method to being called only once per thread?
I have explored ThreadLocal, but don't think it fits my purpose.
One solution is to ask the service that is using the library to set a unique id in a ThreadLocal, but as this 'set-to-thread-local' step lies outside the boundary of the library, it is not a fool-proof solution.
Any hints?
Short answer: you won't get a "fool-proof" solution; i.e. a solution that someone can't subvert.
Unless you are running your library on a JVM platform that you control, users of your library will be able to find a way to subvert your "only once per thread" restriction if they try hard enough. For example:
They could use reflection to access the private state of the objects or classes that implement the restriction.
They could use bytecode injection to subvert your code.
They could decompile and replace your code.
They could modify their JVM to do something funky with your code. (The OpenJDK source code is available to anyone.)
Ask yourself the following:
Is this restriction reasonable from the perspective of the programmer you are trying to restrict?
Would a sensible programmer have good reason to try to break it?
Have you considered possible use-cases for your library where it would be reasonable to call createTask() multiple times? For example, use-cases that involve using thread pools?
If you are doing this because you think allowing multiple createTask() calls will break your library, my advice would be:
Tell the programmer via the javadocs and other documentation what is likely to break if they do the thing that you are trying to prevent.
Implement a "soft" check, and provide an easy way for a programmer to disable the check. (But do the check by default, if you think that is appropriate.)
The point is that a sensible programmer won't knowingly subvert restrictions unless they have good reason to. If they do, and they hurt themselves, that is not your problem.
On the other hand, if you are implementing this restriction for "business reasons" or to stop "cheating" or something like that, my advice would be to recognize that a determined user will be able to subvert any restrictions you attempt to embed in your code when they run it on their platform. If this fundamentally breaks your model, look for a different model.
You will not be able to prohibit multiple calls from the same request, simply because your library has no concept of what a "request" actually is. This very much depends on the service using the library. Some services may use a single thread per request, but others may not. Using thread-locals is error-prone, especially when you are working in multi-threaded or reactive applications where code processing a request can execute on multiple parallel threads.
If your requirement is that addPointsToTask is only called for a task that was actually started by some code that is processing the current request, you could set up your API like that. E.g. createTask could return a context object that is required to call addPointsToTask later.
public TaskContext createTask() {
    // create the task and return a handle to it
}

public void addPointsToTask(TaskContext context, /* ... */) {
}
This way you can track task context even over multiple different threads executing code for the same request and points will not get added to a task created by another request.
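As a rough sketch of that shape (TaskLibrary, taskId and the points parameter are all just illustrative names):
import java.util.UUID;
import java.util.concurrent.atomic.AtomicLong;

public class TaskLibrary {

    // Opaque handle returned to the caller; only the library can create one.
    public static final class TaskContext {
        private final String taskId;
        private final AtomicLong points = new AtomicLong();

        private TaskContext(String taskId) {
            this.taskId = taskId;
        }
    }

    public TaskContext createTask() {
        return new TaskContext(UUID.randomUUID().toString());
    }

    public void addPointsToTask(TaskContext context, long points) {
        // thread-safe, so any thread working on the same request can add points
        context.points.addAndGet(points);
    }
}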
You could add a method to your singleton which runs some piece of service code in the context of a request.
Dummy implementation:
package stackoverflow;

import java.util.concurrent.Callable;

public enum YourLibrarySingleton {
    INSTANCE;

    // stand-in for the library's real Task type
    interface Task {}

    private final ThreadLocal<Task> threadLocalTask;

    YourLibrarySingleton() {
        this.threadLocalTask = new ThreadLocal<>();
    }

    public void createTask() {
        if (this.threadLocalTask.get() != null) {
            // any further call within the same request/thread execution fails
            throw new IllegalStateException("createTask() already called for this request");
        }
        this.threadLocalTask.set(new Task() {});
    }

    public void addPointsToTask() {
        Task task = this.threadLocalTask.get();
        // add points to that task
    }

    public <T> T handleRequest(Callable<T> callable) throws Exception {
        try {
            return callable.call();
        } finally {
            this.threadLocalTask.remove();
        }
    }
}
Which could be used like this:
package stackoverflow;

public class ServiceCode {

    public void handleRequest() throws Exception {
        YourLibrarySingleton.INSTANCE.handleRequest(() -> {
            YourLibrarySingleton.INSTANCE.createTask();
            YourLibrarySingleton.INSTANCE.addPointsToTask();
            YourLibrarySingleton.INSTANCE.addPointsToTask();
            return "result";
        });
    }
}
I intend to test a Server class to see how it handles concurrent reads and writes using direct calls to the server class, nothing more fancy. I have a Server API that has two functions.
int fetch(int key);
void push(int key, int value);
How do I create multiple clients making calls to the server? Do I just start multiple threads of a Client class implementing Runnable that call the functions using a static server variable within run()?
Yes, exactly, you should have multiple clients running at the same time on different threads, and they should call the same server object.
Note that with this kind of testing there is no guarantee that you will find all the bugs. You should still reason about the thread safety of your code. Possibly you could also use more sophisticated concurrency testing frameworks like MultithreadedTC.
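For illustration, a minimal sketch of such a test, assuming your Server class exposes fetch/push exactly as described:
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ServerConcurrencyTest {

    // the class under test, shared by all client threads
    static final Server server = new Server();

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(8);
        for (int c = 0; c < 8; c++) {
            final int clientId = c;
            pool.submit(() -> {
                for (int i = 0; i < 10_000; i++) {
                    server.push(clientId, i); // concurrent writes
                    server.fetch(clientId);   // concurrent reads
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        // assert here whatever invariants you expect to hold after the run
    }
}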
I am trying to implement a Twitter-like service with a client, using Java. I am using Apache Thrift for RPC calls. The service uses a key-value store. I am trying to make the service fault-tolerant, with consistency and data replication in the key-value store.
For example: suppose at some point there are 10 servers running, with ids S1, S2, S3, etc., and one client calls put(key,value) on S1. Now S1 saves this value and calls an RPC put(key,value) on all the remaining servers for data replication. I want the server method to save, return success to the client, and also start a thread with async calls on the remaining 9 servers so that the client is not blocked during replication.
The auto-generated code has Iface and AsyncIface, and I have currently implemented the Iface in a ServerHandler class.
My goal is to expose a backend server to the client and have normal (blocking) calls between a client and a server and async calls between servers. There will be multiple client-server pairs running at a time.
I understand, the data-replication model is crude but I am trying to learn distributed systems.
Can someone please help me with an example of how I can achieve this?
Also, if you think my design is flawed and there are better ways in which I can achieve data replication using Apache Thrift, please do point that out.
Thank you.
A oneway method is asynchronous; any other method not marked with oneway is synchronous.
exception OhMyGosh {
    1: string msg
}

service TwelfthNightOrWhatYouWill {
    // A oneway method is a "one shot" method. The server may execute
    // it asynchronously, depending on the server implementation.
    // Oneways can be very useful when used with messaging systems.
    // A oneway does NOT return anything, including exceptions.
    oneway void ImAsync(1: i32 foo, 2: string bar, 3: double baz)

    // Any method not marked with oneway is synchronous. Even if the call does
    // not return anything, it will still be a blocking call for the client.
    void ImSynchronous(1: i32 foo, 2: string bar) throws (1: OhMyGosh omg)
    i32 ImAsWell(1: double baz) throws (1: OhMyGosh omg)
    void MeToo()
}
Whether or not the server executes the oneway asynchronously with regard to the connection depends on what server implementation you use. A threaded or thread-pool server seems a good choice.
After the client has sent its oneway request, it will not wait for a reply from the server and will just continue in its execution flow. Technically, for a oneway no recv_Xxxx() function is generated, only the send_Xxxx() part.
If you need data sent back to the client, the best option is to set up a server in the client process as well, which seems the optimal choice for your particular use case. In cases where this is not possible (think HTTP), the typical workarounds are polling or long-running calls, although both techniques come with some disadvantages.
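For the replication design from your question, a rough server-side sketch could look like the following. The ServerHandler name matches your question, but the PeerClientFactory and KeyValueClient interfaces and the in-memory store are assumptions, and a real setup would reuse connections and handle peer failures more carefully:
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical handler for a Thrift service exposing put(key, value).
public class ServerHandler /* implements KeyValueStore.Iface */ {

    private final Map<String, String> store = new ConcurrentHashMap<>();
    private final List<PeerClientFactory> peers; // the other 9 servers
    private final ExecutorService replicator = Executors.newFixedThreadPool(4);

    public ServerHandler(List<PeerClientFactory> peers) {
        this.peers = peers;
    }

    public void put(String key, String value) {
        store.put(key, value);                 // save locally first
        for (PeerClientFactory peer : peers) { // then replicate off-thread
            replicator.submit(() -> {
                try {
                    // blocking RPC, but not on the original client's thread
                    peer.client().put(key, value);
                } catch (Exception e) {
                    // log, retry, or mark the peer as down
                }
            });
        }
        // return to the caller immediately; replication continues in the background
    }

    // Hypothetical factory that hands out a connected Thrift client for one peer.
    public interface PeerClientFactory {
        KeyValueClient client() throws Exception;
    }

    // Hypothetical minimal client view matching the service's put method.
    public interface KeyValueClient {
        void put(String key, String value) throws Exception;
    }
}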
With apologies to W. Shakespeare
I have two classes in short here they are:
public final class ServerMain
{
    static List<Table> s_AvailableGameTables = new Vector<Table>();
    static List<Table> s_OccupiedGameTables = new Vector<Table>();
    static List<ServerThread> s_PlayersOnServer = new Vector<ServerThread>();
    ...
}

class ServerThread extends Thread {
    ...
}
ServerMain is the server itself, and it manages the ServerThreads by allocating a new ServerThread for each user who has just connected to the ServerMain.
My questions are simple:
When I'm currently running in a specific ServerThread and I want to access some static lists on the ServerMain and update them, how can I do that if I've already "left" the area of the ServerMain while being in the specific thread which runs in the background?
Is the only way to hold a reference from each ServerThread to the papa ServerMain?
Maybe it can cause some problems, since two areas of the code could update the same list at the same time: the ServerMain itself and the ServerThread, which now knows who the big boss around here is?
General question: does sockets programming mean UDP or TCP?
I'd like to hear some good advice. Thanks in advance.
For #1, you wouldn't need an instance to access static members of ServerMain; assuming they are accessible (e.g. they are public), you can access them as ServerMain.MyVar.
For #2, yes, you would need to look into using the synchronized statement to prevent multiple threads from writing to the list at the same time, or use a thread-safe list implementation.
For #3, the term 'sockets programming' can refer to either UDP or TCP. Which kind of socket you use will depend on what kind of app you are implementing.
1) That is one of the possibilities. In general, when you need to access another object's methods, the best way is to keep a reference (directly or indirectly). Yet, since you'll presumably only have one ServerMain object, you could try declaring static methods or using the singleton construction (private constructor; you can only access a getInstance() static method that returns a shared object); a short sketch of this follows after point 3.
2) Synchronization of access between threads is a lengthy subject and many books have been written about it. The simplest way is to use a synchronized method (or block) and do all race-sensitive operations inside. But be aware that these synchronized blocks will probably later become your main bottleneck. When you have more practice, study Java's other synchronization mechanisms.
3) As others have stated, you just open a socket that listens to a protocol on a given port number. You can decide whether you want it to be UDP or TCP. Of course, keep in mind that with UDP the message that you receive may not be complete, so your code will have to deal with that.
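A minimal sketch of that singleton construction (purely illustrative):
public final class ServerMain {

    // the single shared instance, created once when the class is loaded
    private static final ServerMain INSTANCE = new ServerMain();

    private ServerMain() {
        // private constructor: nobody else can instantiate the class
    }

    public static ServerMain getInstance() {
        return INSTANCE;
    }
}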
No, you can reference it like 'normal', in the sense that there are no syntactic changes for referencing things from a different thread rather than a different object. Now, you may need to synchronize access to it, but I don't really see that as changing how you reference things.
Synchronize the list (preferably use the java.util.concurrent package). Make sure that the Tables themselves are thread-safe as well.
Neither; a socket uses a transport protocol, but it could be UDP, TCP, or whatever else. To clarify, you can't determine what transport protocol is being used just by saying you're using a socket; you'd have to specify which protocol you're actually using.
You can access it as normal if you use a synchronized list (i.e., Vector, one of the lists from the java.util.concurrent package, or, if it's a better fit, Collections.synchronizedList(list)).
Vector is already 'synchronized', but be aware that you have to synchronize compound transactions manually (i.e., synchronized (vector) { vector.add(..); vector.remove(..); }). The synchronisation it employs by default only makes each individual method call atomic; multi-step transactions are protected only if you wrap them in a synchronized (vector) block like the one above, because the list methods lock on the vector itself. I'd advise using Collections.synchronizedList(list) instead of Vector, although they both do the same job, really.
ServerSocket / Socket is TCP, DatagramSocket is UDP.
1) That would be a way and probably a preferred way, though not necessarily the only way.
Also, the reference does not need to be owned by the ServerThread. Since ServerMain is likely a singleton, I have found in situations like this that it makes sense to give that Singleton a static variable which references itself. Then, from any ServerThread, you could do
class ServerThread extends Thread
{
    public void someMethod()
    {
        ServerMain.serverMain.whatever();
    }
}
2) Yes, that will cause problems. Read the Java Concurrency trail, specifically the parts about synchronization. This is a topic too broad to cover easily in an answer here. But for a quick and dirty answer, check out synchronized methods. If you just make the method that handles this list access synchronized, then it will be safe; a small sketch follows after point 3. Of course, depending on your needs, locking the list access might take away any performance gain from your threads.
3) It doesn't necessarily have to, but it generally does. "TCP socket" is even more likely than "UDP socket", but both work. If you want a dedicated and reliable connection for an entire, prolonged transaction, you should probably use TCP. TCP makes guarantees that data was received and that it was received in a certain order.
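As referenced in point 2, a small sketch of synchronized accessors on ServerMain, using the list fields from your question (occupyTable/releaseTable are illustrative names):
import java.util.List;
import java.util.Vector;

public final class ServerMain {

    static final List<Table> s_AvailableGameTables = new Vector<Table>();
    static final List<Table> s_OccupiedGameTables = new Vector<Table>();

    // Called from ServerMain or from any ServerThread; synchronized so the
    // compound "remove from one list, add to the other" step is atomic.
    public static synchronized void occupyTable(Table table) {
        s_AvailableGameTables.remove(table);
        s_OccupiedGameTables.add(table);
    }

    public static synchronized void releaseTable(Table table) {
        s_OccupiedGameTables.remove(table);
        s_AvailableGameTables.add(table);
    }
}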