I am creating a distributed service, and I am looking at restricting a set of time-consuming operations to a single thread of execution across all JVMs at any given time. (I will have to deal with 3 JVMs at most.)
My initial investigation points me towards java.util.concurrent.Executors and java.util.concurrent.Semaphore. However, using a singleton pattern with Executors or a Semaphore does not guarantee a single thread of execution across multiple JVMs.
I am looking for a core Java API (or at least a pattern) that I can use to accomplish my task.
P.S.: I have access to ActiveMQ within my existing project, which I was planning to use to achieve a single thread of execution across multiple JVMs, but only if I have no other choice.
There is no simple solution for this with a core Java API. If the 3 JVMs have access to a shared file system, you could use it to track state across JVMs.
Basically, you create a lock file when you start the expensive operation and delete it at the conclusion, and you have each JVM check for the existence of this lock file before starting the operation. However, there are issues with this approach: for example, if a JVM dies in the middle of the expensive operation, the file is never deleted and the other JVMs stay locked out.
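A variation that mitigates the stale-lock problem is java.nio.channels.FileLock: the lock is held by the OS and released when the process dies, even abnormally (though its behaviour on network file systems is platform-dependent). A minimal sketch, with the file path and the operation as placeholders:

    import java.io.RandomAccessFile;
    import java.nio.channels.FileChannel;
    import java.nio.channels.FileLock;

    public class CrossJvmLock {
        public static void main(String[] args) throws Exception {
            // All JVMs must point at the same file on the shared file system.
            try (RandomAccessFile raf = new RandomAccessFile("/shared/expensive-op.lock", "rw");
                 FileChannel channel = raf.getChannel()) {
                FileLock lock = channel.tryLock(); // non-blocking; null if another JVM holds it
                if (lock == null) {
                    System.out.println("Another JVM is running the operation; skipping.");
                    return;
                }
                try {
                    runExpensiveOperation(); // placeholder for the actual work
                } finally {
                    lock.release(); // the OS also releases it if this JVM crashes
                }
            }
        }

        private static void runExpensiveOperation() { /* ... */ }
    }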
ZooKeeper is a nice solution for problems like this and any other cross-process synchronization issue. Check it out if that is a possibility for you. I think it's a much more natural way to solve a problem like this than a JMS queue.
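If ZooKeeper is an option, the Apache Curator recipes library gives you a ready-made distributed lock; because it is backed by an ephemeral znode, the lock is released automatically if the holding JVM crashes. A minimal sketch, assuming a Curator dependency (the connection string and lock path are placeholders):

    import org.apache.curator.framework.CuratorFramework;
    import org.apache.curator.framework.CuratorFrameworkFactory;
    import org.apache.curator.framework.recipes.locks.InterProcessMutex;
    import org.apache.curator.retry.ExponentialBackoffRetry;

    public class ZkSingleRunner {
        public static void main(String[] args) throws Exception {
            CuratorFramework client = CuratorFrameworkFactory.newClient(
                    "zkhost:2181", new ExponentialBackoffRetry(1000, 3));
            client.start();

            InterProcessMutex mutex = new InterProcessMutex(client, "/locks/expensive-op");
            mutex.acquire(); // blocks until no other JVM holds the lock
            try {
                runExpensiveOperation(); // placeholder for the actual work
            } finally {
                mutex.release();
            }
            client.close();
        }

        private static void runExpensiveOperation() { /* ... */ }
    }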
Related
I have a program which spins up thousands of threads. I am currently using one host for all the threads, which takes a lot of time. If I want to use multiple hosts (say 10 hosts, each running 100 different threads), how should I proceed?
Having thousands of threads on a single JVM sounds like a bad idea - you may spend most of the time context-switching instead of doing the actual work.
To split your work across multiple hosts, you cannot use threads managed by a single JVM. You'll need each host to expose an API that can receive a part of the work and return the result of the work done.
One approach would be to use Java RMI (Remote Method Invocation) for this, but really, your question lacks many of the details that are important for deciding what architecture to choose.
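As a rough illustration of the RMI route (the interface and class names here are invented for the example), each worker host exports a remote interface and the coordinator sends each one a slice of the work:

    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import java.rmi.registry.LocateRegistry;
    import java.rmi.registry.Registry;
    import java.rmi.server.UnicastRemoteObject;
    import java.util.List;

    // Hypothetical remote API that each worker host exposes.
    interface Worker extends Remote {
        List<String> process(List<String> chunkOfWork) throws RemoteException;
    }

    // Runs on each of the 10 worker hosts.
    class WorkerServer implements Worker {
        @Override
        public List<String> process(List<String> chunkOfWork) {
            // Fan the chunk out over a local thread pool here.
            return chunkOfWork; // placeholder: return the results of the actual work
        }

        public static void main(String[] args) throws Exception {
            Worker stub = (Worker) UnicastRemoteObject.exportObject(new WorkerServer(), 0);
            Registry registry = LocateRegistry.createRegistry(1099);
            registry.rebind("worker", stub);
            System.out.println("Worker ready");
        }
    }

    // Coordinator side:
    //   Registry reg = LocateRegistry.getRegistry("host-1", 1099);
    //   Worker w = (Worker) reg.lookup("worker");
    //   List<String> results = w.process(slice);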
Creating thousands of threads in one JVM is very bad design, and you need to minimise the count.
A high thread count will not give you any multi-threading benefit, as context switching will be very frequent and will hurt performance.
If you are thinking of dividing the work across multiple hosts, then you need a parallel processing system like Hadoop or Spark.
These handle task allocation internally and act as a central system for syncing all the hosts on which the threads/tasks are running.
Can we run multiple processes in one JVM, where each process has its own memory quota?
My aim is to start a new process when a new HTTP request comes in and assign separate memory to that process, so that each user request has its own memory quota - and doesn't disturb other user requests if one request's quota is exhausted.
How can I achieve this?
Not sure if this is hypothetical.
Short answer: not really.
The Java platform offers you two options:
Threads. And that is the typical answer in many cases: each new incoming request is dealt with by a separate thread (which probably comes out of a pool, to limit the overall number of thread instances that get created/used in parallel). But of course: threads exist in the same process; there is no such thing as controlling the memory consumption "associated with" what a thread is doing.
Child processes. You can create a real OS process and use that to run whatever you intend to run. But of course: then you have a real external process to deal with.
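If you do go the child-process route, you can at least cap each child's heap with -Xmx, which approximates a per-request memory quota: a child that exhausts its heap dies with an OutOfMemoryError without affecting the others. A sketch, where the jar and main class are hypothetical:

    public class RequestLauncher {
        // Spawn a separate JVM per request, each with its own 64 MB heap quota.
        static Process launch(String requestId) throws Exception {
            ProcessBuilder pb = new ProcessBuilder(
                    "java", "-Xmx64m", "-cp", "app.jar",
                    "com.example.RequestHandler", // hypothetical main class
                    requestId);
            pb.inheritIO(); // share stdout/stderr with the parent
            return pb.start();
        }
    }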
So, in essence, the real answer is: no, you can't apply this idea to Java. The more Java-like solution would be to look into concepts such as application servers, for example Tomcat or WebSphere.
Or, if you insist on doing things manually, you could build your own "load balancer": one client-facing JVM that simply forwards requests to one of many other JVMs. Those other JVMs would work independently, each running in its own process, which you could then micro-manage regarding CPU/memory usage.
The closest concept is Application Isolation API (JSR-121) that AFAIK has not been implemented: See https://en.wikipedia.org/wiki/Application_Isolation_API.
"The Application Isolation API (JSR 121) provides a specification for isolating and controlling Java application life cycles within a single Java Virtual Machine (JVM) or between multiple JVMs. An isolated computation is described as an Isolate that can communicate and exchange resource handles (e.g. open files) with other Isolates through a messaging facility."
See also https://www.flux.utah.edu/janos/jsr121-internal-review/java/lang/isolate/package-summary.html:
"Informally, isolates are a construct midway between threads and JVMs. Like threads, they can be used to initiate concurrent execution. Like JVMs, they cause execution of a "main" method of a given class to proceed within its own system-level context, independently of any other Java programs that may be running. Thus, isolates differ from threads by guaranteeing lack of interference due to sharing statics or per-application run-time objects (such as the AWT thread and shutdown hooks), and they differ from JVMs by providing an API to create, start, terminate, monitor, and communicate with these independent activities."
Using Java I/O, it seems that forking a new process gives process B a better ability to read data written to a file by process A than what you get if thread A writes to a file that thread B is trying to read (within the same process).
It seems the rules are not comparable to the memory model. So how does file-based concurrency work? References would be appreciated.
Any observations like this are bound to be operating-system specific, and may be specific to different versions of the operating system (kernel). What you are hitting here is probably related to the way the OS implements threads and thread scheduling. The Java platform provides little in the way of tuning for this kind of thing.
IMO, if you need better performance, you probably should not be using a file as a data transfer channel between two threads in the same JVM. Code your application to detect that the threads are colocated in the same JVM and use (say) Java piped streams instead.
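For example, a pair of piped streams moves bytes between two threads entirely in memory, with no OS file involved - a minimal sketch:

    import java.io.PipedInputStream;
    import java.io.PipedOutputStream;

    public class PipeDemo {
        public static void main(String[] args) throws Exception {
            PipedOutputStream out = new PipedOutputStream();
            PipedInputStream in = new PipedInputStream(out); // connect the two ends

            // Thread A writes into the pipe.
            Thread writer = new Thread(() -> {
                try {
                    out.write("hello from thread A".getBytes());
                    out.close(); // signals end-of-stream to the reader
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
            writer.start();

            // Thread B (here: the main thread) reads what thread A wrote.
            int b;
            while ((b = in.read()) != -1) {
                System.out.print((char) b);
            }
        }
    }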
Maybe it could have to do with thread and process blocking.
When a process wants a resource (writing/reading a file), it blocks until the OS fulfills the request and returns something to the process.
If you are not using hyper-threading, a process with two threads may block both threads while fulfilling each of the tasks. But if you separate them into two processes, maybe the OS can optimize access and parallelize the reads/writes better.
(just guessing :)
I have a C program that will be storing and retrieving a lot of data in a Java store. I am putting a lot of stress on my C program, and multiple threads are adding and retrieving data from the Java store. How will Java handle such a load? Because if there is only one main thread running in the JVM and handling all the requests from C, it may become a bottleneck for me. Will Java create multiple threads to handle the load, or is it the programmer's job to create and later abort the threads?
My Java store is just a Hashtable that stores the data from C, as is, against a key provided.
You definitely want to check the JNI documentation about threading, which has information on attaching multiple native threads to the JVM. You should also consider which Map implementation you need to use. Accessing a Hashtable from multiple threads will work, but it may introduce a bottleneck, as it is synchronized on every call, which effectively means a single thread reading or writing at a time. Consider ConcurrentHashMap, which uses lock striping and provides better concurrent throughput.
A couple of things to consider if you are concerned about bottlenecks and latency.
On a heavily loaded system, locking can introduce a high overhead. If the size of your map and the frequency of writes allow, consider using an immutable map and a copy-on-write approach, where a single thread handles writes by making updates to a copy of the map and replacing the original with the new version (make sure the reference is a volatile variable). This allows reads to occur without blocking.
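A minimal sketch of that copy-on-write scheme, assuming writes are rare enough to make the copy affordable:

    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Map;

    public class CopyOnWriteStore {
        // volatile guarantees readers observe the new reference after a publish;
        // readers always see a complete, immutable snapshot.
        private volatile Map<String, byte[]> snapshot = Collections.emptyMap();

        // Lock-free read path.
        public byte[] get(String key) {
            return snapshot.get(key);
        }

        // Writes: copy, mutate the copy, publish the new version.
        public synchronized void put(String key, byte[] value) {
            Map<String, byte[]> copy = new HashMap<>(snapshot);
            copy.put(key, value);
            snapshot = Collections.unmodifiableMap(copy);
        }
    }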
Calling from C into Java via JNI will probably become a bottleneck too; it's not as fast as calling in the other direction (Java to C). You can pass direct ByteBuffers through to Java that contain references to the C data structures, and allow Java to call back down to C via the direct ByteBuffer.
Plain Java requires that you write your own threading.
If you are communicating with Java via web services, it's likely that the web container will manage threads for you.
I guess you are using JNI, though, so the situation is potentially more complex. Depending upon exactly how you are doing your JNI calls, you can get at multiple threads in the JVM.
I've got to ask... JNI is pretty gnarly and error-prone; it's all too easy to bring down the whole process and get all manner of mysterious errors. Are there not C libraries containing a hash table you could use? Or even write one - it's got to be less work than doing JNI.
I think this depends on the Java code's implementation. If it proves not to be threaded, here's a potentially cleaner alternative to messy JNI:
Create a Java daemon process that communicates with your store and is INTERNALLY threaded on requests, to guarantee efficient load handling. Use a single ExecutorService created by java.util.concurrent.Executors to service a work queue of store/retrieve operations. Each store/retrieve method call submits a Callable to the work queue and waits for it to be run, as in the sketch below. The ExecutorService will automagically queue and multithread the store/retrieve operations. The whole thing should be less than 100 lines of code, aside from communications with the C program.
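A rough sketch of that daemon's core (the pool size and byte[] value type are arbitrary choices here; the socket layer is omitted):

    import java.util.concurrent.Callable;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class StoreDaemon {
        private final ConcurrentMap<String, byte[]> store = new ConcurrentHashMap<>();
        // The pool queues requests and runs them on a fixed number of threads.
        private final ExecutorService pool = Executors.newFixedThreadPool(8);

        // Each call submits a Callable to the work queue and waits for the result.
        public byte[] put(final String key, final byte[] value) throws Exception {
            return pool.submit(new Callable<byte[]>() {
                public byte[] call() { return store.put(key, value); }
            }).get();
        }

        public byte[] get(final String key) throws Exception {
            return pool.submit(new Callable<byte[]>() {
                public byte[] call() { return store.get(key); }
            }).get();
        }
    }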
You can communicate with this Java daemon from C using inter-process communication techniques (probably a socket), which would avoid JNI and let one Java daemon thread service numerous instances of the C program.
Alternately, you could use JNI to call the basic store/retrieve operations on your daemon. Same as now, except the Java daemon can decorate methods to provide caching, synchronization, and all sorts of fancy goodies associated with threading.
I have a situation here where I need to distribute work over multiple Java processes running in different JVMs, probably on different machines.
Let's say I have a table with records 1 to 1000. I am looking for work to be collected and distributed in sets of 10. Let's say records 1-10 go to workerOne, then records 11-20 to workerTwo, and so on and so forth. Needless to say, workerOne never does the work of workerTwo unless and until workerTwo couldn't do it.
This example is purely database-based but could be extended to any system, I believe, be it file processing, email processing, and so forth.
I have a small feeling that the immediate response would be to go for a master/worker approach. However, here we are talking about different JVMs. Even if one JVM goes down, the other JVMs should just keep doing their work.
Now the million dollar question would be: are there any good (production-ready) frameworks that would give me the facility to do this? Even better if there are concrete implementations of specific needs like database records, file processing, email processing, and their likes.
I have seen the Java Parallel Execution Framework, but I am not sure if it can be used across different JVMs, and whether, if one were to go down, the others would keep going. I believe workers could be on multiple JVMs, but what about the master?
More info 1: Hadoop would be a problem because of the JDK 1.6 requirement. That's a bit too much.
Thanks,
Franklin
You might want to look into MapReduce and Hadoop.
You could also use message queues. Have one process that generates the list of work and packages it in nice little chunks. It then puts those chunks on a queue. Each one of the workers just keeps waiting on the queue for something to show up. When it does, the worker pulls a chunk off the queue and processes it. If one process goes down, some other process will pick up the slack. Simple, and people have been doing it this way for a long time, so there's a lot of information about it on the net.
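With JMS (ActiveMQ here, since it's a common choice; the broker URL, queue name, and chunk format are placeholders), the worker side looks roughly like this:

    import javax.jms.Connection;
    import javax.jms.MessageConsumer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class QueueWorker {
        public static void main(String[] args) throws Exception {
            Connection conn =
                    new ActiveMQConnectionFactory("tcp://broker:61616").createConnection();
            conn.start();
            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("work.chunks");
            MessageConsumer consumer = session.createConsumer(queue);

            // Each worker blocks on the queue; the broker hands each chunk to exactly one worker.
            while (true) {
                TextMessage msg = (TextMessage) consumer.receive();
                process(msg.getText()); // e.g. "records 11-20"
            }
        }

        private static void process(String chunk) { /* do the work */ }
    }

Using Session.CLIENT_ACKNOWLEDGE and acknowledging only after processing would let the broker redeliver a chunk if a worker dies mid-task.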
Check out Hadoop
I believe Terracotta can do this. If you are dealing with web pages, JBoss can be clustered.
If you want to do this yourself, you will need a work manager which keeps track of jobs to do, jobs in progress, and jobs that were never done and need to be rescheduled. The workers then ask for something to do, do it, send the result back, and ask for more.
You may want to elaborate on what kind of work you want to do.
The problem you've described is definitely best solved using the master/worker pattern.
You should have a look at JavaSpaces (part of the Jini framework); it's really well suited to this kind of thing. Basically you just want to encapsulate each task to be carried out inside a Command object, subclassing as necessary. Dump these into the JavaSpace, let your workers grab and process one at a time, then reassemble when done.
Of course your performance gains will totally depend on how long it takes you to process each set of records, but JavaSpaces won't cause any problems if distributed across several machines.
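To give a feel for the API (the entry class is invented for the example, and obtaining the JavaSpace reference via Jini lookup is omitted): entries are plain objects with public fields, and take() removes a matching entry atomically, so no two workers ever get the same chunk.

    import net.jini.core.entry.Entry;
    import net.jini.core.lease.Lease;
    import net.jini.space.JavaSpace;

    // Entries must be public, with public fields and a no-arg constructor.
    public class RecordTask implements Entry {
        public Integer firstRecord;
        public Integer lastRecord;

        public RecordTask() {}

        public RecordTask(Integer first, Integer last) {
            this.firstRecord = first;
            this.lastRecord = last;
        }
    }

    // Master: drop 100 chunks of 10 records each into the space.
    //   for (int i = 1; i <= 1000; i += 10)
    //       space.write(new RecordTask(i, i + 9), null, Lease.FOREVER);
    //
    // Worker loop: a template with null fields matches any RecordTask.
    //   RecordTask template = new RecordTask();
    //   while (true) {
    //       RecordTask task = (RecordTask) space.take(template, null, Long.MAX_VALUE);
    //       process(task.firstRecord, task.lastRecord);
    //   }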
If you work on records in a single database, consider performing the work within the database itself using stored procedures. The gain from processing the records on different machines might be negated by the cost of retrieving and transmitting the work between the database and the computing nodes.
For file processing it could be a similar case. Working on files in a (shared) filesystem might introduce heavy I/O pressure on the OS.
And the cost of maintaining multiple JVMs on multiple machines might be overkill too.
As for the question: I used JADE (Java Agent DEvelopment Framework) for some distributed simulation once. Its multi-machine support and message-passing nature might help you.
I would consider using JGroups for that. You can cluster your JVMs, have one of your nodes selected as master, and then distribute the work to the other nodes by sending messages over the network. Or you can partition your work items up front and have the master node manage the distribution of the partitions: partition-1 goes to JVM-4, partition-2 goes to JVM-3, partition-3 goes to JVM-2, and so on. If JVM-4 goes down, the master node will notice and tell one of the other nodes to pick up partition-1 as well.
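A sketch of the JGroups side of that (3.x/4.x-style API; the cluster name and payload format are placeholders):

    import org.jgroups.Address;
    import org.jgroups.JChannel;
    import org.jgroups.Message;
    import org.jgroups.ReceiverAdapter;
    import org.jgroups.View;

    public class ClusterNode extends ReceiverAdapter {
        private JChannel channel;

        public void start() throws Exception {
            channel = new JChannel(); // default UDP stack
            channel.setReceiver(this);
            channel.connect("work-cluster"); // all JVMs join the same cluster name
        }

        // Master sends a partition assignment to one member (null = everyone).
        public void assign(Address member, String partition) throws Exception {
            channel.send(new Message(member, partition));
        }

        @Override
        public void receive(Message msg) {
            System.out.println("Assigned: " + msg.getObject());
        }

        // Fired whenever membership changes; the master can detect a crashed
        // node here and reassign its partitions.
        @Override
        public void viewAccepted(View view) {
            System.out.println("Members now: " + view.getMembers());
        }
    }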
One other alternative, which is easier to use, is Redis pub/sub support: http://redis.io/topics/pubsub. But then you will have to maintain Redis servers, which I don't like.
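For completeness, a worker subscribing with the Jedis client would look something like this (host and channel name are placeholders; note that pub/sub is fire-and-forget, so a subscriber that is down misses messages):

    import redis.clients.jedis.Jedis;
    import redis.clients.jedis.JedisPubSub;

    public class RedisWorker {
        public static void main(String[] args) {
            // subscribe() blocks, so run it on a dedicated thread in a real application.
            new Jedis("localhost").subscribe(new JedisPubSub() {
                @Override
                public void onMessage(String channel, String partition) {
                    System.out.println("Picking up " + partition);
                }
            }, "work-assignments");
        }
    }

    // Master side: new Jedis("localhost").publish("work-assignments", "partition-1");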