I'm implementing a thread pool for processing a high-volume market data feed and have a question about the strategy of reusing my worker instances, which implement Runnable and are submitted to the thread pool for execution. In my case I only have one type of worker: it takes a String and parses it to create a Quote object, which is then set on the correct Security. Given the amount of data coming off the feed, it is possible to have upwards of 1,000 quotes to process per second, and I see two ways to create the workers that get submitted to the thread pool.
The first option is simply to create a new instance of a Worker every time a line is retrieved from the underlying socket and add it to the thread pool; each Worker is eventually garbage collected after its run method has executed. But this got me thinking about performance: does it really make sense to instantiate 1,000 new instances of the Worker class every second? In the same spirit as a thread pool, do people know if it is a common pattern to have a Runnable pool or queue as well, so I can recycle my workers and avoid object creation and garbage collection? The way I see this being implemented is that before returning from the run() method, the Worker adds itself back to a queue of available workers, which is then drawn from when processing new feed lines instead of creating new instances of Worker.
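The recycling idea described above could be sketched roughly like this; the Worker here is a simplified stand-in (the real one would build a Quote and update a Security), and all names are illustrative:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the "recycled worker" idea: workers check themselves back into a
// free list at the end of run() instead of being created once per feed line.
public class RecycledWorkers {
    static final AtomicInteger processed = new AtomicInteger();

    static class Worker implements Runnable {
        private final BlockingQueue<Worker> freeList;
        private String line; // the feed line to parse

        Worker(BlockingQueue<Worker> freeList) { this.freeList = freeList; }

        void setLine(String line) { this.line = line; }

        @Override public void run() {
            String symbol = line.split(",")[0]; // stand-in for building a Quote
            processed.incrementAndGet();
            freeList.offer(this); // recycle this worker instead of discarding it
        }
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        BlockingQueue<Worker> freeList = new ArrayBlockingQueue<>(4);
        for (int i = 0; i < 4; i++) freeList.offer(new Worker(freeList));

        for (int i = 0; i < 100; i++) {
            Worker w = freeList.take(); // blocks until a worker is free
            w.setLine("BAC,12.32,12.54," + i);
            pool.execute(w);
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println("processed " + processed.get() + " lines");
    }
}
```

The BlockingQueue hand-off gives the happens-before edges needed for the `line` field to be safely visible to the pool thread.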
From a performance perspective, do I gain anything by going with the second approach or does the first make more sense? Has anyone implemented this type of pattern before?
Thanks - Duncan
I use a library I wrote called Java Chronicle for this. It is designed to persist and queue one million quotes per second without producing any significant garbage.
I have a demo here where it sends quote-like objects with nanosecond timing information at a rate of one million messages per second, and it can send tens of millions in a JVM with a 32 MB heap without triggering even a minor collection. The round-trip latency is less than 0.6 microseconds 90% of the time on my ultrabook. ;)
from a performance perspective, do I gain anything by going with the second approach or does the first make more sense?
I strongly recommend not filling your CPU caches with garbage. In fact, I avoid any constructs which create significant garbage. You can build a system which creates less than one object per event, end to end. I use an Eden size which is larger than the amount of garbage I produce in a day, so there are no GCs, minor or full, to worry about.
Has anyone implemented this type of pattern before?
I wrote a profitable low-latency trading system in Java five years ago. At the time, 60 microseconds tick-to-trade was fast enough in Java, but you can do better than that these days.
If you want a low-latency market data processing system, this is the way I do it. You might find this presentation I gave at JavaOne interesting as well.
http://www.slideshare.net/PeterLawrey/writing-and-testing-high-frequency-trading-engines-in-java
EDIT: I have added this parsing example.
ByteBuffer wrap = ByteBuffer.allocate(1024);
ByteBufferBytes bufferBytes = new ByteBufferBytes(wrap);
byte[] bytes = "BAC,12.32,12.54,12.56,232443".getBytes();
int runs = 10000000;
long start = System.nanoTime();
for (int i = 0; i < runs; i++) {
    bufferBytes.reset();
    // read the next message.
    bufferBytes.write(bytes);
    bufferBytes.position(0);
    // decode message
    String word = bufferBytes.parseUTF(StopCharTesters.COMMA_STOP);
    double low = bufferBytes.parseDouble();
    double curr = bufferBytes.parseDouble();
    double high = bufferBytes.parseDouble();
    long sequence = bufferBytes.parseLong();
    if (i == 0) {
        assertEquals("BAC", word);
        assertEquals(12.32, low, 0.0);
        assertEquals(12.54, curr, 0.0);
        assertEquals(12.56, high, 0.0);
        assertEquals(232443, sequence);
    }
}
long time = System.nanoTime() - start;
System.out.println("Average time was " + time / runs + " nano-seconds");
When run with -verbose:gc -Xmx32m, it prints
Average time was 226 nano-seconds
Note: no GCs are triggered.
I'd use an Executor from the java.util.concurrent package. I believe it handles all of this for you.
does it really make sense to instantiate 1,000 new instances of the Worker class every second?
Not necessarily; however, you would have to put the Runnables into some sort of BlockingQueue to be able to reuse them, and the cost of the queue's concurrency control may outweigh the GC overhead. Using a profiler, or watching the GC numbers via JConsole, will tell you whether a lot of time is being spent in GC and whether this needs to be addressed.
If this does turn out to be a problem, a different approach would be to put your Strings into your own BlockingQueue and submit the Worker objects to the thread pool only once. Each Worker instance would dequeue from the queue of Strings and would never quit. Something like:
public void run() {
    while (!shutdown) {
        try {
            String value = myQueue.take();
            ...
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return;
        }
    }
}
So you would not need to create thousands of Workers per second.
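A fuller sketch of this approach, assuming a hypothetical Worker and a poison-pill shutdown (the names and sentinel are illustrative, not from the original code):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Long-lived workers: each Worker is submitted once and loops over a shared
// queue of feed lines. One sentinel ("poison pill") per worker shuts it down.
public class QueueWorkers {
    static final String POISON = "__SHUTDOWN__";
    static final AtomicInteger processed = new AtomicInteger();

    static class Worker implements Runnable {
        private final BlockingQueue<String> lines;
        Worker(BlockingQueue<String> lines) { this.lines = lines; }

        @Override public void run() {
            try {
                while (true) {
                    String line = lines.take();
                    if (POISON.equals(line)) return; // shutdown signal
                    processed.incrementAndGet();     // parse the line here
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        int nWorkers = 3;
        BlockingQueue<String> lines = new LinkedBlockingQueue<>();
        ExecutorService pool = Executors.newFixedThreadPool(nWorkers);
        for (int i = 0; i < nWorkers; i++) pool.execute(new Worker(lines));

        for (int i = 0; i < 1000; i++) lines.put("BAC,12.32,12.54," + i);
        for (int i = 0; i < nWorkers; i++) lines.put(POISON);

        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println("processed " + processed.get() + " lines");
    }
}
```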
Yes, something like this can work. The OS and JVM don't care what is running on a thread, so reusing a recyclable object is generally good practice.
I see two questions in your problem: one about thread pooling, and another about object pooling. For the thread pooling issue, Java provides ExecutorService. Below is an example of using an ExecutorService:
Runnable r = new Runnable() {
    public void run() {
        // Do some work
    }
};

// Thread pool of size 2
ExecutorService executor = Executors.newFixedThreadPool(2);
// Add the runnables to the executor service
executor.execute(r);
The ExecutorService provides many different types of thread pools with different behaviors.
As far as object pooling is concerned (does it make sense to create 1,000 of your objects per second and then leave them for garbage collection?), this all depends on the statefulness and expense of your object. If you are worried about the state of your worker threads being compromised, you can look at using the flyweight pattern to encapsulate the state outside of the worker. Additionally, if you follow the flyweight pattern, you can also look at how useful Future and Callable objects would be in your application architecture.
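As a rough illustration of that last point, here is a minimal sketch combining a flyweight-style stateless parser with Callable and Future; the Parser class and the feed lines are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Flyweight-style sketch: one shared, stateless parser is reused by every
// task; the per-message state (the raw line) travels with the Callable and
// comes back through a Future, so no mutable state is shared between threads.
public class FlyweightParser {
    // Stateless "flyweight": safe to share across threads because it keeps
    // no per-message fields.
    static final class Parser {
        double parseLow(String line) { return Double.parseDouble(line.split(",")[1]); }
    }

    public static void main(String[] args) throws Exception {
        Parser shared = new Parser();
        ExecutorService pool = Executors.newFixedThreadPool(2);
        List<Future<Double>> futures = new ArrayList<>();
        String[] feed = { "BAC,12.5,12.7", "IBM,190.25,191.0" };

        for (String line : feed) {
            futures.add(pool.submit((Callable<Double>) () -> shared.parseLow(line)));
        }
        double sum = 0;
        for (Future<Double> f : futures) sum += f.get();
        pool.shutdown();
        System.out.println("sum of lows = " + sum);
    }
}
```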
Related
So I am currently creating a data analytics and predictive program, and for testing purposes I am simulating large amounts of data (in the range of 10,000 to 1,000,000 "trials"). The data is a simulated Match for a theoretical game. Each Match has rounds. The basic pseudocode for the program is this:
main() {
    data = create(100000);
    saveToFile(data);
}

Data create(int trials) {
    Data returnData = new Data(playTestMatch());
    return returnData;
}

Match playTestMatch() {
    List<Round> rounds = new List<Round>();
    while (!GameFinished) {
        rounds.add(playTestRound());
    }
    Match returnMatch = new Match(rounds);
    return returnMatch;
}

Round playTestRound() {
    // Do round stuff
}
Right now, I am wondering whether I can spread the simulation of these rounds over multiple threads to speed up the process. I am NOT familiar with the theory behind multithreading, so would someone please either help me accomplish this, or explain to me why it won't work (i.e. won't speed up the process)? Thanks!
If you are new to Java multithreading, this explanation might seem a little difficult to understand at first, but I'll try to make it as simple as possible.
Basically, whenever you have large datasets, running operations concurrently using multiple threads generally does speed up the process significantly as opposed to using a single-threaded approach, though there are exceptions of course.
You need to think about three things:
Creating threads
Managing Threads
Communicating/sharing results computed by each thread with main thread
Creating Threads:
Threads can be created manually by extending the Thread class, or you can use the Executors class.
I would prefer the Executors class to create threads, as it allows you to create a thread pool and does the thread management for you. That is, it will reuse existing threads that are idle in the pool, reducing the memory footprint of the application.
You should also look at the ExecutorService interface, as you will be using it to execute your tasks.
Managing threads:
Executors/ExecutorService does a great job of managing threads automatically, so if you use it you don't have to worry much about thread management.
Communication: this is the key part of the entire process. Here you have to consider the thread safety of your app in great detail.
I would recommend using two queues to do the job: a read queue to take work from and a write queue to put results on.
But if you are using a plain ArrayList, make sure that you synchronize your code for thread safety by enclosing accesses in a synchronized block:
synchronized (arrayList) {
    // do stuff
}
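The two-queue recommendation above might be sketched like this (all names hypothetical): the main thread feeds trial numbers into a read queue, worker threads compute results onto a write queue, and the main thread collects exactly as many results as it submitted.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Two-queue sketch: work flows in through readQueue, results flow out
// through writeQueue; a sentinel value per worker stops the workers.
public class TwoQueues {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Integer> readQueue = new LinkedBlockingQueue<>();
        BlockingQueue<Integer> writeQueue = new LinkedBlockingQueue<>();
        int trials = 100, nThreads = 4;
        int poison = -1; // sentinel to stop workers

        Runnable worker = () -> {
            try {
                while (true) {
                    int trial = readQueue.take();
                    if (trial == poison) return;
                    writeQueue.put(trial * trial); // stand-in for playTestRound()
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };
        for (int i = 0; i < nThreads; i++) new Thread(worker).start();

        for (int i = 0; i < trials; i++) readQueue.put(i);
        for (int i = 0; i < nThreads; i++) readQueue.put(poison);

        long sum = 0;
        for (int i = 0; i < trials; i++) sum += writeQueue.take();
        System.out.println("collected " + trials + " results, sum = " + sum);
    }
}
```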
If your code is thread-safe and you can split the task into discrete chunks that do not rely on each other, then it is relatively easy: make the class that does the work implement Callable, add the chunks of work to a List, and then use an ExecutorService, like this:
ArrayList<Simulation> simulations = new ArrayList<Simulation>();
for (int i = 0; i < chunks; i++)
    simulations.add(new Simulation(i));

ExecutorService executor = Executors.newFixedThreadPool(nthreads); // how many threads
List<Future<Result>> results = null;
try {
    results = executor.invokeAll(simulations);
} catch (InterruptedException e) {
    e.printStackTrace();
}
executor.shutdown();

for (Future<Result> result : results) {
    try {
        result.get().print(); // get() unwraps the Result computed by call()
    } catch (InterruptedException | ExecutionException e) {
        e.printStackTrace();
    }
}
So, Simulation is Callable and returns a Result; results is a List which gets filled when executor.invokeAll is called with the list of simulations. Once you've got your results you can print them, or whatever. It is probably best to set nthreads equal to the number of cores you have available.
I have been planning to use concurrency in a project after learning that it has indeed increased throughput for many.
Now, I have not worked much with multithreading or concurrency, so I decided to learn and build a simple proof of concept before using it in the actual project.
Below are the two examples I have tried:
1. With use of concurrency
public static void main(String[] args) {
    System.out.println("start main ");
    ExecutorService es = Executors.newFixedThreadPool(3);
    long startTime = new Date().getTime();
    Collection<SomeComputation> collection = new ArrayList<SomeComputation>();
    for (int i = 0; i < 10000; i++) {
        collection.add(new SomeComputation("SomeComputation" + i));
    }
    try {
        List<Future<Boolean>> list = es.invokeAll(collection);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    es.shutdown(); // let the non-daemon pool threads exit so the JVM can terminate
    System.out.println("\n end main " + (new Date().getTime() - startTime));
}
2. Without use of concurrency
public static void main(String[] args) {
    System.out.println("start main ");
    long startTime = new Date().getTime();
    Collection<SomeComputation> collection = new ArrayList<SomeComputation>();
    for (int i = 0; i < 10000; i++) {
        collection.add(new SomeComputation("SomeComputation" + i));
    }
    for (SomeComputation sc : collection) {
        sc.compute();
    }
    System.out.println("\n end main " + (new Date().getTime() - startTime));
}
Both share a common class
class SomeComputation implements Callable<Boolean> {
    String name;

    SomeComputation(String name) { this.name = name; }

    public Boolean compute() {
        someDumbStuff();
        return true;
    }

    public Boolean call() {
        someDumbStuff();
        return true;
    }

    private void someDumbStuff() {
        for (int i = 0; i < 50000; i++) {
            Integer.compare(i, i + 1);
        }
        System.out.print("\n done with " + this.name);
    }
}
Now, the analysis after 20-odd runs of each approach:
The first approach, with concurrency, takes 451 ms on average.
The second, without concurrency, takes 290 ms on average.
Now, I know this depends on configuration, OS, Java version (Java 7) and processor, but all of those were the same for both approaches.
I have also read that the cost of concurrency is only affordable when the computation is heavy, but that point wasn't clear to me.
I hope someone can help me understand this better.
PS: I tried to find similar questions but couldn't find one of this kind. Please comment with a link if you do.
Concurrency has at least two different purposes: 1) performance, and 2) simplicity of code (like 1000 listeners for web requests).
If your purpose is performance, you can't get any more speedup than the number of hardware cores you put to work.
(And that's only if the threads are CPU-bound.)
What's more, each thread has a significant startup overhead.
So if you launch 1000 threads on a 4-core machine, you can't possibly do any better than a 4x speedup, but against that, you have 1000 thread startup costs.
As mentioned in one of the answers, one use of concurrency is simplicity of code: certain problems are logically concurrent, so there is no way to model them in a non-concurrent way, e.g. producer-consumer problems or listeners for web requests.
Other than that, a concurrent program adds to performance only if it is able to save CPU cycles for you. The goal is to keep the CPU(s) busy all the time and not waste cycles, which means letting the CPU do something useful while your program is busy with non-CPU tasks such as waiting for disk I/O, waiting for locks, sleeping, or waiting for user input in a GUI app; otherwise those waits simply add to your total program run time.
So the question is: what is your CPU doing when your program is not using it? Can I complete a portion of my program during that time and segregate the waiting part into another thread? Nowadays most systems are multiprocessor and multi-core, which makes non-concurrent programs even more wasteful.
The example that you wrote does all its processing in memory without entering any wait states, so you don't see much gain, only the loss from setting up threads and context switching.
Try measuring performance by hitting a DB instead: fetch 1 million records, process them, and save them back to the DB. Do it sequentially in one go, and then concurrently in small batches, and you will notice the performance difference, because DB operations are disk-intensive; while you are reading from or writing to the DB you are actually doing disk I/O, and the CPU is free, wasting its cycles during that time.
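If a DB isn't handy, a self-contained sketch like the following uses Thread.sleep as a stand-in for the disk/DB wait to show the same effect (the task count and sleep duration are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: Thread.sleep stands in for a disk or DB wait. Sequentially, 8 tasks
// of 50 ms each take roughly 400 ms; with 8 pooled threads the waits overlap
// and the batch finishes in roughly the time of one wait.
public class IoBoundDemo {
    public static void main(String[] args) throws Exception {
        int tasks = 8;
        Callable<Long> fakeIo = () -> { Thread.sleep(50); return 1L; };

        long t0 = System.nanoTime();
        for (int i = 0; i < tasks; i++) fakeIo.call();
        long sequentialMs = (System.nanoTime() - t0) / 1_000_000;

        ExecutorService pool = Executors.newFixedThreadPool(tasks);
        List<Callable<Long>> batch = new ArrayList<>();
        for (int i = 0; i < tasks; i++) batch.add(fakeIo);
        long t1 = System.nanoTime();
        for (Future<Long> f : pool.invokeAll(batch)) f.get();
        long concurrentMs = (System.nanoTime() - t1) / 1_000_000;
        pool.shutdown();

        System.out.println("sequential ~" + sequentialMs + " ms, concurrent ~" + concurrentMs + " ms");
        System.out.println("concurrent faster: " + (concurrentMs < sequentialMs));
    }
}
```

A CPU-only loop in place of the sleep would show the opposite once task overhead dominates, which is what the question observed.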
In my opinion, good candidates for concurrency are long-running tasks involving one of the wait operations mentioned above; otherwise you don't see much gain. Programs which need some background tasks are also good candidates for concurrency.
Concurrency should not be confused with the CPU's multitasking, i.e. running different programs on the same CPU at the same time.
Hope it helps!
Concurrency control is needed when threads share the same data source: while one thread is working with the source, the others must wait until it finishes the job before they get access.
So you need to learn about synchronized methods and synchronized blocks, or something like that.
Sorry for my English. Read this tutorial, it's helpful:
https://docs.oracle.com/javase/tutorial/essential/concurrency/syncmeth.html
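The tutorial above covers synchronized methods; as a minimal sketch of the idea (the counter class here is illustrative):

```java
// Minimal example of synchronized methods guarding shared state: without the
// synchronized keyword the two threads' increments could interleave and lose
// updates, so the final count would be unpredictable.
public class SyncCounter {
    private int count = 0;

    public synchronized void increment() { count++; }
    public synchronized int get() { return count; }

    public static void main(String[] args) throws InterruptedException {
        SyncCounter counter = new SyncCounter();
        Runnable task = () -> { for (int i = 0; i < 100_000; i++) counter.increment(); };
        Thread a = new Thread(task), b = new Thread(task);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println("count = " + counter.get()); // always 200000
    }
}
```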
I've been introduced to LMAX and this wonderful concept called the RingBuffer.
So people say that when writing to the ring buffer with only one thread, performance is far better than with multiple producers...
However, I don't really see how a typical application can use only one thread for writes to the ring buffer, and I don't really understand how LMAX does that (if they do). For example, N different traders put orders on the exchange; those are all asynchronous requests that get transformed into orders and put into the ring buffer. How can they possibly be written by one thread?
Question 1: I might be missing something or misunderstanding some aspect, but if you have N concurrent producers, how is it possible to merge them into one without them locking each other?
Question 2: I recall RxJava observables, where you can take N observables and merge them into one using Observable.merge. I wonder whether that is blocking or maintains any lock in any way?
The impact of multi-threaded writing on a RingBuffer is slight, but under very heavy loads it can be significant.
A RingBuffer implementation holds a next node where the next addition will be made. If only one thread is writing to the ring, the process will always complete in the minimum time, i.e. buffer[head++] = newData.
To handle multi-threading while avoiding locks, you would generally do something like while (!buffer[head++].compareAndSet(null, newValue)) {}. This tight loop would continue to execute while other threads were interfering with the storing of the data, thus slowing down the throughput.
Note that I have used pseudo-code above, have a look at getFree in my implementation here for a real example.
// Find the next free element and mark it not free.
private Node<T> getFree() {
    Node<T> freeNode = head.get();
    int skipped = 0;
    // Stop when we hit the end of the list
    // ... or we successfully transit a node from free to not-free.
    // This is the loop that could cause delays under high thread activity.
    while (skipped < capacity && !freeNode.free.compareAndSet(true, false)) {
        skipped += 1;
        freeNode = freeNode.next;
    }
    // ...
}
Internally, RxJava's merge uses a serialization construct I call emitter-loop, which uses synchronized and is blocking.
Our 'clients' use merge mostly in throughput- and latency-insensitive cases, or completely single-threaded, so blocking isn't really an issue there.
It is possible to write a non-blocking serializer, which I call queue-drain, but merge can't be configured to use it instead.
You can also take a look at JCTools' MpscArrayQueue directly if you are willing to handle the producer and consumer threads manually.
I have a piece of code which is similar to the following:
final int THREADS = 11;
BlockingQueue<Future<Long>> futureQueue = new ArrayBlockingQueue<Future<Long>>(THREADS);
for (int i = 0; i < end; i++, count++) {
futureQueue.put(executor.submit(MyRunnable));
}
//Use queued results
How could I refactor this to make it more concurrent? Are there any subtleties I am overlooking here?
UPDATE:
Each Runnable is supposed to send a large number of HTTP requests to a server for stress testing. Am I on the right track?
I would use
static final int THREADS = Runtime.getRuntime().availableProcessors();
ExecutorService service = Executors.newFixedThreadPool(THREADS);
List<Future<Long>> futureQueue = new ArrayList<Future<Long>>(end);
for (int i = 0; i < end; i++)
    futureQueue.add(service.submit(new MyRunnable()));
You are using a bounded queue, and if end > THREADS the put will simply block.
Each Runnable is supposed to send a large number of HTTP requests to a server for stress testing. Am I on the right track?
In that case I would use the following, as your code is IO-bound rather than CPU-bound.
ExecutorService service = Executors.newCachedThreadPool();
While you might benefit from using NIO if you had more than 1000 threads, this would only make your load tester more efficient; it would also make the code much more complicated. (If you think this is hard, writing efficient and correct Selector code is much harder.)
Running the tester on more than one machine would make much more of a difference.
Using threads doesn't work well in your case. Thread pools work well when you have a CPU-intensive job. In your case, you have an IO-intensive job: it's not bound by the number of CPUs you have but by the number of network packets you can send.
In this case, the classes in NIO are your friend. Create hundreds of connections and use NIO selectors to see which one is ready to receive more data.
Using this approach, you don't need threads at all; one CPU core is more than enough to fill even a GBit Ethernet connection (~100MB/s).
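A minimal sketch of the selector approach described above; the loopback server, ephemeral port, and single client connection are purely illustrative:

```java
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

// Minimal NIO sketch: one Selector watches a non-blocking server channel and
// select() reports readiness, instead of dedicating one thread per connection.
public class SelectorDemo {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0)); // ephemeral port
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        // A client connecting makes the server channel "acceptable".
        SocketChannel client = SocketChannel.open(server.getLocalAddress());

        int ready = selector.select(5000); // wait up to 5 s for readiness
        System.out.println("ready channels: " + ready);
        for (SelectionKey key : selector.selectedKeys()) {
            if (key.isAcceptable()) System.out.println("accept ready");
        }
        client.close();
        server.close();
        selector.close();
    }
}
```

A real load tester would register the accepted/connected channels for OP_READ/OP_WRITE and loop over select(), but the readiness mechanism is the same.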
[EDIT] Of course, you could create hundreds of threads to try to fill the IO channel. But this has some drawbacks:
Threads are managed by the OS (or a small helper library). They need memory, and each time a thread is switched the CPU has to save its state and flush its caches.
If a thread does only a small amount of work, thread switching can be more expensive than doing the work.
When you use threads, you get all the usual thread synchronization issues.
There is no simple way to make sure you have the right amount of threads. If there are too few threads, the IO channel won't be used optimally. If there are too many threads, the channel won't be used optimally because the threads will fight for access for it. In both cases, you can't change this after starting your tests. The system doesn't adapt to the needs.
For tasks like this, a framework like Akka is much better suited because it avoids all these issues and is simpler to use than threads.
In our multithreaded Java app, we use a separate LinkedBlockingDeque instance for each thread; assume threads (c1, c2, ..., c200).
Threads T1 & T2 receive data from a socket and add each object to the specific consumer's queue, one of c1 to c200.
There is an infinite loop inside run() which calls LinkedBlockingDeque.take().
Under load, the CPU usage for the java.exe process itself is 40%; when we sum up the other processes in the system, the overall CPU usage reaches 90%.
Using Java VisualVM, run() is taking more CPU, and we suspect LinkedBlockingDeque.take().
So we tried alternatives like wait/notify and Thread.sleep(0), but there was no change.
The reasons why each consumer has a separate queue are:
1. There might be more than one request for consumer c1 from T1 or T2.
2. If we dumped all requests into a single queue, the search time across c1 to c200 would be longer and the search criteria would grow.
3. Each consumer having a separate queue lets it process its own requests.
We are trying to reduce the CPU usage and need your inputs...
SD
Do profiling and make sure that the queue methods really take a relatively large share of CPU time. Is your message processing so simple that it is comparable in cost to putting/taking to/from the queue?
How many messages are processed per second? How many CPUs are there? If each CPU is processing fewer than 100K messages per second, then the reason is likely not the access to the queues but the message handling itself.
Putting into a LinkedBlockingDeque creates an instance of a helper node object, and I suspect each new message is also allocated from the heap, so that is two allocations per message. Try to use a pool of preallocated messages and circular buffers.
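A minimal sketch of such a preallocated message pool, using a stdlib ArrayBlockingQueue as the free list (the Message class and pool size are hypothetical):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch of a preallocated message pool: messages are created once up front,
// borrowed for each event, and returned, so steady-state processing allocates
// nothing and the queue traffic reuses the same objects.
public class MessagePool {
    static final class Message { long payload; }

    public static void main(String[] args) throws InterruptedException {
        int poolSize = 16;
        BlockingQueue<Message> pool = new ArrayBlockingQueue<>(poolSize);
        for (int i = 0; i < poolSize; i++) pool.put(new Message());

        long sum = 0;
        for (int event = 0; event < 1000; event++) {
            Message m = pool.take(); // borrow a preallocated message
            m.payload = event;       // fill it with this event's data
            sum += m.payload;        // ... hand off / process ...
            pool.put(m);             // return it for reuse
        }
        System.out.println("pool size after run: " + pool.size() + ", sum = " + sum);
    }
}
```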
200 threads is way too many; that means too many context switches. Try to use actor libraries and thread pools, for example https://github.com/rfqu/df4j (yes, it's mine).
Check if http://code.google.com/p/disruptor/ would fit for your needs.