I'm new to Java programming. I have several questions about how to implement a ring FIFO buffer:
Can I store big XML files in this buffer? If yes, how big?
Can several threads insert/delete/fetch records from the RingBuffer simultaneously?
How many records can I store?
Is there any tutorial I can look at to see how to write the code?
I only found http://commons.apache.org/collections/apidocs/org/apache/commons/collections/buffer/CircularFifoBuffer.html
Questions 1 and 3: both are limited only by the memory you assign to the Java process that executes your program.
Question 2: accessing a Collection like the referenced CircularFifoBuffer usually requires "synchronizing" it. The linked JavaDoc already contains the code for doing so:
Buffer fifo = BufferUtils.synchronizedBuffer(new CircularFifoBuffer());
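As a rough sketch (assuming the Commons Collections 3.x API and an arbitrary capacity of 1024), the wrapped buffer can then be shared between a producer and a consumer thread:

import org.apache.commons.collections.Buffer;
import org.apache.commons.collections.BufferUtils;
import org.apache.commons.collections.buffer.CircularFifoBuffer;

// one shared, synchronized FIFO holding at most 1024 entries;
// when a full buffer receives a new entry, the oldest one is evicted
Buffer fifo = BufferUtils.synchronizedBuffer(new CircularFifoBuffer(1024));

// producer thread ('xmlDocument' stands in for whatever you store, e.g. a String)
fifo.add(xmlDocument);

// consumer thread: remove() takes and removes the oldest entry
String next = (String) fifo.remove();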
Can I store big XML files in this buffer? If yes, how big?
You are only limited by your disk space with memory mapped files.
Can several threads insert/delete/fetch records from the RingBuffer simultaneously?
That depends on your implementation. Usually ring buffers are shared between threads.
How many records can I store?
This is something you usually set when you create the ring buffer, so it's up to you. It's usually sensible to keep capacities to a minimum, as larger ring buffers can often be slower than tighter ones. The practical limit therefore depends on your application and the hardware used.
Is there any tutorial I can look at to see how to write the code?
The best example I know of is the Disruptor library. It's pretty advanced, but it has better documentation than any other I can think of (including libraries I have written ;)
http://code.google.com/p/disruptor/
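If you want a feel for the basic mechanics before diving into Disruptor, here is a minimal sketch of a fixed-capacity ring buffer guarded by coarse locking (the class name and capacity handling are mine; Disruptor itself uses far more sophisticated lock-free techniques):

// minimal bounded ring buffer; coarse synchronization for simplicity
final class SimpleRingBuffer<T> {
    private final Object[] slots;
    private int head, tail, count;

    SimpleRingBuffer(int capacity) { slots = new Object[capacity]; }

    synchronized boolean offer(T value) {
        if (count == slots.length) return false;   // full; caller decides what to do
        slots[tail] = value;
        tail = (tail + 1) % slots.length;
        count++;
        return true;
    }

    @SuppressWarnings("unchecked")
    synchronized T poll() {
        if (count == 0) return null;               // empty
        T value = (T) slots[head];
        slots[head] = null;                        // let GC reclaim the element
        head = (head + 1) % slots.length;
        count--;
        return value;
    }
}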
I'm evaluating various alternatives for the choice of a Java data structure in a write-intensive application. I know that no single data structure can provide a universal solution for a write-intensive application, but I'm surprised by the lack of discussion out there on the topic.
Many people talk about read-intensive, rare-write workloads or concurrent read-only applications, but I cannot find any conversation about the data structures used for a write-intensive application.
Based on the following requirements
key/value pairs - Map
Unsorted - for the sake of simplicity
1000+ writes per minute / negligible reads
All data are stored in memory
I am thinking of the following approaches
Simple ConcurrentHashMap: although, based on the following note from the official Oracle docs,
[...] even though all operations are thread-safe, retrieval operations do not entail locking
it must be better suited to read-intensive applications.
A combination of a BlockingQueue and a set of ConcurrentHashMaps. In batches, the queue is drained of all its elements and the updates are then allocated to the appropriate underlying maps. In this approach, though, I would need an additional map to track which underlying map holds each key, acting like an orchestrator.
Use a HashMap and synchronize at the API level, meaning that every write-related method is synchronized:
synchronized void aWriteMethod(Integer aKey, String aValue) {
    thisWriteIntensiveMap.put(aKey, aValue);
}
It'd be great if this question did not just receive criticism on the aforementioned options but also suggestions about new and better solutions.
PS: Apart from data integrity, ordering of operations, and throttling issues, what else needs to be taken into account when choosing the "best" approach for a write-intensive application?
I know that this might look a bit open ended but it'd be interesting to hear how people think on this problem.
Even if you went for the worst possible type of map, e.g. a synchronized HashMap, I don't think you will notice any performance impact with your load. A few thousand writes per minute is nothing.
I would set up a JMH benchmark and try out various map implementations. The obvious candidate is a ConcurrentHashMap because it is designed to deal with concurrent access.
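A minimal JMH benchmark along those lines might look like the following sketch (the class name, thread count, and key distribution are illustrative choices, not requirements):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ThreadLocalRandom;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Threads;

@State(Scope.Benchmark)
public class MapWriteBenchmark {
    private final Map<Integer, String> map = new ConcurrentHashMap<>();

    @Benchmark
    @Threads(4)   // several concurrent writers
    public void put() {
        // random keys so the writers spread across the map's internal bins
        map.put(ThreadLocalRandom.current().nextInt(1_000_000), "value");
    }
}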
So I think this is a good example of premature optimization.
This is a general programming question. Let's say I have a thread running a specific simulation, where speed is quite important. At every iteration I want to extract data from it and write the data to a file.
Is it better practice to hand the data over to a different thread and let the simulation thread focus on its job, or, since speed is very important, to make the simulation thread do the data recording itself without any copying of data? (In my case it is 3-5 deques of integers with a size of 1000-10000.)
Surely it depends on how much data we are copying, but what else can it depend on? Can the cost of synchronization and copying be worth it? Is it good practice to create small Runnables at each iteration to handle the recording task when there are 50 or more iterations per second?
If you truly want low latency on this stat capturing, and you want it during the simulation itself then two techniques come to mind. They can be used together very effectively. Please note that these two approaches are fairly far from the standard Java trodden path, so measure first and confirm that you need these techniques before abusing them; they can be difficult to implement correctly.
The fastest way to write the data to file during a simulation, without slowing down the simulation, is to hand the work off to another thread. However, care has to be taken with how the hand-off occurs, as a memory barrier in the simulation thread will slow the simulation. Given that the writer only cares that the values arrive eventually, I would consider using the memory barrier that sits behind AtomicLong.lazySet; it requests a thread-safe write to a memory address without blocking for the write to actually become visible to the other thread. Unfortunately, direct access to this memory barrier is currently only available via lazySet or via the class sun.misc.Unsafe, which obviously is not part of the public Java API. However, that should not be too large a hurdle, as it is on all current JVM implementations and Doug Lea is talking about moving parts of it into the mainstream.
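As a rough illustration of that hand-off (the class and method names here are mine, not from any library), the simulation thread can publish a sequence number with lazySet while the writer thread polls it:

import java.util.concurrent.atomic.AtomicLong;

// single-writer publication: the simulation thread fills a data slot,
// then bumps the cursor; the writer thread drains up to the cursor
final class PublishCursor {
    private final AtomicLong published = new AtomicLong(-1);

    // simulation thread: ordered store, no full fence on the hot path
    void publish(long sequence) { published.lazySet(sequence); }

    // writer thread: plain volatile read
    long lastPublished() { return published.get(); }
}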
To avoid the slow, blocking file I/O that Java normally uses, make use of a memory-mapped file. This lets the OS perform async I/O on your behalf and is very efficient. It also supports use of the same memory barrier mentioned above.
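A bare-bones sketch of the memory-mapped approach (the file name and region size are arbitrary placeholders):

import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

void writeStats() throws IOException {
    try (RandomAccessFile file = new RandomAccessFile("stats.dat", "rw");
         FileChannel channel = file.getChannel()) {
        // map a 64 MB region; writes land in the page cache and the OS
        // flushes them asynchronously, so puts rarely block the caller
        MappedByteBuffer map = channel.map(FileChannel.MapMode.READ_WRITE, 0, 64 << 20);
        map.putLong(42L);   // a plain memory write, no syscall per value
    }
}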
For examples of both techniques, I strongly recommend reading the source code to HFT Chronicle by Peter Lawrey. In fact, HFT Chronicle may be just the library for you to use here. It offers a highly efficient and simple to use disk backed queue that can sustain a million or so messages per second.
In my work on a stress-testing HTTP client I stored the stats into an array and, when the array was ready to send to the GUI, I would create a new array for the tester client and hand off the full array to the network layer. This means that you don't need to pay for any copying, just for the allocation of a fresh array (an ultra-fast operation on the JVM, involving hand-coded assembler macros to utilize the best SIMD instructions available for the task).
I would also suggest not throwing yourself head-on into the realm of optimal memory barrier usage; the difference between a plain volatile write and an AtomicReference.lazySet() is only measurable if your thread does almost nothing else but exercise the memory barrier (at least millions of writes per second). Depending on your target I/O throughput, you may not even need NIO to meet the goal. Better to try first with simple, easily maintainable code than to dig elbows-deep into highly specialized APIs without a confirmed need.
Let's say I am reading a single incoming stream with millions of transactions per ms; it is so fast that I can't afford a GC pause or the entire system will hang.
The functionality is very simple: it is merely to record every single packet that passes the NIC card (hypothetical).
Is it even possible?
Are there design patterns for such an implementation? I only know the flyweight and resource pool design patterns.
Do I really need to code in C so that I can manage it?
1) I can have a reasonable amount of RAM, but nothing ridiculous like 100 GB (maybe 16 GB).
2) CPU processing is not an issue.
FAQ:
Must it be Java? No; please recommend another language that supports most platforms (Linux, AIX, Windows).
If you really want to handle everything passing through your network card, Java is the wrong language. Look into C, possibly C++, or assembler.
As you have been told, a million transactions per millisecond seems unrealistic, and is only achievable if you can split the work between multiple (read: many, many, many) computers.
There are many garbage collectors out there; do some searching to see whether any of them fits your needs.
If you really don't want the garbage collector to kick in, I think your only option is: don't create garbage. Initialize an array of bytes as your memory to work in and use only primitives, no objects. It will be cumbersome, but it might be fast, and I have been told this is the kind of thing people working on real-time systems do.
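A sketch of what "don't create garbage" looks like in practice (the buffer size and class name are illustrative):

// preallocate once, recycle forever: only primitives on the hot path
final class PacketRecorder {
    private final byte[] buffer = new byte[16 << 20];   // 16 MB, allocated at startup
    private int writePos;

    // copy a packet into the preallocated buffer; no objects created per call
    void record(byte[] packet, int length) {
        if (writePos + length > buffer.length) {
            writePos = 0;   // wrap around; the oldest data is overwritten
        }
        System.arraycopy(packet, 0, buffer, writePos, length);
        writePos += length;
    }
}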
Assuming you meant millions of transactions per second, not per ms, you can use Chronicle, which promises up to 5-20 million persisted transactions per second.
I hope that millions of transactions per millisecond is a joke or hyperbole. No single computer can handle that much, particularly if a mere 100 GB counts as a ridiculous amount of RAM.
But ignoring the actual number of expected transactions, what is needed for this type of task is real-time Java. Provided that you want to stick with Java.
Either way you'll probably need a real-time OS first, because your application isn't going to be the only thing running on the computer and if the OS decides not to give your application control when it needs it, there's nothing you can do.
Update: If you only want to capture network traffic, don't reinvent the wheel; use libpcap. AFAIK there's even a Java wrapper for it.
My application requires concurrent access to a data file using memory mapping. My goal is to make it scale on a shared-memory system. After studying the source code of a memory-mapped file library implementation, I cannot figure out:
Is it legal to read from a MappedByteBuffer in multiple threads? Does one get block another get at the OS (*nix) level?
If a thread puts into a MappedByteBuffer, is the content immediately visible to another thread calling get?
To clarify a point: The threads are using a single instance of MappedByteBuffer, not multiple instances.
Buffers are not thread-safe and their access should be controlled by appropriate synchronisation; see the Thread Safety section of http://docs.oracle.com/javase/6/docs/api/java/nio/Buffer.html. ByteBuffer is a subclass of Buffer and therefore has the same thread-safety issue.
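One coarse but straightforward way to follow that advice is to funnel all access through a shared lock and use only the absolute get/put overloads, which do not touch the buffer's shared position (a sketch, not a tuned solution; the class name is mine):

import java.nio.MappedByteBuffer;

final class SharedMappedBuffer {
    private final MappedByteBuffer buffer;

    SharedMappedBuffer(MappedByteBuffer buffer) { this.buffer = buffer; }

    // synchronized gives both mutual exclusion and a happens-before edge,
    // so a get after a put sees the written content
    synchronized void putLong(int index, long value) { buffer.putLong(index, value); }
    synchronized long getLong(int index) { return buffer.getLong(index); }
}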
Trying to make the use of memory-mapped files scale on a shared-memory system looks highly suspicious to me. Memory-mapped files are used for performance, and when you step into shared systems, chasing performance should be given low priority. Not that you should aim for a slow system, but you will have so many other problems that simply making it work should be your first (and only?) priority at the beginning. I won't be surprised if in the end you need to replace your concurrent, memory-mapped access to the data file with something else.
For some ideas like the use of an Exchanger, see Can multiple threads see writes on a direct mapped ByteBuffer in Java? and Options to make Java's ByteBuffer thread safe .
I am working on a plagiarism detection framework using Java. My document set contains about 100 documents, and I have to preprocess them and store them in a suitable data structure. My big question is how I am going to process this large set of documents efficiently while avoiding bottlenecks. The main focus of my question is how to improve preprocessing performance.
You're a bit lacking on specifics there. Appropriate optimizations are going to depend upon things like the document format, the average document size, how you are processing them, and what sort of information you are storing in your data structure. Not knowing any of them, some general optimizations are:
Assuming that the pre-processing of a given document is independent of the pre-processing of any other document, and assuming you are running a multi-core CPU, then your workload is a good candidate for multi-threading. Allocate one thread per CPU core, and farm out jobs to your threads. Then you can process multiple documents in parallel.
More generally, do as much in memory as you can. Try to avoid reading from/writing to disk as much as possible. If you must write to disk, try to wait until you have all the data you want to write, and then write it all in a single batch.
You give very little information on which to make any good suggestions.
My default would be to process them using an executor with a thread pool that has the same number of threads as cores in your machine, with each thread processing one document.
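A minimal sketch of that setup (the document list and the preprocess step are placeholders for your own code):

import java.nio.file.Path;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

void preprocessAll(List<Path> documents) throws InterruptedException {
    // one worker thread per core, each task pre-processing one document
    ExecutorService pool = Executors.newFixedThreadPool(
            Runtime.getRuntime().availableProcessors());
    for (Path doc : documents) {
        pool.submit(() -> preprocess(doc));   // 'preprocess' is your own logic
    }
    pool.shutdown();
    pool.awaitTermination(1, TimeUnit.HOURS); // wait for all documents to finish
}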