Implementing a file-based queue - Java

I have a bounded in-memory queue into which multiple threads enqueue objects. Normally the queue should be emptied by a single reader thread that processes the items in the queue.
However, there is a possibility that the queue fills up. In that case I would like to persist any additional items to disk, to be processed by another background reader thread that scans a directory for such files and processes the entries within them. I am familiar with ActiveMQ but would prefer a more lightweight solution. It is OK if FIFO order is not strictly followed (since the persisted entries may be processed out of order).
Are there any open source solutions out there? I did not find any but thought I would ping this list for suggestions before I embark on the implementation myself.
Thank you!

Take a look at http://square.github.io/tape/, and its impressive QueueFile.
(thanks to Brian McCallister's "The Long Tail Treasure Trove" for pointing me at that).
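For illustration, a rough sketch of what using Tape's QueueFile as the overflow store might look like (this assumes the Tape 1.x API, where QueueFile holds raw byte[] entries; check the current API before relying on it):

import com.squareup.tape.QueueFile;
import java.io.File;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class TapeOverflowSketch {
    public static void main(String[] args) throws IOException {
        // A QueueFile is a single file holding a FIFO of byte[] entries.
        QueueFile overflow = new QueueFile(new File("overflow.queue"));

        // Producer side: serialize and append an item when the in-memory queue is full.
        overflow.add("item that did not fit in memory".getBytes(StandardCharsets.UTF_8));

        // Background reader side: peek at the head, process it, then remove it.
        byte[] head = overflow.peek();
        if (head != null) {
            System.out.println("processing: " + new String(head, StandardCharsets.UTF_8));
            overflow.remove();
        }
        overflow.close();
    }
}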

You could use something like SQLite to store the objects in.
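A rough sketch of the SQLite idea, using the xerial sqlite-jdbc driver (the table and column names are made up for illustration; an auto-increment id keeps rough insertion order for the background reader):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;

public class SqliteOverflowSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:overflow.db")) {
            try (Statement st = conn.createStatement()) {
                st.execute("CREATE TABLE IF NOT EXISTS overflow "
                        + "(id INTEGER PRIMARY KEY AUTOINCREMENT, payload TEXT)");
            }
            // Writer side: spill an item that did not fit into the in-memory queue.
            try (PreparedStatement insert =
                         conn.prepareStatement("INSERT INTO overflow (payload) VALUES (?)")) {
                insert.setString(1, "serialized item");
                insert.executeUpdate();
            }
        }
    }
}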

EHCache can overflow to disk. It's also highly concurrent, though you don't really need that.

Why is the queue bounded? Why not use a dynamically expandable data structure? That seems much simpler than involving the disk.
Edit:
It's hard to answer your question without more context.
Can you clarify what you mean by "run out of memory"? How big is the queue? How much memory do you have?
Are you on an embedded system with very little memory? Or do you have 2 GB or more of stuff in the queue?
If either is true, you really ought to use a "swappable" data structure like a BTree. Implementing one yourself for one queue seems like overkill. I would just use an embedded database like SQLite.
If neither of those is true, then just use a vector or a linked list.
Edit 2:
You probably don't need a BTree or a database. You could just use a linked list of pages. But again, I have to ask: is this necessary?
Or, if you are willing to process things non-serially, why not have multiple reader threads all the time?
Ultimately though I don't think your proposal is the way to go.
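For what it's worth, if an unbounded in-memory queue is acceptable, the whole setup is just the standard producer/consumer pattern; a minimal sketch with java.util.concurrent:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class UnboundedQueueSketch {
    public static void main(String[] args) {
        // No capacity argument: the queue grows until the heap runs out.
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();

        // Any number of producer threads can offer items concurrently.
        new Thread(() -> queue.offer("work item")).start();

        // A single reader thread drains the queue.
        new Thread(() -> {
            try {
                while (true) {
                    System.out.println("processing: " + queue.take());
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }).start();
    }
}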

You could embed Berkeley DB Java Edition to keep the queue elements in files.
You can look at working example here:
http://sysgears.com/articles/lightweight-fast-persistent-queue-in-java-using-berkley-db
Hope this helps

MapDB provides concurrent Maps, Sets and Queues backed by disk storage or off-heap memory. It is a fast and easy to use embedded Java database engine.
https://github.com/jankotek/MapDB
http://www.mapdb.org/
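MapDB's API has changed considerably between major versions; with MapDB 3.x, a disk-backed map that could hold spilled queue entries might look roughly like this (treating a sequence number as the key is just one possible scheme):

import org.mapdb.DB;
import org.mapdb.DBMaker;
import org.mapdb.Serializer;
import java.util.concurrent.ConcurrentMap;

public class MapDbOverflowSketch {
    public static void main(String[] args) {
        // File-backed store; transactionEnable() trades some speed for crash safety.
        DB db = DBMaker.fileDB("overflow.db").transactionEnable().make();

        // A persistent, concurrent map keyed by sequence number, used as a crude FIFO.
        ConcurrentMap<Long, String> overflow =
                db.hashMap("overflow", Serializer.LONG, Serializer.STRING).createOrOpen();

        overflow.put(1L, "first spilled item");
        db.commit();

        System.out.println(overflow.get(1L));
        db.close();
    }
}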

The most performant and GC-friendly solution I've found so far is Chronicle Queue.
It has extremely low write latency, on the order of tens of nanoseconds, several orders of magnitude lower than MapDB or SQLite.
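For reference, a minimal Chronicle Queue write/read sketch might look something like this (API details vary by version; this assumes a recent net.openhft chronicle-queue release):

import net.openhft.chronicle.queue.ChronicleQueue;
import net.openhft.chronicle.queue.ExcerptAppender;
import net.openhft.chronicle.queue.ExcerptTailer;

public class ChronicleQueueSketch {
    public static void main(String[] args) {
        // The queue lives in a directory of memory-mapped files.
        try (ChronicleQueue queue = ChronicleQueue.singleBuilder("queue-dir").build()) {
            ExcerptAppender appender = queue.acquireAppender();
            appender.writeText("spilled item");

            ExcerptTailer tailer = queue.createTailer();
            String item = tailer.readText();   // returns null when nothing is available
            System.out.println("read: " + item);
        }
    }
}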

Related

Abusing Java Arrays?

I'm developing a software package which makes heavy use of arrays (ArrayLists). Instructions to be processed are put into an array queue, and once used they are deleted from the array. The same goes for drawing on a plot: data is placed into an array queue, which is read to plot the data, and the oldest data is eventually deleted as new data comes in. We are talking about thousands of instructions over an hour and maybe 200,000 points plotted at any time, with the array continually growing and shrinking.
After some time, the software begins to slow down and instructions are processed more slowly. Nothing really changes in the processing itself; the system is stable in terms of how much data is plotted and which instructions are being processed, just working off similar incoming data time after time.
Is there some memory issue going on with the "abuse" of the variable-sized (not a defined size, add/delete as needed) arrays/queues that could be causing eventual slowing?
Is there a better way than the String ArrayList to act as a queue?
Thanks!
Yes, you are most likely using the wrong data structure for the job. An ArrayList is a list with a backing array so get() is fast.
The Java runtime library has a very rich set of data structures, so you can get a well-written and debugged implementation with the characteristics you need out of the box. You most likely should be using one or more Queues instead.
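For example, ArrayDeque gives constant-time add-at-tail and remove-at-head, whereas removing element 0 of an ArrayList shifts every remaining element; a minimal sketch of the instruction queue:

import java.util.ArrayDeque;
import java.util.Deque;

public class InstructionQueueSketch {
    public static void main(String[] args) {
        Deque<String> instructions = new ArrayDeque<>();

        // Producer side: append new instructions at the tail.
        instructions.addLast("plot point A");
        instructions.addLast("plot point B");

        // Consumer side: take the oldest instruction from the head in O(1),
        // instead of ArrayList.remove(0), which is O(n).
        while (!instructions.isEmpty()) {
            System.out.println("processing: " + instructions.pollFirst());
        }
    }
}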
My guess is that you forgot to null out values in your ArrayList, so the JVM has to keep all of them around. That is a memory leak.
To confirm, use a profiler to see where your memory and CPU go. VisualVM is a nice standalone tool; NetBeans includes one as well.
Using VisualVM helped. It showed heavy use of a "message" form that I was dumping incoming data into and had forgotten existed; because I never limited its size, it was dealing with a million characters by the time the sluggishness became apparent.

Writing hundreds of data objects to a Mongo database

I am working on a Minecraft network which has several servers manipulating 'user objects', which are just Mongo documents. After a user object is modified it needs to be written to the database immediately, otherwise it may be overwritten by other servers (which have an older version of the user object), but sometimes hundreds of objects need to be written in a short amount of time (a few seconds). My question is: how can I write objects to a MongoDB database without really overloading the database?
I have been thinking up an idea but I have no idea if it is relevant:
- Create some sort of queue in another thread; every time a data object needs to be saved to the database it goes into the queue, and in the 'queue thread' the objects are saved one by one at some sort of interval.
Thanks in advance
By the way, I'm using Morphia as the framework in Java.
"hundreds of objects [...] in a few seconds" doesn't sound that much. How much can you do at the moment?
The setting most important for the speed of write operations is the WriteConcern. What are you using at the moment and is this the right setting for your project (data safety vs speed)?
If you need to do many write operations at once, you can probably speed up things with bulk operations. They have been added in MongoDB 2.6 and Morphia supports them as well — see this unit test.
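The question uses Morphia, but at the driver level (3.x and later) a bulk write of many modified user objects looks roughly like this (database, collection and field names are illustrative):

import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.ReplaceOneModel;
import com.mongodb.client.model.WriteModel;
import org.bson.Document;
import java.util.ArrayList;
import java.util.List;

public class BulkUserWriteSketch {
    public static void main(String[] args) {
        MongoCollection<Document> users = MongoClients.create("mongodb://localhost")
                .getDatabase("minecraft")
                .getCollection("users");

        // Collect one ReplaceOneModel per modified user object...
        List<WriteModel<Document>> writes = new ArrayList<>();
        writes.add(new ReplaceOneModel<>(
                Filters.eq("_id", "player-123"),
                new Document("_id", "player-123").append("coins", 42)));

        // ...and send them all to the server in a single round trip.
        users.bulkWrite(writes);
    }
}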
I would be very cautious with a queue:
Do you really need it? Depending on your hardware and configuration you should be able to do hundreds or even thousands of write operations per second.
Is async really the best approach for you? The producer of the write operation / message can only assume his change has been applied, but it probably has not and is still waiting in the queue to be written. Is this the intended behaviour?
Does it make your life easier? You need to know another piece of software, which adds many new and most likely unforeseen problems.
If you need to scale your writes, why not use sharding? No additional technology and your code will behave the same with and without it.
You might want to read the following blogpost on why you probably want to avoid queues for this kind of operation in general: http://widgetsandshit.com/teddziuba/2011/02/the-case-against-queues.html

How do I efficiently cache objects in Java using available RAM?

I need to cache objects in Java using a proportion of whatever RAM is available. I'm aware that others have asked this question, but none of the responses meet my requirements.
My requirements are:
Simple and lightweight
Not dramatically slower than a plain HashMap
Use LRU, or some deletion policy that approximates LRU
I tried LinkedHashMap, however it requires you to specify a maximum number of elements, and I don't know how many elements it will take to fill up available RAM (their sizes will vary significantly).
My current approach is to use Google Collection's MapMaker as follows:
Map<String, Object> cache = new MapMaker().softKeys().makeMap();
This seemed attractive as it should automatically delete elements when it needs more RAM, however there is a serious problem: its behavior is to fill up all available RAM, at which point the GC begins to thrash and the whole app's performance deteriorates dramatically.
I've heard of stuff like EHCache, but it seems quite heavy-weight for what I need, and I'm not sure if it is fast enough for my application (remembering that the solution can't be dramatically slower than a HashMap).
I've got similar requirements to you - concurrency (on 2 hexacore CPUs) and LRU or similar - and also tried Guava MapMaker. I found softValues() much slower than weakValues(), but both made my app excruciatingly slow when memory filled up.
I tried WeakHashMap and it was less problematic, oddly even faster than using LinkedHashMap as an LRU cache via its removeEldestEntry() method.
But by far the fastest for me is ConcurrentLinkedHashMap, which made my app 3-4 (!!) times faster than any other cache I tried. Joy, after days of frustration! It's apparently been incorporated into Guava's MapMaker, but the LRU feature isn't in Guava r07 at any rate. Hope it works for you.
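For reference, the removeEldestEntry() approach mentioned above is only a few lines; it still bounds the cache by entry count rather than by RAM (which was the original objection), and it is not thread-safe on its own, so wrap it with Collections.synchronizedMap or use ConcurrentLinkedHashMap when you need concurrency:

import java.util.LinkedHashMap;
import java.util.Map;

// A size-bounded LRU map: the third constructor argument enables access order,
// and removeEldestEntry evicts the least recently used entry once maxEntries is exceeded.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public LruCache(int maxEntries) {
        super(16, 0.75f, true);   // true = iterate in access order
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries;
    }
}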
I've implemented several caches, and it's probably as difficult as implementing a new data source or thread pool; my recommendation is to use JBoss Cache or another well-known caching library.
That way you will sleep well, without issues.
I've heard of stuff like EHCache, but it seems quite heavy-weight for what I need, and I'm not sure if it is fast enough for my application (remembering that the solution can't be dramatically slower than a HashMap).
I really don't know if one can say that EHCache is heavyweight. At least, I do not consider EHCache as such, especially when using a Memory Store (which is backed by an extended LinkedHashMap and is of course the fastest caching option). You should give it a try.
I believe MapMaker is going to be the only reasonable way to get what you're asking for. If "the GC begins to thrash and the whole app's performance deteriorates dramatically," you should spend some time properly setting the various tuning parameters. This document may seem a little intimidating at first, but it's actually written very clearly and is a goldmine of helpful information about GC:
https://www.oracle.com/technetwork/java/javase/memorymanagement-whitepaper-150215.pdf
I don't know if this would be a simple solution, especially compared with EHCache or similar, but have you looked at the Javolution library? It is not designed for caching as such, but in the javolution.context package there is an Allocator pattern which can reuse objects without the need for garbage collection. This way they keep object creation and garbage collection to a minimum, an important feature for real-time programming. Perhaps you should take a look and try to adapt it to your problem.
This seemed attractive as it should automatically delete elements when it needs more RAM, however there is a serious problem: its behavior is to fill up all available RAM
Using soft keys just allows the garbage collector to remove objects from the cache when no other objects reference them (i.e., when the only thing referring to the cache key is the cache itself). It does not guarantee any other kind of expulsion.
Most solutions you find will be features added on top of the Java Map classes, including EhCache.
Have you looked at the commons-collections LRUMap?
Note that there is an open issue against MapMaker to provide LRU/MRU functionality. Perhaps you can voice your opinion there as well.
Using your existing cache, store WeakReferences rather than normal object references.
If GC starts running out of free space, the values held by WeakReferences will be released.
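A minimal sketch of that idea; note that a value can disappear at any time once the GC clears the reference, so callers must be prepared for get() to return null and recompute the value:

import java.lang.ref.WeakReference;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class WeakValueCache<K, V> {
    private final Map<K, WeakReference<V>> map = new ConcurrentHashMap<>();

    public void put(K key, V value) {
        map.put(key, new WeakReference<>(value));
    }

    public V get(K key) {
        WeakReference<V> ref = map.get(key);
        V value = (ref == null) ? null : ref.get();
        if (ref != null && value == null) {
            map.remove(key);   // drop the stale entry once the referent has been collected
        }
        return value;
    }
}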
In the past I have used JCS. You can set up the configuration to try to meet your needs. I'm not sure if this will meet all of your requirements/needs, but I found it to be pretty powerful when I used it.
You cannot "delete elements" you can only stop to hard reference them and wait for the GC to clean them, so go on with Google Collections...
I'm not aware of an easy way to find out an object's size in Java. Therefore, I don't think you'll find a way to limit a data structure by the amount of RAM it's taking.
Based on this assumption, you're stuck with limiting it by the number of cached objects. I'd suggest running simulations of a few real-life usage scenarios and gathering statistics on the types of objects that go into the cache. Then you can calculate the statistically average size, and the number of objects you can afford to cache. Even though it's only an approximation of the amount of RAM you want to dedicate to the cache, it might be good enough.
As to the cache implementation, in my project (a performance-critical application) we're using EhCache, and personally I don't find it to be heavyweight at all.
In any case, run several tests with several different configurations (regarding size, eviction policy etc.) and find out what works best for you.
For caching, SoftReference may be the best approach I can think of so far.
Or you could reinvent an object pool, so that objects you are not using do not need to be destroyed. But that saves CPU rather than memory.
Assuming you want the cache to be thread-safe, then you should examine the cache example in Brian Goetz's book "Java Concurrency in Practice". I can't recommend this highly enough.

Searching for regex patterns in a 30 GB XML dataset, making use of 16 GB of memory

I currently have a Java SAX parser that is extracting some info from a 30GB XML file.
Presently it is:
reading each XML node
storing it into a string object,
running some regexes on the string
storing the results to the database
This happens for several million elements. I'm running this on a computer with 16 GB of memory, but the memory is not being fully utilized.
Is there a simple way to dynamically 'buffer' about 10gb worth of data from the input file?
I suspect I could manually write a 'producer'/'consumer' multithreaded version of this (loading the objects on one side, using them and discarding on the other), but damnit, XML is ancient now, are there no efficient libraries to crunch them?
Just to cover the bases, is Java able to use your 16GB? You (obviously) need to be on a 64-bit OS, and you need to run Java with -d64 -Xmx10g (or however much memory you want to allocate to it).
It is highly unlikely memory is a limiting factor for what you're doing, so you really shouldn't see it fully utilized. You should be either IO or CPU bound. Most likely, it'll be IO. If it is IO, make sure you're buffering your streams, and then you're pretty much done; the only other thing you can do is buy a faster hard drive.
If you really are CPU-bound, it's possible that you're bottlenecking at regex rather than XML parsing.
See this (which references this)
If your bottleneck is at SAX, you can try other implementations. Off the top of my head, I can think of the following alternatives:
StAX (there are multiple implementations; Woodstox is one of the fastest; see the sketch below)
Javolution
Roll your own using JFlex
Roll your own ad hoc, e.g. using regex
For the last two, the more constrained your XML subset is, the more efficient you can make it.
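For the StAX option, the cursor API keeps only the current event in memory; a minimal sketch (the element name "record" and the regex are placeholders for whatever you are actually extracting):

import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;
import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.util.regex.Pattern;

public class StaxScanSketch {
    public static void main(String[] args) throws Exception {
        Pattern pattern = Pattern.compile("\\d{4}-\\d{2}-\\d{2}");   // placeholder regex
        XMLInputFactory factory = XMLInputFactory.newInstance();
        try (FileInputStream in = new FileInputStream("huge.xml")) {
            XMLStreamReader reader = factory.createXMLStreamReader(new BufferedInputStream(in));
            while (reader.hasNext()) {
                if (reader.next() == XMLStreamConstants.START_ELEMENT
                        && "record".equals(reader.getLocalName())) {
                    // getElementText() reads only the text content of the current element.
                    String text = reader.getElementText();
                    if (pattern.matcher(text).find()) {
                        // store the match to the database here
                    }
                }
            }
            reader.close();
        }
    }
}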
It's very hard to say, but as others mentioned, an XML-native database might be a good alternative for you. I have limited experience with those, but I know that at least Berkeley DB XML supports XPath-based indices.
First, try to find out what's slowing you down.
How much faster is the parser when you parse from memory?
Does using a BufferedInputStream with a large size help?
Is it easy to split up the XML file? In general, shuffling through 30 GiB of any kind of data will take some time, since you have to load it from the hard drive first, so you are always limited by the speed of this. Can you distribute the load to several machines, maybe by using something like Hadoop?
No Java experience, sorry, but maybe you should change the parser? SAX should work sequentially and there should be no need to buffer most of the file ...
SAX is, essentially, "event driven", so the only state you should be holding on to from element to element is state that is relevant to that element, rather than the document as a whole. What other state are you maintaining, and why? As each "complete" node (or set of nodes) comes by, you should be discarding them.
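Concretely, a SAX handler for this kind of job only needs to accumulate the characters of the element it is currently inside and discard them in endElement; a rough sketch (the element name and regex are placeholders):

import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;
import java.util.regex.Pattern;

// Keeps only per-element state: the buffer is reset on every startElement
// and emptied after endElement, so memory use stays flat regardless of file size.
public class RecordHandlerSketch extends DefaultHandler {
    private final Pattern pattern = Pattern.compile("\\d{4}-\\d{2}-\\d{2}");   // placeholder
    private final StringBuilder current = new StringBuilder();

    @Override
    public void startElement(String uri, String localName, String qName, Attributes attrs) {
        current.setLength(0);
    }

    @Override
    public void characters(char[] ch, int start, int length) {
        current.append(ch, start, length);
    }

    @Override
    public void endElement(String uri, String localName, String qName) {
        if ("record".equals(qName) && pattern.matcher(current).find()) {
            // write the match to the database, then forget about it
        }
        current.setLength(0);
    }
}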
I don't really understand what you're trying to do with this huge amount of XML, but I get the impression that
using XML was wrong for the data stored
you are buffering way beyond what you should do (and you are giving up all advantages of SAX parsing by doing so)
Apart from that: XML is not ancient and in massive and active use. What do you think all those interactive web sites are using for their interactive elements?
Are you being slowed down by multiple small commits to your DB? It sounds like you would be writing to the DB almost all the time from your program, and making sure you don't commit too often could improve performance. Preparing your statements and other standard bulk-processing tricks could also help.
Other than this early comment, we need more info - do you have a profiler handy that can scrape out what makes things run slowly?
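On the commit-frequency point, batching the inserts and committing once per batch usually helps a great deal; a generic JDBC sketch (the JDBC URL and table name are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BatchedInsertSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:yourdb://localhost/extracted")) {
            conn.setAutoCommit(false);   // avoid one commit per row
            try (PreparedStatement ps =
                         conn.prepareStatement("INSERT INTO matches (value) VALUES (?)")) {
                int pending = 0;
                for (String match : new String[] {"a", "b", "c"}) {   // stand-in for parsed results
                    ps.setString(1, match);
                    ps.addBatch();
                    if (++pending % 1000 == 0) {
                        ps.executeBatch();
                        conn.commit();
                    }
                }
                ps.executeBatch();   // flush whatever is left
                conn.commit();
            }
        }
    }
}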
You can use the Jibx library and bind your XML "nodes" to objects that represent them. You can even extend an ArrayList; then, when x objects have been added, perform the regexes all at once (presumably using the method on your object that performs this logic) and save them to the database, before allowing the "add" method to finish once again.
Jibx is hosted on SourceForge: Jibx
To elaborate: you can bind your XML as a "collection" of these specialized String holders. Because you define this as a collection, you must choose what collection type to use. You can then specify your own ArrayList implementation.
Override the add method as follows (ArrayList.add returns boolean, so the override must as well):
public boolean add(Object o) {
    boolean added = super.add(o);
    // Once the threshold is reached, flush the accumulated objects to the database.
    if (size() > YOUR_DEFINED_THRESHOLD) {
        flushObjects();
    }
    return added;
}
YOUR_DEFINED_THRESHOLD
is how many objects you want to store in the ArrayList before it has to be flushed out to the database. flushObjects() is simply the method that performs this logic. The method will block the addition of objects from the XML file until this process is complete. However, this is OK; the overhead of the database will probably be much greater than the file reading and parsing anyway.
I would suggest to first import your massive XML file into a native XML database (such as eXist if you are looking for open source stuff, never tested it myself), and then perform iterative paged queries to process your data small chunks at a time.
You may want to try StAX instead of SAX, I hear it's better for that sort of thing (I haven't used it myself).
If the data in the XML is order independent, can you multi-thread the process to split the file up or run multiple processes starting in different locations in the file? If you're not I/O bound that should help speed it along.

Advice on handling large data volumes

So I have a "large" number of "very large" ASCII files of numerical data (gigabytes altogether), and my program will need to process the entirety of it sequentially at least once.
Any advice on storing/loading the data? I've thought of converting the files to binary to make them smaller and for faster loading.
Should I load everything into memory all at once?
If not, what's a good way of loading the data partially?
What are some Java-relevant efficiency tips?
So then what if the processing requires jumping around in the data for multiple files and multiple buffers? Is constant opening and closing of binary files going to become expensive?
I'm a big fan of 'memory mapped I/O', aka 'direct byte buffers'. In Java they are called mapped byte buffers and are part of java.nio. (Basically, this mechanism uses the OS's virtual memory paging system to 'map' your files and present them programmatically as byte buffers. The OS manages moving the bytes to/from disk and memory auto-magically and very quickly.)
I suggest this approach because a) it works for me, and b) it lets you focus on your algorithm and let the JVM, OS and hardware deal with the performance optimization. All too frequently, they know what is best more so than us lowly programmers. ;)
How would you use MBBs in your context? Just create an MBB for each of your files and read them as you see fit. You will only need to store your results.
BTW: How much data are you dealing with, in GB? If it is more than 3-4 GB, then this won't work for you on a 32-bit machine, as the MBB implementation is dependent on the addressable memory space of the platform architecture. A 64-bit machine and OS will take you to 1 TB or 128 TB of mappable data.
If you are thinking about performance, then get to know Kirk Pepperdine (a somewhat famous Java performance guru). He is involved with a website, www.JavaPerformanceTuning.com, that has some more MBB details (NIO Performance Tips) and other Java performance related things.
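A minimal example of mapping a file and reading from it; note that a single MappedByteBuffer is limited to Integer.MAX_VALUE bytes, so a multi-gigabyte file has to be mapped in chunks:

import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MappedReadSketch {
    public static void main(String[] args) throws Exception {
        try (RandomAccessFile file = new RandomAccessFile("data.bin", "r");
             FileChannel channel = file.getChannel()) {
            // Map the file read-only; the OS pages bytes in on demand.
            MappedByteBuffer buffer =
                    channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
            while (buffer.hasRemaining()) {
                byte b = buffer.get();
                // process b, or use buffer.getDouble()/getLong() for fixed-width numeric records
            }
        }
    }
}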
You might want to have a look at the entries in the Wide Finder Project (do a google search for "wide finder" java).
The Wide Finder involves reading over lots of lines in log files, so look at the Java implementations and see what worked and what didn't work there.
You could convert to binary, but then you have at least one extra copy of the data if you need to keep the original around.
It may be practical to build some kind of index on top of your original ASCII data, so that if you need to go through the data again you can do it faster on subsequent passes.
To answer your questions in order:
Should I load everything into memory all at once?
Not if you don't have to. For some files you may be able to, but if you're just processing sequentially, just do some kind of buffered read through them one by one, storing whatever you need along the way.
If not, is opening what's a good way of loading the data partially?
A BufferedReader or similar is simplest, although you could look deeper into FileChannel and friends to use memory-mapped I/O and go through windows of the data at a time.
What are some Java-relevant efficiency tips?
That really depends on what you're doing with the data itself!
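A sketch of the simple buffered, sequential pass suggested above (what happens per line is of course application-specific):

import java.io.BufferedReader;
import java.io.FileReader;

public class SequentialPassSketch {
    public static void main(String[] args) throws Exception {
        // A large buffer cuts down on system calls when reading multi-gigabyte files.
        try (BufferedReader reader = new BufferedReader(new FileReader("data.txt"), 1 << 20)) {
            String line;
            while ((line = reader.readLine()) != null) {
                // parse the numbers out of the line and accumulate whatever results you need
            }
        }
    }
}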
Without any additional insight into what kind of processing is going on, here are some general thoughts from when I have done similar work.
Write a prototype of your application (maybe even "one to throw away") that performs some arbitrary operation on your data set. See how fast it goes. If the simplest, most naive thing you can think of is acceptably fast, no worries!
If the naive approach does not work, consider pre-processing the data so that subsequent runs will run in an acceptable length of time. You mention having to "jump around" in the data set quite a bit. Is there any way to pre-process that out? Or, one pre-processing step can be to generate even more data - index data - that provides byte-accurate location information about critical, necessary sections of your data set. Then, your main processing run can utilize this information to jump straight to the necessary data.
So, to summarize, my approach would be to try something simple right now and see what the performance looks like. Maybe it will be fine. Otherwise, look into processing the data in multiple steps, saving the most expensive operations for infrequent pre-processing.
Don't "load everything into memory". Just perform file accesses and let the operating system's disk page cache decide when you get to actually pull things directly out of memory.
This depends a lot on the data in the file. Big mainframes have been doing sequential data processing for a long time but they don't normally use random access for the data. They just pull it in a line at a time and process that much before continuing.
For random access it is often best to build objects with caching wrappers which know where in the file the data they need to construct is. When needed they read that data in and construct themselves. This way when memory is tight you can just start killing stuff off without worrying too much about not being able to get it back later.
You really haven't given us enough info to help you. Do you need to load each file in its entirety in order to process it? Or can you process it line by line?
Loading an entire file at a time is likely to result in poor performance even for files that aren't terribly large. Your best bet is to define a buffer size that works for you and read/process the data a buffer at a time.
I've found Informatica to be an exceptionally useful data processing tool. The good news is that the more recent versions even allow Java transformations. If you're dealing with terabytes of data, it might be time to pony up for the best-of-breed ETL tools.
I'm assuming you want to do something with the results of the processing here, like store it somewhere.
If your numerical data is regularly sampled and you need to do random access, consider storing it in a quadtree.
I strongly recommend leveraging regular expressions and looking into the "new" I/O (nio) package for faster input. Then it should go as quickly as you can realistically expect gigabytes of data to go.
If at all possible, get the data into a database. Then you can leverage all the indexing, caching, memory pinning, and other functionality available to you there.
If you need to access the data more than once, load it into a database. Most databases have some sort of bulk loading utility. If the data can all fit in memory, and you don't need to keep it around or access it that often, you can probably write something simple in Perl or your favorite scripting language.
