I'm trying to transfer a stream of strings from my C++ program to my Java program in an efficient manner, but I'm not sure how to do this. Can anyone post up links/explain the basic idea about how to implement this?
I was thinking of writing my data into a text file and then reading the text file from my Java program, but I'm not sure that this will be fast enough. I need it so that a single string can be transferred in 16ms, so that we can get around 60 data strings from the C++ program per second.
A text file with 60 strings' worth of content can easily be written and read in just a few milliseconds.
Some alternatives, if you find that you are running into timing troubles anyway:
Use socket programming. http://beej.us/guide/bgnet/output/html/multipage/index.html.
Sockets should easily be fast enough.
There are other alternatives, such as the TIBCO messaging products, which will be an order of magnitude faster than what you need: http://www.tibco.com/
Another alternative would be to use a MySQL table to pass the data, and potentially just set an environment variable to indicate that the table should be queried for the most recent entries.
Or I suppose you could just use an environment variable itself to convey all of the info -- 60 strings isn't very much.
The first two options are the more respectable solutions though.
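If you go the socket route, here is a minimal sketch of the Java receiving side (the port number and newline-delimited framing are assumptions, not something from the question):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;

public class StringReceiver {
    public static void main(String[] args) throws Exception {
        // Listen on an arbitrary port; the C++ side would connect() and write newline-terminated strings.
        try (ServerSocket server = new ServerSocket(9000);
             Socket client = server.accept();
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(client.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) {
                // Each readLine() delivers one of the ~60 strings per second.
                System.out.println(line);
            }
        }
    }
}

A single established connection like this avoids paying the connect cost for every string.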
Serialization:
protobuf or s11n
Pretty much any way you do this will be fast enough. A file is likely to be the slowest, and even that could take around 10 ms in total. A socket will be similar if you have to create a new connection as well (it's the connect, not the data, which will take most of the time). Using a socket has the advantage that the sender and receiver know how much data has been produced. If you use a file instead, you need another way to say the file is complete now and should be read, e.g. a socket. ;)
If the C++ and Java are in the same process, you can use a ByteBuffer to wrap a C array and import into Java in around 1 micro-second.
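A rough sketch of the Java side of that in-process approach follows; the library and method names are hypothetical, and the native side is assumed to return env->NewDirectByteBuffer(ptr, len) wrapping the existing C array:

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class NativeBridge {
    static { System.loadLibrary("nativebridge"); } // hypothetical native library name

    // Implemented in C++; returns a direct ByteBuffer that wraps the C array without copying.
    public static native ByteBuffer latestStringBuffer();

    public static void main(String[] args) {
        ByteBuffer buf = latestStringBuffer();
        byte[] bytes = new byte[buf.remaining()];
        buf.get(bytes); // copy out of the shared buffer before the native side reuses it
        System.out.println(new String(bytes, StandardCharsets.UTF_8));
    }
}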
I'm working on a machine learning project in Java which will involve a very large model (the output of a Support Vector Machine, for those of you familiar with that) that will need to be retrieved fairly frequently for use by the end user. The bulk of the model consists of a large two-dimensional array of fairly small objects.
Unfortunately, I do not know exactly how large the model is going to be (I've been working with benchmark data so far, and the data I'm actually going to be using isn't ready yet), nor do I know the specifications of the machine it will run on, as that is also up in the air.
I already have a method to write the model to a file as a string, but the write process takes a great deal of time and the read process takes the better part of a minute. I'd like to cut down on that time, so I had the either bright or insanely convoluted idea of writing the model to a .java file in such a way that it could be compiled and then run to produce a fully formed model.
My questions to you are, will storing and compiling the model in Java be significantly faster than reading it from the file, under the assumption that the model is about 1 MB in size? And is there some reason I haven't seen yet that this could be a fantastically stupid idea that I should not pursue under any circumstances?
Thank you for any ideas you can give me.
EDIT: apparently trying to automatically write several thousand values into code makes a method that is roughly two orders of magnitude larger than the compiler can handle. Ah well, live and learn.
Instead of writing to a string or to a .java file, you might consider creating a compact binary format for your data.
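As a sketch of what such a format could look like, assuming the model boils down to a two-dimensional array of primitive values (the exact fields are an assumption):

import java.io.BufferedOutputStream;
import java.io.DataOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;

public class ModelWriter {
    // Write the dimensions first, then the values row by row, so the reader knows how much to expect.
    public static void write(double[][] model, String path) throws IOException {
        try (DataOutputStream out = new DataOutputStream(
                new BufferedOutputStream(new FileOutputStream(path)))) {
            out.writeInt(model.length);
            out.writeInt(model.length == 0 ? 0 : model[0].length);
            for (double[] row : model) {
                for (double v : row) {
                    out.writeDouble(v);
                }
            }
        }
    }
}

Reading it back is the mirror image with a DataInputStream, and it skips all String parsing.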
Will storing and compiling the model in Java be significantly faster than reading it from the file?
That depends on the way you fashion your custom data structure to contain your model.
The question IMHO is whether the reading of the file takes long because of IO or because of computing time (=> CPU). If the latter is the case, then tough luck. If your IO (e.g. hard disc) is the cause, then you can compress the file and extract it after/while reading. There is (of course) ZIP support in Java (even for streams).
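For instance, the read side could be wrapped in a GZIPInputStream like this (a minimal sketch, assuming the model was written as primitives through a matching GZIPOutputStream; the file name is a placeholder):

import java.io.BufferedInputStream;
import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;

public class CompressedModelReader {
    public static double[][] read(String path) throws IOException {
        try (DataInputStream in = new DataInputStream(new GZIPInputStream(
                new BufferedInputStream(new FileInputStream(path))))) {
            int rows = in.readInt();
            int cols = in.readInt();
            double[][] model = new double[rows][cols];
            for (int r = 0; r < rows; r++) {
                for (int c = 0; c < cols; c++) {
                    model[r][c] = in.readDouble(); // decompression happens transparently while reading
                }
            }
            return model;
        }
    }
}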
I agree with the answer given above to use a binary input format. Let's try optimising that first. Can you provide some information? ...or have you googled working with binary data? ...buffering it? etc.?
Writing a .java file and compiling it will be quite interesting... but it is bound to give you issues at some point. However, I think you will find that it will be slightly slower than an optimised binary format, but faster than text-based input.
Also, be very careful about early optimisation. Usually, "highly configurable" and "blindingly fast" are mutually exclusive. Rather, get everything to work first and then use a profiler to optimise the really slow sections of the application.
Quick design question: I need to implement a form of communication between a client-server network in my game-engine architecture in order to send events between one another.
I had opted to create event objects and as such, I was wondering how efficient it would be to serialize these objects and pass them through an object stream over the simple socket network?
That is, how efficient is it comparatively to creating a string representation of the object, sending the string over via a char stream, and parsing the string client side?
The events will be sent every game loop, if not more; but the event object itself is just a simple wrapper for a few java primitives.
Thanks for your insight!
(tl;dr - are object streams over networks efficient?)
If performance is the primary issue, I suggest using Protocol Buffers over both your own custom serialization and Java's native serialization.
Jon Skeet gives a good explanation as well as benchmarks here: High performance serialization: Java vs Google Protocol Buffers vs ...?
If you can't use PBs, I suspect Java's native serialization will be more optimized than manually serializing/deserializing from a String. Whether or not this difference is significant is likely dependent on how complex of an object you're serializing. As always, you should benchmark to confirm your predictions.
The fact that you're sending things over a network shouldn't matter.
Edit: For time-critical applications Protocol Buffers appear to be the better choice. However, it appears to me that there is a significant increase in development time. Effectively you'll have to code every exchange message twice: once as a .proto file which is compiled and spits out Java wrappers, and once as a POJO which makes something useful out of these wrappers. But that's guessing from the documentation.
End of Edit
Abstract: Go for the Object Stream
So, which takes less time? The time it takes to code the object, send the byte stream, and decode it - all by hand - or the time it takes to code the object, send the byte stream, and decode it - all by the tried and trusted serialization mechanism?
You should make sure the objects you send are as small as possible. This can be achieved with enum values, lookup tables and such, where possible. Might shave a few bytes off each transmission. The serialization algorithm appears very speedy to me, and anything you would code would do exactly the same. When you reinvent the wheel, more often than not you end up with triangles.
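A minimal sketch of the object-stream approach, assuming the event really is just a wrapper for a few primitives (the fields, host and port here are made up):

import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.net.Socket;

public class EventSender {
    // A simple wrapper for a few primitives, as described in the question.
    static class GameEvent implements Serializable {
        private static final long serialVersionUID = 1L;
        final int type;
        final float x, y;
        GameEvent(int type, float x, float y) { this.type = type; this.x = x; this.y = y; }
    }

    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("localhost", 7777);
             ObjectOutputStream out = new ObjectOutputStream(socket.getOutputStream())) {
            out.writeObject(new GameEvent(1, 10.5f, 3.25f));
            out.reset(); // stop the stream from caching object back-references across game loops
            out.flush();
        }
    }
}

The receiving side wraps the socket's InputStream in an ObjectInputStream and calls readObject().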
I'm writing a webserver for mobile Android-based devices in Java.
This webserver is single-threaded and follows the idea behind nginx, node.js and similar: don't spawn multiple threads, just use async operations in an event loop.
While a multi-threaded webserver may give better performance on recent x86 CPUs, an ARM-based single-core CPU would have to do a lot more work.
To clarify, I know C quite well and I've implemented single-threaded webservers in plain C and multi-threaded ones in C#, taking advantage of I/O completion ports on Windows, but I've only written a simple webserver in Java, the one I want to replace with this new one.
Right now I'm using Java NIO, and I've read that ByteBuffers are quite slow when converted to Strings, but this isn't a problem because I don't need to do that; in fact, to get maximum performance I want to implement parsing and comparison at the byte level.
My question is, which method for parsing a ByteBuffer is faster?
I've seen that ByteBuffer supports a get() method, which returns a single byte and advances the position, and an array() method, which returns the backing array, so which is faster?
Can I work directly on the backing array, or should I avoid that and use get()?
I want to implement a ByteBufferPool to reuse ByteBuffers; I'll make it thread-aware (read below). Could this be an issue?
In some cases I'll compare byte by byte, applying a mask to handle case sensitivity (I mean, if the first byte is 'G', the third is 'T' and the fourth is a space (0x47, 0x54 and 0x20), I can treat the request as a GET), and in other cases I'll need to compare Strings with byte arrays, such as for headers (I'll loop through the String's chars, cast them to bytes and compare them to the bytes).
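Something like this is what I have in mind for the byte-level check (just a sketch):

import java.nio.ByteBuffer;

public class RequestSniffer {
    // ASCII letters differ from their lowercase forms only in bit 0x20, so masking it off upper-cases them.
    private static final int CASE_MASK = 0xDF;

    static boolean looksLikeGet(ByteBuffer buf) {
        if (buf.limit() < 4) {
            return false;
        }
        // Check 'G', 'T' and the trailing space without ever building a String.
        return (buf.get(0) & CASE_MASK) == 'G'
            && (buf.get(2) & CASE_MASK) == 'T'
            && buf.get(3) == ' ';
    }
}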
Sorry for these silly questions, but I don't know the Java specs or Java internals, so I need info. :)
Can someone give me a hint? :)
PS: obviously, not every operation can be handled in a do-stuff-pause-continue-return manner, so I'll implement a ThreadPool to avoid the thread-creation penalty.
Give Netty a try.
You can control the threading model and you can implement just what you need.
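A minimal single-threaded Netty skeleton, assuming Netty 4.x (the handler body is just a placeholder, not a real HTTP implementation):

import io.netty.bootstrap.ServerBootstrap;
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public class MiniServer {
    public static void main(String[] args) throws InterruptedException {
        EventLoopGroup group = new NioEventLoopGroup(1); // a single event-loop thread
        try {
            ServerBootstrap b = new ServerBootstrap();
            b.group(group)
             .channel(NioServerSocketChannel.class)
             .childHandler(new ChannelInitializer<SocketChannel>() {
                 @Override
                 protected void initChannel(SocketChannel ch) {
                     ch.pipeline().addLast(new ChannelInboundHandlerAdapter() {
                         @Override
                         public void channelRead(ChannelHandlerContext ctx, Object msg) {
                             ByteBuf in = (ByteBuf) msg;
                             // Inspect the raw bytes here (e.g. the GET check above), then respond.
                             in.release();
                             ctx.close();
                         }
                     });
                 }
             });
            b.bind(8080).sync().channel().closeFuture().sync();
        } finally {
            group.shutdownGracefully();
        }
    }
}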
I currently have a Java SAX parser that is extracting some info from a 30GB XML file.
Presently it is:
reading each XML node,
storing it into a String object,
running some regexes on the String,
storing the results to the database,
for several million elements. I'm running this on a computer with 16 GB of memory, but the memory is not being fully utilized.
Is there a simple way to dynamically 'buffer' about 10gb worth of data from the input file?
I suspect I could manually take a 'producer' 'consumer' multithreaded version of this (loading the objects on one side, using them and discarding on the other), but damnit, XML is ancient now, are there no efficient libraries to crunch em?
Just to cover the bases, is Java able to use your 16 GB? You (obviously) need to be on a 64-bit OS, and you need to run Java with -d64 -Xmx10g (or however much memory you want to allocate to it).
It is highly unlikely memory is a limiting factor for what you're doing, so you really shouldn't see it fully utilized. You should be either IO or CPU bound. Most likely it'll be IO. If it is IO, make sure you're buffering your streams, and then you're pretty much done; the only thing you can do is buy a faster hard drive.
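For the buffering point, a minimal sketch of feeding the SAX parser through a BufferedInputStream (the buffer size, file name and empty handler are placeholders):

import java.io.BufferedInputStream;
import java.io.FileInputStream;
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.helpers.DefaultHandler;

public class BufferedSaxRun {
    public static void main(String[] args) throws Exception {
        SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
        // A large buffer keeps the parser from hitting the disk in tiny reads.
        try (BufferedInputStream in = new BufferedInputStream(
                new FileInputStream("huge.xml"), 1 << 20)) {
            parser.parse(in, new DefaultHandler() {
                // startElement/characters/endElement callbacks go here
            });
        }
    }
}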
If you really are CPU-bound, it's possible that you're bottlenecking at regex rather than XML parsing.
See this (which references this)
If your bottleneck is at SAX, you can try other implementations. Off the top of my head, I can think of the following alternatives:
StAX (there are multiple implementations; Woodstox is one of the fastest; see the sketch after this list)
Javolution
Roll your own using JFlex
Roll your own ad hoc, e.g. using regex
For the last two, the more constrained your XML subset is, the more efficient you can make it.
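As promised above, a minimal StAX cursor-style sketch (the file name is a placeholder, and the regex step is only hinted at in a comment):

import java.io.BufferedInputStream;
import java.io.FileInputStream;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

public class StaxScan {
    public static void main(String[] args) throws Exception {
        XMLInputFactory factory = XMLInputFactory.newInstance();
        try (BufferedInputStream in = new BufferedInputStream(new FileInputStream("huge.xml"))) {
            XMLStreamReader reader = factory.createXMLStreamReader(in);
            while (reader.hasNext()) {
                // Pull events one at a time; only the current node is held in memory.
                if (reader.next() == XMLStreamConstants.CHARACTERS && !reader.isWhiteSpace()) {
                    String text = reader.getText();
                    // run the regexes on 'text' and queue the result for the database
                }
            }
            reader.close();
        }
    }
}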
It's very hard to say, but as others mentioned, an XML-native database might be a good alternative for you. I have limited experience with those, but I know that at least Berkeley DB XML supports XPath-based indices.
First, try to find out what's slowing you down.
How much faster is the parser when you parse from memory?
Does using a BufferedInputStream with a large size help?
Is it easy to split up the XML file? In general, shuffling through 30 GiB of any kind of data will take some time, since you have to load it from the hard drive first, so you are always limited by the speed of this. Can you distribute the load to several machines, maybe by using something like Hadoop?
No Java experience, sorry, but maybe you should change the parser? SAX should work sequentially and there should be no need to buffer most of the file ...
SAX is, essentially, "event driven", so the only state you should be holding on to from element to element is state that is relevant to that element, rather than the document as a whole. What other state are you maintaining, and why? As each "complete" node (or set of nodes) comes by, you should be discarding them.
I don't really understand what you're trying to do with this huge amount of XML, but I get the impression that
using XML was wrong for the data stored
you are buffering way beyond what you should do (and you are giving up all advantages of SAX parsing by doing so)
Apart from that: XML is not ancient and is in massive, active use. What do you think all those interactive web sites are using for their interactive elements?
Are you being slowed down by multiple small commits to your db? It sounds like you would be writing to the db almost all the time from your program, and making sure you don't commit too often could improve performance. Preparing your statements and other standard bulk-processing tricks could possibly also help.
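A hedged sketch of what batching those inserts might look like (the table, column and batch size are made up):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

public class BatchWriter {
    // Accumulate rows and commit in chunks instead of once per extracted element.
    static void insertAll(Connection conn, List<String> values) throws SQLException {
        conn.setAutoCommit(false);
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO extracted_values (value) VALUES (?)")) {
            int pending = 0;
            for (String v : values) {
                ps.setString(1, v);
                ps.addBatch();
                if (++pending % 1000 == 0) {
                    ps.executeBatch();
                    conn.commit(); // one commit per 1000 rows rather than one per row
                }
            }
            ps.executeBatch();
            conn.commit();
        }
    }
}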
Other than this early comment, we need more info. Do you have a profiler handy that can show what makes things run slowly?
You can use the Jibx library, and bind your XML "nodes" to objects that represent them. You can even overload an ArrayList, then when x number of objects are added, perform the regexes all at once (presumably using the method on your object that performs this logic) and then save them to the database, before allowing the "add" method to finish once again.
Jibx is hosted on SourceForge: Jibx
To elaborate: you can bind your XML as a "collection" of these specialized String holders. Because you define this as a collection, you must choose what collection type to use. You can then specify your own ArrayList implementation.
Override the add method along these lines (ArrayList.add returns a boolean, so the override should too):

public boolean add(T o) {
    boolean added = super.add(o);
    // Once enough objects have accumulated, push them to the database.
    if (size() >= YOUR_DEFINED_THRESHOLD) {
        flushObjects();
    }
    return added;
}
YOUR_DEFINED_THRESHOLD is how many objects you want to store in the ArrayList before it has to be flushed out to the database. flushObjects() is simply the method that performs this logic. The method will block the addition of objects from the XML file until this process is complete. However, this is OK; the overhead of the database will probably be much greater than file reading and parsing anyway.
I would suggest first importing your massive XML file into a native XML database (such as eXist if you are looking for open-source stuff; never tested it myself), and then performing iterative paged queries to process your data a small chunk at a time.
You may want to try Stax instead of SAX, I hear it's better for that sort of thing (I haven't used it myself).
If the data in the XML is order independent, can you multi-thread the process to split the file up or run multiple processes starting in different locations in the file? If you're not I/O bound that should help speed it along.
So I have a "large" number of "very large" ASCII files of numerical data (gigabytes altogether), and my program will need to process the entirety of it sequentially at least once.
Any advice on storing/loading the data? I've thought of converting the files to binary to make them smaller and for faster loading.
Should I load everything into memory all at once?
If not, what's a good way of loading the data partially?
What are some Java-relevant efficiency tips?
So then what if the processing requires jumping around in the data for multiple files and multiple buffers? Is constant opening and closing of binary files going to become expensive?
I'm a big fan of 'memory mapped I/O', aka 'direct byte buffers'. In Java they are called MappedByteBuffers and are part of java.nio. (Basically, this mechanism uses the OS's virtual memory paging system to 'map' your files and present them programmatically as byte buffers. The OS will manage moving the bytes to/from disk and memory auto-magically and very quickly.)
I suggest this approach because a) it works for me, and b) it will let you focus on your algorithm and let the JVM, OS and hardware deal with the performance optimization. All too frequently, they know what is best more so than us lowly programmers. ;)
How would you use MBBs in your context? Just create an MBB for each of your files and read them as you see fit. You will only need to store your results.
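A minimal sketch of mapping one file that way (a single MappedByteBuffer is limited to Integer.MAX_VALUE bytes, so a multi-gigabyte file would need to be mapped in chunks; the processing here is a placeholder):

import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MappedScan {
    public static void main(String[] args) throws Exception {
        try (RandomAccessFile raf = new RandomAccessFile("data.txt", "r");
             FileChannel channel = raf.getChannel()) {
            // Map the file read-only; the OS pages the bytes in on demand.
            MappedByteBuffer buf = channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
            long digits = 0;
            while (buf.hasRemaining()) {
                byte b = buf.get();
                if (b >= '0' && b <= '9') {
                    digits++; // placeholder processing: count digit characters
                }
            }
            System.out.println(digits);
        }
    }
}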
BTW: How much data are you dealing with, in GB? If it is more than 3-4GB, then this won't work for you on a 32-bit machine as the MBB implementation is defendant on the addressable memory space by the platform architecture. A 64-bit machine & OS will take you to 1TB or 128TB of mappable data.
If you are thinking about performance, then know Kirk Pepperdine (a somewhat famous Java performance guru.) He is involved with a website, www.JavaPerformanceTuning.com, that has some more MBB details: NIO Performance Tips and other Java performance related things.
You might want to have a look at the entries in the Wide Finder Project (do a google search for "wide finder" java).
The Wide finder involves reading over lots of lines in log files, so look at the Java implementations and see what worked and didn't work there.
You could convert to binary, but then you have one or more extra copies of the data if you need to keep the original around.
It may be practical to build some kind of index on top of your original ASCII data, so that if you need to go through the data again you can do it faster on subsequent passes.
To answer your questions in order:
Should I load everything into memory all at once?
Not if you don't have to. For some files you may be able to, but if you're just processing sequentially, just do some kind of buffered read through them one by one, storing whatever you need along the way.
If not, what's a good way of loading the data partially?
BufferedReaders etc. are simplest, although you could look deeper into FileChannel etc. to use memory-mapped I/O and go through windows of the data at a time.
What are some Java-relevant efficiency tips?
That really depends on what you're doing with the data itself!
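For the plain sequential case, a minimal buffered line-by-line sketch (the file name and per-line parsing are placeholders):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class SequentialScan {
    public static void main(String[] args) throws IOException {
        double total = 0;
        try (BufferedReader reader = new BufferedReader(new FileReader("numbers.txt"), 1 << 20)) {
            String line;
            while ((line = reader.readLine()) != null) {
                if (line.isEmpty()) {
                    continue;
                }
                // Parse and fold each line into the running result; nothing else is retained.
                total += Double.parseDouble(line.trim());
            }
        }
        System.out.println(total);
    }
}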
Without any additional insight into what kind of processing is going on, here are some general thoughts from when I have done similar work.
Write a prototype of your application (maybe even "one to throw away") that performs some arbitrary operation on your data set. See how fast it goes. If the simplest, most naive thing you can think of is acceptably fast, no worries!
If the naive approach does not work, consider pre-processing the data so that subsequent runs will run in an acceptable length of time. You mention having to "jump around" in the data set quite a bit. Is there any way to pre-process that out? Or, one pre-processing step can be to generate even more data - index data - that provides byte-accurate location information about critical, necessary sections of your data set. Then, your main processing run can utilize this information to jump straight to the necessary data.
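A rough sketch of that index idea: one pre-processing pass records the byte offset of each record, and later runs seek straight to what they need (record boundaries are assumed to be newlines):

import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.ArrayList;
import java.util.List;

public class OffsetIndexer {
    // One pass over the file (byte by byte for brevity), remembering where every line starts.
    static List<Long> buildLineIndex(String path) throws IOException {
        List<Long> offsets = new ArrayList<>();
        try (RandomAccessFile file = new RandomAccessFile(path, "r")) {
            long length = file.length();
            long pos = 0;
            offsets.add(0L);
            int b;
            while ((b = file.read()) != -1) {
                pos++;
                if (b == '\n' && pos < length) {
                    offsets.add(pos);
                }
            }
        }
        return offsets;
    }

    // Later runs can jump straight to record n instead of re-reading everything before it.
    static String readRecord(String path, List<Long> offsets, int n) throws IOException {
        try (RandomAccessFile file = new RandomAccessFile(path, "r")) {
            file.seek(offsets.get(n));
            return file.readLine();
        }
    }
}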
So, to summarize, my approach would be to try something simple right now and see what the performance looks like. Maybe it will be fine. Otherwise, look into processing the data in multiple steps, saving the most expensive operations for infrequent pre-processing.
Don't "load everything into memory". Just perform file accesses and let the operating system's disk page cache decide when you get to actually pull things directly out of memory.
This depends a lot on the data in the file. Big mainframes have been doing sequential data processing for a long time but they don't normally use random access for the data. They just pull it in a line at a time and process that much before continuing.
For random access it is often best to build objects with caching wrappers which know where in the file the data they need to construct is. When needed they read that data in and construct themselves. This way when memory is tight you can just start killing stuff off without worrying too much about not being able to get it back later.
You really haven't given us enough info to help you. Do you need to load each file in its entirety in order to process it? Or can you process it line by line?
Loading an entire file at a time is likely to result in poor performance even for files that aren't terribly large. Your best bet is to define a buffer size that works for you and read/process the data a buffer at a time.
I've found Informatica to be an exceptionally useful data processing tool. The good news is that the more recent versions even allow Java transformations. If you're dealing with terabytes of data, it might be time to pony up for the best-of-breed ETL tools.
I'm assuming you want to do something with the results of the processing here, like store it somewhere.
If your numerical data is regularly sampled and you need random access, consider storing it in a quadtree.
I recommend strongly leveraging Regular Expressions and looking into the "new" IO nio package for faster input. Then it should go as quickly as you can realistically expect Gigabytes of data to go.
If at all possible, get the data into a database. Then you can leverage all the indexing, caching, memory pinning, and other functionality available to you there.
If you need to access the data more than once, load it into a database. Most databases have some sort of bulk loading utility. If the data can all fit in memory, and you don't need to keep it around or access it that often, you can probably write something simple in Perl or your favorite scripting language.