Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 5 years ago.
Currently I am working on transferring an image from C++ to Java.
The destination buffer is allocated by Java,
and the source buffer is the image generated by C++.
I have a uint8_t* pixelPtr, and I want to move its contents into a __uint8_t* data without copying.
The image is 1920*1080*3 bytes in total, so I want to move rather than copy the data to keep the computation fast. Is there any trick to do so?
Thank you in advance!
Let's recap:
The source is a buffer allocated in C++ by an image generation function.
The destination is a buffer allocated in Java by some other code somewhere.
You want to transfer data between the two buffers.
As long as those two buffers are distinct, there is no "trick" to avoid this. "Moving" in this context would mean swapping the pointers around, but that does nothing to the underlying buffers. You will just have to copy the data.
Explore solutions such as generating the data in the destination buffer in the first place, or making use of appropriate functionality exposed by the C++ image generation function (or the Java code). Unfortunately we can't speculate on the possible existence or form of such solutions, from here.
The standard way is to modify your C++ code so that it writes the data into a caller-provided buffer instead of wherever it wants. That is, if you have code like this
uint8_t* GenerateImage(...parameters...)
{
uint8_t* output = ... allocate ...
return output;
}
you should change it to receive the destination as a parameter
void GenerateImage(...parameters..., __uint8_t* destination)
{
... fill the destination ...
}
The latter is better C++ design anyway - this way you don't need to make a separate DestroyImage function - the memory is managed entirely by Java.
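One hedged way to wire this up from the Java side is to allocate the destination as a direct ByteBuffer and let the native code write straight into it. The sketch below is only illustrative: the library name "imagegen" and the native method generateImageInto are assumptions, not part of the original code.

import java.nio.ByteBuffer;

public class ImageBridge {
    static {
        System.loadLibrary("imagegen"); // hypothetical native library name
    }

    // Hypothetical native wrapper: on the C++ side it would call
    // GetDirectBufferAddress(env, destination) to obtain the raw pointer
    // and pass it to GenerateImage(..., destination).
    private static native void generateImageInto(ByteBuffer destination, int width, int height);

    public static ByteBuffer createFrame(int width, int height) {
        // A direct buffer's backing memory is reachable from native code without a copy.
        ByteBuffer frame = ByteBuffer.allocateDirect(width * height * 3);
        generateImageInto(frame, width, height);
        return frame;
    }
}

With this shape, the 1920*1080*3 bytes are written once by the C++ generator directly into memory that Java owns, so no second copy between the two worlds is needed.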
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
I have a C++ program and a Java program. On the C++ side, I have a vector with millions of entries. I need to transfer this data to my Java program. So far, I have tried the following:
Created a unix socket, converted the vector to a long string (serialized it) and sent it through the unix socket
Created a thrift server/client model to transfer the data
Both approaches work, but the performance I'm getting is quite low. I don't even see it using the full network bandwidth (in the case of thrift).
Also, with the unix socket approach, serializing the data to a String and then converting it back on the Java side (received byte[] to String, then split into a string array) is a very expensive operation.
What is the best way to transfer data from the C++ world to the Java world faster, with less overhead for serializing and reconstructing the objects?
If both programs are on the same machine, I would use a shared memory-mapped file.
This way both programs can access the data at full memory speed without serialization/deserialization, especially if the values are int, long or double values.
You can place the file on a tmpfs or RAM drive to avoid hitting the hard drive.
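For instance, here is a minimal sketch of the Java side, assuming the C++ program mmaps the same file and writes little-endian 64-bit integers into it; the path /dev/shm/shared.bin and the data layout are assumptions for illustration.

import java.io.RandomAccessFile;
import java.nio.ByteOrder;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class SharedReader {
    public static void main(String[] args) throws Exception {
        // /dev/shm is a tmpfs on most Linux systems, so no disk I/O is involved.
        try (RandomAccessFile file = new RandomAccessFile("/dev/shm/shared.bin", "r");
             FileChannel channel = file.getChannel()) {
            MappedByteBuffer buffer = channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
            buffer.order(ByteOrder.LITTLE_ENDIAN); // must match the layout the C++ writer uses

            // Values are read straight out of the shared mapping;
            // there is no serialization or deserialization step.
            long sum = 0;
            while (buffer.remaining() >= Long.BYTES) {
                sum += buffer.getLong();
            }
            System.out.println("sum = " + sum);
        }
    }
}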
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
I am working on a project where I will have a binary file. The file is split into multiple sections, each of which represents a list of primitive values. I need a solution where I can have a collection of objects, each of which represents a section of the file. These collections are then all held within a "file" object that represents the file as a whole.
Each collection object will need to provide sequential access to each value in the section of the file it represents. What approach would provide the fastest data retrieval without loading all the data into memory first?
Also, it would be nice if two separate collections of the same "file" object could be accessed by two separate threads, but this is less important.
A good approach is to divide the solution into layers: one for the file I/O that maps bytes to Java shorts and ints, and another for the abstraction of the file sections and the file as a whole.
java.nio's MappedByteBuffer provides a good interface between the "byte array" of a random-access file and the typed Java data you need to read out of it.
As Kayaman has mentioned, FileChannel.map() returns a MappedByteBuffer, and you can navigate it easily with its methods.
The implementation makes use of the OS's ability to map memory pages to file pages, so only the parts of the file you actually touch in memory are read from disk. (I've used this recently with Java 8 and Linux, and it performed well even on files exceeding the capacity of a single MappedByteBuffer.)
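As a rough sketch of that layering (the section offsets and lengths would come from whatever header your file format defines; the class and method names here are made up):

import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class SectionedFile {
    private final MappedByteBuffer map;

    public SectionedFile(String path) throws Exception {
        try (RandomAccessFile file = new RandomAccessFile(path, "r");
             FileChannel channel = file.getChannel()) {
            // The mapping stays valid after the channel is closed; pages are
            // loaded lazily by the OS as they are first touched.
            map = channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
        }
    }

    // Returns an independent view of one section. The slice shares the mapped
    // memory but has its own position and limit, so two threads can iterate
    // two different sections at the same time.
    public ByteBuffer section(int offset, int length) {
        ByteBuffer view = map.duplicate();
        view.position(offset);
        view.limit(offset + length);
        return view.slice();
    }
}

A collection object for a section can then wrap the returned buffer and expose its values sequentially through getShort()/getInt() as it iterates.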
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
So I have this app, a Java servlet. It uses a dictionary object that reads words from a file specified as a constructor parameter on instantiation and then serves queries.
I can do basically the same in PHP, but as I understand it the class will be instantiated on each and every request, and the file will be read again every time. In fact, I did it and it works, but it brings my humble Amazon EC2 micro instance to its knees at the ridiculously low rate of 11 requests per second or so.
My question is: Shouldn't some kind of compiler/file system optimization be kicking in and making the performance impact insignificant when the file does not change at all?
If the answer is no, I guess my design is quite poor and I should try to improve it. In that case, my second question is: What would be the best approach to improve it?
Building a servlet-like service so the code is properly reused?
Using memcached to keep the words file content in memory?
Using an RDBMS instead of a plain text file and having my dictionary query it? (Despite the dictionary being only a few KB of static data, and despite having to perform some complex queries, such as selecting a (cryptographically safe) random word from those with a length greater than some per-request user setting?)
Something else?
Your best bet is to generate a PHP file which contains the final structure of the dictionary as PHP code. You could then include() that cache file from your code, and write a new one whenever the source file changes. Store it on the filesystem; no database is needed. You could cache it in memory as well, but I don't think that is really needed at this point.
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
I've read this link about the GetByteArrayElements:
FAQ: How do I share raw data with native code?
http://developer.android.com/training/articles/perf-jni.html
It says that GetByteArrayElements will return an actual pointer to the raw data in the Dalvik heap. So I can manipulate the raw source in C++ and speed up the process, am I right?
And in that case, ReleaseByteArrayElements won't copy the data either? Or, since GetByteArrayElements returns a pointer, do I not even need to release it after manipulating the data, just like when using GetDirectBufferAddress with a FloatBuffer?
If it doesn't have to copy any data from Java to C++, is it possible to pass in and manipulate a float array via GetByteArrayElements? Please answer at: NDK: Passing Jfloat array from Java to C++ via GetByteArrayElements?
Get<Primitive>ArrayElements may or may not copy the data as it sees fit. The isCopy output parameter will tell you whether it has been copied. If data is not copied, then you have obtained a pointer to the data directly in the Dalvik heap. Read more here.
You always need to call the corresponding Release<Primitive>ArrayElements, regardless of whether a copy was made. Copying data back to the VM array isn't the only cleanup that might need to be done, although (according to the JNI documentation already linked) it is feasible that changes can be seen on the Java side before Release... has been called (iff data has not been copied).
I don't believe the VM is going to allow you to make the conversions that would be necessary to do what you are thinking. As I see it, either way you go, you will need to convert a byte array to a float array or a float array to a byte array in Java, which you cannot accomplish by type casting. The data is going to be copied at some point.
Edit:
What you are wanting to do is possible using ByteBuffer.allocateDirect, ByteBuffer.asFloatBuffer, and GetDirectBufferAddress. In Java, you can use the FloatBuffer to interpret data as floats, and the array will be available directly in native code using GetDirectBufferAddress. I posted an answer to your other question as further explanation.
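A minimal sketch of that approach, assuming a hypothetical native method processFloats() and library name, neither of which comes from the linked question:

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

public class FloatBridge {
    static {
        System.loadLibrary("native-lib"); // hypothetical library name
    }

    // Hypothetical native method: the C++ side calls
    // GetDirectBufferAddress(env, buffer) and treats the memory as a float*.
    private static native void processFloats(ByteBuffer buffer, int count);

    public static void example() {
        int count = 1024;
        ByteBuffer bytes = ByteBuffer.allocateDirect(count * 4)      // 4 bytes per float
                                     .order(ByteOrder.nativeOrder()); // match the native float layout
        FloatBuffer floats = bytes.asFloatBuffer(); // view of the same memory as floats

        floats.put(0, 3.14f);               // written on the Java side...
        processFloats(bytes, count);        // ...manipulated in C++ with no copy
        System.out.println(floats.get(0));  // changes made natively are visible here
    }
}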
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
I have an application with a Google map. Because Google Maps does not provide an API to create routes on the map in Android, I'm using the web API to get routes.
I store the route in an ArrayList and pass it through an Intent into the activity where I draw the route. With fewer than 2000 LatLng objects it works fine, but with more than 2000 objects I get a Java Binder exception; maybe an Intent has a size limit.
To handle the case of more than 2000 objects, I save the LatLng latitude and longitude values to a file and pass only the path, then load the file in the activity. But these files are very big, ~0.5-3 MB.
How can I solve this problem, or how can I compress my ArrayList of double values?
A couple of thoughts:
Don't use an object to encapsulate each lat/long pair, as that adds overhead. Use a more concise data structure such as an array of doubles or a geohash of the lat/long coordinates (see the sketch after this list).
Compress the output, as another answer suggested.
A more advanced option could be to use a trie to encode the list of geohashes. Given that a route consists of numerous coordinates very close to each other, the 'compression' is likely to be quite high.
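To illustrate the first point, here is a hypothetical packing helper. It assumes the Google Maps Android API LatLng class with its public latitude/longitude fields; the flat layout lat0, lng0, lat1, lng1, ... is just one possible choice.

import com.google.android.gms.maps.model.LatLng; // Google Maps Android API dependency assumed
import java.util.ArrayList;
import java.util.List;

public final class RoutePacking {

    // Flatten the route into a primitive array: [lat0, lng0, lat1, lng1, ...].
    // 2000 points become roughly 32 KB instead of 2000 Parcelable objects.
    public static double[] pack(List<LatLng> route) {
        double[] packed = new double[route.size() * 2];
        for (int i = 0; i < route.size(); i++) {
            packed[2 * i] = route.get(i).latitude;
            packed[2 * i + 1] = route.get(i).longitude;
        }
        return packed;
    }

    public static List<LatLng> unpack(double[] packed) {
        List<LatLng> route = new ArrayList<>(packed.length / 2);
        for (int i = 0; i < packed.length; i += 2) {
            route.add(new LatLng(packed[i], packed[i + 1]));
        }
        return route;
    }
}

The resulting double[] can then go into the Intent with putExtra() and come back out with getDoubleArrayExtra(), which is far more compact than a list of Parcelable objects.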
You could compress your data with a ZipOutputStream and pass it as raw bytes, or go via a ZipFile. Compression should be quite good if values occur repeatedly.
Make sure you don't waste space when writing the data. Don't use text/ASCII/XML data structures/files for storing the data.
Use DataOutputStream and its writeDouble(..) method for writing the data (result: 8 bytes per double value). Alternatively, you could convert the double values to float and write them via writeFloat(..), which halves the amount of data to be written. For displaying points on a map, a float is usually still precise enough.
Reading works just the other way round, using DataInputStream.
Compression would also be an option, but it is better to minimize the data to be written first, as compression can take a lot of CPU resources, which costs power and time.
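A minimal sketch of that write/read pair, assuming the route has already been flattened into a double[] and that float precision is acceptable; wrapping the file streams in a GZIPOutputStream/GZIPInputStream would add the optional compression step.

import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;

public final class RouteFile {

    // Writes [count][lat0][lng0][lat1][lng1]... as 4-byte floats.
    public static void write(String path, double[] packedRoute) throws IOException {
        try (DataOutputStream out = new DataOutputStream(
                new BufferedOutputStream(new FileOutputStream(path)))) {
            out.writeInt(packedRoute.length);
            for (double value : packedRoute) {
                out.writeFloat((float) value); // half the size of writeDouble
            }
        }
    }

    public static double[] read(String path) throws IOException {
        try (DataInputStream in = new DataInputStream(
                new BufferedInputStream(new FileInputStream(path)))) {
            double[] packedRoute = new double[in.readInt()];
            for (int i = 0; i < packedRoute.length; i++) {
                packedRoute[i] = in.readFloat();
            }
            return packedRoute;
        }
    }
}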