I am creating two arrays in C++ which will be read on the Java side:
env->NewDirectByteBuffer
env->NewByteArray
Do these functions copy the buffer I pass to them?
Do I need to create the buffer on the heap on the C++ side, or is it OK to create it on the stack because the JVM will copy it?
For example, will this code run OK?
std::string stam = "12345";
const char *buff = stam.c_str();
jobject directBuff = env->NewDirectByteBuffer((void*)buff, (jlong) stam.length() );
Another example:
std::string md5 = "12345";
jbyteArray md5ByteArray = env->NewByteArray((jsize) (md5.length()));
env->SetByteArrayRegion(md5ByteArray, 0, (jsize) (md5.length()), (jbyte*) md5.c_str());
The strings are created on the stack. Will this code always work, or do I need to create those strings on the heap and take responsibility for deleting them after Java finishes using them?
Your use of DirectByteBuffer will almost certainly fail in spectacular, core-dumping, and unpredictable ways. And its behavior may vary between JVM implementations and operating systems. The problem is that your direct memory must remain valid for the lifetime of the DirectByteBuffer. Since your string is on the stack, it will go out of scope rather quickly. Meanwhile the Java code may or may not continue to use the DirectByteBuffer, depending on what it is. Are you writing the Java code too? Can you guarantee that its use of the DirectByteBuffer will be complete before the string goes out of scope?
Even if you can guarantee that, realize that Java's GC is non-deterministic. It is all too easy to think that your DirectByteBuffer isn't being used any more, but meanwhile it is wandering around in unreclaimed objects, which eventually get hoovered up by the GC, which may call some finalize() method that accidentally touches the DirectByteBuffer, and -- kablooey! In practice, it is very difficult to make these guarantees except for blocks of "shared memory" that never go away for the life of your application.
NewDirectByteBuffer is also not that fast (at least not in Windows), despite the intuitive assumption that performance is what it is all about. I've found experimentally that it is faster to copy 1000 bytes than it is to create a single DirectByteBuffer. It is usually much faster to have your Java pass a byte[] into the C++ and have the C++ copy bytes into it (ahem, assuming they fit). Overall, I make these recommendations:
Call NewByteArray() and SetByteArrayRegion(), return the resulting jbyteArray to Java, and have no worries.
If performance is a requirement, pass the byte[] from Java to C++ and have C++ fill it in (see the sketch after this list). You might need two C++ calls, one to get the size and the next to get the data.
If the data is huge, use NewDirectByteBuffer and make sure that the C++ data stays around "forever", or until you are darn certain that the DirectByteBuffer has been disposed.
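For the second recommendation, the Java side might look like the sketch below (the class and native method names are hypothetical; the C++ side would copy into the array with SetByteArrayRegion or via GetByteArrayElements):

public class NativeData {
    // Hypothetical natives: one call to learn the size, one to fill a byte[].
    private static native int nativeDataSize();
    private static native void nativeFillData(byte[] out);

    public static byte[] readData() {
        byte[] out = new byte[nativeDataSize()]; // Java owns this memory
        nativeFillData(out);                     // C++ copies bytes into it
        return out;
    }
}

Because Java owns the array, there is no lifetime to coordinate: the C++ buffer only has to stay valid for the duration of each native call.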
I've also read that both C++ and Java can memory-map the same file, and that this works very well for large data.
NewDirectByteBuffer: "Allocates and returns a direct java.nio.ByteBuffer referring to the block of memory starting at the memory address address and extending capacity bytes.
"Native code that calls this function and returns the resulting byte-buffer object to Java-level code should ensure that the buffer refers to a valid region of memory that is accessible for reading and, if appropriate, writing. An attempt to access an invalid memory location from Java code will either return an arbitrary value, have no visible effect, or cause an unspecified exception to be thrown.".
No copying there.
New<Primitive>Array: only arguments are JNIEnv * and length, so there is nothing to copy.
Set<Primitive>ArrayRegion: "A family of functions that copies back a region of a primitive array from a buffer." So that is where the copy happens.
Let’s say I’ve mapped a memory region [0, 1000] and now I have MappedByteBuffer.
Can I read and write to this buffer from multiple threads at the same time without locking, assuming that each thread accesses a different part of the buffer, e.g. T1 [0, 500), T2 [500, 1000)?
If the above is true, is it possible to determine whether it’s better to create one big buffer for multiple threads, or smaller buffer for each thread?
Detailed Intro:
If you wanna learn how to answer those questions yourself, check their implementation source code:
MappedByteBuffer: https://github.com/himnay/java7-sourcecode/blob/master/java/nio/MappedByteBuffer.java (notice it's still abstract, so you cannot instantiate it directly)
extends ByteBuffer: https://github.com/himnay/java7-sourcecode/blob/master/java/nio/ByteBuffer.java
extends Buffer: https://github.com/himnay/java7-sourcecode/blob/329bbb33cbe8620aee3cee533eec346b4b56facd/java/nio/Buffer.java (which only does index checks, and does not grant an actual access to any buffer memory)
Now it gets a bit more complicated:
When you wanna allocate a MappedByteBuffer, you will get either a
HeapByteBuffer: https://github.com/himnay/java7-sourcecode/blob/329bbb33cbe8620aee3cee533eec346b4b56facd/java/nio/HeapByteBuffer.java
or a DirectByteBuffer: https://github.com/himnay/java7-sourcecode/blob/329bbb33cbe8620aee3cee533eec346b4b56facd/java/nio/DirectByteBuffer.java
Instead of having to browse internet pages, you could also simply download the source code packages for your Java version and attach them in your IDE so you can see the code in development AND debug modes. A lot easier.
Short (incomplete) answer:
Neither of them secures against multithreaded access.
So if you ever needed to resize the MappedByteBuffer, you might get stale or even bad (ArrayIndexOutOfBoundsException) accesses.
If the size is constant, you can rely on either implementation to be "thread safe", as far as your requirements are concerned.
On a side note, here also lies a design flaw that crept into the Java implementation:
MappedByteBuffer extends ByteBuffer
ByteBuffer has the heap byte[] called "hb"
DirectByteBuffer extends MappedByteBuffer extends ByteBuffer
So DirectByteBuffer still has ByteBuffer's byte[] hb buffer,
but does not use it
and instead creates and manages its own Buffer
This design flaw comes from the step-by-step development of those classes (they were not all planned and implemented at the same time), AND from the topic of package visibility, resulting in an inversion of dependency/hierarchy in the implementation.
Now to the true answer:
If you wanna do proper object-oriented programming, you should NOT share resources unless utterly needed.
This ESPECIALLY means that each Thread should have its very own Buffer.
Advantage of having one global buffer: the only "advantage" is to reduce the additional memory consumption of additional object references. But this impact is SO MINIMAL (not even a 1:10000 change in your app's RAM consumption) that you will NEVER notice it. There are so many other objects allocated for any number of weird (Java) reasons everywhere that this is the least of your concerns. Plus you would have to introduce additional data (index boundaries), which lessens the 'advantage' even more.
The big advantages of having separate buffers:
You will never have to take care of the pointer/index arithmetic, especially when you need more threads at any given time
You can freely allocate new threads at any time without having to rearrange any data or do more pointer arithmetic
You can freely reallocate/resize each individual buffer when needed (without worrying about all the other threads' indexing requirements)
Debugging: you can much more easily locate problems that result from "writing out of boundaries", because if a thread tried, it alone would crash, rather than other threads having to deal with corrupted data
Java ALWAYS checks each array access (on normal heap arrays like byte[]) before it accesses it, exactly to prevent side effects
think back: once upon a time there was the big step in operating systems to introduce linear address space so programs would NOT have to care about where in the hardware RAM they're loaded.
Your one-buffer-design would be the exact step backwards.
Conclusion:
If you wanna have a really bad design choice - which WILL make life a lot harder later on - you go with one global Buffer.
If you wanna do it the proper OO way, separate those buffers. No convoluted dependencies and side effect problems.
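As a minimal sketch of the separate-buffers approach (the file name and region sizes are made up), each thread gets its own mapping over a disjoint region, so there is no shared index arithmetic at all:

import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class PerThreadBuffers {
    public static void main(String[] args) throws Exception {
        try (RandomAccessFile raf = new RandomAccessFile("data.bin", "rw")) {
            FileChannel ch = raf.getChannel();
            // Two independent mappings over disjoint regions [0, 500) and [500, 1000)
            MappedByteBuffer t1Buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, 500);
            MappedByteBuffer t2Buf = ch.map(FileChannel.MapMode.READ_WRITE, 500, 500);
            new Thread(() -> t1Buf.put(0, (byte) 1)).start(); // T1 touches only its buffer
            new Thread(() -> t2Buf.put(0, (byte) 2)).start(); // T2 likewise
        }
    }
}

Each buffer carries its own position, limit, and capacity, so resizing or replacing one thread's buffer never disturbs the others.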
You create a variable to store a value so that you can refer to that variable in the future. I've heard that you must set a variable to 'null' once you're done using it so the garbage collector can get to it (if it's a field variable).
If I have a variable that I won't be referring to again, would removing the reference/value variables I'm using (and just using the numbers when needed) save memory? For example:
int number = 5;

public void method() {
    System.out.println(number);
}
Would that take more space than just plugging '5' into the println method?
I have a few integers that I don't refer to ever again in my code (a game loop), but I've seen others use reference variables for things that really didn't need them. I've been looking into memory management, so please let me know, along with any other advice you have to offer about managing memory.
I've heard that you must set a variable to 'null' once you're done using it so the garbage collector can get to it (if it's a field var).
This is very rarely a good idea. You only need to do this if the variable is a reference to an object which is going to live much longer than the object it refers to.
Say you have an instance of class A and it has a reference to an instance of class B. Class B is very large and you don't need it for very long (a pretty rare situation). You might null out the reference to class B to allow it to be collected.
A better way to handle objects which don't live very long is to hold them in local variables. These are naturally cleaned up when they drop out of scope.
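A hypothetical illustration of the difference (names are made up):

public class Cache {
    private byte[] bigField = new byte[50_000_000];

    void doneWithBigField() {
        bigField = null; // worthwhile only because 'this' may far outlive the data
    }

    void doWork() {
        byte[] scratch = new byte[50_000_000]; // local variable
        // ... use scratch ...
    } // scratch drops out of scope here; no nulling needed
}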
If I were to have a variable that I won't be referring to again, would removing the reference vars I'm using (and just using the numbers when needed) save memory?
You don't free the memory for a primitive until the object which contains it is cleaned up by the GC.
Would that take more space than just plugging '5' into the println method?
The JIT is smart enough to turn fields which don't change into constants.
Been looking into memory management, so please let me know, along with any other advice you have to offer about managing memory
Use a memory profiler instead of chasing down 4 bytes of memory. Something like 4 million bytes might be worth chasing if you target a smart phone. If you target a PC, I wouldn't bother with 4 million bytes.
In your example, number is a primitive, so it will be stored as a value.
If you want to use a reference then you should use one of the wrapper types (e.g. Integer)
So notice variables are on the stack, and the objects they refer to are on the heap. Having variables is not too bad, but yes, they do create references to other entities. However, in the simple case you describe it's of no real consequence. If a value is never read again and is within a contained scope, the compiler will probably strip it out before runtime. Even if it didn't, the garbage collector will be able to safely remove it after the stack frame is popped. If you are running into issues where you have too many stack variables, it's usually because you have really deep stacks. The amount of stack space needed per thread is a better place to adjust than to make your code unreadable. Setting variables to null is also no longer needed.
It's really a matter of opinion. In your example, System.out.println(5) would be slightly more efficient, as you only refer to the number once and never change it. As was said in a comment, int is a primitive type and not a reference - thus it doesn't take up much space. However, you might want to set actual reference variables to null only if they are used in a very complicated method. All local reference variables are garbage collected when the method they are declared in returns.
Well, the JVM memory model works something like this: values are stored on one pile of memory called the stack, and objects are stored on another pile of memory called the heap. The garbage collector looks for garbage by looking at a list of objects you've made and seeing which ones aren't pointed at by anything. This is where setting a variable to null comes in; all nonprimitive (think of classes) variables are really references that point to an object on the heap, so by setting the reference to null the garbage collector can see that there's nothing else pointing at the object, and it can decide to garbage collect it. All Java objects are stored on the heap so they can be seen and collected by the garbage collector.
Primitive (ints, chars, doubles, that sort of thing) values, however, aren't stored on the heap. They're created and stored temporarily as they're needed, and there's not much you can do there, but thankfully compilers nowadays are really efficient and will avoid needing to store them on the JVM stack unless they absolutely need to.
On a bytecode level, that's basically how it works. The JVM is a stack-based machine, with a couple of instructions to allocate objects on the heap as well, and a ton of instructions to manipulate, push, and pop values off the stack. Local variables are stored on the stack, allocated objects on the heap.* These are the heap and the stack I'm referring to above. Here's a pretty good starting point if you want to get into the nitty gritty details.
In the resulting compiled code, there's a bit of leeway in terms of implementing the heap and stack. Allocation is implemented as allocation; there's really no way around doing so. Thus the virtual machine heap becomes an actual heap, and allocations in the bytecode are allocations in actual memory. But you can get around using a stack to some extent, since instead of storing values on a stack (and accessing a ton of memory), you can store them in registers on the CPU, which can be up to a hundred times (maybe even a thousand) faster than storing them in memory. But there are cases where this isn't possible (look up register spilling for one example of when this may happen), and using a stack to implement a stack kind of makes a lot of sense anyway.
And quite frankly in your case a few integers probably won't matter. The compiler will probably optimize them out by itself in this case anyways. Optimization should always happen after you get it running and notice it's a tad slower than you'd prefer it to be. Worry about making simple, elegant, working code first then later make it fast (and hopefully) simple, elegant, working code.
Java's actually very nicely made so that you shouldn't have to worry about nulling variables very often. Whenever you stop needing to use something, it will usually incidentally be disappearing from the scope of your program (and thus becoming eligible for garbage collection). So I guess the real lesson here is to use local variables as often as you can.
*There's also a constant pool, a local variable pool, and a couple other things in memory but you have close to no control over the size of those things and I want to keep this fairly simple.
This is NOT about whether primitives go to the stack or heap, it's about where they get saved in the actual physical RAM.
Take a simple example:
int a = 5;
I know 5 gets stored into a memory block.
My area of interest is where does the variable 'a' get stored?
Related sub-questions: where does 'a' get associated with the memory block that contains the primitive value of 5? Is there another memory block created to hold 'a'? That would seem to make 'a' a pointer to an object, but it's a primitive type involved here.
To expound on Do Java primitives go on the Stack or the Heap? -
Let's say you have a function foo():
void foo() {
    int a = 5;
    System.out.println(a);
}
Then when the compiler compiles that function, it'll create bytecode instructions that leave 4 bytes of room on the stack whenever that function is called. The name 'a' is only useful to you - to the compiler, it just creates a spot for it, remembers where that spot is, and everywhere where it wants to use the value of 'a' it instead inserts references to the memory location it reserved for that value.
If you're not sure how the stack works, it works like this: every program has at least one thread, and every thread has exactly one stack. The stack is a continuous block of memory (that can also grow if needed). Initially the stack is empty, until the first function in your program is called. Then, when your function is called, your function allocates room on the stack for itself, for all of its local variables, for its return types etc.
When your function main call another function foo, here's one example of what could happen (there are a couple simplifying white lies here):
main wants to pass parameters to foo. It pushes those values onto the top of the stack in such a way that foo will know exactly where they will be put (main and foo will pass parameters in a consistent way).
main pushes the address of where program execution should return to after foo is done. This increments the stack pointer.
main calls foo.
When foo starts, it sees that the stack is currently at address X
foo wants to allocate 3 int variables on the stack, so it needs 12 bytes.
foo will use X + 0 for the first int, X + 4 for the second int, X + 8 for the third.
The compiler can compute this at compile time, and it can rely on the value of the stack pointer register (ESP on x86 systems), so the assembly code it writes out does stuff like "store 0 in the address ESP + 0", "store 1 into the address ESP + 4", etc.
The parameters that main pushed on the stack before calling foo can also be accessed by foo by computing some offset from the stack pointer.
foo knows how many parameters it takes (say 3) so it knows that, say, X - 8 is the first one, X - 12 is the second one, and X - 16 is the third one.
So now that foo has room on the stack to do its work, it does so and finishes.
Right before main called foo, main wrote its return address on the stack before incrementing the stack pointer.
foo looks up the address to return to - say that address is stored at ESP - 4 - foo looks at that spot on the stack, finds the return address there, and jumps to the return address.
Now the rest of the code in main continues to run and we've made a full round trip.
Note that each time a function is called, it can do whatever it wants with the memory pointed to by the current stack pointer and everything after it. Each time a function makes room on the stack for itself, it increments the stack pointer before calling other functions to make sure that everybody knows where they can use the stack for themselves.
I know this explanation blurs the line between x86 and java a little bit, but I hope it helps to illustrate how the hardware actually works.
Now, this only covers 'the stack'. The stack exists for each thread in the program and captures the state of the chain of function calls between each function running on that thread. However, a program can have several threads, and so each thread has its own independent stack.
What happens when two function calls want to deal with the same piece of memory, regardless of what thread they're on or where they are in the stack?
This is where the heap comes in. Typically (but not always) one program has exactly one heap. The heap is called a heap because, well, it's just a big ol heap of memory.
To use memory in the heap, you have to call allocation routines - routines that find unused space and give it to you, and routines that let you return space you allocated but are no longer using. The memory allocator gets big pages of memory from the operating system, and then hands out individual little bits to whatever needs it. It keeps track of what the OS has given to it, and out of that, what it has given out to the rest of the program. When the program asks for heap memory, it looks for the smallest chunk of memory that it has available that fits the need, marks that chunk as being allocated, and hands it back to the rest of the program. If it doesn't have any more free chunks, it could ask the operating system for more pages of memory and allocate out of there (up until some limit).
In languages like C, those memory allocation routines I mentioned are usually called malloc() to ask for memory and free() to return it.
Java, on the other hand, doesn't have explicit memory management like C does; instead it has a garbage collector. You allocate whatever memory you want, and then when you're done, you just stop using it. The Java runtime environment will keep track of what memory you've allocated, will scan your program to find out whether you're still using all of your allocations, and will automatically deallocate the chunks you aren't.
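In Java terms, the contrast with the C routines above is roughly this (a trivial, illustrative sketch):

public class Alloc {
    // Allocation looks like malloc(), but typed and zero-initialized.
    static int[] makeArray() {
        int[] data = new int[1000];
        return data; // no free(): once the array becomes unreachable, the GC reclaims it
    }
}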
So now that we know that memory is allocated on the heap or the stack, what happens when I create a private variable in a class?
public class Test {
private int balance;
...
}
Where does that memory come from? The answer is the heap. You have some code that creates a new Test object - Test myTest = new Test(). Calling the Java new operator causes a new instance of Test to be allocated on the heap. Your variable myTest stores the address of that allocation. balance is then just some offset from that address - probably 0, actually.
The answer at the very bottom is all just... accounting.
...
The white lies I spoke about? Let's address a few of those.
Java is first a computer model - when you compile your program to bytecode, you're compiling to a completely made-up computer architecture that doesn't have registers or assembly instructions like any other common CPU - Java, and .Net, and a few others, use a stack-based processor virtual machine, instead of a register-based machine (like x86 processors). The reason is that stack based processors are easier to reason about, and so its easier to build tools that manipulate that code, which is especially important to build tools that compile that code to machine code that will actually run on common processors.
The stack pointer for a given thread typically starts at some very high address and then grows down, instead of up, at least on most x86 computers. That said, since that's a machine detail, it's not actually Java's problem to worry about (Java has its own made-up machine model to worry about, its the Just In Time compiler's job to worry about translating that to your actual CPU).
I mentioned briefly how parameters are passed between functions, saying stuff like "parameter A is stored at ESP - 8, parameter B is stored at ESP - 12", etc. This is generally called the "calling convention", and there are more than a few of them. On x86-32, registers are sparse, and so many calling conventions pass all parameters on the stack. This has some tradeoffs, particularly that accessing those parameters might mean a trip to RAM (though caching might mitigate that). x86-64 has a lot more named registers, which means that the most common calling conventions pass the first few parameters in registers, which presumably improves speed. Additionally, since the Java JIT is the only guy that generates machine code for the entire process (excepting native calls), it can choose to pass parameters using any convention it wants.
I mentioned how when you declare a variable in some function, the memory for that variable comes from the stack - that's not always true, and it's really up to the whims of the environment's runtime to decide where to get that memory from. In C#/DotNet's case, the memory for that variable could come from the heap if the variable is used as part of a closure - this is called "heap promotion". Most languages deal with closures by creating hidden classes. So what often happens is that the method local members that are involved in closures are rewritten to be members of some hidden class, and when that method is invoked, instead allocate a new instance of that class on the heap and stores its address on the stack; and now all references to that originally-local variable occur instead through that heap reference.
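Java handles captures slightly differently from C#: instead of promoting the stack slot itself, it copies the (effectively final) value into the heap-allocated closure object. A small illustration (class and method names are made up):

import java.util.function.IntSupplier;

public class CaptureDemo {
    static IntSupplier make() {
        int local = 42;      // lives in make()'s stack frame...
        return () -> local;  // ...but a copy is stored in the lambda object on the heap
    }

    public static void main(String[] args) {
        IntSupplier s = make();           // make()'s frame is long gone
        System.out.println(s.getAsInt()); // prints 42 from the heap copy
    }
}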
I think I get that you're not asking whether data is stored on the heap or the stack; I've puzzled over the same question!
The answer depends heavily on the programming language and on how the operating system deals with processes and variables.
It's an interesting one: when I was at university studying C and C++, I ran into the same question. After reading some assembly code compiled by GCC, I have a bit of a grasp of this. Let's discuss it, and if anything is wrong, please comment so I can learn more.
In my opinion, the variable name is not stored at all; only the variable's value is. In assembly code there are no real variable names (apart from short labels); every so-called variable is just an offset into the stack or the heap.
I think that's a hint: since assembly deals with variable names this way, other languages may use the same strategy. They just store an offset to the real place holding the data.
Let's make an example. Say the variable a has type int and is placed at address #1000. In memory:
addr type value
#1000 int 5
Here #1000 is the offset at which the real data is stored.
As you can see, the data is put at the real offset.
In my understanding of a process, all variables are replaced by addresses at the beginning of the process, which means the CPU only deals with addresses already allocated in memory.
Let's review this procedure again. Suppose you have defined:
int a = 5; print(a);
After compilation, the program is transformed into another format (this is purely illustrative):
stack:0-4 int 5
print stack:0-4
In the actually executing process, I think the memory will look like this:
#2000 4 5 //allocate 4 bytes starting at #2000, and put 5 into it
print #2000 4 //read 4 bytes from #2000, then print
Since the process's memory is allocated for it by the system, #2000 stands in for the variable name: the name is replaced by a memory address, the data 5 is read from that address, and then the print command executes.
RETHINK
Having finished writing this, I find it may be hard for others to picture; we can discuss any problems or mistakes I've made.
Is there any Java API available which would help in simulating a fixed amount of memory being used?
I am building a dummy application that contains no implementations in its methods. All I would like to do within these methods is simulate a certain amount of memory being used up - is this at all possible?
The simplest way to consume a fixed amount of memory is to create a byte array of that size and retain it.
byte[] bytes = new byte[1000*1000]; // use 1 MB of memory.
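If the simulation needs to hold that memory across method calls, keep the array reachable from a field; a minimal sketch (class and method names are made up):

import java.util.ArrayList;
import java.util.List;

public class MemoryHog {
    // A live static reference keeps the chunks from ever being collected.
    private static final List<byte[]> retained = new ArrayList<byte[]>();

    public static void consume(int megabytes) {
        for (int i = 0; i < megabytes; i++) {
            retained.add(new byte[1024 * 1024]); // one 1 MB chunk per iteration
        }
    }
}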
This could get tricky with the way Java handles memory, considering applications are run through the runtime environment; I don't know offhand whether it goes to the heap, etc.
One simple way might be loading text files into memory at the specific sizes you want, then making sure they don't get garbage collected once the method returns, i.e. by keeping a live reference to them.
I'm wondering how I'd code up a ByteBuffer recycling class that can get me a ByteBuffer which is at least as big as a specified length, and which can mark ByteBuffer objects as in use so they aren't handed out again while my code is using them. This would avoid re-constructing DirectByteBuffers and such over and over, reusing existing ones instead. Is there an existing Java library which can do this very effectively? I know Javolution can work with object recycling, but does that extend to the ByteBuffer class in this context, with the requirements set out?
It would be more to the point to be more conservative in your usage patterns in the first place. For example there is lots of code out there that shows allocation of a new ByteBuffer on every OP_READ. This is insane. You only need two ByteBuffers at most per connection, one for input and one for output, and depending on what you're doing you can get away with exactly one. In extremely simple cases like an echo server you can get away with one BB for the entire application.
I would look into that rather than paper over the cracks with yet another layer of software.
This is just advice, not an answer. If you do implement some caching for DirectByteBuffer, then be sure to read about the GC implications, because the memory consumed by DirectByteBuffer is not tracked by the garbage collector.
Some references:
A thread - featuring Stack Overflow's tackline
A blog post on the same subject
And the followup
Typically, you would use a combination of a ThreadLocal and a SoftReference wrapper: the former to simplify synchronization (essentially eliminating the need for it); the latter to make the buffer reclaimable if there's not enough memory (keeping in mind other comments wrt. GC issues with direct buffers). It's actually quite simple: check whether the SoftReference has a buffer with a big enough size; if not, allocate one; if yes, clear the reference. Once you are done with the buffer, re-set the reference to point to it.
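A minimal sketch of that pattern (untested; the sizing and clearing policy is up to you):

import java.lang.ref.SoftReference;
import java.nio.ByteBuffer;

public class BufferRecycler {
    private static final ThreadLocal<SoftReference<ByteBuffer>> POOL =
            new ThreadLocal<SoftReference<ByteBuffer>>();

    // Borrow a buffer of at least 'size' bytes; per-thread, so no locking needed.
    public static ByteBuffer acquire(int size) {
        SoftReference<ByteBuffer> ref = POOL.get();
        ByteBuffer buf = (ref == null) ? null : ref.get();
        if (buf == null || buf.capacity() < size) {
            return ByteBuffer.allocateDirect(size); // nothing usable; allocate fresh
        }
        POOL.set(null); // clear the reference while the buffer is checked out
        buf.clear();
        return buf;
    }

    // Hand the buffer back so the next acquire() on this thread can reuse it.
    public static void release(ByteBuffer buf) {
        POOL.set(new SoftReference<ByteBuffer>(buf));
    }
}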
Another question is whether a ByteBuffer is needed at all, compared to a regular byte[]. Many developers assume ByteBuffers are better performance-wise, but that assumption is not usually backed by actual data (i.e. testing to see whether there is a performance difference, and in which direction). The reason byte[] may often be faster is that code accessing it can be simpler, and easier for HotSpot to JIT efficiently.