How to call ALSA allocation #define from JNA? - java

I am trying to use JNA to call ALSA library functions from Java. I have been successful calling the basic open/close APIs snd_pcm_open() and snd_pcm_close() so I know I have the library loading OK and the basics right.
The first thing my application needs to do with ALSA is call snd_pcm_hw_params_alloca() to allocate a hardware parameter structure. But this is defined in the C header file as a macro, not a symbol:
#define snd_pcm_hw_params_alloca(ptr) __snd_alloca(ptr, snd_pcm_hw_params)
For a C application it would be called like this:
snd_pcm_hw_params_t *params;
snd_pcm_hw_params_alloca(&params);
and "params" is then passed to many subsequent API calls.
I am not much of a C expert, so I cannot figure out exactly what this is doing. I expected a memory allocation of a structure, but the next line in the header file declares snd_pcm_hw_params like:
int snd_pcm_hw_params(snd_pcm_t *pcm, snd_pcm_hw_params_t *params);
So does that mean the macro is returning a pointer to a function? How can I model this in JNA?

The macro gives you a pointer to a structure whose definition you do not know. (The structure is opaque - this is why you need all those accessor functions.)
alloca() allocates memory from the stack, which implies that it's freed as soon as the current function returns. This means that you cannot wrap this into a function to be called from Java.
Instead of snd_pcm_hw_params_alloca(), use snd_pcm_hw_params_malloc(), and don't forget to call snd_pcm_hw_params_free() at the correct time.
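For what it's worth, a minimal JNA sketch of that advice (the interface name and the helper method are my own; Native.load is the JNA 5 API, older versions use Native.loadLibrary - check the exact signatures against your alsa-lib headers):
import com.sun.jna.Library;
import com.sun.jna.Native;
import com.sun.jna.Pointer;
import com.sun.jna.ptr.PointerByReference;

public interface Alsa extends Library {
    // "asound" resolves to libasound.so on Linux.
    Alsa INSTANCE = Native.load("asound", Alsa.class);

    // int snd_pcm_hw_params_malloc(snd_pcm_hw_params_t **ptr);
    int snd_pcm_hw_params_malloc(PointerByReference ptr);

    // void snd_pcm_hw_params_free(snd_pcm_hw_params_t *obj);
    void snd_pcm_hw_params_free(Pointer params);

    // Convenience wrapper (my own addition, not an ALSA call).
    static Pointer allocHwParams() {
        PointerByReference ref = new PointerByReference();
        int err = INSTANCE.snd_pcm_hw_params_malloc(ref);
        if (err < 0) throw new IllegalStateException("snd_pcm_hw_params_malloc failed: " + err);
        return ref.getValue();   // opaque handle to pass to the other snd_pcm_hw_params_* calls
    }
}
The Pointer you get back stays opaque on the Java side; just remember the matching INSTANCE.snd_pcm_hw_params_free(params) when you are done, as the answer says.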

Related

How are Class declarations and definitions stored in object oriented languages (C++) after compilation?

I understand how memory is organised for C programs (the stack, heap, function calls, etc.).
Now, I really don't understand how all these things work in object-oriented languages (to be more specific, C++).
I understand that whenever I use the new keyword, the space for the object is allocated onto the heap.
Some of my basic questions regarding this are:
1) Are class definitions stored somewhere in memory during execution of the program?
2) If yes, then where and how are they stored? If no, then how are the functions dispatched at run time (in the case of virtual/non-virtual functions)?
3) When an object is allocated memory, what details about the object are stored in it? (Which class it belongs to, the member functions, the public/private variables/functions, etc.)
So basically, can someone please explain how object-oriented code gets converted after/during compilation so that these O.O.P. features are implemented?
I am comfortable with Java/C++. So you can explain the logic with either of the languages since both have quite distinct features.
Also, please add any reference links so that I can read it from there too, just in case some further doubts arise!
Thanks!
1) Are class definitions stored somewhere in memory during execution of the program?
In C++, no. In Java, yes.
2) If yes, then where and how are they stored? If no, then how are the functions dispatched at run time (in the case of virtual/non-virtual functions)?
In C++, calls to non-virtual functions are replaced by the compiler with the actual static address of the function; calls to virtual functions work through a virtual table. new is translated to memory allocation (the compiler knows the precise size) followed by a call to the (statically-determined) constructor. A field access is translated by the compiler to accessing memory in a statically-known offset from the beginning of the object.
It's similar in Java - in particular, a virtual table is used for virtual calls - except that field access can be done symbolically.
3) When an object is allocated memory, what details about the object are stored in it? (Which class it belongs to, the member functions, the public/private variables/functions, etc.)
In C++, no metadata is stored (well, with the exception of some bits needed for RTTI). In Java you get type information and visibility for all members and a few other things - you can check out the Java class file definition for more information.
So basically, can someone please explain how object-oriented code gets converted after/during compilation so that these O.O.P. features are implemented?
As you can see from my answers above, it really depends on the language.
In a language like C++, the heavy lifting is done by the compiler, and the resulting code has very little to do with object-oriented concepts - in fact, the typical target language for a C++ compiler (native binary code) is untyped.
In a language like Java, the compiler targets an intermediate representation which usually contains a lot of extra details - type information, member visibility, etc. This is also what enables reflection in those sorts of languages.
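As a tiny illustration of that retained metadata (my own example, not part of the original answer), Java lets you enumerate a class's members at run time:
import java.lang.reflect.Field;
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;

public class ReflectDemo {
    public static void main(String[] args) {
        Class<?> c = String.class;   // any loaded class will do
        for (Method m : c.getDeclaredMethods()) {
            // Method names, parameter lists and visibility all survive compilation.
            System.out.println(Modifier.toString(m.getModifiers()) + " " + m.getName());
        }
        for (Field f : c.getDeclaredFields()) {
            System.out.println(Modifier.toString(f.getModifiers()) + " "
                    + f.getType().getSimpleName() + " " + f.getName());
        }
    }
}
Nothing comparable exists in standard C++; RTTI only gives you typeid and dynamic_cast.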
Are class definitions stored somewhere in memory during execution of the program?
Definitions are not preserved - at least not in the sense of maintaining the information you have at compile time.
When an object is allocated memory, what details about the object are stored in it? (Which class it belongs to, the member functions, the public/private variables/functions, etc.)
During compilation, things like references to fields are transformed into dereferences of pointers at a fixed offset. For example, a->first might be translated as something like *(a + 4), a->second as *(a + 8), and so on. The actual numbers will depend on the sizes of the previous fields, the target architecture, etc.
Similar things apply for the size of the objects (for purposes of allocation and deallocation).
In short, the sizes of the objects and the offsets of their fields are known at compile time and they are replaced in the actual binary.
If no, then how are the functions dispatched at run time (in the case of virtual/non-virtual functions)?
Things like virtual method calls are typically translated in a similar way to fields, since they too can be considered "fields" of a hidden data structure (called the vtable) of that class. A pointer to the vtable of a given class is stored in every object of that class, if it has virtual methods.
The correct implementations of non-virtual methods are known at compile time and thus these methods can be "linked" on the spot without the use of a vtable.
Details may differ, but generally for each C++ class we have:
a set of its methods, each method is just a function, and
virtual methods table: an array where each element refers to a method of this class or one of its superclasses
An object without virtual methods is just a structure, like in C. As soon as a virtual method is declared, the object gets a hidden field which refers to the virtual table (below, vmt).
Invocation of a non-virtual method obj.m(arg) is converted to an invocation of a C-like function m$(obj, arg), where m$ is some artificial identifier generated by the C++ compiler to distinguish the method named m from methods with the same name in other classes.
Invocation of a virtual method obj.m(arg) is converted to (obj->vmt[N])(obj, arg); that is, the actual function is taken from the object's virtual table. Each method has its own number in the table. This number is known at compile time and hardcoded into the invocation instruction sequence.
No other information is saved or used at runtime for ordinary execution. More information can be kept for debugging purposes.
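If it helps, here is a rough Java simulation of that (obj->vmt[N])(obj, arg) dispatch - purely illustrative, with names I made up; no compiler literally emits this:
interface Slot {                       // one entry in the simulated virtual table
    void invoke(Obj self, int arg);
}

class Obj {
    final Slot[] vmt;                  // the hidden field: a reference to the class's table
    Obj(Slot[] vmt) { this.vmt = vmt; }
}

public class VtableDemo {
    static final int M = 0;            // slot number for method m, fixed "at compile time"

    public static void main(String[] args) {
        // The table for a hypothetical subclass that overrides m:
        Slot[] subclassTable = { (self, arg) -> System.out.println("m(" + arg + ")") };
        Obj obj = new Obj(subclassTable);

        // obj.m(42) is dispatched as (obj->vmt[M])(obj, 42):
        obj.vmt[M].invoke(obj, 42);
    }
}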
Look at the C++ standard to see what is mandated for all compilers. The standard governs some details of how objects are to be laid out in memory, and those requirements are shared by every compiler, but most of the details are left to the implementation of the language. Here are the traits I've found to be common beyond what the standard requires.
A simple object without inheritance or static fields is laid out like you would expect. C++ mandates that memory is byte-addressable, but that doesn't mean the data will be packed to the byte. It will be aligned according to the compiler's rules (depending on architecture and other factors); mostly I've found that data is aligned to words. If it packs by words and you have only single-byte fields, the memory will have empty spots between the bytes. There is no metadata for an object other than a reference to the virtual function table, if one is needed. When you get to inheritance and multiple inheritance, it becomes more complicated.
Functions are stored separately from the object, and how you cast the object determines which functions you'll call: they look at the object as whatever they expect it to be. This all works because, in reality, each member function has a hidden this pointer as its first argument. There are no run-time checks to make sure you're referring to the right object type. If you cast an object into another type and call a function on it, that function can hit a memory exception. There's no type safety in a C-style cast; avoid them.
Then you have the virtual function table, which supplies pointers to functions depending on the type of the object you are accessing. But again, the contents of that table are all decided at compile time.
When you get to a language that has reflection, this changes drastically.
Type metadata is stored for runtime use, and there are type checks at runtime. You'll get exceptions for calling the wrong method on the wrong type.

C++ calling Java object method: Access Violation

I'm trying to implement a Java/C++ binding for a sound streaming class. To keep my example simple, I will reduce it to its seeking method, which is enough to describe my problem:
public abstract class JSoundStream extends SoundStream {
public abstract void seek(float timeOffset);
}
For testing, I use the following implementation:
@Override public void seek(float timeOffset) {
System.out.println("seek(" + timeOffset + ")");
}
The seek method is a callback method, delegated to by a native C++ function that serves as a callback for whatever plays the stream. Picture a media player application with a fast-forward function as an example:
"Fast forward" button pressed -> Streaming library invokes C++ callback seek -> Delegate to Java method seek
Note this is just an example, neither the Event Dispatch Thread nor anything else funky is involved.
When an instance of JSoundStream gets created, a native method is called that saves both the Java VM pointer (JavaVM*) and the Java object reference (jobject). I do this because I cannot control when exactly the callback is called, and I know of no way to get hold of the JNI environment or the object later, when no Java references are available. So I save that information at the time of object creation, where I do have the references.
Inside of the C++ seek method, I'm trying to invoke the Java seek method this way:
virtual void OnSeek(float timeOffset) {
JNIEnv* env;
jvm->AttachCurrentThread((void**)&env, NULL);
env->CallVoidMethod(binding, m_seek, (jfloat)timeOffset);
}
Where binding is the jobject, jvm the Java VM pointer and m_seek the jmethodID of the seek method I obtained before.
However, that invocation of CallVoidMethod results in an access violation in jvm.dll. All of the pointers and values are valid as far as I can tell, and I did make sure the Java object does not get garbage collected. I believe that storing the jobject and/or the Java VM pointer is the source of the problem, but then again I cannot see why, because those values do not change while the program is running.
Can anybody see a problem in the way I am approaching this? How else - without storing references - would I invoke a Java object method from C++ code?
Your approach should be correct, if
Your jobject has been retrieved with binding = env->NewGlobalRef(binding_passed_as_argument);
You do not call AttachCurrentThread from the same thread multiple times - use TLS to store the JNIEnv pointer.

Safe to pass objects to C functions when working in JNI Invocation API?

I am coding up something using the JNI Invocation API. A C program starts up a JVM and makes calls into it. The JNIEnv pointer is global to the C file. I have numerous C functions which need to perform the same operation on a given class of jobject. So I wrote helper functions which take a jobject and process it, returning the needed data (a C data type...for example, an int status value). Is it safe to write C helper functions and pass jobjects to them as arguments?
i.e. (a simple example - designed to illustrate the question):
int getStatusValue(jobject jStatus)
{
    return (*jenv)->CallIntMethod(jenv, jStatus, statusMethod);
}

int function1()
{
    int status;
    jobject aObj = (*jenv)->NewObject(jenv, aDefinedClass, aDefinedCtor);
    jobject j = (*jenv)->CallObjectMethod(jenv, aObj, aDefinedObjGetMethod);
    status = getStatusValue(j);
    (*jenv)->DeleteLocalRef(jenv, aObj);
    (*jenv)->DeleteLocalRef(jenv, j);
    return status;
}
Thanks.
I'm not acquainted with the details of JNI, but one thing I noticed is this:
return (*jenv)->CallIntMethod(jenv, jStatus, statusMethod);
That looks like official JNI code, and it takes a jobject as a parameter. If it works for JNI, there is no reason it can't work for your code.
All JNI objects are valid until the native method returns. As long as you don't store non-global JNI references between two JNI calls, everything should work.
The invocation of a JNI function should work like this:
1) Java function call
2) create native local references
3) call the native function
4) do your stuff
5) exit the native function
6) release existing local references
7) return to Java
Step 4 can contain any code; local references stay valid until step 6 if not released before.
If you want to store JNI objects on the C side between two calls to a native Java function, you have to create global references and release them later. Not releasing a global reference leads to memory leaks, as the garbage collector is unable to free the related Java objects.

Passing pointers between C and Java through JNI

At the moment, I'm trying to create a Java application which uses CUDA functionality. The connection between CUDA and Java works fine, but I've got another problem and wanted to ask if my thoughts about it are correct.
When I call a native function from Java, I pass some data to it, the function calculates something and returns a result. Is it possible to let the first function return a reference (pointer) to this result, which I can pass to JNI and call another function that does further calculations with the result?
My idea was to reduce the overhead that comes from copying data to and from the GPU by leaving the data in the GPU memory and just passing a reference to it so other functions can use it.
After trying for some time, I thought to myself that this shouldn't be possible, because pointers get deleted after the application ends (in this case, when the C function terminates). Is this correct? Or am I just too bad at C to see the solution?
Edit:
Well, to expand the question a little bit (or make it clearer): Is memory allocated by JNI native functions deallocated when the function ends? Or can I still access it until either the JNI application ends or I free it manually?
Thanks for your input :)
I used the following approach:
In your JNI code, create a struct that holds references to the objects you need. When you first create this struct, return its pointer to Java as a long. Then, from Java, you just call any method with this long as a parameter, and in C cast it to a pointer to your struct.
The structure will be on the heap, so it will not be cleared between different JNI calls.
EDIT: I don't think you can use long ptr = (long)&address; since address is a static variable. Use it the way Gunslinger47 suggested, i.e. create a new instance of a class or a struct (using new or malloc) and pass its pointer.
In C++ you can use any mechanism you want to allocate/free memory: the stack, malloc/free, new/delete or any other custom implementation. The only requirement is that if you allocated a block of memory with one mechanism, you have to free it with the same mechanism, so you can't call free on a stack variable and you can't call delete on malloced memory.
JNI has its own mechanisms for allocating/freeing JVM memory:
NewObject/DeleteLocalRef
NewGlobalRef/DeleteGlobalRef
NewWeakGlobalRef/DeleteWeakGlobalRef
These follow the same rule; the only catch is that local refs can be deleted "en masse", either explicitly with PopLocalFrame, or implicitly when the native method exits.
JNI doesn't know how you allocated your memory, so it can't free it when your function exits. Stack variables will obviously be destroyed because you're still writing C++, but your GPU memory will remain valid.
The only problem then is how to access the memory on subsequent invocations, and then you can use Gunslinger47's suggestion:
JNIEXPORT jlong JNICALL Java_MyJavaClass_Function1(JNIEnv* env, jobject obj) {
    MyClass* pObject = new MyClass(...);
    return (jlong)pObject;
}

JNIEXPORT void JNICALL Java_MyJavaClass_Function2(JNIEnv* env, jobject obj, jlong lp) {
    MyClass* pObject = (MyClass*)lp;
    ...
}
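On the Java side the handle is just an opaque long. A sketch under the same assumed names (MyJavaClass in the default package; the library name and the example method are my own):
public class MyJavaClass {
    static { System.loadLibrary("mynative"); }   // assumed name of the native library

    // These match Java_MyJavaClass_Function1/Function2 above; Java never dereferences the value.
    private native long Function1();
    private native void Function2(long handle);

    void example() {
        long handle = Function1();   // pointer to the C++ object, stored as a plain long
        Function2(handle);           // passed straight back to native code
        // eventually call a native cleanup method that does `delete (MyClass*)handle;`
    }
}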
While the accepted answer from @denis-tulskiy does make sense, I've personally followed the suggestions from here.
So instead of using a pseudo-pointer type such as jlong (or jint if you want to save some space on a 32-bit arch), use a ByteBuffer instead. For example:
MyNativeStruct* data; // Initialized elsewhere.
jobject bb = (*env)->NewDirectByteBuffer(env, (void*) data, sizeof(MyNativeStruct));
which you can later re-use with:
jobject bb; // Initialized elsewhere.
MyNativeStruct* data = (MyNativeStruct*) (*env)->GetDirectBufferAddress(env, bb);
For very simple cases, this solution is very easy to use. Suppose you have:
typedef struct {
    int exampleInt;
    short exampleShort;
} MyNativeStruct;
On the Java side, you simply need to do:
public int getExampleInt() {
return bb.getInt(0);
}
public short getExampleShort() {
return bb.getShort(4);
}
This saves you from writing lots of boilerplate code! One should, however, pay attention to byte ordering, as explained here.
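On that byte-ordering point, a small sketch of how the bb field above could be set up (my own addition; it assumes bb is the direct buffer handed back from the native side):
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

class MyNativeStructView {
    private final ByteBuffer bb;

    MyNativeStructView(ByteBuffer directBufferFromJni) {
        // A fresh ByteBuffer defaults to BIG_ENDIAN; switch to the platform's order
        // so the getInt(0)/getShort(4) accessors above decode the C struct correctly.
        this.bb = directBufferFromJni.order(ByteOrder.nativeOrder());
    }
}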
Java wouldn't know what to do with a pointer, but it should be able to store a pointer from a native function's return value and then hand it off to another native function to deal with. C pointers are nothing more than numeric values at the core.
Another contributor would have to tell you whether or not the pointed-to graphics memory would be cleared between JNI invocations, and whether there are any work-arounds.
I know this question was already officially answered, but I'd like to add my solution:
Instead of trying to pass a pointer, put the pointer in a Java array (at index 0) and pass that to JNI. JNI code can get and set the array element using GetIntArrayRegion/SetIntArrayRegion.
In my code, I need the native layer to manage a file descriptor (an open socket). The Java class holds an int[1] array and passes it to the native function. The native function can do whatever it likes with it (get/set) and put the result back in the array.
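A sketch of that pattern on the Java side (class, method and library names are all hypothetical):
public class NativeSocket {
    static { System.loadLibrary("nativesocket"); }   // assumed native library

    private final int[] fd = new int[1];             // index 0 holds the native file descriptor

    // Hypothetical native method: reads/updates fdHolder[0] via Get/SetIntArrayRegion.
    private native int openAndStore(String host, int port, int[] fdHolder);

    public boolean connect(String host, int port) {
        return openAndStore(host, port, fd) == 0;    // the native side leaves the descriptor in fd[0]
    }
}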
If you are allocating memory dynamically (on the heap) inside of the native function, it is not deleted. In other words, you are able to retain state between different calls into native functions, using pointers, static vars, etc.
Think of it a different way: what could you safely keep around from a function call made from another C++ program? The same things apply here. When a function exits, anything on the stack for that function call is destroyed; but anything on the heap is retained unless you explicitly delete it.
Short answer: as long as you don't deallocate the result you're returning to the calling function, it will remain valid for re-entrance later. Just make sure to clean it up when you're done.
It's best to do this exactly how Unsafe.allocateMemory does it.
Create your object, then cast it to (uintptr_t), which is a 32/64-bit unsigned integer.
return (jlong)(uintptr_t) malloc(50);      /* native side: hand the pointer to Java as a jlong */
void *f = (void *)(uintptr_t) someJlong;   /* Java -> native: someJlong is the jlong handle passed back in */
This is the only correct way to do it.
Here is the sanity checking Unsafe.allocateMemory does.
inline jlong addr_to_java(void* p) {
assert(p == (void*)(uintptr_t)p, "must not be odd high bits");
return (uintptr_t)p;
}
UNSAFE_ENTRY(jlong, Unsafe_AllocateMemory(JNIEnv *env, jobject unsafe, jlong size))
UnsafeWrapper("Unsafe_AllocateMemory");
size_t sz = (size_t)size;
if (sz != (julong)size || size < 0) {
THROW_0(vmSymbols::java_lang_IllegalArgumentException());
}
if (sz == 0) {
return 0;
}
sz = round_to(sz, HeapWordSize);
void* x = os::malloc(sz, mtInternal);
if (x == NULL) {
THROW_0(vmSymbols::java_lang_OutOfMemoryError());
}
//Copy::fill_to_words((HeapWord*)x, sz / HeapWordSize);
return addr_to_java(x);
UNSAFE_END

What is the 'correct' way to store a native pointer inside a Java object?

What is the 'correct' way to store a native pointer inside a Java object?
I could treat the pointer as a Java int, if I happen to know that native pointers are <= 32 bits in size, or a Java long if I happen to know that native pointers are <= 64 bits in size. But is there a better or cleaner way to do this?
Edit: Returning a native pointer from a JNI function is exactly what I don't want to do. I would rather return a Java object that represents the native resource. However, the Java object that I return must presumably have a field containing a pointer, which brings me back to the original question.
Or, alternatively, is there some better way for a JNI function to return a reference to a native resource?
IIRC, both java.util.zip and java.nio just use long.
java.nio.DirectByteBuffer does what you want.
Internally it uses a private long address field to store the pointer value. Duh!
Use the JNI function env->NewDirectByteBuffer((void*) data, sizeof(MyNativeStruct)) to create a DirectByteBuffer on the C/C++ side and return it to the Java side as a ByteBuffer. Note: it's your job to free this data on the native side! It misses the automatic Cleaner available on a standard direct buffer.
On the Java side, you can create a DirectByteBuffer this way:
ByteBuffer directBuff = ByteBuffer.allocateDirect(sizeInBytes);
Think of it as a sort of C malloc(sizeInBytes). Note: it has an automatic Cleaner, which deallocates the memory previously requested.
But there are some points to consider about using DirectByteBuffer:
It can be garbage collected (GC'd) if you lose your reference to the direct ByteBuffer.
You can read/write values to the pointed-to structure, but beware of both offsets and data sizes. The compiler may add extra space for padding and break your assumed internal offsets in the structure. Structures containing pointers (is the stride 4 or 8 bytes?) also complicate your layout.
Direct ByteBuffers are very easy to pass as parameters to native methods, as well as to get back as return values.
You must cast to the correct pointer type on the JNI side. The default type returned by env->GetDirectBufferAddress(buffer) is void*.
You are unable to change the pointer value once it has been created.
It's your job to free the memory previously allocated for the buffers on the native side - that is, the ones you created with env->NewDirectByteBuffer().
There is no good way. In SWT, this code is used:
int /*long*/ hModule = OS.GetLibraryHandle ();
and there is a tool which converts the code between 32-bit and 64-bit by moving the comment. Ugly, but it works. Things would have been much easier if Sun had added an object "NativePointer" or something like that, but they didn't.
A better way might be to store it in a byte array, since native pointers aren't very Java-ish in the first place. ints and longs are better reserved for storing numeric values.
I assume that this is a pointer returned from some JNI code, and my advice would be: just don't do it :)
Ideally the JNI code should pass you back some sort of logical reference to the resource and not an actual pointer?
As to your question, nothing comes to mind about a cleaner way to store the pointer - if you know what you have, then use an int, long, or byte[] as required.
You could look at the way C# handles this with the IntPtr type. By creating your own type for holding pointers, the same type can be used whether pointers are 32-bit or 64-bit, depending on the system you're on.
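A minimal Java sketch of such a wrapper type (names are my own; it is just a typed box around a 64-bit handle, much like C#'s IntPtr):
public final class NativePointer {
    private final long address;                      // wide enough for both 32- and 64-bit pointers

    public NativePointer(long address) { this.address = address; }

    public long address() { return address; }        // what you hand back to native methods

    public boolean isNull() { return address == 0; }

    @Override public String toString() {
        return "NativePointer(0x" + Long.toHexString(address) + ")";
    }
}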
