What could cause segfaults after successful native calls through JNI?

Context
I am working on my Master's thesis, using Intel SGX to run enclave code alongside a Java project.
The project is built with Gradle, and I have added a .jar containing a class with the native calls, plus the .so file containing the code for those calls.
So far I can only open an enclave and compute an EC curve for a DH key exchange; as the Java code moves on, segfaults occur. These segfaults do not occur inside the C code, since I am able to retrieve the correct, expected results from the native code in my Java project. The segmentation faults usually happen when there is a heavy I/O workload (at least that is what I have deduced so far), usually involving sockets, which I use heavily since my thesis is in distributed systems.
C Code:
JNIEXPORT jint JNICALL Java_sgxUtils_SgxFunctions_jni_1initialize_1enclave
  (JNIEnv *env, jobject, jint enclaveId, jbyteArray file_path) {
    if (file_path == NULL) {
        printf("NULL FILEPATH. error.");
        return -1;
    }
    char token_path[MAX_PATH] = {'\0'};
    sgx_launch_token_t token = {0};
    sgx_status_t ret = SGX_ERROR_UNEXPECTED;
    int updated = 0;
    // Get the id of the given enclave:
    global_eid = (int)enclaveId;
    int length = snprintf(NULL, 0, "%d", enclaveId) + 1; // Length of the integer as a string.
    char* id_ptr = (char*)malloc(length);
    snprintf(id_ptr, length, "%d", enclaveId);
    /* Step 1: try to retrieve the launch token saved by the last transaction;
     * if there is no token, then create a new one.
     */
    /* try to get the token saved in $HOME */
    const char *home_dir = getpwuid(getuid())->pw_dir;
    if (home_dir != NULL &&
        (strlen(home_dir) + strlen("/") + sizeof(TOKEN_FILENAME) + 1) <= MAX_PATH) {
        /* compose the token path */
        strncpy(token_path, home_dir, strlen(home_dir));
        strncat(token_path, "/", strlen("/"));
        char* correctName = (char*)malloc(length + sizeof(TOKEN_FILENAME) + 1);
        memcpy(correctName, id_ptr, length);
        strncat(correctName, TOKEN_FILENAME, sizeof(TOKEN_FILENAME) + 1);
        strncat(token_path, correctName, length + sizeof(TOKEN_FILENAME) + 1);
        // strncat(token_path, TOKEN_FILENAME, sizeof(TOKEN_FILENAME)+1);
        // free(correctName);
    } else {
        /* if the token path is too long or $HOME is NULL */
        strncpy(token_path, TOKEN_FILENAME, sizeof(TOKEN_FILENAME));
    }
    FILE *fp = fopen(token_path, "rb");
    if (fp == NULL && (fp = fopen(token_path, "wb")) == NULL) {
        printf("Warning: Failed to create/open the launch token file \"%s\".\n", token_path);
    }
    if (fp != NULL) {
        /* read the token from the saved file */
        size_t read_num = fread(token, 1, sizeof(sgx_launch_token_t), fp);
        if (read_num != 0 && read_num != sizeof(sgx_launch_token_t)) {
            /* if the token is invalid, clear the buffer */
            memset(&token, 0x0, sizeof(sgx_launch_token_t));
            printf("Warning: Invalid launch token read from \"%s\".\n", token_path);
        }
    }
    printf("\n");
    /* Step 2: call sgx_create_enclave to initialize an enclave instance */
    /* Debug Support: set 2nd parameter to 1 */
    // Here we obtain the char* of the given enclave file path:
    size_t filename_size = env->GetArrayLength(file_path);
    jbyte* jb_filePath = env->GetByteArrayElements(file_path, NULL);
    char* enclave_filePath = (char*)malloc(filename_size);
    memcpy(enclave_filePath, jb_filePath, filename_size);
    ret = sgx_create_enclave(enclave_filePath, SGX_DEBUG_FLAG, &token, &updated, &global_eid, NULL);
    if (ret != SGX_SUCCESS) {
        if (fp != NULL) fclose(fp);
        return -1;
    }
    /* Step 3: save the launch token if it is updated */
    if (updated == FALSE || fp == NULL) {
        /* if the token is not updated, or the file handle is invalid, do not save it */
        if (fp != NULL) fclose(fp);
        return 0;
    }
    /* reopen the file with write capability */
    fp = freopen(token_path, "wb", fp);
    if (fp == NULL) return 0;
    size_t write_num = fwrite(token, 1, sizeof(sgx_launch_token_t), fp);
    if (write_num != sizeof(sgx_launch_token_t))
        printf("Warning: Failed to save launch token to \"%s\".\n", token_path);
    fclose(fp);
    free(id_ptr);
    // env->ReleaseStringUTFChars(file_path, enclave_filePath);
    return 0;
}
JNIEXPORT jbyteArray JNICALL Java_sgxUtils_SgxFunctions_jni_1sgx_1begin_1ec_1dh(JNIEnv *env, jobject) {
    printf("Begin DH \n");
    if (init_happened == 1) { // Verify whether the random generators were seeded.
        sgx_status_t init_status;
        init_mbed(global_eid, &init_status);
        init_happened = 0;
        //printf("Init happened inside enclave.");
    }
    unsigned char* public_ec_key = (unsigned char*)malloc(EC_KEY_SIZE); // 32 bytes allocated. (256)
    printf("Calculating EC_DH.\n");
    // Start the ECDH ecall.
    sgx_status_t ecdh_status;
    create_ec_key_pair(global_eid, &ecdh_status, public_ec_key, EC_KEY_SIZE); // Send the buffer and its size.
    if (ecdh_status != SGX_SUCCESS) {
        printf("Failed. ec_key_pair returned %d\n", ecdh_status);
    }
    // Copy the data to a jbyte buffer in order to return it as a jbyteArray:
    jbyte* toCopy = (jbyte*)malloc(EC_KEY_SIZE * sizeof(jbyte));
    memcpy(toCopy, public_ec_key, EC_KEY_SIZE);
    // Return the public key:
    jbyteArray result = env->NewByteArray(EC_KEY_SIZE);
    env->SetByteArrayRegion(result, 0, EC_KEY_SIZE, toCopy);
    // Free the pointers and return the array.
    free(public_ec_key);
    free(toCopy);
    printf("EC_DH Calculated.\n");
    return result;
}
I will omit the large Valgrind output; it is just a huge file with '?' on every line, since SGX hides the stack trace, giving me no information about the invalid reads/writes.
Valgrind Output
==38490== Warning: client switching stacks? SP change: 0x6a937a0 --> 0x28789d48
==38490== to suppress, use: --max-stackframe=567240104 or greater
==38490== Warning: client switching stacks? SP change: 0x28789d48 --> 0x6a937a0
==38490== to suppress, use: --max-stackframe=567240104 or greater
==38490== Warning: client switching stacks? SP change: 0x6a97220 --> 0x28789d48
==38490== to suppress, use: --max-stackframe=567225128 or greater
==38490== further instances of this message will not be shown.
==38490==
==38490== Process terminating with default action of signal 11 (SIGSEGV)
==38490== Access not within mapped region at address 0x2
==38490== at 0x27CD0B05: sig_handler_sim(int, siginfo_t*, void*) (in /opt/intel/sgxsdk/lib64/libsgx_urts_sim.so)
==38490== If you believe this happened as a result of a stack
==38490== overflow in your program's main thread (unlikely but
==38490== possible), you can try to increase the size of the
==38490== main thread stack using the --main-stacksize= flag.
==38490== The main thread stack size used in this run was 8388608.
==38490==
==38490== HEAP SUMMARY:
==38490== in use at exit: 16,636,139 bytes in 39,334 blocks
==38490== total heap usage: 55,264 allocs, 15,930 frees, 71,819,001 bytes allocated
Java Code
private void initEnclave() {
    this.logger.info("Starting InitEnclave");
    enclave = new SgxFunctions(this.enclaveId);
    try {
        File pemFile = new File(HOME_DIR + "/" + this.enclaveId + ".pem");
        if(!pemFile.exists()) {
            pemFile = SgxFunctions.createPem(this.enclaveId);
        }
        File enclaveFile = new File(HOME_DIR + "/" + this.enclaveId + "enclave_signed.so");
        if(!enclaveFile.exists()) {
            enclave.createSignedEnclave(HOME_DIR, pemFile.getAbsolutePath(), enclaveId);
            this.logger.info("Enclave signed");
        }
        else
            this.logger.info("Enclave file exists. Starting enclave.");
        logger.info("ABSOLUTE PATH: " + enclaveFile.getAbsolutePath());
        byte[] enclaveFilePath = enclaveFile.getAbsolutePath().substring(0).getBytes(StandardCharsets.UTF_8);
        byte[] correctPath = Arrays.copyOf(enclaveFilePath, enclaveFilePath.length + 1);
        correctPath[correctPath.length - 1] = '\0';
        int created = enclave.jni_initialize_enclave(enclaveId, correctPath);
        if(created == 0)
            this.logger.info("Enclave created.");
        else
            this.logger.info("Enclave failed to create.");
        //
        // //Create DH parameters:
        this.dh_params = this.enclave.jni_sgx_begin_ec_dh(); //Begin the Enclave EC parameters for DH key exchange.
        this.logger.info("EC-DH Parameters created for Key exchange.");
    } catch (Exception e) {
        e.printStackTrace();
    }
}
If I comment out the begin_ec_dh() call, the program does not segfault. However, even when this native call runs, the enclave returns an acceptable result and I can use that result in my Java code without problems, but only up to a certain point, where it segfaults for an unknown reason.
I suspect the problem is obviously in my code, but I have no idea what is going on.
Any help is appreciated. Thank you, and sorry for the long post.
Edit:
I added a printf() of the filename pointer and obtained a crash file from the JVM.
What I don't understand is that, if you look at my C code, I allocate exactly the array size that comes from Java (and I append the null terminator on the Java side, which was previously the source of another segfault). Any idea what is causing this? It seems like I am dereferencing an invalid pointer, but why is it invalid? (A defensive version of that conversion is sketched after the crash log below.)
JVM Crash File
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00007f9f51fd9627, pid=9910, tid=9911
#
# JRE version: OpenJDK Runtime Environment (17.0.3+7) (build 17.0.3+7-Ubuntu-0ubuntu0.20.04.1)
# Java VM: OpenJDK 64-Bit Server VM (17.0.3+7-Ubuntu-0ubuntu0.20.04.1, mixed mode, sharing, tiered, compressed oops, compressed class ptrs, g1 gc, linux-amd64)
# Problematic frame:
# C [libc.so.6+0x188627]
#
# Core dump will be written. Default location: Core dumps may be processed with "/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E" (or dumping to /home/andrecruz/Tese/Con-BFT/build/install/library/core.9910)
#
# If you would like to submit a bug report, please visit:
# Unknown
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
#
--------------- S U M M A R Y ------------
Command Line: -Djava.security.properties=./config/java.security -Djava.library.path=./lib/ -Dfile.encoding=UTF-8 -XX:ErrorFile=/home/andrecruz/Tese/Con-BFT/ -XX:+ShowCodeDetailsInExceptionMessages -Dlogback.configurationFile=./config/logback.xml bftsmart.demo.map.MapServer 0 10
Host: Intel(R) Core(TM) i5-6440HQ CPU @ 2.60GHz, 4 cores, 7G, Ubuntu 20.04.4 LTS
Time: Sun May 29 23:13:08 2022 WEST elapsed time: 2.134101 seconds (0d 0h 0m 2s)
--------------- T H R E A D ---------------
Current thread (0x00007f9f4c012b40): JavaThread "main" [_thread_in_native, id=9911, stack(0x00007f9f5061b000,0x00007f9f5071c000)]
Stack: [0x00007f9f5061b000,0x00007f9f5071c000], sp=0x00007f9f50718c28, free space=1015k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
C [libc.so.6+0x188627]
C [libc.so.6+0x61d6f] _IO_printf+0xaf
j sgxUtils.SgxFunctions.jni_initialize_enclave(I[B)I+0
j bftsmart.tom.ServiceReplica.initEnclave()V+248
j bftsmart.tom.ServiceReplica.<init>(ILjava/lang/String;Lbftsmart/tom/server/Executable;Lbftsmart/tom/server/Recoverable;Lbftsmart/tom/server/RequestVerifier;Lbftsmart/tom/server/Replier;Lbftsmart/tom/util/KeyLoader;I)V+147
j bftsmart.tom.ServiceReplica.<init>(ILbftsmart/tom/server/Executable;Lbftsmart/tom/server/Recoverable;I)V+17
j bftsmart.demo.map.MapServer.<init>(II)V+35
j bftsmart.demo.map.MapServer.main([Ljava/lang/String;)V+34
v ~StubRoutines::call_stub
V [libjvm.so+0x83ae75]
V [libjvm.so+0x8cf4e9]
V [libjvm.so+0x8d2392]
C [libjli.so+0x4abe]
C [libjli.so+0x84ed]
Java frames: (J=compiled Java code, j=interpreted, Vv=VM code)
j sgxUtils.SgxFunctions.jni_initialize_enclave(I[B)I+0
j bftsmart.tom.ServiceReplica.initEnclave()V+248
j bftsmart.tom.ServiceReplica.<init>(ILjava/lang/String;Lbftsmart/tom/server/Executable;Lbftsmart/tom/server/Recoverable;Lbftsmart/tom/server/RequestVerifier;Lbftsmart/tom/server/Replier;Lbftsmart/tom/util/KeyLoader;I)V+147
j bftsmart.tom.ServiceReplica.<init>(ILbftsmart/tom/server/Executable;Lbftsmart/tom/server/Recoverable;I)V+17
j bftsmart.demo.map.MapServer.<init>(II)V+35
j bftsmart.demo.map.MapServer.main([Ljava/lang/String;)V+34
v ~StubRoutines::call_stub
siginfo: si_signo: 11 (SIGSEGV), si_code: 1 (SEGV_MAPERR), si_addr: 0x0000000000000020
Register to memory mapping:
RAX=0x0000000000000010 is an unknown value
RBX=0x00007f9f51ec7b43: <offset 0x0000000000076b43> in /lib/x86_64-linux-gnu/libc.so.6 at 0x00007f9f51e51000
RCX=0x000000000000000f is an unknown value
RDX=0x000000000000002f is an unknown value
RSP=0x00007f9f50718c28 is pointing into the stack for thread: 0x00007f9f4c012b40
RBP=0x00007f9f507191a0 is pointing into the stack for thread: 0x00007f9f4c012b40
RSI=0x00007f9f50719168 is pointing into the stack for thread: 0x00007f9f4c012b40
RDI=0x0000000000000020 is an unknown value
R8 =0x00000000ffffffff points into unknown readable memory: 00
R9 =0x0000000000000009 is an unknown value
R10=0x00007f9f50610099: <offset 0x0000000000007099> in /home/andrecruz/Tese/Con-BFT/build/install/library/lib/libSgx.so at 0x00007f9f50609000
R11=0x000000000000002f is an unknown value
R12=0x00007f9f5203e6a0: _IO_2_1_stdout_+0x0000000000000000 in /lib/x86_64-linux-gnu/libc.so.6 at 0x00007f9f51e51000
R13=0x00007f9f5061008f: <offset 0x000000000000708f> in /home/andrecruz/Tese/Con-BFT/build/install/library/lib/libSgx.so at 0x00007f9f50609000
R14=0x00007f9f507191b0 is pointing into the stack for thread: 0x00007f9f4c012b40
R15=0x0000000000000073 is an unknown value
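For reference, here is a hedged sketch of one defensive way to turn the incoming jbyteArray into a NUL-terminated C string on the native side. It is illustrative only, not a confirmed fix for the crash above; the helper name is made up, and the only differences from the code in the question are the extra terminator byte and the ReleaseByteArrayElements call.

#include <jni.h>
#include <cstdlib>
#include <cstring>

// Sketch only: copy a Java byte[] into a freshly allocated, NUL-terminated
// C string and release the JNI elements again. The caller owns the returned
// buffer and must free() it.
static char* byteArrayToCString(JNIEnv* env, jbyteArray array) {
    jsize len = env->GetArrayLength(array);
    jbyte* elems = env->GetByteArrayElements(array, NULL);
    if (elems == NULL) return NULL;              // OutOfMemoryError already pending

    char* out = (char*)malloc((size_t)len + 1);  // +1 for the terminator
    if (out != NULL) {
        memcpy(out, elems, (size_t)len);
        out[len] = '\0';                         // terminate even if Java did not
    }
    // JNI_ABORT: the array was only read, nothing needs to be copied back.
    env->ReleaseByteArrayElements(array, elems, JNI_ABORT);
    return out;
}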

Related

Access violation calling memcpy() inside JNI library doing USB i/o

I am writing a Java application that performs USB I/O on Windows 10 with the Java 10 JDK. My I/O library is the libusb API. The app loads a JNI .dll compiled/built in Visual Studio 2017, with both Microsoft Visual C++ 2013 and Microsoft Visual C++ 2015-2019 installed. This application generates the following crash warning when run inside a NetBeans 11.1 project.
EXCEPTION_ACCESS_VIOLATION (0xc0000005) at pc=0x000002e1bbcd122e, pid=444, tid=3244
JRE version: Java(TM) SE Runtime Environment (10.0+46) (build 10+46)
Java VM: Java HotSpot(TM) 64-Bit Server VM (10+46, mixed mode, tiered, compressed oops, g1 gc, windows-amd64)
Problematic frame:
C [VCRUNTIME140D.dll+0x122e]
The problem function is my USB read function within the C code in the VisualStudio-built .dll.
JNI.c
JNIEXPORT jint JNICALL Java_SWMAPI_IoUSB_read
  (JNIEnv * env, jobject thisObj, jbyteArray jArr, jint size, jlong javaHandle){
    //Convert jLong to device handle pointer
    libusb_device_handle * handle = (libusb_device_handle *) javaHandle;
    //jbyteArray jArr: byte [] buffer. Must be overwritten with USB read data from device with
    //                 corresponding handle javaHandle.
    //jlong javaHandle: long passed from java representing the device to I/O.
    //jint size: jArr length (e.g. buffer.length), used for debugging.
    int status;        //return code value
    int bytes_written;
    int bytes_read;
    MESSAGE Message;   //C struct containing 9 bytes of message header and (n) bytes of payload.
    int length = (*env)->GetArrayLength(env, jArr);
    jbyte * bufferIn = (*env)->GetPrimitiveArrayCritical(env, jArr, NULL); //get memory from jArr
    //copy data in bufferIn to a Message struct.
    memcpy((unsigned char *)&Message, (unsigned char *)bufferIn, length);
    //write the Message out to device
    bytes_written = libusb_control_transfer(handle, CTRL_OUT, MEM_RQ, 0, 0, (unsigned char *)&Message, length, 0);
    if(bytes_written == length){
        //read data in from Device to &Message
        bytes_read = libusb_control_transfer(handle, CTRL_IN, MEM_RQ, 0, 0, (unsigned char *)&Message, sizeof(MESSAGE), 30000);
        status = (bytes_written < 0) ? bytes_read :                   // I/O Error
                 (bytes_written < length) ? USB_STATUS_SHORT_WRITE :  // Check for short write
                 LIBUSB_SUCCESS;
        if(Message.payload.length >= 0){ //If read data > 0
            printf("Attempting memcpy\n");
            fflush(stdout);
            // The error appears on the line below
            memcpy((unsigned char *)bufferIn, (unsigned char *)&Message, MAX_HEADER_LENGTH + Message.payload.length);
            printf("Success memcpy\n");
            fflush(stdout);
        }
    }
    (*env)->ReleasePrimitiveArrayCritical(env, jArr, bufferIn, 0);
    return status;
}
This function had been working in my project up until today. The crash occurs in approximately 1 out of 3 runs. Can anyone please advise me on how I can move forward with solving this issue?
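For illustration only, here is a hedged sketch of the kind of bounds check that keeps the final memcpy inside the Java array. The names come from the snippet above, the MESSAGE layout is assumed, and whether a short read should truncate or be treated as an error is a design choice.

// Sketch only: never copy more bytes back into the Java array than it can hold.
// The payload length comes from the device, so it must be validated first.
size_t copy_len = (size_t)MAX_HEADER_LENGTH + (size_t)Message.payload.length;
if (copy_len > (size_t)length) {
    copy_len = (size_t)length;   // or report an error instead of truncating
}
memcpy(bufferIn, &Message, copy_len);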

Application hangs in call to ZwWriteFile

I have a problem with a Java (Eclipse RCP) application running on 64-bit Windows (the problem does not occur if the application runs with a 32-bit JVM).
The problem also does not occur if the application is running in the debugger.
The application writes debug output with System.err.print(...). If the application is not running under debugger control, the UI thread just randomly freezes.
I attached to the process with GDB and saw that the process hangs in the call of ZwWriteFile.
(gdb) where
#0 0x000000007745bdba in ntdll!ZwWriteFile () from /cygdrive/c/windows/SYSTEM32/ntdll.dll
#1 0x000007fefd3a1b3b in WriteFile () from /cygdrive/c/windows/system32/KERNELBASE.dll
#2 0x0000000077311f66 in WriteFile () from /cygdrive/c/windows/system32/kernel32.dll
#3 0x000000006651cb65 in java!handleRead () from /cygdrive/d/Programme/Razorcat/Shared/1.3/JRE_1.8/bin/java.dll
#4 0x000000006651c31e in java!JNI_OnLoad () from /cygdrive/d/Programme/Razorcat/Shared/1.3/JRE_1.8/bin/java.dll
#5 0x0000000066512dc9 in java!Java_java_io_FileOutputStream_writeBytes () from /cygdrive/d/Programme/Razorcat/Shared/1.3/JRE_1.8/bin/java.dll
(gdb) disass
Dump of assembler code for function ntdll!ZwWriteFile:
0x000000007745bdb0 <+0>: mov %rcx,%r10
0x000000007745bdb3 <+3>: mov $0x5,%eax
0x000000007745bdb8 <+8>: syscall
=> 0x000000007745bdba <+10>: retq
0x000000007745bdbb <+11>: nopl 0x0(%rax,%rax,1)
Has anyone ever had such a problem? Is there a way (kernel debugger?) to investigate this problem more closely.
Below is the function (from io_util_md.c) that calls the WriteFile Windows API function. It could be that Windows has a problem with overlapped I/O on pseudo handles. (I guess the process has no standard handles when not running in the debugger.)
static jint writeInternal(FD fd, const void *buf, jint len, jboolean append)
{
    BOOL result = 0;
    DWORD written = 0;
    HANDLE h = (HANDLE)fd;
    if (h != INVALID_HANDLE_VALUE) {
        OVERLAPPED ov;
        LPOVERLAPPED lpOv;
        if (append == JNI_TRUE) {
            ov.Offset = (DWORD)0xFFFFFFFF;
            ov.OffsetHigh = (DWORD)0xFFFFFFFF;
            ov.hEvent = NULL;
            lpOv = &ov;
        } else {
            lpOv = NULL;
        }
        result = WriteFile(h,        /* File handle to write */
                           buf,      /* pointers to the buffers */
                           len,      /* number of bytes to write */
                           &written, /* receives number of bytes written */
                           lpOv);    /* overlapped struct */
    }
    if ((h == INVALID_HANDLE_VALUE) || (result == 0)) {
        return -1;
    }
    return (jint)written;
}

ChannelInputStream skip method is very slow

I have the following piece of test code:
try {
    InputStream is;

    Stopwatch.start("FileInputStream");
    is = new FileInputStream(imageFile.toFile());
    is.skip(1024*1024*1024);
    is.close();
    Stopwatch.stop();

    Stopwatch.start("Files.newInputStream");
    is = Files.newInputStream(imageFile);
    is.skip(1024*1024*1024);
    is.close();
    Stopwatch.stop();
}
catch(Exception e)
{
}
and I get the following output:
Start: FileInputStream
FileInputStream : 0 ms
Start: Files.newInputStream
Files.newInputStream : 3469 ms
Do you have any idea what is going on? Why is skip so slow in the second case?
I need to use InputStreams acquired from channels, because my tests have shown that the best approach for my task is to have two threads reading from the file simultaneously (and I only notice an improvement when I am using streams obtained from channels).
During tests I figured out that I can do something like this:
SeekableByteChannel sbc = Files.newByteChannel(imageFile);
sbc.position(1024*1024*1024);
is = Channels.newInputStream(sbc);
which takes only about 28 ms on average, but that does not help me much, because to use it I would have to make major API changes.
My platform:
Linux galileo 3.11.0-13-generic #20-Ubuntu SMP Wed Oct 23 07:38:26 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
java version "1.7.0_45"
Java(TM) SE Runtime Environment (build 1.7.0_45-b18)
Java HotSpot(TM) 64-Bit Server VM (build 24.45-b08, mixed mode)
Looking at the source, it appears that the default implementation of skip() is actually reading through (and discarding) the stream content until it reaches the target position:
public long skip(long n) throws IOException {
    long remaining = n;
    int nr;
    if (n <= 0) {
        return 0;
    }
    int size = (int)Math.min(MAX_SKIP_BUFFER_SIZE, remaining);
    byte[] skipBuffer = new byte[size];
    while (remaining > 0) {
        nr = read(skipBuffer, 0, (int)Math.min(size, remaining));
        if (nr < 0) {
            break;
        }
        remaining -= nr;
    }
    return n - remaining;
}
The SeekableByteChannel#position() method probably just updates an offset pointer, which doesn't actually require any I/O. Presumably, FileInputStream overrides the skip() method with a similar optimization. The documentation supports this theory:
This method may skip more bytes than are remaining in the backing file. This produces no exception and the number of bytes skipped may include some number of bytes that were beyond the EOF of the backing file. Attempting to read from the stream after skipping past the end will result in -1 indicating the end of the file.
On platter disks or network storage, this could have a significant impact.
Try setting the range with GetObjectRequest.setRange to get the same behavior as skip.
GetObjectRequest req = new GetObjectRequest(BUCKET_NAME, "myfile.zip");
req.setRange(1024); // start the download skipping 1024 bytes
S3ObjectInputStream in = client.getObject(req).getObjectContent();
// read "in" while not eof
I used this to avoid SocketTimeoutException in my implementation.
Each time I got a SocketTimeoutException, I restarted the download, using setRange to skip the bytes I had already downloaded.

Java: load a library that depends on other libs

I want to load my own native libraries in my java application. Those native libraries depend upon third-party libraries (which may or may not be present when my application is installed on the client computer).
Inside my Java application, I ask the user to specify the location of the dependent libs. Once I have this information, I use it to update the "LD_LIBRARY_PATH" environment variable through JNI code. The following is the snippet I use to change it.
Java code
public static final int setEnv(String key, String value) {
    if (key == null) {
        throw new NullPointerException("key cannot be null");
    }
    if (value == null) {
        throw new NullPointerException("value cannot be null");
    }
    return nativeSetEnv(key, value);
}

public static final native int nativeSetEnv(String key, String value);
JNI code (C)
JNIEXPORT jint JNICALL Java_Test_nativeSetEnv(JNIEnv *env, jclass cls, jstring key, jstring value) {
    const char *nativeKey = NULL;
    const char *nativeValue = NULL;
    nativeKey = (*env)->GetStringUTFChars(env, key, NULL);
    nativeValue = (*env)->GetStringUTFChars(env, value, NULL);
    int result = setenv(nativeKey, nativeValue, 1);
    return (jint) result;
}
I also have corresponding native methods to fetch the environment variable.
I can successfully update LD_LIBRARY_PATH (this assertion is based on the output of the C routine getenv()).
I am still not able to load my native library. The dependent third-party libraries are still not detected.
Any help/pointers are appreciated. I am using Linux 64 bit.
Edit:
I wrote an SSCCE (in C) to test whether the dynamic loader is working. Here is the SSCCE:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <dlfcn.h>

int main(int argc, const char* const argv[]) {
    const char* const dependentLibPath = "...:";
    const char* const sharedLibrary = "...";
    char *newLibPath = NULL;
    char *originalLibPath = NULL;
    int l1, l2, result;
    void* handle = NULL;

    originalLibPath = getenv("LD_LIBRARY_PATH");
    fprintf(stdout, "\nOriginal library path =%s\n", originalLibPath);

    l1 = strlen(originalLibPath);
    l2 = strlen(dependentLibPath);
    newLibPath = (char *)malloc((l1 + l2 + 1) * sizeof(char)); /* +1 for the terminator */
    strcpy(newLibPath, dependentLibPath);
    strcat(newLibPath, originalLibPath);
    fprintf(stdout, "\nNew library path =%s\n", newLibPath);

    result = setenv("LD_LIBRARY_PATH", newLibPath, 1);
    if(result != 0) {
        fprintf(stderr, "\nEnvironment could not be updated\n");
        exit(1);
    }
    newLibPath = getenv("LD_LIBRARY_PATH");
    fprintf(stdout, "\nNew library path from the env =%s\n", newLibPath);

    handle = dlopen(sharedLibrary, RTLD_NOW);
    if(handle == NULL) {
        fprintf(stderr, "\nCould not load the shared library: %s\n", dlerror());
        exit(1);
    }
    fprintf(stdout, "\nThe shared library was successfully loaded.\n");

    result = dlclose(handle);
    if(result != 0) {
        fprintf(stderr, "\nCould not unload the shared library: %s\n", dlerror());
        exit(1);
    }
    return 0;
}
The C code also does not work. Apparently, the dynamic loader is not rereading the LD_LIBRARY_PATH environment variable. I need to figure out how to force the dynamic loader to re-read the LD_LIBRARY_PATH environment variable.
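One pattern that sidesteps the re-read problem entirely is to dlopen each dependency by absolute path, with RTLD_GLOBAL, before loading the dependent library: once a dependency is already mapped into the process, the loader no longer needs LD_LIBRARY_PATH to find it. A hedged sketch, with placeholder paths:

// Sketch of the alternative described above. The paths are placeholders, not
// real libraries; error handling is reduced to one message per failed load.
#include <dlfcn.h>
#include <cstdio>

int main() {
    // Load the third-party dependencies first, by absolute path, so their
    // symbols are available process-wide (RTLD_GLOBAL).
    const char* deps[] = {
        "/opt/thirdparty/lib/libdep1.so",   // placeholder path
        "/opt/thirdparty/lib/libdep2.so"    // placeholder path
    };
    for (const char* dep : deps) {
        if (dlopen(dep, RTLD_NOW | RTLD_GLOBAL) == NULL) {
            std::fprintf(stderr, "Failed to load %s: %s\n", dep, dlerror());
            return 1;
        }
    }
    // Now the dependent library can be loaded without touching LD_LIBRARY_PATH.
    void* handle = dlopen("/opt/mylibs/libmine.so", RTLD_NOW);
    if (handle == NULL) {
        std::fprintf(stderr, "Failed to load libmine.so: %s\n", dlerror());
        return 1;
    }
    return 0;
}

The analogous approach on the Java side is often to System.load() each dependency by absolute path, in dependency order, before loading the JNI library itself.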
See the accepted answer here:
Changing LD_LIBRARY_PATH at runtime for ctypes
In other words, what you're trying to do isn't possible. You'll need to launch a new process with an updated LD_LIBRARY_PATH (e.g., use ProcessBuilder and update environment() to concatenate the necessary directory)
This is a hack used to manipulate the JVM's library path programmatically. NOTE: it relies on internals of the ClassLoader implementation, so it might not work on all JVMs/versions.
String currentPath = System.getProperty("java.library.path");
System.setProperty( "java.library.path", currentPath + ":/path/to/my/libs" );
// this forces JVM to reload "java.library.path" property
Field fieldSysPath = ClassLoader.class.getDeclaredField( "sys_paths" );
fieldSysPath.setAccessible( true );
fieldSysPath.set( null, null );
This code uses UNIX-style file path separators ('/') and library path separator (':'). For cross-platform way of doing this use System Properties to get system-specific separators: http://download.oracle.com/javase/tutorial/essential/environment/sysprop.html
I have successfully implemented something similar for CollabNet Subversion Edge, which depends on the SIGAR libraries on ALL operating systems (we support Windows/Linux/Sparc, both 32-bit and 64-bit)...
Subversion Edge is a web application that helps you manage Subversion repositories through a web console; the SIGAR libraries let us show users data taken directly from the OS... You need to update the value of the property "java.library.path" at runtime. (https://ctf.open.collab.net/integration/viewvc/viewvc.cgi/trunk/console/grails-app/services/com/collabnet/svnedge/console/OperatingSystemService.groovy?revision=1890&root=svnedge&system=exsy1005&view=markup - note that the code at that URL is Groovy, but I have adapted it to Java here.)
The following example is the implementation from the URL above. (On Windows, the user will be required to restart the machine if the libraries were downloaded afterwards or through your application.) The code updates the user's path list "usr_paths" instead of the system path "sys_paths" (a permissions exception might be raised on some OSes when using the latter).
/**
 * Updates the java.library.path at run-time.
 * @param libraryDirPath
 */
public void addDirToJavaLibraryPathAtRuntime(String libraryDirPath)
        throws Exception {
    try {
        Field field = ClassLoader.class.getDeclaredField("usr_paths");
        field.setAccessible(true);
        String[] paths = (String[]) field.get(null);
        for (int i = 0; i < paths.length; i++) {
            if (libraryDirPath.equals(paths[i])) {
                return;
            }
        }
        String[] tmp = new String[paths.length + 1];
        System.arraycopy(paths, 0, tmp, 0, paths.length);
        tmp[paths.length] = libraryDirPath;
        field.set(null, tmp);
        String javaLib = "java.library.path";
        System.setProperty(javaLib, System.getProperty(javaLib) +
                File.pathSeparator + libraryDirPath);

    } catch (IllegalAccessException e) {
        throw new IOException("Failed to get permissions to set " +
                "library path to " + libraryDirPath);
    } catch (NoSuchFieldException e) {
        throw new IOException("Failed to get field handle to set " +
                "library path to " + libraryDirPath);
    }
}
The Bootstrap service class of the console (a Grails application, written in Groovy) runs a service and calls it with the full path to the library directory... UNIX-based servers do not need a restart to pick up the libraries, but Windows boxes do need a restart after the installation. In your case, you would call this as follows:
String appHomePath = "/YOUR/PATH/HERE/TO/YOUR/LIBRARY/DIRECTORY";
String yourLib = new File(appHomePath, "SUBDIRECTORY/").getCanonicalPath();
try {
    addDirToJavaLibraryPathAtRuntime(yourLib);
} catch (Exception e) {
    log.error("Error adding the MY Libraries at " + yourLib + " " +
            "java.library.path: " + e.getMessage());
}
For each OS you ship your application, just make sure to provide a matching version of the libraries for the specific platform (32bit-Linux, 64bit-Windows, etc...).

Are there C++ equivalents for the Protocol Buffers delimited I/O functions in Java?

I'm trying to read / write multiple Protocol Buffers messages from files, in both C++ and Java. Google suggests writing length prefixes before the messages, but there's no way to do that by default (that I could see).
However, the Java API in version 2.1.0 received a set of "Delimited" I/O functions which apparently do that job:
parseDelimitedFrom
mergeDelimitedFrom
writeDelimitedTo
Are there C++ equivalents? And if not, what's the wire format for the size prefixes the Java API attaches, so I can parse those messages in C++?
Update:
These now exist in google/protobuf/util/delimited_message_util.h as of v3.3.0.
I'm a bit late to the party here, but the below implementations include some optimizations missing from the other answers and will not fail after 64MB of input (though it still enforces the 64MB limit on each individual message, just not on the whole stream).
(I am the author of the C++ and Java protobuf libraries, but I no longer work for Google. Sorry that this code never made it into the official lib. This is what it would look like if it had.)
bool writeDelimitedTo(
    const google::protobuf::MessageLite& message,
    google::protobuf::io::ZeroCopyOutputStream* rawOutput) {
  // We create a new coded stream for each message. Don't worry, this is fast.
  google::protobuf::io::CodedOutputStream output(rawOutput);

  // Write the size.
  const int size = message.ByteSize();
  output.WriteVarint32(size);

  uint8_t* buffer = output.GetDirectBufferForNBytesAndAdvance(size);
  if (buffer != NULL) {
    // Optimization: The message fits in one buffer, so use the faster
    // direct-to-array serialization path.
    message.SerializeWithCachedSizesToArray(buffer);
  } else {
    // Slightly-slower path when the message is multiple buffers.
    message.SerializeWithCachedSizes(&output);
    if (output.HadError()) return false;
  }

  return true;
}

bool readDelimitedFrom(
    google::protobuf::io::ZeroCopyInputStream* rawInput,
    google::protobuf::MessageLite* message) {
  // We create a new coded stream for each message. Don't worry, this is fast,
  // and it makes sure the 64MB total size limit is imposed per-message rather
  // than on the whole stream. (See the CodedInputStream interface for more
  // info on this limit.)
  google::protobuf::io::CodedInputStream input(rawInput);

  // Read the size.
  uint32_t size;
  if (!input.ReadVarint32(&size)) return false;

  // Tell the stream not to read beyond that size.
  google::protobuf::io::CodedInputStream::Limit limit =
      input.PushLimit(size);

  // Parse the message.
  if (!message->MergeFromCodedStream(&input)) return false;
  if (!input.ConsumedEntireMessage()) return false;

  // Release the limit.
  input.PopLimit(limit);

  return true;
}
Okay, so I haven't been able to find top-level C++ functions implementing what I need, but some spelunking through the Java API reference turned up the following, inside the MessageLite interface:
void writeDelimitedTo(OutputStream output)
/* Like writeTo(OutputStream), but writes the size of
the message as a varint before writing the data. */
So the Java size prefix is a (Protocol Buffers) varint!
Armed with that information, I went digging through the C++ API and found the CodedStream header, which has these:
bool CodedInputStream::ReadVarint32(uint32 * value)
void CodedOutputStream::WriteVarint32(uint32 value)
Using those, I should be able to roll my own C++ functions that do the job.
They should really add this to the main Message API though; it's missing functionality considering Java has it, and so does Marc Gravell's excellent protobuf-net C# port (via SerializeWithLengthPrefix and DeserializeWithLengthPrefix).
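For what it's worth, a bare-bones sketch of such hand-rolled helpers, built only on ReadVarint32/WriteVarint32, might look like the following. It is illustrative only; the hardened implementation posted earlier in this thread handles buffering and edge cases more carefully.

#include <google/protobuf/message_lite.h>
#include <google/protobuf/io/coded_stream.h>

// Minimal sketch: length-prefix one message, then read it back under a limit.
bool writeOneDelimited(const google::protobuf::MessageLite& msg,
                       google::protobuf::io::CodedOutputStream* out) {
    out->WriteVarint32(msg.ByteSize());                        // varint size prefix
    return msg.SerializeToCodedStream(out) && !out->HadError();
}

bool readOneDelimited(google::protobuf::MessageLite* msg,
                      google::protobuf::io::CodedInputStream* in) {
    uint32_t size = 0;
    if (!in->ReadVarint32(&size)) return false;                // read the prefix
    google::protobuf::io::CodedInputStream::Limit limit = in->PushLimit(size);
    bool ok = msg->ParseFromCodedStream(in) && in->ConsumedEntireMessage();
    in->PopLimit(limit);
    return ok;
}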
I solved the same problem using CodedOutputStream/ArrayOutputStream to write the message (with the size) and CodedInputStream/ArrayInputStream to read the message (with the size).
For example, the following pseudo-code writes the message size followed by the message:
const unsigned bufLength = 256;
unsigned char buffer[bufLength];
Message protoMessage;
google::protobuf::io::ArrayOutputStream arrayOutput(buffer, bufLength);
google::protobuf::io::CodedOutputStream codedOutput(&arrayOutput);
codedOutput.WriteLittleEndian32(protoMessage.ByteSize());
protoMessage.SerializeToCodedStream(&codedOutput);
When writing you should also check that your buffer is large enough to fit the message (including the size). And when reading, you should check that your buffer contains a whole message (including the size).
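Concretely, the write-side check described above might look like this sketch (bufLength and protoMessage are the names from the snippet, coded_stream.h is assumed to be included, and VarintSize32 is the CodedOutputStream helper that reports how many bytes the length prefix will occupy):

// Make sure the length prefix plus the serialized message fit in the buffer
// before writing anything.
const int msgSize = protoMessage.ByteSize();
const int prefixSize =
    google::protobuf::io::CodedOutputStream::VarintSize32(msgSize);
if (msgSize + prefixSize > static_cast<int>(bufLength)) {
    // Not enough room: grow the buffer or fail before serializing.
}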
It definitely would be handy if they added convenience methods to C++ API similar to those provided by the Java API.
IstreamInputStream is very fragile to EOFs and other errors that easily occur when it is used together with std::istream. After such an error the protobuf streams are permanently damaged and any already-buffered data is destroyed. There is proper support for reading from traditional streams in protobuf:
implement google::protobuf::io::CopyingInputStream and use it together with CopyingInputStreamAdaptor. Do the same for the output variants.
In practice a parsing call ends up in google::protobuf::io::CopyingInputStream::Read(void* buffer, int size) where a buffer is given. The only thing left to do is read into it somehow.
Here's an example for use with Asio synchronized streams (SyncReadStream/SyncWriteStream):
#include <google/protobuf/io/zero_copy_stream_impl_lite.h>

using namespace google::protobuf::io;

template <typename SyncReadStream>
class AsioInputStream : public CopyingInputStream {
public:
    AsioInputStream(SyncReadStream& sock);
    int Read(void* buffer, int size);
private:
    SyncReadStream& m_Socket;
};

template <typename SyncReadStream>
AsioInputStream<SyncReadStream>::AsioInputStream(SyncReadStream& sock) :
    m_Socket(sock) {}

template <typename SyncReadStream>
int
AsioInputStream<SyncReadStream>::Read(void* buffer, int size)
{
    std::size_t bytes_read;
    boost::system::error_code ec;
    bytes_read = m_Socket.read_some(boost::asio::buffer(buffer, size), ec);

    if(!ec) {
        return bytes_read;
    } else if (ec == boost::asio::error::eof) {
        return 0;
    } else {
        return -1;
    }
}

template <typename SyncWriteStream>
class AsioOutputStream : public CopyingOutputStream {
public:
    AsioOutputStream(SyncWriteStream& sock);
    bool Write(const void* buffer, int size);
private:
    SyncWriteStream& m_Socket;
};

template <typename SyncWriteStream>
AsioOutputStream<SyncWriteStream>::AsioOutputStream(SyncWriteStream& sock) :
    m_Socket(sock) {}

template <typename SyncWriteStream>
bool
AsioOutputStream<SyncWriteStream>::Write(const void* buffer, int size)
{
    boost::system::error_code ec;
    m_Socket.write_some(boost::asio::buffer(buffer, size), ec);
    return !ec;
}
Usage:
AsioInputStream<boost::asio::ip::tcp::socket> ais(m_Socket); // Where m_Socket is a instance of boost::asio::ip::tcp::socket
CopyingInputStreamAdaptor cis_adp(&ais);
CodedInputStream cis(&cis_adp);

Message protoMessage;
uint32_t msg_size;

/* Read message size */
if(!cis.ReadVarint32(&msg_size)) {
    // Handle error
}

/* Make sure not to read beyond limit of message */
CodedInputStream::Limit msg_limit = cis.PushLimit(msg_size);
if(!msg.ParseFromCodedStream(&cis)) {
    // Handle error
}

/* Remove limit */
cis.PopLimit(msg_limit);
Here you go:
#include <google/protobuf/io/zero_copy_stream_impl.h>
#include <google/protobuf/io/coded_stream.h>

using namespace google::protobuf::io;

class FASWriter
{
    std::ofstream mFs;
    OstreamOutputStream *_OstreamOutputStream;
    CodedOutputStream *_CodedOutputStream;
public:
    FASWriter(const std::string &file) : mFs(file, std::ios::out | std::ios::binary)
    {
        assert(mFs.good());
        _OstreamOutputStream = new OstreamOutputStream(&mFs);
        _CodedOutputStream = new CodedOutputStream(_OstreamOutputStream);
    }

    inline void operator()(const ::google::protobuf::Message &msg)
    {
        _CodedOutputStream->WriteVarint32(msg.ByteSize());

        if ( !msg.SerializeToCodedStream(_CodedOutputStream) )
            std::cout << "SerializeToCodedStream error " << std::endl;
    }

    ~FASWriter()
    {
        delete _CodedOutputStream;
        delete _OstreamOutputStream;
        mFs.close();
    }
};

class FASReader
{
    std::ifstream mFs;
    IstreamInputStream *_IstreamInputStream;
    CodedInputStream *_CodedInputStream;
public:
    FASReader(const std::string &file) : mFs(file, std::ios::in | std::ios::binary)
    {
        assert(mFs.good());
        _IstreamInputStream = new IstreamInputStream(&mFs);
        _CodedInputStream = new CodedInputStream(_IstreamInputStream);
    }

    template<class T>
    bool ReadNext()
    {
        T msg;
        unsigned __int32 size;
        bool ret;
        if ( ret = _CodedInputStream->ReadVarint32(&size) )
        {
            CodedInputStream::Limit msgLimit = _CodedInputStream->PushLimit(size);
            if ( ret = msg.ParseFromCodedStream(_CodedInputStream) )
            {
                _CodedInputStream->PopLimit(msgLimit);
                std::cout << mFeed << " FASReader ReadNext: " << msg.DebugString() << std::endl;
            }
        }
        return ret;
    }

    ~FASReader()
    {
        delete _CodedInputStream;
        delete _IstreamInputStream;
        mFs.close();
    }
};
I ran into the same issue in both C++ and Python.
For the C++ version, I used a mix of the code Kenton Varda posted on this thread and the code from the pull request he sent to the protobuf team (because the version posted here doesn't handle EOF while the one he sent to github does).
#include <google/protobuf/message_lite.h>
#include <google/protobuf/io/zero_copy_stream.h>
#include <google/protobuf/io/coded_stream.h>

bool writeDelimitedTo(const google::protobuf::MessageLite& message,
                      google::protobuf::io::ZeroCopyOutputStream* rawOutput)
{
    // We create a new coded stream for each message. Don't worry, this is fast.
    google::protobuf::io::CodedOutputStream output(rawOutput);

    // Write the size.
    const int size = message.ByteSize();
    output.WriteVarint32(size);

    uint8_t* buffer = output.GetDirectBufferForNBytesAndAdvance(size);
    if (buffer != NULL)
    {
        // Optimization: The message fits in one buffer, so use the faster
        // direct-to-array serialization path.
        message.SerializeWithCachedSizesToArray(buffer);
    }
    else
    {
        // Slightly-slower path when the message is multiple buffers.
        message.SerializeWithCachedSizes(&output);
        if (output.HadError())
            return false;
    }

    return true;
}

bool readDelimitedFrom(google::protobuf::io::ZeroCopyInputStream* rawInput, google::protobuf::MessageLite* message, bool* clean_eof)
{
    // We create a new coded stream for each message. Don't worry, this is fast,
    // and it makes sure the 64MB total size limit is imposed per-message rather
    // than on the whole stream. (See the CodedInputStream interface for more
    // info on this limit.)
    google::protobuf::io::CodedInputStream input(rawInput);
    const int start = input.CurrentPosition();
    if (clean_eof)
        *clean_eof = false;

    // Read the size.
    uint32_t size;
    if (!input.ReadVarint32(&size))
    {
        if (clean_eof)
            *clean_eof = input.CurrentPosition() == start;
        return false;
    }

    // Tell the stream not to read beyond that size.
    google::protobuf::io::CodedInputStream::Limit limit = input.PushLimit(size);

    // Parse the message.
    if (!message->MergeFromCodedStream(&input)) return false;
    if (!input.ConsumedEntireMessage()) return false;

    // Release the limit.
    input.PopLimit(limit);

    return true;
}
And here is my python2 implementation:
from google.protobuf.internal import encoder
from google.protobuf.internal import decoder

#I had to implement this because the tools in google.protobuf.internal.decoder
#read from a buffer, not from a file-like object
def readRawVarint32(stream):
    mask = 0x80 # (1 << 7)
    raw_varint32 = []
    while 1:
        b = stream.read(1)
        #eof
        if b == "":
            break
        raw_varint32.append(b)
        if not (ord(b) & mask):
            #we found a byte starting with a 0, which means it's the last byte of this varint
            break
    return raw_varint32

def writeDelimitedTo(message, stream):
    message_str = message.SerializeToString()
    delimiter = encoder._VarintBytes(len(message_str))
    stream.write(delimiter + message_str)

def readDelimitedFrom(MessageType, stream):
    raw_varint32 = readRawVarint32(stream)
    message = None

    if raw_varint32:
        size, _ = decoder._DecodeVarint32(raw_varint32, 0)

        data = stream.read(size)
        if len(data) < size:
            raise Exception("Unexpected end of file")

        message = MessageType()
        message.ParseFromString(data)

    return message

#In place version that takes an already built protobuf object
#In my tests, this is around 20% faster than the other version
#of readDelimitedFrom()
def readDelimitedFrom_inplace(message, stream):
    raw_varint32 = readRawVarint32(stream)

    if raw_varint32:
        size, _ = decoder._DecodeVarint32(raw_varint32, 0)

        data = stream.read(size)
        if len(data) < size:
            raise Exception("Unexpected end of file")

        message.ParseFromString(data)

        return message
    else:
        return None
It might not be the best looking code and I'm sure it can be refactored a fair bit, but at least that should show you one way to do it.
Now the big problem: It's SLOW.
Even when using the C++ implementation of python-protobuf, it's one order of magnitude slower than in pure C++. I have a benchmark where I read 10M protobuf messages of ~30 bytes each from a file. It takes ~0.9s in C++, and 35s in python.
One way to make it a bit faster would be to re-implement the varint decoder so that it reads from a file and decodes in one go, instead of reading from a file and then decoding, as this code currently does. (Profiling shows that a significant amount of time is spent in the varint encoder/decoder.) But needless to say, that alone is not enough to close the gap between the Python version and the C++ version.
Any idea to make it faster is very welcome :)
Just for completeness, I post here an up-to-date version that works with the master version of protobuf and Python3
For the C++ version it is sufficient to use the utils in delimited_message_util.h; here is a MWE:
#include <google/protobuf/io/zero_copy_stream_impl.h>
#include <google/protobuf/util/delimited_message_util.h>

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

template <typename T>
bool writeManyToFile(std::deque<T> messages, std::string filename) {
    int outfd = open(filename.c_str(), O_WRONLY | O_CREAT | O_TRUNC);
    google::protobuf::io::FileOutputStream fout(outfd);

    bool success;
    for (auto msg : messages) {
        success = google::protobuf::util::SerializeDelimitedToZeroCopyStream(
            msg, &fout);
        if (! success) {
            std::cout << "Writing Failed" << std::endl;
            break;
        }
    }

    fout.Close();
    close(outfd);
    return success;
}

template <typename T>
std::deque<T> readManyFromFile(std::string filename) {
    int infd = open(filename.c_str(), O_RDONLY);
    google::protobuf::io::FileInputStream fin(infd);

    bool keep = true;
    bool clean_eof = true;
    std::deque<T> out;

    while (keep) {
        T msg;
        keep = google::protobuf::util::ParseDelimitedFromZeroCopyStream(
            &msg, &fin, nullptr);
        if (keep)
            out.push_back(msg);
    }

    fin.Close();
    close(infd);
    return out;
}
For the Python3 version, building on @fireboot's answer, the only thing that needed modification is the decoding of raw_varint32:
def getSize(raw_varint32):
    result = 0
    shift = 0
    b = six.indexbytes(raw_varint32, 0)
    result |= ((ord(b) & 0x7f) << shift)
    return result

def readDelimitedFrom(MessageType, stream):
    raw_varint32 = readRawVarint32(stream)
    message = None

    if raw_varint32:
        size = getSize(raw_varint32)

        data = stream.read(size)
        if len(data) < size:
            raise Exception("Unexpected end of file")

        message = MessageType()
        message.ParseFromString(data)

    return message
I was also looking for a solution to this. Here's the core of our solution, assuming some Java code wrote many MyRecord messages into a file with writeDelimitedTo. Open the file and loop, doing:
if(someCodedInputStream->ReadVarint32(&bytes)) {
    CodedInputStream::Limit msgLimit = someCodedInputStream->PushLimit(bytes);
    if(myRecord->ParseFromCodedStream(someCodedInputStream)) {
        //do your stuff with the parsed MyRecord instance
    } else {
        //handle parse error
    }
    someCodedInputStream->PopLimit(msgLimit);
} else {
    //maybe end of file
}
Hope it helps.
Working with an Objective-C version of protocol buffers, I ran into this exact issue. On sending from the iOS client to a Java-based server that uses parseDelimitedFrom, which expects the length as the first byte, I needed to call writeRawByte on the CodedOutputStream first. Posting here to hopefully help others who run into this issue. While working through it, one would think that Google protobufs would come with a simple flag that does this for you...
    Request* request = [rBuild build];
    [self sendMessage:request];
}

- (void) sendMessage:(Request *) request {
    //** get length
    NSData* n = [request data];
    uint8_t len = [n length];

    PBCodedOutputStream* os = [PBCodedOutputStream streamWithOutputStream:outputStream];
    //** prepend it to message, such that Request.parseDelimitedFrom(in) can parse it properly
    [os writeRawByte:len];
    [request writeToCodedOutputStream:os];
    [os flush];
}
Since I'm not allowed to write this as a comment to Kenton Varda's answer above: I believe there is a bug in the code he posted (as well as in other answers that have been provided). The following code:
...
google::protobuf::io::CodedInputStream input(rawInput);

// Read the size.
uint32_t size;
if (!input.ReadVarint32(&size)) return false;

// Tell the stream not to read beyond that size.
google::protobuf::io::CodedInputStream::Limit limit =
    input.PushLimit(size);
...
sets an incorrect limit because it does not take into account the size of the varint32 which has already been read from input. This can result in data loss/corruption as additional bytes are read from the stream which may be part of the next message. The usual way of handling this correctly is to delete the CodedInputStream used to read the size and create a new one for reading the payload:
...
uint32_t size;
{
    google::protobuf::io::CodedInputStream input(rawInput);
    // Read the size.
    if (!input.ReadVarint32(&size)) return false;
}
google::protobuf::io::CodedInputStream input(rawInput);
// Tell the stream not to read beyond that size.
google::protobuf::io::CodedInputStream::Limit limit =
    input.PushLimit(size);
...
You can use getline for reading a string from a stream, using the specified delimiter:
istream& getline ( istream& is, string& str, char delim );
(defined in the <string> header)
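For example, a tiny demonstration with a made-up delimited string:

#include <iostream>
#include <sstream>
#include <string>

int main() {
    std::istringstream in("alpha;beta;gamma");  // placeholder input
    std::string field;
    while (std::getline(in, field, ';')) {      // ';' is the delimiter
        std::cout << field << '\n';
    }
    return 0;
}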
