I've got an app written in Java and some native C++ code with system hooks. The two have to communicate with each other: the C++ subprogram must send some data to the Java one. I would have written the whole thing in one language if that had been possible for me. What I'm doing now is really silly, but it works: I hide the C++ program's window, have it write its data to its standard output, and then read that output from Java's standard input!
OK, I know what JNI is, but I'm looking for something easier than that (if anything exists).
Can anyone give me any idea on how to do this?
Any help will be greatly appreciated.
Sockets and CORBA are two techniques that come to mind.
Also, try Google's Protocol Buffers or Apache Thrift.
If you don't find JNI 'easy', then what you need is an IPC (inter-process communication) mechanism, so that your C++ process can communicate with your Java one.
What you are doing with your console redirection is, in essence, already a form of IPC.
Since the nature of what you are sending isn't exactly clear, it's hard to give a good answer. But if you have 'simple' objects or 'commands' that can be serialized easily into a simple protocol, then you could use a wire format such as Protocol Buffers.
#include <cstring>   // std::memset
#include <fstream>   // std::filebuf
#include <boost/interprocess/file_mapping.hpp>
#include <boost/interprocess/mapped_region.hpp>

// Create an IPC-enabled file of a known size
const int FileSize = 1000;
std::filebuf fbuf;
fbuf.open("cpp.out", std::ios_base::in | std::ios_base::out
    | std::ios_base::trunc | std::ios_base::binary);
// Set the size
fbuf.pubseekoff(FileSize - 1, std::ios_base::beg);
fbuf.sputc(0);
fbuf.close();
// Use Boost.Interprocess to map the file as a shared memory region
namespace ipc = boost::interprocess;
ipc::file_mapping out("cpp.out", ipc::read_write);
// Map the whole file with read-write permissions in this process
ipc::mapped_region region(out, ipc::read_write);
// Get the address and size of the mapped region
void *addr = region.get_address();
std::size_t size = region.get_size();
// Fill the memory with 0x01
std::memset(addr, 0x01, size);
// Flush the mapped pages back to the file
region.flush();
Now your Java program can open 'cpp.out' and read the contents like a normal file.
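On the Java side, a minimal sketch of such a reader using NIO's file mapping (class and method names are mine, not from the original post):

```java
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MappedReader {
    // Map the file produced by the C++ side and read its first byte.
    public static byte readFirstByte(String path) throws Exception {
        try (RandomAccessFile raf = new RandomAccessFile(path, "r");
             FileChannel ch = raf.getChannel()) {
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
            return buf.get(0);
        }
    }
}
```

After the C++ writer above has run, `readFirstByte("cpp.out")` would return 0x01. For real use you would agree on a small header (length, sequence number) so the reader knows when fresh data is available.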
Two approaches off the top of my head:
1) create two processes and use any suitable IPC mechanism;
2) compile the C++ app into a dynamic library and export functions with a standard C interface; those are callable from almost any language.
Explanation
I need to exchange binary structured data over a stream (TCP socket or
pipe) between C++, Java and Python programs.
Therefore my question:
How to exchange binary structured data over a stream for C++, Java and Python?
There is no way to create the complete object to be serialized beforehand - there must be the possibility to stream in and stream out the data.
For performance reasons I need a binary protocol format.
I want to use (if possible) some existing library, because hand-crafting all the (de-)serialization is a pain.
What I want
My idea is something like (for C++ writer):
StreamWriter sw(7); // fd to output to.
while( (DataSet const ds(get_next_row_from_db())) ) {
sw << ds; // data set is some structured data
}
and for C++ reader
StreamReader sr(9); // fd for input
while(sr) {
DataSet const ds(sr);
// handle ds
}
with a similar syntax and semantics for Java and Python.
What I did
I thought about using an existing library like Google Protocol Buffers, but it does not support streaming and requires the complete object hierarchy to be built before serialization.
I also thought about creating my own binary format, but that is too much work and pain.
I would recommend explicitly documenting how your data types are to be serialized, and writing serialization and deserialization code in each language as needed. I have found in the past that with good documentation of how the data is to be serialized, this is fairly painless.
Your other major option is to standardize on one platform's default serialization method, but that means you have to figure out that method and implement in the other languages. This tends to be trickier as the default serialization methods are often complex and not well documented.
The options are Apache Thrift, Google's Protocol Buffers and Apache Avro. A good comparison is at http://www.slideshare.net/IgorAnishchenko/pb-vs-thrift-vs-avro
So I recommend you try Apache Avro.
I'm writing a C++ server/client application (TCP) that is working fine but I will soon have to write a Java client which obviously has to be compatible with the C++ server it connects to.
As for now, when the server or client receives a string (text), it loops through the bytes until a '\0' is found, which marks the end of the string ...
Here's the question : is it still a good practice to handle strings that way when communicating over Java/C++ rather than C++/C++ ?
There's one thing you should read about: encodings. Basically, the same sequence of bytes can be interpreted in different ways. As long as you pass strings around within C++ or within Java, everyone agrees on their meaning, but once they cross the network (i.e. a byte stream) you must make up your mind. If in doubt, read about and use UTF-8.
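One concrete alternative to the '\0' terminator is to length-prefix the UTF-8 bytes. A sketch of the Java side (helper names are mine; `DataOutputStream.writeInt` writes a 4-byte big-endian value, i.e. network byte order, so the C++ side can read the length with `ntohl`):

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class StringFraming {
    // Write a string as: 4-byte big-endian length, then that many UTF-8 bytes.
    public static void writeString(DataOutputStream out, String s) throws IOException {
        byte[] bytes = s.getBytes(StandardCharsets.UTF_8);
        out.writeInt(bytes.length);
        out.write(bytes);
    }

    // Read a string framed by writeString.
    public static String readString(DataInputStream in) throws IOException {
        int len = in.readInt();
        byte[] bytes = new byte[len];
        in.readFully(bytes);
        return new String(bytes, StandardCharsets.UTF_8);
    }
}
```

On the C++ side you would read 4 bytes, `ntohl` them, then read exactly that many bytes and decode them as UTF-8. This also lets strings contain embedded zero bytes, which a '\0'-terminated protocol cannot.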
Consider using Protocol Buffers or Thrift instead of rolling your own protocol.
I'm reading up on non-blocking I/O as I'm using Akka and Play and blocking is a bad idea if avoidable in that context as far as I can read, but I can't get this to work together with my use case:
Get file over network (here alternatives using nio exist, but right now I'm using URL.openStream)
Decrypt file (PGP) using BouncyCastle (here I'm limited to InputStream)
Unzip file using standard Java GZIP (limited to InputStream)
Read each line in file, which is a position based flat file, and convert to a Case Classes (here I have no constraints on method for reading, right now scalax.io.Resource)
Persist using Slick/JDBC (Not sure if JDBC is blocking or not)
It's working right now basically using InputStreams all the way. However, in the interest of learning and improving my understanding, I'm investigating if I could do this with nonblocking IO.
I'd basically like to stream the file through a pipeline where I apply each step above and finally persist the data without blocking.
If code is required I can easily provide it, but I'm looking for a solution on a general level: what do I do when I'm dependent on libraries that use java.io?
I hope this helps with some of your points:
1/2/3/4) Akka can work well with libraries that use java.io.InputStream and java.io.OutputStream. See this page, specifically this section: http://doc.akka.io/docs/akka/snapshot/scala/io.html
A ByteStringBuilder can be wrapped in a java.io.OutputStream via the asOutputStream method. Likewise, ByteIterator can be wrapped in a java.io.InputStream via asInputStream. Using these, akka.io applications can integrate legacy code based on java.io streams.
1) You say get a file over the network. I'm guessing via HTTP? You could look into an asynchronous HTTP library. There are many fairly mature async HTTP libraries out there. I like using Spray Client in scala as it is built on top of akka, so plays well in an akka environment. It supports GZIP, but not PGP.
4) Another option: is the file small enough to store in memory? If so, you need not worry about being asynchronous, as you will not be doing any IO. You will not be blocking while waiting for IO; you will simply be using the CPU, since memory access is fast.
5) JDBC is blocking. You call a method with the SQL query as the argument, and the return type is a result set with the data. The method must block whilst performing the IO to be able to return this data.
There are some Java async database drivers, but all the ones I have seen seem unmaintained, so I haven't used them.
Fear not. Read this section of the akka docs for how to deal with blocking libraries in an akka environment:
http://doc.akka.io/docs/akka/snapshot/general/actor-systems.html#Blocking_Needs_Careful_Management
Decrypt file (PGP) using BouncyCastle (here I'm limited to InputStream)
As you are limited to an InputStream in this step you've answered your own question. You can do the part involving the network with NIO but your step (2) requires an InputStream. You could spool the file from the network to disk using NIO and then use streams from then on, for unzipping and decrypting (CipherInputStream) ... still blocking in theory but continuous in practice.
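A sketch of that spool-then-stream idea (names are mine; the source stream stands in for the network download, and the gunzip step stands in for the stream-only libraries such as BouncyCastle's CipherInputStream):

```java
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.zip.GZIPInputStream;

public class SpoolThenStream {
    // Spool the source to disk first, then run the stream-only steps
    // (decrypt, gunzip, ...) over the local file.
    public static String spoolAndGunzip(InputStream src, Path spool) throws Exception {
        Files.copy(src, spool, StandardCopyOption.REPLACE_EXISTING);
        try (GZIPInputStream in = new GZIPInputStream(Files.newInputStream(spool))) {
            return new String(in.readAllBytes(), StandardCharsets.UTF_8);
        }
    }
}
```

The first half (the download) can then be done with NIO or an async HTTP client, while the second half stays blocking but reads from fast local disk rather than the network.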
I know this isn't non-blocking IO exactly, but I think you should look at composing Futures (or Promises) with map, which is non-blocking in the Play Framework sense of things.
def getFile(location: String): File = { /* blocking code */ }
def decrypt(file: File): File = ...
def unzip(file: File): File = ...
def store(file: File): String = ...
def result(status: String): SimpleResult[Json] = ...

AsyncResult {
  Promise.pure(getFile("someloc")) map decrypt map unzip map store map result
}
On Linux, is there any way to programmatically get stats for single TCP connection? The stats I am looking for are the sort that are printed out by netstat -s, but for a single connection rather than in the aggregate across all connections. To give some examples: bytes in/out, retransmits, lost packets and so on.
I can run the code within the process that owns the socket, and it can be given the socket file descriptor. The code that sends/receives data is out of reach though, so for example there's no way to wrap recv()/send() to count bytes in/out.
I'll accept answers in any language, but C or Java are particularly relevant hence the tags.
The information nos refers to is available from C with:
#include <linux/tcp.h>
#include <sys/socket.h>
...
struct tcp_info info;
socklen_t optlen = sizeof(info);
getsockopt(sd, IPPROTO_TCP, TCP_INFO, &info, &optlen);
Unfortunately, as this is Linux specific, it is not exposed through the Java Socket API. If there is a way to obtain the raw file descriptor from the socket, you might be able to implement this as a native method.
I do not see a way to get to the descriptor. However, it might be possible with your own SocketImplFactory and SocketImpl.
It's probably worth noting that the TCP(7) manual page says this re TCP_INFO:
This option should not be used in code intended to be portable.
Most of the statistics you see with netstat -s are not tracked on a per-connection basis; only overall counters exist.
What you can do is pull the information out of /proc/net/tcp.
First, call readlink() on the entries in /proc/self/fd; you want to parse the socket inode number from the symlink target and match it against the line with the same inode number in /proc/net/tcp, which contains some rudimentary info about that socket/connection. That file is not very well documented, though, so expect to spend some time on Google and in the Linux kernel source to interpret the fields.
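A sketch of that inode matching in Java (helper names are mine; the symlink target for a socket fd looks like "socket:[12345]", and the inode is the tenth whitespace-separated column of a /proc/net/tcp row):

```java
import java.util.List;

public class TcpInode {
    // Extract the inode from a /proc/self/fd symlink target like "socket:[12345]".
    public static long socketInode(String linkTarget) {
        if (!linkTarget.startsWith("socket:[") || !linkTarget.endsWith("]"))
            throw new IllegalArgumentException("not a socket fd: " + linkTarget);
        return Long.parseLong(linkTarget.substring(8, linkTarget.length() - 1));
    }

    // Find the /proc/net/tcp line whose inode column matches;
    // the first line of the file is a header and is skipped.
    public static String findTcpLine(List<String> procNetTcpLines, long inode) {
        for (String line : procNetTcpLines.subList(1, procNetTcpLines.size())) {
            String[] cols = line.trim().split("\\s+");
            if (cols.length > 9 && cols[9].equals(Long.toString(inode)))
                return line;
        }
        return null;
    }
}
```

In a real program you would read the symlink with `Files.readSymbolicLink(Paths.get("/proc/self/fd/" + fd))` and the table with `Files.readAllLines(Paths.get("/proc/net/tcp"))`.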
I have a general socket programming question for you.
I have a C struct called Data:
struct Attribs {
    int color;
};

struct data {
    double speed;
    double length;
    char carName[32];
    struct Attribs attribs;
};
I would like to be able to create a similar structure in Java, create a socket, create the data packet with the above struct, and send it to a C++ socket listener.
What can you tell me about serialized data (basically, the ones and zeros transferred in the packet)? How does C++ "read" these packets and recreate the struct? How are structs like this laid out in the packet?
Generally, anything you can tell me to give me ideas on how to solve such a matter.
Thanks!
Be wary of endianness if you use binary serialization. Sun's JVM is big-endian, and if you are on Intel x86 you are on a little-endian machine.
I would use Java's ByteBuffer for fast native serialization. ByteBuffers are part of the NIO library, thus supposedly higher performance than the ol' DataInput/OutputStreams.
Be especially wary of serializing floats! As suggested above, it's safer to transfer all your data as character strings across the wire.
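For example, if the C++ side expects little-endian doubles, the Java writer has to say so explicitly, since ByteBuffer defaults to big-endian (method name is mine; the fields match the struct above):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class Endian {
    // Encode the two doubles from the struct in little-endian order,
    // matching what an x86 C++ reader would expect from a raw memcpy.
    public static byte[] encodeLittleEndian(double speed, double length) {
        ByteBuffer buf = ByteBuffer.allocate(2 * Double.BYTES)
                                   .order(ByteOrder.LITTLE_ENDIAN);
        buf.putDouble(speed).putDouble(length);
        return buf.array();
    }
}
```

Without the `order(...)` call the bytes would come out big-endian and the C++ side would silently read garbage values.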
On the C++ side, regardless of the networking, you will have a filled buffer of data at some point. Your deserialization code will then look something like:
size_t amount_read = 0;
data my_data;
memcpy(&my_data.speed, buffer + amount_read, sizeof(my_data.speed));
amount_read += sizeof(my_data.speed);
memcpy(&my_data.length, buffer + amount_read, sizeof(my_data.length));
amount_read += sizeof(my_data.length);
Note that the sizes of the basic C++ types are implementation-defined, so primitive types in Java and C++ don't translate directly.
You could use Google Protocol Buffers. It's my preferred solution when dealing with a variety of data structures.
You could use JSON for serialization too.
The basic process is:
the Java app creates a portable version of the structs, for example XML
the Java app sends the XML to the C++ app via a socket
the C++ app receives the XML from the Java app
the C++ app creates instances of the structs using the data in the XML message
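A minimal sketch of the first step (hand-built XML for the struct fields; the class and tag names are mine, and a real application would use a proper XML library and escape the values):

```java
public class DataXml {
    // Render the fields of the C struct 'data' as a flat XML fragment.
    public static String toXml(double speed, double length, String carName, int color) {
        return "<data>"
             + "<speed>" + speed + "</speed>"
             + "<length>" + length + "</length>"
             + "<carName>" + carName + "</carName>"
             + "<color>" + color + "</color>"
             + "</data>";
    }
}
```

The C++ side can then parse this with any XML library and copy the values into its own struct, sidestepping endianness and padding issues entirely.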