I am developing an interface that takes as input an encrypted byte stream -- probably a very large one -- and generates output of more or less the same format.
The input format is this:
{N byte envelope}
- encryption key IDs &c.
{X byte encrypted body}
The output format is the same.
Here's the usual use case (heavily pseudocoded, of course):
Message incomingMessage = new Message (inputStream);
ProcessingResults results = process (incomingMessage);
MessageEnvelope messageEnvelope = new MessageEnvelope ();
// set message encryption options &c. ...
Message outgoingMessage = new Message ();
outgoingMessage.setEnvelope (messageEnvelope);
writeProcessingResults (results, outgoingMessage);
outgoingMessage.writeToOutput (outputStream);
To me, it seems to make sense to use the same object to encapsulate this behaviour, but I'm at a bit of a loss as to how I should go about this. It isn't practical to load the entire encrypted body at once; I need to be able to stream it (so I'll be using some kind of input stream filter to decrypt it), but at the same time I need to be able to write out new instances of this object. What's a good approach to making this work? What should Message look like internally?
I wouldn't create one class to handle both input and output - one class, one responsibility. I would have two filter streams, one for input/decryption and one for output/encryption:
InputStream decrypted = new DecryptingStream(inputStream, decryptionParameters);
...
OutputStream encrypted = new EncryptingStream(outputStream, encryptionOptions);
They could have something like a lazy-init mechanism that reads the envelope before the first read() call / writes the envelope before the first write() call. You can still use classes like Message or MessageEnvelope in the filter implementations, but they can stay package-private, non-API classes.
The processing then knows nothing about encryption or decryption; it just works on a stream. You can also use both streams at the same time during processing, streaming the processing input and output.
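A rough sketch of what the input-side filter could look like with the lazy envelope read (DecryptionParameters, MessageEnvelope.readFrom, and openDecryptingStream are assumed helpers here, not existing APIs):

import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

// Sketch: the envelope is consumed from the underlying stream on the first
// read, and only then is the decrypting delegate wired up.
class DecryptingStream extends FilterInputStream {

    private final DecryptionParameters parameters; // hypothetical holder for key IDs etc.
    private InputStream decrypted; // created lazily on first read

    DecryptingStream(InputStream in, DecryptionParameters parameters) {
        super(in);
        this.parameters = parameters;
    }

    private InputStream decrypted() throws IOException {
        if (decrypted == null) {
            MessageEnvelope envelope = MessageEnvelope.readFrom(in); // consume the header
            decrypted = parameters.openDecryptingStream(in, envelope); // e.g. a CipherInputStream
        }
        return decrypted;
    }

    @Override
    public int read() throws IOException {
        return decrypted().read();
    }

    @Override
    public int read(byte[] b, int off, int len) throws IOException {
        return decrypted().read(b, off, len);
    }
}

The EncryptingStream would mirror this as a FilterOutputStream that writes the envelope before the first write().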
Can you split the body at arbitrary locations?
If so, I would have two threads, an input thread and an output thread, with a concurrent queue of strings that the output thread monitors. Something like:
ConcurrentLinkedQueue<String> outputQueue = new ConcurrentLinkedQueue<String>();
...
private void readInput(BufferedReader reader) throws IOException {
    String str;
    while ((str = reader.readLine()) != null) {
        outputQueue.offer(processStream(str));
    }
}

private String processStream(String input) {
    String output = input; // do something useful here
    return output;
}

private void writeOutput(Writer out) throws IOException, InterruptedException {
    while (true) {
        while (outputQueue.peek() == null) {
            Thread.sleep(100);
        }
        out.write(outputQueue.poll());
    }
}
Note: this is a sketch of the design, not production code; the output loop, for example, never terminates on its own.
If you need to read and write at the same time, you either have to use threads (different threads reading and writing) or asynchronous I/O (the java.nio package). Using input and output streams from different threads is not a problem.
If you want to make a streaming API in Java, you should usually provide an InputStream for reading and an OutputStream for writing. These can then be passed to other APIs, so that you can chain things and keep the data flowing as streams all the way through.
Input example:
Message message = new Message(inputStream);
results = process(message.getInputStream());
Output example:
Message message = new Message(outputStream);
writeContent(message.getOutputStream());
The Message needs to wrap the given streams with classes that do the needed encryption and decryption.
Note that reading multiple messages at the same time, or writing multiple messages at the same time, would need support from the protocol too. You need to get the synchronization correct.
You should check the Wikipedia article on block cipher modes of operation that support encryption of streams. Each encryption algorithm may support only a subset of these.
Buffered streams will allow you to read, encrypt/decrypt and write in a loop.
Examples demonstrating ZipInputStream and ZipOutputStream could provide some guidance on how to solve this. See example.
What you need is cipher streams (CipherInputStream). Here is an example of how to use it.
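For instance, a minimal sketch using javax.crypto's cipher streams in the buffered read/decrypt/process/re-encrypt loop mentioned above (the AES/CTR transformation and the key/IV handling are placeholder assumptions, not part of the original question):

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.security.GeneralSecurityException;

import javax.crypto.Cipher;
import javax.crypto.CipherInputStream;
import javax.crypto.CipherOutputStream;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public final class CipherStreams {

    // Decrypts 'in' while reading and re-encrypts while writing to 'out',
    // one buffer at a time, so the whole body is never held in memory.
    public static void recrypt(InputStream in, OutputStream out,
                               byte[] keyBytes, byte[] inIv, byte[] outIv)
            throws IOException, GeneralSecurityException {
        SecretKeySpec key = new SecretKeySpec(keyBytes, "AES");

        Cipher decrypt = Cipher.getInstance("AES/CTR/NoPadding");
        decrypt.init(Cipher.DECRYPT_MODE, key, new IvParameterSpec(inIv));

        Cipher encrypt = Cipher.getInstance("AES/CTR/NoPadding");
        encrypt.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(outIv));

        try (InputStream plain = new CipherInputStream(in, decrypt);
             OutputStream cipher = new CipherOutputStream(out, encrypt)) {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = plain.read(buffer)) != -1) {
                // Process buffer[0..read) here before re-encrypting.
                cipher.write(buffer, 0, read);
            }
        }
    }
}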
I agree with Arne: the data processor shouldn't know about encryption; it just needs to read the decrypted body of the message and write out the results, and stream filters should take care of the encryption. However, since this is logically operating on the same piece of information (a Message), I think they should be packaged inside one class which handles the message format, although the encryption/decryption streams are indeed independent from it.
Here's my idea for the structure, flipping the architecture around somewhat, and moving the Message class outside the encryption streams:
class Message {
    InputStream input;
    Envelope envelope;

    public Message(InputStream input) {
        assert input != null;
        this.input = input;
    }

    public Message(Envelope envelope) {
        assert envelope != null;
        this.envelope = envelope;
    }

    public Envelope getEnvelope() {
        if (envelope == null && input != null) {
            // Read envelope from beginning of stream
            envelope = new Envelope(input);
        }
        return envelope;
    }

    public InputStream read() {
        assert input != null;
        // Initialise the decryption stream
        return new DecryptingStream(input, getEnvelope().getEncryptionParameters());
    }

    public OutputStream write(OutputStream output) {
        // Write envelope header to output stream
        getEnvelope().write(output);
        // Initialise the encryption
        return new EncryptingStream(output, getEnvelope().getEncryptionParameters());
    }
}
Now you can use it by creating a new message for the input, and one for the output:
OutputStream output; // This is the stream for sending the message
Message inputMessage = new Message(input);
Message outputMessage = new Message(inputMessage.getEnvelope());
process(inputMessage.read(), outputMessage.write(output));
Now the process method just needs to read chunks of data as required from the input, and write results to the output:
public void process(InputStream input, OutputStream output) throws IOException {
    byte[] buffer = new byte[1024];
    int read;
    while ((read = input.read(buffer)) > 0) {
        // Process buffer contents, writing to output as you go.
    }
}
This all now works in lockstep, and you don't need any extra threads. You can also abort early without having to process the whole message (if the output stream is closed for example).
Before writing something like "why don't you use a Java HTTP client such as Apache's, etc.", I need you to know that the reason is SSL. I wish I could, they are very convenient, but I can't.
None of the available HTTP clients support the GOST cipher suite, and I get a handshake exception all the time. The ones which do support the suite don't support SNI (they are also proprietary) - I'm returned the wrong cert and get a handshake exception over and over again.
The only solution was to configure openssl (with the gost engine) and curl, and finally execute the command from Java.
Having said that, I wrote a simple snippet for executing a command and getting an input stream of the response:
public static InputStream executeCurlCommand(String finalCurlCommand) throws IOException
{
    return Runtime.getRuntime().exec(finalCurlCommand).getInputStream();
}
Additionally, I can convert the returned InputStream to a string like this:
public static String convertResponseToString(InputStream isToConvertToString) throws IOException
{
    StringWriter writer = new StringWriter();
    IOUtils.copy(isToConvertToString, writer, "UTF-8");
    return writer.toString();
}
However, I can't see a pattern by which I could reliably extract a good response or a desired response header.
Here's what I mean:
After executing a command (with the -i flag), there might be lots and lots of information in the output [screenshot omitted].
At first, I thought that I could just split it on '\n', but a required response header or the response itself may not satisfy that criterion (prettified JSON or a long redirect URL breaks the rule).
Also, the static line "GOST engine already loaded" is a bit annoying (but I hope that I'll be able to get rid of it, and that no unrelated info like that will show up).
I do believe that there's a pattern which I can use.
For now I can only do this:
public static String getLocationRedirectHeaderValue(String curlResponse)
{
    String locationHeaderValue = curlResponse.substring(curlResponse.indexOf("Location: "));
    locationHeaderValue = locationHeaderValue.substring(0, locationHeaderValue.indexOf("\n")).replace("Location: ", "");
    return locationHeaderValue;
}
Which is not nice, obviously.
Thanks in advance.
Instead of reading the whole result as a single string, you might want to consider reading it line by line using a Scanner.
Then keep a few status variables around. The main task is to separate the header from the body. In the body you might have a payload you want to treat differently (e.g. use GSON to make a JSON object).
The nice thing: header and body are separated by an empty line. So your code would be along these lines:
boolean inHeader = true;
StringBuilder b = new StringBuilder();
String lastLine = "";
// Technically you would need a Multimap
Map<String, String> headers = new HashMap<>();
Scanner scanner = new Scanner(yourInputStream);
while (scanner.hasNextLine()) {
    String line = scanner.nextLine();
    if (line.length() == 0) {
        inHeader = false;
    } else {
        if (inHeader) {
            // if the line starts with a space it is a
            // continuation of the previous header
            treatHeader(line, lastLine);
            lastLine = line;
        } else {
            b.append(line);
            b.append(System.lineSeparator());
        }
    }
}
String body = b.toString();
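The treatHeader helper is left undefined above; one possible sketch (an assumption, not tested code: it expects the headers map to be a field, keeps only the last value per header name, and folds continuation lines into the previous header):

private void treatHeader(String line, String lastLine) {
    if ((line.startsWith(" ") || line.startsWith("\t")) && lastLine.contains(":")) {
        // Continuation of the previous header's value
        String previousName = lastLine.substring(0, lastLine.indexOf(':')).trim();
        headers.put(previousName, headers.get(previousName) + " " + line.trim());
    } else {
        int colon = line.indexOf(':');
        if (colon > 0) {
            headers.put(line.substring(0, colon).trim(), line.substring(colon + 1).trim());
        }
        // Status lines like "HTTP/1.1 200 OK" have no colon and are skipped here.
    }
}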
Is there any way to reuse an InputStream by changing its content (without a new statement)?
For instance, I was able to do something very close to my requirement, but not close enough.
In the following code I am using a SequenceInputStream, and every time I am adding a new InputStream to that sequence.
But I would like to do the same thing using the same InputStream (I don't care which implementation of InputStream).
I thought about the mark()/reset() APIs, but I still need to change the content to be read.
The idea of avoiding new InputStream creations is because of performance issues.
// Input streams
List<InputStream> inputStreams = new ArrayList<InputStream>();
try {
    // First InputStream
    byte[] input = new byte[]{24, 8, 102, 97};
    inputStreams.add(new ByteArrayInputStream(input));
    Enumeration<InputStream> enu = Collections.enumeration(inputStreams);
    SequenceInputStream is = new SequenceInputStream(enu);
    byte[] out = new byte[input.length];
    is.read(out);
    for (byte b : out) {
        System.out.println(b); // Will print 24, 8, 102, 97
    }

    // Second InputStream
    input = new byte[]{4, 66};
    inputStreams.add(new ByteArrayInputStream(input));
    out = new byte[input.length];
    is.read(out);
    for (byte b : out) {
        System.out.println(b); // Will print 4, 66
    }
    is.close();
} catch (Exception e) {
    // ignored for brevity
}
No, you can't restart reading an input stream once it reaches the end: a stream is unidirectional, i.e. it moves only in a single direction.
But refer to the links below; they may help:
How to Cache InputStream for Multiple Use
Getting an InputStream to read more than once, regardless of markSupported()
You could create your own implementation (subclass) of InputStream that would allow what you require. I doubt there is an existing implementation of this.
I highly doubt you'll get any measurable performance boost from this though; there's not much logic in e.g. FileInputStream that you wouldn't need to perform anyway, and Java is well optimized for garbage-collecting short-lived objects.
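For what it's worth, a minimal sketch of such a subclass (a hypothetical class, not an existing JDK one) could expose a method that swaps the backing buffer in place:

import java.io.InputStream;

// Hypothetical: an InputStream whose content can be replaced in place,
// avoiding a new stream allocation per payload.
public class ReusableByteArrayInputStream extends InputStream {
    private byte[] buf = new byte[0];
    private int pos;

    // Swap in new content and rewind to the beginning.
    public void setContent(byte[] content) {
        this.buf = content;
        this.pos = 0;
    }

    @Override
    public int read() {
        return pos < buf.length ? (buf[pos++] & 0xFF) : -1;
    }
}

Usage would be stream.setContent(new byte[]{4, 66}) followed by another round of reads, but as noted above, the savings over simply constructing a new ByteArrayInputStream are likely negligible.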
In a Java (only) Play 2.3 project we need to send a non-chunked response of an InputStream directly to the client. The InputStream comes from a remote service from which we want to stream directly to the client, without blocking or buffering to a local file. Since we know the size before reading the input stream, we do not want a chunked response.
What is the best way to return a result for an input stream with a known size (preferably without using Scala)?
When looking at the default ok(file, ...) method for returning File objects, it goes deep into Play internals which are only accessible from Scala, and it uses the Play-internal execution context which can't even be accessed from outside. It would be nice if it worked identically, just with an InputStream.
FWIW I have now found a way to serve an InputStream, which basically duplicates the logic of the Results.ok(File) method to allow directly passing in an InputStream.
The key is to use the scala call to create an Enumerator from an InputStream: play.api.libs.iteratee.Enumerator$.MODULE$.fromStream
private final MessageDispatcher fileServeContext =
        Akka.system().dispatchers().lookup("file-serve-context");

protected Result serveInputStream(InputStream inputStream, String fileName, long contentLength) {
    response().setHeader(
            HttpHeaders.CONTENT_DISPOSITION,
            "attachment; filename=\"" + fileName + "\"");
    // Set Content-Type header based on file extension.
    scala.Option<String> contentType = MimeTypes.forFileName(fileName);
    if (contentType.isDefined()) {
        response().setHeader(CONTENT_TYPE, contentType.get());
    } else {
        response().setHeader(CONTENT_TYPE, ContentType.DEFAULT_BINARY.getMimeType());
    }
    response().setHeader(CONTENT_LENGTH, Long.toString(contentLength));
    return new WrappedScalaResult(new play.api.mvc.Result(
            new ResponseHeader(StatusCode.OK, toScalaMap(response().getHeaders())),
            // Enumerator.fromStream() will also close the input stream once it is done.
            play.api.libs.iteratee.Enumerator$.MODULE$.fromStream(
                    inputStream,
                    FILE_SERVE_CHUNK_SIZE,
                    fileServeContext),
            play.api.mvc.HttpConnection.KeepAlive()));
}
/**
 * A simple Result which wraps a Scala result so we can call it from our Java controllers.
 */
private static class WrappedScalaResult implements Result {
    private play.api.mvc.Result scalaResult;

    public WrappedScalaResult(play.api.mvc.Result scalaResult) {
        this.scalaResult = scalaResult;
    }

    @Override
    public play.api.mvc.Result toScala() {
        return scalaResult;
    }
}
I'm having issues with reading decrypted data from conceal. It looks like I can't correctly finish streaming.
I presume there is some issue with conceal, because when I switch my proxy stream (just the encryption part) to not run through conceal, everything works as expected. I'm also assuming that the writing is OK: there is no exception whatsoever, and I can find the encrypted file on disk.
I'm proxying my data through a ContentProvider to allow other apps to read the decrypted data when the user wants it (sharing, ...).
In my content provider I'm using the openFile method to allow ContentResolvers to read the data:
@Override
public ParcelFileDescriptor openFile(Uri uri, String mode) throws FileNotFoundException {
    try {
        ParcelFileDescriptor[] pipe = ParcelFileDescriptor.createPipe();
        String name = uri.getLastPathSegment();
        File file = new File(name);
        InputStream fileContents = mStorageProxy.getDecryptInputStream(file);
        ParcelFileDescriptor.AutoCloseOutputStream stream =
                new ParcelFileDescriptor.AutoCloseOutputStream(pipe[1]);
        PipeThread pipeThread = new PipeThread(fileContents, stream);
        pipeThread.start();
        return pipe[0];
    } catch (IOException e) {
        e.printStackTrace();
    }
    return null;
}
I guess in the Facebook app the Android team is instead using a standard query() method with a byte array sent in MediaStore.MediaColumns(), which is not suitable for me because I'm not only encrypting media files, and I also like the stream approach better.
This is how I'm reading from the InputStream. It's basically a pipe between two ParcelFileDescriptors. The InputStream comes from conceal; it is originally a FileInputStream wrapped in a BufferedInputStream.
static class PipeThread extends Thread {
    InputStream input;
    OutputStream out;

    PipeThread(InputStream inputStream, OutputStream out) {
        this.input = inputStream;
        this.out = out;
    }

    @Override
    public void run() {
        byte[] buf = new byte[1024];
        int len;
        try {
            while ((len = input.read(buf)) > 0) {
                out.write(buf, 0, len);
            }
            input.close();
            out.flush();
            out.close();
        } catch (IOException e) {
            Log.e(getClass().getSimpleName(),
                    "Exception transferring file", e);
        }
    }
}
I've tried other ways of reading the stream, so that really shouldn't be the issue.
Finally, here's the exception I constantly end up with. Do you know what the issue could be? It points to native calls, which I got lost in:
Exception transferring file
com.facebook.crypto.cipher.NativeGCMCipherException: decryptFinal
at com.facebook.crypto.cipher.NativeGCMCipher.decryptFinal(NativeGCMCipher.java:108)
at com.facebook.crypto.streams.NativeGCMCipherInputStream.ensureTagValid(NativeGCMCipherInputStream.java:126)
at com.facebook.crypto.streams.NativeGCMCipherInputStream.read(NativeGCMCipherInputStream.java:91)
at com.facebook.crypto.streams.NativeGCMCipherInputStream.read(NativeGCMCipherInputStream.java:76)
EDIT:
It looks like the stream is working OK, but what fails is the last iteration of reading from it. Since I'm using a buffer, it seems that the buffer being bigger than the amount of remaining data is causing the issue. I've been looking into the sources of conceal and they seem fine in this regard. Could it be failing somewhere in the native layer?
Note: I've managed to get the decrypted file except for its final chunk of bytes, so I have, for example, an incomplete image file (with the last few thousand pixels not displayed).
From my little experience with conceal, I have noticed that only the same application that encrypted a file can decrypt it successfully, regardless of whether another app shares the same package name. Keep this in mind.
This was resolved in https://github.com/facebook/conceal/issues/24. For posterity's sake, the problem here is that the author forgot to call close() on the output stream.
Updates:
For now, using a Map. A class that wants to send something to the other instance sends the object and a routing string.
Use an object stream; use Java serialization to write the object to the servlet.
Write the String first and then the object.
The receiving servlet wraps its input stream in an ObjectInputStream, reads the String first and then the object. The routing string decides where it goes.
A more generic way might have been to send a class name and its declared method, or a Spring bean name, but this was enough for us.
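A compact sketch of that scheme (class and method names here are hypothetical, and error handling is omitted): the sender writes the routing string first, then the object, and the servlet reads them back in the same order.

import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.net.HttpURLConnection;
import java.net.URL;

// Hypothetical sender for the "routing string + object" scheme above.
public final class RoutedObjectSender {

    public static void send(URL servletUrl, String routing, Serializable payload)
            throws IOException {
        HttpURLConnection conn = (HttpURLConnection) servletUrl.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        try (ObjectOutputStream out = new ObjectOutputStream(conn.getOutputStream())) {
            out.writeUTF(routing); // routing string first
            out.writeObject(payload); // then the serialized object
        }
        conn.getResponseCode(); // force the request to complete
    }
}

// Receiving side, inside the servlet's doPost(...):
// ObjectInputStream in = new ObjectInputStream(request.getInputStream());
// String routing = in.readUTF();
// Object payload = in.readObject();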
Original question
I know the basic way but want details of the steps. I also know I can use JAXB or RMI or EJB ... but I would like to do this using pure serialization to a byte array, then encode and send it from servlet 1 in JVM 1 to servlet 2 in JVM 2 (two app server instances on the same LAN, with the same Java versions and jars set up in both J2EE apps).
The basic steps are (approach 1):
Serialize any Serializable object to a byte array and make a string. Exact code below.
Base64-encode the output of 1. Is Base64 required, or can I skip step 2?
Use java.net.URLEncoder.encode to encode the string.
Use Apache HttpComponents or the URL class to send it from servlet 1 to 2 after naming the params.
On servlet 2 the J2EE framework will already have URL-decoded it; now just do the reverse steps and cast to the object type according to the param name.
Since both are our apps, we would know the param-name-to-type/class mapping. Basically I'm looking for the fastest & most convenient way of sending objects between JVMs.
Example :
POJO class to send
package tst.ser;

import java.io.Serializable;

public class Bean1 implements Serializable {
    /**
     * make it 2 if we add something without default handling
     */
    private static final long serialVersionUID = 1L;

    private String s;

    public String getS() {
        return s;
    }

    public void setS(String s) {
        this.s = s;
    }
}
Utility:
package tst.ser;

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.net.URLEncoder;

public class SerUtl {
    public static String serialize(Object o) {
        String s = null;
        ObjectOutputStream os = null;
        try {
            ByteArrayOutputStream baos = new ByteArrayOutputStream();
            os = new ObjectOutputStream(baos);
            os.writeObject(o);
            os.flush();
            s = Base64.encode(baos.toByteArray());
            //s = URLEncoder.encode(s, "UTF-8");//keep this for the sending part
        } catch (Exception e) {
            // TODO: logger
            e.printStackTrace();
            return null;
        } finally {
            // close os; it is in RAM, so this is a formality
            try {
                os.close();// not required for in-RAM streams
            } catch (Exception e2) {// TODO: handle exception, logger
            }
            os = null;
        }
        return s;
    }

    public static Object deserialize(String s) {
        Object o = null;
        ObjectInputStream is = null;
        try {
            // do Base64 decode if it was done in serialize
            is = new ObjectInputStream(new ByteArrayInputStream(
                    Base64.decode(s)));
            o = is.readObject();
        } catch (Exception e) {
            // TODO: logger
            e.printStackTrace();
            return null;
        } finally {
            // close is; it is in RAM, so this is a formality
            try {
                is.close();// not required for in-RAM streams
            } catch (Exception e2) {// TODO: handle exception, logger
            }
            is = null;
        }
        return o;
    }
}
Sample sending servlet:
Bean1 b = new Bean1();
b.setS("asdd");
String s = SerUtl.serialize(b);
// do URLEncoder.encode here if the sending lib does not.
HttpParam p = new HttpParam("bean1", s);
// http components send obj
Sample receiving servlet:
String s = request.getParameter("bean1");
Bean1 b1 = (Bean1) SerUtl.deserialize(s);
Serialize any Serializable object to a byte array
Yes.
and make a string.
No.
Exact statements below
os = new ObjectOutputStream(new ByteArrayOutputStream());
os.writeObject(o);
s = os.toString();
// s = Base64.encode(s);//Need this some base 64 impl like Apache ?
s = URLEncoder.encode(s, "UTF-8");
These statements don't even do what you have described, which is in any case incorrect. OutputStream.toString() doesn't turn any bytes into Strings; it just returns a unique object identifier.
Base64 output of 1.
The base64 output should use the byte array as the input, not a String. String is not a container for binary data. See below for corrected code.
ByteArrayOutputStream baos = new ByteArrayOutputStream();
os = new ObjectOutputStream(baos);
os.writeObject(o);
os.close();
s = Base64.encode(baos.toByteArray()); // adjust to suit your API
s = URLEncoder.encode(s, "UTF-8");
This at least accomplishes your objective.
Is it required to base 64 or can skip step 2?
If you want a String you must encode it somehow.
Use java.net.URLEncoder.encode to encode the string
This is only necessary if you're sending it as a GET or POST parameter.
Use apache http components or URL class to send from servlet 1 to 2 after naming params
Yes.
On servlet 2 the J2EE framework will have already URL-decoded it; now just do the reverse steps and cast to the object according to the param name.
Yes, but remember to go directly from the base64-encoded string to the byte array, no intermediate String.
Basically looking for the fastest & most convenient way of sending objects between JVMs.
These objectives aren't necessarily reconcilable. The most convenient these days is probably XML or JSON but I doubt that these are faster than Serialization.
os = null;
Setting references that are about to fall out of scope to null is pointless.
HttpParam p = new HttpParam ("bean1", s);
It's possible that HttpParam does the URLEncoding for you. Check this.
You need not convert to a string. You can post the binary data straight to the servlet, for example by creating an ObjectOutputStream on top of an HttpURLConnection's output stream. Set the request method to POST.
The servlet handling the post can deserialize from an ObjectInputStream created from the HttpServletRequest's ServletInputStream.
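A short sketch of that receiving side (a hypothetical servlet, assuming the posted body is a serialized object as described):

import java.io.IOException;
import java.io.ObjectInputStream;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical receiving servlet: deserializes the POSTed object directly
// from the request body; no Base64 or URL encoding involved.
public class ObjectReceiverServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        try (ObjectInputStream in = new ObjectInputStream(req.getInputStream())) {
            Object payload = in.readObject();
            // ... dispatch on the payload type here ...
        } catch (ClassNotFoundException e) {
            throw new ServletException("Unknown class in request body", e);
        }
    }
}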
I'd recommend JAXB any time over binary serialization, though. The frameworks are not only great for interoperability, they also speed up development and create more robust solutions.
The advantages I see are way better tooling, type safety, and code generation, keeping your options open so you can call your code from another version or another language, and easier debugging. Don't underestimate the cost of hard to solve bugs caused by accidentally sending the wrong type or doubly escaped data to the servlet. I'd expect the performance benefits to be too small to compensate for this.
Found this Base64 impl that does a lot of the heavy lifting for me: http://iharder.net/base64
It has utility methods:
String encodeObject(java.io.Serializable serializableObject, int options )
Object decodeToObject(String encodedObject, int options, final ClassLoader loader )
Using:
try {
    String dat = Base64.encodeObject(srlzblObj, options);
    StringBuilder data = new StringBuilder().append("type=");
    data.append(appObjTyp).append("&obj=")
        .append(java.net.URLEncoder.encode(dat, "UTF-8"));
    // ... send 'data' as the POST body ...
} catch (IOException e) {
    // TODO: log
}
I use the type param to tell the receiving JVM what type of object I'm sending. Each servlet/JSP receives at most 4 types, usually 1. Again, since it's our own app and classes that we are sending, this is quick (in time taken to send over the network) and simple.
On the other end, unpack it with:
String objData = request.getParameter("obj");
Object obj = Base64.decodeToObject(objData, options, null);
Process it, encode the result, send result back:
reply = Base64.encodeObject(result, options);
out.print("rsp=" + reply);
The calling servlet/JSP gets the result:
if (reply != null && reply.length() > 4) {
    String objDataFromServletParam = reply.substring(4);
    Object obj = Base64.decodeToObject(objDataFromServletParam, options, null);
}
options can be 0 or Base64.GZIP
You can use JMS as well.
Apache ActiveMQ is one good solution. You will not have to bother with all this conversion.
/**
 * @param objectToQueue the object to put on the queue
 * @throws JMSException
 */
public void sendMessage(Serializable objectToQueue) throws JMSException
{
    ObjectMessage message = session.createObjectMessage();
    message.setObject(objectToQueue);
    producerForQueue.send(message);
}

/**
 * @return the received object, or null if none arrived within the timeout
 * @throws JMSException
 */
public Serializable receiveMessage() throws JMSException
{
    Message message = consumerForQueue.receive(timeout);
    if (message instanceof ObjectMessage)
    {
        ObjectMessage objMsg = (ObjectMessage) message;
        Serializable sobject = objMsg.getObject();
        return sobject;
    }
    return null;
}
My point is: do not write custom code for serialization if it can be avoided.
When you use AMQ, all you need to do is make your POJO serializable.
ActiveMQ's functions take care of serialization.
If you want a fast response from AMQ, use the vm-transport. It will minimize network overhead.
You will automatically get the benefits of AMQ's features.
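The snippets above assume session, producerForQueue, and consumerForQueue already exist; a minimal sketch of that wiring with ActiveMQ's vm-transport (the broker URL and queue name are placeholders) might be:

import javax.jms.Connection;
import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

import org.apache.activemq.ActiveMQConnectionFactory;

// Placeholder wiring for the send/receive methods above.
public void connect() throws JMSException
{
    ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("vm://localhost");
    Connection connection = factory.createConnection();
    connection.start();
    session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
    Queue queue = session.createQueue("object.transfer.queue");
    producerForQueue = session.createProducer(queue);
    consumerForQueue = session.createConsumer(queue);
}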
I am suggesting this because:
You have your own applications running on the network.
You need a mechanism to transfer objects.
You will need a way to monitor it as well.
If you go for a custom solution, you might have to solve all of the above yourself.