How to read data serialized with Chronicle Wire from an InputStream? (java)

Some data is serialized to an OutputStream via Chronicle Wire:
Object m = ... ;
OutputStream out = ... ;
WireType.RAW //
.apply(Bytes.elasticByteBuffer()) //
.getValueOut().object(m) //
.bytes().copyTo(out)
;
I want to read it back from an InputStream:
InputStream in = ... ;
WireType.RAW
.apply(Bytes.elasticByteBuffer())
.getValueIn()
???
;
Object m = ???; // How do I initialize m?
How do I read my initial object m back from in?

There is an assumption that you will have some idea of how long the data is and will read it in one go. It is also assumed you will want to reuse the buffers to avoid creating garbage. To minimise latency, data is typically read to/from NIO Channels.
I have raised an issue to create this example: Improve support for Input/OutputStream and non-Marshallable objects, https://github.com/OpenHFT/Chronicle-Wire/issues/111
This should do what you want efficiently without creating garbage each time.
package net.openhft.chronicle.wire;

import net.openhft.chronicle.bytes.Bytes;

import java.io.DataOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.ByteBuffer;

public class WireToOutputStream {
    private final Bytes<ByteBuffer> bytes = Bytes.elasticHeapByteBuffer(128);
    private final Wire wire;
    private final DataOutputStream dos;

    public WireToOutputStream(WireType wireType, OutputStream os) {
        wire = wireType.apply(bytes);
        dos = new DataOutputStream(os);
    }

    public Wire getWire() {
        wire.clear();
        return wire;
    }

    public void flush() throws IOException {
        // write a 4-byte length prefix followed by the message bytes
        int length = Math.toIntExact(bytes.readRemaining());
        dos.writeInt(length);
        dos.write(bytes.underlyingObject().array(), 0, length);
    }
}
package net.openhft.chronicle.wire;

import net.openhft.chronicle.bytes.Bytes;

import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.StreamCorruptedException;
import java.nio.ByteBuffer;

public class InputStreamToWire {
    private final Bytes<ByteBuffer> bytes = Bytes.elasticHeapByteBuffer(128);
    private final Wire wire;
    private final DataInputStream dis;

    public InputStreamToWire(WireType wireType, InputStream is) {
        wire = wireType.apply(bytes);
        dis = new DataInputStream(is);
    }

    public Wire readOne() throws IOException {
        wire.clear();
        // read the 4-byte length prefix, then exactly that many bytes
        int length = dis.readInt();
        if (length < 0) throw new StreamCorruptedException();
        bytes.ensureCapacity(length);
        byte[] array = bytes.underlyingObject().array();
        dis.readFully(array, 0, length);
        bytes.readPositionRemaining(0, length);
        return wire;
    }
}
You can then do the following:
package net.openhft.chronicle.wire;

import net.openhft.chronicle.core.util.ObjectUtils;
import org.junit.Test;

import java.io.IOException;
import java.io.Serializable;
import java.net.ServerSocket;
import java.net.Socket;

import static org.junit.Assert.assertEquals;

public class WireToOutputStreamTest {
    @Test
    public void testVisSocket() throws IOException {
        ServerSocket ss = new ServerSocket(0);
        Socket s = new Socket("localhost", ss.getLocalPort());
        Socket s2 = ss.accept();
        WireToOutputStream wtos = new WireToOutputStream(WireType.RAW, s.getOutputStream());

        Wire wire = wtos.getWire();
        AnObject ao = new AnObject();
        ao.value = 12345;
        ao.text = "Hello";
        // writing the type is needed so the receiver knows what to create
        wire.getValueOut().typeLiteral(AnObject.class);
        Wires.writeMarshallable(ao, wire);
        wtos.flush();

        InputStreamToWire istw = new InputStreamToWire(WireType.RAW, s2.getInputStream());
        Wire wire2 = istw.readOne();
        Class type = wire2.getValueIn().typeLiteral();
        Object ao2 = ObjectUtils.newInstance(type);
        Wires.readMarshallable(ao2, wire2, true);
        System.out.println(ao2);

        ss.close();
        s.close();
        s2.close();
        assertEquals(ao.toString(), ao2.toString());
    }

    public static class AnObject implements Serializable {
        long value;
        String text;

        @Override
        public String toString() {
            return "AnObject{" +
                    "value=" + value +
                    ", text='" + text + '\'' +
                    '}';
        }
    }
}
Sample code
// On the sender side
Object m = ... ;
OutputStream out = ... ;
WireToOutputStream wireToOutputStream = new WireToOutputStream(WireType.TEXT, out);
Wire wire = wireToOutputStream.getWire();
wire.getValueOut().typeLiteral(m.getClass());
Wires.writeMarshallable(m, wire);
wireToOutputStream.flush();

// On the receiver side
InputStream in = ... ;
InputStreamToWire inputStreamToWire = new InputStreamToWire(WireType.TEXT, in);
Wire wire2 = inputStreamToWire.readOne();
Class type = wire2.getValueIn().typeLiteral();
Object m = ObjectUtils.newInstance(type);
Wires.readMarshallable(m, wire2, true);
This code is a lot simpler if your DTO implements Marshallable, but it will work whether you implement an interface or not, i.e. you don't need to implement Serializable. Also, if you know what the type will be, you don't need to write it with every message.
The helper classes above have been added to the latest SNAPSHOT.
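For comparison, a minimal sketch of such a DTO (the class name is illustrative; Marshallable is Chronicle Wire's own interface, and the fields mirror AnObject from the test above):

import net.openhft.chronicle.wire.Marshallable;

// Sketch: with Marshallable, Chronicle Wire reads/writes the fields itself,
// so no Serializable marker interface is required.
public class AnObjectDTO implements Marshallable {
    long value;
    String text;
}

It should then round-trip via the same object(...) calls the question uses: wire.getValueOut().object(dto) to write and wire2.getValueIn().object(AnObjectDTO.class) to read.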

Related

How to read file chunk by chunk from S3 using aws-java-sdk

I am trying to read a large file from S3 in chunks, without cutting any line, for parallel processing.
Let me explain by example:
There is a file of size 1 GB on S3. I want to divide this file into chunks of 64 MB. It is easy, I can do it like:
S3Object s3object = s3.getObject(new GetObjectRequest(bucketName, key));
InputStream stream = s3object.getObjectContent();
byte[] content = new byte[64 * 1024 * 1024];
while (stream.read(content) != -1) {
    // process content here
}
The problem is that a chunk may have 100 complete lines and one incomplete one. I cannot process an incomplete line, and I don't want to discard it.
Is there any way to handle this situation, so that every chunk contains only whole lines?
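One common way to get line-aligned chunks (an illustrative sketch, independent of the answers below; process stands in for your own handling) is to cut each chunk at its last newline and carry the tail over into the next read:

byte[] chunk = new byte[64 * 1024 * 1024];
byte[] carry = new byte[0]; // incomplete line left over from the previous chunk
int n;
while ((n = stream.read(chunk)) != -1) {
    // find the last newline in this chunk
    int cut = -1;
    for (int i = n - 1; i >= 0; i--) {
        if (chunk[i] == '\n') { cut = i; break; }
    }
    if (cut >= 0) {
        // carry + chunk[0..cut] contains only whole lines
        byte[] whole = new byte[carry.length + cut + 1];
        System.arraycopy(carry, 0, whole, 0, carry.length);
        System.arraycopy(chunk, 0, whole, carry.length, cut + 1);
        process(whole); // placeholder for your parallel processing
        carry = java.util.Arrays.copyOfRange(chunk, cut + 1, n);
    } else {
        // no newline at all in this chunk: append everything to the carry
        byte[] grown = java.util.Arrays.copyOf(carry, carry.length + n);
        System.arraycopy(chunk, 0, grown, carry.length, n);
        carry = grown;
    }
}
if (carry.length > 0) process(carry); // final line without a trailing newline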
My usual approach (InputStream -> BufferedReader.lines() -> batches of lines -> CompletableFuture) won't work here because the underlying S3ObjectInputStream eventually times out for huge files.
So I created a new class, S3InputStream, which doesn't care how long it's open for and reads byte blocks on demand using short-lived AWS SDK calls. You provide a byte[] that will be reused. new byte[1 << 24] (16 MB) appears to work well.
package org.harrison;

import java.io.IOException;
import java.io.InputStream;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GetObjectRequest;

/**
 * An {@link InputStream} for S3 files that does not care how big the file is.
 *
 * @author stephen harrison
 */
public class S3InputStream extends InputStream {
    private static class LazyHolder {
        private static final AmazonS3 S3 = AmazonS3ClientBuilder.defaultClient();
    }

    private final String bucket;
    private final String file;
    private final byte[] buffer;
    private long lastByteOffset;

    private long offset = 0;
    private int next = 0;
    private int length = 0;

    public S3InputStream(final String bucket, final String file, final byte[] buffer) {
        this.bucket = bucket;
        this.file = file;
        this.buffer = buffer;
        this.lastByteOffset = LazyHolder.S3.getObjectMetadata(bucket, file).getContentLength() - 1;
    }

    @Override
    public int read() throws IOException {
        if (next >= length) {
            fill();
            if (length <= 0) {
                return -1;
            }
            next = 0;
        }
        if (next >= length) {
            return -1;
        }
        // mask into 0..255 so byte values >= 128 are not misread as -1/EOF
        // (the v2 adaptation below applies the same fix)
        return buffer[this.next++] & 0xFF;
    }

    public void fill() throws IOException {
        if (offset >= lastByteOffset) {
            length = -1;
        } else {
            try (final InputStream inputStream = s3Object()) {
                length = 0;
                int b;
                while ((b = inputStream.read()) != -1) {
                    buffer[length++] = (byte) b;
                }
                if (length > 0) {
                    offset += length;
                }
            }
        }
    }

    private InputStream s3Object() {
        // fetch only the next buffer-sized slice of the object with a ranged GET
        final GetObjectRequest request = new GetObjectRequest(bucket, file).withRange(offset,
                offset + buffer.length - 1);
        return LazyHolder.S3.getObject(request).getObjectContent();
    }
}
The aws-java-sdk already provides streaming functionality for your S3 objects. You have to call "getObject" and the result will be an InputStream.
1) AmazonS3Client.getObject(GetObjectRequest getObjectRequest) -> S3Object
2) S3Object.getObjectContent()
Note: The method is a simple getter and does not actually create a stream. If you retrieve an S3Object, you should close this input stream as soon as possible, because the object contents are not buffered in memory and stream directly from Amazon S3. Further, failure to close this stream can cause the request pool to become blocked. (AWS Java docs)
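Given that note, a minimal usage sketch that closes the stream promptly (same s3, bucketName and key as in the question):

S3Object s3object = s3.getObject(new GetObjectRequest(bucketName, key));
// try-with-resources releases the underlying HTTP connection even on error
try (InputStream stream = s3object.getObjectContent()) {
    byte[] content = new byte[64 * 1024 * 1024];
    int read;
    while ((read = stream.read(content)) != -1) {
        // only the first `read` bytes of content are valid here
    }
}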
"100 complete line and one incomplete" -- do you mean you need to read the stream line by line? If so, instead of using an InputStream, try reading the S3 object stream with a BufferedReader so that you can read the stream line by line, though I think this will be a little slower than reading by chunk.
S3Object s3object = s3.getObject(new GetObjectRequest(bucketName, key));
BufferedReader in = new BufferedReader(new InputStreamReader(s3object.getObjectContent()));
String line;
while ((line = in.readLine()) != null) {
    // process line here
}
You can read all the files in the bucket by iterating with continuation tokens, and you can read the files with other Java libraries, e.g. PDFBox for PDFs.
import java.io.IOException;
import java.io.InputStream;

import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.text.PDFTextStripper;

import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.AmazonS3Exception;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.ListObjectsV2Request;
import com.amazonaws.services.s3.model.ListObjectsV2Result;
import com.amazonaws.services.s3.model.S3Object;
import com.amazonaws.services.s3.model.S3ObjectSummary;

//..
// in your main class
private static AWSCredentials credentials = null;
private static AmazonS3 amazonS3Client = null;

public static void intializeAmazonObjects() {
    credentials = new BasicAWSCredentials(ACCESS_KEY, SECRET_ACCESS_KEY);
    amazonS3Client = new AmazonS3Client(credentials);
}

public void mainMethod() throws IOException, AmazonS3Exception {
    // connect to aws
    intializeAmazonObjects();
    ListObjectsV2Request req = new ListObjectsV2Request().withBucketName(bucketName);
    ListObjectsV2Result listObjectsResult;
    do {
        listObjectsResult = amazonS3Client.listObjectsV2(req);
        for (S3ObjectSummary objectSummary : listObjectsResult.getObjectSummaries()) {
            System.out.printf(" - %s (size: %d)\n", objectSummary.getKey(), objectSummary.getSize());
            String key = objectSummary.getKey();
            // only try to read pdf files
            if (!key.contains(".pdf")) {
                continue;
            }
            // Read the source file as text
            String pdfFileInText = readAwsFile(objectSummary.getBucketName(), key);
            if (pdfFileInText.isEmpty())
                continue;
        } // end of current bulk
        // If there are more than maxKeys (1000 by default) keys in the bucket,
        // get a continuation token and list the next objects.
        String token = listObjectsResult.getNextContinuationToken();
        System.out.println("Next Continuation Token: " + token);
        req.setContinuationToken(token);
    } while (listObjectsResult.isTruncated());
}

public String readAwsFile(String bucketName, String keyName) {
    S3Object object;
    String pdfFileInText = "";
    try {
        object = amazonS3Client.getObject(new GetObjectRequest(bucketName, keyName));
        InputStream objectData = object.getObjectContent();
        PDDocument document = PDDocument.load(objectData);
        try {
            if (!document.isEncrypted()) {
                PDFTextStripper tStripper = new PDFTextStripper();
                pdfFileInText = tStripper.getText(document);
            }
        } finally {
            document.close(); // release PDFBox resources and the S3 stream
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
    return pdfFileInText;
}
The @stephen-harrison answer works well. I updated it for v2 of the SDK. I made a couple of tweaks: mainly the connection can now be authorized, and the LazyHolder class is no longer static -- I couldn't figure out how to authorize the connection and still keep the class static.
For another approach using Scala, see https://alexwlchan.net/2019/09/streaming-large-s3-objects/
package foo.whatever;

import java.io.IOException;
import java.io.InputStream;

import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;
import software.amazon.awssdk.services.s3.model.HeadObjectRequest;
import software.amazon.awssdk.services.s3.model.HeadObjectResponse;

/**
 * Adapted for AWS Java SDK v2 by jomofrodo@gmail.com
 *
 * An {@link InputStream} for S3 files that does not care how big the file is.
 *
 * @author stephen harrison
 */
public class S3InputStreamV2 extends InputStream {
    private class LazyHolder {
        String appID;
        String secretKey;
        Region region = Region.US_WEST_1;
        public S3Client S3 = null;

        public void connect() {
            AwsBasicCredentials awsCreds = AwsBasicCredentials.create(appID, secretKey);
            S3 = S3Client.builder().region(region).credentialsProvider(StaticCredentialsProvider.create(awsCreds))
                    .build();
        }

        private HeadObjectResponse getHead(String keyName, String bucketName) {
            HeadObjectRequest objectRequest = HeadObjectRequest.builder().key(keyName).bucket(bucketName).build();
            HeadObjectResponse objectHead = S3.headObject(objectRequest);
            return objectHead;
        }
    }

    private LazyHolder lazyHolder = new LazyHolder();

    private final String bucket;
    private final String file;
    private final byte[] buffer;
    private long lastByteOffset;

    private long offset = 0;
    private int next = 0;
    private int length = 0;

    public S3InputStreamV2(final String bucket, final String file, final byte[] buffer, String appID, String secret) {
        this.bucket = bucket;
        this.file = file;
        this.buffer = buffer;
        lazyHolder.appID = appID;
        lazyHolder.secretKey = secret;
        lazyHolder.connect();
        this.lastByteOffset = lazyHolder.getHead(file, bucket).contentLength();
    }

    @Override
    public int read() throws IOException {
        if (next >= length || (next == buffer.length && length == buffer.length)) {
            fill();
            if (length <= 0) {
                return -1;
            }
            next = 0;
        }
        if (next >= length) {
            return -1;
        }
        return buffer[this.next++] & 0xFF;
    }

    public void fill() throws IOException {
        if (offset >= lastByteOffset) {
            length = -1;
        } else {
            try (final InputStream inputStream = s3Object()) {
                length = 0;
                int b;
                while ((b = inputStream.read()) != -1) {
                    buffer[length++] = (byte) b;
                }
                if (length > 0) {
                    offset += length;
                }
            }
        }
    }

    private InputStream s3Object() {
        // v2 expresses the range as an HTTP Range header string
        final Long rangeEnd = offset + buffer.length - 1;
        final String rangeString = "bytes=" + offset + "-" + rangeEnd;
        final GetObjectRequest getObjectRequest = GetObjectRequest.builder().bucket(bucket).key(file).range(rangeString)
                .build();
        return lazyHolder.S3.getObject(getObjectRequest);
    }
}
We got puzzled while migrating from the AWS SDK v1 to v2 and realised that in the v2 SDK the range is not defined the same way.
With AWS V1 SDK
S3Object currentS3Obj = client.getObject(new GetObjectRequest(bucket, key).withRange(start, end));
return currentS3Obj.getObjectContent();
With AWS V2 SDK
var range = String.format("bytes=%d-%d", start, end);
ResponseBytes<GetObjectResponse> currentS3Obj = client.getObjectAsBytes(GetObjectRequest.builder().bucket(bucket).key(key).range(range).build());
return currentS3Obj.asInputStream();

Issue with streaming formatted input stream to server

I am trying to write a "formatted" input stream to a Tomcat servlet (with Guice).
The underlying problem is the following: I want to stream data from a database directly to a server. Therefore I load the data, convert it to JSON and upload it to the server. I don't want to write the JSON to a temporary file first; this is done for performance reasons, so I want to bypass the hard drive by streaming directly to the server.
EDIT: Similar to Sending a stream of documents to a Jersey @POST endpoint.
But a comment on that answer says it is losing data, and I seem to have the same problem.
I wrote a "ModelInputStream" that:
- Loads the next model from the database when the previous one has been streamed
- Writes one byte for the type (enum ordinal)
- Writes 4 bytes for the length of the next byte array (int)
- Writes a string (refId)
- Writes 4 bytes for the length of the next byte array (int)
- Writes the actual JSON
- Repeats until all models are streamed
I also wrote a "ModelStreamReader" that knows this logic and reads accordingly.
When I test this directly it works fine, but once I create the ModelInputStream on the client side and use the incoming input stream on the server with the ModelStreamReader, the actual JSON bytes are fewer than specified in the 4-byte length prefix. I guess this is due to deflating or compression.
I tried different content headers to disable compression etc., but nothing worked.
java.io.IOException: Unexpected length, expected 8586, received 7905
So on the client the JSON byte array is 8586 bytes long, and when it arrives at the server it is 7905 bytes long, which breaks the whole concept.
Also it seems that it does not really stream, but first caches the whole content returned from the input stream.
How would I need to adjust the calling code to get the result I described?
ModelInputStream
package *;

import java.io.IOException;
import java.io.InputStream;
import java.nio.ByteBuffer;
import java.util.LinkedList;
import java.util.List;
import java.util.Queue;

import ***.Daos;
import ***.IDatabase;
import ***.CategorizedEntity;
import ***.CategorizedDescriptor;
import ***.JsonExport;

import com.google.gson.Gson;
import com.google.gson.JsonObject;

public class ModelInputStream extends InputStream {
    private final Gson gson = new Gson();
    private final IDatabase db;
    private final Queue<CategorizedDescriptor> descriptors;
    private byte[] buffer = new byte[0];
    private int position = 0;

    public ModelInputStream(IDatabase db, List<CategorizedDescriptor> descriptors) {
        this.db = db;
        this.descriptors = new LinkedList<>();
        this.descriptors.addAll(descriptors);
    }

    @Override
    public int read() throws IOException {
        if (position == buffer.length) {
            if (descriptors.size() == 0)
                return -1;
            loadNext();
            position = 0;
        }
        return buffer[position++];
    }

    private void loadNext() throws IOException {
        CategorizedDescriptor descriptor = descriptors.poll();
        byte type = (byte) descriptor.getModelType().ordinal();
        byte[] refId = descriptor.getRefId().getBytes();
        byte[] json = getData(descriptor);
        buildBuffer(type, refId, json);
    }

    private byte[] getData(CategorizedDescriptor d) {
        CategorizedEntity entity = Daos.createCategorizedDao(db, d.getModelType()).getForId(d.getId());
        JsonObject object = JsonExport.toJson(entity);
        String json = gson.toJson(object);
        return json.getBytes();
    }

    private void buildBuffer(byte type, byte[] refId, byte[] json) throws IOException {
        buffer = new byte[1 + 4 + refId.length + 4 + json.length];
        int index = put(buffer, 0, type);
        index = put(buffer, index, asByteArray(refId.length));
        index = put(buffer, index, refId);
        index = put(buffer, index, asByteArray(json.length));
        put(buffer, index, json);
    }

    private byte[] asByteArray(int i) {
        return ByteBuffer.allocate(4).putInt(i).array();
    }

    private int put(byte[] array, int index, byte... bytes) {
        for (int i = 0; i < bytes.length; i++) {
            array[index + i] = bytes[i];
        }
        return index + bytes.length;
    }
}
ModelStreamReader
package *;

import java.io.IOException;
import java.io.InputStream;
import java.nio.ByteBuffer;

import *.ModelType;

public class ModelStreamReader {
    private InputStream stream;

    public ModelStreamReader(InputStream stream) {
        this.stream = stream;
    }

    public Model next() throws IOException {
        int modelType = stream.read();
        if (modelType == -1)
            return null;
        Model next = new Model();
        next.type = ModelType.values()[modelType];
        next.refId = readNextPart();
        next.data = readNextPart();
        return next;
    }

    private String readNextPart() throws IOException {
        int length = readInt();
        byte[] bytes = readBytes(length);
        return new String(bytes);
    }

    private int readInt() throws IOException {
        byte[] bytes = readBytes(4);
        return ByteBuffer.wrap(bytes).getInt();
    }

    private byte[] readBytes(int length) throws IOException {
        byte[] buffer = new byte[length];
        int read = stream.read(buffer);
        if (read != length)
            throw new IOException("Unexpected length, expected " + length + ", received " + read);
        return buffer;
    }

    public class Model {
        public ModelType type;
        public String refId;
        public String data;
    }
}
Calling Code
ModelInputStream stream = new ModelInputStream(db, getAll(db));
URL url = new URL("http://localhost:8080/ws/test/streamed");
HttpURLConnection con = (HttpURLConnection) url.openConnection();
con.setDoOutput(true);
con.setRequestMethod("POST");
con.connect();
int read = -1;
while ((read = stream.read()) != -1) {
    con.getOutputStream().write(read);
}
con.getOutputStream().flush();
System.out.println(con.getResponseCode());
System.out.println(con.getResponseMessage());
con.disconnect();
Server part (Jersey WebResource)
package *.webservice;

import java.io.File;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.Charset;
import java.nio.file.Files;
import java.util.HashMap;
import java.util.UUID;

import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.core.Response;

import *.ModelStreamReader;
import *.ModelStreamReader.Model;

@Path("test")
public class TestResource {
    @POST
    @Path("streamed")
    public Response streamed(InputStream modelStream) throws IOException {
        ModelStreamReader reader = new ModelStreamReader(modelStream);
        writeDatasets(reader);
        return Response.ok(new HashMap<>()).build();
    }

    private void writeDatasets(ModelStreamReader reader) throws IOException {
        String commitId = UUID.randomUUID().toString();
        File dir = new File("/opt/tests/streamed/" + commitId);
        dir.mkdirs();
        Model dataset = null;
        while ((dataset = reader.next()) != null) {
            File file = new File(dir, dataset.refId);
            writeDataset(file, dataset.data);
        }
    }

    private void writeDataset(File file, String data) {
        try {
            if (data == null)
                file.createNewFile();
            else
                Files.write(file.toPath(), data.getBytes(Charset.forName("utf-8")));
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
The bytes read have to be mapped into the 0..255 range (see ByteArrayInputStream):
ModelInputStream

@Override
public int read() throws IOException {
    ...
    return buffer[position++] & 0xff;
}
Finally this line has to be added to the calling code (for chunking):
...
HttpURLConnection con = (HttpURLConnection) url.openConnection();
con.setChunkedStreamingMode(1024 * 1024);
...
I found the problem, which was of a totally different nature.
First, the input stream was not compressed or anything. The bytes read have to be mapped into the 0..255 range instead of -128..127; otherwise the stream reading is cut short by any byte value that reads as -1.
ModelInputStream

@Override
public int read() throws IOException {
    ...
    return buffer[position++] + 128;
}
Second, the data has to be transferred chunked to actually be "streaming". Therefore the ModelStreamReader.readBytes(int) method must additionally be adjusted as follows:
ModelStreamReader
private byte[] readBytes(int length) throws IOException {
    byte[] result = new byte[length];
    int totalRead = 0;
    int position = 0;
    int previous = -1;
    while (totalRead != length) {
        int read = stream.read();
        if (read != -1) {
            result[position++] = (byte) (read - 128); // undo the +128 shift applied by the sender
            totalRead++;
        } else if (previous == -1) {
            break;
        }
        previous = read;
    }
    return result;
}
Finally this line has to be added to the calling code:
...
HttpURLConnection con = (HttpURLConnection) url.openConnection();
con.setChunkedStreamingMode(1024 * 1024);
...
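As an aside, once the ±128 shift is replaced by the & 0xff mask from the earlier answer, the manual loop in readBytes can be handled by DataInputStream.readFully, which already blocks until exactly length bytes have arrived (a sketch; the wrapper would normally be created once and kept in a field rather than per call):

private byte[] readBytes(int length) throws IOException {
    byte[] result = new byte[length];
    // readFully loops internally over the chunked stream and
    // throws EOFException if it ends before `length` bytes
    new DataInputStream(stream).readFully(result, 0, length);
    return result;
}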

java save http post requests hourly

I'm trying to set up a server on AWS with a simple HTTP server and save the headers and payload of each HTTP POST request.
It works locally.
My steps after connecting via ssh to the EC2 server:
javac Server.java
sudo nohup java Server
It saves the headers to the log file but not the payload, and it doesn't return a 204 response.
Server.java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;
import com.sun.net.httpserver.HttpServer;

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URLDecoder;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.Executors;

public class Server {
    private static final int PORT = 80;
    private static final String FILE_PATH = "/home/ec2-user/logs/";
    private static final String UTF8 = "UTF-8";
    private static final String DELIMITER = "|||";
    private static final String LINE_BREAK = "\n";
    // lowercase yyyy: uppercase YYYY is the week-based year and can roll over early
    private static final String FILE_PREFIX = "dd_MM_yyyy_HH";
    private static final SimpleDateFormat simpleDateFormat = new SimpleDateFormat(FILE_PREFIX);
    private static final String FILE_TYPE = ".txt";

    public static void main(String[] args) {
        try {
            HttpServer server = HttpServer.create(new InetSocketAddress(PORT), 0);
            server.createContext("/", new HttpHandler() {
                @Override
                public void handle(HttpExchange t) throws IOException {
                    System.out.println("Req\t" + t.getRemoteAddress());
                    InputStream initialStream = t.getRequestBody();
                    // available() only reports bytes readable without blocking,
                    // so this may miss part of the payload
                    byte[] buffer = new byte[initialStream.available()];
                    initialStream.read(buffer);
                    File targetFile = new File(FILE_PATH + simpleDateFormat.format(new Date()) + FILE_TYPE);
                    OutputStream outStream = new FileOutputStream(targetFile, true);
                    String prefix = LINE_BREAK + t.getRequestHeaders().entrySet().toString() + LINE_BREAK
                            + System.currentTimeMillis() + DELIMITER;
                    outStream.write(prefix.getBytes());
                    Map<String, String> queryPairs = new HashMap<>();
                    String params = new String(buffer);
                    String[] pairs = params.split("&");
                    for (String pair : pairs) {
                        int idx = pair.indexOf("=");
                        String key = pair.substring(0, idx);
                        String val = pair.substring(idx + 1);
                        String decodedKey = URLDecoder.decode(key, UTF8);
                        String decodeVal = URLDecoder.decode(val, UTF8);
                        queryPairs.put(decodedKey, decodeVal);
                    }
                    outStream.write(queryPairs.toString().getBytes());
                    t.sendResponseHeaders(204, -1);
                    t.close();
                }
            });
            server.setExecutor(Executors.newCachedThreadPool());
            server.start();
        } catch (Exception e) {
            System.out.println("Exception: " + e.getMessage());
            e.printStackTrace();
        }
    }
}
Consider these changes to your handle method. As a starting point, two things are changed:
1) It reads the complete input and copies it into your file (initialStream.available() might not tell the full truth).
2) It catches, logs and rethrows IOExceptions (you didn't see your 204, after all).
Consider redirecting your output into files, so you can check later what happened on the server:
sudo nohup java Server > server.log 2> server.err &
If you described the desired target file structure in more detail, we could figure something out there as well, I guess.
@Override
public void handle(HttpExchange t) throws IOException {
    try {
        System.out.println("Req\t" + t.getRemoteAddress());
        InputStream initialStream = t.getRequestBody();
        File targetFile = new File(FILE_PATH + simpleDateFormat.format(new Date()) + FILE_TYPE);
        OutputStream outStream = new FileOutputStream(targetFile, true);
        // This will copy the ENTIRE input stream into your target file
        // (IOUtils is org.apache.commons.io.IOUtils)
        IOUtils.copy(initialStream, outStream);
        outStream.close();
        t.sendResponseHeaders(204, -1);
        t.close();
    } catch (IOException e) {
        e.printStackTrace();
        throw e;
    }
}
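For reference, IOUtils.copy above comes from Apache Commons IO. If you would rather stay on the plain JDK, InputStream.transferTo (Java 9+) does the same whole-stream copy; a sketch of just the copy step:

// transferTo copies the entire request body, no matter what available() reports
try (OutputStream outStream = new FileOutputStream(targetFile, true)) {
    t.getRequestBody().transferTo(outStream);
}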

String data from a Scanner through a PrintWriter

I am trying to pass data from a String into a PrintWriter while simultaneously reading from a BufferedReader between two classes named Server.java and Client.java. My problem is that I am having trouble handling the exceptions that are being thrown from the block of code that reads data from the Scanner object (marked below).
Client.java
package root;

/**
 * @author Noah Teshima
 */
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;
import java.util.Scanner;

public class Client {
    private Socket clientSocket;
    private BufferedReader bufferedReader;
    private InputStreamReader inputStreamReader;
    private PrintWriter printWriter;
    private Scanner scanner;

    public static void main(String[] args) {
        new Client("localhost", 1025);
    }

    public Client(String hostName, int portNumber) {
        try {
            this.clientSocket = new Socket(hostName, portNumber);
            this.bufferedReader = new BufferedReader(
                    this.inputStreamReader = new InputStreamReader(this.clientSocket.getInputStream()));
            this.printWriter = new PrintWriter(clientSocket.getOutputStream());
            String msg = "",
                    msg2 = "";
            this.printWriter.println(this.getClass());
            this.printWriter.flush();
            System.out.println(this.getClass().getName() + " is connected to " + this.bufferedReader.readLine());
            while (!(msg = this.scanner.nextLine()).equals("exit")) { // Source of problem
                this.printWriter.println(this.getClass().getName() + ": " + msg);
                this.printWriter.flush();
                while ((msg2 = this.bufferedReader.readLine()) != null) {
                    System.out.println(msg2);
                }
            }
            this.clientSocket.close();
        } catch (IOException exception) {
            exception.printStackTrace();
        }
    }
}
Stack trace:
Exception in thread "main" java.lang.NullPointerException
    at root.Client.<init>(Client.java:47)
    at root.Client.main(Client.java:25)
The code used to read from the BufferedReader and write to a PrintWriter is the same for both classes, so I only posted Client.java. If anyone would like to see the other class file, I would be happy to do so.
Before you use
while (!(msg = this.scanner.nextLine()).equals("exit")) { // Source of problem
you should have initialized the scanner.
The scanner object is null when you use it, hence the NullPointerException.
I do not see a scanner = new Scanner(...); anywhere in your code; maybe you have forgotten it?
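For example, assuming the messages should come from the console, the constructor could initialize it before the read loop (a one-line sketch):

this.scanner = new Scanner(System.in); // must run before scanner.nextLine() is called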

Starting with Esper + sockets

I'm a rookie with Esper and I would like to get some help. I've already managed to use Esper with CSV files, but now I need to use Java objects as events sent through a socket, and I can't find simple examples on the Internet to use as a guide.
Does anybody have some simple examples to base this on?
Anyway, here is the code I am trying to make work. Nothing happens when I run it; it seems the socket connection does not work.
The server class (it also contains the event class), which is supposed to send the events:
import java.io.*;
import java.net.*;

class Server {
    static final int PORT = 5002;

    public Server() {
        try {
            ServerSocket skServer = new ServerSocket(PORT);
            System.out.println("Listening at " + PORT);
            Socket skClient = skServer.accept();
            System.out.println("Serving to Esper");
            OutputStream aux = skClient.getOutputStream();
            ObjectOutputStream flux = new ObjectOutputStream(aux);
            int i = 0;
            while (i < 10) {
                flux.writeObject(new MeteoEvent(i, "A"));
                i++;
            }
            flux.flush();
            skClient.close();
            System.out.println("End of transmission");
        } catch (Exception e) {
            System.out.println(e.getMessage());
        }
    }

    public static void main(String[] arg) {
        new Server();
    }

    class MeteoEvent {
        private int sensorId;
        private String GeoArea;

        public MeteoEvent() {
        }

        public MeteoEvent(int sensorid, String geoarea) {
            this.sensorId = sensorid;
            this.GeoArea = geoarea;
        }

        public int getSensorId() {
            return sensorId;
        }

        public void setSensorId(int sensorId) {
            this.sensorId = sensorId;
        }

        public String getGeoArea() {
            return GeoArea;
        }

        public void setGeoArea(String geoArea) {
            GeoArea = geoArea;
        }
    }
}
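One detail worth checking in this class (an observation beyond what the thread covers): java.io.ObjectOutputStream.writeObject throws NotSerializableException for any class that does not implement Serializable, and MeteoEvent does not; since the catch block only prints the exception message, that failure is easy to overlook. A minimal sketch of the change:

import java.io.Serializable;

// Declared static so serializing an event does not try to drag in
// the enclosing Server instance as well.
static class MeteoEvent implements Serializable {
    private static final long serialVersionUID = 1L;
    // ... fields, constructors and accessors as above
}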
And the Esper-based class.
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

import com.espertech.esper.client.Configuration;
import com.espertech.esper.client.EPAdministrator;
import com.espertech.esper.client.EPRuntime;
import com.espertech.esper.client.EPServiceProvider;
import com.espertech.esper.client.EPServiceProviderManager;
import com.espertech.esper.client.EPStatement;
import com.espertech.esper.client.EventBean;
import com.espertech.esper.client.UpdateListener;
import com.espertech.esperio.socket.EsperIOSocketAdapter;
import com.espertech.esperio.socket.config.ConfigurationSocketAdapter;
import com.espertech.esperio.socket.config.DataType;
import com.espertech.esperio.socket.config.SocketConfig;

public class Demo {
    public static class CEPListener implements UpdateListener {
        private String tag;

        public CEPListener(String tag) {
            this.tag = tag;
        }

        // The update method was missing from the post as copied; a minimal
        // implementation is restored here so the listener compiles.
        @Override
        public void update(EventBean[] newEvents, EventBean[] oldEvents) {
            if (newEvents == null)
                return;
            for (EventBean event : newEvents) {
                System.out.println(tag + ": " + event.getUnderlying());
            }
        }
    }

    public static void main(String[] args) throws IOException, InterruptedException {
        Configuration configuration = new Configuration();
        Map<String, Object> eventProperties = new HashMap<String, Object>();
        eventProperties.put("sensorId", int.class);
        eventProperties.put("GeoArea", String.class);
        configuration.addEventType("MeteoEvent", eventProperties);

        ConfigurationSocketAdapter socketAdapterConfig = new ConfigurationSocketAdapter();
        SocketConfig socketConfig = new SocketConfig();
        socketConfig.setDataType(DataType.OBJECT);
        socketConfig.setPort(5002);
        socketAdapterConfig.getSockets().put("MeteoSocket", socketConfig);

        EPServiceProvider cepService = EPServiceProviderManager.getProvider("MeteoSocket", configuration);
        EPRuntime cepServiceRT = cepService.getEPRuntime();
        EPAdministrator cepAdmin = cepService.getEPAdministrator();

        EsperIOSocketAdapter socketAdapter = new EsperIOSocketAdapter(socketAdapterConfig, "MeteoSocket");
        socketAdapter.start();

        EPStatement stmt = cepAdmin.createEPL("insert into JoinStream select * from MeteoEvent");
        EPStatement outputStatementX = cepAdmin.createEPL("select * from JoinStream");
        outputStatementX.addListener(new CEPListener("JS"));
        cepService.initialize();

        Object lock = new Object();
        synchronized (lock) {
            lock.wait();
        }
    }
}
Thank you very much in advance if anyone takes some time trying to help me.
Problem solved! The Esper Dev list was very useful. I learned how to use Esper + sockets through the test classes located here
Best regards!
