I am writing some code to collect a controller's request parameters and response body.
The project framework is Apache CXF, version 3.1.18.
I wrote an interceptor extending AbstractPhaseInterceptor to collect the parameters in the Phase.RECEIVE phase, and it works.
But when I wrote an out interceptor extending AbstractPhaseInterceptor to collect the controller's response, I found no way to do this: there is just one method, handleMessage(Message message), in the interceptor, and I cannot fetch what I want from the message.
Can anybody help me? I am new to CXF. Thanks!
I found the answer on another blog:
package XXX.web.webservice.interceptor;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import org.apache.commons.io.IOUtils;
import org.apache.cxf.io.CachedOutputStream;
import org.apache.cxf.message.Message;
import org.apache.cxf.phase.AbstractPhaseInterceptor;
import org.apache.cxf.phase.Phase;
import org.apache.log4j.Logger;
public class ArtifactOutInterceptor extends AbstractPhaseInterceptor<Message>{
private static final Logger log = Logger.getLogger(ArtifactOutInterceptor.class);
public ArtifactOutInterceptor() {
// Use PRE_STREAM here, meaning before the stream is closed
super(Phase.PRE_STREAM);
}
public void handleMessage(Message message) {
try {
OutputStream os = message.getContent(OutputStream.class);
CachedStream cs = new CachedStream();
message.setContent(OutputStream.class, cs);
message.getInterceptorChain().doIntercept(message);
CachedOutputStream csnew = (CachedOutputStream) message.getContent(OutputStream.class);
InputStream in = csnew.getInputStream();
String xml = IOUtils.toString(in);
// Process the XML here; when done, write it back to the stream in the same way
IOUtils.copy(new ByteArrayInputStream(xml.getBytes()), os);
cs.close();
os.flush();
message.setContent(OutputStream.class, os);
} catch (Exception e) {
log.error("Error when split original inputStream. CausedBy : " + "\n" + e);
}
}
private class CachedStream extends CachedOutputStream {
public CachedStream() {
super();
}
protected void doFlush() throws IOException {
currentStream.flush();
}
protected void doClose() throws IOException {
}
protected void onWrite() throws IOException {
}
}
}
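For completeness, registering the interceptor might look like this (a minimal sketch; the bus-level wiring is my own assumption, so adjust it for your Spring or endpoint configuration):
import org.apache.cxf.Bus;
import org.apache.cxf.BusFactory;
// Attach the interceptor to the default bus so it runs for every outbound message;
// it can equally be added to a single endpoint or declared in Spring XML.
Bus bus = BusFactory.getDefaultBus();
bus.getOutInterceptors().add(new ArtifactOutInterceptor());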
I am trying to receive an HTTP request using non-blocking IO, then make another HTTP request to another server using non-blocking IO, and return some response. Here is the code for my servlet:
package learn;
import java.io.IOException;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import javax.servlet.AsyncContext;
import javax.servlet.ReadListener;
import javax.servlet.ServletException;
import javax.servlet.ServletInputStream;
import javax.servlet.ServletOutputStream;
import javax.servlet.WriteListener;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.InvocationCallback;
import javax.ws.rs.core.Response;
import org.jboss.resteasy.client.jaxrs.ResteasyClientBuilder;
@WebServlet(urlPatterns = {"/async"}, asyncSupported = true)
public class AsyncProcessing extends HttpServlet {
private static final long serialVersionUID = -535924906221872329L;
public CompletableFuture<String> readRequestAsync(final HttpServletRequest req) {
final CompletableFuture<String> request = new CompletableFuture<>();
final StringBuilder httpRequestData = new StringBuilder();
try (ServletInputStream inputStream = req.getInputStream()){
inputStream.setReadListener(new ReadListener() {
final int BUFFER_SIZE = 4*1024;
final byte buffer[] = new byte[BUFFER_SIZE];
@Override
public void onError(Throwable t) {
request.completeExceptionally(t);
}
@Override
public void onDataAvailable() {
if(inputStream.isFinished()) return;
System.out.println("----------------------------------------");
System.out.println("onDataAvailable: " + Thread.currentThread().getName());
try {
while(inputStream.isReady() && !inputStream.isFinished()) {
int length = inputStream.read(buffer);
if (length > 0) { // read() can return -1 once the stream is exhausted
httpRequestData.append(new String(buffer, 0, length));
}
}
} catch (IOException ex) {
request.completeExceptionally(ex);
}
}
@Override
public void onAllDataRead() throws IOException {
try {
request.complete(httpRequestData.toString());
}
catch(Exception e) {
request.completeExceptionally(e);
}
}
});
} catch (IOException e) {
request.completeExceptionally(e);
}
return request;
}
private Client createAsyncHttpClient() {
ResteasyClientBuilder restEasyClientBuilder = (ResteasyClientBuilder)ClientBuilder.newBuilder();
return restEasyClientBuilder.useAsyncHttpEngine().connectTimeout(640, TimeUnit.SECONDS).build();
}
public CompletableFuture<Response> process(String httpRequest){
System.out.println("----------------------------------------");
System.out.println("process: " + Thread.currentThread());
CompletableFuture<Response> futureResponse = new CompletableFuture<>();
Client client = createAsyncHttpClient();
client.target("http://localhost:3000").request().async().get(new InvocationCallback<Response>() {
@Override
public void completed(Response response) {
System.out.println("----------------------------------------");
System.out.println("completed: " + Thread.currentThread());
futureResponse.complete(response);
}
@Override
public void failed(Throwable throwable) {
System.out.println(throwable);
futureResponse.completeExceptionally(throwable);
}
});
return futureResponse;
}
public CompletableFuture<Integer> outputResponseAsync(Response httpResponseData, HttpServletResponse resp){
System.out.println("----------------------------------------");
System.out.println("outputResponseAsync: " + Thread.currentThread().getName());
CompletableFuture<Integer> total = new CompletableFuture<>();
try (ServletOutputStream outputStream = resp.getOutputStream()){
outputStream.setWriteListener(new WriteListener() {
@Override
public void onWritePossible() throws IOException {
System.out.println("----------------------------------------");
System.out.println("onWritePossible: " + Thread.currentThread().getName());
outputStream.print(httpResponseData.getStatus());
total.complete(httpResponseData.getLength());
}
@Override
public void onError(Throwable t) {
System.out.println(t);
total.completeExceptionally(t);
}
});
} catch (IOException e) {
System.out.println(e);
total.completeExceptionally(e);
}
return total;
}
@Override
protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException {
System.out.println("----------------------------------------");
System.out.println("doGet: " + Thread.currentThread().getName());
final AsyncContext asyncContext = req.startAsync();
readRequestAsync(req)
.thenCompose(this::process)
.thenCompose(httpResponseData -> outputResponseAsync(httpResponseData, resp))
.thenAccept(a -> asyncContext.complete());
}
}
The server at http://localhost:3000 is an HTTP server written in Node.js which just returns a response after 27 seconds. I would like to make a request to the Node server, and while that request is being processed I want to make another HTTP request to the servlet to see if the same thread is being used. Currently I'm trying to use Payara 5.194 to do this, but even if I set the two thread pools to have one thread each, the app server seems to create additional threads. So, I would like to know from your knowledge whether this servlet is really doing non-blocking IO and not blocking at any point; it would also be amazing if I could do some experiment to verify this. I think it's important to point out that ServletInputStream is a subclass of InputStream, so I really don't know whether this is non-blocking IO. Thank you.
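One experiment sketch I have in mind (a rough idea, not verified): print every live thread from each callback, so that thread reuse or pool growth shows up directly in the logs.
// Rough sketch: call this from doGet, onDataAvailable, completed, and
// onWritePossible to compare which pooled threads the container actually uses.
private static void dumpThreads(String stage) {
System.out.println(stage + " running on " + Thread.currentThread().getName());
Thread.getAllStackTraces().keySet()
.forEach(t -> System.out.println("  live: " + t.getName()));
}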
Unable to use StreamingFileSink and store incoming events in compressed fashion.
I am trying to use StreamingFileSink to write unbounded event stream to S3. In the process, I would like to compress the data to make better use of storage size available.
I wrote a compressed string writer by borrowing some code from SequenceFileWriterFactory from Flink. It fails with the exception described below.
If I try to use BucketingSink, it works great.
Using BucketingSink, I approached the compressed string writing as below. Again, I borrowed this code from another pull request.
import org.apache.flink.streaming.connectors.fs.StreamWriterBase;
import org.apache.flink.streaming.connectors.fs.Writer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.compress.CodecPool;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionCodecFactory;
import org.apache.hadoop.io.compress.CompressionOutputStream;
import org.apache.hadoop.io.compress.Compressor;
import java.io.IOException;
public class CompressionStringWriter<T> extends StreamWriterBase<T> implements Writer<T> {
private static final long serialVersionUID = 3231207311080446279L;
private String codecName;
private String separator;
public String getCodecName() {
return codecName;
}
public String getSeparator() {
return separator;
}
private transient CompressionOutputStream compressedOutputStream;
public CompressionStringWriter(String codecName, String separator) {
this.codecName = codecName;
this.separator = separator;
}
public CompressionStringWriter(String codecName) {
this(codecName, System.lineSeparator());
}
protected CompressionStringWriter(CompressionStringWriter<T> other) {
super(other);
this.codecName = other.codecName;
this.separator = other.separator;
}
@Override
public void open(FileSystem fs, Path path) throws IOException {
super.open(fs, path);
Configuration conf = fs.getConf();
CompressionCodecFactory codecFactory = new CompressionCodecFactory(conf);
CompressionCodec codec = codecFactory.getCodecByName(codecName);
if (codec == null) {
throw new RuntimeException("Codec " + codecName + " not found");
}
Compressor compressor = CodecPool.getCompressor(codec, conf);
compressedOutputStream = codec.createOutputStream(getStream(), compressor);
}
@Override
public void close() throws IOException {
if (compressedOutputStream != null) {
compressedOutputStream.close();
compressedOutputStream = null;
} else {
super.close();
}
}
@Override
public void write(T element) throws IOException {
getStream(); // throws an exception if the stream has not been opened yet
compressedOutputStream.write(element.toString().getBytes());
compressedOutputStream.write(this.separator.getBytes());
}
@Override
public CompressionStringWriter<T> duplicate() {
return new CompressionStringWriter<>(this);
}
}
BucketingSink<DeviceEvent> bucketingSink = new BucketingSink<>("s3://"+ this.bucketName + "/" + this.objectPrefix);
bucketingSink
.setBucketer(new OrgIdBasedBucketAssigner())
.setWriter(new CompressionStringWriter<DeviceEvent>("Gzip", "\n"))
.setPartPrefix("file-")
.setPartSuffix(".gz")
.setBatchSize(1_500_000);
The one with BucketingSink works.
Now my code snippets using StreamingFileSink involves the below set of code.
import org.apache.flink.api.common.serialization.BulkWriter;
import java.io.IOException;
public class CompressedStringBulkWriter<T> implements BulkWriter<T> {
private final CompressedStringWriter compressedStringWriter;
public CompressedStringBulkWriter(final CompressedStringWriter compressedStringWriter) {
this.compressedStringWriter = compressedStringWriter;
}
@Override
public void addElement(T element) throws IOException {
this.compressedStringWriter.write(element);
}
@Override
public void flush() throws IOException {
this.compressedStringWriter.flush();
}
@Override
public void finish() throws IOException {
this.compressedStringWriter.close();
}
}
import org.apache.flink.api.common.serialization.BulkWriter;
import org.apache.flink.core.fs.FSDataOutputStream;
import org.apache.hadoop.conf.Configuration;
import java.io.IOException;
public class CompressedStringBulkWriterFactory<T> implements BulkWriter.Factory<T> {
private SerializableHadoopConfiguration serializableHadoopConfiguration;
public CompressedStringBulkWriterFactory(final Configuration hadoopConfiguration) {
this.serializableHadoopConfiguration = new SerializableHadoopConfiguration(hadoopConfiguration);
}
@Override
public BulkWriter<T> create(FSDataOutputStream out) throws IOException {
return new CompressedStringBulkWriter(new CompressedStringWriter(out, serializableHadoopConfiguration.get(), "Gzip", "\n"));
}
}
import org.apache.flink.core.fs.FSDataOutputStream;
import org.apache.flink.core.fs.FileSystem;
import org.apache.flink.core.fs.Path;
import org.apache.flink.runtime.fs.hdfs.HadoopFileSystem;
import org.apache.flink.util.Preconditions;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.CodecPool;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionCodecFactory;
import org.apache.hadoop.io.compress.CompressionOutputStream;
import org.apache.hadoop.io.compress.Compressor;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.IOException;
import java.io.Serializable;
public class CompressedStringWriter<T> implements Serializable {
private static final Logger LOG = LoggerFactory.getLogger(CompressedStringWriter.class);
private static final long serialVersionUID = 2115292142239557448L;
private String separator;
private transient CompressionOutputStream compressedOutputStream;
public CompressedStringWriter(FSDataOutputStream out, Configuration hadoopConfiguration, String codecName, String separator) {
this.separator = separator;
try {
Preconditions.checkNotNull(hadoopConfiguration, "Unable to determine hadoop configuration using path");
CompressionCodecFactory codecFactory = new CompressionCodecFactory(hadoopConfiguration);
CompressionCodec codec = codecFactory.getCodecByName(codecName);
Preconditions.checkNotNull(codec, "Codec " + codecName + " not found");
LOG.info("The codec name that was loaded from hadoop {}", codec);
Compressor compressor = CodecPool.getCompressor(codec, hadoopConfiguration);
this.compressedOutputStream = codec.createOutputStream(out, compressor);
LOG.info("Setup a compressor for codec {} and compressor {}", codec, compressor);
} catch (IOException ex) {
throw new RuntimeException("Unable to compose a hadoop compressor for the path", ex);
}
}
public void flush() throws IOException {
if (compressedOutputStream != null) {
compressedOutputStream.flush();
}
}
public void close() throws IOException {
if (compressedOutputStream != null) {
compressedOutputStream.close();
compressedOutputStream = null;
}
}
public void write(T element) throws IOException {
compressedOutputStream.write(element.toString().getBytes());
compressedOutputStream.write(this.separator.getBytes());
}
}
import org.apache.hadoop.conf.Configuration;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
public class SerializableHadoopConfiguration implements Serializable {
private static final long serialVersionUID = -1960900291123078166L;
private transient Configuration hadoopConfig;
SerializableHadoopConfiguration(Configuration hadoopConfig) {
this.hadoopConfig = hadoopConfig;
}
Configuration get() {
return this.hadoopConfig;
}
// --------------------
private void writeObject(ObjectOutputStream out) throws IOException {
this.hadoopConfig.write(out);
}
private void readObject(ObjectInputStream in) throws IOException {
final Configuration config = new Configuration();
config.readFields(in);
if (this.hadoopConfig == null) {
this.hadoopConfig = config;
}
}
}
My actual Flink job:
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
Properties kinesisConsumerConfig = new Properties();
...
...
DataStream<DeviceEvent> kinesis =
env.addSource(new FlinkKinesisConsumer<>(this.streamName, new DeviceEventSchema(), kinesisConsumerConfig)).name("source")
.setParallelism(16)
.setMaxParallelism(24);
final StreamingFileSink<DeviceEvent> bulkCompressStreamingFileSink = StreamingFileSink.<DeviceEvent>forBulkFormat(
path,
new CompressedStringBulkWriterFactory<>(
BucketingSink.createHadoopFileSystem(
new Path("s3a://"+ this.bucketName + "/" + this.objectPrefix),
null).getConf()))
.withBucketAssigner(new OrgIdBucketAssigner())
.build();
deviceEventDataStream.addSink(bulkCompressStreamingFileSink).name("bulkCompressStreamingFileSink").setParallelism(16);
env.execute();
I expect data to be saved in S3 as multiple files. Unfortunately, no files are being created.
In the logs, I see the below exception:
2019-05-15 22:17:20,855 INFO org.apache.flink.runtime.taskmanager.Task - Sink: bulkCompressStreamingFileSink (11/16) (c73684c10bb799a6e0217b6795571e22) switched from RUNNING to FAILED.
java.lang.Exception: Could not perform checkpoint 1 for operator Sink: bulkCompressStreamingFileSink (11/16).
at org.apache.flink.streaming.runtime.tasks.StreamTask.triggerCheckpointOnBarrier(StreamTask.java:595)
at org.apache.flink.streaming.runtime.io.BarrierBuffer.notifyCheckpoint(BarrierBuffer.java:396)
at org.apache.flink.streaming.runtime.io.BarrierBuffer.processBarrier(BarrierBuffer.java:292)
at org.apache.flink.streaming.runtime.io.BarrierBuffer.getNextNonBlocked(BarrierBuffer.java:200)
at org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:209)
at org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:105)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:300)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:711)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.Exception: Could not complete snapshot 1 for operator Sink: bulkCompressStreamingFileSink (11/16).
at org.apache.flink.streaming.api.operators.AbstractStreamOperator.snapshotState(AbstractStreamOperator.java:422)
at org.apache.flink.streaming.runtime.tasks.StreamTask$CheckpointingOperation.checkpointStreamOperator(StreamTask.java:1113)
at org.apache.flink.streaming.runtime.tasks.StreamTask$CheckpointingOperation.executeCheckpointing(StreamTask.java:1055)
at org.apache.flink.streaming.runtime.tasks.StreamTask.checkpointState(StreamTask.java:729)
at org.apache.flink.streaming.runtime.tasks.StreamTask.performCheckpoint(StreamTask.java:641)
at org.apache.flink.streaming.runtime.tasks.StreamTask.triggerCheckpointOnBarrier(StreamTask.java:586)
... 8 more
Caused by: java.io.IOException: Stream closed.
at org.apache.flink.fs.s3.common.utils.RefCountedFile.requireOpened(RefCountedFile.java:117)
at org.apache.flink.fs.s3.common.utils.RefCountedFile.write(RefCountedFile.java:74)
at org.apache.flink.fs.s3.common.utils.RefCountedBufferingFileStream.flush(RefCountedBufferingFileStream.java:105)
at org.apache.flink.fs.s3.common.writer.S3RecoverableFsDataOutputStream.closeAndUploadPart(S3RecoverableFsDataOutputStream.java:199)
at org.apache.flink.fs.s3.common.writer.S3RecoverableFsDataOutputStream.closeForCommit(S3RecoverableFsDataOutputStream.java:166)
at org.apache.flink.streaming.api.functions.sink.filesystem.PartFileWriter.closeForCommit(PartFileWriter.java:71)
at org.apache.flink.streaming.api.functions.sink.filesystem.BulkPartWriter.closeForCommit(BulkPartWriter.java:63)
at org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.closePartFile(Bucket.java:239)
at org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.prepareBucketForCheckpointing(Bucket.java:280)
at org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.onReceptionOfCheckpoint(Bucket.java:253)
at org.apache.flink.streaming.api.functions.sink.filesystem.Buckets.snapshotActiveBuckets(Buckets.java:244)
at org.apache.flink.streaming.api.functions.sink.filesystem.Buckets.snapshotState(Buckets.java:235)
at org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink.snapshotState(StreamingFileSink.java:347)
at org.apache.flink.streaming.util.functions.StreamingFunctionUtils.trySnapshotFunctionState(StreamingFunctionUtils.java:118)
at org.apache.flink.streaming.util.functions.StreamingFunctionUtils.snapshotFunctionState(StreamingFunctionUtils.java:99)
at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.snapshotState(AbstractUdfStreamOperator.java:90)
at org.apache.flink.streaming.api.operators.AbstractStreamOperator.snapshotState(AbstractStreamOperator.java:395)
So I am wondering: what am I missing?
I am using the latest AWS EMR (5.23).
In CompressedStringBulkWriter#finish(), you are calling the close() method on the CompressionOutputStream, which also closes the underlying stream, i.e. Flink's FSDataOutputStream. That stream has to remain open for checkpointing to be done properly by Flink's internals, in order to guarantee a recoverable stream. That is why you are getting
Caused by: java.io.IOException: Stream closed.
at org.apache.flink.fs.s3.common.utils.RefCountedFile.requireOpened(RefCountedFile.java:117)
at org.apache.flink.fs.s3.common.utils.RefCountedFile.write(RefCountedFile.java:74)
at org.apache.flink.fs.s3.common.utils.RefCountedBufferingFileStream.flush(RefCountedBufferingFileStream.java:105)
at org.apache.flink.fs.s3.common.writer.S3RecoverableFsDataOutputStream.closeAndUploadPart(S3RecoverableFsDataOutputStream.java:199)
at org.apache.flink.fs.s3.common.writer.S3RecoverableFsDataOutputStream.closeForCommit(S3RecoverableFsDataOutputStream.java:166)
So instead of compressedOutputStream.close(), use compressedOutputStream.finish(), which just flushes everything in the buffer to the output stream without closing it. BTW, there is a built-in HadoopCompressionBulkWriter available in the latest version of Flink; you can also use that.
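For illustration, the change might look like this (a sketch under the assumption that a finish() method is added to CompressedStringWriter, delegating to Hadoop's CompressionOutputStream.finish()):
// In CompressedStringWriter (hypothetical addition): write the compression
// trailer without closing Flink's underlying FSDataOutputStream.
public void finish() throws IOException {
if (compressedOutputStream != null) {
compressedOutputStream.finish();
}
}
// In CompressedStringBulkWriter: delegate to finish() instead of close().
@Override
public void finish() throws IOException {
this.compressedStringWriter.finish();
}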
I tried to create the Xtext calculator DSL as guided by the README in this repo. It is an Xtext DSL with a language server and an LSP4E implementation.
I downloaded the repo and opened it in the Eclipse IDE. Under the main project, there are 2 sub-projects: org.xtext.calc.parent (the Xtext project) and org.xtext.calc.lsp4e (the LSP4E implementation project).
In the org.xtext.calc.lsp4e project's src folder there are 3 Java files: Activator, CalculatorLanguageServer, and SocketStreamConnectionProvider. In the latter two, I get an error which I cannot resolve.
Below are the two files:
1.) CalculatorLanguageServer.java
package org.xtext.calc.lsp4e;
import java.io.File;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URI;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;
import org.eclipse.core.runtime.FileLocator;
import org.eclipse.core.runtime.Platform;
import org.eclipse.lsp4e.server.ProcessStreamConnectionProvider;
import org.eclipse.lsp4e.server.StreamConnectionProvider;
import org.eclipse.lsp4j.jsonrpc.messages.Message;
import org.eclipse.lsp4j.services.LanguageServer;
import org.osgi.framework.Bundle;
public class CalculatorLanguageServer implements StreamConnectionProvider {
private final static boolean SOCKET_MODE = true;
private StreamConnectionProvider delegate;
public CalculatorLanguageServer() {
if (SOCKET_MODE) {
this.delegate = new SocketStreamConnectionProvider(5007) {
};
} else {
List<String> commands = new ArrayList<>();
commands.add("java");
commands.add("-Xdebug");
commands.add("-Xrunjdwp:server=y,transport=dt_socket,address=4001,suspend=n,quiet=y");
commands.add("-jar");
Bundle bundle = Activator.getDefault().getBundle();
URL resource = bundle.getResource("/language-server/calculator-language-server-jar");
try {
commands.add(new File(FileLocator.resolve(resource).toURI()).getAbsolutePath());
} catch (Exception e) {
throw new IllegalStateException(e);
}
this.delegate = new ProcessStreamConnectionProvider(commands, Platform.getLocation().toOSString()) {};
}
}
public void start() throws IOException {
delegate.start();
}
public InputStream getInputStream() {
return delegate.getInputStream();
}
public OutputStream getOutputStream() {
return delegate.getOutputStream();
}
public Object getInitializationOptions(URI rootUri) {
return delegate.getInitializationOptions(rootUri);
}
public void stop() {
delegate.stop();
}
public void handleMessage(Message message, LanguageServer languageServer, URI rootURI) {
delegate.handleMessage(message, languageServer, rootURI);
}
@Override
public String toString() {
return "Calculator Language Server: " + super.toString();
}
}
2.) SocketStreamConnectionProvider.java
package org.xtext.calc.lsp4e;
import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import org.eclipse.lsp4e.server.StreamConnectionProvider;
public class SocketStreamConnectionProvider implements StreamConnectionProvider {
private int port;
private Socket socket;
private InputStream inputStream;
private OutputStream outputStream;
public SocketStreamConnectionProvider(int port) {
this.port = port;
}
@Override
public void start() throws IOException {
this.socket = new Socket("localhost", port);
inputStream = new BufferedInputStream(socket.getInputStream());
outputStream = new BufferedOutputStream(socket.getOutputStream());
}
@Override
public InputStream getInputStream() {
return inputStream;
}
@Override
public OutputStream getOutputStream() {
return outputStream;
}
@Override
public void stop() {
if (socket != null) {
try {
socket.close();
} catch (IOException e) {
e.printStackTrace();
}
}
}
}
In both of these files, I get an error on the class names:
1.) The type CalculatorLanguageServer must implement the inherited abstract method StreamConnectionProvider.getErrorStream()
2.) The type SocketStreamConnectionProvider must implement the inherited abstract method StreamConnectionProvider.getErrorStream()
How do I resolve these errors?
Thanks!
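I have not built this exact project, but the error suggests the lsp4e version on the classpath declares an additional getErrorStream() method on StreamConnectionProvider, so implementing it in both classes should clear the error. A minimal sketch for the socket-based provider (the assumption that returning null is acceptable when no separate error channel exists is mine):
@Override
public InputStream getErrorStream() {
// a plain socket connection has no separate error channel
return null;
}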
We have a JAX-RS implementation which needs to send back JSON output, but the response size is huge, and the client expects it synchronously.
Hence I tried to use StreamingOutput... but the client is not really receiving the data in chunks.
Below is sample snippet:
Server Side
streamingOutput = new StreamingOutput() {
@Override
public void write(OutputStream out) throws IOException, WebApplicationException {
JsonGenerator jsonGenerator = mapper.getFactory().createGenerator(out);
jsonGenerator.writeStartArray();
for(int i=0; i < 10; i++) {
jsonGenerator.writeStartObject();
jsonGenerator.writeStringField("Response_State", "Response State - " + i);
jsonGenerator.writeStringField("Response_Report", "Response Report - " + i);
jsonGenerator.writeStringField("Error_details", "Error Details - " + i);
jsonGenerator.writeEndObject();
jsonGenerator.flush();
try {
Thread.sleep(2000);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
jsonGenerator.writeEndArray();
jsonGenerator.close();
}
};
return Response.status(200).entity(streamingOutput).build();
Client
HttpClient client = HttpClientBuilder.create().build();
HttpPost post = new HttpPost("http://localhost:8080/AccessData/FetchReport");
post.setHeader("Content-type", "application/json");
ResponseHandler<HttpResponse> responseHandler = new BasicResponseHandler();
StringEntity entity = new StringEntity(jsonRequest); //jsonRequest is request string
post.setEntity(entity);
HttpResponse response = client.execute(post);
BufferedReader buffReader = new BufferedReader(new InputStreamReader(response.getEntity().getContent()));
JsonParser jsonParser = new JsonFactory().createParser(buffReader);
while(jsonParser.nextToken() != JsonToken.END_OBJECT) {
System.out.println(jsonParser.getCurrentName() + ":" + jsonParser.getCurrentValue());
}
String output;
while((output = buffReader.readLine()) != null) {
System.out.println(output);
}
In the server-side code, I am putting in a sleep call just to simulate a gap between chunks of data. What I need is for the client to receive each chunk of data as and when the server sends it back.
But here the client always gets the response in its entirety.
Any possible solution?
Thanks in advance.
It looks like the client side is not implemented correctly: the array of objects should be read using the parser.
Also, I would recommend reading and writing a data transfer object instead of low-level field-by-field reading and writing.
For the sake of completeness, here is a complete draft example that uses Jersey 2.25.1 and Jetty 9.2.14.v20151106.
Common
ResponseData class
import com.fasterxml.jackson.annotation.JsonCreator;
import com.fasterxml.jackson.annotation.JsonProperty;
public class ResponseData {
private final String responseState;
private final String responseReport;
private final String errorDetails;
@JsonCreator
public ResponseData(
@JsonProperty("Response_State") final String responseState,
@JsonProperty("Response_Report") final String responseReport,
@JsonProperty("Error_details") final String errorDetails) {
this.responseState = responseState;
this.responseReport = responseReport;
this.errorDetails = errorDetails;
}
public String getResponseState() {
return this.responseState;
}
public String getResponseReport() {
return this.responseReport;
}
public String getErrorDetails() {
return this.errorDetails;
}
@Override
public String toString() {
return String.format(
"ResponseData: responseState: %s; responseReport: %s; errorDetails: %s",
this.responseState,
this.responseReport,
this.errorDetails
);
}
}
Service
ServerProgram class
import java.net.URI;
import org.glassfish.jersey.jackson.JacksonFeature;
import org.glassfish.jersey.jetty.JettyHttpContainerFactory;
import org.glassfish.jersey.server.ResourceConfig;
public class ServerProgram {
public static void main(final String[] args) {
final URI uri = URI.create("http://localhost:8080/");
final ResourceConfig resourceConfig = new ResourceConfig(TestResource.class);
resourceConfig.register(JacksonFeature.class);
JettyHttpContainerFactory.createServer(uri, resourceConfig);
}
}
TestResource class
import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.JsonGenerator;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.io.IOException;
import java.io.OutputStream;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.WebApplicationException;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.StreamingOutput;
@Path("/")
public class TestResource {
@GET
@Produces(MediaType.APPLICATION_JSON)
public Response getData() {
final StreamingOutput streamingOutput = new JsonStreamingOutput();
return Response.status(200).entity(streamingOutput).build();
}
private static class JsonStreamingOutput implements StreamingOutput {
@Override
public void write(final OutputStream outputStream) throws IOException, WebApplicationException {
final ObjectMapper objectMapper = new ObjectMapper();
final JsonFactory jsonFactory = objectMapper.getFactory();
try (final JsonGenerator jsonGenerator = jsonFactory.createGenerator(outputStream)) {
jsonGenerator.writeStartArray();
for (int i = 0; i < 10; i++) {
final ResponseData responseData = new ResponseData(
"Response State - " + i,
"Response Report - " + i,
"Error Details - " + i
);
jsonGenerator.writeObject(responseData);
jsonGenerator.flush();
try {
Thread.sleep(5000);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
jsonGenerator.writeEndArray();
}
}
}
}
Client
ClientProgram class
import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.core.JsonToken;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.io.BufferedInputStream;
import java.io.IOException;
import java.io.InputStream;
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.MediaType;
import org.glassfish.jersey.client.ClientProperties;
public class ClientProgram {
public static void main(final String[] args) throws IOException {
Client client = null;
try {
client = ClientBuilder.newClient();
client.property(ClientProperties.READ_TIMEOUT, 10000);
try (final InputStream inputStream = client
.target("http://localhost:8080/")
.request(MediaType.APPLICATION_JSON)
.get(InputStream.class);
final BufferedInputStream bufferedInputStream = new BufferedInputStream(inputStream)) {
processStream(bufferedInputStream);
}
} finally {
if (client != null) {
client.close();
}
}
}
private static void processStream(final InputStream inputStream) throws IOException {
final ObjectMapper objectMapper = new ObjectMapper();
final JsonFactory jsonFactory = objectMapper.getFactory();
try (final JsonParser jsonParser = jsonFactory.createParser(inputStream)) {
final JsonToken arrayToken = jsonParser.nextToken();
if (arrayToken == null) {
// TODO: Return or throw exception.
return;
}
if (!JsonToken.START_ARRAY.equals(arrayToken)) {
// TODO: Return or throw exception.
return;
}
// Iterate through the objects of the array.
while (JsonToken.START_OBJECT.equals(jsonParser.nextToken())) {
final ResponseData responseData = jsonParser.readValueAs(ResponseData.class);
System.out.println(responseData);
}
}
}
}
Hope this helps.
I would like to modify an outgoing SOAP request by removing 2 XML nodes from the envelope's body.
I managed to set up an interceptor and get the generated String value of the message sent to the endpoint.
However, the following code does not seem to work, as the outgoing message is not edited as expected. Does anyone have some code or ideas on how to do this?
public class MyOutInterceptor extends AbstractSoapInterceptor {
public MyOutInterceptor() {
super(Phase.SEND);
}
public void handleMessage(SoapMessage message) throws Fault {
// Get message content for dirty editing...
StringWriter writer = new StringWriter();
CachedOutputStream cos = (CachedOutputStream)message.getContent(OutputStream.class);
InputStream inputStream = cos.getInputStream();
IOUtils.copy(inputStream, writer, "UTF-8");
String content = writer.toString();
// remove the substrings from envelope...
content = content.replace("<idJustification>0</idJustification>", "");
content = content.replace("<indicRdv>false</indicRdv>", "");
ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
outputStream.write(content.getBytes(Charset.forName("UTF-8")));
message.setContent(OutputStream.class, outputStream);
}
}
Based on the first comment, I created an abstract class which can easily be used to change the whole SOAP envelope.
Just in case someone wants a ready-to-use piece of code:
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import org.apache.commons.io.IOUtils;
import org.apache.cxf.binding.soap.interceptor.SoapPreProtocolOutInterceptor;
import org.apache.cxf.io.CachedOutputStream;
import org.apache.cxf.message.Message;
import org.apache.cxf.phase.AbstractPhaseInterceptor;
import org.apache.cxf.phase.Phase;
import org.apache.log4j.Logger;
/**
* http://www.mastertheboss.com/jboss-web-services/apache-cxf-interceptors
* http://stackoverflow.com/questions/6915428/how-to-modify-the-raw-xml-message-of-an-outbound-cxf-request
*
*/
public abstract class MessageChangeInterceptor extends AbstractPhaseInterceptor<Message> {
public MessageChangeInterceptor() {
super(Phase.PRE_STREAM);
addBefore(SoapPreProtocolOutInterceptor.class.getName());
}
protected abstract Logger getLogger();
protected abstract String changeOutboundMessage(String currentEnvelope);
protected abstract String changeInboundMessage(String currentEnvelope);
public void handleMessage(Message message) {
boolean isOutbound = false;
isOutbound = message == message.getExchange().getOutMessage()
|| message == message.getExchange().getOutFaultMessage();
if (isOutbound) {
OutputStream os = message.getContent(OutputStream.class);
CachedStream cs = new CachedStream();
message.setContent(OutputStream.class, cs);
message.getInterceptorChain().doIntercept(message);
try {
cs.flush();
IOUtils.closeQuietly(cs);
CachedOutputStream csnew = (CachedOutputStream) message.getContent(OutputStream.class);
String currentEnvelopeMessage = IOUtils.toString(csnew.getInputStream(), "UTF-8");
csnew.flush();
IOUtils.closeQuietly(csnew);
if (getLogger().isDebugEnabled()) {
getLogger().debug("Outbound message: " + currentEnvelopeMessage);
}
String res = changeOutboundMessage(currentEnvelopeMessage);
if (res != null) {
if (getLogger().isDebugEnabled()) {
getLogger().debug("Outbound message has been changed: " + res);
}
}
res = res != null ? res : currentEnvelopeMessage;
InputStream replaceInStream = IOUtils.toInputStream(res, "UTF-8");
IOUtils.copy(replaceInStream, os);
replaceInStream.close();
IOUtils.closeQuietly(replaceInStream);
os.flush();
message.setContent(OutputStream.class, os);
IOUtils.closeQuietly(os);
} catch (IOException ioe) {
getLogger().warn("Unable to perform change.", ioe);
throw new RuntimeException(ioe);
}
} else {
try {
InputStream is = message.getContent(InputStream.class);
String currentEnvelopeMessage = IOUtils.toString(is, "UTF-8");
IOUtils.closeQuietly(is);
if (getLogger().isDebugEnabled()) {
getLogger().debug("Inbound message: " + currentEnvelopeMessage);
}
String res = changeInboundMessage(currentEnvelopeMessage);
if (res != null) {
if (getLogger().isDebugEnabled()) {
getLogger().debug("Inbound message has been changed: " + res);
}
}
res = res != null ? res : currentEnvelopeMessage;
is = IOUtils.toInputStream(res, "UTF-8");
message.setContent(InputStream.class, is);
IOUtils.closeQuietly(is);
} catch (IOException ioe) {
getLogger().warn("Unable to perform change.", ioe);
throw new RuntimeException(ioe);
}
}
}
public void handleFault(Message message) {
}
private class CachedStream extends CachedOutputStream {
public CachedStream() {
super();
}
protected void doFlush() throws IOException {
currentStream.flush();
}
protected void doClose() throws IOException {
}
protected void onWrite() throws IOException {
}
}
}
I had this problem as well today. After much weeping and gnashing of teeth, I was able to alter the StreamInterceptor class in the configuration_interceptor demo that comes with the CXF source:
OutputStream os = message.getContent(OutputStream.class);
CachedStream cs = new CachedStream();
message.setContent(OutputStream.class, cs);
message.getInterceptorChain().doIntercept(message);
try {
cs.flush();
CachedOutputStream csnew = (CachedOutputStream) message.getContent(OutputStream.class);
String soapMessage = IOUtils.toString(csnew.getInputStream());
...
The soapMessage variable will contain the complete SOAP message. You should be able to manipulate the SOAP message, flush it to an output stream, and do a message.setContent(OutputStream.class... call to put your modifications on the message. This comes with no warranty, since I'm pretty new to CXF myself!
Note: CachedStream is a private class in the StreamInterceptor class. Don't forget to configure your interceptor to run in the PRE_STREAM phase so that the SOAP interceptors have a chance to write the SOAP message.
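For reference, the phase configuration that the note refers to might look like the following sketch (the later answers in this thread use the same pattern):
// Sketch: run in PRE_STREAM, before the SOAP protocol out interceptor,
// so the envelope has not been serialized to the wire yet.
public StreamInterceptor() {
super(Phase.PRE_STREAM);
addBefore(SoapPreProtocolOutInterceptor.class.getName());
}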
The following is able to bubble up server-side exceptions. Using os.close() instead of IOUtils.closeQuietly(os) in the previous solution is also able to bubble up exceptions.
public class OutInterceptor extends AbstractPhaseInterceptor<Message> {
public OutInterceptor() {
super(Phase.PRE_STREAM);
addBefore(StaxOutInterceptor.class.getName());
}
public void handleMessage(Message message) {
OutputStream os = message.getContent(OutputStream.class);
CachedOutputStream cos = new CachedOutputStream();
message.setContent(OutputStream.class, cos);
message.getInterceptorChain().add(new OutMessageChangingInterceptor(os));
}
}
public class OutMessageChangingInterceptor extends AbstractPhaseInterceptor<Message> {
private OutputStream os;
public OutMessageChangingInterceptor(OutputStream os){
super(Phase.PRE_STREAM_ENDING);
addAfter(StaxOutEndingInterceptor.class.getName());
this.os = os;
}
public void handleMessage(Message message) {
try {
CachedOutputStream csnew = (CachedOutputStream) message.getContent(OutputStream.class);
String currentEnvelopeMessage = IOUtils.toString(csnew.getInputStream(), (String) message.get(Message.ENCODING));
csnew.flush();
IOUtils.closeQuietly(csnew);
// changeOutboundMessage(String) is assumed to be defined elsewhere in this class
String res = changeOutboundMessage(currentEnvelopeMessage);
res = res != null ? res : currentEnvelopeMessage;
InputStream replaceInStream = IOUtils.toInputStream(res, (String) message.get(Message.ENCODING));
IOUtils.copy(replaceInStream, os);
replaceInStream.close();
IOUtils.closeQuietly(replaceInStream);
message.setContent(OutputStream.class, os);
} catch (IOException ioe) {
throw new RuntimeException(ioe);
}
}
}
A good example of replacing outbound SOAP content, based on this:
package kz.bee.bip;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import org.apache.commons.io.IOUtils;
import org.apache.cxf.binding.soap.interceptor.SoapPreProtocolOutInterceptor;
import org.apache.cxf.io.CachedOutputStream;
import org.apache.cxf.message.Message;
import org.apache.cxf.phase.AbstractPhaseInterceptor;
import org.apache.cxf.phase.Phase;
public class SOAPOutboundInterceptor extends AbstractPhaseInterceptor<Message> {
public SOAPOutboundInterceptor() {
super(Phase.PRE_STREAM);
addBefore(SoapPreProtocolOutInterceptor.class.getName());
}
public void handleMessage(Message message) {
boolean isOutbound = false;
isOutbound = message == message.getExchange().getOutMessage()
|| message == message.getExchange().getOutFaultMessage();
if (isOutbound) {
OutputStream os = message.getContent(OutputStream.class);
CachedStream cs = new CachedStream();
message.setContent(OutputStream.class, cs);
message.getInterceptorChain().doIntercept(message);
try {
cs.flush();
IOUtils.closeQuietly(cs);
CachedOutputStream csnew = (CachedOutputStream) message.getContent(OutputStream.class);
String currentEnvelopeMessage = IOUtils.toString(csnew.getInputStream(), "UTF-8");
csnew.flush();
IOUtils.closeQuietly(csnew);
/* here we can set new data instead of currentEnvelopeMessage*/
InputStream replaceInStream = IOUtils.toInputStream(currentEnvelopeMessage, "UTF-8");
IOUtils.copy(replaceInStream, os);
replaceInStream.close();
IOUtils.closeQuietly(replaceInStream);
os.flush();
message.setContent(OutputStream.class, os);
IOUtils.closeQuietly(os);
} catch (IOException ioe) {
ioe.printStackTrace();
}
}
}
public void handleFault(Message message) {
}
private static class CachedStream extends CachedOutputStream {
public CachedStream() {
super();
}
protected void doFlush() throws IOException {
currentStream.flush();
}
protected void doClose() throws IOException {
}
protected void onWrite() throws IOException {
}
}
}
A better way would be to modify the message using the DOM interface. You need to add the SAAJOutInterceptor first (this might have a performance hit for big requests) and then your custom interceptor, which is executed in the USER_PROTOCOL phase.
import org.apache.cxf.binding.soap.SoapMessage;
import org.apache.cxf.binding.soap.interceptor.AbstractSoapInterceptor;
import org.apache.cxf.interceptor.Fault;
import org.apache.cxf.phase.Phase;
import org.w3c.dom.Node;
import javax.xml.soap.SOAPException;
import javax.xml.soap.SOAPMessage;
public abstract class SoapNodeModifierInterceptor extends AbstractSoapInterceptor {
SoapNodeModifierInterceptor() { super(Phase.USER_PROTOCOL); }
@Override public void handleMessage(SoapMessage message) throws Fault {
try {
if (message == null) {
return;
}
SOAPMessage sm = message.getContent(SOAPMessage.class);
if (sm == null) {
throw new RuntimeException("You must add the SAAJOutInterceptor to the chain");
}
modifyNodes(sm.getSOAPBody());
} catch (SOAPException e) {
throw new RuntimeException(e);
}
}
abstract void modifyNodes(Node node);
}
This one's working for me. It's based on the StreamInterceptor class from the configuration_interceptor example in the Apache CXF samples.
It's in Scala instead of Java, but the conversion is straightforward.
I tried to add comments to explain what's happening (as far as I understand it).
import java.io.OutputStream
import org.apache.cxf.binding.soap.interceptor.SoapPreProtocolOutInterceptor
import org.apache.cxf.helpers.IOUtils
import org.apache.cxf.io.CachedOutputStream
import org.apache.cxf.message.Message
import org.apache.cxf.phase.AbstractPhaseInterceptor
import org.apache.cxf.phase.Phase
// java note: base constructor call is hidden at the end of class declaration
class StreamInterceptor() extends AbstractPhaseInterceptor[Message](Phase.PRE_STREAM) {
// java note: put this into the constructor after calling super(Phase.PRE_STREAM);
addBefore(classOf[SoapPreProtocolOutInterceptor].getName)
override def handleMessage(message: Message) = {
// get original output stream
val osOrig = message.getContent(classOf[OutputStream])
// our output stream
val osNew = new CachedOutputStream
// replace it with ours
message.setContent(classOf[OutputStream], osNew)
// fills the osNew instead of osOrig
message.getInterceptorChain.doIntercept(message)
// flush before getting content
osNew.flush()
// get filled content
val content = IOUtils.toString(osNew.getInputStream, "UTF-8")
// we got the content, we may close our output stream now
osNew.close()
// modified content
val modifiedContent = content.replace("a-string", "another-string")
// fill original output stream
osOrig.write(modifiedContent.getBytes("UTF-8"))
// flush before set
osOrig.flush()
// replace with original output stream filled with our modified content
message.setContent(classOf[OutputStream], osOrig)
}
}