ClassLoader serializable? - java

I've been trying to build a sort of alpha key system for my game. To stop people from decompiling my jar and changing some code to bypass the check and get straight into my game, my idea was that, after some verification, the server would send a serialized copy of a ClassLoader object to the client, which the client could then use to load the required files off an external host and start running the game.
It turns out this doesn't work at all: ClassLoader seems to be non-serializable. Are there any suggestions on how I could build a similar system, or some way to force that ClassLoader object through?
Source code:
InitServer.java:
package org.arno;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import org.arno.Packet.ClassLoaderPacket;
public class InitServer {
private static ObjectOutputStream out;
private static ObjectInputStream in;
private static ServerSocket server;
private static Socket connection;
private static final float HANDSHAKE_UID = 9678;
public static void main(String[] args) {
startServer();
}
private static void startServer() {
try {
server = new ServerSocket(7799,100);
System.out.println("[LoginServer] Initiated");
while (true) {
waitForClientConnection();
setStreams();
waitForHandShake();
sendData();
closeClientConnection();
}
} catch (Exception e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
private static void closeClientConnection() throws Exception {
out.close();
in.close();
connection.close();
}
private static void waitForHandShake() throws Exception{
float handshake = (float) in.readObject();
System.out.println(handshake == HANDSHAKE_UID? "Handshakes match UID" : "Wrong handshake sent");
}
private static void sendData() throws Exception {
ClassLoaderPacket.writeObject(new ClassLoaderPacket(out));
System.out.println("DATA SEND");
}
private static void waitForClientConnection() throws Exception {
connection = server.accept();
System.out.println("[LoginServer] Connection made from IP ["
+ connection.getInetAddress().getHostAddress() + "]");
}
private static void setStreams() throws Exception {
out = new ObjectOutputStream(connection.getOutputStream());
out.flush();
in = new ObjectInputStream(connection.getInputStream());
}
}
ClassLoaderPacket.java:
package org.arno.Packet;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.net.MalformedURLException;
import java.net.URL;
import java.net.URLClassLoader;
/**
* @author arno
* File: ClassLoaderPacket.java
*/
public class ClassLoaderPacket implements Serializable {
static ObjectOutputStream out;
private transient ClassLoader cL;
private static final String GAME_URL = "https://dl.dropboxusercontent.com/u/9385659/Avalonpk718.jar";
public ClassLoaderPacket(ObjectOutputStream out) throws MalformedURLException {
this.out = out;
cL = new URLClassLoader(new URL[] { new URL(GAME_URL) });
}
public ClassLoader getClassLoaderContext() {
return cL;
}
public static void writeObject(ClassLoaderPacket packet) throws IOException {
out.writeObject(packet.getClassLoaderContext());
}
}
Client-side reading:
public void receiveData() throws Exception {
gameLoader = (ClassLoader) in.readObject();
}

A ClassLoader holds far too many complex, JVM-internal fields to be serialized. Besides, a serializable class has to implement the Serializable interface and should declare a serialVersionUID, which ClassLoader does not.
Would it be enough to just obfuscate the code? There are plenty of tools that can help you conceal it.
Here is a useful thread about Java code obfuscation/protection: Best Java obfuscator?
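Rather than trying to ram a ClassLoader through the wire, you can have the server send something that is serializable (the jar URL, or a one-time token) and build the URLClassLoader on the client. A minimal sketch of the client side under that assumption; the entry class org.arno.GameMain is hypothetical, and note this offers no real protection either, since the client still ends up holding the URL:
public void receiveData() throws Exception {
    // the server writes the jar location as a plain String after the handshake
    String gameUrl = (String) in.readObject();
    gameLoader = new URLClassLoader(new URL[] { new URL(gameUrl) });
    // hypothetical entry point; reflectively start the downloaded game
    Class<?> main = gameLoader.loadClass("org.arno.GameMain");
    main.getMethod("main", String[].class).invoke(null, (Object) new String[0]);
}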

Related

StreamingFileSink bulk writer results in some checkpoint error when running in AWS EMR

Unable to use StreamingFileSink to store incoming events in compressed fashion.
I am trying to use StreamingFileSink to write an unbounded event stream to S3. In the process, I would like to compress the data to make better use of the available storage.
I wrote a compressed string writer by borrowing some code from SequenceFileWriterFactory in Flink. It fails with the exception described below.
If I try to use BucketingSink, it works great.
Using BucketingSink, I approached the compressed string write as below. Again, I borrowed this code from another pull request.
import org.apache.flink.streaming.connectors.fs.StreamWriterBase;
import org.apache.flink.streaming.connectors.fs.Writer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.compress.CodecPool;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionCodecFactory;
import org.apache.hadoop.io.compress.CompressionOutputStream;
import org.apache.hadoop.io.compress.Compressor;
import java.io.IOException;
public class CompressionStringWriter<T> extends StreamWriterBase<T> implements Writer<T> {
private static final long serialVersionUID = 3231207311080446279L;
private String codecName;
private String separator;
public String getCodecName() {
return codecName;
}
public String getSeparator() {
return separator;
}
private transient CompressionOutputStream compressedOutputStream;
public CompressionStringWriter(String codecName, String separator) {
this.codecName = codecName;
this.separator = separator;
}
public CompressionStringWriter(String codecName) {
this(codecName, System.lineSeparator());
}
protected CompressionStringWriter(CompressionStringWriter<T> other) {
super(other);
this.codecName = other.codecName;
this.separator = other.separator;
}
@Override
public void open(FileSystem fs, Path path) throws IOException {
super.open(fs, path);
Configuration conf = fs.getConf();
CompressionCodecFactory codecFactory = new CompressionCodecFactory(conf);
CompressionCodec codec = codecFactory.getCodecByName(codecName);
if (codec == null) {
throw new RuntimeException("Codec " + codecName + " not found");
}
Compressor compressor = CodecPool.getCompressor(codec, conf);
compressedOutputStream = codec.createOutputStream(getStream(), compressor);
}
@Override
public void close() throws IOException {
if (compressedOutputStream != null) {
compressedOutputStream.close();
compressedOutputStream = null;
} else {
super.close();
}
}
@Override
public void write(Object element) throws IOException {
getStream();
compressedOutputStream.write(element.toString().getBytes());
compressedOutputStream.write(this.separator.getBytes());
}
@Override
public CompressionStringWriter<T> duplicate() {
return new CompressionStringWriter<>(this);
}
}
BucketingSink<DeviceEvent> bucketingSink = new BucketingSink<>("s3://"+ this.bucketName + "/" + this.objectPrefix);
bucketingSink
.setBucketer(new OrgIdBasedBucketAssigner())
.setWriter(new CompressionStringWriter<DeviceEvent>("Gzip", "\n"))
.setPartPrefix("file-")
.setPartSuffix(".gz")
.setBatchSize(1_500_000);
The one with BucketingSink works.
Now my code snippets using StreamingFileSink involve the below set of code.
import org.apache.flink.api.common.serialization.BulkWriter;
import java.io.IOException;
public class CompressedStringBulkWriter<T> implements BulkWriter<T> {
private final CompressedStringWriter compressedStringWriter;
public CompressedStringBulkWriter(final CompressedStringWriter compressedStringWriter) {
this.compressedStringWriter = compressedStringWriter;
}
@Override
public void addElement(T element) throws IOException {
this.compressedStringWriter.write(element);
}
@Override
public void flush() throws IOException {
this.compressedStringWriter.flush();
}
@Override
public void finish() throws IOException {
this.compressedStringWriter.close();
}
}
import org.apache.flink.api.common.serialization.BulkWriter;
import org.apache.flink.core.fs.FSDataOutputStream;
import org.apache.hadoop.conf.Configuration;
import java.io.IOException;
public class CompressedStringBulkWriterFactory<T> implements BulkWriter.Factory<T> {
private SerializableHadoopConfiguration serializableHadoopConfiguration;
public CompressedStringBulkWriterFactory(final Configuration hadoopConfiguration) {
this.serializableHadoopConfiguration = new SerializableHadoopConfiguration(hadoopConfiguration);
}
@Override
public BulkWriter<T> create(FSDataOutputStream out) throws IOException {
return new CompressedStringBulkWriter(new CompressedStringWriter(out, serializableHadoopConfiguration.get(), "Gzip", "\n"));
}
}
import org.apache.flink.core.fs.FSDataOutputStream;
import org.apache.flink.core.fs.FileSystem;
import org.apache.flink.core.fs.Path;
import org.apache.flink.runtime.fs.hdfs.HadoopFileSystem;
import org.apache.flink.util.Preconditions;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.CodecPool;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionCodecFactory;
import org.apache.hadoop.io.compress.CompressionOutputStream;
import org.apache.hadoop.io.compress.Compressor;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.IOException;
import java.io.Serializable;
public class CompressedStringWriter<T> implements Serializable {
private static final Logger LOG = LoggerFactory.getLogger(CompressedStringWriter.class);
private static final long serialVersionUID = 2115292142239557448L;
private String separator;
private transient CompressionOutputStream compressedOutputStream;
public CompressedStringWriter(FSDataOutputStream out, Configuration hadoopConfiguration, String codecName, String separator) {
this.separator = separator;
try {
Preconditions.checkNotNull(hadoopConfiguration, "Unable to determine hadoop configuration using path");
CompressionCodecFactory codecFactory = new CompressionCodecFactory(hadoopConfiguration);
CompressionCodec codec = codecFactory.getCodecByName(codecName);
Preconditions.checkNotNull(codec, "Codec " + codecName + " not found");
LOG.info("The codec name that was loaded from hadoop {}", codec);
Compressor compressor = CodecPool.getCompressor(codec, hadoopConfiguration);
this.compressedOutputStream = codec.createOutputStream(out, compressor);
LOG.info("Setup a compressor for codec {} and compressor {}", codec, compressor);
} catch (IOException ex) {
throw new RuntimeException("Unable to compose a hadoop compressor for the path", ex);
}
}
public void flush() throws IOException {
if (compressedOutputStream != null) {
compressedOutputStream.flush();
}
}
public void close() throws IOException {
if (compressedOutputStream != null) {
compressedOutputStream.close();
compressedOutputStream = null;
}
}
public void write(T element) throws IOException {
compressedOutputStream.write(element.toString().getBytes());
compressedOutputStream.write(this.separator.getBytes());
}
}
import org.apache.hadoop.conf.Configuration;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
public class SerializableHadoopConfiguration implements Serializable {
private static final long serialVersionUID = -1960900291123078166L;
private transient Configuration hadoopConfig;
SerializableHadoopConfiguration(Configuration hadoopConfig) {
this.hadoopConfig = hadoopConfig;
}
Configuration get() {
return this.hadoopConfig;
}
// --------------------
private void writeObject(ObjectOutputStream out) throws IOException {
this.hadoopConfig.write(out);
}
private void readObject(ObjectInputStream in) throws IOException {
final Configuration config = new Configuration();
config.readFields(in);
if (this.hadoopConfig == null) {
this.hadoopConfig = config;
}
}
}
My actual Flink job:
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
Properties kinesisConsumerConfig = new Properties();
...
...
DataStream<DeviceEvent> kinesis =
env.addSource(new FlinkKinesisConsumer<>(this.streamName, new DeviceEventSchema(), kinesisConsumerConfig)).name("source")
.setParallelism(16)
.setMaxParallelism(24);
final StreamingFileSink<DeviceEvent> bulkCompressStreamingFileSink = StreamingFileSink.<DeviceEvent>forBulkFormat(
path,
new CompressedStringBulkWriterFactory<>(
BucketingSink.createHadoopFileSystem(
new Path("s3a://"+ this.bucketName + "/" + this.objectPrefix),
null).getConf()))
.withBucketAssigner(new OrgIdBucketAssigner())
.build();
deviceEventDataStream.addSink(bulkCompressStreamingFileSink).name("bulkCompressStreamingFileSink").setParallelism(16);
env.execute();
I expect data to be saved in S3 as multiple files. Unfortunately, no files are being created.
In the logs, I see the below exception:
2019-05-15 22:17:20,855 INFO org.apache.flink.runtime.taskmanager.Task - Sink: bulkCompressStreamingFileSink (11/16) (c73684c10bb799a6e0217b6795571e22) switched from RUNNING to FAILED.
java.lang.Exception: Could not perform checkpoint 1 for operator Sink: bulkCompressStreamingFileSink (11/16).
at org.apache.flink.streaming.runtime.tasks.StreamTask.triggerCheckpointOnBarrier(StreamTask.java:595)
at org.apache.flink.streaming.runtime.io.BarrierBuffer.notifyCheckpoint(BarrierBuffer.java:396)
at org.apache.flink.streaming.runtime.io.BarrierBuffer.processBarrier(BarrierBuffer.java:292)
at org.apache.flink.streaming.runtime.io.BarrierBuffer.getNextNonBlocked(BarrierBuffer.java:200)
at org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:209)
at org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:105)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:300)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:711)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.Exception: Could not complete snapshot 1 for operator Sink: bulkCompressStreamingFileSink (11/16).
at org.apache.flink.streaming.api.operators.AbstractStreamOperator.snapshotState(AbstractStreamOperator.java:422)
at org.apache.flink.streaming.runtime.tasks.StreamTask$CheckpointingOperation.checkpointStreamOperator(StreamTask.java:1113)
at org.apache.flink.streaming.runtime.tasks.StreamTask$CheckpointingOperation.executeCheckpointing(StreamTask.java:1055)
at org.apache.flink.streaming.runtime.tasks.StreamTask.checkpointState(StreamTask.java:729)
at org.apache.flink.streaming.runtime.tasks.StreamTask.performCheckpoint(StreamTask.java:641)
at org.apache.flink.streaming.runtime.tasks.StreamTask.triggerCheckpointOnBarrier(StreamTask.java:586)
... 8 more
Caused by: java.io.IOException: Stream closed.
at org.apache.flink.fs.s3.common.utils.RefCountedFile.requireOpened(RefCountedFile.java:117)
at org.apache.flink.fs.s3.common.utils.RefCountedFile.write(RefCountedFile.java:74)
at org.apache.flink.fs.s3.common.utils.RefCountedBufferingFileStream.flush(RefCountedBufferingFileStream.java:105)
at org.apache.flink.fs.s3.common.writer.S3RecoverableFsDataOutputStream.closeAndUploadPart(S3RecoverableFsDataOutputStream.java:199)
at org.apache.flink.fs.s3.common.writer.S3RecoverableFsDataOutputStream.closeForCommit(S3RecoverableFsDataOutputStream.java:166)
at org.apache.flink.streaming.api.functions.sink.filesystem.PartFileWriter.closeForCommit(PartFileWriter.java:71)
at org.apache.flink.streaming.api.functions.sink.filesystem.BulkPartWriter.closeForCommit(BulkPartWriter.java:63)
at org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.closePartFile(Bucket.java:239)
at org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.prepareBucketForCheckpointing(Bucket.java:280)
at org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.onReceptionOfCheckpoint(Bucket.java:253)
at org.apache.flink.streaming.api.functions.sink.filesystem.Buckets.snapshotActiveBuckets(Buckets.java:244)
at org.apache.flink.streaming.api.functions.sink.filesystem.Buckets.snapshotState(Buckets.java:235)
at org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink.snapshotState(StreamingFileSink.java:347)
at org.apache.flink.streaming.util.functions.StreamingFunctionUtils.trySnapshotFunctionState(StreamingFunctionUtils.java:118)
at org.apache.flink.streaming.util.functions.StreamingFunctionUtils.snapshotFunctionState(StreamingFunctionUtils.java:99)
at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.snapshotState(AbstractUdfStreamOperator.java:90)
at org.apache.flink.streaming.api.operators.AbstractStreamOperator.snapshotState(AbstractStreamOperator.java:395)
So I am wondering what I am missing.
I am using the latest AWS EMR (5.23).
In CompressedStringBulkWriter#close(), you are calling close() on the CompressionOutputStream, which also closes the underlying stream, i.e. Flink's FSDataOutputStream. That stream has to remain open for Flink's internals to perform checkpointing properly and guarantee a recoverable stream. That is why you are getting
Caused by: java.io.IOException: Stream closed.
at org.apache.flink.fs.s3.common.utils.RefCountedFile.requireOpened(RefCountedFile.java:117)
at org.apache.flink.fs.s3.common.utils.RefCountedFile.write(RefCountedFile.java:74)
at org.apache.flink.fs.s3.common.utils.RefCountedBufferingFileStream.flush(RefCountedBufferingFileStream.java:105)
at org.apache.flink.fs.s3.common.writer.S3RecoverableFsDataOutputStream.closeAndUploadPart(S3RecoverableFsDataOutputStream.java:199)
at org.apache.flink.fs.s3.common.writer.S3RecoverableFsDataOutputStream.closeForCommit(S3RecoverableFsDataOutputStream.java:166)
So instead of compressedOutputStream.close(), use compressedOutputStream.finish(), which just flushes everything in the buffer to the output stream without closing it. By the way, recent versions of Flink ship a built-in HadoopCompressionBulkWriter (in the flink-compress module), which you can also use.
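A minimal sketch of that fix, assuming the rest of the classes stay as posted: give CompressedStringWriter a finish() method and delegate BulkWriter#finish() to it instead of close():
// In CompressedStringWriter: end the compressed stream without closing
// Flink's FSDataOutputStream underneath it.
public void finish() throws IOException {
    if (compressedOutputStream != null) {
        compressedOutputStream.finish(); // writes e.g. the gzip trailer only
        compressedOutputStream.flush();
    }
}
// In CompressedStringBulkWriter:
@Override
public void finish() throws IOException {
    this.compressedStringWriter.finish(); // instead of close()
}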

How to only allow a single connection (url/port) to read and write from a flink application

I read from a url/port, perform some processing, and write back to the same url/port. The url/port allows only a single connection, over which you must read and write as needed.
Flink can read from and write to the url/port, but it opens 2 connections.
I have used the basic connection to and from a url/port through Flink:
// set up the streaming execution environment
val env = StreamExecutionEnvironment.getExecutionEnvironment
val data_stream = env.socketTextStream(url, port, socket_stream_deliminator, socket_connection_retries)
.map(x => printInput(x))
.writeToSocket(url, port, new SimpleStringSchema())
//.addSink(new SocketClientSink[String](url, port.toInt, new SimpleStringSchema))
// execute program
env.execute("Flink Streaming Scala API Skeleton")
The ideal solution (and the only one for my case) is to read and write over the same connection rather than creating 2 separate connections.
How would I go about doing this?
As I said in the comment, you have to store your connection in some static variable, because your sources and sinks won't use the same connection otherwise.
You must also ensure that your source and sink run on the same JVM using the same classloader; otherwise you will still have more than one connection.
I built this wrapper class, which holds a raw socket connection and a Reader/Writer instance for that connection. Because your source will always stop before your sink (that's how Flink works), this class also reconnects if it was closed before.
package example;
import java.io.BufferedReader;
import java.io.Closeable;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintStream;
import java.net.Socket;
public class SocketConnection implements Closeable {
private final String host;
private final int port;
private final Object lock;
private volatile Socket socket;
private volatile BufferedReader reader;
private volatile PrintStream writer;
public SocketConnection(String host, int port) {
this.host = host;
this.port = port;
this.lock = new Object();
this.socket = null;
this.reader = null;
this.writer = null;
}
private void connect() throws IOException {
this.socket = new Socket(this.host, this.port);
this.reader = new BufferedReader(new InputStreamReader(this.socket.getInputStream()));
this.writer = new PrintStream(this.socket.getOutputStream());
}
private void ensureConnected() throws IOException {
// only acquire lock if null
if (this.socket == null) {
synchronized (this.lock) {
// recheck if socket is still null
if (this.socket == null) {
connect();
}
}
}
}
public BufferedReader getReader() throws IOException {
ensureConnected();
return this.reader;
}
public PrintStream getWriter() throws IOException {
ensureConnected();
return this.writer;
}
@Override
public void close() throws IOException {
if (this.socket != null) {
synchronized (this.lock) {
if (this.socket != null) {
this.reader.close();
this.reader = null;
this.writer.close();
this.writer = null;
this.socket.close();
this.socket = null;
}
}
}
}
}
Your Main Class (or any other class) holds one instance of this class which is then accessed by both your source and your sink:
package example;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
public class Main {
public static final SocketConnection CONNECTION = new SocketConnection("your-host", 12345);
public static void main(String[] args) throws Exception {
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.addSource(new SocketTextStreamSource())
.addSink(new SocketTextStreamSink());
env.execute("Flink Streaming Scala API Skeleton");
}
}
Your SourceFunction could look more or less like this:
package example;
import org.apache.flink.streaming.api.functions.source.SourceFunction;
public class SocketTextStreamSource implements SourceFunction<String> {
private volatile boolean running;
public SocketTextStreamSource() {
this.running = true;
}
@Override
public void run(SourceContext<String> context) throws Exception {
try (SocketConnection conn = Main.CONNECTION) {
String line;
while (this.running && (line = conn.getReader().readLine()) != null) {
context.collect(line);
}
}
}
@Override
public void cancel() {
this.running = false;
}
}
And your SinkFunction:
package example;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;
public class SocketTextStreamSink extends RichSinkFunction<String> {
private transient SocketConnection connection;
@Override
public void open(Configuration parameters) throws Exception {
this.connection = Main.CONNECTION;
}
@Override
public void invoke(String value, Context context) throws Exception {
this.connection.getWriter().println(value);
this.connection.getWriter().flush();
}
@Override
public void close() throws Exception {
this.connection.close();
}
}
Note that I always go through getReader() and getWriter(), because the underlying Socket may have been closed in the meantime.

The type CalculatorLanguageServer must implement the inherited abstract method StreamConnectionProvider.getErrorStream()

I tried to create the Xtext calculator DSL as guided by the README in this repo. It is an Xtext DSL with a language server and an LSP4E implementation.
I downloaded the repo and opened it in the Eclipse IDE. Under the main project there are 2 sub-projects: org.xtext.calc.parent (the Xtext project) and org.xtext.calc.lsp4e (the LSP4E implementation project).
In the org.xtext.calc.lsp4e project's src folder there are 3 Java files: Activator, CalculatorLanguageServer, and SocketStreamConnectionProvider. In the latter two, I get an error which I cannot resolve.
Below are the two files:
1.) CalculatorLanguageServer.java
package org.xtext.calc.lsp4e;
import java.io.File;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URI;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;
import org.eclipse.core.runtime.FileLocator;
import org.eclipse.core.runtime.Platform;
import org.eclipse.lsp4e.server.ProcessStreamConnectionProvider;
import org.eclipse.lsp4e.server.StreamConnectionProvider;
import org.eclipse.lsp4j.jsonrpc.messages.Message;
import org.eclipse.lsp4j.services.LanguageServer;
import org.osgi.framework.Bundle;
public class CalculatorLanguageServer implements StreamConnectionProvider {
private final static boolean SOCKET_MODE = true;
private StreamConnectionProvider delegate;
public CalculatorLanguageServer() {
if (SOCKET_MODE) {
this.delegate = new SocketStreamConnectionProvider(5007) {
};
} else {
List<String> commands = new ArrayList<>();
commands.add("java");
commands.add("-Xdebug");
commands.add("-Xrunjdwp:server=y,transport=dt_socket,address=4001,suspend=n,quiet=y");
commands.add("-jar");
Bundle bundle = Activator.getDefault().getBundle();
URL resource = bundle.getResource("/language-server/calculator-language-server-jar");
try {
commands.add(new File(FileLocator.resolve(resource).toURI()).getAbsolutePath());
} catch (Exception e) {
throw new IllegalStateException(e);
}
this.delegate = new ProcessStreamConnectionProvider(commands, Platform.getLocation().toOSString()) {};
}
}
public void start() throws IOException {
delegate.start();
}
public InputStream getInputStream() {
return delegate.getInputStream();
}
public OutputStream getOutputStream() {
return delegate.getOutputStream();
}
public Object getInitializationOptions(URI rootUri) {
return delegate.getInitializationOptions(rootUri);
}
public void stop() {
delegate.stop();
}
public void handleMessage(Message message, LanguageServer languageServer, URI rootURI) {
delegate.handleMessage(message, languageServer, rootURI);
}
@Override
public String toString() {
return "Calculator Language Server: " + super.toString();
}
}
2.) SocketStreamConnectionProvider.java
package org.xtext.calc.lsp4e;
import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import org.eclipse.lsp4e.server.StreamConnectionProvider;
public class SocketStreamConnectionProvider implements StreamConnectionProvider {
private int port;
private Socket socket;
private InputStream inputStream;
private OutputStream outputStream;
public SocketStreamConnectionProvider(int port) {
this.port = port;
}
@Override
public void start() throws IOException {
this.socket = new Socket("localhost", port);
inputStream = new BufferedInputStream(socket.getInputStream());
outputStream = new BufferedOutputStream(socket.getOutputStream());
}
@Override
public InputStream getInputStream() {
return inputStream;
}
@Override
public OutputStream getOutputStream() {
return outputStream;
}
@Override
public void stop() {
if (socket != null) {
try {
socket.close();
} catch (IOException e) {
e.printStackTrace();
}
}
}
}
In both of these files, I get an error on the class name:
1.) The type CalculatorLanguageServer must implement the inherited abstract method StreamConnectionProvider.getErrorStream()
2.) The type SocketStreamConnectionProvider must implement the inherited abstract method StreamConnectionProvider.getErrorStream()
How to resolve these errors?
Thanks!
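The compiler error itself points at the fix: newer LSP4E versions declare getErrorStream() as an abstract method on StreamConnectionProvider, so both classes must implement it. A minimal sketch under that assumption; a plain socket has no separate error channel, so returning null there is typical:
// In SocketStreamConnectionProvider:
@Override
public InputStream getErrorStream() {
    return null; // no separate error channel on a socket connection
}
// In CalculatorLanguageServer, delegate like the other methods:
@Override
public InputStream getErrorStream() {
    return delegate.getErrorStream();
}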

Java File Transfer Program Hangs before Executing SwingWorker

I've written a program which uses multicast to discover peers on a local network and allows file transfer between them. It works, except that some part of acquiring the files/initializing the transfer thread is ridiculously slow. It hangs for about 10-15 seconds, then begins transferring and completes normally:
Transfer.java
My JFrame GUI class. This was done with NetBeans for convenience, so any generated code won't be posted here.
package transfer;
import java.beans.PropertyChangeEvent;
import java.io.File;
import java.io.IOException;
import java.util.logging.Level;
import java.util.logging.Logger;
import javax.swing.JFileChooser;
import static javax.swing.JFileChooser.FILES_AND_DIRECTORIES;
import javax.swing.SwingWorker;
import javax.swing.UIManager;
import javax.swing.UnsupportedLookAndFeelException;
public class Transfer extends javax.swing.JFrame {
private final FileDrop fileDrop;
private Client client;
private final Server server;
private final ClientMulticast clientMulticast;
private final ServerMulticast serverMulticast;
public Transfer() throws IOException {
initComponents();
this.setTitle("Transfer");
peerBox.setEditable(false);
peerBox.removeAllItems();
fileDrop = new FileDrop(backgroundPanel, (java.io.File[] files) -> {
System.out.println(files[0].isDirectory());
if (peerBox.getSelectedIndex() != -1) {
sendLabel.setText("Hello");
try {
client = new Client(sendLabel, files, (Peer) peerBox.getSelectedItem());
//Client property change listener - listens for updates to progress bar
client.addPropertyChangeListener((PropertyChangeEvent evt1) -> {
if (null != evt1.getPropertyName()) switch (evt1.getPropertyName()) {
case "progress":
sendProgressBar.setValue((Integer) evt1.getNewValue());
break;
}
});
client.execute();
} catch (Exception ex) {
System.out.println("Unable to send! IOException in FileDrop call.");
ex.printStackTrace(System.out);
}
} else {
sendLabel.setText("Host not found");
}
});
sendProgressBar.setMaximum(100);
sendProgressBar.setMinimum(0);
receiveProgressBar.setMaximum(100);
receiveProgressBar.setMinimum(0);
server = new Server(receiveLabel);
//Server property change listener - listens for updates to progress bar
server.addPropertyChangeListener((PropertyChangeEvent evt1) -> {
if ("progress".equals(evt1.getPropertyName())) {
receiveProgressBar.setValue((Integer) evt1.getNewValue());
}
});
server.execute();
serverMulticast = new ServerMulticast();
serverMulticast.execute();
clientMulticast = new ClientMulticast(peerBox);
clientMulticast.execute();
}
...GENERATED CODE...
private void openButtonActionPerformed(java.awt.event.ActionEvent evt) {
Transfer guiObject = this;
SwingWorker openFile = new SwingWorker<Void, String>() {
@Override
protected Void doInBackground() throws Exception {
openButton.setEnabled(false);
fileChooser.setFileSelectionMode(FILES_AND_DIRECTORIES);
int returnVal = fileChooser.showOpenDialog(guiObject);
if (returnVal == JFileChooser.APPROVE_OPTION && peerBox.getSelectedIndex() != -1) {
File[] fileArray = fileChooser.getSelectedFiles();
client = new Client(sendLabel, fileArray, (Peer) peerBox.getSelectedItem());
//Client property change listener - listens for updates to progress bar
client.addPropertyChangeListener((PropertyChangeEvent evt1) -> {
if ("progress".equals(evt1.getPropertyName())) {
sendProgressBar.setValue((Integer) evt1.getNewValue());
}
});
client.execute();
//block this swingworker until client worker is done sending
while(!client.isDone()) { }
}
openButton.setEnabled(true);
return null;
}
};
openFile.execute();
}
/**
* @param args the command line arguments
* @throws java.lang.ClassNotFoundException
* @throws java.lang.InstantiationException
* @throws java.lang.IllegalAccessException
* @throws javax.swing.UnsupportedLookAndFeelException
*/
public static void main(String args[]) throws ClassNotFoundException, InstantiationException, IllegalAccessException, UnsupportedLookAndFeelException {
System.setProperty("apple.laf.useScreenMenuBar", "true");
System.setProperty("com.apple.mrj.application.apple.menu.about.name", "Transfer");
UIManager.setLookAndFeel(javax.swing.UIManager.getSystemLookAndFeelClassName());
/* Create and display the form */
java.awt.EventQueue.invokeLater(() -> {
try {
new Transfer().setVisible(true);
} catch (IOException ex) {
Logger.getLogger(Transfer.class.getName()).log(Level.SEVERE, null, ex);
}
});
}
// Variables declaration - do not modify
private javax.swing.JPanel backgroundPanel;
private javax.swing.JFileChooser fileChooser;
private javax.swing.JButton openButton;
private javax.swing.JComboBox<Peer> peerBox;
private javax.swing.JLabel receiveHeaderLabel;
private javax.swing.JLabel receiveLabel;
private javax.swing.JPanel receivePanel;
private javax.swing.JProgressBar receiveProgressBar;
private javax.swing.JLabel sendHeaderLabel;
private javax.swing.JLabel sendLabel;
private javax.swing.JPanel sendPanel;
private javax.swing.JProgressBar sendProgressBar;
// End of variables declaration
}
ClientMulticast.java
Sends out multicast packets, receives responses, and modifies the GUI accordingly. This class is where the errors were.
package transfer;
import java.io.IOException;
import java.lang.reflect.InvocationTargetException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.SocketException;
import java.net.SocketTimeoutException;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.TimeUnit;
import javax.swing.JComboBox;
import javax.swing.SwingUtilities;
import javax.swing.SwingWorker;
public class ClientMulticast extends SwingWorker<Void, Peer> {
private boolean peerPreviouslyFound;
private byte[] sendData, receiveData;
private final DatagramSocket mSocket;
private DatagramPacket receivePacket;
private final JComboBox<Peer> peerBox;
private Peer peer;
private ArrayList<Peer> peerList;
public ClientMulticast(JComboBox<Peer> peerBox) throws SocketException {
peerList = new ArrayList<>();
this.peerBox = peerBox;
mSocket = new DatagramSocket();
mSocket.setBroadcast(true);
mSocket.setSoTimeout(300);
sendData = "CLIENT_MSG".getBytes();
}
@Override
protected Void doInBackground() throws IOException, InterruptedException, InvocationTargetException {
while (true) {
try {
receiveData = new byte[1024];
receivePacket = new DatagramPacket(receiveData, receiveData.length);
peerPreviouslyFound = false;
//send broadcast message
mSocket.send(new DatagramPacket(sendData, sendData.length, InetAddress.getByName("239.255.255.255"), 8888));
//receive response
mSocket.receive(receivePacket);
//don't have to worry about responses from local host because
//server rejects multicast packets from local host
peer = new Peer(receivePacket.getAddress(), receivePacket.getPort(), System.currentTimeMillis());
for (Peer peerList1 : peerList) {
if (peerList1.getIPAddress().equals(peer.getIPAddress())) {
peerList1.setTimestamp(System.currentTimeMillis());
peerPreviouslyFound = true;
break;
}
}
//add to peer list only if response is valid, not from local host, and not previously received from this host
if ("SERVER_RESPONSE".equalsIgnoreCase(new String(receivePacket.getData()).trim())
&& !peerPreviouslyFound) {
//publish(peer);
peerBox.addItem(peer);
peerList.add(peer);
}
for (int i = 0; i < peerList.size(); i++) {
//if peer is greater than 5 seconds old, remove from list
if (peerList.get(i).getTimestamp() + 5000 < System.currentTimeMillis()) {
peerBox.removeItemAt(i);
peerList.remove(i);
}
}
} catch (SocketTimeoutException ex) {
for (int i = 0; i < peerList.size(); i++) {
//if peer is greater than 5 seconds old, remove from list
if (peerList.get(i).getTimestamp() + 5000 < System.currentTimeMillis()) {
final int j = i;
SwingUtilities.invokeAndWait(() -> {
peerBox.removeItemAt(j);
peerList.remove(j);
});
}
}
}
TimeUnit.MILLISECONDS.sleep(500);
}//end while
}
@Override
protected void process(List<Peer> p) {
peerBox.addItem(p.get(p.size() - 1));
peerList.add(p.get(p.size() - 1));
}
}
I'm pretty sure the issue is that when the FileDrop object within the UI constructor tries to execute the Client SwingWorker via client.execute(), there's a large delay. I can debug it, but it doesn't show any issues. Also, I know the issue can't be with my socket.connect() call within Client(), because the print statement immediately before socket.connect() doesn't print until the program resumes from wherever it's hanging. Any ideas? I'm completely lost.
-- EDIT: All the code above now works as desired; the bugs noted here have been solved. If anyone wants the full source, I'll gladly share it.
I was passing my combo box, peerBox, to my ClientMulticast class and modifying that structure directly, storing peers within it. This was blocking the interface thread, because the ClientMulticast class was constantly accessing/changing the combo box.
I changed the code to use the combo box ONLY to display values in the GUI; an ArrayList now stores the discovered peers, seeded from the combo box's values when the ClientMulticast constructor runs. When a peer is discovered, the combo box is updated via the publish() and process() methods of the ClientMulticast class. Any code placed within process() is scheduled to execute on the EDT.
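A minimal sketch of that pattern, adapted from the ClientMulticast code above: the direct peerBox mutation in doInBackground() becomes a publish() call, and the Swing update moves into process(), which SwingWorker runs on the EDT:
// inside doInBackground(): never touch Swing components here
if ("SERVER_RESPONSE".equalsIgnoreCase(new String(receivePacket.getData()).trim())
        && !peerPreviouslyFound) {
    publish(peer); // hand the peer over to process() on the EDT
}
@Override
protected void process(List<Peer> chunks) {
    // runs on the EDT, so mutating the combo box is safe here
    for (Peer p : chunks) {
        peerList.add(p);
        peerBox.addItem(p);
    }
}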

ServerSocket doesn't work with try-with-resources?

So we're fooling around with ServerSockets in class, making a very simple HTTP server that takes a request, does nothing with it, and responds with a 200 OK followed by some HTML content.
I've been trying to figure out this problem for two days, and I haven't been able to get to grips with it, and neither has my teacher. I've come to think it is a problem with closing the server, for some odd reason. I've fixed the problem, but would just like to know why it happened in the first place.
Here are three snippets:
HttpServer.class:
package httpserver;
import java.io.Closeable;
import java.io.IOException;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.Scanner;
public class HttpServer implements Closeable {
public static final int PORT = 80;
public static final int BACKLOG = 1;
public static final String ROOT_CATALOG = "C:/HttpServer/";
private ServerSocket server;
private Socket client;
private Scanner in;
private PrintWriter out;
private String request;
public HttpServer() throws IOException {
server = new ServerSocket(PORT, BACKLOG);
}
public Socket accept() throws IOException {
client = server.accept();
in = new Scanner(client.getInputStream());
out = new PrintWriter(client.getOutputStream());
return client;
}
public void recieve() {
request = in.nextLine();
System.out.println(request);
}
public void respond(final String message) {
out.print(message);
out.flush();
}
#Override
public void close() throws IOException {
if(!server.isClosed()) {
client = null;
server = null;
}
}
}
Main.class solution that works:
package httpserver;
import java.io.IOException;
import java.net.Socket;
public class Main {
public static void main(String[] args) throws IOException {
HttpServer server = new HttpServer();
Socket client;
while(true) {
client = server.accept();
server.recieve();
server.respond("HTTP/1.0 200 OK\r\n"
+ "Content-Type: text/html\r\n"
+ "\r\n"
+ "<html><body><b>hello..</b></body></html>");
client.close();
}
}
}
Main.class solution that doesn't work:
package httpserver;
import java.io.IOException;
public class Main {
public static void main(String[] args) {
try(HttpServer server = new HttpServer()) {
while (true) {
server.accept();
server.recieve();
server.respond("HTTP/1.0 200 OK\r\n"
+ "Content-Type: text/html\r\n"
+ "\r\n"
+ "<html><body><b>hello..</b></body></html>");
}
} catch(IOException ex) {
System.out.println("We have a problem: " + ex.getMessage());
}
}
}
I could imagine it has something to do with not closing the client socket after each loop iteration. But even so, it should at least go through once before bugging up in that case. I really can't see what the problem is supposed to be.
No error messages, nothing...
You do not specify any Content-Length when sending the HTTP response, so the browser does not know when to stop waiting for more data. See How to know when HTTP-server is done sending data for more info.
In the working example you closed the client socket, which tells the browser there is no more data; for your purposes this might be enough if you don't need the connection afterwards.
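A minimal sketch of the alternative fix, applied to the try-with-resources version: send a Content-Length header so the browser knows where the body ends, without having to close the client socket:
String body = "<html><body><b>hello..</b></body></html>";
byte[] bytes = body.getBytes(java.nio.charset.StandardCharsets.UTF_8);
server.respond("HTTP/1.0 200 OK\r\n"
        + "Content-Type: text/html\r\n"
        + "Content-Length: " + bytes.length + "\r\n"
        + "\r\n"
        + body);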
