How to eliminate race condition in Rox NIO tutorial - java

I've been using this tutorial for a simple file transfer client/server using socket IO. I changed the response handler to accept multiple reads as a part of one file, as I will be dealing with large files, potentially up to 500 MB. The tutorial didn't account for large server responses, so I'm struggling a bit, and I've created a race condition.
Here's the response handler code:
public class RspHandler {
    private byte[] rsp = null;

    public synchronized boolean handleResponse(byte[] rsp) {
        this.rsp = rsp;
        this.notify();
        return true;
    }

    public synchronized void waitForResponse() {
        while (this.rsp == null) {
            try {
                this.wait();
            } catch (InterruptedException e) {
            }
        }
        System.out.println("Received Response : " + new String(this.rsp));
    }

    public synchronized void waitForFile(String filename) throws IOException {
        String filepath = "C:\\a\\received\\" + filename;
        FileOutputStream fos = new FileOutputStream(filepath);
        while (waitForFileChunk(fos) != -1) {}
        fos.close();
    }

    private synchronized int waitForFileChunk(FileOutputStream fos) throws IOException {
        while (this.rsp == null) {
            try {
                this.wait();
            } catch (InterruptedException e) {
            }
        }
        fos.write(this.rsp);
        int length = this.rsp.length;
        this.rsp = null;
        if (length < NioClient.READ_SIZE) { // Probably a bad way to find the end of the file
            return -1;
        } else {
            return length;
        }
    }
}
The main thread of the program creates a RspHandler and passes it to a client created on a separate thread. The main thread tells the client to request a file, then tells the RspHandler to listen for a response. When the client reads from the server (it reads in chunks of about 1 KB right now), it calls the handleResponse(byte[] rsp) method, populating the rsp byte array.
Essentially, I'm not writing the received data to a file as fast as it comes in. I'm a bit new to threads, so I'm not sure how to get rid of this race condition. Any hints?

This is a classic producer/consumer problem. The most straightforward way to handle it is to use a BlockingQueue: the producer calls put(), the consumer calls take().
Note that using a BlockingQueue usually leads to the "how do I finish?" problem. The best way to handle that is the "poison pill" technique, where the producer puts a special sentinel value on the queue that signals to the consumer that there is no more data.
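For example, here is a minimal sketch of that approach applied to the file transfer. The FileChunkWriter class, the POISON_PILL sentinel, and the endOfFile() hook are illustrative assumptions, not part of the Rox tutorial's API.
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class FileChunkWriter {
    // Sentinel object marking end-of-file; reference equality is what matters.
    private static final byte[] POISON_PILL = new byte[0];

    private final BlockingQueue<byte[]> chunks = new LinkedBlockingQueue<>();

    // Producer side: called from the NIO client thread for every read.
    public void handleResponse(byte[] data) throws InterruptedException {
        // Copy because the client may reuse its read buffer.
        byte[] copy = new byte[data.length];
        System.arraycopy(data, 0, copy, 0, data.length);
        chunks.put(copy);
    }

    // Producer side: called once when the server signals end of file.
    public void endOfFile() throws InterruptedException {
        chunks.put(POISON_PILL);
    }

    // Consumer side: runs on the main (or a dedicated writer) thread.
    public void writeFile(String filepath) throws IOException, InterruptedException {
        try (FileOutputStream fos = new FileOutputStream(filepath)) {
            while (true) {
                byte[] chunk = chunks.take();   // blocks until a chunk arrives
                if (chunk == POISON_PILL) {
                    break;                      // producer says there is no more data
                }
                fos.write(chunk);
            }
        }
    }
}
Because the queue buffers chunks, the writer no longer has to keep up with the reader in lock-step, which removes the overwrite race on the shared rsp field.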

Related

A thread that runs without stopping

Is it possible in Java to create a thread that will always work in the background? The problem is that the application instance sometimes crashes with an OutOfMemoryException, so several instances are launched in parallel. Each instance does some work: it saves something to the database at the request of the user. The thread that should run constantly will look into the database and process the information it finds there.
Most likely a scheduler will not work, since the thread must be running constantly and wait for a signal to start working.
First of all, I suggest you investigate and resolve the OutOfMemoryException, because it is better to avoid those cases. You can instantiate a thread that waits for a request, executes it, and then returns to waiting for the next request. The thread implementation looks like this:
/** Squares integers. */
public class Squarer {
    private final BlockingQueue<Integer> in;
    private final BlockingQueue<SquareResult> out;

    public Squarer(BlockingQueue<Integer> requests,
                   BlockingQueue<SquareResult> replies) {
        this.in = requests;
        this.out = replies;
    }

    public void start() {
        new Thread(new Runnable() {
            public void run() {
                while (true) {
                    try {
                        // block until a request arrives
                        int x = in.take();
                        // compute the answer and send it back
                        int y = x * x;
                        out.put(new SquareResult(x, y));
                    } catch (InterruptedException ie) {
                        ie.printStackTrace();
                    }
                }
            }
        }).start();
    }
}
And for the caller method:
public static void main(String[] args) {
    BlockingQueue<Integer> requests = new LinkedBlockingQueue<>();
    BlockingQueue<SquareResult> replies = new LinkedBlockingQueue<>();

    Squarer squarer = new Squarer(requests, replies);
    squarer.start();

    try {
        // make a request
        requests.put(42);
        // ... maybe do something concurrently ...
        // read the reply
        System.out.println(replies.take());
    } catch (InterruptedException ie) {
        ie.printStackTrace();
    }
}
For more information, you can read the post I found here, which provided this example.
You basically need an infinitely running thread with some control.
I found this answer to be the simplest and it does what you need.
https://stackoverflow.com/a/2854890/11226302
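For reference, a minimal sketch of such a long-running worker; the Worker class name, the volatile running flag, and the poll interval are illustrative assumptions, not taken from the linked answer.
public class Worker implements Runnable {
    // Volatile so another thread can ask the worker to stop.
    private volatile boolean running = true;

    public void stop() {
        running = false;
    }

    @Override
    public void run() {
        while (running) {
            try {
                // Do the periodic work here, e.g. poll the database.
                doWork();
                // Sleep between iterations so we don't spin the CPU.
                Thread.sleep(5_000);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                running = false;
            }
        }
    }

    private void doWork() {
        // placeholder for the actual database polling/processing
    }
}

// Usage: start it as a daemon so it never blocks JVM shutdown.
// Thread t = new Thread(new Worker());
// t.setDaemon(true);
// t.start();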

AsynchronousSocketChannel not reading in entire message

When I run the below locally (on my own computer) it works fine - I can send messages to it and it reads them in properly. As soon as I put this on a remote server and send a message, only half the message gets read.
try {
    this.asynchronousServerSocketChannel = AsynchronousServerSocketChannel.open().bind(new InetSocketAddress(80));
    this.asynchronousServerSocketChannel.accept(null, new CompletionHandler<AsynchronousSocketChannel, Void>() {
        @Override
        public void completed(AsynchronousSocketChannel asynchronousSocketChannel, Void att) {
            try {
                asynchronousServerSocketChannel.accept(null, this);
                ByteBuffer byteBuffer = ByteBuffer.allocate(10485760);
                asynchronousSocketChannel.read(byteBuffer).get(120000, TimeUnit.SECONDS);
                byteBuffer.flip();
                System.out.println("request: " + Charset.defaultCharset().decode(byteBuffer).toString());
            } catch (CorruptHeadersException | CorruptProtocolException | MalformedURLException ex) {
            } catch (InterruptedException | ExecutionException | TimeoutException ex) {
            }
        }

        @Override
        public void failed(Throwable exc, Void att) {
        }
    });
} catch (IOException ex) {
}
I've looked around at other questions and tried some of the answers but nothing worked so far. I thought the cause might be that it's timing out due to it being slower over the network when it's placed remotely but increasing the timeout didn't resolve the issue. I also considered that the message might be too large but allocating more capacity to the ByteBuffer didn't resolve the issue either.
I believe your issue is with the Asynchronous nature of the code you're using. What you have is an open connection and you've called the asynchronous read method on your socket.
This reads n bytes from the channel where n is anything from 0 to the size of your available buffer.
I firmly believe that you have to read in a loop. With Java's asynchronous NIO, that means calling read again from the completed method of a CompletionHandler you create for the read (not the one you already have for accept), passing the AsynchronousSocketChannel along as an attachment.
This is the same pattern you're already using for accept, where you call accept again with this as the completion handler from inside the completed method.
It then becomes important to put an "escape" clause into that CompletionHandler: for instance, stop when the result is -1, when the ByteBuffer has received the number of bytes you're expecting, or when the final byte in the ByteBuffer is a message-termination byte you've agreed with the sending application.
The Java documentation goes so far as to say that the read method only transfers the bytes available at the time of invocation into dst.
In summary, the completed method of the read handler fires as soon as something has been written to the channel; if data is being streamed, you could get half of the bytes, so you need to keep reading until you're satisfied you've received everything that was sent.
Below is some code I knocked together on reading until the end, responding whilst reading, asynchronously. It, unlike myself, can talk and listen at the same time.
public class ReadForeverCompletionHandler implements CompletionHandler<Integer, Pair<AsynchronousSocketChannel, ByteBuffer>> {

    @Override
    public void completed(Integer bytesRead, Pair<AsynchronousSocketChannel, ByteBuffer> statefulStuff) {
        if (bytesRead != -1) {
            final ByteBuffer receivedByteBuffer = statefulStuff.getRight();
            final AsynchronousSocketChannel theSocketChannel = statefulStuff.getLeft();
            if (receivedByteBuffer.position() > 8) {
                // New buffer as existing buffer is in use
                ByteBuffer response = ByteBuffer.wrap(receivedByteBuffer.array());
                receivedByteBuffer.clear(); // safe as we've not got any outstanding or in-progress reads, yet
                theSocketChannel.read(receivedByteBuffer, statefulStuff, this); // Basically "WAIT" on more data
                Future<Integer> ignoredBytesWrittenResult = theSocketChannel.write(response);
            }
        } else {
            // connection was closed
            try {
                statefulStuff.getLeft().shutdownOutput(); // maybe
            } catch (IOException somethingBad) {
                // fire
            }
        }
    }

    @Override
    public void failed(Throwable exc, Pair<AsynchronousSocketChannel, ByteBuffer> attachment) {
        // shout fire
    }
}
The read is originally kicked off from the completed method of the handler for the original asynchronous accept on the server socket, like this:
public class AcceptForeverCompletionHandler implements CompletionHandler<AsynchronousSocketChannel, Pair<AsynchronousServerSocketChannel, Collection<AsynchronousSocketChannel>>> {

    private final ReadForeverCompletionHandler readForeverAndEverAndSoOn = new ReadForeverCompletionHandler();

    @Override
    public void completed(AsynchronousSocketChannel result, Pair<AsynchronousServerSocketChannel, Collection<AsynchronousSocketChannel>> statefulStuff) {
        statefulStuff.getLeft().accept(statefulStuff, this); // Accept more new connections please as we go
        statefulStuff.getRight().add(result); // Collect these in case we want to for some reason, I don't know
        ByteBuffer buffer = ByteBuffer.allocate(4098); // 4k seems a nice number
        result.read(buffer, Pair.of(result, buffer), readForeverAndEverAndSoOn); // Kick off the read "forever"
    }

    @Override
    public void failed(Throwable exc, Pair<AsynchronousServerSocketChannel, Collection<AsynchronousSocketChannel>> attachment) {
        // Shout fire
    }
}

Java Sockets listener

Would it be appropriate to use a thread to get objects received by a socket's InputStream and then add them to a ConcurrentLinkedQueue so that they can be accessed from the main thread without blocking at the poll-input loop?
private Queue<Packet> packetQueue = new ConcurrentLinkedQueue<Packet>();
private ObjectInputStream fromServer; // this is the input stream of the server

public void startListening()
{
    Thread listeningThread = new Thread()
    {
        public void run()
        {
            while (isConnected()) // check if the socket is connected to anything
            {
                try {
                    packetQueue.offer((Packet) fromServer.readObject()); // add packet to queue
                } catch (ClassNotFoundException e) {
                    e.printStackTrace();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    };
    listeningThread.start(); // start the thread
}

public Packet getNextPacket()
{
    return packetQueue.poll(); // get the next packet in the queue
}
It depends on what you need to do with the objects in the main thread.
If processing them takes some time, or if they will be used many times, then you can put them in a queue or in another class that holds them for you. But if processing is quick and you don't need the object afterwards, you don't really need a queue.
Whether a ConcurrentLinkedQueue is the right choice also depends: do you need ordering? Do you need to guarantee synchronization between the reader and the writer?
You could also use an asynchronous socket to handle many clients and process them in the same thread, or to collect the objects from them and put them in a queue for further processing.
But "would it be appropriate" is hard to answer, because it depends on what you need to do with these objects and how you handle them.

Java waiting for client's data pauses application cause of infinite loop

Hey, I am trying to make a console application that can receive and send messages to clients.
It will accept multiple clients and handle them.
To add a new client I do this in the run method:
@Override
public void run() {
    try {
        this.server = new ServerSocket(this.port);
        this.factory = new ServerFactory(this.server);
        System.out.println("Server runs and now waiting for clients");
        this.runClientHandler();
        Socket client;
        while ((client = this.server.accept()) != null) {
            this.handler.addClient(this.factory.createClient(client));
            System.out.println("done");
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
}
But "done" will never be printed because of this client's infinite loop for his message:
public void handleClient() throws IOException {
    byte[] buffer = new byte[5 * 1024];
    int read = -1;
    byte[] data;
    String message;
    while ((read = this.socket.getInputStream().read(buffer)) > -1) {
        data = new byte[read];
        System.arraycopy(buffer, 0, data, 0, read);
        message = new String(data, "UTF-8");
        System.out.println("Client message: " + message);
    }
}
The handleClient() method runs in Thread-2, called from addClient():
public void addClient(Client c) throws IOException {
    c.writeMessageStream("hey");
    System.out.println("New client!");
    this.clients.add(c);
    // prints here
    c.handleClient();
    // never reaches this..
}
How can I let the program keep executing while that while loop runs, without making a new thread for each client?
Check NIO Selectors. They are part of Java NIO in the JDK. Or you can use an out-of-the-box solution like Netty or (worse) Apache MINA.
Your code won't be able to handle multiple clients as it is, because it serves a client from the same thread that accepts connections. Generally, client connections should be handled by different threads, or you can use asynchronous IO so that multiple connections can be handled from a single thread. You could use Netty, which simplifies all of this. Here are some example programs: http://netty.io/5.0/xref/io/netty/example/telnet/package-summary.html
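For reference, a minimal sketch of the plain-NIO Selector approach suggested in the first answer above; the port number and buffer size are arbitrary choices for illustration.
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class SelectorServer {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(9000));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buffer = ByteBuffer.allocate(5 * 1024);
        while (true) {
            selector.select(); // blocks until at least one channel is ready
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {
                    // New client: register it for reads on the same selector.
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    buffer.clear();
                    int read = client.read(buffer);
                    if (read == -1) {
                        client.close(); // client disconnected
                    } else {
                        System.out.println("Client message: "
                                + new String(buffer.array(), 0, read, "UTF-8"));
                    }
                }
            }
        }
    }
}
One thread multiplexes the accept loop and all client reads, so no client's while loop ever blocks the server.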

Can I invoke XMPPConnection.sendPacket from concurrent threads?

Motivation
I want extra eyes to confirm that I am able to call XMPPConnection.sendPacket(Packet) concurrently. In my current code, I invoke a List of Callables (max 3) serially. Each Callable sends/receives XMPP packets on the one XMPPConnection. I plan to parallelize these Callables by spinning off multiple threads, with each Callable invoking sendPacket on the shared XMPPConnection without synchronization.
XMPPConnection
class XMPPConnection
{
    private boolean connected = false;

    public boolean isConnected()
    {
        return connected;
    }

    PacketWriter packetWriter;

    public void sendPacket( Packet packet )
    {
        if (!isConnected())
            throw new IllegalStateException("Not connected to server.");
        if (packet == null)
            throw new NullPointerException("Packet is null.");
        packetWriter.sendPacket(packet);
    }
}
PacketWriter
class PacketWriter
{
    public void sendPacket(Packet packet)
    {
        if (!done) {
            // Invoke interceptors for the new packet
            // that is about to be sent. Interceptors
            // may modify the content of the packet.
            processInterceptors(packet);
            try {
                queue.put(packet);
            }
            catch (InterruptedException ie) {
                ie.printStackTrace();
                return;
            }
            synchronized (queue) {
                queue.notifyAll();
            }
            // Process packet writer listeners. Note that we're
            // using the sending thread so it's expected that
            // listeners are fast.
            processListeners(packet);
        }
    }

    protected PacketWriter( XMPPConnection connection )
    {
        this.queue = new ArrayBlockingQueue<Packet>(500, true);
        this.connection = connection;
        init();
    }
}
What I conclude
Since the PacketWriter is using a BlockingQueue, there is no problem with my intention to invoke sendPacket from multiple threads. Am I correct?
Yes you can send packets from different threads without any problems.
Smack uses a blocking queue because what you can't do is let different threads write to the output stream at the same time. Smack takes responsibility for synchronizing the output stream by writing to it with per-packet granularity.
The pattern implemented by Smack is simply a typical producer/consumer concurrency pattern. You may have several producers (your threads) and only one consumer (Smack's PacketWriter running in its own thread).
Regards.
You haven't provided enough information here.
We don't know how the following are implemented:
processInterceptors
processListeners
Who reads / writes the 'done' variable? If one thread sets it to true, then all the other threads will silently fail.
From a quick glance, this doesn't look thread safe, but there's no way to tell for sure from what you've posted.
Other issues:
Why is PacketWriter a class member of XMPPConnection when it's only used in one method?
Why does PacketWriter have a XMPPConnection member var and not use it?
You might consider using a BlockingQueue if you can restrict to Java 5+.
From the Java API docs, with a minor change to use ArrayBlockingQueue:
class Producer implements Runnable {
    private final BlockingQueue queue;
    Producer(BlockingQueue q) { queue = q; }
    public void run() {
        try {
            while (true) { queue.put(produce()); }
        } catch (InterruptedException ex) { ... handle ... }
    }
    Object produce() { ... }
}

class Consumer implements Runnable {
    private final BlockingQueue queue;
    Consumer(BlockingQueue q) { queue = q; }
    public void run() {
        try {
            while (true) { consume(queue.take()); }
        } catch (InterruptedException ex) { ... handle ... }
    }
    void consume(Object x) { ... }
}

class Setup {
    void main() {
        BlockingQueue q = new ArrayBlockingQueue(100); // ArrayBlockingQueue requires a capacity
        Producer p = new Producer(q);
        Consumer c1 = new Consumer(q);
        Consumer c2 = new Consumer(q);
        new Thread(p).start();
        new Thread(c1).start();
        new Thread(c2).start();
    }
}
For your usage you'd have your real sender (holder of the actual connection) be the Consumer, and packet preparers/senders be the producers.
An interesting additional thought is that you could use a PriorityBlockingQueue to allow flash override XMPP packets that are sent before any other waiting packets.
Also, Glen's points on the design are good points. You might want to take a look at the Smack API (http://www.igniterealtime.org/projects/smack/) rather than creating your own.
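A minimal sketch of the PriorityBlockingQueue idea mentioned above; the PrioritizedPacket wrapper and the priority constants are illustrative assumptions, not part of Smack.
import java.util.Comparator;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.PriorityBlockingQueue;

class PrioritizedPacket {
    // Lower number = higher priority, so "flash override" packets jump the queue.
    static final int FLASH_OVERRIDE = 0;
    static final int ROUTINE = 10;

    final int priority;
    final Object packet; // would be a Smack Packet in real code

    PrioritizedPacket(int priority, Object packet) {
        this.priority = priority;
        this.packet = packet;
    }
}

class PriorityQueueExample {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<PrioritizedPacket> queue = new PriorityBlockingQueue<>(
                500, Comparator.comparingInt((PrioritizedPacket p) -> p.priority));

        queue.put(new PrioritizedPacket(PrioritizedPacket.ROUTINE, "routine message"));
        queue.put(new PrioritizedPacket(PrioritizedPacket.FLASH_OVERRIDE, "urgent message"));

        // The consumer (the writer thread) takes the urgent packet first.
        System.out.println(queue.take().packet); // urgent message
        System.out.println(queue.take().packet); // routine message
    }
}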
