I have one input stream that periodically receives data. One of my threads (let's call it threadA) reads every message from the stream and makes sure the data is OK, and throws an error otherwise. My other thread (let's call it threadB) needs to read a few specific messages and then process them. As of now I have threadA store the important messages in a global variable, and threadB read the messages from that global variable.
Is there any way to allow for two threads to read from the same source to avoid this?
edit: the data coming in are responses to commands threadB issued. My issue is that threadB needs the replies from certain commands, which are issued in no particular pattern, but it does not need all the replies.
You could probably create a thread-safe InputStream or a wrapper, and if the stream supports mark/reset you could even have two readers consume the data in parallel. However, you'd have to handle situations where one thread reads faster than the other, which makes mark/reset unusable or forces you to skip data. There's so much involved that I doubt you'll want to bother with all this.
I'd suggest you keep your basic setup but get rid of the global variables, e.g. by using the observer pattern, passing a reference to a shared store to both threads, etc.
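For instance, a minimal sketch of passing a shared, thread-safe store to both threads instead of a global variable. The ImportantMessage class and the use of a BlockingQueue here are just one possible choice for illustration, not something from your code:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical message type standing in for whatever threadB needs.
class ImportantMessage { }

public class SharedStoreExample {
    public static void main(String[] args) {
        // Shared, thread-safe store handed to both threads instead of a global variable.
        BlockingQueue<ImportantMessage> store = new LinkedBlockingQueue<>();

        Thread threadA = new Thread(() -> {
            // ... read and validate every message from the stream ...
            // when a message threadB cares about arrives:
            store.offer(new ImportantMessage());
        });

        Thread threadB = new Thread(() -> {
            try {
                // blocks until threadA has put something in the store
                ImportantMessage msg = store.take();
                // ... process msg ...
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        threadA.start();
        threadB.start();
    }
}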
I'd like my program to get a file, and then create 4 files based on its byte content.
Working with only the main thread, I just create one DataInputStream and do my thing sequentially.
Now, I'm interested in making my program concurrent. Maybe I can have four threads - one for each file to be created.
I don't want to read the file's bytes into memory all at once, so my threads will need to query the DataInputStream constantly to stream the bytes using read().
What is not clear to me is, should my 4 threads call read() on the same DataInputStream, or should each one have their own separate stream to read from?
I don't think this is a good idea. See http://download.java.net/jdk7/archive/b123/docs/api/java/io/DataInputStream.html
DataInputStream is not necessarily safe for multithreaded access. Thread safety is optional and is the responsibility of users of methods in this class.
Assuming you want all of the data in each of your four new files, each thread should create its own DataInputStream.
If the threads share a single DataInputStream, at best each thread will get some random quarter of the data. At worst, you'll get a crash or data corruption due to multithreaded access to code that is not thread safe.
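A minimal sketch of that suggestion, assuming a hypothetical source file input.bin and leaving the per-file filtering and writing as placeholders:

import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;

public class SeparateStreams {
    public static void main(String[] args) {
        for (int i = 0; i < 4; i++) {
            final int fileIndex = i;
            new Thread(() -> {
                // Each thread opens its own stream, so there is no shared state to guard.
                try (DataInputStream in = new DataInputStream(new FileInputStream("input.bin"))) {
                    int b;
                    while ((b = in.read()) != -1) {
                        // ... decide whether this byte belongs in output file fileIndex
                        //     and, if so, write it there ...
                    }
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }).start();
        }
    }
}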
If you want to read data from one file into four separate ones, you should not share the DataInputStream. You can, however, wrap that stream and add functionality that makes it thread-safe.
For example you may want to read in a chunk of data from your DataInputStream and cache that small chunk. When all 4 threads have read the chunk you can dispose of it and continue reading. You would never have to load the complete file into memory. You would only have to load a small amount.
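One possible sketch of that idea (SharedChunkReader, CHUNK_SIZE and the consumer callback are invented for illustration; it assumes all four threads want to see every chunk and advance in lockstep):

import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Arrays;
import java.util.concurrent.CyclicBarrier;
import java.util.function.Consumer;

// Hypothetical wrapper: four workers consume the same cached chunk of the shared
// DataInputStream and advance together, so the whole file never sits in memory.
public class SharedChunkReader {
    private static final int CHUNK_SIZE = 8192;   // arbitrary chunk size

    private final DataInputStream in;
    private volatile byte[] chunk;                // the currently cached chunk (null = end of file)

    // The barrier action runs once per cycle, after all four threads finish the current chunk.
    private final CyclicBarrier barrier = new CyclicBarrier(4, this::loadNextChunk);

    public SharedChunkReader(String path) throws IOException {
        this.in = new DataInputStream(new FileInputStream(path));
        loadNextChunk();                          // prime the first chunk
    }

    private void loadNextChunk() {
        try {
            byte[] buf = new byte[CHUNK_SIZE];
            int n = in.read(buf);
            chunk = (n <= 0) ? null : Arrays.copyOf(buf, n);
        } catch (IOException e) {
            chunk = null;
        }
    }

    // Each of the four threads runs this loop with its own per-file logic.
    public void work(Consumer<byte[]> consumer) throws Exception {
        while (chunk != null) {
            consumer.accept(chunk);               // every thread sees the same chunk
            barrier.await();                      // wait for the others; the barrier action loads the next chunk
        }
    }
}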
If you look at the docs for DataInputStream, it is a FilterInputStream, which means the read operations are delegated to another InputStream. Suppose the underlying stream is a FileInputStream; on most platforms, concurrent reads from the same file are supported.
So in your case, you should open four different FileInputStreams, wrap them in four DataInputStreams, and use one in each thread. The read operations will not interfere with each other.
Short answer is no.
Longer answer: have a single thread read the DataInputStream, and put the data into one of four Queues, one per output file. Decide which Queue based upon the byte content.
Have four threads, each one reading from a Queue, that write to the output files.
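A rough sketch of that arrangement, with a hypothetical chooseQueue() standing in for whatever logic inspects the byte content, and invented input/output file names:

import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ReaderAndWriters {
    private static final int EOF = -1;   // sentinel telling a writer no more data is coming

    public static void main(String[] args) throws IOException, InterruptedException {
        List<BlockingQueue<Integer>> queues = new ArrayList<>();

        // One writer thread per output file, each draining its own queue.
        for (int i = 0; i < 4; i++) {
            BlockingQueue<Integer> queue = new LinkedBlockingQueue<>();
            queues.add(queue);
            String outName = "out" + i + ".bin";   // hypothetical output names
            new Thread(() -> {
                try (FileOutputStream out = new FileOutputStream(outName)) {
                    int b;
                    while ((b = queue.take()) != EOF) {
                        out.write(b);
                    }
                } catch (IOException | InterruptedException e) {
                    e.printStackTrace();
                }
            }).start();
        }

        // Single reader (here the main thread): classify each byte and hand it to one queue.
        try (DataInputStream in = new DataInputStream(new FileInputStream("input.bin"))) {
            int b;
            while ((b = in.read()) != -1) {
                queues.get(chooseQueue(b)).put(b);
            }
        } finally {
            for (BlockingQueue<Integer> q : queues) {
                q.put(EOF);
            }
        }
    }

    // Placeholder: decide which output file a byte belongs to.
    private static int chooseQueue(int b) {
        return b % 4;
    }
}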
I am building a server in Java that communicates with several clients at the same time. The initial approach we had is that the server listens for connections from the clients; once a connection is received and a socket is created, a new thread is spawned to handle the communication with that client, that is, read the request with an ObjectInputStream, do the desired operation (fetch data from the DB, update it, etc.), and send back a response to the client (if needed), while the server itself goes back to listening for more connections.
This works fine for the time being; however, this approach is not really scalable. It works great for a small number of clients connected at the same time, but since every client spawns another thread, what will happen when there are too many clients connected at once?
So my next idea was to maintain a list of sorts that holds all connected clients (the socket object and some extra info), use a thread pool to iterate through them and read anything they sent, and if a message was received, put it in a queue for execution by another thread pool of worker threads; once a worker has finished its task, send a response if one is required.
The two latter steps are pretty trivial to implement. The problem is that with the original thread-per-client implementation I use ObjectInputStream.readObject() to read the message, and this method blocks until there is something to read, which is fine for that approach, but I can't use the same thing for the new approach, since if I block on one socket I will never get to the ones that are further down the list.
So I need a way to check if I have anything to read before I call readObject(), so far I tried the following solutions:
Solution 1:
Use ObjectInputStream.available() to check if there is anything available to read. This approach failed since the method seems to always return 0, regardless of whether there is an object in the stream or not, so it does not help at all.
Solution 2:
Use PushbackInputStream to check for the existence of the first unread byte in the stream; if it exists, push it back and read the object using the ObjectInputStream, and if it doesn't, move on:
boolean available;
int b = pushbackinput.read();
if (b == -1) {
    available = false;
} else {
    pushbackinput.unread(b);
    available = true;
}
if (available) {
    Object message = objectinput.readObject();
    // continue with what you need to do with that object
}
This turned out to be useless too, since read() also blocks if there is no input to read. It seems to return -1 only if the stream has been closed. If the stream is still open but empty, it just blocks, so this is no different from simply calling ObjectInputStream.readObject().
Can anyone suggest an approach that will actually work?
This is a good question, and you've done some homework, but it involves going through some history to get things right. Note that your issue actually has more to do with the socket-level communication than with the ObjectInputStream:
The easiest way to do things in the past was to have a separate thread per socket. This was scalable to a point but threads were expensive and slow to create.
In response, for large systems, people created thread pools and would service the sockets on threads when there was work to do. This was complicated.
The Java platform then added the java.nio package, which introduced the Selector together with non-blocking IO. This created a reliable (although sometimes confusing) way to service multiple sockets with fewer threads. In your case, though, it would not help much, because you want to know when a full object is ready to be read, not just when some data is available.
In the interim the 'landscape' changed, and Java is now able to create and manage threads much more efficiently. 'Current' thinking is that it is better, faster, and easier to allocate a single thread per socket again; see Java thread per connection model vs NIO.
In your case, I would suggest you stick with the thread-per-socket model. Java can scale to handle more threads than you will have sockets, so you'll be fine.
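For concreteness, a minimal sketch of the thread-per-socket model for this case (the port number and the handleMessage placeholder are made up, and error handling is reduced to the bare minimum):

import java.io.IOException;
import java.io.ObjectInputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class ThreadPerSocketServer {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(9000)) {   // hypothetical port
            while (true) {
                Socket client = server.accept();
                new Thread(() -> handleClient(client)).start();
            }
        }
    }

    private static void handleClient(Socket client) {
        try (ObjectInputStream in = new ObjectInputStream(client.getInputStream())) {
            while (true) {
                // Blocking here is fine: only this client's thread waits.
                Object message = in.readObject();
                handleMessage(message);
            }
        } catch (IOException | ClassNotFoundException e) {
            // client disconnected or sent something unreadable
        }
    }

    private static void handleMessage(Object message) {
        // ... fetch/update the DB, send a response if needed ...
    }
}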
Is there a way to immediately print the message received from the client without using an infinite loop to check whether the input stream is empty or not?
Because I found that using an infinite loop consumes a lot of system resources, which makes the program run very slowly. And we also have to do the same (infinite loop) on the client side to print the message on the screen in real time.
I'm using Java.
You should be dealing with the input stream in a separate Thread - and let it block waiting for input. It will not use any resources while it blocks. If you're seeing excessive resource usage while doing this sort of thing, you're doing it wrong.
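For instance, a minimal sketch of a dedicated reader thread that simply blocks on the socket and prints each line as it arrives (this assumes a line-oriented text protocol; for other formats you'd block on the appropriate read call instead):

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.Socket;

public class ReaderThreadExample {
    public static void start(Socket socket) throws IOException {
        BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
        Thread reader = new Thread(() -> {
            try {
                String line;
                // readLine() blocks until a full line arrives; no polling, no busy loop.
                while ((line = in.readLine()) != null) {
                    System.out.println(line);
                }
            } catch (IOException e) {
                // socket closed or connection lost
            }
        });
        reader.setDaemon(true);
        reader.start();
    }
}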
I think you can just put your loop in a different thread and have it sleep a bit (maybe for half a second?) between iterations. It would still be an infinite loop, but it would not consume nearly as many resources.
Why don't you change your architecture a little bit to accommodate WebSockets? Check out Socket.IO. It is a cross-browser WebSockets enabler.
You will have to write controllers (servlets for example in java) that push data to the client. This does not follow the request-response architecture.
You can also architect it so that a "push servlet" triggers a "request" from the client to obtain the "response".
Since your question talks about Java, and if you are interested in WebSockets, check this link out.
If you're using Sockets, which you should be for any networking, then you can wrap the socket's input stream (obtained with socket.getInputStream()) in a DataInputStream and do the following:
public DataInputStream streamIn;
public Socket soc;
// initialize the socket, etc...
// Socket.getInputStream() returns an InputStream, so wrap it in a DataInputStream:
streamIn = new DataInputStream(soc.getInputStream());

public String getInput() throws IOException {
    // readUTF() already returns a String, so no cast is needed
    return streamIn.readUTF();
}
streamIn.readUTF() blocks until data is available, meaning you don't have to loop, and threading will let you do other processing while you wait for data.
Look here for more information on DataInputStream and what you can do with it: http://docs.oracle.com/javase/6/docs/api/java/io/DataInputStream.html
A method that does not require threads would involve subclassing the input stream and adding a notify type method. When called this method would alert any interested objects (i.e. objects that would have to change state due to the additions to the stream) that changes have been made. These interested objects could then respond in anyway that is desired.
Objects writing to the buffer would do their normal writing, and afterward would call the notify() method on the input stream, informing all interested objects of the change.
Edit: This might require subclassing more than a couple of classes and so could involve a lot of code changes. Without knowing more about your design you would have to decide if the implementation is worth the effort.
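As a rough illustration only (the answer above doesn't spell out the design, so NotifyingInputStream, DataListener and notifyDataAvailable() are invented names), the wrapper could look something like this:

import java.io.FilterInputStream;
import java.io.InputStream;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Hypothetical sketch: an InputStream wrapper that lets writers announce
// "new data is available" to any registered, interested objects.
public class NotifyingInputStream extends FilterInputStream {
    public interface DataListener {
        void onDataAvailable();
    }

    private final List<DataListener> listeners = new CopyOnWriteArrayList<>();

    public NotifyingInputStream(InputStream in) {
        super(in);
    }

    public void addListener(DataListener listener) {
        listeners.add(listener);
    }

    // Writers call this after appending to the underlying buffer/pipe.
    public void notifyDataAvailable() {
        for (DataListener listener : listeners) {
            listener.onDataAvailable();
        }
    }
}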
There are two approaches that avoid busy loops / sleeps.
Use a thread for each client connection, and simply have each thread call read. This blocks the thread until the client sends some data, but that's no problem because it doesn't block the threads handling other clients.
Use Java NIO channel selectors. These allow a thread to wait until one of set of channels (in this case sockets) has data to be read. There is a section of the Oracle Java Tutorials on this.
Of these two approaches, the second is more efficient in terms of overall resource usage. (The thread-per-client approach uses a lot of memory for thread stacks, and CPU for thread-switching overheads.)
Busy loops that repeatedly call (say) InputStream.available() to see if there is any input are horribly inefficient. You can make them less inefficient by slowing down the polling with Thread.sleep(...) calls, but this has the side effect of making the service less responsive. For instance, if you add a 1 second sleep between each set of polls, the effect that each client will see is that the server typically delays 1 second before processing each request. Assuming that those requests are keystrokes and the responses echo them, the net result is a horribly laggy service.
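To make the selector-based approach (the second one above) concrete, here is a minimal sketch with an arbitrary port and the actual handling of received bytes left out:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class SelectorExample {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(9000));   // hypothetical port
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buffer = ByteBuffer.allocate(1024);
        while (true) {
            selector.select();                       // blocks until some channel is ready
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    buffer.clear();
                    int n = client.read(buffer);     // non-blocking: returns whatever is available
                    if (n == -1) {
                        key.cancel();
                        client.close();
                    } else {
                        // ... flip() the buffer and handle the received bytes ...
                    }
                }
            }
        }
    }
}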
Is it possible to have one thread write to the OutputStream of a Java Socket, while another reads from the socket's InputStream, without the threads having to synchronize on the socket?
Sure. The exact situation you're describing shouldn't be a problem (reading and writing simultaneously).
Generally, the reading thread will block if there's nothing to read, and might timeout on the read operation if you've got a timeout specified.
Since the input stream and the output stream are separate objects within the Socket, the only thing you might concern yourself with is, what happens if you had 2 threads trying to read or write (two threads, same input/output stream) at the same time? The read/write methods of the InputStream/OutputStream classes are not synchronized. It is possible, however, that if you're using a sub-class of InputStream/OutputStream, that the reading/writing methods you're calling are synchronized. You can check the javadoc for whatever class/methods you're calling, and find that out pretty quick.
Yes, that's safe.
If you wanted more than one thread reading from the InputStream you would have to be more careful (assuming you are reading more than one byte at a time).
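For illustration, a small sketch of one thread writing to a socket's OutputStream while another reads from its InputStream (the host, port, and readUTF/writeUTF message format are placeholders):

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.net.Socket;

public class FullDuplexExample {
    public static void main(String[] args) throws IOException {
        Socket socket = new Socket("example.com", 9000);   // hypothetical endpoint

        Thread reader = new Thread(() -> {
            try (DataInputStream in = new DataInputStream(socket.getInputStream())) {
                while (true) {
                    // blocks until data arrives; no synchronization on the socket needed
                    System.out.println("received: " + in.readUTF());
                }
            } catch (IOException e) {
                // connection closed
            }
        });

        Thread writer = new Thread(() -> {
            try (DataOutputStream out = new DataOutputStream(socket.getOutputStream())) {
                for (int i = 0; i < 10; i++) {
                    out.writeUTF("message " + i);
                    out.flush();
                }
            } catch (IOException e) {
                // connection closed
            }
        });

        reader.start();
        writer.start();
    }
}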
I can understand why network apps would use multiplexing (to not create too many threads), and why programs would use async calls for pipelining (more efficient). But I don't understand the efficiency purpose of AsynchronousFileChannel.
Any ideas?
It's a channel that you can use to read files asynchronously, i.e. the I/O operations are done on a separate thread, so that the thread you're calling it from can do other things while the I/O operations are happening.
For example: The read() methods of the class return a Future object to get the result of reading data from the file. So, what you can do is call read(), which will return immediately with a Future object. In the background, another thread will read the actual data from the file. Your own thread can continue doing things, and when it needs the read data, you call get() on the Future object. That will then return the data (if the background thread hasn't completed reading the data, it will make your thread block until the data is ready). The advantage of this is that your thread doesn't have to wait the whole length of the read operation; it can do some other things until it really needs the data.
See the documentation.
Note that AsynchronousFileChannel will be a new class in Java SE 7, which is not released yet.
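A small sketch of what the Future-returning read() looks like in practice (the file path and buffer size are arbitrary):

import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousFileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.Future;

public class AsyncReadExample {
    public static void main(String[] args) throws Exception {
        try (AsynchronousFileChannel channel = AsynchronousFileChannel.open(
                Paths.get("data.bin"), StandardOpenOption.READ)) {   // hypothetical file
            ByteBuffer buffer = ByteBuffer.allocate(4096);

            // Returns immediately; the read happens on a background thread.
            Future<Integer> result = channel.read(buffer, 0);

            // ... do other useful work here while the read is in flight ...

            int bytesRead = result.get();   // blocks only if the read hasn't finished yet
            System.out.println("read " + bytesRead + " bytes");
        }
    }
}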
I've just come across another, somewhat unexpected reason for using AsynchronousFileChannel. When performing random record-oriented writes across large files (exceeding physical memory so caching isn't helping everything) on NTFS, I find that AsynchronousFileChannel performs over twice as many operations, in single-threaded mode, versus a normal FileChannel.
My best guess is that because the asynchronous io boils down to overlapped IO in Windows 7, the NTFS file system driver is able to update its own internal structures faster when it doesn't have to create a sync point after every call.
I micro-benchmarked against RandomAccessFile to see how it would perform (results are very close to FileChannel, and still half the performance of AsynchronousFileChannel).
Not sure what happens with multi-threaded writes. This is on Java 7, on an SSD (the SSD is an order of magnitude faster than magnetic, and another order of magnitude faster on smaller files that fit in memory).
Will be interesting to see if the same ratios hold on Linux.
The main reason I can think of to use asynchronous IO is to better utilize the processor. Imagine you have some application which does some sort of processing on a file. And also let's assume you can process the data contained in the file in chunks. If you don't make use of asynchronous IO then your application will probably behave something like this:
1. Read a block of data. No processor utilization at this point, as you're blocked waiting for the data to be read.
2. Process the data you just read. At this point your application starts consuming CPU cycles as it processes the data.
3. If there is more data to read, go to #1.
The processor utilization will go up, then drop to zero, then go up again, then drop to zero, and so on. Ideally you don't want to be idle if you want your application to be efficient and to process the data as fast as possible. A better approach would be:
1. Issue an async read.
2. When the read completes, issue the next async read and then process the data.
The first step is the bootstrapping. You have no data yet so you have to issue a read. From then on, when you get notified a read has completed you issue another async read and then process the data. The benefit here is that by the time you finish processing the chunk of data the next read has probably finished, so you always have data available to process and thus you're more efficiently using the processor. If your processing finishes before the read has finished you might need to issue multiple asynchronous reads so that you have more data to process.
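A sketch of that read-then-process overlap using AsynchronousFileChannel with a CompletionHandler (the chunk size, file name, and processChunk placeholder are invented for the example):

import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousFileChannel;
import java.nio.channels.CompletionHandler;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class PipelinedReader {
    private static final int CHUNK = 64 * 1024;

    public static void main(String[] args) throws Exception {
        AsynchronousFileChannel channel = AsynchronousFileChannel.open(
                Paths.get("data.bin"), StandardOpenOption.READ);   // hypothetical file
        readChunk(channel, 0);
        Thread.sleep(10_000);   // crude: keep the demo alive while reads complete
    }

    private static void readChunk(AsynchronousFileChannel channel, long position) {
        ByteBuffer buffer = ByteBuffer.allocate(CHUNK);
        channel.read(buffer, position, buffer, new CompletionHandler<Integer, ByteBuffer>() {
            @Override
            public void completed(Integer bytesRead, ByteBuffer buf) {
                if (bytesRead == -1) {
                    return;                         // end of file
                }
                // Issue the next read first, so it overlaps with processing this chunk.
                readChunk(channel, position + bytesRead);
                buf.flip();
                processChunk(buf);
            }

            @Override
            public void failed(Throwable exc, ByteBuffer buf) {
                exc.printStackTrace();
            }
        });
    }

    private static void processChunk(ByteBuffer buf) {
        // ... CPU-bound work on the chunk goes here ...
    }
}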
Here's something no one has mentioned:
A plain FileChannel implements InterruptibleChannel, so it, as well as anything that uses it (such as the OutputStream returned by Files.newOutputStream()), has the unfortunate[1][2] behaviour that performing any blocking operation on it (e.g. read() or write()) from a thread in interrupted state will cause the channel itself to close with java.nio.channels.ClosedByInterruptException.
If this is a problem, using AsynchronousFileChannel instead is a possible alternative.
[1] http://bugs.java.com/bugdatabase/view_bug.do?bug_id=6608965
[2] https://bugs.openjdk.java.net/browse/JDK-4469683