Java producer/consumer threads for monitoring audio

I'm making a DAW in Java -- actually it's more basic than that; I modelled it after an old Tascam 4-track recorder I once owned. I'm trying to monitor audio while recording with as little latency (delay) between the two as possible. If I write the audio bytes in the same thread I'm reading them in, there's a significant amount of latency (if you want to see the code I have I'll post it, but it seemed irrelevant since I think it needs to be rewritten). What I had been thinking of doing is using producer and consumer threads with a queue to store chunks of bytes in between. So my producer thread would read bytes from a TargetDataLine and store them in a queue, probably using a method that returns the number of bytes read so I can check for EOF in my while loop. Then a concurrent thread would take the chunks of bytes stored in the queue (when there are bytes to be written) and write them to a SourceDataLine. My thought is that two threads running simultaneously will be able to write the bytes almost at the same time they're read, or at least do better than what I have now, but I want to know how other people have solved this problem.
Also, I would need to make sure my consumer thread waits if there are no bytes in the queue and is notified when bytes are added so it can start writing again. If someone would post an example of the proper way to synchronize the two threads I would appreciate it. I know they have to be in synchronized blocks -- should I use multiple locks? I'm not asking for an example specific to audio, just a general one that adds something to a collection and then removes it. Any help is appreciated. Thanks.

in "classic" java you can (and probably should) use a single lock object for producer-consumer implementations. something like
public final static Object LOCK = new Object();
then in your produce() method you'll have code like this:
synchronized(LOCK) {
    //place stuff in queue
    LOCK.notifyAll(); //wake up any sleepers
}
and in your consume() method you'll have the other side:
synchronized(LOCK) {
    while (nothing in queue) { //the while is important - we might wake up
                               //spuriously, or another consumer might grab
                               //everything first, leaving us with nothing
        try {
            LOCK.wait(); //releases LOCK while we sleep
        } catch (InterruptedException ex) {
            //most code either swallows this or restores the interrupt
            //flag with Thread.currentThread().interrupt()
        }
    }
    return something from queue
}
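Since the question explicitly asked for a complete example, here is a runnable version of that wait/notify pattern with the placeholders filled in (the byte[] chunks and class name are just for illustration):

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class WaitNotifyBuffer {
    private static final Object LOCK = new Object();
    private static final Queue<byte[]> queue = new ArrayDeque<>();

    static void produce(byte[] chunk) {
        synchronized (LOCK) {
            queue.add(chunk);
            LOCK.notifyAll(); // wake up any waiting consumer
        }
    }

    static byte[] consume() throws InterruptedException {
        synchronized (LOCK) {
            while (queue.isEmpty()) { // guards against spurious wakeups
                LOCK.wait();          // releases LOCK while sleeping
            }
            return queue.remove();
        }
    }

    public static void main(String[] args) throws Exception {
        Thread consumer = new Thread(() -> {
            try {
                System.out.println("consumed " + consume().length + " bytes");
            } catch (InterruptedException ignored) { }
        });
        consumer.start();
        produce(new byte[512]);
        consumer.join();
    }
}
```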
but this is old-school. more modern versions of java have classes that neatly wrap all of this low-level voodoo for you, for example ArrayBlockingQueue. you could just define a "global" static queue and then use put()/offer() and take() for your produce() and consume() implementations respectively (put() blocks when the queue is full; offer() returns false instead).
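A minimal sketch of the queue-based version (class name and chunk sizes are mine, chosen for illustration). A small capacity bounds how many chunks can sit between reader and writer, which in turn bounds the monitoring latency:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class AudioChunkQueue {
    // Bounded queue: at most 8 chunks can queue up between reader and writer.
    private static final BlockingQueue<byte[]> CHUNKS = new ArrayBlockingQueue<>(8);

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(() -> {
            try {
                byte[] chunk = CHUNKS.take(); // blocks until a chunk arrives
                System.out.println("wrote " + chunk.length + " bytes");
            } catch (InterruptedException ignored) { }
        });
        writer.start();
        CHUNKS.put(new byte[1024]); // blocks if the queue is full
        writer.join();
    }
}
```

Whether you want put() (stall the reader when the writer falls behind) or offer() (drop the chunk) depends on whether dropped audio or a stalled capture loop is the lesser evil for your use case.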
but if you're really concerned with latency i'd go the extra mile and use a library written exactly for low-latency inter-thread communication. a good example of such a library is the Disruptor, which claims much better latencies than ArrayBlockingQueue.

Related

How to deal with a slow consumer in traditional Java NIO?

So, I've been brushing up my understanding of traditional Java non-blocking API. I'm a bit confused with a few aspects of the API that seem to force me to handle backpressure manually.
For example, the documentation on WritableByteChannel.write(ByteBuffer) says the following:
Unless otherwise specified, a write operation will return only after
writing all of the requested bytes. Some types of channels,
depending upon their state, may write only some of the bytes or
possibly none at all. A socket channel in non-blocking mode, for
example, cannot write any more bytes than are free in the socket's
output buffer.
Now, consider this example taken from Ron Hitchens' book Java NIO.
In the piece of code below, Ron is trying to demonstrate how we could implement an echo response in a non-blocking socket application (for context here's a gist with the full example).
//Use the same byte buffer for all channels. A single thread is
//servicing all the channels, so no danger of concurrent access.
private ByteBuffer buffer = ByteBuffer.allocateDirect(1024);

protected void readDataFromSocket(SelectionKey key) throws Exception {
    var channel = (SocketChannel) key.channel();
    buffer.clear(); //empty buffer
    int count;
    while ((count = channel.read(buffer)) > 0) {
        buffer.flip(); //make buffer readable
        //Send data; don't assume it goes all at once
        while (buffer.hasRemaining()) {
            channel.write(buffer);
        }
        //WARNING: the above loop is evil. Because
        //it's writing back to the same nonblocking
        //channel it read the data from, this code
        //can potentially spin in a busy loop. In real life
        //you'd do something more useful than this.
        buffer.clear(); //empty buffer
    }
    if (count < 0) {
        //Close channel on EOF, invalidates the key
        channel.close();
    }
}
My confusion is on the while loop writing into output channel stream:
//Send data; don't assume it goes all at once
while (buffer.hasRemaining()) {
    channel.write(buffer);
}
It really confuses me how NIO is helping me here. Certainly the code may not block, as per the description of WritableByteChannel.write(ByteBuffer): if the output channel cannot accept any more bytes because its buffer is full, the write operation does not block; it just writes nothing, returns, and the buffer remains unchanged. But --at least in this example-- there is no easy way to use the current thread for something more useful while we wait for the client to process those bytes. For that matter, if I only had one thread, the other requests would be piling up in the selector while this while loop wastes precious CPU cycles "waiting" for the client buffer to open up some space. There is no obvious way to register for readiness on the output channel. Or is there?
So, assuming that instead of an echo server I was trying to implement a response that needed to send a large number of bytes back to the client (e.g. a file download), and assuming that the client has very low bandwidth or the output buffer is really small compared to the server buffer, sending this file could take a long time. It seems as if we need to spend our precious CPU cycles attending to other clients while our slow client is chewing on our file-download bytes.
If we have readiness in the input channel, but not on the output channel, it seems this thread could be using precious CPU cycles for nothing. It is not blocked, but it is as if it were since the thread is useless for undetermined periods of time doing insignificant CPU-bound work.
To deal with this, Hitchens' solution is to move this code to a new thread --which just moves the problem to another place--. Then I wonder, if we had to open a thread every time we need to process a long running request, how is Java NIO better than regular IO when it comes to processing this sort of requests?
It is not yet clear to me how I could use traditional Java NIO to deal with these scenarios. It is as if the promise of doing more with less resources would be broken in a case like this. What if I were implementing an HTTP server and I cannot know how long it would take to service a response to the client?
It appears as if this example is deeply flawed and a good design of the solution should consider listening for readiness on the output channel as well, e.g.:
registerChannel(selector, channel, SelectionKey.OP_WRITE);
But what would that solution look like? I've been trying to come up with it, but I don't know how to achieve it appropriately.
I'm not looking for other frameworks like Netty, my intention is to understand the core Java APIs. I appreciate any insights anyone could share, any ideas on what is the proper way to deal with this back pressure scenario just using traditional Java NIO.
NIO's non-blocking mode enables a thread to request reading data from a channel, and only get what is currently available, or nothing at all, if no data is currently available. Rather than remain blocked until data becomes available for reading, the thread can go on with something else.
The same is true for non-blocking writing. A thread can request that some data be written to a channel, but not wait for it to be fully written. The thread can then go on and do something else in the meantime.
What threads spend their idle time on when not blocked in IO calls, is usually performing IO on other channels in the meantime. That is, a single thread can now manage multiple channels of input and output.
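To make the OP_WRITE approach concrete: the usual pattern is to keep the unwritten remainder of the buffer, add OP_WRITE to the key's interest set when a write comes up short, and remove it once the buffer drains. Here is a self-contained sketch of that interest-toggling; note this is my own illustration, not code from the question or book -- it uses a Pipe in place of a real socket, and the class name and buffer sizes are arbitrary:

```java
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

public class WriteBackpressureDemo {
    public static void main(String[] args) throws Exception {
        Pipe pipe = Pipe.open();
        Pipe.SinkChannel sink = pipe.sink();       // stands in for the socket we write to
        Pipe.SourceChannel source = pipe.source(); // stands in for the slow consumer
        sink.configureBlocking(false);
        source.configureBlocking(false);

        Selector selector = Selector.open();
        source.register(selector, SelectionKey.OP_READ);
        SelectionKey sinkKey = sink.register(selector, 0); // no write interest yet

        ByteBuffer outbound = ByteBuffer.allocate(1 << 20); // 1 MiB, far more than the pipe holds
        outbound.position(outbound.limit());                // pretend it is full of data
        outbound.flip();
        long total = outbound.remaining();
        long received = 0;
        ByteBuffer inbound = ByteBuffer.allocate(8192);

        sink.write(outbound);                               // first attempt; usually comes up short
        if (outbound.hasRemaining()) {
            sinkKey.interestOps(SelectionKey.OP_WRITE);     // ask to be woken when writable
        }

        while (received < total) {
            selector.select(); // blocks; no busy loop while the consumer is slow
            for (SelectionKey key : selector.selectedKeys()) {
                if (key.isValid() && key.isWritable()) {
                    sink.write(outbound);                   // write as much as currently fits
                    if (!outbound.hasRemaining()) {
                        key.interestOps(0);                 // drained: stop watching writability
                    }
                }
                if (key.isValid() && key.isReadable()) {
                    inbound.clear();
                    received += source.read(inbound);       // the "slow client" consuming
                }
            }
            selector.selectedKeys().clear();
        }
        selector.close();
        sink.close();
        source.close();
        System.out.println("received=" + received);
    }
}
```

The same idea applies to a SocketChannel: attach the pending ByteBuffer to the key, keep OP_WRITE in the interest set only while data is pending, and clear it when hasRemaining() turns false, so the selector never spins on an always-writable channel.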
So I think you need to rely on the design of the solution by using a design pattern to handle this issue; perhaps the **Task or Strategy design patterns** are good candidates, and depending on the framework or application you are using, you can decide on the solution.
But in most cases you don't need to implement it yourself, as it's already implemented in Tomcat, Jetty, etc.
Reference: Non-blocking IO

Writing with a single thread LMAX

I've been introduced to LMAX and this wonderful concept called the RingBuffer.
So people say that when writing to the ring buffer with only one thread, performance is way better than with multiple producers...
However, I don't really find it possible for a typical application to use only one thread for writes to the ring buffer... I don't really understand how LMAX does that (if they do). For example, N different traders put orders on an exchange; those are all asynchronous requests that get transformed into orders and put into the ring buffer. How can they possibly write those using one thread?
Question 1. I might be missing something or misunderstanding some aspect, but if you have N concurrent producers, how is it possible to merge them into 1 without them locking each other?
Question 2. I recall RxJava observables, where you could take N observables and merge them into 1 using Observable.merge; I wonder if it is blocking or maintains any lock in any way?
The impact of multi-threaded writing on a RingBuffer is slight, but under very heavy loads it can be significant.
A RingBuffer implementation holds a next node where the next addition will be made. If only one thread is writing to the ring the process will always complete in the minimum time, i.e. buffer[head++] = newData.
To handle multi-threading while avoiding locks you would generally do something like while (!buffer[head++].compareAndSet(null, newValue)) {}. This tight loop would continue to execute while other threads were interfering with the storing of the data, thus slowing down the throughput.
Note that I have used pseudo-code above, have a look at getFree in my implementation here for a real example.
// Find the next free element and mark it not free.
private Node<T> getFree() {
    Node<T> freeNode = head.get();
    int skipped = 0;
    // Stop when we hit the end of the list
    // ... or we successfully transition a node from free to not-free.
    // This is the loop that could cause delays under high thread activity.
    while (skipped < capacity && !freeNode.free.compareAndSet(true, false)) {
        skipped += 1;
        freeNode = freeNode.next;
    }
    // ...
}
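To make the shape of that claim loop concrete, here is a small runnable sketch using only the JDK. This is my own illustration, not the answerer's linked implementation: the class name, the AtomicReferenceArray backing store, and the full-ring probe are assumptions chosen to mirror the buffer[head++].compareAndSet(null, newValue) pseudocode:

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.AtomicReferenceArray;

public class CasRingSketch {
    final AtomicReferenceArray<String> buffer; // null means "slot is free"
    final AtomicLong head = new AtomicLong();  // advisory hint of where to start probing
    final int mask;

    CasRingSketch(int capacityPow2) {
        buffer = new AtomicReferenceArray<>(capacityPow2);
        mask = capacityPow2 - 1;
    }

    // Multi-producer claim loop: CAS the slot itself from null to the value.
    // A lost CAS means another producer got there first, so we probe onward --
    // this is the tight loop that eats cycles under heavy contention.
    boolean offer(String value) {
        long start = head.get();
        for (long h = start; h < start + buffer.length(); h++) {
            int slot = (int) (h & mask);
            if (buffer.compareAndSet(slot, null, value)) {
                head.set(h + 1); // advisory only; correctness comes from the CAS
                return true;
            }
        }
        return false; // probed a full ring without finding a free slot
    }

    public static void main(String[] args) {
        CasRingSketch ring = new CasRingSketch(8);
        int accepted = 0;
        for (int i = 0; i < 10; i++) {
            if (ring.offer("msg-" + i)) accepted++;
        }
        System.out.println("accepted=" + accepted); // 8 slots, so 8 of 10 land
    }
}
```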
Internally, RxJava's merge uses a serialization construct I call emitter-loop which uses synchronized and is blocking.
Our 'clients' use merge mostly in throughput- and latency-insensitive cases, or completely single-threaded, and blocking isn't really an issue there.
It is possible to write a non-blocking serializer I call queue-drain but merge can't be configured to use that instead.
You can also take a look at JCTools' MpscArrayQueue directly if you are willing to handle the producer and consumer threads manually.

Is it a good way to use java.util.List as a buffer?

I have the main process and a thread running together.
The main process receives all the incoming UDP messages and put it into a List.
Then the thread is intended for processing those UDP messages.
However when I tried the following snippet inside the thread
int count = 0;
while (true) {
    if (buffer.size() > count) {
        System.out.println("Processing " + buffer.get(count));
        count++;
    }
}
the thread doesn't seem to work well.
By the way, buffer is
List<String> buffer = new ArrayList<String>();
and it is where the main process puts all the received UDP messages
any advice guys? :-)
No. This is the classic purpose of a queue, and you probably want some implementation of BlockingQueue.
Your thread is using busy waiting, which explains why it doesn't work well. When the list is empty, the thread consumes all the CPU resources it can. You want the opposite: as long as the queue is empty, the thread should do nothing.
There are several ways of designing this. The basic behavior is known as the producer-consumer problem. The easiest approach to implementing it in Java is to use a BlockingQueue, although it's easy enough to implement your own wait/notify protocol on a basic List. I believe the Wikipedia article shows how to do this in Java.
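For instance, a minimal producer/consumer pair on a LinkedBlockingQueue might look like the sketch below (the message strings and counts are made up for illustration). take() parks the consumer thread until something arrives, so the empty-queue case costs no CPU:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class UdpBufferSketch {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> buffer = new LinkedBlockingQueue<>();

        // Consumer: take() blocks while the queue is empty, so no busy waiting.
        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 3; i++) {
                    System.out.println("Processing " + buffer.take());
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        // Main thread plays the role of the UDP receive loop.
        buffer.put("msg-1");
        buffer.put("msg-2");
        buffer.put("msg-3");
        consumer.join();
    }
}
```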
Without some sort of synchronization, it is surely not a good idea to use shared resources this way. I am assuming no synchronization as I don't see any code for it.
Given the code you have, the while loop will never terminate, and I can only imagine the buffer will continue to grow until you run out of memory.
Java provides specific data structures for implementations that you have described. You can consider looking into BlockingQueue
If you are parsing a String protocol, use a queue instead.
If you are parsing bytes, look into using ByteBuffer as its bulk operations will be more efficient for buffering.
In either case you'll likely need to ensure thread safety through synchronized or concurrent structures.
First off, it looks like you have a typo (counter++) that won't compile; I'm going to assume you meant count++.
The solution to your problem is a Queue not a List. In particular you will want to use the ConcurrentLinkedQueue for this application.

Asynchronous atomic array

I have a critical section in my (Java) code which basically goes like the snippet below. The messages are coming in from an NIO server.
void messageReceived(User user, Message message) {
    synchronized (entryLock) {
        userRegistry.updateLastMessageReceived(user, time());
        server.receive(user, message);
    }
}
However, a high percentage of my messages are not going to change the server state, really. They're merely the client saying "hello, I'm still here". I really don't want to have to do that work inside the synchronized block.
I could use a synchronous map or something like that, but it's still going to incur a synchronization penalty.
What I would really like to do is to have something like a drop box, like this
void messageReceived(User user, Message message) {
    dropbox.add(new UserReceived(user, time()));
    if (message.getType() != Message.TYPE_KEPT_ALIVE) {
        synchronized (entryLock) {
            server.receive(user, message);
        }
    }
}
I have a cleanup routine that automatically puts clients that aren't active to sleep. So instead of synchronizing on every keep-alive message to update the registry, the cleanup routine can simply compile the keep-alive messages in a single synchronized block.
So naturally, recognizing a need for this, the first thing I did was start writing a solution. Then I decided this was a non-trivial class, and a problem that is more than likely fairly common. So here I am.
tl;dr Is there a Java library or other solution I can use to facilitate atomically adding to a list of objects in an asynchronous manner? Collecting from the list in an asynchronous manner is not required. I just don't want to synchronize on every add to the list.
ConcurrentLinkedQueue claims to be:
This implementation employs an efficient "wait-free" algorithm based on one described in Simple, Fast, and Practical Non-Blocking and Blocking Concurrent Queue Algorithms by Maged M. Michael and Michael L. Scott.
I'm not sure what the quotes on "wait-free" entail but the Concurrent* classes are good places to look for structures like you're looking for.
You might also be interested in the following: Effective Concurrency: Lock-Free Code — A False Sense of Security. It talks about how hard these things are to get right, even for experts.
Well, there are few things you must bear in mind.
First, there is very little "synchronization cost" if there is little contention (more than one thread trying to enter the synchronized block at the same time).
Second, if there is contention, you're going to incur some cost no matter what technique you're using. Paul is right about ConcurrentLinkedQueue and the "wait-free" means that thread concurrency control is not done using locks, but still, you will always pay some price for contention. You may also want to look at ConcurrentHashMap because I'm not sure a list is what you're looking for. Using both classes is quite simple and common.
If you want to be more adventurous, you might find some non-locking synchronization primitives in java.util.concurrent.atomic.
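As a concrete sketch of the drop-box idea built on ConcurrentLinkedQueue (the class and method names here are mine, not a library API): producers offer() without ever blocking one another, and the cleanup routine drains whatever has accumulated since its last pass:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;

public class DropBoxSketch {
    // Lock-free "drop box": producers add without blocking each other.
    static final ConcurrentLinkedQueue<String> dropbox = new ConcurrentLinkedQueue<>();

    static void keepAliveReceived(String user) {
        dropbox.offer(user); // no lock acquired; CAS-based under the hood
    }

    // Cleanup routine: drain everything accumulated since the last pass.
    static List<String> drain() {
        List<String> batch = new ArrayList<>();
        String item;
        while ((item = dropbox.poll()) != null) {
            batch.add(item);
        }
        return batch;
    }

    public static void main(String[] args) {
        keepAliveReceived("alice");
        keepAliveReceived("bob");
        System.out.println("drained=" + drain().size());
    }
}
```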
One thing we could do is use a simple ArrayList for keep-alive messages:
Keep adding to this list whenever a keep-alive message comes in.
The other thread synchronizes on a lock X and reads and processes the keep-alives. Note that this thread only reads/copies from the list; it does not remove.
Finally, in messageReceived itself, you check whether the list has grown beyond, say, 1000 entries, in which case you synchronize on lock X and clear the list.
List<Message> keepAliveList = new ArrayList<>();
List<Message> processList = new ArrayList<>();

void messageReceived(User user, Message message) {
    if (message.getType() == Message.TYPE_KEPT_ALIVE) {
        if (keepAliveList.size() > THRESHOLD) {
            synchronized (X) {
                processList.addAll(keepAliveList);
                keepAliveList.clear();
            }
        }
        keepAliveList.add(message);
    }
}

// on another thread
void checkKeepAlives() {
    synchronized (X) {
        processList.addAll(keepAliveList);
    }
    processKeepAlives(processList);
}

Chat system in Java

Is there a way to immediately print the message received from the client without using an infinite loop to check whether the input stream is empty or not?
Because I found that using an infinite loop consumes a lot of system resources, which makes the program run very slowly. And we also have to do the same (infinite loop) on the client side to print the message on the screen in real time.
I'm using Java.
You should be dealing with the input stream in a separate Thread - and let it block waiting for input. It will not use any resources while it blocks. If you're seeing excessive resource usage while doing this sort of thing, you're doing it wrong.
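A minimal sketch of that blocking-reader thread, using a PipedInputStream to stand in for the socket's input stream so the example is self-contained (the pipe and the message strings are illustrative only):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;
import java.io.PrintWriter;

public class BlockingReaderSketch {
    public static void main(String[] args) throws Exception {
        // PipedInputStream stands in for socket.getInputStream() here.
        PipedOutputStream remoteSide = new PipedOutputStream();
        BufferedReader in = new BufferedReader(
                new InputStreamReader(new PipedInputStream(remoteSide)));

        // Reader thread: readLine() blocks, consuming no CPU until data arrives.
        Thread reader = new Thread(() -> {
            try {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println("Received: " + line); // printed immediately
                }
            } catch (IOException e) {
                // stream closed: thread exits
            }
        });
        reader.start();

        // Main thread plays the role of the remote client.
        PrintWriter out = new PrintWriter(remoteSide, true);
        out.println("hello");
        out.println("still here");
        out.close(); // EOF: readLine() returns null and the reader exits
        reader.join();
    }
}
```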
I think you can just put your loop in a different thread and have it sleep a bit (maybe for half a second?) between iterations. It would still be an infinite loop, but it would not consume nearly as many resources.
Why don't you change your architecture a little bit to accommodate WebSockets? Check out Socket.IO. It is a cross-browser WebSockets enabler.
You will have to write controllers (servlets for example in java) that push data to the client. This does not follow the request-response architecture.
You can also architect it so that a "push servlet" triggers a "request" from the client to obtain the "response".
Since your question talks about Java, and if you are interested in WebSockets, check this link out.
If you're using sockets, which you should be for any networking, then you can wrap the socket's input stream (from socket.getInputStream()) in a DataInputStream and do the following:
public DataInputStream streamIn;
public Socket soc;
// initialize socket, etc...
streamIn = new DataInputStream(soc.getInputStream());
public String getInput() throws IOException {
    return streamIn.readUTF();
}
streamIn.readUTF() blocks until data is available, meaning you don't have to loop, and threading will let you do other processing while you wait for data.
Look here for more information on DataInputStream and what you can do with it: http://docs.oracle.com/javase/6/docs/api/java/io/DataInputStream.html
A method that does not require threads would involve subclassing the input stream and adding a notify type method. When called this method would alert any interested objects (i.e. objects that would have to change state due to the additions to the stream) that changes have been made. These interested objects could then respond in anyway that is desired.
Objects writing to the buffer would do their normal writing, and afterward would call the notify() method on the input stream, informing all interested objects of the change.
Edit: This might require subclassing more than a couple of classes and so could involve a lot of code changes. Without knowing more about your design you would have to decide if the implementation is worth the effort.
There are two approaches that avoid busy loops / sleeps.
Use a thread for each client connection, and simply have each thread call read. This blocks the thread until the client sends some data, but that's no problem because it doesn't block the threads handling other clients.
Use Java NIO channel selectors. These allow a thread to wait until one of set of channels (in this case sockets) has data to be read. There is a section of the Oracle Java Tutorials on this.
Of these two approaches, the second one is most efficient in terms of overall resource usage. (The thread-per-client approach uses a lot of memory on thread stacks, and CPU on thread switching overheads.)
Busy loops that repeatedly call (say) InputStream.available() to see if there is any input are horribly inefficient. You can make them less inefficient by slowing down the polling with Thread.sleep(...) calls, but this has the side effect of making the service less responsive. For instance, if you add a 1 second sleep between each set of polls, the effect that each client will see is that the server typically delays 1 second before processing each request. Assuming that those requests are keystrokes and the responses echo them, the net result is a horribly laggy service.
