I am developing the first part of an Android application that broadcasts a video stream over the network. Currently, I'm sending the video in a very direct way, like this:
Socket socket = new Socket(InetAddress.getByName(hostname), port);
ParcelFileDescriptor pfd = ParcelFileDescriptor.fromSocket(socket);
recorder.setOutputFile(pfd.getFileDescriptor());
But unfortunately, it is not very fluid. I want to buffer the data stream before sending it through the socket. One of the ways I tried is to write the stream to a file using the Android media recording API, and to use another thread to stream the file to the server on a computer.
So my problem is: how can I send, over a socket, a file that is still being written?
As BufferedInputStream does not have a blocking method for reading, I tried to do things like the following, but without any success:
while (inputStream.available() >= BUFFER_SIZE) {
    inputStream.read(buffer);
    outputStream.write(buffer);
}
outputStream.flush();
But when I do that, if the network is faster than the data stream, I quickly fall out of the loop.
Is there a 'good' way to do this? I thought about active waiting, but it is not a good solution, especially on mobile devices. Another way is to do something like this:
while (true) {
    while (inputStream.available() < BUFFER_SIZE) {
        wait(TIME);
    }
    inputStream.read(buffer);
    outputStream.write(buffer);
}
outputStream.flush();
But it sounds quite dirty to me... Is there a sleeker solution?
What I do in these situations is simply fill up a byte array (my buffer) until either I've hit the end of the data I'm about to transmit, or the buffer is full, at which point the buffer is ready to be passed to my socket transmission logic. Admittedly, I only do this on "regular" data, not on video or audio.
Something worth noting is that this will give a "janky" user experience to the recipient of that data (it might look like the network is stopping for short periods, then running normally again: the time the buffer takes to fill up). So if you have to use a buffered approach on either video or audio, be careful about what buffer size you decide to work with.
For things like video, my experience has been to use streaming-based logic rather than buffered, but you apparently have some different and interesting requirements.
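The fill-then-send approach described above can be sketched like this (the helper and its name are mine, not from the posts above): keep reading until the buffer is full or the data ends, so each call hands a complete chunk to the socket-transmission logic.

```java
import java.io.IOException;
import java.io.InputStream;

class ChunkFiller {
    // Fills buf as far as possible. The return value is less than
    // buf.length only when the stream ended before the buffer filled.
    static int fillBuffer(InputStream in, byte[] buf) throws IOException {
        int total = 0;
        while (total < buf.length) {
            int n = in.read(buf, total, buf.length - total);
            if (n == -1) {
                break; // end of data: return the partial final chunk
            }
            total += n;
        }
        return total; // a full (or final partial) buffer, ready to transmit
    }
}
```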
I can't think of a pretty way of doing this, but one option might be to create a local socket pair, use the 'client' end of the pair as the MediaRecorder output fd, and buffer between the local-server socket and the remote-server. This way, you can block on the local-server until there is data.
Another possibility is to use a file-based pipe/fifo (so the disk doesn't fill up), but I can't remember if the Java layer exposes mkfifo functionality.
In any event, you probably want to look at FileReader, since reads on that should block.
Hope this helps,
Phil Lello
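In plain Java (outside the Android-specific socket-pair APIs), the same buffer-in-the-middle idea can be sketched with a PipedOutputStream/PipedInputStream pair: the recorder-side thread writes into the pipe, and the sender thread's read() blocks until data arrives, so there is no polling. The names below are illustrative, not from any of the code above.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.io.PipedInputStream;

class PipeRelay {
    // Copies everything from the pipe to out (e.g. a socket stream).
    // read() blocks while the producer is still writing, and returns -1
    // only once the producer closes its end of the pipe.
    static long relay(PipedInputStream in, OutputStream out) throws IOException {
        byte[] buf = new byte[8192];
        long total = 0;
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n); // forward only the bytes actually read
            total += n;
        }
        out.flush();
        return total;
    }
}
```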
Related
So, I've been brushing up my understanding of traditional Java non-blocking API. I'm a bit confused with a few aspects of the API that seem to force me to handle backpressure manually.
For example, the documentation on WritableByteChannel.write(ByteBuffer) says the following:
Unless otherwise specified, a write operation will return only after
writing all of the requested bytes. Some types of channels,
depending upon their state, may write only some of the bytes or
possibly none at all. A socket channel in non-blocking mode, for
example, cannot write any more bytes than are free in the socket's
output buffer.
Now, consider this example taken from Ron Hitchens' book, Java NIO.
In the piece of code below, Ron is trying to demonstrate how we could implement an echo response in a non-blocking socket application (for context here's a gist with the full example).
// Use the same byte buffer for all channels. A single thread is
// servicing all the channels, so no danger of concurrent access.
private ByteBuffer buffer = ByteBuffer.allocateDirect(1024);

protected void readDataFromSocket(SelectionKey key) throws Exception {
    var channel = (SocketChannel) key.channel();
    buffer.clear(); // empty buffer
    int count;
    while ((count = channel.read(buffer)) > 0) {
        buffer.flip(); // make buffer readable
        // Send data; don't assume it goes all at once
        while (buffer.hasRemaining()) {
            channel.write(buffer);
        }
        // WARNING: the above loop is evil. Because
        // it's writing back to the same nonblocking
        // channel it read the data from, this code
        // can potentially spin in a busy loop. In real life
        // you'd do something more useful than this.
        buffer.clear(); // empty buffer
    }
    if (count < 0) {
        // Close channel on EOF, invalidates the key
        channel.close();
    }
}
My confusion is on the while loop writing into output channel stream:
// Send data; don't assume it goes all at once
while (buffer.hasRemaining()) {
    channel.write(buffer);
}
It really confuses me how NIO is helping me here. Certainly the code may not block, as per the description of WritableByteChannel.write(ByteBuffer): if the output channel cannot accept any more bytes because its buffer is full, this write operation does not block; it just writes nothing, returns, and the buffer remains unchanged. But, at least in this example, there is no easy way to use the current thread for something more useful while we wait for the client to process those bytes. For that matter, if I only had one thread, the other requests would pile up in the selector while this while loop wastes precious CPU cycles "waiting" for the client buffer to open up some space. There is no obvious way to register for readiness on the output channel. Or is there?
So, assuming that instead of an echo server I was trying to implement a response that needed to send a large number of bytes back to the client (e.g. a file download), and assuming that the client has very low bandwidth or that its output buffer is really small compared to the server's, sending this file could take a long time. It seems we would want to spend our precious CPU cycles attending to other clients while our slow client is chewing on its file-download bytes.
If we have readiness on the input channel, but not on the output channel, this thread could be burning precious CPU cycles for nothing. It is not blocked, but it might as well be, since the thread is useless for indeterminate periods of time, doing insignificant CPU-bound work.
To deal with this, Hitchens' solution is to move the code to a new thread, which just moves the problem somewhere else. Then I wonder: if we have to open a thread every time we need to process a long-running request, how is Java NIO better than regular IO when it comes to processing these sorts of requests?
It is not yet clear to me how I could use traditional Java NIO to deal with these scenarios. It is as if the promise of doing more with fewer resources were broken in a case like this. What if I were implementing an HTTP server and couldn't know how long it would take to service a response to the client?
It appears as if this example is deeply flawed, and a good design should listen for readiness on the output channel as well, e.g.:
registerChannel(selector, channel, SelectionKey.OP_WRITE);
But what would that solution look like? I've been trying to come up with it, but I don't know how to achieve it properly.
I'm not looking for other frameworks like Netty; my intention is to understand the core Java APIs. I would appreciate any insights, and any ideas on the proper way to deal with this backpressure scenario using only traditional Java NIO.
NIO's non-blocking mode enables a thread to request reading data from a channel, and only get what is currently available, or nothing at all, if no data is currently available. Rather than remain blocked until data becomes available for reading, the thread can go on with something else.
The same is true for non-blocking writing. A thread can request that some data be written to a channel, but not wait for it to be fully written. The thread can then go on and do something else in the meantime.
What threads spend their idle time on when not blocked in IO calls, is usually performing IO on other channels in the meantime. That is, a single thread can now manage multiple channels of input and output.
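The question above asks what registering for OP_WRITE would actually look like. The usual pattern is: write what you can, and only while the buffer still has unwritten bytes register the channel for OP_WRITE; once the pending data is flushed, deregister, so the selector never spins. Here is a minimal sketch using a selectable Pipe (names and the one-channel simplification are mine; the sink must already be in non-blocking mode):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

class BackpressureWrite {
    // Writes all of src to a non-blocking sink without busy-waiting:
    // when write() accepts nothing, we sleep in select() until the
    // channel reports OP_WRITE readiness, then try again.
    static void writeFully(Pipe.SinkChannel sink, ByteBuffer src) throws IOException {
        try (Selector selector = Selector.open()) {
            while (src.hasRemaining()) {
                if (sink.write(src) == 0) { // kernel buffer is full
                    SelectionKey key = sink.register(selector, SelectionKey.OP_WRITE);
                    selector.select();      // block until writable again
                    key.cancel();
                    selector.selectNow();   // process the cancelled key
                }
            }
        }
    }
}
```

In a real server the select() call would be the main loop's single selector servicing all keys, and you would toggle interestOps between OP_READ and OP_READ | OP_WRITE instead of opening a selector per write; the dedicated selector here is only to keep the sketch self-contained.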
So I think you need to rely on the design of your solution, using a design pattern to handle this issue; perhaps the Task or Strategy design patterns are good candidates, and depending on the framework or application you are using, you can decide on a solution.
But in most cases you don't need to implement it yourself, as it is already implemented in Tomcat, Jetty, etc.
Reference: Non-blocking IO
I've recently finished a small game and have been trying to add audio to it. Currently, the sound system I have is working (basically the same code as the top answer here), but there is a significant stall during every output (~200-300 ms). Since it's a fast-paced game, I'm looking for something significantly quicker. I'm not experienced with threads, but would they be applicable here?
Instead of reading the file every time you wish to play its contents in audio format, read the file once into a byte array and then read the audio from that array of bytes.
public static byte[] getBytes(String file) throws IOException {
    try (RandomAccessFile raf = new RandomAccessFile(file, "r")) {
        byte[] bytes = new byte[(int) raf.length()];
        raf.readFully(bytes); // plain read() is not guaranteed to fill the array
        return bytes;
    }
}
Then, you could simply alter the playSound method to take a byte array as the parameter, and then write them to the SourceDataLine instance to play the sound (like is done in the original method, but it reads them from the file just before it writes them).
You could try passing a BufferedInputStream to the overloaded method AudioSystem.getAudioInputStream() instead of passing a File.
The call to drain is a blocking one and it causes the delays that you observe. You do not need to wait there. However, if you let the sound output operate in parallel with your other code, you should also define what happens if there is a lot of sound in your sound buffers and you are queueing more. Learn about the available method and the rest of the API to be able to manage the sound card flexibly and without any "lagging sound" effects.
Threads can also be used for this purpose, but it is not necessary here. The role of the parallel process can be adequately played by the sound driver itself and the single threaded approach will make your application easier to design and easier to debug.
As much as I'd like to accept one of these existing answers, I solved my problem in a simple way. By loading all the referenced File variables during initialization, the delay does not come back at any point during gameplay. However if this is not an adequate solution for anyone else viewing this question, I would also recommend Vulcan's answer.
When reading from an InputStream, is there a way to cancel the read when it reaches a certain size and ignore the rest of the stream safely ensuring the resources are completely released?
So far, I just finish the read, but ideally I would like to stop reading it and move on. How do I do it safely?
Here is what I have so far:
ByteArrayOutputStream buffer = new ByteArrayOutputStream();
int nRead;
byte[] byteData = new byte[16384];

while ((nRead = inputStream.read(byteData, 0, byteData.length)) != -1) {
    if (buffer.size() <= MAX_FILE_SIZE) {
        buffer.write(byteData, 0, nRead);
    }
}

if (buffer.size() <= MAX_FILE_SIZE) {
    buffer.flush();
    mData = buffer.toByteArray();
}

inputStream.close();
Thanks
Calling close() does what you want with respect to the JVM and its resources.
However, in some circumstances it could have undesirable effects. For instance, if the input stream is (ultimately) a socket stream, then closing the stream closes the socket, and that may cause the remote server that is sending the data to see a network error. (This probably doesn't matter, but if it is not handled cleanly, you may well see exceptions in a remote web server's log file.)
Even if it was in the middle of being read and doesn't finish?
Yes. Any data that is "in flight" will be thrown away.
By closing its socket handle, this application says implicitly that it is no longer interested in the data.
Under normal circumstances1, there is nothing else that has the socket handle that allows it to read that data.
There is no way for anything else to reconnect to the socket. That is not supported by the socket APIs ... at the operating system level.
There is therefore no point in "keeping" the data.
(If we are talking about a socket stream then the remote server might get an exception if it tries to write more data to the socket after the close propagated. But even if that occurs, the remote server has no way of knowing how much data this end actually read before "pulling the plug" on the connection.)
Also, does the buffer need to be somehow cancelled or closed as well.
Since it is a ByteArrayOutputStream, No. Streams that read from / write to in-memory buffers (byte arrays, StringBuffers) don't need to be closed2. The GC can reclaim purely in-memory resources without any issues. Also a BufferedInput/OutputStream doesn't need to be closed if the stream it wraps doesn't need closing.
1 - I think it is possible for a Linux/Unix application to open a socket and pass it to a forked child process. However, it is impractical for the parent and child processes to both use the socket, because of the difficulty of coordinating their use of it. Furthermore, you can't do this kind of thing between Java processes because the Java Process API doesn't allow it.
2 - The only hypothetical case where that is not true is when the buffer is a NIO Buffer backed by a shared memory segment or memory-mapped file ... which the garbage collector may be unable to reclaim in a timely fashion. And I say hypothetical because I don't think there are off-the-shelf stream wrappers for NIO Buffer objects.
close() is safe and does release resources: http://download.oracle.com/javase/6/docs/api/java/io/InputStream.html#close%28%29
That is all that you need to do on this end. It releases all JVM resources. If the stream is associated with a socket, the socket will be closed, and the operating system (i.e. the transport layer) will simply discard all buffers, forthcoming packets, etc. The other end of the connection (the sender) may see an error, but either way it should be prepared for that.
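Putting the advice from these answers together, an early-exit version of the loop in the question might look like the sketch below. The helper name is mine, and the cutoff semantics (close and discard everything past the cap) are an assumption about what the asker wants.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

class BoundedRead {
    // Reads at most maxBytes from in, then closes the stream and
    // drops whatever was not read.
    static byte[] readAtMost(InputStream in, int maxBytes) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        byte[] chunk = new byte[16384];
        try {
            int n;
            while (buffer.size() < maxBytes) {
                int want = Math.min(chunk.length, maxBytes - buffer.size());
                if ((n = in.read(chunk, 0, want)) == -1) {
                    break; // stream ended before the cap was reached
                }
                buffer.write(chunk, 0, n);
            }
        } finally {
            in.close(); // releases the JVM resources, as discussed above
        }
        return buffer.toByteArray();
    }
}
```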
I have a client connecting to my server. The client sends some messages to the server which I do not care about and do not want to waste time parsing its messages if I'm not going to be using them. All the i/o I'm using is simple java i/o, not nio.
If I create the input stream and just never read from it, can that buffer fill up and cause problems? If so, is there something I can do or a property I can set to have it just throw away data that it sees?
Now what if the server doesn't create the input stream at all? Will that cause any problems on the client/sending side?
Please let me know.
Thanks,
jbu
When you accept a connection from a client, you get an InputStream. If you don't read from that stream, the client's data will buffer up. Eventually, the buffer will fill up and the client will block when it tries to write more data. If the client writes all of its data before reading a response from the server, you will end up with a pretty classic deadlock situation. If you really don't care about the data from the client, just read (or call skip) until EOF and drop the data. Alternatively, if it's not a standard request/response (like HTTP) protocol, fire up a new thread that continually reads the stream to keep it from getting backed up.
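The "fire up a new thread that continually reads the stream" option can be sketched like this (the helper name is mine):

```java
import java.io.IOException;
import java.io.InputStream;

class Drainer {
    // Starts a daemon thread that reads and discards everything from in,
    // so the client never blocks on a full socket buffer.
    static Thread drainInBackground(InputStream in) {
        Thread t = new Thread(() -> {
            byte[] buf = new byte[4096];
            try {
                while (in.read(buf) != -1) {
                    // discard the data
                }
            } catch (IOException ignored) {
                // stream/socket closed underneath us; nothing to do
            }
        });
        t.setDaemon(true); // don't keep the JVM alive for this
        t.start();
        return t;
    }
}
```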
If you get no useful data from the client, what's the point of allowing it to connect?
I'm not sure of the implications of never reading from a socket's buffer in Java; I'd guess that eventually the OS would stop accepting data on that socket, but I'm not sure.
Why don't you just call the skip method of your InputStream occasionally with a large number, to ensure that you discard the data?
InputStream in = ....
byte[] buffer = new byte[4096]; // or whatever

while (in.read(buffer) != -1) {
    // discard the data
}
If you accept the connection, you should read the data. To tell you the truth, I have never seen (nor could I foresee) a situation where this (a server that ignores all data) would be useful.
I think you get the InputStream once you accept the request, so if you don't acknowledge that request, the underlying framework (e.g. Tomcat) will drop it after some time has elapsed.
Regards.
I can send small amounts of data using Java NIO.
But if I want to send very large data, my socket channel does not work correctly.
message = "very large data"+"\n";
ByteBuffer buf = ByteBuffer.wrap(message.getBytes());
int nbytes = channel.write(buf);
Not all of the data is sent.
I want to read the data on the server side, so I am using BufferedReader.readLine().
In this case I am not getting any error, but I also cannot retrieve any of the data that I have sent.
Thanks
Deepak
write()
Returns:
The number of bytes written, possibly zero
Write is not guaranteed to write your whole buf.
You need to check how much was written, and do another write. (Probably also wait (select) until you can write again.)
You should probably also search for a good java.nio tutorial...
If you need a simpler API, use the blocking I/O in java.io instead...
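The check-and-retry idea from this answer can be sketched as a small helper (the name is mine; with a blocking channel it simply loops until the buffer is drained, while a non-blocking channel would additionally need the select-for-writability step mentioned above):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.WritableByteChannel;

class WriteAll {
    // write() may consume only part of buf, so keep calling it until
    // buf has no remaining bytes.
    static long writeFully(WritableByteChannel channel, ByteBuffer buf) throws IOException {
        long total = 0;
        while (buf.hasRemaining()) {
            total += channel.write(buf); // returns how many bytes were accepted
        }
        return total;
    }
}
```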