I'm trying to make a simple network client. The client should be able to write into a queue (buffer), and a second thread should take this buffer and write it to the server.
I tried it with java.nio and created a thread with a static ByteBuffer. This ByteBuffer is used inside the thread's while(true) loop to write into the channel.
In my main loop, at some point I put some bytes into the static buffer via the put() method.
In debug mode I suspended the channel-writing thread and then filled the buffer via my main program loop (I just pushed 'A' to write into the buffer).
After three or four button pushes I resumed the channel-writing thread, and it worked just fine.
But when I run the program normally, I get a BufferOverflowException in the main-loop thread. I believe my program is trying to put data into the buffer while the buffer is being accessed by the channel-writing thread. I tried to use the synchronized keyword around both parts in both threads, but that didn't help.
Main loop:
[...]
if (Gdx.app.getInput().isKeyPressed(Input.Keys.A) && (now.getTime() - lastPush.getTime()) > 1000)
{
    lastPush = now;
    //synchronized (PacketReader.writeBuffer)
    //{
        PacketReader.writeBuffer.put(("KeKe").getBytes());
    //}
}
[...]
My thread, named "PacketReader" (well, it actually does both reading and writing):
class PacketReader implements Runnable
{
    public static ByteBuffer writeBuffer = ByteBuffer.allocate(1024);
    [...]
    public void run()
    {
        while (true) {
            [...]
            if (selKey.isValid() && selKey.isWritable())
            {
                SocketChannel sChannel = (SocketChannel) selKey.channel();
                //synchronized (PacketReader.writeBuffer)
                //{
                    if (PacketReader.writeBuffer.hasRemaining())
                    {
                        PacketReader.writeBuffer.flip();
                        int numBytesWritten = sChannel.write(PacketReader.writeBuffer);
                        PacketReader.writeBuffer.flip();
                    }
                //}
            }
[...]
Any idea how to create such a buffered write system? I think it's a common problem, but I don't know what to search for. All the NIO tutorials seem to assume that the buffer is filled within the channel loop.
In the end I'm trying to have a program that starts the network component once; within the program I just want to call some static send method to send packets, without thinking about queue handling or waiting on the queue.
Is there maybe a tutorial somewhere? Most games should use a similar concept, but I couldn't find any simple open-source Java games with an NIO implementation (I'll use it on Android, so I'm trying it without a framework).
You might try keeping a queue (say, ConcurrentLinkedQueue) of buffers to write, instead of putting into the same buffer you are sending out to the channel.
To enqueue something to be sent:
ByteBuffer buff = /* get buffer to write in */;
buff.put("KeKe".getBytes());
queue.add(buff);
Then in your select loop, when the channel is writable:
for(ByteBuffer buff = queue.poll(); buff != null; buff = queue.poll()) {
sChannel.write(buff);
/* maybe recycle buff */
}
You may also need to set/remove write interest on the channel depending on whether the queue is empty or not.
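A minimal, runnable sketch of this queue-plus-selector pattern, with a Pipe's sink channel standing in for the SocketChannel so it runs locally (class and method names here are illustrative; a real implementation must also keep a partially written buffer around for the next writable event):

```java
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.ConcurrentLinkedQueue;

public class QueuedWriter {
    static final ConcurrentLinkedQueue<ByteBuffer> queue = new ConcurrentLinkedQueue<>();

    static String demo() throws Exception {
        Pipe pipe = Pipe.open();                  // stands in for the socket
        pipe.sink().configureBlocking(false);
        Selector selector = Selector.open();
        SelectionKey key = pipe.sink().register(selector, SelectionKey.OP_WRITE);

        // Producer side: any thread may enqueue one buffer per message.
        queue.add(ByteBuffer.wrap("KeKe".getBytes(StandardCharsets.UTF_8)));

        // Selector side: drain the queue when the channel is writable.
        selector.select();
        if (key.isWritable()) {
            for (ByteBuffer b = queue.poll(); b != null; b = queue.poll()) {
                pipe.sink().write(b);
                // With a real socket, if b.hasRemaining() after the write,
                // keep b for the next OP_WRITE event instead of dropping it.
            }
        }

        // Read the bytes back from the pipe's source end to verify.
        ByteBuffer in = ByteBuffer.allocate(16);
        pipe.source().read(in);
        in.flip();
        return StandardCharsets.UTF_8.decode(in).toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo()); // prints KeKe
    }
}
```

The main loop never touches the buffer the selector thread is writing from; it only hands over whole messages, which is what avoids the concurrent put/flip problem in the question.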
Not a direct answer to your question, but you should consider using an existing NIO framework to make this easier. Netty and Grizzly are popular examples. I would personally use Netty instead of writing my own server from scratch using NIO.
You could probably also look at how Netty handles reading / writing to the buffers since I assume that they have optimized their implementation.
The whole point of NIO is that you don't need separate threads. The thread doing the filling should also do the writing.
Related
So, I've been brushing up my understanding of traditional Java non-blocking API. I'm a bit confused with a few aspects of the API that seem to force me to handle backpressure manually.
For example, the documentation on WritableByteChannel.write(ByteBuffer) says the following:
Unless otherwise specified, a write operation will return only after writing all of the requested bytes. Some types of channels, depending upon their state, may write only some of the bytes or possibly none at all. A socket channel in non-blocking mode, for example, cannot write any more bytes than are free in the socket's output buffer.
Now, consider this example taken from Ron Hitchens' book Java NIO.
In the piece of code below, Ron is trying to demonstrate how we could implement an echo response in a non-blocking socket application (for context here's a gist with the full example).
// Use the same byte buffer for all channels. A single thread is
// servicing all the channels, so there is no danger of concurrent access.
private ByteBuffer buffer = ByteBuffer.allocateDirect(1024);

protected void readDataFromSocket(SelectionKey key) throws Exception {
    var channel = (SocketChannel) key.channel();
    buffer.clear(); // empty buffer
    int count;
    while ((count = channel.read(buffer)) > 0) {
        buffer.flip(); // make buffer readable
        // Send data; don't assume it goes all at once
        while (buffer.hasRemaining()) {
            channel.write(buffer);
        }
        // WARNING: the above loop is evil. Because
        // it's writing back to the same non-blocking
        // channel it read the data from, this code
        // can potentially spin in a busy loop. In real life
        // you'd do something more useful than this.
        buffer.clear(); // empty buffer
    }
    if (count < 0) {
        // Close channel on EOF; this invalidates the key
        channel.close();
    }
}
My confusion is on the while loop writing into output channel stream:
//Send data; don't assume it goes all at once
while(buffer.hasRemaining()) {
channel.write(buffer);
}
It really confuses me how NIO is helping me here. Certainly the code may not block, as per the description of WritableByteChannel.write(ByteBuffer): if the output channel cannot accept any more bytes because its buffer is full, the write operation does not block; it just writes nothing, returns, and the buffer remains unchanged. But, at least in this example, there is no easy way to use the current thread for something more useful while we wait for the client to process those bytes. For that matter, if I only had one thread, the other requests would pile up in the selector while this while loop wastes precious CPU cycles "waiting" for the client buffer to free some space. There is no obvious way to register for readiness on the output channel. Or is there?
So, suppose that instead of an echo server I was trying to implement a response that needed to send a large number of bytes back to the client (e.g. a file download), and suppose the client has very low bandwidth or the output buffer is really small compared to the server buffer; sending that file could take a long time. It seems as if we should be spending those precious CPU cycles attending to other clients while our slow client chews on the file-download bytes.
If we have readiness on the input channel but not on the output channel, this thread could be burning precious CPU cycles for nothing. It is not blocked, but it might as well be, since it is useless for undetermined periods of time, doing insignificant CPU-bound work.
To deal with this, Hitchens' solution is to move this code to a new thread (which just moves the problem somewhere else). Then I wonder: if we had to open a thread every time we needed to process a long-running request, how is Java NIO better than regular IO for this sort of request?
It is not yet clear to me how I could use traditional Java NIO to deal with these scenarios. It is as if the promise of doing more with fewer resources were broken in a case like this. What if I were implementing an HTTP server and couldn't know in advance how long it would take to serve a response to the client?
It appears as if this example is deeply flawed, and a good design should listen for readiness on the output channel as well, e.g.:
registerChannel(selector, channel, SelectionKey.OP_WRITE);
But what would that solution look like? I've been trying to come up with it, but I don't know how to achieve it properly.
I'm not looking for other frameworks like Netty; my intention is to understand the core Java APIs. I appreciate any insights anyone can share on the proper way to deal with this backpressure scenario using only traditional Java NIO.
NIO's non-blocking mode enables a thread to request reading data from a channel, and only get what is currently available, or nothing at all, if no data is currently available. Rather than remain blocked until data becomes available for reading, the thread can go on with something else.
The same is true for non-blocking writing. A thread can request that some data be written to a channel, but not wait for it to be fully written. The thread can then go on and do something else in the meantime.
What threads spend their idle time on when not blocked in IO calls is usually performing IO on other channels in the meantime. That is, a single thread can manage multiple channels of input and output.
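That behaviour is easy to see on a channel in non-blocking mode; here a Pipe's source channel stands in for a socket (a minimal sketch, not part of the book example):

```java
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;

public class NonBlockingRead {
    static int demo() throws Exception {
        Pipe pipe = Pipe.open();
        pipe.source().configureBlocking(false);
        ByteBuffer buf = ByteBuffer.allocate(64);
        // Nothing has been written yet: a non-blocking read returns 0
        // immediately instead of parking the thread, which is what lets
        // one thread go on to service other channels.
        return pipe.source().read(buf);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo()); // prints 0
    }
}
```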
So I think you need to rely on the design of the solution, using a design pattern to handle this issue; maybe the Task or Strategy design patterns are good candidates, and depending on the framework or application you are using, you can decide on the solution.
But in most cases you don't need to implement it yourself, as it's already implemented in Tomcat, Jetty, etc.
Reference: Non-blocking IO
I'm working on a basic, minimal server concept that flows data from one port to another destination using NIO and socket channels.
Things work great with fast links on both sides. Things fail terribly and consume a ton of CPU when one side is faster than the other. Everything still works, just very inefficiently.
Example:
Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
while (keys.hasNext()) {
    SelectionKey key = keys.next();
    keys.remove();
    try {
        if (!key.isValid()) continue;
        if (key.isReadable()) read(key);
    }
    catch (Exception e) {
        e.printStackTrace();
        key.cancel();
    }
}
The read call happens any time the socket has data that can be read. But what if the writable side of this connection can't be written to, because the client has high latency or just isn't reading the data quickly? We end up looping at an insane rate, doing nothing until a little more writable buffer frees up:
public void read(SelectionKey key) throws IOException {
    // prior buffer that may or may not have been fully drained by the prior write attempt
    ByteBuffer b = (ByteBuffer) buffers.get(readable);
    int bytes_read = readable.read(b);
    b.flip();
    if (bytes_read > 0 || b.position() > 0) writeable.write(b);
    b.compact();
}
So say we can read from the socket at a gigabit, but the receiver is only draining our writable socket at 100 kilobits; we might loop a million times between each little piece of data we can write to the client, since they just aren't consuming the data in the socket buffers as fast as we'd like.
If I did this with a thread there would be no issue, since we would block on the write call. But with NIO, what are you supposed to do to skip the read notifications?
I figured this out. I still didn't find docs pointing me at this, but after considering things more I realized this is what the OP_WRITE scenario is for.
So when the writable side reports that it accepted 0 bytes, I deregister OP_READ on the readable channel and register for OP_WRITE on the writable channel. Once I am notified that it's writable, I swap the read/write channels back, including their registrations, dropping OP_WRITE again.
So when a write can't be done, deregister the read that is triggering you to try to write, and register for a write notification instead. Once you get it, swap the registrations back.
Things work fine now.
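The registration swap described above comes down to flipping interest ops on the two keys. A minimal sketch (the helper names are made up, and a Pipe's two ends stand in for the two sockets):

```java
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

public class InterestSwap {
    // Called after a write accepted 0 bytes: stop reading, wait for writability.
    static void awaitWritable(SelectionKey readKey, SelectionKey writeKey) {
        readKey.interestOps(readKey.interestOps() & ~SelectionKey.OP_READ);
        writeKey.interestOps(writeKey.interestOps() | SelectionKey.OP_WRITE);
    }

    // Called once the selector reports OP_WRITE: flush pending data, swap back.
    static void resumeReading(SelectionKey readKey, SelectionKey writeKey) {
        writeKey.interestOps(writeKey.interestOps() & ~SelectionKey.OP_WRITE);
        readKey.interestOps(readKey.interestOps() | SelectionKey.OP_READ);
    }

    // Returns the interest ops after each phase, for demonstration.
    static int[] demo() throws Exception {
        Pipe pipe = Pipe.open();              // stands in for the two sockets
        pipe.source().configureBlocking(false);
        pipe.sink().configureBlocking(false);
        Selector sel = Selector.open();
        SelectionKey readKey = pipe.source().register(sel, SelectionKey.OP_READ);
        SelectionKey writeKey = pipe.sink().register(sel, 0);

        awaitWritable(readKey, writeKey);     // write-driven mode
        int r1 = readKey.interestOps(), w1 = writeKey.interestOps();
        resumeReading(readKey, writeKey);     // back to read-driven mode
        int r2 = readKey.interestOps(), w2 = writeKey.interestOps();
        return new int[] { r1, w1, r2, w2 };
    }

    public static void main(String[] args) throws Exception {
        for (int ops : demo()) System.out.println(ops); // 0, 4 (OP_WRITE), 1 (OP_READ), 0
    }
}
```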
I'm creating a server for my game, but this question is more Java related. I want to check whether there is an incoming socket connection, but I still want to keep the game running, because the server is hosted by a user rather than run separately as an external program. So I want to check whether someone is connecting via a socket. What I have now is:
public void updateConnection() throws IOException {
    Socket connection = server.accept();
    System.out.println("Ape is connecting");
    ServerClient client = new ServerClient(connection);
    clientsWaiting.add(client);
}
I want this method to be called every frame rather than checking continuously, if that's possible. If it isn't possible, what else should I use to create my server and check each frame whether someone is connecting?
Your best bet would be to have your game check for incoming socket connections in a separate thread. You could create a Runnable that just listens for connections continuously.
When you check for an incoming connection with Socket connection = server.accept();, what actually happens is that you block that particular thread until you receive a connection. This will cause your code to stop executing. The only way around this is parallelization: you can handle all of your networking tasks on one thread while handling your game logic and rendering on another.
Be aware, though, that writing code to run on multiple threads has many pitfalls. Java provides some tools to minimize the potential problems, but it is up to you, the programmer, to ensure that your code is thread safe. Going into detail about the many concerns of parallel programming is beyond the scope of this question; I suggest you do a bit of research on it, because bugs that arise from this type of programming are sometimes hard to reproduce and track down.
Now that I have given you this disclaimer, to use Runnable to accomplish what you are trying to do, you could do something similar to this:
Runnable networkListener = () -> {
    // declare and instantiate the ServerSocket here
    while (true) {
        try {
            Socket connection = server.accept();
            // whatever you would like to do with the connection goes here
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
};
Thread networkThread = new Thread(networkListener);
networkThread.start();
You would place that before your game loop, and it would spawn a thread that listens for connections without interrupting your game. There are a lot of good idioms out there for handling Sockets, such as tracking them with thread pools or spawning a new thread each time a new connection is made, so I suggest you do some research on that as well.
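One common shape for this, sketched under the assumption of a plain blocking ServerSocket: a single listener thread blocks on accept() and hands each connection to a thread pool, so neither the game loop nor existing clients are held up (the class name and the per-connection handler are made up for illustration):

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AcceptLoop {
    static boolean demo() throws Exception {
        ServerSocket server = new ServerSocket(0); // bind to any free port
        ExecutorService pool = Executors.newFixedThreadPool(4);

        // Listener thread: blocks on accept() without stalling the game loop.
        Thread listener = new Thread(() -> {
            try {
                Socket connection = server.accept();
                pool.submit(() -> {
                    // per-connection handling would go here
                    try { connection.close(); } catch (IOException ignored) { }
                });
            } catch (IOException ignored) { }
        });
        listener.start();

        // "Game" side: connect once to show the listener picks it up.
        Socket client = new Socket("127.0.0.1", server.getLocalPort());
        listener.join();
        boolean ok = client.isConnected();
        client.close();
        pool.shutdown();
        server.close();
        return ok;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo()); // prints true
    }
}
```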
Good luck to you, this isn't an easy road you are about to venture down.
One more addition: when you establish a TCP connection you are not dealing with frames (UDP is the datagram-based protocol); you are dealing with a stream of bytes.
A lower-level example using ByteArrayOutputStream:
InputStream inputStream = socket.getInputStream();

// read from the stream
ByteArrayOutputStream baos = new ByteArrayOutputStream();
byte[] content = new byte[2048];
int bytesRead = -1;
while ((bytesRead = inputStream.read(content)) != -1) {
    baos.write(content, 0, bytesRead);
}
So if the client finishes writing but the stream is still open, your read method blocks. If you expect certain data from the client, you read it and then call your print method, or notify whoever needs to know, etc.
I have this weird problem with my (multithreaded) server: when more than 500 players are connected simultaneously, the PrintWriter sometimes takes more than 100 seconds (up to 2 minutes) to finish flush() or print().
Here is the code:
public static void send(Player p, String packet)
{
    PrintWriter out = p.get_out();
    if (out != null && !packet.equals("") && !packet.equals("" + (char) 0x00))
    {
        packet = Crypter.toUtf(packet);
        out.print(packet + (char) 0x00);
        out.flush();
    }
}
The PrintWriter is created something like this:
_in = new BufferedReader(new InputStreamReader(_socket.getInputStream()));
_out = new PrintWriter(_socket.getOutputStream());
If I add the synchronized keyword to the send() method, the whole server starts to lag every 2 seconds; if I don't, some random player starts to lag for no reason.
Does anyone have any idea where this is coming from? What should I do?
The print writer is wrapped around a socket's output stream, so I'm going to guess and say that the socket's output buffer is full, and so the write/flush call will block until the buffer has enough room to accommodate the message being sent.
The socket send buffer may become full if data is being written to it faster than it can be transmitted to the client (or faster than the client can receive it).
Edit:
P.S. If you're having scalability problems, it may be due to using java.io (which requires one thread per socket) instead of java.nio (in which case a single thread can detect and perform operations on those sockets which have pending data). nio is intended to support applications which must scale to a large number of connections, but the programming model is more difficult.
The reason is that your send() method is static, so all threads that write to any socket are being synchronized on the containing class object. Make it non-static; then only threads writing to the same socket will be synchronized.
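An alternative that makes the lock granularity explicit is to synchronize on the per-connection writer itself, so only threads sending to the same socket contend. A minimal sketch, with a StringWriter standing in for the socket stream (the names are illustrative, not the original server's code):

```java
import java.io.PrintWriter;
import java.io.StringWriter;

public class PerConnectionLock {
    static void send(PrintWriter out, String packet) {
        // Lock only this connection's writer, not a shared class-level lock,
        // so writes to different sockets never contend with each other.
        synchronized (out) {
            out.print(packet + (char) 0x00);
            out.flush();
        }
    }

    static int demo() throws Exception {
        StringWriter sink = new StringWriter(); // stands in for the socket stream
        PrintWriter out = new PrintWriter(sink);
        Thread a = new Thread(() -> { for (int i = 0; i < 100; i++) send(out, "abc"); });
        Thread b = new Thread(() -> { for (int i = 0; i < 100; i++) send(out, "xyz"); });
        a.start(); b.start();
        a.join(); b.join();
        return sink.toString().length();        // 200 packets of 4 chars each
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo()); // prints 800
    }
}
```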
I'm developing the first part of an Android application that broadcasts a video stream over the network. Currently I'm sending the video in a very direct way, like this:
Socket socket = new Socket(InetAddress.getByName(hostname), port);
ParcelFileDescriptor pfd = ParcelFileDescriptor.fromSocket(socket);
recorder.setOutputFile(pfd.getFileDescriptor());
But unfortunately it is not very fluid. I want to buffer the data stream before sending it through the socket. One way I tried is to write the stream to a file using the Android media-recording API and to use another thread to stream the file to the server on a computer.
So my problem is: how can I send, over a socket, a file that is still being written?
Since BufferedInputStream does not give me a read that blocks until enough data is available, I tried things like the following, but without any success:
while (inputStream.available() >= BUFFER_SIZE) {
    inputStream.read(buffer);
    outputStream.write(buffer);
}
outputStream.flush();
But when I do that, if the network is faster than the data stream, I quickly fall out of the loop.
Is there a 'good' way to do this? I thought about active waiting, but it is not a good solution, especially on mobile. Another way is to do something like this:
while (true) {
    while (inputStream.available() < BUFFER_SIZE) {
        wait(TIME);
    }
    inputStream.read(buffer);
    outputStream.write(buffer);
}
outputStream.flush();
But that seems quite dirty to me... Is there a sleeker solution?
What I do in these situations is simply fill up a byte array (my buffer) until either I've hit the end of the data I'm about to transmit or the buffer is full, at which point the buffer is ready to be passed to my socket transmission logic. Admittedly, I don't do this for video or audio, only for "regular" data.
Something worth noting is that this gives a "janky" user experience to the recipient of the data (it might look like the network stops for short periods and then runs normally again: the time the buffer takes to fill up). So if you have to use a buffered approach on video or audio, be careful about what buffer size you decide to work with.
For things like video, my experience has been to use streaming-based logic rather than buffering, but you apparently have some different and interesting requirements.
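The fill-then-transmit idea can be sketched like this (a hypothetical helper; byte-array streams stand in for the media source and the socket):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class ChunkedCopy {
    // Fill the buffer until it is full or the source is exhausted, and hand
    // each full chunk (plus the trailing partial one) to the transmit side.
    static void copyInChunks(InputStream in, OutputStream out, int bufferSize) throws IOException {
        byte[] buffer = new byte[bufferSize];
        int filled = 0;
        int n;
        while ((n = in.read(buffer, filled, buffer.length - filled)) != -1) {
            filled += n;
            if (filled == buffer.length) {    // buffer full: transmit it
                out.write(buffer, 0, filled);
                filled = 0;
            }
        }
        if (filled > 0) out.write(buffer, 0, filled); // end of data: flush remainder
    }

    static int demo() throws IOException {
        ByteArrayInputStream src = new ByteArrayInputStream(new byte[1000]); // fake media data
        ByteArrayOutputStream dst = new ByteArrayOutputStream();             // fake socket
        copyInChunks(src, dst, 256);
        return dst.size();
    }

    public static void main(String[] args) throws IOException {
        System.out.println(demo()); // prints 1000
    }
}
```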
I can't think of a pretty way of doing this, but one option might be to create a local socket pair, use the 'client' end of the pair as the MediaRecorder output fd, and buffer between the local server socket and the remote server. This way, you can block on the local server end until there is data.
Another possibility is to use a file-based pipe/FIFO (so the disk doesn't fill up), but I can't remember whether the Java layer exposes mkfifo functionality.
In any event, you probably want to look at FileReader, since reads on that should block.
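On the plain-Java side, the pipe idea can be sketched with PipedOutputStream/PipedInputStream, whose reads do block until the writer produces data (a minimal sketch; on Android the local socket pair mentioned in this answer would play the producer role):

```java
import java.io.ByteArrayOutputStream;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;

public class BlockingPipe {
    static String demo() throws Exception {
        PipedOutputStream producer = new PipedOutputStream();
        PipedInputStream consumer = new PipedInputStream(producer);

        // Writer thread stands in for the recorder producing data.
        Thread writer = new Thread(() -> {
            try {
                producer.write("frame-data".getBytes("UTF-8"));
                producer.close();
            } catch (Exception ignored) { }
        });
        writer.start();

        // read() blocks until data arrives, so there is no busy-waiting
        // on available() as in the question's loops.
        ByteArrayOutputStream received = new ByteArrayOutputStream();
        byte[] buf = new byte[32];
        int n;
        while ((n = consumer.read(buf)) != -1) {
            received.write(buf, 0, n);
        }
        return received.toString("UTF-8");
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo()); // prints frame-data
    }
}
```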
Hope this helps,
Phil Lello