Netty echo server can't handle 100+ channels - java

I want to create a chat server that would handle 100-500 users in different rooms.
I decided to use the Netty framework because of its event-based architecture (which is very familiar to me). I started with a small server that responds "NYA" to everything it receives.
main:
public class ChatServer{
public static void main(String[] args) throws Exception {
ExecutorService bossExec = new OrderedMemoryAwareThreadPoolExecutor(1, 400000000, 2000000000, 60, TimeUnit.SECONDS);
ExecutorService ioExec = new OrderedMemoryAwareThreadPoolExecutor(4 /* number of worker threads */, 400000000, 2000000000, 60, TimeUnit.SECONDS);
ChannelFactory factory = new NioServerSocketChannelFactory(bossExec,ioExec);
ServerBootstrap bootstrap = new ServerBootstrap(factory);
bootstrap.setPipelineFactory(new ChannelPipelineFactory(){
public ChannelPipeline getPipeline(){
return Channels.pipeline(new ChatChannelHandler());
}
});
bootstrap.setOption("child.tcpNoDelay", true);
bootstrap.setOption("child.keepAlive", true);
bootstrap.setOption("reuseAddress", true);
bootstrap.bind(new InetSocketAddress(5555));
System.out.println("ChatServer started at last...");
}
}
handler:
public class ChatChannelHandler extends SimpleChannelUpstreamHandler {
private final ChannelBuffer messageBuffer = ChannelBuffers.dynamicBuffer();
private boolean processingMessage = false;
private short messageLength;
@Override
public void channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e) throws Exception {
System.out.println("Connection from: " + e.getChannel().getRemoteAddress().toString());
}
@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
e.getChannel().write(ChannelBuffers.copiedBuffer("NYA!", CharsetUtil.UTF_8));
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {
System.out.println("Exception: " + e.getCause());
Channel ch = e.getChannel();
ch.close();
}
}
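Incidentally, the unused messageBuffer and messageLength fields suggest the handler was meant to reassemble the two-byte length prefix that the simulator below writes before each UTF-8 payload. In Netty 3 that framing is usually delegated to a LengthFieldBasedFrameDecoder placed ahead of the handler; a minimal sketch of such a pipeline, assuming the simulator's two-byte big-endian prefix (the 1024-byte max frame length is an arbitrary choice here):
bootstrap.setPipelineFactory(new ChannelPipelineFactory(){
public ChannelPipeline getPipeline(){
return Channels.pipeline(
// strip the 2-byte length prefix and emit exactly one frame per message
new LengthFieldBasedFrameDecoder(1024, 0, 2, 0, 2),
// decode each frame into a UTF-8 String before it reaches the handler
new StringDecoder(CharsetUtil.UTF_8),
new ChatChannelHandler());
}
});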
I use a simulation program which creates N connections and then randomly writes stuff to the server with a 1-5 second pause. Then I connect with my Flash-based client to see how it is working. The problem is that with 10-90 simulated connections my Flash client immediately gets responses from the server, but when the number of simulated connections exceeds 100, my Flash client remains silent. I simply don't understand why.
I figured out that all messages from my 100+ clients get into the buffer, but the messageReceived event is simply not fired for them. It looks like the event queue reaches some limit of registered events, or something like that.
It really saddens me, because I've read about even simpler servers handling 1k+ requests per second.
I work under Windows 7, if that matters. Also, my server never uses more than 2-3% of the CPU.
My simulation generator:
public class ClientLoadEmulation implements Runnable {
private String host;
private int port;
private static final int minStringLength = 5;
private static final int maxStringLength = 40;
private static final int minPause = (int) (1 * 1000);
private static final int maxPause = (int) (5 * 1000);
Random rand = new Random();
public ClientLoadEmulation(String host, int port, int numThreads) {
this.host = "192.168.0.100";
this.port = 5555;
for (int i = 0; i < numThreads; ++i) {
new Thread(this).start();
try {
Thread.sleep(100);
} catch (Exception ex) {
}
}
}
@Override
public void run() {
byte buffer[] = new byte[255];
try {
Socket s = new Socket(host, port);
InputStream in = s.getInputStream();
OutputStream out = s.getOutputStream();
while (true) {
String govno = "";
...STRING GENERATION STUFF CUT...
ByteBuffer LENGTH_BUFFER = ByteBuffer.allocate(2);
LENGTH_BUFFER.putShort((short) govno.length());
LENGTH_BUFFER.flip();
out.write(LENGTH_BUFFER.array());
out.write(govno.getBytes("UTF-8"));
//System.out.println(Thread.currentThread() + " wrote " + govno);
int pause = minPause
+ (int) (rand.nextDouble() * (maxPause - minPause));
try {
Thread.sleep(pause);
} catch (InterruptedException ie) {
}
}
} catch (IOException ie) {
System.out.println("ERROR: " + ie.getMessage());
}
}
static public void main(String args[]) throws Exception {
new ClientLoadEmulation("127.0.0.1", 5555, 100);
}
}
(Sorry for my English skills)

I'm not sure about your sample, but when I tried to run the ChatServer you provided, it did not wait for a client connection but quit instead.
Moreover, I don't see the port 5555 setup there at all (the one expected by the client).
Please post a working sample to check.

Related

Identifying a bottleneck in a multithreaded MQTT publisher

I'm currently developing an MQTT client service with Eclipse Paho for a larger piece of software, and I'm encountering performance issues. I'm getting a bunch of events that I want to publish to the broker, and I'm using GSON to serialize those events. I've multithreaded the serialization and publishing. According to a rudimentary benchmark, serialization and publishing take up to 1 ms.
I'm using an ExecutorService with a fixed threadpool of size 10 (for now).
My code is currently submitting about 50 Runnables per second to the ExecutorService, but my Broker reports only about 5-10 messages per second.
I've previously benchmarked my MQTT setup and managed to send about 9000+ MQTT messages per second in a non-multithreaded way.
Does the thread pool have so much overhead that I can only get this small number of publishes out of it?
public class MqttService implements IMessagingService{
protected int PORT = 1883;
protected String HOST = "localhost";
protected final String SERVICENAME = "MQTT";
protected static final String COMMANDTOPIC = "commands";
protected static final String REMINDSPREFIX = "Reminds/";
protected static final String VIOLATIONTOPIC = "violations/";
protected static final String WILDCARDTOPIC = "Reminds/#";
protected static final String TCPPREFIX = "tcp://";
protected static final String SSLPREFIX = "ssl://";
private MqttClient client;
private MqttConnectOptions optionsPublisher = new MqttConnectOptions();
private String IDPublisher = MqttClient.generateClientId(); // client id used in connect()
private ExecutorService pool = Executors.newFixedThreadPool(10);
public MqttService() {
this("localhost", 1883);
}
public MqttService(String host, int port) {
this.HOST = host;
this.PORT = port;
}
@Override
public void setPort(int port) {
this.PORT = port;
}
@Override
public void setHost(String host) {
this.HOST = host;
}
@Override
public void sendMessage(AbstractMessage message) {
pool.submit(new SerializeJob(client,message));
}
@Override
public void connect() {
try {
client = new MqttClient(TCPPREFIX + HOST + ":" + PORT, IDPublisher);
optionsPublisher.setMqttVersion(MqttConnectOptions.MQTT_VERSION_3_1_1);
client.connect(optionsPublisher);
client.setCallback(new MessageCallback());
client.subscribe(WILDCARDTOPIC, 0);
} catch (MqttException e1) {
e1.printStackTrace();
}
}
}
The following code is the Runnable executed by the ExecutorService. It should not be an issue by itself, though, since it only takes 1-2 ms to finish.
class SerializeJob implements Runnable {
private AbstractMessage message;
private MqttClient client;
public SerializeJob(MqttClient client, AbstractMessage m) {
this.client = client;
this.message = m;
}
@Override
public void run() {
String serializedMessage = MessageSerializer.serializeMessage(message);
MqttMessage wireMessage = new MqttMessage();
wireMessage.setQos(message.getQos());
wireMessage.setPayload(serializedMessage.getBytes());
if (client.isConnected()) {
StringBuilder topic = new StringBuilder();
topic.append(MqttService.REMINDSPREFIX);
topic.append(MqttService.VIOLATIONTOPIC);
try {
client.publish(topic.toString(), wireMessage);
} catch (MqttPersistenceException e) {
e.printStackTrace();
} catch (MqttException e) {
e.printStackTrace();
}
}
}
}
I'm not quite sure what's holding me back. MQTT itself seems to allow for a lot of events, which can also have a big payload, and the network can't possibly be an issue either, since I'm currently hosting the broker locally on the same machine as the client.
Edit with further testing:
I've now synthetically benchmarked my own setup, which consisted of a locally hosted HiveMQ and a Mosquitto broker running "natively" on the machine. Using the Paho libraries I sent increasingly bigger messages in batches of 1000. For each batch I calculated the throughput in messages from the first to the last message. This scenario didn't use any multithreading. With this I came up with a performance chart (image not reproduced here).
The machine running both the client and the brokers is a desktop with an i7 6700 and 32 GB of RAM. The brokers had access to all cores and 8 GB of memory for their VM.
For benchmarking I used the following code:
import java.util.Random;
import org.eclipse.paho.client.mqttv3.IMqttDeliveryToken;
import org.eclipse.paho.client.mqttv3.MqttCallback;
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.MqttException;
import org.eclipse.paho.client.mqttv3.MqttMessage;
import org.eclipse.paho.client.mqttv3.MqttPersistenceException;
public class MqttBenchmarker {
protected static int PORT = 1883;
protected static String HOST = "localhost";
protected final String SERVICENAME = "MQTT";
protected static final String COMMANDTOPIC = "commands";
protected static final String REMINDSPREFIX = "Reminds/";
protected static final String VIOLATIONTOPIC = "violations/";
protected static final String WILDCARDTOPIC = "Reminds/#";
protected static final String TCPPREFIX = "tcp://";
protected static final String SSLPREFIX = "ssl://";
private static MqttClient client;
private static MqttConnectOptions optionsPublisher = new MqttConnectOptions();
private static String IDPublisher = MqttClient.generateClientId();
private static int messageReceived = 0;
private static long timesent = 0;
private static int count = 2;
private static StringBuilder out = new StringBuilder();
private static StringBuilder in = new StringBuilder();
private static final int runs = 1000;
private static boolean receivefinished = false;
public static void main(String[] args) {
connect();
Thread sendThread=new Thread(new Runnable(){
@Override
public void run() {
Random rd = new Random();
for (int i = 2; i < 1000000; i += i) {
byte[] arr = new byte[i];
// System.out.println("Starting test run for byte Array of size:
// "+arr.length);
long startt = System.currentTimeMillis();
System.out.println("Test for size: " + i + " started.");
for (int a = 0; a <= runs; a++) {
rd.nextBytes(arr);
try {
client.publish(REMINDSPREFIX, arr, 1, false);
} catch (MqttPersistenceException e) {
// TODO Auto-generated catch block
e.printStackTrace();
} catch (MqttException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
try {
while (!receivefinished) {
Thread.sleep(10);
}
receivefinished = false;
System.out.println("Test for size: " + i + " finished.");
out.append("Sending Payload size: " + arr.length + " achieved "
+ runs / ((System.currentTimeMillis() - startt) / 1000d) + " messages per second.\n");
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
System.out.println(out.toString());
System.out.println(in.toString());
}
});
sendThread.start();
}
private static class MessageCallback implements MqttCallback {
@Override
public void messageArrived(String arg0, MqttMessage arg1) throws Exception {
if (messageReceived == 0) {
timesent = System.currentTimeMillis();
}
messageReceived++;
if (messageReceived >= runs) {
receivefinished = true;
in.append("Receiving payload size " + count + " achieved "
+ runs / ((System.currentTimeMillis() - timesent) / 1000d) + " messages per second.\n");
count += count;
messageReceived = 0;
}
}
@Override
public void deliveryComplete(IMqttDeliveryToken arg0) {
// TODO Auto-generated method stub
}
@Override
public void connectionLost(Throwable arg0) {
// TODO Auto-generated method stub
}
}
public static void connect() {
try {
client = new MqttClient(TCPPREFIX + HOST + ":" + PORT, IDPublisher);
optionsPublisher.setMqttVersion(MqttConnectOptions.MQTT_VERSION_3_1_1);
optionsPublisher.setAutomaticReconnect(true);
optionsPublisher.setCleanSession(false);
optionsPublisher.setMaxInflight(65535);
client.connect(optionsPublisher);
while (!client.isConnected()) {
try {
Thread.sleep(50);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
client.setCallback(new MessageCallback());
client.subscribe(WILDCARDTOPIC, 0);
} catch (MqttException e1) {
e1.printStackTrace();
}
}
}
What's weird is that the serialized messages I want to send from my application only use about 4000 bytes. So the theoretical throughput should be around 200 messages per second. Could this still be a problem caused by longer computations inside the callback function? I've already achieved far better results with the Mosquitto broker, and I will test further how far I can push performance with it.
Thanks for any suggestions!
One problem is in the test setup of the MQTT client.
You are using just one single MQTT client. What you are effectively testing is the size of the MQTT inflight window with this formula:
throughput <= inflight window-size / round-trip time
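To put rough numbers on that formula (illustrative values, not measurements): with an inflight window of 10 messages and a 1 ms local round-trip time, a single client tops out at 10 / 0.001 = 10,000 messages per second; with the window throttled to 5 and an effective round-trip time of 500 ms under load, the same formula yields only 10 messages per second, which is the order of magnitude observed above.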
HiveMQ has a property enabled by default, called <cluster-overload-protection>, that limits the inflight window of a single client.
Additionally the paho client is not really suited for high throughput work in a multi threaded context. A better implementation for high performance scenarios would be the HiveMQ MQTT Client.
With 20 clients connected (10 Publishing and 10 receiving), I reach a sustained throughput of around 6000 qos=1 10kb messages per second.
Disclaimer: I work as a software developer for HiveMQ.
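A rough way to approximate the multi-client pattern described above with Paho itself (a sketch; the broker URL handling, client count, and round-robin strategy are illustrative assumptions, not part of the original setup):
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttException;
public class MultiClientPublisher {
private final MqttClient[] clients;
private int next = 0;
public MultiClientPublisher(String brokerUrl, int count) throws MqttException {
clients = new MqttClient[count];
for (int i = 0; i < count; i++) {
// each client gets its own connection, and therefore its own inflight window
clients[i] = new MqttClient(brokerUrl, MqttClient.generateClientId());
clients[i].connect();
}
}
// spread publishes round-robin across the connected clients
public synchronized void publish(String topic, byte[] payload, int qos) throws MqttException {
clients[next].publish(topic, payload, qos, false);
next = (next + 1) % clients.length;
}
}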
I'm still not exactly sure what the issue was, but switching my broker from HiveMQ to Mosquitto seems to have fixed it. Maybe Mosquitto has a different set of default settings compared to HiveMQ, or maybe the free trial version of HiveMQ is limited in some other way than just the number of connected clients.
In any case, Mosquitto worked much better for me and handled all the messages I could throw at it from my application.

Getting ClassNotFound Exception in Flink SourceFunction

I'm using protocol buffers to send a stream of data to Apache Flink.
I have two classes: one is the Producer and one is the Consumer.
The Producer is a Java thread class which reads the data from a socket, deserializes it with protobuf, and then stores it in my BlockingQueue.
The Consumer is a class which implements SourceFunction in Flink.
I tested this program using:
DataStream<Event.MyEvent> stream = env.fromCollection(queue);
instead of the custom source, and it works fine.
But when I try to use the SourceFunction class, it throws this exception:
Caused by: java.lang.RuntimeException: Unable to find proto buffer class
at com.google.protobuf.GeneratedMessageLite$SerializedForm.readResolve(GeneratedMessageLite.java:775)
...
Caused by: java.lang.ClassNotFoundException: event.Event$MyEvent
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
...
In another attempt I merged both classes into one (the class which implements SourceFunction): I get data from the socket, deserialize it with protobuf, store it in the BlockingQueue, and then read from the BlockingQueue right after that. My code works fine with this approach too.
But I want to use two separate classes (multi-threading), and then it throws that exception.
I've been trying to solve this for the last 2 days and have also done lots of searching, but no luck.
Any help would be appreciated.
Producer:
public class Producer implements Runnable {
Boolean running = true;
Socket socket = null, bufferSocket = null;
PrintStream ps = null;
BlockingQueue<Event.MyEvent> queue;
final int port;
public Producer(BlockingQueue<Event.MyEvent> queue, int port){
this.port = port;
this.queue = queue;
}
@Override
public void run() {
try {
socket = new Socket("127.0.0.1", port);
bufferSocket = new Socket(InetAddress.getLocalHost(), 6060);
ps = new PrintStream(bufferSocket.getOutputStream());
while (running) {
queue.put(Event.MyEvent.parseDelimitedFrom(socket.getInputStream()));
ps.println("Items in Queue: " + queue.size());
}
}catch (Exception e){
e.printStackTrace();
}
}
}
Consumer:
public class Consumer implements SourceFunction<Event.MyEvent> {
Boolean running = true;
BlockingQueue<Event.MyEvent> queue;
Event.MyEvent event;
public Consumer(BlockingQueue<Event.MyEvent> queue){
this.queue = queue;
}
@Override
public void run(SourceContext<Event.MyEvent> sourceContext) {
try {
while (running) {
event = queue.take();
sourceContext.collect(event);
}
}catch (Exception e){
e.printStackTrace();
}
}
@Override
public void cancel() {
running = false;
}
}
Event.MyEvent is my protobuf class. I'm using version 2.6.1 and I compiled the classes with v2.6.1. I double-checked the versions to be sure that's not the problem.
The Producer class is working fine.
I tested this with both Flink v1.1.3 and v1.1.4.
I'm running it in local mode.
EDIT: Answer was included in question, posted it separately and removed it here.
UPDATE 12/28/2016
...
But I'm still curious. What is causing this error? Is it a bug in Flink or am I doing something wrong?
...
The asker already found a way to make this work. I have extracted the relevant part from the question. Note that the reason why it happened remains unexplained.
I did not use quote syntax as it is a lot of text, but the below was shared by the asker:
So finally I got it to work. I created my BlockingQueue object inside the SourceFunction (Consumer) and called the Producer class from inside the SourceFunction class (Consumer), instead of creating the BlockingQueue and calling the Producer class in the main method of the program. And now it works!
Here's my full working code in Flink:
public class Main {
public static void main(String[] args) throws Exception {
final int port, buffer;
//final String ip;
try {
final ParameterTool params = ParameterTool.fromArgs(args);
port = params.getInt("p");
buffer = params.getInt("b");
} catch (Exception e) {
System.err.println("No port number and/or buffer size specified.");
return;
}
final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
DataStream<Event.MyEvent> stream = env.addSource(new Consumer(port, buffer));
//DataStream<Event.MyEvent> stream = env.fromCollection(queue);
Pattern<Event.MyEvent, ?> crashedPattern = Pattern.<Event.MyEvent>begin("start")
.where(new FilterFunction<Event.MyEvent>() {
@Override
public boolean filter(Event.MyEvent myEvent) throws Exception {
return (myEvent.getItems().getValue() >= 120);
}
})
.<Event.MyEvent>followedBy("next").where(new FilterFunction<Event.MyEvent>() {
@Override
public boolean filter(Event.MyEvent myEvent) throws Exception {
return (myEvent.getItems().getValue() <= 10);
}
})
.within(Time.seconds(3));
PatternStream<Event.MyEvent> crashed = CEP.pattern(stream.keyBy(new KeySelector<Event.MyEvent, String>() {
@Override
public String getKey(Event.MyEvent myEvent) throws Exception {
return myEvent.getEventType();
}
}), crashedPattern);
DataStream<String> alarm = crashed.select(new PatternSelectFunction<Event.MyEvent, String>() {
@Override
public String select(Map<String, Event.MyEvent> pattern) throws Exception {
Event.MyEvent start = pattern.get("start");
Event.MyEvent next = pattern.get("next");
return start.getEventType() + " | Speed from " + start.getItems().getValue() + " to " + next.getItems().getValue() + " in 3 seconds\n";
}
});
DataStream<String> rate = alarm.windowAll(TumblingProcessingTimeWindows.of(Time.seconds(1)))
.apply(new AllWindowFunction<String, String, TimeWindow>() {
@Override
public void apply(TimeWindow timeWindow, Iterable<String> iterable, Collector<String> collector) throws Exception {
int sum = 0;
for (String s: iterable) {
sum++;
}
collector.collect ("CEP Output Rate: " + sum + "\n");
}
});
rate.writeToSocket(InetAddress.getLocalHost().getHostName(), 7070, new SimpleStringSchema());
env.execute("Flink Taxi Crash Streaming");
}
private static class Producer implements Runnable {
Boolean running = true;
Socket socket = null, bufferSocket = null;
PrintStream ps = null;
BlockingQueue<Event.MyEvent> queue;
final int port;
Producer(BlockingQueue<Event.MyEvent> queue, int port){
this.port = port;
this.queue = queue;
}
@Override
public void run() {
try {
socket = new Socket("127.0.0.1", port);
bufferSocket = new Socket(InetAddress.getLocalHost(), 6060);
ps = new PrintStream(bufferSocket.getOutputStream());
while (running) {
queue.put(Event.MyEvent.parseDelimitedFrom(socket.getInputStream()));
ps.println("Items in Queue: " + queue.size());
}
}catch (Exception e){
e.printStackTrace();
}
}
}
private static class Consumer implements SourceFunction<Event.MyEvent> {
Boolean running = true;
final int port;
BlockingQueue<Event.MyEvent> queue;
Consumer(int port, int buffer){
queue = new ArrayBlockingQueue<>(buffer);
this.port = port;
}
@Override
public void run(SourceContext<Event.MyEvent> sourceContext) {
try {
new Thread(new Producer(queue, port)).start();
while (running) {
sourceContext.collect(queue.take());
}
}catch (Exception e){
e.printStackTrace();
}
}
@Override
public void cancel() {
running = false;
}
}
}

Why is my Java UDP server listener not successfully listening to a UDP server if the UDP server starts after my code?

I have a thread that is listening to a UDP server. It works well unless (1) the UDP server is started after my application, or (2) the UDP server is restarted while my application is running. In either of those cases, my listener will not connect to the server any more.
It seems stuck in the awaitRequests() method, but I am not sure.
Where is the code breaking?
package myPackage;
public class UDPServerListener implements Runnable {
private volatile boolean run = true;
private byte[] receiveBuffer;
private int receiveBufferSize;
private InetSocketAddress myInetSocketAddress;
private DatagramSocket myDatagramSocket;
private DatagramPacket myDatagramPacket;
@Override
public void run() {
try {
// Create and bind a new DatagramSocket
myDatagramSocket = new DatagramSocket(null);
myInetSocketAddress = new InetSocketAddress(123456);
myDatagramSocket.setReuseAddress(true);
myDatagramSocket.bind(myInetSocketAddress);
// Set-up the receive buffer
receiveBuffer = new byte[2047];
myDatagramPacket = new DatagramPacket(receiveBuffer, 2047);
awaitRequests(myDatagramSocket, myDatagramPacket);
} catch (SocketException se) {
} catch (IOException ioe) {
} catch (InterruptedException ie) {
}
}
private void awaitRequests(DatagramSocket myDatagramSocket, DatagramPacket myDatagramPacket) throws SocketException, IOException, InterruptedException{
int maxRetries = 5;
while (run){
try {
myDatagramSocket.receive(myDatagramPacket);
byte[] data = myDatagramPacket.getData();
maxRetries = 5;
process(data);
} catch (SocketException se){
maxRetries--;
if(maxRetries == 0) throw se;
Thread.sleep(5000);
reconnect(myDatagramSocket);
}
}
}
private void process(byte[] data) throws SocketException {
receiveBufferSize = myDatagramPacket.getLength();
// Do stuff with the received data ...
myDatagramPacket.setLength(2047);
}
private void reconnect(DatagramSocket myDatagramSocket) throws SocketException{
myDatagramSocket.bind(myInetSocketAddress);
}
public boolean isRun() {
return run;
}
public void setRun(boolean run) {
this.run = run;
}
}
First of all:
myInetSocketAddress = new InetSocketAddress(123456);
is invalid: the maximum port number is 65535, so you're creating an invalid address.
I would go for something more like this:
import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.SocketException;
public class TestUdp {
public static void main( String[] args )
{
boolean run = true;
DatagramSocket sock = null;
try {
sock = new DatagramSocket(50001);
} catch (SocketException e) {
e.printStackTrace();
return;
}
byte[] buff = new byte[2000];
DatagramPacket dp = new DatagramPacket(buff, buff.length);
while (run) {
try {
sock.receive(dp);
} catch (IOException e) {
break;
}
dp.getData();
System.out.println("Received: " + dp.getLength() + " bytes");
}
}
}
The DatagramSocket constructor already binds to the port, and there is no need for a "reconnect()" like yours: UDP is not connection-oriented, so once you have a process bound to a UDP port, it's there.
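To see that the listener's lifetime really is independent of the sender's, a throwaway sender can be pointed at the same port (a sketch; the port and payload are illustrative and should match whatever the listener binds to):
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
public class TestUdpSender {
public static void main(String[] args) throws Exception {
byte[] payload = "hello".getBytes("UTF-8");
DatagramSocket sock = new DatagramSocket();
// a single datagram: no connection setup or teardown is involved,
// so it does not matter whether the receiver was started first
sock.send(new DatagramPacket(payload, payload.length, InetAddress.getLoopbackAddress(), 50001));
sock.close();
}
}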

Device discovery over Wifi for defined period of time

I am writing code to send a UDP multicast over WiFi from my mobile device. There is server code running on other devices in the network. The servers will listen for the multicast and respond with their IP address and system type (Type: Computer, Mobile Device, Raspberry Pi, Flyports, etc.).
On the mobile device which sent the UDP multicast, I need to get the list of the devices responding to it.
For this I have created a class which will work as the structure of the device details.
DeviceDetails.class
public class DeviceDetails
{
String DeviceType;
String IPAddr;
public DeviceDetails(String type, String IP)
{
this.DeviceType=type;
this.IPAddr=IP;
}
}
I am sending the UDP multicast packet to the group address 225.4.5.6 and port number 5432.
I have made a class which calls a thread that sends the UDP packets. On the receiving side, I have made a receiver thread which implements the Callable interface to return the list of the devices responding.
Here is the code:
MulticastReceiver.java
public class MulticastReceiver implements Callable<DeviceDetails>
{
DatagramSocket socket = null;
DatagramPacket inPacket = null;
boolean check = true;
public MulticastReceiver()
{
try
{
socket = new DatagramSocket(5500);
}
catch(Exception ioe)
{
System.out.println(ioe);
}
}
@Override
public DeviceDetails call() throws Exception
{
// TODO Auto-generated method stub
try
{
byte[] inBuf = new byte[WifiConstants.DGRAM_LEN];
//System.out.println("Listening");
inPacket = new DatagramPacket(inBuf, inBuf.length);
if(check)
{
socket.receive(inPacket);
}
String msg = new String(inBuf, 0, inPacket.getLength());
Log.v("Received: ","From :" + inPacket.getAddress() + " Msg : " + msg);
DeviceDetails device = getDeviceFromString(msg);
Thread.sleep(100);
return device;
}
catch(Exception e)
{
Log.v("Receiving Error: ",e.toString());
return null;
}
}
public DeviceDetails getDeviceFromString(String str)
{
String type;
String IP;
type=str.substring(0,str.indexOf('`'));
str = str.substring(str.indexOf('`')+1);
IP=str;
DeviceDetails device = new DeviceDetails(type,IP);
return device;
}
}
The following code is of the activity which calls the Receiver Thread:
public class DeviceManagerWindow extends Activity
{
public void searchDevice(View view)
{
sendMulticast = new Thread(new MultiCastThread());
sendMulticast.start();
ExecutorService executorService = Executors.newFixedThreadPool(1);
List<Future<DeviceDetails>> deviceList = new ArrayList<Future<DeviceDetails>>();
Callable<DeviceDetails> device = new MulticastReceiver();
Future<DeviceDetails> submit = executorService.submit(device);
deviceList.add(submit);
DeviceDetails[] devices = new DeviceDetails[deviceList.size()];
int i=0;
for(Future<DeviceDetails> future :deviceList)
{
try
{
devices[i] = future.get();
}
catch(Exception e)
{
Log.v("future Exception: ",e.toString());
}
}
}
}
Now, the standard way of receiving packets is to call the receive method in an infinite loop. But I want to receive incoming connections only for the first 30 seconds and then stop looking for them.
This is similar to a Bluetooth search, which stops after 1 minute.
The problem is, I could use a counter, but Thread.stop is deprecated. And besides, if I put the receive method in an infinite loop, it will never return the value.
What should I do? I want to search for, say, 30 seconds, then stop the search and return the list of responding devices.
Instead of calling stop(), you should call interrupt(). This causes an InterruptedException to be thrown at interruptible spots in your code, e.g. when calling Thread.sleep() or when blocked by an I/O operation. Unfortunately, DatagramSocket does not implement InterruptibleChannel, so the call to receive cannot be interrupted.
So you either use DatagramChannel instead of DatagramSocket, so that receive() will throw a ClosedByInterruptException if Thread.interrupt() is called, or you set a timeout by calling DatagramSocket.setSoTimeout(), causing receive() to throw a SocketTimeoutException after the specified interval; in that case, you won't need to interrupt the thread.
Simple approach
The easiest way would be to simply set a socket timeout:
public MulticastReceiver() {
try {
socket = new DatagramSocket(5500);
socket.setSoTimeout(30 * 1000);
} catch (Exception ioe) {
throw new RuntimeException(ioe);
}
}
This will cause socket.receive(inPacket); to throw a SocketTimeoutException after 30 seconds. As you already catch Exception, that's all you need to do.
Making MulticastReceiver interruptible
This is a more radical refactoring.
public class MulticastReceiver implements Callable<DeviceDetails> {
private DatagramChannel channel;
public MulticastReceiver() {
try {
channel = DatagramChannel.open();
channel.socket().bind(new InetSocketAddress(5500));
} catch (IOException ioe) {
throw new RuntimeException(ioe);
}
}
public DeviceDetails call() throws Exception {
ByteBuffer inBuf = ByteBuffer.allocate(WifiConstants.DGRAM_LEN);
SocketAddress socketAddress = channel.receive(inBuf);
String msg = new String(inBuf.array(), 0, inBuf.position());
Log.v("Received: ","From :" + socketAddress + " Msg : " + msg);
return getDeviceFromString(msg);
}
}
The DeviceManagerWindow looks a bit different; I'm not sure what you intend to do there, as you juggle lists and arrays but only have one future. So I assume you want to listen for 30 seconds and fetch as many devices as possible.
ExecutorService executorService = Executors.newFixedThreadPool(1);
MulticastReceiver receiver = new MulticastReceiver();
List<DeviceDetails> devices = new ArrayList<DeviceDetails>();
long runUntil = System.currentTimeMillis() + 30 * 1000;
while (System.currentTimeMillis() < runUntil) {
Future<DeviceDetails> future = executorService.submit(receiver);
try {
// wait no longer than the original 30s for a result
long timeout = runUntil - System.currentTimeMillis();
devices.add(future.get(timeout, TimeUnit.MILLISECONDS));
} catch (Exception e) {
Log.v("future Exception: ",e.toString());
}
}
// shutdown the executor service, interrupting the executed tasks
executorService.shutdownNow();
That's about it. No matter which solution you choose, don't forget to close the socket/channel.
I have solved it. You can run your code in the following fashion:
DeviceManagerWindow.java
public class DeviceManagerWindow extends Activity
{
public static Context con;
public static int rowCounter=0;
Thread sendMulticast;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_device_manager_window);
WifiManager wifi = (WifiManager)getSystemService( Context.WIFI_SERVICE );
if(wifi != null)
{
WifiManager.MulticastLock lock = wifi.createMulticastLock("WifiDevices");
lock.acquire();
}
TableLayout tb = (TableLayout) findViewById(R.id.DeviceList);
tb.removeAllViews();
con = getApplicationContext();
}
public void searchDevice(View view) throws IOException, InterruptedException
{
try
{
sendMulticast = new Thread(new MultiCastThread());
sendMulticast.start();
sendMulticast.join();
}
catch(Exception e)
{
Log.v("Exception in Sending:",e.toString());
}
// Here is the time-bound search; you can quit your thread using Thread.join().
//Device Will only search for 1 minute
for(long stop=System.nanoTime()+TimeUnit.SECONDS.toNanos(1); stop>System.nanoTime();)
{
Thread recv = new Thread(new MulticastReceiver());
recv.start();
recv.join();
}
}
public static synchronized void addDevice(DeviceDetails device) throws InterruptedException
{
....
Prepare your desired list here.
....
}
}
Dont add any loop on the listening side. simply use socket.receive
MulticastReceiver.java
public class MulticastReceiver implements Runnable
{
DatagramSocket socket = null;
DatagramPacket inPacket = null;
public MulticastReceiver()
{
try
{
socket = new DatagramSocket(WifiConstants.PORT_NO_RECV);
}
catch(Exception ioe)
{
System.out.println(ioe);
}
}
@Override
public void run()
{
byte[] inBuf = new byte[WifiConstants.DGRAM_LEN];
//System.out.println("Listening");
inPacket = new DatagramPacket(inBuf, inBuf.length);
try
{
socket.setSoTimeout(3000);
socket.receive(inPacket);
String msg = new String(inBuf, 0, inPacket.getLength());
Log.v("Received: ","From :" + inPacket.getAddress() + " Msg : " + msg);
DeviceDetails device = getDeviceFromString(msg);
DeviceManagerWindow.addDevice(device);
// setSoTimeout(3000) makes the socket listen for at most 3 seconds; if no packet arrives, execution moves on.
// DeviceManagerWindow.addDevice(device) calls the addDevice method in the calling class, where you can prepare your list.
}
catch(Exception e)
{
Log.v("Receiving Error: ",e.toString());
}
finally
{
socket.close();
}
}
public DeviceDetails getDeviceFromString(String str)
{
String type;
String IP;
type=str.substring(0,str.indexOf('`'));
str = str.substring(str.indexOf('`')+1);
IP=str;
DeviceDetails device = new DeviceDetails(type,IP);
return device;
}
}
Hope that works. Well, it will work.
All the best. Let me know if you run into any problems.

Netty client sometimes doesn't receive all expected messages

I have a fairly simple test Netty server/client project. I am testing some aspects of the stability of the communication by flooding the server with messages and counting the messages and bytes that I get back, to make sure that everything matches.
When I run the flood from the client, the client keeps track of the number of messages it sends and how many it gets back, and when the two numbers are equal it prints out some stats.
On certain occasions when running locally (I'm guessing because of congestion?) the client never ends up printing the final message. I haven't run into this issue when the two components are on remote machines. Any suggestions would be appreciated.
The Encoder is just a simple OneToOneEncoder that encodes an Envelope type to a ChannelBuffer, and the Decoder is a simple ReplayingDecoder that does the opposite.
I tried adding a channelInterestChanged method to my client handler to see if the channel's interest was getting changed to not-readable, but that did not seem to be the case.
The relevant code is below:
Thanks!
SERVER
public class Server {
// configuration --------------------------------------------------------------------------------------------------
private final int port;
private ServerChannelFactory serverFactory;
private DeviceIdAwareChannelGroup channelGroup; // channel group used in start() and stop()
// constructors ---------------------------------------------------------------------------------------------------
public Server(int port) {
this.port = port;
}
// public methods -------------------------------------------------------------------------------------------------
public boolean start() {
ExecutorService bossThreadPool = Executors.newCachedThreadPool();
ExecutorService childThreadPool = Executors.newCachedThreadPool();
this.serverFactory = new NioServerSocketChannelFactory(bossThreadPool, childThreadPool);
this.channelGroup = new DeviceIdAwareChannelGroup(this + "-channelGroup");
ChannelPipelineFactory pipelineFactory = new ChannelPipelineFactory() {
@Override
public ChannelPipeline getPipeline() throws Exception {
ChannelPipeline pipeline = Channels.pipeline();
pipeline.addLast("encoder", Encoder.getInstance());
pipeline.addLast("decoder", new Decoder());
pipeline.addLast("handler", new ServerHandler());
return pipeline;
}
};
ServerBootstrap bootstrap = new ServerBootstrap(this.serverFactory);
bootstrap.setOption("reuseAddress", true);
bootstrap.setOption("child.tcpNoDelay", true);
bootstrap.setOption("child.keepAlive", true);
bootstrap.setPipelineFactory(pipelineFactory);
Channel channel = bootstrap.bind(new InetSocketAddress(this.port));
if (!channel.isBound()) {
this.stop();
return false;
}
this.channelGroup.add(channel);
return true;
}
public void stop() {
if (this.channelGroup != null) {
ChannelGroupFuture channelGroupCloseFuture = this.channelGroup.close();
System.out.println("waiting for ChannelGroup shutdown...");
channelGroupCloseFuture.awaitUninterruptibly();
}
if (this.serverFactory != null) {
this.serverFactory.releaseExternalResources();
}
}
// main -----------------------------------------------------------------------------------------------------------
public static void main(String[] args) {
int port;
if (args.length != 3) {
System.out.println("No arguments found using default values");
port = 9999;
} else {
port = Integer.parseInt(args[1]);
}
final Server server = new Server( port);
if (!server.start()) {
System.exit(-1);
}
System.out.println("Server started on port 9999 ... ");
Runtime.getRuntime().addShutdownHook(new Thread() {
@Override
public void run() {
server.stop();
}
});
}
}
SERVER HANDLER
public class ServerHandler extends SimpleChannelUpstreamHandler {
// internal vars --------------------------------------------------------------------------------------------------
private AtomicInteger numMessagesReceived=new AtomicInteger(0);
// constructors ---------------------------------------------------------------------------------------------------
public ServerHandler() {
}
// SimpleChannelUpstreamHandler -----------------------------------------------------------------------------------
@Override
public void channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e) throws Exception {
Channel c = e.getChannel();
System.out.println("ChannelConnected: channel id: " + c.getId() + ", remote host: " + c.getRemoteAddress() + ", isChannelConnected(): " + c.isConnected());
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) throws Exception {
System.out.println("*** EXCEPTION CAUGHT!!! ***");
e.getChannel().close();
}
@Override
public void channelDisconnected(ChannelHandlerContext ctx, ChannelStateEvent e) throws Exception {
super.channelDisconnected(ctx, e);
System.out.println("*** CHANNEL DISCONNECTED ***");
}
@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) throws Exception {
if(numMessagesReceived.incrementAndGet()%1000==0 ){
System.out.println("["+numMessagesReceived+"-TH MSG]: Received message: " + e.getMessage());
}
if (e.getMessage() instanceof Envelope) {
// echo it...
if (e.getChannel().isWritable()) {
e.getChannel().write(e.getMessage());
}
} else {
super.messageReceived(ctx, e);
}
}
}
CLIENT
public class Client implements ClientHandlerListener {
// configuration --------------------------------------------------------------------------------------------------
private final String host;
private final int port;
private final int messages;
// internal vars --------------------------------------------------------------------------------------------------
private ChannelFactory clientFactory;
private ChannelGroup channelGroup;
private ClientHandler handler;
private final AtomicInteger received;
private long startTime;
private ExecutorService cachedThreadPool = Executors.newCachedThreadPool();
// constructors ---------------------------------------------------------------------------------------------------
public Client(String host, int port, int messages) {
this.host = host;
this.port = port;
this.messages = messages;
this.received = new AtomicInteger(0);
}
// ClientHandlerListener ------------------------------------------------------------------------------------------
@Override
public void messageReceived(Envelope message) {
if (this.received.incrementAndGet() == this.messages) {
long stopTime = System.currentTimeMillis();
float timeInSeconds = (stopTime - this.startTime) / 1000f;
System.err.println("Sent and received " + this.messages + " in " + timeInSeconds + "s");
System.err.println("That's " + (this.messages / timeInSeconds) + " echoes per second!");
}
}
// public methods -------------------------------------------------------------------------------------------------
public boolean start() {
// For production scenarios, use limited sized thread pools
this.clientFactory = new NioClientSocketChannelFactory(cachedThreadPool, cachedThreadPool);
this.channelGroup = new DefaultChannelGroup(this + "-channelGroup");
this.handler = new ClientHandler(this, this.channelGroup);
ChannelPipelineFactory pipelineFactory = new ChannelPipelineFactory() {
@Override
public ChannelPipeline getPipeline() throws Exception {
ChannelPipeline pipeline = Channels.pipeline();
pipeline.addLast("byteCounter", new ByteCounter("clientByteCounter"));
pipeline.addLast("encoder", Encoder.getInstance());
pipeline.addLast("decoder", new Decoder());
pipeline.addLast("handler", handler);
return pipeline;
}
};
ClientBootstrap bootstrap = new ClientBootstrap(this.clientFactory);
bootstrap.setOption("reuseAddress", true);
bootstrap.setOption("tcpNoDelay", true);
bootstrap.setOption("keepAlive", true);
bootstrap.setPipelineFactory(pipelineFactory);
boolean connected = bootstrap.connect(new InetSocketAddress(host, port)).awaitUninterruptibly().isSuccess();
System.out.println("isConnected: " + connected);
if (!connected) {
this.stop();
}
return connected;
}
public void stop() {
if (this.channelGroup != null) {
this.channelGroup.close();
}
if (this.clientFactory != null) {
this.clientFactory.releaseExternalResources();
}
}
public ChannelFuture sendMessage(Envelope env) {
Channel ch = this.channelGroup.iterator().next();
ChannelFuture cf = ch.write(env);
return cf;
}
private void flood() {
if ((this.channelGroup == null) || (this.clientFactory == null)) {
return;
}
System.out.println("sending " + this.messages + " messages");
this.startTime = System.currentTimeMillis();
for (int i = 0; i < this.messages; i++) {
this.handler.sendMessage(new Envelope(Version.VERSION1, Type.REQUEST, 1, new byte[1]));
}
}
// main -----------------------------------------------------------------------------------------------------------
public static void main(String[] args) throws InterruptedException {
final Client client = new Client("localhost", 9999, 10000);
if (!client.start()) {
System.exit(-1);
return;
}
while (client.channelGroup.size() == 0) {
Thread.sleep(200);
}
System.out.println("Client started...");
client.flood();
Runtime.getRuntime().addShutdownHook(new Thread() {
@Override
public void run() {
System.out.println("shutting down client");
client.stop();
}
});
}
}
CLIENT HANDLER
public class ClientHandler extends SimpleChannelUpstreamHandler {
// internal vars --------------------------------------------------------------------------------------------------
private final ClientHandlerListener listener;
private final ChannelGroup channelGroup;
private Channel channel;
// constructors ---------------------------------------------------------------------------------------------------
public ClientHandler(ClientHandlerListener listener, ChannelGroup channelGroup) {
this.listener = listener;
this.channelGroup = channelGroup;
}
// SimpleChannelUpstreamHandler -----------------------------------------------------------------------------------
@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) throws Exception {
if (e.getMessage() instanceof Envelope) {
Envelope env = (Envelope) e.getMessage();
this.listener.messageReceived(env);
} else {
System.out.println("NOT ENVELOPE!!");
super.messageReceived(ctx, e);
}
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) throws Exception {
System.out.println("**** CAUGHT EXCEPTION CLOSING CHANNEL ***");
e.getCause().printStackTrace();
e.getChannel().close();
}
@Override
public void channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e) throws Exception {
this.channel = e.getChannel();
System.out.println("Server connected, channel id: " + this.channel.getId());
this.channelGroup.add(e.getChannel());
}
// public methods -------------------------------------------------------------------------------------------------
public void sendMessage(Envelope envelope) {
if (this.channel != null) {
this.channel.write(envelope);
}
}
}
CLIENT HANDLER LISTENER INTERFACE
public interface ClientHandlerListener {
void messageReceived(Envelope message);
}
Without knowing how big the envelope is on the network I'm going to guess that your problem is that your client writes 10,000 messages without checking if the channel is writable.
Netty 3.x processes network events and writes in a particular fashion. It's possible that your client is writing so much data so fast that Netty isn't getting a chance to process receive events. On the server side this would result in the channel becoming non writable and your handler dropping the reply.
There are a few reasons why you see the problem on localhost, but it's probably because the write bandwidth is much higher than your network bandwidth. The client doesn't check if the channel is writable, so over a network your messages are buffered by Netty until the network can catch up (if you wrote significantly more than 10,000 messages you might see an OutOfMemoryError). This acts as a natural brake, because Netty will suspend writing until the network is ready, allowing it to process incoming data and preventing the server from seeing a channel that's not writable.
The DiscardClientHandler in Netty's discard example shows how to test whether the channel is writable and how to resume writing when it becomes writable again. Another option is to have sendMessage return the ChannelFuture associated with the write and, if the channel is not writable after the write, block until the future completes.
Also, your server handler should write the message and then check whether the channel is writable. If it isn't, you should set the channel's readable flag to false. Netty will fire channelInterestChanged when the channel becomes writable again, and then you can set readable back to true to resume reading messages.
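A minimal sketch of both suggestions against the Netty 3.x API used in this question (an outline under those assumptions, not a drop-in fix):
// Client side: block on the write future whenever the channel has become
// non-writable, so the flood loop cannot outrun the network.
public void sendMessage(Envelope envelope) {
if (this.channel == null) {
return;
}
ChannelFuture future = this.channel.write(envelope);
if (!this.channel.isWritable()) {
// wait until Netty has flushed this write before queueing more
future.awaitUninterruptibly();
}
}
// Server side: suspend reading while the echo channel is saturated...
@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
e.getChannel().write(e.getMessage());
if (!e.getChannel().isWritable()) {
e.getChannel().setReadable(false);
}
}
// ...and resume once Netty reports the channel writable again.
@Override
public void channelInterestChanged(ChannelHandlerContext ctx, ChannelStateEvent e) {
if (e.getChannel().isWritable()) {
e.getChannel().setReadable(true);
}
}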
