I have been trying to get UDP multicast working with Netty 4.0.26 and 5.0.0.Alpha2, without any success. I have adapted some code from this post, which in its edited form supposedly works, but it does not for me. The attached code simply prints "Send: 1", "Send: 2", and so on, but the corresponding packets are never received.
In the expression localSocketAddr = new InetSocketAddress(localAddr, MCAST_PORT) I have tried port 0 as well, also without success, along with all kinds of other combinations of bind port and local address.
The various socket options are copied from the other post I mentioned.
Can anyone tell me what I'm doing wrong? This is with Java 8 on Windows 8.1.
Thanks very much,
Sean
import io.netty.bootstrap.Bootstrap;
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelFactory;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelOption;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.DatagramChannel;
import io.netty.channel.socket.DatagramPacket;
import io.netty.channel.socket.InternetProtocolFamily;
import io.netty.channel.socket.nio.NioDatagramChannel;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.NetworkInterface;
public class MCast {
private static final String LOCAL_ADDR = "192.168.0.18";
private static final String MCAST_GROUP = "239.254.42.96";
private static final int MCAST_PORT = 9796;
public static void main(String[] args) throws Exception {
Thread sender = new Thread(new Sender());
Thread receiver = new Thread(new Receiver());
receiver.start();
sender.start();
sender.join();
receiver.join();
}
private static class MCastSupport {
protected InetAddress localAddr;
protected InetAddress remoteAddr;
protected InetSocketAddress localSocketAddr;
protected InetSocketAddress remoteSocketAddr;
protected DatagramChannel chan;
protected Bootstrap bootstrap;
public MCastSupport() {
try {
localAddr = InetAddress.getByName(LOCAL_ADDR);
remoteAddr = InetAddress.getByName(MCAST_GROUP);
localSocketAddr = new InetSocketAddress(localAddr, MCAST_PORT);
remoteSocketAddr = new InetSocketAddress(remoteAddr, MCAST_PORT);
NetworkInterface nif = NetworkInterface.getByInetAddress(localAddr);
bootstrap = new Bootstrap()
.group(new NioEventLoopGroup())
.handler(new SimpleChannelInboundHandler<DatagramPacket>() {
@Override
protected void messageReceived(ChannelHandlerContext ctx, DatagramPacket msg) throws Exception {
System.out.println("Received: " + msg.content().getInt(0));
}
})
.channelFactory(new ChannelFactory<NioDatagramChannel>() {
@Override
public NioDatagramChannel newChannel() {
return new NioDatagramChannel(InternetProtocolFamily.IPv4);
}
})
.option(ChannelOption.SO_BROADCAST, true)
.option(ChannelOption.SO_REUSEADDR, true)
.option(ChannelOption.IP_MULTICAST_LOOP_DISABLED, false)
.option(ChannelOption.SO_RCVBUF, 2048)
.option(ChannelOption.IP_MULTICAST_TTL, 255)
.option(ChannelOption.IP_MULTICAST_IF, nif);
chan = (DatagramChannel) bootstrap.bind(localSocketAddr).sync().channel();
chan.joinGroup(remoteSocketAddr, nif).sync();
} catch (Throwable t) {
System.err.println(t);
t.printStackTrace(System.err);
}
}
}
private static class Sender extends MCastSupport implements Runnable {
@Override
public void run() {
try {
for (int seq = 1; seq <= 5; ++ seq) {
ByteBuf buf = Unpooled.copyInt(seq);
DatagramPacket dgram = new DatagramPacket(buf, remoteSocketAddr, localSocketAddr);
chan.writeAndFlush(dgram);
System.out.println("Send: " + seq);
Thread.sleep(5000);
}
} catch (Throwable t) {
System.err.println(t);
t.printStackTrace(System.err);
}
}
}
private static class Receiver extends MCastSupport implements Runnable {
@Override
public void run() {
try {
Thread.sleep(5 * 5000);
} catch (Throwable t) {
System.err.println(t);
t.printStackTrace(System.err);
}
}
}
}
The root of my issue was setting ChannelOption.IP_MULTICAST_LOOP_DISABLED to false. Leaving out that line (i.e. allowing IP_MULTICAST_LOOP_DISABLED to default to true) allows multicast to work as expected. Frankly, this doesn't make a lot of sense to me, but I have no more time to investigate.
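For reference, a minimal sketch of the bootstrap as it works for me, identical to the code above except that the loop-disabled option is simply left out (the lambda form of the channel factory is just shorthand for the anonymous class above):
bootstrap = new Bootstrap()
    .group(new NioEventLoopGroup())
    .channelFactory(() -> new NioDatagramChannel(InternetProtocolFamily.IPv4))
    .handler(new SimpleChannelInboundHandler<DatagramPacket>() {
        @Override
        protected void messageReceived(ChannelHandlerContext ctx, DatagramPacket msg) {
            System.out.println("Received: " + msg.content().getInt(0));
        }
    })
    .option(ChannelOption.SO_BROADCAST, true)
    .option(ChannelOption.SO_REUSEADDR, true)
    // IP_MULTICAST_LOOP_DISABLED is deliberately not set here, so it keeps its default (true)
    .option(ChannelOption.SO_RCVBUF, 2048)
    .option(ChannelOption.IP_MULTICAST_TTL, 255)
    .option(ChannelOption.IP_MULTICAST_IF, nif);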
Related
So basically I have a MainController class that has a method for every button. I also have a multi-client server application. In Client I have a sendMessage method that sends a String and an Object as parameters over the output streams to the Server.
In the same method I have two input streams, one for a message from the server and one for an object. The problem is that this method runs on a thread (inside its run method) and I cannot return the object.
I tried to create a static class that saves these, but the getters return null when called in the Controller class.
What is the best method to achieve this?
public void onSaveButton(javafx.event.ActionEvent actionEvent) throws Exception {
Parent root = null;
Boolean type = false;
String message = null;
if (adminCheckbox.isSelected()) {
root = FXMLLoader.load(getClass().getResource("/fxml/admin.fxml"));
type = true;
message = "Admin";
}
if (competitorCheckbox.isSelected()) {
root = FXMLLoader.load(getClass().getResource("/fxml/competitor.fxml"));
message = "Competitor";
}
PersonEntity personEntity = new PersonEntity();
personEntity.setIdTeam(Integer.parseInt(teamField.getText()));
personEntity.setType(type);
personEntity.setUsername(usernameField.getText());
client.sendMessageToServer(message, personEntity);
System.out.println(Utils.getMessage());
}
And the Client method:
public void sendMessageToServer(String message, Object object) throws Exception {
new Thread(new Runnable() {
@Override
public void run() {
System.out.println("Say something and the message will be sent to the server: ");
//For receiving and sending data
boolean isClose = false;
while (!isClose) {
try {
ObjectOutputStream outputStream = new ObjectOutputStream(socket.getOutputStream());
ObjectInputStream inputStream = new ObjectInputStream(socket.getInputStream());
if (message.equals("Bye")) {
isClose = true;
}
outputStream.writeObject(message);
outputStream.writeObject(object);
String messageFromServer = (String) inputStream.readObject();
//System.out.println(messageFromServer);
int index = messageFromServer.indexOf(' ');
String word = messageFromServer.substring(0, index);
if (messageFromServer.equals("Bye")) {
isClose = true;
}
if (!word.equals("LIST")) {
Object obj = (Object) inputStream.readObject();
Utils.setMessage(messageFromServer);
return;
//Utils.setObject(obj);
//System.out.println("IN FOR " + Utils.getMessage());
} else {
List<Object> list = (List<Object>) inputStream.readObject();
Utils.setMessage(messageFromServer);
Utils.setObjects(list);
return;
}
} catch (IOException | ClassNotFoundException e) {
e.printStackTrace();
}
}
try {
socket.close();
} catch (IOException e) {
e.printStackTrace();
}
}
}).start();
}
I don't know what your server is, nor the rest of your app; I only know that you are using raw TCP sockets for communication with a custom protocol.
What I did was take the code for a similar example from the Java tutorials: the Knock Knock client/server code from Writing the Server Side of a Socket. The basic server code is unchanged; I just replaced the console-based client with a JavaFX UI.
Unchanged server code:
https://docs.oracle.com/javase/tutorial/networking/sockets/examples/KnockKnockProtocol.java
https://docs.oracle.com/javase/tutorial/networking/sockets/examples/KKMultiServer.java
https://docs.oracle.com/javase/tutorial/networking/sockets/examples/KKMultiServerThread.java
Replaced console client:
https://docs.oracle.com/javase/tutorial/networking/sockets/examples/KnockKnockClient.java
I provide two different client implementations; one makes synchronous client calls on the JavaFX application thread, the other makes asynchronous client calls using a JavaFX task.
The current asynchronous task-based solution is not robust enough for a full-scale production system, as it is technically possible for it to lose messages and it doesn't robustly match requests to responses. But for a demo like this it is OK.
For simple execution, the UI apps both launch a local server on a pre-defined port when they are run, but you could remove the code to launch the server from the UI apps and run the server from the command line if you want.
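To make the request/response-matching caveat above concrete, here is a hedged sketch of one way to pair them (my own illustration, not part of the demo code; it assumes the server answers requests strictly in order): each send enqueues a pending CompletableFuture, and the reader thread completes the head of the queue as each response line arrives.
import java.io.PrintWriter;
import java.util.Queue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentLinkedQueue;
public class PendingRequests {
    // FIFO of outstanding requests; assumes in-order responses from the server.
    private final Queue<CompletableFuture<String>> pending = new ConcurrentLinkedQueue<>();
    // Called on the sending side: register a future, then write the request.
    public CompletableFuture<String> send(PrintWriter out, String request) {
        CompletableFuture<String> reply = new CompletableFuture<>();
        pending.add(reply);
        out.println(request);
        return reply;
    }
    // Called by the reader thread for each line received from the server.
    public void onResponse(String line) {
        CompletableFuture<String> reply = pending.poll();
        if (reply != null) {
            reply.complete(line);
        }
    }
}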
KnockKnockSyncClient.java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;
public class KnockKnockSyncClient {
private Socket socket;
private PrintWriter out;
private BufferedReader in;
public String connect(String host, int port) {
try {
socket = new Socket(host, port);
out = new PrintWriter(
socket.getOutputStream(),
true
);
in = new BufferedReader(
new InputStreamReader(
socket.getInputStream()
)
);
return in.readLine();
} catch (IOException e) {
e.printStackTrace();
}
return null;
}
public String sendMessage(String request) {
String response = null;
try {
out.println(request);
response = in.readLine();
} catch (IOException e) {
e.printStackTrace();
}
return response;
}
public void disconnect() {
try {
if (socket != null) {
socket.close();
socket = null;
}
} catch (IOException e) {
e.printStackTrace();
}
}
}
KnockKnockSyncApp.java
import javafx.animation.FadeTransition;
import javafx.application.*;
import javafx.geometry.Insets;
import javafx.scene.*;
import javafx.scene.control.*;
import javafx.scene.layout.VBox;
import javafx.stage.Stage;
import javafx.util.Duration;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
public class KnockKnockSyncApp extends Application {
private static final String HOST = "localhost";
private static final int PORT = 8809;
public static final String QUIT_RESPONSE = "Bye.";
private ExecutorService serverExecutor;
private KnockKnockSyncClient client;
private static final String CSS = """
data:text/css,
.root {
-fx-font-size: 20;
}
.list-cell {
-fx-alignment: baseline-right;
-fx-text-fill: purple;
}
.list-cell:odd {
-fx-alignment: baseline-left;
-fx-text-fill: darkgreen;
}
""";
@Override
public void init() {
serverExecutor = Executors.newSingleThreadExecutor(r -> new Thread(r, "KKServer"));
serverExecutor.submit(
() -> {
try {
KKMultiServer.main(new String[]{ PORT + "" });
} catch (Exception e) {
e.printStackTrace();
}
}
);
client = new KnockKnockSyncClient();
}
@Override
public void start(Stage stage) {
ListView<String> messageView = new ListView<>();
TextField inputField = new TextField();
inputField.setPromptText("Enter a message for the server.");
inputField.setOnAction(event -> {
String request = inputField.getText();
messageView.getItems().add(request);
String response = client.sendMessage(request);
messageView.getItems().add(response);
messageView.scrollTo(Math.max(0, messageView.getItems().size() - 1));
inputField.clear();
if (QUIT_RESPONSE.equals(response)) {
closeApp(inputField.getScene());
}
});
VBox layout = new VBox(10,
messageView,
inputField
);
layout.setPadding(new Insets(10));
layout.setPrefWidth(600);
Scene scene = new Scene(layout);
scene.getStylesheets().add(CSS);
stage.setScene(scene);
stage.show();
inputField.requestFocus();
String connectResponse = client.connect(HOST, PORT);
if (connectResponse != null) {
messageView.getItems().add(connectResponse);
}
}
private void closeApp(Scene scene) {
Parent root = scene.getRoot();
root.setDisable(true);
FadeTransition fade = new FadeTransition(Duration.seconds(1), root);
fade.setToValue(0);
fade.setOnFinished(e -> Platform.exit());
fade.play();
}
@Override
public void stop() {
client.disconnect();
serverExecutor.shutdown();
}
public static void main(String[] args) {
launch();
}
}
KnockKnockAsyncClient.java
import javafx.concurrent.Task;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingDeque;
public class KnockKnockAsyncClient extends Task<Void> {
private final String host;
private final int port;
private final BlockingQueue<String> messageQueue = new LinkedBlockingDeque<>();
public KnockKnockAsyncClient(String host, int port) {
this.host = host;
this.port = port;
}
@Override
protected Void call() {
try (
Socket kkSocket = new Socket(host, port);
PrintWriter out = new PrintWriter(kkSocket.getOutputStream(), true);
BufferedReader in = new BufferedReader(
new InputStreamReader(kkSocket.getInputStream()));
) {
String fromServer;
String fromUser;
while ((fromServer = in.readLine()) != null) {
// this is not a completely robust implementation because updateMessage
// can coalesce responses and there is no matching in send message calls
// to responses. A more robust implementation may allow matching of
// requests and responses, perhaps via a message id. It could also
// replace the updateMessage call with storage of results in a collection
// (e.g. an observable list) with thread safe notifications of response
// arrivals, e.g. through CompletableFutures and/or Platform.runLater calls.
updateMessage(fromServer);
System.out.println("Server: " + fromServer);
if (fromServer.equals("Bye."))
break;
fromUser = messageQueue.take();
System.out.println("Client: " + fromUser);
out.println(fromUser);
}
} catch (InterruptedException e) {
if (isCancelled()) {
updateMessage("Cancelled");
}
} catch (IOException e) {
e.printStackTrace();
}
return null;
}
public void sendMessage(String request) {
messageQueue.add(request);
}
}
KnockKnockAsyncApp.java
import javafx.animation.FadeTransition;
import javafx.application.Application;
import javafx.application.Platform;
import javafx.geometry.Insets;
import javafx.scene.Parent;
import javafx.scene.Scene;
import javafx.scene.control.ListView;
import javafx.scene.control.TextField;
import javafx.scene.layout.VBox;
import javafx.stage.Stage;
import javafx.util.Duration;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;
public class KnockKnockAsyncApp extends Application {
private static final String HOST = "localhost";
private static final int PORT = 8809;
public static final String QUIT_RESPONSE = "Bye.";
private ExecutorService serverExecutor;
private ExecutorService clientExecutor;
private static final String CSS = """
data:text/css,
.root {
-fx-font-size: 20;
}
.list-cell {
-fx-alignment: baseline-right;
-fx-text-fill: purple;
}
.list-cell:odd {
-fx-alignment: baseline-left;
-fx-text-fill: darkgreen;
}
""";
@Override
public void init() {
serverExecutor = Executors.newSingleThreadExecutor(r -> new Thread(r, "KKServer"));
serverExecutor.submit(
() -> {
try {
KKMultiServer.main(new String[]{ PORT + "" });
} catch (Exception e) {
e.printStackTrace();
}
}
);
clientExecutor = Executors.newSingleThreadExecutor(new ThreadFactory() {
final AtomicInteger threadNum = new AtomicInteger(0);
@Override
public Thread newThread(Runnable r) {
return new Thread(r,"KKClient-" + threadNum.getAndAdd(1));
}
});
}
@Override
public void start(Stage stage) {
ListView<String> messageView = new ListView<>();
KnockKnockAsyncClient client = new KnockKnockAsyncClient(HOST, PORT);
// monitor and action responses from the server.
client.messageProperty().addListener((observable, oldValue, response) -> {
if (response != null) {
logMessage(messageView, response);
}
if (QUIT_RESPONSE.equals(response)) {
closeApp(messageView.getScene());
}
});
TextField inputField = new TextField();
inputField.setPromptText("Enter a message for the server.");
inputField.setOnAction(event -> {
String request = inputField.getText();
logMessage(messageView, request);
client.sendMessage(request);
inputField.clear();
});
VBox layout = new VBox(10,
messageView,
inputField
);
layout.setPadding(new Insets(10));
layout.setPrefWidth(600);
Scene scene = new Scene(layout);
scene.getStylesheets().add(CSS);
stage.setScene(scene);
stage.show();
inputField.requestFocus();
clientExecutor.submit(client);
}
private void logMessage(ListView<String> messageView, String request) {
messageView.getItems().add(request);
messageView.scrollTo(Math.max(0, messageView.getItems().size() - 1));
}
private void closeApp(Scene scene) {
Parent root = scene.getRoot();
root.setDisable(true);
FadeTransition fade = new FadeTransition(Duration.seconds(1), root);
fade.setToValue(0);
fade.setOnFinished(e -> Platform.exit());
fade.play();
}
@Override
public void stop() {
clientExecutor.shutdown();
serverExecutor.shutdown();
}
public static void main(String[] args) {
launch();
}
}
You can use a thread-safe ConcurrentLinkedQueue (static, of course). Utils.setObjects would add each element to the queue. On the UI thread, you would occasionally poll for new items on the queue to be displayed in the UI.
Reference documentation: https://docs.oracle.com/javase/6/docs/api/java/util/concurrent/ConcurrentLinkedQueue.html
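A minimal sketch of that approach, assuming a static holder like the question's Utils and a hypothetical listView field in the controller (both names are placeholders):
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;
public final class Utils {
    // Thread-safe queue shared between the client thread and the JavaFX thread.
    private static final ConcurrentLinkedQueue<Object> OBJECTS = new ConcurrentLinkedQueue<>();
    public static void setObjects(List<Object> list) {
        OBJECTS.addAll(list); // called from the client thread
    }
    public static Object pollObject() {
        return OBJECTS.poll(); // called from the UI thread; returns null when empty
    }
}
On the UI thread, an AnimationTimer (or a Timeline) can drain the queue periodically:
// In the controller, e.g. in initialize(): poll once per UI pulse.
javafx.animation.AnimationTimer poller = new javafx.animation.AnimationTimer() {
    @Override
    public void handle(long now) {
        Object next;
        while ((next = Utils.pollObject()) != null) {
            listView.getItems().add(next.toString()); // listView is hypothetical
        }
    }
};
poller.start();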
I am trying to figure out why my pipeline is being executed out of order.
I took the HexDumpProxy example and was trying to turn it into an HTTP proxy where I can look at all the traffic. For some reason the code is being executed backwards and I can't figure out why.
My server listens on 8443 and takes in the HTTP content. I wanted to read the Host header and create a frontend handler to route the data to the server, but my frontend handler executes first despite being last in the pipeline. I am unsure why it runs first; I thought the handlers would execute in the following order:
LoggingHandler
HttpRequestDecoder
HttpObjectAggregator
HttpProxyListener
HttpReEncoder
HTTPProxyFrontEnd
The goal is to remove the frontend handler from the pipeline and have HttpProxyListener add it back after reading the Host header, but if I remove the frontend handler no data is transferred. Using breakpoints, HTTPProxyFrontEnd is hit before HttpProxyListener. I am unsure why it is being executed so out of order.
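For what it's worth, a hedged sketch of the dynamic-pipeline idea described above (my own illustration, not the actual HttpProxyListener; the class name and port are assumptions): read the aggregated request, take the Host header, and only then splice the frontend handler into this channel's pipeline. One wrinkle: channelActive has already fired by the time the handler is added, so connection logic living in channelActive would need to move to handlerAdded or be invoked explicitly.
```
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.handler.codec.http.FullHttpRequest;
import io.netty.handler.codec.http.HttpHeaderNames;
import io.netty.util.ReferenceCountUtil;
public class HostRoutingListener extends SimpleChannelInboundHandler<FullHttpRequest> {
    @Override
    protected void channelRead0(ChannelHandlerContext ctx, FullHttpRequest req) {
        // Pick the backend from the Host header; note it may include a port, e.g. "example.com:8080".
        String host = req.headers().get(HttpHeaderNames.HOST);
        // Add the frontend handler only now that the target is known.
        ctx.pipeline().addLast(new HTTPProxyFrontEnd(host, 80));
        // Retain and forward the request so the newly added handler sees it
        // (SimpleChannelInboundHandler releases the message after this method returns).
        ctx.fireChannelRead(ReferenceCountUtil.retain(req));
    }
}
```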
Main
```
EventLoopGroup bossGroup = new NioEventLoopGroup(1);
EventLoopGroup workerGroup = new NioEventLoopGroup();
try {
ServerBootstrap b = new ServerBootstrap();
b.group(bossGroup, workerGroup)
.channel(NioServerSocketChannel.class)
.handler(new LoggingHandler(LogLevel.INFO))
.childHandler(new HttpProxyServerInitializer(REMOTE_HOST, REMOTE_PORT))
.childOption(ChannelOption.AUTO_READ, false)
.bind(LOCAL_PORT).sync().channel().closeFuture().sync();
} finally {
bossGroup.shutdownGracefully();
workerGroup.shutdownGracefully();
}
```
Pipeline
```
import io.netty.buffer.ByteBuf;
import io.netty.channel.Channel;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelPipeline;
import io.netty.handler.codec.MessageToByteEncoder;
import io.netty.handler.codec.http.*;
import io.netty.handler.logging.LogLevel;
import io.netty.handler.logging.LoggingHandler;
import io.netty.handler.ssl.SslContext;
import io.netty.handler.ssl.SslContextBuilder;
import io.netty.handler.ssl.SslHandler;
import io.netty.handler.ssl.util.SelfSignedCertificate;
import javax.net.ssl.SSLEngine;
public class HttpProxyServerInitializer extends ChannelInitializer<Channel> {
private final String remoteHost;
private final int remotePort;
public HttpProxyServerInitializer(String remoteHost, int remotePort) {
this.remoteHost = remoteHost;
this.remotePort = remotePort;
}
@Override
protected void initChannel(Channel ch) throws Exception {
ch.pipeline().addLast(
new LoggingHandler(LogLevel.INFO),
new HttpRequestDecoder(),
new HttpObjectAggregator(8192),
new HttpProxyListener(),
new HttpReEncoder(),
new HTTPProxyFrontEnd(remoteHost, remotePort));
}
}
```
Proxy Front end
```
import io.netty.bootstrap.Bootstrap;
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.*;
import io.netty.channel.embedded.EmbeddedChannel;
import io.netty.handler.codec.DecoderResult;
import io.netty.handler.codec.http.*;
import io.netty.handler.codec.http.cookie.ServerCookieDecoder;
import io.netty.handler.codec.http.cookie.ServerCookieEncoder;
import io.netty.util.CharsetUtil;
import java.net.SocketAddress;
import java.util.List;
import java.util.Map;
import java.util.Set;
import static io.netty.handler.codec.http.HttpResponseStatus.BAD_REQUEST;
import static io.netty.handler.codec.http.HttpResponseStatus.OK;
import static io.netty.handler.codec.http.HttpVersion.HTTP_1_1;
public class HTTPProxyFrontEnd extends ChannelInboundHandlerAdapter {
private final String remoteHost;
private final int remotePort;
private final StringBuilder buf = new StringBuilder();
private HttpRequest request;
// As we use inboundChannel.eventLoop() when building the Bootstrap this does not need to be volatile as
// the outboundChannel will use the same EventLoop (and therefore Thread) as the inboundChannel.
private Channel outboundChannel;
public HTTPProxyFrontEnd(String remoteHost, int remotePort) {
this.remoteHost = remoteHost;
this.remotePort = remotePort;
}
@Override
public void channelActive(ChannelHandlerContext ctx) {
System.out.println("HTTPFrontEnd");
final Channel inboundChannel = ctx.channel();
// Start the connection attempt.
Bootstrap b = new Bootstrap();
b.group(inboundChannel.eventLoop())
.channel(ctx.channel().getClass())
.handler(new HexDumpProxyBackendHandler(inboundChannel))
.option(ChannelOption.AUTO_READ, false);
ChannelFuture f = b.connect(remoteHost, remotePort);
SocketAddress test = ctx.channel().remoteAddress();
outboundChannel = f.channel();
f.addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture future) {
if (future.isSuccess()) {
// connection complete start to read first data
inboundChannel.read();
} else {
// Close the connection if the connection attempt has failed.
inboundChannel.close();
}
}
});
}
@Override
public void channelRead(final ChannelHandlerContext ctx, Object msg) throws InterruptedException {
if (outboundChannel.isActive()) {
outboundChannel.writeAndFlush(msg).addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture future) {
if (future.isSuccess()) {
// was able to flush out data, start to read the next chunk
ctx.channel().read();
} else {
future.channel().close();
}
}
});
}
}
private boolean writeResponse(HttpObject currentObj, ChannelHandlerContext ctx) {
// Decide whether to close the connection or not.
boolean keepAlive = HttpUtil.isKeepAlive(request);
// Build the response object.
FullHttpResponse response = new DefaultFullHttpResponse(
HTTP_1_1, currentObj.decoderResult().isSuccess()? OK : BAD_REQUEST,
Unpooled.copiedBuffer(buf.toString(), CharsetUtil.UTF_8));
response.headers().set(HttpHeaderNames.CONTENT_TYPE, "text/plain; charset=UTF-8");
if (keepAlive) {
// Add 'Content-Length' header only for a keep-alive connection.
response.headers().setInt(HttpHeaderNames.CONTENT_LENGTH, response.content().readableBytes());
// Add keep alive header as per:
// - http://www.w3.org/Protocols/HTTP/1.1/draft-ietf-http-v11-spec-01.html#Connection
response.headers().set(HttpHeaderNames.CONNECTION, HttpHeaderValues.KEEP_ALIVE);
}
// Encode the cookie.
String cookieString = request.headers().get(HttpHeaderNames.COOKIE);
if (cookieString != null) {
Set<io.netty.handler.codec.http.cookie.Cookie> cookies = ServerCookieDecoder.STRICT.decode(cookieString);
if (!cookies.isEmpty()) {
// Reset the cookies if necessary.
for (io.netty.handler.codec.http.cookie.Cookie cookie: cookies) {
response.headers().add(HttpHeaderNames.SET_COOKIE, io.netty.handler.codec.http.cookie.ServerCookieEncoder.STRICT.encode(cookie));
}
}
} else {
// Browser sent no cookie. Add some.
response.headers().add(HttpHeaderNames.SET_COOKIE, io.netty.handler.codec.http.cookie.ServerCookieEncoder.STRICT.encode("key1", "value1"));
response.headers().add(HttpHeaderNames.SET_COOKIE, ServerCookieEncoder.STRICT.encode("key2", "value2"));
}
// Write the response.
//ctx.writeAndFlush(response);
return keepAlive;
}
private static void appendDecoderResult(StringBuilder buf, HttpObject o) {
DecoderResult result = o.decoderResult();
if (result.isSuccess()) {
return;
}
buf.append(".. WITH DECODER FAILURE: ");
buf.append(result.cause());
buf.append("\r\n");
}
#Override
public void channelInactive(ChannelHandlerContext ctx) {
if (outboundChannel != null) {
closeOnFlush(outboundChannel);
}
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
cause.printStackTrace();
closeOnFlush(ctx.channel());
}
/**
* Closes the specified channel after all queued write requests are flushed.
*/
static void closeOnFlush(Channel ch) {
if (ch.isActive()) {
ch.writeAndFlush(Unpooled.EMPTY_BUFFER).addListener(ChannelFutureListener.CLOSE);
}
}
}
```
I am learning about DatagramChannel multicast and am trying to do the following. I noticed that the DatagramChannel#receive method blocks completely and receives nothing while I am on VPN, but data gets printed for a few iterations when I disconnect the VPN.
1. Sending data to a multicast IP group:
import java.io.IOException;
import java.net.*;
import java.nio.ByteBuffer;
import java.nio.channels.DatagramChannel;
import java.util.Date;
public class MultiCastChannelThread extends Thread {
DatagramChannel channel;
ProtocolFamily family = StandardProtocolFamily.INET;
MultiCastChannelThread(String name) throws IOException {
super(name);
configureChannel();
}
public static void main(String[] args) throws IOException {
new MultiCastChannelThread("MultiCastChannel").start();
}
private void configureChannel() throws IOException {
channel = DatagramChannel.open(family).setOption(StandardSocketOptions.SO_REUSEADDR, true).bind(null).setOption(
StandardSocketOptions.IP_MULTICAST_IF, NetworkInterface.getByName("wlan0"));
}
@Override
public void run() {
int i = 0;
while (i++ < 6) {
try {
byte[] buff = new Date().toString().getBytes();
ByteBuffer buffer = ByteBuffer.wrap(buff);
InetAddress group = InetAddress.getByName("224.0.15.15");
int sent = channel.send(buffer, new InetSocketAddress(group, 4466));
System.out.format("Number of bytes sent :%s\n Data sent:%s\t\n", sent, new String(buff));
try {
sleep(5 * 1000);
} catch (InterruptedException e) {
}
} catch (IOException e) {
e.printStackTrace();
}
}
}
}
2. Joining and receiving the data on the group:
import static java.lang.Thread.sleep;
import java.io.IOException;
import java.net.*;
import java.nio.ByteBuffer;
import java.nio.channels.DatagramChannel;
public class MulticastChannelClient {
NetworkInterface networkInterface;
InetAddress inetAddress;
InetAddress group;
ProtocolFamily family = StandardProtocolFamily.INET;
public MulticastChannelClient() throws UnknownHostException, SocketException {
group = InetAddress.getByName("224.0.15.15");
networkInterface = NetworkInterface.getByName("wlan0");
}
public static void main(String[] args) throws IOException {
new MulticastChannelClient().testMultiCastChannel();
}
private void testMultiCastChannel() throws IOException {
new MultiCastChannelThread("MulticastChannelTest").start();
DatagramChannel channel = DatagramChannel.open(family).setOption(StandardSocketOptions.SO_REUSEADDR, true).bind(
new InetSocketAddress((InetAddress) null, 4466));
channel.join(group, networkInterface);
for (int i = 0; i < 6; i++) {
ByteBuffer received = ByteBuffer.allocate(250);
SocketAddress socketAddress = channel.receive(received);
System.out.println("Received ::: " + i + " " + new String(received.array()));
try {
sleep(2 * 1000);
} catch (InterruptedException e) {
}
}
}
}
I want to create a large number of client connections to the server for testing purposes. I accomplish this by creating a thread per connection, so I can only create 3,000 connections on my machine. Below is my code:
package com.stepnetwork.iot.apsclient.application;
import io.netty.bootstrap.Bootstrap;
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.*;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;
/**
* Created by sam on 3/22/16.
*/
public class DtuClient extends Thread {
private static final String HOST = "192.168.54.36";
private static final int PORT = 30080;
private EventLoopGroup workerGroup;
private String dtuCode;
public DtuClient(String dtuCode, EventLoopGroup workerGroup) {
this.dtuCode = dtuCode;
this.workerGroup = workerGroup;
}
public void run() {
Bootstrap bootstrap = new Bootstrap(); // (1)
try {
bootstrap.group(workerGroup); // (2)
bootstrap.channel(NioSocketChannel.class); // (3)
bootstrap.option(ChannelOption.SO_KEEPALIVE, true); // (4)
bootstrap.handler(new ChannelInitializer<SocketChannel>() {
@Override
public void initChannel(SocketChannel ch) throws Exception {
ch.pipeline().addLast(new MyClientHandler());
}
});
ChannelFuture connectFuture = bootstrap.connect(HOST, PORT).sync();
connectFuture.addListener((future) -> {
System.out.println(dtuCode + " connected to server");
Channel channel = connectFuture.channel();
ByteBuf buffer = Unpooled.buffer(256);
buffer.writeBytes(dtuCode.getBytes());
channel.writeAndFlush(buffer);
});
connectFuture.channel().closeFuture().sync();
} catch (InterruptedException e) {
e.printStackTrace();
}
System.out.println("completed");
}
}
Can I get more connections?
I tried another solution after doing some Google research, but the channels close automatically.
Here is my other solution:
package com.stepnetwork.iot.apsclient.application;
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.Channel;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;
import java.util.ArrayList;
import java.util.List;
/**
* Created by sam on 3/22/16.
*/
public class Test {
private static final String HOST = "192.168.54.36";
private static final int PORT = 30080;
public static void main(String[] args) throws InterruptedException {
EventLoopGroup workerGroup = new NioEventLoopGroup();
Bootstrap bootstrap = new Bootstrap(); // (1)
try {
bootstrap.group(workerGroup); // (2)
bootstrap.channel(NioSocketChannel.class); // (3)
bootstrap.option(ChannelOption.SO_KEEPALIVE, true); // (4)
bootstrap.handler(new ChannelInitializer<SocketChannel>() {
@Override
public void initChannel(SocketChannel ch) throws Exception {
ch.pipeline().addLast(new MyClientHandler());
}
});
List<Channel> channels = new ArrayList<>();
// create many connections here, but the channels get closed automatically
for (int i = 0; i < 10000; i++) {
channels.add(bootstrap.connect(HOST, PORT).sync().channel());
}
} catch (InterruptedException e) {
e.printStackTrace();
}
while (true) {
Thread.sleep(Integer.MAX_VALUE);
}
}
}
THE SCENE:
I am writing an echo client and server. The data being transferred is a string:
The client encodes a string and sends it to the server.
The server receives the data, decodes the string, then encodes the received string and sends it back to the client.
The above process is repeated 100,000 times. (Note: the connection is persistent.)
DIFFERENT CONDITIONS:
When I run ONE server and TWO clients at the same time, everything is OK: every client receives 100,000 messages and terminates normally.
But when I add an ExecutionHandler on the server and then run ONE server and TWO clients at the same time, one client never terminates, and the network traffic drops to zero.
I can't locate the key point of this problem for now; can you give me some suggestions?
MY CODE:
String encoder, string decoder, client handler, server handler, client main, server main.
//Decoder=======================================================
import java.nio.charset.Charset;
import org.jboss.netty.buffer.ChannelBuffer;
import org.jboss.netty.channel.Channel;
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.handler.codec.frame.FrameDecoder;
public class Dcd extends FrameDecoder {
public static final Charset cs = Charset.forName("utf8");
@Override
protected Object decode(ChannelHandlerContext ctx, Channel channel,
ChannelBuffer buffer) throws Exception {
if (buffer.readableBytes() < 4) {
return null;
}
int headlen = 4;
int length = buffer.getInt(0);
if (buffer.readableBytes() < length + headlen) {
return null;
}
String ret = buffer.toString(headlen, length, cs);
buffer.skipBytes(length + headlen);
return ret;
}
}
//Encoder =======================================================
import org.jboss.netty.buffer.ChannelBuffer;
import org.jboss.netty.buffer.ChannelBuffers;
import org.jboss.netty.channel.Channel;
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.handler.codec.oneone.OneToOneEncoder;
public class Ecd extends OneToOneEncoder {
@Override
protected Object encode(ChannelHandlerContext ctx, Channel channel,
Object msg) throws Exception {
if (!(msg instanceof String)) {
return msg;
}
byte[] data = ((String) msg).getBytes();
ChannelBuffer buf = ChannelBuffers.dynamicBuffer(data.length + 4, ctx
.getChannel().getConfig().getBufferFactory());
buf.writeInt(data.length);
buf.writeBytes(data);
return buf;
}
}
//Client handler =======================================================
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicLong;
import java.util.logging.Level;
import java.util.logging.Logger;
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.ChannelStateEvent;
import org.jboss.netty.channel.Channels;
import org.jboss.netty.channel.ExceptionEvent;
import org.jboss.netty.channel.MessageEvent;
import org.jboss.netty.channel.SimpleChannelUpstreamHandler;
/**
* Handler implementation for the echo client. It initiates the ping-pong
* traffic between the echo client and server by sending the first message to
* the server.
*/
public class EchoClientHandler extends SimpleChannelUpstreamHandler {
private static final Logger logger = Logger
.getLogger(EchoClientHandler.class.getName());
private final AtomicLong transferredBytes = new AtomicLong();
private final AtomicInteger counter = new AtomicInteger(0);
private final AtomicLong startTime = new AtomicLong(0);
private String dd;
/**
* Creates a client-side handler.
*/
public EchoClientHandler(String data) {
dd = data;
}
public long getTransferredBytes() {
return transferredBytes.get();
}
@Override
public void channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e) {
// Send the first message to kick off the ping-pong with the server.
startTime.set(System.currentTimeMillis());
Channels.write(ctx.getChannel(), dd);
}
@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
// Send back the received message to the remote peer.
transferredBytes.addAndGet(((String) e.getMessage()).length());
int i = counter.incrementAndGet();
int N = 100000;
if (i < N) {
e.getChannel().write(e.getMessage());
} else {
ctx.getChannel().close();
System.out.println(N * 1.0
/ (System.currentTimeMillis() - startTime.get()) * 1000);
}
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {
// Close the connection when an exception is raised.
logger.log(Level.WARNING, "Unexpected exception from downstream.",
e.getCause());
e.getChannel().close();
}
}
//Client main =======================================================
import java.net.InetSocketAddress;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Executors;
import org.jboss.netty.bootstrap.ClientBootstrap;
import org.jboss.netty.channel.ChannelFuture;
import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.channel.ChannelPipelineFactory;
import org.jboss.netty.channel.Channels;
import org.jboss.netty.channel.socket.nio.NioClientSocketChannelFactory;
/**
* Sends one message when a connection is open and echoes back any received data
* to the server. Simply put, the echo client initiates the ping-pong traffic
* between the echo client and server by sending the first message to the
* server.
*/
public class EchoClient {
private final String host;
private final int port;
public EchoClient(String host, int port) {
this.host = host;
this.port = port;
}
public void run() {
// Configure the client.
final ClientBootstrap bootstrap = new ClientBootstrap(
new NioClientSocketChannelFactory(
Executors.newCachedThreadPool(),
Executors.newCachedThreadPool()));
// Set up the pipeline factory.
bootstrap.setPipelineFactory(new ChannelPipelineFactory() {
public ChannelPipeline getPipeline() throws Exception {
return Channels.pipeline(new Dcd(), new Ecd(),
new EchoClientHandler("abcdd"));
}
});
bootstrap.setOption("sendBufferSize", 1048576);
bootstrap.setOption("receiveBufferSize", 1048576);
bootstrap.setOption("tcpNoDelay", true);
bootstrap.setOption("writeBufferLowWaterMark", 32 * 1024);
bootstrap.setOption("writeBufferHighWaterMark", 64 * 1024);
List<ChannelFuture> list = new ArrayList<ChannelFuture>();
for (int i = 0; i < 1; i++) {
// Start the connection attempt.
ChannelFuture future = bootstrap.connect(new InetSocketAddress(
host, port));
// Wait until the connection is closed or the connection
// attempt
// fails.
list.add(future);
}
for (ChannelFuture f : list) {
f.getChannel().getCloseFuture().awaitUninterruptibly();
}
// Shut down thread pools to exit.
bootstrap.releaseExternalResources();
}
private static void testOne() {
final String host = "192.168.0.102";
final int port = 8000;
new EchoClient(host, port).run();
}
public static void main(String[] args) throws Exception {
testOne();
}
}
//server handler =======================================================
import java.util.concurrent.atomic.AtomicLong;
import java.util.logging.Level;
import java.util.logging.Logger;
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.Channels;
import org.jboss.netty.channel.ExceptionEvent;
import org.jboss.netty.channel.MessageEvent;
import org.jboss.netty.channel.SimpleChannelUpstreamHandler;
/**
* Handler implementation for the echo server.
*/
public class EchoServerHandler extends SimpleChannelUpstreamHandler {
private static final Logger logger = Logger
.getLogger(EchoServerHandler.class.getName());
private final AtomicLong transferredBytes = new AtomicLong();
public long getTransferredBytes() {
return transferredBytes.get();
}
@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
// Send back the received message to the remote peer.
transferredBytes.addAndGet(((String) e.getMessage()).length());
Channels.write(ctx.getChannel(), e.getMessage());
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {
// Close the connection when an exception is raised.
logger.log(Level.WARNING, "Unexpected exception from downstream.",
e.getCause());
e.getChannel().close();
}
}
//Server main =======================================================
import java.net.InetSocketAddress;
import java.util.concurrent.Executors;
import org.jboss.netty.bootstrap.ServerBootstrap;
import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.channel.ChannelPipelineFactory;
import org.jboss.netty.channel.Channels;
import org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory;
import org.jboss.netty.handler.execution.ExecutionHandler;
import org.jboss.netty.handler.execution.OrderedMemoryAwareThreadPoolExecutor;
/**
* Echoes back any received data from a client.
*/
public class EchoServer {
private final int port;
public EchoServer(int port) {
this.port = port;
}
public void run() {
// Configure the server.
ServerBootstrap bootstrap = new ServerBootstrap(
new NioServerSocketChannelFactory(
Executors.newCachedThreadPool(),
Executors.newCachedThreadPool()));
System.out.println(Runtime.getRuntime().availableProcessors() * 2);
final ExecutionHandler executionHandler = new ExecutionHandler(
new OrderedMemoryAwareThreadPoolExecutor(16, 1048576, 1048576));
// Set up the pipeline factory.
bootstrap.setPipelineFactory(new ChannelPipelineFactory() {
public ChannelPipeline getPipeline() throws Exception {
System.out.println("new pipe");
return Channels.pipeline(new Dcd(), new Ecd(),
executionHandler, new EchoServerHandler());
}
});
bootstrap.setOption("child.sendBufferSize", 1048576);
bootstrap.setOption("child.receiveBufferSize", 1048576);
bootstrap.setOption("child.tcpNoDelay", true);
bootstrap.setOption("child.writeBufferLowWaterMark", 32 * 1024);
bootstrap.setOption("child.writeBufferHighWaterMark", 64 * 1024);
// Bind and start to accept incoming connections.
bootstrap.bind(new InetSocketAddress(port));
}
public static void main(String[] args) throws Exception {
int port = 8000;
new EchoServer(port).run();
}
}
I have found the reason now; it was hard work, but full of pleasure.
When an ExecutionHandler is added, each message is wrapped in a Runnable task and executed in a ChildExecutor. The key point is this: a task may be added to a ChildExecutor just as the executor is about to exit, and that task will then be ignored by the ChildExecutor.
I added three lines of code and some comments; the final code looks like the following, and it works now. Should I mail the author?
private final class ChildExecutor implements Executor, Runnable {
private final Queue<Runnable> tasks = QueueFactory
.createQueue(Runnable.class);
private final AtomicBoolean isRunning = new AtomicBoolean();
public void execute(Runnable command) {
// TODO: What todo if the add return false ?
tasks.add(command);
if (!isRunning.get()) {
doUnorderedExecute(this);
} else {
}
}
public void run() {
// check if its already running by using CAS. If so just return
// here. So in the worst case the thread
// is executed and do nothing
boolean acquired = false;
if (isRunning.compareAndSet(false, true)) {
acquired = true;
try {
Thread thread = Thread.currentThread();
for (;;) {
final Runnable task = tasks.poll();
// if the task is null we should exit the loop
if (task == null) {
break;
}
boolean ran = false;
beforeExecute(thread, task);
try {
task.run();
ran = true;
onAfterExecute(task, null);
} catch (RuntimeException e) {
if (!ran) {
onAfterExecute(task, e);
}
throw e;
}
}
//TODO NOTE (I added): between here and "isRunning.set(false)", some new tasks may be added.
} finally {
// set it back to not running
isRunning.set(false);
}
}
//TODO NOTE (I added): do the remaining work.
if (acquired && !isRunning.get() && tasks.peek() != null) {
doUnorderedExecute(this);
}
}
}
This was a bug and will be fixed in 3.4.0.Alpha2.
See https://github.com/netty/netty/issues/234