I'm writing an application server that will receive SIP and DNS messages from the network.
When I receive a message from the network, I understand from the documentation that I first get a ChannelBuffer. I would like to determine which kind of message has been received (SIP or DNS) and decode it.
To determine the message type, I could dedicate a port to each type of message, but I would be interested to know whether another solution exists. My question is more about how to decode the ChannelBuffer.
Is there a ChannelHandler provided by Netty to decode SIP or DNS messages? If not, what would be the right place in the type hierarchy to write my custom ChannelHandler?
To illustrate my question, let's take the HttpRequestDecoder as an example; its hierarchy is:
java.lang.Object
org.jboss.netty.channel.SimpleChannelUpstreamHandler
org.jboss.netty.handler.codec.frame.FrameDecoder
org.jboss.netty.handler.codec.replay.ReplayingDecoder<HttpMessageDecoder.State>
org.jboss.netty.handler.codec.http.HttpMessageDecoder
org.jboss.netty.handler.codec.http.HttpRequestDecoder
Also, do I need to use two different ChannelHandlers for decoding and encoding, or is it possible to use a single ChannelHandler for both?
Thanks
If you really have a requirement for port unification (an example here), i.e. receiving different protocols on the same port, then you would have to detect the protocol in a handler and take appropriate action. This could be as simple as inserting different handlers into the pipeline.
However, I find it very improbable that SIP and DNS would share the same port, hence no need to complicate matters.
I haven't seen a SIP decoder/encoder for Netty, but depending on what you want to do with the message, the HTTP decoder is a very good starting point (and could be made simpler, since chunking is not supported in SIP).
I would strongly recommend not to try to combine DNS and SIP decoding in one handler (or any other combination for that matter). Keep the handlers as simple and coherent as possible. Combine handlers instead, if needed.
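For completeness, if you did end up receiving both protocols on one port, a protocol-detection handler for Netty 3 could look roughly like the sketch below. SipMessageDecoder and DnsMessageDecoder are hypothetical placeholders for your own handlers, and the detection heuristic is deliberately naive (SIP starts with ASCII text, a DNS message with a binary header):

import org.jboss.netty.buffer.ChannelBuffer;
import org.jboss.netty.channel.Channel;
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.handler.codec.frame.FrameDecoder;

public class ProtocolDetectionHandler extends FrameDecoder {

    @Override
    protected Object decode(ChannelHandlerContext ctx, Channel channel, ChannelBuffer buffer) throws Exception {
        if (buffer.readableBytes() < 4) {
            return null; // not enough data yet to decide
        }
        ChannelPipeline pipeline = ctx.getPipeline();
        if (looksLikeSip(buffer)) {
            pipeline.addLast("sipDecoder", new SipMessageDecoder()); // hypothetical
        } else {
            pipeline.addLast("dnsDecoder", new DnsMessageDecoder()); // hypothetical
        }
        pipeline.remove(this); // detection done, hand over to the chosen decoder
        return buffer.readBytes(buffer.readableBytes()); // forward the buffered bytes upstream
    }

    private boolean looksLikeSip(ChannelBuffer buffer) {
        // SIP requests start with an ASCII method ("INVITE", "REGISTER", ...)
        // and responses with "SIP/2.0"; a DNS header starts with a binary ID.
        short first = buffer.getUnsignedByte(buffer.readerIndex());
        return first >= 'A' && first <= 'Z';
    }
}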
There are quite a few related questions (e.g. Java Socket specify a certain network interface for outgoing connections), but I couldn't find a satisfying, i.e. practical, solution to my problem:
On my target (Linux) platform there are multiple network interfaces (eth0...ethN) from which a Server S is reachable. The default route is normally via eth0, however I'm trying to connect S via e.g. eth4 using
new java.net.Socket(IP_of_S, targetport, IP_of_eth4, srcport)
or
sock.bind( eth4_SocketAddress );
sock.connect( S_SocketAddress );
In this example case the IP of eth4 is assigned correctly, but traffic still goes out through the interface of the default route. I've learned this is due to the "weak end system model" (RFC 1122). However, I'm wondering whether there is still a Java-based solution for achieving my original goal, or whether I have to trigger external iptables or route calls from my program.
(BTW: The outgoing interface needs to be chosen dynamically at runtime, i.e. my program closes the connection and tries to reconnect using a different outbound interface quite frequently.)
As far as I know, you cannot choose the outgoing interface without some routing table setup.
In my opinion, the best solution is to set up a bunch of source-specific routes, routes that match on the source address of a packet, and bind to a given source address in order to select the route (as you already do). There are two ways of achieving that:
use ip rule and multiple routing tables — this is described in http://lartc.org/howto/lartc.rpdb.html ;
use ip route add ... from .... As far as I know, this only works for IPv6, but avoids the complexity of multiple routing tables.
You'll find some background about source-specific routing in https://arxiv.org/pdf/1403.0445v4.pdf (disclaimer, I'm a co-author).
I saw this description in the Oracle website:
"Since TCP by its nature is a stream based protocol, in order to reuse an existing connection, the HTTP protocol has to have a way to indicate the end of the previous response and the beginning of the next one. Thus, it is required that all messages on the connection MUST have a self-defined message length (i.e., one not defined by closure of the connection). Self demarcation is achieved by either setting the Content-Length header, or in the case of chunked transfer encoded entity body, each chunk starts with a size, and the response body ends with a special last chunk."
See Oracle doc
I don't know how to implement this; can someone give me an example Java implementation?
If you are trying to implement "self-demarcation" in the same way as HTTP does it:
the HTTP 1.1 specification defines how it works,
the source code of (say) the Apache HTTP libraries is an example of its implementation.
In fact, it is advisable NOT to try and implement this (HTTP) yourself from scratch. Use an existing implementation.
On the other hand, if you simply want to implement your own ad-hoc self-demarcation scheme, it is really easy to do.
The sender figures out the size of the message, in bytes or characters or some other unit that makes sense.
The sender sends the message size, followed by the message itself.
At the other end:
The receiver reads the message size, and then reads the requisite number of bytes or characters to form the message body.
An alternative is for the sender to send the message followed by a special end-of-message marker. To make this work, either you need to guarantee that no message will contain the end-of-message marker, or you need to use some sort of escaping mechanism.
Implementing these schemes is simple Java programming.
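For instance, here is a minimal sketch of the length-prefix scheme over plain streams, assuming each message is a UTF-8 string preceded by its length as a 4-byte int:

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class LengthPrefixedMessages {

    // Sender: write the size first, then the message bytes.
    public static void sendMessage(DataOutputStream out, String message) throws IOException {
        byte[] body = message.getBytes(StandardCharsets.UTF_8);
        out.writeInt(body.length);
        out.write(body);
        out.flush();
    }

    // Receiver: read the size, then exactly that many bytes.
    public static String receiveMessage(DataInputStream in) throws IOException {
        int length = in.readInt();
        byte[] body = new byte[length];
        in.readFully(body);
        return new String(body, StandardCharsets.UTF_8);
    }
}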
What makes a connection reusable
That is answered by the text that you quoted in your Question.
I have two applications, one that sends UDP messages using Camel with the Netty component, and one that receives UDP messages in Java NIO with DatagramChannel.
When receiving the data, I've noticed that there are an extra 29 bytes prepended to the front of my message. Camel prints out the outgoing bytes and they look fine, but when I do a packet.getData() as soon as the message comes in on the other side, it has extra stuff on the front (and it's always the same bytes).
Is Camel or Netty wrapping the packet before sending it?
[edit] Additional information:
- Camel is printing the log statement, not Netty
- the bytes prepended to the message change when the content of the message changes (only two bytes change)
I know this question is pretty old now, but I hit exactly this problem and it took me a while to find the solution. So here it is...
Basically the problem boils down to confusion about what camel-netty will do when you tell it to send something sufficiently low-level, like a byte[], in a UDP packet. I expect that, like me, the OP assumed they were sending raw data, but camel-netty uses Java object serialization by default, resulting in those extra "random" bytes appearing before the expected data.
The solution is to change the encoder/decoder used by the endpoint(s) in question. There are various built-in alternatives, but you can subclass them if you need something more... weird. Either way, the process is:
1) Add the "encoder=#myEncoder" and "decoder=#myDecoder" options as appropriate onto the endpoint URIs, e.g.
String destinationUri = "netty:udp://localhost:4242"
+ "?sync=false&encoder=#myEncoder";
2) Add a mapping from "myEncoder" to an instance of your new encoder class in a Camel registry, and do the same for "myDecoder". Then use that registry when constructing the CamelContext, e.g.
SimpleRegistry registry = new SimpleRegistry();
registry.put("myEncoder", new StringEncoder());
registry.put("myDecoder", new StringDecoder());
// CamelContext is an interface; DefaultCamelContext accepts a Registry
CamelContext camelContext = new DefaultCamelContext(registry);
Obviously the real trick is in finding or making an Encoder/Decoder that suits your needs. A blog post at znetdevelopment really helped me, though it goes one step further and puts the custom Encoders in a custom Pipeline (I ignored that bit).
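For reference, a pass-through encoder for Netty 3 (the version camel-netty was built on at the time) might look like the sketch below; it simply wraps a byte[] instead of Java-serializing it. RawBytesEncoder is my own name; you would register it as "myEncoder" in the registry instead of the StringEncoder above.

import org.jboss.netty.buffer.ChannelBuffers;
import org.jboss.netty.channel.Channel;
import org.jboss.netty.channel.ChannelHandler;
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.handler.codec.oneone.OneToOneEncoder;

@ChannelHandler.Sharable // stateless, so the same instance can be reused across channels
public class RawBytesEncoder extends OneToOneEncoder {

    @Override
    protected Object encode(ChannelHandlerContext ctx, Channel channel, Object msg) throws Exception {
        if (msg instanceof byte[]) {
            return ChannelBuffers.wrappedBuffer((byte[]) msg); // send the bytes as-is
        }
        return msg; // anything else passes through unchanged
    }
}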
I would like to create a class that works much like a DefaultChannelGroup, with one difference: the message being written belongs to a connection, and the channel associated with that connection should not have the message written back to it.
Think of the chat application where we should write to all other channels other than the one belonging to the user who wrote the message.
Looking at the implementation of the DefaultChannelGroup, it seems I could add a new method named write that takes a given channel and the message, iterates the non-server channels, and skips any channel equal to the given one.
You could extend DefaultChannelGroup to do as you outlined, but a channel group is already an iterable set of channels. If you already have a channel, you can perform the write directly on it (i.e. you don't need to get it from the ChannelGroup), or, if for some reason you really wanted to get it from the channel group, you could call ChannelGroup.find(channel.getId()).
I guess if you are doing this for the purpose of narrowing down to a single channel, it's an issue of cosmetics. I am not panning it... personal preference! If it makes it better for you, go for it.
The more interesting scenario, which would be a truly useful extension to the DefaultChannelGroup, would be to assign individual channels a group of attributes encoded as a bit-mask. Then you might be able to tell a BitMaskChannelGroup to write a message to all channels matching a provided bit-mask argument, which might be the encoding for all chat-room users over the age of 21 living in New Jersey, or all routing devices where the manufacturer is Cisco.
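Coming back to the simpler exclusion case you asked about, a sketch along those lines might look like this (Netty 3 API; the class and method names are my own):

import org.jboss.netty.channel.Channel;
import org.jboss.netty.channel.group.DefaultChannelGroup;

public class ExcludingChannelGroup extends DefaultChannelGroup {

    public ExcludingChannelGroup(String name) {
        super(name);
    }

    // Write the message to every client channel except the one it came from.
    public void writeExcept(Channel origin, Object message) {
        for (Channel channel : this) {
            // Accepted (client) channels have a parent; server channels do not.
            if (channel.getParent() != null && !channel.equals(origin)) {
                channel.write(message);
            }
        }
    }
}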
I have a chat program implemented in Java. The client can send lots of different types of information to the server (e.g. joining the server and sending a username and password, requesting a private chat with another user on the server, disconnecting from the server, etc.).
I'm looking for the correct way to have the server/client differentiate between 'text' messages that are just meant to be chat text messages sent from one client to the others, and 'command' messages (disconnect, request private chat, request file transfer, etc) that are meant for the server or the client.
I see two options:
Use serialized objects, and determine what they are on the receiving end by doing an 'instanceof'
Send the data as a byte array, reserving the first N bytes of the array to specify the 'type' of the incoming data.
What is the 'correct' way to do this? How do real protocols (OSCAR, IRC) handle this situation?
I've googled around on this topic and only found examples/discussions centering on simple Java chat applications. None go into detail about protocol design (which I ultimately intend to practice).
Thanks for any help...
The second approach is much better, because serialization is a complex mechanism that can easily be used in the wrong way (for example, you may bind yourself to the internal layout of a concrete serialized class). Plus, your protocol will be bound to a JVM-specific mechanism.
Using a "protocol header" for message differentiation is a common approach in network protocols (FTP, HTTP, etc.). It is even better when it is in text form (people will be able to read it).
You typically have a little message header identifying the type of content in all messages, including standard text/chat messages.
Either of your two suggestions is fine. (In your second approach, you probably want to reserve some bytes for the length of the array as well.)
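A minimal sketch of the header-based approach (option 2), assuming a one-byte type followed by a 4-byte payload length; the type constants below are made up for illustration:

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class ChatProtocol {

    public static final byte TYPE_CHAT_TEXT    = 0x01;
    public static final byte TYPE_PRIVATE_CHAT = 0x02;
    public static final byte TYPE_DISCONNECT   = 0x03;

    // Frame layout: [1 byte type][4 byte length][payload]
    public static void writeMessage(DataOutputStream out, byte type, String payload) throws IOException {
        byte[] body = payload.getBytes(StandardCharsets.UTF_8);
        out.writeByte(type);
        out.writeInt(body.length);
        out.write(body);
        out.flush();
    }

    public static void readMessage(DataInputStream in) throws IOException {
        byte type = in.readByte();
        int length = in.readInt();
        byte[] body = new byte[length];
        in.readFully(body);
        String payload = new String(body, StandardCharsets.UTF_8);
        switch (type) {
            case TYPE_CHAT_TEXT:    /* broadcast payload to the other clients */ break;
            case TYPE_PRIVATE_CHAT: /* route payload to the requested user */    break;
            case TYPE_DISCONNECT:   /* clean up this client's connection */      break;
            default: throw new IOException("Unknown message type: " + type);
        }
    }
}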