Netty 3.2.6 - fewer readableBytes than expected - Java

I am using Netty 3.2.6 (I know it's way too old, but current project constraints do not allow me to upgrade for the next two months) for a server solution. I do not have control over the client.
We have a FrameDecoder in the pipeline to handle a custom message that consists of a header and a body. One of the header fields contains the length of the body. For a specific message, the header reports a size of 1076 bytes, but when reading the message I always get only 1024 bytes (readableBytes()), and hence the decoder fails.
I checked with Wireshark (with a suitable dissector for the protocol). The client always sends 1076 bytes and puts the same value in the header, so the client does not seem to be misbehaving.
My receive buffer size is already set to a higher value on the server bootstrap, as follows, with no apparent effect:
bootstrap.setOption("child.receiveBufferSize", 1*1024*1024);
This is seen consistently, but only for this specific message; all other messages (whose sizes range from 100 to 1000 bytes) are handled correctly without any issues.
Please let me know what else could cause this issue and how to fix it.
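For context, a Netty 3.x FrameDecoder must return null until the whole frame has been buffered, because TCP delivers a byte stream rather than discrete messages; readableBytes() only reports what has arrived so far, which is why 1024 of 1076 bytes can show up in a single decode call. A minimal sketch of a length-aware decoder, assuming a hypothetical header layout with a 4-byte body-length field at offset 0 (adjust HEADER_SIZE and the offset to the real protocol):

import org.jboss.netty.buffer.ChannelBuffer;
import org.jboss.netty.channel.Channel;
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.handler.codec.frame.FrameDecoder;

public class LengthAwareDecoder extends FrameDecoder {
    private static final int HEADER_SIZE = 8; // assumed header size

    @Override
    protected Object decode(ChannelHandlerContext ctx, Channel channel,
                            ChannelBuffer buffer) {
        if (buffer.readableBytes() < HEADER_SIZE) {
            return null; // header not fully arrived yet; wait for more data
        }
        // Peek at the length field without consuming the header.
        int bodyLength = buffer.getInt(buffer.readerIndex());
        if (buffer.readableBytes() < HEADER_SIZE + bodyLength) {
            return null; // body incomplete (e.g. only 1024 of 1076 bytes so far)
        }
        return buffer.readBytes(HEADER_SIZE + bodyLength); // one complete frame
    }
}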

Related

Spring Integration - Which Deserializer to use for an endless stream of bytes with only a start-byte (being part of the message)

We are trying to read enOcean data from TCP.
The protocol description for enOcean ESP3-Messages says:
As soon as a Sync.-Byte (value 0x55) is identified, the subsequent 4 byte-Header is compared with the corresponding CRC8H value.
If the result is a match the Sync.-Byte is correct. Consequently, the ESP3 packet is detected properly and the subsequent data will be passed.
If the Header does not match the CRC8H, the value 0x55 does not correspond to a Sync.-Byte. The next 0x55 within the data stream is picked and the verification is repeated.
Until now we used a device (client) that automatically closes the connection to our server after sending a burst of messages within a very small timeframe (a few milliseconds). Therefore we were able to use a simple ByteArrayRawSerializer: whenever the connection was closed we got the byte array, read the data, found all the sync bytes, and were able to interpret the messages.
Now there is a new device that holds the connection open for a very long time (several minutes), so we need another way to read the data from the stream. Because there is no end-of-message byte, the sync byte 0x55 is part of the message, and a ByteArrayLengthHeaderSerializer doesn't suit us either, we wonder what we could use:
Is there any deserializer usable for our scenario? Is it possible to write our own deserializer (in this specific scenario)? Or is there another (simpler) way we should follow? Maybe use an interceptor?
You would need to write a custom deserializer.
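A minimal sketch of what such a deserializer might look like, implementing Spring's org.springframework.core.serializer.Deserializer: it scans for 0x55, validates the 4-byte header against its CRC8H, and on a mismatch rescans from the next byte, as the quoted protocol description requires. The header field offsets and the crc8 routine below are assumptions based on that description; substitute the real CRC8H algorithm from the ESP3 spec.

import java.io.IOException;
import java.io.InputStream;
import java.io.PushbackInputStream;
import java.util.Arrays;
import org.springframework.core.serializer.Deserializer;

public class Esp3Deserializer implements Deserializer<byte[]> {

    @Override
    public byte[] deserialize(InputStream inputStream) throws IOException {
        PushbackInputStream in = new PushbackInputStream(inputStream, 5);
        while (true) {
            int b;
            while ((b = in.read()) != 0x55) { // scan for a candidate sync byte
                if (b < 0) throw new IOException("Stream closed");
            }
            byte[] headerAndCrc = readFully(in, 5); // 4-byte header + CRC8H
            byte[] header = Arrays.copyOf(headerAndCrc, 4);
            if (crc8(header) != headerAndCrc[4]) {
                in.unread(headerAndCrc); // false sync: rescan these bytes for the next 0x55
                continue;
            }
            // Assumed ESP3 header layout: data length (2 bytes), optional length, packet type.
            int dataLen = ((header[0] & 0xFF) << 8) | (header[1] & 0xFF);
            int optLen = header[2] & 0xFF;
            return readFully(in, dataLen + optLen + 1); // payload + trailing CRC8D
        }
    }

    private static byte[] readFully(InputStream in, int n) throws IOException {
        byte[] buf = new byte[n];
        int off = 0;
        while (off < n) {
            int read = in.read(buf, off, n - off);
            if (read < 0) throw new IOException("Stream closed");
            off += read;
        }
        return buf;
    }

    private static byte crc8(byte[] data) {
        // Placeholder CRC8 (polynomial 0x07); replace with the spec's CRC8H routine.
        byte crc = 0;
        for (byte d : data) {
            crc ^= d;
            for (int i = 0; i < 8; i++) {
                crc = (byte) (((crc & 0x80) != 0) ? ((crc << 1) ^ 0x07) : (crc << 1));
            }
        }
        return crc;
    }
}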

JavaMail API - Reading a large outlook mailbox (>3000) for message content

I have a requirement to read a mailbox holding more than 3000 mails. I need to read these mails, fetch their contents, and feed each body into another API. It's easy to do with a few mails (for me it was approximately 250), but after that it slowed down significantly. Is the accepted answer in this link the only choice, or is there some alternative way?
NOTE: I have purposely not pasted any snippet, as I used the straightforward approach, and yes, I did use a FetchProfile too.
JavaMail IMAP performance is usually controlled by the speed of the server, the number of network round trips required, and the amount of data being read. Using a FetchProfile is essential to reducing the number of round trips. Don't forget to consider the IMAP-specific FetchProfile items.
JavaMail will fetch message contents a buffer at a time. Large messages will obviously require many buffer fetches, and thus many round trips. You can change the size of the buffer (default 16K) by setting the mail.imap.fetchsize property. Or you can disable these partial fetches and require it to fetch the entire contents in one operation by setting the mail.imap.partialfetch property to false. Obviously the latter will require significant memory on the client if large messages are being read.
The JavaMail IMAP provider does not (usually; see below) cache message contents on the client, but it does cache message headers. When processing a very large number of messages, it is sometimes helpful to invalidate the cached headers when done processing a message by calling the IMAPMessage.invalidateHeaders method. When using IMAPFolder.FetchProfileItem.MESSAGE, the message contents are cached, and will also be invalidated by the above call.
Beyond that, you should examine the JavaMail debug output to ensure only the expected IMAP commands are being issued and that you're not doing something in your program that would cause it to issue unnecessary IMAP commands. You can also look at time-stamps for the protocol commands to determine whether the time is being spent on the server or the client.
Only after all of that has failed to yield acceptable performance, and you're sure the performance problems are not on the server (which you can't fix), would you need to look into custom IMAP commands as suggested in the link you referred to.
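Pulling those suggestions together, a sketch of what the tuning might look like (the host, credentials, and what you do with each body are placeholders):

import java.util.Properties;
import javax.mail.FetchProfile;
import javax.mail.Folder;
import javax.mail.Message;
import javax.mail.Session;
import javax.mail.Store;
import com.sun.mail.imap.IMAPFolder;
import com.sun.mail.imap.IMAPMessage;

public class MailboxReader {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("mail.imap.fetchsize", "1048576"); // 1 MB partial fetches instead of 16 KB
        // props.put("mail.imap.partialfetch", "false"); // or fetch whole bodies in one operation
        Session session = Session.getInstance(props);
        session.setDebug(true); // inspect which IMAP commands are actually issued

        Store store = session.getStore("imap");
        store.connect("imap.example.com", "user", "password"); // placeholders
        Folder inbox = store.getFolder("INBOX");
        inbox.open(Folder.READ_ONLY);

        Message[] messages = inbox.getMessages();
        FetchProfile fp = new FetchProfile();
        fp.add(FetchProfile.Item.ENVELOPE);
        fp.add(IMAPFolder.FetchProfileItem.MESSAGE); // IMAP-specific: prefetch full content
        inbox.fetch(messages, fp);

        for (Message m : messages) {
            Object body = m.getContent(); // hand this to the downstream API
            ((IMAPMessage) m).invalidateHeaders(); // drop cached headers/content when done
        }
        inbox.close(false);
        store.close();
    }
}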

Using a chunked-encoding request with variable chunk size in Java (HttpURLConnection)

I've been searching for weeks now for a way to use chunked transfer encoding in a Java client without coding my own HttpURLConnection.
Java's HttpURLConnection expects a fixed chunk size, which is not usable for me. The data consists of several messages that differ in size and must be sent to the server in near-real-time. The system currently (in a prelive/UAT state) works with fixed 1024-byte chunks, but since most messages are significantly shorter, this is a waste of bandwidth that is not acceptable in production.
Furthermore, messages larger than 1024 bytes would be chopped apart, so a) the server would need to reassemble them, and b) the last part of a message would not be sent until enough data is available to fill 1024 bytes (even worse: no longer near-real-time).
Does anybody have an idea how to work around this restriction of Java's HttpURLConnection (non-compliant with RFC 2616, as it does not fully implement it) without having to code everything on top of URLConnection?
I did not find any way to hook into the needed functions to simply set a new chunk size for each batch of data.
My current option: duplicate all the HttpURLConnection code and modify the parts dealing with chunking (e.g. adding a flush() function to adjust the chunk size and send what's there).
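One possible workaround, if duplicating the JDK classes is off the table, is to hand-roll the chunked framing over a plain Socket, so each message becomes exactly one chunk. This is only a sketch under that assumption (no response parsing, no TLS, no keep-alive handling), and host/port/path are placeholders:

import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class ChunkedPoster implements AutoCloseable {
    private final Socket socket;
    private final OutputStream out;

    public ChunkedPoster(String host, int port, String path) throws Exception {
        socket = new Socket(host, port);
        out = socket.getOutputStream();
        String headers = "POST " + path + " HTTP/1.1\r\n"
                + "Host: " + host + "\r\n"
                + "Transfer-Encoding: chunked\r\n"
                + "Content-Type: application/octet-stream\r\n"
                + "\r\n";
        out.write(headers.getBytes(StandardCharsets.US_ASCII));
    }

    // Emits exactly one chunk per message: hex size, CRLF, data, CRLF.
    public void sendMessage(byte[] message) throws Exception {
        out.write((Integer.toHexString(message.length) + "\r\n")
                .getBytes(StandardCharsets.US_ASCII));
        out.write(message);
        out.write("\r\n".getBytes(StandardCharsets.US_ASCII));
        out.flush(); // push the chunk out immediately (near-real-time)
    }

    @Override
    public void close() throws Exception {
        out.write("0\r\n\r\n".getBytes(StandardCharsets.US_ASCII)); // terminating chunk
        out.flush();
        socket.close();
    }
}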

How does the SSLContext.getInstance() method work?

The entire code is quite complicated, so I am coming straight to the point.
The code is as follows:
SSLContext ctx = SSLContext.getInstance("TLS");
If you read the docs for the getInstance(String protocol) method, they say:
This method traverses the list of registered security Providers, starting
with the most preferred Provider. A new SSLContext object encapsulating
the SSLContextSpi implementation from the first Provider that supports the
specified protocol is returned.
Note that the list of registered providers may be retrieved via the
Security.getProviders() method.
For me, the Security.getProviders() method gives the following providers:
Now, I have verified that the "TLS" protocol is in com.sun.net.ssl.internal.ssl.Provider (index 2) and is always selected.
But the corresponding SSLContextSpi object comes out different in Java 6 and Java 7. In Java 6 I am getting com.sun.net.ssl.internal.ssl.SSLContextImpl@7bbf68a9 and in Java 7 I am getting sun.security.ssl.SSLContextImpl$TLS10Context@615ece16. This has a very bad effect: when I later create SSL sockets, they are of different classes.
So why is this happening? Is there a workaround? I want the same com.sun.net.ssl.internal.ssl.SSLContextImpl@7bbf68a9 SSLContextSpi object, encapsulated in the com.sun.net.ssl.internal.ssl.Provider context (which is the same in both cases).
This is having very bad effect as when later I am creating SSL socket they are of different class.
This is not a bad effect. Which actual class you get from the factories in the public API is at the discretion of the JRE implementation: these concrete classes are not part of the public API.
The fact that you get different classes between Java 6 and Java 7 doesn't really matter. Even if they had the same name, it wouldn't make sense to compare them to one another.
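A quick way to see this for yourself (a minimal sketch; it prints whatever your particular JRE registers):

import java.security.Provider;
import java.security.Security;
import javax.net.ssl.SSLContext;

public class ShowTlsProvider {
    public static void main(String[] args) throws Exception {
        // List the registered providers in preference order.
        for (Provider p : Security.getProviders()) {
            System.out.println(p);
        }
        SSLContext ctx = SSLContext.getInstance("TLS");
        // The Provider is part of the public API; the SPI class behind it is not.
        System.out.println("Selected provider: " + ctx.getProvider().getName());
    }
}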
EDIT:
The public int read(byte[] b) function reads only 1 byte when I give it a byte array of length 4, and I have also confirmed that there are 4 bytes in the stream.
SSLSocket in Java 7 is behaving correctly when you get this. In fact, it's probably behaving better, since this initial 1-byte read is due to a BEAST-prevention measure. I'll copy and paste my own answer to that question, since you're making exactly the same mistake.
The assumption you're making about reading the byte[] exactly as you wrote it on the other end is a classic TCP mistake. It's not actually specific to SSL/TLS; it could also happen with a plain TCP connection.
There is no guarantee in TCP (or in SSL/TLS) that the reader's buffer will be filled with exactly the same packet lengths as the packets in the writer's buffer. All TCP guarantees is in-order delivery, so you'll eventually get all your data, but you have to treat it as a stream.
This is why protocols that use TCP rely on indicators and delimiters to tell the other end when to stop reading certain messages.
For example, HTTP 1.1 uses a blank line to indicate where the headers end, and the Content-Length header (or chunked transfer encoding) to tell the recipient what entity length to expect. SMTP also uses line returns and a final "." at the end of a message.
If you're designing your own protocol, you need to define a way for the recipient to know when what you define as meaningful units of data are delimited. When you read the data, read such indicators, and fill in your read buffer until you get the amount of bytes you expect or until you find the delimiter that you've defined.
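In code, that means looping until the expected number of bytes has arrived, rather than assuming one read() returns one message (a minimal sketch):

import java.io.IOException;
import java.io.InputStream;

public final class StreamUtil {
    // Fills the buffer completely or throws; a single read() may return fewer bytes.
    public static byte[] readExactly(InputStream in, int expected) throws IOException {
        byte[] buf = new byte[expected];
        int off = 0;
        while (off < expected) {
            int n = in.read(buf, off, expected - off);
            if (n < 0) throw new IOException("Connection closed mid-message");
            off += n;
        }
        return buf; // equivalently: new DataInputStream(in).readFully(buf)
    }
}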

Sending UDP in Camel/Netty and receiving extra bytes in NIO

I have two applications: one sends UDP messages using Camel with the Netty component, and one receives UDP messages in Java NIO with a DatagramChannel.
When receiving the data, I've noticed that there are an extra 29 bytes prepended to the front of my message. Camel/Netty prints out the outgoing bytes and they look fine, but when I do a packet.getData() as soon as the message comes in on the other side, it has extra bytes on the front (and they're always the same bytes).
Is Camel or Netty wrapping the packet before sending it?
[edit] Additional information:
- Camel is printing the log statement, not Netty
- the bytes prepended to the message change when the content of the message changes (only two bytes are changed)
I know this question is pretty old now, but I hit exactly this problem and it took me a while to find the solution. So here it is...
Basically the problem boils down to confusion about what camel-netty will do when you tell it to send something sufficiently low-level, like a byte[], in a UDP packet. I expect that, like me, the OP assumed they were sending raw data, but camel-netty uses Java object serialization by default, resulting in those extra "random" bytes appearing before the expected data.
The solution is to change the encoder/decoder used by the endpoint(s) in question. There are various built-in alternatives, but you can subclass them if you need something more... weird. Either way, the process is:
1) Add the "encoder=#myEncoder" and "decoder=#myDecoder" options as appropriate to the endpoint URIs, e.g.
String destinationUri = "netty:udp://localhost:4242"
+ "?sync=false&encoder=#myEncoder";
2) Add a mapping from "myEncoder" to an instance of your new encoder class in a Camel registry (same for "myDecoder"), then use that registry when constructing the CamelContext, e.g.
SimpleRegistry registry = new SimpleRegistry();
registry.put("myEncoder", new StringEncoder());
registry.put("myDecoder", new StringDecoder());
// CamelContext is an interface; instantiate the default implementation
CamelContext camelContext = new DefaultCamelContext(registry);
Obviously the real trick is in finding or making an encoder/decoder that suits your needs. A blog post at znetdevelopment really helped me, though it goes one step further and puts the custom encoders in a custom pipeline (I ignored that bit).
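For the raw-bytes case described above, a sketch of a pass-through encoder for Netty 3.x (the class name is mine; wire it into the registry as "myEncoder" just like the StringEncoder example):

import org.jboss.netty.buffer.ChannelBuffers;
import org.jboss.netty.channel.Channel;
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.handler.codec.oneone.OneToOneEncoder;

public class RawBytesEncoder extends OneToOneEncoder {
    @Override
    protected Object encode(ChannelHandlerContext ctx, Channel channel, Object msg)
            throws Exception {
        if (msg instanceof byte[]) {
            // Wrap the payload as-is, so no serialization header is prepended.
            return ChannelBuffers.wrappedBuffer((byte[]) msg);
        }
        return msg; // let other message types pass through unchanged
    }
}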
