How does the SSLContext.getInstance() method work? - java

The entire code is quite complicated, so I am coming straight to the point.
The code is as follows:
SSLContext ctx = SSLContext.getInstance("TLS");
If you read the docs for the getInstance(String protocol) method, they say:
This method traverses the list of registered security Providers, starting
with the most preferred Provider. A new SSLContext object encapsulating
the SSLContextSpi implementation from the first Provider that supports the
specified protocol is returned.
Note that the list of registered providers may be retrieved via the
Security.getProviders() method.
For me, the Security.getProviders() method gives the following providers.
Now I have verified that the "TLS" protocol is in com.sun.net.ssl.internal.ssl.Provider (index 2) and that this provider is always selected.
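For reference, a minimal sketch of one way to check which provider supplies the TLS SSLContext, using only the standard java.security API:

import java.security.Provider;
import java.security.Security;

// Minimal sketch: list which registered providers offer an SSLContext
// implementation for "TLS" and which SPI class each maps to.
public class ListTlsProviders {
    public static void main(String[] args) {
        for (Provider provider : Security.getProviders()) {
            Provider.Service service = provider.getService("SSLContext", "TLS");
            if (service != null) {
                System.out.println(provider.getName() + " -> " + service.getClassName());
            }
        }
    }
}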
But the corresponding SSLContextSpi object differs between Java 6 and Java 7. In Java 6 I get com.sun.net.ssl.internal.ssl.SSLContextImpl#7bbf68a9 and in Java 7 I get sun.security.ssl.SSLContextImpl$TLS10Context#615ece16. This has a very bad effect, because when I later create SSL sockets they are of different classes.
So why is this happening? Is there a workaround? I want the same com.sun.net.ssl.internal.ssl.SSLContextImpl#7bbf68a9 SSLContextSpi object, encapsulated in the com.sun.net.ssl.internal.ssl.Provider context (which is the same in both cases).

This is having very bad effect as when later I am creating SSL socket they are of different class.
This is not a bad effect. Which actual class you get from the factories in the public API is at the discretion of the JRE implementation: these concrete classes are not part of the public API.
The fact that you get different classes between Java 6 and Java 7 doesn't really matter. Even if they had the same name, it wouldn't make sense to compare them to one another.
EDIT:
The public int read(byte[] b) function reads only 1 byte when I give it a
byte array of length 4, and I have also confirmed that there are 4 bytes
in the stream.
SSLSocket in Java 7 is behaving correctly when you get this. In fact, it's probably behaving better, since this initial 1-byte read is due to the BEAST-prevention measure. I'll copy and paste my own answer to that question, since you're making exactly the same mistake.
The assumption you're making about reading the byte[] exactly as you wrote it on the other end is a classic TCP mistake. It's not actually specific to SSL/TLS; it could just as well happen with a plain TCP connection.
There is no guarantee in TCP (and in SSL/TLS) that the reader's buffer will be filled with the exact same packet length as the packets in the writer's buffer. All TCP guarantees is in-order delivery, so you'll eventually get all your data, but you have to treat it as a stream.
This is why protocols that use TCP rely on indicators and delimiters to tell the other end when to stop reading certain messages.
For example, HTTP 1.1 uses a blank line to indicate where the headers end, and it uses the Content-Length header to tell the recipient what entity length to expect (or chunked transfer encoding). SMTP also uses line returns and a '.' on a line of its own at the end of a message.
If you're designing your own protocol, you need to define a way for the recipient to know where what you define as a meaningful unit of data is delimited. When you read the data, read such indicators, and fill your read buffer until you get the number of bytes you expect or until you find the delimiter that you've defined.
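For illustration, here is a minimal sketch of a read loop that keeps reading until the expected number of bytes has arrived, rather than assuming a single read() returns everything (the class and method names are placeholders):

import java.io.IOException;
import java.io.InputStream;

public class StreamUtils {
    // Keep calling read() until the buffer is full; a single read() on a TCP or
    // SSL stream may legitimately return fewer bytes than were written.
    public static byte[] readExactly(InputStream in, int expectedLength) throws IOException {
        byte[] buffer = new byte[expectedLength];
        int offset = 0;
        while (offset < expectedLength) {
            int n = in.read(buffer, offset, expectedLength - offset);
            if (n < 0) {
                throw new IOException("Stream ended after " + offset + " of " + expectedLength + " bytes");
            }
            offset += n;
        }
        return buffer;
        // DataInputStream.readFully(buffer) does the same thing for fixed-length reads.
    }
}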

Related

Spring Integration - Which Deserializer to use for an endless stream of bytes with only a start-byte (being part of the message)

We are trying to read enOcean data from TCP.
The protocol description for enOcean ESP3-Messages says:
As soon as a Sync.-Byte (value 0x55) is identified, the subsequent 4 byte-Header is compared with the corresponding CRC8H value.
If the result is a match the Sync.-Byte is correct. Consequently, the ESP3 packet is detected properly and the subsequent data will be passed.
If the Header does not match the CRC8H, the value 0x55 does not correspond to a Sync.-Byte. The next 0x55 within the data stream is picked and the verification is repeated.
Until now we used a device (client) that automatically closes the connection to our server after the end of a number of messages coming within a very small timeframe (some milliseconds). Therefore we were able to use a simple ByteArrayRawSerializer. Whenever the connection was closed we got the byte array, read the data, found all the sync-Bytes and were able to interpret the messages.
Now there is a new device that holds the connection open for a very long time (several minutes), so we need another way to read the data from the stream. Because there is no end-of-message byte, the sync byte 0x55 is part of the message, and a ByteArrayLengthHeaderSerializer doesn't suit us either, we wonder what we could use:
Is there any Deserializer usable for our scenario? Is it possible to write our own Deserializer (in this specific scenario)? Or is there another (simpler) way we should follow? Maybe use an Interceptor?
You would need to write a custom deserializer.
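A rough sketch of what such a deserializer could look like is below. Note that the header layout and the crc8() routine are assumptions for illustration, not the real ESP3 definitions, so check them against the specification before using anything like this:

import java.io.IOException;
import java.io.InputStream;

import org.springframework.core.serializer.Deserializer;

// Sketch of a custom deserializer for ESP3-style framing: hunt for the sync
// byte 0x55, read the 4-byte header, verify it against the CRC8H byte, and
// only then read the payload.
public class Esp3Deserializer implements Deserializer<byte[]> {

    private static final int SYNC_BYTE = 0x55;

    @Override
    public byte[] deserialize(InputStream in) throws IOException {
        while (true) {
            int b = in.read();
            if (b < 0) {
                throw new IOException("Stream closed before a valid sync byte was found");
            }
            if (b != SYNC_BYTE) {
                continue; // keep scanning for the next candidate sync byte
            }
            byte[] header = readFully(in, 4);
            int crc8h = in.read();
            if (crc8h < 0) {
                throw new IOException("Stream closed while reading CRC8H");
            }
            if (crc8(header) != (byte) crc8h) {
                // Not a real sync byte. A production version would also rescan the
                // bytes just consumed for another 0x55; this sketch simply continues.
                continue;
            }
            // Assumed header layout: data length (2 bytes), optional length (1), packet type (1).
            int dataLength = ((header[0] & 0xFF) << 8) | (header[1] & 0xFF);
            int optionalLength = header[2] & 0xFF;
            return readFully(in, dataLength + optionalLength + 1); // +1 for the trailing CRC8D
        }
    }

    private static byte[] readFully(InputStream in, int length) throws IOException {
        byte[] buffer = new byte[length];
        int offset = 0;
        while (offset < length) {
            int n = in.read(buffer, offset, length - offset);
            if (n < 0) {
                throw new IOException("Stream closed mid-frame");
            }
            offset += n;
        }
        return buffer;
    }

    // Placeholder CRC8; a real implementation must use the polynomial from the ESP3 spec.
    private static byte crc8(byte[] data) {
        byte crc = 0;
        for (byte value : data) {
            crc ^= value;
        }
        return crc;
    }
}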

What makes a connection reusable

I saw this description on the Oracle website:
"Since TCP by its nature is a stream based protocol, in order to reuse an existing connection, the HTTP protocol has to have a way to indicate the end of the previous response and the beginning of the next one. Thus, it is required that all messages on the connection MUST have a self-defined message length (i.e., one not defined by closure of the connection). Self demarcation is achieved by either setting the Content-Length header, or in the case of chunked transfer encoded entity body, each chunk starts with a size, and the response body ends with a special last chunk."
See Oracle doc
I don't know how to implement this; can someone give me an example Java implementation?
If you are trying to implement "self-demarcation" in the same way as HTTP does it:
the HTTP 1.1 specification defines how it works,
the source code of (say) the Apache HTTP libraries is an example of its implementation.
In fact, it is advisable NOT to try and implement this (HTTP) yourself from scratch. Use an existing implementation.
On the other hand, if you simply want to implement your own ad-hoc self-demarcation scheme, it is really easy to do.
The sender figures out the size of the message, in bytes or characters or some other unit that makes sense.
The sender sends the message size, followed by the message itself.
At the other end:
The receiver reads the message size, and then reads the requisite number of bytes or characters to form the message body.
An alternative is for the sender to send the message followed by a special end-of-message marker. To make this work, either you need to guarantee that no message will contain the end-of-message marker, or you need to use some sort of escaping mechanism.
Implementing these schemes is simple Java programming.
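A minimal sketch of the first scheme, using a 4-byte length prefix and UTF-8 text (the class and method names are just placeholders):

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

// Length-prefixed messages: the sender writes a 4-byte length followed by the
// message bytes; the receiver reads the length, then exactly that many bytes.
public final class LengthPrefixedMessages {

    public static void send(OutputStream out, String message) throws IOException {
        byte[] body = message.getBytes(StandardCharsets.UTF_8);
        DataOutputStream dataOut = new DataOutputStream(out);
        dataOut.writeInt(body.length); // message size first
        dataOut.write(body);           // then the message itself
        dataOut.flush();
    }

    public static String receive(InputStream in) throws IOException {
        DataInputStream dataIn = new DataInputStream(in);
        int length = dataIn.readInt(); // read the declared size
        byte[] body = new byte[length];
        dataIn.readFully(body);        // block until all the bytes have arrived
        return new String(body, StandardCharsets.UTF_8);
    }
}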
What makes a connection reusable
That is answered by the text that you quoted in your Question.

Send a Data Record through TCP in java

I'm a Delphi developer and recently I decided to port one of my programs to java and I'm doing the server side program in java to make it cross-platform.
In Delphi, I could easily send a record as an array of bytes through TCP, but I don't have much experience in Java and I have no idea how to do it in an easy but proper way.
Here is a sample of my data record:
type
Tlogin = record
username : string[50];
password : string[50];
version : word;
end;
And I would just simply send this type of record after making it an array of bytes.
Any ideas how to create such data records in Java, and how do I set the size for strings? Or any better suggestions for handling strings when sending them through TCP?
In Java, you simply send objects over the sockets between a client and server and there are a number of ways to do that. For a related reference please visit
Sending objects over Java sockets
For a more step by step example visit the following link:
JGuru - Sending objects over a socket
In your case your object would look as follows
class TLogin implements Serializable
{
    private String userName;
    private String password;
    private int version;

    // implement your object's methods below
}
Fields within the object that you do not want to participate in serialization and de-serialization can be marked as transient
For a detailed step by step example of serialization visit
Java Serialization Example
Edit based on the comment provided to my earlier response.
Serialization in simple words: it is a technique wherein a Java object is converted to a byte sequence (essentially, all fields of the object except those marked transient are part of this byte sequence). This byte sequence can then be used to re-construct the object at a later point in time. The byte sequence obtained by serializing an object can either be persisted to a store or transmitted over a network channel, in order to re-construct the object at a later stage.
Serialization is at the core of many communication protocols in a Java client-server environment, whether you use RMI, sockets, or SOAP.
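As an illustration, a minimal sketch of sending the TLogin object above over a socket with Java serialization; the host, port, and constructor are placeholders, not part of the class shown earlier:

import java.io.ObjectOutputStream;
import java.net.Socket;

// Serialize the TLogin object and send it over a TCP socket. The server side
// would read it back with ObjectInputStream.readObject().
public class LoginClient {
    public static void main(String[] args) throws Exception {
        TLogin login = new TLogin("alice", "secret", 1); // assumes such a constructor exists
        try (Socket socket = new Socket("localhost", 9000);
             ObjectOutputStream out = new ObjectOutputStream(socket.getOutputStream())) {
            out.writeObject(login); // converts the object to a byte sequence and writes it
            out.flush();
        }
    }
}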
Having talked about serialization, we come to the client-server problem.
In case the plan is only to port the server-side code to Java, you have the following options to enable communication between the client and server:
Design the server to use SOAP/REST to communicate with the Delphi client.
Augment your record with a header data structure that contains information about the length and type of the data being stored, and use this header within the byte sequence transmitted by the client to re-construct the object on the server side.
However, in my opinion the first method is better than the second since
it is a standard, inter-operable technique. If at a later point in time you wish to port the client to some other language like C# or Python, you do not need to change the server.
it lets the web service infrastructure handle the nitty-gritty of SOAP/REST serialization and lets you focus on the business logic.
I hope this lengthy answer points you in the direction of a solution.

Sending UDP in Camel/Netty and receiving extra bytes in NIO

I have two applications, one that sends UDP messages using Camel with the Netty component, and one that receives UDP messages in Java NIO with DatagramChannel.
When receiving the data, I've noticed that there are an extra 29 bytes prepended to the front of my message. Camel/Netty prints out the outgoing bytes and they look fine, but when I do a packet.getData() as soon as the message comes in on the other side, it has extra stuff at the front (and it's always the same bytes).
Is Camel or Netty wrapping the packet before sending it?
[edit] Additional information:
- Camel is printing the log statement, not Netty
- the bytes prepended to the message change when the content of the message changes (only two bytes are changed)
I know this question is pretty old now, but I hit exactly this problem and it took me a while to find the solution. So here it is...
Basically the problem boils down to confusion of what camel-netty will do when you're telling it to send something sufficiently low-level like a byte[] in a UDP packet. I expect that like me the OP assumed they were setting raw data, but camel-netty uses Java Object Serialization by default - resulting in those extra "random" bytes appearing before the expected data.
The solution is to change the encoder/decoder used by the endpoint(s) in question. There are various built-in alternatives, but you can subclass them if you need something more... weird. Either way, the process is:
1) Add the "encoder=#myEncoder" and "decoder=#myDecoder" options as appropriate on to the endpoint URIs. e.g.
String destinationUri = "netty:udp://localhost:4242"
+ "?sync=false&encoder=#myEncoder";
2) Add a mapping from "myEncoder" to an instance of your new Encoder class to a Camel Registry. Same for myDecoder. Then use that registry when constructing the CamelContext. e.g.
SimpleRegistry registry = new SimpleRegistry();
registry.put("myEncoder", new StringEncoder());
registry.put("myDecoder", new StringDecoder());
CamelContext camelContext = new DefaultCamelContext(registry);
Obviously the real trick is in finding or making an Encoder/Decoder that suits your needs. A blog post at znetdevelopment really helped me, though it goes one step further and puts the custom Encoders in a custom Pipeline (I ignored that bit).
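For instance, if the goal is simply to send the raw byte[] payload without Java serialization, the built-in ByteArrayEncoder/ByteArrayDecoder from org.jboss.netty.handler.codec.bytes may already be enough. This assumes a Netty 3 based camel-netty; newer versions have io.netty equivalents:

SimpleRegistry registry = new SimpleRegistry();
registry.put("myEncoder", new ByteArrayEncoder());   // writes the byte[] to the wire as-is
registry.put("myDecoder", new ByteArrayDecoder());   // hands received data back as byte[]
CamelContext camelContext = new DefaultCamelContext(registry);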

What is the proper way to handle strings : Java client & C++ server

I'm writing a C++ server/client application (TCP) that is working fine but I will soon have to write a Java client which obviously has to be compatible with the C++ server it connects to.
As for now, when the server or client receives strings (text), it loops through the bytes until a '\0' is found, which marks the end of the string...
Here's the question: is it still good practice to handle strings that way when communicating between Java and C++ rather than between C++ and C++?
There's one thing you should read about: Encodings. Basically, the same sequence of bytes can be interpreted in different ways. As long as you pass things around in C++ or Java, things will agree on their meaning, but when using the net (i.e. a byte stream) you must make up your mind. If in doubt, read about and use UTF-8.
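On the Java side, that mostly boils down to always being explicit about the charset rather than relying on the platform default, for example:

import java.nio.charset.StandardCharsets;

// Encode with an explicit charset instead of the platform default...
byte[] bytes = "héllo".getBytes(StandardCharsets.UTF_8);
// ...and decode with the same charset on the receiving side.
String text = new String(bytes, StandardCharsets.UTF_8);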
Consider using Protocol Buffers or Thrift instead of rolling your own protocol.
