An application-level message is sent over the network in a series of packets that are assembled on the receiving side and passed up to the application level.
Is it possible in Java to do network programming at the level of these individual packets?
Or can Java only see the "application-level" packet? I.e. the "big packet" that is assembled from all these network packets?
I tried researching this on Google, but the results were really confusing.
The confusion is due to the fact that some resources about UDP seem to indicate that the operation is on packets, while others say that Java cannot work with raw sockets, which implies that it works at a higher level of abstraction. I could not find an answer to exactly what I am looking for.
If yes, which package does this?
Is it possible in Java to do network programming at the level of these individual packets?
Yes, but it's very unlikely you would want individual packets.
Or can Java only see the "application-level" packet?
Pure Java can only see TCP streams and UDP datagrams, which have a one-to-one mapping with packets, but you have no access to the UDP header.
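For illustration, here is a minimal sketch of the UDP side (the port number and buffer size are arbitrary choices for the example): each receive() hands you the payload of exactly one datagram, and the UDP/IP headers are never visible to you.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;

public class UdpReceiveSketch {
    public static void main(String[] args) throws Exception {
        // Port 4445 is arbitrary for this sketch.
        try (DatagramSocket socket = new DatagramSocket(4445)) {
            byte[] buf = new byte[1500];                      // roughly one Ethernet MTU
            DatagramPacket packet = new DatagramPacket(buf, buf.length);
            socket.receive(packet);                           // blocks until one datagram arrives
            // You get exactly the UDP payload of one datagram: no IP or UDP header,
            // and if the payload was larger than the buffer, the excess is silently discarded.
            System.out.println("Received " + packet.getLength()
                    + " bytes from " + packet.getSocketAddress());
        }
    }
}
```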
I.e. the "big packet" that is assembled from all these network packets?
You don't get packets at all, big or small. You read data, and whatever data is available is read (up to the size of your buffer).
If yes, which package does this?
You can use JPcap to see individual packets; however, this is rarely useful unless you need accurate timestamping of each packet or you need to trace dropped packets.
This uses WinPcap (Windows) or libpcap (Linux) via JNI.
In most of the cases where I have seen this used, it was a lot of work for little gain.
From my point of view, an answer mentioning JNI means that Java does not support it (since you have to actually code in another language for what you need).
Sockets, Files, GUI components all use JNI in the end. By this definition you can't do anything which uses a system call in Java, because the OS is not written in Java.
I don't think this is a useful definition of what you can do in Java.
1) Pure Java can only see TCP streams. What about UDP?
You don't have access to the packet's header with any protocol in Java without libPCap.
I assume this point means no packet access
Not without any additional libraries.
2) In most of these cases where I have seen this used it was a lot of work? Why?
Because it is very low level, and lots of details you don't normally have to worry about are exposed to you. Note: you might not get a packet, as packets can be dropped while you attempt to record them, and you won't be able to ask for them again, so you miss them.
It is just a library right?
Correct.
Doesn't it work?
Why do you say that?
I am trying to see if what I need to do can be done in Java or should look into other languages.
IMHO, you won't find it any easier in another language.
I read in the JPcap docs that it cannot reshape traffic, e.g. drop packets. Why can't it do that?
You can't force a network to drop a packet, and you can't trick the kernel into dropping one either. If you think about what a dropped packet is, the answer is fairly obvious.
You won't see packetized segments if some other device breaks your large UDP packet into smaller packets.
When reading TCP you'll read bytes as a stream. You'll have no idea how these bytes were actually sent. You could read back 100 bytes, and they could have been sent over 10 packets for all you know.
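As a sketch of what that means in practice (readFully is a hypothetical helper, not an API mentioned above): reading a fixed number of bytes from a TCP socket always requires a loop, because each read() may return any fraction of them.

```java
import java.io.IOException;
import java.io.InputStream;

public class TcpStreamReadSketch {
    // Reads exactly len bytes from a connected socket's InputStream.
    // Each read() may return any number of bytes, entirely unrelated to
    // how many TCP segments (or IP packets) carried them on the wire.
    static byte[] readFully(InputStream in, int len) throws IOException {
        byte[] buf = new byte[len];
        int total = 0;
        while (total < len) {
            int n = in.read(buf, total, len - total);
            if (n == -1) {
                throw new IOException("stream closed after " + total + " bytes");
            }
            total += n;
        }
        return buf;
    }
}
```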
There's no way to access this information in Java without JNI. Of course, with JNI you can do anything :)
You seem to think that UDP == raw. It doesn't. Raw == IP, or even Ethernet. UDP is a layer over IP, as is TCP. You can't work with raw sockets in Java, but you can work with UDP and TCP.
Related
So I am making a simple network game, in which I will be using UDP segments.
I have a couple of questions to ask:
Does Java detect or provide the means to know that a segment has been corrupted?
If not, how can I check whether a segment is corrupted?
Java by itself has nothing to generically detect a corrupt UDP segment. UDP has an (optional) checksum which is checked by the OS, and any segment whose checksum is wrong will be dropped and never delivered to the application in the first place. In the case of UDP, this means that such a segment simply gets lost from the perspective of the application. Note that this will not detect every possible error, but only the more common ones like a single bit flip.
If the application needs more than this, it has to be implemented explicitly at the application level, for example by using an HMAC, as in the sketch below.
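A minimal sketch of that idea, assuming a shared secret key and HmacSHA256 (the method names seal/open are just illustrative): the sender appends an HMAC tag to each payload, and the receiver recomputes and compares it before trusting the data.

```java
import java.security.MessageDigest;
import java.util.Arrays;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class HmacSketch {
    private static final String ALGORITHM = "HmacSHA256";

    // Appends an HMAC tag to the payload before it is sent in a datagram.
    static byte[] seal(byte[] payload, byte[] key) throws Exception {
        Mac mac = Mac.getInstance(ALGORITHM);
        mac.init(new SecretKeySpec(key, ALGORITHM));
        byte[] tag = mac.doFinal(payload);                 // 32 bytes for HmacSHA256
        byte[] out = Arrays.copyOf(payload, payload.length + tag.length);
        System.arraycopy(tag, 0, out, payload.length, tag.length);
        return out;
    }

    // Verifies and strips the tag on the receiving side; returns null if it doesn't match.
    static byte[] open(byte[] datagram, byte[] key) throws Exception {
        Mac mac = Mac.getInstance(ALGORITHM);
        mac.init(new SecretKeySpec(key, ALGORITHM));
        int tagLen = mac.getMacLength();
        if (datagram.length < tagLen) return null;
        byte[] payload = Arrays.copyOfRange(datagram, 0, datagram.length - tagLen);
        byte[] actual = Arrays.copyOfRange(datagram, datagram.length - tagLen, datagram.length);
        byte[] expected = mac.doFinal(payload);
        return MessageDigest.isEqual(expected, actual) ? payload : null;
    }
}
```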
This is a similar answer, though it is not exactly what I want. I want to do the following two things:
1. I want to find out whether all the bytes have been sent to the receiver.
2. I also want to know the current remaining capacity of the socket's output buffer, without attempting a write to it.
Taking your numbered points in order:
The only way you can find that out is by having the peer application acknowledge the receipt.
There isn't such an API in Java. As far as I know there isn't one at the BSD sockets layer either, but I'm not familiar with the outer limits of Linux where they may have introduced some such thing.
You cannot know. The data is potentially buffered by the OS and TCP/IP stack, and there is no method for determining if it has actually been placed on the wire. Even knowing it was placed on the wire is no guarantee of anything as it could be lost in transit.
For UDP you will never know if the data was received by the destination system unless you write a UDP-based protocol such that the remote system acknowledges the data.
For TCP the protocol stack will ensure that your code is notified if the data is lost in transit, but it may be many seconds before you receive confirmation.
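As a sketch of the acknowledgement approach (purely illustrative; the length-prefix framing and the single ack byte are assumptions, not part of any standard API): the sender only knows the receiver got the data once the receiving application says so.

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.EOFException;
import java.net.Socket;

public class AckSketch {
    // Sends one message and blocks until the peer explicitly acknowledges it.
    // The peer is assumed to write back a single byte after it has consumed the message.
    static void sendWithAck(Socket socket, byte[] message) throws Exception {
        DataOutputStream out = new DataOutputStream(socket.getOutputStream());
        DataInputStream in = new DataInputStream(socket.getInputStream());

        out.writeInt(message.length);   // simple length-prefixed framing
        out.write(message);
        out.flush();                    // hands the bytes to the OS; says nothing about delivery

        int ack = in.read();            // only this tells us the application on the other end got it
        if (ack == -1) {
            throw new EOFException("peer closed before acknowledging");
        }
    }
}
```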
I'm learning to make Minecraft servers similar to Bukkit for fun. I've dealt with NIO before but not very much and not in a practical way. I'm encountering an issue right now where Minecraft has many variable-length packets and since there's not any sort of consistent "header" for these packets of data, NIO is doing this weird thing where it fragments packets because the data isn't always sent immediately in full.
I learned recently that this is a thing from this thread: Java: reading variable-size packets using NIO. I'd rather not use Netty/MINA/etc. because I'd like to learn this all myself, as I'm doing this for the education and not with the intention of making it some huge project.
So my question is, how exactly do I go about preventing this sort of fragmenting of packets? I tried toggling Nagle's algorithm via java.net.Socket#setTcpNoDelay(boolean on), but oddly enough, all this does is make it so that every single time a packet is sent, it's fragmented, whereas when I don't have it enabled, the first packet always comes through OK and then the following packets become fragmented.
I followed the Rox Java NIO Tutorial pretty closely so I know this code should work, but that tutorial only went as far as echoing a string message back to peers, not complicated bytestreams.
Here's my code. For some context, I'm using Executor#execute(Runnable) to create the two threads. Since I'm still learning about threads and concurrency and trying to piece them together with networking, any feedback on that would be very appreciated as well!!
ServerSocketManager
ServerDataManager
Thanks a lot, I know this is quite a bit of stuff to take in, so I can't thank you enough for taking the time to read & respond!!
TCP is ALWAYS a stream of bytes. You don't get to control when you get them or how many you get. They can come in at any time, in any amount. That's why protocols exist.
Headers are a common part of a protocol to tell you how much data you need to read before you have the whole message.
So the short answer here is: You can't.
Everything you're saying you don't want to do -- that's what you have to do.
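To make that concrete, here is a sketch of message framing over NIO, assuming a simple 4-byte length prefix in front of every message (an assumption for the example, not the actual Minecraft framing): incoming bytes are accumulated in a buffer, and a message is only handed out once all of its bytes have arrived.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;
import java.util.ArrayList;
import java.util.List;

public class FrameAccumulator {
    // Accumulates whatever bytes the channel happens to deliver.
    // Assumes no single message exceeds the buffer capacity.
    private final ByteBuffer pending = ByteBuffer.allocate(64 * 1024);

    // Reads from the channel and returns every complete, length-prefixed
    // message received so far. Partial messages stay buffered until the
    // rest of their bytes arrive on a later read.
    List<byte[]> readMessages(SocketChannel channel) throws IOException {
        if (channel.read(pending) == -1) {
            throw new IOException("peer closed the connection");
        }
        pending.flip();
        List<byte[]> messages = new ArrayList<>();
        while (pending.remaining() >= 4) {
            pending.mark();
            int length = pending.getInt();           // 4-byte big-endian length header
            if (pending.remaining() < length) {
                pending.reset();                      // body not fully here yet; wait for more
                break;
            }
            byte[] body = new byte[length];
            pending.get(body);
            messages.add(body);
        }
        pending.compact();                            // keep any leftover partial message
        return messages;
    }
}
```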
I need to implement a Peer To Peer File Transfer.
What protocol should I use? TCP or UDP? And why?
TCP is generically the best way to go when you want to ensure that your data gets to its intended destination with appropriate integrity.
In your case I would personally choose TCP, because you will probably end up reimplementing TCP in some form inside your UDP packets otherwise (ask for block (SYN), answer: I have the block (SYN ACK), OK send it to me (ACK)... data: (PUSH ACK)... OK done: (RST)).
Also, generally speaking, UDP is the way to go when you want to broadcast and you don't really care whether the data gets there or not, meaning there is either high redundancy or low importance/integrity... Since files require high integrity, it doesn't make a lot of sense to go with UDP, again, unless you want to go through the extra work.
The only upside to UDP would be the fact that it is stateless, which could have some good uses in a file-sharing program.
Bottom line... go with your heart...
I'd recommend using TCP.
If you use UDP then you end up having to design and implement flow control and detection / retransmission of lost packets in your application-level protocol. Doing this in a way that gives you decent performance in good and bad networking conditions is hard work. For simple peer to peer, the payoff is generally not worth the effort.
FOLLOWUP
You ask:
and I plan to implement inter-LAN calling over WiFi; for that I would have to use UDP, right?
Assuming that IP is implemented and the routing is set up correctly over your WiFi network(s), both UDP and TCP should work just fine.
UDP does not guarantee that the packets will be delivered, which is something that TCP does. Please take a look at this previous SO post which highlights the difference between these two protocols.
TCP helps ensure that your packets are received by the client and should be your choice for a file transfer because you want the file to be reproducible on the other end exactly as it is sent.
You can implement the file transfer using UDP too, but you would have to write your own logic for ensuring that the contents of the file are assembled correctly.
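For illustration, a minimal TCP-based sketch (sendFile and the 8 KB buffer size are just example choices): the sender simply streams the file's bytes into the socket and lets TCP handle ordering and retransmission.

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.file.Files;
import java.nio.file.Path;

public class FileSendSketch {
    // Streams a file over an already-connected TCP socket.
    // TCP takes care of ordering, duplication and retransmission, so the
    // receiver can simply write the incoming stream to disk.
    static void sendFile(Socket socket, Path file) throws Exception {
        try (InputStream in = Files.newInputStream(file);
             OutputStream out = socket.getOutputStream()) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
            out.flush();
            // Closing the output stream closes the socket, which also signals
            // end-of-file to the receiver for a single-file transfer.
        }
    }
}
```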
Since most users really care that all of their data makes it to the remote target with consistency, TCP is your best bet since packet error handling is managed. UDP is typically better for applications where loss is acceptable (e.g. player position in games) or where retransmission is not an option (e.g. streaming audio/video).
More on streaming
In the streaming A/V case, the data is always shipped with some error correction bits to fix some large percentage of errors. The endpoint manages the extra time required to detect (and potentially correct) the errors by buffering the stream. Nevertheless, it's obviously a ton of work (on both sides) to make it all happen and probably isn't worth it for P2P file transfer.
Update 1: audio streaming comment
The constraints are really based on required throughput, latency, and bit error rate (BER). Since these are likely both mobile devices, possibly operating across two carriers' cellular networks, I'd opt for UDP with very high error-correction capability for audio. Users will likely be more displeased with no audio and greater latency than with slightly corrupted audio. Nevertheless, I would still use TCP for file transfer.
My Java application receives data through UDP. It uses the data for an online data mining task. This means that it is not critical to receive each and every packet, which is what makes the choice of UDP reasonable in the first place. Also, the data is transferred over a LAN, so the physical network should be reasonably reliable. Anyway, I have no control over the choice of protocol or the data included.
Still, I am concerned about packet loss that may arise from overload and long processing time of the application itself. I would like to know how often these things happen and how much data is lost.
Ideally I am looking for a way to monitor packet loss continuously in the production system. But a partial solution would also be welcome.
I realize it is impossible to always know about UDP packet losses (without control over the packet contents). I was thinking of something along the lines of packets received by the OS but never arriving at the application; or maybe some clever solution inside the application, like a fast reading thread that drops data when its client is busy.
We are deploying under Windows Server 2003 or 2008.
The problem is that there is no way to tell that you have lost any packets if you are relying on the UDP format.
If you need to know this type of information, you need to build it into the format that you layer on top of UDP (like the TCP sequence number). If you do that and the format is simple, then you can easily create filters in Microsoft's NetMon or Wireshark to log and track that information.
Note that a TCP-style sequence number also helps to detect out-of-order packets, which may happen when using UDP.
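If you do control the payload format, a minimal sketch of that sequence-number idea (the 8-byte header and the loss/reorder counters are just illustrative choices): the sender prefixes each datagram with a counter, and the receiver counts gaps and reorderings.

```java
import java.nio.ByteBuffer;

public class SequencedUdpSketch {
    // Sender side: prepend a monotonically increasing sequence number to the payload.
    static byte[] wrap(long sequence, byte[] payload) {
        ByteBuffer buf = ByteBuffer.allocate(8 + payload.length);
        buf.putLong(sequence);
        buf.put(payload);
        return buf.array();
    }

    // Receiver side: compare each incoming sequence number with the one expected next.
    private long expected = 0;
    private long lost = 0;
    private long reordered = 0;

    byte[] unwrap(byte[] datagram) {
        ByteBuffer buf = ByteBuffer.wrap(datagram);
        long sequence = buf.getLong();
        if (sequence > expected) {
            lost += sequence - expected;     // gap: those datagrams never arrived (or are late)
        } else if (sequence < expected) {
            reordered++;                     // arrived after a later datagram
        }
        expected = Math.max(expected, sequence + 1);
        byte[] payload = new byte[buf.remaining()];
        buf.get(payload);
        return payload;
    }
}
```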
If you are concerned about packet loss, use TCP.
That's one reason why it was invented.