UDP port scanning in Java finds only one open UDP port

I have an assignment about port scanning: I am scanning UDP ports of some IP addresses in Java. In my program (assuming everything is OK) I can only find one open UDP port. On the other hand, port scanning with nmap finds 4 open UDP ports. Can somebody tell me why I cannot find more than one open port via Java code?
By the way, the one open port my code does find is genuinely open.
int startPortRange = 1;
int stopPortRange = 1024;
InetAddress address = InetAddress.getByName("bigblackbox.cs.binghamton.edu");
int counter = 0;
for (int i = startPortRange; i <= stopPortRange; i++) {
    counter++;
    try {
        byte[] bytes = new byte[128];
        DatagramSocket ds = new DatagramSocket();
        DatagramPacket dp = new DatagramPacket(bytes, bytes.length);
        ds.setSoTimeout(100);
        ds.connect(address, i);
        ds.send(dp);
        ds.isConnected();
        dp = new DatagramPacket(bytes, bytes.length);
        ds.receive(dp);
        ds.close();
        System.out.println("open");
        System.out.println(counter);
    } catch (InterruptedIOException e) {
        //System.out.println("closed");
    } catch (IOException e) {
        //System.out.println("closed");
    }
}
The output of the above code is:
135 open
When I do the same scan from the command line using nmap, I get more open ports.
I could not upload an image because I am a new user.
Thank you

It is impossible to provide a concrete answer, unless you provide at least:
The source code of your program.
An example of the (incorrect) output that you are getting.
The expected output for the same scenario.
Without this information there is no way for us to tell you what is wrong. For all we know, it could even be a simple case of your program terminating prematurely after finding an open port. Or a case of the open port that was last found overwriting the entries of the previous ones before being displayed.
In any case, it might be worthwhile to investigate what is being sent and received using a network sniffer, such as Wireshark. By comparing an nmap session with a session created by your program, you might be able to spot some significant difference that would help pinpoint the issue.
EDIT:
After having a look at your code and comparing with nmap, it seems that you are mistakenly handling a SocketTimeoutException as a closed port, while it could simply be the port of a server that refuses to answer the packet that you sent.
EDIT 2:
Here's the full story:
When a port is properly closed, the server sends back an ICMP Destination Unreachable packet with the Port Unreachable error code. Java translates this error into an IOException, which you correctly treat as indicating a closed port.
An open port, on the other hand, may result in two different responses from the server:
The server sends back a UDP packet, which is received by your program and definitely indicates an open port. DNS servers, for example, often respond with a Format error response. nmap shows these ports as open.
The server ignores your probe packet because it is malformed with respect to the provided service. This results in a network timeout and a SocketTimeoutException in your program.
Unfortunately there is no way to tell whether a network timeout happened because an active server ignored a malformed probe packet or because a packet filter dropped the probe. This is why nmap displays ports that time out as open|filtered.
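To make those three outcomes concrete, here is a rough sketch (not the asker's code) of how the scan loop could classify each port; the hostname and port range are taken from the question, and treating PortUnreachableException as "closed" is an assumption about how the JDK surfaces the ICMP error:

import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.PortUnreachableException;
import java.net.SocketTimeoutException;

public class UdpScanSketch {
    public static void main(String[] args) throws IOException {
        InetAddress address = InetAddress.getByName("bigblackbox.cs.binghamton.edu");
        for (int port = 1; port <= 1024; port++) {
            try (DatagramSocket ds = new DatagramSocket()) {
                ds.setSoTimeout(100);
                ds.connect(address, port);                    // lets the OS report ICMP errors to this socket
                ds.send(new DatagramPacket(new byte[8], 8));  // small probe, like the original code
                DatagramPacket reply = new DatagramPacket(new byte[512], 512);
                ds.receive(reply);                            // any reply definitely means "open"
                System.out.println(port + " open");
            } catch (PortUnreachableException e) {
                // ICMP Port Unreachable came back: the port is closed (may not be raised on every platform)
            } catch (SocketTimeoutException e) {
                // no reply at all: an open service ignoring the probe, or a filtered port
                System.out.println(port + " open|filtered");
            } catch (IOException e) {
                // other I/O errors: treat as closed, as the original code does
            }
        }
    }
}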

Related

Java TCP Socket Programming: Client and Server communicate well on the same computer, but fail to send data to each other over LAN

I am trying to set up a program where the server can communicate with multiple clients. The program is written in Java. I got it all working on the same machine, so I decided to try LAN. I converted the program to JAR files and tried connecting my laptop to my PC (both are on the same network). The connection is successful, but unfortunately only 1 message arrives at the server. As you can see in the code below, I send multiple messages (meaning that I write multiple times) via DataOutputStream. One write defines the datatype (in the following example 0 means that it's a String) and the other sends the actual message data. I also print the size of the packets in bytes and it always matches the size of the DataOutputStream instance.
DataOutputStream dOut = new DataOutputStream(clientSocket.getOutputStream());
String str = "Hello";
//Type
System.out.println("Type output size: 1");
dOut.writeByte(0);
//Message
System.out.println("Message output size: " + (str.getBytes(StandardCharsets.UTF_8).length + 2));
dOut.writeUTF(str);
System.out.println("Length of all: " + (dOut.size()));
dOut.flush();
So now when the data from the client is sent, we need to handle it on the server, which the code below does. It retrieves the InputStream from the Socket called client and wraps it in a DataInputStream. This is where it gets weird on LAN, as the stream only contains the first message.
InputStream stream = client.getInputStream();
DataInputStream dIn = new DataInputStream(stream);
while (dIn.available() > 0) {
    byte type = dIn.readByte();
    switch (type) {
        case 0:
            System.out.println(dIn.readUTF());
            break;
        case 1:
            System.out.println(dIn.readInt());
            break;
        case 2:
            System.out.println(dIn.readByte());
            break;
        default:
            throw new IllegalStateException("Unexpected value: " + type);
    }
}
If you run the Client in the IDE on, say, a laptop connected to the same network, and then run the Server on a PC connected to the same network, it works. However, it does not work if the programs are packaged as JARs.
The actual stacktrace is the following:
java.net.SocketException: Connection reset
at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:323)
at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:350)
at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:803)
at java.base/java.net.Socket$SocketInputStream.read(Socket.java:966)
at java.base/java.net.Socket$SocketInputStream.read(Socket.java:961)
at java.base/java.io.DataInputStream.readInt(DataInputStream.java:393)
The stacktrace does not tell me anything, but it points at case 0: in the switch case. It can't read the String as the DataInputStream does not contain any data (I guess?).
I would also like to state that the Server is multithreaded! I have one thread that adds the sockets when they are accepted through ServerSocket.accept() and I use the second (main thread) to read the data sent from clients.
I have selected the code above as I believe that the issue lies within it, however I am new to Socket Programming and I know that some of you would like to see other parts of the code. I will add more relevant code when I am asked.
I do not know why it acts like this, does anyone know why?
What have I tried?
I have tried waiting for packets, but that only resulted in the Server looping forever. By waiting for packets I mean not going forward until the DataInputStream contains enough bytes.
I have disabled Nagle's algorithm through setTcpNoDelay(false).
I tried sending different datatypes, but that also failed.
I tried changing the first packet to a String which resulted in the String showing up in the DataInputStream.
I have tried portforwarding the port used and I have tried disabling the firewall on both computers.
Update 1
I have been taking advice from the comments, which has led to a few discoveries:
Closing the DataOutputStream successfully sends all packets to the client.
It is also possible to build your own buffer and decode it in the server. However, it is still not possible to send any more messages after this.
It worked as a JAR because IntelliJ was being nice (Eclipse threw the same error when running in IDE)
Update 2:
I think this post is relevant. It states that a SocketException is thrown when a client closes its socket "ungracefully". And because my Client closes (as it is not in a loop) and I don't close the socket properly, it closes "ungracefully" and the data is lost. Hence the error.
The issue is now solved, and the solution is quite logical. My client does not operate in a loop; rather, it sends the data and closes the program. That sounds fine, but I forgot to properly close the socket of the client.
The reason the second 'packet' never arrived was this tiny mistake. The packet was on its way through the local network, but the client improperly closed its socket before the packet arrived at the server, which is why I got a SocketException. See this.
I solved the issue by putting socket.close(), where socket is the client's socket, after I had sent all the messages I wanted to send.
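For illustration, a minimal sketch of that fix on the client side (the host, port and payload are placeholders; the type-byte framing follows the question):

import java.io.DataOutputStream;
import java.io.IOException;
import java.net.Socket;

public class ClientSendAndClose {
    public static void main(String[] args) throws IOException {
        try (Socket clientSocket = new Socket("192.168.1.10", 5000);   // placeholder server address
             DataOutputStream dOut = new DataOutputStream(clientSocket.getOutputStream())) {
            dOut.writeByte(0);            // type 0 = String
            dOut.writeUTF("Hello");
            dOut.writeByte(1);            // type 1 = int
            dOut.writeInt(42);
            dOut.flush();                 // push everything onto the wire
        }                                 // try-with-resources closes the stream and the socket gracefully,
                                          // so the connection is shut down cleanly instead of being reset
    }
}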

Why is the source port of a DatagramPacket not persistent?

I have a thread which periodically sends a datagram packet with the following setup:
DatagramSocket mySocket;
try {
    mySocket = new DatagramSocket(9999);
    mySocket.connect(new InetSocketAddress(dstAddress, dstPort));
} catch (SocketException e) {
    e.printStackTrace();
    return;
}
byte[] sentPacketBuffer = new byte[1];
DatagramPacket sentPacket = new DatagramPacket(sentPacketBuffer, sentPacketBuffer.length);
For each call of the send method:
mySocket.send(sentPacket);
I get a different source port on the receiver side.
I've looked into this question, but the answer there is actually about setting the source port on the listener side.
Is there a way to make the source port (of the sender) persistent?
Edit
I used Android's VPNService to capture the received packets, and I dumped them to Wireshark.
As you can see, only one packet has the correct source port.
Then I figured it might be related to the destination IP. The destination IP is not reachable from this device.
If I do make this address reachable (by connecting to 192.168.49.1 and having an interface in the same subnet), I get the correct source port for all packets.
So my question now is: why is the destination's reachability (or the set of available interfaces) related to the source port?
You are mistaken. The source port of datagrams sent by this code is always 9999.
NB Keep using the same socket. Creating and destroying a new one per datagram is pointless.
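For example, a minimal sketch of that (the destination address and port are placeholders):

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetSocketAddress;

public class FixedSourcePortSender {
    public static void main(String[] args) throws Exception {
        // Bind once to local port 9999: every datagram sent from this one socket
        // carries 9999 as its source port.
        DatagramSocket socket = new DatagramSocket(9999);
        socket.connect(new InetSocketAddress("192.168.49.1", 4445)); // placeholder destination

        byte[] payload = new byte[1];
        DatagramPacket packet = new DatagramPacket(payload, payload.length);
        for (int i = 0; i < 5; i++) {
            socket.send(packet);   // same socket each time, so the source port stays fixed
            Thread.sleep(1000);
        }
        socket.close();
    }
}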

Trouble with UDP ports and DatagramSockets

I'm working on a project that is supposed to send a file from one machine to another using DatagramPackets and DatagramSockets. The implementation is supposed to mimic the TCP protocol: once the receiver gets a packet, it sends back an ACK to the sender, confirming the packet was delivered. So far my program works without making any checks for ACKs. I'm having trouble implementing the ACK messages. My receiver program shows that the ACKs are being sent, but the sender application is not getting them.
I keep getting an error when creating the socket: "java.net.BindException: Address already in use: Cannot bind". I'm confused because nowhere else in the sender application have I specified the port. I simply use DatagramSocket socket = new DatagramSocket();
but I do use
DatagramPacket packet = new DatagramPacket(packetData, packetData.length, internetAddress, 49000);
socket.send(packet);
when sending packets.
I have tried removing the DatagramSocket declaration in my waitForACK() method and using the same DatagramSocket I used to send packets, but socket.receive(packet) will hang and never receive anything because it hasn't been assigned a port to listen on.
This is my method to listen for ACKs:
public void waitForACK() {
    // listen for an ACK for a period of time
    // if the ACK is received, break and send the next packet
    // if the ACK is not received or we time out, resend the last packet
    // TODO: implement a timeout
    System.out.println("### Sender waiting for ACK");
    try {
        DatagramSocket receivingSocket = new DatagramSocket(49000);
        while (!ACKreceived) {
            byte[] buf = new byte[1500]; // actual Ethernet packet size is 1500 bytes
            // receive request
            DatagramPacket packet = new DatagramPacket(buf, buf.length);
            receivingSocket.receive(packet); //socket.receive(packet); <--
            byte[] packetData = Arrays.copyOf(packet.getData(), packet.getLength());
            ACKreceived = checkACK(packetData); // check whether the received packet contains an ACK message
        }
        System.out.println("### Sender received ACK");
    } catch (Exception e) {
        System.out.println("### never got ACK");
        System.out.println(e);
    }
}
I've also tried the following, but the socket will hang and never actually receive anything, even though the application that receives the file reports successfully sending an ACK. I'm guessing it's because it does not know to receive the ACK on port 49000.
public void waitForACK() {
    // listen for an ACK for a period of time
    // if the ACK is received, break and send the next packet
    // if the ACK is not received or we time out, resend the last packet
    // TODO: implement a timeout
    System.out.println("### Sender waiting for ACK");
    try {
        while (!ACKreceived) {
            byte[] buf = new byte[1500]; // actual Ethernet packet size is 1500 bytes
            // receive request
            DatagramPacket packet = new DatagramPacket(buf, buf.length);
            socket.receive(packet); // <--- HANGS RIGHT HERE
            byte[] packetData = Arrays.copyOf(packet.getData(), packet.getLength());
            ACKreceived = checkACK(packetData); // check whether the received packet contains an ACK message
        }
        System.out.println("### Sender received ACK");
    } catch (Exception e) {
        System.out.println("### never got ACK");
        System.out.println(e);
    }
}
You're leaking sockets.
Don't create a new socket just to wait for an ACK. You should have exactly one DatagramSocket open for the life of the application.
Try using the netstat command to check whether another program (or even your own program) is already active on the port. On Unix, netstat -lp run as root will show you; on Windows, netstat exists too, with different command-line options.
Before we get into the problems with your code: Why is the client trying to listen on port 49000?
If you don't already realize this: the local port and peer port do not have to be the same, and generally are not. When you call DatagramSocket(), you get an arbitrary local port assigned by the OS. The fact that you sent to 49000 doesn't change your local port. And if the other side of the connection just sends back to the tuple it received a packet from, that won't arrive at 49000, it will arrive at your local port.
If that's your problem, the fix is to use the second version (just use your existing socket to listen as well as sending), and then fix the other side (that you haven't shown us the code for) to send the ACK to the complete address tuple of the packet's sender, not port 49000 on the sender's host.
If you realize that, but think that both sides need to have local port 49000 for some reason… well, they probably don't. Generally, a protocol needs one side (the "server") to have a well-known port to connect, but the other side (the "client") doesn't need that. That's why you can use DatagramSocket() instead of DatagramSocket(49000) on the client and things work.
Again, same fix.
In the rare cases where both sides really do need to have a well-known port (e.g., so you can explicit open it in your company's internal firewalls), you almost certainly want the sending to also happen on that port.
So, instead of creating a DatagramSocket() to send from and a DatagramSocket(49000) to listen on, just create a DatagramSocket(49000) in the first place and use it for both.
However, note that this solution, like any solution that uses a fixed port, has two additional problems:
First, if client and server both want to bind port 49000, they can't both run on the same machine. You can renumber one of them to 49001, or just accept that.
Second, if you expect to start and stop the client frequently, it's often going to try to bind port 49000 while the OS still has a socket for that port in TIME_WAIT state, so you're going to get a bind error. This is what SO_REUSEADDR is for; use it.
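For example, a small sketch of that (assuming the fixed port 49000 from the question; the usual java.net imports are implied):

DatagramSocket sock = new DatagramSocket(null);    // create the socket unbound
sock.setReuseAddress(true);                        // SO_REUSEADDR must be set before bind()
sock.bind(new InetSocketAddress(49000));           // now bind the well-known port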
What if you really do want to use an arbitrary-port sender on the client, but a fixed-port listener? There are some cases where that makes sense, but unless you can explain why you really need this, you don't have one.
If you do, then, and only then, could you use something like your first version. But you still probably want to create the listener socket once, not each time you listen for ACKs; it just should be a different attribute from the sending socket. (And of course you still need to deal with the same things as in the last section.)
And if you really do want to create a new listener socket for each ACK, then you have to make sure you close it immediately, rather than waiting for the Java GC and the OS to collectively get around to closing it for you; otherwise the next time you wait for an ACK, you're likely to get a bind error, because the old listener socket is still bound to the port.
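Putting the main recommendation together, here is a rough, simplified sketch of a stop-and-wait sender that uses a single socket for both sending and receiving ACKs; the receiver address, payload and ACK format are placeholders, not your actual protocol:

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.SocketTimeoutException;
import java.util.Arrays;

public class StopAndWaitSender {
    public static void main(String[] args) throws Exception {
        InetAddress receiver = InetAddress.getByName("192.168.1.20"); // placeholder receiver host
        byte[] chunk = "some file data".getBytes();

        // One socket for the life of the sender: it sends the data and
        // receives ACKs on the same (OS-assigned) local port.
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.setSoTimeout(500); // resend if no ACK arrives within 500 ms

            boolean acked = false;
            while (!acked) {
                socket.send(new DatagramPacket(chunk, chunk.length, receiver, 49000));
                try {
                    byte[] buf = new byte[1500];
                    DatagramPacket ack = new DatagramPacket(buf, buf.length);
                    socket.receive(ack); // works only if the receiver replies to this packet's source address:port
                    acked = Arrays.equals(
                            Arrays.copyOf(ack.getData(), ack.getLength()), "ACK".getBytes());
                } catch (SocketTimeoutException e) {
                    // no ACK in time: loop around and resend the same packet
                }
            }
        }
    }
}

On the receiver side, the matching change is to address the ACK to the sender's actual address tuple, e.g. new DatagramPacket(ackBytes, ackBytes.length, received.getSocketAddress()), where received is the DatagramPacket you just read, rather than to a hard-coded port 49000.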

DatagramSocket Broadcast Behavior (Windows vs. Linux)

Backstory:
I have a wireless device which creates its own SSID, assigns itself an IP address using auto-IP, and begins broadcasting discovery information to 255.255.255.255. (Unfortunately, it does not easily support multicast.)
What I'm trying to do:
I need to be able to receive the discovery information, then send configuration information to the device. The problem is, with auto-IP, the "IP negotiation" process can take minutes on Windows, etc. (during which time I can see the broadcasts and can even send broadcast information back to the device).
So I enumerate all connected network interfaces (I can't directly tell which will be used to talk to the device), create a DatagramSocket for each of their addresses, and then start listening. If I receive the discovery information via a particular socket, I know I can use that same socket to send data back to the device. This works on Windows.
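A rough sketch of that per-interface setup (the discovery port and the IPv4-only filter here are placeholders, not the real values):

import java.net.DatagramSocket;
import java.net.Inet4Address;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.NetworkInterface;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class PerInterfaceListeners {
    static final int PORT_NUM = 30303; // placeholder discovery port

    public static void main(String[] args) throws Exception {
        List<DatagramSocket> sockets = new ArrayList<>();
        // Bind one socket per local IPv4 address; a discovery packet arriving on a
        // particular socket tells us which interface can reach the device.
        for (NetworkInterface nif : Collections.list(NetworkInterface.getNetworkInterfaces())) {
            if (!nif.isUp()) continue;
            for (InetAddress addr : Collections.list(nif.getInetAddresses())) {
                if (addr instanceof Inet4Address) {
                    DatagramSocket s = new DatagramSocket(new InetSocketAddress(addr, PORT_NUM));
                    s.setBroadcast(true);
                    sockets.add(s); // each socket then gets its own receive loop/thread
                }
            }
        }
    }
}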
The problem:
On Linux and OSX, the following code does not receive broadcast packets:
byte[] addr = {(byte) 169, (byte) 254, (byte) 6, (byte) 215};
DatagramSocket foo = new DatagramSocket(new InetSocketAddress(InetAddress.getByAddress(addr), PORT_NUM));
while (true) {
    byte[] buf = new byte[256];
    DatagramPacket pct = new DatagramPacket(buf, buf.length);
    foo.receive(pct);
    System.out.println(IoBuffer.wrap(buf).getHexDump());
}
In order to receive broadcast packets (on Linux/OSX), I need to create my DatagramSocket using:
DatagramSocket foo = new DatagramSocket(PORT_NUM);
However, when I then use this socket to send data back to the device, the packet is routed by the OS (I'm assuming), and since the interface of interest may be in the middle of auto-IP negotiation, the send fails.
Thoughts on the following?
How to get the "working" Windows behavior to happen on Linux/OSX
A better way to handle this process
Thanks in advance!
I do not think this is a problem with the code. Have you checked whether OSX/Linux correctly allows that address / port number through their firewalls? I had this simple problem too in the past =P..
FYI, there is a nice technology called Zero-configuration networking (Zeroconf), which was built to solve this problem. It is very easy to learn, so I recommend having a look at that as well.
Good luck.

Java SocketChannel doesn't detect disconnection?

I have a socket running, using selectors. I am trying to check to see if my socket is connected to the server or not.
Boolean connected = _channel.isConnected();
and it always returns true. I turned off AirPort (my internet connection) on my computer, and when I check whether the socket is connected or not, it still returns true.
Any idea why?
I try to write data to the server every 3 seconds, and it still doesn't change the state of my socket to disconnected.
Usually, if you turn off OS-level networking, writes to the socket should throw exceptions, so you know the connection is broken.
More generally, however, we can't be sure a packet is delivered. In Java (and probably C too), there is no way to check whether a packet has been ACK'ed.
Even if we could check TCP ACKs, it wouldn't guarantee that the server received or processed the packet. It only means that the target machine received the packet and buffered it in memory. Many things can go wrong after that.
So if you really want to be sure, you can't rely on the transport protocol. You must have an application-level ACK, that is, the server application writes back an ACK message after it has received and processed a message from the client.
From the client's point of view, it writes a message to the server, then tries to read the ACK from the server. If it gets it, it can be certain that its message was received and processed. If it fails to get the ACK, well, it has no idea what happened. Empirically, the most likely cause is that TCP failed. The next possibility is that the server crashed. It's also possible that everything went OK except that the ACK couldn't reach the client.
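A rough sketch of that idea on the client side (the host, port, message format and the literal "ACK" reply are all assumptions for illustration):

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.net.Socket;

public class AckingClient {
    public static void main(String[] args) {
        try (Socket socket = new Socket("example.com", 7000); // placeholder server
             DataOutputStream out = new DataOutputStream(socket.getOutputStream());
             DataInputStream in = new DataInputStream(socket.getInputStream())) {

            out.writeUTF("important message");
            out.flush();

            socket.setSoTimeout(3000);       // don't wait for the ACK forever
            String reply = in.readUTF();     // server is assumed to write back "ACK" after processing
            if ("ACK".equals(reply)) {
                System.out.println("server received and processed the message");
            }
        } catch (IOException e) {
            // no ACK: TCP failed, the server crashed, or the ACK itself was lost
            System.out.println("delivery uncertain: " + e);
        }
    }
}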
A socket channel can be connected by invoking its connect method; once connected, a socket channel remains connected until it is closed.
The channel is not closed when the server is no longer available, whether due to a broken physical connection or a server failure. So once a connection has been established, isConnected() will keep returning true until you close the channel on your side.
If you want to check whether the server is still available, send a byte to the socket's output stream. If you get an exception, the server is unavailable (connection lost).
Edit
for EJP - some code to test and reconsider your comment and answer:
public class ChannelTest {
    public static void main(String[] args) throws UnknownHostException, IOException {
        Socket google = new Socket(InetAddress.getByName("www.google.com"), 80);
        SocketChannel channel = SocketChannel.open(google.getRemoteSocketAddress());
        System.out.println(channel.isConnected());
        channel.close();
        System.out.println(channel.isConnected());
    }
}
Output on my machine is
true
false
isConnected() tells you whether you have connected the channel object, which you have, and it's not specified to return false after you close it, although apparently it does: see Andreas's answer. It's not there to tell you whether the underlying connection is still there. You can only tell that by using it: -1 from a read, or an exception, tells you that.
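A rough sketch of that kind of check (a plain HTTP request to a public host is used purely for illustration):

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;

public class DisconnectProbe {
    public static void main(String[] args) {
        try (SocketChannel channel = SocketChannel.open(new InetSocketAddress("www.google.com", 80))) {
            channel.write(ByteBuffer.wrap(
                    "GET / HTTP/1.0\r\nHost: www.google.com\r\n\r\n".getBytes(StandardCharsets.US_ASCII)));
            ByteBuffer buf = ByteBuffer.allocate(4096);
            while (channel.read(buf) != -1) {   // -1 means the peer closed the connection
                buf.clear();                    // discard the data; we only care about detecting EOF
            }
            System.out.println("peer closed the connection (read returned -1)");
        } catch (IOException e) {
            // a broken connection surfaces here, e.g. "Connection reset"
            System.out.println("connection broken: " + e);
        }
    }
}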
