Java RMI connection broken

We have a Java client-server RMI application. The client performs a call like this:

public interface MyRemoteInterface extends java.rmi.Remote {
    public byte[] action(byte[] b) throws java.rmi.RemoteException;
}

public static Remote lookUp(String ip, int port, String alias)
        throws RemoteException, NotBoundException {
    return LocateRegistry.getRegistry(ip, port).lookup(alias);
}

serverImpl = (MyRemoteInterface) lookUp(ip, rmiPort, alias);
serverImpl.action(requestBytes);
...
It is a classic RMI client-server application. When the remote server receives the request (call), it performs some process P, which takes a time T, and then returns. The problem: suppose we have two clients C1 and C2, which are identical (same OS, same Java, no firewall, no antivirus, etc.). The difference between C1 and C2 is that they are on different networks, with different ISPs; for example, they are two users connecting from home, through different service providers and modems, to the centralized server S.
When the process P takes 4 minutes, there is no problem: both clients receive the response. But if the process takes, for example, 10 minutes, C2 receives no response, with no exception on either the server or the client side.
Question: What can be the reason for this difference? How can the network architecture affect the behaviour of the RMI connection? Can we overcome this problem by setting some timeout or JVM parameters? We do not get any timeout exceptions on the client or server. One of the clients simply does not receive any response if the process time becomes longer.

One of the clients simply does not receive any response if the process time becomes longer.
In our enterprise network we had a policy in force that closed connections which had been inactive for a long time, so there may be such differences between the two networks. The client should get a proper exception, though. For a better answer you may want to gather more exact information at the network level (e.g. with tcpdump).
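If nothing else, the silent hang can usually be turned into a visible failure with the JDK's sun.rmi timeout properties. A sketch; note these names are JDK-implementation-specific, not part of the RMI spec:

// Sketch: JDK-specific sun.rmi properties (not part of the RMI spec). They
// will not stop a middlebox from dropping an idle connection, but a finite
// response timeout at least surfaces the failure as a RemoteException.
// Set before the first remote call, or pass them as -D flags to the JVM.
System.setProperty("sun.rmi.transport.tcp.responseTimeout", "720000"); // client-side read timeout (ms)
System.setProperty("sun.rmi.transport.tcp.readTimeout", "720000");     // server-side read timeout (ms)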
But if the process takes, for example, 10 minutes
I'm not really fond of long-running processes when executing remotely (regardless of the protocol), mainly to prevent issues like the one in your question.
You may use some asynchronous processing, such as a method that starts the server-side process and then have the client poll for status/results.
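A minimal sketch of such an interface (the names startAction/isDone/fetchResult are hypothetical, not from the question):

import java.rmi.Remote;
import java.rmi.RemoteException;

// Hypothetical asynchronous variant of the remote interface: the call returns
// immediately with a job id and the client polls for the result. Each poll is
// a short-lived RMI call, so no connection has to stay idle for the whole of P.
public interface JobService extends Remote {
    String startAction(byte[] request) throws RemoteException; // returns a job id
    boolean isDone(String jobId) throws RemoteException;
    byte[] fetchResult(String jobId) throws RemoteException;   // valid once isDone() returns true
}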

Related

Can Chronicle Queue be used like RMI?

I want my two JVM applications to talk to each other on the same machine. I considered using RMI, but then I found Chronicle Queue, which claims to be very fast. I wonder whether I can use Chronicle to invoke a method on the other JVM and wait for the return value. Are there any use cases for that?
It's doable, but might be overkill (especially if you don't have to keep the history of the requests/responses). Imagine a simple scenario with two processes, C (client) and S (server). Create two IndexedChronicles:
Q1 for sending requests from C to S
Q2 for sending responses from S to C
The server has a thread that is polling (busy spin with back-off) on Q1. When it receives a request (with id=x), it does whatever is needed and writes the response to Q2 (with the same id=x). C polls Q2 with some policy and reads out responses as they appear, using the id to tie responses to requests.
The main task would be devising a wire-level protocol for serialising your commands (the equivalent of the method calls) from the client. This is application-specific and can be done efficiently with the Chronicle tools; a minimal sketch of the queue pair follows the list below.
Other issues to consider:
what should the client do with the historical responses on startup?
some heartbeat system so that client knows the server is alive
archiving of the old queues (VanillaChronicle makes it easier at some cost)
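A sketch of the two-queue pattern, assuming a current Chronicle Queue release (ChronicleQueue/ExcerptAppender/ExcerptTailer; IndexedChronicle is from an older API) and a made-up "id=...;..." text wire format:

import net.openhft.chronicle.queue.ChronicleQueue;
import net.openhft.chronicle.queue.ExcerptAppender;
import net.openhft.chronicle.queue.ExcerptTailer;

public class QueuePairSketch {
    public static void main(String[] args) {
        try (ChronicleQueue q1 = ChronicleQueue.singleBuilder("q1-requests").build();
             ChronicleQueue q2 = ChronicleQueue.singleBuilder("q2-responses").build()) {

            // C writes a request tagged with an id onto Q1.
            ExcerptAppender toServer = q1.acquireAppender();
            toServer.writeText("id=1;cmd=action");

            // S polls Q1, handles the request, and answers on Q2 with the same id.
            ExcerptTailer fromClient = q1.createTailer();
            String request = fromClient.readText(); // null when nothing is pending
            if (request != null) {
                q2.acquireAppender().writeText("id=1;result=ok");
            }

            // C polls Q2 and ties the response back to its request by id.
            System.out.println("response: " + q2.createTailer().readText());
        }
    }
}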

Java - Is it possible to send a client to another server?

I'm working on a ServerSocket in Java, and I would like to know: upon a client connecting, is it possible to send the client/socket to another ServerSocket? For example, a client connects to 123.41.67.817 (just a random IP) and upon connection the client gets sent straight to, for example, 124.51.85.147 (another random IP), with a port of course. So here is a little map of what would happen:
ServerSocket(Listening for Connections)
Client ---> ServerSocket(Client connects)
ServerSocket -> Client(Server says: Hello, I am going to send you to 124.51.85.147)
Client -> ServerSocket(Client Says: OK!)
Client ---> ServerSocket(124.51.85.147)(Client gets sent to a different server Socket)
ServerSocket(124.51.85.147) -> Client(Server2 says: Welcome!) and then the client stays on Server2(124.51.85.147)
Is this possible in any way? Sorry for the long question.
Is this possible in any way?
No.
At the most basic level, a TCP/IP connection is a conversation between a pair of IP addresses. There is no provision in the TCP/IP protocol for changing one of the two IP addresses in mid-conversation.
Even if it were possible at the Java level to (somehow) serialize the Socket object and send it to another program (local or remote), it would not be possible to change the IP addresses for the underlying conversation; see above.
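What is possible is the application-level version that your own map already describes: the first server cannot hand the socket itself over, but it can tell the client where to reconnect, and the client then opens a brand-new connection. A sketch, with a made-up one-line "REDIRECT host:port" message (host names are placeholders):

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.Socket;

public class RedirectingClient {
    public static void main(String[] args) throws IOException {
        Socket socket = new Socket("server-one.example.com", 5000);
        BufferedReader in = new BufferedReader(
                new InputStreamReader(socket.getInputStream()));

        String line = in.readLine(); // e.g. "REDIRECT server-two.example.com:5001"
        if (line != null && line.startsWith("REDIRECT ")) {
            String[] hostPort = line.substring("REDIRECT ".length()).split(":");
            socket.close(); // the first TCP conversation ends here...
            socket = new Socket(hostPort[0], Integer.parseInt(hostPort[1]));
            // ...and a completely new one begins; nothing is "moved".
        }
        socket.close();
    }
}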
Historical footnote: a long time ago (the 1980s), in a country far away (Cambridge, UK), there was a network (the Cambridge Ring) whose stream protocol (BSP) implemented an operation known as "replug". If you had a BSP connection between A and B and another between B and C, then B could replug the connections so that A talked directly to C. Reference: "The Cambridge Distributed Computer System" by R.M. Needham & A.J. Herbert (Annex C).
I've never seen a replug operation elsewhere. Indeed, if you think about it, it requires a complicated 3-way handshake to implement a replug-like operation reliably. In the BSP case, they didn't do that ... at least according to the description in Needham & Herbert.

Netty - Call a method on connection termination

I've been trying to figure out how to call a method when the connection has been forcefully terminated by the client, or when the client just loses its connection in general. Currently I have a List<> of all of my online accounts; however, if the player doesn't log out of the server naturally, the account stays in the list.
I've been looking through the documentation, and searching Google wording my question in dozens of different ways, but I can't find the answer I'm looking for.
Basically, I need a way to figure out which channel was disconnected and pass it as a parameter to a method. Is this possible? It almost has to be.
I guess this can be done using a thread on both the client and the server side.
Add a Date (or timestamp) field lastActive to the client class, which the client updates every 5 minutes (say). Another thread runs on the server side every 10 minutes to check this flag; if lastActive is older than 10 minutes, remove the player from the list. You can adjust these intervals to your needs.
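A rough sketch of that idea (class and field names are made up; use a concurrent collection, since two threads touch the list):

import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

class OnlineAccount {
    volatile long lastActive = System.currentTimeMillis(); // refreshed on each client heartbeat
}

class StaleAccountSweeper {
    private static final long STALE_MS = TimeUnit.MINUTES.toMillis(10);

    static void start(List<OnlineAccount> online) { // e.g. a CopyOnWriteArrayList
        ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
        // Every 10 minutes, drop accounts whose heartbeat is older than 10 minutes.
        ses.scheduleAtFixedRate(
                () -> online.removeIf(a ->
                        System.currentTimeMillis() - a.lastActive > STALE_MS),
                10, 10, TimeUnit.MINUTES);
    }
}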
Reliably detecting socket disconnects is a common problem and not unique to Netty. The issue as you described is that your peer may not reliably terminate their end of the connection. For example: peer loses power, peer application crashes, peer machine crashes, etc... One common solution is to close the connection if no read activity has been detected for longer than some time interval. Netty provides some utilities to ease this process such as the ReadTimeoutHandler. Setting the time interval is application specific and will depend on your protocol. If your desired interval is sufficiently small you may have to add additional messages to your protocol to serve as a heartbeat message (a simple request/response to indicate each side is talking to each other).
From a Netty specific point of view you can register a listener with the Channel's CloseFuture that will notify you when the channel is closed. If you setup the ReadTimeoutHandler as previously described then you will be notified of close events after your timeout interval passes and no activity is detected or the channel is closed normally.
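A sketch combining both suggestions, assuming Netty 4.x; the account bookkeeping and the protocol handler are placeholders:

import io.netty.channel.Channel;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.timeout.ReadTimeoutHandler;

public class CleanupInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        // Close the channel if nothing is read for 60 seconds (tune per protocol).
        ch.pipeline().addLast(new ReadTimeoutHandler(60));
        ch.pipeline().addLast(new ChannelInboundHandlerAdapter()); // your protocol handler

        // Runs on normal logout, forced termination, and timeout-triggered closes alike.
        ch.closeFuture().addListener((ChannelFutureListener) f -> removeAccount(f.channel()));
    }

    private void removeAccount(Channel channel) {
        // Look up which account was bound to this channel (e.g. via a channel
        // attribute or a Channel -> Account map) and drop it from the online list.
    }
}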

Prevent TCP socket connection retries

How can I prevent TCP from making multiple socket connection attempts?
Background
I'm trying to get a rough estimate of the round-trip time to a client. The high-level protocol I have to work with gives no way to determine the RTT, nor does it have any sort of no-op request/response flow. So, I'm attempting to get information directly from the lower layers. In particular, I know that the client will actively reject TCP connection attempts on a particular port.
Me -> Client: SYN
Client -> Me: ACK, RST
Code
long lStartTime = System.nanoTime() / 1000000;
long lEndTime;
Socket lSocket = new Socket(); // unconnected socket (declaration added for completeness)

// Attempt to connect to the remote party. We don't mind whether this
// succeeds or fails.
try
{
    // Connect to the remote system.
    lSocket.connect(mTarget, MAX_PING_TIME_MS);

    // Record the end time.
    lEndTime = System.nanoTime() / 1000000;

    // Close the socket.
    lSocket.close();
}
// NB: SocketTimeoutException is a subclass of IOException, so catching
// IOException alone covers both (a multi-catch of the pair won't compile).
catch (IOException lEx)
{
    lEndTime = System.nanoTime() / 1000000;
}

// Calculate the interval.
long lInterval = lEndTime - lStartTime;
System.out.println("Interval = " + lInterval);
Problem
Using Wireshark, I see that the call to lSocket.connect makes three (failed) attempts to connect the socket before giving up, with an apparently arbitrary inter-attempt interval (often ~300 ms).
Me -> Client: SYN
Client -> Me: ACK, RST
Me -> Client: SYN
Client -> Me: ACK, RST
Me -> Client: SYN
Client -> Me: ACK, RST
Question
Is there any way to make TCP give up after a single SYN/RST pair?
I've looked through some of the Java code. I wondered if I was on to a winner when the comment on AbstractPlainSocketImpl said...
/**
* The workhorse of the connection operation. Tries several times to
* establish a connection to the given <host, port>. If unsuccessful,
* throws an IOException indicating what went wrong.
*/
...but sadly there's no evidence of looping/retries in that function or any of the other (non-native) functions that I've looked at.
Where does this retry behaviour actually come from? And how can it be controlled?
Alternatives
I may also be open to alternatives, but not...
Using ICMP echo requests (pings). I know that many clients won't respond to them.
Using raw sockets. One of the platforms is Windows, which these days severely limits the ability to use raw sockets. (I also think the Linux network stack jumps in unhelpfully if it's caught in the cross-fire of an application trying to use a raw socket to do TCP.)
Using the JNI, except as a last resort. My code needs to work on at least 2 very different operating systems.
TCP connect retries are a function of the OS's socket implementation. Configuring this depends on the platform. See https://security.stackexchange.com/questions/34607/why-is-the-server-returning-3-syn-ack-packets-during-a-syn-scan for a description of what this is and why it is happening.
On Windows, you should be able to modify the retry count in the registry:
HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\TcpMaxConnectRetransmissions
Settings related to the RTT are detailed in that documentation as well.
On Linux, the accepted answer in the linked Security post talks about how to configure this parameter:
On a Linux system, see the special files in /proc/sys/net/ipv4/ called tcp_syn_retries and tcp_synack_retries: they contain the number of times the kernel would emit SYN (respectively SYN+ACK) on a given connection ... this is as simple as echo 3 > tcp_synack_retries ...
Note that this is a system-wide setting.
You can read the current values by reading the registry settings (on Windows) or reading the contents of the special files (on Linux).
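For example, on Linux the current value can be read straight out of procfs; a minimal sketch (Java 11+ for Files.readString; writing the value back requires root and, again, changes a system-wide setting):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class SynRetries {
    public static void main(String[] args) throws IOException {
        // Number of times the kernel retransmits the initial SYN.
        Path p = Path.of("/proc/sys/net/ipv4/tcp_syn_retries");
        System.out.println("tcp_syn_retries = " + Files.readString(p).trim());
    }
}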
Also, MSDN has this to say about the TCP connect RTT on Windows:
TCP/IP adjusts the frequency of retransmissions over time. The delay between the original transmission and the first retransmission for each interface is determined by the value of the TcpInitialRTT entry. By default, it is three seconds. This delay doubles after each attempt. After the final attempt, TCP/IP waits for an interval equal to double the last delay, and then it abandons the connection request.
By the way, re: raw sockets - Yes, you would have an extremely difficult time. Also, as of Windows XP SP2, Windows won't actually let you specify TCP protocol numbers for raw sockets under any circumstances (see Limitations).
Also, as an aside: Make sure that the TCP connection is not being blocked by a separate firewall in front of the client, otherwise you only end up measuring round trip time to the firewall.

A captivating and riveting java socket multithreading fandango

Greetings and salutations,
I have two Android apps: one is the server and one is the client. The server receives strings from two or more clients. The server tests for collisions of input (e.g. inputs received within 1 ms or whatever). Once a client establishes a connection with the server, the server instantiates a client object that runs a while-true thread which facilitates input reception.
I believe that with OS time-sharing of the CPU, threads alternate their execution. This is a problem, since I need to know whether two InputStreams sent messages within a millisecond or two of each other. How can I handle this?
public void run() {
    while (true) {
        try {
            if (input.ready()) {
                String s = input.readLine();
                // do stuff, like comparing socket inputs
            }
        } catch (Exception e) {
            // ignore and keep polling
        }
    }
}
I think you will find that the bigger problem is the network layer alone introducing timing variability on the order of many milliseconds (just look at the range of times when running ping), such that even if the clients all send at the exact same moment, their messages will often arrive with a delta larger than a few milliseconds. You will also likely run into other timing issues related to the handling of incoming network data, depending on the radio hardware and kernel configuration that sits between the VM and the physical network, on both the sender and the receiver. My point is that detecting collisions with a timing of a few milliseconds is probably not possible.
I'd suggest you send a timestamp with the message from the client, so that it doesn't matter when the server processes it. To make it even more accurate, you can add some simple time synchronisation to the startup of your protocol to find the delta between the client and server device clocks.
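A sketch of the client side, with a made-up "timestamp|text" wire format (host and port are placeholders):

import java.io.IOException;
import java.io.PrintWriter;
import java.net.Socket;

public class TimestampedClient {
    public static void main(String[] args) throws IOException {
        try (Socket socket = new Socket("server.example.com", 4000);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
            // Stamp the message with the client's clock at send time; the server
            // compares these stamps instead of the (jittery) arrival times.
            long sentAt = System.currentTimeMillis();
            out.println(sentAt + "|hello");
        }
    }
}

On the server, split each line on the first '|', parse the stamp, and apply the per-client clock delta measured at startup before comparing.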
If the goal is just to know which client sent data first, then you can assume that the blocked input.readLine() that returns first is the one whose client sent first, given data of similar length and similar ping latency for each client. Under normal conditions this will be correct. To deal with variable-length data, and a number of other issues related to the physical network, a good tweak would be to read a single byte rather than a whole line, which gives a better approximation of whose data arrived first, as opposed to which client was able to send the whole line faster.
