According to the post "When does OnWebSocketClose fire in Jetty 9", onClose does fire correctly for me,
but I cannot reconnect, because I am not in a valid state (the WebSocket is closed and I cannot send any message).
Where and when can I reconnect when the WebSocket is closed by a network problem, a timeout, or a kick-out by the server after n seconds without a handshake?
I'm not sure if you have solved this issue but I managed to come up with a solution to reconnect a WebSocket connection.
For my scenario, I would like to reconnect my WebSocket connection if the @OnWebSocketError method is triggered. Initially I implemented it like this:
Implementation A:
@OnWebSocketError
public void onError(Throwable e) {
someMethodToReconnect();
}
and inside someMethodToReconnect
if (client == null) {
client = new WebSocketClient(sslContextFactory);
client.setMaxIdleTimeout(0);
}
if (socket == null) {
socket = new ReaderSocket(); // ReaderSocket is the name of my @WebSocket class
}
try {
client.start();
client.connect(socket, new URI(socketUrl), request);
} catch (Exception e) {
LOGGER.error("Unable to connect to WebSocket: ", e);
}
However, this led to another issue. There are two types of errors thrown back to me if the server is not up: java.net.ConnectException and org.eclipse.jetty.websocket.api.UpgradeException.
The flow would be:
Initial request to WebSocket server (server is not up)
java.net.ConnectException thrown
org.eclipse.jetty.websocket.api.UpgradeException thrown
And in Implementation A, someMethodToReconnect would be called twice.
This led to Implementation B
@OnWebSocketError
public void onError(Throwable e) {
if (e instanceof ConnectException) {
// Attempt to reconnect
someMethodToReconnect();
} else {
// Ignore upgrade exception
}
}
So far Implementation B works fine; however, I'm trying to find out whether there are any other exceptions that could be thrown.
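Since both exceptions can arrive for the same failed connection attempt, another option is to make the reconnect idempotent per failure instead of filtering by exception type. This is only a minimal sketch under that assumption; the guard class and the five-second delay are not part of the original answer, and someMethodToReconnect stands for the method shown above.
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

public class ReconnectGuard {

    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private final AtomicBoolean reconnectPending = new AtomicBoolean(false);

    /** Call this from onError; only the first error of a failed attempt schedules a reconnect. */
    public void requestReconnect(Runnable someMethodToReconnect) {
        if (reconnectPending.compareAndSet(false, true)) {
            scheduler.schedule(() -> {
                try {
                    someMethodToReconnect.run();
                } finally {
                    reconnectPending.set(false); // allow the next failure to schedule a new attempt
                }
            }, 5, TimeUnit.SECONDS); // small delay so repeated errors do not hammer the server
        }
    }
}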
Related
If another application on the PC is connected to the same remote IP address, a java application will fail to connect properly.
This can also happen when an application exits abruptly without closing the socket channel. The connection can be blocked and it is impossible to connect during a subsequent session.
What can I do to ensure that no matter the state of the connection in the underlying OS, my program will connect 100% of the time?
I am looking for a cross-platform solution (Windows & Ubuntu).
public void connect() throws CommunicationIOException {
try {
if (isConnected()) {
return;
}
socket = SocketChannel.open();
socket.socket().connect(new InetSocketAddress(getHostname(), getPort()), getConnectionTimeout());
if (!isConnected()) {
throw new CommunicationIOException("Failed to establish the connection");
}
socket.configureBlocking(false);
} catch (final IOException ex) {
throw new CommunicationIOException(
"An error occurred while connecting to " + getHostname() + " on port " + getPort(), ex);
}
}
.
public boolean isConnected() {
if (socket == null) {
return false;
} else {
return socket.isConnected();
}
}
.
public void close() throws CommunicationIOException {
if (socket != null) {
try {
socket.close();
} catch (final IOException ex) {
throw new CommunicationIOException(
MessageFormat.format(
"An error occurred while attempting to close the connection to {}:{}",
getHostname(), getPort()), ex);
}
}
}
If another application on the PC is connected to the same remote IP address, a java application will fail to connect properly.
No it won't, unless the server is improperly programmed.
This can also happen when an application exits abruptly without closing the socket channel.
No it can't, again unless something is improperly programmed.
The connection can be blocked
No it can't.
and it is impossible to connect during a subsequent session.
No it isn't.
What can I do to ensure that no matter the state of the connection in the underlying OS, my program will connect 100% of the time?
Nothing in this life will give you a 100% guarantee. However, your fears as expressed above are baseless.
I am using gRPC-Java 1.1.2. In an active gRPC session, I have a few bidirectional streams open. Is there a way to clean them up from the client end when the client is disconnecting? When I try to disconnect, I run the following loop for a fixed number of times and then disconnect, but I can see the following error on the server side (not sure if it's caused by another issue, though):
disconnect from client
while (!channel.awaitTermination(3, TimeUnit.SECONDS)) {
// check for upper bound and break if so
}
channel.shutdown().awaitTermination(3, TimeUnit.SECONDS);
error on server
E0414 11:26:48.787276000 140735121084416 ssl_transport_security.c:439] SSL_read returned 0 unexpectedly.
E0414 11:26:48.787345000 140735121084416 secure_endpoint.c:185] Decryption error: TSI_INTERNAL_ERROR
If you want to close gRPC (server-side or bidirectional) streams from the client end, you will have to attach the RPC call to a Context.CancellableContext, found in package io.grpc.
Suppose you have an rpc:
service Messaging {
rpc Listen (ListenRequest) returns (stream Message) {}
}
In the client side, you will handle it like this:
public class Messaging {
private Context.CancellableContext mListenContext;
private MessagingGrpc.MessagingStub getMessagingAsyncStub() {
/* return your async stub */
}
public void listen(final ListenRequest listenRequest, final StreamObserver<Message> messageStream) {
Runnable listenRunnable = new Runnable() {
@Override
public void run() {
Messaging.this.getMessagingAsyncStub().listen(listenRequest, messageStream);
}
};
if (mListenContext != null && !mListenContext.isCancelled()) {
Log.d(TAG, "listen: already listening");
return;
}
mListenContext = Context.current().withCancellation();
mListenContext.run(listenRunnable);
}
public void cancelListen() {
if (mListenContext != null) {
mListenContext.cancel(null);
mListenContext = null;
}
}
}
Calling cancelListen() will emulate the error 'CANCELLED': the connection will be closed, and onError of your StreamObserver<Message> messageStream will be invoked with a throwable whose message is 'CANCELLED'.
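On the client you will usually want to treat that expected cancellation differently from a real failure. A minimal sketch of such an observer (the observer itself is hypothetical; only Status.fromThrowable is the regular io.grpc API):
import io.grpc.Status;
import io.grpc.stub.StreamObserver;

StreamObserver<Message> messageStream = new StreamObserver<Message>() {
    @Override
    public void onNext(Message message) {
        // handle a message pushed by the server
    }

    @Override
    public void onError(Throwable t) {
        if (Status.fromThrowable(t).getCode() == Status.Code.CANCELLED) {
            // expected: cancelListen() cancelled the context on purpose
            return;
        }
        // anything else is a genuine stream failure
    }

    @Override
    public void onCompleted() {
        // the server finished the stream normally
    }
};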
If you use shutdownNow() it will shut down the RPC streams you have more aggressively. Also, you need to call shutdown() or shutdownNow() before calling awaitTermination().
That said, a better solution would be to end all your RPCs gracefully before closing the channel.
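For reference, a minimal sketch of that ordering on a ManagedChannel (the three-second timeouts are arbitrary, not something the gRPC documentation prescribes):
import io.grpc.ManagedChannel;
import java.util.concurrent.TimeUnit;

void closeChannel(ManagedChannel channel) throws InterruptedException {
    // Ask for an orderly shutdown first: new calls are rejected, running calls may finish.
    channel.shutdown();
    if (!channel.awaitTermination(3, TimeUnit.SECONDS)) {
        // Still not terminated: cancel the remaining calls; their observers receive CANCELLED.
        channel.shutdownNow();
        channel.awaitTermination(3, TimeUnit.SECONDS);
    }
}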
This question already has answers here:
How can I fix 'android.os.NetworkOnMainThreadException'?
(66 answers)
Closed 7 years ago.
What I was trying to do:
I was trying to build a test app that, for now, simply establishes a connection between the app on an Android phone (4.2.2) acting as the client and a Java application running on a PC (Windows 8) acting as the server, via a socket connection.
What I've done already:
I've written both the client and the server in Java on the PC and tested them successfully (the connection got established).
The network:
Both my phone and PC are connected to the Wi-Fi at my home. ipconfig on the PC shows the address 192.168.56.1, while logging into the router shows my PC's address as 192.168.0.108 (probably I don't understand networking :P).
The code:
client(Android)
public void connectPC(View view)
{
try
{
clientSocket = new Socket("192.168.0.108",1025);
outstream = clientSocket.getOutputStream();
instream = clientSocket.getInputStream();
data_out = new DataOutputStream(outstream);
data_in = new DataInputStream(instream);
statusView.setText("Connected!");
}
catch (Exception e)
{
statusView.setText("Error: "+e.getMessage());
}
}
The Server:
import java.net.*;
import java.io.*;
public class ServerSide extends Thread
{
ServerSocket serverSocket;
public ServerSide(int port) throws IOException
{
serverSocket = new ServerSocket(1025);
serverSocket.setSoTimeout(10000);
}
public void run()
{
while(true)
{
System.out.println("Waiting for client on port : "+ serverSocket.getLocalPort());
Socket server;
try {
server = serverSocket.accept();
System.out.println("Connected to : " +server.getRemoteSocketAddress());
} catch (IOException e) {
// TODO Auto-generated catch block
System.out.println(e.getMessage());
}
}
}
public static void main(String... args)
{
int port=6066;
try
{
Thread t = new ServerSide(port);
t.start();
}
catch(IOException e)
{
System.out.println(e.getMessage());
}
}
}
The Problem: The connection simply doesn't get established; the catch block shows e.getMessage() as null.
PS: I've tried the 192.168.56.1 IP address too, and added the uses-permission in the manifest file.
Any help in this regard, please!
You need to print the stacktrace rather than just the exception message. That will give you more information to debug the problem ... including the name of the exception, and the place where it was thrown.
Also, it is a bad idea to catch Exception and attempt to recover from it. Catching Exception could catch all sorts of exceptions that you were never expecting. Recovering from exceptions that you weren't expecting is risky ... because you cannot be sure it is a safe thing to do. It is typically better to let the application die ...
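For example, in the Android client's catch block, a sketch of logging the whole throwable rather than only its message could look like this (TAG is assumed to be a log tag defined in your activity, and android.util.Log is the standard logger; the point is that e.getMessage() alone can legitimately be null):
catch (Exception e) {
    // Log the complete stack trace; the message by itself may be null,
    // which is exactly the symptom described above.
    Log.e(TAG, "Socket connection failed", e);
    statusView.setText("Error: " + e);
}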
I would like to give you some suggestions:
1. Use AsyncTask in Android for networking; otherwise a NetworkOnMainThreadException will occur, because all time-consuming work is supposed to run in the background. Also keep in mind to do all of the work in doInBackground() of the AsyncTask and then update and publish the result with the help of onPostExecute() and onProgressUpdate().
2. If you are not using AsyncTask, simply use a thread and perform all networking activity on that separate thread.
3. In the Java desktop application, track the IP address by using an Enumeration of the network interfaces instead of a single InetAddress, because the enumeration will show all IP addresses on the machine, and there you will probably find the answer to your question (try to connect to each IP shown by the enumeration; the connection will be established with the suitable one). A sketch of this is shown after the list.
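As a minimal sketch of suggestion 3, assuming you run it on the PC to see which addresses the phone could try (only standard java.net classes are used):
import java.net.Inet4Address;
import java.net.InetAddress;
import java.net.NetworkInterface;
import java.util.Enumeration;

public class ListLocalAddresses {
    public static void main(String[] args) throws Exception {
        Enumeration<NetworkInterface> interfaces = NetworkInterface.getNetworkInterfaces();
        while (interfaces.hasMoreElements()) {
            NetworkInterface nif = interfaces.nextElement();
            if (!nif.isUp() || nif.isLoopback()) {
                continue; // skip interfaces the phone cannot reach anyway
            }
            Enumeration<InetAddress> addresses = nif.getInetAddresses();
            while (addresses.hasMoreElements()) {
                InetAddress address = addresses.nextElement();
                if (address instanceof Inet4Address) {
                    // Each printed address is a candidate for the Android client to connect to.
                    System.out.println(nif.getDisplayName() + " -> " + address.getHostAddress());
                }
            }
        }
    }
}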
I am using jersey to implement a SSE scenario.
The server keeps connections alive and pushes data to clients periodically.
In my scenario there is a connection limit: only a certain number of clients can subscribe to the server at the same time.
So when a new client is trying to subscribe, I do a check (EventOutput.isClosed) to see whether any old connections are no longer active, so they can make room for new connections.
But the result of EventOutput.isClosed is always false unless the client explicitly calls close on its EventSource. This means that if a client drops accidentally (power outage or internet cutoff), it is still hogging the connection, and new clients cannot subscribe.
Is there a work around for this?
@CuiPengFei,
So, in my travels trying to find an answer to this myself, I stumbled upon a repository that explains how to gracefully clean up the connections from disconnected clients.
They encapsulate all of the SSE EventOutput logic in a service/manager. In it they spin up a thread that checks whether the EventOutput has been closed by the client. If so, they formally close the connection (EventOutput#close()). If not, they try to write to the stream. If that throws an exception, the client has disconnected without closing and the manager handles closing it. If the write is successful, the EventOutput is returned to the pool as it is still an active connection.
The repo (and the actual class) are available here. I've also included the class without imports below in case the repo is ever removed.
Note that they bind this to a Singleton. The store should be globally unique.
public class SseWriteManager {
private final ConcurrentHashMap<String, EventOutput> connectionMap = new ConcurrentHashMap<>();
private final ScheduledExecutorService messageExecutorService;
private final Logger logger = LoggerFactory.getLogger(SseWriteManager.class);
public SseWriteManager() {
messageExecutorService = Executors.newScheduledThreadPool(1);
messageExecutorService.scheduleWithFixedDelay(new messageProcessor(), 0, 5, TimeUnit.SECONDS);
}
public void addSseConnection(String id, EventOutput eventOutput) {
logger.info("adding connection for id={}.", id);
connectionMap.put(id, eventOutput);
}
private class messageProcessor implements Runnable {
@Override
public void run() {
try {
Iterator<Map.Entry<String, EventOutput>> iterator = connectionMap.entrySet().iterator();
while (iterator.hasNext()) {
boolean remove = false;
Map.Entry<String, EventOutput> entry = iterator.next();
EventOutput eventOutput = entry.getValue();
if (eventOutput != null) {
if (eventOutput.isClosed()) {
remove = true;
} else {
try {
logger.info("writing to id={}.", entry.getKey());
eventOutput.write(new OutboundEvent.Builder().name("custom-message").data(String.class, "EOM").build());
} catch (Exception ex) {
logger.info(String.format("write failed to id=%s.", entry.getKey()), ex);
remove = true;
}
}
}
if (remove) {
// we are removing the eventOutput. close it is if it not already closed.
if (!eventOutput.isClosed()) {
try {
eventOutput.close();
} catch (Exception ex) {
// do nothing.
}
}
iterator.remove();
}
}
} catch (Exception ex) {
logger.error("messageProcessor.run threw exception.", ex);
}
}
}
public void shutdown() {
if (messageExecutorService != null && !messageExecutorService.isShutdown()) {
logger.info("SseWriteManager.shutdown: calling messageExecutorService.shutdown.");
messageExecutorService.shutdown();
} else {
logger.info("SseWriteManager.shutdown: messageExecutorService == null || messageExecutorService.isShutdown().");
}
}}
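For context, here is a rough sketch of how such a manager might be wired to a Jersey SSE resource. The singleton binding mentioned above is assumed to have happened elsewhere, and the connectionId path parameter is an invention for the example, not part of the original repository:
import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import org.glassfish.jersey.media.sse.EventOutput;
import org.glassfish.jersey.media.sse.SseFeature;

@Path("events")
public class SseResource {

    @Inject
    private SseWriteManager sseWriteManager; // the singleton manager shown above

    @GET
    @Path("{connectionId}")
    @Produces(SseFeature.SERVER_SENT_EVENTS)
    public EventOutput subscribe(@PathParam("connectionId") String connectionId) {
        // Jersey keeps this EventOutput open; the manager's periodic write is
        // what eventually detects and evicts connections that silently died.
        EventOutput eventOutput = new EventOutput();
        sseWriteManager.addSseConnection(connectionId, eventOutput);
        return eventOutput;
    }
}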
Wanted to provide an update on this:
What was happening is that the EventSource on the client side (JS) never got into readyState 1 unless we did a broadcast as soon as a new subscription was added. Even in this state the client could receive data pushed from the server. Adding a call to broadcast a simple "OK" message helped kick the EventSource into readyState 1.
On closing the connection from the client side: to be proactive in cleaning up resources, just closing the EventSource on the client side doesn't help. We must make another Ajax call to the server to force the server to do a broadcast. When the broadcast is forced, Jersey will clean up the connections that are no longer alive and will in turn release resources (connections in CLOSE_WAIT). Otherwise a connection will linger in CLOSE_WAIT until the next broadcast happens.
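If you use Jersey's SseBroadcaster rather than the manager above, a minimal sketch of that "force a broadcast to trigger cleanup" idea could look like the following; the /sse/events and /sse/disconnect paths and the event payloads are made up for the example:
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import org.glassfish.jersey.media.sse.EventOutput;
import org.glassfish.jersey.media.sse.OutboundEvent;
import org.glassfish.jersey.media.sse.SseBroadcaster;
import org.glassfish.jersey.media.sse.SseFeature;

@Path("sse")
public class BroadcastResource {

    private static final SseBroadcaster BROADCASTER = new SseBroadcaster();

    @GET
    @Path("events")
    @Produces(SseFeature.SERVER_SENT_EVENTS)
    public EventOutput subscribe() {
        EventOutput output = new EventOutput();
        BROADCASTER.add(output);
        // Broadcasting right away helps the browser EventSource reach readyState 1.
        BROADCASTER.broadcast(new OutboundEvent.Builder()
                .mediaType(MediaType.TEXT_PLAIN_TYPE)
                .data(String.class, "OK")
                .build());
        return output;
    }

    @POST
    @Path("disconnect")
    public void disconnect() {
        // Called by the client just before it closes its EventSource; the broadcast makes
        // Jersey write to every registered EventOutput and drop the ones that are dead.
        BROADCASTER.broadcast(new OutboundEvent.Builder()
                .mediaType(MediaType.TEXT_PLAIN_TYPE)
                .data(String.class, "client-disconnecting")
                .build());
    }
}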
I'm facing this issue working with a ServerSocket inside one of my bundles, let's just call it: FooBundle.
This FooBundle has, among others, a SocketListener.java class. This class runs as a thread, and to give a little overview of it, I'll paste some pseudocode:
public class SocketListener implements Runnable{
ServerSocket providerSocket;
Socket connection = null;
private boolean closeIt = false;
public void run() {
try {
//Create the server socket
providerSocket = new ServerSocket(41000, 10);
} catch (IOException e1) {
//catching the exception....
}
while(!closeIt){
String message = "";
try{
connection = providerSocket.accept();
Scanner in = new Scanner(new InputStreamReader(connection.getInputStream()));
while(in.hasNext())
message = message + " "+in.next();
// bla bla bla...
} catch (IOException e) {
//bla bla...
}
finally{
try{
if (message.trim().equalsIgnoreCase("bye")) {
providerSocket.close();
closeIt = true;
}
}
catch(IOException ioException){
//........
}
}
}
}
}
As you can see, it's a simple thread that waits for connections until the message it receives from one of the socket clients is "bye".
This is the problem I'm facing right now: when the bundle is stopped, I need to restart the entire OSGi framework. If I try to restart just the bundle, a java.net.BindException is thrown: "Address already in use". So I stopped the bundle, but the socket hasn't been closed.
In OSGi you need to take care of what the stop() method inside the Activator must include, but I just can't pass a reference to an anonymous thread to the Activator.
Imagine that this is my class diagram inside the bundle:
**FooBundle**
|__FooBundleActivator
|__FooImpl
|__SocketListener (thread)
The SocketListener thread is called from the FooImpl class as an anonymous thread.
My question is: is there any appropriate way to get such control over anonymous threads, and specifically in my case over non-closed server sockets, inside the OSGi paradigm?
Thanks in advance.
If your bundle is told to stop, then assume the guy doing the stopping knows what he is doing. Yes, your protocol expects the 'bye', but shit happens; any protocol that has problems with these things is too fragile for the real world. In general, all your tasks in OSGi should have a life cycle. So this would be my code (using DS instead of activators):
@Component
public class ProtocolServer extends Thread {
volatile ServerSocket server;
volatile Socket connection;
public ProtocolServer() {
super("Protocol Server on 4100"); // to identify the thread
}
@Activate void activate() {
setDaemon(true);
start();
}
@Deactivate void deactivate() {
interrupt();
// best effort close (even if null)
try { server.close(); } catch(Exception e) {}
try { connection.close(); } catch(Exception e) {}
try { join(10000); } catch (InterruptedException e) {} // wait up to 10 secs for the thread to exit
}
public void run() {
// loop for active component
while( !isInterrupted() )
try {
doServer();
} catch( Exception e) {
log(e);
// bad error, accept failed or bind failed
// or server socket was closed. If we should remain
// active, sleep to prevent overloading the
// system by trying too often, so sleep
if ( !isInterrupted() )
try { Thread.sleep(5000); } catch(Exception ignored) {}
}
}
private void doServer() throws Exception {
server = new ServerSocket(4100);
try {
while( !isInterrupted() )
doConnection(server);
} finally {
server.close();
}
}
private void doConnection(ServerSocket server) throws Exception {
connection = server.accept();
try {
doMessages(connection);
// the pseudo code exits here, but that seems
// kind of weird? If desired, interrupt
// this object, this will exit the thread
} catch( Exception e) {
log(e); // the connection failed, is not uncommon
} finally {
connection.close();
connection = null;
}
}
private void doMessages(Socket connection) {
MyScanner s = new MyScanner(connection);
String msg;
while( !isInterrupted() && !"bye".equals( msg=s.getMessage()))
process(msg);
}
}
One important design consideration in OSGi is that the components keep working even if there are failures. In a network you often have transient errors that go away on their own. Even if they don't, it is desirable that the server keeps on trying while you fix the problem. Your pseudocode would be a nightmare in practice since it would disappear on any error. Any system with multiple such components tends to become unstable quickly.
One thing that also surprised me is that you only support one connection at a time. In general it is better not to limit this and to handle each connection's messages in its own thread. In that case, you must ensure that each created handler for a connection is also closed appropriately.
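As a rough sketch of that multi-connection variant (the class, the handleMessages placeholder, and the bookkeeping set are illustrative, not part of the component above):
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class MultiConnectionServer implements Runnable {
    private final ServerSocket server;
    private final Set<Socket> open = ConcurrentHashMap.newKeySet(); // track handlers so they can all be closed

    public MultiConnectionServer(int port) throws IOException {
        this.server = new ServerSocket(port);
    }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            try {
                final Socket connection = server.accept();
                open.add(connection);
                // One thread per connection; a real component might use an ExecutorService instead.
                new Thread(() -> {
                    try {
                        handleMessages(connection); // placeholder for the protocol logic
                    } catch (Exception e) {
                        // log and drop this connection only
                    } finally {
                        open.remove(connection);
                        try { connection.close(); } catch (IOException ignored) {}
                    }
                }).start();
            } catch (IOException e) {
                return; // server socket closed: stop accepting
            }
        }
    }

    /** Called from the component's deactivate: stop accepting and close every handler. */
    public void shutdown() {
        try { server.close(); } catch (IOException ignored) {}
        for (Socket s : open) {
            try { s.close(); } catch (IOException ignored) {}
        }
    }

    private void handleMessages(Socket connection) throws IOException {
        // read messages until "bye" or until the socket is closed
    }
}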
Instantiate the ServerSocket outside (probably in the Activator) and pass it to the SocketListener via a constructor. You can then call serverSocket.close() in the stop() method of the Activator.
When you call ServerSocket.close(), a SocketException (a subclass of IOException) will be thrown from the blocked accept() call. Please make sure that the IOException handling in the while iteration stops the iteration for sure.
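A minimal sketch of that wiring with the standard BundleActivator API; it assumes a SocketListener variant that takes the ServerSocket as a constructor argument, which is not how the class in the question is written:
import java.net.ServerSocket;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

public class FooBundleActivator implements BundleActivator {

    private ServerSocket serverSocket;
    private Thread listenerThread;

    @Override
    public void start(BundleContext context) throws Exception {
        serverSocket = new ServerSocket(41000, 10);
        // SocketListener would accept the ServerSocket instead of creating its own.
        listenerThread = new Thread(new SocketListener(serverSocket), "FooBundle SocketListener");
        listenerThread.start();
    }

    @Override
    public void stop(BundleContext context) throws Exception {
        // Closing the socket makes the blocked accept() throw a SocketException,
        // so the listener thread can notice and exit.
        serverSocket.close();
        listenerThread.join(10000);
    }
}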
You need to close that listening socket regardless of the message before exiting the thread function. Then what should really make a difference for you is calling setReuseAddress(true) on that socket, to allow binding the port while an old connection still hangs in the TIME_WAIT state.
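A small sketch of that, assuming the listener keeps the same port and backlog as in the question; note that setReuseAddress has to be called before the socket is bound, so the server socket is created unbound first:
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class ReusableServerSocketDemo {
    public static void main(String[] args) throws Exception {
        ServerSocket providerSocket = new ServerSocket();      // created unbound on purpose
        providerSocket.setReuseAddress(true);                  // must happen before bind()
        providerSocket.bind(new InetSocketAddress(41000), 10); // same port and backlog as the question
        System.out.println("Bound to " + providerSocket.getLocalSocketAddress());
        providerSocket.close();
    }
}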
And, please please please, use better indentation technique in your code ...