How to check if a WebSocket connection is alive - Java

I have a websocket connection to a server:
import javax.websocket.*;
import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;
@ClientEndpoint
public class WebsocketExample {
private Session userSession;
private void connect() {
try {
WebSocketContainer container = ContainerProvider.getWebSocketContainer();
container.connectToServer(this, new URI("someaddress"));
} catch (DeploymentException | URISyntaxException | IOException e) {
e.printStackTrace();
}
}
@OnOpen
public void onOpen(Session userSession) {
// Set the user session
this.userSession = userSession;
System.out.println("Open");
}
@OnClose
public void onClose(Session userSession, CloseReason reason) {
this.userSession = null;
System.out.println("Close");
}
@OnMessage
public void onMessage(String message) {
// Do something with the message
System.out.println(message);
}
}
After some time, it seems I no longer receive any messages from the server, but the onClose method is never called.
I would like to have some sort of timer that would at least log an error (and at best try to reconnect) if I did not receive any message during the last five minutes, for instance. The timer would be reset whenever I receive a new message.
How can I do this?
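For reference, here is a minimal sketch of the kind of watchdog described above, assuming the javax.websocket client from the question; the five-minute threshold and the reconnect call are placeholders rather than a recommendation:
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
// Additional members for the WebsocketExample class above
private final ScheduledExecutorService watchdog = Executors.newSingleThreadScheduledExecutor();
private ScheduledFuture<?> idleTimeout;
// Call this from onOpen and onMessage to (re)arm the timer
private synchronized void resetIdleTimer() {
    if (idleTimeout != null) {
        idleTimeout.cancel(false);
    }
    idleTimeout = watchdog.schedule(() -> {
        System.err.println("No message received for 5 minutes");
        connect(); // optionally try to reconnect via the connect() method above
    }, 5, TimeUnit.MINUTES);
}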

Here is what I did. I replaced javax.websocket with Jetty and implemented a ping call:
import org.eclipse.jetty.util.ssl.SslContextFactory;
import org.eclipse.jetty.websocket.api.Session;
import org.eclipse.jetty.websocket.api.annotations.OnWebSocketClose;
import org.eclipse.jetty.websocket.api.annotations.OnWebSocketConnect;
import org.eclipse.jetty.websocket.api.annotations.OnWebSocketMessage;
import org.eclipse.jetty.websocket.api.annotations.WebSocket;
import org.eclipse.jetty.websocket.client.WebSocketClient;
import java.io.IOException;
import java.net.URI;
import java.nio.ByteBuffer;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
@WebSocket
public class WebsocketExample {
private Session userSession;
private final ScheduledExecutorService executorService = Executors.newScheduledThreadPool(1);
private void connect() {
try {
SslContextFactory sslContextFactory = new SslContextFactory();
WebSocketClient client = new WebSocketClient(sslContextFactory);
client.start();
client.connect(this, new URI("Someaddress"));
} catch (Exception e) {
e.printStackTrace();
}
}
@OnWebSocketConnect
public void onOpen(Session userSession) {
// Set the user session
this.userSession = userSession;
System.out.println("Open");
executorService.scheduleAtFixedRate(() -> {
try {
String data = "Ping";
ByteBuffer payload = ByteBuffer.wrap(data.getBytes());
userSession.getRemote().sendPing(payload);
} catch (IOException e) {
e.printStackTrace();
}
},
5, 5, TimeUnit.MINUTES);
}
@OnWebSocketClose
public void onClose(int code, String reason) {
this.userSession = null;
System.out.println("Close");
}
@OnWebSocketMessage
public void onMessage(String message) {
// Do something with the message
System.out.println(message);
}
}
Edit: This is just a ping example... I don't know whether all servers are supposed to answer with a pong...
Edit 2: Here is how to deal with the pong message. The trick was to listen not for String messages but for Frame messages:
@OnWebSocketFrame
@SuppressWarnings("unused")
public void onFrame(Frame pong) {
if (pong instanceof PongFrame) {
lastPong = Instant.now();
}
}
To manage a server timeout, I modified the scheduled task as follows:
scheduledFutures.add(executorService.scheduleAtFixedRate(() -> {
try {
String data = "Ping";
ByteBuffer payload = ByteBuffer.wrap(data.getBytes());
userSession.getRemote().sendPing(payload);
if (lastPong != null
&& Instant.now().getEpochSecond() - lastPong.getEpochSecond() > 60) {
userSession.close(1000, "Timeout manually closing dead connection.");
}
} catch (IOException e) {
e.printStackTrace();
}
},
10, 10, TimeUnit.SECONDS));
... and handle the reconnection in the onClose method
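For completeness, a rough sketch of what that reconnection could look like, assuming the Jetty example above and that scheduledFutures is a List<ScheduledFuture<?>> holding the ping tasks; the 10-second retry delay is arbitrary:
@OnWebSocketClose
public void onClose(int code, String reason) {
    this.userSession = null;
    System.out.println("Close: " + code + " " + reason);
    // Stop pinging the dead session, then try to reconnect after a short delay
    scheduledFutures.forEach(future -> future.cancel(false));
    scheduledFutures.clear();
    executorService.schedule(this::connect, 10, TimeUnit.SECONDS);
}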

You should work around this problem by implementing a heartbeat system in which one side sends a ping and the other answers with a pong. Almost every websocket client and server (as far as I know) supports this feature internally. These ping/pong frames can be sent from either side. I usually implement it on the server side because, in my experience, the server has a better chance of staying alive than the clients. If a client doesn't send back a pong for a long time, I know the connection is dead. On the client side I do the same check: if the server hasn't sent ping messages for a long time, I know the connection is dead.
If ping/pong is not implemented in the libraries you use (I think javax.websocket does have it), you can define your own protocol for it, for example along the lines of the sketch below.
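If you do roll your own, here is a rough client-side sketch with javax.websocket; the "PING"/"PONG" text payloads and the timing constants are made up for illustration and need a matching server side:
import java.io.IOException;
import java.time.Instant;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import javax.websocket.*;
@ClientEndpoint
public class HeartbeatClient {
    private final ScheduledExecutorService heartbeat = Executors.newSingleThreadScheduledExecutor();
    private volatile Instant lastPong = Instant.now();
    @OnOpen
    public void onOpen(Session session) {
        heartbeat.scheduleAtFixedRate(() -> {
            try {
                session.getBasicRemote().sendText("PING");
                if (Instant.now().isAfter(lastPong.plusSeconds(60))) {
                    // No PONG for a minute: treat the connection as dead
                    session.close(new CloseReason(CloseReason.CloseCodes.GOING_AWAY, "heartbeat timeout"));
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }, 30, 30, TimeUnit.SECONDS);
    }
    @OnMessage
    public void onMessage(String message) {
        if ("PONG".equals(message)) {
            lastPong = Instant.now();
            return;
        }
        // handle application messages here
    }
}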

The accepted answer uses Jetty-specific API. There's a standard API for this:
to send a ping: session.getAsyncRemote().sendPing(data)
to send a pong (just keep-alive, without expecting an answer): session.getAsyncRemote().sendPong(data)
to react to pongs, either call session.addMessageHandler(handler) where handler implements MessageHandler.Whole<PongMessage>, or create a method that is annotated with @OnMessage and has a PongMessage param:
@OnMessage
public void onMessage(PongMessage pong) {
// check if the pong has the same payload as ping that was sent etc...
}
Periodic ping/keep-alive sending can be scheduled, for example, using a ScheduledExecutorService just as the accepted answer does, but proper care must be taken with synchronization: if session.getBasicRemote() is used, then all calls to the remote need to be synchronized. In the case of session.getAsyncRemote(), probably all containers except Tomcat handle synchronization automatically: see the discussion in this bug report.
Finally, it's important to cancel the pinging task (ScheduledFuture obtained from executor.scheduleAtFixedRate(...)) in onClose(...).
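Put together, a minimal sketch with the standard API only (placed inside a @ClientEndpoint or @ServerEndpoint class; the usual javax.websocket, java.nio, and java.util.concurrent imports are assumed, the one-minute interval is arbitrary, and executor is assumed to be a ScheduledExecutorService managed elsewhere):
private ScheduledFuture<?> pingTask;
@OnOpen
public void onOpen(Session session) {
    pingTask = executor.scheduleAtFixedRate(() -> {
        try {
            session.getAsyncRemote().sendPing(ByteBuffer.wrap("keep-alive".getBytes(StandardCharsets.UTF_8)));
        } catch (IOException | IllegalArgumentException e) {
            e.printStackTrace();
        }
    }, 1, 1, TimeUnit.MINUTES);
}
@OnMessage
public void onMessage(PongMessage pong) {
    // connection confirmed alive; a timestamp could be recorded here
}
@OnClose
public void onClose(Session session, CloseReason reason) {
    if (pingTask != null) {
        pingTask.cancel(false); // stop pinging a closed session
    }
}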
I've developed a simple WebsocketPingerService to ease things up (available in Maven Central). Create an instance and store it somewhere as a static var:
public class WhicheverClassInYourApp {
public static WebsocketPingerService pingerService = new WebsocketPingerService();
// more code here...
}
You can configure the ping interval, ping size, the failure limit after which sessions should be closed, etc. by passing arguments to the constructor.
After that register your endpoints for pinging in onOpen(...) and deregister in onClose(...):
@ClientEndpoint // or @ServerEndpoint -> pinging can be done from both ends
public class WebsocketExample {
private Session userSession;
@OnOpen
public void onOpen(Session userSession) {
this.userSession = userSession;
WhicheverClassInYourApp.pingerService.addConnection(userSession);
}
@OnClose
public void onClose(Session userSession, CloseReason reason) {
WhicheverClassInYourApp.pingerService.removeConnection(userSession);
}
// other methods here
}

Related

Fire and forget for HTTP in Java

We're implementing our own analytics; for that we've exposed a web service which needs to be invoked and which will capture the data in our DB.
The problem is that, as this is analytics, we would be making a lot of calls (one for every page load, one after each JS or CSS load, etc.), so there will be very many such calls. I don't want the server to be loaded with lots of requests, or, more precisely, with requests pending for a response, because the response we get back is hardly of any use to us.
So is there any way to just fire the web service request and forget that I've fired it?
I understand that every HTTP request will have a response as well.
One thing that crossed my mind: what if we set the request timeout to zero seconds? But I'm not at all sure this is the right way to do it.
Please provide me with more suggestions.
You might find the following AsyncRequestDemo.java useful:
import java.net.URI;
import java.net.URISyntaxException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import org.apache.http.client.fluent.Async;
import org.apache.http.client.fluent.Content;
import org.apache.http.client.fluent.Request;
import org.apache.http.client.utils.URIBuilder;
import org.apache.http.concurrent.FutureCallback;
/**
* Following libraries have been used:
*
* 1) httpcore-4.4.5.jar
* 2) httpclient-4.5.2.jar
* 3) commons-logging-1.2.jar
* 4) fluent-hc-4.5.2.jar
*
*/
public class AsyncRequestDemo {
public static void main(String[] args) throws Exception {
URIBuilder urlBuilder = new URIBuilder()
.setScheme("http")
.setHost("stackoverflow.com")
.setPath("/questions/38277471/fire-and-forget-for-http-in-java");
final int nThreads = 3; // no. of threads in the pool
final int timeout = 0; // connection time out in milliseconds
URI uri = null;
try {
uri = urlBuilder.build();
} catch (URISyntaxException use) {
use.printStackTrace();
}
ExecutorService executorService = Executors.newFixedThreadPool(nThreads);
Async async = Async.newInstance().use(executorService);
final Request request = Request.Get(uri).connectTimeout(timeout);
Future<Content> future = async.execute(request, new FutureCallback<Content>() {
public void failed(final Exception e) {
System.out.println("Request failed: " + request);
System.exit(1);
}
public void completed(final Content content) {
System.out.println("Request completed: " + request);
System.out.println(content.asString());
System.exit(0);
}
public void cancelled() {
}
});
System.out.println("Request submitted");
}
}
I used this:
import java.net.URL;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
URL url = new URL(YOUR_URL_PATH);
ExecutorService executor = Executors.newFixedThreadPool(1);
Future<HttpResponse> response = executor.submit(new HttpRequest(url));
executor.shutdown();
and for HttpRequest and HttpResponse:
import java.io.InputStream;
import java.net.URL;
import java.util.concurrent.Callable;
public class HttpRequest implements Callable<HttpResponse> {
private URL url;
public HttpRequest(URL url) {
this.url = url;
}
@Override
public HttpResponse call() throws Exception {
return new HttpResponse(url.openStream());
}
}
public class HttpResponse {
private InputStream body;
public HttpResponse(InputStream body) {
this.body = body;
}
public InputStream getBody() {
return body;
}
}
That's it.
Yes, you could initiate the request and break the connection without waiting for a response... but you probably don't want to do that. The overhead of the server side having to deal with ungracefully broken connections will far outweigh letting it proceed with returning a response.
A better approach to solving this kind of performance problem in a Java servlet would be to shove all the data from the requests into a queue, respond immediately, and have one or more worker threads pick items off the queue for processing (such as writing them into a database), as in the sketch below.
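A hedged sketch of that queue-and-worker approach (the servlet class, the payload parameter name, and the 202 response are illustrative choices, not a prescription):
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
public class AnalyticsServlet extends HttpServlet {
    // Unbounded for simplicity; a bounded queue with a drop policy is safer in production
    private static final BlockingQueue<String> EVENTS = new LinkedBlockingQueue<>();
    @Override
    public void init() {
        // Single background worker draining the queue (shutdown handling omitted)
        Executors.newSingleThreadExecutor().submit(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                String event = EVENTS.take(); // blocks until an event is available
                // write the event to the database here
            }
            return null;
        });
    }
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) {
        String payload = req.getParameter("payload"); // hypothetical parameter name
        if (payload != null) {
            EVENTS.offer(payload); // does not block the request thread
        }
        resp.setStatus(HttpServletResponse.SC_ACCEPTED); // respond immediately
    }
}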

Acting on 403s in Spring-AMQP

I need to guarantee consumer exclusivity with a variable number of consumer threads in different runtimes consuming from a fixed number of queues (where the number of queues is much greater than that of consumers).
My general thought was that I'd have each consumer thread attempt to establish an exclusive connection to clear a queue, and, if it went a given period without receiving a message from that queue, redirect it to another queue.
Even if a queue is temporarily cleared, it's liable to receive messages again in the future, so that queue cannot simply be forgotten about -- instead, a consumer should return to it later. To achieve that rotation, I thought I'd use a queue-of-queues. The danger would be losing references to queues within the queue-of-queues when consumers fail; I thought that seemed solvable with acknowledgements, as follows.
Essentially, each consumer thread waits to get a message (A) with a reference to a queue (1) from the queue-of-queues; message (A) remains initially unacknowledged. The consumer happily attempts to clear queue (1), and once queue (1) remains empty for a given amount of time, the consumer requests a new queue name from the queue-of-queues. Upon receiving a second message (B) and a reference to a new queue (2), the reference to queue (1) is put back on the end of the queue-of-queues as a new message (C), and finally message (A) is acknowledged.
In fact, the queue-of-queue's delivered-at-least-and-probably-only-once guarantee almost gets me exclusivity for the normal queues (1, 2) here, but in order to make sure I absolutely don't lose references to queues, I need to republish queue (1) as message (C) before I acknowledge message (A). That means if a server fails after republishing queue (1) as message (C) but before acknowledging (A), two references to queue (1) could exist in the queue-of-queues, and exclusivity is no longer guaranteed.
Therefore, I'd need to use AMQP's exclusive consumers flags, which are great, but as it stands, I'd also like to NOT republish a reference to a queue if I received a "403 ACCESS REFUSED" for it, so that duplicate references do not proliferate.
However, I'm using Spring's excellent AMQP library, and I don't see how I can hook in with an error handler. The setErrorHandler method exposed on the container doesn't seem to be invoked for the "403 ACCESS REFUSED" errors.
Is there a way that I can act on the 403s with the frameworks I'm currently using? Alternatively, is there another way I can achieve the guarantees that I need? My code is below.
The "monitoring service":
import org.joda.time.DateTime;
import org.joda.time.DateTimeZone;
import org.joda.time.Period;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.amqp.AmqpAuthenticationException;
import org.springframework.amqp.core.MessageListener;
import org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.Optional;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;
public class ListenerMonitoringService {
private static final Logger log = LoggerFactory.getLogger(ListenerMonitoringService.class);
private static final Period EXPIRATION_PERIOD = Period.millis(5000);
private static final long MONITORING_POLL_INTERVAL = 5000;
private static final long MONITORING_INITIAL_DELAY = 5000;
private final Supplier<AbstractMessageListenerContainer> messageListenerContainerSupplier;
private final QueueCoordinator queueCoordinator;
private final ScheduledExecutorService executorService;
private final Collection<Record> records;
public ListenerMonitoringService(Supplier<AbstractMessageListenerContainer> messageListenerContainerSupplier,
QueueCoordinator queueCoordinator, ScheduledExecutorService executorService) {
this.messageListenerContainerSupplier = messageListenerContainerSupplier;
this.queueCoordinator = queueCoordinator;
this.executorService = executorService;
records = new ArrayList<>();
}
public void registerAndStart(MessageListener messageListener) {
Record record = new Record(messageListenerContainerSupplier.get());
// wrap with listener that updates record
record.container.setMessageListener((MessageListener) (m -> {
log.trace("{} consumed a message from {}", record.container, Arrays.toString(record.container.getQueueNames()));
record.freshen(DateTime.now(DateTimeZone.UTC));
messageListener.onMessage(m);
}));
record.container.setErrorHandler(e -> {
log.error("{} received an {}", record.container, e);
// this doesn't get called for 403s
});
// initial start up
executorService.execute(() -> {
String queueName = queueCoordinator.getQueueName();
log.debug("Received queue name {}", queueName);
record.container.setQueueNames(queueName);
log.debug("Starting container {}", record.container);
record.container.start();
// background monitoring thread
executorService.scheduleAtFixedRate(() -> {
log.debug("Checking container {}", record.container);
if (record.isStale(DateTime.now(DateTimeZone.UTC))) {
String newQueue = queueCoordinator.getQueueName();
String oldQueue = record.container.getQueueNames()[0];
log.debug("Switching queues for {} from {} to {}", record.container, oldQueue, newQueue);
record.container.setQueueNames(newQueue);
queueCoordinator.markSuccessful(queueName);
}
}, MONITORING_INITIAL_DELAY, MONITORING_POLL_INTERVAL, TimeUnit.MILLISECONDS);
});
records.add(record);
}
private static class Record {
private static final DateTime DATE_TIME_MIN = new DateTime(0);
private final AbstractMessageListenerContainer container;
private Optional<DateTime> lastListened;
private Record(AbstractMessageListenerContainer container) {
this.container = container;
lastListened = Optional.empty();
}
public synchronized boolean isStale(DateTime now) {
log.trace("Comparing now {} to {} for {}", now, lastListened, container);
return lastListened.orElse(DATE_TIME_MIN).plus(EXPIRATION_PERIOD).isBefore(now);
}
public synchronized void freshen(DateTime now) {
log.trace("Updating last listened to {} for {}", now, container);
lastListened = Optional.of(now);
}
}
}
The "queue-of-queues" handler:
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Envelope;
import com.rabbitmq.client.GetResponse;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.connection.Connection;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
public class MetaQueueCoordinator implements QueueCoordinator {
private static final Logger log = LoggerFactory.getLogger(MetaQueueCoordinator.class);
private final Channel channel;
private final Map<String, Envelope> envelopeMap;
private final RabbitTemplate rabbitTemplate;
public MetaQueueCoordinator(ConnectionFactory connectionFactory) {
Connection connection = connectionFactory.createConnection();
channel = connection.createChannel(false);
envelopeMap = new ConcurrentHashMap<>();
rabbitTemplate = new RabbitTemplate(connectionFactory);
rabbitTemplate.setExchange("");
rabbitTemplate.setRoutingKey("queue_of_queues");
}
@Override
public String getQueueName() {
GetResponse response;
try {
response = channel.basicGet("queue_of_queues", false);
} catch (IOException e) {
log.error("Unable to get from channel");
throw new RuntimeException(e);
}
String queueName = new String(response.getBody());
envelopeMap.put(queueName, response.getEnvelope());
return queueName;
}
@Override
public void markSuccessful(String queueName) {
Envelope envelope = envelopeMap.remove(queueName);
if (envelope == null) {
return;
}
log.debug("Putting {} at the end of the line...", queueName);
rabbitTemplate.convertAndSend(queueName);
try {
channel.basicAck(envelope.getDeliveryTag(), false);
} catch (IOException e) {
log.error("Unable to acknowledge {}", queueName);
}
}
@Override
public void markUnsuccessful(String queueName) {
Envelope envelope = envelopeMap.remove(queueName);
if (envelope == null) {
return;
}
try {
channel.basicAck(envelope.getDeliveryTag(), false);
} catch (IOException e) {
log.error("Unable to acknowledge {}", queueName);
}
}
}
The ErrorHandler is for handling errors during message delivery, not for setting up the listener itself.
The upcoming 1.5 release publishes application events when exceptions such as this occur.
It will be released later this summer; this feature is currently only available in the 1.5.0.BUILD-SNAPSHOT; a release candidate should be available in the next few weeks.
The project page shows how to get the snapshot from the snapshots repo.
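For illustration, consuming those events looks roughly like this; the event class name (ListenerContainerConsumerFailedEvent) and its getThrowable() accessor are my assumption and should be checked against the 1.5 API when it's released:
import org.springframework.amqp.rabbit.listener.ListenerContainerConsumerFailedEvent;
import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;
@Component
public class ConsumerFailureListener implements ApplicationListener<ListenerContainerConsumerFailedEvent> {
    @Override
    public void onApplicationEvent(ListenerContainerConsumerFailedEvent event) {
        // A 403 ACCESS_REFUSED from an exclusive consumer should surface as the event's cause,
        // so the corresponding queue reference could be marked unsuccessful instead of republished
        Throwable cause = event.getThrowable();
        // ... e.g. queueCoordinator.markUnsuccessful(...) for the affected queue
    }
}
Declaring the listener as a bean in the same application context as the message listener container should be enough for the events to reach it.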

Pusher: Decrease timeout for connection state

I'm currently experimenting with websockets using the Pusher library for Java.
Pusher automatically changes its connection state from CONNECTED to DISCONNECTED if the internet connection is lost. However, this only seems to happen after 150 seconds of being disconnected. This is very unfortunate, as in those 150 seconds a lot of messages can get lost, and an effectively stale message can still be seen as the most up-to-date one.
How can I know if the last received message is the most up-to-date? Or is there any way to decrease the timeout for the connection state?
Here is the pusher code I'm using:
import com.pusher.client.Pusher;
import com.pusher.client.channel.Channel;
import com.pusher.client.channel.ChannelEventListener;
import com.pusher.client.channel.SubscriptionEventListener;
import com.pusher.client.connection.ConnectionEventListener;
import com.pusher.client.connection.ConnectionState;
import com.pusher.client.connection.ConnectionStateChange;
public class Testing {
public static void main(String[] args) throws Exception {
// Create a new Pusher instance
Pusher pusher = new Pusher("PusherKey");
pusher.connect(new ConnectionEventListener() {
@Override
public void onConnectionStateChange(ConnectionStateChange change) {
System.out.println("State changed to " + change.getCurrentState() +
" from " + change.getPreviousState());
}
@Override
public void onError(String message, String code, Exception e) {
System.out.println("There was a problem connecting!");
}
}, ConnectionState.ALL);
// Subscribe to a channel
Channel channel = pusher.subscribe("channel", new ChannelEventListener() {
@Override
public void onSubscriptionSucceeded(String channelName) {
System.out.println("Subscribed!");
}
@Override
public void onEvent(String channelName, String eventName, String data) {
System.out.println("desilo se");
}
});
// Bind to listen for events called "my-event" sent to "my-channel"
channel.bind("my-event", new SubscriptionEventListener() {
@Override
public void onEvent(String channel, String event, String data) {
System.out.println("Received event with data: " + data);
}
});
while(true){
try {
Thread.sleep(1000);
} catch(InterruptedException ex) {
Thread.currentThread().interrupt();
}
}
}
}
Just found the answer: initialize the Pusher object with a PusherOptions object.
Here is the PusherOptions class: http://pusher.github.io/pusher-java-client/src-html/com/pusher/client/PusherOptions.html
Here is a simple example of how I decreased my connection timeout from 150 seconds to 15 seconds:
// Define timeout parameters
PusherOptions opt = new PusherOptions();
opt.setActivityTimeout(10000L);
opt.setPongTimeout(5000L);
// Create a new Pusher instance
Pusher pusher = new Pusher(PUSHER_KEY, opt);
ActivityTimeout defines how often a ping is sent out to check connectivity; PongTimeout defines how long to wait for a response to that ping before giving up.
The minimum ActivityTimeout is 1000 ms; however, such a low value is strongly discouraged by Pusher, probably to reduce server traffic.

Hazelcast queue stops working, where to look for the error?

I use Hazelcast as a non-persistent queue between two applications running in a Tomcat.
Problem: QueueListener stops listening to its queue. That is, up to a certain point the following line appears periodically in the log, and then it disappears:
LOGGER.debug("No messages on {}, {}", queueName, QueueListener.this.getClass().getSimpleName());
There is no error in the logs. I have several classes that extend QueueListener, and each of them listens to a differently named queue. One of them just stops and I have no clue why, except for one thing: it happens right after handling an item. The descendant class's handle method logs the item; I can see that in the logs. Then the "No messages on {queueName}" log lines just disappear. The executor had 2 threads. Both stopped, though I am not sure whether they stopped at the same time.
The descendant class's handle method executes an HTTP request and logs the response. Note that the response did not appear in the logs for the previous two handle calls before the listener stopped.
The descendant class's handle method does not have any catch block, so it will not swallow any exceptions. No exception was logged in the QueueListener.
My question, how to proceed to find the cause of this? Where to look for it?
The application that sends messages into this queue runs in the same Tomcat as the one that listens to it. Multicast is enabled (see the full Hazelcast config below). There is another Tomcat running on the same host and some other Tomcats running on different hosts, all connecting to this same Hazelcast instance. They're all using the same config.
Hazelcast version: 2.6
QueueListener.java:
package com.mi6.publishers;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
public abstract class QueueListener<T> {
private static final long TIMEOUT = 10000L;
private static final Logger LOGGER = LoggerFactory.getLogger(QueueListener.class);
/**
* queue which is processed
*/
private IQueue<T> queue;
private final String queueName;
@Autowired
private HazelcastInstance instance;
private ExecutorService svc;
private final int threadCount;
private volatile boolean shutdown = false;
/**
* Constructor
*
* @param queueName
* @param threadCount
*/
public QueueListener(String queueName, int threadCount) {
this.queueName = queueName;
this.threadCount = threadCount;
}
/**
* @PostConstruct Start background threads
*/
@PostConstruct
public void init() {
LOGGER.info("Constructing hazelcast listener for {}", getClass().getSimpleName());
if (instance != null) {
queue = instance.getQueue(queueName);
svc = Executors.newFixedThreadPool(threadCount);
for (int i = 0; i < threadCount; i++) {
svc.submit(new Runnable() {
@Override
public void run() {
while (!shutdown) {
try {
T item = queue.poll(TIMEOUT, TimeUnit.MILLISECONDS);
if (item != null) {
handle(item);
} else {
LOGGER.debug("No messages on {}, {}", queueName, QueueListener.this.getClass().getSimpleName());
}
} catch (InterruptedException ex) {
// do nothing if interrupted
} catch (Exception ex) {
LOGGER.error("Error while receiving messages from queue:{}", queueName);
LOGGER.error("Error while receiving messages", ex);
}
}
}
});
}
} else {
throw new IllegalStateException("Hazelcast instance cannot be null");
}
}
/**
* call before stop
*/
@PreDestroy
public void destroy() {
shutdown = true;
if (svc != null) {
svc.shutdown();
}
}
/**
* Event handler
*
* #param item
*/
public abstract void handle(T item);
public String getQueueName() {
return queueName;
}
}
This is how Hazelcast is configured:
#Value("${hazelcast.multicast:True}")
private Boolean hazelcastMulticast;
#Value("${hazelcast.group:groupNameNotSet}")
private String hazelcastGroup;
#Bean(destroyMethod = "shutdown")
public HazelcastInstance hazelcastInstance() {
Config cfg = new Config();
cfg.setInstanceName(hazelcastGroup);
NetworkConfig network = cfg.getNetworkConfig();
network.setPortAutoIncrement(true);
Join join = network.getJoin();
join.getMulticastConfig().setEnabled(hazelcastMulticast);
cfg.getGroupConfig().setName(hazelcastGroup);
cfg.getGroupConfig().setPassword(hazelcastGroup);
QueueConfig sms = new QueueConfig();
sms.setName("some-queue-name1");
cfg.addQueueConfig(sms);
QueueConfig flash = new QueueConfig();
flash.setName("some-queue-name2");
cfg.addQueueConfig(flash);
QueueConfig apns = new QueueConfig();
apns.setName("some-queue-name3");
cfg.addQueueConfig(apns);
QueueConfig gcm = new QueueConfig();
gcm.setName("some-queue-name4");
cfg.addQueueConfig(gcm);
return Hazelcast.newHazelcastInstance(cfg);
}

Gottox socket.io-java-client "Error while handshaking" null pointer exception

I'm trying to use socket.io to connect to a streaming server hosted by Geoloqi.
I grabbed the Gottox socket.io-java-client code straight from GitHub and didn't make any modifications except to change the URL, but it's giving me the "Error while handshaking" message. The URL should work, as I got it from the makers of Geoloqi: https://community.geoloqi.com/discussion/19/data-streaming#Item_11 (see the 1st response).
Here is the code, from BasicExample.java
package basic;
/*
* socket.io-java-client Test.java
*
* Copyright (c) 2012, Enno Boland
* socket.io-java-client is a implementation of the socket.io protocol in Java.
*
* See LICENSE file for more information
*/
import io.socket.IOAcknowledge;
import io.socket.IOCallback;
import io.socket.SocketIO;
import io.socket.SocketIOException;
import org.json.JSONException;
import org.json.JSONObject;
public class BasicExample implements IOCallback {
private SocketIO socket;
/**
* #param args
*/
public static void main(String[] args) {
try {
new BasicExample();
} catch (Exception e) {
e.printStackTrace();
}
}
public BasicExample() throws Exception {
socket = new SocketIO();
// socket.connect("http://localhost:8080/", this);
socket.connect("https://subscribe.geoloqi.com:443", this);
// Sends a string to the server.
socket.send("Hello Server");
// Sends a JSON object to the server.
socket.send(new JSONObject().put("key", "value").put("key2",
"another value"));
// Emits an event to the server.
socket.emit("event", "argument1", "argument2", 13.37);
}
@Override
public void onMessage(JSONObject json, IOAcknowledge ack) {
try {
System.out.println("Server said:" + json.toString(2));
} catch (JSONException e) {
e.printStackTrace();
}
}
@Override
public void onMessage(String data, IOAcknowledge ack) {
System.out.println("Server said: " + data);
}
@Override
public void onError(SocketIOException socketIOException) {
System.out.println("an Error occured");
socketIOException.printStackTrace();
}
@Override
public void onDisconnect() {
System.out.println("Connection terminated.");
}
@Override
public void onConnect() {
System.out.println("Connection established");
}
@Override
public void on(String event, IOAcknowledge ack, Object... args) {
System.out.println("Server triggered event '" + event + "'");
}
}
Here is the error message:
an Error occured
io.socket.SocketIOException: Error while handshaking
at io.socket.IOConnection.handshake(IOConnection.java:322)
at io.socket.IOConnection.access$7(IOConnection.java:292)
at io.socket.IOConnection$ConnectThread.run(IOConnection.java:199)
Caused by: java.lang.NullPointerException
at io.socket.IOConnection.handshake(IOConnection.java:302)
... 2 more
May 1, 2013 10:02:49 PM io.socket.IOConnection cleanup
INFO: Cleanup
What's going wrong with the code?
Looking at the source code where the exception is coming from (IOConnection.java:302, from the inner NullPointerException), there's this block of code:
if (connection instanceof HttpsURLConnection) {
((HttpsURLConnection) connection)
.setSSLSocketFactory(sslContext.getSocketFactory());
}
Clearly connection must be non-null, otherwise it wouldn't pass the instanceof test. Therefore, sslContext must be null. Since the only other places in that file where sslContext is referenced are setSslContext() and getSslContext(), the only logical conclusion is that you must call setSslContext() prior to making an SSL connection. SocketIO.setDefaultSSLSocketFactory() also calls through to IOConnection.setSslContext(), so you can call that instead.
Try this:
SocketIO.setDefaultSSLSocketFactory(SSLContext.getDefault());
socket = new SocketIO();
socket.connect("https://subscribe.geoloqi.com:443", this);
...
I got the same error when I used https://github.com/Gottox/socket.io-java-client to create my Java client. It seems my server was based on 1.x and this library only supports 1.0 (https://github.com/Gottox/socket.io-java-client/issues/101). I solved it by using https://github.com/socketio/socket.io-client-java instead.
