I have a new thread that creates an MqttAsyncClient instance and connects to a remote server. After connecting, the client subscribes to a specific topic. If I use MqttClient instead of MqttAsyncClient I receive the messages, but if I use MqttAsyncClient no messages are received. Below is my code; would someone please take a moment to see if I have something missing or incorrect?
public class MqttEventReceiver implements Runnable {
private static final String CLIENT_ID = UUID.randomUUID().toString();
private IMqttAsyncClient client = null;
private final String apiStreamingUri;
private final String connectionAccessToken;
public MqttEventReceiver(String apiStreamingUri, String connectionAccessToken) {
this.apiStreamingUri = apiStreamingUri;
this.connectionAccessToken = connectionAccessToken;
}
private MqttCallback mqttCallback = new MqttCallback() {
public void messageArrived(String topic, MqttMessage message) throws Exception {
String incomingMsg = new String(message.getPayload());
LOG.info("Message: " + incomingMsg);
}
public void deliveryComplete(IMqttDeliveryToken arg0) {
// TODO Auto-generated method stub
}
public void connectionLost(Throwable arg0) {
// TODO Auto-generated method stub
}
};
@Override
public void run() {
String tmpDir = System.getProperty("java.io.tmpdir");
MqttDefaultFilePersistence dataStore = new MqttDefaultFilePersistence(tmpDir);
// make the connect request; this request establishes a permanent connection
MqttConnectOptions options = new MqttConnectOptions();
options.setAutomaticReconnect(true);
options.setCleanSession(true);
options.setConnectionTimeout(10);
options.setUserName("authorization");
options.setPassword(connectionAccessToken.toCharArray());
try {
client = new MqttAsyncClient(apiStreamingUri, CLIENT_ID, dataStore);
client.setCallback(mqttCallback);
client.connect(options).waitForCompletion();
client.subscribe("topic", 1).waitForCompletion();
} catch (MqttException e) {
e.printStackTrace();
}
}
}
Turns out it was the QoS setting causing the message to be delivered slowly. I set the QoS to 0 and the message was delivered promptly.
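For reference, a minimal sketch of what the subscription looks like with QoS 0 (the topic name is the placeholder from the code above):
// Subscribe with QoS 0 (fire-and-forget): no acknowledgement round-trips,
// which is what made delivery prompt in my case.
client.connect(options).waitForCompletion();
client.subscribe("topic", 0).waitForCompletion();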
I'm using Postman to test a chat feature (Socket.IO, version 2).
Currently, I have to implement the chat test cases in Java.
Postman request information:
1. Socket server: https://hc-socketio-example.xyz
2. Header.authorization: xxx
3. Header.source: app
4. Message.text with JSON format:
{ "ticketId": "63bcc910c22293b4b0495fe4", "content": "test ", "type": "text"}
My Java code to connect to the socket server:
URI uri = URI.create("https://hc-socket.unibag.xyz");
// @formatter:off
IO.Options options = IO.Options.builder().build();
// @formatter:on
Socket socket = IO.socket(uri, options);
socket.on(Socket.EVENT_CONNECT, new Emitter.Listener() {
@Override
public void call(Object... args) {
System.out.println("Connected to server...");
}
});
socket.connect();
My issues, where I need help:
It looks like my code is wrong, because the string "Connected to server..." is never printed.
I don't know how to set the request headers "authorization" and "source".
I'm not sure how to send a JSON message like the one above.
Could someone take a look and advise me on how to fix my code?
Thanks a lot in advance.
I tried researching some examples on the internet but had no luck. I'm confused about how to send the socket request.
Chats are typically facilitated through WebSockets over HTTP, assuming you want to build a chat system that runs over the internet.
I have a working program written that can establish a connection and send/receive messages from the Chat Server.
As a prerequisite you need a third-party library:
<dependency>
<groupId>org.java-websocket</groupId>
<artifactId>Java-WebSocket</artifactId>
<version>1.3.0</version>
</dependency>
Below is the client code, which can send headers, get acknowledgement for connections, and send/receive JSON messages:
import java.net.URI;
import java.security.cert.CertificateException;
import java.security.cert.X509Certificate;
import java.util.HashMap;
import java.util.Map;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;
import org.java_websocket.client.DefaultSSLWebSocketClientFactory;
import org.java_websocket.client.WebSocketClient;
import org.java_websocket.drafts.Draft_17;
import org.java_websocket.handshake.ServerHandshake;
public class ChatWebSocketClient extends WebSocketClient {
public ChatWebSocketClient(URI serverURI, Map<String, String> headers) {
super(serverURI, new Draft_17(), headers, 0);
}
@Override
public void onOpen(ServerHandshake handshakedata) {
// TODO Auto-generated method stub
}
@Override
public void onMessage(String message) {
// TODO Auto-generated method stub
}
@Override
public void onClose(int code, String reason, boolean remote) {
// TODO Auto-generated method stub
}
@Override
public void onError(Exception ex) {
// TODO Auto-generated method stub
}
public static void main(String[] args) throws Exception {
Map<String, String> headers = new HashMap<>();
headers.put("header1", "value1 for header1");
ChatWebSocketClient client = new ChatWebSocketClient(new URI("Remote_Chat_EndPoint"), headers);
// trust-all manager so the example also works against self-signed certificates (not for production)
TrustManager[] trustAllCerts = new TrustManager[]{new X509TrustManager() {
@Override
public X509Certificate[] getAcceptedIssuers() {
return new java.security.cert.X509Certificate[]{};
}
@Override
public void checkServerTrusted(X509Certificate[] arg0, String arg1) throws CertificateException {
}
@Override
public void checkClientTrusted(X509Certificate[] arg0, String arg1) throws CertificateException {
}
}};
SSLContext sc = SSLContext.getInstance("TLS");
sc.init(null, trustAllCerts, new java.security.SecureRandom());
client.setWebSocketFactory(new DefaultSSLWebSocketClientFactory(sc));
// block until the handshake completes before sending, otherwise the send can fail
client.connectBlocking();
client.send("{\"key\":\"Hello World\"}");
}
}
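For the specific request described in the question, the headers and the message could be wired up roughly like this (a sketch; the wss:// URI scheme is an assumption, and the header values are taken from the Postman settings listed above):
Map<String, String> headers = new HashMap<>();
headers.put("authorization", "xxx"); // token from the Postman request
headers.put("source", "app");
ChatWebSocketClient client = new ChatWebSocketClient(new URI("wss://hc-socketio-example.xyz"), headers);
// ... TLS setup and connect as shown in the class above ...
client.send("{\"ticketId\":\"63bcc910c22293b4b0495fe4\",\"content\":\"test\",\"type\":\"text\"}");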
I'm currently developing an MQTT client service with Eclipse Paho for a bigger piece of software and I'm encountering performance issues. I'm getting a bunch of events that I want to publish to the broker, and I'm using GSON to serialize those events. I've multithreaded the serialization and publishing. According to a rudimentary benchmark, serialization and publishing take up to 1 ms.
I'm using an ExecutorService with a fixed threadpool of size 10 (for now).
My code is currently submitting about 50 Runnables per second to the ExecutorService, but my Broker reports only about 5-10 messages per second.
I've previously benchmarked my MQTT setup and managed to send about 9000+ MQTT messages per second in a non-multithreaded way.
Does the threadpool have that much of an overhead, that I'm only able to get this small amount of publishes out of it?
public class MqttService implements IMessagingService{
protected int PORT = 1883;
protected String HOST = "localhost";
protected final String SERVICENAME = "MQTT";
protected static final String COMMANDTOPIC = "commands";
protected static final String REMINDSPREFIX = "Reminds/";
protected static final String VIOLATIONTOPIC = "violations/";
protected static final String WILDCARDTOPIC = "Reminds/#";
protected static final String TCPPREFIX = "tcp://";
protected static final String SSLPREFIX = "ssl://";
private MqttClient client;
private MqttConnectOptions optionsPublisher = new MqttConnectOptions();
private String IDPublisher = MqttClient.generateClientId(); // client ID used in connect(); assumed, not shown in the original snippet
private ExecutorService pool = Executors.newFixedThreadPool(10);
public MqttService() {
this("localhost", 1883);
}
public MqttService(String host, int port) {
this.HOST = host;
this.PORT = port;
}
@Override
public void setPort(int port) {
this.PORT = port;
}
@Override
public void setHost(String host) {
this.HOST = host;
}
@Override
public void sendMessage(AbstractMessage message) {
pool.submit(new SerializeJob(client,message));
}
@Override
public void connect() {
try {
client = new MqttClient(TCPPREFIX + HOST + ":" + PORT, IDPublisher);
optionsPublisher.setMqttVersion(MqttConnectOptions.MQTT_VERSION_3_1_1);
client.connect(optionsPublisher);
client.setCallback(new MessageCallback());
client.subscribe(WILDCARDTOPIC, 0);
} catch (MqttException e1) {
e1.printStackTrace();
}
}
}
The following code is the Runnable executed by the ExecutorService. This should not be an issue by itself though, since it only takes up to 1-2 ms to finish.
class SerializeJob implements Runnable {
private AbstractMessage message;
private MqttClient client;
public SerializeJob(MqttClient client, AbstractMessage m) {
this.client = client;
this.message = m;
}
@Override
public void run() {
String serializedMessage = MessageSerializer.serializeMessage(message);
MqttMessage wireMessage = new MqttMessage();
wireMessage.setQos(message.getQos());
wireMessage.setPayload(serializedMessage.getBytes());
if (client.isConnected()) {
StringBuilder topic = new StringBuilder();
topic.append(MqttService.REMINDSPREFIX);
topic.append(MqttService.VIOLATIONTOPIC);
try {
client.publish(topic.toString(), wireMessage);
} catch (MqttPersistenceException e) {
e.printStackTrace();
} catch (MqttException e) {
e.printStackTrace();
}
}
}
}
I'm not quite sure what's holding me back. MQTT itself seems to allow for a lot of events that can also have big payloads, and the network cannot possibly be an issue either, since I'm currently hosting the broker locally on the same machine as the client.
Edit with further testing:
I've now synthetically benchmarked my own setup, which consisted of locally hosted HiveMQ and Mosquitto brokers running "natively" on the machine. Using the Paho libraries, I sent increasingly bigger messages in batches of 1000. For each batch I calculated the throughput in messages from the first to the last message. This scenario didn't use any multithreading. With this I came up with the following performance chart:
The machine running both the client and the brokers is a desktop with an i7 6700 and 32 GB of RAM. The brokers had access to all cores and 8 GB of memory for their VMs.
For benchmarking I used the following code:
import java.util.Random;
import org.eclipse.paho.client.mqttv3.IMqttDeliveryToken;
import org.eclipse.paho.client.mqttv3.MqttCallback;
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.MqttException;
import org.eclipse.paho.client.mqttv3.MqttMessage;
import org.eclipse.paho.client.mqttv3.MqttPersistenceException;
public class MqttBenchmarker {
protected static int PORT = 1883;
protected static String HOST = "localhost";
protected final String SERVICENAME = "MQTT";
protected static final String COMMANDTOPIC = "commands";
protected static final String REMINDSPREFIX = "Reminds/";
protected static final String VIOLATIONTOPIC = "violations/";
protected static final String WILDCARDTOPIC = "Reminds/#";
protected static final String TCPPREFIX = "tcp://";
protected static final String SSLPREFIX = "ssl://";
private static MqttClient client;
private static MqttConnectOptions optionsPublisher = new MqttConnectOptions();
private static String IDPublisher = MqttClient.generateClientId();
private static int messageReceived = 0;
private static long timesent = 0;
private static int count = 2;
private static StringBuilder out = new StringBuilder();
private static StringBuilder in = new StringBuilder();
private static final int runs = 1000;
private static boolean receivefinished = false;
public static void main(String[] args) {
connect();
Thread sendThread=new Thread(new Runnable(){
@Override
public void run() {
Random rd = new Random();
for (int i = 2; i < 1000000; i += i) {
byte[] arr = new byte[i];
// System.out.println("Starting test run for byte Array of size:
// "+arr.length);
long startt = System.currentTimeMillis();
System.out.println("Test for size: " + i + " started.");
for (int a = 0; a <= runs; a++) {
rd.nextBytes(arr);
try {
client.publish(REMINDSPREFIX, arr, 1, false);
} catch (MqttPersistenceException e) {
// TODO Auto-generated catch block
e.printStackTrace();
} catch (MqttException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
try {
while (!receivefinished) {
Thread.sleep(10);
}
receivefinished = false;
System.out.println("Test for size: " + i + " finished.");
out.append("Sending Payload size: " + arr.length + " achieved "
+ runs / ((System.currentTimeMillis() - startt) / 1000d) + " messages per second.\n");
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
System.out.println(out.toString());
System.out.println(in.toString());
}
});
sendThread.start();
}
private static class MessageCallback implements MqttCallback {
@Override
public void messageArrived(String arg0, MqttMessage arg1) throws Exception {
if (messageReceived == 0) {
timesent = System.currentTimeMillis();
}
messageReceived++;
if (messageReceived >= runs) {
receivefinished = true;
in.append("Receiving payload size " + count + " achieved "
+ runs / ((System.currentTimeMillis() - timesent) / 1000d) + " messages per second.\n");
count += count;
messageReceived = 0;
}
}
@Override
public void deliveryComplete(IMqttDeliveryToken arg0) {
// TODO Auto-generated method stub
}
@Override
public void connectionLost(Throwable arg0) {
// TODO Auto-generated method stub
}
}
public static void connect() {
try {
client = new MqttClient(TCPPREFIX + HOST + ":" + PORT, IDPublisher);
optionsPublisher.setMqttVersion(MqttConnectOptions.MQTT_VERSION_3_1_1);
optionsPublisher.setAutomaticReconnect(true);
optionsPublisher.setCleanSession(false);
optionsPublisher.setMaxInflight(65535);
client.connect(optionsPublisher);
while (!client.isConnected()) {
try {
Thread.sleep(50);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
client.setCallback(new MessageCallback());
client.subscribe(WILDCARDTOPIC, 0);
} catch (MqttException e1) {
e1.printStackTrace();
}
}
}
What's weird is that the serialized messages I want to send from my application only use about 4000 bytes. So the theoretical throughput should be around 200 messages per second. Could this still be a problem caused by longer computations inside the callback function? I've already achieved far better results with the Mosquitto broker, and I will test further how far I can push performance with it.
Thanks for any suggestions!
One problem is in the test setup of the MQTT client.
You are using just one single MQTT client. What you are effectively testing is the size of the MQTT inflight window, via this formula:
throughput <= inflight window size / round-trip time
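For illustration only (these numbers are assumptions, not measurements from this setup): if the broker caps a client at an inflight window of 10 unacknowledged QoS 1 messages and each acknowledgement round-trip effectively takes about 1 second because the client is being throttled, the ceiling is 10 / 1 s = 10 messages per second, which is the order of magnitude reported in the question.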
HiveMQ has a property enabled by default, called <cluster-overload-protection>, that limits the inflight window of a single client.
Additionally, the Paho client is not really suited for high-throughput work in a multithreaded context. A better implementation for high-performance scenarios would be the HiveMQ MQTT Client.
With 20 clients connected (10 publishing and 10 receiving), I reach a sustained throughput of around 6000 QoS 1 10 kB messages per second.
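For what it's worth, here is a rough sketch of spreading publishes across several Paho clients so that a single client's inflight window is no longer the bottleneck (the broker URL, pool size, and client IDs are illustrative only):
import java.util.concurrent.atomic.AtomicInteger;
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttException;

public class MultiClientPublisher {
    private final MqttClient[] publishers;
    private final AtomicInteger next = new AtomicInteger();

    public MultiClientPublisher(String brokerUrl, int poolSize) throws MqttException {
        publishers = new MqttClient[poolSize];
        for (int i = 0; i < poolSize; i++) {
            // each connection gets its own inflight window on the broker
            publishers[i] = new MqttClient(brokerUrl, MqttClient.generateClientId());
            publishers[i].connect();
        }
    }

    public void publish(String topic, byte[] payload, int qos) throws MqttException {
        // round-robin over the connected clients
        MqttClient client = publishers[Math.floorMod(next.getAndIncrement(), publishers.length)];
        client.publish(topic, payload, qos, false);
    }
}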
Disclaimer: I work as a software developer for HiveMQ.
I'm still not exactly sure what the issue was, but switching my broker from HiveMQ to Mosquitto seems to have fixed it. Maybe Mosquitto has a different set of default settings compared to HiveMQ, or maybe the free trial version of HiveMQ is limited in some other way than just the number of connected clients.
In any case, Mosquitto worked much better for me and handled all the messages I could throw at it from my application.
My server has a few handlers:
new ConnectionHandler(),
new MessageDecoder(),
new PacketHandler(),
new MessageEncoder()
It seems that the last handler should be called when the server sends data to the client, but it is never called.
Here is the code of the handler:
public class MessageEncoder extends ChannelOutboundHandlerAdapter {
@Override
public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) throws Exception {
System.out.println("from encoder");
NetCommand command = (NetCommand)msg;
byte[] data = JsonSerializer.getInstance().Serialize(command);
ByteBuf encoded = ctx.alloc().buffer(4+data.length);
encoded.writeInt(data.length).writeBytes(data);
ctx.writeAndFlush(encoded);
}
}
As you see, I'm using a POJO instead of a ByteBuf, but it does not work. And even when I try to send a ByteBuf, the last handler is not called either.
Also, here is how I send the data:
@Override
public void run()
{
Packet packet;
NetCommand data;
Session session;
while ( isActive )
{
try {
session = (Session) this.sessionQueue.take();
packet = session.getWritePacketQueue().take();
data = packet.getData();
System.out.println("data code: "+data.netCode);
ChannelHandlerContext ctx = packet.getSender().getContext();
ctx.writeAndFlush(data);
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}
What did I do wrong?
Most likely the ChannelHandlerContext you use for writeAndFlush() belongs to a ChannelHandler which is before your last handler in the pipeline. If you want to ensure that the write starts at the tail / end of the pipeline, use ctx.channel().writeAndFlush(...).
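As a rough illustration of the difference (data and handler names taken from the question):
// Writing on the Channel starts the outbound event at the tail of the pipeline,
// so every outbound handler, including MessageEncoder, is invoked:
ctx.channel().writeAndFlush(data);

// Writing on the ChannelHandlerContext starts at that handler's position and only
// travels towards the head, so an encoder added after it in the pipeline is skipped:
ctx.writeAndFlush(data);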
Initially I am able to make the connection. But if I simply close the client connection and try to connect again, or restart the client, the connection is not established. It creates a connection only once.
Can someone help me improve it so that it can handle n clients simultaneously?
bossGroup = new NioEventLoopGroup(1);
workerGroup = new NioEventLoopGroup();
try {
ServerBootstrap b = new ServerBootstrap();
b.group(bossGroup, workerGroup).channel(NioServerSocketChannel.class).option(ChannelOption.SO_BACKLOG, 100)
.handler(new LoggingHandler(LogLevel.INFO)).childHandler(new ChannelInitializer<SocketChannel>() {
@Override
public void initChannel(SocketChannel ch) throws Exception {
ChannelPipeline p = ch.pipeline();
p.addLast(new DelimiterBasedFrameDecoder(20000, Delimiters.lineDelimiter()));
// p.addLast(new StringDecoder());
// p.addLast(new StringEncoder());
p.addLast(serverHandler);
}
});
// Start the server.
LOGGER.key("Simulator is opening listen port").low().end();
ChannelFuture f = b.bind(config.getPort()).sync();
LOGGER.key("Simulator started listening at port: " + config.getPort()).low().end();
// Wait until the server socket is closed.
f.channel().closeFuture().sync();
} finally {
// Shut down all event loops to terminate all threads.
LOGGER.key("Shtting down all the thread if anyone is still open.").low().end();
bossGroup.shutdownGracefully();
workerGroup.shutdownGracefully();
}
Server Handler code is below:
public class SimulatorServerHandler extends SimpleChannelInboundHandler<String> {
private AtomicReference<ChannelHandlerContext> ctxRef = new AtomicReference<ChannelHandlerContext>();
private final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
private AtomicInteger seqNum = new AtomicInteger(1);
private final Configuration configure;
private ScheduledFuture<?> hbTimerWorker;
private final int stx = 0x02;
private final int etx = 0x03;
private final ILogger LOGGER;
public int enablePublishFunction = 0;
public SimulatorServerHandler(Configuration config) {
this.configure = config;
//LOGGER = LogFactory.INSTANCE.createLogger();
LOGGER = new LogFactory().createLogger("SIM SERVER");
}
@Override
public void channelActive(ChannelHandlerContext ctx) throws Exception {
ctxRef.set(ctx);
enablePublishFunction =1;
// System.out.println("Connected!");
LOGGER.low().key("Gateway connected to the Simulator ").end();
startHBTimer();
}
@Override
public void channelInactive(ChannelHandlerContext ctx) throws Exception {
ctx.fireChannelInactive();
hbTimerWorker.cancel(false);
enablePublishFunction =0;
LOGGER.low().key("Gateway disconnected from the Simulator ").end();
}
@Override
public void channelRead0(ChannelHandlerContext ctx, String request) {
// Generate and write a response.
String response;
boolean close = false;
/* if (request.isEmpty()) {
response = "Please type something.\r\n";
} else if ("bye".equals(request.toLowerCase())) {
response = "Have a good day!\r\n";
close = true;
} else {
response = "Did you say '" + request + "'?\r\n";
}
// We do not need to write a ChannelBuffer here.
// We know the encoder inserted at TelnetPipelineFactory will do the conversion.
ChannelFuture future = ctx.write(response);
// Close the connection after sending 'Have a good day!'
// if the client has sent 'bye'.
if (close) {
future.addListener(ChannelFutureListener.CLOSE);
}
*/
System.out.println(request);
}
@Override
public void channelReadComplete(ChannelHandlerContext ctx) {
ctx.flush();
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
LOGGER.key("Unknown exception while network communication :"+ cause.getStackTrace()).high().end();
cause.printStackTrace();
ctx.close();
}
}
Maybe it is because you always use the very same server handler instance in your pipeline for all connections (instead of using new ServerHandler())? Side effects in your implementation could prevent your handler from being reusable.
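For example, since SimulatorServerHandler keeps per-connection state (ctxRef, the heartbeat timer, enablePublishFunction), one option is to create a fresh handler instance for every accepted channel (a sketch based on the bootstrap code above):
.childHandler(new ChannelInitializer<SocketChannel>() {
    @Override
    public void initChannel(SocketChannel ch) throws Exception {
        ChannelPipeline p = ch.pipeline();
        p.addLast(new DelimiterBasedFrameDecoder(20000, Delimiters.lineDelimiter()));
        // new handler per connection instead of one shared, stateful instance
        p.addLast(new SimulatorServerHandler(config));
    }
});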
I'm new to Jetty, and I'm trying to implement a WebSocket client on Jetty 9.
I saw an example on Jetty8.
org.eclipse.jetty.websocket Class WebSocketClient
http://archive.eclipse.org/jetty/8.0.0.v20110901/apidocs/org/eclipse/jetty/websocket/WebSocketClient.html
The way to create a new instance of WebSocketClient is:
WebSocketClientFactory factory = new WebSocketClientFactory();
factory.start();
WebSocketClient client = factory.newWebSocketClient();
// Configure the client
WebSocket.Connection connection = client.open(new
URI("ws://127.0.0.1:8080/"), new WebSocket.OnTextMessage()
{
public void onOpen(Connection connection)
{
// open notification
}
public void onClose(int closeCode, String message)
{
// close notification
}
public void onMessage(String data)
{
// handle incoming message
}
}).get(5, TimeUnit.SECONDS);
connection.sendMessage("Hello World");
However, I've never seen documentation for Jetty 9 for this.
So far, referring to
org.eclipse.jetty.websocket.common
Interface SessionFactory
//----------------------------------------------
WebSocketSession createSession(URI requestURI,
EventDriver websocket,
LogicalConnection connection)
//----------------------------------------------
I've tried
private WebSocketSessionFactory factory = new WebSocketSessionFactory();
try
{
WebSocketSession session = factory.createSession(uri,
eventDriver, connection);
RemoteEndpoint ep = session.getRemote();
}
catch (Exception ex)
{
System.out.println("=ERROR= " + ex);
//=ERROR= java.lang.NullPointerException
}
private EventDriver eventDriver = new EventDriver()
{
@Override
public WebSocketPolicy getPolicy()
{
return null;
}
//......................................
@Override
public void incomingFrame(Frame frame)
{
}
};
private LogicalConnection connection = new LogicalConnection()
{
@Override
public void close()
{
}
//...............................
@Override
public void resume()
{
}
};
but I've encountered a java.lang.NullPointerException.
How do we implement a Jetty 9 WebSocket client?
Thanks for your advice.
Hope this is helpful: EventClient.java
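For completeness, here is a minimal sketch of a Jetty 9 client using the annotated-endpoint API from org.eclipse.jetty.websocket.client (the endpoint URI and the timeouts are placeholders):
import java.net.URI;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

import org.eclipse.jetty.websocket.api.Session;
import org.eclipse.jetty.websocket.api.annotations.OnWebSocketClose;
import org.eclipse.jetty.websocket.api.annotations.OnWebSocketConnect;
import org.eclipse.jetty.websocket.api.annotations.OnWebSocketMessage;
import org.eclipse.jetty.websocket.api.annotations.WebSocket;
import org.eclipse.jetty.websocket.client.ClientUpgradeRequest;
import org.eclipse.jetty.websocket.client.WebSocketClient;

public class SimpleEchoClient {

    // annotated endpoint; Jetty discovers the callback methods via the annotations
    @WebSocket
    public static class SimpleEchoSocket {
        @OnWebSocketConnect
        public void onConnect(Session session) {
            System.out.println("Connected: " + session);
        }

        @OnWebSocketMessage
        public void onMessage(String msg) {
            System.out.println("Got message: " + msg);
        }

        @OnWebSocketClose
        public void onClose(int statusCode, String reason) {
            System.out.println("Closed: " + statusCode + " " + reason);
        }
    }

    public static void main(String[] args) throws Exception {
        WebSocketClient client = new WebSocketClient();
        SimpleEchoSocket socket = new SimpleEchoSocket();
        client.start();
        try {
            URI uri = new URI("ws://127.0.0.1:8080/"); // adjust to your endpoint
            ClientUpgradeRequest request = new ClientUpgradeRequest();
            Future<Session> future = client.connect(socket, uri, request);
            Session session = future.get(5, TimeUnit.SECONDS);
            session.getRemote().sendString("Hello World");
            Thread.sleep(2000); // give the server a moment to respond
        } finally {
            client.stop();
        }
    }
}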