I'm running some performance tests of a client/server setup using the Mosquitto broker and clients, plus Paho clients, and I got some strange results:
Deployment notes:
3 machines: producer, broker, consumer
Producers: 6 Python scripts calling mosquitto_pub as fast as they can. See below.
Consumer: the simple Java client shown below, subscribing to all topics.
The hardware specifics have not made a significant difference.
1) Mosquitto receives around 1459.5 messages/s but sends only ~974. The subscribers get just ~485.5.
2) No matter how many instances of the Paho client are created, the performance does not improve. E.g. if you run 6 producers on one topic and 2 consumers on two topics you get ~485.5 msg/s. But if you add 6 producers on the other topic (I already checked that the total number of messages does increase), the total throughput stays the same and the per-topic rate is halved.
3) If you run precisely the same test with two separate Java applications, the performance does not drop. Each application gets the maximum throughput.
In no case does CPU or memory reach any limit.
Producers.py
from datetime import datetime
import os, sys, json

host = "broker"
n = 1
if len(sys.argv) > 1:
    n = int(sys.argv[1])

while True:
    payload = {"id": str(n),
               "Time": datetime.now().strftime("%Y-%m-%dT%H:%M:%S.00Z"),
               "ResultValue": 1.0,
               "ResultType": "integer",
               "Datastream": {"id": str(n)}}
    # json.dumps twice so the message arrives as a quoted JSON string and is shell-escaped
    os.system("mosquitto_pub -h " + host + " -t /" + payload["id"] +
              " -m " + json.dumps(json.dumps(payload)))
Consumer.java
package eu.linksmart.testing;
import org.eclipse.paho.client.mqttv3.*;
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence;
import java.util.UUID;
public class Application implements MqttCallback {
public Application() {
id++;
}
public static void main(String[] args) {
try {
Application app = new Application();
create("1",new Application());
create("2",new Application());
while (true)
try {
Thread.sleep(30000);
} catch (InterruptedException e) {
e.printStackTrace();
}
} catch (MqttException e) {
e.printStackTrace();
}
}
static void create(String id, Application app) throws MqttException {
MqttClient mqttClient = new MqttClient("tcp://broker:1883", UUID.randomUUID().toString(), new MemoryPersistence());
mqttClient.connect();
mqttClient.subscribe("/"+id+"/#", 1);
mqttClient.setCallback(app);
}
long acc =0;
int i=0;
long start= System.nanoTime();
static int id=0;
@Override
public void connectionLost(Throwable throwable) {
}
@Override
public void messageArrived(String s, MqttMessage mqttMessage) throws Exception {
i++;
acc = (System.nanoTime()-start);
if(acc/1000000>1000){
start = System.nanoTime();
System.out.println(String.valueOf((i * 1000000000.0) / acc));
acc =0;
i=0;
}
}
@Override
public void deliveryComplete(IMqttDeliveryToken iMqttDeliveryToken) {
}
}
E.g. running the producer for topic 1 as:
python Producers.py 1&
What limits the Paho client inside a Java application?
Well, after a lot of debugging I found out what the problem was.
The topic $SYS/broker/load/messages/received/1min was reporting more messages than I was sending. It is probably counting protocol messages as messages too; even when idle, this topic reports about 3.22 with one subscriber. So I thought I was sending ~1459.5 msg/s, as reported by Mosquitto, but I was actually sending only ~485.5.
So do not trust this topic for counting application payload messages!
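For reference, here is a small Paho sketch (not part of the original test) that subscribes to both load counters side by side. It assumes a Paho client version that supports per-subscription message listeners, and that the broker also publishes $SYS/broker/load/publish/received/1min, which on Mosquitto should count only PUBLISH packets rather than all protocol traffic:
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttException;
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence;

public class SysLoadMonitor {
    public static void main(String[] args) throws MqttException, InterruptedException {
        MqttClient client = new MqttClient("tcp://broker:1883", "sys-monitor", new MemoryPersistence());
        client.connect();
        // All MQTT packets received by the broker (includes protocol traffic).
        client.subscribe("$SYS/broker/load/messages/received/1min", 0,
                (topic, msg) -> System.out.println("all packets/min: " + msg));
        // Only application PUBLISH packets received by the broker.
        client.subscribe("$SYS/broker/load/publish/received/1min", 0,
                (topic, msg) -> System.out.println("PUBLISH packets/min: " + msg));
        Thread.currentThread().join(); // keep the JVM alive, like the consumer above
    }
}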
I'm doing this for the first time: I am going to read a stream of data using a WebSocket.
Here is my code snippet
RsvpApplication
@SpringBootApplication
public class RsvpApplication {
private static final String MEETUP_RSVPS_ENDPOINT = "ws://stream.myapi.com/2/rsvps";
public static void main(String[] args) {
SpringApplication.run(RsvpApplication.class, args);
}
@Bean
public ApplicationRunner initializeConnection(
RsvpsWebSocketHandler rsvpsWebSocketHandler) {
return args -> {
System.out.println("initializeConnection");
WebSocketClient rsvpsSocketClient = new StandardWebSocketClient();
rsvpsSocketClient.doHandshake(
rsvpsWebSocketHandler, MEETUP_RSVPS_ENDPOINT);
};
}
}
RsvpsWebSocketHandler
@Component
class RsvpsWebSocketHandler extends AbstractWebSocketHandler {
private static final Logger logger =
Logger.getLogger(RsvpsWebSocketHandler.class.getName());
private final RsvpsKafkaProducer rsvpsKafkaProducer;
public RsvpsWebSocketHandler(RsvpsKafkaProducer rsvpsKafkaProducer) {
this.rsvpsKafkaProducer = rsvpsKafkaProducer;
}
@Override
public void handleMessage(WebSocketSession session,
WebSocketMessage<?> message) {
logger.log(Level.INFO, "New RSVP:\n {0}", message.getPayload());
System.out.println("handleMessage");
rsvpsKafkaProducer.sendRsvpMessage(message);
}
}
RsvpsKafkaProducer
@Component
@EnableBinding(Source.class)
public class RsvpsKafkaProducer {
private static final int SENDING_MESSAGE_TIMEOUT_MS = 10000;
private final Source source;
public RsvpsKafkaProducer(Source source) {
this.source = source;
}
public void sendRsvpMessage(WebSocketMessage<?> message) {
System.out.println("sendRsvpMessage");
source.output()
.send(MessageBuilder.withPayload(message.getPayload())
.build(),
SENDING_MESSAGE_TIMEOUT_MS);
}
}
As far as I know and have read about WebSockets, a one-time connection is made and the stream of data keeps flowing continuously until either party (client or server) stops.
I'm building this for the first time, so I'm trying to cover the major scenarios that can come up while dealing with 10,000+ messages per minute. There are two Kafka brokers in total, with enough space.
What can be done if the connection gets lost, so that once reconnected the client resumes consuming messages from the WebSocket where it left off at the last failure and keeps pushing them into the Kafka broker?
What can be done to put the WebSocket on hold, i.e. stop pushing messages into the broker, if the broker has reached its threshold of unprocessed messages?
What can be done so that, when the broker has reached its threshold, a separate process checks the available space in the broker and signals when it is safe to resume pushing messages into Kafka?
Please also share other issues that need to be considered while setting this up.
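For the reconnection question, here is a minimal sketch of one approach: re-run the handshake from afterConnectionClosed after a fixed delay. It assumes the upstream stream cannot be resumed from an offset (if the API supports a cursor, you would pass the last seen offset on reconnect); the class name, endpoint constant and delay below are illustrative, not a tested implementation.
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import org.springframework.web.socket.CloseStatus;
import org.springframework.web.socket.WebSocketMessage;
import org.springframework.web.socket.WebSocketSession;
import org.springframework.web.socket.client.WebSocketClient;
import org.springframework.web.socket.client.standard.StandardWebSocketClient;
import org.springframework.web.socket.handler.AbstractWebSocketHandler;

class ReconnectingRsvpsWebSocketHandler extends AbstractWebSocketHandler {
    private static final String ENDPOINT = "ws://stream.myapi.com/2/rsvps";
    private static final long RECONNECT_DELAY_MS = 5000;

    private final WebSocketClient client = new StandardWebSocketClient();
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private final RsvpsKafkaProducer rsvpsKafkaProducer;

    ReconnectingRsvpsWebSocketHandler(RsvpsKafkaProducer rsvpsKafkaProducer) {
        this.rsvpsKafkaProducer = rsvpsKafkaProducer;
    }

    void connect() {
        client.doHandshake(this, ENDPOINT);
    }

    @Override
    public void handleMessage(WebSocketSession session, WebSocketMessage<?> message) {
        rsvpsKafkaProducer.sendRsvpMessage(message);
    }

    @Override
    public void afterConnectionClosed(WebSocketSession session, CloseStatus status) {
        // No server-side offset to resume from here; just reconnect after a delay.
        scheduler.schedule(this::connect, RECONNECT_DELAY_MS, TimeUnit.MILLISECONDS);
    }
}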
I am currently trying to learn netty-socket.io using their
demo project. I keep seeing Thread.sleep(Integer.MAX_VALUE);. Can someone please tell me why this is important?
Addition: To clarify, I am not asking what the Thread.sleep() function does; obviously it pauses execution on a particular thread. I am asking about its relevance in this example socket server.
package com.corundumstudio.socketio.demo;
import com.corundumstudio.socketio.listener.*;
import com.corundumstudio.socketio.*;
public class NamespaceChatLauncher {
public static void main(String[] args) throws InterruptedException {
Configuration config = new Configuration();
config.setHostname("localhost");
config.setPort(9092);
final SocketIOServer server = new SocketIOServer(config);
final SocketIONamespace chat1namespace = server.addNamespace("/chat1");
chat1namespace.addEventListener("message", ChatObject.class, new DataListener<ChatObject>() {
@Override
public void onData(SocketIOClient client, ChatObject data, AckRequest ackRequest) {
// broadcast messages to all clients
chat1namespace.getBroadcastOperations().sendEvent("message", data);
}
});
final SocketIONamespace chat2namespace = server.addNamespace("/chat2");
chat2namespace.addEventListener("message", ChatObject.class, new DataListener<ChatObject>() {
@Override
public void onData(SocketIOClient client, ChatObject data, AckRequest ackRequest) {
// broadcast messages to all clients
chat2namespace.getBroadcastOperations().sendEvent("message", data);
}
});
server.start();
//Thread.sleep(Integer.MAX_VALUE);
Thread.sleep(4000);
server.stop();
}
}
So I figured out that this does not have anything to do with the server at all. Thread.sleep(Integer.MAX_VALUE); simply pauses execution of the program. To make this answer intuitive, I will change Thread.sleep(Integer.MAX_VALUE) to Thread.sleep(4000) in the posted code block.
I.e., this would start the server, run it for 4 seconds and then stop it.
The call seems to be there only to serve its purpose, which is to start the server and eventually stop it, as this was taken from a demo project.
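As a side note, a common alternative for keeping the JVM alive indefinitely without a magic sleep value is to block on a latch and let a shutdown hook stop the server. This is just an illustrative sketch (it needs java.util.concurrent.CountDownLatch), not code from the demo project:
// ...inside main(), after server.start():
final CountDownLatch shutdownLatch = new CountDownLatch(1);
Runtime.getRuntime().addShutdownHook(new Thread(() -> {
    server.stop();               // clean shutdown when the JVM is asked to exit
    shutdownLatch.countDown();
}));
shutdownLatch.await();           // blocks "forever" instead of Thread.sleep(Integer.MAX_VALUE)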
I just want to crawl some Hacker News Stories, and my code:
import org.apache.http.client.fluent.Request;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.PrintWriter;
import java.util.logging.Logger;
import java.util.stream.IntStream;
public class HackCrawler {
private static String getUrlResponse(String url) throws IOException {
return Request.Get(url).execute().returnContent().asString();
}
private static String crawlItem(int id) {
try {
String json = getUrlResponse(String.format("https://hacker-news.firebaseio.com/v0/item/%d.json", id));
if (json.contains("\"type\":\"story\"")) {
return json;
}
} catch (IOException e) {
System.out.println("crawl " + id + " failed");
}
return "";
}
public static void main(String[] args) throws FileNotFoundException {
Logger logger = Logger.getLogger("main");
PrintWriter printWriter = new PrintWriter("hack.json");
for (int i = 0; i < 10000; i++) {
logger.info("batch " + i);
IntStream.range(12530671 - (i + 1) * 100, 12530671 - i * 100)
.parallel()
.mapToObj(HackCrawler::crawlItem).filter(x -> !x.equals(""))
.forEach(printWriter::println);
}
}
}
Currently it takes about 3 seconds to crawl 100 items (1 batch).
I found that using multithreading via parallel() gives a speed-up (about 5 times), but I have no idea how to optimise it further.
Could anyone give some suggestions?
To achieve what Fayaz means, I would use the Jetty HTTP client's asynchronous features (https://webtide.com/the-new-jetty-9-http-client/).
httpClient.newRequest("http://domain.com/path")
.send(new Response.CompleteListener()
{
@Override
public void onComplete(Result result)
{
// Your logic here
}
});
This client internally uses Java NIO to listen for incoming responses with a single thread per connection. It then dispatches content to worker threads which are not involved in any blocking I/O operation.
You can try to play with the maximum number of connections per destination (a destination is basically a host):
http://download.eclipse.org/jetty/9.3.11.v20160721/apidocs/org/eclipse/jetty/client/HttpClient.html#setMaxConnectionsPerDestination-int-
Since you are heavily loading a single server, this should be quite high.
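For illustration, a rough sketch of that configuration; the limit of 64, the host and the loop are placeholders rather than tuned values, and the callback runs once each exchange completes:
import org.eclipse.jetty.client.HttpClient;

public class AsyncCrawlSketch {
    public static void main(String[] args) throws Exception {
        HttpClient httpClient = new HttpClient();
        // All item requests go to the same host, so allow many concurrent connections to it.
        httpClient.setMaxConnectionsPerDestination(64);
        httpClient.start();
        for (int id = 0; id < 100; id++) {
            httpClient.newRequest("http://domain.com/path/" + id)
                      .send(result -> {
                          // Completion callback: runs on Jetty's worker threads, no blocking I/O here.
                      });
        }
    }
}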
The following steps should get you started.
Use a single thread to get the responses from the site, as this is basically an I/O operation.
Put these responses into a queue (read about the various implementations of BlockingQueue).
Now you can have multiple threads pick up these responses and process them as you wish.
Basically, you will have a single producer thread that gets the responses from the site and multiple consumers that process these responses.
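A minimal sketch of that producer/consumer split; the queue capacity, thread count and the fetch/process placeholders are arbitrary assumptions:
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

public class PipelineSketch {
    public static void main(String[] args) {
        BlockingQueue<String> responses = new LinkedBlockingQueue<>(1000);
        ExecutorService consumers = Executors.newFixedThreadPool(4);

        // Single producer: fetches responses (I/O-bound) and enqueues them.
        new Thread(() -> {
            for (int id = 0; id < 100; id++) {
                try {
                    responses.put(fetch(id)); // blocks if the consumers fall behind
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        }).start();

        // Multiple consumers: take responses off the queue and process them.
        for (int i = 0; i < 4; i++) {
            consumers.submit(() -> {
                while (!Thread.currentThread().isInterrupted()) {
                    try {
                        process(responses.take());
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            });
        }
        // In a real program you would shut the pool down, e.g. with a poison pill per consumer.
    }

    // Placeholders for the actual HTTP call and the downstream processing.
    static String fetch(int id) { return "response " + id; }
    static void process(String response) { System.out.println(response); }
}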
I wrote a server that accepts connections and bombards them with messages (~100 bytes) using a text protocol, and my implementation is able to send about 400K msg/sec over loopback with a 3rd-party client. I picked Netty for this task, on SUSE 11 RealTime with JRockit RTS.
But when I started developing my own client based on Netty I saw a drastic throughput reduction (down from 400K to 1.3K msg/sec). The client code is pretty straightforward. Could you please give advice or show examples of how to write a much more efficient client? I actually care more about latency, but I started with throughput tests, and I don't think 1.5K msg/sec on loopback is normal.
P.S. The client's only purpose is to receive messages from the server and, very seldom, send heartbeats.
Client.java
public class Client {
private static ClientBootstrap bootstrap;
private static Channel connector;
public static boolean start()
{
ChannelFactory factory =
new NioClientSocketChannelFactory(
Executors.newCachedThreadPool(),
Executors.newCachedThreadPool());
ExecutionHandler executionHandler = new ExecutionHandler( new OrderedMemoryAwareThreadPoolExecutor(16, 1048576, 1048576));
bootstrap = new ClientBootstrap(factory);
bootstrap.setPipelineFactory( new ClientPipelineFactory() );
bootstrap.setOption("tcpNoDelay", true);
bootstrap.setOption("keepAlive", true);
bootstrap.setOption("receiveBufferSize", 1048576);
ChannelFuture future = bootstrap
.connect(new InetSocketAddress("localhost", 9013));
if (!future.awaitUninterruptibly().isSuccess()) {
System.out.println("--- CLIENT - Failed to connect to server at " +
"localhost:9013.");
bootstrap.releaseExternalResources();
return false;
}
connector = future.getChannel();
return connector.isConnected();
}
public static void main( String[] args )
{
boolean started = start();
if ( started )
System.out.println( "Client connected to the server" );
}
}
ClientPipelineFactory.java
public class ClientPipelineFactory implements ChannelPipelineFactory{
private final ExecutionHandler executionHandler;
public ClientPipelineFactory( ExecutionHandler executionHandle )
{
this.executionHandler = executionHandle;
}
@Override
public ChannelPipeline getPipeline() throws Exception {
ChannelPipeline pipeline = pipeline();
pipeline.addLast("framer", new DelimiterBasedFrameDecoder(
1024, Delimiters.lineDelimiter()));
pipeline.addLast( "executor", executionHandler);
pipeline.addLast("handler", new MessageHandler() );
return pipeline;
}
}
MessageHandler.java
public class MessageHandler extends SimpleChannelHandler{
static final long NANOS_IN_SEC = 1000000000L; // nanoseconds per second (used below)
long max_msg = 10000;
long cur_msg = 0;
long startTime = System.nanoTime();
@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
cur_msg++;
if ( cur_msg == max_msg )
{
System.out.println( "Throughput (msg/sec) : " + max_msg* NANOS_IN_SEC/( System.nanoTime() - startTime ) );
cur_msg = 0;
startTime = System.nanoTime();
}
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {
e.getCause().printStackTrace();
e.getChannel().close();
}
}
Update: on the server side there is a periodic thread that writes to the accepted client channel, and the channel soon becomes unwritable.
Update 2: added OrderedMemoryAwareThreadPoolExecutor to the pipeline, but the throughput is still very low (about 4K msg/sec).
Fixed: I put the executor in front of the whole pipeline stack and it worked!
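For reference, a sketch of what that fix looks like in the pipeline factory from the question: the ExecutionHandler is added first, so the frame decoder and the message handler run on the executor's thread pool instead of the NIO worker thread.
import static org.jboss.netty.channel.Channels.pipeline;
import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.channel.ChannelPipelineFactory;
import org.jboss.netty.handler.codec.frame.DelimiterBasedFrameDecoder;
import org.jboss.netty.handler.codec.frame.Delimiters;
import org.jboss.netty.handler.execution.ExecutionHandler;
import org.jboss.netty.handler.execution.OrderedMemoryAwareThreadPoolExecutor;

public class ClientPipelineFactory implements ChannelPipelineFactory {
    private final ExecutionHandler executionHandler =
            new ExecutionHandler(new OrderedMemoryAwareThreadPoolExecutor(16, 1048576, 1048576));

    @Override
    public ChannelPipeline getPipeline() throws Exception {
        ChannelPipeline pipeline = pipeline();
        // Executor first: upstream events are handed off here, so everything below runs on the pool.
        pipeline.addLast("executor", executionHandler);
        pipeline.addLast("framer", new DelimiterBasedFrameDecoder(1024, Delimiters.lineDelimiter()));
        pipeline.addLast("handler", new MessageHandler());
        return pipeline;
    }
}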
If the server is sending messages with a fixed size (~100 bytes), you can set a ReceiveBufferSizePredictor on the client bootstrap; this will optimize the reads:
bootstrap.setOption("receiveBufferSizePredictorFactory",
new AdaptiveReceiveBufferSizePredictorFactory(MIN_PACKET_SIZE, INITIAL_PACKET_SIZE, MAX_PACKET_SIZE));
According to the code segment you have posted, the client's NIO worker thread is doing everything in the pipeline, so it will be busy decoding and executing the message handlers. You have to add an execution handler.
You have said that the channel is becoming unwritable on the server side, so you may have to adjust the watermark sizes in the server bootstrap. You can periodically monitor the write buffer size (write queue size) and make sure the channel becomes unwritable because messages cannot be written to the network fast enough. That can be done with a util class like the one below.
package org.jboss.netty.channel.socket.nio;
import org.jboss.netty.channel.Channel;
public final class NioChannelUtil {
public static long getWriteTaskQueueCount(Channel channel) {
NioSocketChannel nioChannel = (NioSocketChannel) channel;
return nioChannel.writeBufferSize.get();
}
}
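And a small sketch of the periodic monitoring mentioned above, built on that util class; the one-second interval and the println are arbitrary choices:
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import org.jboss.netty.channel.Channel;
import org.jboss.netty.channel.socket.nio.NioChannelUtil;

public class WriteQueueMonitor {
    // Logs the channel's pending write-buffer size once per second.
    public static void start(final Channel channel) {
        Executors.newSingleThreadScheduledExecutor().scheduleAtFixedRate(new Runnable() {
            public void run() {
                System.out.println("pending write bytes: "
                        + NioChannelUtil.getWriteTaskQueueCount(channel)
                        + ", writable=" + channel.isWritable());
            }
        }, 1, 1, TimeUnit.SECONDS);
    }
}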
I am creating a program with a server A and multiple clients B, C, D.
B C & D will all message the client with a number X, and I would like to know how it is possible for the server to message ALL clients simultaneously with the latest value for X?
As it stands, it will update only the client who has last passed number X.
Here is the code I have for run()
public void run() {
    String number;
    try {
        do {
            // Accept a message from the client on
            // the socket's input stream...
            number = in.readLine();
            // Echo the message back to the client on
            // the socket's output stream...
            out.println("Number received: " + number);
        } while (number != null);
    } catch (IOException e) {
        e.printStackTrace();
    }
}
Google up JMS Publish and Subscribe.
Basically:
The server publishes to a topic and the clients subscribe to a topic.
The best way to notify clients about something is to use JMX. If you're not supposed to use that technology, then you should keep a list of clients somewhere in your code (say in a static field), iterate over that list, and send the received number.
I'm not sure what you're trying to do... but you could try broadcasting a message using socket programming. Check this out:
You can add all the sockets to a collection. Send the same message to every socket in the collection. Remove sockets from the collection when they are closed.
e.g.
final List<Socket> sockets = new CopyOnWriteArrayList<Socket>();
// when you have a new socket
sockets.add(socket);
// when you have a dead socket.
sockets.remove(socket);
// to send the same message to multiple sockets.
public static void sendToAll(byte[] bytes) {
for(Socket s: sockets)
try {
s.getOutputStream().write(bytes);
} catch (IOException ioe) {
// handle exception, close the socket.
sockets.remove(s);
}
}
I agree the real solution is JMS, but if you want to "roll your own", a simple solution I would suggest is making your own simplified version using the same idea as JMS. Create a class that will receive events from your clients. Create an interface that your clients can implement so they can add themselves as listeners to this new class. Some simple code:
class MyEventPublisher {
    private final Collection<EventListener> listeners = new CopyOnWriteArrayList<>();
    private int number;

    public void addListener(EventListener listener) {
        listeners.add(listener);
    }

    public void setNumber(int newNumber) {
        int oldNumber = this.number;
        this.number = newNumber;
        for (EventListener listener : listeners) {
            listener.numberChanged(newNumber, oldNumber);
        }
    }
}

interface EventListener {
    void numberChanged(int newNumber, int oldNumber);
}

class MyClientSocket implements EventListener {
    MyEventPublisher publisher;

    public MyClientSocket(MyEventPublisher publisher) {
        this.publisher = publisher;
        publisher.addListener(this);
    }

    public void receiveNumberFromSocket() {
        int numberFromSocket = readNumber();
        publisher.setNumber(numberFromSocket);
    }

    public void numberChanged(int newNumber, int oldNumber) {
        // someone else changed the number
        // do something interesting with it
    }

    private int readNumber() {
        // placeholder for reading the number from this client's socket
        return 0;
    }
}
You are looking for a multicast protocol, based on your description.
So I guess you'll be better off looking at these:
Multicast (JDK 6)
Multicast (JDK 7)
Earlier versions, starting from JDK 1.4.2, also include multicast, but you'll be better off if you use JDK 6 or greater ;)
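For illustration, here is a minimal JDK multicast sketch; the group address 230.0.0.1 and port 4446 are arbitrary. The server publishes the latest X once and every client that has joined the group receives it:
import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.MulticastSocket;

public class MulticastSketch {
    static final String GROUP = "230.0.0.1"; // arbitrary multicast group
    static final int PORT = 4446;            // arbitrary port

    // Server side: one send reaches every client that joined the group.
    static void publish(int x) throws Exception {
        byte[] data = Integer.toString(x).getBytes();
        try (MulticastSocket socket = new MulticastSocket()) {
            socket.send(new DatagramPacket(data, data.length, InetAddress.getByName(GROUP), PORT));
        }
    }

    // Client side (B, C and D): join the group and block until the next value arrives.
    static int receive() throws Exception {
        try (MulticastSocket socket = new MulticastSocket(PORT)) {
            socket.joinGroup(InetAddress.getByName(GROUP));
            byte[] buf = new byte[64];
            DatagramPacket packet = new DatagramPacket(buf, buf.length);
            socket.receive(packet);
            return Integer.parseInt(new String(packet.getData(), 0, packet.getLength()).trim());
        }
    }
}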