JMeter TCP Sampler not looping - Java

I'd like to use JMeter's TCP Sampler to stress test my system (I am using the default TCP Sampler and the JMeter GUI), but the Thread Group loop counter doesn't seem to be working. When I run my test, only one message gets sent. It almost feels like JMeter is waiting for a response from the server before it sends the remaining messages, except that in my case the server should not send any kind of acknowledgement.
I am sending a text message using the default TCP sampler class, and I have added response assertions to the sampler telling it to ignore the response (which I had hoped would solve my problem). I am also sure that my server is not the issue and that the message is formatted the way my server expects (with a blank line to signify the end of a message).
My very simple thread group
My very simple TCP Sampler
The result after starting my test
As you can see from my third image, when I start the test it sends one message and just hangs. My server only receives the first message. As I'm new to JMeter, any advice would be helpful. I assumed I could accomplish what I need with the default TCP Sampler; if that's not the case, can anyone point me in the right direction?

Looking at your screenshot, the first request is still running:
Either your server needs to close the connection after receiving the message, or you need to come up with your own AbstractTCPClient implementation which acts according to your application's setup.
You can also try the HTTP Raw Request sampler, which has a simpler configuration and might fit your use case out of the box. The HTTP Raw Request sampler can be installed using the JMeter Plugins Manager.
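For the first option, here is a rough, hypothetical sketch (plain Java, made-up port, blank-line framing taken from the question) of a test server that closes the connection after each message; with the connection closed, the stock TCP Sampler's read completes and the Thread Group can loop:
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;

// Hypothetical stand-in for the real server: read one blank-line-terminated message,
// then close the connection so the sampler's read completes.
public class ClosingTestServer {
    public static void main(String[] args) throws IOException {
        try (ServerSocket listener = new ServerSocket(9999)) {   // port is an assumption
            while (true) {
                try (Socket client = listener.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(client.getInputStream()))) {
                    StringBuilder message = new StringBuilder();
                    String line;
                    while ((line = in.readLine()) != null && !line.isEmpty()) {
                        message.append(line).append('\n');
                    }
                    System.out.println("received: " + message);
                } // closing the socket here is what unblocks the sampler
            }
        }
    }
}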

Yep, the JMeter TCP Sampler doesn't keep sending data when the connection never sends an acknowledgement back. Just use something like this with a JSR223 Sampler:
def sock = new Socket()
def host = "localhost" // change it to your host
def port = 9999        // change it to your port
sock.connect(new InetSocketAddress(host, port))
def out = new OutputStreamWriter(sock.getOutputStream())
try {
    while (true) {
        out.write("Your test data here") // the payload your server expects
        out.flush()
        // note: isConnected() only reflects the local socket state; a remote close
        // will typically surface as an IOException from write()/flush()
        if (sock.isConnected()) {
            SampleResult.setSuccessful(true)
        } else {
            SampleResult.setSuccessful(false)
            break
        }
    }
}
finally {
    sock.close()
}

Related

Java WebSocket message limit

I'm trying to create communication between a simple Java app (using the java.net.http.WebSocket class) and a remote google-chrome instance run using google-chrome --remote-debugging-port=9222 --user-data-dir=.
Sending and receiving small messages works as expected, but there is an issue with bigger messages (around 16 KB and up).
Here is part of java source:
var uri = new URI("ws://127.0.0.1:9222/devtools/page/C0D7B4DBC53FB39F7A4BE51DA79E96BB");
// create the WebSocket client
WebSocket ws = HttpClient
        .newHttpClient()
        .newWebSocketBuilder()
        .connectTimeout(Duration.ofSeconds(30))
        .buildAsync(uri, simpleListener)
        .join();
// session id attached to the Chrome tab
String sessionId = "...";
// send message
String message = "{\"id\":1,\"method\":\"Runtime.evaluate\",\"params\":{\"expression\":\"document.body.style.backgroundColor = 'blue';\",\"returnByValue\":true,\"awaitPromise\":true,\"userGesture\":true},\"sessionId\":\"" + sessionId + "\"}";
// this works
ws.sendText(message, true);
// generate a big string containing over 18k chars for testing purposes
String bigMessage = "{\"id\":2,\"method\":\"Runtime.evaluate\",\"params\":{\"expression\":\"[" + ("1,".repeat(9000)) + "1]\",\"returnByValue\":true,\"awaitPromise\":true,\"userGesture\":true},\"sessionId\":\"" + sessionId + "\"}";
// this doesn't work
ws.sendText(bigMessage, true);
Here is the stack trace:
java.net.SocketException: Connection reset
at java.base/sun.nio.ch.SocketChannelImpl.throwConnectionReset(SocketChannelImpl.java:345)
at java.base/sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:376)
at java.net.http/jdk.internal.net.http.SocketTube.readAvailable(SocketTube.java:1153)
at java.net.http/jdk.internal.net.http.SocketTube$InternalReadPublisher$InternalReadSubscription.read(SocketTube.java:821)
at java.net.http/jdk.internal.net.http.SocketTube$SocketFlowTask.run(SocketTube.java:175)
at java.net.http/jdk.internal.net.http.common.SequentialScheduler$SchedulableTask.run(SequentialScheduler.java:198)
...
I've tried basically the same thing using puppeteer (a Node.js library) and it works as expected.
I can't find any resource online about this issue.
Is there anything I'm missing in my example?
Here is url to simple example:
https://github.com/zeljic/websocket-devtools-protocol
Based on what I've seen so far, my best guess would be that Chrome Dev Tools does not process fragmented Text messages on that exposed webSocketDebuggerUrl endpoint. Whether Chrome Dev Tools can be configured to do so or not is another question. I must note, however, that RFC 6455 (The WebSocket Protocol) mandates it:
Clients and servers MUST support receiving both fragmented and unfragmented messages.
There's one workaround I can see here. Keep in mind that it is unsupported and may change unexpectedly in the future. When running your client, specify the following system property on the command line: -Djdk.httpclient.websocket.intermediateBufferSize=1048576 (or pick any other suitable size). As long as you keep passing true as the final boolean argument to the send* methods, java.net.http.WebSocket will send each message unfragmented, in a single WebSocket frame.
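To illustrate the workaround, a minimal, self-contained sketch (the page id is a placeholder, the sessionId handling from the question is omitted, and setting the property programmatically is an assumption; the -D command-line flag above is the surer route, since the property must be in place before the JDK's WebSocket implementation reads it):
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.WebSocket;
import java.time.Duration;
import java.util.concurrent.CompletionStage;

public class UnfragmentedSendExample {
    public static void main(String[] args) {
        // assumption: this must happen before the WebSocket implementation is first used
        System.setProperty("jdk.httpclient.websocket.intermediateBufferSize", "1048576");

        // placeholder endpoint; substitute the webSocketDebuggerUrl of your Chrome tab
        URI uri = URI.create("ws://127.0.0.1:9222/devtools/page/YOUR_PAGE_ID");

        WebSocket ws = HttpClient.newHttpClient()
                .newWebSocketBuilder()
                .connectTimeout(Duration.ofSeconds(30))
                .buildAsync(uri, new WebSocket.Listener() {
                    @Override
                    public CompletionStage<?> onText(WebSocket webSocket, CharSequence data, boolean last) {
                        System.out.println("received: " + data);
                        return WebSocket.Listener.super.onText(webSocket, data, last);
                    }
                })
                .join();

        // a payload well over 16 KB; with the larger intermediate buffer it should
        // leave the client as a single, unfragmented text frame
        String bigMessage = "{\"id\":2,\"method\":\"Runtime.evaluate\",\"params\":{\"expression\":\"["
                + "1,".repeat(9000) + "1]\",\"returnByValue\":true}}";
        ws.sendText(bigMessage, true).join();
    }
}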
Well, I had a similar issue when sending a big string over WebSockets in Java with a Tomcat server.
There can be a payload limit for sending or receiving on the WebSocket server.
Check out org.apache.tomcat.websocket.textBufferSize in Tomcat's documentation. By default it is 8192 bytes; try increasing the size.
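For example, with the standard JSR-356 API (which Tomcat implements) the limit can also be raised per session; a minimal sketch, with a made-up endpoint path and buffer size:
import javax.websocket.OnMessage;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

@ServerEndpoint("/big-messages") // hypothetical path, for illustration only
public class BigMessageEndpoint {

    @OnOpen
    public void onOpen(Session session) {
        // default text buffer is 8192 bytes; raise it so larger text messages are accepted
        session.setMaxTextMessageBufferSize(1024 * 1024);
    }

    @OnMessage
    public void onMessage(Session session, String message) {
        System.out.println("received " + message.length() + " chars");
    }
}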

Sending messages on same socket connection while a file is being sent

My app can transfer files and messages between server and client. The server is multithreaded and clients simply connect to it. While a file is being transferred, if the sender sends a message, it gets consumed as part of the file's bytes.
I don't want to open more ports.
Can I establish a new connection to the server for file transfer? Or
should I open a separate port for files?
I don't want to block communication while a file is being transferred.
The question was marked as a duplicate but it's not; I am trying to send messages and files simultaneously, not one by one. I can already receive files one by one. Read again.
Also, as the server is multithreaded, I cannot call serverSocket.accept() again to receive files on a new connection, because the main thread listening for incoming connections will try to handle it instead. Is there a way around this?
Seems to me like trying to multiplex files and messages onto the same socket stream is an XY problem.
I am not an expert on this, but it sounds like you should do some reading on "ports vs sockets". My understanding is that ip:port is the address of the listening service. Once a client connects, the server will open a socket to actually do the communication.
The trick is that every time a client connects, spawn a new thread (on a new socket) to handle the request. This instantly frees up the main thread to go back to listening for new connections. Your file transfer and your messages can come into the same port, but each new request will get its own socket and its own server thread --> no collision!
See this question for a java implementation:
Multithreading Socket communication Client/Server
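A minimal sketch of that pattern (plain blocking sockets, arbitrary port, with a hypothetical one-line handshake so the server knows what each connection carries):
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;

public class ThreadPerConnectionServer {
    public static void main(String[] args) throws IOException {
        try (ServerSocket listener = new ServerSocket(5000)) {    // port is arbitrary
            while (true) {
                Socket socket = listener.accept();                // main thread only accepts
                new Thread(() -> handle(socket)).start();         // each connection gets its own thread
            }
        }
    }

    private static void handle(Socket socket) {
        try (socket;
             BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()))) {
            // hypothetical handshake line: the client states what this connection is for
            String purpose = in.readLine();                       // e.g. "MESSAGES" or "FILE"
            System.out.println("new connection for: " + purpose);
            // from here, hand off to the message loop or the file-receiving code
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}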
You could use a scheme where every line of a file starts with a tag like (file:linenum); on the other side, tagged lines get written into a file. To send plain text you could do the same thing but with a tag like (text).
Server:
Scanner in = new Scanner(s.getInputStream());
while (true) {
    String message = in.nextLine();
    // lines tagged "(file:<linenum>)" carry file content; everything else is a plain text message
    if (message.length() > 14 && message.substring(0, 6).equalsIgnoreCase("(file:")) {
        int closing = message.indexOf(')');
        int line = Integer.parseInt(message.substring(6, closing)); // line number from the tag
        // (the line number could be passed to saveToFile to place the content at the right line)
        saveToFile(message.substring(closing + 1));                 // the rest of the line is the content
    } else {
        System.out.println(message);
    }
}
I think that code works but I haven't checked it so it might need some slight modifications
You could introduce a handshake protocol where clients can state who they are (probably happening already) and what they want from the given connection. The first connection they make could be about control, and perhaps the messages, and remain in use all the time. File transfer could happen via secondary connections, which may come and go during a session. Having several parallel connections between a client and a server is completely normal; that is what @MikeOunsworth was explaining too.
A shortcut you can take is issuing short-lived, one-time tokens which clients can present when opening the secondary connection; the server then immediately knows which file it should start sending. Note that this approach can easily raise security issues (if the token encodes actual request data) and/or scalability issues (if the token is completely random and has to be looked up in some table).
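A rough sketch of the one-time-token bookkeeping on the server side (class and method names are made up for illustration):
import java.nio.file.Path;
import java.security.SecureRandom;
import java.util.Base64;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Minimal sketch of the one-time-token idea: the control connection asks for a token,
// the secondary (file) connection presents it, and the server looks up what to send.
public class FileTransferTokens {
    private final ConcurrentMap<String, Path> pending = new ConcurrentHashMap<>();
    private final SecureRandom random = new SecureRandom();

    /** Called on the control connection when the client requests a file. */
    public String issueToken(Path requestedFile) {
        byte[] bytes = new byte[16];
        random.nextBytes(bytes);
        String token = Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
        pending.put(token, requestedFile);
        return token; // the client presents this when opening the secondary connection
    }

    /** Called on the secondary connection; returns null if the token is unknown or already used. */
    public Path redeemToken(String token) {
        return pending.remove(token); // remove() is what makes the token one-time
    }
}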

Reuse Channel for HTTP requests

I want to reuse a channel for multiple HTTP requests. I'm using Java + Netty for the server, but clients could be written in C# or Java.
For the C# client I'm using HttpWebRequest with KeepAlive = true, and I don't close the channel after the response arrives. It works perfectly.
But when I tried the same for Java <--> Java communication I had some problems. I'm handling the responses from the server roughly as in this sample and this client part.
If, in the if (msg instanceof LastHttpContent) { section, I just do ctx.close(), I won't be able to reuse the channel again. What should I do here to be able to reuse it?
I tried:
ctx.write(new DefaultFullHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.CONTINUE));
or
ctx.writeAndFlush(Unpooled.EMPTY_BUFFER);
or tried doing nothing... but when I try to reuse the channel, I have a problem in this handler. The first request was handled fine, but the second gives me this error:
channelRead0: DefaultHttpResponse(decodeResult: failure(java.lang.NullPointerException), version: HTTP/1.1)
The if (msg instanceof HttpResponse) section works fine (I mean the headers are read), but it throws an exception somewhere after that.
And:
headers().set(HttpHeaders.Names.CONNECTION, HttpHeaders.Values.KEEP_ALIVE);
doesn't help either. To make it clear: the 1st request/response is fine. The second request on the same channel is sent fine, but there is a problem decoding the response.
I checked the logger. The 1st and 2nd responses are equal, so I don't understand why it gets a NullPointerException when decoding.
p.s. netty 4.0.26
You are entirely at the mercy of the clients. If they implement connection pooling, your connection will be reused. If not, not. Nothing you can do about it at the server end except observe and implement the Connection: close header if sent.
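For completeness, a sketch of what the server side could look like on Netty 4.0.x, assuming an HttpObjectAggregator in the pipeline so full requests arrive; the explicit Content-Length matters because on a reused connection the client has no other way to tell where one response ends:
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.handler.codec.http.DefaultFullHttpResponse;
import io.netty.handler.codec.http.FullHttpRequest;
import io.netty.handler.codec.http.FullHttpResponse;
import io.netty.handler.codec.http.HttpHeaders;
import io.netty.handler.codec.http.HttpResponseStatus;
import io.netty.handler.codec.http.HttpVersion;
import io.netty.util.CharsetUtil;

// Sketch for Netty 4.0.x: honour the client's Connection header instead of always closing.
public class KeepAliveServerHandler extends SimpleChannelInboundHandler<FullHttpRequest> {
    @Override
    protected void channelRead0(ChannelHandlerContext ctx, FullHttpRequest request) {
        FullHttpResponse response = new DefaultFullHttpResponse(
                HttpVersion.HTTP_1_1, HttpResponseStatus.OK,
                Unpooled.copiedBuffer("ok", CharsetUtil.UTF_8));
        // Content-Length (or chunked encoding) is required so the client can delimit
        // responses on a persistent connection.
        HttpHeaders.setContentLength(response, response.content().readableBytes());

        if (HttpHeaders.isKeepAlive(request)) {
            HttpHeaders.setKeepAlive(response, true);
            ctx.writeAndFlush(response);                              // keep the channel open for reuse
        } else {
            ctx.writeAndFlush(response).addListener(ChannelFutureListener.CLOSE);
        }
    }
}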

Netty: how do I reduce delay between consecutive messages from the server?

I'm on the dev team for a socket server which uses Netty. When a client sends a request, and the server sends a single response, the round trip time is quite fast. (GOOD) We recently noticed that if the request from the client triggers two messages from the server, even though the server writes both messages to the client at about the same time, there is a delay of more than 200ms between the first and second message arriving on the remote client. When using a local client the two messages arrive at the same time. If the remote client sends another request before the second message from the server arrives, that second message is sent immediately, but then the two messages from the new request are both sent with the delay of over 200ms.
Since it was noticed while using Netty 3.3.1, I tried upgrading to Netty 3.6.5 but I still see the same behavior. We are using NIO, not OIO, because we need to be able to support large numbers of concurrent clients.
Is there a setting that we need to configure that will reduce that 200+ ms delay?
editing to add a code snippet. I hope these are the most relevant parts.
@Override
public boolean openListener(final Protocol protocol,
        InetSocketAddress inetSocketAddress) throws Exception {
    ChannelFactory factory = new NioServerSocketChannelFactory(
            Executors.newCachedThreadPool(),
            Executors.newCachedThreadPool(),
            threadingConfiguration.getProcessorThreadCount());
    ServerBootstrap bootstrap = new ServerBootstrap(factory);
    final ChannelGroup channelGroup = new DefaultChannelGroup();
    bootstrap.setPipelineFactory(new ChannelPipelineFactory() {
        .... lots of pipeline setup snipped ......
    });
    Channel channel = bootstrap.bind(inetSocketAddress);
    channelGroup.add(channel);
    channelGroups.add(channelGroup);
    bootstraps.add(bootstrap);
    return true;
}
The writer factory uses ChannelBuffers.dynamicBuffer(defaultMessageSize) for the buffer, and when we write a message it's Channels.write(channel, msg).
What else would be useful? The developer who migrated the code to Netty is not currently available, and I'm trying to fill in.
200 ms strikes me as the magic number of Nagle's algorithm. Try setting tcpNoDelay to true on both sides.
This is how you set the option for the server side.
serverBootstrap.setOption("child.tcpNoDelay", true);
This is for the client side.
clientBootStrap.setOption("tcpNoDelay", true);
Further reading: http://www.stuartcheshire.org/papers/NagleDelayedAck/

How to set the timeout for a MQTT client?

I'm using the IA92 Java implementation for MQTT, which allows me to connect to an MQTT broker. In order to establish the connection, I'm doing something like this:
// Create connection spec
String mqttConnSpec = "tcp://the_server#the_port";
// Create the client and connect
mqttClient = MqttClient.createMqttClient(mqttConnSpec, null);
mqttClient.connect("the_id", true, 666);
The problem is that sometimes the server takes too long to send a response, and it throws a timeout exception:
org.apache.harmony.luni.platform.OSNetworkSystem.connectStreamWithTimeoutSocket(OSNetworkSystem.java:130)
at org.apache.harmony.luni.net.PlainSocketImpl.connect(PlainSocketImpl.java:246)
at org.apache.harmony.luni.net.PlainSocketImpl.connect(PlainSocketImpl.java:533)
at java.net.Socket.connect(Socket.java:1055)
at com.ibm.mqtt.j2se.MqttJava14NetSocket.<init>((null):-1)
at com.ibm.mqtt.j2se.MqttJavaNetSocket.setConnection((null):-1)
at com.ibm.mqtt.Mqtt.tcpipConnect((null):-1)
at com.ibm.mqtt.MqttBaseClient.doConnect((null):-1)
at com.ibm.mqtt.MqttBaseClient.connect((null):-1)
at com.ibm.mqtt.MqttClient.connect((null):-1)
at com.ibm.mqtt.MqttClient.connect((null):-1)
What I need to do is setting a timeout manually, instead of letting the mqtt client decide that. The documentation says: There are also methods for setting attributes of the MQ Telemetry Transport connection, such as timeouts and retries.
But, honestly, I haven't found anything about it. I have taken a look at the whole javadoc reference and there's no evidence of timeout configuration. I can't see the source code since it's not open source.
So how can I set the timeout for the Mqtt connection?
If you are unsure, you can look at MqttConnectOptions for the details.
String userName="Ohelig";
String password="Pojke";
MqttClient client = new MqttClient("tcp://192.168.1.4:1883","Sending");
MqttConnectOptions authen = new MqttConnectOptions();
authen.setUserName(userName);
authen.setPassword(password.toCharArray());
authen.setKeepAliveInterval(30);
authen.setConnectionTimeout(300);
client.connect(authen);
I don't know anything about ia92, but I'd imagine that the 666 in the connect() call is what you're trying to set the timeout to?
The timeout the documentation is referring to is probably the keepalive timeout. This is the maximum number of seconds (chosen by the client) that can elapse without communication between the server and client. I think this is what you're most interested in.
Retries, on the other hand, most likely refer to the retrying of messages that seem to have gone astray when sending with QoS > 0. That is handled by the client library code rather than the broker, and it only comes into play after you've connected, so I very much doubt it's your problem.
To be sure that the keepalive timeout is being set correctly, I'd try pointing your client at a modified mosquitto broker. You can modify mqtt3_handle_connect() in src/read_handle_server.c to print out the keepalive value when you connect. This will ensure it's doing what you think, but won't help with the actual problem I'm afraid!
What broker do you use? Really Small Message Broker V1.1 Alpha, Mosquitto, the broker that comes with IBM WebSphere? You need to set this timeout value in your server configuration, because that is how the system works: you set a keep alive value in your broker, the client sends a ping before that interval expires so that the broker does not close the client-server connection, and the process restarts. Actually, even if that interval expires, the server will still not close the connection until the 'grace period' ends. See http://public.dhe.ibm.com/software/dw/webservices/ws-mqtt/mqtt-v3r1.html#connect
