I'm trying to set up communication between a simple Java app (using the java.net.http.WebSocket class) and a remote Chrome instance started with google-chrome --remote-debugging-port=9222 --user-data-dir=.
Sending and receiving small messages works as expected, but there is an issue with bigger messages (around 16 KB and up).
Here is part of the Java source:
var uri = new URI("ws://127.0.0.1:9222/devtools/page/C0D7B4DBC53FB39F7A4BE51DA79E96BB");
// create the WebSocket client
WebSocket ws = HttpClient
.newHttpClient()
.newWebSocketBuilder()
.connectTimeout(Duration.ofSeconds(30))
.buildAsync(uri, simpleListener)
.join();
// session Id attached to chrome tab
String sessionId = "...";
// send message
String message = "{\"id\":1,\"method\":\"Runtime.evaluate\",\"params\":{\"expression\":\"document.body.style.backgroundColor = 'blue';\",\"returnByValue\":true,\"awaitPromise\":true,\"userGesture\":true},\"sessionId\":\"" + sessionId + "\"}";
// this works
ws.sendText(message, true);
// generate a big string containing over 18k chars for testing purposes
String bigMessage = "{\"id\":2,\"method\":\"Runtime.evaluate\",\"params\":{\"expression\":\"[" + ("1,".repeat(9000)) + "1]\",\"returnByValue\":true,\"awaitPromise\":true,\"userGesture\":true},\"sessionId\":\"" + sessionId + "\"}";
// this doesn't work
ws.sendText(bigMessage, true);
Here is the stack trace:
java.net.SocketException: Connection reset
at java.base/sun.nio.ch.SocketChannelImpl.throwConnectionReset(SocketChannelImpl.java:345)
at java.base/sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:376)
at java.net.http/jdk.internal.net.http.SocketTube.readAvailable(SocketTube.java:1153)
at java.net.http/jdk.internal.net.http.SocketTube$InternalReadPublisher$InternalReadSubscription.read(SocketTube.java:821)
at java.net.http/jdk.internal.net.http.SocketTube$SocketFlowTask.run(SocketTube.java:175)
at java.net.http/jdk.internal.net.http.common.SequentialScheduler$SchedulableTask.run(SequentialScheduler.java:198)
...
I've tried basically the same thing using Puppeteer (a Node.js library) and it works as expected.
I can't find any resource online about this issue.
Is there anything I'm missing in my example?
Here is the URL of a simple example:
https://github.com/zeljic/websocket-devtools-protocol
Based on what I've seen so far, my best guess is that Chrome DevTools does not process fragmented Text messages on that exposed webSocketDebuggerUrl endpoint. Whether Chrome DevTools can be configured to do so is another question. I must note, however, that RFC 6455 (The WebSocket Protocol) mandates it:
Clients and servers MUST support receiving both fragmented and unfragmented messages.
There's one workaround I can see here. Keep in mind that this is unsupported and may change unexpectedly in the future. When running your client, specify the following system property on the command line: -Djdk.httpclient.websocket.intermediateBufferSize=1048576 (or pick any other suitable size). As long as you keep sending your messages with true passed as the last boolean argument to the send* methods, java.net.http.WebSocket will send each message unfragmented, in a single WebSocket frame.
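For example, if the client is packaged as a runnable JAR (the JAR name here is just a placeholder):
java -Djdk.httpclient.websocket.intermediateBufferSize=1048576 -jar websocket-client.jar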
Well, I had a similar issue when sending a big string over WebSockets in Java with a Tomcat server.
There can be a payload limit for sending or receiving on the WebSocket server.
Check out org.apache.tomcat.websocket.textBufferSize in Tomcat's documentation. By default it is 8192 bytes; try increasing the size.
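If you control the server-side endpoint, you can also raise the per-session limit through the standard JSR-356 API rather than a container-wide property. A minimal sketch, assuming a javax.websocket annotated endpoint (the endpoint path and class name are made up):
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

@ServerEndpoint("/big-messages")
public class BigMessageEndpoint {

    @OnOpen
    public void onOpen(Session session) {
        // raise the text buffer from the 8192-byte default to 1 MiB for this session
        session.setMaxTextMessageBufferSize(1024 * 1024);
    }
}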
Related
I am working with a commercial application which is throwing a SocketException with the message,
An existing connection was forcibly closed by the remote host
This happens with a socket connection between client and server. The connection is alive and well, and heaps of data is being transferred, but it then becomes disconnected out of nowhere.
Has anybody seen this before? What could the causes be? I can kind of guess a few causes, but is there any way to add more instrumentation to the code to work out what the cause could be?
Any comments / ideas are welcome.
... The latest ...
I have some logging from some .NET tracing,
System.Net.Sockets Verbose: 0 : [8188] Socket#30180123::Send() DateTime=2010-04-07T20:49:48.6317500Z
System.Net.Sockets Error: 0 : [8188] Exception in the Socket#30180123::Send - An existing connection was forcibly closed by the remote host DateTime=2010-04-07T20:49:48.6317500Z
System.Net.Sockets Verbose: 0 : [8188] Exiting Socket#30180123::Send() -> 0#0
Based on other parts of the logging I have seen the fact that it says 0#0 means a packet of 0 bytes length is being sent. But what does that really mean?
One of two possibilities is occurring, and I am not sure which,
The connection is being closed, but data is then being written to the socket, thus creating the exception above. The 0#0 simply means that nothing was sent because the socket was already closed.
The connection is still open, and a packet of zero bytes is being sent (i.e. the code has a bug) and the 0#0 means that a packet of zero bytes is trying to be sent.
What do you reckon? It might be inconclusive I guess, but perhaps someone else has seen this kind of thing?
This generally means that the remote side closed the connection (usually by sending a TCP/IP RST packet). If you're working with a third-party application, the likely causes are:
You are sending malformed data to the application (which could include sending an HTTPS request to an HTTP server)
The network link between the client and server is going down for some reason
You have triggered a bug in the third-party application that caused it to crash
The third-party application has exhausted system resources
It's likely that the first case is what's happening.
You can fire up Wireshark to see exactly what is happening on the wire to narrow down the problem.
Without more specific information, it's unlikely that anyone here can really help you much.
Using TLS 1.2 solved this error.
You can force your application to use TLS 1.2 with this (make sure to execute it before calling your service):
ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12
Another solution:
Enable strong cryptography on your local machine or server in order to use TLS 1.2, because by default it is disabled and only TLS 1.0 is used.
To enable strong cryptography, execute these commands in PowerShell with admin privileges:
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Wow6432Node\Microsoft\.NetFramework\v4.0.30319' -Name 'SchUseStrongCrypto' -Value '1' -Type DWord
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\.NetFramework\v4.0.30319' -Name 'SchUseStrongCrypto' -Value '1' -Type DWord
You need to reboot your computer for these changes to take effect.
This is not a bug in your code. It is coming from .NET's Socket implementation. If you use the overloaded implementation of EndReceive as below, you will not get this exception.
SocketError errorCode;
int nBytesRec = socket.EndReceive(ar, out errorCode);
if (errorCode != SocketError.Success)
{
nBytesRec = 0;
}
Had the same bug. It actually only worked when the traffic was sent through a proxy (Fiddler in my case). I updated the .NET Framework from 4.5.2 to >= 4.6 and now everything works fine. The actual request was:
new WebClient().DownloadData("URL");
The exception was:
SocketException: An existing connection was forcibly closed by the
remote host
Simple solution for this common annoying issue:
Just go to your ".context.cs" file (located under ".context.tt", which is located under your "*.edmx" file).
Then, add this line to your constructor:
public DBEntities()
: base("name=DBEntities")
{
this.Configuration.ProxyCreationEnabled = false; // ADD THIS LINE!
}
I got this exception because of a circular reference in an entity. The entity looked like this:
public class Catalog
{
public int Id { get; set; }
public int ParentId { get; set; }
public Catalog Parent { get; set; }
public ICollection<Catalog> ChildCatalogs { get; set; }
}
I added [IgnoreDataMemberAttribute] to the Parent property. And that solved the problem.
If Running In A .Net 4.5.2 Service
For me the issue was compounded because the call was running in a .NET 4.5.2 service. I followed #willmaz's suggestion but got a new error.
Running the service with logging turned on, I saw that the handshake with the target site would initiate OK (and send the bearer token), but on the following step to process the POST call it would seem to drop the auth token, and the site would reply with Unauthorized.
Solution
It turned out that the service pool credentials did not have rights to change TLS (?), and when I put my local admin account into the pool, it all worked.
I had the same issue and managed to resolve it eventually. In my case, the port that the client sends the request to did not have an SSL cert bound to it. So I fixed the issue by binding an SSL cert to the port on the server side. Once that was done, this exception went away.
For anyone getting this exception while reading data from the stream, this may help. I was getting this exception when reading the HttpResponseMessage in a loop like this:
using (var remoteStream = await response.Content.ReadAsStreamAsync())
using (var content = File.Create(DownloadPath))
{
var buffer = new byte[1024];
int read;
while ((read = await remoteStream.ReadAsync(buffer, 0, buffer.Length)) != 0)
{
await content.WriteAsync(buffer, 0, read);
await content.FlushAsync();
}
}
After some time I found out the culprit was the buffer size, which was too small and didn't play well with my weak Azure instance. What helped was to change the code to:
using (Stream remoteStream = await response.Content.ReadAsStreamAsync())
using (FileStream content = File.Create(DownloadPath))
{
await remoteStream.CopyToAsync(content);
}
The CopyTo()/CopyToAsync() methods have a default buffer size of 81920 bytes. The bigger buffer sped up the process and the errors stopped immediately, most likely because the overall download speeds increased. But why would download speed matter in preventing this error?
It is possible that you get disconnected from the server because the download speeds drop below minimum threshold the server is configured to allow. For example, in case the application you are downloading the file from is hosted on IIS, it can be a problem with http.sys configuration:
"Http.sys is the http protocol stack that IIS uses to perform http communication with clients. It has a timer called MinBytesPerSecond that is responsible for killing a connection if its transfer rate drops below some kb/sec threshold. By default, that threshold is set to 240 kb/sec."
The issue is described in this old blog post from the TFS development team and concerns IIS specifically, but it may point you in the right direction. It also mentions an old bug related to this http.sys attribute: link
In case you are using Azure app services and increasing the buffer size does not eliminate the problem, try to scale up your machine as well. You will be allocated more resources including connection bandwidth.
I got the same issue while using .NET Framework 4.5. However, when I updated the .NET version to 4.7.2, the connection issue was resolved. Maybe this is due to a SecurityProtocol support issue.
For me, it was because the app server I was trying to send email from was not added to our company's SMTP server's allowed list.
I just had to put in SMTP access request for that app server.
This is how it was added by the infrastructure team (I don't know how to do these steps myself but this is what they said they did):
1. Log into active L.B.
2. Select: Local Traffic > iRules > Data Group List
3. Select the appropriate Data Group
4. Enter the app server's IP address
5. Select: Add
6. Select: Update
7. Sync config changes
Yet another possibility for this error to occur is if you tried to connect to a third-party server with invalid credentials too many times and a system like Fail2ban is blocking your IP address.
I was trying to connect to an MQTT broker using the Go client. The broker address was given as address + port with a plain scheme, i.e. tcp://address:port.
Example: ❌
mqtt://test.mosquitto.org
which indicates that you wish to establish an unencrypted connection.
To request MQTT over TLS use one of ssl, tls, mqtts, mqtt+ssl or tcps.
Example: ✅
mqtts://test.mosquitto.org
In my case, enabling the IIS server and then restarting fixed it; check again after that.
We are using a Spring Boot service. Our RestTemplate code looks like this:
@Bean
public RestTemplate restTemplate(final RestTemplateBuilder builder) {
return builder.requestFactory(() -> {
final ConnectionPool okHttpConnectionPool =
new ConnectionPool(50, 30, TimeUnit.SECONDS);
final OkHttpClient okHttpClient =
new OkHttpClient.Builder().connectionPool(okHttpConnectionPool)
// .connectTimeout(30, TimeUnit.SECONDS)
.retryOnConnectionFailure(false).build();
return new OkHttp3ClientHttpRequestFactory(okHttpClient);
}).build();
}
All our calls were failing right at the read timeout set for the RestTemplate. We increased the timeout, and our issue was resolved.
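For reference, a minimal sketch of where a read timeout would be configured on the OkHttp client from the bean above (the 60-second value and the factory class name are just illustrative):
import java.util.concurrent.TimeUnit;

import okhttp3.ConnectionPool;
import okhttp3.OkHttpClient;

public class OkHttpClientFactory {

    public static OkHttpClient create() {
        ConnectionPool pool = new ConnectionPool(50, 30, TimeUnit.SECONDS);
        return new OkHttpClient.Builder()
                .connectionPool(pool)
                .readTimeout(60, TimeUnit.SECONDS)     // raise this if calls fail right at the read timeout
                .retryOnConnectionFailure(false)
                .build();
    }
}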
This error occurred in my application with the CIP protocol whenever I didn't send or receive data within 10 seconds.
This was caused by the use of the forward-open method. You can avoid this by working with another method, or by using an update rate of less than 10 seconds that maintains your forward-open connection.
I'd like to use JMeter's TCP Sampler to stress test my system (I am using the default TCP Sampler and the JMeter GUI), but the thread group loop counter doesn't seem to be working. When I run my test threads, only one message gets sent. It almost feels like JMeter is waiting for a response from the server before it begins sending the other messages, except in my case the server should not send any type of acknowledgement.
I am sending a text message using the default TCP sampler class, and I have added response assertions to the sampler telling it to ignore the response (which I had hoped would solve my problem). I am also sure that my server is not the issue, and the message is formatted in the way my server expects (with a blank line to signify the end of a message).
My very simple thread group
My very simple TCP Sampler
The result after starting my test
As you can see from my third image, when I start the test it sends one message and just hangs. My server only receives the first message. As I'm new to JMeter, any advice would be helpful. I assumed I could accomplish what I need via the default TCP Sampler; if that's not the case, can anyone point me in the right direction?
Looking at your screenshot, the first request is still running:
Either your server needs to close the connection after receiving the message, or you need to come up with your own AbstractTCPClient implementation which will act according to your application setup.
You can also try the HTTP Raw Request sampler, which has a simpler configuration and might be more suitable for your use case out of the box. The HTTP Raw Request sampler can be installed using the JMeter Plugins Manager.
Yep, the JMeter TCP Sampler doesn't keep sending data when the connection doesn't send an acknowledgement. Just use something like this in a JSR223 Sampler:
def sock = new Socket()
def host = "localhost" // change it to your host
def port = 9999 // change it to your port
sock.connect(new InetSocketAddress(host, port))
def out = new OutputStreamWriter(sock.getOutputStream())
try {
while (true) {
out.write("Yor test data here")
out.flush()
if (sock.isConnected()) {
SampleResult.setSuccessful(true)
} else {
SampleResult.setSuccessful(false)
break
}
}
}
finally {
sock.close()
}
My app can transfer files and messages between a server and clients. The server is multithreaded and clients simply connect to it. While a file is being transferred, if the sender sends a message, it will be consumed as bytes of the file.
I don't want to open more ports.
Can I establish a new connection to the server for file transfer, or should I open a separate port for files?
I don't want to block communication while a file is being transferred.
The question was marked as a duplicate, but it's not: I am trying to send messages and files simultaneously, not one by one. I can already receive files one by one. Read again.
Also, as the server is multithreaded, I cannot call ServerSocket.accept() again to receive files on a new connection, because the main thread listening for incoming connections will try to handle it instead. Is there a way around this?
Seems to me like trying to multiplex files and messages onto the same socket stream is an XY problem.
I am not an expert on this, but it sounds like you should do some reading on "ports vs sockets". My understanding is that ip:port is the address of the listening service. Once a client connects, the server will open a socket to actually do the communication.
The trick is that every time a client connects, spawn a new thread (on a new socket) to handle the request. This instantly frees up the main thread to go back to listening for new connections. Your file transfer and your messages can come into the same port, but each new request will get its own socket and its own server thread --> no collision!
See this question for a java implementation:
Multithreading Socket communication Client/Server
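A minimal sketch of that accept-and-spawn pattern (the port number and the handler are made up for illustration):
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class MultiplexedServer {

    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(5000)) {
            while (true) {
                Socket client = server.accept();            // blocks until a client connects
                new Thread(() -> handle(client)).start();   // each connection gets its own thread
            }
        }
    }

    private static void handle(Socket client) {
        // read the client's first line here to decide whether this connection
        // carries chat messages or a file transfer, then process accordingly
    }
}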
You could use a scheme where every line of a file starts with a tag like (file:linenum); on the other side the receiver writes those lines into a file. To send text you could do the same thing, but with a tag like (text).
Server:
Scanner in = new Scanner(s.getInputStream());
while (true) {
    String message = in.nextLine();
    if (message.length() > 14 && message.substring(0, 6).equalsIgnoreCase("(file:")) {
        // extract the line number between "(file:" and ")"
        int line = Integer.parseInt(message.substring(6, message.indexOf(')')));
        // save the rest of the line (everything after the closing parenthesis) to the file
        saveToFile(message.substring(message.indexOf(')') + 1));
    } else {
        System.out.println(message);
    }
}
I think that code works, but I haven't checked it, so it might need some slight modifications.
You could introduce a handshake protocol where clients state who they are (probably happening already) and what they want from the given connection. The first connection they make could be about control, and perhaps the messages, and remain in use all the time. File transfer could happen via secondary connections, which may come and go during a session. Having several parallel connections between a client and a server is completely normal; that is what #MikeOunsworth was explaining too.
A shortcut you can take is issuing short-lived, one-time tokens which clients present when opening the secondary connection; the server then immediately knows which file it should start sending. Note that this approach can easily raise security issues (if the token encodes actual request data) and/or scalability issues (if the token is completely random and has to be looked up in some table).
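A minimal client-side sketch of that token idea, assuming the server already handed out oneTimeToken on the control connection (the host, port, and FILE keyword are made up):
import java.io.IOException;
import java.io.InputStream;
import java.io.PrintWriter;
import java.net.Socket;

public class FileConnection {

    public static InputStream openFileStream(String host, int port, String oneTimeToken) throws IOException {
        Socket fileSocket = new Socket(host, port);
        PrintWriter out = new PrintWriter(fileSocket.getOutputStream(), true);
        // present the token first so the server knows which file to stream on this connection
        out.println("FILE " + oneTimeToken);
        return fileSocket.getInputStream();   // caller reads the file bytes and closes the socket
    }
}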
I have created a host with the name dev002-All-Series, added a trapper item to it with the key test.ping.count, and added the host and IP address to the allowed hosts. Then I try to send data with the zabbix-metrics library, with code like this:
private MetricRegistry metricRegistry;
private Meter pingMeter;
private void init() {
metricRegistry = new MetricRegistry();
metricRegistry.register("jvm.attribute.guage.set", new JvmAttributeGaugeSet());
ZabbixSender zabbixSender = new ZabbixSender("zabbixHost", 10051);
ZabbixReporter zabbixReporter = ZabbixReporter.forRegistry(metricRegistry)
.hostName(HostUtil.getHostName()).prefix("test.").build(zabbixSender);
//FIXME use the right time unit and amount
zabbixReporter.start(10, TimeUnit.SECONDS);
pingMeter = metricRegistry.meter("ping");
}
Note that the zabbix-metrics library surrounds the ping meter with the test. prefix and the .count suffix.
So why do I get a response saying that sending my data failed? The response is:
{"response":"success","info":"processed: 0; failed: 8; total: 8; seconds spent: 0.000013"}
What additional configuration is necessary in Zabbix to send data? Also, is there a way to find out the reason why Zabbix did not accept the data? Does it log such requests?
Possible popular reasons:
incorrect host name; make sure to match the "Host name" field (not "Visible name", not IP, not DNS...); note that it is case sensitive
incorrect item key; make sure it matches the one in the item key properties exactly - also case sensitive
incorrect allowed host field contents, or data coming from a different host than expected - check that field for syntax errors, remember that in older Zabbix versions spaces are not supported in that field and tcpdump your incoming connection - does it arrive from the host you expected ?
host/item not in the configuration cache - if you just added or changed host/item, it might not be in the config cache yet. The config cache is updated every 60 seconds by default
if the host is monitored by a Zabbix proxy, you must send data to that proxy
In general, forget your application for a moment and test with zabbix_sender. If that works, check what your application is doing differently. If that fails, check all the items above.
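For example, using the host name and item key from the question (the Zabbix server address is a placeholder):
zabbix_sender -z zabbixHost -p 10051 -s "dev002-All-Series" -k test.ping.count -o 1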
As for logging, currently Zabbix does not log failures or their reasons.
I have found the problem. It turns out that the metrics-zabbix library does not convert the data correctly (for version 0.0.1). It sends the clock value as a long in milliseconds, while Zabbix needs to receive it in seconds. After manual conversion I got:
{"response":"success","info":"processed: 2; failed: 0; total: 2; seconds spent: 0.000016"}
It is very funny that even when I got 2 successfully processed elements, Zabbix did not show any values on the graph.
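For reference, the manual conversion is just dropping the milliseconds, something like:
// Zabbix expects the clock in Unix seconds, not milliseconds
long clockSeconds = System.currentTimeMillis() / 1000;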
UPDATED
To get everything working, you should check not only the clock in the data object, but the clock in the request too. By default, metrics-zabbix uses zabbix-sender version 0.0.1, which sends clocks in milliseconds. To make metrics work with Zabbix 3.0, which expects the clock time in seconds, you should change the zabbix-sender version to 0.0.3. Here is a Maven sample:
<dependency>
<groupId>io.github.hengyunabc</groupId>
<artifactId>metrics-zabbix</artifactId>
<version>0.0.1</version>
</dependency>
<dependency>
<groupId>io.github.hengyunabc</groupId>
<artifactId>zabbix-sender</artifactId>
<version>0.0.3</version>
</dependency>
See also
I am in a bit of a bind. I am trying to read a message off a WMQ queue via JMS and then convert it to a PCF message for processing. I have only been able to find one resource on this and it hasn't been very helpful (bottom of http://www-01.ibm.com/support/docview.wss?uid=swg21395682).
I have tried to implement the technique in the above doc, but every time I get to the line
PCFMessage response = new PCFMessage(dataInput);
I get MQRC 3013 - MQRCCF_STRUCTURE_TYPE_ERROR.
This is the way my code looks; maybe you can see something I don't.
BytesMessage message = null;
do {
// The consumer will wait 10 seconds (10,000 milliseconds)
message = (BytesMessage) myConsumer.receive(10000);
// get the size of the bytes message & read into an array
int bodySize = (int) message.getBodyLength();
byte[] data = new byte[bodySize];
message.readBytes(data, bodySize);
// Read into Stream and DataInput Stream
ByteArrayInputStream bais = new ByteArrayInputStream(data);
DataInput dataInput = new DataInputStream(bais);
// Pass to PCF Message to process
//MQException.logExclude(new Integer(2079));
PCFMessage qStatsPcf = new PCFMessage(dataInput);
session.commit();
if (message != null) {
processMessage(qStatsPcf);
}
} while (message != null);
myConsumer.close();
A couple updates in response to T.Rob's answer.
I am currently running MQ 7.0. This is a "what it is" type of thing; I can't currently upgrade.
As to what I am trying to do, I am pulling messages from SYSTEM.ADMIN.STATISTICS.QUEUE and I want to parse that information for auditing purposes. The reasoning behind converting to a PCF message is that I am looking to pull some PCF parameters from these messages - for example .getParameter(PCFConstants.MQIAMO_PUTS)
I am not attempting to send messages to MQ in anyway, just pull messages off and process them.
A couple problems with this question:
There is no mention of the version of the MQ JMS client that is in use. Since IBM has repackaged the Java/JMS classes several times, it is necessary to mention which version you are working with to get a better answer.
It is unclear what it is you are trying to do. An MQ PCF message is processed by the MQ Command Server. The messages are a binary format consisting of a linked list of name/type/value tuples. If your message body is not already in PCF name/type/value format, then casting it as a PCF message is expected to fail.
Since it is not possible to respond to the question as worded with a solution, I'll provide some recommendations based on wild guesses as to what it is you might be trying to do.
Use a modern MQ client. The Technote you linked to is for out-of-support versions of the MQ client. You want one that is at least MQ v7.1, but preferably v8.0. Since any version of the MQ client works with any version of MQ, use the version that is most current. Just remember, the functionality you get is based on the oldest version of MQ used at the client or server. A v8.0 client doesn't get you v8.0 function on a v7.0 QMgr. Go to the SupportPacs page and look for entries with names like MQC**. The MQ v8.0 client is current and it is SupportPac MQC8.
If you really are trying to submit PCF messages to MQ's command processor, instantiate a PCF Agent to do it. Then construct the PCF message using one of the PCF message constructors that lets you specify the selectors and their values (see the rough sketch after this list).
What happened when you tried using the PCF Java samples? Did they also fail? Did they work? If so, how does your code differ? You did look at IBM's PCF samples, right? Please see Installation directories for samples for the location for the sample programs, including the PCF samples.
If you are not attempting to send messages to the MQ Command Processor, please update the question to let us know what it is you are trying to do and why you believe you need PCF messages to do it.
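For completeness, a rough sketch of the agent approach mentioned above (the queue manager name, command, and parameter are illustrative only; the PCF classes live in com.ibm.mq.pcf or com.ibm.mq.headers.pcf depending on the client version):
// connect to a local queue manager and inquire on queues whose names match a pattern
PCFMessageAgent agent = new PCFMessageAgent("QMGR1");
PCFMessage request = new PCFMessage(CMQCFC.MQCMD_INQUIRE_Q);
request.addParameter(CMQC.MQCA_Q_NAME, "APP.*");
PCFMessage[] responses = agent.send(request);   // one response per matching queue
agent.disconnect();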
my 2 cents...
Why are you using JMS to retrieve PCF Messages?
MQ Java is best placed to handle all statistics and event messages. My suggestion is to go with the MQQueueManager object, retrieve an MQMessage from SYSTEM.ADMIN.STATISTICS.QUEUE, and pass it to the PCFMessage constructor.
I have not compiled or tested the following, but it gives an outline.
//no try/catch block, to keep it simple
//assumed MQQueueManager (qmgr object) is already created
//assumed statQueue is available through qmgr.accessQueue() method
MQGetMessageOptions gmo = new MQGetMessageOptions();
gmo.options = CMQC.MQGMO_FAIL_IF_QUIESCING | CMQC.MQGMO_WAIT | CMQC.MQGMO_SYNCPOINT | CMQC.MQGMO_CONVERT;
MQMessage message;
do {
    message = new MQMessage();
    statQueue.get(message, gmo);   // MQQueue.get() fills the message in place
    // Pass to PCF Message to process
    PCFMessage qStatsPcf = new PCFMessage(message);
    qmgr.commit();
    processMessage(qStatsPcf);
} while (message != null);   // in practice the loop ends when get() throws MQRC_NO_MSG_AVAILABLE
statQueue.close();
qmgr.disconnect();