I am currently writing a small web application that uses WebSockets. The application broadcasts to all connected clients. This works well as long as the Tomcat container runs on a port other than 80.
For this scenario, think of the web application as broadcasting messages continuously.
The working behaviour (i.e. on a port other than 80) is the following:
Client (Browser) connects to the server successfully
Client immediately receives messages (i.e. websocket callback function is invoked)
As soon as I configure it to run on port 80, the following behaviour can be observed:
Client (Browser) connects to the server successfully
From the start, no invocation of the callback can be observed via console.log(...)
After some time, the debug output from the onmessage callback appears all at once, every entry carrying the same timestamp, although the interval spanned by 5 broadcasts is definitely more than a second
Console Log:
Event data: {"aktuell":50,"total":788,"msg":"Indexiere Artikel"} at Mon Mar 03 2014 14:24:22 GMT+0100
Event data: {"aktuell":55,"total":788,"msg":"Indexiere Artikel"} at Mon Mar 03 2014 14:24:22 GMT+0100
Event data: {"aktuell":60,"total":788,"msg":"Indexiere Artikel"} at Mon Mar 03 2014 14:24:22 GMT+0100
Event data: {"aktuell":65,"total":788,"msg":"Indexiere Artikel"} at Mon Mar 03 2014 14:24:22 GMT+0100
Event data: {"aktuell":70,"total":788,"msg":"Indexiere Artikel"} at Mon Mar 03 2014 14:24:22 GMT+0100
Event data: {"aktuell":75,"total":788,"msg":"Indexiere Artikel"} at Mon Mar 03 2014 14:24:22 GMT+0100
Event data: {"aktuell":80,"total":788,"msg":"Indexiere Artikel"} at Mon Mar 03 2014 14:24:22 GMT+0100
Event data: {"aktuell":85,"total":788,"msg":"Indexiere Artikel"} at Mon Mar 03 2014 14:24:22 GMT+0100
Event data: {"aktuell":90,"total":788,"msg":"Indexiere Artikel"} at Mon Mar 03 2014 14:24:22 GMT+0100
Event data: {"aktuell":95,"total":784{"aktuell":50,"total":788, at Mon Mar 03 2014 14:24:22 GMT+0100
This behaviour shows up with Tomcat 7.0.42, 7.0.52 and 8.0.3. On the client side, IE 10, Firefox 21 and Chrome 33 have been used.
It seems to me that the WebSocket content is somehow buffered up to a size of about 510 bytes (observed when stripping the debug messages down to the message content only). Even if I change the JSON message structure, it adds up to a total of 510 bytes.
Is there anything that I missed that is different when working with port 80?
Just as additional information: on the server side I use
session.getBasicRemote().sendText(message)
to send the message and on the client side I use
ws = new WebSocket(url); // Open connection
ws.onmessage = function(event) {
    console.log(event.data); // Stripped-down version
};
to handle any incoming event.
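The symptom (messages withheld, then delivered in one burst with identical timestamps) is what a write-side buffer between sender and receiver looks like. As a plain-Java illustration of the effect only (this is not Tomcat's actual internals; the 510-byte size is just the value observed above), a BufferedOutputStream holds writes until its buffer fills or flush() is called:

```java
import java.io.BufferedOutputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;

public class BufferDemo {
    public static void main(String[] args) throws IOException {
        // "wire" stands in for the network connection
        ByteArrayOutputStream wire = new ByteArrayOutputStream();
        // 510-byte buffer, matching the size observed above
        BufferedOutputStream buffered = new BufferedOutputStream(wire, 510);

        buffered.write("{\"aktuell\":50,\"total\":788}".getBytes());
        // The message is smaller than the buffer, so nothing has hit the
        // wire yet; further writes would pile up until 510 bytes accumulate.
        System.out.println("after write: " + wire.size() + " bytes on wire");

        buffered.flush();
        // Only an explicit flush (or a full buffer) pushes the data out.
        System.out.println("after flush: " + wire.size() + " bytes on wire");
    }
}
```

If anything on the port-80 path introduces such a buffer (for instance a proxy or a different connector in front of Tomcat), you would see exactly this burst pattern, so it is worth verifying whether port 80 is served by Tomcat directly or fronted by something else.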
Related
I have a service that reads from DynamoDB, and I am seeing high latency for a particular request. Checking the debug logs, I found the following lines; the difference between their timestamps is over 1000 ms. It would be a great help if someone could explain what these lines mean, how they increase the latency, and how this can be prevented.
10 Mar 2022 18:17:46,036 com.amazonaws.retry.ClockSkewAdjuster: Reported server date (from 'Date' header): Thu, 10 Mar 2022 18:17:46 GMT
10 Mar 2022 18:17:47,254 javax.xml.bind: Checking system property javax.xml.bind.context.factory
The mail message header is exposing information about the hostname and an internal IP address (the first-hop server). An attacker can easily obtain the internal server IP from it, which is a serious threat.
Message-ID: <5f698024.1c69fb81.621c8.e573SMTPIN_ADDED_BROKEN#mx.google.com>
X-Google-Original-Message-ID: <1634982533.0.1600749603315.JavaMail.production.domain.com>
Received: from <hostname> (unknown [10.11.XXX.XXX]) by devmail01.domain.com (Postfix) with ESMTP id 93DF9C0BD3 for <username.r#gmail.com>; Mon, 21 Sep 2020 21:40:02 -0700 (PDT)
Date: Tue, 22 Sep 2020 10:05:42 +0530 (IST)
From: mailer <mailer#us.abcd-mail.com>
To: username.r#gmail.com
Subject: [502FE0F6] org.apache.http.conn.HttpHostConnectException report for http://localhost:8080
So I added props.put("mail.smtp.localhost", "production.domain.com"); to the SMTP configuration, and now the hostname comes through as "production.domain.com".
But the internal IP is still exposed, and it is being used for a reverse DNS lookup:
Received: from production.domain.com (unknown [10.11.XXX.XXX]) by devmail01.domain.com (Postfix) with ESMTP id 2B933C0016 for <username.r#gmail.com>; Mon, 21 Sep 2020 23:57:04 -0700 (PDT)
How do I remove internal IP address for message headers?
Is it possible to prevent reverse DNS lookup?
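If the relay that stamps the offending Received line (devmail01.domain.com above) is Postfix and under your control, one common approach is to strip such headers on outbound delivery with smtp_header_checks. A sketch, assuming the internal range is 10.11.0.0/16 and the map path is hypothetical:

```
# main.cf on the outbound relay
smtp_header_checks = regexp:/etc/postfix/smtp_header_checks

# /etc/postfix/smtp_header_checks
# Drop Received headers that mention the internal address range
/^Received:.*\[10\.11\./    IGNORE
```

Note that removing Received headers makes delivery problems harder to trace, and since the exposure comes from the header itself, stripping it also removes what the receiving side can look up.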
I have a multi-threaded data-processing job that completes in around 5 hours (same code) on an EC2 instance. But when it is run in a Docker container (which I configured to have 7 GB of RAM before creating it), the job runs slowly for 12+ hours, and then the container disappeared. How can we fix this? Why would the job be so slow in the Docker container? CPU processing was very slow in the container, not just the network I/O. Slow network I/O would be fine, but I'm wondering what could cause the CPU processing to be much slower than on the EC2 instance. Also, where can I find a detailed trace of what happened in the host operating system that caused the Docker container to die?
**docker logs <container_id>**
19-Feb-2019 22:49:42.098 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-8080"]
19-Feb-2019 22:49:42.105 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["ajp-nio-8009"]
19-Feb-2019 22:49:42.106 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in 27468 ms
19-Feb-2019 22:55:12.122 INFO [localhost-startStop-2] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory [/usr/local/tomcat/webapps/logging]
19-Feb-2019 22:55:12.154 INFO [localhost-startStop-2] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [/usr/local/tomcat/webapps/logging] has finished in [32] ms
searchResourcePath=[null], isSearchResourceAvailable=[false]
knowledgeCommonResourcePath=[null], isKnowledgeCommonResourceAvailable=[false]
Load language resource fail...
blah blah blah some application log
bash: line 1: 10 Killed /usr/local/tomcat/bin/catalina.sh run
Error in Tomcat run: 137 ... failed!
Upon doing dmesg -T | grep docker, this is what I see. What is -500 for dockerd? -500 for docker-proxy? How do I interpret the output below?
[Tue Feb 19 14:49:04 2019] docker0: port 1(vethc30f313) entered blocking state
[Tue Feb 19 14:49:04 2019] docker0: port 1(vethc30f313) entered forwarding state
[Tue Feb 19 14:49:04 2019] docker0: port 1(vethc30f313) entered disabled state
[Tue Feb 19 14:49:07 2019] docker0: port 1(vethc30f313) entered blocking state
[Tue Feb 19 14:49:07 2019] docker0: port 1(vethc30f313) entered forwarding state
**[Wed Feb 20 04:09:23 2019] [10510] 0 10510 197835 12301 111 0 -500 dockerd
[Wed Feb 20 04:09:23 2019] [11241] 0 11241 84733 5434 53 0 0 docker
[Wed Feb 20 04:09:23 2019] [11297] 0 11297 29279 292 18 0 -500 docker-proxy**
[Wed Feb 20 04:09:30 2019] docker0: port 1(vethc30f313) entered disabled state
[Wed Feb 20 04:09:30 2019] docker0: port 1(vethc30f313) entered disabled state
[Wed Feb 20 04:09:30 2019] docker0: port 1(vethc30f313) entered disabled state
From the above, at 04:09:23 it shows -500 for dockerd etc., and from below, at 04:09:24 it kills Java process 11369 with score 914. What does that mean? Did it not kill the Docker process, but rather the Java process running inside the Docker container?
dmesg -T | grep java
Wed Feb 20 04:09:23 2019] [ 3281] 503 3281 654479 38824 145 0 0 java
[Wed Feb 20 04:09:23 2019] [11369] 0 11369 3253416 1757772 4385 0 0 java
[Wed Feb 20 04:09:24 2019] Out of memory: Kill process 11369 (java) score 914 or sacrifice child
[Wed Feb 20 04:09:24 2019] Killed process 11369 (java) total-vm:13013664kB, anon-rss:7031088kB, file-rss:0kB, shmem-rss:0kB
TL;DR you need to increase the memory on your VM/host, or reduce the memory usage of your application.
The OS is killing Java which is running inside the container because the host ran out of memory. When the process inside the container dies, the container itself goes into an exited state. You can see these non-running containers with docker ps -a.
By default, docker does not limit the CPU or memory of a container. You can add these limits on containers, and if your container exceeds the container memory limits, docker will kill the container. That result will be visible with an OOM status when you inspect the stopped container.
The reason you see the -500 values on the docker processes is that their OOM score has been adjusted to prevent the OS from killing Docker itself when the host runs out of memory. Instead, the process inside the container gets killed, and you can configure a restart policy in Docker to restart that container.
You can read more about memory limits, and configuring the OOM score for container processes at: https://docs.docker.com/engine/reference/run/
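As a practical follow-up (container and image names below are hypothetical, and the 7 GB figure is taken from the question): capping the container's memory explicitly means an overrun kills the container at its own limit, which is then visible via docker inspect, instead of triggering the host-level OOM killer shown in the dmesg output above.

```
# Run the job with an explicit 7 GB cap (swap capped to the same value)
docker run -m 7g --memory-swap 7g --name datajob my-image

# After it exits, check whether the container-level limit killed it
docker inspect --format '{{.State.OOMKilled}}' datajob
```

It is also worth sizing the JVM heap (-Xmx) below the cap, since older JVMs do not detect container memory limits on their own.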
When using Tomcat 7 with HTTPS, I found that content which I expected to arrive in one packet is transmitted in two packets.
I think the SSL setup is OK. I can access the web application over https, and wget --ca-certificate=/home/alice/root.crt https://IP:8443 shows 200 OK.
Here are the logs of the conversation with Tomcat.
Thu Sep 2 10:17:24 2015,comm.c[336](func_send): release qbuf to empty_qbuf_head ok.
Thu Sep 2 10:17:24 2015,comm.c[279](func_send): wait SEM_SEND.
Thu Sep 2 10:17:24 2015,comm.c[702](sock_recv): result=1 Recv Data _pdb: H
Thu Sep 2 10:17:24 2015,comm.c[432](func_recv): res=1 Recv Data p_data: H
Thu Sep 2 10:17:24 2015,comm.c[435](func_recv): colin bbb TO func_recv countqn=1
Thu Sep 2 10:17:24 2015,comm.c[436](func_recv): insert qbuf to recv_qbuf_head ok.
Thu Sep 2 10:17:24 2015,comm.c[493](func_recv): post SEM_RECV.
Thu Sep 2 10:17:24 2015,comm.c[402](func_recv): colin aaa TO func_recv countqn=2
Thu Sep 2 10:17:24 2015,comm.c[410](func_recv): get qbuf from empty_qbuf_head ok.
Thu Sep 2 10:17:24 2015,comm.c[418](func_recv): colin ready to sock_recv.
Thu Sep 2 10:17:24 2015,comm.c[702](sock_recv): result=1285 Recv Data _pdb: TTP/1.1 200 OK
Server: Apache-Coyote/1.1
SOAPAction:
Content-Type: text/xml;charset=ISO-8859-1
Content-Length: 1124
Date: 02 Sep 2015 02:16:51 GMT
The "Recv Data _pdb" entries are the packets I receive. I receive H and then TTP/1.1 ..., which causes an error. If I received HTTP/1.1 ... in one piece, everything would be fine.
Generally I expected the packet to be HTTP/1.1 ..., but it shows that Tomcat sends a packet H and a packet TTP/1.1 ....
I looked through the settings of the HTTP connector (https://tomcat.apache.org/tomcat-7.0-doc/config/http.html), but nothing relates to this problem.
Tomcat version : Apache Tomcat/7.0.52 (Ubuntu)
OS :
Distributor ID: Ubuntu
Description: Ubuntu 14.04.2 LTS
Release: 14.04
Codename: trusty
This is part content of server.xml.
<Connector port="8080" protocol="HTTP/1.1"
connectionTimeout="20000"
URIEncoding="UTF-8"
redirectPort="8443" />
<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
maxThreads="150" scheme="https" secure="true"
clientAuth="false" sslProtocol="TLS"
keystoreFile="/home/jack/example.pkcs12"
keystoreType="PKCS12"
keystorePass="password" />
Does anyone have an idea? Let me know which side is wrong. Thanks.
2015/09/03 update :
On the server side I used tcpdump to capture packets and opened them in Wireshark. Because the device receives H and TTP/1.1, the session is not complete; therefore we can't use Wireshark with the private key to decrypt the packets. I can only look at the data lengths and guess. As shown above, Tomcat sends two packets.
Both sides are right. The HTTP protocol is defined not in terms of packets but as a stream of bytes. The underlying TCP/IP protocol must chop the stream into packets, but they are reassembled in order on the receiving end.
If your code reads HTTP, then it must wait for more input. It cannot assume that data arrives together or in certain chunks.
The end of the data is well-defined in HTTP. The HTTP header ends when two pairs of CR LF are received (CR LF CR LF). If the header indicates a payload, the message must then either:
Specify the size in bytes (Content-Length)
Use chunking (Transfer-Encoding: chunked)
Not use keep-alive; the data then ends when the connection is closed.
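A minimal reader that honours this can be sketched in Java (a hypothetical helper, not taken from the device's code): it consumes the stream until CR LF CR LF, then reads exactly Content-Length body bytes, so it does not matter whether the bytes arrive as H plus TTP/1.1 ... or in one piece.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.SequenceInputStream;
import java.nio.charset.StandardCharsets;

public class HttpReader {

    // Read one HTTP response from a byte stream: consume the header up to
    // CR LF CR LF, then read exactly Content-Length body bytes. How the
    // bytes were split into packets is irrelevant to this loop.
    public static String readResponse(InputStream in) throws IOException {
        StringBuilder header = new StringBuilder();
        int c;
        while ((c = in.read()) != -1) {
            header.append((char) c);
            int n = header.length();
            if (n >= 4 && header.substring(n - 4).equals("\r\n\r\n")) {
                break;
            }
        }
        int contentLength = 0;
        for (String line : header.toString().split("\r\n")) {
            if (line.toLowerCase().startsWith("content-length:")) {
                contentLength = Integer.parseInt(
                        line.substring("content-length:".length()).trim());
            }
        }
        byte[] body = new byte[contentLength];
        int read = 0;
        while (read < contentLength) {
            int got = in.read(body, read, contentLength - read);
            if (got == -1) throw new IOException("stream ended before Content-Length bytes");
            read += got;
        }
        return new String(body, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws IOException {
        // Simulate the split observed in the logs: "H" arrives first,
        // then "TTP/1.1 ..." with the rest of the response.
        InputStream in = new SequenceInputStream(
                new ByteArrayInputStream("H".getBytes(StandardCharsets.UTF_8)),
                new ByteArrayInputStream(
                        "TTP/1.1 200 OK\r\nContent-Length: 5\r\n\r\nhello"
                                .getBytes(StandardCharsets.UTF_8)));
        System.out.println(readResponse(in)); // prints "hello"
    }
}
```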
I have active/passive nodes running Tomcat with Heartbeat. When I shut down the active node, Tomcat on the passive node starts. This is a piece of the startup trace:
INFO: Initializing Coyote HTTP/1.1 on http-8080
May 22, 2014 7:37:43 PM org.apache.catalina.startup.Catalina load
INFO: Initialization processed in 366 ms
May 22, 2014 7:37:43 PM org.apache.catalina.realm.JAASRealm setContainer
INFO: Set JAAS app name VentusProxy
May 22, 2014 7:37:59 PM org.apache.catalina.mbeans.JmxRemoteLifecycleListener createServer
It takes 15 seconds to initialize JAAS, which means 15 more seconds without service.
I don't use JAAS at all in my application, so I'd like to disable it or at least reduce these 15 seconds.
Can anybody tell me if this is possible and how?