RabbitMQ connection in blocking state? - java

When I connect to the RabbitMQ server, my connection shows up in the blocked state and I am not able to publish new messages.
I have about 6 GB of free RAM, and free disk space is also about 8 GB.
How do I configure the disk space limit in RabbitMQ?

I ran into the same problem. It seems the RabbitMQ server was using more memory than the threshold allows:
http://www.rabbitmq.com/memory.html
I ran the following command to unblock these connections:
rabbitmqctl set_vm_memory_high_watermark 0.6
(the default value is 0.4)
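Since the question is tagged java: if you want the client to notice when the broker blocks the connection, rather than discovering it when publishes stall, the standard RabbitMQ Java client (com.rabbitmq.client) exposes a BlockedListener. A minimal sketch, assuming a broker on localhost:

import com.rabbitmq.client.BlockedListener;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class BlockedConnectionDemo {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumed broker address
        try (Connection connection = factory.newConnection()) {
            // The broker notifies this listener when it blocks the connection
            // due to a resource alarm (memory or disk) and when it unblocks it.
            connection.addBlockedListener(new BlockedListener() {
                public void handleBlocked(String reason) {
                    System.out.println("Connection blocked: " + reason);
                }
                public void handleUnblocked() {
                    System.out.println("Connection unblocked");
                }
            });
            Thread.sleep(60_000); // keep the connection open to observe events
        }
    }
}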

By default, [disk_free_limit](source: [1]) must exceed 1.0 times the available RAM. That holds in your case, so you may want to check what exactly is blocking the flow. To do that, read the [rabbitmqctl man page](source: [2]) and query the last_blocked_by connection info item, e.g. rabbitmqctl list_connections name last_blocked_by. That should tell you the cause of the blocking.
Assuming it is memory (and you somehow didn't calculate your free disk space correctly), to change disk_free_limit, read [configuring rabbitmq.config](source: [1]), then open your rabbitmq.config file and add the following line: {rabbit, [{disk_free_limit, {mem_relative, 0.1}}]} inside the config declaration. My rabbitmq.config file looks as follows:
[
{rabbit, [{disk_free_limit, {mem_relative, 0.1}}]}
].
The specific number is up to you, of course.
Sources
http://www.rabbitmq.com/configure.html#configuration-file
http://www.rabbitmq.com/man/rabbitmqctl.1.man.html

Related

How can I increase the wait timeout for ChannelOutputStream for the ScpClient of Apache SSHD because of SocketTimeoutException?

When using the Apache SSHD SCP client to copy files from local to remote, I get the following error:
flush(ChannelOutputStream[ChannelExec[id=0, recipient=0]-ClientSessionImpl[uxxxxxx#Hostname.domain.com/192.163.23.68:45018]] SSH_MSG_CHANNEL_DATA) failed (SocketTimeoutException) to wait for space of len=24576: waitForCondition(Window[client/remote](ChannelExec[id=0, recipient=0]-ClientSessionImpl[uxxxxxx#Hostname.domain.com/192.163.23.68:45018])) timeout exceeded: 30000
Here is how I have set up the SSHServer and the ScpClient:
How to upload/download files using apache SSHD ScpClient
This ScpClient is running on a Linux host, and there are multiple SSH servers running on a mix of Linux and Windows hosts.
I use this ScpClient to copy files to both the Linux and Windows SSH servers. I create some 20-odd Akka actors that take care of copying to the respective remote hosts, which are a combination of Windows and Linux, so this does put some strain on the localhost while copying.
However, I get this error only when copying to some of the Windows servers on which the SSHServer is running.
I did notice that the copying is very slow, but I am not sure what exactly the issue is or how I can fix it.
I have a vague idea that it has something to do with this parameter:
https://github.com/apache/mina-sshd/blob/sshd-2.5.0/sshd-core/src/main/java/org/apache/sshd/common/channel/ChannelOutputStream.java#L43
But I am not exactly sure where I can configure this when creating the client.
Any pointers would be helpful.
We can use PropertyResolverUtils to update properties on any configurable object:
PropertyResolverUtils.updateProperty(sshClient, ChannelOutputStream.WAIT_FOR_SPACE_TIMEOUT, 120000);
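For context, a sketch of where that call fits when building the client. This follows the usual Mina SSHD 2.5.x setup; the two-minute value is just an example:

import java.util.concurrent.TimeUnit;
import org.apache.sshd.client.SshClient;
import org.apache.sshd.common.PropertyResolverUtils;
import org.apache.sshd.common.channel.ChannelOutputStream;

public class ScpTimeoutSetup {
    public static void main(String[] args) throws Exception {
        SshClient sshClient = SshClient.setUpDefaultClient();
        // Raise the wait-for-space timeout from the 30s default to 2 minutes
        // before starting the client, so all sessions/channels inherit it.
        PropertyResolverUtils.updateProperty(sshClient,
                ChannelOutputStream.WAIT_FOR_SPACE_TIMEOUT,
                TimeUnit.MINUTES.toMillis(2));
        sshClient.start();
        // ... create sessions and the ScpClient as before ...
        sshClient.stop();
    }
}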

jetty threads increasing linearly

All, I have an Apache FUSION server and have configured Jetty for it.
Using New Relic, I can see that the thread count is increasing linearly. After a while these threads hit a limit and cause an out-of-memory exception until I restart my proxy server.
Please find below the start.ini configuration I used to regulate the number of threads.
--module=server
jetty.threadPool.minThreads=10
jetty.threadPool.maxThreads=150
jetty.threadPool.idleTimeout=5000
jetty.server.dumpAfterStart=false
jetty.server.dumpBeforeStop=false
jetty.httpConfig.requestHeaderSize=32768
etc/jetty-stop-timeout.xml
--module=continuation
--module=deploy
--module=jsp
--module=ext
--module=resources
--module=client
--module=annotations
--module=servlets
etc/jetty-logging.xml
--module=jmx
--module=stats
I tried adding the thread-enabled property too, but it didn't work. Can anyone help with how I can limit these threads? With the same configuration on other servers, New Relic shows the thread counts are not increasing and stay well within range.
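For reference, those start.ini properties are consumed by Jetty's QueuedThreadPool. A minimal embedded-Jetty sketch with the same limits (standard org.eclipse.jetty API; shown only to illustrate what the properties control):

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.util.thread.QueuedThreadPool;

public class ThreadPoolSetup {
    public static void main(String[] args) throws Exception {
        QueuedThreadPool threadPool = new QueuedThreadPool();
        threadPool.setMinThreads(10);    // jetty.threadPool.minThreads
        threadPool.setMaxThreads(150);   // jetty.threadPool.maxThreads
        threadPool.setIdleTimeout(5000); // jetty.threadPool.idleTimeout (ms)
        Server server = new Server(threadPool);
        server.start();
        server.join();
    }
}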

Debezium flush timeout and OutOfMemoryError errors with MySQL

Using Debezium 0.7 to read from MySQL, but getting flush timeout and OutOfMemoryError errors in the initial snapshot phase. Looking at the logs below, it seems the connector is trying to write too many messages in one go:
WorkerSourceTask{id=accounts-connector-0} flushing 143706 outstanding messages for offset commit [org.apache.kafka.connect.runtime.WorkerSourceTask]
WorkerSourceTask{id=accounts-connector-0} Committing offsets [org.apache.kafka.connect.runtime.WorkerSourceTask]
Exception in thread "RMI TCP Connection(idle)" java.lang.OutOfMemoryError: Java heap space
WorkerSourceTask{id=accounts-connector-0} Failed to flush, timed out while waiting for producer to flush outstanding 143706 messages [org.apache.kafka.connect.runtime.WorkerSourceTask]
I wonder what the correct settings are (http://debezium.io/docs/connectors/mysql/#connector-properties) for sizeable databases (>50 GB). I didn't have this issue with smaller databases. Simply increasing the timeout doesn't seem like a good strategy. I'm currently using the default connector settings.
Update
Changed the settings as suggested below and it fixed the problem:
OFFSET_FLUSH_TIMEOUT_MS: 60000 # default 5000
OFFSET_FLUSH_INTERVAL_MS: 15000 # default 60000
MAX_BATCH_SIZE: 32768 # default 2048
MAX_QUEUE_SIZE: 131072 # default 8192
HEAP_OPTS: '-Xms2g -Xmx2g' # default '-Xms1g -Xmx1g'
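For anyone running the debezium/connect Docker image, these were passed as container environment variables; a sketch of the equivalent docker run invocation (topic names and the Kafka address are placeholders, and whether a given variable is honored depends on the image's entrypoint script):

docker run -d --name connect \
  -e GROUP_ID=1 \
  -e BOOTSTRAP_SERVERS=kafka:9092 \
  -e CONFIG_STORAGE_TOPIC=connect_configs \
  -e OFFSET_STORAGE_TOPIC=connect_offsets \
  -e OFFSET_FLUSH_TIMEOUT_MS=60000 \
  -e OFFSET_FLUSH_INTERVAL_MS=15000 \
  -e MAX_BATCH_SIZE=32768 \
  -e MAX_QUEUE_SIZE=131072 \
  -e HEAP_OPTS='-Xms2g -Xmx2g' \
  debezium/connect:0.7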
This is a very complex question. First of all, the default memory settings for the Debezium Docker images are quite low, so if you are using them it might be necessary to increase them.
Next, there are multiple factors at play. I recommend the following steps:
Increase max.batch.size and max.queue.size - this reduces the number of commits.
Increase offset.flush.timeout.ms - this gives Connect time to process the accumulated records.
Decrease offset.flush.interval.ms - this should reduce the amount of accumulated offsets.
Unfortunately, there is an issue, KAFKA-6551, lurking in the background that can still play havoc.
I can confirm that the answer posted above by Jiri Pechanec solved my issues. These are the configurations I am using:
Kafka Connect worker configs, set in the worker.properties config file:
offset.flush.timeout.ms=60000
offset.flush.interval.ms=10000
max.request.size=10485760
Debezium configs, passed in the curl request that initializes the connector:
max.queue.size = 81290
max.batch.size = 20480
We didn't run into this issue with our staging MySQL DB (~8 GB) because the dataset is a lot smaller. For the production dataset (~80 GB), we had to adjust these configurations.
Hope this helps.
To add onto what Jiri said:
There is now an open issue in the Debezium bug tracker; if you have any more information about root causes, logs, or reproduction, feel free to provide it there.
For me, changing the values that Jiri mentioned in his comment did not solve the issue. The only working workaround was to create multiple connectors on the same worker that are responsible for a subset of all tables each. For this to work, you need to start connector 1, wait for the snapshot to complete, then start connector 2 and so on. In some cases, an earlier connector will fail to flush when a later connector starts to snapshot. In those cases, you can just restart the worker once all snapshots are completed and the connectors will pick up from the binlog again (make sure your snapshot mode is "when_needed"!).
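To illustrate that workaround, a hypothetical pair of connector configs that each snapshot a subset of the tables. Debezium 0.7 still used the table.whitelist property; the database and table names here are invented, and the usual database.hostname, database.user, etc. are omitted:

First connector (register it, then wait for its snapshot to finish):
{
  "name": "accounts-connector-1",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "snapshot.mode": "when_needed",
    "table.whitelist": "accounts.users,accounts.logins"
  }
}
Second connector (register only after the first snapshot completes):
{
  "name": "accounts-connector-2",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "snapshot.mode": "when_needed",
    "table.whitelist": "accounts.orders,accounts.invoices"
  }
}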

EARs are automatically undeployed in jboss-as-7.1.1.Final

I do not understand why EARs are undeployed automatically in jboss-as-7.1.1.Final.
I can see these logs:
ERROR org.apache.tomcat.util.net.JIoEndpoint$Acceptor [run] Socket accept failed: java.net.SocketException: Too many open files
WARN com.kpn.tie.ejbs.dao.webservice.tt.WebServiceProcessor [invoke] WebService unavailable. The request could not be completed due to technical problems. ; nested exception is: java.net.SocketException: Too many open files
Can somebody tell me the root cause of this behavior and also suggest a solution?
As a workaround, would restarting JBoss at a particular time interval resolve this issue?
The reason could be that the application is overloaded or the file descriptor limit is too low. Because of this, the JVM cannot open any new file handles, so you are getting "Socket accept failed" for incoming requests.
After a while the deployment scanner comes into play (5 seconds is the default) and tries to check the deployments folder, which is not possible as it cannot open any file handles. So it gets confused and stops the deployed apps.
The first solution could be:
Deactivate the scanner so that it only checks once during boot, or remove the deployment-scanner subsystem and use only the CLI to deploy (see the CLI sketch at the end of this answer).
The second solution could be:
Increase the file-handle limit (open files size):
java.net.SocketException: Too many open files
On Linux you can increase the number of concurrently open files with
ulimit -n 2048
This would allow 2048 files open at the same time in the current session. The command should be inserted either in the session configuration (e.g. .bashrc or similar, depending on your shell) or in the JBoss start script.
To show the current limit you can use
ulimit -n
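For the first solution, a sketch of the jboss-cli commands. In the AS 7 deployment-scanner subsystem, a scan-interval of 0 is treated as "scan only at startup"; verify the attribute names against your exact version:

$JBOSS_HOME/bin/jboss-cli.sh --connect
# scan only once, at boot:
/subsystem=deployment-scanner/scanner=default:write-attribute(name=scan-interval,value=0)
# or remove the scanner subsystem entirely and deploy via CLI only:
/subsystem=deployment-scanner:remove
:reload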

Apache Tomcat Exception - Too many open files

We are running a web service in Apache Tomcat on Amazon Linux. Initially the web service runs properly, but we get a too-many-open-files exception after making more than 1000 web requests. The issue is resolved when we restart the Tomcat server.
Please find the exception below:
25-Apr-2016 10:05:52.628 SEVERE [http-nio-8080-Acceptor-0] org.apache.tomcat.util.net.NioEndpoint$Acceptor.run Socket accept failed
java.io.IOException: Too many open files
at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422)
at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250)
at org.apache.tomcat.util.net.NioEndpoint$Acceptor.run(NioEndpoint.java:686)
at java.lang.Thread.run(Thread.java:745)
PS: we are not doing any file-related operations in the web service.
It looks like there is some limit on open files. As you are running on Linux, I suspect you are running out of file descriptors.
Check out the ulimit command to see the number of allowed open files:
ulimit -n
You can change the number of open files by editing:
/etc/security/limits.conf
and adding something like this:
* soft nofile 4096
* hard nofile 4096
You can read more about limits.conf here.
The default limit is 1024 and can be too low for some Java applications.
More information about increasing the maximum number of open files in this article: http://www.cyberciti.biz/faq/linux-increase-the-maximum-number-of-open-files/
Although if "ulimit" is raised at some point down the line tomcat stops causing same error.
So in order to avoid this you can check list of open files for the application user on Linux using command "lsof -u username" or simply "lsof" and see if code related files are open ( eg..properties files ) if so kill those specific files using # kill -9 lsof -t -u username command for that specific tomcat user.
You need to fix your code to load those files writing simply in a static block of your classes. So that only one file loads even if multiple hits are made by any number of users.
Now you can re check after deploying new changes with the same lsof command and see. Only one file will be seen. This will permanently fix your issue without raising the ulimit each time
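As a sketch of that idea (a hypothetical AppConfig class; only the JDK's java.util.Properties is assumed):

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public final class AppConfig {
    private static final Properties PROPS = new Properties();

    // The static initializer runs once per classloader, so the file is
    // opened and closed exactly once, no matter how many requests arrive.
    static {
        try (InputStream in = AppConfig.class.getResourceAsStream("/app.properties")) {
            if (in != null) {
                PROPS.load(in);
            }
        } catch (IOException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    private AppConfig() {}

    public static String get(String key) {
        return PROPS.getProperty(key);
    }
}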
That is because socket connections are treated as files, which means you have too many connections open. Check the limits (each OS has a different policy about this, and the same goes for each server): how many ports you can open at the same time, and so on. You can use NIO to limit those things.
