Grizzly maxPendingBytes property ignored

I'm developing a project using Grizzly 2.3.22 with its WebSocket support. Everything was OK until an OOM happened. Looking through the heap dump I found that all the memory was eaten up by a single org.glassfish.grizzly.nio.transport.TCPNIOConnection holding a huge (1.5 GB) write queue. I guess one of the client developers was debugging their connected application and sat on a breakpoint for a long time. In any case, the same thing can easily happen with a very slow client connection; my server should be ready for that.
In the Grizzly documentation I found the maxPendingBytes property, which seems like a solution, at least for now. But I cannot get it to work at all. I set the log level to ALL for AbstractNIOAsyncQueueWriter, connect with the client, put it on hold, and observe how the server's queue grows like this:
TRACE 2016-07-05 21:02:26.330 [nioEventLoopGroup-2-1] o.g.g.n.AbstractNIOAsyncQueueWriter - AsyncQueueWriter.write connection=TCPNIOConnection{localSocketAddress={/127.0.0.1:8445}, peerSocketAddress={/127.0.0.1:56185}}, record=org.glassfish.grizzly.asyncqueue.AsyncWriteQueueRecord#1e35bafb, directWrite=false, size=165, isUncountable=false, bytesToReserve=165, pendingBytes=16170
TRACE 2016-07-05 21:02:26.368 [nioEventLoopGroup-2-1] o.g.g.n.AbstractNIOAsyncQueueWriter - AsyncQueueWriter.write connection=TCPNIOConnection{localSocketAddress={/127.0.0.1:8445}, peerSocketAddress={/127.0.0.1:56185}}, record=org.glassfish.grizzly.asyncqueue.AsyncWriteQueueRecord#3d6e05dd, directWrite=false, size=165, isUncountable=false, bytesToReserve=165, pendingBytes=16335
...
With maxPendingBytes=10000 I expect an exception to be thrown once pendingBytes in the log above exceeds 10000, but that never happens.
Moreover, I tried debugging the server against Grizzly's source code and found that while the property's value does get assigned to the NIOConnection.maxAsyncWriteQueueSize field, the AbstractNIOAsyncQueueWriter.canWrite(...) method, the only place where the field seems to be used, is never called.
I'm at a loss. Am I missing something here?
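For completeness, this is roughly the kind of transport-level setup and manual guard I would expect to work. The method names are taken from the Grizzly 2.x transport API and are worth double-checking against the 2.3.22 sources; whether the websocket layer actually honours this limit for its own writes is exactly what I am asking.
import org.glassfish.grizzly.Buffer;
import org.glassfish.grizzly.Connection;
import org.glassfish.grizzly.WriteHandler;
import org.glassfish.grizzly.nio.transport.TCPNIOTransport;
import org.glassfish.grizzly.nio.transport.TCPNIOTransportBuilder;

public class BackpressureSketch {
    // Cap the per-connection async write queue; this is what maxPendingBytes is meant to control.
    static TCPNIOTransport buildTransport() {
        TCPNIOTransport transport = TCPNIOTransportBuilder.newInstance().build();
        transport.getAsyncQueueIO().getWriter().setMaxPendingBytesPerConnection(10000);
        return transport;
    }

    // Guard a write manually: only write while the queue has room, otherwise
    // ask to be notified once it has drained below the limit.
    static void guardedWrite(Connection<?> connection, Buffer payload, WriteHandler whenWritable) {
        if (connection.canWrite()) {
            connection.write(payload);
        } else {
            connection.notifyCanWrite(whenWritable);
        }
    }
}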

Related

How to stop "Ignoring query projection" warning for LOCAL Ignite cache

I'm running a ScanQuery on an Ignite cluster that is currently only a local cache. Every time it runs, I get the warning message below:
WARN Ignoring query projection because it's executed over LOCAL cache (only local node will be queried): GridCacheQueryAdapter [type=SCAN, clsName=null, clause=null, filter=com.sms.ignite.IgniteUtils$1#10895ea7, transform=null, part=null, incMeta=false, metrics=GridCacheQueryMetricsAdapter [minTime=9223372036854775807, maxTime=0, sumTime=0, avgTime=0.0, execs=0, completed=0, fails=0], pageSize=1024, timeout=0, keepAll=true, incBackups=false, dedup=false, prj=o.a.i.i.cluster.ClusterGroupAdapter#5307bf01, keepBinary=false, subjId=null, taskHash=0]
I've done some research and saw on the Ignite forums that this issue has come up before, but I haven't found any fix. Is there any sort of logging setting or configuration that will keep me from getting spammed with this message? I am fully aware that the cache is local and don't want my entire log filling up with this useless message.
Please consider upgrading. I don't think this warning exists in 2.7.5.
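If upgrading is not an option right away, one possible workaround is to raise the log threshold for the class that emits the message. This is only a sketch: it assumes a Log4j-based logging configuration, and the category name below (the internal class that appears to log the warning) should be verified against your Ignite version:
# log4j.properties
log4j.logger.org.apache.ignite.internal.processors.cache.query.GridCacheQueryAdapter=ERROR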

akka.pattern.AskTimeoutException while running Lagom HelloWorld example

I have a problem while trying my hand at the Hello World example explained here.
Kindly note that I have only modified the HelloEntity.java file to return something other than "Hello, World!". Most certainly my changes are taking time, and hence I am getting the timeout error below.
I am currently doing a PoC on a single node to understand the Lagom framework and do not have the liberty to deploy multiple nodes.
I have also tried modifying the default lagom.circuit-breaker in application.conf ("call-timeout = 100s"); however, this does not seem to have helped.
Following is the exact error message for your reference:
{"name":"akka.pattern.AskTimeoutException: Ask timed out on [Actor[akka://hello-impl-application/system/sharding/HelloEntity#1074448247]] after [5000 ms]. Sender[null] sent message of type \"com.lightbend.lagom.javadsl.persistence.CommandEnvelope\".","detail":"akka.pattern.AskTimeoutException: Ask timed out on [Actor[akka://hello-impl-application/system/sharding/HelloEntity#1074448247]] after [5000 ms]. Sender[null] sent message of type \"com.lightbend.lagom.javadsl.persistence.CommandEnvelope\".\n\tat akka.pattern.PromiseActorRef$.$anonfun$defaultOnTimeout$1(AskSupport.scala:595)\n\tat akka.pattern.PromiseActorRef$.$anonfun$apply$1(AskSupport.scala:605)\n\tat akka.actor.Scheduler$$anon$4.run(Scheduler.scala:140)\n\tat scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:866)\n\tat scala.concurrent.BatchingExecutor.execute(BatchingExecutor.scala:109)\n\tat scala.concurrent.BatchingExecutor.execute$(BatchingExecutor.scala:103)\n\tat scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:864)\n\tat akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(LightArrayRevolverScheduler.scala:328)\n\tat akka.actor.LightArrayRevolverScheduler$$anon$4.executeBucket$1(LightArrayRevolverScheduler.scala:279)\n\tat akka.actor.LightArrayRevolverScheduler$$anon$4.nextTick(LightArrayRevolverScheduler.scala:283)\n\tat akka.actor.LightArrayRevolverScheduler$$anon$4.run(LightArrayRevolverScheduler.scala:235)\n\tat java.lang.Thread.run(Thread.java:748)\n"}
Question: Is there a way to increase the Akka timeout by modifying application.conf or any of the Java source files in the Hello World project? Can you please help me with the exact details?
Thanks in advance for your time and help.
The call timeout is the timeout for circuit breakers, which is configured using lagom.circuit-breaker.default.call-timeout. But that's not what is timing out above; what is timing out is the request to your HelloEntity, and that timeout is configured using lagom.persistence.ask-timeout. The reason there is a timeout on requests to entities is that in a multi-node environment your entities are sharded across nodes, so an ask may go to another node, and a timeout is needed in case that node is not responding.
All that said, I don't think changing the ask-timeout will solve your problem. If you have a single node, then your entities should respond instantly if everything is working ok.
Is that the only error you're seeing in the logs?
Are you seeing this in dev mode (i.e., using the runAll command), or are you running the Lagom service some other way?
Is your database responding?
Thanks James for the help/pointer.
Adding the following lines to resources/application.conf did the trick for me:
lagom.persistence.ask-timeout=30s
hello {
  ..
  ..
  call-timeout = 30s
  call-timeout = ${?CIRCUIT_BREAKER_CALL_TIMEOUT}
  ..
}
A Call is service-to-service communication: a ServiceClient communicating with a remote server. It uses a circuit breaker. It is an extra-service call.
An ask (in the context of lagom.persistence) is sending a command to a persistent entity. That happens across the nodes inside your Lagom service. It does not use circuit breaking. It is an intra-service call.
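To make the distinction concrete, a minimal application.conf sketch showing where each of the two timeouts discussed above lives (the values are illustrative):
# intra-service: commands asked of persistent entities
lagom.persistence.ask-timeout = 30s
# extra-service: circuit breaker around service client calls
lagom.circuit-breaker.default.call-timeout = 30s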

In Apache Camel, how can I receive an error if an endpoint doesn't exist?

We are using Camel fluent builders to set up a series of complex routes, in which we are using dynamic routing using the RecipientList functionality.
We've encountered issues where in some cases, the recipient list contains a messaging endpoint that doesn't exist (for example, something like seda:notThere).
A simple example is something like this:
from("seda:SomeSource")....to("seda:notThere");
How can I configure the route so that if the exchange tries to route to an endpoint that doesn't already exist, an error is thrown?
I'm using Camel 2.9.x, and I've already experimented with the Dead Letter Channel and various Error Handler implementations, with (seemingly) no errors or warnings logged.
The only logging I see indicates that Camel is (attempting to) send to the endpoint which doesn't exist:
2013-07-03 16:07:08,030|main|DEBUG|o.a.c.p.SendProcessor|>>>> Endpoint[seda://notThere] Exchange[Message: x.y.Z#293b9fae]
Thanks in advance!
All endpoints behave differently in this case.
If you attempt to write to an FTP server that does not exist, you certainly get an error (connection refused or otherwise).
The same is true for a number of other endpoints.
SEDA queues get created if they do not exist, and the message is simply left there. So your route actually does send to "notThere", and the message will sit there until the application restarts or someone starts consuming from seda:notThere. This is how seda queues are designed. If you cap the size of the seda queue with to("seda:notThere?size=100"), then if no one is reading (or reading slowly) you will get exceptions from message 101 onward.
If you need to be sure some route is consuming your messages, use "direct" instead of "seda". You can even add a middle layer that combines seda's staging behaviour with direct's guarantee that an active consumer exists, which is useful if the recipient list is driven by (God forbid) user input:
from("whatever").recipentList( ... ); // "direct:ep1" work, "direct:ep2" throws exception
from("direct:ep1").to("seda:ep1");
from("seda:ep1").doRealStagedStuffHere();

Zookeeper Network Ensemble does not start appropriately

I've been working with zookeeper lately to fill a requirement of reliability in distributed applications. I'm working with three computers, and I followed this tutorial:
http://sanjivblogs.blogspot.ie/2011/04/deploying-zookeeper-ensemble.html
I followed it step by step to be sure I did it right, but now when I start my zookeepers with
./zkServer.sh start
I'm getting these exceptions for all my computers:
2013-04-05 21:46:58,995 [myid:2] - WARN [SendWorker:1:QuorumCnxManager$SendWorker#679] - Interrupted while waiting for message on queue
java.lang.InterruptedException
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:1961)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2038)
at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:342)
at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:831)
at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:62)
at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:667)
2013-04-05 21:46:58,995 [myid:2] - WARN [SendWorker:1:QuorumCnxManager$SendWorker#688] - Send worker leaving thread
2013-04-05 21:47:58,363 [myid:2] - WARN [RecvWorker:3:QuorumCnxManager$RecvWorker#762] - Connection broken for id 3, my id = 2, error =
java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:375)
at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:747)
But I don't know what I am doing wrong. My objective is to synchronize my zookeepers on different machines so that the service is always available. I went to the zookeeper.apache.org web page and looked for information on how to configure and start my zookeeper, but it describes the same steps I followed before.
If somebody could help me I would really appreciate it. Thanks in advance.
I needed to follow some strict steps to achieve this, but I finally got it done. If somebody is facing the same issue, please remember the following to build the zookeeper ensemble:
You need 3 zookeeper servers running (locally or over the network); this is the minimum number to achieve synchronization. On each server you need to create a file called "myid" (inside the data directory configured as dataDir), and each myid file must contain a unique sequential number; for instance, I have three zookeeper servers (folders), so I have one myid containing 1, another containing 2, and another containing 3.
Then in zoo.cfg it is necessary to establish the required parameters:
tickTime=2000
#dataDir=/var/lib/zookeeper
dataDir=/home/mtataje/var/zookeeper1
clientPort=2184
initLimit=10
syncLimit=20
server.1=192.168.3.41:2888:3888
server.2=192.168.3.41:2889:3889
server.3=192.168.3.41:2995:2999
The zoo.cfg varies from one server to another; in my case, because I was testing locally, I needed to change the clientPort and the dataDir.
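For instance, on my second local instance only these lines of zoo.cfg differed (the exact values are illustrative), plus its own myid file:
# zoo.cfg of the second instance
dataDir=/home/mtataje/var/zookeeper2
clientPort=2185
# and /home/mtataje/var/zookeeper2/myid contains just the number
2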
After that, execute:
./zkServer.sh start
Some exceptions may still appear at first, because at least two zookeepers must be up to synchronize; once you have started at least 2 of them, the exceptions should be gone.
Best regards.

Dealing with logs of CharConversionException in ServletRequestWrapper

I am working with a webapp that runs in a Tomcat 6 server.
With some requests (that come from specific types of clients), the getParameter method of ServletRequestWrapper internally handles all CharConversionExceptions, logging information about the exception to what I think is the server's standard output. The problem is that it can sometimes log sensitive data (such as a password)... for example, it can log things like this:
INFO: Character decoding failed. Parameter [pw] with value [holaãã%20%222522%2] has been ignored. Note that the name and value quoted here may be corrupted due to the failed decoding. Use debug level logging to see the original, non-corrupted values.
java.io.CharConversionException: EOF
at org.apache.tomcat.util.buf.UDecoder.convert(UDecoder.java:80)
at org.apache.tomcat.util.buf.UDecoder.convert(UDecoder.java:46)
at org.apache.tomcat.util.http.Parameters.urlDecode(Parameters.java:410)
at org.apache.tomcat.util.http.Parameters.processParameters(Parameters.java:370)
at org.apache.tomcat.util.http.Parameters.processParameters(Parameters.java:217)
at org.apache.catalina.connector.Request.parseParameters(Request.java:2647)
at org.apache.catalina.connector.Request.getParameter(Request.java:1106)
at org.apache.catalina.connector.RequestFacade.getParameter(RequestFacade.java:355)
at javax.servlet.ServletRequestWrapper.getParameter(ServletRequestWrapper.java:158)
at myClasss (myClass.java:666)
I am not looking to resolve the problem on the server, as I see it as a problem on the client side that the client must solve. I am looking to "hide" the value associated with the bad parameter that is written to the log.
I am not an expert on Tomcat's logging system and how to configure it. I visited and read some material (this and this too..) but couldn't find a clue pointing me in the right direction (if there is one..).
I've already taken a look at "ServletRequestWrapper or ServletResponseWrapper in production?", but there is no clue there about how to modify this internal message.
Well, thanks for everything!
Greetings,
Victor
First two remarks:
The wrong encoding is not strictly a client problem; there are just mismatched settings, so allow me to point to some server-side ones. Furthermore, searching for "servlet filter character encoding" will yield ServletFilters that set the request encoding before getParameter parses it (GET behaves differently than POST!); see the filter sketch after these remarks.
"%2" at the end is a bit suspicious, isn't it.
The output looks like log output, and indeed in Parameters.java I found org.apache.juli.logging.Log. This is yet another Tomcat logging library, seemingly based on java.util.logging, and you may raise the level to FATAL/ERROR for that class in WEB-INF/classes/logging.properties, i.e. set org.apache.tomcat.util.http.Parameters to SEVERE.
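A sketch of that setting, assuming Tomcat's JULI honours the standard java.util.logging property syntax; if the category is not picked up from the webapp's file, the same line can go into CATALINA_BASE/conf/logging.properties instead:
# WEB-INF/classes/logging.properties
org.apache.tomcat.util.http.Parameters.level = SEVERE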
