Axis2 getSOAPEnvelope() performance issue - java

Using Axis2 on Solaris, I've noticed that the message.getSOAPEnvelope() call maxes out the server (CPU idle drops to 0.0). The call takes about 10 seconds, and then the load returns to normal. That seems crazy for a single method, especially something built into Axis.
Can anyone suggest a solution? I've not been able to find anything similar online.
// get message for sending
Message message = getSOAPMessage();
...
message = signSOAPEnvelope(message.getSOAPEnvelope()); // problem
...
SOAPEnvelope retMsg = (SOAPEnvelope) call.invoke(message.getSOAPEnvelope()); // problem
--- ADDITIONAL INFORMATION ---
OK, so the issue lies in the SAXParser.parse() method called by Axis (not Axis2, by the way). I've done some further tests with other messages.
My application builds the SOAP envelope and adds the message body to it. I've taken a message from another application that I know works, and after building the SOAP envelope I override the message string with that older XML. So the SOAP envelope is identical in both cases, yet the XML from the other project parses quickly while my new XML doesn't. The crazy thing is that the older XML is larger, so it should take longer. Below are the relevant XML examples, as I can't work out why one should work and the other not.
WORKS OK: larger, older XML
<ns2:applicationDetailSearchQuery
    xmlns:ns2="http://www.company.com.au/wib/ID/schema/query"
    xmlns:ns3="http://www.company.com.au/wib/Counterparty/schema/query"
    xmlns:tns="http://www.company.com.au/wib/icc/schema/query">
  <tns:queryID scheme="http://www.company.com.au/treasury/idbb/queryid">44051</tns:queryID>
  <tns:queryType>ApplicationDetailSearch</tns:queryType>
  <tns:pageSize>10000</tns:pageSize>
  <ns2:parameters>
    <ns2:tradeIdList>
      <ns2:tradeId>111111</ns2:tradeId>
    </ns2:tradeIdList>
    <ns2:queryByHeadDealId>N</ns2:queryByHeadDealId>
    <ns2:retrieveSchedule>N</ns2:retrieveSchedule>
    <ns2:retrieveCashFlowDeals>Y</ns2:retrieveCashFlowDeals>
    <ns2:dealType>BOND</ns2:dealType>
  </ns2:parameters>
</ns2:applicationDetailSearchQuery>
REALLY SLOW: smaller XML???
<ns5:querySetRequest setId="1"
    xmlns:ns2="http://schemas.company.com.au/ttt/icc/common/header-V2-0"
    xmlns:ns4="http://schemas.company.com.au/ttt/icc/Services/FXC/TradeEnquiryServiceEnvelope"
    xmlns:ns3="http://schemas.company.com.au/ttt/icc/common/envelopemsg-V2-0"
    xmlns:ns5="http://webservice.common.ttt/queryservice/types">
  <ns5:query queryName="RemainingBalanceQuery" queryID="1">
    <ns5:parameter value="FWD:169805" type="String" name="KondorId"/>
    <ns5:parameter value="0.9592" type="Decimal" name="ExchgRate"/>
    <ns5:parameter value="USD" type="String" name="CurrencyCode"/>
    <ns5:parameter value="09/08/2011" type="String" name="MatDate"/>
  </ns5:query>
</ns5:querySetRequest>
Any ideas what might be causing the excessive CPU usage for this second piece of XML?

This turned out to be an issue with excessive logging from the SAXParser. When I set logging to WARN for the relevant packages, it ran in milliseconds. Crazy stuff!
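For reference, the kind of change involved is something like this in log4j.properties (assuming commons-logging backed by log4j; the package below is illustrative, so set WARN on whichever loggers actually appear in the noisy output):

# illustrative only -- raise the level on whichever packages were flooding the log
log4j.logger.org.apache.axis=WARN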

Related

akka.pattern.AskTimeoutException while running Lagom HelloWorld example

I ran into a problem while trying my hand at the Hello World example explained here.
Note that I have only modified the HelloEntity.java file so it can return something other than "Hello, World!". Most likely my changes are taking time, and hence I am getting the timeout error below.
I am currently doing a PoC on a single node to understand the Lagom framework and do not have the liberty to deploy multiple nodes.
I have also tried modifying the default lagom.circuit-breaker in application.conf (call-timeout = 100s); however, this does not seem to have helped.
Following is the exact error message for your reference:
{"name":"akka.pattern.AskTimeoutException: Ask timed out on [Actor[akka://hello-impl-application/system/sharding/HelloEntity#1074448247]] after [5000 ms]. Sender[null] sent message of type \"com.lightbend.lagom.javadsl.persistence.CommandEnvelope\".","detail":"akka.pattern.AskTimeoutException: Ask timed out on [Actor[akka://hello-impl-application/system/sharding/HelloEntity#1074448247]] after [5000 ms]. Sender[null] sent message of type \"com.lightbend.lagom.javadsl.persistence.CommandEnvelope\".\n\tat akka.pattern.PromiseActorRef$.$anonfun$defaultOnTimeout$1(AskSupport.scala:595)\n\tat akka.pattern.PromiseActorRef$.$anonfun$apply$1(AskSupport.scala:605)\n\tat akka.actor.Scheduler$$anon$4.run(Scheduler.scala:140)\n\tat scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:866)\n\tat scala.concurrent.BatchingExecutor.execute(BatchingExecutor.scala:109)\n\tat scala.concurrent.BatchingExecutor.execute$(BatchingExecutor.scala:103)\n\tat scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:864)\n\tat akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(LightArrayRevolverScheduler.scala:328)\n\tat akka.actor.LightArrayRevolverScheduler$$anon$4.executeBucket$1(LightArrayRevolverScheduler.scala:279)\n\tat akka.actor.LightArrayRevolverScheduler$$anon$4.nextTick(LightArrayRevolverScheduler.scala:283)\n\tat akka.actor.LightArrayRevolverScheduler$$anon$4.run(LightArrayRevolverScheduler.scala:235)\n\tat java.lang.Thread.run(Thread.java:748)\n"}
Question: Is there a way to increase the Akka timeout by modifying application.conf or any of the Java source files in the Hello World project? Can you please help me with the exact details?
Thanks in advance for your time and help.
The call timeout is the timeout for circuit breakers, which is configured using lagom.circuit-breaker.default.call-timeout. But that's not what is timing out above; what is timing out is the request to your HelloEntity, and that timeout is configured using lagom.persistence.ask-timeout. The reason there is a timeout on requests to entities is that in a multi-node environment your entities are sharded across nodes, so an ask on them may go to another node, and a timeout is needed in case that node is not responding.
All that said, I don't think changing the ask-timeout will solve your problem. If you have a single node, then your entities should respond instantly if everything is working OK.
Is that the only error you're seeing in the logs?
Are you seeing this in devmode (ie, using the runAll command), or are you running the Lagom service some other way?
Is your database responding?
Thanks James for the help/pointer.
Adding the following lines to resources/application.conf did the trick for me:
lagom.persistence.ask-timeout = 30s

hello {
  ..
  ..
  call-timeout = 30s
  call-timeout = ${?CIRCUIT_BREAKER_CALL_TIMEOUT}
  ..
}
A Call is service-to-service communication: a ServiceClient communicating with a remote server. It uses a circuit breaker. It is an extra-service call.
An ask (in the context of lagom.persistence) is sending a command to a persistent entity. That happens across the nodes inside your Lagom service. It does not use circuit breaking. It is an intra-service call.
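To make the distinction concrete, here is a rough sketch in the shape of the Hello World sample (the class, command, and constructor shapes are assumptions based on that sample, not exact code):

import java.util.concurrent.CompletionStage;
import com.lightbend.lagom.javadsl.persistence.PersistentEntityRef;
import com.lightbend.lagom.javadsl.persistence.PersistentEntityRegistry;

public class TimeoutExamples {
    private final PersistentEntityRegistry registry;
    private final HelloService helloService; // the sample's service interface

    public TimeoutExamples(PersistentEntityRegistry registry, HelloService helloService) {
        this.registry = registry;
        this.helloService = helloService;
    }

    // Intra-service "ask": a command sent to a (sharded, possibly remote) entity.
    // This is what lagom.persistence.ask-timeout bounds.
    public CompletionStage<String> askEntity(String id) {
        PersistentEntityRef<HelloCommand> ref = registry.refFor(HelloEntity.class, id);
        return ref.ask(new HelloCommand.Hello(id)); // command shape assumed from the sample
    }

    // Extra-service "Call": a ServiceClient invocation, guarded by the circuit breaker.
    // This is what lagom.circuit-breaker.<id>.call-timeout bounds.
    public CompletionStage<String> callService(String id) {
        return helloService.hello(id).invoke();
    }
}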

Mulesoft Dataweave, LDAP to SOAP large message truncating at certain size. Size limit?

(question tldr at end)
So my task for the Mule "Transform Message" component is to take a bunch of user info from an LDAP directory service and provide it to an old database endpoint using SOAP. Fairly simple transform stuff.
The main catch with this operation is the size of the message that has to be provided to the endpoint. The entire payload has to be provided in a single message; otherwise the service will remove all entries that are not part of the payload (there is no explicit 'delete' service). This is a problem because the directory holds roughly 20,000 users, making every message about 5 MB in size.
My flow in Mule Studio currently works with a small number of users returned from the LDAP component: the endpoint returns successfully and I can see the data updated in the legacy environment. When applying a more 'production-realistic' load, the Web Service Consumer (SOAP) fails with an odd exception (unexpected EOF/character).
So I stuck a File component in the middle to dump and check the message being sent to the Consumer. The message is indeed getting cut off before it finishes, which is where the EOF is coming from.
This is the DataWeave transform script:
%dw 1.0
%output application/xml
%namespace ns0 test.namespace.com
---
{
  ns0#updateContact: {
    ns0#ContactType: "Primary",
    ns0#ContactDetails: {
      (payload map {
        (ns0#ContactDetailElem: {
          ns0#personID: $.personID,
          ns0#contactDetail: $.desc
        }) when $.personID != null
      })
    }
  }
}
The expected output is below and is produced successfully with a smaller payload.
<?xml version='1.0' encoding='windows-1252'?>
<ns0:updateContact xmlns:ns0="test.namespace.com">
  <ns0:ContactType>Primary</ns0:ContactType>
  <ns0:ContactDetails>
    <../>
    <ns0:ContactDetailElem>
      <ns0:personID>{Integer}</ns0:personID>
      <ns0:contactDetail>{String.detail}</ns0:contactDetail>
    </ns0:ContactDetailElem>
    <../>
  </ns0:ContactDetails>
</ns0:updateContact>
With the big payload, the following happens at the end of the file:
<?xml version='1.0' encoding='windows-1252'?>
<ns0:updateContact xmlns:ns0="test.namespace.com">
  <ns0:ContactType>Primary</ns0:ContactType>
  <ns0:ContactDetails>
    <../>
    <ns0:ContactDetailElem>
      <ns0:personID>{Integer}</ns0:personID>
      <ns0:contactDetail>{String.detail}</ns0:contactDetail>
    </ns0:ContactDeta
This looks like a typo, but it is the message being cut off before it can finish. The file always stops at 3,553,099 characters. Of course this makes the flow fail, as the XML is invalid.
The question, then: is there a limit on the message size that the DataWeave transformer can create? If this is a configuration issue rather than a legitimate bug, where would I find the setting? I've had a look around but can't find anyone else encountering this type of issue.
TL;DR: Do DataWeave transform messages have a size limit of around 3.38 MB?
Exception caused by: com.ctc.wstx.exc.WstxEOFException: Unexpected EOF in prolog
PS: I found the documentation on DataWeave streaming after typing this up and will see if it helps my situation. Otherwise I'm considering a workaround that constructs the message outside DataWeave and then passes it to the Consumer.
Are you using Mule version 3.8.3? Try 3.8.4; it fixed a bug in DataWeave that caused strings to be cut off in some cases.
We had a similar problem, the same size issue as yours. We implemented streaming using StAX.
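For what it's worth, here is a rough sketch of that StAX approach in plain Java, outside Mule (the element names mirror the expected output above; the record shape is just an illustrative stand-in for the LDAP entries): the payload is written incrementally to a stream instead of being materialized as one giant string.

import java.io.OutputStream;
import java.util.List;
import javax.xml.stream.XMLOutputFactory;
import javax.xml.stream.XMLStreamWriter;

public class ContactPayloadWriter {

    private static final String NS = "test.namespace.com";

    // contacts: [personID, contactDetail] pairs -- illustrative stand-in for the LDAP entries
    public static void write(OutputStream out, List<String[]> contacts) throws Exception {
        XMLStreamWriter w = XMLOutputFactory.newInstance()
                .createXMLStreamWriter(out, "windows-1252");
        w.writeStartDocument("windows-1252", "1.0");
        w.writeStartElement("ns0", "updateContact", NS);
        w.writeNamespace("ns0", NS);
        w.writeStartElement("ns0", "ContactType", NS);
        w.writeCharacters("Primary");
        w.writeEndElement();
        w.writeStartElement("ns0", "ContactDetails", NS);
        for (String[] c : contacts) {
            w.writeStartElement("ns0", "ContactDetailElem", NS);
            w.writeStartElement("ns0", "personID", NS);
            w.writeCharacters(c[0]);
            w.writeEndElement();
            w.writeStartElement("ns0", "contactDetail", NS);
            w.writeCharacters(c[1]);
            w.writeEndElement();
            w.writeEndElement(); // ContactDetailElem
        }
        w.writeEndElement(); // ContactDetails
        w.writeEndElement(); // updateContact
        w.writeEndDocument();
        w.flush();
        w.close();
    }
}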

Grizzly maxPendingBytes property ignored

I'm developing a project using Grizzly 2.3.22 with its WebSocket support. Everything was OK until an OOM happened. Looking through the dump, I found that all the memory was eaten up by a single org.glassfish.grizzly.nio.transport.TCPNIOConnection holding a huge (1.5 GB) write queue. I guess one of the client developers was debugging their connected application and sat on a breakpoint for a long time. In any case, this can easily happen if a client has a very slow connection, and my server should be ready for that.
In the Grizzly documentation I found the maxPendingBytes property, which seems like a solution, at least for now. But I cannot get it to work at all. I set the log level to ALL for AbstractNIOAsyncQueueWriter, connect with the client, put it on hold, and watch the server's queue grow like this:
TRACE 2016-07-05 21:02:26.330 [nioEventLoopGroup-2-1] o.g.g.n.AbstractNIOAsyncQueueWriter - AsyncQueueWriter.write connection=TCPNIOConnection{localSocketAddress={/127.0.0.1:8445}, peerSocketAddress={/127.0.0.1:56185}}, record=org.glassfish.grizzly.asyncqueue.AsyncWriteQueueRecord#1e35bafb, directWrite=false, size=165, isUncountable=false, bytesToReserve=165, pendingBytes=16170
TRACE 2016-07-05 21:02:26.368 [nioEventLoopGroup-2-1] o.g.g.n.AbstractNIOAsyncQueueWriter - AsyncQueueWriter.write connection=TCPNIOConnection{localSocketAddress={/127.0.0.1:8445}, peerSocketAddress={/127.0.0.1:56185}}, record=org.glassfish.grizzly.asyncqueue.AsyncWriteQueueRecord#3d6e05dd, directWrite=false, size=165, isUncountable=false, bytesToReserve=165, pendingBytes=16335
...
With maxPendingBytes=10000 I expect an exception to be thrown once pendingBytes in the log above exceeds 10000, but it never happens.
Moreover, I tried debugging the server against Grizzly's source code and found that while the property's value does get assigned to the NIOConnection.maxAsyncWriteQueueSize field, the AbstractNIOAsyncQueueWriter.canWrite(...) method, the only place where that field seems to be used, is never called.
I'm at a loss. Am I missing something here?
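For reference, a rough sketch of how the property is typically set, as I understand the Grizzly 2.3 documentation (treat the exact method names as assumptions to verify against your version):

import org.glassfish.grizzly.nio.transport.TCPNIOTransport;
import org.glassfish.grizzly.nio.transport.TCPNIOTransportBuilder;

public class PendingBytesSetup {

    public static TCPNIOTransport buildTransport() {
        TCPNIOTransport transport = TCPNIOTransportBuilder.newInstance().build();

        // Transport-wide limit applied by the asynchronous write queue.
        transport.getAsyncQueueIO().getWriter().setMaxPendingBytes(10000);

        // Alternatively, per connection (the field seen in the debugger):
        // connection.setMaxAsyncWriteQueueSize(10000);

        return transport;
    }
}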

Huge performance issue using camel routes in karaf

I have a tricky issue with Karaf, and having tried all day to fix it, I need your insights. Here is the problem:
I have Camel routes (pure Java DSL) that get data from two sources, process it, and then send the results to Redis:
- when run as a standalone application (with a Main class and "java -jar myjar.jar"), the data is processed and saved in less than 20 minutes
- when run as a bundle (part of another feature, actually) on the same machine, it takes about 10 hours
EDIT: I forgot to add: I use Camel 2.1.0 and Karaf 2.3.2.
Now, we are in the process of refactoring our SI to Karaf features, so sadly it's not really possible to just keep the standalone app.
I tried playing with the Karaf Java memory options, using a cluster (I failed :D), playing with SEDA and thread pools, and replacing all direct routes with SEDA, without success. A dev:create-dump shows a lot of:
thread #38 - Split" Id=166 BLOCKED on java.lang.Class#56d1396f owned by "Camel (camelRedisProvisioning)
Could it be an issue with split and parallelProcessing in Karaf? The standalone app indeed shows a LOT more CPU activity.
Here are my Camel routes:
//start with a quartz and a cron tab
from("quartz://provisioning/topOffersStart?cron=" + cronValue.replace(' ', '+')).multicast()
.parallelProcessing().to("direct:prodDAO", "direct:thesaurus");
//get from two sources and process
from("direct:prodDAO").bean(ProductsDAO.class)
.setHeader("_type", constant(TopExport.PRODUCT_TOP))
.setHeader("topOffer", constant("topOffer"))
.to("direct:topOffers");
from("direct:thesaurus")
.to(thesaurusUri).unmarshal(csv).bean(ThesaurusConverter.class, "convert")
.setHeader("_type", constant(TopExport.CATEGORY_TOP))
.setHeader("topOffer", constant("topOffer"))
.to("direct:topOffers");
//processing
from("direct:topOffers").choice()
.when(isCategory)
.to("direct:topOffersThesaurus")
.otherwise()
.when(isProduct)
.to("direct:topOffersProducts")
.otherwise()
.log(LoggingLevel.ERROR, "${header[_type]} is not valid !")
.endChoice()
.endChoice()
.end();
from("direct:topOffersThesaurus")
//here is where I think the problem comes
.split(body()).parallelProcessing().streaming()
.bean(someprocessing)
.to("direct:toRedis");
from("direct:topOffersProducts")
//here is where I think the problem comes
.split(body()).parallelProcessing().streaming()
.bean(someprocessing)
.to("direct:toRedis");
//save into redis
from("direct:toRedis")
.setHeader("CamelRedis.Key", simple("provisioning:${header[_topID]}"))
.setHeader("CamelRedis.Command", constant("SETEX"))
.setHeader("CamelRedis.Timeout", constant("90000"))//25h
.setHeader("CamelRedis.Value", simple("${body}"))
.to("spring-redis://?redisTemplate=#provisioningRedisTemplateStringSerializer");
NB: the body sent to direct:topOffers[products|thesaurus] is a list of POJOs (all of the same class).
Thanks to anyone who can help.
EDIT:
I think I narrowed it down to a deadlock on JAXB. In my routes I make lots of calls to a Java client that calls a web service. When running under Karaf, threads are blocked here:
java.lang.Thread.State: BLOCKED (on object monitor) at com.sun.xml.bind.v2.runtime.reflect.opt.AccessorInjector.prepare(AccessorInjector.java:78)
Further down the stack trace we see the unmarshalling method used to transform the XML into objects; these are the two lines I suspect:
final JAXBContext context = JAXBContext.newInstance(clazz.getPackage().getName());
final Unmarshaller um = context.createUnmarshaller();
I removed the final keywords, with no improvement. Maybe it's something to do with the JAXB used by Karaf? I do not install any JAXB implementation with the bundle.
Nailed it!
As seen above, it was indeed linked to a deadlock on the JAXB context in my web service client.
What I did:
- refactored the old client code by removing the final keyword on the Marshaller/Unmarshaller objects (I think the deadlock came from there, even though it was the exact same code when running standalone)
- instantiated the context based on the package, and only once (see the sketch below)
I must admit classloader issues with OSGi had me banging my head on my desk for a few hours, but thanks to "Why can't JAXB find my jaxb.index when running inside Apache Felix?" I managed to fix that.
Granted, my threads are now sleeping instead of blocked, but I now process my data in less than 30 minutes, so that's good enough for me.
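For reference, the "context built once" shape looks roughly like this (package and class names below are illustrative, not the real client):

import java.io.InputStream;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.JAXBException;
import javax.xml.bind.Unmarshaller;

public final class WsResponseReader {

    // Built once: JAXBContext is thread-safe but expensive to create, and creating
    // it per call is what the threads were piling up on.
    private static final JAXBContext CONTEXT;

    static {
        try {
            // Illustrative package name; passing the bundle's own classloader helps
            // JAXB find jaxb.index / ObjectFactory under OSGi.
            CONTEXT = JAXBContext.newInstance("com.example.ws.generated",
                    WsResponseReader.class.getClassLoader());
        } catch (JAXBException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    private WsResponseReader() {
    }

    public static Object unmarshal(InputStream in) throws JAXBException {
        // Unmarshaller is NOT thread-safe and is cheap: create a new one per call.
        Unmarshaller um = CONTEXT.createUnmarshaller();
        return um.unmarshal(in);
    }
}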

Dealing with logs of CharConversionException in ServletRequestWrapper

I am working with a webapp that runs in a Tomcat 6 server.
With some requests (coming from specific types of clients), the getParameter method of ServletRequestWrapper internally handles CharConversionException and logs information about the exception to what I think is the server's standard output. The problem is that it can sometimes log sensitive data (such as passwords)... for example, it can log things like this:
INFO: Character decoding failed. Parameter [pw] with value [holaãã%20%222522%2] has been ignored. Note that the name and value quoted here may be corrupted due to the failed decoding. Use debug level logging to see the original, non-corrupted values.
java.io.CharConversionException: EOF
at org.apache.tomcat.util.buf.UDecoder.convert(UDecoder.java:80)
at org.apache.tomcat.util.buf.UDecoder.convert(UDecoder.java:46)
at org.apache.tomcat.util.http.Parameters.urlDecode(Parameters.java:410)
at org.apache.tomcat.util.http.Parameters.processParameters(Parameters.java:370)
at org.apache.tomcat.util.http.Parameters.processParameters(Parameters.java:217)
at org.apache.catalina.connector.Request.parseParameters(Request.java:2647)
at org.apache.catalina.connector.Request.getParameter(Request.java:1106)
at org.apache.catalina.connector.RequestFacade.getParameter(RequestFacade.java:355)
at javax.servlet.ServletRequestWrapper.getParameter(ServletRequestWrapper.java:158)
at myClasss (myClass.java:666)
I am not looking to resolve the problem on the server, as I see it as a client problem that the client must solve. I just want to hide the value associated with the bad parameter that is written to the log.
I am not an expert on the Tomcat logging system or how to configure it. I visited and read some material (this and this too...) but couldn't find a clue that pointed me in the right direction (if there is one...).
I've already taken a look at "ServletRequestWrapper or ServletResponseWrapper in production?", but there is no clue there about how to modify this internal message.
Well, thanks for everything!
Greetings,
Victor
First, two remarks:
The wrong encoding is not strictly a client problem; there are just mismatched settings, so allow me to point to some server settings. Furthermore, searching for "servlet filter character encoding" will yield servlet filters that set the request encoding correctly before getParameter runs (a sketch follows these remarks). Note that GET behaves differently from POST!
"%2" at the end is a bit suspicious, isn't it?
The output is indeed log output: in Parameters.java I found org.apache.juli.logging.Log. This is yet another Tomcat logging facility, apparently based on java.util.logging, and you can silence these messages by raising the level for org.apache.tomcat.util.http.Parameters to SEVERE in WEB-INF/classes/logging.properties.
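For example, something along these lines in the webapp's WEB-INF/classes/logging.properties should suppress that INFO line (a minimal sketch; a real per-webapp JULI config usually also declares its handlers):

org.apache.tomcat.util.http.Parameters.level = SEVERE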
