Spring freezes after a number of frequent refresh / AJAX calls - Java

I have a spring application that seems to work fine, aside from the fact that after 5-6 requests, it halts and fails to handle any new incoming requests.
My page has a select dropdown, and the onChange handler of that dropdown makes an AJAX call to the Spring server. After about 5 or 6 of these, no more are accepted, and refreshing the page hangs indefinitely.
Any idea what may be causing this? Let me know if you need more information as far as configuration files and the like, but I was hoping this was a common enough problem where I could be pointed in the right direction.
Thanks
EDIT
Here is my AJAX code, called onChange:
$.ajax({
    url: "./service.go?data=" + data,
    dataType: "json",
    timeout: 15000,
    cache: false,
    success: function(data) {
        ...
    },
    error: function(request, cause, data) {
        if (cause === "timeout") {
            alert("Request timed out!");
        } else {
            alert("ERROR: " + data.responseText);
        }
    }
});
However, I don't think this is the issue, because even if I don't use AJAX at all and just hit refresh over and over in the browser, it fails.
With further testing, the problem does not occur when I hit a mapping that doesn't require database connectivity, so maybe it has something to do with my Hibernate pool configuration? If I refresh a page that requires database connectivity, the problem occurs on the 10th request, consistently. Here is my Hibernate c3p0 configuration:
driverClassName=com.sybase.jdbc3.jdbc.SybDriver
url=jdbc: HIDDEN
username=
password=
# Number of Connections a pool will try to acquire upon startup
initialPoolSize=5
# Minimum number of Connections a pool will maintain at any given time
minPoolSize=1
# Maximum number of Connections a pool will maintain at any given time
maxPoolSize=20
# Connections to acquire when the pool is exhausted
acquireIncrement=5
# Seconds a Connection can remain pooled but unused before being discarded. 30 Min Check
maxIdleTime=1800
#Test all idle, pooled but unchecked-out connections, every this number of seconds
idleConnectionTestPeriod=300
Using these properties, I define my pool bean as follows:
<bean id="dsrc" class="com.mchange.v2.c3p0.ComboPooledDataSource">
<property name="driverClass" value="${driverClassName}" />
<property name="jdbcUrl" value="${url}" />
<property name="user" value="${username}" />
<property name="password" value="${password}" />
<property name="initialPoolSize" value="${initialPoolSize}" />
<property name="minPoolSize" value="${minPoolSize}" />
<property name="maxPoolSize" value="${maxPoolSize}" />
<property name="acquireIncrement" value="${acquireIncrement}" />
<property name="maxIdleTime" value="${maxIdleTime}" />
<property name="idleConnectionTestPeriod" value="${idleConnectionTestPeriod}" />
</bean>
Here is the simple function in my controller that is hit, until the 10th time:
@RequestMapping(method = RequestMethod.GET, value = "/test")
public @ResponseBody String test() {
    System.out.println("Hello");
    // List<Object> objects = objectService.getObjects(station); // calls the Hibernate DAO; when this is used instead of the println, it halts after the 10th call
    return "";
}
So I know it is hitting the controller, due to the printout, until the 10th time. After that, I am unsure how to tell whether the request is even reaching the server; all I know is that the mapping is no longer hit.

Probably something is wrong with your DB connection handling; it might be that connections are not being returned to the pool. Make sure that you are closing all your connections (and Hibernate sessions), ideally in a finally block.
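For illustration, here is what proper cleanup looks like in a DAO method that opens a session per call. This is only a sketch (the question doesn't show the DAO, so sessionFactory, StationObject, and the query are assumptions):

public List<Object> getObjects(String station) {
    // Sketch only - the real DAO isn't shown in the question.
    Session session = sessionFactory.openSession(); // org.hibernate.Session
    try {
        return session.createQuery("from StationObject o where o.station = :station")
                .setParameter("station", station)
                .list();
    } finally {
        // Without this, every request permanently checks a connection
        // out of the c3p0 pool until the pool is exhausted.
        session.close();
    }
}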

Related

ActiveMQ Pending messages

I have a problem with ActiveMQ similar to this one:
http://activemq.2283324.n4.nabble.com/Messages-stuck-in-pending-td4617979.html
and already tried the solution posted here.
Some messages seem to get stuck on the queue and can sit there for literally days without being consumed. I have more than enough consumers that are free most of the time, so it's not an issue of "saturation" of consumers.
Upon restart of ActiveMQ, SOME of the pending messages are consumed right away. Just a moment ago I had a situation with 25 free consumers available for the queue (they are visible in the admin panel) and 7 of those "stuck" messages. Four of them were consumed right away, but the other 3 are still stuck. The other strange thing is that new messages kept coming to the queue and were consumed right away, while the 3 old ones stayed stuck.
On the consumer side my config in spring looks as follows:
<jms:listener-container concurrency="${activemq.concurrent.consumers}" prefetch="1">
<jms:listener destination="queue.request" response-destination="queue.response" ref="requestConsumer" method="onRequest"/>
</jms:listener-container>
<bean id="prefetchPolicy" class="org.apache.activemq.ActiveMQPrefetchPolicy">
<property name="queuePrefetch" value="1" />
</bean>
<bean id="connectionFactory" class="org.apache.activemq.spring.ActiveMQConnectionFactory">
<property name="brokerURL" value="${activemq.broker.url}?initialReconnectDelay=100&maxReconnectDelay=10000&startupMaxReconnectAttempts=3"/>
<property name="prefetchPolicy" ref="prefetchPolicy"/>
</bean>
The "stuck" messages are probably considered as "in delivery", restarting the broker will close the connections and, as the message are yet not acknowledged, the broker considers them as not delivered and will deliver them again.
There may be several problem leading to such a situation, most common ones are a problem in transaction / acknowledgment configuration, bad error / acknowledgment management on consumer side (the message is consumed but never acknowledged) or consumer being stuck on an endless operation (for example a blocking call to a third party resource which doesn't respond and there is no timeout handling).
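To make the acknowledgment pitfall concrete, here is a minimal sketch in plain javax.jms (the question uses Spring's listener container, which normally acknowledges for you; this only illustrates the failure mode, and process() is a hypothetical business method):

Connection connection = connectionFactory.createConnection(); // e.g. an ActiveMQConnectionFactory
connection.start();
Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
MessageConsumer consumer = session.createConsumer(session.createQueue("queue.request"));
Message message = consumer.receive(5000);
if (message != null) {
    process(message); // hypothetical business logic
    // If an exception is swallowed before this line, the broker keeps the
    // message "in delivery" until the connection closes - exactly the
    // "stuck until restart" behavior described above.
    message.acknowledge();
}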

Disgraceful shutdown of Camel Route

How can I shut down a Camel route context disgracefully?
As soon as I click the button, the Camel route should stop immediately. I don't want any delay.
Each time I call camelroute.context.stop(), it takes some time to stop, and during that time the still-active route keeps dequeuing messages and sending them to the target queue.
I want to stop the route mid-way when I click the desired button.
Is there a way to handle it?
Have a look at the timeout property of the DefaultShutdownStrategy.
Try setting it to zero in your Camel Context:
<bean id="shutdownStrategy" class="org.apache.camel.impl.DefaultShutdownStrategy">
<property name="timeout" value="0"/>
</bean>
The value is in seconds by default.
Also, have a look at Graceful Shutdown in the Camel docs, if you haven't yet.
EDIT 1: The DefaultShutdownStrategy does not allow 0 timeouts. You could try setting it to 1 NANOSECOND which might help:
<bean id="shutdownStrategy" class="org.apache.camel.impl.DefaultShutdownStrategy">
<property name="timeout" value="1"/>
<property name="timeUnit" value="NANOSECONDS" /
</bean>
Alternatively, you can implement your own ShutdownStrategy if it's really important for you to guarantee absolute immediate shutdown.
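If you need to do this programmatically rather than in Spring XML, the same settings can be applied to the context's shutdown strategy directly (a sketch against the Camel 2.x API):

// Sketch: configure the shutdown strategy in Java before stopping.
import java.util.concurrent.TimeUnit;
...
camelContext.getShutdownStrategy().setTimeout(1);
camelContext.getShutdownStrategy().setTimeUnit(TimeUnit.NANOSECONDS);
camelContext.stop();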

Spring Batch - Not all records are being processed from MQ retrieval

I am fairly new to Spring and Spring Batch, so feel free to ask clarifying questions.
I am seeing an issue with Spring Batch that I cannot recreate in our test or local environments. We have a daily job that connects to Websphere MQ via JMS and retrieves a set of records. This job uses the out-of-the-box JMS ItemReader. We implement our own ItemProcessor, but it doesn't do anything special other than logging. There are no filters or processing that should affect incoming records.
The problem is that out of the 10,000+ daily records on MQ, only about 700 or so (the exact number is different each time) usually get logged in the ItemProcessor. All records are successfully pulled off the queue. The number of records logged is different each time and seems to have no pattern. By comparing the log files against the list of records in MQ, we can see that a seemingly random subset of records are being "processed" by our job. The first record might get picked up, then 50 are skipped, then 5 in a row, etc. And the pattern is different each time the job runs. No exceptions are logged either.
When running the same app in localhost and test using the same data set, all 10,000+ records are successfully retrieved and logged by the ItemProcessor. The job runs between 20 and 40 seconds in Production (also not constant), but in test and local it takes several minutes to complete (which obviously makes sense since it is handling so many more records).
So this is one of those tough issues to troubleshoot, since we cannot recreate it. One idea is to implement our own ItemReader and add additional logging so we can see whether records are getting lost before or after the reader - all we know right now is that only a subset of records is handled by the ItemProcessor (a sketch of that reader follows below). But even that will not solve our problem, and it will be somewhat time-consuming to implement considering it is diagnostic rather than a fix.
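For reference, the delegating reader we are considering would look roughly like this (a sketch only, not yet implemented; it logs each JMSMessageID so the log can be diffed against the queue contents):

import javax.jms.Message;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.batch.item.ItemReader;

public class LoggingItemReader implements ItemReader<Message> {
    private static final Logger LOGGER = LoggerFactory.getLogger(LoggingItemReader.class);
    private final ItemReader<Message> delegate;

    public LoggingItemReader(ItemReader<Message> delegate) {
        this.delegate = delegate; // wraps the out-of-the-box JMS ItemReader
    }

    public Message read() throws Exception {
        Message message = delegate.read();
        if (message != null) {
            LOGGER.info("Read message {}", message.getJMSMessageID());
        }
        return message;
    }
}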
Has anyone else seen an issue like this? Any possible ideas or troubleshooting suggestions would be greatly appreciated. Here are some of the jar version numbers we are using for reference.
Spring - 3.0.5.RELEASE
Spring Integration - 2.0.3.RELEASE
Spring Batch - 2.1.7.RELEASE
Active MQ - 5.4.2
Websphere MQ - 7.0.1
Thanks in advance for your input.
EDIT: Per request, code for processor:
public SMSReminderRow process(Message message) throws Exception {
SMSReminderRow retVal = new SMSReminderRow();
LOGGER.debug("Converting JMS Message to ClaimNotification");
ClaimNotification notification = createClaimNotificationFromMessage(message);
retVal.setShortCode(BatchCommonUtils
.parseShortCodeFromCorpEntCode(notification.getCorpEntCode()));
retVal.setUuid(UUID.randomUUID().toString());
retVal.setPhoneNumber(notification.getPhoneNumber());
retVal.setMessageType(EventCode.SMS_CLAIMS_NOTIFY.toString());
DCRContent content = tsContentHelper.getTSContent(Calendar
.getInstance().getTime(),
BatchCommonConstants.TS_TAG_CLAIMS_NOTIFY,
BatchCommonConstants.TS_TAG_SMSTEXT_TYP);
String claimsNotificationMessage = formatMessageToSend(content.getContent(),
notification.getCorpEntCode());
retVal.setMessageToSend(claimsNotificationMessage);
retVal.setDateTimeToSend(TimeUtils
.getGMTDateTimeStringForDate(new Date()));
LOGGER.debug(
"Finished processing claim notification for {}. Writing row to file.",
notification.getPhoneNumber());
return retVal;
}
JMS config:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:context="http://www.springframework.org/schema/context"
xmlns:tx="http://www.springframework.org/schema/tx"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd
http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx.xsd">
<bean id="claimsQueueConnectionFactory" class="org.springframework.jndi.JndiObjectFactoryBean">
<property name="jndiName" value="jms/SMSClaimNotificationCF" />
<property name="lookupOnStartup" value="true" />
<property name="cache" value="true" />
<property name="proxyInterface" value="javax.jms.ConnectionFactory" />
</bean>
<bean id="jmsDestinationResolver"
class="org.springframework.jms.support.destination.DynamicDestinationResolver">
</bean>
<bean id="jmsJndiDestResolver"
class=" org.springframework.jms.support.destination.JndiDestinationResolver"/>
<bean id="claimsJmsTemplate" class="org.springframework.jms.core.JmsTemplate">
<property name="connectionFactory" ref="claimsQueueConnectionFactory" />
<property name="defaultDestinationName" value="jms/SMSClaimNotificationQueue" />
<property name="destinationResolver" ref="jmsJndiDestResolver" />
<property name="pubSubDomain">
<value>false</value>
</property>
<property name="receiveTimeout">
<value>20000</value>
</property>
</bean>
As a rule, MQ will NOT lose messages when properly configured. The question then is what does "properly configured" look like?
Generally, lost messages are caused by non-persistence or non-transactional GETs.
If non-persistent messages are traversing QMgr-to-QMgr channels and NPMSPEED(FAST) is set then MQ will not log errors if they are lost. That is what those options are intended to be used for so no error is expected.
Fix: Set NPMSPEED(NORMAL) on the QMgr-to-QMgr channel or make the messages persistent.
If the client is getting messages outside of syncpoint, messages can be lost. This has nothing to do with MQ specifically; it's just how messaging in general works. If you tell MQ to get a message destructively off the queue and it cannot deliver that message to the remote application, then the only way for MQ to roll it back is if the message was retrieved under syncpoint.
Fix: Use a transacted session.
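In plain JMS, a transacted session looks like this minimal sketch (the queue name and process() are placeholders; in Spring's listener container the equivalent is sessionTransacted=true / acknowledge="transacted"):

Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
MessageConsumer consumer = session.createConsumer(session.createQueue("SOME.QUEUE"));
Message message = consumer.receive(5000);
try {
    process(message);   // hypothetical business logic
    session.commit();   // the destructive GET becomes permanent only now
} catch (Exception e) {
    session.rollback(); // the message goes back on the queue for redelivery
}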
There are some additional notes, born out of experience.
Everyone swears message persistence is set to what they think it is. But when I stop the application and inspect the messages manually it very often is not what is expected. It's easy to verify so don't assume.
If a message is rolled back on the queue, it won't happen until MQ or TCP times out the orphaned channel. This can take up to 2 hours, so tune the channel parameters and TCP keepalive to reduce that window.
Check MQ's error logs (the ones at the QMgr not the client) to look for messages about transactions rolling back.
If you still cannot determine where the messages are going, try tracing with SupportPac MA0W. This trace runs as an exit and it is extremely configurable. You can trace all GET operations on a single queue and only that queue. The output is in human-readable form.
See http://activemq.apache.org/jmstemplate-gotchas.html .
There are issues using the JMSTemplate. I only ran into these issues when I upgraded my hardware and suddenly exposed a pre-existing race condition.
The short form is that, by design and intent, the JmsTemplate opens and closes the connection on every invocation. It will not see messages older than its creation. In high-volume and/or high-throughput scenarios, it will fail to read some messages.
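A common way around this on the consuming side (a sketch, not from the linked page; the bean refs reuse the question's config, and myMessageListener is hypothetical) is a long-lived listener container instead of per-call JmsTemplate receives:

<bean id="listenerContainer"
      class="org.springframework.jms.listener.DefaultMessageListenerContainer">
    <property name="connectionFactory" ref="claimsQueueConnectionFactory" />
    <property name="destinationName" value="jms/SMSClaimNotificationQueue" />
    <property name="destinationResolver" ref="jmsJndiDestResolver" />
    <property name="messageListener" ref="myMessageListener" />
    <property name="sessionTransacted" value="true" />
</bean>

The container keeps its consumer open for the life of the application, so it does not suffer from the "consumer created after the message" race described above.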

How to catch when JMS connection is established?

I have message producers that are sending JMS messages about some events using ActiveMQ.
However, the connection to ActiveMQ might not be up all the time. Thus, events are stored, and when the connection is established they are supposed to be read and sent over. Here is my code:
private void sendAndSave(MyEvent event) {
boolean sent = sendMessage(event);
event.setProcessed(sent);
boolean saved = repository.saveEvent(event);
if (!sent && !saved) {
logger.error("Change event lost for Id = {}", event.getId());
}
}
private boolean sendMessage(MyEvent event) {
try {
messenger.publishEvent(event);
return true;
} catch (JmsException ex) {
return false;
}
}
I'd like to create some kind of ApplicationEventListener that will be invoked when connection is established and process unsent events.
I went through JMS, Spring framework and ActiveMQ documentation but couldn't find any clues how to hook up my listener with ConnectionFactory.
If someone can help me out, I'll appreciate it greatly.
Here is what my app Spring context says about JMS:
<!-- Connection factory to the ActiveMQ broker instance. -->
<!-- The URI and credentials must match the values in activemq.xml -->
<!-- These credentials are shared by ALL producers. -->
<bean id="jmsTransportListener" class="com.rhd.ams.service.common.JmsTransportListener"
init-method="init" destroy-method="cleanup"/>
<bean id="amqJmsConnectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
<property name="brokerURL" value="${jms.publisher.broker.url}"/>
<property name="userName" value="${jms.publisher.username}"/>
<property name="password" value="${jms.publisher.password}"/>
<property name="transportListener" ref="jmsTransportListener"/>
</bean>
<!-- JmsTemplate, by default, will create a new connection, session, producer for -->
<!-- each message sent, then close them all down again. This is very inefficient! -->
<!-- PooledConnectionFactory will pool the JMS resources. It can't be used with consumers.-->
<bean id="pooledAmqJmsConnectionFactory" class="org.apache.activemq.pool.PooledConnectionFactory" destroy-method="stop">
<property name="connectionFactory" ref="amqJmsConnectionFactory" />
</bean>
<!-- Although JmsTemplate instance is unique for each message, it is -->
<!-- thread-safe and therefore can be injected into referenced obj's. -->
<bean id="jmsTemplate" class="org.springframework.jms.core.JmsTemplate">
<constructor-arg ref="pooledAmqJmsConnectionFactory"/>
</bean>
The way you describe the issue, it sure sounds like an open-and-shut case of JMS Durable Subscriptions. You might want to consider a more traditional implementation before going down this road. Caveats aside, ActiveMQ provides Advisory Messages which you can listen for and which will be sent for various events including new connections.
=========
Shoot, sorry... I did not understand what the issue was. I don't think Advisories are the solution at all.... after all, you need to be connected to the broker to get them, but being connected is what you know about.
So if I understand it correctly (prepare for retry #2....), what you need is a client connection which, when it fails, attempts to reconnect indefinitely. When it does reconnect, you want to trigger an event (or more) that flushes pending messages to the broker.
So detecting the lost connection is easy. You just register a JMS ExceptionListener. As far as detecting a reconnect, the simplest way I can think of is to start a reconnect thread. When it connects, stop the reconnect thread and notify interested parties using Observer/Observable or JMX notifications or the like. You could use the ActiveMQ Failover Transport which will do a connection retry loop for you, even if you only have one broker. At least, it is supposed to, but it's not doing that much for you that would not be done by your own reconnect thread... but if you're willing to delegate some control to it, it will cache your unflushed messages (see the trackMessages option), and then send them when it reconnects, which is sort of all of what you're trying to do.
I guess if your broker is down for a few minutes, that's not a bad way to go, but if you're talking hours, or you might accumulate 10k+ messages in the downtime, I just don't know if that cache mechanism is as reliable as you would need it to be.
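Here is a bare-bones sketch of the hand-rolled variant (plain javax.jms; flushPendingEvents and the 5-second backoff stand in for your own logic and notification mechanism):

connection.setExceptionListener(new ExceptionListener() {
    public void onException(JMSException e) {
        // Connection lost: start a reconnect loop in the background.
        new Thread(new Runnable() {
            public void run() {
                while (true) {
                    try {
                        Connection fresh = connectionFactory.createConnection();
                        fresh.start();
                        flushPendingEvents(fresh); // hypothetical: resend saved, unsent events
                        return;                    // reconnected - stop retrying
                    } catch (JMSException retry) {
                        try { Thread.sleep(5000); } // back off before the next attempt
                        catch (InterruptedException ie) { return; }
                    }
                }
            }
        }).start();
    }
});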
==================
Mobile app ... right. Not really appropriate for the failover transport. Then I would implement a timer that periodically connects (might be a good idea to use the http transport, but not relevant). When it does connect, if there's nothing to flush, then see you in x minutes. If there is, send each message, wait for a handshake and purge the message from your mobile store. Then see you again in x minutes.
I assume this is Android ? If not, stop reading here. We actually implemented this some time ago. I only did the server side, but if I remember correctly, the connection timer/poller spun every n minutes (variable frequencies, I think, because getting too aggressive was draining the battery). Once a successful connection was made, I believe they used an intent broadcast to nudge the message pushers to do their thing. The thinking was that even though there was only one message pusher, we might add more.

Data source rejected establishment of connection, message from server: "Too many connections"

I am using Hibernate and Spring, and getting the exception below when we hit the app from JMeter with 250 users:
"Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: Data source rejected establishment of connection, message from server: "Too many connections"
hibernate_cfg.xml
<property name="hibernate.dialect">org.hibernate.dialect.MySQLDialect</property>
<property name="hibernate.connection.driver_class">com.mysql.jdbc.Driver</property>
<property name="hibernate.connection.url">jdbc:mysql://localhost:3306/my_db</property>
<property name="hibernate.connection.username">user1</property>
<property name="hibernate.connection.pool_size">1</property>
<property name="hibernate.c3p0.min_size">5</property>
<property name="hibernate.c3p0.max_size">50</property>
<property name="hibernate.c3p0.timeout">300</property>
<property name="hibernate.c3p0.max_statements">500</property>
<property name="hibernate.c3p0.idle_test_period">3000</property>
<property name="hibernate.current_session_context_class">thread</property>
<property name="hibernate.hbm2ddl.auto">update</property>`
Spring
<bean id="dataSource" scope="prototype" class="org.springframework.jdbc.datasource.DriverManagerDataSource">
<property name="driverClassName">
<value>${dbDriver}</value>
</property>
<property name="url">
<value>${dbURL}</value>
</property>
<property name="username">
<value>${dbUsername}</value>
</property>
<property name="password">
<value>${dbPassword}</value>
</property>
</bean>
This is a message from the server, so I'd check the number of connected clients the server is reporting. If this is an expected number, like 500 or so, then I'd increase this limit on the server, if you really expect that level of concurrency for your application. Otherwise, reduce the number of clients.
A bit of background on how it works: each client is a thread on the server, and each thread will consume at least one connection. If you are doing it right, the connection will return to the pool once the thread finishes (ie: once the response is sent to the client). So, in the best case, you'd have 500 connections if you have around 500 users connected. If you are seeing a number close to a multiple of the number of concurrent users (ie: 2 users, 4 connections), then you might be consuming more than one connection per thread (that's the price you pay for not using the data source provided by your application server, if you are using one). If you are seeing a really high number (like 10 times the number of users), then you might have a connection leak somewhere. This can happen if you forget to close the connection.
I'd really suggest using an EntityManager managed by your application server, and a DataSource also provided by it. Then you would not have to worry about managing the connection pooling.
I think your problem is that your datasource doesn't have a connection pool.
From the org.springframework.jdbc.datasource.DriverManagerDataSource javadocs:
NOTE: This class is not an actual connection pool; it does not actually pool Connections.
The javadocs of that class also say to use Apache's Jakarta Commons DBCP in case you do need a connection pool:
If you need a "real" connection pool outside of a J2EE container,
consider Apache's
Jakarta Commons DBCP or C3P0. Commons DBCP's
BasicDataSource and C3P0's ComboPooledDataSource are full connection
pool beans, supporting the same basic properties as this class plus
specific settings (such as minimal/maximal pool size etc).
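In Spring, that swap is a single bean definition (a sketch using DBCP 1.x property names and the placeholders from your config; note the prototype scope on your original bean should go too, since you want exactly one shared pool):

<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
    <property name="driverClassName" value="${dbDriver}" />
    <property name="url" value="${dbURL}" />
    <property name="username" value="${dbUsername}" />
    <property name="password" value="${dbPassword}" />
    <!-- keep the pool cap well below MySQL's max_connections -->
    <property name="maxActive" value="50" />
</bean>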
I used it and it worked like a charm :)
Hope I helped.
If a connection factory is created on every request, it can cause this problem. The solution is simple: a single shared EntityManagerFactory for all sessions. I am publishing some code at the bottom of my post - check the incorrect and the correct version (1 = incorrect, 2 = correct).
// 1 - incorrect: a new EntityManagerFactory is created for every DAO instance
private EntityManagerFactory emf = null;

@SuppressWarnings("unchecked")
public BaseDAO() {
    emf = Persistence.createEntityManagerFactory("aaHIBERNATE");
    persistentClass = (Class<T>) ((ParameterizedType) getClass()
            .getGenericSuperclass()).getActualTypeArguments()[0];
}

// 2 - correct: the EntityManagerFactory is static and created only once
private static EntityManagerFactory emf = null;

@SuppressWarnings("unchecked")
public BaseDAO() {
    if (emf == null) {
        emf = Persistence.createEntityManagerFactory("aaHIBERNATE");
    }
    persistentClass = (Class<T>) ((ParameterizedType) getClass()
            .getGenericSuperclass()).getActualTypeArguments()[0];
}
You need to trace this on both the application and the database server.
Check the database configuration for the maximum allowed open connections.
If other clients are also using database connections, check the currently open connections on the database server.
Check whether your application is opening too many connections and not closing them.
If necessary, increase the value of max open connections in the database server settings.
To change the maximum, edit max_connections and max_user_connections in the my.cnf file on the database server.
You can also grant/edit the maximum number of connections per user; more info is available here.
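For example, in my.cnf (illustrative values only - size them for your real workload):

[mysqld]
max_connections = 500
max_user_connections = 450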
