I have integration tests for a RESTful application written in Java using Dropwizard. The test suite runs fine until, eventually, it hangs and I get an exception from C3P0PooledConnectionPoolManager: java.sql.SQLNonTransientConnectionException: Too many connections
Using C3P0Registry.getPooledDataSources(), I identified that connections are not being cleaned up after each test. At first I misdiagnosed the problem as my not closing the Jersey response entities, as detailed here: https://jersey.github.io/documentation/latest/client.html#d0e5255
Many of the tests check only a status code, so it made sense to me that this would be happening (the linked documentation states: "If you don't read the entity, then you need to close the response manually by response.close()"). However, after fixing this and ensuring that every entity was closed, I'm still seeing connections persist between tests.
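For reference, the entity fix looked roughly like this (a sketch, not my exact code; the target URL is a placeholder):
// Close the response explicitly when only the status code is read;
// otherwise the underlying connection is never released back to the pool.
Response response = client.target("http://localhost:8080/some/resource").request().get();
try {
    assertEquals(200, response.getStatus());
} finally {
    response.close();
}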
I'm using DropwizardAppRule as a class rule, and at the beginning and end of each test run I can close the client associated with the rule, but the connections remain open. My c3p0 connection pool gains three connections per test class that runs, and I can't figure out how to stop it from growing with each new class that is added.
ClassRule snippet:
@ClassRule
public static final DropwizardAppRule<MicroServiceCoreConfiguration> RULE =
        new DropwizardAppRule<>(App.class, ResourceHelpers.resourceFilePath("./config.yml"));
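For diagnosis, this is roughly how I watch the pool grow between tests, using c3p0's PooledDataSource counters (a sketch; the printing is illustrative):
import java.sql.SQLException;
import com.mchange.v2.c3p0.C3P0Registry;
import com.mchange.v2.c3p0.PooledDataSource;

// Print connection counts for every c3p0 pool the registry knows about.
public static void dumpPools() throws SQLException {
    for (Object o : C3P0Registry.getPooledDataSources()) {
        PooledDataSource pds = (PooledDataSource) o;
        System.out.println(pds + ": total=" + pds.getNumConnectionsDefaultUser()
                + " busy=" + pds.getNumBusyConnectionsDefaultUser()
                + " idle=" + pds.getNumIdleConnectionsDefaultUser());
    }
}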
Will update with any information that is requested!
I am trying to identify a suspected memory/resource leak in a JMS queue I have built. I am new to JMS queues, so I have stuck to the standard JMS classes to ensure stability. But somewhere in my code or configuration I am doing something wrong, and my queue is filling up or its resources are slowing down, perhaps due to deficiencies in the architecture I am attempting to implement.
When load testing my API (using Gatling), I can run 20 messages a second through it (a tiny load) for most of a ten-minute duration. After that, the messages seem to back up, and processing slows to a crawl. Time-out errors generally begin to occur once requests take more than 60 seconds to complete. There is more business logic that processes data and persists it to a relational database, but none of that appears to be an issue.
Interestingly, subsequent test runs continue with the poor performance, indicating that whatever resource is leaking outlives the tests. A restart of the application clears out whatever has become bloated or is leaking, and the tests run fast again, for the first seven or eight minutes... upon which the cycle repeats itself. Only a restart of the app clears the issue. Since the issue doesn't self-correct, even after waiting for a period of time, something has filled up resources.
When I pull the JMS calls out of the logic, I am able to process hundreds of messages a second, and I can run back-to-back test runs without leaking or filling up the queue.
Although this is a Spring project, I am not using Spring's JmsTemplate, so I wrote my own Connection object, which I injected as a Spring bean and implemented as a single connection to avoid creating a new connection for every JMS message I send.
Likewise, I configured my JMS Session as an injected bean that uses the Connection bean. That way I can keep my Connection and Session objects alive for sending all of my JMS messages, which are sent one at a time. A Qpid server I am calling receives these messages. While it is possible I am exceeding its capacity to consume the messages I am producing, I expect that the resource leak is in my code, not the JMS server.
Here are some code snippets to give you an idea of my approach. Any feedback is appreciated.
JmsConfiguration (key methods)
@Bean
public ConnectionFactory jmsConnectionFactory() {
    return new JmsConnectionFactory(user, pass, host);
}

@Bean(name="jmsSession")
public Session jmsConnection() throws JMSException {
    Connection conn = jmsConnectionFactory().createConnection();
    Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
    return session; // Injected as a singleton
}

@Bean(name="jmsQueue")
public Queue jmsQueue() throws JMSException {
    return jmsConnection().createQueue(queue);
}

// Jackson's ObjectMapper is heavy enough to warrant injecting and re-using it.
@Bean
public ObjectMapper objectMapper() {
    return new ObjectMapper();
}
MessageJmsEnqueuer
@Component
public class MessageJmsEnqueuer extends CommonThreadScope {

    @Autowired
    @Qualifier("jmsSession")
    private Session jmsSession;

    @Autowired
    @Qualifier("jmsQueue")
    private Queue jmsQueue;

    @Value("${acme.jms.queue}")
    private String jmsQueueName;

    // Jackson's ObjectMapper, shared via the configuration above.
    @Autowired
    private ObjectMapper jmsObjectMapper;

    public void enqueue(String message, String dataType) {
        try {
            String messageAsJson = jmsObjectMapper.writeValueAsString(message);
            MessageProducer jmsMessageProducer = jmsSession.createProducer(jmsQueue);
            TextMessage textMessage = jmsSession.createTextMessage(messageAsJson);
            textMessage.setStringProperty("dataType", dataType);
            jmsMessageProducer.send(textMessage);
            logger.log(Level.INFO, "Message successfully sent. Queue=" + jmsQueueName + ", Message -> " + messageAsJson);
        } catch (JMSException | JsonProcessingException e) {
            String msg = "JMS Message Processing encountered an error...";
            logService.severe(logger, messagesBuilder() ... msg)
        }
        // Skip the close() method to persist the connection...
        // Reconnect logic exists to reset an expired connection from the server.
    }
}
I was able to solve my resource leak / deadlock issue simply by rewriting my code to use the simplified API provided with the release of JMS 2.0. Although I was never able to determine which of the Connection / Session / Queue objects was giving my code grief, using the Context object to build my connection and session was the golden ticket in this case.
Upon switching to the simplified API (since I was already pulling in the JMS 2.0 dependency), the resource leak immediately vanished! This leads me to believe that the simplified API does more than just give the developer a friendlier API to code against. While that is already an advantage (even allowing for the few features the simplified API doesn't support), it is now clear to me that the underlying connection and session objects are managed by the API, which resolved whatever was filling up or deadlocking.
Furthermore, because the resource build-up was no longer occurring, I was able to triple the number of messages I passed through, allowing me to process 60 users a second, instead of 20. That is a significant increase, and I have fixed the compatibility issues that prevented me from using the simplified JMS API to begin with.
While I would have liked to identify precisely what was fouling up the code, this works as a solution. Plus, the fact that version 2.0 of JMS was released in April of 2013 would indicate that the simplified API is definitely the preferred solution.
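For anyone landing here: a minimal sketch of the simplified API, assuming the same Qpid JmsConnectionFactory as above (the credentials, broker URL, and queue name are placeholders). The JMSContext owns the underlying connection and session and closes both:
import javax.jms.ConnectionFactory;
import javax.jms.JMSContext;
import javax.jms.Queue;
import org.apache.qpid.jms.JmsConnectionFactory;

public class SimplifiedJmsEnqueuer {

    // Credentials, broker URL, and queue name are placeholders.
    private final ConnectionFactory connectionFactory =
            new JmsConnectionFactory("user", "pass", "amqp://localhost:5672");

    public void enqueue(String messageAsJson, String dataType) {
        // JMSContext wraps the Connection and Session and closes both for us.
        try (JMSContext context = connectionFactory.createContext()) {
            Queue queue = context.createQueue("acme.queue");
            context.createProducer()
                   .setProperty("dataType", dataType)
                   .send(queue, messageAsJson);
        }
    }
}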
Just a guess, but MessageProducer extends AutoCloseable, suggesting it should be closed once it is no longer of use. Since you neither use try-with-resources nor close it explicitly, the jmsSession may accumulate more and more producers over time. I am not sure whether you should close the producer per method call or re-use a single created producer, though.
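As an illustration, a hedged sketch of closing the producer per call with try-with-resources (this assumes the JMS 2.0 javax.jms API, where MessageProducer is AutoCloseable, plus the jmsSession/jmsQueue fields from the question):
public void enqueue(String messageAsJson, String dataType) throws JMSException {
    // The producer is closed when the block exits, so producers no longer
    // accumulate inside the long-lived session.
    try (MessageProducer producer = jmsSession.createProducer(jmsQueue)) {
        TextMessage textMessage = jmsSession.createTextMessage(messageAsJson);
        textMessage.setStringProperty("dataType", dataType);
        producer.send(textMessage);
    }
}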
Have you tried using a profiler such as VisualVM to visualize the heap and metaspace? If so, did you find any significant changes over time?
Abstract: how do devs integration-test timeouts for HTTP requests?
Backstory: My team is having issues with unusually long-lasting HTTP web requests. We use Apache commons-httpclient version 3. The code looks similar to this:
PostMethod post = new PostMethod(endpoint);
post.getParams().setSoTimeout(someInt);
httpClient.executeMethod(post);
The time to complete this request is usually acceptable (2 seconds or so), but occasionally we see 50-60 second requests despite having our SO timeout set to 4 seconds. This prompted me to do some research, and I found that most people set connection timeouts AND SO timeouts. It appears that SO timeouts should be set lower (as they simply time the gap between bytes in transit), while the connection timeout is what we originally planned to use (i.e., the initial delay between the request and the first byte returned).
Here is the code we scraped and plan on using:
httpClient.getHttpConnectionManager().getParams()
.setConnectionTimeout(someInt);
httpClient.getHttpConnectionManager().getParams()
.setSoTimeout(someInt);
The main pain here is that we are unable to integration-test this change. More precisely, we are unsure how to integration-test the delays coming from a socket connection to a foreign server. After digging through commons-httpclient, I see protected and private classes that we would have to reproduce (because they cannot be extended or used from outside the class), mock, and string together to ultimately get down to Java's Socket class (which relies on a native method that we would also need to reproduce and inject via mocks, something I don't see frequently at that level).
The reason I am reaching out to Stack Overflow is to see how others are testing this/not testing this. I want to avoid testing this functionality in a performance environment at all costs.
Another thought of mine was to set up a mockserver to respond to the httpclient with a programmable delay time. I haven't seen an example of that yet.
First of all, there is no such thing as unit testing HTTP requests - that would be integration testing.
Secondly, you can use a tool like JMeter to send HTTP requests and assert that the response is received within a certain amount of time.
Taking the mock server route, I managed to set up a small web server with an API endpoint that I could test against. Here is the related code:
Lightweight server setup:
TJWSEmbeddedJaxrsServer server = new TJWSEmbeddedJaxrsServer();
server.setPort(SERVER_PORT);
server.getDeployment().getResources().add(new TestResource());
server.start();
API Endpoint:
/**
 * In order to test the timeout, the resource will be injected into an embedded server.
 * Each endpoint should have a unique use case.
 */
@Path("tests")
public class TestResource {

    @POST
    @Produces({MediaType.APPLICATION_XML})
    @Path("socket-timeout")
    public Response testSocketTimeout() throws InterruptedException {
        // Sleep longer than the client's SO timeout to force a read timeout.
        Thread.sleep(SOCKET_TIMEOUT_SLEEP);
        return Response.ok().build();
    }
}
Within the endpoint class, I can control the sleep duration, which then triggers a socket timeout inside the HttpClient class. It's a bit hacky, but it works to test the functionality in the way I wanted (simple, lightweight, and effective).
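For completeness, the client side of such a test could look roughly like this (a sketch; SERVER_PORT and SO_TIMEOUT_MILLIS are hypothetical constants, and fail() comes from JUnit):
@Test
public void postTimesOutWhenServerIsSlow() throws Exception {
    HttpClient httpClient = new HttpClient();
    httpClient.getHttpConnectionManager().getParams().setSoTimeout(SO_TIMEOUT_MILLIS);
    PostMethod post = new PostMethod("http://localhost:" + SERVER_PORT + "/tests/socket-timeout");
    try {
        // The endpoint sleeps longer than the SO timeout, so the read must time out.
        httpClient.executeMethod(post);
        fail("Expected a SocketTimeoutException");
    } catch (java.net.SocketTimeoutException expected) {
        // success: the socket read timed out as configured
    } finally {
        post.releaseConnection();
    }
}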
I have Akka actors running in a Play 2 application. A list of POJOs is retrieved from the database and passed along in a message to the actors. When an actor starts processing these objects, it throws this exception. I suspect it tries to read data from the DB because of Ebean's lazy loading. This happens when running in test cases; I haven't tested in the normal application environment.
Attempting to obtain a connection from a pool that has already been shutdown
at com.avaje.ebeaninternal.server.transaction.TransactionManager.createQueryTransaction(TransactionManager.java:356)
at com.avaje.ebeaninternal.server.core.DefaultServer.createQueryTransaction(DefaultServer.java:2021)
at com.avaje.ebeaninternal.server.core.OrmQueryRequest.initTransIfRequired(OrmQueryRequest.java:241)
at com.avaje.ebeaninternal.server.core.DefaultServer.findList(DefaultServer.java:1468)
at com.avaje.ebeaninternal.server.core.DefaultBeanLoader.loadBean(DefaultBeanLoader.java:360)
at com.avaje.ebeaninternal.server.core.DefaultServer.loadBean(DefaultServer.java:526)
at com.avaje.ebeaninternal.server.loadcontext.DLoadBeanContext.loadBean(DLoadBeanContext.java:143)
at com.avaje.ebean.bean.EntityBeanIntercept.loadBean(EntityBeanIntercept.java:548)
at com.avaje.ebean.bean.EntityBeanIntercept.preGetter(EntityBeanIntercept.java:638)
at models.MemberInfo._ebean_get_type(MemberInfo.java:4)
at models.MemberInfo.getType(MemberInfo.java:232)
at actors.MessageWorker.doSendToIOS(MessageWorker.java:161)
at actors.MessageWorker.onReceive(MessageWorker.java:97)
at akka.actor.UntypedActor$$anonfun$receive$1.apply(UntypedActor.scala:154)
at akka.actor.UntypedActor$$anonfun$receive$1.apply(UntypedActor.scala:153)
at akka.actor.Actor$class.apply(Actor.scala:311)
at akka.actor.UntypedActor.apply(UntypedActor.scala:93)
at akka.actor.ActorCell.invoke(ActorCell.scala:619)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:196)
at akka.dispatch.Mailbox.run(Mailbox.scala:178)
at akka.dispatch.ForkJoinExecutorConfigurator$MailboxExecutionTask.exec(AbstractDispatcher.scala:505)
at akka.jsr166y.ForkJoinTask.doExec(ForkJoinTask.java:259)
at akka.jsr166y.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:974)
at akka.jsr166y.ForkJoinPool.runWorker(ForkJoinPool.java:1478)
at akka.jsr166y.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:104)
Although I'm not sure if it's relevant for you, I'll tell my story. I had the same error message coming up when running my test-cases, without using actors.
First, note that when a Play application is stopped, its data sources are closed.
Since many of my test cases require a running Application in scope, I was using the WithApplication helper around each test case. The problem in my case was that my DB-access object was a singleton (a Scala object) that initialized its DataSource only once. Since that object was never re-instantiated between test cases, the closed datasource remained in place, resulting in the mentioned error.
The solution in my case was to make sure the datasource was re-created between test-cases.
Context of the problem I want to solve: I have a Java Spring HTTP interceptor, AuditHttpCommunicationInterceptor, that audits communication with an external system. The HttpClient that does the communication is used in a Java service class that does some business logic, called DoBusinessLogicSevice.
The DoBusinessLogicSevice opens a new transaction and, using a couple of collaborators, does loads of stuff.
Problem to solve: Regardless of the outcome of any of the operations in DoBusinessLogicSevice (unexpected exceptions, etc.), I want the audits to be stored in the database by AuditHttpCommunicationInterceptor.
Solution I used: The AuditHttpCommunicationInterceptor will open a new transaction this way:
TransactionDefinition transactionDefinition =
        new DefaultTransactionDefinition(TransactionDefinition.PROPAGATION_REQUIRES_NEW);
new TransactionTemplate(platformTransactionManager, transactionDefinition).execute(new TransactionCallbackWithoutResult() {
    @Override
    protected void doInTransactionWithoutResult(TransactionStatus status) {
        // do stuff
    }
});
Everything works fine. When a part of DoBusinessLogicSevice throws an unexpected exception, its transaction is rolled back, but the AuditHttpCommunicationInterceptor still manages to store the audit in the database.
Problem that arises from this solution: AuditHttpCommunicationInterceptor uses a new DB connection, so every DoBusinessLogicSevice call needs two DB connections.
Basically, I want to know how to make TransactionTemplate "suspend" the current transaction and reuse its connection for the new one in this case.
Any ideas? :)
P.S.
One idea might be to take a different design approach: drop the interceptor and create an AuditingHttpClient that is used in DoBusinessLogicSevice directly (not invoked by Spring). But I cannot do that, because I cannot access all the HTTP fields in there.
Spring supports nested transactions (propagation="NESTED"), but this really depends on the database platform; I don't believe every database platform is capable of handling nested transactions.
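For illustration, a hedged sketch of what NESTED propagation would look like with the same TransactionTemplate pattern from the question (it relies on JDBC savepoints, so the driver must support them):
// NESTED relies on JDBC savepoints, so the DataSource/driver must support them.
TransactionDefinition nestedDefinition =
        new DefaultTransactionDefinition(TransactionDefinition.PROPAGATION_NESTED);
new TransactionTemplate(platformTransactionManager, nestedDefinition).execute(new TransactionCallbackWithoutResult() {
    @Override
    protected void doInTransactionWithoutResult(TransactionStatus status) {
        // audit writes here run on the same connection, guarded by a savepoint
    }
});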
I really don't see what the big deal is with taking a connection from the pool, doing a quick audit transaction, and returning the connection.
Update: While Spring supports nested transactions, it looks like Hibernate doesn't. If that's the case, I say: go with another connection for audit.
I would like to be able to verify whether each unit of work is done in its own transaction, or as part of a single global transaction.
I have a method (defined using spring and hibernate), which is of the form:
private void updateUser() {
updateSomething();
updateSomethingElse();
}
This is called from two places: the web site, when a user logs in, and a batch job that runs daily. In the web server context, it runs within a transaction created by the web server. The batch job must have one transaction for each user, so that if something fails during this method, that user's transaction is rolled back. So we have two methods:
@Transactional(propagation=Propagation.REQUIRES_NEW)
public void updateUserCreateNewTransaction() {
    updateUser();
}

@Transactional(propagation=Propagation.REQUIRED)
public void updateUserWithExistingTransaction() {
    updateUser();
}
updateUserCreateNewTransaction() is called from the batch job, and updateUserWithExistingTransaction() from the web server context.
This works. However, it is very important that this behaviour (of the batch) not be changed, so I wish to create a test that tests this behaviour. If possible, I would like to do this without changing the code.
So some of the options open to me are:
1. Count the transactions opened in the database during the run of the batch job.
2. Change the data in some subtle way so that at least one user update fails in the updateSomethingElse() method, and check that updateSomething() for that user has not taken place.
3. Code review.
Option 1 is very database-dependent, and how do I guarantee that Hibernate won't create a transaction anyway? Option 2 seems better, but it is very complex to set up. Option 3 is not really practical, because we would need to do one for every release.
So, does anyone have a method which would enable me to test this code, preferably through a system test or integration test?
I would try to set up a test in a unit test harness using an in-memory HSQLDB and EasyMock (or some other mocking framework).
You could then have the updateSomething() method really write to the HSQLDB, but use the mocking framework to mock the updateSomethingElse() method and throw a RuntimeException from it. When that is done, you can run a query against the HSQLDB to verify that the updateSomething() changes were rolled back.
It will require some plumbing to setup the HSQLDB and transaction manager but when that is done you have a test without external dependencies that can be re-run whenever you like.
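A rough sketch of that test (using Mockito instead of EasyMock for brevity; every name here — UserService, SomethingElseDao, wireUserService, createHsqldbDataSource, and the user_update table — is a hypothetical stand-in for the real classes):
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.fail;
import static org.mockito.Mockito.doThrow;
import static org.mockito.Mockito.mock;

import javax.sql.DataSource;

import org.junit.Test;
import org.springframework.jdbc.core.JdbcTemplate;

public class UpdateUserRollbackTest {

    // HSQLDB and transaction-manager plumbing elided; createHsqldbDataSource()
    // is a hypothetical helper.
    private final DataSource dataSource = createHsqldbDataSource();

    @Test
    public void firstUpdateIsRolledBackWhenSecondFails() {
        // Make the second step blow up; the first step writes to HSQLDB for real.
        SomethingElseDao failingDao = mock(SomethingElseDao.class);
        doThrow(new RuntimeException("simulated failure")).when(failingDao).updateSomethingElse();

        // wireUserService() stands in for the Spring wiring that applies the
        // @Transactional(REQUIRES_NEW) proxy around the service.
        UserService service = wireUserService(dataSource, failingDao);

        try {
            service.updateUserCreateNewTransaction();
            fail("expected the simulated failure to propagate");
        } catch (RuntimeException expected) {
            // expected: the whole unit of work should roll back
        }

        // If the rollback worked, the first step's write must be gone.
        int rows = new JdbcTemplate(dataSource)
                .queryForObject("select count(*) from user_update", Integer.class);
        assertEquals(0, rows);
    }
}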
Another thing you can do is configure logging output for Hibernate's transactions:
http://docs.jboss.org/hibernate/core/3.3/reference/en/html/session-configuration.html#configuration-logging
If you make a log4j category for org.hibernate.transaction with trace log level, it should log everything Hibernate does transaction-wise during a unit test.
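For example, the category can be enabled programmatically in the test setup (a sketch; the equivalent log4j.properties entry is shown in the comment, and Level.TRACE needs log4j 1.2.12 or newer):
import org.apache.log4j.Level;
import org.apache.log4j.Logger;

@BeforeClass
public static void enableHibernateTransactionTrace() {
    // Programmatic equivalent of the log4j.properties entry:
    //   log4j.logger.org.hibernate.transaction=TRACE
    // (Level.TRACE requires log4j 1.2.12 or newer.)
    Logger.getLogger("org.hibernate.transaction").setLevel(Level.TRACE);
}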