I am running a webapp inside WebSphere Application Server 6.1. This webapp has a rules engine of sorts, where every rule obtains its own connection from the WebSphere data source pool. So I see that when a use case is run, for 100 records of input, about 400-800 connections are obtained from the pool and released back to it. I have a feeling that if this engine goes to production, it might take too much time to complete processing.
Is it bad practice to obtain connections from the pool that frequently? What are the overhead costs involved in obtaining connections from the pool? My guess is that the costs should be minimal, as the pool is nothing but a resource cache. Please correct me if I am wrong.
Connection pooling keeps your connections alive in anticipation: when another user connects, a ready connection to the db is handed over, and the database does not have to open a connection all over again.
This is actually a good idea, because opening a connection is not a one-step operation: there are many round trips to the server (authentication, negotiation, status, etc.). So if you've got a connection pool on your website, you're serving your customers faster.
Unless your website gets no visitors at all, you can't afford not to have a connection pool working for you.
The pool doesn't seem to be your problem. The real problem lies in the fact that your "rules engine" doesn't release connections back to the pool before completing the entire calculation. The engine doesn't seem to scale well. If the number of database connections somehow depends on the number of records being processed, something is almost always very wrong!
If you manage to get your engine to release connections as soon as possible, it may be that you only need a few connections instead of a few hundred. Failing that, you could use a connection wrapper that re-uses the same connection every time the rules engine asks for one, that somewhat negates the benefits of having a connection pool though...
Not to mention that it introduces a host of multithreading and transaction-isolation issues; if the connections are read-only, though, it might be an option.
A connection pool is all about connection re-use.
If you are holding on to a connection at times when you don't need one, then you are preventing that connection from being reused somewhere else. And if you have a lot of threads doing this, then you must also run with a larger pool of connections to prevent pool exhaustion. More connections take longer to create and establish, and they take more resources to maintain; there will be more reconnecting as connections grow old, and your database server will also be impacted by the greater number of connections.
In other words: you want to run with the smallest possible pool without exhausting it. And the way to do that is to hold on to your connections as little as possible.
I have implemented a JDBC connection pool myself and, although many pool implementations out there probably could be faster, you are likely not going to notice because any slack going on in the pool is most likely dwarfed by the time it takes to execute queries on your database.
In short: connection pools just love it when you return their connections. Or they should anyway.
To really check if your pool is a bottleneck, you should profile your program. If you find the pool is a problem, then you have a tuning problem. A simple pool should be able to handle 100K allocations per second or more, i.e. about 10 microseconds each. However, as soon as you actually use a connection, it will take between 200 and 2,000 microseconds to do something useful.
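As a rough illustration of how cheap the checkout itself is, here is a toy benchmark of borrow/return cycles against a simple queue-backed stand-in pool. This is not a real JDBC pool, just a sketch; absolute numbers will vary by machine, but they should land in the tens-of-nanoseconds to low-microseconds range, far below query execution time:

```java
import java.util.concurrent.ArrayBlockingQueue;

public class PoolOverheadDemo {
    // Time `iterations` borrow/return cycles against a trivial queue-backed
    // stand-in pool; returns the average nanoseconds per cycle.
    static long benchmark(int iterations) throws InterruptedException {
        ArrayBlockingQueue<Object> pool = new ArrayBlockingQueue<>(16);
        for (int i = 0; i < 16; i++) pool.offer(new Object());

        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            Object conn = pool.take(); // borrow
            pool.put(conn);            // return
        }
        return (System.nanoTime() - start) / iterations;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("~" + benchmark(100_000) + " ns per borrow/return");
    }
}
```

Compare that figure with the 200-2,000 microseconds a real query takes and the pool overhead all but disappears.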
I think this is a poor design. Sounds like a Rete rules engine run amok.
If you assume 0.5-1.0 MB minimum per thread (e.g. for stack, etc.) you'll be thrashing a lot of memory. Checking the connections in and out of the pool will be the least of your problems.
The best way to know is to do a performance test and measure memory, wall times for each operation, etc. But this doesn't sound like it'll end well.
Sometimes I see people throw all their rules into Blaze or ILOG JRules or Drools simply because it's "standard" and high tech. It's a terrific resume item, but how many of those solutions would be better served by a simpler table-driven decision tree? Maybe your problem is one of those.
I'd recommend that you get some data, see if there's a problem, and be prepared to redesign if the data tells you it's necessary.
Could you provide more details on what your rules engine does exactly? If each rule "firing" is performing data updates, you may want to verify that the connection is being properly released (put the release in the finally block of your code to ensure that the connections really are being released).
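The finally-block pattern looks like this. To keep the sketch runnable without a real DataSource, a dynamic proxy stands in for the pooled connection and just records the close() call; in real code, getConnection() would be your pool's method and close() would return the connection to the pool:

```java
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.util.concurrent.atomic.AtomicBoolean;

public class FinallyBlockDemo {
    static final AtomicBoolean closed = new AtomicBoolean(false);

    // Stand-in for dataSource.getConnection(): a dynamic proxy whose
    // close() just flips a flag, so the sketch runs without a database.
    static Connection getConnection() {
        return (Connection) Proxy.newProxyInstance(
            Connection.class.getClassLoader(),
            new Class<?>[]{Connection.class},
            (proxy, method, args) -> {
                if (method.getName().equals("close")) closed.set(true);
                return null;
            });
    }

    // The pattern: release the connection in finally so it always
    // goes back to the pool, even if the rule's query throws.
    static void fireRule() {
        Connection conn = null;
        try {
            conn = getConnection();
            // ... execute the rule's query or update here ...
        } finally {
            if (conn != null) {
                try { conn.close(); } catch (Exception ignored) {}
            }
        }
    }

    public static void main(String[] args) {
        fireRule();
        System.out.println("connection released: " + closed.get());
    }
}
```

On Java 7 and later, try-with-resources does the same thing with less ceremony.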
If possible, you may want to consider capturing your data updates to a memory buffer, and write to the database only at the end of the rule session/invocation.
If the database operations are read-only, consider caching the information.
As bad as you think 400-800 connections being created and released to the pool is, I suspect it'll be much much worse if you have to create and close 400-800 unpooled connections.
Related
So I've been tracking a bug for a day or two now which happens out on a remote server that I have little control over. The ins and outs of my code are, I provide a jar file to our UI team, which wraps postgres and provides storage for data that users import. The import process is very slow due to multiple reasons, one of which is that the users are importing unpredictable, large amounts of data (which we can't really cut down on). This has lead to a whole plethora of time out issues.
After some preliminary investigation, I've narrowed it down to the JDBC connection to the Postgres database timing out. I had a lot of trouble replicating this on my local test setup, but have finally managed to by reducing the 'socketTimeout' of the connection properties to 10s (there's more than 10s between each call made on the connection).
My question now is, what is the best way to keep this alive? I've set the 'tcpKeepAlive' to true, but this doesn't seem to have an effect, do I need to poll the connection manually or something? From what I've read, I'm assuming that polling is automatic, and is controlled by the OS. If this is true, I don't really have control of the OS settings in the run environment, what would be the best way to handle this?
I was considering testing the connection each time it is used, and if it has timed out, I will just create a new one. Would this be the correct course of action or is there a better way to keep the connection alive? I've just taken a look at this post where people are suggesting that you should open and close a connection per query:
When my app loses connection, how should I recover it?
In my situation, I have a series of sequential inserts which take place on a single thread, if a single one fails, they all fail. To achieve this I've used transactions:
m_Connection.setAutoCommit(false);
m_TransactionSave = m_Connection.setSavepoint();
try {
    // Do the sequential inserts
    m_Connection.commit();
} catch (SQLException e) {
    m_Connection.rollback(m_TransactionSave); // if one insert fails, they all fail
} finally {
    m_TransactionSave = null;
    m_Connection.setAutoCommit(true);
}
If I do keep reconnecting, or use a connection pool like PGBouncer (like someone suggested in comments), how do I persist this transaction across them?
JDBC connections to Postgres can be configured with a keep-alive setting. An issue was raised against this functionality here: JDBC keep alive issue. Additionally, there's the parameter help page.
From the notes on that, you can add the following to your connection parameters for the JDBC connection:
tcpKeepAlive=true
Reducing the socketTimeout should make things worse, not better. The socketTimeout is a measure of how long a connection should wait when it expects data to arrive, but it has not. Making that longer, not shorter would be my instinct.
Is it possible that you are using PGBouncer? That process will actively kill connections from the server side if there is no activity.
Finally, if you are running on Linux, you can change the operating system's TCP keepalive settings (the tcp_keepalive_* kernel parameters). I am sure something similar exists for Windows.
We're currently trying to make our server software use a connection pool to greatly reduce lag. However, instead of reducing the time queries take to run, it is doubling the time, making things even slower than before the connection pool.
Are there any reasons for this? Does JDBC only allow a single query at a time or is there another issue?
Also, does anyone have any examples of multi-threaded connection pools that reduce the time hundreds of queries take? The examples we have found only made things worse.
We've tried using BoneCP and Apache DBCP with similar results...
A connection pool helps mitigate the overhead/cost of creating new connections to the database by reusing existing ones. This matters if your workload requires many short- to medium-lived connections, e.g. an app that processes concurrent user requests by querying the database. Unfortunately, your benchmark code does not have such a profile: you are just using 4 connections in parallel, and there is no reuse involved.
What a connection pool cannot do is magically speed up execution times or improve the concurrency level beyond what the database itself provides. If the benchmark code represents the expected workload, I would advise you to look into batching statements instead of threading. That will massively increase the performance of INSERT/UPDATE operations.
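Batching in JDBC means binding each row's parameters, queuing the row with addBatch(), and sending everything to the server in a single executeBatch() round trip. Here is a sketch of the pattern; a dynamic proxy stands in for the PreparedStatement so the snippet runs without a database, and in real code fakeStatement() would be conn.prepareStatement("INSERT ..."):

```java
import java.lang.reflect.Proxy;
import java.sql.PreparedStatement;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class BatchDemo {
    // Stand-in PreparedStatement: addBatch() counts queued rows and
    // executeBatch() reports one update count per queued row.
    static PreparedStatement fakeStatement() {
        AtomicInteger queued = new AtomicInteger();
        return (PreparedStatement) Proxy.newProxyInstance(
            PreparedStatement.class.getClassLoader(),
            new Class<?>[]{PreparedStatement.class},
            (proxy, method, args) -> {
                switch (method.getName()) {
                    case "addBatch":     queued.incrementAndGet(); return null;
                    case "executeBatch": return new int[queued.getAndSet(0)];
                    default:             return null; // setString etc. are no-ops here
                }
            });
    }

    // The pattern: bind each row, queue it client-side, then send the
    // whole batch in one round trip instead of one round trip per row.
    static int insertAll(PreparedStatement ps, List<String> names) throws Exception {
        for (String name : names) {
            ps.setString(1, name); // bind this row's parameters
            ps.addBatch();         // queue the row locally
        }
        return ps.executeBatch().length; // one round trip for all rows
    }

    public static void main(String[] args) throws Exception {
        int sent = insertAll(fakeStatement(), List.of("a", "b", "c"));
        System.out.println("rows sent in one batch: " + sent); // rows sent in one batch: 3
    }
}
```

The win comes from eliminating a network round trip per row, which usually dwarfs any gain from adding threads.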
Update:
Using multiple connections in parallel can enhance performance. Just keep in mind that there is not necessarily a relation between multiple threads in your Java application and in the database. JDBC is just a wrapper around the database driver; using multiple connections results in multiple queries being submitted to the database server in parallel. If those queries are suited for it, every modern RDBMS will be able to process them in parallel. But if those queries are very work intensive, or worse, involve table locks or conflicting updates, the DB may not be able to do so. If you experience bad performance, check which queries are lagging and optimize them: are they efficient? Are proper indexes in place? Denormalizing the schema may help in more extreme cases. Use prepared statements and batch mode for larger updates, etc. If your DB is overloaded with many small, similar queries, consider caching frequently used data.
I'm developing a multithread application where threads are created when someone connects to my socket. Each connection creates a new thread, and each thread make queries to a MySQL database using JDBC. I'm wondering if this multiple connections to the MySQL from my different threads could cause any problem in my application or negatively affects MySQL data.
On the contrary, you should always connect to a DB in a multithreaded fashion. Or really, a pooled fashion!
Consider the situation when your application becomes a worldwide hit and you get 100k hits a minute, then you will have a heck of a lot of threads - namely one per connection, which will break your application, your app-server and your DB... :-)
Instead you might implement a pool of DB connections from which your threads can borrow and to which they return connections when done. There are several good open-source projects to choose from for this, c3p0 and Commons DBCP being just two of them.
Hope that helps,
There is nothing to be gained by having more threads than there are actual CPU/hardware threads. If anything, more threads create more overhead, which will effectively slow your application down.

What threads do provide is a relatively easy way of doing more of the same thing, but after a while that hits a brick wall and one has to start thinking about other potential solutions.

If you want your application to be scalable, now is a good time to start thinking about how you can scale out your application, i.e. a distributed solution where multiple systems share the load. Instead of more threads on a single system, think in terms of work queues and worker threads distributed across N systems.
I have a Java batch job which does a select with a large result set (I process the elements using a Spring callback handler).
The callback handler puts a task in a fixed thread pool to process each row.
My pool size is fixed at 16 threads.
The result set contains about 100k elements.
All DB access code is handled through a JdbcTemplate or through Hibernate/Spring; no manual connection management is present.
I have tried with Atomikos and with Commons DBCP as connection pool.
Now, I would think that a maximum of 17 connections in my connection pool would be enough for this batch to finish: one for the select and 16 for the threads in the thread pool which update some rows. However, that seems to be too naive, as I have to specify a max pool size an order of magnitude larger (I haven't tried to find an exact value). First I tried 50, which worked on my local Windows machine but doesn't seem to be enough on our Unix test environment. There I have to specify 128 to make it work (again, I didn't even try a value between 50 and 128; I went straight to 128).
Is this normal? Is there some fundamental mechanism in connection pooling I'm missing? I find it hard to debug this, as I don't know how to see what happens with the open connections. I tried various log4j settings but didn't get any satisfactory result.
Edit, additional info: when the connection pool size seems to be too low, the batch seems to hang. If I do a jstack on the process, I can see all threads are waiting for a new connection. At first I didn't specify the maxWait property on the DBCP connection pool, which causes threads to wait indefinitely for a new connection, and I noticed the batch kept hanging. So no connections were being released. However, that only happened after processing roughly 70k rows, which dismissed my initial hunch of a connection leak.
Edit 2: I forgot to mention I already rewrote the update part in my tasks. I queue my updates in a ConcurrentLinkedQueue and empty it every 1000 elements, so I actually only do about 100 updates.
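The queue-and-flush approach described in that edit can be sketched like this. The flush body is a placeholder (a real implementation would run the drained updates as one batched statement on a pooled connection); class and method names are illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class QueuedUpdatesDemo {
    final Queue<String> pending = new ConcurrentLinkedQueue<>();
    final int batchSize;
    int flushes = 0;

    QueuedUpdatesDemo(int batchSize) { this.batchSize = batchSize; }

    // Worker threads call this instead of updating the database row by row.
    void queueUpdate(String update) {
        pending.offer(update);
        if (pending.size() >= batchSize) flush();
    }

    // Drain the queue and write everything in one batched statement.
    synchronized void flush() {
        List<String> batch = new ArrayList<>();
        String u;
        while ((u = pending.poll()) != null) batch.add(u);
        if (!batch.isEmpty()) flushes++; // a batched executeBatch() would go here
    }

    public static void main(String[] args) {
        QueuedUpdatesDemo q = new QueuedUpdatesDemo(1000);
        for (int i = 0; i < 100_000; i++) q.queueUpdate("update-" + i);
        q.flush(); // write any remainder
        System.out.println(q.flushes + " batched writes instead of 100000");
    }
}
```

This turns 100k per-row updates into roughly 100 database writes, which is why the update side stops being the bottleneck.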
Edit 3: I'm using Oracle and the java.util.concurrent utilities. I have an executor configured with a fixed pool size of 16, and I submit my tasks to this executor. I don't use connections manually in my tasks; I use JdbcTemplate, which is thread-safe and obtains its connections from the connection pool. I suppose Spring/DBCP handles the connection/thread issue.
If you are using Linux, you can try MySQL Administrator to monitor your connection status graphically, provided you are using MySQL.
Irrespective of that, even 100 connections is not uncommon for large enterprise applications, handling a few thousand requests per minute.
But if the request volume is low, or each request doesn't need a unique transaction, then I would recommend you tune the operation inside your threads.
That is, how are you distributing the 100k elements to 16 threads?
If you try to acquire a connection every time you read a row from the shared location (or buffer), then it is expected to take time.
See whether this helps.
getConnection
for each element, until the buffer size becomes zero:
    process it
    if you need to update:
        open a transaction
        update
        commit/rollback the transaction
release the connection
You can synchronize the buffer by using the java.util.concurrent collections.
Don't use one Runnable/Callable per element; this will degrade performance.
Also, how are you creating threads? Use Executors to run your runnables/callables. Also remember that DB connections are NOT meant to be shared across threads, so use one connection in one thread at a time.
For example, create an Executor and submit 16 runnables, each having its own connection.
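That arrangement can be sketched as follows: a fixed executor, one placeholder connection per worker (never shared between threads), and a shared buffer of rows. All names here are illustrative; in real code the placeholder would come from the pool and the processing step would run the actual query:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

public class WorkerPerConnectionDemo {
    // Split `rows` placeholder rows across `workers` tasks; each task holds
    // its own placeholder connection for its entire run.
    static int run(int workers, int rows) throws Exception {
        BlockingQueue<Integer> buffer = new LinkedBlockingQueue<>();
        for (int i = 0; i < rows; i++) buffer.offer(i);

        ExecutorService executor = Executors.newFixedThreadPool(workers);
        CompletionService<Integer> done = new ExecutorCompletionService<>(executor);
        for (int w = 0; w < workers; w++) {
            done.submit(() -> {
                Object connection = new Object(); // one connection per worker, never shared
                int processed = 0;
                while (buffer.poll() != null) {
                    processed++; // process the row using `connection` here
                }
                return processed;
            });
        }
        int total = 0;
        for (int w = 0; w < workers; w++) total += done.take().get();
        executor.shutdown();
        return total;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("processed " + run(16, 100_000) + " rows"); // processed 100000 rows
    }
}
```

Each worker acquires its connection once, drains rows from the shared buffer, and releases the connection at the end, so the pool never needs more than 16 connections for the workers.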
I switched to c3p0 instead of DBCP. In c3p0 you can specify a number of helper threads. I notice that if I set that number as high as the number of threads I'm using, the number of connections stays really low (using the handy JMX bean of c3p0 to inspect the active connections). Also, I have several dependencies, each with its own entity manager. Apparently a new connection is needed per entity manager, so I have about 4 entity managers per thread, which would explain the high number of connections. I think my tasks are all so short-lived that DBCP couldn't keep up with closing/releasing connections; since c3p0 works more asynchronously and lets you specify the number of helper threads, it is able to release my connections in time.
Edit: but the batch keeps hanging when deployed to the test environment. All threads block when releasing the connection; the lock is on the pool. Just the same as with DBCP :(
Edit: all my problems disappeared when I switched to BoneCP, and I got a huge performance increase as a bonus too.
I have a use case where I need to keep a connection to a database open in order to execute queries periodically.
Is it advisable to close the connection after executing the query and then reopen it after the interval (10 minutes)? I would guess not, since opening a connection to the database is expensive.
Is connection pooling the alternative, so I can keep reusing connections?
You should use connection pooling. Write your application code to request a connection from the pool, use the connection, then return the connection back to the pool. This keeps your code clean. Then you rely on the pool implementation to determine the most efficient way to manage the connections (for example, keeping them open vs closing them).
Generally it is "expensive" to open a connection, typically due to the overhead of setting up a TCP/IP connection, authentication, etc. However, it can also be expensive to keep a connection open "too long", because the database has (probably) reserved resources (like memory) for the connection's use. So keeping a connection open can tie up those resources.
You don't want to pollute your application code managing these types of efficiency trade-offs, so use a connection pool.
Yes, connection pooling is the alternative. Open the connection each time (as far as your code is concerned) and close it as quickly as you can. The connection pool will handle the physical connection in an appropriately efficient manner (including any keepalives required, occasional "liveness" tests etc).
I don't know what the current state of the art is, but I used c3p0 very successfully for my last Java project involving JDBC (quite a while ago).
The answer here really depends on the application. If there are other connections being used simultaneously for the same database from the same application, then a pool is definitely your answer.
If all your application does is query the db, wait 10 minutes, then query again, then simply connect and reconnect. A connection is considered to be an expensive operation, but all things are relative. It is not expensive if you do it only once every 10 minutes. If the application is this simple, don't introduce unnecessary complexity.
NOTE:
OK, complexity is also relative, so if you are already using something like Spring and already know how to use its pooling mechanism, then apply it for this case. If not, keep it simple.
Connection pooling would be an option for you. You can then leave your code as it is, including opening and closing connections; the connection pool takes care of the physical connections. If you close a connection obtained from a pool, it is not actually closed but just made available in the pool again. If you then open a connection and there is an open connection in the pool, the pool returns it. So in an application server you can use the built-in connection pools. For simple Java applications, most JDBC drivers also include a pool driver.
There are many, many tradeoffs in opening and closing connections, keeping them open, making sure that connections that have been "kept alive" are still "valid" when you start to use them again, invalidating connections that get corrupted, etc. These kinds of complex tradeoffs make it difficult (but certainly not impossible) to implement the "best" connection management strategy for your specific case. The "safest" method is to open a connection, use it, and then close it. But, as you already realize, that is not at all the most efficient method. If you manage your own connections, then as you do things to make your strategy more efficient, the complexity will rise very quickly (especially in the presence of any less-than-perfect JDBC drivers, of which there are many.)
There are many connection pooling libraries available out there that can take care of all of this for you in extremely configurable ways (they almost always come pre-configured out-of-the-box for the most typical cases, and until you get up to the point that you're doing high-load activities, you probably don't have to worry about all that configurability - but you will be glad to have it if you scale up!) As is always the case, the libraries themselves may be of variable quality.
I have successfully used both C3P0 and Apache DBCP. If I were choosing again today, I would probably go with DBCP.