As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 9 years ago.
What are the benefits of using a JDBC connection pooling tool like DBCP or c3p0?
In the case of a small CRUD application with a single user, can we just create one connection session as a singleton?
PS: I'm building a small JavaFX application backed by a tiny H2 database (5 tables).
From Jon Skeet's answer to What is the benefit of Connection and Statement Pooling?:
Creating a network connection to a database server is (relatively)
expensive. Likewise asking the server to prepare a SQL statement is
(relatively) expensive.
Using a connection/statement pool, you can reuse existing
connections/prepared statements, avoiding the cost of initiating a
connection, parsing SQL etc.
And the following, from Kent Boogaart's answer:
I am not familiar with c3p0, but the benefits of pooling connections
and statements include:
Performance. Connecting to the database is expensive and slow. Pooled connections can be left physically connected to the database,
and shared amongst the various components that need database access.
That way the connection cost is paid for once and amortized across all
the consuming components.
Diagnostics. If you have one sub-system responsible for connecting to the database, it becomes easier to diagnose and analyze database
connection usage.
Maintainability. Again, if you have one sub-system responsible for handing out database connections, your code will be easier to maintain
than if each component connected to the database itself.
Creating connections is costly, and it doesn't make sense to create a new connection for each transaction that might only take a few milliseconds. Managing database connections in a pool means that applications can perform database transactions in a way that avoids the connection creation time (the connections still need to be created, but they are all created at startup). The downside is that if all connections are in use and another connection is required, the requesting thread will have to wait until a connection is returned to the pool.
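The create-everything-at-startup pool and the blocked-waiter behaviour described above can be sketched in plain Java, without any JDBC at all (SimplePool and its method names are invented for illustration):

```java
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// Minimal generic pool: all resources are created up front, borrowers
// block (with a timeout) when every resource is checked out, and
// returning a resource unblocks the next waiter.
class SimplePool<T> {
    private final BlockingQueue<T> idle;

    SimplePool(List<T> resources) {
        // fair = true: waiters are served in FIFO order
        this.idle = new ArrayBlockingQueue<>(resources.size(), true, resources);
    }

    // Waits up to timeoutMillis if the pool is empty.
    T borrow(long timeoutMillis) throws InterruptedException {
        T r = idle.poll(timeoutMillis, TimeUnit.MILLISECONDS);
        if (r == null) throw new IllegalStateException("pool exhausted");
        return r;
    }

    void giveBack(T resource) {
        idle.offer(resource);
    }
}
```

A real pool such as DBCP or c3p0 adds connection validation, eviction of broken connections, and resizing on top of this core borrow/return idea.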
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 7 years ago.
I have a standalone application (no application/web server) with various methods accessing a database via JDBC. So far, a database connection is opened (and closed) in every method. There is no need for multiple connections at the same time.
But:
Creating a new connection every time seems a bad idea because of performance
Alternatively, using a single connection seems a bad idea as well.
What is the way to go? Connection pooling for just one connection?
If you configure it right, you can gain a lot by using a connection pool, most of all in the performance of individual statements: connecting to the DB can take time measured in seconds.
At the same time, except for the initial pool creation (which you might be able to run in parallel with other initialization), you still get very good reliability, as the pool will check connections on checkout or in between and discard connections that have broken down. So you're likely to survive episodes of being "not connected" as well.
I share your view that using a single connection might be a bad idea, because you'd have to deal with connection loss and reconnecting all over your code.
A couple of benefits you could get from connection pooling, even if you only have one connection:
A connection pool typically manages the life cycle of its connections. For instance, if a connection goes stale, a new one will be created in its place. That saves you from handling life-cycle events in your own code.
A connection pool can control the opening and closing of connections. Just because you call close() on a connection doesn't necessarily mean the connection is closed by the pool; it can choose to keep the connection open. This can offer performance benefits if your application is constantly opening and closing connections.
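The close()-interception trick from the last point can be sketched with a dynamic proxy. This is a simplified sketch (ClosingInterceptor is an invented name; real pools such as DBCP or HikariCP do the same thing with far more bookkeeping):

```java
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.util.ArrayDeque;
import java.util.Deque;

// Wrap a physical Connection in a dynamic proxy so that calling
// close() returns it to the pool instead of physically closing it.
// Every other method call is delegated to the real connection.
class ClosingInterceptor {
    final Deque<Connection> pool = new ArrayDeque<>();

    Connection wrap(Connection physical) {
        return (Connection) Proxy.newProxyInstance(
            Connection.class.getClassLoader(),
            new Class<?>[] { Connection.class },
            (proxy, method, args) -> {
                if ("close".equals(method.getName())) {
                    pool.push(physical);   // recycle instead of closing
                    return null;
                }
                return method.invoke(physical, args); // delegate the rest
            });
    }
}
```

The caller keeps writing the usual try-with-resources code; only the pool decides when the physical socket is really torn down.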
Do not use connection pooling for only one thread.
For a standard JDBC connection pool, a pool of Connection objects is created when the application starts: as the connection pool server comes up, it creates a predetermined number of Connection objects. A pool manager then hands these out when different clients request them, and returns them to the pool when a client no longer needs its Connection object. A fair amount of resources goes into managing all this.
So if only one connection is ever used, you waste a little either way, performance-wise. IMO, opening and closing individual Connection objects is the better option in that case. Try sending batch queries to compensate for the performance loss.
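The batch-query suggestion looks like this with plain JDBC (a hedged sketch: the person table and name column are made up, and insertAll is an invented helper):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

// Sketch of JDBC batching: many rows queued locally, then one round
// trip to the server, instead of one round trip per INSERT.
class BatchInsert {
    static int[] insertAll(Connection con, List<String> names) throws SQLException {
        try (PreparedStatement ps =
                 con.prepareStatement("INSERT INTO person(name) VALUES (?)")) {
            for (String name : names) {
                ps.setString(1, name);
                ps.addBatch();              // queued locally, no round trip yet
            }
            return ps.executeBatch();       // single round trip to the server
        }
    }
}
```

The returned int[] holds one update count per batched statement, so the caller can check that every row made it in.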
Closed 10 years ago.
I'm developing a Java application that communicates with a MySQL database in a server. The app should be able to read some data from an XML file and then insert the info read into the database.
I used to write SQL statements directly in the Java code, but then a friend advised me to create a web service that does all the SQL work for the tool, so that the tool's only job is to read the XML and send the data to the web service.
My question is: is it worth the effort? Why or why not?
SQL in code is not recommended, as it becomes difficult to maintain. The application is also tightly coupled to the database structure. Every time the database changes (or you move to a new database) you need to change your code and release again.
I don't think a web service is the correct answer here. I would recommend you try one of the following:
If your application uses a lot of tables and very high throughput is not critical, use Hibernate as an ORM tool. It has many features and can really reduce the time spent on data access.
If you do not have that many tables and you don't have the time to learn Hibernate, use iBatis. It will take you 30 minutes to grasp. It essentially allows you to put your SQL in a separate XML file, which it will read for you. For smaller applications it is really useful and it is faster than Hibernate.
As a last resort, put your SQL in one or more text files that you open and execute.
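The text-file option can be as small as a properties file of named queries. A minimal sketch (QueryCatalog and the key names are invented for illustration):

```java
import java.io.IOException;
import java.io.Reader;
import java.util.Properties;

// Keep SQL strings in an external .properties file and look them up
// by name, so the Java code never embeds SQL literals.
class QueryCatalog {
    private final Properties queries = new Properties();

    QueryCatalog(Reader source) throws IOException {
        queries.load(source);
    }

    String sql(String name) {
        String q = queries.getProperty(name);
        if (q == null) throw new IllegalArgumentException("unknown query: " + name);
        return q;
    }
}
```

With a file containing a line like findUser=SELECT * FROM users WHERE id = ?, the code calls catalog.sql("findUser") and feeds the result to a PreparedStatement, which keeps the database coupling in one editable file.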
How do you intend to create the web service part? If you have the time, it is worth trying with core Java or any web service framework, though I would suggest core Java, which helps keep the dependencies of your tool minimal. Nevertheless, an ample amount of effort is required to keep the XML and the web service requests in sync. My take: if it is not broken, don't fix it.
Closed 10 years ago.
To guarantee only a single write transaction per database/resource, I'm creating a .lock file, which prevents other JVMs from starting a session.
However, I'm not sure how this is handled, for instance in Eclipse, if the application crashed. I think I had to remove the file manually. So, is this a common solution, or do other solutions exist? A restarted application (after a crash) cannot be distinguished from a normal start; I think that is one thing which bothered me once or twice with Eclipse, which didn't show a proper message telling me to delete the lock file before restarting. But I'm not really sure if that was the problem.
OK, I might have another solution for write transactions, which have to check for a transaction log that is deleted for properly committed transactions. But then, the write transaction is used after the check. Do other solutions exist? I can't think of any...
Databases have been handling transactions and isolation for a long time. I cannot for the life of me see why you'd see the need to reinvent this wheel. Have you not heard of JTA?
Have a look at Spring and its transaction managers. This problem has been solved better by others.
UPDATE: NoSQL means no ACID by design. If you need ACID, don't use NoSQL. You're adding complexity to make up for a poor design decision.
What does Eclipse have to do with this? It's an IDE. I presume that your users won't have to fire up an IDE to run your app.
You should really try and dodge this bullet.
But if you are using a DBMS with transaction support, then back in the old days this was one trick for application or user locks:
You create a table, e.g. applicationLocks.
Then you start a transaction and insert/update a specific record, but do not commit it.
Anyone else coming in afterwards won't be able to, because the row is locked, and they can take some suitable action.
When the application closes, rollback.
If the application crashes, the connection will get closed and the transaction rolled back anyway.
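The question's hand-rolled .lock file has an OS-level counterpart with the same crash-safety property as the row-lock trick above: a java.nio file lock is released by the operating system when the process dies, even on a crash, so no stale lock file is left behind. A minimal sketch (SingleInstanceGuard is an invented name):

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// OS-level alternative to a hand-rolled .lock file: the lock dies
// with the process, so a crashed JVM never leaves a stale lock.
class SingleInstanceGuard {
    // Returns the lock if acquired, or null if another process holds it.
    // (Within the same JVM, an overlapping attempt throws
    // OverlappingFileLockException instead.)
    static FileLock tryAcquire(Path lockFile) throws IOException {
        FileChannel channel = FileChannel.open(lockFile,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE);
        FileLock lock = channel.tryLock();
        if (lock == null) channel.close();
        return lock;
    }
}
```

The channel must stay open for as long as the lock is held; closing it releases the lock.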
It's difficult to tell what is being asked here. This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form. For help clarifying this question so that it can be reopened, visit the help center.
Closed 10 years ago.
For Hibernate we can use a connection pool to manage the connections inside it. Is the state of the connections inside the pool open or closed? And if the connections are open, is there a possible security threat to the database?
We are using Oracle as the database, so is there an internal mechanism inside Oracle to disconnect unused connections?
The connections inside the pool are open (at least for some time period; depending on your pool implementation idle connections might get closed). Creating and opening new database connections can be expensive. Pooling is used to reduce this cost.
There's really no more security threat with connection pooling than there would be without it. In either case, your application still has the same level of access to the database; the same level of damage can be done regardless of whether a connection has to be opened first or not.
The purpose of pooling the database connections is to have a set of open connections so that every time the application tries to open a new connection, the pool transparently returns already opened connection. This is much faster than opening new connection every time.
From the database's perspective it looks like your application has an open but idle database connection (as if you had opened a SQL console and not run any queries).
I am not a security expert, and I also don't know how secure Oracle's connection and the TCP/IP stack are. But the fact that an idle connection stays open for several seconds between your application's requests shouldn't be a problem. Millions of applications use database connection pooling (in fact, I can't think of any application not using it) and I have never heard of any attack vector targeting it. Remember that pooled connections are still subject to database authorization and authentication.
Consider tunneling or encrypting the database connection if it worries you that much (or if the database connection goes over the Internet rather than an intranet).
All these issues are transparent to the calling code. You would only face these questions if you implemented a connection pool yourself. If you use a well-known one (such as c3p0), you never touch them, because you are coding against the DataSource interface.
(That doesn't mean these libraries are free, per se, of bugs, memory leaks or orphaned open connections.)
Closed 11 years ago.
I have yet to find a good benchmark on JSF performance. I know what you are thinking, it depends on your code/design/database setup, etc. I know that you have to optimize many things before you need to optimize your presentation layer (for instance you should see how good your database schema is), but let's say for the sake of the argument that we have reached the point in which we must review our presentation layer.
JSF is session-intensive. I've read a number of times that this can be a drawback when it comes to writing scalable applications. Having big user sessions in a clustered environment can be problematic. Is there any article on this? I'd hate to go to production only to find that the great JSF lifecycle and CDI integration come at a huge performance cost.
For high performance, session stickiness must be implemented, regardless of framework or language. How that's done depends on your setup; for example, hardware load balancers usually have this feature. Then you don't really have to worry about inter-server network latency.
However, JSF+CDI performance on a single machine is also very important. Suppose the overhead is 300 ms; that means a 4-core server can only handle on the order of 10 requests per second (4 cores / 0.3 s per request ≈ 13). Not too bad, but not in the high-performance class. (Usually not a problem for companies on the JEE bandwagon; they are usually enterprise-scale, not internet-scale, and they have cash to burn on lots of servers.)
I don't really have the performance number though; it would be interesting if someone reports some CDI+JSF stats, for example, how long it takes to handle a typical page with a moderate size form.
I don't know if there is any truth in the assertion that JSF is heavy on session data. However, I'd have thought that the way to address scalability issues due to large amounts of session data would be to:
replicate the front-end servers (which you have to do anyway beyond a certain point of scaling), and
dispatch requests to the front-end based on the session token, so that the session data is likely to already be available in memory.
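The dispatch-by-session-token idea in the second point can be sketched as a hash-based router (StickyRouter and the server names are invented; real load balancers typically use consistent hashing, so that adding a server remaps only a few sessions):

```java
import java.util.List;

// Session-sticky routing sketch: hash the session token to pick a
// server, so the same session always lands on the same front end
// (as long as the server list is stable).
class StickyRouter {
    private final List<String> servers;

    StickyRouter(List<String> servers) {
        this.servers = servers;
    }

    String route(String sessionToken) {
        // floorMod keeps the index non-negative even for negative hashes
        return servers.get(Math.floorMod(sessionToken.hashCode(), servers.size()));
    }
}
```

Because the choice depends only on the token, the session data is likely to already be in that server's memory on the next request, which is exactly the point made above.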
The presentation layer is an instance of an embarrassingly parallel application. In principle you can scale it by adding hardware; the extreme would be one hyper-thread per user at your site's peak user count. So scalability is not a problem here. What might be a problem is pages that must be rendered sequentially and take a long time to render even in single-user mode: if your JSF page takes a minute to render for a single user, it will take just as long under load, and if you cannot render it in multiple pieces in parallel, that time is simply unavoidable.