I've run through several examples on the web, and found that every single time I need something from the DB, I have to write the following code:
try
{
    // Step 1: Load the JDBC driver.
    Class.forName("mysql_driver_name");
    // Step 2: Establish the connection to the database.
    String url = "jdbc:string_to_mysql_server";
    Connection conn = DriverManager.getConnection(url, "user1", "password");
    // fetch from the DB ...
}
catch (Exception e)
{
    System.err.println("Got an exception!");
    System.err.println(e.getMessage());
}
It's very annoying to paste this code every time I want something from the DB, so the question is: is there a way to connect my whole app to the DB just once, at the very start, avoiding copy-pasting the code above, and then be able to do everything I want with the DB?
I've quickly looked through NetBeans's Project menu, but didn't find any clue on how to configure a persistent connection to a selected DB.
If it's important, I'm writing a purely desktop app, i.e. using Java SE. Also, it's worth mentioning that I'm something of a beginner in Java.
There are many connection pool options to choose from; I would suggest that you try Apache Commons DBCP: http://commons.apache.org/dbcp/.
The connection pool idea is probably the best overall solution. However, there is a simpler one in your case.
In your code, conn goes out of scope at the end of the method it was created in. There is no need for that. You can create a method containing all your code up to and including the line that assigns to conn, have it return the connection, and then pass that conn variable to other parts of the program and use it for DB work.
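A rough sketch of that idea (the class name, URL and credentials below are my own placeholders, not anything from the question): open the connection once in a helper and hand the same Connection object to the rest of the program.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class Db {
    private static Connection conn;

    // Opens the connection on first use and reuses it afterwards.
    // The URL, user and password are placeholders for your own settings.
    public static synchronized Connection getConnection() throws SQLException {
        if (conn == null || conn.isClosed()) {
            conn = DriverManager.getConnection(
                    "jdbc:mysql://localhost:3306/mydb", "user1", "password");
        }
        return conn;
    }
}
```

Any DAO can then call Db.getConnection() instead of repeating the driver/URL boilerplate.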
You can follow this method of establishing the connection:
Create a singleton class that creates the connection.
In your DAO or any helper class, get this single instance, which holds the connection.
Once you have the connection, write the operations you want to perform on the database.
Close the connection when you are done with it.
This avoids repeating the code you posted in your question, and this style improves readability and reduces the maintenance effort.
If you want any sample code, let me know and I can provide it.
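A minimal sketch of that singleton approach (class name, URL and credentials are placeholders; the connection is opened lazily rather than in the constructor):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public final class ConnectionManager {
    private static final ConnectionManager INSTANCE = new ConnectionManager();
    private Connection conn; // opened lazily on first use

    private ConnectionManager() { } // nobody else can instantiate this

    public static ConnectionManager getInstance() {
        return INSTANCE;
    }

    public synchronized Connection getConnection() throws SQLException {
        if (conn == null || conn.isClosed()) {
            // placeholder URL and credentials -- replace with your own
            conn = DriverManager.getConnection(
                    "jdbc:mysql://localhost:3306/mydb", "user1", "password");
        }
        return conn;
    }

    public synchronized void close() throws SQLException {
        if (conn != null) {
            conn.close();
        }
    }
}
```

A DAO would call ConnectionManager.getInstance().getConnection(), do its work, and close() when the application shuts down.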
I am using Android Studio to make a calendar application. I made a database to save an event. I was able to open the event as long as the Android Virtual Device was running, but when I closed it and opened it again I could not open the event any more. Is it possible that the database only persists as long as the AVD is running?
Well, I can see in your snippet that you are doing your query and your data manipulation in the same class; if you had separated those responsibilities, managing the lifetime of opening and closing a connection would have been easier.
You could create a DatabaseFactory which sets up the connection to the database, with simple methods like openConnection(String connectionString), close() and query(String sql, String[] params).
Another class, say DatabaseConsumer, should basically open a connection, run the query, return the data however you want (a ResultSet?), then close the connection afterwards.
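As a sketch, that split could be spelled out like this in Java (the interface and its method names are just the suggestion above made concrete, not an existing API):

```java
import java.sql.ResultSet;
import java.sql.SQLException;

// Hypothetical contract for the factory class suggested above.
public interface DatabaseFactory {
    void openConnection(String connectionString) throws SQLException;
    ResultSet query(String sql, String[] params) throws SQLException;
    void close() throws SQLException;
}
```

A consumer class would then call openConnection(), query(), copy what it needs out of the ResultSet, and close() in a finally block.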
But to answer your question in terms of your current design: you can just close the connection after you've finished your while loop (res.moveToNext()), with something like myDB.closeConnection().
edit : The implication of not closing a connection is that the server keeps holding it; a config value determines how many open connections your DB can handle. After a while the database will refuse new connections and give you an SQLServerException, boohoo.
I have a problem implementing this.
1) Some examples show that we can do the following:
ResultSet rs = null;
try {
    bd.conectarBaseDeDatos();
    String sql = "SELECT * FROM cab_pedido a, det_pedido b "
               + "WHERE a.numero = b.numero AND a.cod_agencia = b.cod_agencia";
    System.out.println(sql);
    PreparedStatement stmt = bd.conexion.prepareStatement(
            sql, ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
    rs = stmt.executeQuery(); // note: no argument when using a PreparedStatement
}
catch (InstantiationException | IllegalAccessException | SQLException e) {
    System.out.println(e);
}
return rs;
}
This returns my ResultSet and I can use rs.next(), rs.previous(), etc. to solve my problem, but I see some comments saying we should close the ResultSet and the DB connection. How dangerous is it not to? Can I implement this without closing the ResultSet and connection? Because once we close the ResultSet, I will not be able to get data any more.
2) Store the data in a HashMap or List.
This is another possibility, but if I want to get the last or the first values, how can I do that?
I need next, previous, last, and first functions, but I'm not sure about my first implementation.
Can anybody give me some advice on how to start this?
I need solutions and advice; the suggested duplicate doesn't address this.
I am going to try to start an answer here since I don't have the option to comment yet. What I understand from your code and question is:
You have a ResultSet from which you want to read the first, last, previous, next etc. rows of data.
You want to close the DB connection resources.
You are deciding between a HashMap and a List.
Well, from what you have so far, you can definitely read the data from your ResultSet into a List or HashMap.
Secondly, on closing the resources: yes! You should always close the DB connection resources to avoid resource leaks. You can do that by adding a finally block after the catch block in your code.
On HashMap vs. List: if you aren't sure about the shape of the data you will be reading from your ResultSet, go with some implementation of List, e.g. ArrayList<String>. If you know the data will have keys and values, then you can go for a HashMap.
These are just general guidelines. If you post some more code, maybe we can help further.
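One sketch of the "copy into a List, then close everything" idea: once the rows are in a plain List, the ResultSet and connection can be closed, and first/last/next/previous become trivial list operations. The RowCursor class below is my own illustration (assuming a single string column), not a JDBC type:

```java
import java.util.ArrayList;
import java.util.List;

// Navigable snapshot of rows already copied out of a ResultSet.
public class RowCursor {
    private final List<String> rows;
    private int pos = -1; // before the first row, like a fresh ResultSet

    public RowCursor(List<String> rows) {
        this.rows = new ArrayList<>(rows);
    }

    public String first()    { pos = 0; return rows.get(pos); }
    public String last()     { pos = rows.size() - 1; return rows.get(pos); }
    public String next()     { return rows.get(++pos); }
    public String previous() { return rows.get(--pos); }
    public boolean hasNext() { return pos + 1 < rows.size(); }
}
```

Filling the list would look like while (rs.next()) rows.add(rs.getString("someColumn")); done inside the try block, before the finally that closes rs, the statement and the connection.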
ANSWER for #1 -- This is a frequently asked question.
When you close your DB connection, the ResultSet and Statement are also closed.
But that is not a guaranteed action, therefore you should close all your DB resources explicitly.
Need for closing DB resources separately
Also consider the scenario where the maximum number of DB connections allowed to be open in the connection pool is set to a fixed number. You will eventually reach that limit and your application will simply crash or stop responding correctly.
This applies not only to DB connections/resources but to all I/O resources.
As good practice, you should always close all your I/O resources after you are done using them.
Not doing so simply invites resource leaks.
Answer to #2
It's good to move the information/data into a proper data structure before you close the ResultSet. This is an implementation scenario we face all the time.
Which data structure to use again depends on the scenario. But in all cases you may want to use a bean class/POJO to store the relevant information for each row fetched, because you are getting multiple values per row.
Another suggestion would be not to do a SELECT * call via JDBC. This is not very useful. You should preferably list the column names; this lets you control the data you are fetching into the ResultSet, and in many cases it makes the query execution faster.
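For illustration, a row bean for the query in the question might look like this (the field names are guesses based on the columns cab_pedido and det_pedido share in the posted SQL; adjust to the real schema):

```java
// Hypothetical POJO holding one fetched row.
public class PedidoRow {
    private final int numero;
    private final String codAgencia;

    public PedidoRow(int numero, String codAgencia) {
        this.numero = numero;
        this.codAgencia = codAgencia;
    }

    public int getNumero() { return numero; }
    public String getCodAgencia() { return codAgencia; }
}
```

Each rs.next() iteration builds one PedidoRow and adds it to a List<PedidoRow>; after that the ResultSet can be closed safely.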
Let me explain my question better, since the title may not make it clear, but I didn't find a way to summarize the problem in a few words. Basically I have a web application whose DB has 5 tables. 3 of these are managed using JPA and the EclipseLink implementation. The other 2 tables are managed directly with SQL using the java.sql package. By "managed" I just mean that queries, insertions, deletions and updates are performed in two different ways.
Now the problem is that I have to monitor the response time of each call to the DB. To do this I have a library that uses aspects, so at runtime I can monitor the execution time of any code snippet. The question is: if I want to monitor the response time of a DB request (let's suppose the DB is remote, so the response time will also include network latency, but that is actually fine), which instructions' execution time has to be measured in each of the two cases described above?
I make an example in order to be more clear.
Consider the case of using JPA to execute a DB update. I have the following code:
EntityManagerFactory emf = Persistence.createEntityManagerFactory(persistenceUnit);
EntityManager em = emf.createEntityManager();
EntityToPersist e = new EntityToPersist();
em.persist(e);
Is it correct to assume that only the em.persist(e) instruction connects and makes a request to the DB?
The same question for java.sql:
Connection c = dataSource.getConnection();
Statement statement = c.createStatement();
statement.executeUpdate(stm);
statement.close();
c.close();
In this case, is it correct to assume that only the statement.executeUpdate(stm) call connects and makes a request to the DB?
If it is useful to know, the remote DBMS is MySQL.
I tried to search the web, but it is a rather specific problem and I'm not sure what to look for in order to find a solution without reading the full JPA or java.sql specifications.
If you have any questions, or if something in my description is unclear, don't hesitate to ask.
Thanks a lot in advance.
In JPA (so also in EclipseLink) you have to differentiate between SELECT queries (which do not need any transaction) and queries that change data (DELETE, INSERT, UPDATE: all of these need a transaction).

When you select data, it is enough to measure the time of Query.getResultList() (and similar calls).

For the other operations (EntityManager.persist(), merge() or remove()) there is a flushing mechanism, which forces the queue of queries (or a single query) from the cache to actually hit the database. The question is when the EntityManager is flushed: usually on transaction commit, or when you call EntityManager.flush(). And that raises another question: when is the transaction committed? The answer is: it depends on your connection setup (whether autocommit is true or not), but a very sound setup is autocommit=false, with you beginning and committing your transactions explicitly in your code.
When working with statement.executeUpdate(stm), it is enough to measure only those calls.
PS: usually you do not connect directly to the database; that is done by a pool (even if you work with a DataSource), which simply hands you an already established connection, but again that depends on your setup.
PS2: for EclipseLink, probably the most correct way would be to take a look at the source code in order to find where the internal flush happens, and measure that part.
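Since the aspect library ultimately just times a code region, the core measurement can be sketched with a plain helper like the one below (my own utility, not part of JPA or java.sql). In the JPA case you would wrap the commit or flush call; in the JDBC case, the executeUpdate call:

```java
// Times an arbitrary database call, e.g.:
//   long ms = Timing.timeMillis(() -> em.getTransaction().commit());
//   long ms = Timing.timeMillis(() -> statement.executeUpdate(stm));
public class Timing {
    public static long timeMillis(Runnable dbCall) {
        long start = System.nanoTime();
        dbCall.run();
        return (System.nanoTime() - start) / 1_000_000;
    }
}
```

The measured span then includes network latency, which the question says is fine.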
I've got a pretty long transaction I'm trying to execute using JDBC for PostgreSQL. In JDBC I can't simply use COMMIT and ROLLBACK statements, so I'm trying to implement my desired behaviour in Java code...
Connection con = null;
try {
    con = DriverManager.getConnection(url, user, password);
    con.setAutoCommit(false);
    Statement st = con.createStatement();
    st.execute(myHugeTransaction);
    con.commit();
} catch (SQLException ex) {
    try {
        if (con != null) {
            con.rollback();
        }
    } catch (SQLException ex1) {
        // log...
    }
    // log...
}
For small statements this works pretty well, but for large ones, with about 10K statements in a single transaction, it fails on the con.commit() line with:
org.postgresql.util.PSQLException: ERROR: kind mismatch among backends. Possible last query was: "COMMIT" kind details are: 0[C] 1[N: there is no transaction in progress]
The funny thing is, if I collect SQL warnings with st.getWarnings(), I can see that the database is actually processing the whole script I've sent; only when it comes to the commit does it all fail.
By the way, the transaction itself is totally fine. I wrote an exact copy of it to a file and I can run it without errors by pasting it into pgAdmin. Hope you can help me with this one; I've been searching and testing for hours now...
edit
Maybe I didn't get this right, so two questions:
Can I execute multiple statements in one call to Statement.execute()?
If not, what is the right way to run a script with multiple statements using JDBC (without having to parse and split it into single statements)?
Honestly, if this is an SQL script you are better off shelling out to psql. That is the best way to handle this. In general I have had far too many unpleasant surprises with code trying to parse SQL and run it against the DB itself. That way madness lies.
You say "smaller scripts", which leads me to conclude you are doing something like setting up a database (or upgrading one, but that's less likely since there are no queries). Use psql through a shell escape and don't look back. That really is the best way.
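A sketch of that shell escape from Java (assuming psql is on the PATH; the database name and script path are placeholders):

```java
import java.util.List;

public class PsqlRunner {
    // Builds the psql invocation; ON_ERROR_STOP makes psql abort on the first error.
    public static List<String> buildCommand(String dbName, String scriptPath) {
        return List.of("psql", "-d", dbName, "-v", "ON_ERROR_STOP=1", "-f", scriptPath);
    }

    // Runs the script and returns psql's exit code (0 on success).
    public static int run(String dbName, String scriptPath) throws Exception {
        Process p = new ProcessBuilder(buildCommand(dbName, scriptPath))
                .inheritIO()
                .start();
        return p.waitFor();
    }
}
```

psql handles the transaction boundaries and statement splitting itself, which is exactly the parsing work you want to avoid doing in JDBC.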
I suppose, if you have to, you could try adding explicit BEGIN and COMMIT to your script.
I am not sure why it seems to be committing the transaction implicitly; you have turned autocommit off properly, and there are no obvious problems in your code. Is it possible you have an old or buggy JDBC driver? If not, I would recommend filing a bug report with the PostgreSQL JDBC driver project.
A hobby project of mine is a Java web application. It's a simple web page with a form. The user fills out the form, submits, and is presented with some results.
The data is coming over a JDBC Connection. When the user submits, I validate the input, build a "CREATE ALIAS" statement, a "SELECT" statement, and a "DROP ALIAS" statement. I execute them and do whatever I need to do with the ResultSet from the query.
Due to an issue with the ALIASes on the particular database/JDBC combination I'm using, it's required that each time the query is run, these ALIASes are created with a unique name. I'm using an int to ensure this, which gets incremented each and every time we go to the database.
So, my data access class looks a bit like:
private static final Connection connection = // initialized however
private static int uniqueInvocationNumber = 0;

public static Whatever getData(ValidatedQuery validatedQuery) {
    String aliasName = "TEMPALIAS" + String.valueOf(uniqueInvocationNumber);
    // build statements, execute statements, deal with results
    uniqueInvocationNumber++;
}
This works. However, I've recently been made aware that I'm firmly stuck in Jon Skeet's phase 0 of threading knowledge ("Complete ignorance - ignore any possibility of problems.") - I've never written either threaded code or thread-aware code. I have absolutely no idea what can happen when many users are using the application at the same time.
So my question is, (assuming I haven't stumbled to thread-safety by blind luck / J2EE magic):
How can I make this safe?
I've included information here which I believe is relevant but let me know if it's not sufficient.
Thanks a million.
EDIT: This is a proper J2EE web application using the Wicket framework. I'm typically deploying it inside Jetty.
EDIT: A long story about the motivation for the ALIASes, for those interested:
The database in question is DB2 on AS400 (i5, System i, iSeries, whatever IBM are calling it these days) and I'm using jt400.
Although DB2 on AS400 is kind of like DB2 on any other platform, tables have a concept of a "member" because of legacy stuff. A member is kind of like a chunk of a table. The query I want to run is
SELECT thisField FROM thisTable(thisMember)
which treats thisMember as a table in its own right so just gives you thisField for all the rows in the member.
Now, queries such as this run fine in an interactive SQL session, but don't work over JDBC (I don't know why). The workaround I use is to do something like
CREATE ALIAS tempAlias FOR thisTable(thisMember)
then a
SELECT thisField FROM tempAlias
then a
DROP ALIAS tempAlias
which works, but for one show-stopping issue: when you do this repeatedly with the ALIAS always called "tempAlias", and thisField has a different length from one query to the next, the result set comes back garbled for the second query (getString for the first row is fine, the next one has a certain number of spaces prepended, the next one the same number of spaces further prepended - this is from memory, but it's something like that).
Hence the workaround of ensuring each ALIAS has a distinct name which clears this up.
I've just realised (having spent the time to type this explanation out) that I probably didn't think the issue through enough before seizing on the workaround. Unfortunately I haven't yet fulfilled my dream of getting an AS400 for my bedroom ;) so I can't try anything new right now.
Well, I'm going to ignore any SQL stuff for the moment and just concentrate on the uniqueInvocationNumber part. There are two problems here:
There's no guarantee that the thread will see the latest value at any particular point
The increment isn't atomic
The simplest way to fix this in Java is to use AtomicInteger:
private static final AtomicInteger uniqueInvocationNumber = new AtomicInteger();

public static Whatever getData(ValidatedQuery validatedQuery) {
    String aliasName = "TEMPALIAS" + uniqueInvocationNumber.getAndIncrement();
    // build statements, execute statements, deal with results
}
Note that this still assumes you're only running a single instance on a single server. For a home project that's probably a reasonable assumption :)
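To see why AtomicInteger fixes both problems, here is the alias-name generation isolated into a tiny class (my own framing of the code above): getAndIncrement is a single atomic read-and-bump, so two threads can never observe the same value, and every generated name is distinct.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AliasNames {
    private static final AtomicInteger counter = new AtomicInteger();

    // Atomic: no two calls, even from different threads, return the same name.
    public static String next() {
        return "TEMPALIAS" + counter.getAndIncrement();
    }
}
```

With the plain int version, two threads could read the same uniqueInvocationNumber before either incremented it, producing a duplicate alias.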
Another potential problem is sharing a single connection amongst different threads. Typically a better way of dealing with database connections is to use a connection pool, and "open/use/close" a connection where you need to (closing the connection in a finally block).
If that static variable and the incrementing of the unique invocation number is visible to all requests, I'd say that it's shared state that needs to be synchronized.
I know this doesn't answer your question but I would seriously consider re-implementing the feature so creating all those aliases isn't required. (Could you explain what kind of alias you're creating and why it's necessary?)
I understand this is just a hobby project, but consider putting "switch to a connection pool" on your to-do list. It's all part of the learning, which I guess is part of your motivation for doing this project. Connection pools are the proper way to deal with multiple simultaneous users in a database-backed web app.