I have a server written in Java that uses a JDBC connection to SQLite with auto-commit enabled. However, every query ends up generating a journal file, and when the server restarts the database looks as if it was never changed. Is there a general reason why this might happen? I know it would be helpful to provide some code, but I've been working on it for hours and I can't even reproduce it with a smaller amount of code...
Can anyone give a general idea of why a journal file is still there even when auto-commit is set to true? Thanks!
OK I found the reason. ResultSet has to be closed whenever we're done with it.
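For anyone hitting the same thing, here is a minimal sketch of the fix using try-with-resources (the sqlite-jdbc driver, the database file name and the "users" table are just examples):

import java.sql.*;

public class CloseResultSetExample {
    public static void main(String[] args) throws SQLException {
        // try-with-resources closes the ResultSet, Statement and Connection in reverse order,
        // which lets SQLite finalize the statement and clean up the journal file.
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:server.db");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT id, name FROM users")) {
            while (rs.next()) {
                System.out.println(rs.getLong("id") + " " + rs.getString("name"));
            }
        }
    }
}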
Is it possible to use Hibernate with a read-only connection to a database? I will be working on a project that requires connecting to an existing database, pulling data from it, and doing some complex data manipulation in the application. Throughout all of this I can't change anything in the database, hence the read-only connection requirement.
My first thought was to pull the data from the database using Hibernate so that I can work with ready-made Java objects representing the data; however, I can't seem to find any information on how to force Hibernate to use a read-only database connection... I have a feeling this might actually be impossible, but I want to hear from others before I pursue other ideas.
I don't have enough reputation to comment, apparently :(
But responding to your comment about the cases where Hibernate may still write back to the DB: you could manually detach the object from your persistence context, after which Hibernate stops caring about the state of the object and whether it's been modified.
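Roughly like this (a sketch with placeholder names; evict is the classic Session API, EntityManager.detach is the JPA equivalent):

// Load the entity, then detach it so Hibernate no longer tracks or flushes changes to it.
MyEntity entity = (MyEntity) session.get(MyEntity.class, id);  // MyEntity, id and session are placeholders
session.evict(entity);
entity.setSomething("changed");  // this modification is never written back to the database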
You can use Session.setDefaultReadOnly(true);
http://docs.jboss.org/hibernate/orm/3.5/reference/en-US/html/readonly.html#readonly-api-loaddefault
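For example (a rough sketch; sessionFactory and MyEntity are placeholders):

// All entities loaded by this Session are read-only by default (Hibernate 3.5+).
Session session = sessionFactory.openSession();
session.setDefaultReadOnly(true);

// Per-entity and per-query variants also exist:
session.setReadOnly(someEntity, true);
session.createQuery("from MyEntity").setReadOnly(true).list();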
To be bulletproof against anything you do in the application, you need to assign read-only permissions to the DB user that Hibernate is configured to use. This has to be done on the database side. If you rely only on configuration on the application side, you can always make a mistake (forget to detach the entities, forget to set the session to read-only mode, etc.).
For our application to work properly we need to execute a SQL Statement on every new connection, before that connection is handed out to the application.
How do I configure a data source in WAS 7 accordingly?
We found the (deprecated) option to validate the data source with a SQL statement, which hopefully does the trick (a coworker is testing it right now). This feels wrong, since we are not 'testing' the connection but setting it up properly. Also, it's deprecated, so it will probably stop working in future versions of WebSphere.
Is there a clean and correct way to do this?
The statement we'd like to execute is:
ALTER SESSION SET NLS_SORT = 'GERMAN_AI'
One alternative approach: the application is Hibernate-based, so if we could convince Hibernate to execute the statement before using a connection, that would work as well.
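Something along these lines is what I have in mind, a rough sketch against the Hibernate 3.x API (the class name is made up, and the package of DatasourceConnectionProvider moved in later Hibernate versions):

// Runs the ALTER SESSION statement on every connection Hibernate checks out.
public class NlsSortConnectionProvider extends org.hibernate.connection.DatasourceConnectionProvider {
    @Override
    public java.sql.Connection getConnection() throws java.sql.SQLException {
        java.sql.Connection con = super.getConnection();
        try (java.sql.Statement stmt = con.createStatement()) {
            stmt.execute("ALTER SESSION SET NLS_SORT = 'GERMAN_AI'");
        }
        return con;
    }
}

// registered in the Hibernate configuration:
// hibernate.connection.provider_class = NlsSortConnectionProvider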
If it were me, I would just use the "connection test" approach:
It works!
The YAGNI principle says to worry about deprecation when it happens, if it ever happens; that is probably years away, or never.
You won't add any business value by finding the "correct" way.
You can drop this and get on with real work that actually adds value to your project.
The only downside is that the statement will be executed every time a connection is tested, which may be many times during the life of the connection. But so what: it's a very fast statement and it's idempotent, so there's no problem.
I'm not a WAS expert by any means, but if you can set up Tomcat JDBC to provide your database connection pooling, it has a parameter, among others, called "initSQL". You can set it to a SQL statement that you want the connection pool to run whenever a connection is created.
Tomcat JDBC Connection Pool
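A minimal sketch of configuring it programmatically (URL, driver class and credentials are placeholders):

// Tomcat JDBC pool: initSQL runs once on every physical connection when it is created.
org.apache.tomcat.jdbc.pool.PoolProperties p = new org.apache.tomcat.jdbc.pool.PoolProperties();
p.setUrl("jdbc:oracle:thin:@dbhost:1521:ORCL");
p.setDriverClassName("oracle.jdbc.OracleDriver");
p.setUsername("app");
p.setPassword("secret");
p.setInitSQL("ALTER SESSION SET NLS_SORT = 'GERMAN_AI'");

org.apache.tomcat.jdbc.pool.DataSource ds = new org.apache.tomcat.jdbc.pool.DataSource(p);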
A.
One way to go would be to use a custom Hibernate dialect, since you are actually specifying a 'different' way to talk with the database. I have no idea where to add the initialization code though.
I am currently responsible for migrating the data of our application for an upgrade to a new version. I am trying to migrate from HSQL to HSQL; later we will move on to other combinations.
So I have a standalone utility to do this. I am using MockServletContext to initialize my services (the migration has to be done without starting the servers).
The problem is that all the tables are migrated except for 2-3 tables, the number depending on the size of the data migrated. On extensive debugging I found nothing wrong: all the data gets migrated when I step through it in Eclipse, but on a normal run it fails to complete the last 3 tables.
Any clue where to look?
In a normal run I have put in loggers to check whether we read all the data from the source database, and the logs prove that we do.
The only place where I am unable to put logs is where it calls a method in the driver.
In the last step we call the PreparedStatement's executeBatch()/executeUpdate() methods (tried both, with exactly the same result).
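Is there anything worth checking around that call, e.g. the update counts it returns, or an explicit commit before the utility exits? Something like (placeholder names, not the real code):

// Sanity checks around the batch call: log what the driver reports and flush before the JVM exits.
int[] counts = ps.executeBatch();
System.out.println("batch update counts: " + java.util.Arrays.toString(counts));
if (!connection.getAutoCommit()) {
    connection.commit();
}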
I am completely clueless about what to do and where to look. Any suggestions?
Thanks
In a normal run I have put in loggers to check whether we read all the data from the source database, and the logs prove that we do. The only place where I am unable to put logs is where it calls a method in the driver.
If you suspect something wrong there, try wrapping your driver in log4jdbc. It will show the SQL issued to the DB. Good luck!
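Roughly, the only changes are the driver class and the URL prefix (the file name and credentials below are placeholders):

// log4jdbc wraps the real driver and logs every statement that goes to the database.
Class.forName("net.sf.log4jdbc.DriverSpy");
java.sql.Connection con = java.sql.DriverManager.getConnection(
        "jdbc:log4jdbc:hsqldb:file:sourcedb", "sa", "");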
I have a view defined in SQL Server 2008 that joins 4 tables together. Executing this view in SQL Server Management Studio takes roughly 3 seconds and returns about 45,000 records. My application is written in Java and uses Hibernate to simply do a "from MyViewObject" query in HQL. When this runs, the execution time is consistently around 45 seconds. I have also tried simply running the query over plain JDBC and got the same level of performance, so I've assumed it has nothing to do with Hibernate.
My question: What can I do to diagnose this problem? There is obviously something different between how Management Studio is running the query vs how my application is running the query but I have not been able to come up with much.
The only thing I've come up with as a potentially viable explanation is an issue with the jtds library that contains the driver for SQL Server in Java.
Any guidance here would be greatly appreciated.
UPDATE
I went back to trying pure JDBC and tried adding the selectMethod and responseBuffering attributes to my connection string but didn't get any improvements. I also took my JDBC code from my application and ran it from a test program containing nothing but my JDBC code and it ran in the expected 3 seconds. So to me this seems environmental for the application.
My application is a Google Web Toolkit (GWT) based app, and the JDBC code is run in my primary RPC servlet. Essentially, the RPC method receives the call and immediately executes the JDBC code. Nothing in this setup gives me much indication of why the performance is terrible, though. I am going to try the JDBC 3.0 driver and see if that works any better, but it doesn't feel like that will fix the issue.
My goal for the moment is to get my query working live with JDBC and then switch it back over to Hibernate so I can keep the testing simple enough. Thanks for the help so far!
UPDATE 2
I'm finally starting to zero in on the source of the problem, though I still have no idea what the actual issue is. I opened up the view in SQL Server, copied the (rather large) SQL statement exactly into my code, and executed it using JDBC instead of pulling the data from the view, and most of the performance issues are gone. It seems that some combination of GWT, SQL Server views and JDBC is not working properly here. I don't see keeping a very large hand-written query in my code as a long-term solution, but it does offer a bit more insight.
<property name="hibernate.show_sql">true</property>
Setting this will show you the SQL queries generated by Hibernate. Analyze the query and make sure you are not missing a relationship.
Reply to Update 1 and 2:
As you mentioned, you ran the raw SQL query and it is fast. Another thing to remember about Hibernate is that it creates the objects returned by your query (this also depends on whether you initialize lazily loaded objects). How many objects does your query return? You can also do a simple benchmark to find where the issue is.
For example, print the current time before running the query and again after it. Do this for every place that you suspect is slowing your application down.
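Something as crude as this already narrows it down (session is assumed to exist; MyViewObject is the entity from your question):

// Measure the HQL query plus Hibernate's object creation in one go.
long start = System.currentTimeMillis();
java.util.List<?> rows = session.createQuery("from MyViewObject").list();
System.out.println("HQL query + mapping: " + (System.currentTimeMillis() - start)
        + " ms for " + rows.size() + " objects");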
To analyze the problem, look in your database's manual for tools that display the query or execution plan. Maybe you're missing an index on a join column.
I'm just curious about something. I'm using HSQLDB in my project (embedded, of course). At some point I felt the need to visualize what Hibernate was generating, so I grabbed a free copy of DbVisualizer. Here is the hsqljdbc.properties:
jdbc.url=jdbc:hsqldb:file:mydb;create=true
hibernate.hbm2ddl.auto=create
I downloaded HSQLDB 1.8.0_10 and did all the required procedures. I could connect and see the tables, but after that, changes made to the tables don't seem to show up. I then tried deleting the database and generating a new one, but still nothing. Do you have any idea what's going on?
I usually use Derby, but I've realized lately that it's not that precise about relationship management. I'm using MySQL for the moment, which is not great for development, so I want to know whether I forgot to do something or it's just meant to behave that way. Thanks for reading this.
Using HSQLDB for development and testing is discussed in detail in the new Guide.
http://hsqldb.org/doc/2.0/guide/deployment-chapt.html#dec_app_dev_testing
HSQLDB uses a write delay mechanism by default and changes are flushed to disk after 10 seconds in version 1.8.x or 0.5 sec in version 2.0 and later.
You can force the database to shutdown and write all the changes when the last connection is closed with this URL:
jdbc.url=jdbc:hsqldb:file:mydb;shutdown=true
With HSQLDB 2.x you can use the write_delay property to force each commit to write to disk immediately:
jdbc.url=jdbc:hsqldb:file:mydb;hsqldb.write_delay=false
Version 2.2.9 and later persist the latest changes when the last connection is closed, so it may not be necessary to use hsqldb.write_delay=false for tests that close the connections.
With HSQLDB 1.8, you need to run an SQL command at the beginning to achieve this:
SET WRITE_DELAY FALSE
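For example, executed once over the JDBC connection right after it is opened (connection is assumed to exist):

// HSQLDB 1.8: disable the write delay so every commit is written to disk immediately.
try (java.sql.Statement stmt = connection.createStatement()) {
    stmt.execute("SET WRITE_DELAY FALSE");
}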
By default, HSQLDB keeps table contents in memory until the database is shut down: http://www.hsqldb.org/doc/guide/ch05.html#N10DD6
Depending on your needs (e.g., working in a development environment) this may be sufficient. For production, however, I'd rather use a DBMS that writes each change to disk in multiple places (which for me means Oracle, although MySQL probably works just as well).
Why don't you just set the show_sql property to true if you want to see what hibernate does?