I have a table with more than 7 million rows in MySQL (InnoDB), and I do some operations on it with Java. Everything was working correctly until I had to delete some rows and insert some new ones.
The problem is that when I do a SELECT, I keep getting the old values instead of the new ones. For example, if I do a COUNT(*), I get 2760 instead of 2786.
Any idea?
Thanks for your time.
Please check in the database options whether the DB has the auto-commit option selected.
If auto-commit isn't possible, call db.commit() after each transaction, and make your transaction class synchronized (for safety).
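A minimal sketch of that in plain JDBC (the URL, credentials, table, and id value here are assumptions):
Connection conn = DriverManager.getConnection("jdbc:mysql://localhost:3306/mydb", "user", "password");
conn.setAutoCommit(false); // take explicit control of transactions
try (PreparedStatement ps = conn.prepareStatement("DELETE FROM mytable WHERE id = ?")) {
    ps.setLong(1, 42L);
    ps.executeUpdate();
    conn.commit(); // make the change visible to other connections
} catch (SQLException e) {
    conn.rollback(); // undo the partial work on failure
    throw e;
}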
Use COUNT(id) instead of COUNT(*), where id is the primary key of your table, because for counting rows COUNT(id) is preferable to COUNT(*).
Also, you should describe the exact problem so that we can solve it more easily; please give some detail about where you are stuck.
I'm currently using the following query to insert into a table only if the record does not already exist; presumably this leads to a table scan. It inserts 28000 records in 10 minutes:
INSERT INTO tblExample(column)
(SELECT ? FROM tblExample WHERE column=? HAVING COUNT(*)=0)
If I change the query to the following, I can insert 98000 records in 10 minutes:
INSERT INTO tblExample(column) VALUES (?)
But it will not be checking whether the record already exists.
Could anyone suggest another way of querying such that my insert speed is faster?
One simple (but not recommended) solution could be to just run the insert statement, catch the duplicate-key exception, and log it. This assumes the table has a unique key constraint.
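A rough sketch of that approach (table and column names taken from the question; the logger is an assumption). Recent MySQL drivers throw SQLIntegrityConstraintViolationException for duplicate keys; older ones may only throw a plain SQLException with vendor error code 1062:
String insertSql = "INSERT INTO tblExample(column) VALUES (?)";
try (PreparedStatement ps = conn.prepareStatement(insertSql)) {
    ps.setString(1, value);
    ps.executeUpdate();
} catch (SQLIntegrityConstraintViolationException e) {
    // the row already existed; log it and move on
    logger.warn("Duplicate skipped: " + value, e);
}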
Make sure that you have an index on the column[s] you're checking. In general, have a look at the query execution plan that the database is using - this should tell you where the time is going, and so what to do about it.
For Derby, this is how you get a plan and how to read it.
Derby also has a merge command, which can act as insert-if-not-there. I've not used it myself, so you'd need to test it to see if it's faster for your circumstances.
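An untested sketch of what that might look like through JDBC, assuming Derby 10.11+ and made-up names (SYSIBM.SYSDUMMY1 is Derby's built-in one-row dummy table):
String mergeSql = "MERGE INTO tblExample t USING SYSIBM.SYSDUMMY1 ON t.col = ? " +
    "WHEN NOT MATCHED THEN INSERT (col) VALUES (?)";
try (PreparedStatement ps = conn.prepareStatement(mergeSql)) {
    ps.setString(1, value);
    ps.setString(2, value);
    ps.executeUpdate(); // inserts only when no row with that value exists
}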
preparedStatement.executeUpdate()
Returns the number of rows updated. From my research so far, it's not possible to do an update query that also retrieves the updated rows, but this seems like such a basic feature that I'm clearly missing something. How do I accomplish this?
Per the first comment on the question, this is simply not possible in MySQL. PostgreSQL supports this feature as UPDATE ... RETURNING.
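With the PostgreSQL JDBC driver, that looks roughly like this (table and columns are hypothetical); note it goes through executeQuery, since the statement returns rows:
String sql = "UPDATE accounts SET balance = balance + ? WHERE active RETURNING id, balance";
try (PreparedStatement ps = conn.prepareStatement(sql)) {
    ps.setBigDecimal(1, amount);
    try (ResultSet rs = ps.executeQuery()) { // RETURNING makes the update produce a result set
        while (rs.next()) {
            System.out.println(rs.getLong("id") + " -> " + rs.getBigDecimal("balance"));
        }
    }
}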
If you use executeQuery instead of executeUpdate, you get a resultset back.
Then, change your stored procedure to be a function, and return the changed rows in a select at the end of the function. AFAIK, you cannot return data from a procedure in MySQL (as opposed to e.g. Microsoft SQL server).
EDIT: The suggestion struck out above is not possible. The JDBC specification does not allow updates in query statements (see the answer to this one: http://bugs.mysql.com/bug.php?id=692).
BUT, if you know the WHERE clause of the rows you are about to update, you can always select them first to get the primary keys, perform the update, and then perform a select on them afterwards. Then you get the changed rows.
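A sketch of that select-update-select sequence (table, columns, and status values are made up; wrap all three steps in one transaction so another session cannot change the rows in between):
List<Long> ids = new ArrayList<>();
try (PreparedStatement ps = conn.prepareStatement("SELECT id FROM orders WHERE status = ?")) {
    ps.setString(1, "PENDING");
    try (ResultSet rs = ps.executeQuery()) {
        while (rs.next()) ids.add(rs.getLong(1)); // remember the primary keys
    }
}
try (PreparedStatement ps = conn.prepareStatement("UPDATE orders SET status = ? WHERE status = ?")) {
    ps.setString(1, "SHIPPED");
    ps.setString(2, "PENDING");
    ps.executeUpdate();
}
// finally, SELECT ... WHERE id IN (...) using the remembered keys to read the changed rows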
When you fire preparedStatement.executeUpdate(), you already have the row identifiers with which you can uniquely identify the rows you want updated; you need to use the same identifiers to query and fetch the updated rows. You cannot accomplish the update and the retrieval in one shot using the JDBC APIs.
I have an existing query in the system, which is a simple select query as follows:
SELECT <COLUMN_X>, <COLUMN_Y>, <COLUMN_Z> FROM TABLE <WHATEVER>
Over time, <WHATEVER> is growing in terms of records. Is there any way to improve the performance here? The developer is using the Statement interface. I believe PreparedStatement won't help here, since the query is executed only once.
Is there anything else that can be done? One of the columns is a primary key and the others are VARCHAR (if that information helps).
Does your query have any predicates? Or are you always returning all of the rows from the table?
If you are always returning all the rows, a covering index on column_x, column_y, column_z would allow Oracle to merely scan the index rather than doing a table scan. The query will still slow down over time but the index should grow more slowly than the table.
If you are returning a subset of rows, there are potentially other indexes that would be more advantageous from a performance perspective.
Are there any optimizations you can do outside of SQL query tuning? If yes, here are some suggestions:
Try putting the table in memory (like the MEMORY storage engine in MySQL) or any other optimization in the DB
Cache the ResultSet in Java, and query again only when the table content changes. If the table only has inserts and no updates or deletes (wishful thinking), you can use SELECT COUNT(*) FROM table: if the count differs from the previous run, fire your original query again and rebuild the cache; otherwise serve the cached copy. A sketch of this follows.
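A minimal version of that idea (Row and runOriginalQuery are hypothetical stand-ins for your own row class and existing query code):
private List<Row> cache;
private int cachedCount = -1;

List<Row> fetch(Connection conn) throws SQLException {
    try (Statement st = conn.createStatement();
         ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM whatever")) {
        rs.next();
        int count = rs.getInt(1);
        if (count == cachedCount && cache != null) {
            return cache; // table unchanged since last time; serve the cached copy
        }
        cachedCount = count;
    }
    cache = runOriginalQuery(conn); // content changed: re-run the full SELECT
    return cache;
}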
I am dynamically adding a column to a table in the DB through code, using an ALTER TABLE query.
But I am facing a problem when I try to insert values into that column: it throws an exception saying the column does not exist.
And when I clean and rebuild my project through NetBeans, it works fine.
I am using Java, with MySQL as the database.
Does anybody know the solution to this problem?
Following is my ALTER TABLE code:
String alterTableQuery ="alter table `test` add `abc` varchar(50) NOT NULL default ''";
stmt = conn.prepareStatement(alterTableQuery);
boolean val = stmt.execute();
And I am trying to insert data using the following code:
String sqlQuery = "insert into `test` (`id`,`abc`) values (?)" ;
stmt = conn.prepareStatement(sqlQuery);
boolean val = stmt.execute();
You might also rethink your design. In general it is a poor practice for the user interface to add columns to tables. Perhaps you need a more normalized design. Database structural changes should not come from the user. You could create a real mess if different users were making changes at the same time. Additionally users should not have the security rights to add columns. This is a major risk for your system.
I don't know about Java, but in .NET, after performing a change on a table you need to call dataAdapter.AcceptChanges(), which essentially commits the change to the table.
In your code, do you need to make a similar call after you have added the column to the table, for the insert to be able to work?
This may be because Data Definition Language (DDL) statements are often executed outside of transactions. Perhaps a commit/rollback, or even a reconnect, would sort the problem. Just a guess.
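If auto-commit is off, something along these lines might help (a guess, like the rest of this answer):
stmt = conn.prepareStatement(alterTableQuery);
stmt.execute();
conn.commit(); // settle the DDL before the INSERT that uses the new column runs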
There is a UNIQUE database constraint on an index which doesn't allow more than one record having identical columns.
There is a piece of code, managed by Hibernate (v2.1.8), doing two DAO
getHibernateTemplate().save( theObject )
calls, which result in two records being entered into the table mentioned above.
If this code is executed without transactions, it results in an INSERT, an UPDATE, then another INSERT and another UPDATE, and works fine. Apparently, the sequence is to insert the record containing a DB NULL first, and then update it with the proper data.
If this code is executed under Spring (v2.0.5), wrapped in a single Spring transaction, it results in two INSERTs, followed by an immediate exception due to the UNIQUE constraint mentioned above.
This problem only manifests itself on MS SQL due to its incompatibility with ANSI SQL. It works fine on MySQL and Oracle. Unfortunately, our solution is cross-platform and must support all databases.
Having this stack of technologies, what would be your preferred workaround for given problem?
You could try flushing the Hibernate session in between the two saves. This may force Hibernate to perform the first update before the second insert.
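With the DAO from the question, that would be something like this (firstObject and secondObject stand in for the two saved objects):
getHibernateTemplate().save(firstObject);
getHibernateTemplate().flush(); // push the first INSERT/UPDATE to the database now
getHibernateTemplate().save(secondObject);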
Also, when you say that Hibernate is inserting NULL with the insert, do you mean every column is NULL, or just the ID column?
I have no experience in Hibernate, so I don't know if you are free to change the DB at your will or if Hibernate requires a specific DB structure you cannot change.
If you can make changes, then you can use this workaround in MSSQL to emulate the ANSI behaviour:
drop the unique index/constraint
define a calc field like this:
ALTER TABLE MyTable ADD MyCalcField AS
    CASE WHEN MyUniqueField IS NULL
         THEN CAST(MyPrimaryKey AS MyUniqueFieldType)
         ELSE MyUniqueField
    END
add the unique constraint on this new field you created.
Naturally this applies if MyUniqueField is not the primary key! :)
You can find more details in this article at databasejournal.com