I've run into a problem iterating over a ResultSet generated from my MySQL database. The query should return at most one row per table (I'm looping through several tables, searching by employee number). I've entered data into some of the tables, but my test output says the ResultSet contains 0 rows, and the loop body never executes: the output line it's supposed to print never appears. The retrieval was in a while loop before I realised it would return at most one row, at which point I swapped the while(rs.next()) for an if(rs.first()). Still no luck. Any suggestions?
My code looks like this:
try
{
    rsTablesList = stmt.executeQuery("show tables;");
    while (rsTablesList.next())
    {
        String tableName = rsTablesList.getString(1);
        // checking if this table is a non-event table; the iteration is skipped in that case
        if (tableName.equalsIgnoreCase("emp"))
        {
            System.out.println("NOT IN EMP");
            continue;
        }
        System.out.println("i'm in " + tableName); // tells us which table we're in
        int checkEmpno = Integer.parseInt(empNoLbl.getText()); // search key
        Statement s = con.createStatement();
        query = "select 'eventname','lastrenewaldate', 'expdate' from " + tableName + " where 'empno'=" + checkEmpno + ";";
        System.out.println("query is \n\t" + query);
        rsEventDetails = s.executeQuery(query);
        System.out.println("query executed\n");
        // next two lines report the number of rows
        rsEventDetails.last();
        System.out.println("no. of rows is " + rsEventDetails.getRow() + "\n\n");
        if (rsEventDetails.first())
        {
            System.out.println("inside the if");
            // the row would be added here
            System.out.println("i will add the row now");
            // cdTableModel.addRow(new Object[] {evtname, lastRenewalDate, expiryDate});
        }
    }
}
catch (SQLException e)
{
    e.printStackTrace();
}
My output looks like this:
I'm in crm
query is
select 'eventname','lastrenewaldate', 'expdate' from crm where 'empno'=17;
query executed
no. of rows is 0
I'm in dgr
query is
select 'eventname','lastrenewaldate', 'expdate' from dgr where 'empno'=17;
query executed
no. of rows is 0
NOT IN EMP
I'm in eng_prof
query is
select 'eventname','lastrenewaldate', 'expdate' from eng_prof where 'empno'=17;
query executed
no. of rows is 0
I'm in frtol
query is
select 'eventname','lastrenewaldate', 'expdate' from frtol where 'empno'=17;
query executed
no. of rows is 0
(and so on, up to 17 tables.)
The '17' in the query is the empno that I've pulled from the user.
The thing is that I've already entered data in the first two tables, crm and dgr. The same query works in the command-line interface; this morning, the program returned data for the one table that had data in it (crm). From the next run onwards, nothing.
Context: I'm working on a school project to create some software for my dad's office; it'll help them organise the training and testing schedules for the employees (a little like Google Calendar, I guess). I'm using NetBeans and MySQL on Linux Mint. There are about 17 tables in the database. The user selects an employee name and the program searches for all entries in the database that correspond to an 'event' (my generic name for a test/training/other required event) and puts them into a JTable.
The single quotes around the column names and the table name in the query were the problem: in MySQL, single quotes create string literals rather than identifiers, so the query was comparing the literal string 'empno' to 17. On changing them to backticks, retrieval works fine and the data comes in as expected.
Thank you, @juergend (especially for the nice explanation) and @nailgun!
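For reference, a minimal sketch of the corrected query construction (the table and column names mirror the ones in the question; the class and method names are made up for illustration). Backticks quote identifiers, so `empno` refers to the column rather than a string literal:

```java
public class EventQuery {
    // Build the per-table query with backticks (identifier quoting),
    // not single quotes (which would create string literals).
    static String buildEventQuery(String tableName, int empNo) {
        return "select `eventname`, `lastrenewaldate`, `expdate` from `"
                + tableName + "` where `empno` = " + empNo;
    }

    public static void main(String[] args) {
        System.out.println(buildEventQuery("crm", 17));
        // select `eventname`, `lastrenewaldate`, `expdate` from `crm` where `empno` = 17
    }
}
```

In real code the employee number should be bound as a PreparedStatement parameter rather than concatenated into the string.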
Related
I am working on an app that uses JDBC to update stock and place orders.
I store the products, and I want to update a product if the quantity requested is less than the stored quantity, and delete the product from the database if the requested quantity equals the current stock in the DB.
I am using two different statements, but I would like to use just one of them. For example, when I add an order to the DB, the system asks for a product name and a quantity, and that quantity gets subtracted from the product's total quantity in the DB. The pseudocode would be
IF product quantity - user quantity =0 THEN DELETE product FROM database
ELSE UPDATE product quantity TO product quantity-user quantity ON THE database
product quantity=quantity of the product in the database
user quantity=quantity requested by the user
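The decision rule in the pseudocode above can be sketched as a plain helper (a hypothetical method, independent of any JDBC code), which makes the three outcomes explicit:

```java
public class StockRule {
    /**
     * Applies an order to the current stock.
     * Returns the remaining quantity (0 means the product row should be
     * deleted or hidden), or -1 if there is not enough stock for the order.
     */
    static int applyOrder(int productQuantity, int userQuantity) {
        if (userQuantity > productQuantity) {
            return -1; // cannot fill the order
        }
        return productQuantity - userQuantity;
    }

    public static void main(String[] args) {
        System.out.println(applyOrder(5, 5)); // 0  -> DELETE (or hide) the product
        System.out.println(applyOrder(5, 3)); // 2  -> UPDATE the quantity
        System.out.println(applyOrder(5, 9)); // -1 -> reject the order
    }
}
```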
The Prepared Statements that I have for now are these two
UPDATE products SET quantity=quantity-? WHERE product_name=?
DELETE FROM products WHERE product_name=?
I would like to merge them into one if possible.
In a production system you would do this sort of thing.
For an order, as you said, do this.
UPDATE products SET quantity=quantity-? WHERE product_name=?
Then, in an overnight or weekly cleanup do this to get rid of rows with no quantity left.
DELETE FROM products WHERE quantity = 0
When you want to know what products are actually available, you do
SELECT product_name, quantity FROM products WHERE quantity > 0
The concept here: rows with zero quantity are "invisible" even if they aren't deleted.
If this were my system, I would not DELETE rows. For one thing, what happens when you get more products in stock?
One way is to loosen security by setting the MySQL Configuration Property allowMultiQueries to true in the connection URL.
Then you can execute two SQL statements together:
String sql = "UPDATE products" +
             " SET quantity = quantity - ?" +
             " WHERE product_name = ?" +
             " AND quantity >= ?" +
             ";" +
             "DELETE FROM products" +
             " WHERE product_name = ?" +
             " AND quantity = 0";
try (PreparedStatement stmt = conn.prepareStatement(sql)) {
    stmt.setInt(1, userQuantity);
    stmt.setString(2, productName);
    stmt.setInt(3, userQuantity);
    stmt.setString(4, productName);
    stmt.execute();
    int updateCount = stmt.getUpdateCount();
    if (updateCount == 0)
        throw new IllegalStateException("Product not available: " + productName);
    // if you need to know whether the product sold out, advance to the DELETE's result
    stmt.getMoreResults();
    int deleteCount = stmt.getUpdateCount();
    boolean soldOut = (deleteCount != 0);
}
I want to copy a table (10 million records) from originDB (SQLite3) into another database called targetDB.
My method works as follows:
read data from the origin table into a ResultSet, then generate a corresponding insert statement for every record and commit the batch once the record count reaches 10,000. The code is as follows:
public void transfer() throws IOException, SQLException {
    targetDBOperate.setCommit(false); // batch insert
    int count = 0;
    String[] cols = parser(propertyPath); // get the fields of the data table
    String query = "select * from " + originTable;
    ResultSet rs = originDBOperate.executeQuery(query); // get the origin table
    String base = "insert into " + targetTable;
    while (rs.next()) {
        count++;
        String insertSql = buildInsertSql(base, rs, cols); // corresponding insert sql
        targetDBOperate.executeSql(insertSql);
        if (count % 10000 == 0) {
            targetDBOperate.commit(); // batch insert
        }
    }
    targetDBOperate.closeConnection();
}
The following picture shows the memory-usage trend (the vertical axis represents memory usage).
As you can see, it keeps growing until the process runs out of memory. Stack Overflow has some relevant questions, such as Out of memory when inserting records in SQLite, FireDac, Delphi, but I haven't solved my problem because we use a different implementation method. My hypothesis is that until the record count reaches 10,000, the corresponding insert statements are cached in memory and are not released when commit executes by default? Any advice would be appreciated.
When moving a large number of rows in SQLite or any other relational database, you should follow some basic principles:
1) set autoCommit to false, i.e. do not commit each insert
2) use batch updates, i.e. do not make a round trip for each row
3) use prepared statements, i.e. do not parse each insert
Putting this together, you get the following code:
cn is the source connection, cn2 is the target connection.
For each inserted row you call addBatch, but only once per batchSize do you call executeBatch, which initiates a round trip.
Do not forget a last executeBatch at the end of the loop and the final commit.
cn2.setAutoCommit(false);
String SEL_STMT = "select id, col1, col2 from tab1";
String INS_STMT = "insert into tab2(id, col1, col2) values(?,?,?)";
int batchSize = 10000;
int i = 0;
PreparedStatement stmt = cn.prepareStatement(SEL_STMT);
PreparedStatement stmtIns = cn2.prepareStatement(INS_STMT);
ResultSet rs = stmt.executeQuery();
while (rs.next())
{
    stmtIns.setLong(1, rs.getLong(1));
    stmtIns.setString(2, rs.getString(2));
    stmtIns.setTimestamp(3, rs.getTimestamp(3));
    stmtIns.addBatch();
    i += 1;
    if (i == batchSize) {
        stmtIns.executeBatch();
        i = 0;
    }
}
rs.close();
stmt.close();
stmtIns.executeBatch(); // flush the last, partial batch
stmtIns.close();
cn2.commit();
A sample test at your size with sqlite-jdbc-3.23.1:
inserted rows: 10000000
total time taken to insert the batch = 46848 ms
I did not observe any memory issues or problems with a large transaction.
You are trying to fetch 10M records in one go with the following code, which will certainly exhaust your memory:
String query = "select * from " + originTable;
ResultSet rs = originDBOperate.executeQuery(query);//get origin table
Use paginated queries to read the data in batches and do batch updates accordingly.
You are not even doing a batch update; you are simply firing 10K queries one after the other with the following code:
String insertSql = buildInsertSql(base, rs, cols); // corresponding insert sql
targetDBOperate.executeSql(insertSql);
if (count % 10000 == 0) {
    targetDBOperate.commit(); // this simply means you are committing after every 10K records
}
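The "paginated queries" advice can be sketched like this (the table name, total row count, and page size below are placeholders): one LIMIT/OFFSET query per page, so only one page of rows is materialized at a time:

```java
import java.util.ArrayList;
import java.util.List;

public class Pagination {
    // Builds one LIMIT/OFFSET query per page so the reader never
    // holds more than pageSize rows at a time.
    static List<String> pagedQueries(String table, long totalRows, int pageSize) {
        List<String> queries = new ArrayList<>();
        for (long offset = 0; offset < totalRows; offset += pageSize) {
            queries.add("select * from " + table
                    + " limit " + pageSize + " offset " + offset);
        }
        return queries;
    }

    public static void main(String[] args) {
        for (String q : pagedQueries("origin", 25000, 10000)) {
            System.out.println(q);
        }
    }
}
```

Note that large OFFSET values get slower as they grow, because the database must skip all the preceding rows; keyset pagination on an indexed column (WHERE id > ? ORDER BY id LIMIT ?) scales better for tables this size.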
I'm trying to insert a large number of records into my inverted index, which is built as a table in an MS Access database. This is the table design (ID, term, doc, sent form a compound primary key):
and this is the code:
Connection conn = DriverManager.getConnection("jdbc:ucanaccess://myDB.accdb");
Statement s = conn.createStatement();
s.execute("DELETE FROM invertedIndex");
for (String o : POSoutputs) // while (Tokenizer.hasMoreTokens())
{
    String word = o;
    s.execute("insert into invertedIndex (term,doc,sent) values ('" + o + "','" + listOfFiles[i].getAbsolutePath() + "','" + fileText + "')");
    conn.commit(); // I commit after each insert to empty the stack, because I will insert thousands of records while scanning hundreds of documents.
}
This is the Error:
java.lang.StackOverflowError
at java.nio.DirectByteBuffer.put(DirectByteBuffer.java:297)
at java.nio.ByteBuffer.put(ByteBuffer.java:832)
at java.nio.DirectByteBuffer.put(DirectByteBuffer.java:379)
at java.nio.DirectByteBuffer.put(DirectByteBuffer.java:342)
at sun.nio.ch.IOUtil.write(IOUtil.java:60)
at sun.nio.ch.FileChannelImpl.writeInternal(FileChannelImpl.java:778)
at sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:761)
at com.healthmarketscience.jackcess.impl.PageChannel.allocateNewPage(PageChannel.java:350)
at com.healthmarketscience.jackcess.impl.TempPageHolder.setNewPage(TempPageHolder.java:115)
at com.healthmarketscience.jackcess.impl.UsageMap$ReferenceHandler.createNewUsageMapPage(UsageMap.java:763)
at com.healthmarketscience.jackcess.impl.UsageMap$ReferenceHandler.addOrRemovePageNumber(UsageMap.java:747)
at com.healthmarketscience.jackcess.impl.UsageMap.removePageNumber(UsageMap.java:337)
at com.healthmarketscience.jackcess.impl.PageChannel.allocateNewPage(PageChannel.java:354)
at com.healthmarketscience.jackcess.impl.TempPageHolder.setNewPage(TempPageHolder.java:115)
at com.healthmarketscience.jackcess.impl.UsageMap$ReferenceHandler.createNewUsageMapPage(UsageMap.java:763)
at com.healthmarketscience.jackcess.impl.UsageMap$ReferenceHandler.addOrRemovePageNumber(UsageMap.java:747)
at com.healthmarketscience.jackcess.impl.UsageMap.removePageNumber(UsageMap.java:337)
at com.healthmarketscience.jackcess.impl.PageChannel.allocateNewPage(PageChannel.java:354)
at com.healthmarketscience.jackcess.impl.TempPageHolder.setNewPage(TempPageHolder.java:115)
........ ERROR RECORDS ARE DUPLICATED .. etc
What's the problem?
The error is clear: ERROR RECORDS ARE DUPLICATED.
So you have a unique index on one or more fields. Either remove the index or remove the records with duplicate field values.
The database was corrupted; I created another one with the same tables.
I think the cause is the answer applied in this question: How to restart counting from 1 after erasing table in MS Access?, which may have damaged the structural indices of the database.
A similar issue is described in: deleting row with BigIndex UsageMap exception
I have a table without unique index tuples. Let's say the table has these records:
A->B->Status
A->C->Status
A->B->Status
A->B->Status
A->C->Status
I fetch the first and second records and process them. Afterwards, I want to update only those two records.
How can I make this possible at the Java application layer?
Since there are no unique index tuples, I cannot use an UPDATE statement with a proper WHERE clause.
Using
Spring 3.XX
Oracle 11g
I think you may try to use the ROWID pseudocolumn.
For each row in the database, the ROWID pseudocolumn returns the address of the row. Oracle Database rowid values contain information necessary to locate a row:
The data object number of the object
The data block in the datafile in which the row resides
The position of the row in the data block (first row is 0)
The datafile in which the row resides (first file is 1). The file
number is relative to the tablespace.
Usually, a rowid value uniquely identifies a row in the database. However, rows in different tables that are stored together in the same cluster can have the same rowid.
SELECT ROWID, last_name
FROM employees
WHERE department_id = 20;
The rowid for the row stays the same, even when the row migrates.
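As a sketch of the two-step flow (the table and column names here are made up for illustration): select the rows together with their ROWID, then target each processed row individually in the UPDATE:

```java
public class RowidQueries {
    // Hypothetical statements: first fetch the matching rows along with
    // their ROWID, then update one specific physical row by that ROWID.
    static final String SELECT_SQL =
            "select ROWID, status from my_table where col_a = ? and col_b = ?";
    static final String UPDATE_SQL =
            "update my_table set status = ? where ROWID = ?";

    public static void main(String[] args) {
        System.out.println(SELECT_SQL);
        System.out.println(UPDATE_SQL);
    }
}
```

In JDBC you would bind the ROWID value read from the first query (e.g. via getString or getRowId) as the parameter of the second.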
You can solve this issue by using updatable result sets. This feature relies on ROWID to perform all modifications (delete/update/insert).
This is an excerpt highlighting the feature itself:
String sqlString = "SELECT EmployeeID, Name, Office " +
                   " FROM employees WHERE EmployeeID=1001";
try {
    stmt = con.createStatement(
        ResultSet.TYPE_SCROLL_SENSITIVE,
        ResultSet.CONCUR_UPDATABLE);
    ResultSet rs = stmt.executeQuery(sqlString);
    // Check that the result set is an updatable result set
    int concurrency = rs.getConcurrency();
    if (concurrency == ResultSet.CONCUR_UPDATABLE) {
        rs.first();
        rs.updateString("Office", "HQ222");
        rs.updateRow();
    } else {
        System.out.println("ResultSet is not an updatable result set.");
    }
    rs.close();
} catch (SQLException ex) {
    System.err.println("SQLException: " + ex.getMessage());
}
Here is a complete example.
I am a beginner in Java desktop applications, and I was trying to insert some data into my table when I got the following reply: "column count doesn't match value count at row 1". Here is my query:
String sql = "insert into cataloguetb(title_statement,aurthurs_name,edition_statement,book_title,publisher_name"
+ "place_of_publication,year_of_publication,isbn_no,index_no,pagenRomannuem,pagneArabi,illuss,size_of_book"
+ "otherAurthurs,addEntries,length_in_cm,accessionNO,call_No1,call_No2,call_No,call_No4)values ('"+(titleStatement)+"','"+(aurthursName)+"'"
+ "'"+(editionStatement)+"','"+(bookTitle)+"','"+(publisherName)+"','"+(placeOfPublication)+"','"+(yearOfPublication)+"'"
+ "'"+(isbnNo)+"','"+(indexNo)+"','"+(pageRoman)+"','"+(pageArabic)+"','"+(illustration)+"','"+(size)+"','"+(otherAuthurs)+"'"
+ "'"+(addedEntries)+"','"+(lengthOfBook)+"','"+(accessionNo)+"','"+(calNo1)+"','"+(calNo2)+"','"+(calNo3)+"','"+(calNo4)+"')";
I have tried many solutions, including some from Stack Overflow, but none seem to work. Thanks for your help.
You need trailing commas after the last field in each concatenated line. Try this:
String sql = "insert into cataloguetb(title_statement,aurthurs_name,edition_statement,book_title,publisher_name,"
+ "place_of_publication,year_of_publication,isbn_no,index_no,pagenRomannuem,pagneArabi,illuss,size_of_book,"
+ "otherAurthurs,addEntries,length_in_cm,accessionNO,call_No1,call_No2,call_No,call_No4) values ('"+(titleStatement)+ "','"+(aurthursName)+"',"
+ "'"+(editionStatement)+"','"+(bookTitle)+"','"+(publisherName)+"','"+(placeOfPublication)+"','"+(yearOfPublication)+"',"
+ "'"+(isbnNo)+"','"+(indexNo)+"','"+(pageRoman)+"','"+(pageArabic)+"','"+(illustration)+"','"+(size)+"','"+(otherAuthurs)+"',"
+ "'"+(addedEntries)+"','"+(lengthOfBook)+"','"+(accessionNo)+"','"+(calNo1)+"','"+(calNo2)+"','"+(calNo3)+"','"+(calNo4)+"')";
This is because the number of columns you list and the number of values you supply are not the same.
The output of your code will be something like this:
insert into cataloguetb(title_statement,aurthurs_name,edition_statement,book_title,publisher_nameplace_of_publication,year_of_publication,isbn_no,index_no,pagenRomannuem,pagneArabi,illuss,size_of_bookotherAurthurs,addEntries,length_in_cm,accessionNO,call_No1,call_No2,call_No,call_No4)values (.......)
Because of the missing commas, publisher_name fuses with place_of_publication and size_of_book fuses with otherAurthurs, so you are trying to insert into 19 columns while giving 21 values. Hence the error.
This is not the proper way to do an insertion anyway.
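A quick way to see the mismatch is to count the comma-separated column names in the string the code actually produces (a throwaway helper, not part of the original code):

```java
public class ColumnCount {
    // Counts the comma-separated items in a list such as "a,b,c".
    static int countItems(String list) {
        return list.split(",").length;
    }

    public static void main(String[] args) {
        // With the missing commas, two pairs of names fuse together:
        String fused = "title_statement,aurthurs_name,edition_statement,book_title,"
                + "publisher_nameplace_of_publication,year_of_publication,isbn_no,index_no,"
                + "pagenRomannuem,pagneArabi,illuss,size_of_bookotherAurthurs,addEntries,"
                + "length_in_cm,accessionNO,call_No1,call_No2,call_No,call_No4";
        System.out.println(countItems(fused)); // 19 column names for 21 values
    }
}
```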
It would be better to use a PreparedStatement, like this:
PreparedStatement pt = con.prepareStatement("insert into table (x,y) values (?,?)");
pt.setString(1, value_for_x);
pt.setString(2, value_for_y);
pt.executeUpdate();