I've recently switched from MySQL to PostgreSQL for the back end of a project and discovered that some of my database proxy methods needed reviewing. To insert linked objects I use a transaction to make sure everything is stored, using JDBC methods such as setAutoCommit(false) and commit(). I've written a utility method that inserts a record into a table and returns the generated key. Basically I've followed technique 2 as described here:
http://www.selikoff.net/2008/09/03/database-key-generation-in-java-applications/
This has worked since the start of the project, but after migrating from MySQL to PostgreSQL getGeneratedKeys returns all the columns of the newly inserted record (see console output below).
Code:
final ResultSet keys = ps.getGeneratedKeys();
final ResultSetMetaData metaData = keys.getMetaData();
for (int j = 0; j < metaData.getColumnCount(); j++) {
System.out.println("Col name: "+metaData.getColumnName(j+1));
}
Output:
Col name: pathstart
Col name: fk_id_c
Col name: xpathid
Col name: firstnodeisroot
Database signature for the table (auto generated SQL from pgAdmin III):
CREATE TABLE configuration.configuration_xpath
(
pathstart integer NOT NULL,
fk_id_c integer NOT NULL,
xpathid integer NOT NULL DEFAULT nextval('configuration.configuration_xpath_id_seq'::regclass),
firstnodeisroot boolean NOT NULL DEFAULT false,
CONSTRAINT configuration_xpath_pkey PRIMARY KEY (xpathid),
CONSTRAINT configuration_fk FOREIGN KEY (fk_id_c)
REFERENCES configuration.configuration (id_c) MATCH SIMPLE
ON UPDATE CASCADE ON DELETE CASCADE
)
Database signature for the sequence behind the PK:
CREATE SEQUENCE configuration.configuration_xpath_id_seq
INCREMENT 1
MINVALUE 1
MAXVALUE 9223372036854775807
START 242
CACHE 1
OWNED BY configuration.configuration_xpath.xpathid;
So the question is, why is getGeneratedKeys returning all the columns instead of just the generated key? I've searched and found someone else with a similar problem here:
http://www.postgresql.org/message-id/004801cb7518$cbc632e0$635298a0$#pravdin#disi.unitn.it
But their question has not been answered, only a suggested workaround is offered.
Most drivers support getGeneratedKeys() by appending a RETURNING clause to the end of the query, listing the auto-generated columns. PostgreSQL returns all fields because the driver appends RETURNING *, which simply returns all columns. This means that to return the generated key the driver doesn't have to query the system tables to determine which column(s) to return, which saves network round trips (and query time).
This is implicitly allowed by the JDBC specification, because it says:
Note: If the columns which represent the auto-generated keys were not specified, the JDBC driver implementation will determine the columns which best represent the auto-generated keys.
Reading between the lines, this allows a driver to say 'I don't know, or it is too much work, so all columns best represent the auto-generated keys'.
An additional reason might be that it is very hard to determine which columns are auto-generated and which aren't (I am not sure if that is true for PostgreSQL). For example in Jaybird (the JDBC driver for Firebird that I maintain) we also return all columns because in Firebird it is impossible to determine which columns are auto-generated (but we do need to query the system tables for the column names because Firebird 3 and earlier do not have RETURNING *).
Therefore it is always advisable to query the generated-keys ResultSet explicitly by column name, not by position.
Other solutions are to explicitly specify the column names or column positions you want returned, using the alternate methods that accept a String[] or int[] (although I am not 100% sure how the PostgreSQL driver handles those).
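The RETURNING behaviour described above can be sketched in plain SQL, using the table from the question (the exact statement the driver generates may differ):

```sql
-- What the PostgreSQL driver effectively executes for RETURN_GENERATED_KEYS:
INSERT INTO configuration.configuration_xpath (pathstart, fk_id_c)
VALUES (1, 10)
RETURNING *;          -- every column of the new row comes back

-- Explicitly restricting the result to the generated key:
INSERT INTO configuration.configuration_xpath (pathstart, fk_id_c)
VALUES (1, 10)
RETURNING xpathid;    -- only the key column comes back
```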
BTW: Oracle is (was?) even worse: by default it returns the ROW_ID of the row, and you need to use a separate query to get the (generated) values from that row.
UPDATE - The accepted answer (by Mark) correctly explains what the problem is. My solution also works, but that's only because I added the PK column first when recreating the tables. Either way, all columns are returned by getGeneratedKeys().
After some research I've managed to find a possible cause of the problem. As I said before, I changed from MySQL to PostgreSQL during the development of a software project. For this migration I took an SQL dump which I loaded into PostgreSQL. Aside from the migrated tables, I also created some new ones (using the GUI wizards in pgAdmin III). After a close investigation of the differences between two tables (one imported, one created), I established two things:
CREATE TABLE statements from the MySQL dump convert PKs to BIGINT NOT NULL, not to SERIAL. This meant auto-generated PKs no longer worked properly, though I had fixed that before asking this question.
The tables that I 'fixed' by adding a new sequence and linking it up work perfectly fine, but the SQL generation code (auto-generated by pgAdmin III, as shown in the original question) is different from that of a table created natively in PostgreSQL.
Note that my fixed tables work(ed) perfectly: I can insert records, update records and perform joins... basically do anything. The primary keys get auto-generated and the sequence gets updated. However, the JDBC driver (postgresql-9.2-1003.jdbc4.jar to be precise) fails to return my generated keys (though the tables are fully functional).
To illustrate the difference between a migrated and created table, here is an example of generation code for a table that I added after the migration:
CREATE TABLE configuration.configuration_xpathitem
(
xpathitemid serial NOT NULL,
xpathid integer,
fk_id_c integer,
itemname text,
index integer,
CONSTRAINT pk_configuration_xpathitem PRIMARY KEY (xpathitemid),
CONSTRAINT fk_configuration_xpathitem_configuration FOREIGN KEY (fk_id_c)
REFERENCES configuration.configuration (id_c) MATCH SIMPLE
ON UPDATE NO ACTION ON DELETE NO ACTION,
CONSTRAINT fk_configuration_xpathitem_configuration_xpath FOREIGN KEY (xpathid)
REFERENCES configuration.configuration_xpath (xpathid) MATCH SIMPLE
ON UPDATE NO ACTION ON DELETE NO ACTION
)
You can clearly see here that my PK has the serial keyword, where it is integer not null default ... for the migrated (and fixed) table.
Because of this, I figured the JDBC driver for PostgreSQL was unable to find the PK. I had already read the specification that @Mark highlighted in his reply, which led me to think that this was why the driver returns all columns: it could not identify the PK column, presumably because it looks for the serial keyword.
So to solve the problem, I dumped my data, deleted my faulty tables and added them again, this time from scratch rather than with the SQL statements from the MySQL dump, and reloaded my data. This has solved the problem for me. I hope this can help anyone that is also stuck.
Related
I'm working on a Java project which needs to be able to alter all the primary keys in a table - and in most cases - some other values in the row as well.
The problem I have is that, if I update a row by selecting by its old primary key (SET pk=new_pk WHERE pk=old_pk) I get duplicate rows (since the old PK value may be equal to another row's new PK value and both rows are then updated).
I figured that in Oracle and some other DBs I might be able to do this with ROWNUM or something similar, but the system should work with most DB systems and right now we can't get this to work for MySQL.
I should add that I don't have access to change the schema of the DB - so, I can't add a column.
What I have tried:
- Updating ResultSets directly with RS.updateRow() - this seems to work, but is very slow.
- Hashing the PKs in the table, storing the hash in code and selecting on the hashed PK. The hash acts as a signature: a hashed PK indicates that the row has been read but not yet updated, so I can skip such rows. The issue with this seems to have been hash collisions, as I was getting duplicate PKs.
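As a side note on the second approach: collisions are a real risk with any non-injective hash, even on short keys. A minimal illustration, using Java's String.hashCode purely for demonstration (not the hash the question used):

```java
public class HashCollisionDemo {
    public static void main(String[] args) {
        // "Aa" and "BB" are a classic String.hashCode() collision:
        // both hash to 2112, so a hash is not a safe stand-in for a key.
        System.out.println("Aa".hashCode());                    // 2112
        System.out.println("BB".hashCode());                    // 2112
        System.out.println("Aa".hashCode() == "BB".hashCode()); // true
    }
}
```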
PS:
I realise this sounds like either a terrible idea, or terrible design, but we really have no choice. The system I'm working on aims to anonymize private data and that may entail changing PKs in some tables. Don't fear, we do account for FK references.
In this case you can use a simple update with a delta equal to the current maximum PK of the table.
First select the delta:

select max(pk) as delta from mytable

and then use it in the update:

update mytable set pk = pk + delta + 1

Before this operation you need to disable constraints. And don't forget that you should also update the foreign keys.
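Why the shift is collision-free can be checked with a few lines (key values are made up for illustration):

```java
import java.util.Arrays;

public class DeltaShiftDemo {
    public static void main(String[] args) {
        int[] pks = {1, 3, 7, 12};                       // existing keys
        int delta = Arrays.stream(pks).max().getAsInt(); // 12
        // Every shifted key is strictly greater than the old maximum,
        // so no UPDATE can land on a key another row still holds.
        for (int pk : pks) {
            int shifted = pk + delta + 1;                // 14, 16, 20, 25
            System.out.println(shifted);
        }
    }
}
```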
Sorry if the question title is misleading or not accurate enough, but I didn't see how to ask it in one sentence.
Let's say we have a table where the PK is a String (numbers from '100,000' to '999,999', comma is for readability only).
Let's also say, the PK is not sequentially used.
Now I want to insert a new row into the table using java.sql and show the PK of the inserted row to the user. Since the PK is not generated by default (e.g. inserting values without the PK doesn't work, and something like generated keys is not available in the given environment), I've seen two different approaches:
In two different statements: first find a possible next key, then try to insert (and expect that another transaction may have used the same key between the two statements). Is it valid to retry until success, or could some SQL trick with transaction settings/locks help here? How can I realize that in java.sql?
For me that's a disappointing solution because of the non-deterministic behaviour (perhaps you could convince me of the contrary), so I searched for another one:
Insert with a nested select statement that looks up the next possible PK. Looking up other answers on generating the PK myself, I came close to a working solution with this statement (casts from string to int left out):
INSERT INTO mytable (pk,othercolumns)
VALUES(
(SELECT MIN(empty_numbers.empty_number)
FROM (SELECT t1.pk + 1 as empty_number
FROM mytable t1
LEFT OUTER JOIN mytable t2
ON t1.pk + 1 = t2.pk
WHERE t2.pk IS NULL
AND t1.pk > 100000)
as empty_numbers),
othervalues);
That works like a charm and has (AFAIK) more predictable and stable behaviour than my first approach, but: how can I possibly retrieve the generated PK from that statement? I've read that there is no way to return the inserted row (or any columns) directly, and most of the Google results I've found point to returning generated keys - even though my key is generated, it's not generated by the DBMS directly, but by my statement.
Note that the DBMS used in development is MSSQL 2008 and the production system is currently DB2 on AS/400 (I don't know which version), so I have to stick close to the SQL standard. I can't change the DB structure in any way (e.g. use generated keys; I'm not sure about stored procedures).
DB2 for i allows generated keys, stored procedures, user defined functions - pretty much all of the things SQL Server can do. The exact implementation is different, but that's what manuals are for :-) Ask your admin what version of IBM i they're running, then hit up the Infocenter for specifics.
The constraining factor is that you can't alter the database design; you are stuck with apparently multiple processes trying to INSERT while backfilling 'holes' in the existing keyspace. That's a very tough nut to crack. Because you can't change the DB design, there's nothing to be done except to allow for and handle PK collisions. There's no SQL trick that'll help - the SQL way is to have the DB generate the PK, not the application.
There are several alternatives to suggest, in the event that some change is allowed. All have issues needing a workaround, but that is unavoidable at this point due to the application design.
Create a UDF that all INSERT clients use to retrieve the next available PK. Use a table of 'available numbers' and delete them as they are issued.
Pre-INSERT all the available numbers. Force clients to do an UPDATE. Make them FETCH...FOR UPDATE where (rest of data = not populated). This will lock the row, avoiding collisions as well as make the PK immediately available.
Leave the DB and the other application programs using this table as-is, but have your INSERT process draw from a block of keys that's been set aside for your use. Keep the next available number in an SQL SEQUENCE or an IBM i data area. This only works if there's a very large hole in the keyspace that's not yet used.
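The third option can be sketched with a standard SEQUENCE (the name and the reserved range are made up for illustration):

```sql
-- Reserve keys from 900000 upward for the INSERT process:
CREATE SEQUENCE my_insert_keys
  START WITH 900000 INCREMENT BY 1 NO CYCLE;

-- Each client fetches its key first, then performs the INSERT with it:
VALUES NEXT VALUE FOR my_insert_keys;
```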
Hey guys, I have this pre-existing SQLite database that I want to use with my Android application. I have created a sample database from scratch for testing purposes where each primary key is named _id and also adding the table android_metadata. This works great.
So now when I've tried to rename the primary keys of the database I already have, and upload it to the application, it doesn't work.
Can anyone tell me what exactly I have to do to my existing database to get it to work with the Android OS? Like what exactly has to be changed in the database for it to work?
And yes, I have looked at most tutorials, but most of them don't go into detail about what you have to change in the pre-existing database.
Here is the database I am using:
http://www.mediafire.com/file/bpbpm19y6kbpjot/database.db
Thanks.
Again, I have found this document to be very useful: http://www.reigndesign.com/blog/using-your-own-sqlite-database-in-android-applications/
I usually set the flag NO_LOCALIZED_COLLATORS when calling SQLiteDatabase.openDatabase(). Then you don't need the android_metadata table. As far as I know the _id column also must be of the type INTEGER PRIMARY KEY AUTOINCREMENT.
You don't actually need the primary ID column to be named _id -- you can just use something like SELECT my_id as _id, another_field ... in your select statement.
And you can either do as Omokoii said above and set the NO_LOCALIZED_COLLATORS flag, or you can create the android_metadata table and insert the value en-US into it.
As for using an existing DB, perhaps this blog post might help: http://www.reigndesign.com/blog/using-your-own-sqlite-database-in-android-applications/
Make sure that your existing SQLite database declares integer primary keys using only "INTEGER" (verbatim)--not "int" or "int16" or any of the other possibilities for declaring an integer that SQLite will recognize.
I ran into a related problem when importing a SQLite database in Adobe AIR (which has a common codebase with Google and Mozilla and other consortium members, IIRC). My PK had been defined as "int" (verbatim). SQLite treats "INTEGER" primary keys differently than it treats "int" or "INT" or "int16" etc. primary keys!
Documented here: http://www.sqlite.org/datatypes.html
An INTEGER primary key is treated by SQLite as a synonym for the RowId. Any other int type is just like a standard column, and with a standard column RowId will not necessarily equal the value in the PK column.
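A small sketch of the difference (table names and values chosen for illustration):

```sql
CREATE TABLE aliased     (id INTEGER PRIMARY KEY);  -- id IS the rowid
CREATE TABLE not_aliased (id int PRIMARY KEY);      -- id is an ordinary column

INSERT INTO aliased     (id) VALUES (10);
INSERT INTO not_aliased (id) VALUES (10);

SELECT rowid, id FROM aliased;      -- 10 | 10 (same value)
SELECT rowid, id FROM not_aliased;  -- 1  | 10 (rowid assigned independently)
```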
However, Adobe and the related subgroup of SQLite consortium members did not implement this (documented) behavior: for them, any integer type used as the PK column is treated as a synonym for the rowid. This can result in erroneous joins when a pre-existing SQLite database is imported into their implementation(s), if that database used anything other than "INTEGER" when declaring its integer-type primary keys.
P.S. I brought this to Adobe's attention and discussed it ad nauseam on the SQLite mailing list and on the Adobe AIR forum. Adobe wrote me that they would document their departure from "standard" SQLite behavior but leave it as is, so I believe Android will also differ from SQLite documented behavior in this regard.
P.P.S. It seems this subgroup of consortium members either did not envision the possibility that a database would be imported (i.e. they assumed the database would always be created anew via their interface) or they simply overlooked this (admittedly wonky) exceptional behavior in SQLite.
P.P.P.S. This table from the database the OP is using, for example, would return spurious results in joins on the [stop_id] column if attached by a SQLite implementation that skipped the "standard" INTEGER-vs-int exceptional behavior and instead treated every int type used as the PK as a synonym for the rowid:
CREATE TABLE mt_stop (
stop_id int NOT NULL PRIMARY KEY ASC,
stop_lat real NOT NULL CHECK (stop_lat >= -90 AND stop_lat <= 90),
stop_lon real NOT NULL CHECK (stop_lon >= -180 AND stop_lon <= 180),
stop_name varchar (120) DEFAULT 'Unknown'
)
Here is my situation and my constraints:
I am using Java 5, JDBC, and DB2 9.5
My database table contains a BIGINT value which represents the primary key. For various reasons that are too complicated to go into here, the way I insert records into the table is by executing an insert against a VIEW; an INSTEAD OF trigger retrieves the NEXT_VAL from a SEQUENCE and performs the INSERT into the target table.
I can change the triggers, but I cannot change the underlying table or the general approach of inserting through the view.
I want to retrieve the sequence value from JDBC as if it were a generated key.
Question: How can I get access to the value pulled from the SEQUENCE. Is there some message I can fire within DB2 to float this sequence value back to the JDBC driver?
Resolution:
I resorted to retrieving the PREVIOUS_VAL from the sequence in a separate JDBC call.
Have you looked at java.sql.Statement.getGeneratedKeys()? I wouldn't hold out much hope since you're doing something so unusual but you never know.
You should be able to do this using the FINAL TABLE syntax:
select * from final table (insert into yourview values (...) );
This will return the data after all triggers have been fired.
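From JDBC that could look something like the following (view and column names are placeholders; a sketch, not tested against DB2):

```java
// Sketch: one round trip returns the trigger-assigned key.
String sql = "SELECT pk FROM FINAL TABLE "
           + "(INSERT INTO yourview (col_a, col_b) VALUES (?, ?))";
try (PreparedStatement ps = conn.prepareStatement(sql)) {
    ps.setInt(1, 1);
    ps.setInt(2, 2);
    try (ResultSet rs = ps.executeQuery()) {
        rs.next();
        long generatedKey = rs.getLong("pk");
    }
}
```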
There is a UNIQUE database constraint on an index which doesn't allow more than one record having identical columns.
There is a piece of code, managed by Hibernate (v2.1.8), doing two DAO
getHibernateTemplate().save( theObject )
calls, which results in two records being entered into the table mentioned above.
If this code is executed without transactions, it results in INSERT, UPDATE, then another INSERT and another UPDATE SQL statement, and works fine. Apparently the sequence is to insert the record containing a DB NULL first, and then update it with the proper data.
If this code is executed under Spring (v2.0.5), wrapped in a single Spring transaction, it results in two INSERTs, followed by an immediate exception due to the UNIQUE constraint mentioned above.
This problem only manifests itself on MS SQL due to its incompatibility with ANSI SQL. It works fine on MySQL and Oracle. Unfortunately, our solution is cross-platform and must support all databases.
Having this stack of technologies, what would be your preferred workaround for given problem?
You could try flushing the hibernate session in between the two saves. This may force Hibernate to perform the first update before the second insert.
Also, when you say that hibernate is inserting NULL with the insert, do you mean every column is NULL, or just the ID column?
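A sketch of the flush suggestion (method names per Spring's HibernateTemplate; object names are placeholders):

```java
getHibernateTemplate().save(first);
// Push the pending INSERT/UPDATE to the DB before the second save,
// restoring the statement order seen outside the transaction.
getHibernateTemplate().flush();
getHibernateTemplate().save(second);
```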
I have no experience in Hibernate, so I don't know if you are free to change the DB at your will or if Hibernate requires a specific DB structure you cannot change.
If you can make changes, then you can use this workaround in MSSQL to emulate the ANSI behaviour:
- drop the unique index/constraint
- define a calculated field like this:

alter table MyTable add MyCalcField as
  case when MyUniqueField is null
       then cast(MyPrimaryKey as MyUniqueFieldType)
       else MyUniqueField end

- add the unique constraint on this new field you created.
Naturally this applies if MyUniqueField is not the primary key! :)
You can find more details in this article at databasejournal.com