I have a table with 4 fields: _id, to, from, hidden
Upon trying to insert the following values:
_id=167 from=1311005879000 to=1311005879000 hidden=0
into my table, I got an SQLiteConstraintException stating that
'column _id is not unique (code 19)'
To find the cause of this problem I queried the size of the table and found it is 0, so I have no idea where this error is coming from.
Maybe I didn't understand the error correctly?
Edit: some code!
try {
    mDatabase.insertOrThrow("groups", null, mContentValues);
} catch (SQLException e) {
    e.printStackTrace();
}
Creation SQL:
CREATE TABLE IF NOT EXISTS groups(_id LONG PRIMARY KEY, hidden INTEGER, from LONG, to LONG)
'column _id is not unique (code 19)'
So you are violating a UNIQUE constraint; that is the reason for the SQLiteConstraintException. Your _id column is most likely the primary key, and primary keys carry an implicit UNIQUE constraint that says no two rows can have the same primary key.
You are trying to insert a duplicate _id that already exists in the DB, or the PK is being set to NULL.
I tried querying the size of the table and found it is 0, so I have no
idea where this is coming from.
I think your size query was broken, because the exception says everything: it cannot be thrown unless there really is a problem somewhere.
Update:
If you are not assigning NULL to the PK and your table really has 0 records, the problem is probably here:
mDatabase.insertOrThrow("groups", null, mContentValues);
You are passing NULL as the nullColumnHack (the second parameter of insert()/insertOrThrow()). SQL does not allow inserting a completely empty row, so if your ContentValues can ever be empty, you should name one column there that can safely be set to NULL.
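One more thing worth checking (an observation on my part, not something the error proves): from and to are reserved words in SQL, and LONG is not one of SQLite's standard type names (it only works through type affinity). A safer sketch of the table definition:

```sql
-- Quote "from" and "to", which are reserved words.
-- INTEGER PRIMARY KEY also makes _id a proper rowid alias in SQLite.
CREATE TABLE IF NOT EXISTS groups(
    _id INTEGER PRIMARY KEY,
    hidden INTEGER,
    "from" INTEGER,
    "to" INTEGER
);
```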
Related
The code below is deliberately simplified and abridged to avoid unnecessary distraction.
I have a MySQL database with this table:
CREATE TABLE `payment` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`status` varchar(45) DEFAULT NULL,
`last_update_date` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (`id`),
UNIQUE KEY `id_UNIQUE` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=5 DEFAULT CHARSET=latin1
I have Spring Transactional method which is called from the controller.
@GetMapping("/reproduce_error")
@Transactional
public ResponseEntity reproduceErrorOnUpdate(Long id) {
    try {
        Payment payment = paymentDao.getPayment(id); // 1st query
        payment.setStatus(payment.getStatus().equals("A") ? "B" : "A");
        paymentDao.findAllByStatus("NOT EXISTING STATUS"); // 2nd query
        payment.setStatus("C");
    } catch (Exception e) {
        return new ResponseEntity<>(e, HttpStatus.INTERNAL_SERVER_ERROR);
    }
    return new ResponseEntity(HttpStatus.OK);
}
When I call this method I always get this exception:
org.springframework.orm.ObjectOptimisticLockingFailureException: Batch update returned unexpected row count from update [0]; actual row count: 0; expected: 1; nested exception is org.hibernate.StaleStateException: Batch update returned unexpected row count from update [0]; actual row count: 0; expected: 1
BUT AT THE SAME TIME (and this is the most curious thing):
if we omit the 2nd query (which returns nothing anyway!), the method works correctly!
Both queries are executed in a SINGLE transaction!
(here is how I call them both, so you can see that there are no other transactions somewhere)
There are no other threads or HTTP calls working with the same DB row (I reproduce this on my local machine). The only difference is that in the 1st case (error) we make a second query against the DB and get an error (even though that second query returns nothing), while in the 2nd case (success) we don't.
I understand that (in the error case) Hibernate flushes the pending changes before the 2nd query is executed, to keep the session consistent. But why do we then get a StaleStateException, given that we process everything in ONE transaction and every change made in it should be visible to itself?
Could someone help me with this weird Hibernate behavior and explain why it happens?
I've prepared the minimal reproduceable app and placed it on bitbucket:
https://bitbucket.org/gegunov/hibernate_issue
I've also raised a ticket at Hibernate project's Jira: https://hibernate.atlassian.net/browse/HHH-13867
The Hibernate team has helped me with it. It turned out that this issue was the result of both a Hibernate bug and a MySQL bug, related to precision loss of the timestamp during the flush.
One of the workarounds is to change the DB column type from TIMESTAMP to e.g. TIMESTAMP(6) or DATETIME(6).
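For the payment table above, that workaround would look something like this (a sketch; it assumes MySQL 5.6.4 or later, which is when fractional-second precision for TIMESTAMP was introduced):

```sql
-- Keep microsecond precision so the value Hibernate flushes
-- round-trips through the column without truncation.
ALTER TABLE `payment`
  MODIFY COLUMN `last_update_date` TIMESTAMP(6) NOT NULL DEFAULT CURRENT_TIMESTAMP(6);
```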
So I'm trying to use SymmetricDS to replicate a Java H2 database to Postgres, using the simple configuration from the zip file. Here is what happened: I followed the getting-started guide, downloaded SymmetricDS, and tried the demo; then I tried my own configuration with some of my tables in the triggers. But:
If I replicate a table without a varchar field, it works perfectly fine from H2.
If a table has a varchar field in it, it crashes while creating the table:
JdbcSqlTemplate - ERROR: length for type varchar cannot exceed 10485760
Position: 161. Failed to execute: CREATE TABLE "asset"(
"db_id" BIGINT NOT NULL DEFAULT nextval('"asset_db_id_seq"'),
"id" BIGINT NOT NULL,
"account_id" BIGINT NOT NULL,
"name" VARCHAR(2147483647) NOT NULL,
"description" VARCHAR(2147483647),
"quantity" BIGINT NOT NULL,
"decimals" SMALLINT NOT NULL,
"initial_quantity" BIGINT NOT NULL,
"height" INTEGER NOT NULL,
"latest" BOOLEAN DEFAULT 'TRUE' NOT NULL,
PRIMARY KEY ("db_id")
)
Indeed, a clear error saying the varchar length exceeds what Postgres allows, but that is how the source database is defined. Is there any way to force every varchar to the TEXT type? Or is there another way around this? Or is this a bug in SymmetricDS that has yet to be solved?
Thanks.
I managed to work around this by creating the tables on the target database manually. Here is what I did before running bin/sym:
1. Generate the DDL for the tables I want to create, using dbexport: bin/dbexport --engine corp-000 --compatible=postgres --no-data table_a table_b > samples/create_asset_and_trade.sql
2. Fix the flaw in the generated file samples/create_asset_and_trade.sql; in my case, the varchar lengths.
3. Run the fixed script against the target database with dbimport: bin/dbimport --engine store-001 samples/create_asset_and_trade.sql
4. Now running bin/sym should be okay; it will detect that the tables already exist and skip the table-creation step.
This is not the ideal way, but it should work for now.
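The varchar fix can also be scripted. A minimal sketch, assuming the exported DDL spells the oversized columns exactly as VARCHAR(2147483647) (as in the error above); in practice you would run the same sed over samples/create_asset_and_trade.sql:

```shell
# Demo on a sample line taken from the failing CREATE TABLE.
echo '"name" VARCHAR(2147483647) NOT NULL,' > create_asset_sample.sql

# Rewrite H2's "unlimited" varchar into a type Postgres accepts.
sed -i 's/VARCHAR(2147483647)/TEXT/g' create_asset_sample.sql

cat create_asset_sample.sql   # prints: "name" TEXT NOT NULL,
```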
My tables:

N
ID | T_ID
1  | 1
2  | 2

T
ID | NAME
1  | T1
2  | T2
Using the tables as follows
com.db.N N_TABLE = N.as("N_TABLE");
com.db.T T_TABLE = T.as("T_TABLE");
com.db.T T2_TABLE = T.as("T2_TABLE"); // Random alias, not used in the query

SelectQuery selectQuery = create.selectQuery();
selectQuery.addFrom(N_TABLE);
selectQuery.addJoin(T_TABLE, JoinType.LEFT_OUTER_JOIN, T_TABLE.ID.eq(N_TABLE.T_ID));
Result<Record> result = selectQuery.fetch();
for (Record record : result) {
    System.out.println(record.get(T2_TABLE.NAME));
}
It gives an ambiguity warning but still returns the value, even though the alias is wrong. I would expect it to return null; I guess it falls back to matching on the field name alone.
Any idea how I should use it so that I get null in the case of a wrong alias?
EDIT
I'll try to provide a more concrete example
My table is as follows
CREATE TABLE user
(
id bigserial NOT NULL,
username character varying(200) NOT NULL,
last_name character varying(100),
created_user_id bigint NOT NULL,
modified_user_id bigint NOT NULL,
CONSTRAINT pk_user PRIMARY KEY (id),
CONSTRAINT user_username_key UNIQUE (username)
)
Data in tables
3;"admin";"admin";3;3
4;"test";"test";4;3
Code
// Input params
Long userId = 4L;
boolean includeModifiedUser = false;

User userTable = USER.as("userTable");
User modifiedUserTable = USER.as("modifiedUserTable");
SelectQuery selectQuery = create.selectQuery();
selectQuery.addFrom(userTable);
// In some cases I want to include the last modifier in the query
if (includeModifiedUser) {
    selectQuery.addJoin(modifiedUserTable, JoinType.LEFT_OUTER_JOIN, modifiedUserTable.ID.eq(userTable.MODIFIED_USER_ID));
}
selectQuery.addConditions(userTable.ID.eq(userId));
Record record = selectQuery.fetchOne();
System.out.println(record.get(userTable.LAST_NAME)); // prints "test"
System.out.println(record.get(modifiedUserTable.LAST_NAME)); // also prints "test"; I would expect null, as modifiedUserTable is currently not joined
Tested on jooq 3.9.3 and 3.9.5
Works as designed
In SQL, there is no such thing as a qualified column name in a result set. Instead, a result set (like any other table) has a set of columns and each column has a name, which is described by jOOQ's Field.getName(). Now, "unfortunately", in top-level SELECT statements, you are allowed to have duplicate column names, in all SQL dialects, and also in jOOQ. This is useful when you join two tables and both tables have e.g. an ID column. That way, you don't have to rename each column just because an ambiguity arises.
If you do have duplicate column names in a table / result, jOOQ will apply the algorithm described in TableLike.field(Field). This will return:

1. A field that is the same as the argument field (by identity comparison), or
2. a field that is equal to the argument field (exact matching fully qualified name), or
3. a field that is equal to the argument field (partially matching qualified name), or
4. a field whose name is equal to the name of the argument field, or
5. null otherwise.

If several fields have the same name, the first one is returned and a warning is logged.
As you can see, the rationale is: if there is no identity match and no full or partial qualified-name match between a field in the result set and the field you are looking up, then the plain field name as in Field.getName() is used to look up the field.
Side note on column match ambiguity
At first, you mentioned that there was an "ambiguous match" warning in the logs, which has since disappeared from the question. That warning is there to indicate that two columns go by the same Field.getName(), but neither of them is an "exact" match as described before. In that case, you will get the first column as the match (for historic reasons), plus that warning, because this might not be what you wanted to do.
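The lookup order above can be sketched in plain Java. This is illustrative only, not jOOQ's actual implementation; FieldRef is a made-up stand-in for jOOQ's Field:

```java
import java.util.List;
import java.util.Objects;

// Illustrative sketch of the field-lookup order described above.
// NOT jOOQ's real code; "FieldRef" is a hypothetical stand-in for Field.
public class FieldRef {
    final String qualifier; // table alias, e.g. "modifiedUserTable" (may be null)
    final String name;      // column name, e.g. "LAST_NAME"

    public FieldRef(String qualifier, String name) {
        this.qualifier = qualifier;
        this.name = name;
    }

    // Models TableLike.field(Field): identity first, then qualified name,
    // then plain name, then null.
    public static FieldRef lookup(List<FieldRef> resultFields, FieldRef wanted) {
        for (FieldRef f : resultFields)                    // 1. identity match
            if (f == wanted) return f;
        for (FieldRef f : resultFields)                    // 2. qualified-name match
            if (Objects.equals(f.qualifier, wanted.qualifier)
                    && f.name.equals(wanted.name)) return f;
        for (FieldRef f : resultFields)                    // 3. plain-name fallback:
            if (f.name.equals(wanted.name)) return f;      //    first match wins
        return null;                                       // 4. nothing matched
    }
}
```

With only userTable joined, looking up modifiedUserTable.LAST_NAME falls through to step 3 and matches userTable's LAST_NAME column by name, which is exactly the behavior observed in the question.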
How do I need to handle a unique constraint on a non-key attribute? I am using an Oracle database.
I have a unique constraint on the username field (emp_id is the primary key, but I have to check emp_username). When I intentionally insert a duplicate username, my program gets stuck instead of displaying any error in the console while debugging.
String sql = "insert into employee(emp_username, emp_password) values ('" + username + "', '" + password + "')";
statement.executeUpdate(sql);
But on the command line, the duplicate insertion shows an error:
ERROR at line 1:
ORA-00001: unique constraint (USMAN.UNIQUE_USERNAME) violated
It seems that the problem is not in your code (your code is fine); it's in the data you are trying to insert. The username column is unique, so you cannot insert the same value into that column more than once.
I have one database that has two tables.
One table (Table A) has 3 columns:
idpms serial NOT NULL,
iduser integer NOT NULL,
iduser2 integer NOT NULL,
And the other (Table B) has 3 columns:
idmessage serial NOT NULL,
idpms integer NOT NULL,
text character varying(255),
I'm using Java to manage my database.
First, Table A is updated with the values of user one and user two.
Then, using the idpms value generated by that insert, Table B is updated with idpms and text.
I have two questions:
How can I retrieve the value of idpms after inserting data in Table A?
In Java, can I somehow extract the idpms and cast it to an int?
Q1: Yes.
Q2: Yes.
serial is just an auto-incrementing integer. There is no special command needed to get the value after an insert; a plain SQL query can do it, because Postgres supports a RETURNING clause. (If you want to get the value before the insert, this article discusses it; I hope you get what you want.)

insert into table_a (iduser, iduser2) values (1, 2)
returning idpms
I think you are looking for a KeyHolder object.
Assuming you are using Spring to perform the database operations:

import org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate;
import org.springframework.jdbc.support.GeneratedKeyHolder;
import org.springframework.jdbc.support.KeyHolder;

// stuff...
KeyHolder generatedKeyHolder = new GeneratedKeyHolder();

// Execute the update
namedJdbcTemplate.update(sqlQuery, params, generatedKeyHolder);

// getKeys() returns a map of ALL the auto-generated fields in the table;
// just pick the desired one by field name
Integer idpms = (Integer) generatedKeyHolder.getKeys().get("idpms");

// use idpms as a parameter for your next query...

Hope that helps.