I want to get surrogate keys for my user table(s) in MySQL. I'm sure concatenating an incrementing value with a timestamp would give me unique keys across multiple tables, but how do I get the incremental value for the class's persistence table before I persist it to the database?
Let Hibernate do it for you using one of its key generators. If you must define your own key scheme, you will have to write your own generator.
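For example, a minimal sketch of a built-in generator (annotation names are from the Jakarta Persistence API; older Hibernate versions use the `javax.persistence` package instead). `IDENTITY` delegates key generation to a MySQL `AUTO_INCREMENT` column, so you never need the value before the insert:

```java
import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.GenerationType;
import jakarta.persistence.Id;

@Entity
public class User {
    // IDENTITY maps to MySQL AUTO_INCREMENT; Hibernate reads the
    // generated value back into this field right after the INSERT.
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;
}
```

After `session.persist(user)` (or `em.persist` plus flush), `user.getId()` holds the database-generated value, which removes the need for a hand-rolled increment-plus-timestamp scheme.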
I am using a PostgreSQL database table which may receive inserts with the ID set manually by the user, or may need an ID generated by Hibernate.
This can result in Hibernate generating an ID that has already been inserted into the database manually. Is there any way Hibernate can check for collisions between generated IDs and existing IDs?
Hibernate cannot check that, because the sequence is allocated by the database. You could either:
assign negative numbers for manually inserted IDs
use UUID instead of sequences
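For the UUID route, a minimal sketch of assigning ids application-side with `java.util.UUID`, so manually chosen ids and generated ids effectively cannot collide (the class and method names here are illustrative, not from any library):

```java
import java.util.UUID;

public class UuidKeys {
    // A random (version 4) UUID; the collision probability is
    // astronomically small, so no database check is needed.
    static String newId() {
        return UUID.randomUUID().toString();
    }

    public static void main(String[] args) {
        String a = newId();
        String b = newId();
        System.out.println(a);          // e.g. 550e8400-e29b-41d4-a716-446655440000
        System.out.println(a.length()); // 36
        System.out.println(a.equals(b)); // false
    }
}
```

In Hibernate you would store this in a varchar/uuid column as the `@Id`; the trade-off versus sequences is a larger, non-ordered key.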
I need to persist a queue of tasks in MySQL. When reading them from DB I have to make sure the order is exactly the same as they have been persisted.
In general I prefer to have the solution DB agnostic (i.e. pure JPA) but adding some flavor of Hibernate and/or MySQL is acceptable as well.
My (probably naive) first version looks like:
em.createNamedQuery("MyQuery", MyTask.class).setFirstResult(0).setMaxResults(count).getResultList();
Where MyQuery doesn't have any "order by" clause, i.e. it looks like:
SELECT t FROM MyTask t
Would such approach guarantee that the incoming results/entities are ordered in the way they have been persisted? What if I enable caching as well?
I was also thinking of adding an extra field to the task entity, a timestamp in milliseconds (UTC since 1970-01-01), and ordering by it in the query, but then I might end up in a situation where two tasks are generated immediately one after the other and get the same timestamp.
Any solutions/ideas are welcome!
EDIT:
I just realised that auto increment (at least in MySQL) throws an error once it reaches its max value, and no more inserts are possible. This means I shouldn't worry about the counter being reset by the DB, and I could explicitly order by an auto-increment column in my query. Of course I would then have another problem to deal with, i.e. what to do if the volume is so high that the largest unsigned integer type in MySQL is not big enough, but that problem is not necessarily coupled with the one I am dealing with right now.
Focusing on a pure JPA solution: since the MyTask entity must have a primary key, I suggest you use a sequence generator for it and sort your query results with an ORDER BY clause on the key.
For example:
@Entity
class MyTask {
    @Id @GeneratedValue(strategy = GenerationType.SEQUENCE)
    private Long id;
}
You can also tie it a little more closely to your database by using @SequenceGenerator to specify a generator defined in the database.
Edit: Did you take a look at the @PrePersist option for setting the timestamp? Maybe you can combine the timestamp field with the sequence-generated id and order by both, in that order, so timestamp conflicts are resolved by id comparison (ids are unique).
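To illustrate the tie-breaking idea in plain Java (the `Task` record below is a stand-in for the real entity; in JPQL you would express the same ordering as `ORDER BY t.timestamp, t.id`):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class TaskOrdering {
    // Stand-in for the persisted entity: a creation timestamp plus a unique id.
    record Task(long timestampMillis, long id) {}

    static void sortInPersistenceOrder(List<Task> tasks) {
        // Order by timestamp first; equal timestamps fall back to the unique id.
        tasks.sort(Comparator.comparingLong(Task::timestampMillis)
                             .thenComparingLong(Task::id));
    }

    public static void main(String[] args) {
        List<Task> tasks = new ArrayList<>(List.of(
                new Task(1000, 2),  // same timestamp as the next task
                new Task(1000, 1),
                new Task(500, 3)));
        sortInPersistenceOrder(tasks);
        System.out.println(tasks); // ids come out in the order 3, 1, 2
    }
}
```

Because the id is assigned by an always-increasing sequence, two tasks persisted in the same millisecond still sort in insertion order.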
Many RDBMSs happen to return rows in insertion order when given no other instruction, but this is not guaranteed. If you don't want to leave it to chance, you have a couple of options.
1) You can generate a reasonably unique ID by using a timestamp and an incrementing fixed-length number,
OR
2) You can just define your table with an autonumbered primary key (which is probably easier).
If the table has a primary key to order by, then by default most RDBMSs will return rows in ascending primary key order... or you can enforce it explicitly in your query.
Neither JPA (with or without a cache) nor the RDBMS guarantees that rows come back in insertion order when you do not use an ORDER BY instruction. To solve this, add an integer primary key to the entity and order by it when fetching the data, as the other answerers mentioned.
My application has quite a large number of tables in the DB. What is an efficient way of generating keys for Memcached? Whenever we update a table's data, we have to see whether there is any cached data related to that table and clear it. I also need to take care of join queries: if either of the tables involved in a cached join is modified, the cached data should be cleared too.
The key could take the form DB_TABLENAME_PrimaryKeyValue, where the primary-key value identifies the row in the table.
In your custom client class, say CustomAppCache, define an inner class, say CacheKeyGen, with properties for the database, table name, and primary-key field. Memcached will then hold the table data as the value, stored under a key of that form.
When setting the cache, store the table's data in memcached under that key.
When reading the cache, match the key pattern of interest and perform the intended operation, e.g. delete the matching entries from the cache and reload them.
This should solve the key generation problem.
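A minimal sketch of such a key builder (the class and method names are illustrative, not from any particular memcached client):

```java
public class CacheKeyGen {
    // Builds keys of the form DB_TABLE_PK. Memcached keys may not contain
    // spaces or control characters, so spaces are replaced defensively.
    static String key(String db, String table, String primaryKeyValue) {
        return (db + "_" + table + "_" + primaryKeyValue).replace(' ', '_');
    }

    public static void main(String[] args) {
        System.out.println(key("appdb", "users", "42")); // appdb_users_42
    }
}
```

Note that memcached itself has no prefix or wildcard delete, so to invalidate everything for a table you generally have to track the keys you issued per table (or per join) and delete them individually when that table changes.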
I have joined a project which has used client side generated random numbers for primary key fields within a mysql database. These primary key fields are auto-increment, but have not been used as such.
I think this was done because the developer did not know how to retrieve the database generated id after insertion.
We now have a sparse array of id values in many tables and a significant number of key collisions on insertion.
Is there some remedial work I can do to allow the database to generate the ids (i.e. start from the last allocated id and find the next available id) and for the following JDBC call to work?
numero = stmt.executeUpdate(query, Statement.RETURN_GENERATED_KEYS);
I think if you search Google for 'generate database primary key from sequence MySQL' it will help.
Using sequences: find the largest primary key in a given table, and create a sequence that increments by 1, starting from that largest value. Create a separate sequence for each table. Note that you are letting the database generate the key, so you will have to stop the program or client from generating it.
Alternatively, you can create a brand new database schema that implements primary-key generation correctly, then migrate the data from the old database to the new one, populating parent tables before child tables and programmatically matching parent to child records based on the old primary keys. However, I think this would be very time consuming for a database of your size.
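On the MySQL side, you can also point the existing counter past the already-allocated ids with `ALTER TABLE users AUTO_INCREMENT = <max id + 1>` (table name illustrative). Once the column is a working AUTO_INCREMENT key, the JDBC call from the question does what you want; here is a hedged sketch of retrieving the generated id (the table and column names are made up, and the Connection must come from your own DataSource):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class InsertWithGeneratedKey {
    // Inserts a row without specifying the id and returns the value
    // that MySQL's AUTO_INCREMENT assigned to it.
    static long insertUser(Connection conn, String name) throws SQLException {
        String sql = "INSERT INTO users (name) VALUES (?)";
        try (PreparedStatement ps =
                 conn.prepareStatement(sql, Statement.RETURN_GENERATED_KEYS)) {
            ps.setString(1, name);
            ps.executeUpdate();
            try (ResultSet keys = ps.getGeneratedKeys()) {
                if (!keys.next()) {
                    throw new SQLException("no generated key returned");
                }
                return keys.getLong(1);
            }
        }
    }
}
```

The essential step the original code was missing is `getGeneratedKeys()`: `executeUpdate` itself only returns the affected row count.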
I have more of a theoretical question:
When does data get inserted into the database? After persist, or after commit is called? I ask because I have a problem with unique keys (manually generated): they get duplicated. I'm thinking this is due to multiple users inserting data into the same table simultaneously.
UPDATE 1:
I generate keys in my application. Example keys: '123456789123', '123456789124', '123456789125'...
The key field is of varchar type, because there are a lot of old keys (which I can't delete or change) like 'VP123456', 'VP15S3456'. Another problem: after being inserted into one database, these keys have to be inserted into another database. And I don't know what DB sequences and atomic objects are.
UPDATE 2:
These keys are used in finance documents, not as database keys. So they must be unique, but they are not used anywhere in the code as object keys.
I would suggest you create a Singleton that takes care of generating your keys. Make sure you can only get a new id once the singleton has been initialized with the latest value from the database.
To safeguard you from incomplete inserts into the two databases, I would suggest you try to use XA transactions. This will allow you to have all-or-nothing inserts and updates: if any of the operations on any of the databases fails, everything will be rolled back. Of course there is a downside to XA transactions: they are quite slow, and not all databases and database drivers support them.
How do you generate these keys? Have you tried using sequences in DB or atomic objects?
I'm asking because it is normal to populate DB concurrently.
EDIT1:
You can write a method that returns new keys based on an atomic counter; that way you know that any time you request a new key you receive a unique one. This strategy will lead to some keys being discarded, but that is a small price to pay, unless it is a requirement that the keys in the database are sequential.
private final AtomicLong counter; // initialized somewhere else, e.g. with the latest key number from the DB

public String getKey() {
    return "VP" + counter.incrementAndGet();
}
And here's some help on DB sequences in Oracle, MySQL, etc.