Generate encoding String according to creation order - java

I need to generate an encoding String for each item I insert into the database. For example:
x00001 for the first item
x00002 for the second item
x00003 for the third item
The way I chose to do this is by counting the rows. Before I insert the third item, I count the rows in the database; there are already 2 rows, so the next encoding ends with 3.
But there is a problem. If I delete the second item, the fourth item will not be x00004 but x00003.
I could add an additional column to the table to store the next encoding, but I don't know if there is a better solution?

Most databases support some sort of auto-incrementing identity field. This field is normally also set up to be unique, so duplicate ids do not occur.
Consult your database documentation to see how it is done in your database and use that - don't reinvent the wheel when there is already a good mechanism in place.
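For example, with MySQL and JDBC you can declare the column as AUTO_INCREMENT and read back the value the database assigned. This is only a sketch, with made-up table and connection details:
import java.sql.*;

public class AutoIncrementDemo {
    public static void main(String[] args) throws SQLException {
        // Connection details are placeholders; adjust them for your environment.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/mydb", "user", "password");
             Statement st = conn.createStatement()) {

            // The database hands out the id; the application never counts rows.
            st.executeUpdate("CREATE TABLE IF NOT EXISTS items ("
                    + "id INT NOT NULL AUTO_INCREMENT PRIMARY KEY, "
                    + "name VARCHAR(100) NOT NULL)");

            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO items (name) VALUES (?)",
                    Statement.RETURN_GENERATED_KEYS)) {
                ps.setString(1, "first item");
                ps.executeUpdate();
                try (ResultSet keys = ps.getGeneratedKeys()) {
                    if (keys.next()) {
                        System.out.println("Assigned id: " + keys.getLong(1));
                    }
                }
            }
        }
    }
}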

What you want is SELECT MAX(id) or SELECT MAX(some_function(id)) inside the transaction.
As suggested in Oded's answer, a lot of databases have their own methods of providing sequences, which are more efficient and, depending on the DBMS, might support non-numeric ids.
You could also break the id down into x and 00001 as separate columns and have both columns make up the primary key; then most databases would be able to provide the sequence for the numeric part.
However, this raises the question of whether your primary key should have a meaning or not; the x prefix suggests that there is some meaning in that part of the key (otherwise you would be content with a plain integer id).
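If the x00001 form is only a presentation format, one option (an illustrative sketch, not part of the answers above) is to keep a plain auto-increment id and derive the encoded string from it in Java:
// Formats a database-assigned id into the x00001 style used in the question.
static String encode(long id) {
    return String.format("x%05d", id);   // encode(1) -> "x00001", encode(42) -> "x00042"
}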

Related

Is there a way to maintain an order column using jpa hibernate?

I am trying to have a table with an "order" column to allow rearranging the order of data. Is this possible using JPA? Maybe something similar to @OrderColumn, but on the table itself.
Basically I want to add a new column called "order" that saves the order of the records. If a record is added, it would automatically get an "order" value. If a record is deleted, the "order" of the remaining records would be automatically updated. Additionally, if possible, I'd like to rearrange the orders by moving one record to a lower "order" so that it pushes the others down.
There is no way to do this out of the box, but you can implement it yourself if you want. Just query for the count of objects right before persisting and set count + 1 as the value for that order column. Make sure that the order column is declared as being unique, i.e. with a unique constraint.
Note that your requirement is pretty exotic and will likely require some kind of table lock or retry mechanism if you have high concurrency.
IMO you should ask whoever gave you this requirement what the goal is that should be achieved. I bet that you will find out you don't need this after all.
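A minimal sketch of the count-then-persist idea above with plain JPA, assuming a hypothetical Item entity; the column is named sort_order here because "order" is a reserved word in SQL:
import javax.persistence.*;

@Entity
@Table(name = "item",
       uniqueConstraints = @UniqueConstraint(columnNames = "sort_order"))
public class Item {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    @Column(name = "sort_order", nullable = false)
    private int sortOrder;

    public void setSortOrder(int sortOrder) { this.sortOrder = sortOrder; }
}

// Elsewhere, with an EntityManager em, inside a transaction:
// count the existing rows and use count + 1 for the new record.
long count = em.createQuery("select count(i) from Item i", Long.class).getSingleResult();
Item item = new Item();
item.setSortOrder((int) (count + 1));
em.persist(item);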

How to redistribute unique integer ids in a MySQL database?

Consider this:
I have a database with 10 rows.
Each row has a unique id (int) paired with some value e.g. name (varchar).
These ids are incremented from 1 to 10.
I delete 2 of the records - 2 and 8.
I add 2 more records 11 and 12.
Questions:
Is there a good way to redistribute the unique ids in this database so they would go from 1 to 10 again?
Would this be considered bad practice?
I ask because, after some use of this database (adding and deleting values), the ids would end up differing significantly from the row positions.
One way to approach this would be to just generate the row numbers you want at the time you actually query, something like this:
SET @rn = 0;

SELECT (@rn := @rn + 1) AS rn, name
FROM yourTable
ORDER BY id;
Generally speaking, you should not be worrying about the auto increment values which MySQL is assigning. MySQL will make sure that the values are unique without your intervention.
If you set the ID column to be primary key and an auto-increment as well, "resetting" is not really necessary because it will keep assigning unique IDs anyways.
If the thing that bothers you is the "gaps" among the existing values, then you might resort to "soft deletion" by employing an is_deleted column with bit/boolean values. The default value would be 0 (or b'0'), of course. In fact, soft deletion is advised if there is some really important data that might be useful later on, especially if it involves payment-related entries that a user could delete either by omission or deliberately.
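For illustration, a minimal soft-delete sketch in JDBC, assuming a hypothetical items table with an is_deleted flag and an open java.sql.Connection conn:
// Mark the row as deleted instead of physically removing it, so existing ids keep their meaning.
try (PreparedStatement ps = conn.prepareStatement(
        "UPDATE items SET is_deleted = 1 WHERE id = ?")) {
    ps.setInt(1, 8);
    ps.executeUpdate();
}
// Normal reads simply filter on the flag:
// SELECT id, name FROM items WHERE is_deleted = 0 ORDER BY id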
There is no simple way to do a deletion where you remove one value and re-arrange the remaining IDs to retain the sequence. A workaround might be the following steps (a rough JDBC sketch follows the list):
DELETE the entry first, i.e. delete from <table> where ID = _value
INSERT INTO ... SELECT (without the id column). Please note that the backup table needs to be identical in terms of columns and types for this query to work properly; you can also use a temporary table as the backup_table, i.e. insert into <backup_table> select <column1, column2, ...> from <table>
TRUNCATE your table, i.e. truncate table <table>
Copy the values from the temp table back into the existing table. You can use INSERT INTO ... SELECT once again, but make sure to drop the temp table at the end.
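The sketch below walks through those steps with plain JDBC, assuming a hypothetical items(id, name) table and an open java.sql.Connection conn. Note that TRUNCATE causes an implicit commit in MySQL, so the steps are not atomic, which is one more reason not to do this in practice:
// Step 1: delete the entry (the id value 8 is just an example).
// Step 2: back up the remaining rows without the id column.
// Step 3: truncate, which also resets AUTO_INCREMENT back to 1.
// Step 4: re-insert the rows so they receive fresh sequential ids, then drop the backup.
try (Statement st = conn.createStatement()) {
    st.executeUpdate("DELETE FROM items WHERE id = 8");
    st.executeUpdate("CREATE TEMPORARY TABLE items_backup AS SELECT name FROM items");
    st.executeUpdate("TRUNCATE TABLE items");
    st.executeUpdate("INSERT INTO items (name) SELECT name FROM items_backup");
    st.executeUpdate("DROP TEMPORARY TABLE items_backup");
}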
Please note that I would NOT advise you to do this, mainly because most applications use some sort of caching and also rely on specific ways of deciding whether two objects are the same.
I.e. in Java, the equals() and hashCode() methods of POJOs are overridden and programmers generally rely on IDs as a permanent way of identifying a specific object. Using the above method essentially breaks that whole concept, and I would not advise changing an object's auto-increment ID value for this reason before anything else.
Essentially, what you want to do is an anti-pattern, and it tends to turn common patterns and practices relied on by experienced programmers into solutions that are prone to unexpected issues and/or failures. This especially applies when advanced features are involved, such as applying this anti-pattern in an application that uses a Galera cluster and/or application caching.

Most efficient way to determine if a row EXISTS and INSERT into MySQL using java JDBC

I'm looking at querying a table in a MySQL database (I have the primary key, which is composed of two parts, a name and a number, though compared as strings), where the table could have anywhere from very few rows to upwards of hundreds of millions. Now, for efficiency, I'm not exactly sure how costly an INSERT query actually is, but I have a few options for how to go about it:
1. I could query the database to see if the element EXISTS and then call an INSERT query if it doesn't.
2. I could just brute-force the INSERT into the database and let it succeed or fail, so be it.
3. I could, on program start-up, create a cache: grab the primary key columns and store them in a Map<String, List<Integer>>, then check whether the name exists as a key and, if it does, whether the number exists in the List<Integer>; if it doesn't, INSERT into the database.
Option one isn't really something I would implement; it's just on the list of possible choices. Option two would most likely average out better when occurrences are mostly unique, i.e. not in the table already. Option three would be favoured when common occurrences are the case, so that a lot of lookups hit the cache.
Bear in mind that whichever option is chosen will be executed potentially millions of times. Memory usage aside (for option 3), from my calculations it's nothing significant with respect to the capacity available.
Let the database do the work.
You should do the second method. If you don't want to get a failure, you can use on duplicate key update:
insert into t(pk1, pk2, . . . )
values ( . . . )
on duplicate key update pk1 = values(pk1);
The only purpose of the on duplicate key update clause here is to do nothing useful; it just prevents the statement from returning an error.
Why is this the best solution? In a database, a primary key (or columns declared unique) has an index structure, which is efficient for the database to use.
Second, this requires only one round-trip to the database.
Third, there are no race conditions, if you have multiple threads or applications that might be attempting to insert the same record(s).
Fourth, the method with on duplicate key update works for inserting multiple rows at once. (Without on duplicate key update, a multi-row statement would fail if a single row were a duplicate.) Combining multiple inserts into a single statement can be another big efficiency gain.
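A rough JDBC version of that approach, assuming an open Connection conn, a hypothetical table t(name, num) whose primary key covers both columns, and a simple Row holder class:
String sql = "INSERT INTO t (name, num) VALUES (?, ?) "
           + "ON DUPLICATE KEY UPDATE name = VALUES(name)";   // no-op update on duplicates
try (PreparedStatement ps = conn.prepareStatement(sql)) {
    for (Row r : rows) {           // rows: List<Row>, each holding a name and a num
        ps.setString(1, r.getName());
        ps.setInt(2, r.getNum());
        ps.addBatch();             // batch many rows to cut down on round-trips
    }
    ps.executeBatch();             // existing rows are silently left unchanged
}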
Your second option is really the right way to go.
Rather than fetching all the results as in your third option, you could try using LIMIT 1. Given that the combination of name and number forms the primary key, query with LIMIT 1 to fetch at most one row; if the result is empty, you can then insert your data. It would be a lot faster that way.
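For illustration, the LIMIT 1 existence check in JDBC might look like this (table and column names are hypothetical, and note it still has the race condition mentioned above):
try (PreparedStatement check = conn.prepareStatement(
        "SELECT 1 FROM t WHERE name = ? AND num = ? LIMIT 1")) {
    check.setString(1, name);
    check.setInt(2, num);
    try (ResultSet rs = check.executeQuery()) {
        if (!rs.next()) {
            // No matching row: run the INSERT for (name, num) here.
        }
    }
}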
MySQL has a neat way to perform a special kind of insertion. INSERT ... ON DUPLICATE KEY UPDATE is a MySQL extension to the INSERT statement. If you specify the ON DUPLICATE KEY UPDATE option in the INSERT statement and the new row causes a duplicate value in a UNIQUE or PRIMARY KEY index, MySQL performs an update of the old row based on the new values:
INSERT INTO table(column_list)
VALUES(value_list)
ON DUPLICATE KEY UPDATE column_1 = new_value_1, column_2 = new_value_2;

Autoincrementing number unique id in Solr

I want to generate unique keys automatically in Solr. I checked the default function here,
but it generates ids like 1cdee8b4-c42d-4101-8301-4dc350a4d522. In my application, I need unique auto-increment numbers like we have in MySQL. What should the approach be to do this? SolrJ pointers would be much appreciated.
Another solution (hack) that I've implemented is to keep a counter record in Solr inside the existing schema. For example, if you have a schema with 2 string fields, you can store the marker MAX_VALUE in one field and the actual integer maximum (stored as a string) in the other. Any time you want to add a document, query for "fieldname:MAX_VALUE", retrieve the string value from the other field of that document, parse it and add 1, and then update the existing MAX_VALUE document. It's not the most elegant, but it is a solution, and it keeps your maximum number inside your index rather than in another application.
It's also SolrJ-friendly, as it's fairly straightforward to issue both the query and the update.
I apologize for the grammar. Do comment if you can't understand what I'm saying.
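A rough SolrJ sketch of that hack, with made-up field names (marker_field / value_field); note that it is not atomic, so concurrent writers would need external coordination:
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocument;
import org.apache.solr.common.SolrInputDocument;

// Reads the tracker document, increments the stored maximum and writes it back.
static long nextId(SolrClient solr) throws Exception {
    QueryResponse resp = solr.query(new SolrQuery("marker_field:MAX_VALUE"));
    SolrDocument maxDoc = resp.getResults().get(0);
    long next = Long.parseLong((String) maxDoc.getFieldValue("value_field")) + 1;

    // Re-adding a document with the same uniqueKey overwrites the old tracker.
    SolrInputDocument tracker = new SolrInputDocument();
    tracker.addField("id", (String) maxDoc.getFieldValue("id"));
    tracker.addField("marker_field", "MAX_VALUE");
    tracker.addField("value_field", String.valueOf(next));
    solr.add(tracker);
    solr.commit();
    return next;   // use this value as the id of the document you are about to index
}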

Refreshing PrimaryID to start from one after a deleted Row

I'm writing a program in Java and I have a database displayed in a JTable, just like the ones below. I want to know if it is possible to renumber the primary IDs from 1 in the GUI when a row is deleted. For example, below, the LocationID for London is deleted and then added again with an ID of 4. Is this possible?
I'm using SQL with Java.
To answer your question, yes it is possible.
There is no good reason for you to do this though, and I highly recommend you don't do this.
The only reasons to do this would be cosmetic ones - the database doesn't care whether records are sequential, only that they relate to one another consistently. There's no need to "correct" the values for the database's sake.
If you use these IDs for some kind of numbering on the UI (a cosmetic reason):
Do not use your identity column for this. Separate the visual row number, order or anything else from the internal database key.
If you REALLY want to do this,
Google "reseeding or resetting auto-increment primary IDs" for your SQL product.
Be aware that with some solutions, if you reset the identity seed below values that currently exist in the table, you will violate the identity column's uniqueness constraint as soon as the values start to overlap.
Thanks Andriy for mentioning my blindly pasting a mysql solution :)
Some examples:
Java DB: ALTER TABLE table_name ALTER COLUMN auto_increment_column_name RESTART WITH 8
SQL Server: DBCC CHECKIDENT (mytable, RESEED, 0)
Oracle / PostgreSQL: alter the underlying sequence
