I know some software shops have been burned by using the int type for the primary key of a persistent class. That being said, not all tables grow past 2 billion rows. As a matter of fact, most don't.
So, do you guys use the long type only for those classes that are mapped to potentially large tables, OR for every persistent class just to be consistent? What's the industry consensus?
I'll leave this question open for a while so that you can share with us your success/horror stories.
Long can be advantageous even if the table does not grow super large, yet has a high turnover, i.e. if rows are deleted/inserted frequently. Your auto-generated/sequential unique identifier may climb to a high number while the table itself remains small.
I generally use Long because the performance benefits are not noticeable in most of my projects; however, a bug due to overflow would be very noticeable!
That's not to say that Int is not a better option for other people's scenarios, for example for data crunching or complex query systems. Just be clear about the risks/benefits and how they impact your specific project.
I don't know about "burned". It's not difficult to change from int to long when you need to. The conversion is straightforward in SQL, and then it's just a search and replace in your client code (or make the change in your persistence layer, and then compile and see what breaks). You're moving from one integer type to another, so you don't have to worry about subtle conversion issues or truncation.
Going from float to double would be a lot harder.
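If the client code goes through JPA/Hibernate (as mentioned further down in this thread), the change amounts to swapping the type of the id field on the entity; a minimal sketch, assuming a hypothetical Customer entity:

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;

@Entity
public class Customer {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;    // was: private Integer id;

    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }
}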
I use Integer for my surrogate keys unless I have a need for them to be something else. It is not necessary to always use a Long if you don't have a need for it.
(I typically use JPA/Hibernate in my projects running against either Oracle 10g or MySQL 5.x databases.)
Because Int will always be faster for Select/Sorts.
Related
I have a question regarding UUID generation.
Typically, when I'm generating a UUID I will use a random or time-based generation method.
HOWEVER, I'm migrating legacy data from MySQL over to a C* datastore and I need to change the legacy (auto-incrementing) integer IDs to UUIDS. Instead of creating another denormalized table with the legacy integer IDs as the primary key and all the data duplicated, I was wondering what folks thought about padding 0's onto the front of the integer ID to form a UUID. Example below.
*Something important to note is that the legacy IDs' highest values will never top 1 million, so overflow isn't really an issue.
The idea would look like this:
Legacy ID: 123456 ---> UUID: 00000000-0000-0000-0000-000000123456
This would be done using some string concatenation and the UUID.fromString("00000000-0000-0000-0000-000000123456") method.
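A minimal sketch of that conversion, assuming the legacy ID always fits into the last 12-digit group as described above:

import java.util.UUID;

// Pad the legacy integer ID to 12 digits and place it in the last group
// of an otherwise all-zero UUID, e.g. 123456 -> 00000000-0000-0000-0000-000000123456.
long legacyId = 123456L;
UUID uuid = UUID.fromString(String.format("00000000-0000-0000-0000-%012d", legacyId));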
Does this seem like a bad pattern to anyone? I'm not a huge fan of the idea; it gives me a bad taste in my mouth, but I don't have a technical reason why, haha.
As far as collisions go, the probability of a collision occurring is still ridiculously low, so I'm not worried about increasing collisions. I suppose it just seems like bad practice to me, that it's "too easy".
We faced the same kind of issue before when migrating from Oracle, with ids generated by a sequence, to Cassandra with generated UUIDs.
We had to design a type that supports both old data coming from Oracle as type long and new data as UUIDs.
The obvious solution is to use type blob to store the id. A blob can encode a long or a UUID.
This solution only works for partition keys, because you query them using =. It won't work for clustering columns, which are queried with operators like > or < and therefore need an ordering on their values.
There was a small objection at that time: using a blob to store the id makes it opaque to the user. For example, in cqlsh, when you're doing a SELECT and you need to provide the id, how would you construct a blob?
Fortunately, the native functions of CQL bigIntAsBlob(), blobAsBigInt(), uuidAsBlob() and blobAsUUID() come in very handy.
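On the client side, the same idea is just a matter of serializing either value into bytes; a minimal sketch using only the JDK (how the ByteBuffer is then bound to the statement depends on the driver you use):

import java.nio.ByteBuffer;
import java.util.UUID;

// A legacy long id becomes an 8-byte blob...
ByteBuffer legacyBlob = ByteBuffer.allocate(8);
legacyBlob.putLong(123456L);
legacyBlob.flip();

// ...while a new UUID becomes a 16-byte blob; both can live in the same blob column.
UUID newId = UUID.randomUUID();
ByteBuffer uuidBlob = ByteBuffer.allocate(16);
uuidBlob.putLong(newId.getMostSignificantBits());
uuidBlob.putLong(newId.getLeastSignificantBits());
uuidBlob.flip();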
I've decided to go in a different direction from doanduyhai's answer.
In order to maintain data consistency, we decided to fully de-normalize the data and create another table in C* that is keyed on our legacy IDs. When migrating the objects from our legacy system into C*, they are assigned a new randomly generated UUID, which will be their new primary ID for the future. The legacy IDs will be kept around until such time as we decide they are no longer needed. At that point, we can cleanly drop the legacy ID table and be done with them.
This solution allowed for a cleaner break from our legacy ID system in the future, and allowed us to avoid strange custom-made UUIDs. I also wasn't a huge fan of having the ID field as a blob type that could have multiple types of data stored in it since, in the future, we plan on only wanting UUIDs to be there.
I know (or think I know) that using things like prepared statements can help future executions of the same query execute faster. However, I was wondering: if you're using prepared statements but the actual values are the same every time, will it then also optimize using the value itself?
To give a little more context, I want to test performance for a service request that uses an underlying database. The easy route would be to send in the same data each time. The more arduous route would be to ensure the data values were different each time. However, in either case, the same SQL query would be generated -- just the values would be different. So, will these scenarios end up testing the same thing or something different because of potential DB optimization?
I've tried to research this topic but I feel like a lot of what I'm reading is over my head. Any good links for someone who knows little about DB optimization would also be welcome, in addition to the central question.
It depends on exactly what you are doing and measuring. I would expect, though, that you'd need to use different values in order to get realistic results.
Caching
If you send the same values every time, you can probably guarantee that the particular row(s) you're interested in will always be cached (in the buffer cache, in the file system cache, in the SAN cache, etc.), which is probably not terribly realistic if the set of possible inputs is large. On the other hand, if there are a small number of potential inputs and you're reasonably confident that the rows of interest will always be cached (for example, if you know that some other activity that takes place just before your service is called will already have pulled the data you're interested in into memory), then perhaps this is a realistic assumption.
Optimization
Ignoring caching, we can look at how the optimizer would treat the two cases. If you are generating SQL queries with embedded literals (a bad practice that is particularly harmful in Oracle but one that is very common), then you are generating different SQL statements. As far as Oracle is concerned
SELECT *
FROM emp
WHERE deptno = 10
is a completely different statement from
SELECT *
FROM emp
WHERE deptno = 20
There are some settings (e.g. cursor_sharing) you can tweak to ask Oracle to treat these two as identical queries (by having Oracle force them to use bind variables), but that is not without its own downsides and is generally only recommended when you're trying to apply a band-aid to a poorly written application while you work on refactoring the application to use bind variables properly.
Assuming that you are generating queries using bind variables in your application, preparing the statement, and then binding different values before executing the query multiple times, i.e.
SELECT *
FROM emp
WHERE deptno = :1
then you get into the realm of histograms, bind variable peeking, and adaptive cursor sharing. This can get pretty involved and depends heavily on the version of Oracle you're using, the edition you're using, and how you've configured the optimizer to work. I'll try to give a simplified high-level overview here-- if you want to delve much deeper into one of these, we'll probably want a separate question.
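In application code, the bind-variable version above corresponds to something like the following JDBC sketch (emp and deptno are just the example names from above; the connection is assumed to be an open java.sql.Connection):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// The statement is prepared once; only the bound deptno value changes per execution.
static void queryDepartments(Connection connection) throws SQLException {
    try (PreparedStatement ps = connection.prepareStatement(
            "SELECT * FROM emp WHERE deptno = ?")) {
        for (int deptno : new int[] {10, 20}) {
            ps.setInt(1, deptno);                  // same SQL text, different bind value
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // process the row...
                }
            }
        }
    }
}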
Histograms
By default, the optimizer assumes that data is equally spaced and equally likely. So, for example, if the deptno column has 50 distinct values, the optimizer assumes by default that each value is equally likely. That's probably a pretty reasonable assumption for most columns but it's obviously not reasonable for all columns. If I have a table with all active duty military members, for example, and one of the columns is birth_year, there will be far more rows with a birth_year of 1994 (20 years ago) than 1934 (80 years ago). In these cases, you gather histograms on the column in question in order to tell the optimizer that the data isn't evenly distributed and to let the optimizer gather information about which values are more common and how common they are.
The optimizer doesn't care about the values you are passing for your bind variable values unless there is a histogram on one of the columns in your predicate (I'll ignore for the moment the possibility that you are passing a value that is out of range).
Bind variable peeking
If you do have a histogram on one or more columns, then Oracle (9.1 and later if memory serves) will "peek" at the first value that is passed in for a bind variable and use that value with the histogram to determine the best plan for all subsequent executions. This works reasonably well the vast majority of the time but it occasionally leads to hair-pullingly painful problems (and much swearing) when Oracle peeks at a "bad" value and generates a plan that is efficient for that one execution but terrible for all future executions. This is summed up by Tom Kyte's story about the database that has to be restarted if it's rainy on a Monday morning. If you have a histogram on the column and different values that you might pass in would likely benefit from different query plans, you'd likely want to take bind variable peeking into consideration to determine if passing in values in a different order created any performance issues.
Adaptive cursor sharing
In recent versions (if memory serves 11.1 and later) and depending on your configuration, Oracle can use adaptive cursor sharing to maintain multiple query plans for a single statement and to use the most appropriate version for the particular bind variable value that is passed in. This is a much more sophisticated version of bind variable peeking that peeks for each set of values you pass in and figures out whether it is close enough to some other set of values to use the previously generated plan or whether it needs to compute a new plan for the new set of values. Figuring out what constitutes "close enough" and how this interacts with various features for ensuring plan stability is a rather involved topic in its own right.
You could use DB caching:
http://www.oracle.com/technetwork/articles/sql/11g-caching-pooling-088320.html
If the app is making network round trips and calculating results, that will still eat considerable time.
I used an Oracle sequence as the primary key of a table, and used int in the Java application that maps this primary key. Now I've found that my customer has reached the maximum int value in the table; the sequence can keep increasing, but a Java int is no longer able to store it. I don't want to change the Java code from int to long because of the very big cost. Then I found that the customer's DB has many big gaps in the ID column. Is there any way I can reuse these missing ID numbers?
If I can do this at the DB level, something like re-organizing the sequence to add these missing numbers to it, then I could use these gaps with no Java code change. That would be great.
I will write a function to find the gap ranges. After having these numbers, I want to assign them to a pool of sequence values if I can, so that from now on the sequence will not auto-increment but will just use the numbers I assigned. In the Java code, I can continue to use findNextNumber to call the sequence, but the sequence would be able to return the values I assigned to it. It seems impossible, right? Any alternative?
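A rough sketch of how the gap ranges could be found with an analytic query (my_table and id are placeholders for the real table and PK column; the connection is assumed to be an open JDBC connection to the Oracle database):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Finds the ranges of unused ids between existing primary-key values.
static void findGaps(Connection connection) throws SQLException {
    String sql =
        "SELECT id + 1 AS gap_start, next_id - 1 AS gap_end "
      + "FROM (SELECT id, LEAD(id) OVER (ORDER BY id) AS next_id FROM my_table) "
      + "WHERE next_id - id > 1";
    try (PreparedStatement ps = connection.prepareStatement(sql);
         ResultSet rs = ps.executeQuery()) {
        while (rs.next()) {
            long gapStart = rs.getLong("gap_start");
            long gapEnd = rs.getLong("gap_end");
            // e.g. save [gapStart, gapEnd] into a gap table that feeds the id pool
        }
    }
}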
Do you mean, will the sequence ever return a value that is in a "gap" range? I don't think so, unless you drop/re-create it for some reason. I guess you could write a function of some sort to find the PK gaps in your table, then save those gap ranges to another table, and "roll" your own sequence function using the gap table. Very ugly. Trying to "recover" these gaps just sounds like a desperate attempt to avoid the unavoidable - your Java PK data type should have aligned with the DB data type. I had the same problem a long time ago with a VB app that had a class key defined as a 16-bit integer; when the sequence exceeded 32K, I had to change the variables to a Long. I say, bite the bullet and make the conversion. A little pain now will save you a lot of ongoing pain later. Just my opinion.
I would definitely make the change to be able to use longer numbers, but in the meantime you might manage until you can make that change by using a sequence that generates negative numbers. There'd be a performance impact on the maintenance of the PK index, and it would grow disproportionately quickly, though.
I want to use a long timestamp value (maybe generated by System.currentTimeMillis()) as column names in my database. Can the System.currentTimeMillis() method guarantee always-increasing values? I have seen people complaining that it sometimes goes slower!
I am also open to other alternatives that could work as increasing column names. I just want to guarantee uniqueness (if they fall within the same millisecond I can consider them OK) and an increasing sequence (and perhaps also a smaller size, i.e. fewer bytes, if at all possible!).
Edit: I have a NoSQL database where column names (and hence columns) are sorted within a row as an ascending/descending number sequence. Thus I am looking to generate timestamps as column names so that I can sort the columns by time.
I am looking to store comments of a blog post in a single row using timestamp values as column names to enable sort by time. I think I wouldn't mind even if the resolution is 10 ms, since the probability of someone commenting within the same 1/100 of a second on the same blog post in my application would be very low.
Edit: Thank you all for your comments and suggestions. Really helpful. I think I have a solution to work around the occasional failures of System.currentTimeMillis(). I could implement it like this:
When a user adds a new comment to a post, the frontend will send an id 'suggestedId' which is one greater than the id of the last comment (the frontend would know this from the previous database read). This id would be compared with the id generated using System.nanoTime(): if the suggestedId is less than the generatedId, the generatedId will be used; otherwise the suggestedId will be used. So it simply means: whichever is greater, use that id. This guarantees monotonicity.
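In code, the rule described above boils down to taking whichever candidate is larger; a minimal sketch (the names are illustrative):

// suggestedId = last comment id + 1, sent by the frontend;
// the clock-derived candidate comes from System.nanoTime().
static long nextCommentId(long suggestedId) {
    long generatedId = System.nanoTime();
    return Math.max(suggestedId, generatedId);   // whichever is greater wins
}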
Although not truly perfect, it sounds good enough for practical usage!
Would you guys like to share your thoughts upon this? Thanks!!!
The general database design issues have been addressed by other commenters, but just on this point:
Can the System.currentTimeMillis() method guarantee always-increasing values? I have seen people complaining that it sometimes goes slower!
For future reference, the word for this (always-increasing values) is monotonicity. No, System.currentTimeMillis() is not monotonic. Not only can it go more slowly or speed up (if, say, the system it's running on is using NTP for time correction), but it can arbitrarily change up or down (if the user, or a script, changes the system time).
System.nanoTime() does not formally guarantee monotonicity; however, the HotSpot JVM does guarantee it if and only if the underlying system supports it (modern Linux kernels on modern hardware certainly do). Sounds better - with the caveat that some processors use power management techniques etc. which can screw this up in the presence of multiple cores. So it's better, but still not perfect.
On many systems, System.currentTimeMillis() does not resolve below 10 ms increments. So two different calls can easily return the same value.
I suggest that you keep an auxiliary table with a counter that you can increment to give the next value.
Why do you want this for column names? It seems a very odd sort of database design.
I am looking to store comments of a blog post in a single row using timestamp values as column names to enable sort by time.
I'm no NoSQL expert, but I'd say it's not a good idea to store comments as columns in one row. Why don't you add a row per comment, along with a timestamp you can sort by?
Using a traditional relational database the table could look like this:
comments
--------
id (PK)
blog_id (FK)
created_on (timestamp)
text
Selecting the comments in order would then be in SQL:
SELECT * from comments WHERE blog_id = ? ORDER BY created_on
System.currentTimeMillis() typically has around 10-20ms granularity, but even if it had 1ms granularity, in principle, 1ms is an eternity in computing time and it would be quite plausible, depending on what you're doing, for two calls to end up with the same value. However, I'm guessing that even 20ms is probably not an eternity compared to how frequently people make blog comments.
So, if two people post a comment within the same 20ms (or whatever), just sorting on this value will not define an order for the posts in question. But do you particularly care about this unlikely situation? If you do, then you need to build in a little bit of extra logic (have a counter for the number of messages posted "this millisecond"). I personally wouldn't bother in your use case.
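If you do care, a minimal sketch of that per-millisecond counter idea (single JVM only; the class name and the 1000-ids-per-millisecond limit are assumptions):

// Combines the current millisecond with a small counter so that ids handed out
// within the same millisecond still come out strictly increasing.
final class CommentIdGenerator {
    private long lastMillis = -1L;
    private int counter = 0;

    synchronized long nextId() {
        long now = System.currentTimeMillis();
        if (now > lastMillis) {
            lastMillis = now;
            counter = 0;
        } else {
            counter++;    // same (or earlier) millisecond: bump the counter instead
        }
        return lastMillis * 1000 + counter;   // assumes fewer than 1000 ids per millisecond
    }
}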
As far as I can understand, you're also storing the data in a fundamentally silly way. Why not just have a "Comments" table with a row per comment and a single time column, which you can sort on as required?
Many databases provide a way to get serial numbers into a column. For example, see this: PostgreSQL Autoincrement
I'm planning on using client-provided UUIDs as the primary key in several tables in a MySQL database.
I've come across various mechanisms for storing UUIDs in a MySQL database, but nothing that compares them against each other. These include storage as:
BINARY(16)
CHAR(16)
CHAR(36)
VARCHAR(36)
2 x BIGINT
Are there any better options? How do the options compare against each other in terms of:
storage size?
query overhead? (index issues, joins etc.)
ease of inserting and updating values from client code? (typically Java via JPA)
Are there any differences based on which version of MySQL you're running, or the storage engine? We're currently running 5.1 and were planning on using InnoDB. I'd welcome any comments based on practical experience of trying to use UUIDs. Thanks.
I would go with storing it in a BINARY(16) column, if you are indeed set on using UUIDs at all. Something like 2 x BIGINT would be quite cumbersome to manage. Also, I've heard of people reversing them because UUIDs generated on the same machine tend to share the same leading bytes while the differing parts are at the end, so if you reverse them, your indexes will be more efficient.
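For the insert/update side of the question, one common approach with BINARY(16) is to let MySQL strip the dashes and unhex the textual UUID itself; a minimal JDBC sketch (the users table and its columns are placeholders):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.UUID;

// REPLACE removes the dashes and UNHEX turns the 32 hex chars into 16 raw bytes.
static void insertUser(Connection connection, String name) throws SQLException {
    UUID id = UUID.randomUUID();
    try (PreparedStatement ps = connection.prepareStatement(
            "INSERT INTO users (id, name) VALUES (UNHEX(REPLACE(?, '-', '')), ?)")) {
        ps.setString(1, id.toString());
        ps.setString(2, name);
        ps.executeUpdate();
    }
}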
Of course, my instinct says that you should be using auto-increment integers unless you have a really good reason for using the UUID. One good reason is generating unique keys across different databases. Another is that you plan to have more records than an INT can store, although not many applications really need things like this. Not only is a lot of efficiency lost when you don't use integers for your keys, they're also harder to work with: they are too long to type in, and passing them around in your URLs makes the URLs really long. So, go with UUIDs if you need them, but otherwise try to stay away.
I have used UUIDs for smart-client online/offline storage and data synchronization, and for databases that I knew would have to be merged at some point. I have always used char(36) or char(32) (no dashes). You get a slight performance gain over varchar, and almost all databases support char. I have never tried binary or bigint. One thing to be aware of is that char will pad with spaces if you do not use all 36 or 32 characters. The point being: don't write a unit test that sets the ID of an object to "test" and then try to find it in the database. ;)