I am using Ruby to generate a 64-bit timestamp similar to Java's. I went through Class: Time and it says Time can use a 63-bit integer internally. I thought I could use:
Time.now.to_f * 1000
but I am worried about losing precision due to the floating-point conversion. Can I simply get a 64-bit timestamp (milliseconds since the epoch) in Ruby, as precise as possible, the way Java does with:
Calendar.getInstance().getTimeInMillis();
I need to use the timestamp as a unique ID in a database, so I would like to keep time-related collisions to a minimum, ideally non-existent.
I've added comments suggesting this really isn't what you should be doing anyway, but I really don't think you need to worry about losing precision in any meaningful way. The Ruby documentation states that the value is stored down to the nanosecond. Converting it to a floating-point number may lose the last few digits, but that isn't significant at the millisecond level - you really don't care whether the value rounds up or down a little, after all... you're already relying on only creating a single entry per millisecond.
An alternative approach would be to use to_i and nsec: multiply the result of to_i by 1000, divide the result of nsec by 1000000, and add the two together. That will get you the number of milliseconds using only integer arithmetic.
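For comparison, here is the same integer-only arithmetic sketched in Java using java.time.Instant (this is just an illustration of the idea, not part of the Ruby answer):
Instant now = Instant.now();
// whole seconds times 1000, plus the whole milliseconds carved out of the nanosecond field
long millis = now.getEpochSecond() * 1000L + now.getNano() / 1_000_000L;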
The time in Java is a signed long, so it is effectively 63-bit as well.
So you are worried that you will get an overflow in the year 292278994? Personally, I don't think anyone will be using Java by then. In fact, it's likely we will be extinct/evolved by then as well.
System.out.println("Overflow at " + new Date(Long.MAX_VALUE));
prints
Overflow at Sun Aug 17 08:12:55 CET 292278994
Note: 292 million years ago was before the dinosaurs ruled the earth.
If you are concerned about the loss of accuracy when converting a nanosecond timestamp to a double, you can calculate what that error is:
long now = System.currentTimeMillis() * 1000000L;
double error_f = Math.ulp((float) now);
double error = Math.ulp((double) now);
System.out.println("The error for a nano-second timestamp using a double "
+ now + " is " + error + " and float is " + error_f);
prints
The error for a nano-second timestamp using a double 1378970569656000000 is 256.0 and float is 1.37438953472E11
This means the worst-case error when converting to double is half of that ulp, i.e. 128 ns. For converting to float, the error is also half the ulp, which is about 68 seconds - quite high.
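The same check for a millisecond timestamp stored as a double (a quick sketch reusing the value above) shows that the spacing between adjacent doubles is far below one millisecond, which backs up the earlier point that millisecond-level values are safe in a double:
long nowMillis = 1378970569656L;                 // millisecond timestamp from the example above
double ulpMillis = Math.ulp((double) nowMillis); // spacing between adjacent doubles near this value
System.out.println(ulpMillis);                   // about 2.4e-4 ms, i.e. well under a millisecond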
What is the most efficient way in Java (11) to round a given timestamp (e.g. System.currentTimeMillis()) to the nearest 10 seconds?
e.g. 12:55:11 would be 12:55:10 and 12:55:16 would be 12:55:20
This code is executed ~10-20 times per second, so it must be efficient.
Any ideas?
Thanks
Probably this:
long time = System.currentTimeMillis();
long roundedTime = (time + 5_000) / 10_000 * 10_000;
Basically three 64-bit primitive arithmetic operations.
(If you want to truncate to 10 seconds granularity, just remove the + 5_000.)
Theoretically we should consider integer overflow. In practice the above code should be OK for roughly the next 292 million years. (Source: Wikipedia.)
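A quick usage sketch with the example times from the question, expressed here as millisecond offsets from 12:55:00 (the values are assumptions for illustration):
long elevenPast = 11_000L;   // 12:55:11 as an offset from 12:55:00
long sixteenPast = 16_000L;  // 12:55:16 as an offset from 12:55:00
System.out.println((elevenPast + 5_000) / 10_000 * 10_000);   // 10000 -> 12:55:10
System.out.println((sixteenPast + 5_000) / 10_000 * 10_000);  // 20000 -> 12:55:20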
I am asked to store the time right before my algorithm starts and the time when it ends, and I also need to provide the difference between them (end time - start time).
But the System.currentTimeMillis() function generates values that are too long:
start=1497574732045
end=1497574732168
Is there a way to make this value just 3 digits, like "123", while staying as precise as System.currentTimeMillis()?
As the currentTimeMillis() documentation says:
Returns the current time in milliseconds. Note that while the unit of time of the return value is a millisecond, the granularity of the value depends on the underlying operating system and may be larger. For example, many operating systems measure time in units of tens of milliseconds.
Returns:
the difference, measured in milliseconds, between the current time and midnight, January 1, 1970 UTC.
In your case, use this simple trick and you will get the desired result:
long startTime = Long.parseLong("1497674732168");
long endTime = Long.parseLong("1497574732168");
System.out.println("start time is " + new Date(startTime) + ", end time is " + new Date(endTime));
If you need to store the start and end times separately, there are only two ways (I can think of) to make the values smaller.
Firstly, System.currentTimeMillis() counts from January 1, 1970 UTC. But if your clock is never going to run earlier than "now", you can subtract a fixed amount of time. I chose 1497580000000 as it's definitely in the past at the time of writing and it's a nice round number.
Second, divide the value by whatever amount of precision you are willing to lose. In your case you might not want to do that at all, but here I chose 100.
The numbers returned look small now, but they will continue to get bigger as the difference between the current time and 1497580000000 becomes more pronounced.
The preferred solution is to not do any of this at all, but just store the long value if you can.
You'll never magic a large precise number into only 3 decimal digits. Not without quantum mechanics.
public static void main(String[] args) {
    long start = 1497584001010L;
    long end = 1497584008000L;
    System.out.println("Diff: " + (end - start));

    int compactStart = compact(start);
    int compactEnd = compact(end);
    System.out.println("Compact Start: " + compactStart);
    System.out.println("Compact End: " + compactEnd);
    System.out.println("Diff: " + (expand(compactEnd) - expand(compactStart)));
}

// Subtract the fixed offset and keep only 1/10-second precision so the value fits in an int.
private static int compact(long millis) {
    return (int) ((millis - 1497580000000L) / 100);
}

// Reverse of compact(): scale back up and re-add the same fixed offset.
private static long expand(int compacted) {
    return compacted * 100L + 1497580000000L;
}
Result...
Diff: 6990
Compact Start: 40010
Compact End: 40080
Diff: 7000
Note 7000 doesn't equal 6990 because of the intentional precision loss.
I'm attempting to take the current system time,
long now = System.currentTimeMillis();
find out how far light has traveled since 1970 (in km),
double km = (now / 1000.0) * 299792.458;
and print the value.
System.out.printf("Since the unix Epoch, light has traveled...\n%f km", km);
But when I run the code, the answer only seems to be precise to two decimal places.
435963358497001.750000 km
Is there a reason for this? Printing the value using
System.out.println(km);
gives me
4.3596335849700175E14
Seeing the E there, it makes sense why it cuts off. Is there a way I could get the value to 3 or more decimal places?
You are exceeding the precision of a double (15 to 17 significant decimal digits), so the result is rounded.
Try using BigDecimal instead:
long now = 1454217232166L; // Sun Jan 31 00:13:52 EST 2016
BigDecimal km = BigDecimal.valueOf(now).multiply(BigDecimal.valueOf(299.792458));
System.out.println(km); // prints: 435963358497001.804028
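You can also see why the extra decimal places in the printf output are meaningless by checking the spacing between adjacent doubles near that value (a quick check using the number from the question):
double km = 435963358497001.75;      // the value printed in the question
System.out.println(Math.ulp(km));    // 0.0625 - adjacent doubles differ by 1/16 of a km here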
Just format it to three places; try this:
System.out.printf("Since the unix Epoch, light has traveled...\n%.3f km", km);
I need to store an exact audio position in a database, namely SQLite. I could store the frame position (sample offset / channels) as an integer, but this would cause extra data maintenance in case of certain file conversions.
So I'm thinking about storing the position as an 8-byte real value in seconds - that is, a double, and hence a REAL in SQLite. That makes the database structure more consistent.
But, given a maximum sample rate of 192 kHz, is double precision sufficient so that I can always recover the exact frame position when multiplying the value by the sample rate?
Is there a certain maximum position above which an error may occur? What is this maximum position?
PS: this is about SQLite REAL, but also about the C and Java double types, which may hold the position value at various stages.
Update:
Since the discussion now focuses on the risks related to conversion and rounding, here's the C code that I'm planning to use:
// Given these types:
int samplerate;
long long framepos;
double position;
// First compute the position in seconds from the framepos:
position = (double) framepos / samplerate;
// Now store the position in an SQLite REAL column, and retrieve it later
// Then compute the framepos back from position, with rounding:
framepos = position * samplerate + 0.5;
Is this safe and symmetrical?
A double has 53 bits worth of precision (52 stored mantissa bits plus an implicit leading bit). Depending on the exponent, some of these bits represent whole numbers (seconds in your case) and the rest represent fractions of a second.
At a 192 kHz sample rate, about 18 bits are required to make the sub-second part precise enough (more if rounding is not optimal). That leaves 35 bits for the seconds, which spans just over a thousand years.
So even if you need an extra bit or two for the sub-second part to guard against rounding, and even if SQL loses a bit or two of precision converting it to decimal and back here and there, you aren't anywhere near losing sample precision with your double-precision number. Make sure your rounding works correctly - C truncates toward zero when converting to an integer, so even an infinitesimally small conversion error could throw you off by 1.
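A quick Java sketch of the round trip at 192 kHz (the frame index is an arbitrary example; this mirrors the C code in the question):
int samplerate = 192_000;
long framepos = 123_456_789_012L;                 // some large frame index (arbitrary)
double position = (double) framepos / samplerate; // seconds, as it would be stored in the REAL column
long back = (long) (position * samplerate + 0.5); // convert back with rounding
System.out.println(framepos == back);             // true - the frame position survives the round trip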
I would store it as a (64-bit) integer representing microseconds (there are roughly 2**20 microseconds in a second). This avoids floating-point hardware/software, is readily understood by all, and gives you a range of roughly 0..2**44 seconds, which is over half a million years.
As an alternative, use a readable fixed-precision decimal representation (20 digits should be enough), right-justified with leading zeros. The cost of conversion is negligible compared to DB accesses anyway.
One advantage of these options is that any database will trivially know how to order them, which is not necessarily the case for floating-point values.
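A small sketch of the zero-padded decimal idea (the 20-digit width comes from the answer; the sample value is an assumption):
long micros = 123_456_789_012L;              // position in microseconds (arbitrary example)
String key = String.format("%020d", micros); // "00000000123456789012"
System.out.println(key);                     // lexicographic order now matches numeric order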
As the answer by Matthias Wandel explains, there's probably nothing to worry about. OTOH, by using integers you would get fixed precision regardless of the magnitude, which might be useful.
Say, use a 64-bit integer, and store the time as microseconds. That gives you an equivalent sampling precision of 1 MHz and a range of almost 300000 years (if my quick calculation is correct).
Edit: Even when taking into account the need for the timestamp * sample_rate to fit into a 64-bit integer, you still have a range of 1.5 years (2**63 / 1e6 / 3600 / 24 / 365 / 192e3), assuming a max sample rate of 192 kHz.
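A minimal sketch of the microsecond-integer approach (variable names and the sample value are assumptions; the intermediate products must fit in a long, which is where the ~1.5 year figure above comes from):
int samplerate = 192_000;
long framepos = 987_654_321L;                                        // arbitrary frame index
long micros = (framepos * 1_000_000L + samplerate / 2) / samplerate; // round to the nearest microsecond
long back = (micros * samplerate + 500_000L) / 1_000_000L;           // round back to the nearest frame
System.out.println(framepos == back);                                // true - 1 µs resolution is finer than one frame at 192 kHz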
I'm in an Android widget, checking the elapsed time between two calls of System.nanoTime(), and the number is huge. How do you measure elapsed time with this? It should be a fraction of a second and instead it's much more. Thanks
System.nanoTime() returns a time value whose granularity is a nanosecond, i.e. 10^-9 seconds, as described in the javadoc. The difference between two calls to System.nanoTime() that are a substantial fraction of a second apart is bound to be a large number.
If you want a time measure with a larger granularity, consider System.currentTimeMillis() ... or just divide the nanosecond values by an appropriate power of 10 to suit your application.
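For example, a minimal sketch of timing an interval and scaling the result down (doWork() is a placeholder for whatever the widget is actually measuring):
long startNanos = System.nanoTime();
doWork();                                       // placeholder for the code being timed
long elapsedNanos = System.nanoTime() - startNanos;
long elapsedMillis = elapsedNanos / 1_000_000L; // divide by 10^6 to get milliseconds
System.out.println(elapsedMillis + " ms elapsed");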
Note that on the Android platform there are 3 distinct system clocks that support different "measures" of time; see SystemClock. If you are programming explicitly for the Android platform, you should read the javadoc and decide which measure is most appropriate to what you are doing.
For your information, "nano-" is one of the standard prefixes defined by the International System of Units (SI) - see http://physics.nist.gov/cuu/Units/prefixes.html.
If you really think that "they" got it wrong and that "nano-" is too small, you could always write a letter to the NIST. I'm sure someone would appreciate it ... :-)
One second contains 1,000,000,000 nanoseconds, so as long as your number is in that range, it's reasonable.
If you want it in fractional seconds, just take value / 10^9, where value is your difference in nanoTime() readings.
long nanoSeconds = 500000000;
float seconds = nanoSeconds / 1000000000f; // divide by a float literal - integer division would give 0
Log.i("NanoTime", nanoSeconds + " ns is the same as " + seconds + " seconds");
Your output would be:
07-27 11:35:47.196: INFO/NanoTime(14237): 500000000 ns is the same as 0.5 seconds