My program needs to generate unique labels that consist of a tag, date and time. Something like this:
"myTag__2019_09_05__07_51"
If one tries to generate two labels with the same tag in the same minute, one will receive equal labels, which I cannot allow. I am thinking of appending the result of System.nanoTime() as an additional suffix to make sure that each label is unique (I cannot access all previously generated labels to check for duplicates):
"myTag__2019_09_05__07_51__{System.nanoTime() result}"
Can I trust that each invocation of System.nanoTime() will produce a different value? I tested it like this on my laptop:
assertNotEquals(System.nanoTime(), System.nanoTime())
and this works. I wonder if I have a guarantee that it will always work.
TL;DR: If you only use a single thread on a popular VM on a modern operating system, it may work in practice. But many serious applications use multiple threads and multiple instances of the application, and there won't be any guarantee in that case.
The only guarantee given in the Javadoc for System.nanoTime() is that the resolution of the clock is at least as good as System.currentTimeMillis() - so if you are writing cross-platform code, there is clearly no expectation that the results of nanoTime are unique, as you can call nanoTime() many times per millisecond.
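For a rough illustration (not a guarantee of anything), you can count how many nanoTime() readings fit into a single currentTimeMillis() tick on your machine; the class name is just for this demo, and note that a tick is often, but not always, one millisecond:

public class NanoResolutionDemo {
    public static void main(String[] args) {
        // Wait for the start of a fresh tick so we measure a whole one.
        long tick = System.currentTimeMillis();
        while (System.currentTimeMillis() == tick) { /* spin */ }
        tick = System.currentTimeMillis();

        long calls = 0;
        while (System.currentTimeMillis() == tick) {
            System.nanoTime();
            calls++;
        }
        System.out.println("nanoTime() calls within one currentTimeMillis() tick: " + calls);
    }
}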
On my machine (Java 11, macOS) I always get at least one nanosecond of difference between successive calls on the same thread (and that held after comparing Integer.MAX_VALUE successive return values); it's possible that there is something in the implementation that guarantees it.
However, it is simple to generate duplicate results if you use multiple threads and have more than one physical CPU. Here's code that demonstrates it:
public class UniqueNano {
    // Two threads publish their latest nanoTime() reading into these fields;
    // the main thread watches for a collision.
    private static volatile long a = -1, b = -2;

    public static void main(String[] args) {
        long max = 1_000_000;
        new Thread(() -> {
            for (int i = 0; i < max; i++) { a = System.nanoTime(); }
        }).start();
        new Thread(() -> {
            for (int i = 0; i < max; i++) { b = System.nanoTime(); }
        }).start();
        for (int i = 0; i < max; i++) {
            if (a == b) {
                System.out.println("nanoTime not unique");
            }
        }
    }
}
Also, when you scale your application to multiple machines, you will potentially have the same problem.
Relying on System.nanoTime() to get unique values is not a good idea.
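If uniqueness within a process is what actually matters, one option is to append a process-local counter to the timestamp instead. A minimal sketch (the class name and format pattern are mine, not from the question's code):

import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.util.concurrent.atomic.AtomicLong;

public class LabelFactory {

    private static final DateTimeFormatter FORMAT =
            DateTimeFormatter.ofPattern("yyyy_MM_dd__HH_mm");

    // Monotonic per-process counter: two labels created in the same minute
    // still differ in their suffix.
    private static final AtomicLong SEQUENCE = new AtomicLong();

    public static String newLabel(String tag) {
        return tag + "__" + LocalDateTime.now().format(FORMAT)
                + "__" + SEQUENCE.incrementAndGet();
    }
}

The counter resets when the JVM restarts, so if labels must also be unique across restarts, multiple instances, or machines, append a random component such as java.util.UUID instead of (or in addition to) the counter.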
I am trying to write a high data rate UDP streaming interface simulator/tester in Java 8 for a realtime machine that has a very accurate time processor card. Every message has a time field in it, with microsecond resolution, and the interface relies on that high precision time card for packet ordering. Since I don't have the card, I need to simulate it out of the equation. I figured I could get away with using something like this:
TimeUnit.MILLISECONDS.toMicros(System.currentTimeMillis());
It does work, but after running for extended periods of time I found that UDP bites me: I send a couple hundred packets out of order with the exact same time stamp, and the other side of the interface can't tell that the packets it received were out of order. The interface is tolerant of this to an extent, but it isn't really an issue on the real system with its high precision clocks.
To mitigate this I have added a notion of synthetic microseconds on top of currentTimeMillis() as follows:
import java.util.concurrent.TimeUnit;

class TimeFactory {

    private long prev; // last raw microsecond value derived from the system clock
    private long incr; // synthetic microseconds added within the current millisecond

    public long now() {
        final long now = TimeUnit.MILLISECONDS.toMicros(System.currentTimeMillis());
        long synthNow = now;
        if (now == prev) {
            // Same millisecond as the previous call: bump the synthetic microsecond
            // offset, capped at 999 so it never spills into the next millisecond.
            if (incr < 999) {
                incr += 1;
            }
            synthNow += incr;
        } else {
            incr = 0;
        }
        prev = now;
        return synthNow;
    }
}
Has anyone ever dealt with synthetic time like this? Is there any other way to tighten this code up, or even a better way to handle this (using nanoTime somehow)? If I ever did send more than 999 packets, would it be safe to increment into the milliseconds range (i.e. increment by 1000 or more)? It looks like I am getting around a 10-15 ms difference between currentTimeMillis() updates, but I'm sure this is very system dependent.
In case anyone is interested, here is what I ended up with to work around the lack of a high resolution system clock. It gives me a synthetic microseconds counter that increments until either System.currentTimeMillis() returns an updated value or it has been called 999 times. In practice I have only seen a maximum of ~500 increments, so it doesn't look like I will have to worry about spilling into the millisecond range.
I'm still open to other more realistic result alternatives.
import java.time.Clock;
import java.time.Instant;
import java.time.ZoneId;
import java.util.concurrent.TimeUnit;

public class SyntheticMicrosClock extends Clock {

    private final ZoneId zone;
    private long prev; // last raw microsecond value derived from millis()
    private long incr; // synthetic microseconds added within the current millisecond

    SyntheticMicrosClock(ZoneId zone) {
        this.zone = zone;
    }

    @Override
    public ZoneId getZone() {
        return zone;
    }

    @Override
    public Clock withZone(ZoneId zone) {
        if (zone.equals(this.zone)) { // intentional NPE if zone is null
            return this;
        }
        return new SyntheticMicrosClock(zone);
    }

    public long micros() {
        final long now = TimeUnit.MILLISECONDS.toMicros(millis());
        long synthNow = now;
        if (now == prev) {
            // Same millisecond as the previous call: add a synthetic microsecond
            // offset, capped at 999 so it never spills into the next millisecond.
            if (incr < 999) {
                incr += 1;
            }
            synthNow += incr;
        } else {
            incr = 0;
        }
        prev = now;
        return synthNow;
    }

    @Override
    public long millis() {
        return System.currentTimeMillis();
    }

    @Override
    public Instant instant() {
        // micros() is microseconds since the epoch; convert to nanoseconds
        // for the nano-adjustment argument.
        return Instant.ofEpochSecond(0, TimeUnit.MICROSECONDS.toNanos(micros()));
    }
}
To use it I inject my synthetic Clock where I need it. Ex:
Clock synthClock = new SyntheticMicrosClock(ZoneOffset.UTC); // or have it injected
Instant.now(synthClock);
Do you need a timestamp, or just a high resolution increasing number that is time based? If so, you might be able to use System.nanoTime.
There were issues with this call in early JVMs/OSes, but they seem to have been addressed (see the first answer here).
Of course there's that odd chance that it might loop around on you. Don't know what kind of flexibility you have with your protocol, but there should be ways to deal with that.
Building on what @Bill suggested, you have 200+ years of range with nanoTime, so why not store nanoTime on init and currentTimeMillis on init, then add the difference between nanoTime and initNanoTime to initCurrentTimeMillis to get an augmented, high-precision timestamp? Once you detect clock skew between this augmented clock and the real one of over 100 ms or so, you can reinit.
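A minimal sketch of that idea (the class and method names are mine; the ~100 ms threshold is the one suggested above):

import java.util.concurrent.TimeUnit;

public class AugmentedClock {

    private long initMillis;
    private long initNanos;

    public AugmentedClock() {
        reinit();
    }

    private void reinit() {
        initMillis = System.currentTimeMillis();
        initNanos = System.nanoTime();
    }

    /** Microseconds since the epoch, with sub-millisecond detail taken from nanoTime(). */
    public long currentTimeMicros() {
        long nowMillis = System.currentTimeMillis();
        long micros = TimeUnit.MILLISECONDS.toMicros(initMillis)
                + TimeUnit.NANOSECONDS.toMicros(System.nanoTime() - initNanos);
        // Re-anchor if the augmented clock drifts more than ~100 ms from the
        // wall clock (e.g. after an NTP step adjustment).
        if (Math.abs(TimeUnit.MICROSECONDS.toMillis(micros) - nowMillis) > 100) {
            reinit();
            micros = TimeUnit.MILLISECONDS.toMicros(initMillis)
                    + TimeUnit.NANOSECONDS.toMicros(System.nanoTime() - initNanos);
        }
        return micros;
    }
}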
I'm playing with a piece of code that measures the time needed to execute some Java code, to get a feeling for the efficiency or inefficiency of some of Java's functionality. In doing so I'm now stuck on a really strange effect that I just can't explain to myself. Maybe someone of you can help me understand it.
import java.util.LinkedList;
import java.util.List;

public class PerformanceCheck {

    public static void main(String[] args) {
        List<PerformanceCheck> removeList = new LinkedList<PerformanceCheck>();
        int maxTimes = 1000000000;

        for (int i = 0; i < 10; i++) {
            long time = System.currentTimeMillis();

            for (int times = 0; times < maxTimes; times++) {
                // PERFORMANCE CHECK BLOCK START
                if (removeList.size() > 0) {
                    testFunc(3);
                }
                // PERFORMANCE CHECK BLOCK END
            }

            long timeNow = System.currentTimeMillis();
            System.out.println("time: " + (timeNow - time));
        }
    }

    private static boolean testFunc(int test) {
        return 5 > test;
    }
}
Starting this results in a relatively long computation time (remember removeList is empty, so testFunc is not even called):
time: 2328
time: 2223
...
Whereas replacing any part of the combination of removeList.size() > 0 and testFunc(3) with something else gives better results. For example:
...
if (removeList.size() == 0) {
    testFunc(3);
}
...
Results in (testFunc is called every single time):
time: 8
time: 7
time: 0
time: 0
Even calling both functions independently of each other results in the lower computation time:
...
if (removeList.size() == 0);
testFunc(3);
...
Result:
time: 6
time: 5
time: 0
time: 0
...
Only this particular combination in my initial example takes so long. This is irritating me and I'd really like to understand it. What's so special about it?
Thanks.
Addition:
Changing testFunc() in the first example
if (removeList.size() > 0) {
    testFunc(times);
}
to something else, like
private static int testFunc2(int test) {
    return 5 * test;
}
will make it fast again.
That is really surprising. The generated bytecode is identical except for the conditional, which is ifle vs ifne.
The results are much more sensible if you turn off the JIT with -Xint. The second version is 2x slower. So it has to do with the JIT optimization.
I assume that it can optimize out the check in the second case but not the first (for whatever reason). Even though it means it does the work of the function, missing that conditional makes things much faster. It avoids pipeline stalls and all that.
While not directly related to this question, this is how you would correctly micro benchmark the code using Caliper. Below is a modified version of your code so that it will run with Caliper. The inner loops had to be modified some so that the VM will not optimize them out. It is surprisingly smart at realizing nothing was happening.
There are also a lot of nuances when benchmarking Java code. I wrote about some of the issues I ran into at Java Matrix Benchmark, such as how past history can affect current results. You will avoid many of those issues by using Caliper.
http://code.google.com/p/caliper/
Benchmarking issues with Java Matrix Benchmark
import java.util.LinkedList;
import java.util.List;

import com.google.caliper.Runner;
import com.google.caliper.SimpleBenchmark;

public class PerformanceCheck extends SimpleBenchmark {

    public int timeFirstCase(int reps) {
        List<PerformanceCheck> removeList = new LinkedList<PerformanceCheck>();
        removeList.add(new PerformanceCheck());

        int ret = 0;
        for (int i = 0; i < reps; i++) {
            if (removeList.size() > 0) {
                if (testFunc(i))
                    ret++;
            }
        }
        return ret;
    }

    public int timeSecondCase(int reps) {
        List<PerformanceCheck> removeList = new LinkedList<PerformanceCheck>();
        removeList.add(new PerformanceCheck());

        int ret = 0;
        for (int i = 0; i < reps; i++) {
            if (removeList.size() == 0) {
                if (testFunc(i))
                    ret++;
            }
        }
        return ret;
    }

    private static boolean testFunc(int test) {
        return 5 > test;
    }

    public static void main(String[] args) {
        Runner.main(PerformanceCheck.class, args);
    }
}
OUTPUT:
 0% Scenario{vm=java, trial=0, benchmark=FirstCase} 0.60 ns; σ=0.00 ns @ 3 trials
50% Scenario{vm=java, trial=0, benchmark=SecondCase} 1.92 ns; σ=0.22 ns @ 10 trials

 benchmark    ns linear runtime
 FirstCase 0.598 =========
SecondCase 1.925 ==============================

vm: java
trial: 0
Well, I am glad I don't have to deal with Java performance optimizations. I tried it myself with Java JDK 7 64-bit. The results are arbitrary ;). It makes no difference which lists I am using or whether I cache the result of size() before entering the loop. Also, entirely wiping out the test function makes almost no difference (so it can't be a branch prediction effect either).
Optimization flags improve performance, but are just as arbitrary.
The only logical conclusion here is that the JIT compiler is sometimes able to optimize away the statement (which is not that hard to believe), but it seems rather arbitrary. This is one of the many reasons why I prefer languages like C++, where the behaviour is at least deterministic, even if it is sometimes arbitrary.
BTW, in the latest Eclipse, as it always was on Windows, running this code via the IDE "Run" (no debug) is 10 times slower than running it from the console, so much for that...
When the runtime compiler can figure out testFunc evaluates to a constant, I believe it does not evaluate the loop, which explains the speedup.
When the condition is removeList.size() == 0, the call testFunc(3) gets evaluated to a constant. When the condition is removeList.size() != 0, the inner code never gets evaluated, so it can't be sped up. You can modify your code as follows:
for (int times = 0; times < maxTimes; times++) {
    testFunc(); // Removing this call makes the code slow again!
    if (removeList.size() != 0) {
        testFunc();
    }
}

private static boolean testFunc() {
    return testFunc(3);
}
When testFunc() is not initially called, the runtime compiler does not realize that testFunc() evaluates to a constant, so it cannot optimize the loop.
Certain functions like
private static int testFunc2(int test) {
return 5*test;
}
the compiler likely tries to pre-optimize (before execution), but apparently not for the case of a parameter being passed in as an integer and evaluated in a conditional.
Your benchmark returns times like
time: 107
time: 106
time: 0
time: 0
...
suggesting that it takes 2 iterations of the outer loop for the runtime compiler to finish optimizing. Running with the -server flag would probably return all 0's in the benchmark.
The times are unrealistically fast per iteration. This means the JIT has detected that your code doesn't do anything and has eliminated it. Subtle changes can confuse the JIT so that it can't determine that the code does nothing, and then it takes some time.
If you change the test to do something marginally useful, the difference will disappear.
These benchmarks are tough since compilers are so darned smart. One guess: Since the result of testFunc() is ignored, the compiler might be completely optimizing it out. Add a counter, something like
if (testFunc(3))
    counter++;
And, just for thoroughness, do a System.out.println(counter) at the end.
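For completeness, a minimal sketch of that suggestion, applied to the variant where testFunc(3) actually gets called (removeList.size() == 0); the class name is illustrative:

import java.util.LinkedList;
import java.util.List;

public class PerformanceCheckCounted {

    public static void main(String[] args) {
        List<Object> removeList = new LinkedList<>();
        int maxTimes = 1_000_000_000;
        long counter = 0;

        for (int i = 0; i < 10; i++) {
            long time = System.currentTimeMillis();
            for (int times = 0; times < maxTimes; times++) {
                if (removeList.size() == 0) {
                    if (testFunc(3)) {
                        counter++; // consume the result so the JIT cannot treat the call as dead code
                    }
                }
            }
            System.out.println("time: " + (System.currentTimeMillis() - time));
        }
        // Print the counter so the work stays observable.
        System.out.println(counter);
    }

    private static boolean testFunc(int test) {
        return 5 > test;
    }
}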
We're creating a scheduling application and we need to represent someone's available schedule during the day, regardless of what time zone they are in. Taking a cue from Joda Time's Interval, which represents an interval in absolute time between two instants (start inclusive, end exclusive), we created a LocalInterval. The LocalInterval is made up of two LocalTimes (start inclusive, end exclusive), and we even made a handy class for persisting this in Hibernate.
For example, if someone is available from 1:00pm to 5:00pm, we would create:
new LocalInterval(new LocalTime(13, 0), new LocalTime(17, 0));
So far so good---until someone wants to be available from 11:00pm until midnight on some day. Since the end of an interval is exclusive, this should be easily represented as such:
new LocalInterval(new LocalTime(23, 0), new LocalTime(24, 0));
Ack! No go. This throws an exception, because LocalTime cannot hold any hour greater than 23.
This seems like a design flaw to me---Joda didn't consider that someone may want a LocalTime that represents a non-inclusive endpoint.
This is really frustrating, as it blows a hole in what was otherwise a very elegant model that we created.
What are my options---other than forking Joda and taking out the check for hour 24? (No, I don't like the option of using a dummy value---say 23:59:59---to represent 24:00.)
Update: To those who keep saying that there is no such thing as 24:00, here's a quote from ISO 8601-2004 4.2.3 Notes 2,3: "The end of one calendar day [24:00] coincides with [00:00] at the start of the next calendar day ..." and "Representations where [hh] has the value [24] are only preferred to represent the end of a time interval ...."
Well after 23:59:59 comes 00:00:00 on the next day. So maybe use a LocalTime of 0, 0 on the next calendar day?
Although if you treated the end time as inclusive, 23:59:59 would really be what you want anyway: it includes the 59th second of the 59th minute of the 23rd hour, and the range then ends exactly at 00:00:00.
There is no such thing as 24:00 (when using LocalTime).
The solution we finally went with was to use 00:00 as a stand-in for 24:00, with logic throughout the class and the rest of the application to interpret this local value. This is a true kludge, but it's the least intrusive and most elegant thing I could come up with.
First, the LocalTimeInterval class keeps an internal flag of whether the interval endpoint is end-of-day midnight (24:00). This flag will only be true if the end time is 00:00 (equal to LocalTime.MIDNIGHT).
/**
 * @return Whether the end of the day is {@link LocalTime#MIDNIGHT} and this should be considered midnight of the
 *         following day.
 */
public boolean isEndOfDay()
{
    return isEndOfDay;
}
By default the constructor considers 00:00 to be beginning-of-day, but there is an alternate constructor for manually creating an interval that goes all day:
public LocalTimeInterval(final LocalTime start, final LocalTime end, final boolean considerMidnightEndOfDay)
{
    ...
    this.isEndOfDay = considerMidnightEndOfDay && LocalTime.MIDNIGHT.equals(end);
}
There is a reason why this constructor doesn't just take a start time and an "is end-of-day" flag: when it is used with a UI that has a drop-down list of times, we don't know whether the user will choose 00:00 (which is rendered as 24:00), but we do know that, because the drop-down list is for the end of the range, in our use case it means 24:00. (Although LocalTimeInterval allows empty intervals, we don't allow them in our application.)
Overlap checking requires special logic to take care of 24:00:
public boolean overlaps(final LocalTimeInterval localInterval)
{
    if (localInterval.isEndOfDay())
    {
        if (isEndOfDay())
        {
            return true;
        }
        return getEnd().isAfter(localInterval.getStart());
    }
    if (isEndOfDay())
    {
        return localInterval.getEnd().isAfter(getStart());
    }
    return localInterval.getEnd().isAfter(getStart()) && localInterval.getStart().isBefore(getEnd());
}
Similarly, converting to an absolute Interval requires adding another day to the result if isEndOfDay() returns true. It is important that application code never constructs an Interval manually from a LocalTimeInterval's start and end values, as the end time may indicate end-of-day:
public Interval toInterval(final ReadableInstant baseInstant)
{
    final DateTime start = getStart().toDateTime(baseInstant);
    DateTime end = getEnd().toDateTime(baseInstant);
    if (isEndOfDay())
    {
        end = end.plusDays(1);
    }
    return new Interval(start, end);
}
When persisting LocalTimeInterval in the database, we were able to make the kludge totally transparent, as Hibernate and SQL have no 24:00 restriction (and indeed have no concept of LocalTime anyway). If isEndOfDay() returns true, our PersistentLocalTimeIntervalAsTime implementation stores and retrieves a true time value of 24:00:
...
final Time startTime = (Time) Hibernate.TIME.nullSafeGet(resultSet, names[0]);
final Time endTime = (Time) Hibernate.TIME.nullSafeGet(resultSet, names[1]);
...
final LocalTime start = new LocalTime(startTime, DateTimeZone.UTC);
if (endTime.equals(TIME_2400))
{
    return new LocalTimeInterval(start, LocalTime.MIDNIGHT, true);
}
return new LocalTimeInterval(start, new LocalTime(endTime, DateTimeZone.UTC));
and
final Time startTime = asTime(localTimeInterval.getStart());
final Time endTime = localTimeInterval.isEndOfDay() ? TIME_2400 : asTime(localTimeInterval.getEnd());
Hibernate.TIME.nullSafeSet(statement, startTime, index);
Hibernate.TIME.nullSafeSet(statement, endTime, index + 1);
It's sad that we had to write a workaround in the first place; this is the best I could do.
It's not a design flaw. LocalTime doesn't handle (24, 0) because there's no such thing as 24:00.
Also, what happens when you want to represent an interval between, say 9pm and 3am?
What's wrong with this:
new LocalInterval(new LocalTime(23, 0), new LocalTime(0, 0));
You just have to handle the possibility that the end time might be "before" the start time, and add a day when necessary, and just hope that no one wants to represent an interval longer than 24 hours.
Alternatively, represent the interval as a combination of a LocalDate and a Duration or Period. That removes the "longer than 24 hours" problem.
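A minimal sketch of the first approach, assuming Joda-Time: an end that is not after the start is taken to mean the interval rolls over into the next day (the class and method names are illustrative):

import org.joda.time.DateTime;
import org.joda.time.Interval;
import org.joda.time.LocalDate;
import org.joda.time.LocalTime;

public final class LocalIntervals {

    /**
     * Builds an absolute Interval for the given date, rolling the end over to
     * the next day when it is not after the start (e.g. 23:00 -> 00:00).
     */
    public static Interval toInterval(LocalDate date, LocalTime start, LocalTime end) {
        DateTime startDateTime = date.toDateTime(start);
        DateTime endDateTime = date.toDateTime(end);
        if (!endDateTime.isAfter(startDateTime)) {
            endDateTime = endDateTime.plusDays(1); // assume the interval wraps past midnight
        }
        return new Interval(startDateTime, endDateTime);
    }
}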
Your problem can be framed as defining an interval on a domain that wraps around. Your min is 00:00, and your max is 24:00 (not inclusive).
Suppose your interval is defined as (lower, upper). If you require that lower < upper, you can represent (21:00, 24:00), but you are still unable to represent (21:00, 02:00), an interval that wraps across the min/max boundary.
I don't know whether your scheduling application would involve wrap-around intervals, but if you are going to go to (21:00, 24:00) without involving days, I don't see what will stop you from requiring (21:00, 02:00) without involving days (thus leading to a wrap-around dimension).
If your design is amenable to a wrap-around implementation, the interval operators are quite trivial.
For example (in pseudo-code):
is x in (lower, upper)? :=
    if (lower <= upper) return (lower <= x && x <= upper)
    else return (lower <= x || x <= upper)
In this case, I have found that writing a wrapper around Joda-Time implementing the operators is simple enough, and reduces impedance between thought/math and API. Even if it is just for the inclusion of 24:00 as 00:00.
I do agree that the exclusion of 24:00 annoyed me at the start, and it'll be nice if someone offered a solution. Luckily for me, given that my use of time intervals is dominated by wrap-around semantics, I always end up with a wrapper, which incidentally solves the 24:00 exclusion.
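For illustration, here is a minimal Java sketch of such a wrap-around wrapper over Joda-Time's LocalTime, implementing the pseudo-code above with inclusive bounds (the class name is mine):

import org.joda.time.LocalTime;

/** A time-of-day interval on a wrap-around domain, with inclusive bounds. */
public final class WrapAroundInterval {

    private final LocalTime lower;
    private final LocalTime upper;

    public WrapAroundInterval(LocalTime lower, LocalTime upper) {
        this.lower = lower;
        this.upper = upper;
    }

    /** Same logic as the pseudo-code: the interval may wrap across midnight. */
    public boolean contains(LocalTime x) {
        if (!lower.isAfter(upper)) {
            // Ordinary interval: lower <= x && x <= upper
            return !x.isBefore(lower) && !x.isAfter(upper);
        }
        // Wrap-around interval: lower <= x || x <= upper
        return !x.isBefore(lower) || !x.isAfter(upper);
    }
}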
The time 24:00 is a difficult one. While we humans can understand what is meant, coding up an API to represent that without negatively impacting everything else appears to me to be nigh on impossible.
The value 24 being invalid is deeply encoded in Joda-Time - trying to remove it would have negative implications in a lot of places. I wouldn't recommend trying to do that.
For your problem, the local interval should consist of either (LocalTime, LocalTime, Days) or (LocalTime, Period). The latter is slightly more flexible. This is needed to correctly support an interval from 23:00 to 03:00.
I find JodaStephen's proposal of (LocalTime, LocalTime, Days) acceptable.
Consider 13 March 2011: your availability on that Sunday from 00:00-12:00 would be (00:00, 12:00, 0), which was in fact only 11 hours long because of DST.
An availability from, say, 15:00-24:00 you could then code as (15:00, 00:00, 1), which would expand to 2011-03-13T15:00 - 2011-03-14T00:00, where the end is the desired 2011-03-13T24:00. That means you would use a LocalTime of 00:00 on the next calendar day, as aroth already proposed.
Of course it would be nice to use a 24:00 LocalTime directly, conforming to ISO 8601, but this seems impossible without changing a lot inside Joda-Time, so this approach seems the lesser evil.
And last but not least, you could even cross the boundary of a single day with something like (16:00, 05:00, 1)...
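A minimal sketch of that (LocalTime, LocalTime, days) shape, assuming Joda-Time (the class name is illustrative):

import org.joda.time.DateTime;
import org.joda.time.Interval;
import org.joda.time.LocalDate;
import org.joda.time.LocalTime;

/** (start, end, days): the end time lies the given number of calendar days after the start's day. */
public final class LocalTimeSpan {

    private final LocalTime start;
    private final LocalTime end;
    private final int endDayOffset;

    public LocalTimeSpan(LocalTime start, LocalTime end, int endDayOffset) {
        this.start = start;
        this.end = end;
        this.endDayOffset = endDayOffset;
    }

    /** Anchors the span on a date, so (15:00, 00:00, 1) on 2011-03-13 ends at 2011-03-14T00:00. */
    public Interval toInterval(LocalDate date) {
        DateTime startDateTime = date.toDateTime(start);
        DateTime endDateTime = date.plusDays(endDayOffset).toDateTime(end);
        return new Interval(startDateTime, endDateTime);
    }
}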
This is our implementation of TimeInterval, using null as the end date for end-of-day. It supports the overlaps() and contains() methods and is also based on Joda-Time. It supports intervals spanning multiple days.
import org.joda.time.DateTime;
import org.joda.time.LocalTime;
import org.joda.time.ReadableInstant;

/**
 * Description: Immutable time interval<br>
 * The start instant is inclusive but the end instant is exclusive.
 * The end is always greater than or equal to the start.
 * The interval is also restricted to just one chronology and time zone.
 * Start can be null (infinite).
 * End can be null and will stay null to let the interval last until end-of-day.
 * It supports intervals spanning multiple days.
 */
public class TimeInterval {

    public static final ReadableInstant INSTANT = null; // null means today
    // public static final ReadableInstant INSTANT = new Instant(0); // this means 1st jan 1970

    private final DateTime start;
    private final DateTime end;

    public TimeInterval() {
        this((LocalTime) null, null);
    }

    /**
     * @param from - null or a time (null = left unbounded == LocalTime.MIDNIGHT)
     * @param to - null or a time (null = right unbounded)
     * @throws IllegalArgumentException if invalid (to is before from)
     */
    public TimeInterval(LocalTime from, LocalTime to) throws IllegalArgumentException {
        this(from == null ? null : from.toDateTime(INSTANT),
             to == null ? null : to.toDateTime(INSTANT));
    }

    /**
     * Creates an interval, possibly spanning multiple days.
     *
     * @param start - start distinct time
     * @param end - end distinct time
     * @throws IllegalArgumentException - if start > end. start must be <= end
     */
    public TimeInterval(DateTime start, DateTime end) throws IllegalArgumentException {
        this.start = start;
        this.end = end;
        if (start != null && end != null && start.isAfter(end))
            throw new IllegalArgumentException("start must be less or equal to end");
    }

    public DateTime getStart() {
        return start;
    }

    public DateTime getEnd() {
        return end;
    }

    public boolean isEndUndefined() {
        return end == null;
    }

    public boolean isStartUndefined() {
        return start == null;
    }

    public boolean isUndefined() {
        return isEndUndefined() && isStartUndefined();
    }

    public boolean overlaps(TimeInterval other) {
        return (start == null || (other.end == null || start.isBefore(other.end))) &&
               (end == null || (other.start == null || other.start.isBefore(end)));
    }

    public boolean contains(TimeInterval other) {
        return ((start != null && other.start != null && !start.isAfter(other.start)) || (start == null)) &&
               ((end != null && other.end != null && !other.end.isAfter(end)) || (end == null));
    }

    public boolean contains(LocalTime other) {
        return contains(other == null ? null : other.toDateTime(INSTANT));
    }

    public boolean containsEnd(DateTime other) {
        if (other == null) {
            return end == null;
        } else {
            return (start == null || !other.isBefore(start)) &&
                   (end == null || !other.isAfter(end));
        }
    }

    public boolean contains(DateTime other) {
        if (other == null) {
            return start == null;
        } else {
            return (start == null || !other.isBefore(start)) &&
                   (end == null || other.isBefore(end));
        }
    }

    @Override
    public String toString() {
        final StringBuilder sb = new StringBuilder();
        sb.append("TimeInterval");
        sb.append("{start=").append(start);
        sb.append(", end=").append(end);
        sb.append('}');
        return sb.toString();
    }
}
For the sake of completeness this test fails:
@Test
public void testJoda() throws DGConstraintViolatedException {
    DateTimeFormatter simpleTimeFormatter = DateTimeFormat.forPattern("HHmm");

    LocalTime t1 = LocalTime.parse("0000", simpleTimeFormatter);
    LocalTime t2 = LocalTime.MIDNIGHT;

    Assert.assertTrue(t1.isBefore(t2));
}
This means the MIDNIGHT constant is not very useful for the problem, as someone suggested.
This question is old, but many of these answers focus on Joda Time, and only partly address the true underlying problem:
The model in the OP's code doesn't match the reality it's modeling.
Unfortunately, since you do appear to care about the boundary condition between days, your "otherwise elegant model" isn't a good match for the problem you are modeling. You've used a pair of time values to represent intervals. Attempting to simplify the model down to a pair of times is simplifying below the complexity of the real world problem. Day boundaries actually do exist in reality, and a pair of times loses that information. As always, oversimplification results in subsequent complexity to restore or compensate for the missing information. Real complexity can only be pushed around from one part of the code to another.
The complexity of reality can only be eliminated with the magic of "unsupported use cases".
Your model would only make sense in a problem space where one didn't care how many days might exist between the start and end times. That problem space doesn't match most real world problems. Therefore, it's not surprising that Joda Time doesn't support it well. The use of 25 values for the hours place (0-24) is a code smell and usually points to a weakness in the design. There are only 24 hours in the day so 25 values should not be needed!
Note that since you aren't capturing the date on either end of LocalInterval, your class also does not capture sufficient information to account for daylight savings time. [00:30:00 TO 04:00:00) is usually 3.5 hours long but could also be 2.5, or 4.5 hours long.
You should either use a start date/time and a duration, or a start date/time and an end date/time (inclusive start, exclusive end is a good default choice). Using a duration becomes tricky if you intend to display the end time, because of things like daylight saving time, leap years and leap seconds. On the other hand, using an end date becomes just as tricky if you expect to display the duration. Storing both, of course, is dangerous because it violates the DRY principle. If I were writing such a class I would store an end date/time and encapsulate the logic for obtaining the duration in a method on the object. That way clients of the class do not all come up with their own code to calculate the duration.
I'd code up an example, but there's an even better option. Use the standard Interval class from Joda-Time, which already accepts a start instant and either a duration or an end instant. It will also happily calculate the duration or the end time for you. Sadly JSR-310 doesn't have an interval or similar class (though one can use ThreeTen-Extra to make up for that).
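For illustration, a short example of Joda-Time's Interval built either way; the date and zone are chosen to show the DST effect mentioned above:

import org.joda.time.DateTime;
import org.joda.time.DateTimeZone;
import org.joda.time.Duration;
import org.joda.time.Interval;

public class IntervalExample {
    public static void main(String[] args) {
        DateTimeZone zone = DateTimeZone.forID("America/New_York");
        DateTime start = new DateTime(2011, 3, 13, 0, 30, zone); // the night DST starts

        // Start + end: the duration is derived for you.
        Interval byEnd = new Interval(start, new DateTime(2011, 3, 13, 4, 0, zone));
        System.out.println(byEnd.toDuration().getStandardMinutes()); // 150, not 210, because of the DST gap

        // Start + duration: the end instant is derived for you, DST included.
        Interval byDuration = new Interval(start, Duration.standardHours(3));
        System.out.println(byDuration.getEnd());
    }
}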
The relatively bright folks at Joda Time and Sun/Oracle (JSR-310) both thought very carefully about these problems. You might be smarter than them. It's possible. However, even if you are a brighter bulb, your 1 hour is probably not going to accomplish what they spent years on. Unless you are somewhere out in an esoteric edge case, it's usually waste of time and money to spend effort second guessing them. (of course at the time of the OP JSR-310 wasn't complete...)
Hopefully the above will help folks who find this question while designing or fixing similar issues.