The default java.time.Clock implementation is based on System.currentTimeMillis(). As discussed, for example, in
Monotonically increasing time in Java?,
it is not guaranteed to be monotonic.
And indeed, I regularly experience a situation where the system time is automatically adjusted a few seconds into the past, and the Java clock jumps back with it.
//now() returns 2016-01-13T22:34:05.681Z
order.setCreationTime(Instant.now());
//... something happens, order gets cancelled
//now() returns 2016-01-13T22:34:03.123Z
//which is a few seconds before the former one,
//even though the call was performed later - in any reasonable sense.
//The recorded history of events is obviously inconsistent with the real world.
order.setCancelationTime(Instant.now());
It is then impossible to perform time-sensitive tasks, like recording and analysing event history, when one cannot rely on time moving in only one direction.
The aforementioned post says that System.nanoTime() is monotonic (if the underlying system supports it). So, if I want to base my code on the java.time API, I would need a Clock that uses nanoTime internally to ensure a one-way flow of time. Maybe something like this would work. Or wouldn't it?
public class MyClock extends java.time.Clock {
    private final long adjustment;

    public MyClock() {
        long cpuTimeMillis = System.nanoTime() / 1_000_000;
        long systemTimeMillis = System.currentTimeMillis();
        adjustment = systemTimeMillis - cpuTimeMillis;
    }

    @Override
    public long millis() {
        long currentCpuTimeMillis = System.nanoTime() / 1_000_000;
        return currentCpuTimeMillis + adjustment;
    }
}
It is just a sketch, not a full Clock implementation. And I suppose a proper implementation should also perform the adjustment against another Clock passed in the constructor, rather than directly against the currentTimeMillis().
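A fuller sketch along those lines, anchoring against a base Clock passed in the constructor, might look like this. The class and field names are purely illustrative, not from any library:

```java
import java.time.Clock;
import java.time.Instant;
import java.time.ZoneId;

// Anchors itself to a base clock once, then advances using only
// System.nanoTime(), so millis() can never go backwards.
public final class MonotonicClock extends Clock {
    private final ZoneId zone;
    private final long anchorNanos;   // nanoTime() reading at construction
    private final long anchorMillis;  // base clock reading at construction

    public MonotonicClock(Clock base) {
        this.zone = base.getZone();
        this.anchorNanos = System.nanoTime();
        this.anchorMillis = base.millis();
    }

    private MonotonicClock(MonotonicClock other, ZoneId zone) {
        this.zone = zone;
        this.anchorNanos = other.anchorNanos;
        this.anchorMillis = other.anchorMillis;
    }

    @Override
    public long millis() {
        // Elapsed time measured purely by the monotonic nanoTime source
        return anchorMillis + (System.nanoTime() - anchorNanos) / 1_000_000;
    }

    @Override
    public Instant instant() {
        return Instant.ofEpochMilli(millis());
    }

    @Override
    public ZoneId getZone() {
        return zone;
    }

    @Override
    public Clock withZone(ZoneId zone) {
        return new MonotonicClock(this, zone);
    }
}
```

Note the trade-off: once constructed, this clock ignores later corrections to the system time, so it stays monotonic but can drift away from the wall clock.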
Or, is there already such a monotonic Clock implementation available anywhere? I would guess there must have been many people facing the same issue.
Conclusion
Thanks for the inspiring comments and answers. There are several interesting points scattered across the comments, so I will summarize it here.
1. Monotonic clock
As for my original question: yes, it is possible to have a monotonic clock that is not affected by the system time jumping backwards. Such an implementation can be based on System.nanoTime() as I suggested above. There used to be problems with this approach in the past, but it should work fine on today's modern systems. This approach is already implemented, for example, in the Time4J library; their monotonic clock can be easily converted to a java.time.Clock:
Clock clock = TemporalType.CLOCK.from(SystemClock.MONOTONIC);
2. Proper system time control
It is possible to configure system time management (ntpd on Unix/Linux) so that the system time virtually never moves back (it just gets slowed down if necessary). Then one can rely on the system time being monotonic, and no clock magic is necessary in Java.
I will go this way, as my app is server-side and I can get the time under control. Actually, I experienced the anomalies in an experimental environment that I installed myself (with only superficial knowledge of the domain), and it was using just the ntpdate client (which can jump backwards if the time is out of sync), rather than ntpd (which can be configured so that it never steps back).
3. Using sequences rather than clock
When one needs to track a strong relation what happened before what, it is safer to give the events serial numbers from an atomically generated sequence and not rely on the wall clock. It becomes the only option once the application is running on several nodes (which is not my case though).
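A minimal sketch of that idea, assuming nothing beyond the JDK (the class and method names are made up for illustration): events carry an atomically generated sequence number that defines their order, and the wall-clock timestamp is recorded only as auxiliary information.

```java
import java.util.concurrent.atomic.AtomicLong;

public class EventLog {
    private static final AtomicLong SEQ = new AtomicLong();

    public static final class Event {
        public final long seq;         // defines the happened-before order
        public final long wallMillis;  // informational only; may jump around
        public final String what;

        Event(long seq, long wallMillis, String what) {
            this.seq = seq;
            this.wallMillis = wallMillis;
            this.what = what;
        }
    }

    public static Event record(String what) {
        // incrementAndGet() is atomic, so two events can never share
        // a sequence number, even across threads.
        return new Event(SEQ.incrementAndGet(), System.currentTimeMillis(), what);
    }
}
```

With this, a cancellation recorded after a creation is guaranteed to sort after it by `seq`, even if the wall clock jumped backwards in between.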
As @the8472 says, the key is to have correct time synchronization on the machine where the JVM runs.
If you program a client then it really can be dangerous to rely on the system clocks.
But for servers, there is a solution - you might want to consider using NTP with strict configuration.
Here they basically explain that NTP will slow down the clock rather than set it backwards.
And this NTP documentation says :
Sometimes, in particular when ntpd is first started, the error might
exceed 128 ms. This may on occasion cause the clock to be set
backwards if the local clock time is more than 128 s in the future
relative to the server. In some applications, this behavior may be
unacceptable. If the -x option is included on the command line, the
clock will never be stepped and only slew corrections will be used.
Do note that nanoTime may increase monotonically but it does not relate nicely to wall time, e.g. due to hibernation events, VM suspension and similar things.
And if you start distributing things across multiple servers then synchronizing on currentMillis may also bite you again due to clock drift.
Maybe you should consider getting the system time of your servers under control.
Or track relative sequence of the events separately from the time at which they were supposedly recorded.
Related
I have a Java service running on Windows 7 that runs once per day on a SingleThreadScheduledExecutor. I've never given it much thought as it's non-critical, but recently I looked at the numbers and saw that the service was drifting approximately 15 minutes per day, which sounds way too much, so I dug into it.
Executors.newSingleThreadScheduledExecutor().scheduleAtFixedRate(() -> {
    long drift = (System.currentTimeMillis() - lastTimeStamp - seconds * 1000);
    lastTimeStamp = System.currentTimeMillis();
}, 0, 10, TimeUnit.SECONDS);
This method pretty consistently drifts +110ms per each 10 seconds. If I run it on a 1 second interval the drift averages +11ms.
Interestingly if I do the same on a Timer() values are pretty consistent with an average drift less than a full millisecond.
new Timer().schedule(new TimerTask() {
    @Override
    public void run() {
        long drift = (System.currentTimeMillis() - lastTimeStamp - seconds * 1000);
        lastTimeStamp = System.currentTimeMillis();
    }
}, 0, seconds * 1000);
Linux: doesn't drift (neither with the Executor nor with the Timer)
Windows: drifts like crazy with the Executor, doesn't with the Timer
Tested with Java8 and Java11.
Interestingly, if you assume a drift of 11ms per second you'll get 950400ms drift per day which amounts to 15.84 minutes per day. So it's pretty consistent.
The question is: why?
Why would this happen with a SingleThreadExecutor but not with a Timer.
Update 1: following Slaw's comment, I tried on multiple different hardware. What I found is that this issue doesn't manifest on any personal hardware, only on the company one. On company hardware it also manifests on Win10, though an order of magnitude less.
As pointed out in the comments, the ScheduledThreadPoolExecutor bases its calculations on System.nanoTime(). For better or worse, the old Timer API, however, predates nanoTime(), and so uses System.currentTimeMillis() instead.
The difference here might seem subtle, but is more significant than one might expect. Contrary to popular belief, nanoTime() is not just a "more accurate version" of currentTimeMillis(). Millis is locked to system time, whereas nanos is not. Or as the docs put it:
This method can only be used to measure elapsed time and is not related to any other notion of system or wall-clock time. [...] The values returned by this method become meaningful only when the difference between two such values, obtained within the same instance of a Java virtual machine, is computed.
In your example, you're not following this guidance for the values to be "meaningful" - understandably, because the ScheduledThreadPoolExecutor only uses nanoTime() as an implementation detail. But the end result is the same, that being that you can't guarantee that it will stay synchronised to the system clock.
But why not? Seconds are seconds, right, so the two should stay in sync from a certain, known point?
Well, in theory, yes. But in practice, probably not.
Taking a look at the relevant native code on windows:
LARGE_INTEGER current_count;
QueryPerformanceCounter(&current_count);
double current = as_long(current_count);
double freq = performance_frequency;
jlong time = (jlong)((current/freq) * NANOSECS_PER_SEC);
return time;
We see nanos() uses the QueryPerformanceCounter API, which works by getting the "ticks" of a frequency that's defined by QueryPerformanceFrequency. That frequency will stay identical, but the timer it's based on, and the synchronisation algorithm Windows uses, vary by configuration, OS, and underlying hardware. Even ignoring the above, it's never going to be close to 100% accurate (it's based on a reasonably cheap crystal oscillator somewhere on the board, not a caesium time standard!), so it's going to drift away from the system time as NTP keeps the latter in sync with reality.
In particular, this link gives some useful background, and reinforces the above point:
When you need time stamps with a resolution of 1 microsecond or better and you don't need the time stamps to be synchronized to an external time reference, choose QueryPerformanceCounter.
(Bolding is mine.)
For your specific case of Windows 7 performing badly, note that in Windows 8+, the TSC synchronisation algorithm was improved, and QueryPerformanceCounter was always based on a TSC (as opposed to Windows 7, where it could be a TSC, HPET or the ACPI PM timer; the last of these is especially inaccurate). I suspect this is the most likely reason the situation improves tremendously on Windows 10.
That being said, the above factors still mean that you can't rely on the ScheduledThreadPoolExecutor to keep in time with "real" time - it will always drift. If that drift is an issue, then it's not a solution you can rely on in this context.
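You can observe this divergence directly by comparing the elapsed time reported by the two sources over the same interval. This is a rough probe, not a benchmark; the class and method names are made up, and the actual drift you see depends entirely on your hardware and NTP configuration:

```java
public class DriftProbe {
    /**
     * Returns how far a nanoTime-based elapsed measurement diverged from a
     * currentTimeMillis-based one over the given interval, in milliseconds.
     */
    static long driftMillis(long sleepMs) throws InterruptedException {
        long startMillis = System.currentTimeMillis();
        long startNanos = System.nanoTime();

        Thread.sleep(sleepMs);

        long elapsedByMillis = System.currentTimeMillis() - startMillis;
        long elapsedByNanos = (System.nanoTime() - startNanos) / 1_000_000;

        // On a machine where NTP is slewing the system clock, these two
        // elapsed values slowly drift apart.
        return elapsedByNanos - elapsedByMillis;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("drift over 2 s: " + driftMillis(2000) + " ms");
    }
}
```

Over a couple of seconds the drift is usually negligible; the point is that nothing bounds it over hours or days.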
Side note: In Windows 8+, there is a GetSystemTimePreciseAsFileTime function which offers the high resolution of QueryPerformanceCounter combined with the accuracy of the system time. If Windows 7 was dropped as a supported platform, this could in theory be used to provide a System.getCurrentTimeNanos() method or similar, assuming other similar native functions exist for other supported platforms.
CronScheduler is a project of mine designed to be proof against time drift problem, and at the same time it avoids some of the problems with the old Timer class described in this post.
Example usage:
Duration syncPeriod = Duration.ofMinutes(1);
CronScheduler cron = CronScheduler.create(syncPeriod);
cron.scheduleAtFixedRateSkippingToLatest(0, 1, TimeUnit.MINUTES, runTimeMillis -> {
// Collect and send summary metrics to a remote monitoring system
});
Note: this project was actually inspired by this StackOverflow question.
Accuracy Vs. Precision
What I would like to know is whether I should use System.currentTimeMillis() or System.nanoTime() when updating my object's positions in my game? Their change in movement is directly proportional to the elapsed time since the last call and I want to be as precise as possible.
I've read that there are some serious time-resolution issues between different operating systems (namely that Mac/Linux have almost 1 ms resolution, while Windows has 50 ms resolution??). I'm primarily running my apps on Windows, and 50 ms resolution seems pretty inaccurate.
Are there better options than the two I listed?
Any suggestions / comments?
If you're just looking for extremely precise measurements of elapsed time, use System.nanoTime(). System.currentTimeMillis() will give you the most accurate possible elapsed time in milliseconds since the epoch, but System.nanoTime() gives you a nanosecond-precise time, relative to some arbitrary point.
From the Java Documentation:
public static long nanoTime()
Returns the current value of the most precise available system timer, in nanoseconds.
This method can only be used to
measure elapsed time and is not
related to any other notion of system
or wall-clock time. The value returned
represents nanoseconds since some
fixed but arbitrary origin time (perhaps in
the future, so values may be
negative). This method provides
nanosecond precision, but not
necessarily nanosecond accuracy. No
guarantees are made about how
frequently values change. Differences
in successive calls that span greater
than approximately 292 years (2^63
nanoseconds) will not accurately
compute elapsed time due to numerical
overflow.
For example, to measure how long some code takes to execute:
long startTime = System.nanoTime();
// ... the code being measured ...
long estimatedTime = System.nanoTime() - startTime;
See also: JavaDoc System.nanoTime() and JavaDoc System.currentTimeMillis() for more info.
Since no one else has mentioned this…
It is not safe to compare the results of System.nanoTime() calls between different JVMs; each JVM may have an independent 'origin' time.
System.currentTimeMillis() will return the (approximate) same value between JVMs, because it is tied to the system wall clock time.
If you want to compute the amount of time that has elapsed between two events, like a stopwatch, use nanoTime(); changes in the system wall-clock make currentTimeMillis() incorrect for this use case.
Update by Arkadiy: I've observed more correct behavior of System.currentTimeMillis() on Windows 7 in Oracle Java 8. The time was returned with 1 millisecond precision. The source code in OpenJDK has not changed, so I do not know what causes the better behavior.
David Holmes of Sun posted a blog article a couple years ago that has a very detailed look at the Java timing APIs (in particular System.currentTimeMillis() and System.nanoTime()), when you would want to use which, and how they work internally.
Inside the Hotspot VM: Clocks, Timers and Scheduling Events - Part I - Windows
One very interesting aspect of the timer used by Java on Windows for APIs that have a timed wait parameter is that the resolution of the timer can change depending on what other API calls may have been made - system wide (not just in the particular process). He shows an example where using Thread.sleep() will cause this resolution change.
As others have said, currentTimeMillis is clock time, which changes due to users changing the time settings, leap seconds, and internet time sync. (Daylight saving and time zone changes do not affect it, since it counts UTC-based milliseconds since the epoch.) If your app depends on monotonically increasing elapsed time values, you might prefer nanoTime instead.
You might think that the players won't be fiddling with the time settings during game play, and maybe you'd be right. But don't underestimate the disruption due to internet time sync, or perhaps remote desktop users. The nanoTime API is immune to this kind of disruption.
If you want to use clock time, but avoid discontinuities due to internet time sync, you might consider an NTP client such as Meinberg, which "tunes" the clock rate to zero it in, instead of just resetting the clock periodically.
I speak from personal experience. In a weather application that I developed, I was getting randomly occurring wind speed spikes. It took a while for me to realize that my timebase was being disrupted by the behavior of clock time on a typical PC. All my problems disappeared when I started using nanoTime. Consistency (monotonicity) was more important to my application than raw precision or absolute accuracy.
System.nanoTime() isn't supported in older JVMs. If that is a concern, stick with currentTimeMillis
Regarding accuracy, you are almost correct. On SOME Windows machines, currentTimeMillis() has a resolution of about 10ms (not 50ms). I'm not sure why, but some Windows machines are just as accurate as Linux machines.
I have used GAGETimer in the past with moderate success.
Yes, if such precision is required use System.nanoTime(), but be aware that you are then requiring a Java 5+ JVM.
On my XP systems, I see system time reported with a granularity of 278 nanoseconds using the following code:
private void test() {
    System.out.println("currentTimeMillis: "+System.currentTimeMillis());
    System.out.println("nanoTime         : "+System.nanoTime());
    System.out.println();

    testNano(false); // to sync with currentTimeMillis() timer tick
    for(int xa=0; xa<10; xa++) {
        testNano(true);
    }
}

private void testNano(boolean shw) {
    long strMS=System.currentTimeMillis();
    long strNS=System.nanoTime();
    long curMS;
    while((curMS=System.currentTimeMillis()) == strMS) {
        if(shw) { System.out.println("Nano: "+(System.nanoTime()-strNS)); }
    }
    if(shw) { System.out.println("Nano: "+(System.nanoTime()-strNS)+", Milli: "+(curMS-strMS)); }
}
For game graphics & smooth position updates, use System.nanoTime() rather than System.currentTimeMillis(). I switched from currentTimeMillis() to nanoTime() in a game and got a major visual improvement in smoothness of motion.
While one millisecond may seem as though it should already be precise, visually it is not. The factors nanoTime() can improve include:
accurate pixel positioning below wall-clock resolution
ability to anti-alias between pixels, if you want
Windows wall-clock inaccuracy
clock jitter (inconsistency of when wall-clock actually ticks forward)
As other answers suggest, nanoTime does have a performance cost if called repeatedly -- it would be best to call it just once per frame, and use the same value to calculate the entire frame.
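The once-per-frame pattern can be sketched like this. The class is illustrative; `update` stands in for whatever moves your game objects, and a real loop would also render and regulate its frame rate:

```java
public class GameLoop {
    double x = 0;                        // some object's position, in pixels
    static final double SPEED = 100.0;   // pixels per second

    void update(double deltaSeconds) {
        // Movement proportional to elapsed time, as the question describes
        x += SPEED * deltaSeconds;
    }

    void run(int frames) {
        long previous = System.nanoTime();
        for (int i = 0; i < frames; i++) {
            long now = System.nanoTime();  // the single nanoTime sample for this frame
            double deltaSeconds = (now - previous) / 1_000_000_000.0;
            previous = now;
            update(deltaSeconds);          // every object sees the same delta
            // render() would go here in a real game
        }
    }
}
```

Because every object in a frame is moved with the same delta, positions stay consistent with each other even if the frame time varies.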
System.currentTimeMillis() is not safe for measuring elapsed time because it is sensitive to changes in the system's real-time clock.
You should use System.nanoTime.
Please refer to Java System help:
About nanoTime method:
.. This method provides nanosecond precision, but not necessarily
nanosecond resolution (that is, how frequently the value changes) - no
guarantees are made except that the resolution is at least as good as
that of currentTimeMillis()..
If you use System.currentTimeMillis() your elapsed time can be negative (Back <-- to the future)
I've had good experience with nanotime. It provides wall-clock time as two longs (seconds since the epoch and nanoseconds within that second), using a JNI library. It's available with the JNI part precompiled for both Windows and Linux.
One thing to note here is the inconsistency of the nanoTime method: it does not give very consistent values for the same input. currentTimeMillis does much better in terms of performance and consistency, and also, though not as precise as nanoTime, has a lower margin of error, and therefore more accuracy in its value. I would therefore suggest that you use currentTimeMillis.
This might sound like a weird question, but how can you program time without using the API of a programming language? Time is such an abstract concept, how can you write a program without using the predefined function for time.
I was thinking it would have to be calculated by counting processor computations, but if every computer performs at a different speed, how would you write code to measure time?
Assuming the language the program is written in doesn't matter, how would you do this?
EDIT: I should also say, not using system time, or any pre-generated version of time from the system
Typically time is provided to the language runtime by the OS layer. So, if you're running a C/C++ program compiled in a Windows environment, it's asking the Windows OS for the time. If you're running a Java program that is executing in the JVM on a Linux box, the java program gets the time from the JVM, which in turn gets it from Linux. If you're running as JavaScript in a browser, the Javascript runtime gets the time from the Browser, which gets its time from the OS, etc...
At the lower levels, I believe the OS bases its time on elapsed clock cycles in the hardware layer, compared against some root time that you set in the BIOS or OS.
Updated with some more geek-detail:
Going even more abstract: if your computer runs at 1 GHz, that means its CPU changes "state" every 10^-9 seconds (the period of a single transition from +voltage to -voltage and back). EVERYTHING in a computer is based on these transitions, so there are hardware timers on the motherboard that make sure these transitions happen with a consistent frequency. Since those hardware timers are so precise, they are the basis for counting time in the calendar-time abstraction that we use.
I'm not a hardware expert, but this is my best understanding from computer architecture classes and building basic circuits in school.
Clarifying based on your edit:
A program doesn't inherently "know" anything about how slow or fast it's running, so on its own there is no way to accurately track the passage of time. Some languages can access information like "cycle count" and "processor speed" from the OS, so you could approximate some representation of the passage of time based on that without having to use a time api. But even that is sort of cheating given the constraints of your question.
Simply put, you can't. There's no pure-software way of telling time.
Every computer has a number of different hardware timers. These fire off interrupts once triggered, which is how the processor itself can keep track of time. Without these or some other external source, you cannot keep track of time.
The hardware clock in your motherboard contains a precisely tuned quartz crystal that vibrates at a frequency of 32,768 Hz [2^15] when a precise current is passed through it. The clock counts these vibrations to mark the passage of time. System calls reference the hardware clock for time, and without the hardware clock your PC wouldn't have the faintest idea if, between two arbitrary points in execution, a second, a day, or a year had passed.
This is what the system calls reference, and trying to use anything else is just an exercise in futility, because everything else in the computer is designed simply to function as fast as possible based on the voltage it happens to be receiving at the time.
You could try counting CPU clock cycles, but the CPU clock is simply designed to vibrate as fast as possible based on the input voltage and can vary based on load requirements and how stable the voltage your power supply delivers is. This makes it wholly unreliable as a method to measure time because if you get a program to monitor the clock speed in real time you will notice that it usually fluctuates constantly by +/- a few MHz.
Even hardware clocks are unreliable as the voltage applied to the crystal, while tightly controlled, is still variable. If you turned off the NTP services and/or disconnected it from the internet the time would probably drift a few minutes per month or even per week. The NTP servers reference atomic clocks that measure fundamental properties of physics, namely the oscillations of cesium atoms. One second is currently defined as:
the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium 133 atom.
oh... and that's measured by a computer as well.
Without a clock or a reference to the OS, you can't measure anything relative to the outside world. You can of course measure internally, knowing that a task is 1/3 of the way done or whatever. But depending on system load, CPU throttling from thermal requirements, other programs running, etc., the last 1/3 might take as long as the first 2/3, or longer. You can apply heuristics to load-balance long-running tasks against themselves (only), so that, for instance, things stay smooth if the number of tasks relative to threads varies, to achieve a desired performance characteristic; but the PC has to get its time from somewhere. Really cheap clocks get their time from the fact that mains power is 60 Hz, so every 60 cycles a second goes by. But the actual frequency varies a bit, and is likely to keep varying in a single direction, so clocks like that get out of sync pretty fast: seconds per day, or more. I guess with a camera and a hole in a box you could determine when the sun was at a particular position in the sky and determine time that way, but we're getting pretty far afield here.
In Java, you will notice that the easiest way to get the time is
System.currentTimeMillis();
which is implemented as
public static native long currentTimeMillis();
That is a native method implemented in native code, C in all likelihood. Your computer's CPU has an internal clock that can adjust itself. The native call is an OS call to the hardware to retrieve that value, possibly with some software transformation somewhere along the way.
I think you kind of answered your own question: "Time is such an abstract concept." So I would argue it depends on what exactly you're trying to measure. If it's algorithmic efficiency, we don't care about a concrete measure of time, only how much longer the algorithm takes with respect to the number of inputs (big O notation). If it's how long it takes the earth to spin on its axis, or some fraction of that, then obviously you need something external to the computer to tell it when one iteration started and ended; thereafter, ignoring CPU clock drift, the computer should do a good job of telling you what time of day it is.
It is possible, however, I suspect it would take quite some time.
You could use the probability that your computer will be hit by a cosmic ray.
Reference: Cosmic Rays: what is the probability they will affect a program?
You would need to create a program that manipulates large amounts of data in the computer's memory, thus making it susceptible to cosmic ray intrusion. Such data would become corrupted at a certain point in time.
The program should be able to check the integrity of the data and mark the moment when it becomes partially corrupted. When this happens, the program should also be able to generate another frame of reference, for example how many times a given function runs between two cosmic ray hits.
Then, these intervals should be recorded in a database and averaged after a few billion/trillion/zillion occurrences, thereby reducing the projected randomness of a cosmic ray hit.
At that point and on, the computer would be able to tell time by a cosmic ray average hit coefficient.
Of course, this is an oversimplified solution. I am quite certain the hardware would fail during this time, the cosmic rays could hit runnable code zones of memory instead of raw data, cosmic ray occurrence rates might change due to our solar system's continuous motion through the galaxy, etc.
However, it is indeed possible...
I want to optimize a method so it runs in less time. I was using System.currentTimeMillis() to calculate the time it lasted.
However, I just read the System.currentTimeMillis() Javadoc and it says this:
This method shouldn't be used for measuring timeouts or other elapsed
time measurements, as changing the system time can affect the results.
So, if I shouldn't use it to measure the elapsed time, how should I measure it?
Android's native Traceview will help you measure the time and will also give you more information.
Using it is as simple as:
// start tracing to "/sdcard/calc.trace"
Debug.startMethodTracing("calc");
// ...
// stop tracing
Debug.stopMethodTracing();
A post with more information in Android Developers Blog
Also take @Rajesh J Advani's post into account.
There are a few issues with System.currentTimeMillis().
if you are not in control of the system clock, you may be reading the elapsed time wrong.
For server code or other long-running Java programs, your code is likely to be called over a few thousand iterations. By the end of this time, the JVM will have optimized the bytecode to the extent where the time taken is actually a lot less than what you measured as part of your testing.
It doesn't take into account the fact that there might be other processes on your computer or other threads in the JVM that compete for CPU time.
You can still use the method, but you need to keep the above points in mind. As others have mentioned, a profiler is a much better way of measuring system performance.
Welcome to the world of benchmarking.
As others point out - techniques based on timing methods like currentTimeMillis will only tell you elapsed time, rather than how long the method spent in the CPU.
I'm not aware of a way in Java to isolate timings of a method to how long it spent on the CPU - the answer is to either:
1) If the method is long running (or you run it many times, while following benchmarking rules such as discarding warm-up results), use something like the "time" tool on Linux (http://linux.die.net/man/1/time), which will tell you how long the app spent on the CPU (obviously you have to subtract the overhead of application startup etc.).
2) Use a profiler as others pointed out. This has dangers such as adding too much overhead using tracing techniques - if it uses stack sampling, it won't be 100% accurate
3) I'm not sure how feasible this is on Android, but you could run your benchmark on a quiet multicore system and isolate a core (or ideally a whole socket) to run only your application.
You can use something called System.nanoTime(). As given here
http://docs.oracle.com/javase/1.5.0/docs/api/java/lang/System.html#nanoTime()
As the document says
This method can only be used to measure elapsed time and is not related to any other notion of system or wall-clock time.
Hope this will help.
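Putting that together, a minimal sketch of timing a method with nanoTime (the helper name `timeItMillis` and the workload are made up for illustration):

```java
import java.util.concurrent.TimeUnit;

public class Elapsed {
    /** Times a piece of work with nanoTime, immune to system clock changes. */
    static long timeItMillis(Runnable work) {
        long start = System.nanoTime();
        work.run();
        // Only the difference between two nanoTime values is meaningful
        return TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
    }

    public static void main(String[] args) {
        long ms = timeItMillis(() -> {
            long sum = 0;
            for (int i = 0; i < 1_000_000; i++) sum += i;  // placeholder workload
        });
        System.out.println("took " + ms + " ms");
    }
}
```

For real optimization work, though, treat numbers like this as a rough signal and use a profiler, for the JIT warm-up and scheduling reasons mentioned above.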
SystemClock.elapsedRealtime()
Quoting the linked page: elapsedRealtime() and elapsedRealtimeNanos() return the time since the system was booted, and include deep sleep. This clock is guaranteed to be monotonic, and continues to tick even when the CPU is in power saving modes, so it is the recommended basis for general purpose interval timing.
Is it possible to slow down time in the Java virtual machine according to CPU usage by modification of the source code of OpenJDK? I have a network simulation (Java to ns-3) which consumes real time, synchronised loosely to the wall clock. However, because I run so many clients in the simulation, the CPU usage hits 100% and hard guarantees aren't maintained about how long events in the simulator should take to process (i.e., a high amount of super-late events). Therefore, the simulation tops out at around 40 nodes when there's a lot of network traffic, and even then it's a bit iffy. The ideal solution would be to slow down time according to CPU, but I'm not sure how to do this successfully. A lesser solution is to just slow down time by some multiple (time lensing?).
If someone could give some guidance, the source code for the relevant file in question (for Windows) is at http://pastebin.com/RSQpCdbD. I've tried modifying some parts of the file, but my results haven't really been very successful.
Thanks in advance,
Chris
You might look at VirtualBox, which allows one to Accelerate or slow down the guest clock from the command line.
I'm not entirely sure if this is what you want but, with the Joda-Time library you can stop time completely. So calls to new Date() or new DateTime() within Joda-Time will continuously return the same time.
So, you could, in one Thread "stop time" with this call:
DateTimeUtils.setCurrentMillisFixed(System.currentTimeMillis());
Then your Thread could sleep for, say, 5000ms, and then call:
// advance time by one second
DateTimeUtils.setCurrentMillisFixed(System.currentTimeMillis() + 1000);
So provided your application does whatever it does based on the time within the system, this will "slow" time by stepping it forward one second every 5 seconds.
But, as I said... I'm not sure this will work in your environment.
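An alternative sketch of the "time lensing" idea, without Joda-Time: a java.time.Clock that runs at a configurable fraction of real speed. The class name and the zone handling are illustrative only, and this only helps code that reads time through the injected Clock rather than through System.currentTimeMillis() directly:

```java
import java.time.Clock;
import java.time.Instant;
import java.time.ZoneId;

public final class SlowClock extends Clock {
    private final double factor;  // e.g. 0.2 => time runs at 1/5 real speed
    private final long anchorMillis = System.currentTimeMillis();
    private final long anchorNanos = System.nanoTime();

    public SlowClock(double factor) {
        this.factor = factor;
    }

    @Override
    public long millis() {
        // Real elapsed time, scaled down by the lensing factor
        long realElapsedMillis = (System.nanoTime() - anchorNanos) / 1_000_000;
        return anchorMillis + (long) (realElapsedMillis * factor);
    }

    @Override
    public Instant instant() {
        return Instant.ofEpochMilli(millis());
    }

    @Override
    public ZoneId getZone() {
        return ZoneId.systemDefault();
    }

    @Override
    public Clock withZone(ZoneId zone) {
        return this;  // sketch: zone handling omitted
    }
}
```

With factor 0.0 the clock freezes at its construction time; with 0.2 it advances one simulated second per five real seconds, which is the same ratio as the Joda-Time example above.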