Possible to slow down time in the Java virtual machine? - java

Is it possible to slow down time in the Java virtual machine according to CPU usage by modifying the OpenJDK source code? I have a network simulation (Java to ns-3) that consumes real time, synchronised loosely to the wall clock. However, because I run so many clients in the simulation, CPU usage hits 100% and hard guarantees about how long events in the simulator should take to process are no longer maintained (i.e., there is a high number of very late events). As a result, the simulation tops out at around 40 nodes when there's a lot of network traffic, and even then it's a bit iffy. The ideal solution would be to slow down time according to CPU usage, but I'm not sure how to do this successfully. A lesser solution is to just slow down time by some multiple (time lensing?).
If someone could give some guidance: the source code for the relevant file in question (for Windows) is at http://pastebin.com/RSQpCdbD. I've tried modifying some parts of the file, but without much success.
Thanks in advance,
Chris

You might look at VirtualBox, which allows one to accelerate or slow down the guest clock from the command line.

I'm not entirely sure if this is what you want, but with the Joda-Time library you can stop time completely. Calls to new Date() or new DateTime() within Joda-Time will then continuously return the same time.
So you could, in one thread, "stop time" with this call:
DateTimeUtils.setCurrentMillisFixed(System.currentTimeMillis());
Then your thread could sleep for, say, 5000 ms, and then call:
// advance time by one second
DateTimeUtils.setCurrentMillisFixed(System.currentTimeMillis() + 1000);
So, provided your application does whatever it does based on the system time, this will "slow" time by setting it forward one second every 5 seconds.
But, as I said, I'm not sure this will work in your environment.
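The same idea can be sketched with the JDK's own java.time.Clock, with no Joda-Time dependency, provided your code reads the time through a Clock instance. This is a minimal illustration, not the answer's actual method: real time passes, but the simulated clock is advanced manually by a smaller amount.

```java
import java.time.Clock;
import java.time.Duration;
import java.time.Instant;
import java.time.ZoneOffset;

public class SlowedClock {
    public static void main(String[] args) throws InterruptedException {
        // Pin a clock to "now"; it will not advance on its own.
        Clock clock = Clock.fixed(Instant.now(), ZoneOffset.UTC);
        Instant before = clock.instant();

        Thread.sleep(100); // 100 real ms pass, but the fixed clock does not move

        // Manually advance the simulated clock by only 20 ms (a 5x slowdown at this step).
        clock = Clock.offset(clock, Duration.ofMillis(20));
        Instant after = clock.instant();

        System.out.println(Duration.between(before, after).toMillis()); // prints 20
    }
}
```

Code that accepts a Clock (rather than calling System.currentTimeMillis() directly) can be driven by such a slowed clock without modification.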


Is there any consistent (monotonic) Clock implementation in Java?

The default java.time.Clock implementation is based on System.currentTimeMillis(). As discussed for example here,
Monotonically increasing time in Java?,
it is not guaranteed to be monotonic.
And indeed, I regularly experience a situation where the system time is automatically adjusted a few seconds into the past, and the Java clock jumps back too.
//now() returns 2016-01-13T22:34:05.681Z
order.setCreationTime(Instant.now());
//... something happens, order gets cancelled
//now() returns 2016-01-13T22:34:03.123Z
//which is a few seconds before the former one,
//even though the call was performed later - in any reasonable sense.
//The recorded history of events is obviously inconsistent with the real world.
order.setCancelationTime(Instant.now());
It is then impossible to perform time-sensitive tasks, like recording and analysing event history, when one cannot rely on time going in only one direction.
The aforementioned post says that System.nanoTime() is monotonic (if the underlying system supports it). So, if I want to base my code on the java.time API, I would need a Clock that uses nanoTime internally to ensure a one-way flow of time. Maybe something like this would work. Or wouldn't it?
public class MyClock extends java.time.Clock {
    private final long adjustment;

    public MyClock() {
        long cpuTimeMillis = System.nanoTime() / 1_000_000;
        long systemTimeMillis = System.currentTimeMillis();
        adjustment = systemTimeMillis - cpuTimeMillis;
    }

    @Override
    public long millis() {
        long currentCpuTimeMillis = System.nanoTime() / 1_000_000;
        return currentCpuTimeMillis + adjustment;
    }

    // The remaining abstract methods of Clock, filled in so the sketch compiles:
    @Override public java.time.Instant instant() { return java.time.Instant.ofEpochMilli(millis()); }
    @Override public java.time.ZoneId getZone() { return java.time.ZoneOffset.UTC; }
    @Override public java.time.Clock withZone(java.time.ZoneId zone) { return this; }
}
It is just a sketch, not a full Clock implementation. And I suppose a proper implementation should also perform the adjustment against another Clock passed in the constructor, rather than directly against the currentTimeMillis().
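That refinement, taking the offset from an arbitrary reference Clock passed in the constructor, might look like the following. This is a sketch under the stated assumption that the reference clock is read only once, at construction; the class and method names are illustrative.

```java
import java.time.Clock;
import java.time.Instant;
import java.time.ZoneId;

// Sketch: a Clock that takes its starting offset from an arbitrary reference
// Clock, then advances using the monotonic System.nanoTime().
public class MonotonicClock extends Clock {
    private final ZoneId zone;
    private final long adjustment; // reference millis minus nanoTime-derived millis

    public MonotonicClock(Clock reference) {
        this.zone = reference.getZone();
        this.adjustment = reference.millis() - System.nanoTime() / 1_000_000;
    }

    @Override
    public long millis() {
        return System.nanoTime() / 1_000_000 + adjustment;
    }

    @Override
    public Instant instant() {
        return Instant.ofEpochMilli(millis());
    }

    @Override
    public ZoneId getZone() {
        return zone;
    }

    @Override
    public Clock withZone(ZoneId newZone) {
        // Re-anchor at the current instant in the new zone (sketch-level behaviour).
        return new MonotonicClock(Clock.fixed(instant(), newZone));
    }

    public static void main(String[] args) {
        MonotonicClock clock = new MonotonicClock(Clock.systemUTC());
        long a = clock.millis();
        long b = clock.millis();
        System.out.println(b >= a); // successive reads never go backwards, prints true
    }
}
```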
Or, is there already such a monotonic Clock implementation available anywhere? I would guess there must have been many people facing the same issue.
Conclusion
Thanks for the inspiring comments and answers. There are several interesting points scattered across them, so I will summarize here.
1. Monotonic clock
As for my original question: yes, it is possible to have a monotonic clock that is not affected by the system time jumping backwards. Such an implementation can be based on System.nanoTime(), as I suggested above. There used to be problems with this approach in the past, but it should work fine on today's modern systems. The approach is already implemented, for example, in the Time4J library; their monotonic clock can easily be converted to a java.time.Clock:
Clock clock = TemporalType.CLOCK.from(SystemClock.MONOTONIC);
2. Proper system time control
It is possible to configure system time management (ntpd on Unix/Linux) so that the system time virtually never moves backwards (it just gets slowed down when necessary). One can then rely on the system time being monotonic, and no clock magic is necessary in Java.
I will go this way, as my app is server-side and I can get the time under control. Actually, I experienced the anomalies in an experimental environment that I installed myself (with only superficial knowledge of the domain); it was using just the ntpdate client (which can jump backwards if the time is out of sync), rather than ntpd (which can be configured so that it never steps back).
3. Using sequences rather than clock
When one needs to track a strict happened-before relation between events, it is safer to give the events serial numbers from an atomically generated sequence and not rely on the wall clock. It becomes the only option once the application is running on several nodes (which is not my case, though).
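A minimal sketch of such a sequence, using AtomicLong (names are illustrative):

```java
import java.util.concurrent.atomic.AtomicLong;

public class EventSequence {
    // One process-wide, atomically incremented sequence; its values give a
    // total order of events even if the wall clock jumps backwards between them.
    private static final AtomicLong SEQ = new AtomicLong();

    public static void main(String[] args) {
        long createdSeq = SEQ.incrementAndGet();   // e.g. order created
        long cancelledSeq = SEQ.incrementAndGet(); // e.g. order cancelled
        // Ordering is recoverable regardless of any recorded timestamps:
        System.out.println(createdSeq < cancelledSeq); // prints true
    }
}
```

Storing the sequence number alongside the (possibly unreliable) timestamp keeps the history reconstructible.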
As @the8472 says, the key is to have the time synchronization on the machine (where the JVM runs) correct.
If you are programming a client, then it really can be dangerous to rely on the system clock.
But for servers there is a solution: you might want to consider using NTP with a strict configuration.
Here they basically explain that NTP will slow the clock down rather than set it backwards.
And this NTP documentation says:
Sometimes, in particular when ntpd is first started, the error might
exceed 128 ms. This may on occasion cause the clock to be set
backwards if the local clock time is more than 128 s in the future
relative to the server. In some applications, this behavior may be
unacceptable. If the -x option is included on the command line, the
clock will never be stepped and only slew corrections will be used.
Do note that nanoTime may increase monotonically, but it does not relate nicely to wall time, e.g. due to hibernation events, VM suspension and similar things.
And if you start distributing things across multiple servers then synchronizing on currentMillis may also bite you again due to clock drift.
Maybe you should consider getting the system time of your servers under control.
Or track relative sequence of the events separately from the time at which they were supposedly recorded.

Java process in linux needs initial warmup

We have a legacy multithreaded Java process on RHEL 6.5 which is very time-critical (low latency); it processes hundreds of thousands of messages a day. It runs on a powerful Linux machine with 40 CPUs. What we found is that the process has high latency while processing the first 50k messages, averaging 10 ms/msg; after this 'warmup' period the latency starts to drop and becomes about 7 ms, then 5 ms, eventually settling at about 3-4 ms/msg by day end.
This puzzles me, and one possibility I can think of is that maps are being resized at the beginning until they reach a very large capacity, after which they simply never exceed the load factor again. From what I see, the maps are not initialized with an initial capacity, which is why I suspect this may be the case. I ran it through a profiler and pumped millions of messages in, hoping to see some 'resize' method from the Java collections, but I was unable to find any. It could be that I am searching for the wrong thing, or looking in the wrong direction. As a new joiner, with the previous team member gone, I am trying to see if there are other reasons I haven't thought of.
Another possibility I can think of is kernel-settings related, but I am unsure what it could be.
I don't think it is a programming-logic issue, because it runs at an acceptable speed after the first 30k-50k messages.
Any suggestion?
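If the map-resizing hypothesis is worth testing, one cheap mitigation is to pre-size the maps so that no incremental rehashing happens during warmup. A minimal sketch, where the expected entry count is a made-up figure, not taken from the question:

```java
import java.util.HashMap;
import java.util.Map;

public class PreSizedMap {
    public static void main(String[] args) {
        // Hypothetical peak size; size the map so that
        // capacity * loadFactor >= expected entries, avoiding all resizes.
        int expectedEntries = 100_000;
        int initialCapacity = (int) (expectedEntries / 0.75f) + 1;
        Map<String, Object> cache = new HashMap<>(initialCapacity, 0.75f);

        // Filling to the expected size triggers no rehash with this sizing.
        for (int i = 0; i < expectedEntries; i++) {
            cache.put("key-" + i, Boolean.TRUE);
        }
        System.out.println(cache.size()); // prints 100000
    }
}
```

Comparing latency curves with and without pre-sizing would confirm or rule out this hypothesis; JIT warmup is another common cause of the same symptom.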
It sounds like it takes some time for the operating system to realize that your application is a big resource consumer. After a few seconds it sees that there is a lot of activity with your application's files, and only then does it respond by populating the file cache and taking similar actions.

How do you program time? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 9 years ago.
This might sound like a weird question, but how can you program time without using the time API of a programming language? Time is such an abstract concept; how can you write a program that tracks it without using a predefined time function?
I was thinking it would have to be calculated from a count of processor computations, but if every computer has different speed performance, how would you write code to measure time?
Assuming the language the program is written in doesn't matter, how would you do this?
EDIT: I should also say, not using system time, or any pre-generated version of time from the system
Typically, time is provided to the language runtime by the OS layer. So, if you're running a C/C++ program compiled in a Windows environment, it asks the Windows OS for the time. If you're running a Java program executing in the JVM on a Linux box, the Java program gets the time from the JVM, which in turn gets it from Linux. If you're running JavaScript in a browser, the JavaScript runtime gets the time from the browser, which gets its time from the OS, etc.
At the lower levels, I believe the OS bases its time on elapsed clock cycles in the hardware layer, compared against some root time that you set in the BIOS or OS.
Updated with some more geek-detail:
Going even more abstract: if your computer is 1 GHz, that means its CPU changes "state" every 1/1 billion (10^-9) of a second (the period of a single transition from +voltage to -voltage and back). EVERYTHING in a computer is based on these transitions, so there are hardware timers on the motherboard that make sure these transitions happen at a consistent frequency. Since those hardware timers are so precise, they are the basis for "counting" time for the calendar-time abstraction that we use.
I'm not a hardware expert, but this is my best understanding from computer architecture classes and building basic circuits in school.
Clarifying based on your edit:
A program doesn't inherently "know" anything about how slow or fast it's running, so on its own there is no way to accurately track the passage of time. Some languages can access information like "cycle count" and "processor speed" from the OS, so you could approximate some representation of the passage of time based on that without having to use a time api. But even that is sort of cheating given the constraints of your question.
Simply put, you can't. There's no pure-software way of telling time.
Every computer has a number of different hardware timers. These fire off interrupts once triggered, which is how the processor itself can keep track of time. Without these or some other external source, you cannot keep track of time.
The hardware clock in your motherboard contains a precisely tuned quartz crystal that vibrates at a frequency of 32,768 Hz (2^15) when a precise current is passed through it. The clock counts these vibrations to mark the passage of time. System calls reference the hardware clock for time, and without the hardware clock your PC wouldn't have the faintest idea whether, between two arbitrary points in execution, a second, a day, or a year had passed.
This is what the system calls reference, and trying to use anything else is just an exercise in futility, because everything else in the computer is designed simply to run as fast as possible based on the voltage it happens to be receiving at the time.
You could try counting CPU clock cycles, but the CPU clock is simply designed to vibrate as fast as possible based on the input voltage and can vary based on load requirements and how stable the voltage your power supply delivers is. This makes it wholly unreliable as a method to measure time because if you get a program to monitor the clock speed in real time you will notice that it usually fluctuates constantly by +/- a few MHz.
Even hardware clocks are unreliable as the voltage applied to the crystal, while tightly controlled, is still variable. If you turned off the NTP services and/or disconnected it from the internet the time would probably drift a few minutes per month or even per week. The NTP servers reference atomic clocks that measure fundamental properties of physics, namely the oscillations of cesium atoms. One second is currently defined as:
the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium 133 atom.
oh... and that's measured by a computer as well.
Without a clock or reference to the OS you can't measure anything relative to the outside world. However, you can of course measure internally, knowing that the task is 1/3 of the way done or whatever. But depending on system load, CPU throttling from thermal requirements, other programs running, etc., the last 1/3 might take as long as the first 2/3, or longer. You can apply heuristics to load-balance long-running tasks against themselves (only), so that, for instance, things will be smooth if the number of tasks relative to threads varies, to achieve a desired performance characteristic, but the PC has to get its time from somewhere. Really cheap clocks get their time from the fact that mains power is 60 Hz, so every 60 cycles a second goes by. But the actual frequency varies a bit, and is likely to keep varying in a single direction, so clocks like that get out of sync pretty fast, by seconds per day or more. I guess with a camera and a hole in a box you could determine when the sun was at a particular position in the sky and determine the time that way, but we're getting pretty far afield here.
In Java, you will notice that the easiest way to get the time is
System.currentTimeMillis();
which is implemented as
public static native long currentTimeMillis();
That is a native method, implemented in native code, C in all likelihood. Your computer's CPU has an internal clock that can adjust itself. The native call is an OS call to the hardware to retrieve that value, possibly with some software transformation applied somewhere along the way.
I think you kind of answered your own question in a way: "Time is such an abstract concept". So I would argue it depends on what exactly you're trying to measure. If it's algorithmic efficiency, we don't care about a concrete measure of time, only how much longer the algorithm takes with respect to the number of inputs (big O notation). If it's how long it takes the Earth to spin on its axis, or some fraction of that, then obviously you need something external to the computer to tell it when one iteration started and ended; after that, ignoring CPU clock drift, the computer should do a good job of telling you the time of day.
It is possible, however, I suspect it would take quite some time.
You could use the probability that your computer will be hit by a cosmic ray.
Reference: Cosmic Rays: what is the probability they will affect a program?
You would need to create a program that manipulates large amounts of data in the computer's memory, thus making it susceptible to cosmic ray intrusion. Such data would become corrupted at a certain point in time.
The program should be able to check the integrity of the data and mark the moment when it becomes partially corrupted. When this happens, the program should also be able to generate another frame of reference, for example how many times a given function runs between two cosmic-ray hits.
Then these intervals should be recorded in a database and averaged after a few billion/trillion/zillion occurrences, thereby reducing the projected randomness of cosmic-ray hits.
From that point on, the computer would be able to tell time by the average cosmic-ray hit coefficient.
Of course, this is an oversimplified solution. I am quite certain the hardware would fail during this time, the cosmic rays could hit runnable code zones of memory instead of raw data, the rate of hits might change due to our solar system's continuous motion through the galaxy, etc.
However, it is indeed possible...

Measuring method time

I want to optimize a method so it runs in less time. I was using System.currentTimeMillis() to measure how long it took.
However, I just read the System.currentTimeMillis() Javadoc and it says this:
This method shouldn't be used for measuring timeouts or other elapsed
time measurements, as changing the system time can affect the results.
So, if I shouldn't use it to measure the elapsed time, how should I measure it?
Android's native Traceview will help you measure the time, and will also give you more information.
Using it is as simple as:
// start tracing to "/sdcard/calc.trace"
Debug.startMethodTracing("calc");
// ...
// stop tracing
Debug.stopMethodTracing();
A post with more information is on the Android Developers Blog.
Also take @Rajesh J Advani's post into account.
There are a few issues with System.currentTimeMillis().
If you are not in control of the system clock, you may be reading the elapsed time wrong.
For server code or other long-running Java programs, your code is likely to be invoked over many thousands of iterations. By the end of that time, the JVM will have optimized the bytecode to the point where the time taken is actually a lot less than what you measured during your testing.
It doesn't take into account the fact that there might be other processes on your computer, or other threads in the JVM, competing for CPU time.
You can still use the method, but you need to keep the above points in mind. As others have mentioned, a profiler is a much better way of measuring system performance.
Welcome to the world of benchmarking.
As others point out, techniques based on timing methods like currentTimeMillis will only tell you elapsed time, rather than how long the method spent on the CPU.
I'm not aware of a way in Java to isolate the timing of a method to how long it spent on the CPU, so the options are:
1) If the method is long-running (or you run it many times, following standard benchmarking practice), use something like the "time" tool on Linux (http://linux.die.net/man/1/time), which will tell you how long the app spent on the CPU (obviously you have to subtract the overhead of application startup, etc.).
2) Use a profiler, as others pointed out. This has dangers, such as adding too much overhead when using tracing techniques; and if it uses stack sampling, it won't be 100% accurate.
3) I'm not sure how feasible this is on Android, but you could run your benchmark on a quiet multicore system and isolate a core (or ideally a whole socket) so that it can only run your application.
You can use System.nanoTime(), as documented here:
http://docs.oracle.com/javase/1.5.0/docs/api/java/lang/System.html#nanoTime()
As the documentation says:
This method can only be used to measure elapsed time and is not related to any other notion of system or wall-clock time.
Hope this will help.
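A minimal sketch of elapsed-time measurement with nanoTime, where Thread.sleep stands in for the method being measured:

```java
public class ElapsedTime {
    public static void main(String[] args) throws InterruptedException {
        long start = System.nanoTime();

        Thread.sleep(50); // stand-in for the method under test

        long elapsedNanos = System.nanoTime() - start;
        long elapsedMillis = elapsedNanos / 1_000_000;
        // A generous lower bound: the measured time must be at least close
        // to the 50 ms we slept.
        System.out.println(elapsedMillis >= 40); // prints true
    }
}
```

Because nanoTime is unrelated to the wall clock, changing the system time mid-measurement does not distort the result, which is exactly the problem the Javadoc warns about for currentTimeMillis.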
SystemClock.elapsedRealtime()
Quoting the linked page: elapsedRealtime() and elapsedRealtimeNanos() return the time since the system was booted, including deep sleep. This clock is guaranteed to be monotonic, and continues to tick even when the CPU is in power-saving modes, so it is the recommended basis for general-purpose interval timing.

Methods of limiting emulated cpu speed

I'm writing a MOS 6502 processor emulator as part of a larger project I've undertaken in my spare time. The emulator is written in Java and, before you say it, I know it's not going to be as efficient and optimized as if it were written in C or assembly, but the goal is to make it run on various platforms, and it's pulling 2.5 MHz on a 1 GHz processor, which is pretty good for an interpreted emulator. My problem is quite the contrary: I need to limit the number of cycles to 1 MHz. I've looked around but not seen many strategies for doing this. I've tried a few things, including checking the time after a number of cycles and sleeping for the difference between the expected and actual elapsed time, but checking the time slows the emulation down by a factor of 8. Does anyone have better suggestions, or perhaps ways to optimize time polling in Java to reduce the slowdown?
The problem with using sleep() is that you generally only get a granularity of 1 ms, and the actual sleep you get isn't necessarily accurate even to the nearest 1 ms, as it depends on what the rest of the system is doing. A couple of suggestions to try (off the top of my head; I've not actually written a CPU emulator in Java):
stick to your idea, but check the time between a large-ish number of emulated instructions (execution is going to be a bit "lumpy" anyway especially on a uniprocessor machine, because the OS can potentially take away the CPU from your thread for several milliseconds at a time);
as you want to execute in the order of 1000 emulated instructions per millisecond, you could also try just hanging on to the CPU between "instructions": have your program periodically work out by trial and error how many runs through a loop it needs to go between instructions to "waste" enough CPU to make the timing work out at 1 million emulated instructions / sec on average (you may want to see if setting your thread to low priority helps system performance in this case).
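The first suggestion, checking the clock only between large batches of emulated instructions, might be sketched like this. The batch size, target rate, and the `executeOneInstruction` placeholder are illustrative, not taken from the question's emulator:

```java
public class CycleLimiter {
    static final long TARGET_HZ = 1_000_000; // 1 MHz target
    static final int BATCH = 10_000;         // instructions between clock checks

    public static void main(String[] args) throws InterruptedException {
        // Expected wall time per batch at the target rate (here: 10 ms).
        long batchNanos = BATCH * 1_000_000_000L / TARGET_HZ;
        long next = System.nanoTime() + batchNanos;

        for (int batchNo = 0; batchNo < 5; batchNo++) {
            for (int i = 0; i < BATCH; i++) {
                // executeOneInstruction(); // emulator step would go here
            }
            // One clock read per 10,000 "instructions", not one per instruction.
            long sleepNanos = next - System.nanoTime();
            if (sleepNanos > 0) {
                Thread.sleep(sleepNanos / 1_000_000, (int) (sleepNanos % 1_000_000));
            }
            next += batchNanos;
        }
        System.out.println("done");
    }
}
```

Accumulating against an absolute deadline (`next += batchNanos`) rather than sleeping a fixed amount lets fast batches repay slow ones, keeping the long-run average at the target rate.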
I would use System.nanoTime() in a busy wait, as @pst suggested earlier.
You can speed up the emulation by generating bytecode. Most instructions should translate quite well, and you can add a busy-wait call so that each instruction takes the amount of time the original instruction would have taken. You also have the option of increasing the delay so you can watch each instruction being executed.
To make it really cool, you could generate the 6502 assembly code as text, with line numbers matching the bytecode. This would allow you to use the debugger to step through the code, breakpoint it, and see what the application is doing. ;)
A simple way to emulate the memory is to use a direct ByteBuffer, or native memory with the Unsafe class, to access it. This gives you a block of memory you can access as any data type, in any order.
You might be interested in examining the Java Apple Computer Emulator (JACE), which incorporates 6502 emulation. It uses Thread.sleep() in its TimedDevice class.
Have you looked into creating a Timer object that fires at the cycle length you need? You could have the timer itself initiate the next loop.
Here is the documentation for the Java 6 version:
http://download.oracle.com/javase/6/docs/api/java/util/Timer.html
