This might sound like a weird question, but how can you program time without using the API of a programming language? Time is such an abstract concept, so how can you write a program that tracks it without using a predefined time function?
I was thinking it would have to be calculated from a count of processor computations, but since every computer performs at a different speed, how would you write code that keeps track of time?
Assuming the language the program is written in doesn't matter, how would you do this?
EDIT: I should also say, not using system time, or any pre-generated version of time from the system
Typically time is provided to the language runtime by the OS layer. So, if you're running a C/C++ program compiled in a Windows environment, it's asking the Windows OS for the time. If you're running a Java program executing in the JVM on a Linux box, the Java program gets the time from the JVM, which in turn gets it from Linux. If you're running JavaScript in a browser, the JavaScript runtime gets the time from the browser, which gets its time from the OS, etc...
At the lower levels, I believe the OS bases its time on elapsed clock cycles in the hardware layer, which are then compared to some root time that you set in the BIOS or OS.
Updated with some more geek-detail:
Going even more abstract, if your computer runs at 1 GHz, that means its CPU changes "state" every 1/1,000,000,000th (10^-9) of a second (the period of a single transition from +voltage to -voltage and back). EVERYTHING in a computer is based on these transitions, so there are hardware timers on the motherboard that make sure these transitions happen with a consistent frequency. Since those hardware timers are so precise, they are the basis for "counting" time for anything that deals with the calendar-time abstraction that we use.
I'm not a hardware expert, but this is my best understanding from computer architecture classes and building basic circuits in school.
Clarifying based on your edit:
A program doesn't inherently "know" anything about how slow or fast it's running, so on its own there is no way to accurately track the passage of time. Some languages can access information like "cycle count" and "processor speed" from the OS, so you could approximate some representation of the passage of time based on that without having to use a time api. But even that is sort of cheating given the constraints of your question.
Simply put, you can't. There's no pure-software way of telling time.
Every computer has a number of different hardware timers. These fire off interrupts once triggered, which is how the processor itself can keep track of time. Without these or some other external source, you cannot keep track of time.
The hardware clock in your motherboard contains a precisely tuned quartz crystal that vibrates at a frequency of 32,768 Hz [2^15] when a precise current is passed through it. The clock counts these vibrations to mark the passage of time. System calls reference the hardware clock for time, and without the hardware clock your PC wouldn't have the faintest idea if, between two arbitrary points in execution, a second, a day, or a year had passed.
This is what the system calls reference, and trying to use anything else is just an exercise in futility, because everything else in the computer is designed to simply function as fast as possible based on the voltage it happens to be receiving at the time.
You could try counting CPU clock cycles, but the CPU clock is simply designed to vibrate as fast as possible based on the input voltage and can vary based on load requirements and how stable the voltage your power supply delivers is. This makes it wholly unreliable as a method to measure time because if you get a program to monitor the clock speed in real time you will notice that it usually fluctuates constantly by +/- a few MHz.
Even hardware clocks are unreliable as the voltage applied to the crystal, while tightly controlled, is still variable. If you turned off the NTP services and/or disconnected it from the internet the time would probably drift a few minutes per month or even per week. The NTP servers reference atomic clocks that measure fundamental properties of physics, namely the oscillations of cesium atoms. One second is currently defined as:
the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium 133 atom.
oh... and that's measured by a computer as well.
Without a clock or reference to the OS you can't measure anything relative to the outside world. However, you can of course measure internally, knowing that the task is 1/3 of the way done or whatever. But depending on system load, CPU throttling due to thermal limits, other programs running, etc., the last 1/3 might take as long as the first 2/3rds or longer. You can apply heuristics to load-balance long-running tasks against themselves (only), so that, for instance, things stay smooth when the number of tasks relative to threads varies, to achieve a desired performance characteristic; but the PC has to get its time from somewhere.
Really cheap clocks get their time from the fact that mains power is 60 Hz, so every 60 cycles a second goes by. But the actual frequency varies a bit and is likely to keep drifting in a single direction, so clocks like that get out of sync pretty fast, by seconds per day or more. I guess with a camera and a hole in a box you could determine when the sun was at a particular position in the sky and determine time that way, but we're getting pretty far afield here.
In Java, you will notice that the easiest way to get the time is
System.currentTimeMillis();
which is implemented as
public static native long currentTimeMillis();
That is a native method which is implemented in native code, C in all likelihood. Your computer's CPU has an internal clock that can adjust itself. The native call is an OS call to the hardware to retrieve that value, possibly doing some software transformation somewhere.
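For example, here is a trivial sketch of how that value is typically consumed to measure elapsed wall-clock time; doWork() is just a placeholder for whatever you want to measure:
public class ElapsedTimeDemo {
    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis(); // OS-provided wall-clock time

        doWork();

        long elapsed = System.currentTimeMillis() - start;
        System.out.println("Elapsed: " + elapsed + " ms");
    }

    // Placeholder for the work being timed.
    private static void doWork() throws InterruptedException {
        Thread.sleep(250);
    }
}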
I think you kind of answered your own question in a way: "Time is such an abstract concept". So I would argue it depends on what exactly you're trying to measure. If it's algorithmic efficiency, we don't care about a concrete measure of time, simply how much longer the algorithm takes with respect to the number of inputs (big O notation). If it's how long it takes for the earth to spin on its axis, or some fraction of that, then obviously you need something external to the computer to tell it when one rotation started and when it ended; thereafter, ignoring CPU clock drift, the computer should do a good job of telling you what the time of day is.
It is possible, however, I suspect it would take quite some time.
You could use the probability that your computer will be hit by a cosmic ray.
Reference: Cosmic Rays: what is the probability they will affect a program?
You would need to create a program that manipulates large amounts of data in the computer's memory, thus making it susceptible to cosmic ray intrusion. Such data would become corrupted at a certain point in time.
The program should be able to check the integrity of the data and mark the moment when its data becomes partially corrupted. When this happens, the program should also be able to generate another frame of reference, for example how many times a given function runs between two cosmic ray hits.
Then, these intervals should be recorded in a database and averaged after a few billion/trillion/zillion occurrences, thereby reducing the projected randomness of an occurring cosmic ray hit.
From that point on, the computer would be able to tell time by a cosmic ray average hit coefficient.
Of course, this is an oversimplified solution. I am quite certain the hardware would fail during this time, the cosmic rays could hit runnable code zones of memory instead of raw data, cosmic ray occurrences might change due to our solar system's continuous motion through the galaxy, etc.
However, it is indeed possible...
Related
For my bachelor thesis, I need to write an Android app that gets very exact and consistent reaction times (every millisecond matters) from the user. It will be used for psychological studies. I am using the Android SDK and Java.
One of the ways the user can "react" is by touching the display.
At the moment, I call System.nanoTime() in the onTouchEvent(event) callback.
I subtract from it the start value (also taken with System.nanoTime()) to get the reaction time.
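Here is roughly what that looks like at the moment; a minimal sketch, with the view, field and log tag names simplified for this example:
import android.content.Context;
import android.util.Log;
import android.view.MotionEvent;
import android.view.View;

public class ReactionTimeView extends View {

    private long stimulusShownNanos; // recorded when the stimulus is presented

    public ReactionTimeView(Context context) {
        super(context);
    }

    /** Called at the moment the stimulus becomes visible to the user. */
    public void onStimulusShown() {
        stimulusShownNanos = System.nanoTime();
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        if (event.getAction() == MotionEvent.ACTION_DOWN) {
            long reactionNanos = System.nanoTime() - stimulusShownNanos;
            Log.d("ReactionTime", "Reaction time: " + (reactionNanos / 1_000_000) + " ms");
            return true;
        }
        return super.onTouchEvent(event);
    }
}
Note that MotionEvent also exposes getEventTime(), the time the event was generated (in the uptimeMillis() time base), which might help estimate how much dispatch delay occurs between the touch and the callback.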
But I am concerned about how fast, exact and consistent (over time / on different devices) the system calls this method after the user actually touched the display.
Possible problems I have in mind:
a) different delays on different devices because of the different hardware used
b) delays because of other threads who could be executed first
c) the high-level nature of the Java language: you never really know (and can't control) what is happening in the background, at what time and in what order.
How could I find out about this? Could using the NDK(C++) help me get more accurate and consistent values? Thanks!
I need to write an Android app that gets very exact and consistent reaction times (every millisecond matters) from the user
Android is not a real-time operating system (RTOS).
Possible problems I have in mind
Your first two are valid. The third isn't, strictly speaking, in that Java has no more "happening in the background" stuff than do other programming languages. However, Java is intrinsically slower, adding latency.
In most situations, you also need to take into account other apps that may be running, as they too will want CPU time, and there are only so many CPU cores to go around.
How could I find out about this?
You aren't going to have a great way to measure this with a human participant. After all, if you knew exactly when the user touched the screen (to measure the time between it and onTouchEvent()), you would just use that mechanism instead of onTouchEvent().
It is possible to create a non-human participant. There are people who have created robots that use a capacitive stylus to tap on the screen. In theory, if you have nicely synchronized clocks, you could determine the time when the robot tapped the screen and the time that you receive the onTouchEvent() call. Having "nicely synchronized clocks" may be difficult in its own right, and creating a screen-touching robot is its own engineering exercise.
Could using the NDK(C++) help me get more accurate and consistent values?
I don't know how you are defining "accurate" in this scenario. I would not expect the NDK to help with consistency. It should help minimize the latency between the screen touch and your finding out about that screen touch (using whatever NDK-based facilities that game developers use to find out about screen touches).
The default java.time.Clock implementation is based on System.currentTimeMillis(). As discussed, for example, in Monotonically increasing time in Java?, it is not guaranteed to be monotonic.
And indeed, I regularly experience a situation, where the system time is automatically adjusted a few seconds to the past, and the java clock jumps back too.
//now() returns 2016-01-13T22:34:05.681Z
order.setCreationTime(Instant.now());
//... something happens, order gets cancelled
//now() returns 2016-01-13T22:34:03.123Z
//which is a few seconds before the former one,
//even though the call was performed later - in any reasonable sense.
//The recorded history of events is obviously inconsistent with the real world.
order.setCancelationTime(Instant.now());
It is then impossible to perform time-sensitive things, like recording and analysing event history, when one cannot rely on time going only in one direction.
The aforementioned post says that System.nanoTime() is monotonic (if the underlying system supports it). So, if I want to base my code on the java.time API, I would need a Clock that uses nanoTime internally to ensure a one-way flow of time. Maybe something like this would work. Or wouldn't it?
public class MyClock extends java.time.Clock {

    private final long adjustment;

    public MyClock() {
        long cpuTimeMillis = System.nanoTime() / 1_000_000;
        long systemTimeMillis = System.currentTimeMillis();
        adjustment = systemTimeMillis - cpuTimeMillis;
    }

    @Override
    public long millis() {
        long currentCpuTimeMillis = System.nanoTime() / 1_000_000;
        return currentCpuTimeMillis + adjustment;
    }

    // instant(), getZone() and withZone() from java.time.Clock would also
    // need to be overridden in a complete implementation (see below).
}
It is just a sketch, not a full Clock implementation. And I suppose a proper implementation should also perform the adjustment against another Clock passed in the constructor, rather than directly against the currentTimeMillis().
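For what it's worth, here is a slightly fuller sketch along those lines: the offset is taken against an arbitrary base Clock supplied to the constructor, and the remaining abstract methods of java.time.Clock are filled in. It is still only illustrative, not a hardened implementation:
import java.time.Clock;
import java.time.Instant;
import java.time.ZoneId;

public final class MonotonicClock extends Clock {

    private final ZoneId zone;
    private final long adjustment; // base clock millis minus nanoTime millis at construction

    public MonotonicClock(Clock baseClock) {
        this.zone = baseClock.getZone();
        this.adjustment = baseClock.millis() - System.nanoTime() / 1_000_000;
    }

    private MonotonicClock(ZoneId zone, long adjustment) {
        this.zone = zone;
        this.adjustment = adjustment;
    }

    @Override
    public long millis() {
        return System.nanoTime() / 1_000_000 + adjustment;
    }

    @Override
    public Instant instant() {
        return Instant.ofEpochMilli(millis());
    }

    @Override
    public ZoneId getZone() {
        return zone;
    }

    @Override
    public Clock withZone(ZoneId zone) {
        return new MonotonicClock(zone, adjustment);
    }
}
Usage would then be something like Clock clock = new MonotonicClock(Clock.systemUTC()); Instant now = Instant.now(clock);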
Or, is there already such a monotonic Clock implementation available anywhere? I would guess there must have been many people facing the same issue.
Conclusion
Thanks for the inspiring comments and answers. There are several interesting points scattered across the comments, so I will summarize it here.
1. Monotonic clock
As for my original question, yes, it is possible to have a monotonic clock that is not affected by the system time jumping backwards. Such an implementation can be based on System.nanoTime() as I suggested above. There used to be problems with this approach in the past, but it should work fine on today's modern systems. This approach is already implemented, for example, in the Time4J library; their monotonic clock can be easily converted to a java.time.Clock:
Clock clock = TemporalType.CLOCK.from(SystemClock.MONOTONIC);
2. Proper system time control
It is possible to configure system time management (ntpd in unix/linux) so that the system time virtually never moves back (it just gets slowed down if necessary); then one can rely on the system time being monotonic, and no clock magic is necessary in Java.
I will go this way, as my app is server-side and I can get the time under control. Actually, I experienced the anomalies in an experimental environment that I installed myself (with only superficial knowledge of the domain), and it was using just the ntpdate client (which can jump backwards if the time is out of sync), rather than ntpd (which can be configured so that it does not jump back).
3. Using sequences rather than clock
When one needs to track a strict relation of what happened before what, it is safer to give the events serial numbers from an atomically generated sequence and not rely on the wall clock. It becomes the only option once the application is running on several nodes (which is not my case though).
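For a single node, such a sequence can be as simple as an AtomicLong; a minimal sketch, with names made up for the example:
import java.util.concurrent.atomic.AtomicLong;

public class EventSequencer {

    private final AtomicLong sequence = new AtomicLong();

    /** Returns a strictly increasing sequence number, independent of the wall clock. */
    public long nextEventId() {
        return sequence.incrementAndGet();
    }
}
Events can then be ordered by their sequence numbers even if their recorded timestamps overlap or jump backwards.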
As @the8472 says, the key is to have the time synchronization on the machine (where the JVM runs) correct.
If you program a client then it really can be dangerous to rely on the system clocks.
But for servers, there is a solution - you might want to consider using NTP with strict configuration.
Here they basically explain that NTP will slow down the time and not set it backwards.
And this NTP documentation says :
Sometimes, in particular when ntpd is first started, the error might exceed 128 ms. This may on occasion cause the clock to be set backwards if the local clock time is more than 128 s in the future relative to the server. In some applications, this behavior may be unacceptable. If the -x option is included on the command line, the clock will never be stepped and only slew corrections will be used.
Do note that nanoTime may increase monotonically but it does not relate nicely to wall time, e.g. due to hibernation events, VM suspension and similar things.
And if you start distributing things across multiple servers then synchronizing on currentMillis may also bite you again due to clock drift.
Maybe you should consider getting the system time of your servers under control.
Or track relative sequence of the events separately from the time at which they were supposedly recorded.
I am developing a real-time drum composing application in Java. The main issue I am trying to counter is determining the rhythmic value of a note. I am having accuracy issues because I am considering even 32nd notes at a metronome setting of 100 bpm. This gives an interval between notes of 75 ms. I am not 100% sure that the theoretical approach of considering time segments and assigning a rhythmic value can be scaled to every bpm or time interval.
Do you think it is doable, by also taking into consideration the human factor of playing? I guess this is a very specific/empirical question that goes for guys that developed similar apps.
Standard Java is not a suitable language for real-time operations, and if accuracy of timing is so important for you, this means you should not use Java directly (at least). In particular, the non-deterministic behaviour of garbage collection complicates matters on the Java side for real-time applications.
I'm writing a MOS 6502 processor emulator as part of a larger project I've undertaken in my spare time. The emulator is written in Java, and before you say it, I know it's not going to be as efficient and optimized as if it were written in C or assembly, but the goal is to make it run on various platforms, and it's pulling 2.5 MHz on a 1 GHz processor, which is pretty good for an interpreted emulator. My problem is quite the contrary: I need to limit the number of cycles to 1 MHz. I've looked around but not seen many strategies for doing this. I've tried a few things, including checking the time after a number of cycles and sleeping for the difference between the expected time and the actual time elapsed, but checking the time slows down the emulation by a factor of 8. So does anyone have any better suggestions, or perhaps ways to optimize time polling in Java to reduce the slowdown?
The problem with using sleep() is that you generally only get a granularity of 1 ms, and the actual sleep you get isn't necessarily even accurate to the nearest 1 ms, as it depends on what the rest of the system is doing. A couple of suggestions to try (off the top of my head; I've not actually written a CPU emulator in Java):
stick to your idea, but check the time between a large-ish batch of emulated instructions (execution is going to be a bit "lumpy" anyway, especially on a uniprocessor machine, because the OS can potentially take away the CPU from your thread for several milliseconds at a time); see the sketch after these suggestions;
as you want to execute on the order of 1000 emulated instructions per millisecond, you could also try just hanging on to the CPU between "instructions": have your program periodically work out by trial and error how many runs through a loop it needs between instructions to "waste" enough CPU to make the timing work out at 1 million emulated instructions/sec on average (you may want to see if setting your thread to low priority helps system performance in this case).
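Here is a rough sketch of the first suggestion. The Cpu interface and its step() method are hypothetical placeholders for however your emulator executes one instruction and reports its cycle count; the batch size and target frequency are tunable:
public class Throttle {

    private static final long TARGET_HZ = 1_000_000;   // 1 MHz target
    private static final long BATCH_CYCLES = 10_000;   // check the clock every ~10 ms of emulated time

    /** Hypothetical CPU interface for this sketch. */
    public interface Cpu {
        int step(); // executes one instruction and returns the number of cycles it consumed
    }

    public static void run(Cpu cpu) throws InterruptedException {
        long batchStart = System.nanoTime();
        long cyclesInBatch = 0;

        while (true) { // runs until interrupted; a real emulator would have a stop condition
            cyclesInBatch += cpu.step();

            if (cyclesInBatch >= BATCH_CYCLES) {
                long expectedNanos = cyclesInBatch * 1_000_000_000L / TARGET_HZ;
                long elapsedNanos = System.nanoTime() - batchStart;
                long remainingNanos = expectedNanos - elapsedNanos;
                if (remainingNanos > 0) {
                    // Sleep off the surplus; granularity is coarse, but errors average out per batch.
                    Thread.sleep(remainingNanos / 1_000_000, (int) (remainingNanos % 1_000_000));
                }
                batchStart = System.nanoTime();
                cyclesInBatch = 0;
            }
        }
    }
}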
I would use System.nanoTime() in a busy wait, as @pst suggested earlier.
You can speed up the emulation by generating byte code. Most instructions should translate quite well and you can add a busy wait call so each instruction takes the amount of time the original instruction would have done. You have an option to increase the delay so you can watch each instruction being executed.
To make it really cool you could generate 6502 assembly code as text with matching line numbers in the byte code. This would allow you to use the debugger to step through the code, breakpoint it and see what the application is doing. ;)
A simple way to emulate the memory is to use a direct ByteBuffer, or native memory with the Unsafe class, to access it. This will give you a block of memory you can access as any data type in any order.
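A minimal sketch of the direct-ByteBuffer variant (the Unsafe route is similar in spirit, but relies on an internal API):
import java.nio.ByteBuffer;

public class Memory {

    // 64 KiB address space of the 6502, backed by off-heap memory.
    private final ByteBuffer ram = ByteBuffer.allocateDirect(0x10000);

    public int readByte(int address) {
        return ram.get(address & 0xFFFF) & 0xFF;
    }

    public void writeByte(int address, int value) {
        ram.put(address & 0xFFFF, (byte) value);
    }

    // 16-bit reads on the 6502 are little-endian; composing two byte reads also handles address wrap-around.
    public int readWord(int address) {
        return readByte(address) | (readByte(address + 1) << 8);
    }
}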
You might be interested in examining the Java Apple Computer Emulator (JACE), which incorporates 6502 emulation. It uses Thread.sleep() in its TimedDevice class.
Have you looked into creating a Timer object that goes off at the cycle length you need? You could have the timer itself initiate the next loop.
Here is the documentation for the Java 6 version:
http://download.oracle.com/javase/6/docs/api/java/util/Timer.html
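For reference, a minimal sketch of that idea; the tick body is just a placeholder, and note that 1 ms is roughly the finest period Timer can offer, so each tick would have to run a whole batch of emulated cycles rather than a single one:
import java.util.Timer;
import java.util.TimerTask;

public class CycleTimer {
    public static void main(String[] args) {
        Timer timer = new Timer("emulator-clock");

        // Fire at a fixed rate of one tick per millisecond.
        timer.scheduleAtFixedRate(new TimerTask() {
            @Override
            public void run() {
                // placeholder: execute the next batch of emulated instructions here
            }
        }, 0, 1);
    }
}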
Is it possible to slow down time in the Java virtual machine according to CPU usage by modifying the source code of OpenJDK? I have a network simulation (Java to ns-3) which consumes real time, synchronised loosely to the wall clock. However, because I run so many clients in the simulation, the CPU usage hits 100% and hard guarantees aren't maintained about how long events in the simulator should take to process (i.e., there is a high number of very late events). Therefore, the simulation tops out at around 40 nodes when there's a lot of network traffic, and even then it's a bit iffy. The ideal solution would be to slow down time according to CPU usage, but I'm not sure how to do this successfully. A lesser solution is to just slow down time by some multiple (time lensing?).
If someone could give some guidance, the source code for the relevant file in question (for Windows) is at http://pastebin.com/RSQpCdbD. I've tried modifying some parts of the file, but my results haven't really been very successful.
Thanks in advance,
Chris
You might look at VirtualBox, which allows one to Accelerate or slow down the guest clock from the command line.
I'm not entirely sure if this is what you want, but with the Joda-Time library you can stop time completely. So calls to new Date() or new DateTime() within Joda-Time will continuously return the same time.
So, you could, in one thread, "stop time" with this call:
DateTimeUtils.setCurrentMillisFixed(System.currentTimeMillis());
Then your Thread could sleep for, say, 5000ms, and then call:
// advance time by one second
DateTimeUtils.setCurrentMillisFixed(System.currentTimeMillis() + 1000);
So, provided your application does whatever it does based on the time within the system, this will "slow" time by stepping it forward one second every 5 seconds.
But, as I said... I'm not sure this will work in your environment.
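Putting the two calls above together, a minimal sketch of such a "slowed clock" thread might look like this; whether it helps depends entirely on whether your code reads time through Joda-Time:
import org.joda.time.DateTimeUtils;

public class SlowClock implements Runnable {
    @Override
    public void run() {
        // Freeze Joda-Time's notion of "now" at the current system time.
        long frozen = System.currentTimeMillis();
        DateTimeUtils.setCurrentMillisFixed(frozen);

        try {
            while (!Thread.currentThread().isInterrupted()) {
                Thread.sleep(5000);               // 5 real seconds...
                frozen += 1000;                   // ...advance Joda-Time by 1 second
                DateTimeUtils.setCurrentMillisFixed(frozen);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } finally {
            DateTimeUtils.setCurrentMillisSystem(); // restore normal behaviour
        }
    }
}
You would start it with new Thread(new SlowClock()).start();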