I am developing a real-time drum composing application in Java. The main issue I am trying to tackle is determining the rhythmic value of each played note. I am having accuracy issues because I am considering values down to 32nd notes at a metronome setting of 100 bpm, which gives an interval between consecutive notes of 75 ms. I am not 100% sure that the theoretical approach of considering time segments and assigning a rhythmic value to each one can be scaled to every bpm or time interval.
Do you think it is doable, also taking into consideration the human factor of playing? I guess this is a very specific, empirical question aimed at people who have developed similar apps.
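For reference, the time-segment approach described in the question might look roughly like the following sketch: given the BPM and the measured interval between two note onsets, snap the interval to the nearest supported subdivision of a quarter note. The class name, method name, and the set of subdivisions are illustrative assumptions, not taken from the actual application.

public class RhythmQuantizer {
    // Subdivisions of a quarter note: 1 = quarter, 2 = eighth, 4 = sixteenth, 8 = thirty-second.
    private static final int[] SUBDIVISIONS = {1, 2, 4, 8};

    // Returns the subdivision whose ideal duration is closest to the measured
    // inter-onset interval, e.g. 8 for a thirty-second note.
    public static int quantize(double intervalMs, double bpm) {
        double quarterMs = 60_000.0 / bpm;      // duration of a quarter note in ms
        int best = SUBDIVISIONS[0];
        double bestError = Double.MAX_VALUE;
        for (int sub : SUBDIVISIONS) {
            double idealMs = quarterMs / sub;   // e.g. 600 / 8 = 75 ms for a 32nd at 100 bpm
            double error = Math.abs(intervalMs - idealMs);
            if (error < bestError) {
                bestError = error;
                best = sub;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // A human playing ~80 ms apart at 100 bpm should still snap to a 32nd (75 ms ideal).
        System.out.println(quantize(80.0, 100.0)); // prints 8
    }
}

The scheme itself scales to any BPM because the segment boundaries are derived from the tempo; the practical question raised above is whether human timing error plus measurement jitter stays smaller than half the gap between neighbouring subdivisions.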
Standard Java is not a suitable language for real-time operations, and if timing accuracy is this important to you, you should not use plain Java (at least not on its own). In particular, the non-deterministic behaviour of garbage collection complicates matters on the Java side for real-time applications.
For my bachelor thesis, I need to write an Android app that gets very exact and consistent reaction times (every millisecond matters) from the user. It will be used for psychological studies. I am using the Android SDK and Java.
One of the ways the user can "react" is by touching the display.
At the moment, I call System.nanoTime() in the onTouchEvent(event) callback.
I subtract from it the start value (also taken with System.nanoTime()) to get the reaction time.
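As a point of reference, a minimal sketch of that measurement (with a hypothetical activity and field name, not the actual thesis code) might look like this:

import android.app.Activity;
import android.os.Bundle;
import android.view.MotionEvent;

public class ReactionTimeActivity extends Activity {

    private long stimulusShownAtNanos; // recorded when the stimulus is displayed

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // ... display the stimulus, then record the start time:
        stimulusShownAtNanos = System.nanoTime();
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        if (event.getAction() == MotionEvent.ACTION_DOWN) {
            long reactionNanos = System.nanoTime() - stimulusShownAtNanos;
            long reactionMillis = reactionNanos / 1_000_000;
            // reactionMillis includes whatever touch-pipeline latency precedes this callback,
            // which is exactly the concern described below.
        }
        return super.onTouchEvent(event);
    }
}

It may also be worth comparing the nanoTime() reading against MotionEvent.getEventTime(), which carries a timestamp assigned closer to the input event itself.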
But I am concerned about how fast, exact and consistent (over time / on different devices) the system calls this method after the user actually touched the display.
Possible problems I have in mind:
a) different delays on different devices because of the different hardware used
b) delays because of other threads who could be executed first
c) the high-level nature of the Java language: you never really know (and can't control) what is happening in the background, or in what order.
How could I find out about this? Could using the NDK (C++) help me get more accurate and consistent values? Thanks!
I need to write an Android app that gets very exact and consistent reaction times (every millisecond matters) from the user
Android is not a real-time operating system (RTOS).
Possible problems I have in mind
Your first two are valid. The third isn't, strictly speaking, in that Java has no more "happening in the background" stuff than other programming languages do. However, Java is intrinsically slower, which adds latency.
In most situations, you also need to take into account other apps that may be running, as they too will want CPU time, and there are only so many CPU cores to go around.
How could I find out about this?
You aren't going to have a great way to measure this with a human participant. After all, if you knew exactly when the user touched the screen (to measure the time between that moment and the onTouchEvent() call), you would just use that mechanism instead of onTouchEvent().
It is possible to create a non-human participant. There are people who have created robots that use a capacitive stylus to tap on the screen. In theory, if you have nicely synchronized clocks, you could determine the time when the robot tapped the screen and the time that you receive the onTouchEvent() call. Having "nicely synchronized clocks" may be difficult in its own right, and creating a screen-touching robot is its own engineering exercise.
Could using the NDK (C++) help me get more accurate and consistent values?
I don't know how you are defining "accurate" in this scenario. I would not expect the NDK to help with consistency. It should help minimize the latency between the screen touch and your finding out about that screen touch (using whatever NDK-based facilities that game developers use to find out about screen touches).
This might sound like a weird question, but how can you program time without using the time API of a programming language? Time is such an abstract concept; how can you write a program without using the predefined functions for time?
I was thinking it would have to be calculated from a count of processor computations, but if every computer performs at a different speed, how would you write code to measure the passage of time?
Assuming the language the program is written in doesn't matter, how would you do this?
EDIT: I should also say, not using system time, or any pre-generated version of time from the system
Typically time is provided to the language runtime by the OS layer. So, if you're running a C/C++ program compiled in a Windows environment, it's asking the Windows OS for the time. If you're running a Java program that is executing in the JVM on a Linux box, the Java program gets the time from the JVM, which in turn gets it from Linux. If you're running as JavaScript in a browser, the JavaScript runtime gets the time from the browser, which gets its time from the OS, etc.
At the lower levels, I believe the time the OS keeps is based on elapsed clock cycles in the hardware layer, which are then compared against some root time that you set in the BIOS or OS.
Updated with some more geek-detail:
Going even more abstract: if your computer runs at 1 GHz, that means its CPU changes "state" every 1/1,000,000,000th (10^-9) of a second (the period of a single transition from +voltage to -voltage and back). EVERYTHING in a computer is based on these transitions, so there are hardware timers on the motherboard that make sure these transitions happen with a consistent frequency. Since those hardware timers are so precise, they are the basis for "counting" time for anything that feeds into the calendar-time abstraction we use.
I'm not a hardware expert, but this is my best understanding from computer architecture classes and building basic circuits in school.
Clarifying based on your edit:
A program doesn't inherently "know" anything about how slow or fast it's running, so on its own there is no way to accurately track the passage of time. Some languages can access information like "cycle count" and "processor speed" from the OS, so you could approximate some representation of the passage of time based on that, without having to use a time API. But even that is sort of cheating, given the constraints of your question.
Simply put, you can't. There's no pure-software way of telling time.
Every computer has a number of different hardware timers. These fire off interrupts once triggered, which is how the processor itself can keep track of time. Without these or some other external source, you cannot keep track of time.
The hardware clock in your motherboard contains a precisely tuned quartz crystal that vibrates at a frequency of 32,768 Hz (2^15) when a precise current is passed through it. The clock counts these vibrations to mark the passage of time. System calls reference the hardware clock for time, and without the hardware clock your PC wouldn't have the faintest idea whether, between two arbitrary points in execution, a second, a day, or a year had passed.
This is what the system calls reference, and trying to use anything else is an exercise in futility, because everything else in the computer is designed simply to function as fast as possible based on the voltage it happens to be receiving at the time.
You could try counting CPU clock cycles, but the CPU clock is simply designed to vibrate as fast as possible based on the input voltage, and can vary based on load requirements and how stable the voltage from your power supply is. This makes it wholly unreliable as a method of measuring time: if you get a program to monitor the clock speed in real time, you will notice that it usually fluctuates constantly by +/- a few MHz.
Even hardware clocks are unreliable, because the voltage applied to the crystal, while tightly controlled, is still variable. If you turned off the NTP services and/or disconnected the machine from the internet, the time would probably drift by a few minutes per month, or even per week. The NTP servers reference atomic clocks that measure fundamental properties of physics, namely the oscillations of cesium atoms. One second is currently defined as:
the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium 133 atom.
oh... and that's measured by a computer as well.
Without a clock or a reference to the OS, you can't measure anything relative to the outside world. You can, of course, measure progress internally, knowing that a task is 1/3 of the way done or whatever. But depending on system load, CPU throttling from thermal requirements, other programs running, etc., the last 1/3 might take as long as the first 2/3, or longer. You can apply heuristics to load-balance long-running tasks against themselves (only), so that, for instance, things stay smooth when the number of tasks relative to threads varies, in order to achieve a desired performance characteristic, but the PC has to get its time from somewhere.

Really cheap clocks get their time from the fact that mains power runs at 60 Hz, so every 60 cycles a second goes by. But the actual frequency varies a bit, and is likely to keep drifting in one direction, so clocks like that get out of sync pretty fast, by seconds per day or more. I suppose with a camera and a hole in a box you could determine when the sun was at a particular position in the sky and work out the time that way, but we're getting pretty far afield here.
In Java, you will notice that the easiest way to get the time is
System.currentTimeMillis();
which is implemented as
public static native long currentTimeMillis();
That is a native method, implemented in native code, C in all likelihood. Your computer's CPU has an internal clock that can adjust itself. The native call is an OS call that ultimately reads that hardware value, possibly with some software transformation applied along the way.
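As a small illustration of how those calls are typically used, wall-clock time from System.currentTimeMillis() is often paired with the monotonic System.nanoTime() when measuring elapsed intervals; this sketch only demonstrates the standard API, nothing specific to the question:

public class ClockDemo {
    public static void main(String[] args) throws InterruptedException {
        long wallStart = System.currentTimeMillis(); // wall-clock time obtained from the OS
        long monoStart = System.nanoTime();          // monotonic counter, only meaningful as a difference

        Thread.sleep(250); // stand-in for some work

        long wallElapsedMs = System.currentTimeMillis() - wallStart;
        long monoElapsedMs = (System.nanoTime() - monoStart) / 1_000_000;

        // The wall-clock result can jump if the OS adjusts its clock (NTP, manual change);
        // the nanoTime() result cannot, which is why it is preferred for measuring intervals.
        System.out.println(wallElapsedMs + " ms vs " + monoElapsedMs + " ms");
    }
}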
I think you kind of answered your own question in a way: "Time is such an abstract concept." So I would argue it depends on what exactly you're trying to measure. If it's algorithmic efficiency, we don't care about a concrete measure of time, only how much longer the algorithm takes with respect to the number of inputs (big O notation). If it's how long it takes the earth to spin on its axis, or some fraction of that, then obviously you need something external to the computer to tell it when one iteration started and when it ended; thereafter, ignoring CPU clock drift, the computer should do a good job of telling you what time of day it is.
It is possible, however, I suspect it would take quite some time.
You could use the probability that your computer will be hit by a cosmic ray.
Reference: Cosmic Rays: what is the probability they will affect a program?
You would need to create a program that manipulates large amounts of data in the computer's memory, thus making it susceptible to cosmic ray intrusion. Such data would become corrupted at a certain point in time.
The program should be able to check the integrity of the data and mark the moment when its data becomes partially corrupted. When this happens, the program should also be able to generate another frame of reference, for example how many times a given function runs between two cosmic-ray hits.
Then, these intervals should be recorded in a database and averaged after a few billion/trillion/zillion occurrences, thereby reducing the projected randomness of a cosmic-ray hit.
At that point and onward, the computer would be able to tell time by a cosmic-ray average-hit coefficient.
Of course, this is an oversimplified solution. I am quite certain the hardware would fail during this time, the cosmic rays could hit runnable code zones of memory instead of raw data, the hit rate might change due to our solar system's continuous motion through the galaxy, etc.
However, it is indeed possible...
I'm writing a MOS 6502 processor emulator as part of a larger project I've undertaken in my spare time. The emulator is written in Java, and before you say it, I know it's not going to be as efficient and optimized as if it were written in C or assembly, but the goal is to make it run on various platforms, and it's pulling 2.5 MHz on a 1 GHz processor, which is pretty good for an interpreted emulator. My problem is quite the contrary: I need to limit the number of cycles to 1 MHz. I've looked around but not seen many strategies for doing this. I've tried a few things, including checking the time after a number of cycles and sleeping for the difference between the expected time and the actual time elapsed, but checking the time slows down the emulation by a factor of 8. Does anyone have any better suggestions, or perhaps ways to optimize time polling in Java to reduce the slowdown?
The problem with using sleep() is that you generally only get a granularity of 1ms, and the actual sleep that you will get isn't necessarily even accurate to the nearest 1ms as it depends on what the rest of the system is doing. A couple of suggestions to try (off the top of my head-- I've not actually written a CPU emulator in Java):
stick to your idea, but check the time between a large-ish number of emulated instructions (execution is going to be a bit "lumpy" anyway especially on a uniprocessor machine, because the OS can potentially take away the CPU from your thread for several milliseconds at a time);
as you want to execute on the order of 1,000 emulated instructions per millisecond, you could also try just hanging on to the CPU between "instructions": have your program periodically work out, by trial and error, how many runs through a loop it needs between instructions to "waste" enough CPU to make the timing work out at 1 million emulated instructions/sec on average (you may want to see if setting your thread to low priority helps system performance in this case). A sketch combining these ideas follows below.
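A minimal sketch of the batched approach, assuming a hypothetical Cpu interface whose step() method executes one instruction and returns the number of 6502 cycles it consumed (the real emulator's API will differ):

public class CycleThrottle {
    private static final long TARGET_HZ = 1_000_000;  // 1 MHz 6502
    private static final long BATCH_CYCLES = 10_000;  // check the clock only every ~10 ms of emulated time

    /** Hypothetical CPU interface. */
    public interface Cpu {
        int step(); // execute one instruction, return its cycle count
    }

    public static void run(Cpu cpu) throws InterruptedException {
        long startNanos = System.nanoTime();
        long executedCycles = 0;

        while (true) {
            // Run a batch without touching the clock, so time polling stays cheap.
            long batchEnd = executedCycles + BATCH_CYCLES;
            while (executedCycles < batchEnd) {
                executedCycles += cpu.step();
            }

            // Where should the emulated clock be by now, in real time?
            long expectedNanos = executedCycles * 1_000_000_000L / TARGET_HZ;
            long aheadNanos = expectedNanos - (System.nanoTime() - startNanos);

            if (aheadNanos > 2_000_000) {
                // More than ~2 ms ahead of real time: sleep off most of it.
                Thread.sleep(aheadNanos / 1_000_000);
            } else if (aheadNanos > 0) {
                // Small lead: busy-wait, since sleep granularity is too coarse.
                long spinUntil = System.nanoTime() + aheadNanos;
                while (System.nanoTime() < spinUntil) { /* spin */ }
            }
            // If the emulation is behind real time, just keep executing and let it catch up.
        }
    }
}

Batching keeps the timing overhead to a couple of System.nanoTime() calls per ~10,000 emulated cycles rather than one per instruction, which is presumably where the 8x slowdown mentioned in the question comes from.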
I would use System.nanoTime() in a busy wait as #pst suggested earlier.
You can speed up the emulation by generating byte code. Most instructions should translate quite well and you can add a busy wait call so each instruction takes the amount of time the original instruction would have done. You have an option to increase the delay so you can watch each instruction being executed.
To make it really cool you could generate 6502 assembly code as text with matching line numbers in the byte code. This would allow you to use the debugger to step through the code, breakpoint it and see what the application is doing. ;)
A simple way to emulate the memory is to use a direct ByteBuffer, or native memory via the Unsafe class, to access it. This gives you a block of memory you can access as any data type in any order.
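For instance (a sketch, not taken from any particular emulator), the 6502's 64 KB address space could be backed by a direct ByteBuffer like this:

import java.nio.ByteBuffer;

public class Memory6502 {
    // 64 KB address space, allocated outside the Java heap.
    private final ByteBuffer ram = ByteBuffer.allocateDirect(0x10000);

    public int readByte(int address) {
        return ram.get(address & 0xFFFF) & 0xFF;   // unsigned 8-bit read
    }

    public void writeByte(int address, int value) {
        ram.put(address & 0xFFFF, (byte) value);
    }

    public int readWord(int address) {
        // The 6502 is little-endian; compose from two byte reads so addresses wrap at 0xFFFF.
        return readByte(address) | (readByte(address + 1) << 8);
    }
}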
You might be interested in examining the Java Apple Computer Emulator (JACE), which incorporates 6502 emulation. It uses Thread.sleep() in its TimedDevice class.
Have you looked into creating a Timer object that goes off at the cycle length you need? You could have the timer itself initiate the next loop.
Here is the documentation for the Java 6 version:
http://download.oracle.com/javase/6/docs/api/java/util/Timer.html
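A rough illustration of that idea (the tick size, batch size, and runCycles() placeholder are assumptions) would schedule a fixed-rate task that executes one timeslice worth of emulated cycles per tick:

import java.util.Timer;
import java.util.TimerTask;

public class TimerPacedEmulator {
    private static final int TICK_MS = 10;              // timer period
    private static final int CYCLES_PER_TICK = 10_000;  // 1 MHz x 10 ms

    public static void main(String[] args) {
        Timer timer = new Timer("emulator-clock");
        timer.scheduleAtFixedRate(new TimerTask() {
            @Override
            public void run() {
                runCycles(CYCLES_PER_TICK); // one tick's worth of emulated cycles
            }
        }, 0, TICK_MS);
    }

    private static void runCycles(int cycles) {
        // Placeholder for the emulator's fetch/decode/execute loop.
    }
}

Note that java.util.Timer only offers millisecond-level granularity, and fixed-rate scheduling corrects for drift only on average, so the emulated clock will still be slightly lumpy between ticks.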
I'm trying to do a typical "A/B testing"-like approach on two different implementations of a real-life algorithm, using the same data set in both cases. The algorithm is deterministic in terms of execution, so I really expect the results to be repeatable.
On the Core 2 Duo, this is also the case. Using just the linux "time" command I'll get variations in execution time around 0.1% (over 10 runs).
On the i7 I will get all sorts of variations, and I can easily have 30% variations up and down from the average. I assume this is due to the various CPU optimizations that the i7 does (dynamic overclocking etc), but it really makes it hard to do this kind of testing. Is there any other way to determine which of 2 algorithms is "best", any other sensible metrics I can use ?
Edit: The algorithm does not run for very long, and this is actually the real-life scenario I'm trying to benchmark, so running it repeatedly is not really an option as such.
See if you can turn off the dynamic over-clocking in your BIOS. Also, ditch all possible other processes running when doing the benchmarking.
Well, you could use O-notation principles to determine the performance of algorithms. This will give you the theoretical speed of an algorithm.
http://en.wikipedia.org/wiki/Big_O_notation
If you absolutely must know the real-life speed of the algorithm, then of course you must benchmark it on a system. But using the O-notation you can see past all that and focus only on the factors/variables that are important.
You didn't indicate how you're benchmarking. You might want to read this if you haven't yet: How do I write a correct micro-benchmark in Java?
If you're running a sustained test I doubt dynamic clocking is causing your variations. It should stay at the maximum turbo speed. If you're running it too long perhaps it's going down one multiplier for heat. Although I doubt that, unless you're over-clocking and are near the thermal envelope.
Hyper-Threading might be playing a role. You could disable that in your BIOS and see if it makes a difference in your numbers.
On Linux you can lock the CPU speed to stop clock-speed variation. ;)
You need to make the benchmark as realistic as possible. For example, if you run an algorithm flat out and take an average, you might get very different results from performing the same tasks every 10 ms. That is, I have seen 2x to 10x variation (between flat out and relatively low load), even with a locked clock speed.
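To make that concrete, here is a rough sketch (runAlgorithm() is a hypothetical stand-in for the code under test) that times the same work flat out and then paced at one invocation every 10 ms:

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class PacedVsFlatOut {
    private static final int RUNS = 1_000;

    public static void main(String[] args) throws InterruptedException {
        // Flat out: back-to-back invocations.
        long start = System.nanoTime();
        for (int i = 0; i < RUNS; i++) {
            runAlgorithm();
        }
        System.out.println("flat out: " + (System.nanoTime() - start) / RUNS + " ns/run");

        // Paced: one invocation every 10 ms, closer to a low-load production pattern.
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        CountDownLatch done = new CountDownLatch(RUNS);
        long[] totalNanos = {0};
        scheduler.scheduleAtFixedRate(() -> {
            if (done.getCount() == 0) {
                return;
            }
            long t0 = System.nanoTime();
            runAlgorithm();
            totalNanos[0] += System.nanoTime() - t0;
            done.countDown();
        }, 0, 10, TimeUnit.MILLISECONDS);
        done.await();
        scheduler.shutdown();
        System.out.println("paced:    " + totalNanos[0] / RUNS + " ns/run");
    }

    private static void runAlgorithm() {
        // Hypothetical placeholder for the algorithm under test.
    }
}

Because only the algorithm's own execution time is accumulated in the paced case, any gap between the two averages comes from effects such as cache state, clock scaling, and scheduling rather than from the pauses themselves.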
Could you please give me a real example of a latency-driven or performance-driven application? What differences do the two have, and what do they require when designing a system in Java?
Thanks.
Examples
An example of a latency-driven Java application is a signal processor or command+control unit for a radar. The JEOPARD project recently implemented such a thing, and the AN/FPS-85 radar is another example. Both of these are Java examples, and both use an instance of Real-Time Java. The latter uses RTSJ.
Why are they "latency-driven"? Well, computations are only correct if they are delivered on time: when the computation is intended to steer a phased-array radar beam so that it impacts the predicted location of an object under track, the computation is incorrect if it is late. Therefore, there is a latency bound on the loop that runs from the last paint of the object to the control steering the beam onto the next predicted location.
These types of systems do have throughput requirements, but they tend not to be the driving requirements. Instead, specific latencies for specific activities must be met for correct operation, and that is the primary correctness metric.
Design techniques for these systems.
There are two common approaches. The first is basically to ignore the time requirements (latency, etc.), get the code "working" in the sense of being computationally correct, and then conduct performance engineering/optimization until the system implicitly behaves as you want. The second is to articulate clear timeliness requirements and design with those requirements in mind for each component. Given my background, I'm strongly biased toward the second path, because the cost of taking a conventional development through integration and test and then tuning it into the correct behavior tends to be very high and very risky. The more performance/latency-dependent the system is, the more you should ignore the rule "avoid premature optimization." It's not optimization if it's a correctness criterion. (This is not an excuse to write murky, fast code, but a bias.)
If some measure of end-to-end latency is required, a typical approach is to analyze what you expect to be the stressing conditions and develop a "latency budget", allocating portions of the latency to sequential bits of computation. As the system evolves, the budget may change around, but it becomes a useful design and test tool.
Finally, in Java, this might be manifest in three different approaches, which are really on a spectrum:
1) Just build the damn thing, and tune it once it more or less works. (Conventional design usually works this way.)
2) Build the thing, but also build in instrumentation/metrics to explicitly include latency context as work units progress through your software. A simple example of this is to timestamp arriving data and pass that timestamp along with the packet/unit as it is operated on (see the sketch after this list). This is really easy for some systems, and basically impossible for others. If it's possible, it's highly recommended, because the timeliness context is then explicitly available and can be used when making resource-management decisions (i.e., assigning thread priorities, deadlines, queue priorities, etc.)
3) Do the analysis up-front, and use a real-time stack with formal timeliness parameters. This is the heavyweight solution, and is appropriate when you have high-criticality, safety-critical, or simply hard real-time constraints. Even if you aren't in that world, RTSJ implementations like Oracle's JavaRTS still offer benefits for soft real-time systems simply because they reduce jitter/non-determinism. There is usually a tradeoff here against raw throughput performance.
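A minimal sketch of option 2 (the class names and the queue-based pipeline are illustrative assumptions, not a prescribed design) carries the arrival timestamp along with each work unit and reports the end-to-end latency when processing completes:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class TimestampedPipeline {

    /** A work unit that remembers when it entered the system. */
    static final class WorkUnit {
        final byte[] payload;
        final long arrivalNanos;

        WorkUnit(byte[] payload) {
            this.payload = payload;
            this.arrivalNanos = System.nanoTime(); // stamped on arrival, before queuing
        }
    }

    private final BlockingQueue<WorkUnit> queue = new LinkedBlockingQueue<>();

    public void submit(byte[] data) {
        queue.add(new WorkUnit(data));
    }

    public void processLoop() throws InterruptedException {
        while (!Thread.currentThread().isInterrupted()) {
            WorkUnit unit = queue.take();
            handle(unit.payload);
            long latencyMicros = (System.nanoTime() - unit.arrivalNanos) / 1_000;
            // The latency context is now explicitly available for resource-management decisions,
            // e.g. shedding stale units or raising priority as a latency budget runs out.
            System.out.println("end-to-end latency: " + latencyMicros + " us");
        }
    }

    private void handle(byte[] payload) {
        // Placeholder for the actual work performed on the unit.
    }
}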
I have only addressed the computational side here. Obviously if your system includes or even is defined by networks, there's a whole world of latency/QoS management on that side. Common interfaces to time-sensitive Java applications there might include RSVP or perhaps specific middleware like DDS or CORBA or whatever. Probably half of the existing time-sensitive applications eschew middleware in favor of their own TCP, UDP, raw IP, or even specialized low-level solution, or build on top of a proprietary/special purpose bus.
Best Case vs. Common Case
In networking terms, throughput and latency are distinct dimensions of system performance. Throughput measures the rate (units per second) at which the system can process / transfer information. Latency measures the time (seconds) by which a computation/communication completes. Both of these can be used in common- or worst-case descriptions of performance, though it's a little hard to get your arms around "worst-case throughput" in many settings. For a specific example, consider the difference between a satellite link and a copper link over the same distance. In that setting, the satellite link has high latency (10's to 100's of milliseconds) because of speed of light time, but may also have very high bandwidth, and thus higher throughput. A single copper cable might have lower latency, but also have lower throughput (due to lower bandwidth).
In the computational setting, latency tends to be a measure of worst-case computation (though you often care about average latency, too), while throughput tends to be a measure of common-case computation rate. Examples of common latency metrics might be task-switch latency, interrupt service latency, packet service latency, etc.
Real-time or "time-critical" systems TEND to be dominated by concern for worst-case behaviors, and worst-case latencies in particular. Conventional/general-purpose systems TEND to be dominated by concern for maximum throughput. Soft real-time systems (e.g., VOIP or media) tend to manage both simultaneously, and tolerate a wider range of tradeoffs. There are corner cases like user interfaces, where perceived performance is a complicated mixture of both.
Edit to add: Some related, Java-specific SO questions. Coded using primitives only? and RTSJ implementations.
Latency is a networking term; think of it as "time to get the first byte."
Bandwidth is the other, related networking term. Think of it as "time to transfer a large block of data."
These two things are more or less independent factors. For example, NetFlix sending you a BluRay is high latency (it takes a long time to get the first bit) but also high bandwidth (you get lots and lots of data in one fell swoop).
Performance is a higher-level concept. Performance is totally subjective: it can really only be discussed as a delta compared to another system.
Latency, bandwidth, CPU, memory, bus, disk, and of course the code itself are all a factor in dealing with performance.