Java timing accuracy on Windows XP vs. Windows 7 - java

I have a bizarre problem - I'm hoping someone can explain to me what is happening and suggest a possible workaround. I am implementing a Z80 core in Java and attempting to slow it down by using a java.util.Timer object in a separate thread.
The basic setup is that I have one thread running an execute loop, 50 times per second. Within this execute loop, however many cycles are required are executed, and then wait() is invoked. The external Timer thread invokes notifyAll() on the Z80 object every 20ms, simulating a PAL Sega Master System clock frequency of 3.54 MHz (ish).
The method I have described above works perfectly on Windows 7 (tried two machines) but I have also tried two Windows XP machines and on both of them, the Timer object seems to be oversleeping by around 50% or so. This means that one second of emulation time is actually taking around 1.5 seconds or so on a Windows XP machine.
I have tried using Thread.sleep() instead of a Timer object, but this has exactly the same effect. I realise the granularity of time in most OSes isn't better than 1ms, but I can put up with 999ms or 1001ms instead of 1000ms. What I can't put up with is 1562ms. I just don't understand why my method works OK on newer versions of Windows, but not the older one - I've investigated interrupt periods and so on, but haven't managed to develop a workaround.
Could anyone please tell me the cause of this problem and a suggested workaround? Many thanks.
Update: Here is the full code for a smaller app I built to show the same issue:
import java.util.Timer;
import java.util.TimerTask;

public class WorkThread extends Thread
{
    private Timer timerThread;
    private WakeUpTask timerTask;

    public WorkThread()
    {
        timerThread = new Timer();
        timerTask = new WakeUpTask(this);
    }

    public void run()
    {
        timerThread.schedule(timerTask, 0, 20);
        while (true)
        {
            long startTime = System.nanoTime();
            for (int i = 0; i < 50; i++)
            {
                int a = 1 + 1;
                goToSleep();
            }
            long timeTaken = (System.nanoTime() - startTime) / 1000000;
            System.out.println("Time taken this loop: " + timeTaken + " milliseconds");
        }
    }

    synchronized public void goToSleep()
    {
        try
        {
            wait();
        }
        catch (InterruptedException e)
        {
            System.exit(0);
        }
    }

    synchronized public void wakeUp()
    {
        notifyAll();
    }

    private class WakeUpTask extends TimerTask
    {
        private WorkThread w;

        public WakeUpTask(WorkThread t)
        {
            w = t;
        }

        public void run()
        {
            w.wakeUp();
        }
    }
}
All the main class does is create and start one of these worker threads. On Windows 7, this code produces a time of around 999ms - 1000ms, which is totally fine. Running the same jar on Windows XP however produces a time of around 1562ms - 1566ms, and this is on two separate XP machines that I have tested this. They are all running Java 6 update 27.
I find this problem happens because the Timer is sleeping for only 20ms (quite a small value) - if I bundle all the execute loops for a single second into one wait() - notifyAll() cycle, this produces the correct result - I'm sure people who see what I'm trying to do (emulate a Sega Master System at 50fps) will see why this is not a solution though - it won't give an interactive response time, since it skips 49 of every 50 frames. As I say, Win7 copes fine with this. Sorry if my code is too large :-(

Could anyone please tell me the cause of this problem and a suggested workaround?
The problem you are seeing probably has to do with clock resolution. Some Operating Systems (Windows XP and earlier) are notorious for oversleeping and being slow with wait/notify/sleep (interrupts in general). Meanwhile other Operating Systems (every Linux I've seen) are excellent at returning control at quite nearly the moment specified.
The workaround? For short durations, use a live wait (busy loop). For long durations, sleep for less time than you really want and then live wait the remainder.
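For example, a sketch of the long-duration variant (the 20 ms target and the 15 ms under-sleep are illustrative assumptions, not tuned values):
// sleep for most of the interval, then live wait the rest
long deadline = System.nanoTime() + 20L * 1000 * 1000; // 20 ms from now
Thread.sleep(15); // deliberately undersleep; InterruptedException handling omitted
while (System.nanoTime() < deadline) {
    // live wait (busy loop) for the final stretch
}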

I'd forgo the TimerTask and just use a busy loop:
long sleepUntil = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(20);
while (System.nanoTime() < sleepUntil) {
    Thread.sleep(2); // catch of InterruptedException left out for brevity
}
The two millisecond delay gives the host OS plenty of time to work on other stuff (and you're likely to be on a multicore anyway). The remaining program code is a lot simpler.
If the hard-coded two milliseconds are too much of a blunt instrument, you can calculate the required sleep time and use the Thread.sleep(long, int) overload.
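For example, a sketch of that calculation, reusing the sleepUntil deadline from the snippet above:
long remaining = sleepUntil - System.nanoTime();
if (remaining > 0) {
    // the overload takes whole milliseconds plus a 0-999999 nanosecond remainder
    Thread.sleep(remaining / 1000000, (int) (remaining % 1000000));
}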

You can set the timer resolution on Windows XP.
http://msdn.microsoft.com/en-us/library/windows/desktop/dd757624%28v=vs.85%29.aspx
Since this is a system-wide setting, you can use a tool to set the resolution so you can verify whether this is your problem.
Try this out and see if it helps: http://www.lucashale.com/timer-resolution/
You might see better timings on newer versions of Windows because, by default, newer versions might have tighter timings. Also, if you are running an application such as Windows Media Player, it improves the timer resolution. So if you happen to be listening to some music while running your emulator, you might get great timings.
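If you want to confirm that coarse timer resolution is really the culprit, a quick measurement sketch like this shows what Thread.sleep(1) actually costs on a given machine:
// average the cost of 100 one-millisecond sleeps; on a 15.6ms interrupt
// period this reports far more than the requested 1000 microseconds
long start = System.nanoTime();
for (int i = 0; i < 100; i++) {
    Thread.sleep(1); // InterruptedException handling omitted
}
long avgMicros = (System.nanoTime() - start) / 100 / 1000;
System.out.println("Average Thread.sleep(1) took ~" + avgMicros + " microseconds");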

Related

LineChart performance decrease over time

So we're working on a signal processing application; there's a specific type of hardware in the PC and a C driver communicating with it.
The application frontend/gui is written in JavaFX. We're having some issues with the JavaFX LineChart, we're measuring electrical signal frequency and trying to plot it on the aforementioned LineChart.
The measurements are running in a loop until 1000 samples are gathered, we've been testing with 100Hz signal, which means that it takes 10s to get these 1000 samples.
There's a separate 'LineChart' thread running and checking (every 10ms) whether there are new samples available, if so these are added to the LineChart, if the measurement thread is finished the LineChart thread resets the LineChart (clears the series data) and the process starts over.
Everything runs fine for the first ~20 min, after which it seems that the LineChart 'slows down'; it looks as if the drawing is not as fast/dynamic as in the beginning.
We've checked pretty much everything we could in the application and found nothing, so we've created a separate project which only has the LineChart and a thread that adds samples to the chart every 10ms (up to 1000 samples). We've observed the same behavior, here's how it's done:
Thread t = new Thread(new Runnable() {
    @Override
    public void run() {
        int iteration = 0;
        long start = 0;
        long stop = 0;
        while (run) {
            CountDownLatch latch = new CountDownLatch(1);
            start = System.currentTimeMillis();
            for (int i = 0; i < 1001; i++) {
                double ran = random(50, 105);
                final int c = i;
                Platform.runLater(() -> {
                    series.getData().add(new XYChart.Data<>(c, ran));
                    if (c == 1000) {
                        System.out.print("Points: " + series.getData().size());
                        series.getData().clear();
                        latch.countDown();
                    }
                });
                try {
                    Thread.sleep(10);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
            iteration++;
            stop = System.currentTimeMillis();
            try {
                latch.await();
            } catch (InterruptedException e) {}
            System.out.println(", Iteration : " + iteration + ", elapsed: " + (stop - start) + " [ms]");
        }
    }
});
What are we missing here? Why does the performance drop after ~30-45 min in the above example? Any ideas?
The above piece of code was run for 8h; each time, all points were added to the chart and the 'drawing time' was comparable (between 10100ms and 10350ms).
You don't have anything wrong with the code that I can see, but you keep adding to the series. I don't think this is an issue with the code; it's the machine trying to keep up and manage ALL the points you have. You said 1000 in 10s, which means that after 20 mins you have 120,000 points in storage, being managed and plotted. Assuming that you record to the tenths place, that's a ton of storage, and more likely than not you're seeing the processing slow down with all that info. Simply put, the machine can't handle it.
This is an older question, but in case anyone stumbles across it looking for performance problems, there is a huge hit here in the way data is added to the series.
Points should be added to a series all at once when possible, rather than individually.
In the case of the above example, if the code collected all encountered data points into a list, and then added the entire list to the series using an addAll call, performance would increase. The frequency of the addAll calls can be set by trial and error for aesthetic performance, but the refresh rate a user can perceive is much lower than the rate at which Platform.runLater is trying to update.
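A minimal sketch of that batching idea (series and the random(50, 105) helper are assumed from the question; the batch size is illustrative):
// collect the points off the FX thread first...
List<XYChart.Data<Number, Number>> batch = new ArrayList<>();
for (int i = 0; i < 1001; i++) {
    batch.add(new XYChart.Data<Number, Number>(i, random(50, 105)));
}
// ...then hand them to the chart in a single call
Platform.runLater(() -> series.getData().addAll(batch));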
I found the reason: it was lack of hardware acceleration on Linux platforms where AMD graphics cards were installed. Oracle did not provide hardware support, so JavaFX was falling back to a poor software pipeline, resulting in performance decay. The piece of code from the original post works with no problem on Windows machines or Linux machines with Nvidia cards, BUT not on Linux with AMD cards. On Linux with AMD cards you have to manually enforce software acceleration (as opposed to the default one).

Java Threads When Using Replicate Runs

I have a really odd bug with some Java code I'm writing. If I run some parallel code I've written, it runs perfectly; however, if I run the same code multiple times in the same run, the runtime gets slower and slower each time.
If I increase the number of threads from 4 to 8, the slowdown is more dramatic with each iteration.
Each run is completely independent; I even set the runtime variable to null in between to clear the old run, so I have no idea what could be slowing it down. I've been using VisualVM and it says that .run() is spending all of its time in "Self Time" or "Thread.init()", which is not helpful.
Some snippets of code:
for (int r = 0; r < replicateRuns; ++r) {
    startTime = System.nanoTime();
    result = newRuntime.execute();
    result = null;
    System.out.println((System.nanoTime() - startTime) / 1000000);
    total += System.nanoTime() - startTime;
}

parentThread = this;
thread = new Thread[numberOfThreads];
barrier = new CyclicBarrier(numberOfThreads);
for (int i = 0; i < numberOfThreads; i++) {
    thread[i] = new Thread(new Runtime(parentThread, variation, delta, eta, i,
            numberOfThreads), i + "");
    thread[i].start();
}
for (int i = 0; i < numberOfThreads; i++) {
    try {
        thread[i].join();
        thread[i] = null;
    } catch (Exception ex) {
        ex.printStackTrace();
    }
}
So, any clues as to why I get decent results if I launch the Java app many times, but everything slows down if I run it many times within the same launch, even though as far as I can see I'm nulling everything so the GC comes and cleans it up?
I'm using thread-local variables, but from what I've read they're all cleaned up when the thread itself is set to null.
Cheers for any help!
EDIT 1:
Thanks for all the help. The plot thickens: on my Windows desktop (as opposed to my MacBook) there are no issues at all; each run completes fine with no slowdown in between, even when I increase the number of runs! After staring at this for a day, I'm going to try again with Eclipse MAT first thing in the morning.
With regards to the source, I'm extending the MOEA Framework with a parallel version of MOEA/D, hence the many dependencies and classes. You can find the source of my class here. Essentially, iterate() is called repeatedly until numberOfEvaluations reaches a set figure.
I believe the problem, as others are saying here, is that you are not 'stopping' your threads in the right way, so to speak.
The best way in my experience is to store a state flag in the thread, in a boolean variable, e.g. isRunning. Then inside your loop you test the state of the isRunning flag, i.e.
// inside the run method
while (isRunning) {
    // your code goes here
}
This way, on each iteration of the loop, you are checking the current state of the flag. When you set it to false in, for example, your custom stop() method, the next iteration of the loop will cause the thread to exit its run method, ending the life of your thread. Well, technically it now becomes ready to be garbage collected; its memory will be deallocated at some point in the near future, provided there is no reference to this thread hanging around somewhere in your code.
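A minimal, self-contained sketch of this pattern (note the volatile modifier, which the snippet above omits; without it the flag change made by one thread may never become visible to the worker):
public class StoppableWorker implements Runnable {
    // volatile so a stop() call from another thread is seen promptly
    private volatile boolean isRunning = true;

    @Override
    public void run() {
        while (isRunning) {
            // your code goes here
        }
        // run() returns here; the thread dies and becomes eligible for GC
    }

    public void stop() {
        isRunning = false;
    }
}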
There are more sources showing this approach; for example, check out this discussion on LinkedIn.
As a side note, it would actually be useful to see what exactly the newRuntime and result variables are, their classes and inheritance, etc. Otherwise we can only guess at what is actually going on in your code.
You are always generating new threads and never disposing of them.
If the number of threads is larger than the number of processor cores, the OS has to switch between threads, which can degrade performance dramatically.
If you are using the NetBeans IDE, the profiler lets you see the threads and their status.

Make a simple timer in Java

I can't seem to figure out how to make a simple timer in Java. All I need it to do is display the time, really. So just a start method, and it keeps counting up: 0:00, 0:01, 0:02, etc. I've seen some other similar forum posts on this, but all the code is kind of complicated for my level of understanding; I'm kind of new to Java. But it shouldn't be that hard to make a timer that performs such a basic function? If anyone could help it would be greatly appreciated :)
This is not difficult. However, I would caution you that I have seen some very confused answers on Stack Overflow, and in some cases shockingly poor coding habits, so be very careful. First let me answer the question.
It seems that the biggest mistake programmers make when implementing a timer is thinking that they need something to keep track of the current time; that is, they write some sort of loop that increments a variable every second, or some such silly thing. You do not need to write code to keep track of the time. The function System.currentTimeMillis() will do that for you, and it does so quite accurately.
Timer code will involve two aspects which many programmers mix up:
calculation of the time
refresh of the display
All you need to do to calculate the time to display, is to record the time that the timer started:
long startTime = System.currentTimeMillis();
Later, when you want to display the amount of time, you just subtract this from the current time.
long elapsedTime = System.currentTimeMillis() - startTime;
long elapsedSeconds = elapsedTime / 1000;
long secondsDisplay = elapsedSeconds % 60;
long elapsedMinutes = elapsedSeconds / 60;
//put here code to format and display the values
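For instance, the formatting step might look like this (a sketch using the variables above):
// e.g. 62 seconds of elapsed time displays as "1:02"
System.out.println(String.format("%d:%02d", elapsedMinutes, secondsDisplay));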
The biggest mistake that programmers make is to think they need a variable to hold the current time and then to write code to increment that variable every second, e.g. something called "elapsedSeconds" which they maintain. The problem is that you can schedule code to be called every second, but there is no guarantee of exactly when that code will be called. If the system is busy, that code might be called quite a bit later than the second. If the system is extremely busy (for example page fetching from a faulty disk) it could actually be several seconds late. Code that uses the Thread.sleep(1000) function to loop every second will find that the error builds up over time. If sleep returns 300ms late one time, that error is compounded into your calculation of what time it is. This is all completely unnecessary because the OS has a function to tell you the current time.
The above calculation will be accurate whether you run this code every second, 100 times a second, or once every 3.572 seconds. The point is that currentTimeMillis() is the accurate representation of the time regardless of when this code is called -- and that is an important consideration because thread and timer events are not guaranteed to be accurate at a specific time.
The second aspect of a timer is refresh of the display. This will depend upon the technology you are using to display with. In a GUI environment you need to schedule paint events. You would like these paint events to come right after the time that the display is expected to change. However, it is tricky. You can request a paint event, but there may be hundreds of other paint events queued up to be handled before yours.
One lazy way to do this is to schedule 10 paint events per second. Because the calculation of the time does not depend on the code being called at a particular point in time, and because it does not matter if you re-paint the screen with the same time, this approach more or less guarantees that the displayed time will show the right time within about 1/10 of a second. This seems a bit of a waste, because 9 times out of 10 you are painting what is already on the screen.
If you are writing a program with animation of some sort (like a game) which is refreshing the screen 30 times a second, then you need do nothing. Just incorporate the timer display call into your regular screen refresh.
If paint events are expensive, or if you are writing a program that does terminal-style output, you can optimize the scheduling of events by calculating the amount of time remaining until the display will change:
long elapsedTime = System.currentTimeMillis() - startTime;
long timeTillNextDisplayChange = 1000 - (elapsedTime % 1000);
The variable timeTillNextDisplayChange holds the number of milliseconds you need to wait until the seconds part of the timer will change. You can then schedule a paint event to occur at that time, possibly calling Thread.sleep(timeTillNextDisplayChange) and after the sleep do the output. If your code is running in a browser, you can use this technique to update the page DOM at the right time.
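Putting the two pieces together, a terminal-style timer loop might look like this sketch (startTime as recorded earlier):
while (true) {
    long elapsedTime = System.currentTimeMillis() - startTime;
    System.out.println(String.format("%d:%02d",
            elapsedTime / 60000, (elapsedTime / 1000) % 60));
    // wait exactly until the seconds digit is due to change
    long timeTillNextDisplayChange = 1000 - (elapsedTime % 1000);
    Thread.sleep(timeTillNextDisplayChange); // InterruptedException handling omitted
}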
Note that there is nothing in this calculation of the display refresh that affects the accuracy of the timer itself. The thread might return from sleep 10ms late, or even 500ms late, and the accuracy of the timer will not be affected. On every pass we calculate the time to wait from currentTimeMillis(), so being called late on one occasion will not cause later displays to be late.
That is the key to an accurate timer: do not expect the OS to call your routine or send the paint event exactly when you ask it to. Usually, of course, with modern machines, the OS is remarkably responsive and accurate; in test situations where you are not running much else, the timer seems to work. But in production, under rare stress situations, you do not want your timer "drifting" because the system is busy.
You can either use the Timer class from java.util or, in a more complicated way, use threads directly. Timer itself also runs its task on a thread, but it's pretty easy to understand and use.
Creating a simple timer like the one you described is very easy to code. I have written the code below for your reference; if you wish, you can enhance it.
import java.util.concurrent.TimeUnit;

public class PerfectTimer {
    public static void main(String[] args) throws InterruptedException {
        boolean x = true;
        long displayMinutes = 0;
        long starttime = System.currentTimeMillis();
        System.out.println("Timer:");
        while (x) {
            TimeUnit.SECONDS.sleep(1);
            long timepassed = System.currentTimeMillis() - starttime;
            long secondspassed = timepassed / 1000;
            if (secondspassed == 60) {
                // a full minute has passed; restart the seconds count
                secondspassed = 0;
                starttime = System.currentTimeMillis();
            }
            if ((secondspassed % 60) == 0)
                displayMinutes++;
            System.out.println(displayMinutes + "::" + secondspassed);
        }
    }
}
If you want to update something on the main thread (like UI components), it is better to use a Handler (note: this is Android's android.os.Handler):
Handler h = new Handler();
h.postDelayed(new Runnable() {
    @Override
    public void run() {
        // do something
    }
}, 20);
20 is the delay in ms before run() is called.
And run it in a loop (re-posting each time) to repeat.
I have created a Timer that has everything you might need in it.
I even documented it!
And I also compiled it for faster usage.
Here's an example:
//...
//For demo only!
public static void main(String[]a){
Timer timer=new Timer();
timer.setWatcher(new Timer.TimerWatcher(){
public void hasStopped(boolean stopped){
System.out.print(stopped+" | ");
}
public void timeElapsed(long nano, long millis, long seconds){
System.out.print(nano+", ");
System.out.print(millis+", ");
System.out.print(seconds+" | ");
}
public void timeLeft(long timeLeft){
System.out.print(timeLeft+"\r");
}
});
//Block the thread for 5 seconds!
timer.stopAfter(5, Timer.seconds); //You can replace this with Integer.MAX_VALUE.
//So that our watcher won't go to waste.
System.out.println();
}
//...
This is not for promotion; I made it to help people avoid wasting their time coding such classes themselves!

java square wave

I am trying to create a square wave on the parallel port with java. So far I have this implementation.
import java.util.concurrent.locks.LockSupport;

public class Wave extends Thread {
    public Wave() {
        super();
        setPriority(MAX_PRIORITY);
    }

    @Override
    public void run() {
        Wave.high();
        LockSupport.parkNanos(20000000);
        Wave.low();
        LockSupport.parkNanos(20000000);
    }

    public static native void high();
    public static native void low();
}
In which high() and low() are implemented using JNI (a shared C library controls the parallel port). It works pretty well; it generates a square wave with a period of about 40ms. Using an oscilloscope it looks like the standard deviation is about 10 microseconds when the computer is idle. When the computer is not idle the standard deviation becomes much larger. I think this is because more context switches happen and Threads stay too long in the waiting state and the specified 20 ms is not achieved accurately.
Is there a way to make my implementation more accurate? I know I could use hardware for this but I want to know if I can do this with software too.
Would an option be to "listen" to a clock and perform an action timed to the millisecond?
Just "listening" to the clock won't solve the problem of context switches causing jitter.
If you can dedicate a core to this:
bind the thread to the core;
move IRQ handling to other cores;
have a tight loop constantly checking the time (using System.nanoTime() or RDTSC/RDTSCP), and calling high()/low() as appropriate (see the sketch below).
This way you should be able to achieve very low jitter.
Of course, if the task is to simply produce a square wave, this is a pretty inefficient use of computing resources.
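For what it's worth, a sketch of that tight loop, assuming the native high()/low() calls from the question and the 20 ms half-period:
long next = System.nanoTime();
boolean level = false;
while (true) {
    if (System.nanoTime() >= next) {
        if (level) Wave.low(); else Wave.high();
        level = !level;
        // schedule the next edge relative to the previous one so error does not accumulate
        next += 20000000L;
    }
}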
I think there are going to be two sources of jitter.
First, garbage collection (and possibly other background processes, like the JIT) in Java. For the code you gave, there should not be any GC, but if this is part of a larger system then you will likely find that garbage collection is required, and that it may alter the timings when it runs. You can try to ameliorate this by playing with the JVM settings (java -X).
Second, the system scheduler. In addition to the suggestions by aix, you can bump the priority of the process and do some Linux-specific tweaks. This article explains some of the problems with Linux. Ubuntu has a low-latency kernel which you can install, but I can't find info on what it actually contains so that you can do the same on other systems (update: I think it may contain this patch). If you want to look for more info, "low latency" is the key thing to search for; people doing audio processing on Linux tend to be the ones who care most about this.
If your context switching does not cause too much delay, you may try to park your thread until a given time, rather than for a given interval:
import java.util.concurrent.locks.LockSupport;

public class Wave extends Thread {
    private final Object BLOCKER = new Object();

    public Wave() {
        super();
        setPriority(MAX_PRIORITY);
    }

    @Override
    public void run() {
        // I suspect this should be running in an endless loop?
        for (;;) {
            Wave.high();
            long t1 = System.currentTimeMillis();
            // Round interval up to the next 20ms "deadline"
            LockSupport.parkUntil(BLOCKER, t1 + 20 - (t1 % 20));
            Wave.low();
            // Round interval up to the next 20ms "deadline"
            long t2 = System.currentTimeMillis();
            LockSupport.parkUntil(BLOCKER, t2 + 20 - (t2 % 20));
        }
    }

    public static native void high();
    public static native void low();
}
As this relies on the wall-clock time in ms, rather than a more precise nanosecond time, this will not work well for much higher frequencies. But it may not work anyway, as GC (and other processes) may interrupt this thread for an "unfortunate" amount of time, resulting in the same jitter.
When I tested this on my Windows 7 quad-core with JDK 6, I saw some non-negligible jitter about every second, so aix's solution is probably better.

Accurate Sleep for Java on Windows

Does anyone know of a library which provides a Thread.sleep() for Java with an error no higher than 1-2 milliseconds?
I tried a mixture of sleep, error measurement and busy wait, but I can't get this working reliably on different Windows machines.
It can be a native implementation, as long as the implementation is available for Linux and MacOS too.
EDIT
The link Nick provided (http://blogs.oracle.com/dholmes/entry/inside_the_hotspot_vm_clocks) is a really good resource for understanding the issues with all the kinds of timers/sleeps/clocks Java has.
To improve the granularity of sleep, you can try the following from this Thread.sleep page.
Bugs with Thread.sleep() under Windows
If timing is crucial to your application, then an inelegant but practical way to get round these bugs is to leave a daemon thread running throughout the duration of your application that simply sleeps for a large prime number of milliseconds (Long.MAX_VALUE will do). This way, the interrupt period will be set once per invocation of your application, minimising the effect on the system clock, and setting the sleep granularity to 1ms even where the default interrupt period isn't 15ms.
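A sketch of that daemon-thread hack, run once at application startup:
Thread timerHack = new Thread(new Runnable() {
    public void run() {
        try {
            // per the quote above, keeping this sleep pending sets the
            // interrupt period once for the life of the application
            Thread.sleep(Long.MAX_VALUE);
        } catch (InterruptedException e) {
            // ignore; the hack simply stops working
        }
    }
});
timerHack.setDaemon(true); // does not prevent JVM shutdown
timerHack.start();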
The page also mentions that it causes a system-wide change to Windows which may cause the user's clock to run fast due to this bug.
EDIT
More information about this is available
here and an associated bug report from Sun.
This is ~5 months late but might be useful for people reading this question. I found that java.util.concurrent.locks.LockSupport.parkNanos() does the same as Thread.sleep() but with nanosecond precision (in theory), and much better precision than Thread.sleep() in practice. This depends of course on the Java Runtime you're using, so YMMV.
Have a look: LockSupport.parkNanos
(I verified this on Sun's 1.6.0_16-b01 VM for Linux)
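Usage is a one-liner, for example:
// park the current thread for roughly 500 microseconds; note that parkNanos()
// may return early if the thread's permit is available or on a spurious wakeup
LockSupport.parkNanos(500000L);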
Unfortunately, as of Java 6, all Java sleep-related methods on Windows [including LockSupport.parkNanos()] are based on milliseconds, as mentioned by several people above.
One way of counting a precise interval is a "spin-yield". The method System.nanoTime() gives you a fairly precise relative time counter; the cost of the call depends on your hardware and lies somewhere between 50 and 2000 nanos.
Here is suggested alternative to Thread.sleep():
// requires java.util.concurrent.TimeUnit; the two precision constants are
// tuning values - the figures below are illustrative assumptions
private static final long SLEEP_PRECISION = TimeUnit.MILLISECONDS.toNanos(2);
private static final long SPIN_YIELD_PRECISION = TimeUnit.MILLISECONDS.toNanos(1);

public static void sleepNanos(long nanoDuration) throws InterruptedException {
    final long end = System.nanoTime() + nanoDuration;
    long timeLeft = nanoDuration;
    do {
        if (timeLeft > SLEEP_PRECISION) {
            Thread.sleep(1);
        } else if (timeLeft > SPIN_YIELD_PRECISION) {
            Thread.yield();
        }
        timeLeft = end - System.nanoTime();
    } while (timeLeft > 0);
}
This approach has one drawback: during the last 2-3 milliseconds of the wait it occupies a CPU core (note that sleep()/yield() will still share the CPU with other threads/processes). If you are willing to compromise a little CPU, this gives you a very accurate sleep.
There are no good reasons to use Thread.sleep() in normal code; it is (almost) always an indication of a bad design. Most important is that there is no guarantee that the thread will continue execution after the specified time, because the semantics of Thread.sleep() is just to stop execution for a given time, but not to continue immediately after that period has elapsed.
So, while I do not know what you try to achieve, I am quite sure you should use a timer instead.
JDK offers the Timer class.
http://java.sun.com/j2se/1.5.0/docs/api/java/util/Timer.html
Reading the docs clearly indicates that beyond the plumbing to make this a generalized framework, it uses nothing more sophisticated than a call to Object.wait(timeout):
http://java.sun.com/j2se/1.5.0/docs/api/java/lang/Object.html#wait(long)
So, you can probably cut to the chase and just use Object#wait yourself.
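For example, a minimal sketch of rolling your own sleep on top of Object#wait (the lock object here is illustrative):
private final Object lock = new Object();

// sleeps approximately millis ms; subject to the same OS granularity as
// Thread.sleep(), and wait() may return early on a notify or spurious wakeup
public void waitSleep(long millis) throws InterruptedException {
    synchronized (lock) {
        lock.wait(millis);
    }
}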
Beyond those considerations, the fact remains that JVM can not guarantee time accuracy across platforms. (Read the docs on http://java.sun.com/j2se/1.5.0/docs/api/java/lang/System.html#currentTimeMillis())
I think you'll need to experiment with a compromise solution combining Timer and busy polling if you want the highest timing precision possible on your platform. Effectively: Object#wait(1) -> System#nanoTime -> calculate delta -> [loop if necessary].
If you are willing to roll your own, JNI pretty much leaves it wide open for platform-specific solutions. I am blissfully unaware of Windows' internals, but obviously, if the host OS does provide sufficiently accurate real-time timer services, the barebones structure of setting up a timerRequest(timedelta, callback) native library shouldn't be beyond reach.
The Long.MAX_VALUE hack is the working solution.
I tried Object.wait(int millis) to replace Thread.sleep, but found that Object.wait is only as accurate as Thread.sleep (10ms under Windows). Without the hack, both methods are not suitable for any animation.
Use one of the Thread::join overloads on the current thread. You specify the number of milliseconds (and nanoseconds) to wait.
You could try using the new concurrency libraries. Something like:
private static final BlockingQueue<Object> SLEEPER = new ArrayBlockingQueue<Object>(1);

public static void main(String... args) throws InterruptedException {
    for (int i = 0; i < 100; i++) {
        long start = System.nanoTime();
        // nothing is ever offered, so poll() simply times out after ~2 ms
        SLEEPER.poll(2, TimeUnit.MILLISECONDS);
        long time = System.nanoTime() - start;
        System.out.printf("Sleep %5.1f%n", time / 1e6);
    }
}
This sleeps between 2.6 and 2.8 milliseconds.
Sounds like you need an implementation of real-time Java.
