I am developing a programming contest manager in Java.
Concept of Contest Manager:
The main concept of Contest Manager is summarized below. If you are already familiar with the idea, you can skip ahead to the picture.
Server runs on Judge PC.
All contestants are connected to the judge as clients.
Contestants are provided hard copies of the problem statements and write their solutions in C++. They submit their solutions using the Contest Manager software, written in Java.
The judge has, for every problem, input data in a file and the corresponding correct output data in a file.
When a contestant submits his solution, the judge server runs it against the judge-provided inputs.
Judge server then matches the output of the contestant with the correct output provided before.
Then Judge server gives a verdict on the basis of the matching result like Accepted, Wrong Answer, Compile Error, Time Limit Exceeded, etc.
Each problem has a predefined time limit, meaning the submitted solution must finish within a certain time period (usually 1 to 15 seconds).
The verdict of a submitted solution is visible to all contestants. The picture below should make the scenario clear: it shows the submission queue, which is visible to all the contestants and the judge.
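To make the output-matching step above concrete, here is a minimal sketch of how a judge might compare a contestant's output file against the stored correct output. The class and method names are my own, not taken from the asker's code; real judges often also normalize whitespace more aggressively.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class OutputChecker {

    // Compares the contestant's output with the judge's expected output,
    // ignoring trailing whitespace on each line and trailing blank lines.
    static String verdict(Path expected, Path actual) throws IOException {
        List<String> exp = Files.readAllLines(expected);
        List<String> act = Files.readAllLines(actual);
        // Drop trailing blank lines so a final newline does not matter.
        while (!exp.isEmpty() && exp.get(exp.size() - 1).isBlank()) exp.remove(exp.size() - 1);
        while (!act.isEmpty() && act.get(act.size() - 1).isBlank()) act.remove(act.size() - 1);
        if (exp.size() != act.size()) return "Wrong Answer";
        for (int i = 0; i < exp.size(); i++) {
            if (!exp.get(i).stripTrailing().equals(act.get(i).stripTrailing())) {
                return "Wrong Answer";
            }
        }
        return "Accepted";
    }

    public static void main(String[] args) throws IOException {
        Path expected = Files.createTempFile("expected", ".txt");
        Path actual = Files.createTempFile("actual", ".txt");
        Files.writeString(expected, "42\nhello\n");
        Files.writeString(actual, "42\nhello\n");
        System.out.println(verdict(expected, actual)); // Accepted
        Files.writeString(actual, "42\nworld\n");
        System.out.println(verdict(expected, actual)); // Wrong Answer
    }
}
```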
Problem Background:
In the picture, you can see a red-marked area where the time elapsed by every submitted solution is shown in milliseconds. I could do this easily with the following code:
long start, end;
start = System.currentTimeMillis();   // equivalent to new Date().getTime(), without the allocation
int verdictCode = RunProgram(fileEXE, problem.inputFile, fileSTDOUT, problem.timeLimit);
end = System.currentTimeMillis();
submission.timeElapsed = end - start;
Here, the RunProgram function runs the submitted solution (program) and generates an output file from an input file. If you need its details, ask and I will describe them.
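I don't have the asker's actual RunProgram, but on the JVM such a function might be sketched with ProcessBuilder and a timed waitFor. The verdict constants and the method signature here are assumptions for illustration:

```java
import java.io.File;
import java.io.IOException;
import java.util.concurrent.TimeUnit;

public class Runner {

    static final int VERDICT_OK = 0;
    static final int VERDICT_RUNTIME_ERROR = 1;
    static final int VERDICT_TIME_LIMIT = 2;

    // Runs `exePath` with stdin redirected from `inputFile` and stdout to
    // `outputFile`, killing the process if it exceeds `timeLimitMs`.
    static int runProgram(String exePath, File inputFile, File outputFile,
                          long timeLimitMs) throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder(exePath);
        pb.redirectInput(inputFile);
        pb.redirectOutput(outputFile);
        Process p = pb.start();
        if (!p.waitFor(timeLimitMs, TimeUnit.MILLISECONDS)) {
            p.destroyForcibly();          // kill the contestant's process
            return VERDICT_TIME_LIMIT;
        }
        return p.exitValue() == 0 ? VERDICT_OK : VERDICT_RUNTIME_ERROR;
    }
}
```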
Main Problem:
However, there is another type of verdict, called Memory Limit Exceeded,
which is not implemented here. I want to implement it, but I have no idea how. I googled it; some people mention profiling, but I don't understand how to do that properly, nor whether it can serve my purpose.
That means there would be a column named Memory Elapsed, like Time Elapsed.
It must be possible, because online judges like Codeforces already show it. But my question is: is it possible to do the same in Java?
If yes, then how?
If no, then how could you be sure?
Note:
The software has one constraint: it must run on the Windows platform.
I think you are asking about measuring statistics of native programs written in C++, correct? You can't measure the memory usage of other programs in the OS with Java; you can only get memory information about the current JVM. To measure memory usage or things like CPU usage of other processes, you need a platform-dependent solution (native code that you run via JNI). Luckily, people have already implemented this, so you can use plain Java objects to do what you want without having to write any C/JNI code yourself. Check out the Hyperic Sigar library for an easy way to do what you want.
I think you want the Runtime class.
Runtime runtime = Runtime.getRuntime();
System.out.println("Free memory: " + runtime.freeMemory() + " / " + runtime.maxMemory());
Related
Problem:
I'm looking for a programming language and runtime whose execution can be "timed" in steps of code.
To be more concrete, I need a language runtime/interpreter that can execute, say, 100 steps (not lines). After that, the call into the runtime returns while keeping its state; later you can tell the runtime to continue execution for another 100 steps, and so on.
It's somewhat like a VM only for execution of a single program.
Question:
Are there any runtimes for given languages that fulfill those criteria?
Preferred languages are Julia (julialang.org) and Java, but I welcome any tips you have (keywords for searching, problems in realisation, partial solutions, other languages that support it, etc.).
What I need it for:
I'd like to create a mod for Minecraft that has codeable blocks, but in order to keep the whole Minecraft world from getting stuck due to a player's mistake, and to be able to save the game state at any time, I need to execute the code of these codeable blocks in a fixed amount of time and save the current state of each block's runtime after each of these runs.
Aaron aka rapus95
A practical solution might be to use Java threads and have a timer thread interrupt the work thread when its time limit is reached. However, there is an interesting abstraction known as "engines" that can be implemented using call/cc in Scheme: http://www.scheme.com/tspl4/examples.html#g208 This lets you pair a piece of work to do (represented as a 0-argument procedure) with an amount of "fuel" that it's allowed to consume. The "thread" stops when it runs out of fuel.
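The thread-plus-timer idea in the first sentence might look like the following sketch in Java. Note that interruption enforces the time budget but does not, by itself, give you a resumable saved state the way engines do; the interrupted task must also cooperate by checking its interrupt flag:

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class FuelRunner {

    // Runs `work` on a separate thread and interrupts it after `limitMs`.
    // Returns true if the work finished in time, false if it was cut off.
    static boolean runWithLimit(Runnable work, long limitMs) throws InterruptedException {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<?> future = pool.submit(work);
        try {
            future.get(limitMs, TimeUnit.MILLISECONDS);
            return true;
        } catch (TimeoutException e) {
            future.cancel(true);          // interrupts the worker thread
            return false;
        } catch (ExecutionException e) {
            return true;                  // work threw an exception; it still finished
        } finally {
            pool.shutdownNow();
        }
    }
}
```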
Over the years I have kept running into this kind of problem, and I've never solved it in a way that felt "right".
Imagine I have to implement a function/method to judge the level of a river. The int river() gives us an int from 0 to 10, where
0 means no water passing at all,
and
10 means near overflow.
Normally I would take the output for a few seconds/minutes and then grade it into groups such as empty / half-full / full.
If time is a free option here, how would you collect the output of river() and judge after a few seconds/minutes? And is it even correct and reliable to use time as a parameter in this kind of problem?
I'm asking for an idea of an algorithm to solve these types of questions. Respect
Edit1: bold part is my main question
Edit2: by Time, I mean implementing a Runnable and calling it every 10 seconds
If I understood your question correctly, you would have two separate programs (or at least threads): one gathers information, the other interprets it.
The data gatherer runs non-stop and saves the data at a defined interval (for a river, maybe every 5 minutes). It stores the data in some kind of persistence layer (anything from a simple array to a full-blown database).
The data interpreter then asks the gatherer for that persistent data, or reads it from the persistence layer directly if possible, and interprets it. The interpreter can run at any time you want and will always get the most recent information when it asks the gatherer.
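As a sketch of that split, assuming a simulated river() sensor supplied as a lambda, a plain in-memory list as the persistence layer, and thresholds I invented for the grading:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.IntSupplier;

public class RiverMonitor {

    private final List<Integer> readings = new CopyOnWriteArrayList<>(); // simple persistence layer
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    // Gatherer: samples the sensor at a fixed interval and stores each reading.
    void startGathering(IntSupplier river, long intervalMs) {
        scheduler.scheduleAtFixedRate(() -> readings.add(river.getAsInt()),
                                      0, intervalMs, TimeUnit.MILLISECONDS);
    }

    // Interpreter: grades the average of the stored readings whenever asked.
    String interpret() {
        double avg = readings.stream().mapToInt(Integer::intValue).average().orElse(0);
        if (avg < 3) return "empty";
        if (avg < 7) return "half-full";
        return "full";
    }

    void stop() { scheduler.shutdownNow(); }
}
```

The gatherer and interpreter are decoupled: interpret() can be called at any moment and simply reads whatever the gatherer has stored so far.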
Edit: to tackle your edits: Read this Java Thread every X seconds and that Accuracy of ScheduledExecutorService on normal OS / JVM. It should answer all your questions.
In short: it will be slightly inaccurate, but I think that's close enough for 99% of all use cases.
I'm building a large import script that uses functionality from a separate code base that I suspect has a memory leak. It calls the code base as many as 10,000 times for the same operations; while the first call is relatively quick (2 sec), the script is taking a very long time to run (over 100 hours and counting), and by the end the same task takes 60 sec or more (and is still climbing). What is the best way to work around this while the leaks are found and fixed?
Some solutions that have been brainstormed would be:
Create a process that runs a part of the script then end it, reclaiming the resources it used.
Use a shell script to launch the program multiple times completing a sub-set of the tasks each time and have the updated data output to file to be used by the next iteration
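The second idea could be sketched like this in Java; "import.jar" and its start/end range arguments are invented for illustration, since I don't know how the real import is invoked:

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class BatchDriver {

    // Splits `total` tasks into chunks of `batchSize` and returns the command
    // line that would process each chunk. Each chunk runs in a fresh process,
    // so any memory leaked by the code base is reclaimed when it exits.
    static List<String[]> batchCommands(int total, int batchSize) {
        List<String[]> commands = new ArrayList<>();
        for (int start = 0; start < total; start += batchSize) {
            int end = Math.min(start + batchSize, total);
            // "import.jar" and the start/end arguments are hypothetical.
            commands.add(new String[] {"java", "-jar", "import.jar",
                                       String.valueOf(start), String.valueOf(end)});
        }
        return commands;
    }

    static void runAll(int total, int batchSize) throws IOException, InterruptedException {
        for (String[] cmd : batchCommands(total, batchSize)) {
            Process p = new ProcessBuilder(cmd).inheritIO().start();
            p.waitFor();   // each batch finishes (and frees its memory) before the next starts
        }
    }
}
```

Intermediate state would still need to go to a file or database between runs, as the second bullet says, since nothing in memory survives the process boundary.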
edit: Changed the way the question was phrased to make it clear that the import and the code base are separate programs
You know, none of the evidence you have presented clearly points to a storage leak. The real problem could be something completely different, like a poorly designed algorithm, or a poorly tuned database table or query.
Assuming that this is a storage leak and applying "band-aid" solutions could be a waste of time, or could actually make the problem worse.
You will be better off spending the time up front to determine what the real problem is and fix it, rather than trying a series of workarounds ... which may turn out to be futile.
I solved this issue by minimizing the scope that holds references to the other code base. Basically, every time I initialized an object or called a function from the other code base, I jumped through hoops to make sure it existed for the minimal time possible, often setting references back to null to make sure they were all released.
This ended up working excellently, reduced the time from over 150 hours and counting to under 30.
I want to filter what classes are being cpu-profiled in Java VisualVm (Version 1.7.0 b110325). For this, I tried under Profiler -> Settings -> CPU-Settings to set "Profile only classes" to my package under test, which had no effect. Then I tried to get rid of all java.* and sun.* classes by setting them in "Do not profile classes", which had no effect either.
Is this simply a bug? Or am I missing something? Is there a workaround? I mean other than:
paying for a better profiler
doing sampling by hand (see One could use a profiler, but why not just halt the program?)
switching to the Call Tree view, which is no good since only the Profiler view gives me the percentages of CPU consumed per method.
I want to do this mainly to get halfway correct percentages of consumed CPU per method. For this, I need to get rid of the annoying measurements, e.g. for sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run() (around 70%). Many users seem to have this problem, see e.g.
Java VisualVM giving bizarre results for CPU profiling - Has anyone else run into this?
rmi.transport.tcp.tcptransport Connectionhandler consumes much CPU
Can't see my own application methods in Java VisualVM.
The reason you see sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run() in the profile is that you left the option Profile new Runnables selected.
Also, if you took a snapshot of your profiling session you would be able to see the whole callstack for any hotspot method - this way you could navigate from the run() method down to your own application logic methods, filtering out the noise generated by the Profile new Runnables option.
OK, since your goal is to make the code run as fast as possible, let me suggest how to do it.
I'm no expert on VisualVM, but I can tell you what works. (Only a few profilers actually tell you what you need to know, which is - which lines of your code are on the stack a healthy fraction of wall-clock time.)
The only measuring I ever bother with is some stopwatch on the overall time, or alternatively, if the code has something like a framerate, the number of frames per second. I don't need any sort of further precision breakdown, because it's at best a remote clue to what's wasting time (and more often totally irrelevant), when there's a very direct way to locate it.
If you don't want to do random-pausing, that's up to you, but it's proven to work, and here's an example of a 43x speedup.
Basically, the idea is you get a (small, like 10) number of stack samples, taken at random wall-clock times.
Each sample consists (obviously) of a list of call sites, and possibly a non-call site at the end.
(If the sample is during I/O or sleep, it will end in the system call, which is just fine. That's what you want to know.)
If there is a way to speed up your code (and there almost certainly is), you will see it as a line of code that appears on at least one of the stack samples.
The probability it will appear on any one sample is exactly the same as the fraction of time it uses.
So if there's a call site or other line of code using a healthy fraction of time, and you can avoid executing it, the overall time will decrease by that fraction.
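If you want to automate taking the samples inside the JVM rather than pausing in a debugger, a rough sketch follows. This is only an in-process approximation of the manual method (it can only see Java frames, not native ones), and all names here are illustrative:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Random;

public class StackSampler {

    // Takes `n` samples of `target`'s stack at random wall-clock instants.
    static List<StackTraceElement[]> sample(Thread target, int n) throws InterruptedException {
        Random rnd = new Random();
        List<StackTraceElement[]> samples = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            Thread.sleep(20 + rnd.nextInt(80));   // random interval between samples
            samples.add(target.getStackTrace());
        }
        return samples;
    }

    // Fraction of samples in which a method name appears anywhere on the stack:
    // an estimate of the fraction of time that method is responsible for.
    static double fraction(List<StackTraceElement[]> samples, String methodName) {
        long hits = samples.stream()
                .filter(s -> Arrays.stream(s)
                        .anyMatch(f -> f.getMethodName().equals(methodName)))
                .count();
        return (double) hits / samples.size();
    }
}
```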
I don't know every profiler, but one I know that can tell you that is Zoom.
Others may be able to do it.
They may be more spiffy, but they don't work any quicker or better than the manual method when your purpose is to maximize performance.
Is it possible to slow down time in the Java virtual machine according to CPU usage by modification of the source code of OpenJDK? I have a network simulation (Java to ns-3) which consumes real time, synchronised loosely to the wall clock. However, because I run so many clients in the simulation, the CPU usage hits 100% and hard guarantees aren't maintained about how long events in the simulator should take to process (i.e., a high amount of super-late events). Therefore, the simulation tops out at around 40 nodes when there's a lot of network traffic, and even then it's a bit iffy. The ideal solution would be to slow down time according to CPU, but I'm not sure how to do this successfully. A lesser solution is to just slow down time by some multiple (time lensing?).
If someone could give some guidance, the source code for the relevant file in question (for Windows) is at http://pastebin.com/RSQpCdbD. I've tried modifying some parts of the file, but my results haven't really been very successful.
Thanks in advance,
Chris
You might look at VirtualBox, which allows one to Accelerate or slow down the guest clock from the command line.
I'm not entirely sure if this is what you want, but with the Joda-Time library you can stop time completely, so that calls to new DateTime() (and anything else that reads time through Joda-Time; plain java.util.Date is unaffected) will continuously return the same time.
So, you could, in one Thread "stop time" with this call:
DateTimeUtils.setCurrentMillisFixed(System.currentTimeMillis());
Then your Thread could sleep for, say, 5000ms, and then call:
// advance time by one second
DateTimeUtils.setCurrentMillisFixed(System.currentTimeMillis() + 1000);
So, provided your application does whatever it does based on the system time, this will "slow" time by stepping it forward one second every 5 seconds.
But, as I said, I'm not sure this will work in your environment.
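For completeness: if you would rather avoid the Joda-Time dependency, java.time's Clock offers a similar hook. This is a substitution on my part, not what the answer above uses, and it only helps code that reads time through a Clock instance rather than calling new Date() directly:

```java
import java.time.Clock;
import java.time.Duration;
import java.time.Instant;
import java.time.ZoneOffset;

public class FrozenClockDemo {
    public static void main(String[] args) {
        // A clock frozen at a fixed instant: every now() call returns the same time.
        Instant frozen = Instant.parse("2024-01-01T00:00:00Z");
        Clock fixed = Clock.fixed(frozen, ZoneOffset.UTC);
        System.out.println(Instant.now(fixed)); // always 2024-01-01T00:00:00Z

        // "Advance" time by wrapping the fixed clock with an offset.
        Clock oneSecondLater = Clock.offset(fixed, Duration.ofSeconds(1));
        System.out.println(Instant.now(oneSecondLater)); // 2024-01-01T00:00:01Z
    }
}
```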