Time drifting in saving and judging data over Time - java

Over the years I have kept running into this kind of problem, and I've never solved it in a way that felt "right".
Imagine I have to implement a function/method to judge the level of a river. int river() gives us an int from 0 to 10, where
0 means no water passing at all, and
10 means near overflow.
Normally I would take the output for a few seconds/minutes and then grade it into groups such as empty / half-full / full.
If time is a free option here, how would you collect the output of river() and judge it after a few seconds/minutes? And is it even correct and reliable to use time as a parameter in this kind of problem?
I'm asking for an idea of an algorithm to solve this type of question. Respect.
Edit 1: the bold part is my main question.
Edit 2: by time, I mean implementing a Runnable and calling it every 10 seconds.

If I understood your question the right way, you would have two separate programs (or at least two threads): one gathers information, the other interprets it.
The data gatherer runs non-stop and saves a reading at a defined interval (for a river, maybe every 5 minutes). It stores the data in any kind of persistence layer (everything is possible, from a simple array to a full-blown database).
The data interpreter then asks the gatherer for that persisted data, or gets it from the persistence layer directly if possible, and interprets it. The interpreter can run at any time you want and will always get the most recent readings when it asks the gatherer.
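A minimal sketch of that split in Java, assuming river() is the sensor from the question and the 10-second interval from Edit 2 (the class and method names here are illustrative, not a fixed API):

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class RiverMonitor {
    // Persistence layer: here just an in-memory list of readings.
    private final List<Integer> readings = new CopyOnWriteArrayList<>();
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    // Gatherer: polls river() every 10 seconds and stores the reading.
    public void startGathering() {
        scheduler.scheduleAtFixedRate(() -> readings.add(river()), 0, 10, TimeUnit.SECONDS);
    }

    // Interpreter: averages whatever has been collected so far and grades it.
    public String judge() {
        double avg = readings.stream().mapToInt(Integer::intValue).average().orElse(0);
        if (avg < 2) return "empty";
        if (avg < 7) return "half-full";
        return "full";
    }

    // Stand-in for the sensor described in the question.
    private int river() {
        return (int) (Math.random() * 11); // 0..10
    }
}

Here the "persistence layer" is just an in-memory list; swapping in a file or a database only changes where the gatherer writes.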
Edit: to tackle your edits, read Java Thread every X seconds and Accuracy of ScheduledExecutorService on normal OS / JVM. They should answer all your questions.
In short: it will be slightly inaccurate, but I think that's close enough for 99% of all use cases.


How to observe programmatically how much memory a program used?

I am developing a programming contest manager in Java.
Concept of Contest Manager:
The main concept of the Contest Manager is summarized below. If you already have an idea of it, you can skip the lines before the picture.
The server runs on the Judge PC.
All contestants are connected to the Judge as clients.
Contestants are given hard copies of the problem statements and write their solutions in C++. They submit their solutions using the Contest Manager software, written in Java.
The Judge has input data in a file, and corresponding correct output data in a file, for every problem.
When a contestant submits a solution, the Judge server runs it against the inputs provided by the Judge.
The Judge server then matches the contestant's output against the correct output provided beforehand.
The Judge server then gives a verdict based on the matching result, like Accepted, Wrong Answer, Compile Error, Time Limit Exceeded, etc.
Each problem has a predefined time limit, meaning the submitted solution must finish within a certain time period (usually 1 to 15 seconds).
The verdict of a submitted solution is visible to all contestants. The picture below clarifies the scenario: it shows the submission queue, which is visible to all contestants and the judge.
Problem Background:
In the picture you can see a red-marked area where the time elapsed by every submitted solution is shown in milliseconds. I could do this easily with the following code:
// Measure wall-clock time around the run, in milliseconds.
long start = System.currentTimeMillis();
int verdictCode = RunProgram(fileEXE, problem.inputFile, fileSTDOUT, problem.timeLimit);
long end = System.currentTimeMillis();
submission.timeElapsed = end - start;
Here, the RunProgram function runs the submitted solution (program) and generates an output file from a given input file. If you need the details, ask me later and I will describe it.
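For context, here is my guess at the shape of such a RunProgram (this is not the asker's actual code, and the verdict constants are invented), using ProcessBuilder redirection and a timed waitFor:

import java.io.File;
import java.util.concurrent.TimeUnit;

public class Runner {
    static final int RAN_OK = 0, RUNTIME_ERROR = 1, TIME_LIMIT_EXCEEDED = 2; // assumed verdict codes

    static int runProgram(File exe, File inputFile, File outputFile, long timeLimitMillis)
            throws Exception {
        Process p = new ProcessBuilder(exe.getAbsolutePath())
                .redirectInput(inputFile)   // judge's input -> stdin
                .redirectOutput(outputFile) // stdout -> contestant's output file
                .start();
        if (!p.waitFor(timeLimitMillis, TimeUnit.MILLISECONDS)) {
            p.destroyForcibly(); // kill the solution if it overruns the limit
            return TIME_LIMIT_EXCEEDED;
        }
        return p.exitValue() == 0 ? RAN_OK : RUNTIME_ERROR;
    }
}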
Main Problem:
However, there is another type of verdict, called Memory Limit Exceeded,
which is not implemented here. I want to implement it, but I have no idea how to do it. I googled it; somebody mentioned profiling, but I don't understand how to do that properly, and I don't know whether it can serve my purpose or not.
The result would be a column named Memory Elapsed, like Time Elapsed.
It must be possible, because online judges like Codeforces already show it. But my question is: is it possible to do the same in Java?
If yes, then how?
If no, then how can you be sure?
Note:
The software has one constraint: it must run on the Windows platform.
I think you are asking about measuring statistics for native programs written in C++, correct? You can't measure the memory usage of other programs in the OS with Java; you can only get memory information about the current JVM. To measure memory usage, or things like CPU usage, of other processes you need a platform-dependent solution (native code invoked via JNI). Luckily, people have already implemented things like this, so you can use plain Java objects to do what you want without having to write any C/JNI code yourself. Check out the Hyperic Sigar library for an easy way to do it.
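A minimal sketch of reading another process's memory with Sigar, assuming you already know the child process's PID (the method names are from my memory of Sigar's API, so verify them against the version you use):

import org.hyperic.sigar.ProcMem;
import org.hyperic.sigar.Sigar;
import org.hyperic.sigar.SigarException;

public class MemoryWatcher {
    // Returns the resident set size of the given process, in bytes.
    public static long residentBytes(long pid) throws SigarException {
        Sigar sigar = new Sigar();
        try {
            ProcMem mem = sigar.getProcMem(pid); // snapshot of the process's memory
            return mem.getResident();
        } finally {
            sigar.close(); // release the native handle
        }
    }
}

In practice you would poll this in a loop while the contestant's process runs, keep the maximum, and flag Memory Limit Exceeded once it crosses the problem's limit.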
I think you want the Runtime class.
Runtime runtime = Runtime.getRuntime();
System.out.println("Free memory: " + runtime.freeMemory() + " / " + runtime.maxMemory());
(Note that Runtime reports the current JVM's heap only, not another process's usage.)

Why does a Java CPU profile (using VisualVM) show so many hits on a method that does nothing?

This is something I think I've seen before with other profiling tools in other environments, but it's particularly dramatic in this case.
I'm taking a CPU profile of a task that runs for about 12 minutes, and it's showing almost half the time spent in a method that literally does nothing: it's got an empty body. What can cause this? I don't believe that the method is being called a ridiculous number of times, certainly not to account for half the execution time.
For what it's worth, the method in question is called startContent() and it's used to notify a parsing event. The event is passed down a chain of filters (perhaps a dozen of them), and the startContent() method on each filter does almost nothing except to call startContent() on the next filter in the chain.
This is pure Java code, and I'm running it on a Mac.
Attached is a screen shot of the CPU sampler output:
and here is a sample showing the call stack:
(After a delay due to vacation) Here are a couple of pictures showing the output from the profiler. These figures are much more what I would expect the profile to look like. The profiler output seems entirely meaningful, while the sampler output is spurious.
As some of you will have guessed, the job in question is a run of the Saxon XML schema validator (on a 9GB input file). The profile shows about half the time being spent validating element content against simple types (which happens during endElement processing) and about half being spent testing key constraints for uniqueness; the two profiler views highlight the activity involved in these two aspects of the task.
I'm not able to supply the data as it comes from a client.
I have not used VisualVM, but I suspect the problem is likely the instrumentation overhead on such an empty method. Here's the relevant passage in JProfiler's documentation (which I have used extensively):
If the method call recording type is set to Dynamic instrumentation, all methods of profiled classes are instrumented. This creates some overhead, which is significant for methods that have very short execution times. If such methods are called very frequently, the measured time of those methods will be far too high. Also, due to the instrumentation, the hot spot compiler might be prevented from optimizing them. In extreme cases, such methods become the dominant hot spots, although this is not true for an uninstrumented run. An example is the method of an XML parser that reads the next character. This method returns very quickly, but may be invoked millions of times in a short time span.
Basically, a profiler adds its own "time length detection code" to each method, but in an empty method the profiler spends all its time doing that rather than actually letting the method run.
I recommend, if it's possible, telling VisualVM to stop instrumenting that thread, if it supports such filtering.
It is generally assumed that using a profiler is much better (for finding performance problems, as opposed to measuring things) than - anything else, really - certainly than the bone-simple way of random pausing.
This assumption is only common wisdom - it has no basis in theory or practice.
There are numerous scholarly peer-reviewed papers about profiling, but none that I've read even address the point, let alone substantiate it.
It's a blind spot in academia, not a big one, but it's there.
Now to your question -
In the screenshot showing the call stack, that is what's known as the "hot path", accounting for roughly 60% of in-thread CPU time. Assuming the code with "saxon" in the name is what you're interested in, it is this:
net.sf.saxon.event.ReceivingContentHandler.startElement
net.sf.saxon.event.ProxyReceiver.startContent
net.sf.saxon.event.ProxyReceiver.startContent
net.sf.saxon.event.StartTagBuffer.startContent
net.sf.saxon.event.ProxyReceiver.startContent
com.saxonica.ee.validate.ValidationStack.startContent
com.saxonica.ee.validate.AttributeValidator.startContent
net.sf.saxon.event.TeeOutputter.startContent
net.sf.saxon.event.ProxyReceiver.startContent
net.sf.saxon.event.ProxyReceiver.startContent
net.sf.saxon.event.Sink.startContent
First, this looks to me like it has to be doing I/O, or at least waiting for some other process to give it content. If so, you should be looking at wall-clock time, not CPU time.
Second, the problem(s) could be at any of those call sites where a function calls the one below. If any such call is not truly necessary and could be skipped or done less often, it will reduce time by a significant fraction.
My suspicion is drawn to StartTagBuffer and to validate, but you know best.
There are other points I could make, but these are the major ones.
ADDED after your edit to the question.
I tend to assume you are looking for ways to optimize the code, not just ways to get numbers for their own sake.
It still looks like only CPU time, not wall-clock time, because there is no I/O in the hot paths. Maybe that's OK in your case, but what it means is, of your 12-minute wall clock time, 11 minutes could be spent in I/O wait, with 1 minute in CPU. If so, you could maybe cut out 30 seconds of fat in the CPU part, and only shorten the time by 30 seconds.
That's why I prefer sampling on wall-clock time, so I have overall perspective.
By looking at hot paths alone, you're not getting a true picture.
For example, if the hot path says function F is on the hot path for, say 40% of the time, that only means F costs no less than 40%. It could be much more, because it could be on other paths that aren't so hot. So you could have a juicy opportunity to speed things up by a lot, but it doesn't get much exposure in the specific path that the profiler chose to show you, so you don't give it much attention.
In fact, a big time-taker might not show up at all because on any specific hot path there's always something else a little bigger, like new, or because it goes by multiple names, such as templated collection class constructors.
It's not showing you any line-resolution information.
If you want to inspect a supposedly high-cost routine for the reason for the cost, you have to look at the lines within it. There's a tendency when looking at a routine to say "It's just doing what it's supposed to do.", but if you are looking at a specific costly line of code, which most often is a method call, you can ask "Is it really necessary to do this call? Maybe I already have the information." It's far more specific in suggesting what you could fix.
Can it actually show you some raw stack samples?
In my experience these are far more informative than any summary, like a hot path, that the profiler can present.
The thing to do is examine the sample and come to a full understanding of what the program was doing, and the reason why, at that point in time.
Then repeat for several more samples.
You will see things that don't need to be done, that you can fix to get substantial speedup.
(Unless the code is already optimal, in which case it will be nice to know.)
The point is, you're looking for problems, not measurements.
Statistically, it's very rough, but good enough, and no problem will escape.
My guess is that the method Sink.startContent actually is called a ridiculous number of times.
Your screenshot shows the Sampling tab, which usually produces realistic timings if used over a long enough interval. If you use the Profiler tab instead, you will also get the invocation count. (You'll also get less realistic timings and your program will get very, very slow, but you only need to run it for a few seconds to get a good idea of the invocation counts.)
It's hard to predict what optimizations and especially inlining HotSpot performs, and the sampling profiler can only attribute the time of inlined methods to the call sites. I suspect that some of the invocation code in saxon might for some reason be attributed to your empty callback function. In that case, you're just suffering the cost of XML, and switching to a different parser might be the only option.
I've had a lot of useful information and guidance from this thread, for which many thanks. However, I don't think the core question has been answered: why is the CPU sampling in VisualVM giving an absurdly high number of hits in a method that does nothing, and that isn't called any more often than many other methods?
For future investigations I will rely on the profiler rather than the sampler, now I have gained a bit of insight into how they differ.
From the profiler I haven't really gained a lot of new information about this specific task, in so far as it has largely confirmed what I would have guessed; but that is itself useful. It has told me that there's no magic bullet for speeding up this particular process, but it has put bounds on what might be achieved by some serious redesign; e.g. a possible future enhancement that appears to hold some promise is generating a bytecode validator for each user-defined simple type in the schema.

Adobe CQ Evaluation: Are there problems with Multi Site Manager / TarOptimizer?

I work at a retailer and we are considering introducing CQ5 as our CMS.
However, after doing some research and talking to consultants, it turns out that there may be things that are "complicated". Perhaps one of you can shed a little light on this.
The first thing: we were told that when you use the Multi Site Manager to create multi-language pages (about 80 languages), the update process can take as long as half an hour until a change is finally published. Has anyone of you experienced something similar?
The other thing is that the TarOptimizer has pretty long running times. I was told that runs taking up to 24 hours are not uncommon. Again my question: has anyone of you had such a problem, or an explanation for it?
I am really looking forward to your responses.
These are really two separate questions, but I'll address both based on my experience.
The update time for creating new multi-language pages will vary with the number of languages, and also with the number of publish instances and web servers (assuming you're using the dispatcher to cache). The bottleneck, at least in my experience, is the replication process: if you're pushing a large amount of content across many publishers, with many front-end web servers whose caches need to be cleared, there will be some delay, since replication is an asynchronous process. The longest delay I've seen was in the 10-15 minute range, with 12 publishers and 12 front-end web servers, but this comes with the obvious caveat that your mileage may vary.
For the Tar optimization job, I'd encourage you to take a look at this page, as it has a lot of good info about the TarOptimizer job and how to tune it. The job can take a long time to run when you have a large repository, especially on an instance with a large number of write operations, but the run times can be configured so that it only runs during a given window, and it will pick up where it left off the night before if the total run time exceeds the allowed window. By default it runs from 2-5 am each night, so if it needs more than that 3-hour period, it continues where it left off the next night, optimizing the entire repository over a period of a few days if necessary.

Ways to work around a memory leak in Java

I'm building a large import script that uses functionality from a separate code base which I suspect has a memory leak. The script calls into the code base as many as 10,000 times for the same operations; while the first call is relatively quick (2 s), the script as a whole is taking very long to run (over 100 hours and counting), and by the end the same task takes 60 s or more (and is still climbing). What is the best way to work around this while the leaks are found and fixed?
Some solutions that have been brainstormed would be:
Create a process that runs part of the script and then ends, reclaiming the resources it used.
Use a shell script (or a small driver program) to launch the program multiple times, completing a subset of the tasks each time, with the updated data written out to a file to be used by the next iteration (see the sketch after this list).
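A minimal sketch of that second option as a Java driver, assuming a hypothetical Importer main class that takes an offset, a batch size, and a hand-off file; each batch runs in a fresh JVM, so whatever the leaky code base accumulates is reclaimed when the child process exits:

public class BatchRunner {
    public static void main(String[] args) throws Exception {
        int totalTasks = 10_000; // task count from the question
        int batchSize = 500;     // tuning knob: tasks per JVM

        for (int offset = 0; offset < totalTasks; offset += batchSize) {
            Process p = new ProcessBuilder(
                    "java", "-cp", "import.jar", "Importer", // hypothetical entry point
                    String.valueOf(offset), String.valueOf(batchSize),
                    "state.dat")                             // hand-off file between runs
                    .inheritIO()
                    .start();
            if (p.waitFor() != 0) {
                throw new IllegalStateException("Batch starting at " + offset + " failed");
            }
        }
    }
}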
edit: Changed the way the question was phrased to make it clear that the import and the code base are separate programs
You know, none of the evidence you have presented clearly points to a storage leak. The real problem could be something completely different, like a poorly designed algorithm, or a poorly tuned database table or query.
Assuming that this is a storage leak and applying "band-aid" solutions could be a waste of time, or actually make the problem worse.
You will be better off spending the time up front to determine what the real problem is and fix it, rather than trying a series of workarounds ... which may turn out to be futile.
I solved this issue by minimizing the scope that holds references into the other code base. Basically, every time I initialized an object or called a function from the other code base, I jumped through hoops to make sure it existed for the minimal time possible, often setting references back to null to make sure they were released.
This worked excellently, reducing the running time from over 150 hours (and counting) to under 30.

How to write a profiler?

I would like to know how to write a profiler. What books and/or articles are recommended? Can anyone help me, please?
Has someone already done something like this?
Encouraging lot, aren't we :)
Profilers aren't too hard if you're just trying to get a reasonable idea of where the program's spending most of its time. If you're bothered about high accuracy and minimum disruption, things get difficult.
So if you just want the answers a profiler would give you, go for one someone else has written. If you're looking for the intellectual challenge, why not have a go at writing one?
I've written a couple, for run time environments that the years have rendered irrelevant.
There are two approaches:
adding something to each function or other significant point that logs the time and where it is;
having a timer go off regularly and taking a peek at where the program currently is.
The JVMPI version seems to be the first kind: the link provided by uzhin shows that it can report on quite a number of things (see section 1.3). What gets executed is changed to do this, so the profiling can affect the performance (and if you're profiling what was otherwise a very lightweight but often-called function, it can mislead).
If you can get a timer/interrupt telling you where the program counter was at the time of the interrupt, you can use the symbol table/debugging information to work out which function it was in at the time. This provides less information but can be less disruptive. A bit more information can be obtained by walking the call stack to identify callers, etc. I have no idea whether these things are even possible in Java...
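In Java, a crude version of the second approach can be built on Thread.getAllStackTraces(); here is a minimal sketch (the names are mine, it records method-level hits only, and it suffers from the JVM's safepoint bias, so treat its numbers as rough):

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class MiniSampler {
    private final Map<String, Integer> hits = new HashMap<>();
    private final ScheduledExecutorService timer =
            Executors.newSingleThreadScheduledExecutor(r -> {
                Thread t = new Thread(r, "sampler");
                t.setDaemon(true); // don't keep the JVM alive just to sample it
                return t;
            });

    // Take a peek at every thread at a fixed interval.
    public void start(long periodMillis) {
        timer.scheduleAtFixedRate(this::sample, periodMillis, periodMillis, TimeUnit.MILLISECONDS);
    }

    private synchronized void sample() {
        for (Map.Entry<Thread, StackTraceElement[]> e : Thread.getAllStackTraces().entrySet()) {
            StackTraceElement[] stack = e.getValue();
            // Count the top frame of each runnable thread.
            if (e.getKey().getState() == Thread.State.RUNNABLE && stack.length > 0) {
                hits.merge(stack[0].getClassName() + "." + stack[0].getMethodName(), 1, Integer::sum);
            }
        }
    }

    public synchronized void dump() {
        hits.entrySet().stream()
                .sorted((a, b) -> b.getValue() - a.getValue())
                .forEach(e -> System.out.println(e.getValue() + "  " + e.getKey()));
    }
}

Start it before the workload and call dump() afterwards; the counts approximate where the CPU time went.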
Paul.
I wrote one once, mainly as an attempt to make "deep sampling" more user-friendly. The manual version of the method is explained here. It is based on sampling, but rather than taking a large number of small samples, you take a small number of large samples.
It can tell you, for example, that instruction I (usually a function call) is costing you some percent X of total execution time, more or less, since it appears on the stack on X% of samples.
Think about it, because this is a key point. The call stack exists as long as the program is running. If a particular call instruction I is on the stack X% of the time, then if that instruction could disappear, that X% of time would disappear. This does not depend on how many times I is executed, or how long the function call takes. So timers and counters are missing the point. And in a sense all instructions are call instructions, even if they only call microcode.
The sampler is based on the premise that it is better to know the address of instruction I with precision (because that is what you are looking for) than to know the number X% with precision. If you know that you could save roughly 30% of time by recoding something, do you really care that you might be off by 5%? You're still going to want to fix it. The amount of time it actually saves won't be made any less or greater by your knowing X precisely.
So it is possible to drive samples off a timer, but frankly I found it just as useful to trigger an interrupt by the user pressing both shift keys at the same time. Since 20 samples is generally plenty, and this way you can be sure to take samples at a relevant time (i.e. not while waiting for user input), it was quite adequate. Another way would be to take the timer-driven samples only while the user holds down both shift keys (or something like that).
It did not concern me that the taking of samples might slow down the program, because the goal was not to measure speed, but to locate the most costly instructions. After fixing something, the overall speedup is easy to measure.
The main thing that the profiler provided was a UI so you could examine the results painlessly. What comes out of the sampling phase is a collection of call stack samples, where each sample is a list of addresses of instructions, where every instruction but the last is a call instruction. The UI was mainly what is called a "butterfly view".
It has a current "focus", which is a particular instruction. To the left are displayed the call instructions immediately above that instruction, as culled from the stack samples. If the focus instruction is a call instruction, then the instructions below it appear to the right, as culled from the samples. On the focus instruction is displayed a percent, which is the percent of stacks containing that instruction. Similarly, for each instruction on the left or right, the percent is broken down by the frequency of each such instruction. Of course, each instruction was represented by file, line number, and the name of the function it was in. The user could easily explore the data by clicking any of the instructions to make it the new focus.
A variation on this UI treated the butterfly as bipartite, consisting of alternating layers of function call instructions and the functions containing them. That can give a little more clarity of time spent in each function.
Maybe it's not obvious, so it's worth mentioning some properties of this technique.
Recursion is not an issue, because if an instruction appears more than once on any given stack sample, that still counts as only one sample containing it. It still remains true that the estimated time that would be saved by its removal is the percent of stacks it is on.
Notice this is not the same as a call tree. It gives you the cost of an instruction no matter how many different branches of a call tree it is in.
Performance of the UI is not an issue, because the number of samples need not be very large. If a particular instruction I is the focus, it is quite simple to find how many samples contain it, and for each adjacent instruction, how many of the samples containing I also contain the adjacent instruction next to it.
As mentioned before, speed of sampling is not an issue, because we're not measuring performance, we're diagnosing. The sampling does not bias the results, because the sampling does not affect what the overall program does. An algorithm that takes N instructions to complete still takes N instructions even if it is halted any number of times.
I'm often asked how to sample a program that completes in milliseconds. The simple answer is wrap it in an outer loop to make it take long enough to sample. You can find out what takes X% of time, remove it, get the X% speedup, and then remove the outer loop.
This little profiler, that I called YAPA (yet another performance analyzer) was DOS-based and made a nice little demo, but when I had serious work to do, I would fall back on the manual method. The main reason for this is that the call stack alone is often not enough state information to tell you why a particular cycle is being spent. You may also need to know other state information so you have a more complete idea of what the program was doing at that time. Since I found the manual method pretty satisfactory, I shelved the tool.
A point that's often missed when talking about profiling is that you can do it repeatedly to find multiple problems. For example, suppose instruction I1 is on the stack 5% of the time, and I2 is on the stack 50% of the time. Twenty samples will easily find I2, but maybe not I1. So you fix I2. Then you do it all again, but now I1 takes 10% of the time, so 20 samples will probably see it. This magnification effect allows repeated applications of profiling to achieve large compounded speedup factors.
I would look at those open-source projects first:
Eclipse TPTP (http://www.eclipse.org/tptp/)
VisualVM (https://visualvm.dev.java.net/)
Then I would look at JVMTI (not JVMPI)
http://java.sun.com/developer/technicalArticles/Programming/jvmti/
JVMPI spec: http://java.sun.com/j2se/1.5.0/docs/guide/jvmpi/jvmpi.html
I salute your courage and bravery
EDIT: And as noted by user Boune, JVMTI:
http://java.sun.com/developer/technicalArticles/Programming/jvmti/
As another answer, I just looked at LukeStackwalker on sourceforge. It is a nice, small, example of a stack-sampler, and a nice place to start if you want to write a profiler.
Here, in my opinion, is what it does right:
It samples the entire call stack.
Sigh ... so near yet so far. Here, IMO, is what it (and other stack samplers like xPerf) should do:
It should retain the raw stack samples. As it is, it summarizes at the function level as it samples. This loses the key line-number information locating the problematic call sites.
It need not take so many samples, if storage to hold them is an issue. Since typical performance problems cost from 10% to 90%, 20-40 samples will show them quite reliably. Hundreds of samples give more measurement precision, but they do not increase the probability of locating the problems.
The UI should summarize in terms of statements, not functions. This is easy to do if the raw samples are kept. The key measure to attach to a statement is the fraction of samples containing it. For example:
5/20 MyFile.cpp:326 for (i = 0; i < strlen(s); ++i)
This says that line 326 in MyFile.cpp showed up on 5 out of 20 samples, in the process of calling strlen. This is very significant, because you can instantly see the problem, and you know how much speedup to expect from fixing it. If you replace the test i < strlen(s) with s[i] != '\0', the loop will no longer spend time in that call, so those samples will not occur, and the speedup will be approximately 1/(1 - 5/20) = 20/(20 - 5) = 4/3, i.e. a 33% speedup. (Thanks to David Thornley for this sample code.)
The UI should have a "butterfly" view showing statements. (If it shows functions too, that's OK, but the statements are what really matter.) For example:
3/20 MyFile.cpp:502 MyFunction(myArgs)
2/20 HisFile.cpp:113 MyFunction(hisArgs)
5/20 MyFile.cpp:326 for (i = 0; i < strlen(s); ++i)
5/20 strlen.asm:23 ... some assembly code ...
In this example, the line containing the for statement is the "focus of attention". It occurred on 5 samples. The two lines above it say that on 3 of those samples, it was called from MyFile.cpp:502, and on 2 of those samples, it was called from HisFile.cpp:113. The line below it says that on all 5 of those samples, it was in strlen (no surprise there). In general, the focus line will have a tree of "parents" and a tree of "children". If for some reason, the focus line is not something you can fix, you can go up or down. The goal is to find lines that you can fix that are on as many samples as possible.
IMPORTANT: Profiling should not be looked at as something you do once. For example, in the sample above, we got a 4/3 speedup by fixing one line of code. When the process is repeated, other problematic lines of code should show up at 4/3 the frequency they did before, and thus be easier to find. I never hear of people talking about iterating the profiling process, but it is crucial to getting overall large compounded speedups.
P.S. If a statement occurs more than once in a single sample, that means there is recursion taking place. It is not a problem. It still only counts as one sample containing the statement. It is still the case that the cost of the statement is approximated by the fraction of samples containing it.
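To make that counting rule concrete, here is a small sketch (names invented; each sample is represented as the list of "file:line" strings on one stack snapshot) that scores each statement by the fraction of samples containing it, counting recursive repeats only once:

import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;

public class SampleScorer {
    // Fraction of samples on which each source location appears at least once.
    public static Map<String, Double> fractionOnStack(List<List<String>> samples) {
        Map<String, Integer> counts = new HashMap<>();
        for (List<String> sample : samples) {
            // Dedupe within a sample: a recursive frame still counts once.
            for (String location : new HashSet<>(sample)) {
                counts.merge(location, 1, Integer::sum);
            }
        }
        Map<String, Double> fractions = new HashMap<>();
        counts.forEach((loc, n) -> fractions.put(loc, n / (double) samples.size()));
        return fractions;
    }
}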
