What is the best way to get the UNIX uptime in Java? Is there a standard Java library/function I could use or should I use Runtime's exec or ProcessBuilder to execute 'uptime'? Thanks
You can read /proc/uptime:
new Scanner(new FileInputStream("/proc/uptime")).next();
//"1128046.07" on my machine and still counting
From Wikipedia:
Shows how long the system has been on since it was last restarted:
$ cat /proc/uptime
350735.47 234388.90
The first number is the total number of seconds the system has been up. The second number is how much of that time the machine has spent idle, in seconds.
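For reference, a slightly more complete sketch that reads both values and closes the stream (this assumes a Linux-style /proc filesystem; the class and variable names are just for illustration):
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class Uptime {
    public static void main(String[] args) throws IOException {
        // /proc/uptime holds two space-separated values:
        // seconds since boot and aggregate idle seconds (summed over all CPUs)
        String[] parts = Files.readAllLines(Paths.get("/proc/uptime")).get(0).trim().split("\\s+");
        double uptimeSeconds = Double.parseDouble(parts[0]);
        double idleSeconds = Double.parseDouble(parts[1]);
        System.out.println("Up for " + uptimeSeconds + " s, idle " + idleSeconds + " s");
    }
}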
Runtime.getRuntime().exec("uptime");
Did you try this? The only thing that does something equivalent to system("uptime") in Java is Runtime.exec(). That's all I could think of.
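For illustration, a minimal sketch of running the command and reading its first line of output (it assumes uptime is on the PATH; the class name is made up):
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class UptimeExec {
    public static void main(String[] args) throws IOException, InterruptedException {
        Process p = Runtime.getRuntime().exec("uptime");
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            // e.g. "12:34  up 5 days,  3:21,  2 users,  load averages: ..."
            System.out.println(r.readLine());
        }
        p.waitFor();
    }
}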
This is likely to be entirely system dependent, but you can use System.nanoTime():
System.out.println("/proc/uptime " + new Scanner(new FileInputStream("/proc/uptime")).next());
System.out.println("System.nanoTime " + System.nanoTime() / 1e9);
prints
/proc/uptime 265671.85
System.nanoTime 265671.854834206
Warning: This is unlikely to work on all platforms.
I've seen this question and it's somewhat similar. I would like to know if it really is a big factor that would affect the performance of my application. Here's my scenario.
I have a Java webapp that can upload thousands of rows of data from a spreadsheet, which is read row by row from top to bottom. I'm using System.out.println() to show on the server side which line the application is currently reading.
- I'm aware of creating a log file. In fact, I'm creating a log file and, at the same time, displaying the logs on the server's prompt.
Is there any other way of printing the current data on the prompt?
I was recently testing with (reading and) writing large (1-1.5 GB) text files, and I found out that:
PrintWriter out = new PrintWriter(new BufferedWriter(new OutputStreamWriter(new FileOutputStream(java.io.FileDescriptor.out), "UTF-8"), 512));
out.println(yourString);
//...
out.flush();
is in fact almost 250% faster than
System.out.println(yourString);
My test-program first read about 1gb of data, processed it a bit and outputted it in slightly different format.
Test results (on Macbook Pro, with SSD reading&writing using same disk):
data-output-to-system-out > output.txt => 1min32sec
data-written-to-file-in-java => 37sec
data-written-to-buffered-writer-stdout > output.txt => 36sec
I did try with multiple buffer sizes between 256 and 10k, but that didn't seem to matter.
So keep in mind: if you're creating Unix command-line tools with Java where the output is meant to be redirected or piped somewhere else, don't use System.out directly!
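For reference, here is a self-contained sketch of the buffered approach above (the 512-character buffer and the loop body are just placeholders):
import java.io.BufferedWriter;
import java.io.FileDescriptor;
import java.io.FileOutputStream;
import java.io.OutputStreamWriter;
import java.io.PrintWriter;
import java.nio.charset.StandardCharsets;

public class BufferedStdout {
    public static void main(String[] args) {
        // Wrap the raw stdout file descriptor in a buffered, UTF-8 PrintWriter
        PrintWriter out = new PrintWriter(new BufferedWriter(
                new OutputStreamWriter(new FileOutputStream(FileDescriptor.out), StandardCharsets.UTF_8), 512));
        for (int i = 0; i < 1_000_000; i++) {
            out.println("line " + i);
        }
        // Flush before the program exits, or the tail of the output may be lost
        out.flush();
    }
}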
It can have an impact on your application's performance. The magnitude will vary depending on the hardware you are running on and the load on the host.
Some of the ways this can translate into a performance cost:
-> As Rocket boy stated, println is synchronized, which means you will incur locking overhead on the object header and may create thread bottlenecks depending on your design.
-> Printing to the console requires kernel time; kernel time means the CPU is not running in user mode, i.e. it is busy executing kernel code instead of your application code.
-> If you are already logging this, that means extra kernel time for I/O, and if your platform does not support asynchronous I/O your CPU might stall on busy waits.
You can actually benchmark this and verify it yourself.
There are ways to get away with it, for example really fast I/O, a big dedicated machine, and biased locking in your JVM options if your application's console printing will not be multithreaded.
Like everything in performance, it all depends on your hardware and priorities.
System.out.println()
is synchronized.
public void println(String x) {
    synchronized (this) {
        print(x);
        newLine();
    }
}
If multiple threads write to it, its performance will suffer.
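To see the locking in isolation, here is a small sketch (not a rigorous benchmark; the thread and iteration counts are arbitrary) where several threads hammer System.out and the elapsed time is reported on System.err:
public class PrintlnContention {
    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                // Every call here competes for the same lock inside PrintStream
                System.out.println(Thread.currentThread().getName() + " line " + i);
            }
        };
        long start = System.nanoTime();
        Thread[] threads = new Thread[4];
        for (int t = 0; t < threads.length; t++) {
            threads[t] = new Thread(task, "worker-" + t);
            threads[t].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        System.err.println("Elapsed ms: " + (System.nanoTime() - start) / 1_000_000);
    }
}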
Yes, it will have a HUGE impact on performance. If you want a quantifiable number, there is plenty of software and plenty of ways to measure your own code's performance.
System.out.println is very slow, even compared with most slow operations. This is because it places more work on the machine than other I/O, and every call is serialized on the same lock.
I suggest you write the output to a file and tail that file. This way the output is still slow, but it won't slow down your web service as much.
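A minimal sketch of that idea, assuming a log file named progress.log that you then watch with tail -f progress.log (the file name and row count are made up):
import java.io.IOException;
import java.io.PrintWriter;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class ProgressLog {
    public static void main(String[] args) throws IOException {
        // Buffered writer to a file; far cheaper per line than printing to the console
        try (PrintWriter log = new PrintWriter(
                Files.newBufferedWriter(Paths.get("progress.log"), StandardCharsets.UTF_8))) {
            for (int row = 1; row <= 10_000; row++) {
                log.println("processing row " + row);
            }
        }
    }
}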
Here's a very simple program to check the performance of System.out.println and compare it with a multiplication operation (you can use other operations or functions specific to your requirements).
public class Main{
public static void main(String []args) throws InterruptedException{
long tTime = System.nanoTime();
long a = 123L;
long b = 234L;
long c = a*b;
long uTime = System.nanoTime();
System.out.println("a * b = "+ c +". Time taken for multiplication = "+ (uTime - tTime) + " nano Seconds");
long vTime = System.nanoTime();
System.out.println("Time taken to execute Print statement : "+ (vTime - uTime) + " nano Seconds");
}
}
Output depends on your machine and its current state.
Here's what I got on : https://www.onlinegdb.com/online_java_compiler
a * b = 28782. Time taken for multiplication = 330 nano Seconds
Time taken to execute Print statement : 338650 nano Seconds
EDIT :
I have a logger set up on my local machine, so I wanted to give you an idea of the performance difference between System.out.println and logger.info, i.e. a comparison between console printing and logging:
public static void main(String []args) throws InterruptedException{
long tTime = System.nanoTime();
long a = 123L;
long b = 234L;
long c = a*b;
long uTime = System.nanoTime();
System.out.println("a * b = "+ c +". Time taken for multiplication = "+ (uTime - tTime) + " nano Seconds");
long vTime = System.nanoTime();
System.out.println("Time taken to execute Print statement : "+ (vTime - uTime) + " nano Seconds");
long wTime = System.nanoTime();
logger.info("a * b = "+ c +". Time taken for multiplication = "+ (uTime - tTime) + " nano Seconds");
long xTime = System.nanoTime();
System.out.println("Time taken to execute log statement : "+ (xTime - wTime) + " nano Seconds");
}
Here's what I got on my local machine :
a * b = 28782. Time taken for multiplication = 1667 nano Seconds
Time taken to execute Print statement : 34080917 nano Seconds
2022-11-15 11:36:32.734 [] INFO CreditAcSvcImpl uuid: - a * b = 28782. Time taken for multiplication = 1667 nano Seconds
Time taken to execute log statement : 9645083 nano Seconds
Notice that System.out.println takes almost 24 ms longer than logger.info.
Is there any way to get the size of the total memory on the operating system from Java? Using
Runtime.getRuntime().maxMemory()
returns the memory allowed for the JVM, not that of the operating system. Does anyone have a way to obtain this (from Java code)?
com.sun.management.OperatingSystemMXBean bean =
(com.sun.management.OperatingSystemMXBean)
java.lang.management.ManagementFactory.getOperatingSystemMXBean();
long max = bean.getTotalPhysicalMemorySize();
returns the physical RAM size visible to the JVM (which can be limited on a 32-bit JVM), not the heap size.
There is no Java-only way to get that information. You may use Runtime.exec() to start OS-specific commands, e.g. /usr/bin/free on Linux. Also on Linux, you can use Java file-access classes (e.g. FileInputStream) to parse /proc/meminfo.
That is not possible with pure Java: your program runs on the Java virtual machine and is therefore isolated from the OS. I suggest two solutions for this:
1) You can use JNI and call a C++ function to do it.
2) Another option is to use Runtime.exec() and then get the info from "cat /proc/meminfo" (a sketch that simply reads the file instead is shown below).
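For illustration, here is a sketch that skips spawning a cat process and just parses the file directly (Linux only; it assumes the usual "MemTotal:  16318480 kB" line format, and the class name is made up):
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class TotalMemory {
    public static void main(String[] args) throws IOException {
        // /proc/meminfo lines look like "MemTotal:       16318480 kB"
        for (String line : Files.readAllLines(Paths.get("/proc/meminfo"))) {
            if (line.startsWith("MemTotal:")) {
                long kiloBytes = Long.parseLong(line.replaceAll("\\D+", ""));
                System.out.println("Total RAM: " + (kiloBytes / 1024) + " MB");
                break;
            }
        }
    }
}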
You can get the RAM usage with this. It is the same value that Task Manager on Windows shows:
com.sun.management.OperatingSystemMXBean bean = (com.sun.management.OperatingSystemMXBean)java.lang.management.ManagementFactory.getOperatingSystemMXBean();
double percentage = ((double)bean.getFreeMemorySize() / (double)bean.getTotalPhysicalMemorySize()) * 100;
percentage = 100 - percentage;
System.out.println("RAM Usage: " + percentage + "%");
I have the following two programs:
long startTime = System.currentTimeMillis();
for (int i = 0; i < N; i++);
long endTime = System.currentTimeMillis();
System.out.println("Elapsed time: " + (endTime - startTime) + " msecs");
and
long startTime = System.currentTimeMillis();
for (long i = 0; i < N; i++);
long endTime = System.currentTimeMillis();
System.out.println("Elapsed time: " + (endTime - startTime) + " msecs");
Note: the only difference is the type of the loop variable (int and long).
When I run this, the first program consistently prints between 0 and 16 msecs, regardless of the value of N. The second takes a lot longer. For N == Integer.MAX_VALUE, it runs in about 1800 msecs on my machine. The run time appears to be more or less linear in N.
So why is this?
I suppose the JIT-compiler optimizes the int loop to death. And for good reason, because obviously it doesn't do anything. But why doesn't it do so for the long loop as well?
A colleague thought we might be measuring the JIT compiler doing its work in the long loop, but since the run time seems to be linear in N, this probably isn't the case.
I'm using JDK 1.6.0 update 17:
C:\>java -version
java version "1.6.0_17"
Java(TM) SE Runtime Environment (build 1.6.0_17-b04)
Java HotSpot(TM) 64-Bit Server VM (build 14.3-b01, mixed mode)
I'm on Windows XP Professional x64 Edition, Service Pack 2, with an Intel Core2 Quad CPU at 2.40GHz.
DISCLAIMER
I know that microbenchmarks aren't useful in production. I also know that System.currentTimeMillis() isn't as accurate as its name suggests. This is just something I noticed while fooling around, and I was simply curious as to why this happens; nothing more.
It's an interesting question, but to be honest I'm not convinced that considering Hotspot's behaviour here will yield useful information. Any answers you do get are not going to be transferable in a general case (because we're looking at the optimisations that Hotspot performs in one specific situation), so they'll help you understand why one no-op is faster than another, but they won't help you write faster "real" programs.
It's also incredibly easy to write very misleading micro benchmarks around this sort of thing - see this IBM DW article for some of the common pitfalls, how to avoid them and some general commentary on what you're doing.
So really this is a "no comment" answer, but I think that's the only valid response. A compile-time-trivial no-op loop doesn't need to be fast, so the compiler isn't tuned to make it fast in all of these conditions.
You are probably using a 32-bit JVM. The results will probably be different with a 64-bit JVM. On a 32-bit JVM an int can be mapped to a native 32-bit integer and incremented with a single operation. The same doesn't hold for a long, which requires more operations to increment.
See this question for a discussion about int and long sizes.
My guess - and it is only a guess - is this:
The JVM concludes that the first loop effectively does nothing, so it removes it entirely. No variable "escapes" from the for-loop.
In the second case, the loop also does nothing. But it might be that the JVM code that determines that the loop does nothing has a "if (type of i) == int" clause. In which case the optimisation to remove the do-nothing for-loop only works with int.
An optimisation that removes code has to be sure there are no side-effects. The JVM coders seem to have erred on the side of caution.
Micro-benchmarking like this does not make a lot of sense, because the results depend a lot on the internal workings of the Hotspot JIT.
Also, note that the system clock values that you get with System.currentTimeMillis() don't have a 1 ms resolution on all operating systems. You can't use this to very accurately time very short duration events.
Have a look at this article, that explains why doing micro-benchmarks in Java is not as easy as most people think: Anatomy of a flawed microbenchmark
I have a program which I have written myself in Java, but I want to test method execution times and get timings for specific methods. I was wondering if this is possible, maybe via some Eclipse plug-in, or maybe by inserting some code?
I see; it is quite a small program, nothing more than 1500 lines. Which would be better: a dedicated tool or System.currentTimeMillis()?
Other than using a profiler, a simple way of getting what you want is the following:
public class SomeClass{
public void somePublicMethod()
{
long startTime = System.currentTimeMillis();
someMethodWhichYouWantToProfile();
long endTime = System.currentTimeMillis();
System.out.println("Total execution time: " + (endTime-startTime) + "ms");
}
}
If the bottleneck is big enough to be observed with a profiler, use a profiler.
If you need more accuracy, the best way of measuring a small piece of code is to use a Java microbenchmark framework like OpenJDK's JMH or Google's Caliper. I believe they are as simple to use as JUnit, and not only will you get more accurate results, you will also gain the community's expertise in doing it right.
Here is a JMH microbenchmark that measures the execution time of Math.log():
private double x = Math.PI;

@Benchmark
public double myBenchmark() {
    return Math.log(x);
}
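For completeness, a sketch of how this snippet might sit in a runnable benchmark class launched from a main method. The class and method names are made up; it assumes the jmh-core and jmh-generator-annprocess dependencies are on the classpath, and the enclosing class needs @State because the benchmark reads an instance field:
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.OptionsBuilder;

@State(Scope.Thread)               // required because the benchmark reads an instance field
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class LogBenchmark {

    private double x = Math.PI;

    @Benchmark
    public double log() {
        return Math.log(x);
    }

    public static void main(String[] args) throws RunnerException {
        new Runner(new OptionsBuilder()
                .include(LogBenchmark.class.getSimpleName())
                .forks(1)
                .build()).run();
    }
}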
Using currentTimeMillis() and nanoTime() for measuring has many limitations:
They have latency (they also take time to execute), which biases your measurements.
They have VERY limited precision; you can only measure with a granularity of roughly 26 ns on Linux and 300 ns or so on Windows, as described here.
The warmup phase is not taken into consideration, making your measurements fluctuate A LOT.
The currentTimeMillis() and nanoTime() methods in Java can be useful but must be used with EXTREME CAUTION, or you can get wrong measurements like this one, where the order of the measured snippets influences the results, or like this one, where the author wrongly concluded that several million operations were performed in less than a millisecond, when in fact the JVM realised no operations were being made and hoisted the code, running no code at all.
Here is a wonderful video explaining how to microbenchmark the right way: https://shipilev.net/#benchmarking
For quick and dirty time measurement tests, don't use wall-clock time (System.currentTimeMillis()). Prefer System.nanoTime() instead:
public static void main(String... ignored) throws InterruptedException {
final long then = System.nanoTime();
TimeUnit.SECONDS.sleep(1);
final long millis = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - then);
System.out.println("Slept for (ms): " + millis); // = something around 1000.
}
You should use a profiler like
jprofiler
yourkit
They will easily integrate with any IDE and show whatever detail you need.
Of course these tools are complex and meant for profiling complex programs; if you just need some simple benchmarks, I suggest you use System.currentTimeMillis() or System.nanoTime() and calculate the deltas in milliseconds between calls yourself.
Using a profiler is better because you can find out average execution times and bottlenecks in your app.
I use VisualVM. slick and simple.
Google Guava has a stopwatch, makes things simple and easy.
Stopwatch stopwatch = Stopwatch.createStarted();
myFunctionCall();
LOGGER.debug("Time taken by myFunctionCall: " + stopwatch.stop());
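A slightly fuller sketch, in case it helps (doWork() is just a made-up stand-in for the call being timed): stop() returns the Stopwatch itself, its toString() is human-readable, and elapsed() gives a numeric value you can log or assert on.
import com.google.common.base.Stopwatch;
import java.util.concurrent.TimeUnit;

public class StopwatchExample {
    public static void main(String[] args) {
        Stopwatch stopwatch = Stopwatch.createStarted();
        doWork();
        // elapsed() returns the measured duration in the requested unit
        System.out.println("Took " + stopwatch.stop().elapsed(TimeUnit.MILLISECONDS) + " ms");
    }

    private static void doWork() {
        for (int i = 0; i < 1_000_000; i++) {
            Math.sqrt(i);
        }
    }
}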
JProfiler and YourKit are good, but they cost money.
There is a free plugin for Eclipse called TPTP (Test & Performance Tools Platform) that can give you code execution times. Here is a tutorial that a quick Google search brought up: http://www.eclipse.org/articles/Article-TPTP-Profiling-Tool/tptpProfilingArticle.html
Another custom-made solution could be based on the following post: http://www.mkyong.com/spring/spring-aop-examples-advice/
You also have the option of using utilities around application monitoring and SNMP. If you need to "time" your methods on a regular basis in a production environment, you should probably consider using one of those SNMP tools.
Usually I store the time in a .txt file to analyse the outcome:
StopWatch sWatch = new StopWatch();
sWatch.start();
//do stuff that you want to measure
downloadContent();
sWatch.stop();
//make the time pretty
long timeInMilliseconds = sWatch.getTime();
long hours = TimeUnit.MILLISECONDS.toHours(timeInMilliseconds);
long minutes = TimeUnit.MILLISECONDS.toMinutes(timeInMilliseconds - TimeUnit.HOURS.toMillis(hours));
long seconds = TimeUnit.MILLISECONDS.toSeconds(timeInMilliseconds - TimeUnit.HOURS.toMillis(hours) - TimeUnit.MINUTES.toMillis(minutes));
long milliseconds = timeInMilliseconds - TimeUnit.HOURS.toMillis(hours) - TimeUnit.MINUTES.toMillis(minutes) - TimeUnit.SECONDS.toMillis(seconds);
String t = String.format("%02d:%02d:%02d:%d", hours, minutes, seconds, milliseconds);
//each line to store in a txt file, new line
String content = "Ref: " + ref + " - " + t + "\r\n";
//you may want wrap this section with a try catch
File file = new File("C:\\time_log.txt");
FileWriter fw = new FileWriter(file.getAbsoluteFile(), true); //append content set to true, so it does not overwrite existing data
BufferedWriter bw = new BufferedWriter(fw);
bw.write(content);
bw.close();
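If you are on Java 9 or newer, java.time.Duration can produce the same HH:mm:ss:SSS string with less arithmetic. A small sketch (the class and method names are made up):
import java.time.Duration;

public class PrettyTime {
    static String pretty(long timeInMilliseconds) {
        Duration d = Duration.ofMillis(timeInMilliseconds);
        // toMinutesPart()/toSecondsPart()/toMillisPart() require Java 9+
        return String.format("%02d:%02d:%02d:%d",
                d.toHours(), d.toMinutesPart(), d.toSecondsPart(), d.toMillisPart());
    }

    public static void main(String[] args) {
        System.out.println(pretty(3_723_456L)); // prints 01:02:03:456
    }
}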
You can add this code and it will tell you how long the method took to execute.
long millisecondsStart = System.currentTimeMillis();
executeMethod();
long timeSpentInMilliseconds = System.currentTimeMillis() - millisecondsStart;
One of my friends showed me something he had done, and I was at a serious loss to explain how it could have happened: he was using System.nanoTime to time something, and it gave the user an update every second to tell how much time had elapsed (it used Thread.sleep(1000) for that part), and it took seemingly forever (something that was supposed to wait 10 seconds took roughly 3 minutes to finish). We tried using millitime to see how much time had elapsed: it printed how much nanotime had elapsed every second, and we saw that the nanotime was only moving by roughly 40-50 milliseconds per second.
I checked for bugs relating to System.nanotime and Java, but it seemed the only things I could find involved the nanotime suddenly greatly increasing and then stopping. I also browsed this blog entry based on something I read in a different question, but that didn't have anything that may cause it.
Obviously this could be worked around in this situation by just using the millitime instead; there are lots of workarounds. But what I'm curious about is whether there's anything, other than a hardware issue with the system clock (or with whatever the most accurate clock the CPU has, since that's what System.nanoTime seems to use), that could cause it to run consistently slow like this?
long initialNano = System.nanoTime();
long initialMili = System.currentTimeMillis();
//Obviously the code isn't actually doing a while(true),
//but it illustrates the point
while(true) {
Thread.sleep(1000);
long currentNano = System.nanoTime();
long currentMili = System.currentTimeMillis();
double secondsNano = ((double) (currentNano - initialNano))/1000000000D;
double secondsMili = ((double) (currentMili - initialMili))/1000D;
System.out.println(secondsNano);
System.out.println(secondsMili);
}
secondsNano will print something along the lines of 0.04, whereas secondsMili will print something very close to 1.
It looks like a bug along these lines has been reported in Sun's bug database, but it was closed as a duplicate and its link doesn't go to an existing bug. It seems to be very system-specific, so I'm getting more and more sure this is a hardware issue.
... he was using a System.nanotime to cause the program to wait before doing something, and ...
Can you show us some code that demonstrates exactly what he was doing? Was it some strange kind of busy loop, like this:
long t = System.nanoTime() + 1000000000L;
while (System.nanoTime() < t) { /* do nothing */ }
If yes, then that's not the right way to make your program pause for a while. Use Thread.sleep(...) instead to make the program wait for a specified number of milliseconds.
You do realise that the loop you are using doesn't take exactly one second to run? Firstly, Thread.sleep() isn't guaranteed to be accurate, and the rest of the code in the loop does take some time to execute (both nanoTime() and currentTimeMillis() can actually be quite slow, depending on the underlying implementation). Secondly, System.currentTimeMillis() is not guaranteed to be accurate either (it only updates every 50 ms on some operating system and hardware combinations). You also mention it being inaccurate by 40-50 ms above and then go on to say 0.004 s, which is actually only 4 ms.
I would recommend you change your System.out.println() to be:
System.out.println(secondsNano - secondsMili);
This way, you'll be able to see how much the two clocks differ on a second-by-second basis. I left it running for about 12 hours on my laptop and it was out by 1.46 seconds (fast, not slow). This shows that there is some drift in the two clocks.
I would think that the currentTimeMillis() method provides a more accurate time over a large period of time, yet nanoTime() has a greater resolution and is good for timing code or providing sub-millisecond timing over short time periods.
I've experienced the same problem. Except in my case, it is more pronounced.
With this simple program:
public class test {
public static void main(String[] args) {
while (true) {
try {
Thread.sleep(1000);
}
catch (InterruptedException e) {
}
OStream.out("\t" + System.currentTimeMillis() + "\t" + nanoTimeMillis());
}
}
static long nanoTimeMillis() {
return Math.round(System.nanoTime() / 1000000.0);
}
}
I get the following results:
13:05:16:380 main: 1288199116375 61530042
13:05:16:764 main: 1288199117375 61530438
13:05:17:134 main: 1288199118375 61530808
13:05:17:510 main: 1288199119375 61531183
13:05:17:886 main: 1288199120375 61531559
The nanoTime is showing only ~400ms elapsed for each second.