How can I get the CPU utilization of the client's machine?
I have created a Java web application where I need to get the CPU utilization of the client's machine.
Can it be done using either JavaScript or Java?
Whichever you use, Java or JavaScript, you can't get the CPU utilization of the client's machine. That would be a huge security risk. The only way to do it would be to install a plug-in. Even that wouldn't help much, however, as reading the CPU utilization is platform specific.
@Daniel provides a link to a method that (on a good day) would give an approximate measure of CPU usage.
@kgiannakakis says that this would be a serious security risk.
But I wonder. How could an attacker exploit CPU usage levels to compromise the user machine's security, or steal sensitive information about the user? Off the top of my head, I cannot think of anything realistic.
The only possible risk I can think of would be if the attacker had already installed/launched some spy software somewhere else on the user's machine. The ability to measure CPU usage could be used to implement a "covert channel" to get information from the spy s/w to the outside world. But this is only of theoretical interest (unless you are trying to implement A1 / A2 level security).
It may look like a security risk, but there is a technique to get an estimate of the CPU usage with JavaScript inside the browser. You may want to check this article:
Ajaxian - JPU: JavaScript CPU Monitor (Demo here)
As kgiannakakis noted in a comment below, this calculation is based on the delay of setInterval() calls.
I remember seeing a similar implementation on Mindmeister.com (showing the CPU usage while editing a mind map.)
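The core of that technique is simply to schedule a callback at a fixed interval and treat how late it actually fires as a rough proxy for how busy the machine is. A minimal Java sketch of the same heuristic (purely illustrative, not the JPU code; class and variable names are made up):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class TimerDriftMonitor {
    public static void main(String[] args) {
        final long periodMs = 100;
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        AtomicLong lastRun = new AtomicLong(System.nanoTime());

        scheduler.scheduleAtFixedRate(() -> {
            long now = System.nanoTime();
            long actualMs = (now - lastRun.getAndSet(now)) / 1_000_000;
            // If the machine is busy, the task fires later than scheduled;
            // the accumulated lag is a very rough load indicator.
            long lagMs = Math.max(0, actualMs - periodMs);
            System.out.println("interval " + actualMs + " ms, lag " + lagMs + " ms");
        }, periodMs, periodMs, TimeUnit.MILLISECONDS);
    }
}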
There is a simple example of a CPU usage monitor that demonstrates how to use the XML DOM capabilities of client-side JavaScript in a BSP application to dynamically request data from the server and update the corresponding elements of the page without reloading the whole page.
This article might be helpful.
Try this link for a JNI-based approach to getting CPU utilization:
http://www.javaworld.com/javaworld/javaqa/2002-11/01-qa-1108-cpu.html?page=1
If it's Java, look at OperatingSystemMXBean. Please be aware this is Java 1.6 only.
// Declare the com.sun.management variant to access the JDK-specific methods
com.sun.management.OperatingSystemMXBean operatingSystemInfo =
        (com.sun.management.OperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean();
I am not sure how well this works in an applet.
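For reference, a more complete, self-contained version of that snippet might look like this (a sketch assuming a Sun/Oracle JDK, since com.sun.management is JDK-specific; getSystemLoadAverage() returns -1.0 on platforms such as Windows that don't report it):

import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;

public class CpuInfo {
    public static void main(String[] args) {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();

        // Standard (java.lang.management) methods, available since Java 6
        System.out.println("Available processors: " + os.getAvailableProcessors());
        System.out.println("System load average:  " + os.getSystemLoadAverage());

        // The com.sun.management variant exposes extra, JDK-specific methods
        if (os instanceof com.sun.management.OperatingSystemMXBean) {
            com.sun.management.OperatingSystemMXBean sunOs =
                    (com.sun.management.OperatingSystemMXBean) os;
            // CPU time consumed by this JVM process, in nanoseconds
            System.out.println("Process CPU time (ns): " + sunOs.getProcessCpuTime());
        }
    }
}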
I once looked into creating a Web Start application that could be launched by a user to "scan" their system hardware. Luckily, the targeted systems were Windows boxes, so I used the Jacob package (Google or search this site for info) to do a WMI query for things like serial numbers, etc.
I created a proof of concept Swing client app, but never got around to the Web Start portion.
Related
I need to optimise a Java application. It makes some 3rd party calls. I need some good tool to accurately measure the time taken by individual API calls.
To give an idea of the complexity: the application takes a data source file containing 1 million rows, and it takes around one hour to complete the processing. As part of the processing, it makes some 3rd-party calls (including some network calls). I need to identify which calls are taking more time than others and, based on that, find a way to optimise the application.
Any suggestions would be appreciated.
I can recommend JVisualVM. It's a great monitoring / profiling tool that is bundled with the Oracle/Sun JDK. Just fire it up, connect to your application and start the CPU-profiling. You should get great histograms over where the time is spent.
Getting Started with VisualVM has a great screen-cast showing you how to work with it.
Another more rudimentary alternative is to go with the -Xprof command line option:
-Xprof
  Profiles the running program, and sends profiling data to standard output.
  This option is provided as a utility that is useful in program development
  and is not intended to be used in production systems.
I've used YourKit a few times and was quite happy with it. However, I've never profiled a long-running operation.
Is the processing the same for each row? If so, the size of the input file doesn't really matter. You could profile a subset to figure out which calls are expensive.
Just wanted to mention the inspectIT tool. It recently became completely open source (https://github.com/inspectIT/inspectIT). It provides a complete and detailed call graph with contextual information, and there are many out-of-the-box sensors for database calls, HTTP monitoring, exceptions, etc.
Seems perfect for your use case.
Try OPNET's Panorama software product
It sounds like a normal profiler might not be the right tool in this case, since they're geared towards measuring the CPU time taken by the program being profiled rather than external APIs that it calls, and they tend to incur a high overhead of their own and collect a large amount of data that would probably overwhelm your system if left running for a long time.
If you really need to collect performance data over such a long time, and mainly for external calls, then Perf4J is probably a better tool.
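A rough sketch of how Perf4J's stopwatch-style timing is typically wrapped around an external call (the tag and method names here are made up for illustration):

import org.perf4j.LoggingStopWatch;
import org.perf4j.StopWatch;

public class ThirdPartyCallTimer {
    public void callExternalService() {
        // Each timed block gets a tag; Perf4J aggregates statistics per tag
        StopWatch watch = new LoggingStopWatch("externalService.call");
        try {
            // ... the 3rd-party / network call you want to measure ...
        } finally {
            watch.stop(); // logs the elapsed time for this tag
        }
    }
}

Perf4J's log analysis can then aggregate the timings per tag over the whole run, which suits a long-running batch better than keeping an interactive profiler attached for an hour.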
In our office we use the YourKit profiler on a day-to-day basis. It's really lightweight and serves most of the performance-related use cases we have had.
But I have also used VisualVM. It's free and fast. You may first want to give VisualVM a try before going for YourKit (YourKit is not freeware).
VisualVM (part of the JDK), together with Java 7, can produce detailed profiling.
I use the profiler in NetBeans (it is really brilliant and already built in, no need to install a plugin), or JVisualVM when not using NetBeans.
I'm new here and I'm not very knowledgeable about CPU consumption and multithreading. But I was wondering why my web app is consuming so much CPU. What my program does is update values in the background so that users don't have to wait for the processing of the data and only need to fetch it upon request. The updating processes are scheduled tasks using the executor library that fire off 8 threads every 5 seconds to update my data.
Now I'm wondering why my application is consuming so much of the CPU. Is it because of bad code or is it because of a low-spec server? (2 cores, with 2 databases and 1 major application running alongside my web app.)
Thank you very much for your help.
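For context, the kind of background scheduling described in the question usually looks roughly like this (a minimal sketch; the class and method names are hypothetical):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class BackgroundUpdater {
    // Pool of 8 worker threads, as described in the question
    private final ScheduledExecutorService scheduler =
            Executors.newScheduledThreadPool(8);

    public void start() {
        // Run the update task every 5 seconds; if each run does heavy work,
        // several such tasks can easily saturate a 2-core machine
        scheduler.scheduleAtFixedRate(this::updateData, 0, 5, TimeUnit.SECONDS);
    }

    private void updateData() {
        // ... fetch and recompute the cached values here ...
    }
}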
You need to profile your application to find out where the CPU is actually being consumed. You can add a simple profiling/timing routine of your own, or, if your environment permits it, run the built-in "hprof" profiling agent:
java -Xrunhprof ...
(In reality, you probably want to set some extra options: Google "hprof" for more details.)
The latter is easier in principle, but I mention the possibility of adding your own profiling routine because it's more flexible and you can do it e.g. in a Servlet environment where running another profiler is more cumbersome.
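If you go the roll-your-own route, even a tiny helper like this (purely illustrative; the class and method names are made up) can narrow down which calls are slow:

import java.util.concurrent.Callable;

public final class SimpleTimer {
    // Runs the given piece of work and prints how long it took
    public static <T> T time(String label, Callable<T> work) throws Exception {
        long start = System.nanoTime();
        try {
            return work.call();
        } finally {
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println(label + " took " + elapsedMs + " ms");
        }
    }
}

You would then wrap each suspect call in SimpleTimer.time(...) and compare the printed timings.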
Paulo,
It is not possible for someone here to say whether the problem is that your code is inefficient or the server is under spec. It could be either or both of those, or something else.
You are going to need to do some research of your own:
Profile the code. This will allow you to identify where your webapp is spending most of its time.
Look at the OS-level stats that are available to you. This might tell you that the real problem is memory usage or disk I/O.
Look at the performance of the back-end database. Is it using a lot of CPU?
Once you have identified the area(s) where the CPU is being used, you need to figure out what the real cause of the problem is and work out how to fix it. And once you've got a potential fix implemented, you can rerun your profiling, etc. to see if it has helped.
I want to gain more insight regarding the scale of workload a single-server Java web application deployed to a single Tomcat instance can handle. In particular, let's pretend that I am developing a Wiki application that has a similar usage pattern to Wikipedia. How many simultaneous requests can my server handle reliably before running out of memory or showing signs of excess stress if I deploy it on a machine with the following configuration:
4-Core high-end Intel Xeon CPU
8GB RAM
2 HDDs in RAID-1 (No SSDs, no PCIe based Solid State storages)
RedHat or Centos Linux (64-bit)
Java 6 (64-bit)
MySQL 5.1 / InnoDB
Also let's assume that the MySQL DB is installed on the same machine as Tomcat and that all the Wiki data are stored inside the DB. Furthermore, let's pretend that the Java application is built on top of the following stack:
SpringMVC for the front-end
Hibernate/JPA for persistence
Spring for DI and Security, etc.
If you haven't used the exact configuration but have experience in evaluating the scalability of a similar architecture, I would be very interested in hearing about that as well.
Thanks in advance.
EDIT: I think I have not articulated my question properly. I'll mark the answer with the most upvotes as the best answer and rewrite my question in the community wiki area. In short, I just wanted to learn about your experiences with the scale of workload your Java application has been able to handle on one physical server, along with some description of the type and architecture of the application itself.
You will need to use a group of tools:
Load testing tool - JMeter can be used.
Monitoring tool - this will be used to monitor the load on various resources. There are lots of paid as well as free ones: JProfiler, VisualVM, etc.
Collection and reporting tool. (I have not used any particular tool for this.)
With the above tools you can find the optimal value. I would approach it in the following way:
Get to know what the ratio of accessed pages should be, and what the background processes and their frequencies are.
Configure JMeter accordingly (for those ratios), apply the load, and monitor performance (time to serve a page can be measured in JMeter) while watching other resources with the monitoring tool. Also check the error ratio. (Note: you need to decide what error ratio is not acceptable.)
Keep increasing the load step by step and keep recording the various numbers of interest until the server fails completely.
You can decide on the optimal value based on many criteria: low error rate, maximum serving time, etc.
JMeter supports a lot of ways to apply load.
To be honest, it's almost impossible to say. There are probably about 3 ways (off the top of my head) to build such a system, and each would have fairly different performance characteristics. Your best bet is to build and test.
Firstly, try to get some idea of the estimated volumes you'll have and the latency constraints that you'll need to meet.
Come up with a basic architecture and implement a thin slice end to end through the system (ideally the most common use case). Use a load testing tool (like The Grinder or Apache JMeter) to inject load and start measuring the performance. If the performance is acceptable (be conservative: your simple implementation will likely include less functionality and be faster than the full system), continue building the system and testing to make sure you don't introduce a major performance bottleneck. If not, come up with a different design.
If your code is reasonable, the bottleneck will likely be the database, somewhere in the region of hundreds of DB ops per second. If that is insufficient then you may need to think about caching.
Definitely take a look at Spring Insight for performance monitoring and analysis.
English Wikipedia has 14GB of data. An 8GB memory cache would have a very high hit ratio, and I think hard disk reads would be well within its capacity. Therefore, the app is most likely network bound.
English Wikipedia has about 3000 page views per second. It is possible that Tomcat can handle the load with careful tuning, and that the network has enough throughput to serve the traffic.
So the entire wikipedia site can be hosted on one moderate machine? Probably not. Just an idea.
Sources:
http://stats.wikimedia.org/EN/TablesWikipediaEN.htm
http://stats.wikimedia.org/EN/TablesPageViewsMonthly.htm
Tomcat doesn't allow for spreading over multiple machines. If you really are concerned about scalability, you must consider what to do when your application outgrows a single machine.
If so, then how? I'm doing a team project for school. I thought that Java couldn't actually access hardware directly, since that would make it hard to be cross-platform. I need to know this because, after some quick Googling, I haven't found anything, and my team members (who want to do this and want to use Java) seem unsure of how to proceed, after apparently much more searching than I've done.
You're right in that you can't access hardware directly from Java (unless you're calling native code, but that's not what you're after) since it runs in a sandboxed environment, namely the Java Virtual Machine (JVM).
However, you can get some basic info from the JVM that it gathers from the underlying OS.
Take a look at using Java to get OS-level system information
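For example, a minimal sketch of the kind of basic information available from the standard library alone:

public class BasicSystemInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // Processor count and JVM memory limits, as seen by the JVM
        System.out.println("Available processors: " + rt.availableProcessors());
        System.out.println("Max heap (bytes):     " + rt.maxMemory());
        System.out.println("Free heap (bytes):    " + rt.freeMemory());

        // OS identification comes from system properties
        System.out.println("OS: " + System.getProperty("os.name")
                + " " + System.getProperty("os.version")
                + " (" + System.getProperty("os.arch") + ")");
    }
}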
What you are looking for is the SIGAR API.
Overview
The Sigar API provides a portable interface for gathering system information such as:
System memory, swap, CPU, load average, uptime, logins
Per-process memory, CPU, credential info, state, arguments, environment, open files
File system detection and metrics
Network interface detection, configuration info and metrics
TCP and UDP connection tables
Network route table
This information is available in most operating systems, but each OS has its own way(s) of providing it.
SIGAR provides developers with one API to access this information regardless of the underlying platform.
The core API is implemented in pure C with bindings currently implemented for Java, Perl, Ruby, Python, Erlang, PHP and C#.
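Assuming the Java bindings (org.hyperic.sigar) and the matching native SIGAR library are available, usage looks roughly like this:

import org.hyperic.sigar.CpuPerc;
import org.hyperic.sigar.Mem;
import org.hyperic.sigar.Sigar;
import org.hyperic.sigar.SigarException;

public class SigarExample {
    public static void main(String[] args) throws SigarException {
        // Requires the SIGAR jar plus the matching native library on java.library.path
        Sigar sigar = new Sigar();

        // Overall CPU usage, returned as fractions between 0.0 and 1.0
        CpuPerc cpu = sigar.getCpuPerc();
        System.out.println("CPU user:     " + cpu.getUser());
        System.out.println("CPU system:   " + cpu.getSys());
        System.out.println("CPU combined: " + cpu.getCombined());

        // Physical memory, in bytes
        Mem mem = sigar.getMem();
        System.out.println("Memory used/total: " + mem.getUsed() + " / " + mem.getTotal());
    }
}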
The amount of info you'll be able to get using native Java APIs is pretty small, as your program generally only knows about the VM it's sitting on.
You can, however, call out to the command line, run native apps, and parse the results. It's not particularly good form in a Java app, however, as you lose the cross-platform benefits usually associated with the language.
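For example, a minimal (Unix-only, purely illustrative) sketch that shells out to uptime and reads its output:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class UptimeReader {
    // You would parse the load averages out of the line yourself, and
    // Windows would need a different command entirely.
    public static void main(String[] args) throws IOException, InterruptedException {
        Process process = new ProcessBuilder("uptime").start();
        try (BufferedReader reader =
                     new BufferedReader(new InputStreamReader(process.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
        process.waitFor();
    }
}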
You can get some basic information regarding the processor(s) using System.getenv().
You can also use Runtime.getRuntime() - See the response of R. Kettelerij for details.
Another option is to use JMX. The MemoryMXBean for example provides some information regarding the RAM usage (heap and non-heap).
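A small sketch combining these (the NUMBER_OF_PROCESSORS variable is Windows-specific, and the class name is made up):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class MemoryInfo {
    public static void main(String[] args) {
        // On Windows, the NUMBER_OF_PROCESSORS environment variable is one
        // (platform-specific) way to see the processor count; null elsewhere
        System.out.println("NUMBER_OF_PROCESSORS = " + System.getenv("NUMBER_OF_PROCESSORS"));

        // JMX view of the JVM's own memory usage
        MemoryMXBean memoryBean = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memoryBean.getHeapMemoryUsage();
        MemoryUsage nonHeap = memoryBean.getNonHeapMemoryUsage();
        System.out.println("Heap used/max:     " + heap.getUsed() + " / " + heap.getMax());
        System.out.println("Non-heap used/max: " + nonHeap.getUsed() + " / " + nonHeap.getMax());
    }
}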
To anyone looking for a library that is still being maintained as of 2023, check out OSHI.
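Assuming a recent OSHI release (the API has shifted a bit between major versions), a basic usage sketch looks roughly like this:

import oshi.SystemInfo;
import oshi.hardware.CentralProcessor;
import oshi.hardware.GlobalMemory;

public class OshiExample {
    public static void main(String[] args) throws InterruptedException {
        SystemInfo systemInfo = new SystemInfo();
        CentralProcessor processor = systemInfo.getHardware().getProcessor();
        GlobalMemory memory = systemInfo.getHardware().getMemory();

        // CPU load is computed between two snapshots of the tick counters
        long[] previousTicks = processor.getSystemCpuLoadTicks();
        Thread.sleep(1000);
        double cpuLoad = processor.getSystemCpuLoadBetweenTicks(previousTicks);

        System.out.printf("System CPU load: %.1f%%%n", cpuLoad * 100);
        System.out.println("Memory available/total: "
                + memory.getAvailable() + " / " + memory.getTotal());
    }
}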
I've written a Java application, using JSP and servlets, that I would like to perform run-time tests on. I've never done this before and was just curious how to go about it.
What I'm interested in knowing, besides the actual timings, is how to find CPU, memory and I/O utilization when running the application. Your thoughts are appreciated.
Typically you wouldn't measure these from within the application, but by running another tool on the same host.
If you just want to see the impact on the host operating system, you can use a program like top (on *nix boxes), or good old Task Manager on Windows, to see the CPU/memory/IO utilisation of your Java process (typically the servlet container such as Tomcat).
If you want more detailed information on the actual Java process itself, you can connect JConsole or jvisualvm to get VM information (including memory and CPU) for the process itself. (With Java 6 you should be able to do this from the local machine without passing any parameters to the Java process at startup; for Java 5, or remote connections, you'll need to pass command-line arguments to the Java process to allow (remote) JMX connections.)
Finally, if you want really in-depth details of the resource usage, down to the performance of various methods (which it sounds like you're after), you'll need to use a profiler. There are several of these for Java - with YourKit and JProfiler being the biggest commercial ones (in my unqualified opinion). I believe that the NetBeans IDE also has a decent profiler built-in. The process for connecting these to your application would vary depending on the app itself, but these will all typically allow you to "drill down" into the CPU time to see which classes/methods took the most cycles to execute, and likewise to drill down into memory use to see which classes are taking up the most memory.
The standard way to monitor running Java applications these days is using JMX through JConsole.
If you're using a commercial application server like WebLogic or WebSphere, these have custom and powerful management consoles that provide the monitoring information you are looking for. The technology at the heart of these consoles is still JMX, so they can also be monitored and managed using the standard JConsole. This article shows how to do this for WebLogic.
I guess you need this info on the client side (browser), so it's not a Java question.
If so, here is my answer:
I prefer using the Firebug and YSlow extensions. They give performance information, memory information and much more.
I combine them with the regular task manager to view more information about the browser.
BR