What are the differences between JVisualVM and Java Mission Control?

Other than the more 'advanced' GUI from Java Mission Control, how are they different?
At first glance they seem to offer very similar functionality (interpreting JMX data and memory/CPU profiling).
However, as they are both shipped with the JDK (I'm using JDK 1.7.0_51 SE), I'm assuming there are significant differences, otherwise they would be combined into a single solution, especially since shipping both increases the size of the JDK significantly.
Is Java Mission Control ultimately going to replace JVisualVM in the future?

One important point is that Mission Control is potentially not free to use in production environments. It is free for applications running in dev and QA environments, and Oracle is not currently enforcing the charges for production applications (as of Nov 2014). However, their executives have made it clear this may change in time.

The JMX Console part of Java Mission Control is just like any other JMX console. I'm of course biased, but in my opinion it's one of the more feature-rich consoles available. The more distinctive part of JMC is Java Flight Recorder.
JMC targets production systems and is very careful to avoid introducing unnecessary overhead. With Java Flight Recorder you can do production-time profiling and diagnostics with almost unmeasurable overhead.
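For example, here's roughly how a recording can be driven from the command line with jcmd (a sketch; it assumes an Oracle JDK 7u40+ target started with -XX:+UnlockCommercialFeatures -XX:+FlightRecorder, and the PID, recording name, and file paths are placeholders):
jcmd 1234 JFR.start name=rec1 duration=60s filename=/tmp/rec1.jfr
jcmd 1234 JFR.dump name=rec1 filename=/tmp/rec1-snapshot.jfr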

Command-line-based daemon for Java Mission Control? Alternatives?

I have been asked to investigate Oracle Java Mission Control, so that server-side Java applications may be monitored and actions taken (e.g., alerts emitted and logged, flight recordings saved) under certain conditions. Java Mission Control's trigger system, where you specify conditions and actions, meets our needs, but it seems to depend on the GUI application ("Oracle Java Mission Control") running, implying that triggers are not the monitored JMX server's responsibility. Is this the case? There are a number of servers usually accessed via terminal...
Is there a way of running Java Mission Control as a daemon, from a terminal session, unattended, while retaining and obeying any specified trigger rules (e.g., imported from an XML file)?
If not, are there competing tools with a similar trigger system that can fill the void?
Thanks! :)
Currently no, you can't run JMC without a GUI.
You are not the first person who wants to do this.
One option is to run JMC on another machine and have it connect to many servers, which of course requires running the remote JMX agent on each of them.
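For reference, exposing the remote JMX agent on a target JVM typically looks like this (an insecure, test-only configuration; the port and jar name are placeholders):
java -Dcom.sun.management.jmxremote.port=7091 \
     -Dcom.sun.management.jmxremote.authenticate=false \
     -Dcom.sun.management.jmxremote.ssl=false \
     -jar myapp.jar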
We have been discussing server-side triggers/rules, but AFAIK it is not planned for any JDK release.
It is possible to dump flight recordings from code, so you could write your own little agent that uses the DiagnosticCommand MBean to do this for another JVM on the same machine or remotely. I'm pretty sure this is how some people solve the same sort of problem. It is also possible to parse and analyze flight recordings in code. If you're interested in this approach, I'm sure there's some sample code around; of course, it's more work than if JMC could run as a daemon :/
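To illustrate that approach, here's a minimal sketch (the JMX port, the recording name "1", and the dump path are assumptions; the DiagnosticCommand MBean exposes jcmd's JFR.dump command as a jfrDump operation taking a String[] of key=value arguments):
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class JfrDumper {
    public static void main(String[] args) throws Exception {
        // Connect to the target JVM's remote JMX agent (port is a placeholder).
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:7091/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection server = connector.getMBeanServerConnection();
            ObjectName diagCmd =
                    new ObjectName("com.sun.management:type=DiagnosticCommand");
            // Invoke the equivalent of "jcmd <pid> JFR.dump name=1 filename=...".
            Object result = server.invoke(diagCmd, "jfrDump",
                    new Object[] { new String[] { "name=1", "filename=/tmp/dump.jfr" } },
                    new String[] { String[].class.getName() });
            System.out.println(result);
        } finally {
            connector.close();
        }
    }
}
From there it's a small step to run this on a schedule, or to wire it to your own trigger conditions.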
You should probably have a look at an APM tool instead of monitoring with JMC. The product is extremely weak, introduces a lot of overhead (making it unsuitable for production), and creates a lot of issues. There are also developer-focused tools available out there.
APM: AppDynamics (deepest of the bunch), New Relic, Ruxit
Java developer tools: Takipi, FusionReactor, Javosize

How to setup development environment for Java HotSpot VM?

What is the best way to understand the Java HotSpot VM? And if I want to make modifications to the source code and add my own features, what would be the best development environment (does ctags work well with the large code base, or do I need a full-blown IDE)?
I doubt that you would want to dive into the HotSpot code-base... I'm copying parts of my answer from this question:
Which JVM to choose for GC hacking?
I think the Maxine Research VM from Oracle Labs would be a good starting point. Here's a quote from the first page of their wiki:
Project Overview
In this era of modern, managed languages we demand ever more from our virtual machines: better performance, more scalability, and support for the latest new languages. Research and experimentation is essential but no longer practical in the context of mature, complex, production VMs written in multiple languages.
The Maxine VM is a next generation platform that establishes a new standard of productivity in this area of research. It is written entirely in Java, completely compatible with modern Java IDEs and the standard JDK, features a modular architecture that permits alternate implementations of subsystems such as GC and compilation to be plugged in, and is accompanied by a dedicated development tool (the Maxine Inspector) for debugging and visualizing nearly every aspect of the VM's runtime state.
Here's an excellent video demonstrating its memory monitoring utilities:
Introduction to the Maxine Inspector

Which JVM to choose for GC hacking?

I have a design for a GC algorithm that I would like to implement for a JVM, to allow benchmarking.
Does anyone have any experience as to which implementation would allow easy hacking, but which still has a built-in GC that would make for a meaningful comparison?
Edited: I want a JVM that has garbage collection, as I want to collect stats using it, then rip out its GC, put my own in, and then compare. I want it to have a good GC, as otherwise the comparison is meaningless, but I want something with code that is not too difficult to work with (HotSpot has a lot of assembler, making the task more difficult).
I think that the Maxine Research VM from Oracle Labs would be a perfect match for your needs.
Quote from the first page of their wiki:
Project Overview
In this era of modern, managed languages we demand ever more from our virtual machines: better performance, more scalability, and support for the latest new languages. Research and experimentation is essential but no longer practical in the context of mature, complex, production VMs written in multiple languages.
The Maxine VM is a next generation platform that establishes a new standard of productivity in this area of research. It is written entirely in Java, completely compatible with modern Java IDEs and the standard JDK, features a modular architecture that permits alternate implementations of subsystems such as GC and compilation to be plugged in, and is accompanied by a dedicated development tool (the Maxine Inspector) for debugging and visualizing nearly every aspect of the VM's runtime state.
Here's an excellent video demonstrating its memory monitoring utilities:
Introduction to the Maxine Inspector
I'm not aware of any that don't have a built-in GC; it wouldn't be much of a Java without one. Why not start with OpenJDK or Harmony?
Maybe you don't need a full JVM; a simpler virtual machine might be sufficient for testing your algorithm. Unless you are obliged to use a JVM, you can use Apache Harmony, or I would recommend another VM that was created as part of a PhD thesis, called VMKit. You can take a look at it and browse the source.

Performance Cost of Profiling a Web-Application in Production

I am attempting to solve performance issues with a large and complex Tomcat Java web application. The biggest issue at the moment is that, from time to time, the memory usage spikes and the application becomes unresponsive. I've fixed everything I can fix with log profilers and Bayesian analysis of the log files. I'm considering running a profiler on the production Tomcat server.
A Note to the Reader with Gentle Sensitivities:
I understand that some may find the very notion of profiling a production app offensive. Please be assured that I have exhausted most of the other options. The reason I am considering this is that I do not have the resources to completely duplicate our production setup on my test server, and I have been unable to cause the failures of interest on my test server.
Questions:
I am looking for answers that either work for a Java web application running on Tomcat, or answer this question in a language-agnostic way.
What are the performance costs of profiling?
Any other reasons why it is a bad idea to remotely connect and profile a web application in production (strange failure modes, security issues, etc)?
How much does profiling affect the memory footprint?
Specifically, are there Java profiling tools that have very low performance costs?
Any Java profiling tools designed for profiling web applications?
Does anyone have benchmarks on the performance costs of profiling with VisualVM?
What size applications and datasets can VisualVM scale to?
OProfile and its ancestor DCPI were developed for profiling production systems. The overhead for these is very low, and they profile your full system, including the kernel, so you can find performance problems in the VM and in the kernel and libraries.
To answer your questions:
Overhead: These are sampling profilers; that is, they generate timer or performance-counter interrupts at some regular interval and look at what code is currently executing. They use that to build a histogram of where you spend your time, and the overhead is very low (1-8% is what they claim) for reasonable sampling intervals.
Take a look at this graph of sampling frequency vs. overhead for OProfile. You can tune the sampling frequency for lower overhead if the defaults are not to your liking.
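The sampling idea itself is simple enough to sketch in plain Java. This toy, in-process version is nothing like OProfile (which samples the whole system at the OS level), but it shows how a histogram of hot frames gets built; the class name and the 10 ms interval are made up:
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class MiniSampler {
    private final ConcurrentHashMap<String, Integer> histogram = new ConcurrentHashMap<>();
    private final ScheduledExecutorService timer =
            Executors.newSingleThreadScheduledExecutor();

    public void start() {
        // Every 10 ms, record the top stack frame of each live thread.
        timer.scheduleAtFixedRate(() -> {
            for (Map.Entry<Thread, StackTraceElement[]> e
                    : Thread.getAllStackTraces().entrySet()) {
                StackTraceElement[] stack = e.getValue();
                if (stack.length > 0) {
                    histogram.merge(stack[0].toString(), 1, Integer::sum);
                }
            }
        }, 0, 10, TimeUnit.MILLISECONDS);
    }

    public void stop() {
        timer.shutdown();
        // The most frequently sampled frames approximate where time was spent.
        histogram.entrySet().stream()
                .sorted(Map.Entry.<String, Integer>comparingByValue().reversed())
                .limit(10)
                .forEach(e -> System.out.println(e.getValue() + "  " + e.getKey()));
    }
}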
Usage in production: The only caveat to using OProfile is that you'll need to install it on your production machine. I believe there's kernel support in Red Hat since RHEL3, and I'm pretty sure other distributions support it.
Memory: I'm not sure what the exact memory footprint of OProfile is, but I believe it keeps relatively small buffers around and dumps them to log files occasionally.
Java: OProfile includes profiling agents that support Java and that are aware of code running in JITs. So you'll be able to see Java calls, not just the C calls in the interpreter and JIT.
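If I remember correctly, that support comes as a JVMTI agent library shipped with OProfile, loaded when launching the JVM (the library path varies by distribution, and the jar name is a placeholder):
java -agentpath:/usr/lib/oprofile/libjvmti_oprofile.so -jar myapp.jar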
Web Apps: OProfile is a system-level profiler, so it's not aware of things like sessions, transactions, etc. that a web app would have.
That said, it is a full-system profiler, so if your performance problem is caused by bad interactions between the OS and the JIT, or if it's in some third-party library, you'll be able to see that, because OProfile profiles the kernel and libraries. This is an advantage for production systems, as you can catch problems that are due to misconfigurations or particulars of the production environment that might not exist in your test environment.
VisualVM: Not sure about this one, as I have no experience with VisualVM.
Here's a tutorial on using OProfile to find performance bottlenecks.
I've used YourKit to profile apps in a high-load production environment, and while there was certainly an impact, it was easily an acceptable one. YourKit makes a big deal of being able to do this in a non-invasive manner, such as selectively turning off certain profiling features that are more expensive (it's a sliding scale, really).
My favourite aspect of it is that you can run the VM with the YourKit agent running, and it has zero performance impact. It's only when you connect the GUI and start profiling that it has an effect.
There is nothing wrong with profiling production apps. If you work on distributed applications, there are times when an OutOfMemoryError occurs under a low-probability scenario that is very difficult to reproduce in a dev/stage/UAT environment.
You can try using custom profilers, but if you are in a hurry and plugging in or setting up a profiler on a production box would take time, you can also use the JVM itself to take a memory dump (the JVM's memory dump also gives you a thread dump).
You can activate automatic heap-dump generation on the JVM command line by using the following option:
-XX:+HeapDumpOnOutOfMemoryError
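For instance (the dump directory and jar name are placeholders), you can also point the dump at a disk with enough space:
java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/dumps -jar myapp.jar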
The Eclipse Memory Analyzer project has a very powerful feature called “group by value”, which makes it possible to build an object query and regroup the instances by a field value. This is useful when you have a lot of instances containing a smaller set of possible values, and you want to see which values are used the most. This has really helped me understand some complex memory dumps, so I recommend you try it out.
You may also consider using the modern HotSpot JVM's tooling: Java Flight Recorder and Java Mission Control. It is a set of tools that allows you to collect low-level runtime information with a CPU overhead of about 5% (I cannot prove that statement myself; it comes from the Oracle engineer who presented the feature and live demo).
You can use this tool as long as your application is running on JDK 7u40 or later. To enable the runtime info collection, you need to start the JVM with particular flags:
By default, JFR is disabled in the JVM. To enable JFR, you must launch your Java application with the -XX:+FlightRecorder option. Because JFR is a commercial feature, available only in the commercial packages based on Java Platform, Standard Edition (Oracle Java SE Advanced and Oracle Java SE Suite), you also have to enable commercial features using the -XX:+UnlockCommercialFeatures options.
(Quoted from http://docs.oracle.com/javase/8/docs/technotes/guides/jfr/about.html#sthref7)
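Putting that together, a launch command might look like this (the recording options and jar name are placeholders):
java -XX:+UnlockCommercialFeatures -XX:+FlightRecorder \
     -XX:StartFlightRecording=duration=60s,filename=/tmp/startup.jfr \
     -jar myapp.jar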
I added this answer because this is a viable option for profiling in production, IMO.
There is also an Eclipse plugin that supports JFR and JMC and is capable of displaying the information in a user-friendly way.
The tools have improved vastly over the years. These days, most people with needs like these use a tool that hooks into Java's instrumentation API instead of the profiling API. Surely there are more examples, but New Relic and AppDynamics come to mind. Instrumentation-based solutions usually run as an agent in the JVM and constantly collect data. They report the data at a higher level (business transaction, web transaction, database transaction) than the old profiling approach and allow you to dig deeper (down to the method or line) if necessary. You can even set up monitoring and alerts, so you can track and alert on metrics like page load times and performance against SLAs. With these great tools, you really should have no reason to run a profiler in production any longer. The cost of running them is negligible.
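To make "hooks into Java's instrumentation API" concrete, here is a skeletal sketch of such an agent's entry point (the class and jar names are hypothetical; real APM agents do substantial bytecode rewriting where this one does nothing):
import java.lang.instrument.Instrumentation;

// Packaged in a jar whose manifest declares "Premain-Class: TimingAgent"
// and loaded with: java -javaagent:timing-agent.jar -jar myapp.jar
public class TimingAgent {
    public static void premain(String args, Instrumentation inst) {
        inst.addTransformer((loader, className, cls, domain, bytes) -> {
            // A real agent would return rewritten bytecode (e.g. produced
            // with ASM) for the classes it wants to time; returning null
            // leaves the class unchanged.
            return null;
        });
    }
}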

Java on mainframes

I work for a large corporation that runs a lot of x86 based servers on which we run JVMs.
We have experimented successfully with VMWare ESX to get better usage out of our data center. But these still consume a lot of power per processing unit.
I had a mad idea that we should resurrect mainframes, on which we could host either lots of JVMs or virtual machines.
Has anyone tried this? Are there any good cost-benefits?
Do you lose flexibility? E.g., we have mainframes in other parts of the company, but they seem to have much more rigid usage of the machines: lots of change control, long lead times, etc.
IBM makes a special Java co-processor that you should seriously consider. I would not run Java on the general engines, as this may increase MPU charges for licensed software.
All this assumes you're talking about Java on z/OS and not running Linux VMs on the mainframe to take advantage of the cost savings that come with fewer machines.
My thoughts on virtualization are at the end of this, and it's probably the route you want to look at, but I'll start out with z/OS, since it's what mainframes are traditionally associated with and what I have familiarity with. I have some experience with mainframe Java.
The short answer is, it depends, but probably not. What exactly are your applications? The mainframe is a difficult environment compared to x86 servers. If you're running I/O-intensive workloads under something like Websphere, it might be worth it, assuming your mainframe is underutilized.
In my experience, Java is horribly slow on a mainframe, but that's because the system I used was set up for developer flexibility rather than performance. That just goes to show that performance tuning on the mainframe is usually much more complicated than on an average server, since mainframes will be running many more workloads than a generic x86 server.
Remember that the mainframe is designed primarily for I/O throughput and can outperform any normal x86 server at that. It was not designed to do a lot of computationally intensive calculations, so it won't outperform a small cluster of x86 servers if you're doing a lot of math.
The change controls on mainframes are there for a good reason: if one x86 server has a problem, you reboot it; if a mainframe has a problem, every second that it's down is costing the company money. You also have to take into account any native code your apps depend on, or third-party libraries that may use native code. All that code would have to be ported.
Configuration of a mainframe also takes a lot longer on average than on an x86 server. I would suggest that, if you want to seriously look into this, you make a better business case than power savings, such as tight integration with current business apps, and start out small with either a proof of concept or a new application: one that is not business critical and that can be implemented to take advantage of the mainframe's strengths.
IBM mainframes can also run Linux, either natively or in a virtualized environment similar to VMWare. Unless your company is the exception to the rule, your Linux instances would run as virtual machines. I haven't had much experience with this, but if your app has no native-code dependencies and runs under Linux, it would probably work on a mainframe running Linux. For more info about Linux on mainframes, see this link.
We have extensive experience running Java under Windows, Linux, and on IBM System i (or iSeries, or AS/400, depending on IBM's mood that year) minicomputers. In my opinion, the minicomputer platform delivers much less bang for your buck than modern multi-core x86 CPUs.
Note that Java benefits more readily from having multiple cores available than most software today, because of its inherently multithreaded nature; this would be even more true as you run multiple JVMs.
That said, you will typically be capable of getting many more CPU cores available with better bandwidth to access memory on a mini or mainframe, and better throughput on disk subsystems (overall) so these systems may very well scale much better as you toss more JVMs on them.
IBM allows this. Some of their mainframes can hold Java accelerator processors that run the bytecode natively for more performance. They also have DB2 accelerators, and possibly some for XML operations.
I've never gotten to play with any of them, but I'd sure love to.
Though I've been in the industry since 1975, I'm no longer sure what a "mainframe" is. My current development machine has four 3GHz processors, 8GB of RAM, and 750GB of disk space (RAID 1, so it's really double that), and two 19-inch flatscreen monitors.
That's because I'm there on a contract. The employees all have much more powerful boxes than mine.
I understand that the server machines, especially the database servers, are much faster.
Mainframe?
Depending on your workload, this is worth looking at!
There are a bewildering number of options available to you just using the IBM hardware:
It's definitely worth considering the add-on Java processors. (These are actually not that different from the standard CPUs; it's just that they are restricted to Java JVM workloads and, most importantly, are excluded from CPU-based software license pricing.)
You can run multiple Linux VMs, each running their own Java app.
You can run multiple native VMs running their minimalist operating system (it used to be called DOS, but they change the name every couple of years). The software licenses are cheaper than the main OS, but it has very limited functionality, which turns out to be an advantage if you are running self-contained applications.
You can run in the monster z/OS environment, either:
a. within USS (Unix System Services), which is pretty much a full UNIX OS running inside the parent z/OS;
b. running your Java app in its own started task (== a Unix daemon); or
c. running your app inside CICS. (Probably not, as you would need to use the CICS/Java API where you would normally use Servlet/J2EE APIs, so your app would require a rewrite.)
