High RAM usage and the same process showing multiple times? - java

I made a media player in Android Studio which uses a Service to control playback.
Everything works fine and I don't notice any lag, even though logcat says 123 frames were skipped during startup.
I read somewhere that this message can be ignored as long as it stays below roughly 300 skipped frames, but now I'm not so sure anymore, because I have since read that even one skipped frame is too much.
I also compared the RAM usage with an MP3 player from the store, and I noticed that most media players stay below 10 MB of memory.
Mine uses almost 50 MB and I have no idea why, and if you look at the details you can see several processes with the same name and a lot of 'sandboxed_processes'.
So my question is whether it's OK that my app consumes almost 50 MB of memory, and what those 'sandboxed_processes' mean.

Question 1:
It depends on how many resources your application uses and needs. You should take memory usage seriously and use as little as possible. The garbage collector will reclaim allocated memory for you, but you should still think about the lifespan of every object you create, and about its design, so as to keep your data structures small. Last but not least, Android Studio lets you profile the memory allocation of your app, so use it :).
Question 2:
Application Sandbox, from the official documentation:
The Android platform takes advantage of the Linux user-based protection to identify and isolate app resources. This isolates apps from each other and protects apps and the system from malicious apps. To do this, Android assigns a unique user ID (UID) to each Android application and runs it in its own process.
Android uses this UID to set up a kernel-level Application Sandbox. The kernel enforces security between apps and the system at the process level through standard Linux facilities, such as user and group IDs that are assigned to apps. By default, apps can't interact with each other and have limited access to the operating system. For example, if application A tries to do something malicious, such as read application B's data or dial the phone without permission (which is a separate application), then the operating system protects against this behavior because application A does not have the appropriate user privileges. The sandbox is simple, auditable, and based on decades-old UNIX-style user separation of processes and file permissions.
Because the Application Sandbox is in the kernel, this security model extends to native code and to operating system applications. All of the software above the kernel, such as operating system libraries, application framework, application runtime, and all applications, run within the Application Sandbox. On some platforms, developers are constrained to a specific development framework, set of APIs, or language in order to enforce security. On Android, there are no restrictions on how an application can be written that are required to enforce security; in this respect, native code is as sandboxed as interpreted code.

Related

What do you monitor with JMX in a Java application?

This question is not about how JMX works or what JMX does. As far as I know, with JMX we can get OS-level metrics and JVM-specific metrics (such as garbage collection time and frequency, heap utilisation, etc.).
My question is: what aspects of a Java application (internal metrics) can be monitored with JMX?
Monitor your app’s status at runtime
Java Management Extensions (JMX) is a standard way for you to embed code within your own app to report at runtime the state of your app’s operations.
This embedding of a reporting agent (a “probe”) within a larger piece of software is known as “instrumenting” your code. JMX gives you a framework to surface those pieces of status information at runtime, so you need not invent that reporting-system plumbing yourself. At runtime, you, or your system administrator, can watch that information with any of a number of standard monitoring apps, sometimes known as “consoles” or “dashboards”.
JMX handles transporting the updates to the monitoring apps without you needing to do any additional programming. The monitoring app need not be local; it could instead be running remotely over the network. Which monitoring app your sysadmin chooses, and where they choose to run it, has no effect on the code in your app. JMX is a buffer, a layer of indirection, separating your compile-time code from these practical run-time configuration issues.
The purpose is to provide the equivalent of a control room, full of gauges and dials, but for your software.
The key advantage here is using standard protocols for reporting status, rather than you inventing your own protocols.
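For instance, here is a minimal sketch of instrumenting your own code with a standard MBean. The class names and the ObjectName com.example.myapp:type=PlaybackStats are made up for illustration; the registration call against the platform MBean server is the standard JMX API:

    import java.lang.management.ManagementFactory;
    import java.util.concurrent.atomic.AtomicLong;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;

    public class JmxProbeDemo {

        // Standard MBean convention: the interface must be named "<ImplClass>MBean".
        public interface PlaybackStatsMBean {
            long getTracksPlayed();   // exposed by JMX as the read-only attribute "TracksPlayed"
        }

        public static class PlaybackStats implements PlaybackStatsMBean {
            private final AtomicLong tracksPlayed = new AtomicLong();

            public void trackFinished() { tracksPlayed.incrementAndGet(); }

            @Override
            public long getTracksPlayed() { return tracksPlayed.get(); }
        }

        public static void main(String[] args) throws Exception {
            PlaybackStats stats = new PlaybackStats();

            // Register the probe with the JVM's built-in MBean server so any JMX
            // console (jconsole, VisualVM, ...) can read it at runtime.
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            server.registerMBean(stats, new ObjectName("com.example.myapp:type=PlaybackStats"));

            stats.trackFinished();
            Thread.sleep(Long.MAX_VALUE);   // keep the JVM alive so a console can attach
        }
    }

Once this is running, jconsole or VisualVM can attach to the JVM and display the TracksPlayed attribute without any further plumbing on your part.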
App servers
If your app is a web app, then your Jakarta EE server or web container such as Tomcat or Jetty may be instrumented with JMX. So you can monitor its operations. For example, you can see what user-sessions are currently open.
JVM
Some JVM implementations are themselves instrumented using JMX to report the status of various aspects of the JVM’s operations. As your Question mentioned, some of those reports may be on memory usage, garbage collector activity, etc. Your sysadmin’s monitoring app can watch both the JVM and your app, each reporting a stream of status updates.
Operating system
Your operating system may also be instrumented to report on its internal operations as well, though not likely using JMX. One powerful dynamic tracing framework for this purpose is DTrace, built into macOS, FreeBSD, and Solaris.
So your sysadmin may be watching all four sets of status information on her monitoring app: the OS, the JVM, the app server, and your app.
Read the Wikipedia page for basic info.
Read and write
JMX provides not only read-access to monitor current status, but also writing. Your chosen external monitoring app can be used to alter the state within your app in whatever way you choose in your programming. For example, you could change the size of thread pools or caches.
Continuing the control-room metaphor, you can think of read-access via JMX as watching the gauges on the control-room panels.
Think of write-access as turning the knobs, switches, and sliders on those panels.
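A hedged sketch of the write side: adding a setter to the MBean interface makes the attribute writable from a console such as jconsole. Again, all names here are illustrative:

    import java.lang.management.ManagementFactory;
    import javax.management.ObjectName;

    public class WritableAttributeDemo {

        // A getter/setter pair makes "CacheSize" show up as a *writable* attribute.
        public interface TuningMBean {
            int getCacheSize();
            void setCacheSize(int size);
        }

        public static class Tuning implements TuningMBean {
            private volatile int cacheSize = 100;

            @Override public int getCacheSize() { return cacheSize; }

            @Override public void setCacheSize(int size) {
                cacheSize = size;   // react here: resize a cache, a thread pool, etc.
            }
        }

        public static void main(String[] args) throws Exception {
            ManagementFactory.getPlatformMBeanServer()
                    .registerMBean(new Tuning(), new ObjectName("com.example.myapp:type=Tuning"));
            Thread.sleep(Long.MAX_VALUE);   // change CacheSize from jconsole while this runs
        }
    }

Changing CacheSize in the console calls your setter, so your code can resize a cache or thread pool however it sees fit.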

Gather profiling statistics from desktop client applications

I have numerous client instances of a desktop application.
Some users of this application encounter performance problems with specific reproduction steps and their private execution context (e.g., say, some private kitten photos that they do not want to share with anybody).
I would like to minimize the amount of communication with users and still reproduce their problems successfully in my development environment.
I cannot use their execution context because of legal reasons.
So the only option I see here is to gather statistics of application usage (e.g. method calls, CPU load).
Ideally I would like to keep things simple for the users and just ask them to enable/disable statistics gathering in the application when they see a problem. Everything else (capturing customized statistics, transferring the statistics to support) would be done in the background.
This looks like a rather common need.
Are there any solutions that can help achieve the described behaviour?
JProfiler allows you to distribute the profiling agent at no cost and operate it in offline mode. The profiling agent is activated by adding a special VM parameter to the invocation of the JVM (-agentpath:...).
Then you can use the Controller class to record data and save snapshots to disk. The start/stop button for recording statistics in your desktop application would call these methods.
If the application is obfuscated, JProfiler can deobfuscate the snapshot when you open it.
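A rough sketch of what those start/stop handlers might look like, using JProfiler's offline-profiling Controller API (verify the exact method names against the JProfiler version you ship with):

    import java.io.File;
    import com.jprofiler.api.agent.Controller;

    public class ProfilingToggle {

        // Called when the user enables "statistics gathering" in the UI.
        public static void startGathering() {
            Controller.startCPURecording(true);     // true = discard previously recorded CPU data
            Controller.startAllocRecording(true);   // also record allocations
        }

        // Called when the user disables it again; writes a snapshot you can open in JProfiler.
        public static void stopGathering(File snapshotFile) {
            Controller.stopAllocRecording();
            Controller.stopCPURecording();
            Controller.saveSnapshot(snapshotFile);
        }
    }

Note that these calls assume the JVM was started with the -agentpath parameter mentioned above.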
To set this all up, create a locally launched session in JProfiler and then choose
Session -> Conversion Wizards -> Convert Application Session To Redistributed Session
Disclaimer: My company develops JProfiler.
Distributed tracing is really what I needed.
There is a Dapper project from Google.
And Zipkin from Twitter with many integrations including Spring via Sleuth.

Workaround for handling expectable java.lang.OutOfMemoryError: Java heap space

I am working on a Java Web Application based on Java 6/Tomcat 6.0. It's a web based document management system. The customers may upload any kind of file to that web application. After uploading a file a new Thread is spawned, in which the uploaded file is analyzed. The analysis is done using a third party library.
This third-party library works fine in about 90% of the analysis jobs, but sometimes (depending on the uploaded file) the logic starts to use all remaining memory, leading to an OutOfMemoryError.
As the whole application runs in a single JVM, the OOM error not only affects the analysis jobs, it also has an impact on other features. In the worst case, the application crashes completely or is left in an inconsistent state.
I am now looking for a rather quick (but safe) way to handle those OOM errors. Replacing the library is currently not an option (that's why I have neither mentioned the name of the library nor what kind of analysis is done). Does anybody have an idea of what could be done to work around this error?
I've been thinking about launching a new process (java.lang.ProcessBuilder) to get a separate JVM. If the third-party lib causes an OOM error there, it would not affect the web application. On the other hand, this would take additional effort to synchronize the new process with the analysis part of the web application. Does anybody have experience with such a setup (especially with regard to its stability)?
Some more information:
1) The analysis part can be summarized as a kind of text extraction. The module receives a file reference as input and writes the analysis result into a text file. The resulting text file is further processed within the web application's business logic. Currently the workflow is synchronous: the business logic waits for the third-party lib to complete its job. There is no queuing or other asynchronous approach.
2) I am quite sure that the third-party library causes the OOM error. I've tested the analysis part in isolation with different files of different sizes. The file that causes the OOM error is quite small (about 4 MB). I have done further tests with that particular file. With a JVM with 256 MB of heap, the analysis crashes due to the OOM error. The same test in a JVM with 512 MB of heap passes. However, increasing the heap size only helps for a short period of time, as a larger test file again causes the test to fail with an OOM error.
3) A limit on the size of uploaded files is in place, but of course you cannot set a limit of 4 MB per file. The same goes for the OS and architecture: the system has to work on both 32- and 64-bit systems (Windows and Linux).
It depends on both the client and the server as well as the design of the web app. You need to answer a few questions:
what is supposed to happen as a result of the analysis and when is it supposed to happen?
Does the client wait for the result of the analysis?
What is returned to the client?
You also need to determine the nature of the OOM.
It is possible that you might want to handle the file upload and the file analysis separately. For instance, your webapp can upload the file to somewhere in the file system and you can defer the analysis part to a web service, which would be passed a reference to the file location. The webservice may or may not be called asynchronously, depending on how and when the client that uploaded the file needs notification in the case of a problem in the analysis.
All of these factors go into your determination.
Other considerations: what JVM are you using? What is the OS, and how is it configured in terms of system memory? Is the JVM 32- or 64-bit? What is the maximum file size allowed on upload? What garbage collectors have you tried?
It is possible that you can solve this problem from an infrastructure perspective as opposed to changing the code: limiting the maximum size of the file upload, moving from 32 to 64 bit, changing the garbage collector, upgrading libraries after determining whether or not there is a bug or memory leak in one of them, etc.
One other glaring red flag: you say "a thread is spawned". While this sort of thing is possible, it is often frowned upon in the JEE world, because spawning threads yourself can interfere with how the container manages resources. Make sure you are not causing the issue yourself: try loading a file that is known to cause problems (if that can be ascertained) independently in a test environment. This will help you determine whether the problem is the third-party library or your design.
Why not have a (possibly clustered) application per third-party lib that handles the file analysis? Those applications are called remotely (possibly asynchronously) from your main application. They are passed a URL which points to the file they should analyze, and they return their analysis results.
When a file upload completes, the analysis job is put into a queue. When an analysis application comes up again after it crashed, it resumes consuming messages from the queue.
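For the in-process variant suggested in the question (a child JVM started with ProcessBuilder and its own small heap), a minimal sketch might look like this; analysis.jar and com.example.AnalysisMain are placeholders for a thin wrapper around the third-party library:

    import java.io.BufferedReader;
    import java.io.File;
    import java.io.InputStreamReader;

    public class AnalysisLauncher {

        // Runs the analysis in a child JVM with its own small heap, so an
        // OutOfMemoryError inside the third-party library cannot bring down
        // the web application itself.
        public static boolean analyze(File input, File output) throws Exception {
            ProcessBuilder pb = new ProcessBuilder(
                    "java", "-Xmx256m",
                    "-cp", "analysis.jar",
                    "com.example.AnalysisMain",
                    input.getAbsolutePath(), output.getAbsolutePath());
            pb.redirectErrorStream(true);
            Process p = pb.start();

            // Drain the child's output so it cannot block on a full pipe buffer.
            BufferedReader reader = new BufferedReader(new InputStreamReader(p.getInputStream()));
            while (reader.readLine() != null) {
                // log or discard
            }

            int exitCode = p.waitFor();   // add a timeout / watchdog in real code
            return exitCode == 0;         // non-zero (e.g. after an OOM) means the analysis failed
        }
    }

If the library blows up, the child JVM dies with a non-zero exit code (or can be killed by a watchdog) and the web application's own heap is untouched.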

Why have one JVM per application?

I read that each application runs in its own JVM. Why is that so? Why not have one JVM run two or more apps?
I read an SO post, but could not follow the answers there.
Is there one JVM per Java application?
(I assume you are talking about applications launched via a public static void main(String[]) method ...)
In theory you can run multiple applications in a JVM. In practice, they can interfere with each other in various ways. For example:
The JVM has one set of System.in/out/err, one default encoding, one default locale, one set of system properties, and so on. If one application changes these, it affects all applications.
Any application that calls System.exit() will effectively kill all applications.
If one application goes wild, and consumes too much CPU or memory it will affect the other applications too.
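Here is a minimal sketch of that kind of interference: two "applications" run as threads in one JVM, one changes JVM-wide state, and the other observes the change (class and property names are made up):

    import java.util.Locale;

    public class SharedJvmState {
        public static void main(String[] args) throws Exception {
            // "Application A" changes JVM-wide state ...
            Thread appA = new Thread(() -> {
                Locale.setDefault(Locale.GERMANY);
                System.setProperty("app.mode", "debug");
            });

            // ... and "application B", sharing the same JVM, sees those changes.
            Thread appB = new Thread(() -> {
                System.out.println("B sees locale:   " + Locale.getDefault());
                System.out.println("B sees app.mode: " + System.getProperty("app.mode"));
            });

            appA.start(); appA.join();
            appB.start(); appB.join();
        }
    }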
In short, there are lots of problems. People have tried hard to make this work, but they have never really succeeded. One example is the Echidna library, though that project has been quiet for ~10 years. JNode is another example, though they (actually we) "cheated" by hacking core Java classes (like java.lang.System) so that each application got what appeared to be independent versions of System.in/out/err, the System properties and so on1.
1 - This ("proclets") was supposed to be an interim hack, pending a proper solution using true "isolates". But isolates support stalled, primarily because the JNode architecture used a single address space with no obvious way to separate "system" and "user" stuff. So while we could create APIs that matched the isolate APIs, key isolate functionality (like cleanly killing an isolate) was virtually impossible to implement. Or at least, that was/is my view.
The reason to have one JVM per application is basically the same as the reason to have one OS process per application.
Here are a few reasons to have a process per application:
A bug in one application will not bring down, or corrupt data in, other applications sharing the same process.
System resources are accounted for per process, and hence per application.
Terminating a process automatically releases all associated resources (an application may not clean up after itself, so sharing processes may produce resource leaks).
Some applications, such as Chrome, go even further and create multiple processes to isolate different tabs and plugins.
Speaking of Java, there are a few more reasons not to share a JVM:
The heap-maintenance penalty is higher with a large heap size; multiple smaller independent heaps are easier to manage.
It is fairly hard to unload an "application" in a JVM (there are too many subtle ways for it to stay in memory even when it is not running).
The JVM has a lot of tuning options which you may want to tailor per application.
That said, there are several cases where a JVM is actually shared between applications:
Application servers and servlet containers (e.g. Tomcat). Server-side Java specs are designed with a shared server JVM and dynamic loading/unloading of applications in mind.
There are a few attempts to create a shared-JVM utility for CLI applications (e.g. Nailgun).
But in practice, even in server-side Java, it is usually better to use one JVM (or several) per application, for the reasons mentioned above.
For isolating execution contexts.
If one of the processes hangs, or fails, or its security is compromised, the others don't get affected.
I think having separate runtimes also helps GC, because each one has fewer references to handle than if everything ran together.
Besides, why would you run them all in one JVM?
Java application servers, like JBoss, are designed to run many applications in one JVM.

How to monitor exceptions or errors generated by other Java applications?

I want to find or develop an application that can run as a daemon and notify the administrator by email or SMS when the Java applications running on a host throw any exceptions or errors. I know JVMTI can achieve part of my goal, but it will impact the performance of the monitored applications (I don't know by how much; it would be acceptable if it's slight). Besides, it seems to be a troublesome job to develop a JVMTI agent, and I'm not sure what would happen if several applications running at the same time used the same agent. Is there any better solution? Thanks in advance.
One way would be to use a logging system like log4j that publishes all errors occurring on system A to a logging server on system B, from which you can monitor the errors that occurred. This isn't a completely generic solution, however, since only exceptions propagated to log4j (or any other logging system) would be handled - but it may be a good start.
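As a rough sketch of that idea with log4j 1.x (the host name and port are placeholders; the receiving side would run a socket-based log server such as log4j's SimpleSocketServer or an equivalent):

    import org.apache.log4j.Level;
    import org.apache.log4j.Logger;
    import org.apache.log4j.net.SocketAppender;

    public class RemoteErrorLogging {
        public static void main(String[] args) {
            // Forward ERROR (and above) events to a log server on system B.
            SocketAppender remote = new SocketAppender("logserver.example.com", 4560);
            remote.setThreshold(Level.ERROR);               // only errors leave this machine
            Logger.getRootLogger().addAppender(remote);

            Logger.getLogger(RemoteErrorLogging.class)
                  .error("Something went wrong", new RuntimeException("example"));
        }
    }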
The best solution is to have the Java application send its errors via email/SMS. The problem is that programs will generate exceptions and handle them correctly in normal operation; you only want particular exceptions.
Failing that, you could write a log reader which reads the logs of the application. This is tricky to get right, but it can be done.
An application can generate 1000+ exceptions per day and still be behaving normally, because the application knows how to handle those exceptions, e.g. every time a socket connection is closed an exception can be thrown.
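One hedged way to do the filtering from inside the application is a default uncaught-exception handler, so only exceptions the application did not handle itself trigger a notification; notifyAdmin here is a placeholder for your email/SMS code:

    public class CrashReporter {

        public static void install() {
            // Only exceptions the application did NOT handle itself end up here,
            // so the thousands of routine, handled exceptions never trigger an alert.
            Thread.setDefaultUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() {
                @Override
                public void uncaughtException(Thread t, Throwable e) {
                    notifyAdmin("Uncaught exception in thread " + t.getName(), e);
                }
            });
        }

        private static void notifyAdmin(String subject, Throwable e) {
            // Placeholder: send an email or SMS here (e.g. via JavaMail or an SMS gateway).
            e.printStackTrace();
        }
    }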
IMO, the best approach is to deploy an external monitoring system. This can:
monitor multiple applications
monitor infrastructure services
monitor network availability and machine accessibility,
monitor resources such as processor and file system usage.
Applications can be monitored in a variety of ways, including:
by processing log events,
by watching for application restarts,
by "pinging" the application's web apis to check service liveness, and
by using the application's JMX interfaces.
This information can be filtered and prioritized in an intelligent fashion, and critical events can be reported by whatever means is most appropriate.
You don't want individual applications sending emails, because they don't have sufficient information to do a decent job. Furthermore, putting the reporting logic into individual applications is likely to lead to inconsistent implementation, poor configurability, and so on.
There is a close alternative to JVMTI: JPDA. This infrastructure allows you to create a remote "debugger" (yes, that's what you're planning to do) using Java code, and connect it to the VM using either a local or a remote connection.
There will be, as with JVMTI, an overhead to program execution. However, as the Trace.java example shows, it's quite simple to both implement and connect to the target VM.
Finally, note that if you want to instrument code run by an application server (JBoss, GlassFish, Tomcat, you name it), there are various other means available.
I follow the pattern where every exception gets logged to a table.
Then an RSS feed selects from that table.
I subscribe to the RSS feed in MS Outlook at work and also on my Android phone with a program called NewsRob. NewsRob lets me set my phone to alert me when there is something new.
I blog about how to do this HERE. It is in .NET, but you get the idea.
As a related step I found a way to notify myself when something DIDN'T happen. That blog is HERE.
There are loads of applications out there that do what you are looking for in a way that does not impact performance. Have you had a look at Kibana/Elasticsearch, or at Splunk or Logscape for enterprise solutions (they also have free versions)?
I'm going to echo what has already been said and highlight what Java already provides and what you can do with an external monitoring system. Java already provides:
log4j - log ERROR, WARN and FATAL events, including exceptions, to a file
JMX - create custom application metrics; you also have access to the java.lang:* MBeans, which give you heap memory usage, garbage collection activity, thread counts, etc.
JVM GC logging - you can log all your garbage collection events to a file and watch for any long full GC pauses.
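For example, on a pre-Java-9 HotSpot JVM, GC logging can be enabled with flags along these lines (Java 9+ uses -Xlog:gc* instead; the paths and jar name are placeholders):

    java -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps \
         -Xloggc:/var/log/myapp-gc.log -jar myapp.jar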
An external monitoring system will allow you to set alerts triggered by different operational scenarios. You will also get visualisation of your system's performance through charts. I've used Logscape's Java app in the past to monitor 30 Java processes spread out over 3 hosts.
