I have a problem similar to my previous one, presented here.
This time I want to use a program written in C/C++ to track the execution of a Java program. As I stated before, the same code that tracks stdout printing for C/C++ programs by registering syscall 4 does not do it for Java. I assume this is because the execlp call I trace is only used to launch the JVM, and later the JVM's internal mechanisms create more processes that I do not track. I found this topic, which seems to be a partial solution: if I understand it correctly, every child will be traced. But that is a problem as well, because I want to track only the process that handles my application, and not all the others the JVM might create. Is there any way to find out which JVM thread/process handles my program and track only it?
To make it a bit easier, let's assume my Java program is single-threaded.
If you start the binary through your tracer app, all threads will be traced.
But if you attach to a running process, you won't automatically be attached to all of its threads. You have to attach to each of its threads using the thread IDs, which you can find listed e.g. in /proc/<pid>/task/.
Also, I suggest reading through strace's source code; I've learnt a lot from it. If you can use strace to successfully follow Java threads the way you want, you can lift the logic from it.
I have a situation where I want to instrument Java code to add function calls. The functions I add calls to might affect objects' status in the system, thus changing the state of the program. I am looking for a way to insert those calls but leave the program state unchanged.
I am looking for a method to store the state (an image?) of the heap and come back to it later, i.e. at the end of my instrumentation code. I tried tackling it with the idea of copying the current JVM, executing the instrumented code inside the copy (with the exact state of the program), and coming back to the original JVM when the instrumentation is done. I couldn't find any documentation on such a scenario, so I am wondering whether there is a better approach.
The state of a Java program is not only the heap. It also includes running threads, loaded classes, constant pools, caches, and many other VM structures.
Saving the state of a Java program is roughly the same as saving the state of an arbitrary OS process. fork is probably the closest way to achieve this, but it's still not an easy solution.
I have a Java process which has been running for over a week. In that process, I have been processing some data and storing intermediate results in a HashMap in memory.
Now I need to stop the process due to some bugs in the code. But if I kill the process, I lose the data in the HashMap and will have to reprocess it the next time I run the code.
Is there a way I can take a dump of the HashMap present in memory?
You can trigger a heap dump, which you can read with various tools such as profilers. Such a dump is very difficult to read, and I have never heard of anyone using one to restart a program.
Restarting a program is not something you can do as an afterthought; it needs to be designed into your application and thoroughly tested. One way people get around having to worry about this is to use a database, because databases are professionally designed and tested to be restarted without losing data.
You can use JConsole (which is included with the standard JDK) or VisualVM to attach to an already-running process and trigger a heap dump.
However, as Peter said, actually reading this dump is not going to be fun unless you can find a tool to assist you.
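If you can afford to build dump support into the application for next time, a heap dump can also be triggered from inside the process itself. A minimal sketch using the HotSpot diagnostic MXBean, assuming a Sun/Oracle JDK 6 or later; the file name and the "live objects only" flag are just example choices:

import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

public class HeapDumper {
    public static void main(String[] args) throws Exception {
        // Look up the HotSpot-specific diagnostic bean via its well-known ObjectName
        HotSpotDiagnosticMXBean diag = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        // Write an .hprof file; "true" means only live (reachable) objects are dumped
        diag.dumpHeap("heapdump.hprof", true);
        System.out.println("Heap written to heapdump.hprof");
    }
}

The resulting .hprof file can then be opened in VisualVM or a memory analyzer, but as noted above, turning it back into usable application state is a manual and painful exercise.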
I have a Java program where certain parts are computationally intensive.
I wish to implement that part (which essentially generates an image from some text data) in C/D.
(Multiple instances of the C program might be running at the same time).
Now, I want to be able to track the progress of the C/D program, so the Java code needs to read the status (progress, errors) of the C/D program somehow.
My idea is to use OS environment variables to store the status, something like "TIME_LEFT=2h10m42s".
Questions:
Is this a good idea, or is there something really bad about this design?
Are there alternatives (sockets, stdin/stdout, something else)?
EDIT: The Java side works as a front-end, so the C/D code should NOT include anything written specifically for Java. The C/D code is essentially a stand-alone program; Java (or something else) provides the GUI.
You cannot use environment variables for this, because the only way to communicate an environment variable to another program is to set it before you start the new process. So you can't run a C program that changes environment variables in a way your parent Java program can see.
Instead, write line-based status to stdout in your C (or D) program and read it in your Java program.
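For the Java side, a minimal sketch might look like this (the binary name imagegen, its argument, and the one-status-line-per-update convention are assumptions, not part of your program):

import java.io.BufferedReader;
import java.io.InputStreamReader;

public class ImageJobMonitor {
    public static void main(String[] args) throws Exception {
        // Launch the native generator; "./imagegen" and its argument are hypothetical
        ProcessBuilder pb = new ProcessBuilder("./imagegen", "input.txt");
        pb.redirectErrorStream(true);              // fold stderr into stdout
        Process proc = pb.start();

        BufferedReader in = new BufferedReader(
                new InputStreamReader(proc.getInputStream()));
        String line;
        while ((line = in.readLine()) != null) {
            // One status line per update, e.g. "PROGRESS 42" or "ERROR out of memory"
            System.out.println("status: " + line);
        }
        System.out.println("generator exited with code " + proc.waitFor());
    }
}

The C/D program only has to printf one line per update and fflush(stdout), so nothing Java-specific leaks into it, which fits the edit in the question.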
Using environment variables is a bad idea. Environment variables are only inherited by newly started processes; they aren't register-type variables that you can just write to and read from any process, so to speak :) You could use JNI and keep checking the time remaining (in ms) on the Java side, or have the C/D code report the time remaining to the Java code from within its loop (I prefer the other way, however).
Well, you can do it the other way around: have Java poll the status of the execution, e.g. every 5 seconds calling via JNI to get the status (progress) of the heavy computation.
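A rough sketch of that polling idea, assuming the C/D code is exposed through a hypothetical native method getProgressPercent() in a library named imagegen:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ProgressPoller {
    // Hypothetical native method backed by the C/D code via JNI
    private static native int getProgressPercent();

    static {
        System.loadLibrary("imagegen");   // assumed library name
    }

    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(new Runnable() {
            public void run() {
                System.out.println("progress: " + getProgressPercent() + "%");
            }
        }, 0, 5, TimeUnit.SECONDS);
    }
}

Note that this does require the C/D library to export a JNI entry point, which may conflict with the "no Java-specific code" requirement in the question.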
I agree with Chris Dennett that this is a bad idea. I would avoid JNI - it is a terrific way to introduce subtle bugs that crash your JVM.
I would implement this by creating a C/D HTTP server running on the local host. The server accepts a POST request to /image/ to start creating the image, which is the long-running part. That POST request returns immediately with a token. I would then GET /image/token, which returns either progress information or the image itself, depending on whether it is done. Your Java process can then poll the GET /image/token URL.
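A sketch of the Java polling side, assuming a hypothetical server on localhost:8080 with the endpoints described above and a plain-text response that starts with "DONE" once the image is ready:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class ImageServiceClient {
    public static void main(String[] args) throws Exception {
        // Start the job; the server is assumed to answer the POST with a job token
        HttpURLConnection post = (HttpURLConnection)
                new URL("http://localhost:8080/image/").openConnection();
        post.setRequestMethod("POST");
        post.setDoOutput(true);
        OutputStream out = post.getOutputStream();
        out.write("text data describing the image".getBytes("UTF-8"));
        out.close();
        String token = readBody(post);

        // Poll the job URL until the server reports completion
        while (true) {
            HttpURLConnection get = (HttpURLConnection)
                    new URL("http://localhost:8080/image/" + token).openConnection();
            String status = readBody(get);
            System.out.println("status: " + status);
            if (status.startsWith("DONE")) break;   // assumed response convention
            Thread.sleep(2000);
        }
    }

    private static String readBody(HttpURLConnection conn) throws Exception {
        BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"));
        StringBuilder sb = new StringBuilder();
        String line;
        while ((line = in.readLine()) != null) sb.append(line);
        in.close();
        return sb.toString();
    }
}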
Instead of using environment variables, use JNA. It is easier than JNI and a reliable way to communicate with the program. Another approach is to use a message queue such as ActiveMQ, which is open source and also has a C API; it will decouple the applications.
I'm chasing a production bug that's intermittent enough to be a real bastich to diagnose properly but frequent enough to be a legitimate nuisance for our customers. While I'm waiting for it to happen again on a machine set to spam the logfile with trace output, I'm trying to come up with a theory on what it could be.
Is there any way for competing file read/writes to create what amounts to a deadlock condition? For instance, let's say I have Thread A that occasionally writes to config.xml, and Thread B that occasionally reads from it. Is there a set of circumstances that would cause Thread B to prevent Thread A from proceeding?
My thanks in advance to anybody who helps with this theoretical fishing expedition.
Edit: To answer Pyrolistical's questions: the code isn't using FileLock, and is running on a WinXP machine. Not asked, but probably worth noting: The production machines are running Java 1.5.
Temporarily set up your production process to start with debugging support. Add this to however you start your Java program, or, say, to the Tomcat startup:
-Xdebug -Xrunjdwp:transport=dt_socket,address=8000,server=y,suspend=n
Then attach to it:
jdb -connect com.sun.jdi.SocketAttach:hostname=localhost,port=8000
And take a look at your stack(s).
FileLock is an inter-process locking mechanism. It does nothing within the same JVM, so that isn't it. I would look at your synchronizations, and specifically at making sure you always acquire multiple locks in the same order.
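For the lock-ordering point, here is a minimal illustration of the pattern to look for (unrelated to the file code itself; the lock names are just placeholders):

public class LockOrdering {
    private static final Object LOCK_A = new Object();
    private static final Object LOCK_B = new Object();

    // Deadlock-prone: one thread runs writerPath() (A then B) while another runs
    // readerPath() (B then A); each can end up holding the lock the other needs.
    static void writerPath() {
        synchronized (LOCK_A) {
            synchronized (LOCK_B) { /* write config.xml */ }
        }
    }

    static void readerPath() {
        // Fix: acquire LOCK_A before LOCK_B here as well, matching writerPath(),
        // and the circular wait becomes impossible.
        synchronized (LOCK_B) {
            synchronized (LOCK_A) { /* read config.xml */ }
        }
    }
}

If this is what is happening, a full thread dump (Ctrl-Break in the console on Windows) will usually report it as a Java-level deadlock.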
I've gotten some useful tips for chasing the underlying bug, but based on the responses I've gotten, it would seem the correct answer to the actual question is:
No.
Damn. That was anti-climactic.
I know this is old, but to add some clarity on a "No" answer (for those of us who need to know why):
Deadlocking happens when two (or more) distinct processes (or transactions) each hold something the other needs and try to acquire the rest in the opposite order. Both hang waiting for the other to complete an action that will never occur, since each is waiting on the other. This is most commonly seen with faulty database designs.
If I recall, Wikipedia has a good definition here: http://en.wikipedia.org/wiki/Deadlock
Simple file access should not create dependencies like this. A more common issue would be your resource being used by another process and unavailable to the one trying to access it.
I have a bash script that sequentially calls a Java program. It's a bit tricky, but mainly what I do is loop and execute the same Java program about 1500 times with different arguments.
My question is: when using Java 1.5 (the Sun VM), is a new instance of the JVM created each time I call the Java program? (I'm not sure that's the right vocabulary...)
Should I avoid this situation by introducing a level of indirection, i.e. building a list of all the parameters and then executing one Java program that takes these parameters and runs what was previously my entry point?
Or can I deal with the problem by configuring the JVM to stay resident, or something like that, and dynamically invoke my program?
hope this is clear....
thx...
You could save the parameters into a file and have the Java program process it without constant restarts. You could also pipe the parameters into the running Java app through the console, much like, for example, ls | grep java.
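A minimal sketch of the piping variant; OriginalMain here is a placeholder for whatever your existing entry-point class is:

import java.io.BufferedReader;
import java.io.InputStreamReader;

public class BatchRunner {
    public static void main(String[] args) throws Exception {
        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
        String line;
        while ((line = in.readLine()) != null) {
            line = line.trim();
            if (line.length() == 0) continue;
            // One parameter set per line, whitespace-separated,
            // handed unchanged to the old entry point (hypothetical class)
            OriginalMain.main(line.split("\\s+"));
        }
    }
}

The bash side then just echoes the 1500 parameter lines and pipes them into a single java BatchRunner process instead of launching 1500 JVMs.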
Edit: And for the first question: I doubt the Java runtime stays in memory deliberately, though most JRE files would probably remain in the disk cache anyway. On Windows there is a Java Quick Start service which keeps the JRE files around to reduce the startup time of Java programs. I don't know whether there is a similar thing for *nix.
Obviously, having all the parameters beforehand and running the program once would be the best solution. If you cannot do that for any reason, I have a very dirty solution: have your program open a port and listen on it for input, then simply pass the arguments to that port and have your program handle each set as if it were a new invocation.
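A sketch of that dirty-but-workable approach; the port number 9999 and the OriginalMain entry point are placeholders:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;

public class ParamServer {
    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(9999);   // assumed port
        while (true) {
            Socket client = server.accept();
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(client.getInputStream()));
            String line = in.readLine();                // one parameter set per connection
            client.close();
            if (line == null || line.trim().length() == 0) continue;
            // Hand the arguments to your existing (hypothetical) entry point
            OriginalMain.main(line.trim().split("\\s+"));
        }
    }
}

The bash loop can then send each parameter set with something like echo "$params" | nc localhost 9999 instead of starting a new JVM.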
JVM startup is notoriously slow, and it is certainly not intended to be done in a loop like that. Unfortunately, if you are passing command-line parameters to the Java program, the only way to avoid this is to modify the Java program itself to support an alternative form of interaction (the console, a port, or reading a file). Java Quick Start is the only (closest thing to a) solution if the Java program cannot be changed.
The real solution is to change the Java program. The most obvious change would be to have your loop write the parameters to a file, and then start the Java program once and have it read the file one line at a time. That works if the loop doesn't need the results from the Java program before producing the next set of parameters.
If it does, then it would really be necessary to understand that relationship to advise on an appropriate solution. The socket approach suggested by Savvas is certainly a general-purpose solution, but there may be better options, depending on what you need to accomplish.
You can use a launcher like the one in the answer to "Simultaneously run java programs run on same JVM?" to read input line by line and start your program's main() method.