Mechanism to generate RMI automatically from Abstract Syntax Tree? - java

I'm working on a project for my thesis in Java which requires automatic RMI generation from an Abstract Syntax Tree. I'm using RMI like this:
public int createProcess(CompilationUnit cu) {
    // Some code here
    return processid;
}
to generate RMI from the AST on each node. It automatically generates the interface file and all the Java files from the AST and puts all the methods into these files. I am able to execute the javac, rmic <remote-class> and rmiregistry commands using ProcessBuilder. But
how do I destroy and unbind the remote objects after process completion? Do I have to put this code at the end of each file where control exits?
public void exit() throws RemoteException
{
    try {
        // Unregister ourselves
        Naming.unbind(mServerName);
        // Unexport; this will also remove us from the RMI runtime
        UnicastRemoteObject.unexportObject(this, true);
    }
    catch (Exception e) {}
}
Do I have to execute rmiregistry after every remote method/class creation, or will it automatically add later remote methods/classes to the registry if it is already running (i.e. if ProcessBuilder is already executing the "rmiregistry" command)? For example, if nodeA creates a Process1 (RMI class) on nodeB and then executes it using commands via ProcessBuilder, rmiregistry will be running. Now, if nodeA wants to create another Process2 on nodeB, do I have to stop that instance of rmiregistry and rerun it, or is there no need because the registry will detect and add new bindings automatically?
Will all the RMI objects run on the same port? That is, if I create process1 and bind it to localhost/process1, and process2 to localhost/process2, can we access them via the same port?
I'm working with RMI for the first time, so I don't have any previous experience or knowledge.
Apologies, my question seemed unclear, so I tried to add more explanation by editing.
I am following this tutorial: Link

1 How to destroy and unbind the remote objects after process completion?
See 2, but I don't know why you want to do so. Just leave them in existence and bound to the Registry.
2 Do I have to put this code at the end of each file where control exits ?
Yes, if you want it to execute, otherwise no. Don't generate empty catch-blocks though.
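For example, the generated exit() could handle the failure instead of swallowing it. This is just a sketch based on the code in the question; the class and field names are placeholders for whatever your generator emits:
```
import java.net.MalformedURLException;
import java.rmi.Naming;
import java.rmi.NotBoundException;
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.server.UnicastRemoteObject;

// Hypothetical shape of one generated remote class (names are placeholders).
public class GeneratedProcess extends UnicastRemoteObject implements Remote {
    private final String mServerName;

    public GeneratedProcess(String serverName) throws RemoteException {
        this.mServerName = serverName;
    }

    public void exit() throws RemoteException {
        try {
            Naming.unbind(mServerName);                      // unregister ourselves
            UnicastRemoteObject.unexportObject(this, true);  // remove from the RMI runtime
        } catch (NotBoundException | MalformedURLException e) {
            // Handle (or at least log) the failure instead of using an empty catch block
            System.err.println("Cleanup failed for " + mServerName + ": " + e);
        }
    }
}
```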
3 Do I have to execute rmiregistry after every remote object creation?
No, you have to start it once, at the beginning of the containing process. The simplest way is via LocateRegistry.createRegistry().
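A minimal sketch of that startup step, assuming the generated remote classes extend UnicastRemoteObject (the interface and class below are stand-ins for your generated code):
```
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

public class NodeBootstrap {
    // Stand-in for one of your generated remote interfaces.
    public interface ProcessRemote extends Remote {
        int createProcess() throws RemoteException;
    }

    static class ProcessImpl extends UnicastRemoteObject implements ProcessRemote {
        ProcessImpl() throws RemoteException {}
        public int createProcess() { return 0; }
    }

    public static void main(String[] args) throws Exception {
        // Start exactly one registry in this JVM, at process startup (default port 1099).
        Registry registry = LocateRegistry.createRegistry(Registry.REGISTRY_PORT);

        // Bind any number of remote objects to that same running registry;
        // later bindings are picked up without restarting it.
        registry.rebind("process1", new ProcessImpl());
        registry.rebind("process2", new ProcessImpl());
    }
}
```
Both names are looked up through the same registry port (1099 here); the exported objects themselves answer on anonymous ports unless you pass a fixed port to the UnicastRemoteObject constructor.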

Related

Calling ps on Linux from Java

In Java, I start one new Process using Runtime.exec(), and this process in turn spawns several child processes.
I want to be able to kill all the processes, and have previously been trying process.destroy() and process.destroyForcibly() - but the docs say that destroyForcibly() just calls destroy() in the default implementation and destroy() may not kill all subprocesses (I've tried and it clearly doesn't kill the child processes).
I'm now trying a different approach, looking up the PID of the parent process using the method suggested here and then calling ps repeatedly to traverse the PIDs of child processes, then killing them all using kill. (It only needs to run on Linux).
I've managed the first bit - looking up the PID, and am trying the following command to call ps to get the child PIDs:
String command = "/bin/ps --ppid " + pid;
Process process = new ProcessBuilder(command).start();
process.waitFor();
Unfortunately the 2nd line above is throwing an IOException, with the following message: java.io.IOException: Cannot run program "/bin/ps --ppid 21886": error=2, No such file or directory
The command runs fine if I paste it straight into the terminal on Ubuntu 16.04.
Any ideas would be very much appreciated.
Thanks
Calling the command you wish to run this way is always destined to fail.
Since Process does not actually run a shell session, the command is handed over as-is to the underlying OS to run. It fails because the entire string, arguments included, is treated as the name of the program to execute (there is no file called "/bin/ps --ppid 21886"), hence the error you're getting.
Also, testing whether your command works by pasting it into a terminal is not a reliable check. A terminal session implies an active logged-in user, a shell that parses the command line, a configured PATH, and so on. None of that applies when running a command through Process.
Furthermore, you also need to account for cases where the Java application could be running under a different user, with a different set of permissions, paths, etc.
To fix this, you can do either of the following:
1) Invoke your ps command using the full path to it (still not sure if that would work)
2) Change the way you create the Process object to something like: p = new ProcessBuilder("bash", "-c", command).start();
The second option effectively runs a bash session, passing the ps command in as an argument, thus obtaining the desired result.
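For instance, a minimal sketch of the bash -c variant plus the simplest fix of passing each argument as its own string (the PID value is a placeholder; inheritIO() just forwards the child's output to the console):
```
import java.io.IOException;

public class PsChildren {
    public static void main(String[] args) throws IOException, InterruptedException {
        long pid = 12345; // placeholder: the parent PID you looked up

        // Each argument as its own string, so the OS sees a real program path
        // ("/bin/ps") plus separate arguments instead of one long file name.
        new ProcessBuilder("/bin/ps", "--ppid", String.valueOf(pid))
                .inheritIO()
                .start()
                .waitFor();

        // Or hand the whole line to a shell and let it do the parsing.
        new ProcessBuilder("bash", "-c", "ps --ppid " + pid)
                .inheritIO()
                .start()
                .waitFor();
    }
}
```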
Alternatively, Apache Commons Exec takes care of parsing a command line for you; from its tutorial: http://commons.apache.org/proper/commons-exec/tutorial.html
```
String line = "AcroRd32.exe /p /h " + file.getAbsolutePath();
CommandLine cmdLine = CommandLine.parse(line);
DefaultExecutor executor = new DefaultExecutor();
int exitValue = executor.execute(cmdLine);
```

Can you create a new JVM in a c++ function called from java using JNI?

My setup is that I have a .dll developed by me (A.dll) which, in the original application, is called from an external process, basically a .exe file that I do not have the source code for (B.exe). The purpose of A.dll is to communicate with a .jar file, which is also developed by me (C.jar). So in the application, the "communication flow" is as shown below:
B.exe -> A.dll -> (through JNI) -> C.jar
Now, what I want to do is add the calls going between A.dll and C.jar to my test suite in the development environment for C.jar. What I have so far is another .dll (D.dll) which mirrors all functions in A.dll, but with JNIEXPORT, and simply makes direct calls to the respective functions in A.dll. So the "communication flow" in this situation is as follows:
Unit test in C.jar development framework -> (through JNI) -> D.dll -> A.dll -> (through JNI) -> C.jar
At this point, a very simple function call that just prints something in C.jar works through the whole chain, all the way from the unit test call into C.jar. The problem, however, arises when I call the function in A.dll that creates a new JVM using CreateJavaVM(), which produces the following error:
Error occurred during initialization of VM
Unable to load native library: The specified procedure could not be found
So basically I'm wondering if it is actually possible to do this, or is it just simply impossible to call CreateJavaVM() when there is already a running JVM in the same process? I know that you can't call CreateJavaVM() several times within the same process, but in this situation it is only called once but a JVM already exists in the process - can you even have several JVMs running in the same process?
SOLUTION:
Thanks to @apangin's answer, the code snippet below solved my problem:
jsize nVMs = 0;
JavaVM** buffer;
jni_GetCreatedJavaVMs = (GetCreatedJavaVMs) GetProcAddress(
    GetModuleHandle(TEXT("jvm.dll")), "JNI_GetCreatedJavaVMs");
if (jni_GetCreatedJavaVMs == NULL) {
    // No JVM loaded in this process yet: create one as before
    CreateJavaVM(&jvm, (void **) &env, &args);
} else {
    jni_GetCreatedJavaVMs(NULL, 0, &nVMs);           // 1. just get the required array length
    buffer = new JavaVM*[nVMs];                      // 2. allocate and fetch the existing VMs
    jni_GetCreatedJavaVMs(buffer, nVMs, &nVMs);
    buffer[0]->GetEnv((void **) &env, jni_version);  // 3. get the environment of the first VM
    jvm = buffer[0];
}
The current JNI specification explicitly states that creating multiple VMs in a single process is not supported, and this is actually asserted in the HotSpot source code.
Even if your DLL calls JNI_CreateJavaVM only once, that does not mean it is the very first call within the whole process. In fact, JNI_CreateJavaVM is first called by java.exe or by your IDE's launcher (idea.exe, eclipse.exe, netbeans.exe, etc.).
Therefore, instead of creating Java VM blindly, A.dll should check first if JVM already exists in current process by calling JNI_GetCreatedJavaVMs. If the function returns nonempty array, then use GetEnv or AttachCurrentThread to obtain JNIEnv* for the existing VM, otherwise create a new VM.

C# - Stop ServiceProcess wrapper after java process is stopped

I'm building a C# ServiceProcess that will start a batch file (the batch file will start a Java application).
If I stop the service, it kills the Java process. Stopping the Java process can take up to 2 minutes, and the service has to wait for the Java application to stop, so I added a sleep:
System.Threading.Thread.Sleep( );
Is it possible to check whether the "java" process has exited and only then stop the ServiceProcess?
You can access the process using the Process class. It gives you various information about a specific process. When you start java.exe directly (using Process.Start) you already have the Process instance.
When using a batch file, you need to find the process, which is not a problem at all. You can find it using Process.GetProcessesByName, but that would return all Java processes running on your machine.
You could try something like the following:
Process[] proc = Process.GetProcessesByName("Java");
if (proc.Length != 0)
{
    // Process alive
    Process prod = proc[0];
    prod.Kill();
    prod.WaitForExit();
}
else
{
    // Process dead
}
A better option would be to use the process ID (if you know it).
Warning: this will kill the first Java process it finds; you need to check which one you actually need to kill...

Mongodb: db.printShardingStatus() / sh.status() call in Java (and JavaScript)

I need to get a list of chunks after sharding inside my Java code. My code is simple and looks like this:
Mongo m = new Mongo( "localhost" , 27017 );
DB db = m.getDB( "admin" );
Object cr = db.eval("db.printShardingStatus()", 1);
The call to eval() returns an error:
Exception in thread "main" com.mongodb.CommandResult$CommandFailure: command failed [$eval]: { "serverUsed" : "localhost/127.0.0.1:27017" , "errno" : -3.0 , "errmsg" : "invoke failed: JS Error: ReferenceError: printShardingStatus is not defined src/mongo/shell/db.js:891" , "ok" : 0.0}
at com.mongodb.CommandResult.getException(CommandResult.java:88)
at com.mongodb.CommandResult.throwOnError(CommandResult.java:134)
at com.mongodb.DB.eval(DB.java:340)
at org.sm.mongodb.MongoTest.main(MongoTest.java:35)
And indeed, if we look at the code of db.js, line 891 calls a method printShardingStatus() that is not defined inside that file. Inside the sh.status() method in the utils_sh.js file there is even a comment:
// TODO: move the actual commadn here
It is important to mention that when I run these commands in the mongo command line, everything works properly!
My questions are:
Is there any other possibility of getting the full sharding status within Java code? (e.g. with the DB.command() method)
If not, are there any other suggestions for how to avoid my problem?
Many of the shell's helper functions are not available for server-side code execution. In the case of printShardingStatus(), it makes sense because there isn't a console to use for printing output and you'd rather have a string returned. Thankfully, you should be able to pull up the source of the shell function and reimplement it in your application (e.g. concatenating a returned string instead of printing directly).
$ mongo
MongoDB shell version: 2.2.0
connecting to: test
> db.printShardingStatus
function (verbose) {
printShardingStatus(this.getSiblingDB("config"), verbose);
}
So, let's look at the printShardingStatus() function...
> printShardingStatus
function (configDB, verbose) {
if (configDB === undefined) {
configDB = db.getSisterDB("config");
}
var version = configDB.getCollection("version").findOne();
// ...
}
Before turning all of the output statements into string concatenation, you'd want to make sure the other DB methods are all available to you. Performance-wise, I think the best option is to port the innards of this function to Java and avoid server-side JS evaluation altogether. If you dive deeper into the printShardingStatus() function, you'll see it's just issuing find() on the config database along with some group() queries.
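For instance, here is a rough sketch using the same legacy driver as the question, reading the shard and chunk metadata straight from the config database (collection names as they existed in that MongoDB generation):
```
import com.mongodb.DB;
import com.mongodb.DBCursor;
import com.mongodb.Mongo;

public class ShardingStatus {
    public static void main(String[] args) throws Exception {
        Mongo m = new Mongo("localhost", 27017);
        DB configDB = m.getDB("config");

        // Shards registered in the cluster
        DBCursor shards = configDB.getCollection("shards").find();
        while (shards.hasNext()) {
            System.out.println("shard: " + shards.next());
        }

        // Chunk ranges per sharded collection
        DBCursor chunks = configDB.getCollection("chunks").find();
        while (chunks.hasNext()) {
            System.out.println("chunk: " + chunks.next());
        }
    }
}
```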
If you do want to stick with evaluating JS and would rather not keep this code within your Java application, you can also look into storing JS functions server-side.
Have you deployed a shard cluster properly?
If so, you could connect to a mongo database that has sharding enabled.
Try calling the method db.printShardingStatus() against that database in the mongo shell and see what happens.
Apparently the JavaScript function 'printShardingStatus' is only available in the mongo shell and not for execution via server commands. To see its code, start mongo.exe, type just 'printShardingStatus' and press Enter.
In this case, writing an extension method would be the best way to solve this...
Javascript way of printing output of MongoDB query to a file
1] Create a JavaScript file, e.g. test.js. The helper prints its output directly, so the script only needs to call it (no cursor loop is required):
db.printShardingStatus();
2] run
mongo admin --quiet test.js > output.txt

How can I replace the current Java process, like a unix-style exec?

I have a server written in Java that runs as a Windows service (thanks to Install4J). I want this service to be able to download the latest version of the JAR file it runs from, and start running the new code. The catch is that I don't want the Windows service to fully exit.
Ideally, I would accomplish this by a unix-style exec() call to stop the current version and run the new one. How can I best accomplish this?
Here is a complicated, but portable, way.
Split your code into two jars. One very small jar is there just to manage process startup. It creates a ClassLoader that holds the other jar on its classpath.
When you want to load a new version, you terminate all threads running code from the old jar. Null out all references to instances of classes from the old jar. Null out all references to the ClassLoader that loaded the old jar. At this point, if you didn't miss anything, the old classes and ClassLoader should be eligible for garbage collection.
Now you start over with a new ClassLoader instance pointing at the new jar, and restart your application code.
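A rough sketch of the small launcher jar, assuming the application jar exposes a conventional main class (com.example.AppMain and the jar path are placeholders):
```
import java.io.File;
import java.lang.reflect.Method;
import java.net.URL;
import java.net.URLClassLoader;

public class Launcher {
    public static void main(String[] args) throws Exception {
        while (true) {
            URL appJar = new File("app-current.jar").toURI().toURL(); // placeholder path
            URLClassLoader loader = new URLClassLoader(new URL[] { appJar },
                    Launcher.class.getClassLoader());

            Class<?> mainClass = Class.forName("com.example.AppMain", true, loader);
            Method main = mainClass.getMethod("main", String[].class);
            main.invoke(null, (Object) args);  // runs until the application returns

            // Drop every reference to the old classes and loader so they can be collected,
            // then download/replace app-current.jar before looping to reload it.
            mainClass = null;
            main = null;
            loader.close();
        }
    }
}
```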
As far as I know, there is no way to do this in Java.
I suppose you could work around it by using Runtime.exec or ProcessBuilder's start() method (which start new processes) and then letting the current one end... the docs state:
The subprocess is not killed when there are no more references to the Process object, but rather the subprocess continues executing asynchronously.
I'm assuming the same is true if the parent finishes and is garbage collected.
The catch is Runtime.exec's process will no longer have valid in, out, and err streams.
Java Service Wrapper does this and more.
You could use the built-in reloading functionality of, for example, Tomcat, Jetty or JBoss; they can be run as a service, and you don't have to use them as a web container or Java EE container.
Other options are OSGi or your own class-loading functionality.
But be aware of reloading in a production environment. It is not always the best solution.
General approach:
import java.io.BufferedInputStream;

public class ProcessSpawner {
    public static void main(String[] args) {
        // You can change env variables and the working directory, and
        // have better control over arguments.
        // See the ProcessBuilder javadocs.
        ProcessBuilder builder = new ProcessBuilder("ls", "-l");
        try {
            Process p = builder.start();
            // Here we just echo stdout from the process to Java's stdout.
            // Of course, this might not be what you're after.
            BufferedInputStream stream =
                    new BufferedInputStream(p.getInputStream());
            byte[] b = new byte[80];
            int n;
            while ((n = stream.read(b)) != -1) {
                System.out.print(new String(b, 0, n));
            }
            // Exit with the exit code of the spawned process.
            System.exit(p.waitFor());
        } catch (Exception e) {
            System.err.println("Exception: " + e.getMessage());
            System.exit(1);
        }
    }
}
