So I've been playing around with the Oozie Java API, and all was fine and dandy until I hit the following problem while trying to run this Java code:
OozieClient oc = new OozieClient(OOZIE_URL);
Properties conf = oc.createConfiguration();
conf.setProperty(OozieClient.APP_PATH, PATH_TO_WF);
String jobId = oc.run(conf);
// Wait until the workflow leaves the PREP state
while (oc.getJobInfo(jobId).getStatus() == WorkflowJob.Status.PREP) {
    Thread.sleep(1000);
}
oc.kill(jobId);
This fails with the following exception:
E0508: User [?] not authorized for WF job [JOB_ID_GOES_HERE]
I've been able to find some related issues on Google, though the ones I noticed were only related to the command-line Oozie client.
My main question: considering that you can run an Oozie workflow from Java as another user by simply adding:
conf.setProperty("user.name", "user123");
is there something similar that can be done when killing a workflow?
Use AuthOozieClient and set the user.name system property:
OozieClient oc = new AuthOozieClient(OOZIE_URL);
System.setProperty("user.name", userName);
oc.kill(jobId);
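To put it together, here is a minimal sketch of running and then killing a workflow as another user with AuthOozieClient; the Oozie URL and workflow path are placeholder values, and user123 is taken from the question:

import java.util.Properties;

import org.apache.oozie.client.AuthOozieClient;
import org.apache.oozie.client.OozieClient;
import org.apache.oozie.client.WorkflowJob;

public class KillWorkflowAsUser {
    // Placeholder values - substitute your own
    private static final String OOZIE_URL = "http://localhost:11000/oozie";
    private static final String PATH_TO_WF = "hdfs://localhost:8020/user/user123/app";

    public static void main(String[] args) throws Exception {
        // The system property determines the user Oozie authenticates as
        System.setProperty("user.name", "user123");
        OozieClient oc = new AuthOozieClient(OOZIE_URL);

        Properties conf = oc.createConfiguration();
        conf.setProperty(OozieClient.APP_PATH, PATH_TO_WF);
        conf.setProperty("user.name", "user123");
        String jobId = oc.run(conf);

        // Wait until the workflow leaves PREP, then kill it as the same user
        while (oc.getJobInfo(jobId).getStatus() == WorkflowJob.Status.PREP) {
            Thread.sleep(1000);
        }
        oc.kill(jobId);
    }
}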
I have a system that exposes a Windows COM interface so that external applications can connect to it. It has the following details:
Interface: InterfaceName
Flags: (1234) Dual OleAutomation Dispatchable
GUID: {ABCDEFG-ABCD-1234-ABCD-ABCDE1234}
I'd like to connect to this interface from a Java Spring application, which will send a request to the interface and process the response.
I've tried to use the following code
ActiveXComponent mf = new ActiveXComponent("ApplicationName.InterfaceName");
try {
    Dispatch f2 = mf.QueryInterface("{ABCDEFG-ABCD-1234-ABCD-ABCDE1234}");
    Dispatch.put(f2, 201, new Variant("Request String"));
} catch (Exception e) {
    e.printStackTrace();
}
The executable file opens, but it doesn't do what I want. I want to do the following:
1) Make sure my interface has been registered (I can see it under Computer\HKEY_CLASSES_ROOT\ApplicationName.InterfaceName).
2) Avoid opening a new instance of the application: using ActiveXComponent opens one, which is not required, since the application is already running.
3) Call the interface with the dispid.
4) Retrieve the response from the call/put/invoke (which suits best for my requirement?) and process the response.
This is my first time working with a Java COM interface and I don't have much experience with it. I could also find very few examples on the internet, and I tried to adapt one I found to my project. I'm also not sure whether the approach I'm taking to call the interface is correct. I'd be glad if you could give me a hand!
I have resolved this using the JACOB library.
1) Download the JACOB library from here.
2) Check that your application is working and has details under Computer\HKEY_CLASSES_ROOT\ApplicationName.InterfaceName.
3) Make sure the ApplicationName.dll file is registered. If not, see regsvr32 for more info.
4) Use the simple Java code below to send data to the COM interface:
Dispatch dispatch = new Dispatch("ApplicationName.InterfaceName");
Variant response = Dispatch.call(dispatch, <DISPID>, message);
System.out.println(response.getString()); // print the response
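If you specifically need to avoid launching a new instance (point 2 in the question), a sketch along the following lines may help; it uses JACOB's connectToActiveInstance to attach to a running COM server, and ApplicationName.InterfaceName, the dispid 201, and the request string are placeholders taken from the question:

import com.jacob.activeX.ActiveXComponent;
import com.jacob.com.ComThread;
import com.jacob.com.Dispatch;
import com.jacob.com.Variant;

public class ComAttachSketch {
    public static void main(String[] args) {
        ComThread.InitSTA(); // initialize COM on this thread
        try {
            // Attach to the already-running instance instead of starting a new one
            ActiveXComponent app =
                    ActiveXComponent.connectToActiveInstance("ApplicationName.InterfaceName");
            // ActiveXComponent extends Dispatch, so it can be invoked by dispid directly
            Variant response = Dispatch.call(app, 201, new Variant("Request String"));
            System.out.println(response.getString());
            app.safeRelease();
        } finally {
            ComThread.Release(); // tear down COM for this thread
        }
    }
}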
Hope this helps.
I'm using IntelliJ IDEA to remote debug a Java CLI program with the debugger listening for connections.
This works fine for the first invocation, but the debugger stops listening after the CLI program disconnects. I want the debugger to keep listening since multiple CLI invocations will be made (in sequence, not in parallel) and only one of these will trigger the breakpoint I've set.
Here's my client debug config:
-agentlib:jdwp=transport=dt_socket,server=n,address=5005,suspend=y
Is it possible to keep the debugger listening?
Well, since your CLI program terminates, the debugger also stops. If you still want to keep the debug session alive across multiple runs of the CLI program, you can try the following:
Write a wrapper program from which you invoke your CLI program multiple times and debug the wrapper program instead of your CLI program.
Something like this:
public class Wrapper {
    public static void main(String[] args) {
        YourCLIProgram yp = new YourCLIProgram();

        // First invocation
        String[] arg1 = { }; // arguments required for your CLI program
        yp.main(arg1);

        // Second invocation
        String[] arg2 = { }; // arguments required for your CLI program
        yp.main(arg2);

        // Third invocation
        String[] arg3 = { }; // arguments required for your CLI program
        yp.main(arg3);

        // Fourth invocation
        String[] arg4 = { }; // arguments required for your CLI program
        yp.main(arg4);
    }
}
I hope it works.
It also depends on what you are trying to achieve.
If you just want to check what parameters are passed to your CLI, you can log them to a file or save whatever information you need in a database (or a file as well).
Per the JPDA specification, a transport service may or may not support multiple connections.
In Eclipse, for example, it doesn't, and I suppose the same is true for IDEA.
When setting up your run configuration, did you select the "Listen" debugger mode? The command-line arguments you show look like the normal "Attach" settings, whereas the arguments for "Listen" look like this:
-agentlib:jdwp=transport=dt_socket,server=n,address=yourhost.yourdomain:5005,suspend=y,onthrow=<FQ exception class name>,onuncaught=<y/n>
(Specifically, your arguments are missing the address at which the application - your CLI program - should connect to IDEA on start-up.)
I read a post that suggests the "onthrow" argument may not be necessary for general debugging, but I haven't tried it myself.
Try with suspend=n:
-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005
On my local app (a Tomcat web app), even though I run on JDK 8, I still use the older way of doing it, and it works fine (another thing you could try):
-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=5005
I have a software component which submits MR jobs to Hadoop. I now want to check if there are other jobs running before submitting it. I found out that there is a Cluster object in the new API which can be used to query the cluster for running jobs, get their configurations and extract the relevant information from them. However I am having problems using this.
Just doing new Cluster(conf) where conf is a valid Configuration which can be used to access this cluster (e.g., to submit jobs to it) leaves the object unconfigured, and the getAllJobStatuses() method of Cluster returns null.
Extracting mapreduce.jobtracker.address from the configuration, constructing an InetSocketAddress from it, and using the other constructor of Cluster throws "Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses."
Using the old API, doing something like new JobClient(conf).getAllJobs() throws an NPE.
What am I missing here? How can I programmatically get the running jobs?
I investigated further and solved it. Thomas Jungblut was right: it was because of the mini cluster. I had used the mini cluster following this blog post, which turned out to work for MR jobs but set the mini cluster up in a deprecated way with an incomplete configuration. The Hadoop wiki has a page on how to develop unit tests which also explains how to correctly set up a mini cluster.
Essentially, I do the mini cluster setup the following way:
// Create a YarnConfiguration for bootstrapping the minicluster
final YarnConfiguration bootConf = new YarnConfiguration();
// Base directory to store HDFS data in
final File hdfsBase = Files.createTempDirectory("temp-hdfs-").toFile();
bootConf.set(MiniDFSCluster.HDFS_MINIDFS_BASEDIR, hdfsBase.getAbsolutePath());
// Start Mini DFS cluster
final MiniDFSCluster hdfsCluster = new MiniDFSCluster.Builder(bootConf).build();
// Configure and start Mini MR YARN cluster
bootConf.setInt(YarnConfiguration.RM_SCHEDULER_MINIMUM_ALLOCATION_MB, 64);
bootConf.setClass(YarnConfiguration.RM_SCHEDULER, FifoScheduler.class, ResourceScheduler.class);
final MiniMRYarnCluster yarnCluster = new MiniMRYarnCluster("test-cluster", 1);
yarnCluster.init(bootConf);
yarnCluster.start();
// Get the "real" Configuration to use from now on
final Configuration conf = yarnCluster.getConfig();
// Get the filesystem
final FileSystem fs = new Path("hdfs://localhost:" + hdfsCluster.getNameNodePort() + "/").getFileSystem(conf);
Now I have conf and fs that I can use to submit jobs and access HDFS, and new Cluster(conf) and cluster.getAllJobStatuses() work as expected.
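For reference, here is a minimal sketch of the check itself, using the conf obtained above (the class and method names follow the org.apache.hadoop.mapreduce API; treat it as a sketch rather than a drop-in):

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Cluster;
import org.apache.hadoop.mapreduce.JobStatus;

public static void printRunningJobs(Configuration conf)
        throws IOException, InterruptedException {
    Cluster cluster = new Cluster(conf);
    try {
        // getAllJobStatuses() returns one JobStatus per job known to the cluster
        for (JobStatus status : cluster.getAllJobStatuses()) {
            if (status.getState() == JobStatus.State.RUNNING) {
                System.out.println("Running job: " + status.getJobID());
            }
        }
    } finally {
        cluster.close();
    }
}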
When everything is done, to shut down and clean up, I call:
yarnCluster.stop();
hdfsCluster.shutdown();
FileUtils.deleteDirectory(hdfsBase); // from Apache Commons IO
Note: JAVA_HOME must be set for this to work. When building on Jenkins, make sure JAVA_HOME is set for the default JDK. Alternatively, you can explicitly state a JDK to use; Jenkins will then set up JAVA_HOME automatically.
I tried it like this and it worked for me, but only after submitting the job:
JobClient jc = new JobClient(job.getConfiguration());
for (JobStatus js : jc.getAllJobs()) {
    if (js.getState().getValue() == State.RUNNING.getValue()) {
        // job is currently running
    }
}
jc.close();
Alternatively, we can get the Cluster from the Job API; it has methods which return all the jobs and their statuses:
cluster.getAllJobStatuses();
I am executing my jobs/transformations using the Java API and I am able to do it correctly on my host.
Now I am looking for a way to execute the transformation on a remote host (where Carte is running). Please help me or redirect me to the proper documentation where I can find the classes to use to accomplish this.
PDI Version - 5.0.1
Currently I am executing my job as below:
try {
    if (jobDetails.getGraphlocation() != null) {
        KettleEnvironment.init();
        JobMeta jobMeta = new JobMeta(jobDetails.getGraphlocation(), null);
        for (String s : jobDetails.getArguments()) {
            String[] splitString = s.split("\\=");
            if (splitString.length == 2) {
                jobMeta.setParameterValue(splitString[0], splitString[1]);
            } else {
                System.err.println("Parameter should be of the form - name=value");
            }
        }
        Job job = new Job(null, jobMeta);
        job.setLogLevel(LogLevel.valueOf(jobDetails.getLoglevel().toString()));
        job.start();
        job.waitUntilFinished();
        if (job.getErrors() != 0) {
            System.out.println("Error encountered!");
        }
    }
} catch (KettleException e) {
    e.printStackTrace();
}
The above code is able to execute the job wherever I am running it, but I want to execute it on a slave server by just passing the Carte username, password, and server IP address.
You can do it through Spoon by registering the Carte server, or you can do it in a job by specifying the name and port of the Carte server in the actual job/transformation step; i.e., you can create a launcher job which just has Start, Job (pointing at the Carte server), and Success steps.
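If you want to stay in Java rather than going through Spoon, something along these lines may work. This is a rough sketch against the PDI 5.x API: the host, port, credentials, and job path are placeholders, and the exact signature of Job.sendToSlaveServer may differ between versions:

import org.pentaho.di.cluster.SlaveServer;
import org.pentaho.di.core.KettleEnvironment;
import org.pentaho.di.job.Job;
import org.pentaho.di.job.JobExecutionConfiguration;
import org.pentaho.di.job.JobMeta;

public class RemoteCarteSketch {
    public static void main(String[] args) throws Exception {
        KettleEnvironment.init();
        JobMeta jobMeta = new JobMeta("/path/to/job.kjb", null);

        // Describe the remote Carte instance (placeholder host, port, credentials)
        SlaveServer carte = new SlaveServer("remote-carte", "192.168.1.10", "8080",
                "cluster", "cluster");

        JobExecutionConfiguration execConfig = new JobExecutionConfiguration();
        execConfig.setRemoteServer(carte);
        execConfig.setExecutingRemotely(true);

        // Ship the job to Carte; the returned id can be used to query status later
        String carteObjectId = Job.sendToSlaveServer(jobMeta, execConfig, null, null);
        System.out.println("Submitted to Carte with object id " + carteObjectId);
    }
}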
I need to spawn an SSH connection from a Java program using ProcessBuilder and a userid/password combination.
I have already successfully implemented SSH connections using Ganymed, JSch, a combination of Java ProcessBuilder and Expect scripting (Expect4J as well), Java ProcessBuilder and an SSHPASS script, and SSH shared keys.
Security is NOT a concern at this point in time; all I am after is being able to support programmatically all kinds of combinations for SSH connections.
My problem is the password prompt that SSH throws somewhere that is not on STDIN/STDOUT (on a tty, I believe). This is my last hurdle to overcome.
My question: is there a way to intercept the SSH password request and provide it from my Java code?
Please, note this is a very narrow question (and all the above information was to guarantee the answer would not be too broad).
Here is a sample code of what I am trying:
import java.io.*;

public class ProcessBuilderTest {
    public static void main(String[] args) throws Exception {
        ProcessBuilder pb = new ProcessBuilder(
                "/usr/bin/ssh",
                "myuser@myserver.com",
                "export NOME='Jennifer Lawrence'; echo $NOME"
        );
        pb.redirectErrorStream(true); // redirect stderr to stdout
        Process process = pb.start();
        InputStream inputStream = process.getInputStream();
        BufferedReader reader = new BufferedReader(new InputStreamReader(inputStream));
        String line = null;
        while ((line = reader.readLine()) != null) {
            System.out.println(line);
        }
        process.waitFor();
    }
}
But when I run it, I get this:
[memphis BuilderTest]# java ProcessBuilderTest
myuser@myserver's password:
and after I type the password, I get the rest of the output:
Jennifer Lawrence
[memphis BuilderTest]#
Again, the specific question is:
Is there a way to spawn an external SSH client (OpenSSH, Tectia SSH, SecureCRT, etc.) using the PasswordAuthentication method (no other method can be used) via the Java ProcessBuilder interface (no other language can be used), intercept/capture the password prompt, and respond by providing that password from my Java code (so the user does not need to type it)?
You need to learn about pseudo-ttys, assuming you are operating on Linux. The password prompt is on the tty device. You will need to run a separate process against a pseudo-tty instead of just inheriting your tty device; then you can intercept the password prompt.
This is a moderately complex process.
There is a library that supports some of this: http://www.ganymed.ethz.ch/ssh2/FAQ.html. You might find reading its source illuminating if it is available.
While it has been suggested that a pseudo-tty (pty) is required to simulate a terminal, the accepted answer doesn't provide a working solution - there are also lots of similar questions with no working answers.
Here are two solutions that allow you to capture the "Password:" prompt in SSH and enter the password in an automated way without using SSH_ASKPASS or Expect.
Why use one programming language when you can use two? The first option isn't ideal, but it demonstrates the solution:
ProcessBuilder pb = new ProcessBuilder("/usr/bin/python", "-c", "import pty; pty.spawn(['/usr/bin/ssh', '<hostname>'])");
The above example makes use of the Python pty module to wrap SSH into a PTY. Although it is simple, it doesn't provide any flexibility to allow you to modify any terminal properties like the passed window size.
The other more lightweight option is to use a PTY wrapper in C - the pty tool from the "Advanced Programming in the UNIX® Environment" book is just this - the source can be found at https://github.com/abligh/pty.
You will then use it in a similar way, but referencing the pty tool instead of Python:
ProcessBuilder pb = new ProcessBuilder("/usr/local/bin/pty", "/usr/bin/ssh", "<hostname>");
This is the same approach that Expect uses to simulate a PTY, which is why you are able to intercept it using Expect. It goes without saying that tunneled clear text passwords are insecure and public key authentication should always be the preferred way of doing this.
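With that caveat in mind, here is how the pieces fit together for the original question: once ssh is wrapped in a pty (either variant above), the password prompt arrives on the process's regular output pipe, so plain ProcessBuilder I/O can answer it. A minimal sketch, assuming the pty tool is installed at /usr/local/bin/pty; the host, remote command, and password are placeholders:

import java.io.*;
import java.nio.charset.StandardCharsets;

public class SshPasswordSketch {
    public static void main(String[] args) throws Exception {
        // Wrap ssh in a pty so the password prompt shows up on our stdout pipe
        ProcessBuilder pb = new ProcessBuilder(
                "/usr/local/bin/pty", "/usr/bin/ssh", "myuser@myserver.com", "echo connected");
        pb.redirectErrorStream(true);
        Process process = pb.start();

        Writer toSsh = new OutputStreamWriter(process.getOutputStream(), StandardCharsets.UTF_8);
        Reader fromSsh = new InputStreamReader(process.getInputStream(), StandardCharsets.UTF_8);

        // Read character by character: the prompt has no trailing newline,
        // so line-based reading would block forever waiting for one.
        StringBuilder seen = new StringBuilder();
        int c;
        while ((c = fromSsh.read()) != -1) {
            seen.append((char) c);
            System.out.print((char) c);
            if (seen.toString().toLowerCase().trim().endsWith("password:")) {
                toSsh.write("secret\n"); // the password, supplied programmatically
                toSsh.flush();
                seen.setLength(0);
            }
        }
        process.waitFor();
    }
}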