The application is packaged as an executable jar, but it can't find the webapp directory that is included inside the fat jar, so currently I must place the webapp directory in the same directory as the fat jar.
I suspect the issue has to do with this code:
public void init(String host, int port) throws Exception {
    logger.info("Starting Server bound to '" + host + ":" + port + "'");

    String memory = Configurations.get("refine.memory");
    if (memory != null) {
        logger.info("refine.memory size: " + memory + " JVM Max heap: " + Runtime.getRuntime().maxMemory());
    }

    int maxThreads = Configurations.getInteger("refine.queue.size", 30);
    int maxQueue = Configurations.getInteger("refine.queue.max_size", 300);
    long keepAliveTime = Configurations.getInteger("refine.queue.idle_time", 60);

    LinkedBlockingQueue<Runnable> queue = new LinkedBlockingQueue<Runnable>(maxQueue);
    threadPool = new ThreadPoolExecutor(maxThreads, maxQueue, keepAliveTime, TimeUnit.SECONDS, queue);
    this.setThreadPool(new ThreadPoolExecutorAdapter(threadPool));

    Connector connector = new SocketConnector();
    connector.setPort(port);
    connector.setHost(host);
    connector.setMaxIdleTime(Configurations.getInteger("refine.connection.max_idle_time", 60000));
    connector.setStatsOn(false);
    this.addConnector(connector);

    File webapp = new File("webapp");
    final String contextPath = Configurations.get("refine.context_path", "/");
    final int maxFormContentSize = Configurations.getInteger("refine.max_form_content_size", 1048576);

    logger.info("Initializing context: '" + contextPath + "' from '" + webapp.getAbsolutePath() + "'");

    WebAppContext context = new WebAppContext();
    URL webRootLocation = this.getClass().getResource("/webapp");
    if (webRootLocation == null) {
        throw new IllegalStateException("Unable to determine webroot URL location");
    }
    URI webRootUri = URI.create(webRootLocation.toURI().toASCIIString());
    System.err.printf("Web Root location: %s%n", webRootLocation);
    System.err.printf("Web Root URI: %s%n", webRootUri);

    context.setContextPath(webRootLocation.toString());
    context.setBaseResource(Resource.newResource(webRootLocation));
    context.setMaxFormContentSize(maxFormContentSize);

    this.setHandler(context);
    this.setStopAtShutdown(true);
    this.setSendServerVersion(true);

    // Enable context autoreloading
    if (Configurations.getBoolean("refine.autoreload", false)) {
        scanForUpdates(webapp, context);
    }

    // start the server
    try {
        this.start();
    } catch (BindException e) {
        logger.error("Failed to start server - is there another copy running already on this port/address?");
        throw e;
    }

    configure(context);
}
I found what seems to be a possible solution in this SO answer.
Am I on the right path? How would I fix this?
Edit
I modified the code following advice given here and from web searches, but now I get the following errors:
#-ThinkPad-T450s:~/projects/github/OpenRefine/fatjar$ java -jar openrefinefat.jar
10:54:47.405 [ refine_server] Starting Server bound to '127.0.0.1:3333' (0ms)
10:54:47.416 [ refine_server] Initializing context: '/' from '/home/me/projects/github/OpenRefine/fatjar/webapp' (11ms)
Web Root location: jar:file:/home/me/projects/github/OpenRefine/fatjar/openrefinefat.jar!/webapp
Web Root URI: jar:file:/home/me/projects/github/OpenRefine/fatjar/openrefinefat.jar!/webapp
10:54:48.506 [ refine] Starting OpenRefine trunk [TRUNK]... (1090ms)
10:54:48.537 [..enrefinefat.jar!/webapp] unavailable (31ms)
java.lang.NullPointerException
at java.io.File.<init>(File.java:277)
at edu.mit.simile.butterfly.Butterfly.init(Butterfly.java:191)
at org.mortbay.jetty.servlet.ServletHolder.initServlet(ServletHolder.java:440)
at org.mortbay.jetty.servlet.ServletHolder.doStart(ServletHolder.java:263)
at com.google.refine.RefineServer.configure(Refine.java:328)
at com.google.refine.RefineServer.init(Refine.java:242)
at com.google.refine.Refine.init(Refine.java:117)
at com.google.refine.Refine.main(Refine.java:111)
10:54:48.543 [ org.mortbay.log] Nested in javax.servlet.ServletException: java.lang.NullPointerException: (6ms)
java.lang.NullPointerException
at java.io.File.<init>(File.java:277)
at edu.mit.simile.butterfly.Butterfly.init(Butterfly.java:191)
at org.mortbay.jetty.servlet.ServletHolder.initServlet(ServletHolder.java:440)
at org.mortbay.jetty.servlet.ServletHolder.doStart(ServletHolder.java:263)
at com.google.refine.RefineServer.configure(Refine.java:328)
at com.google.refine.RefineServer.init(Refine.java:242)
at com.google.refine.Refine.init(Refine.java:117)
at com.google.refine.Refine.main(Refine.java:111)
Created new window in existing browser session.
You are missing the WebAppContext.setBaseResource(Resource) call.
This is how the WebAppContext (and the ServletContext) finds its static resources, as well as any configuration resources specific to that context.
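To make that concrete, here is a minimal, dependency-free sketch of the classpath lookup involved (the resource name and the IllegalStateException mirror the question's code; the Jetty-specific calls are left out):

```java
import java.net.URI;
import java.net.URL;

public class WebRootLocator {

    // Find a directory packaged inside the jar (e.g. /webapp) via the classpath
    // instead of the filesystem. Inside a fat jar the result is a
    // jar:file:...!/webapp URI, which can then be handed to
    // WebAppContext.setBaseResource(Resource.newResource(uri)).
    static URI locate(Class<?> anchor, String resource) throws Exception {
        URL url = anchor.getResource(resource);
        if (url == null) {
            throw new IllegalStateException("Unable to determine webroot URL location");
        }
        return URI.create(url.toURI().toASCIIString());
    }
}
```

With the base resource set from such a URI, the context no longer depends on a webapp directory sitting next to the jar.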
We are using Apache Mina SSHD 1.7 to expose an SFTP server that uses a custom file-system implementation which creates a file system per company. Users of the same company (or, more precisely, of the same connector) will access the same file system, while users of another company will access a file system unique to their company. The file system is moreover just a view on a MySQL database: it writes uploaded files, after some conversions, directly into the DB and reads files from the DB on download.
The setup of the server looks like the excerpt below
void init() {
    server = MessageSftpServer.setUpDefaultServer();
    server.setPort(port);
    LOG.debug("Server is configured for port {}", port);

    File pemFile = new File(pemLocation);
    FileKeyPairProvider provider = new FileKeyPairProvider(pemFile.toPath());
    validateKeyPairProvider(provider.loadKeys(), publicKeyList);
    server.setKeyPairProvider(provider);

    server.setCommandFactory(new ScpCommandFactory());
    server.setPasswordAuthenticator(
        (String username, String password, ServerSession session) -> {
            ...
        });
    PropertyResolverUtils.updateProperty(server, ServerAuthenticationManager.MAX_AUTH_REQUESTS, 3);

    SftpSubsystemFactory sftpFactory = new SftpSubsystemFactory.Builder()
        .withShutdownOnExit(false)
        .withUnsupportedAttributePolicy(UnsupportedAttributePolicy.Warn)
        .build();
    server.setSubsystemFactories(Collections.singletonList(sftpFactory));

    // add our custom virtual file system to trick the user into believing she is operating against
    // a true file system instead of just operating against a backing database
    server.setFileSystemFactory(
        new DBFileSystemFactory(connectorService, companyService, mmService, template));
    // filter connection attempts based on remote IPs defined in connectors
    server.addSessionListener(whitelistSessionListener);
}
Within the file-system factory we basically just create the URI for the file-system provider and pass it to the provider's respective method:
@Override
public FileSystem createFileSystem(Session session) throws IOException {
    SFTPServerConnectorEntity connector =
        connectorService.getSFTPServerConnectorForUser(session.getUsername());
    if (null == connector) {
        throw new IOException("No SFTP Server connector found for user " + session.getUsername());
    }
    String ip = CommonUtils.getIPforSession(session);
    URI fsUri = URI.create("dbfs://" + session.getUsername() + "#" + ip + "/" + connector.getUuid());
    LOG.debug("Checking whether to create file system for user {} connected via IP {}",
        session.getUsername(), ip);

    Map<String, Object> env = new HashMap<>();
    env.put("UserAgent", session.getClientVersion());
    try {
        return fileSystemProvider.newFileSystem(fsUri, env);
    } catch (FileSystemAlreadyExistsException fsaeEx) {
        LOG.debug("Reusing existing filesystem for connector {}", connector.getUuid());
        return fileSystemProvider.getFileSystem(fsUri);
    }
}
and within the provider we simply parse the values from the provided URI and environment map to create the final file system if none is yet available in the cache:
@Override
public DBFileSystem newFileSystem(URI uri, Map<String, ?> env) throws IOException {
    LOG.trace("newFileSystem({}, {})", uri, env);
    ConnectionInfo ci = ConnectionInfo.fromSchemeSpecificPart(uri.getSchemeSpecificPart());
    String cacheKey = generateCacheKey(ci);
    synchronized (fileSystems) {
        if (fileSystems.containsKey(cacheKey)) {
            throw new FileSystemAlreadyExistsException(
                "A filesystem for connector " + ci.getConnectorUuid()
                    + " connected from IP " + ci.getIp() + " already exists");
        }
    }

    SFTPServerConnectorEntity connector =
        connectorService.get(SFTPServerConnectorEntity.class, ci.getConnectorUuid());
    List<CompanyEntity> companies = companyService.getCompaniesForConnector(connector);
    if (companies.size() < 1) {
        throw new IOException("No company for connector " + connector.getUuid() + " found");
    }

    DBFileSystem fileSystem = null;
    synchronized (fileSystems) {
        if (!fileSystems.containsKey(cacheKey)) {
            LOG.info("Created new filesystem for connector {} (Remote IP: {}, User: {}, UserAgent: {})",
                ci.getConnectorUuid(), ci.getIp(), ci.getUser(), env.get("UserAgent"));
            fileSystem = new DBFileSystem(this, connector.getUsername(), companies, connector,
                template, ci.getIp(), (String) env.get("UserAgent"));
            Pair<DBFileSystem, AtomicInteger> sessions = Pair.of(fileSystem, new AtomicInteger(1));
            fileSystems.put(cacheKey, sessions);
        }
    }

    if (null == fileSystem) {
        throw new FileSystemAlreadyExistsException(
            "A filesystem for connector " + ci.getConnectorUuid()
                + " connected from IP " + ci.getIp() + " already exists");
    }
    return fileSystem;
}
@Override
public DBFileSystem getFileSystem(URI uri) {
    LOG.trace("getFileSystem({})", uri);
    String schemeSpecificPart = uri.getSchemeSpecificPart();
    if (!schemeSpecificPart.startsWith("//")) {
        throw new IllegalArgumentException(
            "Invalid URI provided. URI must have a form of 'dbfs://ip:port/connector-uuid' where "
                + "'ip' is the IP address of the connected user, 'port' is the remote port of the user and "
                + "'connector-uuid' is a UUID string identifying the connector the filesystem was created for");
    }
    ConnectionInfo ci = ConnectionInfo.fromSchemeSpecificPart(schemeSpecificPart);
    String cacheKey = generateCacheKey(ci);
    if (!fileSystems.containsKey(cacheKey)) {
        throw new FileSystemNotFoundException(
            "No filesystem found for connector " + ci.getConnectorUuid() + " with connection from IP "
                + ci.getIp());
    }

    Pair<DBFileSystem, AtomicInteger> sessions = fileSystems.get(cacheKey);
    if (!sessions.getKey().isOpen()) {
        throw new FileSystemNotFoundException(
            "Filesystem for connector " + ci.getConnectorUuid() + " with connection from IP "
                + ci.getIp() + " was closed already");
    }

    int curSessions = sessions.getValue().incrementAndGet();
    LOG.info("Added further session to filesystem for connector {}. Current connected sessions: {} (Remote IP: {}, User: {})",
        ci.getConnectorUuid(), curSessions, ci.getIp(), ci.getUser());
    return sessions.getKey();
}

private String generateCacheKey(String user, String ip, String connectorUuid) {
    return connectorUuid + "_" + ip + "_" + user;
}

private String generateCacheKey(ConnectionInfo ci) {
    return generateCacheKey(ci.getUser(), ci.getIp(), ci.getConnectorUuid());
}
This works out really well. However, as more and more users are added to the SFTP server, monitoring of the performed actions suffers due to the lack of proper MDC logging. Simply adding MDC logging doesn't work cleanly, as Mina (or SSHD in particular) shares threads among connected users, which leads to the MDC context printing the wrong information at times, which in turn caused confusion when analyzing the logs. As a temporary solution we have removed it from the project for now.
We also tried to customize Nio2Session (and a couple of other classes) in order to intervene in the thread creation, though these classes were obviously not designed for inheritance, which later led to problems down the road.
Is there a better strategy to include proper MDC logging in our particular scenario, where not one file system is used but a file system per company?
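Not an SSHD-specific answer, but the usual way around thread reuse is to capture the logging context when work is submitted and reinstall it for the duration of each task, e.g. by wrapping the Executor the server uses. A dependency-free sketch of the idea follows; the ThreadLocal map stands in for org.slf4j.MDC (with real MDC you would call MDC.getCopyOfContextMap() and MDC.setContextMap() in the same places):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.Executor;

public class ContextPropagatingExecutor implements Executor {

    // Stand-in for org.slf4j.MDC so this sketch stays dependency-free.
    static final ThreadLocal<Map<String, String>> CONTEXT =
            ThreadLocal.withInitial(HashMap::new);

    private final Executor delegate;

    public ContextPropagatingExecutor(Executor delegate) {
        this.delegate = delegate;
    }

    @Override
    public void execute(Runnable task) {
        // Capture the submitter's context (e.g. user, company, connector UUID)...
        Map<String, String> captured = new HashMap<>(CONTEXT.get());
        delegate.execute(() -> {
            Map<String, String> previous = CONTEXT.get();
            CONTEXT.set(captured);     // ...install it on the worker thread...
            try {
                task.run();
            } finally {
                CONTEXT.set(previous); // ...and restore it, since threads are reused
            }
        });
    }
}
```

The point is that the context travels with the task rather than with the thread, so it stays correct even when Mina hands the same thread to different sessions.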
I have some code connecting to JMX and getting an MBean by name. Now I'm writing JUnit tests for it. I have already done some tests without authentication, using something like this:
private static void startJmxServer() throws Exception {
    MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
    LocateRegistry.createRegistry(PORT);
    JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://" + HOST + ':' + PORT + "/jmxrmi");
    JMXConnectorServer connectorServer = JMXConnectorServerFactory.newJMXConnectorServer(url, null, mbs);

    Example exampleMBean = new Example();
    ObjectName exampleName = new ObjectName(MBEAN_NAME);
    mbs.registerMBean(exampleMBean, exampleName);

    connectorServer.start();
}
Now I want to do some tests with authentication, so I need to specify the following JVM properties:
-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=1234
-Dcom.sun.management.jmxremote.authenticate=true
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.access.file=/somepath/jmxremote.access
-Dcom.sun.management.jmxremote.password.file=/somepath/jmxremote.password
I've already tried passing these properties in the JMXConnectorServer environment map. I've also tried System.setProperty. But both failed: the connection was still available without any credentials.
The only way that makes it work is:
private static void startJmxServer() throws Exception {
    String name = ManagementFactory.getRuntimeMXBean().getName();
    VirtualMachine vm = VirtualMachine.attach(name.substring(0, name.indexOf('@')));
    String lca = vm.getAgentProperties().getProperty("com.sun.management.jmxremote.localConnectorAddress");
    if (lca == null) {
        Path p = Paths.get(System.getProperty("java.home")).normalize();
        if (!"jre".equals(p.getName(p.getNameCount() - 1).toString().toLowerCase())) {
            p = p.resolve("jre");
        }
        File f = p.resolve("lib").resolve("management-agent.jar").toFile();
        if (!f.exists()) {
            throw new IOException("Management agent not found");
        }
        String options = String.format("com.sun.management.jmxremote.port=%d, " +
                "com.sun.management.jmxremote.authenticate=true, " +
                "com.sun.management.jmxremote.ssl=false, " +
                "com.sun.management.jmxremote.access.file=/somepath/jmxremote.access, " +
                "com.sun.management.jmxremote.password.file=/somepath/jmxremote.password", PORT);
        vm.loadAgent(f.getCanonicalPath(), options);
    }
    vm.detach();

    MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
    Example exampleMBean = new Example();
    ObjectName exampleName = new ObjectName(MBEAN_NAME);
    mbs.registerMBean(exampleMBean, exampleName);
}
But once the agent is loaded, I cannot change the VM properties to run tests without authentication. I also want to avoid this sort of thing, because it requires manually defining tools.jar, and I want to use the common JMX tools. Any idea how to manage this?
Authentication configuration is passed in the environment map, the second argument to JMXConnectorServerFactory.newJMXConnectorServer:
HashMap<String, Object> env = new HashMap<>();
env.put("jmx.remote.x.password.file", "/somepath/jmxremote.password");
env.put("jmx.remote.x.access.file", "/somepath/jmxremote.access");
JMXConnectorServer connectorServer =
JMXConnectorServerFactory.newJMXConnectorServer(url, env, mbs);
Note that the attribute names here differ from the property names.
Consult ConnectorBootstrap.java from JDK sources to see how the default JMXConnectorServer is initialized.
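A self-contained sketch of that approach (the role name testRole, the password secret, and the port are made up for the demo; the files use the plain "role password" / "role readwrite" format of jmxremote.password and jmxremote.access):

```java
import java.lang.management.ManagementFactory;
import java.nio.file.Files;
import java.nio.file.Path;
import java.rmi.registry.LocateRegistry;
import java.util.HashMap;
import javax.management.MBeanServer;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXConnectorServer;
import javax.management.remote.JMXConnectorServerFactory;
import javax.management.remote.JMXServiceURL;

public class SecureJmxDemo {

    // Starts an authenticated connector server, then verifies that an
    // unauthenticated connect is rejected while a credentialed one succeeds.
    static boolean[] run(int port) throws Exception {
        // Write minimal password/access files for a single role.
        Path pw = Files.createTempFile("jmxremote", ".password");
        Files.write(pw, "testRole secret\n".getBytes());
        Path access = Files.createTempFile("jmxremote", ".access");
        Files.write(access, "testRole readwrite\n".getBytes());

        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        LocateRegistry.createRegistry(port);
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:" + port + "/jmxrmi");

        HashMap<String, Object> env = new HashMap<>();
        env.put("jmx.remote.x.password.file", pw.toString());
        env.put("jmx.remote.x.access.file", access.toString());
        JMXConnectorServer server =
                JMXConnectorServerFactory.newJMXConnectorServer(url, env, mbs);
        server.start();

        boolean unauthenticatedRejected = false;
        try {
            JMXConnectorFactory.connect(url).close();   // no credentials supplied
        } catch (SecurityException e) {
            unauthenticatedRejected = true;
        }

        HashMap<String, Object> clientEnv = new HashMap<>();
        clientEnv.put(JMXConnector.CREDENTIALS, new String[] {"testRole", "secret"});
        boolean authenticatedOk;
        try (JMXConnector c = JMXConnectorFactory.connect(url, clientEnv)) {
            authenticatedOk = c.getConnectionId() != null;
        }
        server.stop();
        return new boolean[] {unauthenticatedRejected, authenticatedOk};
    }
}
```

This keeps the whole setup inside the test JVM, with no agent loading and no tools.jar.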
I am trying to connect to my AWS S3 bucket to upload a file per these links' instructions.
http://docs.aws.amazon.com/AmazonS3/latest/dev/UploadObjSingleOpJava.html
http://docs.aws.amazon.com/AWSSdkDocsJava/latest/DeveloperGuide/credentials.html#credentials-specify-provider
For some reason, when it tries to instantiate the AmazonS3Client object, it throws an exception that's being swallowed, and it exits my Struts Action. Because of this, I don't have much information to debug with.
I've tried both the default credential profiles file approach (~/.aws/credentials) and the explicit secret and access key approach (new BasicAWSCredentials(access_key_id, secret_access_key)):
/**
 * Uses the secret key and access key to return an object for accessing AWS features
 * @return BasicAWSCredentials
 */
public static BasicAWSCredentials getAWSCredentials() {
    final Properties props = new Properties();
    try {
        props.load(Utils.class.getResourceAsStream("/somePropFile"));
        BasicAWSCredentials credObj = new BasicAWSCredentials(props.getProperty("accessKey"),
                props.getProperty("secretKey"));
        return credObj;
    } catch (IOException e) {
        log.error("getAWSCredentials IOException" + e.getMessage());
        return null;
    } catch (Exception e) {
        log.error("getAWSCredentials Exception: " + e.getMessage());
        e.printStackTrace();
        return null;
    }
}
********* Code attempting S3 Access **********
try {
    AmazonS3 s3client = new AmazonS3Client(Utils.getAWSCredentials());
    //AmazonS3 s3client = new AmazonS3Client(new ProfileCredentialsProvider());
    String fileKey = "catering/" + catering.getId() + fileUploadsFileName.get(i);
    System.out.println("Uploading a new object to S3 from a file\n");
    s3client.putObject(new PutObjectRequest(
            Utils.getS3BucketName(),
            fileKey, file));

    // Save Attachment record
    Attachment newAttach = new Attachment();
    newAttach.setFile_key(fileKey);
    newAttach.setFilename(fileUploadsFileName.get(i));
    newAttach.setFiletype(fileUploadsContentType.get(i));
    newAttach = aDao.add(newAttach);
} catch (AmazonServiceException ase) {
    System.out.println("Caught an AmazonServiceException, which " +
            "means your request made it " +
            "to Amazon S3, but was rejected with an error response" +
            " for some reason.");
    System.out.println("Error Message: " + ase.getMessage());
    System.out.println("HTTP Status Code: " + ase.getStatusCode());
    System.out.println("AWS Error Code: " + ase.getErrorCode());
    System.out.println("Error Type: " + ase.getErrorType());
    System.out.println("Request ID: " + ase.getRequestId());
    fileErrors.add(fileUploadsFileName.get(i));
} catch (AmazonClientException ace) {
    System.out.println("Caught an AmazonClientException, which " +
            "means the client encountered " +
            "an internal error while trying to " +
            "communicate with S3, " +
            "such as not being able to access the network.");
    System.out.println("Error Message: " + ace.getMessage());
    fileErrors.add(fileUploadsFileName.get(i));
} catch (Exception e) {
    System.out.println("Error Message: " + e.getMessage());
}
It never makes it past the AmazonS3 s3client = new AmazonS3Client(Utils.getAWSCredentials()); line. I've verified that the BasicAWSCredentials object contains the correct field values. Based on this information, what might be preventing the S3 client from connecting?
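One likely reason the failure looks "swallowed": NoClassDefFoundError is an Error, not an Exception, so a catch (Exception e) block at the end of the try never sees it. A minimal stdlib-only illustration (the thrown error is simulated here, standing in for the failing AmazonS3Client constructor):

```java
public class ThrowableDemo {

    // Simulates a constructor call that fails with a linkage error.
    static String tryInit() {
        try {
            throw new NoClassDefFoundError("com.amazonaws.ClientConfiguration");
        } catch (Exception e) {
            return "caught Exception";          // never reached: an Error is not an Exception
        } catch (Throwable t) {
            return "caught Throwable: " + t.getMessage();
        }
    }
}
```

Temporarily adding a catch (Throwable t) (or letting the error propagate to the container's log) is a quick way to surface the real stack trace.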
** EDIT **
I found this in the resulting stack trace that seems like useful information:
org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NoClassDefFoundError: Could not initialize class com.amazonaws.ClientConfiguration
at com.amazonaws.services.s3.AmazonS3Client.<init>(AmazonS3Client.java:384)
at gearup.actions.CateringController.assignAttachments(CateringController.java:176)
at gearup.actions.CateringController.update(CateringController.java:135)
Earlier I tried following a demo that created a ClientConfiguration object and set the protocol to HTTP. However, I ran into an issue where invoking the new ClientConfiguration() constructor threw a NullPointerException. Am I missing some requirement here?
It looks like your project is missing some dependencies.
You clearly have the aws-java-sdk-s3 jar configured in your project since it's resolving AmazonS3Client, but this jar also depends on aws-java-sdk-core. You need to add the core jar to your classpath.
This is totally weird, since aws-java-sdk-s3 explicitly depends on aws-java-sdk-core (see the pom.xml). Something is fishy here.
For me, it turned out to be a clash of Apache httpclient versions (one of my POMs pulled in an older version than the one the Amazon library uses).
I've heard from others of similar clashes, e.g. with jackson.
So for anyone in this situation, I suggest you check the Dependency Hierarchy view when you open a pom.xml in Eclipse (or use mvn dependency:tree; see here for more info).
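For example, to see every path through which httpclient enters the build (the groupId:artifactId filter is illustrative; adjust it to whichever artifact you suspect):

```shell
# List all routes by which httpclient enters the dependency graph,
# including conflict-resolved duplicates (run inside the project directory)
mvn dependency:tree -Dverbose -Dincludes=org.apache.httpcomponents:httpclient
```

Lines marked "omitted for conflict" in the verbose output point at the version clash.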
Also, check the first error message that the AWS SDK throws. It seems it's not linked as the cause in all subsequent stack traces, which only tell you something like java.lang.NoClassDefFoundError: Could not initialize class com.amazonaws.http.AmazonHttpClient.
I have created a new Glacier vault to use in development. I setup SNS and SQS for job completion notifications.
I am using the java SDK from AWS. I am able to successfully add archives to the vault but I get an error when creating a retrieval job.
The code I am using is from the SDK
InitiateJobRequest initJobRequest = new InitiateJobRequest()
    .withVaultName(vaultName)
    .withJobParameters(new JobParameters().withType("archive-retrieval").withArchiveId(archiveId));
I use the same code in Test and Production and it works fine, yet in development I get this error:
Status Code: 400, AWS Service: AmazonGlacier, AWS Request ID: xxxxxxxx, AWS Error Code: InvalidParameterValueException, AWS Error Message: Invalid vault name: arn:aws:glacier:us-west-2:xxxxxxx:vaults/xxxxxx
I know the vault name is correct and it exists as I use the same name to run the add archive job and it completes fine.
I had a suspicion that the vault may take a bit of time after creation before it will allow retrieval requests, but I couldn't find any documentation to confirm this.
Anyone had any similar issues? Or know if there are delays on vaults before you can initiate a retrieval request?
try {
    // Get the S3 directory file.
    S3Object object = null;
    try {
        object = s3.getObject(new GetObjectRequest(s3BucketName, key));
    } catch (com.amazonaws.AmazonClientException e) {
        logger.error("Caught an AmazonClientException");
        logger.error("Error Message: " + e.getMessage());
        return;
    }

    // Show
    logger.info("\tContent-Type: " + object.getObjectMetadata().getContentType());
    GlacierS3Dir dir = GlacierS3Dir.digestS3GlacierDirectory(object.getObjectContent());
    logger.info("\tGlacier object ID is " + dir.getGlacierFileID());

    // Connect to Glacier
    ArchiveTransferManager atm = new ArchiveTransferManager(client, credentials);
    logger.info("\tVault: " + vaultName);

    // create a name
    File f = new File(key);
    String filename = f.getName();
    filename = path + filename.replace("dir", "tgz");
    logger.info("Downloading to '" + filename + "'. This will take up to 4 hours...");
    atm.download(vaultName, dir.getGlacierFileID(), new File(filename));
    logger.info("Done.");
} catch (AmazonServiceException ase) {
    logger.error("Caught an AmazonServiceException.");
    logger.error("Error Message: " + ase.getMessage());
    logger.error("HTTP Status Code: " + ase.getStatusCode());
    logger.error("AWS Error Code: " + ase.getErrorCode());
    logger.error("Error Type: " + ase.getErrorType());
    logger.error("Request ID: " + ase.getRequestId());
} catch (AmazonClientException ace) {
    logger.error("Caught an AmazonClientException.");
    logger.error("Error Message: " + ace.getMessage());
}
The error message "Invalid vault name" means the archive is located in a different vault. Proof link: https://forums.aws.amazon.com/message.jspa?messageID=446187
I need to develop an application for managing WebSphere Application Server v7.0.0.11. I explored a bit and found out that we can use MBeans. Essentially I need to create something similar to WebSphere's web console.
My problem is that the application should be in C#/.NET, so is there any connector/adapter to invoke WebSphere's management API? Please point me in the right direction.
I am a C#/.NET developer and a total newbie in Java/WebSphere. I tried creating the Admin Client Example from the IBM site using packages found in the IBM/WebSphere/Cimrepos directory. The name of the jar file is com.ibm.wplc.was_7.0.0.11.jar; I unzipped that jar file in the same folder.
So now my app starts, connects to WebSphere successfully, and finds the MBean on the nodeAgent. The problem I am facing is in invoking the MBean. I am getting the following error message:
exception invoking launchProcess : javax.management.ReflectionException: Target Method not found com.ibm.ws.management.nodeagent.NodeAgent.launchProcess
I am using the following URL for the list of MBeans:
http://pic.dhe.ibm.com/infocenter/wasinfo/v7r0/index.jsp?topic=/com.ibm.websphere.javadoc.doc/web/mbeanDocs/index.html
I tried using different methods from the NodeAgent MBean, but no joy; I always get the same exception, "method not found".
Following is the code snippet for invoking launchProcess:
private void invokeLaunchProcess(String serverName)
{
    // Use the launchProcess operation on the NodeAgent MBean to start
    // the given server
    String opName = "launchProcess";
    String signature[] = { "java.lang.String" };
    String params[] = { serverName };
    boolean launched = false;
    try
    {
        Boolean b = (Boolean) adminClient.invoke(nodeAgent, opName, params, null);
        launched = b.booleanValue();
        if (launched)
            System.out.println(serverName + " was launched");
        else
            System.out.println(serverName + " was not launched");
    }
    catch (Exception e)
    {
        System.out.println("Exception invoking launchProcess: " + e);
    }
}
The full code can be found at the following link:
http://pic.dhe.ibm.com/infocenter/wasinfo/v6r0/index.jsp?topic=%2Fcom.ibm.websphere.express.doc%2Finfo%2Fexp%2Fae%2Ftjmx_develop.html
Please let me know what I am doing wrong; do I need to include some other package? I browsed com.ibm.wplc.was_7.0.0.11.jar, and there isn't any folder named nodeagent in com\ibm\ws\management. I found the same jar file in the AppServer\runtimes library.
Any help is greatly appreciated. Thanks in advance.
Getting the MBean:
private void getNodeAgentMBean(String nodeName)
{
    // Query for the ObjectName of the NodeAgent MBean on the given node
    try
    {
        String query = "WebSphere:type=NodeAgent,node=" + nodeName + ",*";
        ObjectName queryName = new ObjectName(query);
        Set s = adminClient.queryNames(queryName, null);
        if (!s.isEmpty())
            nodeAgent = (ObjectName) s.iterator().next();
        else
        {
            System.out.println("Node agent MBean was not found");
            System.exit(-1);
        }
    }
    catch (MalformedObjectNameException e)
    {
        System.out.println(e);
        System.exit(-1);
    }
    catch (ConnectorException e)
    {
        System.out.println(e);
        System.exit(-1);
    }
    catch (Exception e)
    {
        e.printStackTrace();
        System.exit(-1);
    }
    System.out.println("Found NodeAgent MBean for node " + nodeName);
}
It seems my problem was with the adminClient.invoke method: I wasn't passing the parameters correctly. It got fixed after supplying the correct parameters. I hope this helps if someone is having the same problem.
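The self-answer doesn't show the corrected call, but note that the question's snippet builds a signature array and then passes null in its place. In JMX, the signature array is what selects the operation overload; without it, the server looks for a no-argument launchProcess and reports "method not found". A runnable illustration against the platform MBeanServer (the Greeter MBean is invented for this demo and plays the role of the NodeAgent):

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;
import javax.management.ReflectionException;

public class SignatureDemo {

    // A trivial MBean with a one-String-argument operation, mirroring launchProcess(String).
    public interface GreeterMBean {
        String greet(String name);
    }

    public static class Greeter implements GreeterMBean {
        public String greet(String name) { return "hello " + name; }
    }

    // Invokes the operation twice: once without a signature (as in the question)
    // and once with the correct signature array.
    static String[] run() throws Exception {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        ObjectName on = new ObjectName("demo:type=Greeter");
        mbs.registerMBean(new Greeter(), on);
        String[] params = { "node1" };
        String[] signature = { "java.lang.String" };
        String withoutSig;
        try {
            mbs.invoke(on, "greet", params, null);   // no signature: looked up as greet()
            withoutSig = "unexpected success";
        } catch (ReflectionException e) {
            withoutSig = "ReflectionException";      // "method not found", as in the question
        }
        String withSig = (String) mbs.invoke(on, "greet", params, signature);
        mbs.unregisterMBean(on);
        return new String[] { withoutSig, withSig };
    }
}
```

The same rule applies to WebSphere's AdminClient.invoke, which follows the MBeanServerConnection contract.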