How to add external .jar files to an IBM Integration Bus 10 Java Compute node

I am trying to create a PDF from an XML file using Apache FOP. It works in the NetBeans and Eclipse IDEs, but when I invoke the flow from SoapUI, the Java Compute node inside IIB throws this Java error:
java.lang.NoClassDefFoundError: org/apache/fop/apps/FopFactory
However, I have already added the necessary libraries to the project (they were added without errors) and referenced them.
I hope you can help me. Thank you all.

It depends on how you want to work, but you simply have to put the jar in shared-classes.
This folder exists at the execution group (integration server) level and at the broker (integration node) level. If you plan to reuse the jar later, I would suggest putting it at the broker level; otherwise, use the execution group level.
Sample paths (Unix in this case):
/var/mqsi/shared-classes (for all brokers on this VM; NOT recommended)
/var/mqsi/config/"yourBrokerName"/shared-classes (broker level)
/var/mqsi/config/"yourBrokerName"/"yourExecutionGroupName"/shared-classes (execution group level)
If you put it at the execution group level, the execution group needs to be restarted. If you put it at the broker level, you should restart the whole broker.
Feel free to contact me if you have other questions, but with the shared-classes keyword you should be able to find everything you are looking for.
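Once the jar is visible from shared-classes, the Java Compute node can use the FOP classes directly. As a minimal sketch, assuming FOP 2.x (the class name and base URI here are placeholders):

import java.io.File;
import com.ibm.broker.javacompute.MbJavaComputeNode;
import com.ibm.broker.plugin.MbException;
import com.ibm.broker.plugin.MbMessageAssembly;
import com.ibm.broker.plugin.MbOutputTerminal;
import org.apache.fop.apps.FopFactory;

public class CreatePdf_JavaCompute extends MbJavaComputeNode {
    public void evaluate(MbMessageAssembly inAssembly) throws MbException {
        // Resolves only if fop.jar and its dependencies are on shared-classes;
        // this is the line that previously threw NoClassDefFoundError
        FopFactory fopFactory = FopFactory.newInstance(new File(".").toURI());
        // ... build the PDF from the incoming XML here ...
        MbOutputTerminal out = getOutputTerminal("out");
        out.propagate(inAssembly);
    }
}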

Related

How to auto-deploy compute task without placing jar in cluster node using thin client

We upgraded to the latest Ignite version, 2.10.0, and are trying to launch compute tasks using the thin client configuration.
Problem: I have to place the jar on the cluster node every time I change a single line of code.
Is there any way I can execute tasks dynamically (auto-deploy) without placing the jar on the cluster node?
Your response will be greatly appreciated.
It's not possible yet to deploy tasks dynamically (as peerClassLoading does) for thin clients. For now, you need to have the tasks deployed before calling them.
More details: https://ignite.apache.org/docs/latest/thin-clients/java-thin-client#executing-compute-tasks
Alternatively, you might check the docs regarding UriDeploymentSpi. In short, instead of manually updating the jars, you can point the cluster at a local folder or URI that it will check from time to time, deploying a new version of the code when one becomes available.
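As a rough sketch of both halves, assuming the Ignite 2.10 Java API (the task class name, folder, and address are hypothetical):

// Thin client side: the task class must already be deployed on the server nodes.
import org.apache.ignite.Ignition;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

public class ThinClientCompute {
    public static void main(String[] args) {
        ClientConfiguration cfg = new ClientConfiguration().setAddresses("127.0.0.1:10800");
        try (IgniteClient client = Ignition.startClient(cfg)) {
            // Executes a pre-deployed task by its class name
            Object result = client.compute().execute("org.acme.MyComputeTask", "input-arg");
            System.out.println(result);
        }
    }
}

On the server side, UriDeploymentSpi watches a folder (or URI) and redeploys task jars when they change:

import java.util.Collections;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.deployment.uri.UriDeploymentSpi;

public class ServerWithUriDeployment {
    public static void main(String[] args) {
        UriDeploymentSpi spi = new UriDeploymentSpi();
        spi.setUriList(Collections.singletonList("file:///opt/ignite/deployment"));
        Ignition.start(new IgniteConfiguration().setDeploymentSpi(spi));
    }
}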

Flow execution in an EJB project in Java

Hi, I am working on some very, very old projects that contain 10 to 15 dependencies, and I am curious whether there is any tool or utility to track the flow of execution through Java classes, methods, and lines: return types, the queries executed during an operation, etc.
For example: if I send a SOAP request from SoapUI, it goes into @WebService() and the flow goes on from there.
I have seen How do I trace methods calls in Java? but I am not sure how to incorporate and execute that in an existing project. Right now I am doing the job manually by debugging with Eclipse. Kindly advise whether there is a way to write code, or a tool, that would let me see the executions.
JPDA. Any decent IDE will connect to your container via the JPDA protocol and allow you to debug from inside the container. You have to configure the container with some switches (which get passed on to the JVM at startup time). Check the documentation for your app server for how it prefers to be configured for JPDA.
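For reference, the JVM switch usually looks like this (the exact form varies by JDK version, and the port is an arbitrary choice):
-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005
On JDK 9 and later, use address=*:5005 if the debugger connects from another machine. Then attach your IDE's remote debugger to that port.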

Jenkins not finding krb5.conf file in distributed build from scripted Jenkinsfile

I have a scripted Jenkinsfile running in our distributed Jenkins build environment.
I have code performing Kerberos authentication in the Jenkinsfile. That code is based on two small Java programs, both successfully authenticating to Kerberos. Those two Java programs run on both my Windows workstation and a Linux virtual machine guest.
That is: I have a pair of working Java programs that successfully perform Kerberos authentication from Windows and from Linux using a set of Kerberos config files. When I translate the code to my Jenkinsfile, it apparently fails at step 1: finding my carefully constructed krb5.conf (and login.conf) files.
The Kerberos code is in a correctly configured global shared library. I know it is correctly configured because the library is used elsewhere in my Jenkinsfile and I know it has downloaded the correct Kerberos libraries from our repository because I don't get any kind of compilation or class not found errors.
The specific error message, which I have not managed to budge over dozens of different builds, trying to put the krb5.conf file everywhere I can think Jenkins might look for it, is this:
GSSException: Invalid name provided (Mechanism level: KrbException: Cannot locate default realm)
Yes, there's a longer stack trace, but if you know what's going on, that's all you should need.
I have tried using System.setProperty() from the Jenkinsfile to point at the krb5.conf file in several forms: a file checked in to the project, a file created using Jenkins file credentials, and a string containing the config file written directly to the build workspace with the writeFile step. In each case, Jenkins seems to simply not find the krb5.conf file and I get the same "Cannot locate default realm" error.
It's problematic to put the file in /etc for a variety of reasons. Plus, should I really have to put the Kerberos config files there when there is a clearly elucidated algorithm for finding them, and I seem to be following it?
If you have any idea what's going on, any help would be greatly appreciated.
NB: I have successfully authenticated to Kerberos using the krb5.conf and login.conf files at issue here. They work. Kerberos and my configs don't seem to be the issue. Whatever Jenkins is or is not doing seems to be the issue.
Answering my own question as I did eventually find a resolution to both this and to successfully using Kerberos authentication (after a fashion) with Jenkins Pipeline in part of our build process.
Our Jenkins instance uses a number of executors in our little part of the AWS cloud. Practically speaking, the only part of our pipeline that runs on the executors is the build step: Jenkins checks out workspaces to the build nodes (the executors) and performs builds on those nodes.
Almost everything else, and explicitly everything in Jenkins' so-called global shared libraries including the Kerberos code referenced in my original question, is actually run on master: even when you wrap calls to a function in a global shared library in a node() step in your Jenkinsfile, those calls still run on master.
Because, obviously, right?
What I was trying to do was place the krb5.conf file in all the places it should be on the build nodes. But since my Kerberos code wasn't part of the build (or of one of the few other steps, like sh(), that run on nodes in Jenkins), it wasn't happening on the nodes: it was happening on the Jenkins master. Even though the calls were wrapped in a node() step. I'm not bitter. It's fine.
Placing the krb5.conf file in the correct location on master resolved this issue, but created other problems. Ultimately, I put the Kerberos logic into a small Java command line utility in a jar, along with the appropriate config files. That was downloaded via curl and executed, all in an sh() step in our pipeline. Not the most elegant solution, but even after discussing the issue with Cloudbees support, this was the solution they recommended for what we were trying to do.
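The sh() step itself then reduces to a couple of ordinary commands (the artifact URL and jar name here are hypothetical):
curl -fsSL -o krb-util.jar https://artifacts.example.com/releases/krb-util.jar
java -jar krb-util.jar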
The Kerberos implementation inside the JDK uses the system property "java.security.krb5.conf" to locate krb5.conf. I am not sure whether you are using a third-party Kerberos library or not.
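For illustration, with the JDK's built-in implementation that looks roughly like the following; the paths and the JAAS entry name are hypothetical, and they must exist on the machine where the code actually runs (per the answer above, the Jenkins master for shared-library code):

import javax.security.auth.login.LoginContext;
import javax.security.auth.login.LoginException;

public class KerberosCheck {
    public static void main(String[] args) throws LoginException {
        // Must be set before the first JAAS/GSS call reads the Kerberos config
        System.setProperty("java.security.krb5.conf", "/var/jenkins_home/krb5.conf");
        System.setProperty("java.security.auth.login.config", "/var/jenkins_home/login.conf");

        // "KrbLogin" must match an entry in login.conf
        LoginContext lc = new LoginContext("KrbLogin");
        lc.login();
        System.out.println("Authenticated as: " + lc.getSubject());
    }
}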

Java library search order

Background / example (but question is probably broader than this):
I'm trying to write a Java application that accesses a Google AppEngine server. To set up the project for this, I followed the steps outlined in the accepted answer here:
Developing a Java Application that uses an AppEngine database
I am now running into a problem where I'm trying to execute an HttpURLConnection-request in the Java client application (i.e. not in the AppEngine server code), but Google's AppEngine library seems to have replaced the Java version of this connection with its own urlFetch()-implementation. This leads to me getting the following error: "The API package 'urlfetch' or call 'Fetch()' was not found.".
Actual question:
What determines the order in which Java looks through libraries to find needed class implementations? Is there a way to modify this order (specifically in Eclipse) so that the actual JRE functions take precedence over a third-party library that is also needed? Or is there maybe something special going on with the implementation of URL in the example given above that cannot be resolved by specifying a library order?
Update:
Turns out the problem I was seeing had nothing to do with the order in which classes were loaded. The AppEngine server code explicitly calls setContentHandlerFactory(...) to register its own handler during execution rather than at library load time (see here for a fix to this specific issue). So, while my "actual question" might still stand, I haven't actually yet come across a scenario where it matters...
You might have to define a custom ClassLoader.
Also, take a look at this answer.
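If you do go that route, the usual pattern is a "parent-last" loader that searches its own jars before delegating upward. A minimal sketch (which jar URLs you pass in is up to you):

import java.net.URL;
import java.net.URLClassLoader;

public class ParentLastClassLoader extends URLClassLoader {
    public ParentLastClassLoader(URL[] urls, ClassLoader parent) {
        super(urls, parent);
    }

    @Override
    protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
        synchronized (getClassLoadingLock(name)) {
            Class<?> c = findLoadedClass(name);
            if (c == null) {
                try {
                    c = findClass(name); // look in our own URLs first
                } catch (ClassNotFoundException e) {
                    c = super.loadClass(name, resolve); // fall back to the parent
                }
            }
            if (resolve) {
                resolveClass(c);
            }
            return c;
        }
    }
}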
Inside Eclipse you can adjust the classpath order: right-click your project, choose Properties > Java Build Path, then open the "Order and Export" tab. Of course, this won't affect your program when it runs outside Eclipse.

Hazelcast dedicated nodes

What is the simplest way to run Hazelcast nodes on dedicated servers?
We have a web application that uses a Hazelcast distributed map.
Currently the Hazelcast nodes are configured to run in the Servlet Container nodes.
As we scale up, we'd like to add dedicated hardware as Hazelcast nodes.
Then we won't need full Hazelcast nodes in the Servlet Containers anymore; those can be clients. (There are licensing costs associated with the Servlet Containers, so getting load off them is good, don't ask...)
So the question is, what's a minimal Hazelcast node installation? Something analogous to a memcached installation.
All it needs to do is read configuration and start up, no local clients.
I see it supports Jetty, but is that needed at all, or is there some simple class in those jars I could execute on a raw JVM?
Just create a simple class that calls Hazelcast.init.
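Something along these lines; note that Hazelcast.init(Config) belongs to the old 1.x/2.x API, while newer releases use newHazelcastInstance() instead:

import com.hazelcast.core.Hazelcast;

public class HazelcastServer {
    public static void main(String[] args) {
        // Picks up hazelcast.xml from the working directory or classpath if present,
        // otherwise starts with the default configuration
        Hazelcast.newHazelcastInstance();
    }
}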
There are a number of test classes in the com.hazelcast.examples package which can be run from the bin directory of the hazelcast distribution.
TL;DR
Newer version:
java -cp hazelcast-3.7.2.jar com.hazelcast.core.server.StartServer
Older version:
java -cp hazelcast-2.0.3.jar com.hazelcast.examples.StartServer
This will start a standalone Hazelcast instance.
If you're using maven:
mvn -DgroupId=com.hazelcast -DartifactId=hazelcast -Dversion=3.7.2 dependency:get
cd ~/.m2/repository/com/hazelcast/hazelcast/3.7.2
will get you to the folder with the jar
You can get it to run by calling {hazelcast-directory}/bin/server.sh or, on Windows, {hazelcast-directory}/bin/server.bat.
The configuration file can still be found in {hazelcast-directory}/bin/hazelcast.xml
This is an update to thSoft's answer as that way is no longer valid.
You can also simply run hazelcast/bin/start.sh (the configuration file is hazelcast/bin/hazelcast.xml).
