Inject a parameter from client to EJB bean in the context - java

How can I inject a parameter from the client into the context of an EJB?
Something like this:
final Hashtable<String, String> jndiProperties = new Hashtable<String, String>();
jndiProperties.put("java.naming.factory.initial", "org.ow2.carol.jndi.spi.MultiOrbInitialContextFactory");
jndiProperties.put("java.naming.factory.url.pkgs", "org.ow2.jonas.naming");
jndiProperties.put("java.naming.provider.url", "rmi://localhost:1099");
final Context context = new InitialContext(jndiProperties);
Object obj = context.lookup("MyEjbTest");
context.addToEnvironment("user", new Object());
On the server side, an interceptor tries to get the parameter injected by the client:
public Object intercept(InvocationContext ctx) throws Exception {
    Object o = ctx.getContextData().get("user");
    if (o != null) {
        LOG.info("Exists " + o.toString());
        return ctx.proceed();
    } else {
        return null;
    }
}
The parameter user is never injected into the context, and on the server side o is always null. Is there any way to handle that?

No, there is no standard way to implicitly pass data to an EJB from a client. You must explicitly pass data to the EJB via a method argument.
If you're using RMI-IIOP, then you could write your own interceptor to transfer context data to the server and then store it in a thread local. If you're using WebSphere Application Server, you could use application context work areas (an attempt was made to standardize this in JSR 149, but it was not deemed portable enough). These options are likely too niche or too cumbersome, so you're probably better off just explicitly passing the data via a method argument.
A complete example for sending additional context data using RMI-IIOP is quite extensive, but the general steps are:
Start by registering an ORBInitializer. See the javadoc therein, but since ORB configuration is usually tightly controlled by the application server, you should read your application server documentation, particularly for how (or if it's even supported at all) you can add an ORB interceptor and how the class loading works.
In the client, your ORBInitializer should call ORBInitInfo.add_client_request_interceptor. In your implementation of the send_request method, call ClientRequestInfo.add_request_service_context.
Normally, you would reserve a vendor prefix with the OMG for the service context ID, but if it's local to your environment (i.e., you're not providing your application to third parties), then you can probably pick one that doesn't conflict with any other products in your environment.
The bytes you send are of your choosing. Your client would probably set some data in a thread local, and then your implementation of the send_request method would serialize the data to a byte[] to be added to the ServiceContext.
In the server, your ORBInitializer should call add_server_request_interceptor. Your implementation of this interceptor would decode the service context sent by the client and probably set a thread local variable for the duration of the request and remove it at the end.
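Putting those steps together, here is a rough sketch of what the pair of interceptors might look like. The service context id, the class names and the plain-String payload are illustrative choices, not part of any standard, and how you register the ORBInitializer (typically via an org.omg.PortableInterceptor.ORBInitializerClass.<fqcn> ORB property) is vendor specific:
// Sketch only: service context id and String payload are arbitrary choices.
import org.omg.CORBA.LocalObject;
import org.omg.IOP.ServiceContext;
import org.omg.PortableInterceptor.*;
import org.omg.PortableInterceptor.ORBInitInfoPackage.DuplicateName;

public class UserContextInitializer extends LocalObject implements ORBInitializer {

    static final int SERVICE_CONTEXT_ID = 0x4f4e21; // hypothetical, not OMG-registered
    public static final ThreadLocal<String> USER = new ThreadLocal<>();

    public void pre_init(ORBInitInfo info) {}

    public void post_init(ORBInitInfo info) {
        try {
            info.add_client_request_interceptor(new ClientSide());
            info.add_server_request_interceptor(new ServerSide());
        } catch (DuplicateName e) {
            throw new RuntimeException(e);
        }
    }

    static class ClientSide extends LocalObject implements ClientRequestInterceptor {
        public String name() { return "user-ctx-client"; }
        public void destroy() {}
        public void send_request(ClientRequestInfo ri) {
            String user = USER.get(); // set by the client before the remote call
            if (user != null) {
                ri.add_request_service_context(
                        new ServiceContext(SERVICE_CONTEXT_ID, user.getBytes()), false);
            }
        }
        public void send_poll(ClientRequestInfo ri) {}
        public void receive_reply(ClientRequestInfo ri) {}
        public void receive_exception(ClientRequestInfo ri) {}
        public void receive_other(ClientRequestInfo ri) {}
    }

    static class ServerSide extends LocalObject implements ServerRequestInterceptor {
        public String name() { return "user-ctx-server"; }
        public void destroy() {}
        public void receive_request_service_contexts(ServerRequestInfo ri) {
            try {
                ServiceContext sc = ri.get_request_service_context(SERVICE_CONTEXT_ID);
                USER.set(new String(sc.context_data)); // stash for the duration of the request
            } catch (org.omg.CORBA.BAD_PARAM missing) {
                // this client did not send the context
            }
        }
        public void receive_request(ServerRequestInfo ri) {}
        public void send_reply(ServerRequestInfo ri) { USER.remove(); }
        public void send_exception(ServerRequestInfo ri) { USER.remove(); }
        public void send_other(ServerRequestInfo ri) { USER.remove(); }
    }
}
The client would set UserContextInitializer.USER before making the call; the server-side bean or EJB interceptor could then read the same ThreadLocal while the request is being processed.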

Related

How to create @RabbitListener to be idempotent

Our configuration is: 1...n Message receivers with a shared database.
Messages should only be processed once.
@RabbitListener(bindings = @QueueBinding(
        value = @Queue(value = "message-queue", durable = "true"),
        exchange = @Exchange(value = TOPIC_EXCHANGE, type = "topic", durable = "true"),
        key = MESSAGE_QUEUE1_RK)
)
public void receiveMessage(CustomMessage message) throws InterruptedException {
    System.out.println("I have been received = " + message);
}
We want to guarantee that messages will be processed exactly once; we have a message store with the ids of messages already processed.
Is it possible to hook in this check before receiveMessage?
We tried a MessagePostProcessor with a rabbitTemplate, but that didn't seem to work.
Any advice on how to do this?
We tried a MethodInterceptor and this works, but it is pretty ugly.
Thanks
Solution found - thanks to Gary.
I created a MessagePostProcessorInjector which implements SmartLifecycle,
and on startup I inspect each container and, if it is an AbstractMessageListenerContainer, add a custom MessagePostProcessor
and a custom ErrorHandler which looks for certain types of Exceptions and drops them (others are forwarded to the defaultErrorHandler).
Since we are using DLQs I found that throwing exceptions or setting the message to null wouldn't really work.
I'll make a pull request to ignore null Messages after an MPP.
Interesting; the SimpleMessageListenerContainer does have a property afterReceivePostProcessors (not currently available via the listener container factory used by the annotation, but it could be injected later).
However, those postprocessors won't help because we still invoke the listener.
Please feel free to open a JIRA Improvement Issue for two things:
expose the afterReceivePostProcessors in the listener container factories
if a post processor returns null, skip calling the listener method.
(correction, the property is indeed exposed by the factory).
EDIT
How it works...
During context initialization...
For each annotation detected by the bean post processor, a container is created and registered in the RabbitListenerEndpointRegistry.
Near the end of context initialization, the registry is start()ed and it starts all containers that are configured for autoStartup (default).
To do further configuration of the container before it's started (e.g. for properties not currently exposed by the container factories), set autoStartup to false.
You can then get the container(s) from the registry (either as a collection or by id). Simply @Autowire the registry in your app.
Cast the container to a SimpleMessageListenerContainer (or alternatively a DirectMessageListenerContainer if using Spring AMQP 2.0 or later and you are using its factory instead).
Set the additional properties (such as the afterReceivePostProcessors); then start() the container.
Note: until we enhance the container to allow MPPs that return null, a possible alternative is to throw an AmqpRejectAndDontRequeueException from the MPP. However, this is probably not what you want if you have DLQs configured.
Throwing an exception extending ImmediateAcknowledgeAmqpException from postProcessMessage() of the duplicate-checking MPP when the message is a duplicate will also prevent the message from reaching the listener.
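To sketch how those pieces fit together, assuming the listener container is registered with id "messageListener" and auto-startup disabled (e.g. on the container factory), and a hypothetical ProcessedMessageStore of your own that records handled message ids:
// Sketch only: "messageListener" id and ProcessedMessageStore are assumptions.
import org.springframework.amqp.ImmediateAcknowledgeAmqpException;
import org.springframework.amqp.core.Message;
import org.springframework.amqp.rabbit.listener.RabbitListenerEndpointRegistry;
import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;
import org.springframework.context.event.ContextRefreshedEvent;
import org.springframework.context.event.EventListener;
import org.springframework.stereotype.Component;

@Component
public class DuplicateCheckingContainerConfigurer {

    private final RabbitListenerEndpointRegistry registry;
    private final ProcessedMessageStore processedStore; // hypothetical id store backed by your database

    public DuplicateCheckingContainerConfigurer(RabbitListenerEndpointRegistry registry,
            ProcessedMessageStore processedStore) {
        this.registry = registry;
        this.processedStore = processedStore;
    }

    @EventListener(ContextRefreshedEvent.class)
    public void configure() {
        SimpleMessageListenerContainer container =
                (SimpleMessageListenerContainer) registry.getListenerContainer("messageListener");
        container.setAfterReceivePostProcessors(this::dropDuplicates);
        container.start(); // start manually because auto-startup was disabled
    }

    private Message dropDuplicates(Message message) {
        String id = message.getMessageProperties().getMessageId();
        if (id != null && !processedStore.markProcessed(id)) {
            // Duplicate: ack it without invoking the @RabbitListener method.
            throw new ImmediateAcknowledgeAmqpException("duplicate message " + id);
        }
        return message;
    }

    public interface ProcessedMessageStore {
        boolean markProcessed(String messageId); // true only the first time an id is seen
    }
}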

GRPC Java pass data from server interceptor to rpc service call

We are using Java GRPC for one of our internal services, and we have a server-side interceptor that we use to grab information from the headers and set it up in a logging context that uses a ThreadLocal internally.
So in our interceptor we do something similar to this:
LogMessageBuilder.setServiceName("some-service");
final String someHeaderWeWant = headers.get(HEADER_KEY);
final LoggerContext.Builder loggingContextBuilder = new LoggerContext.Builder()
.someFieldFromHeaders(someHeaderWeWant);
LoggerContext.setContext(loggingContextBuilder.build());
Then in our service call we access it like this:
LoggingContext loggingContext = LoggingContext.getCurrent();
However the current context is null some of the time.
We then tried to use the GRPC Context class like below:
LogMessageBuilder.setServiceName("some-service");
final String someHeaderWeWant = headers.get(HEADER_KEY);
final LoggerContext.Builder loggingContextBuilder = new LoggerContext.Builder()
.someFieldFromHeaders(someHeaderWeWant);
Context.current().withValue(LOGGING_CONTEXT_KEY, loggingContextBuilder.build()).attach();
Then accessing it in the service call like:
LoggingContext context = LOGGING_CONTEXT_KEY.get(Context.current());
However, that is also sometimes null, and if I print out the memory addresses it appears that early on the context is always the ROOT context regardless of my attaching it in the interceptor, but after a few calls the contexts are correct and the logger data is there like it should be.
So if anyone has any ideas or better ways to propagate data from an interceptor to the service call I would love to hear it.
Each callback can be called on a different thread, so the thread-local has to be set for each callback. It seems you may accidentally be getting Contexts intended for other RPCs.
grpc-java 0.12.0 should be released this week. Context has been partially integrated in 0.12.0, and we also added Contexts.interceptCall() which is exactly what you need: it attaches and detaches the context for each callback.
In 0.12.0, you should now see new contexts being created for each server call (instead of ROOT) and contexts propagated from client calls to StreamObserver callbacks.
As another note, unlike ThreadLocal Context is intended to be tightly scoped: after attach(), you should generally have a try-finally to detach().
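For reference, a minimal sketch of a server interceptor using Contexts.interceptCall; it assumes the logging data can be reduced to a String value, and the header and key names are illustrative:
// Sketch only: header name, key name and String payload are assumptions.
import io.grpc.*;

public class LoggingContextInterceptor implements ServerInterceptor {

    static final Metadata.Key<String> HEADER_KEY =
            Metadata.Key.of("some-header", Metadata.ASCII_STRING_MARSHALLER);
    static final Context.Key<String> LOGGING_CONTEXT_KEY = Context.key("logging-context");

    @Override
    public <ReqT, RespT> ServerCall.Listener<ReqT> interceptCall(
            ServerCall<ReqT, RespT> call, Metadata headers, ServerCallHandler<ReqT, RespT> next) {
        String someHeaderWeWant = headers.get(HEADER_KEY);
        // Derive a new Context carrying the value; Contexts.interceptCall attaches it
        // before every listener callback and detaches it afterwards.
        Context context = Context.current().withValue(LOGGING_CONTEXT_KEY, someHeaderWeWant);
        return Contexts.interceptCall(context, call, headers, next);
    }
}
In the service implementation, LoggingContextInterceptor.LOGGING_CONTEXT_KEY.get() then returns the value on whichever thread the callback happens to run.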

Access a service in JBoss through JNDI (RMI?)

I need to make a service in JBoss and access it through JNDI on the client side.
I have been playing around a bit with JNDI and made something like this on the client side:
import javax.naming.*;

public class App {

    public static void main(String[] args) throws NamingException {
        App app = new App();
        app.setSysProp();
        app.setObject();
        app.getObject();
    }

    public void setSysProp() {
        System.setProperty(Context.PROVIDER_URL, "127.0.0.1:1099");
        System.setProperty(Context.INITIAL_CONTEXT_FACTORY, "org.jnp.interfaces.NamingContextFactory");
    }

    public void setObject() throws NamingException {
        Context context = new InitialContext();
        MyObject obj = new MyObject();
        obj.setName("NameOfMyObject");
        context.bind("obj", obj);
    }

    public void getObject() throws NamingException {
        Context context = new InitialContext();
        MyObject obj = (MyObject) context.lookup("obj");
        System.out.println(obj.getName());
    }
}
This only binds an object to JNDI on the client side and later retrieves it.
Now what I want is to bind a similar object on the server side (JBoss 4.2.3) and, through it, perform some operations on the server. How can this be done? I've read that something named RMI should be used in this case, but what exactly is that and how do I use it?
I've read that something named RMI should be used in this case, but what exactly is that and how do I use it?
RMI is the Java standard API for Remote Method Invocation. It allows you to execute a method on an object that resides in another Java Virtual Machine. Keep in mind that you don't need an application server like JBoss to communicate with a remote Java object. This link provides a simple tutorial (notice that JBoss, or any other app server, is not mentioned).
I need to make a service in JBoss and access it through JNDI on the client side.
This is a different thing. Although JBoss (as a Java EE specification compliant server) uses RMI extensively, you don't need to understand how this API works. What you need is to create a server-side component called an EJB, which allows you to have a service running on a server.
How can this be done?
There are hundreds of tutorials about how to implement a basic EJB. Choose one compatible with your JBoss version, because implementation details often change from one version to another.
You will also find that the EJB specification has been evolving. With JBoss 4.2.3 and Java 5 you can start with EJB 3.0, which is easier than the previous version (2.1).
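To give a rough idea of what that looks like, here is a minimal EJB 3.0 sketch plus a standalone JNDI client, assuming JBoss 4.2.x defaults; the bean name, the JNDI name ("CalculatorBean/remote") and the provider URL are illustrative and depend on how you package and configure things:
// Bean and remote interface (deployed on JBoss):
import javax.ejb.Remote;
import javax.ejb.Stateless;

@Remote
public interface Calculator {
    int add(int a, int b);
}

@Stateless
public class CalculatorBean implements Calculator {
    public int add(int a, int b) {
        return a + b;
    }
}

// Standalone client (separate source file, with the JBoss client libraries,
// e.g. jbossall-client.jar, on the classpath):
import java.util.Properties;
import javax.naming.Context;
import javax.naming.InitialContext;

public class CalculatorClient {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(Context.INITIAL_CONTEXT_FACTORY, "org.jnp.interfaces.NamingContextFactory");
        props.put(Context.URL_PKG_PREFIXES, "org.jboss.naming:org.jnp.interfaces");
        props.put(Context.PROVIDER_URL, "jnp://127.0.0.1:1099");

        Calculator calc = (Calculator) new InitialContext(props).lookup("CalculatorBean/remote");
        System.out.println(calc.add(2, 3)); // remote call, marshalled by the server's RMI layer
    }
}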

OSGi services architecture: creation of service at request of consumer

I am developing an application in Eclipse RCP. I need help with a design decision concerning the design of a service.
I have some bundles which are used to provide an REngine object to other modules. The REngine is an interface to a calculation engine, which can be implemented in multiple ways. The bundles provide instances of REngine by connecting to a remote server or starting a local calculation thread. Some bundles require configuration by the GUI (but also need to be available on a headless platform). A client Bundle may request multiple REngine objects for parallel calculations.
I currently register these modules to provide an REngine service. The service is created by a ServiceFactory, which either launches a local calculation instance or a remote (server) instance. The client is responsible for trying out all Service registrations of the REngine class and selecting the right one.
The code to do this can be summarised as follows:
class API.REngine { ... }

class REngineProvider.Activator {
    public void start(BundleContext ctx) {
        ctx.registerService(REngine.class.getName(), new REngineFactory(), null);
    }
}

class REngineProvider.REngineFactory implements ServiceFactory {
    public Object getService(Bundle bundle, ServiceReference reference) {
        return new MyREngineImplementation();
    }
    public void ungetService(REngine service) { // signature simplified for brevity
        service.releaseAssociatedResources();
    }
}

class RConsumer.Class {
    REngine getREngine() {
        ServiceReference[] references = bundleContext.getAllServiceReferences(REngine.class.getName(), null);
        for (ServiceReference ref : references) {
            try {
                return bundleContext.getService(ref);
            } catch (Exception e) {} // too bad, try the next one
        }
        return null; // no provider could supply an engine
    }
}
I would like to keep this model. It is nice that the OSGi service spec matches my business requirement that REngine objects are living objects which should be released when they are not needed anymore.
However, a registered service can only provide one service instance per bundle. The second time the service is requested, a cached instance is returned (instead of creating a new one). This does not match my requirement; a bundle should be able to get multiple REngine objects from the same provider.
I have already looked at other OSGi framework classes, but nothing seems to help. The alternative is the whiteboard model, but it seems strange to register an REngineRequestService that is used by the REngineProvider bundle to give out a live REngine.
How do I implement this in OSGi? As a reminder, here is my list of requirements:
Easy enabling and disabling of REngineProvider bundles. Client code will just use another provider instead.
Configuration of REngineProvider bundles.
Multiple REngine instances per client bundle.
Explicit release of REngine instances
REngine creation can fail. The client module should be able to know the reason why.
Just to add the solution I have chosen, for future reference. It seems the OSGi Services platform is not made for "requesting a service". It is the provider bundle that creates a service, and the client bundle that can find and use the services. It is not possible to provide an automatic "Factory" for services per user request.
The solution chosen involves the OSGi whiteboard model. On first sight, this may seem very difficult to manage, but Blueprint can help a lot!
Provider blueprint.xml file:
<reference-list interface="org.application.REngineRequest"
availability="optional">
<reference-listener
bind-method="bind" unbind-method="unbind">
<bean class="org.provider.REngineProvider"/>
</reference-listener>
The class REngineRequest is a shared API class that lets the provider hand in its REngine object, or set an Exception explaining why the creation did not work.
For the client, using an REngine is now as easy as doing:
REngineRequest req = new REngineRequest();
ServiceRegistration reg = bundleContext.registerService(REngineRequest.class.getName(), req, engineCreationProperties);
req.getEngine().doSomeStuff();
reg.unregister();
We make the assumption that the provider will never stop while the client is using the REngine. If it does, the REngine becomes invalid.
ComponentFactory from Declarative Services is what you need. Most of the time you should use DS rather than manually registering and looking up services.
The provider side should register an REngine factory service (you don't have to implement the factory itself; DS will do that for you). The consumer should declare a one-to-many dependency on the REngine service. At runtime, all available factories will be injected and the consumer can go through them to create actual REngine instances.
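A rough sketch of that setup, assuming DS annotations (DS 1.3 or later for field injection), that REngine is an interface, and a made-up factory id "rengine.factory":
// Sketch only: factory id and class names are illustrative.
import org.osgi.service.component.ComponentFactory;
import org.osgi.service.component.ComponentInstance;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

// Provider side: DS registers a ComponentFactory service for this component.
@Component(factory = "rengine.factory")
public class MyREngineImplementation implements REngine {
    // ... calculation engine implementation ...
}

// Consumer side: ask the factory for as many instances as needed, release them explicitly.
@Component
public class RConsumer {

    @Reference(target = "(component.factory=rengine.factory)")
    private ComponentFactory engineFactory;

    public void calculate() {
        ComponentInstance instance = engineFactory.newInstance(null);
        try {
            REngine engine = (REngine) instance.getInstance();
            // ... use the engine ...
        } finally {
            instance.dispose(); // releases the instance and its associated resources
        }
    }
}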
Two years ago I tried to create Genuine Service Factories, which later became Parameterized Services. However, after analysis it turned out that nothing was needed: just register the factory as the service.
However.
I do not know enough about your service, but it sounds very much like you could significantly simplify things by removing control from the client bundle: the client bundle should just use whatever REngine service is available in the service registry, maybe with a property signalling its use type if there are multiple bundles that need REngines and they should not share the same REngine (which should rarely be the case).
If that model is possible, it usually simplifies things significantly. I generally then use DS with Configuration Admin configurations that drive the instances (one of the most useful aspects of DS, see http://www.aqute.biz/Bnd/Components). With the metatype integration, you even get a user interface for editing your configuration properties.
One solution would be to register the REngineFactory as the service rather than the REngine implementation itself, and return the factory from the getService method. That way clients can look up the factory and, on successfully finding one, use it to get a new REngine implementation.
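A rough sketch of that variant, where REngineFactory and REngineCreationException are illustrative names for a small API of your own:
// Shared API: a factory the client can call repeatedly, which can also report failures.
public interface REngineFactory {
    REngine createEngine() throws REngineCreationException;
}

// Provider bundle (in its Activator.start(BundleContext ctx)): register the factory itself.
ctx.registerService(REngineFactory.class.getName(), new REngineFactory() {
    public REngine createEngine() throws REngineCreationException {
        return new MyREngineImplementation();
    }
}, null);

// Consumer bundle: look the factory up once, then create as many engines as needed.
ServiceReference ref = bundleContext.getServiceReference(REngineFactory.class.getName());
REngineFactory factory = (REngineFactory) bundleContext.getService(ref);
REngine engine1 = factory.createEngine();
REngine engine2 = factory.createEngine(); // multiple instances per client bundle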

Using EJBs with regular java classes. Trying to instantiate a stateless EJB

I have a large web project in Java EE 6; so far everything is working great.
Now I'm adding a new class that takes Twitter information and returns a string. So far the strings have been extracted from the JSON returned by Twitter and are ready to be persisted in my database. My problem is that I'm not sure how to pass the information to the EJB that normally handles all of my database calls. I'm using JPA and have a DAO class that manages all database access. I already have a method there for updateDatabase(String). I'd like to be able to call updateDatabase(String) from the class that has the strings to add, but I don't know if it's good form to instantiate a stateless bean like that. Normally you inject beans and then access their methods through the injected reference. I could also try to reference the Twitter string-generating class from inside the EJB, but then I'd have to instantiate it there and mess with main() method calls for execution. I'm not really sure how to do this. Right now my Twitter-consuming class is just a POJO with a main method. For some reason some of the library methods did not work outside of main; in fact, the IOUtils API directly says "Instances should NOT be constructed in standard programming".
So, bottom line, at a higher level I'm just asking how POJOs are normally "mixed" into a Java EE project where most of your classes are EJBs and servlets.
Edit: the above seems confusing to me after rereading, so I'll try to simplify it. Basically I have a class with a main method. I'd like to call my EJB class that handles database access, call its updateDatabase(String) method, and just pass in the string. How should I do this?
Edit: So it looks like a JNDI lookup and subsequent reference is the preferred way to do this rather than instantiating the EJB directly?
Edit: these classes are all in the same web project, in the same package. I could inject one or convert the POJO to an EJB. However, the POJO does have a main method, and some of the library classes do not like to be instantiated, so running it in main seems like the best option.
My main code:
public class Driver {

    @EJB
    static RSSbean rssbean;

    public static void main(String[] args) throws Exception {
        System.setProperty("http.proxyHost", "proxya..com");
        System.setProperty("http.proxyPort", "8080");
        /////////////auth code///////////////auth code/////////////////
        String username = System.getProperty("proxy.authentication.username");
        String password = System.getProperty("proxy.authentication.password");
        if (username == null) {
            Authenticator.setDefault(new ProxyAuthenticator("", ""));
        }
        ///////////////end auth code/////////////////////////////////end
        URL twitterSource = new URL("http://search.twitter.com/search.json?q=google");
        ByteArrayOutputStream urlOutputStream = new ByteArrayOutputStream();
        IOUtils.copy(twitterSource.openStream(), urlOutputStream);
        String urlContents = urlOutputStream.toString();
        JSONObject thisobject = new JSONObject(urlContents);
        JSONArray names = thisobject.names();
        JSONArray asArray = thisobject.toJSONArray(names);
        JSONArray resultsArray = thisobject.getJSONArray("results");
        JSONObject jsonObject = resultsArray.getJSONObject(0);
        String twitterText = jsonObject.getString("text");
        rssbean.updateDatabase("twitterText");
    }
}
I'm also getting a java.lang.NullPointerException somewhere around rssbean.updateDatabase("twitterText");
You should use the InitialContext#lookup method to obtain an EJB reference from the application server.
For example:
@Stateless(name = "myEJB")
public class MyEJB {
    public void ejbMethod() {
        // business logic
    }
}

public class TestEJB {
    public static void main(String[] args) throws NamingException {
        MyEJB ejbRef = (MyEJB) new InitialContext().lookup("java:comp/env/myEJB");
        ejbRef.ejbMethod();
    }
}
However, note that the EJB name used for lookup may be vendor-specific. Also, EJB 3.1 introduces the idea of portable JNDI names, which should work on every application server.
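For example, with EJB 3.1 portable global names the lookup could look like this (the application and module names "myApp" and "myModule" are placeholders for your actual packaging):
// Portable EJB 3.1 lookup: java:global/<app-name>/<module-name>/<bean-name>
MyEJB ejbRef = (MyEJB) new InitialContext()
        .lookup("java:global/myApp/myModule/myEJB");
ejbRef.ejbMethod();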
Use the POJO as a stateless EJB; there's nothing wrong with that approach.
From Wikipedia: EJB is a server-side model that encapsulates the business logic of an application.
Your POJO class consumes a web service, so it performs business logic for you.
EDIT > Upon reading your comment, are you trying to access an EJB from outside of the Java EE container? Because if not, then you can inject your EJB into another EJB (they HAVE to be Stateless, both of them).
If you have a stand alone program that wishes to access an EJB you have a couple of options.
One is to simply use JNDI to look up the EJB. The EJB must have a Remote interface, and you need to configure the JNDI part for your container, as well as include any container-specific jars in your standalone application.
Another technique is to use the Java EE artifact known as the "application client". Here, there is a container-provided wrapper for your class, and it provides a runtime environment very similar to running the class within the container; notably, you get things like EJB injection.
Your app still runs in a separate JVM, so you still need to reference Remote EJBs, but the app client container handles a bunch of the boilerplate in getting your app connected to the server. This, too, while a Java EE artifact, is container dependent in how you configure and launch an app client application.
Finally, there is basically little difference between how a POJO interacts with the EJB container this way and how a POJO deployed within the container does. The interface is still a matter of getting the EJB injected (more easily done in Java EE 6 than before) or looking up a reference via JNDI. The only significant difference is that a POJO deployed in the container can use a Local interface instead of the Remote one.
