Parallel webservices access in a Weld CDI environment - java

We're developing a web frontend using JSF 2 and Weld CDI on Tomcat.
Now I have a problem executing multiple webservice calls in parallel to optimize the request time.
The user may select multiple items from a list.
For each selected item, the process gathers its information from one webservice, using the list key as parameter.
My current approach uses a producer that returns the webservice port interface, which is injected into the bean. The bean calls this webservice in a loop for each selected key.
@Inject
private WSAnzeigeAssetsummen serviceAccess;
:
for (Integer pfnr : sessionKeys.getPfnrList()) {
    summaryTable = serviceAccess.execute(snr, pfnr, requestType, "", desiredRows, userName);
    processResult(summaryTable);
}
To speed this up, I tried to use an ExecutorService with as many workers as needed, each returning a Future.
The problem with this construct is that I can't inject the service port into the worker, because the worker is not managed. Creating the service port by hand works, but is undesirable because it bypasses the producer class.
Also, when testing, it's not possible to inject a dummy service port that delivers predefined result sets.
Since I did not find anything about parallel execution in a Tomcat/Weld environment, there must be something wrong with my approach.
What is the correct approach to solve such a situation?
Edit: to make clearer what I tried...
public class DataCollector implements ISumRequest<Integer, Integer, String, FutureResult> {
    ExecutorService pool = Executors.newCachedThreadPool();

    @Inject
    SessionBean sessionBean;

    public Future<FutureResult> collectInformation(Integer snr, Integer pfnr, String requestType) {
        CollectWorker worker = new CollectWorker(snr, pfnr, requestType, sessionBean.getUserName());
        return pool.submit(worker);
    }
}
When done like this, the worker is not managed.

You can wrap your created worker in a CDI creational context, something like this:
@Inject
private BeanManager beanManager;

public <T> T performInjection(final T obj) {
    if (this.beanManager != null) { // only do injection if the bean manager is present
        // Create a creational context from the BeanManager
        final CreationalContext creationalContext = this.beanManager.createCreationalContext(null);
        // Create an injection target with the type of the instance we need to inject into
        final InjectionTarget injectionTarget = this.beanManager.createInjectionTarget(this.beanManager.createAnnotatedType(obj.getClass()));
        // Perform injection into the instance
        injectionTarget.inject(obj, creationalContext);
        // Call the @PostConstruct method on the instance
        injectionTarget.postConstruct(obj);
    }
    return obj;
}
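With the worker injected that way, the fan-out itself is plain java.util.concurrent. A minimal self-contained sketch of that part (a stub method stands in for the injected webservice port, and all names here are made up):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelFetchSketch {
    // Stand-in for the real webservice port call; in the application this
    // would go through the injected proxy (hypothetical stub).
    static String fetchSummary(int pfnr) {
        return "summary-" + pfnr;
    }

    public static void main(String[] args) throws Exception {
        List<Integer> pfnrList = Arrays.asList(1, 2, 3);
        ExecutorService pool = Executors.newFixedThreadPool(pfnrList.size());
        try {
            // One Callable per key; each worker would be passed through
            // performInjection(...) before being submitted.
            List<Callable<String>> workers = new ArrayList<>();
            for (Integer pfnr : pfnrList) {
                workers.add(() -> fetchSummary(pfnr));
            }
            List<Future<String>> futures = pool.invokeAll(workers);
            for (Future<String> f : futures) {
                System.out.println(f.get());
            }
        } finally {
            pool.shutdown();
        }
    }
}
```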

Related

scope of @KafkaListener

I just want to understand what the scope of @KafkaListener is: prototype or singleton. In the case of multiple consumers of a single topic, does it return a single instance or multiple instances? In my case, multiple customers are subscribed to a single topic and get reports. I want to know what would happen if multiple customers query for the report at the same time. I close the container after successful consumption of messages, but if at the same time some other person wants to fetch reports, the container should be open.
How do I change the scope to prototype (if it is not already), associated with the ids of the containers, so that a separate instance can be generated each time?
@KafkaListener(id = "id1", topics = "testTopic")
public void listen() {
    // code goes here
}
A single listener instance is invoked for all consuming threads.
The annotation @KafkaListener is not prototype scoped, and prototype scope is not possible with this annotation either.
4.1.10. Thread Safety
When using a concurrent message listener container, a single listener instance is invoked on all consumer threads. Listeners, therefore, need to be thread-safe, and it is preferable to use stateless listeners. If it is not possible to make your listener thread-safe or adding synchronization would significantly reduce the benefit of adding concurrency, you can use one of a few techniques:
Use n containers with concurrency=1 with a prototype scoped MessageListener bean so that each container gets its own instance (this is not possible when using #KafkaListener).
Keep the state in ThreadLocal<?> instances.
Have the singleton listener delegate to a bean that is declared in SimpleThreadScope (or a similar scope).
To facilitate cleaning up thread state (for the second and third items in the preceding list), starting with version 2.2, the listener container publishes a ConsumerStoppedEvent when each thread exits. You can consume these events with an ApplicationListener or #EventListener method to remove ThreadLocal<?> instances or remove() thread-scoped beans from the scope. Note that SimpleThreadScope does not destroy beans that have a destruction interface (such as DisposableBean), so you should destroy() the instance yourself.
By default, the application context’s event multicaster invokes event listeners on the calling thread. If you change the multicaster to use an async executor, thread cleanup is not effective.
https://docs.spring.io/spring-kafka/reference/html/
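The ThreadLocal option (the second item in the quoted list) and its cleanup can be sketched outside Kafka with plain Java. The class and method names here are made up, and cleanUp() stands in for what a ConsumerStoppedEvent handler would do when a consumer thread exits:

```java
public class ThreadLocalListenerState {
    // Per-consumer-thread state, as in option 2 above (a hypothetical message counter).
    private static final ThreadLocal<int[]> processedCount =
            ThreadLocal.withInitial(() -> new int[1]);

    // What the singleton listener would do per record: mutate only thread-confined state.
    static int onMessage() {
        return ++processedCount.get()[0];
    }

    // What a ConsumerStoppedEvent handler would do for the exiting thread.
    static void cleanUp() {
        processedCount.remove();
    }

    public static void main(String[] args) throws InterruptedException {
        Thread consumer = new Thread(() -> {
            onMessage();
            onMessage();
            System.out.println("consumer thread saw " + processedCount.get()[0] + " messages");
            cleanUp();
        });
        consumer.start();
        consumer.join();
        // The main thread has its own copy of the state: zero messages here.
        System.out.println("main thread saw " + processedCount.get()[0] + " messages");
    }
}
```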
=== Edited ===
Let's take the third option (declaring a SimpleThreadScope and delegating to it).
Register SimpleThreadScope. It is not picked up automatically; you need to register it like below:
@Bean
public static BeanFactoryPostProcessor beanFactoryPostProcessor() {
    return new BeanFactoryPostProcessor() {
        @Override
        public void postProcessBeanFactory(ConfigurableListableBeanFactory beanFactory) throws BeansException {
            beanFactory.registerScope("thread", new SimpleThreadScope());
        }
    };
}
Create a component with scopeName = "thread"
@Component
@Scope(scopeName = "thread", proxyMode = ScopedProxyMode.TARGET_CLASS)
public class KafkaDelegate {
    public void handleMessageFromKafkaListener(String message) {
        // Do some stuff here with the message
    }
}
Create a @Service:
@Service
public class KafkaListenerService {
    @Autowired
    private KafkaDelegate kafkaDelegate;

    @KafkaListener(id = "id1", topics = "testTopic")
    public void listen(String message) {
        kafkaDelegate.handleMessageFromKafkaListener(message);
    }
}
Another example: How to implement a stateful message listener using Spring Kafka?
See this answer for an example of how to use a prototype scoped @KafkaListener bean.

How to access object without passing it as parameter?

Is there a way to autowire an object that needs to be re-instantiated frequently?
I am using Netflix's DGS + Spring Boot framework, and basically storing the user authentication details in a custom context which is created for each request. I am trying to avoid adding the context to the method signatures because of the large amount of refactoring needed.
e.g.
public Result dataFetcher(DataFetchingEnvironment dfe) {
    // this context contains user details which are used for authorization
    // instantiated for every request
    setRolesInContext(dfe);
    MyCustomContext context = DgsContext.getCustomContext(dfe);
    // trying to avoid adding context as an extra param, e.g. dataFetcherHelper(context)
    dataFetcherHelper(); // this calls other helper methods from other classes
}
I was thinking of using the facade pattern but this would not be thread safe. Basically autowire the RequestContextHolder, and call setRequestContext each time a new context gets initialized.
@Component
@NoArgsConstructor
@Getter
@Setter
public class RequestContextHolder {
    private RequestContext requestContext;
}
I'm not sure how your question:
Is there a way to autowire an object that needs to be re-instantiated frequently?
is related to the use case that you've presented...
From the question it looks like you can consider using a ThreadLocal as a conceptual substitute for a global variable available all over the place in the request, if you don't want to add parameters to the methods to propagate the context through the flow.
This will work only in the thread-per-request model; it won't work for reactive systems or for complicated cases where you maintain different thread pools and switch threads while implementing the business logic on the backend.
So to achieve "thread-safety" in the context holder that you have suggested, you can use:
@Configuration
public class MyConfig {
    @Bean
    public ThreadLocal<MyCustomContext> ctxHolder() {
        return new ThreadLocal<>();
    }
}
Then, again, if you're working in the thread-per-request model, you can:
@Component
public class DataFetcherInterceptor {
    @Autowired
    private ThreadLocal<MyCustomContext> ctxHolder;

    public Result dataFetcher(DataFetchingEnvironment dfe) {
        // this context contains user details which are used for authorization
        // instantiated for every request
        setRolesInContext(dfe);
        MyCustomContext context = DgsContext.getCustomContext(dfe);
        ctxHolder.set(context);
        dataFetcherHelper();
    }
}
In dataFetcherHelper, or in general in any method that requires access to the context, you can:
public class SomeService {
    @Autowired
    private ThreadLocal<MyCustomContext> ctxHolder;

    public void dataFetcherHelper() {
        MyCustomContext ctx = ctxHolder.get();
    }
}
Now, I see that dataFetcherHelper is just a method that you call from within this "interceptor" class; in that case this is overkill. But I assume it is actually a method that belongs to another class, which might be an element in the call chain of different classes. For those situations, this can be a working solution.
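One caveat with the ThreadLocal approach: servlet containers pool request threads, so the interceptor should clear the holder when the request ends, or a stale context leaks into the next request served by the same thread. A self-contained sketch of that set/clear discipline (stub context class, hypothetical names):

```java
public class ContextCleanupSketch {
    // Stub for the real MyCustomContext carrying user details.
    static class MyCustomContext {
        final String user;
        MyCustomContext(String user) { this.user = user; }
    }

    static final ThreadLocal<MyCustomContext> ctxHolder = new ThreadLocal<>();

    // The interceptor pairs set() with remove(), because the same pooled
    // thread will later serve unrelated requests.
    static String handleRequest(String user) {
        ctxHolder.set(new MyCustomContext(user));
        try {
            return dataFetcherHelper();
        } finally {
            ctxHolder.remove();
        }
    }

    // Deep in the call chain, the context is read without being passed as a parameter.
    static String dataFetcherHelper() {
        return "authorized as " + ctxHolder.get().user;
    }

    public static void main(String[] args) {
        System.out.println(handleRequest("alice"));
        System.out.println(ctxHolder.get()); // null: nothing leaked to the next request
    }
}
```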

How to manage shutdown of an ExecutorService when we allow it to be injected?

Suppose I am writing a service which needs an executor service/separate thread. I provide a factory method so callers need not worry about the executor service, but I still want to allow passing in an existing executor service (dependency injection).
How should I manage executorService.shutdown()?
Example code:
public class ThingsScheduler {
    private final ExecutorService executorService;

    public ThingsScheduler(ExecutorService executorService) {
        this.executorService = executorService;
    }

    public static ThingsScheduler createDefaultSingleThreaded() {
        return new ThingsScheduler(Executors.newSingleThreadExecutor());
    }

    public void scheduleThing() {
        executorService.submit(new SomeTask());
    }

    // implement Closeable?
    // @PreDestroy?
    // .shutdown() + JavaDoc?
}
There are several problems:
We should have the ability to shut down the internally created executor or, in the best case, handle it automatically (Spring @PreDestroy, or in the worst case finalize()).
We should rather not shut down the executor if it's externally managed (injected).
We could add an attribute stating whether the executor was created by our class or injected, and then in finalize()/@PreDestroy/a shutdown hook we could shut it down, but that doesn't feel elegant to me.
Maybe we should drop the factory method entirely and always require injection, pushing executor lifecycle management to the client?
You may create an instance of an anonymous subclass from your default factory, as shown below. The class defines a close/@PreDestroy method, which will be called by your DI container.
e.g.
public class ThingsScheduler {
    final ExecutorService executorService;

    public ThingsScheduler(ExecutorService executorService) {
        this.executorService = executorService;
    }

    /**
     * assuming you are using this method as a factory method so the returned
     * bean is managed by your DI container
     */
    public static ThingsScheduler createDefaultSingleThreaded() {
        return new ThingsScheduler(Executors.newSingleThreadExecutor()) {
            @PreDestroy
            public void close() {
                System.out.println("closing the bean");
                executorService.shutdown();
            }
        };
    }
}
I would say the solution is fully up to you. Third-party libraries like Spring widely use a dedicated attribute to track who should release a particular resource, depending on who created it: mongoInstanceCreated in SimpleMongoDbFactory, localServer in SimpleHttpServerJaxWsServiceExporter, etc. But they do it because those classes are created for external usage only. If your class is used only in your application code, you can either inject the executorService and not care about releasing it, or create and release it inside the class that uses it. The choice depends on your class/application design (does your class work with any executorService, is the executorService shared and used by other classes, etc.). Otherwise I don't see any option other than the dedicated flag.
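For completeness, the dedicated-flag variant can be sketched like this; it is self-contained, and the withInjected factory name is made up:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThingsSchedulerFlagSketch {
    private final ExecutorService executorService;
    private final boolean ownsExecutor; // true only when we created the pool ourselves

    private ThingsSchedulerFlagSketch(ExecutorService executorService, boolean ownsExecutor) {
        this.executorService = executorService;
        this.ownsExecutor = ownsExecutor;
    }

    public static ThingsSchedulerFlagSketch createDefaultSingleThreaded() {
        return new ThingsSchedulerFlagSketch(Executors.newSingleThreadExecutor(), true);
    }

    public static ThingsSchedulerFlagSketch withInjected(ExecutorService executorService) {
        return new ThingsSchedulerFlagSketch(executorService, false);
    }

    // A @PreDestroy method or Closeable.close() would delegate here.
    public void close() {
        if (ownsExecutor) {
            executorService.shutdown(); // injected executors are left to their owner
        }
    }

    public boolean poolShutDown() {
        return executorService.isShutdown();
    }

    public static void main(String[] args) {
        ThingsSchedulerFlagSketch owned = createDefaultSingleThreaded();
        owned.close();
        System.out.println(owned.poolShutDown());    // true: we owned it

        ExecutorService shared = Executors.newSingleThreadExecutor();
        ThingsSchedulerFlagSketch injected = withInjected(shared);
        injected.close();
        System.out.println(injected.poolShutDown()); // false: owner still manages it
        shared.shutdown();
    }
}
```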
A more "elegant" solution would be to extend ExecutorService and override its shutdown method. In the injection case you would return that extended type with its own shutdown logic; in the factory case you keep the original logic.
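That idea can be sketched with a delegating wrapper whose shutdown() is a no-op, so client code may always call shutdown() without tearing down an injected pool (the unowned name is made up):

```java
import java.util.Collections;
import java.util.List;
import java.util.concurrent.AbstractExecutorService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class ShutdownSketch {
    /** Wraps an injected executor so that shutdown() becomes a no-op. */
    static ExecutorService unowned(ExecutorService delegate) {
        return new AbstractExecutorService() {
            @Override public void execute(Runnable command) { delegate.execute(command); }
            @Override public void shutdown() { /* externally managed: no-op */ }
            @Override public List<Runnable> shutdownNow() { return Collections.emptyList(); }
            @Override public boolean isShutdown() { return delegate.isShutdown(); }
            @Override public boolean isTerminated() { return delegate.isTerminated(); }
            @Override public boolean awaitTermination(long timeout, TimeUnit unit)
                    throws InterruptedException {
                return delegate.awaitTermination(timeout, unit);
            }
        };
    }

    public static void main(String[] args) throws Exception {
        ExecutorService real = Executors.newSingleThreadExecutor();
        ExecutorService view = unowned(real);
        view.shutdown();                          // no-op: the real pool stays usable
        Future<Integer> f = view.submit(() -> 42);
        System.out.println(f.get());
        real.shutdown();                          // the owner shuts it down for real
    }
}
```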
After some more thinking I came to these conclusions:
do not think about shutting it down if it's injected; someone else created it, so someone else will manage its lifecycle
an executor factory could be injected instead of the executor itself; then we create the instance using the factory and manage closing it ourselves, since we manage the lifecycle (and in that case the answers from other users apply)

Mapping WebService-Client-Access to @Stateful

If I understand it right, a @Stateful bean saves state. If the client makes a request again, it comes back to the same instance, so it's possible to save class attributes, which is not possible with @Stateless. In another thread here, someone wrote "it's like a classical Java instance; every injection gets its own instance of this bean".
But I don't understand how the mapping of the request to the @Stateful bean works; what do I have to do to make it work? This question covers two cases:
I call a @Stateful bean through a webservice from the client software. Is there an ID I have to send with it? What is that ID, and how does the container know it is the identifying attribute and route to the right @Stateful bean?
I call a @Stateful bean out of a @Stateless bean, for example if the client first calls a @Stateless bean and is redirected to its @Stateful bean.
This question is not about the technical internals of the container/server software; it's about what to do concretely during development. Thank you for your support.
Greetings
That's unfortunately not the way web services work. The stateful bean is only stateful for the stateless bean, not for a client. And that's very dangerous, for several reasons:
-The stateless bean saves the state of a call in its stateful reference, but the next call to the stateless bean can happen in another context/by another client.
-The stateful bean can be destroyed by the container while the stateless bean is still alive/in the pool.
You can use stateful beans with remote calls or in web applications, but not in the context of webservices.
A webservice is, per definition, without any application state. The Java EE servlet listens for requests and calls one stateless bean implementation from a pool of instances.
If you really want to implement stateful web services, you must do it on your own. The following example will work in a Java EE 6 container:
// Client-dependent values (your stateful bean)
public class SessionValues {
    private final List<String> values = new ArrayList<String>();

    public void addValue(String s) {
        values.add(s);
    }

    public List<String> loadValues() {
        return Collections.unmodifiableList(values);
    }
}
You can store the sessions in a singleton (your own pool):
@Singleton
@Startup
public class StatefullSingleton {
    // ConcurrentHashMap plus putIfAbsent avoids the check-then-act race that a
    // plain containsKey/put pair would have under LockType.READ concurrency.
    private final Map<String, SessionValues> sessions = new ConcurrentHashMap<String, SessionValues>();

    @Lock(LockType.READ)
    public void addValue(String sessionId, String value) {
        sessions.putIfAbsent(sessionId, new SessionValues());
        SessionValues p = sessions.get(sessionId);
        p.addValue(value);
    }

    @Lock(LockType.READ)
    public List<String> loadValues(String sessionId) {
        if (sessions.containsKey(sessionId))
            return sessions.get(sessionId).loadValues();
        else
            return new ArrayList<String>();
    }
}
and inject the singleton into the stateless webservice beans (the pool, the singleton, and the calls to the singleton are managed by the Java EE container):
@Stateless
@WebService
public class WebserviceBean {
    @Inject
    private StatefullSingleton ejb;

    public void addValue(String sessionId, String value) {
        ejb.addValue(sessionId, value);
    }

    public List<String> loadValues(String sessionId) {
        return ejb.loadValues(sessionId);
    }
}
The example above is only a pattern. You must be very careful with the session id and with multithreading if you want to implement it in production.
Edit: removed the unnecessary @LocalBean
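The session bookkeeping itself can be exercised outside the container. A plain-Java simulation of the singleton's map logic (no EJB annotations; the session ids are hypothetical):

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

public class SessionStoreSketch {
    // sessionId -> values, mirroring the Map held by the singleton above
    private final Map<String, List<String>> sessions = new ConcurrentHashMap<>();

    public void addValue(String sessionId, String value) {
        // computeIfAbsent gives the same atomic get-or-create behaviour as putIfAbsent
        sessions.computeIfAbsent(sessionId, id -> new CopyOnWriteArrayList<>()).add(value);
    }

    public List<String> loadValues(String sessionId) {
        return sessions.getOrDefault(sessionId, List.of());
    }

    public static void main(String[] args) {
        SessionStoreSketch store = new SessionStoreSketch();
        // Two hypothetical clients, each sending its own session id with every call
        store.addValue("client-A", "first");
        store.addValue("client-B", "other");
        store.addValue("client-A", "second");
        System.out.println(store.loadValues("client-A")); // [first, second]
        System.out.println(store.loadValues("client-B")); // [other]
    }
}
```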

how to implement a service layer in a servlet application

Suppose I want to create a service layer for my web application, which uses servlets. How should I go about this? (I am not using a web app framework, so please bear with me.) Should I implement it as a listener? The service is meant to do database access. That is, I should be able to call from my servlet:
class MyServlet {
    ...
    doPost(...) {
        ...
        MyEntity entity = dbAccessService.getMyEntity(someId);
        ...
    }
}
where dbAccessService deals with the Hibernate session, transactions, etc. Previously I did all of this inside DAO methods, but I was advised that was not a good idea.
Any suggestions welcome
thanks
mark
A sample code snippet is given below:
class DBAccessServiceImpl {
    ...
    private MyEntity getMyEntity(Long id) {
        Transaction tx = null;
        Session session = HibernateUtil.getCurrentSession();
        try {
            tx = session.beginTransaction();
            MyEntity me = entitydao.findEntityById(id);
            tx.commit();
            return me;
        } catch (RuntimeException e) {
            if (tx != null) {
                tx.rollback();
            }
            logger.info("problem occurred while calling findEntityById()");
            throw e;
        }
    }
    ...
}
Then create a listener to instantiate DBAccessService
class MyAppListener implements ServletContextListener {
    @Override
    public void contextInitialized(ServletContextEvent ctxEvent) {
        ServletContext sc = ctxEvent.getServletContext();
        DBAccessService dbservice = new DBAccessServiceImpl();
        sc.setAttribute("dbAccessService", dbservice);
    }
}
In web.xml add listener
...
<listener>
<listener-class>myapp.listeners.MyAppListener</listener-class>
</listener>
...
Assuming you do not want to introduce a framework, two options make sense (in my opinion):
define your service layer using stateless EJB session beans. You need an EJB container.
do it as always in OO languages, create an interface and a corresponding implementation:
Define an interface
public interface BusinessService {
    BusinessObject performSomeOperation(SomeInput input);
}
And an implementation
public class BusinessServiceImpl implements BusinessService {
    public BusinessObject performSomeOperation(SomeInput input) {
        // some logic here...
    }
}
You have several options for instantiating the service. If you start from scratch with a small application it may be sufficient to simply instantiate the service inside your web application:
BusinessService service = new BusinessServiceImpl();
service.performSomeOperation(...);
BTW: At a later time you may want to refactor and implement some abstractions around the service instantiation (factory pattern, dependency injection, etc.). Furthermore, in large systems there is a chance that you will have to host the service layer on its own infrastructure for scalability, so that your webapp communicates with the service layer via an open protocol, be it RESTful or Web Services.
However the future looks, having a well-defined interface for your business functions in place allows you to move forward "easily" if the application grows.
Response to your update:
I would not implement the service itself as a listener; that does not make sense. Nevertheless, your sample code seems reasonable, but you must distinguish between the service (in this case DBAccessService) and the way you instantiate/retrieve it (the listener). The listener you've implemented in fact plays the role of a ServiceLocator, which is capable of finding certain services. If you store the instance of your service in the servlet context, you have to remember that the service implementation must be thread-safe.
You have to be careful not to over-engineer your design; keep it simple as long as you cannot foresee further, complex requirements. If it's not yet complex, I suggest encapsulating the implementation using a simple static factory method:
public final class ServiceFactory {
    public static DBAccessService getDBAccessService() {
        DBAccessService service = new DBAccessServiceImpl();
        return service;
    }
}
Complex alternatives are available for implementing the ServiceFactory, and nowadays some call it an anti-pattern. But as long as you do not want to start with dependency injection (etc.), this is still a valid solution. The service implementation DBAccessServiceImpl is accessed in one place only (the factory). As I mentioned before, keep an eye on multithreading... hope this helps!
What you're suggesting is really no different from doing the session and transaction handling in a DAO. After all, your service class calls the DAO; to the client code, there is no difference.
Rather, I suspect that whoever told you not to put the session handling in the DAO was thinking that you should instead use the Open Session In View pattern. Very simply, in its usual form, that involves writing a Filter which opens a session and starts a transaction before passing the request down the chain, and then commits the transaction (or rolls it back if necessary) and closes the session after the request completes. That means that within any one request, all access to persistent objects happens in a single transaction and a single session, which is usually the right way to do it (and certainly the fastest).
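The control flow of such a filter can be sketched with stub types standing in for the Servlet and Hibernate APIs; this shows the shape only, not the real interfaces:

```java
public class OpenSessionInViewSketch {
    // Stub: the real code would use a Hibernate Session plus Transaction.
    static class Session {
        boolean open = true;
        boolean committed, rolledBack;
        void commit() { committed = true; }
        void rollback() { rolledBack = true; }
        void close() { open = false; }
    }

    // One session and one transaction per request, wrapped around the chain;
    // in a real javax.servlet.Filter the Runnable would be chain.doFilter(...).
    static Session doFilter(Runnable chain) {
        Session session = new Session(); // openSession + beginTransaction
        try {
            chain.run();                 // the servlet (and DAOs) run here
            session.commit();
        } catch (RuntimeException e) {
            session.rollback();
        } finally {
            session.close();
        }
        return session;
    }

    public static void main(String[] args) {
        Session ok = doFilter(() -> {});
        System.out.println(ok.committed + " " + ok.open);          // true false
        Session failed = doFilter(() -> { throw new RuntimeException("boom"); });
        System.out.println(failed.rolledBack + " " + failed.open); // true false
    }
}
```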
