Why doesn't my REQUIRES_NEW transaction commit? - java

I am working on the SAP Java application server with EJB 3.0.
I want to insert into the database in batches, because I have too much data to handle in one go and have to split it up. I wrote a test and it runs, but not the way I want.
I want to create a new transaction for each part, and of course each transaction should commit when its method ends.
Sample code is below:
package com.transaction.jobs;
import javax.ejb.Local;
/**
*
* @author muratdemir
*/
@Local
public interface TestTransactionLocal {
public void onStart();
public void insertObject(int i);
}
and
package com.transaction.jobs;
import com.transaction.service.DatabaseServiceLocal;
import com.transaction.entity.Item;
import com.transaction.entity.Logger;
import java.util.Date;
import javax.annotation.Resource;
import javax.ejb.EJB;
import javax.ejb.EJBContext;
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.ejb.TransactionManagement;
import javax.ejb.TransactionManagementType;
/**
*
* @author muratdemir
*/
@Stateless
@TransactionManagement(TransactionManagementType.CONTAINER)
public class TestTransactionService implements TestTransactionLocal {
@EJB
DatabaseServiceLocal databaseService;
@Resource
EJBContext context;
@TransactionAttribute(TransactionAttributeType.REQUIRED)
public void onStart() {
try {
System.out.println("START");
Logger log1 = new Logger(new Date(), ">>>T1 commiting");
databaseService.create(log1);
System.out.println(">>>T1 committing");
Thread.sleep(5000);
for (int i = 1; i < 10; i++) {
System.out.println("Call new Transaction");
insertObject(i);
Thread.sleep(2000);
}
Thread.sleep(5000);
Logger log2 = new Logger(new Date(), "<<<T1 commiting");
databaseService.create(log2);
System.out.println("<<<T1 committing");
Thread.sleep(5000);
System.out.println("END");
} catch (Exception e) {
e.printStackTrace();
context.setRollbackOnly();
}
}
@TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
public void insertObject(int i) {
try {
System.out.println("New Transaction Start i:" + i);
Item item = new Item(new Date(), "Name_" + i);
databaseService.create(item);
System.out.println("commit transaction: " + i);
} catch (Exception e) {
e.printStackTrace();
context.setRollbackOnly();
}
}
}
The insertObject method (REQUIRES_NEW) runs, but it does not commit. It waits for the outer onStart (REQUIRED) transaction to finish; only when the outer method ends are all the inserts committed.
Why is the new transaction not committed?
Note: If I change the transaction attribute of onStart from REQUIRED to NOT_SUPPORTED, it works the way I want. Why does it behave like this?

You have to obtain another TestTransactionLocal reference via the SessionContext#getBusinessObject method. Calls made through that reference will respect the @TransactionAttribute annotation.
@Resource
private SessionContext sessionContext;
private TestTransactionLocal local;
@PostConstruct
void init() {
local = sessionContext.getBusinessObject(TestTransactionLocal.class);
}
Then invoke insertObject() through this new reference:
local.insertObject(i);
See this blog post: http://www.adam-bien.com/roller/abien/entry/how_to_self_invoke_ejb
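For completeness, here is a condensed sketch of how the bean could look with this change (only the transaction-relevant parts are shown; the logging, sleeps and DatabaseServiceLocal calls from the original class are omitted):
package com.transaction.jobs;
import javax.annotation.PostConstruct;
import javax.annotation.Resource;
import javax.ejb.SessionContext;
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
@Stateless
public class TestTransactionService implements TestTransactionLocal {
    @Resource
    private SessionContext sessionContext;
    // Proxy to this same bean that goes through the container, so the
    // transaction interceptors are applied on every call.
    private TestTransactionLocal self;
    @PostConstruct
    void init() {
        self = sessionContext.getBusinessObject(TestTransactionLocal.class);
    }
    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public void onStart() {
        for (int i = 1; i < 10; i++) {
            // Calling through the proxy suspends the caller's transaction and starts
            // a new one per invocation; a plain insertObject(i) call would not.
            self.insertObject(i);
        }
    }
    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void insertObject(int i) {
        // databaseService.create(...) exactly as in the original code
    }
}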

You're calling the insertObject() method directly, so the transaction annotation isn't considered. You would need to call it through the bean's business interface, e.g. local.insertObject(i), for it to have any transactional effect.

Prometheus requires that all meters with the same name have the same set of tag keys

If the @Around advice matches only @Timed-annotated methods, like this:
package ru.fabit.visor.config.aop;
import io.micrometer.core.annotation.Timed;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Tag;
import io.micrometer.core.instrument.Tags;
import io.micrometer.core.instrument.Timer;
import io.micrometer.core.lang.NonNullApi;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Pointcut;
import org.aspectj.lang.reflect.MethodSignature;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.scheduling.annotation.Scheduled;
import java.lang.reflect.Method;
import java.util.function.Function;
/**
* The type Targeted timed aspect.
*/
@Aspect
@NonNullApi
public class TargetedTimedAspect {
public static final String DEFAULT_METRIC_NAME = "method.timed";
public static final String EXCEPTION_TAG = "exception";
public static final String BINDING_TAG = "binding";
public static final String SCHEDULED_CRON_TAG = "cron";
private final MeterRegistry registry;
private final Function<ProceedingJoinPoint, Iterable<Tag>> tagsBasedOnJoinPoint;
public TargetedTimedAspect(MeterRegistry registry) {
this(registry, pjp ->
Tags.of("class", pjp.getStaticPart().getSignature().getDeclaringTypeName(),
"method", pjp.getStaticPart().getSignature().getName())
);
}
public TargetedTimedAspect(MeterRegistry registry, Function<ProceedingJoinPoint, Iterable<Tag>> tagsBasedOnJoinPoint) {
this.registry = registry;
this.tagsBasedOnJoinPoint = tagsBasedOnJoinPoint;
}
// enable TimedAspect only for @StreamListener and @Scheduled annotated methods or allowed methods pointcut
@Around("timedAnnotatedPointcut()")
public Object timedMethod(ProceedingJoinPoint pjp) throws Throwable {
Method method = ((MethodSignature) pjp.getSignature()).getMethod();
StreamListener streamListener = method.getAnnotation(StreamListener.class);
Scheduled scheduled = method.getAnnotation(Scheduled.class);
// timed can be on method or class
Timed timed = method.getAnnotation(Timed.class);
if (timed == null) {
method = pjp.getTarget().getClass().getMethod(method.getName(), method.getParameterTypes());
timed = method.getAnnotation(Timed.class);
}
final String metricName = timed.value().isEmpty() ? DEFAULT_METRIC_NAME : timed.value();
Timer.Sample sample = Timer.start(registry);
String exceptionClass = "none";
try {
return pjp.proceed();
} catch (Exception ex) {
exceptionClass = ex.getClass().getSimpleName();
throw ex;
} finally {
try {
Timer.Builder timerBuilder = Timer.builder(metricName)
.description(timed.description().isEmpty() ? null : timed.description())
.tags(timed.extraTags())
.tags(EXCEPTION_TAG, exceptionClass)
.tags(tagsBasedOnJoinPoint.apply(pjp))
.publishPercentileHistogram(timed.histogram())
.publishPercentiles(timed.percentiles().length == 0 ? null : timed.percentiles());
if (streamListener != null) {
timerBuilder.tags(
BINDING_TAG,
streamListener.value().isEmpty() ? streamListener.target() : streamListener.value()
);
} else if (scheduled != null) {
timerBuilder.tags(SCHEDULED_CRON_TAG, scheduled.cron());
}
sample.stop(timerBuilder.register(registry));
} catch (Exception e) {
// ignoring on purpose
}
}
}
@Pointcut(
"(@annotation(org.springframework.cloud.stream.annotation.StreamListener) ||" +
"@annotation(org.springframework.scheduling.annotation.Scheduled))"
)
public void asyncAnnotatedPointcut() {
// Method is empty as this is just a Pointcut, the implementations are in the advices.
}
@Pointcut("execution(public * ru.fabit.visor.service.impl.StorageClientImpl.*(..)) ||" +
"execution(public * ru.fabit.visor.service.s3storage.S3StorageClientImpl.*(..))")
public void allowedMethodPointcut() {
// Method is empty as this is just a Pointcut, the implementations are in the advices.
}
@Pointcut("@annotation(io.micrometer.core.annotation.Timed)")
public void timedAnnotatedPointcut() {
// Method is empty as this is just a Pointcut, the implementations are in the advices.
}
}
Then it throws: java.lang.IllegalArgumentException: Prometheus requires that all meters with the same name have the same set of tag keys. There is already an existing meter named 'web_photos_gotten_list_seconds' containing tag keys [class, exception, method]. The meter you are attempting to register has keys [exception, method, outcome, status, uri].
But if I add every @Timed method to the pointcut explicitly, everything works. I don't understand why each annotated method has to be added to the pointcut separately.
This works:
package ru.fabit.visor.config.aop;
import io.micrometer.core.annotation.Timed;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Tag;
import io.micrometer.core.instrument.Tags;
import io.micrometer.core.instrument.Timer;
import io.micrometer.core.lang.NonNullApi;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Pointcut;
import org.aspectj.lang.reflect.MethodSignature;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.scheduling.annotation.Scheduled;
import java.lang.reflect.Method;
import java.util.function.Function;
/**
* The type Targeted timed aspect.
*/
@Aspect
@NonNullApi
public class TargetedTimedAspect {
public static final String DEFAULT_METRIC_NAME = "method.timed";
public static final String EXCEPTION_TAG = "exception";
public static final String BINDING_TAG = "binding";
public static final String SCHEDULED_CRON_TAG = "cron";
private final MeterRegistry registry;
private final Function<ProceedingJoinPoint, Iterable<Tag>> tagsBasedOnJoinPoint;
public TargetedTimedAspect(MeterRegistry registry) {
this(registry, pjp ->
Tags.of("class", pjp.getStaticPart().getSignature().getDeclaringTypeName(),
"method", pjp.getStaticPart().getSignature().getName())
);
}
public TargetedTimedAspect(MeterRegistry registry, Function<ProceedingJoinPoint, Iterable<Tag>> tagsBasedOnJoinPoint) {
this.registry = registry;
this.tagsBasedOnJoinPoint = tagsBasedOnJoinPoint;
}
// enable TimedAspect only for @StreamListener and @Scheduled annotated methods or allowed methods pointcut
@Around("timedAnnotatedPointcut() && (asyncAnnotatedPointcut() || allowedMethodPointcut())")
public Object timedMethod(ProceedingJoinPoint pjp) throws Throwable {
Method method = ((MethodSignature) pjp.getSignature()).getMethod();
StreamListener streamListener = method.getAnnotation(StreamListener.class);
Scheduled scheduled = method.getAnnotation(Scheduled.class);
// timed can be on method or class
Timed timed = method.getAnnotation(Timed.class);
if (timed == null) {
method = pjp.getTarget().getClass().getMethod(method.getName(), method.getParameterTypes());
timed = method.getAnnotation(Timed.class);
}
final String metricName = timed.value().isEmpty() ? DEFAULT_METRIC_NAME : timed.value();
Timer.Sample sample = Timer.start(registry);
String exceptionClass = "none";
try {
return pjp.proceed();
} catch (Exception ex) {
exceptionClass = ex.getClass().getSimpleName();
throw ex;
} finally {
try {
Timer.Builder timerBuilder = Timer.builder(metricName)
.description(timed.description().isEmpty() ? null : timed.description())
.tags(timed.extraTags())
.tags(EXCEPTION_TAG, exceptionClass)
.tags(tagsBasedOnJoinPoint.apply(pjp))
.publishPercentileHistogram(timed.histogram())
.publishPercentiles(timed.percentiles().length == 0 ? null : timed.percentiles());
if (streamListener != null) {
timerBuilder.tags(
BINDING_TAG,
streamListener.value().isEmpty() ? streamListener.target() : streamListener.value()
);
} else if (scheduled != null) {
timerBuilder.tags(SCHEDULED_CRON_TAG, scheduled.cron());
}
sample.stop(timerBuilder.register(registry));
} catch (Exception e) {
// ignoring on purpose
}
}
}
@Pointcut(
"(@annotation(org.springframework.cloud.stream.annotation.StreamListener) ||" +
"@annotation(org.springframework.scheduling.annotation.Scheduled))"
)
public void asyncAnnotatedPointcut() {
// Method is empty as this is just a Pointcut, the implementations are in the advices.
}
@Pointcut("execution(public * ru.fabit.visor.service.impl.StorageClientImpl.*(..)) ||" +
"execution(public * ru.fabit.visor.service.s3storage.S3StorageClientImpl.*(..))")
public void allowedMethodPointcut() {
// Method is empty as this is just a Pointcut, the implementations are in the advices.
}
@Pointcut("@annotation(io.micrometer.core.annotation.Timed)")
public void timedAnnotatedPointcut() {
// Method is empty as this is just a Pointcut, the implementations are in the advices.
}
}
pom.xml:
<dependency>
<groupId>io.dropwizard.metrics</groupId>
<artifactId>metrics-core</artifactId>
</dependency>
<dependency>
<groupId>io.micrometer</groupId>
<artifactId>micrometer-registry-prometheus</artifactId>
</dependency>
The problem you're describing has nothing to do with pointcuts. One piece of code registers a Timer with three tag keys (class, exception, method) and another one registers a timer with the exact same name but five tag keys (exception, method, outcome, status, uri), and the framework clearly says that this is not allowed.
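To see the collision in isolation, here is a minimal standalone sketch (the metric name and the two tag-key sets come from the error message above; the tag values and class name are made up for demonstration). The second register() call throws exactly this IllegalArgumentException:
import io.micrometer.core.instrument.Timer;
import io.micrometer.prometheus.PrometheusConfig;
import io.micrometer.prometheus.PrometheusMeterRegistry;

public class MeterNameCollisionDemo {
    public static void main(String[] args) {
        PrometheusMeterRegistry registry = new PrometheusMeterRegistry(PrometheusConfig.DEFAULT);
        // First registration: tag keys [class, exception, method]
        Timer.builder("web_photos_gotten_list_seconds")
                .tags("class", "StorageClientImpl", "exception", "none", "method", "getPhotos")
                .register(registry);
        // Second registration with the same name but tag keys
        // [exception, method, outcome, status, uri] -> IllegalArgumentException
        Timer.builder("web_photos_gotten_list_seconds")
                .tags("exception", "none", "method", "GET", "outcome", "SUCCESS",
                        "status", "200", "uri", "/photos")
                .register(registry);
    }
}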
There are several possibilities to solve the issue:
simply use another name for the timer (if you need the other one)
find the other piece of code that registers that timer and deactivate it. You may need to use the debugger and set a conditional breakpoint in MeterRegistry.register() with a condition on the meter name.
PS: using the URI as a tag is not good practice. The issue is that anyone can hit your service with arbitrary URIs (e.g. by just appending a random number), which ends up creating a very high number of meters and will eventually kill your Prometheus.

How to write a proper unit test for Elasticsearch in Java

Overview:
I'm totally new to Elasticsearch testing and I want to add proper unit tests. The project's setup is as follows:
Java 8
Elasticsearch 6.2.4
The project uses the low-level REST client for fetching data from ES
More info about the ES configuration is below:
import static java.net.InetAddress.getByName;
import static java.util.Arrays.stream;
import java.net.UnknownHostException;
import java.util.Map;
import java.util.Objects;
import javax.inject.Inject;
import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.TransportAddress;
import org.elasticsearch.transport.client.PreBuiltTransportClient;
import org.springframework.beans.factory.InitializingBean;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import au.com.api.util.RestClientUtil;
import lombok.extern.slf4j.Slf4j;
@Slf4j
@Configuration
public class ElasticConfiguration implements InitializingBean{
@Value(value = "${elasticsearch.hosts}")
private String[] hosts;
@Value(value = "${elasticsearch.httpPort}")
private int httpPort;
@Value(value = "${elasticsearch.tcpPort}")
private int tcpPort;
@Value(value = "${elasticsearch.clusterName}")
private String clusterName;
@Inject
private RestClientUtil client;
@Bean
public RestHighLevelClient restHighClient() {
return new RestHighLevelClient(RestClient.builder(httpHosts()));
}
@Bean
@Deprecated
public RestClient restClient() {
return RestClient.builder(httpHosts()).build();
}
/**
* @return TransportClient
* @throws UnknownHostException
*/
@SuppressWarnings("resource")
@Bean
public TransportClient transportClient() throws UnknownHostException{
Settings settings = Settings.builder()
.put("cluster.name", clusterName).build();
return new PreBuiltTransportClient(settings).addTransportAddresses(transportAddresses());
}
@Override
public void afterPropertiesSet() throws Exception {
log.debug("loading search templates...");
try {
for (Map.Entry<String, String> entry : Constants.SEARCH_TEMPLATE_MAP.entrySet()) {
client.putInlineSearchTemplateToElasticsearch(entry.getKey(), entry.getValue());
}
} catch (Exception e) {
log.error("Exception has occurred in putting search templates into ES.", e);
}
}
private HttpHost[] httpHosts() {
return stream(hosts).map(h -> new HttpHost(h, httpPort, "http")).toArray(HttpHost[]::new);
}
private TransportAddress[] transportAddresses() throws UnknownHostException {
TransportAddress[] transportAddresses = stream(hosts).map(h -> {
try {
return new TransportAddress(getByName(h), tcpPort);
} catch (UnknownHostException e) {
log.error("Exception has occurred in creating ES TransportAddress. host: '{}', tcpPort: '{}'", h, tcpPort, e);
}
return null;
}).filter(Objects::nonNull).toArray(TransportAddress[]::new);
if (transportAddresses.length == 0) {
throw new UnknownHostException();
}
return transportAddresses;
}
}
Issue:
I don't know how to mock ES, or how to test against ES without running a standalone ES instance on my machine. Please use the following class as an example and let me know how I could write a test case (a unit test, not an integration test) for the getSearchResponse method:
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.transport.NoNodeAvailableException;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.script.ScriptType;
import org.elasticsearch.script.mustache.SearchTemplateRequestBuilder;
import org.elasticsearch.search.Scroll;
import org.elasticsearch.search.aggregations.Aggregation;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.MessageSource;
import org.springframework.stereotype.Repository;
@Slf4j
@Repository
@NoArgsConstructor
public abstract class NewBaseElasticsearchRepository {
@Autowired
protected NewIndexLocator newIndexLocator;
@Value(value = "${elasticsearch.client.timeout}")
private Long timeout;
@Autowired
protected TransportClient transportClient;
@Autowired
protected ThresholdService thresholdService;
@Autowired
protected MessageSource messageSource;
/**
* @param script the name of the script to be executed
* @param templateParams a map of the parameters to be sent to the script
* @param indexName the index to target (an empty indexName will search all indexes)
*
* @return a Search Response object containing details of the request results from Elasticsearch
*
* @throws NoNodeAvailableException thrown when the transport client cannot connect to any ES Nodes (or Coordinators)
* @throws Exception thrown for all other request errors such as parsing and non-connectivity related issues
*/
protected SearchResponse getSearchResponse(String script, Map<String, Object> templateParams, String... indexName) {
log.debug("transport client >> index name --> {}", Arrays.toString(indexName));
SearchResponse searchResponse;
try {
searchResponse = new SearchTemplateRequestBuilder(transportClient)
.setScript(script)
.setScriptType(ScriptType.STORED)
.setScriptParams(templateParams)
.setRequest(new SearchRequest(indexName))
.execute()
.actionGet(timeout)
.getResponse();
} catch (NoNodeAvailableException e) {
log.error(ELASTIC_SEARCH_EXCEPTION_NOT_FOUND, e.getMessage());
throw new ElasticSearchException(ELASTIC_SEARCH_EXCEPTION_NOT_FOUND);
} catch (Exception e) {
log.error(ELASTIC_SEARCH_EXCEPTION, e.getMessage());
throw new ElasticSearchException(ELASTIC_SEARCH_EXCEPTION);
}
log.debug("searchResponse ==> {}", searchResponse);
return searchResponse;
}
}
So, I would be grateful if you could have a look at the example class and share your solutions on how I could mock TransportClient and get a proper SearchResponse back.
Note:
I tried to use ESTestCase from org.elasticsearch.test:framework:6.2.4 but ran into a jar-hell issue and couldn't resolve it. I also couldn't find any proper docs on that, or on unit testing ES from Java in general.

EntityManager em.getTransaction().begin() locks thread

My question is whether the begin() method can block a thread for longer than the "timeout" configured in persistence.xml.
Here is a snippet:
@Inject EntityManager em;
@Inject ContextControl ctxCtrl;
String fileType;
String fileName;
String hash;
BufferedReader reader = null;
public void run(File f, String fileType, String hash) throws ProcessorException, IOException{
this.fileType = fileType;
this.hash= hash;
this.fileName = f.getName();
try {
ctxCtrl.startContext(RequestScoped.class);
em.getTransaction().begin();
reader = openReader(f);
//rest of the code...
em.getTransaction().commit();
}catch (Exception e) {
logger.error(e.getMessage(), e);
try{ //for database breakdown purpose
em.getTransaction().rollback();
}catch(Exception e2){
logger.error(e2.getMessage(), e2);
throw new ProcessorException();
}
throw new ProcessorException();
}finally{
reader.close();
ctxCtrl.stopContext(RequestScoped.class);
}
"run" method is executed inside a loop. This method is executed serially, there is no possible concurrency.
Now, the thing is that the thread stops randomly at the line "em.getTransaction().begin();", with no exception. And since this is a critical section, the whole application stops and the lock is never released.
The only thing I can think of is the begin() method getting stuck somehow, not by throwing an exception but by blocking on a lock (since no exception is caught).
I wasn't able to recreate the issue; I can only say that it has nothing to do with the file. Also, this is happening in production, so I can't debug the application other than checking some logs.
Thanks in advance.
EDIT
I use DeltaSpike to provide CDI. The EntityManager is injected wherever it's needed. It's created like this:
CLASS ENTITYMANAGER FACTORY PRODUCER
import java.io.FileInputStream;
import java.util.Properties;
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.context.RequestScoped;
import javax.enterprise.inject.Disposes;
import javax.enterprise.inject.Produces;
import javax.inject.Inject;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;
import org.slf4j.Logger;
@ApplicationScoped
public class EntityManagerFactoryProducer {
@Inject Logger logger;
@Produces
@ApplicationScoped
public EntityManagerFactory create() {
Properties props = new Properties();
try {
props.load(new FileInputStream("cfg/connection.properties"));
} catch (Exception e) {
logger.error(e.getMessage(), e);
}
return Persistence.createEntityManagerFactory("scgach",props);
}
public void destroy(@Disposes EntityManagerFactory factory) {
factory.close();
}
}
CLASS ENTITYMANAGER PRODUCER
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.context.RequestScoped;
import javax.enterprise.inject.Disposes;
import javax.enterprise.inject.Produces;
import javax.inject.Inject;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
@ApplicationScoped
public class EntityManagerProducer {
@Inject EntityManagerFactory emf;
@Produces @RequestScoped
public EntityManager create() {
return emf.createEntityManager();
}
public void destroy(@Disposes EntityManager em) {
if(em.isOpen())
em.close();
}
}

NameAlreadyBoundException using JNDI

I created a session bean with this code:
package ejb2;
import javax.annotation.Resource;
import javax.ejb.SessionContext;
import javax.ejb.Stateless;
@Stateless(name = "TestEJB", mappedName = "EJB2-Project1-TestEJB")
public class TestEJBBean implements TestEJB, TestEJBLocal {
@Resource
SessionContext sessionContext;
public TestEJBBean() {
}
public String getHello(String who_welcome) {
return "Hello " + who_welcome;
}
}
As you can see, it's almost all default code (except for the getHello method). Besides this bean I have a client:
package ejb2;
import java.util.Hashtable;
import javax.naming.CommunicationException;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;
public class TestEJBClient {
public static void main(String[] args) {
try {
final Context context = getInitialContext();
TestEJB testEJB = (TestEJB) context.lookup("EJB2-Project1-TestEJB#ejb2.TestEJB");
System.out.println(testEJB.getHello("Student"));
} catch (CommunicationException ex) {
System.out.println(ex.getClass().getName());
System.out.println(ex.getRootCause().getLocalizedMessage());
System.out.println("\n*** A CommunicationException was raised. This typically\n*** occurs when the target WebLogic server is not running.\n");
} catch (Exception ex) {
ex.printStackTrace();
}
}
private static Context getInitialContext() throws NamingException {
Hashtable env = new Hashtable();
// WebLogic Server 10.x/12.x connection details
env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
env.put(Context.PROVIDER_URL, "t3://localhost:7101");
return new InitialContext(env);
}
}
The first time, it worked like a charm. But then I created another bean:
package ejb2;
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.ejb.SessionContext;
import javax.naming.Context;
import javax.naming.InitialContext;
@Stateless(name = "ClientEJB", mappedName = "EJB2-Project1-ClientEJB")
public class ClientEJBBean implements ClientEJB, ClientEJBLocal {
@Resource
SessionContext sessionContext;
TestEJB testEJB;
public ClientEJBBean() {
try {
final Context context = new InitialContext();
testEJB = (TestEJB) context.lookup("EJB2-Project1-TestEJB#ejb2.TestEJB");
} catch (Exception ex) {
ex.printStackTrace();
}
}
public String getHelloFromBean(String who) {
return testEJB.getHello(who);
}
}
And now the beans aren't working. I get an error like this:
weblogic.application.ModuleException: Unable to bind Business Interface to the JNDI name: EJB2Project1WebApp_warClientEJB_Home, throw exception javax.naming.NameAlreadyBoundException: [EJB:011224]Unable to bind the interface ejb2.ClientEJB to ClientEJB. Another EJB has already bound an interface to that name.; remaining name 'EJB2-Project1-ClientEJB#ejb2'. NestedException Message is :[EJB:011224]Unable to bind the interface ejb2.ClientEJB to ClientEJB. Another EJB has already bound an interface to that name.
What's the problem with these codes?
As far as I can see, you are trying to deploy two stateless EJBs with the same JNDI name.
Try to undeploy the current application, check the JNDI tree from the Admin Console,
and make sure the tree does not contain the JNDI name reported as a duplicate.

Check for open GUI Instance

I was wondering if it is possible to check whether there is an instance of an object (my GUI) open in Java and, if so, how I would be able to find it?
You can use the following code if this question is about a Swing window like a JFrame or JDialog:
boolean isOpen = false;
java.awt.Window[] win = java.awt.Window.getWindows();
for (int i = 0; i < win.length; i++) {
    if (win[i].getName().equals("YourWindowName")) {
        isOpen = true;
        break;
    }
}
For this you need to give your JFrame a name; if it matches one of the open windows, isOpen is set to true and the loop exits.
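For example (a small usage sketch; the frame and its title are made up for illustration):
JFrame frame = new JFrame("My Application");
frame.setName("YourWindowName"); // the name the loop above compares against
frame.setVisible(true);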
I used RMI to solve the same problem. My application creates an RMI registry and places a lock object there after starting. If the lock object is already there, it sends a message via RMI to the existing application instance and terminates. The message triggers the existing instance to move its window to the front. Here is the code:
public static void main(String[] args) {
RmiManager rmiManager = new RmiManager();
rmiManager.createRmiRegistry();
if(rmiManager.isAlreadyRunning()) {
logger.error("Another application instance is running! Exit");
System.exit(0);
return;
}
rmiManager.registerApplication();
}
RmiManager.java, which is responsible for all the work:
package myapp;
import java.rmi.AccessException;
import java.rmi.AlreadyBoundException;
import java.io.File;
import java.io.IOException;
import java.rmi.registry.LocateRegistry;
import java.rmi.NotBoundException;
import java.rmi.registry.Registry;
import java.rmi.RemoteException;
import java.rmi.server.UnicastRemoteObject;
import org.apache.log4j.Logger;
public class RmiManager {
private static final String LOCK_OBJECT_NAME = "myapp";
private static final Logger logger = Logger.getLogger(RmiManager.class);
public void createRmiRegistry() {
try {
logger.debug("Creating RMI registry...");
LocateRegistry.createRegistry(Registry.REGISTRY_PORT);
logger.debug("RMI registry was created");
} catch (RemoteException e) {
logger.debug("RMI registry is already created");
}
}
public boolean isAlreadyRunning() {
try {
logger.debug("Checking if application is already running. Looking for RMI registry...");
Registry registry = LocateRegistry.getRegistry();
logger.debug("RMI registry obtained. Looking for RmiListener: " + LOCK_OBJECT_NAME + "...");
try {
IRmiListener rmiListener = (IRmiListener) registry.lookup(LOCK_OBJECT_NAME);
logger.debug("RmiListener got. Checking...");
boolean isAlreadyRunning = rmiListener.isAlreadyRunning();
logger.debug("IsAlreadyRunning result: " + isAlreadyRunning);
return isAlreadyRunning;
} catch (AccessException e) {
logger.error("Error accessing RMI registry!", e);
return false;
} catch (NotBoundException e) {
logger.debug("RMI listener wasn't found. There are no other application instances running");
return false;
}
} catch (RemoteException e) {
logger.error("RemoteException!", e);
return false;
}
}
public void registerApplication() {
try {
logger.debug("Registering application...");
RmiListenerImpl rmiListenerImpl = new RmiListenerImpl();
logger.debug("Exporting RmiListener object...");
IRmiListener rmiListener = (IRmiListener) UnicastRemoteObject.exportObject(rmiListenerImpl, Registry.REGISTRY_PORT);
logger.debug("RmiListener object was exported. Looking for RMI registry...");
Registry registry = LocateRegistry.getRegistry();
logger.debug("RMI registry found");
try {
logger.debug("Binding RmiListener to " + LOCK_OBJECT_NAME + "...");
registry.bind(LOCK_OBJECT_NAME, rmiListener);
logger.debug("RmiListener binding was done. Application registration complete.");
} catch (AccessException e) {
logger.error("AccessException!", e);
} catch (AlreadyBoundException e) {
logger.error("RmiListener object is already bind", e);
}
} catch (RemoteException e) {
logger.error("RemoteException!", e);
}
}
}
IRmiListener.java
package myapp;
import java.rmi.Remote;
import java.rmi.RemoteException;
public interface IRmiListener extends Remote {
boolean isAlreadyRunning() throws RemoteException;
}
RmiListenerImpl.java
package myapp;
import java.rmi.RemoteException;
import org.apache.log4j.Logger;
public class RmiListenerImpl implements IRmiListener {
private static final Logger logger = Logger.getLogger( RmiListenerImpl.class );
@Override
public boolean isAlreadyRunning() throws RemoteException {
// here I notify my GUI class to pop up the window
return true;
}
}
It can be simpler, I think.
Assuming that by "open UI objects" you mean Swing dialogs and frames, it is better to design the application in a way that removes the need to look for open instances altogether.
This can be achieved by providing a factory that produces application dialogs and frames instead of using something like new JFrame. This factory would register the produced instances internally and would serve as a single point of reference for all "open UI objects".
Be careful when implementing such a solution, though, as every registered object would have one additional reference, preventing the GC from collecting the allocated memory as intended. Please use weak references (a weak-reference map) for the cache. A good blog post about the different kinds of Java references can be found here.
This way, if you need to find an open UI object, you simply ask your factory for the list of open instances.
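A minimal sketch of such a factory (the class and method names are my own, not from the post), using a set backed by a WeakHashMap so that registration does not keep otherwise unreferenced windows alive:
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Set;
import java.util.WeakHashMap;
import javax.swing.JFrame;

public final class WindowFactory {
    // Keys of a WeakHashMap are weakly referenced, so disposed and otherwise
    // unreachable frames can still be garbage collected.
    private static final Set<JFrame> OPEN_FRAMES =
            Collections.newSetFromMap(new WeakHashMap<JFrame, Boolean>());

    private WindowFactory() {
    }

    public static JFrame createFrame(String title) {
        JFrame frame = new JFrame(title);
        OPEN_FRAMES.add(frame);
        return frame;
    }

    // Snapshot of the factory-created frames that are still displayable.
    public static List<JFrame> openFrames() {
        List<JFrame> snapshot = new ArrayList<>();
        for (JFrame frame : OPEN_FRAMES) {
            if (frame.isDisplayable()) {
                snapshot.add(frame);
            }
        }
        return snapshot;
    }
}
Any code that needs the open windows then asks WindowFactory.openFrames() instead of scanning Window.getWindows().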
