Deactivating WebLogic load balancing optimization for collocated objects - java

Is there a way to deactivate the optimization for collocated objects that WebLogic uses by default, for a specific EJB?
EDIT: Some context :
We have a scheduler service that runs inside one node of the cluster. This is for historic reasons and cannot be changed at the moment.
This service makes calls to an EJB and we would like to load balance these calls. Unfortunately, at the moment every call runs on the node that hosts the scheduler service, because of the optimization mentioned in the question.
I was thinking of coding a custom load balancing class, however this optimization seems to happen before the load balancing step.

Supposing you are trying to call a remote EJB (load balancing on local EJBs can only be obtained through an indirection trick like the one Patrick mentioned), you will have to create a new InitialContext using the address of the cluster instead of a particular server. This new InitialContext will provide stubs as if you were a foreign client, subject to the same load balancing strategies.
Unfortunately, this means that EJB3 injection won't work; you will have to do the lookup yourself. There is a chance, and this is pure speculation, that the stubs you get from the cluster InitialContext are serializable. In other words, it might be possible to bind them and get them injected using @Resource afterwards.
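A minimal sketch of that lookup, assuming WebLogic's JNDI factory, a cluster reachable at t3://mycluster:7001 and a remote business interface MyService bound under ejb/MyServiceRemote (the URL, interface and JNDI name are placeholders, not from the question):

    import java.util.Hashtable;
    import javax.naming.Context;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;

    public class ClusterLookup {

        public static MyService lookupLoadBalanced() throws NamingException {
            Hashtable<String, String> env = new Hashtable<String, String>();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
            // Point the provider URL at the cluster address (DNS alias or address list),
            // not at the managed server the caller happens to run on.
            env.put(Context.PROVIDER_URL, "t3://mycluster:7001");
            Context ctx = new InitialContext(env);
            // Plain JNDI lookup instead of injection; the returned stub is the same kind a
            // foreign client would get, so the cluster's load balancing applies to it.
            return (MyService) ctx.lookup("ejb/MyServiceRemote");
        }
    }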

Not being too familiar with the guts of WebLogic, but reading their material, I would say you cannot, without some amount of trickery.
It does seem, though, that you can put a JMS (MDB) facade in front of your EJB, which is not subject to the collocated-object optimization.
OR
If your scheduler is servlet based, you should be able to deploy it in a separate web-application within the container and have it execute calls to the EJB cluster.

How many nodes are there in your cluster? Have you considered deploying the EJB to nodes other than the one the scheduler service has been deployed to?

I happened to stumble upon this old question.
Is it possible that you create a JMS queue in between the scheduler and the actual executor? Since all managed servers will consume from the same queue, the queue will act as a load-balancer, in a way.
If you have solved this issue differently, I'd be interested to know how :)
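For what it's worth, a rough sketch of the queue-in-between idea: the scheduler only enqueues a work item, and whichever cluster member's listener consumes the message performs the real EJB call. The class, method and JNDI names below are illustrative.

    import javax.annotation.Resource;
    import javax.ejb.Stateless;
    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.JMSException;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;

    @Stateless
    public class JobDispatcher {

        @Resource(mappedName = "jms/WorkConnectionFactory")
        private ConnectionFactory connectionFactory;

        @Resource(mappedName = "jms/WorkQueue")
        private Queue workQueue;

        public void enqueueJob(String jobId) throws JMSException {
            Connection connection = connectionFactory.createConnection();
            try {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(workQueue);
                // Whichever managed server's MDB receives this message does the actual work,
                // so the queue effectively load-balances across the cluster.
                producer.send(session.createTextMessage(jobId));
            } finally {
                connection.close();
            }
        }
    }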

Related

Is there a way to offload EclipseLink Cache in Glassfish for EJB to Redis or another outside server for load balancing?

I have an EJB packaged in an EAR and deployed to Glassfish.
Currently we just use Glassfish/Eclipselink for caching.
But our server is starting to come under heavy loads and I want to set it up behind a load balancer on AWS.
The problem is, I don't want my cache to be out of sync for automatically spun up instances. I want the instances to be completely automatic.
I know you can set Glassfish up in a cluster, but as far as I know that isn't automatic. I would have to manage it myself. I want to fully automate everything.
It would be awesome if the Glassfish instances could be completely independent of each other, and I could use Redis or another server like that to offload the cache. That way the cache would be in one place, the Glassfish instances could spin up and down automatically and it would never matter, I wouldn't have to register them with a Glassfish cluster. I could also use the same Redis cache for the front end of the application. Glassfish is running the business layer accessible by API calls. The front end web is running separately. I was going to set up a Redis cache for that also, but if they could both share the same cache, that would be awesome.
Any ideas?
I can only answer at a conceptual level, since I don't know the products used in detail.
Regardless of whether you add another level of caching, you need to take care of data consistency within your application.
In a cluster setup, a local non-distributed cache is no problem. The consistency coordination solves this, e.g. via JMS. You need to explore how to set up the consistency coordination across your cluster.

Spring MVC with JBoss vs Tomcat - Advantages / Practice

Okay. This is again a question of industry practice.
Tomcat = Web Container
JBoss, WebLogic, etc = Application Servers that have a Web Container within (for JBoss, it's a forked Tomcat)
Spring does not need an Application Server like JBoss. If we use enterprise services like JMS, we can use independent systems like RabbitMQ, ActiveMQ, etc.
The question is: why do people still use JBoss and other Application Servers for purely Spring-based applications?
What are the advantages Spring can make use of by using Application Servers? Like object pooling? What specific advantages does an Application Server offer? How are those configured?
If not for Spring, what other purposes are Application Servers used for in a Spring/Hibernate etc. stack? (Use cases)
Actually I would say listening for JMS is probably the best reason for an application server. A stand-alone message broker does not fix the problem, since you still need a component that's listening for messages. The best way to do this is to use an MDB. In theory you can use Spring's MessageListenerContainer. However this has several disadvantages: JMS only supports blocking reads, so Spring needs to spin up its own threads, which is totally unsupported (even in Tomcat) and can break transactions, security, naming (JNDI) and class loading (which in turn can break remoting). A JCA resource adapter is free to do whatever it wants, including spinning up threads via WorkManager. Likely a database is used besides JMS (or another destination), at which point you need XA transactions and JTA, in other words an application server. Yes, you can patch this into a servlet container, but at that point it becomes indistinguishable from an application server.
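For illustration, a minimal MDB sketch of the container-managed listening described above; the queue name, the injected OrderService bean and the activation property spelling (which varies slightly between servers and spec versions) are assumptions, not from the answer.

    import javax.ejb.ActivationConfigProperty;
    import javax.ejb.EJB;
    import javax.ejb.MessageDriven;
    import javax.jms.Message;
    import javax.jms.MessageListener;

    @MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
        @ActivationConfigProperty(propertyName = "destinationLookup", propertyValue = "jms/OrdersQueue")
    })
    public class OrderListener implements MessageListener {

        @EJB
        private OrderService orderService; // hypothetical business bean

        @Override
        public void onMessage(Message message) {
            // The container dispatches this on a managed thread, so JTA transactions,
            // security context, JNDI and class loading behave as expected.
            orderService.process(message);
        }
    }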
IMHO the biggest reason against application servers is that it takes years after a spec is published (which in turn takes years as well) until servers implement the spec and have ironed out the worst bugs. Only now, right before EE 7 is about to be published, are EE 6 servers starting to appear that are not totally riddled with bugs. It gets comical to the point where some vendors no longer fix bugs in their EE 6 line because they're already busy with the upcoming EE 7 line.
Edit
Long explanation of the last paragraph:
Java EE in a lot of places relies on what's called contextual information. Information that's not explicitly passed as an argument from the server/container to the application but implicitly "there". For example the current user for security checks. The current transaction or connection. The current application for looking up classes to lazily load code or deserialize objects. Or the current component (servlet, EJB, …) for doing JNDI look ups. All this information is in thread locals that the server/container sets before calling a component (servlet, EJB, …). If you create your own threads then the server/container doesn't know about them and all the features relying on this information don't work anymore. You might get away with this by just not using any of those features in threads you spawn.
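An illustrative sketch of that point, with a placeholder JNDI name: the same lookup that works on the container's request thread may fail, or see a different context, on a thread the application spawned itself, because none of the thread-local state above is propagated to it.

    import java.io.IOException;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class ContextDemoServlet extends HttpServlet {

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            try {
                // Works: container thread, java:comp/env is bound for this component.
                Object ds = new InitialContext().lookup("java:comp/env/jdbc/MyDS");

                new Thread(new Runnable() {
                    public void run() {
                        try {
                            // May fail: unmanaged thread, the component namespace, security
                            // and transaction context are not set up by the container here.
                            new InitialContext().lookup("java:comp/env/jdbc/MyDS");
                        } catch (NamingException e) {
                            e.printStackTrace();
                        }
                    }
                }).start();
            } catch (NamingException e) {
                throw new IOException(e);
            }
        }
    }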
Some links
http://www.oracle.com/technetwork/java/restrictions-142267.html#threads
http://www.ibm.com/developerworks/websphere/techjournal/0609_alcott/0609_alcott.html#spring-4
If we check the Servlet 3.0 specification we find:
2.3.3.3 Asynchronous processing
Java Enterprise Edition features such as Section 15.2.2, “Web Application Environment” on page 15-174 and Section 15.3.1, “Propagation of Security Identity in EJB™ Calls” on page 15-176 are available only to threads executing the initial request or when the request is dispatched to the container via the AsyncContext.dispatch method. Java Enterprise Edition features may be available to other threads operating directly on the response object via the AsyncContext.start(Runnable) method.
This is about asynchronous processing but the same restrictions apply for custom threads.
public void start(Runnable r) - This method causes the container to dispatch a thread, possibly from a managed thread pool, to run the specified Runnable. The container may propagate appropriate contextual information to the Runnable.
Again, asynchronous processing but the same restrictions apply for custom threads.
15.2.2  Web Application Environment
This type of servlet container should support this behavior when performed on threads created by the developer, but are not currently required to do so. Such a requirement will be added in the next version of this specification. Developers are cautioned that depending on this capability for application-created threads is not recommended, as it is non-portable.
Non-portable means it may work in one server but not in another.
When you want to receive messages with JMS outside of an MDB, you can use four methods on javax.jms.MessageConsumer:
#receiveNoWait(): you can do this in a container thread; it doesn't block, but it's like peeking. If no message is present it just returns null. This isn't very well suited to listening for messages.
#receive(long): you can do this in a container thread, but it does block. You generally don't want to do blocking waits in a container thread. Again, not very well suited to listening for messages.
#receive(): this blocks, possibly indefinitely. Again, not very well suited to listening for messages.
#setMessageListener(): this is what you want, you get a callback when a message arrives (a small sketch follows below). However, unless the library can hook into the application server, this won't be a container thread. The hooks into the application server are only available via JCA to resource adapters.
So yes, it may work, but it's not guaranteed and there are a lot of things that may break.
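A small sketch of the #setMessageListener() option, assuming a ConnectionFactory and Queue obtained elsewhere (e.g. via JNDI); as argued above, the callback arrives on a provider thread, not a container-managed one, unless a JCA resource adapter is involved.

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.JMSException;
    import javax.jms.Message;
    import javax.jms.MessageConsumer;
    import javax.jms.MessageListener;
    import javax.jms.Queue;
    import javax.jms.Session;

    public class PlainJmsListener {

        public void listen(ConnectionFactory connectionFactory, Queue queue) throws JMSException {
            Connection connection = connectionFactory.createConnection();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageConsumer consumer = session.createConsumer(queue);
            consumer.setMessageListener(new MessageListener() {
                public void onMessage(Message message) {
                    // Runs on the JMS provider's thread: container context is not guaranteed here.
                }
            });
            connection.start(); // delivery only begins once the connection is started
        }
    }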
You are right that you don't really need a true application server (implementing all the Java EE specs) to use Spring. The biggest reason people don't use true Java EE servers like JBoss is that they have been slow as #$##% on cold start-up time, making development a pain (hot deploy still doesn't work that well).
You see there are two camps:
Java EE
Spring Framework.
One of the camps believes in the spec/committee process and the other believes in a benevolent dictator / organic OSS process. Both have people with their "agendas".
You're probably not going to get a very good unbiased answer, as these two camps are much like the Emacs vs Vim war.
Answering your questions with a Spring bias:
Because in theory it buys you less vendor lock-in (albeit I have found this to be the opposite).
Spring's biggest advantage is AspectJ AOP. By far.
I guess see Philippe's answer.
(start of rant)
Since @PhilippeMarschall defended Java EE, I will say that I have done the Tomcat+RabbitMQ+Spring route and it works quite well. @PhilippeMarschall's discussion is valid if you want proper JTA+JMS, but with a proper setup with Spring AMQP and a good transactional database like PostgreSQL this is less of an issue. Also he is incorrect about the message queue transactions not being bound/synchronized to the platform transactions, as Spring supports this (and IMHO much more elegantly with @Transactional AOP). Also AMQP is just plain superior to JMS.
(end of rant)
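To make that a bit more concrete, a rough sketch of the Tomcat+RabbitMQ+Spring route; OrderRepository and Order are hypothetical application types, and the RabbitTemplate is assumed to be configured with channelTransacted=true so the send is synchronized (best effort, not XA) with the surrounding database transaction.

    import org.springframework.amqp.rabbit.core.RabbitTemplate;
    import org.springframework.stereotype.Service;
    import org.springframework.transaction.annotation.Transactional;

    @Service
    public class OrderService {

        private final OrderRepository repository;
        private final RabbitTemplate rabbitTemplate;

        public OrderService(OrderRepository repository, RabbitTemplate rabbitTemplate) {
            this.repository = repository;
            this.rabbitTemplate = rabbitTemplate;
        }

        @Transactional
        public void placeOrder(Order order) {
            repository.save(order);
            // With a transacted channel, the publish is committed together with the JDBC
            // transaction; if the database work rolls back, the message is not sent.
            rabbitTemplate.convertAndSend("orders", order);
        }
    }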
We are using JBoss over Tomcat for the JNDI data sources and pooling. It means the programmer doesn't have to know anything about the database but its JNDI name.
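A small sketch of what that buys you, with an illustrative JNDI name: the server owns the pool, the driver, the URL and the credentials, and the code only refers to the JNDI name.

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;
    import javax.annotation.Resource;
    import javax.ejb.Stateless;
    import javax.sql.DataSource;

    @Stateless
    public class ReportDao {

        @Resource(lookup = "java:/jdbc/AppDS") // placeholder JNDI name configured on the server
        private DataSource dataSource;

        public int countOrders() throws SQLException {
            Connection connection = dataSource.getConnection(); // borrowed from the server's pool
            try {
                Statement statement = connection.createStatement();
                ResultSet rs = statement.executeQuery("SELECT COUNT(*) FROM orders");
                rs.next();
                return rs.getInt(1);
            } finally {
                connection.close(); // returns the connection to the pool
            }
        }
    }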

How to trigger code on two different servers in a WAS cluster?

I have an administrative page in a web application that resets the cache, but it only resets the cache on the current JVM.
The web application is deployed as a cluster to two WAS servers.
Any way that I can elegantly have the "clear cache" button on each server trigger the method on both JVMs?
Edit:
The original developer just wrote a singleton holding a HashMap to be the cache in question. Lightweight and (previously) worked just fine for the requirements. It caches content pulled from six or seven web services for specified amounts of time.
Edit:
The entire application in question is three pages, so the elegant solution might well be the lightest solution.
Since the cache is internal to your application, you are going to need to provide an interface within the application to clear it. Quotidian says to use a JMS queue. That will not work, because only one instance will pick up the message, assuming you have clustered MQ queues.
If you want to reuse the same implementation, then the only way to do this is to write some JMX that you will be able to interact with at the instance level.
If not, you can either use the built-in WAS cache (which is JMX-enabled) or use a distributed cache like Ehcache.
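A sketch of the JMX route, assuming the existing singleton HashMap cache is reachable through a (hypothetical) CacheSingleton class; the MBean name and ObjectName are illustrative, and the two types are shown together here although they would live in separate source files.

    import java.lang.management.ManagementFactory;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;

    public interface CacheControlMBean {
        void clearCache();
    }

    public class CacheControl implements CacheControlMBean {

        @Override
        public void clearCache() {
            CacheSingleton.getInstance().clear(); // clears only this JVM's copy of the cache
        }

        // Register one MBean per JVM, e.g. from a ServletContextListener at application startup.
        public static void register() throws Exception {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            server.registerMBean(new CacheControl(), new ObjectName("app:type=CacheControl"));
        }
    }

The "clear cache" button (or an admin script) would then connect to each cluster member's JMX connector and invoke clearCache on both JVMs.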
In the past I have created a subclassed LinkedHashMap that was linked to all instances on the network using JBoss JGroups. Of course, reinventing the wheel is always painful.
I tend to use a JMS queue for doing exactly that.

How to start a background process in Java EE

I want to start a background process in a Java EE (OC4J 10) environment. It seems wrong to just start a Thread with "new Thread", but I can't find a good alternative.
Using a JMS queue is difficult in my special case, since my parameters for this method call are not serializable.
I also thought about using an onTimeout Timer Method on a session bean but this does not allow me to pass parameters (as far as I know).
Is there any "canonical" way to handle such a task, or do I just have to resort to "new Thread" or a java.util.concurrent thread pool?
Java EE usually attempts to remove threading from the developer's concerns. (Its success at this is a completely different topic.)
JMS is clearly the preferred approach to handle this.
With most parameters, you have the option of forcing or faking serialization, even if they aren't serializable by default. Depending on the data, consider wrapping it in a serializable object that can reload the data. This will clearly depend on the parameter and application.
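A sketch of that wrapping idea; Document and DocumentStore are hypothetical stand-ins for the non-serializable parameter and for whatever component can reload it on the consuming side.

    import java.io.Serializable;

    public class DocumentRef implements Serializable {

        private static final long serialVersionUID = 1L;

        private final long documentId; // only the key travels in the JMS message

        public DocumentRef(long documentId) {
            this.documentId = documentId;
        }

        // Called by the consumer (e.g. an MDB) to rebuild the heavy object from its key.
        public Document resolve(DocumentStore store) {
            return store.load(documentId);
        }
    }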
JMS is the Java EE way of doing this. You can start your own threads if the container lets you, but that does violate the Java EE spec (you may or may not care about this).
If you don't care about Java EE generic compliance (if you would in fact resort to threads rather than deal with JMS), the Oracle container will for sure have proprietary ways of doing this (such as the OracleAS Job Scheduler).
I don't know OC4J in detail, but I have used the Thread approach and a java.util.Timer approach to perform some tasks in a Tomcat-based application. In Java 5+ there is the option to use one of the executor services (Scheduled, Priority).
I don't know about the onTimeout, but you could pass parameters around in the session itself, the application context or in a static variable (discouraged, some would say). But the name tells me it is invoked when the user's session times out and you want to do some cleanup.
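A sketch of the java.util.concurrent option mentioned above, wired into a web application's lifecycle; the task body is a placeholder, and the caveats about unmanaged threads discussed in the other answers still apply.

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;
    import javax.servlet.ServletContextEvent;
    import javax.servlet.ServletContextListener;

    public class BackgroundTaskListener implements ServletContextListener {

        private ScheduledExecutorService scheduler;

        @Override
        public void contextInitialized(ServletContextEvent sce) {
            scheduler = Executors.newSingleThreadScheduledExecutor();
            // Run the (hypothetical) background task every five minutes, starting after one minute.
            scheduler.scheduleAtFixedRate(new Runnable() {
                public void run() {
                    // placeholder for the actual background work
                }
            }, 1, 5, TimeUnit.MINUTES);
        }

        @Override
        public void contextDestroyed(ServletContextEvent sce) {
            scheduler.shutdownNow(); // stop the unmanaged thread when the application is undeployed
        }
    }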
Using JMS is the right way to do it, but it's heavier-weight.
The advantage you get is that whether you need multiple servers, one server or whatever, once the servers are configured your "threading" can be distributed across multiple machines.
It also means you don't want to send a message for a truly trivial amount of work or with a massive amount of data. Choose your interface points well.
see here for some more info:
stackoverflow.com/questions/533783/why-spawning-threads-in-j2ee-container-is-discouraged
I've been creating threads in a container (Tomcat, JBoss) with no problem, but they were really simple queues, and I don't rely on clustering.
However, EJB 3.1 will introduce asynchronous invocation that you may find useful:
http://www.theserverside.com/tt/articles/article.tss?track=NL-461&ad=700869&l=EJB3-1Maturity&asrc=EM_NLN_6665442&uid=2882457
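For reference, a sketch of what that asynchronous invocation looks like in EJB 3.1; the bean, method and ReportRequest type are illustrative names.

    import java.util.concurrent.Future;
    import javax.ejb.AsyncResult;
    import javax.ejb.Asynchronous;
    import javax.ejb.Stateless;

    @Stateless
    public class ReportGenerator {

        @Asynchronous
        public Future<String> generate(ReportRequest request) {
            // The container runs this on a managed thread, so transactions and the security
            // context are handled for you, unlike a hand-rolled "new Thread".
            String result = renderReport(request);
            return new AsyncResult<String>(result);
        }

        private String renderReport(ReportRequest request) {
            return "report"; // placeholder
        }
    }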
Java EE doesn't really forbid you to create your own threads; it's the EJB spec that says "unmanaged threads" aren't allowed. The reason is that these threads are unknown to the application server, and therefore the container cannot manage things like security and transactions on these threads.
Nevertheless, there are lots of frameworks out there that do create their own threads, for example Quartz, Axis and Spring. Chances are you're already using one of these, so it's not that bad to create your own threads as long as you're aware of the consequences. That said, I agree with the others that the use of JMS or JCA is preferred over manual thread creation.
By the way, OC4J allows you to create your own threads. However it doesn't allow JNDI lookups from these unmanaged threads. You can disable this restriction by specifying the -userThreads argument.
I come from a .NET background, and JMS seems quite heavy-weight to me. Instead, I recommend Quartz, which is a background-scheduling library for Java and JEE apps. (I used Quartz.NET in my ASP.NET MVC app with much success.)

What's the best way to share business object instances between Java web apps using JBoss and Spring?

We currently have a web application loading a Spring application context which instantiates a stack of business objects, DAO objects and Hibernate. We would like to share this stack with another web application, to avoid having multiple instances of the same objects.
We have looked into several approaches; exposing the objects using JMX or JNDI, or using EJB3.
The different approaches all have their issues, and we are looking for a lightweight method.
Any suggestions on how to solve this?
Edit: I have received comments requesting me to elaborate a bit, so here goes:
The main problem we want to solve is that we want to have only one instance of Hibernate. This is due to problems with invalidation of Hibernate's 2nd level cache when running several client applications working with the same datasource. Also, the business/DAO/Hibernate stack is growing rather large, so not duplicating it just makes more sense.
First, we tried to look at how the business layer alone could be exposed to other web apps, and Spring offers JMX wrapping at the price of a tiny amount of XML. However, we were unable to bind the JMX entities to the JNDI tree, so we couldn't lookup the objects from the web apps.
Then we tried binding the business layer directly to JNDI. Although Spring didn't offer any method for this, using JNDITemplate to bind them was also trivial. But this led to several new problems: 1) Security manager denies access to RMI classloader, so the client failed once we tried to invoke methods on the JNDI resource. 2) Once the security issues were resolved, JBoss threw IllegalArgumentException: object is not an instance of declaring class. A bit of reading reveals that we need stub implementations for the JNDI resources, but this seems like a lot of hassle (perhaps Spring can help us?)
We haven't looked too much into EJB yet, but after the first two tries I'm wondering if what we're trying to achieve is at all possible.
To sum up what we're trying to achieve: One JBoss instance, several web apps utilizing one stack of business objects on top of DAO layer and Hibernate.
Best regards,
Nils
Are the web applications deployed on the same server?
I can't speak for Spring, but it is straightforward to move your business logic in to the EJB tier using Session Beans.
The application organization is straightforward. The logic goes into Session Beans, and these Session Beans are bundled within a single jar as a Java EE artifact with an ejb-jar.xml file (in EJB3, this will likely be practically empty).
Then bundle your entity classes into a separate jar file.
Next, you will build each web app into its own WAR file.
Finally, all of the jars and the wars are bundled into a Java EE EAR, with the associated application.xml file (again, this will likely be quite minimal, simply enumerating the jars in the EAR).
This EAR is deployed wholesale to the app server.
Each WAR is effectively independent -- their own sessions, their own context paths, etc. But they share the common EJB back end, so you have only a single 2nd-level cache.
You also use local references and call semantics to talk to the EJBs, since they're in the same server. No need for remote calls here.
I think this solves the issue you're having quite well, and it is quite straightforward in Java EE 5 with EJB 3.
Also, you can still use Spring for much of your work, as I understand, but I'm not a Spring person so I can not speak to the details.
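A sketch of what the shared EJB tier could look like, with illustrative names (Product is a hypothetical entity); both WARs in the EAR reference the same local interface, e.g. via @EJB injection, so there is a single stack of session beans and a single persistence unit behind them.

    import java.util.List;
    import javax.ejb.Local;
    import javax.ejb.Stateless;
    import javax.persistence.EntityManager;
    import javax.persistence.PersistenceContext;

    @Local
    public interface CatalogService {
        List<Product> findByName(String name);
    }

    @Stateless
    public class CatalogServiceBean implements CatalogService {

        @PersistenceContext
        private EntityManager em; // one persistence unit, hence one 2nd-level cache

        @Override
        public List<Product> findByName(String name) {
            return em.createQuery("select p from Product p where p.name like :n", Product.class)
                     .setParameter("n", "%" + name + "%")
                     .getResultList();
        }
    }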
What about Spring's parentContext?
Check out this article:
http://springtips.blogspot.com/2007/06/using-shared-parent-application-context.html
Terracotta might be a good fit here (disclosure: I am a developer for Terracotta). Terracotta transparently clusters Java objects at the JVM level, and integrates with both Spring and Hibernate. It is free and open source.
As you said, the problem of more than one client web app using an L2 cache is keeping those caches in sync. With Terracotta you can cluster a single Hibernate L2 cache. Each client node works with its copy of that clustered cache, and Terracotta keeps it in sync. This link explains more.
As for your business objects, you can use Terracotta's Spring integration to cluster your beans - each web app can share clustered bean instances, and Terracotta keeps the clustered state in synch transparently.
Actually, if you want a lightweight solution and don't need transactions or clustering, just use Spring's support for RMI. It allows you to expose Spring beans remotely using simple annotations in the latest versions. See http://static.springframework.org/spring/docs/2.0.x/reference/remoting.html.
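A sketch of that remoting setup on the exporting side, using Spring's RmiServiceExporter in Java config; BusinessService, the service name and the registry port are illustrative. The consuming web app would use an RmiProxyFactoryBean pointed at the same service URL.

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.remoting.rmi.RmiServiceExporter;

    @Configuration
    public class RemotingConfig {

        @Bean
        public RmiServiceExporter businessServiceExporter(BusinessService businessService) {
            RmiServiceExporter exporter = new RmiServiceExporter();
            exporter.setServiceName("BusinessService"); // looked up as rmi://host:1199/BusinessService
            exporter.setService(businessService);
            exporter.setServiceInterface(BusinessService.class);
            exporter.setRegistryPort(1199);
            return exporter;
        }
    }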
You should take a look at the Terracotta Reference Web Application - Examinator. It has most of the components you are looking for - it's got Hibernate, JPA, and Spring with a MySQL backend.
It's been pre-tuned to scale up to 16 nodes, 20k concurrent users.
Check it out here: http://reference.terracotta.org/examinator
Thank you for your answers so far. We're still not quite there, but we have tried a few things now and see things more clearly. Here's a short update:
The solution which appears to be the most viable is EJB. However, this will require some amount of changes in our code, so we're not going to fully implement that solution right now. I'm almost surprised that we haven't been able to find some Spring feature to help us out here.
We have also tried the JNDI route, which ends with the need for stubs for all shared interfaces. This feels like a lot of hassle, considering that everything is on the same server anyway.
Yesterday, we had a small breakthrough with JMX. Although JMX is definitely not meant for this kind of use, we have proven that it can be done - with no code changes and a minimal amount of XML (a big thank you to Spring for MBeanExporter and MBeanProxyFactoryBean). The major drawbacks to this method are performance and the fact that our domain classes must be shared through JBoss' server/lib folder. I.e., we have to remove some dependencies from our WARs and move them to server/lib, or else we get ClassCastException when the business layer returns objects from our own domain model. I fully understand why this happens, but it is not ideal for what we're trying to achieve.
I thought it was time for a little update, because what appears to be the best solution will take some time to implement. I'll post our findings here once we've done that job.
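A very rough sketch of the MBeanExporter / MBeanProxyFactoryBean pairing mentioned above, shown as Java config rather than XML; the bean names, ObjectName and BusinessService interface are illustrative, and the MBeanServer wiring (in-VM server vs. remote serviceUrl) is omitted.

    import java.util.Collections;
    import javax.management.MalformedObjectNameException;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.jmx.access.MBeanProxyFactoryBean;
    import org.springframework.jmx.export.MBeanExporter;

    @Configuration
    public class JmxSharingConfig {

        // Exporting side: publish the business bean as an MBean.
        @Bean
        public MBeanExporter exporter(BusinessService businessService) {
            MBeanExporter exporter = new MBeanExporter();
            exporter.setBeans(Collections.<String, Object>singletonMap(
                    "app:name=businessService", businessService));
            return exporter;
        }

        // Consuming side (the other web app): proxy the MBean behind the shared interface.
        @Bean
        public MBeanProxyFactoryBean businessServiceProxy() throws MalformedObjectNameException {
            MBeanProxyFactoryBean proxy = new MBeanProxyFactoryBean();
            proxy.setObjectName("app:name=businessService");
            proxy.setProxyInterface(BusinessService.class);
            return proxy;
        }
    }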
Spring does have an integration point that might be of interest to you: the EJB 3 injection interceptor. This enables you to access Spring beans from EJBs.
I'm not really sure what you are trying to solve; at the end of the day each JVM will either have replicated instances of the objects, or stubs representing objects existing on another (logical) server.
You could set up a third 'business logic' server that has a remote API which your two web apps could call. The typical solution is to use EJB, but I think Spring has remoting options built into its stack.
The other option is to use some form of shared cache architecture... which will synchronize object changes between the servers, but you still have two sets of instances.
Take a look at JBossCache. It allows you to easily share/replicate maps of data between multiple JVM instances (same box or different). It is easy to use and has lots of wire-level protocol options (TCP, UDP multicast, etc.).
