Transaction across multiple web services - java

I have quite a simple problem. I am rewriting a very old app that accesses the database directly through DAO objects. There is no business layer (the code is not mine and is full of anti-patterns), so connection.setAutoCommit(false) is used everywhere in the code to start transactions. I had to rewrite the project for security reasons, so instead of a direct database connection it now uses web services with Hibernate/JPA on the J2EE server side (before it was a standalone app, now it is app + J2EE). Simple enough - I moved the DAO/VO objects to the web service server, rewrote the SQL as HQL, and replaced the DAO in the client with a web service client.
But what to do with the transaction code? Normally it is one transaction per web service call, so I would need some mechanism (a parameter in the web services?) that lets me reference the same Hibernate transaction across multiple web service calls. Is that a completely bad approach, and should I instead move the transactions into the server code?

I think you should use session beans exposed as JAX-RS services, and let them control the transactions.
If you need a transaction spanning multiple web service calls, just define a new web service - also an EJB session bean - that acts as a facade for the other calls.
I think implementing what you suggest (referencing the same Hibernate transaction across calls) is bad practice, and it might not even be possible. Each WS call is handled by a separate thread at a different moment in time, and mixing transactions across threads is not a good idea.
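A minimal sketch of that facade idea (all class and DAO names here are invented for illustration, not taken from the question), assuming a stateless session bean exposed over JAX-RS with container-managed transactions:

import javax.ejb.EJB;
import javax.ejb.Stateless;
import javax.ws.rs.POST;
import javax.ws.rs.Path;

@Stateless                      // container-managed transaction (REQUIRED) per method call
@Path("/orders")
public class OrderFacade {

    @EJB
    private OrderDao orderDao;  // hypothetical DAOs, now living on the server

    @EJB
    private AuditDao auditDao;

    @POST
    public void placeOrder(Order order) {
        // Both DAO calls run inside the single JTA transaction the container
        // starts for this one web service call; an unchecked exception rolls
        // the whole thing back.
        orderDao.insert(order);
        auditDao.logPlacement(order);
    }
}

The client then makes one call per unit of work instead of trying to keep a transaction open between calls.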

I also think this is bad practice, because web service methods are usually built coarse-grained, so one request per transaction is normally enough.
I can understand your need, but think about the downsides:
How would you roll back across several requests? If that is not possible, you will end up with data inconsistencies.
If it is possible, your web service is no longer stateless, which is commonly considered bad practice.
It also means your API requests depend on each other, so there are prerequisites for executing any given request.
Have you tried to fit the whole transaction into one request? That might help you restructure and possibly improve the code of your app.

Related

Migrating CORBA Application to Modern Java technologies (Rest/SOAP/EJB)

I have a requirement to migrate a legacy CORBA system to a modern Java technology. The main problem I am facing is providing long-lived database transactions in the proposed system. Currently the client (a Swing app) holds on to the CORBA service object and performs multiple DB operations before actually committing or rolling back all of them; the service layer keeps the state of the connection object for the whole transaction.
I want to reproduce this mechanism in the new system (REST/WS) so that the Swing client and/or a web client (in the future) can work the same way as before.
e.g.:
try {
    service1.updateXXData();  // --> insert into table XX
    service2.updateUUData();  // --> insert into table UU
    service1.updateZZData();  // --> insert into table ZZ
    service2.updateAAData();  // --> insert into table AA
    service1.commit();        // con.commit();
    service2.commit();        // con.commit();
} catch (Exception e) {
    service1.rollback();      // con.rollback();
    service2.rollback();      // con.rollback();
}
Now I want to migrate from CORBA to a modern technology, but I am still at a loss for a solution. (The constraint is that the client does not want any changes to the service layer or DB layer; they just want CORBA removed.)
A couple of options available to me are:
Migrate CORBA to RMI --> the changes required to the current system are minimal, but I would have to handle transaction management, connection pooling, and state retention myself.
Migrate CORBA to stateful EJB --> more changes required than with RMI, but better, since I can use container-managed connection pooling and maintain state in a cleaner way.
Migrate CORBA to a stateful web service (SOAP) --> more future-proof, but a lot of changes required. However, I could convert the IDL to WSDL and delegate the calls to the implementation layer.
Migrate CORBA to REST --> most desirable if possible, but the time required to migrate is huge; code changes would be needed from the UI layer down to the service layer.
Thank you very much in advance
The order in which I would choose the options, from best to worst, would be 4, 3, 2, and 1, however I'd avoid stateful beans or services if humanly possible to do so.
I'll go over the implementation details of what you'll have to do for each option.
For any of these solutions, you'll have to use XA-compliant data sources and transactions so you can guarantee ACID compliance, preferably from an application server so you don't have to generate the transaction yourself. This should be an improvement from your existing application as it almost certainly can't guarantee that, but be advised that in my experience, people put loads of hacks in to essentially reinvent JTA, so watch out for that.
For 4, you'll want to use container-managed transactions with XA. You might do this by injecting a @PersistenceContext backed by a JTA data source. Yes, this costs a ton of time, testing, and effort, but it has two bonuses: first, moving to the web will be a lot easier, and it sounds like that time is coming; second, those who come after you are more likely to be well versed in newer web service technologies than in bare CORBA and RMI.
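A rough sketch of what that might look like (bean, entity, and persistence unit names are placeholders; the persistence unit is assumed to point at an XA/JTA data source configured in the app server):

import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.ws.rs.POST;
import javax.ws.rs.Path;

@Stateless              // the container starts and commits the JTA transaction
@Path("/accounts")
public class AccountResource {

    @PersistenceContext(unitName = "appPU")   // persistence unit bound to an XA data source
    private EntityManager em;

    @POST
    public void transfer(TransferRequest req) {
        // Both updates happen inside one container-managed XA transaction;
        // a runtime exception rolls back everything.
        Account from = em.find(Account.class, req.getFromId());
        Account to = em.find(Account.class, req.getToId());
        from.debit(req.getAmount());
        to.credit(req.getAmount());
    }
}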
For 3, you'll also want to use container-managed transactions with XA. SOAP would not be my first choice as it uses very verbose messages and REST is more popular, but it could be done. If it's stateful, though, you'll have to use bean-managed transactions instead and then hang on to resources across web service calls. This is dangerous, as it could potentially deadlock the whole system.
For 2, you can go either way: put a stateless session facade in front of the stateful EJB and use container-managed transactions with XA, or expose the stateful bean directly. You can use a client JAR for your EJB and package that with the Swing app. Using the stateless facade is preferable, as it will reduce the load on your application server. Keep in mind that you can generate web services from stateless EJBs too, essentially turning this into #3.
For 1... well, good luck. It is possible to use RMI to interface with EJBs and to generate your own stubs and ties, though this is not recommended, and for very good reason. This hasn't been a popular practice for years, may require the stubs and ties to be regenerated periodically, and may require an understanding of the low-level internals of the app server. Even here, you'll want XA transactions. You don't want to handle the transaction management yourself if you can avoid it.
Ultimately, as I'm sure everyone will agree, the choice is yours on what to do, and there's no "right" or "wrong" way, despite the opinions stated above. If it were me (and it's not), I'd ask two important questions of myself and my customer:
Is this for a contract or temporary engagement, and if so what is the term? Do I get first pick at another contract for this same system later when they want additional updates? (In other words, how much money am I going to get out of this vs. how much time am I spending? If it's going to be a long term, then I would go with 4 or 3, otherwise 3 or 2 would be better.)
Why get rid of CORBA? "Because it's old" is an honest answer, but what's the impetus of getting rid of the "old hotness?" Do they plan on expanding usage of this system in the future? Is there some license about to expire and they just want to keep the lights on? Is it because they don't want to dump this on some younger programmer who might not know how to deal with low-level stuff like this? What do you want the system to do in two years, five years, or longer?
(OK, so that's more than two questions :D)

Java Library for Activities/Chain-of-Responsibility with Transaction Support

I have to implement a number of activities (for example, update a user's profile, transfer points from one user to another, etc.), each of which is composed of one or more logical steps in a given order (check whether the user has enough points, subtract points from the first user, check whether the other user can receive them, credit these points to the second user). I also need some kind of "rollback" mechanism so that I can undo any previous steps if something goes wrong at step N (similar to what one usually finds in database transactions, except that a database may or may not be involved).
Are there any Java libraries which can help me with this? I've had a look at Drools but it seems overly complex. Also, I'm not sure it supports this kind of rollback mechanism. Any ideas?
The JTA specification defines a standard framework for Java transactions.
A typical and well-known use case is the simple database transaction, but JTA is far more generic: it's a framework for managing a transaction over one or more transactional resources. A transactional resource can of course be a database, but it can also be a file, a messaging service, ...
If multiple transactional resources are involved in one transaction, you must look for a JTA implementation supporting XA transactions (and here is another interesting link about XA).
I'm not saying this is a simple framework... but the problem you are facing is not simple at all.
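As a rough illustration of bean-managed JTA (step and resource names are invented; this assumes the code runs in a container, or against a standalone JTA/XA transaction manager, with XA-capable resources enlisted):

import javax.annotation.Resource;
import javax.transaction.UserTransaction;

public class PointsTransferActivity {

    @Resource
    private UserTransaction utx;   // injected by the container

    public void transfer(long fromUser, long toUser, int points) throws Exception {
        utx.begin();
        try {
            // Each step enlists its XA resource in the same global transaction.
            debitPoints(fromUser, points);    // hypothetical step using the database
            creditPoints(toUser, points);     // hypothetical step using the database
            notifyUsers(fromUser, toUser);    // hypothetical step using JMS
            utx.commit();                     // two-phase commit across all resources
        } catch (Exception e) {
            utx.rollback();                   // every step is undone together
            throw e;
        }
    }

    private void debitPoints(long user, int points) { /* ... */ }
    private void creditPoints(long user, int points) { /* ... */ }
    private void notifyUsers(long from, long to) { /* ... */ }
}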
If you've got to integrate with other services/providers through REST/SOAP/EJBs/etc., then I recommend looking at Apache Camel. Camel is an integration framework that can integrate with pretty much every service or protocol out there. And, I believe, it supports rudimentary transactions. You can make a single service call and Camel will handle the routing and integration with whatever backend services you define. And you can chain them. So your flow would look like this:
Client makes call to service 'FOO'...
'FOO' is defined as a route that makes a REST call to '/bar', followed by an EJB call to 'MyService', followed by persisting the results to a SQL database, and then lastly calling a SOAP web service. The client then gets back the return value from this call, which can be whatever transformation or permutation of these calls you want. It's completely transparent to the client where the result came from.
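A sketch of that kind of route in Camel's Java DSL (endpoint URIs and names are placeholders, and the transacted() step assumes a transaction manager has been configured for the route):

import org.apache.camel.builder.RouteBuilder;

public class FooRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:FOO")                         // the single call the client makes
            .transacted()                          // run the route in a transaction
            .to("http://backend.example.com/bar")  // REST call
            .to("bean:myService?method=process")   // local service call
            .to("jdbc:myDataSource")               // persist the result to SQL
            .to("cxf:bean:soapEndpoint");          // finally call a SOAP web service
    }
}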

Will removing EJBs improve the performance of project?

I have been working on a government project that uses EJBs. I have run into some server issues while deploying the EJBs. People working on my project have proposed removing the EJBs from between the RequestHandler and the DAO, and calling the DAO methods directly from the RequestHandler.
My argument is: how can we even think about removing EJBs from a project whose base framework is EJB?!
Please advise on the right way to improve deployment performance, and on other ways to improve speed and performance.
I have worked on a few projects where removing EJB improved performance dramatically.
For me using EJBs is about improving productivity and the quality of the solution you produce rather than worrying about performance. Usually performance isn't a big issue, but if it is you can throw hardware at it and use a cloud/distributed solution, which costs less than it used to. i.e. it can be cheaper than spending more time developing.
Check whether you are doing remote EJB calls or local EJB calls.
Doing remote calls within a project might lead to performance issues (if you are logically working locally).
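For illustration (interface and bean names invented), the difference boils down to which business interface the caller injects; the local view avoids the serialization and network overhead of a remote call when the caller is in the same JVM:

import javax.ejb.Local;
import javax.ejb.Remote;
import javax.ejb.Stateless;

@Local
interface CustomerServiceLocal {
    Customer find(long id);
}

@Remote
interface CustomerServiceRemote {
    Customer find(long id);
}

@Stateless
public class CustomerServiceBean implements CustomerServiceLocal, CustomerServiceRemote {
    public Customer find(long id) {
        // In-process callers should inject CustomerServiceLocal;
        // only genuinely remote clients should pay for the remote view.
        return null; // lookup omitted in this sketch
    }
}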
It's impossible to answer the question with the information provided. However, if you don't use the EJB services for anything, you may as well get rid of them. It's not unlikely that the reason for calling your DAO via EJB in the first place was to use CMT, so if you ditch EJB you will need to handle transactions by other means.
In general, my advice is to figure out what the question/problem really is before you start looking for answers.
Depends on what EJBs you are talking about.
Session Beans : Having them or not will have barely any impact on performance.
Entity Beans: Entity beans can have a drastic effect on performance. I would use them where you are dealing with complex transactions in create, update, or delete calls. In situations where I am just running a query that returns thousands of records, I might switch to plain JDBC.
A lot of JPA/EJB containers these days are pretty smart, so they may be able to push performance-critical work down to the database level.
For example: if I return 10,000 customer objects and each customer has multiple addresses, I could join the customer and address objects in the EJB layer. If the EJB container is smart enough, this may be as fast as doing the join at the database level and returning the data as a view.
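For instance, a JPQL fetch join lets the database do that join and JPA hydrate the result in one query (entity and association names here are assumptions for the example):

import java.util.List;
import javax.persistence.EntityManager;

public class CustomerQueries {
    public List<Customer> loadCustomersWithAddresses(EntityManager em) {
        // One SQL join executed by the database, hydrated into entities;
        // avoids the N+1 selects you would get by lazily touching each
        // customer's addresses in the application layer.
        return em.createQuery(
                "select distinct c from Customer c join fetch c.addresses",
                Customer.class)
            .getResultList();
    }
}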

What is the purpose of Managers / Transactions?

I'm building a spring application for the first time. I'm running into lots of problems with concurrency, and I suspect that there is something wrong with the way I'm managing the backend. The only difference I can see between my backend code and examples I've seen are manager classes.
In my code, I have my model (managed by hibernate) and my DAOs on top of that to do CRUD/searching/etc on the models. In example code I have looked at, they never use the DAO directly. Instead, they use manager classes that call the DAOs indirectly. To me, this just seems like pointless code duplication.
What are these manager classes for? I've read that they wrap my code in "transactions," but why would I want that?
Transactions make a group of updates atomic.
Example: a user clicks a web page and that leads to 13 records being updated in the database. A transaction ensures that either all 13 updates go through or none of them do; an error makes the whole thing roll back.
Managers are about making things easier to do. They will not magically make your code thread-safe, and using a DAO directly is not a thread-safety bug in and of itself.
However, I suggest you limit the logic in your DAO, and put as much logic as you can in the business layers. See Best practice for DAO pattern?
If you post maybe a small example of your code that isn't working well with multiple threads, we can suggest some ideas... but neither transactions nor managers alone will fix your problem.
Many applications have non-trivial requirements, and the business logic often involves access to several resources (e.g. several DAOs), coordination of these accesses, and control of transactions across them (if you access DAO1 and DAO2, you want to commit or roll back the changes as an indivisible unit of work).
It is thus typical to encapsulate and hide this complexity in dedicated services components exposing business behavior in a coarse-grained manner to the clients.
And this is precisely what the managers you are referring to are doing, they constitute the Service Layer.
A Service Layer defines an application's boundary [Cockburn PloP] and its set of available operations from the perspective of interfacing client layers. It encapsulates the application's business logic, controlling transactions and coordinating responses in the implementation of its operations.
DAOs should not own transactions, because they have no way of knowing whether or not they're only a part of a larger transaction.
The service tier is where transactions belong. You're incorrect to say they're a "pointless code duplication."
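A small sketch of that service-layer idea in Spring terms (class, DAO, and method names are invented for illustration):

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class PointsTransferService {

    private final UserDao userDao;       // hypothetical DAOs
    private final PointsDao pointsDao;

    public PointsTransferService(UserDao userDao, PointsDao pointsDao) {
        this.userDao = userDao;
        this.pointsDao = pointsDao;
    }

    @Transactional   // all DAO calls below commit or roll back as one unit of work
    public void transferPoints(long fromUserId, long toUserId, int points) {
        pointsDao.subtract(fromUserId, points);
        pointsDao.add(toUserId, points);
        userDao.touchLastActivity(fromUserId);
    }
}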

What's the best way to share business object instances between Java web apps using JBoss and Spring?

We currently have a web application loading a Spring application context which instantiates a stack of business objects, DAO objects and Hibernate. We would like to share this stack with another web application, to avoid having multiple instances of the same objects.
We have looked into several approaches; exposing the objects using JMX or JNDI, or using EJB3.
The different approaches all have their issues, and we are looking for a lightweight method.
Any suggestions on how to solve this?
Edit: I have received comments requesting me to elaborate a bit, so here goes:
The main problem we want to solve is that we want to have only one instance of Hibernate. This is due to problems with invalidation of Hibernate's 2nd level cache when running several client applications working with the same datasource. Also, the business/DAO/Hibernate stack is growing rather large, so not duplicating it just makes more sense.
First, we tried to look at how the business layer alone could be exposed to other web apps, and Spring offers JMX wrapping at the price of a tiny amount of XML. However, we were unable to bind the JMX entities to the JNDI tree, so we couldn't look up the objects from the web apps.
Then we tried binding the business layer directly to JNDI. Although Spring didn't offer any method for this, using JNDITemplate to bind them was also trivial. But this led to several new problems: 1) Security manager denies access to RMI classloader, so the client failed once we tried to invoke methods on the JNDI resource. 2) Once the security issues were resolved, JBoss threw IllegalArgumentException: object is not an instance of declaring class. A bit of reading reveals that we need stub implementations for the JNDI resources, but this seems like a lot of hassle (perhaps Spring can help us?)
We haven't looked too much into EJB yet, but after the first two tries I'm wondering if what we're trying to achieve is at all possible.
To sum up what we're trying to achieve: One JBoss instance, several web apps utilizing one stack of business objects on top of DAO layer and Hibernate.
Best regards,
Nils
Are the web applications deployed on the same server?
I can't speak for Spring, but it is straightforward to move your business logic in to the EJB tier using Session Beans.
The application organization is straightforward: the logic goes into session beans, and these session beans are bundled within a single JAR as a Java EE artifact with an ejb-jar.xml file (in EJB3 this will likely be practically empty).
Then bundle your entity classes into a separate JAR file.
Next, build each web app into its own WAR file.
Finally, all of the JARs and WARs are bundled into a Java EE EAR, with the associated application.xml file (again, this will likely be quite minimal, simply enumerating the modules in the EAR).
This EAR is deployed wholesale to the app server.
Each WAR is effectively independent -- its own sessions, its own context path, etc. But they share the common EJB back end, so you have only a single 2nd-level cache.
You also use local references and local calling semantics to talk to the EJBs, since they're in the same server. No need for remote calls here.
I think this solves quite well the issue you're having, and its is quite straightforward in Java EE 5 with EJB 3.
Also, you can still use Spring for much of your work, as I understand, but I'm not a Spring person so I can not speak to the details.
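As a rough sketch of that packaging (all names invented): the local interface and bean go in the EJB JAR, and each WAR in the EAR injects the bean through its local view:

import javax.ejb.Local;
import javax.ejb.Stateless;

@Local
public interface CatalogServiceLocal {
    Product findProduct(long id);
}

@Stateless
public class CatalogServiceBean implements CatalogServiceLocal {
    public Product findProduct(long id) {
        // Hibernate/JPA access lives here, so the single 2nd-level cache
        // sits behind this one bean rather than in each web app.
        return null; // persistence code omitted in this sketch
    }
}

// In a servlet or controller inside either WAR:
// @EJB private CatalogServiceLocal catalog;   // local, in-VM call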
What about spring parentContext?
Check out this article:
http://springtips.blogspot.com/2007/06/using-shared-parent-application-context.html
Terracotta might be a good fit here (disclosure: I am a developer for Terracotta). Terracotta transparently clusters Java objects at the JVM level, and integrates with both Spring and Hibernate. It is free and open source.
As you said, the problem with more than one client web app using an L2 cache is keeping those caches in sync. With Terracotta you can cluster a single Hibernate L2 cache. Each client node works with its copy of that clustered cache, and Terracotta keeps it in sync. This link explains more.
As for your business objects, you can use Terracotta's Spring integration to cluster your beans - each web app can share clustered bean instances, and Terracotta keeps the clustered state in sync transparently.
Actually, if you want a lightweight solution and don't need transactions or clustering, just use Spring's support for RMI. In recent versions it lets you expose Spring beans remotely with very little configuration. See http://static.springframework.org/spring/docs/2.0.x/reference/remoting.html.
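A sketch of how that could look (service name, port, and interface are placeholders; this uses Java config, while the 2.0-era docs linked above show the equivalent XML):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.remoting.rmi.RmiProxyFactoryBean;
import org.springframework.remoting.rmi.RmiServiceExporter;

@Configuration
public class RemotingConfig {

    // Server side: export an existing Spring bean over RMI.
    @Bean
    public RmiServiceExporter accountServiceExporter(AccountService accountService) {
        RmiServiceExporter exporter = new RmiServiceExporter();
        exporter.setServiceName("AccountService");
        exporter.setService(accountService);
        exporter.setServiceInterface(AccountService.class);
        exporter.setRegistryPort(1199);
        return exporter;
    }

    // Client side (in the other web app): a proxy that looks like a local bean.
    @Bean
    public RmiProxyFactoryBean accountServiceProxy() {
        RmiProxyFactoryBean proxy = new RmiProxyFactoryBean();
        proxy.setServiceUrl("rmi://localhost:1199/AccountService");
        proxy.setServiceInterface(AccountService.class);
        return proxy;
    }
}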
You should take a look at the Terracotta Reference Web Application - Examinator. It has most of the components you are looking for - it's got Hibernate, JPA, and Spring with a MySQL backend.
It's been pre-tuned to scale up to 16 nodes, 20k concurrent users.
Check it out here: http://reference.terracotta.org/examinator
Thank you for your answers so far. We're still not quite there, but we have tried a few things now and see things more clearly. Here's a short update:
The solution which appears to be the most viable is EJB. However, this will require some amount of changes in our code, so we're not going to fully implement that solution right now. I'm almost surprised that we haven't been able to find some Spring feature to help us out here.
We have also tried the JNDI route, which ends with the need for stubs for all shared interfaces. This feels like a lot of hassle, considering that everything is on the same server anyway.
Yesterday, we had a small breakthrough with JMX. Although JMX is definitely not meant for this kind of use, we have proven that it can be done - with no code changes and a minimal amount of XML (a big thank you to Spring for MBeanExporter and MBeanProxyFactoryBean). The major drawbacks of this method are performance and the fact that our domain classes must be shared through JBoss' server/lib folder. I.e., we have to remove some dependencies from our WARs and move them to server/lib, or else we get a ClassCastException when the business layer returns objects from our own domain model. I fully understand why this happens, but it is not ideal for what we're trying to achieve.
I thought it was time for a little update, because what appears to be the best solution will take some time to implement. I'll post our findings here once we've done that job.
Spring does have an integration point that might be of interest to you: the EJB 3 injection interceptor. This enables you to access Spring beans from EJBs.
I'm not really sure what you are trying to solve; at the end of the day each JVM will either have replicated instances of the objects or stubs representing objects that live on another (logical) server.
You could set up a third 'business logic' server with a remote API that your two web apps call. The typical solution is to use EJB, but I think Spring has remoting options built into its stack.
The other option is to use some form of shared cache architecture, which will synchronize object changes between the servers, but you still have two sets of instances.
Take a look at JBossCache. It allows you to easily share/replicate maps of data between multiple JVM instances (on the same box or different boxes). It is easy to use and has lots of wire-level protocol options (TCP, UDP multicast, etc.).
