I'm building a Spring application for the first time. I'm running into lots of problems with concurrency, and I suspect that there is something wrong with the way I'm managing the backend. The only difference I can see between my backend code and the examples I've seen is manager classes.
In my code, I have my model (managed by Hibernate) and my DAOs on top of that to do CRUD/searching/etc. on the models. In the example code I have looked at, the DAOs are never used directly. Instead, manager classes are used that delegate to the DAOs. To me, this just seems like pointless code duplication.
What are these manager classes for? I've read that they wrap my code in "transactions," but why would I want that?
Transactions are used to make a group of updates atomic: either they all succeed or none of them do.
For example: a user clicks something on a web page that leads to 13 records being updated in the database. A transaction ensures that either 0 or all 13 of the updates go through; an error makes it all roll back.
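In Spring this is usually done declaratively. As a rough sketch (the manager, DAO, and method names below are made up), a manager method annotated with @Transactional gives you exactly that all-or-nothing behaviour:

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class OrderManager {

    private final OrderDao orderDao;            // hypothetical DAOs
    private final InventoryDao inventoryDao;

    public OrderManager(OrderDao orderDao, InventoryDao inventoryDao) {
        this.orderDao = orderDao;
        this.inventoryDao = inventoryDao;
    }

    // Either every update below commits, or a runtime exception rolls them all back.
    @Transactional
    public void processCheckout(Order order) {
        orderDao.save(order);
        inventoryDao.reserveStock(order.getItems());
    }
}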
Managers are about making things easier to organize; they will not magically make your code thread-safe. Using a DAO directly is not a thread-safety bug in and of itself.
However, I suggest you limit the logic in your DAO, and put as much logic as you can in the business layers. See Best practice for DAO pattern?
If you post maybe a small example of your code that isn't working well with multiple threads, we can suggest some ideas... but neither transactions nor managers alone will fix your problem.
Many applications have non-trivial requirements, and the business logic often involves access to several resources (e.g. several DAOs), coordination of these accesses, and control of transactions across these accesses (if you access DAO1 and DAO2, you want to commit or roll back the changes as an indivisible unit of work).
It is thus typical to encapsulate and hide this complexity in dedicated service components that expose business behavior to clients in a coarse-grained manner.
And this is precisely what the managers you are referring to are doing, they constitute the Service Layer.
A Service Layer defines an application's boundary [Cockburn PloP] and its set of available operations from the perspective of interfacing client layers. It encapsulates the application's business logic, controlling transactions and coordinating responses in the implementation of its operations.
DAOs should not own transactions, because they have no way of knowing whether or not they're only a part of a larger transaction.
The service tier is where transactions belong. You're incorrect to say they're "pointless code duplication."
We are in the middle of breaking a big monolithic e-commerce application into microservices. (We plan to use Java, Spring, and Hibernate.) We have the concept of fulfillment items and persistent items in our monolithic application. Our plan is mostly to break up the fulfillment item CRUD operations and persistent item CRUD operations into two separate APIs. But we have some common entities/tables that both APIs will end up needing. What is the best way to handle this scenario?
Currently, one of the options on the table is to have one microservice own the entity/table and have a READ ONLY object reference in the other microservice. Are there any drawbacks to this?
This depends a lot on your deployment strategy. If you are going to bundle/package both APIs into one, then it's fine for both to share the same entities (in fact, you should not duplicate entities). I would prefer putting all the entities and repositories/DAOs into one common bundle/package that just exposes APIs for CRUD operations (without any other business logic). My other components would then consume these APIs and contain the business logic.
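As a rough sketch of what that common bundle could look like (assuming Spring Data JPA; the entity and repository names are illustrative), each file holds just the persistence pieces and no business logic:

// FulfillmentItem.java -- shared JPA entity
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

@Entity
public class FulfillmentItem {
    @Id
    @GeneratedValue
    private Long id;
    private String sku;
    private int quantity;
    // getters/setters omitted
}

// FulfillmentItemRepository.java -- CRUD only, no business logic
import org.springframework.data.jpa.repository.JpaRepository;

public interface FulfillmentItemRepository extends JpaRepository<FulfillmentItem, Long> {
}

The services in each API then depend on this bundle and keep the business logic on their side.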
There really isn't much of a drawback except in situations where a microservice cannot operate under eventual consistency. And even in those cases, you can always add a dependency so your non-common microservice knows how to query the common microservice for relevant updates if necessary, although that's less than ideal.
You will likely have to introduce some form of mediator mechanism for your use case, though. Something like a JMS broker is an ideal choice: it would allow one microservice to inform other interested microservices that something occurred, so that each can handle the event in its own way.
For example, a CustomerMessage could be raised that contains the customer's id, name, address, and perhaps credit limit; one microservice may only be concerned with the id and name, while another may also be interested in the address and credit limit.
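As a minimal sketch of the publishing side (assuming Spring JMS with a suitable message converter configured; the queue name and the CustomerMessage class are made up):

import org.springframework.jms.core.JmsTemplate;
import org.springframework.stereotype.Component;

@Component
public class CustomerEventPublisher {

    private final JmsTemplate jmsTemplate;

    public CustomerEventPublisher(JmsTemplate jmsTemplate) {
        this.jmsTemplate = jmsTemplate;
    }

    // Other microservices subscribe to this destination and read only the fields they care about.
    public void customerUpdated(CustomerMessage message) {
        jmsTemplate.convertAndSend("customer.updated", message);
    }
}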
I have a requirement to migrate a legacy CORBA system to a current Java technology. The main problem I am facing is providing long-lived (database) transactions in the proposed system. Currently the client (a Swing app) retains the CORBA service object and performs multiple DB operations before actually committing or rolling back the whole transaction. The service layer keeps the connection state throughout, until the transaction completes.
I want to reproduce this mechanism in the new system (REST/WS) so that both the Swing client and a future web client can work the same way as they do now.
e.g.:
try {
    service1.updateXXData();  // --> insert into table XX
    service2.updateUUData();  // --> insert into table UU
    service1.updateZZData();  // --> insert into table ZZ
    service2.updateAAData();  // --> insert into table AA
    service1.commit();        // con.commit();
    service2.commit();        // con.commit();
} catch (Exception e) {
    service1.rollback();      // con.rollback();
    service2.rollback();      // con.rollback();
}
Now I want to migrate away from CORBA to some modern technology, but I have yet to find a solution for this. (The constraint is that the client does not want any changes to the service layer or DB layer; they just want CORBA removed.)
A couple of options available to me are:
Migrate CORBA to RMI --> the changes required to the current system are minimal, but I would have to handle transaction management, connection pooling, and state retention myself.
Migrate CORBA to stateful EJBs --> more changes required than with RMI, but better, since I can use container-managed connection pooling and maintain state in a cleaner way.
Migrate CORBA to stateful web services (SOAP) --> more future-proof, but a lot of changes are required. However, I can convert the IDL to WSDL and delegate the calls to the implementation layer.
Migrate CORBA to REST --> most desirable if possible, but the amount of time required to migrate is huge; code changes would be needed from the UI layer down to the service layer.
Thank you very much in advance
The order in which I would choose the options, from best to worst, would be 4, 3, 2, and 1; however, I'd avoid stateful beans or services if humanly possible.
I'll go over the implementation details of what you'll have to do for each.
For any of these solutions, you'll have to use XA-compliant data sources and transactions so you can guarantee ACID compliance, preferably from an application server so you don't have to manage the transactions yourself. This should be an improvement over your existing application, as it almost certainly can't guarantee that, but be advised that in my experience people put in loads of hacks to essentially reinvent JTA, so watch out for that.
For 4, you'll want to use container-managed transactions with XA. You might do this by injecting a @PersistenceContext backed by a JTA connection. Yes, this costs a ton of time, testing, and effort, but it has two bonuses: first, moving to the web will be a lot easier, and it sounds like that time is coming. Second, those who come after you are more likely to be well-versed in newer web service technologies than in bare CORBA and RMI.
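A minimal sketch of what option 4 could look like (assuming a Java EE container with an XA data source and a JTA persistence unit; the resource, DTO, and entity names are made up):

import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.ws.rs.Consumes;
import javax.ws.rs.POST;
import javax.ws.rs.Path;

@Stateless                 // container-managed JTA transaction per request
@Path("/orders")
public class OrderResource {

    @PersistenceContext(unitName = "legacyPU")   // hypothetical JTA persistence unit
    private EntityManager em;

    @POST
    @Consumes("application/json")
    public void createOrder(OrderDto dto) {
        // Everything in this method runs in one container-managed transaction:
        // it commits on normal return and rolls back on a system exception.
        OrderEntity order = new OrderEntity(dto.getCustomerId(), dto.getAmount());
        em.persist(order);
    }
}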
For 3, you'll also want to use container-managed transactions with XA. SOAP would not be my first choice as it uses very verbose messages and REST is more popular, but it could be done. If it's stateful, though, you'll have to use bean-managed transactions instead and then hang on to resources across web service calls. This is dangerous, as it could potentially deadlock the whole system.
For 2, you can use container-managed transactions with XA by putting a stateless session facade in front of the stateful EJB. You can use a client JAR for your EJB and package that with the Swing app. Using the stateless facade is preferable, as it will reduce the load on your application server. Keep in mind that you can also generate web services from stateless EJBs, essentially turning this into #3.
For 1... well, good luck. It is possible to use RMI to interface with EJB's, and generate your own stub and tie, though this is not recommended, and for very good reason. This hasn't been a popular practice for years, may require the stubs and ties to be regenerated periodically, and may require an understanding of the low-level functions of the app server. Even here, you'll want XA transactions. You don't want to handle the transaction management yourself, if possible.
Ultimately, as I'm sure everyone will agree, the choice is yours on what to do, and there's no "right" or "wrong" way, despite the opinions stated above. If it were me (and it's not), I'd ask two important questions of myself and my customer:
Is this for a contract or temporary engagement, and if so what is the term? Do I get first pick at another contract for this same system later when they want additional updates? (In other words, how much money am I going to get out of this vs. how much time am I spending? If it's going to be a long term, then I would go with 4 or 3, otherwise 3 or 2 would be better.)
Why get rid of CORBA? "Because it's old" is an honest answer, but what's the impetus of getting rid of the "old hotness?" Do they plan on expanding usage of this system in the future? Is there some license about to expire and they just want to keep the lights on? Is it because they don't want to dump this on some younger programmer who might not know how to deal with low-level stuff like this? What do you want the system to do in two years, five years, or longer?
(OK, so that's more than two questions :D)
I have quite a simple problem. I am rewriting a very old app that used direct database access through DAO objects. There is no business layer (the code is not mine and is quite an anti-pattern), so connection.setAutoCommit(false) is used to start transactions everywhere in the code. I had to rewrite the project for security reasons, so it no longer uses a direct database connection but web services, with Hibernate/JPA on the J2EE server side (before it was a standalone app, now app + J2EE). Simple enough: I moved the DAO/VO objects to the web service server, rewrote the SQL as HQL, and replaced the DAOs in the client with web service clients.
But what do I do with the transaction code? Normally it is one transaction per web service call, so I need some mechanism (a parameter in the web services?) that would let me reference the same Hibernate transaction across multiple web service calls. Is that a completely bad approach, and should I just move the transactions into the server code?
I think you should use SessionBeans exposed as JAX-RS services, and let them control the transactions.
If you need a transaction across multiple web service calls, just define a new web service, also an EJB SessionBean, that acts as a facade for the other calls.
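A rough sketch of such a facade (the bean and method names are made up); the facade's single container-managed transaction spans all the fine-grained calls:

import javax.ejb.EJB;
import javax.ejb.Stateless;
import javax.ws.rs.Consumes;
import javax.ws.rs.POST;
import javax.ws.rs.Path;

@Stateless
@Path("/transfers")
public class TransferFacade {

    @EJB
    private AccountService accountService;   // existing fine-grained session beans
    @EJB
    private AuditService auditService;

    @POST
    @Consumes("application/json")
    public void transfer(TransferRequest request) {
        // All of these calls join the facade's single JTA transaction;
        // a system exception rolls the whole thing back.
        accountService.debit(request.getFrom(), request.getAmount());
        accountService.credit(request.getTo(), request.getAmount());
        auditService.record(request);
    }
}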
I think it is bad practice to implement what you suggest (referencing the same Hibernate transaction), and it might not even be possible. Each WS call runs on a separate thread at a different moment in time, and mixing transactions across threads is not a good practice.
I also think this is bad practice, because you usually build web service methods that are coarse-grained, so you are usually fine with one request per transaction.
I can understand your need but think about the downsides:
How will you roll back across several requests? If that is not possible, you will end up with data inconsistencies.
If it is possible, your web service won't be stateless anymore, which is commonly considered bad practice.
This means your API requests will depend on each other, so there are prerequisites for executing any given request.
Have you tried to fit your transaction within one request? This might help you restructure and possibly improve the code of your app.
I am doing things now like writing classes and their unit tests -- business logic. Without question, I will need to have something like JPA to allow me to store these classes and initialize the application from a database. I also know that I will need to do a lot of operations within a transaction.
My question is: does it make sense to implement the business logic first and worry about persistence later, or am I asking for trouble that way? Perhaps I should instead incorporate persistence in my design from the start, since it might be very hard to add later. Or is there an approach where the business logic can be totally ignorant of persistence? The reason I am guessing not is that the persistent classes need annotations.
Anyway, I could have been more succinct -- maybe the title says it all.
Cheers.
Isolating the implementation from a particular technology is a best practice. In general, you will be better off developing the application core without tying it to JPA from the start.
For this, you can use a separate domain model for your business logic. The domain objects should be mapped to/from a persistable representation at the boundaries of your business logic layer.
Domain driven design, clean architecture, hexagonal architecture (and probably some others) are different but closely related approaches that emphasize the separation of business domain from frameworks.
The primary benefit is a clean separation of concerns. You can achieve very good testability for your code, with very fast tests that do not rely on the DB. You can also switch persistence technologies (going with in-memory DBs or flat files if you should so desire) with much less pain.
The downside is that you will have to define a boundary mapping between your domain classes and persistent classes.
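A minimal sketch of that boundary (all names here are made up): the domain class knows nothing about JPA, while the entity and the mapper live at the edge of the persistence layer.

// Customer.java -- plain domain object, no persistence annotations
public class Customer {
    private final String name;
    public Customer(String name) { this.name = name; }
    public String getName() { return name; }
}

// CustomerEntity.java -- JPA representation, kept out of the business layer
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

@Entity
public class CustomerEntity {
    @Id @GeneratedValue
    Long id;
    String name;
    protected CustomerEntity() { }               // required by JPA
    CustomerEntity(String name) { this.name = name; }
}

// CustomerMapper.java -- mapping at the boundary of the business logic layer
public class CustomerMapper {
    public CustomerEntity toEntity(Customer c) { return new CustomerEntity(c.getName()); }
    public Customer toDomain(CustomerEntity e) { return new Customer(e.name); }
}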
Having said that, sometimes fully embracing a framework can have its own benefits that have to be weighed against clean design. When creating a simple, one-off webapp, it can make sense to use JPA entities all the way through, even using attached JPA entities for display in the UI; this is called the 'transaction view'.
The expected benefit is simplicity - sometimes there is no use in introducing a 'business logic' layer if there is no logic to speak of.
I haven't found any questions addressing this specific issue.
What is better: to allow Services (or facades) to access several DAOs (classes which talk to the database) or only other Services?
In other words, should I introduce interdependencies between different Service classes, or is it better to make Service classes completely independent of each other by injecting more than one DAO (if necessary) into each Service class?
I found that both strategies will do the job, but I want to be consistent and make the application as modular and maintainable as possible.
I feel that allowing or forbidding a service to call another service or more than one DAO is subjective.
I try to avoid unnecessary code or odd couplings created just to satisfy some rule about layer communication; following basic OO principles of making simple, clear objects usually leads to a reasonable compromise.
If a service B needs functionality already provided by a service A, then it should call it. I try to reduce dependencies among services and usually end up defining a small set of "basic" services that can be called from other services.
Creating a method in a service only to wrap a call to a DAO is pointless (in my opinion), and therefore I prefer to let services call as many DAOs as they need. That said, a service or a method that touches a great many DAOs may indicate something that should be refactored, or a data model that needs adjustment.
There's some opinion in this, to be sure, but a true "service" method should be an atomic unit of work. If services are creating a web of interdependency, calling each other back, forth, and sideways, clearly the invocations aren't performing atomic tasks. I see nothing wrong with letting a "service" use whatever DAOs it needs. If you create a set of "service" CRUD methods abstracting the DAO, which is already a collection of CRUD methods, which is itself probably abstracting away the abstraction that is JPA, you can see how that might be one too many levels of non-functional abstraction.
This approach does sometimes lead you to build shared "business beans" that are in the domain rather than the service, that multiple services share. This is fine.
(Can you tell I personally think JPA has made the entire idea of DAOs obsolete and we should just use EntityManager in the service? :) )
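For illustration, a sketch of that style (the service, entity, and method names are hypothetical): the service method is the atomic unit of work and talks to JPA directly.

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class OrderService {

    @PersistenceContext
    private EntityManager em;

    // One transactional service method, no DAO layer in between.
    @Transactional
    public void placeOrder(Long customerId, Order order) {
        Customer customer = em.find(Customer.class, customerId);
        order.setCustomer(customer);
        em.persist(order);
    }
}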