If I have a Java class defined below that is injected in my web application via dependency injection:
public class AccountDao
{
    private NamedParameterJdbcTemplate njt;
    private List<Account> accounts;

    public AccountDao(DataSource ds)
    {
        this.njt = new NamedParameterJdbcTemplate(ds);
        refreshAccounts();
    }

    /* called at creation, and then via API calls to inform the service that new users
       have been added to the database by a separate program */
    public void refreshAccounts()
    {
        this.accounts = /* call to database to get list of accounts */;
    }

    // called by every request to the web service
    public boolean isActiveAccount(String accountId)
    {
        // look the account up in the in-memory list (assumes Account exposes its id)
        for (Account a : accounts) {
            if (accountId.equals(a.getId())) {
                return a.isActive();
            }
        }
        return false;
    }
}
I am concerned about thread safety. Doesn't the Spring framework handle the case where one request is reading from the list while it is being updated by another? I have used read/write locks in other applications, but I have never thought about a case like this before.
I was planning on using the bean as a singleton so I could reduce database load.
By the way, this is a follow-up to the question below:
Java Memory Storage to Reduce Database Load - Safe?
EDIT:
So would code like this solve this problem:
/* called at creation, and then via API calls to inform the service that new users
   have been added to the database by a separate program */
public void refreshAccounts()
{
    // lock is a java.util.concurrent.locks.ReadWriteLock field
    final Lock w = lock.writeLock();
    w.lock();
    try {
        this.accounts = /* call to database to get list of accounts */;
    }
    finally {
        w.unlock();
    }
}

// called by every request to the web service
public boolean isActiveAccount(String accountId)
{
    final Lock r = lock.readLock();
    r.lock();
    try {
        for (Account a : accounts) {
            if (accountId.equals(a.getId())) {
                return a.isActive();
            }
        }
        return false;
    }
    finally {
        r.unlock();
    }
}
The Spring framework does not do anything under the hood concerning the multithreaded behavior of a singleton bean. It is the developer's responsibility to deal with the concurrency issues and thread safety of a singleton bean.
I would suggest reading the article below: Spring Singleton, Request, Session Beans and Thread Safety
You could have asked for clarification on my initial answer. Spring does not synchronize access to a bean. If you have a bean in the default scope (singleton), there will only be a single object for that bean, and all concurrent requests will access that object, requiring that object to be thread safe.
Most Spring beans have no mutable state, and as such are trivially thread safe. Your bean has mutable state, so you need to ensure no thread sees a list of accounts that another thread is currently assembling.
The easiest way to do that is to make the accounts field volatile. That assumes you assign the new list to the field only after having filled it (as you appear to be doing).
private volatile List<Account> accounts;
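For example, build the new list into a local variable and publish it with a single assignment to the volatile field (a minimal sketch; the actual database query is omitted):
private volatile List<Account> accounts = Collections.emptyList();

public void refreshAccounts()
{
    List<Account> fresh = new ArrayList<>();
    // fill 'fresh' from the database here...
    // ...then publish it with one volatile write; readers see either the old
    // snapshot or the new one, never a half-built list
    this.accounts = Collections.unmodifiableList(fresh);
}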
Because the bean is a singleton and the methods are not synchronized, Spring will allow any number of threads to invoke isActiveAccount and refreshAccounts concurrently. So no, this class is not going to be thread-safe and will not reduce the database load.
We have many such pieces of metadata and run some 11 nodes. On each app node we keep static maps for this data, so there is only one instance per node, initialized from the database once at startup, once a day in an off-peak hour, or whenever a support person triggers it. We have an internal, simple HTTP-POST-based API to send updates from one node to the others for the data that needs real-time updates.
public class AccountDao
{
    // shared per node: one copy of the data for all instances/threads on this app node
    private static volatile Map<String, Account> accounts = new ConcurrentHashMap<>();

    private final NamedParameterJdbcTemplate njt;

    public AccountDao(DataSource ds)
    {
        this.njt = new NamedParameterJdbcTemplate(ds);
        try {
            if (accounts.isEmpty()) {
                refreshAccounts();   // initial load once at startup / in an off-peak hour
            }
        } catch (Exception e) {
            // log but do not rethrow -- an exception escaping here makes the bean unusable
        }
    }

    /* called at creation, then once a day off-peak, when a support person triggers it,
       or via API calls when a separate program has added new accounts to the database */
    public void refreshAccounts()
    {
        accounts = /* call to database to load the accounts, keyed by account id */;
    }

    public void addAccount(Account acEditedOrAdded)
    {
        // add or remove one entry in the map; can be called on this node or pushed from another node.
        // if you have 2+ nodes, keep the IP/port of each (or use an internal web service or the like)
        // to tell node B when an account is added or changed on node A
    }

    // called by every request to the web service
    public static boolean isActiveAccount(String accountId)
    {
        Account a = accounts.get(accountId);
        return a != null && a.isActive();
    }
}
I am creating a banking application. Currently, there is one primary service, the transaction service. This service allows getting transactions by id and creating transactions. In order to create a transaction, I first want to check whether the transaction is valid, by checking the balance of the account it is trying to deduct from and seeing if it has enough balance. Right now I am doing the following:
TransactionController calls TransactionService. TransactionService creates the Transaction, then checks whether it is a valid transaction. For this, I have created an AccountService, which queries the AccountRepository and returns an Account. I then do the comparison Account.balance > Transaction.amount.
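Roughly like this (a simplified sketch of the flow above; getAccount, getBalance, the Transaction constructor and the repository save are placeholders):
@Service
public class TransactionService {

    @Autowired
    private AccountService accountService;          // never called from a controller directly

    @Autowired
    private TransactionRepository transactionRepository;

    public Transaction create(String accountId, BigDecimal amount) {
        Account account = accountService.getAccount(accountId);
        // validity check: the account must cover the amount being deducted
        if (account.getBalance().compareTo(amount) < 0) {
            throw new IllegalStateException("Insufficient balance");
        }
        return transactionRepository.save(new Transaction(accountId, amount));
    }
}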
I am conscious here that the TransactionService create method is relying on the AccountService get method. Additionally, AccountService is never directly called from a controller.
Is this an ok way to architect, or is there a more elegant way to do this?
In your case, I would say it is OK, because I guess that if Account.balance < Transaction.amount you don't really want to go forward with the transaction. So you must at some point get the needed data from the AccountService; there is no way around that.
If you just wanted to trigger some side-effect task (like sending an email or something), you could rely on an event-based approach, in which TransactionService would publish an event and a hypothetical NotificationsService would react to it at some point in time and do its thing.
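A minimal sketch of that event-based variant, using Spring's ApplicationEventPublisher and @EventListener (the event and service names are made up for illustration):
// in TransactionService, after the transaction has been persisted:
// applicationEventPublisher.publishEvent(new TransactionCreatedEvent(saved.getId()));

public class TransactionCreatedEvent {

    private final Long transactionId;

    public TransactionCreatedEvent(Long transactionId) {
        this.transactionId = transactionId;
    }

    public Long getTransactionId() {
        return transactionId;
    }
}

@Service
public class NotificationsService {

    // reacts to the event at some later point; the transaction flow itself
    // never depends on this side effect
    @EventListener
    public void onTransactionCreated(TransactionCreatedEvent event) {
        // send an email / notification here
    }
}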
Your logic seems fine. If you only need the AccountService from the TransactionService (or another service), this is valid. There is no need to call the AccountService from a controller just for the sake of it if that makes no sense for your logic. In fact, some services invoked from a controller may call many other services, such as an email service, a text message service, and so on.
You can reference your AccountRepository directly in your TransactionService.
Sorry, I don't speak Java, but here is a C# example:
public class Transaction {
// implementation redacted
public Transaction(decimal amount, Account from, Account to) {
if(amount > from?.Balance) throw ... ;
// redacted
}
}
public class TransactionService {
private readonly AccountRepository accounts; // by injection
private readonly TransactionRepository transactions; // by injection
public void AddTransaction(decimal amount, int source, int destination) {
var from = accounts.Find(source); // throws if not found
var to = accounts.Find(destination); // throws if not found
var transaction = new Transaction(amount, from, to);
transactions.Insert(transaction);
transactions.Persist();
}
}
However, this solution is less ORM-friendly because of the Transaction constructor. Another way around it would be to use Account as your aggregate root, and place the business rule validation and the entity relationship handling code there:
public class Account {
// implementation redacted
public void AddTransaction(decimal amount, Account to) {
if(amount > this.Balance) throw ... ;
// more redacted validations
this.Debitus.Add(new Transaction { Amount = amount, From = this, To = to });
}
}
public class TransactionService {
private readonly AccountRepository accounts; // by injection
public void AddTransaction(decimal amount, int source, int destination) {
var from = accounts.Find(source); // throws if not found
var to = accounts.Find(destination); // throws if not found
from.AddTransaction(amount, to);
accounts.Persist();
}
}
I wouldn't create an account service. I would call the account repository from the transaction service. Apart from that, I wouldn't create a Transaction object before knowing whether it is valid; I would check the conditions before creating the transaction.
I agree with @choquero70. I don't like to compose or couple services' operations, because that getBalance may differ from one purpose to another. I would implement it in TransactionService, invoking AccountRepository or whatever repository or client you need. TransactionService is the "perspective" of the operation.
The title might be incorrect, but I will try to explain my issue. My project is a Spring Boot project. I have services which make calls to external REST endpoints.
I have a service method which contains several calls to other services of mine. Every individual call may or may not succeed. Each call goes to a REST endpoint, and there can be issues, for example that the web service is not available or that it throws an unknown exception in rare cases. Whatever happens, I need to be able to track which calls were successful, and if any one of them fails, I want to roll back to the original state as if nothing happened, a bit like the @Transactional annotation. The REST calls are all different endpoints, have to be called separately, and belong to an external party I have no influence on. Example:
public class MyServiceImpl implements MyService {

    @Autowired
    private Process1Service process1Service;
    @Autowired
    private Process2Service process2Service;
    @Autowired
    private Process3Service process3Service;
    @Autowired
    private Process4Service process4Service;

    public void bundledProcess() {
        process1Service.createFileRESTcall();
        process2Service.addFilePermissionsRESTcall();
        process3Service.addFileMetadataRESTcall();   // <-- might fail, for example
        process4Service.addFileTimestampRESTcall();
    }
}
If for example process3Service.addFileMetadataRESTcall fails I want to do something like undo (in reverse order) for every step before process3:
process2Service.removeFilePermissionsRESTcall();
process1Service.deleteFileRESTcall();
I read about the Command pattern, but it seems to be used for undo actions inside an application, as a sort of history of actions performed, not inside a Spring web application. Is it correct for my use case too, or should I track per method/web-service call whether it was successful? Is there a best practice for this?
I guess that however I track it, I need to know which call failed and from there perform my 'undo' REST calls. Although in theory even those calls might fail, of course.
My main goal is to not end up with files (in my example) on which the further processing steps have not been performed. It should either all succeed or nothing at all; a sort of transaction.
Update 1: improved pseudo-implementation based on the comments:
public class Process1ServiceImpl implements Process1Service {

    public void createFileRESTcall() throws MyException {
        // Call an external REST API, pseudo code:
        if (/* REST call fails */) {
            throw new MyException("External REST api failed");
        }
    }
}
public class BundleProcessEvent {

    private boolean createFileSuccess;
    private boolean addFilePermissionsSuccess;
    private boolean addFileMetadataSuccess;
    private boolean addFileTimestampSuccess;

    // Getters and setters
}
public class MyServiceImpl implements MyService {

    @Autowired
    private Process1Service process1Service;
    @Autowired
    private Process2Service process2Service;
    @Autowired
    private Process3Service process3Service;
    @Autowired
    private Process4Service process4Service;

    @Autowired
    private ApplicationEventPublisher applicationEventPublisher;

    @Transactional(rollbackOn = MyException.class)
    public void bundledProcess() {
        BundleProcessEvent bundleProcessEvent = new BundleProcessEvent();
        this.applicationEventPublisher.publishEvent(bundleProcessEvent);

        process1Service.createFileRESTcall();
        bundleProcessEvent.setCreateFileSuccess(true);

        process2Service.addFilePermissionsRESTcall();
        bundleProcessEvent.setAddFilePermissionsSuccess(true);

        process3Service.addFileMetadataRESTcall();
        bundleProcessEvent.setAddFileMetadataSuccess(true);

        process4Service.addFileTimestampRESTcall();
        bundleProcessEvent.setAddFileTimestampSuccess(true);
    }

    @TransactionalEventListener(phase = TransactionPhase.AFTER_ROLLBACK)
    public void rollback(BundleProcessEvent bundleProcessEvent) {
        // If the last step succeeded we should not be in this rollback method at all
        //if (bundleProcessEvent.isAddFileTimestampSuccess()) {
        //    // remove timestamp
        //}
        if (bundleProcessEvent.isAddFileMetadataSuccess()) {
            // remove metadata
        }
        if (bundleProcessEvent.isAddFilePermissionsSuccess()) {
            // remove file permissions
        }
        if (bundleProcessEvent.isCreateFileSuccess()) {
            // remove file
        }
    }
}
Your operation looks like a transaction, so you can use the @Transactional annotation. From your code I can't really tell how you are handling the HTTP responses for each of those calls, but you should consider having your service methods return them, and then deciding on a rollback based on the responses. You can create an array of methods like below, but how exactly you want your logic to work is up to you.
private Process[] restCalls = new Process[] {
    new Process() { public void call() { process1Service.createFileRESTcall(); } },
    new Process() { public void call() { process2Service.addFilePermissionsRESTcall(); } },
    new Process() { public void call() { process3Service.addFileMetadataRESTcall(); } },
    new Process() { public void call() { process4Service.addFileTimestampRESTcall(); } },
};

interface Process {
    void call();
}

@Transactional(rollbackOn = Exception.class)
public void bundledProcess() {
    restCalls[0].call();
    // ... say, see which process returned a wrong response code
}

@TransactionalEventListener(phase = TransactionPhase.AFTER_ROLLBACK)
public void rollback() {
    // handle rollback according to the failed method index
}
Check this article. Might come in handy.
The answer to this question is quite broad; there are too many ways to do distributed transactions to go through them all here. However, since you are using Java and Spring, your best bet is to use something like JTA (Java Transaction API), which enables a distributed transaction across multiple services/instances/etc. Fortunately, Spring Boot supports JTA using either Atomikos or Bitronix; you can read the doc here.
One approach to enabling distributed transactions is a message broker such as JMS, RabbitMQ, Kafka, ActiveMQ, etc., together with a protocol like XA transactions (two-phase commit). For external services that do not support distributed transactions, one approach is to write a wrapper service that understands XA transactions in front of the external service.
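If XA is not realistic for those external REST endpoints, a lighter-weight alternative is plain compensation: record an 'undo' action for every call that succeeds and run them in reverse order when a later call fails. A rough sketch, assuming the REST-call methods throw unchecked exceptions on failure; removeFileMetadataRESTcall is an assumed compensating call, not something from the original post:
public void bundledProcess() {
    // undo actions for the steps that have already succeeded, most recent first
    Deque<Runnable> compensations = new ArrayDeque<>();
    try {
        process1Service.createFileRESTcall();
        compensations.push(() -> process1Service.deleteFileRESTcall());

        process2Service.addFilePermissionsRESTcall();
        compensations.push(() -> process2Service.removeFilePermissionsRESTcall());

        process3Service.addFileMetadataRESTcall();
        compensations.push(() -> process3Service.removeFileMetadataRESTcall()); // assumed undo call

        process4Service.addFileTimestampRESTcall();
    } catch (RuntimeException e) {
        // roll the successful steps back in reverse order; an undo may itself fail,
        // so log and continue instead of aborting the cleanup
        while (!compensations.isEmpty()) {
            try {
                compensations.pop().run();
            } catch (RuntimeException undoFailure) {
                // log and carry on
            }
        }
        throw e;
    }
}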
In order to optimize SQL requests, I've made a service that aggregates other services' results to avoid unnecessary calls.
(Some pages of my webapp are called millions of times per day, so I want to reuse the results of database queries as many times as possible within each request.)
The solution I came up with is this one:
My service has @RequestScope instead of the default scope (singleton).
In MyService
@Service
@RequestScope
public class MyService {

    private int param;

    @Autowired
    private OtherService otherService;

    @Autowired
    private OtherService2 otherService2;

    private List<Elements> elements;
    private List<OtherElements> otherElements;

    public void init(int param) {
        this.param = param;
    }

    public List<Elements> getElements() {
        if (this.elements == null) {
            // Init elements lazily, only once per request
            this.elements = otherService.getElements(param);
        }
        return this.elements;
    }

    public List<OtherElements> getOtherElements() {
        if (this.otherElements == null) {
            // Init otherElements lazily, only once per request
            this.otherElements = otherService2.getOtherElements(param);
        }
        return this.otherElements;
    }

    public String getMainTextPres() {
        // Needs the elements list
        List<Elements> elts = this.getElements();
        ....
        return myString;
    }

    public String getSecondTextPres() {
        // Needs the elements list
        List<Elements> elts = this.getElements();
        // Also needs the otherElements list
        List<OtherElements> otherElts = this.getOtherElements();
        ....
        return myString;
    }
}
In my controller:
public class MyController {

    @Autowired
    private MyService myService;

    @RequestMapping...
    public ModelAndView myFunction(int param) {
        myService.init(param);
        String mainTextPres = myService.getMainTextPres();
        String secondTextPres = myService.getSecondTextPres();
    }

    @OtherRequestMapping...
    public ModelAndView myOtherFunction(int param) {
        myService.init(param);
        String secondTextPres = myService.getSecondTextPres();
    }
}
Of course, I've simplified my example: MyService uses lots of other elements, and I protect the initialization of its member attributes.
This approach has the advantage of lazily loading the attributes only when I need them.
If somewhere in my project (in the same or another controller) I only need the second text, then calling getSecondTextPres() will initialize both lists, which is not the case in my first example, because the elements list had already been initialized when getMainTextPres() was called.
My questions are:
What do you think of this way of doing things?
Might I have performance issues because I instantiate my service on each request?
Thanks a lot!
Julien
I think that your idea is not going to fly. If you call the same or a different controller, it will be a different request, and in that case a new bean will be created (elements and otherElements are empty again).
Have you been thinking about caching? Spring has nice support for it, where you can define cache expiration, etc.
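For example, with Spring's caching abstraction the per-request fields could disappear entirely (a sketch: the cache name is arbitrary, loadElementsFromDatabase is a hypothetical private method, and you still need @EnableCaching plus a cache provider configured):
@Service
public class OtherService {

    // the result for a given param is computed once and then served from the cache
    // until the entry expires or is evicted
    @Cacheable("elements")
    public List<Elements> getElements(int param) {
        return loadElementsFromDatabase(param);   // expensive query, hit only on cache misses
    }
}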
It's not quite clear to me what exactly you want to optimise by instantiating the service in request scope. If you are worried about the memory footprint, you could easily measure it via JMX or VisualVM.
On the other hand, you could make all service calls pure, i.e. depending only on the function parameters and (of course) the database state, and instantiate the service with the default singleton scope.
This will save you a reasonable amount of resources, as you will not instantiate a possibly large object graph on each call and will not need the GC to clean things up after the request is done.
The rule of thumb is to ask why exactly you need a specific class instantiated on every call; if it doesn't keep any state specific to the call, make it a singleton.
Speaking about lazy loading, it always helps to think about the worst case repeated, say, 100 times. Will it really save you anything compared to loading everything once for the whole container lifetime?
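In other words, something along these lines (a sketch of the stateless singleton variant; the buildMainText/buildSecondText helpers are hypothetical):
@Service   // default singleton scope
public class MyService {

    @Autowired
    private OtherService otherService;

    @Autowired
    private OtherService2 otherService2;

    // no per-request fields: each result depends only on the parameter
    // and the current database state
    public String getMainTextPres(int param) {
        List<Elements> elements = otherService.getElements(param);
        return buildMainText(elements);                       // hypothetical helper
    }

    public String getSecondTextPres(int param) {
        List<Elements> elements = otherService.getElements(param);
        List<OtherElements> otherElements = otherService2.getOtherElements(param);
        return buildSecondText(elements, otherElements);      // hypothetical helper
    }
}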
I have a Spring web application with concurrent access to a certain resource. This resource holds a list of a certain number of objects which may be fetched by a request. If the list is empty, no SomeClass object should be returned any more.
The resource looks like this:
public class Resource {

    private List<SomeClass> someList;

    public List<SomeClass> fetch() {
        List<SomeClass> fetched = new ArrayList<SomeClass>();
        int max = someList.size();
        if (max <= 0) {
            return fetched;
        }
        int added = 0;
        while (added < max) {
            int randomIndex = ThreadLocalRandom.current().nextInt(max);
            SomeClass someClass = someList.get(randomIndex);
            if (!fetched.contains(someClass)) {
                fetched.add(someClass);
                ++added;
            }
        }
        someList.removeAll(fetched);
        return fetched;
    }
}
This resource is loaded in the service layer, then accessed and saved back to the database:
@Service
public class ResourceService {

    @Autowired
    private ResourceRepository repo;

    public List<SomeClass> fetch(long id) {
        Resource resource = repo.findOne(id);
        List<SomeClass> fetched = resource.fetch();
        repo.save(resource);
        return fetched;
    }
}
I tried to use @Transactional on the ResourceService#fetch method to avoid the problem that two concurrent requests might fetch a SomeClass object from the list although the list was already emptied by the first request, but I'm not sure this is the right approach... Do I have to use @Synchronized on the Resource#fetch method, or introduce an explicit lock in the service layer?
I need to make sure that only one request at a time is accessing the Resource (fetching a list of SomeClass objects), without throwing an exception. Instead, subsequent requests should be queued and try to access the resource after the current request has finished fetching its list of SomeClass objects.
My final solution was to introduce a blocking queue in the @Service and add all incoming requests to it. A separate thread then takes an element from the queue as soon as one is added and processes it.
I think this is the cleanest solution, as adding a ReentrantLock would block the request processing.
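A sketch of that approach, using a single-threaded executor whose internal queue plays the role of the blocking queue (the names mirror the code above, the rest is illustrative):
@Service
public class ResourceService {

    @Autowired
    private ResourceRepository repo;

    // one worker thread: fetch requests are queued and processed strictly one at a time
    private final ExecutorService worker = Executors.newSingleThreadExecutor();

    public List<SomeClass> fetch(long id) throws InterruptedException, ExecutionException {
        Future<List<SomeClass>> result = worker.submit(() -> {
            Resource resource = repo.findOne(id);
            List<SomeClass> fetched = resource.fetch();
            repo.save(resource);
            return fetched;
        });
        // the calling request waits here until its turn has been processed
        return result.get();
    }
}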
So for a school project we created a site where a user could submit a report on underwater life, etc. We used simple dependency injection (javax.inject) and an error-checking pattern as follows:
ReportService.java
public interface ReportService {

    public static final int EXIT_SUCCESS = 0;
    public static final int EXIT_FAILURE = 1;

    public static enum ReportServiceErrorsENUM {
        DB_FAILURE, WRONG_COORD // etc
    }

    public Set<ReportServiceErrorsENUM> getLastErrors();

    public int addNewReport(Report report);
}
ReportServiceImpl.java
public class ReportServiceImpl implements ReportService {

    private Set<ReportServiceErrorsENUM> lastErrors;

    private @Inject ReportDAO reportDAO;

    @Override
    public Set<ReportServiceErrorsENUM> getLastErrors() {
        return this.lastErrors;
    }

    @Override
    public int addNewReport(Report report) {
        lastErrors = new HashSet<ReportServiceErrorsENUM>(); // throw away previous errors
        UserInput input = report.getUserInput();
        if (input.getLatitude() == null) {
            lastErrors.add(ReportServiceErrorsENUM.WRONG_COORD);
        }
        // etc etc
        if (reportDAO.insertReport(report) != 0) {
            // failure inserting the report in the DB
            lastErrors.add(ReportServiceErrorsENUM.DB_FAILURE);
        }
        if (lastErrors.isEmpty()) // if there were no errors
            return EXIT_SUCCESS; // 0
        return EXIT_FAILURE; // 1
    }
}
SubmitReportController.java
@WebServlet("/submitreport")
public class SubmitReportController extends HttpServlet {

    private static final long serialVersionUID = 1L;

    private @Inject ReportService reportService;

    @Override
    protected void doPost(HttpServletRequest request,
            HttpServletResponse response) throws ServletException, IOException {
        Report report = new Report();
        // set the report's fields from the HttpServletRequest attributes
        if (reportService.addNewReport(report) == ReportService.EXIT_FAILURE) {
            for (ReportService.ReportServiceErrorsENUM error : reportService.getLastErrors()) {
                // display the errors etc
            }
        } else {
            // display confirmation
        }
    }
}
The idea is that the servlet controller calls the (injected) service, checks the service's return value, and calls getLastErrors() on the service if there was an error, to inform the user what went wrong. Now I just came to realize that this is not thread safe: the @Inject'ed ReportService (reportService) will be shared by all threads using the servlet.
Is it (crosses fingers)?
How could one improve on this error mechanism?
Thanks
Typically for servlets you'd want to keep those variables (generally called "state") in some container-managed context.
I'd move these errors to the request scope; that way they're stored on the request object (conceptually) and any servlet/JSP/whatever working on that same request can see and edit them.
Different requests mean different data storage.
Example code for using the request scope from a servlet can be found here: http://www.exampledepot.com/egs/javax.servlet/State.html
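For instance, the servlet could get the errors back from the service call and put them on the request before forwarding to the view, instead of reading them from a field on the shared service (a sketch; the changed return type of addNewReport, the attribute name and the JSP paths are assumptions):
// hypothetical variant where addNewReport returns the errors instead of storing them
Set<ReportService.ReportServiceErrorsENUM> errors = reportService.addNewReport(report);

if (!errors.isEmpty()) {
    request.setAttribute("reportErrors", errors);   // visible to this request only
    request.getRequestDispatcher("/WEB-INF/submitReport.jsp").forward(request, response);
} else {
    request.getRequestDispatcher("/WEB-INF/confirmation.jsp").forward(request, response);
}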
Your design is neither thread-safe nor ready to serve multiple users. It is not thread safe because several users (browsers) can hit the servlet at the same time and in turn access the lastErrors set concurrently (yes, there is only one instance of the servlet and of your service), and HashSet, which you use, is not thread safe.
Also, if two different people use the application at the same time, they will overwrite and have access to reports (errors) submitted by each other. In other words, there is global state shared among all users where there should have been state per user/session.
By fixing the second issue (a tip: use HttpSession) you are unlikely to see the first one. This is because it is rather rare to see concurrent access to the same session. But it's possible (concurrent AJAX requests, two browser tabs). Keep that in mind, but there are more important issues to solve first.