Handling additional data in Apache ServiceComb compensation methods - java

I'm currently looking at implementations of the saga pattern for distributed transactions, and Apache ServiceComb Pack looks like it might work for me.
However, I have found that the requirement for compensating methods to have the same signature as the methods they compensate may be a bottleneck.
From Apache's example:
@Compensable(compensationMethod = "cancel")
void order(CarBooking booking) {
    booking.confirm();
    bookings.put(booking.getId(), booking);
}

void cancel(CarBooking booking) {
    Integer id = booking.getId();
    if (bookings.containsKey(id)) {
        bookings.get(id).cancel();
    }
}
You can see that we have the same declaration for both methods.
But what if I need additional information to compensate my transaction? For instance, suppose I call an external system to set some flag to "true". When compensating, how does the "cancel" method know what the original value of that flag was?
Things get trickier when we update a whole object. How do I pass the object as it was before the modification to the compensating method?
This limitation doesn't look promising. Do you know of any approaches to work around it?

You can save the localTxId together with the flag in your application, and use the localTxId in the compensation method to look the flag up:
Map<String, String> extmap = new HashMap<>();

@Autowired
OmegaContext omegaContext;

@Compensable(compensationMethod = "cancel")
void order(CarBooking booking) {
    booking.confirm();
    bookings.put(booking.getId(), booking);
    // save the flag under the current local transaction id
    extmap.put(omegaContext.localTxId(), "your flag");
}

void cancel(CarBooking booking) {
    // get the flag back
    String flag = extmap.get(omegaContext.localTxId());
    Integer id = booking.getId();
    if (bookings.containsKey(id)) {
        bookings.get(id).cancel();
    }
}
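The same trick generalizes to whole objects: snapshot the object under the localTxId before mutating it, and restore from the snapshot in the compensation method. A minimal, Omega-free sketch (the SnapshotStore helper is made up for illustration; in ServiceComb the key would come from omegaContext.localTxId(), and the stored value must be a defensive copy, not the live object):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SnapshotStore<T> {

    private final Map<String, T> snapshots = new ConcurrentHashMap<>();

    // call before mutating: keep a copy of the pre-modification state,
    // keyed by the local transaction id
    public void save(String localTxId, T before) {
        snapshots.put(localTxId, before);
    }

    // call from the compensation method: recover (and discard) the snapshot
    public T restore(String localTxId) {
        return snapshots.remove(localTxId);
    }
}
```

With this, order(...) would call save(omegaContext.localTxId(), copyOf(booking)) before booking.confirm(), and cancel(...) would fetch the copy to undo the change.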


Can I get the Field value in String into custom TokenFilter in Apache Solr?

I need to write a custom LemmaTokenFilter, which replaces and indexes words with their lemmatized (base) forms. The problem is that I get the base forms from an external API, meaning I need to call the API, send my text, parse the response, and pass it as a Map<String, String> to my LemmaTokenFilter. The map contains pairs of <originalWord, baseFormOfWord>. However, I cannot figure out how to access the full value of the text field that is being processed by the TokenFilters.
One idea is to go through the tokenStream one by one when the LemmaTokenFilter is created by the LemmaTokenFilterFactory. However, I would need to be careful not to edit anything in the tokenStream and to somehow reset the current token (since I would need to call the .increment() method on it to get all the tokens). Most importantly, this seems unnecessary: the field value is already there somewhere, and I don't want to spend time reassembling it from the tokens. This implementation would probably be too slow.
Another idea would be to just process every token separately, however calling an external API with only one word and then parsing the response is definitely too inefficient.
I have found something on using the ResourceLoaderAware interface, however I don't really understand how could I use this to my advantage. I could probably save the map in a text file before every indexing, but writing to a file, opening it and reading from it before every document indexing seems too slow as well.
So the best way would be to just pass the value of the field as a String to the constructor of LemmaTokenFilter, however I don't know how to access it from the create() method of the LemmaTokenFilterFactory.
I could not find any help googling it, so any ideas are welcome.
Here's what I have so far:
public final class LemmaTokenFilter extends TokenFilter {

    private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);
    private Map<String, String> lemmaMap;

    protected LemmaTokenFilter(TokenStream input, Map<String, String> lemmaMap) {
        super(input);
        this.lemmaMap = lemmaMap;
    }

    @Override
    public boolean incrementToken() throws IOException {
        if (input.incrementToken()) {
            String term = termAtt.toString();
            String lemma;
            if ((lemma = lemmaMap.get(term)) != null) {
                termAtt.setEmpty();
                termAtt.copyBuffer(lemma.toCharArray(), 0, lemma.length());
            }
            return true;
        } else {
            return false;
        }
    }
}
public class LemmaTokenFilterFactory extends TokenFilterFactory implements ResourceLoaderAware {

    public LemmaTokenFilterFactory(Map<String, String> args) {
        super(args);
        if (!args.isEmpty()) {
            throw new IllegalArgumentException("Unknown parameters: " + args);
        }
    }

    @Override
    public TokenStream create(TokenStream input) {
        return new LemmaTokenFilter(input, getLemmaMap(getFieldValue(input)));
    }

    private String getFieldValue(TokenStream input) {
        // TODO: how?
        return "Šach je desková hra pro dva hráče, v dnešní soutěžní podobě zároveň považovaná i za odvětví sportu.";
    }

    private Map<String, String> getLemmaMap(String data) {
        return UdPipeService.getLemma(data);
    }

    @Override
    public void inform(ResourceLoader loader) throws IOException {
    }
}
1. API-based approach:
You can create an analysis chain with the custom lemmatizer on top. To design this lemmatizer, you can look at the implementation of the Keyword Tokenizer:
read everything in the input and then call your API;
replace the tokens in the input text with those from the API response;
after that, in the analysis chain, use the standard or whitespace tokenizer to tokenize your data.
2. File-based approach
It follows the same steps, except that instead of calling the API it can use a hashmap built from the files mentioned while defining the TokenStream.
Now coming to ResourceLoaderAware:
It is required when you need to signal to your TokenStream that a resource has changed; its inform method takes care of that. For reference, you can look into StemmerOverrideFilter.
Keyword Tokenizer: Emits the entire input as a single token.
So I think I found the answer, or actually two answers.
One would be to write my client application in a way, that incoming requests are first processed - the field value is sent to the external API and the response is stored into some global variable, which can then be accessed from the custom TokenFilters.
Another one would be to use custom UpdateRequestProcessors, which allow us to modify the content of the incoming document, calling the external API and again saving the response so it's somehow globally accessible from custom TokenFilters. Here Erik Hatcher talks about the use of the ScriptUpdateProcessor, which I believe can be used in my case too.
Hope this helps anyone stumbling upon a similar problem, because I had a hard time looking for a solution to this (I could not find any similar threads on SO).

Checking "rules" in Java without lots of if statements

I'm creating a Spring Boot banking API, and in order to create a transaction a bunch of "rules" have to be checked.
e.g:
Current logged in user can't withdraw money from another user's savings account
Amount can't be higher/lower than certain number
etc.
This causes my createTransaction method to contain a lot of if statements (12!). This is what my code looks like in pseudo:
public ResponseEntity<String> createTransaction(Transaction body) {
    if (check rule 1) {
        return ResponseEntity.status(HttpStatus.BAD_REQUEST).body("...");
    }
    if (check rule 2) {
        return ResponseEntity.status(HttpStatus.BAD_REQUEST).body("...");
    }
    // etc...

    // Transaction complies with the set of rules
    return ResponseEntity.status(HttpStatus.CREATED).body("Transaction successful!");
}
I can post my actual code if necessary, but I think this paints the picture without making anyone read 100 lines of code.
Because I have around 12 if statements checking these rules, the method is quite lengthy and difficult to read and maintain.
Googling didn't bring up the kind of solution I was looking for. I've tried using exceptions, but that didn't reduce the number of if statements. Maybe a switch would improve things a bit, but I'm wondering if there's a clean OOP solution.
My question is: How can I clean this code up (OOP style)?
Thanks in advance.
You should create a TransactionRule interface that allows you to implement specific transaction rules, and then use a stream to get the final result:
public interface TransactionRule {
    boolean isAllowed(Transaction someTransaction);
}
Example implementation 1:
public class SufficientBudgetTransactionRule implements TransactionRule {
    @Override
    public boolean isAllowed(Transaction someTransaction) {
        // custom logic, e.g.
        return someTransaction.wallet.value >= someTransaction.transaction.value;
    }
}
Example implementation 2:
public class NotInFutureTransactionRule implements TransactionRule {
    @Override
    public boolean isAllowed(Transaction someTransaction) {
        // custom logic, e.g.
        return someTransaction.transaction.datetime.isBefore(OffsetDateTime.now());
    }
}
Then you can store all the TransactionRules in a List and check whether they all pass like so:
private final List<TransactionRule> transactionRules; // fill this, e.g. via constructor injection

public boolean allTransactionRulesMatch(Transaction someTransaction) {
    return transactionRules.stream()
            .allMatch(transactionRule -> transactionRule.isAllowed(someTransaction));
}
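Building on that, the stream can also report which rule failed, so the controller builds its BAD_REQUEST response from a single check. A sketch with a deliberately simplified rule type (a predicate over a plain amount plus a message; the real code would take the Transaction object, and all names here are made up):

```java
import java.util.List;
import java.util.Optional;
import java.util.function.LongPredicate;

public class TransactionValidator {

    // hypothetical simplified rule: a check over a plain amount plus an error message
    public static final class Rule {
        final LongPredicate allowed;
        final String message;

        public Rule(LongPredicate allowed, String message) {
            this.allowed = allowed;
            this.message = message;
        }
    }

    // returns the message of the first violated rule, or empty if all rules pass
    public static Optional<String> firstViolation(List<Rule> rules, long amount) {
        return rules.stream()
                .filter(rule -> !rule.allowed.test(amount))
                .map(rule -> rule.message)
                .findFirst();
    }
}
```

The controller then needs exactly one if statement: if firstViolation returns a value, wrap it in a BAD_REQUEST; otherwise proceed with the CREATED response.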

Refreshing cache without impacting latency to access the cache

I have cache refresh logic and want to make sure that it's thread-safe and implemented the correct way.
public class Test {

    Set<Integer> cache = Sets.newConcurrentHashSet();

    public boolean contain(int num) {
        return cache.contains(num);
    }

    public void refresh() {
        cache.clear();
        cache.addAll(getNums());
    }
}
So I have a background thread that periodically calls refresh, while multiple threads call contain at the same time. I was trying to avoid synchronized in the method signatures, because refresh could take some time (imagine that getNums makes network calls and parses huge data) and contain would then be blocked.
I think this code is not good enough, because if contain is called between clear and addAll, it always returns false.
What is the best way to achieve cache refreshing without impacting significant latency to contain call?
The best way would be the functional programming paradigm, where you have immutable state (in this case a Set): instead of adding and removing elements in that Set, you create an entirely new Set every time you want to add or remove elements. Java 9's immutable collection factories make this style convenient.
It can be awkward or infeasible to retrofit this into legacy code, however. So instead you can keep the cache in a volatile field and assign a freshly built Set to it in the refresh method:
public class Test {

    volatile Set<Integer> cache = new HashSet<>();

    public boolean contain(int num) {
        return cache.contains(num);
    }

    public void refresh() {
        Set<Integer> privateCache = new HashSet<>();
        privateCache.addAll(getNums());
        cache = privateCache;
    }
}
Edit: We don't want or need a concurrent set here. That would only matter if you wanted to add and remove individual elements concurrently, which in my opinion is a pretty useless thing to do in this case. What you want is to switch the old Set for a new one, and a volatile variable is enough to make sure a reader can never observe a half-updated cache.
But, as I mentioned at the start of my answer, if you never modify collections and instead make a new one each time you want to update (note that the copy is a fairly cheap operation), you never need to worry about concurrency, as there is no shared mutable state between threads.
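A minimal sketch of that copy-on-write variant (plain Java, no external libraries): every mutation builds a new Set and publishes it through the volatile field, so readers always see a complete snapshot. Note that add here is not atomic under concurrent writers; that would need synchronization or an AtomicReference.

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

public class CopyOnWriteCache {

    private volatile Set<Integer> cache = Collections.emptySet();

    public boolean contains(int num) {
        return cache.contains(num); // one volatile read; never a half-updated set
    }

    public void add(int num) {
        Set<Integer> next = new HashSet<>(cache); // copy, mutate, then publish
        next.add(num);
        cache = Collections.unmodifiableSet(next);
    }

    public void refresh(Set<Integer> nums) {
        // atomic swap: there is no window where the cache appears empty
        cache = Collections.unmodifiableSet(new HashSet<>(nums));
    }
}
```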
How would you make sure your cache doesn't contain invalid entries when calling contains? Furthermore, you'd need to call refresh every time the result of getNums() changes, which is pretty inefficient. It would be best to control the changes to getNums() yourself and update the cache accordingly. The cache might look like:
public class MyCache {

    // a ConcurrentHashMap so we can use putIfAbsent
    private final ConcurrentHashMap<Integer, Boolean> cache = new ConcurrentHashMap<>();

    public boolean contains(Integer num) {
        return cache.containsKey(num); // containsKey, not contains (which checks values)
    }

    public void add(Integer num) {
        cache.putIfAbsent(num, true);
    }

    public void clear() {
        cache.clear();
    }

    public void remove(Integer num) {
        cache.remove(num);
    }
}
Update
As @schmosel made me realize, mine was a wasted effort: it is in fact enough to initialize a complete new HashSet<> with your values in the refresh method, assuming of course that the cache field is marked volatile. In short, @Snickers3192's answer points out what you seek.
Old answer
You can also use a slightly different system.
Keep two Set<Integer>, one of which will always be empty. When you refresh the cache, you can asynchronously re-initialize the second one and then just switch the pointers. Other threads accessing the cache won't see any particular overhead in this.
From an external point of view, they will always be accessing the same cache.
private volatile int currentCache; // 0 or 1
private final Set<Integer>[] caches = new HashSet[2]; // two caches; one is always empty, so little extra memory is consumed
private volatile Set<Integer> cachePointer = null; // pointer to the current cache, must be volatile

// initialize
{
    this.caches[0] = new HashSet<>(0);
    this.caches[1] = new HashSet<>(0);
    this.currentCache = 0;
    this.cachePointer = caches[this.currentCache]; // point to cache zero from the beginning
}
Your refresh method may look like this:
public void refresh() {
    // store the current cache pointer
    final int previousCache = this.currentCache;
    final int nextCache = getNextPointer();
    // the refill can run asynchronously;
    // in the meantime, external threads still access the current cache
    CompletableFuture.runAsync(() -> {
        // fill the unused cache
        caches[nextCache].addAll(getNums());
        // then switch the pointer to the just-filled cache;
        // from this point on, threads access the new cache
        switchCachePointer();
        // empty the other cache, still on the async thread
        caches[previousCache].clear();
    });
}
where the utility methods are:
public boolean contains(final int num) {
    return this.cachePointer.contains(num);
}

private int getNextPointer() {
    return (this.currentCache + 1) % this.caches.length;
}

private void switchCachePointer() {
    // make cachePointer point to the new cache
    this.currentCache = this.getNextPointer();
    this.cachePointer = caches[this.currentCache];
}

Should I abstract the service layer on the client side and if yes how?

The thing is that I am using Hibernate on the server side and am basically sending "raw" database data to the client, which is fine I guess, but it also means that my client gets a List<UpcomingEventDTO> when calling the corresponding service: just a flat list of events from one specified date to another.
If I now want to split those events into a map whose keys map to the lists of events of one day, e.g. a Map<Integer, List<UpcomingEventDTO>>, then I have to do this on the client side. This wouldn't bother me if I didn't have to do it in my Presenter.
On the one hand I'm having the loading in my presenter:
private void loadUpcomingEvents(final Integer calendarWeekOffset) {
    new XsrfRequest<StoreServletAsync, List<UpcomingEventDTO>>(this.storeServlet) {
        @Override
        protected void onCall(AsyncCallback<List<UpcomingEventDTO>> asyncCallback) {
            storeServlet.getUpcomingEventsForCalendarWeek(storeId, calendarWeekOffset, asyncCallback);
        }
        @Override
        protected void onFailure(Throwable caught) {
        }
        @Override
        protected void onSuccess(List<UpcomingEventDTO> result) {
            upcomingEvents = result;
            presentUpcomingEvents();
        }
    }.request();
}
and the conversion of the data before I can present it:
private void presentUpcomingEvents() {
    Map<Integer, List<UpcomingEventDTO>> dayToUpcomingEvent = new HashMap<>();
    for (UpcomingEventDTO upcomingEvent : this.upcomingEvents) {
        @SuppressWarnings("deprecation")
        Integer day = upcomingEvent.getDate().getDay();
        List<UpcomingEventDTO> upcomingEvents = dayToUpcomingEvent.get(day);
        if (upcomingEvents == null) {
            upcomingEvents = new ArrayList<>();
        }
        upcomingEvents.add(upcomingEvent);
        dayToUpcomingEvent.put(day, upcomingEvents);
    }
    List<Integer> days = new ArrayList<Integer>(dayToUpcomingEvent.keySet());
    Collections.sort(days);
    this.calendarWeekView.removeUpcomingEvent();
    for (Integer day : days) {
        CalendarDayPresenterImpl eventCalendarDayPresenter = this.dayToEventCalendarDayPresenter.get(day);
        if (eventCalendarDayPresenter == null) {
            List<UpcomingEventDTO> upcomingEvents = dayToUpcomingEvent.get(day);
            eventCalendarDayPresenter = new CalendarDayPresenterImpl(upcomingEvents);
            this.dayToEventCalendarDayPresenter.put(day, eventCalendarDayPresenter);
        }
        this.calendarWeekView.appendEventCalendarDay(eventCalendarDayPresenter.getView());
    }
}
So my problem is basically that I am not really happy with having code like this in my presenter, but on the other hand I don't know how and where else to provide the data in this "upgraded" form for my presenter(s).
One could argue that I could return the data from the server already in the shape the client needs, but then I would lose generality, and I don't want to write a separate API to the database for every view and presenter.
Another possibility would be to introduce another layer between the service/servlet layer and my presenter's model, something like a DAO or database layer. But this raises quite a few questions for me too, e.g. what would such a layer be called, and would it provide "customized" data for presenters, or would the data still be kind of generalized?
I'm having quite a hard time figuring out what to do here, so I hope I can benefit from someone's experience.
Thanks a lot for any help!
Thanks a lot for any help here!
The presentation logic should be on the server side, in a controller layer that is meant to prepare the view for the clients (MVC pattern).
If many views want to use this, you can make an abstract controller that can be reused for other views.
It's also good to prepare your controller layer for future requirements. Ask yourself whether another client might ask for the data in a different granularity: maybe show the upcoming events by month, or by hour? You could give your API a granularity enum, say UPCOMING_EVENTS_DAY_GRANULARITY(DAY, MONTH, HOUR), as a method parameter, so that the client decides what it wants.
To take it further, you could rename/move the controller layer into a web service layer, which can be considered your future API for external systems (not only your views, but anyone outside your system).
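The granularity idea above could be sketched like this (all names are made up for illustration, and the DTO is reduced to a bare LocalDateTime; Collectors.groupingBy does the bucketing the presenter currently does by hand):

```java
import java.time.LocalDateTime;
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

public class EventGrouping {

    // each granularity carries the key extractor used for bucketing
    public enum Granularity {
        DAY(d -> d.getDayOfYear()),
        MONTH(d -> d.getMonthValue()),
        HOUR(d -> d.getHour());

        final Function<LocalDateTime, Integer> key;

        Granularity(Function<LocalDateTime, Integer> key) {
            this.key = key;
        }
    }

    // server-side: bucket events by whatever granularity the client asked for
    public static Map<Integer, List<LocalDateTime>> group(List<LocalDateTime> events,
                                                          Granularity granularity) {
        return events.stream().collect(Collectors.groupingBy(granularity.key));
    }
}
```

The presenter then receives the map ready-made and only has to render it, which keeps the grouping logic in one reusable place on the server.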

Track visitors in play framework 2 application,caching and saving to DB

What I want to accomplish is a sane way of storing the list of pages visited during each anonymous user session on my website. It is important that the sequence is correct and that one user's navigation is not mixed up with another's.
It is similar to this C# question, "track visitor info on every request", where they proposed using a logfile for short-term storage. My site will only host ~2K users/day, so I think I would be fine with keeping the cache in memory (at least that seems easier).
I can't seem to figure out a solution that works with the stateless Play framework, since the solution requires saving data (to the DB) only when a user has timed out and is no longer active. My idea relies on storing each request in memory and then calling the database if a new request hasn't arrived within a certain time (user timeout). How would it be possible to do that?
Is there any other way you think might be better, perhaps storing all the requests from all users and then making one big write to the DB instead of one per session?
EDIT:
I have now taken the easy way out. It would have been great to do what Salem mentioned, but I can't figure out how to keep a list in memory. My first plan was actually to use the Akka scheduler, which would be altered (timer reset and a new page appended) each time the user requests a new page, but I simply don't know how to get hold of an instance in memory from a previous request, so if someone could tell me that I would be very grateful.
For now I have solved my problem the "bad way", with a database request for each page view. Code below in case someone also wants to be bad:
In the Action:
String uuid = session("uuid");
Logger.info("Old uuid: " + uuid);
String generatedId = UserTracker.updateTracker(uuid, id.toString()).toString();
if (uuid == null || !generatedId.equals(uuid)) {
    Logger.info("New UUID for user: " + generatedId);
    session("uuid", generatedId);
}
UserTracker class:
@Entity
public class UserTracker extends Model {

    @Id
    @SequenceGenerator(name = "ut_gen", sequenceName = "user_tracker_seq", allocationSize = 1)
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "ut_gen")
    long session_id;

    String navigations;

    public static Long updateTracker(String session, String page) {
        UserTracker ut = null;
        try {
            Long sessionId = Long.parseLong(session);
            ut = UserTracker.find.byId(sessionId);
            if (ut != null) {
                ut.addPage(page);
                ut.update();
            } else {
                throw new NumberFormatException();
            }
        } catch (NumberFormatException e) {
            // unknown or unparsable session id: start a new tracker
            ut = new UserTracker(page);
            ut.save();
        }
        return ut.session_id;
    }

    private void addPage(String page) {
        if (navigations != null && !navigations.isEmpty())
            navigations += "," + page;
        else
            navigations = page;
    }

    public UserTracker(String page) {
        addPage(page);
        save();
    }

    public static Finder<Long, UserTracker> find =
            new Finder<Long, UserTracker>(Long.class, UserTracker.class);
}
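The in-memory idea Salem suggested could be sketched, independently of Play, with a plain ScheduledExecutorService: buffer each session's pages in a map and schedule a flush that fires after the inactivity timeout, resetting the timer on every new page view. All class and method names below are made up, and the "database write" is just an in-memory list standing in for the real persistence call:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class SessionBuffer {

    private final ConcurrentHashMap<String, List<String>> pages = new ConcurrentHashMap<>();
    private final ConcurrentHashMap<String, ScheduledFuture<?>> timers = new ConcurrentHashMap<>();
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor(r -> {
                Thread t = new Thread(r);
                t.setDaemon(true); // don't keep the JVM alive just for flush timers
                return t;
            });
    private final long timeoutMillis;
    // stand-in for the real database write: one entry per flushed session
    private final List<Map.Entry<String, List<String>>> flushed =
            Collections.synchronizedList(new ArrayList<>());

    public SessionBuffer(long timeoutMillis) {
        this.timeoutMillis = timeoutMillis;
    }

    // record a page view and reset the session's inactivity timer
    public void record(String sessionId, String page) {
        pages.computeIfAbsent(sessionId, k -> Collections.synchronizedList(new ArrayList<>()))
             .add(page);
        ScheduledFuture<?> previous = timers.put(sessionId,
                scheduler.schedule(() -> flush(sessionId), timeoutMillis, TimeUnit.MILLISECONDS));
        if (previous != null) {
            previous.cancel(false);
        }
    }

    // on timeout: persist the whole navigation list in one write
    void flush(String sessionId) {
        timers.remove(sessionId);
        List<String> navigation = pages.remove(sessionId);
        if (navigation != null) {
            flushed.add(Map.entry(sessionId, navigation));
        }
    }

    public List<Map.Entry<String, List<String>>> flushed() {
        return flushed;
    }
}
```

This turns N per-page database writes into one write per session, at the cost of losing any buffered navigations if the process dies before the timeout fires.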
