Set Object Variables Within Service Method - java

I have a couple of domain objects, Message and Contact:
public class Contact {
    String name;
}

public class Message {
    String body;
    Contact contact;
}
I'm populating a list of Messages and showing them to the user. Contact info for each Message is retrieved asynchronously, and then the list is updated. This is basically how I have it set up:
listAdapter.setDataSet(listOfMessages);
for (Message message : listOfMessages) {
    fetchContactDetails(message);
}
...
fetchContactDetails(Message message) {
    contactService
        .fetchContactDetails(message)
        .subscribeOn... // observe, etc.
        .subscribe(new Observer<Contact>() {
            onNext(Contact contact) {
                message.setContact(contact);
                list.notifyChanged(message);
            }
        });
}
This feels like a lot of code, given that I could instead just update each Message's Contact within the service function contactService.fetchContactDetails. On the other hand, it feels unclean to have a service method modify the object passed in without returning anything.
Is it a bad practice to use a service function to update an Object passed in as an argument, without returning anything?

Is it a bad practice to use a service function to update an Object passed in as an argument, without returning anything?
No. You can implement a void-style Observable method by using Completable, but in your case you still need to update the UI on the main thread, so you will subscribe and handle onNext on the main thread (by adding observeOn(AndroidSchedulers.mainThread())).
So your method is not really a pure void method that returns nothing, but a method that updates some data and hands it back to the UI for processing.
I think this is really a more general architectural question of how to separate the code correctly. I would definitely gather the model/domain logic together in a separate Observable, and let the UI/presentation layer handle just the UI updates.
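For example, a minimal sketch of that separation (the withContact helper is an assumption, not your actual API): the domain-side Observable emits the updated Message, so the presentation side only performs notification.
// Domain side: fetch the contact and emit the updated Message.
Observable<Message> withContact(Message message) {
    return contactService
        .fetchContactDetails(message)
        .map(contact -> {
            message.setContact(contact);
            return message;
        });
}

// Presentation side: nothing but UI concerns left here.
withContact(message)
    .subscribeOn(Schedulers.io())
    .observeOn(AndroidSchedulers.mainThread())
    .subscribe(updated -> list.notifyChanged(updated));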
BTW, you can loop more elegantly with Rx using from() and flatMap():
Observable.from(listOfMessages)
    .flatMap(msg -> contactService.fetchContactDetails(msg))
Then you have control over the entire process, like doing something when all updates are done, limiting the parallelism, or whatever.
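For instance, a hedged sketch of that control (the concurrency limit of 4, the error logging, and the onAllContactsLoaded callback are illustrative assumptions):
Observable.from(listOfMessages)
    .flatMap(msg -> contactService.fetchContactDetails(msg)
        .map(contact -> {
            msg.setContact(contact);
            return msg;
        }), 4) // at most 4 fetches in flight at a time
    .observeOn(AndroidSchedulers.mainThread())
    .subscribe(
        msg -> list.notifyChanged(msg),                    // per-message UI update
        throwable -> log.error("fetch failed", throwable), // single error handler
        () -> onAllContactsLoaded());                      // runs once all updates are done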


Returning Java Object from Mono

I am trying to get a JSON String from a Mono. I tried using the block() method to get the object and it worked fine, but when I use map/flatMap, I don't see the following lines of code being executed. And I can see that the account Mono is not empty.
private String getJsonString(Mono<Account> account) {
    account.map(it -> {
        // call is not coming here
        String json = mapper.writeValueAsString(it);
        System.out.println(json);
        return json;
    });
}
Am I doing anything wrong here?
If you give the official documentation a read, you will see that:
Nothing happens until you subscribe
Now, to understand who the subscriber is in a Spring Boot WebFlux based microservice, have a look at this Stack Overflow question.
If you think you can mix blocking and reactive implementations in the same service, unfortunately it doesn't work like that; to see why, you have to understand the event loop model on which Reactor works. Calling block() at any point in the flow does no good and is equivalent to using the old blocking spring-web methods, because the thread in which the request is being processed gets blocked while waiting for the outcome of the I/O operation / network call.
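Instead, keep the pipeline reactive end to end and return the Mono, letting the framework subscribe. A minimal sketch of how the method from the question could look (the checked-exception handling is an assumption):
private Mono<String> getJsonString(Mono<Account> account) {
    return account.map(it -> {
        try {
            return mapper.writeValueAsString(it); // runs once something subscribes
        } catch (JsonProcessingException e) {
            throw Exceptions.propagate(e); // Reactor helper to rethrow checked exceptions
        }
    });
}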
Coming to your question in the comment:
But when I use flatMap in my controller to call the handler method, it goes into the service method with an Object, not a Mono? serviceRequest --> Mono --> Object; how does this work?
Let me give you a simple example for this:
Suppose you have an employee application, where you want to fetch details of an employee for a given id.
Now in your controller, you will have an endpoint like this:
@GetMapping("/{tenant}/api/employee/{id}")
public Mono<ResponseEntity<EmployeeEntity>> getEmployeeDetails(@PathVariable("id") Long employeeId) {
    return employeeService.getDetails(employeeId)
            .map(ResponseEntity::ok);
}
Now in your service,
public Mono<EmployeeEntity> getDetails(Long employeeId) {
    return employeeRepository.findById(employeeId);
}
And your repository will look like this:
@Repository
public interface EmployeeRepository extends ReactiveCrudRepository<EmployeeEntity, Long> {
}

Android Architecture Components network threads

I'm currently checking out the following guide: https://developer.android.com/topic/libraries/architecture/guide.html
The NetworkBoundResource class:
// ResultType: Type for the Resource data
// RequestType: Type for the API response
public abstract class NetworkBoundResource<ResultType, RequestType> {

    // Called to save the result of the API response into the database
    @WorkerThread
    protected abstract void saveCallResult(@NonNull RequestType item);

    // Called with the data in the database to decide whether it should be
    // fetched from the network.
    @MainThread
    protected abstract boolean shouldFetch(@Nullable ResultType data);

    // Called to get the cached data from the database
    @NonNull @MainThread
    protected abstract LiveData<ResultType> loadFromDb();

    // Called to create the API call.
    @NonNull @MainThread
    protected abstract LiveData<ApiResponse<RequestType>> createCall();

    // Called when the fetch fails. The child class may want to reset components
    // like rate limiter.
    @MainThread
    protected void onFetchFailed() {
    }

    // returns a LiveData that represents the resource
    public final LiveData<Resource<ResultType>> getAsLiveData() {
        return result;
    }
}
I'm a bit confused here about the use of threads.
Why is @MainThread applied here for networkIO?
Also, @WorkerThread is applied for saving into the db, whereas @MainThread is applied for retrieving results.
Is it bad practice to use a worker thread by default for network IO and local db interaction?
I'm also checking out the following demo (GithubBrowserSample): https://github.com/googlesamples/android-architecture-components
This confuses me from a threading point of view.
The demo uses the Executors framework and defines a fixed pool with 3 threads for networkIO; however, in the demo a worker task is defined for only one call, i.e. the FetchNextSearchPageTask. All other network requests seem to be executed on the main thread.
Can someone clarify the rationale?
It seems you have a few misconceptions.
Generally it is never OK to perform network calls on the main (UI) thread, but unless you have a lot of data it might be OK to fetch data from the DB on the main thread. And this is what the Google example does.
1.
The demo uses the Executors framework and defines a fixed pool with 3 threads for networkIO; however, in the demo a worker task is defined for only one call, i.e. the FetchNextSearchPageTask.
First of all, since Java 8 you can create simple implementations of certain interfaces (so-called "functional interfaces") using lambda syntax. This is what happens in NetworkBoundResource:
appExecutors.diskIO().execute(() -> {
    saveCallResult(processResponse(response));
    appExecutors.mainThread().execute(() ->
            // we specially request a new live data,
            // otherwise we will get immediately last cached value,
            // which may not be updated with latest results received from network.
            result.addSource(loadFromDb(),
                    newData -> result.setValue(Resource.success(newData)))
    );
});
At first, the task (processResponse and saveCallResult) is scheduled on a thread provided by the diskIO Executor, and then from that thread the rest of the work is scheduled back onto the main thread.
2.
Why is @MainThread applied here for networkIO?
and
All other network requests seem to be executed on the main thread.
This is not so. Only the result wrapper, i.e. the LiveData<ApiResponse<RequestType>>, is created on the main thread. The network request is done on a different thread. This is not easy to see because the Retrofit library is used to do all the network-related heavy lifting, and it nicely hides such implementation details. Still, if you look at the LiveDataCallAdapter that wraps Retrofit into a LiveData, you can see that Call.enqueue is used, which is an asynchronous call (scheduled internally by Retrofit).
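An abridged sketch of what that adapter does (based on the GithubBrowserSample source, with details such as responseType() trimmed):
public class LiveDataCallAdapter<R> implements CallAdapter<R, LiveData<ApiResponse<R>>> {
    @Override
    public LiveData<ApiResponse<R>> adapt(Call<R> call) {
        return new LiveData<ApiResponse<R>>() {
            AtomicBoolean started = new AtomicBoolean(false);
            @Override
            protected void onActive() {
                super.onActive();
                if (started.compareAndSet(false, true)) {
                    // enqueue() hands the request to Retrofit's background machinery;
                    // nothing here runs the network call on the main thread
                    call.enqueue(new Callback<R>() {
                        @Override
                        public void onResponse(Call<R> call, Response<R> response) {
                            postValue(new ApiResponse<>(response));
                        }
                        @Override
                        public void onFailure(Call<R> call, Throwable throwable) {
                            postValue(new ApiResponse<>(throwable));
                        }
                    });
                }
            }
        };
    }
}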
Actually, if not for the "pagination" feature, the example would not need the networkIO Executor at all. "Pagination" is a complicated feature, and thus it is implemented using an explicit FetchNextSearchPageTask; this is a place where I think the Google example is not done very well: FetchNextSearchPageTask doesn't reuse the request parsing logic (i.e. processResponse) from RepoRepository but just assumes that it is trivial (which it is now, but who knows about the future...). Also, the merging job is not scheduled onto the diskIO Executor, which is inconsistent with the rest of the response processing.

Mongodb async java driver find()

I have a webapp in which I have to return the results from a mongodb find() to the front-end from my java back-end.
I am using the Async Java driver, and the only way I can think of to return the results from Mongo is something like this:
public String getDocuments() {
    ...
    collection.find(query).map(Document::toJson)
        .into(new HashSet<String>(), new SingleResultCallback<HashSet<String>>() {
            @Override
            public void onResult(HashSet<String> strings, Throwable throwable) {
                // here I have to get all the Json Documents in the set,
                // make a whole json string and wake the main thread
            }
        });
    // here I have to put the main thread to wait until I get the data in
    // the onResult() method so I can return the string back to the front-end
    ...
    return jsonString;
}
Is this assumption right, or is there another way to do it?
Asynchronous APIs (any API based on callbacks, not necessarily MongoDB) can be a true blessing for multithreaded applications. But to really benefit from them, you need to design your whole application architecture in an asynchronous fashion. This is not always feasible, especially when it is supposed to fit into a given framework which isn't built on callbacks.
So sometimes (like in your case) you just want to use an asynchronous API in a synchronous fashion. In that case, you can use the class CompletableFuture.
This class provides (among others) the two methods T get() and complete(T value). The method get() blocks until complete() is called to provide the return value (if complete() is called before get(), get() returns immediately with the provided value).
public String getDocuments() throws InterruptedException, ExecutionException {
    ...
    CompletableFuture<String> result = new CompletableFuture<>(); // <-- create an empty, uncompleted Future
    collection.find(query).map(Document::toJson)
        .into(new HashSet<String>(), new SingleResultCallback<HashSet<String>>() {
            @Override
            public void onResult(HashSet<String> strings, Throwable throwable) {
                // here I have to get all the Json Documents in the set and
                // make a whole json string
                result.complete(wholeJsonString); // <-- resolves the future
            }
        });
    return result.get(); // <-- blocks until result.complete is called
}
The get() method of CompletableFuture also has an overload with a timeout parameter. I recommend using it to prevent your program from accumulating hanging threads when the callback is not called for whatever reason. It is also a good idea to wrap the body of your callback in a try block and do the result.complete in a finally block, to make sure the result always gets resolved even when there is an unexpected error during your callback.
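A sketch of both suggestions combined (the 30-second timeout and the buildJsonString helper are illustrative assumptions):
@Override
public void onResult(HashSet<String> strings, Throwable throwable) {
    try {
        if (throwable == null) {
            result.complete(buildJsonString(strings)); // hypothetical helper
        }
    } finally {
        // if we didn't complete above (error, exception), resolve the future
        // exceptionally so the waiting thread never hangs
        if (!result.isDone()) {
            result.completeExceptionally(
                    throwable != null ? throwable : new IllegalStateException("no result"));
        }
    }
}

// ...and at the call site, bound the wait:
return result.get(30, TimeUnit.SECONDS); // throws TimeoutException if the callback never fires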
Yes, you're right.
That's the correct behaviour of the Mongo async driver (see MongoIterable.into).
However, why don't you use the sync driver in this situation? Is there any reason to use the async one?
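For comparison, a sketch of the same query with the synchronous driver (assuming collection is a com.mongodb.client.MongoCollection<Document>); no callback or hand-rolled waiting is needed:
Set<String> jsonDocs = new HashSet<>();
for (Document doc : collection.find(query)) {
    jsonDocs.add(doc.toJson()); // iteration blocks until each document arrives
}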

Best way to sequence a pair of external service calls in Akka

I need to geocode an Address object, and then store the updated Address in a search engine. This can be simplified to taking an object, performing one long-running operation on the object, and then persisting the object. This means there is an order of operations requirement that the first operation be complete before persistence occurs.
I would like to use Akka to move this off the main thread of execution.
My initial thought was to use a pair of Futures to accomplish this, but the Futures documentation is not entirely clear on which combinator (fold, map, etc.) guarantees that one Future will be executed before another.
I started out by creating two functions, defferedGeocode and deferredWriteToSearchEngine, which return Futures for the respective operations. I chain them together using Future<>.andThen(new OnComplete...), but this gets clunky very quickly:
Future<Address> geocodeFuture = defferedGeocode(ec, address);
geocodeFuture.andThen(new OnComplete<Address>() {
    public void onComplete(Throwable failure, Address geocodedAddress) {
        if (geocodedAddress != null) {
            Future<Address> searchEngineFuture = deferredWriteToSearchEngine(ec, addressSearchService, geocodedAddress);
            searchEngineFuture.andThen(new OnComplete<Address>() {
                public void onComplete(Throwable failure, Address savedAddress) {
                    // process search engine results
                }
            }, ec);
        }
    }
}, ec);
And then defferedGeocode is implemented like this:
private Future<Address> defferedGeocode(
        final ExecutionContext ec,
        final Address address) {
    return Futures.future(new Callable<Address>() {
        public Address call() throws Exception {
            log.debug("Geocoding Address...");
            return address;
        }
    }, ec);
}
deferredWriteToSearchEngine is pretty similar to deferredGeocode, except it takes the search engine service as an additional final parameter.
My understanding is that Futures are supposed to be used to perform calculations and should not have side effects. In this case, geocoding the address is a calculation, so I think using a Future is reasonable, but writing to the search engine is definitely a side effect.
What is the best practice here for Akka? How can I avoid all the nested calls, but ensure that both the geocoding and the search engine write are done off the main thread?
Is there a more appropriate tool?
Update:
Based on Viktor's comments below, I am trying this code out now:
ExecutionContext ec;

private Future<Address> addressBackgroundProcess(Address address) {
    Future<Address> geocodeFuture = addressGeocodeFutureFactory.defferedGeocode(address);
    return geocodeFuture.flatMap(new Mapper<Address, Future<Address>>() {
        @Override
        public Future<Address> apply(Address geoAddress) {
            return addressSearchEngineFutureFactory.deferredWriteToSearchEngine(geoAddress);
        }
    }, ec);
}
This seems to work ok except for one issue which I'm not thrilled with. We are working in a Spring IOC code base, and so I would like to inject the ExecutionContext into the FutureFactory objects, but it seems wrong for this function (in our DAO) to need to be aware of the ExecutionContext.
It seems odd to me that the flatMap() function needs an EC at all, since both futures provide one.
Is there a way to maintain the separation of concerns? Am I structuring the code badly, or is this just the way it needs to be?
I thought about creating an interface for the FutureFactories that would allow chaining of FutureFactories, so the flatMap() call would be encapsulated in a FutureFactory base class, but this seems like it would be deliberately subverting an intentional Akka design decision.
Warning: Pseudocode ahead.
Future<SomeResult> myFutureResult = deferredGeocode(ec, address).flatMap(
    new Mapper<Address, Future<Address>>() {
        public Future<Address> apply(Address geocodedAddress) {
            return deferredWriteToSearchEngine(ec, addressSearchService, geocodedAddress);
        }
    }, ec).map(
    new Mapper<Address, SomeResult>() {
        public SomeResult apply(Address savedAddress) {
            // Create SomeResult after deferredWriteToSearchEngine is done
        }
    }, ec);
See how it is not nested. flatMap and map are used for sequencing the operations. andThen is useful when you want a side-effecting-only operation to run to full completion before passing the result on. Of course, if you map twice on the SAME future instance then no ordering is guaranteed, but since we are flatMapping and mapping on the returned futures (new ones, according to the docs), there is a clear data flow in our program.
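For illustration, a small sketch of that side-effect-only use of andThen (the logging is an assumption); the original result is passed through unchanged to whatever is chained next:
deferredGeocode(ec, address).andThen(new OnComplete<Address>() {
    public void onComplete(Throwable failure, Address geocodedAddress) {
        // side effect only; runs to completion before the result is passed on
        if (failure == null) log.debug("Geocoded: " + geocodedAddress);
    }
}, ec);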

Using Stripes, what is the best pattern for Show/Update/etc Action Beans?

I have been wrestling with this problem for a while. I would like to use the same Stripes ActionBean for show and update actions. However, I have not been able to figure out how to do this in a clean way that allows reliable binding, validation, and verification of object ownership by the current user.
For example, let's say our action bean takes a postingId. The posting belongs to a user, who is logged in. We might have something like this:
@UrlBinding("/posting/{postingId}")
@RolesAllowed({ "USER" })
public class PostingActionBean extends BaseActionBean
Now, for the show action, we could define:
private int postingId; // assume the parameter in @UrlBinding above was renamed
private Posting posting;
And now use @After(stages = LifecycleStage.BindingAndValidation) to fetch the Posting. Our @After function can verify that the currently logged-in user owns the posting. We must use @After, not @Before, because the postingId won't have been bound to the parameter beforehand.
However, for an update function, you want to bind the Posting object to the Posting variable using @Before, not @After, so that the returned form entries get applied on top of the existing Posting object, instead of onto an empty stub.
A custom TypeConverter<T> would work well here, but because the session isn't available from the TypeConverter interface, it's difficult to validate ownership of the object during binding.
The only solution I can see is to use two separate action beans, one for show and one for update. If you do this, however, the <stripes:form> tag and its downstream tags won't correctly populate the values of the form, because the beanclass or action tags must map back to the same ActionBean.
As far as I can see, the Stripes model only holds together when manipulating simple (non-POJO) parameters. In any other case, you seem to run into a catch-22 of binding your object from your data store and overwriting it with updates sent from the client.
I've got to be missing something. What is the best practice from experienced Stripes users?
In my opinion, authorisation is orthogonal to object hydration. By this, I mean that you should separate the concern of object hydration (in this case, using a postingId and turning it into a Posting) from determining whether a user has authorisation to perform operations on that object (like show, update, delete, etc.).
For object hydration, I use a TypeConverter<T>, and I hydrate the object without regard to the session user. Then inside my ActionBean I have a guard around the setter, thus...
public void setPosting(Posting posting) {
    if (accessible(posting)) this.posting = posting;
}
where accessible(posting) looks something like this...
private boolean accessible(Posting posting) {
    return authorisationChecker.isAuthorised(whoAmI(), posting);
}
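For the hydration side, a minimal sketch of such a TypeConverter (the postingDao field is a hypothetical DAO, and registration with Stripes is omitted):
public class PostingTypeConverter implements TypeConverter<Posting> {

    private PostingDao postingDao; // hypothetical; inject or look up as you prefer

    public void setLocale(Locale locale) {
        // locale is not needed for an id-based lookup
    }

    public Posting convert(String input, Class<? extends Posting> targetType,
                           Collection<ValidationError> errors) {
        try {
            // hydrate without regard to the session user; the setter guard in
            // the ActionBean decides whether the user may access it
            return postingDao.findById(Long.valueOf(input));
        } catch (NumberFormatException e) {
            errors.add(new SimpleError("Invalid posting id"));
            return null;
        }
    }
}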
Then your show() event method would look like this...
public Resolution show() {
    if (posting == null) return NOT_FOUND;
    return new ForwardResolution("/WEB-INF/jsp/posting.jsp");
}
Separately, when I use Stripes I often have multiple events (like "show", or "update") within the same Stripes ActionBean. For me it makes sense to group operations (verbs) around a related noun.
Using clean URLs, your ActionBean annotations would look like this...
@UrlBinding("/posting/{$event}/{posting}")
@RolesAllowed({ "USER" })
public class PostingActionBean extends BaseActionBean
...where {$event} is the name of your event method (i.e. "show" or "update"). Note that I am using {posting}, and not {postingId}.
For completeness, here is what your update() event method might look like...
public Resolution update() {
    if (posting == null) throw new UnauthorisedAccessException();
    postingService.saveOrUpdate(posting);
    message("posting.save.confirmation");
    return new RedirectResolution(PostingsAction.class);
}
