Unit testing clients of Observables - java

I have the following method go() I'd like to test:
private Pair<String, String> mPair;
public void go() {
Observable.zip(
mApi.webCall(),
mApi.webCall2(),
new Func2<String, String, Pair<String, String>>() {
@Override
public Pair<String, String> call(String s, String s2) {
return new Pair(s, s2);
}
}
)
.subscribeOn(Schedulers.io())
.observeOn(AndroidSchedulers.mainThread())
.subscribe(new Action1<Pair<String, String>>() {
@Override
public void call(Pair<String, String> pair) {
mApi.webCall3(pair.first, pair.second);
}
});
}
This method uses Observable.zip() to execute two HTTP requests asynchronously and merge their results into one Pair. Finally, another HTTP request is executed with the results of the previous two.
I'd like to verify that calling the go() method makes the webCall() and webCall2() requests, followed by the webCall3(String, String) request. Therefore, I'd like the following test to pass (using Mockito to spy on the Api):
@Test
public void testGo() {
/* Given */
Api api = spy(new Api() {
@Override
public Observable<String> webCall() {
return Observable.just("First");
}
@Override
public Observable<String> webCall2() {
return Observable.just("second");
}
@Override
public void webCall3(String s, String s2) {
}
});
Test test = new Test(api);
/* When */
test.go();
/* Then */
verify(api).webCall();
verify(api).webCall2();
verify(api).webCall3("First", "second");
}
However, when running this, the web calls are executed asynchronously and my test reaches the assertions before the subscriber is done, causing the test to fail.
I have read that you can use RxJavaSchedulersHook and RxAndroidSchedulersHook to return Schedulers.immediate() for all methods, but this results in the test running indefinitely.
I am running my unit tests on a local JVM.
How can I achieve this, preferably without having to modify the signature of go()?

(Lambdas thanks to retrolambda)
For starters, I would rephrase go as:
private Pair<String, String> mPair;

public Observable<Pair<String, String>> go() {
    return Observable.zip(
            mApi.webCall(),
            mApi.webCall2(),
            (String s, String s2) -> new Pair<>(s, s2))
        .subscribeOn(Schedulers.io())
        .observeOn(AndroidSchedulers.mainThread())
        .doOnNext(pair -> mPair = pair);
}
public Pair<String, String> getPair() {
return mPair;
}
doOnNext lets you intercept the value flowing through the chain whenever someone subscribes to the Observable.
Then, in the test, I would call it like this:
Pair result = test.go().toBlocking().lastOrDefault(null);
Then you can assert on what result is.
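A sketch of how the full test could then look, reusing the spy Api from the question. This assumes AndroidSchedulers.mainThread() is replaced or hooked for the local JVM, otherwise observeOn will fail outside of Android:
@Test
public void testGo() {
    /* Given */
    Api api = spy(new Api() {
        @Override
        public Observable<String> webCall() {
            return Observable.just("First");
        }

        @Override
        public Observable<String> webCall2() {
            return Observable.just("second");
        }

        @Override
        public void webCall3(String s, String s2) {
        }
    });
    Test test = new Test(api);

    /* When */
    Pair<String, String> result = test.go().toBlocking().lastOrDefault(null);

    /* Then */
    assertEquals("First", result.first);
    assertEquals("second", result.second);
    verify(api).webCall();
    verify(api).webCall2();
}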

I would use TestScheduler and TestSubscriber in your tests. To do that you'll need access to the Observable that composes the zip, so you can subscribe to it with the TestSubscriber, and you'll need to parameterize the schedulers. You won't have to change the signature of go(), but you will have to parameterize the schedulers in the underlying functionality: inject them through the constructor, override a protected field by inheritance, or call a package-private overload. My example assumes an overload that accepts the schedulers as arguments and returns the Observable.
TestScheduler gives you a synchronous way to trigger asynchronous operator behavior in a predictable, reproducible way. TestSubscriber gives you a way to await termination and assert over the values and signals received. Also be aware that the delay(long, TimeUnit) operator schedules work on the computation scheduler by default; you'll need to pass the TestScheduler there as well.
Scheduler ioScheduler = Schedulers.io();
Scheduler mainThreadScheduler = AndroidSchedulers.mainThread();

public void go() {
    go(ioScheduler, mainThreadScheduler).toBlocking().single();
}

/*package*/ Observable<Pair<String, String>> go(Scheduler ioScheduler, Scheduler mainThreadScheduler) {
    return Observable.zip(
            mApi.webCall(),
            mApi.webCall2(),
            new Func2<String, String, Pair<String, String>>() {
                @Override
                public Pair<String, String> call(String s, String s2) {
                    return new Pair<>(s, s2);
                }
            })
        .doOnNext(new Action1<Pair<String, String>>() {
            @Override
            public void call(Pair<String, String> pair) {
                mApi.webCall3(pair.first, pair.second);
            }
        })
        .subscribeOn(ioScheduler)
        .observeOn(mainThreadScheduler);
}
Test code
@Test
public void testGo() {
    /* Given */
    TestScheduler testScheduler = new TestScheduler();
    TestSubscriber<Pair<String, String>> subscriber = new TestSubscriber<>();
    Api api = spy(new Api() {
        @Override
        public Observable<String> webCall() {
            return Observable.just("First").delay(1, TimeUnit.SECONDS, testScheduler);
        }

        @Override
        public Observable<String> webCall2() {
            return Observable.just("second");
        }

        @Override
        public void webCall3(String s, String s2) {
        }
    });
    Test test = new Test(api);

    /* When */
    test.go(testScheduler, testScheduler).subscribe(subscriber);
    testScheduler.advanceTimeBy(1, TimeUnit.SECONDS);
    subscriber.awaitTerminalEvent();

    /* Then */
    verify(api).webCall();
    verify(api).webCall2();
    verify(api).webCall3("First", "second");
}

I have found that I can obtain my Schedulers in a non-static way by injecting them into my client class. A SchedulerProvider replaces the static calls to Schedulers.x():
public interface SchedulerProvider {
Scheduler io();
Scheduler mainThread();
}
The production implementation delegates back to Schedulers:
public class SchedulerProviderImpl implements SchedulerProvider {
public static final SchedulerProvider INSTANCE = new SchedulerProviderImpl();
@Override
public Scheduler io() {
return Schedulers.io();
}
@Override
public Scheduler mainThread() {
return AndroidSchedulers.mainThread();
}
}
However, during tests I can create a TestSchedulerProvider:
public class TestSchedulerProvider implements SchedulerProvider {
private final TestScheduler mIOScheduler = new TestScheduler();
private final TestScheduler mMainThreadScheduler = new TestScheduler();
@Override
public TestScheduler io() {
return mIOScheduler;
}
@Override
public TestScheduler mainThread() {
return mMainThreadScheduler;
}
}
Now I can inject the SchedulerProvider in to the Test class containing the go() method:
class Test {
/* ... */
Test(Api api, SchedulerProvider schedulerProvider) {
mApi = api;
mSchedulerProvider = schedulerProvider;
}
void go() {
Observable.zip(
mApi.webCall(),
mApi.webCall2(),
new Func2<String, String, Pair<String, String>>() {
@Override
public Pair<String, String> call(String s, String s2) {
return new Pair(s, s2);
}
}
)
.subscribeOn(mSchedulerProvider.io())
.observeOn(mSchedulerProvider.mainThread())
.subscribe(new Action1<Pair<String, String>>() {
@Override
public void call(Pair<String, String> pair) {
mApi.webCall3(pair.first, pair.second);
}
});
}
}
Testing this works as follows:
@Test
public void testGo() {
/* Given */
TestSchedulerProvider testSchedulerProvider = new TestSchedulerProvider();
Api api = spy(new Api() {
@Override
public Observable<String> webCall() {
return Observable.just("First");
}
@Override
public Observable<String> webCall2() {
return Observable.just("second");
}
@Override
public void webCall3(String s, String s2) {
}
});
Test test = new Test(api, testSchedulerProvider);
/* When */
test.go();
testSchedulerProvider.io().triggerActions();
testSchedulerProvider.mainThread().triggerActions();
/* Then */
verify(api).webCall();
verify(api).webCall2();
verify(api).webCall3("First", "second");
}

I had a similar issue that took one more step to solve:
existingObservable
    .zipWith(Observable.interval(100, TimeUnit.MILLISECONDS), new Func2<> ...)
    .subscribeOn(schedulersProvider.computation())
was still not using the TestScheduler returned by the provided schedulersProvider. It was necessary to specify .subscribeOn() on the individual streams being zipped for it to work:
existingObservable.subscribeOn(schedulersProvider.computation())
    .zipWith(Observable.interval(100, TimeUnit.MILLISECONDS).subscribeOn(schedulersProvider.computation()), new Func2<> ...)
    .subscribeOn(schedulersProvider.computation())
Note that schedulersProvider is a mock returning the TestScheduler of my test.

Mocking completable future

I am trying to create a unit test for the following code. The code uses the AWS Java SDK v2 and calls selectObjectContent on S3AsyncClient, which returns a CompletableFuture (https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/S3AsyncClient.html). My test throws a NullPointerException when invoking future.get().
Here is the method I want to unit test.
public <T> Collection<T> queryWithS3Select(
List<String> s3Keys,
String s3SelectQuery,
InputSerialization inputSerialization,
Class<T> modelObject,
Comparator<T> comparator
) throws ExecutionException, InterruptedException, IOException {
TreeSet<T> collection = new TreeSet<>(comparator);
List<SelectObjectContentRequest> selectObjectContentRequest =
buildS3SelectRequests(s3Keys, s3SelectQuery, inputSerialization);
S3SelectContentHandler s3SelectContentHandler = new S3SelectContentHandler();
StringBuilder selectionResult = new StringBuilder();
for (SelectObjectContentRequest socr : selectObjectContentRequest) {
CompletableFuture<Void> future = s3AsyncClient.selectObjectContent(socr, s3SelectContentHandler);
future.get();
s3SelectContentHandler.getReceivedEvents().forEach(e -> {
if (e.sdkEventType() == SelectObjectContentEventStream.EventType.RECORDS) {
RecordsEvent response = (RecordsEvent) e;
selectionResult.append(response.payload().asUtf8String());
}
});
}
JsonParser parser = objectMapper.createParser(selectionResult.toString());
collection.addAll(Lists.newArrayList(objectMapper.readValues(parser, modelObject)));
return collection;
}
Here is my unit test so far. Running it, I get a NullPointerException at the future.get() line above. How can I make the mock s3AsyncClient return a valid future?
@Mock
private S3AsyncClient s3AsyncClient;
@Test
public void itShouldReturnQueryResults() throws IOException, ExecutionException, InterruptedException {
List<String> keysToQuery = List.of("key1", "key2");
InputSerialization inputSerialization = InputSerialization.builder()
.json(JSONInput.builder().type(JSONType.DOCUMENT).build())
.compressionType(String.valueOf(CompressionType.GZIP))
.build();
Comparator<S3SelectObject> comparator =
Comparator.comparing((S3SelectObject e) -> e.getStartTime());
underTest.queryWithS3Select(keysToQuery, S3_SELECT_QUERY, inputSerialization, S3SelectObject.class, comparator );
}
Here is the S3SelectContentHandler
public class S3SelectContentHandler implements SelectObjectContentResponseHandler {
private SelectObjectContentResponse response;
private List<SelectObjectContentEventStream> receivedEvents = new ArrayList<>();
private Throwable exception;
@Override
public void responseReceived(SelectObjectContentResponse response) {
this.response = response;
}
@Override
public void onEventStream(SdkPublisher<SelectObjectContentEventStream> publisher) {
publisher.subscribe(receivedEvents::add);
}
@Override
public void exceptionOccurred(Throwable throwable) {
exception = throwable;
}
@Override
public void complete() {}
public List<SelectObjectContentEventStream> getReceivedEvents() {
return receivedEvents;
}
}
I will share a unit test for similar functionality and show how to work with a CompletableFuture when your code calls .join() to continue execution.
Code under test
private final S3AsyncClient s3AsyncClient;
public long getSize(final S3AsyncClient client, final String bucket, final String key) {
return client.headObject(HeadObjectRequest.builder().bucket(bucket).key(key).build()).join().contentLength();
}
In this code, client.headObject() returns a CompletableFuture<HeadObjectResponse>, which we mock and test in the unit test shown below.
@Test
@DisplayName("Verify getSize returns the size of the given key in the bucket")
void verifyGetSizeReturnsSizeOfFileInS3() {
CompletableFuture<HeadObjectResponse> headObjectResponseCompletableFuture =
CompletableFuture.completedFuture(HeadObjectResponse.builder().contentLength(20000L).build());
when(s3AsyncClient.headObject(headObjectRequestArgumentCaptor.capture()))
.thenReturn(headObjectResponseCompletableFuture);
long size = s3Service.getSize(s3AsyncClient, "somebucket", "someFile");
assertThat(headObjectRequestArgumentCaptor.getValue()).hasFieldOrPropertyWithValue("bucket", "somebucket")
.hasFieldOrPropertyWithValue("key", "someFile");
assertThat(size).isEqualTo(20000L);
}
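The NullPointerException in the original test comes from the unstubbed mock: by default Mockito returns null for the CompletableFuture, so future.get() dereferences null. Applying the same completedFuture idea to the original code, the selectObjectContent call can be stubbed before calling queryWithS3Select. This is only a sketch, and it completes the future without pushing any events into the handler:
when(s3AsyncClient.selectObjectContent(
        any(SelectObjectContentRequest.class),
        any(SelectObjectContentResponseHandler.class)))
    .thenReturn(CompletableFuture.completedFuture(null));
To have getReceivedEvents() return actual RECORDS events you would additionally need a thenAnswer that feeds test events into the handler, which is not shown here.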

How do I structure factory classes to allow for a fluent interface?

I want to create a series of Actions that do related things
public interface Action{
public void execute();
}
public class DatabaseAction implements Action{
public void execute(){}
}
public class WebAction implements Action{
public void execute(){}
}
public class EmailAction implements Action{
public void execute(){}
}
Generally speaking, users don't care about the details. They want all the actions to run and not worry about it.
But there's going to be some special cases where they only want to run some of the actions, and configure some of the actions.
And I suppose there could be cases where configuration is non-optional.
I figure a fluent interface is the most readable here.
// Executes all Actions - intended to be used in almost all cases
// I write to a database, call a web API, and send an email.
Actions.withAllDefaults().execute();
// I don't need to send an email and I need to configure the database
Actions.withAction(DATABASE_ACTION)
.withConfiguration(DatabaseAction.PORT, 9000)
.withAction(WEB_ACTION)
.execute();
It feels like I should be implementing some sort of factory but it's hard for me to actually translate that into code.
Consider using the Fluent Builder Pattern instead of trying to make your factory fluent.
C# uses fluent programming extensively in LINQ to build queries using "standard query operators".
This is a C# implementation, but the sample code can be converted to Java easily since no C#-specific features are used.
So let's see an example. We start with an interface IFluent which allows you to build your actions with settings:
public interface IFluent
{
IFluent WithAction(Action action);
IFluent WithConfiguration(KeyValuePair<string, object> configuration);
}
and this is the Fluent class, which implements the IFluent interface:
public class Fluent : IFluent
{
private List<Action> actions;
public IFluent WithAction(Action action)
{
if (actions == null)
actions = new List<Action>();
actions.Add(action);
return this;
}
public IFluent WithConfiguration(KeyValuePair<string, object> configuration)
{
if (actions == null || actions.Count == 0)
throw new InvalidOperationException("There are no actions");
int currentActionIndex = actions.Count - 1;
actions[currentActionIndex].Set(configuration);
return this;
}
}
Then we create an abstract class for Action that should define behavior for derived classes:
public abstract class Action
{
public abstract Dictionary<string, object> Properties { get; set; }
public abstract void Execute();
public abstract void Set(KeyValuePair<string, object> configuration);
public abstract void Add(string name, object value);
}
And our derived classes would look like this:
public class DatabaseAction : Abstract.Action
{
public override Dictionary<string, object> Properties { get; set; }
= new Dictionary<string, object>()
{
{ "port", 0},
{ "connectionString", "foobarConnectionString"},
{ "timeout", 60}
};
public override void Execute()
{
Console.WriteLine("It is a database action");
}
public override void Set(KeyValuePair<string, object> configuration)
{
if (Properties.ContainsKey(configuration.Key))
Properties[configuration.Key] = configuration.Value;
}
public override void Add(string name, object value)
{
Properties.Add(name, value);
}
}
and EmailAction:
public class EmailAction : Abstract.Action
{
public override Dictionary<string, object> Properties { get; set; }
= new Dictionary<string, object>()
{
{ "from", "Head First - Object Oriented Design"},
{ "to", "who wants to learn object oriented design"},
{ "index", 123456}
};
public override void Execute()
{
Console.WriteLine("It is a email action");
}
public override void Set(KeyValuePair<string, object> configuration)
{
if (Properties.ContainsKey(configuration.Key))
Properties[configuration.Key] = configuration.Value;
}
public override void Add(string name, object value)
{
Properties.Add(name, value);
}
}
and WebAction:
public class WebAction : Abstract.Action
{
public override Dictionary<string, object> Properties { get; set; }
= new Dictionary<string, object>()
{
{ "foo", "1"},
{ "bar", "2"},
{ "hey", "hi"}
};
public override void Execute()
{
Console.WriteLine("It is a email action");
}
public override void Set(KeyValuePair<string, object> configuration)
{
if (Properties.ContainsKey(configuration.Key))
Properties[configuration.Key] = configuration.Value;
}
public override void Add(string name, object value)
{
Properties.Add(name, value);
}
}
Then it is possible to call the code like this:
Fluent actions = new Fluent();
actions.WithAction(new DatabaseAction())
.WithConfiguration(new KeyValuePair<string, object>("port", 1))
.WithAction(new EmailAction())
.WithConfiguration(new KeyValuePair<string, object>("to", "me"));
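In Java, the same idea could look roughly like the sketch below. It stays close to the usage the question asks for; the Action interface is the one from the question, while configure(String, Object) and the chaining method andAction are assumptions (Java does not allow a static and an instance method with the same signature in one class, so the entry point and the chaining method need different names):
public class Actions {

    private final List<Action> actions = new ArrayList<>();

    private Actions() {
    }

    // Entry point: all actions with their default configuration.
    public static Actions withAllDefaults() {
        return new Actions()
                .andAction(new DatabaseAction())
                .andAction(new WebAction())
                .andAction(new EmailAction());
    }

    // Entry point: start with a single action.
    public static Actions withAction(Action action) {
        return new Actions().andAction(action);
    }

    // Chaining: add a further action.
    public Actions andAction(Action action) {
        actions.add(action);
        return this;
    }

    // Configures the most recently added action, mirroring the C# version.
    // Assumes Action exposes a configure(String key, Object value) method.
    public Actions withConfiguration(String key, Object value) {
        if (actions.isEmpty()) {
            throw new IllegalStateException("withConfiguration must follow an action");
        }
        actions.get(actions.size() - 1).configure(key, value);
        return this;
    }

    public void execute() {
        for (Action action : actions) {
            action.execute();
        }
    }
}
Usage then stays close to the question's desired API:
Actions.withAllDefaults().execute();

Actions.withAction(new DatabaseAction())
        .withConfiguration("port", 9000)
        .andAction(new WebAction())
        .execute();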

Resilience4j context propagator not able to propagate thread local values

I am trying to migrate my circuit breaker code from Hystrix to Resilience4j. The communication is between two applications: one is an artifact that contains all the Resilience4j configuration in the Java code itself, and the second, a microservice, uses it directly.
There's a RequestId that is generated in the microservice and propagated to the artifact's context, where it gets printed in the logs. With Hystrix this worked perfectly fine, but ever since I moved to Resilience4j I am getting null for the RequestId.
Below is my config for the bulkhead and context propagator:
ThreadPoolBulkheadConfig bulkheadConfig = ThreadPoolBulkheadConfig.custom()
.maxThreadPoolSize(maxThreadPoolSize)
.coreThreadPoolSize(coreThreadPoolSize)
.queueCapacity(queueCapacity)
.contextPropagator(new DummyContextPropagator())
.build();
// Bulk Head Registry
ThreadPoolBulkheadRegistry bulkheadRegistry = ThreadPoolBulkheadRegistry.of(bulkheadConfig);
// Create Bulk Head
ThreadPoolBulkhead bulkhead = bulkheadRegistry.bulkhead(name, bulkheadConfig);
Dummy Context Propagator :
public class DummyContextPropagator implements ContextPropagator {
private static final Logger log = LoggerFactory.getLogger( DummyContextPropagator.class);
@Override
public Supplier<Optional<Object>> retrieve() {
    return () -> DummyContextHolder.get();
}
}
@Override
public Consumer<Optional<Object>> copy() {
    return (t) -> t.ifPresent(e -> {
        DummyContextHolder.clear();
        DummyContextHolder.put(e);
    });
}
}
@Override
public Consumer<Optional<Object>> clear() {
return (t) -> DummyContextHolder.clear();
}
public static class DummyContextHolder {
private static final ThreadLocal threadLocal = new ThreadLocal();
private DummyContextHolder() {
}
public static void put(Object context) {
if (threadLocal.get() != null) {
clear();
}
threadLocal.set(context);
}
public static void clear() {
if (threadLocal.get() != null) {
threadLocal.set(null);
threadLocal.remove();
}
}
public static Optional<Object> get() {
return Optional.ofNullable(threadLocal.get());
}
}
}
However, nothing seems to work and I still cannot get the RequestId.
Am I doing everything right, or is there another way to do this?
I think you want to read values from the parent thread's ThreadLocal while you are in a sub-thread; in Hystrix this works because its command model decorates the Callable task.
In Resilience4j I think you can fix it like this:
@Resource
DispatcherServlet dispatcherServlet;

@PostConstruct
public void changeThreadLocalModel() {
    dispatcherServlet.setThreadContextInheritable(true);
}
I found that my last answer may lead to some problems: when you use dispatcherServlet.setThreadContextInheritable(true), it may pollute your custom thread pool's ThreadLocal map.
So here is my final solution, and it only applies to Resilience4j:
@Resource
Resilience4jBulkheadProvider resilience4jBulkheadProvider;

@PostConstruct
public void concurrentThreadContextStrategy() {
    ThreadPoolBulkheadConfig threadPoolBulkheadConfig = ThreadPoolBulkheadConfig.custom()
            .contextPropagator(new CustomInheritContextPropagator()).build();
    resilience4jBulkheadProvider.configureDefault(id -> new Resilience4jBulkheadConfigurationBuilder()
            .bulkheadConfig(BulkheadConfig.ofDefaults()).threadPoolBulkheadConfig(threadPoolBulkheadConfig)
            .build());
}
private static class CustomInheritContextPropagator implements ContextPropagator<RequestAttributes> {

    @Override
    public Supplier<Optional<RequestAttributes>> retrieve() {
        // Hands out the RequestAttributes reference from the ThreadLocal.
        // Called by the web container thread (Tomcat, Jetty, or Undertow, depending on what you use).
        return () -> Optional.ofNullable(RequestContextHolder.getRequestAttributes());
    }

    @Override
    public Consumer<Optional<RequestAttributes>> copy() {
        // Loads the request context into the thread that makes the real call.
        // Called by the Resilience4j bulkhead thread.
        return requestAttributes -> requestAttributes.ifPresent(context -> {
            RequestContextHolder.resetRequestAttributes();
            RequestContextHolder.setRequestAttributes(context);
        });
    }

    @Override
    public Consumer<Optional<RequestAttributes>> clear() {
        // Cleans up the request context at the end.
        // Called by the Resilience4j bulkhead thread.
        return requestAttributes -> RequestContextHolder.resetRequestAttributes();
    }
}
I got the same problem with Spring Boot 2.5 and Spring Cloud 2020.0.6,
and I solved it with an implementation of ContextPropagator:
public class SleuthPropagator implements ContextPropagator<TraceContext> {
ThreadLocal<ScopedSpan> scopedSpanThreadLocal = new ThreadLocal<>();
@Override
public Supplier<Optional<TraceContext>> retrieve() {
return this::getCurrentcontext;
}
@Override
public Consumer<Optional<TraceContext>> copy() {
return c -> {
if (!c.isPresent()) {
return;
}
TraceContext traceContext = c.get();
ScopedSpan resilience4jSpan = getTracer()
.map(t -> t.startScopedSpanWithParent("Resilience4j", traceContext))
.orElse(null);
scopedSpanThreadLocal.set(resilience4jSpan);
};
}
@Override
public Consumer<Optional<TraceContext>> clear() {
return t -> {
try {
ScopedSpan resilience4jSpan = scopedSpanThreadLocal.get();
if (resilience4jSpan != null) {
resilience4jSpan.finish();
}
} finally {
scopedSpanThreadLocal.remove();
}
};
}
private static Optional<Tracer> getTracer() {
return Optional.ofNullable(Tracing.current())
.map(Tracing::tracer);
}
private Optional<TraceContext> getCurrentcontext() {
return getTracer()
.map(Tracer::currentSpan)
.map(Span::context);
}
}
And use the propagator by adding this to your application.properties:
resilience4j.thread-pool-bulkhead.instances.YOUR_BULKHEAD_CONFIG.context-propagators=com.your.package.SleuthPropagator
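For the original RequestId use case, the same pattern can be typed to the id itself. This is only a sketch; RequestIdHolder is a hypothetical ThreadLocal-backed holder (analogous to the DummyContextHolder in the question), not part of Resilience4j:
public class RequestIdPropagator implements ContextPropagator<String> {

    @Override
    public Supplier<Optional<String>> retrieve() {
        // Runs on the calling thread: capture the current request id.
        return () -> Optional.ofNullable(RequestIdHolder.get());
    }

    @Override
    public Consumer<Optional<String>> copy() {
        // Runs on the bulkhead worker thread: install the captured id.
        return id -> id.ifPresent(RequestIdHolder::set);
    }

    @Override
    public Consumer<Optional<String>> clear() {
        // Runs on the bulkhead worker thread after execution: clean up.
        return id -> RequestIdHolder.clear();
    }

    // Hypothetical holder, analogous to the DummyContextHolder in the question.
    public static class RequestIdHolder {
        private static final ThreadLocal<String> ID = new ThreadLocal<>();

        public static void set(String id) { ID.set(id); }
        public static String get() { return ID.get(); }
        public static void clear() { ID.remove(); }
    }
}
It can be registered either programmatically via ThreadPoolBulkheadConfig.custom().contextPropagator(...) as in the question, or through the same context-propagators property shown above.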

How do I test Function's code when it's passed as method parameter?

Is it possible to test code that is written in a lambda function passed into the process method?
@AllArgsConstructor
public class JsonController {
private final JsonElementProcessingService jsonElementProcessingService;
private final JsonObjectProcessingService jsonObjectProcessingService;
private final JsonArrayProcessingService jsonArrayProcessingService;
public void process(String rawJson) {
jsonElementProcessingService.process(json -> {
JsonElement element = new JsonParser().parse(json);
if (element.isJsonArray()) {
return jsonArrayProcessingService.process(element.getAsJsonArray());
} else {
return jsonObjectProcessingService.process(element.getAsJsonObject());
}
}, rawJson);
}
}
Since the lambda is lazy, the function is not invoked (Function::apply) when I call JsonController::process, so is there any way to check that jsonArrayProcessingService::process is called?
@RunWith(JMockit.class)
public class JsonControllerTest {

    @Injectable
    private JsonElementProcessingService jsonElementProcessingService;

    @Injectable
    private JsonObjectProcessingService jsonObjectProcessingService;

    @Injectable
    private JsonArrayProcessingService jsonArrayProcessingService;

    @Tested
    private JsonController jsonController;

    @Test
    public void test() {
        jsonController.process("[{\"key\":1}]");
        // how to check here that jsonArrayProcessingService was invoked?
    }
}
Just make it testable (and readable) by converting it to a method:
public void process(String rawJson) {
jsonElementProcessingService.process(this::parse, rawJson);
}
Object parse(String json) {
JsonElement element = new JsonParser().parse(json);
if (element.isJsonArray()) {
return jsonArrayProcessingService.process(element.getAsJsonArray());
} else {
return jsonObjectProcessingService.process(element.getAsJsonObject());
}
}
The relevant guiding principles I personally follow are:
anytime my lambdas require curly brackets, convert them to a method
organise code so that it can be unit tested
You may need to change the return type of the parse method to match whatever your processing services (which you didn’t show) return.
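With that refactoring, the lambda's logic can be exercised directly. A sketch of such a test, reusing the mocks from the question (only the interaction is verified, since the services' return values are not shown):
@Test
public void parseDelegatesArraysToTheArrayService() {
    jsonController.parse("[{\"key\":1}]");

    new Verifications() {{
        jsonArrayProcessingService.process(withInstanceOf(JsonArray.class));
    }};
}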
Given its relatively basic redirection logic, don't you just want to confirm which of the @Injectables got called:
@Test
public void test() {
jsonController.process("[{\"key\":1}]");
new Verifications() {{
jsonArrayProcessingService.process(withInstanceOf(JsonArray.class));
}};
}

JavaRx on ErrorReturn return the different type

An Observable is tied to a type. On error I don't want to return the same type but a different object, for example a Response object with status=400. How can I achieve this?
public class Test {

    @Autowired
    private Server server;

    public Response getResponse(String id) {
        Observable<Person> personObservable = server.get(id);
        ExecutorService executorService = Executors.newFixedThreadPool(100);
        List<Person> persons = new ArrayList<Person>();
        personObservable.onErrorReturn(new Func1<Throwable, Person>() {
            @Override
            public Person call(Throwable throwable) {
                // I would like to return an HttpResponseObject taking the message
                // from the throwable's error information - how do I do that?
                // How would I use a transform in this case?
                return null;
            }
        }).subscribeOn(Schedulers.from(executorService)).subscribe(new Action1<Person>() {
            // If I use subscribe() will it not be async?
            // I think subscribe still runs on the calling thread, so is this
            // use of subscribeOn fine?
            @Override
            public void call(Person person) {
                // Is it fine to use the list outside the observable?
                persons.add(person);
            }
        });
        Response r = new Response();
        r.addPersons(persons);
        return r;
    }
}
Use onErrorResumeNext:
Observable<Person> personObservable = ...;
return personObservable
    .toList()
    .map(persons -> new Response(persons))
    .onErrorResumeNext(error -> Observable.just(new Response(error.getMessage())))
    .toBlocking().single();
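Putting that together in getResponse, here is a sketch assuming Response has one constructor taking a List<Person> and one taking an error message (neither is shown in the question):
public Response getResponse(String id) {
    return server.get(id)
            .toList()                               // collects every emitted Person into a single List<Person>
            .map(persons -> new Response(persons))  // assumed Response(List<Person>) constructor
            .onErrorResumeNext(error ->
                    Observable.just(new Response(error.getMessage())))  // assumed Response(String) constructor
            .toBlocking()
            .single();                              // toList/onErrorResumeNext guarantee exactly one emission
}
Because the caller blocks for the result anyway, toBlocking() removes the need for the manual ExecutorService and the shared persons list from the original code.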
