I'm a little new to RxJava. I am trying to emit another item when onError() is called, without losing the error (I still want onError() to be called on the observer). But whenever I implement one of the error-handling operators described in the docs, the error gets swallowed and onError() is never called. Any solutions?
Edit:
This is what I tried yesterday:
@Override
public Observable<ArrayList<Address>> getAirports() {
    return new Observable<ArrayList<AirportPOJO>>() {
        @Override
        protected void subscribeActual(Observer<? super ArrayList<AirportPOJO>> observer) {
            try {
                // get the airports list from the API and map it
                ArrayList<AirportPOJO> airportsList = apiDb.getAirportsList(POJOHelper.toPOJO(AppCredentialManager.getCredentials()));
                observer.onNext(airportsList);
            } catch (Exception e) {
                e.printStackTrace();
                observer.onError(handleException(e));
            }
        }
    }.map(AirportsMappers.getAirportsPojoToDomainAirportsMapper()).doOnNext(new Consumer<ArrayList<Address>>() {
        @Override
        public void accept(ArrayList<Address> airportsList) throws Exception {
            // if airports were loaded from the API - save them to the local db
            if (airportsList != null) {
                try {
                    localDb.saveAirportList(AirportsMappers.getAirportsToLocalDbAirportsMapper().apply(airportsList));
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }
    }).onErrorResumeNext(new Function<Throwable, ObservableSource<? extends ArrayList<Address>>>() {
        @Override
        public ObservableSource<? extends ArrayList<Address>> apply(final Throwable throwable) throws Exception {
            // load the local airports -
            ArrayList<LocalDbAirportEntity> localAirportsEntities = localDb.getAirports();
            // map
            ArrayList<Address> airports = AirportsMappers.getLocalDbAirportsToAirportsMapper().apply(localAirportsEntities);
            // return the local data concatenated with the original error
            return Observable.just(airports).concatWith(Observable.<ArrayList<Address>>error(new Callable<Throwable>() {
                @Override
                public Throwable call() throws Exception {
                    return throwable;
                }
            }));
        }
    });
}
Today I thought I might be doing it wrong, so I tried:
@Override
public Observable<ArrayList<Address>> getAirports() {
    ArrayList<Observable<ArrayList<Address>>> observables = new ArrayList<>();
    observables.add(apiDb.getAirportsList(POJOHelper.toPOJO(AppCredentialManager.getCredentials()))
            .map(AirportsMappers.getAirportsPojoToDomainAirportsMapper()));
    observables.add(localDb.getAirports().map(AirportsMappers.getLocalDbAirportsToAirportsMapper()));
    Observable<ArrayList<Address>> concatenatedObservable = Observable.concatDelayError(observables);
    return concatenatedObservable;
}
But I got the same result: onNext() is called with the data of the second observable, and onError() is not called afterwards.
Resume with the desired value concatenated with the original error:
source.onErrorResumeNext(error ->
    Observable.just(item).concatWith(Observable.<ItemType>error(error))
);
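For instance, here is a minimal, self-contained sketch of that pattern (RxJava 2; the failing source and the fallback value are made up for illustration, not taken from the question):

import io.reactivex.Observable;

Observable<String> source = Observable.error(new IllegalStateException("network down"));

source.onErrorResumeNext((Throwable error) ->
        // emit the fallback item first, then re-raise the original error
        Observable.just("cached value").concatWith(Observable.<String>error(error)))
    .subscribe(
        item -> System.out.println("onNext: " + item),
        err -> System.out.println("onError: " + err.getMessage()));

// prints:
// onNext: cached value
// onError: network down

The subscriber still receives the fallback item through onNext() and then the original error through onError(), which is exactly the behavior asked for.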
My app is built on Spring + SockJS. The main page shows a table of available connections so the user can monitor them in real time. Each URL monitor can be suspended/resumed independently of the others. The problem is that once you suspend a monitor, you can never resume it, because the ApplicationEvents property of the MonitoringFacade bean suddenly becomes null for that SINGLE entity. For the other entities the listener keeps working fine. Oddly, when methods are invoked on such a null listener, no NullPointerException is ever thrown.
class IndexController implements ApplicationEvents
    ...
    public IndexController(SimpMessagingTemplate simpMessagingTemplate, MonitoringFacade monitoringFacade) {
        this.simpMessagingTemplate = simpMessagingTemplate;
        this.monitoringFacade = monitoringFacade;
    }

    @PostConstruct
    public void initialize() {
        if (logger.isDebugEnabled()) {
            logger.debug(">>Index controller initialization.");
        }
        monitoringFacade.addDispatcher(this);
    }
    ...
    @Override
    public void monitorUpdated(String monitorId) {
        if (logger.isDebugEnabled()) {
            logger.debug(">>Sending monitoring data to client with monitor id " + monitorId);
        }
        try {
            ConfigurationDTO config = monitoringFacade.findConfig(monitorId);
            Report report = monitoringFacade.findReport(monitorId);
            ReportReadModel readModel = ReportReadModel.mapFrom(config, report);
            simpMessagingTemplate.convertAndSend("/client/update", readModel);
        } catch (Exception e) {
            logger.log(Level.ERROR, "Exception: ", e);
        }
    }
public class MonitoringFacadeImpl implements MonitoringFacade
    ...
    private ApplicationEvents dispatcher;

    public void addDispatcher(ApplicationEvents dispatcher) {
        logger.info("Setting up dispatcher");
        this.dispatcher = dispatcher;
    }
    ...
    @Override
    public void refreshed(RefreshEvent event) {
        final String monitorId = event.getId().getIdentity();
        if (logger.isDebugEnabled()) {
            logger.debug(String.format(">>Refreshing monitoring data with monitor id '%s'", monitorId));
        }
        Configuration refreshedConfig = configurationService.find(monitorId);
        reportingService.compileReport(refreshedConfig, event.getData());
        if (logger.isDebugEnabled()) {
            logger.debug(String.format(">>Notifying monitoring data updated with monitor id '%s'", monitorId) + dispatcher);
        }
        dispatcher.monitorUpdated(monitorId); // here dispatcher has null value... or it's actually not
    }
The refreshed(RefreshEvent event) method successfully receives updates from the Quartz scheduler through the interface and sends them back to the controller.
The question is: how can a singleton-scoped bean have different property values for the different objects it is applied to, and why does such a property become null even though I never set it to null?
UPD:
@MessageMapping("/monitor/{monitorId}/suspend")
public void handleSuspend(@DestinationVariable String monitorId) {
    if (logger.isDebugEnabled()) {
        logger.debug(">>>Handling suspend request for monitor with id " + monitorId);
    }
    try {
        monitoringFacade.disableUrlMonitoring(monitorId);
        monitorUpdated(monitorId); // force client update
    } catch (Exception e) {
        logger.log(Level.ERROR, "Exception: ", e);
    }
}

@MessageMapping("/monitor/{monitorId}/resume")
public void handleResume(@DestinationVariable String monitorId) {
    if (logger.isDebugEnabled()) {
        logger.debug(">>>Handling resume request for monitor with id " + monitorId);
    }
    try {
        monitoringFacade.enableUrlMonitoring(monitorId);
        monitorUpdated(monitorId); // force client update
    } catch (Exception e) {
        logger.log(Level.ERROR, "Exception: ", e);
    }
}
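One way to check whether more than one facade (or controller) instance is involved, purely as a diagnostic sketch (these log lines are not part of the original code), is to log identity hash codes on both sides of the registration:

// in IndexController.initialize():
logger.debug(">>Controller " + System.identityHashCode(this)
        + " registering on facade " + System.identityHashCode(monitoringFacade));

// in MonitoringFacadeImpl.refreshed(...):
logger.debug(">>Facade " + System.identityHashCode(this)
        + " dispatching via " + System.identityHashCode(dispatcher));

If the facade's hash differs between the two lines, two MonitoringFacade instances exist (for example, one per application context, or one created outside Spring), which would explain a dispatcher that is set in one instance and still null in the other.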
I'm converting a Kafka consumer to an AWS Kinesis consumer, using the KCL (v2). In Kafka, offsets are used to help a consumer keep track of its most recently-consumed message. If my Kafka app dies, it will use the offset to consume from where it left off when it restarts.
However, this isn't the same in Kinesis. I can set kinesisClientLibConfiguration.withInitialPositionInStream(...), but the only arguments for that are TRIM_HORIZON, LATEST or AT_TIMESTAMP. If my Kinesis app died, it would not know where to resume consuming from when it restarts.
My KCL consumer is very simple. The main() method looks like:
KinesisClientLibConfiguration config = new KinesisClientLibConfiguration("benTestApp",
        "testStream", new DefaultAWSCredentialsProviderChain(), UUID.randomUUID().toString());
config.withInitialPositionInStream(InitialPositionInStream.TRIM_HORIZON);

Worker worker = new Worker.Builder()
        .recordProcessorFactory(new KCLRecordProcessorFactory())
        .config(config)
        .build();
worker.run(); // start the worker; blocks the main thread while consuming
and the RecordProcessor is a simple implementation:
@Override
public void initialize(InitializationInput initializationInput) {
    LOGGER.info("Initializing record processor for shard: {}", initializationInput.getShardId());
}

@Override
public void processRecords(ProcessRecordsInput processRecordsInput) {
    List<Record> records = processRecordsInput.getRecords();
    LOGGER.info("Retrieved {} records", records.size());
    records.forEach(r -> LOGGER.info("Record: {}", StandardCharsets.UTF_8.decode(r.getData())));
}

@Override
public void shutdown(ShutdownInput shutdownInput) {
    LOGGER.info("Shutting down input");
}
If I check the corresponding DynamoDB table, the checkpoint value is set to TRIM_HORIZON and does not get updated with sequence IDs as records are consumed.
What's the solution here to ensure I consume every message?
As identified by @kdgregory, the KCL requires users to set their own checkpoints. Working code:
@Override
public void initialize(InitializationInput initializationInput) {
    LOGGER.info("Initializing record processor for shard: {}", initializationInput.getShardId());
}

@Override
public void processRecords(ProcessRecordsInput processRecordsInput) {
    List<Record> records = processRecordsInput.getRecords();
    LOGGER.info("Retrieved {} records", records.size());
    records.forEach(r -> LOGGER.info("Record with sequenceId {} at date {} : {}", r.getSequenceNumber(),
            r.getApproximateArrivalTimestamp(), StandardCharsets.UTF_8.decode(r.getData())));
    try {
        processRecordsInput.getCheckpointer().checkpoint();
    } catch (InvalidStateException | ShutdownException e) {
        LOGGER.error("Unable to checkpoint");
    }
}

@Override
public void shutdown(ShutdownInput shutdownInput) {
    LOGGER.info("Shutting down input");
    try {
        shutdownInput.getCheckpointer().checkpoint();
    } catch (InvalidStateException | ShutdownException e) {
        LOGGER.error("Unable to checkpoint");
    }
}
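One refinement worth considering: the code above checkpoints after every batch, which issues a DynamoDB write per processRecords call. A common variant is to checkpoint on an interval instead; here is a sketch against the same v1 interfaces (the 60-second interval is an arbitrary assumption):

private static final long CHECKPOINT_INTERVAL_MILLIS = 60_000L;
private long nextCheckpointTimeMillis = 0L;

@Override
public void processRecords(ProcessRecordsInput processRecordsInput) {
    processRecordsInput.getRecords().forEach(r ->
            LOGGER.info("Record: {}", StandardCharsets.UTF_8.decode(r.getData())));
    // checkpoint at most once per interval to limit DynamoDB writes
    if (System.currentTimeMillis() > nextCheckpointTimeMillis) {
        try {
            processRecordsInput.getCheckpointer().checkpoint();
            nextCheckpointTimeMillis = System.currentTimeMillis() + CHECKPOINT_INTERVAL_MILLIS;
        } catch (InvalidStateException | ShutdownException e) {
            LOGGER.error("Unable to checkpoint", e);
        }
    }
}

The trade-off is that after a crash, up to one interval's worth of records is replayed on restart, so record processing needs to be idempotent.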
Here is my DataClientFactory class.
public class DataClientFactory {

    public static IClient getInstance() {
        return ClientHolder.INSTANCE;
    }

    // initialization-on-demand holder idiom: the JVM creates INSTANCE lazily and
    // thread-safely the first time getInstance() touches ClientHolder
    private static class ClientHolder {
        private static final DataClient INSTANCE = new DataClient();
        static {
            // kick off the background scheduler as soon as the client is created
            new DataScheduler().startScheduleTask();
        }
    }
}
Here is my DataClient class.
public class DataClient implements IClient {

    private ExecutorService service = Executors.newFixedThreadPool(15);
    private RestTemplate restTemplate = new RestTemplate();

    // for initialization purposes
    public DataClient() {
        try {
            new DataScheduler().callDataService();
        } catch (Exception ex) { // swallow the exception
            // log exception
        }
    }

    @Override
    public DataResponse getDataSync(DataKey dataKeys) {
        DataResponse response = null;
        try {
            Future<DataResponse> handle = getDataAsync(dataKeys);
            response = handle.get(dataKeys.getTimeout(), TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            // log error
            response = new DataResponse(null, DataErrorEnum.CLIENT_TIMEOUT, DataStatusEnum.ERROR);
        } catch (Exception e) {
            // log error
            response = new DataResponse(null, DataErrorEnum.ERROR_CLIENT, DataStatusEnum.ERROR);
        }
        return response;
    }

    @Override
    public Future<DataResponse> getDataAsync(DataKey dataKeys) {
        Future<DataResponse> future = null;
        try {
            DataTask dataTask = new DataTask(dataKeys, restTemplate);
            future = service.submit(dataTask);
        } catch (Exception ex) {
            // log error
        }
        return future;
    }
}
I get my client instance from the above factory as shown below and then call the getDataSync method, passing a DataKey object. The DataKey object holds the userId and timeout values. After this, the call reaches my DataTask class's call() method as soon as handle.get is invoked.
IClient dataClient = DataClientFactory.getInstance();

long userid = 1234L;
long timeout_ms = 500;

DataKey keys = new DataKey.Builder().setUserId(userid).setTimeout(timeout_ms)
        .remoteFlag(false).secondaryFlag(true).build();

// call the getDataSync method
DataResponse dataResponse = dataClient.getDataSync(keys);
System.out.println(dataResponse);
Here is my DataTask class, which has all the logic:
public class DataTask implements Callable<DataResponse> {

    private DataKey dataKeys;
    private RestTemplate restTemplate;

    public DataTask(DataKey dataKeys, RestTemplate restTemplate) {
        this.restTemplate = restTemplate;
        this.dataKeys = dataKeys;
    }

    @Override
    public DataResponse call() {
        DataResponse dataResponse = null;
        ResponseEntity<String> response = null;
        int serialId = getSerialIdFromUserId();
        boolean remoteFlag = dataKeys.isRemoteFlag();
        boolean secondaryFlag = dataKeys.isSecondaryFlag();
        List<String> hostnames = new LinkedList<String>();
        Mappings mappings = ClientData.getMappings(dataKeys.whichFlow());

        String localPrimaryAdress = null;
        String remotePrimaryAdress = null;
        String localSecondaryAdress = null;
        String remoteSecondaryAdress = null;
        // use the mappings object to look up the above addresses by serialId, then
        // populate the hostnames linked list based on remoteFlag and secondaryFlag
        if (remoteFlag && secondaryFlag) {
            hostnames.add(localPrimaryAdress);
            hostnames.add(localSecondaryAdress);
            hostnames.add(remotePrimaryAdress);
            hostnames.add(remoteSecondaryAdress);
        } else if (remoteFlag && !secondaryFlag) {
            hostnames.add(localPrimaryAdress);
            hostnames.add(remotePrimaryAdress);
        } else if (!remoteFlag && !secondaryFlag) {
            hostnames.add(localPrimaryAdress);
        } else if (!remoteFlag && secondaryFlag) {
            hostnames.add(localPrimaryAdress);
            hostnames.add(localSecondaryAdress);
        }

        for (String hostname : hostnames) {
            // if the host name is null or in the local blocked-host list, skip it
            if (hostname == null || ClientData.isHostBlocked(hostname)) {
                continue;
            }
            try {
                String url = generateURL(hostname);
                response = restTemplate.exchange(url, HttpMethod.GET, dataKeys.getEntity(), String.class);
                // make DataResponse
                break;
            } catch (HttpClientErrorException ex) {
                // make DataResponse
                return dataResponse;
            } catch (HttpServerErrorException ex) {
                // make DataResponse
                return dataResponse;
            } catch (RestClientException ex) {
                // If we get here, one of the servers is down.
                // Add this server to the blocked-host list.
                ClientData.blockHost(hostname);
                // log an error
            } catch (Exception ex) {
                // If we get here, something unexpected happened.
                // log an error
                // make DataResponse
            }
        }
        return dataResponse;
    }

    private String generateURL(final String hostIPAdress) {
        // make a url
    }

    private int getSerialIdFromUserId() {
        // get the id
    }
}
Now, based on the userId, I get the serialId and then the list of hostnames I am supposed to call, depending on which flags are passed. Then I iterate over the hostnames list and call the servers. Say I have four hostnames (A, B, C, D) in the linked list: I call A first, and if I get the data back, I return the DataResponse. But if A is down, I need to add A to the block list instantly so that no other thread calls hostname A; then I call hostname B, get the data back, and return the response (or repeat the same thing if B is also down).
I also have a background thread that runs every 10 minutes. It starts as soon as we get the client instance from the factory, and it parses my other service's URL to get the list of blocked hostnames we are not supposed to call. Since it runs only every 10 minutes, any server that goes down will show up in the list after at most 10 minutes. In general, if A is down, my service will report A in the blocked-hostname list, and once A comes back up, the list will be updated again within 10 minutes.
Here is my background thread code, DataScheduler:
public class DataScheduler {

    private RestTemplate restTemplate = new RestTemplate();
    private static final Gson gson = new Gson();
    private final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);

    public void startScheduleTask() {
        scheduler.scheduleAtFixedRate(new Runnable() {
            public void run() {
                try {
                    callDataService();
                } catch (Exception ex) {
                    // log an error
                }
            }
        }, 0, 10L, TimeUnit.MINUTES);
    }

    public void callDataService() throws Exception {
        String url = null;
        // execute the url and get the responseMap from it as a string
        parseResponse(responseMap);
    }

    private void parseResponse(Map<FlowsEnum, String> responses) throws Exception {
        // .. some code here to calculate partitionMappings

        // build the block list of hostnames
        List<String> blockList = new ArrayList<String>();
        Map<String, List<String>> coloExceptionList = gson.fromJson(response.split("blocklist=")[1], Map.class);
        for (Map.Entry<String, List<String>> entry : coloExceptionList.entrySet()) {
            for (String host : entry.getValue()) {
                blockList.add(host);
            }
        }
        if (update) {
            ClientData.setAllMappings(partitionMappings);
        }
        // update the block list of hostnames
        if (!DataUtils.isEmpty(responses)) {
            ClientData.replaceBlockedHosts(blockList);
        }
    }
}
And here is my ClientData class, which holds the blocked-hostname list and the partitionMappings details (used to get the list of valid hostnames).
public class ClientData {

    private static final AtomicReference<ConcurrentHashMap<String, String>> blockedHosts =
            new AtomicReference<ConcurrentHashMap<String, String>>(new ConcurrentHashMap<String, String>());

    // some code here to set the partitionMappings by using a CountDownLatch
    // so that reads are blocked until the first load completes

    public static boolean isHostBlocked(String hostName) {
        return blockedHosts.get().contains(hostName);
    }

    public static void blockHost(String hostName) {
        blockedHosts.get().put(hostName, hostName);
    }

    public static void replaceBlockedHosts(List<String> blockList) {
        ConcurrentHashMap<String, String> newBlockedHosts = new ConcurrentHashMap<>();
        for (String hostName : blockList) {
            newBlockedHosts.put(hostName, hostName);
        }
        blockedHosts.set(newBlockedHosts);
    }
}
Problem statement:
When all the servers are up (A, B, C, D as an example), the above code works fine and I don't see any TimeoutException from handle.get at all. But if, say, one server (A) that the main thread was supposed to call goes down, then I start seeing a lot of TimeoutExceptions; by a lot I mean a huge number of client timeouts.
I am not sure why this is happening. In general it shouldn't, right? As soon as the server goes down it gets added to the blockList, and then no thread should call that server; instead it should try another server in the list. So the process should be smooth, and as soon as those servers come back up, the blockList gets updated by the background thread and calls can resume.
Is there any problem in my code above that could cause this? Any suggestions would be a great help.
In general, what I am trying to do is: build a hostnames list depending on which userId is passed, using the mappings object; then call the first hostname and get the response back. If that hostname is down, add it to the block list and call the second hostname in the list.
Here is the stack trace I am seeing:
java.util.concurrent.TimeoutException
    at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:258)
    at java.util.concurrent.FutureTask.get(FutureTask.java:119)
    at com.host.client.DataClient.getDataSync(DataClient.java:20)
NOTE: Multiple userIds can resolve to the same server, i.e. server A can serve several userIds.
In DataClient class, at the below line:
public class DataClient implements IClient {
    // ---- code ----
        Future<DataResponse> handle = getDataAsync(dataKeys);
        // BELOW LINE IS THE PROBLEM
        response = handle.get(dataKeys.getTimeout(), TimeUnit.MILLISECONDS); // <--- HERE
    } catch (TimeoutException e) {
        // log error
        response = new DataResponse(null, DataErrorEnum.CLIENT_TIMEOUT, DataStatusEnum.ERROR);
    } catch (Exception e) {
        // log error
        response = new DataResponse(null, DataErrorEnum.ERROR_CLIENT, DataStatusEnum.ERROR);
    // ---- code ----
You have assigned a timeout to handle.get(...), and it times out before your REST connections can respond. The REST connections themselves may or may not be timing out, but since you time out of the future's get method before the thread finishes executing, blocking hosts has no visible effect, while the code inside DataTask's call method may be performing exactly as expected. Hope this helps.
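In other words, the underlying HTTP call has to fail faster than the future's deadline for the blocking logic to take effect. Here is a sketch of one way to do that with Spring's SimpleClientHttpRequestFactory (the timeout values are illustrative assumptions, not taken from the question):

import org.springframework.http.client.SimpleClientHttpRequestFactory;
import org.springframework.web.client.RestTemplate;

// make a dead host fail fast, well inside dataKeys.getTimeout()
SimpleClientHttpRequestFactory factory = new SimpleClientHttpRequestFactory();
factory.setConnectTimeout(200); // milliseconds
factory.setReadTimeout(300);    // milliseconds
RestTemplate restTemplate = new RestTemplate(factory);

With connect failures surfacing quickly as a ResourceAccessException (a subclass of RestClientException), the RestClientException branch in DataTask.call() runs and ClientData.blockHost(hostname) takes effect before handle.get(...) gives up.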
You asked about suggestions, so here are some suggestions:
1.) Unexpected return value
isHostBlocked may not return what you expect:
if (ClientData.isHostBlocked(hostname)) // this may always return false! please check
(Its implementation calls ConcurrentHashMap.contains(...), which tests values, not keys; it only works here because key and value are the same string. containsKey(hostName) is the intended call.)
2.) Exception handling
Are you really sure that a RestClientException occurs?
Only when that exception occurs is the host added to the blocked list!
Also, your posted code seems to skip logging (it is commented out!):
...catch (HttpClientErrorException ex) {
    // make DataResponse
    return dataResponse;
} catch (HttpServerErrorException ex) {
    // make DataResponse
    return dataResponse;
} catch (RestClientException ex) {
    // If it comes here, then it means some of the servers are down.
    // Add this server to the block host list
    ClientData.blockHost(hostname);
    // log an error
} catch (Exception ex) {
    // If it comes here, then it means something weird has happened.
    // log an error
    // make DataResponse
}
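To verify which exception type a down host actually produces, a minimal self-contained probe could look like this (class name, logger, and URL are placeholders, not part of the question's code):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.client.RestClientException;
import org.springframework.web.client.RestTemplate;

public class LoggingProbe {
    private static final Logger logger = LoggerFactory.getLogger(LoggingProbe.class);

    public static void probe(RestTemplate restTemplate, String url) {
        try {
            restTemplate.getForEntity(url, String.class);
        } catch (RestClientException ex) {
            // this is the branch that must fire for ClientData.blockHost(...) to be reached
            logger.error("RestClientException for {}", url, ex);
        } catch (Exception ex) {
            logger.error("Unexpected exception type {} for {}", ex.getClass().getName(), url, ex);
        }
    }
}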
I'm using Hibernate 3 and Spring.
When I start a thread, an exception occurs:
org.hibernate.HibernateException: Illegal attempt to associate a collection with two open sessions
I don't know how to detach entities or close the session with this architecture.
I'd appreciate some help.
CommunicationService.sendCommunications() code:
public void sendCommunications(HibernateMessageToSendRepository messageToSendRepository) {
    Long messageId = new Long(41); // this is only for a test; the idea is to get a list of ids and spawn a group of threads
    MessageSender sender = new SmsSender(messageId, messageToSendRepository);
    sender.start();
}
Invoking sendCommunications code:
ApplicationContext appCont = new ClassPathXmlApplicationContext("appContext.xml");
ServiceLocator serviceLocator = ServiceLocator.getInstance();
HibernateMessageToSendRepository messageToSendRepository =
        (HibernateMessageToSendRepository) appCont.getBean("messageToSendRepository");
CommunicationService communication = serviceLocator.getCommunicationService();
communication.sendCommunications(messageToSendRepository);
SmsSender (which extends MessageSender, a Thread) code:
public class SmsSender extends MessageSender {

    public SmsSender(Long messageToSendId, HibernateMessageToSendRepository messageToSendRepository) {
        super(messageToSendRepository);
        MessageToSend messageToSendNew = this.messageToSendRepository.getById(messageToSendId);
        this.messageToSend = messageToSendNew;
    }

    public void run() {
        try {
            MessageToSendSms messageToSendSms = (MessageToSendSms) this.messageToSend;
            Iterator<CustomerByMessage> itCbmsgs = messageToSendSms.getCustomerByMessage().iterator();
            while (itCbmsgs.hasNext()) {
                CustomerByMessage cbm = (CustomerByMessage) itCbmsgs.next();
                // sms sending
                this.getGateway().sendSMS(cbm.getBody(), cbm.getCellphone());
                cbm.setStatus(CustomerByMessageStatus.SENT_OK);
                cbm.setSendingDate(Calendar.getInstance().getTime());
            }
            messageToSendSms.getMessage().setStatus(messageToSendStatus.ALL_MESSAGES_SENT);
            this.messageToSendRepository.update(messageToSendSms);
        } catch (Exception e) {
            this.log.error("Error in sms sender " + e.getMessage());
        }
    }
}
MessageToSendRepository code:
public void update(MessageToSend messageToSend) {
    try {
        this.getSession().update(messageToSend);
    } catch (HibernateException e) {
        this.log.error(e.getMessage(), e);
        throw e;
    }
}
You need to detach messageToSendNew after you retrieve it, but before you share it with another thread. You can detach the object by calling Session.close() on your Hibernate session.
Caveat: you must eagerly populate all the fields you need.
If you need to reconnect it to a new session, you can use the merge() method.
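A sketch of how that could look in SmsSender, assuming the repository can expose its Hibernate Session (the getSession() accessor is an assumption; only the repository's update() is shown in the question):

// in the constructor: load eagerly, then detach before the thread starts
MessageToSend messageToSendNew = this.messageToSendRepository.getById(messageToSendId);
// force the lazy collection while the session is still open (org.hibernate.Hibernate)
Hibernate.initialize(((MessageToSendSms) messageToSendNew).getCustomerByMessage());
this.messageToSendRepository.getSession().close(); // detach the entity
this.messageToSend = messageToSendNew;

// in run(): reattach to the thread's own session; merge() copies the detached
// state onto a managed instance, so the separate update() call is no longer needed
MessageToSendSms managed = (MessageToSendSms) this.messageToSendRepository.getSession().merge(messageToSendSms);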