I have the following Observable where I expect some DB insertions to occur upon subscribing to it.
But nothing happens: no DB inserts, and no errors either.
However, if I directly subscribe to the method that does the DB calls, the DB insert occurs as expected.
How can I fix this so that subscribing to the Observable below performs the DB insert?
Please advise. Thanks.
This is the Observable where no DB insert occurs and no errors are raised. I want to change it so that the DB insertion happens when I subscribe to this Observable.
public Observable<KafkaConsumerRecord<String, RequestObj>> apply(KafkaConsumerRecords<String, RequestObj> records) {
Observable.from(records.getDelegate().records().records("TOPIC_NAME"))
.buffer(2)
.map(this::convertToEventRequest)
.doOnNext(this::handleEventInsertions)
.doOnSubscribe(() -> System.out.println("Subscribed!"))
.subscribe(); // purposely subscribing here itself to test
return null; // even if I return this observable and subscribe at the caller, same outcome.
}
Just to test that the query works: if I directly subscribe to the method that does the insertion, it works as expected, as follows.
(I am doing this in debug mode.)
client.rxQueryWithParams(query, new JsonArray(params)).subscribe() // works
The following is for reference, to show what happens inside the convertToEventRequest and handleEventInsertions methods:
private Map<String, List<?>> convertToEventRequest(Object records) {
List<ConsumerRecord<String, RequestObj>> consumerRecords = (List<ConsumerRecord<String, RequestObj>>) records;
List<AddEventRequest> addEventRequests = new ArrayList<>();
List<UpdateEventRequest> updateEventRequests = new ArrayList<>();
consumerRecords.forEach(record -> {
String eventType = new String(record.headers().headers("type").iterator().next().value(), StandardCharsets.UTF_8);
if("add".equals(eventType)) {
AddEventRequest request = AddEventRequest.builder()
.count(Integer.parseInt(new String(record.headers().headers("count").iterator().next().value(), StandardCharsets.UTF_8)))
.data(record.value())
.build();
addEventRequests.add(request);
} else {
UpdateEventRequest request = UpdateEventRequest.builder()
.id(new String(record.headers().headers("id").iterator().next().value(), StandardCharsets.UTF_8))
.status(Integer.parseInt(new String(record.headers().headers("status").iterator().next().value(), StandardCharsets.UTF_8)))
.build();
updateEventRequests.add(request);
}
});
return new HashMap<String, List<?>>() {{
put("add", addEventRequests);
put("update", updateEventRequests);
}};
}
private void handleEventInsertions(Object eventObject) {
Map<String, List<?>> eventMap = (Map<String, List<?>>) eventObject;
List<AddEventRequest> addEventRequests = (List<AddEventRequest>) eventMap.get("add");
List<UpdateEventRequest> updateEventRequests = (List<UpdateEventRequest>) eventMap.get("update");
if(addEventRequests != null && !addEventRequests.isEmpty()) {
insertAddEvents(addEventRequests);
}
if(updateEventRequests != null && !updateEventRequests.isEmpty()) {
insertUpdateEvents(updateEventRequests);
}
}
private Single<ResultSet> insertAddEvents(List<AddEventRequest> requests) {
AddEventRequest request = requests.get(0);
List<Object> params = Arrays.asList(request.getCount(), request.getData());
String query = "INSERT INTO mytable(count, data, creat_ts) " +
"VALUES (?, ?, current_timestamp)";
return client.rxQueryWithParams(query, new JsonArray(params));
}
private Single<ResultSet> insertUpdateEvents(List<UpdateEventRequest> requests) {
UpdateEventRequest request = requests.get(0);
return client.rxQueryWithParams(
"UPDATE mytable SET status=?, creat_ts=current_timestamp WHERE id=?",
new JsonArray(Arrays.asList(request.getStatus(), request.getId())));
}
Can you try wrapping it in Observable.defer?
Observable.defer(() -> Observable.from(records.getDelegate().records().records("TOPIC_NAME"))...
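For example, a sketch only, keeping your operators unchanged (defer simply postpones creating the source Observable until something actually subscribes):
Observable.defer(() -> Observable.from(records.getDelegate().records().records("TOPIC_NAME")))
        .buffer(2)
        .map(this::convertToEventRequest)
        .doOnNext(this::handleEventInsertions)
        .doOnSubscribe(() -> System.out.println("Subscribed!"))
        .subscribe(); // or return this Observable and subscribe at the caller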
I am using Hazelcast Jet to do some aggregation and grouping, but after being idle for some time, when I try to stop my Tomcat it will not stop and I have to restart my PC. Below is the error I am getting. Can anyone guide me on what exactly the error is showing and how to shut it down gracefully?
Sending multicast datagram failed. Exception message saying the operation is not permitted
usually means the underlying OS is not able to send packets at a given pace. It can be caused by starting several hazelcast members in parallel when the members send their join message nearly at the same time.
java.net.NoRouteToHostException: No route to host: Datagram send failed
at java.net.TwoStacksPlainDatagramSocketImpl.send(Native Method)
at java.net.DatagramSocket.send(DatagramSocket.java:693)
at com.hazelcast.internal.cluster.impl.MulticastService.send(MulticastService.java:291)
at com.hazelcast.internal.cluster.impl.MulticastJoiner.searchForOtherClusters(MulticastJoiner.java:113)
at com.hazelcast.internal.cluster.impl.SplitBrainHandler.searchForOtherClusters(SplitBrainHandler.java:75)
at com.hazelcast.internal.cluster.impl.SplitBrainHandler.run(SplitBrainHandler.java:42)
at com.hazelcast.spi.impl.executionservice.impl.DelegateAndSkipOnConcurrentExecutionDecorator$DelegateDecorator.run(DelegateAndSkipOnConcurrentExecutionDecorator.java:77)
at com.hazelcast.internal.util.executor.CachedExecutorServiceDelegate$Worker.run(CachedExecutorServiceDelegate.java:217)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
at com.hazelcast.internal.util.executor.HazelcastManagedThread.executeRun(HazelcastManagedThread.java:76)
at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102)
The code is quite big, but I have tried to show a sample; it may not work as-is, since it is just a glimpse of the code:
class Abc {
// It creates the Jet instance
JetConfig jetConfig = new JetConfig();
jetConfig.getHazelcastConfig().setProperty( "hazelcast.logging.type", "log4j" );
jetConfig.getInstanceConfig().setCooperativeThreadCount(5);
jetConfig.configureHazelcast(c -> {
c.getNetworkConfig().setReuseAddress(true);
c.setClusterName("DATA" + UUID.randomUUID().toString());
c.getNetworkConfig().setPort(9093);
c.getNetworkConfig().setPublicAddress("localhost");
c.getNetworkConfig().setPortAutoIncrement(true);
});
JetInstance jetInstance= Jet.newJetInstance(jetConfig);
public Pipeline createPipeline() {
return Pipeline.create();
}
// To add a job to the pipeline
public void joinPipeToJet(Pipeline pl, String name) {
JobConfig j = new JobConfig();
//j.setProcessingGuarantee(ProcessingGuarantee.EXACTLY_ONCE);
j.setName(name);
jetInstance.newJob(pl,j).join();
}
public void readJsonFile(final Map<String, Object> data) {
// Random id for the job so I can separate the two jobs' IMaps
String jobid = UUID.randomUUID().toString();
try {
Pipeline pl = createPipeline();
UUID idOne = UUID.randomUUID();
final IMap<Object, Object> abc = jetInstance.getMap(idOne.toString());
abc.putAll(data);
// Reading data from the file and sending it to the next stage
final BatchSource batchSource = Sources.map(abc);
pl.readFrom(batchSource)
.writeTo(Sinks.map(this.uid));
joinPipeToJet(pl, jobid);
abc.destroy();
} catch (Exception e) {
Job j1 = jetInstance.getJob(jobid);
if (j1 != null) {
j1.cancel();
}
} finally {
Job j1 = jetInstance.getJob(jobid);
if (j1 != null) {
j1.cancel();
}
}
}
// Process to manipulate data and return it, via a BatchStage, as a Map
public Map<String, Object> runProcess(final Pipeline pl) {
String jobid = UUID.randomUUID().toString();
UUID idOne = UUID.randomUUID();
BatchStage<Object> bd1 = null; // get data by calling a method (omitted here)
bd1.writeTo(Sinks.list(idOne.toString()));
joinPipeToJet(pl, jobid);
IList<Object> abc = jetInstance.getList(idOne.toString());
List<Object> result = new ArrayList(abc);
final Map<String, Object> finalresult =new HashMap<String, Object>();
finalresult.put("datas", result.get(0));
abc.destroy();
return finalresult;
}
public static void main(String...args) {
Map<String, Object> p = new HashMap<String, Object>();
p.put("someKey", "Some Data"); // sample data, key name is just an example
readJsonFile(p);
Pipeline pl = createPipeline();
runProcess(pl);
}
}
I have multiple async tasks running in Spring Boot. These tasks read an Excel file and insert all of that data into the database.
A task is started when a request is made from the front-end. The front-end then periodically polls for the progress status of the task.
I need to track the progress of each of these tasks and know when they are completed.
This is the controller that takes in requests for tasks and for polling their progress status:
public class TaskController {
@RequestMapping(method = RequestMethod.POST, value = "/uploadExcel")
public ResponseEntity<?> uploadExcel(String excelFilePath) {
String taskId = UUID.randomUUID().toString();
taskAsyncService.AsyncManager(taskId, excelFilePath);
HashMap<String, String> responseMap = new HashMap<>();
responseMap.put("taskId", taskId);
return new ResponseEntity<>(responseMap, HttpStatus.ACCEPTED);
}
// This will be polled to get progress of tasks being executed
@RequestMapping(method = RequestMethod.GET, value = "/tasks/progress/{id}")
public ResponseEntity<?> getTaskProgress(@PathVariable("id") String taskId) {
HashMap<String, String> map = new HashMap<>();
if (!taskAsyncService.containsTaskEntry(taskId)) {
map.put("Error", "TaskId does not exist");
return new ResponseEntity<>(map, HttpStatus.BAD_REQUEST);
}
boolean taskProgress = taskAsyncService.getTaskProgress(taskId);
if (taskProgress) {
map.put("message", "Task complete");
taskAsyncService.removeTaskProgressEntry(taskId);
return new ResponseEntity<>(map, HttpStatus.OK);
}
//Otherwise task is still running
map.put("progressStatus", "Task running");
return new ResponseEntity<>(map, HttpStatus.PARTIAL_CONTENT);
}
}
This is the code that executes the async tasks.
public class TaskAsyncService {
private final AtomicReference<ConcurrentHashMap<String, Boolean>> isTaskCompleteMap = new AtomicReference<>(new ConcurrentHashMap<String, Boolean>());
protected boolean containsTaskEntry(String taskId) {
if (isTaskCompleteMap.get().get(taskId) != null) {
return true;
}
return false;
}
protected boolean getTaskProgress(String taskId) {
return isTaskCompleteMap.get().get(taskId);
}
protected void removeTaskProgressEntry(String taskId) {
if (isTaskCompleteMap.get() != null) {
isTaskCompleteMap.get().remove(taskId);
}
}
@Async
public CompletableFuture<?> AsyncManager(String taskId, String excelFilePath) {
HashMap<String, String> map = new HashMap<>();
//Add a new entry into isTaskCompleteMap
isTaskCompleteMap.get().put(taskId, false);
//Insert excel rows into database
//Task completed set value to true
isTaskCompleteMap.get().put(taskId, true);
map.put("Success", "Task completed");
return CompletableFuture.completedFuture(map);
}
}
I am using AWS EC2 with a load balancer. Therefore, sometimes a
polling request gets handled by a newly spawned server which cannot
access the isTaskCompleteMap and returns saying that "TaskId does not exist".
How do I track the status of the tasks in this case? I understand I need a distributed data structure, but I don't understand what kind or how to implement it.
You can use Hazelcast or similar distributed solutions (Redis, etc.).
Maps: https://docs.hazelcast.org/docs/3.0/manual/html/ch02.html#Map
Use a distributed map from Hazelcast instead of the ConcurrentHashMap.
A get from such a map will return the task status even if the task is being processed on another pod (server).
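A minimal sketch of what that could look like, assuming Hazelcast 3.x on the classpath; the wrapper class and map name here are just illustrative:
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

public class TaskStatusStore {

    // Every server that joins the same cluster sees the same map,
    // so a polling request can be answered by any node behind the load balancer.
    private final HazelcastInstance hazelcast = Hazelcast.newHazelcastInstance();
    private final IMap<String, Boolean> isTaskCompleteMap = hazelcast.getMap("task-status");

    public void markRunning(String taskId) {
        isTaskCompleteMap.put(taskId, false);
    }

    public void markComplete(String taskId) {
        isTaskCompleteMap.put(taskId, true);
    }

    public boolean containsTaskEntry(String taskId) {
        return isTaskCompleteMap.containsKey(taskId);
    }

    public boolean getTaskProgress(String taskId) {
        return Boolean.TRUE.equals(isTaskCompleteMap.get(taskId));
    }

    public void removeTaskProgressEntry(String taskId) {
        isTaskCompleteMap.remove(taskId);
    }
}
The AsyncManager and the controller methods can then call this store instead of the AtomicReference-wrapped ConcurrentHashMap.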
I'm using an external API with two functions: one that returns a Maybe and one that returns a Completable (see the code below). I'd like my function saveUser() to return a Completable, so that I can just check it with doOnSuccess() and doOnError(). But currently my code doesn't compile. Also, note that if my getMaybe doesn't return anything, I'd like to get a null value as the argument in my flatMap, so that I can handle the null vs. not-null cases (as seen in the code).
private Maybe<DataSnapshot> getMaybe(String key) {
// external API that returns a maybe
}
private Completable updateChildren(mDatabase, childUpdates) {
// external API that returns a Completable
}
// I'd like my function to return a Completable but it doesn't compile now
public Completable saveUser(String userKey, User user) {
return getMaybe(userKey)
.flatMap(a -> {
Map<String, Object> childUpdates = new HashMap<>();
if (a != null) {
// add some key/values to childUpdates
}
childUpdates.put(DB_USERS + "/" + userKey, user.toMap());
// this returns a Completable
return updateChildren(mDatabase, childUpdates);
});
}
First of all, remember that a Maybe is used to get one element, an empty completion, or an error.
I refactored your code below to make it possible to return a Completable:
public Completable saveUser(String userKey, User user) {
return getMaybe(userKey)
.defaultIfEmpty(new DataSnapshot())
.flatMapCompletable(data -> {
Map<String, Object> childUpdates = new HashMap<>();
//Thanks to defaultIfEmpty the fallback object's
//id is null (it can be any attribute that works for you),
//so we can use it to check whether the Maybe
//returned empty or not
if (data.getId() == null) {
// set values to the data
// perhaps like this
data.setId(userKey);
// and do whatever you what with childUpdates
}
childUpdates.put(DB_USERS + "/" + userKey, user.toMap());
// this returns a Completable
return updateChildren(mDatabase, childUpdates);
});
}
This is the solution I finally came up with.
public Completable saveUser(String userKey, User user) {
return getMaybe(userKey)
.map(tripListSnapshot -> {
Map<String, Object> childUpdates = new HashMap<>();
// add some key/values to childUpdates
return childUpdates;
})
.defaultIfEmpty(new HashMap<>())
.flatMapCompletable(childUpdates -> {
childUpdates.put(DB_USERS + "/" + userKey, user.toMap());
return updateChildren(mDatabase, childUpdates);
});
}
I can easily query the Alfresco audit log in REST using this query:
http://localhost:8080/alfresco/service/api/audit/query/audit-custom?verbose=true
But how can I perform the same request in Java, within an Alfresco module?
It must be synchronous.
A lazy solution would be to call the REST URL from Java, but that would probably be inefficient, and more importantly it would require me to store an admin's password somewhere.
I noticed AuditService has an auditQuery method, so I am trying to call it. Unfortunately, it seems to be meant for asynchronous operations? I don't need callbacks, as I need to wait until the queried data is ready before going on to the next step.
Here is my implementation, mostly copied from the source code of the REST API:
int maxResults = 10000;
if (!auditService.isAuditEnabled(AUDIT_APPLICATION, ("/" + AUDIT_APPLICATION))) {
throw new WebScriptException(
"Auditing for " + AUDIT_APPLICATION + " is disabled!");
}
final List<Map<String, Object>> entries =
new ArrayList<Map<String, Object>>(maxResults);
AuditQueryCallback callback = new AuditQueryCallback() {
@Override
public boolean valuesRequired() {
return true; // true = verbose
}
@Override
public boolean handleAuditEntryError(
Long entryId, String errorMsg, Throwable error) {
return true;
}
@Override
public boolean handleAuditEntry(
Long entryId,
String applicationName,
String user,
long time,
Map<String, Serializable> values) {
// Convert values to Strings
Map<String, String> valueStrings =
new HashMap<String, String>(values.size() * 2);
for (Map.Entry<String, Serializable> mapEntry : values.entrySet()) {
String key = mapEntry.getKey();
Serializable value = mapEntry.getValue();
try {
String valueString = DefaultTypeConverter.INSTANCE.convert(
String.class, value);
valueStrings.put(key, valueString);
}
catch (TypeConversionException e) {
// Use the toString()
valueStrings.put(key, value.toString());
}
}
Map<String, Object> entry = new HashMap<String, Object>();
entry.put(JSON_KEY_ENTRY_VALUES, valueStrings);
entries.add(entry);
return true;
}
};
AuditQueryParameters params = new AuditQueryParameters();
params.setApplicationName(AUDIT_APPLICATION);
params.setForward(true);
auditService.auditQuery(callback, params, maxResults);
Though the callback might make it look asynchronous, it is not.
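For example, right after auditQuery() returns, the collected entries can be used directly. A minimal illustration (the println is just a placeholder for whatever you do next):
auditService.auditQuery(callback, params, maxResults);
// By the time auditQuery() returns, handleAuditEntry has already been called
// for every matching entry, so the list is fully populated.
for (Map<String, Object> entry : entries) {
    System.out.println("Audit entry values: " + entry.get(JSON_KEY_ENTRY_VALUES));
}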
I would like to use the Vert.x Common SQL interface to query tables t1, t2, t3 in database TDB, together with tables s1, s2, s3 from database SDB, and return them as a JsonObject. The final result should look like this:
{
"t1": [{...},{...},...],
"t2": [{...},{...},...],
"t3": [{...},{...},...],
"s1": [{...},{...},...],
"s2": [{...},{...},...],
"s3": [{...},{...},...]
}
If it were only one table, I would do it like this:
JDBCClient tdbClient = JDBCClient.createShared(vertx, tdbConfig, "TDB");
JDBCClient sdbClient = JDBCClient.createShared(vertx, sdbConfig, "SDB");
vertx.eventBus().consumer("myservice.getdata").handler(msg -> {
tdbClient.getConnection(tConresult -> {
if (tConresult.succeeded()) {
SQLConnection tConnection = tConresult.result();
tConnection.query("select * from t1", t1 -> {
if (t1.succeeded()) {
JsonArray t1Result = new JsonArray(t1.result().getRows());
JsonObject allResult = new JsonObject()
.put("t1", t1Result);
msg.reply(allResult);
} else {
msg.fail(1, "failt to query t1");
}
});
} else {
msg.fail(1, "connot get connection to TDB");
}
});
});
But since there are many tables, I ended up with an ugly way like this:
vertx.eventBus().consumer("myservice.getdata").handler(msg -> {
tdbClient.getConnection(tConresult -> { if (tConresult.succeeded()) {
sdbClient.getConnection(sConresult -> { if (sConresult.succeeded()) {
SQLConnection tConnection = tConresult.result();
SQLConnection sConnection = sConresult.result();
tConnection.query("select * from t1", t1 -> { if (t1.succeeded()) {
tConnection.query("select * from t2", t2 -> { if (t2.succeeded()) {
tConnection.query("select * from t3", t3 -> { if (t3.succeeded()) {
sConnection.query("select * from s1", s1 -> { if (s1.succeeded()) {
sConnection.query("select * from s2", s2 -> { if (s2.succeeded()) {
sConnection.query("select * from s3", s3 -> { if (s3.succeeded()) {
JsonArray t1Result = new JsonArray(t1.result().getRows());
JsonArray t2Result = new JsonArray(t2.result().getRows());
JsonArray t3Result = new JsonArray(t3.result().getRows());
JsonArray s1Result = new JsonArray(s1.result().getRows());
JsonArray s2Result = new JsonArray(s2.result().getRows());
JsonArray s3Result = new JsonArray(s3.result().getRows());
JsonObject allResult = new JsonObject()
.put("t1", t1Result)
.put("t2", t2Result)
.put("t3", t3Result)
.put("s1", s1Result)
.put("s2", s2Result)
.put("s3", s3Result);
msg.reply(allResult);
} else {msg.fail(1, "failt to query s3");}});
} else {msg.fail(1, "failt to query s2");}});
} else {msg.fail(1, "failt to query s1");}});
} else {msg.fail(1, "failt to query t3");}});
} else {msg.fail(1, "failt to query t2");}});
} else {msg.fail(1, "failt to query t1");}});
} else {msg.fail(1, "connot get connection to SDB");}});
} else {msg.fail(1, "connot get connection to TDB");}});
});
But I think I'm doing it wrong. Apart from the ugly code, it takes a lot of time to process because it doesn't run the queries in parallel.
Please suggest a better way to achieve this.
What you are experiencing here is callback hell. Vert.x provides some features to handle AsyncResult in a much more composable and convenient way than callbacks. They are called Futures. I suggest you read about them in the documentation.
A Future is a placeholder for the result of an asynchronous call. Vert.x is full of asynchronous calls. If asynchronous calls depend on each other, you typically end up in callback hell. With Futures you can do something like this:
Future<SQLConnection> tConResultFuture = Future.future();
tdbClient.getConnection(tConresult -> {
if (tConresult.succeeded()) {
logger.info("Yeah got a connection! tCon");
tConResultFuture.complete(tConresult.result());
} else {
tConResultFuture.fail(tConresult.cause());
}
});
The Handler for the AsyncResult<SQLConnection> puts the asynchronous result of getting an SQLConnection into the Future tConResultFuture. Now you can set a Handler on the Future and wait for the asynchronous result of getConnection:
tConResultFuture.setHandler(result -> {
// ...
});
But that wouldn't make much sense, as you could already do that with the first Handler. Now think of an example like yours, with many dependent Futures. I use your example and add a second connection, the sConresult:
Future<SQLConnection> sConResultFuture = Future.future();
sdbClient.getConnection(sConresult -> {
if (sConresult.succeeded()) {
logger.info("Yeah got a connection! sCon");
sConResultFuture.complete(sConresult.result());
} else {
sConResultFuture.fail(sConresult.cause());
}
});
So let's say you want to wait for both Future results because they depend on each other. Here we use Vert.x's CompositeFuture:
CompositeFuture.all(tConResultFuture, sConResultFuture).setHandler(connections -> {
if (connections.succeeded()) {
logger.info("Both connections are ready for use!");
SQLConnection tCon = tConResultFuture.result();
SQLConnection sCon = sConResultFuture.result();
// do stuff...
} else {
logger.severe("At least one of the connection attempts failed!");
}
});
The CompositeFuture waits for the Futures tConResultFuture and sConResultFuture to complete, successfully or not, and then calls its Handler. At that point both asynchronous results are finished and you can access them.
And the nice thing is that both asynchronous calls are done concurrently.
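Putting this together for your case, a sketch could look like the following. The queryToFuture helper is just an illustrative name that wraps a single query into a Future of its rows; the outer block is assumed to sit inside your event bus handler, so msg is available. Note that queries sharing one connection are queued on that connection, while the two connections work concurrently:
// Illustrative helper: run one query and expose its rows as a Future
private Future<JsonArray> queryToFuture(SQLConnection connection, String sql) {
    Future<JsonArray> future = Future.future();
    connection.query(sql, res -> {
        if (res.succeeded()) {
            future.complete(new JsonArray(res.result().getRows()));
        } else {
            future.fail(res.cause());
        }
    });
    return future;
}

// Once both connections are ready, fire all queries and compose the reply
CompositeFuture.all(tConResultFuture, sConResultFuture).setHandler(connections -> {
    if (connections.succeeded()) {
        SQLConnection tCon = tConResultFuture.result();
        SQLConnection sCon = sConResultFuture.result();
        Future<JsonArray> t1F = queryToFuture(tCon, "select * from t1");
        Future<JsonArray> t2F = queryToFuture(tCon, "select * from t2");
        Future<JsonArray> t3F = queryToFuture(tCon, "select * from t3");
        Future<JsonArray> s1F = queryToFuture(sCon, "select * from s1");
        Future<JsonArray> s2F = queryToFuture(sCon, "select * from s2");
        Future<JsonArray> s3F = queryToFuture(sCon, "select * from s3");
        CompositeFuture.all(t1F, t2F, t3F, s1F, s2F, s3F).setHandler(queries -> {
            if (queries.succeeded()) {
                msg.reply(new JsonObject()
                        .put("t1", t1F.result())
                        .put("t2", t2F.result())
                        .put("t3", t3F.result())
                        .put("s1", s1F.result())
                        .put("s2", s2F.result())
                        .put("s3", s3F.result()));
            } else {
                msg.fail(1, "at least one query failed");
            }
        });
    } else {
        msg.fail(1, "could not get both connections");
    }
});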