I can easily query the Alfresco audit log in REST using this query:
http://localhost:8080/alfresco/service/api/audit/query/audit-custom?verbose=true
But how do I perform the same query in Java, from within an Alfresco module?
It must be synchronous.
A lazy solution would be to call the REST URL from Java, but that would probably be inefficient and, more importantly, it would require me to store an admin's password somewhere.
I noticed AuditService has an auditQuery method, so I am trying to call it. Unfortunately, it seems to be designed for asynchronous operation? I don't need callbacks: I need to wait until the queried data is ready before going on to the next step.
Here is my implementation, mostly copied from the source code of the REST API:
int maxResults = 10000;
if (!auditService.isAuditEnabled(AUDIT_APPLICATION, ("/" + AUDIT_APPLICATION))) {
    throw new WebScriptException(
            "Auditing for " + AUDIT_APPLICATION + " is disabled!");
}
final List<Map<String, Object>> entries =
        new ArrayList<Map<String, Object>>(maxResults);
AuditQueryCallback callback = new AuditQueryCallback() {
    @Override
    public boolean valuesRequired() {
        return true; // true = verbose
    }

    @Override
    public boolean handleAuditEntryError(
            Long entryId, String errorMsg, Throwable error) {
        return true; // keep going on errors
    }

    @Override
    public boolean handleAuditEntry(
            Long entryId,
            String applicationName,
            String user,
            long time,
            Map<String, Serializable> values) {
        Map<String, Object> entry = new HashMap<String, Object>();
        // Convert values to Strings
        Map<String, String> valueStrings =
                new HashMap<String, String>(values.size() * 2);
        for (Map.Entry<String, Serializable> mapEntry : values.entrySet()) {
            String key = mapEntry.getKey();
            Serializable value = mapEntry.getValue();
            try {
                String valueString = DefaultTypeConverter.INSTANCE.convert(
                        String.class, value);
                valueStrings.put(key, valueString);
            } catch (TypeConversionException e) {
                // Fall back to toString()
                valueStrings.put(key, value.toString());
            }
        }
        entry.put(JSON_KEY_ENTRY_VALUES, valueStrings);
        entries.add(entry);
        return true;
    }
};
AuditQueryParameters params = new AuditQueryParameters();
params.setApplicationName(AUDIT_APPLICATION);
params.setForward(true);
auditService.auditQuery(callback, params, maxResults);
Though the callback might make it look asynchronous, it is not.
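To illustrate, a minimal follow-up sketch (assuming the entries list declared above): the results can be consumed immediately after auditQuery returns, with no waiting or polling.

// auditQuery has already invoked the callback for every matching entry
for (Map<String, Object> entry : entries) {
    System.out.println("audit entry: " + entry);
}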
I'm using an external API with two functions, one that returns a Maybe and one that returns a Completable (see code below). I'd like my function saveUser() to return a Completable, so that I can just check it with doOnSuccess() and doOnError(). But currently my code doesn't compile. Also, please note that if my getMaybe doesn't return anything, I'd like to get a null value as the argument in my flatMap, so that I can handle the null vs. non-null cases (as seen in the code).
private Maybe<DataSnapshot> getMaybe(String key) {
// external API that returns a maybe
}
private Completable updateChildren(DatabaseReference mDatabase, Map<String, Object> childUpdates) {
// external API that returns a Completable
}
// I'd like my function to return a Completable but it doesn't compile now
public Completable saveUser(String userKey, User user) {
return getMaybe(userKey)
.flatMap(a -> {
Map<String, Object> childUpdates = new HashMap<>();
if (a != null) {
// add some key/values to childUpdates
}
childUpdates.put(DB_USERS + "/" + userKey, user.toMap());
// this returns a Completable
return updateChildren(mDatabase, childUpdates);
});
}
First of all, remember that a Maybe is used to get one element, an empty completion, or an error.
I refactored your code below to make it possible to return a Completable:
public Completable saveUser(String userKey, User user) {
return getMaybe(userKey)
.defaultIfEmpty(new DataSnapshot())
.flatMapCompletable(data -> {
Map<String, Object> childUpdates = new HashMap<>();
// Thanks to defaultIfEmpty, an empty Maybe yields this fresh
// object, whose id is null (it can be any attribute that works
// for you), so we can use it to tell whether the Maybe
// completed empty or not
if (data.getId() == null) {
// set values to the data
// perhaps like this
data.setId(userKey);
// and do whatever you what with childUpdates
}
childUpdates.put(DB_USERS + "/" + userKey, user.toMap());
// this returns a Completable
return updateChildren(mDatabase, childUpdates);
});
}
This is the solution I finally came up with.
public Completable saveUser(String userKey, User user) {
return getMaybe(userKey)
.map(tripListSnapshot -> {
Map<String, Object> childUpdates = new HashMap<>();
// add some key/values to childUpdates
return childUpdates;
})
.defaultIfEmpty(new HashMap<>())
.flatMapCompletable(childUpdates -> {
childUpdates.put(DB_USERS + "/" + userKey, user.toMap());
return updateChildren(mDatabase, childUpdates);
});
}
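For reference, a hypothetical caller (the "user-123" key and user are placeholders); note that for a Completable the success-side hook is doOnComplete() rather than doOnSuccess():

saveUser("user-123", user)
        .doOnComplete(() -> System.out.println("user saved")) // Completable has doOnComplete, not doOnSuccess
        .doOnError(t -> System.err.println("save failed: " + t))
        .subscribe();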
I am consuming a service which gets its data from a flow. The return type of call() is HashMap. Upon calling the API, I get the exception below. I read that HashMap is not included in the whitelist. Can anyone suggest how to return a Map or ConcurrentHashMap from a Corda flow to a service?
"com.esotericsoftware.kryo.KryoException: Class java.util.HashMap is
not annotated or on the whitelist, so cannot be used in
serialization\nSerialization trace:\nvalue (rx.Notification)",
Some Code Snippets:
@Suspendable
@Override
public Map<Party, StateAndRef<MembershipState>> call() throws FlowException {
MembershipsCacheHolder membershipService = getServiceHub().cordaService(MembershipsCacheHolder.class);
MembershipsCache cache = membershipService.getCache();
Instant now = getServiceHub().getClock().instant();
System.out.println("==========started the Get membership flow " + forceRefresh + "cahce" + cache);
if (forceRefresh || cache == null || cache.getExpires() == null || cache.getExpires().isBefore(now)) {
MemberConfigurationService configuration = getServiceHub().cordaService(MemberConfigurationService.class);
Party bno = configuration.bnoParty();
FlowSession bnoSession = initiateFlow(bno);
UntrustworthyData<MembershipsListResponse> packet2 = bnoSession.sendAndReceive(MembershipsListResponse.class, new MembershipListRequest());
MembershipsListResponse response = packet2.unwrap(data -> {
// Perform checking on the object received.
// TODO: Check the received object.
// Return the object.
return data;
});
try {
MembershipsCache newCache = MembershipsCache.from(response);
membershipService.setCache(newCache);
} catch (Exception e) {
System.out.println("==========failed the Get membership flow " + forceRefresh + "cahce" + cache);
e.printStackTrace();
}
Map<Party, StateAndRef<MembershipState>> hashMap = new HashMap<Party, StateAndRef<MembershipState>>(membershipService.getCache().getMembershipMap());
return hashMap;
} else {
Map<Party, StateAndRef<MembershipState>> hashMap = new HashMap<Party, StateAndRef<MembershipState>>(cache.getMembershipMap());
return hashMap;
}
}
And the code from the Spring Boot side:
@GetMapping(value = "getMemberShips", produces = MediaType.APPLICATION_JSON_VALUE)
public String getMembership() throws InterruptedException, JsonProcessingException, ExecutionException {
    FlowProgressHandle<Map<Party, StateAndRef<MembershipState>>> flowHandle = proxy.startTrackedFlowDynamic(GetMembershipsFlow.GetMembershipsFlowInitiator.class, true);
flowHandle.getProgress().subscribe(evt -> System.out.printf(">> %s\n", evt));
final Map<Party, StateAndRef<MembershipState>> result = flowHandle
.getReturnValue()
.get();
return result.toString();
}
As suggested by Corda, we could use LinkedHashMap instead. It worked fine. Thanks.
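For reference, a minimal sketch of that fix against the flow above: swap HashMap for LinkedHashMap at both return sites, since LinkedHashMap is accepted by Corda's serializer.

// LinkedHashMap is whitelisted, so the flow's return value can be serialized
Map<Party, StateAndRef<MembershipState>> result =
        new LinkedHashMap<>(membershipService.getCache().getMembershipMap());
return result;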
I am using the Spring Framework StringRedisTemplate to update an entry, which happens from multiple threads.
public void processSubmission(final String key, final Map<String, String> submissionDTO) {
final String hashKey = String.valueOf(Hashing.MURMUR_HASH.hash(key));
this.stringRedisTemplate.expire(key, 60, TimeUnit.MINUTES);
final HashOperations<String, String, String> ops = this.stringRedisTemplate.opsForHash();
Map<String, String> data = findByKey(key);
String json;
if (data != null) {
data.putAll(submissionDTO);
json = convertSubmission(data);
} else {
json = convertSubmission(submissionDTO);
}
ops.put(key, hashKey, json);
}
In this cache, an entry looks like:
key (assignmentId) -> value (submissionId, status)
As seen in the code, before updating the cache entry I fetch the current entry, merge in the new data, and put the result back. But since this operation can run on multiple threads, there can be a race condition that loses data. I could synchronize the method above, but then it would become a bottleneck for the parallel processing power of the RxJava implementation, where processSubmission is called via RxJava on two asynchronous threads.
class ProcessSubmission {
@Override
public Observable<Boolean> processSubmissionSet1(List<Submission> submissionList, HttpHeaders requestHeaders) {
return Observable.create(observer -> {
for (final Submission submission : submissionList) {
//Cache entry insert method invoke via this call
final Boolean status = processSubmissionExecutor.processSubmission(submission, requestHeaders);
observer.onNext(status);
}
observer.onCompleted();
});
}
@Override
public Observable<Boolean> processSubmissionSet2(List<Submission> submissionList, HttpHeaders requestHeaders) {
return Observable.create(observer -> {
for (final Submission submission : submissionList) {
//Cache entry insert method invoke via this call
final Boolean status = processSubmissionExecutor.processSubmission(submission, requestHeaders);
observer.onNext(status);
}
observer.onCompleted();
});
}
}
Above will call from below service API.
class MyService {
public void handleSubmissions(){
final Observable<Boolean> statusObser1 = processSubmission.processSubmissionSet1(subListDtos.get(0), requestHeaders)
.subscribeOn(Schedulers.newThread());
final Observable<Boolean> statusObser2 = processSubmission.processSubmissionSet2(subListDtos.get(1), requestHeaders)
.subscribeOn(Schedulers.newThread());
statusObser1.subscribe();
statusObser2.subscribe();
}
}
So handleSubmissions is called from multiple threads, one per assignment id. Each of those threads then creates two RxJava threads that process the submission lists associated with the assignment.
What would be the best approach to prevent the race condition on the Redis entry while keeping the performance of the RxJava implementation? Is there a more efficient way to do this Redis operation?
It looks like you're only using the ops variable for a put operation at the end, and you could isolate that point, which is where you need to synchronize.
In the short research that I did, I couldn't determine whether HashOperations is already thread-safe.
But an example of how you could just isolate the part you're concerned about is to do something like:
public void processSubmission(final String key, final Map<String, String> submissionDTO) {
final String hashKey = String.valueOf(Hashing.MURMUR_HASH.hash(key));
this.stringRedisTemplate.expire(key, 60, TimeUnit.MINUTES);
Map<String, String> data = findByKey(key);
String json;
if (data != null) {
data.putAll(submissionDTO);
json = convertSubmission(data);
} else {
json = convertSubmission(submissionDTO);
}
putThreadSafeValue(key, hashKey, json);
}
And have a method that is synchronized just for the put operation:
private synchronized void putThreadSafeValue(String key, String hashKey, String json) {
final HashOperations<String, String, String> ops = this.stringRedisTemplate.opsForHash();
ops.put(key, hashKey, json);
}
There are a number of ways to do this, but it looks like you could restrict the thread contention down to that put operation.
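Note that synchronizing only the put still leaves a read-modify-write window between findByKey and the put (and synchronized won't help at all across multiple JVM instances). If that matters, a hedged alternative sketch using Redis's optimistic transaction (WATCH/MULTI/EXEC) via SessionCallback, assuming the same stringRedisTemplate, findByKey, and convertSubmission as above:

public void processSubmissionAtomic(final String key, final Map<String, String> submissionDTO) {
    final String hashKey = String.valueOf(Hashing.MURMUR_HASH.hash(key));
    stringRedisTemplate.execute(new SessionCallback<List<Object>>() {
        @Override
        @SuppressWarnings({"unchecked", "rawtypes"})
        public List<Object> execute(RedisOperations operations) {
            operations.watch(key);                     // abort the transaction if another client writes this key
            Map<String, String> data = findByKey(key); // the read happens under the watch
            String json;
            if (data != null) {
                data.putAll(submissionDTO);
                json = convertSubmission(data);
            } else {
                json = convertSubmission(submissionDTO);
            }
            operations.multi();
            operations.opsForHash().put(key, hashKey, json);
            operations.expire(key, 60, TimeUnit.MINUTES);
            // exec() reports no results when the watched key changed; callers could retry in that case
            return operations.exec();
        }
    });
}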
I am mocking a class with a method that returns a boolean value.
@Service
public boolean writeToHDFS(HashMap<String, String> tableHistoryMap, String text, String tableName, String siteName, String customer,
                           boolean incremental) {
try{
String absoluteFilepath = hdfsService.generateHDFSAbsolutePathNoDate(tableHistoryMap, tableName, siteName, customer, incremental);
Path writeFile = new Path(absoluteFilepath);
if (!hdfsService.HDFSFileExists(writeFile)) {
writer = hdfs.create(writeFile, false);
logger.info("hdfs file create set up");
}
else {
writer = hdfs.append(writeFile);
logger.info("hdfs file append set up");
//TODO: WRITING HERE
//IF stripping capability is enabled, strip headers then
text = hdfsService.parseText(stripHeaders, text, tableName);
}
if(hdfsService.writerHDFS(writer,text)){
return true;
}
}
catch(Exception ex){
logger.error(ex.getMessage());
}
return false;
}
@MockBean
WriteDataIntoHDFSService writeDataIntoHDFSService;
@Test
public void testDepositData(){
KafkaMessage kafkaMessage = new KafkaMessage();
kafkaMessage.setInitialLoadRunning(true);
kafkaMessage.setCustomer("customer");
kafkaMessage.setMessageContent("message");
kafkaMessage.setTableName("table");
kafkaMessage.setInitialLoadComplete(false);
HashMap<String, String> tableMap = new HashMap<>();
tableMap.put("Test","Test");
when(writeDataIntoHDFSService.writeToHDFS(tableMap, kafkaMessage.getMessageContent(), kafkaMessage.getTableName(), "Site", kafkaMessage.getCustomer(),
false)).thenReturn(true);
Assert.assertTrue(kafkaService.depositData(kafkaMessage,"test",kafkaMessage.getCustomer(),tableMap));
}
The assertion fails. I debugged it, and it throws WrongTypeOfReturnValue at when()...thenReturn(true), saying a Boolean cannot be returned by toString().
Log:
org.mockito.exceptions.misusing.WrongTypeOfReturnValue:
Boolean cannot be returned by toString()
toString() should return String
***
If you're unsure why you're getting above error read on.
Due to the nature of the syntax above problem might occur because:
1. This exception *might* occur in wrongly written multi-threaded tests.
Please refer to Mockito FAQ on limitations of concurrency testing.
2. A spy is stubbed using when(spy.foo()).then() syntax. It is safer to stub spies -
- with doReturn|Throw() family of methods. More in javadocs for Mockito.spy() method.
depositData:
public boolean depositData(KafkaMessage kafkaMessage, String topic, HashMap<String, String> tableHistoryMap){
String customer = getCustomer(topic);
String site = getSite(topic);
if (kafkaMessage.isInitialLoadRunning() && !kafkaMessage.isInitialLoadComplete()) {
//runInitial(kafkaMessage, splitKafkaTopic[2]);
if (writeDataIntoHDFSService
.writeToHDFS(tableHistoryMap, kafkaMessage.getMessageContent(), kafkaMessage.getTableName(), site,
kafkaMessage.getCustomer(), false)) {
logger.info("Wrote initial data into HDFS file for: " + kafkaMessage.getTableName());
synchronized (lock) {
kafkaConsumer.commitSync();
logger.info("Kafka commit sync done for initial load.");
}
return true;
}
}
return false;
}
UPDATE: not even sure what I did but it seems to be working now...
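If the error resurfaces, a common workaround, and the one Mockito's own message suggests for spies (it is equally valid for mocks), is stubbing with doReturn() plus argument matchers, so the stub does not depend on exact argument equality. A hedged sketch against the test above, with static imports from org.mockito.Mockito and org.mockito.ArgumentMatchers (Matchers in Mockito 1.x):

// Matcher-based stub: avoids a silent mismatch if the map or the "Site" string differs
doReturn(true).when(writeDataIntoHDFSService)
        .writeToHDFS(any(), anyString(), anyString(), anyString(), anyString(), anyBoolean());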
I am writing a controller that I need to make asynchronous. How can I deal with a list of ListenableFuture? I have a list of URLs that I need to send GET requests to one by one; what is the best solution for this?
@RequestMapping(value = "/repositories", method = RequestMethod.GET)
private void getUsername(@RequestParam(value = "username") String username) {
System.out.println(username);
List<ListenableFuture> futureList = githubRestAsync.getRepositoryLanguages(username);
System.out.println(futureList.size());
}
In the service I use List<ListenableFuture>, which does not seem to work since it is asynchronous: in the controller method I cannot get the size of futureList to run a for loop over it for the callbacks.
public List<ListenableFuture> getRepositoryLanguages(String username){
return getRepositoryLanguages(username, getUserRepositoriesFuture(username));
}
private ListenableFuture getUserRepositoriesFuture(String username) throws HttpClientErrorException {
HttpEntity entity = new HttpEntity(httpHeaders);
ListenableFuture future = restTemplate.exchange(githubUsersUrl + username + "/repos", HttpMethod.GET, entity, String.class);
return future;
}
private List<ListenableFuture> getRepositoryLanguages(final String username, ListenableFuture<ResponseEntity<String>> future) {
final List<ListenableFuture> futures = new ArrayList<>();
future.addCallback(new ListenableFutureCallback<ResponseEntity<String>>() {
@Override
public void onSuccess(ResponseEntity<String> response) {
ObjectMapper mapper = new ObjectMapper();
try {
repositories = mapper.readValue(response.getBody(), new TypeReference<List<Repositories>>() {
});
HttpEntity entity = new HttpEntity(httpHeaders);
System.out.println("Repo size: " + repositories.size());
for (int i = 0; i < repositories.size(); i++) {
futures.add(restTemplate.exchange(githubReposUrl + username + "/" + repositories.get(i).getName() + "/languages", HttpMethod.GET, entity, String.class));
}
} catch (IOException e) {
e.printStackTrace();
}
}
@Override
public void onFailure(Throwable throwable) {
System.out.println("FAILURE in getRepositoryLanguages: " + throwable.getMessage());
}
});
return futures;
}
Should I use something like ListenableFuture<List> instead of List<ListenableFuture>?
It seems like you have a List<ListenableFuture<Result>>, but you want a ListenableFuture<List<Result>>, so you can take one action when all of the futures are complete.
public static <T> ListenableFuture<List<T>> allOf(final List<? extends ListenableFuture<? extends T>> futures) {
// we will return this ListenableFuture, and modify it from within callbacks on each input future
final SettableListenableFuture<List<T>> groupFuture = new SettableListenableFuture<>();
// use a defensive shallow copy of the futures list, to avoid errors that could be caused by
// someone inserting/removing a future from `futures` list after they call this method
final List<? extends ListenableFuture<? extends T>> futuresCopy = new ArrayList<>(futures);
// Count the number of completed futures with an AtomicInteger (to avoid race conditions)
final AtomicInteger resultCount = new AtomicInteger(0);
for (int i = 0; i < futuresCopy.size(); i++) {
futuresCopy.get(i).addCallback(new ListenableFutureCallback<T>() {
@Override
public void onSuccess(final T result) {
int thisCount = resultCount.incrementAndGet();
// if this is the last result, build the ArrayList and complete the GroupFuture
if (thisCount == futuresCopy.size()) {
List<T> resultList = new ArrayList<T>(futuresCopy.size());
try {
for (ListenableFuture<? extends T> future : futuresCopy) {
resultList.add(future.get());
}
groupFuture.set(resultList);
} catch (Exception e) {
// this should never happen, but future.get() forces us to deal with this exception.
groupFuture.setException(e);
}
}
}
@Override
public void onFailure(final Throwable throwable) {
groupFuture.setException(throwable);
// if one future fails, don't waste effort on the others
for (ListenableFuture future : futuresCopy) {
future.cancel(true);
}
}
});
}
return groupFuture;
}
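A hypothetical usage from the controller, assuming the futures are typed as ListenableFuture<ResponseEntity<String>>:

ListenableFuture<List<ResponseEntity<String>>> combined = allOf(futureList);
combined.addCallback(new ListenableFutureCallback<List<ResponseEntity<String>>>() {
    @Override
    public void onSuccess(List<ResponseEntity<String>> responses) {
        // all GET requests have completed; results are in the original order
        responses.forEach(r -> System.out.println(r.getBody()));
    }

    @Override
    public void onFailure(Throwable throwable) {
        System.out.println("FAILURE: " + throwable.getMessage());
    }
});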
I'm not quite sure whether you are starting a new project or working on a legacy one, but if your main requirement is a non-blocking, asynchronous REST service, I would suggest you have a look at the upcoming Spring Framework 5 and its integration with reactive streams. In particular, Spring 5 will allow you to create fully reactive and asynchronous web services with very little code.
For example, a fully functional version of your code can be written with this small snippet:
@RestController
public class ReactiveController {
@GetMapping(value = "/repositories")
public Flux<String> getUsername(@RequestParam(value = "username") String username) {
WebClient client = WebClient.create(new ReactorClientHttpConnector());
ClientRequest<Void> listRepoRequest = ClientRequest.GET("https://api.github.com/users/{username}/repos", username)
.accept(MediaType.APPLICATION_JSON).header("user-agent", "reactive.java").build();
return client.exchange(listRepoRequest).flatMap(response -> response.bodyToFlux(Repository.class)).flatMap(
repository -> client
.exchange(ClientRequest
.GET("https://api.github.com/repos/{username}/{repo}/languages", username,
repository.getName())
.accept(MediaType.APPLICATION_JSON).header("user-agent", "reactive.java").build())
.map(r -> r.bodyToMono(String.class)))
.concatMap(Flux::merge);
}
static class Repository {
private String name;
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
}
}
To run this code locally just clone the spring-boot-starter-web-reactive and copy the code into it.
The result is something like {"Java":50563,"JavaScript":11541,"CSS":1177}{"Java":50469}{"Java":130182}{"Shell":21222,"Makefile":7169,"JavaScript":1156}{"Java":30754,"Shell":7058,"JavaScript":5486,"Batchfile":5006,"HTML":4865}; you can still map it to something more usable in an asynchronous way :)