Manipulating a cache as a collection in Spring - Java

I have looked at a lot of material on the internet but haven't found a solution for my needs.
Here is a sample of code that doesn't work but shows my requirements, for better understanding.
@Service
public class FooCachedService {

    @Autowired
    private MyDataRepository dataRepository;

    // per-recipient "new data available" flag
    // (ConcurrentHashMap does not accept null values, so a marker value is used)
    private static ConcurrentHashMap<Long, Object> cache = new ConcurrentHashMap<>();

    public void save(Data data) {
        Data savedData = dataRepository.save(data);
        if (savedData.getId() != null) {
            cache.put(data.getRecipient(), Boolean.TRUE);
        }
    }

    public Data load(Long recipient) {
        Data result = null;
        if (!cache.containsKey(recipient)) {
            result = dataRepository.findDataByRecipient(recipient);
            if (result != null) {
                cache.remove(recipient);
                return result;
            }
        }
        // poll once per second until the flag for this recipient appears
        while (true) {
            try {
                if (cache.containsKey(recipient)) {
                    result = dataRepository.findDataByRecipient(recipient);
                    break;
                }
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
        return result;
    }
}
and the data object:
public class Data {
    private Long id;
    private Long recipient;
    private String payload;
    // getters and setters
}
As you can see in the code above, I need to implement a service that stores new data into the database and into the cache as well.
The whole algorithm should look something like this:
Some userA creates a POST request to my controller to store data, and it fires the save method of my service.
Another userB, logged in to the system, sends a GET request to my controller, which fires the load method of my service. In this method the logged-in user's id sent with the request is compared with the recipients' ids in the map. If the map contains data for this user, they are fetched with the repository; otherwise the algorithm checks every second whether there is new data for that user (this checking will have some timeout, for example 30s; after 30s the request returns empty data, the user creates a new GET request, and so on...).
Can you tell me whether it is possible to do this in some elegant way, and how? How should a cache be used for that, or what is the best practice? I am new to this area, so I will be grateful for any advice.
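For illustration, here is a minimal sketch (an editorial addition, not from the original post; it reuses the Data class and repository above and assumes the same Spring wiring) that replaces the one-second polling loop with a per-recipient CompletableFuture and the 30-second timeout described above:

import java.util.concurrent.*;

@Service
public class FooCachedService {

    @Autowired
    private MyDataRepository dataRepository;

    // one pending "new data" signal per recipient
    private final ConcurrentHashMap<Long, CompletableFuture<Data>> waiters = new ConcurrentHashMap<>();

    public void save(Data data) {
        Data saved = dataRepository.save(data);
        CompletableFuture<Data> waiter = waiters.remove(saved.getRecipient());
        if (waiter != null) {
            waiter.complete(saved); // wakes a blocked load() immediately
        }
    }

    public Data load(Long recipient) {
        Data existing = dataRepository.findDataByRecipient(recipient);
        if (existing != null) {
            return existing;
        }
        CompletableFuture<Data> waiter =
                waiters.computeIfAbsent(recipient, r -> new CompletableFuture<>());
        try {
            return waiter.get(30, TimeUnit.SECONDS); // the 30s timeout described above
        } catch (InterruptedException | ExecutionException | TimeoutException e) {
            waiters.remove(recipient, waiter);
            return null; // empty result; the client issues a new GET
        }
    }
}

Spring's DeferredResult would give the same long-polling effect without blocking a servlet thread, but the sketch above stays closest to the structure of the original service.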


HAPI FHIR @Create Operation not returning MethodOutcome Response

I was basing my program off of the samples on the HAPI FHIR website, and the operation works in that I receive the JSON body and I'm updating the database. The issue, though, is that no response is being returned. I build the MethodOutcome object and return it, but nothing appears in Postman. I've written @Read and @Search operations as well, and both of those return the resource in the response in Postman, but this @Create doesn't return any response.
ObservationResourceProvider.java
public class ObservationResourceProvider implements IResourceProvider {

    public ObservationResourceProvider() { }

    @Override
    public Class<? extends IBaseResource> getResourceType() {
        return Observation.class;
    }

    @Create()
    public MethodOutcome createObservation(@ResourceParam Observation observation) {
        OpenERMDatabase db = new OpenERMDatabase();
        String newObservationId = db.addNewObservation(observation);

        // return the new id on success, otherwise return an error message
        MethodOutcome retVal = new MethodOutcome();
        if (newObservationId != null) {
            retVal.setId(new IdType("Observation", newObservationId, "1.0"));
            retVal.setCreated(true);
        } else {
            OperationOutcome outcome = new OperationOutcome();
            outcome.addIssue().setDiagnostics("An Error Occurred");
            retVal.setOperationOutcome(outcome);
            retVal.setCreated(false);
        }
        return retVal;
    }
}
SimpleRestfulServer.java
@WebServlet("/*")
public class SimpleRestfulServer extends RestfulServer {

    // initialize
    @Override
    protected void initialize() throws ServletException {
        // create a context for the appropriate version
        setFhirContext(FhirContext.forDstu3());
        // register resource providers
        registerProvider(new PatientResourceProvider());
        registerProvider(new ObservationResourceProvider());
    }
}
I've built an environment and debugged the server-side code, and I'm sure you will get some hint from this: there are three modes defined in PreferReturnEnum. When you specify an extra header with the key "Prefer" and the value "return=OperationOutcome", the value defined in the OperationOutcome will be returned.
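For completeness, a short sketch of sending that Prefer header from a Java client (this assumes HAPI FHIR's generic client API; the server URL is a placeholder):

FhirContext ctx = FhirContext.forDstu3();
IGenericClient client = ctx.newRestfulGenericClient("http://localhost:8080/fhir");

MethodOutcome outcome = client.create()
        .resource(observation)
        .prefer(PreferReturnEnum.OPERATION_OUTCOME) // sends "Prefer: return=OperationOutcome"
        .execute();

In Postman the equivalent is simply adding a Prefer header with the value return=OperationOutcome (or return=representation to get the full resource back).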

Spring WebFlux - how to determine when my client has finished working

I need to call a certain API with multiple query params simultaneously; in order to do that I wanted to use a reactive approach. I ended up with a reactive client that is able to call the endpoint based on a passed SearchQuery, handle pagination of the response, call for the remaining pages, and return a Flux<Item>. So far it works fine; however, what I need to do now is:
Collect data for all search queries and save them as the initial state.
Once the initial data is collected, start repeating those calls in small time intervals and validate each item against the initial data. Basically, I need to find new items from here.
But I'm running out of options for how to solve that. I came up with probably the dirtiest solution ever, but I bet there are much better ways to do it.
So first of all, this is relevant code of my client
public Flux<Item> collectData(final SearchQuery query) {
    final var iteration = new int[]{0}; // mutable page counter captured by the lambda
    return invoke(query, 0)
            .expand(res -> this.handleResponse(res, query, iteration))
            .flatMap(response -> Flux.fromIterable(response.collectItems()));
}

private Mono<ApiResponse> handleResponse(final ApiResponse response, final SearchQuery searchQuery, final int[] iteration) {
    return hasNextPage(response) ? invoke(searchQuery, calculateOffset(++iteration[0])) : Mono.empty();
}

private Mono<ApiResponse> invoke(final SearchQuery query, final int offset) {
    final var url = offset == 0 ? query.toUrlParams() : query.toUrlParamsWithOffset(offset);
    return doInvoke(url).onErrorReturn(ApiResponse.emptyResponse());
}

private Mono<ApiResponse> doInvoke(final String endpoint) {
    return webClient.get()
            .uri(endpoint)
            .retrieve()
            .bodyToMono(ApiResponse.class);
}
And here is my service that is using this client
private final Map<String, Item> initialItems = new ConcurrentHashMap<>();

void work() {
    final var executorService = Executors.newSingleThreadScheduledExecutor();
    queryRepository.getSearchQueries().forEach(query ->
            reactiveClient.collectData(query).subscribe(item -> initialItems.put(item.getId(), item)));
    executorService.scheduleAtFixedRate(() -> {
        if (isReady()) {
            queryRepository.getSearchQueries().forEach(query ->
                    reactiveClient.collectData(query).subscribe(this::process));
        }
    }, 0, 3, TimeUnit.SECONDS);
}

/**
 * If after a 2-second sleep the size of initialItems remains the same,
 * that most likely means the initial population phase is over,
 * and we can proceed with further data processing.
 **/
private boolean isReady() {
    try {
        final var snapshotSize = initialItems.size();
        Thread.sleep(2000);
        return snapshotSize == initialItems.size();
    } catch (Exception e) {
        return false;
    }
}
I think the code speaks for itself: I just want to finish the first phase, the initial data population, and then start processing all incoming data.
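A hedged sketch of one way to chain the two phases (an editorial addition using only the operators and names from the code above; no claim that this is the best approach): let the initial collection complete, and only then switch to the polling Flux, instead of guessing readiness with a sleep.

import java.time.Duration;

Flux.fromIterable(queryRepository.getSearchQueries())
        .flatMap(reactiveClient::collectData)
        .doOnNext(item -> initialItems.put(item.getId(), item))
        .then() // completes once every query's Flux<Item> has completed
        .thenMany(Flux.interval(Duration.ofSeconds(3))
                .concatMap(tick -> Flux.fromIterable(queryRepository.getSearchQueries())
                        .flatMap(reactiveClient::collectData)))
        .subscribe(this::process);

The key point is that then()/thenMany() propagate the completion signal, so the polling phase cannot start before the snapshot is fully built.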

The use of threads and the Java Future interface in AWS Lambda

I want to create an AWS Lambda function in Java that writes to a database in Firestore. The short story is that, while the code does what it should when I execute it on my own computer using NetBeans (the truth is that it works most of the time, but not always, maybe due to problems with my internet connection), nothing at all happens when I deploy it as a Lambda function and invoke it. I suspect that this has less to do with Firestore itself than with how AWS Lambda handles asynchronous operations.
Now to the details!
As a simple example, the method that writes to the Firestore object db reads:
public static void writeFirestore(Firestore db) {
    try {
        DateTime now = DateTime.now();
        String time = now.toString();
        Map<String, String> data = new HashMap<>();
        data.put("time", time);
        String collTitle = "Notebook";
        String docTitle = "Document: " + time;
        // note: set() returns an ApiFuture<WriteResult>; the write itself runs asynchronously
        db.collection(collTitle).document(docTitle).set(data);
        System.out.println("wrote to Firestore");
    } catch (Exception e) {
        System.out.println("Could not write to db: " + e.toString());
    }
}
Now, as it takes some time to connect to Firestore and initialize db, I want to make sure that db is not passed as an argument into writeFirestore() before it has been properly retrieved. So I define a version of db in the form of a Future object, using ExecutorService, and then retrieve the object db with the get() method. For this, I define the class TaskRunner:
public class TaskRunner {

    ExecutorService executor;

    public TaskRunner() {
        executor = Executors.newSingleThreadExecutor();
    }

    public static interface Callback<T> {
        public void onCallback(T result);
    }

    public <T> void executeAsync(Callable<T> callable, Callback<T> callback) throws Exception {
        try {
            Future<T> future = executor.submit(callable);
            T result = future.get(); // blocks until the callable completes
            if (result != null) {
                System.out.println("result is not null; applying callback...");
                callback.onCallback(result);
            } else {
                System.out.println("result is null");
            }
        } catch (Exception e) {
            System.out.println("Problem running executeAsync: " + e.toString());
        }
    }
}
Writing the example document to my fixed database db now goes as follows:
I define the class FirestoreCreator that implements Callable with the purpose of retrieving the Firestore object db:
public static class FirestoreCreator implements Callable<Firestore> {
    @Override
    public Firestore call() throws Exception {
        String projectId = "myProjectId";
        GoogleCredentials credentials =
                GoogleCredentials.fromStream(new FileInputStream("myCredentialsFile.json"));
        FirestoreOptions firestoreOptions = FirestoreOptions.getDefaultInstance()
                .toBuilder()
                .setProjectId(projectId)
                .setCredentials(credentials)
                .build();
        Firestore db = firestoreOptions.getService();
        return db;
    }
}
I implement the TaskRunner.Callback interface using writeFirestore().
I create a TaskRunner object, taskRunner, and call its executeAsync() method with the above two objects as parameters.
These three steps are collected in the final method testUpdateFirestoreInterface() that does the job:
public static void testUpdateFirestoreInterface() {
    FirestoreCreator fsCreator = new FirestoreCreator();
    TaskRunner.Callback<Firestore> updateCallback = new TaskRunner.Callback<Firestore>() {
        @Override
        public void onCallback(Firestore result) {
            writeFirestore(result);
        }
    };
    TaskRunner taskRunner = new TaskRunner();
    try {
        taskRunner.executeAsync(fsCreator, updateCallback);
    } catch (Exception ex) {
        System.out.println("Failed to run executeAsync");
    }
}
As I already mentioned in the introduction, the code works (most of the time) when I run it on my computer, but not at all in AWS Lambda. No exception is thrown, and yet no document is written to Firestore.
The discussion about threads in AWS Lambda (https://dzone.com/articles/multi-threaded-programming-with-aws-lambda) made me suspect that the reason is that a thread spawned via the ExecutorService is not being handled properly.
Does anyone know what goes wrong and what a solution could look like?
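One hedged way to test that suspicion (a sketch only; WriteResult and the ApiFuture come from the Firestore client library, the rest mirrors the code above): block on the future returned by set() so the handler cannot return before the write has been acknowledged, since Lambda freezes background threads once the handler returns.

public static void writeFirestoreBlocking(Firestore db) throws Exception {
    String time = DateTime.now().toString();
    Map<String, String> data = new HashMap<>();
    data.put("time", time);
    WriteResult result = db.collection("Notebook")
            .document("Document: " + time)
            .set(data)
            .get(); // ApiFuture.get() waits for the write to complete
    System.out.println("wrote to Firestore at " + result.getUpdateTime());
}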

Java - How to delete an entity from Google Cloud Datastore

Architecture: I have a web application from which I'm interacting with the Datastore, and a client (a Raspberry Pi) which calls methods from the web application using Google Cloud Endpoints.
I should add that I'm not very familiar with web applications, and I assume that something's wrong with the setConsumed() method, because I can see the call to /create in the App Engine dashboard but there's no entry for /setConsumed.
I'm able to add entities to the Datastore using Objectify:
// client method
private static void sendSensorData(long index, String serialNumber) throws IOException {
    SensorData data = new SensorData();
    data.setId(index + 1);
    data.setSerialNumber(serialNumber);
    sensor.create(data).execute();
}

// api method in the web application
@ApiMethod(name = "create", httpMethod = "post")
public SensorData create(SensorData data, User user) {
    // check if user is authenticated and authorized
    if (user == null) {
        log.warning("User is not authenticated");
        System.out.println("Trying to authenticate user...");
        createUser(user);
        // throw new RuntimeException("Authentication required!");
    } else if (!Constants.EMAIL_ADDRESS.equals(user.getEmail())) {
        log.warning("User is not authorised, email: " + user.getEmail());
        throw new RuntimeException("Not authorised!");
    }
    data.save();
    return data;
}

// method in entity class SensorData
public Key<SensorData> save() {
    return ofy().save().entity(this).now();
}
However, I'm not able to delete an entity from the Datastore using the following code.
EDIT: There are many logs of the create request in Stackdriver Logging, but none for setConsumed(). So it seems the calls don't even reach the API, although both methods are in the same class.
EDIT 2: The entity gets removed when I invoke the method from PowerShell, so the problem is most likely on the client side.
// client method
private static void removeSensorData(long index) throws IOException {
    sensor.setConsumed(index + 1);
}

// api method in the web application
@ApiMethod(name = "setConsumed", httpMethod = "put")
public void setConsumed(@Named("id") Long id, User user) {
    // check if user is authenticated and authorized
    if (user == null) {
        log.warning("User is not authenticated");
        System.out.println("Trying to authenticate user...");
        createUser(user);
        // throw new RuntimeException("Authentication required!");
    } else if (!Constants.EMAIL_ADDRESS.equals(user.getEmail())) {
        log.warning("User is not authorised, email: " + user.getEmail());
        throw new RuntimeException("Not authorised!");
    }
    Key serialKey = KeyFactory.createKey("SensorData", id);
    datastore.delete(serialKey);
}
This is what I follow to delete an entity from the Datastore.
public boolean deleteEntity(String propertyValue) {
    String entityName = "YOUR_ENTITY_NAME";
    String gql = "SELECT * FROM " + entityName + " WHERE property = " + propertyValue;
    Query<Entity> query = Query.newGqlQueryBuilder(Query.ResultType.ENTITY, gql)
            .setAllowLiteral(true).build();
    try {
        QueryResults<Entity> results = ds.run(query);
        if (results.hasNext()) {
            Entity rs = results.next();
            ds.delete(rs.getKey());
            return true;
        }
        return false;
    } catch (Exception e) {
        logger.error(e.getMessage());
        return false;
    }
}
If you don't want to use literals, you can also use binding as follows:
String gql = "SELECT * FROM " + entityName + " WHERE property1 = @prop1 AND property2 = @prop2";
Query<Entity> query = Query.newGqlQueryBuilder(Query.ResultType.ENTITY, gql)
        .setBinding("prop1", propertyValue1)
        .setBinding("prop2", propertyValue2)
        .build();
Hope this helps.
I was able to solve it by myself finally!
The problem was just related to the data type of the index used for removeSensorData(long index): it came out of a for-loop and therefore was an Integer instead of a long.
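For illustration (an editorial sketch, not the asker's exact fix): Datastore numeric ids are 64-bit longs, so a loop index should be widened explicitly before it is used to build or delete a key:

// widen the loop index before using it as a Datastore id
long id = index + 1L;                                   // not: Integer id = index + 1;
Key serialKey = KeyFactory.createKey("SensorData", id); // same key as the one stored by create()
datastore.delete(serialKey);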

Synchronous access to REST web service

I'm in trouble with a simple REST service using this code:
@GET
@Path("next/{uuid}")
@Produces({"application/xml", "application/json"})
public synchronized Links nextLink(@PathParam("uuid") String uuid) {
    Links link = null;
    try {
        link = super.next();
        if (link != null) {
            link.setStatusCode(5);
            link.setProcessUUID(uuid);
            getEntityManager().flush();
            Logger.getLogger("Glassfish Rest Service").log(Level.INFO, "Process {0} request url : {1} #id {2} at {3} #", new Object[]{uuid, link.getLinkTxt(), link.getLinkID(), Calendar.getInstance().getTimeInMillis()});
        }
    } catch (NoResultException ex) {
    } catch (IllegalArgumentException ex) {
    }
    return link;
}
This should provide a link object and mark it as used (setStatusCode(5)) to prevent the next access to the service from sending the same object. The problem is that when there are a lot of fast clients accessing the web service, it provides the same link object two or three times to different clients. How can I solve this?
Here is the query used:
@NamedQuery(name = "Links.getNext", query = "SELECT l FROM Links l WHERE l.statusCode = 2")
and the super.next() method:
public T next() {
    javax.persistence.Query q = getEntityManager().createNamedQuery("Links.getNext");
    q.setMaxResults(1);
    T res = (T) q.getSingleResult();
    return res;
}
Thanks.
The life-cycle of a (root) JAX-RS resource is per request, so the (otherwise correct) synchronized keyword on the nextLink method is sadly ineffectual.
What you need is a means to synchronize the access/update.
This could be done in many ways:
I) You could synchronize on an external object, injected by a framework (for example, a CDI-injected @ApplicationScoped bean), as in:
@ApplicationScoped
public class SyncLink {
    private ReentrantLock lock = new ReentrantLock();

    public Lock getLock() {
        return lock;
    }
}
....
public class MyResource {
    @Inject SyncLink sync;

    @GET
    @Path("next/{uuid}")
    @Produces({"application/xml", "application/json"})
    public Links nextLink(@PathParam("uuid") String uuid) {
        sync.getLock().lock();
        try {
            Links link = null;
            try {
                link = super.next();
                if (link != null) {
                    link.setStatusCode(5);
                    link.setProcessUUID(uuid);
                    getEntityManager().flush();
                    Logger.getLogger("Glassfish Rest Service").log(Level.INFO, "Process {0} request url : {1} #id {2} at {3} #", new Object[]{uuid, link.getLinkTxt(), link.getLinkID(), Calendar.getInstance().getTimeInMillis()});
                }
            } catch (NoResultException ex) {
            } catch (IllegalArgumentException ex) {
            }
            return link;
        } finally {
            sync.getLock().unlock();
        }
    }
}
II) You could be lazy and synchronize on the class:
public class MyResource {

    @GET
    @Path("next/{uuid}")
    @Produces({"application/xml", "application/json"})
    public Links nextLink(@PathParam("uuid") String uuid) {
        Links link = null;
        synchronized (MyResource.class) {
            try {
                link = super.next();
                if (link != null) {
                    link.setStatusCode(5);
                    link.setProcessUUID(uuid);
                    getEntityManager().flush();
                    Logger.getLogger("Glassfish Rest Service").log(Level.INFO, "Process {0} request url : {1} #id {2} at {3} #", new Object[]{uuid, link.getLinkTxt(), link.getLinkID(), Calendar.getInstance().getTimeInMillis()});
                }
            } catch (NoResultException ex) {
            } catch (IllegalArgumentException ex) {
            }
        }
        return link;
    }
}
III) You could synchronize using the database. In that case you would investigate the pessimistic locking available in JPA2.
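A minimal sketch of option III (an editorial addition assuming the named query from the question; LockModeType is standard JPA 2):

TypedQuery<Links> q = em.createNamedQuery("Links.getNext", Links.class);
q.setLockMode(LockModeType.PESSIMISTIC_WRITE); // typically maps to SELECT ... FOR UPDATE
q.setMaxResults(1);
Links link = q.getSingleResult(); // concurrent transactions block here until commit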
You need to use some form of locking, most likely optimistic version locking. This will ensure only one transaction succeeds; the other will fail.
See:
http://en.wikibooks.org/wiki/Java_Persistence/Locking
Depending on how frequent you believe the contention will be in creating new Links, you should choose either optimistic locking using a @Version property or pessimistic locking.
My guess is optimistic locking will work out better for you. In any case, let your Resource class act as a Service Facade, place the model-related code into a Stateless Session Bean EJB, and handle any OptimisticLockException with a simple retry.
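For reference, a sketch of what the @Version approach looks like on the entity (field names are taken from the code in the question; the exact mapping is an assumption):

@Entity
public class Links {

    @Id
    private Long linkID;
    private int statusCode;
    private String processUUID;
    private String linkTxt;

    @Version
    private long version; // JPA increments this on every update and throws an
                          // OptimisticLockException when two transactions race

    // getters and setters
}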
I noticed you mentioned you are having trouble catching locking-related exceptions, and it also looks like you are using EclipseLink. In that case you could try something like this:
@Stateless
public class LinksBean {

    @PersistenceContext(unitName = "MY_JTA_PU")
    private EntityManager em;

    @Resource
    private SessionContext sctx;

    public Links createUniqueLink(String uuid) {
        Links myLink = null;
        boolean shouldRetry = false;
        do {
            try {
                myLink = sctx.getBusinessObject(LinksBean.class).createUniqueLinkInNewTX(uuid);
                shouldRetry = false;
            } catch (OptimisticLockException olex) {
                // retry
                shouldRetry = true;
            } catch (Exception ex) {
                // something else bad happened, so maybe we don't want to retry
                log.error("Something bad happened", ex);
                shouldRetry = false;
            }
        } while (shouldRetry);
        return myLink;
    }

    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public Links createUniqueLinkInNewTX(String uuid) {
        TypedQuery<Links> q = em.createNamedQuery("Links.getNext", Links.class);
        q.setMaxResults(1);
        Links myLink;
        try {
            myLink = q.getSingleResult();
        } catch (NoResultException ex) {
            // no more Links that match my criteria
            myLink = null;
        }
        if (myLink != null) {
            myLink.setProcessUUID(uuid);
            // If you change your getNext NamedQuery to add 'AND l.uuid IS NULL' you
            // could probably obviate the need for changing the status code to 5, but if you
            // really need the status code in addition to the UUID then:
            myLink.setStatusCode(5);
        }
        // When this method returns, the transaction is automatically committed
        // by the container and the entity manager flushes. This is the point where any
        // optimistic lock exception will be thrown by the container. Additionally, you
        // don't need an explicit merge, because myLink is managed as the result of the
        // getSingleResult() call, and as such simply using its setters is enough for
        // EclipseLink to automatically merge it back when it commits the TX.
        return myLink;
    }
}
Your JAX-RS/Jersey Resource class should then look like this:
@Path("/links")
@RequestScoped
public class MyResource {

    @EJB LinksBean linkBean;

    @GET
    @Path("/next/{uuid}")
    @Produces({"application/xml", "application/json"})
    public Links nextLink(@PathParam("uuid") String uuid) {
        Links link = null;
        if (uuid != null) {
            link = linkBean.createUniqueLink(uuid);
            Logger.getLogger("Glassfish Rest Service").log(Level.INFO, "Process {0} request url : {1} #id {2} at {3} #", new Object[]{uuid, link.getLinkTxt(), link.getLinkID(), Calendar.getInstance().getTimeInMillis()});
        }
        return link;
    }
}
That's a semi-polished example of one approach to skin this cat, and there's a lot going on here. Let me know if you have any questions.
Also, from the REST end of things, you might consider using @PUT for this resource instead of @GET, because your endpoint has the side effect of updating the state of the resource (the UUID and/or statusCode), not simply fetching it.
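Illustratively, only the HTTP-method annotation needs to change (a sketch; the body stays as in the @GET version above):

@PUT
@Path("/next/{uuid}")
@Produces({"application/xml", "application/json"})
public Links nextLink(@PathParam("uuid") String uuid) {
    return (uuid != null) ? linkBean.createUniqueLink(uuid) : null;
}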
When using JAX-RS, which is a Java EE feature, my understanding is that you should not manage threads in the Java SE style, for example by using a synchronized block.
In Java EE you can provide synchronized access to your method with a singleton EJB:
#Path("")
#Singleton
public class LinksResource {
#GET
#Path("next/{uuid}")
#Produces({"application/xml", "application/json"})
public Links nextLink(#PathParam("uuid") String uuid) {
By default this will use @Lock(WRITE), which allows only one request at a time into your method.
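A short sketch of making that default explicit (Lock and LockType come from javax.ejb; the read-only method is an assumed addition for illustration):

@Singleton
public class LinksResource {

    @Lock(LockType.WRITE) // exclusive access; the default for a @Singleton's business methods
    public Links nextLink(String uuid) {
        return null; // placeholder; fetch and mark the next link as in the question
    }

    @Lock(LockType.READ) // concurrent access is fine for methods that only read
    public long countLinks() {
        return 0L; // placeholder
    }
}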
