I'd like to cache a list of objects that is available to all methods and needs to be updated periodically. I'm wondering whether this is safe with multiple threads, since a Spring Boot server handles requests concurrently. Do I keep the list in a static field? Or is there a better way to do this?
For example:
@Controller
public class HomeController
{
private static List<String> cachedTerms = new ArrayList<>();
@GetMapping("/getFirstCachedTerm")
public String greeting()
{
if(!cachedTerms.isEmpty())
{
return cachedTerms.get(0);
}else
{
return "no terms";
}
}
//Scheduled to update
private static void updateTerms()
{
//populating from disk IO
cachedTerms.clear();
cachedTerms.add("hello");
}
}
Found out how: by using CopyOnWriteArrayList, which can be read safely even while it is being altered (it is thread safe), and by using the @Scheduled annotation to run the update automatically.
@Controller
public class HomeController
{
private static final List<String> TERMS_CACHE = new CopyOnWriteArrayList<String>();
@GetMapping("/FirstTerm")
public String getFirstTerm()
{
for(String term: TERMS_CACHE)
{
return term;
}
return "no terms";
}
//Scheduled to update
@Scheduled(initialDelay = 1000, fixedRate = 1000)
private static synchronized void updateTerms()
{
//populating from disk IO
TERMS_CACHE.clear();
TERMS_CACHE.add("hello");
}
}
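One caveat with this approach: clear() followed by add() on the CopyOnWriteArrayList is not atomic, so a request arriving between the two calls can briefly see an empty cache. A minimal alternative sketch, under the same controller layout, builds the new list off to the side and publishes it as an immutable snapshot through a volatile field:
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.GetMapping;

@Controller
public class HomeController
{
    // volatile so every request sees the most recently published snapshot
    private static volatile List<String> termsCache = Collections.emptyList();

    @GetMapping("/FirstTerm")
    public String getFirstTerm()
    {
        List<String> snapshot = termsCache; // one read, stable view for this request
        return snapshot.isEmpty() ? "no terms" : snapshot.get(0);
    }

    @Scheduled(initialDelay = 1000, fixedRate = 1000)
    public void updateTerms()
    {
        List<String> fresh = new ArrayList<>();
        fresh.add("hello"); // populate from disk IO here
        termsCache = Collections.unmodifiableList(fresh); // single atomic reference swap
    }
}
Requests then always see either the old snapshot or the new one, never a half-built list.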
I am new to the Reactor framework and trying to utilize it in one of our existing implementations. LocationProfileService and InventoryService both return a Mono, are to be executed in parallel, and have no dependency on each other (from the MainService). Within LocationProfileService there are 4 queries issued, and the last 2 queries depend on the first query.
What is a better way to write this? I see the calls getting executed sequentially, while some of them should be executed in parallel. What is the right way to do it?
public class LocationProfileService {
static final Cache<String, String> customerIdCache; //define Cache
@Override
public Mono<LocationProfileInfo> getProfileInfoByLocationAndCustomer(String customerId, String location) {
//These 2 are not interdependent and can be executed immediately
Mono<String> customerAccountMono = getCustomerAccount(customerId, location).subscribeOn(Schedulers.parallel()).switchIfEmpty(Mono.error(new CustomerNotFoundException(location, customerId))).log();
Mono<LocationProfile> locationProfileMono = Mono.fromFuture(//location query).subscribeOn(Schedulers.parallel()).log();
//Should block be called, or is there a better way to do ?
String custAccount = customerAccountMono.block(); // This is needed to execute and the value from this is needed for the next 2 calls
Mono<Customer> customerMono = Mono.fromFuture(//query uses custAccount from earlier step).subscribeOn(Schedulers.parallel()).log();
Mono<Result<LocationPricing>> locationPricingMono = Mono.fromFuture(//query uses custAccount from earlier step).subscribeOn(Schedulers.parallel()).log();
return Mono.zip(locationProfileMono,customerMono,locationPricingMono).flatMap(tuple -> {
LocationProfileInfo locationProfileInfo = new LocationProfileInfo();
//populate values from tuple
return Mono.just(locationProfileInfo);
});
}
private Mono<String> getCustomerAccount(String conversationId, String customerId, String location) {
return CacheMono.lookup((Map)customerIdCache.asMap(),customerId).onCacheMissResume(Mono.fromFuture(//query).subscribeOn(Schedulers.parallel()).map(x -> x.getAccountNumber()));
}
}
public class InventoryService {
@Override
public Mono<InventoryInfo> getInventoryInfo(String inventoryId) {
Mono<Inventory> inventoryMono = Mono.fromFuture(//inventory query).subscribeOn(Schedulers.parallel()).log();
Mono<List<InventorySale>> isMono = Mono.fromFuture(//inventory sale query).subscribeOn(Schedulers.parallel()).log();
return Mono.zip(inventoryMono,isMono).flatMap(tuple -> {
InventoryInfo inventoryInfo = new InventoryInfo();
//populate value from tuple
return Mono.just(inventoryInfo);
});
}
}
public class MainService {
@Autowired
LocationProfileService locationProfileService;
@Autowired
InventoryService inventoryService;
public void mainService(String customerId, String location, String inventoryId) {
Mono<LocationProfileInfo> locationProfileMono = locationProfileService.getProfileInfoByLocationAndCustomer(....);
Mono<InventoryInfo> inventoryMono = inventoryService.getInventoryInfo(....);
//is using block fine or is there a better way to do?
Mono.zip(locationProfileMono,inventoryMono).subscribeOn(Schedulers.parallel()).block();
}
}
You don't need to block in order to pass that parameter; your code is very close to the solution. I wrote the code using the class names that you provided. Just replace all the Mono.just(....) calls with calls to the correct services.
public Mono<LocationProfileInfo> getProfileInfoByLocationAndCustomer(String customerId, String location) {
Mono<String> customerAccountMono = Mono.just("customerAccount");
Mono<LocationProfile> locationProfileMono = Mono.just(new LocationProfile());
return Mono.zip(customerAccountMono, locationProfileMono)
.flatMap(tuple -> {
Mono<Customer> customerMono = Mono.just(new Customer(tuple.getT1()));
Mono<Result<LocationPricing>> result = Mono.just(new Result<LocationPricing>());
Mono<LocationProfile> locationProfile = Mono.just(tuple.getT2());
return Mono.zip(customerMono, result, locationProfile);
})
.map(LocationProfileInfo::new)
;
}
public static class LocationProfileInfo {
public LocationProfileInfo(Tuple3<Customer, Result<LocationPricing>, LocationProfile> tuple){
//do whatever
}
}
public static class LocationProfile {}
private static class Customer {
public Customer(String customerAccount) {
}
}
private static class Result<T> {}
private static class LocationPricing {}
Please remember that the first zip is not necessary. I rewrote it to match your solution, but I would solve the problem a little bit differently; it would be clearer.
public Mono<LocationProfileInfo> getProfileInfoByLocationAndCustomer(String customerId, String location) {
return Mono.just("customerAccount") //call the service
.flatMap(customerAccount -> {
//declare the call to get the customer
Mono<Customer> customerMono = Mono.just(new Customer(customerAccount));
//declare the call to get the location pricing
Mono<Result<LocationPricing>> result = Mono.just(new Result<LocationPricing>());
//declare the call to get the location profile
Mono<LocationProfile> locationProfileMono = Mono.just(new LocationProfile());
//in the zip call all the services actually are executed
return Mono.zip(customerMono, result, locationProfileMono);
})
.map(LocationProfileInfo::new)
;
}
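The same idea applies to the block() in MainService. A sketch, assuming the caller can either return the Mono further up the chain or block only at the very edge of the application:
public Mono<Void> mainService(String customerId, String location, String inventoryId) {
    Mono<LocationProfileInfo> locationProfileMono =
            locationProfileService.getProfileInfoByLocationAndCustomer(customerId, location);
    Mono<InventoryInfo> inventoryMono = inventoryService.getInventoryInfo(inventoryId);
    // zip subscribes to both Monos, so the two service calls run concurrently;
    // block() is only needed at the outermost edge (a main method, a test, ...)
    return Mono.zip(locationProfileMono, inventoryMono)
            .doOnNext(tuple -> {
                // use tuple.getT1() (location profile) and tuple.getT2() (inventory) here
            })
            .then();
}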
I have a question on the use of IO operations within java.util.function.Predicate. Please consider the following example:
public class ClientGroupFilter implements Predicate<Client> {
private GroupMapper mapper;
private List<String> validGroupNames = new ArrayList<>();
public ClientGroupFilter(GroupMapper mapper) {
this.mapper = mapper;
}
@Override
public boolean test(Client client) {
// this is a database call
Set<Integer> validIds = mapper.getValidIdsForGroupNames(validGroupNames);
return client.getGroupIds().stream().anyMatch(validIds::contains);
}
public void permit(String name) {
validGroupNames.add(name);
}
}
As you can see, this filter accepts any number of server group names, which are resolved by the mapper when a specific client is tested. If the client owns one of the valid server groups, true is returned.
Now, of course, it is obvious that this is totally inefficient if the filter is applied to multiple clients. So refactoring led me to this:
public class ClientGroupFilter implements Predicate<Client> {
private GroupMapper mapper;
private List<String> validGroupNames = new ArrayList<>();
private boolean updateRequired = true;
private Set<Integer> validIds = new HashSet<>();
public ClientGroupFilter(GroupMapper mapper) {
this.mapper = mapper;
}
@Override
public boolean test(Client client) {
if(updateRequired) {
// this is a database call
validIds = mapper.getValidIdsForGroupNames(validGroupNames);
updateRequired = false;
}
return client.getGroupIds().stream().anyMatch(validIds::contains);
}
public void permit(String name) {
validGroupNames.add(name);
updateRequired = true;
}
}
The performance is a lot better, of course, but I'm still not happy with the solution, since I feel like java.util.function.Predicate should not be used like this. However, I still want to be able to provide a fast way to filter a list of clients without requiring the consumer to map the server group names to their ids.
Does anyone have a better idea for refactoring this?
If your usage pattern is such that you call permit several times, and then use Predicate<Client> without calling permit again, you can separate the code that collects validGroupNames from the code of your predicate by using a builder:
class ClientGroupFilterBuilder {
private final GroupMapper mapper;
private List<String> validGroupNames = new ArrayList<>();
public ClientGroupFilterBuilder(GroupMapper mapper) {
this.mapper = mapper;
}
public void permit(String name) {
validGroupNames.add(name);
}
public Predicate<Client> build() {
final Set<Integer> validIds = mapper.getValidIdsForGroupNames(validGroupNames);
return new Predicate<Client>() {
@Override
public boolean test(Client client) {
return client.getGroupIds().stream().anyMatch(validIds::contains);
}
};
}
}
This restricts building of validIds to the point where we construct the Predicate<Client>. Once the predicate is constructed, no further input is necessary.
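A short usage sketch, assuming a GroupMapper mapper and a List<Client> clients are already in scope:
ClientGroupFilterBuilder builder = new ClientGroupFilterBuilder(mapper);
builder.permit("groupA");
builder.permit("groupB");
Predicate<Client> filter = builder.build(); // the single database call happens here

List<Client> matching = clients.stream()
        .filter(filter)
        .collect(Collectors.toList());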
Still struggling with properly making a cache bean. From what I have read, I think I want the bean to be a singleton; I will only need one instance of it, and I will use it to get often-used keywords and so on.
http://blog.defrog.nl/2013/02/prefered-way-for-referencing-beans-from.html
I used this pattern to make my CacheBean (and used a utility method).
If I make this a managed bean by putting it into faces-config, then I can easily get the value of models:
<xp:text escape="true" id="computedField1"
value="#{CacheBean.models}"></xp:text>
The JSF takes care of instantiating the bean for me.
But I don't want it to reload the same values (like models) over and over. I thought that to get that to happen I needed to make a POJO and grab the currentInstance of the bean, as in the URL above.
However, when I made this change (taking the bean out of the faces-config file), I cannot seem to get a handle on the properties.
This won't even compile:
<xp:text escape="true" id="computedField1"
value="#{Cache.getCurrentInstance().models}">
</xp:text>
What am I doing wrong?
================================
package com.scoular.cache;
import java.io.Serializable;
import java.util.Vector;
import org.openntf.domino.Database;
import org.openntf.domino.Session;
import org.openntf.domino.ViewEntry;
import org.openntf.domino.ViewNavigator;
import org.openntf.domino.utils.Factory;
import org.openntf.domino.xsp.XspOpenLogUtil;
import com.scoular.Utils;
public class CacheBean implements Serializable {
private static final long serialVersionUID = -2665922853615670023L;
public static final String BEAN_NAME = "CacheBean";
private String pcAppDBpath;
private String pcDataDBpath;
private String compDirDBpath;
private Vector<Object> models = new Vector<Object>();
public CacheBean() {
initConfigData();
}
private void initConfigData() {
try {
loadModels();
loadDBPaths();
} catch (Exception e) {
XspOpenLogUtil.logError(e);
}
}
// Getters and Setters
public Vector<Object> getModels() {
// getter needed so EL like #{CacheBean.models} can resolve the property
return models;
}
public static CacheBean getInstance(String beanName) {
return (CacheBean) Utils.getVariableValue(beanName);
}
public static CacheBean getInstance() {
return getInstance(BEAN_NAME);
}
public String getPcDataDBpath() {
return pcDataDBpath;
}
public void setPcDataDBpath(String pcDataDBpath) {
this.pcDataDBpath = pcDataDBpath;
}
public void loadDBPaths() {
Session session = Factory.getSession();
Database tmpDB = session.getCurrentDatabase();
pcAppDBpath = (tmpDB.getServer() + "!!" + "scoApps\\PC\\PCApp.nsf");
pcDataDBpath = (tmpDB.getServer() + "!!" + "scoApps\\PC\\PCData.nsf");
compDirDBpath = (tmpDB.getServer() + "!!" + "compdir.nsf");
}
public void loadModels() {
try {
Session session = Factory.getSession();
Database tmpDB = session.getCurrentDatabase();
Database PCDataDB = session.getDatabase(tmpDB.getServer(), "scoApps\\PC\\PCData.nsf");
ViewNavigator vn = PCDataDB.getView("dbLookupModels").createViewNav();
ViewEntry entry = vn.getFirst();
while (entry != null) {
Vector<Object> thisCat = entry.getColumnValues();
if (entry.isCategory()) {
String thisCatString = thisCat.elementAt(0).toString();
models.addElement(thisCatString);
}
entry = vn.getNextCategory();
}
} catch (Exception e) {
XspOpenLogUtil.logError(e);
}
}
package com.scoular;
import javax.faces.context.FacesContext;
public class Utils {
public static Object getVariableValue(String varName) {
FacesContext context = FacesContext.getCurrentInstance();
return context.getApplication().getVariableResolver().resolveVariable(context, varName);
}
}
When the bean has the right scope, you can access the bean directly if it has been created.
private static final String BEAN_NAME = "CacheBean";
//access to the bean
public static CacheBean get() {
return (CacheBean) JSFUtil.resolveVariable(BEAN_NAME);
}
//in my JSFUtil class I have the method
public static Object resolveVariable(String variable) {
return FacesContext.getCurrentInstance().getApplication().getVariableResolver().resolveVariable(FacesContext.getCurrentInstance(), variable);
}
so in a Java Class you can call
CacheBean.get().models
in EL you can use
CacheBean.models
I can tell you why it's not compiling at least.
value="#{Cache.getCurrentInstance().models}"
That's EL. So there should not be a get or a (). You want
value="#{Cache.currentInstance.models}"
And check your var name as I thought you were using CacheBean and not Cache.
@CachePut or @Cacheable(value = "CustomerCache", key = "#id")
public Customer updateCustomer(Customer customer) {
sysout("i am inside updateCustomer");
....
return customer;
}
I found the below documentation in the CachePut source code:
CachePut annotation does not cause the target method to be skipped -
rather it always causes the method to be invoked and its result to be
placed into the cache.
Does it mean that if I use @Cacheable, the updateCustomer method will be executed only once and the result will be stored in the cache? Subsequent calls to
updateCustomer will not execute updateCustomer; they will just be served from the cache.
While in the case of @CachePut, updateCustomer will be executed on each call and the result will be updated in the cache.
Is my understanding correct?
Yes.
I even made a test to be sure:
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = CacheableTest.CacheConfigurations.class)
public class CacheableTest {
public static class Customer {
final private String id;
final private String name;
public Customer(String id, String name) {
this.id = id;
this.name = name;
}
public String getId() {
return id;
}
public String getName() {
return name;
}
}
final public static AtomicInteger cacheableCalled = new AtomicInteger(0);
final public static AtomicInteger cachePutCalled = new AtomicInteger(0);
public static class CustomerCachedService {
@Cacheable("CustomerCache")
public Customer cacheable(String v) {
cacheableCalled.incrementAndGet();
return new Customer(v, "Cacheable " + v);
}
@CachePut("CustomerCache")
public Customer cachePut(String b) {
cachePutCalled.incrementAndGet();
return new Customer(b, "Cache put " + b);
}
}
@Configuration
@EnableCaching()
public static class CacheConfigurations {
@Bean
public CustomerCachedService customerCachedService() {
return new CustomerCachedService();
}
@Bean
public CacheManager cacheManager() {
return new GuavaCacheManager("CustomerCache");
}
}
@Autowired
public CustomerCachedService cachedService;
@Test
public void testCacheable() {
for(int i = 0; i < 1000; i++) {
cachedService.cacheable("A");
}
Assert.assertEquals(cacheableCalled.get(), 1);
}
@Test
public void testCachePut() {
for(int i = 0; i < 1000; i++) {
cachedService.cachePut("B");
}
Assert.assertEquals(cachePutCalled.get(), 1000);
}
}
@CachePut always lets the method execute. It is generally used if you want your cache to be updated with the result of the method execution.
Example: when you want to update stale cached data instead of blowing away the cache completely.
@Cacheable will be executed only once for the given cache key, and subsequent requests won't execute the method until the cache expires or gets flushed.
Yes, you are absolutely correct.
@CachePut and @Cacheable are used in conjunction.
@Cacheable will not update the cache on every call. In order to refresh stale data, there must be a service method that uses @CachePut to overwrite the stale entry.
The note below is for those who are using Guava caching to build the cache.
With Guava caching, the configured time interval empties the cache after a certain period, which is not the case with @CachePut. @CachePut only updates the stale values, and hence it calls the method every time to update the cache.
I hope this answers your question.
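To make the usual pairing concrete, here is a small sketch (the service, cache name, key and load/save helpers are illustrative, not taken from the question): reads go through @Cacheable, updates go through @CachePut so the cached entry is refreshed rather than left stale, and removals go through @CacheEvict.
@Service
public class CustomerService {

    @Cacheable(value = "CustomerCache", key = "#id")
    public Customer getCustomer(String id) {
        return loadFromDatabase(id);     // executed only on a cache miss
    }

    @CachePut(value = "CustomerCache", key = "#customer.id")
    public Customer updateCustomer(Customer customer) {
        return saveToDatabase(customer); // always executed; the result replaces the cached entry
    }

    @CacheEvict(value = "CustomerCache", key = "#id")
    public void deleteCustomer(String id) {
        deleteFromDatabase(id);          // always executed; the entry is removed from the cache
    }

    // loadFromDatabase, saveToDatabase and deleteFromDatabase are hypothetical helpers
}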
I have the following classes:
public enum TaskType {
VERIFY_X_TASK, COMPUTE_Y_TASK, PROCESS_Z_TASK;
}
public interface Task{
void process();
}
@Component
public class VerifyXTask implements Task{
// Similar classes for the other types of tasks
public void process() {
}
}
@Component
public class TaskFactory{
private Map<TaskType, Task> tasks;
public Task getTask(TaskType type){
return tasks.get(type); // return a singleton with all its fields injected by the application context
}
}
class UseTool{
@Autowired
private TaskFactory taskFactory;
public void run(String taskType){
Task task = taskFactory.getTask(TaskType.valueOf(taskType));
task.process();
}
}
What is the most elegant way of injecting the association between TaskType and Task into the factory?
Consider that there are almost 100 task types and that these may change quite frequently.
--
Further explanations:
I could do something like this in the TaskFactory class:
tasks.put(TaskType.VERIFY_X_TASK, new VerifyTask());
tasks.put(TaskType.COMPUTE_Y_TASK, new ComputeTask());
tasks.put(TaskType.PROCESS_Z_TASK, new ProcessTask());
But this does not inject any properties in the Task object.
I would suggest the following approach:
Define a custom annotation @ImplementsTask that takes a TaskType as a parameter, so that you can write your implementation class like this:
@Component
@ImplementsTask(TaskType.VERIFY_X_TASK)
public class VerifyXTask implements Task {
...
(Or you can meta-annotate @Component to avoid having to use it on all the classes.)
Inject all of the identified Task objects into your factory:
@Autowired
private Set<Task> scannedTasks;
In a @PostConstruct method on the factory, iterate over each of the elements in scannedTasks, reading the annotation value and adding a Map entry (to an EnumMap, of course). You'll need to decide how to deal with duplicate implementations for a given TaskType.
This will require a bit of reflection work in the factory setup, but it means that you can just annotate a Task implementation with the appropriate value and have it scanned in without any additional work by the implementor.
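A minimal sketch of that wiring, assuming the annotation name and the duplicate-handling policy described above (the two types are shown together for brevity but belong in separate files):
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.util.EnumMap;
import java.util.Map;
import java.util.Set;
import javax.annotation.PostConstruct;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.core.annotation.AnnotationUtils;
import org.springframework.stereotype.Component;

// ImplementsTask.java - marks a Task implementation with the TaskType it handles
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
public @interface ImplementsTask {
    TaskType value();
}

// TaskFactory.java
@Component
public class TaskFactory {

    @Autowired
    private Set<Task> scannedTasks;

    private final Map<TaskType, Task> tasks = new EnumMap<>(TaskType.class);

    @PostConstruct
    void buildTaskMap() {
        for (Task task : scannedTasks) {
            // findAnnotation also searches superclasses; if your beans are proxied,
            // resolve the target class first (e.g. AopUtils.getTargetClass(task))
            ImplementsTask marker = AnnotationUtils.findAnnotation(task.getClass(), ImplementsTask.class);
            if (marker == null) {
                continue; // or throw, depending on how strict you want to be
            }
            Task previous = tasks.put(marker.value(), task);
            if (previous != null) {
                throw new IllegalStateException("Duplicate task for " + marker.value());
            }
        }
    }

    public Task getTask(TaskType type) {
        return tasks.get(type);
    }
}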
I ran into a similar kind of problem, and this is what I actually did; it may be helpful.
Define a Tasks enum like:
public enum Tasks {
Task1(SubTasks.values());
Tasks(PagesEnumI[] pages) {
this.pages = pages;
}
PagesEnumI[] pages;
// define setter and getter
}
Define the subtasks like:
public interface PagesEnumI {
String getName();
String getUrl();
}
public enum SubTasks implements PagesEnumI {
Home("home_url");
SubTasks(String url) {
this.url = url;
}
private String url;
@Override
public String getUrl() {
return url;
}
@Override
public String getName() {
return this.name();
}
}
Define a service to be called per SubTasks entry like:
public interface PageI {
void process();
Tasks getTaskName();
PagesEnumI getSubTaskName();
}
@Component
public class Home implements PageI {
// function per SubTask to process
@Override
public void process() {}
// to get the information about Main Task
@Override
public Tasks getTaskName() {
return Tasks.Task1;
}
// to get the information about Sub Task
@Override
public PagesEnumI getSubTaskName() {
return SubTasks.Home;
}
}
Define a factory like...
@Component
public class PageFactory {
Set<PageI> pages;
// HashMap for keeping objects into
private static HashMap<String, PageI> pagesFactory = new HashMap<>();
@Autowired
public void setPages(Set<PageI> pages) {
this.pages = pages;
}
// construct key by
private static String constructKey(Tasks task, PagesEnumI page) {
return task.name() + "__" + page.getName();
}
// @PostConstruct means this method runs after the object has been constructed
// iterating over all pages and storing into Map
@PostConstruct
private void postConstruct() {
for (PageI pageI : pages) {
pagesFactory.put(constructKey(pageI.getTaskName(), pageI.getSubTaskName()), pageI);
}
}
// getting object from factory
public PageI getPageObject(Tasks task, PagesEnumI page) {
return pagesFactory.get(constructKey(task, page));
}
}
So far we have registered our enums (Tasks and SubTasks) and their services (with getters for the Tasks and SubTasks), and defined a factory to look the services up. Now we call each service's process method from a runner:
@SpringBootApplication
public class Application implements CommandLineRunner {
PageFactory factory;
@Autowired
public void setFactory(PageFactory factory) {
this.factory = factory;
}
public static void main(String[] args) {
SpringApplication.run(Application.class, args);
}
@Override
public void run(String... args) throws Exception {
// for each task we might have different sub task
Arrays.stream(Tasks.values()).forEach(
task -> {
// for each and subtask of a task need to perform process
for (PagesEnumI page : task.getPages()) {
PageI pageI = factory.getPageObject(task, page);
pageI.process();
}
}
);
}
}
This is not exactly the same problem, but the way to solve it may be similar, so I thought it might be helpful to put it here. Please don't go by the names; I am just trying to convey the concept. If anyone has more input, please share.
Let Task tell the factory which TaskType it supports.
It can be done using a plain old Java method, no Spring annotations required.
public interface Task {
void process();
TaskType supportedType();
}
@Component
public class VerifyXTask implements Task {
@Override
public void process() {
}
@Override
public TaskType supportedType() {
return TaskType.VERIFY_X_TASK;
}
}
@Component
public class TaskFactory {
private Map<TaskType, Task> tasks;
public TaskFactory(List<Task> tasks) {
this.tasks = tasks.stream()
.collect(Collectors.toMap(Task::supportedType, Function.identity()));
}
public Task getTask(TaskType type) {
return tasks.get(type);
}
}
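One detail about that constructor: Collectors.toMap throws an IllegalStateException when two Task beans return the same supportedType, which is usually what you want. If you prefer an EnumMap and a more explicit error message, a variation on the same constructor (a sketch, same classes as above) could look like this:
public TaskFactory(List<Task> tasks) {
    this.tasks = new EnumMap<>(TaskType.class);
    for (Task task : tasks) {
        Task previous = this.tasks.put(task.supportedType(), task);
        if (previous != null) {
            throw new IllegalStateException("Two tasks registered for " + task.supportedType());
        }
    }
}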