I'm trying to map a Supplier bean to an Azure Function using Spring Cloud Function 2.0, but I need to extend AzureSpringBootRequestHandler, which seems to support only functions with an input parameter and a return value: AzureSpringBootRequestHandler has two type parameters (input and output), and AzureSpringBootRequestHandler.handleRequest() also expects an input argument.
@Bean
public Supplier<List<String>> foo() {
    return () -> Arrays.asList("foo1", "foo2");
}
/////
class FooFunction extends AzureSpringBootRequestHandler<Void, List<String>> {

    @FunctionName("foo")
    List<String> foo(@HttpTrigger(name = "req", methods = {HttpMethod.GET, HttpMethod.POST},
            authLevel = AuthorizationLevel.FUNCTION) HttpRequestMessage<Optional<String>> request,
            ExecutionContext context) {
        return handleRequest(null, context);
    }
}
The code above throws a NullPointerException at reactor.core.publisher.FluxJust.<init>(FluxJust.java:60).
Changing the @Bean return type to Function<Void, List<String>> instead causes an IllegalStateException ("No function defined with name=foo") at AzureSpringFunctionInitializer.lookup.
Adding a dummy int parameter works.
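To be concrete, the dummy-parameter workaround looks roughly like this: the bean becomes a Function with an ignored input, the handler extends AzureSpringBootRequestHandler<Integer, List<String>>, and it calls handleRequest(0, context):

@Bean
public Function<Integer, List<String>> foo() {
    return dummy -> Arrays.asList("foo1", "foo2");   // the dummy input is ignored
}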
P.S. Ideally I don't even need the return value, so instead of a Supplier I would make it a Runnable, but that seems completely unsupported.
Any help would be appreciated.
Support for Supplier and Consumer was added in Spring Cloud Function 3.0.0, which is currently still a milestone release.
More information about this change can be found in the Spring Cloud Function 3.0.0 release notes.
I solved the issue with Spring Cloud Function 2.x by changing the signature of the AzureSpringBootRequestHandler subclass to use Optional, as follows:
public class SomeFunction extends AzureSpringBootRequestHandler<Optional<?>, List<Foo>> {

    @FunctionName("some-function")
    public List<Foo> execute(@HttpTrigger(name = "req",
            methods = {HttpMethod.GET},
            authLevel = AuthorizationLevel.ANONYMOUS) HttpRequestMessage<Void> request,
            ExecutionContext context) {
        return handleRequest(Optional.empty(), context);
    }
}
You'll also have to change the type of your bean to match this:
@Bean(name = "some-function")
public Function<Optional<?>, List<Foo>> someFunction() {
    return v -> fooService.bar();
}
I need to test a class that uses another interface as an injected dependency together with a lambda Consumer.
@Builder
public class Interactor {

    private final Gateway gateway;

    void process(String message, Consumer<String> response) {
        gateway.process(message, uuid -> {
            response.accept(uuid.toString());
        });
    }
}
The dependency is defined like so:
public interface Gateway {
    void process(String message, Consumer<UUID> uuid);
}
How would I mock the Gateway so that I can provide a UUID value to the response consumer in my test?
This is what I have tried so far:
@Test
void willReturnTheInjectedUUIDValueTest() {
    UUID uuid = UUID.randomUUID();
    Gateway gateway = Mockito.mock(Gateway.class);
    Mockito.when(gateway.process("ignored", uuid1 -> { return uuid; }));
    Interactor.builder().gateway(gateway).build().process("ignored", s -> {
        Assertions.assertEquals(uuid.toString(), s);
    });
}
How should I provide the return value to the consumer?
This one did the trick:
@Test
void willReturnTheInjectedUUIDValueTest() {
    UUID uuid = UUID.randomUUID();
    Gateway gateway = Mockito.mock(Gateway.class);
    doAnswer(ans -> {
        Consumer<UUID> callback = (Consumer<UUID>) ans.getArgument(1);
        callback.accept(uuid);
        return null;
    }).when(gateway).process(Mockito.any(String.class), Mockito.any(Consumer.class));
    Interactor.builder().gateway(gateway).build().process("ignored", s -> {
        Assertions.assertEquals(uuid.toString(), s);
    });
}
This answer was the hint -> https://stackoverflow.com/a/47226374/148397
For now I ended up writing my own fake implementation as an inner class, without Mockito.
class InteractorTest {

    @Test
    void willReturnTheInjectedUUIDValueTest() {
        UUID uuid = UUID.randomUUID();
        Gateway gateway = GatewayMock.builder().value(uuid).build();
        Interactor.builder().gateway(gateway).build().process("ignored", s -> {
            Assertions.assertEquals(uuid.toString(), s);
        });
    }

    @Builder
    private static class GatewayMock implements Gateway {

        private final UUID value;

        @Override
        public void process(String message, Consumer<UUID> uuid) {
            uuid.accept(value);
        }
    }
}
What you want...
is to verify the result of the logic of Interactor#process, which takes a Consumer<String> and basically just calls Gateway#process with a Consumer<UUID> implementation that calls back to the Consumer<String> parameter.
Ideally you want to do this just by calling Interactor#process and verifying the call to the Consumer<String> you supply.
Your problem is...
that it is up to the implementator of Gateway to trigger the Consumer<UUID> (and consequently the Consumer<String>). So you need to inject an implementation of Gateway that does just that.
The solution is...
along the lines of the answer you provided (i.e. using a Fake Object instead of a Mock), unless there is a real implementation of Gateway in your production code that you can easily use instead (but I assume you'd be doing that already if that was the case).
Can you do it with Mockito? Sure, you can call methods on a mock object's method arguments, as shown in this answer: https://stackoverflow.com/a/16819818/775138 - but why would you go there if you can provide a sufficient Gateway implementation in a single line:
Gateway gw = (message, consumer) -> consumer.accept(uuid);
where uuid is the UUID instance that you constructed before. So no need to declare a static inner class just for this.
However...
there is an important issue with your assertion: if Interactor does nothing at all instead of calling the Gateway, the Consumer<String> you provide (containing the assertion) won't get triggered, so your test will pass! Just assign the value to a member variable of your test class, and assert on that outside the lambda.
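For example, a minimal sketch of that idea (the received field and the one-line fake are illustrative choices):

class InteractorTest {

    private String received;   // written by the callback, asserted afterwards

    @Test
    void willReturnTheInjectedUUIDValueTest() {
        UUID uuid = UUID.randomUUID();
        Gateway gateway = (message, consumer) -> consumer.accept(uuid);   // one-line fake Gateway

        Interactor.builder().gateway(gateway).build().process("ignored", s -> received = s);

        // this assertion runs even if Interactor never invokes the callback, so a missing call fails the test
        Assertions.assertEquals(uuid.toString(), received);
    }
}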
I want to create an action that I can use with the @With annotation style. This action needs to make an RPC call, so if I understood the documentation correctly, I should do this asynchronously.
This is what I tried to do until now:
public class GetUserIdAction extends play.mvc.Action.Simple {

    @Override
    public CompletionStage<Result> call(Http.Context context) {
        String token = "";
        if (StringUtils.isEmpty(token)) {
            return delegate.call(context);
        }
        CompletionStage<Http.Context> promiseOfUpdatedContext =
                CompletableFuture.supplyAsync(() -> setUserIdForToken(context, token));
        return promiseOfUpdatedContext.thenApply(ctx -> delegate.call(ctx));
    }

    private Http.Context setUserIdForToken(Http.Context context, String token) {
        // The AuthenticationManager issues an RPC call and thus may take some time to complete.
        context.args.put("user_id", authenticationManager.getUserIdForToken(token));
        return context;
    }
}
Set aside the fact that the token is always empty and authenticationManager is never set; this is just a quick, meaningless example. My IDE complains about the thenApply part: as far as I understand, it expects a CompletionStage<Result> but gets something more like a CompletionStage<CompletionStage<Result>>.
What is the right way to deal with this? All I want is to put some information into the Context and then continue the delegate.call chain.
Or maybe I'm trying to do something stupid and composed actions are already asynchronous?
You have a CompletionStage<Something> and want to end with a CompletionStage<Result>. The easiest way to achieve that is using thenCompose.
Here is an example, with a small change: I use a CompletableFuture to fetch the token first, and only then add it to the Http.Context:
@Override
public CompletionStage<Result> call(final Http.Context context) {
    final String token = "";
    if (StringUtils.isEmpty(token)) {
        return delegate.call(context);
    }
    return CompletableFuture.supplyAsync(() -> {
        // do something to fetch that token
        return "your_new_token";
    }).thenCompose(tokenReceived -> {
        context.args.put("user_id", tokenReceived);
        return delegate.call(context);
    });
}
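To make the difference explicit, here is a small type-level sketch (assuming it sits inside the same action class, so delegate and context are in scope; tokenStage is just an illustrative name):

CompletionStage<String> tokenStage = CompletableFuture.supplyAsync(() -> "your_new_token");

// thenApply only maps the value, so the stage returned by delegate.call() stays nested:
CompletionStage<CompletionStage<Result>> nested = tokenStage.thenApply(t -> delegate.call(context));

// thenCompose flattens the nesting into the CompletionStage<Result> that Play expects:
CompletionStage<Result> flat = tokenStage.thenCompose(t -> delegate.call(context));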
I have a method that pulls in a large amount of data. It can take a fair amount of time because of the size of the data set and the computation involved. The method will be called many times and should return the same result list each time, so I want to cache the results and do the computation only once. I'm supposed to use the CacheBuilder class. The code is essentially something like this:
class CheckValidValues implements AValidValueInterface {

    private ADataSourceInterface dataSource;

    public CheckValidValues(ADataSourceInterface dataSource) {
        this.dataSource = dataSource;
    }

    @Override
    public void validate(String value) {
        List<?> validValues = dataSource.getValidValues();
        if (!validValues.contains(value)) {
            // throw an exception
        }
    }
}
So I'm not even sure where I should put the caching (i.e. in the CheckValidValues class or in the getValidValues() method of the data source). Also, I'm not entirely sure how to add the cache to one of those methods without instantiating it multiple times. Here's the route I'm trying to take, though I have no idea if it's correct: adding the following above the List<?> validValues = dataSource.getValidValues() line:
LoadingCache<String, List<?>> validValuesCache = CacheBuilder.newBuilder()
        .expireAfterAccess(30, TimeUnit.SECONDS)
        .build(new CacheLoader<String, List<?>>() {
            public List<?> load(@Nonnull String validValues) {
                return valuesSupplier.getValidValues();
            }
        });
Then later, I'd think I could get that value with:
validValuesCache.get("validValues");
What I think should happen is that it will run getValidValues() and store the result in the cache. However, if the surrounding method is called multiple times, then, as far as I can tell, a new cache would be created each time.
Any idea what I should do for this? I simply want to add the results of the getValidValues() method to cache so that it can be used in the next iteration without having to redo any computations.
You only want to cache a single value, the list of valid values. Use Guava's Suppliers.memoizeWithExpiration(Supplier delegate, long duration, TimeUnit unit).
Each valid value exists only once, so your List is essentially a Set. Back it with a HashSet (or a more efficient variant from Guava). That way contains() is a hash-table lookup instead of a sequential search through the list.
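A rough sketch of the memoization idea (not tested; the field placement and the HashSet copy are illustrative choices, using com.google.common.base.Suppliers):

class CheckValidValues implements AValidValueInterface {

    private final Supplier<Set<Object>> validValues;   // com.google.common.base.Supplier

    public CheckValidValues(ADataSourceInterface dataSource) {
        this.validValues = Suppliers.memoizeWithExpiration(
                () -> new HashSet<Object>(dataSource.getValidValues()),   // computed on first use, then reused
                30, TimeUnit.SECONDS);                                    // recomputed lazily after 30 seconds
    }

    @Override
    public void validate(String value) {
        // contains() is now a hash lookup instead of a linear scan
        if (!validValues.get().contains(value)) {
            // throw an exception
        }
    }
}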
We use Guava and Spring-Caching in a couple of projects where we defined the beans via Java configuration like this:
@Configuration
@EnableCaching
public class GuavaCacheConfig {

    ...

    @Bean(name = "CacheEnabledService")
    public SomeService someService() {
        return new CacheableSomeService();
    }

    @Bean(name = "guavaCacheManager")
    public CacheManager cacheManager() {
        // if different caching strategies should occur, use this technique:
        // http://www.java-allandsundry.com/2014/10/spring-caching-abstraction-and-google.html
        GuavaCacheManager guavaCacheManager = new GuavaCacheManager();
        guavaCacheManager.setCacheBuilder(cacheBuilder());
        return guavaCacheManager;
    }

    @Bean(name = "expireAfterAccessCacheBuilder")
    public CacheBuilder<Object, Object> cacheBuilder() {
        return CacheBuilder.newBuilder()
                .recordStats()
                .expireAfterAccess(5, TimeUnit.SECONDS);
    }

    @Bean(name = "keyGenerator")
    public KeyGenerator keyGenerator() {
        return new CustomKeyGenerator();
    }

    ...
}
Note that the code above was taken from one of our integration tests.
The service whose return values should be cached is defined as depicted below:
@Component
@CacheConfig(cacheNames = "someCache", keyGenerator = CustomKeyGenerator.NAME, cacheManager = "guavaCacheManager")
public class CacheableService {

    public final static String CACHE_NAME = "someCache";

    ...

    @Cacheable
    public <E extends BaseEntity> E findEntity(String id) {
        ...
    }

    ...

    @CachePut
    public <E extends BaseEntity> ObjectId persist(E entity) {
        ...
    }

    ...
}
As Spring Caching uses an AOP approach, when a @Cacheable-annotated method is invoked, Spring first checks whether a previously stored return value is already available in the cache for that invocation (depending on the cache key; we use a custom key generator for this). If no value is available yet, Spring invokes the actual service method and stores the return value in the local cache, where it is available for subsequent calls.
@CachePut will always execute the service method and put the return value into the cache. This is useful if an existing value in the cache should be replaced by a new one, for example after an update.
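For illustration, a hypothetical caller (assuming cacheableService is the Spring-managed proxy rather than a directly constructed instance):

BaseEntity first = cacheableService.findEntity("42");   // cache miss: the method body runs and the result is stored
BaseEntity second = cacheableService.findEntity("42");  // cache hit: served from the Guava-backed cache
ObjectId id = cacheableService.persist(second);         // @CachePut: the method always runs and the cache entry is updated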
I am planning to use the Spring @Cacheable annotation in order to cache the results of invoked methods.
But this implementation somehow does not look very "safe" to me. As far as I understand, the returned value is cached by the underlying caching engine and deleted when Spring's evict method is called.
I would need an implementation that does not discard the old value until a new value has been loaded. That would be required for the following scenario to work:
1. The @Cacheable method is called and returns a valid result.
2. The result is cached by the Spring @Cacheable backend.
3. Spring invalidates the cached value because it expired (e.g. a TTL of 1 hour).
4. The @Cacheable method is called again, but this time it throws an exception or returns null.
5. The OLD result is cached again, so future invocations of the method still return a valid result.
How would this be possible?
Your requirement of serving old values if the @Cacheable method throws an exception can easily be achieved with a minimal extension to Google Guava.
Use the following example configuration:
@Configuration
@EnableWebMvc
@EnableCaching
@ComponentScan("com.yonosoft.poc.cache")
public class ApplicationConfig extends CachingConfigurerSupport {

    @Bean
    @Override
    public CacheManager cacheManager() {
        SimpleCacheManager simpleCacheManager = new SimpleCacheManager();
        GuavaCache todoCache = new GuavaCache("todo", CacheBuilder.newBuilder()
                .refreshAfterWrite(10, TimeUnit.MINUTES)
                .maximumSize(10)
                .build(new CacheLoader<Object, Object>() {
                    @Override
                    public Object load(Object key) throws Exception {
                        CacheKey cacheKey = (CacheKey) key;
                        return cacheKey.method.invoke(cacheKey.target, cacheKey.params);
                    }
                }));
        simpleCacheManager.setCaches(Arrays.asList(todoCache));
        return simpleCacheManager;
    }

    @Bean
    @Override
    public KeyGenerator keyGenerator() {
        return new KeyGenerator() {
            @Override
            public Object generate(Object target, Method method, Object... params) {
                return new CacheKey(target, method, params);
            }
        };
    }

    private class CacheKey extends SimpleKey {

        private static final long serialVersionUID = -1013132832917334168L;

        private Object target;
        private Method method;
        private Object[] params;

        private CacheKey(Object target, Method method, Object... params) {
            super(params);
            this.target = target;
            this.method = method;
            this.params = params;
        }
    }
}
CacheKey serves the single purpose of exposing the SimpleKey attributes. Guava's refreshAfterWrite configures the refresh time without expiring the cache entries. If a method annotated with @Cacheable throws an exception, the cache continues to serve the old value until it is evicted due to maximumSize or replaced by a new value from a successful method response. You can use refreshAfterWrite in conjunction with expireAfterWrite and expireAfterAccess.
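As a sketch (my own combination, not part of the configuration above), the relevant builder settings can be combined like this, where cacheLoader stands for the CacheLoader defined inline above:

LoadingCache<Object, Object> cache = CacheBuilder.newBuilder()
        .refreshAfterWrite(10, TimeUnit.MINUTES)   // reload lazily after 10 minutes; the old value keeps being served meanwhile
        .expireAfterWrite(1, TimeUnit.HOURS)       // hard limit: entries are dropped one hour after they were written
        .maximumSize(10)
        .build(cacheLoader);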
I may be wrong in my reading of the Spring code, notably org.springframework.cache.interceptor.CacheAspectSupport#execute(org.springframework.cache.interceptor.CacheOperationInvoker, org.springframework.cache.interceptor.CacheAspectSupport.CacheOperationContexts), but I believe the abstraction indeed does not provide what you are asking for.
Spring will not expire entries itself; that is left to the underlying caching implementation.
You mention that you would like to see values even though they are expired. That's against the expiry abstraction used in most cache implementations that I know of.
Returning a previously cached value on invocation error is clearly use case specific. The Spring abstraction will simply throw the error back at the user. The CacheErrorHandler mechanism only deals with cache invocation related exceptions.
All in all, it seems to me that what you are asking for is very use case specific and thus not something an abstraction would/should offer.
I am writing a web application using Spring MVC. I have an interface that looks like this:
public interface SubscriptionService
{
    public String getSubscriptionIDForUser(String userID);
}
The getSubscriptionIDForUser method actually makes a network call to another service to get the subscription details of the user. My business logic calls this method in multiple places, so a single HTTP request might result in multiple calls to it. I want to cache the result so that repeated network calls are not made for the same request. I looked at the Spring documentation but could not find references to how I can cache this result for the duration of a request. Needless to say, the cache should be considered invalid when a new request arrives for the same userID.
My requirements are as follows:
1. For one HTTP request, if multiple calls are made to getSubscriptionIDForUser, the actual method should be executed only once. For all other invocations, the cached result should be returned.
2. For a different HTTP request, a new call should be made and any cached value disregarded, even if the method parameters are exactly the same.
3. The business logic might run in parallel on different threads. So, for the same HTTP request, Thread-1 may currently be making the getSubscriptionIDForUser call when Thread-2 tries to invoke the same method with the same parameters before it returns. In that case Thread-2 should wait for the call made by Thread-1 instead of making another call, and once that call returns, Thread-2 should get the same return value.
Any pointers?
Update: My webapp will be deployed to multiple hosts behind a VIP. My most important requirement is request-level caching. Since each request is served by a single host, I need to cache the result of the service call on that host only. A new request with the same userID must not take the value from the cache. I have looked through the docs but could not find references to how this is done. Maybe I am looking in the wrong place?
I'd like to propose another solution that is a bit smaller than the one proposed by @Dmitry. Instead of implementing our own CacheManager, we can use the ConcurrentMapCacheManager provided by Spring in the 'spring-context' artifact. The configuration then looks like this:
// add this code to any configuration class
@Bean
@Scope(value = WebApplicationContext.SCOPE_REQUEST, proxyMode = ScopedProxyMode.TARGET_CLASS)
public CacheManager cacheManager() {
    return new ConcurrentMapCacheManager();
}
It can then be used like this:
@Cacheable(cacheManager = "cacheManager", cacheNames = "default")
public SomeCachedObject getCachedObject() {
    return new SomeCachedObject();
}
I ended up with the solution suggested by herman in his comment:
A cache manager class backed by a simple HashMap:
public class RequestScopedCacheManager implements CacheManager {

    private final Map<String, Cache> cache = new HashMap<>();

    public RequestScopedCacheManager() {
        System.out.println("Create");
    }

    @Override
    public Cache getCache(String name) {
        return cache.computeIfAbsent(name, this::createCache);
    }

    @SuppressWarnings("WeakerAccess")
    protected Cache createCache(String name) {
        return new ConcurrentMapCache(name);
    }

    @Override
    public Collection<String> getCacheNames() {
        return cache.keySet();
    }

    public void clearCaches() {
        cache.clear();
    }
}
Then make it RequestScoped:
@Bean
@Scope(value = WebApplicationContext.SCOPE_REQUEST, proxyMode = ScopedProxyMode.TARGET_CLASS)
public CacheManager requestScopedCacheManager() {
    return new RequestScopedCacheManager();
}
Usage:
@Cacheable(cacheManager = "requestScopedCacheManager", cacheNames = "default")
public YourCachedObject getCachedObject(Integer id) {
    // Your code
    return yourCachedObject;
}
Update:
After a while I found that my previous solution was incompatible with Spring Actuator: CacheMetricsRegistrarConfiguration tries to initialize the request-scoped cache outside of a request scope, which leads to an exception.
Here is my alternative implementation:
public class RequestScopedCacheManager implements CacheManager {

    public RequestScopedCacheManager() {
    }

    @Override
    public Cache getCache(String name) {
        Map<String, Cache> cacheMap = getCacheMap();
        return cacheMap.computeIfAbsent(name, this::createCache);
    }

    protected Map<String, Cache> getCacheMap() {
        RequestAttributes requestAttributes = RequestContextHolder.getRequestAttributes();
        if (requestAttributes == null) {
            return new HashMap<>();
        }
        @SuppressWarnings("unchecked")
        Map<String, Cache> cacheMap = (Map<String, Cache>) requestAttributes.getAttribute(getCacheMapAttributeName(), RequestAttributes.SCOPE_REQUEST);
        if (cacheMap == null) {
            cacheMap = new HashMap<>();
            requestAttributes.setAttribute(getCacheMapAttributeName(), cacheMap, RequestAttributes.SCOPE_REQUEST);
        }
        return cacheMap;
    }

    protected String getCacheMapAttributeName() {
        return this.getClass().getName();
    }

    @SuppressWarnings("WeakerAccess")
    protected Cache createCache(String name) {
        return new ConcurrentMapCache(name);
    }

    @Override
    public Collection<String> getCacheNames() {
        Map<String, Cache> cacheMap = getCacheMap();
        return cacheMap.keySet();
    }

    public void clearCaches() {
        for (Cache cache : getCacheMap().values()) {
            cache.clear();
        }
        getCacheMap().clear();
    }
}
Then register it as a bean that is not(!) request-scoped; the cache implementation obtains the request scope internally.
@Bean
public CacheManager requestScopedCacheManager() {
    return new RequestScopedCacheManager();
}
Usage:
@Cacheable(cacheManager = "requestScopedCacheManager", cacheNames = "default")
public YourCachedObject getCachedObject(Integer id) {
    // Your code
    return yourCachedObject;
}
EHCache comes to mind right off the bat, or you could even roll your own solution to cache the results in the service layer. There are probably a billion caching options here. The choice depends on several factors: do you need the values to time out, or will you clear the cache manually? Do you need a distributed cache, for example for a stateless REST application distributed across several app servers? Do you need something robust that can survive a crash or a reboot?
You can use Spring Cache annotations and create your own CacheManager that caches at request scope. Or you can use the one I wrote: https://github.com/rinoto/spring-request-cache