Restrict number of processed requests per RequestMapping - Java

We have a service with one endpoint that needs to be restricted to processing 2 requests at a time. These 2 requests can take a while to complete.
Currently we use the Tomcat properties to do so.
The problem we face now is that when these 2 threads are used up for that endpoint, our health check does not work anymore.
So we would like to restrict the number of requests for that particular endpoint.
We pondered this for a while, and one idea was to do it via a filter, but that seems very hacky to me...
So I was hoping someone has another idea?

Here's an example of how to implement an asynchronous REST controller that will handle no more than 2 requests at the same time. This implementation will not block any of your Tomcat servlet threads while the requests are being processed.
If another request arrives while the two are in progress then the caller will get an HTTP 429 (Too Many Requests).
This example immediately rejects requests that cannot be handled with a 429. If instead you'd like to queue pending requests until one of the 2 processing threads is available, then replace SynchronousQueue with another implementation of BlockingQueue.
You might want to tidy up this sample; I've intentionally embedded all the classes used so it fits in here:
@Configuration
@RestController
public class TestRestController {

    static class MyRunnable implements Runnable {
        DeferredResult<ResponseEntity<String>> deferredResult;

        MyRunnable(DeferredResult<ResponseEntity<String>> dr) {
            this.deferredResult = dr;
        }

        @Override
        public void run() {
            // do your work here and adjust the following
            // line to set your own result for the caller...
            this.deferredResult.setResult(ResponseEntity.ok("it worked"));
        }
    }

    @SuppressWarnings("serial")
    @ResponseStatus(HttpStatus.TOO_MANY_REQUESTS)
    static class TooManyRequests extends RuntimeException {
    }

    private final ExecutorService executorService = new ThreadPoolExecutor(2, 2,
            0L, TimeUnit.MILLISECONDS,
            new SynchronousQueue<Runnable>(),
            (runnable, executor) -> {
                ((MyRunnable) runnable).deferredResult.setErrorResult(new TooManyRequests());
            });

    @GetMapping(value = "/blah", produces = MediaType.APPLICATION_JSON_UTF8_VALUE)
    public DeferredResult<ResponseEntity<String>> yourRestService() {
        final DeferredResult<ResponseEntity<String>> deferredResult = new DeferredResult<>();
        this.executorService.execute(new MyRunnable(deferredResult));
        return deferredResult;
    }
}
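For the queuing variant mentioned above, a minimal sketch of the change (the queue capacity of 50 is an arbitrary choice): only the executor definition needs to be swapped.
    // queue up to 50 pending requests instead of rejecting immediately;
    // the rejection handler now only fires once the queue itself is full
    private final ExecutorService executorService = new ThreadPoolExecutor(2, 2,
            0L, TimeUnit.MILLISECONDS,
            new LinkedBlockingQueue<Runnable>(50),
            (runnable, executor) -> {
                ((MyRunnable) runnable).deferredResult.setErrorResult(new TooManyRequests());
            });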

By default, RequestMappingHandlerAdapter handles a @Controller's @RequestMapping methods. So the easiest way is to create your own RequestMappingHandlerAdapter and override its handleInternal to add your control logic.
Below is a sketch of the idea:
public static class MyRequestMappingHandlerAdapter extends RequestMappingHandlerAdapter {

    // counter to keep track of the number of concurrent requests for each HandlerMethod
    // (a HandlerMethod represents one @RequestMapping method)
    private final Map<HandlerMethod, AtomicInteger> requestCounterMap = new ConcurrentHashMap<>();

    @Override
    protected ModelAndView handleInternal(HttpServletRequest request, HttpServletResponse response,
            HandlerMethod handlerMethod) throws Exception {
        AtomicInteger counter = requestCounterMap.computeIfAbsent(handlerMethod, m -> new AtomicInteger());
        // increase the counter for this handlerMethod by 1; reject if more than 2 requests are in flight
        if (counter.incrementAndGet() > 2) {
            counter.decrementAndGet();
            throw new ResponseStatusException(HttpStatus.TOO_MANY_REQUESTS);
        }
        try {
            return super.handleInternal(request, response, handlerMethod);
        } finally {
            // method finished, decrease the counter by 1
            counter.decrementAndGet();
        }
    }
}
Assuming you are using the Spring Boot MVC auto-configuration, you can replace the RequestMappingHandlerAdapter with your customized one by creating a WebMvcRegistrations bean and overriding its getRequestMappingHandlerAdapter() method:
@Bean
public WebMvcRegistrations webMvcRegistrations() {
    return new WebMvcRegistrations() {
        @Override
        public RequestMappingHandlerAdapter getRequestMappingHandlerAdapter() {
            return new MyRequestMappingHandlerAdapter();
        }
    };
}

Related

java async service implementation using BlockingQueue

I am trying to write my own Async service implementation alongside my already existing Synchronous version.
I have the following so far:
#Service("asynchronousProcessor")
public class AsynchronousProcessor extends Processor {
private BlockingQueue<Pair<String, MyRequest>> requestQueue = new LinkedBlockingQueue<>();
public AsynchronousProcessor(final PBRequestRepository pbRequestRepository,
final JobRunner jobRunner) {
super(pbRequestRepository, jobRunner);
}
#Override
public MyResponse process(MyRequest request, String id) {
super.saveTheRequestInDB(request);
// add task to blocking queue and have it processed in the background
}
}
Basically I have an endpoint RestController class that calls process(). The async version should queue the request in a BlockingQueue and have it processed in the background.
I am unsure how to implement this to solve the problem, whether I should use an ExecutorService, and how best to fit it into the current design.
It would be useful to have some hooks, such as calls before and after executing a task.
Any answer with some code samples to show design would be really helpful :)
If the only requirement is to process it asynchronously then I'd strongly recommend considering Spring's built-in @Async for this purpose. This approach, however, will not be interface compatible with your existing process method of Processor, since the return type MUST be either void or wrapped in a Future type. This limitation exists for good reasons: the async execution cannot return the response immediately, so a Future wrapper is the only way to get access to the result should that be needed.
The following solution outline lays out what should be done in order to switch from sync execution to async execution while retaining interface compatibility. All important points are mentioned in inline comments. Please note that although this is interface compatible, the return value is null (for the reasons stated above). If you MUST have the return value within your controller then this approach (or any async approach for that matter) is NOT going to work unless you switch to an async controller as well (a different topic with much wider change and design implications). The following outline also includes pre and post execution hooks.
/**
 * Base interface extracted from the existing Processor.
 * Use this interface as the injection type in the controller along
 * with @Qualifier("synchProcessor") for using the sync processor.
 * Once ready, switch the Qualifier to asynchronousProcessor
 * to start using async instead.
 */
public interface BaseProcessor {
    public MyResponse process(MyRequest request, String id);
}

@Service("synchProcessor")
@Primary
public class Processor implements BaseProcessor {
    @Override
    public MyResponse process(MyRequest request, String id) {
        // normal existing sync logic
    }
}

@Service("asynchronousProcessor")
public class AsynchronousProcessor implements BaseProcessor {

    @Autowired
    private AsynchQueue queue;

    @Override
    public MyResponse process(MyRequest request, String id) {
        queue.process(request, id);
        // async execution cannot return the result immediately;
        // this is a hack to keep this implementation interface
        // compatible with the existing BaseProcessor
        return null;
    }
}

@Component
public class AsynchQueue {

    @Autowired
    @Qualifier("synchProcessor")
    private BaseProcessor processor;

    /**
     * This method is executed asynchronously by a Spring executor.
     * The presented outline calls the preProcess and postProcess hooks
     * around the actual method execution. The actual execution is
     * delegated to the existing synchProcessor, reusing it 100% AS-IS.
     */
    @Async
    public void process(MyRequest request, String id) {
        preProcess(request, id);
        MyResponse response = processor.process(request, id);
        postProcess(request, id, response);
    }

    private void preProcess(MyRequest request, String id) {
        // add logic for pre processing here
    }

    private void postProcess(MyRequest request, String id, MyResponse response) {
        // add logic for post processing here
    }
}
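If you do need the result from an @Async method, the usual pattern (assuming @EnableAsync is present on a configuration class) is to return a CompletableFuture instead of void. A minimal sketch, with AsyncResultProcessor as a hypothetical class name:
@Service
public class AsyncResultProcessor {

    @Async
    public CompletableFuture<MyResponse> processAsync(MyRequest request, String id) {
        // delegate to your existing synchronous logic...
        MyResponse response = doProcess(request, id);
        // ...and hand the result back through a future the caller can block on or compose with
        return CompletableFuture.completedFuture(response);
    }

    private MyResponse doProcess(MyRequest request, String id) {
        // existing sync logic would go here
        return new MyResponse();
    }
}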
Another use case could be to batch process the db updates instead of processing them one by one as you are doing already. This is especially useful if you have high volume and db updates are becoming a bottleneck. For this case, using a BlockingQueue makes sense. Following is the solution outline that you can use for this purpose. Again, although this is interface compatible, the return value is still null. You can further fine-tune this outline to have multiple processing threads (or a Spring executor for that matter) should that be needed for batch processing. For one similar use case, a single processing thread with batch updates was sufficient for my needs; concurrent db updates were presenting bigger problems due to db-level locks in concurrent execution.
public class MyRequestAndID {

    private MyRequest request;
    private String id;

    public MyRequestAndID(MyRequest request, String id) {
        this.request = request;
        this.id = id;
    }

    public MyRequest getMyRequest() {
        return this.request;
    }

    public String getId() {
        return this.id;
    }
}

@Service("asynchronousProcessor")
public class BatchProcessorQueue implements BaseProcessor {

    private static final Logger logger = LoggerFactory.getLogger(BatchProcessorQueue.class);

    /* Batch processor which can process one OR more items using a single DB query */
    @Autowired
    private BatchProcessor batchProcessor;

    private LinkedBlockingQueue<MyRequestAndID> inQueue = new LinkedBlockingQueue<>();

    private Set<MyRequestAndID> processingSet = new HashSet<>();

    @PostConstruct
    private void init() {
        Thread processingThread = new Thread(() -> processQueue());
        processingThread.setName("BatchProcessor");
        processingThread.start();
    }

    public MyResponse process(MyRequest request, String id) {
        enqueue(new MyRequestAndID(request, id));
        // async execution cannot return the result immediately;
        // this is a hack to keep this implementation interface
        // compatible with the existing BaseProcessor
        return null;
    }

    public void enqueue(MyRequestAndID job) {
        inQueue.add(job);
    }

    private void processQueue() {
        try {
            while (true) {
                processQueueCycle();
            }
        } catch (InterruptedException ioex) {
            logger.error("Interrupted while processing queue", ioex);
        }
    }

    private void processQueueCycle() throws InterruptedException {
        // blocking call, wait for at least one item
        MyRequestAndID job = inQueue.take();
        processingSet.add(job);
        updateSetFromQueue();
        processSet();
    }

    private void processSet() {
        if (processingSet.size() < 1)
            return;
        preProcess(processingSet);
        batchProcessor.processAll(processingSet);
        postProcess(processingSet);
        processingSet.clear();
    }

    private void updateSetFromQueue() {
        List<MyRequestAndID> inData = Arrays.asList(inQueue.toArray(new MyRequestAndID[0]));
        if (inData.size() < 1)
            return;
        inQueue.removeAll(inData);
        processingSet.addAll(inData);
    }

    private void preProcess(Set<MyRequestAndID> currentSet) {
        // add logic for pre processing here
    }

    private void postProcess(Set<MyRequestAndID> currentSet) {
        // add logic for post processing here
    }
}

How to transfer data via reactor's subscriber context?

I'm new to Project Reactor, but I have a task to send some information from a classic Spring REST controller to a service which interacts with a different system. The whole project is developed with Project Reactor.
Here is my REST controller:
@RestController
public class Controller {

    @Autowired
    Service service;

    @PostMapping("/path")
    public Mono<String> test(@RequestHeader Map<String, String> headers) throws Exception {
        service.saveHeader(headers.get("header"));
        return service.getData();
    }
}
And here is my service:
@Service
public class Service {

    private Mono<String> monoHeader;

    private InteractionService interactor;

    public Mono<String> getData() {
        return Mono.fromSupplier(() -> interactor.interact(monoHeader.block()));
    }

    public void saveHeader(String header) {
        String key = "header";
        monoHeader = Mono.just("")
                .flatMap(s -> Mono.subscriberContext()
                        .map(ctx -> s + ctx.get(key)))
                .subscriberContext(ctx -> ctx.put(key, header));
    }
}
Is it an acceptable solution?
First off, I don't think you need the Context here. It is useful for implicitly passing data to a Flux or a Mono that you don't create (e.g. one that a database driver creates for you). But here you're in charge of creating the Mono<String>.
Does the saveHeader service method really achieve something? The call seems transient in nature: you always immediately call the interactor with the last saved header. (There could be a side effect there where two parallel calls to your endpoint end up overwriting each other's headers.)
If you really want to store the headers, you could add a list or map in your service, but the most logical path would be to add the header as a parameter of getData().
This eliminates the monoHeader field and the saveHeader method.
Then getData itself: you never need to block() on a Mono if you aim at returning a Mono. Adding an input parameter would allow you to rewrite the method as:
public Mono<String> getData(String header) {
    return Mono.fromSupplier(() -> interactor.interact(header));
}
Last but not least, blocking.
The interactor seems to be an external service or library that is not reactive in nature. If the operation involves some latency (which it probably does) or blocks for more than a few milliseconds, then it should run on a separate thread.
Mono.fromSupplier runs in whatever thread is subscribing to it. In this case, Spring WebFlux will subscribe to it, and it will run in the Netty eventloop thread. If you block that thread, it means no other request can be serviced in the whole application!
So you want to execute the interactor in a dedicated thread, which you can do by using subscribeOn(Schedulers.boundedElastic()).
All in all:
@RestController
public class Controller {

    @Autowired
    Service service;

    @PostMapping("/path")
    public Mono<String> test(@RequestHeader Map<String, String> headers) throws Exception {
        return service.getData(headers.get("header"));
    }
}

@Service
public class Service {

    private InteractionService interactor;

    public Mono<String> getData(String header) {
        return Mono.fromSupplier(() -> interactor.interact(header))
                .subscribeOn(Schedulers.boundedElastic());
    }
}
How to transfer data via reactor's subscriber context?
Is it an acceptable solution?
No.
Your saveHeader() method is equivalent to simply
public void saveHeader(String header) {
    monoHeader = Mono.just(header);
}
A subscriberContext is needed if you consume the value elsewhere - if the mono is constructed elsewhere. In your case (where you have all code before your eyes in the same method) just use the actual value.
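For contrast, a minimal sketch of the case where the Context does help: a Mono constructed elsewhere (e.g. inside a library) that reads a key supplied by whoever subscribes (the key name here is arbitrary):
// built somewhere that does not know the header value
Mono<String> greeting = Mono.subscriberContext()
        .map(ctx -> "hello " + ctx.getOrDefault("header", "unknown"));

// the caller supplies the value at subscription time
greeting.subscriberContext(ctx -> ctx.put("header", "abc"))
        .subscribe(System.out::println); // prints "hello abc"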
BTW, there are many ways to implement your getData() method.
One, as suggested by Simon Baslé, is to get rid of the separate saveHeader() method.
Another way, if you have to keep your monoHeader field, could be:
public Mono<String> getData() {
    return monoHeader.publishOn(Schedulers.boundedElastic())
            .map(header -> interactor.interact(header));
}

Calling Spring controller method without going to the internet

tldr: Is there a way to make an internal request (using the method's path) without going to the internet?
--
Why do I need it? I have a project which receives many events. The decision of who will handle each event is made by a Controller. So I have something similar to this:
@RestController
@RequestMapping("/events")
public class EventHandlerAPI {

    @Autowired
    private EventAHandler eventAhandler;

    @Autowired
    private EventBHandler eventBhandler;

    @PostMapping("/a")
    public void handleEventA(@RequestBody EventA event) {
        eventAhandler.handle(event);
    }

    @PostMapping("/b")
    public void handleEventB(@RequestBody EventB event) {
        eventBhandler.handle(event);
    }
}
We recently added support for receiving events through a queue service. It sends us the payload and the event class. Our decision is to keep both interfaces working (REST and queue). The solution to avoid code duplication was to keep the Controller choosing which handler will take care of the event. The code nowadays is similar to this:
@Configuration
public class EventHandlerQueueConsumer {

    @Autowired
    private EventHandlerAPI eventHandlerAPI;

    private Map<Class, EventHandler> eventHandlers;

    @PostConstruct
    public void init() {
        /* start listening to the queue */
        declareEventHandlers();
    }

    private void declareEventHandlers() {
        eventHandlers = new HashMap<>();
        eventHandlers.put(EventA.class, (EventHandler<EventA>) eventHandlerAPI::handleEventA);
        eventHandlers.put(EventB.class, (EventHandler<EventB>) eventHandlerAPI::handleEventB);
    }

    private void onEventReceived(AbstractEvent event) {
        EventHandler eventHandler = eventHandlers.get(event.getClass());
        eventHandler.handle(event);
    }

    private interface EventHandler<T extends AbstractEvent> {
        void handle(T event);
    }
}
This code works, but it doesn't let the controller choose who will handle the event (our intention). The decision is actually being made by the map.
What I would like to do is invoke the controller method through its request mapping without going to the internet. Something like this:
@Configuration
public class EventHandlerQueueConsumer {

    // MADE UP CLASS TO SHOW WHAT I WANT
    @Autowired
    private ControllerInvoker controllerInvoker;

    @PostConstruct
    public void init() { /* start listening to the queue */ }

    private void onEventReceived(AbstractEvent event) {
        controllerInvoker.post(event.getPath(), new Object[] { event });
    }
}
This way is much cleaner and lets all the decisions be made by the controller.
I've researched a lot and didn't find a way to implement it. Debugging Spring, I found how it routes the request after the DispatcherServlet, but all the Spring internals use HttpServletRequest and HttpServletResponse :(
Is there a way to make an internal request (using the method's path) without going to the internet?
They are classes of the same application.
Then it should be easy enough.
1) You can call your own API on http(s)://localhost:{port}/api/{path} using the RestTemplate utility class. This is the preferred way, since you'll follow the standard MVC pattern. Something like:
restTemplate.exchange(uri, HttpMethod.POST, httpEntity, ResponseClass.class);
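For illustration, a minimal sketch of that call wired into the queue consumer (the localhost port and the shape of the response are assumptions; getPath() is taken from the mock-up above and assumed to return the full mapping, e.g. "/events/a"):
// hypothetical: assumes the app listens on localhost:8080 and the event exposes its own mapping path
RestTemplate restTemplate = new RestTemplate();
String uri = "http://localhost:8080" + event.getPath();
HttpHeaders headers = new HttpHeaders();
headers.setContentType(MediaType.APPLICATION_JSON);
HttpEntity<AbstractEvent> httpEntity = new HttpEntity<>(event, headers);
ResponseEntity<Void> response = restTemplate.exchange(uri, HttpMethod.POST, httpEntity, Void.class);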
2) If you don't want to involve a network connection at all, then you can either use Spring's internals to find the mapping/method map or use some reflection to build a custom
map upon the controller's startup. Then you can pass your event/object to the method from the map in the way shown in your mock-up class. Something like:
#RequestMapping("foo")
public void fooMethod() {
System.out.println("mapping = " + getMapping("fooMethod")); // you can get all methods/mapping in #PostContruct initialization phase
}
private String getMapping(String methodName) {
Method methods[] = this.getClass().getMethods();
for (int i = 0; i < methods.length; i++) {
if (methods[i].getName() == methodName) {
String mapping[] = methods[i].getAnnotation(RequestMapping.class).value();
if (mapping.length > 0) {
return mapping[mapping.length - 1];
}
}
}
return null;
}

Multiple threads submit tasks and wait for results while another thread periodically executes each task

I have multiple threads that consume some data and call one third-party service (serviceA). I can send only one request to serviceA per 10 seconds. Each thread has to wait until it receives a result from serviceA and can then continue doing other thread-specific work.
I want to implement some sort of proxy for serviceA that will receive all calls for serviceA, collect them, execute one call per 10 seconds and return the result of this call. Each thread should wait until the proxy returns the result. It should look something like this:
public class ServiceAProxy implements ServiceA {

    private ServiceA serviceA;
    private ??? callsHolder;

    public ServiceAProxy(ServiceA serviceA) {
        this.serviceA = serviceA;
    }

    public Result call(String parameter) {
        return callsHolder.submitAndWaitResult(() -> serviceA.call(parameter));
    }

    @Scheduled(fixedDelay = 10000)
    public void executeOldestCall() {
        callsHolder.executeOldestTask();
    }
}
Probably callHolder could be implemented using 2 SynchronousQueues but is there any cleaner solution to do this without reinventing the wheel?
If the number of threads is small and blocking a calling thread until it can send the request is not a big deal, Guava's RateLimiter may be just enough. Your service proxy would then look something like this:
public class ServiceAProxy implements ServiceA {

    private final ServiceA serviceA;
    private final RateLimiter throttle;

    public ServiceAProxy(ServiceA serviceA, double callsPerSecond) {
        this.serviceA = serviceA;
        throttle = RateLimiter.create(callsPerSecond);
    }

    public Result call(String parameter) {
        // every thread may potentially block here until the throttle allows it to proceed
        throttle.acquire();
        return serviceA.call(parameter);
    }
}
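For the stated limit of one call per 10 seconds, the proxy would be created with a rate of 0.1 permits per second. A minimal usage sketch (realServiceA stands in for your actual client):
// 0.1 permits per second == at most one serviceA call every 10 seconds
ServiceA throttled = new ServiceAProxy(realServiceA, 0.1);
Result result = throttled.call("some parameter"); // blocks until a permit is available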

Spring cache for a given request

I am writing a web application using Spring MVC. I have an interface that looks like this:
public interface SubscriptionService {
    public String getSubscriptionIDForUser(String userID);
}
The getSubscriptionIDForUser method actually makes a network call to another service to get the subscription details of the user. My business logic calls this method in multiple places. Hence, for a given HTTP request I might have multiple calls made to this method. So I want to cache the result so that repeated network calls are not made for the same request. I looked at the Spring documentation, but could not find references to how I can cache this result for the same request. Needless to say, the cache should be considered invalid if it is a new request for the same userID.
My requirements are as follows:
For one HTTP request, if multiple calls are made to getSubscriptionIDForUser, the actual method should be executed only once. For all other invocations, the cached result should be returned.
For a different HTTP request, we should make a new call and disregard any cache hit, even if the method parameters are exactly the same.
The business logic might execute its logic in parallel from different threads. Thus for the same HTTP request, there is a possibility that Thread-1 is currently making the getSubscriptionIDForUser method call, and before the method returns, Thread-2 also tries to invoke the same method with the same parameters. If so, then Thread-2 should be made to wait for the return of the call made from Thread-1 instead of making another call. Once the method invoked from Thread-1 returns, Thread-2 should get the same return value.
Any pointers?
Update: My webapp will be deployed to multiple hosts behind a VIP. My most important requirement is request-level caching. Since each request will be served by a single host, I need to cache the result of the service call on that host only. A new request with the same userID must not take the value from the cache. I have looked through the docs but could not find references as to how it is done. Maybe I am looking in the wrong place?
I'd like to propose another solution that is a bit smaller than the one proposed by @Dmitry. Instead of implementing our own CacheManager, we can use the ConcurrentMapCacheManager provided by Spring in the 'spring-context' artifact. So the code will look like this (configuration):
//add this code to any configuration class
@Bean
@Scope(value = WebApplicationContext.SCOPE_REQUEST, proxyMode = ScopedProxyMode.TARGET_CLASS)
public CacheManager cacheManager() {
    return new ConcurrentMapCacheManager();
}
and may be used:
@Cacheable(cacheManager = "cacheManager", cacheNames = "default")
public SomeCachedObject getCachedObject() {
    return new SomeCachedObject();
}
I ended up with the solution suggested by herman in his comment:
A cache manager class with a simple HashMap:
public class RequestScopedCacheManager implements CacheManager {

    private final Map<String, Cache> cache = new HashMap<>();

    public RequestScopedCacheManager() {
        System.out.println("Create");
    }

    @Override
    public Cache getCache(String name) {
        return cache.computeIfAbsent(name, this::createCache);
    }

    @SuppressWarnings("WeakerAccess")
    protected Cache createCache(String name) {
        return new ConcurrentMapCache(name);
    }

    @Override
    public Collection<String> getCacheNames() {
        return cache.keySet();
    }

    public void clearCaches() {
        cache.clear();
    }
}
Then make it RequestScoped:
@Bean
@Scope(value = WebApplicationContext.SCOPE_REQUEST, proxyMode = ScopedProxyMode.TARGET_CLASS)
public CacheManager requestScopedCacheManager() {
    return new RequestScopedCacheManager();
}
Usage:
@Cacheable(cacheManager = "requestScopedCacheManager", cacheNames = "default")
public YourCachedObject getCachedObject(Integer id) {
    //Your code
    return yourCachedObject;
}
Update:
After a while, I found that my previous solution was incompatible with Spring Actuator. CacheMetricsRegistrarConfiguration tries to initialize the request scoped cache outside the request scope, which leads to an exception.
Here is my alternative implementation:
public class RequestScopedCacheManager implements CacheManager {

    public RequestScopedCacheManager() {
    }

    @Override
    public Cache getCache(String name) {
        Map<String, Cache> cacheMap = getCacheMap();
        return cacheMap.computeIfAbsent(name, this::createCache);
    }

    protected Map<String, Cache> getCacheMap() {
        RequestAttributes requestAttributes = RequestContextHolder.getRequestAttributes();
        if (requestAttributes == null) {
            return new HashMap<>();
        }
        @SuppressWarnings("unchecked")
        Map<String, Cache> cacheMap = (Map<String, Cache>) requestAttributes.getAttribute(getCacheMapAttributeName(), RequestAttributes.SCOPE_REQUEST);
        if (cacheMap == null) {
            cacheMap = new HashMap<>();
            requestAttributes.setAttribute(getCacheMapAttributeName(), cacheMap, RequestAttributes.SCOPE_REQUEST);
        }
        return cacheMap;
    }

    protected String getCacheMapAttributeName() {
        return this.getClass().getName();
    }

    @SuppressWarnings("WeakerAccess")
    protected Cache createCache(String name) {
        return new ConcurrentMapCache(name);
    }

    @Override
    public Collection<String> getCacheNames() {
        Map<String, Cache> cacheMap = getCacheMap();
        return cacheMap.keySet();
    }

    public void clearCaches() {
        for (Cache cache : getCacheMap().values()) {
            cache.clear();
        }
        getCacheMap().clear();
    }
}
Then register a bean that is not(!) request scoped. The cache implementation gets request scope internally.
@Bean
public CacheManager requestScopedCacheManager() {
    return new RequestScopedCacheManager();
}
Usage:
@Cacheable(cacheManager = "requestScopedCacheManager", cacheNames = "default")
public YourCachedObject getCachedObject(Integer id) {
    //Your code
    return yourCachedObject;
}
EHCache comes to mind right off the bat, or you could even roll your own solution to cache the results in the service layer. There are probably a billion options for caching here. The choice depends on several factors: do you need the values to time out, or are you going to clear the cache manually? Do you need a distributed cache, as in the case where you have a stateless REST application that is distributed amongst several app servers? Do you need something robust that can survive a crash or reboot?
You can use Spring Cache annotations and create your own CacheManager that caches at request scope. Or you can use the one I wrote: https://github.com/rinoto/spring-request-cache
