Spring cache for a given request - java

I am writing a web application using Spring MVC. I have an interface that looks like this:
public interface SubscriptionService
{
    public String getSubscriptionIDForUser(String userID);
}
The getSubscriptionIDForUser method actually makes a network call to another service to get the subscription details of the user. My business logic calls this method in multiple places, so for a given HTTP request I might have multiple calls made to this method. Hence, I want to cache the result so that repeated network calls are not made for the same request. I looked at the Spring documentation but could not find references to how I can cache this result for the same request. Needless to say, the cache should be considered invalid if it is a new request for the same userID.
My requirements are as follows:
For one HTTP request, if multiple calls are made to getSubscriptionIDForUser, the actual method should be executed only once. For all other invocations, the cached result should be returned.
For a different HTTP request, we should make a new call and disregard any cached value, even if the method parameters are exactly the same.
The business logic might execute its logic in parallel from different threads. Thus for the same HTTP request, there is a possibility that Thread-1 is currently making the getSubscriptionIDForUser method call, and before the method returns, Thread-2 also tries to invoke the same method with the same parameters. If so, then Thread-2 should be made to wait for the return of the call made from Thread-1 instead of making another call. Once the method invoked from Thread-1 returns, Thread-2 should get the same return value.
Any pointers?
Update: My webapp will be deployed to multiple hosts behind a VIP. My most important requirement is request-level caching. Since each request will be served by a single host, I need to cache the result of the service call on that host only. A new request with the same userID must not take the value from the cache. I have looked through the docs but could not find references as to how this is done. Maybe I am looking in the wrong place?

I'd like to propose another solution that is a bit smaller than the one proposed by @Dmitry. Instead of implementing our own CacheManager, we can use the ConcurrentMapCacheManager provided by Spring in the 'spring-context' artifact. So the configuration will look like this:
//add this code to any configuration class
@Bean
@Scope(value = WebApplicationContext.SCOPE_REQUEST, proxyMode = ScopedProxyMode.TARGET_CLASS)
public CacheManager cacheManager() {
    return new ConcurrentMapCacheManager();
}
and it may be used like this:
@Cacheable(cacheManager = "cacheManager", cacheNames = "default")
public SomeCachedObject getCachedObject() {
    return new SomeCachedObject();
}
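Note that the @Cacheable annotation is only honoured once caching is enabled. A minimal sketch of the surrounding configuration, assuming a servlet-based Java-config setup (the class name is illustrative):

import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.cache.concurrent.ConcurrentMapCacheManager;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Scope;
import org.springframework.context.annotation.ScopedProxyMode;
import org.springframework.web.context.WebApplicationContext;

@Configuration
@EnableCaching // without this, @Cacheable annotations are silently ignored
public class RequestCacheConfig {

    @Bean
    @Scope(value = WebApplicationContext.SCOPE_REQUEST, proxyMode = ScopedProxyMode.TARGET_CLASS)
    public CacheManager cacheManager() {
        // one cache manager per HTTP request; the scoped proxy resolves
        // the current request's instance on each cache lookup
        return new ConcurrentMapCacheManager();
    }
}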

I ended up with the solution suggested by herman in his comment:
A cache manager class backed by a simple HashMap:
public class RequestScopedCacheManager implements CacheManager {

    private final Map<String, Cache> cache = new HashMap<>();

    public RequestScopedCacheManager() {
        System.out.println("Create");
    }

    @Override
    public Cache getCache(String name) {
        return cache.computeIfAbsent(name, this::createCache);
    }

    @SuppressWarnings("WeakerAccess")
    protected Cache createCache(String name) {
        return new ConcurrentMapCache(name);
    }

    @Override
    public Collection<String> getCacheNames() {
        return cache.keySet();
    }

    public void clearCaches() {
        cache.clear();
    }
}
Then make it RequestScoped:
@Bean
@Scope(value = WebApplicationContext.SCOPE_REQUEST, proxyMode = ScopedProxyMode.TARGET_CLASS)
public CacheManager requestScopedCacheManager() {
    return new RequestScopedCacheManager();
}
Usage:
@Cacheable(cacheManager = "requestScopedCacheManager", cacheNames = "default")
public YourCachedObject getCachedObject(Integer id) {
    //Your code
    return yourCachedObject;
}
Update:
After a while, I found that my previous solution was incompatible with Spring Actuator: CacheMetricsRegistrarConfiguration tries to initialize the request-scoped cache outside the request scope, which leads to an exception.
Here is my alternative implementation:
public class RequestScopedCacheManager implements CacheManager {

    public RequestScopedCacheManager() {
    }

    @Override
    public Cache getCache(String name) {
        Map<String, Cache> cacheMap = getCacheMap();
        return cacheMap.computeIfAbsent(name, this::createCache);
    }

    protected Map<String, Cache> getCacheMap() {
        RequestAttributes requestAttributes = RequestContextHolder.getRequestAttributes();
        if (requestAttributes == null) {
            return new HashMap<>();
        }
        @SuppressWarnings("unchecked")
        Map<String, Cache> cacheMap = (Map<String, Cache>) requestAttributes.getAttribute(getCacheMapAttributeName(), RequestAttributes.SCOPE_REQUEST);
        if (cacheMap == null) {
            cacheMap = new HashMap<>();
            requestAttributes.setAttribute(getCacheMapAttributeName(), cacheMap, RequestAttributes.SCOPE_REQUEST);
        }
        return cacheMap;
    }

    protected String getCacheMapAttributeName() {
        return this.getClass().getName();
    }

    @SuppressWarnings("WeakerAccess")
    protected Cache createCache(String name) {
        return new ConcurrentMapCache(name);
    }

    @Override
    public Collection<String> getCacheNames() {
        Map<String, Cache> cacheMap = getCacheMap();
        return cacheMap.keySet();
    }

    public void clearCaches() {
        for (Cache cache : getCacheMap().values()) {
            cache.clear();
        }
        getCacheMap().clear();
    }
}
Then register it as a regular (not request-scoped!) bean; the cache implementation handles the request scoping internally.
@Bean
public CacheManager requestScopedCacheManager() {
    return new RequestScopedCacheManager();
}
Usage:
@Cacheable(cacheManager = "requestScopedCacheManager", cacheNames = "default")
public YourCachedObject getCachedObject(Integer id) {
    //Your code
    return yourCachedObject;
}
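The clearCaches() method above is not wired up anywhere. If you want to flush the request's caches explicitly once the response has been written, one option is an interceptor along these lines (a sketch, assuming Spring 5's HandlerInterceptor default methods; on older versions extend HandlerInterceptorAdapter instead):

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.springframework.web.servlet.HandlerInterceptor;

public class CacheClearingInterceptor implements HandlerInterceptor {

    private final RequestScopedCacheManager cacheManager;

    public CacheClearingInterceptor(RequestScopedCacheManager cacheManager) {
        this.cacheManager = cacheManager;
    }

    @Override
    public void afterCompletion(HttpServletRequest request, HttpServletResponse response,
                                Object handler, Exception ex) {
        // drop everything cached during this request once the response is complete
        cacheManager.clearCaches();
    }
}

Register it via WebMvcConfigurer's addInterceptors. With the request-attribute-based implementation above this is mostly belt and braces, since the cache map is discarded together with the request anyway.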

EHCache comes to mind right off the bat, or you could even roll your own solution to cache the results in the service layer. There are probably a billion caching options here. The choice depends on several factors: do you need the values to time out, or are you going to clear the cache manually? Do you need a distributed cache, as in the case where you have a stateless REST application distributed among several app servers? Do you need something robust that can survive a crash or reboot?

You can use Spring Cache annotations and create your own CacheManager that caches at request scope. Or you can use the one I wrote: https://github.com/rinoto/spring-request-cache

Related

Restrict number of processed requests per Requestmapping

We have a service with one endpoint that needs to be restricted to processing 2 requests at a time. These 2 requests can take a while to complete.
Currently we use the Tomcat properties to do so.
The problem we now face is that when these 2 threads are used up by that endpoint, our health check does not work anymore.
So we would like to restrict the number of requests for that particular endpoint only.
We pondered it for a while, and one idea was to do this via a filter, but that seems very hacky to me...
So I was hoping someone has another idea?
Here's an example of how to implement an asynchronous REST controller that will handle no more than 2 simultaneous requests at the same time. This implementation will not block any of your Tomcat servlet threads while the requests are being processed.
If another one arrives while the two are in progress then the caller will get an HTTP 429 (Too Many Requests).
This example immediately rejects requests that cannot be handled with a 429. If instead you'd like to queue pending requests until one of the 2 processing threads is available, then replace the SynchronousQueue with another implementation of BlockingQueue.
You might want to tidy up this sample; I've intentionally embedded all the classes used to keep it self-contained here:
@Configuration
@RestController
public class TestRestController {

    static class MyRunnable implements Runnable {
        DeferredResult<ResponseEntity<String>> deferredResult;

        MyRunnable(DeferredResult<ResponseEntity<String>> dr) {
            this.deferredResult = dr;
        }

        @Override
        public void run() {
            // do your work here and adjust the following
            // line to set your own result for the caller...
            this.deferredResult.setResult(ResponseEntity.ok("it worked"));
        }
    }

    @SuppressWarnings("serial")
    @ResponseStatus(HttpStatus.TOO_MANY_REQUESTS)
    static class TooManyRequests extends RuntimeException {
    }

    private final ExecutorService executorService = new ThreadPoolExecutor(2, 2,
            0L, TimeUnit.MILLISECONDS,
            new SynchronousQueue<Runnable>(),
            (runnable, executor) -> {
                ((MyRunnable) runnable).deferredResult.setErrorResult(new TooManyRequests());
            });

    @GetMapping(value = "/blah", produces = MediaType.APPLICATION_JSON_UTF8_VALUE)
    public DeferredResult<ResponseEntity<String>> yourRestService() {
        final DeferredResult<ResponseEntity<String>> deferredResult = new DeferredResult<>();
        this.executorService.execute(new MyRunnable(deferredResult));
        return deferredResult;
    }
}
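If, as suggested above, you would rather queue pending requests than reject them immediately, a sketch of the change is to swap the SynchronousQueue for a bounded queue (the capacity of 50 is purely illustrative); only requests that overflow the queue then get the 429:

// replaces the executorService field in the controller above
private final ExecutorService executorService = new ThreadPoolExecutor(2, 2,
        0L, TimeUnit.MILLISECONDS,
        new ArrayBlockingQueue<Runnable>(50), // up to 50 requests wait for a free worker
        (runnable, executor) ->
                ((MyRunnable) runnable).deferredResult.setErrorResult(new TooManyRequests()));

Keep in mind that queued requests still count against the DeferredResult timeout, so callers may time out before a worker ever picks their task up.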
By default, RequestMappingHandlerAdapter handles @Controller's @RequestMapping methods. So the easiest way is to create your own RequestMappingHandlerAdapter and override its handleInternal to add your control logic.
Below is the pseudocode:
public static class MyRequestMappingHandlerAdapter extends RequestMappingHandlerAdapter {

    // counter to keep track of the number of concurrent requests for each HandlerMethod
    // (a HandlerMethod represents one @RequestMapping method)
    private final Map<HandlerMethod, AtomicInteger> requestCounterMap = new ConcurrentHashMap<>();

    @Override
    protected ModelAndView handleInternal(HttpServletRequest request, HttpServletResponse response,
            HandlerMethod handlerMethod) throws Exception {
        AtomicInteger counter = requestCounterMap.computeIfAbsent(handlerMethod, m -> new AtomicInteger());
        // Increase the counter for this handlerMethod by 1.
        // Reject if more than 2 requests are already in flight.
        if (counter.incrementAndGet() > 2) {
            counter.decrementAndGet();
            // Spring 5's ResponseStatusException; any exception mapped to 429 works as well
            throw new ResponseStatusException(HttpStatus.TOO_MANY_REQUESTS);
        }
        try {
            return super.handleInternal(request, response, handlerMethod);
        } finally {
            // method finished, decrease the counter by 1
            counter.decrementAndGet();
        }
    }
}
Assuming you are using the Spring Boot MVC auto-configuration, you can replace RequestMappingHandlerAdapter with your customized one by creating a WebMvcRegistrations bean and overriding its getRequestMappingHandlerAdapter() method:
@Bean
public WebMvcRegistrations webMvcRegistrations() {
    return new WebMvcRegistrations() {
        @Override
        public RequestMappingHandlerAdapter getRequestMappingHandlerAdapter() {
            return new MyRequestMappingHandlerAdapter();
        }
    };
}

Spring 4 @Service with @RequestScope

In order to optimize SQL requests, I've made a service that aggregates other services' consumption to avoid unnecessary calls.
(Some pages of my webapp are called millions of times per day, so I want to reuse the results of database queries as many times as possible within each request.)
The solution I created is this one:
My service has @RequestScope instead of the default scope (singleton).
In MyService:
@Service
@RequestScope
public class MyService {

    private int param;

    @Autowired
    private OtherService otherService;
    @Autowired
    private OtherService2 otherService2;

    private List<Elements> elements;
    private List<OtherElements> otherElements;

    public void init(int param) {
        this.param = param;
    }

    public List<Elements> getElements() {
        if (this.elements == null) {
            //Init elements
            this.elements = otherService.getElements(param);
        }
        return this.elements;
    }

    public List<OtherElements> getOtherElements() {
        if (this.otherElements == null) {
            //Init otherElements
            this.otherElements = otherService2.getOtherElements(param);
        }
        return this.otherElements;
    }

    public String getMainTextPres() {
        //Need to use elements
        List<Elements> elts = this.getElements();
        ....
        return myString;
    }

    public String getSecondTextPres() {
        //Need to use elements
        List<Elements> elts = this.getElements();
        //Also need to use otherElements
        List<OtherElements> otherElts = this.getOtherElements();
        ....
        return myString;
    }
}
In my controller:
public class MyController {

    @Autowired
    MyService myService;

    @RequestMapping...
    public ModelAndView myFunction(int param) {
        myService.init(param);
        String mainTextPres = myService.getMainTextPres();
        String secondTextPres = myService.getSecondTextPres();
    }

    @OtherRequestMapping...
    public ModelAndView myOtherFunction(int param) {
        myService.init(param);
        String secondTextPres = myService.getSecondTextPres();
    }
}
Of course, I've simplified my example, because myService uses lots of other elements, and I protect the initialization of its member attributes.
This approach has the advantage of lazy-loading the attributes only when I need them.
If somewhere in my project (in the same or another controller) I only need the second text, then calling "getSecondTextPres" will initialize both lists, which was not the case in my example above because the first list had already been initialized when "getMainTextPres" was called.
My questions are:
What do you think of this way of doing things?
Might I have performance issues because I instantiate my service on each request?
Thanks a lot !
Julien
I think that your idea is not going to fly. If you call the same or a different controller, it will be a different request; in that case a new bean will be created (elements and otherElements are empty again).
Have you been thinking about caching? Spring has nice support where you can define cache expiration, etc.
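For illustration, a minimal sketch of that Spring caching support with an expiration, using the Guava cache manager (the cache name, TTL and cached method are made up for the example):

import java.util.concurrent.TimeUnit;

import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.cache.guava.GuavaCacheManager;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import com.google.common.cache.CacheBuilder;

@Configuration
@EnableCaching
public class ElementsCacheConfig {

    @Bean
    public CacheManager cacheManager() {
        GuavaCacheManager cacheManager = new GuavaCacheManager("elements");
        // entries expire 30 seconds after being written
        cacheManager.setCacheBuilder(CacheBuilder.newBuilder().expireAfterWrite(30, TimeUnit.SECONDS));
        return cacheManager;
    }
}

// In OtherService, results are then keyed by the method parameter, so repeated calls
// with the same param within the TTL are served from the cache instead of the database:
// @Cacheable("elements")
// public List<Elements> getElements(int param) { ... }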
It's not quite clear to me what exactly you want to optimise by instantiating the service in request scope. If you are bothered about memory footprint, you could easily measure it via JMX or VisualVM.
On the other hand, you could make all service calls pure, i.e. depending only on function parameters and (of course) database state, and instantiate the service with the default singleton scope.
This decision will save you a reasonable amount of resources, as you will not instantiate a possibly large object graph on each call and will not require the GC to clean things up after the request is done.
The rule of thumb is to think about why exactly you need the specific class instantiated on every call; if it doesn't keep any call-specific state, make it a singleton.
Speaking about lazy loading, it always helps to think about the worst case repeated, say, 100 times. Will it really save you anything compared to loading once for the whole container lifetime?

How to build and utilize a cache using CacheBuilder in Java

I have a method that pulls in a bunch of data. This has the potential to take a decent amount of time due to the large data set and the amount of computation required. The method that does this call will be used many times. The result list should return the same results each time. With that being said, I want to cache the results, so I only have to do that computation once. I'm supposed to use the CacheBuilder class. The script I have is essentially something like:
class CheckValidValues implements AValidValueInterface {

    private ADataSourceInterface dataSource;

    public CheckValidValues(ADataSourceInterface dataSource) {
        this.dataSource = dataSource;
    }

    @Override
    public void validate(String value) {
        List<?> validValues = dataSource.getValidValues();
        if (!validValues.contains(value)) {
            // throw an exception
        }
    }
}
So I'm not even sure where I should be putting the caching code (i.e. in the CheckValidValues class or in the getValidValues() method of dataSource). Also, I'm not entirely sure how you can add code to one of the methods without it instantiating the cache multiple times. Here's the route I'm trying to take, but I have no idea if it's correct. Adding this above the List<?> validValues = dataSource.getValidValues() line:
LoadingCache<String, List<?>> validValuesCache = CacheBuilder.newBuilder()
        .expireAfterAccess(30, TimeUnit.SECONDS)
        .build(
                new CacheLoader<String, List<?>>() {
                    public List<?> load(@Nonnull String validValues) {
                        return valuesSupplier.getValidValues();
                    }
                }
        );
Then later, I'd think I could get that value with:
validValuesCache.get("validValues");
What I think should happen there is that it will do the getValidValues command and store that in the cache. However, if this method is being called multiple times, then, to me, that would mean it would create a new cache each time.
Any idea what I should do for this? I simply want to add the results of the getValidValues() method to cache so that it can be used in the next iteration without having to redo any computations.
You only want to cache a single value, the list of valid values. Use Guava's Suppliers.memoizeWithExpiration(Supplier<T> delegate, long duration, TimeUnit unit).
Each valid value exists only once, so your List is essentially a Set. Back it with a HashSet (or a more efficient Guava variant). That way contains() is a hash-table lookup instead of a sequential search through the list.
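Putting both suggestions together, a sketch of how the question's CheckValidValues class could hold a single memoized, expiring set (the 30-second expiry and the exception type are just examples):

import java.util.Set;
import java.util.concurrent.TimeUnit;

import com.google.common.base.Supplier;
import com.google.common.base.Suppliers;
import com.google.common.collect.ImmutableSet;

class CheckValidValues implements AValidValueInterface {

    private final ADataSourceInterface dataSource;

    // Built once in the constructor, so every validate() call reuses the same
    // memoized supplier instead of creating a new cache each time.
    private final Supplier<Set<Object>> validValues;

    CheckValidValues(ADataSourceInterface dataSource) {
        this.dataSource = dataSource;
        this.validValues = Suppliers.memoizeWithExpiration(
                () -> ImmutableSet.copyOf(dataSource.getValidValues()),
                30, TimeUnit.SECONDS);
    }

    @Override
    public void validate(String value) {
        // Set.contains() is a hash lookup rather than a linear scan of the list
        if (!validValues.get().contains(value)) {
            throw new IllegalArgumentException("Invalid value: " + value);
        }
    }
}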
We use Guava and Spring-Caching in a couple of projects where we defined the beans via Java configuration like this:
@Configuration
@EnableCaching
public class GuavaCacheConfig {
    ...
    @Bean(name="CacheEnabledService")
    public SomeService someService() {
        return new CacheableSomeService();
    }

    @Bean(name="guavaCacheManager")
    public CacheManager cacheManager() {
        // if different caching strategies should occur use this technique:
        // http://www.java-allandsundry.com/2014/10/spring-caching-abstraction-and-google.html
        GuavaCacheManager guavaCacheManager = new GuavaCacheManager();
        guavaCacheManager.setCacheBuilder(cacheBuilder());
        return guavaCacheManager;
    }

    @Bean(name = "expireAfterAccessCacheBuilder")
    public CacheBuilder<Object, Object> cacheBuilder() {
        return CacheBuilder.newBuilder()
                .recordStats()
                .expireAfterAccess(5, TimeUnit.SECONDS);
    }

    @Bean(name = "keyGenerator")
    public KeyGenerator keyGenerator() {
        return new CustomKeyGenerator();
    }
    ...
}
Note that the code above was taken from one of our integration tests.
The service whose return values should be cached is defined as depicted below:
@Component
@CacheConfig(cacheNames="someCache", keyGenerator=CustomKeyGenerator.NAME, cacheManager="guavaCacheManager")
public class CacheableService {

    public final static String CACHE_NAME = "someCache";
    ...
    @Cacheable
    public <E extends BaseEntity> E findEntity(String id) {
        ...
    }
    ...
    @CachePut
    public <E extends BaseEntity> ObjectId persist(E entity) {
        ...
    }
    ...
}
As Spring Caching uses an AOP approach, on invoking an @Cacheable-annotated method Spring will first check whether a previously stored return value is already available in the cache for the invoked method (depending on the cache key; we therefore use a custom key generator). If no value is available yet, Spring will invoke the actual service method and store the return value in the local cache, where it is available on subsequent calls.
@CachePut will always execute the service method and put the return value into the cache. This is useful if an existing value in the cache should be replaced by a new one, for example in case of an update.
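As an illustration of that behaviour (the demo bean and the entity ID are made up):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

@Component
class CacheDemo {

    @Autowired
    private CacheableService cacheableService;

    void demo() {
        // first call: cache miss, the method body runs and the result is stored under the generated key
        BaseEntity first = cacheableService.findEntity("42");

        // second call with the same key: served from the Guava cache, findEntity's body is skipped
        BaseEntity again = cacheableService.findEntity("42");

        // @CachePut: persist() always executes and refreshes the cached value for its key
        cacheableService.persist(first);
    }
}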

Spring @Cacheable: Preserve old value on error

I am planning to use the Spring @Cacheable annotation in order to cache the results of invoked methods.
But this implementation somehow does not look very "safe" to me. As far as I understand, the returned value will be cached by the underlying caching engine and will be deleted when the Spring evict method is called.
I would need an implementation which does not discard the old value until a new value has been loaded. That is required so that the following scenario works:
Cacheable method is called -> Valid result returned
Result will be cached by the Spring #Cacheable backend
Spring invalidates cache because it expired (e.g. TTL of 1 hour)
Cacheable method is called again -> Exception/null value returned!
The OLD result will be cached again and thus future invocations of the method will return a valid result.
How would this be possible?
Your requirement of serving old values if the @Cacheable method throws an exception can easily be achieved with a minimal extension to Google Guava.
Use the following example configuration
@Configuration
@EnableWebMvc
@EnableCaching
@ComponentScan("com.yonosoft.poc.cache")
public class ApplicationConfig extends CachingConfigurerSupport {

    @Bean
    @Override
    public CacheManager cacheManager() {
        SimpleCacheManager simpleCacheManager = new SimpleCacheManager();
        GuavaCache todoCache = new GuavaCache("todo", CacheBuilder.newBuilder()
                .refreshAfterWrite(10, TimeUnit.MINUTES)
                .maximumSize(10)
                .build(new CacheLoader<Object, Object>() {
                    @Override
                    public Object load(Object key) throws Exception {
                        CacheKey cacheKey = (CacheKey) key;
                        return cacheKey.method.invoke(cacheKey.target, cacheKey.params);
                    }
                }));
        simpleCacheManager.setCaches(Arrays.asList(todoCache));
        return simpleCacheManager;
    }

    @Bean
    @Override
    public KeyGenerator keyGenerator() {
        return new KeyGenerator() {
            @Override
            public Object generate(Object target, Method method, Object... params) {
                return new CacheKey(target, method, params);
            }
        };
    }

    private class CacheKey extends SimpleKey {
        private static final long serialVersionUID = -1013132832917334168L;
        private Object target;
        private Method method;
        private Object[] params;

        private CacheKey(Object target, Method method, Object... params) {
            super(params);
            this.target = target;
            this.method = method;
            this.params = params;
        }
    }
}
CacheKey serves the single purpose of exposing SimpleKey attributes. Guava's refreshAfterWrite configures the refresh time without expiring the cache entries. If a method annotated with @Cacheable throws an exception, the cache will continue to serve the old value until it is evicted due to maximumSize or replaced by a new value from a successful method response. You can use refreshAfterWrite in conjunction with expireAfterAccess and expireAfterWrite.
I may be wrong in my reading of the Spring code, notably org.springframework.cache.interceptor.CacheAspectSupport#execute(org.springframework.cache.interceptor.CacheOperationInvoker, org.springframework.cache.interceptor.CacheAspectSupport.CacheOperationContexts), but I believe the abstraction indeed does not provide what you are asking for.
Spring will not expire entries itself; that is left to the underlying caching implementation.
You mention that you would like to see values even though they have expired. That goes against the expiry abstraction used in most cache implementations that I know of.
Returning a previously cached value on an invocation error is clearly use-case specific. The Spring abstraction will simply throw the error back at the user. The CacheErrorHandler mechanism only deals with cache-invocation-related exceptions.
All in all, it seems to me that what you are asking for is very use case specific and thus not something an abstraction would/should offer.
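For completeness, a sketch of the CacheErrorHandler mechanism mentioned above: it only intercepts errors thrown by the cache itself (e.g. a failing cache store), not exceptions thrown by the @Cacheable method. The handler below simply logs a failed cache read and lets Spring fall back to invoking the real method:

import org.springframework.cache.Cache;
import org.springframework.cache.annotation.CachingConfigurerSupport;
import org.springframework.cache.interceptor.CacheErrorHandler;
import org.springframework.cache.interceptor.SimpleCacheErrorHandler;
import org.springframework.context.annotation.Configuration;

@Configuration
public class CacheErrorConfig extends CachingConfigurerSupport {

    @Override
    public CacheErrorHandler errorHandler() {
        return new SimpleCacheErrorHandler() {
            @Override
            public void handleCacheGetError(RuntimeException exception, Cache cache, Object key) {
                // a cache read failed: log and treat it as a cache miss,
                // so the @Cacheable method is invoked instead of propagating the error
                System.err.println("Cache get failed for key " + key + ": " + exception.getMessage());
            }
        };
    }
}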

Java Spring Recreate specific Bean

I want to re-create (new object) a specific bean at runtime (without restarting the server) upon some DB changes. This is how it looks:
@Component
public class TestClass {

    @Autowired
    private MyShop myShop; //to be refreshed at runtime bean

    @PostConstruct //DB listeners
    public void initializeListener() throws Exception {
        //...
        // code to get listeners config
        //...
        myShop.setListenersConfig(listenersConfig);
        myShop.initialize();
    }

    public void restartListeners() throws Exception {
        myShop.shutdownListeners();
        initializeListener();
    }
}
This code does not work, as the myShop object is created by Spring as a singleton and its context does not get refreshed unless the server is restarted. How can I refresh (create a new object for) myShop?
One bad way I can think of is to create a new myShop object inside restartListeners(), but that does not seem right to me.
In DefaultListableBeanFactory you have the public method destroySingleton("beanName"), so you can play with it, but you have to be aware that if you autowired your bean, it will keep the same instance of the object that was autowired in the first place. You can try something like this:
@RestController
public class MyRestController {

    @Autowired
    SampleBean sampleBean;

    @Autowired
    ApplicationContext context;

    @Autowired
    DefaultListableBeanFactory beanFactory;

    @RequestMapping(value = "/ ")
    @ResponseBody
    public String showBean() throws Exception {
        SampleBean contextBean = (SampleBean) context.getBean("sampleBean");
        beanFactory.destroySingleton("sampleBean");
        return "Compare beans " + sampleBean + "=="
                + contextBean;
        //while sampleBean stays the same contextBean gets recreated in the context
    }
}
It is not pretty, but it shows how you can approach it. If you were dealing with a controller rather than a component class, you could have the injection as a method argument and it would also work, because the bean would not be recreated until needed inside the method, at least that's what it looks like. An interesting question is who else holds a reference to the old bean besides the object it was originally autowired into, because it has been removed from the context. I wonder whether it still exists or gets garbage collected once released in the controller above; if some other objects in the context held a reference to it, the above would cause problems.
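To make the recreation visible, you can fetch the bean again after destroying the singleton, e.g. extending the method above (still the same hypothetical SampleBean):

// destroy only the cached singleton instance; the bean definition stays registered
beanFactory.destroySingleton("sampleBean");

// this lookup now creates (and caches) a brand new SampleBean instance
SampleBean recreated = (SampleBean) context.getBean("sampleBean");

// note: fields that were @Autowired earlier (like this.sampleBean) still reference the old instance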
We have the same use-case. As already mentioned, one of the main issues with re-creating a bean at runtime is how to update the references that have already been injected. This presents the main challenge.
To work around this issue I've used Java's AtomicReference<> class. Instead of injecting the bean directly, I've wrapped it in an AtomicReference and then injected that. Because the object wrapped by the AtomicReference can be reset in a thread-safe manner, I am able to change the underlying object when a database change is detected. Below is an example configuration / usage of this pattern:
@Configuration
public class KafkaConfiguration {

    private static final String KAFKA_SERVER_LIST = "kafka.server.list";
    private static AtomicReference<String> serverList;

    @Resource
    MyService myService;

    @PostConstruct
    public void init() {
        serverList = new AtomicReference<>(myService.getPropertyValue(KAFKA_SERVER_LIST));
    }

    // Just a helper method to check if the value for the server list has changed
    // Not a big fan of the static usage but needed a way to compare the old / new values
    public static boolean isRefreshNeeded() {
        MyService service = Registry.getApplicationContext().getBean("myService", MyService.class);
        String newServerList = service.getPropertyValue(KAFKA_SERVER_LIST);
        // Arguably serverList does not need to be Atomic for this usage as this is executed
        // on a single thread
        if (!StringUtils.equals(serverList.get(), newServerList)) {
            serverList.set(newServerList);
            return true;
        }
        return false;
    }

    public ProducerFactory<String, String> kafkaProducerFactory() {
        Map<String, Object> configProps = new HashMap<>();
        configProps.put(ProducerConfig.CLIENT_ID_CONFIG, "...");
        // Here we are pulling the value for the serverList that has been set
        // see the init() and isRefreshNeeded() methods above
        configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, serverList.get());
        configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        return new DefaultKafkaProducerFactory<>(configProps);
    }

    @Bean
    @Lazy
    public AtomicReference<KafkaTemplate<String, String>> kafkaTemplate() {
        KafkaTemplate<String, String> template = new KafkaTemplate<>(kafkaProducerFactory());
        AtomicReference<KafkaTemplate<String, String>> ref = new AtomicReference<>(template);
        return ref;
    }
}
I then inject the bean where needed, e.g.
public class MyClass1 {
    @Resource
    AtomicReference<KafkaTemplate<String, String>> kafkaTemplate;
    ...
}

public class MyClass2 {
    @Resource
    AtomicReference<KafkaTemplate<String, String>> kafkaTemplate;
    ...
}
In a separate class I run a scheduler thread that is started when the application context is started. The class looks something like this:
class Manager implements Runnable {

    private ScheduledExecutorService scheduler;

    public void start() {
        scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(this, 0, 120, TimeUnit.SECONDS);
    }

    public void stop() {
        scheduler.shutdownNow();
    }

    @Override
    public void run() {
        try {
            if (KafkaConfiguration.isRefreshNeeded()) {
                AtomicReference<KafkaTemplate<String, String>> kafkaTemplate =
                        (AtomicReference<KafkaTemplate<String, String>>) Registry.getApplicationContext().getBean("kafkaTemplate");
                // Get new instance here. This will have the new value for the server list
                // that was "refreshed"
                KafkaConfiguration config = new KafkaConfiguration();
                // The set here replaces the wrapped object in a thread-safe manner with the new bean
                // and thus all injected instances now use the newly created object
                kafkaTemplate.set(config.kafkaTemplate().get());
            }
        } catch (Exception e) {
        } finally {
        }
    }
}
I am still on the fence about whether this is something I would advocate, as it does have a slight smell to it. But in limited and careful usage it does provide an alternative approach to the stated use-case. Please be aware that from a Kafka standpoint this code example will leave the old producer open; in reality one would need to do a proper flush() call on the old producer and close it. But that's not what the example is meant to demonstrate.
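If you do want to deal with that, one possibility (a sketch against the run() method above, assuming spring-kafka's KafkaTemplate API) is to keep a handle on the old template and flush it after the swap:

// inside run(), instead of kafkaTemplate.set(...):
KafkaTemplate<String, String> newTemplate = config.kafkaTemplate().get();

// getAndSet swaps in the new template and hands back the previous one
KafkaTemplate<String, String> oldTemplate = kafkaTemplate.getAndSet(newTemplate);

// flush whatever is still buffered in the old producer before dropping the reference;
// fully closing it would additionally require destroying its DefaultKafkaProducerFactory
oldTemplate.flush();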
