Spring REST webservice and session persistence - java

I have this Spring webservice test code:
@RestController
@RequestMapping("/counter")
public class CounterController {

    @Autowired
    private Counter counter;

    @RequestMapping(value = "/inc", method = GET)
    public int inc() throws Exception {
        counter.incCounter();
        return counter.getCounter();
    }

    @RequestMapping(value = "/get", method = GET)
    public int get() throws Exception {
        Thread.sleep(5000);
        return counter.getCounter();
    }
}
where Counter is a session-scoped object:
@Component
@Scope(value = WebApplicationContext.SCOPE_SESSION, proxyMode = ScopedProxyMode.TARGET_CLASS)
public class Counter implements Serializable {

    private static final long serialVersionUID = 9162936878293396831L;

    private int value;

    public int getCounter() {
        return value;
    }

    public void incCounter() {
        value += 1;
    }
}
The session configuration:
@Configuration
@EnableRedisHttpSession(maxInactiveIntervalInSeconds = 1800)
public class HttpSessionConfig {

    @Bean
    public JedisConnectionFactory connectionFactory() {
        return new JedisConnectionFactory();
    }

    @Bean
    public HttpSessionStrategy httpSessionStrategy() {
        return new HeaderHttpSessionStrategy();
    }
}
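A side note on this configuration: with HeaderHttpSessionStrategy the session id travels in the x-auth-token header rather than in a cookie, so a client has to echo that header back to stay in the same session. A minimal client sketch, with host and port as assumptions:

// Hypothetical client: the first call captures the x-auth-token header issued
// by Spring Session; the second call sends it back to reuse the same session.
import java.net.HttpURLConnection;
import java.net.URL;

public class CounterClient {
    public static void main(String[] args) throws Exception {
        HttpURLConnection inc = (HttpURLConnection) new URL("http://localhost:8080/counter/inc").openConnection();
        inc.getResponseCode(); // force the request
        String token = inc.getHeaderField("x-auth-token");

        HttpURLConnection get = (HttpURLConnection) new URL("http://localhost:8080/counter/get").openConnection();
        get.setRequestProperty("x-auth-token", token); // reuse the session
        System.out.println("status=" + get.getResponseCode());
    }
}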
As you can see, the get() method sleeps 5 seconds and then returns the value of the counter.
The problem is that if I call inc() several times while get() is executing, all those counter changes are lost: when get() finishes, it persists the counter with the value it had when execution started (it is a session object), and once that write happens all the increments are gone.
Is there a way to prevent methods that do not modify a session object from persisting it?
Update: I think the Spring code corroborates this wrong behavior. This snippet from the class ServletRequestAttributes shows that every session object that is accessed (regardless of whether the access is read-only) is marked to be saved when the web service operation finishes:
@Override
public Object getAttribute(String name, int scope) {
    if (scope == SCOPE_REQUEST) {
        if (!isRequestActive()) {
            throw new IllegalStateException(
                    "Cannot ask for request attribute - request is not active anymore!");
        }
        return this.request.getAttribute(name);
    }
    else {
        HttpSession session = getSession(false);
        if (session != null) {
            try {
                Object value = session.getAttribute(name);
                if (value != null) {
                    this.sessionAttributesToUpdate.put(name, value);
                }
                return value;
            }
            catch (IllegalStateException ex) {
                // Session invalidated - shouldn't usually happen.
            }
        }
        return null;
    }
}
According to the Spring Session documentation:
Optimized Writes
The Session instances managed by RedisOperationsSessionRepository
keeps track of the properties that have changed and only updates
those. This means if an attribute is written once and read many times
we only need to write that attribute once.
Either the documentation is wrong, or I'm doing something wrong.

I think you made some mistakes while testing your code. I have just tested it, and it works as expected.
I used SoapUI and created 2 requests with the same JSESSIONID value in the Cookie (same session).
Then I requested /get, and meanwhile, in the second request window, I spammed /inc.
What /get returned was the number of /inc calls. (At the beginning the value was 0; then I incremented it to 11 while /get was sleeping. Finally, /get returned 11.)
I suggest you double-check that nothing is messed up with your session.
Edit: I ran your code with additional logs (I've increased the sleep time to 10000 ms).
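The instrumented code itself wasn't included in the answer; a plausible reconstruction of the added logging, assuming a log field on the controller, would be:

// Assumed instrumentation matching the log lines below.
@RequestMapping(value = "/inc", method = GET)
public int inc() {
    counter.incCounter();
    log.info("Incrementing counter value: {}", counter.getCounter());
    return counter.getCounter();
}

@RequestMapping(value = "/get", method = GET)
public int get() throws Exception {
    log.info("Before 10sec counter value: {}", counter.getCounter());
    Thread.sleep(10_000);
    log.info("After 10sec counter value: {}", counter.getCounter());
    return counter.getCounter();
}

The resulting log output: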
2016-04-06 11:56:10.977 INFO 7884 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet : FrameworkServlet 'dispatcherServlet': initialization completed in 14 ms
2016-04-06 11:56:11.014 INFO 7884 --- [nio-8080-exec-1] c.p.controller.TestServiceController : Before 10sec counter value: 0
2016-04-06 11:56:21.015 INFO 7884 --- [nio-8080-exec-1] c.p.controller.TestServiceController : After 10sec counter value: 0
2016-04-06 11:56:36.955 INFO 7884 --- [nio-8080-exec-2] c.p.controller.TestServiceController : Before 10sec counter value: 0
2016-04-06 11:56:46.956 INFO 7884 --- [nio-8080-exec-2] c.p.controller.TestServiceController : After 10sec counter value: 0
2016-04-06 11:56:50.558 INFO 7884 --- [nio-8080-exec-3] c.p.controller.TestServiceController : Incrementing counter value: 1
2016-04-06 11:56:53.494 INFO 7884 --- [nio-8080-exec-4] c.p.controller.TestServiceController : Before 10sec counter value: 1
2016-04-06 11:57:03.496 INFO 7884 --- [nio-8080-exec-4] c.p.controller.TestServiceController : After 10sec counter value: 1
2016-04-06 11:57:05.600 INFO 7884 --- [nio-8080-exec-5] c.p.controller.TestServiceController : Before 10sec counter value: 1
2016-04-06 11:57:06.715 INFO 7884 --- [nio-8080-exec-6] c.p.controller.TestServiceController : Incrementing counter value: 2
2016-04-06 11:57:06.869 INFO 7884 --- [nio-8080-exec-7] c.p.controller.TestServiceController : Incrementing counter value: 3
2016-04-06 11:57:07.038 INFO 7884 --- [nio-8080-exec-8] c.p.controller.TestServiceController : Incrementing counter value: 4
2016-04-06 11:57:07.186 INFO 7884 --- [nio-8080-exec-9] c.p.controller.TestServiceController : Incrementing counter value: 5
2016-04-06 11:57:07.321 INFO 7884 --- [io-8080-exec-10] c.p.controller.TestServiceController : Incrementing counter value: 6
2016-04-06 11:57:07.478 INFO 7884 --- [nio-8080-exec-1] c.p.controller.TestServiceController : Incrementing counter value: 7
2016-04-06 11:57:07.641 INFO 7884 --- [nio-8080-exec-2] c.p.controller.TestServiceController : Incrementing counter value: 8
2016-04-06 11:57:07.794 INFO 7884 --- [nio-8080-exec-3] c.p.controller.TestServiceController : Incrementing counter value: 9
2016-04-06 11:57:07.967 INFO 7884 --- [nio-8080-exec-4] c.p.controller.TestServiceController : Incrementing counter value: 10
2016-04-06 11:57:08.121 INFO 7884 --- [nio-8080-exec-6] c.p.controller.TestServiceController : Incrementing counter value: 11
2016-04-06 11:57:15.602 INFO 7884 --- [nio-8080-exec-5] c.p.controller.TestServiceController : After 10sec counter value: 11

It seems that it has nothing to do with my setup; this is the expected behavior of Spring Session with session-scoped beans. For me it's a critical problem, so I've decided to forget distributed caches (Redis and Hazelcast) and use the MapSessionRepository.
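A minimal sketch of that in-memory fallback, assuming Spring Session 1.x APIs (where MapSessionRepository has a no-arg constructor backed by a ConcurrentHashMap; later versions require passing the Map explicitly):

@Configuration
@EnableSpringHttpSession
public class HttpSessionConfig {

    @Bean
    public SessionRepository<ExpiringSession> sessionRepository() {
        // Sessions are kept inside this JVM, so no serialization round-trip occurs.
        return new MapSessionRepository();
    }

    @Bean
    public HttpSessionStrategy httpSessionStrategy() {
        // Keep the header-based session strategy from the original configuration.
        return new HeaderHttpSessionStrategy();
    }
}

Note that this trades the lost-update problem for losing session distribution: sessions no longer survive restarts or spread across nodes.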

Related

Unable to reset partition offset with CooperativeStickyAssignor

I'm trying to reset the offset of a partition to 0, like this:
final KafkaConsumer<String, String> stateConsumer = new KafkaConsumer<>(stateConsumerProperties.getConsumerProps());
stateConsumer.subscribe(STATE_TOPIC);
...
stateConsumer.seekToBeginning(stateConsumer.assignment());
...
stateConsumer.poll(Duration.ofMillis(1000)) // timeout too long, testing only
        .forEach(record -> {
            log.info("Warmup state read: " + record.value() + ", partition: " + record.partition());
            stateMessages.add(record.value());
        });
Consumer config is only this:
this.stateConsumerProperties.setConsumer(Map.of(
        ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092",
        ConsumerConfig.GROUP_ID_CONFIG, "state",
        ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest",
        ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer",
        ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer",
        ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "30",
        ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG, "2000",
        ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG, CustomPartitionAssignor.class.getName()
));
The "custom" assignor is only for logging purposes, these are the only overrides:
public class CustomPartitionAssignor extends CooperativeStickyAssignor {

    ...

    @Override
    public GroupAssignment assign(Cluster metadata, GroupSubscription groupSubscription) {
        return super.assign(metadata, groupSubscription);
    }

    @Override
    public void onAssignment(Assignment assignment, ConsumerGroupMetadata metadata) {
        super.onAssignment(assignment, metadata);
        log.info("Previous assigned partitions: " + Arrays.toString(ASSIGNED_PARTITIONS.toArray()));
        ASSIGNED_PARTITIONS = assignment.partitions();
        log.info("New Assigned partitions: " + assignment.partitions());
    }

    ...
}
I have only two identical consumers for testing, and each consumer runs this code on join.
The first joiner has no issue, since there is no state yet ("consumers" read another topic and produce "state" to the state topic).
The problem is, this is what I get in the log when the second consumer joins:
consumer_2 | 2022-10-10 18:46:26.833 INFO 7 --- [pool-1-thread-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-state-2, groupId=state] Setting offset for partition state-1 to the committed offset FetchPosition{offset=361, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=Optional[179bd3c6448e:9092 (id: 1001 rack: null)], epoch=0}}
consumer_2 | 2022-10-10 18:46:26.875 INFO 7 --- [pool-1-thread-1] c.company.fdi.poc.kafka.service.Consumer : Warmup state read: state 723, partition: 1
consumer_2 | 2022-10-10 18:46:26.876 INFO 7 --- [pool-1-thread-1] c.company.fdi.poc.kafka.service.Consumer : Warmup state read: state 725, partition: 1
consumer_2 | 2022-10-10 18:46:26.876 INFO 7 --- [pool-1-thread-1] c.company.fdi.poc.kafka.service.Consumer : Warmup state read: state 727, partition: 1
State "production" is just an atomic counter that starts at 0.
What I want to happen is that when a new consumer joins, it resets its assigned partition offset to 0 and starts reading from the first record.
I have a suspicion that this has something to do with the CooperativeStickyAssignor, but I have no clue as of now.
You can assume 2 consumers, 2 partitions for state topic if this helps. Cluster should be as balanced as possible.
Start is two partitions one consumer, then I add another consumer.
Any help much appreciated, thanks in advance.
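One thing worth checking, as a hedged note rather than a confirmed diagnosis: subscribe() assigns partitions lazily, so stateConsumer.assignment() can still be empty when seekToBeginning() runs, and after a later cooperative rebalance the consumer simply resumes from the committed offset, which matches the log above. The usual pattern is to perform the seek in a ConsumerRebalanceListener, once partitions are actually assigned; a minimal sketch (the class name is an assumption):

// Sketch: seek to the beginning of partitions when they are actually assigned,
// instead of immediately after subscribe() while assignment() may be empty.
import java.util.Collection;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class SeekToBeginningOnAssign implements ConsumerRebalanceListener {

    private final KafkaConsumer<String, String> consumer;

    public SeekToBeginningOnAssign(KafkaConsumer<String, String> consumer) {
        this.consumer = consumer;
    }

    @Override
    public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
        // nothing to do for this use case
    }

    @Override
    public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
        // with the cooperative protocol this receives only newly added partitions
        consumer.seekToBeginning(partitions);
    }
}

Usage would be stateConsumer.subscribe(STATE_TOPIC, new SeekToBeginningOnAssign(stateConsumer)), keeping the rest of the loop unchanged.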

TraceId and SpanId are null in WebFilter even when it is registered after TraceWebFilter

I'm currently using Spring Cloud 2020.0.0 with Spring Boot 2.4.2 and have registered a custom web filter that logs information from the request (its order is webProperties.filterOrder + 1, so it should be registered after TraceWebFilter). For some reason, traceId and spanId are not in the MDC (and therefore are not logged).
WebFilter implementation:
class TraceWebFilter(
    private val webProperties: SleuthWebProperties,
    private val tracer: Tracer
) : WebFilter, Ordered {

    override fun getOrder(): Int = webProperties.filterOrder + 1

    override fun filter(exchange: ServerWebExchange, chain: WebFilterChain): Mono<Void> {
        logger.debug { "Building trace web filter" }
        return Mono.defer {
            MDC.put("Request-Method", exchange.request.method?.toString())
            MDC.put("Request-Path", exchange.request.path.toString())
            logger.info { tracer.currentSpan() }
            logger.info { ">> headers=${exchange.request.headers}" }
            logger.info { ">> method=${exchange.request.method}" }
            logger.info { ">> path=${exchange.request.path}" }
            exchange.response.beforeCommit {
                Mono.fromRunnable {
                    logger.info { "<< headers=${exchange.response.headers}" }
                    logger.info { "<< status=${exchange.response.rawStatusCode}" }
                }
            }
            chain.filter(exchange)
        }
    }
}
Logs:
app_1 | 2021-01-28 04:43:24.420 DEBUG [user-service,method=,path=,traceId=,spanId=] 1 --- [p-nio-80-exec-2] o.s.c.s.instrument.web.TraceWebFilter : Received a request to uri [/users]
app_1 | 2021-01-28 04:43:24.566 DEBUG [user-service,method=,path=,traceId=,spanId=] 1 --- [p-nio-80-exec-2] o.s.c.s.instrument.web.TraceWebFilter : Handled receive of span RealSpan(9b7383611e5a3bc2/9b7383611e5a3bc2)
app_1 | 2021-01-28 04:43:24.633 DEBUG [user-service,method=,path=,traceId=,spanId=] 1 --- [p-nio-80-exec-2] c.p.p.components.tracing.TraceWebFilter : Building trace web filter
app_1 | 2021-01-28 04:43:24.638 INFO [user-service,method=POST,path=/users,traceId=,spanId=] 1 --- [p-nio-80-exec-2] c.p.p.components.tracing.TraceWebFilter : null
app_1 | 2021-01-28 04:43:24.694 INFO [user-service,method=POST,path=/users,traceId=,spanId=] 1 --- [p-nio-80-exec-2] c.p.p.components.tracing.TraceWebFilter : >> headers=[content-type:"application/json", user-agent:"PostmanRuntime/7.26.8", accept:"*/*", cache-control:"no-cache", postman-token:"18d99e30-ce7a-4659-a2c1-21020d8bf2b5", host:"localhost:9100", accept-encoding:"gzip, deflate, br", connection:"keep-alive", content-length:"2"]
app_1 | 2021-01-28 04:43:24.699 INFO [user-service,method=POST,path=/users,traceId=,spanId=] 1 --- [p-nio-80-exec-2] c.p.p.components.tracing.TraceWebFilter : >> method=POST
app_1 | 2021-01-28 04:43:24.747 INFO [user-service,method=POST,path=/users,traceId=,spanId=] 1 --- [p-nio-80-exec-2] c.p.p.components.tracing.TraceWebFilter : >> path=/users
Please notice the following:
traceId and spanId in the logs are empty
the logged tracer.currentSpan() is null
Any clues as to why this is happening?
If you look at TraceWebFilter, you will see that it does not put anything into the MDC (the MDC is basically a ThreadLocal, and you are in an async event loop).
But it interacts with the Tracer, which you can also inject into your filter to get the current Span: tracer.currentSpan(); and from the TraceContext of the Span you can get the traceId and spanId.
It also puts the Span into the exchange attributes:
this.exchange.getAttributes().put(TRACE_REQUEST_ATTR, span);
I recommend injecting the Tracer into your filter and getting the current Span from it.
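A hedged sketch of that suggestion against the Sleuth 3.x API (where org.springframework.cloud.sleuth.Span exposes its TraceContext via context(), and traceId()/spanId() return strings); the helper class is an assumption for illustration:

// Sketch: read trace identifiers from the injected Tracer instead of the MDC.
import org.springframework.cloud.sleuth.Span;
import org.springframework.cloud.sleuth.Tracer;

public final class TraceIds {

    private TraceIds() {
    }

    // Returns "traceId/spanId", or a placeholder when no span is active.
    public static String describe(Tracer tracer) {
        Span span = tracer.currentSpan();
        if (span == null) {
            return "no-span";
        }
        return span.context().traceId() + "/" + span.context().spanId();
    }
}

Inside the filter this would replace the MDC lookups, e.g. logger.info { ">> trace=${TraceIds.describe(tracer)}" }.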

Fetch offset 5705 is out of range for partition, resetting offset

I am getting the below INFO messages every time in my Kafka consumer:
2020-07-04 14:54:27.640 INFO 1 --- [istener-0-0-C-1] c.n.o.c.h.p.n.PersistenceKafkaConsumer : beginning to consume batch messages , Message Count :11
2020-07-04 14:54:27.809 INFO 1 --- [istener-0-0-C-1] c.n.o.c.h.p.n.PersistenceKafkaConsumer : Execution Time :169
2020-07-04 14:54:27.809 INFO 1 --- [istener-0-0-C-1] essageListenerContainer$ListenerConsumer : Committing: {nbi.cm.changes.mo.test23-1=OffsetAndMetadata{offset=5705, leaderEpoch=null, metadata=''}}
2020-07-04 14:54:27.812 INFO 1 --- [istener-0-0-C-1] c.n.o.c.h.p.n.PersistenceKafkaConsumer : Acknowledgment Success
2020-07-04 14:54:27.813 INFO 1 --- [istener-0-0-C-1] o.a.k.c.consumer.internals.Fetcher : [Consumer clientId=consumer-1, groupId=cm-persistence-notification] Fetch offset 5705 is out of range for partition nbi.cm.changes.mo.test23-1, resetting offset
2020-07-04 14:54:27.820 INFO 1 --- [istener-0-0-C-1] o.a.k.c.c.internals.SubscriptionState : [Consumer clientId=consumer-1, groupId=cm-persistence-notification] Resetting offset for partition nbi.cm.changes.mo.test23-1 to offset 666703.
I got an OFFSET_OUT_OF_RANGE error in the debug log, and the consumer reset the partition to some other offset that does not actually exist. I was able to receive all the same messages in the console consumer.
But I had committed the offset just before that; the offsets are available in Kafka, and the log retention policy is 24h, so they have not been deleted from Kafka.
In the debug log, I got the below messages:
beginning to consume batch messages , Message Count :710
2020-07-02 04:58:31.486 DEBUG 1 --- [ce-notification] o.a.kafka.clients.FetchSessionHandler : [Consumer clientId=consumer-1, groupId=cm-persistence-notification] Node 1002 sent an incremental fetch response for session 253529272 with 1 response partition(s)
2020-07-02 04:58:31.486 DEBUG 1 --- [ce-notification] o.a.k.c.consumer.internals.Fetcher : [Consumer clientId=consumer-1, groupId=cm-persistence-notification] Fetch READ_UNCOMMITTED at offset 11372 for partition nbi.cm.changes.mo.test12-1 returned fetch data (error=OFFSET_OUT_OF_RANGE, highWaterMark=-1, lastStableOffset = -1, logStartOffset = -1, preferredReadReplica = absent, abortedTransactions = null, recordsSizeInBytes=0)
When exactly will we get OFFSET_OUT_OF_RANGE?
Listener class:
@KafkaListener(id = "batch-listener-0", topics = "topic1", groupId = "test", containerFactory = KafkaConsumerConfiguration.CONTAINER_FACTORY_NAME)
public void receive(
        @Payload List<String> messages,
        @Header(KafkaHeaders.RECEIVED_MESSAGE_KEY) List<String> keys,
        @Header(KafkaHeaders.RECEIVED_PARTITION_ID) List<Integer> partitions,
        @Header(KafkaHeaders.RECEIVED_TOPIC) List<String> topics,
        @Header(KafkaHeaders.OFFSET) List<Long> offsets,
        Acknowledgment ack) {
    long startTime = System.currentTimeMillis();
    handleNotifications(messages); // will take more than 5s to process all messages
    long endTime = System.currentTimeMillis();
    long timeElapsed = endTime - startTime;
    LOGGER.info("Execution Time :{}", timeElapsed);
    ack.acknowledge();
    LOGGER.info("Acknowledgment Success");
}
Do I need to close the consumer here? I thought spring-kafka automatically takes care of that. If not, could you please tell me how to close it in spring-kafka, and also how to check whether a rebalance happened? In the DEBUG log I am not able to see any log related to rebalancing.
I think your consumer may be rebalancing, because you are not calling consumer.close() at the end of your process.
This is a guess, but if the retention policy isn't kicking in (and the logs are not being deleted), it is the only reason I can think of for that behaviour.
Update:
As you set them up as @KafkaListeners, you could just call stop() on the KafkaListenerEndpointRegistry: kafkaListenerEndpointRegistry.stop()
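A minimal sketch of that suggestion (the registry bean is provided by Spring Kafka; the wrapper class is an assumption, and the listener id matches the question's @KafkaListener):

// Sketch: stop listener containers through the registry instead of
// closing the underlying consumer by hand.
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
import org.springframework.stereotype.Component;

@Component
public class ListenerLifecycle {

    @Autowired
    private KafkaListenerEndpointRegistry registry;

    public void stopAll() {
        registry.stop(); // stops every @KafkaListener container, closing consumers cleanly
    }

    public void stopBatchListener() {
        // id matches @KafkaListener(id = "batch-listener-0", ...)
        registry.getListenerContainer("batch-listener-0").stop();
    }
}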

What is the best option to execute 200 requests per minute to an external API in a multi-threaded Spring Boot application?

So, there is an external server (a game) with a market. There are a lot of products and their combinations; their total number is 2146.
I want to receive up-to-date pricing information from time to time.
When the application starts, I create 2146 tasks, each responsible for its own type of product. The tasks are started from a separate thread with a delay of 2.5 seconds between them.
@EventListener(ApplicationReadyEvent.class)
public void start() {
    log.info("Let's get party started!");
    Set<MarketplaceCollector> collectorSet = marketplaceCollectorProviders.stream()
            .flatMap(provider -> provider.getCollectors().stream())
            .peek(this::subscribeOfferDBSubscriber)
            .collect(Collectors.toSet());
    collectors.addAll(collectorSet);
    runTasks();
}

private void subscribeOfferDBSubscriber(MarketplaceCollector marketplaceCollector) {
    marketplaceCollector.subscribe(marketplaceOfferDBSubscriber);
}

private void runTasks() {
    Thread thread = new Thread(() -> collectors.forEach(this::runWithDelay));
    thread.setName("tasks-runner");
    thread.start();
}

private void runWithDelay(Collector collector) {
    try {
        collector.collect();
        Thread.sleep(2_500);
        counter += 1;
    } catch (InterruptedException e) {
        log.error(e);
    }
    log.debug(counter);
}
Using RestTemplate, I access the server. If the price has changed, the task runs again after 1 minute. If the price remains the same, I add one minute to the wait and make the request again. Thus, if the price does not change, the maximum time between requests for one product is 20 minutes. I assume that my application will execute up to 200 requests per minute; otherwise I will get a "too many requests" error.
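As a rough bound on that assumption: if all 2146 tasks sit at the 20-minute maximum delay, the steady-state rate is about 2146 / 20 ≈ 107 requests per minute; whenever many prices change and tasks fall back to the 1-minute minimum, the rate can climb well above 200, so the assumption only holds while most prices are stable.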
@Override
public void collect() {
    executorService.schedule(new MarketplaceTask(), INIT_DELAY, MILLISECONDS);
}

private MarketplaceRequest request() {
    return MarketplaceRequest.builder()
            .country(country)
            .industry(industry)
            .quality(quality)
            .orderBy(ASC)
            .currentPage(1)
            .build();
}

private class MarketplaceTask implements Runnable {

    private long MIN_DELAY = 60; // 1 minute
    private long MAX_DELAY = 1200; // 20 minutes
    private Double PREVIOUS_PRICE = Double.MAX_VALUE;
    private long DELAY = 0; // seconds

    @Override
    public void run() {
        log.debug(format("Collecting offer of %s %s in %s after %d m delay", industry, quality, country, DELAY / 60));
        MarketplaceResponse response = marketplaceClient.getOffers(request());
        subscribers.forEach(s -> s.onSubscription(response));
        updatePreviousPriceAndPeriod(response);
        executorService.schedule(this, DELAY, SECONDS);
    }

    private void updatePreviousPriceAndPeriod(MarketplaceResponse response) {
        if (response.isError() || response.getOffers().isEmpty()) {
            increasePeriod();
        } else {
            Double currentPrice = response.getOffers().get(0).getPriceWithTaxes();
            if (isPriceChanged(currentPrice)) {
                setMinimalDelay();
                PREVIOUS_PRICE = currentPrice;
            } else {
                increasePeriod();
            }
        }
    }

    private void increasePeriod() {
        if (DELAY < MAX_DELAY) {
            DELAY += 60;
        }
    }

    private boolean isPriceChanged(Double currentPrice) {
        return !Objects.equals(currentPrice, PREVIOUS_PRICE);
    }

    private void setMinimalDelay() {
        DELAY = MIN_DELAY;
    }
}
public MarketplaceClient(@Value("${app.host}") String host,
                         AuthenticationService authenticationService,
                         RestTemplateBuilder restTemplateBuilder,
                         CommonHeadersComposer headersComposer) {
    this.host = host;
    this.authenticationService = authenticationService;
    this.restTemplate = restTemplateBuilder.build();
    this.headersComposer = headersComposer;
}

public MarketplaceResponse getOffers(MarketplaceRequest request) {
    var authentication = authenticationService.getAuthentication();
    var requestEntity = new HttpEntity<>(requestString(request, authentication), headersComposer.getHeaders());
    log.debug(message("PING for", request));
    var responseEntity = restTemplate.exchange(host + MARKET_URL, POST, requestEntity, MarketplaceResponse.class);
    log.debug(message("PONG for", request));
    if (responseEntity.getBody().isError()) {
        log.warn("{}: {} {} in {}", responseEntity.getBody().getMessage(), request.getIndustry(), request.getQuality(), request.getCountry());
    }
    return responseEntity.getBody();
}

private String requestString(MarketplaceRequest request, Authentication authentication) {
    return format("countryId=%s&industryId=%s&quality=%s&orderBy=%s&currentPage=%s&ajaxMarket=1&_token=%s",
            request.getCountry().getId(), request.getIndustry().getId(), request.getQuality().getValue(),
            request.getOrderBy().getValue(), request.getCurrentPage(), authentication.getToken());
}
But a few minutes after the application starts, I have a problem: some tasks stop working. A request may go to the server and never return, while other tasks keep working without problems. Logs of how it behaves (for example):
2020-04-04 14:11:58.267 INFO 3546 --- [ main] c.g.d.e.harvesting.CollectorManager : Let's get party started!
2020-04-04 14:11:58.302 DEBUG 3546 --- [pool-1-thread-1] c.g.d.e.harvesting.MarketplaceCollector : Collecting offer of WEAPONS Q5 in GREECE after 0 m delay
2020-04-04 14:11:58.379 DEBUG 3546 --- [pool-1-thread-1] c.g.d.e.market.api.MarketplaceClient : PING for: WEAPONS Q5 in GREECE
2020-04-04 14:11:59.217 DEBUG 3546 --- [pool-1-thread-1] c.g.d.e.market.api.MarketplaceClient : PONG for: WEAPONS Q5 in GREECE
2020-04-04 14:12:00.805 DEBUG 3546 --- [ tasks-runner] c.g.d.e.harvesting.CollectorManager : 1
2020-04-04 14:12:00.806 DEBUG 3546 --- [pool-1-thread-1] c.g.d.e.harvesting.MarketplaceCollector : Collecting offer of WEAPONS Q4 in PAKISTAN after 0 m delay
2020-04-04 14:12:00.807 DEBUG 3546 --- [pool-1-thread-1] c.g.d.e.market.api.MarketplaceClient : PING for: WEAPONS Q4 in PAKISTAN
2020-04-04 14:12:03.308 DEBUG 3546 --- [ tasks-runner] c.g.d.e.harvesting.CollectorManager : 2
2020-04-04 14:12:03.309 DEBUG 3546 --- [pool-1-thread-2] c.g.d.e.harvesting.MarketplaceCollector : Collecting offer of FOOD_RAW Q1 in SAUDI_ARABIA after 0 m delay
2020-04-04 14:12:03.311 DEBUG 3546 --- [pool-1-thread-2] c.g.d.e.market.api.MarketplaceClient : PING for: FOOD_RAW Q1 in SAUDI_ARABIA
2020-04-04 14:12:05.810 DEBUG 3546 --- [ tasks-runner] c.g.d.e.harvesting.CollectorManager : 3
2020-04-04 14:12:05.810 DEBUG 3546 --- [pool-1-thread-1] c.g.d.e.harvesting.MarketplaceCollector : Collecting offer of WEAPONS Q5 in COLOMBIA after 0 m delay
2020-04-04 14:12:05.811 DEBUG 3546 --- [pool-1-thread-1] c.g.d.e.market.api.MarketplaceClient : PING for: WEAPONS Q5 in COLOMBIA
2020-04-04 14:12:08.314 DEBUG 3546 --- [ tasks-runner] c.g.d.e.harvesting.CollectorManager : 4
2020-04-04 14:12:08.315 DEBUG 3546 --- [pool-1-thread-4] c.g.d.e.harvesting.MarketplaceCollector : Collecting offer of WEAPONS Q1 in CZECH_REPUBLIC after 0 m delay
2020-04-04 14:12:08.316 DEBUG 3546 --- [pool-1-thread-4] c.g.d.e.market.api.MarketplaceClient : PING for: WEAPONS Q1 in CZECH_REPUBLIC
2020-04-04 14:12:10.818 DEBUG 3546 --- [ tasks-runner] c.g.d.e.harvesting.CollectorManager : 5
@Configuration
public class BeanConfiguration {

    @Bean
    public ScheduledExecutorService scheduledExecutorService() {
        return Executors.newScheduledThreadPool(8);
    }
}
I tried to change the connection pool for one host, but I only made it worse. I even created 200 instances of RestTemplate, but over time access to the server still ceased.
I would not want to use Spring WebFlux for this purpose.
What should I do to make the app work as expected?
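One hedged observation: a RestTemplate built with default settings has no request timeouts, so a response that never arrives can block a scheduler thread forever, which would match the "request may go to the server and never return" symptom. A minimal sketch of adding timeouts through the RestTemplateBuilder already injected into MarketplaceClient (the durations are assumptions to tune):

// Sketch: explicit timeouts make a hung request fail fast and free its thread.
import java.time.Duration;
import org.springframework.boot.web.client.RestTemplateBuilder;
import org.springframework.web.client.RestTemplate;

public class ClientFactory {

    public static RestTemplate buildMarketplaceTemplate(RestTemplateBuilder builder) {
        return builder
                .setConnectTimeout(Duration.ofSeconds(5)) // time allowed to establish the connection
                .setReadTimeout(Duration.ofSeconds(10))   // time allowed waiting for response data
                .build();
    }
}

With only 8 scheduler threads, 8 simultaneously hung requests are enough to stall every task, so failing fast matters more than the exact values.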

Error when trying to get the distance between two sets of LatLng with the Google Maps API

These are the parameters I passed:
String[] fromPoint = {"40.7127837,-74.0059413", "33.9533487,-117.3961564"};
String[] toPoint = {"38.6270025,-90.19940419999999", "12.8445,80.1523"};
This is the code I wrote; I convert the strings to LatLng, and when I try to execute it I get:
2018-11-16 16:31:52.130 INFO 10956 --- [nio-8080-exec-2] c.g.maps.internal.OkHttpPendingResult : Retrying request. Retry #1
2018-11-16 16:31:52.568 INFO 10956 --- [nio-8080-exec-2] c.g.maps.internal.OkHttpPendingResult : Retrying request. Retry #2
2018-11-16 16:31:53.594 INFO 10956 --- [nio-8080-exec-2] c.g.maps.internal.OkHttpPendingResult : Retrying request. Retry #3
2018-11-16 16:31:54.907 INFO 10956 --- [nio-8080-exec-2] c.g.maps.internal.OkHttpPendingResult : Retrying request. Retry #4
2018-11-16 16:31:55.914 INFO 10956 --- [nio-8080-exec-2] c.g.maps.internal.OkHttpPendingResult : Retrying request. Retry #5
2018-11-16 16:31:57.544 INFO 10956 --- [nio-8080-exec-2] c.g.maps.internal.OkHttpPendingResult : Retrying request. Retry #6
String key = "Key";
LatLng[] fLatLong = getLatLong(fromPoint);
LatLng[] tLatLong = getLatLong(toPoint);
try{
GeoApiContext distCalcer = new GeoApiContext.Builder()
.apiKey(key)
.build();
DistanceMatrixApiRequest req = DistanceMatrixApi.newRequest(distCalcer);
DistanceMatrix result = req.origins(fLatLong)
.destinations(tLatLong)
.await();
DistanceMatrixElement[] row = result.rows[0].elements;
System.out.println(row[0].toString());
}catch(Exception e){
e.getStackTrace();
}
Please help, thank you!
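A hedged note on diagnosing this: the client retries on server-side errors such as OVER_QUERY_LIMIT, and a swallowed exception hides the real cause behind the "Retrying request" lines. Capping retries via the GeoApiContext.Builder makes the underlying error surface immediately; the rest of the wiring follows the question:

// Sketch: disable internal retries so the underlying ApiException
// (e.g. an invalid key or OVER_QUERY_LIMIT) is thrown on the first failure.
GeoApiContext distCalcer = new GeoApiContext.Builder()
        .apiKey(key)
        .maxRetries(0) // fail fast instead of logging "Retrying request"
        .build();

try {
    DistanceMatrix result = DistanceMatrixApi.newRequest(distCalcer)
            .origins(fLatLong)
            .destinations(tLatLong)
            .await();
    System.out.println(result.rows[0].elements[0]);
} catch (Exception e) {
    e.printStackTrace(); // now shows the real API error instead of endless retries
}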
