In my javaagent, I start an HttpServer:
public static void premain(String agentArgs, Instrumentation inst) throws InstantiationException, IOException {
HttpServer server = HttpServer.create(new InetSocketAddress(8000), 0);
server.createContext("/report", new ReportHandler());
server.createContext("/data", new DataHandler());
server.createContext("/stack", new StackHandler());
ExecutorService es = Executors.newCachedThreadPool(new ThreadFactory() {
int count = 0;
@Override
public Thread newThread(Runnable r) {
Thread t = new Thread(r);
t.setDaemon(true);
t.setName("JDBCLD-HTTP-SERVER" + count++);
return t;
}
});
server.setExecutor(es);
server.start();
// how to properly close ?
Runtime.getRuntime().addShutdownHook(new Thread() {
@Override
public void run() {
server.stop(5);
log.info("internal httpserver has been closed.");
es.shutdown();
try {
if (!es.awaitTermination(60, TimeUnit.SECONDS)) {
log.warn("executor service of internal httpserver not closing in 60 seconds");
es.shutdownNow();
if (!es.awaitTermination(60, TimeUnit.SECONDS))
log.error("executor service of internal httpserver not closing in 120 seconds, give up");
}else {
log.info("executor service of internal httpserver closed.");
}
} catch (InterruptedException ie) {
log.warn("thread interrupted, shutdown executor service of internal httpserver");
es.shutdownNow();
Thread.currentThread().interrupt();
}
}
});
// other instrumentation code ignored ...
}
Testing program:
public class AgentTest {
public static void main(String[] args) throws SQLException {
HikariConfig config = new HikariConfig();
config.setJdbcUrl("jdbc:oracle:thin:@172.31.27.182:1521/pas");
config.setUsername("pas");
config.setPassword("pas");
HikariDataSource ds = new HikariDataSource(config);
Connection c = ds.getConnection();
Connection c1 = ds.getConnection();
c.getMetaData();
try {
Thread.sleep(1000 * 60 * 10);
} catch (InterruptedException e) {
e.printStackTrace();
c.close();
c1.close();
ds.close();
}
c.close();
c1.close();
ds.close();
}
}
When the target JVM exits, I want to stop that HttpServer. But when my test program finishes, the main thread stops, yet the whole JVM process won't terminate, and the shutdown hook in the code above never executes. If I click the 'terminate' button in the Eclipse IDE, Eclipse shows an error, but at least the JVM exits and my shutdown hook gets invoked.
According to the Javadoc of java.lang.Runtime:
The Java virtual machine shuts down in response to two kinds of events:
The program exits normally, when the last non-daemon thread exits or when the exit (equivalently, System.exit) method is invoked, or
The virtual machine is terminated in response to a user interrupt, such as typing ^C, or a system-wide event, such as user logoff or system shutdown.
com.sun.net.httpserver.HttpServer starts a non-daemon dispatcher thread, and that thread only exits when HttpServer#stop gets called, so I am facing a deadlock:
non-daemon thread not finished -> shutdown hook not triggered -> can't stop server -> non-daemon thread not finished
Any good ideas? Please note that I can't modify the code of the target application.
UPDATES after applying kriegaex's answer
I added some logging to the watchdog thread; here is the output:
2021-09-22 17:30:00.967 INFO - Connnection#1594791957 acquired by 40A4F128987F8BD9C0EE6749895D1237
2021-09-22 17:30:00.968 DEBUG - Stack#40A4F128987F8BD9C0EE6749895D1237:
java.lang.Throwable:
at com.zaxxer.hikari.pool.ProxyConnection.<init>(ProxyConnection.java:102)
at com.zaxxer.hikari.pool.HikariProxyConnection.<init>(HikariProxyConnection.java)
at com.zaxxer.hikari.pool.ProxyFactory.getProxyConnection(ProxyFactory.java)
at com.zaxxer.hikari.pool.PoolEntry.createProxyConnection(PoolEntry.java:97)
at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:192)
at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:162)
at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:100)
at agenttest.AgentTest.main(AgentTest.java:19)
2021-09-22 17:30:00.969 INFO - Connnection#686560878 acquired by 464555C270688B747CA211DE489B7730
2021-09-22 17:30:00.969 DEBUG - Stack#464555C270688B747CA211DE489B7730:
java.lang.Throwable:
at com.zaxxer.hikari.pool.ProxyConnection.<init>(ProxyConnection.java:102)
at com.zaxxer.hikari.pool.HikariProxyConnection.<init>(HikariProxyConnection.java)
at com.zaxxer.hikari.pool.ProxyFactory.getProxyConnection(ProxyFactory.java)
at com.zaxxer.hikari.pool.PoolEntry.createProxyConnection(PoolEntry.java:97)
at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:192)
at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:162)
at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:100)
at agenttest.AgentTest.main(AgentTest.java:20)
2021-09-22 17:30:00.971 DEBUG - Connnection#1594791957 used by getMetaData
2021-09-22 17:30:01.956 DEBUG - there is still 12 active threads, keep wathcing
2021-09-22 17:30:01.956 DEBUG - Reference Handler#true Finalizer#true Signal Dispatcher#true main#false server-timer#true Thread-2#false jdbcld-watch-dog#false Timer-0#true oracle.jdbc.driver.BlockSource.ThreadedCachingBlockSource.BlockReleaser#true InterruptTimer#true HikariPool-1 housekeeper#true HikariPool-1 connection adder#true
2021-09-22 17:30:02.956 DEBUG - there is still 12 active threads, keep wathcing
2021-09-22 17:30:02.956 DEBUG - Reference Handler#true Finalizer#true Signal Dispatcher#true main#false server-timer#true Thread-2#false jdbcld-watch-dog#false Timer-0#true oracle.jdbc.driver.BlockSource.ThreadedCachingBlockSource.BlockReleaser#true InterruptTimer#true HikariPool-1 housekeeper#true HikariPool-1 connection adder#true
2021-09-22 17:30:03.957 DEBUG - there is still 12 active threads, keep wathcing
2021-09-22 17:30:03.957 DEBUG - Reference Handler#true Finalizer#true Signal Dispatcher#true main#false server-timer#true Thread-2#false jdbcld-watch-dog#false Timer-0#true oracle.jdbc.driver.BlockSource.ThreadedCachingBlockSource.BlockReleaser#true InterruptTimer#true HikariPool-1 housekeeper#true HikariPool-1 connection adder#true
2021-09-22 17:30:04.959 DEBUG - there is still 12 active threads, keep wathcing
2021-09-22 17:30:04.959 DEBUG - Reference Handler#true Finalizer#true Signal Dispatcher#true main#false server-timer#true Thread-2#false jdbcld-watch-dog#false Timer-0#true oracle.jdbc.driver.BlockSource.ThreadedCachingBlockSource.BlockReleaser#true InterruptTimer#true HikariPool-1 housekeeper#true HikariPool-1 connection adder#true
2021-09-22 17:30:05.959 DEBUG - there is still 12 active threads, keep wathcing
2021-09-22 17:30:05.960 DEBUG - Reference Handler#true Finalizer#true Signal Dispatcher#true main#false server-timer#true Thread-2#false jdbcld-watch-dog#false Timer-0#true oracle.jdbc.driver.BlockSource.ThreadedCachingBlockSource.BlockReleaser#true InterruptTimer#true HikariPool-1 housekeeper#true HikariPool-1 connection adder#true
2021-09-22 17:30:06.960 DEBUG - there is still 11 active threads, keep wathcing
2021-09-22 17:30:06.960 DEBUG - Reference Handler#true Finalizer#true Signal Dispatcher#true main#false server-timer#true Thread-2#false jdbcld-watch-dog#false Timer-0#true oracle.jdbc.driver.BlockSource.ThreadedCachingBlockSource.BlockReleaser#true InterruptTimer#true HikariPool-1 housekeeper#true
2021-09-22 17:30:07.961 DEBUG - there is still 11 active threads, keep wathcing
2021-09-22 17:30:07.961 DEBUG - Reference Handler#true Finalizer#true Signal Dispatcher#true main#false server-timer#true Thread-2#false jdbcld-watch-dog#false Timer-0#true oracle.jdbc.driver.BlockSource.ThreadedCachingBlockSource.BlockReleaser#true InterruptTimer#true HikariPool-1 housekeeper#true
2021-09-22 17:30:08.961 DEBUG - there is still 11 active threads, keep wathcing
2021-09-22 17:30:08.961 DEBUG - Reference Handler#true Finalizer#true Signal Dispatcher#true main#false server-timer#true Thread-2#false jdbcld-watch-dog#false Timer-0#true oracle.jdbc.driver.BlockSource.ThreadedCachingBlockSource.BlockReleaser#true InterruptTimer#true HikariPool-1 housekeeper#true
2021-09-22 17:30:09.962 DEBUG - there is still 11 active threads, keep wathcing
2021-09-22 17:30:09.962 DEBUG - Reference Handler#true Finalizer#true Signal Dispatcher#true main#false server-timer#true Thread-2#false jdbcld-watch-dog#false Timer-0#true oracle.jdbc.driver.BlockSource.ThreadedCachingBlockSource.BlockReleaser#true InterruptTimer#true HikariPool-1 housekeeper#true
2021-09-22 17:30:10.962 DEBUG - there is still 11 active threads, keep wathcing
2021-09-22 17:30:10.963 DEBUG - Reference Handler#true Finalizer#true Signal Dispatcher#true main#false server-timer#true Thread-2#false jdbcld-watch-dog#false Timer-0#true oracle.jdbc.driver.BlockSource.ThreadedCachingBlockSource.BlockReleaser#true InterruptTimer#true HikariPool-1 housekeeper#true
2021-09-22 17:30:10.976 INFO - Connnection#1594791957 released
2021-09-22 17:30:10.976 DEBUG - set connection count to 0 by stack hash 40A4F128987F8BD9C0EE6749895D1237
2021-09-22 17:30:10.976 INFO - Connnection#686560878 released
2021-09-22 17:30:10.976 DEBUG - set connection count to 0 by stack hash 464555C270688B747CA211DE489B7730
2021-09-22 17:30:11.963 DEBUG - there is still 10 active threads, keep wathcing
2021-09-22 17:30:11.963 DEBUG - Reference Handler#true Finalizer#true Signal Dispatcher#true server-timer#true Thread-2#false jdbcld-watch-dog#false Timer-0#true oracle.jdbc.driver.BlockSource.ThreadedCachingBlockSource.BlockReleaser#true InterruptTimer#true DestroyJavaVM#false
2021-09-22 17:30:12.964 DEBUG - there is still 10 active threads, keep wathcing
Updates
I want to support all kinds of Java applications, including web applications running in servlet containers and standalone Java SE applications.
Here is a little MCVE illustrating ewrammer's idea. I used the little byte-buddy-agent helper library for dynamically attaching an agent in order to make my example self-contained, starting the Java agent right from the main method. I omitted the 3 trivial no-op dummy handler classes necessary to run this example.
package org.acme.agent;
import com.sun.net.httpserver.HttpServer;
import net.bytebuddy.agent.ByteBuddyAgent;
import java.io.IOException;
import java.lang.instrument.Instrumentation;
import java.net.InetSocketAddress;
import java.util.Random;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.TimeUnit;
public class Agent {
public static void premain(String agentArgs, Instrumentation inst) throws IOException {
HttpServer httpServer = HttpServer.create(new InetSocketAddress(8000), 0);
ExecutorService executorService = getExecutorService(httpServer);
Runtime.getRuntime().addShutdownHook(getShutdownHook(httpServer, executorService));
// other instrumentation code ignored ...
startWatchDog();
}
private static ExecutorService getExecutorService(HttpServer server) {
server.createContext("/report", new ReportHandler());
server.createContext("/data", new DataHandler());
server.createContext("/stack", new StackHandler());
ExecutorService executorService = Executors.newCachedThreadPool(new ThreadFactory() {
int count = 0;
@Override
public Thread newThread(Runnable r) {
Thread t = new Thread(r);
t.setDaemon(true);
t.setName("JDBCLD-HTTP-SERVER" + count++);
return t;
}
});
server.setExecutor(executorService);
server.start();
return executorService;
}
private static Thread getShutdownHook(HttpServer httpServer, ExecutorService executorService) {
return new Thread(() -> {
httpServer.stop(5);
System.out.println("Internal HTTP server has been stopped");
executorService.shutdown();
try {
if (!executorService.awaitTermination(60, TimeUnit.SECONDS)) {
System.out.println("Executor service of internal HTTP server not closing in 60 seconds");
executorService.shutdownNow();
if (!executorService.awaitTermination(60, TimeUnit.SECONDS))
System.out.println("Executor service of internal HTTP server not closing in 120 seconds, giving up");
}
else {
System.out.println("Executor service of internal HTTP server closed");
}
}
catch (InterruptedException ie) {
System.out.println("Thread interrupted, shutting down executor service of internal HTTP server");
executorService.shutdownNow();
Thread.currentThread().interrupt();
}
});
}
private static void startWatchDog() {
ThreadGroup threadGroup = Thread.currentThread().getThreadGroup();
while (threadGroup.getParent() != null)
threadGroup = threadGroup.getParent();
final ThreadGroup topLevelThreadGroup = threadGroup;
// Plus 1, because of the monitoring thread we are going to start right below
final int activeCount = topLevelThreadGroup.activeCount() + 1;
new Thread(() -> {
do {
try {
Thread.sleep(1000);
}
catch (InterruptedException ignored) {}
} while (topLevelThreadGroup.activeCount() > activeCount);
System.exit(0);
}).start();
}
public static void main(String[] args) throws IOException {
premain(null, ByteBuddyAgent.install());
Random random = new Random();
for (int i = 0; i < 5; i++) {
new Thread(() -> {
int threadDurationSeconds = 1 + random.nextInt(10);
System.out.println("Starting thread with duration " + threadDurationSeconds + " s");
try {
Thread.sleep(threadDurationSeconds * 1000);
System.out.println("Finishing thread after " + threadDurationSeconds + " s");
}
catch (InterruptedException ignored) {}
}).start();
}
}
}
As you can see, this is basically your example code, refactored into a few helper methods for readability, plus the new watchdog method. It is quite straightforward.
This produces a console log like:
Starting thread with duration 6 s
Starting thread with duration 6 s
Starting thread with duration 8 s
Starting thread with duration 7 s
Starting thread with duration 5 s
Finishing thread after 5 s
Finishing thread after 6 s
Finishing thread after 6 s
Finishing thread after 7 s
Finishing thread after 8 s
internal httpserver has been closed.
executor service of internal httpserver closed.
Related
I want to send n requests to a REST endpoint in parallel. I want to make sure these get executed in different threads for performance, and I need to wait till all n requests finish.
The only way I could come up with is using a CountDownLatch as follows (please check the main() method; this is testable code):
public static void main(String args[]) throws Exception {
int n = 10; //n is dynamic during runtime
final CountDownLatch waitForNRequests = new CountDownLatch(n);
//send n requests
for (int i =0;i<n;i++) {
var r = testRestCall(""+i);
r.publishOn(Schedulers.parallel()).subscribe(res -> {
System.out.println(">>>>>>> Thread: " + Thread.currentThread().getName() + " response:" +res.getBody());
waitForNRequests.countDown();
});
}
waitForNRequests.await(); //wait till all n requests finish before goto the next line
System.out.println("All n requests finished");
Thread.sleep(10000);
}
public static Mono<ResponseEntity<Map>> testRestCall(String id) {
WebClient client = WebClient.create("https://reqres.in/api");
JSONObject request = new JSONObject();
request.put("name", "user"+ id);
request.put("job", "leader");
var res = client.post().uri("/users")
.contentType(MediaType.APPLICATION_JSON)
.body(BodyInserters.fromValue(request.toString()))
.accept(MediaType.APPLICATION_JSON)
.retrieve()
.toEntity(Map.class)
.onErrorReturn(ResponseEntity.status(HttpStatus.SERVICE_UNAVAILABLE).build());
return res;
}
This doesn't look good, and I am sure there is a more elegant solution without using latches, etc.
I tried the following method, but I don't know how to resolve two issues:
Flux.merge() and concat() result in executing all n requests in a single thread.
How to wait till all n requests finish execution (fork-join)?
List<Mono<ResponseEntity<Map>>> lst = new ArrayList<>();
int n = 10; //n is dynamic during runtime
for (int i =0;i<n;i++) {
var r = testRestCall(""+i);
lst.add(r);
}
var t = Flux.fromIterable(lst).flatMap(Function.identity()); // tried merge() and concat() as well
t.publishOn(Schedulers.parallel()).subscribe(res -> {
System.out.println(">>>>>>> Thread: " + Thread.currentThread().getName() + " response:" +res.getBody());
// ??? all requests execute in a single thread. How to parallelize?
});
//???How to wait till all n requests finish before goto the next line
System.out.println("All n requests finished");
Thread.sleep(10000);
Update:
I found the reason why the Flux subscriber runs in the same thread: I need to create a ParallelFlux. So the correct order should be:
var t= Flux.fromIterable(lst).flatMap(Function.identity());
t.parallel()
.runOn(Schedulers.parallel())
.subscribe(res -> {
System.out.println(">>>>>>> Thread: " + Thread.currentThread().getName() + " response:" +res.getBody());
// with ParallelFlux the requests now execute on multiple threads
});
Ref: https://projectreactor.io/docs/core/release/reference/#advanced-parallelizing-parralelflux
In reactive programming you think not in terms of threads but in terms of concurrency.
Reactor executes non-blocking/async tasks on a small number of threads, using the Schedulers abstraction to execute tasks. Schedulers have responsibilities very similar to ExecutorService. By default, the number of threads for the parallel scheduler is equal to the number of CPU cores, but it can be controlled via the reactor.schedulers.defaultPoolSize system property, as in the sketch below.
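For illustration only, here is a minimal sketch of overriding the parallel pool size (the value 16 is an arbitrary example); the property has to be set before the Schedulers class is first used:
import reactor.core.scheduler.Schedulers;

public class SchedulerSizeDemo {
    public static void main(String[] args) throws InterruptedException {
        // Must run before any parallel worker is created; equivalently,
        // pass -Dreactor.schedulers.defaultPoolSize=16 to the JVM.
        System.setProperty("reactor.schedulers.defaultPoolSize", "16");
        Schedulers.parallel().schedule(() ->
                System.out.println("running on " + Thread.currentThread().getName()));
        Thread.sleep(100); // give the daemon worker a moment to print before the JVM exits
    }
}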
In your example, instead of creating multiple Monos and then merging them, it is better to use Flux and then process elements in parallel, controlling the concurrency:
Flux.range(1, 10)
.flatMap(this::testRestCall)
By default, flatMap processes up to Queues.SMALL_BUFFER_SIZE = 256 in-flight inner sequences.
You can control the concurrency with flatMap(item -> process(item), concurrency), or use the concatMap operator if you want to process sequentially. Check flatMap(..., int concurrency, int prefetch) for details.
Flux.range(1, 10)
.flatMap(i -> testRestCall(i), 5)
The following test shows that the calls are executed on different threads:
@Test
void testParallel() {
var flow = Flux.range(1, 10)
.flatMap(i -> testRestCall(i))
.log()
.then(Mono.just("complete"));
StepVerifier.create(flow)
.expectNext("complete")
.verifyComplete();
}
The resulting log:
2022-12-30 21:31:25.169 INFO 43383 --- [ctor-http-nio-4] reactor.Mono.FlatMap.3 : | onComplete()
2022-12-30 21:31:25.170 INFO 43383 --- [ctor-http-nio-3] reactor.Mono.FlatMap.2 : | onComplete()
2022-12-30 21:31:25.169 INFO 43383 --- [ctor-http-nio-2] reactor.Mono.FlatMap.1 : | onComplete()
2022-12-30 21:31:25.169 INFO 43383 --- [ctor-http-nio-8] reactor.Mono.FlatMap.7 : | onComplete()
2022-12-30 21:31:25.169 INFO 43383 --- [tor-http-nio-11] reactor.Mono.FlatMap.10 : | onComplete()
2022-12-30 21:31:25.169 INFO 43383 --- [ctor-http-nio-7] reactor.Mono.FlatMap.6 : | onComplete()
2022-12-30 21:31:25.169 INFO 43383 --- [ctor-http-nio-9] reactor.Mono.FlatMap.8 : | onComplete()
2022-12-30 21:31:25.170 INFO 43383 --- [ctor-http-nio-6] reactor.Mono.FlatMap.5 : | onComplete()
2022-12-30 21:31:25.378 INFO 43383 --- [ctor-http-nio-5] reactor.Mono.FlatMap.4 : | onComplete()
I'm writing a simple orchestration framework using the Reactor framework which executes tasks sequentially, where the next task to execute depends on the result of previous tasks. I might have multiple paths to choose from based on the outcome of previous tasks. Earlier, I wrote a similar framework based on a static DAG, where I passed a list of tasks as an iterable and used Flux.fromIterable(taskList). However, this does not give me the flexibility to go dynamic, because of the static array publisher.
I'm looking for alternate approaches like do(){}while(condition) to solve the DAG traversal and task decision, and I came up with Flux.generate(). I evaluate the next step in the generate method and pass the next task downstream. The problem I'm facing now is that Flux.generate does not wait for the downstream to complete, but keeps pushing until the condition becomes invalid. By the time task 1 gets executed, task 2 has been pushed n times, which is not the expected behavior.
Can someone please point me in the right direction?
Thanks.
First iteration using a list of tasks (static DAG):
Flux.fromIterable(taskList)
.publishOn(this.factory.getSharedSchedulerPool())
.concatMap(
reactiveTask -> {
log.info("Running task =>{}", reactiveTask.getTaskName());
return reactiveTask
.run(ctx);
})
// Evaluates status from previous task and terminates stream or continues.
.takeWhile(context -> evaluateStatus(context))
.onErrorResume(throwable -> buildResponse(ctx, throwable))
.doOnCancel(() -> log.info("Task cancelled"))
.doOnComplete(() -> log.info("Completed flow"))
.subscribe();
Attempt at a dynamic DAG:
Flux.generate(
(SynchronousSink<ReactiveTask<OrchestrationContext>> synchronousSink) -> {
ReactiveTask<OrchestrationContext> task = null;
if (ctx.getLastExecutedStep() == null) {
// first task;
task = getFirstTaskFromDAG();
} else {
task = deriveNextStep(ctx.getLastExecutedStep(), ctx.getDecisionData());
}
if (task.getName().equals("END")) {
synchronousSink.complete();
return; // don't emit another element after completing the sink
}
synchronousSink.next(task);
})
.publishOn(this.factory.getSharedSchedulerPool())
.doOnNext(orchestrationContextReactiveTask -> log.info("On next => {}",
orchestrationContextReactiveTask.getTaskName()))
.concatMap(
reactiveTask -> {
log.info("Running task =>{}", reactiveTask.getTaskName());
return reactiveTask
.run(ctx);
})
.onErrorResume(throwable -> buildResponse(ctx, throwable))
.takeUntil(context -> evaluateStatus(context, tasks))
.doOnCancel(() -> log.info("Task cancelled"))
.doOnComplete(() -> log.info("Completed flow")).subscribe();
The problem in the above approach is that while task 1 is executing, the onNext() subscriber prints many times, because generate keeps publishing. I want the generate method to wait for the result of the previous task before submitting a new task. In the non-reactive world, this could be achieved with a simple while() loop, for example (a rough sketch reusing the helpers from the generate block above):
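ReactiveTask<OrchestrationContext> task = getFirstTaskFromDAG();
while (!task.getName().equals("END")) {
    task.run(ctx).block(); // pick the next task only once the previous one has finished
    task = deriveNextStep(ctx.getLastExecutedStep(), ctx.getDecisionData());
}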
Each task will perform the following actions.
public class ResponseTask extends AbstractBaseTask {
private final StateManager stateManager;
private final ThreadFactory factory;
private TaskDefinition taskDefinition;
final String taskName;
public ResponseTask(
StateManager stateManager,
ThreadFactory factory,
TaskDefinition taskDefinition,
String taskName) {
this.stateManager = stateManager;
this.factory = factory;
this.taskDefinition = taskDefinition;
this.taskName = taskName;
}
public Mono<String> transform(OrchestrationContext context) {
Any masterPayload = Any.wrap(context.getIngestionPayload());
return Mono.fromCallable(() -> stateManager.doTransformation(context, masterPayload));
}
public Mono<OrchestrationContext> execute(OrchestrationContext context, String payload) {
log.info("Executing sleep for task=>{}", context.getLastExecutedStep());
return Mono.delay(Duration.ofSeconds(1), factory.getSharedSchedulerPool())
.then(Mono.just(context));
}
public Mono<OrchestrationContext> run(OrchestrationContext context) {
log.info("Executing task:{}. Last executed:{}", taskName, context.getLastExecutedStep());
return transform(context)
.doOnNext(result -> log.info("Transformation complete for task=?{}", taskName))
.flatMap(payload -> {
return execute(context, payload);
}).onErrorResume(throwable -> {
context.setStatus(FAILED);
return Mono.just(context);
});
}
}
EDIT - Following @Ikatiforis's recommendation, here's the output from my side:
2021-12-02 09:58:14,643 INFO (reactive_shared_pool) [ReactiveEngine lambda$doOrchestration$5:98] On next => Task1
2021-12-02 09:58:14,644 INFO (reactive_shared_pool) [ReactiveEngine lambda$doOrchestration$6:101] Running task =>Task1
2021-12-02 09:58:14,644 INFO (reactive_shared_pool) [AbstractBaseTask run:75] Executing task:Task1. Last executed:Task1
2021-12-02 09:58:14,658 INFO (reactive_shared_pool) [ReactiveEngine lambda$doOrchestration$5:98] On next => Task2
2021-12-02 09:58:14,659 INFO (reactive_shared_pool) [AbstractBaseTask lambda$run$0:83] Transformation complete for task=?Task1
2021-12-02 09:58:14,659 INFO (reactive_shared_pool) [ResponseTask execute:41] Executing sleep for task=>Task1
2021-12-02 09:58:15,661 INFO (reactive_shared_pool) [AbstractBaseTask lambda$run$4:106] Success for task=>Task1
2021-12-02 09:58:15,663 INFO (reactive_shared_pool)
[ReactiveEngine lambda$doOrchestration$6:101] Running task =>Task2
2021-12-02 09:58:15,811 INFO (cassandra-nio-worker-8) [AbstractBaseTask run:75] Executing task:Task2. Last executed:Task2
2021-12-02 09:58:15,811 INFO (reactive_shared_pool) [ReactiveEngine lambda$doOrchestration$5:98] On next => Task2
2021-12-02 09:58:15,812 INFO (reactive_shared_pool) [AbstractBaseTask lambda$run$0:83] Transformation complete for task=?Task2
2021-12-02 09:58:15,812 INFO (reactive_shared_pool) [ResponseTask execute:41] Executing sleep for task=>Task2
2021-12-02 09:58:15,837 INFO (centaurus_reactive_shared_pool) [ReactiveEngine lambda$doOrchestration$9:113] Completed flow
I see a couple of problems here.
The expected sequence of execution is:
1. The task does its transformations (runs on Mono.fromCallable).
2. The task induces a delay - Mono.delay().
3. The task completes execution. After this, the generate method should evaluate the context and pass on the next task to be executed.
What I see from the output is:
1. Task 1 starts the transformations - runs on Mono.fromCallable.
2. Task 2's doOnNext is reported - which means the stream already got this task.
3. Task 1 completes.
4. Task 2 starts and executes its delay -> the stream does not wait for the response from task 2 but completes the flow.
The problem in the above approach is that while task 1 is executing, the onNext() subscriber prints many times, because generate keeps publishing.
This is happening because concatMap requests a number of items upfront (32 by default) instead of requesting elements one by one. If you really need to request one element at a time, you can use the concatMap(Function<? super T,? extends Publisher<? extends V>> mapper, int prefetch) variant and provide the prefetch value like this:
.concatMap(reactiveTask -> {
log.info("Running task =>{}", reactiveTask.getTaskName());
return reactiveTask.run(ctx);
}, 1)
Edit
There is also a publishOn method which takes a prefetch value. Take a look at the following Fibonacci generator sample and let me know if it works as you expect:
generateFibonacci(100)
.publishOn(boundedElasticScheduler, 1)
.doOnNext(number -> log.info("On next => {}", number))
.concatMap(number -> {
log.info("Running task => {}", number);
return task(number).doOnNext(num -> log.info("Task completed => {}", num));
}, 1)
.takeWhile(context -> context < 3)
.subscribe();
public Flux<Integer> generateFibonacci(int limit) {
return Flux.generate(
() -> new FibonacciState(0, 1),
(state, sink) -> {
log.info("Generating number: " + state);
sink.next(state.getFormer());
if (state.getLatter() > limit) {
sink.complete();
}
int temp = state.getFormer();
state.setFormer(state.getLatter());
state.setLatter(temp + state.getLatter());
return state;
});
}
Here is the output:
2021-12-02 10:47:51,990 INFO main c.u.p.p.s.c.Test - Generating number: FibonacciState(former=0, latter=1)
2021-12-02 10:47:51,993 INFO pool-1-thread-1 c.u.p.p.s.c.Test - On next => 0
2021-12-02 10:47:51,996 INFO pool-1-thread-1 c.u.p.p.s.c.Test - Running task => 0
2021-12-02 10:47:54,035 INFO pool-1-thread-1 c.u.p.p.s.c.Test - Task completed => 0
2021-12-02 10:47:54,035 INFO pool-1-thread-1 c.u.p.p.s.c.Test - Generating number: FibonacciState(former=1, latter=1)
2021-12-02 10:47:54,036 INFO pool-1-thread-1 c.u.p.p.s.c.Test - On next => 1
2021-12-02 10:47:54,036 INFO pool-1-thread-1 c.u.p.p.s.c.Test - Running task => 1
2021-12-02 10:47:56,036 INFO pool-1-thread-1 c.u.p.p.s.c.Test - Task completed => 1
2021-12-02 10:47:56,036 INFO pool-1-thread-1 c.u.p.p.s.c.Test - Generating number: FibonacciState(former=1, latter=2)
2021-12-02 10:47:56,036 INFO pool-1-thread-1 c.u.p.p.s.c.Test - On next => 1
2021-12-02 10:47:56,036 INFO pool-1-thread-1 c.u.p.p.s.c.Test - Running task => 1
2021-12-02 10:47:58,036 INFO pool-1-thread-1 c.u.p.p.s.c.Test - Task completed => 1
2021-12-02 10:47:58,036 INFO pool-1-thread-1 c.u.p.p.s.c.Test - Generating number: FibonacciState(former=2, latter=3)
2021-12-02 10:47:58,036 INFO pool-1-thread-1 c.u.p.p.s.c.Test - On next => 2
2021-12-02 10:47:58,036 INFO pool-1-thread-1 c.u.p.p.s.c.Test - Running task => 2
2021-12-02 10:48:00,036 INFO pool-1-thread-1 c.u.p.p.s.c.Test - Task completed => 2
2021-12-02 10:48:00,037 INFO pool-1-thread-1 c.u.p.p.s.c.Test - Generating number: FibonacciState(former=3, latter=5)
2021-12-02 10:48:00,037 INFO pool-1-thread-1 c.u.p.p.s.c.Test - On next => 3
2021-12-02 10:48:00,037 INFO pool-1-thread-1 c.u.p.p.s.c.Test - Running task => 3
2021-12-02 10:48:02,037 INFO pool-1-thread-1 c.u.p.p.s.c.Test - Task completed => 3
2021-12-02 10:52:07,877 INFO pool-1-thread-2 c.u.p.p.s.c.Test - Completed flow
Edit 04122021
You stated:
I'm trying to simulate HTTP / blocking calls. Hence the Mono.delay.
Mono#delay is not the appropriate method to simulate a blocking call: the delay is introduced through the parallel scheduler, and as a result it does not block while waiting for the task to complete. You can simulate a blocking call like this:
public String get() throws IOException {
HttpsURLConnection connection = (HttpsURLConnection) new URL("https://jsonplaceholder.typicode.com/comments").openConnection();
connection.setRequestMethod("GET");
try(InputStream inputStream = connection.getInputStream()) {
return new String(inputStream.readAllBytes(), StandardCharsets.UTF_8);
}
}
Note that as an alternative you could use the .limitRate(1) operator instead of the prefetch parameter. Below is a rough sketch of where it would sit in the Fibonacci pipeline above (reusing generateFibonacci and task); keep in mind that downstream operators with their own prefetch can still queue elements ahead of the current task, so verify the pacing for your case.
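generateFibonacci(100)
    .limitRate(1) // each request batch reaching the generator is capped at one element
    .publishOn(boundedElasticScheduler)
    .concatMap(number -> task(number))
    .takeWhile(number -> number < 3)
    .subscribe();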
So, there is an external server (a game). It has a market with a lot of products and product combinations; 2146 in total.
I want to receive up-to-date pricing information from time to time.
When the application starts, I create 2146 tasks, each of which is responsible for its own type of product. The tasks are started from a separate thread with a delay of 2.5 seconds between them.
@EventListener(ApplicationReadyEvent.class)
public void start() {
log.info("Let's get party started!");
Set<MarketplaceCollector> collectorSet = marketplaceCollectorProviders.stream()
.flatMap(provider -> provider.getCollectors().stream())
.peek(this::subscribeOfferDBSubscriber)
.collect(Collectors.toSet());
collectors.addAll(collectorSet);
runTasks();
}
private void subscribeOfferDBSubscriber(MarketplaceCollector marketplaceCollector) {
marketplaceCollector.subscribe(marketplaceOfferDBSubscriber);
}
private void runTasks() {
Thread thread = new Thread(() -> collectors.forEach(this::runWithDelay));
thread.setName("tasks-runner");
thread.start();
}
private void runWithDelay(Collector collector) {
try {
collector.collect();
Thread.sleep(2_500);
counter += 1;
} catch (InterruptedException e) {
log.error(e);
}
log.debug(counter);
}
Using RestTemplate, I access the server. If the price has changed, the task is scheduled again after 1 minute. If the price remains the same, I add one minute to the wait and make the request again. Thus, if the price does not change, the maximum time between requests for one product is 20 minutes. I assume that my application will execute up to 200 requests per minute; otherwise I would get a "too many requests" error.
@Override
public void collect() {
executorService.schedule(new MarketplaceTask(), INIT_DELAY, MILLISECONDS);
}
private MarketplaceRequest request() {
return MarketplaceRequest.builder()
.country(country)
.industry(industry)
.quality(quality)
.orderBy(ASC)
.currentPage(1)
.build();
}
private class MarketplaceTask implements Runnable {
private long MIN_DELAY = 60; // 1 minute
private long MAX_DELAY = 1200; // 20 minutes
private Double PREVIOUS_PRICE = Double.MAX_VALUE;
private long DELAY = 0; // seconds
@Override
public void run() {
log.debug(format("Collecting offer of %s %s in %s after %d m delay", industry, quality, country, DELAY / 60));
MarketplaceResponse response = marketplaceClient.getOffers(request());
subscribers.forEach(s -> s.onSubscription(response));
updatePreviousPriceAndPeriod(response);
executorService.schedule(this, DELAY, SECONDS);
}
private void updatePreviousPriceAndPeriod(MarketplaceResponse response) {
if (response.isError() || response.getOffers().isEmpty()) {
increasePeriod();
} else {
Double currentPrice = response.getOffers().get(0).getPriceWithTaxes();
if (isPriceChanged(currentPrice)) {
setMinimalDelay();
PREVIOUS_PRICE = currentPrice;
} else {
increasePeriod();
}
}
}
private void increasePeriod() {
if (DELAY < MAX_DELAY) {
DELAY += 60;
}
}
private boolean isPriceChanged(Double currentPrice) {
return !Objects.equals(currentPrice, PREVIOUS_PRICE);
}
private void setMinimalDelay() {
DELAY = MIN_DELAY;
}
}
public MarketplaceClient(@Value("${app.host}") String host,
AuthenticationService authenticationService,
RestTemplateBuilder restTemplateBuilder,
CommonHeadersComposer headersComposer) {
this.host = host;
this.authenticationService = authenticationService;
this.restTemplate = restTemplateBuilder.build();
this.headersComposer = headersComposer;
}
public MarketplaceResponse getOffers(MarketplaceRequest request) {
var authentication = authenticationService.getAuthentication();
var requestEntity = new HttpEntity<>(requestString(request, authentication), headersComposer.getHeaders());
log.debug(message("PING for", request));
var responseEntity = restTemplate.exchange(host + MARKET_URL, POST, requestEntity, MarketplaceResponse.class);
log.debug(message("PONG for", request));
if (responseEntity.getBody().isError()) {
log.warn("{}: {} {} in {}", responseEntity.getBody().getMessage(), request.getIndustry(), request.getQuality(), request.getCountry());
}
return responseEntity.getBody();
}
private String requestString(MarketplaceRequest request, Authentication authentication) {
return format("countryId=%s&industryId=%s&quality=%s&orderBy=%s¤tPage=%s&ajaxMarket=1&_token=%s",
request.getCountry().getId(), request.getIndustry().getId(), request.getQuality().getValue(),
request.getOrderBy().getValue(), request.getCurrentPage(), authentication.getToken());
}
But I have a problem a few minutes after the application starts: some tasks simply stop running. A request may go out to the server and never return, yet other tasks keep working without problems. Logs showing how it behaves (for example):
2020-04-04 14:11:58.267 INFO 3546 --- [ main] c.g.d.e.harvesting.CollectorManager : Let's get party started!
2020-04-04 14:11:58.302 DEBUG 3546 --- [pool-1-thread-1] c.g.d.e.harvesting.MarketplaceCollector : Collecting offer of WEAPONS Q5 in GREECE after 0 m delay
2020-04-04 14:11:58.379 DEBUG 3546 --- [pool-1-thread-1] c.g.d.e.market.api.MarketplaceClient : PING for: WEAPONS Q5 in GREECE
2020-04-04 14:11:59.217 DEBUG 3546 --- [pool-1-thread-1] c.g.d.e.market.api.MarketplaceClient : PONG for: WEAPONS Q5 in GREECE
2020-04-04 14:12:00.805 DEBUG 3546 --- [ tasks-runner] c.g.d.e.harvesting.CollectorManager : 1
2020-04-04 14:12:00.806 DEBUG 3546 --- [pool-1-thread-1] c.g.d.e.harvesting.MarketplaceCollector : Collecting offer of WEAPONS Q4 in PAKISTAN after 0 m delay
2020-04-04 14:12:00.807 DEBUG 3546 --- [pool-1-thread-1] c.g.d.e.market.api.MarketplaceClient : PING for: WEAPONS Q4 in PAKISTAN
2020-04-04 14:12:03.308 DEBUG 3546 --- [ tasks-runner] c.g.d.e.harvesting.CollectorManager : 2
2020-04-04 14:12:03.309 DEBUG 3546 --- [pool-1-thread-2] c.g.d.e.harvesting.MarketplaceCollector : Collecting offer of FOOD_RAW Q1 in SAUDI_ARABIA after 0 m delay
2020-04-04 14:12:03.311 DEBUG 3546 --- [pool-1-thread-2] c.g.d.e.market.api.MarketplaceClient : PING for: FOOD_RAW Q1 in SAUDI_ARABIA
2020-04-04 14:12:05.810 DEBUG 3546 --- [ tasks-runner] c.g.d.e.harvesting.CollectorManager : 3
2020-04-04 14:12:05.810 DEBUG 3546 --- [pool-1-thread-1] c.g.d.e.harvesting.MarketplaceCollector : Collecting offer of WEAPONS Q5 in COLOMBIA after 0 m delay
2020-04-04 14:12:05.811 DEBUG 3546 --- [pool-1-thread-1] c.g.d.e.market.api.MarketplaceClient : PING for: WEAPONS Q5 in COLOMBIA
2020-04-04 14:12:08.314 DEBUG 3546 --- [ tasks-runner] c.g.d.e.harvesting.CollectorManager : 4
2020-04-04 14:12:08.315 DEBUG 3546 --- [pool-1-thread-4] c.g.d.e.harvesting.MarketplaceCollector : Collecting offer of WEAPONS Q1 in CZECH_REPUBLIC after 0 m delay
2020-04-04 14:12:08.316 DEBUG 3546 --- [pool-1-thread-4] c.g.d.e.market.api.MarketplaceClient : PING for: WEAPONS Q1 in CZECH_REPUBLIC
2020-04-04 14:12:10.818 DEBUG 3546 --- [ tasks-runner] c.g.d.e.harvesting.CollectorManager : 5
@Configuration
public class BeanConfiguration {
@Bean
public ScheduledExecutorService scheduledExecutorService() {
return Executors.newScheduledThreadPool(8);
}
}
I tried to change the connection pool for one host, but I only made it worse. I even created 200 instances of RestTemplate, but over time the access to the server ceased.
I would not want to use Spring Webflux for this purpose.
What should I do to make the app work as expected?
I am working on something where I need to pull data from MariaDB (using HikariCP) and then send it through Redis. Eventually, when I try to pull from the database, connections start leaking. This only happens after some time, and suddenly.
Here is the full log from when the leak started happening: https://hastebin.com/sekiximehe.makefile
Here is some debug info:
21:04:40 [INFO] 21:04:40.680 [HikariPool-1 housekeeper] DEBUG com.zaxxer.hikari.pool.HikariPool - HikariPool-1 - Before cleanup stats (total=6, active=2, idle=4, waiting=0)
21:04:40 [INFO] 21:04:40.680 [HikariPool-1 housekeeper] DEBUG com.zaxxer.hikari.pool.HikariPool - HikariPool-1 - After cleanup stats (total=6, active=2, idle=4, waiting=0)
21:04:40 [INFO] 21:04:40.682 [HikariPool-1 connection adder] DEBUG com.zaxxer.hikari.pool.HikariPool - HikariPool-1 - Added connection org.mariadb.jdbc.MariaDbConnection@4b7a5e97
21:04:40 [INFO] 21:04:40.682 [HikariPool-1 connection adder] DEBUG com.zaxxer.hikari.pool.HikariPool - HikariPool-1 - After adding stats (total=7, active=2, idle=5, waiting=0)
21:05:05 [INFO] 21:05:05.323 [HikariPool-1 housekeeper] WARN com.zaxxer.hikari.pool.ProxyLeakTask - Connection leak detection triggered for org.mariadb.jdbc.MariaDbConnection@52ede989 on thread Thread-272, stack trace follows
java.lang.Exception: Apparent connection leak detected
at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:123)
at us.survivewith.bungee.database.FetchPlayerInfo.run(FetchPlayerInfo.java:29)
at java.lang.Thread.run(Thread.java:748)
21:05:10 [INFO] 21:05:10.681 [HikariPool-1 housekeeper] DEBUG com.zaxxer.hikari.pool.HikariPool - HikariPool-1 - Before cleanup stats (total=7, active=2, idle=5, waiting=0)
21:05:10 [INFO] 21:05:10.681 [HikariPool-1 housekeeper] DEBUG com.zaxxer.hikari.pool.HikariPool - HikariPool-1 - After cleanup stats (total=7, active=2, idle=5, waiting=0)
21:05:39 [INFO] 21:05:39.352 [HikariPool-1 housekeeper] WARN com.zaxxer.hikari.pool.ProxyLeakTask - Connection leak detection triggered for org.mariadb.jdbc.MariaDbConnection@3cba7850 on thread Thread-274, stack trace follows
java.lang.Exception: Apparent connection leak detected
at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:123)
at us.survivewith.bungee.database.FetchPlayerInfo.run(FetchPlayerInfo.java:29)
at java.lang.Thread.run(Thread.java:748)
Here is the FetchPlayerInfo.run() method:
@Override
public void run()
{
String select = "SELECT `Rank`,`Playtime` FROM `Players` WHERE PlayerUUID=?;";
// This is line 29. How can this possibly be causing a leak?
try(Connection connection = Database.getHikari().getConnection())
{
// Get the data by querying the Players table
try(PreparedStatement serverSQL = connection.prepareStatement(select))
{
serverSQL.setString(1, player);
// Execute statement
try(ResultSet serverRS = serverSQL.executeQuery())
{
// If a row exists
if(serverRS.next())
{
String rank = serverRS.getString("Rank");
Jedis jPublisher = Redis.getJedis().getResource();
jPublisher.publish("playerconnections", player + "~" + serverRS.getInt("Playtime") + "~" + rank);
}
else
{
Jedis jPublisher = Redis.getJedis().getResource();
jPublisher.publish("playerconnections", player + "~" + 0 + "~DEFAULT");
}
}
}
}
catch(SQLException e)
{
//Print out any exception while trying to prepare statement
e.printStackTrace();
}
}
This is how I've setup my Database class:
/**
* This class is used to connect to the database
*/
public class Database
{
private static HikariDataSource hikari;
/**
* Connects to the database
*/
public static void connectToDatabase(String address,
String db,
String user,
String password,
int port)
{
// Setup main Hikari instance
hikari = new HikariDataSource();
hikari.setMaximumPoolSize(20);
hikari.setLeakDetectionThreshold(60 * 1000);
hikari.setDataSourceClassName("org.mariadb.jdbc.MariaDbDataSource");
hikari.addDataSourceProperty("serverName", address);
hikari.addDataSourceProperty("port", port);
hikari.addDataSourceProperty("databaseName", db);
hikari.addDataSourceProperty("user", user);
hikari.addDataSourceProperty("password", password);
}
/**
* Returns an instance of Hikari.
* This instance is connected to the database that contains all data.
* The stats table is only used in this database every other day
*
* @return The main HikariDataSource
*/
public static HikariDataSource getHikari()
{
return hikari;
}
}
And this is how I am calling the FetchPlayerInfo class:
new Thread(new FetchPlayerInfo(player.getUniqueId().toString())).start();
EDIT:
The problem still persists after using a synchronized getConnection() method from the Database class.
Jedis is also a resource from JedisPool that you should close:
// Jedis implements Closeable, so the instance is auto-closed when the try block exits.
try (Jedis jedis = pool.getResource()) {
jedis.publish("playerconnections", message); // e.g. the publish call from FetchPlayerInfo
}
What version of HikariCP? It is possible that the leak is not actually a leak. The leak will be reported when the connection is out of the pool for longer than the threshold, but it may actually be returned later. Newer versions of HikariCP will log "unleaked" connections.
EDIT: I am as close to 100% certain as I can be that there is no race condition in HikariCP. This scenario is far too simple, and HikariCP is used by far too many users (millions) for such a fundamental flaw not to have surfaced before.
The only thing that makes sense, looking at the code above and the logs generated, is that one of the calls inside the outer try-catch is hanging (blocking). I suggest getting a stack dump when the condition occurs, to find out whether a thread is blocked inside FetchPlayerInfo.run(). For example, here is a minimal sketch of capturing such a dump programmatically (the classic alternative is running jstack against the process):
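import java.util.Map;

public class StackDumper {
    /** Prints every live thread's stack trace, similar to jstack output. */
    public static void dumpAllStacks() {
        for (Map.Entry<Thread, StackTraceElement[]> entry : Thread.getAllStackTraces().entrySet()) {
            System.err.println(entry.getKey());
            for (StackTraceElement frame : entry.getValue()) {
                System.err.println("\tat " + frame);
            }
        }
    }
}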
This is on Java 7 (update 51) on RHEL with 24 cores.
We are noticing a rise in the average response time of a Java SimpleDateFormat wrapped in a thread-local as we increase the thread pool size. Is this expected, or am I just doing something stupid?
Test program:
public class DateFormatterLoadTest {
private static final Logger LOG = Logger.getLogger(DateFormatterLoadTest .class);
private final static int CONCURRENCY = 10;
public static void main(String[] args) throws Exception {
final AtomicLong total = new AtomicLong(0);
ExecutorService es = Executors.newFixedThreadPool(CONCURRENCY);
final CountDownLatch cdl = new CountDownLatch(CONCURRENCY);
for (int i = 0; i < CONCURRENCY; i++) {
es.execute(new Runnable() {
@Override
public void run() {
try {
int size = 65000;
Date d = new Date();
long time = System.currentTimeMillis();
for (int i = 0; i < size; i++) {
String sd = ISODateFormatter.convertDateToString(d);
assert (sd != null);
}
total.addAndGet((System.currentTimeMillis() - time));
} catch (Throwable t) {
t.printStackTrace();
} finally {
cdl.countDown();
}
}
});
}
cdl.await();
es.shutdown();
LOG.info("TOTAL TIME:" + total.get());
LOG.info("AVERAGE TIME:" + (total.get() / CONCURRENCY));
}
}
DateFormatter class:
public class ISODateFormatter {
private static final Logger LOG = Logger.getLogger(ISODateFormatter.class);
private static ThreadLocal<DateFormat> dfWithTZ = new ThreadLocal<DateFormat>() {
@Override
public DateFormat get() {
return super.get();
}
@Override
protected DateFormat initialValue() {
return new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ssZ",
Locale.ENGLISH);
}
@Override
public void remove() {
super.remove();
}
@Override
public void set(DateFormat value) {
super.set(value);
}
};
public static String convertDateToString(Date date) {
if (date == null) {
return null;
}
try {
return dfWithTZ.get().format(date);
} catch (Exception e) {
LOG.error("!!! Error parsing dateString: " + date, e);
return null;
}
}
}
Someone suggested taking out the AtomicLong, so I just wanted to share that it is not playing any role in increasing the average time:
##NOT USING ATOMIC LONG##
2014-02-28 11:03:52,790 [pool-1-thread-1] INFO net.ahm.graph.DateFormatterLoadTest - THREAD TIME:328
2014-02-28 11:03:52,868 [pool-1-thread-6] INFO net.ahm.graph.DateFormatterLoadTest - THREAD TIME:406
2014-02-28 11:03:52,821 [pool-1-thread-2] INFO net.ahm.graph.DateFormatterLoadTest - THREAD TIME:359
2014-02-28 11:03:52,821 [pool-1-thread-8] INFO net.ahm.graph.DateFormatterLoadTest - THREAD TIME:359
2014-02-28 11:03:52,868 [pool-1-thread-4] INFO net.ahm.graph.DateFormatterLoadTest - THREAD TIME:406
2014-02-28 11:03:52,915 [pool-1-thread-5] INFO net.ahm.graph.DateFormatterLoadTest - THREAD TIME:453
2014-02-28 11:03:52,930 [pool-1-thread-7] INFO net.ahm.graph.DateFormatterLoadTest - THREAD TIME:468
2014-02-28 11:03:52,930 [pool-1-thread-3] INFO net.ahm.graph.DateFormatterLoadTest - THREAD TIME:468
2014-02-28 11:03:52,930 [main] INFO net.ahm.graph.DateFormatterLoadTest - CONCURRENCY:8
##USING ATOMIC LONG##
2014-02-28 11:02:53,852 [main] INFO net.ahm.graph.DateFormatterLoadTest - TOTAL TIME:2726
2014-02-28 11:02:53,852 [main] INFO net.ahm.graph.DateFormatterLoadTest - CONCURRENCY:8
2014-02-28 11:02:53,852 [main] INFO net.ahm.graph.DateFormatterLoadTest - AVERAGE TIME:340
##NOT USING ATOMIC LONG##
2014-02-28 11:06:57,980 [pool-1-thread-3] INFO net.ahm.graph.DateFormatterLoadTest - THREAD TIME:312
2014-02-28 11:06:58,339 [pool-1-thread-8] INFO net.ahm.graph.DateFormatterLoadTest - THREAD TIME:671
2014-02-28 11:06:58,339 [pool-1-thread-4] INFO net.ahm.graph.DateFormatterLoadTest - THREAD TIME:671
2014-02-28 11:06:58,307 [pool-1-thread-7] INFO net.ahm.graph.DateFormatterLoadTest - THREAD TIME:639
2014-02-28 11:06:58,261 [pool-1-thread-6] INFO net.ahm.graph.DateFormatterLoadTest - THREAD TIME:593
2014-02-28 11:06:58,105 [pool-1-thread-15] INFO net.ahm.graph.DateFormatterLoadTest - THREAD TIME:437
2014-02-28 11:06:58,089 [pool-1-thread-13] INFO net.ahm.graph.DateFormatterLoadTest - THREAD TIME:421
2014-02-28 11:06:58,073 [pool-1-thread-1] INFO net.ahm.graph.DateFormatterLoadTest - THREAD TIME:405
2014-02-28 11:06:58,073 [pool-1-thread-12] INFO net.ahm.graph.DateFormatterLoadTest - THREAD TIME:405
2014-02-28 11:06:58,042 [pool-1-thread-14] INFO net.ahm.graph.DateFormatterLoadTest - THREAD TIME:374
2014-02-28 11:06:57,995 [pool-1-thread-2] INFO net.ahm.graph.DateFormatterLoadTest - THREAD TIME:327
2014-02-28 11:06:57,995 [pool-1-thread-16] INFO net.ahm.graph.DateFormatterLoadTest - THREAD TIME:327
2014-02-28 11:06:58,385 [pool-1-thread-10] INFO net.ahm.graph.DateFormatterLoadTest - THREAD TIME:717
2014-02-28 11:06:58,385 [pool-1-thread-11] INFO net.ahm.graph.DateFormatterLoadTest - THREAD TIME:717
2014-02-28 11:06:58,417 [pool-1-thread-9] INFO net.ahm.graph.DateFormatterLoadTest - THREAD TIME:749
2014-02-28 11:06:58,418 [pool-1-thread-5] INFO net.ahm.graph.DateFormatterLoadTest - THREAD TIME:750
2014-02-28 11:06:58,418 [main] INFO net.ahm.graph.DateFormatterLoadTest - CONCURRENCY:16
##USING ATOMIC LONG##
2014-02-28 11:07:57,510 [main] INFO net.ahm.graph.DateFormatterLoadTest - TOTAL TIME:9365
2014-02-28 11:07:57,510 [main] INFO net.ahm.graph.DateFormatterLoadTest - CONCURRENCY:16
2014-02-28 11:07:57,510 [main] INFO net.ahm.graph.DateFormatterLoadTest - AVERAGE TIME:585
Creating an instance of SimpleDateFormat is very expensive (this article shows some profiling/benchmarking). If this cost is significant compared with the formatting of the dates into strings, then it follows that as you increase the number of threads (and therefore the number of SimpleDateFormat instances, as they are thread-locals), your average time is going to increase.
Another approach to speed up your formatting is to cache the formatted result. This exploits the fact that there are usually not that many different dates to format. If you split the formatting of date and time, it is an even better candidate for caching.
The downside of this is that normal Java cache implementations, like EHCache, are too slow; the cache access just takes longer than the formatting itself.
There is another cache implementation around that has access times on par with a HashMap; in that case you get a nice speedup. Here you find my proof-of-concept tests: https://github.com/headissue/cache2k-benchmark/blob/master/zoo/src/test/java/org/cache2k/benchmark/DateFormattingBenchmark.java
Maybe this can be a solution within your scenario. To illustrate the idea only (this is not the actual cache2k implementation), here is a minimal sketch that memoizes per-second results in an unbounded map; a real cache would need eviction:
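import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CachingDateFormatter {
    // Unbounded demo cache: second-precision timestamp -> formatted string.
    private static final Map<Long, String> CACHE = new ConcurrentHashMap<>();

    public static String format(Date date) {
        long seconds = date.getTime() / 1000; // the pattern below has second precision
        // A SimpleDateFormat is only created on a cache miss.
        return CACHE.computeIfAbsent(seconds, key ->
                new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ssZ", Locale.ENGLISH).format(date));
    }
}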
Disclaimer: I am working on cache2k....
SimpleDateFormat Not Thread-Safe
As the correct answer by Martin Wilson states, instantiating a SimpleDateFormat is relatively expensive.
Knowing that, your first thought might be: "Well, let's cache an instance for re-use." Nice thought, but beware: the SimpleDateFormat class is not thread-safe. So says the class documentation under its Synchronization heading.
Joda-Time
A better solution is to avoid the notoriously troublesome (and now outmoded) java.util.Date, .Calendar, and SimpleDateFormat classes. Instead use either:
Joda-Time, the third-party open-source library and popular replacement for Date/Calendar.
The java.time package, new and bundled in Java 8, supplanting the old Date/Calendar classes, inspired by Joda-Time, and defined by JSR 310.
Joda-Time is intentionally built to be thread-safe, largely through the use of immutable objects. There are some mutable classes, but those are not usually used.
This other question on StackOverflow explains that the java.time DateTimeFormatter class is indeed thread-safe, so you can create one instance, cache it, and let all your threads use that formatter without adding any extra synchronization or other concurrency controls. As a minimal sketch (assuming the same ISO-like pattern as the ISODateFormatter above):
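import java.time.ZoneId;
import java.time.format.DateTimeFormatter;
import java.util.Date;

public class IsoFormatter {
    // DateTimeFormatter is immutable and thread-safe: one shared instance suffices.
    private static final DateTimeFormatter FORMATTER =
            DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ssZ");

    public static String convertDateToString(Date date) {
        if (date == null) {
            return null;
        }
        return FORMATTER.format(date.toInstant().atZone(ZoneId.systemDefault()));
    }
}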
Our use case was write once (single thread) and read many times (concurrently), so I converted the Date to a String at the time of storing the data, instead of doing the conversion each time a request needs to be answered. A rough sketch of that pattern (field names are illustrative):
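import java.text.DateFormat;
import java.util.Date;

public class Record {
    private final Date created;
    private final String createdText; // formatted once at write time

    public Record(Date created, DateFormat format) {
        this.created = created;
        // Called from the single writer thread only, so no thread-safety issue.
        this.createdText = format.format(created);
    }

    public String getCreatedText() {
        return createdText; // the read path does no formatting at all
    }
}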