Vertx.io cluster and service discovery - java

I am playing with vertx.io, and it looks great. I have built up a cluster of three verticles (three simple Java main fat jars). One verticle exposes a web interface (a simple REST API); the other two simply inform the web verticle that they are up or down via vertx.io's service discovery mechanism.
Here is my (relevant part) simple "non web" verticle:
public class FileReader extends AbstractVerticle {

    private ServiceDiscovery discovery;
    private Logger log = LogManager.getLogger(getClass());
    private Record record;

    @Override
    public void start(Future<Void> startFuture) throws Exception {
        record = EventBusService.createRecord(getServiceName(), getServiceAddress(), getClass());
        setUpRecord(record);
        discovery = ServiceDiscovery.create(vertx);
        discovery.publish(record, h -> {
            if (h.succeeded()) {
                log.info("Record published.");
            } else {
                log.info("Record not published.", h.cause());
            }
        });
        startFuture.complete();
    }

    ...

    @Override
    public void stop(Future<Void> stopFuture) throws Exception {
        log.info("Stopping verticle.");
        discovery.unpublish(record.getRegistration(), h -> {
            if (h.succeeded()) {
                log.info("Service unpublished.");
                stopFuture.complete();
            } else {
                log.error(h.cause());
                stopFuture.fail(h.cause());
            }
        });
    }
}
And here is how I deploy one of the two "non web" verticles:
public class FileReaderApp {

    private static Logger log = LogManager.getLogger(FileReaderApp.class);
    private static String id;

    public static void main(String[] args) {
        ClusterManager cMgr = new HazelcastClusterManager();
        VertxOptions vOpt = new VertxOptions(new JsonObject());
        vOpt.setClusterManager(cMgr);
        Vertx.clusteredVertx(vOpt, ch -> {
            if (ch.succeeded()) {
                log.info("Deploying file reader.");
                Vertx vertx = ch.result();
                vertx.deployVerticle(new FileReader(), h -> {
                    if (h.succeeded()) {
                        id = h.result();
                    } else {
                        log.error(h.cause());
                    }
                });
            } else {
                log.error(ch.cause());
            }
        });
        Runtime.getRuntime().addShutdownHook(new Thread() {
            public void run() {
                log.info("Undeploying " + id);
                Vertx.vertx().undeploy(id, h -> {
                    if (h.succeeded()) {
                        log.info("undeployed.");
                    } else {
                        log.error(h.cause());
                    }
                });
            }
        });
    }
}
When "non-web" verticles start, the "web" verticle is correctly notified. But when "non-web" verticles shutdown, I hit a keyboard Ctrl-C, I got this error and "web" verticle still think everyone is up:
2017-12-01 09:08:27 INFO FileReader:31 - Undeploying 82a8f5c2-e6a2-4fc3-84ff-4bb095b5dc43
Exception in thread "Thread-3" java.lang.IllegalStateException: Shutdown in progress
at java.lang.ApplicationShutdownHooks.add(ApplicationShutdownHooks.java:66)
at java.lang.Runtime.addShutdownHook(Runtime.java:211)
at io.vertx.core.impl.FileResolver.setupCacheDir(FileResolver.java:310)
at io.vertx.core.impl.FileResolver.<init>(FileResolver.java:92)
at io.vertx.core.impl.VertxImpl.<init>(VertxImpl.java:185)
at io.vertx.core.impl.VertxImpl.<init>(VertxImpl.java:144)
at io.vertx.core.impl.VertxImpl.<init>(VertxImpl.java:140)
at io.vertx.core.impl.VertxFactoryImpl.vertx(VertxFactoryImpl.java:34)
at io.vertx.core.Vertx.vertx(Vertx.java:82)
at edu.foo.app.FileReaderApp$1.run(FileReaderApp.java:32)
I don't fully get what's going on. Did the application shut down while it was still undeploying the verticle? How do I solve this? What is the vertx.io approach?

There are two problems:
You should undeploy the verticle using the clustered Vert.x instance, not just any instance.
undeploy is a non-blocking operation, so the shutdown hook thread must wait for its completion.
Here's a modified version:
Runtime.getRuntime().addShutdownHook(new Thread() {
    public void run() {
        log.info("Undeploying " + id);
        CountDownLatch latch = new CountDownLatch(1);
        theClusteredVertxInstance.undeploy(id, h -> {
            if (h.succeeded()) {
                log.info("undeployed.");
            } else {
                log.error(h.cause());
            }
            latch.countDown();
        });
        try {
            latch.await(5, TimeUnit.SECONDS);
        } catch (Exception ignored) {
        }
    }
});
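For reference, here is a rough sketch of how the clustered instance could be kept in a field so the shutdown hook uses it instead of creating a fresh Vertx.vertx(). The field names (clusteredVertx) are illustrative, not from the original post; imports are as in the original FileReaderApp plus java.util.concurrent.CountDownLatch and TimeUnit:

public class FileReaderApp {

    private static Logger log = LogManager.getLogger(FileReaderApp.class);
    private static volatile Vertx clusteredVertx; // set once clustering succeeds
    private static volatile String id;

    public static void main(String[] args) {
        ClusterManager cMgr = new HazelcastClusterManager();
        VertxOptions vOpt = new VertxOptions().setClusterManager(cMgr);
        Vertx.clusteredVertx(vOpt, ch -> {
            if (ch.succeeded()) {
                clusteredVertx = ch.result();
                clusteredVertx.deployVerticle(new FileReader(), h -> {
                    if (h.succeeded()) {
                        id = h.result();
                    } else {
                        log.error(h.cause());
                    }
                });
            } else {
                log.error(ch.cause());
            }
        });

        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            Vertx vertx = clusteredVertx;
            if (vertx == null || id == null) {
                return; // nothing was deployed, nothing to undeploy
            }
            CountDownLatch latch = new CountDownLatch(1);
            // Undeploying through the clustered instance calls FileReader.stop(),
            // which unpublishes the service discovery record.
            vertx.undeploy(id, h -> latch.countDown());
            try {
                latch.await(5, TimeUnit.SECONDS);
            } catch (InterruptedException ignored) {
                Thread.currentThread().interrupt();
            }
        }));
    }
}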

Related

Scheduler works incorrectly in unit testing

I need to collect data from a public API. I want to collect it daily or twice a day.
public class AlphavantageStockRequestDispatcher {

    public static void startAlphavantageStockScraper(int timeInterval) {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
        Runnable getStockList =
            new Runnable() {
                @Override
                public void run() {
                    List<AlphavantageStock> stocks = AlphavantageStockRequest.getStockPrices(); // Method contains requests
                    StockDao<AlphavantageStock> dao = new JpaAlphavantageStockDao();
                    for (AlphavantageStock stock : stocks) {
                        dao.save(stock);
                    }
                }
            };
        scheduler.scheduleAtFixedRate(getStockList, 0, timeInterval, TimeUnit.HOURS);
    }
}
The problem is that when I start it from the same class (I just added a main method and invoked startAlphavantageStockScraper(1);) it works fine. But when I want to test it via JUnit it does not work (the test class is in the same package name, but under the test source folder):
public class AlphavantageStockRequestDispatcherTest {

    @Test
    public void startDispatcher_TwoFullCycles_WithOneHourIntervalBetween() {
        AlphavantageStockRequestDispatcher.startAlphavantageStockScraper(1);
    }
}
While debugging I found out that during unit test execution the program reaches the public void run() line and then skips it. So there's no error: the program ends correctly but does nothing useful.
Any help will be appreciated.
This is how asynchronous programming works. In the AlphavantageStockRequestDispatcher class you have only submitted a task; you still have to wait until it has completed. There are several ways to handle this situation. I prefer state notification using java.util.concurrent.CountDownLatch, so some refactoring of the AlphavantageStockRequestDispatcher class is recommended, like this:
public class AlphavantageStockRequestDispatcher {

    public static void startAlphavantageStockScraper(int timeInterval, CountDownLatch latch) {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
        Runnable getStockList =
            new Runnable() {
                @Override
                public void run() {
                    System.out.println("worker started");
                    try {
                        Thread.sleep(10_000L);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    } finally {
                        System.out.println("worker finished");
                        Optional.ofNullable(latch).ifPresent(CountDownLatch::countDown);
                    }
                }
            };
        scheduler.scheduleAtFixedRate(getStockList, 0, timeInterval, TimeUnit.HOURS);
    }
}
Now it's possible to test that.
public class AlphavantageStockRequestDispatcherTest {

    @Test
    void startDispatcher_TwoFullCycles_WithOneHourIntervalBetween() throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(1);
        AlphavantageStockRequestDispatcher.startAlphavantageStockScraper(1, latch);
        latch.await(20, TimeUnit.SECONDS);
        System.out.println("first finished - need some assertions");
    }
}
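If you also want the test to fail when the worker never finishes, note that CountDownLatch.await(timeout, unit) returns false on timeout, so that result can be asserted directly. A small sketch (the test name is illustrative; the assertion style depends on your JUnit version):

@Test
void startDispatcher_failsIfWorkerDoesNotFinish() throws InterruptedException {
    CountDownLatch latch = new CountDownLatch(1);
    AlphavantageStockRequestDispatcher.startAlphavantageStockScraper(1, latch);

    // true only if countDown() was called before the 20-second timeout
    boolean workerFinished = latch.await(20, TimeUnit.SECONDS);
    assertTrue(workerFinished);
}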

JSVC daemon stop does not wait for threads to finish

I have deployed a Java service application on Linux using Apache jsvc. It's a multithreaded application; however, when daemon stop is initiated, it kills the JVM before all threads have finished.
It goes like this:
WorkerLauncher, which implements the Daemon interface, starts the service.
The WorkerPool class starts the parent thread and the worker (child) threads, which are represented by the Worker class.
When WorkerLauncher stop is called, the WorkerPool thread is interrupted; the InterruptedException is caught and the child threads are interrupted as well.
When a child thread is interrupted, it performs calculations before stopping. This is where it goes wrong: the calculations are not finished before the thread is killed, I guess (not quite sure here).
WorkerLauncher
public class WorkerLauncher implements Daemon {

    private static final Logger log = LoggerFactory.getLogger("com.worker");
    private WorkerPool pool;

    @Override
    public void init(DaemonContext context) {
    }

    @Override
    public void start() {
        log.info("Starting worker pool...");
        if (pool == null) pool = new WorkerPool();
        pool.start();
        log.info("Worker pool started");
    }

    @Override
    public void stop() {
        pool.stop();
    }

    @Override
    public void destroy() {
        pool = null;
    }
}
WorkerPool
public class WorkerPool implements Runnable {

    private static final Logger log = LoggerFactory.getLogger("com.worker");
    private Thread thread = null;
    private List<Thread> workers = new ArrayList<Thread>();

    public WorkerPool() {
        for (int i = 0; i < 1; i++) workers.add(new Thread(new Worker("Worker " + i), "Worker " + i));
        this.thread = new Thread(this, "Worker Main");
    }

    @Override
    public void run() {
        for (Thread thread : workers) {
            thread.start();
        }
        while (isRunning()) {
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                log.info("Worker pool stopping");
                for (Thread worker : workers) {
                    worker.interrupt();
                }
                break;
            }
        }
    }

    public boolean isRunning() {
        return !thread.isInterrupted();
    }

    public void stop() {
        thread.interrupt();
    }

    public void start() {
        thread.start();
    }
}
Worker
public class Worker implements Runnable {

    private static final Logger log = LoggerFactory.getLogger("com.worker");
    private String name;

    public Worker(String name) {
        this.name = name;
    }

    private void stop() {
        longCalculations();
        log.info("Worker {] stopped", name);
    }

    @Override
    public void run() {
        try {
            try {
                while (true) {
                    Thread.sleep(1000);
                }
            } catch (InterruptedException e) {
                log.info("InterruptedException");
            }
        } finally {
            stop();
        }
    }

    private void longCalculations() {
        for (int i = 0; i < 99999; i++) {
            for (int j = 0; j < 9999; j++) {
                Math.round(i + j * 0.999);
            }
        }
    }
}
This is just an example distilled from the real application, but it reproduces the same issue. I tested commons-daemon versions 1.1 and 1.2. If the long calculations are removed or shortened (better performance), everything works. What am I missing here? Any ideas?
Here's what the log output looks like:
07:55:10.790 ( main) INFO Starting worker pool...
07:55:10.791 ( main) INFO Worker pool started
07:55:34.056 (Worker Main) INFO Worker pool stopping
07:55:34.057 ( Worker 0) INFO InterruptedException
Note how Worker {] stopped is missing. And in the real application, a third-party process that was started by the worker keeps running (it can be seen in ps -A) even though jsvc no longer shows up in the process list.
EDIT
Modified stop() method:
private void stop() {
    log.info("Worker {] being stopped", name);
    longCalculations();
    log.info("Worker {] stopped", name);
}
Log output:
06:18:55.762 ( main) INFO Starting worker pool...
06:18:55.764 ( main) INFO Worker pool started
06:19:08.614 (Worker Main) INFO Worker pool stopping
06:19:08.615 ( Worker 0) INFO InterruptedException
06:19:08.616 ( Worker 0) INFO Worker {] being stopped
I start/stop the service using jsvc:
START
./jsvc -cwd . -cp commons-daemon-1.1.0.jar:multithread-test.jar -outfile /tmp/worker.out -errfile /tmp/worker.err -pidfile /var/run/worker.pid com.worker.WorkerLauncher
STOP
./jsvc -cwd . -cp commons-daemon-1.1.0.jar:multithread-test.jar -outfile /tmp/worker.out -errfile /tmp/worker.err -pidfile /var/run/worker.pid -stop com.worker.WorkerLauncher
IMPORTANT!
I forgot to mention that this application works as expected on Windows using Apache procrun. This only happens when launching it on Linux using jsvc.
ANOTHER EDIT
If I wait for the threads to finish (checking whether any worker thread isAlive) after pool.stop() in WorkerLauncher.stop(), everything works.
@Override
public void stop() {
    pool.stop();
    while (pool.isAnyGuardianRunning()) {
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            break;
        }
    }
}
And in WorkerPool:
public boolean isAnyGuardianRunning() {
    for (Thread thread : workers) {
        if (thread.isAlive()) return true;
    }
    return false;
}
But I still don't know why this is happening... Any ideas?

PublishOn Reactor Asynchronous Flux and thread locking

I'm trying to build a scenario which follows these steps:
(on Init) Flux publisher created
(on Init) Subscribers subscribe
(on user action) publisher start streaming/publishing events
Web controller subscriber consumes and caches last BUFFER_SIZE events
Based on http://projectreactor.io/docs/core/release/reference/#advanced-parallelizing-parralelflux and https://www.baeldung.com/reactor-core I'm trying to use create and publish to do this, and the issue I'm having is that the thread that calls flux.connect is trapped in the while loop inside the publisher.
Here is a minimal working example using spring-boot-starter-webflux:
private ConnectableFlux<Integer> flux;
private Scheduler scheduler;
private int nextRead = 0;
private static final int BUFFERSIZE = 100;
private List<Integer> sink = new LinkedList<Integer>();

@PostConstruct
public void Init() {
    this.scheduler = Schedulers.newSingle("Streamer");
    flux = Flux.<Integer>create(fluxSink -> {
        while (true) {
            fluxSink.next(nextRead++);
            try {
                Thread.sleep(100);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }).publishOn(scheduler).publish();
}

@GetMapping("/subscribe")
public void subscribe() {
    this.flux.subscribeOn(scheduler, false).subscribe(new CoreSubscriber<Integer>() {
        @Override
        public Context currentContext() {
            return null;
        }

        @Override
        public void onSubscribe(Subscription subscription) {
            subscription.request(Long.MAX_VALUE);
        }

        @Override
        public void onNext(Integer e) {
            while (sink.size() >= BUFFERSIZE) sink.remove(0);
            sink.add(e);
            logger.debug("sink event: " + e);
        }

        @Override
        public void onError(Throwable t) {}

        @Override
        public void onComplete() {}
    });
}

@GetMapping("/start")
public void startStream() {
    logger.debug("EventStreamSimulator startStream before connect");
    this.flux.connect();
    logger.debug("EventStreamSimulator startStream after connect");
}

@GetMapping("/values")
public Flux<Integer> getEvents() {
    return Flux.fromIterable(sink);
}
Based on this code, the web request on /start will start the streaming, but the HTTP thread gets stuck in the emitter's infinite loop. Requests on /values and the logging show that it is otherwise working fine (but the original HTTP request to /start never finishes/returns).
Sample logs:
2018-10-09 18:12:54.798 DEBUG 6024 --- [ctor-http-nio-2] com.example.FluxPocController : emmit event: 0
2018-10-09 18:12:54.798 DEBUG 6024 --- [ Streamer-1] com.example.FluxPocController : sink event: 0
So here is the question: is the publishOn directive supported for this async way of using Flux.create? If so, how do I use it?

How to make JUnit4 "Wait" for asynchronous job to finish before running tests

I am trying to write a test for my android app that communicates with a cloud service.
Theoretically the flow for the test is supposed to be this:
Send request to the server in a worker thread
Wait for the response from the server
Check the response returned by the server
I am trying to use Espresso's IdlingResource class to accomplish that but it is not working as expected. Here's what I have so far
My Test:
@RunWith(AndroidJUnit4.class)
public class CloudManagerTest {

    FirebaseOperationIdlingResource mIdlingResource;

    @Before
    public void setup() {
        mIdlingResource = new FirebaseOperationIdlingResource();
        Espresso.registerIdlingResources(mIdlingResource);
    }

    @Test
    public void testAsyncOperation() {
        Cloud.CLOUD_MANAGER.getDatabase().getCategories(new OperationResult<List<Category>>() {
            @Override
            public void onResult(boolean success, List<Category> result) {
                mIdlingResource.onOperationEnded();
                assertTrue(success);
                assertNotNull(result);
            }
        });
        mIdlingResource.onOperationStarted();
    }
}
The FirebaseOperationIdlingResource
public class FirebaseOperationIdlingResource implements IdlingResource {

    private boolean idleNow = true;
    private ResourceCallback callback;

    @Override
    public String getName() {
        return String.valueOf(System.currentTimeMillis());
    }

    public void onOperationStarted() {
        idleNow = false;
    }

    public void onOperationEnded() {
        idleNow = true;
        if (callback != null) {
            callback.onTransitionToIdle();
        }
    }

    @Override
    public boolean isIdleNow() {
        synchronized (this) {
            return idleNow;
        }
    }

    @Override
    public void registerIdleTransitionCallback(ResourceCallback callback) {
        this.callback = callback;
    }
}
When used with Espresso's view matchers, the test is executed properly: the activity waits and then checks the result.
However, plain JUnit4 assert methods are ignored and JUnit does not wait for my cloud operation to complete.
Is it possible that IdlingResource only works with Espresso methods? Or am I doing something wrong?
I use Awaitility for something like that.
It has a very good guide, here is the basic idea:
Wherever you need to wait:
await().until(newUserIsAdded());
elsewhere:
private Callable<Boolean> newUserIsAdded() {
    return new Callable<Boolean>() {
        public Boolean call() throws Exception {
            return userRepository.size() == 1; // The condition that must be fulfilled
        }
    };
}
I think this example is pretty similar to what you're doing, so save the result of your asynchronous operation to a field, and check it in the call() method.
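Applied to the test above, that could look roughly like the sketch below. It assumes Awaitility is on the classpath, and it stores the callback result in fields; the field names and the 30-second timeout are just placeholders:

// static import assumed: org.awaitility.Awaitility.await
public class CloudManagerTest {

    private final AtomicBoolean finished = new AtomicBoolean(false);
    private final AtomicReference<List<Category>> categories = new AtomicReference<>();

    @Test
    public void testAsyncOperation() {
        Cloud.CLOUD_MANAGER.getDatabase().getCategories(new OperationResult<List<Category>>() {
            @Override
            public void onResult(boolean success, List<Category> result) {
                categories.set(result);
                finished.set(true);
            }
        });

        // Block the test thread until the callback has fired, or fail after 30 seconds
        await().atMost(30, TimeUnit.SECONDS).untilTrue(finished);

        assertNotNull(categories.get());
    }
}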
JUnit will not wait for async tasks to complete. You can use a CountDownLatch to block the thread until you receive the response from the server or the wait times out.
A countdown latch is a simple yet elegant solution and does NOT need an external library. It also helps you focus on the actual logic to be tested rather than over-engineering the async wait or waiting for a response.
void testBackgroundJob() {
    CountDownLatch latch = new CountDownLatch(1);

    // Do your async job
    Service.doSomething(new Callback() {
        @Override
        public void onResponse() {
            ACTUAL_RESULT = SUCCESS;
            latch.countDown(); // notify the count down latch
            // assertEquals(..
        }
    });

    // Wait for the async api response
    try {
        latch.await();
    } catch (InterruptedException e) {
        e.printStackTrace();
    }

    assertEquals(expectedResult, ACTUAL_RESULT);
}

How to send periodic messages using WebSocket?

I am using WebSocket on Tomcat (the actual implementation is Tyrus, the reference implementation of JSR 356). It works great when I have to handle client messages and respond to them. However, I would like to implement a push solution for several of my client-side controls. Actually I need two types of solution:
pushing out data with a specific interval,
pushing out system messages, when they are raised.
For the first one, I think a ScheduledExecutorService can be a solution; I already have a more or less working example, though I have issues with cleaning it up. For the second one, I think I would need a thread that triggers a method in the WebSocket endpoint, but I don't really know how to do this cleanly either. And by clean, I mean that I would like to have threads running only if there are sessions connected to my endpoint.
To summarize my question: how would you properly implement a push message solution using the Java EE WebSocket API?
ps.: I would prefer a "pure" solution, but Spring is also not unwelcome.
Current code skeleton
This is how my current solution looks for the first problem:
@ServerEndpoint(...)
public class MyEndPoint {

    // own class, abstracting away session handling
    private static SessionHandler sessionHandler = new SessionHandler();
    private static ScheduledExecutorService timer =
            Executors.newSingleThreadScheduledExecutor();
    private static boolean timerStarted = false;

    @OnOpen
    public void onOpen(Session session, EndpointConfig config) {
        sessionHandler.addSession(session);
        if (!timerStarted) {
            timer.scheduleAtFixedRate(new Runnable() {
                @Override
                public void run() {
                    sessionHandler.sendToAllSession("foo");
                }
            }, 0, 3, TimeUnit.SECONDS);
            timerStarted = true;
        }
    }

    @OnClose
    public void onClose(Session session) {
        sessionHandler.removeSession(session);
        if (0 == sessionHandler.countSessions()) {
            // TODO: cleanup thread properly
            timer.shutdown();
            try {
                while (!timer.awaitTermination(10, TimeUnit.SECONDS));
            } catch (InterruptedException e) {
                log.debug("Timer terminated.");
            }
            timerStarted = false;
        }
    }
}
This works more or less, but after a few page reloads it dies with a RejectedExecutionException, and I am not sure how to handle the situation.
Unfortunately, you can't use an ExecutorService after shutdown() has been called,
so after the onClose() method the next onOpen() method will crash.
Here is a little code to demonstrate:
public class TestThread {

    public static void main(String[] args) {
        final ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
        boolean timerStarted = false;

        // OnOpen - 1 - OK
        if (!timerStarted) {
            timer.scheduleAtFixedRate(new Runnable() {
                @Override
                public void run() {
                    System.out.println("foo");
                }
            }, 0, 3, TimeUnit.SECONDS);
            timerStarted = true;
        }

        // OnOpen - 2 - OK
        if (!timerStarted) {
            timer.scheduleAtFixedRate(new Runnable() {
                @Override
                public void run() {
                    System.out.println("foo");
                }
            }, 0, 3, TimeUnit.SECONDS);
            timerStarted = true;
        }

        // OnClose - 1 - OK
        timer.shutdown();
        timerStarted = false;

        // OnOpen - 3 - NOT OK: after shutdown you can't use the timer, a RejectedExecutionException will be thrown
        if (!timerStarted) {
            // will crash at this invocation
            timer.scheduleAtFixedRate(new Runnable() {
                @Override
                public void run() {
                    System.out.println("foo");
                }
            }, 0, 3, TimeUnit.SECONDS);
            timerStarted = true;
        }
    }
}
You may also try to use your class as a web listener (http://docs.oracle.com/javaee/7/api/javax/servlet/annotation/WebListener.html)
and create the timer in the methods that are executed on startup and shutdown of the server:
@WebListener
@ServerEndpoint(...)
public class MyEndPoint implements ServletContextListener {

    final ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();

    @Override
    public void contextInitialized(ServletContextEvent servletContextEvent) {
        timer.scheduleWithFixedDelay(...)
    }

    @Override
    public void contextDestroyed(ServletContextEvent servletContextEvent) {
        timer.shutdown();
    }
    ...
}
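Putting those pieces together, a fuller sketch of the listener approach could look like the following. SessionHandler is the asker's own abstraction, the endpoint path and the 3-second period are placeholders, and the fields are static because the container typically creates a separate endpoint instance per connection (and yet another instance as the listener), so instance fields would not be shared. Since the executor now lives for the whole application, reopening a session after all sessions have closed can no longer hit the RejectedExecutionException:

@WebListener
@ServerEndpoint("/push") // placeholder path
public class MyEndPoint implements ServletContextListener {

    private static final SessionHandler sessionHandler = new SessionHandler();
    private static final ScheduledExecutorService timer =
            Executors.newSingleThreadScheduledExecutor();

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // Started once when the application is deployed; broadcasts to whoever is connected.
        timer.scheduleWithFixedDelay(() -> sessionHandler.sendToAllSession("foo"),
                0, 3, TimeUnit.SECONDS);
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // Stopped once when the application is undeployed.
        timer.shutdown();
    }

    @OnOpen
    public void onOpen(Session session, EndpointConfig config) {
        sessionHandler.addSession(session);
    }

    @OnClose
    public void onClose(Session session) {
        sessionHandler.removeSession(session);
    }
}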
