I am trying to figure out what options I have for solving the following problem.
a) I want to have a database table that uses "crontab syntax" to schedule tasks; the structure would be something like this:
|-Id-|--Crontab Syntax--|---------Task----------|
| 1  | 30 * * * * *     | MyClass.TaskA(args[]) |
| 2  | 0 1 * * 1-5 *    | MyClass.TaskB(args[]) |
The above table will be modified at any time by an external application. Tasks added or removed should instantly affect the scheduler.
b) The scheduler itself should reside on a Java application server. It should constantly be kept in sync with the active scheduled tasks in the database table. Whenever a scheduled event occurs, it should trigger/call an EJB with the value in 'Task' as the argument.
I am not looking for an answer to the above problem, but rather some input on which frameworks can be used for the crontab parsing and how the EJB representing the scheduler should be deployed.
Thanks in advance.
See the EJB 3.1 @Schedule API. The API we chose for the spec is a little closer to Quartz syntax than cron -- tiny variances between the two.
Here's an annotation example:
package org.superbiz.corn;

import javax.ejb.Lock;
import javax.ejb.LockType;
import javax.ejb.Schedule;
import javax.ejb.Schedules;
import javax.ejb.Singleton;
import java.util.concurrent.atomic.AtomicInteger;

/**
 * This is where we schedule all of Farmer Brown's corn jobs
 */
@Singleton
@Lock(LockType.READ) // allows timers to execute in parallel
public class FarmerBrown {

    private final AtomicInteger checks = new AtomicInteger();

    @Schedules({
            @Schedule(month = "5", dayOfMonth = "20-Last", minute = "0", hour = "8"),
            @Schedule(month = "6", dayOfMonth = "1-10", minute = "0", hour = "8")
    })
    private void plantTheCorn() {
        // Dig out the planter!!!
    }

    @Schedules({
            @Schedule(month = "9", dayOfMonth = "20-Last", minute = "0", hour = "8"),
            @Schedule(month = "10", dayOfMonth = "1-10", minute = "0", hour = "8")
    })
    private void harvestTheCorn() {
        // Dig out the combine!!!
    }

    @Schedule(second = "*", minute = "*", hour = "*")
    private void checkOnTheDaughters() {
        checks.incrementAndGet();
    }

    public int getChecks() {
        return checks.get();
    }
}
Full source for this example is here.
You can do the same thing programmatically via the ScheduleExpression class, which is just a constructible version of the above annotation. Here's what the above example would look like if the scheduling was done in code:
package org.superbiz.corn;

import javax.annotation.PostConstruct;
import javax.annotation.Resource;
import javax.ejb.Lock;
import javax.ejb.LockType;
import javax.ejb.ScheduleExpression;
import javax.ejb.Singleton;
import javax.ejb.Startup;
import javax.ejb.Timeout;
import javax.ejb.Timer;
import javax.ejb.TimerConfig;
import javax.ejb.TimerService;
import java.util.concurrent.atomic.AtomicInteger;

/**
 * This is where we schedule all of Farmer Brown's corn jobs
 *
 * @version $Revision$ $Date$
 */
@Singleton
@Lock(LockType.READ) // allows timers to execute in parallel
@Startup
public class FarmerBrown {

    private final AtomicInteger checks = new AtomicInteger();

    @Resource
    private TimerService timerService;

    @PostConstruct
    private void construct() {
        final TimerConfig plantTheCorn = new TimerConfig("plantTheCorn", false);
        timerService.createCalendarTimer(new ScheduleExpression().month(5).dayOfMonth("20-Last").minute(0).hour(8), plantTheCorn);
        timerService.createCalendarTimer(new ScheduleExpression().month(6).dayOfMonth("1-10").minute(0).hour(8), plantTheCorn);

        final TimerConfig harvestTheCorn = new TimerConfig("harvestTheCorn", false);
        timerService.createCalendarTimer(new ScheduleExpression().month(9).dayOfMonth("20-Last").minute(0).hour(8), harvestTheCorn);
        timerService.createCalendarTimer(new ScheduleExpression().month(10).dayOfMonth("1-10").minute(0).hour(8), harvestTheCorn);

        final TimerConfig checkOnTheDaughters = new TimerConfig("checkOnTheDaughters", false);
        timerService.createCalendarTimer(new ScheduleExpression().second("*").minute("*").hour("*"), checkOnTheDaughters);
    }

    @Timeout
    public void timeout(Timer timer) {
        if ("plantTheCorn".equals(timer.getInfo())) {
            plantTheCorn();
        } else if ("harvestTheCorn".equals(timer.getInfo())) {
            harvestTheCorn();
        } else if ("checkOnTheDaughters".equals(timer.getInfo())) {
            checkOnTheDaughters();
        }
    }

    private void plantTheCorn() {
        // Dig out the planter!!!
    }

    private void harvestTheCorn() {
        // Dig out the combine!!!
    }

    private void checkOnTheDaughters() {
        checks.incrementAndGet();
    }

    public int getChecks() {
        return checks.get();
    }
}
The source for this example is here
Side note: both examples are runnable in a plain IDE and have test cases that use the Embeddable EJBContainer API, also new in EJB 3.1.
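For illustration, here is a minimal sketch (mine, not from the linked examples) of how an embeddable-container bootstrap could look; the module name "schedule-example" and the resulting JNDI name are assumptions that depend on your packaging:
import javax.ejb.embeddable.EJBContainer;

public class FarmerBrownTest {

    public static void main(String[] args) throws Exception {
        EJBContainer container = EJBContainer.createEJBContainer();
        try {
            // The global JNDI name is an assumption; it depends on the module name.
            FarmerBrown farmerBrown = (FarmerBrown) container.getContext()
                    .lookup("java:global/schedule-example/FarmerBrown");
            System.out.println("Checks so far: " + farmerBrown.getChecks());
        } finally {
            container.close();
        }
    }
}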
@Schedule vs ScheduleExpression
@Schedule
Statically configured
Many schedule methods are possible
Not possible to pass arguments
Cannot be cancelled
The above is all done in the deployment descriptor and is therefore limited to only things that can be configured in advance. The more dynamic version uses the following signature of the TimerService:
TimerService.createCalendarTimer(javax.ejb.ScheduleExpression, javax.ejb.TimerConfig)
ScheduleExpression
Dynamically created
Exactly one @Timeout method supports all ScheduleExpressions
The timeout method must take javax.ejb.Timer as a parameter
Arguments can be passed
The caller wraps arguments in a TimerConfig.setInfo(Serializable) object
The @Timeout method accesses them via Timer.getInfo()
Can be cancelled by the caller or the @Timeout method
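To make the last three points concrete, here is a minimal sketch (assumptions: a singleton bean with an injected TimerService and a made-up schedule) of passing an argument through TimerConfig and cancelling a timer from the @Timeout method:
import javax.annotation.Resource;
import javax.ejb.ScheduleExpression;
import javax.ejb.Singleton;
import javax.ejb.Timeout;
import javax.ejb.Timer;
import javax.ejb.TimerConfig;
import javax.ejb.TimerService;

@Singleton
public class DynamicScheduler {

    @Resource
    private TimerService timerService;

    public void schedule(String task) {
        // the task name travels as the Serializable "info" payload
        TimerConfig config = new TimerConfig(task, false);
        timerService.createCalendarTimer(
                new ScheduleExpression().minute("30").hour("*"), config);
    }

    @Timeout
    public void timeout(Timer timer) {
        String task = (String) timer.getInfo(); // retrieve the argument
        if ("obsoleteTask".equals(task)) {
            timer.cancel(); // a timer can be cancelled from the @Timeout method
        }
    }
}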
Also note that there is an interceptor @AroundTimeout annotation that functions identically to @AroundInvoke and allows interceptors to participate in the bean's timer functionality.
EJB has its own built-in timers, but you'll have to write the boilerplate code that translates the cron entries into timer schedules. Parsing the cron expressions themselves should be trivial.
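As a rough illustration of that boilerplate, here is a sketch (my own; the six-field order minute/hour/day-of-month/month/day-of-week/year is an assumption about the question's table format) of translating one table row into a ScheduleExpression:
import javax.ejb.ScheduleExpression;

public final class CronToSchedule {

    public static ScheduleExpression parse(String crontab) {
        // e.g. "30 * * * * *" from the question's table
        String[] f = crontab.trim().split("\\s+");
        if (f.length != 6) {
            throw new IllegalArgumentException("Expected 6 fields: " + crontab);
        }
        return new ScheduleExpression()
                .minute(f[0])
                .hour(f[1])
                .dayOfMonth(f[2])
                .month(f[3])
                .dayOfWeek(f[4])
                .year(f[5]);
    }
}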
If you're not afraid of venturing outside of EJB, Quartz is, as lexicore mentioned, an excellent option.
Take a look at Quartz. If you use Spring, there's very good support for it there. A neat, reliable, well-working thing.
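For a feel of the API, here is a hedged sketch of scheduling one of the question's table rows with plain Quartz 2.x (no Spring); MyTask is a hypothetical stand-in for MyClass.TaskA:
import org.quartz.CronScheduleBuilder;
import org.quartz.Job;
import org.quartz.JobBuilder;
import org.quartz.JobDetail;
import org.quartz.JobExecutionContext;
import org.quartz.Scheduler;
import org.quartz.Trigger;
import org.quartz.TriggerBuilder;
import org.quartz.impl.StdSchedulerFactory;

public class QuartzSketch {

    // hypothetical job, standing in for the question's MyClass.TaskA
    public static class MyTask implements Job {
        @Override
        public void execute(JobExecutionContext context) {
            System.out.println("TaskA fired");
        }
    }

    public static void main(String[] args) throws Exception {
        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
        JobDetail job = JobBuilder.newJob(MyTask.class)
                .withIdentity("taskA", "dbJobs").build();
        Trigger trigger = TriggerBuilder.newTrigger()
                // note: Quartz cron expressions have a leading seconds field
                .withSchedule(CronScheduleBuilder.cronSchedule("0 30 * * * ?"))
                .build();
        scheduler.scheduleJob(job, trigger);
        scheduler.start();
    }
}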
I'm trying to migrate from Vert.x to Quarkus. In Vert.x, when I write message consumers for Kafka/AMQP etc., I have to scale the number of verticles to maximize performance across multiple cores, i.e. vertical scaling -- is this possible in Quarkus? I see a similar question here, but it wasn't answered.
For example, with Kafka I might create a consumer inside a verticle and then scale that verticle, say, 10 times (that is, specify the number of instances in the deployment to be 10) after doing performance testing to determine that's the optimal number. My understanding is that by default, 1 verticle = 1 event loop, and it does not scale across multiple cores.
I know that it's possible to use Vert.x verticles in Quarkus, but is there another way to scale things like the number of Kafka consumers across multiple cores?
I see that this type of scalability is configurable for things like Quarkus HTTP, but I can't find anything about message consumers.
Here's the Vert.x Verticle approach that overall I'm very happy with, but I wish there were better documentation on how to do this.
UPDATE: Field injection doesn't work with this example, but constructor injection does work.
Let's say I want to inject this:
@ApplicationScoped
public class CoffeeRepositoryService {

    public CoffeeRepositoryService() {
        System.out.println("Injection succeeded!");
    }
}
Here's my Verticle
package org.acme;

import io.smallrye.mutiny.Uni;
import io.smallrye.mutiny.vertx.core.AbstractVerticle;
import io.vertx.core.impl.logging.Logger;
import io.vertx.core.impl.logging.LoggerFactory;
import io.vertx.mutiny.rabbitmq.RabbitMQClient;
import io.vertx.rabbitmq.RabbitMQOptions;
import javax.inject.Inject;

public class RQVerticle extends AbstractVerticle {

    private final Logger LOGGER = LoggerFactory.getLogger(RQVerticle.class);

    // This doesn't work - returns null
    @Inject
    CoffeeRepositoryService coffeeRepositoryService;

    RQVerticle() {} // dummy constructor needed

    @Inject // constructor injection - this does work
    RQVerticle(CoffeeRepositoryService coffeeRepositoryService) {
        // Here coffeeRepositoryService is injected properly
    }

    @Override
    public Uni<Void> asyncStart() {
        LOGGER.info("Creating RabbitMQ Connection after Quarkus successful initialization");
        RabbitMQOptions config = new RabbitMQOptions();
        config.setUri("amqp://localhost:5672");
        RabbitMQClient client = RabbitMQClient.create(vertx, config);
        Uni<Void> clientResp = client.start();
        clientResp.subscribe()
                .with(asyncResult -> LOGGER.info("RabbitMQ successfully connected!"));
        return clientResp;
    }
}
Main Class - injection doesn't work like this
package org.acme;

import io.quarkus.runtime.Quarkus;
import io.quarkus.runtime.QuarkusApplication;
import io.quarkus.runtime.annotations.QuarkusMain;
import io.vertx.core.DeploymentOptions;
import io.vertx.mutiny.core.Vertx;

@QuarkusMain
public class Main {

    public static void main(String... args) {
        Quarkus.run(MyApp.class, args);
    }

    public static class MyApp implements QuarkusApplication {

        @Override
        public int run(String... args) throws Exception {
            var vertx = Vertx.vertx();
            System.out.println("Deployment Starting");
            DeploymentOptions options = new DeploymentOptions()
                    .setInstances(2);
            vertx.deployVerticleAndAwait(RQVerticle::new, options);
            System.out.println("Deployment completed");
            Quarkus.waitForExit();
            return 0;
        }
    }
}
Main Class with working injection but cannot deploy more than one instance
package org.acme;

import io.quarkus.runtime.StartupEvent;
import io.vertx.core.DeploymentOptions;
import io.vertx.mutiny.core.Vertx;
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.event.Observes;
import org.jboss.logging.Logger;

@ApplicationScoped
public class MainVerticles {

    private static final Logger LOGGER = Logger.getLogger(MainVerticles.class);

    public void init(@Observes StartupEvent e, Vertx vertx, RQVerticle verticle) {
        DeploymentOptions options = new DeploymentOptions()
                .setInstances(2);
        vertx.deployVerticle(verticle, options).await().indefinitely();
    }
}
Std Out - first main class looks good
2021-09-15 15:48:12,052 INFO [org.acm.RQVerticle] (vert.x-eventloop-thread-2) Creating RabbitMQ Connection after Quarkus successful initialization
2021-09-15 15:48:12,053 INFO [org.acm.RQVerticle] (vert.x-eventloop-thread-3) Creating RabbitMQ Connection after Quarkus successful initialization
Std Out - second main class
2021-09-22 15:48:11,986 ERROR [io.qua.run.Application] (Quarkus Main Thread) Failed to start application (with profile dev):
java.lang.IllegalArgumentException: Can't specify > 1 instances for already created verticle
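One way around that error, sketched here as an assumption rather than a verified fix: deploy by supplier, so Vert.x asks CDI for a fresh, injected RQVerticle per instance (this assumes RQVerticle is a @Dependent-scoped bean using the working constructor injection):
package org.acme;

import io.quarkus.runtime.StartupEvent;
import io.vertx.core.DeploymentOptions;
import io.vertx.mutiny.core.Vertx;
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.event.Observes;
import javax.enterprise.inject.spi.CDI;

@ApplicationScoped
public class MainVerticles {

    public void init(@Observes StartupEvent e, Vertx vertx) {
        DeploymentOptions options = new DeploymentOptions().setInstances(2);
        // the supplier is invoked once per instance, yielding a new CDI bean each time
        vertx.deployVerticle(() -> CDI.current().select(RQVerticle.class).get(), options)
                .await().indefinitely();
    }
}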
I am trying to create a simple application that consumes a Kafka message, does some CQL transformation, and publishes to Kafka. Below is the code:
JAVA: 1.8
Flink: 1.13
Scala: 2.11
flink-siddhi: 2.11-0.2.2-SNAPSHOT
I am using this library: https://github.com/haoch/flink-siddhi
Input JSON to Kafka:
{
"awsS3":{
"ResourceType":"aws.S3",
"Details":{
"Name":"crossplane-test",
"CreationDate":"2020-08-17T11:28:05+00:00"
},
"AccessBlock":{
"PublicAccessBlockConfiguration":{
"BlockPublicAcls":true,
"IgnorePublicAcls":true,
"BlockPublicPolicy":true,
"RestrictPublicBuckets":true
}
},
"Location":{
"LocationConstraint":"us-west-2"
}
}
}
Main class:
public class S3SidhiApp {
public static void main(String[] args) {
internalStreamSiddhiApp.start();
//kafkaStreamApp.start();
}
}
App class:
package flinksidhi.app;
import com.google.gson.JsonObject;
import flinksidhi.event.s3.source.S3EventSource;
import io.siddhi.core.SiddhiManager;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.core.fs.FileSystem;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;
import org.apache.flink.streaming.siddhi.SiddhiCEP;
import org.json.JSONObject;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.Map;
import static flinksidhi.app.connector.Consumers.createInputMessageConsumer;
import static flinksidhi.app.connector.Producer.*;
public class internalStreamSiddhiApp {
private static final String inputTopic = "EVENT_STREAM_INPUT";
private static final String outputTopic = "EVENT_STREAM_OUTPUT";
private static final String consumerGroup = "EVENT_STREAM1";
private static final String kafkaAddress = "localhost:9092";
private static final String zkAddress = "localhost:2181";
private static final String S3_CQL1 = "from inputStream select * insert into temp";
private static final String S3_CQL = "from inputStream select json:toObject(awsS3) as obj insert into temp;" +
"from temp select json:getString(obj,'$.awsS3.ResourceType') as affected_resource_type," +
"json:getString(obj,'$.awsS3.Details.Name') as affected_resource_name," +
"json:getString(obj,'$.awsS3.Encryption.ServerSideEncryptionConfiguration') as encryption," +
"json:getString(obj,'$.awsS3.Encryption.ServerSideEncryptionConfiguration.Rules[0].ApplyServerSideEncryptionByDefault.SSEAlgorithm') as algorithm insert into temp2; " +
"from temp2 select affected_resource_name,affected_resource_type, " +
"ifThenElse(encryption == ' ','Fail','Pass') as state," +
"ifThenElse(encryption != ' ' and algorithm == 'aws:kms','None','Critical') as severity insert into outputStream";
public static void start(){
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
//DataStream<String> inputS = env.addSource(new S3EventSource());
//Flink kafka stream consumer
FlinkKafkaConsumer<String> flinkKafkaConsumer =
createInputMessageConsumer(inputTopic, kafkaAddress,zkAddress, consumerGroup);
//Add Data stream source -- flink consumer
DataStream<String> inputS = env.addSource(flinkKafkaConsumer);
SiddhiCEP cep = SiddhiCEP.getSiddhiEnvironment(env);
cep.registerExtension("json:toObject", io.siddhi.extension.execution.json.function.ToJSONObjectFunctionExtension.class);
cep.registerExtension( "json:getString", io.siddhi.extension.execution.json.function.GetStringJSONFunctionExtension.class);
cep.registerStream("inputStream", inputS, "awsS3");
inputS.print();
System.out.println(cep.getDataStreamSchemas());
//json needs extension jars to present during runtime.
DataStream<Map<String,Object>> output = cep
.from("inputStream")
.cql(S3_CQL1)
.returnAsMap("temp");
//Flink kafka stream Producer
FlinkKafkaProducer<Map<String, Object>> flinkKafkaProducer =
createMapProducer(env,outputTopic, kafkaAddress);
//Add Data stream sink -- flink producer
output.addSink(flinkKafkaProducer);
output.print();
try {
env.execute();
} catch (Exception e) {
e.printStackTrace();
}
}
}
Consumer class:
package flinksidhi.app.connector;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.json.JSONObject;
import java.util.Properties;
public class Consumers {
public static FlinkKafkaConsumer<String> createInputMessageConsumer(String topic, String kafkaAddress, String zookeeprAddr, String kafkaGroup ) {
Properties properties = new Properties();
properties.setProperty("bootstrap.servers", kafkaAddress);
properties.setProperty("zookeeper.connect", zookeeprAddr);
properties.setProperty("group.id",kafkaGroup);
FlinkKafkaConsumer<String> consumer = new FlinkKafkaConsumer<String>(
topic,new SimpleStringSchema(),properties);
return consumer;
}
}
Producer class:
package flinksidhi.app.connector;
import flinksidhi.app.util.ConvertJavaMapToJson;
import org.apache.flink.api.common.serialization.SerializationSchema;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;
import org.apache.flink.streaming.util.serialization.KeyedSerializationSchema;
import org.json.JSONObject;
import java.util.Map;
public class Producer {

    public static FlinkKafkaProducer<Tuple2> createStringProducer(StreamExecutionEnvironment env, String topic, String kafkaAddress) {
        return new FlinkKafkaProducer<Tuple2>(kafkaAddress, topic, new AverageSerializer());
    }

    public static FlinkKafkaProducer<Map<String, Object>> createMapProducer(StreamExecutionEnvironment env, String topic, String kafkaAddress) {
        return new FlinkKafkaProducer<Map<String, Object>>(kafkaAddress, topic, new SerializationSchema<Map<String, Object>>() {
            @Override
            public void open(InitializationContext context) throws Exception {
            }

            @Override
            public byte[] serialize(Map<String, Object> stringObjectMap) {
                String json = ConvertJavaMapToJson.convert(stringObjectMap);
                return json.getBytes();
            }
        });
    }
}
I have tried many things, but the code where the CQL is invoked is never called, and it doesn't even give any error; I'm not sure where it is going wrong.
If I do the same thing by creating an internal stream source and using the same input JSON returned as a string, it works.
Initial guess: if you are using event time, are you sure you have defined watermarks correctly? As stated in the docs:
(...) an incoming element is initially put in a buffer where elements are sorted in ascending order based on their timestamp, and when a watermark arrives, all the elements in this buffer with timestamps smaller than that of the watermark are processed (...)
If this doesn't help, I would suggest to decompose/simplify the job to a bare minimum, for example just a source operator and some naive sink printing/logging elements. And if that works, start adding back operators one by one. You could also start by simplifying your CEP pattern as much as possible.
First of all, thanks a lot @Piotr Nowojski. No matter how many times I had pondered over event time, your small pointer was what finally made it click. So yes, while debugging the two cases:
With the internal datasource, where it was processing successfully: while debugging the flow, I identified that it was processing a watermark after it processed the data, but I did not catch that it was somehow managing the event time of the data implicitly.
With Kafka as a datasource: while debugging, I could very clearly see that it was not processing any watermark in the flow, but it did not occur to me that this was happening because the event time and watermarks were not handled properly.
It took just a single line of code added to the application, which I understood from the Flink code snippet below:
/**
 * @deprecated In Flink 1.12 the default stream time characteristic has been changed to {@link
 *     TimeCharacteristic#EventTime}, thus you don't need to call this method for enabling
 *     event-time support anymore. Explicitly using processing-time windows and timers works in
 *     event-time mode. If you need to disable watermarks, please use {@link
 *     ExecutionConfig#setAutoWatermarkInterval(long)}. If you are using {@link
 *     TimeCharacteristic#IngestionTime}, please manually set an appropriate {@link
 *     WatermarkStrategy}. If you are using generic "time window" operations (for example {@link
 *     org.apache.flink.streaming.api.datastream.KeyedStream#timeWindow(org.apache.flink.streaming.api.windowing.time.Time)})
 *     that change behaviour based on the time characteristic, please use equivalent operations
 *     that explicitly specify processing time or event time.
 */
I got to know that by default Flink uses event time, and for that, watermarks need to be handled properly, which I hadn't done. So I added the line below to set the time characteristic of the Flink execution environment:
env.setStreamTimeCharacteristic(TimeCharacteristic.ProcessingTime);
and kaboom... it started working. While this is deprecated and needs some other configuration, it was a great pointer that helped me a lot, and I solved the issue.
Thanks again @Piotr Nowojski
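For reference, since setStreamTimeCharacteristic is deprecated in Flink 1.12+, here is a hedged sketch of the alternative the deprecation notice points at: attach an explicit WatermarkStrategy to the Kafka source instead (the five-second out-of-orderness bound is an arbitrary assumption):
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import java.time.Duration;

// ...inside start(), before env.addSource(flinkKafkaConsumer):
flinkKafkaConsumer.assignTimestampsAndWatermarks(
        WatermarkStrategy.<String>forBoundedOutOfOrderness(Duration.ofSeconds(5)));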
I have a simple Verticle that reads configuration from a properties file and loads it into the vertx config. I have written a unit test to test the deployment of this verticle, and a possible cause of test failure is the non-availability of the properties file at the location.
When I run the test, the unit test passes irrespective of whether I change the properties file name or path, and the handler says the verticle was deployed successfully.
Am I doing something wrong here? Below is my code:
import io.vertx.config.ConfigRetrieverOptions;
import io.vertx.config.ConfigStoreOptions;
import io.vertx.core.DeploymentOptions;
import io.vertx.core.json.JsonObject;
import io.vertx.rxjava.config.ConfigRetriever;
import io.vertx.rxjava.core.AbstractVerticle;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/**
 * This is the main launcher verticle; the following operations are executed in its start() method:
 * 1. Read the configuration from the application.properties file
 * 2. Deploy all other verticles in the application
 */
public class LauncherVerticle extends AbstractVerticle {

    // logger declaration added so the log.error calls below compile (any logging facade works)
    private static final Logger log = LoggerFactory.getLogger(LauncherVerticle.class);

    @Override
    public void start() throws Exception {
        // set up configuration from the properties file
        ConfigStoreOptions fileStore = new ConfigStoreOptions()
                .setType("file")
                .setFormat("properties")
                .setConfig(new JsonObject().put("path", System.getProperty("vertx.config.path")));

        // create config retriever options and add the file store
        ConfigRetrieverOptions options = new ConfigRetrieverOptions().addStore(fileStore);
        ConfigRetriever configRetriever = ConfigRetriever.create(vertx, options);
        DeploymentOptions deploymentOptions = new DeploymentOptions();

        // Deploy verticles after the config has been loaded.
        // The configuration is loaded into a JsonObject, which other verticles
        // can access through their config() method.
        configRetriever.rxGetConfig().subscribe(s -> {
            // pass the config object on to other verticles through deployment options
            deploymentOptions.setConfig(s);
            vertx.deployVerticle(AnotherVerticle.class.getName(), deploymentOptions);
        }, e -> {
            log.error("Failed to start application : " + e.getMessage(), e);
            try {
                stop();
            } catch (Exception e1) {
                log.error("Unable to stop vertx, terminate the process manually : " + e1.getMessage(), e1);
            }
        });
    }
}
This is my unit test
import io.vertx.ext.unit.TestContext;
import io.vertx.ext.unit.junit.VertxUnitRunner;
import io.vertx.rxjava.core.Vertx;
import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;
import rx.Single;

@RunWith(VertxUnitRunner.class)
public class LoadConfigurationTest {

    /**
     * Config should be loaded successfully
     *
     * @param context
     */
    @Test
    public void loadConfigTest(TestContext context) {
        /*
         * Set the system property "vertx.config.path" with value "application.properties".
         * This system property will be used in the Launcher verticle to read the config file.
         */
        System.setProperty("vertx.config.path", "/opt/vertx/config/application.properties");

        // create vertx instance
        Vertx vertx = Vertx.vertx();
        Single<String> single = vertx.rxDeployVerticle(LauncherVerticle.class.getName());
        single.subscribe(s -> {
            vertx.rxUndeploy(s);
        }, e -> {
            Assert.fail(e.getMessage());
        });
    }

    /**
     * Test for negative use case - file not available in the specified location
     *
     * @param context
     */
    @Test
    public void loadConfigFailTest(TestContext context) {
        // set path = non existing path
        System.setProperty("vertx.config.path", "/non/existing/path/application.properties");

        // create vertx instance
        Vertx vertx = Vertx.vertx();
        Single single = vertx.rxDeployVerticle(LauncherVerticle.class.getName());
        single.subscribe(s -> {
            // not executing this statement
            Assert.fail("Was expecting error but Verticle deployed successfully");
        }, e -> {
            // not executing this statement either
            System.out.println("pass");
        });
    }
}
Can you try the code below inside your LauncherVerticle? The changes only include using AbstractVerticle's start(Future) variant, which is a neat way of handling the config loading and everything around it during your startup.
public class LauncherVerticle extends AbstractVerticle {

    @Override
    public void start(Future<Void> startFuture) throws Exception {
        ConfigStoreOptions fileStore = new ConfigStoreOptions()
                .setType("file")
                .setFormat("properties")
                .setConfig(new JsonObject().put("path", System.getProperty("vertx.config.path")));

        ConfigRetrieverOptions options = new ConfigRetrieverOptions().addStore(fileStore);
        ConfigRetriever configRetriever = ConfigRetriever.create(vertx, options);
        DeploymentOptions deploymentOptions = new DeploymentOptions();

        configRetriever.rxGetConfig().subscribe(s -> {
                    deploymentOptions.setConfig(s);
                    vertx.deployVerticle(AnotherVerticle.class.getName(),
                            deploymentOptions,
                            result -> startFuture.complete()
                    );
                },
                startFuture::fail
        );
    }
}
The startFuture there helps you control the state of your verticle's loading.
Also remember that @Constantine's way of handling the test is the best way: use Async to prevent your tests from passing without actually asserting anything.
Seems like there is nothing wrong with your verticle. However, there is something in the tests: the asynchronous nature of verticle deployment is not taken into account. These test methods finish immediately instead of waiting for verticle deployment, and a JUnit test that does not result in an AssertionError is a passed test. You have to signal completion explicitly using Async.
Please see an example for your negative scenario below:
import io.vertx.ext.unit.Async;
import io.vertx.ext.unit.TestContext;
import io.vertx.ext.unit.junit.RunTestOnContext;
import io.vertx.ext.unit.junit.VertxUnitRunner;
import io.vertx.rxjava.core.Vertx;
import org.junit.Rule;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(VertxUnitRunner.class)
public class LoadConfigurationTest {

    @Rule
    public RunTestOnContext runTestOnContextRule = new RunTestOnContext();

    @Test
    public void testConfigLoading_shouldFail_whenConfigDoesNotExist(TestContext context) {
        // create an Async instance that controls the completion of the test
        Async async = context.async();

        // set non existing path
        System.setProperty("vertx.config.path", "/non/existing/path/application.properties");

        // take vertx instance and wrap it with rx-ified version
        Vertx vertx = Vertx.newInstance(runTestOnContextRule.vertx());
        vertx.rxDeployVerticle(LauncherVerticle.class.getName()).subscribe(s -> {
            context.fail("Was expecting error but Verticle deployed successfully"); // failure
        }, e -> {
            async.complete(); // success
        });
    }
}
Also please note that you can take a Vertx instance from RunTestOnContext rule (as in the snippet above).
How do I get a handle on a JavaFX application started using the following code?
CPUUsageChart.launch(CPUUsageChart.class);
CPUUsageChart extends Application from JavaFX and I am launching it from a main method of a simple Java project.
What I ultimately want to achieve is that I can start the app and use its methods from plain Java code, so that I do not have to do the calling in the constructor of the Application subclass. I only want to use JavaFX's abilities for drawing charts and saving them to disk for later usage; I do not need to see any GUI made in JavaFX.
Proposed Solution
You can only launch an application once, so there will only ever be a single instance of your application class.
Because there is only a single instance of the application, you can store a reference to the instance in a static variable of the application when the application is started and you can get the instance as required from a static method (a kind of singleton pattern).
Caveats
Care must be taken to ensure:
The instance is available before you try to use it.
That threading rules are appropriately observed.
That the JavaFX Platform is appropriately shutdown when it is no longer required.
Sample Solution
The sample code below uses a lock and a condition to ensure that the application instance is available before you try to use it. It will also require explicit shutdown of the JavaFX platform when it is no longer required.
Thanks to StackOverflow user James-D for some edit assistance with this code.
import javafx.application.Application;
import javafx.application.Platform;
import javafx.collections.ObservableList;
import javafx.embed.swing.SwingFXUtils;
import javafx.scene.Scene;
import javafx.scene.chart.LineChart;
import javafx.scene.chart.NumberAxis;
import javafx.scene.chart.XYChart;
import javafx.stage.Stage;
import javax.imageio.ImageIO;
import java.io.File;
import java.io.IOException;
import java.nio.file.Paths;
import java.time.LocalDateTime;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;
public class CPUUsageChart extends Application {
private static CPUUsageChart appInstance;
private static final Lock lock = new ReentrantLock();
private static final Condition appStarted = lock.newCondition();
/**
* Starts the application and records the instance.
* Sets the JavaFX platform not to exit implicitly.
* (e.g. an explicit call to Platform.exit() is required
* to exit the JavaFX Platform).
*/
@Override
public void start(Stage primaryStage) {
lock.lock();
try {
Platform.setImplicitExit(false);
appInstance = this;
appStarted.signalAll();
} finally {
lock.unlock();
}
}
/**
* Get an instance of the application.
* If the application has not already been launched it will be launched.
* This method will block the calling thread until the
* start method of the application has been invoked and the instance set.
* @return application instance (will not return null).
*/
public static CPUUsageChart getInstance() throws InterruptedException {
lock.lock();
try {
if (appInstance == null) {
Thread launchThread = new Thread(
() -> launch(CPUUsageChart.class),
"chart-launcher"
);
launchThread.setDaemon(true);
launchThread.start();
appStarted.await();
}
} finally {
lock.unlock();
}
return appInstance;
}
/**
* Public method which can be called to perform the main operation
* for this application.
* (render a chart and store the chart image to disk).
* This method can safely be called from any thread.
* Once this method is invoked, the data list should not be modified
* off of the JavaFX application thread.
*/
public void renderChart(
ObservableList<XYChart.Data<Number, Number>> data
) {
// ensure chart is rendered on the JavaFX application thread.
if (!Platform.isFxApplicationThread()) {
Platform.runLater(() -> this.renderChartImpl(data));
} else {
this.renderChartImpl(data);
}
}
/**
* Private method which can be called to perform the main operation
* for this application.
* (render a chart and store the chart image to disk).
* This method must be invoked on the JavaFX application thread.
*/
private void renderChartImpl(
ObservableList<XYChart.Data<Number, Number>> data
) {
LineChart<Number, Number> chart = new LineChart<>(
new NumberAxis(),
new NumberAxis(0, 100, 10)
);
chart.setAnimated(false);
chart.getData().add(
new XYChart.Series<>("CPU Usage", data)
);
Scene scene = new Scene(chart);
try {
LocalDateTime now = LocalDateTime.now();
File file = Paths.get(
System.getProperty("user.dir"),
"cpu-usage-chart-" + now + ".png"
).toFile();
ImageIO.write(
SwingFXUtils.fromFXImage(
chart.snapshot(null, null),
null
),
"png",
file
);
System.out.println("Chart saved as: " + file);
} catch (IOException e) {
e.printStackTrace();
}
}
}
To use this (from any thread):
try {
// get chartApp instance, blocking until it is available.
CPUUsageChart chartApp = CPUUsageChart.getInstance();
// call render chart as many times as you want
chartApp.renderChart(cpuUsageData);
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
} finally {
// note your program should only ever exit the platform once.
Platform.exit();
}
Complete sample application which creates five graphs of cpu usage data with ten samples in each chart, each sample spaced by 100 milliseconds. As the sample invokes the chart application to render the charts, it will create chart png image files in the current java working directory and the file names will be output to the system console. No JavaFX stage or window is displayed.
Code to sample CPU usage copied from: How to get percentage of CPU usage of OS from java
import javafx.application.Platform;
import javafx.collections.FXCollections;
import javafx.collections.ObservableList;
import javafx.scene.chart.XYChart;
import javax.management.*;
import java.lang.management.ManagementFactory;
public class ChartTest {
public static void main(String[] args) {
try {
CPUUsageChart chart = CPUUsageChart.getInstance();
for (int i = 0; i < 5; i++) {
ObservableList<XYChart.Data<Number, Number>> cpuUsageData = FXCollections.observableArrayList();
for (int j = 0; j < 10; j++) {
cpuUsageData.add(
new XYChart.Data<>(
j / 10.0,
getSystemCpuLoad()
)
);
Thread.sleep(100);
}
chart.renderChart(cpuUsageData);
}
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
} catch (MalformedObjectNameException | ReflectionException | InstanceNotFoundException e) {
e.printStackTrace();
} finally {
Platform.exit();
}
}
public static double getSystemCpuLoad() throws MalformedObjectNameException, ReflectionException, InstanceNotFoundException {
MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
ObjectName name = ObjectName.getInstance("java.lang:type=OperatingSystem");
AttributeList list = mbs.getAttributes(name, new String[]{ "SystemCpuLoad" });
if (list.isEmpty()) return Double.NaN;
Attribute att = (Attribute)list.get(0);
Double value = (Double)att.getValue();
if (value == -1.0) return Double.NaN; // usually takes a couple of seconds before we get real values
return ((int)(value * 1000) / 10.0); // returns a percentage value with 1 decimal point precision
}
}
Sample output (percentage CPU usage on the Y axis, and time in tenth of second sample spacing on the X axis).
Background Information
Application javadoc to further understand the JavaFX application lifecycle.
Related question: How do I start again an external JavaFX program? Launch prevents this, even if the JavaFX program ended with Platform.Exit
Alternate Implementations
You could use a JFXPanel rather than a class which extends Application (see the sketch after this list). Though then your application would also have a dependency on Swing.
You could make the main class of your application extend Application, so the application is automatically launched when your application is started rather than having a separate Application just for your usage chart.
If you have lots and lots of charts to render, you could look at this off-screen chart renderer implementation.
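For the first alternative, a minimal sketch (assuming the Swing dependency is acceptable): constructing a JFXPanel initializes the JavaFX toolkit without launching an Application, after which work can be posted to the JavaFX application thread:
import javafx.application.Platform;
import javafx.embed.swing.JFXPanel;

public class ToolkitBootstrap {
    public static void main(String[] args) {
        new JFXPanel(); // side effect: starts the JavaFX runtime
        Platform.runLater(() -> {
            // build charts, snapshot them, and write the images to disk here
            Platform.exit(); // shut the platform down when finished
        });
    }
}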
For the first time, I stored the jobs and scheduled them using a CronTrigger with the code below.
package com.generalsentiment.test.quartz;
import static org.quartz.CronScheduleBuilder.cronSchedule;
import static org.quartz.JobBuilder.newJob;
import static org.quartz.TriggerBuilder.newTrigger;
import java.util.Date;
import java.util.Properties;
import org.quartz.CronTrigger;
import org.quartz.JobDetail;
import org.quartz.Scheduler;
import org.quartz.SchedulerFactory;
import org.quartz.SchedulerMetaData;
import org.quartz.impl.StdSchedulerFactory;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class CronTriggerExample {
public void run() throws Exception {
Logger log = LoggerFactory.getLogger(CronTriggerExample.class);
System.out.println("------- Initializing -------------------");
Xml config = new Xml("src/hibernate.cfg.xml", "hibernate-configuration");
Properties prop = new Properties();
prop.setProperty("org.quartz.scheduler.instanceName", "ALARM_SCHEDULER");
prop.setProperty("org.quartz.threadPool.class",
"org.quartz.simpl.SimpleThreadPool");
prop.setProperty("org.quartz.threadPool.threadCount", "4");
prop.setProperty("org.quartz.threadPool
.threadsInheritContextClassLoaderOfInitializingThread", "true");
prop.setProperty("org.quartz.jobStore.class",
"org.quartz.impl.jdbcjobstore.JobStoreTX");
prop.setProperty("org.quartz.jobStore.driverDelegateClass",
"org.quartz.impl.jdbcjobstore.StdJDBCDelegate");
prop.setProperty("org.quartz.jobStore.dataSource", "tasksDataStore");
prop.setProperty("org.quartz.jobStore.tablePrefix", "QRTZ_");
prop.setProperty("org.quartz.jobStore.misfireThreshold", "60000");
prop.setProperty("org.quartz.jobStore.isClustered", "false");
prop.setProperty("org.quartz.dataSource.tasksDataStore.driver",
config.child("session-factory").children("property").get(1).content());
prop.setProperty("org.quartz.dataSource.tasksDataStore.URL", config.child("session-
factory").children("property").get(2).content());
prop.setProperty("org.quartz.dataSource.tasksDataStore.user", config.child("session-
factory").children("property").get(3).content());
prop.setProperty("org.quartz.dataSource.tasksDataStore.password",
config.child("session-factory").children("property").get(4).content());
prop.setProperty("org.quartz.dataSource.tasksDataStore.maxConnections", "20");
// First we must get a reference to a scheduler
SchedulerFactory sf = new StdSchedulerFactory(prop);
Scheduler sched = sf.getScheduler();
System.out.println("------- Initialization Complete --------");
System.out.println("------- Scheduling Jobs ----------------");
// jobs can be scheduled before sched.start() has been called
// job 1 will run daily at 15:15
JobDetail job = newJob(SimpleJob.class).withIdentity("job2", "group2").build();
CronTrigger trigger = newTrigger().withIdentity("trigger2", "group2")
        .withSchedule(cronSchedule("00 15 15 * * ?")).build();
Date ft = sched.scheduleJob(job, trigger);
System.out.println(sched.getSchedulerName());
System.out.println(job.getKey() + " has been scheduled to run at: " + ft
+ " and repeat based on expression: "
+ trigger.getCronExpression());
System.out.println("------- Starting Scheduler ----------------");
/*
* All of the jobs have been added to the scheduler, but none of the
* jobs will run until the scheduler has been started. If you have
* multiple jobs performing multiple tasks, then its recommended to
* write it in separate classes, like SimpleJob.class writes
* organization members to file.
*/
sched.start();
System.out.println("------- Started Scheduler -----------------");
System.out.println("------- Waiting five minutes... ------------");
try {
// wait five minutes to show jobs
Thread.sleep(300L * 1000L);
// executing...
} catch (Exception e) {
}
System.out.println("------- Shutting Down ---------------------");
sched.shutdown(true);
System.out.println("------- Shutdown Complete -----------------");
SchedulerMetaData metaData = sched.getMetaData();
System.out.println("Executed " + metaData.getNumberOfJobsExecuted() + " jobs.");
}
public static void main(String[] args) throws Exception {
CronTriggerExample example = new CronTriggerExample();
example.run();
}
}
And the details are stored in the tables QRTZ_CRON_TRIGGERS, QRTZ_JOB_DETAILS and QRTZ_TRIGGERS.
My doubt is: how do I schedule the jobs that are stored in the DB? How do I display the list of jobs in a JSP page, and how do I trigger them automatically?
Ours is a Struts2 application with the Hibernate3 ORM. I am trying to initialize the Quartz scheduler when the application loads, but am unable to.
Date ft = sched.scheduleJob(job, trigger);
When this is called, your job will be scheduled for its next fire time. The scheduled job will be stored in the appropriate DB tables.
To display the list of jobs on a JSP, you should persist your job key, as well as a custom description of what your job entails, to another DB table, so that during retrieval you can fetch this custom description along with the data Quartz persists into its own tables.
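As a starting point, here is a hedged sketch (Quartz 2.x APIs; the group name "group2" comes from your example, the rest is an assumption) of reading the scheduled jobs back from the store so a JSP/action layer could render them:
import org.quartz.JobKey;
import org.quartz.Scheduler;
import org.quartz.Trigger;
import org.quartz.impl.matchers.GroupMatcher;
import java.util.List;

public class JobLister {

    public static void printJobs(Scheduler sched) throws Exception {
        for (JobKey jobKey : sched.getJobKeys(GroupMatcher.jobGroupEquals("group2"))) {
            List<? extends Trigger> triggers = sched.getTriggersOfJob(jobKey);
            for (Trigger trigger : triggers) {
                System.out.println(jobKey + " next fires at " + trigger.getNextFireTime());
            }
        }
    }
}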
Triggering these jobs automatically is something Quartz handles for you. Once the cron expression is set to what is desired and your Job class implements org.quartz.Job, Quartz will run the execute() method at your desired next fire time.
JobDetail job = newJob(SimpleJob.class).withIdentity("job2", "group2").build();
This means you will have a class named SimpleJob that implements org.quartz.Job. In that class, the execute() method needs to be implemented. The job is triggered automatically at the time you specified with the cron expression, and the execute() method is called when the job is triggered.
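A minimal sketch of what SimpleJob could look like; only the class name and the org.quartz.Job interface come from the question, the body is an assumption:
import org.quartz.Job;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;

public class SimpleJob implements Job {

    @Override
    public void execute(JobExecutionContext context) throws JobExecutionException {
        // Quartz calls this at each fire time computed from the cron expression
        System.out.println("SimpleJob ran for " + context.getJobDetail().getKey());
    }
}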