I am trying to attach a subscriber to an event in Esper, but I would like to do it from an .epl file. I've been browsing repositories and have seen examples that use annotation interfaces for this. I tried to do it the same way CoinTrader does, but I can't get it to work. Yet, if I set the subscriber in Java code, it works.
This is my project structure for reference
This is my .epl file:
module queries;
import events.*;
import configDemo.*;
import annotations.*;
create schema MyTickEvent as TickEvent;
@Name('allEvents')
@Description('test')
@Subscriber(className='configDemo.TickSubscriber')
select * from TickEvent;
@Name('tickEvent')
@Description('Get a tick event every 3 seconds')
select currentPrice from TickEvent;
This is my config file:
<?xml version="1.0" encoding="UTF-8"?>
<esper-configuration xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="http://www.espertech.com/schema/esper"
xsi:noNamespaceSchemaLocation="esper-configuration-6-0.xsd">
<event-type-auto-name package-name="events"/>
<auto-import import-name="annotations.*"/>
<auto-import import-name="events.*"/>
<auto-import import-name="configDemo.*"/>
This is my Subscriber interface:
package annotations;
public @interface Subscriber {
String className();
}
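One thing worth double-checking here (this is an assumption on my part, based on how custom annotation classes are generally consumed at runtime, not something stated in the post): annotation types referenced from EPL are typically declared with runtime retention so the deployed statements' annotation instances stay inspectable from application code. A sketch of that variant:
package annotations;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
// Hedged: RUNTIME retention is assumed here so the annotation remains
// visible to application code that inspects deployed statements.
@Retention(RetentionPolicy.RUNTIME)
public @interface Subscriber {
    String className();
}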
This is my event class:
package configDemo;
import events.TickEvent;
public class TickSubscriber {
public void update(TickEvent tick) {
System.out.println("Event registered by subscriber - Tick is: " +
tick.getCurrentPrice());
}
}
And my main file is this:
package configDemo;
import java.io.IOException;
import java.util.concurrent.CountDownLatch;
import com.espertech.esper.client.EPStatement;
import com.espertech.esper.client.deploy.DeploymentException;
import com.espertech.esper.client.deploy.DeploymentOptions;
import com.espertech.esper.client.deploy.Module;
import com.espertech.esper.client.deploy.ParseException;
public class Main {
public static EngineHelper engineHelper;
public static Thread engineThread;
public static boolean continuousSimulation = true;
public static void main(String[] args) throws DeploymentException, InterruptedException, IOException, ParseException {
engineHelper = new EngineHelper();
DeploymentOptions options = new DeploymentOptions();
options.setIsolatedServiceProvider("validation"); // we isolate any statements
options.setValidateOnly(true); // validate leaving no started statements
options.setFailFast(false); // do not fail on first error
Module queries = engineHelper.getDeployAdmin().read("queries.epl");
engineHelper.getDeployAdmin().deploy(queries, null);
CountDownLatch latch = new CountDownLatch(1);
EPStatement epl = engineHelper.getAdmin().getStatement("allEvents");
//epl.setSubscriber(new TickSubscriber());
engineThread = new Thread(new EngineThread(latch, continuousSimulation, engineHelper.getRuntime()));
engineThread.start();
}
}
As you can see, the setSubscriber line is commented out. When I run it as is, I expected the subscriber to be recognized and registered, yet it isn't; I only see the tick events flowing in the console. If I uncomment the line and run it again, I get a notification after each tick that the subscriber received the event, and everything works fine.
What am I doing wrong? How can I set a subscriber within the .epl file?
Assigning a subscriber is done by the application and is not something the engine does for you. Your application code would need to loop through the statements, get the annotations via stmt.getAnnotations(), inspect them, and assign the subscriber accordingly.
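As a minimal sketch of that loop (assuming the custom annotations.Subscriber annotation shown above, the Esper 6.x client API already used in the question, and a no-argument constructor on the subscriber class; the class and method names here are placeholders, not taken from the original code):
import java.lang.annotation.Annotation;
import com.espertech.esper.client.EPAdministrator;
import com.espertech.esper.client.EPStatement;
import annotations.Subscriber;

public final class SubscriberBinder {
    // Walk all known statements and attach any subscriber declared via @Subscriber in the EPL.
    public static void bindSubscribers(EPAdministrator epAdmin) throws Exception {
        for (String name : epAdmin.getStatementNames()) {
            EPStatement stmt = epAdmin.getStatement(name);
            for (Annotation annotation : stmt.getAnnotations()) {
                if (annotation instanceof Subscriber) {
                    String className = ((Subscriber) annotation).className();
                    Object subscriber = Class.forName(className).newInstance();
                    stmt.setSubscriber(subscriber);
                }
            }
        }
    }
}
In the question's Main this would be called after the module is deployed, passing engineHelper.getAdmin().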
I'm trying to migrate from Vert.x to Quarkus. In Vert.x, when I write message consumers for Kafka/AMQP etc., I have to scale the number of verticle instances to maximize performance across multiple cores. Is this possible in Quarkus? I saw a similar question here, but it wasn't answered.
For example, with Kafka I might create a consumer inside a verticle and then scale that verticle, say, 10 times (that is, specify the number of instances in the deployment to be 10) after doing performance testing to determine the optimal number. My understanding is that by default, 1 verticle = 1 event loop, which does not scale across multiple cores.
I know that it's possible to use Vert.x verticles in Quarkus, but is there another way to scale things like the number of Kafka consumers across multiple cores?
I see that this type of scalability is configurable for things like Quarkus HTTP, but I can't find anything about message consumers.
Here's the Vert.x Verticle approach that overall I'm very happy with, but I wish there were better documentation on how to do this.
UPDATE - Field injection doesn't work with this example but constructor injection does work.
Let's say I want to inject this:
@ApplicationScoped
public class CoffeeRepositoryService {
public CoffeeRepositoryService() {
System.out.println("Injection succeeded!");
}
}
Here's my Verticle
package org.acme;
import io.smallrye.mutiny.Uni;
import io.smallrye.mutiny.vertx.core.AbstractVerticle;
import io.vertx.core.impl.logging.Logger;
import io.vertx.core.impl.logging.LoggerFactory;
import io.vertx.mutiny.core.eventbus.EventBus;
import io.vertx.mutiny.rabbitmq.RabbitMQClient;
import io.vertx.mutiny.rabbitmq.RabbitMQConsumer;
import io.vertx.rabbitmq.QueueOptions;
import io.vertx.rabbitmq.RabbitMQOptions;
import javax.inject.Inject;
public class RQVerticle extends AbstractVerticle {
private final Logger LOGGER = LoggerFactory.getLogger(org.acme.RQVerticle.class);
//This doesn't work - returns null
@Inject
CoffeeRepositoryService coffeeRepositoryService;
RQVerticle() {} // dummy constructor needed
@Inject // constructor injection - this does work
RQVerticle(CoffeeRepositoryService coffeeRepositoryService) {
// here the parameter is injected properly; keep a reference for the rest of the verticle
this.coffeeRepositoryService = coffeeRepositoryService;
}
@Override
public Uni<Void> asyncStart() {
LOGGER.info(
"Creating RabbitMQ Connection after Quarkus successful initialization");
RabbitMQOptions config = new RabbitMQOptions();
config.setUri("amqp://localhost:5672");
RabbitMQClient client = RabbitMQClient.create(vertx, config);
Uni<Void> clientResp = client.start();
clientResp.subscribe()
.with(asyncResult -> {
LOGGER.info("RabbitMQ successfully connected!");
});
return clientResp;
}
}
Main Class - injection doesn't work like this
package org.acme;
import io.quarkus.runtime.Quarkus;
import io.quarkus.runtime.QuarkusApplication;
import io.quarkus.runtime.annotations.QuarkusMain;
import io.vertx.core.DeploymentOptions;
import io.vertx.mutiny.core.Vertx;
@QuarkusMain
public class Main {
public static void main(String... args) {
Quarkus.run(MyApp.class, args);
}
public static class MyApp implements QuarkusApplication {
@Override
public int run(String... args) throws Exception {
var vertx = Vertx.vertx();
System.out.println("Deployment Starting");
DeploymentOptions options = new DeploymentOptions()
.setInstances(2);
vertx.deployVerticleAndAwait(RQVerticle::new, options);
System.out.println("Deployment completed");
Quarkus.waitForExit();
return 0;
}
}
}
Main Class with working injection but cannot deploy more than one instance
package org.acme;
import io.quarkus.runtime.StartupEvent;
import io.vertx.core.DeploymentOptions;
import io.vertx.mutiny.core.Vertx;
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.event.Observes;
import org.jboss.logging.Logger;
@ApplicationScoped
public class MainVerticles {
private static final Logger LOGGER = Logger.getLogger(MainVerticles.class);
public void init(@Observes StartupEvent e, Vertx vertx, RQVerticle verticle) {
DeploymentOptions options = new DeploymentOptions()
.setInstances(2);
vertx.deployVerticle(verticle,options).await().indefinitely();
}
}
Std Out - first main class looks good
2021-09-15 15:48:12,052 INFO [org.acm.RQVerticle] (vert.x-eventloop-thread-2) Creating RabbitMQ Connection after Quarkus successful initialization
2021-09-15 15:48:12,053 INFO [org.acm.RQVerticle] (vert.x-eventloop-thread-3) Creating RabbitMQ Connection after Quarkus successful initialization
Std Out - second main class
2021-09-22 15:48:11,986 ERROR [io.qua.run.Application] (Quarkus Main Thread) Failed to start application (with profile dev):
java.lang.IllegalArgumentException: Can't specify > 1 instances for already created verticle
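For what it's worth, the error itself says why the second main class fails: Vert.x refuses setInstances greater than 1 when it is handed an already-created Verticle instance, because a single instance cannot run on several event loops. A hedged sketch of a workaround is to hand Vert.x a supplier instead, so a fresh CDI-resolved instance is created per deployment; the use of Instance<RQVerticle> (and the assumption that RQVerticle carries a bean-defining annotation such as @Dependent) is mine, while the deployVerticleAndAwait(supplier, options) call mirrors the one already used in the first main class:
package org.acme;
import io.quarkus.runtime.StartupEvent;
import io.vertx.core.DeploymentOptions;
import io.vertx.mutiny.core.Vertx;
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.event.Observes;
import javax.enterprise.inject.Instance;
import javax.inject.Inject;

@ApplicationScoped
public class MainVerticles {

    // Instance<RQVerticle> lets CDI build a new, injected verticle for each deployment.
    @Inject
    Instance<RQVerticle> verticles;

    public void init(@Observes StartupEvent e, Vertx vertx) {
        DeploymentOptions options = new DeploymentOptions().setInstances(2);
        // Passing a supplier (rather than a pre-built instance) avoids
        // "Can't specify > 1 instances for already created verticle".
        vertx.deployVerticleAndAwait(() -> verticles.get(), options);
    }
}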
I am trying to create a simple application that consumes a Kafka message, does some Siddhi CQL transformation, and publishes the result back to Kafka. Below is the code:
JAVA: 1.8
Flink: 1.13
Scala: 2.11
flink-siddhi: 2.11-0.2.2-SNAPSHOT
I am using library: https://github.com/haoch/flink-siddhi
input json to Kafka:
{
"awsS3":{
"ResourceType":"aws.S3",
"Details":{
"Name":"crossplane-test",
"CreationDate":"2020-08-17T11:28:05+00:00"
},
"AccessBlock":{
"PublicAccessBlockConfiguration":{
"BlockPublicAcls":true,
"IgnorePublicAcls":true,
"BlockPublicPolicy":true,
"RestrictPublicBuckets":true
}
},
"Location":{
"LocationConstraint":"us-west-2"
}
}
}
main class:
public class S3SidhiApp {
public static void main(String[] args) {
internalStreamSiddhiApp.start();
//kafkaStreamApp.start();
}
}
App class:
package flinksidhi.app;
import com.google.gson.JsonObject;
import flinksidhi.event.s3.source.S3EventSource;
import io.siddhi.core.SiddhiManager;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.core.fs.FileSystem;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;
import org.apache.flink.streaming.siddhi.SiddhiCEP;
import org.json.JSONObject;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.Map;
import static flinksidhi.app.connector.Consumers.createInputMessageConsumer;
import static flinksidhi.app.connector.Producer.*;
public class internalStreamSiddhiApp {
private static final String inputTopic = "EVENT_STREAM_INPUT";
private static final String outputTopic = "EVENT_STREAM_OUTPUT";
private static final String consumerGroup = "EVENT_STREAM1";
private static final String kafkaAddress = "localhost:9092";
private static final String zkAddress = "localhost:2181";
private static final String S3_CQL1 = "from inputStream select * insert into temp";
private static final String S3_CQL = "from inputStream select json:toObject(awsS3) as obj insert into temp;" +
"from temp select json:getString(obj,'$.awsS3.ResourceType') as affected_resource_type," +
"json:getString(obj,'$.awsS3.Details.Name') as affected_resource_name," +
"json:getString(obj,'$.awsS3.Encryption.ServerSideEncryptionConfiguration') as encryption," +
"json:getString(obj,'$.awsS3.Encryption.ServerSideEncryptionConfiguration.Rules[0].ApplyServerSideEncryptionByDefault.SSEAlgorithm') as algorithm insert into temp2; " +
"from temp2 select affected_resource_name,affected_resource_type, " +
"ifThenElse(encryption == ' ','Fail','Pass') as state," +
"ifThenElse(encryption != ' ' and algorithm == 'aws:kms','None','Critical') as severity insert into outputStream";
public static void start(){
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
//DataStream<String> inputS = env.addSource(new S3EventSource());
//Flink kafka stream consumer
FlinkKafkaConsumer<String> flinkKafkaConsumer =
createInputMessageConsumer(inputTopic, kafkaAddress,zkAddress, consumerGroup);
//Add Data stream source -- flink consumer
DataStream<String> inputS = env.addSource(flinkKafkaConsumer);
SiddhiCEP cep = SiddhiCEP.getSiddhiEnvironment(env);
cep.registerExtension("json:toObject", io.siddhi.extension.execution.json.function.ToJSONObjectFunctionExtension.class);
cep.registerExtension( "json:getString", io.siddhi.extension.execution.json.function.GetStringJSONFunctionExtension.class);
cep.registerStream("inputStream", inputS, "awsS3");
inputS.print();
System.out.println(cep.getDataStreamSchemas());
//json needs extension jars to present during runtime.
DataStream<Map<String,Object>> output = cep
.from("inputStream")
.cql(S3_CQL1)
.returnAsMap("temp");
//Flink kafka stream Producer
FlinkKafkaProducer<Map<String, Object>> flinkKafkaProducer =
createMapProducer(env,outputTopic, kafkaAddress);
//Add Data stream sink -- flink producer
output.addSink(flinkKafkaProducer);
output.print();
try {
env.execute();
} catch (Exception e) {
e.printStackTrace();
}
}
}
Consumer class:
package flinksidhi.app.connector;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.json.JSONObject;
import java.util.Properties;
public class Consumers {
public static FlinkKafkaConsumer<String> createInputMessageConsumer(String topic, String kafkaAddress, String zookeeprAddr, String kafkaGroup ) {
Properties properties = new Properties();
properties.setProperty("bootstrap.servers", kafkaAddress);
properties.setProperty("zookeeper.connect", zookeeprAddr);
properties.setProperty("group.id",kafkaGroup);
FlinkKafkaConsumer<String> consumer = new FlinkKafkaConsumer<String>(
topic,new SimpleStringSchema(),properties);
return consumer;
}
}
Producer class:
package flinksidhi.app.connector;
import flinksidhi.app.util.ConvertJavaMapToJson;
import org.apache.flink.api.common.serialization.SerializationSchema;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;
import org.apache.flink.streaming.util.serialization.KeyedSerializationSchema;
import org.json.JSONObject;
import java.util.Map;
public class Producer {
public static FlinkKafkaProducer<Tuple2> createStringProducer(StreamExecutionEnvironment env, String topic, String kafkaAddress) {
return new FlinkKafkaProducer<Tuple2>(kafkaAddress, topic, new AverageSerializer());
}
public static FlinkKafkaProducer<Map<String,Object>> createMapProducer(StreamExecutionEnvironment env, String topic, String kafkaAddress) {
return new FlinkKafkaProducer<Map<String,Object>>(kafkaAddress, topic, new SerializationSchema<Map<String, Object>>() {
@Override
public void open(InitializationContext context) throws Exception {
}
@Override
public byte[] serialize(Map<String, Object> stringObjectMap) {
String json = ConvertJavaMapToJson.convert(stringObjectMap);
return json.getBytes();
}
});
}
}
I have tried many things, but the code where the CQL is invoked is never called, and it doesn't even give any error, so I'm not sure where it is going wrong.
If I do the same thing with an internal stream source and the same input JSON, returning the result as a string, it works.
Initial guess: if you are using event time, are you sure you have defined watermarks correctly? As stated in the docs:
(...) an incoming element is initially put in a buffer where elements are sorted in ascending order based on their timestamp, and when a watermark arrives, all the elements in this buffer with timestamps smaller than that of the watermark are processed (...)
If this doesn't help, I would suggest decomposing/simplifying the job to a bare minimum, for example just a source operator and a naive sink that prints/logs elements. If that works, start adding operators back one by one. You could also start by simplifying your CEP pattern as much as possible.
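For example, a stripped-down job along the lines of the sketch below would confirm whether records are arriving at all before any Siddhi operator is added back. The topic, group and broker address are taken from the question; the class and job names are placeholders:
import java.util.Properties;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class BareMinimumJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("group.id", "EVENT_STREAM1");

        // Just read from the input topic and print every record, nothing else.
        FlinkKafkaConsumer<String> source =
                new FlinkKafkaConsumer<>("EVENT_STREAM_INPUT", new SimpleStringSchema(), props);

        env.addSource(source).print();
        env.execute("bare-minimum-check");
    }
}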
First of all, thanks a lot @Piotr Nowojski. No matter how many times I had pondered event time, your small pointer is what finally made it click. So yes, while debugging the two cases:
With the internal data source, which was processing successfully, I identified while debugging the flow that it was processing a watermark after processing the data, but I did not catch that it was somehow managing the event time of the data implicitly.
With Kafka as the data source, I could see very clearly while debugging that no watermark was being processed in the flow, but it did not occur to me that this was because the event time and watermarks were not handled properly.
Just adding a single line of code in the application fixed it, which I understood from the Flink Javadoc snippet below:
@deprecated In Flink 1.12 the default stream time characteristic has been changed to {@link
* TimeCharacteristic#EventTime}, thus you don't need to call this method for enabling
* event-time support anymore. Explicitly using processing-time windows and timers works in
* event-time mode. If you need to disable watermarks, please use {@link
* ExecutionConfig#setAutoWatermarkInterval(long)}. If you are using {@link
* TimeCharacteristic#IngestionTime}, please manually set an appropriate {@link
* WatermarkStrategy}. If you are using generic "time window" operations (for example {@link
* org.apache.flink.streaming.api.datastream.KeyedStream#timeWindow(org.apache.flink.streaming.api.windowing.time.Time)}
* that change behaviour based on the time characteristic, please use equivalent operations
* that explicitly specify processing time or event time.
*/
I learned that by default Flink uses event time, which requires watermarks to be handled properly, which I had not done. So I added the line below to set the time characteristic of the Flink execution environment:
env.setStreamTimeCharacteristic(TimeCharacteristic.ProcessingTime);
And kaboom... it started working. This method is deprecated and the proper fix needs some other configuration, but thanks a lot, it was a great pointer that helped me solve the issue.
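For reference, since setStreamTimeCharacteristic is deprecated in Flink 1.13, an alternative sketch (my own assumption, not part of the original answer) is to stay in the default event-time mode and give the Kafka consumer an explicit watermark strategy right after it is created in start(). The 5-second out-of-orderness bound is arbitrary, and idle partitions can still hold watermarks back:
import java.time.Duration;
import org.apache.flink.api.common.eventtime.WatermarkStrategy;

// ... in start(), after createInputMessageConsumer(...):
// Emit watermarks from the Kafka record timestamps so downstream event-time
// operators (such as the Siddhi buffer) can make progress.
flinkKafkaConsumer.assignTimestampsAndWatermarks(
        WatermarkStrategy.forBoundedOutOfOrderness(Duration.ofSeconds(5)));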
Thanks again @Piotr Nowojski.
Requirement: I need to get all the messages from all the sessions from the queue. registerSessionHandler should consume messages as soon as they appear in the queue.
Issue: the code accepts the messages pertaining to one of the session IDs (xyz) and then just waits. Even when more messages are pushed with that session ID (xyz), it fails to consume them.
Any suggestions on what obvious thing I am missing here?
I use registerSessionHandler to receive session messages continuously from the queue. Whenever I start a new session, I get messages for only one session ID.
Test queue: the queue holds 20 messages spread across 4 different session IDs. Whenever I run the Java application (QueueSessionReceiveTest.java), I get only the 5 messages associated with a single session ID.
MAVEN - azure-servicebus - 1.1.1
RECEIVE CODE:
import java.time.Duration;
import com.microsoft.azure.servicebus.IMessage;
import com.microsoft.azure.servicebus.IQueueClient;
import com.microsoft.azure.servicebus.QueueClient;
import com.microsoft.azure.servicebus.ReceiveMode;
import com.microsoft.azure.servicebus.SessionHandlerOptions;
import com.microsoft.azure.servicebus.primitives.ConnectionStringBuilder;
public class QueueSessionReceiveTest {
private static final String connectionString = "Endpoint=sb://XXXXXX";
private static final String queueName = "test";
private static IQueueClient queueClient;
public static void main(String[] args) throws Exception {
queueClient = new QueueClient(new ConnectionStringBuilder(connectionString, queueName), ReceiveMode.PEEKLOCK);
queueClient.registerSessionHandler(new QueueMessageSessionHandler(), new SessionHandlerOptions(1, false, Duration.ofMinutes(1)));
}}
SESSION HANDLER CODE:
import java.util.concurrent.CompletableFuture;
import com.microsoft.azure.servicebus.ExceptionPhase;
import com.microsoft.azure.servicebus.IMessage;
import com.microsoft.azure.servicebus.IMessageSession;
import com.microsoft.azure.servicebus.IQueueClient;
import com.microsoft.azure.servicebus.ISessionHandler;
public class QueueMessageSessionHandler implements ISessionHandler {
@Override
public CompletableFuture<Void> onMessageAsync(IMessageSession session, IMessage iMessage) {
return session.completeAsync(iMessage.getLockToken()).thenRunAsync(() -> Logger.debug("some log") );
}
@Override
public CompletableFuture<Void> OnCloseSessionAsync(IMessageSession session) {
return null;
}
@Override
public void notifyException(Throwable exception, ExceptionPhase exceptionPhase) {
// Do nothing
}}
Please refer to the link below for the solution.
https://github.com/Azure/azure-service-bus-java/issues/187
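Without restating the whole linked issue, one thing visible in the code above is that SessionHandlerOptions is constructed with a maximum of 1 concurrent session, so the pump will only work on one session at a time. As a hedged sketch (the value 4 simply matches the number of session IDs in the test queue described above; the other arguments mirror the original call, and the exact behaviour should be checked against the SDK version in use), raising that limit lets the client accept several sessions in parallel:
// Allow the session pump to work on multiple sessions concurrently.
queueClient.registerSessionHandler(
        new QueueMessageSessionHandler(),
        new SessionHandlerOptions(4, false, Duration.ofMinutes(1)));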
I have been trying to invent a way to work around the problem of adding consumers to a Jetty endpoint (it does not allow multiple consumers). The way we do it in our company is to build our own router and a broadcasting endpoint which consumes from Jetty and routes requests to underlying "subscriptions"; only one of them eventually processes the request. It kind of works, but it's not completely OK: recently, when updating to the latest Camel, we found that our custom-built component leaks memory, and in general I prefer built-in functionality over custom hacks.
I started investigating the Camel REST API and found it very nice; it pretty much replaces our home-grown component apart from one thing: you cannot reconfigure it at runtime, you basically have to stop the context for this to work. Below I include my unit test with a happy path and the path that fails. Frankly, I think this is a bug, but if there is a legitimate way to achieve what I want, I'd like to hear sound advice:
package com.anydoby.camel;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.fail;
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;
import org.apache.commons.io.IOUtils;
import org.junit.Before;
import org.junit.Test;
/**
* Test tries to add/remove routes at runtime.
*/
public class RoutesTest {
private DefaultCamelContext ctx;
@Before
public void pre() throws Exception {
ctx = new DefaultCamelContext();
new RouteBuilder(ctx) {
@Override
public void configure() throws Exception {
restConfiguration("jetty").host("localhost").port(8080);
rest("/")
.get("/issues/{isin}").route().id("issues")
.process(e -> e.getOut().setBody("Here's your issue " + e.getIn().getHeader("isin"))).endRest()
.get("/listings").route().id("listings").process(e -> e.getOut().setBody("some listings"));
}
}.addRoutesToCamelContext(ctx);
ctx.start();
}
@Test
public void test() throws IOException {
{
InputStream stream = new URL("http://localhost:8080/issues/35").openStream();
assertEquals("Here's your issue 35", IOUtils.toString(stream));
}
{
InputStream stream = new URL("http://localhost:8080/listings").openStream();
assertEquals("some listings", IOUtils.toString(stream));
}
}
@Test
public void disableRoute() throws Exception {
ctx.stopRoute("issues");
ctx.removeRoute("issues");
try (InputStream stream = new URL("http://localhost:8080/issues/35").openStream()) {
fail();
} catch (Exception e) {
}
new RouteBuilder(ctx) {
@Override
public void configure() throws Exception {
rest().get("/issues/{isin}/{sedol}").route().id("issues")
.process(e -> e.getOut()
.setBody("Here's your issue " + e.getIn().getHeader("isin") + ":" + e.getIn().getHeader("sedol")))
.endRest();
}
}.addRoutesToCamelContext(ctx);
{
InputStream stream = new URL("http://localhost:8080/issues/35/65").openStream();
assertEquals("Here's your issue 35:65", IOUtils.toString(stream));
}
}
}
The disableRoute() test fails since I cannot add another consumer to an existing endpoint.
So my question is: is there a way to add a new URL mapping to a RESTful camel-jetty endpoint? If you do it during the first configuration it works fine, but when you later want to reconfigure one of the routes, the error is:
org.apache.camel.FailedToStartRouteException: Failed to start route because of Multiple consumers for the same endpoint is not allowed: jetty:http://localhost:8080/issues/%7Bisin%7D/%7Bsedol%7D?httpMethodRestrict=GET
Pretty much, I'm trying to write a simple program that lets the user choose a file. Unfortunately, JFileChooser through Swing is a little outdated, so I am trying to use JavaFX FileChooser for this. The goal is to run FileGetter as a thread, transfer the file data to the Main Class, and continue from there.
Main Class:
package application;
import java.io.File;
import javafx.application.Application;
import javafx.stage.Stage;
import javafx.scene.Scene;
import javafx.scene.layout.BorderPane;
public class Main {
public static void main(String[] args) {
Thread t1 = new Thread(new FileGetter());
FileGetter fg = new FileGetter();
t1.start();
boolean isReady = false;
while(isReady == false){
isReady = FileGetter.getIsReady();
}
File file = FileGetter.getFile();
System.out.println(file.getAbsolutePath());
...
}
}
FileGetter Class:
package application;
import java.io.File;
import javafx.application.Application;
import javafx.application.Platform;
import javafx.stage.FileChooser;
import javafx.stage.Stage;
import javafx.scene.Scene;
import javafx.scene.layout.BorderPane;
public class FileGetter extends Application implements Runnable {
static File file;
static boolean isReady = false;
@Override
public void start(Stage primaryStage) {
try {
FileChooser fc = new FileChooser();
while(file == null){
file = fc.showOpenDialog(primaryStage);
}
isReady = true;
Platform.exit();
} catch(Exception e) {
e.printStackTrace();
}
}
@Override
public void run() {
launch();
}
public static boolean getIsReady(){
return isReady;
}
public static File getFile(){
return file;
}
}
The problem is that the value of isReady in the while loop doesn't update to true when the user has picked a file (the reason I have it is to prevent the code in Main from continuing with a File that is still null).
Any help, alternative suggestions, or explanations as of why this happens is very much appreciated!
The Java memory model does not require variable values to be the same in different threads except under specific conditions.
What is happening here is that the FileGetter thread updates the value in its own memory, which is only accessed from that thread, but your main thread doesn't see the updated value, since it only sees the version of the variable stored in its own memory, which differs from the one in the FileGetter thread. Each thread may have its own copy of the field in memory, which is perfectly fine according to the Java specification.
To fix this, you can simply add the volatile modifier to isReady:
static volatile boolean isReady = false;
which makes sure the updated value will be visible from your main thread.
Furthermore I recommend reducing the number of FileGetter instances you create. In your code 3 instances are created, but only 1 is used.
Thread t1 = new Thread(() -> Application.launch(FileGetter.class));
t1.start();
...
The easiest way to implement this
Instead of trying to put the cart before the horse, why not just follow the standard JavaFX lifecycle? In other words, make your Main class a subclass of Application, get the file in the start() method, and then proceed (in a background thread) with the rest of the application?
public class Main extends Application {
@Override
public void init() {
// make sure we don't exit when file chooser is closed...
Platform.setImplicitExit(false);
}
@Override
public void start(Stage primaryStage) {
File file = null ;
FileChooser fc = new FileChooser();
while(file == null){
file = fc.showOpenDialog(primaryStage);
}
final File theFile = file ;
new Thread(() -> runApplication(theFile)).start();
}
private void runApplication(File file) {
// run your application here...
}
}
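If an explicit entry point is needed (for example when running from certain IDE or build setups), it would just delegate to the standard launcher; this small main method is an assumed addition of mine, not part of the original answer:
public static void main(String[] args) {
    // Standard JavaFX entry point: launch() runs init(), then start() on the FX Application Thread.
    launch(args);
}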
What is wrong with your code
If you really want the Main class to be separate from the JavaFX Application class (which doesn't really make sense: once you have decided to use a JavaFX FileChooser, you have decided you are writing a JavaFX application, so the startup class should be a subclass of Application), then it gets a bit tricky. There are several issues with your code as it stands, some of which are addressed in other answers. The main issue, as shown in Fabian's answer, is that you are referencing FileGetter.isReady from multiple threads without ensuring liveness. This is exactly the issue addressed in Josh Bloch's Effective Java (Item 66 in the 2nd edition).
Another issue with your code is that you won't be able to use the FileGetter more than once (you can't call launch() more than once), which might not be an issue in your code now, but almost certainly will be at some point with this application as development progresses. The problem is that you have mixed two issues: starting the FX toolkit, and retrieving a File from a FileChooser. The first thing must only be done once; the second should be written to be reusable.
And finally your loop
while(isReady == false){
isReady = FileGetter.getIsReady();
}
is very bad practice: it checks the isReady flag as fast as it possibly can. Under some (fairly unusual) circumstances, it could even prevent the FX Application thread from having any resources to run. This should just block until the file is ready.
How to fix without making Main a JavaFX Application
So, again only if you have a really pressing need to do so, I would first create a class that just has the responsibility of starting the FX toolkit. Something like:
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicBoolean;
import javafx.application.Application;
import javafx.application.Platform;
import javafx.stage.Stage;
public class FXStarter extends Application {
private static final AtomicBoolean startRequested = new AtomicBoolean(false);
private static final CountDownLatch latch = new CountDownLatch(1);
@Override
public void init() {
Platform.setImplicitExit(false);
}
@Override
public void start(Stage primaryStage) {
latch.countDown();
}
/**
 * Starts the FX toolkit, if not already started via this method,
 * and blocks execution until it is running.
 */
public static void startFXIfNeeded() throws InterruptedException {
if (! startRequested.getAndSet(true)) {
new Thread(Application::launch).start();
}
latch.await();
}
}
Now create a class that gets a file for you. This should ensure the FX toolkit is running, using the previous class. This implementation allows you to call getFile() from any thread:
import java.io.File;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.FutureTask;
import javafx.application.Platform;
import javafx.stage.FileChooser;
public class FileGetter {
/**
 * Retrieves a file from a JavaFX File chooser. This method can
 * be called from any thread, and will block until the user chooses
 * a file.
 */
public File getFile() throws InterruptedException {
FXStarter.startFXIfNeeded() ;
if (Platform.isFxApplicationThread()) {
return doGetFile();
} else {
FutureTask<File> task = new FutureTask<File>(this::doGetFile);
Platform.runLater(task);
try {
return task.get();
} catch (ExecutionException exc) {
throw new RuntimeException(exc);
}
}
}
private File doGetFile() {
File file = null ;
FileChooser chooser = new FileChooser() ;
while (file == null) {
file = chooser.showOpenDialog(null) ;
}
return file ;
}
}
and finally your Main is just
import java.io.File;
public class Main {
public static void main(String[] args) throws InterruptedException {
File file = new FileGetter().getFile();
// proceed...
}
}
Again, this is pretty complex; I see no reason not to simply use the standard FX Application lifecycle for this, as in the very first code block in the answer.
In this code
while(isReady == false){
isReady = FileGetter.getIsReady();
}
there is nothing that guarantees the main thread will ever see FileGetter's isReady change to true, so the loop may never terminate.