Quartz Scheduler - Is the job correctly scheduled? - java

Fully Updated Code
I am using Twilio to implement a text reminder that is sent 3 days before an appointment. I have implemented it in my back end (Java). I have added a few visits to the database that should trigger a job to be scheduled, but I am unsure how to check whether the job has actually been scheduled to send a message.
I know the visit is being added to the database since I can check that, so scheduleJob(visit); should be scheduling a text to send, but I am not sure how to verify it.
VisitController.java
@PostMapping()
public ResponseEntity<Object> addVisit(@RequestBody Visit visit){
Result<Visit> result = service.add(visit);
if(result.isSuccess()){
scheduleJob(visit);
return new ResponseEntity<>(result.getPayload(), HttpStatus.CREATED);
}
return ErrorResponse.build(result);
}
private void scheduleJob(Visit visit) {
String visitId = String.valueOf(visit.getVisitId());
ZoneId defaultZoneId = ZoneId.systemDefault();
LocalDate date = visit.getVisitDate().minusDays(3);
Date finalDate = Date.from(date.atStartOfDay(defaultZoneId).plusHours(16).toInstant());
JobDetail job =
newJob(VisitScheduler.class).withIdentity("Appointment_J_" + visitId)
.usingJobData("visitId", visitId).build();
Trigger trigger =
newTrigger().withIdentity("Appointment_T_" + visitId).startAt(finalDate).build();
try {
scheduler.scheduleJob(job, trigger);
} catch (SchedulerException e) {
System.out.println("Unable to schedule the Job");
}
}
VisitScheduler.java
public class VisitScheduler implements Job {
private static Logger logger = LoggerFactory.getLogger(VisitScheduler.class);
public static final String ACCOUNT_SID = "account sid";
public static final String AUTH_TOKEN = "auth token";
public static final String TWILIO_NUMBER = "twilio number";
public static final String TO_NUMBER= "number to send messages to";
private final VisitRepository repository;
public VisitScheduler(VisitRepository repository) {
this.repository = repository;
}
@Override
public void execute(JobExecutionContext context) {
VisitService service = new VisitService(repository);
Twilio.init(ACCOUNT_SID, AUTH_TOKEN);
JobDataMap dataMap = context.getJobDetail().getJobDataMap();
int visitId = Integer.parseInt(dataMap.getString("visitId"));
Visit visit = service.findByVisitId(visitId);
if (visit != null) {
String date = String.valueOf(visit.getVisitDate());
String time = String.valueOf(visit.getVisitTime());
String messageBody = "This is a reminder about your visit at " + time + " on " + date + " See you then!";
try {
Message message = Message
.creator(new PhoneNumber(TO_NUMBER), new PhoneNumber(TWILIO_NUMBER), messageBody)
.create();
System.out.println("Message sent! Message SID: " + message.getSid());
} catch(TwilioException ex) {
logger.error("An exception occurred trying to send the message \"{}\" to {}." +
" \nTwilio returned: {} \n", messageBody, TO_NUMBER, ex.getMessage());
}
}
}
}

Twilio developer evangelist here.
In this line you store the job data:
JobDetail job = newJob(VisitScheduler.class).withIdentity("Appointment_J_" + visitId)
.usingJobData("appointmentId", visitId).build();
You set the "appointmentId" to visitId.
In VisitScheduler you then try to get the visitId with:
int visitId = Integer.parseInt(dataMap.getString("visitId"));
Should this be "appointmentId" to match the field you set earlier?
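To confirm whether the job actually made it into the scheduler, you can also ask Quartz directly. A minimal sketch (assuming the same scheduler instance and visitId used in your scheduleJob method are in scope):
import org.quartz.JobKey;
import org.quartz.SchedulerException;
import org.quartz.Trigger;

try {
    // Does the scheduler know about the job at all?
    JobKey jobKey = JobKey.jobKey("Appointment_J_" + visitId);
    if (scheduler.checkExists(jobKey)) {
        // List the trigger(s) for the job and when they will next fire
        for (Trigger t : scheduler.getTriggersOfJob(jobKey)) {
            System.out.println(t.getKey() + " next fires at " + t.getNextFireTime());
        }
    } else {
        System.out.println("Job " + jobKey + " was not scheduled");
    }
} catch (SchedulerException e) {
    e.printStackTrace();
}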

Related

Add a Telegram bot (Java) to a group chat

I have a working Telegram bot that replies to my private messages. However, when I add it to my test chat and run the app, I get this error: 'Unexpected action from user'. I guess this is the wrong way to create a bot for a group chat, and maybe I shouldn't use TelegramLongPollingBot. Can you please help me understand how to create a group chat bot?
The Bot's class:
public class MessageCalculator extends TelegramLongPollingBot {
private PropertiesFileReader propertiesFileReader = new PropertiesFileReader();
private Properties prop;
{
try {
prop = propertiesFileReader.readPropertiesFile("src/main/resources/config.properties");
} catch (IOException ioe) {
throw new RuntimeException(ioe);
}
}
private static final Logger log = LoggerFactory.getLogger(Main.class);
private int messageCount = 0;
@Override
public String getBotUsername() {
return prop.getProperty("telegram.bot.username");
}
@Override
public String getBotToken() {
return prop.getProperty("telegram.bot.token");
}
@Override
public void onUpdateReceived(Update update) {
if (update.hasMessage() && update.getMessage().hasText()) {
String textFromUser = update.getMessage().getText();
Long userId = update.getMessage().getChatId();
String userFirstName = update.getMessage().getFrom().getFirstName();
log.info("[{}, {}] : {}", userId, userFirstName, textFromUser);
messageCount++;
SendMessage sendMessage = SendMessage.builder()
.chatId(userId.toString())
.text("Hello, " + userFirstName + "! Thank you for the message #" + messageCount ": " + textFromUser)
.build();
try {
this.sendApiMethod(sendMessage);
} catch (TelegramApiException e) {
log.error("Sending message error:\t", e);
}
} else {
//And I get this error message:
log.warn("Unexpected action from user");
}
}
}
I expect to create a chat bot that can count messages from each user later.
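Not something stated in the post, but worth keeping in mind: in a group chat a bot receives plenty of updates that are not plain text messages (for example the service message generated when the bot itself is added to the group), so hitting the else branch above is not necessarily a sign that the bot is set up wrongly. A rough sketch of a more tolerant handler, using the same TelegramLongPollingBot API as above:
@Override
public void onUpdateReceived(Update update) {
    if (update.hasMessage() && update.getMessage().hasText()) {
        // ... existing reply / counting logic ...
    } else if (update.hasMessage()) {
        // Group chats produce non-text messages (members joining, the bot being added, pins, ...)
        log.info("Ignoring non-text message in chat {}", update.getMessage().getChatId());
    } else {
        // Other update types (edited messages, callback queries, ...) - log and move on
        log.debug("Ignoring update {}", update.getUpdateId());
    }
}
It is also worth checking in BotFather whether the bot's group privacy mode is enabled, since that limits which group messages the bot receives at all.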

log4j2 custom appender does not stop / exit

Using the boxfuse cloudwatchlogs-java-appender as a starting point, I created a generic version of the log4j2 appender for logging to AWS CloudWatch. However, I am facing an issue where the log4j2 appender does not shut down at all.
Here is my appender plugin CloudwatchLogsLog4J2Appender.java -
package ...
imports ...
@Plugin(name = CloudwatchLogsLog4J2Appender.APPENDER_NAME, category = "Core", elementType = Appender.ELEMENT_TYPE, printObject = true)
public class CloudwatchLogsLog4J2Appender extends AbstractAppender {
static final String APPENDER_NAME = "CloudwatchLogs-Appender";
private final CloudwatchLogsConfig config = new CloudwatchLogsConfig();
private BlockingQueue<CloudwatchLogsLogEvent> eventQueue;
private CloudwatchLogsLogEventPutter putter;
private long discardedCount;
public CloudwatchLogsLog4J2Appender(String name, Filter filter, Layout<? extends Serializable> layout) {
super(name, filter, layout);
}
public CloudwatchLogsLog4J2Appender(String name, Filter filter, Layout<? extends Serializable> layout, boolean ignoreExceptions) {
super(name, filter, layout, ignoreExceptions);
}
// Your custom appender needs to declare a factory method
// annotated with `@PluginFactory`. Log4j will parse the configuration
// and call this factory method to construct an appender instance with
// the configured attributes.
@PluginFactory
public static CloudwatchLogsLog4J2Appender createAppender(
@PluginAttribute(value = "name", defaultString = APPENDER_NAME) String name,
@PluginElement("Filter") final Filter filter,
@PluginAttribute("debug") Boolean debug,
@PluginAttribute("stdoutFallback") Boolean stdoutFallback,
@PluginAttribute("endpoint") String endpoint,
@PluginAttribute("logGroupName") String logGroupName,
@PluginAttribute("module") String module,
@PluginAttribute(value = "maxEventQueueSize", defaultInt = CloudwatchLogsConfig.DEFAULT_MAX_EVENT_QUEUE_SIZE) Integer maxEventQueueSize,
@PluginAttribute("region") String region,
@PluginAttribute("flushDelayInMillis") int flushDelayInMillis) {
System.out.println("CloudwatchLogsLog4J2Appender:createAppender() called...");
CloudwatchLogsLog4J2Appender appender = new CloudwatchLogsLog4J2Appender(name, filter, null, true);
if (debug != null) {
appender.getConfig().setDebug(debug);
}
if (stdoutFallback != null) {
appender.getConfig().setStdoutFallback(stdoutFallback);
}
if (endpoint != null) {
appender.getConfig().setEndpoint(endpoint);
}
if (logGroupName != null) {
appender.getConfig().setLogGroupName(logGroupName);
}
if (module != null) {
appender.getConfig().setModule(module);
}
appender.getConfig().setMaxEventQueueSize(maxEventQueueSize);
if (region != null) {
appender.getConfig().setRegion(region);
}
if (flushDelayInMillis > 0) {
appender.getConfig().setFlushDelayInMills(flushDelayInMillis);
}
return appender;
}
/**
* @return The config of the appender. This instance can be modified to override defaults.
*/
public CloudwatchLogsConfig getConfig() {
return config;
}
@Override
public void start() {
System.out.println("CloudwatchLogsLog4J2Appender:start() called...");
super.start();
eventQueue = new LinkedBlockingQueue<>(config.getMaxEventQueueSize());
putter = CloudwatchLogsLogEventPutter.create(config, eventQueue);
new Thread(putter).start();
}
@Override
public void stop() {
System.out.println("CloudwatchLogsLog4J2Appender:stop() called...");
putter.terminate();
super.stop();
}
@Override
protected boolean stop(Future<?> future) {
System.out.println("CloudwatchLogsLog4J2Appender:stop(future) called...");
putter.terminate();
return super.stop(future);
}
@Override
public boolean stop(long timeout, TimeUnit timeUnit) {
System.out.println("CloudwatchLogsLog4J2Appender:stop(timeout, timeunit) called...");
putter.terminate();
System.out.println("CloudwatchLogsLog4J2Appender:stop(timeout, timeunit) Done calling terminate()... passing to super");
return super.stop(timeout, timeUnit);
}
/**
* @return The number of log events that had to be discarded because the event queue was full.
* If this number is non-zero without having been affected by AWS CloudWatch Logs availability issues,
* you should consider increasing maxEventQueueSize in the config to allow more log events to be buffered before having to drop them.
*/
public long getDiscardedCount() {
return discardedCount;
}
@Override
public void append(LogEvent event) {
String message = event.getMessage().getFormattedMessage();
Throwable thrown = event.getThrown();
while (thrown != null) {
message += "\n" + dump(thrown);
thrown = thrown.getCause();
if (thrown != null) {
message += "\nCaused by:";
}
}
Marker marker = event.getMarker();
String eventId = marker == null ? null : marker.getName();
CloudwatchLogsLogEvent logEvent = new CloudwatchLogsLogEvent(event.getLevel().toString(), event.getLoggerName(), eventId, message, event.getTimeMillis(), event.getThreadName());
while (!eventQueue.offer(logEvent)) {
eventQueue.poll();
discardedCount++;
}
}
private String dump(Throwable throwableProxy) {
StringBuilder builder = new StringBuilder();
builder.append(throwableProxy.getClass().getName()).append(": ").append(throwableProxy.getMessage()).append("\n");
for (StackTraceElement step : throwableProxy.getStackTrace()) {
String string = step.toString();
builder.append("\t").append(string);
builder.append("\n");
}
return builder.toString();
}
}
Here is the CloudwatchLogsLogEventPutter
public class CloudwatchLogsLogEventPutter implements Runnable {
private static int MAX_FLUSH_DELAY = 500 * 1000 * 1000;
private static final int MAX_BATCH_COUNT = 10000;
private static final int MAX_BATCH_SIZE = 1048576;
private final CloudwatchLogsConfig config;
private final BlockingQueue<CloudwatchLogsLogEvent> eventQueue;
private final AWSLogs logsClient;
private final ObjectMapper objectMapper = new ObjectMapper().setSerializationInclusion(JsonInclude.Include.NON_NULL);
private final boolean enabled;
private boolean running;
private String module;
private String logGroupName;
private int batchSize;
private long lastFlush;
private List<InputLogEvent> eventBatch;
private String nextSequenceToken;
private final AtomicLong processedCount = new AtomicLong(0);
/**
* Creates a new EventPutter for the current AWS region.
*
* @param config The config to use.
* @param eventQueue The event queue to consume from.
* @return The new EventPutter.
*/
public static CloudwatchLogsLogEventPutter create(CloudwatchLogsConfig config, BlockingQueue<CloudwatchLogsLogEvent> eventQueue) {
boolean enabled = config.getRegion() != null || config.getEndpoint() != null;
AWSLogs logsClient = enabled ? createLogsClient(config) : null;
CloudwatchLogsLogEventPutter logPutter = new CloudwatchLogsLogEventPutter(config, eventQueue, logsClient, enabled);
return logPutter;
}
/**
* For internal use only. This constructor lets us switch the AWSLogs implementation for testing.
*/
public CloudwatchLogsLogEventPutter(CloudwatchLogsConfig config, BlockingQueue<CloudwatchLogsLogEvent> eventQueue,
AWSLogs awsLogs, boolean enabled) {
this.config = config;
module = config.getModule();
this.eventQueue = eventQueue;
this.enabled = enabled;
logsClient = awsLogs;
if(config.getFlushDelayInMills() > 0) {
MAX_FLUSH_DELAY = config.getFlushDelayInMills() * 1000;
}
logGroupName = config.getLogGroupName();
}
static AWSLogs createLogsClient(CloudwatchLogsConfig config) {
AWSLogsClientBuilder builder = AWSLogsClientBuilder.standard();
if (config.getEndpoint() != null) {
// Non-AWS mock endpoint
builder.setCredentials(new AWSStaticCredentialsProvider(new AnonymousAWSCredentials()));
builder.setEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(config.getEndpoint(), config.getRegion()));
} else {
builder.setRegion(config.getRegion());
}
return builder.build();
}
/**
* @return The number of log events that have been processed by this putter.
*/
public long getProcessedCount() {
return processedCount.get();
}
@Override
public void run() {
if (!enabled && !config.isStdoutFallback()) {
System.out.println("WARNING: AWS CloudWatch Logs appender is disabled (Unable to detect the AWS region and no CloudWatch Logs endpoint specified)");
return;
}
running = true;
nextSequenceToken = null;
eventBatch = new ArrayList<>();
batchSize = 0;
lastFlush = System.nanoTime();
printWithTimestamp(new Date(), "Initiating the while loop...");
while (running) {
CloudwatchLogsLogEvent event = eventQueue.poll();
printWithTimestamp(new Date(), "Inside Loopity loop...");
if (event != null) {
Map<String, Object> eventMap = new TreeMap<>();
eventMap.put("context", config.getContext());
eventMap.put("module", config.getModule());
eventMap.put("level", event.getLevel());
eventMap.put("event", event.getEvent());
eventMap.put("message", event.getMessage());
eventMap.put("logger", event.getLogger());
eventMap.put("thread", event.getThread());
String eventJson;
try {
eventJson = toJson(eventMap);
} catch (JsonProcessingException e) {
printWithTimestamp(new Date(), "Unable to serialize log event: " + eventMap);
continue;
}
// Source: http://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutLogEvents.html
// The maximum batch size is 1,048,576 bytes,
int eventSize =
// and this size is calculated as the sum of all event messages in UTF-8,
eventJson.getBytes(StandardCharsets.UTF_8).length
// plus 26 bytes for each log event.
+ 26;
if (eventSize > MAX_BATCH_SIZE) {
printWithTimestamp(new Date(), "Unable to send log event as its size (" + eventSize + " bytes)"
+ " exceeds the maximum size supported by AWS CloudWatch Logs (" + MAX_BATCH_SIZE + " bytes): " + eventMap);
continue;
}
if (config.isDebug()) {
printWithTimestamp(new Date(), "Event Size: " + eventSize + " bytes, Batch Size: " + batchSize
+ " bytes, Batch Count: " + eventBatch.size() + ", Event: " + eventJson);
}
if ((eventBatch.size() + 1) >= MAX_BATCH_COUNT || (batchSize + eventSize) >= MAX_BATCH_SIZE) {
flush();
}
eventBatch.add(new InputLogEvent().withMessage(eventJson).withTimestamp(event.getTimestamp()));
batchSize += eventSize;
printWithTimestamp(new Date(event.getTimestamp()), "batchSize = " + batchSize);
} else {
printWithTimestamp(new Date(), "No events, just flush attempts...");
if (!eventBatch.isEmpty() && isTimeToFlush()) {
printWithTimestamp(new Date(), "eventbatch is not empty and its time to flush");
flush();
}
try {
printWithTimestamp(new Date(), "going to sleep...");
Thread.sleep(100);
printWithTimestamp(new Date(), "done sleeping...");
} catch (InterruptedException e) {
printWithTimestamp(new Date(), "Exception while flusing and sleeping...");
running = false;
}
}
}
printWithTimestamp(new Date(), "Done with that while loop...");
}
private void finalFlush() {
printWithTimestamp(new Date(), "finalFlush() called...");
if (!eventBatch.isEmpty()) {
printWithTimestamp(new Date(), "finalFlush() ==> flush()...");
flush();
printWithTimestamp(new Date(), "finalFlush() ==> flush()... DONE");
}
try {
printWithTimestamp(new Date(), "finalFlush() ==> Sleeping...");
Thread.sleep(100);
printWithTimestamp(new Date(), "finalFlush() ==> Sleeping... DONE");
} catch (InterruptedException e) {
printWithTimestamp(new Date(), "Exception while finalFlusing and sleeping... setting running to false");
running = false;
}
}
private boolean isTimeToFlush() {
return lastFlush <= (System.nanoTime() - MAX_FLUSH_DELAY);
}
private void flush() {
printWithTimestamp(new Date(),"flush() called");
Collections.sort(eventBatch, new Comparator<InputLogEvent>() {
@Override
public int compare(InputLogEvent o1, InputLogEvent o2) {
return o1.getTimestamp().compareTo(o2.getTimestamp());
}
});
if (config.isStdoutFallback()) {
for (InputLogEvent event : eventBatch) {
printWithTimestamp(new Date(event.getTimestamp()), logGroupName + " " + module + " " + event.getMessage());
}
} else {
int retries = 15;
do {
printWithTimestamp(new Date(),"flush() - prepping PutLogEventsRequest");
PutLogEventsRequest request =
new PutLogEventsRequest(logGroupName, module, eventBatch).withSequenceToken(nextSequenceToken);
try {
long start = 0;
if (config.isDebug()) {
start = System.nanoTime();
}
PutLogEventsResult result = logsClient.putLogEvents(request);
if (config.isDebug()) {
long stop = System.nanoTime();
long elapsed = (stop - start) / 1000000;
printWithTimestamp(new Date(), "Sending " + eventBatch.size() + " events took " + elapsed + " ms");
}
processedCount.addAndGet(request.getLogEvents().size());
nextSequenceToken = result.getNextSequenceToken();
break;
} catch (DataAlreadyAcceptedException e) {
nextSequenceToken = e.getExpectedSequenceToken();
printWithTimestamp(new Date(),"flush() - received DataAlreadyAcceptedException");
} catch (InvalidSequenceTokenException e) {
nextSequenceToken = e.getExpectedSequenceToken();
printWithTimestamp(new Date(),"flush() - received InvalidSequenceTokenException");
} catch (ResourceNotFoundException e) {
printWithTimestamp(new Date(), "Unable to send logs to AWS CloudWatch Logs at "
+ logGroupName + ">" + module + " (" + e.getErrorMessage() + "). Dropping log events batch ...");
break;
} catch (SdkClientException e) {
try {
printWithTimestamp(new Date(),"flush() - received SDKClientException. Sleeping to retry");
Thread.sleep(1000);
printWithTimestamp(new Date(),"flush() - received SDKClientException. Sleeping DONE");
} catch (InterruptedException e1) {
System.out.println("SDKException while pushing logs to cloudwatch ...");
}
if (--retries > 0) {
printWithTimestamp(new Date(), "Attempt " + (15-retries) + "Unable to send logs to AWS CloudWatch Logs ("
+ e.getMessage() + "). Dropping log events batch ...");
}
}
} while (retries > 0); // && eventBatch.size() > 0
}
eventBatch = new ArrayList<>();
batchSize = 0;
lastFlush = System.nanoTime();
}
/* private -> for testing */
String toJson(Map<String, Object> eventMap) throws JsonProcessingException {
// Compensate for https://github.com/FasterXML/jackson-databind/issues/1442
Map<String, Object> nonNullMap = new TreeMap<>();
for (Map.Entry<String, Object> entry : eventMap.entrySet()) {
if (entry.getValue() != null) {
nonNullMap.put(entry.getKey(), entry.getValue());
}
}
return objectMapper.writeValueAsString(nonNullMap);
}
private void printWithTimestamp(Date date, String str) {
System.out.println(new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSS").format(date) + " " + str);
}
public void terminate() {
printWithTimestamp(new Date(),"terminate() ==> finalFlush()");
//finalFlush();
printWithTimestamp(new Date(),"terminate() ==> finalFlush() DONE. Setting running=false");
running = false;
}
}
CloudwatchLogsLogEvent
public class CloudwatchLogsLogEvent {
private final String level;
private final String logger;
private final String event;
private final String message;
private final long timestamp;
private final String thread;
public CloudwatchLogsLogEvent(String level, String logger, String event, String message, long timestamp, String thread) {
this.level = level;
this.logger = logger;
this.event = event;
this.message = message;
this.timestamp = timestamp;
this.thread = thread;
}
public String getLevel() {
return level;
}
public String getLogger() {
return logger;
}
public String getEvent() {
return event;
}
public String getMessage() {
return message;
}
public long getTimestamp() {
return timestamp;
}
public String getThread() {
return thread;
}
}
and lastly a sample log4j2.xml configuration
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="trace" package="com.cloudwatchlogs.appender.log4j2">
<Appenders>
<CloudwatchLogs-Appender name="myCloudWatchLogger">
<region>us-west-2</region>
<logGroupName>myCloudWatchLogGroup</logGroupName>
<module>myCloudWatchLogStream</module>
<flushDelayInMillis>1</flushDelayInMillis>
<!-- Optional config parameters -->
<!-- Whether to fall back to stdout instead of disabling the appender when running outside of a Boxfuse instance. Default: false -->
<stdoutFallback>false</stdoutFallback>
<!-- The maximum size of the async log event queue. Default: 1000000.
Increase to avoid dropping log events at very high throughput.
Decrease to reduce maximum memory usage at the risk of the occasional log event drop when it gets full. -->
<maxEventQueueSize>1000000</maxEventQueueSize>
</CloudwatchLogs-Appender>
<Console name="console" target="SYSTEM_OUT">
<PatternLayout pattern="%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n"/>
</Console>
</Appenders>
<Loggers>
<Root level="DEBUG">
<AppenderRef ref="console"/>
</Root>
<Logger name="com.mycompany.src" level="DEBUG" additivity="false">
<AppenderRef ref="myCloudWatchLogger" level="DEBUG"/>
</Logger>
</Loggers>
</Configuration>
I tried using this config in a very simple app -
package ...
import ...
public class MyApp
{
private static Logger logger = LogManager.getLogger(MyApp.class);
AmazonS3 s3Client = null;
AmazonDynamoDB dynamoDBClient = null;
MyApp() {
initS3Client(new DefaultAWSCredentialsProviderChain());
}
public void listObjects(String bucketName) {
ObjectListing objectListing = s3Client.listObjects(bucketName);
logger.info("Listing objects in bucket - " + bucketName);
List<String> commonPrefixes = objectListing.getCommonPrefixes();
commonPrefixes.stream().forEach(s -> System.out.println("commonPrefix - " + s));
List<S3ObjectSummary> objectSummaries = objectListing.getObjectSummaries();
for(S3ObjectSummary objectSummary : objectSummaries) {
logger.info("key = " + objectSummary.getKey());
logger.info("ETag = " + objectSummary.getETag());
logger.info("Size = " + objectSummary.getSize());
logger.info("Storage Class = " + objectSummary.getStorageClass());
logger.info("Last Modified = " + objectSummary.getLastModified());
}
s3Client.shutdown();
}
public static void main(String[] args){
MyApp myApp = new MyApp();
myApp.listObjects("test-bucket");
}
void initS3Client(AWSCredentialsProvider credentialsProvider) {
AmazonS3ClientBuilder clientBuilder = AmazonS3ClientBuilder.standard()
.withCredentials(credentialsProvider)
.withRegion(Regions.US_WEST_2);
s3Client = clientBuilder.build();
}
void initDynamoDBClient(AWSCredentialsProvider credentialsProvider) {
AmazonDynamoDBClientBuilder clientBuilder = AmazonDynamoDBClientBuilder.standard()
.withCredentials(credentialsProvider)
.withRegion(Regions.US_WEST_2);
dynamoDBClient = clientBuilder.build();
}
}
When I run MyApp.java, I see that after all the relevant logs are streamed to CloudWatch, the while loop in CloudwatchLogsLogEventPutter's run() method does not terminate. I understand it is a separate thread that runs forever, but shouldn't log4j2 be initiating the stop() method in the lifecycle by itself once the application-related tasks in the MyApp.main() method are complete?
If I try to do a Ctrl+C, I see the below overridden stop() method from CloudwatchLogsLog4J2Appender being called -
public boolean stop(long timeout, TimeUnit timeUnit)
I am not sure where I am going wrong, and there seems to be very little documentation around handling the various lifecycle methods for an Appender and the lifecycle itself. This is my first time writing an appender. Any help is appreciated. Thanks.
Update 1: Sample log file - https://gist.github.com/dev-usa/822309bcd8b4f8a5fb0f4e1eca70d67e
So, I have fixed several problems with this implementation.
To gracefully shut down the loggers, LogManager.shutdown() needs to be called from the application using the appender. At the time of posting this question, the application was not calling that method. Calling it has the same effect as the Ctrl+C behavior I described above: once that is done, the logging framework shuts down as expected and the appender is closed.
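For reference, a minimal sketch of that shutdown call at the end of the sample app above (LogManager.shutdown() is the standard log4j2 API; placing it as the last statement of main() is just one way to do it):
import org.apache.logging.log4j.LogManager;

public static void main(String[] args) {
    MyApp myApp = new MyApp();
    myApp.listObjects("test-bucket");
    // Explicitly stop the logging system so appenders receive their stop() lifecycle call
    LogManager.shutdown();
}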
The next issue I faced was that some logs were lost and never sent to CloudWatch. After some debugging, I found that the while loop in the run() method of CloudwatchLogsLogEventPutter was pulling only one log event per iteration. I updated the design to use the drainTo method on BlockingQueue to take the whole list of pending events and push them in one go, which drastically reduces the number of loop iterations needed to push the events out to CloudWatch. See the updated implementation below -
while (running) {
List<CloudwatchLogsLogEvent> logEvents = new ArrayList<>();
eventQueue.drainTo(logEvents);
printWithTimestamp( "Draining events from eventLoop. No. of events received = " + logEvents.size());
if(logEvents.size() > 0) {
printWithTimestamp( "Translating " + logEvents.size() + " log events to CloudWatch Logs...");
logEvents.stream().forEach(logEvent -> translateLogEventToCloudWatchLog(logEvent));
printWithTimestamp( "Translating " + logEvents.size() + " log events to CloudWatch Logs... DONE");
}
boolean timeToFlush = isTimeToFlush();
printWithTimestamp( "isTimeToFlush() = " + timeToFlush);
printWithTimestamp( "eventBatch.size() = " + eventBatch.size());
if (!eventBatch.isEmpty() && timeToFlush) {
printWithTimestamp( "Event Batch is NOT empty and it's time to flush");
flush();
}
try {
printWithTimestamp( "going to sleep...");
Thread.sleep(100);
printWithTimestamp( "done sleeping...");
} catch (InterruptedException e) {
printWithTimestamp( "Exception while flushing and sleeping...");
running = false;
}
}
Lastly, I also faced an issue where the log4j2 configuration and the appender were not recognized on the classpath when packaging my application as a fat jar; adopting the solution suggested here solved it.
Good luck!

Using ScheduledExecutorService to save(Entites), I get detached Entity passed to persist error

I have a very curious error that I can't seem to get my head around.
I need to use a ScheduledExecutorService to take the Survey entity I created, edit it, and then save it as a new entity.
public void executeScheduled(Survey eventObject, long interval) {
HashMap<String, String> eventRRules = StringUtils.extractSerialDetails(eventObject.getvCalendarRRule());
long delay = 10000;
ScheduledExecutorService service = Executors.newScheduledThreadPool(1);
Runnable runnable = new Runnable() {
private int counter = 1;
private int executions = Integer.parseInt(eventRRules.get("COUNT"));
Survey survey = eventObject;
public void run() {
String uid = eventObject.getUniqueEventId();
logger.info("SurveyController - executeScheduled - Iteration: " + counter);
String serialUid = null;
if (counter == 1) {
serialUid = uid + "-" + counter;
} else {
serialUid = StringUtils.removeLastAndConcatVar(eventObject.getUniqueEventId(), Integer.toString(counter));
}
if (++counter > executions) {
service.shutdown();
}
survey.setUniqueEventId(serialUid);
try {
executeCreateSurvey(survey);
} catch(Exception e) {
logger.debug("SurveyController - executeScheduled - Exception caught: ");
e.printStackTrace();
}
}
};
service.scheduleAtFixedRate(runnable, delay, interval, TimeUnit.MILLISECONDS);
}
When the executeCreateSurvey(survey) method is run without the ScheduledExecutorService, it works flawlessly.
Yet when it is executed inside the run() method, I get the "detached entity passed to persist" error each time the save(survey) method is called within executeCreateSurvey().
The executeCreateSurvey() Method where the .save() Method is called:
public ResponseEntity<?> executeCreateSurvey(Survey eventObject) {
MailService mailService = new MailService(applicationProperties);
Participant eventOwner = participantRepositoryImpl.createOrFindParticipant(eventObject.getEventOwner());
eventObject.setEventOwner(eventOwner);
Survey survey = surveyRepositoryImpl.createSurveyOrFindSurvey((Survey) eventObject);
// Saves additional information if small errors (content
// errors,.. )
// occurs
String warnMessage = "";
List<Participant> participants = new ArrayList<Participant>();
for (Participant participantDetached : eventObject.getParticipants()) {
// Check if participant already exists
Participant participant = participantRepositoryImpl.createOrFindParticipant(participantDetached);
participants.add(participant);
// Only create PartSur if not existing (Update case)
if (partSurRepository.findAllByParticipantAndSurvey(participant, survey).isEmpty()) {
PartSur partSur = new PartSur(participant, survey);
partSurRepository.save(partSur);
try {
mailService.sendRatingInvitationEmail(partSur);
surveyRepository.save(survey);
} catch (Exception e) {
// no special exception for "security" reasons
String errorMessage = "error sending mail for participant: " + e.getMessage() + "\n";
warnMessage += errorMessage;
logger.warn("createSurvey() - " + errorMessage);
}
}
}
// Delete all PartSurs and Answers from removed participants
List<PartSur> partSursForParticipantsRemoved = partSurRepository.findAllByParticipantNotIn(participants);
logger.warn("createSurvey() - participants removed: " + partSursForParticipantsRemoved.size());
partSurRepositoryImpl.invalidatePartSurs(partSursForParticipantsRemoved);
return new ResponseEntity<>("Umfrage wurde angelegt. Warnungen: " + warnMessage, HttpStatus.OK);
}
What could the reason for this be?
I have not been able to find this problem anywhere so far.
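Not part of the original post, but a pattern that often avoids "detached entity passed to persist" in a setup like this is to re-load the entity inside each scheduled run, so every iteration works with a managed instance instead of the detached eventObject captured when the Runnable was created. A rough sketch (the repository method and getId() accessor are assumptions, not taken from the code above):
public void run() {
    // Hypothetical re-read: fetch a managed copy by id instead of reusing the
    // detached instance captured when the Runnable was created.
    Survey managedSurvey = surveyRepository.findById(eventObject.getId())
            .orElseThrow(() -> new IllegalStateException("Survey no longer exists"));
    managedSurvey.setUniqueEventId(serialUid);
    executeCreateSurvey(managedSurvey);
}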

How to schedule Java code that uses the messageArrived method of MqttCallback

I am new to the MQTT world. I have written code to subscribe to a topic, get messages from the topic, and store them in a database. Now my problem is how to put this code on a server so that it will keep receiving messages indefinitely. I am trying to create a scheduler, but in that case I am getting a "Persistence already in use" error from MQTT. I cannot change the clientId every time it connects; it is fixed in my case. Is there any way to get the persistence object that is already connected for a particular clientId?
Please help. Thanks in advance.
Please find the code that subscribes to the topic, along with the messageArrived method of MQTT that receives messages from the topic:
public class AppTest {
private MqttHandler handler;
public void doApp() {
// Read properties from the conf file
Properties props = MqttUtil.readProperties("MyData/app.conf");
String org = props.getProperty("org");
String id = props.getProperty("appid");
String authmethod = props.getProperty("key");
String authtoken = props.getProperty("token");
// isSSL property
String sslStr = props.getProperty("isSSL");
boolean isSSL = false;
if (sslStr.equals("T")) {
isSSL = true;
}
// Format: a:<orgid>:<app-id>
String clientId = "a:" + org + ":" + id;
String serverHost = org + MqttUtil.SERVER_SUFFIX;
handler = new AppMqttHandler();
handler.connect(serverHost, clientId, authmethod, authtoken, isSSL);
// Subscribe Device Events
// iot-2/type/<type-id>/id/<device-id>/evt/<event-id>/fmt/<format-id>
handler.subscribe("iot-2/type/" + MqttUtil.DEFAULT_DEVICE_TYPE
+ "/id/+/evt/" + MqttUtil.DEFAULT_EVENT_ID + "/fmt/json", 0);
}
/**
* This class implements as the application MqttHandler
*
*/
private class AppMqttHandler extends MqttHandler {
// Pattern to check whether the events comes from a device for an event
Pattern pattern = Pattern.compile("iot-2/type/"
+ MqttUtil.DEFAULT_DEVICE_TYPE + "/id/(.+)/evt/"
+ MqttUtil.DEFAULT_EVENT_ID + "/fmt/json");
DatabaseHelper dbHelper = new DatabaseHelper();
/**
* Once a subscribed message is received
*/
@Override
public void messageArrived(String topic, MqttMessage mqttMessage)
throws Exception {
super.messageArrived(topic, mqttMessage);
Matcher matcher = pattern.matcher(topic);
if (matcher.matches()) {
String payload = new String(mqttMessage.getPayload());
// Parse the payload in Json Format
JSONObject contObj = new JSONObject(payload);
System.out
.println("jsonObject arrived in AppTest : " + contObj);
// Call method to insert data in database
dbHelper.insertIntoDB(contObj);
}
}
}
Code to connect to client
public void connect(String serverHost, String clientId, String authmethod,
String authtoken, boolean isSSL) {
// check if client is already connected
if (!isMqttConnected()) {
String connectionUri = null;
//tcp://<org-id>.messaging.internetofthings.ibmcloud.com:1883
//ssl://<org-id>.messaging.internetofthings.ibmcloud.com:8883
if (isSSL) {
connectionUri = "ssl://" + serverHost + ":" + DEFAULT_SSL_PORT;
} else {
connectionUri = "tcp://" + serverHost + ":" + DEFAULT_TCP_PORT;
}
if (client != null) {
try {
client.disconnect();
} catch (MqttException e) {
e.printStackTrace();
}
client = null;
}
try {
client = new MqttClient(connectionUri, clientId);
} catch (MqttException e) {
e.printStackTrace();
}
client.setCallback(this);
// create MqttConnectOptions and set the clean session flag
MqttConnectOptions options = new MqttConnectOptions();
options.setCleanSession(false);
options.setUserName(authmethod);
options.setPassword(authtoken.toCharArray());
//If SSL is used, do not forget to use TLSv1.2
if (isSSL) {
java.util.Properties sslClientProps = new java.util.Properties();
sslClientProps.setProperty("com.ibm.ssl.protocol", "TLSv1.2");
options.setSSLProperties(sslClientProps);
}
try {
// connect
client.connect(options);
System.out.println("Connected to " + connectionUri);
} catch (MqttException e) {
e.printStackTrace();
}
}
}
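Not something from the post, but regarding the "Persistence already in use" error: by default MqttClient uses file-based persistence keyed by the clientId, so a second client (or a client re-created on each scheduler run) with the same clientId hits the existing lock. If on-disk persistence is not required, one option is to pass an explicit MemoryPersistence instance when constructing the client. A sketch using the Eclipse Paho API:
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttException;
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence;

// In-memory persistence avoids the file lock that causes "Persistence already in use"
// when the same clientId is reused; note that in-flight messages are not kept across restarts.
try {
    client = new MqttClient(connectionUri, clientId, new MemoryPersistence());
} catch (MqttException e) {
    e.printStackTrace();
}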

Quartz SimpleTrigger schedules a job for a specific time, but the job never runs

I created a SimpleTrigger in Quartz that is scheduled one hour before a certain appointment so that it will send a text to those who need it. After creating the trigger, I receive the log line that tells me what time it is scheduled for, and that happens without errors, so I'm assuming it is indeed scheduling the job correctly. But when that time comes around, it never fires. I get nothing in the logs at all. If it had even entered the class it would have logged something, but nothing past the scheduled time is logged.
There is no stack trace and no obvious difference between this job and my other jobs, other than that the trigger is created dynamically from the appointment time and an object is passed to this job.
Here is the Trigger:
public static void scheduleTextJob(Appointment appointment ){
Logger log = LoggerFactory.getLogger(DailyTriggerController.class);
JobDetail volunteerTextReminder = (JobDetail) newJob(VolunteerTextReminder.class).withIdentity(appointment.getId()).build();
try{
SchedulerFactory sf = new StdSchedulerFactory();
Scheduler sched = sf.getScheduler();
Calendar cal = Calendar.getInstance();
cal.setTime(appointment.getStartTime());
Date scheduledTime;
if(cal.get(Calendar.HOUR_OF_DAY) < 9 || (cal.get(Calendar.HOUR_OF_DAY) == 9 && cal.get(Calendar.MINUTE) < 30)){
cal.add(Calendar.DAY_OF_YEAR, -1);
cal.set(Calendar.HOUR_OF_DAY, 18);
cal.set(Calendar.MINUTE, 0);
}
else{
cal.add(Calendar.HOUR, -1);
}
scheduledTime = cal.getTime();
SimpleTrigger uniqueTrigger = (SimpleTrigger) newTrigger()
.withIdentity(appointment.getId(), "TextGroup")
.startAt(scheduledTime)
.forJob(volunteerTextReminder)
.build();
volunteerTextReminder.getJobDataMap().put("appointmentId", appointment.getId());
Date ft = sched.scheduleJob(volunteerTextReminder, uniqueTrigger);
log.info(volunteerTextReminder.getKey() + " will run at: " + ft);
} catch (Exception e) {
e.printStackTrace();
}
}
Here is the job that it should be running:
public class VolunteerTextReminder implements Job {
private @Autowired
HttpServletRequest request;
private static Logger _log = LoggerFactory.getLogger(VolunteerTextReminder.class);
private static final String APPLICATION_CONTEXT_KEY = "applicationContext";
/**
* Quartz requires a public empty constructor so that the
* scheduler can instantiate the class whenever it needs.
*/
public VolunteerTextReminder() {
}
public void execute(JobExecutionContext context)
throws JobExecutionException {
SchedulerContext schedulerContext = null;
AppointmentDAO adao = null;
VolunteerDAO vdao = null;
ApplicationContext appContext = getApplicationContext(context);
if (appContext != null) {
adao = (AppointmentDAO) appContext.getBean("AppointmentDAO");
vdao = (VolunteerDAO) appContext.getBean("VolunteerDAO");
_log.info("appContext created adao");
}
else {
_log.info("web context DID NOT create adao");
}
// This job simply prints out its job name and the
// date and time that it is running
JobKey jobKey = context.getJobDetail().getKey();
_log.info("VolunteerTextReminder says: " + jobKey + " executing at " + new Date());
try {
JobDataMap data = context.getJobDetail().getJobDataMap();
String id = data.getString("appointmentId");
assert adao != null;
List<Person> people = adao.readPersonsByAppointmentId(id);
List<Person> volunteers = new ArrayList<>();
for(Person person: people){
if(person.getType().equalsIgnoreCase("volunteer")){
volunteers.add(person);
}
}
for(Person volunteer: volunteers){
Volunteer volunteerToEmail = vdao.getVolunteerToEmailById(volunteer.getLdsAccountId());
if(volunteerToEmail != null) {
Appointment appointment = (Appointment) adao.readAppointmentById(id);
if(!appointment.getCancelled()){
createAndSendText(volunteerToEmail, appointment);
}
}
}
} catch (Exception ex) {
ex.printStackTrace();
}
}
}
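One thing that stands out (an observation, not something confirmed in the post): the scheduler obtained from StdSchedulerFactory inside scheduleTextJob is never started there. Quartz triggers only fire after Scheduler.start() has been called, so if no other code starts that scheduler instance, the job and trigger are stored (and the "will run at" log line is printed) but the trigger never fires. A minimal sketch of the check, inside the existing try block:
SchedulerFactory sf = new StdSchedulerFactory();
Scheduler sched = sf.getScheduler();
if (!sched.isStarted()) {
    // Without this call the trigger is stored but will never fire
    sched.start();
}
Date ft = sched.scheduleJob(volunteerTextReminder, uniqueTrigger);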
