In my Java agent, I start an HttpServer:
public static void premain(String agentArgs, Instrumentation inst) throws InstantiationException, IOException {
HttpServer server = HttpServer.create(new InetSocketAddress(8000), 0);
server.createContext("/report", new ReportHandler());
server.createContext("/data", new DataHandler());
server.createContext("/stack", new StackHandler());
ExecutorService es = Executors.newCachedThreadPool(new ThreadFactory() {
int count = 0;
@Override
public Thread newThread(Runnable r) {
Thread t = new Thread(r);
t.setDaemon(true);
t.setName("JDBCLD-HTTP-SERVER" + count++);
return t;
}
});
server.setExecutor(es);
server.start();
// how to properly close ?
Runtime.getRuntime().addShutdownHook(new Thread() {
@Override
public void run() {
server.stop(5);
log.info("internal httpserver has been closed.");
es.shutdown();
try {
if (!es.awaitTermination(60, TimeUnit.SECONDS)) {
log.warn("executor service of internal httpserver not closing in 60 seconds");
es.shutdownNow();
if (!es.awaitTermination(60, TimeUnit.SECONDS))
log.error("executor service of internal httpserver not closing in 120 seconds, give up");
}else {
log.info("executor service of internal httpserver closed.");
}
} catch (InterruptedException ie) {
log.warn("thread interrupted, shutdown executor service of internal httpserver");
es.shutdownNow();
Thread.currentThread().interrupt();
}
}
});
// other instrumentation code omitted ...
}
Testing program:
public class AgentTest {
public static void main(String[] args) throws SQLException {
HikariConfig config = new HikariConfig();
config.setJdbcUrl("jdbc:oracle:thin:#172.31.27.182:1521/pas");
config.setUsername("pas");
config.setPassword("pas");
HikariDataSource ds = new HikariDataSource(config);
Connection c = ds.getConnection();
Connection c1 = ds.getConnection();
c.getMetaData();
try {
Thread.sleep(1000 * 60 * 10);
} catch (InterruptedException e) {
e.printStackTrace();
c.close();
c1.close();
ds.close();
}
c.close();
c1.close();
ds.close();
}
}
When the target JVM exits, I want to stop that HttpServer. But when my test program finishes, the main thread stops while the whole JVM process won't terminate, so the shutdown hook in the code above never executes. If I click the 'Terminate' button in the Eclipse IDE, Eclipse shows an error, but at least the JVM exits and my shutdown hook gets invoked.
According to the Javadoc of java.lang.Runtime:
The Java virtual machine shuts down in response to two kinds of events:
The program exits normally, when the last non-daemon thread exits or when the exit (equivalently, System.exit) method is invoked, or
The virtual machine is terminated in response to a user interrupt, such as typing ^C, or a system-wide event, such as user logoff or system shutdown.
com.sun.net.httpserver.HttpServer starts a non-daemon dispatcher thread; that thread only exits when HttpServer#stop is called, so I am facing a deadlock:
non-daemon thread not finished -> shutdown hook not triggered -> can't stop server -> non-daemon thread not finished
Any good ideas? Please note that I can't modify the code of the target application.
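For diagnosis, here is a minimal sketch (not part of the agent; the class name is made up) that lists every live thread with its daemon flag. After server.start(), the HttpServer dispatcher shows up as a non-daemon thread, which is exactly what keeps the JVM alive:
import java.util.Comparator;

public class ThreadDump {
    public static void main(String[] args) {
        // Any thread with daemon=false (other than main) keeps the JVM alive after main() returns.
        Thread.getAllStackTraces().keySet().stream()
                .sorted(Comparator.comparing(Thread::getName))
                .forEach(t -> System.out.printf("%-40s daemon=%b%n", t.getName(), t.isDaemon()));
    }
}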
Updates after applying kriegaex's answer
I added some logging to the watchdog thread, and here is the output:
2021-09-22 17:30:00.967 INFO - Connnection#1594791957 acquired by 40A4F128987F8BD9C0EE6749895D1237
2021-09-22 17:30:00.968 DEBUG - Stack#40A4F128987F8BD9C0EE6749895D1237:
java.lang.Throwable:
at com.zaxxer.hikari.pool.ProxyConnection.<init>(ProxyConnection.java:102)
at com.zaxxer.hikari.pool.HikariProxyConnection.<init>(HikariProxyConnection.java)
at com.zaxxer.hikari.pool.ProxyFactory.getProxyConnection(ProxyFactory.java)
at com.zaxxer.hikari.pool.PoolEntry.createProxyConnection(PoolEntry.java:97)
at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:192)
at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:162)
at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:100)
at agenttest.AgentTest.main(AgentTest.java:19)
2021-09-22 17:30:00.969 INFO - Connnection#686560878 acquired by 464555C270688B747CA211DE489B7730
2021-09-22 17:30:00.969 DEBUG - Stack#464555C270688B747CA211DE489B7730:
java.lang.Throwable:
at com.zaxxer.hikari.pool.ProxyConnection.<init>(ProxyConnection.java:102)
at com.zaxxer.hikari.pool.HikariProxyConnection.<init>(HikariProxyConnection.java)
at com.zaxxer.hikari.pool.ProxyFactory.getProxyConnection(ProxyFactory.java)
at com.zaxxer.hikari.pool.PoolEntry.createProxyConnection(PoolEntry.java:97)
at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:192)
at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:162)
at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:100)
at agenttest.AgentTest.main(AgentTest.java:20)
2021-09-22 17:30:00.971 DEBUG - Connnection#1594791957 used by getMetaData
2021-09-22 17:30:01.956 DEBUG - there is still 12 active threads, keep wathcing
2021-09-22 17:30:01.956 DEBUG - Reference Handler#true Finalizer#true Signal Dispatcher#true main#false server-timer#true Thread-2#false jdbcld-watch-dog#false Timer-0#true oracle.jdbc.driver.BlockSource.ThreadedCachingBlockSource.BlockReleaser#true InterruptTimer#true HikariPool-1 housekeeper#true HikariPool-1 connection adder#true
2021-09-22 17:30:02.956 DEBUG - there is still 12 active threads, keep wathcing
2021-09-22 17:30:02.956 DEBUG - Reference Handler#true Finalizer#true Signal Dispatcher#true main#false server-timer#true Thread-2#false jdbcld-watch-dog#false Timer-0#true oracle.jdbc.driver.BlockSource.ThreadedCachingBlockSource.BlockReleaser#true InterruptTimer#true HikariPool-1 housekeeper#true HikariPool-1 connection adder#true
2021-09-22 17:30:03.957 DEBUG - there is still 12 active threads, keep wathcing
2021-09-22 17:30:03.957 DEBUG - Reference Handler#true Finalizer#true Signal Dispatcher#true main#false server-timer#true Thread-2#false jdbcld-watch-dog#false Timer-0#true oracle.jdbc.driver.BlockSource.ThreadedCachingBlockSource.BlockReleaser#true InterruptTimer#true HikariPool-1 housekeeper#true HikariPool-1 connection adder#true
2021-09-22 17:30:04.959 DEBUG - there is still 12 active threads, keep wathcing
2021-09-22 17:30:04.959 DEBUG - Reference Handler#true Finalizer#true Signal Dispatcher#true main#false server-timer#true Thread-2#false jdbcld-watch-dog#false Timer-0#true oracle.jdbc.driver.BlockSource.ThreadedCachingBlockSource.BlockReleaser#true InterruptTimer#true HikariPool-1 housekeeper#true HikariPool-1 connection adder#true
2021-09-22 17:30:05.959 DEBUG - there is still 12 active threads, keep wathcing
2021-09-22 17:30:05.960 DEBUG - Reference Handler#true Finalizer#true Signal Dispatcher#true main#false server-timer#true Thread-2#false jdbcld-watch-dog#false Timer-0#true oracle.jdbc.driver.BlockSource.ThreadedCachingBlockSource.BlockReleaser#true InterruptTimer#true HikariPool-1 housekeeper#true HikariPool-1 connection adder#true
2021-09-22 17:30:06.960 DEBUG - there is still 11 active threads, keep wathcing
2021-09-22 17:30:06.960 DEBUG - Reference Handler#true Finalizer#true Signal Dispatcher#true main#false server-timer#true Thread-2#false jdbcld-watch-dog#false Timer-0#true oracle.jdbc.driver.BlockSource.ThreadedCachingBlockSource.BlockReleaser#true InterruptTimer#true HikariPool-1 housekeeper#true
2021-09-22 17:30:07.961 DEBUG - there is still 11 active threads, keep wathcing
2021-09-22 17:30:07.961 DEBUG - Reference Handler#true Finalizer#true Signal Dispatcher#true main#false server-timer#true Thread-2#false jdbcld-watch-dog#false Timer-0#true oracle.jdbc.driver.BlockSource.ThreadedCachingBlockSource.BlockReleaser#true InterruptTimer#true HikariPool-1 housekeeper#true
2021-09-22 17:30:08.961 DEBUG - there is still 11 active threads, keep wathcing
2021-09-22 17:30:08.961 DEBUG - Reference Handler#true Finalizer#true Signal Dispatcher#true main#false server-timer#true Thread-2#false jdbcld-watch-dog#false Timer-0#true oracle.jdbc.driver.BlockSource.ThreadedCachingBlockSource.BlockReleaser#true InterruptTimer#true HikariPool-1 housekeeper#true
2021-09-22 17:30:09.962 DEBUG - there is still 11 active threads, keep wathcing
2021-09-22 17:30:09.962 DEBUG - Reference Handler#true Finalizer#true Signal Dispatcher#true main#false server-timer#true Thread-2#false jdbcld-watch-dog#false Timer-0#true oracle.jdbc.driver.BlockSource.ThreadedCachingBlockSource.BlockReleaser#true InterruptTimer#true HikariPool-1 housekeeper#true
2021-09-22 17:30:10.962 DEBUG - there is still 11 active threads, keep wathcing
2021-09-22 17:30:10.963 DEBUG - Reference Handler#true Finalizer#true Signal Dispatcher#true main#false server-timer#true Thread-2#false jdbcld-watch-dog#false Timer-0#true oracle.jdbc.driver.BlockSource.ThreadedCachingBlockSource.BlockReleaser#true InterruptTimer#true HikariPool-1 housekeeper#true
2021-09-22 17:30:10.976 INFO - Connnection#1594791957 released
2021-09-22 17:30:10.976 DEBUG - set connection count to 0 by stack hash 40A4F128987F8BD9C0EE6749895D1237
2021-09-22 17:30:10.976 INFO - Connnection#686560878 released
2021-09-22 17:30:10.976 DEBUG - set connection count to 0 by stack hash 464555C270688B747CA211DE489B7730
2021-09-22 17:30:11.963 DEBUG - there is still 10 active threads, keep wathcing
2021-09-22 17:30:11.963 DEBUG - Reference Handler#true Finalizer#true Signal Dispatcher#true server-timer#true Thread-2#false jdbcld-watch-dog#false Timer-0#true oracle.jdbc.driver.BlockSource.ThreadedCachingBlockSource.BlockReleaser#true InterruptTimer#true DestroyJavaVM#false
2021-09-22 17:30:12.964 DEBUG - there is still 10 active threads, keep wathcing
Updates
I want to support all kinds of Java applications, including web applications running in servlet containers and standalone Java SE applications.
Here is a little MCVE illustrating ewrammer's idea. I used the little byte-buddy-agent helper library for dynamically attaching an agent in order to make my example self-contained, starting the Java agent right from the main method. I omitted the 3 trivial no-op dummy handler classes necessary to run this example.
package org.acme.agent;
import com.sun.net.httpserver.HttpServer;
import net.bytebuddy.agent.ByteBuddyAgent;
import java.io.IOException;
import java.lang.instrument.Instrumentation;
import java.net.InetSocketAddress;
import java.util.Random;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.TimeUnit;
public class Agent {
public static void premain(String agentArgs, Instrumentation inst) throws IOException {
HttpServer httpServer = HttpServer.create(new InetSocketAddress(8000), 0);
ExecutorService executorService = getExecutorService(httpServer);
Runtime.getRuntime().addShutdownHook(getShutdownHook(httpServer, executorService));
// other instrumentation code omitted ...
startWatchDog();
}
private static ExecutorService getExecutorService(HttpServer server) {
server.createContext("/report", new ReportHandler());
server.createContext("/data", new DataHandler());
server.createContext("/stack", new StackHandler());
ExecutorService executorService = Executors.newCachedThreadPool(new ThreadFactory() {
int count = 0;
@Override
public Thread newThread(Runnable r) {
Thread t = new Thread(r);
t.setDaemon(true);
t.setName("JDBCLD-HTTP-SERVER" + count++);
return t;
}
});
server.setExecutor(executorService);
server.start();
return executorService;
}
private static Thread getShutdownHook(HttpServer httpServer, ExecutorService executorService) {
return new Thread(() -> {
httpServer.stop(5);
System.out.println("Internal HTTP server has been stopped");
executorService.shutdown();
try {
if (!executorService.awaitTermination(60, TimeUnit.SECONDS)) {
System.out.println("Executor service of internal HTTP server not closing in 60 seconds");
executorService.shutdownNow();
if (!executorService.awaitTermination(60, TimeUnit.SECONDS))
System.out.println("Executor service of internal HTTP server not closing in 120 seconds, giving up");
}
else {
System.out.println("Executor service of internal HTTP server closed");
}
}
catch (InterruptedException ie) {
System.out.println("Thread interrupted, shutting down executor service of internal HTTP server");
executorService.shutdownNow();
Thread.currentThread().interrupt();
}
});
}
private static void startWatchDog() {
ThreadGroup threadGroup = Thread.currentThread().getThreadGroup();
while (threadGroup.getParent() != null)
threadGroup = threadGroup.getParent();
final ThreadGroup topLevelThreadGroup = threadGroup;
// Plus 1, because of the monitoring thread we are going to start right below
final int activeCount = topLevelThreadGroup.activeCount() + 1;
new Thread(() -> {
do {
try {
Thread.sleep(1000);
}
catch (InterruptedException ignored) {}
} while (topLevelThreadGroup.activeCount() > activeCount);
System.exit(0);
}).start();
}
public static void main(String[] args) throws IOException {
premain(null, ByteBuddyAgent.install());
Random random = new Random();
for (int i = 0; i < 5; i++) {
new Thread(() -> {
int threadDurationSeconds = 1 + random.nextInt(10);
System.out.println("Starting thread with duration " + threadDurationSeconds + " s");
try {
Thread.sleep(threadDurationSeconds * 1000);
System.out.println("Finishing thread after " + threadDurationSeconds + " s");
}
catch (InterruptedException ignored) {}
}).start();
}
}
}
As you can see, this is basically your example code, refactored into a few helper methods for readability, plus the new watchdog method. It is quite straightforward.
This produces a console log like:
Starting thread with duration 6 s
Starting thread with duration 6 s
Starting thread with duration 8 s
Starting thread with duration 7 s
Starting thread with duration 5 s
Finishing thread after 5 s
Finishing thread after 6 s
Finishing thread after 6 s
Finishing thread after 7 s
Finishing thread after 8 s
internal httpserver has been closed.
executor service of internal httpserver closed.
I have a requirement to process messages from Kafka without losing any message and while maintaining the message order. Therefore, I used transactions and enabled the 'exactly_once' processing guarantee in my Kafka Streams topology, as I assume that topology processing is 'all or nothing', i.e. the message offset is committed only after the last node has successfully processed the message.
However, consider a failure scenario: for example, the database is down, so the processor fails to store the message and throws an exception. At this point the topology dies as intended and is recreated automatically on rebalance. I would expect the topology to either re-consume the original message from the Kafka topic again, or to re-consume it on application restart. However, it seems that the original message disappears and is never consumed or processed again after the topology died.
What do I need to do to reprocess the original message sent to the Kafka topic? Which Kafka configuration needs to change? Do I need to manually assign a state store and keep track of processed messages on a changelog topic?
Topology:
@Singleton
public class EventTopology extends Topology {
private final Deserializer<String> deserializer = Serdes.String().deserializer();
private final Serializer<String> serializer = Serdes.String().serializer();
private final EventLogMessageSerializer eventLogMessageSerializer;
private final EventLogMessageDeserializer eventLogMessageDeserializer;
private final EventLogProcessorSupplier eventLogProcessorSupplier;
@Inject
public EventTopology(EventsConfig eventsConfig,
EventLogMessageSerializer eventLogMessageSerializer,
EventLogMessageDeserializer eventLogMessageDeserializer,
EventLogProcessorSupplier eventLogProcessorSupplier) {
this.eventLogMessageSerializer = eventLogMessageSerializer;
this.eventLogMessageDeserializer = eventLogMessageDeserializer;
this.eventLogProcessorSupplier = eventLogProcessorSupplier;
init(eventsConfig);
}
private void init(EventsConfig eventsConfig) {
var topics = eventsConfig.getTopicConfig().getTopics();
String eventLog = topics.get("eventLog");
addSource("EventsLogSource", deserializer, eventLogMessageDeserializer, eventLog)
.addProcessor("EventLogProcessor", eventLogProcessorSupplier, "EventsLogSource");
}
}
Processor:
@Singleton
@Slf4j
public class EventLogProcessor implements Processor<String, EventLogMessage> {
private final EventLogService eventLogService;
private ProcessorContext context;
@Inject
public EventLogProcessor(EventLogService eventLogService) {
this.eventLogService = eventLogService;
}
@Override
public void init(ProcessorContext context) {
this.context = context;
}
@Override
public void process(String key, EventLogMessage value) {
log.info("Processing EventLogMessage={}", value);
try {
eventLogService.storeInDatabase(value);
context.commit();
} catch (Exception e) {
log.warn("Failed to process EventLogMessage={}", value, e);
throw e;
}
}
@Override
public void close() {
}
}
Configuration:
eventsConfig:
  saveTopicsEnabled: false
  topologyConfig:
    environment: "LOCAL"
    broker: "localhost:9093"
    enabled: true
    initialiseWaitInterval: 3 seconds
    applicationId: "eventsTopology"
    config:
      auto.offset.reset: latest
      session.timeout.ms: 6000
      fetch.max.wait.ms: 7000
      heartbeat.interval.ms: 5000
      connections.max.idle.ms: 7000
      security.protocol: SSL
      key.serializer: org.apache.kafka.common.serialization.StringSerializer
      value.serializer: org.apache.kafka.common.serialization.StringSerializer
      max.poll.records: 5
      processing.guarantee: exactly_once
      metric.reporters: com.simple.metrics.kafka.DropwizardReporter
      default.deserialization.exception.handler: org.apache.kafka.streams.errors.LogAndContinueExceptionHandler
      enable.idempotence: true
      request.timeout.ms: 8000
      acks: all
      batch.size: 16384
      linger.ms: 1
      enable.auto.commit: false
      state.dir: "/tmp"
  topicConfig:
    topics:
      eventLog: "EVENT-LOG-LOCAL"
    kafkaTopicConfig:
      partitions: 18
      replicationFactor: 1
      config:
        retention.ms: 604800000
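For orientation, here is a minimal sketch of how the processing.guarantee: exactly_once setting above maps onto a plain Kafka Streams bootstrap (assumption: the YAML is eventually turned into a Properties object for KafkaStreams; the class below and its single source node are illustrative stand-ins for the framework wiring and the EventTopology shown earlier):
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.Topology;

import java.util.Properties;

public class ExactlyOnceBootstrap {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "eventsTopology");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9093");
        // With exactly_once, consuming, processing and the offset commit are wrapped
        // into a single Kafka transaction per task.
        props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE);

        Topology topology = new Topology();
        // Stand-in for the EventTopology above: one source node reading the event log topic.
        topology.addSource("EventsLogSource", new StringDeserializer(), new StringDeserializer(), "EVENT-LOG-LOCAL");

        KafkaStreams streams = new KafkaStreams(topology, props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}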
Test:
Feature: Feature covering the scenarios to process event log messages produced by external client.
Background:
Given event topology is healthy
Scenario: event log messages produced are successfully stored in the database
Given database is down
And the following event log messages are published
| deptId | userId | eventType | endDate | eventPayload_partner |
| dept-1 | user-1234 | CREATE | 2021-04-15T00:00:00Z | PARTNER-1 |
When database is up
And database is healthy
Then event log stored in the database as follows
| dept_id | user_id | event_type | end_date | event_payload |
| dept-1 | user-1234 | CREATE | 2021-04-15T00:00:00Z | {"partner":"PARTNER-1"} |
Logs:
INFO [data-plane-kafka-request-handler-1] kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 0]: Preparing to rebalance group eventsTopology in state PreparingRebalance with old generation 0 (__consumer_offsets-0) (reason: Adding new member eventsTopology-57fdac0e-09fb-4aa0-8b0b-7e01809b31fa-StreamThread-1-consumer-96a3e980-4286-461e-8536-5f04ccb2c778 with group instance id None)
INFO [executor-Rebalance] kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 0]: Stabilized group eventsTopology generation 1 (__consumer_offsets-0)
INFO [data-plane-kafka-request-handler-2] kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 0]: Assignment received from leader for group eventsTopology for generation 1
INFO [data-plane-kafka-request-handler-1] kafka.coordinator.transaction.TransactionCoordinator - [TransactionCoordinator id=0] Initialized transactionalId eventsTopology-0_0 with producerId 0 and producer epoch 0 on partition __transaction_state-4
INFO [data-plane-kafka-request-handler-6] kafka.coordinator.transaction.TransactionCoordinator - [TransactionCoordinator id=0] Initialized transactionalId eventsTopology-0_1 with producerId 1 and producer epoch 0 on partition __transaction_state-3
...
INFO [data-plane-kafka-request-handler-0] kafka.coordinator.transaction.TransactionCoordinator - [TransactionCoordinator id=0] Initialized transactionalId eventsTopology-0_16 with producerId 17 and producer epoch 0 on partition __transaction_state-37
INFO [data-plane-kafka-request-handler-4] kafka.coordinator.transaction.TransactionCoordinator - [TransactionCoordinator id=0] Initialized transactionalId eventsTopology-1_1 with producerId 18 and producer epoch 0 on partition __transaction_state-42
INFO [data-plane-kafka-request-handler-6] kafka.coordinator.transaction.TransactionCoordinator - [TransactionCoordinator id=0] Initialized transactionalId eventsTopology-1_0 with producerId 19 and producer epoch 0 on partition __transaction_state-43
...
INFO [data-plane-kafka-request-handler-3] kafka.coordinator.transaction.TransactionCoordinator - [TransactionCoordinator id=0] Initialized transactionalId eventsTopology-1_17 with producerId 34 and producer epoch 0 on partition __transaction_state-45
INFO [data-plane-kafka-request-handler-5] kafka.coordinator.transaction.TransactionCoordinator - [TransactionCoordinator id=0] Initialized transactionalId eventsTopology-1_16 with producerId 35 and producer epoch 0 on partition __transaction_state-46
INFO [pool-26-thread-1] ManagerClient - Manager request {uri:http://localhost:8081/healthcheck, method:GET, body:'', headers:{}}
INFO [pool-26-thread-1] ManagerClient - Manager response from with body {"Database":{"healthy":true},"eventsTopology":{"healthy":true}}
INFO [dw-admin-130] KafkaConnectionCheck - successfully connected to kafka broker: localhost:9093
INFO [kafka-producer-network-thread | EVENT-LOG-LOCAL-test-client-id] LocalTestEnvironment - Message: ProducerRecord(topic=EVENT-LOG-LOCAL, partition=null, headers=RecordHeaders(headers = [], isReadOnly = true), key=null, value={"endDate":1618444800000,"deptId":"dept-1","userId":"user-1234","eventType":"CREATE","eventPayload":{"previousEndDate":null,"partner":"PARTNER-1","info":null}}, timestamp=null) pushed onto topic: EVENT-LOG-LOCAL
INFO [eventsTopology-b21df600-cd39-4c9d-9e7a-f55f53ac9fd3-StreamThread-1] EventLogProcessor - Processing EventLogMessage=EventLogMessage(endDate=Thu Apr 15 01:00:00 BST 2021, deptId=dept-1, userId=user-1234, eventType=CREATE, eventPayload=EventLogMessage.EventPayload(previousEndDate=null, partner=PARTNER-1, info=null))
WARN [eventsTopology-b21df600-cd39-4c9d-9e7a-f55f53ac9fd3-StreamThread-1] EventLogProcessor - Failed to process EventLogMessage=EventLogMessage(endDate=Thu Apr 15 01:00:00 BST 2021, deptId=dept-1, userId=user-1234, eventType=CREATE, eventPayload=EventLogMessage.EventPayload(previousEndDate=null, partner=PARTNER-1, info=null))
exceptions.NoHostAvailableException: All host(s) tried for query failed (no host was tried)
at manager.service.EventLogService.storeInDatabase(EventLogService.java:24)
at manager.topology.processor.EventLogProcessor.process(EventLogProcessor.java:47)
at manager.topology.processor.EventLogProcessor.process(EventLogProcessor.java:19)
at org.apache.kafka.streams.processor.internals.ProcessorNode.lambda$process$2(ProcessorNode.java:142)
at org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImpl.maybeMeasureLatency(StreamsMetricsImpl.java:836)
at org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:142)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:236)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:216)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:168)
at org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:96)
at org.apache.kafka.streams.processor.internals.StreamTask.lambda$process$1(StreamTask.java:679)
at org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImpl.maybeMeasureLatency(StreamsMetricsImpl.java:836)
at org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:679)
at org.apache.kafka.streams.processor.internals.TaskManager.process(TaskManager.java:1033)
at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:690)
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:551)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:510)
ERROR [eventsTopology-b21df600-cd39-4c9d-9e7a-f55f53ac9fd3-StreamThread-1] org.apache.kafka.streams.processor.internals.TaskManager - stream-thread [eventsTopology-b21df600-cd39-4c9d-9e7a-f55f53ac9fd3-StreamThread-1] Failed to process stream task 0_8 due to the following error:
org.apache.kafka.streams.errors.StreamsException: Exception caught in process. taskId=0_8, processor=EventsLogSource, topic=EVENT-LOG-LOCAL, partition=8, offset=0, stacktrace=exceptions.NoHostAvailableException: All host(s) tried for query failed (no host was tried)
ERROR [eventsTopology-b21df600-cd39-4c9d-9e7a-f55f53ac9fd3-StreamThread-1] org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [eventsTopology-b21df600-cd39-4c9d-9e7a-f55f53ac9fd3-StreamThread-1] Encountered the following exception during processing and the thread is going to shut down:
org.apache.kafka.streams.errors.StreamsException: Exception caught in process. taskId=0_8, processor=EventsLogSource, topic=EVENT-LOG-LOCAL, partition=8, offset=0, stacktrace=exceptions.NoHostAvailableException: All host(s) tried for query failed (no host was tried)
ERROR [eventsTopology-b21df600-cd39-4c9d-9e7a-f55f53ac9fd3-StreamThread-1] org.apache.kafka.streams.KafkaStreams - stream-client [eventsTopology-b21df600-cd39-4c9d-9e7a-f55f53ac9fd3] All stream threads have died. The instance will be in error state and should be closed.
Exception: java.lang.IllegalStateException thrown from the UncaughtExceptionHandler in thread "eventsTopology-b21df600-cd39-4c9d-9e7a-f55f53ac9fd3-StreamThread-1"
INFO [executor-Heartbeat] kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 0]: Member eventsTopology-b21df600-cd39-4c9d-9e7a-f55f53ac9fd3-StreamThread-1-consumer-f11ca299-2a68-4317-a559-dd1b96cd431f in group eventsTopology has failed, removing it from the group
INFO [executor-Heartbeat] kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 0]: Preparing to rebalance group eventsTopology in state PreparingRebalance with old generation 1 (__consumer_offsets-0) (reason: removing member eventsTopology-b21df600-cd39-4c9d-9e7a-f55f53ac9fd3-StreamThread-1-consumer-f11ca299-2a68-4317-a559-dd1b96cd431f on heartbeat expiration)
INFO [data-plane-kafka-request-handler-2] kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 0]: Stabilized group eventsTopology generation 2 (__consumer_offsets-0)
INFO [data-plane-kafka-request-handler-6] kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 0]: Assignment received from leader for group eventsTopology for generation 2
INFO [data-plane-kafka-request-handler-0] kafka.coordinator.transaction.TransactionCoordinator - [TransactionCoordinator id=0] Initialized transactionalId eventsTopology-0_0 with producerId 0 and producer epoch 1 on partition __transaction_state-4
...
INFO [data-plane-kafka-request-handler-0] kafka.coordinator.transaction.TransactionCoordinator - [TransactionCoordinator id=0] Initialized transactionalId eventsTopology-1_16 with producerId 35 and producer epoch 1 on partition __transaction_state-46
INFO [main] Cluster - New databse host localhost/127.0.0.1:59423 added
com.jayway.awaitility.core.ConditionTimeoutException: Condition defined as a lambda expression in steps.EventLogSteps
Expecting:
<0>
to be equal to:
<1>
but was not. within 20 seconds.
I have Spark Java code that runs fine with spark-core_2.11 v2.2.0 but throws an exception with spark-core_2.11 v2.3.1.
The code basically maps the column "isrecurrence" to the value 1 if the value is true and 0 if it is false.
The column also contains the value "null" (as a string). Those "null" strings get replaced by "\N" (so that Hive reads the data as NULL).
Code:
public static Seq<String> convertListToSeq(List<String> inputList)
{
return JavaConverters.asScalaIteratorConverter(inputList.iterator()).asScala().toSeq();
}
String srcCols = "id,whoid,whatid,whocount,whatcount,subject,activitydate,status,priority,ishighpriority,ownerid,description,isdeleted,accountid,isclosed,createddate,createdbyid,lastmodifieddate,lastmodifiedbyid,systemmodstamp,isarchived,calldurationinseconds,calltype,calldisposition,callobject,reminderdatetime,isreminderset,recurrenceactivityid,isrecurrence,recurrencestartdateonly,recurrenceenddateonly,recurrencetimezonesidkey,recurrencetype,recurrenceinterval,recurrencedayofweekmask,recurrencedayofmonth,recurrenceinstance,recurrencemonthofyear,recurrenceregeneratedtype";
String table = "task";
String[] colArr = srcCols.split(",");
List<String> colsList = Arrays.asList(colArr);
Dataset<Row> filtered = spark.read().format("com.springml.spark.salesforce")
.option("username", prop.getProperty("salesforce_user"))
.option("password", prop.getProperty("salesforce_auth"))
.option("login", prop.getProperty("salesforce_login_url"))
.option("soql", "SELECT "+srcCols+" from "+table)
.option("version", prop.getProperty("salesforce_version"))
.load().persist();
String column = "isrecurrence"; //This column has values 'true' or 'false' as string.
//'true' will be mapped to '1' (as string)
//'false' will be mapped to '0' (as string).
String newCol = column + "_mapped_to_new_value";
filtered = filtered.selectExpr(convertListToSeq(colsList))
.withColumn(newCol, //code is breaking here at "withColumn"
when(filtered.col(column).notEqual("null"),
when(filtered.col(column).equalTo("true"), 1)
.otherwise(when(filtered.col(column).equalTo("false"), 0)))
.otherwise(lit("\\N"))).alias(newCol)
.drop(filtered.col(column));
filtered.write().mode(SaveMode.Overwrite).option("delimiter", "^").csv(hdfsExportLoaction);
Error:
Exception in thread "main" org.apache.spark.sql.AnalysisException: unresolved operator 'Project [id#35, whoid#21, whatid#1, whocount#13, whatcount#5, subject#27, activitydate#22, status#19, priority#24, ishighpriority#10, ownerid#15, description#2, isdeleted#20, accountid#3, isclosed#12, createddate#34, createdbyid#16, lastmodifieddate#0, lastmodifiedbyid#37, systemmodstamp#28, isarchived#30, calldurationinseconds#23, calltype#9, calldisposition#6, ... 16 more fields];;
'Project [id#35, whoid#21, whatid#1, whocount#13, whatcount#5, subject#27, activitydate#22, status#19, priority#24, ishighpriority#10, ownerid#15, description#2, isdeleted#20, accountid#3, isclosed#12, createddate#34, createdbyid#16, lastmodifieddate#0, lastmodifiedbyid#37, systemmodstamp#28, isarchived#30, calldurationinseconds#23, calltype#9, calldisposition#6, ... 16 more fields]
+- Project [id#35, whoid#21, whatid#1, whocount#13, whatcount#5, subject#27, activitydate#22, status#19, priority#24, ishighpriority#10, ownerid#15, description#2, isdeleted#20, accountid#3, isclosed#12, createddate#34, createdbyid#16, lastmodifieddate#0, lastmodifiedbyid#37, systemmodstamp#28, isarchived#30, calldurationinseconds#23, calltype#9, calldisposition#6, ... 15 more fields]
+- Project [id#35, whoid#21, whatid#1, whocount#13, whatcount#5, subject#27, activitydate#22, status#19, priority#24, ishighpriority#10, ownerid#15, description#2, isdeleted#20, accountid#3, isclosed#12, createddate#34, createdbyid#16, lastmodifieddate#0, lastmodifiedbyid#37, systemmodstamp#28, isarchived#30, calldurationinseconds#23, calltype#9, calldisposition#6, ... 15 more fields]
+- Relation[LastModifiedDate#0,WhatId#1,Description#2,AccountId#3,RecurrenceDayOfWeekMask#4,WhatCount#5,CallDisposition#6,ReminderDateTime#7,RecurrenceEndDateOnly#8,CallType#9,IsHighPriority#10,RecurrenceRegeneratedType#11,IsClosed#12,WhoCount#13,RecurrenceInterval#14,OwnerId#15,CreatedById#16,RecurrenceActivityId#17,IsReminderSet#18,Status#19,IsDeleted#20,WhoId#21,ActivityDate#22,CallDurationInSeconds#23,... 15 more fields] DatasetRelation(null,com.springml.salesforce.wave.impl.ForceAPIImpl#68303c3e,SELECT id,whoid,whatid,whocount,whatcount,subject,activitydate,status,priority,ishighpriority,ownerid,description,isdeleted,accountid,isclosed,createddate,createdbyid,lastmodifieddate,lastmodifiedbyid,systemmodstamp,isarchived,calldurationinseconds,calltype,calldisposition,callobject,reminderdatetime,isreminderset,recurrenceactivityid,isrecurrence,recurrencestartdateonly,recurrenceenddateonly,recurrencetimezonesidkey,recurrencetype,recurrenceinterval,recurrencedayofweekmask,recurrencedayofmonth,recurrenceinstance,recurrencemonthofyear,recurrenceregeneratedtype from task,null,org.apache.spark.sql.SQLContext#2ec23ec3,null,0,1000,None,false,false)
at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$class.failAnalysis(CheckAnalysis.scala:41)
at org.apache.spark.sql.catalyst.analysis.Analyzer.failAnalysis(Analyzer.scala:92)
at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$3.apply(CheckAnalysis.scala:356)
at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$3.apply(CheckAnalysis.scala:354)
at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:127)
at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$class.checkAnalysis(CheckAnalysis.scala:354)
at org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis(Analyzer.scala:92)
at org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:105)
at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:57)
at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:55)
at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:47)
at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:74)
at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$withPlan(Dataset.scala:3301)
at org.apache.spark.sql.Dataset.select(Dataset.scala:1312)
at org.apache.spark.sql.Dataset.withColumns(Dataset.scala:2197)
at org.apache.spark.sql.Dataset.withColumn(Dataset.scala:2164)
at com.sfdc.SaleforceReader.mapColumns(SaleforceReader.java:187)
at com.sfdc.SaleforceReader.main(SaleforceReader.java:547)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:904)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:198)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:228)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:137)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
19/07/10 09:38:51 INFO SparkContext: Invoking stop() from shutdown hook
19/07/10 09:38:51 INFO AbstractConnector: Stopped Spark#72456279{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
19/07/10 09:38:51 INFO SparkUI: Stopped Spark web UI at http://ebdp-po-e007s.sys.comcast.net:4040
19/07/10 09:38:51 INFO YarnClientSchedulerBackend: Interrupting monitor thread
19/07/10 09:38:51 INFO YarnClientSchedulerBackend: Shutting down all executors
19/07/10 09:38:51 INFO YarnSchedulerBackend$YarnDriverEndpoint: Asking each executor to shut down
19/07/10 09:38:51 INFO SchedulerExtensionServices: Stopping SchedulerExtensionServices
(serviceOption=None,
services=List(),
started=false)
19/07/10 09:38:51 INFO YarnClientSchedulerBackend: Stopped
19/07/10 09:38:51 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
19/07/10 09:38:51 INFO MemoryStore: MemoryStore cleared
19/07/10 09:38:51 INFO BlockManager: BlockManager stopped
19/07/10 09:38:51 INFO BlockManagerMaster: BlockManagerMaster stopped
I am not sure if this is happening due to nested when-otherwise().
I used lit() and it worked:
when(filtered.col(column).equalTo("true"), lit(1))
.otherwise(when(filtered.col(column).equalTo("false"), lit(0)))
instead of
when(filtered.col(column).equalTo("true"), 1)
.otherwise(when(filtered.col(column).equalTo("false"), 0))
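For context, here is a sketch of how the fixed expression plugs back into the original call (same variables as in the question: filtered, column, newCol, colsList; untested against the Salesforce source, so treat it as illustrative):
import static org.apache.spark.sql.functions.lit;
import static org.apache.spark.sql.functions.when;

// Same chain as in the question, with the integer literals wrapped in lit():
filtered = filtered.selectExpr(convertListToSeq(colsList))
        .withColumn(newCol,
                when(filtered.col(column).notEqual("null"),
                        when(filtered.col(column).equalTo("true"), lit(1))
                                .otherwise(when(filtered.col(column).equalTo("false"), lit(0))))
                        .otherwise(lit("\\N")))
        .alias(newCol)
        .drop(filtered.col(column));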
public class GetCountryFromIP extends EvalFunc<String> {
@Override
public List<String> getCacheFiles() {
List<String> list = new ArrayList<String>(1);
list.add("/input/pig/resources/GeoLite2-Country.mmdb#GeoLite2-Country");
return list;
}
@Override
public String exec(Tuple input) throws IOException {
if (input == null || input.size() == 0 || input.get(0) == null) {
return null;
}
try {
String inputIP = (String) input.get(0);
String output;
File database = new File("./GeoLite2-Country");
//CODE FOR EXPLAIN
if (database.exists()) {
System.out.print("EXIST!!!");
} else {
System.out.print("NOTEXISTS!!!");
}
//CODE FOR EXPLAIN
DatabaseReader reader = new DatabaseReader.Builder(database).build();
InetAddress ipAddress = InetAddress.getByName(inputIP);
CountryResponse response = reader.country(ipAddress);
Country country = response.getCountry();
output = country.getIsoCode();
return output;
} catch (AddressNotFoundException e) {
return null;
} catch (Exception ee) {
throw new IOException("Uncaught exec" + ee);
}
}
}
Here is my UDF code. I need the GeoLite2-Country.mmdb file, so I use getCacheFiles().
I also put all the Pig Latin into one pig file, 'batch.pig'.
When I run this file with 'pig batch.pig', the output looks like this:
2015-10-06 01:16:56,737 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.MapTask - soft limit at 83886080
2015-10-06 01:16:56,737 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.MapTask - bufstart = 0; bufvoid = 104857600
2015-10-06 01:16:56,737 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.MapTask - kvstart = 26214396; length = 6553600
2015-10-06 01:16:56,738 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.MapTask - Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2015-10-06 01:16:56,744 [LocalJobRunner Map Task Executor #0] WARN org.apache.pig.data.SchemaTupleBackend - SchemaTupleBackend has already been initialized
2015-10-06 01:16:56,754 [LocalJobRunner Map Task Executor #0] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Map - Aliases being processed per job phase (AliasName[line,offset]): M: weblog[-1,-1],weblog_web[30,13],weblog_web[-1,-1],weblog_web[-1,-1],desktop_active_log_account_filter[7,36],desktop_parsed[3,18],desktop_parsed_abstract[5,26],weblog_web[-1,-1],web_active_log_account_filter[20,32],weblog_web_parsed[16,20],weblog_web_parsed_abstract[18,29] C: R:
EXIST!!!
2015-10-06 01:16:56,997 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.LocalJobRunner -
2015-10-06 01:16:56,997 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.MapTask - Starting flush of map output
...
...
...
2015-10-06 01:16:57,938 [Thread-1885] INFO org.apache.hadoop.mapred.LocalJobRunner - reduce task executor complete.
2015-10-06 01:16:57,939 [pool-59-thread-1] WARN org.apache.pig.data.SchemaTupleBackend - SchemaTupleBackend has already been initialized
NOTEXIST!!!
2015-10-06 01:16:57,974 [pool-59-thread-1] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapReduce$Reduce - Aliases being processed per job phase (AliasName[line,offset]): M: account_hour_activity[42,24],account_hour_activity_group[41,30],team_hour_activity[76,21],team_hour_activity_group[75,27] C:
...
...
..
2015-10-06 01:16:57,976 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.MapTask - Finished spill 0
2015-10-06 01:16:57,977 [Thread-2139] INFO org.apache.hadoop.mapred.LocalJobRunner - map task executor complete.
2015-10-06 01:16:57,981 [Thread-2139] WARN org.apache.hadoop.mapred.LocalJobRunner - job_local1209692101_0021
java.lang.Exception: org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception while executing (Name: Local Rearrange[tuple]{tuple}(true) - scope-2240 Operator Key: scope-2240): org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception while executing (Name: weblog_web_parsed_abstract: New For Each(false,false,false)[bag] - scope-1379 Operator Key: scope-1379): org.apache.pig.backend.executionengine.ExecException: ERROR 2078: Caught error from UDF: com.tosslab.sprinklr.country.GetCountryFromIP [Uncaught execjava.io.FileNotFoundException: ./GeoLite2-Country (No such file or directory)]
at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
Caused by: org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception while executing (Name: Local Rearrange[tuple]{tuple}(true) - scope-2240 Operator Key: scope-2240): org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception while executing (Name: weblog_web_parsed_abstract: New For Each(false,false,false)[bag] - scope-1379 Operator Key: scope-1379): org.apache.pig.backend.executionengine.ExecException: ERROR 2078: Caught error from UDF: com.tosslab.sprinklr.country.GetCountryFromIP [Uncaught execjava.io.FileNotFoundException: ./GeoLite2-Country (No such file or directory)]
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:316)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POLocalRearrange.getNextTuple(POLocalRearrange.java:291)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POSplit.runPipeline(POSplit.java:259)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POSplit.processPlan(POSplit.java:241)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POSplit.processPlan(POSplit.java:246)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POSplit.getNextTuple(POSplit.java:233)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.runPipeline(PigGenericMapBase.java:283)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:278)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:64)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:784)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception while executing (Name: weblog_web_parsed_abstract: New For Each(false,false,false)[bag] - scope-1379 Operator Key: scope-1379): org.apache.pig.backend.executionengine.ExecException: ERROR 2078: Caught error from UDF: com.tosslab.sprinklr.country.GetCountryFromIP [Uncaught execjava.io.FileNotFoundException: ./GeoLite2-Country (No such file or directory)]
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:316)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POForEach.getNextTuple(POForEach.java:246)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:307)
... 17 more
This means the mmdb file gets deleted during the batch work...
What's going on here? How can I solve this issue?
It seems the job is run in local mode:
2015-10-06 01:16:57,976 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.MapTask - Finished spill 0
When you run the job in local mode, the distributed cache is not supported:
2015-10-05 23:22:56,675 [main] INFO org.apache.pig.data.SchemaTupleFrontend - Distributed cache not supported or needed in local mode.
Put everything in HDFS and run in mapreduce mode.
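For example (assuming the .mmdb file is uploaded to the HDFS path referenced by getCacheFiles(); the exact commands are illustrative):
hdfs dfs -mkdir -p /input/pig/resources
hdfs dfs -put GeoLite2-Country.mmdb /input/pig/resources/GeoLite2-Country.mmdb
pig -x mapreduce batch.pig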
I want to execute two tasks at scheduled times (23:59 CET and 08:00 CET). I have created an EJB singleton bean that contains those methods:
@Singleton
public class OfferManager {
#Schedule(hour = "23", minute = "59", timezone = "CET")
#AccessTimeout(value = 0) // concurrent access is not permitted
public void fetchNewOffers() {
Logger.getLogger(OfferManager.class.getName()).log(Level.INFO, "Fetching new offers started");
// ...
Logger.getLogger(OfferManager.class.getName()).log(Level.INFO, "Fetching new offers finished");
}
#Schedule(hour="8", minute = "0", timezone = "CET")
public void sendMailsWithReports() {
Logger.getLogger(OfferManager.class.getName()).log(Level.INFO, "Generating reports started");
// ...
Logger.getLogger(OfferManager.class.getName()).log(Level.INFO, "Generating reports finished");
}
}
The problem is that both tasks are executed twice. The server is WildFly Beta1, configured in UTC time.
Here are some server logs that might be useful:
2013-10-20 11:15:17,684 INFO [org.jboss.as.server] (XNIO-1 task-7) JBAS018559: Deployed "crawler-0.3.war" (runtime-name : "crawler-0.3.war")
2013-10-20 21:59:00,070 INFO [com.indeed.control.OfferManager] (EJB default - 1) Fetching new offers started
....
2013-10-20 22:03:48,608 INFO [com.indeed.control.OfferManager] (EJB default - 1) Fetching new offers finished
2013-10-20 23:59:00,009 INFO [com.indeed.control.OfferManager] (EJB default - 2) Fetching new offers started
....
2013-10-20 23:59:22,279 INFO [com.indeed.control.OfferManager] (EJB default - 2) Fetching new offers finished
What might be the cause of such behaviour?
I solved the problem by specifying the scheduled time in server time (UTC).
So
#Schedule(hour = "23", minute = "59", timezone = "CET")
was replaced with:
#Schedule(hour = "21", minute = "59")
I don't know the cause of this behaviour; maybe the early release of WildFly is the issue.
I had the same problem with TomEE Plume 7.0.4. In my case the solution was to change @Singleton to @Stateless.
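A minimal sketch of that variant (same schedule as in the question; whether this actually avoids the duplicate execution depends on the container):
import javax.ejb.Schedule;
import javax.ejb.Stateless;
import java.util.logging.Level;
import java.util.logging.Logger;

@Stateless
public class OfferManager {

    @Schedule(hour = "23", minute = "59", timezone = "CET")
    public void fetchNewOffers() {
        Logger.getLogger(OfferManager.class.getName()).log(Level.INFO, "Fetching new offers started");
        // ...
        Logger.getLogger(OfferManager.class.getName()).log(Level.INFO, "Fetching new offers finished");
    }
}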