Java EE Singleton Scheduled Task Executed Twice

I want to execute two tasks at scheduled times (23:59 CET and 08:00 CET). I have created an EJB singleton bean that contains these methods:
@Singleton
public class OfferManager {

    @Schedule(hour = "23", minute = "59", timezone = "CET")
    @AccessTimeout(value = 0) // concurrent access is not permitted
    public void fetchNewOffers() {
        Logger.getLogger(OfferManager.class.getName()).log(Level.INFO, "Fetching new offers started");
        // ...
        Logger.getLogger(OfferManager.class.getName()).log(Level.INFO, "Fetching new offers finished");
    }

    @Schedule(hour = "8", minute = "0", timezone = "CET")
    public void sendMailsWithReports() {
        Logger.getLogger(OfferManager.class.getName()).log(Level.INFO, "Generating reports started");
        // ...
        Logger.getLogger(OfferManager.class.getName()).log(Level.INFO, "Generating reports finished");
    }
}
The problem is that both tasks are executed twice. The server is WildFly Beta1, configured in UTC time.
Here are some server logs, that might be useful:
2013-10-20 11:15:17,684 INFO [org.jboss.as.server] (XNIO-1 task-7) JBAS018559: Deployed "crawler-0.3.war" (runtime-name : "crawler-0.3.war")
2013-10-20 21:59:00,070 INFO [com.indeed.control.OfferManager] (EJB default - 1) Fetching new offers started
....
2013-10-20 22:03:48,608 INFO [com.indeed.control.OfferManager] (EJB default - 1) Fetching new offers finished
2013-10-20 23:59:00,009 INFO [com.indeed.control.OfferManager] (EJB default - 2) Fetching new offers started
....
2013-10-20 23:59:22,279 INFO [com.indeed.control.OfferManager] (EJB default - 2) Fetching new offers finished
What might be the cause of such behaviour?

I solved the problem by specifying the scheduled time in server time (UTC).
So
@Schedule(hour = "23", minute = "59", timezone = "CET")
was replaced with:
@Schedule(hour = "21", minute = "59")
I don't know the cause of this behaviour; maybe the early release of WildFly is the issue. Note that hard-coding the UTC offset ties the schedule to a fixed offset: Central Europe is UTC+2 in summer (CEST) but UTC+1 in winter (CET), so the job will drift by an hour when the clocks change.
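For completeness, here is a minimal sketch of the whole bean with both schedules expressed in server (UTC) time. The 06:00 value for the mail task is an assumption, derived from the same two-hour CEST-to-UTC shift the author applied to the first task; adjust it for your server and season:

import java.util.logging.Level;
import java.util.logging.Logger;
import javax.ejb.AccessTimeout;
import javax.ejb.Schedule;
import javax.ejb.Singleton;

@Singleton
public class OfferManager {

    private static final Logger LOG = Logger.getLogger(OfferManager.class.getName());

    @Schedule(hour = "21", minute = "59") // 23:59 CEST expressed in server (UTC) time
    @AccessTimeout(value = 0)             // reject concurrent invocations instead of queueing
    public void fetchNewOffers() {
        LOG.log(Level.INFO, "Fetching new offers started");
        // ...
        LOG.log(Level.INFO, "Fetching new offers finished");
    }

    @Schedule(hour = "6", minute = "0")   // 08:00 CEST in UTC (assumed; verify for your offset)
    public void sendMailsWithReports() {
        LOG.log(Level.INFO, "Generating reports started");
        // ...
        LOG.log(Level.INFO, "Generating reports finished");
    }
}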

I had the same problem with TomEE Plume 7.0.4. In my case the solution was to change @Singleton to @Stateless.

Related

When doing a redeploy of JBoss WAR with Apache Ignite, Failed to marshal custom event: StartRoutineDiscoveryMessage

I am trying to make it so I can redeploy a JBoss 7.1.0 cluster with a WAR that uses Apache Ignite.
I am starting the cache like this:
System.setProperty("IGNITE_UPDATE_NOTIFIER", "false");
igniteConfiguration = new IgniteConfiguration();
int failureDetectionTimeout = Integer.parseInt(getProperty("IGNITE_TCP_DISCOVERY_FAILURE_DETECTION_TIMEOUT", "60000"));
igniteConfiguration.setFailureDetectionTimeout(failureDetectionTimeout);
String igniteVmIps = getProperty("IGNITE_VM_IPS");
List<String> addresses = Arrays.asList("127.0.0.1:47500");
if (StringUtils.isNotBlank(igniteVmIps)) {
addresses = Arrays.asList(igniteVmIps.split(","));
}
int networkTimeout = Integer.parseInt(getProperty("IGNITE_TCP_DISCOVERY_NETWORK_TIMEOUT", "60000"));
boolean failureDetectionTimeoutEnabled = Boolean.parseBoolean(getProperty("IGNITE_TCP_DISCOVERY_FAILURE_DETECTION_TIMEOUT_ENABLED", "true"));
int tcpDiscoveryLocalPort = Integer.parseInt(getProperty("IGNITE_TCP_DISCOVERY_LOCAL_PORT", "47500"));
int tcpDiscoveryLocalPortRange = Integer.parseInt(getProperty("IGNITE_TCP_DISCOVERY_LOCAL_PORT_RANGE", "0"));
TcpDiscoverySpi tcpDiscoverySpi = new TcpDiscoverySpi();
tcpDiscoverySpi.setLocalPort(tcpDiscoveryLocalPort);
tcpDiscoverySpi.setLocalPortRange(tcpDiscoveryLocalPortRange);
tcpDiscoverySpi.setNetworkTimeout(networkTimeout);
tcpDiscoverySpi.failureDetectionTimeoutEnabled(failureDetectionTimeoutEnabled);
TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
ipFinder.setAddresses(addresses);
tcpDiscoverySpi.setIpFinder(ipFinder);
igniteConfiguration.setDiscoverySpi(tcpDiscoverySpi);
Ignite ignite = Ignition.start(igniteConfiguration);
ignite.cluster().active(true);
Then I am stopping the cache when the application undeploys:
ignite.close();
When I try to redeploy, I get the following error during initialization.
org.apache.ignite.spi.IgniteSpiException: Failed to marshal custom event: StartRoutineDiscoveryMessage [startReqData=StartRequestData [prjPred=org.apache.ignite.internal.cluster.ClusterGroupAdapter$CachesFilter#7385a997, clsName=null, depInfo=null, hnd=org.apache.ignite.internal.GridEventConsumeHandler#2aec6952, bufSize=1, interval=0, autoUnsubscribe=true], keepBinary=false, deserEx=null, routineId=bbe16e8e-2820-4ba0-a958-d5f644498ba2]
If I do a full restart of the server, it starts up fine.
Am I missing some magic in the shutdown process?
I see what I did wrong, and it was code I omitted from the ticket.

ignite.events(ignite.cluster().forCacheNodes(cacheConfig.getKey())).remoteListen(locLsnr, rmtLsnr,
        EVT_CACHE_OBJECT_PUT, EVT_CACHE_OBJECT_READ, EVT_CACHE_OBJECT_REMOVED);

When this code tried to register the listener a second time, it caused that strange error.
I put a try-catch-ignore around it for now and things seem to be OK.
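A hedged sketch of that workaround, reusing the author's variables (ignite, cacheConfig, locLsnr, rmtLsnr) and assuming the duplicate registration surfaces as an IgniteException:

import org.apache.ignite.IgniteException;

// ... inside the deployment startup code, after Ignition.start(...):
try {
    ignite.events(ignite.cluster().forCacheNodes(cacheConfig.getKey()))
          .remoteListen(locLsnr, rmtLsnr,
                  EVT_CACHE_OBJECT_PUT, EVT_CACHE_OBJECT_READ, EVT_CACHE_OBJECT_REMOVED);
} catch (IgniteException e) {
    // Listener already registered by a previous deployment of this WAR; ignore for now.
    // A cleaner shutdown would keep the UUID returned by remoteListen(...) and call
    // ignite.events(...).stopRemoteListen(uuid) when the application undeploys.
}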

Why am I getting "Value is null" while executing two scenarios in Gatling?

I have two scenarios in my script. The first, "getAssets", fetches all asset IDs and saves them in a list; the second, "fetchMetadata", iterates over those IDs.
I have to execute the "getAssets" scenario only once to fetch all the IDs, and then the "fetchMetadata" scenario will execute for the given duration.
Here is the JSON response of the "/api/assets;limit=$limit" request (we fetch the id values from it using $.assets[*].id):
{
    "assets": [
        {
            "id": 3010411,
            "name": "Asset 2016-11-22 20:06:07",
            ....
            ....
        },
        {
            "id": 3010231,
            "name": "Asset 2016-11-22 20:07:07",
            ....
            ....
        }, and so on..
Here is the code
import java.util.concurrent.ThreadLocalRandom
import scala.concurrent.duration._
import io.gatling.core.Predef._
import io.gatling.http.Predef._

class getAssetsMetadata extends Simulation {

  val getAssetURL = System.getProperty("getAssetURL", "https://performancetesting.net")
  val username = System.getProperty("username", "performanceuser")
  val password = System.getProperty("password", "performanceuser")
  val limit = Integer.getInteger("limit", 1000).toInt
  val userCount = Integer.getInteger("userCount", 100).toInt
  val duration = Integer.getInteger("duration", 1).toInt // in minutes

  var IdList: Seq[String] = _

  val httpProtocol = http
    .basicAuth(username, password)
    .baseURL(getAssetURL)
    .contentTypeHeader("""application/vnd.v1+json""")

  // Scenario 1: get assets
  val getAssets = scenario("Get Assets")
    .exec(http("List of Assets")
      .get(s"""/api/assets;limit=$limit""")
      .check(jsonPath("$.assets[*].id").findAll.transform { v => IdList = v; v }.saveAs("IdList"))
    )

  // Scenario 2: fetch metadata
  val fetchMetadata = scenario("Fetch Metadata")
    .exec(_.set("IdList", IdList))
    .exec(http("Metadata Request")
      .get("""/api/assets/${IdList.random()}/metadata""")
    )

  val scn = List(getAssets.inject(atOnceUsers(1)), fetchMetadata.inject(constantUsersPerSec(userCount) during (duration minutes)))

  setUp(scn).protocols(httpProtocol)
}
:::ERROR:::
It throws "Value is null" (while we have 10 million asset IDs here). Here is the Gatling log:
14883 [GatlingSystem-akka.actor.default-dispatcher-4] INFO io.gatling.http.config.HttpProtocol - Warm up done
14907 [GatlingSystem-akka.actor.default-dispatcher-4] INFO io.gatling.http.config.HttpProtocol - Start warm up
14909 [GatlingSystem-akka.actor.default-dispatcher-4] INFO io.gatling.http.config.HttpProtocol - Warm up done
14911 [GatlingSystem-akka.actor.default-dispatcher-4] INFO i.g.core.controller.Controller - Total number of users : 6001
14918 [GatlingSystem-akka.actor.default-dispatcher-6] INFO i.g.c.r.writer.ConsoleDataWriter - Initializing
14918 [GatlingSystem-akka.actor.default-dispatcher-3] INFO i.g.c.result.writer.FileDataWriter - Initializing
14923 [GatlingSystem-akka.actor.default-dispatcher-6] INFO i.g.c.r.writer.ConsoleDataWriter - Initialized
14928 [GatlingSystem-akka.actor.default-dispatcher-3] INFO i.g.c.result.writer.FileDataWriter - Initialized
14931 [GatlingSystem-akka.actor.default-dispatcher-4] DEBUG i.g.core.controller.Controller - Launching All Scenarios
14947 [GatlingSystem-akka.actor.default-dispatcher-12] ERROR i.g.http.action.HttpRequestAction - 'httpRequest-2' failed to execute: Value is null
14954 [GatlingSystem-akka.actor.default-dispatcher-4] DEBUG i.g.core.controller.Controller - Finished Launching scenarios executions
14961 [GatlingSystem-akka.actor.default-dispatcher-4] DEBUG i.g.core.controller.Controller - Setting up max duration
14962 [GatlingSystem-akka.actor.default-dispatcher-4] INFO i.g.core.controller.Controller - Start user #7187317726850756780-0
14963 [GatlingSystem-akka.actor.default-dispatcher-4] INFO i.g.core.controller.Controller - Start user #7187317726850756780-1
14967 [GatlingSystem-akka.actor.default-dispatcher-4] INFO i.g.core.controller.Controller - End user #7187317726850756780-1
14970 [GatlingSystem-akka.actor.default-dispatcher-4] INFO i.g.core.controller.Controller - Start user #7187317726850756780-2
14970 [GatlingSystem-akka.actor.default-dispatcher-5] INFO io.gatling.http.ahc.HttpEngine - Sending request=List of Assets uri=https://performancetesting.net/api/assets;limit=1000: scenario=Get Assets, userId=7187317726850756780-0
14970 [GatlingSystem-akka.actor.default-dispatcher-7] ERROR i.g.http.action.HttpRequestAction - 'httpRequest-2' failed to execute: Value is null
14972 [GatlingSystem-akka.actor.default-dispatcher-7] INFO i.g.core.controller.Controller - End user #7187317726850756780-2
14980 [GatlingSystem-akka.actor.default-dispatcher-7] ERROR i.g.http.action.HttpRequestAction - 'httpRequest-2' failed to execute: Value is null
14980 [GatlingSystem-akka.actor.default-dispatcher-4] INFO i.g.core.controller.Controller - Start user #7187317726850756780-3
14984 [GatlingSystem-akka.actor.default-dispatcher-4] INFO i.g.core.controller.Controller - End user #7187317726850756780-3
.....
.....
.....
61211 [GatlingSystem-akka.actor.default-dispatcher-12] INFO i.g.core.controller.Controller - Start user #7187317726850756780-4626
61211 [GatlingSystem-akka.actor.default-dispatcher-7] ERROR i.g.http.action.HttpRequestAction - 'httpRequest-2' failed to execute: Value is null
61211 [GatlingSystem-akka.actor.default-dispatcher-7] INFO i.g.core.controller.Controller - End user #7187317726850756780-4626
61224 [GatlingSystem-akka.actor.default-dispatcher-2] INFO i.g.core.controller.Controller - Start user #7187317726850756780-4627
61225 [GatlingSystem-akka.actor.default-dispatcher-5] INFO io.gatling.http.ahc.HttpEngine - Sending request=Metadata Request uri=https://performancetesting.net/api/assets/3010320/metadata: scenario=Fetch Metadata, userId=7187317726850756780-4627
61230 [GatlingSystem-akka.actor.default-dispatcher-12] INFO i.g.core.controller.Controller - Start user #7187317726850756780-4628
61230 [GatlingSystem-akka.actor.default-dispatcher-7] INFO io.gatling.http.ahc.HttpEngine - Sending request=Metadata Request uri=https://performancetesting.net/api/assets/3009939/metadata: scenario=Fetch Metadata, userId=7187317726850756780-4628
61233 [GatlingSystem-akka.actor.default-dispatcher-2] INFO i.g.core.controller.Controller - End user #7187317726850756780-0
61233 [New I/O worker #12] DEBUG c.n.h.c.p.netty.handler.Processor - Channel Closed: [id: 0x8c94a1ae, /192.168.100.108:56739 :> performancetesting.net/10.20.14.176:443] with attribute INSTANCE
---- Requests ------------------------------------------------------------------
> Global (OK=261 KO=40 )
> Metadata Request (OK=260 KO=40 )
> List of Assets (OK=1 KO=0 )
---- Errors --------------------------------------------------------------------
> Value is null 40 (100.0%)
================================================================================
Thank you.
The list of IDs is not passed to the other scenario, because it is obtained by another "user" (session).
Your simulation reads like the following:
1 user obtains the list of IDs (and keeps the list to itself)
AT THE SAME TIME, n users try to fetch assets with an (undefined) list of IDs
The first user doesn't talk to the others; both scenario injections are independent of each other.
One way to solve this is to store the list in a shared container (e.g. an AtomicReference) and access that container from the second scenario, as in the sketch below.
To ensure the container is populated, inject a nothingFor step at the beginning of the second scenario, to wait for the first scenario to finish.
Another way is to fetch the list of IDs at the beginning of the second scenario, if it has not been fetched before (see the container above).
I'm sure there are other ways to accomplish this as well (e.g. using a feeder and fetching the list of IDs before the actual test).
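Gatling simulations are written in Scala, but the shared-container idea is plain JVM. Here is a minimal, framework-free Java sketch of the publish-once/read-many shape (all names hypothetical):

import java.util.Arrays;
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

public class SharedIdListSketch {

    // One container shared by both "scenarios"; starts empty.
    private static final AtomicReference<List<String>> IDS = new AtomicReference<>();

    // Scenario 1 publishes the fetched IDs exactly once.
    static void publishIds(List<String> fetched) {
        IDS.set(fetched);
    }

    // Scenario 2 reads them; callers must wait (e.g. via nothingFor) until published.
    static List<String> readIds() {
        List<String> ids = IDS.get();
        if (ids == null) {
            throw new IllegalStateException("IDs not fetched yet");
        }
        return ids;
    }

    public static void main(String[] args) {
        publishIds(Arrays.asList("3010411", "3010231"));
        System.out.println(readIds()); // [3010411, 3010231]
    }
}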

Weblogic MBeanServer provides the value only after 15 secs

I invoke my custom monitor registered on the WebLogic MBeanServer, but WebLogic gives me the updated value only after 15 seconds.
Does WebLogic cache the call?
Found it!
I had marked my MBean with the following (Spring) annotation. Its currencyTimeLimit = 15 tells Spring JMX to treat the attribute value as current for 15 seconds, which is exactly the caching I was seeing:

@ManagedResource(
    objectName = "bean:name=obuInterfaceMonitor", description = "obuInterface Monitor", log = true,
    logFile = "jmx.log", currencyTimeLimit = 15, persistPolicy = "OnUpdate", persistPeriod = 200, persistLocation = "interfaceMonitor", persistName = "bar"
)
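A hedged sketch of the fix under that reading: drop currencyTimeLimit so values are not cached, and expose the attribute normally. Class and attribute names here are made up, not from the original question:

import org.springframework.jmx.export.annotation.ManagedAttribute;
import org.springframework.jmx.export.annotation.ManagedResource;

// Same MBean without the 15-second cache window (hypothetical names).
@ManagedResource(objectName = "bean:name=obuInterfaceMonitor",
        description = "obuInterface Monitor")
public class ObuInterfaceMonitor {

    private volatile long messageCount;

    @ManagedAttribute(description = "Live value, re-read on every JMX request")
    public long getMessageCount() {
        return messageCount;
    }

    public void increment() {
        messageCount++;
    }
}

If some caching is actually desired, a smaller currencyTimeLimit would shorten the window instead of removing it.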

quartz.properties can't access MySQL: Could not load driverClass com.mysql.jdbc.Driver

I am struggling to resolve this matter. I have tried to run a small test to connect to my MySQL database and it works, but when I try to work with quartz.properties it cannot access MySQL.
I am sure my JDBC MySQL driver is fine; outside of quartz.properties I can access my MySQL database.
I need help. I have worked on this for days now.
Here is the part I suspect has the problem:
#The details of the datasource specified previously
org.quartz.dataSource.myDS.driver = com.mysql.jdbc.Driver
org.quartz.dataSource.myDS.URL = jdbc:mysql://10.1.1.111:3306/somedatabase
org.quartz.dataSource.myDS.user = username
org.quartz.dataSource.myDS.password = somepassword
org.quartz.dataSource.myDS.maxConnections = 20
error output:
207 [main] INFO org.quartz.impl.StdSchedulerFactory - Quartz scheduler 'MyScheduler' initialized from default file in current working dir: 'quartz.properties'
208 [main] INFO org.quartz.impl.StdSchedulerFactory - Quartz scheduler version: 2.2.1
267 [main] DEBUG com.mchange.v2.resourcepool.BasicResourcePool - incremented pending_acquires: 1
267 [main] DEBUG com.mchange.v2.resourcepool.BasicResourcePool - incremented pending_acquires: 2
268 [main] DEBUG com.mchange.v2.resourcepool.BasicResourcePool - incremented pending_acquires: 3
268 [main] DEBUG com.mchange.v2.resourcepool.BasicResourcePool - com.mchange.v2.resourcepool.BasicResourcePool#42af94c4 config: [start -> 3; min -> 1; max -> 20; inc -> 3; num_acq_attempts -> 30; acq_attempt_delay -> 1000; check_idle_resources_delay -> 0; mox_resource_age -> 0; max_idle_time -> 0; excess_max_idle_time -> 0; destroy_unreturned_resc_time -> 0; expiration_enforcement_delay -> 0; break_on_acquisition_failure -> false; debug_store_checkout_exceptions -> false]
269 [com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#2] WARN com.mchange.v2.c3p0.DriverManagerDataSource - Could not load driverClass com.mysql.jdbc.Driver
java.lang.ClassNotFoundException: com.mysql.jdbc.Driver
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:190)
Here is my Quartz code:
public class QuartzDaily {

    static Logger log = Logger.getLogger("QuartzDaily");

    public static void main(String[] args) throws ParseException, SchedulerException, IOException, NoSuchAlgorithmException {
        // lines of code to configure the log
        DateFormat format3 = new SimpleDateFormat("MM_dd_yyyy_HH_mm");
        Date dateToday = new Date();
        String strToday = format3.format(dateToday);
        BasicConfigurator.configure();
        PatternLayout pattern = new PatternLayout("%r [%t] %-5p %c %x - %m%n");
        FileAppender fileappender = new FileAppender(pattern, "log\\QuartzScheduler_" + strToday + ".txt");
        log.addAppender(fileappender);
        log.info("QuartzReport; main(): [** Starting scheduler services **]");

        // run quartz job
        quartzWeekly();
    }

    public static void quartzWeekly() throws SchedulerException {
        // some quartz job code
    }

    public static class TestJob implements Job {
        @Override
        public void execute(JobExecutionContext context) throws JobExecutionException {
            System.out.println("hello world");
        }
    }
}
Below is the full properties file. The contents are not mine; they were taken from a random tutorial site.
#Skip Update Check
#Quartz contains an "update check" feature that connects to a server to
#check if there is a new version of Quartz available
org.quartz.scheduler.skipUpdateCheck: true
org.quartz.scheduler.instanceName =MyScheduler
org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount = 4
org.quartz.threadPool.threadsInheritContextClassLoaderOfInitializingThread = true
org.quartz.threadPool.threadPriority = 5
#specify the jobstore used
org.quartz.jobStore.misfireThreshold = 60000
org.quartz.jobStore.class = org.quartz.simpl.RAMJobStore
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.StdJDBCDelegate
org.quartz.jobStore.useProperties = false
#specify the TerracottaJobStore
#org.quartz.jobStore.class = org.terracotta.quartz.TerracottaJobStore
#org.quartz.jobStore.tcConfigUrl = localhost:9510
#The datasource for the jobstore that is to be used
org.quartz.jobStore.dataSource = myDS
#quartz table prefixes in the database
org.quartz.jobStore.tablePrefix = qrtz_
org.quartz.jobStore.misfireThreshold = 60000
org.quartz.jobStore.isClustered = false
#The details of the datasource specified previously
org.quartz.dataSource.myDS.driver =com.mysql.jdbc.Driver
org.quartz.dataSource.myDS.URL =jdbc:mysql://10.1.1.11:3306/somedatabase
org.quartz.dataSource.myDS.user =username
org.quartz.dataSource.myDS.password =somepassword
org.quartz.dataSource.myDS.maxConnections = 20
#Clustering
#clustering currently only works with the JDBC-Jobstore (JobStoreTX
#or JobStoreCMT). Features include load-balancing and job fail-over
#(if the JobDetail's "request recovery" flag is set to true). It is important to note that
# When using clustering on separate machines, make sure that their clocks are synchronized
#using some form of time-sync service (clocks must be within a second of each other).
#See http://www.boulder.nist.gov/timefreq/service/its.htm.
# Never fire-up a non-clustered instance against the same set of tables that any
#other instance is running against.
# Each instance in the cluster should use the same copy of the quartz.properties file.
org.quartz.jobStore.isClustered = true
org.quartz.jobStore.clusterCheckinInterval = 20000
#Each server must have the same copy of the configuration file.
#auto-generate instance ids.
org.quartz.scheduler.instanceId = AUTO
#Note :
#If a job throws an exception, Quartz will typically immediately
#re-execute it (and it will likely throw the same exception again).
#It's better if the job catches all exception it may encounter, handle them,
#and reschedule itself, or other jobs. to work around the issue.
I had the exact same issue and I fixed it by adding the MySQL connector JAR in WEB-INF/lib. Please make sure that WEB-INF is marked as a source folder in the build path.
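If in doubt, here is a quick standalone check (a sketch, not part of the original answer) that fails the same way c3p0 does when the MySQL driver JAR is missing from the runtime classpath:

public class DriverCheck {
    public static void main(String[] args) {
        try {
            // Same lookup c3p0 performs for org.quartz.dataSource.myDS.driver
            Class.forName("com.mysql.jdbc.Driver");
            System.out.println("MySQL driver found on the classpath");
        } catch (ClassNotFoundException e) {
            System.out.println("Driver missing - add mysql-connector-java to the runtime classpath");
        }
    }
}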

ejb bean instance pool jboss EAP 6.1

In our project we are migrating from JBoss 5 to JBoss EAP 6.1.
While going through the configuration to be used in JBoss EAP 6.1, I stumbled upon the following:
<pools>
    <bean-instance-pools>
        <strict-max-pool name="slsb-strict-max-pool" max-pool-size="20" instance-acquisition-timeout="1" instance-acquisition-timeout-unit="MILLISECONDS"/>
        <strict-max-pool name="mdb-strict-max-pool" max-pool-size="20" instance-acquisition-timeout="1" instance-acquisition-timeout-unit="MILLISECONDS"/>
    </bean-instance-pools>
</pools>
I am not clear about the max-pool-size attribute. Is this a limit of 20 instances per stateless EJB deployed on JBoss, or will the pool only go up to 20 instances in total, irrespective of the number of stateless EJBs?
I don't agree with eis. Here is the code from WildFly 8.2.1, StatelessSessionComponent.java:
public StatelessSessionComponent(final StatelessSessionComponentCreateService slsbComponentCreateService) {
    super(slsbComponentCreateService);

    StatelessObjectFactory<StatelessSessionComponentInstance> factory = new StatelessObjectFactory<StatelessSessionComponentInstance>() {
        @Override
        public StatelessSessionComponentInstance create() {
            return (StatelessSessionComponentInstance) createInstance();
        }

        @Override
        public void destroy(StatelessSessionComponentInstance obj) {
            obj.destroy();
        }
    };

    final PoolConfig poolConfig = slsbComponentCreateService.getPoolConfig();
    if (poolConfig == null) {
        ROOT_LOGGER.debug("Pooling is disabled for Stateless EJB " + slsbComponentCreateService.getComponentName());
        this.pool = null;
        this.poolName = null;
    } else {
        ROOT_LOGGER.debug("Using pool config " + poolConfig + " to create pool for Stateless EJB " + slsbComponentCreateService.getComponentName());
        this.pool = poolConfig.createPool(factory);
        this.poolName = poolConfig.getPoolName();
    }

    this.timeoutMethod = slsbComponentCreateService.getTimeoutMethod();
    this.weakAffinity = slsbComponentCreateService.getWeakAffinity();
}
As I see it, pool is a non-static field and is created for every component type (EJB class).
The Red Hat documentation says it is
"the maximum size of the bean pool."
Also, if you go to the admin panel of EAP and go to Profile -> Container -> EJB3 -> Bean Pools -> "Need Help?", it says:
"Max Pool Size: The maximum number of bean instances that the pool can hold at a given point in time."
I would interpret that to mean the pool will only go up to 20 instances.
Edit: in retrospect, the answer by Sergey Kosarev saying it is per instance seems convincing enough that you should probably believe that instead.
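To make the per-component reading concrete, here is a hypothetical illustration (bean names made up, each class in its own file): under that reading, each bean type gets its own strict-max-pool of 20 instances rather than sharing one global pool of 20.

// OrderService.java
import javax.ejb.Stateless;

@Stateless
public class OrderService {
    public void placeOrder() { /* served by up to 20 pooled OrderService instances */ }
}

// InvoiceService.java
import javax.ejb.Stateless;

@Stateless
public class InvoiceService {
    public void sendInvoice() { /* another pool of 20, independent of OrderService */ }
}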
