How can I disable jobs in the Quartz JDBCJobStore? - java

What is the best way to disable a job in the JDBCJobStore without deleting its job or trigger records and without wiping the cron expression?

Use scheduler.pauseJob() or scheduler.pauseTrigger().
Alternatively, you can pause triggers directly in the database with SQL (note the single quotes around the string literal; add a WHERE clause unless you really mean to pause every trigger):
UPDATE QRTZ_TRIGGERS SET TRIGGER_STATE = 'PAUSED'

Use the pauseJob(JobKey) or pauseJobs(GroupMatcher) methods of the Scheduler, which are carried out by the underlying JobStore.
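A minimal sketch of the API route, assuming Quartz 2.x; the scheduler setup and the job/trigger names are hypothetical placeholders:

import org.quartz.JobKey;
import org.quartz.Scheduler;
import org.quartz.TriggerKey;
import org.quartz.impl.StdSchedulerFactory;

Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();

// Pause every trigger of one job: the JDBCJobStore flips TRIGGER_STATE
// to PAUSED without deleting any rows or touching the cron expression.
scheduler.pauseJob(JobKey.jobKey("myJob", "myGroup"));

// ...or pause a single trigger:
scheduler.pauseTrigger(TriggerKey.triggerKey("myTrigger", "myGroup"));

// Resume later; nothing has to be re-created:
scheduler.resumeJob(JobKey.jobKey("myJob", "myGroup"));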

Related

Why Spark dataframe cache doesn't work here

I just wrote a toy class to test Spark dataframe (actually Dataset since I'm using Java).
Dataset<Row> ds = spark.sql("select id,name,gender from test2.dummy where dt='2018-12-12'");
ds = ds.withColumn("dt", lit("2018-12-17"));
ds.cache();
ds.write().mode(SaveMode.Append).insertInto("test2.dummy");
//
System.out.println(ds.count());
According to my understanding, there are two actions here: insertInto and count.
I debugged the code step by step; when running insertInto, I see several lines of:
19/01/21 20:14:56 INFO FileScanRDD: Reading File path: hdfs://ip:9000/root/hive/warehouse/test2.db/dummy/dt=2018-12-12/000000_0, range: 0-451, partition values: [2018-12-12]
When running "count", I still see similar logs:
19/01/21 20:15:26 INFO FileScanRDD: Reading File path: hdfs://ip:9000/root/hive/warehouse/test2.db/dummy/dt=2018-12-12/000000_0, range: 0-451, partition values: [2018-12-12]
I have two questions:
1) When there are two actions on the same dataframe like above, if I don't call ds.cache or ds.persist explicitly, will the second action always cause the SQL query to be re-executed?
2) If I understand the log correctly, both actions trigger HDFS file reading. Does that mean ds.cache() actually doesn't work here? If so, why doesn't it work?
Many thanks.
It's because you append into the table that ds is created from, so ds needs to be recomputed because the underlying data changed. In such cases, Spark invalidates the cache. See, for example, this Jira (https://issues.apache.org/jira/browse/SPARK-24596):
When invalidating a cache, we invalidate other caches dependent on this cache to ensure cached data is up to date. For example, when the underlying table has been modified or the table has been dropped itself, all caches that use this table should be invalidated or refreshed.
Try running ds.count() before inserting into the table.
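A sketch of that reordering against the code from the question (Java Spark API; this assumes spark is the existing SparkSession):

import static org.apache.spark.sql.functions.lit;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;

Dataset<Row> ds = spark.sql("select id, name, gender from test2.dummy where dt='2018-12-12'")
        .withColumn("dt", lit("2018-12-17"))
        .cache();
long n = ds.count(); // action runs first, materializing the cache before the table changes
ds.write().mode(SaveMode.Append).insertInto("test2.dummy");
System.out.println(n);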
I found that the other answer doesn't work. What I had to do was break lineage, so that the dataframe I was writing does not know that one of its sources is the table I am writing to. To break lineage, I created a copy of the dataframe (PySpark; note the method name is createDataFrame):
copy_of_df = sql_context.createDataFrame(df.rdd)
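In the Java API used in the question, a comparable lineage break might look like this (a sketch; rebuilding the Dataset from its RDD and original schema severs the logical plan's link to the source table):

// copyOfDf no longer references the source table in its plan
Dataset<Row> copyOfDf = spark.createDataFrame(df.rdd(), df.schema());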

How to turn off WAL in hbase 2.0.0 with java API?

I wonder if there is any way to disable WAL (write-ahead log) operations when inserting new data into an HBase table with the Java API?
Thank you for your help :)
In HBase 2.0.0:
To skip the WAL at the individual update level (for a single Put or Delete):
Put p = new Put(ROW_ID).addColumn(FAMILY, NAME, VALUE).setDurability(Durability.SKIP_WAL);
To set this for the entire table (so you don't have to do it each time for each update):
TableDescriptorBuilder tBuilder = TableDescriptorBuilder.newBuilder(TableName.valueOf(TABLE_ID));
tBuilder.setDurability(Durability.SKIP_WAL);
// ... continue building the table descriptor
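A more complete sketch of the per-update route, assuming the HBase 2.0 client API; the configuration, table, and cell names here are hypothetical:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Durability;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

Configuration conf = HBaseConfiguration.create();
try (Connection conn = ConnectionFactory.createConnection(conf);
     Table table = conn.getTable(TableName.valueOf("my_table"))) {
    Put p = new Put(Bytes.toBytes("row-1"))
            .addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v"))
            .setDurability(Durability.SKIP_WAL); // this single write bypasses the WAL
    table.put(p);
    // Caveat: data written with SKIP_WAL is lost if the region server
    // crashes before the memstore is flushed.
}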
Hope this helps

Disable SQL escape in Apache Ignite

I've tried
CacheConfiguration<?, ?> cacheCfg = new CacheConfiguration<>(cacheTemplateName).setSqlSchema("PUBLIC"); //create table can only be executed on public schema
cacheCfg.setSqlEscapeAll(false); //otherwise ignite tries to quote after we've quoted and there are cases we have to quote before ignite gets it
cacheCfg.setCacheMode(CacheMode.PARTITIONED);
ignite.addCacheConfiguration(cacheCfg); //required to register cacheTemplateName as a template, see WITH section of https://apacheignite-sql.readme.io/docs/create-table
Unfortunately nothing I try seems to work.
I've debugged through and isSqlEscapeAll() always returned true.
FYI in the CREATE TABLE statement I've set TEMPLATE=MyTPLName.
Is it possible to disable this behaviour? My queries are already appropriately quoted.
This flag doesn't work for dynamic caches, since it could cause ambiguity around table names; the details are described in this thread on the Ignite dev list: http://apache-ignite-developers.2346864.n4.nabble.com/Remove-deprecate-CacheConfiguration-sqlEscapeAll-property-td17966.html
By the way, what is the problem you want to solve using this flag?

Mongo DB, Java Mongo API, how to add hint into aggregate command

I'm stuck on Mongo's $hint command.
I have a collection and I have indexed it. The problem is that I query the collection with the aggregation framework, but I want to temporarily disable index usage, so I use the hint command like this:
db.runCommand(
  {
    aggregate: "MyCollectionName",
    pipeline: [
      { $match: { ...something... } },
      { $project: { ...something... } }
    ]
  },
  { $hint: { $natural: 1 } }
)
Please note that I use {$hint: {$natural: 1}} to disable index usage for this query.
I have run this command successfully on the MongoDB command line, but I don't know how to map it to the Mongo Java API (Java code).
I am using mongo-2.10.1.jar.
Currently you can't - it is on the backlog - please vote for SERVER-7944

Why Quartz scheduler's unscheduleJob is deleting both trigger and job detail?

I'm trying to execute the following Quartz scheduler code in a cluster environment.
scheduler.unscheduleJob("genericJobTrigger", "DEFAULT");
where:
Scheduler scheduler = (Scheduler) context.getBean("scheduler");
JobDetail genericJob = (JobDetail) context.getBean("genericJob");
CronTrigger genericJobTrigger = (CronTrigger) context.getBean("genericJobTrigger");
The above piece of code deletes entries from both the trigger and the job detail tables. It is supposed to remove only the trigger, right?
Durability must be set to true on jobs to avoid deleting the jobs when their triggers are deleted.
Whenever you create a JobDetail, call storeDurably(); see the example below:
return JobBuilder.newJob(ScheduledJob.class)
        .setJobData(jobDataMap)
        .withDescription("job executes at specified frequency")
        .withIdentity(UUID.randomUUID().toString(), "email-jobs")
        .storeDurably() // prevents the job from being deleted automatically when its triggers are removed
        .build();
You can also verify this by checking the value of the IS_DURABLE column in the job details table (QRTZ_JOB_DETAILS).
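A short sketch tying this back to the question, assuming Quartz 2.x; the names mirror the question's beans:

JobDetail job = JobBuilder.newJob(ScheduledJob.class)
        .withIdentity("genericJob", "DEFAULT")
        .storeDurably()
        .build();
scheduler.addJob(job, true); // store (or replace) the job independently of any trigger
scheduler.unscheduleJob(TriggerKey.triggerKey("genericJobTrigger", "DEFAULT"));
// The trigger row is removed, but the durable job remains in QRTZ_JOB_DETAILS.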
