Sorting RDD by key [duplicate] - java

As per the Spark documentation, only RDD actions can trigger a Spark job; transformations are lazily evaluated and run only when an action is called.
Yet I see that the sortBy transformation is applied immediately and shows up as a job in the Spark UI. Why?

sortBy is implemented using sortByKey, which depends on a RangePartitioner (JVM) or a partitioning function (Python). When you call sortBy / sortByKey, the partitioner (partitioning function) is initialized eagerly and samples the input RDD to compute partition boundaries. The job you see corresponds to this sampling process.
The actual sorting is performed only when you execute an action on the newly created RDD or its descendants.
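For illustration, here is a minimal Java sketch (assuming an existing JavaSparkContext named jsc) that separates the two phases: the small sampling job is launched at the sortBy call, while the full shuffle and sort only run at the action:
// the sortBy call eagerly runs a small sampling job (this is the job visible in the UI)
JavaRDD<Integer> sorted = jsc.parallelize(Arrays.asList(5, 3, 8, 1, 9))
        .sortBy(x -> x, true, 4);   // identity key, ascending, 4 partitions
// the actual sort is executed only when an action runs on the result (or a descendant)
List<Integer> result = sorted.collect();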

As per the Spark documentation, only actions trigger a job in Spark; transformations are lazily evaluated when an action is called.
In general you're right, but as you've just experienced, there are a few exceptions, and sortBy is among them (along with zipWithIndex).
As a matter of fact, it was reported in Spark's JIRA and closed with a Won't Fix resolution. See SPARK-1021 sortByKey() launches a cluster job when it shouldn't.
You can see the job running with DAGScheduler logging enabled (and later in the web UI):
scala> sc.parallelize(0 to 8).sortBy(identity)
INFO DAGScheduler: Got job 1 (sortBy at <console>:25) with 8 output partitions
INFO DAGScheduler: Final stage: ResultStage 1 (sortBy at <console>:25)
INFO DAGScheduler: Parents of final stage: List()
INFO DAGScheduler: Missing parents: List()
DEBUG DAGScheduler: submitStage(ResultStage 1)
DEBUG DAGScheduler: missing: List()
INFO DAGScheduler: Submitting ResultStage 1 (MapPartitionsRDD[4] at sortBy at <console>:25), which has no missing parents
DEBUG DAGScheduler: submitMissingTasks(ResultStage 1)
INFO DAGScheduler: Submitting 8 missing tasks from ResultStage 1 (MapPartitionsRDD[4] at sortBy at <console>:25)
DEBUG DAGScheduler: New pending partitions: Set(0, 1, 5, 2, 6, 3, 7, 4)
INFO DAGScheduler: ResultStage 1 (sortBy at <console>:25) finished in 0.013 s
DEBUG DAGScheduler: After removal of stage 1, remaining stages = 0
INFO DAGScheduler: Job 1 finished: sortBy at <console>:25, took 0.019755 s
res1: org.apache.spark.rdd.RDD[Int] = MapPartitionsRDD[6] at sortBy at <console>:25


Activiti Job Executor problem with async serviceTasks (activiti >= 5.17)

Please consider the following diagram
MyProcess.bpmn
<?xml version="1.0" encoding="UTF-8"?>
<definitions xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:activiti="http://activiti.org/bpmn" xmlns:bpmndi="http://www.omg.org/spec/BPMN/20100524/DI" xmlns:omgdc="http://www.omg.org/spec/DD/20100524/DC" xmlns:omgdi="http://www.omg.org/spec/DD/20100524/DI" typeLanguage="http://www.w3.org/2001/XMLSchema" expressionLanguage="http://www.w3.org/1999/XPath" targetNamespace="http://www.activiti.org/test">
<process id="myProcess" name="My process" isExecutable="true">
<startEvent id="startevent1" name="Start"></startEvent>
<userTask id="evl" name="Evaluation"></userTask>
<boundaryEvent id="timer_event_autocomplete" name="Timer" attachedToRef="evl" cancelActivity="false">
<timerEventDefinition>
<timeDate>PT2S</timeDate>
</timerEventDefinition>
</boundaryEvent>
<serviceTask id="timer_service" name="Timed Autocomplete" activiti:async="true" activiti:class="com.example.service.TimerService"></serviceTask>
<serviceTask id="store_docs_service" name="Store Documents" activiti:async="true" activiti:class="com.example.service.StoreDocsService"></serviceTask>
<sequenceFlow id="flow1" sourceRef="startevent1" targetRef="evl"></sequenceFlow>
<sequenceFlow id="flow2" sourceRef="timer_event_autocomplete" targetRef="timer_service"></sequenceFlow>
<sequenceFlow id="flow3" sourceRef="evl" targetRef="store_docs_service"></sequenceFlow>
<sequenceFlow id="flow4" sourceRef="store_docs_service" targetRef="endevent1"></sequenceFlow>
<endEvent id="endevent1" name="End"></endEvent>
</process>
</definitions>
To describe it in words: there is one user task (Evaluation) with a timer attached to it (configured to trigger after 2 seconds). When the timer fires, the Timed Autocomplete async service task, through its Java delegate TimerService, tries to complete the user task (Evaluation). Once the user task (Evaluation) is completed, the flow moves to the other async service task (Store Documents), its Java delegate StoreDocsService is called, and the flow ends.
TimerService.java
public class TimerService implements JavaDelegate {
Logger LOGGER = LoggerFactory.getLogger(TimerService.class);
@Override
public void execute(DelegateExecution execution) throws Exception {
LOGGER.info("*** Executing Timer autocomplete ***");
Task task = execution.getEngineServices().getTaskService().createTaskQuery().active().singleResult();
execution.getEngineServices().getTaskService().complete(task.getId());
LOGGER.info("*** Task: {} autocompleted by timer ***", task.getId());
}
}
StoreDocsService.java
public class StoreDocsService implements JavaDelegate {
Logger LOGGER = LoggerFactory.getLogger(StoreDocsService.class);
@Override
public void execute(DelegateExecution execution) throws Exception {
LOGGER.info("*** Executing Store Documents ***");
}
}
App.java
public class App
{
public static void main( String[] args ) throws Exception
{
// DefaultAsyncJobExecutor demoAsyncJobExecutor = new DefaultAsyncJobExecutor();
// demoAsyncJobExecutor.setCorePoolSize(10);
// demoAsyncJobExecutor.setMaxPoolSize(50);
// demoAsyncJobExecutor.setKeepAliveTime(10000);
// demoAsyncJobExecutor.setMaxAsyncJobsDuePerAcquisition(50);
ProcessEngineConfiguration cfg = new StandaloneProcessEngineConfiguration()
.setJdbcUrl("jdbc:h2:mem:activiti;DB_CLOSE_DELAY=1000")
.setJdbcUsername("sa")
.setJdbcPassword("")
.setJdbcDriver("org.h2.Driver")
.setDatabaseSchemaUpdate(ProcessEngineConfiguration.DB_SCHEMA_UPDATE_TRUE)
// .setAsyncExecutorActivate(true)
// .setAsyncExecutorEnabled(true)
// .setAsyncExecutor(demoAsyncJobExecutor)
.setJobExecutorActivate(true)
;
ProcessEngine processEngine = cfg.buildProcessEngine();
String pName = processEngine.getName();
String ver = ProcessEngine.VERSION;
System.out.println("ProcessEngine [" + pName + "] Version: [" + ver + "]");
RepositoryService repositoryService = processEngine.getRepositoryService();
Deployment deployment = repositoryService.createDeployment()
.addClasspathResource("MyProcess.bpmn").deploy();
ProcessDefinition processDefinition = repositoryService.createProcessDefinitionQuery()
.deploymentId(deployment.getId()).singleResult();
System.out.println(
"Found process definition ["
+ processDefinition.getName() + "] with id ["
+ processDefinition.getId() + "]");
final Map<String, Object> variables = new HashMap<String, Object>();
final RuntimeService runtimeService = processEngine.getRuntimeService();
ProcessInstance id = runtimeService.startProcessInstanceByKey("myProcess", variables);
System.out.println("Started Process Id: "+id.getId());
try {
final TaskService taskService = processEngine.getTaskService();
// List<Task> tasks = taskService.createTaskQuery().active().list();
// while (!tasks.isEmpty()) {
// Task task = tasks.get(0);
// taskService.complete(task.getId());
// tasks = taskService.createTaskQuery().active().list();
// }
} catch (Exception e) {
System.out.println(e.getMessage());
} finally {
}
while(!runtimeService.createExecutionQuery().list().isEmpty()) {
}
processEngine.close();
}
}
Activiti 5.15
When the timer triggers, the above diagram executes as described. We use Activiti's DefaultJobExecutor, as we can see in the logs:
[main] INFO org.activiti.engine.impl.ProcessEngineImpl - ProcessEngine default created
[main] INFO org.activiti.engine.impl.jobexecutor.JobExecutor - Starting up the JobExecutor[org.activiti.engine.impl.jobexecutor.DefaultJobExecutor].
[Thread-1] INFO org.activiti.engine.impl.jobexecutor.AcquireJobsRunnable - JobExecutor[org.activiti.engine.impl.jobexecutor.DefaultJobExecutor] starting to acquire jobs
ProcessEngine [default] Version: [5.15]
[main] INFO org.activiti.engine.impl.bpmn.deployer.BpmnDeployer - Processing resource MyProcess.bpmn
Found process definition [My process] with id [myProcess:1:3]
Started Process Id: 4
[pool-1-thread-1] INFO com.example.service.TimerService - *** Executing Timer autocomplete ***
[pool-1-thread-1] INFO com.example.service.TimerService - *** Task: 9 autocompleted by timer ***
[pool-1-thread-1] INFO com.example.service.StoreDocsService - *** Executing Store Documents ***
[main] INFO org.activiti.engine.impl.jobexecutor.JobExecutor - Shutting down the JobExecutor[org.activiti.engine.impl.jobexecutor.DefaultJobExecutor].
[Thread-1] INFO org.activiti.engine.impl.jobexecutor.AcquireJobsRunnable - JobExecutor[org.activiti.engine.impl.jobexecutor.DefaultJobExecutor] stopped job acquisition
Activiti >= 5.17
Changing only Activiti's version in pom.xml to 5.17.0 or later (tested up to 5.22.0) and executing the same code, the flow runs the timer's Java delegate, TimerService, which completes the user task (Evaluation), but the Store Documents Java delegate, StoreDocsService, is never called. Moreover, the flow never ends and the execution remains stuck at the Store Documents async service task.
Logs:
[main] INFO org.activiti.engine.impl.ProcessEngineImpl - ProcessEngine default created
[main] INFO org.activiti.engine.impl.jobexecutor.JobExecutor - Starting up the JobExecutor[org.activiti.engine.impl.jobexecutor.DefaultJobExecutor].
[Thread-1] INFO org.activiti.engine.impl.jobexecutor.AcquireJobsRunnableImpl - JobExecutor[org.activiti.engine.impl.jobexecutor.DefaultJobExecutor] starting to acquire jobs
ProcessEngine [default] Version: [5.17.0.2]
[main] INFO org.activiti.engine.impl.bpmn.deployer.BpmnDeployer - Processing resource MyProcess.bpmn
Found process definition [My process] with id [myProcess:1:3]
Started Process Id: 4
[pool-1-thread-2] INFO com.example.service.TimerService - *** Executing Timer autocomplete ***
[pool-1-thread-2] INFO com.example.service.TimerService - *** Task: 9 autocompleted by timer ***
Changing to the async job executor. One feature of the 5.17 release was the new async job executor (although the old job executor remains the default). So we try to enable the async executor in App.java with the following lines:
DefaultAsyncJobExecutor demoAsyncJobExecutor = new DefaultAsyncJobExecutor();
demoAsyncJobExecutor.setCorePoolSize(10);
demoAsyncJobExecutor.setMaxPoolSize(50);
demoAsyncJobExecutor.setKeepAliveTime(10000);
demoAsyncJobExecutor.setMaxAsyncJobsDuePerAcquisition(50);
ProcessEngineConfiguration cfg = new StandaloneProcessEngineConfiguration()
.setJdbcUrl("jdbc:h2:mem:activiti;DB_CLOSE_DELAY=1000")
.setJdbcUsername("sa")
.setJdbcPassword("")
.setJdbcDriver("org.h2.Driver")
.setDatabaseSchemaUpdate(ProcessEngineConfiguration.DB_SCHEMA_UPDATE_TRUE)
.setAsyncExecutorActivate(true)
.setAsyncExecutorEnabled(true)
.setAsyncExecutor(demoAsyncJobExecutor)
;
The flow seems to execute correctly, StoreDocsService is called after TimerService, but the process never ends (the while(!runtimeService.createExecutionQuery().list().isEmpty()) loop in App.java never exits)!
Logs:
[main] INFO org.activiti.engine.impl.ProcessEngineImpl - ProcessEngine default created
[main] INFO org.activiti.engine.impl.asyncexecutor.DefaultAsyncJobExecutor - Starting up the default async job executor [org.activiti.engine.impl.asyncexecutor.DefaultAsyncJobExecutor].
[main] INFO org.activiti.engine.impl.asyncexecutor.DefaultAsyncJobExecutor - Creating thread pool queue of size 100
[main] INFO org.activiti.engine.impl.asyncexecutor.DefaultAsyncJobExecutor - Creating executor service with corePoolSize 10, maxPoolSize 50 and keepAliveTime 10000
[Thread-1] INFO org.activiti.engine.impl.asyncexecutor.AcquireTimerJobsRunnable - {} starting to acquire async jobs due
[Thread-2] INFO org.activiti.engine.impl.asyncexecutor.AcquireAsyncJobsDueRunnable - {} starting to acquire async jobs due
ProcessEngine [default] Version: [5.17.0.2]
[main] INFO org.activiti.engine.impl.bpmn.deployer.BpmnDeployer - Processing resource MyProcess.bpmn
Found process definition [My process] with id [myProcess:1:3]
Started Process Id: 4
[pool-1-thread-2] INFO com.example.service.TimerService - *** Executing Timer autocomplete ***
[pool-1-thread-2] INFO com.example.service.TimerService - *** Task: 9 autocompleted by timer ***
[pool-1-thread-3] INFO com.example.service.StoreDocsService - *** Executing Store Documents ***
!!!! UPDATE !!!
Activiti 6.0.0
Tried the same scenario but with Activiti version 6.0.0.
Changes are needed in TimerService, since the EngineServices can no longer be obtained from the DelegateExecution:
public class TimerService implements JavaDelegate {
Logger LOGGER = LoggerFactory.getLogger(TimerService.class);
@Override
public void execute(DelegateExecution execution) {
LOGGER.info("*** Executing Timer autocomplete ***");
Task task = Context.getProcessEngineConfiguration().getTaskService().createTaskQuery().active().singleResult();
Context.getProcessEngineConfiguration().getTaskService().complete(task.getId());
// Task task = execution.getEngineServices().getTaskService().createTaskQuery().active().singleResult();
// execution.getEngineServices().getTaskService().complete(task.getId());
LOGGER.info("*** Task: {} autocompleted by timer ***", task.getId());
}
}
This version has only the async executor, so the ProcessEngineConfiguration in App.java changes to:
ProcessEngineConfiguration cfg = new StandaloneProcessEngineConfiguration()
.setJdbcUrl("jdbc:h2:mem:activiti;DB_CLOSE_DELAY=1000")
.setJdbcUsername("sa")
.setJdbcPassword("")
.setJdbcDriver("org.h2.Driver")
.setDatabaseSchemaUpdate(ProcessEngineConfiguration.DB_SCHEMA_UPDATE_TRUE)
.setAsyncExecutorActivate(true)
// .setAsyncExecutorEnabled(true)
// .setAsyncExecutor(demoAsyncJobExecutor)
// .setJobExecutorActivate(true)
;
With version 6.0.0 and the async executor, the process completes successfully, as we can see in the logs:
[main] INFO org.activiti.engine.impl.ProcessEngineImpl - ProcessEngine default created
[main] INFO org.activiti.engine.impl.asyncexecutor.DefaultAsyncJobExecutor - Starting up the default async job executor [org.activiti.engine.impl.asyncexecutor.DefaultAsyncJobExecutor].
[main] INFO org.activiti.engine.impl.asyncexecutor.DefaultAsyncJobExecutor - Creating thread pool queue of size 100
[main] INFO org.activiti.engine.impl.asyncexecutor.DefaultAsyncJobExecutor - Creating executor service with corePoolSize 2, maxPoolSize 10 and keepAliveTime 5000
[Thread-1] INFO org.activiti.engine.impl.asyncexecutor.AcquireAsyncJobsDueRunnable - {} starting to acquire async jobs due
[Thread-2] INFO org.activiti.engine.impl.asyncexecutor.AcquireTimerJobsRunnable - {} starting to acquire async jobs due
[Thread-3] INFO org.activiti.engine.impl.asyncexecutor.ResetExpiredJobsRunnable - {} starting to reset expired jobs
ProcessEngine [default] Version: [6.0.0.4]
Found process definition [My process] with id [myProcess:1:3]
Started Process Id: 4
[activiti-async-job-executor-thread-2] INFO com.example.service.TimerService - *** Executing Timer autocomplete ***
[activiti-async-job-executor-thread-2] INFO com.example.service.TimerService - *** Task: 10 autocompleted by timer ***
[activiti-async-job-executor-thread-2] INFO com.example.service.StoreDocsService - *** Executing Store Documents ***
[main] INFO org.activiti.engine.impl.asyncexecutor.DefaultAsyncJobExecutor - Shutting down the default async job executor [org.activiti.engine.impl.asyncexecutor.DefaultAsyncJobExecutor].
[activiti-reset-expired-jobs] INFO org.activiti.engine.impl.asyncexecutor.ResetExpiredJobsRunnable - {} stopped resetting expired jobs
[activiti-acquire-timer-jobs] INFO org.activiti.engine.impl.asyncexecutor.AcquireTimerJobsRunnable - {} stopped async job due acquisition
[activiti-acquire-async-jobs] INFO org.activiti.engine.impl.asyncexecutor.AcquireAsyncJobsDueRunnable - {} stopped async job due acquisition
Process finished with exit code 0
2 Questions:
We have upgraded from Activiti 5.15 to 5.22.0 and we do not use the async job executor. Is there any way to keep this piece of the diagram behaving as it did in 5.15?
If switching to the async job executor is inevitable, then what are we missing in order to make this process complete successfully?
A sample project of the above can be found at: https://github.com/pleft/DemoActiviti
Without answering your question explicitly, which would require setting up your environment and debugging, I would recommend you at the very least move to Activiti 6.
The 5.x branch of Activiti hasn't been maintained for over 5 years and is effectively dead.
Even the 6.x line has pretty much been abandoned as the core developers have all moved to the "Flowable" project.
If you choose to stay with Activiti 5.x, your options are:
Maintain the codebase yourself (and hopefully contribute any changes/enhancements back to the project).
Contract Activiti support services. There are a couple of vendors offering such services.

how to integrate spark streaming spark-2.1.0 with kafka 2.11-0.10.2.0 correctly in Java?

I tried using Spark Streaming to process Kafka messages, following this guide: https://spark.apache.org/docs/latest/streaming-kafka-0-10-integration.html
and my code is below:
SparkConf sparkConf = new SparkConf().setAppName("JavaDirectKafkaWordCount").setMaster("spark://sl:7077");
JavaStreamingContext jssc = new JavaStreamingContext(sparkConf, Durations.seconds(10));
Map<String, Object> kafkaParams = new HashMap<>();
kafkaParams.put("bootstrap.servers", "10.0.1.5:9092");
kafkaParams.put("key.deserializer", StringDeserializer.class);
kafkaParams.put("value.deserializer", StringDeserializer.class);
kafkaParams.put("group.id", "group1");
kafkaParams.put("auto.offset.reset", "earliest");
kafkaParams.put("enable.auto.commit", false);
Collection<String> topics = Collections.singletonList("test");
final JavaInputDStream<ConsumerRecord<String, String>> stream = KafkaUtils.createDirectStream(jssc,
LocationStrategies.PreferConsistent(),
ConsumerStrategies.<String, String>Subscribe(topics, kafkaParams));
stream.print();
After submitting, it returns:
17/04/05 22:43:10 INFO SparkContext: Starting job: print at JavaDirectKafkaWordCount.java:47
17/04/05 22:43:10 INFO DAGScheduler: Got job 0 (print at JavaDirectKafkaWordCount.java:47) with 1 output partitions
17/04/05 22:43:10 INFO DAGScheduler: Final stage: ResultStage 0 (print at JavaDirectKafkaWordCount.java:47)
17/04/05 22:43:10 INFO DAGScheduler: Parents of final stage: List()
17/04/05 22:43:10 INFO DAGScheduler: Missing parents: List()
17/04/05 22:43:10 INFO DAGScheduler: Submitting ResultStage 0 (KafkaRDD[0] at createDirectStream at JavaDirectKafkaWordCount.java:44), which has no missing parents
17/04/05 22:43:10 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 2.3 KB, free 366.3 MB)
17/04/05 22:43:10 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 1529.0 B, free 366.3 MB)
17/04/05 22:43:10 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 10.245.226.155:15258 (size: 1529.0 B, free: 366.3 MB)
17/04/05 22:43:10 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:996
17/04/05 22:43:10 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (KafkaRDD[0] at createDirectStream at JavaDirectKafkaWordCount.java:44)
17/04/05 22:43:10 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
17/04/05 22:43:10 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Registered executor NettyRpcEndpointRef(null) (10.245.226.155:53448) with ID 0
17/04/05 22:43:10 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, 10.245.226.155, executor 0, partition 0, PROCESS_LOCAL, 7295 bytes)
17/04/05 22:43:10 INFO BlockManagerMasterEndpoint: Registering block manager 10.245.226.155:14669 with 366.3 MB RAM, BlockManagerId(0, 10.245.226.155, 14669, None)
17/04/05 22:43:10 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Registered executor NettyRpcEndpointRef(null) (10.245.226.155:53447) with ID 1
17/04/05 22:43:10 INFO BlockManagerMasterEndpoint: Registering block manager 10.245.226.155:33754 with 366.3 MB RAM, BlockManagerId(1, 10.245.226.155, 33754, None)
17/04/05 22:43:11 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, 10.245.226.155, executor 0): java.lang.NullPointerException
at org.apache.spark.util.Utils$.decodeFileNameInURI(Utils.scala:409)
at org.apache.spark.util.Utils$.fetchFile(Utils.scala:434)
at org.apache.spark.executor.Executor$$anonfun$org$apache$spark$executor$Executor$$updateDependencies$5.apply(Executor.scala:508)
at org.apache.spark.executor.Executor$$anonfun$org$apache$spark$executor$Executor$$updateDependencies$5.apply(Executor.scala:500)
at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
at scala.collection.mutable.HashMap.foreach(HashMap.scala:99)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
at org.apache.spark.executor.Executor.org$apache$spark$executor$Executor$$updateDependencies(Executor.scala:500)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:257)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Can someone help with this? Thanks very much.
Can you provide the parameters passed to spark-submit?
You might have passed a jar file name instead of an absolute path to the jar file. The class org.apache.spark.executor.Executor tries to load "Added Jars" and "Added Files" in its updateDependencies method, and the NullPointerException in decodeFileNameInURI suggests the URI path is not in the form Spark expects.
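As a hedged illustration of the fix (the jar path below is a made-up example), either pass an absolute path as the application jar to spark-submit, or register the jar explicitly on the SparkConf so executors can fetch it:
SparkConf sparkConf = new SparkConf()
        .setAppName("JavaDirectKafkaWordCount")
        .setMaster("spark://sl:7077")
        // use an absolute path (placeholder below), not a bare file name,
        // so that executors can resolve the URI when fetching dependencies
        .setJars(new String[] { "/opt/jobs/kafka-wordcount.jar" });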

Java Spark RDD in an other RDD?

I am trying to create a JavaRDD that contains another series of RDDs inside it.
RDDMachine.foreach(machine -> startDetectionNow())
Inside, each machine starts a query against Elasticsearch and gets another RDD. I collect all of this (1200 hits) and convert it to a List, and then the machine starts working with that list.
Firstly: is it possible to do this or not? If not, how could I approach it differently?
Let me show what I am trying to do:
SparkConf conf = new SparkConf().setAppName("Algo").setMaster("local");
conf.set("es.index.auto.create", "true");
conf.set("es.nodes", "IP_ES");
conf.set("es.port", "9200");
sparkContext = new JavaSparkContext(conf);
MyAlgoConfig config_algo = new MyAlgoConfig(Detection.byPrevisionMerge);
Machine m1 = new Machine("AL-27", "IP1", config_algo);
Machine m2 = new Machine("AL-20", "IP2", config_algo);
Machine m3 = new Machine("AL-24", "IP3", config_algo);
Machine m4 = new Machine("AL-21", "IP4", config_algo);
ArrayList<Machine> Machines = new ArrayList();
Machines.add(m1);
Machines.add(m2);
Machines.add(m3);
Machines.add(m4);
JavaRDD<Machine> machineRDD = sparkContext.parallelize(Machines);
machineRDD.foreach(machine -> machine.startDetectNow());
I try to start my algorithm in each machine which must learn from data located in Elasticsearch.
public boolean startDetectNow() {
// Big Elasticsearch (ELK) query
JavaRDD dataForLearn = Elastic.loadElasticsearch(
Algo.sparkContext
, "logstash-*/Collector"
, Elastic.req_AvgOfCall(
getIP()
, "hour"
, "2016-04-16T00:00:00"
, "2016-06-10T00:00:00"));
JavaRDD<Hit> RDD_hits = Elastic.mapToHit(dataForLearn);
List<Hit> hits = Elastic.RddToListHits(RDD_hits);
So I am trying to get all the data from a query in every "Machine".
My question is: is it possible to do this with Spark? Or should I do it another way?
When I run it in Spark, it seems to lock up when the code reaches the second RDD.
And the error message is:
16/08/17 00:17:13 INFO SparkContext: Starting job: collect at Elastic.java:94
16/08/17 00:17:13 INFO DAGScheduler: Got job 1 (collect at Elastic.java:94) with 1 output partitions
16/08/17 00:17:13 INFO DAGScheduler: Final stage: ResultStage 1 (collect at Elastic.java:94)
16/08/17 00:17:13 INFO DAGScheduler: Parents of final stage: List()
16/08/17 00:17:13 INFO DAGScheduler: Missing parents: List()
16/08/17 00:17:13 INFO DAGScheduler: Submitting ResultStage 1 (MapPartitionsRDD[4] at map at Elastic.java:106), which has no missing parents
16/08/17 00:17:13 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 4.3 KB, free 7.7 KB)
16/08/17 00:17:13 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 2.5 KB, free 10.2 KB)
16/08/17 00:17:13 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on localhost:46356 (size: 2.5 KB, free: 511.1 MB)
16/08/17 00:17:13 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1006
16/08/17 00:17:13 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1 (MapPartitionsRDD[4] at map at Elastic.java:106)
16/08/17 00:17:13 INFO TaskSchedulerImpl: Adding task set 1.0 with 1 tasks
^C16/08/17 00:17:22 INFO SparkContext: Invoking stop() from shutdown hook
16/08/17 00:17:22 INFO SparkUI: Stopped Spark web UI at http://192.168.10.23:4040
16/08/17 00:17:22 INFO DAGScheduler: ResultStage 0 (foreach at GuardConnect.java:60) failed in 10,292 s
16/08/17 00:17:22 INFO DAGScheduler: Job 0 failed: foreach at GuardConnect.java:60, took 10,470974 s
Exception in thread "main" org.apache.spark.SparkException: Job 0 cancelled because SparkContext was shut down
at org.apache.spark.scheduler.DAGScheduler$$anonfun$cleanUpAfterSchedulerStop$1.apply(DAGScheduler.scala:806)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$cleanUpAfterSchedulerStop$1.apply(DAGScheduler.scala:804)
at scala.collection.mutable.HashSet.foreach(HashSet.scala:79)
at org.apache.spark.scheduler.DAGScheduler.cleanUpAfterSchedulerStop(DAGScheduler.scala:804)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onStop(DAGScheduler.scala:1658)
at org.apache.spark.util.EventLoop.stop(EventLoop.scala:84)
at org.apache.spark.scheduler.DAGScheduler.stop(DAGScheduler.scala:1581)
at org.apache.spark.SparkContext$$anonfun$stop$9.apply$mcV$sp(SparkContext.scala:1740)
at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1229)
at org.apache.spark.SparkContext.stop(SparkContext.scala:1739)
at org.apache.spark.SparkContext$$anonfun$3.apply$mcV$sp(SparkContext.scala:596)
at org.apache.spark.util.SparkShutdownHook.run(ShutdownHookManager.scala:267)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ShutdownHookManager.scala:239)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:239)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:239)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1765)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply$mcV$sp(ShutdownHookManager.scala:239)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:239)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:239)
at scala.util.Try$.apply(Try.scala:161)
at org.apache.spark.util.SparkShutdownHookManager.runAll(ShutdownHookManager.scala:239)
at org.apache.spark.util.SparkShutdownHookManager$$anon$2.run(ShutdownHookManager.scala:218)
at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1832)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1845)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1858)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1929)
at org.apache.spark.rdd.RDD$$anonfun$foreach$1.apply(RDD.scala:912)
at org.apache.spark.rdd.RDD$$anonfun$foreach$1.apply(RDD.scala:910)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
at org.apache.spark.rdd.RDD.foreach(RDD.scala:910)
at org.apache.spark.api.java.JavaRDDLike$class.foreach(JavaRDDLike.scala:332)
at org.apache.spark.api.java.AbstractJavaRDDLike.foreach(JavaRDDLike.scala:46)
at com.seigneurin.spark.GuardConnect.main(GuardConnect.java:60)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
16/08/17 00:17:22 ERROR LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerStageCompleted(org.apache.spark.scheduler.StageInfo@4a7e0846)
16/08/17 00:17:22 INFO DAGScheduler: ResultStage 1 (collect at Elastic.java:94) failed in 9,301 s
16/08/17 00:17:22 ERROR LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerStageCompleted(org.apache.spark.scheduler.StageInfo@6c6b4cb8)
16/08/17 00:17:22 ERROR LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerJobEnd(0,1471385842813,JobFailed(org.apache.spark.SparkException: Job 0 cancelled because SparkContext was shut down))
16/08/17 00:17:22 INFO DAGScheduler: Job 1 failed: collect at Elastic.java:94, took 9,317650 s
16/08/17 00:17:22 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
org.apache.spark.SparkException: Job 1 cancelled because SparkContext was shut down
at org.apache.spark.scheduler.DAGScheduler$$anonfun$cleanUpAfterSchedulerStop$1.apply(DAGScheduler.scala:806)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$cleanUpAfterSchedulerStop$1.apply(DAGScheduler.scala:804)
at scala.collection.mutable.HashSet.foreach(HashSet.scala:79)
at org.apache.spark.scheduler.DAGScheduler.cleanUpAfterSchedulerStop(DAGScheduler.scala:804)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onStop(DAGScheduler.scala:1658)
at org.apache.spark.util.EventLoop.stop(EventLoop.scala:84)
at org.apache.spark.scheduler.DAGScheduler.stop(DAGScheduler.scala:1581)
at org.apache.spark.SparkContext$$anonfun$stop$9.apply$mcV$sp(SparkContext.scala:1740)
at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1229)
at org.apache.spark.SparkContext.stop(SparkContext.scala:1739)
at org.apache.spark.SparkContext$$anonfun$3.apply$mcV$sp(SparkContext.scala:596)
at org.apache.spark.util.SparkShutdownHook.run(ShutdownHookManager.scala:267)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ShutdownHookManager.scala:239)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:239)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:239)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1765)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply$mcV$sp(ShutdownHookManager.scala:239)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:239)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:239)
at scala.util.Try$.apply(Try.scala:161)
at org.apache.spark.util.SparkShutdownHookManager.runAll(ShutdownHookManager.scala:239)
at org.apache.spark.util.SparkShutdownHookManager$$anon$2.run(ShutdownHookManager.scala:218)
at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1832)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1845)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1858)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1929)
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:927)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
at org.apache.spark.rdd.RDD.collect(RDD.scala:926)
at org.apache.spark.api.java.JavaRDDLike$class.collect(JavaRDDLike.scala:339)
at org.apache.spark.api.java.AbstractJavaRDDLike.collect(JavaRDDLike.scala:46)
at com.seigneurin.spark.Elastic.RddToListHits(Elastic.java:94)
at com.seigneurin.spark.OXO.prepareDataAndLearn(OXO.java:126)
at com.seigneurin.spark.OXO.startDetectNow(OXO.java:148)
at com.seigneurin.spark.GuardConnect.lambda$main$1282d8df$1(GuardConnect.java:60)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$foreach$1.apply(JavaRDDLike.scala:332)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$foreach$1.apply(JavaRDDLike.scala:332)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
at org.apache.spark.rdd.RDD$$anonfun$foreach$1$$anonfun$apply$32.apply(RDD.scala:912)
at org.apache.spark.rdd.RDD$$anonfun$foreach$1$$anonfun$apply$32.apply(RDD.scala:912)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
16/08/17 00:17:22 ERROR LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerJobEnd(1,1471385842814,JobFailed(org.apache.spark.SparkException: Job 1 cancelled because SparkContext was shut down))
16/08/17 00:17:22 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/08/17 00:17:22 INFO MemoryStore: MemoryStore cleared
16/08/17 00:17:22 INFO BlockManager: BlockManager stopped
16/08/17 00:17:22 INFO BlockManagerMaster: BlockManagerMaster stopped
16/08/17 00:17:22 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
16/08/17 00:17:22 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
16/08/17 00:17:22 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
16/08/17 00:17:22 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 1, localhost, partition 0,ANY, 6751 bytes)
16/08/17 00:17:22 ERROR Inbox: Ignoring error
java.util.concurrent.RejectedExecutionException: Task org.apache.spark.executor.Executor$TaskRunner@65fd4104 rejected from java.util.concurrent.ThreadPoolExecutor@4387a1bf[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 1]
at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2047)
at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:823)
at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1369)
at org.apache.spark.executor.Executor.launchTask(Executor.scala:128)
at org.apache.spark.scheduler.local.LocalEndpoint$$anonfun$reviveOffers$1.apply(LocalBackend.scala:86)
at org.apache.spark.scheduler.local.LocalEndpoint$$anonfun$reviveOffers$1.apply(LocalBackend.scala:84)
at scala.collection.immutable.List.foreach(List.scala:318)
at org.apache.spark.scheduler.local.LocalEndpoint.reviveOffers(LocalBackend.scala:84)
at org.apache.spark.scheduler.local.LocalEndpoint$$anonfun$receive$1.applyOrElse(LocalBackend.scala:69)
at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:116)
at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204)
at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
16/08/17 00:17:22 INFO SparkContext: Successfully stopped SparkContext
16/08/17 00:17:22 INFO ShutdownHookManager: Shutdown hook called
16/08/17 00:17:22 INFO ShutdownHookManager: Deleting directory /tmp/spark-8bf65e78-a916-4cc0-b4d1-1b0ec9a07157
16/08/17 00:17:22 INFO RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
16/08/17 00:17:22 INFO ShutdownHookManager: Deleting directory /tmp/spark-8bf65e78-a916-4cc0-b4d1-1b0ec9a07157/httpd-6d3aeb80-808c-4749-8f8b-ac9341f990a7
Thanks if you can give me some advice.
You cannot create an RDD inside an RDD, whatever the type of RDD may be.
This is the first rule, because an RDD is a driver-side abstraction pointing to your data: the SparkContext it depends on is not available inside the functions that run on executors (such as the lambda passed to foreach).
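A minimal sketch of the usual workaround, based on the code in the question: drive the outer loop from the driver (plain Java iteration over the machine list) so that each Elasticsearch-backed RDD is created and collected on the driver rather than inside an executor-side foreach:
// iterate over the machines on the driver instead of parallelizing them;
// startDetectNow() can then safely use Algo.sparkContext to build and collect
// the Elasticsearch RDD, because this code runs on the driver
for (Machine machine : Machines) {
    machine.startDetectNow();
}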

How to consume multiple messages in parallel and detect when all execution have completed?

I want to send multiple messages that will traverse the same route asynchronously and be able to know when all processing have completed.
Since I need to know when each route has terminated, I thought about using ProducerTemplate#asyncRequestBody, which uses the InOut pattern, so that calling get on the returned Future will block until the route has terminated.
So far so good: each request is sent asynchronously to the route, and looping over all the Futures and calling get will block until all my routes have completed.
The problem is that, while the requests are sent asynchronously, I want them to be also consumed in parallel.
For example, consider P being the ProducerTemplate, Rn being requests and En being endpoints - what I want is:
-> R0 -> from(E1).to(E2).to(E3) : done.
/
P -> R1 -> from(E1).to(E2).to(E3) : done.
\
-> R2 -> from(E1).to(E2).to(E3) : done.
^__ Requests consumed in parallel.
After some research, I stumbled onto Competing Consumers, which parallelizes execution by adding more consumers.
However, since there are multiple executions at the same time, this slows down the execution of each route, which causes some ExchangeTimedOutException errors:
The OUT message was not received within: 20000 millis due reply message with correlationID...
Not a surprise, as I am sending an InOut request. But actually, I don't really care about the response; I use it only to know when my route has terminated. I would use InOnly (ProducerTemplate#asyncSendBody), but then calling Future#get would not block until the entire task is completed.
Is there another alternative to send requests asynchronously and detect when they have all completed?
Note that changing the timeout is not an option in my case.
My first instinct is to recommend using NotifyBuilder to track the processing, more specifically using whenBodiesDone to target specific bodies.
EDIT:
Here's a trivial implementation but it does demonstrate a point:
@SpringBootApplication
public class DemoApplication {
public static void main(String[] args) {
SpringApplication.run(DemoApplication.class, args);
}
@Component
public static class ParallelProcessingRouteBuilder extends RouteBuilder {
@Override
public void configure() throws Exception {
from("seda:test?concurrentConsumers=5")
.routeId("parallel")
.log("Received ${body}, processing")
.delay(5000)
.log("Processed ${body}")
.stop();
from("timer:testStarter?delay=3000&period=300000")
.routeId("test timer")
.process(exchange -> {
// messages we want to track
List<Integer> toSend = IntStream.range(0, 5).boxed().collect(toList());
NotifyBuilder builder = new NotifyBuilder(getContext())
.fromRoute("parallel")
.filter(e -> toSend.contains(e.getIn().getBody(Integer.class)))
.whenDone(toSend.size())
.create();
ProducerTemplate template = getContext().createProducerTemplate();
// messages we do not want to track
IntStream.range(10, 15)
.forEach(body -> template.sendBody("seda:test", body));
toSend.forEach(body -> template.sendBody("seda:test", body));
exchange.getIn().setBody(builder.matches(1, TimeUnit.MINUTES));
})
.log("Matched? ${body}");
}
}
}
And here's a sample of the logs:
2016-08-06 11:45:03.861 INFO 27410 --- [1 - seda://test] parallel : Received 10, processing
2016-08-06 11:45:03.861 INFO 27410 --- [5 - seda://test] parallel : Received 11, processing
2016-08-06 11:45:03.864 INFO 27410 --- [2 - seda://test] parallel : Received 12, processing
2016-08-06 11:45:03.865 INFO 27410 --- [4 - seda://test] parallel : Received 13, processing
2016-08-06 11:45:03.866 INFO 27410 --- [3 - seda://test] parallel : Received 14, processing
2016-08-06 11:45:08.867 INFO 27410 --- [1 - seda://test] parallel : Processed 10
2016-08-06 11:45:08.867 INFO 27410 --- [3 - seda://test] parallel : Processed 14
2016-08-06 11:45:08.867 INFO 27410 --- [4 - seda://test] parallel : Processed 13
2016-08-06 11:45:08.868 INFO 27410 --- [2 - seda://test] parallel : Processed 12
2016-08-06 11:45:08.868 INFO 27410 --- [5 - seda://test] parallel : Processed 11
2016-08-06 11:45:08.870 INFO 27410 --- [1 - seda://test] parallel : Received 0, processing
2016-08-06 11:45:08.872 INFO 27410 --- [4 - seda://test] parallel : Received 2, processing
2016-08-06 11:45:08.872 INFO 27410 --- [3 - seda://test] parallel : Received 1, processing
2016-08-06 11:45:08.872 INFO 27410 --- [2 - seda://test] parallel : Received 3, processing
2016-08-06 11:45:08.872 INFO 27410 --- [5 - seda://test] parallel : Received 4, processing
2016-08-06 11:45:13.876 INFO 27410 --- [1 - seda://test] parallel : Processed 0
2016-08-06 11:45:13.876 INFO 27410 --- [3 - seda://test] parallel : Processed 1
2016-08-06 11:45:13.876 INFO 27410 --- [4 - seda://test] parallel : Processed 2
2016-08-06 11:45:13.876 INFO 27410 --- [5 - seda://test] parallel : Processed 4
2016-08-06 11:45:13.876 INFO 27410 --- [2 - seda://test] parallel : Processed 3
2016-08-06 11:45:13.877 INFO 27410 --- [r://testStarter] test timer : Matched? true
You'll notice how the NotifyBuilder returned the result as soon as the results matched.
If you know each batch of messages you are consuming has X messages in it, you can use an aggregator at the end of your parallel processing. For your example, each group of messages would have its own unique header that is picked up by the aggregator. After all the messages have been processed and have ended up at the aggregator, you can aggregate them into whatever format you want and return them.
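A rough sketch of that aggregator idea, written inside a RouteBuilder's configure() method and assuming a hypothetical batchId header on every message, a hypothetical seda:done endpoint that the parallel route sends finished messages to, and a batch size X known up front:
int expectedBatchSize = 5; // hypothetical: the known number of messages per batch
from("seda:done") // hypothetical endpoint fed by the end of the parallel route
    .aggregate(header("batchId"), new GroupedExchangeAggregationStrategy())
        .completionSize(expectedBatchSize) // completes once all X messages of a batch arrived
    .log("Batch ${header.batchId} fully processed");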

Unable to Execute More than a spark Job "Initial job has not accepted any resources"

Using standalone Spark with Java to execute the code snippet below, the status is always WAITING, with the error below. It doesn't work when I try to add the print statement. Is there any configuration I might have missed in order to run multiple jobs?
15/09/18 15:02:56 INFO DAGScheduler: Submitting 2 missing tasks from Stage 0 (MapPartitionsRDD[2] at filter at SparkTest.java:143)
15/09/18 15:02:56 INFO TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
15/09/18 15:03:11 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
15/09/18 15:03:26 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
15/09/18 15:03:41 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
JavaRDD<String> words = input.flatMap(new FlatMapFunction<String, String>() //Ln:143
{
public Iterable<String> call(String x)
{
return Arrays.asList(x.split(" "));
}
});
// Count all the words
System.out.println("Total words is " + words.count());
This error message means that your application is requesting more resources from the cluster than the cluster can currently provide i.e. more cores or more RAM than available in the cluster.
One of the reasons for this could be that you already have a job running which uses up all the available cores.
When this happens, your job is most probably waiting for another job to finish and release resources.
You can check this in the Spark UI.
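If the goal is to run more than one application on the same standalone cluster at once, one common approach is to cap what each application requests so a single job cannot take every core. A minimal sketch with placeholder values to adapt to your cluster:
SparkConf conf = new SparkConf()
        .setAppName("SparkTest")
        .setMaster("spark://master:7077")       // placeholder master URL
        .set("spark.cores.max", "2")            // cap the total cores this app may take
        .set("spark.executor.memory", "512m");  // cap the memory requested per executor
JavaSparkContext sc = new JavaSparkContext(conf);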
