Although I found a similar question, the answer wasn't satisfactory, or perhaps it doesn't work in my situation.
I have N threads to run with a ramp-up period of, say, 5 seconds. The login credentials for the N users are passed in from a CSV file.
The listener's report shows that thread 38 (or some other thread) runs before thread 1, i.e. the first iteration belongs to some thread X where X != 1. Using a Loop Controller doesn't seem to be the solution, since my N users are all different. Below is the test report of my test.
Thread Iteration Time(milliseconds) Bytes Success
ThreadNo 1-38 1 94551 67485 true
ThreadNo 1-69 2 92724 67200 true
ThreadNo 1-58 3 91812 66332 true
ThreadNo 1-12 4 92144 66335 true
ThreadNo 1-18 5 91737 66340 true
ThreadNo 1-17 6 93055 66514 true
So I want iteration 1 to start with thread 1 (ThreadNo 1-1).
Update:
My test plan has
Run thread groups consecutively (i.e. run groups one at a time)
selected.
Below is a snapshot of my test plan.
Below is the JMeter log:
jmeter.threads.JMeterThread: Thread is done: ThreadAction 1-39
2015/12/14 02:00:37 INFO - jmeter.threads.JMeterThread: Thread finished: ThreadAction 1-39
2015/12/14 02:00:37 INFO - jmeter.threads.JMeterThread: Thread is done: ThreadAction 1-49
2015/12/14 02:00:37 INFO - jmeter.threads.JMeterThread: Thread finished: ThreadAction 1-49
2015/12/14 02:00:37 INFO - jmeter.threads.JMeterThread: Thread is done: ThreadAction 1-38
2015/12/14 02:00:37 INFO - jmeter.threads.JMeterThread: Thread finished: ThreadAction 1-38
2015/12/14 02:00:38 INFO - jmeter.threads.JMeterThread: Thread is done: ThreadAction 1-41
2015/12/14 02:00:38 INFO - jmeter.threads.JMeterThread: Thread finished: ThreadAction 1-41
2015/12/14 02:00:38 INFO - jmeter.threads.JMeterThread: Thread is done: ThreadAction 1-42
2015/12/14 02:00:38 INFO - jmeter.threads.JMeterThread: Thread finished: ThreadAction 1-42
2015/12/14 02:00:38 INFO - jmeter.threads.JMeterThread: Thread is done: ThreadAction 1-34
2015/12/14 02:00:38 INFO - jmeter.threads.JMeterThread: Thread finished: ThreadAction 1-34
2015/12/14 02:00:39 INFO - jmeter.threads.JMeterThread: Thread is done: ThreadAction 1-47
2015/12/14 02:00:39 INFO - jmeter.threads.JMeterThread: Thread finished: ThreadAction 1-47
2015/12/14 02:00:39 INFO - jmeter.threads.JMeterThread: Thread is done: ThreadAction 1-40
I'll tell you a little secret: JMeter does start threads sequentially; you don't need to take any extra actions. If you look into the jmeter.log file you'll see something like:
2015/12/15 18:35:31 INFO - jmeter.threads.JMeterThread: Thread started: Thread Group 1-1
2015/12/15 18:35:31 INFO - jmeter.threads.JMeterThread: Thread started: Thread Group 1-2
2015/12/15 18:35:31 INFO - jmeter.threads.JMeterThread: Thread started: Thread Group 1-3
2015/12/15 18:35:31 INFO - jmeter.threads.JMeterThread: Thread started: Thread Group 1-4
2015/12/15 18:35:31 INFO - jmeter.threads.JMeterThread: Thread started: Thread Group 1-5
2015/12/15 18:35:31 INFO - jmeter.threads.JMeterThread: Thread started: Thread Group 1-6
2015/12/15 18:35:31 INFO - jmeter.threads.JMeterThread: Thread started: Thread Group 1-7
2015/12/15 18:35:31 INFO - jmeter.threads.JMeterThread: Thread started: Thread Group 1-8
2015/12/15 18:35:31 INFO - jmeter.threads.JMeterThread: Thread started: Thread Group 1-9
2015/12/15 18:35:31 INFO - jmeter.threads.JMeterThread: Thread started: Thread Group 1-10
What you see in the test report seems to be the request completion time, which would be sequential only in an ideal world.
2015/12/15 18:39:04 INFO - jmeter.threads.JMeterThread: Thread finished: Thread Group 1-45
2015/12/15 18:39:04 INFO - jmeter.threads.JMeterThread: Thread is done: Thread Group 1-47
2015/12/15 18:39:04 INFO - jmeter.threads.JMeterThread: Thread finished: Thread Group 1-47
2015/12/15 18:39:04 INFO - jmeter.threads.JMeterThread: Thread finished: Thread Group 1-46
2015/12/15 18:39:04 INFO - jmeter.threads.JMeterThread: Thread is done: Thread Group 1-50
2015/12/15 18:39:04 INFO - jmeter.threads.JMeterThread: Thread finished: Thread Group 1-50
2015/12/15 18:39:04 INFO - jmeter.threads.JMeterThread: Thread is done: Thread Group 1-49
2015/12/15 18:39:04 INFO - jmeter.threads.JMeterThread: Thread is done: Thread Group 1-48
2015/12/15 18:39:04 INFO - jmeter.threads.JMeterThread: Thread finished: Thread Group 1-48
2015/12/15 18:39:04 INFO - jmeter.threads.JMeterThread: Thread finished: Thread Group 1-49
If you still need, for some reason, to have a certain sampler executed by the 1st thread on the 1st iteration, put it under an If Controller and use the following statement as the "Condition":
${__BeanShell(vars.getIteration() == 1)} && ${__threadNum} == 1
It utilises the following JMeter functions:
__threadNum - to get the current thread number
__BeanShell - to execute an arbitrary BeanShell script, in this case to get the current iteration (this applies to Thread Group iterations; it won't increment for iterations driven by a Loop Controller or the like)
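On newer JMeter versions a roughly equivalent condition can be written with the single __groovy function; this variant is my own suggestion rather than part of the original answer, and it assumes the If Controller's "Interpret Condition as Variable Expression?" box is checked (note that ctx.getThreadNum() is zero-based, unlike __threadNum):
${__groovy(vars.getIteration() == 1 && ctx.getThreadNum() == 0)}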
I want to periodically (for instance, every 10 minutes) run a method that exports a .txt file from inside a recursive, resultless ForkJoinTask class.
So I initialized a ScheduledExecutorService to run the command after a given delay, a ForkJoinPool with one worker per available processor to run the ForkJoinTasks, and a writer to output the file, like this:
WriterTXT writer = new WriterTXT();
ScheduledExecutorService executor = Executors.newScheduledThreadPool(1);
ForkJoinPool pool = new ForkJoinPool(Runtime.getRuntime().availableProcessors());
Then, inside the compute method of the recursive ForkJoinTask class, I create a runnable task like this:
protected void compute() {
    if (condition) {
        run();
        Runnable task = () -> {
            try {
                writer.writetxt();
            } catch (IOException e) {
                e.printStackTrace();
            }
        };
        exec.scheduleWithFixedDelay(task, 0, 10, TimeUnit.MINUTES);
    } else {
        dosomthingelse();
    }
}
I understand that in scheduleWithFixedDelay, 0 is the initial delay and 10 minutes is the delay between executions. However, in my case, since I use multiple processors, the file is exported continuously and the delays are not respected.
Below is an example of running the application on 2 processors with an initial delay of 0 minutes and a delay of 5 minutes:
exec.scheduleWithFixedDelay(task, 0, 5, TimeUnit.MINUTES);
2022-08-29 17:40:22.669 [main ] INFO - Number of CPU: 2
2022-08-29 17:40:23.661 [thread-1] INFO - Writing partial Solutions
2022-08-29 17:40:23.951 [thread-1] INFO - Writing partial Solutions
2022-08-29 17:40:23.992 [thread-1] INFO - Writing partial Solutions
2022-08-29 17:40:24.075 [thread-1] INFO - Writing partial Solutions
2022-08-29 17:40:24.191 [worker-2] INFO - Queued Tasks = 33725 / 33731
2022-08-29 17:40:24.191 [thread-1] INFO - Writing partial Solutions
2022-08-29 17:40:24.456 [thread-1] INFO - Writing partial Solutions
2022-08-29 17:40:24.498 [thread-1] INFO - Writing partial Solutions
2022-08-29 17:40:24.597 [thread-1] INFO - Writing partial Solutions
2022-08-29 17:40:24.881 [thread-1] INFO - Writing partial Solutions
2022-08-29 17:40:25.182 [worker-2] INFO - Queued Tasks = 33720 / 33731
2022-08-29 17:40:25.182 [thread-1] INFO - Writing partial Solutions
2022-08-29 17:40:25.481 [thread-1] INFO - Writing partial Solutions
2022-08-29 17:40:25.634 [thread-1] INFO - Writing partial Solutions
2022-08-29 17:40:25.735 [thread-1] INFO - Writing partial Solutions
So is there a way to make this work?
Thank you in advance.
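For reference, scheduleWithFixedDelay registers a new recurring job every time it is called, so if compute() runs once per forked subtask, each invocation adds yet another recurring writer, which would explain the continuous output. A minimal sketch, reusing the names from the question (MyRecursiveTask is a hypothetical stand-in for the recursive class), that schedules the export exactly once, outside the recursion:
WriterTXT writer = new WriterTXT();
ScheduledExecutorService exec = Executors.newScheduledThreadPool(1);
ForkJoinPool pool = new ForkJoinPool(Runtime.getRuntime().availableProcessors());
// One recurring job: it fires 10 minutes after the previous run finishes,
// no matter how many subtasks the pool forks in the meantime.
exec.scheduleWithFixedDelay(() -> {
    try {
        writer.writetxt();
    } catch (IOException e) {
        e.printStackTrace();
    }
}, 0, 10, TimeUnit.MINUTES);
pool.invoke(new MyRecursiveTask()); // hypothetical; its compute() no longer schedules anything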
I am trying to understand how doAfterTerminate works with delaySequence. I have the following test:
@Test
fun testDoAfterTerminate() {
logger.info("Starting test")
val sch = Schedulers.single()
val testFlux = Flux.fromArray(intArrayOf(1, 2, 3).toTypedArray())
.doAfterTerminate { logger.info("Finished processing batch!") }
.delaySequence(Duration.ofSeconds(1), sch)
.doOnNext { logger.info("Done $it")}
.doAfterTerminate { logger.info("Finished v2")}
StepVerifier.create(testFlux).expectNextCount(3).verifyComplete()
}
The output of this test is:
22:27:54.547 [Test worker] INFO leon.patmore.kafkareactive.TestReactor - Finished processing batch!
22:27:55.561 [single-1] INFO leon.patmore.kafkareactive.TestReactor - Done 1
22:27:55.561 [single-1] INFO leon.patmore.kafkareactive.TestReactor - Done 2
22:27:55.561 [single-1] INFO leon.patmore.kafkareactive.TestReactor - Done 3
22:27:55.562 [single-1] INFO leon.patmore.kafkareactive.TestReactor - Finished v2
Does anyone understand why the first doAfterTerminate is called before the flux completes?
If I remove the .delaySequence(Duration.ofSeconds(1), sch) line, the termination happens as expected:
22:29:37.588 [Test worker] INFO leon.patmore.kafkareactive.TestReactor - Done 1
22:29:37.588 [Test worker] INFO leon.patmore.kafkareactive.TestReactor - Done 2
22:29:37.588 [Test worker] INFO leon.patmore.kafkareactive.TestReactor - Done 3
22:29:37.588 [Test worker] INFO leon.patmore.kafkareactive.TestReactor - Finished v2
22:29:37.588 [Test worker] INFO leon.patmore.kafkareactive.TestReactor - Finished processing batch!
Thanks!
The first doAfterTerminate is triggered on the main thread without any delay: operators apply in declaration order, and that callback sits upstream of delaySequence, so it observes the source's onComplete immediately. Only further downstream are the signals delayed and continued on the single() Scheduler.
Adding some log() operators to make it clearer:
INFO main r.F.P.1 - | onSubscribe([Fuseable] FluxPeekFuseable.PeekFuseableSubscriber)
INFO main r.Flux.Peek.2 - onSubscribe(FluxPeek.PeekSubscriber)
INFO main r.Flux.Peek.2 - request(unbounded)
INFO main r.F.P.1 - | request(unbounded)
INFO main r.F.P.1 - | onNext(1)
INFO main r.F.P.1 - | onNext(2)
INFO main r.F.P.1 - | onNext(3)
INFO main r.F.P.1 - | onComplete()
Finished processing batch!
Done 1
Done 2
INFO single-1 r.Flux.Peek.2 - onNext(1)
Done 3
INFO single-1 r.Flux.Peek.2 - onNext(2)
INFO single-1 r.Flux.Peek.2 - onNext(3)
INFO single-1 r.Flux.Peek.2 - onComplete()
Finished v2
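If the callback is meant to fire only after the delayed completion, it can be placed downstream of delaySequence instead. A minimal Java sketch (my own rearrangement of the pipeline, not part of the original answer):
import java.time.Duration;
import reactor.core.publisher.Flux;
import reactor.core.scheduler.Schedulers;

Flux.just(1, 2, 3)
        .delaySequence(Duration.ofSeconds(1), Schedulers.single())
        .doOnNext(i -> System.out.println("Done " + i))
        // now downstream of delaySequence, so it observes the delayed onComplete
        .doAfterTerminate(() -> System.out.println("Finished processing batch!"))
        .blockLast(); // block so the delayed signals are actually observed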
I am unable to view graphs even after installing the plugins from the jmeter-plugins.org site.
I can see the jp@gc graph in the listener, but on running the test only the CSV is created, not the graphs.
I am not getting any error message, but the log shows warnings. I followed all the steps as mentioned in this link.
Below is the error log:
2017/02/22 16:07:49 INFO - jmeter.engine.StandardJMeterEngine: Running the test!
2017/02/22 16:07:49 INFO - jmeter.samplers.SampleEvent: List of sample_variables: []
2017/02/22 16:07:49 INFO - jmeter.gui.util.JMeterMenuBar: setRunning(true,*local*)
2017/02/22 16:07:50 INFO - jmeter.engine.StandardJMeterEngine: Starting ThreadGroup: 1 : Thread Group
2017/02/22 16:07:50 INFO - jmeter.engine.StandardJMeterEngine: Starting 10 threads for group Thread Group.
2017/02/22 16:07:50 INFO - jmeter.engine.StandardJMeterEngine: Thread will continue on error
2017/02/22 16:07:50 INFO - jmeter.threads.ThreadGroup: Starting thread group number 1 threads 10 ramp-up 5 perThread 500.0 delayedStart=false
2017/02/22 16:07:50 INFO - jmeter.threads.ThreadGroup: Started thread group number 1
2017/02/22 16:07:50 INFO - jmeter.engine.StandardJMeterEngine: All thread groups have been started
2017/02/22 16:07:50 INFO - jmeter.threads.JMeterThread: Thread started: Thread Group 1-1
2017/02/22 16:07:50 INFO - jmeter.threads.JMeterThread: Thread started: Thread Group 1-2
2017/02/22 16:07:51 INFO - jmeter.threads.JMeterThread: Thread started: Thread Group 1-3
2017/02/22 16:07:51 INFO - jmeter.threads.JMeterThread: Thread started: Thread Group 1-4
2017/02/22 16:07:52 INFO - jmeter.threads.JMeterThread: Thread started: Thread Group 1-5
2017/02/22 16:07:52 INFO - jmeter.threads.JMeterThread: Thread started: Thread Group 1-6
2017/02/22 16:07:53 INFO - jmeter.threads.JMeterThread: Thread started: Thread Group 1-7
2017/02/22 16:07:53 INFO - jmeter.threads.JMeterThread: Thread started: Thread Group 1-8
2017/02/22 16:07:54 INFO - jmeter.threads.JMeterThread: Thread started: Thread Group 1-9
2017/02/22 16:07:54 INFO - jmeter.threads.JMeterThread: Thread started: Thread Group 1-10
2017/02/22 16:07:57 INFO - jmeter.threads.JMeterThread: Thread is done: Thread Group 1-1
2017/02/22 16:07:57 INFO - jmeter.threads.JMeterThread: Thread finished: Thread Group 1-1
2017/02/22 16:07:58 INFO - jmeter.threads.JMeterThread: Thread is done: Thread Group 1-4
2017/02/22 16:07:58 INFO - jmeter.threads.JMeterThread: Thread finished: Thread Group 1-4
2017/02/22 16:07:59 INFO - jmeter.threads.JMeterThread: Thread is done: Thread Group 1-3
2017/02/22 16:07:59 INFO - jmeter.threads.JMeterThread: Thread finished: Thread Group 1-3
2017/02/22 16:07:59 INFO - jmeter.threads.JMeterThread: Thread is done: Thread Group 1-2
2017/02/22 16:07:59 INFO - jmeter.threads.JMeterThread: Thread finished: Thread Group 1-2
2017/02/22 16:08:00 INFO - jmeter.threads.JMeterThread: Thread is done: Thread Group 1-7
2017/02/22 16:08:00 INFO - jmeter.threads.JMeterThread: Thread finished: Thread Group 1-7
2017/02/22 16:08:00 INFO - jmeter.threads.JMeterThread: Thread is done: Thread Group 1-5
2017/02/22 16:08:00 INFO - jmeter.threads.JMeterThread: Thread finished: Thread Group 1-5
2017/02/22 16:08:00 INFO - jmeter.threads.JMeterThread: Thread is done: Thread Group 1-8
2017/02/22 16:08:00 INFO - jmeter.threads.JMeterThread: Thread finished: Thread Group 1-8
2017/02/22 16:08:01 INFO - jmeter.threads.JMeterThread: Thread is done: Thread Group 1-9
2017/02/22 16:08:01 INFO - jmeter.threads.JMeterThread: Thread finished: Thread Group 1-9
2017/02/22 16:08:01 INFO - jmeter.threads.JMeterThread: Thread is done: Thread Group 1-6
2017/02/22 16:08:01 INFO - jmeter.threads.JMeterThread: Thread finished: Thread Group 1-6
2017/02/22 16:08:01 INFO - jmeter.threads.JMeterThread: Thread is done: Thread Group 1-10
2017/02/22 16:08:01 INFO - jmeter.threads.JMeterThread: Thread finished: Thread Group 1-10
2017/02/22 16:08:01 INFO - jmeter.engine.StandardJMeterEngine: Notifying test listeners of end of test
2017/02/22 16:08:01 INFO - kg.apc.jmeter.PluginsCMDWorker: Using JMeterPluginsCMD v. N/A
2017/02/22 16:08:01 WARN - kg.apc.jmeter.JMeterPluginsUtils: JMeter env exists. No one should see this normally.
2017/02/22 16:08:01 WARN - jmeter.engine.StandardJMeterEngine: Error encountered during shutdown of kg.apc.jmeter.listener.GraphsGeneratorListener@297d7a76 java.lang.RuntimeException: java.lang.ClassNotFoundException: kg.apc.jmeter.vizualizers.SynthesisReportGui
at kg.apc.jmeter.PluginsCMDWorker.getGUIObject(PluginsCMDWorker.java:237)
at kg.apc.jmeter.PluginsCMDWorker.getGUIObject(PluginsCMDWorker.java:234)
at kg.apc.jmeter.PluginsCMDWorker.setPluginType(PluginsCMDWorker.java:73)
at kg.apc.jmeter.listener.GraphsGeneratorListener.testEnded(GraphsGeneratorListener.java:221)
at kg.apc.jmeter.listener.GraphsGeneratorListener.testEnded(GraphsGeneratorListener.java:137)
at org.apache.jmeter.engine.StandardJMeterEngine.notifyTestListenersOfEnd(StandardJMeterEngine.java:215)
at org.apache.jmeter.engine.StandardJMeterEngine.run(StandardJMeterEngine.java:436)
at java.lang.Thread.run(Unknown Source)
Caused by: java.lang.ClassNotFoundException: kg.apc.jmeter.vizualizers.SynthesisReportGui
at java.net.URLClassLoader.findClass(Unknown Source)
at java.lang.ClassLoader.loadClass(Unknown Source)
at java.lang.ClassLoader.loadClass(Unknown Source)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Unknown Source)
at kg.apc.jmeter.PluginsCMDWorker.getGUIObject(PluginsCMDWorker.java:227)
... 7 more
2017/02/22 16:08:01 INFO - jmeter.gui.util.JMeterMenuBar: setRunning(false,*local*)
You need the Synthesis Report plugin, which is a prerequisite for the Graphs Generator; you can install it either manually or using the JMeter Plugins Manager (recommended).
I am trying to get a value from Redis using the Redis Data Set plugin in JMeter. If the Redis key is simple (as in this example: https://www.youtube.com/watch?v=u0vu3tfrdKc), its value is extracted without any problems. In my case the value is stored under a compound key, like user.confirmation.6869427a27e784f7e7cbb0746714c27d, and when I use it as the value of "Redis Key:" in the Redis Data Set, the following message appears in the log while the script is not performed and JMeter returns no key value:
2017/02/11 12:57:57 INFO - jmeter.engine.StandardJMeterEngine: Running the test!
2017/02/11 12:57:57 INFO - jmeter.samplers.SampleEvent: List of sample_variables: []
2017/02/11 12:57:57 INFO - jmeter.gui.util.JMeterMenuBar: setRunning(true,*local*)
2017/02/11 12:57:58 INFO - jmeter.engine.StandardJMeterEngine: Starting ThreadGroup: 1 : Thread Group User Service
2017/02/11 12:57:58 INFO - jmeter.engine.StandardJMeterEngine: Starting 1 threads for group Thread Group User Service.
2017/02/11 12:57:58 INFO - jmeter.engine.StandardJMeterEngine: Thread will start next loop on error
2017/02/11 12:57:58 INFO - jmeter.threads.ThreadGroup: Starting thread group number 1 threads 1 ramp-up 1 perThread 1000.0 delayedStart=false
2017/02/11 12:57:58 INFO - jmeter.threads.ThreadGroup: Started thread group number 1
2017/02/11 12:57:58 INFO - jmeter.engine.StandardJMeterEngine: All thread groups have been started
2017/02/11 12:57:58 INFO - jmeter.threads.JMeterThread: Thread started: Thread Group User Service 1-1
2017/02/11 12:57:58 INFO - jmeter.threads.JMeterThread: Stop Thread seen: org.apache.jorphan.util.JMeterStopThreadException: End of redis data detected, thread will exit
2017/02/11 12:57:58 INFO - jmeter.threads.JMeterThread: Thread finished: Thread Group User Service 1-1
2017/02/11 12:57:58 INFO - jmeter.engine.StandardJMeterEngine: Notifying test listeners of end of test
2017/02/11 12:57:58 INFO - jmeter.gui.util.JMeterMenuBar: setRunning(false,*local*)
Besides, there is no problem retrieving the value in the Redis console itself.
Attempts to escape the dots in the key were to no avail as well.
I look forward to any comments.
To test, I created a Redis (key,value) set like this:
key: user.confirmation.6869427a27e784f7e7cbb0746714c27d
row1: user.confirmation.6869427a27e784f7e7cbb0746714c27d
row2: test
And I could retrieve both rows' data with the Redis Data Set, so it seems the issue is not related to the long key name; maybe the name in your Redis data store is not the same as the one in JMeter. That is why JMeter complains: "End of redis data detected, thread will exit".
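For reference, a minimal sketch of how such a key could be populated, assuming the Jedis client and a Redis Data Set configured to read a List (both assumptions on my part):
import redis.clients.jedis.Jedis;

Jedis jedis = new Jedis("localhost", 6379); // hypothetical host/port
// RPUSH appends both rows under the compound key from the question
jedis.rpush("user.confirmation.6869427a27e784f7e7cbb0746714c27d",
        "user.confirmation.6869427a27e784f7e7cbb0746714c27d", "test");
jedis.close();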
I am trying to create a JavaRDD which contains another series of RDDs inside.
RDDMachine.foreach(machine -> startDetectionNow())
Inside, each machine starts a query to ES and gets another RDD. I collect all of it (1200 hits) and convert it to lists, and then the machine starts working with those lists.
First: is it possible to do this or not? If not, in which way can I try to do something different?
Let me show what I am trying to do:
SparkConf conf = new SparkConf().setAppName("Algo").setMaster("local");
conf.set("es.index.auto.create", "true");
conf.set("es.nodes", "IP_ES");
conf.set("es.port", "9200");
sparkContext = new JavaSparkContext(conf);
MyAlgoConfig config_algo = new MyAlgoConfig(Detection.byPrevisionMerge);
Machine m1 = new Machine("AL-27", "IP1", config_algo);
Machine m2 = new Machine("AL-20", "IP2", config_algo);
Machine m3 = new Machine("AL-24", "IP3", config_algo);
Machine m4 = new Machine("AL-21", "IP4", config_algo);
ArrayList<Machine> Machines = new ArrayList();
Machines.add(m1);
Machines.add(m2);
Machines.add(m3);
Machines.add(m4);
JavaRDD<Machine> machineRDD = sparkContext.parallelize(Machines);
machineRDD.foreach(machine -> machine.startDetectNow());
I am trying to start my algorithm on each machine, and each algorithm must learn from data located in Elasticsearch.
public boolean startDetectNow() {
// huge ELK request
JavaRDD dataForLearn = Elastic.loadElasticsearch(
Algo.sparkContext
, "logstash-*/Collector"
, Elastic.req_AvgOfCall(
getIP()
, "hour"
, "2016-04-16T00:00:00"
, "2016-06-10T00:00:00"));
JavaRDD<Hit> RDD_hits = Elastic.mapToHit(dataForLearn);
List<Hit> hits = Elastic.RddToListHits(RDD_hits);
So I am trying to fetch all the data of a query in every "Machine".
My question is: is it possible to do this with Spark? Or maybe in another way?
When I run it in Spark, it seems to lock up as soon as the code reaches the second RDD.
And the error message is:
16/08/17 00:17:13 INFO SparkContext: Starting job: collect at Elastic.java:94
16/08/17 00:17:13 INFO DAGScheduler: Got job 1 (collect at Elastic.java:94) with 1 output partitions
16/08/17 00:17:13 INFO DAGScheduler: Final stage: ResultStage 1 (collect at Elastic.java:94)
16/08/17 00:17:13 INFO DAGScheduler: Parents of final stage: List()
16/08/17 00:17:13 INFO DAGScheduler: Missing parents: List()
16/08/17 00:17:13 INFO DAGScheduler: Submitting ResultStage 1 (MapPartitionsRDD[4] at map at Elastic.java:106), which has no missing parents
16/08/17 00:17:13 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 4.3 KB, free 7.7 KB)
16/08/17 00:17:13 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 2.5 KB, free 10.2 KB)
16/08/17 00:17:13 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on localhost:46356 (size: 2.5 KB, free: 511.1 MB)
16/08/17 00:17:13 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1006
16/08/17 00:17:13 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1 (MapPartitionsRDD[4] at map at Elastic.java:106)
16/08/17 00:17:13 INFO TaskSchedulerImpl: Adding task set 1.0 with 1 tasks
^C16/08/17 00:17:22 INFO SparkContext: Invoking stop() from shutdown hook
16/08/17 00:17:22 INFO SparkUI: Stopped Spark web UI at http://192.168.10.23:4040
16/08/17 00:17:22 INFO DAGScheduler: ResultStage 0 (foreach at GuardConnect.java:60) failed in 10,292 s
16/08/17 00:17:22 INFO DAGScheduler: Job 0 failed: foreach at GuardConnect.java:60, took 10,470974 s
Exception in thread "main" org.apache.spark.SparkException: Job 0 cancelled because SparkContext was shut down
at org.apache.spark.scheduler.DAGScheduler$$anonfun$cleanUpAfterSchedulerStop$1.apply(DAGScheduler.scala:806)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$cleanUpAfterSchedulerStop$1.apply(DAGScheduler.scala:804)
at scala.collection.mutable.HashSet.foreach(HashSet.scala:79)
at org.apache.spark.scheduler.DAGScheduler.cleanUpAfterSchedulerStop(DAGScheduler.scala:804)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onStop(DAGScheduler.scala:1658)
at org.apache.spark.util.EventLoop.stop(EventLoop.scala:84)
at org.apache.spark.scheduler.DAGScheduler.stop(DAGScheduler.scala:1581)
at org.apache.spark.SparkContext$$anonfun$stop$9.apply$mcV$sp(SparkContext.scala:1740)
at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1229)
at org.apache.spark.SparkContext.stop(SparkContext.scala:1739)
at org.apache.spark.SparkContext$$anonfun$3.apply$mcV$sp(SparkContext.scala:596)
at org.apache.spark.util.SparkShutdownHook.run(ShutdownHookManager.scala:267)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ShutdownHookManager.scala:239)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:239)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:239)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1765)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply$mcV$sp(ShutdownHookManager.scala:239)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:239)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:239)
at scala.util.Try$.apply(Try.scala:161)
at org.apache.spark.util.SparkShutdownHookManager.runAll(ShutdownHookManager.scala:239)
at org.apache.spark.util.SparkShutdownHookManager$$anon$2.run(ShutdownHookManager.scala:218)
at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1832)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1845)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1858)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1929)
at org.apache.spark.rdd.RDD$$anonfun$foreach$1.apply(RDD.scala:912)
at org.apache.spark.rdd.RDD$$anonfun$foreach$1.apply(RDD.scala:910)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
at org.apache.spark.rdd.RDD.foreach(RDD.scala:910)
at org.apache.spark.api.java.JavaRDDLike$class.foreach(JavaRDDLike.scala:332)
at org.apache.spark.api.java.AbstractJavaRDDLike.foreach(JavaRDDLike.scala:46)
at com.seigneurin.spark.GuardConnect.main(GuardConnect.java:60)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
16/08/17 00:17:22 ERROR LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerStageCompleted(org.apache.spark.scheduler.StageInfo@4a7e0846)
16/08/17 00:17:22 INFO DAGScheduler: ResultStage 1 (collect at Elastic.java:94) failed in 9,301 s
16/08/17 00:17:22 ERROR LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerStageCompleted(org.apache.spark.scheduler.StageInfo@6c6b4cb8)
16/08/17 00:17:22 ERROR LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerJobEnd(0,1471385842813,JobFailed(org.apache.spark.SparkException: Job 0 cancelled because SparkContext was shut down))
16/08/17 00:17:22 INFO DAGScheduler: Job 1 failed: collect at Elastic.java:94, took 9,317650 s
16/08/17 00:17:22 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
org.apache.spark.SparkException: Job 1 cancelled because SparkContext was shut down
at org.apache.spark.scheduler.DAGScheduler$$anonfun$cleanUpAfterSchedulerStop$1.apply(DAGScheduler.scala:806)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$cleanUpAfterSchedulerStop$1.apply(DAGScheduler.scala:804)
at scala.collection.mutable.HashSet.foreach(HashSet.scala:79)
at org.apache.spark.scheduler.DAGScheduler.cleanUpAfterSchedulerStop(DAGScheduler.scala:804)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onStop(DAGScheduler.scala:1658)
at org.apache.spark.util.EventLoop.stop(EventLoop.scala:84)
at org.apache.spark.scheduler.DAGScheduler.stop(DAGScheduler.scala:1581)
at org.apache.spark.SparkContext$$anonfun$stop$9.apply$mcV$sp(SparkContext.scala:1740)
at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1229)
at org.apache.spark.SparkContext.stop(SparkContext.scala:1739)
at org.apache.spark.SparkContext$$anonfun$3.apply$mcV$sp(SparkContext.scala:596)
at org.apache.spark.util.SparkShutdownHook.run(ShutdownHookManager.scala:267)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ShutdownHookManager.scala:239)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:239)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:239)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1765)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply$mcV$sp(ShutdownHookManager.scala:239)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:239)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:239)
at scala.util.Try$.apply(Try.scala:161)
at org.apache.spark.util.SparkShutdownHookManager.runAll(ShutdownHookManager.scala:239)
at org.apache.spark.util.SparkShutdownHookManager$$anon$2.run(ShutdownHookManager.scala:218)
at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1832)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1845)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1858)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1929)
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:927)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
at org.apache.spark.rdd.RDD.collect(RDD.scala:926)
at org.apache.spark.api.java.JavaRDDLike$class.collect(JavaRDDLike.scala:339)
at org.apache.spark.api.java.AbstractJavaRDDLike.collect(JavaRDDLike.scala:46)
at com.seigneurin.spark.Elastic.RddToListHits(Elastic.java:94)
at com.seigneurin.spark.OXO.prepareDataAndLearn(OXO.java:126)
at com.seigneurin.spark.OXO.startDetectNow(OXO.java:148)
at com.seigneurin.spark.GuardConnect.lambda$main$1282d8df$1(GuardConnect.java:60)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$foreach$1.apply(JavaRDDLike.scala:332)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$foreach$1.apply(JavaRDDLike.scala:332)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
at org.apache.spark.rdd.RDD$$anonfun$foreach$1$$anonfun$apply$32.apply(RDD.scala:912)
at org.apache.spark.rdd.RDD$$anonfun$foreach$1$$anonfun$apply$32.apply(RDD.scala:912)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
16/08/17 00:17:22 ERROR LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerJobEnd(1,1471385842814,JobFailed(org.apache.spark.SparkException: Job 1 cancelled because SparkContext was shut down))
16/08/17 00:17:22 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/08/17 00:17:22 INFO MemoryStore: MemoryStore cleared
16/08/17 00:17:22 INFO BlockManager: BlockManager stopped
16/08/17 00:17:22 INFO BlockManagerMaster: BlockManagerMaster stopped
16/08/17 00:17:22 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
16/08/17 00:17:22 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
16/08/17 00:17:22 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
16/08/17 00:17:22 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 1, localhost, partition 0,ANY, 6751 bytes)
16/08/17 00:17:22 ERROR Inbox: Ignoring error
java.util.concurrent.RejectedExecutionException: Task org.apache.spark.executor.Executor$TaskRunner@65fd4104 rejected from java.util.concurrent.ThreadPoolExecutor@4387a1bf[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 1]
at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2047)
at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:823)
at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1369)
at org.apache.spark.executor.Executor.launchTask(Executor.scala:128)
at org.apache.spark.scheduler.local.LocalEndpoint$$anonfun$reviveOffers$1.apply(LocalBackend.scala:86)
at org.apache.spark.scheduler.local.LocalEndpoint$$anonfun$reviveOffers$1.apply(LocalBackend.scala:84)
at scala.collection.immutable.List.foreach(List.scala:318)
at org.apache.spark.scheduler.local.LocalEndpoint.reviveOffers(LocalBackend.scala:84)
at org.apache.spark.scheduler.local.LocalEndpoint$$anonfun$receive$1.applyOrElse(LocalBackend.scala:69)
at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:116)
at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204)
at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
16/08/17 00:17:22 INFO SparkContext: Successfully stopped SparkContext
16/08/17 00:17:22 INFO ShutdownHookManager: Shutdown hook called
16/08/17 00:17:22 INFO ShutdownHookManager: Deleting directory /tmp/spark-8bf65e78-a916-4cc0-b4d1-1b0ec9a07157
16/08/17 00:17:22 INFO RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
16/08/17 00:17:22 INFO ShutdownHookManager: Deleting directory /tmp/spark-8bf65e78-a916-4cc0-b4d1-1b0ec9a07157/httpd-6d3aeb80-808c-4749-8f8b-ac9341f990a7
Thanks if you can give me some advice.
You cannot create an RDD inside an RDD, whatever the type of RDD.
This is the first rule. It follows from an RDD being an abstraction pointing at your distributed data: it only lives on the driver, so the SparkContext needed to build one is not available inside executor code such as a foreach lambda.
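A common restructure, sketched here with the helper names from the question (their exact signatures, and the startDetectNow(hits) overload, are my assumptions), is to drive the per-machine queries from the driver instead of from inside foreach:
// Driver-side loop: each Elasticsearch query becomes its own RDD and job,
// which is legal because the SparkContext is only ever used on the driver.
for (Machine machine : Machines) {
    JavaRDD dataForLearn = Elastic.loadElasticsearch(
            sparkContext,
            "logstash-*/Collector",
            Elastic.req_AvgOfCall(machine.getIP(), "hour",
                    "2016-04-16T00:00:00", "2016-06-10T00:00:00"));
    List<Hit> hits = Elastic.RddToListHits(Elastic.mapToHit(dataForLearn));
    machine.startDetectNow(hits); // hypothetical overload that takes the collected hits
}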