Get value from Redis using Jmeter Redis Data Set - java

I am trying to get a value from Redis using the Redis Data Set plugin in JMeter. If the Redis key is simple (as in this example: https://www.youtube.com/watch?v=u0vu3tfrdKc), its value is extracted without any problems. In my case, the value is stored under a compound key such as user.confirmation.6869427a27e784f7e7cbb0746714c27d. When I use it as the value of "Redis Key:" in the Redis Data Set config, the following messages appear, the script does not run, and JMeter returns no value for the key:
2017/02/11 12:57:57 INFO - jmeter.engine.StandardJMeterEngine: Running the test!
2017/02/11 12:57:57 INFO - jmeter.samplers.SampleEvent: List of sample_variables: []
2017/02/11 12:57:57 INFO - jmeter.gui.util.JMeterMenuBar: setRunning(true,*local*)
2017/02/11 12:57:58 INFO - jmeter.engine.StandardJMeterEngine: Starting ThreadGroup: 1 : Thread Group User Service
2017/02/11 12:57:58 INFO - jmeter.engine.StandardJMeterEngine: Starting 1 threads for group Thread Group User Service.
2017/02/11 12:57:58 INFO - jmeter.engine.StandardJMeterEngine: Thread will start next loop on error
2017/02/11 12:57:58 INFO - jmeter.threads.ThreadGroup: Starting thread group number 1 threads 1 ramp-up 1 perThread 1000.0 delayedStart=false
2017/02/11 12:57:58 INFO - jmeter.threads.ThreadGroup: Started thread group number 1
2017/02/11 12:57:58 INFO - jmeter.engine.StandardJMeterEngine: All thread groups have been started
2017/02/11 12:57:58 INFO - jmeter.threads.JMeterThread: Thread started: Thread Group User Service 1-1
2017/02/11 12:57:58 INFO - jmeter.threads.JMeterThread: Stop Thread seen: org.apache.jorphan.util.JMeterStopThreadException: End of redis data detected, thread will exit
2017/02/11 12:57:58 INFO - jmeter.threads.JMeterThread: Thread finished: Thread Group User Service 1-1
2017/02/11 12:57:58 INFO - jmeter.engine.StandardJMeterEngine: Notifying test listeners of end of test
2017/02/11 12:57:58 INFO - jmeter.gui.util.JMeterMenuBar: setRunning(false,*local*)
Besides, there is no problem retrieving the value in the Redis console itself.
Attempts to escape the dots in the key were to no avail as well.
I look forward to any comments.

To test, I created a Redis (key,value) set like this:
key: user.confirmation.6869427a27e784f7e7cbb0746714c27d
row1: user.confirmation.6869427a27e784f7e7cbb0746714c27d
row2: test
And I could retrieve both rows with the Redis Data Set, so it seems the issue is not related to the long key name. More likely, the key name in your Redis data store does not exactly match the one configured in JMeter, which is why JMeter complains: "End of redis data detected, thread will exit"
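A common cause of such a mismatch is an invisible character (a trailing space, tab, or control character) picked up when copying the key. A quick plain-Java check (a hypothetical helper of my own, not part of JMeter or the plugin) makes such characters visible so the configured key can be compared byte-for-byte with the one in Redis:

```java
public class KeySanityCheck {
    // Returns a printable dump of the key, exposing any hidden
    // control characters or non-space whitespace that would make
    // the JMeter key differ from the one stored in Redis.
    static String revealHiddenChars(String key) {
        StringBuilder sb = new StringBuilder();
        for (char c : key.toCharArray()) {
            if (c < 0x20 || c == 0x7f || (Character.isWhitespace(c) && c != ' ')) {
                sb.append(String.format("\\u%04x", (int) c));
            } else {
                sb.append(c);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // A key copied with an accidental trailing tab.
        String fromJMeter = "user.confirmation.6869427a27e784f7e7cbb0746714c27d\t";
        System.out.println(revealHiddenChars(fromJMeter));
    }
}
```

If the dump shows anything beyond the expected characters, re-type the key by hand in the Redis Data Set element.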

Related

Activiti Job Executor problem with async serviceTasks (activiti >= 5.17)

Please consider the following diagram
MyProcess.bpmn
<?xml version="1.0" encoding="UTF-8"?>
<definitions xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:activiti="http://activiti.org/bpmn" xmlns:bpmndi="http://www.omg.org/spec/BPMN/20100524/DI" xmlns:omgdc="http://www.omg.org/spec/DD/20100524/DC" xmlns:omgdi="http://www.omg.org/spec/DD/20100524/DI" typeLanguage="http://www.w3.org/2001/XMLSchema" expressionLanguage="http://www.w3.org/1999/XPath" targetNamespace="http://www.activiti.org/test">
  <process id="myProcess" name="My process" isExecutable="true">
    <startEvent id="startevent1" name="Start"></startEvent>
    <userTask id="evl" name="Evaluation"></userTask>
    <boundaryEvent id="timer_event_autocomplete" name="Timer" attachedToRef="evl" cancelActivity="false">
      <timerEventDefinition>
        <timeDate>PT2S</timeDate>
      </timerEventDefinition>
    </boundaryEvent>
    <serviceTask id="timer_service" name="Timed Autocomplete" activiti:async="true" activiti:class="com.example.service.TimerService"></serviceTask>
    <serviceTask id="store_docs_service" name="Store Documents" activiti:async="true" activiti:class="com.example.service.StoreDocsService"></serviceTask>
    <sequenceFlow id="flow1" sourceRef="startevent1" targetRef="evl"></sequenceFlow>
    <sequenceFlow id="flow2" sourceRef="timer_event_autocomplete" targetRef="timer_service"></sequenceFlow>
    <sequenceFlow id="flow3" sourceRef="evl" targetRef="store_docs_service"></sequenceFlow>
    <sequenceFlow id="flow4" sourceRef="store_docs_service" targetRef="endevent1"></sequenceFlow>
    <endEvent id="endevent1" name="End"></endEvent>
  </process>
</definitions>
To describe it in words: there is one user task (Evaluation) with a timer boundary event attached to it (configured to trigger after 2 seconds). When the timer triggers, the Timed Autocomplete async service task's Java delegate, TimerService, completes the user task (Evaluation). Completing the user task moves the flow to the other async service task (Store Documents), whose Java delegate, StoreDocsService, is called, and the flow ends.
TimerService.java
import org.activiti.engine.delegate.DelegateExecution;
import org.activiti.engine.delegate.JavaDelegate;
import org.activiti.engine.task.Task;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class TimerService implements JavaDelegate {
    private static final Logger LOGGER = LoggerFactory.getLogger(TimerService.class);

    @Override
    public void execute(DelegateExecution execution) throws Exception {
        LOGGER.info("*** Executing Timer autocomplete ***");
        Task task = execution.getEngineServices().getTaskService().createTaskQuery().active().singleResult();
        execution.getEngineServices().getTaskService().complete(task.getId());
        LOGGER.info("*** Task: {} autocompleted by timer ***", task.getId());
    }
}
StoreDocsService.java
import org.activiti.engine.delegate.DelegateExecution;
import org.activiti.engine.delegate.JavaDelegate;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class StoreDocsService implements JavaDelegate {
    private static final Logger LOGGER = LoggerFactory.getLogger(StoreDocsService.class);

    @Override
    public void execute(DelegateExecution execution) throws Exception {
        LOGGER.info("*** Executing Store Documents ***");
    }
}
App.java
public class App {
    public static void main(String[] args) throws Exception {
        // DefaultAsyncJobExecutor demoAsyncJobExecutor = new DefaultAsyncJobExecutor();
        // demoAsyncJobExecutor.setCorePoolSize(10);
        // demoAsyncJobExecutor.setMaxPoolSize(50);
        // demoAsyncJobExecutor.setKeepAliveTime(10000);
        // demoAsyncJobExecutor.setMaxAsyncJobsDuePerAcquisition(50);
        ProcessEngineConfiguration cfg = new StandaloneProcessEngineConfiguration()
                .setJdbcUrl("jdbc:h2:mem:activiti;DB_CLOSE_DELAY=1000")
                .setJdbcUsername("sa")
                .setJdbcPassword("")
                .setJdbcDriver("org.h2.Driver")
                .setDatabaseSchemaUpdate(ProcessEngineConfiguration.DB_SCHEMA_UPDATE_TRUE)
                // .setAsyncExecutorActivate(true)
                // .setAsyncExecutorEnabled(true)
                // .setAsyncExecutor(demoAsyncJobExecutor)
                .setJobExecutorActivate(true);
        ProcessEngine processEngine = cfg.buildProcessEngine();
        String pName = processEngine.getName();
        String ver = ProcessEngine.VERSION;
        System.out.println("ProcessEngine [" + pName + "] Version: [" + ver + "]");
        RepositoryService repositoryService = processEngine.getRepositoryService();
        Deployment deployment = repositoryService.createDeployment()
                .addClasspathResource("MyProcess.bpmn").deploy();
        ProcessDefinition processDefinition = repositoryService.createProcessDefinitionQuery()
                .deploymentId(deployment.getId()).singleResult();
        System.out.println("Found process definition ["
                + processDefinition.getName() + "] with id ["
                + processDefinition.getId() + "]");
        final Map<String, Object> variables = new HashMap<String, Object>();
        final RuntimeService runtimeService = processEngine.getRuntimeService();
        ProcessInstance id = runtimeService.startProcessInstanceByKey("myProcess", variables);
        System.out.println("Started Process Id: " + id.getId());
        try {
            final TaskService taskService = processEngine.getTaskService();
            // List<Task> tasks = taskService.createTaskQuery().active().list();
            // while (!tasks.isEmpty()) {
            //     Task task = tasks.get(0);
            //     taskService.complete(task.getId());
            //     tasks = taskService.createTaskQuery().active().list();
            // }
        } catch (Exception e) {
            System.out.println(e.getMessage());
        } finally {
        }
        while (!runtimeService.createExecutionQuery().list().isEmpty()) {
        }
        processEngine.close();
    }
}
Activiti 5.15
When the timer triggers, the above diagram executes as described. We use Activiti's DefaultJobExecutor.
As we can see in the logs:
[main] INFO org.activiti.engine.impl.ProcessEngineImpl - ProcessEngine default created
[main] INFO org.activiti.engine.impl.jobexecutor.JobExecutor - Starting up the JobExecutor[org.activiti.engine.impl.jobexecutor.DefaultJobExecutor].
[Thread-1] INFO org.activiti.engine.impl.jobexecutor.AcquireJobsRunnable - JobExecutor[org.activiti.engine.impl.jobexecutor.DefaultJobExecutor] starting to acquire jobs
ProcessEngine [default] Version: [5.15]
[main] INFO org.activiti.engine.impl.bpmn.deployer.BpmnDeployer - Processing resource MyProcess.bpmn
Found process definition [My process] with id [myProcess:1:3]
Started Process Id: 4
[pool-1-thread-1] INFO com.example.service.TimerService - *** Executing Timer autocomplete ***
[pool-1-thread-1] INFO com.example.service.TimerService - *** Task: 9 autocompleted by timer ***
[pool-1-thread-1] INFO com.example.service.StoreDocsService - *** Executing Store Documents ***
[main] INFO org.activiti.engine.impl.jobexecutor.JobExecutor - Shutting down the JobExecutor[org.activiti.engine.impl.jobexecutor.DefaultJobExecutor].
[Thread-1] INFO org.activiti.engine.impl.jobexecutor.AcquireJobsRunnable - JobExecutor[org.activiti.engine.impl.jobexecutor.DefaultJobExecutor] stopped job acquisition
Activiti >= 5.17
Changing only the Activiti version in pom.xml to 5.17.0 or later (tested up to 5.22.0) and executing the same code, the flow executes the timer's Java delegate, TimerService, which completes the user task (Evaluation), but the Store Documents Java delegate, StoreDocsService, is never called. Moreover, the flow never ends; the execution remains stuck at the Store Documents async service task.
Logs:
[main] INFO org.activiti.engine.impl.ProcessEngineImpl - ProcessEngine default created
[main] INFO org.activiti.engine.impl.jobexecutor.JobExecutor - Starting up the JobExecutor[org.activiti.engine.impl.jobexecutor.DefaultJobExecutor].
[Thread-1] INFO org.activiti.engine.impl.jobexecutor.AcquireJobsRunnableImpl - JobExecutor[org.activiti.engine.impl.jobexecutor.DefaultJobExecutor] starting to acquire jobs
ProcessEngine [default] Version: [5.17.0.2]
[main] INFO org.activiti.engine.impl.bpmn.deployer.BpmnDeployer - Processing resource MyProcess.bpmn
Found process definition [My process] with id [myProcess:1:3]
Started Process Id: 4
[pool-1-thread-2] INFO com.example.service.TimerService - *** Executing Timer autocomplete ***
[pool-1-thread-2] INFO com.example.service.TimerService - *** Task: 9 autocompleted by timer ***
Changing to the Async Job Executor. One feature of the 5.17 release was the new async job executor (however, the old non-async executor remains the default). So I tried to enable the async executor in App.java with the following lines:
DefaultAsyncJobExecutor demoAsyncJobExecutor = new DefaultAsyncJobExecutor();
demoAsyncJobExecutor.setCorePoolSize(10);
demoAsyncJobExecutor.setMaxPoolSize(50);
demoAsyncJobExecutor.setKeepAliveTime(10000);
demoAsyncJobExecutor.setMaxAsyncJobsDuePerAcquisition(50);
ProcessEngineConfiguration cfg = new StandaloneProcessEngineConfiguration()
        .setJdbcUrl("jdbc:h2:mem:activiti;DB_CLOSE_DELAY=1000")
        .setJdbcUsername("sa")
        .setJdbcPassword("")
        .setJdbcDriver("org.h2.Driver")
        .setDatabaseSchemaUpdate(ProcessEngineConfiguration.DB_SCHEMA_UPDATE_TRUE)
        .setAsyncExecutorActivate(true)
        .setAsyncExecutorEnabled(true)
        .setAsyncExecutor(demoAsyncJobExecutor);
The flow seems to execute correctly: StoreDocsService is called after TimerService. But the process never ends (the while(!runtimeService.createExecutionQuery().list().isEmpty()) condition in App.java remains true forever)!
Logs:
[main] INFO org.activiti.engine.impl.ProcessEngineImpl - ProcessEngine default created
[main] INFO org.activiti.engine.impl.asyncexecutor.DefaultAsyncJobExecutor - Starting up the default async job executor [org.activiti.engine.impl.asyncexecutor.DefaultAsyncJobExecutor].
[main] INFO org.activiti.engine.impl.asyncexecutor.DefaultAsyncJobExecutor - Creating thread pool queue of size 100
[main] INFO org.activiti.engine.impl.asyncexecutor.DefaultAsyncJobExecutor - Creating executor service with corePoolSize 10, maxPoolSize 50 and keepAliveTime 10000
[Thread-1] INFO org.activiti.engine.impl.asyncexecutor.AcquireTimerJobsRunnable - {} starting to acquire async jobs due
[Thread-2] INFO org.activiti.engine.impl.asyncexecutor.AcquireAsyncJobsDueRunnable - {} starting to acquire async jobs due
ProcessEngine [default] Version: [5.17.0.2]
[main] INFO org.activiti.engine.impl.bpmn.deployer.BpmnDeployer - Processing resource MyProcess.bpmn
Found process definition [My process] with id [myProcess:1:3]
Started Process Id: 4
[pool-1-thread-2] INFO com.example.service.TimerService - *** Executing Timer autocomplete ***
[pool-1-thread-2] INFO com.example.service.TimerService - *** Task: 9 autocompleted by timer ***
[pool-1-thread-3] INFO com.example.service.StoreDocsService - *** Executing Store Documents ***
!!! UPDATE !!!
Activiti 6.0.0
Tried the same scenario but with Activiti version 6.0.0.
Changes were needed in TimerService, since the EngineServices can no longer be obtained from the DelegateExecution:
import org.activiti.engine.delegate.DelegateExecution;
import org.activiti.engine.delegate.JavaDelegate;
import org.activiti.engine.impl.context.Context;
import org.activiti.engine.task.Task;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class TimerService implements JavaDelegate {
    private static final Logger LOGGER = LoggerFactory.getLogger(TimerService.class);

    @Override
    public void execute(DelegateExecution execution) {
        LOGGER.info("*** Executing Timer autocomplete ***");
        Task task = Context.getProcessEngineConfiguration().getTaskService().createTaskQuery().active().singleResult();
        Context.getProcessEngineConfiguration().getTaskService().complete(task.getId());
        // Task task = execution.getEngineServices().getTaskService().createTaskQuery().active().singleResult();
        // execution.getEngineServices().getTaskService().complete(task.getId());
        LOGGER.info("*** Task: {} autocompleted by timer ***", task.getId());
    }
}
This version ships only the async executor, so the ProcessEngineConfiguration in App.java changes to:
ProcessEngineConfiguration cfg = new StandaloneProcessEngineConfiguration()
        .setJdbcUrl("jdbc:h2:mem:activiti;DB_CLOSE_DELAY=1000")
        .setJdbcUsername("sa")
        .setJdbcPassword("")
        .setJdbcDriver("org.h2.Driver")
        .setDatabaseSchemaUpdate(ProcessEngineConfiguration.DB_SCHEMA_UPDATE_TRUE)
        .setAsyncExecutorActivate(true);
        // .setAsyncExecutorEnabled(true)
        // .setAsyncExecutor(demoAsyncJobExecutor)
        // .setJobExecutorActivate(true)
With version 6.0.0 and the async executor, the process completes successfully, as we can see in the logs:
[main] INFO org.activiti.engine.impl.ProcessEngineImpl - ProcessEngine default created
[main] INFO org.activiti.engine.impl.asyncexecutor.DefaultAsyncJobExecutor - Starting up the default async job executor [org.activiti.engine.impl.asyncexecutor.DefaultAsyncJobExecutor].
[main] INFO org.activiti.engine.impl.asyncexecutor.DefaultAsyncJobExecutor - Creating thread pool queue of size 100
[main] INFO org.activiti.engine.impl.asyncexecutor.DefaultAsyncJobExecutor - Creating executor service with corePoolSize 2, maxPoolSize 10 and keepAliveTime 5000
[Thread-1] INFO org.activiti.engine.impl.asyncexecutor.AcquireAsyncJobsDueRunnable - {} starting to acquire async jobs due
[Thread-2] INFO org.activiti.engine.impl.asyncexecutor.AcquireTimerJobsRunnable - {} starting to acquire async jobs due
[Thread-3] INFO org.activiti.engine.impl.asyncexecutor.ResetExpiredJobsRunnable - {} starting to reset expired jobs
ProcessEngine [default] Version: [6.0.0.4]
Found process definition [My process] with id [myProcess:1:3]
Started Process Id: 4
[activiti-async-job-executor-thread-2] INFO com.example.service.TimerService - *** Executing Timer autocomplete ***
[activiti-async-job-executor-thread-2] INFO com.example.service.TimerService - *** Task: 10 autocompleted by timer ***
[activiti-async-job-executor-thread-2] INFO com.example.service.StoreDocsService - *** Executing Store Documents ***
[main] INFO org.activiti.engine.impl.asyncexecutor.DefaultAsyncJobExecutor - Shutting down the default async job executor [org.activiti.engine.impl.asyncexecutor.DefaultAsyncJobExecutor].
[activiti-reset-expired-jobs] INFO org.activiti.engine.impl.asyncexecutor.ResetExpiredJobsRunnable - {} stopped resetting expired jobs
[activiti-acquire-timer-jobs] INFO org.activiti.engine.impl.asyncexecutor.AcquireTimerJobsRunnable - {} stopped async job due acquisition
[activiti-acquire-async-jobs] INFO org.activiti.engine.impl.asyncexecutor.AcquireAsyncJobsDueRunnable - {} stopped async job due acquisition
Process finished with exit code 0
Two questions:
We have upgraded from Activiti 5.15 to 5.22.0 and we do not use the async job executor. Is there any way to keep this piece of the diagram behaving as it did in 5.15?
If switching to the async job executor is inevitable, what are we missing in order to make this process complete successfully?
A sample project of the above can be found at: https://github.com/pleft/DemoActiviti
Without answering your question explicitly, which would require setting up your environment and debugging, I would recommend that you at the very least move to Activiti 6.
The 5.x branch of Activiti hasn't been maintained for over 5 years and is effectively dead.
Even the 6.x line has pretty much been abandoned as the core developers have all moved to the "Flowable" project.
If you choose to stay with Activiti 5.x, your options are:
Maintain the codebase yourself (and hopefully contribute any changes/enhancements back to the project).
Contract Activiti support services. There are a couple of vendors offering such services.
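As a side note, the while(!runtimeService.createExecutionQuery().list().isEmpty()) {} wait in App.java busy-spins a CPU core while the process runs. When debugging "never ends" situations like this one, a bounded polling helper is gentler and fails fast instead of hanging forever. A plain-Java sketch (class and method names are my own, not Activiti API):

```java
import java.util.concurrent.TimeUnit;
import java.util.function.BooleanSupplier;

public class Await {
    // Polls the condition until it becomes true or the timeout elapses.
    // Returns true if the condition was met within the timeout.
    static boolean until(BooleanSupplier condition, long timeoutMs, long pollMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return true;
            }
            TimeUnit.MILLISECONDS.sleep(pollMs); // yield the CPU between checks
        }
        return condition.getAsBoolean(); // one last check at the deadline
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // Demo condition that becomes true after roughly 200 ms.
        boolean done = until(() -> System.currentTimeMillis() - start > 200, 5000, 50);
        System.out.println("condition met: " + done);
    }
}
```

In App.java the condition would be () -> runtimeService.createExecutionQuery().list().isEmpty(), with a timeout so a stuck process instance fails the run instead of hanging it.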

Unable to reproduce org.apache.hadoop.mapred.Child: Error running child : java.lang.OutOfMemoryError: Java heap space ERROR

We received an error report from the customer about an Oozie job failing with an OutOfMemory issue. The Oozie job has three to four actions, one of which is a Hive action.
The Hive action performs a join, which in turn does a full table scan. Due to maintenance activity at the customer's end, periodic purging did not happen and data accumulated for a few days, which led to the Hive action scanning several additional days of data.
Below is the stack trace of the error:
2018-06-15 00:54:28,977 INFO org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader: Processing file hdfs://xxx:8020/data/csv/7342/2018-06-14/17/1/Network_xxx.dat
2018-06-15 00:54:29,005 INFO org.apache.hadoop.hive.ql.exec.MapOperator: Processing alias ntwk for file hdfs://xxx:8020/data/csv/7342/2018-06-14/17/1
2018-06-15 00:55:04,029 INFO org.apache.hadoop.hive.ql.exec.MapOperator: 7 finished. closing...
2018-06-15 00:55:04,129 INFO org.apache.hadoop.hive.ql.exec.MapOperator: 7 forwarded 6672342 rows
2018-06-15 00:55:04,266 INFO org.apache.hadoop.hive.ql.exec.TableScanOperator: 3 finished. closing...
2018-06-15 00:55:04,266 INFO org.apache.hadoop.hive.ql.exec.TableScanOperator: 3 forwarded 0 rows
2018-06-15 00:55:04,513 INFO org.apache.hadoop.hive.ql.exec.ReduceSinkOperator: 2 finished. closing...
2018-06-15 00:55:04,538 INFO org.apache.hadoop.hive.ql.exec.ReduceSinkOperator: 2 forwarded 0 rows
2018-06-15 00:55:04,563 INFO org.apache.hadoop.hive.ql.exec.TableScanOperator: 3 Close done
2018-06-15 00:55:04,589 INFO org.apache.hadoop.hive.ql.exec.MapOperator: DESERIALIZE_ERRORS:0
2018-06-15 00:55:04,616 INFO org.apache.hadoop.hive.ql.exec.TableScanOperator: 1 finished. closing...
2018-06-15 00:55:04,641 INFO org.apache.hadoop.hive.ql.exec.TableScanOperator: 1 forwarded 6672342 rows
2018-06-15 00:55:04,666 INFO org.apache.hadoop.hive.ql.exec.ReduceSinkOperator: 0 finished. closing...
2018-06-15 00:55:04,691 INFO org.apache.hadoop.hive.ql.exec.ReduceSinkOperator: 0 forwarded 0 rows
2018-06-15 00:55:04,716 INFO org.apache.hadoop.hive.ql.exec.TableScanOperator: 1 Close done
2018-06-15 00:55:04,741 INFO org.apache.hadoop.hive.ql.exec.MapOperator: 7 Close done
2018-06-15 00:55:04,792 INFO ExecMapper: ExecMapper: processed 6672342 rows: used memory = 412446808
2018-06-15 00:55:10,316 INFO org.apache.hadoop.mapred.TaskLogsTruncater: Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1
2018-06-15 00:55:10,852 FATAL org.apache.hadoop.mapred.Child: Error running child : java.lang.OutOfMemoryError: Java heap space
at org.apache.hadoop.io.compress.DecompressorStream.<init>(DecompressorStream.java:50)
at org.apache.hadoop.io.compress.BlockDecompressorStream.<init>(BlockDecompressorStream.java:50)
at org.apache.hadoop.io.compress.SnappyCodec.createInputStream(SnappyCodec.java:173)
at org.apache.hadoop.hive.ql.io.RCFile$Reader.nextKeyBuffer(RCFile.java:1447)
at org.apache.hadoop.hive.ql.io.RCFile$Reader.next(RCFile.java:1602)
at org.apache.hadoop.hive.ql.io.RCFileRecordReader.next(RCFileRecordReader.java:98)
at org.apache.hadoop.hive.ql.io.RCFileRecordReader.next(RCFileRecordReader.java:85)
at org.apache.hadoop.hive.ql.io.RCFileRecordReader.next(RCFileRecordReader.java:39)
at org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:274)
at org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:101)
at org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:41)
at org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.next(HiveContextAwareRecordReader.java:108)
at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.doNextWithExceptionHandler(HadoopShimsSecure.java:329)
at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.next(HadoopShimsSecure.java:247)
at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:215)
at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:200)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:48)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:417)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:332)
at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
at org.apache.hadoop.mapred.Child.main(Child.java:262)
We received statistics about the data volume from the customer. It looks like they get 300 MB of data per day. The failed Hive query processed three days of data, so nearly 1 GB, about 30 million records altogether.
We tried to reproduce the same error in our lab setup. We loaded simulated data amounting to 100 million records (5 GB in total, covering five days), but the Hive query and its background jobs still ran seamlessly.
We are not sure why, with the same map JVM parameters, we cannot reproduce the OutOfMemory error. Note that we do not have the customer's data dump; we are using our own simulated data.
What might be the reason we are not facing the same problem as the customer, despite increasing the data volume five-fold?
Below is the configuration:
mapred.map.child.java.opts : -Xmx512M
mapred.job.reduce.memory.mb : -1
mapred.job.map.memory.mb : -1
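A rough sanity check on the numbers from the ExecMapper log line above (6,672,342 rows, used memory = 412,446,808 bytes): the mapper was holding about 61 bytes of heap per forwarded row, leaving roughly 118 MB of headroom under -Xmx512M. The interpretation below is an assumption on my part: if the customer's rows are wider, or their Snappy-compressed RCFile blocks decompress into larger buffers (the OOM is thrown while allocating a DecompressorStream), the same row count could exhaust the heap where the simulated data does not.

```java
public class HeapEstimate {
    // Bytes of heap in use per forwarded row; integer division is
    // fine for a back-of-the-envelope estimate.
    static long bytesPerRow(long usedHeapBytes, long rows) {
        return usedHeapBytes / rows;
    }

    // Remaining heap headroom in whole megabytes.
    static long headroomMb(long heapLimitBytes, long usedHeapBytes) {
        return (heapLimitBytes - usedHeapBytes) / (1024 * 1024);
    }

    public static void main(String[] args) {
        long rows = 6_672_342L;              // "processed 6672342 rows" from ExecMapper
        long usedHeap = 412_446_808L;        // "used memory = 412446808" from the same line
        long heapLimit = 512L * 1024 * 1024; // mapred.map.child.java.opts: -Xmx512M

        System.out.println("approx. heap per row: " + bytesPerRow(usedHeap, rows) + " bytes");
        System.out.println("headroom under -Xmx512M: " + headroomMb(heapLimit, usedHeap) + " MB");
    }
}
```

This suggests that reproducing the failure depends less on total row count and more on matching the customer's row width, compression codec, and block sizes.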

java.nio.channels.UnresolvedAddressException when tranquility index data to druid

I am trying Tranquility with Druid 0.11 and Kafka. When Tranquility receives new data, it throws the following exception:
2018-01-12 18:27:34,010 [Curator-ServiceCache-0] INFO c.m.c.s.net.finagle.DiscoResolver - Updating instances for service[firehose:druid:overlord:flow-018-0000-0000] to Set(ServiceInstance{name='firehose:druid:overlord:flow-018-0000-0000', id='ea85b248-0c53-4ec1-94a6-517525f72e31', address='druid-md-deployment-7877777bf7-tmmvh.druid-md-hs.default.svc.cluster.local', port=8100, sslPort=-1, payload=null, registrationTimeUTC=1515781653895, serviceType=DYNAMIC, uriSpec=null})
Jan 12, 2018 6:27:37 PM com.twitter.finagle.netty3.channel.ChannelStatsHandler exceptionCaught
WARNING: ChannelStatsHandler caught an exception
java.nio.channels.UnresolvedAddressException
at sun.nio.ch.Net.checkAddress(Net.java:101)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:622)
at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink.connect(NioClientSocketPipelineSink.java:108)
at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink.eventSunk(NioClientSocketPipelineSink.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendDownstream(DefaultChannelPipeline.java:779)
at org.jboss.netty.channel.SimpleChannelHandler.connectRequested(SimpleChannelHandler.java:306)
The worker was created by the Middle Manager:
2018-01-12T18:27:25,704 INFO [WorkerTaskMonitor] io.druid.indexing.worker.WorkerTaskMonitor - Submitting runnable for task[index_realtime_flow_2018-01-12T18:00:00.000Z_0_0]
2018-01-12T18:27:25,719 INFO [WorkerTaskMonitor] io.druid.indexing.worker.WorkerTaskMonitor - Affirmative. Running task [index_realtime_flow_2018-01-12T18:00:00.000Z_0_0]
And Tranquility talks with the Overlord fine, I think, judging by the following logs:
2018-01-12T18:27:25,268 INFO [qtp271944754-62] io.druid.indexing.overlord.TaskLockbox - Adding task[index_realtime_flow_2018-01-12T18:00:00.000Z_0_0] to activeTasks
2018-01-12T18:27:25,272 INFO [TaskQueue-Manager] io.druid.indexing.overlord.TaskQueue - Asking taskRunner to run: index_realtime_flow_2018-01-12T18:00:00.000Z_0_0
2018-01-12T18:27:25,272 INFO [TaskQueue-Manager] io.druid.indexing.overlord.RemoteTaskRunner - Added pending task index_realtime_flow_2018-01-12T18:00:00.000Z_0_0
2018-01-12T18:27:25,279 INFO [rtr-pending-tasks-runner-0] io.druid.indexing.overlord.RemoteTaskRunner - No worker selection strategy set. Using default of [EqualDistributionWorkerSelectStrategy]
2018-01-12T18:27:25,294 INFO [rtr-pending-tasks-runner-0] io.druid.indexing.overlord.RemoteTaskRunner - Coordinator asking Worker[druid-md-deployment-7877777bf7-tmmvh.druid-md-hs.default.svc.cluster.local:8091] to add task[index_realtime_flow_2018-01-12T18:00:00.000Z_0_0]
2018-01-12T18:27:25,334 INFO [rtr-pending-tasks-runner-0] io.druid.indexing.overlord.RemoteTaskRunner - Task index_realtime_flow_2018-01-12T18:00:00.000Z_0_0 switched from pending to running (on [druid-md-deployment-7877777bf7-tmmvh.druid-md-hs.default.svc.cluster.local:8091])
2018-01-12T18:27:25,336 INFO [rtr-pending-tasks-runner-0] io.druid.indexing.overlord.TaskRunnerUtils - Task [index_realtime_flow_2018-01-12T18:00:00.000Z_0_0] status changed to [RUNNING].
2018-01-12T18:27:25,747 INFO [Curator-PathChildrenCache-1] io.druid.indexing.overlord.RemoteTaskRunner - Worker[druid-md-deployment-7877777bf7-tmmvh.druid-md-hs.default.svc.cluster.local:8091] wrote RUNNING status for task [index_realtime_flow_2018-01-12T18:00:00.000Z_0_0] on [TaskLocation{host='null', port=-1, tlsPort=-1}]
2018-01-12T18:27:25,829 INFO [Curator-PathChildrenCache-1] io.druid.indexing.overlord.RemoteTaskRunner - Worker[druid-md-deployment-7877777bf7-tmmvh.druid-md-hs.default.svc.cluster.local:8091] wrote RUNNING status for task [index_realtime_flow_2018-01-12T18:00:00.000Z_0_0] on [TaskLocation{host='druid-md-deployment-7877777bf7-tmmvh.druid-md-hs.default.svc.cluster.local', port=8100, tlsPort=-1}]
2018-01-12T18:27:25,829 INFO [Curator-PathChildrenCache-1] io.druid.indexing.overlord.TaskRunnerUtils - Task [index_realtime_flow_2018-01-12T18:00:00.000Z_0_0] location changed to [TaskLocation{host='druid-md-deployment-7877777bf7-tmmvh.druid-md-hs.default.svc.cluster.local', port=8100, tlsPort=-1}].
What's wrong? I have tried a thousand things and nothing solves it...
Thanks a lot.
You have to have all the Druid cluster host information configured on the servers running Tranquility.
This is because Zookeeper gives you only the DNS names of your Druid cluster, not the IPs.
For example, on a Linux server, add your cluster's host entries to /etc/hosts.
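This can be checked from the Tranquility host before editing /etc/hosts: the hostname the worker advertises in the logs above must resolve there, otherwise the connect surfaces as exactly this UnresolvedAddressException inside Netty/Finagle. A minimal plain-Java resolution check (the worker hostname is copied from the logs above; localhost serves as a control):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class ResolveCheck {
    // Returns true if the host resolves; a failed lookup here is the
    // same failure that the channel later reports as UnresolvedAddressException.
    static boolean resolves(String host) {
        try {
            InetAddress addr = InetAddress.getByName(host);
            System.out.println(host + " -> " + addr.getHostAddress());
            return true;
        } catch (UnknownHostException e) {
            System.out.println(host + " does not resolve");
            return false;
        }
    }

    public static void main(String[] args) {
        String workerHost = args.length > 0 ? args[0]
                : "druid-md-deployment-7877777bf7-tmmvh.druid-md-hs.default.svc.cluster.local";
        resolves("localhost");   // control: should resolve everywhere
        resolves(workerHost);    // fails unless the Kubernetes DNS name is reachable
    }
}
```

If the worker host does not resolve, add it to /etc/hosts (or make the Kubernetes DNS zone visible to the Tranquility host) and retry.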

unable to get graphs using jp gc generator in jmeter

I am unable to view graphs even after installing the plugins from the jmeter-plugins.org site.
I can see the jp@gc graph in the listener, but on running the test only a CSV file is created, not the graphs.
I am not getting any error message, only warnings. I followed all the steps as mentioned in this link.
Below is the error log:
2017/02/22 16:07:49 INFO - jmeter.engine.StandardJMeterEngine: Running the test!
2017/02/22 16:07:49 INFO - jmeter.samplers.SampleEvent: List of sample_variables: []
2017/02/22 16:07:49 INFO - jmeter.gui.util.JMeterMenuBar: setRunning(true,*local*)
2017/02/22 16:07:50 INFO - jmeter.engine.StandardJMeterEngine: Starting ThreadGroup: 1 : Thread Group
2017/02/22 16:07:50 INFO - jmeter.engine.StandardJMeterEngine: Starting 10 threads for group Thread Group.
2017/02/22 16:07:50 INFO - jmeter.engine.StandardJMeterEngine: Thread will continue on error
2017/02/22 16:07:50 INFO - jmeter.threads.ThreadGroup: Starting thread group number 1 threads 10 ramp-up 5 perThread 500.0 delayedStart=false
2017/02/22 16:07:50 INFO - jmeter.threads.ThreadGroup: Started thread group number 1
2017/02/22 16:07:50 INFO - jmeter.engine.StandardJMeterEngine: All thread groups have been started
2017/02/22 16:07:50 INFO - jmeter.threads.JMeterThread: Thread started: Thread Group 1-1
2017/02/22 16:07:50 INFO - jmeter.threads.JMeterThread: Thread started: Thread Group 1-2
2017/02/22 16:07:51 INFO - jmeter.threads.JMeterThread: Thread started: Thread Group 1-3
2017/02/22 16:07:51 INFO - jmeter.threads.JMeterThread: Thread started: Thread Group 1-4
2017/02/22 16:07:52 INFO - jmeter.threads.JMeterThread: Thread started: Thread Group 1-5
2017/02/22 16:07:52 INFO - jmeter.threads.JMeterThread: Thread started: Thread Group 1-6
2017/02/22 16:07:53 INFO - jmeter.threads.JMeterThread: Thread started: Thread Group 1-7
2017/02/22 16:07:53 INFO - jmeter.threads.JMeterThread: Thread started: Thread Group 1-8
2017/02/22 16:07:54 INFO - jmeter.threads.JMeterThread: Thread started: Thread Group 1-9
2017/02/22 16:07:54 INFO - jmeter.threads.JMeterThread: Thread started: Thread Group 1-10
2017/02/22 16:07:57 INFO - jmeter.threads.JMeterThread: Thread is done: Thread Group 1-1
2017/02/22 16:07:57 INFO - jmeter.threads.JMeterThread: Thread finished: Thread Group 1-1
2017/02/22 16:07:58 INFO - jmeter.threads.JMeterThread: Thread is done: Thread Group 1-4
2017/02/22 16:07:58 INFO - jmeter.threads.JMeterThread: Thread finished: Thread Group 1-4
2017/02/22 16:07:59 INFO - jmeter.threads.JMeterThread: Thread is done: Thread Group 1-3
2017/02/22 16:07:59 INFO - jmeter.threads.JMeterThread: Thread finished: Thread Group 1-3
2017/02/22 16:07:59 INFO - jmeter.threads.JMeterThread: Thread is done: Thread Group 1-2
2017/02/22 16:07:59 INFO - jmeter.threads.JMeterThread: Thread finished: Thread Group 1-2
2017/02/22 16:08:00 INFO - jmeter.threads.JMeterThread: Thread is done: Thread Group 1-7
2017/02/22 16:08:00 INFO - jmeter.threads.JMeterThread: Thread finished: Thread Group 1-7
2017/02/22 16:08:00 INFO - jmeter.threads.JMeterThread: Thread is done: Thread Group 1-5
2017/02/22 16:08:00 INFO - jmeter.threads.JMeterThread: Thread finished: Thread Group 1-5
2017/02/22 16:08:00 INFO - jmeter.threads.JMeterThread: Thread is done: Thread Group 1-8
2017/02/22 16:08:00 INFO - jmeter.threads.JMeterThread: Thread finished: Thread Group 1-8
2017/02/22 16:08:01 INFO - jmeter.threads.JMeterThread: Thread is done: Thread Group 1-9
2017/02/22 16:08:01 INFO - jmeter.threads.JMeterThread: Thread finished: Thread Group 1-9
2017/02/22 16:08:01 INFO - jmeter.threads.JMeterThread: Thread is done: Thread Group 1-6
2017/02/22 16:08:01 INFO - jmeter.threads.JMeterThread: Thread finished: Thread Group 1-6
2017/02/22 16:08:01 INFO - jmeter.threads.JMeterThread: Thread is done: Thread Group 1-10
2017/02/22 16:08:01 INFO - jmeter.threads.JMeterThread: Thread finished: Thread Group 1-10
2017/02/22 16:08:01 INFO - jmeter.engine.StandardJMeterEngine: Notifying test listeners of end of test
2017/02/22 16:08:01 INFO - kg.apc.jmeter.PluginsCMDWorker: Using JMeterPluginsCMD v. N/A
2017/02/22 16:08:01 WARN - kg.apc.jmeter.JMeterPluginsUtils: JMeter env exists. No one should see this normally.
2017/02/22 16:08:01 WARN - jmeter.engine.StandardJMeterEngine: Error encountered during shutdown of kg.apc.jmeter.listener.GraphsGeneratorListener#297d7a76 java.lang.RuntimeException: java.lang.ClassNotFoundException: kg.apc.jmeter.vizualizers.SynthesisReportGui
at kg.apc.jmeter.PluginsCMDWorker.getGUIObject(PluginsCMDWorker.java:237)
at kg.apc.jmeter.PluginsCMDWorker.getGUIObject(PluginsCMDWorker.java:234)
at kg.apc.jmeter.PluginsCMDWorker.setPluginType(PluginsCMDWorker.java:73)
at kg.apc.jmeter.listener.GraphsGeneratorListener.testEnded(GraphsGeneratorListener.java:221)
at kg.apc.jmeter.listener.GraphsGeneratorListener.testEnded(GraphsGeneratorListener.java:137)
at org.apache.jmeter.engine.StandardJMeterEngine.notifyTestListenersOfEnd(StandardJMeterEngine.java:215)
at org.apache.jmeter.engine.StandardJMeterEngine.run(StandardJMeterEngine.java:436)
at java.lang.Thread.run(Unknown Source)
Caused by: java.lang.ClassNotFoundException: kg.apc.jmeter.vizualizers.SynthesisReportGui
at java.net.URLClassLoader.findClass(Unknown Source)
at java.lang.ClassLoader.loadClass(Unknown Source)
at java.lang.ClassLoader.loadClass(Unknown Source)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Unknown Source)
at kg.apc.jmeter.PluginsCMDWorker.getGUIObject(PluginsCMDWorker.java:227)
... 7 more
2017/02/22 16:08:01 INFO - jmeter.gui.util.JMeterMenuBar: setRunning(false,*local*)
You need the Synthesis Report plugin, which is a prerequisite for the Graphs Generator; you can install it either manually or using the JMeter Plugins Manager (recommended).
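If you prefer the command line, the Plugins Manager can also install plugins non-interactively. A sketch, assuming the Plugins Manager and cmdrunner jars are already set up under JMETER_HOME, and that jpgc-synthesis is the plugin id for the Synthesis Report bundle:

```shell
# Run from JMETER_HOME/bin; requires jmeter-plugins-manager and
# cmdrunner jars to be present in lib/ext and lib respectively.
./PluginsManagerCMD.sh install jpgc-synthesis
```

After the install, restart JMeter so the new plugin classes are picked up.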

Running N JMeter threads in sequential order

Although I found a similar question, the answer wasn't satisfactory, or perhaps doesn't work in my situation.
I have N threads to run with a ramp-up period of, say, 5. The login credentials for the N users are passed in from a CSV file.
The listener's report shows that thread 38, or some other thread, runs before thread 1, i.e. the first iteration belongs to some thread X (where X != 1). A Loop Controller doesn't seem to be the solution, since my N users are all different. Below is the test report of my test.
Thread         Iteration  Time (ms)  Bytes  Success
ThreadNo 1-38  1          94551      67485  true
ThreadNo 1-69  2          92724      67200  true
ThreadNo 1-58  3          91812      66332  true
ThreadNo 1-12  4          92144      66335  true
ThreadNo 1-18  5          91737      66340  true
ThreadNo 1-17  6          93055      66514  true
So I want iteration 1 to start with thread 1 (ThreadNo 1-1).
Update:
My test plan has the option
Run thread groups consecutively (i.e. run groups one at a time)
selected.
Below is a snapshot of my test plan, followed by the jmeter log:
jmeter.threads.JMeterThread: Thread is done: ThreadAction 1-39
2015/12/14 02:00:37 INFO - jmeter.threads.JMeterThread: Thread finished: ThreadAction 1-39
2015/12/14 02:00:37 INFO - jmeter.threads.JMeterThread: Thread is done: ThreadAction 1-49
2015/12/14 02:00:37 INFO - jmeter.threads.JMeterThread: Thread finished: ThreadAction 1-49
2015/12/14 02:00:37 INFO - jmeter.threads.JMeterThread: Thread is done: ThreadAction 1-38
2015/12/14 02:00:37 INFO - jmeter.threads.JMeterThread: Thread finished: ThreadAction 1-38
2015/12/14 02:00:38 INFO - jmeter.threads.JMeterThread: Thread is done: ThreadAction 1-41
2015/12/14 02:00:38 INFO - jmeter.threads.JMeterThread: Thread finished: ThreadAction 1-41
2015/12/14 02:00:38 INFO - jmeter.threads.JMeterThread: Thread is done: ThreadAction 1-42
2015/12/14 02:00:38 INFO - jmeter.threads.JMeterThread: Thread finished: ThreadAction 1-42
2015/12/14 02:00:38 INFO - jmeter.threads.JMeterThread: Thread is done: ThreadAction 1-34
2015/12/14 02:00:38 INFO - jmeter.threads.JMeterThread: Thread finished: ThreadAction 1-34
2015/12/14 02:00:39 INFO - jmeter.threads.JMeterThread: Thread is done: ThreadAction 1-47
2015/12/14 02:00:39 INFO - jmeter.threads.JMeterThread: Thread finished: ThreadAction 1-47
2015/12/14 02:00:39 INFO - jmeter.threads.JMeterThread: Thread is done: ThreadAction 1-40
I'll tell you a little secret: JMeter does start threads sequentially, you don't need to take any extra action. If you look into the jmeter.log file you'll see something like:
2015/12/15 18:35:31 INFO - jmeter.threads.JMeterThread: Thread started: Thread Group 1-1
2015/12/15 18:35:31 INFO - jmeter.threads.JMeterThread: Thread started: Thread Group 1-2
2015/12/15 18:35:31 INFO - jmeter.threads.JMeterThread: Thread started: Thread Group 1-3
2015/12/15 18:35:31 INFO - jmeter.threads.JMeterThread: Thread started: Thread Group 1-4
2015/12/15 18:35:31 INFO - jmeter.threads.JMeterThread: Thread started: Thread Group 1-5
2015/12/15 18:35:31 INFO - jmeter.threads.JMeterThread: Thread started: Thread Group 1-6
2015/12/15 18:35:31 INFO - jmeter.threads.JMeterThread: Thread started: Thread Group 1-7
2015/12/15 18:35:31 INFO - jmeter.threads.JMeterThread: Thread started: Thread Group 1-8
2015/12/15 18:35:31 INFO - jmeter.threads.JMeterThread: Thread started: Thread Group 1-9
2015/12/15 18:35:31 INFO - jmeter.threads.JMeterThread: Thread started: Thread Group 1-10
What you see in the test report seems to be request completion time, which is sequential only in an ideal world:
2015/12/15 18:39:04 INFO - jmeter.threads.JMeterThread: Thread finished: Thread Group 1-45
2015/12/15 18:39:04 INFO - jmeter.threads.JMeterThread: Thread is done: Thread Group 1-47
2015/12/15 18:39:04 INFO - jmeter.threads.JMeterThread: Thread finished: Thread Group 1-47
2015/12/15 18:39:04 INFO - jmeter.threads.JMeterThread: Thread finished: Thread Group 1-46
2015/12/15 18:39:04 INFO - jmeter.threads.JMeterThread: Thread is done: Thread Group 1-50
2015/12/15 18:39:04 INFO - jmeter.threads.JMeterThread: Thread finished: Thread Group 1-50
2015/12/15 18:39:04 INFO - jmeter.threads.JMeterThread: Thread is done: Thread Group 1-49
2015/12/15 18:39:04 INFO - jmeter.threads.JMeterThread: Thread is done: Thread Group 1-48
2015/12/15 18:39:04 INFO - jmeter.threads.JMeterThread: Thread finished: Thread Group 1-48
2015/12/15 18:39:04 INFO - jmeter.threads.JMeterThread: Thread finished: Thread Group 1-49
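This finish-order shuffling is easy to reproduce outside JMeter. A minimal plain-Java sketch (illustrative only, not JMeter API): threads are started strictly in sequence, but each one finishes whenever its simulated "request" happens to complete.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class StartVsFinishOrder {

    // Starts n threads strictly in order 1..n and returns the order
    // in which they finished. Each thread sleeps for a random time,
    // simulating a request with a variable response time.
    static List<Integer> run(int n) throws InterruptedException {
        List<Integer> finished = Collections.synchronizedList(new ArrayList<>());
        List<Thread> threads = new ArrayList<>();
        Random random = new Random();

        for (int i = 1; i <= n; i++) {
            final int threadNo = i;
            final long workMillis = random.nextInt(50);
            Thread t = new Thread(() -> {
                try {
                    Thread.sleep(workMillis); // simulated request duration
                } catch (InterruptedException ignored) {
                }
                finished.add(threadNo);
            });
            threads.add(t);
            t.start(); // sequential start, like JMeter's ramp-up
        }
        for (Thread t : threads) {
            t.join();
        }
        return finished;
    }

    public static void main(String[] args) throws InterruptedException {
        // The start order is always 1..n; the finish order usually is not.
        System.out.println("Finish order: " + run(10));
    }
}
```

Every thread still runs exactly once; only the completion order is unpredictable, which is exactly what the listener's report reflects.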
If for some reason you still need a certain sampler to be executed by the 1st thread on the 1st iteration, put it under an If Controller and use the following statement as its "Condition":
${__BeanShell(vars.getIteration() == 1)} && ${__threadNum} == 1
It utilises the following JMeter Functions:
__threadNum - to get the current thread number
__BeanShell - to execute an arbitrary Beanshell script, in this case to get the current iteration (this applies to Thread Group iterations only and won't increment for iterations driven by a Loop Controller or similar)
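Expressed in plain Java (illustrative only, not JMeter API), the condition above is just a boolean check over the iteration and thread numbers:

```java
public class FirstThreadFirstIteration {

    // Mirrors the If Controller condition:
    // ${__BeanShell(vars.getIteration() == 1)} && ${__threadNum} == 1
    static boolean shouldRun(int iteration, int threadNum) {
        return iteration == 1 && threadNum == 1;
    }

    public static void main(String[] args) {
        System.out.println(shouldRun(1, 1)); // thread 1, iteration 1 -> true
        System.out.println(shouldRun(1, 2)); // any other thread      -> false
        System.out.println(shouldRun(2, 1)); // later iterations      -> false
    }
}
```

Any sampler placed under the If Controller is skipped whenever this check is false, so it runs exactly once: on the first iteration of the first thread.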
