NullPointerException while running JMeter test in Java

Following is my simple code to run a JMeter test plan in Java 8.
import org.apache.jmeter.control.LoopController;
import org.apache.jmeter.engine.StandardJMeterEngine;
import org.apache.jmeter.protocol.http.sampler.HTTPSampler;
import org.apache.jmeter.testelement.TestPlan;
import org.apache.jmeter.threads.ThreadGroup;
import org.apache.jmeter.util.JMeterUtils;
import org.apache.jorphan.collections.HashTree;

public class JmxSuite {
    public static void main(String[] args) {
        StandardJMeterEngine jmeter = new StandardJMeterEngine();
        JMeterUtils.setJMeterHome("C:\\PLACES\\apache-jmeter-2.11");

        HashTree testPlanTree = new HashTree();

        HTTPSampler httpSampler = new HTTPSampler();
        httpSampler.setDomain("example.com");
        httpSampler.setPort(80);
        httpSampler.setPath("/");
        httpSampler.setMethod("GET");

        LoopController loopController = new LoopController();
        loopController.setLoops(1);
        loopController.addTestElement(httpSampler);
        loopController.setFirst(true);
        loopController.initialize();

        ThreadGroup threadGroup = new ThreadGroup();
        threadGroup.setNumThreads(1);
        threadGroup.setRampUp(1);
        threadGroup.setSamplerController(loopController);

        TestPlan testPlan = new TestPlan("RKSV Jmeter Testing");

        testPlanTree.add("testPlan", testPlan);
        testPlanTree.add("loopController", loopController);
        testPlanTree.add("threadGroup", threadGroup);
        testPlanTree.add("httpSampler", httpSampler);

        jmeter.configure(testPlanTree);
        jmeter.run();
    }
}
But I keep getting the following error while running it.
INFO 2016-04-07 15:40:44.060 [jmeter.e] (): Listeners will be started after enabling running version
INFO 2016-04-07 15:40:44.079 [jmeter.e] (): To revert to the earlier behaviour, define jmeterengine.startlistenerslater=false
INFO 2016-04-07 15:40:44.108 [jmeter.p] (): No response parsers defined: text/html only will be scanned for embedded resources
INFO 2016-04-07 15:40:44.115 [jmeter.p] (): Maximum connection retries = 10
INFO 2016-04-07 15:40:44.121 [jmeter.e] (): Running the test!
INFO 2016-04-07 15:40:44.141 [jmeter.s] (): List of sample_variables: []
INFO 2016-04-07 15:40:44.141 [jmeter.s] (): List of sample_variables: []
Exception in thread "main" java.lang.NullPointerException
at org.apache.jmeter.util.JMeterUtils.setProperty(JMeterUtils.java:885)
at org.apache.jmeter.threads.JMeterContextService.startTest(JMeterContextService.java:92)
at org.apache.jmeter.engine.StandardJMeterEngine.run(StandardJMeterEngine.java:313)
at JmxSuite.main(JmxSuite.java:48)
And the following is my pom.xml:
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>Places</groupId>
    <artifactId>Places</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <dependencies>
        <dependency>
            <groupId>org.apache.jmeter</groupId>
            <artifactId>ApacheJMeter_java</artifactId>
            <version>2.11</version>
        </dependency>
        <dependency>
            <groupId>org.apache.jmeter</groupId>
            <artifactId>ApacheJMeter_http</artifactId>
            <version>2.11</version>
        </dependency>
    </dependencies>
</project>
What could be the possible reason for it?
I have used the code from the 4th topic of this article: 5 ways to run jmeter.

You're missing a few lines from the article; double-check your code, to wit:
JMeterUtils.loadJMeterProperties("C:\\PLACES\\apache-jmeter-2.11\\bin\\jmeter.properties");
JMeterUtils.initLogging();
JMeterUtils.initLocale();
Also, there is an example project, jmeter-from-code, which can be used as a reference or skeleton.
Also consider using the latest version of JMeter; at the moment it's Apache JMeter 2.13.

The error occurs because the internal Properties object of JMeterUtils is null at this point. Add the following line to your code to initialize this object:
JMeterUtils.loadJMeterProperties("/path/to/your/jmeter/bin/jmeter.properties");
EDIT: Instead of doing it all manually, you should call
JMeterUtils.initializeProperties("/path/to/your/jmeter/bin/jmeter.properties");
which is the correct way; it does all the initializing of logging and locale internally.
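For reference, a minimal sketch of the start of the question's main method with that fix applied (the paths are the ones from the question):
StandardJMeterEngine jmeter = new StandardJMeterEngine();
JMeterUtils.setJMeterHome("C:\\PLACES\\apache-jmeter-2.11");
// initializeProperties(...) loads jmeter.properties and performs the logging and
// locale initialization, so the internal Properties object is set before run().
JMeterUtils.initializeProperties("C:\\PLACES\\apache-jmeter-2.11\\bin\\jmeter.properties");
// ... then build the HashTree exactly as before, and call jmeter.configure(testPlanTree) and jmeter.run().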

Related

Reactor Kafka: message consumption always on one thread no matter the number of CPUs on the machine

A small question regarding Reactor Kafka, please.
I have a very straightforward Reactor Kafka project.
package com.example.micrometer;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.messaging.Message;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
import java.util.function.Consumer;

@SpringBootApplication
public class StreamReactiveConsumerApplication implements CommandLineRunner {

    private static final Logger log = LoggerFactory.getLogger(StreamReactiveConsumerApplication.class);

    public static void main(String... args) {
        new SpringApplicationBuilder(StreamReactiveConsumerApplication.class).run(args);
    }

    @Override
    public void run(String... args) {
    }

    @Bean
    Consumer<Flux<Message<String>>> consume() {
        return flux -> flux.flatMap(one -> myHandle(one)).subscribe();
    }

    private Mono<String> myHandle(Message<String> one) {
        log.info("<==== look at this thread" + "\u001B[32m" + one.getPayload() + "\u001B[0m");
        String payload = one.getPayload();
        String decryptedPayload = complexInMemoryDecryption(payload); // this is NON blocking, takes 1 second
        String complexMatrix = convertDecryptedPayloadToGiantMatrix(decryptedPayload); // this is NON blocking, takes 1 second
        String newMatrix = matrixComputation(complexMatrix); // this is NON blocking, takes 1 second
        return myNonBlockingReactiveRepository.save(complexMatrix);
    }
}
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>org.example</groupId>
    <artifactId>streamreactiveconsumer</artifactId>
    <version>1.0-SNAPSHOT</version>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>3.0.2</version>
        <relativePath/>
    </parent>
    <properties>
        <maven.compiler.source>17</maven.compiler.source>
        <maven.compiler.target>17</maven.compiler.target>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    </properties>
    <dependencyManagement>
        <dependencies>
            <dependency>
                <groupId>org.springframework.cloud</groupId>
                <artifactId>spring-cloud-dependencies</artifactId>
                <version>2022.0.1</version>
                <type>pom</type>
                <scope>import</scope>
            </dependency>
        </dependencies>
    </dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-stream-binder-kafka</artifactId>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
</project>
(Note, it is not a Spring Kafka project, not a Spring Cloud Stream project)
I am consuming from a topic with 3 partitions. The rate of the messages sent is one message per second.
The consumption and processing takes about 3 seconds per message.
Important: please note the processing does not contain any blocking operation. It is a giant in-memory decryption plus a giant matrix computation. It is BlockHound-tested NON blocking.
Actual:
When I consume the messages with Reactor Kafka, the whole consumption happens on one thread only: everything happens on container-0-C-1, tested on hardware with 2 CPUs, 4 CPUs, and 8 CPUs.
2023-02-06 10:42:59 8384 INFO --- [KafkaConsumerDestination{consumerDestinationName='prod_audit_hdfs', partitions=3, dlqName='null'}.container-0-C-1] [stream-reactive-consumer,,] c.e.m.StreamReactiveConsumerApplication :
2023-02-06 10:42:59 8384 INFO --- [KafkaConsumerDestination{consumerDestinationName='prod_audit_hdfs', partitions=3, dlqName='null'}.container-0-C-1] [stream-reactive-consumer,,] c.e.m.StreamReactiveConsumerApplication :
2023-02-06 10:42:59 8384 INFO --- [KafkaConsumerDestination{consumerDestinationName='prod_audit_hdfs', partitions=3, dlqName='null'}.container-0-C-1] [stream-reactive-consumer,,] c.e.m.StreamReactiveConsumerApplication :
Expected:
We migrated from an HTTP WebFlux based application to Kafka consumption. The business logic did not change one bit.
With the Reactor Netty Spring WebFlux application, we could see processing happening on multiple threads, corresponding to the Reactor cores. On a machine with many cores, this could keep up easily.
[or-http-epoll-1] [or-http-epoll-2] [or-http-epoll-3] [or-http-epoll-4]
The processing would just switch between any of those reactor-http-epoll-N threads. I could see that while reactor-http-epoll-1 was handling the complex in-memory computation for the first message, reactor-http-epoll-3 would handle the computation for the second message, and so on. The parallelism was clear.
I understand there are ways to "scale" this application, but this is a question about Reactor Kafka itself.
I expected the messages to be handled in parallel: some kind of container-0-C-1 for the first message, container-0-C-2 for the second message, and so on.
How can I achieve that please?
What am I missing?
Thank you
Typically in Kafka consumers it's a good idea to separate the polling cycle from the processing logic. (There is also an I/O thread that is native to the KafkaConsumer.) This architecture is sometimes called a "consumer with pipelining". In it, the polling thread continuously fetches records from Kafka and "feeds" them to some bounded buffer/queue (e.g. an ArrayBlockingQueue or LinkedBlockingQueue). On the other side, processing threads take records from the queue and process them. This decouples the polling logic from the processing while implementing buffering and backpressure; a rough sketch follows.
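A rough sketch of that pattern (illustrative only: it assumes an already-configured KafkaConsumer<String, String> named consumer and an application-specific process(...) method; the pool size and queue capacity are arbitrary):
// Imports assumed: org.apache.kafka.clients.consumer.ConsumerRecord,
// java.time.Duration, java.util.concurrent.*
BlockingQueue<ConsumerRecord<String, String>> queue = new ArrayBlockingQueue<>(1000);
ExecutorService workers = Executors.newFixedThreadPool(4);
for (int i = 0; i < 4; i++) {
    workers.submit(() -> {
        // Each worker blocks on take() until a record is available.
        while (!Thread.currentThread().isInterrupted()) {
            process(queue.take());
        }
        return null;
    });
}
// Polling loop: put(...) blocks when the queue is full, which is what
// provides backpressure from the processing side back to the polling cycle.
while (true) {
    for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofMillis(500))) {
        queue.put(rec);
    }
}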
Reactor Kafka is built on top of the KafkaConsumer API and uses a similar architecture, implementing reactive streams with backpressure. KafkaReceiver provides the polling cycle and, by default, publishes fetched records on a Schedulers.single thread.
Now, depending on your logic, you could process data and commit offsets sequentially or in parallel. For concurrent processing use flatMap, which by default processes up to 256 records in parallel; this can be controlled via the concurrency parameter.
kafkaReceiver.receive()
    .flatMap(rec -> process(rec), concurrency)
If you add logging, you will see that all records are received on kafka-receiver-2 but processed on different parallel-# threads. Note that records are received in order per partition.
12:50:08.347 [kafka-receiver-2] INFO [c.e.d.KafkaConsumerTest] - receive: value-2, partition: 0
12:50:08.349 [kafka-receiver-2] INFO [c.e.d.KafkaConsumerTest] - receive: value-3, partition: 0
12:50:08.350 [kafka-receiver-2] INFO [c.e.d.KafkaConsumerTest] - receive: value-4, partition: 0
12:50:08.350 [kafka-receiver-2] INFO [c.e.d.KafkaConsumerTest] - receive: value-6, partition: 0
12:50:08.351 [kafka-receiver-2] INFO [c.e.d.KafkaConsumerTest] - receive: value-9, partition: 0
12:50:08.353 [kafka-receiver-2] INFO [c.e.d.KafkaConsumerTest] - receive: value-0, partition: 2
12:50:08.354 [kafka-receiver-2] INFO [c.e.d.KafkaConsumerTest] - receive: value-8, partition: 2
12:50:08.355 [kafka-receiver-2] INFO [c.e.d.KafkaConsumerTest] - receive: value-1, partition: 1
12:50:08.356 [kafka-receiver-2] INFO [c.e.d.KafkaConsumerTest] - receive: value-5, partition: 1
12:50:08.358 [kafka-receiver-2] INFO [c.e.d.KafkaConsumerTest] - receive: value-7, partition: 1
12:50:09.353 [parallel-3] INFO [c.e.d.KafkaConsumerTest] - process: value-2, partition: 0
12:50:09.353 [parallel-6] INFO [c.e.d.KafkaConsumerTest] - process: value-6, partition: 0
12:50:09.353 [parallel-4] INFO [c.e.d.KafkaConsumerTest] - process: value-3, partition: 0
12:50:09.353 [parallel-5] INFO [c.e.d.KafkaConsumerTest] - process: value-4, partition: 0
12:50:09.355 [parallel-7] INFO [c.e.d.KafkaConsumerTest] - process: value-9, partition: 0
12:50:09.360 [parallel-10] INFO [c.e.d.KafkaConsumerTest] - process: value-1, partition: 1
12:50:09.360 [parallel-9] INFO [c.e.d.KafkaConsumerTest] - process: value-8, partition: 2
12:50:09.360 [parallel-8] INFO [c.e.d.KafkaConsumerTest] - process: value-0, partition: 2
12:50:09.361 [parallel-11] INFO [c.e.d.KafkaConsumerTest] - process: value-5, partition: 1
12:50:09.361 [parallel-12] INFO [c.e.d.KafkaConsumerTest] - process: value-7, partition: 1
In other words, this is by design and you should not worry about the polling logic. You can scale processing by increasing the parallelism of flatMap.
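Applied to the consumer in the question, a minimal sketch could look like the following. The concurrency value of 4 is an arbitrary example, and Mono.defer(...).subscribeOn(Schedulers.parallel()) is one possible way (an assumption, not the only option) to move the CPU-bound work inside myHandle off the single receiver thread; Schedulers comes from reactor.core.scheduler.
@Bean
Consumer<Flux<Message<String>>> consume() {
    return flux -> flux
            // Defer each handler call and subscribe it on the parallel scheduler,
            // so the in-memory work leaves the container-0-C-1 thread; flatMap
            // then allows up to 4 messages to be in flight concurrently.
            .flatMap(one -> Mono.defer(() -> myHandle(one))
                    .subscribeOn(Schedulers.parallel()), 4)
            .subscribe();
}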

Kinesis Analytics Flink: write Parquet file

Using Amazon Kinesis Analytics with a Java Flink application, I am taking data from a Firehose and trying to write it to an S3 bucket as a series of Parquet files. I am hitting the following exception in my CloudWatch logs, which is the only error I can see that might be related.
I have enabled checkpointing as specified in the documentation and included the Flink/Avro dependencies. Running this locally works: the Parquet files are written to local disk when a checkpoint is reached.
The exception
"message": "Exception type is USER from filter results [UserClassLoaderExceptionFilter -> USER, UserAPIExceptionFilter -> SKIPPED, UserSerializationExceptionFilter -> SKIPPED, UserFunctionExceptionFilter -> SKIPPED, OutOfMemoryExceptionFilter -> NONE, TooManyOpenFilesExceptionFilter -> NONE, KinesisServiceExceptionFilter -> NONE].",
"throwableInformation": [
"java.lang.Exception: Error while triggering checkpoint 1360 for Source: Custom Source -> Map -> Sink: HelloS3 (1/1)",
"org.apache.flink.runtime.taskmanager.Task$1.run(Task.java:1201)",
"java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)",
"java.util.concurrent.FutureTask.run(FutureTask.java:266)",
"java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)",
"java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)",
"java.lang.Thread.run(Thread.java:748)",
"Caused by: java.lang.AbstractMethodError: org.apache.parquet.hadoop.ColumnChunkPageWriteStore$ColumnChunkPageWriter.writePage(Lorg/apache/parquet/bytes/BytesInput;IILorg/apache/parquet/column/statistics/Statistics;Lorg/apache/parquet/column/Encoding;Lorg/apache/parquet/column/Encoding;Lorg/apache/parquet/column/Encoding;)V",
"org.apache.parquet.column.impl.ColumnWriterV1.writePage(ColumnWriterV1.java:53)",
"org.apache.parquet.column.impl.ColumnWriterBase.writePage(ColumnWriterBase.java:315)",
"org.apache.parquet.column.impl.ColumnWriteStoreBase.flush(ColumnWriteStoreBase.java:152)",
"org.apache.parquet.column.impl.ColumnWriteStoreV1.flush(ColumnWriteStoreV1.java:27)",
"org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:172)",
"org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:114)",
"org.apache.parquet.hadoop.ParquetWriter.close(ParquetWriter.java:308)",
"org.apache.flink.formats.parquet.ParquetBulkWriter.finish(ParquetBulkWriter.java:62)",
"org.apache.flink.streaming.api.functions.sink.filesystem.BulkPartWriter.closeForCommit(BulkPartWriter.java:62)",
"org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.closePartFile(Bucket.java:235)",
"org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.prepareBucketForCheckpointing(Bucket.java:276)",
"org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.onReceptionOfCheckpoint(Bucket.java:249)",
"org.apache.flink.streaming.api.functions.sink.filesystem.Buckets.snapshotActiveBuckets(Buckets.java:244)",
"org.apache.flink.streaming.api.functions.sink.filesystem.Buckets.snapshotState(Buckets.java:235)",
"org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink.snapshotState(StreamingFileSink.java:347)",
"org.apache.flink.streaming.util.functions.StreamingFunctionUtils.trySnapshotFunctionState(StreamingFunctionUtils.java:118)",
"org.apache.flink.streaming.util.functions.StreamingFunctionUtils.snapshotFunctionState(StreamingFunctionUtils.java:99)",
"org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.snapshotState(AbstractUdfStreamOperator.java:90)",
"org.apache.flink.streaming.api.operators.AbstractStreamOperator.snapshotState(AbstractStreamOperator.java:395)",
"org.apache.flink.streaming.runtime.tasks.StreamTask$CheckpointingOperation.checkpointStreamOperator(StreamTask.java:1138)",
"org.apache.flink.streaming.runtime.tasks.StreamTask$CheckpointingOperation.executeCheckpointing(StreamTask.java:1080)",
"org.apache.flink.streaming.runtime.tasks.StreamTask.checkpointState(StreamTask.java:754)",
"org.apache.flink.streaming.runtime.tasks.StreamTask.performCheckpoint(StreamTask.java:666)",
"org.apache.flink.streaming.runtime.tasks.StreamTask.triggerCheckpoint(StreamTask.java:584)",
"org.apache.flink.streaming.runtime.tasks.SourceStreamTask.triggerCheckpoint(SourceStreamTask.java:114)",
"org.apache.flink.runtime.taskmanager.Task$1.run(Task.java:1190)",
"\t... 5 more"
Below are my code snippets. I am seeing my logging when processing the events, and even the logging from the BucketAssigner.
env.setStateBackend(new FsStateBackend("s3a://<BUCKET>/checkpoint"));
env.setParallelism(1);
env.enableCheckpointing(5000, CheckpointingMode.EXACTLY_ONCE);

StreamingFileSink<Metric> sink = StreamingFileSink
        .forBulkFormat(new Path("s3a://<BUCKET>/raw"), ParquetAvroWriters.forReflectRecord(Metric.class))
        .withBucketAssigner(new EventTimeBucketAssigner())
        .build();
My pom:
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-parquet_2.11</artifactId>
    <version>1.11-SNAPSHOT</version>
</dependency>
<dependency>
    <groupId>org.apache.parquet</groupId>
    <artifactId>parquet-avro</artifactId>
    <version>1.11.0</version>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>3.2.1</version>
</dependency>
My AWS configuration has 'Snapshots' enabled. Write permissions to the bucket are working when I use row writing instead of bulk writing.
I'm really unsure what to look for to get this working now.

Siddhi HTTP NoSuchMethodError

This question is about the Java library of Siddhi CEP.
Description:
I tried to establish an HTTP source to receive data. There was no error when creating the runtime and starting it.
[nioEventLoopGroup-2-1] INFO org.wso2.transport.http.netty.listener.ServerConnectorBootstrap$HTTPServerConnector - HTTP(S) Interface starting on host localhost and port 9056
[main] INFO org.wso2.extension.siddhi.io.http.source.HttpConnectorPortBindingListener - siddhi: started HTTP server connector localhost:9056
[main] INFO org.wso2.extension.siddhi.io.http.source.HttpSourceListener - Source Listener has created for url http://localhost:9056/endpoints/
However, when I send a POST request to the designated address, I get an error:
[nioEventLoopGroup-3-1] ERROR org.wso2.extension.siddhi.io.http.source.HTTPConnectorListener - Error in http server connector
java.lang.NoSuchMethodError: io.netty.handler.codec.http.HttpRequest.method()Lio/netty/handler/codec/http/HttpMethod;
at org.wso2.transport.http.netty.listener.CustomHttpContentCompressor.decode(CustomHttpContentCompressor.java:44)
at org.wso2.transport.http.netty.listener.CustomHttpContentCompressor.decode(CustomHttpContentCompressor.java:14)
at io.netty.handler.codec.MessageToMessageCodec$2.decode(MessageToMessageCodec.java:81)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89)
at io.netty.handler.codec.MessageToMessageCodec.channelRead(MessageToMessageCodec.java:111)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:318)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:304)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:276)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:354)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:244)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:318)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:304)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:112)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:748)
Could anyone suggest what I might have done wrong? Thank you in advance.
Affected Product Version:
4.1.17
OS, DB, other environment details and versions:
IntelliJ IDEA 2017.3.5 (Community Edition)
Build #IC-173.4674.33, built on March 6, 2018
JRE: 1.8.0_152-release-1024-b15 amd64
JVM: OpenJDK 64-Bit Server VM by JetBrains s.r.o
Windows 10 10.0
Steps to reproduce:
The test code I wrote:
import org.wso2.siddhi.core.SiddhiAppRuntime;
import org.wso2.siddhi.core.SiddhiManager;
import org.wso2.siddhi.core.event.Event;
import org.wso2.siddhi.core.stream.output.StreamCallback;
import org.wso2.siddhi.core.util.EventPrinter;
//import org.wso2.extension.siddhi.io.http.source.*;

public class httpTest {
    public static void main(String[] args) {
        String siddhiString = "@App:name(\"haha\") " +
                "@App:description(\"fasd\") " +
                "@App:statistics(reporter = \"jmx\", interval = \"30\") " +
                "@source(type=\"http\",receiver.url=\"http://localhost:9056/endpoints/\",@map(type=\"text\",fail.on.missing.attribute=\"true\",regex.A=\"(.*)\",@attributes(data=\"A\"))) " +
                "@sink(type=\"mqtt\",url=\"tcp://120.78.71.179:1883\",topic=\"34\",@map(type=\"text\")) " +
                "define stream a4P068X5YCK(data String);";
        SiddhiManager siddhiManager = new SiddhiManager();
        SiddhiAppRuntime siddhiAppRuntime = siddhiManager.createSiddhiAppRuntime(siddhiString);
        siddhiAppRuntime.addCallback("a4P068X5YCK", new StreamCallback() {
            @Override
            public void receive(Event[] events) {
                EventPrinter.print(events);
            }
        });
        siddhiAppRuntime.start();
    }
}
Then I send a POST request to http://localhost:9056/endpoints/. It returns the exception posted above.
Update:
I went back and checked the siddhi-io-http GitHub documentation page. I found that it says:
... This extension only works inside the WSO2 Data Analytic Server and cannot be run with standalone siddhi.
I guess this might suggest that HTTP is not supported by the standalone Siddhi library at the moment. I have submitted an issue on the Siddhi repository page to ask for confirmation.
Update 2:
I have changed my Siddhi query so that it copies the source stream into a separate sink stream. The other parts of the code remain the same:
String siddhiString = "@App:name(\"haha\") " +
        "@App:description(\"fasd\") " +
        "@App:statistics(reporter = \"jmx\", interval = \"30\") " +
        "@source(type=\"http\",receiver.url=\"http://localhost:9056/endpoints/\",@map(type=\"text\",fail.on.missing.attribute=\"true\",regex.A=\"(.*)\",@attributes(data=\"A\"))) " +
        "define stream a4P068X5YCK(data String); " +
        "@sink(type=\"mqtt\",url=\"tcp://120.78.71.179:1883\",topic=\"34\",@map(type=\"text\")) " +
        "define stream pout(data String); " +
        "from a4P068X5YCK " +
        "select * " +
        "insert into pout; " +
        "";
The same problem still exists. I tried the WSO2 processor and it works fine. Now my guesses are:
1. a version mismatch
2. a lack of some packages that the WSO2 processor has in its dependencies
I will try to investigate in those two directions and will update here and on the issue page as soon as I find something new.
Update 3:
As I keep adding updates the formatting seems to have some problems, but fortunately this issue also comes to an end. I tried including all the dependencies from the WSO2 processor source code, and my test program started working. Therefore I assume there is a component in the WSO2 processor that the Siddhi library is lacking.
I then deleted the dependencies one by one to see if my test program still worked, and finally found the package. With this package my code works well.
<dependency>
    <groupId>org.wso2.msf4j</groupId>
    <artifactId>org.wso2.msf4j.feature</artifactId>
    <version>${msf4j.version}</version>
    <type>zip</type>
</dependency>
As I am a beginner to coding, I am not exactly sure what the problem was. I would be grateful if someone could explain the reason behind it. I appreciate all the help received in this process, and it has been a great learning experience for me.
Update 4: @Grainier I tried the sample code you posted and it actually worked, although I still have no idea why. I tried copying your exact code to a new .java file in my project and it still won't work. Therefore I guess it has something to do with the POM file.
Something I noticed is that when I ran your sample code, a few more WARNINGs were printed in the console. (SMALL UPDATE: I have found that the warnings appeared because I am using JDK 10. As soon as I switched back to 1.8 the warnings disappeared, and the code still works, so maybe this is not the reason.)
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by io.netty.util.internal.ReflectionUtil (file:/C:/Users/ktz001/.m2/repository/io/netty/netty-common/4.1.16.Final/netty-common-4.1.16.Final.jar) to constructor java.nio.DirectByteBuffer(long,int)
WARNING: Please consider reporting this to the maintainers of io.netty.util.internal.ReflectionUtil
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
The second difference is in the POM file: yours has one more repository added compared to mine.
<repository>
    <id>wso2-nexus</id>
    <name>WSO2 internal Repository</name>
    <url>http://maven.wso2.org/nexus/content/groups/wso2-public/</url>
    <releases>
        <enabled>true</enabled>
        <updatePolicy>daily</updatePolicy>
        <checksumPolicy>ignore</checksumPolicy>
    </releases>
</repository>
It would be great if you could suggest any reason.
Thank you for all of your work! It has been really helpful.
There seems to be an issue with the documentation... This should work with standalone Siddhi. All you have to do is add the following dependencies to your project (plus MQTT, which I haven't included below):
<dependencies>
    <dependency>
        <groupId>org.wso2.siddhi</groupId>
        <artifactId>siddhi-core</artifactId>
        <version>${siddhi.version}</version>
    </dependency>
    <dependency>
        <groupId>org.wso2.siddhi</groupId>
        <artifactId>siddhi-annotations</artifactId>
        <version>${siddhi.version}</version>
    </dependency>
    <dependency>
        <groupId>org.wso2.siddhi</groupId>
        <artifactId>siddhi-query-compiler</artifactId>
        <version>${siddhi.version}</version>
    </dependency>
    <dependency>
        <groupId>org.wso2.extension.siddhi.io.http</groupId>
        <artifactId>siddhi-io-http</artifactId>
        <version>${siddhi.io.http.version}</version>
    </dependency>
    <dependency>
        <groupId>org.wso2.extension.siddhi.map.text</groupId>
        <artifactId>siddhi-map-text</artifactId>
        <version>${siddhi.mapper.text.version}</version>
    </dependency>
</dependencies>
However, there's an issue with your query: you have defined a @source and a @sink on a single stream, which is wrong. If you want to make it a passthrough, then you have to define two streams (one for the source and one for the sink) and write a query to insert events from the source stream into the sink stream.
UPDATE:
A sample can be found here; please try that and see whether it works.

WireMock fails with NoSuchMethodError HttpServletResponse.getHeader

I'm trying to use WireMock in my JUnit tests to mock calls to an external API.
import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
import static com.github.tomakehurst.wiremock.client.WireMock.get;
import static com.github.tomakehurst.wiremock.client.WireMock.stubFor;
import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;
import static org.hamcrest.CoreMatchers.is;
import static org.junit.Assert.assertThat;

import com.github.tomakehurst.wiremock.junit.WireMockRule;
import org.apache.http.HttpEntity;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.client.utils.URIBuilder;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;
import org.junit.Before;
import org.junit.Rule;
import org.junit.Test;

import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;

public class ExampleWiremockTest {

    @Rule
    public WireMockRule wireMockRule = new WireMockRule(9999);

    @Before
    public void setUp() {
        stubFor(get(urlEqualTo("/bin/sillyServlet"))
                .willReturn(aResponse()
                        .withStatus(200)
                        .withBody("Hello WireMock!")
                )
        );
    }

    @Test
    public void testNothing() throws URISyntaxException, IOException {
        URI uri = new URIBuilder().setScheme("http")
                .setHost("localhost")
                .setPort(9999)
                .setPath("/bin/sillyServlet")
                .build();
        HttpGet httpGet = new HttpGet(uri);
        CloseableHttpClient httpClient = HttpClients.createDefault();
        CloseableHttpResponse response = httpClient.execute(httpGet);
        HttpEntity entity = response.getEntity();
        String body = EntityUtils.toString(entity);
        assertThat(body, is("Hello WireMock!"));
    }
}
The code compiles, but when I run my test, WireMock throws an HTTP 500 that seems to be caused by an inconsistency in the underlying Servlet API version.
Running com.example.core.ExampleWiremockTest
[main] INFO wiremock.org.eclipse.jetty.util.log - Logging initialized #1030ms
[main] INFO wiremock.org.eclipse.jetty.server.Server - jetty-9.2.z-SNAPSHOT
[main] INFO wiremock.org.eclipse.jetty.server.handler.ContextHandler - Started w.o.e.j.s.ServletContextHandler#ef9296d{/__admin,null,AVAILABLE}
[main] INFO wiremock.org.eclipse.jetty.server.handler.ContextHandler - Started w.o.e.j.s.ServletContextHandler#659a969b{/,null,AVAILABLE}
[main] INFO wiremock.org.eclipse.jetty.server.NetworkTrafficServerConnector - Started NetworkTrafficServerConnector#723d73e1{HTTP/1.1}{0.0.0.0:9999}
[main] INFO wiremock.org.eclipse.jetty.server.Server - Started #1168ms
[qtp436546048-16] INFO /__admin - RequestHandlerClass from context returned com.github.tomakehurst.wiremock.http.AdminRequestHandler. Normalized mapped under returned 'null'
[qtp436546048-20] INFO / - RequestHandlerClass from context returned com.github.tomakehurst.wiremock.http.StubRequestHandler. Normalized mapped under returned 'null'
[qtp436546048-20] WARN wiremock.org.eclipse.jetty.servlet.ServletHandler - Error for /bin/sillyServlet
java.lang.NoSuchMethodError: javax.servlet.http.HttpServletResponse.getHeader(Ljava/lang/String;)Ljava/lang/String;
at wiremock.org.eclipse.jetty.servlets.GzipFilter.doFilter(GzipFilter.java:322)
at wiremock.org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
at wiremock.org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
at wiremock.org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
at wiremock.org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
at wiremock.org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
at wiremock.org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at wiremock.org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
at wiremock.org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
at wiremock.org.eclipse.jetty.server.Server.handle(Server.java:499)
at wiremock.org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
at wiremock.org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
at wiremock.org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
at wiremock.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
at wiremock.org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
at java.lang.Thread.run(Thread.java:745)
[qtp436546048-20] WARN wiremock.org.eclipse.jetty.server.HttpChannel - /bin/sillyServlet
java.lang.NoSuchMethodError: javax.servlet.http.HttpServletRequest.isAsyncStarted()Z
at wiremock.org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:684)
at wiremock.org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
at wiremock.org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
at wiremock.org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
at wiremock.org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at wiremock.org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
at wiremock.org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
at wiremock.org.eclipse.jetty.server.Server.handle(Server.java:499)
at wiremock.org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
at wiremock.org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
at wiremock.org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
at wiremock.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
at wiremock.org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
at java.lang.Thread.run(Thread.java:745)
[qtp436546048-20] WARN wiremock.org.eclipse.jetty.server.HttpChannel - Could not send response error 500: java.lang.NoSuchMethodError: javax.servlet.http.HttpServletRequest.isAsyncStarted()Z
[main] INFO wiremock.org.eclipse.jetty.server.NetworkTrafficServerConnector - Stopped NetworkTrafficServerConnector#723d73e1{HTTP/1.1}{0.0.0.0:9999}
[main] INFO wiremock.org.eclipse.jetty.server.handler.ContextHandler - Stopped w.o.e.j.s.ServletContextHandler#659a969b{/,null,UNAVAILABLE}
[main] INFO wiremock.org.eclipse.jetty.server.handler.ContextHandler - Stopped w.o.e.j.s.ServletContextHandler#ef9296d{/__admin,null,UNAVAILABLE}
Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 1.577 sec <<< FAILURE! - in com.example.core.ExampleWiremockTest
I do have other libraries in my classpath that rely on different versions of Jetty, which I suppose is the cause of the problem.
WireMock uses Jetty 9.2.13; I also have a transitive dependency on Cobertura, which relies on Jetty 6.1.14.
I was originally trying to use the following dependency:
<dependency>
    <groupId>com.github.tomakehurst</groupId>
    <artifactId>wiremock</artifactId>
    <version>2.6.0</version>
</dependency>
I switched to the standalone jar version, hoping it would help me avoid the conflict, but the result is exactly the same.
<dependency>
    <groupId>com.github.tomakehurst</groupId>
    <artifactId>wiremock-standalone</artifactId>
    <version>2.6.0</version>
</dependency>
Check your servlet-api version; you are likely using an old version.
Both of those methods were added in Servlet 3.0:
HttpServletResponse.getHeader(String name)
HttpServletRequest.isAsyncStarted()
You probably have the Servlet 2.5 (or Servlet 2.4) jar in your classpath; running mvn dependency:tree will show which dependency pulls it in.
As mentioned in the question, one of the libraries I was using had a dependency on Cobertura, which in turn introduced a dependency on Servlet API 2.5 (both as a transitive dependency via an older version of Jetty and as a direct dependency).
Excluding the artifact from the offending dependency (the one depending on Cobertura) allowed me to run my test successfully.
<dependency>
    <groupId>com.cognifide.slice</groupId>
    <artifactId>slice-core-api</artifactId>
    <version>${slice.version}</version>
    <scope>provided</scope>
    <exclusions>
        <exclusion>
            <groupId>org.mortbay.jetty</groupId>
            <artifactId>servlet-api-2.5</artifactId>
        </exclusion>
    </exclusions>
</dependency>

ElasticSearch: xerial.snappy error FAILED_TO_LOAD_NATIVE_LIBRARY

I'm trying to run an Elasticsearch client and am getting the xerial.snappy error FAILED_TO_LOAD_NATIVE_LIBRARY.
I'm using Elasticsearch v0.20.5:
<dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch</artifactId>
    <version>0.20.5</version>
</dependency>
and I also added snappy-java v1.0.4.1 to my dependencies (but it did not help either):
<dependency>
    <groupId>org.xerial.snappy</groupId>
    <artifactId>snappy-java</artifactId>
    <version>1.0.4.1</version>
</dependency>
Here is the error I'm getting (my app continues to run, but I suspect the compression lib is not in use):
INFO Log4jESLogger.internalInfo - [Human Top II] loaded [], sites []
DEBUG Log4jESLogger.internalDebug - using [UnsafeChunkDecoder] decoder
DEBUG Log4jESLogger.internalDebug - failed to load xerial snappy-java
org.xerial.snappy.SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY] null
at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:229)
at org.xerial.snappy.Snappy.<clinit>(Snappy.java:44)
at org.elasticsearch.common.compress.snappy.xerial.XerialSnappy.<clinit>(XerialSnappy.java:42)
at org.elasticsearch.common.compress.CompressorFactory.<clinit>(CompressorFactory.java:58)
at org.elasticsearch.client.transport.TransportClient.<init>(TransportClient.java:161)
at org.elasticsearch.client.transport.TransportClient.<init>(TransportClient.java:109)
My code that generates this issue:
public static void main(String[] args)
{
    // Error happens during client creation...
    Client client = new TransportClient().addTransportAddress(new InetSocketTransportAddress("localhost", 9300));
    try
    {
        SearchResponse res = client.prepareSearch().execute().actionGet();
        SearchHits hits = res.getHits();
    }
    finally
    {
        client.close();
    }
}
Can anyone shed some light on this issue? How do I make Snappy load its native library? I'm currently on Win7 64-bit, but I want to run on AWS (CentOS, RHEL, etc.).
