EmbeddedKafka/Zookeeper failed to start due to ZkInterruptedException: java.lang.InterruptedException

I'm trying to integrate EmbeddedKafka (https://github.com/spring-projects/spring-kafka/blob/master/src/reference/asciidoc/testing.adoc) with my Unit Tests.
Not always, but very often, I get errors during EmbeddedKafka startup.
2019-10-08T11:23:43.894Z INFO [main] org.apache.zookeeper.server.ZooKeeperServer: Server environment:java.io.tmpdir=C:\tmp\
2019-10-08T11:23:43.894Z INFO [main] org.apache.zookeeper.server.ZooKeeperServer: Server environment:java.compiler=<NA>
2019-10-08T11:23:43.894Z INFO [main] org.apache.zookeeper.server.ZooKeeperServer: Server environment:os.name=Windows 10
2019-10-08T11:23:43.894Z INFO [main] org.apache.zookeeper.server.ZooKeeperServer: Server environment:os.arch=amd64
2019-10-08T11:23:43.894Z INFO [main] org.apache.zookeeper.server.ZooKeeperServer: Server environment:os.version=10.0
2019-10-08T11:23:43.894Z INFO [main] org.apache.zookeeper.server.ZooKeeperServer: Server environment:user.name=user
2019-10-08T11:23:43.894Z INFO [main] org.apache.zookeeper.server.ZooKeeperServer: Server environment:user.home=C:\Users\user
2019-10-08T11:23:43.894Z INFO [main] org.apache.zookeeper.server.ZooKeeperServer: Server environment:user.dir=C:\work\
2019-10-08T11:23:43.913Z INFO [main] org.apache.zookeeper.server.ZooKeeperServer: Created server with tickTime 500 minSessionTimeout 1000 maxSessionTimeout 10000 datadir C:\tmp\kafka-2406612557331641452\version-2 snapdir C:\tmp\kafka-919479945966258903\version-2
2019-10-08T11:23:43.923Z INFO [main] org.apache.zookeeper.server.NIOServerCnxnFactory: binding to port /127.0.0.1:0
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.146 sec <<< FAILURE! - in kafka.KafkaTopicUtilsTest
kafka.KafkaTopicUtilsTest Time elapsed: 0.146 sec <<< ERROR!
org.I0Itec.zkclient.exception.ZkInterruptedException: java.lang.InterruptedException
Caused by: java.lang.InterruptedException
pom.xml:
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.11</artifactId>
<version>2.2.1</version>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-core</artifactId>
<version>5.0.7.RELEASE</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-test</artifactId>
<version>5.0.7.RELEASE</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.kafka</groupId>
<artifactId>spring-kafka-test</artifactId>
<version>2.2.9.RELEASE</version>
<scope>test</scope>
</dependency>
KafkaTopicUtilsTest.java, server initialization through a @ClassRule:
@RunWith(MockitoJUnitRunner.class)
public class KafkaTopicUtilsTest {

    static final String INITIAL_TOPIC = "initial_topic";

    @ClassRule
    public static EmbeddedKafkaRule embeddedKafka = new EmbeddedKafkaRule(1, true, 5, INITIAL_TOPIC);

    ...
}
As mentioned, it almost always works when I run the test from IntelliJ:
Executing from IntelliJ (Run 'KafkaTopicUtilsTest') works fine.
Executing the tests via mvn clean install fails.
An explicit test execution with mvn -Dtest=KafkaTopicUtilsTest test works fine.
Has anyone faced such issues? Any clue what could be wrong?
Issue solved
The problem was related to other test cases. Another test (not using EmbeddedKafka) was throwing an InterruptedException and checking whether the code reacted to it correctly. The interrupted state was preserved through a call to Thread.currentThread().interrupt(). It looks like the JVM kept the thread's interrupted state and EmbeddedKafka reacted to it.
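For illustration, here is a minimal sketch (class and method names are hypothetical) of the kind of test that leaves the flag set, plus a cleanup that prevents the flag from leaking into later tests:
import static org.junit.Assert.assertTrue;

import org.junit.After;
import org.junit.Test;

public class InterruptHandlingTest {

    // Hypothetical code under test: it restores the interrupted flag, which is
    // good practice, but the flag then survives on the test worker thread.
    private boolean runTaskHandlingInterrupt() {
        try {
            Thread.sleep(1000);
            return false;
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // re-set the interrupted state
            return true;
        }
    }

    @Test
    public void reactsToInterrupt() {
        Thread.currentThread().interrupt(); // simulate an interrupt
        assertTrue(runTaskHandlingInterrupt());
    }

    @After
    public void clearInterruptedFlag() {
        // Thread.interrupted() returns AND clears the flag, so the shared
        // test thread does not carry it into the next test class.
        Thread.interrupted();
    }
}
Since Maven Surefire by default runs many test classes on the same JVM thread, a flag leaked by one class can make a later EmbeddedKafka startup fail exactly as shown above, while running the class alone (IntelliJ, or mvn -Dtest=...) stays green.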

Related

Deeplearning4j - 1.0.0-M1.1 with Cuda - cudaGetSymbolAddress(...) failed error

I am getting the error shown in the title. I have searched Stack Overflow, and other people have run into the same problem in previous versions. One answer said it would be solved in a later version of DL4J, but that does not seem to have happened.
Below are the pom.xml and the dependencies I am using.
Please, can anybody help me?
Thank you in advance.
pom.xml:
<properties>
<dl4j-master.version>1.0.0-M1.1</dl4j-master.version>
<logback.version>1.2.3</logback.version>
<java.version>1.8</java.version>
<maven-shade-plugin.version>2.4.3</maven-shade-plugin.version>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
</properties>
<dependencies>
<dependency>
<groupId>org.deeplearning4j</groupId>
<artifactId>deeplearning4j-core</artifactId>
<version>${dl4j-master.version}</version>
</dependency>
<dependency>
<groupId>org.deeplearning4j</groupId>
<artifactId>deeplearning4j-nlp</artifactId>
<version>${dl4j-master.version}</version>
</dependency>
<dependency>
<groupId>org.datavec</groupId>
<artifactId>datavec-api</artifactId>
<version>${dl4j-master.version}</version>
</dependency>
<dependency>
<groupId>org.nd4j</groupId>
<artifactId>nd4j-cuda-11.0-platform</artifactId>
<version>${dl4j-master.version}</version>
</dependency>
<dependency>
<groupId>org.bytedeco</groupId>
<artifactId>cuda-platform-redist</artifactId>
<version>11.0-8.0-1.5.4</version>
</dependency>
<dependency>
<groupId>org.deeplearning4j</groupId>
<artifactId>deeplearning4j-cuda-11.0</artifactId>
<version>${dl4j-master.version}</version>
</dependency>
<dependency>
<groupId>org.bytedeco.javacpp-presets</groupId>
<artifactId>cuda</artifactId>
<version>10.0-7.4-1.4.4</version>
</dependency>
<dependency>
<groupId>ch.qos.logback</groupId>
<artifactId>logback-classic</artifactId>
<version>${logback.version}</version>
</dependency>
</dependencies>
Error:
11:11:35.720 [main] INFO org.nd4j.linalg.factory.Nd4jBackend - Loaded [JCublasBackend] backend
11:11:37.543 [main] INFO org.nd4j.nativeblas.NativeOpsHolder - Number of threads used for linear algebra: 32
11:11:37.675 [main] INFO org.nd4j.linalg.api.ops.executioner.DefaultOpExecutioner - Backend used: [CUDA]; OS: [Windows 10]
11:11:37.676 [main] INFO org.nd4j.linalg.api.ops.executioner.DefaultOpExecutioner - Cores: [4]; Memory: [3,5GB];
11:11:37.676 [main] INFO org.nd4j.linalg.api.ops.executioner.DefaultOpExecutioner - Blas vendor: [CUBLAS]
11:11:37.702 [main] INFO org.nd4j.linalg.jcublas.JCublasBackend - ND4J CUDA build version: 11.0.221
11:11:37.705 [main] INFO org.nd4j.linalg.jcublas.JCublasBackend - CUDA device 0: [NVIDIA GeForce 930M]; cc: [5.0]; Total memory: [4294836224]
11:11:37.705 [main] INFO org.nd4j.linalg.jcublas.JCublasBackend - Backend build information:
MSVC: 192930038
STD version: 201703L
CUDA: 11.0.221
DEFAULT_ENGINE: samediff::ENGINE_CUDA
HAVE_FLATBUFFERS
11:11:37.782 [main] INFO org.deeplearning4j.models.sequencevectors.SequenceVectors - Starting vocabulary building...
11:11:37.783 [main] DEBUG org.deeplearning4j.models.word2vec.wordstore.VocabConstructor - Target vocab size before building: [0]
11:11:37.814 [main] DEBUG org.deeplearning4j.models.word2vec.wordstore.VocabConstructor - Trying source iterator: [0]
11:11:37.814 [main] DEBUG org.deeplearning4j.models.word2vec.wordstore.VocabConstructor - Target vocab size before building: [0]
11:11:51.450 [main] DEBUG org.deeplearning4j.models.word2vec.wordstore.VocabConstructor - Waiting till all processes stop...
11:11:51.457 [main] DEBUG org.deeplearning4j.models.word2vec.wordstore.VocabConstructor - Vocab size before truncation: [168165], NumWords: [1952392], sequences parsed: [318], counter: [1952389]
11:11:51.457 [main] DEBUG org.deeplearning4j.models.word2vec.wordstore.VocabConstructor - Vocab size after truncation: [168165], NumWords: [1952392], sequences parsed: [318], counter: [1952389]
11:11:54.179 [main] INFO org.deeplearning4j.models.word2vec.wordstore.VocabConstructor - Sequences checked: [318], Current vocabulary size: [168165]; Sequences/sec: [19,39];
11:11:54.248 [main] INFO org.deeplearning4j.models.embeddings.loader.WordVectorSerializer - Projected memory use for model: [128,30 MB]
Exception in thread "main" java.lang.RuntimeException: cudaGetSymbolAddress(...) failed; Error code: [13]
at org.nd4j.linalg.jcublas.ops.executioner.CudaExecutioner.createShapeInfo(CudaExecutioner.java:2173)
at org.nd4j.linalg.api.shape.Shape.createShapeInformation(Shape.java:3279)
at org.nd4j.linalg.api.ndarray.BaseShapeInfoProvider.createShapeInformation(BaseShapeInfoProvider.java:75)
at org.nd4j.jita.constant.ProtectedCudaShapeInfoProvider.createShapeInformation(ProtectedCudaShapeInfoProvider.java:96)
at org.nd4j.jita.constant.ProtectedCudaShapeInfoProvider.createShapeInformation(ProtectedCudaShapeInfoProvider.java:77)
at org.nd4j.linalg.jcublas.CachedShapeInfoProvider.createShapeInformation(CachedShapeInfoProvider.java:46)
at org.nd4j.linalg.api.ndarray.BaseNDArray.<init>(BaseNDArray.java:180)
at org.nd4j.linalg.api.ndarray.BaseNDArray.<init>(BaseNDArray.java:174)
at org.nd4j.linalg.api.ndarray.BaseNDArray.<init>(BaseNDArray.java:316)
at org.nd4j.linalg.jcublas.JCublasNDArray.<init>(JCublasNDArray.java:135)
at org.nd4j.linalg.jcublas.JCublasNDArrayFactory.createUninitialized(JCublasNDArrayFactory.java:1533)
at org.nd4j.linalg.factory.Nd4j.createUninitialized(Nd4j.java:4379)
at org.nd4j.linalg.factory.Nd4j.rand(Nd4j.java:2957)
at org.nd4j.linalg.factory.Nd4j.rand(Nd4j.java:2946)
at org.deeplearning4j.models.embeddings.inmemory.InMemoryLookupTable.resetWeights(InMemoryLookupTable.java:145)
at org.deeplearning4j.models.sequencevectors.SequenceVectors.fit(SequenceVectors.java:278)
at org.deeplearning4j.models.paragraphvectors.ParagraphVectors.fit(ParagraphVectors.java:667)
at gov.rfb.cocaj.dl4jGPU.DocumentClassifier.main(DocumentClassifier.java:44)
This is almost always due to an incompatible CUDA version. Make sure that the version you have installed locally is the same as the one you are using with DL4J. Note that the pom above mixes toolkit versions: the org.bytedeco.javacpp-presets cuda artifact is pinned to 10.0-7.4-1.4.4 while every other CUDA artifact targets 11.0.
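As a sketch (assuming CUDA 11.0 is what is actually installed locally), the aligned CUDA dependencies would look like this, with the 10.0 artifact removed entirely:
<!-- Sketch: keep every CUDA artifact on the same toolkit version (11.0 here);
     delete the org.bytedeco.javacpp-presets cuda 10.0-7.4-1.4.4 dependency. -->
<dependency>
    <groupId>org.nd4j</groupId>
    <artifactId>nd4j-cuda-11.0-platform</artifactId>
    <version>${dl4j-master.version}</version>
</dependency>
<dependency>
    <groupId>org.bytedeco</groupId>
    <artifactId>cuda-platform-redist</artifactId>
    <version>11.0-8.0-1.5.4</version>
</dependency>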

MicroBatchExecution: Query terminated with error UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z

I am trying to run Kafka-based Structured Streaming here, but it is not working; the query terminates with the error (ERROR MicroBatchExecution: Query [id = daae4c34-9c8a-4c28-9e2e-88e5fcf3d614, runId = ca57d90c-d584-41d3-a8de-6f9534ead0a0] terminated with error
java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z).
How can I solve this issue? I am working on a Windows 10 machine.
App Class:
package com.rakib;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.OutputMode;
import org.apache.spark.sql.streaming.StreamingQuery;
import org.apache.spark.sql.streaming.StreamingQueryException;
import java.util.concurrent.TimeoutException;
import java.util.logging.Level;
import java.util.logging.Logger;
public class App {
    public static void main(String[] args) throws TimeoutException, StreamingQueryException {
        System.setProperty("hadoop.home.dir", "c:/hadoop");
        Logger.getLogger("org.apache").setLevel(Level.WARNING);

        SparkSession sparkSession = SparkSession.builder()
                .appName("SparkSQL")
                .master("local[*]")
                .getOrCreate();

        Dataset<Row> rowDataset = sparkSession
                .readStream()
                .format("kafka")
                .option("kafka.bootstrap.servers", "localhost:9091,localhost:9092,localhost:9093")
                .option("subscribe", "student")
                .option("startingOffsets", "earliest")
                .load();

        rowDataset.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)");
        //rowDataset.createOrReplaceTempView("student_info");
        //Dataset<Row> dataset = sparkSession.sql("SELECT value FROM student_info");

        StreamingQuery query = rowDataset
                .writeStream()
                .format("console")
                .outputMode(OutputMode.Append())
                .start();
        query.awaitTermination();
    }
}
Pom.xml:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>org.example</groupId>
<artifactId>Test_One</artifactId>
<version>1.0-SNAPSHOT</version>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<configuration>
<source>8</source>
<target>8</target>
</configuration>
</plugin>
</plugins>
</build>
<dependencies>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-core_2.12</artifactId>
<version>3.0.0</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-sql_2.12</artifactId>
<version>3.0.0</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-streaming_2.12</artifactId>
<version>3.0.0</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-streaming-kafka-0-10_2.12</artifactId>
<version>3.0.0</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-sql-kafka-0-10_2.12</artifactId>
<version>3.0.0</version>
</dependency>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-hdfs-client</artifactId>
<version>3.3.0</version>
</dependency>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-common</artifactId>
<version>3.3.0</version>
</dependency>
<dependency>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
<version>1.2.17</version>
</dependency>
</dependencies>
</project>
Error:
20/08/20 23:37:21 INFO MicroBatchExecution: Reading table [org.apache.spark.sql.kafka010.KafkaSourceProvider$KafkaTable@71202043] from DataSourceV2 named 'kafka' [org.apache.spark.sql.kafka010.KafkaSourceProvider@1a4d79db]
20/08/20 23:37:21 ERROR MicroBatchExecution: Query [id = daae4c34-9c8a-4c28-9e2e-88e5fcf3d614, runId = ca57d90c-d584-41d3-a8de-6f9534ead0a0] terminated with error
java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method)
at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:793)
at org.apache.hadoop.fs.FileUtil.canRead(FileUtil.java:1221)
at org.apache.hadoop.fs.FileUtil.list(FileUtil.java:1426)
at org.apache.hadoop.fs.RawLocalFileSystem.listStatus(RawLocalFileSystem.java:495)
at org.apache.hadoop.fs.DelegateToFileSystem.listStatus(DelegateToFileSystem.java:177)
at org.apache.hadoop.fs.ChecksumFs.listStatus(ChecksumFs.java:548)
at org.apache.hadoop.fs.FileContext$Util$1.next(FileContext.java:1905)
at org.apache.hadoop.fs.FileContext$Util$1.next(FileContext.java:1901)
at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
at org.apache.hadoop.fs.FileContext$Util.listStatus(FileContext.java:1907)
at org.apache.hadoop.fs.FileContext$Util.listStatus(FileContext.java:1866)
at org.apache.hadoop.fs.FileContext$Util.listStatus(FileContext.java:1825)
at org.apache.spark.sql.execution.streaming.FileContextBasedCheckpointFileManager.list(CheckpointFileManager.scala:299)
at org.apache.spark.sql.execution.streaming.HDFSMetadataLog.getLatest(HDFSMetadataLog.scala:186)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.populateStartOffsets(MicroBatchExecution.scala:272)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStream$2(MicroBatchExecution.scala:194)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken(ProgressReporter.scala:352)
at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken$(ProgressReporter.scala:350)
at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:69)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStream$1(MicroBatchExecution.scala:191)
at org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor.execute(TriggerExecutor.scala:57)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runActivatedStream(MicroBatchExecution.scala:185)
at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:334)
at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:245)
Exception in thread "stream execution thread for [id = daae4c34-9c8a-4c28-9e2e-88e5fcf3d614, runId = ca57d90c-d584-41d3-a8de-6f9534ead0a0]" java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method)
at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:793)
at org.apache.hadoop.fs.FileUtil.canRead(FileUtil.java:1221)
at org.apache.hadoop.fs.FileUtil.list(FileUtil.java:1426)
at org.apache.hadoop.fs.RawLocalFileSystem.listStatus(RawLocalFileSystem.java:495)
at org.apache.hadoop.fs.DelegateToFileSystem.listStatus(DelegateToFileSystem.java:177)
at org.apache.hadoop.fs.ChecksumFs.listStatus(ChecksumFs.java:548)
at org.apache.hadoop.fs.FileContext$Util$1.next(FileContext.java:1905)
at org.apache.hadoop.fs.FileContext$Util$1.next(FileContext.java:1901)
at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
at org.apache.hadoop.fs.FileContext$Util.listStatus(FileContext.java:1907)
at org.apache.hadoop.fs.FileContext$Util.listStatus(FileContext.java:1866)
at org.apache.hadoop.fs.FileContext$Util.listStatus(FileContext.java:1825)
at org.apache.spark.sql.execution.streaming.FileContextBasedCheckpointFileManager.list(CheckpointFileManager.scala:299)
at org.apache.spark.sql.execution.streaming.HDFSMetadataLog.getLatest(HDFSMetadataLog.scala:186)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.populateStartOffsets(MicroBatchExecution.scala:272)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStream$2(MicroBatchExecution.scala:194)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken(ProgressReporter.scala:352)
at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken$(ProgressReporter.scala:350)
at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:69)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStream$1(MicroBatchExecution.scala:191)
at org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor.execute(TriggerExecutor.scala:57)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runActivatedStream(MicroBatchExecution.scala:185)
at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:334)
at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:245)
Exception in thread "main" org.apache.spark.sql.streaming.StreamingQueryException: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
=== Streaming Query ===
Identifier: [id = daae4c34-9c8a-4c28-9e2e-88e5fcf3d614, runId = ca57d90c-d584-41d3-a8de-6f9534ead0a0]
Current Committed Offsets: {}
Current Available Offsets: {}
Current State: ACTIVE
Thread State: RUNNABLE
Logical Plan:
WriteToMicroBatchDataSource ConsoleWriter[numRows=20, truncate=true]
+- StreamingDataSourceV2Relation [key#7, value#8, topic#9, partition#10, offset#11L, timestamp#12, timestampType#13], org.apache.spark.sql.kafka010.KafkaSourceProvider$KafkaScan@8774409, KafkaV2[Subscribe[student]]
at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:355)
at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:245)
Caused by: java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method)
at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:793)
at org.apache.hadoop.fs.FileUtil.canRead(FileUtil.java:1221)
at org.apache.hadoop.fs.FileUtil.list(FileUtil.java:1426)
at org.apache.hadoop.fs.RawLocalFileSystem.listStatus(RawLocalFileSystem.java:495)
at org.apache.hadoop.fs.DelegateToFileSystem.listStatus(DelegateToFileSystem.java:177)
at org.apache.hadoop.fs.ChecksumFs.listStatus(ChecksumFs.java:548)
at org.apache.hadoop.fs.FileContext$Util$1.next(FileContext.java:1905)
at org.apache.hadoop.fs.FileContext$Util$1.next(FileContext.java:1901)
at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
at org.apache.hadoop.fs.FileContext$Util.listStatus(FileContext.java:1907)
at org.apache.hadoop.fs.FileContext$Util.listStatus(FileContext.java:1866)
at org.apache.hadoop.fs.FileContext$Util.listStatus(FileContext.java:1825)
at org.apache.spark.sql.execution.streaming.FileContextBasedCheckpointFileManager.list(CheckpointFileManager.scala:299)
at org.apache.spark.sql.execution.streaming.HDFSMetadataLog.getLatest(HDFSMetadataLog.scala:186)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.populateStartOffsets(MicroBatchExecution.scala:272)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStream$2(MicroBatchExecution.scala:194)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken(ProgressReporter.scala:352)
at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken$(ProgressReporter.scala:350)
at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:69)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStream$1(MicroBatchExecution.scala:191)
at org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor.execute(TriggerExecutor.scala:57)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runActivatedStream(MicroBatchExecution.scala:185)
at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:334)
... 1 more
20/08/20 23:37:21 INFO SparkContext: Invoking stop() from shutdown hook
20/08/20 23:37:21 INFO SparkUI: Stopped Spark web UI at http://DESKTOP-3147U79:4040
20/08/20 23:37:21 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
20/08/20 23:37:21 INFO MemoryStore: MemoryStore cleared
20/08/20 23:37:21 INFO BlockManager: BlockManager stopped
20/08/20 23:37:21 INFO BlockManagerMaster: BlockManagerMaster stopped
20/08/20 23:37:21 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
20/08/20 23:37:21 INFO SparkContext: Successfully stopped SparkContext
20/08/20 23:37:21 INFO ShutdownHookManager: Shutdown hook called
20/08/20 23:37:21 INFO ShutdownHookManager: Deleting directory C:\Users\itc\AppData\Local\Temp\temporary-850444d9-5110-4c13-881f-a6e0ba7153d8
20/08/20 23:37:21 INFO ShutdownHookManager: Deleting directory C:\Users\itc\AppData\Local\Temp\spark-813cc4f1-9d4b-44f2-99ae-435d9e99f566
Process finished with exit code 1
This error generally occurs due to a mismatch between the native binaries in your %HADOOP_HOME%\bin folder and your Hadoop version. So, what you need to do is get hadoop.dll and winutils.exe built for your specific Hadoop version and copy them into your %HADOOP_HOME%\bin folder.
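If the error persists after copying the files, one workaround that is sometimes needed (a sketch; the C:/hadoop path is an assumption, substitute your actual %HADOOP_HOME%) is to force-load the native library at the very start of main(), before SparkSession.builder() is called:
// Sketch: load hadoop.dll explicitly, before any Spark/Hadoop class initializes.
System.setProperty("hadoop.home.dir", "C:/hadoop");
System.load("C:/hadoop/bin/hadoop.dll"); // System.load requires an absolute path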

Metrics from Spring Boot are not showing up in Prometheus

I followed the steps given below to send metrics from Spring Boot to Prometheus.
Note: I have installed Prometheus locally on my Mac using a Docker image.
In pom.xml I added this:
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
<version>2.0.4.RELEASE</version>
</dependency>
<!-- Micrometer core dependency -->
<dependency>
<groupId>io.micrometer</groupId>
<artifactId>micrometer-core</artifactId>
<version>1.0.6</version>
</dependency>
<!-- Micrometer Prometheus registry -->
<dependency>
<groupId>io.micrometer</groupId>
<artifactId>micrometer-registry-prometheus</artifactId>
<version>1.0.6</version>
</dependency>
In application.properties I added this:
server.port: 9000
management.server.port: 9001
management.server.address: 127.0.0.1
management.endpoint.metrics.enabled=true
management.endpoints.web.exposure.include=*
management.endpoint.prometheus.enabled=true
management.metrics.export.prometheus.enabled=true
I started Prometheus with the following lines in the configuration file:
# Global configurations
global:
scrape_interval: 5s # Set the scrape interval to every 5 seconds.
evaluation_interval: 5s # Evaluate rules every 5 seconds.
scrape_configs:
- job_name: 'hello-world-promethus'
metrics_path: '/actuator/prometheus'
static_configs:
- targets: ['localhost:9001']
When I hit http://localhost:9001/actuator/prometheus, I can see the metrics, but they are not visible in the Prometheus UI.
What am I missing?
The solution was simple. You will run into this only if you're running Prometheus in a Docker container: from inside the container, localhost refers to the container itself, not to the host machine. Change the target from 'localhost:9001' to 'docker.for.mac.localhost:9001'. For example:
- job_name: hello-world-promethus
scrape_interval: 5s
scrape_timeout: 5s
metrics_path: /actuator/prometheus
scheme: http
static_configs:
- targets:
- docker.for.mac.localhost:9001

IBM RTC Logging SDK in Java with Spring Boot

When using the RTC SDK normally in an application, I can turn off the logging in that layer using Log4j with the following code:
// Only show warnings for IBM dependencies
Logger.getLogger("com.ibm").setLevel(Level.WARN);
Logger.getLogger("com.ibm").setAdditivity(false);
Logger.getRootLogger().setLevel(Level.DEBUG);
When trying to convert over to Spring Boot, I add just the basic Spring Boot package and I get all sorts of debug information from the RTC SDK, even though I have only the root logger set to FATAL and no other logging settings anywhere.
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
<version>1.3.2.RELEASE</version>
</dependency>
As soon as I add the dependency (without even having the @SpringBootApplication annotation, or even calling SpringApplication.run(Main.class, args)), it starts spewing out RTC log information like the following:
16:14:20.161 [main] DEBUG c.i.t.r.c.i.u.InternalTeamPlatform - Thread[main,5,main]
16:14:20.164 [main] DEBUG c.i.t.r.c.i.u.InternalTeamPlatform - start asBundlefalse
16:14:20.164 [main] DEBUG c.i.t.r.c.i.u.InternalTeamPlatform - set start true
16:14:22.387 [main] DEBUG c.i.t.r.t.client.RemoteTeamServer - Entering setCredentials(userid=, password=)
16:14:22.387 [main] DEBUG c.i.t.r.t.client.RemoteTeamServer - Entering closeHttpClient
16:14:22.387 [main] DEBUG c.i.t.r.t.client.RemoteTeamServer - Value of _httpclient: null
16:14:22.408 [main] DEBUG c.i.t.r.t.client.RemoteTeamServer - httpclient already closed
16:14:22.410 [main] DEBUG c.i.t.r.t.client.RemoteTeamServer - Entering createTeamService
16:14:22.410 [main] DEBUG c.i.t.r.t.client.RemoteTeamServer - creating RemoteTeamService from com.ibm.team.repository.common.internal.IRepositoryRemoteService
16:14:22.420 [main] DEBUG c.i.t.r.t.client.RemoteTeamServer - Entering createTeamService
16:14:22.420 [main] DEBUG c.i.t.r.t.client.RemoteTeamServer - creating RemoteTeamService from com.ibm.team.repository.common.service.IQueryService
16:14:22.424 [main] DEBUG c.i.t.r.t.client.RemoteTeamServer - Entering createTeamService
16:14:22.424 [main] DEBUG c.i.t.r.t.client.RemoteTeamServer - creating RemoteTeamService from com.ibm.team.repository.common.service.IExternalUserRegistryService
My question is, how can I turn this excess logging off? It is quite annoying and not useful to me.
As my colleague suggested in his comment, you have to include this inside your pom, underneath the dependency tag:
<exclusions>
<exclusion>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-logging</artifactId>
</exclusion>
</exclusions>
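In context, the exclusion nests inside the spring-boot-starter-web dependency shown above, like this (a sketch; the version should match whatever you already use):
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <version>1.3.2.RELEASE</version>
    <exclusions>
        <exclusion>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-logging</artifactId>
        </exclusion>
    </exclusions>
</dependency>
This keeps Spring Boot's default Logback setup off the classpath, so your existing Log4j configuration (the com.ibm logger levels above) takes effect again.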

Could not instantiate TestExecutionListener

When I run my Selenium test below from within Eclipse, I get a series of Could not instantiate TestExecutionListener messages in my log.
This is the actual test.
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = SeleniumConfig.class)
public final class TestWebpage {

    private static final Logger LOG = Logger.getLogger(TestWebpage.class);

    @Autowired
    private WebDriver driver;

    @Test
    public void testLoadingPage() {
        LOG.debug("Hello World!");
    }
}
And this is the log
0 [main] INFO org.springframework.test.context.support.DefaultTestContextBootstrapper - Loaded default TestExecutionListener class names from location [META-INF/spring.factories]: [org.springframework.test.context.web.ServletTestExecutionListener, org.springframework.test.context.support.DependencyInjectionTestExecutionListener, org.springframework.test.context.support.DirtiesContextTestExecutionListener, org.springframework.test.context.transaction.TransactionalTestExecutionListener, org.springframework.test.context.jdbc.SqlScriptsTestExecutionListener]
5 [main] INFO org.springframework.test.context.support.DefaultTestContextBootstrapper - Could not instantiate TestExecutionListener [org.springframework.test.context.jdbc.SqlScriptsTestExecutionListener]. Specify custom listener classes or make the default listener classes (and their required dependencies) available. Offending class: [org/springframework/transaction/interceptor/TransactionAttribute]
6 [main] INFO org.springframework.test.context.support.DefaultTestContextBootstrapper - Could not instantiate TestExecutionListener [org.springframework.test.context.transaction.TransactionalTestExecutionListener]. Specify custom listener classes or make the default listener classes (and their required dependencies) available. Offending class: [org/springframework/transaction/interceptor/TransactionAttributeSource]
7 [main] INFO org.springframework.test.context.support.DefaultTestContextBootstrapper - Could not instantiate TestExecutionListener [org.springframework.test.context.web.ServletTestExecutionListener]. Specify custom listener classes or make the default listener classes (and their required dependencies) available. Offending class: [javax/servlet/ServletContext]
8 [main] INFO org.springframework.test.context.support.DefaultTestContextBootstrapper - Using TestExecutionListeners: [org.springframework.test.context.support.DependencyInjectionTestExecutionListener@152c95a3, org.springframework.test.context.support.DirtiesContextTestExecutionListener@22140b31]
127 [main] INFO org.springframework.context.support.GenericApplicationContext - Refreshing org.springframework.context.support.GenericApplicationContext@35523de0: startup date [Wed Oct 01 01:20:22 EST 2014]; root of context hierarchy
3961 [main] DEBUG org.rmb.selenium.external.TestWebpage - Hello World!
3963 [Thread-8] INFO org.springframework.context.support.GenericApplicationContext - Closing org.springframework.context.support.GenericApplicationContext@35523de0: startup date [Wed Oct 01 01:20:22 EST 2014]; root of context hierarchy
Note that I am using Spring 4.1.0.RELEASE.
One Solution, Three Extra Dependencies
I noticed in the answer to a previous question the suggestion to add @WebAppConfiguration:
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = SeleniumConfig.class)
@WebAppConfiguration
public final class TestWebpage {
This in turn needed three extra dependencies in my pom.xml to support it (sketched as pom entries below):
javax.servlet-api
spring-jdbc
spring-web
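As pom.xml entries, these look roughly like the following (a sketch; the servlet-api version is an assumption and should match your environment):
<dependency>
    <groupId>javax.servlet</groupId>
    <artifactId>javax.servlet-api</artifactId>
    <version>3.1.0</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-jdbc</artifactId>
    <version>${spring.version}</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-web</artifactId>
    <version>${spring.version}</version>
    <scope>test</scope>
</dependency>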
Why do I need all this extra when I am not actually using JDBC at all, or anything from spring-web/servlet? This is just a Selenium test with some of my own configuration.
Is there an easier way? Am I missing something bigger?
Config Class
This is the class I configure my tests with.
public final class SeleniumConfig {

    @Bean
    public String baseUrl() {
        return "http://localhost:8888/";
    }

    @Bean
    public WebDriver driver() {
        return new CloseableFirefoxDriver();
    }

    class CloseableFirefoxDriver extends FirefoxDriver implements DisposableBean {
        public void destroy() throws Exception {
            quit();
        }
    }
}
POM
My pom.xml (before I added the extra dependencies).
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>WebAppWithSeleniumTest</groupId>
<artifactId>WebAppWithSeleniumTest</artifactId>
<packaging>war</packaging>
<version>0.0.1-SNAPSHOT</version>
<name>WebAppWithSeleniumTest Maven Webapp</name>
<url>http://maven.apache.org</url>
<dependencies>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>4.11</version>
</dependency>
<dependency>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
<version>1.2.16</version>
</dependency>
<dependency>
<groupId>org.seleniumhq.selenium</groupId>
<artifactId>selenium-java</artifactId>
<version>2.43.1</version>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-test</artifactId>
<version>${spring.version}</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-context</artifactId>
<version>${spring.version}</version>
</dependency>
</dependencies>
<build>
<finalName>WebAppWithSeleniumTest</finalName>
<resources>
<resource>
<directory>src/main/resources</directory>
<targetPath>${basedir}/target/classes</targetPath>
<includes>
<include>log4j.properties</include>
</includes>
</resource>
</resources>
</build>
<description>Web App with Selenium Tests - a base</description>
<properties>
<spring.version>4.1.0.RELEASE</spring.version>
</properties>
</project>
If I leave in the three extra dependencies
javax.servlet-api
spring-jdbc
spring-web
I can leave my test class defined as this:
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = SeleniumConfig.class)
public final class TestWebpage {
and I will get this logging:
0 [main] INFO org.springframework.test.context.support.DefaultTestContextBootstrapper - Loaded default TestExecutionListener class names from location [META-INF/spring.factories]: [org.springframework.test.context.web.ServletTestExecutionListener, org.springframework.test.context.support.DependencyInjectionTestExecutionListener, org.springframework.test.context.support.DirtiesContextTestExecutionListener, org.springframework.test.context.transaction.TransactionalTestExecutionListener, org.springframework.test.context.jdbc.SqlScriptsTestExecutionListener]
20 [main] INFO org.springframework.test.context.support.DefaultTestContextBootstrapper - Using TestExecutionListeners: [org.springframework.test.context.web.ServletTestExecutionListener@3997ebf6, org.springframework.test.context.support.DependencyInjectionTestExecutionListener@25048104, org.springframework.test.context.support.DirtiesContextTestExecutionListener@4ab24098, org.springframework.test.context.transaction.TransactionalTestExecutionListener@7caee177, org.springframework.test.context.jdbc.SqlScriptsTestExecutionListener@3d548b94]
132 [main] INFO org.springframework.context.support.GenericApplicationContext - Refreshing org.springframework.context.support.GenericApplicationContext@6f55137: startup date [Wed Oct 01 21:55:02 EST 2014]; root of context hierarchy
4183 [main] DEBUG org.rmb.selenium.external.TestWebpage - Hello World!
4186 [Thread-8] INFO org.springframework.context.support.GenericApplicationContext - Closing org.springframework.context.support.GenericApplicationContext@6f55137: startup date [Wed Oct 01 21:55:02 EST 2014]; root of context hierarchy
No errors, but obviously Spring is doing a fair bit of work in the background.
Alternatively, I can remove the three extra dependencies and add this minimal @TestExecutionListeners annotation:
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = SeleniumConfig.class)
@TestExecutionListeners(listeners = {DependencyInjectionTestExecutionListener.class})
public final class TestWebpage {
I get logging as below:
0 [main] INFO org.springframework.test.context.support.DefaultTestContextBootstrapper - Using TestExecutionListeners: [org.springframework.test.context.support.DependencyInjectionTestExecutionListener@4fce6eaf]
117 [main] INFO org.springframework.context.support.GenericApplicationContext - Refreshing org.springframework.context.support.GenericApplicationContext@42695958: startup date [Wed Oct 01 21:59:05 EST 2014]; root of context hierarchy
4189 [main] DEBUG org.rmb.selenium.external.TestWebpage - Hello World!
4190 [Thread-8] INFO org.springframework.context.support.GenericApplicationContext - Closing org.springframework.context.support.GenericApplicationContext@42695958: startup date [Wed Oct 01 21:59:05 EST 2014]; root of context hierarchy
At least no errors.
As to why I need any of this, I don't understand yet. I am leaving this here as a reference, at the very least to show the minimal changes required to get rid of the Could not instantiate TestExecutionListener messages.
To stay close to the original Spring implementation, use this instead:
@TestExecutionListeners(listeners = { DependencyInjectionTestExecutionListener.class,
        DirtiesContextTestExecutionListener.class, TransactionalTestExecutionListener.class })
as defined in org.springframework.test.context.TestContextManager:
private static final String[] DEFAULT_TEST_EXECUTION_LISTENER_CLASS_NAMES = new String[] {
"org.springframework.test.context.web.ServletTestExecutionListener",
"org.springframework.test.context.support.DependencyInjectionTestExecutionListener",
"org.springframework.test.context.support.DirtiesContextTestExecutionListener",
"org.springframework.test.context.transaction.TransactionalTestExecutionListener" };
Only the ServletTestExecutionListener is left out.
Any INFO messages like:
Could not instantiate TestExecutionListener [org.springframework.test.context.jdbc.SqlScriptsTestExecutionListener]
can safely be ignored if you are not using or testing JDBC- or web-related Spring features. That is just an INFO message telling us that Spring did not activate these listeners because the required (pom) dependencies have not been added, which is fine if you are not using those features.
However, let's say you are using @Sql to load some test data into a database and you see this warning. Then you need to wire in the required dependency (spring-jdbc with test scope in your project pom.xml) so that the required listener (SqlScriptsTestExecutionListener in this case) is activated by Spring.
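For example, a sketch of that dependency (reusing the ${spring.version} property from the pom above):
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-jdbc</artifactId>
    <version>${spring.version}</version>
    <scope>test</scope>
</dependency>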
At least for my setup using TestNG, the original answer was not quite enough. I had to add the following annotation:
@TestExecutionListeners(inheritListeners = false, listeners =
        {DependencyInjectionTestExecutionListener.class, DirtiesContextTestExecutionListener.class})
