sqoop import communications link failure - java

I am getting the below error while importing a MySQL table into HDFS using Sqoop.
MySQL is on my NameNode and the slaves are on different VMs, so I also edited the MySQL config file and changed the bind-address to 0.0.0.0, but still no progress.
Replies much appreciated, thanks.
FYI, I am using a Hadoop multi-node cluster.
adminn#master:~/sqoop/bin$ sqoop import --connect jdbc:mysql://localhost:3306/testdb?characterEncoding=latin1 --driver com.mysql.jdbc.Driver --username root --password xxxxxx --table sample --m 1
Warning: /home/adminn/sqoop/../hbase does not exist! HBase imports will fail.
Please set $HBASE_HOME to the root of your HBase installation.
Warning: /home/adminn/sqoop/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /home/adminn/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
Warning: /home/adminn/sqoop/../zookeeper does not exist! Accumulo imports will fail.
Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation.
/home/adminn/hadoop/libexec/hadoop-functions.sh: line 2358: HADOOP_ORG.APACHE.SQOOP.SQOOP_USER: invalid variable name
/home/adminn/hadoop/libexec/hadoop-functions.sh: line 2453: HADOOP_ORG.APACHE.SQOOP.SQOOP_OPTS: invalid variable name
2022-12-30 10:31:24,292 INFO sqoop.Sqoop: Running Sqoop version: 1.4.7
2022-12-30 10:31:24,320 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
2022-12-30 10:31:24,390 WARN sqoop.ConnFactory: Parameter --driver is set to an explicit driver however appropriate connection manager is not being set (via --connection-manager). Sqoop is going to fall back to org.apache.sqoop.manager.GenericJdbcManager. Please specify explicitly which connection manager should be used next time.
2022-12-30 10:31:24,423 INFO manager.SqlManager: Using default fetchSize of 1000
2022-12-30 10:31:24,423 INFO tool.CodeGenTool: Beginning code generation
2022-12-30 10:31:24,735 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM sample AS t WHERE 1=0
2022-12-30 10:31:24,745 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM sample AS t WHERE 1=0
2022-12-30 10:31:24,800 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /home/adminn/hadoop
Note: /tmp/sqoop-adminn/compile/db6838afbd83c3028b82196f2fa2d7c9/sample.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
2022-12-30 10:31:26,707 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-adminn/compile/db6838afbd83c3028b82196f2fa2d7c9/sample.jar
2022-12-30 10:31:26,756 INFO mapreduce.ImportJobBase: Beginning import of sample
2022-12-30 10:31:26,757 INFO Configuration.deprecation: mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
2022-12-30 10:31:26,876 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
2022-12-30 10:31:26,879 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM sample AS t WHERE 1=0
2022-12-30 10:31:27,509 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
2022-12-30 10:31:27,653 INFO client.RMProxy: Connecting to ResourceManager at /192.168.0.15:8032
2022-12-30 10:31:28,352 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/adminn/.staging/job_1672373403508_0004
2022-12-30 10:31:32,595 INFO db.DBInputFormat: Using read commited transaction isolation
2022-12-30 10:31:32,799 INFO mapreduce.JobSubmitter: number of splits:1
2022-12-30 10:31:32,915 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled
2022-12-30 10:31:33,190 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1672373403508_0004
2022-12-30 10:31:33,192 INFO mapreduce.JobSubmitter: Executing with tokens: []
2022-12-30 10:31:33,490 INFO conf.Configuration: resource-types.xml not found
2022-12-30 10:31:33,491 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
2022-12-30 10:31:34,186 INFO impl.YarnClientImpl: Submitted application application_1672373403508_0004
2022-12-30 10:31:34,278 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1672373403508_0004/
2022-12-30 10:31:34,280 INFO mapreduce.Job: Running job: job_1672373403508_0004
2022-12-30 10:31:48,004 INFO mapreduce.Job: Job job_1672373403508_0004 running in uber mode : false
2022-12-30 10:31:48,006 INFO mapreduce.Job: map 0% reduce 0%
2022-12-30 10:31:55,129 INFO mapreduce.Job: Task Id : attempt_1672373403508_0004_m_000000_0, Status : FAILED
Error: java.lang.RuntimeException: java.lang.RuntimeException: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at org.apache.sqoop.mapreduce.db.DBInputFormat.setDbConf(DBInputFormat.java:170)
at org.apache.sqoop.mapreduce.db.DBInputFormat.setConf(DBInputFormat.java:161)
at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:77)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:137)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:763)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
Caused by: java.lang.RuntimeException: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at org.apache.sqoop.mapreduce.db.DBInputFormat.getConnection(DBInputFormat.java:223)
at org.apache.sqoop.mapreduce.db.DBInputFormat.setDbConf(DBInputFormat.java:168)
... 10 more
Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:409)
at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1122)
at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2260)
at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:787)
at com.mysql.jdbc.JDBC4Connection.<init>(JDBC4Connection.java:49)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:409)
at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:357)
at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:285)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:247)
at org.apache.sqoop.mapreduce.db.DBConfiguration.getConnection(DBConfiguration.java:302)
at org.apache.sqoop.mapreduce.db.DBInputFormat.getConnection(DBInputFormat.java:216)
... 11 more
Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:409)
at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1122)
at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:344)
at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2181)
... 24 more
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:607)
at java.net.Socket.connect(Socket.java:556)
at java.net.Socket.<init>(Socket.java:452)
at java.net.Socket.<init>(Socket.java:262)
at com.mysql.jdbc.StandardSocketFactory.connect(StandardSocketFactory.java:256)
at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:293)
... 25 more
2022-12-30 10:32:01,340 INFO mapreduce.Job: Task Id : attempt_1672373403508_0004_m_000000_1, Status : FAILED
Error: java.lang.RuntimeException: java.lang.RuntimeException: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at org.apache.sqoop.mapreduce.db.DBInputFormat.setDbConf(DBInputFormat.java:170)
at org.apache.sqoop.mapreduce.db.DBInputFormat.setConf(DBInputFormat.java:161)
at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:77)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:137)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:763)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
Caused by: java.lang.RuntimeException: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at org.apache.sqoop.mapreduce.db.DBInputFormat.getConnection(DBInputFormat.java:223)
at org.apache.sqoop.mapreduce.db.DBInputFormat.setDbConf(DBInputFormat.java:168)
... 10 more
Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:409)
at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1122)
at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2260)
at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:787)
at com.mysql.jdbc.JDBC4Connection.<init>(JDBC4Connection.java:49)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:409)
at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:357)
at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:285)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:247)
at org.apache.sqoop.mapreduce.db.DBConfiguration.getConnection(DBConfiguration.java:302)
at org.apache.sqoop.mapreduce.db.DBInputFormat.getConnection(DBInputFormat.java:216)
... 11 more
Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:409)
at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1122)
at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:344)
at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2181)
... 24 more
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:607)
at java.net.Socket.connect(Socket.java:556)
at java.net.Socket.<init>(Socket.java:452)
at java.net.Socket.<init>(Socket.java:262)
at com.mysql.jdbc.StandardSocketFactory.connect(StandardSocketFactory.java:256)
at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:293)
... 25 more
2022-12-30 10:32:08,436 INFO mapreduce.Job: Task Id : attempt_1672373403508_0004_m_000000_2, Status : FAILED
Error: java.lang.RuntimeException: java.lang.RuntimeException: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at org.apache.sqoop.mapreduce.db.DBInputFormat.setDbConf(DBInputFormat.java:170)
at org.apache.sqoop.mapreduce.db.DBInputFormat.setConf(DBInputFormat.java:161)
at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:77)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:137)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:763)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
Caused by: java.lang.RuntimeException: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at org.apache.sqoop.mapreduce.db.DBInputFormat.getConnection(DBInputFormat.java:223)
at org.apache.sqoop.mapreduce.db.DBInputFormat.setDbConf(DBInputFormat.java:168)
... 10 more
Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:409)
at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1122)
at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2260)
at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:787)
at com.mysql.jdbc.JDBC4Connection.<init>(JDBC4Connection.java:49)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:409)
at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:357)
at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:285)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:247)
at org.apache.sqoop.mapreduce.db.DBConfiguration.getConnection(DBConfiguration.java:302)
at org.apache.sqoop.mapreduce.db.DBInputFormat.getConnection(DBInputFormat.java:216)
... 11 more
Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:409)
at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1122)
at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:344)
at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2181)
... 24 more
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:607)
at java.net.Socket.connect(Socket.java:556)
at java.net.Socket.<init>(Socket.java:452)
at java.net.Socket.<init>(Socket.java:262)
at com.mysql.jdbc.StandardSocketFactory.connect(StandardSocketFactory.java:256)
at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:293)
... 25 more
2022-12-30 10:32:15,557 INFO mapreduce.Job: map 100% reduce 0%
2022-12-30 10:32:15,607 INFO mapreduce.Job: Job job_1672373403508_0004 failed with state FAILED due to: Task failed task_1672373403508_0004_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0 killedMaps:0 killedReduces: 0
2022-12-30 10:32:15,805 INFO mapreduce.Job: Counters: 8
Job Counters
Failed map tasks=4
Launched map tasks=4
Other local map tasks=4
Total time spent by all maps in occupied slots (ms)=38808
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=19404
Total vcore-milliseconds taken by all map tasks=19404
Total megabyte-milliseconds taken by all map tasks=4967424
2022-12-30 10:32:15,819 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
2022-12-30 10:32:15,825 INFO mapreduce.ImportJobBase: Transferred 0 bytes in 48.2897 seconds (0 bytes/sec)
2022-12-30 10:32:15,830 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
2022-12-30 10:32:15,831 INFO mapreduce.ImportJobBase: Retrieved 0 records.
2022-12-30 10:32:15,831 ERROR tool.ImportTool: Import failed: Import job failed!
adminn#master:~/sqoop/bin$

Don't use localhost in the database connection string. Use the full IP or hostname, because the MapReduce tasks are distributed across the network and each one opens its own connection to MySQL.
You should not disable the 0.0.0.0 bind-address, as that is what allows external network access.
It is also recommended not to run MySQL on any of the Hadoop servers.
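For example, a minimal sketch of the fix (the 192.168.0.15 address comes from the ResourceManager log above and is only an assumption about where MySQL runs; the GRANT uses MySQL 5.x syntax and '%' should be narrowed to your worker subnet in practice):
# on the MySQL host: allow the Sqoop user to connect from the other nodes
mysql -u root -p -e "GRANT ALL PRIVILEGES ON testdb.* TO 'root'@'%' IDENTIFIED BY 'xxxxxx'; FLUSH PRIVILEGES;"
# point Sqoop at a routable address instead of localhost; -P prompts for the password
sqoop import \
  --connect "jdbc:mysql://192.168.0.15:3306/testdb?characterEncoding=latin1" \
  --driver com.mysql.jdbc.Driver \
  --username root -P \
  --table sample -m 1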

Related

Multi-pom in IntelliJ IDEA: Error adding module to project: null

I wanted to make a multi-pom project in IntelliJ IDEA (2020.3.4). Inside an existing Maven project, I right-click -> New -> Module -> Maven and select "my project" as the parent. However, even if no parent is specified, I get the error: Error adding module to project: null. (At the same time, it adds a regular folder with the module name and an empty pom.xml inside.) This happens with any archetype. I couldn't find any solutions on the internet at all, only a suggestion to downgrade to an old version of IntelliJ IDEA, and that solution was published 10 years ago. What is the problem, and what can I do? It fails only with "Maven"; with "Java" and "Spring Initializr" it works.
idea.log
2022-03-23 15:54:05,382 [81624884] WARN - il.projectWizard.ModuleBuilder -
java.lang.NullPointerException
at java.base/java.util.concurrent.ConcurrentHashMap.putVal(ConcurrentHashMap.java:1011)
at java.base/java.util.concurrent.ConcurrentHashMap.put(ConcurrentHashMap.java:1006)
at java.base/java.util.Properties.put(Properties.java:1340)
at java.base/java.util.Properties.setProperty(Properties.java:228)
at org.jetbrains.idea.maven.utils.MavenUtil.runOrApplyMavenProjectFileTemplate(MavenUtil.java:384)
at org.jetbrains.idea.maven.utils.MavenUtil.runOrApplyMavenProjectFileTemplate(MavenUtil.java:341)
at org.jetbrains.idea.maven.wizards.MavenModuleBuilderHelper.lambda$configure$0(MavenModuleBuilderHelper.java:85)
at com.intellij.openapi.command.WriteCommandAction$BuilderImpl$2.run(WriteCommandAction.java:122)
at com.intellij.openapi.application.RunResult.run(RunResult.java:35)
at com.intellij.openapi.command.WriteCommandAction.lambda$performWriteCommandAction$1(WriteCommandAction.java:253)
at com.intellij.openapi.application.impl.ApplicationImpl.runWriteAction(ApplicationImpl.java:1000)
at com.intellij.openapi.command.WriteCommandAction.lambda$performWriteCommandAction$2(WriteCommandAction.java:252)
at com.intellij.openapi.command.WriteCommandAction.lambda$doExecuteCommand$4(WriteCommandAction.java:310)
at com.intellij.openapi.command.impl.CoreCommandProcessor.executeCommand(CoreCommandProcessor.java:220)
at com.intellij.openapi.command.impl.CoreCommandProcessor.executeCommand(CoreCommandProcessor.java:187)
at com.intellij.openapi.command.WriteCommandAction.doExecuteCommand(WriteCommandAction.java:312)
at com.intellij.openapi.command.WriteCommandAction.performWriteCommandAction(WriteCommandAction.java:251)
at com.intellij.openapi.command.WriteCommandAction.execute(WriteCommandAction.java:232)
at com.intellij.openapi.command.WriteCommandAction$BuilderImpl.compute(WriteCommandAction.java:124)
at org.jetbrains.idea.maven.wizards.MavenModuleBuilderHelper.configure(MavenModuleBuilderHelper.java:79)
at org.jetbrains.idea.maven.wizards.AbstractMavenModuleBuilder.lambda$setupRootModel$0(AbstractMavenModuleBuilder.java:77)
at org.jetbrains.idea.maven.utils.MavenUtil.runDumbAware(MavenUtil.java:211)
at org.jetbrains.idea.maven.utils.MavenUtil.runWhenInitialized(MavenUtil.java:231)
at org.jetbrains.idea.maven.wizards.AbstractMavenModuleBuilder.setupRootModel(AbstractMavenModuleBuilder.java:70)
at com.intellij.ide.util.projectWizard.ModuleBuilder.setupModule(ModuleBuilder.java:260)
at com.intellij.ide.util.projectWizard.ModuleBuilder.createModule(ModuleBuilder.java:253)
at com.intellij.ide.util.projectWizard.ModuleBuilder.createAndCommitIfNeeded(ModuleBuilder.java:292)
at com.intellij.ide.util.projectWizard.ModuleBuilder.lambda$commitModule$4(ModuleBuilder.java:338)
at com.intellij.openapi.application.impl.ApplicationImpl.runWriteActionWithClass(ApplicationImpl.java:988)
at com.intellij.openapi.application.impl.ApplicationImpl.runWriteAction(ApplicationImpl.java:1014)
at com.intellij.ide.util.projectWizard.ModuleBuilder.commitModule(ModuleBuilder.java:337)
at com.intellij.openapi.roots.ui.configuration.actions.NewModuleAction.createModuleFromWizard(NewModuleAction.java:59)
at com.intellij.openapi.roots.ui.configuration.actions.NewModuleAction.actionPerformed(NewModuleAction.java:49)
at com.intellij.openapi.actionSystem.ex.ActionUtil.performActionDumbAware(ActionUtil.java:281)
at com.intellij.openapi.actionSystem.impl.ActionMenuItem$ActionTransmitter.lambda$actionPerformed$0(ActionMenuItem.java:310)
at com.intellij.openapi.wm.impl.FocusManagerImpl.runOnOwnContext(FocusManagerImpl.java:286)
at com.intellij.openapi.wm.impl.IdeFocusManagerImpl.runOnOwnContext(IdeFocusManagerImpl.java:77)
at com.intellij.openapi.actionSystem.impl.ActionMenuItem$ActionTransmitter.actionPerformed(ActionMenuItem.java:299)
at java.desktop/javax.swing.AbstractButton.fireActionPerformed(AbstractButton.java:1967)
at com.intellij.openapi.actionSystem.impl.ActionMenuItem.lambda$fireActionPerformed$0(ActionMenuItem.java:110)
at com.intellij.openapi.application.TransactionGuardImpl.performUserActivity(TransactionGuardImpl.java:95)
at com.intellij.openapi.actionSystem.impl.ActionMenuItem.fireActionPerformed(ActionMenuItem.java:110)
at com.intellij.ui.plaf.beg.BegMenuItemUI.doClick(BegMenuItemUI.java:514)
at com.intellij.ui.plaf.beg.BegMenuItemUI$MyMouseInputHandler.mouseReleased(BegMenuItemUI.java:544)
at java.desktop/java.awt.Component.processMouseEvent(Component.java:6652)
at java.desktop/javax.swing.JComponent.processMouseEvent(JComponent.java:3345)
at java.desktop/java.awt.Component.processEvent(Component.java:6417)
at java.desktop/java.awt.Container.processEvent(Container.java:2263)
at java.desktop/java.awt.Component.dispatchEventImpl(Component.java:5027)
at java.desktop/java.awt.Container.dispatchEventImpl(Container.java:2321)
at java.desktop/java.awt.Component.dispatchEvent(Component.java:4859)
at java.desktop/java.awt.LightweightDispatcher.retargetMouseEvent(Container.java:4918)
at java.desktop/java.awt.LightweightDispatcher.processMouseEvent(Container.java:4547)
at java.desktop/java.awt.LightweightDispatcher.dispatchEvent(Container.java:4488)
at java.desktop/java.awt.Container.dispatchEventImpl(Container.java:2307)
at java.desktop/java.awt.Window.dispatchEventImpl(Window.java:2780)
at java.desktop/java.awt.Component.dispatchEvent(Component.java:4859)
at java.desktop/java.awt.EventQueue.dispatchEventImpl(EventQueue.java:778)
at java.desktop/java.awt.EventQueue$4.run(EventQueue.java:727)
at java.desktop/java.awt.EventQueue$4.run(EventQueue.java:721)
at java.base/java.security.AccessController.doPrivileged(Native Method)
at java.base/java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:85)
at java.base/java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:95)
at java.desktop/java.awt.EventQueue$5.run(EventQueue.java:751)
at java.desktop/java.awt.EventQueue$5.run(EventQueue.java:749)
at java.base/java.security.AccessController.doPrivileged(Native Method)
at java.base/java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:85)
at java.desktop/java.awt.EventQueue.dispatchEvent(EventQueue.java:748)
at com.intellij.ide.IdeEventQueue.defaultDispatchEvent(IdeEventQueue.java:976)
at com.intellij.ide.IdeEventQueue.dispatchMouseEvent(IdeEventQueue.java:911)
at com.intellij.ide.IdeEventQueue._dispatchEvent(IdeEventQueue.java:840)
at com.intellij.ide.IdeEventQueue.lambda$dispatchEvent$8(IdeEventQueue.java:454)
at com.intellij.openapi.progress.impl.CoreProgressManager.computePrioritized(CoreProgressManager.java:773)
at com.intellij.ide.IdeEventQueue.lambda$dispatchEvent$9(IdeEventQueue.java:453)
at com.intellij.openapi.application.impl.ApplicationImpl.runIntendedWriteActionOnCurrentThread(ApplicationImpl.java:822)
at com.intellij.ide.IdeEventQueue.dispatchEvent(IdeEventQueue.java:507)
at java.desktop/java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:203)
at java.desktop/java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:124)
at java.desktop/java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:113)
at java.desktop/java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:109)
at java.desktop/java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:101)
at java.desktop/java.awt.EventDispatchThread.run(EventDispatchThread.java:90)
2022-03-23 15:54:05,981 [81625483] ERROR - plication.impl.ApplicationImpl - Cannot reconnect.
java.lang.RuntimeException: Cannot reconnect.
at org.jetbrains.idea.maven.server.RemoteObjectWrapper.perform(RemoteObjectWrapper.java:82)
at org.jetbrains.idea.maven.server.MavenIndexerWrapper.getArchetypes(MavenIndexerWrapper.java:143)
at org.jetbrains.idea.maven.indices.MavenIndicesManager.getArchetypes(MavenIndicesManager.java:337)
at org.jetbrains.idea.maven.wizards.MavenArchetypesStep.lambda$updateArchetypesList$3(MavenArchetypesStep.java:223)
at com.intellij.util.RunnableCallable.call(RunnableCallable.java:20)
at com.intellij.util.RunnableCallable.call(RunnableCallable.java:11)
at com.intellij.openapi.application.impl.ApplicationImpl$1.call(ApplicationImpl.java:270)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.util.concurrent.Executors$PrivilegedThreadFactory$1$1.run(Executors.java:668)
at java.base/java.util.concurrent.Executors$PrivilegedThreadFactory$1$1.run(Executors.java:665)
at java.base/java.security.AccessController.doPrivileged(Native Method)
at java.base/java.util.concurrent.Executors$PrivilegedThreadFactory$1.run(Executors.java:665)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.rmi.ConnectException: Connection refused to host: 127.0.0.1; nested exception is:
java.net.ConnectException: Connection refused: connect
at java.rmi/sun.rmi.transport.tcp.TCPEndpoint.newSocket(TCPEndpoint.java:623)
at java.rmi/sun.rmi.transport.tcp.TCPChannel.createConnection(TCPChannel.java:209)
at java.rmi/sun.rmi.transport.tcp.TCPChannel.newConnection(TCPChannel.java:196)
at java.rmi/sun.rmi.server.UnicastRef.invoke(UnicastRef.java:132)
at java.rmi/java.rmi.server.RemoteObjectInvocationHandler.invokeRemoteMethod(RemoteObjectInvocationHandler.java:217)
at java.rmi/java.rmi.server.RemoteObjectInvocationHandler.invoke(RemoteObjectInvocationHandler.java:162)
at com.sun.proxy.$Proxy146.createIndexer(Unknown Source)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at com.intellij.execution.rmi.RemoteUtil.invokeRemote(RemoteUtil.java:150)
at com.intellij.execution.rmi.RemoteUtil.access$400(RemoteUtil.java:21)
at com.intellij.execution.rmi.RemoteUtil$1.lambda$invoke$0(RemoteUtil.java:134)
at com.intellij.openapi.util.ClassLoaderUtil.computeWithClassLoader(ClassLoaderUtil.java:31)
at com.intellij.execution.rmi.RemoteUtil.executeWithClassLoader(RemoteUtil.java:202)
at com.intellij.execution.rmi.RemoteUtil$1.invoke(RemoteUtil.java:134)
at com.sun.proxy.$Proxy146.createIndexer(Unknown Source)
at org.jetbrains.idea.maven.server.MavenServerConnectorImpl.createIndexer(MavenServerConnectorImpl.java:197)
at org.jetbrains.idea.maven.server.MavenServerManager$4.create(MavenServerManager.java:381)
at org.jetbrains.idea.maven.server.MavenServerManager$4.create(MavenServerManager.java:377)
at org.jetbrains.idea.maven.server.RemoteObjectWrapper.getOrCreateWrappee(RemoteObjectWrapper.java:41)
at org.jetbrains.idea.maven.server.MavenIndexerWrapper.lambda$getArchetypes$6(MavenIndexerWrapper.java:143)
at org.jetbrains.idea.maven.server.RemoteObjectWrapper.perform(RemoteObjectWrapper.java:76)
... 14 more
Caused by: java.net.ConnectException: Connection refused: connect
at java.base/java.net.PlainSocketImpl.connect0(Native Method)
at java.base/java.net.PlainSocketImpl.socketConnect(PlainSocketImpl.java:101)
at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:399)
at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:242)
at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:224)
at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.base/java.net.Socket.connect(Socket.java:609)
at java.base/java.net.Socket.connect(Socket.java:558)
at java.base/java.net.Socket.<init>(Socket.java:454)
at java.base/java.net.Socket.<init>(Socket.java:231)
at com.intellij.execution.rmi.RemoteServer$1.createSocket(RemoteServer.java:122)
at java.rmi/sun.rmi.transport.tcp.TCPEndpoint.newSocket(TCPEndpoint.java:617)
... 37 more
2022-03-23 15:54:05,981 [81625483] ERROR - plication.impl.ApplicationImpl - IntelliJ IDEA 2020.3.4 Build #IU-203.8084.24
2022-03-23 15:54:05,981 [81625483] ERROR - plication.impl.ApplicationImpl - JDK: 11.0.10; VM: OpenJDK 64-Bit Server VM; Vendor: JetBrains s.r.o.
2022-03-23 15:54:05,981 [81625483] ERROR - plication.impl.ApplicationImpl - OS: Windows 10
2022-03-23 15:54:05,981 [81625483] ERROR - plication.impl.ApplicationImpl - Last Action: NewModuleInGroup
2022-03-23 15:54:06,580 [81626082] INFO - .diagnostic.PerformanceWatcher - Indexable file iteration took 1922ms; general responsiveness: ok; EDT responsiveness: ok
2022-03-23 15:54:06,580 [81626082] INFO - indexing.UnindexedFilesUpdater - Unindexed files update started: 0 files to index
2022-03-23 15:54:06,580 [81626082] INFO - .diagnostic.PerformanceWatcher - Indexable file iteration took 1922ms; general responsiveness: ok; EDT responsiveness: ok
2022-03-23 15:54:06,595 [81626097] INFO - indexing.UnindexedFilesUpdater - Unindexed files update started: 0 files to index
2022-03-23 15:54:08,524 [81628026] INFO - rationStore.ComponentStoreImpl - Saving appFileTypeManager took 16 ms
I reinstalled Windows (I needed it anyway) and installed IntelliJ IDEA 2021.3 instead of 2020.3. Everything worked, so one of those changes fixed it. Perhaps reinstalling the same version would also have solved the problem.
In IntelliJ, go to File -> Project Structure -> Project SDK -> [Select Your Installed SDK Version] -> OK.
Then try to add the new module again; it worked for me.

How to resolve a ConnectException when running a jar on Hadoop?

I have written a simple MapReduce job to perform KMeans clustering on some points.
However, when running the following command on Windows 10 cmd:
hadoop jar kmeans.jar KMeansJob /input /output
I get the following error:
21/04/08 22:26:14 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
21/04/08 22:26:14 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
21/04/08 22:26:17 INFO mapreduce.JobSubmitter: Cleaning up the staging area /tmp/hadoop-yarn/staging/????????/.staging/job_1617909910497_0001
Exception in thread "main" java.net.ConnectException: Call From PCNAME/XXX.XXX.X.X to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused: no further information; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:824)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:754)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1497)
at org.apache.hadoop.ipc.Client.call(Client.java:1439)
at org.apache.hadoop.ipc.Client.call(Client.java:1349)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:796)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
at com.sun.proxy.$Proxy11.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1649)
at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1529)
at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1526)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1526)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:327)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:237)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestamps(ClientDistributedCacheManager.java:106)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestampsAndCacheVisibilities(ClientDistributedCacheManager.java:70)
at org.apache.hadoop.mapreduce.JobResourceUploader.uploadResourcesInternal(JobResourceUploader.java:210)
at org.apache.hadoop.mapreduce.JobResourceUploader.uploadResources(JobResourceUploader.java:128)
at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:101)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:196)
at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1570)
at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1567)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1889)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1567)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1588)
at clustering.KMeansJob.run(KMeansJob.java:43)
at clustering.KMeansJob.main(KMeansJob.java:47)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
Caused by: java.net.ConnectException: Connection refused: no further information
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:715)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:687)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:790)
at org.apache.hadoop.ipc.Client$Connection.access$3500(Client.java:411)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1554)
at org.apache.hadoop.ipc.Client.call(Client.java:1385)
... 43 more
In the above logs, I have hidden the IP the call was made from.
Running jps gives the following output:
4608 ResourceManager
13284 DataNode
7252 NameNode
10632 NodeManager
15436 Jps
What is the problem, and how can I fix it?
Changing the core-site.xml configuration seems to do the job:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
For the record, my previous configuration had a value of:
<value>hdfs://0.0.0.0:19000</value>
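As a side note, fs.default.name is the deprecated alias of fs.defaultFS; to confirm which value the client actually resolves, you can query the live configuration:
hdfs getconf -confKey fs.defaultFS
# should print hdfs://localhost:9000 after the change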

Exception in thread "main" org.apache.hadoop.ipc.RemoteException(java.io.IOException) for Hadoop 3.1.3

I am trying to run a MapReduce job on Hadoop 3.1.3, but it fails with the error below.
hadoop jar WordCount.jar WordcountDemo.WordCount /mapwork/Mapwork /r_out
Error
2020-04-04 19:59:11,379 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
2020-04-04 19:59:12,499 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2020-04-04 19:59:12,569 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/tejashri/.staging/job_1586009643433_0007
2020-04-04 19:59:12,727 WARN hdfs.DataStreamer: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/hadoop-yarn/staging/tejashri/.staging/job_1586009643433_0007/job.jar could only be written to 0 of the 1 minReplication nodes. There are 0 datanode(s) running and 0 node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2205)
at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:294)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2731)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:892)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:568)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1000)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:928)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2916)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1545)
at org.apache.hadoop.ipc.Client.call(Client.java:1491)
at org.apache.hadoop.ipc.Client.call(Client.java:1388)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
at com.sun.proxy.$Proxy9.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:514)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.DFSOutputStream.addBlock(DFSOutputStream.java:1081)
at org.apache.hadoop.hdfs.DataStreamer.locateFollowingBlock(DataStreamer.java:1866)
at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1668)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:716)
2020-04-04 19:59:12,734 INFO mapreduce.JobSubmitter: Cleaning up the staging area /tmp/hadoop-yarn/staging/tejashri/.staging/job_1586009643433_0007
Exception in thread "main" org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/hadoop-yarn/staging/tejashri/.staging/job_1586009643433_0007/job.jar could only be written to 0 of the 1 minReplication nodes. There are 0 datanode(s) running and 0 node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2205)
at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:294)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2731)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:892)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:568)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1000)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:928)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2916)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1545)
at org.apache.hadoop.ipc.Client.call(Client.java:1491)
at org.apache.hadoop.ipc.Client.call(Client.java:1388)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
at com.sun.proxy.$Proxy9.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:514)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.DFSOutputStream.addBlock(DFSOutputStream.java:1081)
at org.apache.hadoop.hdfs.DataStreamer.locateFollowingBlock(DataStreamer.java:1866)
at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1668)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:716)
Update (from comments):
core-site.xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>C:\hadoop\hdfstmp</value>
  </property>
</configuration>
hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>C:\hadoop\data\namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>C:\hadoop\data\datanode</value>
  </property>
  <property>
    <name>dfs.datanode.failed.volumes.tolerated</name>
    <value>0</value>
  </property>
</configuration>
Output of jps:
16832 NodeManager
5556 ResourceManager
18280 NameNode
11708 Jps
datanode error log:
2020-04-04 21:42:25,150 WARN common.Storage: Failed to add storage directory [DISK]file:/C:/hadoop/data/datanode
java.io.IOException: Incompatible clusterIDs in C:\hadoop\data\datanode: namenode clusterID = CID-199fd5c5-1f1d-4c44-9e39-80995486695e; datanode clusterID = CID-16d0af22-57e1-4531-a5c8-4bf3eefd351d
at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:744)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadStorageDirectory(DataStorage.java:294)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadDataStorage(DataStorage.java:407)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:387)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:559)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1743)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1679)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:390)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:282)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:822)
at java.lang.Thread.run(Thread.java:748)
2020-04-04 21:42:25,156 ERROR datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid 7578b7ba-c42a-476b-abc2-2088b15b3474) service to localhost/127.0.0.1:9000. Exiting.
java.io.IOException: All specified directories have failed to load.
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:560)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1743)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1679)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:390)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:282)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:822)
at java.lang.Thread.run(Thread.java:748)
2020-04-04 21:42:25,158 WARN datanode.DataNode: Ending block pool service for: Block pool <registering> (Datanode Uuid 7578b7ba-c42a-476b-abc2-2088b15b3474) service to localhost/127.0.0.1:9000
2020-04-04 21:42:25,261 INFO datanode.DataNode: Removed Block pool <registering> (Datanode Uuid 7578b7ba-c42a-476b-abc2-2088b15b3474)
2020-04-04 21:42:27,274 WARN datanode.DataNode: Exiting Datanode
The MapReduce job fails because it cannot access HDFS: There are 0 datanode(s) running and 0 node(s) are excluded in this operation.
The datanode log shows that the DataNode daemon is unable to register itself with the HDFS cluster due to incompatible clusterIDs.
When a namenode is formatted (during installation and setup), a clusterID is generated, and this clusterID is stored in the VERSION file of each daemon when it initializes. The clusterID acts as an identifier for the datanodes, letting them rejoin the cluster whenever they are stopped and restarted.
Incompatible clusterIDs among the nodes can happen when the namenode is formatted on an active cluster and the other daemons are not re-initialized.
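You can compare the two IDs directly; each storage directory keeps its VERSION file under a current folder (paths taken from the question's hdfs-site.xml):
type C:\hadoop\data\namenode\current\VERSION
type C:\hadoop\data\datanode\current\VERSION
rem both must show the same clusterID=CID-... line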
To get the cluster back into shape (see the command sketch below):
1. Stop the cluster.
2. Delete the contents of C:\hadoop\hdfstmp, C:\hadoop\data\namenode, and C:\hadoop\data\datanode.
3. Format the namenode.
4. Start the cluster.
You then have to re-copy the data required for the MapReduce job and run the job again.
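A minimal sketch of those steps for this Windows single-node setup; the script names assume a standard Hadoop 3.x sbin layout on PATH, and the paths come from the question:
rem stop the cluster
stop-yarn.cmd
stop-dfs.cmd
rem wipe the stale storage directories
rmdir /s /q C:\hadoop\hdfstmp
rmdir /s /q C:\hadoop\data\namenode
rmdir /s /q C:\hadoop\data\datanode
rem reformat and restart; the datanode will register with the new clusterID
hdfs namenode -format
start-dfs.cmd
start-yarn.cmd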
I do not have the option to shut down and restart my cluster. However, running the following command solved the problem without causing any other issue that I could see.
hdfs dfsadmin -safemode leave
See the following:
https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html#DFSAdmin_Command
https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html#Safemode
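For reference, the same dfsadmin tool can report the current state, so you can verify before and after:
hdfs dfsadmin -safemode get
rem prints "Safe mode is ON" or "Safe mode is OFF"
hdfs dfsadmin -safemode leave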

RabbitMQ + Spring AMQP: Broker not available; cannot force queue declarations during start: java.net.ConnectException: Connection refused: connect

I am currently learning RabbitMQ and Spring AMQP.
I was trying some examples this week, but I am facing an issue at one step of the configuration.
I have this docker-compose file:
app:
  build: .
  environment:
    INSERTION_QUEUE: insertion.queue
    VALIDATION_QUEUE: validation.queue
    NUMBER_OF_VALIDATION_CONSUMERS: 1
    RESPONSE_EXCHANGE: response.exchange
    RESPONSE_ROUTING_KEY: response.routing.key
    RABBITMQ_HOST: rabbitmq
    RABBITMQ_PORT: 5672
    RABBITMQ_VHOST: /
    RABBITMQ_USERNAME: guest
    RABBITMQ_PASSWORD: guest
    JDBC_URL: jdbc:mysql://mysql:3306/hided
  links:
    - mysql:mysql
    - rabbitmq:rabbitmq
mysql:
  image: mysql:5.7
  environment:
    MYSQL_DATABASE: hided
    MYSQL_ROOT_PASSWORD: secret
rabbitmq:
  image: rabbitmq:3.6-management
  ports:
    - 15672:15672
I copied this example, adding Maven configuration to it: https://github.com/spring-guides/gs-messaging-rabbitmq/tree/master/complete
The RabbitMQ management UI at localhost:15672 is reachable and accepts the username and password "guest".
But when I run the example, it gives me the error below (I believe the important message is Broker not available; cannot force queue declarations during start: java.net.ConnectException: Connection refused: connect):
2020-04-11 04:50:49.202 INFO 25576 --- [ main] o.s.a.r.c.CachingConnectionFactory : Attempting to connect to: [localhost:5672]
2020-04-11 04:50:51.222 INFO 25576 --- [ main] o.s.a.r.l.SimpleMessageListenerContainer : Broker not available; cannot force queue declarations during start: java.net.ConnectException: Connection refused: connect
2020-04-11 04:50:51.225 INFO 25576 --- [ container-1] o.s.a.r.c.CachingConnectionFactory : Attempting to connect to: [localhost:5672]
2020-04-11 04:50:53.242 ERROR 25576 --- [ container-1] o.s.a.r.l.SimpleMessageListenerContainer : Failed to check/redeclare auto-delete queue(s).
org.springframework.amqp.AmqpConnectException: java.net.ConnectException: Connection refused: connect
at org.springframework.amqp.rabbit.support.RabbitExceptionTranslator.convertRabbitAccessException(RabbitExceptionTranslator.java:61) ~[spring-rabbit-2.2.5.RELEASE.jar:2.2.5.RELEASE]
at org.springframework.amqp.rabbit.connection.AbstractConnectionFactory.createBareConnection(AbstractConnectionFactory.java:510) ~[spring-rabbit-2.2.5.RELEASE.jar:2.2.5.RELEASE]
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory.createConnection(CachingConnectionFactory.java:751) ~[spring-rabbit-2.2.5.RELEASE.jar:2.2.5.RELEASE]
at org.springframework.amqp.rabbit.connection.ConnectionFactoryUtils.createConnection(ConnectionFactoryUtils.java:214) ~[spring-rabbit-2.2.5.RELEASE.jar:2.2.5.RELEASE]
at org.springframework.amqp.rabbit.core.RabbitTemplate.doExecute(RabbitTemplate.java:2095) ~[spring-rabbit-2.2.5.RELEASE.jar:2.2.5.RELEASE]
at org.springframework.amqp.rabbit.core.RabbitTemplate.execute(RabbitTemplate.java:2068) ~[spring-rabbit-2.2.5.RELEASE.jar:2.2.5.RELEASE]
at org.springframework.amqp.rabbit.core.RabbitTemplate.execute(RabbitTemplate.java:2048) ~[spring-rabbit-2.2.5.RELEASE.jar:2.2.5.RELEASE]
at org.springframework.amqp.rabbit.core.RabbitAdmin.getQueueInfo(RabbitAdmin.java:407) ~[spring-rabbit-2.2.5.RELEASE.jar:2.2.5.RELEASE]
at org.springframework.amqp.rabbit.core.RabbitAdmin.getQueueProperties(RabbitAdmin.java:391) ~[spring-rabbit-2.2.5.RELEASE.jar:2.2.5.RELEASE]
at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.attemptDeclarations(AbstractMessageListenerContainer.java:1830) ~[spring-rabbit-2.2.5.RELEASE.jar:2.2.5.RELEASE]
at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.redeclareElementsIfNecessary(AbstractMessageListenerContainer.java:1811) ~[spring-rabbit-2.2.5.RELEASE.jar:2.2.5.RELEASE]
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer$AsyncMessageProcessingConsumer.initialize(SimpleMessageListenerContainer.java:1342) [spring-rabbit-2.2.5.RELEASE.jar:2.2.5.RELEASE]
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer$AsyncMessageProcessingConsumer.run(SimpleMessageListenerContainer.java:1188) [spring-rabbit-2.2.5.RELEASE.jar:2.2.5.RELEASE]
at java.lang.Thread.run(Unknown Source) [na:1.8.0_241]
Caused by: java.net.ConnectException: Connection refused: connect
at java.net.DualStackPlainSocketImpl.waitForConnect(Native Method) ~[na:1.8.0_241]
at java.net.DualStackPlainSocketImpl.socketConnect(Unknown Source) ~[na:1.8.0_241]
at java.net.AbstractPlainSocketImpl.doConnect(Unknown Source) ~[na:1.8.0_241]
at java.net.AbstractPlainSocketImpl.connectToAddress(Unknown Source) ~[na:1.8.0_241]
at java.net.AbstractPlainSocketImpl.connect(Unknown Source) ~[na:1.8.0_241]
at java.net.PlainSocketImpl.connect(Unknown Source) ~[na:1.8.0_241]
at java.net.SocksSocketImpl.connect(Unknown Source) ~[na:1.8.0_241]
at java.net.Socket.connect(Unknown Source) ~[na:1.8.0_241]
at com.rabbitmq.client.impl.SocketFrameHandlerFactory.create(SocketFrameHandlerFactory.java:60) ~[amqp-client-5.7.3.jar:5.7.3]
at com.rabbitmq.client.ConnectionFactory.newConnection(ConnectionFactory.java:1113) ~[amqp-client-5.7.3.jar:5.7.3]
at com.rabbitmq.client.ConnectionFactory.newConnection(ConnectionFactory.java:1063) ~[amqp-client-5.7.3.jar:5.7.3]
at org.springframework.amqp.rabbit.connection.AbstractConnectionFactory.connect(AbstractConnectionFactory.java:526) ~[spring-rabbit-2.2.5.RELEASE.jar:2.2.5.RELEASE]
at org.springframework.amqp.rabbit.connection.AbstractConnectionFactory.createBareConnection(AbstractConnectionFactory.java:473) ~[spring-rabbit-2.2.5.RELEASE.jar:2.2.5.RELEASE]
... 12 common frames omitted
2020-04-11 04:50:53.244 INFO 25576 --- [ container-1] o.s.a.r.c.CachingConnectionFactory : Attempting to connect to: [localhost:5672]
2020-04-11 04:50:55.266 INFO 25576 --- [ main] c.a.A.AppNameApplication : Started AppNameApplication in 7.328 seconds (JVM running for 8.175)
Sending message...
2020-04-11 04:50:55.268 INFO 25576 --- [ main] o.s.a.r.c.CachingConnectionFactory : Attempting to connect to: [localhost:5672]
2020-04-11 04:50:57.285 INFO 25576 --- [ main] ConditionEvaluationReportLoggingListener :
Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.
2020-04-11 04:50:57.293 ERROR 25576 --- [ main] o.s.boot.SpringApplication : Application run failed
java.lang.IllegalStateException: Failed to execute CommandLineRunner
at org.springframework.boot.SpringApplication.callRunner(SpringApplication.java:787) [spring-boot-2.2.6.RELEASE.jar:2.2.6.RELEASE]
at org.springframework.boot.SpringApplication.callRunners(SpringApplication.java:768) [spring-boot-2.2.6.RELEASE.jar:2.2.6.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:322) [spring-boot-2.2.6.RELEASE.jar:2.2.6.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1226) [spring-boot-2.2.6.RELEASE.jar:2.2.6.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1215) [spring-boot-2.2.6.RELEASE.jar:2.2.6.RELEASE]
at com.appname.AppName.AppNameApplication.main(AppNameApplication.java:19) [classes/:na]
Caused by: org.springframework.amqp.AmqpConnectException: java.net.ConnectException: Connection refused: connect
at org.springframework.amqp.rabbit.support.RabbitExceptionTranslator.convertRabbitAccessException(RabbitExceptionTranslator.java:61) ~[spring-rabbit-2.2.5.RELEASE.jar:2.2.5.RELEASE]
at org.springframework.amqp.rabbit.connection.AbstractConnectionFactory.createBareConnection(AbstractConnectionFactory.java:510) ~[spring-rabbit-2.2.5.RELEASE.jar:2.2.5.RELEASE]
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory.createConnection(CachingConnectionFactory.java:751) ~[spring-rabbit-2.2.5.RELEASE.jar:2.2.5.RELEASE]
at org.springframework.amqp.rabbit.connection.ConnectionFactoryUtils.createConnection(ConnectionFactoryUtils.java:214) ~[spring-rabbit-2.2.5.RELEASE.jar:2.2.5.RELEASE]
at org.springframework.amqp.rabbit.core.RabbitTemplate.doExecute(RabbitTemplate.java:2095) ~[spring-rabbit-2.2.5.RELEASE.jar:2.2.5.RELEASE]
at org.springframework.amqp.rabbit.core.RabbitTemplate.execute(RabbitTemplate.java:2068) ~[spring-rabbit-2.2.5.RELEASE.jar:2.2.5.RELEASE]
at org.springframework.amqp.rabbit.core.RabbitTemplate.send(RabbitTemplate.java:1009) ~[spring-rabbit-2.2.5.RELEASE.jar:2.2.5.RELEASE]
at org.springframework.amqp.rabbit.core.RabbitTemplate.convertAndSend(RabbitTemplate.java:1075) ~[spring-rabbit-2.2.5.RELEASE.jar:2.2.5.RELEASE]
at org.springframework.amqp.rabbit.core.RabbitTemplate.convertAndSend(RabbitTemplate.java:1068) ~[spring-rabbit-2.2.5.RELEASE.jar:2.2.5.RELEASE]
at com.appname.AppName.Runner.run(Runner.java:23) ~[classes/:na]
at org.springframework.boot.SpringApplication.callRunner(SpringApplication.java:784) [spring-boot-2.2.6.RELEASE.jar:2.2.6.RELEASE]
... 5 common frames omitted
Caused by: java.net.ConnectException: Connection refused: connect
at java.net.DualStackPlainSocketImpl.waitForConnect(Native Method) ~[na:1.8.0_241]
at java.net.DualStackPlainSocketImpl.socketConnect(Unknown Source) ~[na:1.8.0_241]
at java.net.AbstractPlainSocketImpl.doConnect(Unknown Source) ~[na:1.8.0_241]
at java.net.AbstractPlainSocketImpl.connectToAddress(Unknown Source) ~[na:1.8.0_241]
at java.net.AbstractPlainSocketImpl.connect(Unknown Source) ~[na:1.8.0_241]
at java.net.PlainSocketImpl.connect(Unknown Source) ~[na:1.8.0_241]
at java.net.SocksSocketImpl.connect(Unknown Source) ~[na:1.8.0_241]
at java.net.Socket.connect(Unknown Source) ~[na:1.8.0_241]
at com.rabbitmq.client.impl.SocketFrameHandlerFactory.create(SocketFrameHandlerFactory.java:60) ~[amqp-client-5.7.3.jar:5.7.3]
at com.rabbitmq.client.ConnectionFactory.newConnection(ConnectionFactory.java:1113) ~[amqp-client-5.7.3.jar:5.7.3]
at com.rabbitmq.client.ConnectionFactory.newConnection(ConnectionFactory.java:1063) ~[amqp-client-5.7.3.jar:5.7.3]
at org.springframework.amqp.rabbit.connection.AbstractConnectionFactory.connect(AbstractConnectionFactory.java:526) ~[spring-rabbit-2.2.5.RELEASE.jar:2.2.5.RELEASE]
at org.springframework.amqp.rabbit.connection.AbstractConnectionFactory.createBareConnection(AbstractConnectionFactory.java:473) ~[spring-rabbit-2.2.5.RELEASE.jar:2.2.5.RELEASE]
... 14 common frames omitted
2020-04-11 04:50:57.296 INFO 25576 --- [ main] o.s.a.r.l.SimpleMessageListenerContainer : Waiting for workers to finish.
2020-04-11 04:50:57.296 INFO 25576 --- [ main] o.s.a.r.l.SimpleMessageListenerContainer : Successfully waited for workers to finish.
2020-04-11 04:50:57.298 INFO 25576 --- [ main] o.s.a.r.l.SimpleMessageListenerContainer : Shutdown ignored - container is not active already
I would like to understand what I am doing wrong. I tried adding the code below to "MessagingRabbitmqApplication.java" to force the connection, changed ports and addresses, and disabled the firewall, with no success:
@Bean
public ConnectionFactory connectionFactory() {
    CachingConnectionFactory connectionFactory = new CachingConnectionFactory();
    connectionFactory.setHost("localhost");
    connectionFactory.setVirtualHost("/");
    connectionFactory.setUsername("guest");
    connectionFactory.setPassword("guest");
    return connectionFactory;
}
I tried other examples as well, but I keep facing this same error.
My Dockerfile is:
# Start with a base image containing Java runtime
FROM openjdk:8-jdk-alpine
# Add Maintainer Info
LABEL maintainer="hidden"
# Add a volume pointing to /tmp
VOLUME /tmp
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
And my application.properties is:
#Database properties
spring.datasource.url = jdbc:mysql://mysql:3306/app_name?user=root&password=secret
spring.datasource.username = root
spring.datasource.password = secret
spring.datasource.platform = mysql
spring.datasource.driver-class-name = com.mysql.cj.jdbc.Driver
#Rabbitmq properties
spring.rabbitmq.host=localhost
spring.rabbitmq.port=5672
spring.rabbitmq.username=guest
spring.rabbitmq.password=guest
spring.rabbitmq.template.exchange=response.exchange
spring.rabbitmq.template.routing-key=response.routing.key
spring.rabbitmq.virtual-host=/
Thank you for your time,
I appreciate any help.
I found the error today: we need to expose the RabbitMQ AMQP port in the docker-compose file, or it won't be accessible from outside the Docker network. So we need to add the line "- 5672:5672" under rabbitmq > ports:
app:
  build: .
  environment:
    INSERTION_QUEUE: insertion.queue
    VALIDATION_QUEUE: validation.queue
    NUMBER_OF_VALIDATION_CONSUMERS: 1
    RESPONSE_EXCHANGE: response.exchange
    RESPONSE_ROUTING_KEY: response.routing.key
    RABBITMQ_HOST: rabbitmq
    RABBITMQ_PORT: 5672
    RABBITMQ_VHOST: /
    RABBITMQ_USERNAME: guest
    RABBITMQ_PASSWORD: guest
    JDBC_URL: jdbc:mysql://mysql:3306/hided
  links:
    - mysql:mysql
    - rabbitmq:rabbitmq
mysql:
  image: mysql:5.7
  environment:
    MYSQL_DATABASE: hided
    MYSQL_ROOT_PASSWORD: secret
rabbitmq:
  image: rabbitmq:3.6-management
  ports:
    - 5672:5672
    - 15672:15672
Thank you!
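One note for anyone verifying this fix: after editing docker-compose.yml you have to recreate the container for the new port mapping to take effect. A quick check, assuming the Compose service is named rabbitmq as above and nc is available:

docker-compose up -d rabbitmq   # recreate the container with the new port mapping
docker-compose ps               # the rabbitmq entry should now show 0.0.0.0:5672->5672/tcp
nc -zv localhost 5672           # the AMQP port should now report as open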
Please correct your bean creation configuration; you should replace it with:
@Bean
public ConnectionFactory connectionFactory() {
    CachingConnectionFactory connectionFactory = new CachingConnectionFactory();
    connectionFactory.setHost(System.getProperty("RABBITMQ_HOST"));
    connectionFactory.setVirtualHost(System.getProperty("RABBITMQ_VHOST"));
    connectionFactory.setUsername(System.getProperty("RABBITMQ_USERNAME"));
    connectionFactory.setPassword(System.getProperty("RABBITMQ_PASSWORD"));
    return connectionFactory;
}
and modify the last line of your Dockerfile:
# Start with a base image containing Java runtime
FROM openjdk:8-jdk-alpine
# Add Maintainer Info
LABEL maintainer="hidden"
# Add a volume pointing to /tmp
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT java -DRABBITMQ_HOST=${RABBITMQ_HOST} -DRABBITMQ_VHOST=${RABBITMQ_VHOST} -DRABBITMQ_USERNAME=${RABBITMQ_USERNAME} -DRABBITMQ_PASSWORD=${RABBITMQ_PASSWORD} -Djava.security.egd=file:/dev/./urandom -jar app.jar
You can run Docker with this command to use the default values:
docker run -it --rm --name rabbitmq -p 5672:5672 -p 15672:15672 rabbitmq:3-management
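As a design note, the shell-form ENTRYPOINT above is what lets the RABBITMQ_* environment variables supplied by docker-compose be expanded into -D system properties at container start, which is why System.getProperty() can read them. Alternatively, you could drop the custom bean entirely and let Spring Boot's auto-configured spring.rabbitmq.* properties resolve the same environment variables through placeholders; a minimal application.properties sketch, assuming the RABBITMQ_* variables from the compose file in the question:

spring.rabbitmq.host=${RABBITMQ_HOST:localhost}
spring.rabbitmq.port=${RABBITMQ_PORT:5672}
spring.rabbitmq.virtual-host=${RABBITMQ_VHOST:/}
spring.rabbitmq.username=${RABBITMQ_USERNAME:guest}
spring.rabbitmq.password=${RABBITMQ_PASSWORD:guest}

With this approach the defaults after the colon apply when running outside Docker, and the container environment wins inside Compose.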
Got stuck on this error for a couple of days.
What worked for me was to include the
SPRING_RABBITMQ_SSL_ENABLED=true
setting in a .env file that's being pulled in by my docker-compose.yml file.
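For context, a minimal sketch of how such a .env file can be wired into the app container, assuming Compose's env_file support (the service name matches the compose file from the question):

# .env
SPRING_RABBITMQ_SSL_ENABLED=true

# docker-compose.yml (app service)
app:
  build: .
  env_file:
    - .env

Spring Boot's relaxed binding maps the SPRING_RABBITMQ_SSL_ENABLED variable onto the spring.rabbitmq.ssl.enabled property.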

Apache NiFi: PutHiveStreaming is not connecting

I have a simple process flow that follows the example: https://community.hortonworks.com/articles/52856/stream-data-into-hive-like-a-king-using-nifi.html
The flow looks like the one in that article (screenshot omitted).
The flow files go through the entire process but then fail to write to the Hive DB at the "Stream CSV to Hive" processor.
When I look at the nifi-app.log, I get the following exception:
2018-03-03 00:51:33,942 ERROR [Timer-Driven Process Thread-8] o.a.n.processors.hive.PutHiveStreaming PutHiveStreaming[id=e88d5c4e-0161-1000-1713-79d402d400b2] Error connecting to Hive endpoint: table olympics at thrift://master:9083
2018-03-03 00:51:33,947 ERROR [Timer-Driven Process Thread-8] o.a.n.processors.hive.PutHiveStreaming PutHiveStreaming[id=e88d5c4e-0161-1000-1713-79d402d400b2] Hive Streaming connect/write error, flow file will be penalized and routed to retry. org.apache.nifi.util.hive.HiveWriter$ConnectFailure: Failed connecting to EndPoint {metaStoreUri='thrift://master:9083', database='default', table='olympics', partitionVals=[] }: org.apache.nifi.processors.hive.PutHiveStreaming$ShouldRetryException: Hive Streaming connect/write error, flow file will be penalized and routed to retry. org.apache.nifi.util.hive.HiveWriter$ConnectFailure: Failed connecting to EndPoint {metaStoreUri='thrift://master:9083', database='default', table='olympics', partitionVals=[] }
org.apache.nifi.processors.hive.PutHiveStreaming$ShouldRetryException: Hive Streaming connect/write error, flow file will be penalized and routed to retry. org.apache.nifi.util.hive.HiveWriter$ConnectFailure: Failed connecting to EndPoint {metaStoreUri='thrift://master:9083', database='default', table='olympics', partitionVals=[] }
at org.apache.nifi.processors.hive.PutHiveStreaming.lambda$onHiveRecordsError$1(PutHiveStreaming.java:527)
at org.apache.nifi.processor.util.pattern.ExceptionHandler$OnError.lambda$andThen$0(ExceptionHandler.java:54)
at org.apache.nifi.processors.hive.PutHiveStreaming.lambda$onHiveRecordError$2(PutHiveStreaming.java:545)
at org.apache.nifi.processor.util.pattern.ExceptionHandler.execute(ExceptionHandler.java:148)
at org.apache.nifi.processors.hive.PutHiveStreaming.lambda$onTrigger$12(PutHiveStreaming.java:677)
at org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2174)
at org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2144)
at org.apache.nifi.processors.hive.PutHiveStreaming.onTrigger(PutHiveStreaming.java:631)
at org.apache.nifi.processors.hive.PutHiveStreaming.lambda$onTrigger$4(PutHiveStreaming.java:555)
at org.apache.nifi.processor.util.pattern.PartialFunctions.onTrigger(PartialFunctions.java:114)
at org.apache.nifi.processor.util.pattern.RollbackOnFailure.onTrigger(RollbackOnFailure.java:184)
at org.apache.nifi.processors.hive.PutHiveStreaming.onTrigger(PutHiveStreaming.java:555)
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1119)
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.nifi.util.hive.HiveWriter$ConnectFailure: Failed connecting to EndPoint {metaStoreUri='thrift://master:9083', database='default', table='olympics', partitionVals=[] }
at org.apache.nifi.util.hive.HiveWriter.<init>(HiveWriter.java:79)
at org.apache.nifi.util.hive.HiveUtils.makeHiveWriter(HiveUtils.java:46)
at org.apache.nifi.processors.hive.PutHiveStreaming.makeHiveWriter(PutHiveStreaming.java:968)
at org.apache.nifi.processors.hive.PutHiveStreaming.getOrCreateWriter(PutHiveStreaming.java:879)
at org.apache.nifi.processors.hive.PutHiveStreaming.lambda$null$8(PutHiveStreaming.java:680)
at org.apache.nifi.processor.util.pattern.ExceptionHandler.execute(ExceptionHandler.java:127)
... 19 common frames omitted
Caused by: org.apache.nifi.util.hive.HiveWriter$TxnBatchFailure: Failed acquiring Transaction Batch from EndPoint: {metaStoreUri='thrift://master:9083', database='default', table='olympics', partitionVals=[] }
at org.apache.nifi.util.hive.HiveWriter.nextTxnBatch(HiveWriter.java:264)
at org.apache.nifi.util.hive.HiveWriter.<init>(HiveWriter.java:73)
... 24 common frames omitted
Caused by: org.apache.hive.hcatalog.streaming.TransactionError: Unable to acquire lock on {metaStoreUri='thrift://master:9083', database='default', table='olympics', partitionVals=[] }
at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.beginNextTransactionImpl(HiveEndPoint.java:578)
at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.beginNextTransaction(HiveEndPoint.java:547)
at org.apache.nifi.util.hive.HiveWriter.nextTxnBatch(HiveWriter.java:261)
... 25 common frames omitted
Caused by: org.apache.thrift.transport.TTransportException: null
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:429)
at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:318)
at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:219)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_lock(ThriftHiveMetastore.java:3906)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.lock(ThriftHiveMetastore.java:3893)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.lock(HiveMetaStoreClient.java:1863)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:152)
at com.sun.proxy.$Proxy85.lock(Unknown Source)
at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.beginNextTransactionImpl(HiveEndPoint.java:573)
... 27 common frames omitted
I have the Hive metastore daemon running, the default port (9083) is open, and I can connect to it with telnet. Hive is running on the same machine as NiFi, with an entry in /etc/hosts naming it master.
I'm getting the following exception:
org.apache.thrift.transport.TTransportException: null.
That is, a null message is being passed to the constructor of the parent Exception class.
I've also tried using older versions of Apache NiFi and Apache Hive. I just tried NiFi 1.4 and Hive 1.2.2, and I'm now getting a different error:
2018-03-13 19:17:20,126 INFO [put-hive-streaming-0] hive.metastore Trying to connect to metastore with URI thrift://master:9083
2018-03-13 19:17:20,220 INFO [put-hive-streaming-0] hive.metastore Connected to metastore.
2018-03-13 19:17:20,524 INFO [Timer-Driven Process Thread-6] hive.metastore Trying to connect to metastore with URI thrift://master:9083
2018-03-13 19:17:20,525 INFO [Timer-Driven Process Thread-6] hive.metastore Connected to metastore.
2018-03-13 19:17:21,204 WARN [put-hive-streaming-0] o.a.h.h.m.RetryingMetaStoreClient MetaStoreClient lost connection. Attempting to reconnect.
org.apache.thrift.TApplicationException: Internal error processing open_txns
at org.apache.thrift.TApplicationException.read(TApplicationException.java:111)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:71)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_open_txns(ThriftHiveMetastore.java:3834)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.open_txns(ThriftHiveMetastore.java:3821)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.openTxns(HiveMetaStoreClient.java:1841)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:152)
at com.sun.proxy.$Proxy122.openTxns(Unknown Source)
at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.openTxnImpl(HiveEndPoint.java:520)
at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.<init>(HiveEndPoint.java:504)
at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.<init>(HiveEndPoint.java:461)
at org.apache.hive.hcatalog.streaming.HiveEndPoint$ConnectionImpl.fetchTransactionBatchImpl(HiveEndPoint.java:345)
at org.apache.hive.hcatalog.streaming.HiveEndPoint$ConnectionImpl.fetchTransactionBatch(HiveEndPoint.java:325)
at org.apache.nifi.util.hive.HiveWriter.lambda$nextTxnBatch$2(HiveWriter.java:259)
at org.apache.nifi.util.hive.HiveWriter.lambda$callWithTimeout$4(HiveWriter.java:365)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2018-03-13 19:17:22,205 INFO [put-hive-streaming-0] hive.metastore Trying to connect to metastore with URI thrift://master:9083
2018-03-13 19:17:22,207 INFO [put-hive-streaming-0] hive.metastore Connected to metastore.
2018-03-13 19:17:22,214 ERROR [Timer-Driven Process Thread-6] o.a.n.processors.hive.PutHiveStreaming PutHiveStreaming[id=e88d5c4e-0161-1000-1713-79d402d400b2] Failed to create HiveWriter for endpoint: {metaStoreUri='thrift://master:9083', database='default', table='olympics', partitionVals=[] }: org.apache.nifi.util.hive.HiveWriter$ConnectFailure: Failed connecting to EndPoint {metaStoreUri='thrift://master:9083', database='default', table='olympics', partitionVals=[] }
org.apache.nifi.util.hive.HiveWriter$ConnectFailure: Failed connecting to EndPoint {metaStoreUri='thrift://master:9083', database='default', table='olympics', partitionVals=[] }
at org.apache.nifi.util.hive.HiveWriter.<init>(HiveWriter.java:79)
at org.apache.nifi.util.hive.HiveUtils.makeHiveWriter(HiveUtils.java:46)
at org.apache.nifi.processors.hive.PutHiveStreaming.makeHiveWriter(PutHiveStreaming.java:968)
at org.apache.nifi.processors.hive.PutHiveStreaming.getOrCreateWriter(PutHiveStreaming.java:879)
at org.apache.nifi.processors.hive.PutHiveStreaming.lambda$null$8(PutHiveStreaming.java:680)
at org.apache.nifi.processor.util.pattern.ExceptionHandler.execute(ExceptionHandler.java:127)
at org.apache.nifi.processors.hive.PutHiveStreaming.lambda$onTrigger$12(PutHiveStreaming.java:677)
at org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2174)
at org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2144)
at org.apache.nifi.processors.hive.PutHiveStreaming.onTrigger(PutHiveStreaming.java:631)
at org.apache.nifi.processors.hive.PutHiveStreaming.lambda$onTrigger$4(PutHiveStreaming.java:555)
at org.apache.nifi.processor.util.pattern.PartialFunctions.onTrigger(PartialFunctions.java:114)
at org.apache.nifi.processor.util.pattern.RollbackOnFailure.onTrigger(RollbackOnFailure.java:184)
at org.apache.nifi.processors.hive.PutHiveStreaming.onTrigger(PutHiveStreaming.java:555)
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1119)
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.nifi.util.hive.HiveWriter$TxnBatchFailure: Failed acquiring Transaction Batch from EndPoint: {metaStoreUri='thrift://master:9083', database='default', table='olympics', partitionVals=[] }
at org.apache.nifi.util.hive.HiveWriter.nextTxnBatch(HiveWriter.java:264)
at org.apache.nifi.util.hive.HiveWriter.<init>(HiveWriter.java:73)
... 24 common frames omitted
Caused by: org.apache.hive.hcatalog.streaming.TransactionBatchUnAvailable: Unable to acquire transaction batch on end point: {metaStoreUri='thrift://master:9083', database='default', table='olympics', partitionVals=[] }
at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.<init>(HiveEndPoint.java:511)
at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.<init>(HiveEndPoint.java:461)
at org.apache.hive.hcatalog.streaming.HiveEndPoint$ConnectionImpl.fetchTransactionBatchImpl(HiveEndPoint.java:345)
at org.apache.hive.hcatalog.streaming.HiveEndPoint$ConnectionImpl.fetchTransactionBatch(HiveEndPoint.java:325)
at org.apache.nifi.util.hive.HiveWriter.lambda$nextTxnBatch$2(HiveWriter.java:259)
at org.apache.nifi.util.hive.HiveWriter.lambda$callWithTimeout$4(HiveWriter.java:365)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
... 3 common frames omitted
Caused by: org.apache.thrift.TApplicationException: Internal error processing open_txns
at org.apache.thrift.TApplicationException.read(TApplicationException.java:111)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:71)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_open_txns(ThriftHiveMetastore.java:3834)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.open_txns(ThriftHiveMetastore.java:3821)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.openTxns(HiveMetaStoreClient.java:1841)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:152)
at com.sun.proxy.$Proxy122.openTxns(Unknown Source)
at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.openTxnImpl(HiveEndPoint.java:520)
at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.<init>(HiveEndPoint.java:504)
... 9 common frames omitted
What version of NiFi/HDF are you using, and what kind/version of Hadoop cluster are you connecting to (e.g. HDP)? It is a known issue that Apache NiFi will not work with HDP Hive starting with HDP 2.5 (or maybe 2.4); you would have to download the NiFi-only distro of Hortonworks Data Flow in order to make it work.
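A further hedged suggestion: "Internal error processing open_txns" also commonly appears when the Hive metastore is not configured for ACID transactions, which Hive Streaming requires. Settings worth verifying in hive-site.xml, as a sketch to check against your setup rather than a definitive fix:

hive.support.concurrency=true
hive.exec.dynamic.partition.mode=nonstrict
hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager
hive.compactor.initiator.on=true
hive.compactor.worker.threads=1

The target table must also be bucketed, stored as ORC, and created with TBLPROPERTIES ('transactional'='true'); the metastore log usually shows the real cause of the internal error.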
