I have a dataflow running in NiFi 1.12.0; the relevant properties from this installation are below:
nifi.sensitive.props.key=
nifi.sensitive.props.key.protected=
nifi.sensitive.props.algorithm=PBEWITHMD5AND256BITAES-CBC-OPENSSL
nifi.sensitive.props.provider=BC
nifi.sensitive.props.additional.keys=
I am facing a migration issue in NiFi when upgrading the base version from 1.12.0 to 1.16.3, which has the following properties:
nifi.sensitive.props.key=testPassword
nifi.sensitive.props.key.protected=
nifi.sensitive.props.algorithm=NIFI_ARGON2_AES_GCM_256
nifi.sensitive.props.provider=BC
nifi.sensitive.props.additional.keys=
I get the following exception when I execute the command to migrate the flow file:
nifi@nifi-service-0:/opt/nifi/nifi-toolkit-current/bin$ ./encrypt-config.sh -n $NIFI_HOME/conf/nifi.properties -f /opt/nifi/data/old_flow.xml.gz -s testPassword -x
[main] WARN org.apache.nifi.properties.ConfigEncryptionTool - The source nifi.properties and destination nifi.properties are identical [/opt/nifi/nifi-current/conf/nifi.properties] so the original will be overwritten
[main] WARN org.apache.nifi.properties.ConfigEncryptionTool - The source flow.xml.gz and destination flow.xml.gz are identical [/opt/nifi/data/old_flow.xml.gz] so the original will be overwritten
[main] WARN org.apache.nifi.properties.AbstractBootstrapPropertiesLoader - System Property [nifi.properties.file.path] not found: Using Relative Path [conf/nifi.properties]
[main] INFO org.apache.nifi.properties.NiFiPropertiesLoader - Loading Application Properties [/opt/nifi/nifi-current/conf/nifi.properties]
[main] INFO org.apache.nifi.properties.NiFiPropertiesLoader - Loading Application Properties [/opt/nifi/nifi-current/conf/nifi.properties]
[main] INFO org.apache.nifi.properties.ConfigEncryptionTool - Loaded NiFiProperties instance with 138 properties
[main] INFO org.apache.nifi.properties.NiFiPropertiesLoader - Loading Application Properties [/opt/nifi/nifi-current/conf/nifi.properties]
[main] INFO org.apache.nifi.properties.ConfigEncryptionTool - Migrating flow.xml file at /opt/nifi/data/old_flow.xml.gz. This could take a while if the flow XML is very large.
[main] ERROR org.apache.nifi.properties.ConfigEncryptionTool - Encountered an error: Decryption Failed with Algorithm [AES/GCM/NoPadding]
Encountered an error migrating flow content
usage: org.apache.nifi.properties.ConfigEncryptionTool [-h] [-v] [-n <file>] [-o <file>] [-l <file>] [-i <file>] [-a <file>] [-u <file>] [-f <file>] [-g <file>]
[-b <file>] [-S <protectionScheme>] [-k <keyhex>] [-e <keyhex>] [-H <protectionScheme>] [-p <password>] [-w <password>] [-r] [-m] [-x] [-s
<password|keyhex>] [-A <algorithm>] [-P <algorithm>] [-c]
This tool reads from a nifi.properties and/or login-identity-providers.xml file with plain sensitive configuration values, prompts the user for a root key, and
encrypts each value. It will replace the plain value with the protected value in the same file (or write to a new file if specified). It can also be used to
migrate already-encrypted values in those files or in flow.xml.gz to be encrypted with a new key.
Please help me solve this issue.
Solved it myself.
The issue can be resolved with the following steps:
Before migrating, if you don't have nifi.sensitive.props.key set, set it using the following command:
${NIFI_TOOLKIT_PAT}/bin/encrypt-config.sh -f /opt/nifi/nifi-current/data/flow.xml.gz -n ${NIFI_HOME}/conf/nifi.properties -s <NEW_KEY_TO_SET> -x
Once the key is set, upgrade NiFi. Since the algorithm changed in the newer version, set it using:
${NIFI_HOME}/bin/nifi.sh set-sensitive-properties-algorithm <NEW_ALGORITHM>
Once the algorithm is set, encrypt again using:
${NIFI_TOOLKIT_PAT}/bin/encrypt-config.sh -f /opt/nifi/nifi-current/data/flow.xml.gz -n ${NIFI_HOME}/conf/nifi.properties -s <NEW_KEY_TO_SET> -x
You will now have files that are compatible with the latest version.
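To sanity-check the result, you can inspect the sensitive-props entries afterwards. A quick check (a sketch, assuming the default property names shown above):
grep 'nifi.sensitive.props' ${NIFI_HOME}/conf/nifi.properties
# expect nifi.sensitive.props.algorithm=NIFI_ARGON2_AES_GCM_256 and a populated nifi.sensitive.props.key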
On my local system, Zeppelin works as expected.
However, on another system with the same Java version and Ubuntu 16, I ran into the issue below:
Log dir doesn't exist, create /data/software/zeppelin-0.9.0-preview1-bin-all/logs
Pid dir doesn't exist, create /data/software/zeppelin-0.9.0-preview1-bin-all/run
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/data/software/apache-hive-2.3.7-bin/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/data/software/zeppelin-0.9.0-preview1-bin-all/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
2020-07-30T11:28:08,483 WARN [main] org.apache.zeppelin.conf.ZeppelinConfiguration - Failed to load configuration, proceeding with a default
2020-07-30T11:28:08,538 INFO [main] org.apache.zeppelin.conf.ZeppelinConfiguration - Server Host: 127.0.0.1
2020-07-30T11:28:08,538 INFO [main] org.apache.zeppelin.conf.ZeppelinConfiguration - Server Port: 8080
2020-07-30T11:28:08,538 INFO [main] org.apache.zeppelin.conf.ZeppelinConfiguration - Context Path: /
2020-07-30T11:28:08,538 INFO [main] org.apache.zeppelin.conf.ZeppelinConfiguration - Zeppelin Version: 0.9.0-preview1
Exception in thread "main" java.lang.NoSuchMethodError: org.eclipse.jetty.util.thread.QueuedThreadPool.<init>(III)V
at org.apache.zeppelin.server.ZeppelinServer.setupJettyServer(ZeppelinServer.java:310)
at org.apache.zeppelin.server.ZeppelinServer.main(ZeppelinServer.java:132)
Here are the logs from the working system:
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
I downloaded standalone Zeppelin, and no other config changes were made.
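For what it's worth, the SLF4J output on the broken system shows jars from the Hive installation (/data/software/apache-hive-2.3.7-bin/lib) being picked up, and a NoSuchMethodError on QueuedThreadPool.<init>(III)V usually means an incompatible jetty-util jar is being loaded ahead of the one Zeppelin ships. A diagnostic sketch (paths assume the layout above):
find /data/software -name 'jetty-util*.jar' 2>/dev/null   # more than one version is a red flag
echo "$CLASSPATH"                                          # a globally exported CLASSPATH can pull in Hive libs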
UPDATE
I have fixed the issue below, thank you to Mike for pointing it out. However, now when I run the command "jps" to check the HMaster process, as the quick start guide suggests, I get the error "command not found":
I searched for this, and the command is part of Java. So here is the Java configuration on my machine:
In .bashrc and .bash_profile:
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.222.b10-1.fc31.x86_64/jre
export JRE_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.222.b10-1.fc31.x86_64/jre
export PATH=$PATH:$HOME/bin:$JAVA_HOME/bin
In hbase-env.sh:
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.222.b10-1.fc31.x86_64/jre
Location of my java:
[hadoop@new-hbase-shuti logs]$ whereis java
java: /usr/bin/java /usr/lib/java /etc/java /usr/share/java /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.222.b10-1.fc31.x86_64/jre/bin/java /usr/share/man/man1/java.1.gz
My java version:
[hadoop@new-hbase-shuti logs]$ java -version
openjdk version "1.8.0_222"
OpenJDK Runtime Environment (build 1.8.0_222-b10)
OpenJDK 64-Bit Server VM (build 25.222-b10, mixed mode)
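Worth checking here: jps ships with the JDK, not the JRE, and JAVA_HOME above points at the jre subdirectory. A quick check (a sketch; the JDK bin only exists if the matching openjdk-devel package is installed):
ls /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.222.b10-1.fc31.x86_64/jre/bin | grep jps   # likely prints nothing: the JRE has no jps
ls /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.222.b10-1.fc31.x86_64/bin | grep jps       # the JDK bin should list it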
Here is the new log file from HBase (hbase-hadoop-master-new-hbase-shuti.log):
I followed the quick start guide to install HBase standalone. Here is my configuration:
I was not quite sure which HBase package to use, but the guide said to choose the stable one, so I downloaded this: http://mirrors.standaloneinstaller.com/apache/hbase/stable/hbase-2.2.3-bin.tar.gz
The conf/hbase-env.sh, where I just set the JAVA_HOME env path:
The conf/hbase-site.xml
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>file:///home/testuser/hbase</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/testuser/zookeeper</value>
  </property>
  <property>
    <name>hbase.unsafe.stream.capability.enforce</name>
    <value>false</value>
    <description>
      Controls whether HBase will check for stream capabilities (hflush/hsync).
      Disable this if you intend to run on LocalFileSystem, denoted by a rootdir
      with the 'file://' scheme, but be mindful of the NOTE below.
      WARNING: Setting this to false blinds you to potential data loss and
      inconsistent system state in the event of process and/or node failures. If
      HBase is complaining of an inability to use hsync or hflush it's most
      likely not a false positive.
    </description>
  </property>
</configuration>
Then I run the start-hbase.sh script from bin, but I get this error:
/home/hadoop/hadoop/bin/../libexec/hadoop-functions.sh: line 2360: HADOOP_ORG.APACHE.HADOOP.HBASE.UTIL.GETJAVAPROPERTY_USER: invalid variable name
/home/hadoop/hadoop/bin/../libexec/hadoop-functions.sh: line 2455: HADOOP_ORG.APACHE.HADOOP.HBASE.UTIL.GETJAVAPROPERTY_OPTS: invalid variable name
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/hbase-2-2-3/hbase-2.2.3/lib/client-facing-thirdparty/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
running master, logging to /home/hadoop/hbase-2-2-3/hbase-2.2.3/bin/../logs/hbase-hadoop-master-new-hbase-shuti.out
/home/hadoop/hadoop/bin/../libexec/hadoop-functions.sh: line 2360: HADOOP_ORG.APACHE.HADOOP.HBASE.UTIL.GETJAVAPROPERTY_USER: invalid variable name
/home/hadoop/hadoop/bin/../libexec/hadoop-functions.sh: line 2455: HADOOP_ORG.APACHE.HADOOP.HBASE.UTIL.GETJAVAPROPERTY_OPTS: invalid variable name
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/hbase-2-2-3/hbase-2.2.3/lib/client-facing-thirdparty/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
I have also attached the error log files from HBase below. Could anyone who is familiar with HBase help me please? Thank you very much in advance.
Error from "hbase-hadoop-master-new-hbase-shuti.log":
Thu 26 Mar 2020 08:59:07 PM CET Starting master on new-hbase-shuti
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 7523
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 7523
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
2020-03-26 20:59:07,933 INFO [main] master.HMaster: STARTING service HMaster
2020-03-26 20:59:07,934 INFO [main] util.VersionInfo: HBase 2.2.3
2020-03-26 20:59:07,934 INFO [main] util.VersionInfo: Source code repository git://hao-OptiPlex-7050/home/hao/open_source/hbase revision=6a830d87542b766bd3dc4cfdee28655f62de3974
2020-03-26 20:59:07,934 INFO [main] util.VersionInfo: Compiled by hao on Fri Jan 10 18:27:51 CST 2020
2020-03-26 20:59:07,934 INFO [main] util.VersionInfo: From source with checksum 097925184b85f6995e20da5462b10f3f
2020-03-26 20:59:08,190 INFO [main] master.HMasterCommandLine: Starting a zookeeper cluster
2020-03-26 20:59:08,204 INFO [main] server.ZooKeeperServer: Server environment:zookeeper.version=3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f, built on 03/23/2017 10:13 GMT
2020-03-26 20:59:08,205 INFO [main] server.ZooKeeperServer: Server environment:host.name=new-hbase-shuti.mshome.net
2020-03-26 20:59:08,205 INFO [main] server.ZooKeeperServer: Server environment:java.version=1.8.0_222
2020-03-26 20:59:08,205 INFO [main] server.ZooKeeperServer: Server environment:java.vendor=Oracle Corporation
2020-03-26 20:59:08,205 INFO [main] server.ZooKeeperServer: Server environment:java.home=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.222.b10-1.fc31.x86_64/jre
2020-03-26 20:59:08,205 INFO [main] server.ZooKeeperServer: Server environment:java.class.path=[truncated; Hadoop 3.1.3 YARN jars under /home/hadoop/hadoop/share/hadoop/yarn plus /home/hadoop/hbase-2-2-3/hbase-2.2.3/bin/../lib/client-facing-thirdparty/slf4j-log4j12-1.7.25.jar]
2020-03-26 20:59:08,205 INFO [main] server.ZooKeeperServer: Server environment:java.library.path=/home/hadoop/hadoop//lib/native
2020-03-26 20:59:08,205 INFO [main] server.ZooKeeperServer: Server environment:java.io.tmpdir=/tmp
2020-03-26 20:59:08,205 INFO [main] server.ZooKeeperServer: Server environment:java.compiler=<NA>
2020-03-26 20:59:08,205 INFO [main] server.ZooKeeperServer: Server environment:os.name=Linux
2020-03-26 20:59:08,205 INFO [main] server.ZooKeeperServer: Server environment:os.arch=amd64
2020-03-26 20:59:08,205 INFO [main] server.ZooKeeperServer: Server environment:os.version=5.3.7-301.fc31.x86_64
2020-03-26 20:59:08,205 INFO [main] server.ZooKeeperServer: Server environment:user.name=hadoop
2020-03-26 20:59:08,205 INFO [main] server.ZooKeeperServer: Server environment:user.home=/home/hadoop
2020-03-26 20:59:08,205 INFO [main] server.ZooKeeperServer: Server environment:user.dir=/home/hadoop/hbase-2-2-3/hbase-2.2.3/bin
2020-03-26 20:59:08,207 ERROR [main] master.HMasterCommandLine: Master exiting
java.io.IOException: Unable to create data directory /home/testuser/zookeeper/zookeeper_0/version-2
at org.apache.zookeeper.server.persistence.FileTxnSnapLog.<init>(FileTxnSnapLog.java:85)
at org.apache.zookeeper.server.ZooKeeperServer.<init>(ZooKeeperServer.java:224)
at org.apache.hadoop.hbase.zookeeper.MiniZooKeeperCluster.startup(MiniZooKeeperCluster.java:229)
at org.apache.hadoop.hbase.zookeeper.MiniZooKeeperCluster.startup(MiniZooKeeperCluster.java:187)
at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:210)
at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:140)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:149)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2940)
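Note that the IOException points back at the hbase.zookeeper.property.dataDir value in hbase-site.xml above: the /home/testuser paths come straight from the quick start example, but this process runs as the hadoop user (see user.name=hadoop in the environment dump), which cannot create directories under /home/testuser. A sketch of adjusted values, assuming the data should live under the hadoop home instead:
<property>
  <name>hbase.rootdir</name>
  <value>file:///home/hadoop/hbase</value>
</property>
<property>
  <name>hbase.zookeeper.property.dataDir</name>
  <value>/home/hadoop/zookeeper</value>
</property>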
Error from "hbase-hadoop-master-new-hbase-shuti.out":
/home/hadoop/hadoop/bin/../libexec/hadoop-functions.sh: line 2360: HADOOP_ORG.APACHE.HADOOP.HBASE.UTIL.GETJAVAPROPERTY_USER: invalid variable name
/home/hadoop/hadoop/bin/../libexec/hadoop-functions.sh: line 2455: HADOOP_ORG.APACHE.HADOOP.HBASE.UTIL.GETJAVAPROPERTY_OPTS: invalid variable name
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/hbase-2-2-3/hbase-2.2.3/lib/client-facing-thirdparty/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
After digging into Hadoop, I found that, in my case, it has something to do with the Ubuntu user permission check in
vi /opt/hadoop/libexec/hadoop-functions.sh
function hadoop_verify_user_resolves
{
...
}
So I decided to add these lines in /opt/hbase/conf/hbase-env.sh:
export HBASE_SSH_OPTS="-p 22 -l daniel"
export HBASE_OPTS="$HBASE_OPTS -XX:+UseConcMarkSweepGC"
I have corrected my JAVA_HOME environment path to make sure it points to the JDK instead of the JRE.
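For reference, the corrected setting just drops the /jre suffix so that $JAVA_HOME/bin contains the JDK tools, including jps (a sketch, assuming the JDK from the paths shown earlier):
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.222.b10-1.fc31.x86_64
export PATH=$PATH:$HOME/bin:$JAVA_HOME/bin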
I have a Scala app running in a Docker container. I use the image 'develar/java', which is based on Alpine Linux. My app is working, but I don't see Cyrillic logs. Here is what I have:
docker logs -f myApp
22:22:08.152 [main] INFO application - Creating Pool for datasource 'default'
22:22:09.213 [main] INFO play.api.db.DefaultDBApi - Database [default] connected at jdbc:postgresql://localhost/db
22:22:09.627 [main] INFO p.a.l.concurrent.ActorSystemProvider - Starting application default Akka system: application
22:22:09.698 [main] INFO application - ????????????? ??????? ???????
22:22:09.722 [main] INFO application - ????????????? ??????? 'direct
22:22:09.734 [main] INFO application - ????????????? ??????? 'adwords
22:22:09.761 [main] INFO play.api.Play$ - Application started (Prod)
22:22:09.866 [main] INFO play.core.server.NettyServer$ - Listening for HTTP on /0:0:0:0:0:0:0:0:9000
But the logs delivered to the Elasticsearch server are fine. How can I force Alpine Linux to work with UTF-8?
develar/java has an old bug from an old glibc 2.21 package. Andy Shinn (the creator and maintainer of the glibc package for Alpine) and I resolved this a long time ago in the glibc 2.23 packaging, which I have integrated into frolvlad/alpine-glibc, which is the base image for frolvlad/alpine-oraclejre8. Just replace develar/java with frolvlad/alpine-oraclejre8:slim and you should be fine.
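For example, a minimal Dockerfile swap (a sketch; the jar name and paths are placeholders for your actual build artifact):
FROM frolvlad/alpine-oraclejre8:slim
# hypothetical artifact name -- replace with your app's fat jar
COPY target/myapp.jar /app/myapp.jar
CMD ["java", "-Dfile.encoding=UTF-8", "-jar", "/app/myapp.jar"]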
I need help setting up my Tomcat server on my Raspberry Pi. The major problem is: I can run Tomcat from its directory with startup.sh, and it then says that Tomcat is running... which is not true. If I test it with configtest.sh, I get a few exceptions thrown at me.
Furthermore, when I look at catalina.out, I get something like this:
23-Nov-2015 17:11:52.997 INFO [localhost-startStop-1] org.apache.catalina.start$
23-Nov-2015 17:11:52.999 INFO [localhost-startStop-1] org.apache.catalina.start$
23-Nov-2015 17:11:53.201 INFO [localhost-startStop-1] org.apache.catalina.start$
23-Nov-2015 17:11:53.204 INFO [localhost-startStop-1] org.apache.catalina.start$
23-Nov-2015 17:11:55.521 INFO [localhost-startStop-1] org.apache.catalina.start$
23-Nov-2015 17:11:55.523 INFO [localhost-startStop-1] org.apache.catalina.start$
23-Nov-2015 17:11:55.642 INFO [localhost-startStop-1] org.apache.catalina.start$
23-Nov-2015 17:11:55.680 INFO [main] org.apache.coyote.AbstractProtocol.start S$
23-Nov-2015 17:11:55.735 INFO [main] org.apache.coyote.AbstractProtocol.start S$
23-Nov-2015 17:11:55.740 INFO [main] org.apache.catalina.startup.Catalina.start$
23-Nov-2015 17:11:55.751 SEVERE [main] org.apache.catalina.core.StandardServer.$
java.net.BindException: Cannot assign requested address
at java.net.PlainSocketImpl.socketBind(Native Method)
at java.net.AbstractPlainSocketImpl.bind(AbstractPlainSocketImpl.java:3$
at java.net.ServerSocket.bind(ServerSocket.java:375)
at java.net.ServerSocket.<init>(ServerSocket.java:237)
at org.apache.catalina.core.StandardServer.await(StandardServer.java:42$
at org.apache.catalina.startup.Catalina.await(Catalina.java:713)
at org.apache.catalina.startup.Catalina.start(Catalina.java:659)
I'm not able to understand the error. There is nothing else running; I rebooted several times, checked the processes, and checked the ports, but nothing is in use as far as I can see.
I tried Tomcat via apt-get as a service and as a simple package from Tomcat's website, both without any success.
I then deleted everything and installed Apache just for testing. Apache runs without any errors and is accessible via the browser after installation.
I have two questions:
1: What is the problem with Tomcat? Any suggestions why Tomcat may not be able to get an address?
2: The main reason I want Tomcat is to run a RESTful web application on the Pi. Is there maybe another web server that can handle Java servlets or WAR files?
Setup: Raspberry Pi 2
Java 1.8
Tomcat 7 or 8 (tried them both)
Thanks in advance, and a nice Monday to you.
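On question 1: the stack trace shows the bind failing in StandardServer.await(), which is Tomcat's shutdown-port listener (default port 8005, bound to localhost), not the HTTP connector. "Cannot assign requested address" there often means localhost does not resolve to a local interface. A diagnostic sketch, not a definitive fix ($CATALINA_HOME stands for your Tomcat install directory):
cat /etc/hosts                                        # expect a line like: 127.0.0.1   localhost
grep '<Server port' $CATALINA_HOME/conf/server.xml    # confirm the shutdown port/address Tomcat tries to bind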