Situation: During the build process of an Azure pipeline I use the JavaToolInstaller task on a self-hosted agent, and it places "java/JAVA_HOME_8_x64_" in the directory.
Background: I don't believe this is a permission issue, since I created that directory for the build process and applied full permissions to it.
Assessment: Has anyone seen this issue before?
Steps to reproduce (self-hosted, on-site agent running the JavaToolInstaller task):
- task: JavaToolInstaller@0
  inputs:
    versionSpec: '8'
    jdkArchitectureOption: 'x64'
    jdkSourceOption: 'LocalDirectory'
    jdkFile: '/opt/jdk-8u251-linux-x64.tar.gz'
    jdkDestinationDirectory: '/opt/java'
    cleanDestinationDirectory: true
  condition: eq( variables['Agent.OS'], 'Linux' )
Error during build
Cleaning destination folder before extraction: /opt/java
Retrieving the JDK from local path.
##[warning]Can't find loc string for key: ExtractingArchiveToPath
ExtractingArchiveToPath /opt/java/JAVA_HOME_8_x64_jdk-8u251-linux-x64_tar.gz
Creating destination folder: /opt/java/JAVA_HOME_8_x64_jdk-8u251-linux-x64_tar.gz
##[error]Unable to create directory '/opt/java/JAVA_HOME_8_x64_jdk-8u251-linux-x64_tar.gz'. EACCES: permission denied, mkdir '/opt/java/JAVA_HOME_8_x64_jdk-8u251-linux-x64_tar.gz'
##[error]Unable to create directory '/opt/java/JAVA_HOME_8_x64_jdk-8u251-linux-x64_tar.gz'. EACCES: permission denied, mkdir '/opt/java/JAVA_HOME_8_x64_jdk-8u251-linux-x64_tar.gz'
Finishing: JavaToolInstaller
According to the error message, it seems that you do not have permission to write to this folder. Please check it and ensure that you have read and write permission for it.
Steps:
Locate the file jdk-8u251-linux-x64.tar.gz, right-click the icon, select Properties, open the Permissions tab, and check the account's permissions.
Or use the command ls -l {file name} to check the permissions, then run chmod [permission] [file_name] to update them.
Please refer to this link for more details: How to change directory permissions in Linux
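As a rough sketch of the same check and fix from a shell on the agent host, assuming the paths from the question and that the self-hosted agent runs as a service account called azagent here (replace it with the account your agent actually runs as):
# Check ownership and permissions of the destination folder and the JDK archive
ls -ld /opt/java
ls -l /opt/jdk-8u251-linux-x64.tar.gz
# Give the agent's service account ownership of and write access to the destination folder
# ('azagent' is a placeholder for the user your agent service runs as)
sudo chown -R azagent:azagent /opt/java
sudo chmod -R u+rwX /opt/java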
Related
When I run my Spring application in Tomcat via a .sh file in an init container in Kubernetes, with runAsUser: 1337 set in the security context of the init container in deployment.yaml, it gives
cp: cannot create regular file '/usr/java/openjdk-11/conf/security/java.security.bak': permission denied
and
sed: couldn't open temporary file '': permission denied.
I used chmod to change the permissions, but now I am facing this issue:
chmod: changing permissions of '/opt/jdk/conf/security/java.security': Operation not permitted
I am also facing:
/startup.sh: line 3: exec: catalina.sh: not found
My .sh file (after adding chmod):
chmod -R 766 ${JAVA_HOME}/conf/security
/add-jce-provider.sh ${JAVA_HOME}/conf/security/java.security;
exec catalina.sh run;
If you're not able to write to the directory, then it is possible that:
- the directory has the immutable flag enabled (check with lsattr), or
- the directory is mounted read-only (run cat /proc/mounts, or mount, or cat /etc/mtab, and check in the output whether the directory's filesystem is mounted read-only).
If you are in the first case, change the directory attributes with chattr:
chattr -i <file/dir>   removes the immutable flag from a file or directory
chattr +i <file/dir>   adds the immutable flag again
If you're in the latter case, edit the file /etc/fstab. Both checks are sketched below.
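A minimal check sketch, assuming the JDK path from the question above (adjust it to the directory you actually cannot write to):
# Look for the 'i' (immutable) flag on the directory
lsattr -d /opt/jdk/conf/security
# Inspect the mount options of the filesystem containing it ('ro' means read-only)
cat /proc/mounts | grep /opt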
Some of selected resources were not committed.
Some of selected resources were not committed.
svn: E204900: Commit failed (details follow):
svn: E204900: Commit failed (details follow):
svn: E204900: Can't make directory '/opt/bitnami/subversion/repos/x/dav/activities.d': Permission denied
svn: E175002: MKACTIVITY of '/svn/!svn/act/36d4274a-7c01-0010-82e2-67d061997a37': 500 Internal Server Error (https://xxxsystems.com)
I am getting this error when trying to commit from Eclipse. There is no issue committing from TortoiseSVN.
Is there anything else that can be done from the client side, or any setting that skips creating or writing to that folder?
There might be multiple causes for your problem. From the error it looks like there is an SVN server on a Linux machine. The user that you use to connect to the SVN server should be listed somewhere in an authz file on that machine. Since you can commit from TortoiseSVN, you already have all the write rules required in the authz file; otherwise you would get a forbidden error.
Permission denied suggests that the user that reads this file and serves the web requests (perhaps the user that runs Apache httpd) does not have sufficient privileges to create directories. If you have access to the server, you could try to chmod 777 -R the root directory of the authz file or the /svn directory and restart the server. If you are on Ubuntu, you need to give ownership and write permission to the web server by running sudo chown -R root:www-data /svn and sudo chmod -R 775 /svn.
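Putting those server-side steps together as a sketch (run on the SVN host; it assumes the repository lives under /svn, Apache runs as www-data, and the service name is apache2, so adjust to your setup):
sudo chown -R root:www-data /svn   # give the web server group ownership of the repo tree
sudo chmod -R 775 /svn             # allow the group to create directories such as activities.d
sudo systemctl restart apache2     # restart so the change takes effect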
One other possible cause could be SELinux, which sometimes interferes with directory access if the server is on CentOS; you can try to deactivate it.
If this also doesn't work, you can try a workaround on the client side. Since you can commit from TortoiseSVN, you can either look for a TortoiseSVN integration plugin for your version of Eclipse (the marketplace has a ContextQuickie plugin that might do the trick) or check in the commit window whether Eclipse is trying to commit weird/unwanted files.
I am trying to execute a Spark application built in Scala IDE on my standalone Spark service running on the Cloudera QuickStart VM 5.3.0.
My cloudera account's JAVA_HOME is /usr/java/default.
However, I am facing the error below while executing the start-all.sh command as the cloudera user:
[cloudera@localhost sbin]$ pwd
/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/spark/sbin
[cloudera@localhost sbin]$ ./start-all.sh
chown: changing ownership of `/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/spark/sbin/../logs': Operation not permitted
starting org.apache.spark.deploy.master.Master, logging to /opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/spark/sbin/../logs/spark-cloudera-org.apache.spark.deploy.master.Master-1-localhost.localdomain.out
/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/spark/sbin/spark-daemon.sh: line 151: /opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/spark/sbin/../logs/spark-cloudera-org.apache.spark.deploy.master.Master-1-localhost.localdomain.out: Permission denied
failed to launch org.apache.spark.deploy.master.Master:
tail: cannot open `/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/spark/sbin/../logs/spark-cloudera-org.apache.spark.deploy.master.Master-1-localhost.localdomain.out' for reading: No such file or directory
full log in /opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/spark/sbin/../logs/spark-cloudera-org.apache.spark.deploy.master.Master-1-localhost.localdomain.out
cloudera@localhost's password:
localhost: chown: changing ownership of `/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/spark/logs': Operation not permitted
localhost: starting org.apache.spark.deploy.worker.Worker, logging to /opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/spark/logs/spark-cloudera-org.apache.spark.deploy.worker.Worker-1-localhost.localdomain.out
localhost: /opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/spark/sbin/spark-daemon.sh: line 151: /opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/spark/logs/spark-cloudera-org.apache.spark.deploy.worker.Worker-1-localhost.localdomain.out: Permission denied
localhost: failed to launch org.apache.spark.deploy.worker.Worker:
localhost: tail: cannot open `/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/spark/logs/spark-cloudera-org.apache.spark.deploy.worker.Worker-1-localhost.localdomain.out' for reading: No such file or directory
localhost: full log in /opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/spark/logs/spark-cloudera-org.apache.spark.deploy.worker.Worker-1-localhost.localdomain.out
I had added export CMF_AGENT_JAVA_HOME=/usr/java/default in /etc/default/cloudera-scm-agent and ran sudo service cloudera-scm-agent restart (see How to set CMF_AGENT_JAVA_HOME).
I had also added export JAVA_HOME=/usr/java/default in the locate_java_home function definition in /usr/share/cmf/bin/cmf-server and restarted the cluster and the standalone Spark service.
But the error below keeps repeating when starting the Spark service as the root user:
[root@localhost spark]# sbin/start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/spark/sbin/../logs/spark-root-org.apache.spark.deploy.master.Master-1-localhost.localdomain.out
failed to launch org.apache.spark.deploy.master.Master:
JAVA_HOME is not set
full log in /opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/spark/sbin/../logs/spark-root-org.apache.spark.deploy.master.Master-1-localhost.localdomain.out
root@localhost's password:
localhost: Connection closed by UNKNOWN
Can anybody suggest how to set JAVA_HOME so that the Spark standalone service starts on Cloudera Manager?
The solution turned out to be quite easy and straightforward: just add export JAVA_HOME=/usr/java/default in /root/.bashrc, and the Spark services started successfully from the root user without the "JAVA_HOME is not set" error. Hope this helps somebody facing the same problem.
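In command form, a small sketch of that fix (the path is the one from the answer; run it as root from the Spark directory):
echo 'export JAVA_HOME=/usr/java/default' >> /root/.bashrc   # persist for future root shells
source /root/.bashrc                                         # load it into the current shell
sbin/start-all.sh                                            # start the standalone master and workers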
Set the JAVA_HOME variable in ~/.bashrc as follows:
sudo gedit ~/.bashrc
Add this line to the file (the path of your installed JDK):
export JAVA_HOME="/usr/lib/jvm/java-11-openjdk-amd64"
Then run:
source ~/.bashrc
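As a quick check that the variable is picked up (assuming the JDK path from the lines above):
echo "$JAVA_HOME"                  # should print /usr/lib/jvm/java-11-openjdk-amd64
"$JAVA_HOME/bin/java" -version     # should print the installed JDK version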
I am trying to use Hive 1.2.0 over Hadoop 2.6.0. I have created an employee table. However, when I run the following query:
hive> load data local inpath '/home/abc/employeedetails' into table employee;
I get the following error:
Failed with exception Unable to move source file:/home/abc/employeedetails to destination hdfs://localhost:9000/user/hive/warehouse/employee/employeedetails_copy_1
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask
What am I doing wrong here? Are there any specific permissions that I need to set? Thanks in advance!
As mentioned by Rio, the issue involved a lack of permissions to load data into the Hive table. I figured out that the following command solves my problem:
hadoop fs -chmod g+w /user/hive/warehouse
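To confirm the group write bit is now set on the warehouse directory (a quick check, not part of the original answer):
hdfs dfs -ls /user/hive    # the 'warehouse' entry should show group write, e.g. drwxrwxr-x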
See the permission for the HDFS directory:
hdfs dfs -ls /user/hive/warehouse/employee/employeedetails_copy_1
It seems you may not have permission to load data into the Hive table.
The error might be due to a permission issue on the local filesystem.
Change the permissions on the local filesystem:
sudo chmod -R 777 /home/abc/employeedetails
Now, run:
hive> load data local inpath '/home/abc/employeedetails' into table employee;
If we face the same error after running the above command in distributed mode, we can try the command below as a superuser on all nodes:
sudo usermod -a -G hdfs yarn
Note: we get this error after restarting all the YARN services (in Ambari). My problem was resolved. This is an admin command, so take care when you run it.
I met the same problem and searched for two days. Finally I found that the reason was that the DataNode started for a moment and then shut down.
Steps to solve it:
hadoop fs -chmod -R 777 /home/abc/employeedetails
hadoop fs -chmod -R 777 /user/hive/warehouse/employee/employeedetails_copy_1
vi hdfs-site.xml and add the following property:
<property>
  <name>dfs.permissions.enabled</name>
  <value>false</value>
</property>
hdfs --daemon start datanode
vi hdfs-site.xml again and find the locations configured for 'dfs.datanode.data.dir' and 'dfs.namenode.name.dir'. If they point to the same location, you must change one of them; this is why my DataNode could not start.
Go to 'dfs.datanode.data.dir'/data/current, edit the VERSION file, and copy the clusterID so that it matches the clusterID in the VERSION file under 'dfs.namenode.name.dir'/data/current.
start-all.sh
If the above does not solve it, follow the steps below carefully, because they are destructive to your data; in my case, though, they are what finally solved the problem.
stop-all.sh
Delete the data folders under 'dfs.datanode.data.dir' and 'dfs.namenode.name.dir', and the tmp folder.
hdfs namenode -format
start-all.sh
That solved the problem.
You may then meet another problem like this:
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /opt/hive/tmp/root/1be8676a-56ac-47aa-ab1c-aa63b21ce1fc. Name node is in safe mode
Fix: hdfs dfsadmin -safemode leave
It might be because your hive user does not have access to the HDFS directories.
I'm using the Tomcat security manager for my application. My running Tomcat is placed in
path: usr/local/tomcat-7/webapps/myapplication
When I run my application, all actions are logged to a log file that is placed in another path:
path: usr/local/tomcat-6/logs/mylogs.log (this is not a running server, just a folder named tomcat-6)
When I run my application with the security manager, it throws this exception:
java.security.AccessControlException:access denied("java.io.FilePermission" "usr/local/tomcat6/logs/mylogs.log" "write" ).
In my catalina.policy file I have added this rule to grant permission to this file, but it doesn't work:
grant codeBase "file:${catalina.home}/../tomcat-6/logs/-" {
permission java.security.AllPermission;
};
How can I resolve this problem?
1: Log in as the root user
2: Go to the logs directory
3: chmod 644 mylogs.log
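As shell commands, a sketch of those steps (assuming the log path from the question):
sudo -i                        # step 1: become root
cd /usr/local/tomcat-6/logs    # step 2: go to the logs directory
chmod 644 mylogs.log           # step 3: make the log file readable and owner-writable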
Change your Java permissions.