I have a CentOS box hosting a Drupal 7 site. I've been attempting to run a Java application called Tika on it to index files for Apache Solr search.
I keep running into an issue only when SELinux is enabled:
extract using tika: OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00007f1ed9000000, 2555904, 1) failed; error='Permission denied' (errno=13)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 2555904 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /tmp/jvm-2356/hs_error.log
This does not happen if I disable SELinux. If I run the command from an SSH session it works fine, but not from the browser. This is the command being run:
java '-Dfile.encoding=UTF8' -cp '/var/www/drupal/sites/all/modules/contrib/apachesolr_attachments/tika' -jar '/var/www/drupal/sites/all/modules/contrib/apachesolr_attachments/tika/tika-app-1.11.jar' -t '/var/www/drupal/sites/all/modules/contrib/apachesolr_attachments/tests/test-tika.pdf'
Here is the log from SELinux at /var/log/audit/audit.log:
type=AVC msg=audit(1454636072.494:3351): avc: denied { execmem } for pid=11285 comm="java" scontext=unconfined_u:system_r:httpd_t:s0 tcontext=unconfined_u:system_r:httpd_t:s0 tclass=process
type=SYSCALL msg=audit(1454636072.494:3351): arch=c000003e syscall=9 success=no exit=-13 a0=7fdfe5000000 a1=270000 a2=7 a3=32 items=0 ppid=2377 pid=11285 auid=506 uid=48 gid=48 euid=48 suid=48 fsuid=48 egid=48 sgid=48 fsgid=48 tty=(none) ses=1 comm="java" exe="/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.95.x86_64/jre/bin/java" subj=unconfined_u:system_r:httpd_t:s0 key=(null)
Is there a way I can run this with SELinux enabled? I don't know the policy name for Tika (or should I be looking at Java?), so I'm unsure where to go from here...
This worked for me...
I have tika at /var/apache-tika/tika-app-1.14.jar
setsebool -P httpd_execmem 1
chcon -t httpd_exec_t /var/apache-tika/tika-app-1.14.jar
Using the sealert tools (https://wiki.centos.org/HowTos/SELinux) helped track down the correct SELinux type.
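If you'd rather not grant execmem to everything running as httpd_t, an alternative sketch (assuming your denials look like the AVC message in the question) is to generate a custom policy module from the audit log with audit2allow:
# the module name tika_java is arbitrary
grep java /var/log/audit/audit.log | audit2allow -M tika_java
semodule -i tika_java.pp
This scopes the exception to exactly the denials you collected, at the cost of having to regenerate the module if new denials show up.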
All of your context messages reference httpd_t, so I would run
/usr/sbin/getsebool -a | grep httpd
and experiment with enabling booleans that show as off. It's been a while since I ran a database-backed website (Drupal, WordPress, etc.) on CentOS, but as I recall, these two had to be enabled:
httpd_can_network_connect
httpd_can_network_connect_db
To enable a boolean persistently, run
setsebool -P httpd_can_network_connect on
etc.
The booleans you're looking for are:
httpd_execmem
httpd_read_user_content
How to find them:
audit2why -i /var/log/audit/audit.log will tell you this.
audit2why is part of the policycoreutils-python-utils package.
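Putting it together, a minimal sketch (assuming the audit log shown in the question):
# see which booleans the denials map to
grep AVC /var/log/audit/audit.log | audit2why
# then enable them persistently
setsebool -P httpd_execmem 1
setsebool -P httpd_read_user_content 1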
I am trying to use a tool that analyzes code smells for Android in two steps.
In the first step, the tool parses an APK and generates .db files in a directory; in the second step, those files should be converted to CSV files. However, whenever I try to run the second step, the console returns the following error:
java.io.IOException: Unable to create directory path [/User/Desktop/db2/logs] for Neo4j store.
I think it is a Neo4j configuration problem.
I am currently running the tool with the following Java configuration:
echo $JAVA_HOME
/home/User/openlogic-openjdk-11.0.15
update-alternatives --config java
* 0 /usr/lib/jvm/java-11-openjdk-amd64/bin/java 1111 auto mode
To be safe, I also checked that Neo4j was running, which returned the following output:
sudo systemctl status neo4j.service
neo4j.service - Neo4j Graph Database
Loaded: loaded (/lib/systemd/system/neo4j.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2022-07-06 20:11:04 CEST; 16min ago
Main PID: 1040 (java)
Tasks: 57 (limit: 18901)
Memory: 705.4M
CPU: 16.639s
CGroup: /system.slice/neo4j.service
└─1040 /usr/bin/java -cp "/var/lib/neo4j/plugins:/etc/neo4j:/usr/share/neo4j/lib/*:/var/lib/neo4j/plugins/*" -XX:+UseG1GC -XX:-OmitStackTraceInFastThrow -XX:+AlwaysPreTouch -XX:+UnlockExper>.
How can I solve this?
You posted this error:
java.io.IOException: Unable to create directory path [/User/Desktop/db2/logs] for Neo4j store.
From that error, it looks like:
Neo4j was installed at "/User/Desktop/db2"
That directory does not grant "write" permission to the user running Neo4j
I tried to reproduce this locally using Neo4j Community 4.4.5, following the steps below.
I do see an IOException related to "logs", but it's slightly different from what you posted. Perhaps we're on different versions of Neo4j.
Open terminal into install directory: cd neo4j
Verify "neo4j" is stopped: ./bin/neo4j stop
Rename existing "logs" directory: mv logs logs.save
Remove write permission for the Neo4j install: chmod u-w .
Start neo4j in console mode: ./bin/neo4j console
Observe errors in console output
2022-07-08 03:28:38.081+0000 INFO Starting...
ERROR StatusLogger Unable to create file [****************************]/neo4j/logs/debug.log
java.io.IOException: Could not create directory [****************************]/neo4j/logs
...
To fix things, try:
Get a terminal into your Neo4j directory:
cd /User/Desktop/db2
Set write permissions for the entire directory tree:
chmod u+w -R .
Start neo4j in console mode:
./bin/neo4j console
If this works and you're able to run Neo4j fine, it points to an issue with user permissions when running Neo4j as a system service.
The best steps from there depend on the system, your access, how comfortable you are making changes, and probably other things. An easy, brute-force hammer would be to manually create each directory you discover (such as "/User/Desktop/db2/logs") and grant write permission to all users (chmod ugo+w .), then re-run the service and see what errors pop up. Repeat until you're able to run the service without errors.
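For the service case specifically, a likely fix is to give the service account ownership of the store directory. This is a sketch under the assumption that your unit file runs Neo4j as the neo4j user (check with systemctl cat neo4j.service):
# path taken from the error message; adjust user/group to match your unit file
sudo chown -R neo4j:neo4j /User/Desktop/db2
sudo systemctl restart neo4j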
I'm trying to run two Spring Boot applications within the same Pod (essentially one is a reverse proxy for the other, a small implementation of the sidecar pattern) and I've found that one of the containers can't start. In fact, it crashes with the following error:
Starting the Java application using /opt/jboss/container/java/run/run-java.sh ...
INFO exec java -javaagent:/usr/share/java/jolokia-jvm-agent/jolokia-jvm.jar=config=/opt/jboss/container/jolokia/etc/jolokia.properties -javaagent:/usr/share/java/prometheus-jmx-exporter/jmx_prometheus_javaagent.jar=9779:/opt/jboss/container/prometheus/etc/jmx-exporter-config.yaml -XX:MinHeapFreeRatio=10 -XX:MaxHeapFreeRatio=20 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90 -XX:+ExitOnOutOfMemoryError -cp "." -jar /deployments/app-backend-1.0.0-SNAPSHOT.jar
Could not start Jolokia agent: java.lang.IllegalStateException: Cannot open keystore for https communication: java.net.BindException: Address already in use
Exception in thread "main" java.lang.reflect.InvocationTargetException
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:568)
at java.instrument/sun.instrument.InstrumentationImpl.loadClassAndStartAgent(InstrumentationImpl.java:491)
at java.instrument/sun.instrument.InstrumentationImpl.loadClassAndCallPremain(InstrumentationImpl.java:503)
Caused by: java.net.BindException: Address already in use
at java.base/sun.nio.ch.Net.bind0(Native Method)
at java.base/sun.nio.ch.Net.bind(Net.java:555)
at java.base/sun.nio.ch.ServerSocketChannelImpl.netBind(ServerSocketChannelImpl.java:337)
at java.base/sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:294)
at java.base/sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:89)
at jdk.httpserver/sun.net.httpserver.ServerImpl.bind(ServerImpl.java:134)
at jdk.httpserver/sun.net.httpserver.HttpServerImpl.bind(HttpServerImpl.java:54)
at io.prometheus.jmx.shaded.io.prometheus.client.exporter.HTTPServer.<init>(HTTPServer.java:145)
at io.prometheus.jmx.JavaAgent.premain(JavaAgent.java:31)
... 6 more
FATAL ERROR in native method: processing of -javaagent failed, processJavaStart failed
*** java.lang.instrument ASSERTION FAILED ***: "result" with message agent load/premain call failed at src/java.instrument/share/native/libinstrument/JPLISAgent.c line: 422
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00007f3cf9ef7e91, pid=1, tid=154
#
# JRE version: OpenJDK Runtime Environment 21.9 (17.0.1+12) (build 17.0.1+12-LTS)
# Java VM: OpenJDK 64-Bit Server VM 21.9 (17.0.1+12-LTS, mixed mode, sharing, tiered, compressed oops, compressed class ptrs, serial gc, linux-amd64)
# Problematic frame:
# C [libc.so.6+0x21e91] abort+0x203
#
# No core dump will be written. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /deployments/hs_err_pid1.log
#
# If you would like to submit a bug report, please visit:
# https://bugzilla.redhat.com/enter_bug.cgi?product=Red%20Hat%20Enterprise%20Linux%208&component=java-17-openjdk
#
I have the impression the issue is caused by the way the plugin I'm using builds the final image, but I'm not sure.
I suspect that both containers use the same ports for the Jolokia and Prometheus JMX agents (since the images are built the same way), but I haven't found a way to disable them.
Does somebody have any ideas?
When running two containers in one Pod, the containers share the Pod's network namespace, so localhost is the same for both. As a result, if both containers try to bind the same port, they will interfere with each other.
Try configuring one of the containers to bind a different port; this should resolve the error above. :)
Edit: I checked the link you provided to the GitHub issue you opened for the plugin. The JKube and Spring Boot docs both list several different ways to provide build-time and runtime configuration. I don't think this is a bug but a configuration issue that can be solved with the options the framework and plugin provide.
I propose you dig further into the docs to check how to configure the ports for the processes started in both containers, via Spring application properties, environment variables, or similar. Afterwards, you can check how to provide these values during build/runtime using the JKube-generated manifests.
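For instance, if these are the fabric8/JKube Java base images (the run-java.sh output suggests they are), the agents can reportedly be moved or disabled through environment variables. A sketch, assuming the AB_JOLOKIA_* and AB_PROMETHEUS_* variables those images document, and a hypothetical deployment/container name:
# move the sidecar container's agents to free ports
kubectl set env deployment/app -c sidecar AB_JOLOKIA_PORT=8779 AB_PROMETHEUS_PORT=9780
# or switch them off entirely in that container
kubectl set env deployment/app -c sidecar AB_JOLOKIA_OFF=true AB_PROMETHEUS_OFF=true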
docs I've found:
https://www.eclipse.org/jkube/docs/kubernetes-maven-plugin#_configuration
https://docs.spring.io/spring-boot/docs/2.1.1.RELEASE/reference/html/production-ready-jmx.html
https://blog.kubernauts.io/https-blog-kubernauts-io-monitoring-java-spring-boot-applications-with-prometheus-part-2-b883063486b5
In the end it was effectively a bug in JKube. See https://github.com/eclipse/jkube/issues/1294#event-6132755248
So I'm new to Solr and am mostly following the tutorials, using Solr 8.3.1 (the most recent release as of posting).
I've got Java version 1.8.0_181 installed on my Windows 10 machine and I've added solr/bin to the PATH variable.
When I run solr start I get:
/d/Program Files and Documents/solr-8.3.1/bin/solr: line 1525: ulimit: -m: invalid option
ulimit: usage: ulimit [-SHabcdefiklmnpqrstuvxPT] [limit]
*** [WARN] *** Your open file limit is currently 256.
It should be set to 65000 to avoid operational disruption.
If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS to false in your profile or solr.in.sh
*** [WARN] *** Your Max Processes Limit is currently 256.
It should be set to 65000 to avoid operational disruption.
If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS to false in your profile or solr.in.sh
/d/Program Files and Documents/solr-8.3.1/bin/solr: line 1542: [: !=: unary operator expected
NOTE: Please install lsof as this script needs it to determine if Solr is listening on port 8983.
Started Solr server on port 8983 (pid=). Happy searching!
Which says to me that it started running, but when I run solr status I get:
Found 1 Solr nodes:
Solr process 56136 from /d/Program Files and Documents/solr-8.3.1/bin/solr-8983.pid not found.
Which doesn't seem right. I continued with the tutorial anyway and ran solr start -e cloud and got:
$ solr start -e cloud
/d/Program Files and Documents/solr-8.3.1/bin/solr: line 1525: ulimit: -m: invalid option
ulimit: usage: ulimit [-SHabcdefiklmnpqrstuvxPT] [limit]
*** [WARN] *** Your open file limit is currently 256.
It should be set to 65000 to avoid operational disruption.
If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS to false in your profile or solr.in.sh
*** [WARN] *** Your Max Processes Limit is currently 256.
It should be set to 65000 to avoid operational disruption.
If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS to false in your profile or solr.in.sh
/d/Program Files and Documents/solr-8.3.1/bin/solr: line 1542: [: !=: unary operator expected
Error: Could not find or load main class org.apache.solr.util.SolrCLI
I tried solr start -e techproducts with the same result.
I've seen answers for this with earlier versions of Solr, and they usually involved updating the Solr version... but I'm already on the most recent release. I'm sure I'm missing something dumb, but any pointers in the right direction would be much appreciated! Thanks so much!
EDIT 1: Solved! I was using Git Bash, which is Cygwin-based. It works like a charm with Windows Command Prompt and with Windows PowerShell.
Output for solr start:
Java HotSpot(TM) 64-Bit Server VM warning: JVM cannot use large page memory because it does not have enough privilege to lock pages in memory.
Waiting up to 30 to see Solr running on port 8983
Started Solr server on port 8983. Happy searching!
Output for solr status:
Found Solr process 50764 running on port 8983
{
"solr_home":"D:\\Program Files and Documents\\solr-8.3.1\\server\\solr",
"version":"8.3.1 a3d456fba2cd1b9892defbcf46a0eb4d4bb4d01f - ishan - 2019-11-29 11:51:37",
"startTime":"2019-12-19T22:20:25.127Z",
"uptime":"0 days, 0 hours, 0 minutes, 14 seconds",
"memory":"176.3 MB (%34.4) of 512 MB"}
Output for solr start -e cloud:
Welcome to the SolrCloud example!
This interactive session will help you launch a SolrCloud cluster on your local workstation.
To begin, how many Solr nodes would you like to run in your local cluster? (specify 1-4 nodes) [2]:
Seems like you're running the *nix version of bin/solr under Cygwin, judging by the /d/... path. Under Windows you should use the bin\solr.cmd script from the standard command-line console (cmd.exe), as in the sketch below.
It seems the solr.in script contains options not recognized by the version of bash inside Cygwin. I'm guessing that could also affect the setup of the classpath, which would explain why the classes aren't found where they're expected to be.
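For example, from cmd.exe (a sketch; adjust the install path to match yours):
cd /d "D:\Program Files and Documents\solr-8.3.1"
bin\solr.cmd start
bin\solr.cmd status
bin\solr.cmd start -e cloud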
Hello, benevolent community of Stack Overflow,
I have a web service stack that runs on Red Hat, Nginx, JRuby with Sinatra, and Passenger Enterprise. My ultimate goal is to enable JMX metrics that can be pushed into my AppDynamics controller (hosted as SaaS).
The AppDynamics installation is relatively easy to configure, but the metrics are not coming through. I reckoned JMX is not enabled, so I am trying to find the script that initializes the JVM when Passenger Enterprise starts Java. I have been unsuccessful in tracking down exactly where to add the extra parameters to enable JMX.
Here are all my current java applications
root 19260 1 0 Mar20 ? 00:05:12 /usr/lib/jvm/jre/bin/java -Xmx500m -Xss2048k -Djffi.boot.library.path=/opt/jruby-1.7.12/lib/jni -Xbootclasspath/a:/opt/jruby-1.7.12/lib/jruby.jar -classpath : -Djruby.home=/opt/jruby-1.7.12 -Djruby.lib=/opt/jruby-1.7.12/lib -Djruby.script=jruby -Djruby.shell=/bin/sh org.jruby.Main mojo_copytruncate.rb
nobody 20996 20861 4 17:02 ? 00:02:42 java -Xmx500m -Xss2048k -Djffi.boot.library.path=/opt/jruby-1.7.12/lib/jni -Xbootclasspath/a:/opt/jruby-1.7.12/lib/jruby.jar -classpath : -Djruby.home=/opt/jruby-1.7.12 -Djruby.lib=/opt/jruby-1.7.12/lib -Djruby.script=jruby -Djruby.shell=/bin/sh org.jruby.Main /opt/passenger/passenger-enterprise-server-5.0.4/helper-scripts/rack-loader.rb
What I am interested in is getting the JMX metrics from the rack-loader.rb process.
Here is what the process tree looks like when I trace back PPID 20861:
root 20861 20858 0 17:02 ? 00:00:09 PassengerAgent server
nobody 20996 20861 4 17:02 ? 00:02:42 java -Xmx500m -Xss2048k -Djffi.boot.library.path=/opt/jruby-1.7.12/lib/jni -Xbootclasspath/a:/opt/jruby-1.7.12/lib/jruby.jar -classpath : -Djruby.home=/opt/jruby-1.7.12 -Djruby.lib=/opt/jruby-1.7.12/lib -Djruby.script=jruby -Djruby.shell=/bin/sh org.jruby.Main /opt/passenger/passenger-enterprise-server-5.0.4/helper-scripts/rack-loader.rb
I have no idea where to look for the config in PassengerAgent server...
Found my own solution.
Since PassengerAgent starts Java via a Ruby script called rack-loader.rb, I can attach the Java agent whenever the JRuby VM is started. This solution fits my scenario since our stack is entirely Ruby-based and nothing else on the box runs JRuby.
To pass the Java agent, I created a script called appdynamics.sh in my /etc/profile.d/ folder.
Inside I wrote
export AGENT_HOME=YOUR_AGENT_FILE_PATH.jar
export JRUBY_OPTS=-J-javaagent:$AGENT_HOME
Restart your terminal and it should pick these up as environment variables. Metrics came in just fine.
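If you also want a plain JMX remote port open (for tools other than the AppDynamics agent), the same profile script can carry the standard JMX flags. A sketch with a hypothetical port 9010; disabling authentication and SSL is only reasonable for local testing:
export AGENT_HOME=YOUR_AGENT_FILE_PATH.jar
# -J passes the remaining option through JRuby to the underlying JVM
export JRUBY_OPTS="-J-javaagent:$AGENT_HOME \
  -J-Dcom.sun.management.jmxremote \
  -J-Dcom.sun.management.jmxremote.port=9010 \
  -J-Dcom.sun.management.jmxremote.authenticate=false \
  -J-Dcom.sun.management.jmxremote.ssl=false"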
I'm working on some code to access HBase and I am writing unit tests that create a MiniDFSCluster as part of the test setup.
(defn test-config [& options]
(let [testing-utility (HBaseTestingUtility.)]
(.startMiniCluster testing-utility 1)
(let [config (.getConfiguration testing-utility)]
(if (not= options nil)
(doseq [[key value] options]
(.set config key value)))
config)))
;; For those who don't read Clojure, lines 2 and 3 cause
;; the failure and are equivalent to the following Java
;;
;; HBaseTestingUtility testingUtility = new HBaseTestingUtility();
;; testingUtility.startMiniCluster(1); // blows up on Linux but not Mac OSX
This runs fine on Mac OSX with Java HotSpot:
$ java -version
java version "1.6.0_51"
Java(TM) SE Runtime Environment (build 1.6.0_51-b11-457-11M4509)
Java HotSpot(TM) 64-Bit Server VM (build 20.51-b01-457, mixed mode)
$ lein test
lein test hbase.config-test
lein test hbase.table-test
2013-07-12 17:44:13.488 java[27384:1203] Unable to load realm info from SCDynamicStore
Starting DataNode 0 with dfs.data.dir: /Users/dwilliams/Desktop/Repos/mobiusinversion/hbase/target/test-data/fe0199fd-0168-48d9-98ce-b4a5e62d3257/dfscluster_bbad1095-58d1-4571-ba12-4d4f1c24203f/dfs/data/data1,/Users/dwilliams/Desktop/Repos/mobiusinversion/hbase/target/test-data/fe0199fd-0168-48d9-98ce-b4a5e62d3257/dfscluster_bbad1095-58d1-4571-ba12-4d4f1c24203f/dfs/data/data2
Cluster is active
Ran 11 tests containing 14 assertions.
0 failures, 0 errors.
But when this is run in a Linux environment, the following error occurs:
ERROR in (create-table) (MiniDFSCluster.java:426)
Uncaught exception, not in assertion.
expected: nil
actual: java.lang.NullPointerException: null
at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes (MiniDFSCluster.java:426)
org.apache.hadoop.hdfs.MiniDFSCluster.<init> (MiniDFSCluster.java:284)
org.apache.hadoop.hbase.HBaseTestingUtility.startMiniDFSCluster (HBaseTestingUtility.java:444)
org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster (HBaseTestingUtility.java:612)
org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster (HBaseTestingUtility.java:568)
org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster (HBaseTestingUtility.java:555)
I filed a travis-ci ticket, since this first manifested itself there and I thought it might be due to their environment.
https://github.com/travis-ci/travis-ci/issues/1240
However, after discussion with Travis support I was able to reproduce the error on CentOS. I tried both the Sun JDK and OpenJDK on Linux and both produced the same error. What's going on here? Is this a trivial configuration problem? Perhaps something not set in the Linux environment that is set in Mac OSX's?
If you would like to run the tests, please clone the repo
https://github.com/mobiusinversion/hbase
And run lein test. Help is greatly appreciated!
Update:
Filed this HBASE Jira ticket
https://issues.apache.org/jira/browse/HBASE-8944
Short answer: set "umask 022" prior to running the tests.
Long answer: This is a common environmental issue when running MiniDFSCluster from the Hadoop 1.x releases, which HBaseTestingUtility uses internally. It has been effectively fixed in Hadoop 0.22+ (including 2.0+, but not 1.x at the moment).
The underlying problem is https://issues.apache.org/jira/browse/HDFS-2556.
When the MiniDFSCluster starts up, it creates the temporary storage directories to use for the datanode processes (configured as "dfs.data.dir"). These will be created with your currently set umask. When each datanode starts up, it checks that the directories configured in "dfs.data.dir" exist and that their permissions match the expected value (set as "dfs.datanode.data.dir.perm"). If the directory permissions do not match the expected value ("755" by default), the datanode process exits.
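You can see the interaction in a shell: directory creation honors the umask, so only umask 022 yields the 755 mode the datanode expects:
umask 022
mkdir data1 && stat -c '%a' data1   # prints 755 (rwxr-xr-x): accepted
umask 002
mkdir data2 && stat -c '%a' data2   # prints 775 (rwxrwxr-x): datanode aborts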
By default, in Hadoop 1.x, this value is set to "755", so if you set your umask to "022", the data directories will wind up with the correct permissions. If, however, the permissions do not match the expected value, the datanode will abort and you will see errors like the following in the test log file:
WARN [main] datanode.DataNode(1577): Invalid directory in dfs.data.dir: Incorrect permission for /.../dfs/data/data2, expected: rwxr-xr-x, while actual: rwxrwxr-x
In later versions of Hadoop, the datanode will attempt to change the directory permissions to the expected value if they do not match, and it aborts only if that operation fails. HDFS-2556 proposes backporting this change to the 1.x releases, but it has not yet been fixed.
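Concretely, the workaround from the short answer is just a one-liner, run in the same shell that launches the tests:
umask 022   # affects only this shell and its children
lein test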