I want to build Android 10 from source and followed the official instructions. To get started, I simply want to build it for the emulator. However, the build keeps failing with the following error:
[11177/12864] rm -rf "out/soong/.intermediates/frameworks/base/api-stubs-docs/android_common/out" "out/soong/.intermediates/frameworks/base/api-stubs-docs/android_common/srcjars" "out/soong/.intermediates/frameworks/base/api-stubs-docs/android_common/stubsDir" && mkdir -p "out/soong/.intermediates/frameworks/base/api-stubs-docs/android_common/out" "out/soong/.intermediates/frameworks/base/api-stubs-docs/android_common/srcjars" "out/soong/.intermediates/frameworks/base/api-stubs-docs/android_common/stubsDir" && out/soong/host/linux-x86/bin/zipsync -d out/soong/.intermediates/frameworks/base/api-stubs-docs/android_common/srcjars -l out/soong/.intermediates/frameworks/base/api-stubs-docs/android_common/srcjars/list -f "*.java" out/soong/.intermediates/frameworks/base/framework-javastream-protos/gen/frameworks/base/core/proto/android/privacy.srcjar [...]
FAILED: out/soong/.intermediates/frameworks/base/api-stubs-docs/android_common/api-stubs-docs-stubs.srcjar out/soong/.intermediates/frameworks/base/api-stubs-docs/android_common/api-stubs-docs_api.txt out/soong/.intermediates/frameworks/base/api-stubs-docs/android_common/api-stubs-docs_removed.txt out/soong/.intermediates/frameworks/base/api-stubs-docs/android_common/private.txt out/soong/.intermediates/frameworks/base/api-stubs-docs/android_common/api-stubs-docs_annotations.zip out/soong/.intermediates/frameworks/base/api-stubs-docs/android_common/api-versions.xml out/soong/.intermediates/frameworks/base/api-stubs-docs/android_common/api-stubs-docs_api.xml out/soong/.intermediates/frameworks/base/api-stubs-docs/android_common/api-stubs-docs_last_released_api.xml
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at java.base/java.nio.HeapCharBuffer.<init>(HeapCharBuffer.java:68)
at java.base/java.nio.CharBuffer.allocate(CharBuffer.java:341)
at java.base/java.nio.charset.CharsetDecoder.decode(CharsetDecoder.java:794)
at java.base/java.nio.charset.Charset.decode(Charset.java:818)
at com.intellij.openapi.fileEditor.impl.LoadTextUtil.convertBytes(LoadTextUtil.java:640)
at com.intellij.openapi.fileEditor.impl.LoadTextUtil.getTextByBinaryPresentation(LoadTextUtil.java:555)
at com.intellij.openapi.fileEditor.impl.LoadTextUtil.getTextByBinaryPresentation(LoadTextUtil.java:545)
at com.intellij.openapi.fileEditor.impl.LoadTextUtil.loadText(LoadTextUtil.java:531)
at com.intellij.openapi.fileEditor.impl.LoadTextUtil.loadText(LoadTextUtil.java:503)
at com.intellij.mock.MockFileDocumentManagerImpl.getDocument(MockFileDocumentManagerImpl.java:53)
at com.intellij.psi.AbstractFileViewProvider.getDocument(AbstractFileViewProvider.java:194)
at com.intellij.psi.AbstractFileViewProvider$VirtualFileContent.getText(AbstractFileViewProvider.java:484)
at com.intellij.psi.AbstractFileViewProvider.getContents(AbstractFileViewProvider.java:174)
at com.intellij.psi.impl.source.PsiFileImpl.loadTreeElement(PsiFileImpl.java:204)
at com.intellij.psi.impl.source.PsiFileImpl.calcTreeElement(PsiFileImpl.java:709)
at com.intellij.psi.impl.source.PsiJavaFileBaseImpl.getClasses(PsiJavaFileBaseImpl.java:66)
at org.jetbrains.kotlin.cli.jvm.compiler.KotlinCliJavaFileManagerImpl$Companion.findClassInPsiFile(KotlinCliJavaFileManagerImpl.kt:250)
at org.jetbrains.kotlin.cli.jvm.compiler.KotlinCliJavaFileManagerImpl$Companion.access$findClassInPsiFile(KotlinCliJavaFileManagerImpl.kt:246)
at org.jetbrains.kotlin.cli.jvm.compiler.KotlinCliJavaFileManagerImpl.findPsiClassInVirtualFile(KotlinCliJavaFileManagerImpl.kt:216)
at org.jetbrains.kotlin.cli.jvm.compiler.KotlinCliJavaFileManagerImpl.access$findPsiClassInVirtualFile(KotlinCliJavaFileManagerImpl.kt:47)
at org.jetbrains.kotlin.cli.jvm.compiler.KotlinCliJavaFileManagerImpl$findClasses$1$$special$$inlined$forEachClassId$lambda$1.invoke(KotlinCliJavaFileManagerImpl.kt:155)
at org.jetbrains.kotlin.cli.jvm.compiler.KotlinCliJavaFileManagerImpl$findClasses$1$$special$$inlined$forEachClassId$lambda$1.invoke(KotlinCliJavaFileManagerImpl.kt:47)
at org.jetbrains.kotlin.cli.jvm.index.JvmDependenciesIndexImpl$traverseDirectoriesInPackage$1.invoke(JvmDependenciesIndexImpl.kt:77)
at org.jetbrains.kotlin.cli.jvm.index.JvmDependenciesIndexImpl$traverseDirectoriesInPackage$1.invoke(JvmDependenciesIndexImpl.kt:32)
at org.jetbrains.kotlin.cli.jvm.index.JvmDependenciesIndexImpl.search(JvmDependenciesIndexImpl.kt:131)
at org.jetbrains.kotlin.cli.jvm.index.JvmDependenciesIndexImpl.traverseDirectoriesInPackage(JvmDependenciesIndexImpl.kt:76)
at org.jetbrains.kotlin.cli.jvm.index.JvmDependenciesIndex$DefaultImpls.traverseDirectoriesInPackage$default(JvmDependenciesIndex.kt:35)
at org.jetbrains.kotlin.cli.jvm.compiler.KotlinCliJavaFileManagerImpl$findClasses$1.invoke(KotlinCliJavaFileManagerImpl.kt:151)
at org.jetbrains.kotlin.cli.jvm.compiler.KotlinCliJavaFileManagerImpl$findClasses$1.invoke(KotlinCliJavaFileManagerImpl.kt:47)
at org.jetbrains.kotlin.util.PerformanceCounter.time(PerformanceCounter.kt:91)
at org.jetbrains.kotlin.cli.jvm.compiler.KotlinCliJavaFileManagerImpl.findClasses(KotlinCliJavaFileManagerImpl.kt:147)
at com.intellij.psi.impl.PsiElementFinderImpl.findClasses(PsiElementFinderImpl.java:45)
When searching for a solution, I only find problems related to jack-server. As I understand it, Jack is no longer used in recent builds. I also tried reducing the number of build threads using m -j1, without success.
Here is some info about my setup: 4-core CPU, 8 GB RAM
============================================
PLATFORM_VERSION_CODENAME=REL
PLATFORM_VERSION=10
TARGET_PRODUCT=aosp_arm
TARGET_BUILD_VARIANT=eng
TARGET_BUILD_TYPE=release
TARGET_ARCH=arm
TARGET_ARCH_VARIANT=armv7-a-neon
TARGET_CPU_VARIANT=generic
HOST_ARCH=x86_64
HOST_2ND_ARCH=x86
HOST_OS=linux
HOST_OS_EXTRA=Linux-4.4.0-142-generic-x86_64-Ubuntu-14.04.6-LTS
HOST_CROSS_OS=windows
HOST_CROSS_ARCH=x86
HOST_CROSS_2ND_ARCH=x86_64
HOST_BUILD_TYPE=release
BUILD_ID=QQ1D.200205.002
OUT_DIR=out
============================================
After some research I found a solution. During the build, /prebuilts/jdk/jdk9/linux-x86/bin/java is called without the -Xmx option. When typing
$ /prebuilts/jdk/jdk9/linux-x86/bin/java -XX:+PrintFlagsFinal -version | grep -iE 'HeapSize'
on the command line, I found that the maximum allowed heap was only about 2 GB.
Solution
Because I don't know from which file java is being called, I simply set the heap size to 4 GB using an environment variable:
$ export _JAVA_OPTIONS="-Xmx4g"
Java will automatically pick this option up.
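A quick way to confirm the new limit takes effect is to rerun the flags query from above; the JVM also prints a "Picked up _JAVA_OPTIONS" notice on stderr when it applies the variable:
$ /prebuilts/jdk/jdk9/linux-x86/bin/java -XX:+PrintFlagsFinal -version 2>&1 | grep -iE 'MaxHeapSize|_JAVA_OPTIONS'
MaxHeapSize should now report roughly 4294967296 (4 GB) instead of the previous ~2 GB value.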
(Optional) I also increased the swap size from 8 GB to 20 GB.
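For reference, growing swap can be done with a swap file; a rough sketch (size and path are just examples, adjust to your disk):
$ sudo fallocate -l 20G /swapfile    # or: sudo dd if=/dev/zero of=/swapfile bs=1M count=20480
$ sudo chmod 600 /swapfile
$ sudo mkswap /swapfile
$ sudo swapon /swapfile
$ swapon -s                          # verify the new swap is active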
Related
I run a JMeter test for ActiveMQ on a Linux build agent and get java.lang.OutOfMemoryError: Java heap space. Detailed log:
at java.util.Arrays.copyOf(Arrays.java:3332)
at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124) ~
at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:448)
at java.lang.StringBuilder.append(StringBuilder.java:136)
at org.apache.jmeter.protocol.jms.sampler.SubscriberSampler.extractContent(SubscriberSampler.java:282) ~[ApacheJMeter_jms.jar:5.3]
at org.apache.jmeter.protocol.jms.sampler.SubscriberSampler.sample(SubscriberSampler.java:186) ~[ApacheJMeter_jms.jar:5.3]
at org.apache.jmeter.protocol.jms.sampler.BaseJMSSampler.sample(BaseJMSSampler.java:98) ~
at org.apache.jmeter.threads.JMeterThread.doSampling(JMeterThread.java:635) ~[ApacheJMeter_core.jar:5.4]
at org.apache.jmeter.threads.JMeterThread.executeSamplePackage(JMeterThread.java:558) ~[ApacheJMeter_core.jar:5.4]
at org.apache.jmeter.threads.JMeterThread.processSampler(JMeterThread.java:489) ~[ApacheJMeter_core.jar:5.4]
at org.apache.jmeter.threads.JMeterThread.run(JMeterThread.java:256) ~[ApacheJMeter_core.jar:5.4]
I've already allocated the maximum heap memory (-Xmx8g), but it doesn't help. Yet the same test with the same configuration passes on a Windows build agent without an out-of-memory error.
How can this be handled? Maybe some configuration needs to be done on the Linux machine?
Are you sure your Heap setting gets applied on Linux?
You can check it by creating a simple test plan with a single JSR223 Sampler using the following code:
println('Max heap size: ' + Runtime.getRuntime().maxMemory() / 1024 / 1024 + ' megabytes')
and when you run JMeter in command-line non-GUI mode, you will see the current maximum JVM heap size printed.
To make the change permanent, amend the HEAP line in the JMeter startup script according to your requirements.
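In recent JMeter versions that line looks roughly like the one below (defaults vary by version), and because it uses a shell default-assignment you can also override it per run via the HEAP environment variable instead of editing the script; the 8 GB value here is only an example:
# in bin/jmeter -- raise -Xmx to what the test needs
: "${HEAP:="-Xms1g -Xmx8g -XX:MaxMetaspaceSize=256m"}"
# or override per run without editing the script:
$ HEAP="-Xms1g -Xmx8g" ./jmeter -n -t test.jmx -l result.jtl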
The issue was resolved after updating Java to version 11 on the Linux machines.
I'm having repeated crashes of the HDFS DataNodes in my Cloudera cluster due to an OutOfMemoryError:
java.lang.OutOfMemoryError: Java heap space
Dumping heap to /tmp/hdfs_hdfs-DATANODE-e26e098f77ad7085a5dbf0d369107220_pid18551.hprof ...
Heap dump file created [2487730300 bytes in 16.574 secs]
#
# java.lang.OutOfMemoryError: Java heap space
# -XX:OnOutOfMemoryError="/usr/lib64/cmf/service/common/killparent.sh"
# Executing /bin/sh -c "/usr/lib64/cmf/service/common/killparent.sh"...
18551 TS 19 ? 00:25:37 java
Wed Aug 7 11:44:54 UTC 2019
JAVA_HOME=/usr/lib/jvm/java-openjdk
using /usr/lib/jvm/java-openjdk as JAVA_HOME
using 5 as CDH_VERSION
using /run/cloudera-scm-agent/process/3087-hdfs-DATANODE as CONF_DIR
using as SECURE_USER
using as SECURE_GROUP
CONF_DIR=/run/cloudera-scm-agent/process/3087-hdfs-DATANODE
CMF_CONF_DIR=/etc/cloudera-scm-agent
4194304
When analyzing the heap dump, the biggest suspects are millions of instances of ScanInfo apparently queued in the ExecutorService of the class org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.
When I inspect the content of each ScanInfo runnable object, I don't see anything weird.
Apart from this and a somewhat high block count in HDFS, I don't have any other information besides the different DataNodes crashing randomly in my cluster.
Any idea why these objects keep queueing up in the DirectoryScanner thread pool?
You can try the command below.
$ hadoop dfsadmin -finalizeUpgrade
The -finalizeUpgrade command removes the previous version of the NameNode’s and DataNodes’ storage directories.
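If the DataNode heap is simply too small for the block count, raising it is a common stopgap as well. On a plain (non-Cloudera-Manager) install that is typically done in hadoop-env.sh; in Cloudera Manager the equivalent is the DataNode Java heap setting in the service configuration. A sketch of the hadoop-env.sh variant (the 4 GB value is only an example):
# hadoop-env.sh -- give the DataNode JVM more heap
export HADOOP_DATANODE_OPTS="-Xmx4g $HADOOP_DATANODE_OPTS"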
I started up the cluster last night with few issues. After 45 minutes it went to do a log roll, and then the cluster started throwing JVM wait errors. Since then the cluster won't restart; when starting, the ResourceManager isn't coming up.
The server's NameNode and DataNodes are also offline.
I had two installs of Hadoop 2.8 on the server, removed the first one and reinstalled the second, making the adjustments to the files to get it restarted.
The error logs from when it crashed appear to show a Java stack overflow and out-of-range errors, with a growing saved memory size in the logs. My expectation is that I have misconfigured memory somewhere. I went to delete and reformat the NameNodes and I get the same segmentation error. At this point I'm not sure what to do.
Ubuntu MATE 16.04, Hadoop 2.8, Spark for Hadoop 2.7, NFS, Scala, ...
When I go to start YARN now, I get the following error message:
hduser@nodeserver:/opt/hadoop-2.8.0/sbin$ sudo ./start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /opt/hadoop-2.8.0/logs/yarn-root-resourcemanager-nodeserver.out
/opt/hadoop-2.8.0/sbin/yarn-daemon.sh: line 103:  5337 Segmentation fault      nohup nice -n $YARN_NICENESS "$HADOOP_YARN_HOME"/bin/yarn --config $YARN_CONF_DIR $command "$@" > "$log" 2>&1 < /dev/null
node1: starting nodemanager, logging to /opt/hadoop-2.8.0/logs/yarn-root-nodemanager-node1.out
node3: starting nodemanager, logging to /opt/hadoop-2.8.0/logs/yarn-root-nodemanager-node3.out
node2: starting nodemanager, logging to /opt/hadoop-2.8.0/logs/yarn-root-nodemanager-node2.out
starting proxyserver, logging to /opt/hadoop-2.8.0/logs/yarn-root-proxyserver-nodeserver.out
/opt/hadoop-2.8.0/sbin/yarn-daemon.sh: line 103:  5424 Segmentation fault      nohup nice -n $YARN_NICENESS "$HADOOP_YARN_HOME"/bin/yarn --config $YARN_CONF_DIR $command "$@" > "$log" 2>&1 < /dev/null
hduser@nodeserver:/opt/hadoop-2.8.0/sbin$
Editing to add more error outputs for help
hduser@nodeserver:/opt/hadoop-2.8.0/sbin$ jps
Segmentation fault
and
hduser@nodeserver:/opt/hadoop-2.8.0/bin$ sudo ./hdfs namenode -format
Segmentation fault
The logs appear to show that the Java stack went crazy and expanded from 512K to 5056K. So, how does one reset the stack?
Heap:
def new generation   total 5056K, used 1300K [0x35c00000, 0x36170000, 0x4a950000)
  eden space 4544K,  28% used [0x35c00000, 0x35d43b60, 0x36070000)
  from space  512K,   1% used [0x360f0000, 0x360f1870, 0x36170000)
  to   space  512K,   0% used [0x36070000, 0x36070000, 0x360f0000)
tenured generation   total 10944K, used 9507K [0x4a950000, 0x4b400000, 0x74400000)
   the space 10944K,  86% used [0x4a950000, 0x4b298eb8, 0x4b299000, 0x4b400000)
Metaspace       used 18051K, capacity 18267K, committed 18476K, reserved 18736K
Update 24 hours later: I have tried complete reinstalls of Java and Hadoop, and still no luck. When I try java -version I still get a segmentation fault.
It appears I have a stack overflow and no easy fix. It's easier to start over and rebuild the cluster with clean software.
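Before wiping everything, it can be worth checking whether the java binary itself matches the machine, since a segmentation fault on a bare java -version usually points at the JVM install rather than at Hadoop. A quick sketch using standard Debian/Ubuntu tooling:
$ file $(readlink -f $(which java))       # should report an executable built for your CPU (e.g. x86-64)
$ update-alternatives --list java         # list the registered JDK installs
$ sudo update-alternatives --config java  # switch to a known-good one if several are registered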
I have a CentOS box hosting a Drupal 7 site. I've attempted to run a Java application called Tika on it, to index files using Apache Solr search.
I keep running into an issue only when SELinux is enabled:
extract using tika: OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00007f1ed9000000, 2555904, 1) failed; error='Permission denied' (errno=13)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 2555904 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /tmp/jvm-2356/hs_error.log
This does not happen if I disable SELinux. If I run the command over SSH, it works fine -- but not in the browser. This is the command it is running:
java '-Dfile.encoding=UTF8' -cp '/var/www/drupal/sites/all/modules/contrib/apachesolr_attachments/tika' -jar '/var/www/drupal/sites/all/modules/contrib/apachesolr_attachments/tika/tika-app-1.11.jar' -t '/var/www/drupal/sites/all/modules/contrib/apachesolr_attachments/tests/test-tika.pdf'
Here is the log from SELinux at /var/log/audit/audit.log:
type=AVC msg=audit(1454636072.494:3351): avc: denied { execmem } for pid=11285 comm="java" scontext=unconfined_u:system_r:httpd_t:s0 tcontext=unconfined_u:system_r:httpd_t:s0 tclass=process
type=SYSCALL msg=audit(1454636072.494:3351): arch=c000003e syscall=9 success=no exit=-13 a0=7fdfe5000000 a1=270000 a2=7 a3=32 items=0 ppid=2377 pid=11285 auid=506 uid=48 gid=48 euid=48 suid=48 fsuid=48 egid=48 sgid=48 fsgid=48 tty=(none) ses=1 comm="java" exe="/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.95.x86_64/jre/bin/java" subj=unconfined_u:system_r:httpd_t:s0 key=(null)
Is there a way I can run this with SELinux enabled? I do not know the policy name of Tika (or should I use Java?) so I'm unsure where to go from here...
This worked for me...
I have tika at /var/apache-tika/tika-app-1.14.jar
setsebool -P httpd_execmem 1
chcon -t httpd_exec_t /var/apache-tika/tika-app-1.14.jar
Using the sealert tools (https://wiki.centos.org/HowTos/SELinux) helped track down the correct selinux type.
All of your context messages reference httpd_t, so I would run
/usr/sbin/getsebool -a | grep httpd
and experiment with enabling booleans that show as off. It's been a while since I ran a database-backed website (Drupal, WordPress, etc.) on CentOS, but as I recall, these two were required to be enabled:
httpd_can_network_connect
httpd_can_network_connect_db
To enable a boolean persistently, run
setsebool -P httpd_can_network_connect on
etc.
The booleans you're looking for are:
httpd_execmem
httpd_read_user_content
How to find:
audit2why -i /var/log/audit/audit.log will tell you this.
Part of package: policycoreutils-python-utils
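If the booleans alone don't cover every denial, a small local policy module can be generated straight from the audit log (the module name below is just an example):
$ grep 'comm="java"' /var/log/audit/audit.log | audit2allow -M local_tika_java
$ sudo semodule -i local_tika_java.pp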
I have been using PySpark with IPython lately on my server with 24 CPUs and 32 GB RAM. It's running on only one machine. In my process, I want to collect a huge amount of data, as given in the code below:
train_dataRDD = (train.map(lambda x: getTagsAndText(x))
                 .filter(lambda x: x[-1] != [])
                 .flatMap(lambda (x, text, tags): [(tag, (x, text)) for tag in tags])
                 .groupByKey()
                 .mapValues(list))
When I do
training_data = train_dataRDD.collectAsMap()
It gives me an OutOfMemoryError: Java heap space. Also, I cannot perform any operations on Spark after this error, as it loses the connection with Java. It gives Py4JNetworkError: Cannot connect to the java server.
It looks like the heap space is too small. How can I set it to a bigger limit?
EDIT:
Things that I tried before running:
sc._conf.set('spark.executor.memory','32g').set('spark.driver.memory','32g').set('spark.driver.maxResultsSize','0')
I changed the Spark options as per the documentation here (if you do Ctrl-F and search for spark.executor.extraJavaOptions): http://spark.apache.org/docs/1.2.1/configuration.html
It says that I can avoid OOMs by setting the spark.executor.memory option. I did that, but it does not seem to be working.
After trying out loads of configuration parameters, I found that only one needs to be changed to get more heap space, namely spark.driver.memory.
sudo vim $SPARK_HOME/conf/spark-defaults.conf
# uncomment spark.driver.memory and change it according to your needs; I changed it to the value below
spark.driver.memory 15g
# press : and then wq! to exit vim editor
Close your existing Spark application and rerun it. You will not encounter this error again. :)
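If you'd rather not edit spark-defaults.conf, the same setting can also be passed on the command line when launching (the value is just an example):
$ pyspark --driver-memory 15g
$ spark-submit --driver-memory 15g my_job.py    # my_job.py is a placeholder for your script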
If you're looking for the way to set this from within the script or a jupyter notebook, you can do:
from pyspark.sql import SparkSession
spark = SparkSession.builder \
    .master('local[*]') \
    .config("spark.driver.memory", "15g") \
    .appName('my-cool-app') \
    .getOrCreate()
I had the same problem with pyspark (installed with brew). In my case it was installed at the path /usr/local/Cellar/apache-spark.
The only configuration file I had was in apache-spark/2.4.0/libexec/python//test_coverage/conf/spark-defaults.conf.
As suggested here, I created the file spark-defaults.conf at /usr/local/Cellar/apache-spark/2.4.0/libexec/conf/spark-defaults.conf and appended the line spark.driver.memory 12g to it, roughly as sketched below.
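A minimal sketch of those steps, assuming the same brew paths as above:
$ mkdir -p /usr/local/Cellar/apache-spark/2.4.0/libexec/conf
$ echo "spark.driver.memory 12g" >> /usr/local/Cellar/apache-spark/2.4.0/libexec/conf/spark-defaults.conf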
I got the same error, and I just assigned memory to Spark while creating the session:
spark = SparkSession.builder.master("local[10]").config("spark.driver.memory", "10g").getOrCreate()
or
SparkSession.builder.appName('test').config("spark.driver.memory", "10g").getOrCreate()