Liquibase causing Java heap space fatal error

The command I ran:
mvn liquibase:updateSQL -P MyProject -Dusername=MyUser -Dpassword=password
-Ddb_name=$(DB_NAME) -Duser_password=$(USERPASSWORD) -Dvarchar=nvarchar
-Dnumber=numeric -Dchar=nchar -Ddate=datetime -Dtimestamp=datetime
-Dclob=nvarchar(max) -Dlong=nvarchar(max) -Dblob=varbinary(max) -Draw=varbinary
-Dsysdate=GETDATE() -Dsubstring_function=substring -Dfrom_dual_clause=
-Dconcat=+ -Disnull=isnull
This is my first time taking on a Liquibase problem. Here is the stack trace; how should I go about debugging this?

Try setting the -Xmx flag to a higher value. By default, Java runs with a limited maximum heap (historically as little as 64 MB), which is too small for the program you are running, hence the OutOfMemoryError.
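Since the liquibase-maven-plugin runs inside the Maven JVM, the heap is raised on Maven itself, typically via the MAVEN_OPTS environment variable. A minimal sketch (the 2g value is an assumption; size it to your changelog):
# give the Maven JVM a 2 GiB max heap, then re-run the same command as above
export MAVEN_OPTS="-Xmx2g"
mvn liquibase:updateSQL -P MyProject ...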

Related

I've got java.lang.OutOfMemoryError: Java heap space testing ActiveMQ with JMeter on a Linux build agent

I run a JMeter test for ActiveMQ on a Linux build agent and get java.lang.OutOfMemoryError: Java heap space. Detailed log:
at java.util.Arrays.copyOf(Arrays.java:3332)
at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124)
at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:448)
at java.lang.StringBuilder.append(StringBuilder.java:136)
at org.apache.jmeter.protocol.jms.sampler.SubscriberSampler.extractContent(SubscriberSampler.java:282) ~[ApacheJMeter_jms.jar:5.3]
at org.apache.jmeter.protocol.jms.sampler.SubscriberSampler.sample(SubscriberSampler.java:186) ~[ApacheJMeter_jms.jar:5.3]
at org.apache.jmeter.protocol.jms.sampler.BaseJMSSampler.sample(BaseJMSSampler.java:98) ~[ApacheJMeter_jms.jar:5.3]
at org.apache.jmeter.threads.JMeterThread.doSampling(JMeterThread.java:635) ~[ApacheJMeter_core.jar:5.4]
at org.apache.jmeter.threads.JMeterThread.executeSamplePackage(JMeterThread.java:558) ~[ApacheJMeter_core.jar:5.4]
at org.apache.jmeter.threads.JMeterThread.processSampler(JMeterThread.java:489) ~[ApacheJMeter_core.jar:5.4]
at org.apache.jmeter.threads.JMeterThread.run(JMeterThread.java:256) ~[ApacheJMeter_core.jar:5.4]
I've already allocated the maximum heap memory (-Xmx8g), but it doesn't help. Yet the same test with the same configuration passed on a Windows build agent without an out-of-memory error.
How can this be handled? Maybe some configuration needs to be done on the Linux machine?
Are you sure your heap setting gets applied on Linux?
You can check it by creating a simple test plan with a single JSR223 Sampler using the following code:
println('Max heap size: ' + Runtime.getRuntime().maxMemory() / 1024 / 1024 + ' megabytes')
When you run JMeter in command-line non-GUI mode, you will see the current maximum JVM heap size printed.
In order to make the change permanent, amend the HEAP line in the JMeter startup script according to your requirements, as in the sketch below.
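In JMeter 5.x, the relevant line in bin/jmeter looks roughly like this (exact defaults vary by version; -Xmx8g here mirrors the value attempted above):
: "${HEAP:="-Xms1g -Xmx8g -XX:MaxMetaspaceSize=256m"}"
For a one-off run, the JVM_ARGS environment variable can override it instead (plan.jmx is a placeholder test plan):
JVM_ARGS="-Xmx8g" ./jmeter -n -t plan.jmx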
The issue was resolved after updating Java to version 11 on the Linux machines.

java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "main" for a file of more than 4GB

Using the to-cityjson command of this tool, https://github.com/citygml4j/citygml-tools, I want to convert a CityGML file to a CityJSON file. The file is 4.36 GB, but I get the following error:
java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "main"
or
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at jdk.internal.reflect.GeneratedConstructorAccessor183.newInstance(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
at com.sun.xml.bind.v2.ClassFactory.create0(ClassFactory.java:102)
at com.sun.xml.bind.v2.runtime.ClassBeanInfoImpl.createInstance(ClassBeanInfoImpl.java:255)
at com.sun.xml.bind.v2.runtime.unmarshaller.UnmarshallingContext.createInstance(UnmarshallingContext.java:672)
at com.sun.xml.bind.v2.runtime.unmarshaller.StructureLoader.startElement(StructureLoader.java:158)
at com.sun.xml.bind.v2.runtime.unmarshaller.ProxyLoader.startElement(ProxyLoader.java:30)
at com.sun.xml.bind.v2.runtime.ElementBeanInfoImpl$IntercepterLoader.startElement(ElementBeanInfoImpl.java:223)
at com.sun.xml.bind.v2.runtime.unmarshaller.UnmarshallingContext._startElement(UnmarshallingContext.java:547)
at com.sun.xml.bind.v2.runtime.unmarshaller.UnmarshallingContext.startElement(UnmarshallingContext.java:526)
at com.sun.xml.bind.v2.runtime.unmarshaller.InterningXmlVisitor.startElement(InterningXmlVisitor.java:45)
at com.sun.xml.bind.v2.runtime.unmarshaller.StAXStreamConnector.handleStartElement(StAXStreamConnector.java:216)
at com.sun.xml.bind.v2.runtime.unmarshaller.StAXStreamConnector.bridge(StAXStreamConnector.java:150)
at com.sun.xml.bind.v2.runtime.unmarshaller.UnmarshallerImpl.unmarshal0(UnmarshallerImpl.java:385)
at com.sun.xml.bind.v2.runtime.unmarshaller.UnmarshallerImpl.unmarshal(UnmarshallerImpl.java:356)
at org.citygml4j.builder.jaxb.xml.io.reader.JAXBSimpleReader.nextFeature(JAXBSimpleReader.java:133)
at org.citygml4j.tools.command.ToCityJSONCommand.execute(ToCityJSONCommand.java:133)
at org.citygml4j.tools.CityGMLTools.handleParseResult(CityGMLTools.java:102)
at org.citygml4j.tools.CityGMLTools.handleParseResult(CityGMLTools.java:35)
at picocli.CommandLine.parseWithHandlers(CommandLine.java:1526)
at org.citygml4j.tools.CityGMLTools.main(CityGMLTools.java:44)
I found one solution, which would be to use java -Xmx15G, but I don't know how to apply it.
You can use the JAVA_OPTS or CITYGML_TOOLS_OPTS environment variable, which is read by the citygml-tools start script. Or you can modify the DEFAULT_JVM_OPTS option in that script:
# Add default JVM options here. You can also use JAVA_OPTS and CITYGML_TOOLS_OPTS to pass JVM options to this script.
DEFAULT_JVM_OPTS='"-Xms1G"'
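To raise the maximum heap this way, the same quoting style applies; the 15G is illustrative and mirrors the value from the question:
DEFAULT_JVM_OPTS='"-Xms1G" "-Xmx15G"'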
If you are using Linux, you can set it in the terminal:
export JAVA_OPTS="-Xmx15G"
citygml-tools <file>
java -Xmx6144M -d64
Go to the command line and execute this command; it sets the maximum heap to 6 GiB (6144 MB) and requests the 64-bit JVM (-d64).
Source: Increase heap size in Java
You can try a brute-force approach and assign a very large heap space.
Grep for the line
defaultJvmOpts = ['-Xms1G']
in the citygml-tools build script; it grants only 1 GiB of heap space.
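You could raise it there as well; a sketch with illustrative values (make sure the machine actually has that much RAM):
defaultJvmOpts = ['-Xms1G', '-Xmx15G']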
Also make sure you are using a 64-bit Java and have enough RAM.
Another option is available if you are using Eclipse. This worked for me; I hope it helps you.

PySpark: java.lang.OutOfMemoryError: Java heap space

I have been using PySpark with IPython lately on my server with 24 CPUs and 32 GB RAM. It runs on only one machine. In my process, I want to collect a huge amount of data, as given in the code below:
train_dataRDD = (train.map(lambda x: getTagsAndText(x))
                 .filter(lambda x: x[-1] != [])
                 # tuple unpacking in lambdas is Python 2 only; unpack (x, text, tags) by index instead
                 .flatMap(lambda xtt: [(tag, (xtt[0], xtt[1])) for tag in xtt[2]])
                 .groupByKey()
                 .mapValues(list))
When I do
training_data = train_dataRDD.collectAsMap()
it gives me an OutOfMemoryError: Java heap space. Also, I cannot perform any operations on Spark after this error, as it loses the connection to Java and gives Py4JNetworkError: Cannot connect to the java server.
It looks like the heap space is small. How can I set a bigger limit?
EDIT:
Things that I tried before running:
sc._conf.set('spark.executor.memory', '32g').set('spark.driver.memory', '32g').set('spark.driver.maxResultSize', '0')
I changed the Spark options as per the documentation here (Ctrl-F and search for spark.executor.extraJavaOptions): http://spark.apache.org/docs/1.2.1/configuration.html
It says that I can avoid OOMs by setting the spark.executor.memory option. I did so, but it does not seem to be working.
After trying out loads of configuration parameters, I found that only one needed to be changed to enable more heap space: spark.driver.memory.
sudo vim $SPARK_HOME/conf/spark-defaults.conf
# uncomment the spark.driver.memory line and change it according to your use; I changed it to the below
spark.driver.memory 15g
# press : and then wq! to exit vim editor
Close your existing Spark application and rerun it. You will not encounter this error again. :)
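If you launch through spark-submit or the pyspark shell, the same setting can also be passed per run instead of editing the file; the 15g mirrors the value above:
pyspark --driver-memory 15g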
If you're looking for the way to set this from within the script or a jupyter notebook, you can do:
from pyspark.sql import SparkSession
spark = SparkSession.builder \
    .master('local[*]') \
    .config("spark.driver.memory", "15g") \
    .appName('my-cool-app') \
    .getOrCreate()
I had the same problem with pyspark (installed with brew). In my case, it was installed at the path /usr/local/Cellar/apache-spark.
The only configuration file I had was apache-spark/2.4.0/libexec/python/test_coverage/conf/spark-defaults.conf.
As suggested here, I created the file /usr/local/Cellar/apache-spark/2.4.0/libexec/conf/spark-defaults.conf and appended to it the line spark.driver.memory 12g.
I got the same error, and I just assigned memory to Spark while creating the session:
spark = SparkSession.builder.master("local[10]").config("spark.driver.memory", "10g").getOrCreate()
or
SparkSession.builder.appName('test').config("spark.driver.memory", "10g").getOrCreate()

Mallet: java.lang.OutOfMemoryError with 1024GB Memory allocation

I am trying to use Mallet to run topic modeling on a ~1 GB text file with 11,403,956 rows. From the Mallet directory, I cd to bin and raise the memory allocation to 1024 GB:
set MALLET_MEMORY=1024G
I then try to run the command:
bin/mallet import-file --input combined_bios.txt --output dh_size.mallet --keep-sequence --remove-stopwords
However, this throws a memory error:
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at gnu.trove.TObjectIntHashMap.rehash(TObjectIntHashMap.java:170)
at gnu.trove.THash.postInsertHook(THash.java:359)
at gnu.trove.TObjectIntHashMap.put(TObjectIntHashMap.java:155)
at cc.mallet.types.Alphabet.lookupIndex(Alphabet.java:115)
at cc.mallet.types.Alphabet.lookupIndex(Alphabet.java:123)
at cc.mallet.types.FeatureSequence.add(FeatureSequence.java:131)
at cc.mallet.pipe.TokenSequence2FeatureSequence.pipe(TokenSequence2FeatureSequence.java:44)
at cc.mallet.pipe.Pipe$SimplePipeInstanceIterator.next(Pipe.java:294)
at cc.mallet.pipe.Pipe$SimplePipeInstanceIterator.next(Pipe.java:282)
at cc.mallet.types.InstanceList.addThruPipe(InstanceList.java:267)
at cc.mallet.classify.tui.Csv2Vectors.main(Csv2Vectors.java:290)
Is there a workaround for such situations? Any help others can offer would be greatly appreciated!
If you are on Linux or OS X, I think you might be altering the wrong variable. The one you are changing is found in bin/mallet.bat, but you want to change the one in the executable at bin/mallet (i.e. without the .bat file extension):
MEMORY=1g
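For example, to give Mallet 16 GiB (an illustrative value; pick one that fits your machine's RAM), change it to:
MEMORY=16g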
This is also described under "Issues with Big Data" in this Mallet tutorial:
http://programminghistorian.org/lessons/topic-modeling-and-mallet

GC overhead limit exceeded trying to build LibGDX project

I'm trying to run a LibGDX project with the iOS configuration but I keep running into the following error:
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':ios:launchIPhoneSimulator'.
> java.lang.OutOfMemoryError: GC overhead limit exceeded
I've tried modifying the gradlew file with the following params, but I still get the same error:
DEFAULT_JVM_OPTS="-Xmx2048m -XX:+UseConcMarkSweepGC"
Any ideas what else I can do to work around this issue?
Thanks!
Tried several different things (gradlew clean, removing the dependencies and downloading them again, increasing heap size all the way to 2g, etc), but eventually what fixed it was rebooting the machine.
Yeah, a reboot fixed it. Weird.
I had the same problem... but I found the solution!
Open your "gradle.properties" file and it must be something like this:
org.gradle.daemon=true
org.gradle.jvmargs=-Xms128m -Xmx512m
org.gradle.configureondemand=true
You need to edit the second line: change "-Xms128m" to "-Xms1024m" and "-Xmx512m" to "-Xmx4096m", so that "gradle.properties" finally looks like:
org.gradle.daemon=true
org.gradle.jvmargs=-Xms1024m -Xmx4096m
org.gradle.configureondemand=true
That's it!
You can disable this error by adding the flag -XX:-UseGCOverheadLimit, but that is a bad approach.
This exception occurs when full GC has been running too often in the last minute and did not free any memory (or freed too little).
You can try to add more memory, for example -Xmx3048m (or more). If the exception still occurs, there is most likely a memory leak problem.
If you did not change your code but suddenly have this problem, my suggestion is to restart Android Studio, clean the project, and reboot your emulator. If that does not help, then change the settings in build.gradle.
