I am generating a report (CSV) through Java, and I am using Hibernate to fetch the data from the database.
Part of my code is below:
ScrollableResults items = null;
String sql = " from " + topBO.getClass().getName() + " where " + spec;
StringBuffer sqlQuery = new StringBuffer(sql);
Query query = sessionFactory.getCurrentSession().createQuery(sqlQuery.toString());
items = query.setFetchSize(1000).setCacheable(false).scroll(ScrollMode.FORWARD_ONLY);
list = new ArrayList<TopBO>();
// the error occurs inside the while loop, while fetching more data
while (items.next()) {
    TopBO topBO2 = (TopBO) items.get(0);
    list.add(topBO2);
    topBO2 = null;
}
sessionFactory.evict(topBO.getClass());
Environment info
JVM config : -Xms512M -Xmx1024M -XX:MaxPermSize=512M -XX:MaxHeapSize=1024M
Jboss : JBoss 5.1 Runtime Server
Oracle : 10g
JDK : jdk1.6.0_24(32-bit/x86)
Operating System : Windows 7 (32-bit/x86)
RAM : 4 GB
Error: When I fetch up to 50k records it works fine, but when I fetch more than that, it gives me this error:
#
# An unexpected error has been detected by Java Runtime Environment:
#
# java.lang.OutOfMemoryError: requested 4096000 bytes for GrET in C:\BUILD_AREA\jdk6_11\hotspot\src\share\vm\utilities\growableArray.cpp. Out of swap space?
#
# Internal Error (allocation.inline.hpp:42), pid=1408, tid=6060
# Error: GrET in C:\BUILD_AREA\jdk6_11\hotspot\src\share\vm\utilities\growableArray.cpp
#
# Java VM: Java HotSpot(TM) Client VM (11.0-b16 mixed mode windows-x86)
# An error report file with more information is saved as:
# D:\jboss-5.1.0.GA\bin\hs_err_pid1408.log
#
# If you would like to submit a bug report, please visit:
# http://java.sun.com/webapps/bugreport/crash.jsp
#
When I set -Xms512M -Xmx768M -XX:MaxPermSize=512M -XX:MaxHeapSize=768M, it throws another exception:
java.lang.OutOfMemoryError: Java heap space
java.lang.OutOfMemoryError is generally caused by a lack of the required heap space. What you can do is increase your JVM heap size using a flag such as -Xmx1548M, or more than 1548 MB.
But you seem to be running out of system memory, so you should use a JVM that handles memory more efficiently, and I suggest a JVM upgrade. How about upgrading your 1.6 JVM to a more recent version?
The solution to this problem can be unusual. Try the recommendations from the article OutOfMemoryError: Out of swap space - Problem Patterns.
There are multiple scenarios which can lead to a native OutOfMemoryError:
Native Heap (C-Heap) depletion due to too many Java EE applications deployed on a single 32-bit JVM, combined with a large Java Heap, e.g. 2 GB (the most common problem)
Native Heap (C-Heap) depletion due to a non-optimal Java Heap size, e.g. a Java Heap too large for the application's needs on a single 32-bit JVM
Native Heap (C-Heap) depletion due to too many created Java Threads, e.g. allowing the Java EE container to create too many threads on a single 32-bit JVM
OS physical / virtual memory depletion preventing the HotSpot VM from allocating native memory to the C-Heap (32-bit or 64-bit VM)
OS physical / virtual memory depletion preventing the HotSpot VM from expanding its Java Heap or PermGen space at runtime (32-bit or 64-bit VM)
C-Heap / native memory leak (third-party monitoring agent / library, JVM bug, etc.)
I would begin troubleshooting with this recommendation:
Review your JVM memory settings. For a 32-bit VM, a Java Heap of 2 GB+ can really start to put pressure on the C-Heap, depending on how many applications you have deployed, how many Java Threads you run, etc. In that case, determine whether you can safely reduce your Java Heap by about 256 MB (as a starting point) and see if it helps improve your JVM memory "balance".
It is also possible (though more labor-intensive) to upgrade your environment to 64-bit versions of the OS and JVM, because 4 GB of physical RAM will be better utilized on an x64 OS.
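Independent of the JVM flags, note that the posted code collects every TopBO into a single ArrayList, so the whole result set has to fit on the heap no matter what fetch size or scroll mode is used. Below is a rough, lower-memory sketch reusing the sessionFactory, topBO and spec from the question; csvWriter is a hypothetical helper that appends one CSV line per entity, and the batch size of 1000 is just an example:
// assumed imports: org.hibernate.Session, org.hibernate.ScrollableResults, org.hibernate.ScrollMode
Session session = sessionFactory.getCurrentSession();
ScrollableResults items = session
        .createQuery("from " + topBO.getClass().getName() + " where " + spec)
        .setFetchSize(1000)
        .setCacheable(false)
        .scroll(ScrollMode.FORWARD_ONLY);
int count = 0;
while (items.next()) {
    TopBO row = (TopBO) items.get(0);
    csvWriter.write(row);        // hypothetical helper: stream the row straight to the CSV file
    if (++count % 1000 == 0) {
        session.clear();         // detach the entities read so far, so they can be garbage collected
    }
}
items.close();
This way only one batch of entities is ever held in the persistence context, instead of the full 50k+ rows.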
I create and persist a df1, on which I then do the following:
df1.persist (From the Storage Tab in spark UI it says it is 3Gb)
df2=df1.groupby(col1).pivot(col2) (this is a df with 4,827 columns and 40,107 rows)
df2.collect
df3=df1.groupby(col2).pivot(col1) (this is a df with 40,107 columns and 4,827 rows)
-----it hangs here for almost 2 hours-----
df4 = (..Imputer or na.fill on df3..)
df5 = (..VectorAssembler on df4..)
(..PCA on df5..)
df1.unpersist
I have a cluster with 16 nodes (each node has 1 worker with 1 executor with 4 cores and 24 GB RAM) and a master (with 15 GB of RAM). Also, spark.shuffle.partitions is 192. It hangs for 2 hours and nothing is happening. Nothing is active in the Spark UI. Why does it hang for so long? Is it the DagScheduler? How can I check it? Please let me know if you need any more information.
----Edited 1----
After waiting for almost two hours it proceeds and then eventually fails. Below are the Stages and Executors tabs from the Spark UI:
Also, in the stderr file in the worker nodes it says:
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000003fe900000, 6434586624, 0) failed; error='Cannot allocate memory' (errno=12)
Moreover, a file named "hs_err_pid11877" is produced in the folder next to stderr and stdout, which says:
There is insufficient memory for the Java Runtime Environment to continue.
Native memory allocation (mmap) failed to map 6434586624 bytes for committing reserved memory.
Possible reasons:
The system is out of physical RAM or swap space
The process is running with CompressedOops enabled, and the Java Heap may be blocking the growth of the native heap
Possible solutions:
Reduce memory load on the system
Increase physical memory or swap space
Check if swap backing store is full
Decrease Java heap size (-Xmx/-Xms)
Decrease number of Java threads
Decrease Java thread stack sizes (-Xss)
Set larger code cache with -XX:ReservedCodeCacheSize=
JVM is running with Zero Based Compressed Oops mode in which the Java heap is
placed in the first 32GB address space. The Java Heap base address is the
maximum limit for the native heap growth. Please use -XX:HeapBaseMinAddress
to set the Java Heap base and to place the Java Heap above 32GB virtual address.
This output file may be truncated or incomplete.
Out of Memory Error (os_linux.cpp:2792), pid=11877, tid=0x00007f237c1f8700
JRE version: OpenJDK Runtime Environment (8.0_265-b01) (build 1.8.0_265-8u265-b01-0ubuntu2~18.04-b01)
Java VM: OpenJDK 64-Bit Server VM (25.265-b01 mixed mode linux-amd64 compressed oops)
Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
...and other information about the failing task, GC information, etc.
----Edited 2----
Here is the Tasks section of the last pivot (the stage with id 16 from the stages picture), just before the hang. It seems that all 192 partitions have a pretty equal amount of data, from 15 to 20 MB.
pivot in Spark generates an extra stage to get the pivot values; that happens under the hood and can take some time, depending on how your resources are allocated, etc.
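If the distinct values of the pivot column are known up front (or are cheap to collect once), you can pass them to pivot explicitly so Spark skips that extra stage. A rough sketch using the Java Dataset API; the column names, the values list and the first(...) aggregation are placeholders for whatever the real job does:
import static org.apache.spark.sql.functions.first;
import java.util.Arrays;
import java.util.List;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

// placeholder: the known distinct values of col2
List<Object> pivotValues = Arrays.<Object>asList("A", "B", "C");

Dataset<Row> df2 = df1.groupBy("col1")
        .pivot("col2", pivotValues)   // explicit values: no extra job to discover them
        .agg(first("value"));         // placeholder aggregation
With 4,827 and 40,107 distinct values respectively, the value-discovery step is exactly the part that is easy to underestimate.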
I launch our Spring Boot application in a Docker container on the AWS Fargate service, and once CPU consumption goes above 100% the container is stopped by the Docker OOM-killer with the error:
Reason: OutOfMemoryError: Container killed due to memory usage
On the metrics we can see that CPU goes above 100%. After some profiling we found the CPU-consuming code, but my question is: how can CPU be greater than 100%? Is there some way to tell the JVM to use only 100%?
I remember we had a similar issue with memory consumption. I read a lot of articles about cgroups, and the solution we found was to specify
-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap
So when you launch Docker with the option -m=512, the heap size will be 1/4 of the container's memory limit (which roughly matches the 123.75M estimated heap shown below). The heap size can also be tuned with the option
-XX:MaxRAMFraction=2
which will allocate 1/2 of the Docker memory limit for the heap.
Should I use something similar for CPU?
I read the article https://blogs.oracle.com/java-platform-group/java-se-support-for-docker-cpu-and-memory-limits, but it says that
As of Java SE 8u131, and in JDK 9, the JVM is Docker-aware with respect to Docker CPU limits transparently. That means if -XX:ParallelGCThreads or -XX:CICompilerCount are not specified as command line options, the JVM will apply the Docker CPU limit as the number of CPUs the JVM sees on the system. The JVM will then adjust the number of GC threads and JIT compiler threads just like it would if it were running on a bare metal system with the number of CPUs set to the Docker CPU limit.
The Docker command used to start the container:
docker run -d .... -e JAVA_OPTS='-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:+PrintFlagsFinal -XshowSettings:vm' -m=512 -c=256 ...
The Java version used:
openjdk version "1.8.0_181"
OpenJDK Runtime Environment (build 1.8.0_181-8u181-b13-1~deb9u1-b13)
OpenJDK 64-Bit Server VM (build 25.181-b13, mixed mode)
Some additional info on the app during startup:
VM settings:
Max. Heap Size (Estimated): 123.75M
Ergonomics Machine Class: client
Using VM: OpenJDK 64-Bit Server VM
ParallelGCThreads = 0
CICompilerCount := 2
CICompilerCountPerCPU = true
I found the answer to my question. The behaviour for identifying the number of processors to use was fixed in https://bugs.openjdk.java.net/browse/JDK-8146115:
Number of CPUs
Use a combination of number_of_cpus() and cpu_sets() in order to determine how many processors are available to the process and adjust the JVM's os::active_processor_count appropriately. The number_of_cpus() will be calculated based on the cpu_quota() and cpu_period() using this formula: number_of_cpus() = cpu_quota() / cpu_period(). If cpu_shares has been setup for the container, the number_of_cpus() will be calculated based on cpu_shares()/1024. 1024 is the default and standard unit for calculating relative cpu usage in cloud based container management software.
Also add a new VM flag (-XX:ActiveProcessorCount=xx) that allows the number of CPUs to be overridden. This flag will be honored even if UseContainerSupport is not enabled.
So on AWS you generally set up cpu_shares at the task definition level. Before the JVM fix this was calculated incorrectly:
On Java 8 versions < 191: cpu_shares()/1024 = 256/1024 was identified as 2.
After migrating to Java 8 versions >= 191: cpu_shares()/1024 = 256/1024 was identified as 1.
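If you ever need the JVM to see more CPUs than cpu_shares/1024 implies, the -XX:ActiveProcessorCount flag mentioned in the fix can be passed the same way as the other options; the value 2 below is only an example, and the elided parts are the same as in the original command:
docker run -d .... -e JAVA_OPTS='-XX:ActiveProcessorCount=2 -XX:+PrintFlagsFinal -XshowSettings:vm' -m=512 -c=256 ...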
The code to test it:
val version = System.getProperty("java.version")
val runtime = Runtime.getRuntime()
val processors = runtime.availableProcessors()
logger.info("========================== JVM Info ==========================")
logger.info("Java version is: {}", version)
logger.info("Available processors: {}", processors)
Sample output:
"Java version is: 1.8.0_212"
"Available processors: 1"
I hope this helps someone, as I couldn't find the answer anywhere (spring-issues-tracker, AWS support, etc.).
Sorry for asking the question, should have searched a bit more.
I'm running Weka with a rather large dataset and a memory-intensive algorithm. I need all the heap space I can get!
This works:
java -jar -Xmx2048m weka.jar &
But this does not
java -jar -Xmx4096m weka.jar &
I get:
Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine.
With some quick searching I found that this is the upper limit:
java -jar -Xmx2594m weka.jar &
I have 4 GB of RAM but a 32-bit machine. Why can't I use 2^32 bytes = 4096 MB of memory?
For the future, I am wondering: can I run Java with e.g. hundreds of GB of heap space if I have the correct hardware and OS?
I have both 1.6 and 1.7 JVM installed:
$java -showversion
java version "1.6.0_24"
OpenJDK Runtime Environment (IcedTea6 1.11.4) (6b24-1.11.4-1ubuntu0.12.04.1)
OpenJDK Server VM (build 20.0-b12, mixed mode)
I have 4 GB of RAM but a 32-bit machine. Why can't I use 2^32 bytes = 4096 MB of memory?
For the future, I am wondering: can I run Java with e.g. hundreds of GB of heap space if I have the correct hardware and OS?
For 4 GB I suggest you use a 64-bit OS and possibly a 64-bit JVM, as the limit for the heap size can be as small as 1.2 GB (on Windows XP).
If you want larger JVMs, I suggest making sure you have a 64-bit OS and JVM and more memory than the size of the JVM, e.g. if you want a 40 GB heap you need something like 48 GB or 64 GB of memory.
Use the 64-bit version of Java, which allows you to use more memory. This is a limit of the 32-bit Java virtual machine.
If you have 4 GB of RAM, how can you expect all of it to be available to your JVM? What about the OS the JVM is running in? It also requires memory. The way it works is that even though you can address all 4 GB, an OS will generally limit the amount available per process.
You are not able to get an allocation of 4096m because the JVM tries to reserve a single contiguous block of 4096m, which is not available at that point in time. So use some smaller value between 3000 and 4000, or make sure your RAM is not being used by other processes.
I am running a server with the following attributes:
Windows Server 2008 R2 Standard - 64-bit
4 GB RAM
I am trying to set the heap size to 3 GB for an application, using the flags -Xmx3G -Xms3G. Running with these flags results in the following error:
Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine.
I have been playing with the setting to see what my ceiling is and found that 1568 is my ceiling. What am I missing?
How much physical memory is available on your system (out of the original 4 GB)? It sounds like your system doesn't have 3 GB of physical memory available when the VM starts up.
Remember that the JVM needs more memory than is allocated to the heap -- there are other data structures as well (thread stacks, etc) that also need memory. So the settings you are providing attempt to use more than 3GB of memory.
Also, are you using a 64-bit JVM? The practical limit for heap size on a 32-bit VM is 1.4 to 1.6 GB, according to this document.
Java requires contiguous virtual memory for the heap at startup. On Windows, 32-bit applications run in a 32-bit emulated environment, so you don't get much more contiguous memory than you would on a 32-bit OS. By comparison, on Solaris you get over 3 GB of virtual memory for 32-bit Java.
I suggest you use the 64-bit version of Java as this will make use of all the memory you have. You still need to have free memory but the larger address space doesn't suffer from fragmentation.
BTW: The heap space is only part of the memory used; you also need memory for shared libraries, direct memory, GUI components, etc.
It seems you don't have 3 GB of physical memory available. Here is an interesting article on Java heap size setting errors: Java heap size setting errors.
I'd like to run a very simple bot written in Java on my VPS.
I want to limit the JVM memory to, let's say, 10 MB (I doubt it would need any more).
I'm running the bot with the following command:
java -Xms5M -Xmx10M -server -jar IrcBot.jar "/home/jbot"
But top shows that actual memory reserved for java is 144m (or am I interpreting things wrong here?).
13614 jbot 17 0 144m 16m 6740 S 0.0 3.2 0:00.20 java
Any ideas what can be wrong here?
java version "1.6.0_20"
Java(TM) SE Runtime Environment (build 1.6.0_20-b02)
Java HotSpot(TM) Client VM (build 16.3-b01, mixed mode)
BTW. I'm running CentOS - if it matters.
EDIT:
Thank you for your answers.
I can't really accept any of them, since it turns out the problem lies with the language I chose to write the program in, not the JVM itself.
-Xmx specifies the max Java heap allocation (-Xms specifies the min heap allocation). The Java process has its own overhead (the actual JVM etc.), plus the loaded classes and the perm gen space (set via -XX:MaxPermSize=128m) sit outside of that value too.
Think of your heap allocation as simply Java's "internal working space", not the process as a whole.
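To see the difference for yourself, here is a tiny sketch (the class name is purely illustrative) that prints the heap limits the JVM is actually applying; compare its output with the RES/VIRT columns that top shows for the same process:
public class HeapInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024 * 1024;
        System.out.println("max heap (-Xmx)     : " + rt.maxMemory() / mb + " MB");
        System.out.println("committed heap      : " + rt.totalMemory() / mb + " MB");
        System.out.println("free (of committed) : " + rt.freeMemory() / mb + " MB");
        // top's RES/VIRT for the process will be larger: JVM code, thread stacks,
        // perm gen and shared libraries all sit outside these heap figures.
    }
}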
Try experimenting and you'll see what I mean:
java -Xms512m -Xmx1024m ...
Also, try using a tool such as JConsole or JVisualVM (both are shipped with the Sun / Oracle JDK) and you'll be able to see graphical representations of the actual heap usage (and the settings you used to constrain the size).
Finally, as @Peter Lawrey very rightly states, the resident memory is the crucial figure here - in your case the JVM is only using 16 MiB RSS (according to 'top'). The shared / virtual allocation won't cause any issues as long as the JVM's heap isn't pushed into swap (non-RAM). Again, as I've stated in some of the comments, there are other JVMs available - "Java" is quite capable of running on low-resource or embedded platforms.
-Xmx is the max heap size, but besides that there are a few other things that the JVM needs to keep in memory: the stack, the classes, etc. For a brief introduction see this post about the JVM memory structure.
The JVM maps in shared libraries, which are about 150 MB. The amount of virtual memory used is unlikely to be important to you if you are trying to minimise physical main memory.
The number you want to look at is the resident memory, which is the amount of physical main memory actually used (16 MB here).