What I want to do
I want to run the jar when I receive an email.
I wrote the configuration file as follows.
/etc/aliases
mail: mailuser,| "/bin/bash /tmp/mailtest/mailtrigger.sh"
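(For reference: after editing /etc/aliases, the alias database usually needs to be rebuilt before the new alias takes effect; with the Postfix setup shown later in this post, that would be along these lines.)
sudo newaliases   # rebuild the alias database so the MTA picks up the new entry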
/tmp/mailtest/mailtrigger.sh
#!/bin/bash
echo start >> /tmp/mailtest/stdout.log
java -jar /tmp/mailtest/example.jar
echo end >> /tmp/mailtest/stdout.log
Running from CLI works fine
Running bash mailtrigger.sh from CLI works fine.
/tmp/mailtest/stdout.log
start
success
end
Crash when run from email
When the command mail -s test mail@example.local is executed, the result is as follows.
/tmp/mailtest/stdout.log
start
end
The email was received correctly and the shell script runs. However, the following error file is output.
/tmp/hs_err_pid32575.log
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 2555904 bytes for committing reserved memory.
# Possible reasons:
# The system is out of physical RAM or swap space
# The process is running with CompressedOops enabled, and the Java Heap may be blocking the growth of the native heap
# Possible solutions:
# Reduce memory load on the system
# Increase physical memory or swap space
# Check if swap backing store is full
# Decrease Java heap size (-Xmx/-Xms)
# Decrease number of Java threads
# Decrease Java thread stack sizes (-Xss)
# Set larger code cache with -XX:ReservedCodeCacheSize=
# JVM is running with Unscaled Compressed Oops mode in which the Java heap is
# placed in the first 4GB address space. The Java Heap base address is the
# maximum limit for the native heap growth. Please use -XX:HeapBaseMinAddress
# to set the Java Heap base and to place the Java Heap above 4GB virtual address.
# This output file may be truncated or incomplete.
#
# Out of Memory Error (os_linux.cpp:2753), pid=28040, tid=0x00007fca156d3700
#
# JRE version: (8.0_212-b04) (build )
# Java VM: OpenJDK 64-Bit Server VM (25.212-b04 mixed mode linux-amd64 compressed oops)
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
Memory seems to be sufficient
I think memory is sufficient, because there is no memory error when the script is executed from the CLI.
I'll include the following output just in case.
vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
1 0 50176 807536 0 391192 0 0 1 2 39 35 4 3 93 0 0
Not only the jar, but even java -version doesn't work
Rewriting the shell script as follows also crashed.
/tmp/mailtest/mailtrigger.sh
#!/bin/bash
echo start >> /tmp/mailtest/stdout.log
#java -jar /tmp/mailtest/example.jar
java -version
echo end >> /tmp/mailtest/stdout.log
Of course it succeeds from the CLI.
java -version
openjdk version "1.8.0_212"
OpenJDK Runtime Environment (AdoptOpenJDK)(build 1.8.0_212-b04)
OpenJDK 64-Bit Server VM (AdoptOpenJDK)(build 25.212-b04, mixed mode)
Other information
postconf | grep mail_version
mail_version = 2.10.1
cat /etc/system-release
CentOS Linux release 7.5.1804 (Core)
I'm sorry if there is not enough information. I will add more information if needed.
I'm in trouble. Please help.
The cause was SELinux.
I temporarily switched SELinux to Permissive and it worked.
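For reference, a minimal sketch of switching SELinux to Permissive temporarily (it takes effect immediately and reverts on reboot; set SELINUX=permissive in /etc/selinux/config for a permanent change):
getenforce          # show the current mode: Enforcing / Permissive / Disabled
sudo setenforce 0   # switch to Permissive until the next reboot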
Thank you very much.
Related
While running a ./mvnw clean package command inside a Docker container running the eclipse-temurin:17-jdk image, I got the following error (Maven doesn't even execute):
[0.002s][warning][os,thread] Failed to start thread "GC Thread#0" - pthread_create failed (EPERM) for attributes: stacksize: 1024k, guardsize: 4k, detached.
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Cannot create worker GC thread. Out of system resources.
# An error report file with more information is saved as:
# /home/myuser/hs_err_pid8.log
There was more information in the hs_err_pid8.log file:
# There is insufficient memory for the Java Runtime Environment to continue.
# Cannot create worker GC thread. Out of system resources.
# Possible reasons:
# The system is out of physical RAM or swap space
# The process is running with CompressedOops enabled, and the Java Heap may be blocking the growth of the native heap
# Possible solutions:
# Reduce memory load on the system
# Increase physical memory or swap space
# Check if swap backing store is full
# Decrease Java heap size (-Xmx/-Xms)
# Decrease number of Java threads
# Decrease Java thread stack sizes (-Xss)
# Set larger code cache with -XX:ReservedCodeCacheSize=
# JVM is running with Zero Based Compressed Oops mode in which the Java heap is
# placed in the first 32GB address space. The Java Heap base address is the
# maximum limit for the native heap growth. Please use -XX:HeapBaseMinAddress
# to set the Java Heap base and to place the Java Heap above 32GB virtual address.
# This output file may be truncated or incomplete.
#
# Out of Memory Error (workerManager.hpp:87), pid=8, tid=8
#
# JRE version: (17.0.5+8) (build )
# Java VM: OpenJDK 64-Bit Server VM (17.0.5+8, mixed mode, sharing, tiered, compressed oops, compressed class ptrs, g1 gc, linux-amd64)
How can I fix this?
Adding the --security-opt seccomp=unconfined argument to the docker run command fixed my issue.
Thanks to this SO answer, which pointed out the seccomp Docker security profile, and the official documentation.
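For illustration, the flag goes on the docker run line; the volume mount and working directory below are just placeholders for wherever your project lives:
docker run --rm -it \
  --security-opt seccomp=unconfined \
  -v "$PWD":/workspace -w /workspace \
  eclipse-temurin:17-jdk \
  ./mvnw clean package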
I have Elasticsearch 2.2.0 and Logstash 2.1 installed on my system.
When I do a config test for Logstash with:
sudo service logstash configtest
It gives the following errors:
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 55545856 bytes for committing reserved memory.
# Possible reasons:
# The system is out of physical RAM or swap space
# In 32 bit mode, the process size limit was hit
# Possible solutions:
# Reduce memory load on the system
# Increase physical memory or swap space
# Check if swap backing store is full
# Use 64 bit Java on a 64 bit OS
# Decrease Java heap size (-Xmx/-Xms)
# Decrease number of Java threads
# Decrease Java thread stack sizes (-Xss)
# Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
#
# Out of Memory Error (os_linux.cpp:2638), pid=7887, tid=140476386535168
#
# JRE version: OpenJDK Runtime Environment (8.0_71-b15) (build 1.8.0_71-b15)
# Java VM: OpenJDK 64-Bit Server VM (25.71-b15 mixed mode linux-amd64 compressed oops)
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
What is the best way to get rid of this memory error, as there are a lot of prescribed solutions?
Also, I have bootstrap.mlockall: true configured in elasticsearch.yml.
Can we configure Elasticsearch a bit differently?
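(For context, one of the generic suggestions in the error header is to decrease the Java heap size; for a packaged Logstash 2.x install the heap is normally controlled by LS_HEAP_SIZE in the service defaults file. The path and value below are assumptions for a Debian-style install.)
# /etc/default/logstash  (or /etc/sysconfig/logstash on RHEL/CentOS)
LS_HEAP_SIZE="512m"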
I've been working on a vision project and using some C++ libraries in Java via JNI.
OS: Ubuntu 12.04
In my project, I'm using the Boost library to generate random numbers, but sometimes I get an exception as follows:
Core dum[thread 140002367330048 also had an error]
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00007f54f72a615a, pid=11979, tid=140002352568064
#
# JRE version: Java(TM) SE Runtime Environment (7.0_67-b01) (build 1.7.0_67-b01)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (24.65-b04 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# C [libCBIR.so+0x3215a] boost::random::mersenne_twister_engine<unsigned int, 32ul, 624ul, 397ul, 31ul, 2567483615u, 11ul, 4294967295u, 7ul, 2636928640u, 15ul, 4022730752u, 18ul, 1812433253u>::operator()()+0x3a
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
When I searched for this on Stack Overflow, I found some issues related to the IDE (Eclipse). But my application is independent of the IDE, so the solution must be independent of the IDE, too. Any ideas?
I was experiencing the same issue.
As the error itself suggests -
Failed to write core dump. Core dumps have been disabled. To enable core dumping, try ulimit -c unlimited before starting Java again
ulimit gets and sets user limits. For more info on ulimit, run -
man ulimit
So, open a terminal and run -
ulimit -c unlimited
This should solve the problem. To check if the change was successful, run -
ulimit -c -l
This should give you an output as follows -
core file size (blocks, -c) unlimited
max locked memory (kbytes, -l) 64
If the problem persists, refer to this and this from Ask Ubuntu.
A core dump or a crash dump is a memory snapshot of a running process. A core dump can be automatically created by the operating system when a fatal or unhandled error (for example, signal or system exception) occurs.
For more info: https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/bugreports004.html
For anyone seeing this issue from within Jenkins (as we are): to enable core dumps from Jenkins, edit /etc/init.d/jenkins and add "--core" to $DAEMON_ARGS, as sketched below. Setting ulimit directly from the shell script or via /etc/security/limits.conf will not work.
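A minimal sketch of that edit, assuming the init script builds its arguments in a DAEMON_ARGS shell variable (the existing options vary by install, so only the appended flag is shown):
# /etc/init.d/jenkins -- after the existing DAEMON_ARGS definition
DAEMON_ARGS="$DAEMON_ARGS --core"   # let the daemon wrapper produce core dumps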
In my case, this error occurred because I used the wrong Java version for my project. I was supposed to use Java 11, and I used Java 8 instead.
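If you suspect the same mismatch, it is quick to check which JVM is actually being used; on Debian/Ubuntu-style systems the active version can be switched with update-alternatives (just one way to do it):
java -version                              # confirm which JDK the build really runs on
sudo update-alternatives --config java     # pick a different installed JDK (Debian/Ubuntu)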
I am using Python 3 on Ubuntu 14.04, and am running the Stanford POSTagger on a corpus of 67 raw text articles; the redacted Python script is as follows:
from nltk.tag.stanford import POSTagger

with open('the_file.txt', 'r') as file:
    G = file.readlines()

stan = []
english_postagger = POSTagger('models/english-bidirectional-distsim.tagger', 'stanford-postagger.jar')
for line in G:
    stan.append(english_postagger.tag(tokenize_fast(line)))
after several iterations of which I get the following error:
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at edu.stanford.nlp.sequences.ExactBestSequenceFinder.bestSequence(ExactBestSequenceFinder.java:109)
at edu.stanford.nlp.sequences.ExactBestSequenceFinder.bestSequence(ExactBestSequenceFinder.java:31)
at edu.stanford.nlp.tagger.maxent.TestSentence.runTagInference(TestSentence.java:322)
at edu.stanford.nlp.tagger.maxent.TestSentence.testTagInference(TestSentence.java:312)
at edu.stanford.nlp.tagger.maxent.TestSentence.tagSentence(TestSentence.java:135)
at edu.stanford.nlp.tagger.maxent.MaxentTagger.tagSentence(MaxentTagger.java:998)
at edu.stanford.nlp.tagger.maxent.MaxentTagger.tagCoreLabelsOrHasWords(MaxentTagger.java:1788)
at edu.stanford.nlp.tagger.maxent.MaxentTagger.tagAndOutputSentence(MaxentTagger.java:1798)
at edu.stanford.nlp.tagger.maxent.MaxentTagger.runTagger(MaxentTagger.java:1709)
at edu.stanford.nlp.tagger.maxent.MaxentTagger.runTagger(MaxentTagger.java:1770)
at edu.stanford.nlp.tagger.maxent.MaxentTagger.runTagger(MaxentTagger.java:1543)
at edu.stanford.nlp.tagger.maxent.MaxentTagger.runTagger(MaxentTagger.java:1499)
at edu.stanford.nlp.tagger.maxent.MaxentTagger.main(MaxentTagger.java:1842)
I have also run the Stanford POSTagger from the command line as:
java -mx300m -classpath stanford-postagger.jar edu.stanford.nlp.tagger.maxent.MaxentTagger -model models/wsj-0-18-bidirectional-distsim.tagger -textFile sample-input.txt > sample-tagged.txt
with a similar error. I even passed Java 2 GB of memory, and still no luck.
Any thoughts/ideas or hacky type solutions are greatly welcomed!
Well spotted @nsanglar, so I tried:
java -Xmx2g -classpath stanford-postagger.jar edu.stanford.nlp.tagger.maxent.MaxentTagger -model models/wsj-0-18-bidirectional-distsim.tagger -textFile raw_text.txt > sample-tagged.txt
I get an error log message, with the following header:
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 283639808 bytes for committing reserved memory.
# Possible reasons:
# The system is out of physical RAM or swap space
# In 32 bit mode, the process size limit was hit
# Possible solutions:
# Reduce memory load on the system
# Increase physical memory or swap space
# Check if swap backing store is full
# Use 64 bit Java on a 64 bit OS
# Decrease Java heap size (-Xmx/-Xms)
# Decrease number of Java threads
# Decrease Java thread stack sizes (-Xss)
# Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
# Out of Memory Error (os_linux.cpp:2798), pid=25677, tid=140571167794944
# JRE version: OpenJDK Runtime Environment (7.0_65-b32) (build 1.7.0_65-b32)
# Java VM: OpenJDK 64-Bit Server VM (24.65-b04 mixed mode linux-amd64 compressed oops)
# Derivative: IcedTea 2.5.2
# Distribution: Ubuntu 14.04 LTS, package 7u65-2.5.2-3~14.04
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
Well, it turns out it was a RAM issue: I simply did not have enough memory to execute the command. Running the tagger on a server did the trick.
You should use -Xmx1024m. I think you made a typo, because currently you are using -mx :)
In Python, set:
nltk.internals.config_java(options='-Xmx3024m')
My OS is 64-bit Windows 7 and the JDK is the 32-bit version. I started my JBoss wrapper application successfully, but after it ran for a while the JVM failed and restarted.
The message in the JVM dump log is:
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 543672 bytes for Chunk::new
# Possible reasons:
# The system is out of physical RAM or swap space
# In 32 bit mode, the process size limit was hit
# Possible solutions:
# Reduce memory load on the system
# Increase physical memory or swap space
# Check if swap backing store is full
# Use 64 bit Java on a 64 bit OS
# Decrease Java heap size (-Xmx/-Xms)
# Decrease number of Java threads
# Decrease Java thread stack sizes (-Xss)
# Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
#
# Out of Memory Error (allocation.cpp:328), pid=5480, tid=4740
#
# JRE version: 7.0_05-b05
# Java VM: Java HotSpot(TM) Server VM (23.1-b03 mixed mode windows-x86 )
# Failed to write core dump. Minidumps are not enabled by default on client versions of Windows
#
I deploy 3 wrapper applications on my computer. Each of them is set to a maximum JVM heap size of 700 MB.
Please help me review this problem. Thanks. My questions are:
How can I know the currently allocated JVM size?
What is the reason for this problem?
How can I fix it? Someone recommended that I use the 64-bit JDK. Is it necessary?
If you use a 32-bit JDK, the maximum heap size you can set and still have the JVM start up is about 1.2 GB. For larger heaps, you need to run a 64-bit JDK. To run a 64-bit JDK, you'd also need a 64-bit operating system running on a server that has a 64-bit CPU.
Downloaded the 64-bit JDK version
Set JAVA_OPTS to:
JAVA_OPTS=-Xms1024m -Xmx1024m -XX:MaxPermSize=256m
and refer to this link.
Also, this is a good article about memory.
32-bit JVMs are limited to a 2 GB heap maximum (-Xmx); in some operating systems, much less than that.
A 64-bit JVM will not have this limitation.
In Windows, you can follow your JVM's memory consumption with Task Manager -> Processes.