We have Jenkins set up to run jobs for a project our team is currently working on, but the jobs keep crashing due to an OutOfMemoryError.
The Jenkins environment runs on a virtual machine. The host has fairly decent specs and doesn't have too many VMs on it. Our SBT jobs run on a separate build node which has 8GB of RAM available.
project/build.properties: sbt.version=0.13.9
Jenkins ver. 2.6
We are executing the following command for the job:
/usr/java/default/bin/java -Xmx2G -XX:+CMSClassUnloadingEnabled -XX:MaxMetaspaceSize=2G -Dsbt.override.build.repos=true -Dsbt.log.noformat=true -jar /usr/local/sbt/default/bin/sbt-launch.jar compile test:compile test universal:publish
This produces the following errors throughout the log:
Exception in thread "Thread-40" java.io.EOFException
at java.io.ObjectInputStream$BlockDataInputStream.peekByte(ObjectInputStream.java:2626)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1321)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:373)
at org.scalatest.tools.Framework$ScalaTestRunner$Skeleton$1$React.react(Framework.scala:945)
at org.scalatest.tools.Framework$ScalaTestRunner$Skeleton$1.run(Framework.scala:934)
at java.lang.Thread.run(Thread.java:745)
Exception in thread "Thread-29" java.net.SocketException: Connection reset
at java.net.SocketInputStream.read(SocketInputStream.java:209)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at java.net.SocketInputStream.read(SocketInputStream.java:223)
at java.io.ObjectInputStream$PeekInputStream.peek(ObjectInputStream.java:2321)
at java.io.ObjectInputStream$BlockDataInputStream.peek(ObjectInputStream.java:2614)
at java.io.ObjectInputStream$BlockDataInputStream.peekByte(ObjectInputStream.java:2624)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1321)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:373)
at sbt.React.react(ForkTests.scala:114)
at sbt.ForkTests$$anonfun$mainTestTask$1$Acceptor$2$.run(ForkTests.scala:74)
at java.lang.Thread.run(Thread.java:745)
The dump file the job produces is here: pastebin.com/EM3qva5C
We have tried different variations of the Java args, but all of them produce the same result, so we are wondering: is there something else wrong, and what do we need to change to prevent the builds from failing?
Your tests are running in a forked JVM, so you have to give that forked JVM more memory.
Add the following line to build.sbt:
javaOptions ++= Seq("-Xmx1G")
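A minimal sketch of what that could look like in build.sbt for sbt 0.13 (the 1G heap and 512M metaspace are only example values to tune against what the 8GB node can spare; the fork setting is shown for completeness, since your build already appears to fork its tests):
fork in Test := true
javaOptions in Test ++= Seq("-Xmx1G", "-XX:MaxMetaspaceSize=512M")
With something like this, the -Xmx2G passed to sbt-launch.jar only has to cover sbt and the compiler, while the forked test JVM gets its own heap.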
Yesterday, all of a sudden, my projects on a Windows 10 machine stopped running in parallel due to file lock timeouts.
All my projects use the Gradle wrapper and provide a run task.
When I start the first run task, it works normally, but any following run tasks fail with an error like this:
> .\gradlew run
Starting a Gradle Daemon, 1 busy and 4 stopped Daemons could not be reused, use --status for details
FAILURE: Build failed with an exception.
* What went wrong:
Gradle could not start your build.
> Could not create service of type FileAccessTimeJournal using GradleUserHomeScopeServices.createFileAccessTimeJournal().
> Timeout waiting to lock journal cache (C:\Users\injec\.gradle\caches\journal-1). It is currently in use by another Gradle instance.
Owner PID: 16440
Our PID: 12216
Owner Operation:
Our operation:
Lock file: C:\Users\injec\.gradle\caches\journal-1\journal-1.lock
The --status option shows:
> .\gradlew --status
PID STATUS INFO
12216 IDLE 6.9.1
16440 BUSY 6.9.1
14992 STOPPED (stop command received)
7856 STOPPED (other compatible daemons were started and after being idle for 0 minutes and not recently used)
26680 STOPPED (by user or operating system)
18556 STOPPED (by user or operating system)
I tried different tricks, like switching the Gradle version between 5.6.1, 6.8.3, and 6.9.1 and using the --stop option, but the error remains.
Adding --stacktrace to the run command reveals that not only the journal-1 cache is involved, but also some other dirs like modules-2.
I didn't make any changes to my system, apart from regular Win10 updates.
How can the problem be fixed?
TIA
It's likely that a Gradle process exited abnormally and left the lock file behind. Check in Task Manager whether a process with ID 16440 exists, and if not, just remove the orphaned lock file C:\Users\injec\.gradle\caches\journal-1\journal-1.lock
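For example, from PowerShell (a sketch; 16440 is the owner PID reported in the error message):
Get-Process -Id 16440   # fails with "Cannot find a process" if the owning daemon is already gone
Remove-Item C:\Users\injec\.gradle\caches\journal-1\journal-1.lock   # only delete the lock if the owner process no longer exists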
This may be a file-system permissions issue with C:\Users\injec\.gradle... though you may have overlooked one detail: you're calling .\gradlew instead of ./gradlew or gradlew.bat, which means you are not running in CMD but in PowerShell or WSL. gradlew.bat run would run directly in CMD.
Check which file system .gradle is on. Gradle does not work well on non-native file systems: https://github.com/gradle/gradle/issues/15881
File system watching supports the following file system types:
APFS
btrfs
ext3
ext4
XFS
HFS+
NTFS
Gradle also supports VirtualBox’s shared folders.
Network file systems like Samba and NFS are not supported.
Symlinks
File system watching is not compatible with symlinks. If your project files include symlinks, symlinked files do not benefit from file system watching optimizations.
Or you can disable file system watching for the build: https://docs.gradle.org/current/userguide/file_system_watching.html#disable
Gradle maintains a Virtual File System (VFS) to calculate what needs to be rebuilt on repeat builds of a project. By watching the file system, Gradle keeps the VFS current between builds.
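If you do want to turn it off, either of the standard Gradle options should work (both --no-watch-fs and org.gradle.vfs.watch are documented for the 6.x line the question uses):
.\gradlew run --no-watch-fs
or, for all builds, in gradle.properties:
org.gradle.vfs.watch=false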
I'm trying to configure the Prometheus JMX agent for JMeter but faced the following issue: when I start JMeter outside of the $JMETER_HOME/bin folder, it fails with an error:
java.lang.Throwable: Could not access null/lib
at org.apache.jmeter.NewDriver.<clinit>(NewDriver.java:105)
java.lang.Throwable: Could not access null/lib/ext
at org.apache.jmeter.NewDriver.<clinit>(NewDriver.java:105)
java.lang.Throwable: Could not access null/lib/junit
at org.apache.jmeter.NewDriver.<clinit>(NewDriver.java:105)
java.lang.ClassNotFoundException: org.apache.jmeter.JMeter
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
at org.apache.jmeter.NewDriver.main(NewDriver.java:250)
JMeter home directory was detected as: null
Launch command:
java -Dcom.sun.management.jmxremote.port=12021 -Dcom.sun.management.jmxremote.rmi.port=12021 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dgroovy.use.classvalue=true -javaagent:/apps/injector/apache-jmeter/bin/jmx_prometheus_javaagent-0.13.0.jar=8778:/apps/injector/apache-jmeter/bin/prometheus_config.yaml -jar /apps/injector/apache-jmeter/bin/ApacheJMeter.jar -n -t /apps/injector/apache-jmeter/extras/Test.jmx
The same command works fine if I run it from the $JMETER_HOME/bin folder.
It doesn't seem to be a JMeter issue itself, as I can run the same command from anywhere, and it does not cause an error if I remove the -javaagent option.
Can somebody help me configure the Prometheus JMX agent to work properly with JMeter?
Add this to JMeter's Java options:
-Djmeter.home=$JMETER_HOME
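With that option added, the launch command from the question becomes (only -Djmeter.home is new, assuming $JMETER_HOME is /apps/injector/apache-jmeter as the other paths suggest):
java -Djmeter.home=/apps/injector/apache-jmeter -Dcom.sun.management.jmxremote.port=12021 -Dcom.sun.management.jmxremote.rmi.port=12021 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dgroovy.use.classvalue=true -javaagent:/apps/injector/apache-jmeter/bin/jmx_prometheus_javaagent-0.13.0.jar=8778:/apps/injector/apache-jmeter/bin/prometheus_config.yaml -jar /apps/injector/apache-jmeter/bin/ApacheJMeter.jar -n -t /apps/injector/apache-jmeter/extras/Test.jmx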
As per your bugzilla ticket:
https://bz.apache.org/bugzilla/show_bug.cgi?id=64680
Credit to Felix S., a member of the JMeter team.
Long story short, some coworkers are running a pretty old setup (OC4J on JDK 1.5.0_06, x86_64) with an application that happens to be mission critical. They recently tried to deploy a new version of the application, but as soon as they do, the Java process(es) throw a core dump and die.
The problem is that the core dumps seem to be fine (gdb can open them), but jmap and other tools refuse to process them:
# /usr/java/jdk1.5.0_06/bin/jmap /usr/java/jdk1.5.0_06/bin/java core
Attaching to core core from executable /usr/java/jdk1.5.0_06/bin/java, please wait...
Error attaching to core file: Can't attach to the core file
And newer versions throw an exception:
# jdk1.6.0_45/bin/jmap /usr/java/jdk1.5.0_06/bin/java core
Attaching to core core from executable /usr/java/jdk1.5.0_06/bin/java, please wait...
Exception in thread "main" java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at sun.tools.jmap.JMap.runTool(JMap.java:179)
at sun.tools.jmap.JMap.main(JMap.java:110)
Caused by: sun.jvm.hotspot.runtime.VMVersionMismatchException: Supported versions are 20.45-b01. Target VM is 1.5.0_06-b05
at sun.jvm.hotspot.runtime.VM.checkVMVersion(VM.java:224)
at sun.jvm.hotspot.runtime.VM.<init>(VM.java:287)
at sun.jvm.hotspot.runtime.VM.initialize(VM.java:357)
at sun.jvm.hotspot.bugspot.BugSpotAgent.setupVM(BugSpotAgent.java:594)
at sun.jvm.hotspot.bugspot.BugSpotAgent.go(BugSpotAgent.java:494)
at sun.jvm.hotspot.bugspot.BugSpotAgent.attach(BugSpotAgent.java:348)
at sun.jvm.hotspot.tools.Tool.start(Tool.java:169)
at sun.jvm.hotspot.tools.PMap.main(PMap.java:67)
... 6 more
gdb offers little information without symbols:
Reading symbols from /usr/java/jdk1.5.0_06/bin/java...(no debugging symbols found)...done.
[New Thread 9841]
[New Thread 31442]
[New Thread 31441]
...
Core was generated by `/usr/java/jdk1.5.0_06/bin/java -server -XX:+UseConcMarkSweepGC -XX:MaxHeapFreeR'.
Program terminated with signal 6, Aborted.
#0 0x0000003bbf030285 in ?? ()
(gdb) bt
#0 0x0000003bbf030285 in ?? ()
#1 0x0000003bbf031d30 in ?? ()
#2 0x0000000000000000 in ?? ()
The only valuable information I've gathered from the core is that most threads are blocked (I'm far from being a gdb guru):
35 Thread 10093 0x0000003bbfc0b1c0 in pthread_cond_timedwait@@GLIBC_2.3.2 ()
   from /lib64/libpthread.so.0
34 Thread 10097 0x0000003bbfc0b1c0 in pthread_cond_timedwait@@GLIBC_2.3.2 ()
   from /lib64/libpthread.so.0
33 Thread 10099 0x0000003bbfc0b1c0 in pthread_cond_timedwait@@GLIBC_2.3.2 ()
   from /lib64/libpthread.so.0
Besides, I don't know if it's really relevant. The app is almost always heavily loaded, and my bet is that there was already some lock contention, but since it's another team's app my knowledge about it is pretty shallow.
I guess this is a long shot, but is there something we can do to get a Java thread dump or something like that? Did Sun use to offer debuginfo for the JDK, as I guess is available now with OpenJDK?
Thanks in advance.
UPDATE: The other team has resolved the issue without getting info from the core dump, just by trial and error after successfully replicating the problem on a test system. I'm still intrigued, though: how do you debug an ancient Java core dump that jmap can't process? It might be valuable info for the future, although it seems there is no solution to that problem. Probably the JVM memory got corrupted, and that's why jmap can't process it.
You can add the following JVM option when starting your application; it will run any command you specify if a fatal JVM error occurs:
-XX:OnError="<cmd args>"
For instance, you could run a command (or a script) that will perform certain actions like get a heap or thread dump.
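For example (a sketch; HotSpot substitutes %p with the PID of the failing process, and the command only runs at the moment of the fatal error, so it cannot help with a core file that already exists):
-XX:OnError="jstack -F %p > /tmp/threads_%p.txt; jmap -heap %p > /tmp/heap_%p.txt"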
jmap and other JVM utilities are extremely version-sensitive. From your error, it is clear that the jmap you are running does not match the JVM that produced the core dump.
Java VisualVM can load core dumps directly, but you must use the same JVM that created the core file.
Resource Link:
https://stackoverflow.com/a/9981498/2293534
Suggestion#1:
kjkoster has given a solution in this tutorial.
You need to use the jmap that comes with the JVM. From your error
message I gather that you are using a different version of jmap than
of the JVM.
Please check what JVMs are installed on your machine and ensure that
when you run jmap, you use the right version.
To solve such issues I never rely on path. Instead I set JAVA_HOME to
be the one that the JVM uses and then invoke both the JVM and jmap
like so:
Code:
$ JAVA_HOME=/usr/local/jdk1.6.0
$ export JAVA_HOME
$ ${JAVA_HOME}/bin/java ...
...
$ ${JAVA_HOME}/bin/jmap ...
Hope this helps.
Suggestion#2:
A full step-by-step solution is given by Chamilad. Hopefully it will clarify the root cause and the solution procedure.
Almost every Java developer knows about jmap and jstack tools that come with the JDK. These provide functionality to extract heap and thread information of a running JVM instance. Easy.
What if there’s a running JVM that has produced a deadlock and you want to take a thread dump while the process is running? You go in and run the following.
jstack pid >> thread_dump.txt
Turns out the system doesn’t know what jstack is. You don’t panic, but you get a tiny sensation at the back of your head saying you’re not leaving early this Friday.
What has happened is the running JVM is based on a JRE and not a JDK. The JRE is a minimal runtime that doesn’t pack the monitoring and analysis tools that the JDK packs.
So what are our options here?
Stop the process. Download the JDK, start the process again on top of the JDK, and hope the deadlock happens again. Nope.
Start JVisualVM on your laptop and hope the process has JMX enabled. Nope.
tools.jar TO THE RESCUE!
Functionality such as jstack is implemented in the tools.jar file, which is packed inside the <JDK_HOME>/lib folder. We can use this to invoke the JStack class and get a thread dump of the running process.
So we march on to download and extract the JDK, and then to run the following.
java -classpath <JDK_HOME>/lib/tools.jar sun.tools.jstack.JStack <pid> >> thread_dump.txt
...and come across the following error.
Exception in thread "main" java.lang.UnsatisfiedLinkError: no attach in java.library.path
at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1867)
at java.lang.Runtime.loadLibrary0(Runtime.java:870)
at java.lang.System.loadLibrary(System.java:1122)
at sun.tools.attach.LinuxVirtualMachine.<clinit>(LinuxVirtualMachine.java:342)
at sun.tools.attach.LinuxAttachProvider.attachVirtualMachine(LinuxAttachProvider.java:63)
at com.sun.tools.attach.VirtualMachine.attach(VirtualMachine.java:208)
at sun.tools.jstack.JStack.runThreadDump(JStack.java:163)
at sun.tools.jstack.JStack.main(JStack.java:116)
Darn it! Spoiled again!
How do we solve this? The above error occurs when the process can't find the libattach.so file, which is used by JStack's dynamic attach mechanism. Setting the following environment variable will help the JVM find the libattach.so file.
export LD_LIBRARY_PATH=<JDK_HOME>/jre/lib/amd64/
Now let’s run JStack again, this time with results!
java -classpath <JDK_HOME>/lib/tools.jar sun.tools.jstack.JStack <pid> >> thread_dump.txt
Now that we have the thread dump, we move on to the heap dump. The tool we normally use is jmap but that too is not available on the JRE. So what? We can use the binary in the JDK’s bin directory right? right?
[root@snowflake1 latest]# <JDK_HOME>/bin/jmap -heap <pid>
Attaching to process ID <pid>, please wait…
Error attaching to process: sun.jvm.hotspot.runtime.VMVersionMismatchException: Supported versions are 25.102-b14. Target VM is 25.91-b14
sun.jvm.hotspot.debugger.DebuggerException: sun.jvm.hotspot.runtime.VMVersionMismatchException: Supported versions are 25.102-b14. Target VM is 25.91-b14
at sun.jvm.hotspot.HotSpotAgent.setupVM(HotSpotAgent.java:435)
at sun.jvm.hotspot.HotSpotAgent.go(HotSpotAgent.java:305)
at sun.jvm.hotspot.HotSpotAgent.attach(HotSpotAgent.java:140)
at sun.jvm.hotspot.tools.Tool.start(Tool.java:185)
at sun.jvm.hotspot.tools.Tool.execute(Tool.java:118)
at sun.jvm.hotspot.tools.HeapSummary.main(HeapSummary.java:49)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at sun.tools.jmap.JMap.runTool(JMap.java:201)
at sun.tools.jmap.JMap.main(JMap.java:130)
Caused by: sun.jvm.hotspot.runtime.VMVersionMismatchException: Supported versions are 25.102-b14. Target VM is 25.91-b14
at sun.jvm.hotspot.runtime.VM.checkVMVersion(VM.java:227)
at sun.jvm.hotspot.runtime.VM.<init>(VM.java:294)
at sun.jvm.hotspot.runtime.VM.initialize(VM.java:370)
at sun.jvm.hotspot.HotSpotAgent.setupVM(HotSpotAgent.java:431)
… 11 more
Nope! Unless you match the JDK version to the exact version of the JRE, you get the above issue (which is pretty self-explanatory). So we download the JDK matching the JRE our process is running on top of and run jmap again.
<JDK_HOME>/bin/jmap -dump:file=heap_dump.hprof <pid>
Resource Link:
Extracting memory and thread dumps from a running JRE based JVM
jmap is not going to help you debug a core dump. The JVM dumps core when either it has a bug or you have JNI code with a problem. Organizations with mission-critical applications should, sadly, see upgrading from unsupported versions of the JVM as mission-critical, or be prepared to pay Oracle a fortune for help.
I get an error when I try to sync Gradle in IDEA. I have tried many things, like adding the Gradle folder to the Path variable and setting GRADLE_HOME in the system variables, but nothing seems to work.
Here is the error:
Error:Unable to start the daemon process.
This problem might be caused by incorrect configuration of the daemon.
For example, an unrecognized jvm option is used.
Please refer to the user guide chapter on the daemon at http://gradle.org/docs/2.3/userguide/gradle_daemon.html
Please read the following process output to find out more:
13:20:19.766 [main] DEBUG o.g.l.daemon.bootstrap.DaemonMain - Assuming the daemon was started with following jvm opts: [-XX:MaxPermSize=256m, -XX:+HeapDumpOnOutOfMemoryError, -Xmx1024m, -Dfile.encoding=windows-1252, -Duser.country=US, -Duser.language=en, -Duser.variant]
13:20:20.093 [main] DEBUG o.g.l.daemon.server.DaemonServices - Creating daemon context with opts: [-XX:MaxPermSize=256m, -XX:+HeapDumpOnOutOfMemoryError, -Xmx1024m, -Dfile.encoding=windows-1252, -Duser.country=US, -Duser.language=en, -Duser.variant]
Java HotSpot(TM) Client VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
It may occur because of an inconsistent Gradle daemon cache. You could kill all running Gradle daemons manually and then run:
gradle clean build --refresh-dependencies
P.S.: I'm not entirely sure, but deleting .gradle and reimporting the project may fix it for IDEA.
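For example (a sketch using the wrapper; --stop asks all compatible daemons to shut down, and any stragglers can still be killed from Task Manager):
gradlew --stop
gradlew clean build --refresh-dependencies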
We had some issues with thread pile-ups on a production Tomcat server, so I wanted to set up a cron job to periodically check thread dumps and send an alert email if something is wrong. To do this we need to write thread dumps to a file from a shell script, but I am unable to do that. From the shell I can issue kill -3 <PID> at periodic intervals, but the problem is that the dump goes to catalina.out, which contains GBs of data, so pulling out only the thread dump is a painful process. Some discussion threads suggested using "jstack" and redirecting the output to a file, but that is also not working and gives this error:
-bash-3.2# java -version
java version "1.6.0_26"
Java(TM) SE Runtime Environment (build 1.6.0_26-b03)
Java HotSpot(TM) 64-Bit Server VM (build 20.1-b02, mixed mode)
-bash-3.2# uname -a
Linux ip-10-130-225-20 2.6.16.33-xenU #2 SMP Wed Aug 15 17:27:36 SAST 2007 x86_64 x86_64 x86_64 GNU/Linux
-bash-3.2# sudo /usr/java/jdk1.6.0_24/bin/jstack -F 15668
Attaching to process ID 15668, please wait...
Debugger attached successfully.
Server compiler detected.
JVM version is 19.1-b02
Deadlock Detection:
No deadlocks found.
Thread 8183: (state = BLOCKED)
Error occurred during stack walking:
sun.jvm.hotspot.debugger.DebuggerException: sun.jvm.hotspot.debugger.DebuggerException: get_thread_regs failed for a lwp
at sun.jvm.hotspot.debugger.linux.LinuxDebuggerLocal$LinuxDebuggerLocalWorkerThread.execute(LinuxDebuggerLocal.java:152)
at sun.jvm.hotspot.debugger.linux.LinuxDebuggerLocal.getThreadIntegerRegisterSet(LinuxDebuggerLocal.java:466)
at sun.jvm.hotspot.debugger.linux.LinuxThread.getContext(LinuxThread.java:65)
at sun.jvm.hotspot.runtime.linux_amd64.LinuxAMD64JavaThreadPDAccess.getCurrentFrameGuess(LinuxAMD64JavaThreadPDAccess.java:92)
at sun.jvm.hotspot.runtime.JavaThread.getCurrentFrameGuess(JavaThread.java:256)
at sun.jvm.hotspot.runtime.JavaThread.getLastJavaVFrameDbg(JavaThread.java:218)
at sun.jvm.hotspot.tools.StackTrace.run(StackTrace.java:76)
at sun.jvm.hotspot.tools.StackTrace.run(StackTrace.java:45)
at sun.jvm.hotspot.tools.JStack.run(JStack.java:60)
at sun.jvm.hotspot.tools.Tool.start(Tool.java:221)
at sun.jvm.hotspot.tools.JStack.main(JStack.java:86)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at sun.tools.jstack.JStack.runJStackTool(JStack.java:118)
at sun.tools.jstack.JStack.main(JStack.java:84)
Caused by: sun.jvm.hotspot.debugger.DebuggerException: get_thread_regs failed for a lwp
at sun.jvm.hotspot.debugger.linux.LinuxDebuggerLocal.getThreadIntegerRegisterSet0(Native Method)
This bug seems to be open with the Java team.
Are there any other suggestions for intelligently taking and analyzing thread dumps? Or a script to parse the huge catalina.out and extract just the thread dumps?
Check out this article about scheduling thread dumps. I think it is exactly what you are trying to do.
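A rough sketch of that kind of scheduled dump (the PID lookup, output directory, and JDK path are assumptions to adapt to your box):
#!/bin/sh
# dump_threads.sh - write one jstack thread dump per run into its own timestamped file
PID=$(pgrep -f org.apache.catalina.startup.Bootstrap | head -n 1)
OUT_DIR=/var/log/tomcat/thread_dumps
mkdir -p "$OUT_DIR"
/usr/java/jdk1.6.0_24/bin/jstack "$PID" > "$OUT_DIR/dump_$(date +%Y%m%d_%H%M%S).txt" 2>&1
# crontab entry, e.g. every 5 minutes:
# */5 * * * * /usr/local/bin/dump_threads.sh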
Another option could be JMX and ThreadMXBean. This was discussed in the SO question How do I create a thread dump via JMX?
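A minimal in-process sketch of the ThreadMXBean approach (the class name and output file are just examples; the same MBean can also be read over a remote JMX connection):
import java.io.PrintWriter;
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadDumper {
    public static void main(String[] args) throws Exception {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        // true, true also requests locked-monitor and locked-synchronizer details where supported
        ThreadInfo[] infos = threads.dumpAllThreads(true, true);
        PrintWriter out = new PrintWriter("thread_dump.txt");
        try {
            for (ThreadInfo info : infos) {
                out.print(info); // ThreadInfo.toString() includes state, lock owner and a truncated stack trace
            }
        } finally {
            out.close();
        }
    }
}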
One quick way I figured out was (mis)using javamelody. We use it to monitor various aspects of the application, and it also provides visual and textual ways to look at current threads, so I wrote a small shell script to hit "http://IP:PORT/SERVICE/monitoring?part=threadsDump" every minute and dump the response to a file. If the response contains any blocked thread, the script takes thread dumps every 5 seconds. This helps to some extent, but when the problem worsens and the server stalls, javamelody also stops responding.
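A rough sketch of that script (the endpoint is the one mentioned above; treating the word "BLOCKED" in the response as the trigger is an assumption about how the threads page renders):
#!/bin/sh
# poll_javamelody.sh - fetch the javamelody thread dump page; if any thread looks blocked, grab follow-up dumps
URL="http://IP:PORT/SERVICE/monitoring?part=threadsDump"
OUT_DIR=/var/log/tomcat/javamelody
mkdir -p "$OUT_DIR"
OUT="$OUT_DIR/threads_$(date +%Y%m%d_%H%M%S).txt"
curl -s "$URL" -o "$OUT"
if grep -q "BLOCKED" "$OUT"; then
    for i in 1 2 3 4 5 6; do
        sleep 5
        curl -s "$URL" -o "${OUT%.txt}_followup_$i.txt"
    done
fi
# run from cron every minute:
# * * * * * /usr/local/bin/poll_javamelody.sh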