Segmentation fault while running Java program on Linux - java

I am running a Java program on a RHEL 6.4 server. The program terminates abnormally with the message "Segmentation fault (core dumped)", but I do not find any file indicating the reason for the termination in the current user directory from which the program was run.
How can I debug to find the error in such a case?
DOUBT
As per my understanding, a segmentation fault occurs when a program tries to access a memory address outside its allowed range. I would expect to see such faults in C or C++ programs, but since Java has no pointers, how is a segmentation fault possible?

How is a segmentation fault possible?
There are several possible reasons for this. There could be a bug in the JVM itself, or in a library your program uses (some of these are partly written in C or C++). It could also be due to a misconfiguration where incompatible components are used together.
From experience, a JVM bug is the least likely of these (although I've seen some).
If you capture the stack trace at the point of the crash, this might give you some clues as to where exactly the crash is occurring.
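To make it concrete how a "pure Java" program can still segfault, here is a minimal sketch of my own (not from the question) that uses the internal, unsupported sun.misc.Unsafe class to dereference an invalid address; on most HotSpot JVMs this kills the process with a segmentation fault and an hs_err_pid*.log crash report instead of throwing a Java exception:

    import java.lang.reflect.Field;
    import sun.misc.Unsafe;

    public class CrashDemo {
        public static void main(String[] args) throws Exception {
            // sun.misc.Unsafe is internal API; obtain the singleton via reflection.
            Field f = Unsafe.class.getDeclaredField("theUnsafe");
            f.setAccessible(true);
            Unsafe unsafe = (Unsafe) f.get(null);
            // Reading from address 0 bypasses the JVM's safety checks and
            // typically ends the process with SIGSEGV ("Segmentation fault").
            unsafe.getLong(0L);
        }
    }

The same kind of invalid access can happen inside any native library the JVM or your application loads, which is why the crash report, not a Java stack trace, is the place to look.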

Related

jdbc mssql connection causes EXCEPTION_ACCESS_VIOLATION [duplicate]

When a Java VM crashes with an EXCEPTION_ACCESS_VIOLATION and produces an hs_err_pidXXX.log file, what does that indicate? The error itself is basically a null pointer exception. Is it always caused by a bug in the JVM, or are there other causes like malfunctioning hardware or software conflicts?
Edit: there is a native component, this is an SWT application on win32.
Most of the time this is a bug in the VM.
But it can be caused by any native code (e.g. JNI calls).
The hs_err_pidXXX.log file should contain some information about where the problem happened.
You can also check the "Heap" section inside the file. Many VM bugs are caused by garbage collection (especially in older VMs). This section shows whether the garbage collector was running at the time of the crash, and also how full the various heap regions are (the percentage numbers).
The VM is also much more likely to crash in a low memory situation than otherwise.
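As an illustration of the JNI path mentioned above, here is the Java side of a hypothetical native binding (the library name "foo" and the method are placeholders, not from the question); any pointer bug in the C/C++ implementation of such a method takes down the whole JVM rather than throwing a Java exception:

    public class NativeBinding {
        static {
            // Loads libfoo.so / foo.dll; "foo" is a placeholder library name.
            System.loadLibrary("foo");
        }
        // Declared in Java, implemented in C/C++. A bug on the native side shows up
        // as EXCEPTION_ACCESS_VIOLATION on Windows or SIGSEGV on Linux.
        public static native int process(long handle);
    }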
Answer found!
I had the same error and noticed that others who provided the contents of the pid log file were running 64-bit Windows, just like me. At the end of the log file, it included the PATH statement. There I could see that C:\Windows\SysWOW64 was incorrectly listed ahead of %SystemRoot%\system32. Once I corrected it, the exception disappeared.
First thing you should do is upgrade your JVM to the latest you can.
Can you repeat the issue? Or does it seem to happen randomly? We recently had a problem where our JVM was crashing all over the place, at random times. Turns out it was a hardware problem. We put the drives in a new server and it completely went away.
Bottom line, the JVM should never crash; as the poster above mentioned, if you're not doing any JNI then my gut feeling is that you have a hardware problem.
The cause of the problem will be documented in the hs_err* file, if you know what to look for. Take a look, and if it still isn't clear, consider posting the first 5 or 10 lines of the stack trace and other pertinent info (don't post the whole thing, there's tons of info in there that won't help - but you have to figure out which 1% is important :-) )
Are you using a Browser widget and executing JavaScript in the Browser widget? If so, there are bugs in some versions of SWT that cause the JVM to crash in native code, in various Windows libraries.
Two examples (that I opened) are bug 217306 and bug 127960. These are not the only reports of the JVM crashing in SWT, however.
If you aren't using the Browser widget then these suggestions won't help you. In that case, you can search for a list of SWT bugs causing a JVM crash. If none of those are your issue, then I highly recommend that you open a bug report with SWT.
I have the same problem with a JNLP application that I have been using for a long time and that is pretty reliable. The problem started immediately after I upgraded from Windows 7 to Windows 10. According to my investigation, it is most likely a bug in Windows 10.
The following is not a solution, but an ugly workaround. In the jre/bin directory there is javaws.exe. Right-clicking it, opening Properties > Compatibility and ticking "Run this program as an administrator" made the JNLP app work again.
Please be aware that this approach could cause security issues; use it only if you have no other option and know exactly what you are doing.

How to debug a crash with Java Result: error_code

I have a Java application that uses a C++ DLL via JNA. The C++ DLL is proprietary, so I cannot share the code unless I can make a simplified reproducible example, and it is not straightforward to make one until I debug further.
The application crashes sporadically with the error message Java Result: -1073740940. I am running the Java application from Netbeans, although it also crashes without Netbeans. Since there is no hs_err_.log, I guess the crash is in the C++ layer. How can I begin debugging this crash?
The "Java Result" output from Netbeans simply tells you the exit code of the java program. You could generate the same with a System.exit(-1073740940);. A successful program exits with a code of 0. Anything else is a failure that requires documentation to interpret.
You have not given us any indication what DLL you are using, so the only information we have to work with is this exit code. Converting that int to hex digits results in 0xc0000374 which you can enter into your favorite search engine and find out is a Heap Corruption Exception. Some examples are provided but in general this means you are accessing non-allocated native memory.
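For reference, that conversion can be checked directly in Java (a minimal sketch, not part of the original answer):

    public class ExitCodeToHex {
        public static void main(String[] args) {
            int exitCode = -1073740940; // the "Java Result" reported by Netbeans
            // Interpret the signed exit code as an unsigned 32-bit NTSTATUS value.
            System.out.println("0x" + Integer.toHexString(exitCode)); // prints 0xc0000374
        }
    }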
Without having any idea what code you're using, I would guess you're doing something wrong with native memory, invoking native functions, or incorrectly manipulating pointers or handles somewhere in your application.
You should start by looking closely at arguments to native functions. Type mapping could be a problem if the number of bytes is mismatched. Investigate any Pointer-based arguments to native functions, including ByReference arguments. Trace back in the code and find when/how these Pointers were associated with native-allocated memory. If it was never allocated, that's one possibility for the problem. If it was allocated, see if you can find a point where that memory was freed, possibly by a different native function.
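As a sketch of what "trace back how the Pointer got its memory" can look like in JNA, the example below maps a hypothetical DLL (the names mylib and fill_buffer are placeholders, since the real proprietary DLL is not shown) and allocates the output buffer explicitly on the Java side before handing it to native code:

    import com.sun.jna.Library;
    import com.sun.jna.Memory;
    import com.sun.jna.Native;
    import com.sun.jna.Pointer;

    public class JnaBufferSketch {
        public interface MyLib extends Library {
            // Assumed native signature: int fill_buffer(char *out, int len);
            MyLib INSTANCE = Native.load("mylib", MyLib.class); // Native.loadLibrary on JNA 4.x
            int fill_buffer(Pointer out, int len);
        }

        public static void main(String[] args) {
            int len = 256;
            // Allocate native memory we actually own; an unallocated or undersized
            // Pointer passed to the DLL is a classic cause of heap corruption
            // (STATUS_HEAP_CORRUPTION, 0xc0000374).
            Memory buffer = new Memory(len);
            int rc = MyLib.INSTANCE.fill_buffer(buffer, len);
            System.out.println("rc=" + rc + ", data=" + buffer.getString(0));
        }
    }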
The root cause of the crash was heap corruption in the C++ layer. When a random crash occurs due to heap corruption, it can be complicated to pinpoint the cause, because the crash may happen later, when the program tries to manipulate the corrupted memory. Hence it is also complicated to provide an SSCCE, especially when working with proprietary legacy code.
How I debugged this crash:
Reproduction: Try to find a consistent use case for the crash. If the crash is random then try to figure out a set of user actions that always leads to the crash.
Assumption: Guess which feature/component contains the crash.
Validation: Make sure that the crash does not happen when you disable this feature/component.
Verification: Skim through and slice the code; review each small piece.
Documentation: Write everything down.
Daniel's answer was very helpful in fixing this crash!

What does typically happen in C due to stack overflow?

In Java there will be a stacktrace that says StackOverflowError and the whole system won't crash, only the program.
In C I'm aware that an array index out of bounds will produce a segmentation fault. Is it the same for a stack overflow in C, i.e. will there also be a segmentation fault, the same error type for a similar problem?
I'm not testing a deliberate infinite recursion in C to see what happens, because I don't know the consequences.
Or is it sometimes something much worse: could a stack overflow in C cause an operating system failure and force you to power cycle to recover? Or even worse, cause irreversible hardware damage? How bad can the effects of a stack overflow mistake be?
It seems clear that the protection is better in Java than in C. Is it any better in C than in assembly / machine code, or is it practically the same (lack of) protection in C as in assembly?
In C I'm aware that an array index out of bounds will produce a segmentation fault. Is it the same for a stack overflow in C, i.e. will there also be a segmentation fault, the same error type for a similar problem?
There's no guarantee in C that there will be a segmentation fault. The C standard says it's undefined behaviour and leaves it at that. How that might manifest, if at all, is up to the implementation/platform.
Or is it sometimes something much worse: could a stack overflow in C cause an operating system failure and force you to power cycle to recover? Or even worse, cause irreversible hardware damage? How bad can the effects of a stack overflow mistake be?
It's pretty rare on modern operating systems that anything untoward would happen to the system; typically, only the program would crash. Modern operating systems use various memory protection techniques.
It seems clear that the protection is better in Java than in C. Is it any better in C than in assembly / machine code, or is it practically the same (lack of) protection in C as in assembly?
That's because in Java, memory is "managed"; in C, it's left to the programmer, by design. A C compiler generates machine code in the end, so at runtime it can't be any better or worse than assembly. Obviously, a good compiler can detect some of these problems at compile time and warn you, which is an advantage of C over assembly.
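To make the Java side of the comparison concrete, here is a minimal sketch of deliberate unbounded recursion; the overflow surfaces as a catchable StackOverflowError and only the offending thread is affected, never the OS:

    public class StackOverflowDemo {
        private static long depth = 0;

        private static void recurse() {
            depth++;
            recurse(); // no base case: eventually exhausts this thread's stack
        }

        public static void main(String[] args) {
            try {
                recurse();
            } catch (StackOverflowError e) {
                // The JVM detects the overflow and raises an Error instead of
                // letting the process segfault.
                System.out.println("Caught StackOverflowError after " + depth + " calls");
            }
        }
    }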
Handling of memory failures, like any other system resource failure, is ultimately the job of the OS, not the language itself.
Apart from specific preventive measures such as stack checking, this kind of problem normally triggers an OS exception that the language runtime can handle.
Stack checking, if enabled (usually via a compiler switch), instructs the compiler to insert probe code before each stack-consuming operation to verify that enough stack memory is available.
By default, when execution tries to access memory outside the bounds of the allocated stack space, whether through overuse or corruption, the OS raises a structured exception. Java, like many C runtimes, normally handles such exceptions and also provides a way to pass them to user code for possible recovery (e.g. via signals or SEH). If the user code has not registered a handler, control passes to the runtime, which by default performs a controlled (graceful) shutdown of the task.
If no handling is available at all, not even from the runtime, the OS terminates the task and abruptly releases its resources (e.g. truncating files, closing ports, etc.).
In any case the OS will protect the rest of the system, unless the OS itself is flawed.
In C it is common to register a handler to protect the code fragment that can fail. How you handle the exception depends on your OS (e.g. on Windows you can wrap the code that can fail in a __try/__except structured exception handler).
Each executing thread has its own stack, allocated when the thread is created at runtime. If a stack overflow is detected while a natively compiled program is running, only your program (process) will be affected, not the OS.
This is not really a C problem; at least, what happens is not specified by C. The standard only says that it is undefined behavior, so the effect is a matter of the runtime and the OS. On any reasonable OS this will produce some kind of error that will be caught; on *nix systems it produces a segmentation fault in your process. Even exotic small OSes will protect themselves from your faulty process. In any case, this will never crash the OS.
Java is not better than C; they are different languages and have different runtimes. Java is, by design, more secure in the sense that it will protect you against many memory problems (among others). C gives you finer control over the machine, and yes, it is more or less a kind of assembly language.

Getting error "Failed to write core dump" while writing a multithreaded program [duplicate]

When a Java VM crashes with an EXCEPTION_ACCESS_VIOLATION and produces an hs_err_pidXXX.log file, what does that indicate? The error itself is basically a null pointer exception. Is it always caused by a bug in the JVM, or are there other causes like malfunctioning hardware or software conflicts?
Edit: there is a native component, this is an SWT application on win32.
Most of the times this is a bug in the VM.
But it can be caused by any native code (e.g. JNI calls).
The hs_err_pidXXX.log file should contain some information about where the problem happened.
You can also check the "Heap" section inside the file. Many of the VM bugs are caused by the garbage collection (expecially in older VMs). This section should show you if the garbage was running at the time of the crash. Also this section shows, if some sections of the heap are filled (the percentage numbers).
The VM is also much more likely to crash in a low memory situation than otherwise.
Answer found!
I had the same error and noticed that others who provided the contents of the pid log file were running 64 bit Windows. Just like me. At the end log file, it included the PATH statement. There I could see C:\Windows\SysWOW64 was incorrectly listed ahead of: %SystemRoot%\system32. Once I corrected it, the exception disappeared.
First thing you should do is upgrade your JVM to the latest you can.
Can you repeat the issue? Or does it seem to happen randomly? We recently had a problem where our JVM was crashing all over the place, at random times. Turns out it was a hardware problem. We put the drives in a new server and it completely went away.
Bottom line, the JVM should never crash, as the poster above mentioned if your not doing any JNI then my gut is that you have a hardware problem.
The cause of the problem will be documented in the hs_err* file, if you know what to look for. Take a look, and if it still isn't clear, consider posting the first 5 or 10 lines of the stack trace and other pertinent info (don't post the whole thing, there's tons of info in there that won't help - but you have to figure out which 1% is important :-) )
Are you using a Browser widget and executing javascript in the Browser widget? If so, then there are bugs in some versions of SWT that causes the JVM to crash in native code, in various Windows libraries.
Two examples (that I opened) are bug 217306 and bug 127960. These two bug reports are not the only bug reports of the JVM crashing in SWT, however.
If you aren't using the Browser widget then these suggestions won't help you. In that case, you can search for a list of SWT bugs causing a JVM crash. If none of those are your issue, then I highly recommend that you open a bug report with SWT.
I have the same problem with a JNLP application that I have been using for a long time and is pretty reliable. The problem started immediately after I upgraded from Windows 7 to Windows 10. According to my investigation, it is most likely a bug in Win 10.
The following is not a solution, but an ugly workaround. In jre/bin directory, there is javaws.exe. If I right-clicked /Properties/Compatibility and ticked Run this program as an administrator, the JNLP app started to work.
Please, be aware that this approach could cause security issues and use it only if you have no other option and 100% know what you are doing.

Invalid memory access of location in Java

I've been working on a Java project for a year. My code had been working fine for months. A few days ago I upgraded the Java SDK to the newest version, 1.6.0_26, on my Mac (Snow Leopard 10.6.8). After the upgrade, something very weird happens. When I run some of the classes, I get this error:
Invalid memory access of location 0x202 rip=0x202
But if I run them with -Xint (interpreted mode) they work: slowly, but fine. I get the problem in classes where I use bitwise operators (bitboards for the game Othello). I can't put any code here because I don't get an error, exception or anything similar; I just get that annoying message.
Is it normal that the code doesn't run without -Xint but works with it? What should I do?
Thanks in advance
When a JVM starts crashing like that, it is a sign that something has broken the JVM's execution model.
Does your application include any native code? Does it use any 3rd-party libraries with native code components? If neither is true, then the chances are that this is a bug in the Apple port of the JVM. It could be a JIT compiler bug, or a bug in some JVM native code library.
What can you do about a bug like that?
Not a lot.
Reduce your application by progressively chopping out bits until you have a small testcase that exhibits the problem.
Based on the testcase, see if there's some empirical way to avoid the problem.
Submit a bug report to Apple with the testcase.
I just came across this situation and it turned out to be related to a piece of code that was serializing a JSON object with a cyclic reference to itself. I removed the cycle and the error went away. I suspect this is related to a memory overflow error that is now handled differently by newer JVMs on Mac OS X. In this case, I was running Mac OS X 10.7.
For completeness the errors I was receiving were:
Invalid access of stack red zone 0x10e586d30 rip=0x10daabba6
Bus error: 10
And:
Invalid memory access of location 0x10b655890 rip=0x10a8baba6
Segmentation fault: 11
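A minimal sketch of the kind of self-referential object graph described above, assuming a reflective serializer such as Gson (the original poster's JSON library is not named); the serializer recurses through the cycle until the stack is exhausted:

    import com.google.gson.Gson;

    public class CyclicJsonSketch {
        static class Node {
            String name;
            Node next;
        }

        public static void main(String[] args) {
            Node n = new Node();
            n.name = "self";
            n.next = n; // cyclic reference back to the same object
            // A reflective serializer follows next -> next -> ... forever; on most
            // JVMs this ends in StackOverflowError, and the answer above reports it
            // surfacing as a hard crash on the Apple JVM instead.
            new Gson().toJson(n);
        }
    }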
Also verify that you are building the GUI on the event dispatch thread and never updating a GUI component from any other thread.
Errors of that kind are notoriously hard to reproduce, but the change in behaviour associated with altered timing is suggestive.
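For reference, a minimal Swing sketch of that rule (written against Java 6 to match the question, hence anonymous Runnable instances rather than lambdas):

    import javax.swing.JFrame;
    import javax.swing.JLabel;
    import javax.swing.SwingUtilities;

    public class EdtSketch {
        public static void main(String[] args) {
            // Build the GUI on the Event Dispatch Thread...
            SwingUtilities.invokeLater(new Runnable() {
                public void run() {
                    JFrame frame = new JFrame("EDT demo");
                    frame.add(new JLabel("created on the EDT"));
                    frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                    frame.pack();
                    frame.setVisible(true);
                }
            });
            // ...and hand any work done on a background thread back to the EDT
            // instead of touching components from that thread directly.
            new Thread(new Runnable() {
                public void run() {
                    SwingUtilities.invokeLater(new Runnable() {
                        public void run() {
                            System.out.println("safe to touch the GUI here");
                        }
                    });
                }
            }).start();
        }
    }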
Please check whether /etc/hosts is empty, and verify that it includes these entries:
127.0.0.1 localhost
255.255.255.255 broadcasthost
::1 localhost
fe80::1%lo0 localhost
