In Java there will be a stack trace that says StackOverflowError and the whole system won't crash, only the program.
In C I'm aware that an array index out of bounds will produce a segmentation fault. Is it the same for a stack overflow in C, i.e. will there also be a segmentation fault, the same error type for a similar problem?
I'm not deliberately testing an infinite recursion in C to see what happens, because I don't know the consequences.
Or is it sometimes something much worse: could a stack overflow in C cause an operating system failure and force you to power cycle to recover? Or, even worse, cause irreversible hardware damage? How bad can the effects of a stack overflow mistake be?
It seems clear that the protection is better in Java than in C. Is it any better in C than in assembly / machine code, or is it practically the same (lack of) protection in C as in assembly?
In C I'm aware that an array index out of bounds will produce a segmentation fault. Is it the same for a stack overflow in C, i.e. will there also be a segmentation fault, the same error type for a similar problem?
There's no guarantee in C that there will be a segmentation fault. The C standard says it's undefined behaviour and leaves it at that. How that might manifest, if at all, is up to the implementation/platform.
Or is it sometimes something much worse: could a stack overflow in C cause an operating system failure and force you to power cycle to recover? Or, even worse, cause irreversible hardware damage? How bad can the effects of a stack overflow mistake be?
It's pretty rare on modern operating systems that anything untoward would happen to the system; typically, only the program crashes. Modern operating systems use various memory protection techniques.
It seems clear that the protection is better in Java than in C. Is it any better in C than in assembly / machine code, or is it practically the same (lack of) protection in C as in assembly?
That's because in Java, memory is "managed". In C, it's left to the programmer; that's by design. A C compiler generates machine code in the end, so it can't be any better or worse in that respect. Obviously, a good compiler can detect some of these problems and warn you, which is an advantage C has over assembly.
Well, the handling of a memory failure, like any system resource failure, is basically handled by the OS, not the language itself.
Apart from specific prevention measures, such as stack checking, this kind of problem normally triggers an OS exception that can be handled by the language runtime.
Stack checking, if enabled (normally by specifying some switches on the compiler command line), instructs the compiler to insert probe code for each stack-consuming operation to verify memory availability.
By default, when for any reason (overuse of the stack or corruption) the execution tries to access memory outside the bounds of the allocated stack space, the OS triggers a structured exception. Java, like many C runtimes, normally handles those exceptions and also supplies some way to pass them to user code for eventual recovery (e.g. through signals or SEH). If no handler has been registered by the user code, control passes to the runtime, which by default will perform a controlled task shutdown (graceful shutdown).
If no handling is available, not even from the runtime, the OS will shut down the task and release its resources abruptly (e.g. truncating files, closing ports, etc.).
In any case the OS will protect the system, unless the OS itself is a flawed one...
In C it is normal to register a handler that protects the code fragment that can fail. The way you handle the exception depends on your OS (e.g. under Windows you can wrap the code that can fail in a structured exception handler with __try/__except).
Each executing thread has its stack allocated at thread creation time. If a stack overflow is detected during execution of a natively compiled program, only your program (process) will be affected, not the OS.
This is not a C problem; at least, what happens is not specified by C. C only says that it is undefined behavior, so the effect is a matter of the runtime. On any reasonable OS this will produce some kind of error that will be caught, and on *nixes it will produce a segmentation fault in your process. Even exotic small OSes will protect themselves from your faulty process. In any case, this will never crash the OS.
Java is not better than C; they are different languages and have different runtimes. Java is, by design, more secure in the sense that it will protect you against many memory problems (among others). C gives you finer control over the machine, and yes, it is more or less a kind of assembly language.
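To see the Java half of that comparison concretely, here is a minimal sketch (the recursion depth and the act of catching the Error are purely for demonstration; recovering from a StackOverflowError in real code is rarely advisable): the runaway recursion ends in a StackOverflowError, the JVM unwinds the stack, and neither the JVM process nor the OS is harmed beyond that.

```java
public class RecursionDemo {
    static long depth = 0;

    static void recurse() {
        depth++;
        recurse(); // no base case: guaranteed to exhaust the stack
    }

    public static void main(String[] args) {
        try {
            recurse();
        } catch (StackOverflowError e) {
            // The JVM detected the overflow and unwound the stack; the process
            // (and certainly the OS) keeps running. In C the same mistake is
            // undefined behaviour, typically surfacing as a SIGSEGV on *nix.
            System.out.println("overflowed after ~" + depth + " frames");
        }
    }
}
```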
Related
I'm looking into the stackSize parameter for Thread to handle some recursion, as described in my other question: How to extend stack size without access to JVM settings?
The Javadoc says:
On some platforms, specifying a higher value for the stackSize parameter may allow a thread to achieve greater recursion depth before throwing a StackOverflowError. Similarly, specifying a lower value may allow a greater number of threads to exist concurrently without throwing an OutOfMemoryError (or other internal error). The details of the relationship between the value of the stackSize parameter and the maximum recursion depth and concurrency level are platform-dependent. On some platforms, the value of the stackSize parameter may have no effect whatsoever.
Does anyone have some more details? The server running my code has the Oracle Java Runtime Environment. Will specifying the stack size have an effect? I don't have info on the OS (or other system specs), and I can't test it myself because I can't submit code year round.
Oracle Java Runtime Environment.
That's deprecated.
Will specifying stack size have effect?
It will change the size of each thread's stack, yes.
Will that affect your app? Probably not.
If you run many threads simultaneously (we're talking a couple hundred at least), lowering it may have an effect (specifically, it may make your app work whereas without doing that, your app fails with out-of-memory errors, or the app becomes like molasses because your system doesn't have the RAM).
If you have deep recursive stacks, but not the kind that run forever (due to a bug in your code), upping it may have an effect (specifically, it may make your app work whereas without doing that, your app fails with stack overflow errors).
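If you do fall in that second category and cannot touch the JVM-wide -Xss flag, the per-thread route is the four-argument Thread constructor the quoted Javadoc is describing. A minimal sketch, where the 64 MB figure and the recursion depth are arbitrary illustrative values and the JVM/OS is free to round, cap, or ignore the request:

```java
public class DeepRecursion {
    // Purely illustrative recursive task: counts frames down to zero.
    static long depth(long n) {
        return n == 0 ? 0 : 1 + depth(n - 1);
    }

    public static void main(String[] args) throws InterruptedException {
        // Request a 64 MB stack; the JVM/OS may round, cap, or ignore this value,
        // exactly as the Javadoc warns.
        long stackSize = 64L * 1024 * 1024;
        Thread worker = new Thread(
                null,                                                          // default thread group
                () -> System.out.println("reached depth " + depth(500_000)),
                "deep-recursion-worker",
                stackSize);
        worker.start();
        worker.join();
    }
}
```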
Most Java apps have neither, and in that case, whilst the -Xss option works fine, you won't notice. The memory load barely changes. The app continues to work just the same, and just as fast.
Does YOUR app fall into one of the two exotic categories? How would we be able to tell without seeing any of the code?
Most apps don't; that's... all there is to say without more details.
If you're just trying to tweak things so they 'run better', don't. The default settings are defaults for a reason: they work best for the most cases. You don't tweak defaults unless you have a lot of info, preferably backed up by profiler reports, showing that tweaking is necessary. And if the aim is just to generally 'make things run more smoothly', I'd start by replacing the obsolete (highly outdated) JRE you do have. The JRE as a concept is gone (Java 8 is the last release that had one, and it's almost a decade old at this point) - just install a JDK.
I have a Java application that uses a C++ DLL via JNA. The C++ DLL is proprietary; therefore, I cannot share the code unless I can make a simplified reproducible example. It is not straightforward to make a reproducible example until I debug further.
The application crashes sporadically with the error message Java Result: -1073740940. I am running the Java application from Netbeans, although it also crashes without Netbeans. Since there is no hs_err_.log, I guess the crash is in the C++ layer. How can I begin debugging this crash?
The "Java Result" output from Netbeans simply tells you the exit code of the java program. You could generate the same with a System.exit(-1073740940);. A successful program exits with a code of 0. Anything else is a failure that requires documentation to interpret.
You have not given us any indication what DLL you are using, so the only information we have to work with is this exit code. Converting that int to hex digits results in 0xc0000374 which you can enter into your favorite search engine and find out is a Heap Corruption Exception. Some examples are provided but in general this means you are accessing non-allocated native memory.
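For reference, the conversion is a one-liner in Java; the value below is simply the exit code from the question:

```java
public class ExitCodeToHex {
    public static void main(String[] args) {
        int exitCode = -1073740940;                        // exit code reported by Netbeans
        System.out.println(Integer.toHexString(exitCode)); // prints c0000374
    }
}
```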
Without having any idea what code you're using, I would guess you're doing something wrong with native memory, invoking native functions, or incorrectly manipulating pointers or handles somewhere in your application.
You should start by looking closely at arguments to native functions. Type mapping could be a problem if the number of bytes is mismatched. Investigate any Pointer-based arguments to native functions, including ByReference arguments. Trace back in the code and find when/how these Pointers were associated with native-allocated memory. If it was never allocated, that's one possibility for the problem. If it was allocated, see if you can find a point where that memory was freed, possibly by a different native function.
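Since the real DLL is proprietary, here is only a hedged sketch of the kind of mismatch to look for; the library name mylib and the function fillBuffer are hypothetical placeholders, not anything from your code. The point is simply that any Pointer handed to the native side must be backed by an allocation of the size the native side expects:

```java
import com.sun.jna.Library;
import com.sun.jna.Memory;
import com.sun.jna.Native;
import com.sun.jna.Pointer;

public class NativeBufferSketch {
    // Hypothetical mapping: "mylib" and fillBuffer stand in for the real DLL.
    public interface MyLib extends Library {
        MyLib INSTANCE = Native.load("mylib", MyLib.class);

        // Assumed signature: the native side writes up to 'size' bytes into 'buffer'.
        int fillBuffer(Pointer buffer, int size);
    }

    public static void main(String[] args) {
        int size = 256;

        // Correct: back the Pointer with Java-side allocated native memory of the
        // size the native function expects. Memory extends Pointer.
        Memory buffer = new Memory(size);
        int rc = MyLib.INSTANCE.fillBuffer(buffer, size);
        System.out.println("native call returned " + rc);

        // A classic heap-corruption bug is a size mismatch, e.g.
        //   MyLib.INSTANCE.fillBuffer(buffer, 4096);
        // which invites the native side to write past the 256-byte allocation.
    }
}
```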
The root cause of the crash was heap corruption in the C++ layer. When a random crash occurs due to heap corruption, it is sometimes complicated to pinpoint the cause, because the crash can happen later, when the program tries to manipulate the corrupted memory. Hence, it is also complicated to provide an SSCCE, especially when working on proprietary legacy code.
How I debugged this crash:
Reproduction: Try to find a consistent use case for the crash. If the crash is random then try to figure out a set of user actions that always leads to the crash.
Assumption: Guess which feature/component contains the crash.
Validation: Make sure that crash is not happening when you disable this feature/component.
Verification: Skim through and slice the code. Review each small piece of code.
Documentation: Write everything.
Daniel's answer was very helpful in fixing this crash!
I've seen the JNA Crash protection feature description starting with "It is not uncommon when defining a new library and writing tests to encounter memory access errors which crash the VM". This suggests to me that this crash protection is really intended for debugging / early phases of development. Is it safe to leave it on in production, or is there a large performance cost that I should be aware of / other reason to turn it off?
For Windows, it's on by default and safe to leave that way, because it uses structured exception handling to catch the errors (you should still attempt to shut down gracefully, because there's no guarantee that the caught error is in any way recoverable).
For platforms that use signal handling to trap errors, you should probably not use it in production, since the VM itself uses those same signals. In development you typically need to use the jsig library in order for JNA to be able to properly trap the signals without interfering with the JVM (or vice versa) and it's unlikely you'll be able to do this in production.
There is no performance cost, it's more of a reliability issue.
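For completeness, the toggle itself is a single JNA call; a minimal sketch (the libjsig path in the comment varies by JDK layout and is only an assumption):

```java
import com.sun.jna.Native;

public class CrashProtectionToggle {
    public static void main(String[] args) {
        // Ask JNA to trap native memory faults and rethrow them as Java errors.
        // On Windows this is effectively the default (SEH-based); on signal-based
        // platforms it may interfere with the JVM's own handlers, as noted above.
        Native.setProtected(true);

        // Not every platform honours the request, so check what actually applies.
        System.out.println("JNA protected mode: " + Native.isProtected());

        // On Linux/macOS you would typically also preload libjsig during development,
        // e.g. LD_PRELOAD=$JAVA_HOME/lib/libjsig.so java ...   (path is an assumption)
    }
}
```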
I am running a Java program on RHEL 6.4 Server. The program terminates abnormally, displaying the message "Segmentation fault (Core dumped)". But I do not find any file indicating the reason for termination in the current user directory from where the program was run.
How can I debug to find the error in such a case?
DOUBT
As per my understanding, a segmentation fault occurs when a program tries to access a memory address outside the program's range. I would expect to see such faults in C or C++ programs, but in Java, since there are no pointers, how is a segmentation fault possible?
how is a segmentation fault possible?
There are several possible reasons for this. There could be a bug in the JVM itself, or in a package (some of these are written in C or C++). It could also be due to a misconfiguration where incompatible components are used together.
From experience, a JVM bug is the least likely of these (although I've seen some).
If you capture the stack trace at the point of the crash, this might give you some clues as to where exactly the crash is occurring.
A great deal has been written about the wisdom of exceptions in general and the use of checked vs. unchecked exceptions in Java in particular, but I'm interested in seeing a defense of the decision to make thread termination the default policy instead of application termination the way it is in C++. This choice seems extremely dangerous to me: some condition that the programmer didn't plan for randomly causes some part of the program to die after logging a stack trace but the rest of the program soldiers resolutely on, what could go wrong? My intuition and experience say that a lot of things can go wrong here and that the default policy is the sort of thing that should only be selected specifically by someone who has a specific reason for choosing it, so what's the upside to this strategy which has such a seemingly large downside? Am I overestimating the risk?
EDIT: based on the answers so far, I feel that I need to be more focused in my description of the dangers that I perceive; I'm talking about the case of an application which uses multiple threads (e.g. in a thread pool) to update shared state. I recognize that this policy does not present a problem for single-threaded applications.
EDIT2: You can see that there is an awareness of these risks among the language maintainers from the explanation for why the Thread.stop() method was deprecated (found here: http://docs.oracle.com/javase/7/docs/technotes/guides/concurrency/threadPrimitiveDeprecation.html). The exact same issues apply when a thread dies unexpectedly due to uncaught exceptions. They must have designed the JVM so that all monitors are automatically unlocked when a thread dies, which seems like a poor implementation choice; having a thread die while it has a monitor locked should be an indication that the entire program should die because the alternative is almost certain to be internal inconsistency in some shared state.
#BD, not sure what your experience says about this, because you haven't explained it here. But here is what I have experienced as a developer:
Generally, it is a bad idea to make an application fail if one of its components has failed (temporarily or permanently) for any reason, like a DB restart or some file being replaced. For example, if I introduce a new type of trade into the system and some issue comes up, it shouldn't shut down my whole application.
Applications like web/application servers should be able to continue working and responding to users even if one of their deployments is throwing weird exceptions.
As for your worry about exceptions, generally all applications have a health monitoring system which monitors things like CPU/disk/RAM usage or errors in logs etc. and fires alerts accordingly.
I hope this resolves your confusion.
From discussing this issue with a co-worker and also reviewing the answers received so far, I have formed a supposition here, and would like to get some feedback.
I suspect that the decision to make this behavior the default has its roots in the philosophy that defined the early development of the language, as well as its early environment.
As part of the original philosophy, programmers/designers were expected to use checked exceptions, and the language enforces that checked exceptions which may be emitted by a method call (i.e. have been declared in the method definition) must be handled in the calling method, or else be declared by it to "officially" pass the responsibility to higher-level callers. Common practice has moved sharply away from the use of checked exceptions, not to mention the fact that one of the most commonly occurring exceptions in practice, NullPointerException, is unchecked. As a result of this, programmers must now assume that any method call can generate an unchecked exception, and the corollary to this is that any code which updates shared data in a concurrent context must implement transactional semantics for these updates in order to be fully correct. My experience is that most developers don't really understand this even if they do understand the basics of multi-threaded development, such as avoiding deadlock when managing critical sections with synchronization. The default uncaught exception handler behavior exacerbates the problem by masking its effects: in C++, it wouldn't matter if an uncaught exception would result in corrupted shared state because the program is dead anyway, but in Java the program will continue to limp along as best it can despite the fact that it is very likely to no longer be operating correctly.
The environmental factor is that single-threaded programs were likely the norm when the language was first developed, so the default behavior masqueraded as the correct one. The rise of multi-core architectures and increased usage of thread pools exposes the threat more broadly, and commonly applied approaches such as use of immutable objects can only go so far to solve it (hint for some: ConcurrentMap is probably not as safe as you think it is). My experience so far is that people who deny this risk are not being paranoid enough relative to the actual safety requirements of their code, but I would love to be proved wrong.
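As a concrete illustration of that ConcurrentMap hint (the names here are mine, chosen for the example): every individual call on a ConcurrentHashMap is thread-safe, yet a compound check-then-act sequence around it is not, which is exactly the kind of latent shared-state corruption a silently dying thread can leave behind or expose.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class CheckThenActRace {
    private static final ConcurrentMap<String, Integer> counts = new ConcurrentHashMap<>();

    // Broken: each individual map call is thread-safe, but the read-modify-write
    // sequence as a whole is not, so concurrent callers can lose updates.
    static void incrementRacy(String key) {
        Integer current = counts.get(key);
        counts.put(key, current == null ? 1 : current + 1);
    }

    // Correct: push the whole compound operation into one atomic map call.
    static void incrementAtomic(String key) {
        counts.merge(key, 1, Integer::sum);
    }
}
```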
I suspect that modifying uncaught exception handlers to terminate the program should be the standard procedure required by most development organizations; at the very least this should be done for thread pools which are known to update shared state based on incoming inputs.
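A minimal sketch of what such an organization-wide fail-fast policy might look like; whether to use halt() or exit(), and what to flush first, are judgment calls rather than anything prescribed by the platform:

```java
public class FailFastPolicy {
    public static void main(String[] args) {
        // Replace the default "log and let the rest of the program limp along"
        // policy with application termination. A real application would also
        // flush logs/metrics before halting.
        Thread.setDefaultUncaughtExceptionHandler((thread, error) -> {
            System.err.println("Uncaught exception in " + thread.getName());
            error.printStackTrace();
            // halt() rather than exit() avoids running shutdown hooks against
            // potentially corrupted shared state.
            Runtime.getRuntime().halt(1);
        });

        // Any worker thread dying now takes the whole process with it.
        new Thread(() -> { throw new IllegalStateException("boom"); }, "worker").start();
    }
}
```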
Normal (no GUI, no container) applications exit on uncaught exceptions - the default behavior is fine, the same as what you want.
For a GUI-based application, it would be nice to show an error message and be able to handle the error more usefully - for example, we could submit a defect report with some additional information.
The behavior is fully changeable by providing a thread-specific exception handler - that could include exiting the application.
Here are some useful notes