I am not sure whether this is a permissions problem or an error in JFreeChart (latest version 1.0.13) in my web application, which runs on Tomcat on CentOS. But I have a very odd situation where, in my application, an event fires, eventually causing the method below to be executed with the supplied parameters.
I've checked the documentation and it appears that these static methods don't throw exceptions, so I can only assume they return a null series or similar if they cannot execute properly. However, in my case, I execute the use case that causes this code to run, and as I watch the Tomcat log catalina.out, I can see the line "===================5" appear, but "===================6" never does. And that's where I am stumped. And clearly, since "chart" never gets made, the image file can never be generated, leaving an ugly error on my web page.
Can anyone shed some light on why ChartFactory.createTimeSeriesChart seems to hang? Wouldn't bad input cause the method to return a null series or something? Surely this very mature product wouldn't just sit there and block forever, right?
The other detail is that this worked in the GWT servlet container, and also another Tomcat servlet container on Windows...which kind of makes me think there could be some permissions issue. Except that for my last test I made everything root...
Finally, perhaps I missed something huge and JFree methods do throw exceptions? Perhaps my catch block errors out and the message never goes to my error logs?
EDIT: The class files in my .war were compiled on the Windows machine on which they function correctly. Bytecode is bytecode, right? Or is there some potential problem there?
EDIT 2: The project is headless and configured as such.
Code:
public LineChart(final String title, List<GraphData> graphxy[],
        String url, String sensorName[], String unit[], float critHigh,
        float critLow, Double percent, String historic, String clickZoomIn,
        String BaseUrl, Date[][] dateDifference)
        throws IOException
{
    try {
        ...
        System.out.println("===================5");
        // All the parameters used below are built/defined in the excised code:
        final JFreeChart chart = ChartFactory.createTimeSeriesChart(ShowsensorName, "Date", readUnit, dataset, true, true, false);
        System.out.println("===================6");
        ...
    }
    catch (Exception e)
    {
        System.out.println("Exception in line chart demo is=========" + e);
    }
}
Here are a few things to try:
If headless, verify the setup is correct (see the sketch after this list).
Try the related demos on the affected server.
Try invoking your chart as a standalone application, e.g. TimeSeriesChartDemo1.
Walk through the source by clicking the createTimeSeriesChart() link.
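To test that in isolation, here is a minimal standalone sketch (the class name, file name, and sample data are arbitrary; it assumes only the JFreeChart/JCommon jars on the classpath). If this also hangs on the CentOS box but runs on Windows, the problem is environmental (headless/font setup) rather than your servlet code; the first two lines report the headless configuration:

import java.awt.GraphicsEnvironment;
import java.io.File;
import org.jfree.chart.ChartFactory;
import org.jfree.chart.ChartUtilities;
import org.jfree.chart.JFreeChart;
import org.jfree.data.time.Day;
import org.jfree.data.time.TimeSeries;
import org.jfree.data.time.TimeSeriesCollection;

public class HeadlessChartCheck {
    public static void main(String[] args) throws Exception {
        // Report how the VM is configured before touching any chart code.
        System.out.println("java.awt.headless = " + System.getProperty("java.awt.headless"));
        System.out.println("isHeadless = " + GraphicsEnvironment.isHeadless());

        // Tiny dataset, just enough to exercise createTimeSeriesChart().
        TimeSeries series = new TimeSeries("probe");
        series.add(new Day(1, 1, 2010), 1.0);
        series.add(new Day(2, 1, 2010), 2.0);

        JFreeChart chart = ChartFactory.createTimeSeriesChart(
                "probe", "Date", "Value", new TimeSeriesCollection(series),
                true, true, false);
        ChartUtilities.saveChartAsPNG(new File("probe.png"), chart, 400, 300);
        System.out.println("Chart written OK");
    }
}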
Any Android app produces logs in LogCat, even ones not generated by the developer's own source code via Log.d, Log.i, Log.w, Log.e, etc. Perhaps Google Developers has some "automagic" mechanism for this; I don't know about that...
The point is, I remember that years ago I could somehow extend the class Application, override one or several of its methods, and then:
Add my own code to process every single Log entry generated by my app in LogCat
Do whatever I wanted with them (getting the label and the description strings, and then sending them via mail, Slack, etc., basically)
And then call super on that method and let the system do with that log entry whatever Application does with it by default...
or something like that... If I recall correctly, I could do this with any log entry in my app's namespace. Or maybe it was just the crash handler? I can't remember...
It's been so long since I accomplished this (several years already!) that I don't remember how I did it anymore... I've searched the internet like crazy trying to recall, but I am struggling to find it again... :-S
// ...public?? oO
[¿¿??] class MyApp extends Application [...] {
    // [...]
    @Override
    public void whateverMethodItWasIDontRemember(params) {
        // My coding stuff for the error reports
        /* magic :D */
        sendTheLogsMyWay();
        // I bet this is important
        super.whateverMethodItWasIDontRemember(params);
    }
    // [...]
}
I am about to launch the first Beta version of a new app, so I want beta testers to have a reliable way to send me LogCat's feed if anything has to be reported due to crashes, unexpected behaviour etc.
I mean, it would be ridiculous to fill every inch of the source code with CustomLog calls for the beta version when, in most cases, the default logs are more than enough to see why it crashed (errors) or what optimization problems (usually warnings) the beta tester might have... not to mention that, if I forget to monitor something this way, the ridiculously big effort of logging every single line of my code would be useless... oO
// -__- Mmm... perhaps extending Log itself
// would be more elegant...
import android.util.Log;

public final class CustomLog {

    public static void d(String label, String msg) {
        // AKA my code to handle it
        packItForNextErrorReport(label, msg);
        Log.d(label, msg);
    }

    /*
     * ... and so on with Log.i, w and e.
     * ... I think you get the idea
     */
}
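In case the half-remembered mechanism was the crash handler rather than a hook into Log itself, here is a minimal sketch of an Application subclass that forwards uncaught exceptions before handing them back to the default handler (the class name is arbitrary, the reporting is just a Log.e placeholder for whatever mail/Slack code ends up there, and the subclass still has to be registered via android:name in the manifest):

import android.app.Application;
import android.util.Log;

public class MyApp extends Application {
    @Override
    public void onCreate() {
        super.onCreate();
        // Keep the handler that was installed before us (usually the system one).
        final Thread.UncaughtExceptionHandler previous =
                Thread.getDefaultUncaughtExceptionHandler();
        Thread.setDefaultUncaughtExceptionHandler((thread, throwable) -> {
            // Capture the crash for the beta report...
            Log.e("MyApp", "Uncaught exception in " + thread.getName(), throwable);
            // ...then let the previous/default handler finish the job (crash dialog etc.).
            if (previous != null) {
                previous.uncaughtException(thread, throwable);
            }
        });
    }
}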
I am working on an integration with Apache Drill which enables users to query PDF files directly using SQL. I'm about 80% done and really impressed with how well Tabula works for this.
However, when I execute the first Drill query that uses the Tabula libraries a Java icon pops up and I get the following text in the command line:
2020-10-25 15:06:55.770 java[71188:7121498] Persistent UI failed to open file file://localhost/Users/******/Saved%20Application%20State/net.java.openjdk.cmd.savedState/window_1.data: Permission denied (13)
I changed the permissions on that directory but I'm still getting the Java popup.
This is not normal behavior for Drill and my goal here was to integrate Tabula programmatically. Is Tabula trying to open a window or something like that and if so, is there a way to disable this? I noted that this does not occur in my unit tests.
Here are some relevant code snippets:
public static List<Table> extractTablesFromPDF(PDDocument document, ExtractionAlgorithm algorithm) {
    NurminenDetectionAlgorithm detectionAlgorithm = new NurminenDetectionAlgorithm();
    ExtractionAlgorithm algExtractor;

    SpreadsheetExtractionAlgorithm extractor = new SpreadsheetExtractionAlgorithm();

    ObjectExtractor objectExtractor = new ObjectExtractor(document);
    PageIterator pages = objectExtractor.extract();
    List<Table> tables = new ArrayList<>();
    while (pages.hasNext()) {
        Page page = pages.next();
        algExtractor = algorithm;

        /*if (extractor.isTabular(page)) {
            algExtractor = new SpreadsheetExtractionAlgorithm();
        }
        else {
            algExtractor = new BasicExtractionAlgorithm();
        }*/

        List<Rectangle> tablesOnPage = detectionAlgorithm.detect(page);
        for (Rectangle guessRect : tablesOnPage) {
            Page guess = page.getArea(guessRect);
            tables.addAll(algExtractor.extract(guess));
        }
    }
    return tables;
}
This doesn't happen in my unit tests.
Thanks in advance for your help!
Because some code is executed that performs an operation which is usually, but not technically necessarily, involved in things that require so-called 'headful' mode (well, that's perhaps not really a term, but the opposite, 'headless', certainly is). This causes a few things to happen, including that icon showing up.
One easy way out of this is to force headless mode. But note that when you do this, any of these 'usually but not technically necessarily headful' operations may either [1] work fine and no longer show that icon, or [2] crash with a HeadlessException. Which one you end up with depends not just on which operation you're doing, but also on which VM you are doing it on - as a rule, once one of these operations works fine and no longer throws, later versions won't revert to throwing (in other words, newer versions of Java let more things work in headless mode).
To force headless mode, run java with java -Djava.awt.headless=true.
If you must do it from within java code, run System.setProperty("java.awt.headless", "true"); at least once, and before you do any of these 'usually causes headful mode' operations.
Presumably, the thing that causes headful mode to occur is something graphics-related, such as rendering a JPG or PNG into a BufferedImage. It's not surprising that Apache Drill is doing this to 'read' images, for example.
Another option is to just upgrade your VM; maybe that helps. As a general rule, features 'move down' this list:
Requires headful mode; running it makes the VM go headful (icon appears); if java.awt.headless is set, the operation fails with a HeadlessException.
Causes headful mode; running it makes the VM go headful. However, if headless is set, it works fine and won't do that.
Completely freed. Running the code works fine and does not cause the VM to go headful. The headless flag has no bearing whatsoever on how the code operates.
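For the Drill/Tabula case specifically, a minimal sketch of the code-level variant (the class name is hypothetical): set the property in a static initializer, or as the very first statement of whatever entry point drives the extraction, so it runs before the first page is rendered:

public class PdfExtractionBootstrap {
    static {
        // Must happen before any Tabula/PDFBox call that touches AWT.
        System.setProperty("java.awt.headless", "true");
    }

    public static void main(String[] args) {
        System.out.println("headless = " + java.awt.GraphicsEnvironment.isHeadless());
        // ... call extractTablesFromPDF(...) from here
    }
}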
EDIT: I'm using the LeanFT Java SDK 14.50
EDIT2: for text clarification
I'm writing test scripts for a web application that sometimes opens popup browsers for specific actions. So naturally, when that happens, I attach the new browser using BrowserFactory.attach(...). The problem is that LeanFT does not seem to have a way to validate that the browser exists before attaching it, and if I try to attach it too early, it will fail. And I don't like using an arbitrary wait/sleep time, as I can never really know how much time it will take for the browser to be ready. So my solution is below.
private Browser attachPopUpBrowser(BrowserType bt, RegExpProperty url) {
    Browser browser = null;
    int iteration = 0;
    // TimeoutLimit.SHORT = 15000
    while (browser == null && iteration < TimeoutLimit.SHORT.getLimit()) {
        try {
            Reporter.setReportLevel(ReportLevel.Off);
            browser = BrowserFactory.attach(
                    new BrowserDescription.Builder()
                            .type(bt)
                            .url(url)
                            .build()
            );
            Reporter.setReportLevel(ReportLevel.All);
        } catch (GeneralLeanFtException e) {
            try {
                Thread.sleep(1000);
                iteration += 1000;
            } catch (InterruptedException e1) {
            }
        }
    }
    return browser;
}
Now, this works wonderfully, with one exception: it generates errors in the LeanFT test result. Errors that I want to ignore, because I know it will fail a few times before it succeeds. As you can see, I've tried changing the ReportLevel while doing this in order to suppress the error logging, but it doesn't work. I've also tried using
Browser[] browsers = BrowserFactory.getAllOpenBrowsers(browserDescription);
thinking that it would return an empty array if it finds nothing, but I still get errors while the browser is not ready. Does anyone have suggestions as to how I could work around this?
TL;DR
I'm looking for a way to either suppress the errors generated within my while loop, or to validate that the browser is ready before attaching it. All of that so that I can have a nice and clean run result at the end of my test (because these errors will present false negatives in nearly all of my tests).
Addendum
Also, when the attach fails for the first time, I get an exception
com.hp.lft.sdk.ReplayObjectNotFoundException: attachApplication
as expected, but all subsequent failures throw
com.hp.lft.sdk.GeneralLeanFtException: Cannot read property 'match' of null
I've compared both stack traces and they are identical except for the last 2 lines, which happen within ReplayExceptionFactory.CreateDefault(), so I think something gets corrupted during the exception generation. But that is within the leanft.sdk.internal package, so there might not be a lot we can do about it right now. I'm guessing that if I did not get that second "cannot read property" exception, I would correctly get the ReplayObjectNotFoundException until the browser is correctly attached.
I'd rather not force an attach endlessly until it works. Even if we'd solve the false negatives, we'd still have a not so good approach to the problem.
The cleanest solution would be to see if there is anything to attach to in the first place.
And you can do just that by getting all the browser instances that meet your description.
Browser[] browsers = BrowserFactory.getAllOpenBrowsers(new BrowserDescription.Builder().build());
Any element in this collection is an already "attached" browser - you can start using it.
If the list doesn't contain your browser instance, rerun the query.
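A sketch of that approach, reusing the types from the question (the timeout, the 1-second poll interval, and the assumption that getAllOpenBrowsers accepts the same filtering description as attach are mine, not from the SDK docs): poll until a matching browser shows up, and only use it then, so no failed attach ever lands in the run results.

private Browser waitForPopUpBrowser(BrowserType bt, RegExpProperty url, long timeoutMs)
        throws GeneralLeanFtException, InterruptedException {
    BrowserDescription description = new BrowserDescription.Builder()
            .type(bt)
            .url(url)
            .build();
    long waited = 0;
    while (waited < timeoutMs) {
        // Anything returned here is already "attached" and ready to use.
        Browser[] browsers = BrowserFactory.getAllOpenBrowsers(description);
        if (browsers.length > 0) {
            return browsers[0];
        }
        Thread.sleep(1000);
        waited += 1000;
    }
    return null; // the caller decides how to report a browser that never appeared
}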
I am running into a peculiar issue (peculiar for me anyways) that seems to happen in a SwingWorker that I use for saving the result of another 'SwingWorker' thread as a tab-delimited file (just a spreadsheet of data).
Here is the worker, that initializes and declares an object which organizes the data and writes each table row to a file (using BufferedWriter):
// Some instance variables outside of the SwingWorker:
// model: holds a matrix of numerical data (double[][])
// view: the GUI class

class SaveWorker extends SwingWorker<Void, Void> {

    /* The finished reordered matrix axes */
    private String[] reorderedRows;
    private String[] reorderedCols;

    private String filePath; // the path of the file that will be generated

    public SaveWorker(String[] reorderedRows, String[] reorderedCols) {
        // variables have been checked for null outside of the worker
        this.reorderedRows = reorderedRows;
        this.reorderedCols = reorderedCols;
    }

    @Override
    protected Void doInBackground() throws Exception {
        if (!isCancelled()) {
            LogBuffer.println("Initializing writer.");
            final CDTGenerator cdtGen = new CDTGenerator(
                    model, view, reorderedRows, reorderedCols);

            LogBuffer.println("Generating CDT.");
            cdtGen.generateCDT();

            LogBuffer.println("Setting file path.");
            filePath = cdtGen.getFilePath(); // stops inside here, jumps to done()

            LogBuffer.println("Path: " + filePath);
        }
        return null;
    }

    @Override
    protected void done() {
        if (!isCancelled()) {
            view.setLoadText("Done!");
            LogBuffer.println("Done saving. Opening file now.");
            // need filePath here to load and then display generated file
            visualizeData(filePath);
        } else {
            view.setReorderOngoing(false);
            LogBuffer.println("Reordering has been cancelled.");
        }
    }
}
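One SwingWorker detail that may be relevant to the silent stop (this is a sketch of a variation, not the original code): anything thrown inside doInBackground() is captured by the worker and only rethrown when get() is called, so a done() that never calls get() hides the failure completely. A done() along these lines would at least surface it in the log:

@Override
protected void done() {
    try {
        // Rethrows, wrapped in an ExecutionException, whatever doInBackground() threw.
        get();
    } catch (Exception e) {
        LogBuffer.println("SaveWorker failed: " + e);
        e.printStackTrace();
        return;
    }
    // ... continue with the original done() logic (visualizeData etc.)
}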
When I run the program from Eclipse, this all works perfectly fine. No issues whatsoever. Now, I know there have been tons of questions on here about Eclipse running fine while the runnable JAR fails. It's often due to not including dependencies or referring to them in the wrong way. But what's weird is that the JAR also works completely fine when it's started from the command line (Windows 8.1):
java -jar reorder.jar
Et voilà, everything as expected. The CDTGenerator will finish, write all the matrix rows to a file, and return the filePath. With the filePath I can subsequently open the new file and display the matrix.
In the case of double-clicking the JAR on my desktop (where I placed it when exporting it from Eclipse), this is where the program lets me know that something is wrong. I get the error message I created for the filePath == null case, and with some logging I closed in on where the CDTGenerator object stops executing its generateCDT() method (the Eclipse debugger also won't reproduce the error and does everything as planned).
What the log shows made me think it's a concurrency issue, but I'm actually leaning against that, because both Eclipse and the command line run the code fine. The log just tells me that the code suddenly stops executing during a loop that transforms double values from a matrix row (double[]) into Strings to be stored in a String[] for later writing with BufferedWriter.
If I add more logging inside that loop, it stops at a different iteration (???).
Furthermore, the code does work for small matrices (130x130) but not for larger ones (1500x3500), though I haven't tested where the limit is. This makes it seem almost time- or memory-dependent.
I also used jVisualVM to look at potential memory issues, but even for the larger matrices I am at ~250MB, which is nowhere near problematic with regard to potential OutOfMemoryErrors.
And finally, the last potential factor I can think of: generating the JAR 'fails' due to some classpath issues (clean & rebuild have no effect...), but this has never been an issue before, as I have run the code many times using the 'broken' JAR executed from the desktop.
I am a real newbie to programming, so please point me in some direction if possible. I have tried to find logged exceptions, logged the values of variables, and I am checking for null and IndexOutOfBounds issues at the array where it stops executing... I am at a complete loss, especially because this runs fine from the command line.
It looks like the problem had to do with the Java versions installed on the OP's computer. They checked the file extensions and the programs associated with each one to see whether it was the same Java version as executed from Eclipse and the command line.
Once they cleaned out the older Java versions, the JAR started working by double-clicking it :)
Because I do not have enough points (need 50 to directly answer your question), I need to ask this way:
If you double-click a JAR you won't see a console, which is often the problem, because you can't see stack traces; they just get written to "nowhere". Maybe you get an NPE or something else.
Try to attach an exception handler like this, Thread.setDefaultUncaughtExceptionHandler(UncaughtExceptionHandler), and let this handler write a message to a file or similar...
Just an idea.
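A minimal sketch of that suggestion (the log file name is an arbitrary choice): install the handler as the very first thing in main, so anything that blows up later is appended to a file you can inspect even when the JAR was started by double-clicking.

import java.io.FileWriter;
import java.io.PrintWriter;

public final class CrashLogger {

    public static void install() {
        Thread.setDefaultUncaughtExceptionHandler((thread, throwable) -> {
            try (PrintWriter out = new PrintWriter(new FileWriter("crash.log", true))) {
                out.println("Uncaught exception in thread " + thread.getName());
                throwable.printStackTrace(out);
            } catch (Exception ignored) {
                // nothing sensible left to do if even the log file fails
            }
        });
    }
}

Then call CrashLogger.install(); at the start of main, before building the GUI.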
I am working on a web-application that uses Spring MVC.
It has been working fine on Glassfish 3.0.1, but when migrating to Glassfish 3.1, it started acting strange. Some pages are only partially shown, or show nothing at all, and the log contains a lot of messages of this type:
[#|2012-08-30T11:50:17.582+0200|WARNING|glassfish3.1|javax.enterprise.system.container.web.com.sun.enterprise.web|_ThreadID=69;_ThreadName=Thread-1;|StandardWrapperValve[SpringServlet]: PWC1406: Servlet.service() for servlet SpringServlet threw exception
org.springframework.beans.NotReadablePropertyException: Invalid property 'something' of bean class [com.something.Something]: Bean property 'something' is not readable or has an invalid getter method: Does the return type of the getter match the parameter type of the setter?
at org.springframework.beans.BeanWrapperImpl.getPropertyValue(BeanWrapperImpl.java:729)
at org.springframework.beans.BeanWrapperImpl.getNestedBeanWrapper(BeanWrapperImpl.java:576)
at org.springframework.beans.BeanWrapperImpl.getBeanWrapperForPropertyPath(BeanWrapperImpl.java:553)
at org.springframework.beans.BeanWrapperImpl.getPropertyValue(BeanWrapperImpl.java:719)
at org.springframework.validation.AbstractPropertyBindingResult.getActualFieldValue(AbstractPropertyBindingResult.java:99)
at org.springframework.validation.AbstractBindingResult.getFieldValue(AbstractBindingResult.java:226)
at org.springframework.web.servlet.support.BindStatus.<init>(BindStatus.java:120)
at org.springframework.web.servlet.tags.form.AbstractDataBoundFormElementTag.getBindStatus(AbstractDataBoundFormElementTag.java:178)
at org.springframework.web.servlet.tags.form.AbstractDataBoundFormElementTag.getPropertyPath(AbstractDataBoundFormElementTag.java:198)
at org.springframework.web.servlet.tags.form.AbstractDataBoundFormElementTag.getName(AbstractDataBoundFormElementTag.java:164)
at org.springframework.web.servlet.tags.form.AbstractDataBoundFormElementTag.writeDefaultAttributes(AbstractDataBoundFormElementTag.java:127)
at org.springframework.web.servlet.tags.form.AbstractHtmlElementTag.writeDefaultAttributes(AbstractHtmlElementTag.java:421)
at org.springframework.web.servlet.tags.form.TextareaTag.writeTagContent(TextareaTag.java:95)
at org.springframework.web.servlet.tags.form.AbstractFormTag.doStartTagInternal(AbstractFormTag.java:102)
at org.springframework.web.servlet.tags.RequestContextAwareTag.doStartTag(RequestContextAwareTag.java:79)
The error message isn't incorrect, because the property in question does not have a setter method (it gets its value through the constructor). But as I said, this was not a problem on Glassfish 3.0.1, only on the new server with Glassfish 3.1.
Does anyone know if there is something in the Glassfish version that might cause this? Or is it some kind of configuration that is missing on the new server?
Some code:
Controller:
@ModelAttribute
public SomethingContainer retrieveSomethingContainer(@PathVariable final long id) {
    return somethingContainerDao.retrieveSomethingContainer(id);
}

@InitBinder("somethingContainer")
public void initBinderForSomething(final WebDataBinder binder) {
    binder.setAllowedFields(new String[] {
        "something.title",
        "something.description",
    });
}
SomethingContainer:
@Embedded
private final Something something = new Something();

public Something getSomething() {
    return something;
}

// no setter

public String getDescription() {
    return something.getDescription();
}
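As a side note, the stack trace goes through BeanWrapperImpl, so the getter-only property can be probed outside the web stack with a few lines (the class name PropertyProbe is hypothetical and the bean names just mirror the snippets above; this only reproduces the lookup in isolation, it is not a fix):

import org.springframework.beans.BeanWrapper;
import org.springframework.beans.BeanWrapperImpl;

public class PropertyProbe {
    public static void main(String[] args) {
        BeanWrapper wrapper = new BeanWrapperImpl(new SomethingContainer());
        // If introspection works, this prints the embedded Something instance;
        // if not, it throws the same NotReadablePropertyException as in the log.
        System.out.println(wrapper.getPropertyValue("something"));
        System.out.println(wrapper.isReadableProperty("something.description"));
    }
}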
Update:
Restarting Glassfish actually removes the problem - temporarily. I suspect it might have something to do with the loading of the custom binders. We had some problems with out-of-memory errors, which I thought had something to do with it, but those have been fixed without fixing this problem.
Update 2:
On the 3.0.1 server, one of the JVM arguments was -client. On the 3.1 server, it was -server. We changed it to -client, and this made the frequency of the error go down a lot: it was happening every other day with -server, and it took 2 weeks to happen with -client.
Update 3:
Some information about the servers (more can be added if requested):
Server 1 (the working one):
Windows Server 2003
Java JDK 6 build 35
Glassfish 3.0.1 build 22
-Xmx1024m
Server 2 (the one with problems):
Windows Server 2008 64-bit
Java JDK 6 build 31
Glassfish 3.1 build 43
-Xmx1088m
-Xms1088m
We are using Spring version 3.1.0.
Update 4:
I recreated the error by renaming a field in a JSP to something that does not exist in the model attribute.
But, more importantly, I noticed something: the fields where the system can't find the getters are often fields of superclasses of the classes referenced in the model attribute. To continue my example, SomethingContainer really looks like this:
public class SuperSomethingContainer {
    // [...]
    private Something something;

    public Something getSomething() {
        return something;
    }
}

public class SomethingContainer extends SuperSomethingContainer {
    // [...]
}
The reference in the controller stays as is, so it's referencing a field that is in the superclass of the object in question.
Update 5:
I tried connecting to the production server with a debugger after the error occurred. I put a breakpoint on the return statement of a controller method returning the object with the error, and checked whether I could access the problematic field at that point. I could, so the problem must lie within Spring MVC / the generated JSP classes.
(Also, the field in error was of the type "someobject.something[0].somethingelse[0]", but when the somethingelse list was empty, there was no error! To me, this implies that it somehow can't find the get method of a list(?))
Update 6:
It seems that the problem has to do with the generation of Java classes from the JSPs. We do not precompile JSPs when deploying, so they are compiled when first used. The problem occurs the first time a page is visited and the JSP compiled. I also noticed that once the problem has occurred, JSPs compiled afterwards will all give errors. I've kept a few of the problematic generated Java files, and upon the next restart I will compare them to the working ones. Getting closer :)
Update 7:
Compared the compiled JSP Java files that resulted in an error with ones that did not, and there was no difference. So that kind of rules that out.
So, I now know that the Java object leaving the controller is fine (checked with the debugger), and the Java class generated from the JSP is fine. So it must be something in between; now I need to find out what...
Update 8:
Another round of debugging, and I narrowed the problem down some more. It turns out that Spring does some caching of the properties belonging to the various classes. In org.springframework.beans.BeanWrapperImpl, method getPropertyValue, there is the following:
private Object getPropertyValue(PropertyTokenHolder tokens) throws BeansException {
    String propertyName = tokens.canonicalName;
    String actualName = tokens.actualName;
    PropertyDescriptor pd = getCachedIntrospectionResults().getPropertyDescriptor(actualName);
    if (pd == null || pd.getReadMethod() == null) {
        throw new NotReadablePropertyException(getRootClass(), this.nestedPath + propertyName);
    }
The problem is that the cachedIntrospectionResults does not contain the property in question; it contains every other property of the class, though. I will need to dig some more to try to find out why it is missing - whether it's missing from the start or gets lost somewhere along the line.
Also, I've noticed that the missing properties are those that do not have setters, only getters. And it seems to be context-aware, as indicated by the stack trace, so not finding a property when visiting one page does not mean that it's not available when visiting another.
Update 9:
Another day, more debugging. Actually found some good stuff: the getCachedIntrospectionResults() call in the previous code block wound up calling CachedIntrospectionResults#forClass(theClassInQuestion). This returned a CachedIntrospectionResults object containing far from all of the expected properties (11 of 21). Going into the forClass method, I found:
static CachedIntrospectionResults forClass(Class beanClass) throws BeansException {
    CachedIntrospectionResults results;
    Object value = classCache.get(beanClass);
    if (value instanceof Reference) {
        Reference ref = (Reference) value;
        results = (CachedIntrospectionResults) ref.get();
    }
    else {
        results = (CachedIntrospectionResults) value;
    }
    if (results == null) {
        // build the CachedIntrospectionResults, store it in classCache and return it.
It turned out that the CachedIntrospectionResults returned was found via classCache.get(beanClass). So what was stored in the classCache was corrupted / did not contain all that it should. I put a breakpoint on the classCache.get(beanClass) line, and ran this through the debugger:
classCache.put(beanClass, null);
When allowing the method to finish and rebuild the CachedIntrospectionResults, things started working again. So what is stored in the classCache is out of sync with what would and should be created if it were allowed to rebuild. Whether this is due to something going wrong the first time it is built, or to the classCache being corrupted somewhere along the line, I do not currently know.
I'm starting to suspect that this has something to do with classloaders, as I've previously experienced problems due to changes in the way the classloader works when upgrading Glassfish...
There may be more than one possible reason. I am not sure about the actual cause, but I can give you a way to find the problem.
Step 1: On the Server 2 machine, deploy the application on Glassfish 3.0.1 build 22. If it works fine there, there might be a problem with the Glassfish libraries; the following can be reasons for this:
A library that exists in Glassfish 3.0.1 build 22 is missing from Glassfish 3.1 build 43. You can solve this by copying all libraries from the working Glassfish server to the new server.
The Glassfish libraries may be conflicting with your Spring version. [I faced a similar kind of problem on Tomcat, and when I replaced my Spring libraries from 3.0.1 with 3.0.3 it worked for me.] So replace your Spring libraries with the latest version.
Step 2: If the result of Step 1 is that the application does not run on the Server 2 machine on Glassfish 3.0.1 build 22 either, there may be the following reasons:
Libraries that you have placed in the Java lib directory are either not present on this server machine or have different versions.
Folders that are on the classpath, or environment variables set on Server 1, either do not exist on Server 2, don't contain the jars, or contain jars with different versions.
I got a colleague of mine to investigate the error, and he was able to recreate it in a unit test. This was done by invoking the method that builds CachedIntrospectionResults for a class while at the same time stressing the JVM by adding strings to memory, with very low memory settings. This approach made it fail 20 out of 30000 times.
As to the cause, I only got an oral explanation, so I don't have all the details, but it was something like this: Java has its own introspection results, and these are wrapped by Spring. The problem is that the Java results use soft references, which make them prone to garbage collection. So when Spring was building its wrappers around these soft references at the exact moment the garbage collector ran, it actually cleared some of the basis of what Spring was using, leading to properties being "lost".
The solution seems to be upgrading from Spring 3.1.0.RELEASE to Spring 3.1.3.RELEASE. There are some changes there, and Spring no longer wraps soft references when determining the properties of a class (soft references are used in rare, special cases instead of all the time). After upgrading the Spring version, the error has not been reproducible through the unit test; it remains to be seen whether this holds up in practical use.
Update: It's been a few weeks and no sign of the error, so updating the Spring version worked :)
I think I've actually found a candidate for the cause of this.
After getting the error on one of the test servers after a very short duration and little use, we did some additional checks on the cause. It turned out that the test server had just half the available memory, which made us look at it a bit more thoroughly. It hadn't used up all its memory, but when using JConsole to investigate the memory usage of the different parts of the new generation space on the heap, it turned out that one of the survivor spaces was packed full. I'm guessing that this made parts of it overflow, leading to the overflowed parts being GC'ed or becoming unreachable by not being where they were supposed to be.
We have yet to verify that this is in fact the problem in the production environment as well, but once the error turns up again we will check, and if it is the case we will change some memory settings to allow more space for the survivor areas of the new generation heap (-XX:SurvivorRatio=6 or something like that).
So it seems that larger Spring MVC applications need a large survivor space, especially in newer versions of Glassfish.
Indeed, there was an issue with the newly introduced ExtendedBeanInfo class in Spring 3.1.0, which was fixed in Spring 3.1.1 - see (https://jira.spring.io/browse/SPR-8347).