So, back again.
I have a JHipster-generated project which uses an Elasticsearch Java client embedded in Spring Boot.
I have recently made some major changes to the datasets, since we've been migrating a whole new batch of data from different repositories.
When deploying the application it all works fine: all SearchRepositories are loaded without problems and all search capabilities run smoothly.
The issues come when running from the test environment. There have been no changes whatsoever to the application-test.yml file nor to the Elasticsearch Java config file.
We have some code which updates the indices, and I've run it several times; it seems to update the cluster's indices just fine. Where I'm suffering is in the target folder: it just won't create the new indices.
There are 12 indices that I cannot get into the target folder when running in test mode; however, only 5 of them fail in their ResourceIntTest because of the error mentioned in the title.
I don't want to fill this post with hundreds of irrelevant lines of code, so for now it suffices to include the workaround that keeps the tests from failing.
In the initTest of the 5 failing test cases, if I write the following line (obviously changing the class name in each case):
surveyDataQualitySearchRepository.save(surveyDataQualityRepository.findAll());
Then the index will create itself and the test case will not fail. However, this shouldn't need to be done manually; the index should be created when the resetIndex method in the IndexReinitializer class is called upon deployment.
resetIndex:
@PostConstruct
public void resetIndex() {
    long t = currentTimeMillis();
    elasticsearchTemplate.deleteIndex("_all");
    t = currentTimeMillis() - t;
    logger.debug("ElasticSearch indexes reset in {} ms", t);
}
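If deleting every index at startup stays in place, one way to keep the test indices from going missing could be to recreate the index and mapping for each searchable entity right after the delete. This is only a sketch against the spring-data-elasticsearch template API; SurveyDataQuality stands in for whichever @Document-annotated classes the project actually has:

@PostConstruct
public void resetIndex() {
    long t = currentTimeMillis();
    elasticsearchTemplate.deleteIndex("_all");
    // Recreate index and mapping so the SearchRepositories find their
    // indices again; repeat for every @Document-annotated entity class.
    elasticsearchTemplate.createIndex(SurveyDataQuality.class);
    elasticsearchTemplate.putMapping(SurveyDataQuality.class);
    t = currentTimeMillis() - t;
    logger.debug("ElasticSearch indexes reset in {} ms", t);
}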
Commenting out this piece of code also allows all indices to be loaded, but it should not be commented out, as it serves to refresh the indices; plus, it works fine in an old version of the application which still points to the old dataset.
All help will be very welcome. I've been on this for almost a full day now trying to understand where the error comes from, and I'm more than happy to upload any pieces of code that may be relevant to anyone willing to help here.
EDIT: Adding the code for the indices rebuild, as requested in the comments.
@Test
public void synchronizeData() throws Exception {
    resetIndex();
    activePharmaIngredientSearchRepository.save(activePharmaIngredientRepository.findAll());
    countrySearchRepository.save(countryRepository.findAll());
    dosageUnitSearchRepository.save(dosageUnitRepository.findAll());
    drugCategorySearchRepository.save(drugCategoryRepository.findAll());
    drugQualityCategorySearchRepository.save(drugQualityCategoryRepository.findAll());
    formulationSearchRepository.save(formulationRepository.findAll());
    innDrugSearchRepository.save(innDrugRepository.findAll());
    locationSearchRepository.save(locationRepository.findAll());
    manufacturerSearchRepository.save(manufacturerRepository.findAll());
    outletTypeSearchRepository.save(outletTypeRepository.findAll());
    publicationSearchRepository.save(publicationRepository.findAll());
    publicationTypeSearchRepository.save(publicationTypeRepository.findAll());
    qualityReferenceSearchRepository.save(qualityReferenceRepository.findAll());
    reportQualityAssessmentAssaySearchRepository.save(reportQualityAssessmentAssayRepository.findAll());
    //rqaaQualitySearchRepository.save(rqaaQualityRepository.findAll());
    rqaaTechniqueSearchRepository.save(rqaaTechniqueRepository.findAll());
    samplingTypeSearchRepository.save(samplingTypeRepository.findAll());
    //surveyDataQualitySearchRepository.save(surveyDataQualityRepository.findAll());
    surveyDataSearchRepository.save(surveyDataRepository.findAll());
    techniqueSearchRepository.save(techniqueRepository.findAll());
    tradeDrugApiSearchRepository.save(tradeDrugApiRepository.findAll());
    tradeDrugSearchRepository.save(tradeDrugRepository.findAll());
    publicationDrugTypesSearchRepository.save(publicationDrugTypesRepository.findAll());
    wrongApiSearchRepository.save(wrongApiRepository.findAll());
}
private void resetIndex() {
    long t = currentTimeMillis();
    elasticsearchTemplate.deleteIndex("_all");
    t = currentTimeMillis() - t;
    logger.debug("ElasticSearch indexes reset in {} ms", t);
}
Please try updating to the latest version of spring-data-elasticsearch.
I'm deploying a little backend with some methods. One of them makes a simple query to retrieve a list of objects. This is the method:
@ApiMethod(path = "getMessagesByCity", name = "getMessagesByCity", httpMethod = ApiMethod.HttpMethod.POST)
public MessageResponse getMessagesByCity(@Named("City_id") Long city) {
    MessageResponse response = new MessageResponse();
    List<Message> message = ofy().load().type(Message.class).filter("city", city).list();
    response.response = 200;
    return response;
}
And this is the Message class:
@Entity
public class Message {
    @Id
    private Long id;
    private String name;
    @Index
    private Long city;
    ...
}
I've read a lot of posts, and all of them mention that this is probably caused by datastore-indexes.xml not being updated automatically. However, the Google docs say this (https://cloud.google.com/appengine/docs/standard/python/config/indexconfig):
"Every Cloud Datastore query made by an application needs a corresponding index. Indexes for simple queries, such as queries over a single property, are created automatically."
So, following that, I think that index-related files are not necessary for me.
If I execute the method "getMessagesByCity" with the simple query:
List<Message> message = ofy().load().type(Message.class).filter("city", city).list();
the backend returns an error 503 with this log message:
"com.google.appengine.api.datastore.DatastoreNeedIndexException: no matching index found. An index is missing but we are unable to tell you which one due to a bug in the App Engine SDK. If your query only contains equality filters you most likely need a composite index on all the properties referenced in those filters."
Any idea? How can I solve it?
You need to upload your index configuration so that Datastore will start to accept your queries with custom projections. Use this command:
gcloud app deploy index.yaml
See https://cloud.google.com/datastore/docs/concepts/indexes for more information about Datastore queries handling and indexes.
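For reference, a declared composite index in index.yaml takes roughly this shape; the Message kind and city property come from the question above, while the second property is purely hypothetical to show the format:

indexes:
- kind: Message
  properties:
  - name: city
  - name: name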
Every time you use a new Datastore query in your code with a different set of filters/orders etc., your index.yaml should update automatically (you might need to run that logic at least once in the local dev server for it to add the new index to the file).
On local dev, the first time you hit the query it should work. HOWEVER, when deploying new indexes there is a lag before they become available in production on the appspot server. We have run into this a lot, and from the Google console you can actually see whether an index build is in progress by going to Datastore > Indexes (https://console.cloud.google.com/datastore/indexes) for the project in question.
If all indexes have a green tick and the issue persists, then this is not the problem and you can debug further; however, if some have spinners next to them, those indexes are still being built and cannot be used until they are finished.
If this is your problem, you can avoid it in the future by deploying index.yaml first through gcloud and only then deploying your application.
Alternatively, make sure you have run the new method/function locally and that index.yaml did in fact get changed; if you use Git or similar, the file should show up as modified after the local server ran the function/method.
I am running into a peculiar issue (peculiar for me, anyway) that seems to happen in a SwingWorker that I use for saving the result of another SwingWorker thread as a tab-delimited file (just a spreadsheet of data).
Here is the worker, which initializes and declares an object that organizes the data and writes each table row to a file (using BufferedWriter):
// Some instance variables outside of the SwingWorker:
// model: holds a matrix of numerical data (double[][])
// view: the GUI class

class SaveWorker extends SwingWorker<Void, Void> {

    /* The finished reordered matrix axes */
    private String[] reorderedRows;
    private String[] reorderedCols;
    private String filePath; // the path of the file that will be generated

    public SaveWorker(String[] reorderedRows, String[] reorderedCols) {
        // variables have been checked for null outside of the worker
        this.reorderedRows = reorderedRows;
        this.reorderedCols = reorderedCols;
    }

    @Override
    protected Void doInBackground() throws Exception {
        if (!isCancelled()) {
            LogBuffer.println("Initializing writer.");
            final CDTGenerator cdtGen = new CDTGenerator(
                    model, view, reorderedRows, reorderedCols);
            LogBuffer.println("Generating CDT.");
            cdtGen.generateCDT();
            LogBuffer.println("Setting file path.");
            filePath = cdtGen.getFilePath(); // stops inside here, jumps to done()
            LogBuffer.println("Path: " + filePath);
        }
        return null;
    }

    @Override
    protected void done() {
        if (!isCancelled()) {
            view.setLoadText("Done!");
            LogBuffer.println("Done saving. Opening file now.");
            // need filePath here to load and then display generated file
            visualizeData(filePath);
        } else {
            view.setReorderOngoing(false);
            LogBuffer.println("Reordering has been cancelled.");
        }
    }
}
When I run the program from Eclipse, this all works perfectly fine; no issues whatsoever. I know there have been tons of questions on here about Eclipse running fine while the runnable JAR fails; it's often due to not including dependencies or referring to them in the wrong way. But what's weird is that the JAR also works completely fine when started from the command line (Windows 8.1):
java -jar reorder.jar
Et voilà, everything works as expected. The CDTGenerator finishes, writes all the matrix rows to a file, and returns the filePath. With the filePath I can subsequently open the new file and display the matrix.
Double-clicking the JAR on my desktop, where I placed it when creating it from Eclipse, is where the program lets me know that something is up. I get the error message I created for the case of filePath == null, and using some logging I closed in on where the CDTGenerator object stops executing its generateCDT() method (the Eclipse debugger also won't reproduce the error and does everything as planned).
What the log shows made me think it's a concurrency issue, but I'm actually leaning against that because both Eclipse and the command line run the code fine. The log just tells me that the code suddenly stops executing during a loop which transforms double values from a matrix row (double[]) into Strings to be stored in a String[] for later writing with BufferedWriter.
If I add more logging in that loop, the loop will stop at a different iteration (???).
Furthermore, the code does work for small matrices (130x130) but not for larger ones (1500x3500); I haven't tested where the limit is. This makes it seem almost time- or memory-dependent.
I also used jVisualVM to look at potential memory issues, but even for the larger matrices I am at ~250MB, which is nowhere near problematic regarding potential OutOfMemoryErrors.
And finally, the last potential factor I can think of: generating the JAR 'fails' due to some classpath issues (clean & rebuild have no effect...), but this has never been an issue before, as I have run the code many times using the 'broken' JAR executed from the desktop.
I am a real newbie to programming, so please point me in some direction if possible. I have tried to find logged exceptions, logged the values of variables, and I am checking for null and IndexOutOfBounds issues at the array where it stops executing... I am at a complete loss, especially because this runs fine from the command line.
It looks like the problem had to do with the Java versions installed on the OP's computer. They checked the file extensions and the programs associated with each one to see whether double-clicking used the same Java version as Eclipse and the command line.
Once they cleaned out the older Java versions, the JAR started to work by double-clicking it :)
Because I don't have enough points (need 50 to answer your question directly), I need to ask this way:
If you double-click a JAR you won't see a console, which is often the problem, because you can't see stack traces; they just get written to "nowhere". Maybe you get an NPE or something else.
Try attaching an exception handler like this: Thread.setDefaultUncaughtExceptionHandler(UncaughtExceptionHandler), and let this handler write a message to a file or some such...
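A minimal sketch of that idea; the log file name is just a placeholder:

import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;

// Installed early in main(): writes any uncaught exception's stack trace to a
// file next to the JAR, so errors are visible even without a console.
Thread.setDefaultUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() {
    @Override
    public void uncaughtException(Thread t, Throwable e) {
        try (PrintWriter out = new PrintWriter(new FileWriter("error.log", true))) {
            out.println("Uncaught exception in thread " + t.getName() + ":");
            e.printStackTrace(out);
        } catch (IOException ignored) {
            // nothing sensible left to do if even logging fails
        }
    }
});

One caveat: SwingWorker catches exceptions thrown in doInBackground() itself and only rethrows them (wrapped in an ExecutionException) when get() is called, so calling get() inside done() is also worth trying.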
Just an idea.
We are using Spring-Data-Mongo to access our MongoDB from a Java application. In general, everything works fine, but I have encountered one odd behavior.
When initializing our repositories in the Java code, we use ensureIndex to create indices on the collections. In a unit test, we read all indices from the collections as IndexInfo objects and check whether those IndexInfo objects contain the fields we want to index in the indexFields member. This also worked fine when we set everything up.
Now it happened that we had to recreate one of the indices in our production environment, so we dropped it and created it again using the Mongo shell. The system seems to run fine and no issues came up. For consistency reasons we then made the same change to our test and even local environments in the same way. Then we noticed that our unit test for index checking fails because the indexFields member is now empty.
I have tried everything I can imagine, but as soon as I create an index using the Mongo shell, Spring does not deliver any index fields anymore, even when I create an index with the identical configuration.
Can anyone tell me why that happens and whether it indicates a problem? Is there a way to fix this without having to drop the collection? I was thinking about dropping the index after our next production release and then triggering an insert. On my local machine this created the index the way I expected, and the test succeeds.
---- Additional info ----
Hi Trisha,
sorry for not acting sooner, but I only just got time to build a small unit test for this.
If you run the following test on an empty db it works fine:
@Test
public void testIndexing() throws Exception {
    this.mongoTemplate.indexOps("testcollection").ensureIndex(
            new Index().on("indexfield", Order.ASCENDING).unique().sparse());
    List<IndexInfo> indexInfos = mongoTemplate.indexOps("testcollection").getIndexInfo();

    assertEquals("We want two indexes, id and indexfield", 2, indexInfos.size());
    for (IndexInfo info : indexInfos) {
        assertEquals("All indexes are only meant to have one field", 1, info.getIndexFields().size());
        if (info.getName().startsWith("indexfield")) {
            assertTrue("Unexpected index field", info.isIndexForFields(Arrays.asList(new String[] { "indexfield" })));
            assertTrue("Index indexfield must be unique", info.isUnique());
            assertTrue("Index indexfield must be sparse", info.isSparse());
            assertFalse("Index indexfield must not be dropping duplicates", info.isDropDuplicates());
        } else if (!"_id_".equals(info.getName())) {
            fail("Unexpected index: '" + info.getName() + "'");
        }
    }
}
Then open the mongo shell and call:
db.testcollection.dropIndexes();
db.testcollection.ensureIndex({"indexfield":1}, {"unique":true, "sparse":true})
The second call should create exactly the same index as the Java code did. Now, if you run the test again, the ensureIndex method does nothing because an index is already there (as it should, I guess), but the test fails on the assert for the index fields. The first assert works fine because the index info is there.
Checking the indexes in the Mongo shell produces the same output no matter whether the index was created via shell or via Java code, but Spring does not get the index fields for some reason when the index is created via shell.
It would be really cool if you could give me a hint on this.
Thanks to your updated question, I was able to reproduce your problem. I finally tracked the issue down to a mismatch between how MongoDB stores the index information and how Spring Data expects it.
When you create an index using the following:
db.testcollection.ensureIndex({"indexfield":1}, {"unique":true, "sparse":true})
under the covers it stores the index as:
{
    "v" : 1,
    "key" : {
        "indexfield" : 1
    },
    "unique" : true,
    "ns" : "TheDatabase.testcollection",
    "name" : "indexfield_1",
    "sparse" : true
}
However, by default MongoDB treats all numbers as floating point, so under the hood it is really thinking of this as:
"key" : {
    "indexfield" : 1.0
},
Spring Data, however, expects this value to be an integer, because that is how it stores the index values it creates itself, and so it cannot correctly parse an index created in the shell. (Incidentally, this means that geo indexes created in the shell will be parsed fine by Spring Data, as those are stored as Strings.)
I would recommend reporting this to the Spring team; you will not be the only one who experiences it. I found the problem in DefaultIndexOperations, lines 138-141, in version 1.1.1 of spring-data-mongodb.
However, you'll be pleased to hear there is a workaround. You can force your command in the shell to store the value as an integer (so Spring can correctly parse the index) using the following command:
db.testcollection.ensureIndex({indexfield: NumberInt(1) }, {unique:true, sparse:true})
It's a bit clumsy, but when I followed your steps and applied this command in the shell, the test passed.
I am working on a web-application that uses Spring MVC.
It has been working fine on Glassfish 3.0.1, but after migrating to Glassfish 3.1 it started acting strangely. Some pages are only partially shown, or show nothing at all, and the log contains a lot of messages of this type:
[#|2012-08-30T11:50:17.582+0200|WARNING|glassfish3.1|javax.enterprise.system.container.web.com.sun.enterprise.web|_ThreadID=69;_ThreadName=Thread-1;|StandardWrapperValve[SpringServlet]: PWC1406: Servlet.service() for servlet SpringServlet threw exception
org.springframework.beans.NotReadablePropertyException: Invalid property 'something' of bean class [com.something.Something]: Bean property 'something' is not readable or has an invalid getter method: Does the return type of the getter match the parameter type of the setter?
at org.springframework.beans.BeanWrapperImpl.getPropertyValue(BeanWrapperImpl.java:729)
at org.springframework.beans.BeanWrapperImpl.getNestedBeanWrapper(BeanWrapperImpl.java:576)
at org.springframework.beans.BeanWrapperImpl.getBeanWrapperForPropertyPath(BeanWrapperImpl.java:553)
at org.springframework.beans.BeanWrapperImpl.getPropertyValue(BeanWrapperImpl.java:719)
at org.springframework.validation.AbstractPropertyBindingResult.getActualFieldValue(AbstractPropertyBindingResult.java:99)
at org.springframework.validation.AbstractBindingResult.getFieldValue(AbstractBindingResult.java:226)
at org.springframework.web.servlet.support.BindStatus.<init>(BindStatus.java:120)
at org.springframework.web.servlet.tags.form.AbstractDataBoundFormElementTag.getBindStatus(AbstractDataBoundFormElementTag.java:178)
at org.springframework.web.servlet.tags.form.AbstractDataBoundFormElementTag.getPropertyPath(AbstractDataBoundFormElementTag.java:198)
at org.springframework.web.servlet.tags.form.AbstractDataBoundFormElementTag.getName(AbstractDataBoundFormElementTag.java:164)
at org.springframework.web.servlet.tags.form.AbstractDataBoundFormElementTag.writeDefaultAttributes(AbstractDataBoundFormElementTag.java:127)
at org.springframework.web.servlet.tags.form.AbstractHtmlElementTag.writeDefaultAttributes(AbstractHtmlElementTag.java:421)
at org.springframework.web.servlet.tags.form.TextareaTag.writeTagContent(TextareaTag.java:95)
at org.springframework.web.servlet.tags.form.AbstractFormTag.doStartTagInternal(AbstractFormTag.java:102)
at org.springframework.web.servlet.tags.RequestContextAwareTag.doStartTag(RequestContextAwareTag.java:79)
The error message isn't incorrect, because the property in question does not have a setter method (it gets its value through the constructor). But as I said, this was not a problem on Glassfish 3.0.1, only on the new server with Glassfish 3.1.
Does anyone know if there is something in this Glassfish version that might cause this? Or is some kind of configuration missing on the new server?
Some code:
Controller:
@ModelAttribute
public SomethingContainer retriveSomethingContainer(@PathVariable final long id) {
    return somethingContainerDao.retrieveSomethingContainer(id);
}

@InitBinder("somethingContainer")
public void initBinderForSomething(final WebDataBinder binder) {
    binder.setAllowedFields(new String[] {
        "something.title",
        "something.description",
    });
}
SomethingContainer:
@Embedded
private final Something something = new Something();

public Something getSomething() {
    return something;
}

// no setter

public String getDescription() {
    return something.getDescription();
}
Update:
Restarting Glassfish actually removes the problem - temporarily. I suspect it might have something to do with the loading of the custom binders. We had some problems with out-of-memory errors, which I thought were related, but those have been fixed without fixing this problem.
Update 2:
On the 3.0.1 server, one of the JVM arguments was -client; on the 3.1 server it was -server. We changed it to -client, and this lowered the frequency of the error a lot: it was happening every other day with -server, and took two weeks to happen with -client.
Update 3:
Some information about the servers (more can be added if requested):
Server 1 (the working one):
Windows Server 2003
Java JDK 6 build 35
Glassfish 3.0.1 build 22
-Xmx1024m
Server 2 (the one with problems):
Windows Server 2008 64-bit
Java JDK 6 build 31
Glassfish 3.1 build 43
-Xmx1088m
-Xms1088m
We are using Spring version 3.1.0.
Update 4:
I recreated the error by renaming a field in a JSP to something that does not exist in the model attribute.
But, more importantly, I noticed something: the fields where the system can't find the getters are often fields of superclasses of the ones referenced in the model attribute. To continue my example, the SomethingContainer really looks like this:
public class SuperSomethingContainer {
    [...]
    private Something something;

    public Something getSomething() {
        return something;
    }
}

public class SomethingContainer extends SuperSomethingContainer {
    [...]
}
The reference in the controller stays as is, so it references a field that lives in the superclass of the object in question.
Update 5:
I tried connecting to the production server with a debugger after the error occurred. I put a breakpoint on the return statement of a controller method returning the object with the error, and checked whether I could access the problematic field at that point. I could, so the problem must lie within Spring MVC or the generated JSP classes.
(Also, the field in error was of the type "someobject.something[0].somethingelse[0]", but when the somethingelse list was empty, there was no error! To me, this implies that it somehow can't find the get method of a list(?))
Update 6:
It seems that the problem has to do with the generation of Java classes from the JSPs. We do not precompile JSPs when deploying, so they are compiled when first used. The problem occurs the first time a page is visited and its JSP is compiled. I also noticed that once the problem has occurred, JSPs compiled afterwards will all give errors. I've kept a few of the problematic generated Java files, and after the next restart I will compare them to the working ones. Getting closer :)
Update 7:
I compared the compiled JSP Java files that resulted in an error with ones that did not, and there was no difference. So that rules that out.
So I now know that the Java object leaving the controller is fine (checked with the debugger), and the Java class generated from the JSP is fine. It must be something in between; now I need to find out what...
Update 8:
Another round of debugging narrowed the problem down some more. It turns out that Spring does some caching of the properties belonging to the various classes. In org.springframework.beans.BeanWrapperImpl, method getPropertyValue, there is the following:
private Object getPropertyValue(PropertyTokenHolder tokens) throws BeansException {
    String propertyName = tokens.canonicalName;
    String actualName = tokens.actualName;
    PropertyDescriptor pd = getCachedIntrospectionResults().getPropertyDescriptor(actualName);
    if (pd == null || pd.getReadMethod() == null) {
        throw new NotReadablePropertyException(getRootClass(), this.nestedPath + propertyName);
    }
The problem is that the cachedIntrospectionResults does not contain the property in question, though it contains every other property of the class. I will need to dig some more to find out why it is missing, and whether it is missing from the start or gets lost somewhere along the line.
Also, I've noticed that the missing properties are those that only have getters and no setters. And it seems to be context-dependent, as indicated by the stack trace: not finding a property when visiting one page does not mean it's unavailable when visiting another.
Update 9:
Another day, more debugging, and I actually found some good stuff. The getCachedIntrospectionResults() call in the previous code block wound up calling CachedIntrospectionResults#forClass(theClassInQuestion). This returned a CachedIntrospectionResults object containing far from all of the expected properties (11 of 21). Going into the forClass method, I found:
static CachedIntrospectionResults forClass(Class beanClass) throws BeansException {
    CachedIntrospectionResults results;
    Object value = classCache.get(beanClass);
    if (value instanceof Reference) {
        Reference ref = (Reference) value;
        results = (CachedIntrospectionResults) ref.get();
    }
    else {
        results = (CachedIntrospectionResults) value;
    }
    if (results == null) {
        // build the CachedIntrospectionResults, store it in classCache and return it.
It turned out that the CachedIntrospectionResults returned was the one found by classCache.get(beanClass), so what was stored in the classCache was corrupted / did not contain all it should. I put a breakpoint on the classCache.get(beanClass) line and ran this through the debugger:
classCache.put(beanClass, null);
When allowing the method to finish and rebuild the CachedIntrospectionResults, things started working again. So what is stored in the classCache is out of sync with what would, and should, be created if it were allowed to rebuild. Whether this is due to something going wrong the first time it is built, or the classCache being corrupted somewhere along the line, I do not currently know.
I'm starting to suspect that this has something to do with classloaders, as I've previously experienced problems caused by changes in the way the classloader works when updating Glassfish.
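As a stopgap while investigating, Spring does expose a public static method for evicting this introspection cache, which forces a rebuild on the next access. A hedged sketch; the class used to look up the classloader is just an example:

import org.springframework.beans.CachedIntrospectionResults;

// Evict the cached introspection results held for the webapp's classloader
// so Spring rebuilds them on next access (example class: SomethingContainer).
CachedIntrospectionResults.clearClassLoader(SomethingContainer.class.getClassLoader());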
There may be more than one possible reason. I am not sure of the actual cause, but I can give you a way to find out the problem.
Step 1: On the server 2 machine, deploy the application on Glassfish 3.0.1 build 22. If it works fine on the server 2 machine, there might be a problem with the Glassfish libraries. The following can be reasons for this problem:
Any library that is missing in Glassfish 3.1 build 43 but present in Glassfish 3.0.1 build 22; you can solve this by copying all libraries from the working Glassfish server to the new server.
The Glassfish libraries may be conflicting with your Spring version. [I faced a similar kind of problem on Tomcat, and when I replaced my Spring libraries from 3.0.1 to 3.0.3 it worked for me.] So replace your Spring libraries with the latest ones.
Step 2: If the result of step 1 is that the application does not run on the server 2 machine on Glassfish 3.0.1 build 22, there may be the following reasons:
Any libraries that you have placed in the Java lib are either not included on this server machine or have different versions.
Any folders that are set on the classpath, or any environment variables used on server 1, either do not exist on server 2, or don't have the jars, or have jars with different versions.
I got a colleague of mine to investigate the error, and he was able to recreate it in a unit test. This was done by invoking the method that builds the CachedIntrospectionResults for a class while at the same time stressing the JVM by adding strings to memory, with very low memory settings. This approach made it fail 20 out of 30000 times.
As to the cause, I only got an oral explanation, so I don't have all the details, but it was something like this: Java has its own introspection results, and these are wrapped by Spring. The problem is that the Java results use soft references, which makes them prone to garbage collection. So when Spring was building its wrappers around these soft references at the exact moment the garbage collector ran, parts of the basis of what Spring was using were cleared, leading to properties being "lost".
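To illustrate just the mechanism (this is not Spring's code, only a self-contained demonstration that softly referenced objects can vanish under memory pressure):

import java.lang.ref.SoftReference;
import java.util.ArrayList;
import java.util.List;

public class SoftReferenceDemo {
    public static void main(String[] args) {
        SoftReference<byte[]> ref = new SoftReference<byte[]>(new byte[1024 * 1024]);
        List<byte[]> pressure = new ArrayList<byte[]>();
        try {
            // Allocate until the heap runs low; the JVM is required to clear
            // softly reachable objects before throwing OutOfMemoryError.
            while (ref.get() != null) {
                pressure.add(new byte[1024 * 1024]);
            }
        } catch (OutOfMemoryError e) {
            pressure.clear();
        }
        // Prints "null" once the referent has been collected mid-run - the
        // same kind of disappearance Spring's wrappers were exposed to.
        System.out.println("Referent now: " + ref.get());
    }
}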
The solution seems to be upgrading from Spring 3.1.0.RELEASE to Spring 3.1.3.RELEASE. There are some changes there, and Spring no longer wraps soft references when determining the properties of a class (soft references are now used only in rare, special cases instead of all the time). After upgrading the Spring version, the error has not been reproducible through the unit test; it remains to be seen whether this holds in practice.
Update: It's been a few weeks, and there is no sign of the error. So updating the Spring version worked :)
I think I've actually found a candidate for the cause of this.
After getting the error on one of the test servers after a very short duration and little use, we did some additional checks on the cause. It turned out that the test server had just half the available memory, which made us look at it a bit more thoroughly. It hadn't used up all its memory, but when using JConsole to investigate the memory usage of the different parts of the new-generation space on the heap, it turned out that one of the survivor spaces was packed full. I'm guessing that this made parts of it overflow, leading the overflowed parts to be GC-ed or to become unreachable by not being where they were supposed to be.
We have yet to verify that this is in fact the problem in the production environment as well, but once the error turns up again we will check, and if it is the case we will change some memory settings to allow more space for the survivor areas of the new-generation heap (-XX:SurvivorRatio=6 or something like that).
So it seems that larger Spring MVC applications need a large survivor space, especially on newer versions of Glassfish.
Indeed, there was an issue with the newly introduced ExtendedBeanInfo class in Spring 3.1.0, which was fixed in Spring 3.1.1 - see https://jira.spring.io/browse/SPR-8347.
I am using Apache JCI's FAM (FilesystemAlterationMonitor) in a Java OSGi service to monitor and handle changes in the file system. Everything seems to be working fairly well, except that whenever I start the service (which starts FAM using the code below), FAM picks up on ALL the changes that exist in the directory.
Currently I am watching /tmp
/tmp includes a subtree: /tmp/foo/bar/cat/dog
Every time I start the service, which starts FAM, it reports DirectoryCreate events for:
/tmp/foo
/tmp/foo/bar
/tmp/foo/bar/cat
/tmp/foo/bar/cat/dog
Even if no changes have been made to any part of that subtree.
Code run on service activation:
File watchFolder = new File("/tmp");
watchFolder.mkdirs();
fam = new FilesystemAlterationMonitor();
fam.setInterval(1000);
fam.addListener(watchFolder, listener);
fam.start();
// I've already tried adding:
listener.waitForFirstCheck();
Listener example:
private FileChangeListener listener = new FileChangeListener() {
    public void onDirectoryChange(File pDir) { System.out.println(pDir.getAbsolutePath()); }
    public void onDirectoryCreate(File pDir) { System.out.println(pDir.getAbsolutePath()); }
    ...
};
Yes, that's one very annoying feature of JCI. When monitoring is started, it will notify you of all the files and directories it finds, with calls to onXxxCreate(). I think you have the following options:
After starting the monitoring, wait for some time (a couple of seconds) in your FileChangeListener callback implementation before you actually process the events coming from JCI. That's what I did in a project, and it works fairly well, although there is the possibility that you miss an actual file creation that happens within the "grace period".
Take the sources of JCI and modify them to use two new event methods, onDirectoryFound(File) and onFileFound(File), that are only fired when files and directories are found on startup of the monitoring.
Take a look at java.nio.file.WatchService, which comes with Java 7 (a short sketch follows below). IMO this is the best option, as it uses native methods internally to be notified of changes by the OS, instead of starting a thread and checking periodically. With JCI, you may see delays in the range of several seconds until changes are propagated to your callbacks.
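A minimal WatchService sketch for comparison; the watched path mirrors the question, and note that unlike FAM it reports only changes made after registration (and does not watch subtrees recursively unless you register each subdirectory):

import java.nio.file.*;
import static java.nio.file.StandardWatchEventKinds.*;

public class WatchDemo {
    public static void main(String[] args) throws Exception {
        Path dir = Paths.get("/tmp");
        WatchService watcher = FileSystems.getDefault().newWatchService();
        dir.register(watcher, ENTRY_CREATE, ENTRY_MODIFY, ENTRY_DELETE);
        while (true) {
            WatchKey key = watcher.take(); // blocks until events arrive
            for (WatchEvent<?> event : key.pollEvents()) {
                System.out.println(event.kind() + ": " + dir.resolve((Path) event.context()));
            }
            if (!key.reset()) {
                break; // the directory is no longer accessible
            }
        }
    }
}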
Forget about WatchService. It is not intuitive, and there are issues with it when trying to detect whether the folder it is monitoring has been deleted or changed. I would stay far away from it. I have worked with Watcher but much prefer Apache Commons IO. I believe Camel uses it as well.