I'm deploying a little backend with some methods. One of them makes a simple query to retrieve a list of objects. This is the method:
@ApiMethod(path = "getMessagesByCity", name = "getMessagesByCity", httpMethod = ApiMethod.HttpMethod.POST)
public MessageResponse getMessagesByCity(@Named("City_id") Long city) {
    MessageResponse response = new MessageResponse();
    List<Message> message = ofy().load().type(Message.class).filter("city", city).list();
    response.response = 200;
    return response;
}
And this is the Message class:
@Entity
public class Message {
    @Id
    private Long id;
    private String name;
    @Index
    private Long city;
    ...
}
I've read a lot of posts, and all of them mention that this is probably caused by datastore-indexes.xml not being updated automatically. However, the Google documentation says this (https://cloud.google.com/appengine/docs/standard/python/config/indexconfig):
Every Cloud Datastore query made by an application needs a
corresponding index. Indexes for simple queries, such as queries over
a single property, are created automatically.
So, following that, I think that index-related files are not necessary for me.
If I execute the method "getMessagesByCity" with the simple query:
List<Message> message = ofy().load().type(Message.class).filter("city", city).list();
The backend returns a 503 error with this log message:
"com.google.appengine.api.datastore.DatastoreNeedIndexException: no
matching index found. An index is missing but we are unable to tell
you which one due to a bug in the App Engine SDK. If your query only
contains equality filters you most likely need a composite index on
all the properties referenced in those filters."
Any idea? How can I solve it?
You need to upload your index configuration so that Datastore will start to accept your queries with custom projections. Use this command:
gcloud app deploy index.yaml
See https://cloud.google.com/datastore/docs/concepts/indexes for more information about how Datastore handles queries and indexes.
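For reference, here is a hypothetical index.yaml sketch for the Message kind above (not taken from your project). Note that a single-property equality filter normally gets a built-in index automatically; a composite entry like this one is only needed when a query combines several properties or sort orders:

indexes:

- kind: Message
  ancestor: no
  properties:
  - name: city
  - name: name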
Every time you use a new Datastore query in your code with a different set of filters / orders etc., your index.yaml should be updated automatically (you might need to run that logic at least once in the local dev server for it to add the new index to the file).
On local dev, the first time you hit the query it should work. HOWEVER, when deploying new indexes there is a lag before they become available in production / on the appspot server. We have run into this a lot, and from the Google console you can actually see whether an index is in progress or not by going to Datastore > Indexes (https://console.cloud.google.com/datastore/indexes) for the project in question.
If all indexes have a green tick and the issue persists, then this is not the issue and you can debug further; however, if some have spinners next to them, those indexes are still being built and cannot be used until they are finished.
If this is your problem, you can avoid it in the future by deploying the index.yaml first through gcloud and only then deploying your application.
Alternatively, make sure you have run the new method / function locally, and check that index.yaml did in fact get changed; if you use Git or something similar, the file should show up as modified after the local server has run the function / method.
So, back again
I have a JHipster generated project which uses an elasticsearch java client embedded in spring boot.
I have recently made some major changes to the datasets, since we've been migrating a whole batch of new data from different repositories.
When deploying the application it all works fine: all SearchRepositories load with no problem and all search capabilities run smoothly.
The issues come when running in the test environment. There have been no changes whatsoever to the application-test.yml file nor to the Elasticsearch Java config file.
We have some code which updates the indices, and I've run it several times; it seems to update the cluster's indices just fine. Where I'm suffering is in the target folder: it just won't create the new indices.
There are 12 indices that I cannot get into the target folder when running in test mode; however, only 5 of them fail in their ResourceIntTest because of the error mentioned in the title.
I don't want to fill this post with hundreds of irrelevant lines of code, so suffice for now to include the workaround that keeps the tests from failing:
In the initTest of the 5 failing test cases, if I add the following line (obviously changing the class name in each case):
surveyDataQualitySearchRepository.save(surveyDataQualityRepository.findAll());
Then the index creates itself and the test case does not fail. However, this shouldn't have to be done manually; the index should be created when the resetIndex method in the IndexReinitializer class is called upon deployment.
resetIndex:
@PostConstruct
public void resetIndex() {
    long t = currentTimeMillis();
    elasticsearchTemplate.deleteIndex("_all");
    t = currentTimeMillis() - t;
    logger.debug("ElasticSearch indexes reset in {} ms", t);
}
Commenting out this piece of code also allows all indices to be loaded, but it should not be commented out, as it serves to refresh the indices; besides, it works fine in an old version of the application which still points to the old dataset.
All help will be very welcome; I've been on this for almost a full day now trying to understand where the error is coming from. I'm also more than happy to upload any pieces of code that may be relevant to anyone willing to help.
EDIT: Adding the code for the indices rebuild, as requested in the comments.
@Test
public void synchronizeData() throws Exception {
    resetIndex();
    activePharmaIngredientSearchRepository.save(activePharmaIngredientRepository.findAll());
    countrySearchRepository.save(countryRepository.findAll());
    dosageUnitSearchRepository.save(dosageUnitRepository.findAll());
    drugCategorySearchRepository.save(drugCategoryRepository.findAll());
    drugQualityCategorySearchRepository.save(drugQualityCategoryRepository.findAll());
    formulationSearchRepository.save(formulationRepository.findAll());
    innDrugSearchRepository.save(innDrugRepository.findAll());
    locationSearchRepository.save(locationRepository.findAll());
    manufacturerSearchRepository.save(manufacturerRepository.findAll());
    outletTypeSearchRepository.save(outletTypeRepository.findAll());
    publicationSearchRepository.save(publicationRepository.findAll());
    publicationTypeSearchRepository.save(publicationTypeRepository.findAll());
    qualityReferenceSearchRepository.save(qualityReferenceRepository.findAll());
    reportQualityAssessmentAssaySearchRepository.save(reportQualityAssessmentAssayRepository.findAll());
    //rqaaQualitySearchRepository.save(rqaaQualityRepository.findAll());
    rqaaTechniqueSearchRepository.save(rqaaTechniqueRepository.findAll());
    samplingTypeSearchRepository.save(samplingTypeRepository.findAll());
    //surveyDataQualitySearchRepository.save(surveyDataQualityRepository.findAll());
    surveyDataSearchRepository.save(surveyDataRepository.findAll());
    techniqueSearchRepository.save(techniqueRepository.findAll());
    tradeDrugApiSearchRepository.save(tradeDrugApiRepository.findAll());
    tradeDrugSearchRepository.save(tradeDrugRepository.findAll());
    publicationDrugTypesSearchRepository.save(publicationDrugTypesRepository.findAll());
    wrongApiSearchRepository.save(wrongApiRepository.findAll());
}
private void resetIndex() {
    long t = currentTimeMillis();
    elasticsearchTemplate.deleteIndex("_all");
    t = currentTimeMillis() - t;
    logger.debug("ElasticSearch indexes reset in {} ms", t);
}
Please try updating to the latest version of spring-data-elasticsearch.
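If upgrading alone doesn't help, a workaround worth trying is to recreate each index explicitly right after the delete, so nothing depends on a later save() call to create it. This is a minimal sketch, assuming the ElasticsearchTemplate API from spring-data-elasticsearch; Country here stands in for each of your searchable entities:

@PostConstruct
public void resetIndex() {
    long t = currentTimeMillis();
    // deleteIndex("_all") removes every index; nothing recreates them until an
    // entity is saved, which is why the manual save() calls "fix" the tests.
    elasticsearchTemplate.deleteIndex("_all");
    // Recreate the index and mapping for each searchable entity.
    elasticsearchTemplate.createIndex(Country.class);
    elasticsearchTemplate.putMapping(Country.class);
    // ...repeat for the other entity classes...
    logger.debug("ElasticSearch indexes reset in {} ms", currentTimeMillis() - t);
}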
I am trying to connect to a SAS-driven remote database from within R, using RJDBC. The first time I do a dbConnect, I get an error:
Error in .jcall(drv#jdrv, "Ljava/sql/Connection;", "connect", as.character(url)[1],
: java.lang.NoClassDefFoundError: com/sas/net/crypto/CryptoException
When I do the dbConnect a second time after the first call, it connects fine, and I get back an object of class JDBCConnection.
I looked in the sas.core.jar file (from the latest 94M2 SAS JDBC drivers), and can see CryptoException listed in there. However, I am also curious why it was trying to throw a CryptoException.
Question 1: How can I silently ignore the error on the first dbConnect call?
Question 2: Why was it trying to throw a CryptoException? What can I do to prevent this? (This may cancel question 1.)
This is the same question as shared on the SAS Support Communities page:
https://communities.sas.com/thread/80620
There, you shared the code you are using:
https://github.com/wthielen/wrds/blob/7edfbfe89ddc329618be72e591cc0bd50e294ea4/R/wrds.R#L47
In this code, the problem appears to be that you are trying to set the classpath before initializing the JVM. Calling .jinit() before the call to .jaddClassPath should correct the issue. Also, as noted in the documentation for .jinit, since you are developing a package you may want to use .jpackage instead of .jinit:
https://www.rforge.net/doc/packages/rJava/jpackage.html
I was having the same problems and tried the above solutions with no change.
The solution on my computer was modifying the .Renviron file, which holds the classpath to the Java drivers. Replacing "JDBC_Drivers" with "WRDS_Drivers" was all that was needed:
CLASSPATH="C:/Users/nicol/Documents/WRDS_Drivers/sas.core.jar;C:/Users/nicol/Documents/WRDS_Drivers/sas.intrnet.javatools.jar"
This question is specifically related to the JT400 class ProgramCallDocument, with its method callProgram(String ProgramName).
I've tried wrapping the call in a try/catch, but it's not throwing an exception; the debugger goes into the callProgram method and just sits there indefinitely.
A small amount of specific information about the API is available here:
http://publib.boulder.ibm.com/infocenter/iadthelp/v7r0/index.jsp?topic=/com.ibm.etools.iseries.toolbox.doc/rzahhxpcmlusing.htm
Here's the code that I'm running:
AS400 as400System = AS400Factory.getAS400System()
ProgramCallDocument programCallDocument = new ProgramCallDocument(as400System, "com.sample.xpcml.Sample.xpcml")
programCallDocument.setStringValue("sampleProgramName.value", sampleValue)
Boolean didProgramCallDocumentRunSuccessfullyOnTheAS400 = programCallDocument.callProgram("sampleProgramName")
The last line of that snippet is the one that just sits there. I left out the try/catch for brevity.
The XPCML file that the ProgramCallDocument constructor uses is just a proprietary XML format that IBM uses for specifying the parameter lengths and types for a program call. I can come back and add it if that would be helpful, but the ProgramCallDocument constructor runs validation on the XML, and it didn't come up with any validation errors. I'm not familiar with JT400 or how it does program calls, so any assistance would be wonderful.
As a further note, while doing some more digging on a related issue today, I also found this SO post:
Monitor and handle MSGW messages on a job on an IBM i-series (AS/400) from Java
I think it's relevant to this question, because it's about ways to trap MSGW status on the Java/Groovy side.
It's very likely the called program went into a MSGW status (error).
Check WRKACTJOB JOB(QZRCSRVS) to find the program call job, see its status, and review the job log.
It may be easier to call a native program using the CommandCall class or as a JDBC stored procedure.
Here's an example of the CommandCall usage in Groovy:
sys = AS400Factory.AS400System
cmd = new CommandCall(sys)
if (!cmd.run("CALL MYLIB.MYPGM PARM('${sampleValue}')")) {
    println cmd.messageList
}
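And for the JDBC stored procedure route, here is a minimal Java sketch using the JT400 JDBC driver. The system name, library, program name, and credentials are placeholders, not taken from your code; the point is that a failing program surfaces as an SQLException rather than a hung call:

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;

public class CallPgmExample {
    public static void main(String[] args) throws Exception {
        // The JT400 JDBC driver ships in jt400.jar.
        Class.forName("com.ibm.as400.access.AS400JDBCDriver");
        try (Connection conn = DriverManager.getConnection(
                "jdbc:as400://MYSYSTEM", "user", "password");
             CallableStatement cs = conn.prepareCall("CALL MYLIB.MYPGM(?)")) {
            cs.setString(1, "sampleValue");
            cs.execute(); // program errors come back as SQLExceptions
        }
    }
}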
I am working on a web-application that uses Spring MVC.
It has been working fine on Glassfish 3.0.1, but when migrating to Glassfish 3.1 it started acting strange: some pages are only partially showing, or showing nothing at all, and the log contains a lot of messages of this type:
[#|2012-08-30T11:50:17.582+0200|WARNING|glassfish3.1|javax.enterprise.system.container.web.com.sun.enterprise.web|_ThreadID=69;_ThreadName=Thread-1;|StandardWrapperValve[SpringServlet]: PWC1406: Servlet.service() for servlet SpringServlet threw exception
org.springframework.beans.NotReadablePropertyException: Invalid property 'something' of bean class [com.something.Something]: Bean property 'something' is not readable or has an invalid getter method: Does the return type of the getter match the parameter type of the setter?
at org.springframework.beans.BeanWrapperImpl.getPropertyValue(BeanWrapperImpl.java:729)
at org.springframework.beans.BeanWrapperImpl.getNestedBeanWrapper(BeanWrapperImpl.java:576)
at org.springframework.beans.BeanWrapperImpl.getBeanWrapperForPropertyPath(BeanWrapperImpl.java:553)
at org.springframework.beans.BeanWrapperImpl.getPropertyValue(BeanWrapperImpl.java:719)
at org.springframework.validation.AbstractPropertyBindingResult.getActualFieldValue(AbstractPropertyBindingResult.java:99)
at org.springframework.validation.AbstractBindingResult.getFieldValue(AbstractBindingResult.java:226)
at org.springframework.web.servlet.support.BindStatus.<init>(BindStatus.java:120)
at org.springframework.web.servlet.tags.form.AbstractDataBoundFormElementTag.getBindStatus(AbstractDataBoundFormElementTag.java:178)
at org.springframework.web.servlet.tags.form.AbstractDataBoundFormElementTag.getPropertyPath(AbstractDataBoundFormElementTag.java:198)
at org.springframework.web.servlet.tags.form.AbstractDataBoundFormElementTag.getName(AbstractDataBoundFormElementTag.java:164)
at org.springframework.web.servlet.tags.form.AbstractDataBoundFormElementTag.writeDefaultAttributes(AbstractDataBoundFormElementTag.java:127)
at org.springframework.web.servlet.tags.form.AbstractHtmlElementTag.writeDefaultAttributes(AbstractHtmlElementTag.java:421)
at org.springframework.web.servlet.tags.form.TextareaTag.writeTagContent(TextareaTag.java:95)
at org.springframework.web.servlet.tags.form.AbstractFormTag.doStartTagInternal(AbstractFormTag.java:102)
at org.springframework.web.servlet.tags.RequestContextAwareTag.doStartTag(RequestContextAwareTag.java:79)
The error message isn't incorrect, since the property in question does not have a setter method (it gets its value through the constructor). But like I said, this was not a problem when using Glassfish 3.0.1, only on the new server with Glassfish 3.1.
Does anyone know if there is something in the Glassfish version that might cause this? Or is it some kind of configuration that is missing on the new server?
Some code:
Controller:
@ModelAttribute
public SomethingContainer retrieveSomethingContainer(@PathVariable final long id) {
    return somethingContainerDao.retrieveSomethingContainer(id);
}
@InitBinder("somethingContainer")
public void initBinderForSomething(final WebDataBinder binder) {
    binder.setAllowedFields(new String[] {
        "something.title",
        "something.description",
    });
}
SomethingContainer:
@Embedded
private final Something something = new Something();

public Something getSomething() {
    return something;
}

// no setter

public String getDescription() {
    return something.getDescription();
}
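For context on why the missing setter is normally fine: Spring resolves a nested path like "something.title" by calling the getter for each intermediate segment and only needs a setter on the leaf property. A simplified, hypothetical sketch of what the binder effectively does under the hood:

import org.springframework.beans.BeanWrapper;
import org.springframework.beans.BeanWrapperImpl;

SomethingContainer container = new SomethingContainer();
BeanWrapper wrapper = new BeanWrapperImpl(container);
// Walks getSomething() and then calls setTitle(...) on the returned Something,
// so SomethingContainer itself needs no setter for "something".
wrapper.setPropertyValue("something.title", "new title");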
Update:
Restarting Glassfish actually removes the problem, temporarily. I suspect that it might have something to do with the loading of the custom binders. We had some problems with out-of-memory errors, which I thought were related, but those have been fixed without fixing this problem.
Update 2:
On the 3.0.1 server, one of the JVM arguments was -client; on the 3.1 server, it was -server. We changed it to -client, and this made the frequency of the error go down a lot: it was happening every other day with -server, but it took two weeks to happen with -client.
Update 3:
Some information about the servers (more can be added if requested..)
Server1 (the working one):
Windows Server 2003
Java jdk 6 build 35
Glassfish 3.0.1 build 22
-Xmx1024m
Server2 (the one with problems):
Windows Server 2008 64-bit
Java jdk 6 build 31
Glassfish 3.1 build 43
-Xmx1088m
-Xms1088m
We are using Spring version 3.1.0.
Update 4:
I recreated the error by renaming a field in a JSP to something that does not exist in the model attribute.
But, more importantly, I noticed something: the fields where the system can't find the getters are often fields of superclasses of the ones referenced in the model attribute. To continue my example, the SomethingContainer really looks like this:
public class SuperSomethingContainer {
    [...]
    private Something something;

    public Something getSomething() {
        return something;
    }
}

public class SomethingContainer extends SuperSomethingContainer {
    [...]
}
The reference in the controller stays as is, so it's referencing a field that is in the superclass of the object in question.
Update 5:
I tried connecting to the production server with a debugger after the error occurred. I put a breakpoint on the return statement of a controller method returning the object with the error, and checked whether I could access the problematic field at that point. I could, so the problem must lie within Spring MVC / the generated JSP classes.
(Also, the field in error was of the type "someobject.something[0].somethingelse[0]", but when the somethingelse list was empty, there was no error! To me, this implies that it somehow can't find the get method of a list(?))
Update 6:
It seems that the problem has to do with the generation of Java classes from the JSPs. We do not precompile JSPs when deploying, so they are compiled when first used. The problem occurs the first time a page is visited and its JSP is compiled. I also noticed that once the problem has occurred, JSPs compiled afterwards will all give errors. I've kept a few of the problematic generated Java files, and upon the next restart I will compare them to the working ones. Getting closer :)
Update 7:
I compared the compiled JSP Java files that resulted in an error with ones that did not, and there was no difference. So that kinda rules that out.
So, I now know that the Java object leaving the controller is fine (checked with the debugger), and the Java class generated from the JSP is fine. It must be something in between; now I need to find out what...
Update 8:
Another round of debugging, and I've narrowed the problem down some more. It turns out that Spring does some caching of the properties belonging to the various classes. In org.springframework.beans.BeanWrapperImpl, method getPropertyValue, there is the following:
private Object getPropertyValue(PropertyTokenHolder tokens) throws BeansException {
    String propertyName = tokens.canonicalName;
    String actualName = tokens.actualName;
    PropertyDescriptor pd = getCachedIntrospectionResults().getPropertyDescriptor(actualName);
    if (pd == null || pd.getReadMethod() == null) {
        throw new NotReadablePropertyException(getRootClass(), this.nestedPath + propertyName);
    }
The problem is that the cachedIntrospectionResults does not contain the property in question; it does contain every other property of the class, though. I will need to dig some more to find out why it is missing, whether it is missing from the start or gets lost somewhere along the line.
Also, I've noticed that the missing properties are those that only have getters, no setters. And it seems to be context-aware, as indicated by the stack trace: not finding a property when visiting one page does not mean it's unavailable when visiting another.
Update 9:
Another day, more debugging, and I actually found some good stuff. The getCachedIntrospectionResults() call in the previous code block wound up calling CachedIntrospectionResults#forClass(theClassInQuestion). This returned a CachedIntrospectionResults object containing far from all of the expected properties (11 of 21). Going into the forClass method, I found:
static CachedIntrospectionResults forClass(Class beanClass) throws BeansException {
    CachedIntrospectionResults results;
    Object value = classCache.get(beanClass);
    if (value instanceof Reference) {
        Reference ref = (Reference) value;
        results = (CachedIntrospectionResults) ref.get();
    }
    else {
        results = (CachedIntrospectionResults) value;
    }
    if (results == null) {
        // build the CachedIntrospectionResults, store it in classCache and return it.
It turned out that the CachedIntrospectionResults returned was the one found by classCache.get(beanClass), so what was stored in the classCache was corrupted / did not contain all that it should. I put a breakpoint on the classCache.get(beanClass) line, and ran the following through the debugger:
classCache.put(beanClass, null);
When allowing the method to finish and rebuild the CachedIntrospectionResults, things started working again. So what is stored in the classCache is out of sync with what would, and should, be created if it were allowed to rebuild. Whether this is due to something going wrong the first time it is built, or to the classCache being corrupted somewhere along the line, I do not currently know.
I'm starting to suspect that this has something to do with classloaders, as I've previously experienced problems due to changes in the way the classloader works when updating Glassfish.
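As a debugging aid (a sketch I have not verified as a fix): Spring exposes a public static method to evict its cached introspection results for a class loader, which is essentially what my classCache.put(beanClass, null) experiment did by hand. If the cache is what's corrupted, forcing a rebuild like this should make the page work again:

// Evict Spring's cached introspection results for the web app's class loader,
// forcing CachedIntrospectionResults.forClass(...) to rebuild on the next access.
org.springframework.beans.CachedIntrospectionResults.clearClassLoader(
        SomethingContainer.class.getClassLoader());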
There may be more than one possible cause. I am not sure of the actual one, but I can give you a way to track down the problem.
Step 1: On the server 2 machine, deploy the application on Glassfish 3.0.1 build 22. If it works fine there, there might be a problem with the Glassfish libraries; the following could be the reason:
A library that is present in Glassfish 3.0.1 build 22 is missing in Glassfish 3.1 build 43. You can solve this by copying all libraries from the working Glassfish server to the new server.
The Glassfish libraries may be conflicting with your Spring version. (I faced a similar problem on Tomcat; when I replaced my Spring libraries from 3.0.1 with 3.0.3, it worked for me.) So replace your Spring libraries with the latest ones.
Step 2: If the result of step 1 is that the application does not run on the server 2 machine on Glassfish 3.0.1 build 22 either, there may be the following reasons:
Some libraries that you placed in the Java lib directory are either not included on this server machine or have different versions.
Some folders that are on the classpath, or environment variables used on server 1, either do not exist on server 2, or are missing jars, or have jars with different versions.
I got a colleague of mine to investigate the error, and he was able to recreate it in a unit test. He did this by invoking the method that builds CachedIntrospectionResults for a class while simultaneously stressing the JVM by allocating strings, with very low memory settings. This approach made it fail 20 out of 30000 times.
As to the cause, I only got an oral explanation, so I don't have all the details, but it was something like this: Java has its own introspection results, and these are wrapped by Spring. The problem is that the Java results use soft references, which makes them prone to garbage collection. So when Spring was building its wrappers around these soft references at the exact moment the garbage collector ran, the GC actually cleared some of the data Spring was reading, leading to properties being "lost".
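I don't have the colleague's actual test, but a rough reconstruction of the idea (run with a very small heap, e.g. -Xmx16m, so the garbage collector kicks in while introspection results are being built) could look something like this:

import static org.junit.Assert.assertTrue;

import java.util.ArrayList;
import java.util.List;
import org.junit.Test;
import org.springframework.beans.BeanWrapper;
import org.springframework.beans.BeanWrapperImpl;
import org.springframework.beans.CachedIntrospectionResults;

public class IntrospectionStressTest {

    @Test
    public void introspectionSurvivesGcPressure() {
        List<String> ballast = new ArrayList<String>();
        for (int i = 0; i < 30000; i++) {
            // Drop the cached results so Spring has to rebuild them each round.
            CachedIntrospectionResults.clearClassLoader(
                    SomethingContainer.class.getClassLoader());
            BeanWrapper wrapper = new BeanWrapperImpl(new SomethingContainer());
            // Create memory pressure so a GC is likely during the rebuild above.
            ballast.add(new String(new char[8192]));
            if (ballast.size() > 512) {
                ballast.clear();
            }
            assertTrue("lost property in round " + i,
                    wrapper.isReadableProperty("something"));
        }
    }
}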
The solution seems to be upgrading from Spring 3.1.0.RELEASE to Spring 3.1.3.RELEASE. There, some things have changed, and Spring no longer wraps soft references when determining the properties of a class (soft references are now used in rare, special cases instead of all the time). After upgrading Spring, the error has not been reproducible through the unit test; it remains to be seen whether this also holds in practical use.
Update: It's been a few weeks, and no sign of the error. So updating the Spring version worked :)
I think I've actually found a candidate for the cause of this.
After getting the error on one of the test servers after a very short time and little use, we did some additional checks on the cause. It turned out that the test server had just half the available memory, which made us look at memory a bit more thoroughly. The server hadn't used up all its memory, but when we used JConsole to investigate the usage of the different parts of the new generation space on the heap, it turned out that one of the survivor spaces was packed full. I'm guessing that this made parts of it overflow, leading the overflowed parts to be GC-ed or to become unreachable by not being where they were supposed to be.
We have yet to verify that this is in fact the problem in the production environment as well, but once the error turns up again we will check, and if it is, we will change some memory settings to allow more space for the survivor areas of the new generation heap (-XX:SurvivorRatio=6 or something like that).
So it seems that larger Spring MVC applications need a large survivor space, especially on newer versions of Glassfish.
Indeed, there was an issue with the newly introduced ExtendedBeanInfo class in Spring 3.1.0, which was fixed in Spring 3.1.1; see https://jira.spring.io/browse/SPR-8347.