I am unit testing a Play Framework-based application. As I read in the documentation, to clear the state I reload the fixtures before every test like this:
@Before
public void setUp() {
    Fixtures.deleteAll();
    Fixtures.load("data.yml");
    Logger.info("FIXTURES RELOADED");
}
Then I go to the web-based testing platform (http://localhost:9000/#tests), choose a test that fetches some data (User u = User.findById(1l);) and assert against the data. It works.
However, if I try to select the test again, and rerun it, it fails with:
A java.lang.NullPointerException has been caught, Try to read name on null object models.User
If I stop the application completely and restart it, it runs again (the first time), but starting and stopping takes a bit of time and is quite tedious if you do it 10 times a minute.
I am using Play 1.2.5
The problem is that the user ID auto-increments on every insert, while your test tries to fetch the user with ID 1 every time.
You can get the newly created user's ID and use it in your test, or find the user by another field whose value you know for certain.
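For example, a minimal sketch (the "byEmail" finder and the fixture values are hypothetical; adjust them to whatever your data.yml actually defines):

@Test
public void fetchesSeededUser() {
    // Look the user up by a field whose value is defined in data.yml,
    // instead of relying on a hard-coded, auto-generated id.
    // "byEmail" and the address are made-up examples; adapt to your fixture data.
    User u = User.find("byEmail", "bob@example.com").first();
    assertNotNull(u);
    assertEquals("Bob", u.name);
}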
I have a function that deletes some object, thing. This can take a while (half an hour or so), and I want to check whether it was successfully deleted.
@Test
public void successfulThingDelete() {
    Thing thing = new Thing();
    deleteThing(thing);
    if ("deleted".equals(thing.getStatus())) {
        // pass
    } else {
        fail();
    }
}
I want to be able to continually check the status of thing (i.e. thing.getStatus()) and pass the test if it is deleted. But if a certain time elapses and it's not deleted, then the deletion has failed and the test should fail. I'm assuming I need to introduce a new thread for this pinging of the status, but I'm not sure how to add that within this method. Thanks for any help!
I would go for Awaitility. With that you can write tests like:
@Test
public void updatesCustomerStatus() throws Exception {
    // Publish an asynchronous event:
    publishEvent(updateCustomerStatusEvent);
    // Awaitility lets you wait until the asynchronous operation completes:
    await().atMost(5, SECONDS).until(customerStatusIsUpdated());
    ...
}
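Applied to your case, a rough sketch could look like this (Thing and deleteThing are taken from your snippet; the 30-minute bound and 10-second poll interval are just assumptions):

import static java.util.concurrent.TimeUnit.MINUTES;
import static java.util.concurrent.TimeUnit.SECONDS;
import static org.awaitility.Awaitility.await; // com.jayway.awaitility in older versions

@Test
public void successfulThingDelete() {
    Thing thing = new Thing();   // Thing and deleteThing(...) as in your snippet
    deleteThing(thing);

    // Poll every 10 seconds; fail the test if the status has not
    // become "deleted" within 30 minutes.
    await().atMost(30, MINUTES)
           .pollInterval(10, SECONDS)
           .until(() -> "deleted".equals(thing.getStatus()));
}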
I'm guessing that this is a JUnit test. The @Test annotation allows a timeout attribute in the form @Test(timeout=1000) - the value is in milliseconds. So calculate the milliseconds in thirty minutes - 1800000 - and use that. JUnit will fail the test if it isn't finished in that time.
@Test(timeout=1800000)
public void successfulThingDelete(){...
If the test runs its course and finishes before the time limit then the usual coded assertions happen and the test ends. If the test actions take longer, then JUnit will interrupt whatever's running and fail the test overall.
Ref - https://github.com/junit-team/junit4/wiki/timeout-for-tests
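Combined with a simple polling loop, a sketch might look like this (the 10-second sleep is an arbitrary choice; Thing and deleteThing come from the question):

// JUnit 4 runs the test body in a separate thread when timeout is set and
// fails the test if it has not finished within 30 minutes.
@Test(timeout = 1800000)
public void successfulThingDelete() throws InterruptedException {
    Thing thing = new Thing();   // Thing and deleteThing(...) as in the question
    deleteThing(thing);

    // Keep checking until the status flips to "deleted";
    // the timeout above bounds the overall wait.
    while (!"deleted".equals(thing.getStatus())) {
        Thread.sleep(10000); // check every 10 seconds
    }
}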
I'm deploying a little backend with some methods. One of them makes a simple query to retrieve a list of objects. This is the method:
@ApiMethod(path = "getMessagesByCity", name = "getMessagesByCity", httpMethod = ApiMethod.HttpMethod.POST)
public MessageResponse getMessagesByCity(@Named("City_id") Long city) {
    MessageResponse response = new MessageResponse();
    List<Message> message = ofy().load().type(Message.class).filter("city", city).list();
    response.response = 200;
    return response;
}
And this is the Message class:
@Entity
public class Message {
    @Id
    private Long id;
    private String name;
    @Index
    private Long city;
    ...
}
I've read a lot of posts, and all of them mention that this is probably caused by datastore-indexes.xml not being updated automatically. However, the Google docs say this (https://cloud.google.com/appengine/docs/standard/python/config/indexconfig):
Every Cloud Datastore query made by an application needs a
corresponding index. Indexes for simple queries, such as queries over
a single property, are created automatically.
So, following that, I think that index-related files are not necessary for me.
If I execute the method "getMessagesByCity" with the simple query:
List<Message> message = ofy().load().type(Message.class).filter("city", city).list();
The backend returns me an error 503 with this log message:
"com.google.appengine.api.datastore.DatastoreNeedIndexException: no
matching index found. An index is missing but we are unable to tell
you which one due to a bug in the App Engine SDK. If your query only
contains equality filters you most likely need a composite index on
all the properties referenced in those filters."
Any idea? How can I solve it?
You need to upload your index configuration so that Datastore will start to accept your queries with custom projections. You can do that with this command:
gcloud app deploy index.yaml
See https://cloud.google.com/datastore/docs/concepts/indexes for more information about Datastore queries handling and indexes.
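For reference, an index.yaml entry looks roughly like this (the Message kind and the city property come from the question; the extra name property is purely hypothetical, since a single equality filter is normally covered by an automatic index):

indexes:

- kind: Message
  ancestor: no
  properties:
  - name: city
  - name: name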
Every time you use a new Datastore query in your code with a different set of filters/orders etc., your index.yaml should be updated automatically (you might need to run that logic at least once in the local dev server for it to add the new index to the file).
On local dev, the first time you hit it, it should work. HOWEVER, when deploying new indexes there is a lag time before they become available in production / on the appspot server. We have run into this a lot, and from the Google console you can actually see whether an index is still in progress by going to Datastore > Indexes (https://console.cloud.google.com/datastore/indexes) for the project in question.
If all indexes have a green tick and the issue persists, then this is not the problem and you can debug further; however, if some have spinners next to them, those indexes are still being built and cannot be used until they are finished.
If this is your problem, you can avoid it in the future by deploying index.yaml first through gcloud and only then deploying your application.
Alternatively, make sure you have run the new method/function locally and that index.yaml did in fact change; if you use Git or something similar, the file should show up as modified after the local server ran the function/method.
So, back again
I have a JHipster-generated project which uses an Elasticsearch Java client embedded in Spring Boot.
I have recently made some major changes to the datasets, since we've been migrating a whole new bunch of data from different repositories.
When deploying the application everything works fine: all SearchRepositories are loaded with no problem and all search capabilities run smoothly.
The issues come when running from the test environment. There have been no changes whatsoever to the application-test.yml file nor to the Elasticsearch Java config file.
We have some code which updates the indices, and I've run it several times; it seems to update the cluster's indices just fine, but where I'm struggling is the target folder: it just won't create the new indices there.
There are 12 indices that I cannot get into the target folder when running in test mode; however, only 5 of them fail in their ResourceIntTest because of the error mentioned in the title.
I don't want to fill this post with hundreds of irrelevant lines of code, so for now suffice it to include the workaround that keeps the tests from failing:
In the initTest of the 5 failing test cases, if I write the following line (obviously changing the class name in each case):
surveyDataQualitySearchRepository.save(surveyDataQualityRepository.findAll());
then the index will create itself and the test case will not fail. However, this shouldn't be necessary to do manually; the index should be created when the resetIndex method in the IndexReinitializer class is called upon deployment.
resetIndex:
@PostConstruct
public void resetIndex() {
    long t = currentTimeMillis();
    elasticsearchTemplate.deleteIndex("_all");
    t = currentTimeMillis() - t;
    logger.debug("ElasticSearch indexes reset in {} ms", t);
}
Commenting out this piece of code also allows all indices to be loaded, but it should not be commented out, as it serves as an updater for the indices; besides, it works fine in an old version of the application which is still pointing to the old dataset.
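For what it's worth, it looks like resetIndex() deletes every index but nothing recreates them before the tests hit them. An alternative sketch to the save() workaround above would be to recreate the indices explicitly in the test setup, roughly like this (using ElasticsearchTemplate's createIndex/putMapping; SurveyDataQuality is the entity behind surveyDataQualitySearchRepository):

@Before
public void recreateIndices() {
    // Recreate the index that resetIndex() deleted, instead of relying on the
    // first save() call to create it implicitly. SurveyDataQuality is one of
    // the entities whose ResourceIntTest fails; repeat for the other four.
    if (!elasticsearchTemplate.indexExists(SurveyDataQuality.class)) {
        elasticsearchTemplate.createIndex(SurveyDataQuality.class);
        elasticsearchTemplate.putMapping(SurveyDataQuality.class);
    }
}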
All help will be very welcome. I've been on this for almost a full day now trying to understand where the error is coming from, and I'm more than happy to upload any pieces of code that may be relevant to anyone willing to help here.
EDIT: adding the code for the indices rebuild, as requested in the comments.
@Test
public void synchronizeData() throws Exception {
    resetIndex();
    activePharmaIngredientSearchRepository.save(activePharmaIngredientRepository.findAll());
    countrySearchRepository.save(countryRepository.findAll());
    dosageUnitSearchRepository.save(dosageUnitRepository.findAll());
    drugCategorySearchRepository.save(drugCategoryRepository.findAll());
    drugQualityCategorySearchRepository.save(drugQualityCategoryRepository.findAll());
    formulationSearchRepository.save(formulationRepository.findAll());
    innDrugSearchRepository.save(innDrugRepository.findAll());
    locationSearchRepository.save(locationRepository.findAll());
    manufacturerSearchRepository.save(manufacturerRepository.findAll());
    outletTypeSearchRepository.save(outletTypeRepository.findAll());
    publicationSearchRepository.save(publicationRepository.findAll());
    publicationTypeSearchRepository.save(publicationTypeRepository.findAll());
    qualityReferenceSearchRepository.save(qualityReferenceRepository.findAll());
    reportQualityAssessmentAssaySearchRepository.save(reportQualityAssessmentAssayRepository.findAll());
    //rqaaQualitySearchRepository.save(rqaaQualityRepository.findAll());
    rqaaTechniqueSearchRepository.save(rqaaTechniqueRepository.findAll());
    samplingTypeSearchRepository.save(samplingTypeRepository.findAll());
    //surveyDataQualitySearchRepository.save(surveyDataQualityRepository.findAll());
    surveyDataSearchRepository.save(surveyDataRepository.findAll());
    techniqueSearchRepository.save(techniqueRepository.findAll());
    tradeDrugApiSearchRepository.save(tradeDrugApiRepository.findAll());
    tradeDrugSearchRepository.save(tradeDrugRepository.findAll());
    publicationDrugTypesSearchRepository.save(publicationDrugTypesRepository.findAll());
    wrongApiSearchRepository.save(wrongApiRepository.findAll());
}

private void resetIndex() {
    long t = currentTimeMillis();
    elasticsearchTemplate.deleteIndex("_all");
    t = currentTimeMillis() - t;
    logger.debug("ElasticSearch indexes reset in {} ms", t);
}
Please try to update to the latest version of spring-data-elasticsearch
I am facing a problem with job parameters in Spring Batch. I have a job parameter which is optional. The first time, when I pass the job parameter through CommandLineJobRunner, it works. The second time I do not pass any job parameter, but it still picks up the previous one. When I clear my meta-data, the job parameter comes back as null, since I am not passing it. How can I fix this without clearing the meta-data? Does this happen normally in Spring Batch?
Edit: added code.
I am using MapJobRegistry, and next is used while launching the job. When I debugged, I observed that in order to increment the run.id it loads all the previous parameters:
public JobParameters getNext(JobParameters parameters) {
    if (parameters == null) {
        parameters = new JobParameters();
    }
    long id = parameters.getLong(key, 0L) + 1;
    return new JobParametersBuilder(parameters).addLong(key, id).toJobParameters();
}
The first thing to mention is that you should not use the Map... classes. They are not intended for production and, therefore, you are better off using the different JDBC implementations. If you don't want to use a real DB, you can always use an in-memory DB.
But about your initial question:
You are using the CommandLineJobRunner together with the option "next".
Having a look at method CommandLineJobRunner.start() you find the following lines:
if (opts.contains("-next")) {
    JobParameters nextParameters = getNextJobParameters(job);
    Map<String, JobParameter> map = new HashMap<String, JobParameter>(nextParameters.getParameters());
    map.putAll(jobParameters.getParameters());
    jobParameters = new JobParameters(map);
}
You can see that getNextJobParameters is called. Inside this method the data of the previous run is loaded (jobExplorer.getJobInstances(jobIdentifier, 0, 1)), if there was a previous run. If there was one, the job parameters of that old run are returned after applying the incrementer's getNext method - hence, this is the reason you get your old parameters.
Now, this is the technical explanation, but the question that follows is how you should use the "next" and "restart" options in order to get what you want.
Using "next":
- "next" only works as expected if you launch a job with the same name and the same job parameters. Otherwise the results can be confusing. Actually, I use "next" only inside unit and integration tests.
Using "restart":
- you can use "restart" if the previous job execution with the same job name failed. Here too, the job parameters will be taken from your previous launch.
For a normal start of a job, you should use neither "next" nor "restart". A normal start of a job should always have a unique job parameter, for instance a "runid" whose value changes with every start of the job (otherwise you would get a JobInstanceAlready... exception).
In case of unit tests, I use a unique "runId" for every test case, and here I'm using the "next" option.
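As an illustration, a normal start with a unique job parameter could look roughly like this (just a sketch; the jobLauncher and job references and the optional parameter name are placeholders):

import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.core.launch.JobLauncher;

// Launch the job with a parameter that is unique for every start, so neither
// -next nor -restart is involved and no previous parameters are picked up.
public void launchWithUniqueRunId(JobLauncher jobLauncher, Job job) throws Exception {
    JobParameters params = new JobParametersBuilder()
            .addLong("run.id", System.currentTimeMillis()) // unique per launch
            .addString("myOptionalParam", "someValue")     // placeholder optional parameter
            .toJobParameters();
    jobLauncher.run(job, params);
}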
I am quite new to JMeter; I am using it to load test an application. My current setup works fine when running a few threads at a time, but it runs into problems when more users get connected.
Here's the scenario,
sample_1: request table data
sample_2: set table row with empty user column as used by current user
|
'-->post_process_beanshell: check if have error message
sample_3: do other stuff
Currently I am able to check whether the 2nd sample has an error message; the question is, how do I tell Beanshell to go back to the 1st sample when the 2nd sample has an error message?
I would recommend putting your "sample_3" under an If Controller, like:
Loop Controller (define maximum number of n-tries)
sample_1
sample_2
post_process_beanshell
If Controller: condition ${JMeterThread.last_sample_ok}
sample_3
JMeterThread.last_sample_ok is a pre-defined variable which returns "true" if the previous sampler was successful and "false" if not, so if your "sample_2" fails, "sample_3" won't be executed and the whole sequence will start over.
Assuming you want to keep going back to sampler1 until the sampler2 Beanshell check returns true, use a While Controller.
Stick both sampler1 and sampler2 in a While Controller which is conditional on the result of your error check.
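One way to wire that up (a sketch only: the variable name and the error-message check are assumptions; prev and vars are JMeter's pre-defined Beanshell objects) is a Beanshell PostProcessor under sample_2, for example:

// Beanshell PostProcessor under sample_2: flag whether the row grab needs a retry.
// The "error" marker is an assumption; adjust it to your actual response.
String body = prev.getResponseDataAsString();
if (body.contains("error")) {
    vars.put("retryNeeded", "true");
} else {
    vars.put("retryNeeded", "false");
}

Then use ${__jexl3("${retryNeeded}" != "false")} as the While Controller's condition (use __jexl2 on older JMeter versions): on the first pass the variable is not set yet, so the loop runs, and sample_1 and sample_2 keep repeating until the PostProcessor sets retryNeeded to false.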