I have a database and some classes. These classes are linked with @OneToMany and so on.
If I print the object itself with Spring, it contains everything. But if I print it with the Resource feature, it contains only the fields that are not collections or otherwise linked to another class.
How can I add the collections to the output?
By default, Spring Data REST does not show associated resources except as links. If you want them inlined, you have to define projections that describe the fields you want to see, whether they're simple fields like the ones you describe or associated resources. See
http://docs.spring.io/spring-data/rest/docs/current/reference/html/#projections-excerpts
For example, say you have a Service resource with associations to resources like serviceType, serviceGroup, owner, serviceInstances and docLinks. If you want those to show up in the response body, you can create a projection:
package my.app.entity.projection;

import org.springframework.data.rest.core.config.Projection;
...

@Projection(name = "serviceDetails", types = Service.class)
public interface ServiceDetails {
    String getKey();
    String getName();
    ServiceType getType();
    ServiceGroup getGroup();
    Person getOwner();
    List<ServiceInstance> getServiceInstances();
    List<DocLink> getDocLinks();
    String getPlatform();
}
Then GET your URL with the projection:
http://localhost:8080/api/services/15?projection=serviceDetails
The result will include the projected properties:
{
  "name" : "MegaphoneService",
  "key" : "megaphone",
  "type" : {
    "key" : "application",
    "name" : "User Application",
    "description" : "A service that allows users to use a megaphone."
  },
  "owner" : null,
  "serviceInstances" : [ {
    "key" : "megaphone-a-dr",
    "description" : null,
    "loadBalanced" : true,
    "minCapacityDeploy" : null,
    "minCapacityOps" : 50
  }, ... ],
  ...
}
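If you want the projection applied automatically wherever the resource shows up embedded in other responses, you can also register it as an excerpt projection on the repository. A sketch, assuming a standard Spring Data REST repository for Service (the ServiceRepository name and CrudRepository base are illustrative):

```java
import org.springframework.data.repository.CrudRepository;
import org.springframework.data.rest.core.annotation.RepositoryRestResource;

// Hypothetical repository; excerptProjection makes Spring Data REST render
// the ServiceDetails projection for collection and embedded resources.
@RepositoryRestResource(excerptProjection = ServiceDetails.class)
public interface ServiceRepository extends CrudRepository<Service, Long> {
}
```

Note that excerpt projections apply to collection and embedded resources, not to a single-item GET, so the ?projection= parameter is still the way to get this view for one service.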
I'm quite new to Elasticsearch, and I'm not able to set the mapping via the @Field annotation. I'm using Spring Data Elasticsearch 4.3.4. I'm adding settings via the @Setting annotation, which is working. But if I set the field type to Keyword, it is not updated, and Elasticsearch dynamically maps the field type to text. My requirement is to add a normalizer to enable alphabetic sorting on specific fields. Please find my setup below; I really appreciate your help.
Configuration:
Elasticsearch version: 7.11.1
Spring Data Elasticsearch: 4.3.4
Sample code
@Document(indexName = "#{@environment.getProperty('elastic.index.prefix')}-test")
@Setting(settingPath = "/elasticsearch/analyzer.json")
public class ElasticTest {
    @Id
    String id;

    @Field(type = FieldType.Keyword, normalizer = "sort_normalizer")
    private String name;

    @Field
    private String createdDate;

    @Field(type = FieldType.Object)
    private CustomerType customerType;
}
So once the index is created, I can see that the settings were added:
"creation_date" : "1664385255792",
"analysis" : {
  "normalizer" : {
    "sort_normalizer" : {
      "filter" : [
        "lowercase",
        "asciifolding"
      ],
      "type" : "custom",
      "char_filter" : [ ]
    }
  },
  "analyzer" : {
    "custom_pattern_analyzer" : {
      "lowercase" : "true",
      "pattern" : """\W|_""",
      "type" : "pattern"
    }
  }
},
Mapping:
"name" : {
  "type" : "text",
  "fields" : {
    "keyword" : {
      "type" : "keyword",
      "ignore_above" : 256
    }
  }
},
Note: Since I'm working locally, I have to drop and recreate the index multiple times. I'm able to set these field types via curl/Kibana.
Update: Here is how we create the index:

if (!searchTemplate.indexOps(ElasticContract.class).exists()) {
    searchTemplate.indexOps(ElasticContract.class).create();
}
And we also use ElasticsearchRepository for querying.
How are the index and the mapping created?
If you use Spring Data Elasticsearch repositories, the index with the settings and mapping will be created on application startup if it does not yet exist.
If you do not use a repository but use ElasticsearchOperations, you need to create the index yourself:

ElasticsearchOperations operations;

IndexOperations indexOps = operations.indexOps(ElasticTest.class);
indexOps.createWithMapping();

Note that createWithMapping() writes both the settings and the mapping derived from the @Field annotations, whereas a plain create() only applies the settings; that matches your update, which calls create() and therefore never writes the mapping. If you do not create the index at all but just insert some data, then Elasticsearch will automatically create the index and the mapping. The mapping you show is the typical auto-created one for a String field.
The #Field annotation you use is correct, we have a similar setup in one of the tests that tests exactly this behaviour, see https://github.com/spring-projects/spring-data-elasticsearch/blob/main/src/test/java/org/springframework/data/elasticsearch/core/index/MappingBuilderIntegrationTests.java#L126-L145
Supposing we have documents in a Mongo collection with the following format:
{
  "_id" : "5fb3c5ce9997c61e15a9108c",
  "stages" : {
    "stage1" : {
      "type" : "RandomType"
    },
    "stage2" : {
      "type" : "RandomType2"
    },
    "arbitraryStage" : {
      "type" : "RandomType3"
    }
    // Possibly many other stages
  }
  // Fields omitted
}
How can I query a collection of such documents where any stages.X.type is equal to a predefined value? My application doesn't know what X is and doesn't care about it; it should only know that the type of at least one of the stages equals a given value. I'm trying to do this in Morphia, but a plain JS solution would also point me in the right direction, if it's possible with the given data format.
For reference, the class from which this entity originates is the following:
@Entity(value = "stages_collection", noClassnameStored = true)
public class StackOverflowQ {
    @Id
    private ObjectId id;
    @Embedded
    private Map<String, Stage> stages;
    // Rest of fields/setters/getters omitted
}

public class Stage {
    private String type;
    // Rest of fields/setters/getters omitted
}
Even with a blanket check, the database cannot pick the right field, so essentially the best it can do is scan the whole collection; even if you find some way to express this query, it won't be very efficient.
A simple change in schema is better when you expect random field names in the data. In your case:
{
  "stageName" : "Stage 1",
  "type" : "RandomType"
}
You will be able to utilise indexes properly here as well when you scale, and the flexibility remains in your hands for future additions: no code changes are needed when a new stage is required.
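With the array schema, the lookup becomes a plain query on stages.type (in Morphia 2.x something like datastore.find(StackOverflowQ.class).filter(Filters.eq("stages.type", "RandomType")); treat the exact API as an assumption for your version). The matching logic itself, sketched on in-memory data with illustrative class names:

```java
import java.util.ArrayList;
import java.util.List;

public class StageQuerySketch {
    // Illustrative stand-ins for documents under the proposed array schema.
    static class Stage {
        final String stageName;
        final String type;
        Stage(String stageName, String type) { this.stageName = stageName; this.type = type; }
    }

    static class Doc {
        final String id;
        final List<Stage> stages;
        Doc(String id, List<Stage> stages) { this.id = id; this.stages = stages; }
    }

    // Equivalent of find({ "stages.type": type }): keep documents where
    // at least one stage has the requested type.
    static List<Doc> findByStageType(List<Doc> docs, String type) {
        List<Doc> result = new ArrayList<>();
        for (Doc d : docs) {
            if (d.stages.stream().anyMatch(s -> type.equals(s.type))) {
                result.add(d);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<Doc> docs = List.of(
                new Doc("a", List.of(new Stage("Stage 1", "RandomType"))),
                new Doc("b", List.of(new Stage("Stage 1", "Other"))));
        System.out.println(findByStageType(docs, "RandomType").size()); // prints 1
    }
}
```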
So I was looking at the @PutMapping example at the Spring website https://spring.io/guides/tutorials/rest/
I noticed that they call the database to get the Employee entity with that id and then update the entity from the repository with the name and role from the request.
@PutMapping("/employees/{id}")
Employee replaceEmployee(@RequestBody Employee newEmployee, @PathVariable Long id) {
    return repository.findById(id)
        .map(employee -> {
            employee.setName(newEmployee.getName());
            employee.setRole(newEmployee.getRole());
            return repository.save(employee);
        })
        .orElseGet(() -> {
            newEmployee.setId(id);
            return repository.save(newEmployee);
        });
}
That's great for a small example demo, but how do you handle this on a more complex entity?
What if Employee had a list of Laptops?
Let's suppose the list in JSON looks something like:
{
  "name": "John",
  "role": "MyRole",
  "laptops": [
    {
      "model": "abc",
      "serial": "123"
    },
    {
      "model": "xyz",
      "serial": "789"
    }
  ]
}
Of course, if your mapping is correct, the repository will give you back an Employee entity with the list of laptops, including the laptop ids on the Java side.
But what if the user's request looks something like:
{
  "id": 1,
  "name": "John",
  "role": "MyRole",
  "laptops": [
    {
      "model": "abcModified",
      "serial": "123"
    },
    {
      "model": "newModel-xyz was actually removed from the list",
      "serial": "456"
    }
  ]
}
What would you do in this scenario? Are we supposed to send back the foreign keys?
If we were to send the foreign keys, what would stop someone from referencing foreign keys that didn't belong to the entity?
How do you properly map a complex object that may contain lists of objects, which in turn contain other lists that were modified?
Edit: I'm calling the Employee path because, let's say, I need to update both the role and the list of laptops.
First, if you want to update the child objects through the parent without sending the child entity ids, you have to delete the previous children and create them again.
Alternatively, return the child ids in your GET API so the client can send them back in the PUT request; then you can find those children by id, update them, and add any new ones.
But the best approach is to write a separate API for the child entity as well: update an existing child by id (referenced with the foreign key) and create new children through that separate API.
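The "find children by id, update them, add new ones" step can be sketched in plain Java (the Laptop fields come from the question; in JPA you would pair this with orphanRemoval so dropped children are deleted):

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Objects;
import java.util.Set;

public class MergeSketch {
    // Minimal stand-in for the Laptop entity from the question.
    static class Laptop {
        Long id;
        String model;
        String serial;
        Laptop(Long id, String model, String serial) {
            this.id = id; this.model = model; this.serial = serial;
        }
    }

    // Merge the incoming list into the managed list by id:
    // - children absent from the request are removed (orphanRemoval would delete them),
    // - children with a matching id are updated in place,
    // - children without a matching id are treated as new.
    static void mergeLaptops(List<Laptop> managed, List<Laptop> incoming) {
        Set<Long> incomingIds = new HashSet<>();
        for (Laptop l : incoming) {
            if (l.id != null) incomingIds.add(l.id);
        }
        managed.removeIf(existing -> !incomingIds.contains(existing.id));
        for (Laptop in : incoming) {
            Laptop match = managed.stream()
                    .filter(m -> in.id != null && Objects.equals(m.id, in.id))
                    .findFirst().orElse(null);
            if (match != null) {
                match.model = in.model;
                match.serial = in.serial;
            } else {
                managed.add(new Laptop(null, in.model, in.serial)); // id assigned on save
            }
        }
    }

    public static void main(String[] args) {
        List<Laptop> managed = new ArrayList<>(List.of(
                new Laptop(1L, "abc", "123"), new Laptop(2L, "xyz", "789")));
        List<Laptop> incoming = List.of(
                new Laptop(1L, "abcModified", "123"), new Laptop(null, "newModel", "456"));
        mergeLaptops(managed, incoming);
        System.out.println(managed.size()); // prints 2
    }
}
```

Since the merge only matches ids inside the employee's own collection, an id the entity doesn't own simply finds no match and is treated as a new child, which also addresses the concern about clients referencing foreign keys that aren't theirs.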
This is the object of the CcStorePartner class; there is another object of the Partner class inside it.
I want to filter on attributes of the CcStorePartner object and attributes of the Partner class.
This is the web service body, where I have declared the filters on the objects stored in MongoDB. I'm using MongoTemplate.
{
  "target" : "stores",
  "filter" : [
    {
      "storeId" : "a487c",
      "Type" : "contains"
    },
    {
      "partner.partnerCode" : "ucb",
      "Type" : "contains"
    }
  ]
}
What will be the Mongo query that provides the list?
Here is what I'm using:
query.addCriteria(Criteria.where("storeId").regex(".*a487c.*","i"));
query.addCriteria(Criteria.where("partner.partnerCode").regex(".*ucb.*","i"));
I'm receiving an error with this. This is the POJO class, and this is the MongoTemplate code I'm using for the dynamic query.
I have the following JSON, which is a generic wrapper for Messages. From the subject I can determine what the contents are.
{
  "subject" : "P:WORKSPACE:ADDED",
  "msgType" : "FileInfo[]",
  "contents" : [ {
    "lastModified" : 1380552566000,
    "name" : "genSPI.vhd.pshdl",
    "size" : 630,
    "syntax" : "unknown",
    "type" : "pshdl"
  } ]
}
Now, when I read the object with an ObjectReader, the contents will be a generic ArrayList with embedded Maps, as the ObjectReader does not know what to do with the contents. That is OK for me. But how can I create a class from the contents later on? I don't want to use the polymorphic feature of Jackson, as the classes that a Message can contain are not known statically.
The solution I found so far appears rather clumsy to me:

// writer is a Jackson ObjectWriter, mapper an ObjectMapper
final Object json = message.getContents();
final String jsonString = writer.writeValueAsString(json);
final FileInfo[] readValues = mapper.readValue(jsonString, FileInfo[].class);
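If it helps, Jackson's ObjectMapper.convertValue can do the same conversion directly from the Map/List representation, without the intermediate JSON String (the FileInfo stand-in below mirrors the fields from the message above):

```java
import com.fasterxml.jackson.databind.ObjectMapper;

import java.util.List;
import java.util.Map;

public class ConvertValueSketch {
    // Minimal stand-in for the target class from the question.
    public static class FileInfo {
        public long lastModified;
        public String name;
        public long size;
        public String syntax;
        public String type;
    }

    public static void main(String[] args) {
        ObjectMapper mapper = new ObjectMapper();
        // What message.getContents() holds after generic deserialization:
        // a List of Maps.
        Object contents = List.of(Map.of(
                "lastModified", 1380552566000L,
                "name", "genSPI.vhd.pshdl",
                "size", 630,
                "syntax", "unknown",
                "type", "pshdl"));
        // Convert directly from the Map representation to the target type.
        FileInfo[] infos = mapper.convertValue(contents, FileInfo[].class);
        System.out.println(infos[0].name); // prints genSPI.vhd.pshdl
    }
}
```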