With relational databases, I can create a model in Java because I know how many columns a table has and their respective names. But MongoDB doesn't usually work with such a static schema (if I'm not wrong). So I want to create a model in Java that holds all the data, converts it into JSON, and sends it in the response of a web service.
I could do this by returning plain Document or DBObject objects, but then the ID is converted to:
"_id": {
"timestamp": 1505194179,
"machineIdentifier": 13503772,
"processIdentifier": 3816,
"counter": 1819499,
"date": 1505194179000,
"time": 1505194179000,
"timeSecond": 1505194179
}
whereas I just need a single ID value to use as an identifier in further network calls.
So I want to know a best practice or strategy to achieve this. I'm using Spring Boot.
I'm very new to MongoDB with Spring Boot, so bear with me if my understanding is wrong.
Edit: In Spring Boot it seems necessary to define an entity class to access the data, but I'd like to know another way, where no model has to be predefined and it can be as dynamic as the schema in MongoDB.
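As an aside on that expanded `_id`: it is just the decomposed form of a legacy BSON ObjectId (4 bytes of timestamp, 3 of machine id, 2 of process id, 3 of counter), so the usual 24-character hex string can be recovered with the driver via org.bson.types.ObjectId.toHexString(). As a dependency-free sketch, it can even be reassembled by hand from the four components (values taken from the example above):

```java
public class ObjectIdHex {
    // A legacy BSON ObjectId is 4 bytes of timestamp, 3 of machine id,
    // 2 of process id and 3 of counter; the usual string form is simply
    // those 12 bytes printed as 24 hex characters.
    static String toHex(long timestamp, int machine, int process, int counter) {
        return String.format("%08x%06x%04x%06x", timestamp, machine, process, counter);
    }

    public static void main(String[] args) {
        // Field values from the expanded _id shown above
        System.out.println(toHex(1505194179L, 13503772, 3816, 1819499));
        // -> 59b770c3ce0d1c0ee81bc36b
    }
}
```

In practice, rather than reassembling it yourself, map the raw Document's `_id` to its hex string (or call Document.toJson()) before serializing the response.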
I'm attempting to create a Spring Data MongoDB repository using Spring WebFlux for my server backend. I'm still very new to Spring Data, and I'm trying to figure out how to return a document from a manual reference using only a single GET request to my server.
For instance, let's say that I have the following document structure for my Person documents:
{
    "_id": "123",
    "person": "Test Person",
    "address": {
        "refId": "456",
        "name": "House"
    },
    "occupation": "Test occupation"
}
and the following structure for my Address documents:
{
    "_id": "456",
    "street": "Test Street",
    "houseNumber": "404"
}
I created a PersonRepository and an AddressRepository, extending ReactiveMongoRepository<Person, String> and ReactiveMongoRepository<Address, String> respectively. This works well when I want to query for individual Persons and Addresses; I've tested it and it behaves as expected. My frontend can make GET requests to this backend (of the form /person/{id} and /address/{id}) and retrieve the correct data. However, whenever I want to obtain the Address for a specific Person, I need to make two GET requests: one to get the Person, from which I parse the address "refId", which I then plug into a second GET request to the Address endpoint. While this is logical, it's an extra round trip to the server. Is there some way to describe a query in my backend so that I can make a single GET request to a /person/{id}/address endpoint, which finds the Person document with the specified id and returns the Address document referenced by its "refId" (not the embedded "address" part of the Person document; I want the actual Address document)?
I haven't tried much yet, as I don't have many ideas. Since I'm using Spring WebFlux, all of my server responses are Publishers (Mono/Flux), because that is what the repository returns, and I'm not sure how to work with them before returning a ServerResponse object.
Thank you.
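The shape of the composition being asked for here is a chained asynchronous lookup: resolve the Person, extract the refId, then resolve the Address. In WebFlux that would be roughly personRepository.findById(id).flatMap(p -> addressRepository.findById(p.getAddress().getRefId())) (accessor names assumed from the document structure above, not confirmed). A dependency-free sketch of the same chaining using CompletableFuture, with maps standing in for the two repositories:

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;

public class ChainedLookup {
    // Stand-ins for the two collections; in the real backend these would be
    // ReactiveMongoRepository#findById calls returning Mono<Person> / Mono<Address>.
    static final Map<String, String> personToAddressRef = Map.of("123", "456");
    static final Map<String, String> addresses = Map.of("456", "Test Street 404");

    static CompletableFuture<String> findAddressForPerson(String personId) {
        return CompletableFuture.supplyAsync(() -> personToAddressRef.get(personId))
                // thenCompose plays the role of Mono.flatMap here: chain the
                // second async lookup on the result of the first
                .thenCompose(refId ->
                        CompletableFuture.supplyAsync(() -> addresses.get(refId)));
    }

    public static void main(String[] args) {
        System.out.println(findAddressForPerson("123").join());
    }
}
```

A /person/{id}/address handler would return the composed Mono directly; WebFlux subscribes to it when rendering the ServerResponse, so no manual blocking is needed.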
I have an updateProvider(ProviderUpdateDto providerUpdt) method in my Spring controller, but I don't see the need to send the whole payload of the provider entity if, for example, the client only updates the name or some other attribute. That is, it isn't necessary to send the whole entity when only one field needs updating; doing so produces excessive bandwidth consumption.
What is the best practice for sending only the fields that are going to be updated, and how can I build such a DTO dynamically? And how would I do this if I'm using Spring Boot to build my API?
You can use the Jackson library: it provides the annotation @JsonInclude(Include.NON_NULL), and with it only properties with non-null values will be passed to your client.
See http://www.baeldung.com/jackson-ignore-null-fields for an example.
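A minimal sketch of what that looks like on the DTO (the field names are illustrative, not taken from the question):

```java
import com.fasterxml.jackson.annotation.JsonInclude;

// Fields left null are simply absent from the serialized JSON,
// so an update payload only carries what actually changed.
@JsonInclude(JsonInclude.Include.NON_NULL)
public class ProviderUpdateDto {
    private String name;
    private String phone;
    // getters and setters omitted
}
```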
There are many techniques to improve bandwidth usage:
don't pretty-print the JSON
enable HTTP GZIP compression
However, it is more important to ensure your API is logically sound: omitting some fields may break business rules, and a too fine-grained API design will increase interface complexity.
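For the GZIP point, Spring Boot can do this with configuration alone; in application.properties (the size threshold here is illustrative):

```properties
# Compress HTTP responses above 1 KB for the listed content types
server.compression.enabled=true
server.compression.min-response-size=1024
server.compression.mime-types=application/json,application/xml,text/plain
```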
Another option would be to have a DTO object for field changes which would work for every entity you have. E.g:
class EntityUpdateDTO {
    // The class of the object you are updating. Or just use a custom identifier
    private Class<? extends DTO> entityClass;
    // the id of such object
    private Long entityId;
    // the fields you are updating
    private String[] updateFields;
    // the values of those fields...
    private Object[] updateValues;
}
An example of the JSON object:
{
    "entityClass": "MyEntityDTO",
    "entityId": 324123,
    "updateFields": [
        "property1",
        "property2"
    ],
    "updateValues": [
        "blabla",
        25
    ]
}
Might bring some issues if any of your updateValues are complex objects themselves though...
Your API would become updateProvider(EntityUpdateDTO update);.
Of course, you should leave out the entityClass field if you have an update API for each DTO, as you'd already know which entity class you are working on...
Still, unless you are working with huge objects I wouldn't worry about bandwidth.
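To make the EntityUpdateDTO idea concrete, the server side needs something that walks updateFields/updateValues in parallel and applies them to the loaded entity. A minimal reflection-based sketch (no validation, flat fields only, the Provider class is illustrative; treat this as a starting point rather than production code):

```java
import java.lang.reflect.Field;

public class UpdateApplier {
    // Apply parallel field-name/value arrays to an entity via reflection.
    static void apply(Object entity, String[] fields, Object[] values) throws Exception {
        for (int i = 0; i < fields.length; i++) {
            Field f = entity.getClass().getDeclaredField(fields[i]);
            f.setAccessible(true);
            f.set(entity, values[i]);
        }
    }

    // Illustrative entity, not from the question
    static class Provider {
        String name;
        String city;
    }

    public static void main(String[] args) throws Exception {
        Provider p = new Provider();
        apply(p, new String[]{"name", "city"}, new Object[]{"ACME", "Lima"});
        System.out.println(p.name + " " + p.city);
    }
}
```

Complex nested values, type mismatches, and fields that must not be writable all need explicit handling before this is safe to expose.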
According to the Elasticsearch documentation it is possible to exclude a field from the _all field using the include_in_all setting (set to false). I need to exclude a field from _all, and I'm using Spring Data Elasticsearch to define my mappings, but I haven't found a way to do this.
Is this possible using spring data elasticsearch annotations?
Unfortunately, Spring Data Elasticsearch cannot do this. The improvement request is already a year old, but its priority is minor, so there is no hope for now:
https://jira.spring.io/browse/DATAES-226
In my last project I had to run a lot of simple searches (instead of one search against the _all field, I ran one search per field) and then united all the results. I assume there is no nice solution to this problem with Spring Data Elasticsearch.
You can save the mapping of your type into a JSON file and add "include_in_all": false to the field you want to exclude. It should look something like this:
{
    "my_type": {
        "properties": {
            "excludedField": {
                "type": "text",
                "include_in_all": false
            }
        }
    }
}
Then you load the file using your favorite JSON reader, parse it as a string, and change your mapping with the Elasticsearch API:
client.admin().indices()
    .preparePutMapping("my_index")
    .setType("my_type")
    .setSource(loadedFileString)
    .get();
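Reading the mapping file into the loadedFileString used above needs nothing beyond the JDK (the file name is illustrative):

```java
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class MappingLoader {
    // Read the whole mapping file into the String passed to setSource(...)
    static String load(Path mappingFile) throws Exception {
        return new String(Files.readAllBytes(mappingFile), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(load(Paths.get("my_type_mapping.json")));
    }
}
```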
EDIT: I just noticed you wanted to use annotations for it. Maybe the @Field annotation has a parameter for this? I'm sorry, I have no experience with the annotations.
I have this model in the database:
A Country has many States, each of which has many Cities, and each City has a Mayor. The bold items are independent tables with reference keys to one another; they are also Java classes/models.
I'd like to construct JSON in this format for a JS library:
{
    "Country1": [
        "State1": [
            "City1": [
                "Mr.Mayor"
            ],
            "City2": [
                "Mrs.Mayor"
            ]
        ],
        "State2": [
            "City1": [
                "Mr.Mayor"
            ]
.....
This is currently implemented as a query that joins all of them into one list of all Countries with their States and Cities. Then, while looping over the result set, I construct the above JSON. What is the best/fastest way to do this? I am not using an ORM or JPA, just MVC with the queries in DAOs.
Try building a Multimap<Country, Multimap<State, Map<City, Mayor>>>. Be careful to use the correct types while serializing and deserializing. For example, if you are using Gson, you will need to use the TypeToken class.
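If you would rather avoid the Guava/Gson dependencies, the same shape can be built with nested JDK maps while looping over the joined result set. A sketch, assuming each row carries (country, state, city, mayor) in that order:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class MayorTree {
    // Fold flat (country, state, city, mayor) rows into a nested map that
    // serializes to the desired Country -> State -> City -> Mayor JSON.
    static Map<String, Map<String, Map<String, String>>> build(List<String[]> rows) {
        Map<String, Map<String, Map<String, String>>> tree = new LinkedHashMap<>();
        for (String[] row : rows) {
            tree.computeIfAbsent(row[0], k -> new LinkedHashMap<>())
                .computeIfAbsent(row[1], k -> new LinkedHashMap<>())
                .put(row[2], row[3]);
        }
        return tree;
    }

    public static void main(String[] args) {
        List<String[]> rows = List.of(
                new String[]{"Country1", "State1", "City1", "Mr.Mayor"},
                new String[]{"Country1", "State1", "City2", "Mrs.Mayor"});
        System.out.println(build(rows));
    }
}
```

Any JSON serializer can then write the nested map out directly; LinkedHashMap keeps the insertion order from the result set.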
I've got an object design question.
I'm building a json api in Java. My system uses pojos to represent json objects and translates them from json to pojo using Jackson. Each object needs to take different forms in different contexts, and I can't decide whether to create a bunch of separate classes, one for each context, or try to make a common class work in all circumstances.
Let me give a concrete example.
The system has users. The API has a service to add, modify, and delete users. There is a table of users in a database. The database record looks like this:
{
    id: 123, // autoincrement
    name: "Bob",
    passwordHash: "random string",
    unmodifiable: "some string"
}
When you POST/add a user, your pojo should not include an id, because that's autogenerated. You also want to be able to include a password, which gets hashed and stored in the db.
When you PUT/update a user, your pojo shouldn't include the unmodifiable field, but it must include the id, so you know what user you're modifying.
When you GET/retrieve the user, you should get all fields except the passwordHash.
So the pojo that represents the user has different properties depending on whether you're adding, updating, or retrieving the user. And it has different properties in the database.
So, should I create four different pojos in my system and translate among them? Or create one User class and try to make it look different in different circumstances, using Jackson views or some other mechanism?
I'm finding the latter approach really hard to manage.
In my opinion you should create only one POJO, User, which has all the needed properties. Then decide whether your API is rigorous or lenient: a rigorous API returns an error when it receives wrong JSON data, while a lenient API skips superfluous (unnecessary) properties.
Before I provide an example, let me change the 'passwordHash' property to 'password'.
Add new user/POST
JSON data from client:
{
    id: 123,
    name: "Bob",
    password: "random string",
    unmodifiable: "some string"
}
Rigorous version can return for example something like this:
{
    "status": "ERROR",
    "errors": [
        {
            "errorType": 1001,
            "message": "Id field is not allowed in POST request."
        }
    ]
}
Lenient version can return for example something like this:
{
    "status": "SUCCESS",
    "warnings": [
        "Id field was omitted."
    ]
}
For each CRUD method you can write a set of unit tests that document which way you chose and what is and is not allowed.
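One way to pin down the rigorous-versus-lenient choice in code is a tiny validator over the incoming payload's keys. A sketch using a plain Map for the parsed JSON (key names follow the example above; in a real service Jackson would hand you this structure):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class CreateUserValidator {
    // Fields a POST/add payload is allowed to carry
    static final List<String> ALLOWED = List.of("name", "password");

    // Rigorous mode: any unexpected field is an error
    static List<String> errors(Map<String, Object> payload) {
        List<String> errors = new ArrayList<>();
        for (String key : payload.keySet()) {
            if (!ALLOWED.contains(key)) {
                errors.add(key + " field is not allowed in POST request.");
            }
        }
        return errors;
    }

    // Lenient mode: drop unexpected fields and report them as warnings
    static List<String> stripAndWarn(Map<String, Object> payload) {
        List<String> warnings = new ArrayList<>();
        payload.keySet().removeIf(key -> {
            if (!ALLOWED.contains(key)) {
                warnings.add(key + " field was omitted.");
                return true;
            }
            return false;
        });
        return warnings;
    }
}
```

The PUT/update and GET cases would each get their own allowed-field list, which keeps the single-POJO approach manageable without Jackson views.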