I am creating a document with the status pending and a generated token, then sending a mail.
Immediately after sending the mail, I retrieve the document by the generated token value and change the status from pending to unverified.
Even though I first retrieve the existing document and only then update its status, I still end up with two different documents, one for each status.
@Document
public class VerificationInfo {
private LoginInfo user;
private String token;
private String verificationStatus = VerificationStatus.PENDING.getVerificationStatus();
}
daoService
public void updateStatus(VerificationInfo verificationToken, String status) {
VerificationInfo vt = verificationRepository.findByToken(verificationToken.getToken()).get(0);
vt.setVerificationStatus(status);
verificationRepository.save(vt);
}
repository
@Repository
public interface VerificationRepository extends MongoRepository<VerificationInfo, String> {
List<VerificationInfo> findByToken(String token);
List<VerificationInfo> findByUser(LoginInfo user);
}
db entries
{ "_id" : ObjectId("5f4e7486664e197f3d745b17"), "token" : "c82907b7-e13e-484d-89cf-92ea394b6f6d", "verificationStatus" : "pending", "_class" : "com.models.VerificationInfo" }
{ "_id" : ObjectId("5f4e748b664e197f3d745b18"), "token" : "c82907b7-e13e-484d-89cf-92ea394b6f6d", "verificationStatus" : "unverified", "_class" : "com.models.VerificationInfo" }
If the status value is correct, the problem is with your identification of the document (_id).
public class VerificationInfo {
@Id
ObjectId _id;
// Other fields
}
Here we set a unique id on each document. When you save an object without an _id, a new document is created; if the _id already exists in the database, the document with that id is updated instead.
1. There is no _id present in the model class
You extend MongoRepository<VerificationInfo, String>; the second type parameter is the type of the id. But there is no id field in your model class. (Usually ObjectId is used, but String also works.)
2. A new document is always created when data comes from the frontend
Since the object has no id, when you pass data to updateStatus(VerificationInfo verificationToken, String status), a new id is generated and the data is inserted; that's why you always get a new document.
If you send the data with an existing id, the existing document will be updated based on the given id.
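The upsert semantics described above can be illustrated without Spring. The in-memory sketch below (all class and method names are hypothetical, not Spring APIs) mimics how save() inserts when the id is null and updates in place when the id is already set:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Hypothetical in-memory sketch of Spring Data's save() semantics
// (illustration only, not the Spring API): a document with a null id
// is inserted under a fresh id; one carrying an existing id overwrites
// the stored copy.
class Doc {
    String id;          // plays the role of the Mongo _id
    String token;
    String status;
    Doc(String token, String status) { this.token = token; this.status = status; }
}

class InMemoryRepo {
    private final Map<String, Doc> store = new HashMap<>();

    Doc save(Doc d) {
        if (d.id == null) {
            d.id = UUID.randomUUID().toString(); // no id -> insert with new _id
        }
        store.put(d.id, d);                      // id present -> update in place
        return d;
    }

    int count() { return store.size(); }
}
```

Re-saving the loaded document keeps a single entry, while saving a fresh instance whose id is null creates a duplicate, which is exactly the behavior shown in the db entries above.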
Related
I am creating a new endpoint in Spring Boot that will return simple stats on users, generated from an aggregate query on a Mongo database. However, I get a PropertyReferenceException. I have read multiple Stack Overflow questions about it, but didn't find one that solved this problem.
We have a Mongo data schema like this:
{
"_id" : ObjectId("5d795993288c3831c8dffe60"),
"user" : "000001",
"name" : "test",
"attributes" : {
"brand" : "Chrome",
"language" : "English" }
}
The database is filled with multiple users, and we want to use Spring Boot to aggregate the stats of users per brand. There can be any number of attributes in the attributes object.
Here is the aggregation we are doing
Aggregation agg = newAggregation(
group("attributes.brand").count().as("number"),
project("number").and("type").previousOperation()
);
AggregationResults<Stats> groupResults
= mongoTemplate.aggregate(agg, Profile.class, Stats.class);
return groupResults.getMappedResults();
This produces the following Mongo query, which works:
> db.collection.aggregate([
{ "$group" : { "_id" : "$attributes.brand" , "number" : { "$sum" : 1}}} ,
{ "$project" : { "number" : 1 , "_id" : 0 , "type" : "$_id"}} ])
{ "number" : 4, "type" : "Chrome" }
{ "number" : 2, "type" : "Firefox" }
However when running a simple integration test we get this error:
org.springframework.data.mapping.PropertyReferenceException: No property brand found for type String! Traversed path: Profile.attributes.
From what I understand, it seems that since attributes is a Map<String, String>, there might be a schema problem. And in the meantime I can't modify the Profile object.
Is there something I am missing in the aggregation, or anything I could change in my Stats object?
For reference, here are the data models we're using, to work with JSON and jackson.
The Stats data model:
@Document
public class Stats {
@JsonProperty
private String type;
@JsonProperty
private int number;
public Stats() {}
/* ... */
}
The Profile data model:
@Document
public class Profile {
@NotNull
@JsonProperty
private String user;
@NotNull
@JsonProperty
private String name;
@JsonProperty
private Map<String, String> attributes = new HashMap<>();
public Profile() {}
/* ... */
}
I found a solution, which was a combination of two problems:
The PropertyReferenceException was indeed caused because attributes is a Map<String, String>, which means there is no schema for Mongo to inspect.
The error message No property brand found for type String! Traversed path: Profile.attributes. means that the Map object doesn't have a brand property in it.
In order to fix that without touching my original Profile class, I had to create a new custom class which maps the attributes to an object having the properties I want to aggregate on, like:
public class StatsAttributes {
@JsonProperty
private String brand;
@JsonProperty
private String language;
public StatsAttributes() {}
/* ... */
}
Then I created a custom StatsProfile which leverages my StatsAttributes and is similar to the original Profile object, without modifying it.
@Document
public class StatsProfile {
@JsonProperty
private String user;
@JsonProperty
private StatsAttributes attributes;
public StatsProfile() {}
/* ... */
}
With that, the PropertyReferenceException disappeared when using my new class StatsProfile in the aggregation:
AggregationResults<Stats> groupResults
= mongoTemplate.aggregate(agg, StatsProfile.class, Stats.class);
However, I would not get any results. The query did not seem to find any document in the database. That's when I realized that the production mongo objects had the field "_class: com.company.dao.model.Profile", which is tied to the Profile object.
After some research, for the new StatsProfile to work, it needs to be annotated with @TypeAlias("Profile"). After looking around, I found that I also needed to specify the collection name, which leads to:
@Document(collection = "profile")
@TypeAlias("Profile")
public class StatsProfile {
/* ... */
}
And with all that, it finally worked!
I suppose that's not the prettiest solution; I wish I didn't need to create a new Profile object and could just treat the attributes as StatsAttributes.class somehow in the mongoTemplate query. If anyone knows how to, please share 🙏
Let's say I have a Postgres table named Employee with the following columns:
ID
FirstName
LastName
Employment
Date
Manager
Department
I am interested in having a REST endpoint such that /employee/{ID} returns all information for that particular employee in JSON format, but /employee/{ID}/FirstName returns only that employee's first name in JSON format, /employee/{ID}/LastName returns only the last name, and so on. Is there a good way to implement this without implementing an endpoint for each column? Thanks.
A simple way to solve this is to use a request param instead of putting the field in the URL path. Using a param like fields, you would have a URL like /employee/{id}?fields=FirstName,LastName. Using the code below you get a Map<String, Object> that is serialized to JSON with your data. Like this:
@ResponseBody
public Map<String, Object> getPerson(@PathVariable("id") long id, @RequestParam("fields") String fields) throws Exception {
return personService.getPersonFields(id, fields);
}
@Service
class PersonService {
public Map<String, Object> getPersonFields(Long personId, String fields) throws Exception {
final Person person = personRepository.findById(personId);
if (person == null) {
throw new Exception("Person does not exist!");
}
String[] fieldsArray = fields.split(",");
Map<String, Field> personFields = Arrays.stream(person.getClass().getFields()).collect(Collectors.toMap(Field::getName, field -> field));
Map<String, Object> personFieldsReturn = new HashMap<>();
for (String field : fieldsArray) {
if (personFields.containsKey(field)) {
personFields.get(field).setAccessible(true);
personFieldsReturn.put(field, personFields.get(field).get(person));
personFields.get(field).setAccessible(false);
}
}
return personFieldsReturn;
}
}
This is not a good solution though. But it should work.
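The reflection approach above can be exercised without Spring. Here is a self-contained sketch of the same field-picking logic (the POJO and its field values are illustrative):

```java
import java.lang.reflect.Field;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class FieldPicker {
    // Illustrative POJO with public fields so getFields() can see them.
    public static class Person {
        public long id = 1L;
        public String firstName = "Ada";
        public String lastName = "Lovelace";
    }

    // Returns only the requested fields, silently ignoring unknown names.
    public static Map<String, Object> pick(Object target, String fields) throws Exception {
        Map<String, Field> byName = Arrays.stream(target.getClass().getFields())
                .collect(Collectors.toMap(Field::getName, f -> f));
        Map<String, Object> result = new HashMap<>();
        for (String name : fields.split(",")) {
            Field f = byName.get(name);
            if (f != null) {
                result.put(name, f.get(target)); // read the field value reflectively
            }
        }
        return result;
    }
}
```

Note that getClass().getFields() only returns public fields; for private fields (as in the service above) getDeclaredFields() plus setAccessible(true) is needed.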
So, as @dave mentioned in a comment, you can have a REST endpoint /employee/{ID}/{column}, and in your controller keep a mapping between the value of the {column} argument and the actual column name in the database. If you do not want to redeploy your application when the mapping changes, you can put it in a separate properties file on the server, outside of your jar/war. You can also add an endpoint that reloads the mapping from the file, or one that allows uploading and parsing a mapping file directly into your application.
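That externalized mapping can be sketched with a plain java.util.Properties file (the class and property names here are hypothetical); reloading it swaps in the new mapping without a redeploy:

```java
import java.io.StringReader;
import java.util.Properties;

// Hypothetical sketch of the column-mapping idea: the {column} path
// segment is looked up in an externally editable properties file, so
// renaming a database column only requires updating the mapping.
class ColumnMapper {
    private Properties mapping = new Properties();

    // In production this would read the file from disk; a String is
    // used here so the sketch is self-contained.
    void load(String props) throws Exception {
        Properties fresh = new Properties();
        fresh.load(new StringReader(props));
        mapping = fresh; // atomic swap: reload without redeploying
    }

    String resolve(String pathSegment) {
        String column = mapping.getProperty(pathSegment);
        if (column == null) {
            throw new IllegalArgumentException("Unknown field: " + pathSegment);
        }
        return column;
    }
}
```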
I would suggest using RepositoryRestResource from Spring Data.
First of all, create your entity:
public class Employee {
//props
}
After that, create the Employee repository:
@RepositoryRestResource(collectionResourceRel = "employee", path = "employee")
public interface EmployeeRepository extends PagingAndSortingRepository<Employee, Long> {
List<Employee> findByLastName(@Param("lastName") String lastName);
}
And that's all, you will get:
discoverable REST API for your domain model using HAL as media type.
collection, item and association resources representing your model.
paginating and sorting
and so on.
Check out the docs:
Spring Guide
Spring Docs
Spring Data Rest Project
I'm simply trying to save an entity into Solr using Spring Data and get its autogenerated id. I can see that the id is generated, but it is not returned to me. The code is trivial.
entity:
@SolrDocument(solrCoreName = "bank")
@Canonical
class Shop {
@Id
@Field
String id
@Field
String name
}
repository:
@Repository
interface ShopRepository extends SolrCrudRepository<Shop, String>{
}
handler:
@Autowired
ShopRepository repository
void save() {
Shop shop = new Shop()
shop.name = 'shop1'
log.info("before {}", shop)
Shop savedShop = repository.save(shop)
log.info("after {}", savedShop)
}
dependencies:
dependencies {
compile lib.groovy_all
compile 'org.springframework.boot:spring-boot-starter-data-solr:1.5.10.RELEASE'
}
and result is:
before com.entity.Shop(null, shop1)
after com.entity.Shop(null, shop1)
however via solr's admin console I see generated id:
{ "responseHeader":{
"status":0,
"QTime":0,
"params":{
"q":"*:*",
"_":"1527472154657"}}, "response":{"numFound":3,"start":0,"docs":[
{
"name":["shop1"],
"id":"4db1eb1d-718b-4a38-b960-6d52f9b6240c",
"_version_":1601670593291223040,
"name_str":["shop1"]},
{
"name":["shop1"],
"id":"6ad52214-0f23-498d-82b8-82f360ef22f1",
"_version_":1601670855078707200,
"name_str":["shop1"]},
{
"name":["shop1"],
"id":"b45b5773-f2b9-4474-b177-92c98810978b",
"_version_":1601670887722975232,
"name_str":["shop1"]}] }}
and repository.findAll() also returns the correct result with the id mapped. Is this a feature or a bug?
The flow is working as expected (no ID available in the returned object):
During the Save operation
The original object is converted into something that can be digested by Solr (id is null)
The update request (with the object carrying a null id) is sent to Solr
Solr processes the "create" and generates the ID internally
The Solr response is OK/KO (with a few other data points, but no ID here)
So the final object is exactly the same as the original one (id null).
A quick "workaround" can be implemented as:
@Repository
public interface PlaceRepo extends SolrCrudRepository<PlaceModel, String> {
default PlaceModel create(PlaceModel model, Duration commit) {
model.setId(IDGenerator.generateID());
return this.save(model, commit);
}
default PlaceModel create(PlaceModel model) {
return this.create(model, Duration.ZERO);
}
}
This moves the ID generation logic into the Java layer.
The id can be generated as:
public static String generateID() {
return UUID.randomUUID().toString();
}
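Because the id is now assigned in the Java layer before save(), the returned entity always carries it. The only requirement on the generator is that ids are unique and well-formed, which is easy to check:

```java
import java.util.UUID;

// Same UUID-based generator as above: random UUIDs are unique for any
// practical purpose and round-trip through UUID.fromString unchanged.
public class IdCheck {
    public static String generateID() {
        return UUID.randomUUID().toString();
    }
}
```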
I am developing an application which uses Spring-boot, a relational database and Elasticsearch.
I use JSON serialization in 2 different places in the code:
In the response of the REST API.
When the code interacts with Elasticsearch.
There are some properties that I need in Elasticsearch but that I want to hide to the application user (e.g. internal ids coming from the relational database).
Here is an example of entity :
@Document
public class MyElasticsearchEntity {
@Id
private Long id; //I want to hide this to the user.
private String name;
private String description;
}
Problem: when the object is persisted in Elasticsearch, it gets serialized as JSON. Hence, fields annotated with @JsonIgnore are also left out of the Elasticsearch document.
Up to now, I have found 2 unsatisfying solutions:
Solution 1: Use @JsonProperty like this:
@Id
@JsonProperty(access = JsonProperty.Access.READ_ONLY)
private Long id;
The id gets written to Elasticsearch and is nullified in the JSON response:
{
"id" : null,
"name" : "abc",
"description" : null
}
So it works but the application user still sees that this property exists. This is messy.
Solution 2: Customize the object mapper to ignore null values
Spring-boot has a built-in option for that :
spring.jackson.serialization-inclusion=NON_NULL
Problem: it suppresses all null properties, not only the ones I want to hide. Suppose the description field of the previous entity is empty; the JSON response will be:
{
"name" : "abc"
}
And this is problematic for the UI.
So is there a way to ignore such field only in the JSON response?
You could use Jackson's JsonView for this purpose. You can define one view which will be used to serialize the POJO for the application user.
Create the views as classes, one public and one private:
class Views {
static class Public { }
static class Private extends Public { }
}
Then use the views in your POJO as annotations:
@Id
@JsonView(Views.Private.class)
private Long id;
@JsonView(Views.Public.class)
private String publicField;
and then serialize your POJO for the application user using the view (with Jackson 2, writerWithView replaces the old writeValueUsingView):
objectMapper.writerWithView(Views.Public.class).writeValue(out, beanInstance);
This is one example among many others of how views can fit your question. E.g. you can also use objectMapper.configure(MapperFeature.DEFAULT_VIEW_INCLUSION, false) to exclude fields without a view annotation and drop the Private view.
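A runnable end-to-end sketch of the view mechanism (the entity, its fields, and the values are illustrative; this uses the Jackson 2 writerWithView API):

```java
import com.fasterxml.jackson.annotation.JsonView;
import com.fasterxml.jackson.databind.ObjectMapper;

public class JsonViewDemo {
    static class Views {
        static class Public { }
        static class Private extends Public { }
    }

    // Illustrative entity: id is internal (Private view only),
    // name is exposed to the application user (Public view).
    static class Entity {
        @JsonView(Views.Private.class)
        public Long id = 42L;
        @JsonView(Views.Public.class)
        public String name = "abc";
    }

    // Serialize with the given active view; fields whose view is not a
    // supertype of (or equal to) the active view are omitted entirely.
    public static String serialize(Class<?> view) throws Exception {
        return new ObjectMapper()
                .writerWithView(view)
                .writeValueAsString(new Entity());
    }
}
```

With the Public view the id field is omitted entirely, not nullified, which avoids the "id: null" leak of Solution 1.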
I have the following Entity class:
@Entity
@Table(name="reporteddevicedata", schema="schemaName")
public class MobileDeviceData {
@EmbeddedId
MobileDeviceDataId mobileDeviceDataId;
@Column(name="activitydetecteddate")
private ZonedDateTime activityDetectedDate;
public void setFlagId(int flagId) {
mobileDeviceDataId.setFlagId(flagId);
}
......
}
@Embeddable
class MobileDeviceDataId implements Serializable {
@Column(name="clientid")
private int clientId;
@Column(name="flagid")
private int flagId;
}
My Controller code looks like this:
@RequestMapping(value="/mobile/device", method = RequestMethod.PUT)
public ResponseEntity<Object> flagDevice (@RequestBody List<MobileDeviceData> deviceInfoList) {
// code here
}
Originally my Entity class had just one primary key, @Id on clientId, and it worked great. I would make a REST call and it would populate the MobileDeviceData class as expected. Then I switched to a composite id using the @Embeddable and @EmbeddedId annotations, and now the @RequestMapping is unable to populate the flagId parameter. When I make a REST call I get a NullPointerException for mobileDeviceDataId: it is never populated, so setFlagId throws when it is called.
So my question is: how do I get an instance of the @Embeddable class? Can I just create one with new? I'm not sure of the implications, since Spring may be expecting to create that value itself. What is the "normal" way this field gets populated via @RequestMapping?
First of all, you should avoid embedded ids; they just make everything harder.
Surrogate primary keys are simply easier to use; when you have a foreign key to a table with a multi-column primary key, everything becomes much more complicated to deal with.
You have now faced these problems yourself, but to answer your question:
@RequestMapping(value="/mobile/device", method = RequestMethod.PUT)
public ResponseEntity<Object> flagDevice (@RequestBody List<MobileDeviceData> deviceInfoList) {
for(MobileDeviceData mobileDeviceData : deviceInfoList){
int clientId = mobileDeviceData.getMobileDeviceDataId().getClientId();
int flagId = mobileDeviceData.getMobileDeviceDataId().getFlagId();
MobileDeviceData foundMobileDeviceData = mobileDeviceDataService.findByClientIdAndFlagId(clientId, flagId);
if(foundMobileDeviceData == null){
mobileDeviceDataService.save(mobileDeviceData);
}else {
//update foundMobileDeviceData with mobileDeviceData fields
mobileDeviceDataService.save(foundMobileDeviceData);
}
}
return ResponseEntity.ok().build();
}
Or, if you want to update just the flag id:
@RequestMapping(value="/mobile/device", method = RequestMethod.PUT)
public ResponseEntity<Object> flagDevice (@RequestBody List<MobileDeviceData> deviceInfoList) {
for(MobileDeviceData mobileDeviceData : deviceInfoList){
int clientId = mobileDeviceData.getMobileDeviceDataId().getClientId();
MobileDeviceData foundMobileDeviceData = mobileDeviceDataService.findByClientId(clientId);
if(foundMobileDeviceData == null){
mobileDeviceDataService.save(mobileDeviceData);
}else {
//update foundMobileDeviceData with mobileDeviceData
MobileDeviceDataId mobileDeviceDataId = foundMobileDeviceData.getMobileDeviceDataId();
mobileDeviceDataId.setFlagId(mobileDeviceData.getMobileDeviceDataId().getFlagId());
mobileDeviceDataService.save(foundMobileDeviceData);
}
}
return ResponseEntity.ok().build();
}
Next, if you want to find something by client id, just create a JPA query like:
"from MobileDeviceData WHERE mobileDeviceDataId.clientId = :clientId"
or native sql
"SELECT * FROM reporteddevicedata WHERE client_id = :someParam"
Example JSON request:
[ {
"mobileDeviceDataId" : {
"clientId" : 0,
"flagId" : 0
},
"activityDetectedDate" : null
}, {
"mobileDeviceDataId" : {
"clientId" : 1,
"flagId" : 1
},
"activityDetectedDate" : null
}, {
"mobileDeviceDataId" : {
"clientId" : 2,
"flagId" : 2
},
"activityDetectedDate" : null
} ]
Uglified, ready-to-copy/paste version:
[{"mobileDeviceDataId":{"clientId":0,"flagId":3},"activityDetectedDate":null},{"mobileDeviceDataId":{"clientId":1,"flagId":1},"activityDetectedDate":null},{"mobileDeviceDataId":{"clientId":2,"flagId":2},"activityDetectedDate":null}]
Additionally, there should be some validation added on your MobileDeviceData object to avoid NullPointerExceptions when an invalid JSON request is sent (with no mobileDeviceDataId present).
Finally, answering the question:
It's not considered good practice to use the database model as the container shared through the API (because of primary keys, possibly sensitive data; it depends).
Moreover, if you want your embedded id to work, the request has to be built like the example JSON above, with the proper fields filled out. When requests aren't built that way and are just flat JSON without an "embedded id", you have to create a wrapper which fits the JSON format (this wrapper becomes the request-body class, or a List of it). Then you convert the wrapper to your db object with the embedded id (created with the new keyword).
And this is why I suggest you not use composite or embedded ids. This is a simple example with just one table, but when foreign keys and multi-column primary keys come into play, the tables get more complicated and messy, and searching the db becomes harder; surrogate ids without embedding keep things simpler.
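As the answer says, the embedded id is created with plain new. Stripped of the JPA annotations it is just a POJO; the sketch below mirrors the MobileDeviceDataId from the question (constructors added for illustration):

```java
import java.io.Serializable;

// Plain-Java sketch of the embeddable id (JPA annotations omitted):
// you instantiate it with new and attach it to the entity yourself.
public class MobileDeviceDataId implements Serializable {
    private int clientId;
    private int flagId;

    public MobileDeviceDataId() { }                  // no-arg ctor for JPA/Jackson
    public MobileDeviceDataId(int clientId, int flagId) {
        this.clientId = clientId;
        this.flagId = flagId;
    }
    public int getClientId() { return clientId; }
    public int getFlagId() { return flagId; }
    public void setFlagId(int flagId) { this.flagId = flagId; }
}
```

On deserialization, Jackson also builds it this way: it calls the no-arg constructor and fills the fields from the nested "mobileDeviceDataId" object in the JSON, which is why the nested request format shown above is required.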