I'm trying to query some data from MongoDB; each document contains a likes array, and each like object holds the user id of the liker.
I need an extra boolean field that says whether the document is liked by the current user, i.e. isLiked should be true if the user has liked the document.
Here is what I have done so far:
I used ConditionalOperators.Cond to check whether likes.userId is equal to the visitor's userId.
@Override
public List<PostExtra> findPostsNearBy(double[] point, Distance distance, String thisUserId) {
    mongoTemplate.indexOps(CheckInEntity.class).ensureIndex(new GeospatialIndex("position"));
    ConditionalOperators.Cond conditionalOperators = new ConditionalOperators.ConditionalOperatorFactory(
            Criteria.where("likes.userId").is(thisUserId)).then(true).otherwise(false);
    Aggregation aggregation = Aggregation.newAggregation(
            Aggregation.geoNear(
                    locationBasedOperationHelper.findNear(point, distance)
                            .query(new Query(privacyConsideredOperationHelper.privacyConsideredQuery(userRelationsEntity))),
                    "distance"
            ),
            //Aggregation.unwind("likes"),
            Aggregation.project("user", "description").and(conditionalOperators).as("isLiked")
    );
    final AggregationResults<PostExtra> results =
            mongoTemplate.aggregate(aggregation, PostEntity.class, PostExtra.class);
    return results.getMappedResults();
}
If I uncomment Aggregation.unwind("likes"), I only get the posts that this user has liked, not those he hasn't.
I have seen the same matter discussed elsewhere, but I don't know what the corresponding MongoTemplate code is.
I have also seen approaches with $setIsSubset, but I don't know the Java implementation for that either.
I'm using Spring Boot 2.0.4.RELEASE and spring-boot-starter-data-mongodb.
@Document(collection = EntityCollectionNames.POST_COLLECTION_NAME)
public class PostEntity {
    @Id
    private String id;

    @Field
    @DBRef
    @Indexed
    private UserEntity user;

    @GeoSpatialIndexed(type = GeoSpatialIndexType.GEO_2DSPHERE)
    @Field(value = "position")
    private Point position;

    @Field(value = "description")
    private String description;

    @Field
    private int likesCount;

    @Field
    private List<LikeEntity> likes;
}
Post Extra:
public class PostExtra extends PostEntity {
    private double distance;
    private boolean isLiked;
}
Like:
public class LikeEntity {
    @Field
    private String userId;
}
After some searching I found out that, to do it without unwind, I need to project the isLiked field with something like this:
$cond: [
    { $gt: [
        { $size: { $setIntersection: [ "$likes.userId", userIdsToCheckIn ] } },
        0
    ] },
    true, false
]
I tried to do it in Spring like this:
ConditionalOperators.Cond conditionalOperators = new ConditionalOperators.ConditionalOperatorFactory(
        ComparisonOperators.Gt.valueOf(
                ArrayOperators.Size.lengthOfArray(
                        SetOperators.SetIntersection.arrayAsSet("$likes.userId").intersects(????????)
                )
        ).greaterThanValue(0)
).then(true).otherwise(false);
But I didn't know what to pass to the intersects() method, and apparently Spring does not let you pass a List or an array as that input. Then I found out I can implement my own AggregationExpression and override its toDocument() method.
So I came up with this and it worked:
ConditionalOperators.Cond conditionalOperators = new ConditionalOperators.ConditionalOperatorFactory(
        new AggregationExpression() {
            @Override
            public Document toDocument(AggregationOperationContext aggregationOperationContext) {
                return Document.parse("{$gt:[{$size:{$setIntersection:[\"$likes.userId\",[\"" + userId + "\"]]}},0]}");
            }
        }
).then(true).otherwise(false);
Of course, I could also move the $cond into that implementation directly.
I would still like to know whether there is a Spring solution that does not embed raw MongoDB query strings for this.
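One option that stays closer to Java is to build the BSON programmatically with org.bson.Document instead of parsing a concatenated string, which also avoids escaping problems if the user id ever contains quotes. This is a sketch only, assuming Spring Data MongoDB 2.x, where AggregationExpression is a functional interface and ConditionalOperators.when(AggregationExpression) is available; userId is the visitor's id as in the code above:

```java
// Build the same {$gt: [{$size: {$setIntersection: [...]}}, 0]} expression
// without string parsing. Sketch, not a verified drop-in replacement.
AggregationExpression likedByUser = context -> new Document("$gt", Arrays.asList(
        new Document("$size",
                new Document("$setIntersection", Arrays.asList(
                        "$likes.userId",
                        Collections.singletonList(userId)))),
        0));

ConditionalOperators.Cond conditionalOperators = ConditionalOperators
        .when(likedByUser)
        .then(true)
        .otherwise(false);
```

The resulting operator can then be projected with .and(conditionalOperators).as("isLiked") exactly as before.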
Related
I'm creating a small web app for practice, a Calories Counter. I created the entities and now want to populate the Mongo DB, but I have one problem and I'll try to explain it as best I can.
Just to note: my main class implements CommandLineRunner, and I implement the run method there.
First I @Autowired my services.
@Autowired
FoodService foodService;
@Autowired
UserService userService;
@Autowired
HistoryService historyService;
Then I created this method to populate DB.
@Override
public void run(String... args) throws Exception {
    HistoryDto historyDto1 = new HistoryDto();
    HistoryDto historyDto2 = new HistoryDto();
    UserDto userDto1 = new UserDto();
    UserDto userDto2 = new UserDto();
    FoodDto foodDto1 = new FoodDto();
    FoodDto foodDto2 = new FoodDto();

    HistoryDto savedHistory1 = historyService.save(historyDto1);
    HistoryDto savedHistory2 = historyService.save(historyDto2);
    UserDto savedUser1 = userService.save(userDto1);
    UserDto savedUser2 = userService.save(userDto2);
    FoodDto savedFood1 = foodService.save(foodDto1);
    FoodDto savedFood2 = foodService.save(foodDto2);

    userDto1.setUsername("John");
    userDto1.setHistoryId(savedHistory1.getHistoryId());
    userDto2.setUsername("Marc");
    userDto2.setHistoryId(savedHistory2.getHistoryId());

    foodDto1.setName("HotDog");
    foodDto1.setDescription("Very HOT DOG");
    foodDto1.setCalories(99);
    foodDto2.setName("Burger");
    foodDto2.setDescription("Very Burger");
    foodDto2.setCalories(230);

    historyDto1.setFoodId(savedUser1.getFoodId());
    historyDto1.setTotalCalories(savedFood1.getCalories());
    historyDto1.setUserId(savedUser1.getUserId());
    historyDto2.setFoodId(savedUser2.getFoodId());
    historyDto2.setTotalCalories(savedFood2.getCalories());
    historyDto2.setUserId(savedUser2.getUserId());
As expected there are no errors and the database is created, but my collections stay empty because I first save the entities and only then initialize their data.
As you can see, History needs savedFood and savedUser, so if I first initialize History like this:
historyDto1.setFoodId(savedUser1.getFoodId());
historyDto1.setTotalCalories(savedFood1.getCalories());
historyDto1.setUserId(savedUser1.getUserId());
historyDto2.setFoodId(savedUser2.getFoodId());
historyDto2.setTotalCalories(savedFood2.getCalories());
historyDto2.setUserId(savedUser2.getUserId());
Then create variable to save it like this:
HistoryDto savedHistory1 = historyService.save(historyDto1);
HistoryDto savedHistory2 = historyService.save(historyDto2);
I'm getting errors for savedUser1 and savedFood1, as expected.
And if I move the user part below that, so the code now looks like:
userDto1.setUsername("John");
userDto1.setHistoryId(savedHistory1.getHistoryId());
userDto2.setUsername("Marc");
userDto2.setHistoryId(savedHistory2.getHistoryId());
historyDto1.setFoodId(savedUser1.getFoodId());
historyDto1.setTotalCalories(savedFood1.getCalories());
historyDto1.setUserId(savedUser1.getUserId());
historyDto2.setFoodId(savedUser2.getFoodId());
historyDto2.setTotalCalories(savedFood2.getCalories());
historyDto2.setUserId(savedUser2.getUserId());
HistoryDto savedHistory1 = historyService.save(historyDto1);
HistoryDto savedHistory2 = historyService.save(historyDto2);
I'm getting an error on the userDto1.setHistoryId(savedHistory1.getHistoryId()); line, because now savedHistory1 cannot be resolved.
How can I fix this? I tried to add collection by collection, but then, for example, if I have created the Food collection and now want to create History, I can't get a foodId to store in History.
I tried to explain as best I can, and sorry for my bad English, I'm still learning.
Since you are a fresher, I'm not showing optimized code; I will solve it with the same code you tried. I'm not sure why you are saving in a cyclic way, with a userId in History and a historyId in User (but MongoDB allows it).
OK, let's come to the solution. I used the default ObjectId; you might have used another data type. Code runs top to bottom, so you don't need to save everything first: once the data for an object is complete, you can save it.
FoodDto foodDto1 = new FoodDto();
foodDto1.setName("HotDog");
foodDto1.setDescription("Very HOT DOG");
foodDto1.setCalories(99);
FoodDto savedFood1 = foodService.save(foodDto1);

UserDto userDto1 = new UserDto();
userDto1.setUsername("John");
UserDto savedUser1 = userService.save(userDto1);

HistoryDto historyDto1 = new HistoryDto();
historyDto1.setFoodId(savedFood1.getId());
historyDto1.setTotalCalories(savedFood1.getCalories());
historyDto1.setUserId(savedUser1.getId());
HistoryDto savedHistory1 = historyService.save(historyDto1);

// because of the User <-> History cycle, the history id only exists after
// the history is saved, so set it on the user afterwards and save again
savedUser1.setHistoryId(savedHistory1.getId());
userService.save(savedUser1);
And your respective classes:
public class FoodDto {
    @Id
    private ObjectId _id;
    private String name;
    private String description;
    private int calories;
}
public class UserDto {
    @Id
    private ObjectId _id;
    private String username;
    private ObjectId historyId;
}
public class HistoryDto {
    @Id
    private ObjectId _id;
    private ObjectId foodId;
    private int totalCalories;
    private ObjectId userId;
}
I'm new to MongoDB and Reactor and I'm trying to retrieve a User together with its associated Profiles.
Here are the POJOs:
public class User {
    @Id private String id;
    private String login;
    private String hashPassword;
    @Field("profiles") private List<String> profileObjectIds;
    @Transient private List<Profile> profiles;
}

public class Profile {
    @Id private String id;
    @Indexed(unique = true) private String name;
    private List<String> roles;
}
The problem is: how do I inject the profiles into the User POJO?
I'm aware I could use @DBRef to solve the problem, but in its documentation MongoDB specifies that manual references should be preferred over DBRefs.
I'm seeing two solutions:
Fill the POJO when I get it:
public Mono<User> getUser(String login) {
return userRepository.findByLogin(login)
.flatMap(user -> ??? );
}
I should do something with profileRepository.findAllById(), but I don't know how to combine both Publishers, given that the profiles result depends on the user result.
Declare an AbstractMongoEventListener and override the onAfterConvert method.
But here I am mistaken, since the method ends before the result is published:
public void onAfterConvert(AfterConvertEvent<User> event) {
    final User source = event.getSource();
    source.setProfiles(new ArrayList<>());
    profileRepository.findAllById(source.getProfileObjectIds())
            .doOnNext(e -> source.getProfiles().add(e))
            .subscribe();
}
TL;DR
There's no DBRef support in reactive Spring Data MongoDB and I'm not sure there will be.
Explanation
Spring Data projects are organized into Template API, Converter and Mapping Metadata components. The imperative (blocking) implementation of the Template API uses an imperative approach to fetch Documents and convert them into domain objects. MappingMongoConverter in particular handles all the conversion and DBRef resolution. It is a synchronous/imperative API and is used by both Template API implementations (the imperative and the reactive one).
Reusing MappingMongoConverter was the logical decision when adding reactive support, as there was no need to duplicate code. The only limitation is DBRef resolution, which does not fit the reactive execution model.
To support reactive DBRefs, the converter needs to be split up into several bits and the whole association handling requires an overhaul.
Reference : https://jira.spring.io/browse/DATAMONGO-2146
Recommendation
Keep references as keys/ids in your domain model and look them up as needed. zipWith and flatMap are the appropriate operators, depending on what you want to achieve (enhance the model with references, or look up the references only).
On a related note: reactive Spring Data MongoDB comes with a partially reduced feature set. Contextual SpEL extension is one feature that is not supported, as these components assume an imperative programming model and thus synchronous execution.
For the first point, I finally achieved what I wanted:
public Mono<User> getUser(String login) {
    return userRepository.findByLogin(login)
            .flatMap(user ->
                    Mono.just(user)
                            .zipWith(profileRepository.findAllById(user.getProfileObjectIds())
                                            .collectList(),
                                    (u, p) -> {
                                        u.setProfiles(p);
                                        return u;
                                    })
            );
}
In my case, I have managed this problem using the following approach.
My entity is:
@Getter
@Setter
@NoArgsConstructor
@AllArgsConstructor
@Document(collection = "post")
public class Post implements Serializable {
    private static final long serialVersionUID = -6281811500337260230L;

    @EqualsAndHashCode.Include
    @Id
    private String id;

    private Date date;
    private String title;
    private String body;
    private AuthorDto author;
    private Comment comment;
    private List<Comment> listComments = new ArrayList<>();
    private List<String> idComments = new ArrayList<>();
}
My controller is:
@GetMapping(FIND_POST_BY_ID_SHOW_COMMENTS)
@ResponseStatus(OK)
public Mono<Post> findPostByIdShowComments(@PathVariable String id) {
    return postService.findPostByIdShowComments(id);
}
Last, but not least, my service (here is the solution):
public Mono<Post> findPostByIdShowComments(String id) {
    return postRepo
            .findById(id)
            .switchIfEmpty(postNotFoundException())
            .flatMap(postFound -> commentService
                    .findCommentsByPostId(postFound.getId())
                    .collectList()
                    .flatMap(comments -> {
                        postFound.setListComments(comments);
                        return Mono.just(postFound);
                    })
            );
}

public Flux<Comment> findCommentsByPostId(String id) {
    return postRepo
            .findById(id)
            .switchIfEmpty(postNotFoundException())
            .thenMany(commentRepo.findAll())
            .filter(comment1 -> comment1.getIdPost()
                    .equals(id));
}
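As a side note, findCommentsByPostId above loads every comment with commentRepo.findAll() and filters in memory. A sketch of a possibly more efficient variant, assuming the comment repository can declare a derived query findByIdPost (that method is not shown in the original code and is an assumption):

```java
public Flux<Comment> findCommentsByPostId(String id) {
    return postRepo
            .findById(id)
            .switchIfEmpty(postNotFoundException())
            // assumed repository method: Flux<Comment> findByIdPost(String idPost);
            // lets MongoDB filter by post id instead of streaming the whole collection
            .thenMany(commentRepo.findByIdPost(id));
}
```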
Thanks, this helped a lot.
Here is my solution:
@Bean
public MappingMongoConverter mappingMongoConverter(MongoMappingContext mongoMappingContext) {
    MappingMongoConverter converter = new MappingMongoConverter(NoOpDbRefResolver.INSTANCE, mongoMappingContext);
    converter.setTypeMapper(new DefaultMongoTypeMapper(null));
    converter.setCustomConversions(mongoCustomConversions());
    return converter;
}
The trick was to use NoOpDbRefResolver.INSTANCE.
I am experimenting with Spring Data Elasticsearch by implementing a cluster which will host multi-tenant indexes, one index per tenant.
I am able to create and set settings dynamically for each needed index, like this:
public class SpringDataES {
    @Autowired
    private ElasticsearchTemplate es;

    @Autowired
    private TenantIndexNamingService tenantIndexNamingService;

    private void createIndex(String indexName) {
        Settings indexSettings = Settings.builder()
                .put("number_of_shards", 1)
                .build();
        CreateIndexRequest indexRequest = new CreateIndexRequest(indexName, indexSettings);
        es.getClient().admin().indices().create(indexRequest).actionGet();
        es.refresh(indexName);
    }

    private void prepareIndex(String indexName) {
        if (!es.indexExists(indexName)) {
            createIndex(indexName);
        }
        updateMappings(indexName);
    }
The model is created like this
@Document(indexName = "#{tenantIndexNamingService.getIndexName()}", type = "movies")
public class Movie {
    @Id
    @JsonIgnore
    private String id;
    private String movieTitle;
    @CompletionField(maxInputLength = 100)
    private Completion movieTitleSuggest;
    private String director;
    private Date releaseDate;
where the index name is passed dynamically via the SpEL expression
#{tenantIndexNamingService.getIndexName()}
that is served by:
@Service
public class TenantIndexNamingService {

    private static final String INDEX_PREFIX = "test_index_";
    private String indexName = INDEX_PREFIX;

    public TenantIndexNamingService() {
    }

    public String getIndexName() {
        return indexName;
    }

    public void setIndexName(int tenantId) {
        this.indexName = INDEX_PREFIX + tenantId;
    }

    public void setIndexName(String indexName) {
        this.indexName = indexName;
    }
}
So, whenever I have to execute a CRUD action, I first point to the right index and then execute the desired action:
tenantIndexNamingService.setIndexName(tenantId);
movieService.save(new Movie("Dead Poets Society", getCompletion("Dead Poets Society"), "Peter Weir", new Date()));
My assumption is that the following dynamic index assignment will not work correctly in a multi-threaded web application:
@Document(indexName = "#{tenantIndexNamingService.getIndexName()}")
This is because TenantIndexNamingService is a singleton.
So my question is: how do I achieve the right behavior in a thread-safe manner?
I would probably go with an approach similar to the following one proposed for Cassandra:
https://dzone.com/articles/multi-tenant-cassandra-cluster-with-spring-data-ca
You can have a look at the related GitHub repository here:
https://github.com/gitaroktato/spring-boot-cassandra-multitenant-example
Now, since Elastic differs in how you define a Document, you should mainly focus on defining a request-scoped bean that encapsulates your tenant id and binds it to incoming requests.
Here is my solution: I create a request-scoped bean to hold the index name per HTTP request.
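That could look roughly like the following sketch. The bean name must stay tenantIndexNamingService so the SpEL expression in @Document still resolves, and @RequestScope requires a web context; both points are assumptions about your setup:

```java
import org.springframework.stereotype.Service;
import org.springframework.web.context.annotation.RequestScope;

// One instance per HTTP request, so concurrent requests from different
// tenants no longer overwrite each other's index name in a shared singleton.
@Service("tenantIndexNamingService")
@RequestScope
public class TenantIndexNamingService {

    private static final String INDEX_PREFIX = "test_index_";
    private String indexName = INDEX_PREFIX;

    public String getIndexName() {
        return indexName;
    }

    public void setIndexName(int tenantId) {
        this.indexName = INDEX_PREFIX + tenantId;
    }

    public void setIndexName(String indexName) {
        this.indexName = indexName;
    }
}
```

A tenant filter or interceptor would then call setIndexName(tenantId) at the start of each request, before any repository call.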
I am a bit new to Spring reactive programming. I am currently developing an app using spring-boot-starter-webflux:2.0.0.M5.
I have a Foodtruck MongoDB model which contains menus:
@Document(collection = "Foodtruck")
@lombok.Getter
@lombok.Setter
@lombok.NoArgsConstructor
@lombok.AllArgsConstructor
public class Foodtruck {
    @Id
    private String id;

    @URL
    private String siteUrl;

    @NotBlank
    private String name;

    private String description;

    @NotBlank
    private String address;

    private List<Menu> menus;

    @JsonIgnore
    private GridFS image;
}
And here is the model for Menu:
@lombok.Getter
@lombok.Setter
@lombok.NoArgsConstructor
@lombok.AllArgsConstructor
@Document(collection = "menus")
public class Menu {
    @Id
    private String id;

    private List<DayOfWeek> days;

    @NotBlank
    private String label;

    private String description;
    private Double price;
    private List<Dish> dishes;

    @JsonIgnore
    private GridFS image;
}
To get all my menus, I first need to get all my foodtrucks and then merge all the Flux obtained, returning them through my REST controller.
Here is the service I call from my REST resource:
@Service
public class MenuServiceImpl implements MenuService {

    FoodtruckService foodtruckService;

    public MenuServiceImpl(FoodtruckService foodtruckService) {
        this.foodtruckService = foodtruckService;
    }

    @Override
    public Flux<Menu> getAllMenus() {
        Flux<Menu> allMenuFlux = Flux.empty();
        Flux<Foodtruck> foodtruckFlux = foodtruckService.getAllFoodtrucks();
        foodtruckFlux.toStream().forEach(foodtruck -> {
            Flux<Menu> currentMenuFlux = Flux.fromIterable(foodtruck.getMenus());
            allMenuFlux.mergeWith(currentMenuFlux);
        });
        return allMenuFlux;
    }
}
The mergeWith does not seem to add anything to allMenuFlux; I think I have a misunderstanding here.
I have been through the documentation and tested other methods like concat or zip, but they do not interleave the flux events as I wish; I can't manage to merge these Flux correctly.
I also think there is a better way to get Mongo embedded documents through a menu repository, since my method seems a bit overkill and could cause performance problems in the long run, but I have tried and it does not work.
EDIT:
After trying the following code (being sure that my list is not empty), the resulting fluxtest3 variable is merged correctly:
Flux<Menu> allMenuFlux = Flux.empty();
Flux<Foodtruck> foodtruckFlux = foodtruckService.getAllFoodtrucks();
List<Foodtruck> test = foodtruckFlux.collectList().block();
Flux<Menu> fluxtest1 = Flux.fromIterable(test.get(0).getMenus());
Flux<Menu> fluxtest2 = Flux.fromIterable(test.get(1).getMenus());
Flux<Menu> fluxtest3 = fluxtest1.mergeWith(fluxtest2);
But that's not exactly what I want. Why isn't it working with an empty flux as the parent flux?
What am I missing here?
Thanks in advance for your help.
I think there are a couple of misunderstandings at work here.
If you ever convert a Flux to a Collection, Stream or similar, and then convert something obtained from that back to a Flux, you are almost certainly doing something wrong. These classes force your pipeline to collect multiple elements, process them, and convert them back to a Flux; in almost every case the same thing is possible with the operators offered by Flux itself.
The methods on Flux don't change the Flux; they create new instances with additional behavior. So if you have code like this:
Flux<Menu> allMenuFlux = Flux.empty();
Flux<Foodtruck> foodtruckFlux = foodtruckService.getAllFoodtrucks();
foodtruckFlux.toStream().forEach(foodtruck -> {
Flux<Menu> currentMenuFlux = Flux.fromIterable(foodtruck.getMenus());
allMenuFlux.mergeWith(currentMenuFlux);
});
return allMenuFlux;
Every time you call allMenuFlux.mergeWith(currentMenuFlux); you create a new Flux just so the garbage collector can take care of it. allMenuFlux is still the empty Flux you started with.
What you really seem to want is:
return foodtruckService.getAllFoodtrucks()
        .flatMapIterable(Foodtruck::getMenus);
See the documentation for flatMapIterable (getMenus returns a List, so plain flatMap, which expects a Publisher, would not compile here). The difference to mergeWith is that mergeWith keeps the elements of the original Flux, which is superfluous when there are none, as in your use case.
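The immutability point is easy to verify in isolation; a minimal sketch using only reactor-core, no Spring required:

```java
import reactor.core.publisher.Flux;

public class MergeDemo {
    public static void main(String[] args) {
        Flux<Integer> allMenus = Flux.empty();
        Flux<Integer> current = Flux.just(1, 2, 3);

        // The return value is a NEW Flux; 'allMenus' itself is untouched.
        allMenus.mergeWith(current);
        System.out.println(allMenus.collectList().block());   // prints []

        // Keeping the returned instance is what actually works.
        Flux<Integer> merged = allMenus.mergeWith(current);
        System.out.println(merged.collectList().block());     // prints [1, 2, 3]
    }
}
```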
Bonus: Your additional question
Flux<Menu> fluxtest1 = Flux.fromIterable(test.get(0).getMenus());
Flux<Menu> fluxtest2 = Flux.fromIterable(test.get(1).getMenus());
Flux<Menu> fluxtest3 = fluxtest1.mergeWith(fluxtest2);
Here you are not returning the original fluxtest1 but the newly generated fluxtest3; that's why it works.
I'm using spring-data-elasticsearch, and at first everything works fine.
@Document( type = "products", indexName = "empty" )
public class Product
{
...
}
public interface ProductRepository extends ElasticsearchRepository<Product, String>
{
...
}
In my model I can search for products.
@Autowired
private ProductRepository repository;
...
repository.findByIdentifier( "xxx" ).getCategory();
So, my problem is: I have the same Elasticsearch type in different indices, and I want to use the same document class for all queries. I can handle multiple connections via a pool, but I don't have any idea how to implement this.
I would like to have something like this:
ProductRepository customerRepo = ElasticsearchPool.getRepoByCustomer("abc", ProductRepository.class);
customerRepo.findByIdentifier( "xxx" ).getCategory();
Is it possible to create a repository at runtime, with a different index?
Thanks a lot
Marcel
Yes, it's possible with Spring, but you should use ElasticsearchTemplate instead of a Repository.
For example, I have two products that are stored in different indices:
@Document(indexName = "product-a", type = "product")
public class ProductA {
    @Id
    private String id;
    private String name;
    private int value;
    //Getters and setters
}

@Document(indexName = "product-b", type = "product")
public class ProductB {
    @Id
    private String id;
    private String name;
    //Getters and setters
}
Here they happen to share the same type and hence the same fields, but that's not necessary: the two products could have totally different fields.
I have two repositories:
public interface ProductARepository extends ElasticsearchRepository<ProductA, String> {
}
public interface ProductBRepository
extends ElasticsearchRepository<ProductB, String> {
}
These aren't strictly necessary either, they're only for testing; the point is that ProductA is stored in the "product-a" index and ProductB in the "product-b" index.
How to query two (ten, a dozen) indices with the same type?
Just build a custom repository like this:
@Repository
public class CustomProductRepositoryImpl {

    @Autowired
    private ElasticsearchTemplate elasticsearchTemplate;

    public List<ProductA> findProductByName(String name) {
        MatchQueryBuilder queryBuilder = QueryBuilders.matchPhrasePrefixQuery("name", name);
        //You can query as many indices as you want
        IndicesQueryBuilder builder = QueryBuilders.indicesQuery(queryBuilder, "product-a", "product-b");
        SearchQuery searchQuery = new NativeSearchQueryBuilder().withQuery(builder).build();
        return elasticsearchTemplate.query(searchQuery, response -> {
            SearchHits hits = response.getHits();
            List<ProductA> result = new ArrayList<>();
            Arrays.stream(hits.getHits()).forEach(h -> {
                Map<String, Object> source = h.getSource();
                //get only the id, just for testing
                ProductA productA = new ProductA()
                        .setId(String.valueOf(source.getOrDefault("id", null)));
                result.add(productA);
            });
            return result;
        });
    }
}
You can search as many indices as you want, and you can transparently inject this behavior into ProductARepository by adding custom behavior to single repositories.
A second solution is to use index aliases, but you would have to create a custom model or custom repository there too.
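For the alias route, a rough sketch using the same transport-client admin API as the createIndex example earlier in this page; the alias name "products" is an assumption, and alias management could equally be done outside the application:

```java
// Point one alias at both physical indices; searches against "products"
// will then hit product-a and product-b transparently.
elasticsearchTemplate.getClient().admin().indices()
        .prepareAliases()
        .addAlias("product-a", "products")
        .addAlias("product-b", "products")
        .get();
```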
We can use the withIndices method to switch the index if needed:
NativeSearchQueryBuilder nativeSearchQueryBuilder = nativeSearchQueryBuilderConfig.getNativeSearchQueryBuilder();
// Assign the index explicitly.
nativeSearchQueryBuilder.withIndices("product-a");
// Then add the query as usual.
nativeSearchQueryBuilder.withQuery(allQueries);
The @Document annotation on the entity only clarifies the mapping; to query against a specific index, we still need to use the method above.
@Document(indexName = "product-a", type = "_doc")