Spring Data Elasticsearch: multiple indices with the same Document - java

I'm using spring-data-elasticsearch and so far everything works fine.
@Document(type = "products", indexName = "empty")
public class Product
{
    ...
}

public interface ProductRepository extends ElasticsearchRepository<Product, String>
{
    ...
}
In my code I can search for products.
@Autowired
private ProductRepository repository;
...
repository.findByIdentifier("xxx").getCategory();
So, my problem is: I have the same Elasticsearch type in different indices and I want to use the same document class for all queries. I could handle multiple connections via a pool, but I have no idea how to implement this.
I would like to have something like this:
ProductRepository customerRepo = ElasticsearchPool.getRepoByCustomer("abc", ProductRepository.class);
customerRepo.findByIdentifier("xxx").getCategory();
Is it possible to create a repository at runtime, with a different index?
Thanks a lot
Marcel

Yes, it's possible with Spring, but you should use ElasticsearchTemplate instead of a Repository.
For example, I have two products that are stored in different indices:
#Document(indexName = "product-a", type = "product")
public class ProductA {
#Id
private String id;
private String name;
private int value;
//Getters and setters
}
#Document(indexName = "product-b", type = "product")
public class ProductB {
#Id
private String id;
private String name;
//Getters and setters
}
Here they have the same type, so they share the same fields, but that's not required; the two products could have totally different fields.
I have two repositories:
public interface ProductARepository extends ElasticsearchRepository<ProductA, String> {
}

public interface ProductBRepository extends ElasticsearchRepository<ProductB, String> {
}
These aren't strictly necessary either; they're only here for testing. What matters is that ProductA is stored in the "product-a" index and ProductB in the "product-b" index.
How do you query two (or ten, or a dozen) indices with the same type?
Just build a custom repository like this:
@Repository
public class CustomProductRepositoryImpl {

    @Autowired
    private ElasticsearchTemplate elasticsearchTemplate;

    public List<ProductA> findProductByName(String name) {
        MatchQueryBuilder queryBuilder = QueryBuilders.matchPhrasePrefixQuery("name", name);
        // You can query as many indices as you want
        IndicesQueryBuilder builder = QueryBuilders.indicesQuery(queryBuilder, "product-a", "product-b");
        SearchQuery searchQuery = new NativeSearchQueryBuilder().withQuery(builder).build();
        return elasticsearchTemplate.query(searchQuery, response -> {
            SearchHits hits = response.getHits();
            List<ProductA> result = new ArrayList<>();
            Arrays.stream(hits.getHits()).forEach(h -> {
                Map<String, Object> source = h.getSource();
                // get only the id, just for the test
                ProductA productA = new ProductA()
                        .setId(String.valueOf(source.getOrDefault("id", null)));
                result.add(productA);
            });
            return result;
        });
    }
}
You can search as many indices as you want, and you can transparently inject this behavior into ProductARepository by adding custom behavior to a single repository (see the sketch below).
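A minimal sketch of what that injection could look like, assuming the entities above and the older Spring Data naming convention where the implementation class is the repository interface name plus "Impl" (all names here are illustrative, not from the original answer):

import java.util.List;
import org.elasticsearch.index.query.QueryBuilders;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.elasticsearch.core.ElasticsearchTemplate;
import org.springframework.data.elasticsearch.core.query.NativeSearchQueryBuilder;
import org.springframework.data.elasticsearch.core.query.SearchQuery;
import org.springframework.data.elasticsearch.repository.ElasticsearchRepository;

// Custom interface declaring the multi-index search
public interface ProductARepositoryCustom {
    List<ProductA> findProductByName(String name);
}

// Implementation picked up by the "<RepositoryName>Impl" naming convention
public class ProductARepositoryImpl implements ProductARepositoryCustom {

    @Autowired
    private ElasticsearchTemplate elasticsearchTemplate;

    @Override
    public List<ProductA> findProductByName(String name) {
        SearchQuery searchQuery = new NativeSearchQueryBuilder()
                .withQuery(QueryBuilders.indicesQuery(
                        QueryBuilders.matchPhrasePrefixQuery("name", name),
                        "product-a", "product-b"))
                .build();
        // maps every hit onto ProductA, regardless of which index it came from
        return elasticsearchTemplate.queryForList(searchQuery, ProductA.class);
    }
}

// The regular repository now exposes derived queries plus the custom method
public interface ProductARepository
        extends ElasticsearchRepository<ProductA, String>, ProductARepositoryCustom {
}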
A second solution is to use index aliases, but then you also have to create a custom model or a custom repository.
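As a rough sketch of the alias route, using the transport client behind ElasticsearchTemplate (the alias name "products" is made up here; note that searches work through a multi-index alias, while writes still need a single concrete index):

// Point one alias at both product indices via the admin API
elasticsearchTemplate.getClient().admin().indices()
        .prepareAliases()
        .addAlias("product-a", "products")
        .addAlias("product-b", "products")
        .get();
// A document or custom query mapped to "products" now searches both indices.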

We can use the withIndices method to switch the index if needed:
NativeSearchQueryBuilder nativeSearchQueryBuilder = nativeSearchQueryBuilderConfig.getNativeSearchQueryBuilder();
// Assign the index explicitly.
nativeSearchQueryBuilder.withIndices("product-a");
// Then add query as usual.
nativeSearchQueryBuilder.withQuery(allQueries);
The @Document annotation on the entity only defines the mapping; to query against a specific index we still need to use the method above.
@Document(indexName = "product-a", type = "_doc")

Related

How to convert Optional<Entity> to Optional<EntityDTO> in Spring JPA?

I am new to Spring and although I can convert domain entities as List<Entity>, I cannot convert them properly for Optional<Entity>. I have the following methods in my repository and service:
EmployeeRepository:
@Query(value = "SELECT ...")
Optional<Employee> findByUuid(@Param(value = "uuid") final UUID uuid);
EmployeeService:
@Override
@LogExecution
@Transactional(readOnly = true)
public Optional<EmployeeDTO> findByUuid(UUID uuid) {
    Optional<Employee> employee = employeeRepository.findByUuid(uuid);
    return employee
            .stream()
            .map(EmployeeDTO::new)
            // .orElse(null);
            // .findFirst(); /// ???
}
My questions:
1. How should I convert Optional<Employee> to Optional<EmployeeDTO> properly?
2. Does Spring JPA collect the fields in the SELECT clause and map them in the service method to the corresponding DTO by matching their names? If so, does it handle the naming, e.g. employee_name in the database table versus employeeName in the domain model class?
The mapping between the output of employeeRepository#findByUuid, which is Optional<Employee>, and the method's return type Optional<EmployeeDTO> is 1:1, so no Stream (no call to stream()) is involved here.
All you need is to map the fields of Employee into EmployeeDTO properly. Handling the case where the Optional returned from employeeRepository#findByUuid is empty can be left to the subsequent chaining on the Optional; there is no need for orElse or findFirst calls.
Assuming the following classes both with all-args constructor and getters:
class Employee {
private final long id;
private final String firstName;
private final String lastName;
}
class EmployeeDTO {
private final long id;
private final String name;
private final String surname;
}
... you can perform the following. Nothing more than a way to create an EmployeeDTO from Employee's fields is needed. If the Optional returned from the employeeRepository is empty, no mapping happens and an empty Optional is returned.
@Override
@LogExecution
@Transactional(readOnly = true)
public Optional<EmployeeDTO> findByUuid(UUID uuid) {
    return employeeRepository
            .findByUuid(uuid)            // Optional<Employee>
            .map(emp -> new EmployeeDTO( // Optional<EmployeeDTO>
                    emp.getId(),         // .. id -> id
                    emp.getFirstName(),  // .. firstName -> name
                    emp.getLastName())); // .. lastName -> surname
}
Note: For Employee -> EmployeeDTO mapping I recommend picking one of these:
Create a constructor accepting Employee in EmployeeDTO, allowing you to map with .map(EmployeeDTO::new) (drawback: it creates a dependency on the entity); see the sketch after this list.
Just map with getters/setters.
Use a mapping framework such as MapStruct or any other.
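A minimal sketch of the first option, assuming the Employee getters shown above (getId, getFirstName, getLastName):

class EmployeeDTO {
    private final long id;
    private final String name;
    private final String surname;

    // Mapping constructor: creates the DTO straight from the entity
    // (drawback: EmployeeDTO now depends on Employee).
    EmployeeDTO(Employee employee) {
        this.id = employee.getId();
        this.name = employee.getFirstName();
        this.surname = employee.getLastName();
    }
    // getters ...
}

The service method then becomes a one-liner: return employeeRepository.findByUuid(uuid).map(EmployeeDTO::new);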
There are multiple options to map your entity to a DTO.
Using projections: your repository can directly return a DTO by using projections. This might be the best option if you don't need the entity at all (a sketch follows below the constructor example). You can find everything about projections here: https://docs.spring.io/spring-data/jpa/docs/current/reference/html/#projections
Using a library like mapstruct or modelmapper to generate your mapping code
Add a constructor or static factory method to your DTO. Something like
class EmployeeDTO {
    // fields here ...

    public static EmployeeDTO ofEntity(Employee entity) {
        var dto = new EmployeeDTO();
        // set fields
        return dto;
    }
}
And call employee.map(EmployeeDTO::ofEntity) in your service.
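If you go the projection route from the first bullet above, a minimal sketch of an interface-based projection could look like this (hypothetical names; it assumes Employee exposes uuid, firstName and lastName properties):

import java.util.Optional;
import java.util.UUID;
import org.springframework.data.jpa.repository.JpaRepository;

// Interface-based projection: Spring Data maps the selected columns onto
// these getters by property name, so no manual entity-to-DTO mapping is needed.
public interface EmployeeView {
    UUID getUuid();
    String getFirstName();
    String getLastName();
}

public interface EmployeeRepository extends JpaRepository<Employee, Long> {
    Optional<EmployeeView> findByUuid(UUID uuid);
}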

How to use db references with reactive Spring Data MongoDB?

I'm new to MongoDB and Reactor and I'm trying to retrieve a User together with its associated Profiles.
Here are the POJOs:
public class User {
    private @Id String id;
    private String login;
    private String hashPassword;
    @Field("profiles") private List<String> profileObjectIds;
    @Transient private List<Profile> profiles;
}

public class Profile {
    private @Id String id;
    private @Indexed(unique = true) String name;
    private List<String> roles;
}
The problem is: how do I inject the profiles into the User POJO?
I'm aware I could use @DBRef to solve the problem, but in its documentation MongoDB states that manual references should be preferred over DBRefs.
I see two solutions:
Fill the POJO when I fetch it:
public Mono<User> getUser(String login) {
    return userRepository.findByLogin(login)
            .flatMap(user -> ??? );
}
I should do something with profileRepository.findAllById(), but I don't know how to combine both Publishers, given that the profiles result depends on the user result.
Declare an AbstractMongoEventListener and override the onAfterConvert method:
But here I'm stuck, since the method ends before the result is published:
public void onAfterConvert(AfterConvertEvent<User> event) {
    final User source = event.getSource();
    source.setProfiles(new ArrayList<>());
    profileRepository.findAllById(source.getProfileObjectIds())
            .doOnNext(e -> source.getProfiles().add(e))
            .subscribe();
}
TL;DR
There's no DBRef support in reactive Spring Data MongoDB and I'm not sure there will be.
Explanation
Spring Data projects are organized into Template API, Converter and Mapping Metadata components. The imperative (blocking) implementation of the Template API uses an imperative approach to fetch documents and convert them into domain objects. MappingMongoConverter in particular handles all the conversion and DBRef resolution. This converter works synchronously/imperatively and is used by both Template API implementations (the imperative and the reactive one).
Reusing MappingMongoConverter was the logical decision when adding reactive support, as there was no need to duplicate code. The only limitation is DBRef resolution, which does not fit the reactive execution model.
To support reactive DBRefs, the converter needs to be split up into several bits and the whole association handling requires an overhaul.
Reference : https://jira.spring.io/browse/DATAMONGO-2146
Recommendation
Keep references as keys/ids in your domain model and look them up as needed. zipWith and flatMap are the appropriate operators, depending on what you want to achieve (enhance the model with references, or look up references only).
On a related note: Reactive Spring Data MongoDB comes with a partially reduced feature set. The contextual SpEL extension is not supported, as these components assume an imperative programming model and thus synchronous execution.
For the first point, I finally achieved what I wanted:
public Mono<User> getUser(String login) {
    return userRepository.findByLogin(login)
            .flatMap(user ->
                    Mono.just(user)
                            .zipWith(profileRepository.findAllById(user.getProfileObjectIds())
                                            .collectList(),
                                    (u, p) -> {
                                        u.setProfiles(p);
                                        return u;
                                    })
            );
}
In my case, I managed this problem using the following approach.
My Entity is:
@Getter
@Setter
@NoArgsConstructor
@AllArgsConstructor
@Document(collection = "post")
public class Post implements Serializable {

    private static final long serialVersionUID = -6281811500337260230L;

    @EqualsAndHashCode.Include
    @Id
    private String id;

    private Date date;
    private String title;
    private String body;
    private AuthorDto author;
    private Comment comment;
    private List<Comment> listComments = new ArrayList<>();
    private List<String> idComments = new ArrayList<>();
}
My controller is:
@GetMapping(FIND_POST_BY_ID_SHOW_COMMENTS)
@ResponseStatus(OK)
public Mono<Post> findPostByIdShowComments(@PathVariable String id) {
    return postService.findPostByIdShowComments(id);
}
Last but not least, my service (here is the solution):
public Mono<Post> findPostByIdShowComments(String id) {
return postRepo
.findById(id)
.switchIfEmpty(postNotFoundException())
.flatMap(postFound -> commentService
.findCommentsByPostId(postFound.getId())
.collectList()
.flatMap(comments -> {
postFound.setListComments(comments);
return Mono.just(postFound);
})
);
}
public Flux<Comment> findCommentsByPostId(String id) {
return postRepo
.findById(id)
.switchIfEmpty(postNotFoundException())
.thenMany(commentRepo.findAll())
.filter(comment1 -> comment1.getIdPost()
.equals(id));
}
Thanks, this helped a lot.
Here is my solution:
@Bean // declared in a @Configuration class
public MappingMongoConverter mappingMongoConverter(MongoMappingContext mongoMappingContext) {
    MappingMongoConverter converter = new MappingMongoConverter(NoOpDbRefResolver.INSTANCE, mongoMappingContext);
    converter.setTypeMapper(new DefaultMongoTypeMapper(null));
    converter.setCustomConversions(mongoCustomConversions());
    return converter;
}
The trick was to use NoOpDbRefResolver.INSTANCE.

spring data elasticsearch dynamic multi tenant index mismatch?

I am experimenting with spring data elasticsearch by implementing a cluster which will host multi-tenant indexes, one index per tenant.
I am able to create each needed index and set its settings dynamically, like this:
public class SpringDataES {

    @Autowired
    private ElasticsearchTemplate es;

    @Autowired
    private TenantIndexNamingService tenantIndexNamingService;

    private void createIndex(String indexName) {
        Settings indexSettings = Settings.builder()
                .put("number_of_shards", 1)
                .build();
        CreateIndexRequest indexRequest = new CreateIndexRequest(indexName, indexSettings);
        es.getClient().admin().indices().create(indexRequest).actionGet();
        es.refresh(indexName);
    }

    private void prepareIndex(String indexName) {
        if (!es.indexExists(indexName)) {
            createIndex(indexName);
        }
        updateMappings(indexName);
    }
The model is created like this
#Document(indexName = "#{tenantIndexNamingService.getIndexName()}", type = "movies")
public class Movie {
#Id
#JsonIgnore
private String id;
private String movieTitle;
#CompletionField(maxInputLength = 100)
private Completion movieTitleSuggest;
private String director;
private Date releaseDate;
where the index name is passed dynamically via the SpEl
#{tenantIndexNamingService.getIndexName()}
that is served by
@Service
public class TenantIndexNamingService {

    private static final String INDEX_PREFIX = "test_index_";

    private String indexName = INDEX_PREFIX;

    public TenantIndexNamingService() {
    }

    public String getIndexName() {
        return indexName;
    }

    public void setIndexName(int tenantId) {
        this.indexName = INDEX_PREFIX + tenantId;
    }

    public void setIndexName(String indexName) {
        this.indexName = indexName;
    }
}
So whenever I have to execute a CRUD action, I first point to the right index and then execute the desired action:
tenantIndexNamingService.setIndexName(tenantId);
movieService.save(new Movie("Dead Poets Society", getCompletion("Dead Poets Society"), "Peter Weir", new Date()));
My assumption is that the following dynamic index assignment will not work correctly in a multi-threaded web application:
@Document(indexName = "#{tenantIndexNamingService.getIndexName()}")
This is because TenantIndexNamingService is a singleton.
So my question is: how do I achieve the right behavior in a thread-safe manner?
I would probably go with an approach similar to the following one proposed for Cassandra:
https://dzone.com/articles/multi-tenant-cassandra-cluster-with-spring-data-ca
You can have a look at the related GitHub repository here:
https://github.com/gitaroktato/spring-boot-cassandra-multitenant-example
Now, since Elasticsearch differs in how you define a Document, you should mainly focus on defining a request-scoped bean that encapsulates your tenant id and binds it to your incoming requests.
Here is my solution: I create a request-scoped bean to hold the index per HttpRequest (a sketch is below).
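A minimal sketch of such a bean, assuming Spring 4.3+ for @RequestScope; it keeps the bean name used by the SpEL expression in @Document so the expression still resolves (names mirror the question, but this is an illustration, not the answerer's exact code):

import org.springframework.stereotype.Component;
import org.springframework.web.context.annotation.RequestScope;

// One instance per HTTP request; the default class-based scoped proxy lets it
// be injected into singletons and referenced from the @Document SpEL expression.
@Component("tenantIndexNamingService")
@RequestScope
public class TenantIndexNamingService {

    private static final String INDEX_PREFIX = "test_index_";

    private String indexName = INDEX_PREFIX;

    public String getIndexName() {
        return indexName;
    }

    public void setIndexName(int tenantId) {
        this.indexName = INDEX_PREFIX + tenantId;
    }
}

A web filter or interceptor can then call setIndexName(tenantId) at the start of each request, so concurrent requests for different tenants no longer overwrite each other's index name.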
How does a singleton bean handle the dynamic index, though?

How to save and query dynamic fields in Spring Data MongoDB?

I'm on Spring boot 1.4.x branch and Spring Data MongoDB.
I want to extend a POJO from HashMap to give it the ability to save new properties dynamically.
I know I could add a Map<String, Object> properties field to the Entry class to store my dynamic values, but I don't want a nested structure. My goal is to have all fields at the root of the Entry class so it serializes like this:
{
"id":"12334234234",
"dynamicField1": "dynamicValue1",
"dynamicField2": "dynamicValue2"
}
So I created this Entry class:
@Document
public class Entry extends HashMap<String, Object> {

    @Id
    private String id;

    public String getId() {
        return id;
    }

    public void setId(String id) {
        this.id = id;
    }
}
And the repository like this:
public interface EntryRepository extends MongoRepository<Entry, String> {
}
When I launch my app I have this error:
Error creating bean with name 'entryRepository': Invocation of init method failed; nested exception is org.springframework.data.mapping.model.MappingException: Could not lookup mapping metadata for domain class java.util.HashMap!
Any idea?
TL;DR
Do not use Java collection/map types as a base class for your entities.
Repositories are not the right tool for your requirement.
Use DBObject with MongoTemplate if you need dynamic top-level properties.
Explanation
Spring Data repositories are repositories in the DDD sense, acting as a persistence gateway for your well-defined aggregates. They inspect domain classes to derive the appropriate queries. Spring Data excludes collection and map types from entity analysis, which is why extending your entity from a Map fails.
Repository query methods for dynamic properties are possible, but it's not the primary use case. You would have to use SpEL queries to express your query:
public interface EntryRepository extends MongoRepository<Entry, String> {

    @Query("{ ?0 : ?1 }")
    Entry findByDynamicField(String field, Object value);
}
This method does not give you any type safety regarding the predicate value and is only an ugly alias for a proper, individual query.
Rather use DBObject with MongoTemplate and its query methods directly:
List<DBObject> result = template.find(new Query(Criteria.where("your_dynamic_field")
.is(theQueryValue)), DBObject.class);
DBObject is a Map that gives you full access to properties without enforcing a pre-defined structure. You can create, read, update and delete DBObjects via the Template API.
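For the write side, a minimal sketch under the same assumptions (Spring Data MongoDB 1.x as in Boot 1.4; the collection name "entries" and the class name are made up):

import com.mongodb.BasicDBObject;
import com.mongodb.DBObject;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;
import org.springframework.data.mongodb.core.query.Update;

public class DynamicEntryDao {

    private final MongoTemplate template;

    public DynamicEntryDao(MongoTemplate template) {
        this.template = template;
    }

    public void insertDynamicEntry() {
        DBObject entry = new BasicDBObject();
        entry.put("dynamicField1", "dynamicValue1");
        entry.put("dynamicField2", "dynamicValue2");
        // write through the driver collection wrapped by the template
        template.getCollection("entries").insert(entry);
    }

    public void updateDynamicField() {
        // update a dynamic field through the Template API
        template.updateFirst(new Query(Criteria.where("dynamicField1").is("dynamicValue1")),
                new Update().set("dynamicField1", "newValue"), "entries");
    }
}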
A last thing
You can declare dynamic properties on a nested level using a Map, if your aggregate root declares some static properties:
@Document
public class Data {

    @Id
    private String id;

    private Map<String, Object> details;
}
Here we can achieve this using a JSONObject.
The entity will be like this:
@Document
public class Data {

    @Id
    private String id;

    private JSONObject details;
    // getters and setters
}
The POJO will be like this
public class DataDTO {
private String id;
private JSONObject details;
//getters and setters
}
In the service:
Data formData = new Data();
JSONObject details = dataDTO.getDetails();
details.put("dynamicField1", "dynamicValue1");
details.put("dynamicField2", "dynamicValue2");
formData.setDetails(details);
mongoTemplate.save(formData);
I have done this as per my business requirements; refer to this code and adapt it to yours. Is this helpful?

Sorting by parent entity using Specifications

I'm dealing with an issue which, to my understanding, looks unsupported in Spring Data JPA.
I have a grid (using the jqGrid plugin for jQuery) on the view which sends parameters to the server; they are parsed and then a dynamic query generated through Specifications is executed.
The issue comes when I want to order by a column which doesn't belong to the root entity.
E.g. Transaction, Card and Account are my entities and the grid displays last4Digits as a way for the user to identify the card. As you can imagine, last4Digits belongs to Card. I query transactions per account.
Using Specifications I can filter by that attribute, joining tables and so on, but sorting fails because the findAll() implementation assumes the properties from the Sort object belong to the root entity.
Code example:
JQGridRule panFirst6DigitsRule = FilterUtils.findSearchOrFilterRule(settings, Card_.panFirst6Digits.getName());
JQGridRule panLast4DigitsRule = FilterUtils.findSearchOrFilterRule(settings, Card_.panLast4Digits.getName());
if(panFirst6DigitsRule != null) {
filterPan1 = TransactionSpecs.withPanFirst6Digits(panFirst6DigitsRule.getData(),
panFirst6DigitsRule.getOp(), gridGroupOp);
}
if(panLast4DigitsRule != null) {
filterPan2 = TransactionSpecs.withPanLast4Digits(panLast4DigitsRule.getData(),
panLast4DigitsRule.getOp(), gridGroupOp);
}
Specification<Transaction> joinSpec = TransactionSpecs.withAccountId(account.getAccountId());
Specification<Transaction> activeSpec = BaseSpecs.withEntityStatus(true);
Page<Transaction> results = transactionRepository.findAll(
Specifications.where(joinSpec).and(filterSpec).and(filterPan1).and(filterPan2).and(activeSpec), springPageable);
The springPageable variable contains a Sort for the last4Digits column, generated this way*:
List<Order> sortOrders = new ArrayList<Order>();
Order sortOrder = new Order(Direction.ASC, "panLast4Digits");
sortOrders.add(sortOrder);
sort = new Sort(sortOrders);
*Code that parses the parameters and creates more Order objects is omitted.
Does someone know how to implement that kind of sort over an attribute which belongs to a parent entity/class?
Thanks in advance
Version 1.4.3 for Spring-data-jpa and 4.2.8 for Hibernate
EDIT: showing how the Specification for panLast4Digits is generated
public static Specification<Transaction> withPanLast4Digits(final String panLast4Digits, final JQGridSearchOp op, final JQGridGroupOp whereOp) {
Specification<Transaction> joinSpec = new Specification<Transaction>() {
@Override
public Predicate toPredicate(Root<Transaction> root, CriteriaQuery<?> query, CriteriaBuilder cb) {
Join<Transaction, Card> join = joinCards(root, JoinType.INNER);
return FilterUtils.buildPredicate(cb, join.get(Card_.panLast4Digits), op, panLast4Digits, null, whereOp);
}
};
return joinSpec;
}
private static Join<Transaction, Card> joinCards(Root<Transaction> root, JoinType joinType) {
Join<Transaction, Card> join = getJoin(root, Transaction_.parentCard, joinType);
// only join if not already joined
if (join == null) {
join = root.join(Transaction_.parentCard, joinType);
}
return join;
}
protected static <C, T> Join<C, T> getJoin(Root<C> root, Attribute<? super C, T> attribute, JoinType joinType) {
Set<Join<C, ?>> joins = root.getJoins();
for (Join<C, ?> join : joins) {
if (join.getAttribute().equals(attribute) && join.getJoinType().equals(joinType)) {
return (Join<C, T>) join;
}
}
return null;
}
Also I have updated to spring-data-jpa 1.6.0 and hibernate 4.3.5
The attribute for sorting is "yourChildEntity.attribute".
In your case you can use the PagingAndSortingRepository this way.
Let's assume you have two entities: an Account and a Card.
@Entity
public class Account {

    // I'll just assume your id is of type Long and auto-generated
    @Id
    @GeneratedValue
    private Long id;

    @ManyToOne
    @JoinColumn(name = "CARD_ID")
    private Card creditCard;

    // getters and setters
}

@Entity
public class Card {

    // Id and other attributes.
    private String panLast4Digits;

    // getters and setters
}
Repository interface :
@Repository
public interface AccountRepository extends PagingAndSortingRepository<Account, Long>,
        JpaSpecificationExecutor<Account> {
}
Service Layer :
import org.springframework.data.domain.Page;
import org.springframework.data.domain.Pageable;

public interface AccountService {

    // you can add other arguments, e.g. the ones you want to filter by
    Page<Account> filter(Pageable pageable);
}
Service Implementation:
@Service
public class AccountServiceImpl implements AccountService {

    @Resource // or @Autowired
    private AccountRepository repository;

    @Override
    public Page<Account> filter(Pageable pageable) {
        // Filter using Specifications if you have other arguments in the method signature.
        return repository.findAll(pageable);
        // If you have specifications, return repository.findAll(yourSpecification, pageable);
    }
}
Now the call to the service through an endpoint or a controller. Here is just a method showing how to sort through a child entity attribute:
import org.springframework.data.domain.Page;
import org.springframework.data.domain.PageRequest;
import org.springframework.data.domain.Sort.Direction;

@Resource
private AccountService service;

public Page<Account> consumeMyService() {
    // 0 : first page
    // 12 : page size
    // sorting through the child entity Card, by the attribute panLast4Digits
    PageRequest pageable = new PageRequest(0, 12, Direction.ASC, "creditCard.panLast4Digits");
    return service.filter(pageable);
}
You must register your beans by configuring jpa:repositories for the repository interfaces and context:component-scan for the service implementation (or the equivalent Java configuration sketched below).
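A minimal sketch of the equivalent Java configuration (package names are placeholders, not from the original answer):

import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.jpa.repository.config.EnableJpaRepositories;

@Configuration
@EnableJpaRepositories(basePackages = "com.example.repository")
@ComponentScan(basePackages = "com.example.service")
public class PersistenceConfig {
    // DataSource, EntityManagerFactory and TransactionManager beans go here
    // (or are auto-configured if you are on Spring Boot).
}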
this answer may be useful too.
