I am trying to figure out why I have to set my bean name to elasticsearchTemplate. Without it, my application crashes. I have the code below to configure my REST client. The issue is that if I don't add elasticsearchTemplate as the bean name, startup fails and says it cannot find elasticsearchTemplate. Any idea why this happens, and also what is the difference between using ElasticsearchOperations and ElasticsearchTemplate?
Using Spring Data Elasticsearch version 3.2
Using the Java High Level REST Client version 6.8.0
Works
#Bean("elasticsearchtemplate")
public ElasticsearchOperations elasticsearchTemplate() throws Exception {
return new ElasticsearchTemplate(client());
}
Doesn't Work
public ElasticsearchOperations elasticsearchTemplate() throws Exception {
    return new ElasticsearchTemplate(client());
}
Maybe the startup configuration (application.properties) is missing the Elasticsearch-related settings.
You need to define some Elasticsearch properties in your application.properties file, such as cluster-nodes and cluster-name, which are used by ElasticsearchTemplate and ElasticsearchRepository to connect to the Elasticsearch engine,
as follows.
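For example, a minimal sketch of the relevant entries (the property names follow Spring Boot's Elasticsearch auto-configuration; the cluster name and node address are placeholders):

spring.data.elasticsearch.cluster-name=my-cluster
spring.data.elasticsearch.cluster-nodes=localhost:9300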
You can manually configure the REST client by extending AbstractElasticsearchConfiguration:
@Configuration
public class RestClientConfig extends AbstractElasticsearchConfiguration {

    @Override
    public RestHighLevelClient elasticsearchClient() {
        return RestClients.create(ClientConfiguration.localhost()).rest();
    }
}
What is the difference between using ElasticsearchOperations and ElasticsearchTemplate?
ElasticsearchTemplate is an implementation of the ElasticsearchOperations interface that uses the Transport Client; ElasticsearchRestTemplate is the implementation built on the High Level REST Client.
https://docs.spring.io/spring-data/elasticsearch/docs/3.2.0.RELEASE/reference/html/#elasticsearch.operations.resttemplate
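As an illustration (a minimal, hypothetical service, not part of the original post), code that depends on the ElasticsearchOperations interface works unchanged no matter which implementation is registered:

import org.springframework.data.elasticsearch.core.ElasticsearchOperations;
import org.springframework.stereotype.Service;

@Service
public class ProductSearchService {

    private final ElasticsearchOperations operations;

    // Whatever ElasticsearchOperations bean is defined gets injected here,
    // whether it is an ElasticsearchTemplate (transport client) or an
    // ElasticsearchRestTemplate (high level REST client).
    public ProductSearchService(ElasticsearchOperations operations) {
        this.operations = operations;
    }
}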
I am trying to use a shared cache in a Spring Boot clustered app.
It seems that everything is working, but when I try to retrieve cached values from a second instance of the app, it does not get them from the cache.
It seems like every app is working with its own cache rather than sharing it.
I followed the guideline found here to set up a simple environment: https://hazelcast.com/blog/spring-boot/
My code:
Controller.java
@Controller
@RequestMapping("/public/testcache")
public class TestCacheController {

    @Autowired
    BookService bookService;

    @GetMapping("/get/{isbn}")
    @ResponseBody
    public String getBookNameByIsbn(@PathVariable("isbn") String isbn) {
        return bookService.getBookNameByIsbn(isbn);
    }

    @GetMapping("/clear/cache")
    @ResponseBody
    public String clearCache() {
        bookService.deleteCache();
        return "done";
    }
}
BookService.java
@Service
public class BookService {

    @Cacheable("books")
    public String getBookNameByIsbn(String isbn) {
        return findBookInSlowSource(isbn);
    }

    private String findBookInSlowSource(String isbn) {
        // some long processing
        try {
            Thread.sleep(3000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        return "Sample Book " + isbn;
    }

    @CacheEvict(value = {"books"}, allEntries = true)
    public void deleteCache() {}
}
# hazelcast.yaml
hazelcast:
  network:
    join:
      multicast:
        enabled: true
When I start the applications, I always get the expected cluster output:
Members {size:2, ver:2} [
    Member [192.168.178.107]:5702 - d53f2c3f-d66f-4ba3-bf8d-88d4935bde4e
    Member [192.168.178.107]:5701 - 69860793-c420-48d3-990c-d0c30a3a92d6 this
]
I tried:
running two Spring Boot apps on different ports
running two Tomcat instances on different ports
replacing the YAML configuration with the Java-based configuration shown below
Java-based configuration
@Configuration
@EnableCaching
public class CacheConfigurator {

    @Bean
    public Config config() {
        Config config = new Config();
        config.getNetworkConfig().getJoin().getMulticastConfig().setEnabled(true);
        return config;
    }
}
Every time I get the same result: each app seems to cache on its own.
Additional information:
I tried to use the Management Center (https://hazelcast.com/product-features/management-center/); I can connect to the cluster members, but I never see any value under "Map".
I am wondering whether the Hazelcast instances are launched but not actually used by Spring Boot, which instead uses its own simple cache.
My application.properties is empty.
Spring Boot version 2.4.4
Maybe each of your Spring Boot applications actually creates two separate Hazelcast instances and uses the non-clustered one for caching.
Please try to follow these guides:
Hazelcast Guides: Getting Started with Hazelcast using Spring Boot
Hazelcast Guides: Caching with Spring Boot and Hazelcast
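One thing worth checking, as a minimal sketch (it assumes the hazelcast-spring module is on the classpath; this is an illustration, not necessarily the whole fix): expose the Spring CacheManager explicitly on top of the auto-configured HazelcastInstance, so that @Cacheable entries land in the clustered maps rather than in the default ConcurrentMap cache.

import com.hazelcast.config.Config;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.spring.cache.HazelcastCacheManager;
import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableCaching
public class CacheConfigurator {

    @Bean
    public Config config() {
        Config config = new Config();
        config.getNetworkConfig().getJoin().getMulticastConfig().setEnabled(true);
        return config;
    }

    // Spring Boot builds a single HazelcastInstance from the Config bean above;
    // wiring it into a HazelcastCacheManager makes the Spring cache abstraction
    // store entries in the clustered maps instead of a local in-memory cache.
    @Bean
    public CacheManager cacheManager(HazelcastInstance hazelcastInstance) {
        return new HazelcastCacheManager(hazelcastInstance);
    }
}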
Elasticsearch beginner here.
When performing a .findAll() operation through a ProviderRepository that extends ElasticsearchRepository, I'm seeing the warning below.
2020-11-19_17:31:09.218 WARN org.elasticsearch.client.RestClient - [::] request [POST http://mysearch:9200/provider_search/provider/_search?rest_total_hits_as_int=true&typed_keys=true&ignore_unavailable=false&expand_wildcards=open&allow_no_indices=true&ignore_throttled=true&search_type=query_then_fetch&batched_reduce_size=512] returned 1 warnings: [299 Elasticsearch-7.6.2-ef48eb35cf30adf4db14086e8aabd07ef6fb113f "[types removal] Specifying types in search requests is deprecated."]
I do not want the /provider/ part in the URL. If I paste the URL without the /provider/ part, I get the desired response; with the /provider/ part, the request returns 0 results. However, I'm not sure where the /provider/ part gets appended to the URL.
Here's my Repository
public interface ProviderRepository extends ElasticsearchRepository<Provider, Long> {
}
Here's the Entity/Document
@Document(indexName = "provider_search")
public class Provider {
    private Long id;
    private String providerName;
    ...
}
And, here's my config
@Configuration
@EnableElasticsearchRepositories(basePackages = { "com.commons.repositories.elastic" })
public class ElasticDataSourceConfig {

    @Bean
    public RestHighLevelClient client() {
        ClientConfiguration clientConfiguration = ClientConfiguration.builder()
                .connectedTo("mysearch:9200")
                .build();
        return RestClients.create(clientConfiguration).rest();
    }

    @Bean
    public ElasticsearchOperations elasticsearchTemplate() {
        return new ElasticsearchRestTemplate(client());
    }
}
Please let me know if I need to provide more information.
TL;DR
I need help removing the /provider/ part from the aforementioned URL.
I had to update the Spring Boot version from 2.2.* to 2.3.* to get it working. For some reason, the transitive dependencies of spring-boot-starter-data were interfering with my direct Elasticsearch dependency. Even when I installed spring-data-elasticsearch 4.0.*, its transitive dependencies were still at 6.* versions. Once I updated Spring Boot, all of the transitive dependencies of spring-data-elasticsearch (version 4.0.*) changed from 6.* to 7.*.
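For reference, a rough sketch of the corresponding build change, assuming a Maven build that inherits from the Spring Boot starter parent (the exact 2.3.x patch version is a placeholder):

<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <!-- any 2.3.x release manages Spring Data Elasticsearch 4.0.x and the 7.x Elasticsearch client -->
    <version>2.3.12.RELEASE</version>
</parent>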
I have a simple multi-database setup to try out multi-database configuration with R2DBC.
However, it is not working as expected; it always uses the first database.
@Configuration
@EnableR2dbcRepositories(databaseClientRef = "postgreDbClient", basePackages = {"com.x.y.repo.postgresql"})
public class PostgreSqlConfiguration extends AbstractR2dbcConfiguration {

    @Bean(name = "postgresqlConnectionFactory")
    ConnectionFactory connectionFactory() {
        return ConnectionFactories.get("r2dbc:postgresql://<host>:5432/<database>");
    }

    @Bean(name = "postgreDbClient")
    DatabaseClient databaseClient() {
        return DatabaseClient.create(this.connectionFactory());
    }
}
@Configuration
@EnableR2dbcRepositories(databaseClientRef = "mssqlDbClient", basePackages = {"com.x.y.repo.mssql"})
public class MsSqlConfiguration extends AbstractR2dbcConfiguration {

    @Bean(name = "mssqlConnectionFactory")
    ConnectionFactory connectionFactory() {
        return ConnectionFactories.get("r2dbc:mssql://<host>:1433/<database>");
    }

    @Bean(name = "mssqlDbClient")
    DatabaseClient databaseClient() {
        return DatabaseClient.create(this.connectionFactory());
    }
}
com.x.y.repo.postgresql
-EmployeeRepository.java
-DepartmentRepository.java
com.x.y.repo.mssql
-PuchaseRepository.java
-SalesRepository.java
public interface EmployeeRepository extends R2dbcRepository<Employee, Integer>{
}
public interface PuchaseRepository extends R2dbcRepository<Purchase, Integer>{
}
The above is a simplified representation of my code.
My requests always go to PostgreSQL, even though basePackages for the second configuration points at the MSSQL package com.x.y.repo.mssql.
Not sure which version you are using; I encountered the same issue when using the latest Spring Boot 2.4.0-M2 / Spring Data R2DBC 1.2.0-M2.
Using AbstractR2dbcConfiguration is problematic here, check this question. I was using MySQL and Postgres in a single application.
I finally resolved it by creating a custom config and giving up AbstractR2dbcConfiguration; check the sample codes (a rough sketch of the approach is shown below).
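A minimal sketch of such a custom configuration, assuming Spring Data R2DBC 1.2+, where @EnableR2dbcRepositories can point at a dedicated R2dbcEntityOperations bean per database (the bean and package names mirror the question; everything else is illustrative, not the original sample code):

import io.r2dbc.spi.ConnectionFactories;
import io.r2dbc.spi.ConnectionFactory;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.r2dbc.core.R2dbcEntityOperations;
import org.springframework.data.r2dbc.core.R2dbcEntityTemplate;
import org.springframework.data.r2dbc.repository.config.EnableR2dbcRepositories;

@Configuration
@EnableR2dbcRepositories(entityOperationsRef = "postgresEntityOperations",
        basePackages = {"com.x.y.repo.postgresql"})
public class PostgresRepositoryConfig {

    @Bean
    @Qualifier("postgres")
    public ConnectionFactory postgresConnectionFactory() {
        return ConnectionFactories.get("r2dbc:postgresql://<host>:5432/<database>");
    }

    // A dedicated template bound to this ConnectionFactory; a second @Configuration
    // class would do the same for the MSSQL repositories in com.x.y.repo.mssql.
    @Bean
    public R2dbcEntityOperations postgresEntityOperations(
            @Qualifier("postgres") ConnectionFactory connectionFactory) {
        return new R2dbcEntityTemplate(connectionFactory);
    }
}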
I am using a Solr 7.1.0 server with a Java Spring Boot application.
To communicate with the Solr server I am using "springframework.data.solr".
I have a "template" schema from which I want to create new cores at runtime.
The goal I want to achieve is to create a new core for each customer while keeping the schema the same.
This is what my SolrConfig looks like:
@Configuration
@EnableSolrRepositories(basePackages = "com.my.repository", multicoreSupport = true)
@ComponentScan
public class SolrConfig {

    @Bean
    public SolrClient solrClient() {
        return new HttpSolrClient("http://localhost:8983/solr");
    }

    @Bean
    @Scope("prototype")
    public SolrTemplate solrTemplate(SolrClient client) throws Exception {
        return new SolrTemplate(client);
    }
}
my repository interface:
public interface OpenItemsDebtorsRepository extends CustomOpenItemsDebtorsRepository, SolrCrudRepository<OpenItemDebtor, String> {

    void setCore(String core);

    @Query("orderNumber:*?0*~")
    List<OpenItemDebtor> findByOrderNumber(String orderNumber);
}
I am looking for something like this:
solrTemplate.CreateNewCore(String coreName)
Do you have any suggestions?
I would strongly suggest using the native Solr client (SolrJ) for your Spring Boot project. Have a service component created that provides you an instance of the Solr server (CloudSolrClient).
SolrJ has all the components that you need to create and manage cores and collections.
I know this is not a straight answer, but I hope it helps.
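For example, a rough sketch using SolrJ's CoreAdminRequest against a standalone Solr node (it assumes a config set named "template" already exists on the server; the class name and URL are illustrative):

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.CoreAdminRequest;

public class CoreAdminService {

    private final SolrClient solrClient =
            new HttpSolrClient.Builder("http://localhost:8983/solr").build();

    // Creates a new core for a customer, reusing the shared "template" config set.
    public void createCore(String coreName) throws Exception {
        CoreAdminRequest.Create request = new CoreAdminRequest.Create();
        request.setCoreName(coreName);
        request.setConfigSet("template");
        request.process(solrClient);
    }
}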
Let's say we have a bean definition in the Spring configuration:
<bean id="scanningIMAPClient" class="com.acme.email.incoming.ScanningIMAPClient" />
What I really want is for scanningIMAPClient to be of type com.acme.email.incoming.GenericIMAPClient if the configured email server is a normal IMAP server, and com.acme.email.incoming.GmailIMAPClient in case it is a Gmail server (since Gmail behaves in a slightly different way). GmailIMAPClient is a subclass of GenericIMAPClient.
How can I accomplish that in the Spring configuration?
There is a properties file which contains configuration of the email server.
It's simple with Java configuration:
@Value("${serverAddress}")
private String serverAddress;

@Bean
public GenericIMAPClient scanningIMAPClient() {
    if (serverAddress.equals("gmail.com"))
        return new GmailIMAPClient();
    else
        return new GenericIMAPClient();
}
You can emulate this behaviour with a custom FactoryBean, for example as sketched below.
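A minimal sketch of that FactoryBean approach (the class name and the serverAddress property are illustrative, not from the original post):

import org.springframework.beans.factory.FactoryBean;

public class ImapClientFactoryBean implements FactoryBean<GenericIMAPClient> {

    private String serverAddress;

    public void setServerAddress(String serverAddress) {
        this.serverAddress = serverAddress;
    }

    @Override
    public GenericIMAPClient getObject() {
        // Decide the concrete type based on the configured server.
        return "gmail.com".equals(serverAddress) ? new GmailIMAPClient() : new GenericIMAPClient();
    }

    @Override
    public Class<?> getObjectType() {
        return GenericIMAPClient.class;
    }

    @Override
    public boolean isSingleton() {
        return true;
    }
}

It could then replace the original bean definition in XML, assuming the properties file is already registered as a placeholder source:

<bean id="scanningIMAPClient" class="com.acme.email.incoming.ImapClientFactoryBean">
    <property name="serverAddress" value="${serverAddress}"/>
</bean>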
You can use programmatic configuration:
@Configuration
public class AppConfig {

    @Bean(name = "scanningIMAPClient")
    public GenericIMAPClient helloWorld() {
        // ...check config and return desired type
    }
}
More info here.