I am using a Solr 7.1.0 server with a Java Spring Boot application.
To communicate with the Solr server I am using "springframework.data.solr".
I have a "template" schema from which I want to create new cores at runtime.
The goal I want to achieve is to create a new core for each customer while keeping the schema the same.
This is what my SolrConfig looks like:
@Configuration
@EnableSolrRepositories(basePackages = "com.my.repository", multicoreSupport = true)
@ComponentScan
public class SolrConfig {

    @Bean
    public SolrClient solrClient() {
        return new HttpSolrClient("http://localhost:8983/solr");
    }

    @Bean
    @Scope("prototype")
    public SolrTemplate solrTemplate(SolrClient client) throws Exception {
        return new SolrTemplate(client);
    }
}
My repository interface:
public interface OpenItemsDebtorsRepository extends CustomOpenItemsDebtorsRepository, SolrCrudRepository<OpenItemDebtor, String> {

    void setCore(String core);

    @Query("orderNumber:*?0*~")
    List<OpenItemDebtor> findByOrderNumber(String orderNumber);
}
I am looking for something like this:
solrTemplate.CreateNewCore(String coreName)
Do you have any suggestions?
I would strongly suggest using the native Solr client (SolrJ) for your Spring Boot project. Create a service component that provides you with an instance of the Solr client (CloudSolrClient).
SolrJ has all the components that you would need to create and manage cores and collections.
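For example, here is a minimal sketch of core creation with SolrJ's CoreAdminRequest, assuming your "template" configset is available on the Solr server (the wrapper class and method names are placeholders):

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.request.CoreAdminRequest;

public class CoreCreator {

    private final SolrClient solrClient;

    public CoreCreator(SolrClient solrClient) {
        this.solrClient = solrClient;
    }

    // Creates a new core for a customer, reusing the shared "template" configset.
    public void createCore(String coreName) throws Exception {
        CoreAdminRequest.Create request = new CoreAdminRequest.Create();
        request.setCoreName(coreName);
        request.setConfigSet("template"); // assumes the configset exists under SOLR_HOME/configsets
        request.process(solrClient);
    }
}

Wired against the same SolrClient bean from your SolrConfig, something like this could be exposed as a Spring service and called whenever a new customer is onboarded.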
I know this is not a straight answer but I hope this helps.
Related
I have read an article about mapping a DTO class to an entity. In the article, it manages to create a very general and extensible way of mapping DTOs using annotations and RequestResponseBodyMethodProcessor. I have been creating a SOAP service using Spring Boot + Apache CXF. The service is still in an early stage, but it will get really big in the following months. The DTO pattern in the article seems to be a good choice for separating what the client sends from what is stored in the database.
I have tried multiple ways of implementing it in the project, but none worked properly. I know RequestResponseBodyMethodProcessor is only used for @RequestBody and @ResponseBody, so I tried putting it in the SOAP service, but Spring seems to ignore it completely. I have also done some searching, and maybe the problem comes from the fact that CXF uses JAXB while Spring uses Jackson. If this is the case, is there any way to make CXF use Jackson? If the problem isn't JAXB and Jackson, is there any other way of implementing the above pattern for a SOAP service?
Just for completeness, the project is Java 1.8, the SOAP service is created using @WebService, and the service is published through a WebServiceConfiguration class. Example of what the code looks like:
@WebService
@Service
public interface MyService {

    @WebMethod
    public void myEndpoint(@WebParam(name = "someClass") SomeClass someClass);

    // other endpoints
}

@Configuration
public class WebServiceConfiguration {

    @Autowired
    private MyServiceImpl myServiceImpl;

    @Bean
    public ServletRegistrationBean<CXFServlet> servletRegistrationBean(ApplicationContext context) {
        return new ServletRegistrationBean<>(new CXFServlet(), "/service/*");
    }

    @Bean(name = Bus.DEFAULT_BUS_ID)
    public SpringBus bus() {
        return new SpringBus();
    }

    @Bean
    public Endpoint myService() {
        Endpoint endpoint = new EndpointImpl(bus(), myServiceImpl);
        endpoint.publish("/myService");
        return endpoint;
    }
}
I am trying to use a shared cache in a clustered Spring Boot app.
It seems that everything is working, but when I try to retrieve cached values from a second instance of the app, it does not get them from the cache.
It seems like every app is working with its own cache instead of sharing it.
I followed the guide found here to set up a simple environment: https://hazelcast.com/blog/spring-boot/
My code:
Controller.java
@Controller
@RequestMapping("/public/testcache")
public class TestCacheController {

    @Autowired
    BookService bookService;

    @GetMapping("/get/{isbn}")
    @ResponseBody
    public String getBookNameByIsbn(@PathVariable("isbn") String isbn) {
        return bookService.getBookNameByIsbn(isbn);
    }

    @GetMapping("/clear/cache")
    @ResponseBody
    public String clearCache() {
        bookService.deleteCache();
        return "done";
    }
}
BookService.java
@Service
public class BookService {

    @Cacheable("books")
    public String getBookNameByIsbn(String isbn) {
        return findBookInSlowSource(isbn);
    }

    private String findBookInSlowSource(String isbn) {
        // some long processing
        try {
            Thread.sleep(3000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        return "Sample Book " + isbn;
    }

    @CacheEvict(value = {"books"}, allEntries = true)
    public void deleteCache() {}
}
# hazelcast.yaml
hazelcast:
  network:
    join:
      multicast:
        enabled: true
When I start the applications I always get the expected output:
Members {size:2, ver:2} [
Member [192.168.178.107]:5702 - d53f2c3f-d66f-4ba3-bf8d-88d4935bde4e
Member [192.168.178.107]:5701 - 69860793-c420-48d3-990c-d0c30a3a92d6 this
]
I tried:
running two Spring Boot apps on different ports
running two Tomcat instances on different ports
replacing the YAML configuration with Java configuration
Java-based configuration
@Configuration
@EnableCaching
public class CacheConfigurator {

    @Bean
    public Config config() {
        Config config = new Config();
        config.getNetworkConfig().getJoin().getMulticastConfig().setEnabled(true);
        return config;
    }
}
Every time I get the same result: every app seems to cache on its own.
Additional information:
I tried to use Management Center (https://hazelcast.com/product-features/management-center/) and I can connect to the cluster members, but I never see any values under "Map".
I am wondering if the Hazelcast instances are launched but not used by Spring Boot, which instead uses its own simple cache.
My application.properties is empty
Spring boot version 2.4.4
Maybe each of your Spring Boot applications actually created two separate Hazelcast instances, and it uses the non-clustered one for caching.
Please try to follow these guides:
Hazelcast Guides: Getting Started with Hazelcast Using Spring Boot
Hazelcast Guides: Caching with Spring Boot and Hazelcast
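As a hedged sketch of one way to rule that out, you could create the HazelcastInstance yourself and bind Spring's cache abstraction to exactly that instance (HazelcastCacheManager comes from the hazelcast-spring module; the multicast settings mirror the configuration in the question):

import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.spring.cache.HazelcastCacheManager;
import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableCaching
public class CacheConfigurator {

    // Single clustered instance, created from the same multicast settings as before.
    @Bean(destroyMethod = "shutdown")
    public HazelcastInstance hazelcastInstance() {
        Config config = new Config();
        config.getNetworkConfig().getJoin().getMulticastConfig().setEnabled(true);
        return Hazelcast.newHazelcastInstance(config);
    }

    // Bind @Cacheable("books") to a clustered IMap instead of a local map.
    @Bean
    public CacheManager cacheManager(HazelcastInstance hazelcastInstance) {
        return new HazelcastCacheManager(hazelcastInstance);
    }
}

If the cached entries then show up under "Map" in Management Center, the original setup was indeed caching locally.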
I am trying to figure out why I have to set my bean name to elasticsearchTemplate. Without it, my application crashes. I have the code below to configure my REST client. The issue is that if I don't add elasticsearchTemplate as the bean name, it fails and says it cannot find elasticsearchTemplate. Any idea why it does this, and also what is the difference between using ElasticsearchOperations and ElasticsearchTemplate?
Using Spring-Data-Elasticsearch Version 3.2
Using Java High-Level Rest Client Version 6.8.0
Works
#Bean("elasticsearchtemplate")
public ElasticsearchOperations elasticsearchTemplate() throws Exception {
return new ElasticsearchTemplate(client());
}
Doesn't Work
public ElasticsearchOperations elasticsearchTemplate() throws Exception {
    return new ElasticsearchTemplate(client());
}
Maybe it is because the startup configuration (application.properties) is missing the configuration related to Elasticsearch.
You need to define some Elasticsearch properties in your application.properties file, such as cluster-nodes and cluster-name, which are used by ElasticsearchTemplate and ElasticsearchRepository to connect to the Elasticsearch engine.
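A minimal sketch of such settings for the transport-client setup (the cluster name and node address are placeholders for your environment):

spring.data.elasticsearch.cluster-name=my-cluster
spring.data.elasticsearch.cluster-nodes=localhost:9300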
You can manually configure the REST client by extending AbstractElasticsearchConfiguration.
@Configuration
public class RestClientConfig extends AbstractElasticsearchConfiguration {

    @Override
    public RestHighLevelClient elasticsearchClient() {
        return RestClients.create(ClientConfiguration.localhost()).rest();
    }
}
What is the difference between using ElasticsearchOperations and ElasticsearchTemplate?
The ElasticsearchTemplate is an implementation of the ElasticsearchOperations interface using the Transport Client.
https://docs.spring.io/spring-data/elasticsearch/docs/3.2.0.RELEASE/reference/html/#elasticsearch.operations.resttemplate
I have a Spring Boot application where I am embedding a Solr server. I seem to be forced right now to name my Solr core "collection1" so that my code actually finds the core to load.
Does anyone know how I can make this arbitrary?
@Configuration
@EnableSolrRepositories(basePackages = ["uk.xxx"], multicoreSupport = true)
class SolrContext {

    static final String SOLR_HOST = 'solr.host'
    static final String SOLR_EMBEDDED_PATH = 'solr.embedded.path'

    @Resource
    Environment environment

    @Bean
    public SolrServer solrServer() {
        EmbeddedSolrServerFactory factory = new EmbeddedSolrServerFactory(environment.getRequiredProperty(SOLR_EMBEDDED_PATH))
        return factory.getSolrServer()
    }

    @Bean
    public SolrOperations solrTemplate() {
        return new SolrTemplate(solrServer())
    }
}
I tried it like you said, but still no good. I named my core 'coursefinder' and passed that in like you said, but it is still looking for collection1:
@Bean
public SolrServer solrServer() {
    println "starting embedded solr index"
    EmbeddedSolrServerFactory factory = new EmbeddedSolrServerFactory(grailsApplication.config.getProperty('solr.embedded.path'))
    return factory.getSolrServer("coursefinder")
}
output
starting embedded solr index
ERROR org.apache.solr.core.CoreContainer - Error creating core [collection1]: Could not load conf for core collection1: Error loading solr config from /Developer/dev/LSE/coursefinder-grails-angular/embeddedsolr/collection1/conf/solrconfig.xml
org.apache.solr.common.SolrException: Could not load conf for core collection1: Error loading solr config from /Developer/dev/LSE/coursefinder-grails-angular/embeddedsolr/collection1/conf/solrconfig.xml
at org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:66)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:489)
at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:255)
at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:249)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Change return factory.getSolrServer() to return factory.getSolrServer("yourcorename")
I tried many of the posted solutions; the only thing that works for Solr 5.x is the following.
Be aware that you need a core.properties file beneath the home folder that contains the same core name inside.
The best way is to build a core with solr create and use that structure as the home folder.
The following is for Spring-based JUnit testing:
@Bean
public EmbeddedSolrServer solrServerFactoryBean() {
    String folder = "src/main/resources/server/solr/";
    CoreContainer container = new CoreContainer(folder);
    container.load();
    return new EmbeddedSolrServer(container, "myName");
}

@Bean
public SolrTemplate solrTemplate(EmbeddedSolrServer server) throws Exception {
    SolrTemplate solrTemplate = new SolrTemplate(server);
    return solrTemplate;
}
Not sure how to do this using Spring Boot, but I wrote an example that helps to load multiple instances of EmbeddedSolrServer for testing purposes.
In fact, EmbeddedSolrServer is widely used in many tests of the Solr codebase.
https://github.com/freedev/EmbeddedSolrServer-junit-example
Let's say there are @Service and @Repository interfaces like the following:
@Repository
public interface OrderDao extends JpaRepository<Order, Integer> {
}

public interface OrderService {
    void saveOrder(Order order);
}

@Service
public class OrderServiceImpl implements OrderService {

    @Autowired
    private OrderDao orderDao;

    @Override
    @Transactional
    public void saveOrder(Order order) {
        orderDao.save(order);
    }
}
This is part of a working application; everything is configured to access a single database and everything works fine.
Now, I would like to have the possibility to create a stand-alone working instance of OrderService with an auto-wired OrderDao using pure Java, with the jdbcUrl specified in Java code, something like this:
final int tenantId = 3578;
final String jdbcUrl = "jdbc:mysql://localhost:3306/database_" + tenantId;
OrderService orderService = someMethodWithSpringMagic(appContext, jdbcUrl);
As you can see I would like to introduce multi-tenant architecture with tenant per database strategy to existing Spring-based application.
Please note that I was able to achieve this quite easily before with self-implemented jdbcTemplate-like logic, with JDBC transactions working correctly as well, so this is a very valid task.
Please also note that I need quite simple transaction logic: start a transaction, do several requests in the service method within the scope of that transaction, and then commit it or roll back on exception.
Most solutions on the web regarding multi-tenancy with Spring propose specifying concrete persistence units in the XML config and/or using annotation-based configuration, which is highly inflexible, because in order to add a new database URL the whole application has to be stopped, the XML config/annotation code changed, and the application started again.
So, basically I'm looking for a piece of code which is able to create the @Service just like Spring creates it internally after properties are read from the XML configs/annotations. I'm also looking into using ProxyBeanFactory for that, because Spring uses AOP to create service instances (so I guess simple good old reusable OOP is not the way to go here).
Is Spring flexible enough to allow this relatively simple case of code reuse?
Any hints will be greatly appreciated and if I find complete answer to this question I'll post it here for future generations :)
Hibernate has out-of-the-box support for multi-tenancy; check that out before building your own. Hibernate requires a MultiTenantConnectionProvider and a CurrentTenantIdentifierResolver, for which there are default implementations out of the box, but you can always write your own. If it is only a schema change it is actually pretty simple to implement (execute a query before returning the connection). Otherwise, hold a map of data sources and get an instance from that, or create a new one.
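As a hedged sketch, a resolver could look like this (the interface is Hibernate's org.hibernate.context.spi.CurrentTenantIdentifierResolver; the class name and the reuse of the ThreadLocal-based MultitenancyContext shown in the answer below are illustrative assumptions):

import org.hibernate.context.spi.CurrentTenantIdentifierResolver;

public class TenantIdentifierResolver implements CurrentTenantIdentifierResolver {

    private static final String DEFAULT_TENANT = "default";

    // Resolves the tenant for the current thread, falling back to a default tenant.
    @Override
    public String resolveCurrentTenantIdentifier() {
        Integer tenantId = MultitenancyContext.getTenantId();
        return tenantId != null ? String.valueOf(tenantId) : DEFAULT_TENANT;
    }

    @Override
    public boolean validateExistingCurrentSessions() {
        // Keep already-open sessions valid even if the tenant changes on this thread.
        return true;
    }
}

Hibernate picks this up, together with a MultiTenantConnectionProvider, via the hibernate.tenant_identifier_resolver and hibernate.multi_tenant_connection_provider settings.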
About 8 years ago we already wrote a generic solution which was documented here and the code is here. It isn't specific for hibernate and could be used with basically anything you need to switch around. We used it for DataSources and also some web related things (theming amongst others).
Creating a transactional proxy for an annotated service is not a difficult task, but I'm not sure that you really need it. To choose a database for a tenantId, I guess you only need to concentrate on the DataSource interface.
For example, with a simple driver-managed DataSource:
public class MultitenancyDriverManagerDataSource extends DriverManagerDataSource {

    @Override
    protected Connection getConnectionFromDriverManager(String url, Properties props) throws SQLException {
        Integer tenant = MultitenancyContext.getTenantId();
        if (tenant != null) {
            url += "_" + tenant;
        }
        return super.getConnectionFromDriverManager(url, props);
    }
}
public class MultitenancyContext {

    private static ThreadLocal<Integer> tenant = new ThreadLocal<Integer>();

    public static Integer getTenantId() {
        return tenant.get();
    }

    public static void setTenantId(Integer value) {
        tenant.set(value);
    }
}
Of course, if you want to use a connection pool, you need to elaborate a bit more, for example by using a connection pool per tenant.
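One hedged way to do that (a sketch, not a drop-in solution) is Spring's AbstractRoutingDataSource, keeping one pooled DataSource per tenant and selecting it through the same ThreadLocal context; the pool type (HikariDataSource) and the addTenant registration method are assumptions for illustration:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.sql.DataSource;
import com.zaxxer.hikari.HikariDataSource;
import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

public class MultitenancyRoutingDataSource extends AbstractRoutingDataSource {

    // One pooled DataSource per tenant, registered at runtime.
    private final Map<Object, Object> tenantDataSources = new ConcurrentHashMap<>();

    public MultitenancyRoutingDataSource(DataSource defaultDataSource) {
        setDefaultTargetDataSource(defaultDataSource);
        setTargetDataSources(tenantDataSources);
    }

    @Override
    protected Object determineCurrentLookupKey() {
        // Same ThreadLocal context as the driver-managed variant above.
        return MultitenancyContext.getTenantId();
    }

    // Registers a new tenant database without restarting the application.
    public void addTenant(Integer tenantId, String jdbcUrl, String user, String password) {
        HikariDataSource ds = new HikariDataSource();
        ds.setJdbcUrl(jdbcUrl);
        ds.setUsername(user);
        ds.setPassword(password);
        tenantDataSources.put(tenantId, ds);
        // Re-resolve the target map so the new entry becomes visible to the router.
        afterPropertiesSet();
    }
}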