I am currently creating a Vaadin app that is supposed to get its content data from a MySQL database on a server (e.g. a server run with XAMPP). The problem is that I am confused by the direction most information sources give me. Every single tutorial has Spring and Spring Boot code, and there is no actual reference to creating a connection to a database in Vaadin. I have read a lot about the matter, but still all that comes up are Spring backends with some Vaadin UI elements. Does this mean that a Vaadin app uses Spring components for connecting to the database, and updates, shows, and edits the data using Vaadin UI forms etc.? I'm really confused right now. So then what is the difference between creating an app in Vaadin versus Spring/Spring Boot if the back end is still created in Spring no matter what?
Vaadin does not make any decisions about how the data is accessed. If you are using Spring Boot, then creating a data source according to their documentation would be a good place to start.
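For example, with Spring Boot a MySQL data source is typically configured in `application.properties`. The URL, schema name, and credentials below are placeholders; adjust them to your XAMPP/MySQL setup:

```properties
# Placeholder values - point these at your own MySQL instance and schema
spring.datasource.url=jdbc:mysql://localhost:3306/mydb
spring.datasource.username=root
spring.datasource.password=
spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver
```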
Now you are set to create entities and repositories. You can then edit and display the entities in your Vaadin application. Some recommend creating separate classes for editing and viewing, while others don't.
Each Vaadin page that you have could, for example, have an injected repository which it uses to load the entities it will then present to the user.
As Mika said, Vaadin does not decide your database connection. I recommend using Vaadin with Hibernate, since you can use data providers and Hibernate Criteria for easy filtering of data.
EDIT: Code example (really just an example).
I recommend you read about Hibernate and DataProviders yourself.
public class EntityDataProvider extends AbstractDataProvider<Entity, Filter> {

    private static final long serialVersionUID = 7331161527158310247L;

    private final SessionFactory sessionFactory;

    public EntityDataProvider() {
        Configuration configuration = new Configuration().configure();
        sessionFactory = configuration.buildSessionFactory();
    }

    @Override
    public boolean isInMemory() {
        // Backend-based provider: data is fetched lazily, not held in memory.
        return false;
    }

    @Override
    public int size(Query<Entity, Filter> query) {
        try (Session session = sessionFactory.openSession()) {
            Criteria criteria = session.createCriteria(Entity.class);
            Filter filter = query.getFilter().orElse(null);
            // apply filters to Criteria
            return criteria.list().size();
        }
    }

    @Override
    public Stream<Entity> fetch(Query<Entity, Filter> query) {
        try (Session session = sessionFactory.openSession()) {
            Criteria criteria = session.createCriteria(Entity.class);
            Filter filter = query.getFilter().orElse(null);
            // apply filters to Criteria
            return criteria.list().stream();
        }
    }
}
I am using the @PostConstruct annotation on application start to query the entire result list from the DB, and I am storing it as a static global variable. I then parse this result list to get the responses I need, as shown below:
private static List<Object[]> allObjects;

@PostConstruct
public void test() {
    System.out.println("Calling Method");
    Query q = entityManager.createNativeQuery(query);
    List<Object[]> resultList = (List<Object[]>) q.getResultList();
    allObjects = resultList;
}
However, I would like to use Ehcache to store the result list so I can refresh the cache at any time or remove items from it. Is it possible to store a result list (without a key) in the cache instead of storing it as a global variable?
If you are working with Spring Boot, then using the Spring cache abstraction is the most natural and recommended way for any caching need (including with Ehcache). It will also solve the problem you are trying to solve. Please set up the EhCacheCacheManager as outlined in the Spring Boot Ehcache Example article. After this setup, separate the DB-loading routine into a new bean and make it cache-enabled. To pre-load the cache on startup, you can invoke this bean's method from any other bean's @PostConstruct. The following outline will give you a fully working solution.
@Component
public class DbListProvider {

    @PersistenceContext
    private EntityManager entityManager;

    @Cacheable("myDbList")
    public List<Object[]> getDbList() {
        System.out.println("Calling Method");
        Query q = entityManager.createNativeQuery(query);
        List<Object[]> resultList = (List<Object[]>) q.getResultList();
        return resultList;
    }
}
// In your existing post-construct method, just call this method to pre-load
// these objects on startup. Please note that you CANNOT add this @PostConstruct
// method to DbListProvider itself: a self-invocation bypasses the proxy that
// implements the Spring Boot cache abstraction, so the cache would never be populated.
@Component // must be a Spring bean so @PostConstruct fires
public class MyInitializer {

    @Autowired
    private DbListProvider listProvider;

    @PostConstruct
    private void init() {
        // load on startup
        listProvider.getDbList();
    }
}
You can further inject the DbListProvider bean anywhere in the code base, which gives you additional flexibility (should you need it).
You can use @CachePut and @CacheEvict as per your eviction policies without having to worry about Ehcache behind the scenes. I would further recommend understanding all the options available in Spring cache and using them appropriately for your future needs. The following should help:
A Guide To Caching in Spring
Spring Cache Abstraction
Hope this helps!!
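If you just want a feel for the refresh/evict semantics without the framework, the same idea can be sketched in plain Java with an `AtomicReference` holding the cached list. This is a minimal illustration, not a replacement for the Spring cache abstraction; the loader `Supplier` is a stand-in for the native DB query:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

// Minimal cache holder: lazily loads once, supports explicit refresh and eviction.
public class CachedList<T> {

    private final Supplier<List<T>> loader;                 // stand-in for the DB query
    private final AtomicReference<List<T>> cache = new AtomicReference<>();

    public CachedList(Supplier<List<T>> loader) {
        this.loader = loader;
    }

    public List<T> get() {
        // Load on first access; subsequent calls return the cached list.
        return cache.updateAndGet(cur -> cur != null ? cur : loader.get());
    }

    public void refresh() {
        cache.set(loader.get());                            // analogous to @CachePut
    }

    public void evict() {
        cache.set(null);                                    // analogous to @CacheEvict
    }
}
```

The Spring annotations give you the same operations declaratively, plus multi-cache and key management, which is why the abstraction is still the recommended route.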
I use spring-boot-starter-data-solr and would like to make use of the schema creation support of Spring Data Solr, as stated in the documentation:
Automatic schema population will inspect your domain types whenever the application context is refreshed and populate new fields to your index based on the properties configuration. This requires Solr to run in Schemaless Mode.
However, I am not able to achieve this. As far as I can see, the Spring Boot starter does not enable the schemaCreationSupport flag on the @EnableSolrRepositories annotation. So what I tried is the following:
@SpringBootApplication
@EnableSolrRepositories(schemaCreationSupport = true)
public class MyApplication {

    @Bean
    public SolrOperations solrTemplate(SolrClient solr) {
        return new SolrTemplate(solr);
    }
}
But looking in Wireshark I cannot see any calls to the Solr Schema API when saving new entities through the repository.
Is this intended to work, or what am I missing? I am using Solr 6.2.0 with Spring Boot 1.4.1.
I've run into the same problem. After some debugging, I've found the root cause why the schema creation (or update) is not happening at all:
By using the @EnableSolrRepositories annotation, a Spring extension will add a factory bean to the context that creates the SolrTemplate used by the repositories. This template initialises a SolrPersistentEntitySchemaCreator, which should perform the creation/update.
public void afterPropertiesSet() {
    if (this.mappingContext == null) {
        this.mappingContext = new SimpleSolrMappingContext(
                new SolrPersistentEntitySchemaCreator(this.solrClientFactory)
                        .enable(this.schemaCreationFeatures));
    }
    // ...
}
The problem is that the flag schemaCreationFeatures (which enables the creator) is set after the factory calls afterPropertiesSet(), so it is impossible for the creator to do its work.
I'll create an issue in the spring-data-solr issue tracker. I don't see any workaround right now, other than having a custom fork/build of spring-data, or extending a bunch of Spring classes and trying to get the flag set earlier (but I doubt this can be done).
I am working in a multi-tenant environment where data can be accessed from about 10 different data sources (and entity managers) with a web application (REST) frontend.
The entity manager to be used depends on a URL parameter in the REST API, e.g. api/orders/1/1000003.
Here I need to use entity manager "1" to fetch the data. At the moment I am using a method in the repository layer where I call setDistrict(1) before creating a Hibernate session and creating a query via Hibernate Criteria.
All is working fine, but I am worried that the method will need to be synchronized to avoid getting data from the wrong entity manager.
If I synchronize the repository method, I am worried that performance will be horrible.
What is a good strategy for implementing this multi-tenant access so that performance is good and the correct data is returned even under heavy load?
Thanks for your advice.
Hibernate's SessionFactory allows you to choose a multi-tenancy strategy:
SCHEMA: Correlates to the separate schema approach. It is an error to attempt to open a session without a tenant identifier using this strategy. Additionally, a org.hibernate.service.jdbc.connections.spi.MultiTenantConnectionProvider must be specified.
DATABASE: Correlates to the separate database approach. It is an error to attempt to open a session without a tenant identifier using this strategy. Additionally, a org.hibernate.service.jdbc.connections.spi.MultiTenantConnectionProvider must be specified.
DISCRIMINATOR: Correlates to the partitioned (discriminator) approach. It is an error to attempt to open a session without a tenant identifier using this strategy. This strategy is not yet implemented in Hibernate as of 4.0 and 4.1. Its support is planned for 5.0.
In your case I think you need SCHEMA or DATABASE and have to implement the MultiTenantConnectionProvider (source).
/**
 * Simplistic implementation for illustration purposes, supporting 2 hard-coded providers (pools) and leveraging
 * the support class {@link org.hibernate.service.jdbc.connections.spi.AbstractMultiTenantConnectionProvider}
 */
public class MultiTenantConnectionProviderImpl extends AbstractMultiTenantConnectionProvider {

    private final ConnectionProvider acmeProvider = ConnectionProviderUtils.buildConnectionProvider( "acme" );
    private final ConnectionProvider jbossProvider = ConnectionProviderUtils.buildConnectionProvider( "jboss" );

    @Override
    protected ConnectionProvider getAnyConnectionProvider() {
        return acmeProvider;
    }

    @Override
    protected ConnectionProvider selectConnectionProvider(String tenantIdentifier) {
        if ( "acme".equals( tenantIdentifier ) ) {
            return acmeProvider;
        }
        else if ( "jboss".equals( tenantIdentifier ) ) {
            return jbossProvider;
        }
        throw new HibernateException( "Unknown tenant identifier" );
    }
}
For more details see the linked documentation.
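Since your tenant comes from a URL like api/orders/1/1000003, you will also need a small piece of code that extracts the tenant identifier from the request path; Hibernate's CurrentTenantIdentifierResolver would then return that value per request. A minimal sketch of the extraction, assuming the tenant is the first path segment after the resource name (the class name is illustrative):

```java
// Extracts the tenant identifier from a REST path such as "api/orders/1/1000003",
// where the segment after the resource name ("orders") is the tenant id.
public class TenantPathParser {

    public static String tenantFrom(String path) {
        String[] segments = path.split("/");
        // segments: ["api", "orders", "1", "1000003"] -> tenant is index 2
        if (segments.length < 3) {
            throw new IllegalArgumentException("No tenant in path: " + path);
        }
        return segments[2];
    }
}
```

In a real application this lookup would live in a request filter or interceptor that stores the tenant id for the current request before Hibernate opens a session.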
Let's say there are @Service and @Repository interfaces like the following:
@Repository
public interface OrderDao extends JpaRepository<Order, Integer> {
}

public interface OrderService {
    void saveOrder(Order order);
}

@Service
public class OrderServiceImpl implements OrderService {

    @Autowired
    private OrderDao orderDao;

    @Override
    @Transactional
    public void saveOrder(Order order) {
        orderDao.save(order);
    }
}
This is part of working application, everything is configured to access single database and everything works fine.
Now, I would like to have the possibility to create a stand-alone working instance of OrderService with an auto-wired OrderDao using pure Java, with the jdbcUrl specified in Java code, something like this:
final int tenantId = 3578;
final String jdbcUrl = "jdbc:mysql://localhost:3306/database_" + tenantId;
OrderService orderService = someMethodWithSpringMagic(appContext, jdbcUrl);
As you can see, I would like to introduce a multi-tenant architecture with a tenant-per-database strategy into an existing Spring-based application.
Please note that I was able to achieve this quite easily before with self-implemented jdbcTemplate-like logic, also with JDBC transactions working correctly, so this is a very valid task.
Please also note that I need quite simple transaction logic: start a transaction, do several requests in the service method in the scope of that transaction, and then commit it, or roll back on exception.
Most solutions on the web regarding multi-tenancy with Spring propose specifying concrete persistence units in the XML config and/or using annotation-based configuration, which is highly inflexible: in order to add a new database URL, the whole application has to be stopped, the XML config/annotation code changed, and the application started again.
So, basically I'm looking for a piece of code which is able to create a @Service just like Spring creates it internally after properties are read from XML configs/annotations. I'm also looking into using ProxyBeanFactory for that, because Spring uses AOP to create service instances (so I guess simple good old reusable OOP is not the way to go here).
Is Spring flexible enough to allow this relatively simple case of code reuse?
Any hints will be greatly appreciated and if I find complete answer to this question I'll post it here for future generations :)
Hibernate has out-of-the-box support for multi-tenancy; check that out before rolling your own. Hibernate requires a MultiTenantConnectionProvider and a CurrentTenantIdentifierResolver, for which there are default implementations out of the box, but you can always write your own. If it is only a schema change, it is actually pretty simple to implement (execute a query before returning the connection). Otherwise, hold a map of data sources and get an instance from that, or create a new instance.
About 8 years ago we already wrote a generic solution, which is documented here, and the code is here. It isn't specific to Hibernate and can be used with basically anything you need to switch around. We used it for DataSources and also for some web-related things (theming, amongst others).
Creating a transactional proxy for an annotated service is not a difficult task, but I'm not sure that you really need it. To choose a database for a tenantId, I guess you only need to concentrate on the DataSource interface.
For example, with a simple driver managed datasource:
public class MultitenancyDriverManagerDataSource extends DriverManagerDataSource {

    @Override
    protected Connection getConnectionFromDriverManager(String url,
            Properties props) throws SQLException {
        Integer tenant = MultitenancyContext.getTenantId();
        if (tenant != null)
            url += "_" + tenant;
        return super.getConnectionFromDriverManager(url, props);
    }
}

public class MultitenancyContext {

    private static ThreadLocal<Integer> tenant = new ThreadLocal<Integer>();

    public static Integer getTenantId() {
        return tenant.get();
    }

    public static void setTenantId(Integer value) {
        tenant.set(value);
    }
}
Of course, if you want to use a connection pool, you will need to elaborate on this a bit, for example by using a connection pool per tenant.
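A pool-per-tenant setup can be sketched with a lazily populated map keyed by tenant id. This is plain Java with a factory function standing in for the real pool constructor (e.g. building a HikariDataSource from the per-tenant JDBC URL), so the names here are illustrative:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Lazily creates and caches one "pool" per tenant. PoolT is a stand-in for a
// real connection pool type such as HikariDataSource.
public class TenantPools<PoolT> {

    private final Map<Integer, PoolT> pools = new ConcurrentHashMap<>();
    private final Function<Integer, PoolT> poolFactory;   // builds a pool for a tenant id

    public TenantPools(Function<Integer, PoolT> poolFactory) {
        this.poolFactory = poolFactory;
    }

    public PoolT forTenant(int tenantId) {
        // computeIfAbsent is atomic per key, so concurrent requests for the same
        // tenant create the pool only once - no synchronized method needed.
        return pools.computeIfAbsent(tenantId, poolFactory);
    }
}
```

Because ConcurrentHashMap handles the locking internally, this avoids the coarse synchronization the question worries about while still guaranteeing each tenant gets a single, correct pool.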
I have a pure Java project, which I develop in Eclipse with Maven. It has persistence capability using JPA with EclipseLink, saving the data into Apache Derby. The project works perfectly in unit tests and in standalone Java applications, in which I instantiate the EntityManagerFactory directly from my code:
public class JPAUtil
{
    private static EntityManagerFactory factory = Persistence.createEntityManagerFactory("unit-name");
    private static Map<Long, EntityManager> ems = new HashMap<Long, EntityManager>();

    private JPAUtil(){}

    /**
     * Get an entity manager
     */
    public static EntityManager em(Long id)
    {
        EntityManager result = null;
        if (ems.containsKey(id))
        {
            result = ems.get(id);
            if (!result.isOpen())
            {
                result = createEntityManager();
                ems.put(id, result);
            }
        }
        else
        {
            result = createEntityManager();
            ems.put(id, result);
        }
        return result;
    }

    private static EntityManager createEntityManager()
    {
        EntityManager result =
                // factory.createEntityManager(SynchronizationType.SYNCHRONIZED);
                factory.createEntityManager();
        return result;
    }
}
Now, when I add it into a GWT project, I am hitting some problems that are very difficult to debug/solve.
Problem 1:
If I use the above JPAUtil class to instantiate EntityManagers for each RPC request, it works. However, once the GWT client side started making multiple requests to the server side, which in turn tried to pull data from the JPA layer, multiple cryptic ConcurrencyExceptions occurred on read (with or without lazy loading; it seems to make no difference).
When, instead of using the above class, I try to "inject" the EntityManager into the GWT ServiceImpls (servlets) using the following lines, accessing the data layer crashes with a NullPointerException:
@PersistenceContext(unitName = "unit-name")
transient protected EntityManager em;
I was obviously thinking that this would be a more appropriate way of accessing the persistence layer from GWT. However, I get NullPointerExceptions when accessing the EntityManager, ergo the GWT development Jetty server cannot inject the EntityManager by itself. My skills with this kind of problem appear to be limited, and my Google-fu seems helpless as well. So, to formulate a concrete question:
How would it be best to approach the problem of creating a fast, stable GWT application with JPA in the backend?
Thank you in advance,
el.nicko
You need to synchronize your access to the HashMap, as many RPC requests can be handled in parallel by multiple threads. I suggest you replace the HashMap with a ConcurrentHashMap or put synchronized on the em method.
@Inject will likely not work, as the GWT servlet is not CDI-aware.
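The ConcurrentHashMap variant of the em(Long) lookup can be sketched like this. It is plain Java, with a Supplier standing in for factory.createEntityManager() and a minimal Handle interface standing in for EntityManager's isOpen() check, so the names are illustrative:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Thread-safe variant of the em(Long) lookup: one entry per key, replaced when
// the cached instance is no longer open. "Handle" stands in for EntityManager.
public class HandleCache {

    public interface Handle {
        boolean isOpen();
    }

    private final Map<Long, Handle> handles = new ConcurrentHashMap<>();
    private final Supplier<Handle> factory;   // stand-in for factory.createEntityManager()

    public HandleCache(Supplier<Handle> factory) {
        this.factory = factory;
    }

    public Handle get(Long id) {
        // compute() runs atomically per key, so two parallel RPC requests for the
        // same id cannot race and overwrite each other's instance.
        return handles.compute(id, (k, cur) ->
                (cur != null && cur.isOpen()) ? cur : factory.get());
    }
}
```

This keeps the original check-then-replace logic of JPAUtil.em(Long) but makes it atomic per key, without serializing unrelated requests behind one lock.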