Sorry for the long backstory, but I wanted to give a good idea of why we're doing what we're doing.
Our application currently uses Hibernate 3.6 and we wish to upgrade to Hibernate 4.3.
The application was specifically written to avoid using persistence.xml to configure JPA and create the EntityManagerFactory; instead it uses Hibernate's Ejb3Configuration class like this (example):
Properties properties = new Properties();
properties.put("javax.persistence.provider", "org.hibernate.ejb.HibernatePersistence");
properties.put("javax.persistence.transactionType", "RESOURCE_LOCAL");
properties.put("hibernate.dialect", "org.hibernate.dialect.Oracle10gDialect");
properties.put("hibernate.show_sql", "false");
properties.put("hibernate.format_sql", "true");
Ejb3Configuration cfg = new Ejb3Configuration();
cfg.addProperties(properties);
DataSource dataSource = dataSourceProvider.get();
cfg.setDataSource(dataSource);
//add the annotated classes
cfg.addAnnotatedClass(SomePersistentObject.class);
EntityManagerFactory factory = cfg.buildEntityManagerFactory();
The reason we do it this way is because we have a web app (war file) deployed to Tomcat that provides "core" functionality. Then, we install what we call "client bundles" which are jar files in the exploded /WEB-INF/lib directory. The "client bundles" contain overrides to the existing "core" behavior of the web app. This allows us to service multiple clients, each with various customizations from the "core" behavior, in one instance of the web app. We know which client bundle to use based on the domain or subdomain of the incoming HTTP request.
Each client bundle always gets its own database instance, and thus each client bundle defines its own EntityManagerFactory. The schemas are almost identical, although client bundles can add new persistent classes if needed.
So, the reason we do JPA configuration in Java is so that each client bundle can extend the "core" classes and add its own entity classes. Java is great for inheritance while XML stinks. If we had to configure via XML, then each client bundle would need to copy the core's persistence.xml and update it from there. I would much rather use inheritance than copy/paste.
I think we have a pretty valid use case for preferring JPA configuration via Java rather than XML.
My question: Does Hibernate 4.3 allow this in any way? If so, how can I go about it?
If not, does anybody have any suggestions on how to make my above scenario as easy as possible while being stuck with XML configuration?
Can multiple jar files within a single web app contain /META-INF/persistence.xml files, or do multiple persistence units need to be defined another way?
Thank you!!!
-Ryan
I overcame the problem by dynamically writing a new persistence.xml file to the web app's classpath, before JPA is bootstrapped.
When the web app starts up, the JPA configuration for all the client bundles is read, and then a single persistence.xml file is written to the classpath. Each client bundle gets its own entry as a persistence-unit within persistence.xml.
Then, after the new persistence.xml is written, JPA is bootstrapped. JPA doesn't know or care, obviously, that the persistence.xml file was written dynamically.
It seems a little like a hack but I couldn't figure out any other way to do it. One nice benefit is that it keeps me away from Hibernate specific APIs, so if I ever want to switch to something like DataNucleus as the JPA provider I will have the flexibility to do so.
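For anyone wanting to do the same, a minimal sketch of that generation step could look like the following (the class, method, and path handling here are illustrative, not my actual code):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class PersistenceXmlWriter {

    // Builds a persistence.xml with one persistence-unit per client bundle.
    static String buildPersistenceXml(List<String> unitNames) {
        StringBuilder xml = new StringBuilder();
        xml.append("<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n");
        xml.append("<persistence xmlns=\"http://java.sun.com/xml/ns/persistence\" version=\"2.0\">\n");
        for (String unit : unitNames) {
            xml.append("  <persistence-unit name=\"").append(unit)
               .append("\" transaction-type=\"RESOURCE_LOCAL\"/>\n");
        }
        xml.append("</persistence>\n");
        return xml.toString();
    }

    // Writes META-INF/persistence.xml under the given classpath root;
    // this must run before the JPA provider is bootstrapped.
    static void writeToClasspath(Path classpathRoot, List<String> unitNames) throws IOException {
        Path metaInf = classpathRoot.resolve("META-INF");
        Files.createDirectories(metaInf);
        Files.write(metaInf.resolve("persistence.xml"),
                buildPersistenceXml(unitNames).getBytes(StandardCharsets.UTF_8));
    }
}
```

In practice each persistence-unit entry would also carry its class list and properties, but the principle is the same: generate the file, then call `Persistence.createEntityManagerFactory` as usual.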
Specification
Each tenant has their own database which handles users in greater detail, and there needs to exist a central database which handles:
Tokens (OAuth2)
Users (limited level of detail)
Mapping users to their database
Problem
I've found solutions for multi-tenancy that allow me to determine the data source depending on the user. However, I'm not sure how I can also link certain CRUD repositories to this central data source, and others to variable data sources.
Another solution involved updating the properties file and using a configuration server (i.e. via Git) to trigger @RefreshScope-annotated configs, though I'm not sure if this can work for data sources, or if it could cause problems later on.
Extra Context
I'm using Spring Boot and Hibernate heavily in this project.
This blog gives a very good tutorial on how to do it.
After a lot of research, it looks like Hibernate just isn't built for doing that, but by manually writing the schema myself I can inject it into new tenant databases using native queries.
I also had a problem with MS SQL Server databases, as they don't allow simply appending ;createDatabaseIfNotExist to the JDBC URL, which meant even more native queries. (I'm moving the project over to MySQL anyway, so this is no longer a problem.)
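For reference, the kind of native provisioning statement I mean can be built like this (the class and method names are hypothetical, and the DDL assumes MySQL syntax):

```java
public class TenantProvisioner {

    // Hypothetical helper: builds the provisioning DDL for a new tenant database.
    // Identifiers cannot be bound as JDBC parameters, so the tenant name is
    // validated against a whitelist pattern before being concatenated.
    static String createTenantDatabaseSql(String tenant) {
        if (!tenant.matches("[A-Za-z0-9_]+")) {
            throw new IllegalArgumentException("invalid tenant name: " + tenant);
        }
        // MySQL syntax; IF NOT EXISTS makes provisioning idempotent
        return "CREATE DATABASE IF NOT EXISTS tenant_" + tenant;
    }
}
```

The resulting statement can then be executed against a maintenance connection, e.g. via `entityManager.createNativeQuery(sql).executeUpdate()` or plain JDBC.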
I'm using JPA quite often in command-line Java applications. With an application server I can easily link to an external configuration via <jta-data-source>jdbc/myDatabase</jta-data-source> in the persistence.xml. How is that possible without an application server? I could find some information about the <non-jta-data-source/> attribute, but how can I reference the values from an external file (probably in the properties format) in an elegant way? It would be nice to have as little boilerplate code as possible.
I've found an approach to this here, but I think there is a more elegant way:
JPA Desktop application
Now I'm working with this solution:
I need a properties file which looks like this:
javax.persistence.jdbc.url = jdbc:mysql://localhost:3306/database
javax.persistence.jdbc.user = root
javax.persistence.jdbc.password = root
javax.persistence.jdbc.driver = com.mysql.jdbc.Driver
Following this naming scheme allows me to use the values without any mapping later. I can then pass them directly when creating the EntityManagerFactory, like this:
try (final InputStream jpaFileInput = Files.newInputStream(propFile)) {
    final Properties properties = new Properties();
    properties.load(jpaFileInput);
    emf = Persistence.createEntityManagerFactory(PU_NAME, properties);
}
It is easy to use JPA in desktop applications. It's almost the same thing, but you will need to manage transactions yourself: with no application server, each and every transaction has to be handled by you. Access to your persistence unit is achieved through an EntityManagerFactory.
Example:
EntityManagerFactory emFactory = Persistence.createEntityManagerFactory("jpa-example");
EntityManager em = emFactory.createEntityManager(); // note: createEntityManager(), there is no getEntityManager()
try {
    em.getTransaction().begin();
    em.persist(address);
    em.getTransaction().commit();
} finally {
    em.close();
}
You need to put your persistence.xml file under META-INF folder. You need to point out in your persistence config file that transaction type is RESOURCE_LOCAL. This is needed for running independently with no Application Server:
<persistence-unit name="jpa-example" transaction-type="RESOURCE_LOCAL">
You will, however, need to download the libraries and link them in your project classpath: the JTA JAR and your persistence provider JARs, which could come from Hibernate or any other vendor of your choosing. This can be achieved cleanly with Maven.
You may want to check this tutorial:
http://java.dzone.com/articles/jpa-tutorial-setting-jpa-java
I have a Swing project using Spring for DI and now I am trying to migrate to Eclipse 4 and OSGi.
Using the configuration files of Spring the user could comment/uncomment beans in order to add/remove functionality (offered by these back-end beans).
Now in Eclipse and OSGi I am looking for the best way to do this based on OSGi.
I know that I can make the beans as services and define start levels in OSGi but this does not solve my use case, which is:
The application starts without these beans/modules running and if the user updates the configuration from the running UI these beans/modules start and they are also started on the next start-up of the application.
Is there a nice/clean approach for my problem?
You probably want to use Eclipse Gemini Blueprint to do the management of how everything is integrated between Spring and OSGi (Gemini Blueprint is the successor to Spring Dynamic Modules). In particular, it can handle virtually all the complexity relating to dynamic service registration for you; your beans can remain virtually identical.
Another approach would be to use Declarative Services together with Configuration Admin to let configuration data determine which services to activate. In more detail here.
Like you already found out, services are a good approach to this. Simply install all your modules but do not start them. Then your UI can start and stop modules as the user selects the functionality he wants. The OSGi framework then remembers the installed and started modules on a restart.
The absolute best approach for this is Declarative Services (DS). DS is integrated with OSGi's Configuration Admin, making it trivial to control the number of service instances as well as their configuration and service properties. For example, the following component (with the bnd annotations [which will resemble similar functionality in the OSGi specs soon]):
@Component(designateFactory = Config.class)
public class MyComp implements MyService {
    interface Config {
        int port();
        String host();
    }

    Config config;

    @Activate
    void activate(Map<String, Object> map) {
        config = Configurable.createConfigurable(Config.class, map);
        start();
    }

    void foo() { ... do the MyService stuff ... }

    @Reference
    void setDataSource(DataSource ds) { ... }
}
This component requires a Configuration Admin factory configuration. The best way to see how powerful this is, is to set up a framework with the Apache Felix Web Console. The designateFactory=Config.class tells bnd to create a metatype XML file in the bundle. This is used by the Web Console to create a pretty nice-looking form for the configuration data, derived from the interface and its methods. This form is type aware, i.e. you cannot enter a non-numeric value for the port number. Through the Web Console you can now instantiate multiple components by creating multiple factory configurations. Deleting these factory configurations removes the corresponding services. In your application, you can manipulate Configuration Admin yourself under the control of the user.
Another advantage is that through Configuration Admin you can control the binding of the component's dependencies. In the aforementioned example, you can set the dataSource.target property to a filter like (db=accounting) to select the accounting database. All configuration properties are added as service properties so you can easily set the 'db' service property on the configuration that creates the Data Source (if it was implemented this way).
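For example, assuming the Felix File Install conventions and a hypothetical factory PID com.example.mycomp, a factory configuration file such as com.example.mycomp-accounting.cfg might contain:

```
port = 8080
host = localhost
dataSource.target = (db=accounting)
```

The dataSource.target property becomes the target filter for the component's DataSource reference, so this instance binds only to the DataSource service registered with the property db=accounting.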
This is one of the least understood advantages of DS, and I can tell you it is HUGE. To get started with this, just create a DS project in bndtools, then a new Run Descriptor, and select the Webconsole template.
Another advantage of DS is that it is small and does not try to hide the dynamics, which in Blueprint can be painful.
I have used JPA with Hibernate in a standalone application, but now I want to try it with an application server. I know GlassFish provides the EclipseLink implementation of JPA, but I have a few questions.
Do I need to specify in persistence.xml EclipseLink as a provider for my persistence-unit?
Does persistence.xml look the same as if the application were not deployed to a server? If it does not look the same, how does it look?
Do I need to specifically download the implementation jars for EclipseLink and build with them or does the container handles this after my application is deployed?
How do I specify the jdbc driver in persistence.xml?
Does my application need to be deployed as a .ear?
You don't need to specify the persistence provider, by default the one contained in your application server will be used (if it has at least the Web profile, of course, otherwise servers such as Tomcat won't provide you EclipseLink).
Yes, it will look the same (in both cases you are just using JPA in the same way).
For your code to compile, you will only need to have persistence-api.jar in your classpath (if you use Maven, set the scope to "provided"). Then the server will automatically provide its implementation jars.
You could use a persistence unit like the one described on this page ("typical configuration in a Java SE environment"). But I would rather suggest you use a <jta-data-source> instead, referring to a data source provided by GlassFish.
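Assuming you have created a JDBC connection pool and a JDBC resource named jdbc/myDatabase in the GlassFish admin console (the names here are illustrative), the persistence unit can then be as simple as:

```xml
<persistence-unit name="my-unit" transaction-type="JTA">
  <jta-data-source>jdbc/myDatabase</jta-data-source>
</persistence-unit>
```

No driver class, URL, or credentials appear in persistence.xml; all of that lives in the server's data source definition.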
As far as I can tell, it can also be a WAR file, I didn't have any problem deploying it (webapp as a Maven WAR module + beans in a JAR module).
I am developing an OSGi program composed of several bundles that I run sometimes on my local Windows dev computer and sometimes on a classic Linux machine.
Currently, several bundles dedicated to resource connections have their own configuration file (a properties file) containing information such as the paths to access some important files (present on both environments).
However, since the paths are different on the two execution environments, I have to manually change the configuration before compilation, depending on which environment I am going to run my program in.
Is there a way for bundles to refer to an external configuration file? A solution could be to create a fragment for each environment that I generate only once, but I won't be able to change the configuration file easily since it will be in the jar of the fragment.
Are there some "best practices" that I should know about to solve my "simple" problem?
Take a look at OSGi's ConfigurationAdmin [1],[2] - this will suit your needs exactly (and is yet another example of OSGi's elegance).
Basically you'll implement a ManagedService or ManagedServiceFactory, and the ConfigurationAdmin service takes care of the rest.
The default setup for the Felix implementation, used in concert with File Install (see Angelo's comment), will scan a directory of configuration files (the filename is the service PID, with a .cfg suffix). But ConfigurationAdmin is pluggable, so the backend for the configuration could be a database, etc.
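For example, with Felix File Install watching a load directory, a bundle whose ManagedService is registered under the (hypothetical) PID com.example.resources would pick up a file named com.example.resources.cfg containing something like:

```
importantFiles.path = /var/data/important
```

Editing the file updates the configuration at runtime; no recompilation is needed, and each environment keeps its own copy outside the bundle JAR.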
The great thing about externalizing your config in this way is that you can keep it with the app/environment - so your bundles become agnostic of their environment.
Expanding on @earcam's excellent suggestion, I recommend binding your configuration via Declarative Services and Metatype. It makes it REALLY easy, particularly with the Felix annotations. Below is a simplified example of a service that uses JAAS for authentication, with a configurable JAAS realm name. The ConfigurationPolicy.OPTIONAL is the awesome part: if you set it to REQUIRE instead, the service will not be registered until it is configured.
@Component(
    name = "com.example.authprovider",
    label = "Example authentication interceptor",
    description = "Blocks unauthenticated access to REST endpoints",
    specVersion = "1.1",
    metatype = true,
    policy = ConfigurationPolicy.OPTIONAL
)
@Service
@References({
    ...
})
@Properties({
    @Property(name = "jaasRealm", value = "default", label = "JAAS Realm",
        description = "the JAAS realm to use to find LoginModules to authenticate this login"),
    ...
})
public class Foo implements ... {
...
}
If you take this approach and use a Metatype-friendly container like Apache Karaf, then you'll get an auto-generated configuration UI for free in the admin web console.