What is the best way to change a database table schema at run time in the Play! Framework? I get an unspecified number of columns from a client application and cannot have domain objects.
Another option may be to use a schema-less datastore such as MongoDB. There seems to be a mongo module for Play: http://www.playframework.org/modules/mongo.
Why not use Apache DDLUtils?
It isn't plugged into Play! by default, but I use it in other projects and it's quite useful.
http://db.apache.org/ddlutils/
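For illustration, a minimal sketch of adding a column at runtime with DDLUtils (the DataSource and all table/column names here are assumptions, not from the question):

import javax.sql.DataSource;
import org.apache.ddlutils.Platform;
import org.apache.ddlutils.PlatformFactory;
import org.apache.ddlutils.model.Column;
import org.apache.ddlutils.model.Database;
import org.apache.ddlutils.model.Table;

public class RuntimeSchemaChanger {

    // Reads the live schema, adds a VARCHAR column, and issues the ALTER TABLE statements.
    public void addColumn(DataSource dataSource, String tableName, String columnName) {
        Platform platform = PlatformFactory.createNewPlatformInstance(dataSource);
        Database model = platform.readModelFromDatabase("model");
        Table table = model.findTable(tableName);
        Column column = new Column();
        column.setName(columnName);
        column.setType("VARCHAR");
        column.setSize("255");
        table.addColumn(column);
        platform.alterTables(model, false); // false: stop on errors instead of continuing
    }
}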
I think this is exactly what you were looking for; have a look at this Play snippet:
http://www.playframework.org/community/snippets/13
public class Application extends Controller {

    @Before
    public static void setDBSchema() {
        String schema = "someSchema"; // This can come from a Cache, for example
        Play.configuration.setProperty("hibernate.default_schema", schema);
        JPA.entityManagerFactory = null;
        JPAPlugin plugin = new JPAPlugin();
        plugin.onApplicationStart();
    }
    ...
You just change the configured Hibernate schema and then force the JPAPlugin to restart.
We're in the process of converting our Java application from Hibernate Search 5 to 6 with an Elasticsearch backend.
For some good background info, see How to do highlighting within HibernateSearch over Elasticsearch for a question we had when upgrading our highlighting code from a Lucene to Elasticsearch backend and how it was resolved.
Hibernate Search 6 seems to support using 2 backends at the same time, Lucene and Elasticsearch, so we'd like to use Elasticsearch for all our queries and Lucene for the highlighting, if that's possible.
Here is basically what we're trying to do:
public boolean matchPhoneNumbers() {
    String phoneNumber1 = "603-436-1234";
    String phoneNumber2 = "603-436-1234";

    LuceneBackend luceneBackend =
            Search.mapping(entityManager.getEntityManagerFactory())
                  .backend().unwrap(LuceneBackend.class);
    Analyzer analyzer = luceneBackend.analyzer("phoneNumberKeywordAnalyzer").get();

    // ... builds a Lucene Query using the analyzer and the phoneNumber1 term
    Query phoneNumberQuery = buildQuery(analyzer, phoneNumber1, ...);

    return isMatch("phoneNumberField", phoneNumber2, phoneNumberQuery, analyzer);
}
private boolean isMatch(String field, String target, Query sourceQ, Analyzer analyzer) {
    Highlighter highlighter = new Highlighter(new QueryScorer(sourceQ, field));
    highlighter.setTextFragmenter(new NullFragmenter());
    try {
        String result = highlighter.getBestFragment(analyzer, field, target);
        return StringUtils.hasText(result);
    } catch (IOException e) {
        ...
    }
}
What I've attempted so far is to configure two separate backends in the configuration properties, per the documentation, like this:
properties.setProperty("hibernate.search.backends.elasticsearch.analysis.configurer", "com.bt.demo.search.AnalysisConfigurer");
properties.setProperty("hibernate.search.backends.lucene.analysis.configurer", "com.bt.demo.search.CustomLuceneAnalysisConfigurer");
properties.setProperty("hibernate.search.backends.elasticsearch.type", "elasticsearch");
properties.setProperty("hibernate.search.backends.lucene.type", "lucene");
properties.setProperty("hibernate.search.backends.elasticsearch.uris", "http://127.0.0.1:9200");
The AnalysisConfigurer class implements ElasticsearchAnalysisConfigurer and
CustomLuceneAnalysisConfigurer implements LuceneAnalysisConfigurer.
Analyzers are defined twice, once in the Elasticsearch configurer and again in the Lucene configurer.
I don't know why both hibernate.search.backends.elasticsearch.type and hibernate.search.backends.lucene.type are necessary but if I don't include the lucene.type, I get Ambiguous backend type: configuration property 'hibernate.search.backends.lucene.type' is not set.
But if I do have both backend types set, I get
HSEARCH000575: No default backend. Check that at least one entity is configured to target the default backend, when attempting to retrieve the Lucene backend, like:
Search.mapping(entityManager.getEntityManagerFactory())
        .backend().unwrap(LuceneBackend.class);
And the same error when trying to retrieve the Elasticsearch backend.
I've also added @Indexed(..., backend = "elasticsearch") to my entities since I wish to have them saved into Elasticsearch and don't need them in Lucene. I also tried adding a fake entity with @Indexed(..., backend = "lucene") but it made no difference.
What have I got configured wrong?
I don't know why both hibernate.search.backends.elasticsearch.type and hibernate.search.backends.lucene.type are necessary but if I don't include the lucene.type, I get Ambiguous backend type: configuration property 'hibernate.search.backends.lucene.type' is not set.
That's because the backend name is just that: a name. Hibernate Search doesn't infer particular information from it, even if you name your backend "lucene" or "elasticsearch". You could have multiple Elasticsearch backends for all it knows :)
But if I do have both backend properties types set, I get HSEARCH000575: No default backend. Check that at least one entity is configured to target the default backend, when attempting to retrieve the Lucene backend, like:
Search.mapping(entityManager.getEntityManagerFactory())
        .backend().unwrap(LuceneBackend.class);
You called .backend(), which retrieves the default backend, i.e. the backend that doesn't have a name and is configured through hibernate.search.backend.* instead of hibernate.search.backends.<somename>.* (see https://docs.jboss.org/hibernate/stable/search/reference/en-US/html_single/#configuration-structure ).
But you are apparently mapping all your entities to named backends, one named elasticsearch and one named lucene. So the default backend just doesn't exist.
You should call this:
Search.mapping(entityManager.getEntityManagerFactory())
        .backend("lucene").unwrap(LuceneBackend.class);
I've also added @Indexed(..., backend = "elasticsearch") to my entities since I wish to have them saved into Elasticsearch
Since you obviously only want to use one backend for indexing, I would recommend reverting that change (keeping @Indexed without setting @Indexed.backend) and simply using the default backend.
In short, remove the @Indexed.backend and replace this:
properties.setProperty("hibernate.search.backends.elasticsearch.analysis.configurer", "com.bt.demo.search.AnalysisConfigurer");
properties.setProperty("hibernate.search.backends.lucene.analysis.configurer", "com.bt.demo.search.CustomLuceneAnalysisConfigurer");
properties.setProperty("hibernate.search.backends.elasticsearch.type", "elasticsearch");
properties.setProperty("hibernate.search.backends.lucene.type", "lucene");
properties.setProperty("hibernate.search.backends.elasticsearch.uris", "http://127.0.0.1:9200");
With this:
properties.setProperty("hibernate.search.backend.analysis.configurer", "com.bt.demo.search.AnalysisConfigurer");
properties.setProperty("hibernate.search.backends.lucene.analysis.configurer", "com.bt.demo.search.CustomLuceneAnalysisConfigurer");
properties.setProperty("hibernate.search.backend.type", "elasticsearch");
properties.setProperty("hibernate.search.backends.lucene.type", "lucene");
properties.setProperty("hibernate.search.backend.uris", "http://127.0.0.1:9200");
You don't technically have to do that, but I think it will be simpler in the long term. It keeps the Lucene backend as a separate hack that doesn't affect your whole application.
I also tried adding a fake entity with @Indexed(..., backend = "lucene")
I confirm you will need that fake entity mapped to the "lucene" backend, otherwise Hibernate Search will not create the "lucene" backend.
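For reference, a minimal sketch of such a fake entity (the class name is hypothetical; @Indexed.backend is what ties it to the named backend):

import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.search.mapper.pojo.mapping.definition.annotation.Indexed;

// Placeholder entity whose only purpose is to make Hibernate Search
// create and start the named "lucene" backend.
@Entity
@Indexed(backend = "lucene")
public class LuceneBootstrapEntity {

    @Id
    private Long id;
}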
In my current project almost every entity has a field recordStatus which can have 2 values:
A for Active
D for Deleted
In Spring Data one can normally use:
repository.findByLastName(lastName)
but with the current data model we have to remember the active part in every repository call, e.g.
repository.findByLastNameAndRecordStatus(lastName, A)
The question is: is there any way to extend Spring Data in such a way that it would be able to recognize the following method:
repository.findActiveByLastName(lastName)
and append the
recordStatus = 'A'
automatically?
Spring Data JPA provides two additional options for dealing with circumstances that its DSL can't handle by default.
The first solution is a custom query with an @Query annotation:
@Query("select s from MyTable s where s.lastName = ?1 and s.recordStatus = 'A'")
public MyObject findActiveByLastName(String lastName);
The second solution is to add a completely custom method the "old-fashioned way". You can create a new class named like MyRepositoryImpl. The Impl suffix is important, as it is how Spring knows to find your new method. (Note: you can avoid this, but then you have to link things up manually; the docs can help you with that.)
// Implementation
public class MyRepositoryImpl implements MyCustomMethodInterface {

    @PersistenceContext
    EntityManager em;

    public Object myCustomJPAMethod() {
        // TODO custom JPA work similar to this
        String myQuery = "TODO";
        return em.createQuery(myQuery).getResultList();
    }
}

// Interface
public interface MyCustomMethodInterface {
    public Object myCustomJPAMethod();
}

// For clarity, update your JPA repository as well so people see your custom work
public interface MySuperEpicRepository extends JpaRepository<Object, String>, MyCustomMethodInterface {
}
These are just some quick samples, so feel free to read the Spring Data JPA docs if you would like to get a bit more custom with it.
http://docs.spring.io/spring-data/jpa/docs/current/reference/html/
Finally, just a quick note: technically this isn't a built-in feature of Spring Data JPA, but you can also use Predicates. I will link you to a blog post on this one, since I am not overly familiar with that approach.
https://spring.io/blog/2011/04/26/advanced-spring-data-jpa-specifications-and-querydsl/
You can use Spring Data's Specifications. Take a look at this article.
If you create a 'base' specification with the recordStatus filter, you can derive all other specifications from it.
Of course, everybody in your team then has to use the Specifications API rather than the default Spring Data query methods.
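For illustration, a minimal sketch of that idea, assuming a recent Spring Data JPA (2.x, where Specification is a functional interface) and a hypothetical Person entity with recordStatus and lastName fields:

import org.springframework.data.jpa.domain.Specification;

public final class ActiveSpecifications {

    // Base specification: restricts any entity with a recordStatus field to active records.
    public static <T> Specification<T> active() {
        return (root, query, cb) -> cb.equal(root.get("recordStatus"), "A");
    }

    // Example: the base filter combined with a last-name filter.
    public static Specification<Person> activeByLastName(String lastName) {
        return ActiveSpecifications.<Person>active()
                .and((root, query, cb) -> cb.equal(root.get("lastName"), lastName));
    }
}

The repository then has to extend JpaSpecificationExecutor<Person>, and callers use repository.findAll(ActiveSpecifications.activeByLastName(lastName)).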
I am not sure you can extend the syntax unless you override the base class (SimpleReactiveMongoRepository here; this is for reactive Mongo, but you can find the equivalent class for your DB type). What I can suggest is to extend the base methods and make your method aware of the condition you want to execute. This post shows the idea, applied to a patch operation for all entities:
https://medium.com/@ghahremani/extending-default-spring-data-repository-methods-patch-example-a23c07c35bf9
With Spring JPA, is there an easy way to use native queries while maintaining database independence, for example by using the query which fits best?
At the moment I do this by checking the currently set Dialect from the Environment and calling the proper method of my repository:
public Foo fetchFoo() {
    if (POSTGRES_DIALECT.equals(env.getRequiredProperty("hibernate.dialect"))) {
        return repo.postgresOptimizedGetFoo();
    }
    return repo.getFoo();
}
This works, but I have the feeling that there is a better way, or that I am missing something, especially because (Spring) JPA makes it quite easy to use native queries while that breaks one of its big advantages: database independence.
As per my understanding, this can be achieved simply by using @Transactional(readOnly = false) and then, instead of calling session.createQuery, using session.createSQLQuery, as in the example below.
Your SQL can be any native query of yours.
Hope this works for you. :)
@Override
@Transactional(readOnly = false)
public Long getSeqVal() {
    Session session = entityManager.unwrap(Session.class);
    String sql = "SELECT nextval('seqName')";
    Query query = session.createSQLQuery(sql);
    BigInteger big = (BigInteger) query.list().get(0);
    return big.longValue();
}
This is just an idea; I do not know whether it works or not.
My idea would be to have subinterfaces: one normal Spring Data JPA interface with all methods for one entity (without native query hints), and a subinterface for every database that "overrides" the database-specific native statements. (This interface would be empty if there are no DB-specific statements.) Then I would try to configure Spring JPA with profiles to load the right specific interface (for example by a class-name or package-name pattern).
This seems like an overly complicated way to get queries to work.
If you really want to use optimized queries, at least make them transparent to your code. I suggest using named queries and creating an orm.xml per database (much like Spring Boot loads a different schema.xml per database).
In your code you can simply do
public interface YourRepository extends JpaRepository<YourEntity, Long> {
    List<YourEntity> yourQueryMethod();
}
This will look for a named query with the name YourEntity.yourQueryMethod. Now in your orm.xml add the named query (the default one and in another one the optimized one).
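For illustration, a named native query entry in a hypothetical orm-postgres.xml might look like this (the entity name, mapping file name, and query are assumptions, not from the original answer):

<?xml version="1.0" encoding="UTF-8"?>
<entity-mappings xmlns="http://xmlns.jcp.org/xml/ns/persistence/orm" version="2.1">
    <!-- PostgreSQL-optimized variant; the default orm.xml would carry the portable query -->
    <named-native-query name="YourEntity.yourQueryMethod" result-class="com.example.YourEntity">
        <query>SELECT * FROM your_entity</query>
    </named-native-query>
</entity-mappings>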
Then you need to configure your LocalContainerEntityManagerFactoryBean to load the specific one needed. Assuming you have a property defining which database you use, let's name it database.type, you could do something like the following:
<bean class="LocalContainerEntityManagerFactoryBean">
    <property name="mappingResources" value="classpath:META-INF/orm-${database.type}.xml" />
    <!-- ... other config ... -->
</bean>
This way you keep your code clean of the if/then/else construct and apply the optimized queries where needed. Cleans up your code nicely, imho.
So I have an app that needs to store certain configuration info, and I am planning on storing the configs as simple JSON documents in Mongo:
appConfig: {
    fizz: true,
    buzz: 34
}
This might map to a Java POJO/entity like:
public class AppConfig {
    private boolean fizz;
    private int buzz;
}
etc. Ordinarily, with relational databases, I use Hibernate/JPA for O/R mapping from table data to/from Java entities. I believe the closest JSON/Mongo companion to table/Hibernate is a Morphia/GSON combo: use Morphia to drive connectivity from my Java app to Mongo, and then use GSON to O/J map the JSON to/from Java POJOs/entities.
The problem here is that, over time, my appConfig document structure will change. It may be something simple like:
appConfig: {
    fizz: true,
    buzz: 34,
    foo: "Hello!"
}
Which would then require the POJO/entity to become:
public class AppConfig {
    private boolean fizz;
    private int buzz;
    private String foo;
}
But the problem is that I may have tens of thousands of JSON documents already stored in Mongo that don't have foo properties in them. In this specific case, the obvious solution is to set a default on the property like:
public class AppConfig {
    private boolean fizz;
    private int buzz;
    private String foo = "Hello!";
}
However in reality, eventually the AppConfig document/schema/structure might change so much that it in no way, shape or form resembles its original design. But the kicker is: I need to be backwards-compatible and, preferably, be capable of updating/transforming documents to match the new schema/structure where appropriate.
My question: how is this "versioned document" problem typically solved?
I usually solve this problem by adding a version field to each document in the collection.
You might have several documents in the AppConfig collection:
{
    _id: 1,
    fizz: true,
    buzz: 34
}
{
    _id: 2,
    version: 1,
    fizz: false,
    buzz: 36,
    foo: "Hello!"
}
{
    _id: 3,
    version: 1,
    fizz: true,
    buzz: 42,
    foo: "Goodbye"
}
In the above example, there are two documents at version one, and one older document at version zero (in this pattern, I generally interpret a missing or null version field as version zero, because I only add the field once I start versioning documents in production).
The two principles of this pattern:
Documents are always saved at the newest version when they are actually modified.
When a document is read, if it's not at the newest version, it gets transparently upgraded to the newest version.
You do this by checking the version field, and performing a migration when the version isn't new enough:
BasicDBObject update(BasicDBObject document) {
    if (document.getInt("version", 0) < 1) {
        document.put("foo", "Hello!"); // add default value for foo
        document.put("version", 1);
    }
    return document;
}
This migration can fairly easily add fields with default values, rename fields, and remove fields. Since it's located in application code, you can do more complicated calculations as necessary.
Once the document has been migrated, you can run it through whatever ODM solution you like to convert it into Java objects. This solution no longer has to worry about versioning, since the documents it deals with are all current!
With Morphia this could be done using the @PreLoad annotation.
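For instance, a minimal sketch of such a hook (the entity mirrors the AppConfig example above; the Morphia package name is an assumption about your Morphia version):

import com.mongodb.DBObject;
import org.mongodb.morphia.annotations.Entity;
import org.mongodb.morphia.annotations.PreLoad;

@Entity
public class AppConfig {
    private boolean fizz;
    private int buzz;
    private String foo;

    // Runs before Morphia maps the raw document onto this class,
    // so old documents can be upgraded in place.
    @PreLoad
    void migrate(DBObject document) {
        if (document.get("version") == null) { // version zero: pre-"foo" documents
            document.put("foo", "Hello!");
            document.put("version", 1);
        }
    }
}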
Two caveats:
Sometimes you may want to save the upgraded document back to the database immediately. The most common reasons for this are when the migration is expensive, the migration is non-deterministic or integrates with another database, or you're in a hurry to upgrade an old version.
Adding or renaming fields that are used as criteria in queries is a bit trickier. In practice, you may need to perform more than one query, and unify the results.
In my opinion, this pattern highlights one of the great advantages of MongoDB: since the documents are versioned in the application, you can seamlessly migrate data representations in the application without any offline "migration phase" like you would need with a SQL database.
The JSON deserializer solves this in a very simple way for you (using Java).
Just allow your POJO/entity to grow with new fields. When you deserialize your JSON from Mongo into your entity, all missing fields will be null.
mongoDocument v1 : Entity of v3
{
    fizz: "abc",   --> fizz = "abc";
    buzz: 123      --> buzz = 123;
                   --> newObj = null;
                   --> obj_v3 = null;
}
You can even use this the other way around, if you would like your legacy servers to work with new database objects:
mongoDocument v3 : Entity of v1
{
    fizz: "abc",   --> fizz = "abc";
    buzz: 123,     --> buzz = 123;
    newObj: "zzz", -->
    obj_v3: "b     -->
}
Depending on whether they have the fields or not, they will be populated by the deserializer.
Keep in mind that booleans are not best suited for this, since they can default to false (depending on which deserializer you use).
So unless you are actively going to work with versioning of your objects, why bother with the overhead, when you can build a legacy-safe server implementation that, with just a few null checks, can handle any of the older objects?
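For illustration, a minimal sketch of the forward case with GSON (as mentioned in the question); getFoo() is a hypothetical accessor on AppConfig:

import com.google.gson.Gson;

public class DeserializeDemo {

    public static void main(String[] args) {
        // A version-one document that predates the "foo" field.
        String json = "{\"fizz\":true,\"buzz\":34}";
        AppConfig config = new Gson().fromJson(json, AppConfig.class);
        // The missing field is simply left at its default value.
        System.out.println(config.getFoo()); // prints "null"
    }
}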
I hope this proposal helps you with your set-up.
I guess the thread below will help you, although it is not about versioning documents in the DB, and it is done with spring-data-mongodb:
How to add a final field to an existing spring-data-mongodb document collection?
You can assign values to the POJO based on the existence of the property in the document, using a Converter implementation.
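For illustration, a minimal sketch of such a reading converter (this assumes Spring Data MongoDB's org.bson.Document API and setters on AppConfig, both of which are assumptions here):

import org.bson.Document;
import org.springframework.core.convert.converter.Converter;
import org.springframework.data.convert.ReadingConverter;

@ReadingConverter
public class AppConfigReadConverter implements Converter<Document, AppConfig> {

    @Override
    public AppConfig convert(Document source) {
        AppConfig config = new AppConfig();
        config.setFizz(source.getBoolean("fizz", false));
        config.setBuzz(source.getInteger("buzz", 0));
        // Fall back to a default when older documents lack the property.
        config.setFoo(source.containsKey("foo") ? source.getString("foo") : "Hello!");
        return config;
    }
}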
You have a couple of options with Morphia, at least. You could use a versioned class name and rely on Morphia's use of the className property to fetch the correct class version; your application would then just have to migrate that old object to the new class definition. Another option is to use @PreLoad and massage the DBObject coming out of Mongo into the new shape before Morphia maps it to your class. Using a version field on the class, you can determine which migration to run when the data is loaded. From that point it would just look like the new form to Morphia and would map seamlessly. Once you save that configuration object back to Mongo, it would be in the new form and the next load wouldn't need to run the migration.
I am using Spring-data to access a Neo4j database via REST.
One of my entities looks similar to the following one:
@NodeEntity
@TypeAlias("org.example.Foo")
public class Foo {

    @GraphId
    private Long nodeId;

    //...

    @RelatedTo(type = "HAS_BAR", direction = Direction.OUTGOING)
    private Set<Bar> bars;

    //...
}
A typical Foo might have anywhere from 0 to 1000 Bars. Most of the time, those Bars are not needed when loading a Foo, so I thought I should be fine by not adding a @Fetch annotation, thus avoiding eager-loading the Bars.
However, when now loading a Foo using the generated repository methods, the Bars are loaded - at least partially (only their nodeId properties).
Is there any way to avoid this? Performance suffers quite much from this behavior.
I really would like to be able to use lazy-loading like shown in https://stackoverflow.com/a/16159051/232175 for the collection itself.
For lazy fetching to work, Spring Data creates a proxy for each Bar with just enough information (the node id) to fetch it lazily when required. That is why the Bars are being created in your case. I suggest you use the Neo4jTemplate to pull just the Foo properties that you are looking for, as shown below:
Result<Map<String, Object>> result = template.query("START n=node({0}) RETURN n.property1, n.property2, n.property3");
result.handle(new Handler<Map<String, Object>>()
{
    @Override
    public void handle(Map<String, Object> row)
    {
        System.err.println(row.get("n.property1"));
        System.err.println(row.get("n.property2"));
        System.err.println(row.get("n.property3"));
    }
});
If you don't have any particular reason to use Neo4j via REST, you can use it embedded with the AspectJ mapping, which doesn't have this problem. You could also use it via REST, but according to this post, Neo4j Spring data POC for social RESTful layer, it is better to avoid that.
If you don't need the Bars most of the time, remove them from your entity and just load them when needed with Cypher?
Apart from that, spring-data-neo4j doesn't support explicit lazy loading in simple mode, but you might try your luck with the advanced mapping mode (http://static.springsource.org/spring-data/data-graph/snapshot-site/reference/html/#reference:aspectj).