I have two applications which use the same Elasticsearch instance as a search engine. Both applications share the same code base and have only minor differences.
The applications run against different databases, and hence different ES indices should be used.
I tried to parameterize the index name using SpEL, like this:
@Indexed(index = "${es.index.users}")
public class UserEntity {}
However, it doesn't work.
The second option I tried was setting a different prefix for different applications via hibernate.search.default.indexBase=<app_name>. However, this works only for the Lucene engine, not for ES.
Is there a way to pass the index name into the @Indexed annotation at runtime?
If not, is there another way to pass the index which should be used?
At the moment, the only solution would be to use the programmatic mapping API, which lets you execute code to set the index names. If you need to retrieve the index names from configuration files, that part will be up to you...
First, remove the @Indexed annotations from your indexed entities.
Then, implement a mapping factory:
package com.myCompany;

import java.util.Map;

import org.hibernate.search.annotations.Factory;
import org.hibernate.search.cfg.SearchMapping;

public class MyAppSearchMappingFactory {

    @Factory
    public SearchMapping getSearchMapping() {
        SearchMapping mapping = new SearchMapping();
        for ( Map.Entry<Class<?>, String> entry : getIndexNames().entrySet() ) {
            mapping.entity( entry.getKey() ).indexed().indexName( entry.getValue() );
        }
        return mapping;
    }

    private Map<Class<?>, String> getIndexNames() {
        // Fetch the index names somehow. Maybe just use a different implementation of this class in each application?
    }
}
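If the index names live in a configuration file, getIndexNames() could be implemented along these lines. This is only a sketch: the file name search.properties and the property key es.index.users are assumptions, and it needs java.util.Properties, java.io.*, and java.util.HashMap imports.

    // Sketch: read index names from a properties file on the classpath.
    // The file name and property keys are illustrative, not prescribed.
    private Map<Class<?>, String> getIndexNames() {
        Properties props = new Properties();
        try (InputStream in = getClass().getResourceAsStream("/search.properties")) {
            props.load(in);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        Map<Class<?>, String> names = new HashMap<>();
        names.put(UserEntity.class, props.getProperty("es.index.users"));
        return names;
    }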
Then reference it in the Hibernate ORM properties (persistence.xml, hibernate.properties, or some framework-specific file, depending on what you use):
hibernate.search.model_mapping = com.myCompany.MyAppSearchMappingFactory
And you should be all set.
Related
A related group of applications I've been assigned share a database configuration table with columns 'application', 'config_name', 'config_type' (i.e. String, Integer), and 'config_value'. There's also a stored procedure that takes in a string (applicationName) and returns all config names, types, and values where applicationName == application.
In each application, a wrapper class is instantiated which contains a static ThreadLocal (hereafter 'static config'), and that static config pulls all values from the config table for the application.
When loading configuration values, the stored procedure returns a massive list of properties that are iterated over, going through a massive chain of if-else statements that test whether the 'config_name' column matches a string literal and, if so, load the value into a differently named variable.
EX:
if (result.isBeforeFirst()) {
    while (result.next()) {
        if (result.getString("config_name").equals("myConfig1")) {
            myConfigurationValue1 = result.getString("config_value");
        } else if (result.getString("config_name").equals("myConfig2")) {
            myConfigurationValue2 = result.getString("config_value");
        }
    }
}
These cover between 60-100ish configs per app, and each application has an identical Configuration class save for the names of the properties they're trying to read.
So my questions are:
Having one gigantic configuration class is poor design, right? I'm not entirely sure how to break them down, and I can't share them here, but I'm assuming best practice would be to have multiple configuration classes, each holding all the components needed to perform a particular operation, e.g. 'LocalDatabaseConfig' or '(ExternalSystemName)DatabaseConfig'?
Once broken down, what's the best way to get config values where needed without static access? If I have each class instantiate the configuration it needs, I'll be doing a lot of redundant DB operations, but if I just pass them from the application entry point, then many classes have to 'hold on' to data they don't need in order to feed it to later classes... Is this a time when static config classes are the best option?
Is there an elegant way to load properties from the DB (in core Java - the company is very particular about third-party libraries) without using this massive if-else chain? I keep thinking that ideally we'd just dynamically load each property as it's referenced, but the only way I can think to do that is to use another stored procedure that takes in a unique identifier for a property and load it that way - and that would involve a lot more string literals...
(Might be invalidated by 3) Is there a better way for the comparison in the pseudo-code above to test for a property rather than using a string literal? Could this be resolved if we just agreed to name our configuration properties in the application the same way they're named in the DB?
Currently, every application just copy-pastes this configuration class and replaces the string literals and variable names. Many of the values are unique in name and value; some are unique in value but named the same between applications (and vice versa); and some are the same name and value for every application, but because the stored procedure fetches values based on application, redundant DB entries are necessary (despite the fact that many such values are supposed to be the same at all times, and any change to one needs to be performed on the other versions as well). Would it make sense to create a core library class that can construct any of the proposed 'broken down' configuration classes? E.g., every application needs some basic logging configurations that don't change across the applications. We already have a core library that's a dependency for each application, but I don't know whether it would make sense to add all/some/none of the configuration classes to the core library...
Thanks for your help! Sorry for the abundance of questions!
The cascading if-then-else might be eliminated by using a while loop to copy the database-query results into two maps: a Map<String, String> for the string-based configuration variables, and a Map<String, Integer> for the integer configuration variables. Then the class could provide the following operations:
private final Map<String, String> stringMap = new HashMap<>();
private final Map<String, Integer> intMap = new HashMap<>();

public String lookupStringVariable(String name, String defaultValue) {
    String value = stringMap.get(name);
    if (value == null) {
        return defaultValue;
    } else {
        return value;
    }
}

public int lookupIntVariable(String name, int defaultValue) {
    Integer value = intMap.get(name);
    if (value == null) {
        return defaultValue;
    } else {
        return value.intValue();
    }
}
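The copy loop that fills those two maps might look like this. A sketch only: it assumes the ResultSet columns described in the question, and that the 'config_type' column literally contains "Integer" for integer values.

while (result.next()) {
    String name = result.getString("config_name");
    String type = result.getString("config_type"); // assumption: "Integer" or "String"
    String value = result.getString("config_value");
    if ("Integer".equals(type)) {
        intMap.put(name, Integer.valueOf(value));
    } else {
        stringMap.put(name, value);
    }
}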
If there is a requirement (perhaps for runtime performance) to have the configuration values stored in fields of the configuration class, then the configuration class could make the above two operations private and use them to initialize fields. For example:
logLevel = lookupIntVariable("log_level", 2);
logDir = lookupStringVariable("log_dir", "/tmp");
An alternative (but complementary) suggestion is to write a code generator application that will query the DB table and generate a separate Java class for each value in the application column of the DB table. The implementation of a generated Java class would use whatever coding approach you like to query the DB table and retrieve the application-specific configuration variables. Once you have written this generator application, you can rerun it whenever the DB table is updated to add/modify configuration variables. If you decide to write such a generator, you can use print() statements to generate the Java code. Alternatively, you might use a template engine to reduce some of the verbosity associated with print() statements. An example of a template engine is Velocity, but the Comparison of web template engines Wikipedia article lists dozens more.
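The core of such a generator could be little more than the following sketch. The column names match the question; the output file name and everything else are illustrative.

// Sketch: emit one field per configuration row for a given application.
try (PrintWriter out = new PrintWriter(new FileWriter(appName + "Config.java"))) {
    out.println("public class " + appName + "Config {");
    while (result.next()) {
        String type = "Integer".equals(result.getString("config_type")) ? "int" : "String";
        out.println("    private " + type + " " + result.getString("config_name") + ";");
    }
    out.println("}");
}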
You would be better off separating the database access from the application initialisation. A basic definition would be a Map<String,String> returned by querying for one application's settings:
Map<String,String> config = dbOps.getConfig("myappname");
// which populates a map from the config_name/config_value results:
// as in: config.put(result.getString("config_name"), result.getString("config_value"));
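A possible implementation of getConfig, sketched against plain JDBC. The stored-procedure name getAppConfig is hypothetical; the question only says such a procedure exists.

// Sketch: call the per-application stored procedure and flatten the rows into a map.
// 'connection' is an existing java.sql.Connection; the procedure name is an assumption.
Map<String, String> getConfig(String applicationName) throws SQLException {
    Map<String, String> config = new HashMap<>();
    try (CallableStatement stmt = connection.prepareCall("{call getAppConfig(?)}")) {
        stmt.setString(1, applicationName);
        try (ResultSet result = stmt.executeQuery()) {
            while (result.next()) {
                config.put(result.getString("config_name"), result.getString("config_value"));
            }
        }
    }
    return config;
}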
Your application code then can initialise from the single application settings:
void init(Map<String,String> config) {
myConfigurationValue1 = config.get("myConfig1");
myConfigurationValue2 = config.get("myConfig2");
}
A benefit of this decoupling is that you can define test cases for your application by hardwiring the config for different permutations of Map settings, without needing a huge database of test configurations, and you can test the config loader independently of your application logic.
Once this is working, you might consider whether dbOps.getConfig("myappname") should cache the per-application settings to avoid excessive queries (if they don't change in the database), or whether to declare Config as a class backed by the Map but with get / getInt calls that take default values, or that throw a RuntimeException for missing keys:
void init(Config config) {
myConfigurationValue1 = config.get("myConfig1", "aDefaultVal");
myConfigurationInt2 = config.getInt("myConfig2", 100);
}
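Such a Config wrapper could be as small as the following sketch; the method names mirror the usage above, and require() is an illustrative name for the throwing variant.

import java.util.Map;

public class Config {
    private final Map<String, String> values;

    public Config(Map<String, String> values) {
        this.values = values;
    }

    public String get(String name, String defaultValue) {
        return values.getOrDefault(name, defaultValue);
    }

    public int getInt(String name, int defaultValue) {
        String value = values.get(name);
        return value == null ? defaultValue : Integer.parseInt(value);
    }

    // Throwing variant for keys that must be present.
    public String require(String name) {
        String value = values.get(name);
        if (value == null) {
            throw new RuntimeException("Missing config key: " + name);
        }
        return value;
    }
}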
My Spring Boot app is using Couchbase 5.1 community.
My app needs both a primary index and several secondary indexes.
Currently, in order to create the needed indexes, I go to the query page in the UI and manually create the indexes that the app needs, as described here.
I was looking for a way to do it automatically via code, so when the app is starting, it will check if the indexes are missing and will create them if needed.
Is there a way to do it via Spring Data or via the Couchbase client?
You can create them by using the DSL from the Index class. There's an example of using it in the documentation under "Indexing the Data: N1QL & GSI".
From that example:
You can also create secondary indexes on specific fields of the JSON,
for better performance:
Index.createIndex("index_name").on(bucket.name(), "field_to_index")
In this case, give a name to your index, specify the target bucket AND
the field(s) in the JSON to index.
If the index already exists, an IndexAlreadyExistsException will be thrown (see documentation), so you'll need to check for that.
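For example, a sketch against the Java SDK 2.x bucket manager; the index and field names are illustrative, and IndexAlreadyExistsException is the exception the documentation mentions. Alternatively, pass true as the ignoreIfExist argument and skip the catch entirely.

// Sketch: create a secondary index, swallowing the error if it already exists.
try {
    bucket.bucketManager().createN1qlIndex("userId_index", false, false, "userId");
} catch (IndexAlreadyExistsException e) {
    // the index is already there; nothing to do
}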
So this is how I solved it:
import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.query.N1qlQuery;

public class MyCouchBaseRepository {

    private Bucket bucket;

    public MyCouchBaseRepository(<My Repository that extends CouchbasePagingAndSortingRepository> myRepository) {
        bucket = myRepository.getCouchbaseOperations().getCouchbaseBucket();
        createIndices();
    }

    private void createIndices() {
        bucket.bucketManager().createN1qlPrimaryIndex(true, false);
        bucket.query(N1qlQuery.simple("CREATE INDEX xyz ON `myBucket`(userId) WHERE _class = 'com.example.User'"));
        ...
    }
}
When I use IntelliJ to generate a persistence mapping from an existing database schema, it puts a catalog value in the @Table annotation. Unfortunately, the names of our database instances include the names of the dev/test/prod environments, and while I can overwrite the connection string with a map passed to EntityManagerFactory, I still get Invalid object name 'BAR_DEV.dbo.FOO' when executing a query against the BAR_TEST instance.
Can I dynamically overwrite the catalog value at runtime, without doing a global search-and-replace to remove it manually after entity generation?
@Entity
@Table(name = "FOO", schema = "dbo", catalog = "BAR_DEV")
public class Foo { /* ... */ }
No, it is not possible directly with standard JPA.
However, a solution I used in my project was to define multiple persistence units, one for each environment. You may overwrite any database mapping in an orm.xml file, or even set a default catalog or schema for all entities. The next step is to dynamically retrieve the proper EntityManager - if you are using Java EE, I recommend injecting it with @Inject and creating a producer which returns the particular EM for the specified environment.
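A sketch of such a CDI producer; the persistence-unit names and the system property used to select the environment are assumptions.

import javax.enterprise.inject.Produces;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.PersistenceUnit;

// Hypothetical producer: picks the persistence unit matching the environment.
public class EntityManagerProducer {

    @PersistenceUnit(unitName = "app-dev")
    private EntityManagerFactory devFactory;

    @PersistenceUnit(unitName = "app-test")
    private EntityManagerFactory testFactory;

    @Produces
    public EntityManager createEntityManager() {
        // assumption: the environment comes from a system property
        String env = System.getProperty("app.environment", "dev");
        return ("test".equals(env) ? testFactory : devFactory).createEntityManager();
    }
}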
A non-portable, EclipseLink-only option: org.eclipse.persistence.dynamic.DynamicHelper.SessionCustomizer can replace many defaults at runtime.
EDIT: I don't have ready-made code for you. I use this approach:
public void customize(Session session) throws SQLException {
    ...
    for (ClassDescriptor descriptor : session.getDescriptors().values()) {
        if (!descriptor.getTables().isEmpty()
                && descriptor.getAlias().equalsIgnoreCase(descriptor.getTableName())) {
            // TABLE_PREFIX is a constant defined elsewhere in the customizer
            String tableName = TABLE_PREFIX + descriptor.getJavaClass().getSimpleName();
            descriptor.setTableName(tableName);
        }
    }
}
I have an app that needs to store certain configuration info, so I am planning on storing the configs as simple JSON documents in Mongo:
appConfig: {
fizz: true,
buzz: 34
}
This might map to a Java POJO/entity like:
public class AppConfig {
private boolean fizz;
private int buzz;
}
etc. Ordinarily, with relational databases, I use Hibernate/JPA for O/R mapping from table data to/from Java entities. I believe the closest JSON/Mongo companion to table/Hibernate is a Morphia/GSON combo: use Morphia to drive connectivity from my Java app to Mongo, and then use GSON to O/J map the JSON to/from Java POJOs/entities.
The problem here is that, over time, my appConfig document structure will change. It may be something simple like:
appConfig: {
    fizz: true,
    buzz: 34,
    foo: "Hello!"
}
Which would then require the POJO/entity to become:
public class AppConfig {
private boolean fizz;
private int buzz;
private String foo;
}
But the problem is that I may have tens of thousands of JSON documents already stored in Mongo that don't have foo properties in them. In this specific case, the obvious solution is to set a default on the property like:
public class AppConfig {
private boolean fizz;
private int buzz;
private String foo = "Hello!";
}
However in reality, eventually the AppConfig document/schema/structure might change so much that it in no way, shape or form resembles its original design. But the kicker is: I need to be backwards-compatible and, preferably, be capable of updating/transforming documents to match the new schema/structure where appropriate.
My question: how is this "versioned document" problem typically solved?
I usually solve this problem by adding a version field to each document in the collection.
You might have several documents in the AppConfig collection:
{
_id: 1,
fizz: true,
buzz: 34
}
{
_id: 2,
version: 1,
fizz: false,
buzz: 36,
foo: "Hello!"
}
{
_id: 3,
version: 1,
fizz: true,
buzz: 42,
foo: "Goodbye"
}
In the above example, there are two documents at version one, and one older document at version zero (in this pattern, I generally interpret a missing or null version field as version zero, because I only start adding the field once I'm versioning documents in production).
The two principles of this pattern:
Documents are always saved at the newest version when they are actually modified.
When a document is read, if it's not at the newest version, it gets transparently upgraded to the newest version.
You do this by checking the version field, and performing a migration when the version isn't new enough:
DBObject update(DBObject document) {
    Object version = document.get("version");
    // a missing or null version field is treated as version zero
    if (version == null || ((Number) version).intValue() < 1) {
        document.put("foo", "Hello!"); // add default value for foo
        document.put("version", 1);
    }
    return document;
}
This migration can fairly easily add fields with default values, rename fields, and remove fields. Since it's located in application code, you can do more complicated calculations as necessary.
Once the document has been migrated, you can run it through whatever ODM solution you like to convert it into Java objects. This solution no longer has to worry about versioning, since the documents it deals with are all current!
With Morphia this could be done using the @PreLoad annotation.
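For instance, a sketch using the AppConfig entity from the question; Morphia invokes @PreLoad methods with the raw DBObject before mapping it.

import com.mongodb.DBObject;
import org.mongodb.morphia.annotations.Entity;
import org.mongodb.morphia.annotations.PreLoad;

@Entity
public class AppConfig {
    private boolean fizz;
    private int buzz;
    private String foo;

    @PreLoad
    void migrate(DBObject document) {
        // documents without a version field are treated as version zero
        if (document.get("version") == null) {
            document.put("foo", "Hello!"); // default for pre-versioning documents
            document.put("version", 1);
        }
    }
}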
Two caveats:
Sometimes you may want to save the upgraded document back to the database immediately. The most common reasons for this are when the migration is expensive, the migration is non-deterministic or integrates with another database, or you're in a hurry to upgrade an old version.
Adding or renaming fields that are used as criteria in queries is a bit trickier. In practice, you may need to perform more than one query, and unify the results.
In my opinion, this pattern highlights one of the great advantages of MongoDB: since the documents are versioned in the application, you can seamlessly migrate data representations in the application without any offline "migration phase" like you would need with a SQL database.
The JSON deserializer solves this in a very simple way for you (using Java).
Just allow your POJO/entity to grow with new fields. When you deserialize your JSON from Mongo into your entity, all missing fields will be null.
mongoDocument v1 : Entity of v3
{
    fizz: "abc",  --> fizz = "abc";
    buzz: 123     --> buzz = 123;
                  --> newObj = null;
                  --> obj_v3 = null;
}
You can even use this the other way around if you like to have you legacy servers work with new database objects:
mongoDocument v3 : Entity of v1
{
    fizz: "abc",   --> fizz = "abc";
    buzz: 123,     --> buzz = 123;
    newObj: "zzz", --> (no matching field; ignored)
    obj_v3: "b..." --> (no matching field; ignored)
}
Depending on whether they have the fields or not, they will be populated by the deserializer.
Keep in mind that booleans are not best suited for this since they can default to false (depending on which deserializer you use).
So unless you are actively going to work with versioning of your objects, why bother with the overhead? You can build a legacy-safe server implementation that, with just a few null checks, can handle any of the older objects.
I hope this proposal helps with your set-up.
I guess the thread below will help you, although it is not about versioning documents in the DB; it was done using spring-data-mongodb:
How to add a final field to an existing spring-data-mongodb document collection?
So you can assign values to the POJO based on the existence of the property in the document, using a Converter implementation.
You have a couple of options with Morphia, at least. You could use a versioned class name and then rely on Morphia's use of the className property to fetch the correct class version; your application would then just have to migrate that old object to the new class definition. Another option is to use @PreLoad and massage the DBObject coming out of Mongo into the new shape before Morphia maps the DBObject to your class. Using a version field on the class, you can determine which migration to run when the data is loaded. From that point, it would just look like the new form to Morphia and would map seamlessly. Once you save that configuration object back to Mongo, it'd be in the new form and the next load wouldn't need to run the migration.
I have pretty much zero experience with Hibernate, though I've used similar persistence libraries in other languages before. I'm working on a Java project that will require a way to define "models" (in the MVC sense) in text configuration files, generate the database tables automatically, and (ideally) be database-backend-agnostic. As far as I can tell from some quick Googling, Hibernate is the only widely-used backend-agnostic Java database library; while I could write my own compatibility layer between my model system and multiple DB backends, that's a debugging endeavor that I'd like to avoid if possible.
My question is: Can Hibernate be used to store data whose structure is represented in some other way than an annotated Java class file, such as a HashMap with some configuration object that describes its structure? And, if not, are there any other relatively-stable Java database libraries that could provide this functionality?
EDIT: Here's a clearer description of what I'm trying to accomplish:
I am writing a data-modeling library. When a certain method of the library is called, and passed a configuration object (loaded from a text file), it should create a new "model," and create a database table for that model if necessary. The library can be queried for items of this model, which should return HashMaps containing the models' fields. It should also allow HashMaps to be saved to the database, provided their structure matches the configuration files. None of these models should be represented by actual compiled Java classes.
I think you could try using the @MapKey annotation provided by JPA (not the Hibernate @MapKey annotation - it's pretty different!).
@javax.persistence.OneToMany(cascade = CascadeType.ALL)
@javax.persistence.MapKey(name = "name")
private Map<String, Configuration> configurationMap = new HashMap<String, Configuration>();
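For completeness, a sketch of the Configuration entity this assumes; the fields are illustrative, and "name" is the property referenced by @MapKey above.

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

@Entity
public class Configuration {
    @Id @GeneratedValue
    private Long id;

    private String name;  // used as the map key above
    private String value;
}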
I don't believe Hibernate will let you have a Map as an @Entity, but it will let you have a custom class that contains a map field:
@Entity
public class Customer {
    @Id @GeneratedValue
    public Integer getId() { return id; }
    public void setId(Integer id) { this.id = id; }
    private Integer id;

    @OneToMany
    @JoinTable(name = "Cust_Order")
    @MapKeyColumn(name = "orders_number")
    public Map<String, Order> getOrders() { return orders; }
    public void setOrders(Map<String, Order> orders) { this.orders = orders; }
    private Map<String, Order> orders;
}
(example from Hibernate docs)
Additionally, you don't have to use annotations (if that is what you're trying to avoid): Hibernate relationships can be described via XML files, and there are utilities (Maven plugins, for example) which can automatically generate the necessary Java POJOs from the XML.
Does your model require a relational database? You might consider a database like Mongo that stores any object you can represent with JSON.
You can configure Hibernate to work without any entity classes (beans linked to tables):
1. You need to use XML configuration for this. In place of class, use entity-name, and in place of <property name="something", use <property node="something".
2. Create a Hibernate session with entity-mode set to map.
3. You can use a Map to store and retrieve information from the DB. Remember, since you are using a Map, there will be difficulties with two-way mapping (this was as of 3.3; not sure if it's been fixed in later releases).
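A sketch of what the map-based usage can look like (Hibernate 3.x API); the entity name "AppConfig" is illustrative and must match the entity-name in your XML mapping.

import java.util.HashMap;
import java.util.Map;

import org.hibernate.EntityMode;
import org.hibernate.Session;

// Sketch: persist a dynamic "AppConfig" entity as a Map (Hibernate 3.x).
Session session = sessionFactory.openSession().getSession(EntityMode.MAP);
session.beginTransaction();

Map<String, Object> model = new HashMap<String, Object>();
model.put("fizz", Boolean.TRUE);
model.put("buzz", 34);
session.save("AppConfig", model);

session.getTransaction().commit();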