Apache Ignite - how to evolve schema/properties of an annotated class - java

I'm using Apache Ignite with class annotations as described in "Query Configuration by Annotations".
How should we handle class changes? For example, what happens if, going from v1 to v2 of my application, I add a new property?
Are previous values deserialized? Can I specify a default value?
I cannot find any documentation on this topic. I have tried with a simple use case, and it seems that new properties are null. How can I handle this?
UPDATE
Following suggestions from @dmagda, I tried to add a property to my class, adding it to the table using ALTER TABLE MYTABLE ADD COLUMN myNewProperty varchar; and then changing its value using UPDATE MYTABLE SET myNewProperty='myDefaultValue'.
But unfortunately, running the above UPDATE I get the exception: Error: class org.apache.ignite.binary.BinaryObjectException: Failed to unmarshal object with optimized marshaller (state=50000,code=0)
Is it possible to update the new fields of existing records using SQL? How?
UPDATE 2
Solved my problem. It was caused by the fact that my class was written in Scala with some Scala-specific types (Map, ...). My app connects to Ignite in client mode, so when executing the UPDATE from the sqlline utility, Ignite was unable to deserialize those types.
I switched my class to a plain POJO, and now I'm able to update both the schema and the data.

Just update your Java class by adding a new field and it will be stored and can be read back without any issue. You might see null as the value of the new field for two reasons:
It was not set to any specific value by your application.
You're reading back from Ignite an old object which was stored before you updated your class, and thus the new field wasn't present there.
If you need to access the new field using SQL, then use the ALTER TABLE command to add the field to the SQL schema.
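For illustration, a minimal sketch of what v2 of the class might look like; the class and field names are hypothetical. Old binary objects simply lack the new field, so reads return null until you backfill it via SQL or application code:

import org.apache.ignite.cache.query.annotations.QuerySqlField;

public class MyValue {
    @QuerySqlField
    private String existingProperty;

    // New in v2: records written by v1 carry no such field in their binary
    // form, so reading them back yields null here until it is backfilled.
    @QuerySqlField
    private String myNewProperty;
}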

Related

Highlighting in Hibernate Search 6 and Elasticsearch backend

We're in the process of converting our Java application from Hibernate Search 5 to 6 with an Elasticsearch backend.
For some good background info, see How to do highlighting within HibernateSearch over Elasticsearch for a question we had when upgrading our highlighting code from a Lucene to Elasticsearch backend and how it was resolved.
Hibernate Search 6 seems to support using 2 backends at the same time, Lucene and Elasticsearch, so we'd like to use Elasticsearch for all our queries and Lucene for the highlighting, if that's possible.
Here is basically what we're trying to do:
public boolean matchPhoneNumbers() {
    String phoneNumber1 = "603-436-1234";
    String phoneNumber2 = "603-436-1234";
    LuceneBackend luceneBackend =
            Search.mapping(entityManager.getEntityManagerFactory())
                    .backend().unwrap(LuceneBackend.class);
    Analyzer analyzer = luceneBackend.analyzer("phoneNumberKeywordAnalyzer").get();
    //... builds a Lucene Query using the analyzer and phoneNumber1 term
    Query phoneNumberQuery = buildQuery(analyzer, phoneNumber1, ...);
    return isMatch("phoneNumberField", phoneNumber2, phoneNumberQuery, analyzer);
}

private boolean isMatch(String field, String target, Query sourceQ, Analyzer analyzer) {
    Highlighter highlighter = new Highlighter(new QueryScorer(sourceQ, field));
    highlighter.setTextFragmenter(new NullFragmenter());
    try {
        String result = highlighter.getBestFragment(analyzer, field, target);
        return StringUtils.hasText(result);
    } catch (IOException e) {
        ...
    }
}
What I've attempted so far is to configure two separate backends in the configuration properties, per the documentation, like this:
properties.setProperty("hibernate.search.backends.elasticsearch.analysis.configurer", "com.bt.demo.search.AnalysisConfigurer");
properties.setProperty("hibernate.search.backends.lucene.analysis.configurer", "com.bt.demo.search.CustomLuceneAnalysisConfigurer");
properties.setProperty("hibernate.search.backends.elasticsearch.type", "elasticsearch");
properties.setProperty("hibernate.search.backends.lucene.type", "lucene");
properties.setProperty("hibernate.search.backends.elasticsearch.uris", "http://127.0.0.1:9200");
The AnalysisConfigurer class implements ElasticsearchAnalysisConfigurer and
CustomLuceneAnalysisConfigurer implements LuceneAnalysisConfigurer.
Analyzers are defined twice, once in the Elasticsearch configurer and again in the Lucene configurer.
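For reference, a sketch of what the two duplicated analyzer definitions might look like; the analyzer name comes from the question, while the keyword tokenizer and lowercase filter are assumptions:

import org.apache.lucene.analysis.core.KeywordTokenizerFactory;
import org.apache.lucene.analysis.core.LowerCaseFilterFactory;
import org.hibernate.search.backend.lucene.analysis.LuceneAnalysisConfigurer;
import org.hibernate.search.backend.lucene.analysis.model.dsl.LuceneAnalysisConfigurationContext;

public class CustomLuceneAnalysisConfigurer implements LuceneAnalysisConfigurer {
    @Override
    public void configure(LuceneAnalysisConfigurationContext context) {
        // Assumed definition: a keyword tokenizer so the phone number stays one token
        context.analyzer("phoneNumberKeywordAnalyzer").custom()
                .tokenizer(KeywordTokenizerFactory.class)
                .tokenFilter(LowerCaseFilterFactory.class);
    }
}

And the same logical analyzer defined again for the Elasticsearch backend:

import org.hibernate.search.backend.elasticsearch.analysis.ElasticsearchAnalysisConfigurer;
import org.hibernate.search.backend.elasticsearch.analysis.model.dsl.ElasticsearchAnalysisConfigurationContext;

public class AnalysisConfigurer implements ElasticsearchAnalysisConfigurer {
    @Override
    public void configure(ElasticsearchAnalysisConfigurationContext context) {
        // Elasticsearch uses built-in analysis component names rather than Lucene factory classes
        context.analyzer("phoneNumberKeywordAnalyzer").custom()
                .tokenizer("keyword")
                .tokenFilters("lowercase");
    }
}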
I don't know why both hibernate.search.backends.elasticsearch.type and hibernate.search.backends.lucene.type are necessary but if I don't include the lucene.type, I get Ambiguous backend type: configuration property 'hibernate.search.backends.lucene.type' is not set.
But if I do have both backend properties types set, I get
HSEARCH000575: No default backend. Check that at least one entity is configured to target the default backend, when attempting to retrieve the Lucene backend, like:
Search.mapping(entityManager.getEntityManagerFactory())
.backend().unwrap(LuceneBackend.class);
And the same error when trying to retrieve the Elasticsearch backend.
I've also added @Indexed(..., backend = "elasticsearch") to my entities since I wish to have them saved into Elasticsearch and don't need them in Lucene. I also tried adding a fake entity with @Indexed(..., backend = "lucene") but it made no difference.
What have I got configured wrong?
I don't know why both hibernate.search.backends.elasticsearch.type and hibernate.search.backends.lucene.type are necessary but if I don't include the lucene.type, I get Ambiguous backend type: configuration property 'hibernate.search.backends.lucene.type' is not set.
That's because the backend name is just that: a name. Hibernate Search doesn't infer particular information from it, even if you name your backend "lucene" or "elasticsearch". You could have multiple Elasticsearch backends for all it knows :)
But if I do have both backend properties types set, I get HSEARCH000575: No default backend. Check that at least one entity is configured to target the default backend, when attempting to retrieve the Lucene backend, like:
Search.mapping(entityManager.getEntityManagerFactory())
.backend().unwrap(LuceneBackend.class);
You called .backend(), which retrieves the default backend, i.e. the backend that doesn't have a name and is configured through hibernate.search.backend.* instead of hibernate.search.backends.<somename>.* (see https://docs.jboss.org/hibernate/stable/search/reference/en-US/html_single/#configuration-structure ).
But you are apparently mapping all your entities to named backends, one named elasticsearch and one named lucene. So the default backend simply doesn't exist.
You should call this:
Search.mapping(entityManager.getEntityManagerFactory())
.backend("lucene").unwrap(LuceneBackend.class);
I've also added @Indexed(..., backend = "elasticsearch") to my entities since I wish to have them saved into Elasticsearch
Since you obviously only want to use one backend for indexing, I would recommend reverting that change (keeping @Indexed without setting @Indexed.backend) and simply using the default backend.
In short, remove the @Indexed.backend and replace this:
properties.setProperty("hibernate.search.backends.elasticsearch.analysis.configurer", "com.bt.demo.search.AnalysisConfigurer");
properties.setProperty("hibernate.search.backends.lucene.analysis.configurer", "com.bt.demo.search.CustomLuceneAnalysisConfigurer");
properties.setProperty("hibernate.search.backends.elasticsearch.type", "elasticsearch");
properties.setProperty("hibernate.search.backends.lucene.type", "lucene");
properties.setProperty("hibernate.search.backends.elasticsearch.uris", "http://127.0.0.1:9200");
With this:
properties.setProperty("hibernate.search.backend.analysis.configurer", "com.bt.demo.search.AnalysisConfigurer");
properties.setProperty("hibernate.search.backends.lucene.analysis.configurer", "com.bt.demo.search.CustomLuceneAnalysisConfigurer");
properties.setProperty("hibernate.search.backend.type", "elasticsearch");
properties.setProperty("hibernate.search.backends.lucene.type", "lucene");
properties.setProperty("hibernate.search.backend.uris", "http://127.0.0.1:9200");
You don't technically have to do that, but I think it will be simpler in the long term. It keeps the Lucene backend as a separate hack that doesn't affect your whole application.
I also tried adding a fake entity with @Indexed(..., backend = "lucene")
I confirm you will need that fake entity mapped to the "lucene" backend, otherwise Hibernate Search will not create the "lucene" backend.
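A minimal sketch of such a placeholder entity; the class name is hypothetical and the analyzer name comes from the question:

import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.search.mapper.pojo.mapping.definition.annotation.FullTextField;
import org.hibernate.search.mapper.pojo.mapping.definition.annotation.Indexed;

// Hypothetical do-nothing entity whose only purpose is to make Hibernate
// Search create the "lucene" backend; no instance ever needs to be persisted.
@Entity
@Indexed(backend = "lucene")
public class LuceneBackendPlaceholder {
    @Id
    private Long id;

    @FullTextField(analyzer = "phoneNumberKeywordAnalyzer")
    private String text;
}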

How to use Hibernate 5.2.10 MySQL JSON support without AttributeConverter or customUserType to map to Java Entity Class?

I am trying to map a MySQL JSON column to a Java entity class and am looking for the cleanest way of doing this.
Upon doing some research, I found 3 possible ways:
Extending AbstractSingleColumnStandardBasicType
Create a custom UserType
Use an attribute Converter
I used an attribute converter to convert the JSON column from String (as the MySQL driver maps it to a String) to my required type - this works with both Hibernate V4.3.10 and V5.2.10.
I tried to find out whether JSON is natively supported in Hibernate and found the PR https://github.com/hibernate/hibernate-orm/pull/1395; based on the PR, it looks like it adds a JSON mapping to the MySQL dialect, hence letting Hibernate know about the JSON column.
Does this mean I can use something like this to map to a JSON column in the DB?
@Column(name="json_type_column")
private Object correspondingJsonAttribute;
If I cannot use it like this and need to use one of the above 3 methods, is there a reason I would need to upgrade to get the registerColumnType( Types.JAVA_OBJECT, "json" ); call, which is part of the PR and is present in Hibernate V5.2.10? Do I get any more features from V5.2.10 that support JSON columns?
I also looked into the corresponding test case to understand how the JSON column mapping is done: https://github.com/hibernate/hibernate-orm/blob/master/hibernate-core/src/test/java/org/hibernate/test/bytecode/enhancement/access/MixedAccessTestTask.java. It uses the @Access annotation via property access and, it seems, sets the corresponding JSON column variable in the entity to a Map after converting it from String.
Any help is much appreciated.
Thanks!
Upon doing some research found 3 possible ways:
Extending AbstractSingleColumnStandardBasicType
Create a custom UserType
Use an attribute Converter
An AttributeConverter won't help you for this, but you can still use a custom UserType, or Hibernate type descriptors.
Does this mean I can use something like this to map to a JSON column in the DB?
@Column(name="json_type_column")
private Object correspondingJsonAttribute;
No. The json type is just for JDBC so that Hibernate knows how to handle that JDBC object when setting a parameter on a PreparedStatement or when fetching a ResultSet.
Do I get any more features from V5.2.10 that support JSON columns?
No, but you just need to supply your own JSON type.
You can just use the hibernate-types project, which is available on Maven Central.
<dependency>
    <groupId>com.vladmihalcea</groupId>
    <artifactId>hibernate-types-52</artifactId>
    <version>${hibernate-types.version}</version>
</dependency>
And use the provided JsonType from Hibernate Types, as it works on MySQL, PostgreSQL, Oracle, SQL Server or H2 without any modifications.
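A sketch of how the mapping might look with hibernate-types; the entity is hypothetical, and depending on the library version the class may be JsonStringType rather than JsonType:

import java.util.Map;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import com.vladmihalcea.hibernate.type.json.JsonType;
import org.hibernate.annotations.Type;
import org.hibernate.annotations.TypeDef;

@Entity
@TypeDef(name = "json", typeClass = JsonType.class)
public class Book {
    @Id
    private Long id;

    // Stored as a MySQL json column, materialized as a Map in Java
    @Type(type = "json")
    @Column(name = "json_type_column", columnDefinition = "json")
    private Map<String, String> properties;
}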

Access Custom Field Salesforce

I have created a custom field in the Contacts object in Salesforce whose API name is "Resume_Text__c", and I'm making a SOAP call to get the value of that field using a Java implementation with the following SOQL:
SELECT Resume_Text__c FROM Contact
But executing the query throws the following exception:
No such column 'Resume_Text__c' on entity 'Contact'. If you are attempting to use a custom field, be sure to append the '__c' after the custom field name. Please reference your WSDL or the describe call for the appropriate names.'
So how can I access the custom field via the SOAP API Java implementation?
Whenever you use an Enterprise.wsdl file in your implementation, you need to make sure that every time you create new fields or objects in your Salesforce.com environment, you regenerate your Enterprise.wsdl to import all the dependency mappings; otherwise, go with the Partner.wsdl.
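For comparison, a sketch of the same query through the Partner API, assuming the Force.com Web Service Connector (WSC); because the Partner WSDL is loosely typed, new custom fields are available without regenerating any stubs:

import com.sforce.soap.partner.PartnerConnection;
import com.sforce.soap.partner.QueryResult;
import com.sforce.soap.partner.sobject.SObject;

// Assumes an already-established PartnerConnection; exception handling omitted
QueryResult result = connection.query("SELECT Resume_Text__c FROM Contact");
for (SObject record : result.getRecords()) {
    // Fields are addressed by name, so no stub regeneration is needed
    Object resumeText = record.getField("Resume_Text__c");
}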

Creating a new Hibernate table

So I'm still pretty new to Hibernate, and I'm working on a large-ish application that already has a database with several Hibernate tables. I'm working on a new feature, which includes a new @Entity class, and I need these objects to be stored in a new table. The class is declared like this:
@Entity
@Table(name="DATA_REQUEST")
public class DataRequest {
    //Some fields, nothing fancy
}
The DATA_REQUEST table does not exist, nor do I have any data to store in it yet. I started the application up, expecting that it would either create the table or crash because it doesn't exist yet. Neither of these actually happened.
So: do I need to create the table manually (easily done)? Or do I need to go somewhere else to tell Hibernate that I need this table? I've seen the hibernate.cfg.xml file, which looks like a good place to start.
You need to specify "create" for the "hibernate.hbm2ddl.auto" property; read more details in the Hibernate configuration documentation. This is not recommended in production but only for testing purposes.
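For example, in hibernate.cfg.xml the setting is a single property line (a sketch; "update" preserves existing data, while "create" drops and recreates the schema on startup, losing existing data):

<property name="hibernate.hbm2ddl.auto">create</property>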
As for adding a new column to the table:
As long as it is not a NOT NULL column, you don't need to drop the table or restart your Hibernate app.
If you do want to use the column, then you need to map it in the code/hbm file (see the sketch below), so you will have to restart the Hibernate app.
If there is no mapping present, then as far as Hibernate is concerned the column does not exist. If it is a NOT NULL column, the underlying database would reject inserts/updates, since Hibernate will not include the column in the generated SQL.
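A sketch of the mapping for such a newly added, nullable column (the names are hypothetical):

// Until this mapping exists, Hibernate ignores the column entirely;
// keeping it nullable lets existing code insert rows without it.
@Column(name = "MY_NEW_COLUMN", nullable = true)
private String myNewColumn;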
From the Hibernate documentation:
hibernate.hbm2ddl.auto
Automatically validates or exports schema DDL to the database when the SessionFactory is created. With create-drop, the database schema will be dropped when the SessionFactory is closed explicitly.
e.g. validate | update | create | create-drop

What are the possible values of the Hibernate hbm2ddl.auto configuration and what do they do

I really want to know more about update, export, and the other values that could be given to hibernate.hbm2ddl.auto.
I need to know when to use update and when not to. And what is the alternative?
These are changes that could happen to the DB:
new tables
new columns in old tables
columns deleted
data type of a column changed
a type of a column changed its attributes
tables dropped
values of a column changed
In each case what is the best solution?
From the community documentation:
hibernate.hbm2ddl.auto Automatically validates or exports schema DDL to the database when the SessionFactory is created. With create-drop, the database schema will be dropped when the SessionFactory is closed explicitly.
e.g. validate | update | create | create-drop
So the list of possible options is:
validate: validate the schema, makes no changes to the database.
create-only: database creation will be generated.
drop: database dropping will be generated.
update: update the schema.
create: creates the schema, destroying previous data.
create-drop: drop the schema when the SessionFactory is closed explicitly, typically when the application is stopped.
none: does nothing with the schema, makes no changes to the database
These options seem intended to be developer tools and not to facilitate production-level databases; you may want to have a look at the following question: Hibernate: hbm2ddl.auto=update in production?
There's also the value of none to disable it entirely.
The configuration property is called hibernate.hbm2ddl.auto
In our development environment we set hibernate.hbm2ddl.auto=create-drop to drop and create a clean database each time we deploy, so that our database is in a known state.
In theory, you can set hibernate.hbm2ddl.auto=update to update your database with changes to your model, but I would not trust that on a production database. An earlier version of the documentation said that this was experimental, at least; I do not know the current status.
Therefore, for our production database, we do not set hibernate.hbm2ddl.auto - the default is to make no database changes. Instead, we manually create an SQL DDL update script that applies changes from one version to the next.
First, the possible values for the hbm2ddl configuration property are the following ones:
none - No action is performed. The schema will not be generated.
create-only - The database schema will be generated.
drop - The database schema will be dropped.
create - The database schema will be dropped and created afterward.
create-drop - The database schema will be dropped and created afterward. Upon closing the SessionFactory, the database schema will be dropped.
validate - The database schema will be validated using the entity mappings.
update - The database schema will be updated by comparing the existing database schema with the entity mappings.
The hibernate.hbm2ddl.auto="update" setting is convenient but less flexible if you plan on adding functions or executing some custom scripts.
So, the most flexible approach is to use Flyway.
However, even if you use Flyway, you can still generate the initial migration script using hbm2ddl.
I would use Liquibase for updating your DB. Hibernate's schema update feature is really only OK for a developer while they are developing new features. In a production situation, the DB upgrade needs to be handled more carefully.
Although it is quite an old post, I did some research on the topic, so I thought of sharing it.
hibernate.hbm2ddl.auto
As per the documentation it can have four valid values:
create | update | validate | create-drop
Following is the explanation of the behaviour shown by these values:
create: creates the schema; any data previously present in the schema is lost.
update: updates the schema with the given values.
validate: validates the schema. It makes no change to the DB.
create-drop: creates the schema, destroying any previously present data. It also drops the database schema when the SessionFactory is closed.
Following are the important points worth noting:
In case of update, if the schema is not present in the DB, then the schema is created.
In case of validate, if the schema does not exist in the DB, it is not created. Instead, it will throw an error: Table not found:<table name>.
In case of create-drop, the schema is not dropped on closing the session. It is dropped only on closing the SessionFactory.
If I give any other value to this property (say abc, instead of the four values discussed above), or it is just left blank, it shows the following behaviour:
- If the schema is not present in the DB: it creates the schema.
- If the schema is present in the DB: it updates the schema.
hibernate.hbm2ddl.auto automatically validates or exports schema DDL to the database when the SessionFactory is created.
By default, it does not perform any creation or modification automatically on the DB. If the user sets one of the below values, then it performs DDL schema changes automatically.
create - create the schema
<entry key="hibernate.hbm2ddl.auto" value="create">
update - update the existing schema
<entry key="hibernate.hbm2ddl.auto" value="update">
validate - validate the existing schema
<entry key="hibernate.hbm2ddl.auto" value="validate">
create-drop - create the schema when the SessionFactory is created and drop it when the SessionFactory is closed
<entry key="hibernate.hbm2ddl.auto" value="create-drop">
If you don't want to use Strings in your app and are looking for predefined constants have a look at org.hibernate.cfg.AvailableSettings class included in the Hibernate JAR, where you'll find a constant for all possible settings. In your case for example:
/**
 * Auto export/update schema using hbm2ddl tool. Valid values are <tt>update</tt>,
 * <tt>create</tt>, <tt>create-drop</tt> and <tt>validate</tt>.
 */
String HBM2DDL_AUTO = "hibernate.hbm2ddl.auto";
validate: validates the schema; no change happens to the database.
update: updates the schema to match the current entity mappings.
create: creates a new schema every time, destroying any previous data.
create-drop: drops the schema when the application is stopped or the SessionFactory is closed explicitly.
I think you should concentrate on the SchemaExport class. This class makes your configuration dynamic, allowing you to choose whatever suits you best. Check out SchemaExport.
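A minimal sketch of driving SchemaExport programmatically, assuming a Hibernate 5.2-style bootstrap; the entity class is hypothetical:

import java.util.EnumSet;
import org.hibernate.boot.Metadata;
import org.hibernate.boot.MetadataSources;
import org.hibernate.boot.registry.StandardServiceRegistry;
import org.hibernate.boot.registry.StandardServiceRegistryBuilder;
import org.hibernate.tool.hbm2ddl.SchemaExport;
import org.hibernate.tool.schema.TargetType;

public class SchemaExportDemo {
    public static void main(String[] args) {
        // configure() reads connection settings from hibernate.cfg.xml
        StandardServiceRegistry registry = new StandardServiceRegistryBuilder()
                .configure()
                .build();
        Metadata metadata = new MetadataSources(registry)
                .addAnnotatedClass(MyEntity.class) // hypothetical entity
                .buildMetadata();
        // Exports the generated CREATE statements directly to the database
        new SchemaExport().create(EnumSet.of(TargetType.DATABASE), metadata);
    }
}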
validate: It validates the schema and makes no changes to the DB.
Assume you have added a new column in the mapping file and perform an insert operation; it will throw an exception "missing the XYZ column" because the existing schema is different from the object you are going to insert. If you alter the table by adding that new column manually and then perform the insert operation, it will insert all columns, including the new one, into the table.
That is, it doesn't make any changes to or alter the existing schema/table.
update: It alters the existing table in the database when you perform an operation.
You can add columns with this option of hbm2ddl (it does not drop existing columns).
But if you are going to add a new column that is NOT NULL, it will ignore adding that particular column to the DB, because the table must be empty if you want to add a NOT NULL column to an existing table.
Since 5.0, you can find those values in a dedicated enum: org.hibernate.boot.SchemaAutoTooling (enhanced with the value NONE since 5.2).
Or even better, since 5.1, you can also use the org.hibernate.tool.schema.Action enum, which combines JPA 2 and "legacy" Hibernate DDL actions.
But you cannot yet configure a DataSource programmatically with this. It would be nicer to use it combined with org.hibernate.cfg.AvailableSettings#HBM2DDL_AUTO, but the current code expects a String value (excerpt taken from SessionFactoryBuilderImpl):
this.schemaAutoTooling = SchemaAutoTooling.interpret( (String) configurationSettings.get( AvailableSettings.HBM2DDL_AUTO ) );
… and the internal enum values of both org.hibernate.boot.SchemaAutoTooling and org.hibernate.tool.schema.Action aren't exposed publicly.
Hereunder is a sample programmatic DataSource configuration (used in one of my Spring Boot applications) which uses a workaround via .name().toLowerCase(), but it only works with values without a dash (not create-drop, for instance):
@Bean(name = ENTITY_MANAGER_NAME)
public LocalContainerEntityManagerFactoryBean internalEntityManagerFactory(
        EntityManagerFactoryBuilder builder,
        @Qualifier(DATA_SOURCE_NAME) DataSource internalDataSource) {
    Map<String, Object> properties = new HashMap<>();
    properties.put(AvailableSettings.HBM2DDL_AUTO, SchemaAutoTooling.CREATE.name().toLowerCase());
    properties.put(AvailableSettings.DIALECT, H2Dialect.class.getName());
    return builder
            .dataSource(internalDataSource)
            .packages(JpaModelsScanEntry.class, Jsr310JpaConverters.class)
            .persistenceUnit(PERSISTENCE_UNIT_NAME)
            .properties(properties)
            .build();
}
To whomever is searching for the default value...
It is written in the Spring Boot source code (version 2.0.5; JpaProperties has existed since 1.1.0):
/**
 * DDL mode. This is actually a shortcut for the "hibernate.hbm2ddl.auto"
 * property. Defaults to "create-drop" when using an embedded database and no
 * schema manager was detected. Otherwise, defaults to "none".
 */
private String ddlAuto;
With all of the above said...
Notice this property is called ddl.auto and should only control DDL operations (creating/dropping schemas and tables), yet I surprisingly found that it has to do with DML too: only update will allow inserting data, which is a DML operation.
I got caught by this when trying to populate data into an in-memory database; only update works.
