How do I get QueryFields from custom class when defining cache? - java

I'm trying to get Ignite cache data through JDBC. For that purpose I defined a new custom class and annotated its fields like this:
public class MyClass implements Serializable {
    @QuerySqlField(index = true)
    public Integer id;

    @QuerySqlField(index = true)
    public String records_offset;

    @QuerySqlField(index = true)
    public Integer session_id;

    ...
}
Then I start Ignite this way:
CacheConfiguration<Integer, MyClass> conf = new CacheConfiguration<>();
conf.setBackups(1);
conf.setName("test");

QueryEntity queryEntity = new QueryEntity();
queryEntity.setKeyType(Integer.class.getName());
queryEntity.setValueType(MyClass.class.getName());
queryEntity.setTableName("MyClass");
conf.setQueryEntities(Arrays.asList(queryEntity));

IgniteConfiguration iconf = new IgniteConfiguration();
iconf.setCacheConfiguration(conf);
iconf.setPeerClassLoadingEnabled(true);

this.ignite = Ignition.start(iconf);
this.cache = ignite.getOrCreateCache("test");
Now, when I try to fetch the data over JDBC, I get an error:
Error: class org.apache.ignite.binary.BinaryObjectException: Custom objects are not supported (state=50000,code=0)
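For reference, the JDBC access would look roughly like this sketch using the Ignite thin driver (the host and SQL here are illustrative, not from the question):
Class.forName("org.apache.ignite.IgniteJdbcThinDriver");
try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1/");
     Statement stmt = conn.createStatement();
     // Table name matches the QueryEntity's setTableName above
     ResultSet rs = stmt.executeQuery("SELECT id, session_id FROM MyClass")) {
    while (rs.next()) {
        System.out.println(rs.getInt("id"));
    }
}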
I could define the set of fields explicitly to make the data reachable over JDBC:
LinkedHashMap<String, String> fields = new LinkedHashMap<>();
fields.put("session_id", Integer.class.getName());
fields.put("records_offset", String.class.getName());
queryEntity.setFields(fields);
But why do I need to do this if I've already annotated the fields in the class definition?

You have three options to define the SQL schema:
[1] Annotations plus CacheConfiguration.setIndexedTypes:
https://apacheignite.readme.io/docs/cache-queries#section-query-configuration-by-annotations
[2] A QueryEntity configuration:
https://apacheignite.readme.io/docs/cache-queries#section-query-configuration-using-queryentity
[3] Pure SQL (CREATE TABLE):
https://apacheignite-sql.readme.io/docs/create-table
In your case you mixed [1] and [2]: you registered the key and value types for indexing through the QueryEntity, but defined the fields with annotations. These two mechanisms don't combine; a QueryEntity only exposes the fields you list on it explicitly, so the annotated fields are ignored. You need to stick to one specific way, as you already did by registering the key and value types for indexing with the CacheConfiguration.setIndexedTypes method, so you can get rid of the QueryEntity now.
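For completeness, here is a minimal sketch of option [1], assuming the @QuerySqlField annotations from the question; setIndexedTypes replaces the QueryEntity entirely:
CacheConfiguration<Integer, MyClass> conf = new CacheConfiguration<>("test");
conf.setBackups(1);
// Scans MyClass for @QuerySqlField annotations and builds the SQL schema from them
conf.setIndexedTypes(Integer.class, MyClass.class);

IgniteConfiguration iconf = new IgniteConfiguration();
iconf.setCacheConfiguration(conf);
iconf.setPeerClassLoadingEnabled(true);

Ignite ignite = Ignition.start(iconf);
IgniteCache<Integer, MyClass> cache = ignite.getOrCreateCache("test");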

Related

Spring Data Elasticsearch: Convert a String to an Object (and vice versa) using ValueConverter and dot-notation

I have kind of a combination/follow-up question to [1] and [2].
I have a POJO with a field I want to persist in - and read from - Elasticsearch:
@Document
public class MyPojo {
    private String level3;
    // getters/setters...
}
For convenience, and because the property is also persisted (flatly) into Postgres, the property level3 should be a String; however, it should be written into ES as a nested object (because the ES index is defined elsewhere).
The current solution is unsatisfactory:
@Document
@Entity
public class MyPojo {
    @Column(name = "level3")
    @Field(name = "level3", type = FieldType.Keyword)
    @ValueConverter(MyConverter.class)
    private String level3;
    // getters/setters...
}
with the object path "level1.level2.level3" hardcoded inside MyConverter, which converts from Map<String, Object> to String (read) and from String to Map<String, Object> (write). Because we potentially need to do this for multiple fields, this is not really a viable solution.
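For illustration, the hardcoded converter described above might look roughly like the following sketch. It assumes the PropertyValueConverter contract from Spring Data Elasticsearch 4.x; the class name MyConverter comes from the question, but the body is guessed:
public class MyConverter implements PropertyValueConverter {

    // write: String -> nested Map, so ES stores {"level1": {"level2": {"level3": value}}}
    @Override
    public Object write(Object value) {
        return Map.of("level1", Map.of("level2", Map.of("level3", value)));
    }

    // read: nested Map -> String, walking the hardcoded path back down
    @Override
    @SuppressWarnings("unchecked")
    public Object read(Object value) {
        Map<String, Object> root = (Map<String, Object>) value;
        Map<String, Object> level1 = (Map<String, Object>) root.get("level1");
        Map<String, Object> level2 = (Map<String, Object>) level1.get("level2");
        return level2.get("level3");
    }
}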
I'd rather do something like this:
@Document
@Entity
public class MyPojo {
    @Column(name = "level3")
    @Field(name = "level1.level2.level3", type = FieldType.Keyword)
    @ValueConverter(MyConverter2.class)
    private String level3;
    // getters/setters...
}
which does not work (writing works fine, but on reading we get the "null is not a map" error from [2]).
Is this at all possible (if I understood [2] correctly, no)? If not, is there another way to achieve what I want without hardcoding the path and an extra converter per field?
Can I somehow access the @Field annotation (e.g. its name) within MyConverter, or can I somehow supply additional arguments to MyConverter?
[1] Spring data elasticsearch embedded field mapping
[2] Spring Elasticsearch: Adding fields to working POJO class causes IllegalArgumentException: null is not a Map

Deserialize jsonb entity field with Apache DBUtils

I was provided with access to a Postgres table containing a JSONB column amongst many "standard" others, and I'm creating an entity class for it without resorting to any ORM framework such as spring-data or hibernate. To represent that column I created the necessary POJOs, e.g.
@Entity
public class MyClass {
    @Id
    @Column(nullable = false)
    private Long id;

    @Convert(converter = SomeConverter.class)
    @Column(columnDefinition = "jsonb")
    private Data data; // My custom POJO

    // ... many other fields
}
Then I created a simple unit test in which I use Apache DBUtils to convert the query result set to a MyClass instance:
PGSimpleDataSource ds = //...
ResultSetHandler<List<MyClass>> h = new BeanListHandler<>(
MyClass.class, new BasicRowProcessor(
new GenerousBeanProcessor()));
QueryRunner run = new QueryRunner(ds);
run.query("select * from mytable", h);
which results in the following error: Cannot set data: incompatible types, cannot convert org.postgresql.util.PGobject, suggesting that it is not able to handle the conversion between the JSONB column and my Data POJO. Is there any good way to tackle this problem? I was able to get around it in a "not so good way" by implementing my own BeanHandler:
public class MyClassHandler extends BeanHandler<MyClass> {

    public MyClassHandler(Class<? extends MyClass> type, RowProcessor convert) {
        super(type, convert);
    }

    @Override
    public MyClass handle(ResultSet rs) throws SQLException {
        MyClass myclass = new MyClass();
        myclass.setId(rs.getLong("id"));
        // ...
        try {
            myclass.setData(new ObjectMapper()
                    .readValue(rs.getObject("data").toString(), Data.class));
        } catch (IOException e) {
            throw new SQLException(e);
        }
        return myclass;
    }
}
which works, but DBUtils becomes useless since I end up doing all the work myself. That wouldn't be the case if, for example, I could invoke super.handle(rs) and then handle only the data field as I did there, but I found no way to do it.
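One possible middle ground, sketched under the assumption that only the jsonb column needs special treatment: keep BeanListHandler, but plug in a custom BeanProcessor that converts PGobject columns, so all other columns are still mapped automatically. JsonbBeanProcessor is a hypothetical name:
import java.io.IOException;
import java.sql.ResultSet;
import java.sql.SQLException;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.commons.dbutils.GenerousBeanProcessor;
import org.postgresql.util.PGobject;

public class JsonbBeanProcessor extends GenerousBeanProcessor {
    private final ObjectMapper mapper = new ObjectMapper();

    @Override
    protected Object processColumn(ResultSet rs, int index, Class<?> propType)
            throws SQLException {
        Object value = rs.getObject(index);
        // Only intercept jsonb columns targeted at the Data POJO
        if (value instanceof PGobject && propType == Data.class) {
            try {
                return mapper.readValue(((PGobject) value).getValue(), Data.class);
            } catch (IOException e) {
                throw new SQLException("Cannot deserialize jsonb column", e);
            }
        }
        return super.processColumn(rs, index, propType);
    }
}
The handler setup then stays as before: new BeanListHandler<>(MyClass.class, new BasicRowProcessor(new JsonbBeanProcessor())).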

How to save and query dynamic fields in Spring Data MongoDB?

I'm on the Spring Boot 1.4.x branch with Spring Data MongoDB.
I want to extend a POJO from HashMap to give it the ability to save new properties dynamically.
I know I could create a Map<String, Object> properties field in the Entry class to hold my dynamic values, but I don't want an inner structure. My goal is to have all fields at the root of the Entry class, so it serializes like this:
{
"id":"12334234234",
"dynamicField1": "dynamicValue1",
"dynamicField2": "dynamicValue2"
}
So I created this Entry class:
@Document
public class Entry extends HashMap<String, Object> {

    @Id
    private String id;

    public String getId() {
        return id;
    }

    public void setId(String id) {
        this.id = id;
    }
}
And the repository like this:
public interface EntryRepository extends MongoRepository<Entry, String> {
}
When I launch my app I have this error:
Error creating bean with name 'entryRepository': Invocation of init method failed; nested exception is org.springframework.data.mapping.model.MappingException: Could not lookup mapping metadata for domain class java.util.HashMap!
Any idea?
TL;DR
Do not use Java collection/map types as a base class for your entities.
Repositories are not the right tool for your requirement.
Use DBObject with MongoTemplate if you need dynamic top-level properties.
Explanation
Spring Data Repositories are repositories in the DDD sense acting as persistence gateway for your well-defined aggregates. They inspect domain classes to derive the appropriate queries. Spring Data excludes collection and map types from entity analysis, and that's why extending your entity from a Map fails.
Repository query methods for dynamic properties are possible, but that's not their primary use case. You would have to use a SpEL query to express it:
public interface EntryRepository extends MongoRepository<Entry, String> {

    @Query("{ ?0 : ?1 }")
    Entry findByDynamicField(String field, Object value);
}
This method does not give you any type safety regarding the predicate value, and is only an ugly alias for a proper, individual query.
Rather, use DBObject with MongoTemplate and its query methods directly:
List<DBObject> result = template.find(new Query(Criteria.where("your_dynamic_field")
.is(theQueryValue)), DBObject.class);
DBObject is a Map that gives you full access to properties without enforcing a pre-defined structure. You can create, read, update and delete DBObject instances via the Template API.
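For example, saving a document with dynamic top-level fields through the template could look like this sketch (the collection name "entries" is illustrative):
DBObject entry = new BasicDBObject();
entry.put("dynamicField1", "dynamicValue1");
entry.put("dynamicField2", "dynamicValue2");
template.save(entry, "entries");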
One last thing
You can declare dynamic properties on a nested level using a Map, if your aggregate root declares some static properties:
@Document
public class Data {

    @Id
    private String id;

    private Map<String, Object> details;
}
Here is how we can achieve it using JSONObject.
The entity will look like this:
@Document
public class Data {

    @Id
    private String id;

    private JSONObject details;

    // getters and setters
}
The POJO will look like this:
public class DataDTO {

    private String id;

    private JSONObject details;

    // getters and setters
}
In the service:
Data formData = new Data();
JSONObject details = dataDTO.getDetails();
details.put("dynamicField1", "dynamicValue1");
details.put("dynamicField2", "dynamicValue2");
formData.setDetails(details);
mongoTemplate.save(formData);
I have done this as per my own business case; refer to this code and adapt it to yours. I hope this helps.

How to interact with an Elasticsearch alias using Spring Data

Hi, I am using Spring Data Elasticsearch. The domain structure of my project keeps changing, so I have to drop the index every time in order to change the mapping. To overcome this problem, I am using aliases.
I created an alias using:
elasticsearchTemplate.createIndex(Test.class);
elasticsearchTemplate.putMapping(Test.class);
String aliasName = "test-alias";
AliasQuery aliasQuery = new AliasBuilder()
.withIndexName("test")
.withAliasName(aliasName).build();
elasticsearchTemplate.addAlias(aliasQuery);
I have a test class:
import org.springframework.data.annotation.Id
import org.springframework.data.elasticsearch.annotations.Document
import org.springframework.data.elasticsearch.annotations.Field
import org.springframework.data.elasticsearch.annotations.FieldIndex
import org.springframework.data.elasticsearch.annotations.FieldType
import org.springframework.data.elasticsearch.annotations.Setting
@Document(indexName = "test", type = "test")
@Setting(settingPath = 'elasticSearchSettings/analyzer.json')
class Test extends BaseEntity {

    @Id
    @Field(type = FieldType.String, index = FieldIndex.not_analyzed)
    String id

    @Field(type = FieldType.String, index = FieldIndex.analyzed, indexAnalyzer = "generic_analyzer", searchAnalyzer = "generic_analyzer")
    String firstName
}
TestRepository Class:
package com.as.core.repositories
import com.as.core.entities.Test
import org.springframework.data.elasticsearch.repository.ElasticsearchRepository
interface TestRepository extends ElasticsearchRepository<Test, String>
{
}
My question is: how can I read from the alias instead of the index itself?
Do write operations also take place on the alias?
I have looked at the following link:
https://www.elastic.co/guide/en/elasticsearch/guide/current/index-aliases.html#index-aliases
It says that we have to interact with the alias instead of the actual index. How can this be achieved using the Spring Data Elasticsearch Java API?
I have worked around this limitation by using the ElasticsearchTemplate in the repository class associated with the object (although it would be much nicer if there were a way to specify an alias name on the entity itself).
The way it works is to create a custom repository interface. In your case it would be TestRepositoryCustom:
public interface TestRepositoryCustom {
    Page<Test> findByCustom(Pageable pageable, ...);
}
Then implement this interface appending 'Impl' to the end of the base repository name:
public class TestRepositoryImpl implements TestRepositoryCustom {

    public Page<Test> findByCustom(Pageable pageable, ...) {
        BoolQueryBuilder boolQuery = new BoolQueryBuilder();
        FilterBuilder filter = FilterBuilders.staticMethodsToBuildFilters;
        /*
         * Your code here to set up your query
         */
        NativeSearchQueryBuilder builder = new NativeSearchQueryBuilder()
                .withQuery(boolQuery)
                .withFilter(filter)
                .withPageable(pageable);

        // These two are the crucial elements that allow the search to look up
        // based on the alias
        builder.withIndices("test-alias");
        builder.withTypes("test");

        // Execute the query
        SearchQuery searchQuery = builder.build();
        return elasticSearchTemplate.queryForPage(searchQuery, Test.class);
    }
}
Finally, in your base repository interface, TestRepository, extend the TestRepositoryCustom interface to get access to any methods of your custom interface from your repository bean.
public interface TestRepository extends ElasticsearchRepository<Test, String>, TestRepositoryCustom {
}
What I would really like to see is an annotation on the entity like:
@Document(aliasName = "test-alias")
This would just work in the background to provide searching on this alias out of the gate, so that all the repository queries would just work regardless of the index name.
Spring Data Elasticsearch supports finding documents in an alias. We were able to make version 3.2.6.RELEASE read documents annotated with
@Document(
    indexName = "alias",
    createIndex = false,
    type = "_doc"
)
from a Spring Data ElasticsearchRepository backed by an ElasticsearchRestTemplate.
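Putting the two pieces together, a minimal sketch of this approach might look like the following (the annotation values are illustrative):
@Document(indexName = "test-alias", createIndex = false, type = "_doc")
public class Test {

    @Id
    private String id;

    // ...
}

// A plain repository then reads and writes through the alias:
public interface TestRepository extends ElasticsearchRepository<Test, String> {
}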

Modify map table name in Hibernate

Here's my class:
@Entity(name = "Client")
public abstract class MyClient {
    private Map<String, String> _properties;
}
Hibernate maps my properties map to a table named "MyClient_properties".
How can I modify it so it will be mapped to "Client_properties"?
Thanks
Interestingly, I thought that was supposed to be the default. Pretty sure the default naming feature is supposed to take the @Entity#name value rather than the class name if it is supplied.
Anyway, to explicitly name the collection table you'd use (oddly enough) the JPA @CollectionTable annotation:
@CollectionTable(name = "Client_properties")
private Map<String, String> _properties;
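A fuller sketch, assuming plain JPA 2.x: a map of basic values also needs @ElementCollection, and the key and value columns can be named alongside the table (the column names here are illustrative):
@Entity(name = "Client")
public class MyClient {

    @Id
    private Long id;

    @ElementCollection
    @CollectionTable(name = "Client_properties",
                     joinColumns = @JoinColumn(name = "client_id"))
    @MapKeyColumn(name = "property_name")
    @Column(name = "property_value")
    private Map<String, String> _properties;
}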
