MarkLogic POJO Data Binding Interface: JSONMappingException when performing a POJO search - java

I'm working with the MarkLogic POJO Databinding Interface at the moment. I'm able to write POJOs to MarkLogic. Now I want to search those POJOs and retrieve the search results. I'm following the instructions from: https://docs.marklogic.com/guide/java/binding#id_89573 However, the search results don't seem to return the correct objects. I'm getting a JSONMappingException. Here's the code:
HashMap<String, MatchedPropertyInfo> matchedProperties = new HashMap<String, MatchedPropertyInfo>();
PropertyMatches PM = new PropertyMatches(123,"uri/prefix/location2", "uri/prefix", 1234,0,"/aKey","/aLocation",true,matchedProperties);
MatchedPropertyInfo MPI1 = new MatchedPropertyInfo("matched/property/uri1", "matched/property/key1", "matched/property/location1", true,"ValueMatch1", 12, 1*1.0/3, true);
MatchedPropertyInfo MPI2 = new MatchedPropertyInfo("matched/property/uri2", "matched/property/key2", "matched/property/location2", true,"ValueMatch2", 14, 1.0/2.0, true);
PM.getMatchedProperties().put("matched/property/prefix/location1", MPI1);
PM.getMatchedProperties().put("matched/property/prefix/location2", MPI2);
PojoRepository<PropertyMatches, Long> myClassRepo = client.newPojoRepository(PropertyMatches.class, Long.class);
myClassRepo.write(PM);
PojoQueryBuilder<PropertyMatches> qb = myClassRepo.getQueryBuilder();
PojoPage<PropertyMatches> matches = myClassRepo.search(qb.value("uri", "uri/prefix/location2"), 1);
if (matches.hasContent()) {
    while (matches.hasNext()) {
        PropertyMatches aPM = matches.next();
        System.out.println(" " + aPM.getURI());
    }
} else {
    System.out.println(" No matches");
}
The PropertyMatches (PM) object is successfully written to the MarkLogic database. This class contains a member: private String URI which is initialized with "uri/prefix/location2". The matches.hasContent() returns true in the example above. However, I'm getting an error on PropertyMatches aPM = matches.next();

Searching POJOs in MarkLogic and reading them back into your Java program requires the POJOs to have a no-argument constructor. In this case PropertyMatches should have public PropertyMatches() {} and MatchedPropertyInfo should have public MatchedPropertyInfo() {}
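A minimal sketch of what that looks like (fields reduced to the URI member mentioned above; the real class keeps its full constructor as well):

```java
public class PropertyMatches {
    private String URI;

    // No-argument constructor required by the POJO binding (Jackson)
    // when reading search results back into Java objects
    public PropertyMatches() {}

    public PropertyMatches(String URI) {
        this.URI = URI;
    }

    public String getURI() {
        return URI;
    }
}
```

Without the no-argument constructor, Jackson cannot instantiate the class during deserialization, which produces the exception seen above.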

Thanks @sjoerd999 for posting the answer you found. Just to add some documentation references, this topic is discussed here: http://docs.marklogic.com/guide/java/binding#id_54408 and here: https://docs.marklogic.com/javadoc/client/com/marklogic/client/pojo/PojoRepository.html.
Also worth noting is that you can have multiple parameters in the constructor, you just have to do it the Jackson way. Here are examples of two ways (with annotations and without): https://manosnikolaidis.wordpress.com/2015/08/25/jackson-without-annotations/
I'd recommend using annotations as that's built-in with Jackson. But if you want to do it without annotations, here's the code:
import static com.fasterxml.jackson.annotation.JsonAutoDetect.Visibility.ANY;
import static com.fasterxml.jackson.annotation.PropertyAccessor.FIELD;

import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.module.paramnames.ParameterNamesModule;

ObjectMapper mapper = new ObjectMapper();
// Avoid having to annotate the Person class.
// Requires Java 8, pass -parameters to javac,
// and jackson-module-parameter-names as a dependency.
mapper.registerModule(new ParameterNamesModule());
// Make private fields of Person visible to Jackson.
mapper.setVisibility(FIELD, ANY);
If you want to do this with PojoRepository you'll have to use the unsupported getObjectMapper method to get the ObjectMapper and call registerModule and setVisibility on that:
ObjectMapper objectMapper = ((PojoRepositoryImpl) myClassRepo).getObjectMapper();

Related

Does spring-data-mongodb support Atlas Search? Need an example of it

I am trying to find out how to use Mongo Atlas Search indexes from a Java application that uses spring-data-mongodb to query the data. Can anyone share an example for it?
What I found was code like the below, but that is used for MongoDB text search. Though it is working, I'm not sure whether it is using the Atlas Search defined index.
TextQuery textQuery = TextQuery.queryText(new TextCriteria().matchingAny(text)).sortByScore();
textQuery.fields().include("cast").include("title").include("id");
List<Movies> movies = mongoOperations.find(textQuery, Movies.class);
I want sample Java code using spring-data-mongodb for the below query:
[
  {
    $search: {
      index: 'cast-fullplot',
      text: {
        query: 'sandeep',
        path: {
          'wildcard': '*'
        }
      }
    }
  }
]
It would be helpful if anyone could explain how MongoDB text search is different from Mongo Atlas Search, and the correct way of using Atlas Search with spring-data-mongodb.
How to code the below with spring-data-mongodb:
Arrays.asList(new Document("$search",
new Document("index", "cast-fullplot")
.append("text",
new Document("query", "sandeep")
.append("path",
new Document("wildcard", "*")))),
new Document())
Yes, spring-data-mongodb supports the aggregation pipeline, which you'll use to execute your query.
You need to define a document list, with the steps defined in your query, in the correct order. Atlas Search must be the first step in the pipeline, as it stands. You can translate your query to the aggregation pipeline using the Mongo Atlas interface, they have an option to export the pipeline array in the language of your choosing. Then, you just need to execute the query and map the list of responses to your entity class.
You can see an example below:
public class SearchRepositoryImpl implements SearchRepositoryCustom {

    private final MongoClient mongoClient;

    public SearchRepositoryImpl(MongoClient mongoClient) {
        this.mongoClient = mongoClient;
    }

    @Override
    public List<SearchEntity> searchByFilter(String text) {
        // You can add codec configuration to your database object. This might be
        // needed to map your object to the MongoDB data.
        MongoDatabase database = mongoClient.getDatabase("aggregation");
        MongoCollection<Document> collection = database.getCollection("restaurants");
        List<Document> pipeline = List.of(new Document("$search", new Document("index", "default2")
                .append("text", new Document("query", "Many people").append("path", new Document("wildcard", "*")))));
        List<SearchEntity> searchEntityList = new ArrayList<>();
        collection.aggregate(pipeline, SearchEntity.class).forEach(searchEntityList::add);
        return searchEntityList;
    }
}
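If you would rather go through Spring Data's MongoTemplate than a bare MongoClient, the same raw pipeline can be run against the template's underlying collection. A sketch, where the collection name "movies" and the use of plain Document results are assumptions:

```java
import java.util.ArrayList;
import java.util.List;
import org.bson.Document;
import org.springframework.data.mongodb.core.MongoTemplate;

public class AtlasSearchTemplateExample {

    // The $search stage as exported from the Atlas interface for the question's query
    public static List<Document> buildPipeline(String query) {
        return List.of(new Document("$search",
                new Document("index", "cast-fullplot")
                        .append("text", new Document("query", query)
                                .append("path", new Document("wildcard", "*")))));
    }

    public static List<Document> search(MongoTemplate mongoTemplate, String query) {
        List<Document> results = new ArrayList<>();
        // getCollection exposes the driver's MongoCollection<Document>
        mongoTemplate.getCollection("movies").aggregate(buildPipeline(query)).forEach(results::add);
        return results;
    }
}
```

Mapping each Document onto the Movies entity is then up to you (e.g. via a converter), since $search stages bypass Spring Data's typed Aggregation API.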

Deserialize and persist with @ManyToMany

I have a Spring Boot app which is using MySQL db.
At the moment I'm trying to do the following:
- deserialize instances from the *.csv files;
- inject them into the MySQL db.
For the simple instances there are no issues. But if I have an object with @ManyToMany or @OneToMany relations, deserialization does not work correctly. Currently I'm using the Jackson dependency for *.csv deserialization:
CsvMapper csvMapper = new CsvMapper();
csvMapper.disable(MapperFeature.SORT_PROPERTIES_ALPHABETICALLY);
CsvSchema csvSchema = csvMapper.schemaFor(type).withHeader().withColumnSeparator(';').withLineSeparator("\n");
MappingIterator<Object> mappingIterator = csvMapper.readerFor(type).with(csvSchema).readValues(csv);
List<Object> objects = new ArrayList<>();
while (mappingIterator.hasNext()) {
    objects.add(mappingIterator.next());
}
Example of an entity with a to-many relation (the idea is that one app can have different versions):
public class Application {
    private Long id;
    private String name;
    @OneToMany(mappedBy = "application")
    private Set<Version> versions = new HashSet<>();
}
For insertion into the DB I'm using Spring Boot entities that are @Autowired.
My first question is: what should I put into the CSV file so that it deserializes correctly? Because if I have:
id;name;
1;testName;
(skipping versions), I'm having trouble. The same happens even if I try to put some values into the versions column. So I don't know how to provide the input correctly for Jackson CSV deserialization in the case of a Set. And later, how can I persist this entity? Should I first put all the versions into the DB and then the applications?
Any thoughts? Thanks in advance!
Use Apache Commons CSV to parse the csv.
final byte[] sourceCsv;
String csvString = new String(sourceCsv);
CSVFormat csvFormat = CSVFormat.DEFAULT;
List<CSVRecord> csvRecords = csvFormat.parse(new StringReader(csvString)).getRecords();
It will help you to deserialize and to store in the database.
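A self-contained sketch with Commons CSV, using the semicolon-separated header and row from the question (the builder API assumes commons-csv 1.9+):

```java
import java.io.StringReader;
import java.util.List;
import org.apache.commons.csv.CSVFormat;
import org.apache.commons.csv.CSVRecord;

public class CsvParseExample {

    public static List<CSVRecord> parse(String csvString) throws Exception {
        CSVFormat csvFormat = CSVFormat.DEFAULT.builder()
                .setDelimiter(';')
                .setHeader()              // treat the first record as the header
                .setSkipHeaderRecord(true)
                .build();
        return csvFormat.parse(new StringReader(csvString)).getRecords();
    }

    public static void main(String[] args) throws Exception {
        for (CSVRecord record : parse("id;name\n1;testName\n")) {
            // Columns are accessible by header name; map them onto your entity
            System.out.println(record.get("id") + " -> " + record.get("name"));
        }
    }
}
```

From the parsed records you can build the entities yourself, which sidesteps Jackson's trouble with the nested Set: insert the Version rows first, then the Application rows that reference them.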

Generate JSON Schema manually using Java code + GSON without any POJO

I want to create a JSON Schema manually using GSON, but I don't find any JsonSchema element support in GSON. I don't want to convert a POJO to a schema but want to create the schema programmatically. Is there any way in GSON? Maybe something like the following.
JsonSchema schema = new JsonSchema();
schema.Type = JsonSchemaType.Object;
schema.Properties = new Dictionary<string, JsonSchema>
{
    { "name", new JsonSchema { Type = JsonSchemaType.String } },
    {
        "hobbies", new JsonSchema
        {
            Type = JsonSchemaType.Array,
            Items = new List<JsonSchema> { new JsonSchema { Type = JsonSchemaType.String } }
        }
    },
};
You may consider using everit-org/json-schema for programmatically creating JSON Schemas. Although it is not properly documented, its builder classes form a fluent API which lets you do it. Example:
Schema schema = ObjectSchema.builder()
        .addPropertySchema("name", StringSchema.builder().build())
        .addPropertySchema("hobbies", ArraySchema.builder()
                .allItemSchema(StringSchema.builder().build())
                .build())
        .build();
It is a slightly different syntax from what you described, but it serves the same purpose.
(disclaimer: I'm the author of everit-org/json-schema)
I tried to build a schema as suggested above, see Everit schema builder includes unset properties as null
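Since the question asks specifically about GSON: GSON has no schema classes, but a JSON Schema is itself just a JSON document, so it can be assembled with plain JsonObject calls. A sketch of the same name/hobbies schema:

```java
import com.google.gson.JsonObject;

public class GsonSchemaExample {

    public static JsonObject buildSchema() {
        JsonObject nameSchema = new JsonObject();
        nameSchema.addProperty("type", "string");

        JsonObject itemSchema = new JsonObject();
        itemSchema.addProperty("type", "string");

        JsonObject hobbiesSchema = new JsonObject();
        hobbiesSchema.addProperty("type", "array");
        hobbiesSchema.add("items", itemSchema);

        JsonObject properties = new JsonObject();
        properties.add("name", nameSchema);
        properties.add("hobbies", hobbiesSchema);

        JsonObject schema = new JsonObject();
        schema.addProperty("type", "object");
        schema.add("properties", properties);
        return schema;
    }

    public static void main(String[] args) {
        System.out.println(buildSchema());
    }
}
```

The trade-off is that GSON gives you no validation: everit-org/json-schema checks documents against the schema, while this approach only builds the schema text.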

remove square brackets from domino access services

I want to access Domino data via the Domino Access Services (DAS) as a REST provider in Java, e.g.
String url = "http://malin1/fakenames.nsf/api/data/collections/name/groups";
ObjectMapper mapper = new ObjectMapper();
JsonFactory factory = new JsonFactory();
JsonParser parser = factory.createParser(new URL(url));
JsonNode rootNode = mapper.readTree(parser);
however, I notice DAS wraps the JSON in square brackets:
[
  {
    "#entryid": "1-D68BB54DEA77AC8085256B700078923E",
    "#unid": "D68BB54DEA77AC8085256B700078923E",
    "#noteid": "1182",
    "#position": "1",
    "#read": true,
    "#siblings": 3,
    "#form": "Group",
    "name": "LocalDomainAdmins",
    "description": "This group should contain all Domino administrators in your domain. Most system databases and templates give people in this group Manager access."
  },
  {
    "#entryid": "3-9E6EABBF405A1A9985256B020060E64E",
    "#unid": "9E6EABBF405A1A9985256B020060E64E",
    "#noteid": "F46",
    "#position": "3",
    "#read": true,
    "#siblings": 3,
    "#form": "Group",
    "name": "OtherDomainServers",
    "description": "You should add all Domino servers in other domains with which you commonly replicate to this group."
  }
]
How can I easily get rid of these brackets?
As already mentioned, you should leave them intact. You can parse the JSON array, for example with Jackson.
Find an example snippet below:
import org.codehaus.jackson.JsonNode;
import org.codehaus.jackson.JsonProcessingException;
import org.codehaus.jackson.map.ObjectMapper;
...
String response = ... // your posted string
ObjectMapper mapper = new ObjectMapper();
try {
    JsonNode taskIdsjsonNode = mapper.readTree(response);
    for (JsonNode next : taskIdsjsonNode) {
        System.out.printf("%s: %s%n", "#entryid", next.get("#entryid"));
        System.out.printf("%s: %s%n", "name", next.get("name"));
    }
} catch (....) {
    // your exception handling goes here
}
output
#entryid: "1-D68BB54DEA77AC8085256B700078923E"
name: "LocalDomainAdmins"
#entryid: "3-9E6EABBF405A1A9985256B020060E64E"
name: "OtherDomainServers"
The brackets are not nasty but correct notation for a JSON array. To access the contents, just use [0] in your client-side script, or whichever JSON parser in Java you like.
Perhaps the explanation here can help:
https://quintessens.wordpress.com/2015/05/08/processing-json-data-from-domino-access-services-with-jackson/
Basically you establish a call to DAS via the Jersey client and then parse the JSON via the Jackson library into a map in Java.
During the parsing process you can define which values you want to parse and how to transform them.
Take a look at the Person class...
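With current (com.fasterxml) Jackson the same array can also be mapped straight into a list of maps via TypeReference instead of walking nodes; the sample JSON here is shortened from the response above:

```java
import java.util.List;
import java.util.Map;
import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;

public class DasArrayExample {

    public static List<Map<String, Object>> parse(String response) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        // The top-level [ ... ] simply means "array"; it maps directly onto a List
        return mapper.readValue(response, new TypeReference<List<Map<String, Object>>>() {});
    }

    public static void main(String[] args) throws Exception {
        String response = "[{\"name\":\"LocalDomainAdmins\"},{\"name\":\"OtherDomainServers\"}]";
        for (Map<String, Object> entry : parse(response)) {
            System.out.println(entry.get("name"));
        }
    }
}
```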

How to get a List of Indices from ElasticSearch using Jest

I'm trying to retrieve a list of indices using Jest, but I just got as far as:
Stats statistics = new Stats.Builder().build();
result = client.execute(statistics);
How can I retrieve the list of indices from the result? Do I have to use something other than Stats?
It would also help if someone could show me a detailed documentation of Jest. The basics are really well documented, but with the different kinds of builders I'm really lost at the moment.
Get Aliases will give you all the aliases for the indices on a node.
One can simply navigate a browser to the following URL to get the indexes available on an ElasticSearch cluster.
http://elasticsearch.company.com/_aliases
This will return an array of indexes and their aliases in JSON. Here's an example:
{
"compute-devzone1": { },
"compute-den2": { },
"compute-den1": { },
...
}
To get the list of indexes with Jest, use this code...
HttpClientConfig config = new HttpClientConfig.Builder("http://elasticsearch.company.com").build();

JestClientFactory factory = new JestClientFactory();
factory.setHttpClientConfig(config);
JestClient client = factory.getObject();

GetAliases aliases = new GetAliases.Builder().build();
JestResult result = client.execute(aliases);
String json = result.getJsonString();
Use your favorite JSON processor to extract the indexes from json.
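For example, with Jackson the index names are simply the top-level field names of the returned object (response shape as shown above):

```java
import java.util.ArrayList;
import java.util.List;
import com.fasterxml.jackson.databind.ObjectMapper;

public class IndexNamesExample {

    public static List<String> indexNames(String json) throws Exception {
        List<String> indexes = new ArrayList<>();
        // Each index appears as a top-level key of the _aliases response
        new ObjectMapper().readTree(json).fieldNames().forEachRemaining(indexes::add);
        return indexes;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(indexNames("{\"compute-devzone1\":{},\"compute-den2\":{},\"compute-den1\":{}}"));
    }
}
```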
Use:
JestResult result = elasticSearchClient.execute(new Cat.IndicesBuilder().build());
This will return a JSON response just like curl -XGET "localhost:9200/_cat/indices?format=json"
