How to convert sample JSON into JSON schema in Java

I want to convert a JSON document into a JSON schema. I googled it but did not find anything that matches my exact requirement.
Here is the JSON:
{
"empId":1001,
"firstName":"jonh",
"lastName":"Springer",
"title": "Engineer",
"address": {
"city": "Mumbai",
"street": "FadkeStreet",
"zipCode":"420125",
"privatePhoneNo":{
"privateMobile": "2564875421",
"privateLandLine":"251201546"
}
},
"salary": 150000,
"department":{
"departmentId": 10521,
"departmentName": "IT",
"companyPhoneNo":{
"cMobile": "8655340546",
"cLandLine": "10251215465"
},
"location":{
"name": "mulund",
"locationId": 14500
}
}
}
I want to generate a schema like this:
{
"$schema": "http://json-schema.org/draft-04/schema#",
"type": "object",
"title": "Employee",
"properties": {
"empId": {
"type": "integer"
},
"firstName":{
"type":"string"
},
"lastName": {
"type": "string"
},
"title": {
"type": "string"
},
"address": {
"type": "object",
"properties": {
"city": {
"type": "string"
},
"street": {
"type": "string"
},
"zipCode": {
"type": "string"
},
"privatePhoneNo": {
"type": "object",
"properties": {
"privateMobile": {
"type": "string"
},
"privateLandLine": {
"type": "string"
}
}
}
}
},
"salary": {
"type": "number"
},
"department": {
"type": "object",
"properties": {
"departmentId": {
"type": "integer"
},
"departmentName": {
"type": "string"
},
"companyPhoneNo": {
"type": "object",
"properties": {
"cMobile": {
"type": "string"
},
"cLandLine": {
"type": "string"
}
}
},
"location": {
"type": "object",
"properties": {
"name": {
"type": "string"
},
"locationId": {
"type": "integer"
}
}
}
}
}
}
}
Is there any library that does this, or is there another way?

https://github.com/perenecabuto/json_schema_generator
http://jsonschema.net/#/
I think these may help.

It's been a while since this was asked but I was having the same issue. So far the best solution I have come across is this library:
https://github.com/saasquatch/json-schema-inferrer
I found this from the json-schema doc itself. It has links to implementations for other languages as well:
https://json-schema.org/implementations.html#from-data
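For reference, here is a minimal Java sketch of how json-schema-inferrer can be used to infer a draft-04 schema from a sample document. The builder and inferForSample call follow the library's documented API; the hard-coded sample string is just an abbreviated version of the employee JSON above, so treat the details as a sketch rather than a drop-in solution.

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.saasquatch.jsonschemainferrer.JsonSchemaInferrer;
import com.saasquatch.jsonschemainferrer.SpecVersion;

public class SchemaFromSample {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        // An abbreviated version of the sample employee document from the question
        JsonNode sample = mapper.readTree(
                "{\"empId\":1001,\"firstName\":\"jonh\",\"salary\":150000}");
        // Build an inferrer that emits draft-04 schemas, matching the desired output above
        JsonSchemaInferrer inferrer = JsonSchemaInferrer.newBuilder()
                .setSpecVersion(SpecVersion.DRAFT_04)
                .build();
        // Infer the schema from the sample and pretty-print it
        JsonNode schema = inferrer.inferForSample(sample);
        System.out.println(
                mapper.writerWithDefaultPrettyPrinter().writeValueAsString(schema));
    }
}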

Related

Consumer reads data twice from avro schema

I have a streaming app that listens to some data and then transforms it by pushing it into a new topic. I use an Avro schema both to read and write my data to topics. The problem is when I consume the data from the final destination using the command below. My data is a little complex, with an array and some nested JSON inside it, and I suspect that my Avro schemas might not be correct for my purpose. There is no error or anything, and I can see all my data on my final topic, but the "Pets" field is duplicated for some reason and I can't understand why. In fact, I only add one new field (job_id) to my existing data in the Avro schema; I don't make big changes to it when I transform it.
./bin/kafka-console-consumer --topic my_topic \
--bootstrap-server localhost:9092 \
Here's the JSON data I have:
{
"Person":{
"id":"104440",
"Name":"William",
"LastName":"Dorsey",
"archived":false,
"Timezone":"America/Los_Angeles",
"brandCompanyName":"Twitter",
"brandID":"cf545a7b",
"creatorID":"1234",
"currency":"USD",
"dateCreated":"2020-09-07T02:56:22Z",
"dateModified":"2020-09-07T02:57:24Z",
"disabled":false,
"endDate":"2020-11-29T19:51:00-08:00",
"startDate":"2020-08-31T20:55:00-07:00",
"totalBudget":0
},
"Pets":[
{
"Name":"Pawny",
"Id":"4214",
"budget":"0",
"adoptionDate":"2020-09-07T02:56:22Z",
"year":"2",
"type":"Golden",
"gender":"male"
}
],
"CreationTime":"1604036638"
}
My Avro schema:
{
"name": "MyClass",
"type": "record",
"namespace": "com.acme.avro",
"fields": [
{
"name": "Person",
"type": {
"name": "Person",
"type": "record",
"fields": [
{
"name": "id",
"type": "string"
},
{
"name": "Name",
"type": "string"
},
{
"name": "LastName",
"type": "string"
},
{
"name": "archived",
"type": "boolean"
},
{
"name": "Timezone",
"type": "string"
},
{
"name": "brandCompanyName",
"type": "string"
},
{
"name": "brandID",
"type": "string"
},
{
"name": "creatorID",
"type": "string"
},
{
"name": "currency",
"type": "string"
},
{
"name": "dateCreated",
"type": "int",
"logicalType": "date"
},
{
"name": "dateModified",
"type": "int",
"logicalType": "date"
},
{
"name": "disabled",
"type": "boolean"
},
{
"name": "endDate",
"type": "int",
"logicalType": "date"
},
{
"name": "startDate",
"type": "int",
"logicalType": "date"
},
{
"name": "totalBudget",
"type": "int"
}
]
}
},
{
"name": "Pets",
"type": {
"type": "array",
"items": {
"name": "Pets_record",
"type": "record",
"fields": [
{
"name": "Name",
"type": "string"
},
{
"name": "Id",
"type": "string"
},
{
"name": "budget",
"type": "string"
},
{
"name": "adoptionDate",
"type": "int",
"logicalType": "date"
},
{
"name": "year",
"type": "string"
},
{
"name": "type",
"type": "string"
},
{
"name": "gender",
"type": "string"
}
]
}
}
},
{
"name": "CreationTime",
"type": "string"
},
{
"name":"jobID",
"type":"string"
}
]
}
Here is the output in my topic when I consume it. The "Pets" field is duplicated for some reason and I can't figure out why:
{
"id":"104440",
"Name":"William",
"LastName:"Dorsey",
"archived":false,
"Timezone":"America/Los_Angeles",
"brandCompanyName":"Twitter",
"brandID":"cf545a7b",
"creatorID":"1234",
"currency":"USD",
"dateCreated":"2020-09-07T02:56:22Z",
"dateModified":"2020-09-07T02:57:24Z",
"disabled":false,
"endDate":"2020-11-29T19:51:00-08:00",
"startDate":"2020-08-31T20:55:00-07:00",
"totalBudget":0,
"Pets":[
{
"Name":"Pawny",
"Id":"4214",
"budget":"0",
"adoptionDate":2020-09-07T02:56:22Z",
"year":"2",
"type":"Golden",
"gender":"male"
}
],
"CreationTime":1604036638,
"jobID":12512,
"pets":[
{
"Name":"Pawny",
"Id":"4214",
"budget":"0",
"adoptionDate":2020-09-07T02:56:22Z",
"year":"2",
"type":"Golden",
"gender":"male"
}
]
}
It's because I was using an uppercase name in my field names. After wandering in endless loops for 24 hours, I was finally able to figure this out, in case anyone runs into the same issue. Please read here and use lowercase names for your field names. When I changed my field name to "pet", the duplicates were gone.
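For illustration only (a sketch of the fix described above, not the poster's exact final schema; the poster used "pet", any lowercase name works per the answer), the change amounts to lowercasing the field and record names in the Avro schema, with the fields themselves unchanged:

{
  "name": "pets",
  "type": {
    "type": "array",
    "items": {
      "name": "pets_record",
      "type": "record",
      "fields": [
        { "name": "Name", "type": "string" },
        { "name": "Id", "type": "string" },
        { "name": "budget", "type": "string" },
        { "name": "adoptionDate", "type": "int", "logicalType": "date" },
        { "name": "year", "type": "string" },
        { "name": "type", "type": "string" },
        { "name": "gender", "type": "string" }
      ]
    }
  }
}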

The property of type String did not match the following type: object in schema JSON Schema validator

I'm trying to write a JSON schema for my JSON object and I'm not able to understand the error I'm getting.
I want my JSON object to be stored in Java in the following manner:
public class Category {
private Map<String, List<String>> categoryMapping;
}
Sample JSON:
{
"categoryMapping": {
"categoryA": ["a","b","c"],
"categoryB": ["x","y","z"],
"categoryC": ["x","y","z"]
}
}
However if I write the schema in the following way:
{
"$schema": "http://json-schema.org/draft-04/schema#",
"type": "object",
"title": "$id$",
"description": "list_of_values-1",
"required": [
"categoryMapping"
],
"properties": {
"categoryMapping": {
"$id": "#/properties/categoryMapping",
"type": "object",
"title": "The categoryMapping Schema",
"properties": {
"type": "array",
"items": {
"type": "string"
}
}
}
}
}
I get the following error: The property '#/properties/categoryMapping/properties/type' of type String did not match the following type: object in schema http://json-schema.org/draft-04/schema#
But if I specify the types of categories it works:
"$schema": "http://json-schema.org/draft-04/schema#",
"type": "object",
"title": "$id$",
"description": "list_of_values-1",
"properties": {
"categoryMapping": {
"$id": "#/properties/categoryMapping",
"type": "object",
"title": "The Categorymapping Schema",
"required": [
"categoryA",
"categoryB",
"categoryC"
],
"properties": {
"categoryA": {
"$id": "#/properties/categoryMapping/properties/categoryA",
"type": "array",
"title": "The Categorya Schema",
"items": {
"$id": "#/properties/categoryMapping/properties/categoryA/items",
"type": "string",
"title": "The Items Schema",
"default": "",
"examples": [
"a",
"b",
"c"
],
"pattern": "^(.*)$"
}
},
"categoryB": {
"$id": "#/properties/categoryMapping/properties/categoryB",
"type": "array",
"title": "The Categoryb Schema",
"items": {
"$id": "#/properties/categoryMapping/properties/categoryB/items",
"type": "string",
"title": "The Items Schema",
"default": "",
"examples": [
"x",
"y",
"z"
],
"pattern": "^(.*)$"
}
},
"categoryC": {
"$id": "#/properties/categoryMapping/properties/categoryC",
"type": "array",
"title": "The Categoryc Schema",
"items": {
"$id": "#/properties/categoryMapping/properties/categoryC/items",
"type": "string",
"title": "The Items Schema",
"default": "",
"examples": [
"x",
"y",
"z"
],
"pattern": "^(.*)$"
}
}
}
}
}
}
Is there a way to write the schema without explicitly specifying a list of all category types?
Your sample JSON is actually an object with three named properties, which is why the schema you have generated requires you to define each property explicitly, even though they are effectively of the same type.
If you were willing to modify your sample JSON a little bit, however:
{
"categoryMapping": [
{
"name": "categoryA",
"map": ["a","b","c"]
},
{
"name": "categoryB",
"map": ["x","y","z"]
},
{
"name": "categoryC",
"map": ["x","y","z"]
}
]
}
Then you could validate it with the following schema:
{
"$schema": "http://json-schema.org/draft-07/schema#",
"type": "object",
"properties": {
"categoryMapping": {
"type": "array",
"items": {
"type": "object",
"required": [
"name",
"map"
],
"properties": {
"name": {
"type": "string"
},
"map": {
"type": "array",
"items": {
"type": "string"
}
}
}
}
}
}
}
Because you can specify the minimum and maximum number of items allowed in an array, you could also restrict the number of categories to 3 and the number of "maps" to 3 if you wanted to.
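For example, a sketch of that restriction applied to the categoryMapping part of the schema above (minItems and maxItems are standard JSON Schema keywords; the value 3 simply mirrors the counts in the sample):

"categoryMapping": {
  "type": "array",
  "minItems": 3,
  "maxItems": 3,
  "items": {
    "type": "object",
    "required": ["name", "map"],
    "properties": {
      "name": { "type": "string" },
      "map": {
        "type": "array",
        "minItems": 3,
        "maxItems": 3,
        "items": { "type": "string" }
      }
    }
  }
}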

JSON file schema evaluation using json-schema-validator

I have a sample JSON file, and I have also come up with a schema to validate it, shown in the JSON file below:
//[gcp_ingestion_parameters_schema.json]
{
...
"properties": {
"application": {
"$ref": "#/definitions/application"
},
"ingestion": {
"$ref": "#/definitions/ingestion"
}
},
"definitions": {
"applicaion": {
"type": "object",
"properties": {
"project_id": {
"type": "string"
},
"path_to_json_key_file": {
"type": "string"
}
},
"required": [
"project_id",
"path_to_json_key_file"
]
},
...
I am still not sure how to write the schema file. In my sample file, both the application and ingestion tags should occur once, but fileingestion-mappings inside ingestion can occur one or more times.
I have written some Java code to evaluate my JSON file (the first file) against the provided JSON schema file, but I get the following exception:
Exception in thread "main"
com.github.fge.jsonschema.core.exceptions.ProcessingException: fatal: JSON Reference "#/definitions/application" cannot be resolved
level: "fatal"
schema: {"loadingURI":"#","pointer":"/properties/application"}
ref: "#/definitions/application"
Can someone with experience working with the above library answer the questions asked in this thread?
As suggested, you have a typo in your schema. It should be as below:
{
"$schema": "http://json-schema.org/draft-07/schema#",
"$id": "http://json-schema.org/draft-07/schema#",
"title": "Core schema meta-schema",
"definitions": {
"schemaArray": {
"type": "array",
"minItems": 1,
"items": { "$ref": "#" }
},
"nonNegativeInteger": {
"type": "integer",
"minimum": 0
},
"nonNegativeIntegerDefault0": {
"allOf": [
{ "$ref": "#/definitions/nonNegativeInteger" },
{ "default": 0 }
]
},
"simpleTypes": {
"enum": [
"array",
"boolean",
"integer",
"null",
"number",
"object",
"string"
]
},
"stringArray": {
"type": "array",
"items": { "type": "string" },
"uniqueItems": true,
"default": []
}
},
"type": ["object", "boolean"],
"properties": {
"$id": {
"type": "string",
"format": "uri-reference"
},
"$schema": {
"type": "string",
"format": "uri"
},
"$ref": {
"type": "string",
"format": "uri-reference"
},
"$comment": {
"type": "string"
},
"title": {
"type": "string"
},
"description": {
"type": "string"
},
"default": true,
"readOnly": {
"type": "boolean",
"default": false
},
"examples": {
"type": "array",
"items": true
},
"multipleOf": {
"type": "number",
"exclusiveMinimum": 0
},
"maximum": {
"type": "number"
},
"exclusiveMaximum": {
"type": "number"
},
"minimum": {
"type": "number"
},
"exclusiveMinimum": {
"type": "number"
},
"maxLength": { "$ref": "#/definitions/nonNegativeInteger" },
"minLength": { "$ref": "#/definitions/nonNegativeIntegerDefault0" },
"pattern": {
"type": "string",
"format": "regex"
},
"additionalItems": { "$ref": "#" },
"items": {
"anyOf": [
{ "$ref": "#" },
{ "$ref": "#/definitions/schemaArray" }
],
"default": true
},
"maxItems": { "$ref": "#/definitions/nonNegativeInteger" },
"minItems": { "$ref": "#/definitions/nonNegativeIntegerDefault0" },
"uniqueItems": {
"type": "boolean",
"default": false
},
"contains": { "$ref": "#" },
"maxProperties": { "$ref": "#/definitions/nonNegativeInteger" },
"minProperties": { "$ref": "#/definitions/nonNegativeIntegerDefault0" },
"required": { "$ref": "#/definitions/stringArray" },
"additionalProperties": { "$ref": "#" },
"definitions": {
"type": "object",
"additionalProperties": { "$ref": "#" },
"default": {}
},
"properties": {
"type": "object",
"additionalProperties": { "$ref": "#" },
"default": {}
},
"patternProperties": {
"type": "object",
"additionalProperties": { "$ref": "#" },
"propertyNames": { "format": "regex" },
"default": {}
},
"dependencies": {
"type": "object",
"additionalProperties": {
"anyOf": [
{ "$ref": "#" },
{ "$ref": "#/definitions/stringArray" }
]
}
},
"propertyNames": { "$ref": "#" },
"const": true,
"enum": {
"type": "array",
"items": true,
"minItems": 1,
"uniqueItems": true
},
"type": {
"anyOf": [
{ "$ref": "#/definitions/simpleTypes" },
{
"type": "array",
"items": { "$ref": "#/definitions/simpleTypes" },
"minItems": 1,
"uniqueItems": true
}
]
},
"format": { "type": "string" },
"contentMediaType": { "type": "string" },
"contentEncoding": { "type": "string" },
"if": {"$ref": "#"},
"then": {"$ref": "#"},
"else": {"$ref": "#"},
"allOf": { "$ref": "#/definitions/schemaArray" },
"anyOf": { "$ref": "#/definitions/schemaArray" },
"oneOf": { "$ref": "#/definitions/schemaArray" },
"not": { "$ref": "#" }
},
"default": true
}
This works perfectly fine with the JSON you have provided.
You have a typo: "applicaion" should be "application".
Change "definitions": { "applicaion": { to "definitions": { "application": {
Also refer to this link to validate your schema: https://www.liquid-technologies.com/online-json-schema-validator
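For completeness, here is a minimal sketch of validating the document against the corrected schema with the com.github.fge json-schema-validator library that produced the exception above. The schema file name comes from the question; the data file name is an assumption for illustration.

import java.io.File;

import com.fasterxml.jackson.databind.JsonNode;
import com.github.fge.jackson.JsonLoader;
import com.github.fge.jsonschema.core.report.ProcessingReport;
import com.github.fge.jsonschema.main.JsonSchema;
import com.github.fge.jsonschema.main.JsonSchemaFactory;

public class IngestionConfigValidator {
    public static void main(String[] args) throws Exception {
        // Load the schema (file name from the question) and the document to validate (assumed name)
        JsonNode schemaNode = JsonLoader.fromFile(new File("gcp_ingestion_parameters_schema.json"));
        JsonNode dataNode = JsonLoader.fromFile(new File("gcp_ingestion_parameters.json"));

        // Compile the schema and run validation
        JsonSchema schema = JsonSchemaFactory.byDefault().getJsonSchema(schemaNode);
        ProcessingReport report = schema.validate(dataNode);

        System.out.println(report.isSuccess() ? "Valid" : report);
    }
}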

How to implement an autocomplete search field (suggestor) with an existing ElasticSearch index?

The ES index consists of 2 types that are implicitly mapped (default mapping). One type is "person" or an author, the 2nd type is "document".
The index has some 500k entries.
What I have to do is: implement an autocomplete (suggestions) functionality where only the fields "title", "classification" (document) and "name" (author) are relevant for the suggestions shown to the user.
Could it be done without changing the 500k docs in the index?
I found some tutorials that suggest preparing a specific mapping and also altering the documents (which I want to avoid if possible), but I am new to this and I am not sure how to go about this problem.
Below is the JSON for the index, and how the documents look:
//a Document
{
"rawsource": "Phys.Rev. D67 (2003) 084031",
"pubyear": 2003,
"citedFrom": 19,
"topics": [
{
"name": "General Relativity and Quantum Cosmology"
}
],
"cited": [
{
"ref": 0,
"id": "PN132433"
},
{
"ref": 1,
"id": "PN206900"
}
],
"id": "PN120001",
"collection": "PN",
"source": "Phys Rev D",
"classification": "Physics",
"title": "Observables in causal set cosmology",
"url": "http://arxiv.org/abs/gr-qc/0210061",
"authors": [
{
"name": "Brightwell, Graham"
},
{
"name": "Dowker, H. Fay"
},
{
"name": "Garcia, Raquel S."
},
{
"name": "Henson, Joe"
},
{
"name": "Sorkin, Rafael D."
}
]
}
//a Person (author)
{
"name": "Terasawa, M.",
"documents": [
{
"citedFrom": 0,
"id": "PN039187"
}
],
"coAuthors": [
{
"name": "Famiano, M. A.",
"count": "1"
},
{
"name": "Boyd, R. N.",
"count": "1"
}
],
"topics": [
{
"name": "Astrophysics",
"count": "1"
}
]
}
//the mapping (implicit/default)
{
"dlsnew": {
"aliases": {
},
"mappings": {
"person": {
"properties": {
"coAuthors": {
"properties": {
"count": {
"type": "string"
},
"name": {
"type": "string"
}
}
},
"documents": {
"properties": {
"citedFrom": {
"type": "long"
},
"id": {
"type": "string"
}
}
},
"name": {
"type": "string"
},
"referenced": {
"properties": {
"count": {
"type": "string"
},
"id": {
"type": "string"
}
}
},
"topics": {
"properties": {
"count": {
"type": "string"
},
"name": {
"type": "string"
}
}
}
}
},
"document": {
"properties": {
"abstract": {
"type": "string"
},
"authors": {
"properties": {
"name": {
"type": "string"
}
}
},
"cited": {
"properties": {
"id": {
"type": "string"
},
"ref": {
"type": "long"
}
}
},
"citedFrom": {
"type": "long"
},
"classification": {
"type": "string"
},
"collection": {
"type": "string"
},
"id": {
"type": "string"
},
"pubyear": {
"type": "long"
},
"rawsource": {
"type": "string"
},
"source": {
"type": "string"
},
"title": {
"type": "string"
},
"topics": {
"properties": {
"name": {
"type": "string"
}
}
},
"url": {
"type": "string"
}
}
}
},
"settings": {
"index": {
"creation_date": "1454247029258",
"number_of_shards": "5",
"uuid": "k_CyQaxwSAaae67wW98HyQ",
"version": {
"created": "1050299"
},
"number_of_replicas": "1"
}
},
"warmers": {
}
}
}
The implementation is to be done using Java and the Vaadin Framework (this is not relevant at this point, but examples in Java/Vaadin will be most welcome).
Thanks.
So, I think I solved my problem on the Elasticsearch side, or at least to a good enough extent for me and the task at hand. I followed this Ruby example.
I had to re-index all documents to accommodate the new settings for my index and to change my mapping explicitly.
The key is in defining proper analyzers and, in this case, an edgeNGram filter, like so:
"settings": {
"index": {
"analysis": {
"filter": {
"def_ngram_filter": {
"min_gram": "1",
"side": "front",
"type": "edgeNGram",
"max_gram": "16"
}
},
"analyzer": {
"def_search_analyzer": {
"filter": [
"lowercase",
"asciifolding"
],
"type": "custom",
"tokenizer": "def_tokenizer"
},
"def_ngram_analyzer": {
"filter": [
"lowercase",
"asciifolding",
"def_ngram_filter"
],
"type": "custom",
"tokenizer": "def_tokenizer"
},
"def_shingle_analyzer": {
"filter": [
"shingle",
"lowercase",
"asciifolding"
],
"type": "custom",
"tokenizer": "def_tokenizer"
},
"def_default_analyzer": {
"filter": [
"lowercase",
"asciifolding"
],
"type": "custom",
"tokenizer": "def_tokenizer"
}
},
"tokenizer": {
"def_tokenizer": {
"type": "whitespace"
}
}
}
}
}
and then use these analyzers in the mapping for the fields to be searched, like so:
"mappings": {
"person": {
"properties": {
"coAuthors": {
"properties": {
"count": {
"type": "string"
},
"name": {
"type": "string"
}
}
},
"documents": {
"properties": {
"citedFrom": {
"type": "long"
},
"id": {
"type": "string"
}
}
},
"name": {
"type": "string",
"analyzer": "def_default_analyzer",
"fields": {
"ngrams": {
"type": "string",
"index_analyzer": "def_ngram_analyzer",
"search_analyzer": "def_search_analyzer"
},
"shingles": {
"type": "string",
"analyzer": "def_shingle_analyzer"
},
"stemmed": {
"type": "string",
"analyzer": "def_snowball_analyzer"
}
}
},
"referenced": {
"properties": {
"count": {
"type": "string"
},
"id": {
"type": "string"
}
}
},
"topics": {
"properties": {
"count": {
"type": "string"
},
"name": {
"type": "string"
}
}
}
}
},
"document": {
"properties": {
"abstract": {
"type": "string"
},
"authors": {
"properties": {
"name": {
"type": "string",
"analyzer": "def_default_analyzer",
"fields": {
"ngrams": {
"type": "string",
"index_analyzer": "def_ngram_analyzer",
"search_analyzer": "def_search_analyzer"
},
"shingles": {
"type": "string",
"analyzer": "def_shingle_analyzer"
},
"stemmed": {
"type": "string",
"analyzer": "def_snowball_analyzer"
}
}
}
}
},
"cited": {
"properties": {
"id": {
"type": "string"
},
"ref": {
"type": "long"
}
}
},
"citedFrom": {
"type": "long"
},
"classification": {
"type": "string"
},
"collection": {
"type": "string"
},
"id": {
"type": "string"
},
"pubyear": {
"type": "long"
},
"rawsource": {
"type": "string"
},
"source": {
"type": "string"
},
"title": {
"type": "string",
"analyzer": "def_default_analyzer",
"fields": {
"ngrams": {
"type": "string",
"index_analyzer": "def_ngram_analyzer",
"search_analyzer": "def_search_analyzer"
},
"shingles": {
"type": "string",
"analyzer": "def_shingle_analyzer"
},
"stemmed": {
"type": "string",
"analyzer": "def_snowball_analyzer"
}
}
},
"topics": {
"properties": {
"name": {
"type": "string",
"analyzer": "def_default_analyzer",
"fields": {
"ngrams": {
"type": "string",
"index_analyzer": "def_ngram_analyzer",
"search_analyzer": "def_search_analyzer"
},
"shingles": {
"type": "string",
"analyzer": "def_shingle_analyzer"
},
"stemmed": {
"type": "string",
"analyzer": "def_snowball_analyzer"
}
}
}
}
},
"url": {
"type": "string"
}
}
}
}
then querying the index with the following works as expected:
curl -XGET "http://localhost:9200/_search" -d'
{
"size": 5,
"query": {
"multi_match": {
"query": "physics",
"type": "most_fields",
"fields": [
"document.title^10",
"document.title.shingles^2",
"document.title.ngrams",
"person.name^10",
"person.name.shingles^2",
"person.name.ngrams",
"document.topics.name^10",
"document.topics.name.shingles^2",
"document.topics.name.ngrams"
],
"operator": "and"
}
}
}'
Hope this will help someone, it is probably not the best example as I am a complete noob to this, but it worked for me.
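Since the question asked for Java, here is a rough sketch of the same multi_match query using the Elasticsearch Java transport client API of that generation. The index name is taken from the mapping above; the exact builder methods should be treated as assumptions to verify against your client version.

import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.index.query.MatchQueryBuilder;
import org.elasticsearch.index.query.MultiMatchQueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;

public class AutocompleteQuery {

    // Builds and runs the same multi_match query as the curl example above,
    // boosting the plain fields over the shingle and ngram sub-fields.
    public static SearchResponse suggest(Client client, String userInput) {
        MultiMatchQueryBuilder query = QueryBuilders.multiMatchQuery(userInput)
                .field("document.title", 10f)
                .field("document.title.shingles", 2f)
                .field("document.title.ngrams")
                .field("person.name", 10f)
                .field("person.name.shingles", 2f)
                .field("person.name.ngrams")
                .field("document.topics.name", 10f)
                .field("document.topics.name.shingles", 2f)
                .field("document.topics.name.ngrams")
                .type(MultiMatchQueryBuilder.Type.MOST_FIELDS)
                .operator(MatchQueryBuilder.Operator.AND);

        return client.prepareSearch("dlsnew")   // index name taken from the mapping above
                .setQuery(query)
                .setSize(5)
                .execute()
                .actionGet();
    }
}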
There exist different Autocomplete components for Vaadin.
Have a look at this link.
Depending on which Add-On you choose, the databinding is done differently, but you have to "connect" it to your index.
