{
"status": 200,
"id": "123e4567-e89b-12d3-a456-426655440000",
"shop": {
"c73bcdcc-2669-4bf6-81d3-e4ae73fb11fd": {
"123e4567-e89b-12d3-a456-426655443210": {
"quantity": {
"value": 10
}
},
"123e4567-e89b-12d3-a456-426655443211": {
"quantity": {
"value": 20
}
}
}
}
}
This is my JSON response. I want to validate the fields "c73bcdcc-2669-4bf6-81d3-e4ae73fb11fd", "123e4567-e89b-12d3-a456-426655443210" and "123e4567-e89b-12d3-a456-426655443211", which are uniquely generated each time the endpoint is hit.
Building on @pxcv7r's answer:
To validate a UUID in particular, you may use format in JSON Schema, which provides built-in support for the UUID syntax: { "type": "string", "format": "uuid" }
See https://json-schema.org/understanding-json-schema/reference/string.html
Additionally, you can use a combination of "propertyNames" and "unevaluatedProperties" to avoid the need for any regular expression:
{
"$schema": "https://json-schema.org/draft/2019-09/schema",
"type": "object",
"properties": {
"status": {
"type": "integer"
},
"id": {
"type": "string",
"format": "uuid"
},
"shop": {
"type": "object",
"minProperties": 1,
"maxProperties": 1,
"propertyNames": {
"format": "uuid"
},
"unevaluatedProperties": {
"type":"object",
"minProperties": 1,
"propertyNames": {
"format": "uuid"
},
"unevaluatedProperties": {
"title": "single variant of a shop",
"type": "object",
"properties": {
"quantity": {
"type": "object",
"properties": {
"value": {
"type": "integer"
}
}
}
}
}
}
}
}
}
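Note that in draft 2019-09 the format keyword is annotation-only by default, so make sure your validator is configured to assert format (otherwise "format": "uuid" will not reject anything). As an illustration, the schema above rejects the following instance, because the key under shop is not a valid UUID:
{
"status": 200,
"id": "123e4567-e89b-12d3-a456-426655440000",
"shop": {
"not-a-uuid": {
"123e4567-e89b-12d3-a456-426655443210": {
"quantity": {
"value": 10
}
}
}
}
}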
To validate in JSON Schema that a string conforms to a regular expression pattern, use
{ "type": "string", "pattern": "\\b[0-9a-f]{8}\\b-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-\\b[0-9a-f]{12}\\b" }
(Note the doubled backslashes: inside a JSON string literal, \b is a backspace escape, so the regex word boundary has to be written as \\b.) The concrete pattern is adapted from the question Searching for UUIDs in text with regex; see there for more details.
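If the whole string has to be a UUID (rather than merely contain one), an anchored variant avoids the word-boundary escapes entirely:
{ "type": "string", "pattern": "^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$" }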
You need "patternProperties":
{
"$schema":"http://json-schema.org/draft-07/schema#",
"type":"object",
"properties": {
"shop":{
"type":"object",
"additionalProperties":false,
"patternProperties":{
"[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}": {
"type":"object",
"patternProperties" :{
"[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}":{
"type":"object",
"properties":{
"quantity":{
"type":"object",
"properties":{
"value":{
"type":"integer"
}
}
}
}
}
}
}
}
}
}
}
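If you need to run this schema from Java, here is a minimal sketch using the everit-org/json-schema library (my assumption; any draft-07 validator will do, and the variable names schemaJson and responseJson are illustrative):
import org.everit.json.schema.Schema;
import org.everit.json.schema.ValidationException;
import org.everit.json.schema.loader.SchemaLoader;
import org.json.JSONObject;

// schemaJson holds the schema above, responseJson the API response (illustrative names)
Schema schema = SchemaLoader.load(new JSONObject(schemaJson));
try {
    schema.validate(new JSONObject(responseJson)); // throws ValidationException on mismatch
    System.out.println("response is valid");
} catch (ValidationException e) {
    e.getAllMessages().forEach(System.out::println); // one message per violation
}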
I have a JSON schema:
{
"type": "object",
"properties": {
"name": { "type": ["string", "null"] },
"credit_card": {
"type": ["string", "null"]
},
"billing_address": {
"type": ["string", "null"]
}
},
"dependencies": [{
"credit_card": ["billing_address"]
}]
}
I want the billing_address value to be present if the credit_card value is provided. But since I have specified the type of billing_address as ["string", "null"], it accepts a null value even when a credit_card value is present, so the dependency is not really enforced. Could someone suggest the right approach for this?
Thanks in advance.
Your dependencies should be defined as an object {} not an array []. You just need to remove the outer square brackets:
"dependencies": {
"credit_card": ["billing_address"]
}
Overall, this gives the following schema:
{
"type": "object",
"properties": {
"name": {
"type": ["string", "null"]
},
"credit_card": {
"type": ["string", "null"]
},
"billing_address": {
"type": ["string", "null"]
}
},
"dependencies": {
"credit_card": ["billing_address"]
}
}
Using the above schema, the following JSON is valid:
{
"name": "Abel",
"credit_card": "1234...",
"billing_address": "some address here..."
}
But the following JSON is invalid:
{
"name": "Abel",
"credit_card": "1234"
}
You can test these using an online validator such as this one.
You may also want to consider removing the null values you are using in the schema. For example, by using this:
{
"type": "object",
"properties": {
"name": {
"type": "string",
},
"credit_card": {
"type": "string",
},
"billing_address": {
"type": "string",
}
},
"dependencies": {
"credit_card": ["billing_address"]
}
}
Using this revised schema, you will now also get a validation error for JSON such as the following:
{
"name": "Abel",
"credit_card": null,
"billing_address": "some address here..."
}
Update - both fields are present but null:
If both credit_card and billing_address are null, then this case can be handled using a conditional validation (added to the end of our schema below):
{
"type": "object",
"properties": {
"name": {
"type": ["string", "null"]
},
"credit_card": {
"type": ["string", "null"]
},
"billing_address": {
"type": ["string", "null"]
}
},
"dependencies": {
"credit_card": ["billing_address"]
},
"if": {
"properties": { "credit_card": { "const": null } }
},
"then": {
"properties": { "billing_address": { "const": null } }
}
}
Now, the following will also be valid:
{
"name": "Abel",
"credit_card": null,
"billing_address": null
}
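And the following remains invalid, since the then branch forces billing_address to be null whenever credit_card is null:
{
"name": "Abel",
"credit_card": null,
"billing_address": "some address here..."
}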
One note of warning: this uses a relatively new feature of the JSON Schema spec (if/then was introduced in draft-07). It is supported by the online validator I referred to above, but I do not know whether it is supported by whatever validator you may be using.
Is there any Java package which I can use to convert a JSON string to a JSON schema? I have looked online and found libraries in Python, Ruby and NodeJS, but not in Java.
All the Java libraries I have found generate a JSON schema from a POJO.
I think you can try this library on GitHub; it does exactly what you want from a JSON string. You only need to build it and use json-string-schema-generator:
String json = "{\"sectors\": [{\"times\":[{\"intensity\":30," +
"\"start\":{\"hour\":8,\"minute\":30},\"end\":{\"hour\":17,\"minute\":0}}," +
"{\"intensity\":10,\"start\":{\"hour\":17,\"minute\":5},\"end\":{\"hour\":23,\"minute\":55}}]," +
"\"id\":\"dbea21eb-57b5-44c9-a953-f61816fd5876\"}]}";
String result = JsonSchemaGenerator.outputAsString("Schedule", "this is a test", json);
/* sample output
{
"title": "Schedule",
"description": "this is a test",
"type": "object",
"properties": {
"sectors": {
"type": "array",
"items": {
"properties": {
"times": {
"type": "array",
"items": {
"properties": {
"intensity": {
"type": "number"
},
"start": {
"type": "object",
"properties": {
"hour": {
"type": "number"
},
"minute": {
"type": "number"
}
}
},
"end": {
"type": "object",
"properties": {
"hour": {
"type": "number"
},
"minute": {
"type": "number"
}
}
}
}
}
},
"id": {
"type": "string"
}
}
}
}
}
}
*/
// To generate JSON schema into a file
JsonSchemaGenerator.outputAsFile("Schedule", "this is a test", json, "output-schema.json");
// To generate POJO(s)
JsonSchemaGenerator.outputAsPOJO("Schedule", "this is a test", json, "com.example", "generated-sources");
I understand how to build a mapping for any index and type. But I want two fields, my_field_1 and my_field_2, that will not be analyzed in all the indexes and types that will be created in the future.
PUT /address_index
{
"mappings":{
"address":{
"properties":{
"state":{
"type":"string",
"fields":{
"raw":{
"type":"string",
"index":"not_analyzed"
}
}
}
}
}
}
}
I also saw in one of the links how to do this for all string fields, but I am unable to apply it to just the fields mentioned above.
I will be implementing this in Java; however, just the DSL JSON would be a good head start.
You can do this by creating an index template with a pattern of "*", meaning it will apply to all indices you create in the future, and defining this mapping in it.
PUT 127.0.0.1:9200/_template/stacktest
{
"template": "*",
"settings": {
"number_of_shards": 1
},
"mappings": {
"address": {
"properties": {
"state": {
"type": "string",
"fields": {
"raw": {
"type": "string",
"index": "not_analyzed"
}
}
}
}
}
}
}
Now you can create an index with any name and this mapping will apply to it.
PUT 127.0.0.1:9200/testindex/
GET 127.0.0.1:9200/testindex/_mapping
{
"testindex": {
"mappings": {
"address": {
"properties": {
"state": {
"type": "text",
"fields": {
"raw": {
"type": "keyword"
}
}
}
}
}
}
}
}
Note that the index: not_analyzed part was transformed into the keyword datatype, as string has been deprecated. You should use text and keyword if you are on version 5.x.
Edit to address your comments
To adapt this to the specific two fields mentioned by you, the following request would create the template:
{
"template": "*",
"settings": {
"number_of_shards": 1
},
"mappings": {
"_default_": {
"properties": {
"my_field_1": {
"type": "string",
"index": "not_analyzed"
},
"my_field_2": {
"type": "string",
"index": "not_analyzed"
}
}
}
}
}
If you now index a document into a new index, those two fields will not be analyzed for any type of document, while any other string field will be analyzed; this is how I understood your original question.
PUT 127.0.0.1:9200/testindex/address
{
"my_field_1": "this is not_analyzed",
"my_field_2": "this is not_analyzed either",
"other_field": "this however is analyzed"
}
PUT 127.0.0.1:9200/testindex/differenttype
{
"my_field_1": "this is not_analyzed",
"my_field_2": "this is not_analyzed either",
"other_field": "this however is analyzed"
}
Now check the mapping and notice the difference:
{
"testindex": {
"mappings": {
"differenttype": {
"properties": {
"my_field_1": {
"type": "keyword"
},
"my_field_2": {
"type": "keyword"
},
"other_field": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
}
}
},
"address": {
"properties": {
"my_field_1": {
"type": "keyword"
},
"my_field_2": {
"type": "keyword"
},
"other_field": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
}
}
},
"_default_": {
"properties": {
"my_field_1": {
"type": "keyword"
},
"my_field_2": {
"type": "keyword"
}
}
}
}
}
}
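Since you mentioned implementing this in Java: with the 5.x transport client, the template can be created roughly as follows (a sketch only; client is assumed to be an existing TransportClient, and the mapping string mirrors the template above):
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.xcontent.XContentType;

// keyword is the 5.x equivalent of string + not_analyzed
String defaultMapping = "{\"_default_\":{\"properties\":{"
        + "\"my_field_1\":{\"type\":\"keyword\"},"
        + "\"my_field_2\":{\"type\":\"keyword\"}}}}";

client.admin().indices().preparePutTemplate("stacktest")
        .setTemplate("*") // apply to every index created in the future
        .setSettings(Settings.builder().put("number_of_shards", 1))
        .addMapping("_default_", defaultMapping, XContentType.JSON)
        .get();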
I have the below mapping structure for my Elasticsearch index.
{
"users": {
"mappings": {
"user-type": {
"properties": {
"lastModifiedBy": {
"type": "string"
},
"lastModifiedDate": {
"type": "date",
"format": "dateOptionalTime"
},
"details": {
"type": "nested",
"properties": {
"lastModifiedBy": {
"type": "string"
},
"lastModifiedDate": {
"type": "date",
"format": "dateOptionalTime"
},
"views": {
"type": "nested",
"properties": {
"id": {
"type": "string"
},
"name": {
"type": "string"
},
"properties": {
"properties": {
"name": {
"type": "string"
},
"type": {
"type": "string"
},
"value": {
"type": "string"
}
}
}
}
}
}
}
}
}
}
}
}
Basically I want to retrieve ONLY the views object inside details, based on the index id and the view id (details.views.id).
I have tried the Java code below, but it does not seem to work.
SearchRequestBuilder srq = this.client.prepareSearch(this.indexName)
.setTypes(this.type)
.setQuery(QueryBuilders.termQuery("_id", sid))
.setPostFilter(FilterBuilders.nestedFilter("details.views",
FilterBuilders.termFilter("details.views.id", id)));
Below is the query structure for this Java code.
{
"query": {
"term": {
"_id": "123"
}
},
"post_filter": {
"nested": {
"filter": {
"term": {
"details.views.id": "def"
}
},
"path": "details.views"
}
}
}
Since details is nested and views is nested inside details, you basically need two nested filters as well (one for each level), and the constraint on the _id field is best done with the ids query. The query DSL would look like this:
{
"query": {
"ids": {
"values": [
"123"
]
}
},
"post_filter": {
"nested": {
"filter": {
"nested": {
"path": "details.view",
"filter": {
"term": {
"details.views.id": "def"
}
}
}
},
"path": "details"
}
}
}
Translating this into Java code yields:
// 2nd-level nested filter
FilterBuilder detailsView = FilterBuilders.nestedFilter("details.views",
FilterBuilders.termFilter("details.views.id", id));
// 1st-level nested filter
FilterBuilder details = FilterBuilders.nestedFilter("details", detailsView);
// ids constraint
IdsQueryBuilder ids = QueryBuilders.idsQuery(this.type).addIds("123");
SearchRequestBuilder srq = this.client.prepareSearch(this.indexName)
.setTypes(this.type)
.setQuery(ids)
.setPostFilter(details);
PS: I second what @Paul said, i.e. always play around with the query DSL first, and once you have zeroed in on the exact query you need, translate it into the Java form.
Problem: How to create an index from a JSON file using the Java API?
The JSON file contains a definition for the index de_brochures. It also defines an analyzer de_analyzer with custom filters that are used by the respective index.
As the JSON works with curl and Sense, I assume I have to adapt its syntax to work with the Java API.
I don't want to use XContentFactory.jsonBuilder() as the JSON comes from a file!
I have the following JSON file to create my mapping from and to set settings:
Using Sense with PUT /indexname, it does create an index from this.
{
"mappings": {
"de_brochures": {
"properties": {
"text": {
"type": "string",
"store": true,
"index_analyzer": "de_analyzer"
},
"classification": {
"type": "string",
"index": "not_analyzed"
},
"language": {
"type": "string",
"index": "not_analyzed"
}
}
}
},
"settings": {
"analysis": {
"filter": {
"de_stopwords": {
"type": "stop",
"stopwords": "_german_"
},
"de_stemmer": {
"type": "stemmer",
"name": "light_german"
}
},
"analyzer": {
"de_analyzer": {
"type": "custom",
"tokenizer": "standard",
"filter": [
"lowercase",
"de_stopwords",
"de_stemmer"
]
}
}
}
}
}
As the above did not work with addMapping() alone, I tried to split it into two separate files (I realized that I had to remove the "mappings": and "settings": parts):
------ Mapping json ------
{
"de_brochures": {
"properties": {
"text": {
"type": "string",
"store": true,
"index_analyzer": "de_analyzer"
},
"classification": {
"type": "string",
"index": "not_analyzed"
},
"language": {
"type": "string",
"index": "not_analyzed"
}
}
}
}
------- Settings json --------
{
"analysis": {
"filter": {
"de_stopwords": {
"type": "stop",
"stopwords": "_german_"
},
"de_stemmer": {
"type": "stemmer",
"name": "light_german"
}
},
"analyzer": {
"de_analyzer": {
"type": "custom",
"tokenizer": "standard",
"filter": [
"lowercase",
"de_stopwords",
"de_stemmer"
]
}
}
}
}
This is my Java code to load and add/set the JSON.
CreateIndexRequestBuilder createIndexRequestBuilder = client.admin().indices().prepareCreate(index);
// CREATE SETTINGS (note: read from the settings file; brochures_settings_path is assumed to point to it)
String settings_json = new String(Files.readAllBytes(brochures_settings_path));
createIndexRequestBuilder.setSettings(settings_json);
// CREATE MAPPING
String mapping_json = new String(Files.readAllBytes(brochures_mapping_path));
createIndexRequestBuilder.addMapping("de_brochures", mapping_json);
CreateIndexResponse indexResponse = createIndexRequestBuilder.execute().actionGet();
There is no longer any complaint about the mapping file's structure, but it now fails with the error:
Caused by: org.elasticsearch.index.mapper.MapperParsingException: Analyzer [de_analyzer] not found for field [text]
Solution:
I managed to do it with my original JSON file using createIndexRequestBuilder.setSource(settings_json);
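For reference, a minimal sketch of that working approach (the path variable brochures_index_path is illustrative; client and index are the same as in the code above):
// setSource() accepts the whole file, including the "mappings" and "settings" sections
String source_json = new String(Files.readAllBytes(brochures_index_path));
CreateIndexResponse indexResponse = client.admin().indices().prepareCreate(index)
        .setSource(source_json)
        .execute().actionGet();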
I think the problem is with the structure of your mapping file. Here is a working example:
mapping.json
{
"en_brochures": {
"properties": {
"text": {
"type": "string",
"store": true,
"index_analyzer": "en_analyzer",
"term_vector": "yes"
},
"classification": {
"type": "string",
"index": "not_analyzed"
},
"language": {
"type": "string",
"index": "not_analyzed"
}
}
}
}
String mapping = new String(Files.readAllBytes(Paths.get("mapping.json")));
createIndexRequestBuilder.addMapping("en_brochures", mapping);
CreateIndexResponse indexResponse = createIndexRequestBuilder.execute().actionGet();
This works for me; you can try it.