Description: I have to read a particular field from Elasticsearch 5.4 using Apache Camel. When I use the code below, I'm not able to view the response.
Exception: Error building toString out of XContent: com.fasterxml.jackson.core.JsonGenerationException: Can not start an object, expecting field name (context: Object)
Code:
from("direct:start")
.process(exchange -> {
GetRequest a = new GetRequest("example", "doc", "1");
exchange.getIn().setBody(a);
})
.to("elasticsearch5://elastic?operation=GET_BY_ID&ip=<ip>&port=9300")
.log("${body}");
Complete Stacktrace:
(route1) elasticsearch5://elastic?ip=&operation=GET_BY_ID&port=9300 --> log[messageId] <<< Pattern:InOnly, Headers:{breadcrumbId=ID-NLVHPRAAB02027-53300-1510315731625-0-1}, BodyType:org.elasticsearch.action.support.PlainActionFuture, Body:Error building toString out of XContent: com.fasterxml.jackson.core.JsonGenerationException: Can not start an object, expecting field name (context: Object)
at com.fasterxml.jackson.core.JsonGenerator._reportError(JsonGenerator.java:1897)
at com.fasterxml.jackson.core.json.JsonGeneratorImpl._reportCantWriteValueExpectName(JsonGeneratorImpl.java:244)
at com.fasterxml.jackson.core.json.UTF8JsonGenerator._verifyValueWrite(UTF8JsonGenerator.java:1033)
at com.fasterxml.jackson.core.json.UTF8JsonGenerator.writeStartObject(UTF8JsonGenerator.java:313)
at org.elasticsearch.common.xcontent.json.JsonXContentGenerator.writeStartObject(JsonXContentGenerator.java:161)
at org.elasticsearch.common.xcontent.XContentBuilder.startObject(XContentBuilder.java:217)
at org.elasticsearch.index.get.GetResult.toXContent(GetResult.java:251)
at org.elasticsearch.action.get.GetResponse.toXContent(GetResponse.java:158)
at org.elasticsearch.common.Strings.toString(Strings.java:901)
at org.elastic... [Body clipped after 1000 chars, total length is 4350]
Are you using the camel-elasticsearch5 component? In that case you need to do something like this:
https://github.com/apache/camel/blob/master/components/camel-elasticsearch5/src/test/java/org/apache/camel/component/elasticsearch5/ElasticsearchGetSearchDeleteExistsUpdateTest.java#L45-L57
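Roughly, the idea from that test translated to your route would look something like the untested sketch below. Whether the body comes back as a GetResponse or wrapped in an ActionFuture depends on the component version (your log shows a PlainActionFuture), hence the unwrap step; the field name "myField" is only a placeholder.
from("direct:start")
    .process(exchange -> exchange.getIn().setBody(new GetRequest("example", "doc", "1")))
    .to("elasticsearch5://elastic?operation=GET_BY_ID&ip=<ip>&port=9300")
    .process(exchange -> {
        // Unwrap the ActionFuture if the producer returned one, otherwise
        // let Camel convert the body to a GetResponse directly.
        Object body = exchange.getIn().getBody();
        GetResponse response = body instanceof ActionFuture
                ? (GetResponse) ((ActionFuture<?>) body).actionGet()
                : exchange.getIn().getBody(GetResponse.class);
        // Read a single field from the document source instead of logging
        // the whole response ("myField" is a placeholder field name).
        exchange.getIn().setBody(response.getSource().get("myField"));
    })
    .log("${body}");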
Using a standalone MongoDB instance in version 4.4.1 with a Java client that connects using the latest driver (org.mongodb:mongodb-driver-sync:4.1.1), I am getting an error when calling findOneAndUpdate with the $setOnInsert operator.
Here is the query used:
final List<Bson> updates = new ArrayList<>();
updates.add(Updates.set("data", "test"));
updates.add(Updates.setOnInsert("firstSeenTime", new Date()));
final Document updatedDocument =
    this.visitorsCollection.findOneAndUpdate(
        eq("userId", "u1"),
        updates,
        new FindOneAndUpdateOptions().returnDocument(ReturnDocument.AFTER).upsert(true));
The error:
Exception in thread "main" com.mongodb.MongoCommandException: Command failed with error 40324 (Location40324): 'Unrecognized pipeline stage name: '$setOnInsert'' on server A.B.C.D:XXXXX. The full response is {"ok": 0.0, "errmsg": "Unrecognized pipeline stage name: '$setOnInsert'", "code": 40324, "codeName": "Location40324"}
    at com.mongodb.internal.connection.ProtocolHelper.getCommandFailureException(ProtocolHelper.java:175)
    at com.mongodb.internal.connection.InternalStreamConnection.receiveCommandMessageResponse(InternalStreamConnection.java:359)
    at com.mongodb.internal.connection.InternalStreamConnection.sendAndReceive(InternalStreamConnection.java:280)
    at com.mongodb.internal.connection.UsageTrackingInternalConnection.sendAndReceive(UsageTrackingInternalConnection.java:100)
    at com.mongodb.internal.connection.DefaultConnectionPool$PooledConnection.sendAndReceive(DefaultConnectionPool.java:490)
    at com.mongodb.internal.connection.CommandProtocolImpl.execute(CommandProtocolImpl.java:71)
    at com.mongodb.internal.connection.DefaultServer$DefaultServerProtocolExecutor.execute(DefaultServer.java:255)
    at com.mongodb.internal.connection.DefaultServerConnection.executeProtocol(DefaultServerConnection.java:202)
    at com.mongodb.internal.connection.DefaultServerConnection.command(DefaultServerConnection.java:118)
    at com.mongodb.internal.connection.DefaultServerConnection.command(DefaultServerConnection.java:110)
    at com.mongodb.internal.operation.CommandOperationHelper$13.call(CommandOperationHelper.java:712)
    at com.mongodb.internal.operation.OperationHelper.withReleasableConnection(OperationHelper.java:620)
    at com.mongodb.internal.operation.CommandOperationHelper.executeRetryableCommand(CommandOperationHelper.java:705)
    at com.mongodb.internal.operation.CommandOperationHelper.executeRetryableCommand(CommandOperationHelper.java:697)
    at com.mongodb.internal.operation.BaseFindAndModifyOperation.execute(BaseFindAndModifyOperation.java:69)
    at com.mongodb.client.internal.MongoClientDelegate$DelegateOperationExecutor.execute(MongoClientDelegate.java:195)
    at com.mongodb.client.internal.MongoCollectionImpl.executeFindOneAndUpdate(MongoCollectionImpl.java:785)
    at com.mongodb.client.internal.MongoCollectionImpl.findOneAndUpdate(MongoCollectionImpl.java:765)
If I get rid of the Updates.setOnInsert(...) call, then the update works but not as I would like. My purpose is to set some fields based on whether the document to update exists or not. Looking at the documentation, $setOnInsert should be supported:
https://docs.mongodb.com/manual/reference/operator/update/#id1
Any idea about what is wrong?
The problem here is there are 2 forms of findOneAndUpdate. The second argument can be either:
a document containing update operator expressions
an array containing $set, $unset, and $replaceRoot aggregation stages
Since you are creating updates as an ArrayList, findOneAndUpdate tries to process it as an aggregation pipeline, which does not recognize a $setOnInsert stage.
You need to build the update as a single document for the update operators to be recognized. Following your example, you can simply wrap the list with Updates.combine(updates) and pass the result to findOneAndUpdate as the second parameter.
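For example, reusing the snippet from the question (a minimal sketch; only the second argument changes):
// Combine the update operators into a single Bson update document so the
// driver treats it as an update, not an aggregation pipeline.
final Bson update = Updates.combine(
        Updates.set("data", "test"),
        Updates.setOnInsert("firstSeenTime", new Date()));

final Document updatedDocument =
        this.visitorsCollection.findOneAndUpdate(
                eq("userId", "u1"),
                update,
                new FindOneAndUpdateOptions()
                        .returnDocument(ReturnDocument.AFTER)
                        .upsert(true));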
I'm trying to create a pipeline that streams data from a Kafka topic to Google's BigQuery. The data in the topic is in Avro.
I call the apply function three times: once to read from Kafka, once to extract the record, and once to write to BigQuery. Here is the main part of the code:
pipeline
    .apply("Read from Kafka",
        KafkaIO
            .<byte[], GenericRecord>read()
            .withBootstrapServers(options.getKafkaBrokers().get())
            .withTopics(Utils.getListFromString(options.getKafkaTopics()))
            .withKeyDeserializer(
                ConfluentSchemaRegistryDeserializerProvider.of(
                    options.getSchemaRegistryUrl().get(),
                    options.getSubject().get())
            )
            .withValueDeserializer(
                ConfluentSchemaRegistryDeserializerProvider.of(
                    options.getSchemaRegistryUrl().get(),
                    options.getSubject().get()))
            .withoutMetadata()
    )
    .apply("Extract GenericRecord",
        MapElements.into(TypeDescriptor.of(GenericRecord.class)).via(KV::getValue)
    )
    .apply(
        "Write data to BQ",
        BigQueryIO
            .<GenericRecord>write()
            .optimizedWrites()
            .useBeamSchema()
            .useAvroLogicalTypes()
            .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_NEVER)
            .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND)
            .withSchemaUpdateOptions(ImmutableSet.of(BigQueryIO.Write.SchemaUpdateOption.ALLOW_FIELD_ADDITION))
            // Temporary location to save files in GCS before loading to BQ
            .withCustomGcsTempLocation(options.getGcsTempLocation())
            .withNumFileShards(options.getNumShards().get())
            .withFailedInsertRetryPolicy(InsertRetryPolicy.retryTransientErrors())
            .withMethod(FILE_LOADS)
            .withTriggeringFrequency(Utils.parseDuration(options.getWindowDuration().get()))
            .to(new TableReference()
                .setProjectId(options.getGcpProjectId().get())
                .setDatasetId(options.getGcpDatasetId().get())
                .setTableId(options.getGcpTableId().get()))
    );
When running, I get the following error:
Exception in thread "main" java.lang.IllegalStateException: Unable to return a default Coder for Extract GenericRecord/Map/ParMultiDo(Anonymous).output [PCollection]. Correct one of the following root causes: No Coder has been manually specified; you may do so using .setCoder().
Inferring a Coder from the CoderRegistry failed: Unable to provide a Coder for org.apache.avro.generic.GenericRecord.
Building a Coder using a registered CoderProvider failed.
How do I set the coder to properly read Avro?
There are at least three approaches to this:
Set the coder inline:
pipeline.apply("Read from Kafka", ....)
.apply("Dropping key", Values.create())
.setCoder(AvroCoder.of(Schema schemaOfGenericRecord))
.apply("Write data to BQ", ....);
Note that the key is dropped because it's unused; with this, you won't need MapElements any more.
Register the coder in the pipeline's instance of CoderRegistry:
pipeline.getCoderRegistry().registerCoderForClass(GenericRecord.class, AvroCoder.of(Schema genericSchema));
Get the coder from the schema registry via:
ConfluentSchemaRegistryDeserializerProvider.getCoder(CoderRegistry registry)
https://beam.apache.org/releases/javadoc/2.22.0/org/apache/beam/sdk/io/kafka/ConfluentSchemaRegistryDeserializerProvider.html#getCoder-org.apache.beam.sdk.coders.CoderRegistry-
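Putting the first and third options together, a rough sketch could look like this (it assumes the key really is raw bytes, so a plain ByteArrayDeserializer is used for it; names like valueDeserializerProvider are illustrative, not from the original code):
// Reuse one deserializer provider both for reading values and for deriving the coder.
ConfluentSchemaRegistryDeserializerProvider<GenericRecord> valueDeserializerProvider =
    ConfluentSchemaRegistryDeserializerProvider.of(
        options.getSchemaRegistryUrl().get(),
        options.getSubject().get());

pipeline
    .apply("Read from Kafka",
        KafkaIO.<byte[], GenericRecord>read()
            .withBootstrapServers(options.getKafkaBrokers().get())
            .withTopics(Utils.getListFromString(options.getKafkaTopics()))
            .withKeyDeserializer(ByteArrayDeserializer.class)
            .withValueDeserializer(valueDeserializerProvider)
            .withoutMetadata())
    .apply("Dropping key", Values.create())
    // Ask the deserializer provider for a coder that matches the registry schema.
    .setCoder(valueDeserializerProvider.getCoder(pipeline.getCoderRegistry()))
    .apply("Write data to BQ", ....); // BigQueryIO.<GenericRecord>write() as before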
I'm using the Jasper REST API v2 client, https://github.com/Jaspersoft/jrs-rest-java-client. I'm trying to create input controls dynamically.
ClientInputControl cliInp = new ClientInputControl();
cliInp.setLabel("FUNCIONARIO_ID_1");
cliInp.setDataType(new ClientDataType().setType(TypeOfDataType.date));
cliInp.setUri("/datatypes/FUNCIONARIO_ID_1");
session.resourcesService().resource("/datatypes").createNew(cliInp);
I need to create this input control so I can add it to my report.
When executing this code I get:
Exception in thread "main" com.jaspersoft.jasperserver.jaxrs.client.core.exceptions.BadRequestException: Bad Request
EDIT
The log files give the following error:
mt error:[{
"message":"The type 0 is invalid",
"errorCode":"illegal.parameter.value.error",
"parameters":
["type",
"0"]
}]
Can someone tell me what I'm doing wrong?
You need to set more values; the missing input control type is what causes the error:
ClientDataType type = new ClientDataType()
    .setLabel("Data")
    .setType(TypeOfDataType.date)
    .setUri("/types");

byte singleValue = 2;

ClientInputControl inputControl = new ClientInputControl()
    .setLabel("Data")
    .setType(singleValue) // this missing parameter is the cause of your error
    .setDataType(type)
    .setUri("/inputs");
I tried to follow the zenTasks tutorial for the play-java framework (I'm using the current Play Framework, which is 2.3.2). When it comes to testing and adding fixtures, I'm kind of lost!
The docs state:
Edit the conf/test-data.yml file and start to describe a User:
- !!models.User
    email: bob#gmail.com
    name: Bob
    password: secret
...
And I should download a sample (which is in fact a dead link!)
So I tried adding more Users myself, like this:
- !!models.User
    email: somemail1#example.com
    loginName: test1
- !!models.User
    email: somemail2#example.com
    loginName: test2
If I then try to load it via
Object load = Yaml.load("test-data.yml");
if (load instanceof List) {
    List list = (List) load;
    Ebean.save(list);
} else {
    Ebean.save(load);
}
I get the following Exception:
[error] Test ModelsTest.createAndRetrieveUser failed: java.lang.IllegalArgumentException: This bean is of type [class java.util.ArrayList] is not enhanced?, took 6.505 sec
[error]     at com.avaje.ebeaninternal.server.persist.DefaultPersister.saveRecurse(DefaultPersister.java:270)
[error]     at com.avaje.ebeaninternal.server.persist.DefaultPersister.save(DefaultPersister.java:244)
[error]     at com.avaje.ebeaninternal.server.core.DefaultServer.save(DefaultServer.java:1610)
[error]     at com.avaje.ebeaninternal.server.core.DefaultServer.save(DefaultServer.java:1600)
[error]     at com.avaje.ebean.Ebean.save(Ebean.java:453)
[error]     at ModelsTest.createAndRetrieveUser(ModelsTest.java:18)
[error] ...
How am I supposed to load more than one User (or whatever object I wish) and parse them without an exception?
In the Ebean class, the save method is overloaded:
save(Object) - expects a parameter which is an entity (extends Model, has the @Entity annotation)
save(Collection) - expects a collection of entities.
The Yaml.load function returns an Object which can be:
an entity
a list of entities
But if we simply do:
Object load = Yaml.load("test-data.yml");
Ebean.save(load);
then the save(Object) method is called. This is because at compile time the compiler doesn't know what exactly Yaml.load will return, so the code above throws the exception posted in the question when there is more than one user in the "test-data.yml" file.
But when we cast the result to List, as in the code provided by the OP, everything works fine: the save(Collection) method is called and all entities are saved correctly. So the code from the question is correct.
I had the same problem with loading data from "test-data.yml", but I found a solution. Here is the solution code: http://kewool.com/2013/07/bugs-in-play-framework-version-2-1-1-tutorial-fixtures/. Note that all Ebean.save calls must be replaced with Ebean.saveAll calls.
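A minimal sketch of that approach (assuming models.User is a regular Ebean entity, the YAML file contains a list as in the question, and an Ebean version that provides saveAll):
// Yaml.load returns an Object; for a list of fixtures, save them with saveAll
// instead of save, as suggested in the linked post.
List<?> users = (List<?>) Yaml.load("test-data.yml");
Ebean.saveAll(users);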
The code is in Scala; it is extremely similar to Java code.
Code that our map indexer uses to create the index: https://gist.github.com/a16e5946b67c6d12b2b8
Utilities that the above code uses to create the index and mapping: https://gist.github.com/4f88033204cd761abec0
Errors that Java gives: https://gist.github.com/d6c835233e2b606a7074
Response of http://elasticsearch.domain/maps/_settings after running the code and getting the errors: https://gist.github.com/06ca7112ce1b01de3944
JSON FILES:
https://gist.github.com/bbab15d699137f04ad87
https://gist.github.com/73222e300be9fffd6380
Attached are the JSON files I'm loading in. I have confirmed that it is loading the right JSON files and properly outputting them as a string into .loadFromSource and .setSource.
Any ideas why it can't find the analyzers even though they are in _settings? If I run these JSON files via curl they work fine and properly set up the mapping.
The code I was using to create the index (found here: Define custom ElasticSearch Analyzer using Java API) was creating settings in the index like:
index.settings.analysis.filter.my_snow.type: "stemmer"
i.e. it had the analysis settings nested under an extra settings path.
I changed my indexing code to the following to fix this:
def createIndex(client: Client, indexName: String, indexFile: String) {
  // Create index
  client.admin().indices().prepareCreate(indexName)
    .setSource(Utils.loadFileAsString(indexFile))
    .execute()
    .actionGet()
}