Unable to retrieve query result from gremlin client in JSON format - java

Case
I am using the Client's submit method, which returns a ResultSet, in Java (provided by the org.apache.tinkerpop:gremlin-driver:3.0.1-incubating dependency) to query a Gremlin server. I need to know how to configure my client to receive the response in JSON format.
What I have done
I have tried using both the GraphSONMessageSerializerV1d0 and GraphSONMessageSerializerGremlinV1d0 serializers, but the response is not valid JSON. This is my gremlin-server.yaml file:
authentication: {className: org.apache.tinkerpop.gremlin.server.auth.AllowAllAuthenticator, config: null}
channelizer: org.apache.tinkerpop.gremlin.server.channel.WebSocketChannelizer
graphs: {graph: src/test/resources/titan-inmemory.properties}
gremlinPool: 8
host: localhost
maxAccumulationBufferComponents: 1024
maxChunkSize: 8192
maxContentLength: 65536
maxHeaderSize: 8192
maxInitialLineLength: 4096
metrics:
  consoleReporter: null
  csvReporter: null
  gangliaReporter: null
  graphiteReporter: null
  jmxReporter: null
  slf4jReporter: {enabled: true, interval: 180000, loggerName: org.apache.tinkerpop.gremlin.server.Settings$Slf4jReporterMetrics}
plugins: [aurelius.titan]
port: 8182
processors: []
resultIterationBatchSize: 64
scriptEngines:
  gremlin-groovy:
    config: null
    imports: [java.lang.Math]
    scripts: [src/test/resources/generate-asset-plus-locations.groovy]
    staticImports: [java.lang.Math.PI]
scriptEvaluationTimeout: 30000
serializedResponseTimeout: 30000
serializers:
  - className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV1d0
    config: {useMapperFromGraph: graph}
  - className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV1d0
    config: {serializeResultToString: true}
  - className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerGremlinV1d0
    config: {useMapperFromGraph: graph}
  - className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV1d0
    config: {useMapperFromGraph: graph}
ssl: {enabled: false, keyCertChainFile: null, keyFile: null, keyPassword: null, trustCertChainFile: null}
threadPoolBoss: 1
threadPoolWorker: 1
writeBufferHighWaterMark: 65536
writeBufferLowWaterMark: 32768
It would be great if someone could help me configure the client side to receive the result in JSON format!

To use GraphSON as the serialization format, you just need to specify it on the Cluster builder:
Cluster cluster = Cluster.build().serializer(Serializers.GRAPHSON_V2D0).create();
But it's worth noting that this won't return you a string of JSON to work with. It tells the server to use JSON as the serialization format, but the driver deserializes that JSON into objects (Maps, Lists, etc.). If you want an actual JSON string, you should return one from the script that you send to the server. Your only other option is to write your own serializer that always preserves the raw string.
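For completeness, a minimal sketch of that first option (assumptions: the server from the question is on localhost:8182, g is bound to a traversal source in the server's init script, and the driver version provides Serializers.GRAPHSON_V2D0; on the 3.0.1-incubating driver the constant would be GRAPHSON_V1D0). The server-side script builds the JSON string itself, so the single result that comes back is already JSON text:

import org.apache.tinkerpop.gremlin.driver.Client;
import org.apache.tinkerpop.gremlin.driver.Cluster;
import org.apache.tinkerpop.gremlin.driver.ser.Serializers;

public class GraphSonClientExample {
    public static void main(String[] args) throws Exception {
        Cluster cluster = Cluster.build("localhost")
                .port(8182)
                .serializer(Serializers.GRAPHSON_V2D0) // GRAPHSON_V1D0 on 3.0.x drivers
                .create();
        Client client = cluster.connect();
        try {
            // groovy.json.JsonOutput serializes the traversal result to a JSON
            // string on the server, so the driver hands back plain JSON text
            // instead of deserialized Maps/Lists.
            String json = client.submit(
                    "groovy.json.JsonOutput.toJson(g.V().limit(5).valueMap().toList())")
                    .one().getString();
            System.out.println(json);
        } finally {
            client.close();
            cluster.close();
        }
    }
}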

Related

Should I customize the Spring gateway key resolver?

I am using the Spring Cloud Gateway ("org.springframework.cloud:spring-cloud-starter-gateway") RequestRateLimiter to limit the request rate. This is the RequestRateLimiter config:
- filters:
    - name: JwtAuthentication
    - name: RequestRateLimiter
      args:
        redis-rate-limiter.replenishRate: 1
        redis-rate-limiter.burstCapacity: 2
  id: time-capsule-service
  order: 2
  predicates:
    - Path=/tik/**
  uri: http://10.111.149.10:11015
Now I am facing a problem: all requests return an HTTP 403 code. I traced the source code and found that the key resolver gets an empty key. Must I customize the key resolver myself? By the way, I have already configured the Redis server address.
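For context: when no KeyResolver bean is defined, Spring Cloud Gateway falls back to PrincipalNameKeyResolver, which resolves an empty key when the exchange has no authenticated principal, and an empty key is denied by default (HTTP 403), which matches the symptom above. A minimal sketch of a custom resolver (the bean name ipKeyResolver and keying on client IP are illustrative choices, not from the original config):

import org.springframework.cloud.gateway.filter.ratelimit.KeyResolver;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import reactor.core.publisher.Mono;

@Configuration
public class RateLimiterConfig {

    // Rate-limit per client IP instead of per authenticated principal.
    @Bean
    public KeyResolver ipKeyResolver() {
        return exchange -> Mono.just(
                exchange.getRequest().getRemoteAddress().getAddress().getHostAddress());
    }
}

The route then references the bean from the filter args, e.g. key-resolver: "#{@ipKeyResolver}" alongside the redis-rate-limiter properties.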

need insight with logstash and elk

I am trying to send data from Logstash to Elasticsearch. Below are the config file and the result on the terminal:
input {
  file {
    path => "F:/business/data/clinical_trials/ctp.csv"
    start_position => "beginning"
    sincedb_path => "C:/Users/matth/Downloads/logstash-7.14.2-windows-x86_641/logstash-7.14.2/data/plugins/inputs/file/.sincedb_88142d557695dc3df93b28d02940763d"
  }
}
filter {
  csv {
    separator => ","
    columns => ["web-scraper-order", "web-scraper-start-url", "First Submitted Date"]
  }
}
output {
  stdout { codec => "rubydebug" }
  elasticsearch {
    hosts => ["https://idm.es.eastus2.azure.elastic-cloud.com:9243"]
    index => "DB"
    user => "elastic"
    password => "******"
  }
}
When I run it, I get the following output. It gets right down to the pipelines, and the next step should be establishing the connection with Elasticsearch, but it hangs on the pipeline forever.
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
C:/Users/matth/Downloads/logstash-7.14.2-windows-x86_641/logstash-7.14.2/vendor/bundle/jruby/2.5.0/gems/bundler-1.17.3/lib/bundler/rubygems_integration.rb:200: warning: constant Gem::ConfigMap is deprecated
Sending Logstash logs to C:/Users/matth/Downloads/logstash-7.14.2-windows-x86_641/logstash-7.14.2/logs which is now configured via log4j2.properties
[2021-09-22T22:40:20,327][INFO ][logstash.runner ] Log4j configuration path used is: C:\Users\matth\Downloads\logstash-7.14.2-windows-x86_641\logstash-7.14.2\config\log4j2.properties
[2021-09-22T22:40:20,333][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.14.2", "jruby.version"=>"jruby 9.2.19.0 (2.5.8) 2021-06-15 55810c552b OpenJDK 64-Bit Server VM 11.0.12+7 on 11.0.12+7 +indy +jit [mswin32-x86_64]"}
[2021-09-22T22:40:20,383][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2021-09-22T22:40:21,625][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2021-09-22T22:40:22,007][INFO ][org.reflections.Reflections] Reflections took 46 ms to scan 1 urls, producing 120 keys and 417 values
[2021-09-22T22:40:22,865][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["https://idm.es.eastus2.azure.elastic-cloud.com:9243"]}
[2021-09-22T22:40:23,032][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://elastic:xxxxxx#idm.es.eastus2.azure.elastic-cloud.com:9243/]}}
[2021-09-22T22:40:23,707][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"https://elastic:xxxxxx#idm.es.eastus2.azure.elastic-cloud.com:9243/"}
[2021-09-22T22:40:24,094][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch version determined (7.14.1) {:es_version=>7}
[2021-09-22T22:40:24,096][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2021-09-22T22:40:24,277][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>12, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>1500, "pipeline.sources"=>["C:/Users/matth/Downloads/logstash-7.14.2-windows-x86_641/logstash-7.14.2/config/logstash-sample.conf"], :thread=>"#<Thread:0x52ff5a7a run>"}
[2021-09-22T22:40:24,346][INFO ][logstash.outputs.elasticsearch][main] Using a default mapping template {:es_version=>7, :ecs_compatibility=>:disabled}
[2021-09-22T22:40:24,793][INFO ][logstash.javapipeline ][main] Pipeline Java execution initialization time {"seconds"=>0.51}
[2021-09-22T22:40:24,835][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
[2021-09-22T22:40:24,856][INFO ][filewatch.observingtail ][main][c4df97cc309d579dc89a5115b7fcf43a440ef876f640039f520f6b116b913f49] START, creating Discoverer, Watch with file and sincedb collections
[2021-09-22T22:40:24,876][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
Any help as to why my data isn't getting to Elasticsearch would be appreciated!

kafka connect JdbcSourceConnector deserialization issue

I'm using Kafka Connect to connect to a database in order to store info on a compacted topic, and am having deserialization issues when trying to consume the topic in a Spring Cloud Stream application.
connector config:
{
  "name": "my-connector",
  "config": {
    "name": "my-connector",
    "poll.interval.ms": "86400000",
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "connection.url": "oracle-jdbc-string",
    "connection.user": "testid",
    "connection.password": "test",
    "catalog.pattern": "mySchema",
    "table.whitelist": "MY_TABLE",
    "table.types": "TABLE",
    "mode": "bulk",
    "numeric.mapping": "best_fit",
    "transforms": "createKey, extractCode, UpdateTopicName",
    "transforms.UpdateTopicName.type": "org.apache.kafka.connect.transforms.RegexRouter",
    "transforms.extractCode.type": "org.apache.kafka.connect.transforms.ExtractField$Key",
    "transforms.extractCode.field": "ID",
    "transforms.createKey.type": "org.apache.kafka.connect.transforms.ValueToKey",
    "transforms.createKey.fields": "ID",
    "transforms.UpdateTopicName.regex": "(.*)",
    "transforms.UpdateTopicName.replacement": "my_topic",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "value.converter": "io.confluent.connect.json.JsonSchemaConverter",
    "value.converter.schema.registry.url": "http://schema-registry:8081",
    "topic.prefix": "nt_mny_"
  }
}
The connector appears to be working fine, putting the appropriate messages on the topic. A sample message looks like this when using kafka-console-consumer:
kafka-console-consumer --bootstrap-server localhost.ntrs.com:9092 --topic nt_mny_ece_alert_avro --from-beginning --property print.key=true | jq '.'
7247
0
{
  "ID": 7247,
  "USER_SK": 5623,
  "TYP_CDE": "TEST",
  "ALRT_ACTIVE_FLAG": "Y",
  "ALRT_DESC": "My Alert",
  "ALRT_STATUS": "VISIBLE",
  "CREAT_BY": "ME",
  "CREAT_TM": 1593547299565,
  "UPD_BY": "ME",
  "UPD_TM": 1593547299565
}
I'm wondering if the 0 printed between the key and value is the issue or just Kafka noise.
The error I see in the code is:
org.springframework.messaging.converter.MessageConversionException: Could not read JSON: Invalid UTF-32 character 0x17a2241 (above 0x0010ffff) at char #1, byte #7); nested exception is java.io.CharConversionException: Invalid UTF-32 character 0x17a2241 (above 0x0010ffff) at char #1, byte #7)
and my processor/sink code is relatively simple.
@StreamListener
public void process(
        @Input(MyAlertsProcessor.MY_ALERT_AVRO) KStream<String, Json> myAlertKconnectStream) {
    myAlertKconnectStream.peek((key, value) -> {
        System.out.println("HELOOOOOO");
        logger.debug("ece/pre: key={}, value={}", key, value);
    });
}
I've spent days trying to figure this out with little to show for it; any help is appreciated!
You're using the JSON Schema converter (io.confluent.connect.json.JsonSchemaConverter), not the JSON converter (org.apache.kafka.connect.json.JsonConverter).
The JSON Schema converter uses the Schema Registry to store the schema, and puts information about it in the first few bytes of the message. That's what's tripping up your code (Could not read JSON: Invalid UTF-32 character 0x17a2241 (above 0x0010ffff) at char #1, byte #7)).
So either use the JSON Schema deserialiser in your code (better), or switch to using the org.apache.kafka.connect.json.JsonConverter converter (less preferable; you throw away the schema then).
More details: https://rmoff.net/2020/07/03/why-json-isnt-the-same-as-json-schema-in-kafka-connect-converters-and-ksqldb-viewing-kafka-messages-bytes-as-hex/
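A hedged sketch of the first option (assuming Confluent's io.confluent:kafka-json-schema-serializer artifact is on the classpath; shown as plain consumer properties rather than Spring Cloud Stream binder syntax):

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;

public class JsonSchemaConsumerProps {
    public static Properties consumerProps() {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost.ntrs.com:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-alert-consumer");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                StringDeserializer.class.getName());
        // KafkaJsonSchemaDeserializer understands the magic byte + schema ID
        // that JsonSchemaConverter prepends to each value; a plain JSON parser
        // reads those bytes as the bogus "Invalid UTF-32 character".
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "io.confluent.kafka.serializers.json.KafkaJsonSchemaDeserializer");
        props.put("schema.registry.url", "http://schema-registry:8081");
        return props;
    }
}

In Spring Cloud Stream the same deserializer can be configured on the binder's consumer properties; the key point is that whatever deserializes the value must be schema-registry-aware.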

MQ7 with Java 7 and SSL is not working; it was working 6 months ago

We have one QM and one CHANNEL, with many QUEUES created for clients. Around 5 clients are connected to this QM for their transactions, each client connected to its respective QUEUE. There is a JKS file created on this QM for the SSL connection. Each of the 5 clients connects with the JKS file + SSL_RSA_WITH_RC4_128_SHA from their Java client. The QM is also configured with SSLCIPH(RC4_SHA_US).
Now, all of a sudden, without any Java client change, 1 client is not able to connect to the configured QM. All the others are able to connect to the same QM without any issue.
AMQERR01.LOG is not logged with any specific exception or error.
The application logs show a common MQ exception:
Error as com.ibm.mq.MQException: MQJE001: Completion Code '2', Reason '2397'
2397 - is a cipher spec/suite mismatch a possibility?
We enabled tracing (strmqtrc -m TEST.QM -t detail -t all) and saw trace logs in the path (C:\Program Files (x86)\IBM\Websphere MQ\trace), but could not get any details on why the SSL connection is not happening.
We did one more exercise: we created a new QM for the problem client and tested without SSL, and it worked. When we enabled SSL on the new QM and Java client, the same 2397 started logging.
Could someone guide me toward better logging and tracing in MQ, to see why 2397 is thrown?
Could someone guide me toward better logging and tracing in Java using -Djavax.net.debug=all, to see why 2397 is thrown? (See the sketch after the trace output below.)
MQ version: 7
MQ server: Windows
From the trace logs:
returning TEST.QM
Freeing cbmindex:0 pointer:24DDB540 length:2080
-----} TreeNode.getMQQmgrExtObject (rc=OK)
cbmindex:10
-------------} xcsFreeMemFn (rc=OK)
------------} amqjxcoa.wmqGetAttrs (rc=OK)
-----{ UiQueueManager.testQmgrAttribute
-------------{ Message.getMessage
testing object 'TEST.QM'
An internal method detected an unexpected system return code. The method {0} returned {1}. (AMQ4580)
checking attribute 'QmgrCmdLevelGreaterThan'
-------------} Message.getMessage (rc=OK)
for value '510'
-----------}! NativeCalls.getAttrs (rc=Unknown(C35E))
-----} UiQueueManager.testQmgrAttribute (rc=OK)
Message = An internal method detected an unexpected system return code. The method wmq_get_attrs returned "retval.rc2 = 268460388". (AMQ4580), msgID = AMQ4580, rc = 50014, reason = 268460388, severity = 30
result = true
---} TreeNode.testAttribute (rc=OK)
---{ TreeNode.testAttribute
-----{ QueueManagerTreeNode.toString
-----} QueueManagerTreeNode.toString (rc=OK)
testing object 'TEST.QM'
checking attribute 'OamTreeNode'
-----------{ NativeCalls.getAttrs
------------{ amqjxcoa.wmqGetAttrs
qmgr:2A7B32C8, stanza:2A7B32C4, version:1
for value 'true'
QMgrName('TEST.QM')
-----{ TreeNode.getMQQmgrExtObject
StanzaName('QMErrorLog')
testing object 'TEST.QM'
Full QM.INI filename: SOFTWARE\IBM\MQSeries\CurrentVersion\Configuration\QueueManager\TEST!QM, Multi-Instance: FALSE
--------------} xcsGetIniFilename (rc=OK)
--------------{ xcsGetIniAttrs
---------------{ xcsBrowseIniCallback
FileType = (1)
----------------{ xcsBrowseRegistryCallback
xcsBrowseRegistryCallback
-----------------{ xusAddStanzaLineList
------------------{ xcsGetMemFn
checking attribute 'PluginEnabled'
component:24 function:15 length:2080 options:0 cbmindex:0 *pointer:24DDB540
------------------} xcsGetMemFn (rc=OK)
for value 'com.ibm.mq.explorer.oam'
RetCode (OK)
-----------------} xusAddStanzaLineList (rc=OK)
-----------------{ xusAddStanzaLineList
------------------{ xcsGetMemFn
-----{ UiPlugin.isPluginEnabled
component:24 function:15 length:2080 options:0 cbmindex:1 *pointer:24DDDFE8
------------------} xcsGetMemFn (rc=OK)
RetCode (OK)
-----------------} xusAddStanzaLineList (rc=OK)
testing plugin_id: com.ibm.mq.explorer.oam
-----------------{ xurGetSpecificRegStanza
-------{ PluginRegistrationManager.isPluginEnabled
Couldn't open key (QMErrorLog) result 2: The system cannot find the file specified.
MQ version: 7.0.1.9
JDK: jdk1.8.0_181-i586
com.ibm.mq*.jar version:
Specification-Version: 6.0.2.1
Implementation-Version: 6.0.2.1-j600-201-070305
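On the Java side, the requested -Djavax.net.debug=all tracing can be enabled on the command line or programmatically before connecting; the handshake dump shows exactly which cipher suites each side offers. One lead worth checking: the environment above shows jdk1.8.0_181, and modern Java 8 builds no longer enable RC4 cipher suites by default (see the jdk.tls.disabledAlgorithms security property), which could produce exactly this kind of sudden 2397 without any client code change. A minimal sketch (hypothetical host/channel names; the classes are from the IBM MQ classes for Java):

import com.ibm.mq.MQEnvironment;
import com.ibm.mq.MQException;
import com.ibm.mq.MQQueueManager;

public class MqSslDebug {
    public static void main(String[] args) throws MQException {
        // Equivalent to -Djavax.net.debug=all; dumps the TLS handshake to stderr.
        System.setProperty("javax.net.debug", "all");

        MQEnvironment.hostname = "mq-host";
        MQEnvironment.port = 1414;
        MQEnvironment.channel = "MY.SSL.CHANNEL";
        // Must correspond to the channel's SSLCIPH setting on the queue manager.
        MQEnvironment.sslCipherSuite = "SSL_RSA_WITH_RC4_128_SHA";

        MQQueueManager qm = new MQQueueManager("TEST.QM");
        System.out.println("Connected: " + qm.isConnected());
        qm.disconnect();
    }
}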

dropwizard integration test gives a ConfigurationParsingException

I'm currently doing an integration test with the Dropwizard framework and embedded Mongo, but when I execute the tests, I always get this exception:
com.example.pointypatient.integration.PointyPatientApplicationTest Time elapsed: 3.054 sec <<< ERROR!
java.lang.RuntimeException: io.dropwizard.configuration.ConfigurationParsingException: dropwizard-angular- example-master\target\test-classes\integration.yml has an error:Configuration at dropwizard-angular-example-master\target\test-classes\integration.yml must not be empty
and my integration.yml is here:
dbConfig:
  host: localhost
  port: 12345
  dbName: test

server:
  applicationConnectors:
    - type: http
      port: 8080
    - type: https
      port: 8443
      keyStorePath: example.keystore
      keyStorePassword: example
      validateCerts: false
  adminConnectors:
    - type: http
      port: 8081
    - type: https
      port: 8444
      keyStorePath: example.keystore
      keyStorePassword: example
      validateCerts: false

# Logging settings.
logging:
  # The default level of all loggers. Can be OFF, ERROR, WARN, INFO, DEBUG, TRACE, or ALL.
  level: INFO
  appenders:
    - type: console
Thank you for any help.
From ConfigurationFactory.java, this exception is thrown only if the method readTree from ObjectMapper returns null.
Look at the ConfigurationSourceProvider derived class that you are passing to the build method, as it may not be handling IOException properly (I assume you are using a mocked one);
Look at the path argument: the error message shows "dropwizard-angular- example-master\target\test-classes\integration.yml", which has an empty space just after "dropwizard-angular-"; make sure you are passing "dropwizard-angular-example-master\target\test-classes\integration.yml" instead.
If your Dropwizard config file (i.e. integration.yml) is in the test/resources directory, you don't need to use ResourceHelpers.resourceFilePath("integration.yml") to pass the full config path into DropwizardAppExtension as a param; you should only pass its name (e.g. "integration.yml"):
private static DropwizardAppExtension<MyDropwizardConfiguration> RULE = new DropwizardAppExtension<>(
MyDropwizardApplication.class, "integration.yml");
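For context, a hedged sketch of how that looks wired into a JUnit 5 test, following the advice above (MyDropwizardApplication and MyDropwizardConfiguration are the placeholder names from the snippet; the test class name matches the one in the error message):

import io.dropwizard.testing.junit5.DropwizardAppExtension;
import io.dropwizard.testing.junit5.DropwizardExtensionsSupport;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import static org.junit.jupiter.api.Assertions.assertEquals;

@ExtendWith(DropwizardExtensionsSupport.class)
class PointyPatientApplicationTest {

    // "integration.yml" is picked up by name, per the answer above,
    // when it sits in src/test/resources.
    private static final DropwizardAppExtension<MyDropwizardConfiguration> APP =
            new DropwizardAppExtension<>(MyDropwizardApplication.class, "integration.yml");

    @Test
    void applicationStarts() {
        // The admin connector port comes from the integration.yml shown above.
        assertEquals(8081, APP.getAdminPort());
    }
}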
