This is a Neo4j REST API call related error. From my Java code I'm making a REST API call to a remote Neo4j database by passing a query and parameters. The query being executed is below:
MERGE (s:Sequence {name:'CommentSequence'}) ON CREATE SET s.current = 1 ON MATCH SET s.current=s.current+1 WITH s.current as sequenceCounter MERGE (cmnt01:Comment {text: {text}, datetime:{datetime}, type:{type}}) SET cmnt01.id = sequenceCounter WITH cmnt01 MATCH (g:Game {game_id:{gameid}}),(b:Block {block_id:{bid}, game_id:{gameid}}),(u:User {email_id:{emailid}}) MERGE (b)-[:COMMENT]->(cmnt01)<-[:COMMENT]-(u)
Basically this query generates a sequence number at run time and sets the 'CommentId' property of the Comment node to this sequence number before attaching the comment node to a game's block, i.e. for every comment added by the user I'm adding a sequence number as its id.
This works for about 90% of cases, but there are a couple of cases a day when it fails with the error below:
ERROR com.exectestret.dao.BaseGraphDAO - Query execution error: Error reading as JSON ''
Why does the Neo4j query not return a proper error code? It just says "Error reading as JSON ''".
Neo4j version:
Neo4j Community Edition 2.2.1
Thanks,
Deepesh
It gets HTML back and can't read it as JSON, but it should output the failing HTML. Can you check the log output for that and share it too?
Also check your graph.db/messages.log and data/log/console.log for any error messages.
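For reference, here is a minimal sketch of how the raw response body could be captured and logged when it is not JSON, so the failing HTML actually becomes visible; the endpoint URL, class name, and use of the transactional Cypher endpoint are placeholders/assumptions, not taken from the original code.

import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.stream.Collectors;

public class Neo4jRestDebug {

    // Placeholder endpoint; the real URL and credentials come from the application's configuration.
    private static final String CYPHER_ENDPOINT = "http://localhost:7474/db/data/transaction/commit";

    public static String postCypher(String jsonPayload) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(CYPHER_ENDPOINT).openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Accept", "application/json");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        conn.getOutputStream().write(jsonPayload.getBytes(StandardCharsets.UTF_8));

        int status = conn.getResponseCode();
        String contentType = conn.getContentType();
        InputStream body = (status >= 400 && conn.getErrorStream() != null)
                ? conn.getErrorStream() : conn.getInputStream();
        String responseText = new BufferedReader(new InputStreamReader(body, StandardCharsets.UTF_8))
                .lines().collect(Collectors.joining("\n"));

        // If the server (or an intermediate proxy) answered with HTML instead of JSON,
        // log the whole body so the actual failure page is visible, rather than the bare
        // "Error reading as JSON ''" produced by the JSON parser.
        if (contentType == null || !contentType.contains("application/json")) {
            System.err.println("Non-JSON response (HTTP " + status + ", Content-Type " + contentType + "):");
            System.err.println(responseText);
        }
        return responseText;
    }
}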
Related
I have a job design as attached, and I am trying to extract the results from a trestapi call and use tReplicate & tNormalize to display the unique combinations of the executionID & triggerTimestamp columns, but for some reason the results always get duplicated after extraction.
Please suggest what needs to be fixed:
I would like to capture the results in context variables and perform manipulation for every iteration of the executionID & triggerTime column combination.
tjava3:
context.testexecutionID=((String)globalMap.get("executionID"));
context.triggerTimestamp=((String)globalMap.get("triggerTimestamp"));
context.executionID=context.testexecutionID.substring(5, context.testexecutionID.lastIndexOf("\"")-3);
context.triggerTime=context.triggerTimestamp.substring(5, context.triggerTimestamp.lastIndexOf("\"")-3);
Since the iteration results are not as expected, I am getting an unparseable data error in tjava3.
Any help would be appreciated!
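For what it's worth, here is a standalone sketch of the deduplication the job appears to be aiming for, i.e. keeping only the unique executionID / triggerTimestamp combinations; the sample values and the idea of collecting rows into a list are assumptions, since the actual REST payload isn't shown.

import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class UniqueCombinations {

    public static void main(String[] args) {
        // Hypothetical rows as they might come out of the REST extraction;
        // in the job these values would arrive via the flow or globalMap.
        List<String[]> rows = Arrays.asList(
                new String[] {"exec-001", "2021-05-01T10:00:00Z"},
                new String[] {"exec-001", "2021-05-01T10:00:00Z"},   // duplicate
                new String[] {"exec-002", "2021-05-01T11:30:00Z"});

        // A LinkedHashSet keyed on the combined value keeps the first occurrence
        // of each executionID + triggerTimestamp pair and drops the repeats.
        Set<String> seen = new LinkedHashSet<>();
        for (String[] row : rows) {
            String key = row[0] + "|" + row[1];
            if (seen.add(key)) {
                System.out.println("executionID=" + row[0] + ", triggerTimestamp=" + row[1]);
            }
        }
    }
}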
I'm implementing quite a complex search using the MarkLogic Java API. I would like to enable relevance-trace to see how my results are scored. Unfortunately, I don't know how to enable it in the Java API. I have tried something like:
DatabaseClient client = initClient();
var qmo = client.newServerConfigManager().newQueryOptionsManager();
var searchOptions = "<search:options xmlns=\"http://marklogic.com/appservices/search\">\n"
+ " <search-option>relevance-trace</search-option>\n"
+ " </search:options>";
qmo.writeOptions("searchOptions", new StringHandle(searchOptions).withFormat(Format.XML));
QueryManager qm = client.newQueryManager();
StructuredQueryBuilder qb = qm.newStructuredQueryBuilder("searchOptions");
// query definition
qm.search(query, new SearchHandle());
Unfortunately it ends up with following error:
"Local message: /config/query write failed: Internal Server Error. Server Message: XDMP-DOCNONSBIND:
xdmp:get-request-body(\"xml\") -- No namespace binding for prefix search at line 1 . See the
MarkLogic server error log for further detail."
My question is how to use search options in the MarkLogic API; in particular, I'm interested in relevance-trace and simple-score.
Update 1
As suggested by @Jamess Kerr I have changed my options to
var searchOptions = "<options xmlns=\"http://marklogic.com/appservices/search\">\n"
+ " <search-option>relevance-trace</search-option>\n"
+ " </options>";
but unfortunately it still doesn't work. After that change I get the error:
Local message: /config/query write failed: Internal Server Error. Server Message: XDMP-UPDATEFUNCTIONFROMQUERY: xdmp:apply(function() as item()*) -- Cannot apply an update function from a query . See the MarkLogic server error log for further detail.
Your search options XML uses the search: namespace prefix, but you don't define that prefix. Since you are setting the default namespace, just drop the search: prefix from the search:options opening and closing tags.
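For reference, a minimal sketch of the full options write with the prefix dropped, based on the code from the question; the options name "searchOptions" and the rest of the setup are carried over unchanged. (As the question's Update 1 notes, a further server-side error was still reported after this change.)

import com.marklogic.client.DatabaseClient;
import com.marklogic.client.admin.QueryOptionsManager;
import com.marklogic.client.io.Format;
import com.marklogic.client.io.StringHandle;

public class WriteRelevanceTraceOptions {

    public static void writeOptions(DatabaseClient client) {
        QueryOptionsManager qmo = client.newServerConfigManager().newQueryOptionsManager();

        // Default namespace declared on <options>, so no "search:" prefix is needed;
        // the child elements inherit the same namespace.
        String searchOptions =
                "<options xmlns=\"http://marklogic.com/appservices/search\">\n"
              + "  <search-option>relevance-trace</search-option>\n"
              + "</options>";

        qmo.writeOptions("searchOptions", new StringHandle(searchOptions).withFormat(Format.XML));
    }
}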
The original Java query contains both syntactic and semantic issues:
First of all, it is not a valid MarkLogic query in the sense that it contains only the query option(s) portion. Omitting the namespace binding for the prefix is another mistake.
To tweak your original query, replace the search text between the search:qtext tags (the pink line in the screenshot) and run the query.
Result:
Matched and Listing 2 documents:
Matched 1 locations in /medals/coin_1333113127296.xml with 94720 score:
73. …pulsating maple leaf coin another world-first, the [Royal Canadian Mint]is proud to launch a numismatic breakthrough from its ambitious and creative R&D team...
Matched 1 locations in /medals/coin_1333078361643.xml with 94720 score:
71. ...the [Royal Canadian Mint]and Royal Australian Mint have put an end to the dispute relating to the circulation coin colouring process...
To put it into context: without a semantic criterion, your original query is the equivalent of removing the search:qtext and performing a fuzzy search.
Note:
If you use serialised term search or search constraints instead of text search, you should get higher score results.
MarkLogic Java API operates in unfiltered mode by default, while the cts:search operates in filtered mode by default. Just be mindful of how you construct the query and the expected score in Java API.
The Java API is really intended for bulk data write/extract/transformation. qconsole is, in my opinion, better suited to tuning a specific query and gathering search score, relevance and computation details.
I have a field which contains forward slashes. I'm trying to execute this Query:
QueryBuilders.termQuery("id", QueryParser.escape("/my/field/val"))
and I cannot get any results. When I look for 'val' only, I get the proper results. Any ideas why that is happening? Without escaping it also doesn't return any results.
UPDATE
So QueryParser.escape escapes the string properly, but when the request goes to Elasticsearch it is double-escaped:
[2015-07-10 01:53:00,063][WARN ][index.search.slowlog.query] [Aaa AA] [index_name][4] took[420.8micros], took_millis[0], types[page], stats[], search_type[QUERY_THEN_FETCH], total_shards[5], source[{"query":{"term":{"pageId":"\\/path\\/and\\/testestest"}}}], extra_source[],
UPDATE 2: It works when I'm using a query string query, but I wouldn't like to use that and type everything by hand.
You might have to use _id instead of id.
So the reason I didn't get any results is the default settings of the index I had created.
I didn't specify a mapping for my field, so Elasticsearch analyzed it with the default analyzer.
In the Elasticsearch documentation I read that during the analysis process Elasticsearch splits the string into words, lower-cases them and does some other processing.
In my case "/path/in/my/field" was split into four terms:
path
in
my
field
So when I was searching for "pageId:/path/in/my/field" I didn't get any results, because pageId in fact didn't contain it.
To solve the issue I had to add a proper mapping to the pageId field that doesn't do any preprocessing (instead of four words, I now have the single term "/path/in/my/field").
Links to docs:
https://www.elastic.co/guide/en/elasticsearch/guide/current/analysis-intro.html
https://www.elastic.co/guide/en/elasticsearch/guide/current/mapping-intro.html
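For illustration, here is a sketch of how such a mapping (one that keeps pageId as a single, not-analyzed term) could be created from the pre-2.x Java client in use at the time of the question; the index and type names are assumptions.

import org.elasticsearch.client.Client;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentFactory;

public class PageIdMapping {

    // Index name "index_name" and type "page" are hypothetical; adjust to the real ones.
    public static void putMapping(Client client) throws Exception {
        XContentBuilder mapping = XContentFactory.jsonBuilder()
                .startObject()
                    .startObject("page")
                        .startObject("properties")
                            .startObject("pageId")
                                .field("type", "string")
                                // not_analyzed keeps "/path/in/my/field" as a single term
                                .field("index", "not_analyzed")
                            .endObject()
                        .endObject()
                    .endObject()
                .endObject();

        client.admin().indices()
                .preparePutMapping("index_name")
                .setType("page")
                .setSource(mapping)
                .get();
    }
}

With an unanalyzed field, a term query can be issued against the raw value without any escaping. Note that changing the mapping of an existing field generally requires recreating the index (or reindexing), rather than updating the mapping in place.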
Consider the following scenario.
I have a Java application which uses an Oracle database to store some status codes and messages.
Example: I have a patient record that is processed in 3 layers (say 1. receiving class, 2. translation class, 3. sending class). We store data in the database at each layer. When we run a query, it shows something like this:
Name Status Status_Message
XYZ 11 XML message received
XYZ 21 XML message translated to swift format
XYZ 31 Completed message send to destination
ABC 11 XML message received
ABC 21 XML message translated to swift format
ABC 91 Failed message send to destination
In the Java class I am executing the below query to get the last status message.
select STATUS_MESSAGE from INTERFACE_MESSAGE_STATUS
where NAME = ? order by STATUS
I publish this status message on a webpage. But my problem is that I am not getting the last status message; the behaviour is inconsistent. It sometimes prints "XML message received", sometimes "XML message translated to swift format", etc.
But I want to publish the last status like "Completed message send to destination" or "Failed message send to destination" depending on the last status. How can I do that? Please suggest.
You can use a query like this:
select i.STATUS_MESSAGE
from INTERFACE_MESSAGE_STATUS i,
     (select max(STATUS) as last_status
      from INTERFACE_MESSAGE_STATUS
      where NAME = ?) s
where i.NAME = ?
  and i.STATUS = s.last_status
In the above example I am assuming that the row with the highest STATUS value is the last status.
I would recommend creating a view out of this select query and then using that in your codebase. It is much easier to read and makes it possible to select on multiple last statuses without complicating your queries too much.
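As a rough sketch, the query could be executed from the Java side like this, with the same name bound to both placeholders; the connection handling and wrapper class shown here are assumptions, while the table and column names follow the question.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class LastStatusDao {

    private static final String LAST_STATUS_SQL =
            "select i.STATUS_MESSAGE "
          + "from INTERFACE_MESSAGE_STATUS i, "
          + "     (select max(STATUS) as last_status "
          + "      from INTERFACE_MESSAGE_STATUS "
          + "      where NAME = ?) s "
          + "where i.NAME = ? and i.STATUS = s.last_status";

    public static String findLastStatusMessage(Connection conn, String name) throws Exception {
        try (PreparedStatement ps = conn.prepareStatement(LAST_STATUS_SQL)) {
            // The same name is bound twice: once for the inner max() and once for the outer filter.
            ps.setString(1, name);
            ps.setString(2, name);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString("STATUS_MESSAGE") : null;
            }
        }
    }
}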
You have no explicit ordering specified. As the data is stored in a heap in Oracle, there is no specific order; in other words, many factors influence which row you get. Only an explicit ORDER BY guarantees the desired order, possibly combined with an index on the relevant columns.
My suggestion: add a date_created column to your table and sort based on that.
The scenario:
I have a servlet that receives XMLs, parses them (using JAXB), persists the parsed data to a MySQL DB (using Hibernate) and also saves a copy of each XML for future reference.
It saves the XML even when parsing fails.
In those cases I receive an email with a summary of the error and then check the saved XML for clues as to what went wrong.
The operation runs pretty smoothly. The servlet receives a couple of thousand XMLs per day.
The problem:
At least once a day I get an error like this:
org.hibernate.exception.DataException: could not insert ..........
Caused by: com.mysql.jdbc.MysqlDataTruncation: Data truncation: Incorrect datetime value: '20122012-01-22 15:20:51' for column 'createdAt' at row 1
I get this error for some other "columns" as well.
These columns are of type datetime on the MySQL side and java.sql.Timestamp on the Java side.
When I take a look at the XML that was received, I see the correct date format: "2012-01-22 15:20:51".
Any idea what could have gone wrong?
I haven't gotten this error lately.
I recently took care of a concurrency issue with SimpleDateFormat usage, so maybe that was the problem.
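That explanation is plausible: SimpleDateFormat is not thread-safe, and sharing one instance across servlet threads can produce exactly this kind of mangled value (such as the duplicated year above). Below is a minimal sketch of one common fix, giving each thread its own formatter; the class name, field name and date pattern are illustrative, not taken from the original code.

import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

public class TimestampParser {

    // One SimpleDateFormat per thread, because the class itself is not thread-safe.
    private static final ThreadLocal<SimpleDateFormat> FORMAT =
            ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyy-MM-dd HH:mm:ss"));

    public static java.sql.Timestamp parse(String value) throws ParseException {
        Date parsed = FORMAT.get().parse(value);
        return new java.sql.Timestamp(parsed.getTime());
    }
}

Creating a new SimpleDateFormat per call, or synchronizing access to a shared instance, would avoid the race as well; the ThreadLocal approach simply avoids both the contention and the per-call allocation.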