The PhraseSuggestionBuilder in version 1.6.0 of the Elasticsearch Java API has a collateQuery method that takes a String.
builder
.collateQuery("\"match\": {\"title\" : \"{{suggestion}}\"}")
.collatePrune(true);
Unfortunately, the builder escapes the quotes in this String when it generates the JSON, producing output like this:
"collate" : {
"query" : "\"match\": {\"title\" : \"{{suggestion}}\"}",
"prune" : true
}
Does anyone have any ideas how I can stop this String from being escaped as the JSON is generated?
Thanks.
Can anyone guide me on how to replace these characters (‘ ’) using Groovy or Java?
When I try the code below (I assumed these were ordinary single quotes), it doesn't work.
def a = "‘NOA’,’CTF’,’CLM’"
def rep = a.replaceAll("\'","")
My expected output: NOA,CTF,CLM
Those are curly quotes in your source text. Your replaceAll is replacing straight quotes.
You should have copy-pasted the characters from your source.
System.out.println(
"‘NOA’,’CTF’,’CLM’"
.replaceAll( "‘" , "" )
.replaceAll( "’" , "" )
);
See this code run live at OneCompiler.
NOA,CTF,CLM
I would suggest this:
a.replaceAll("[‘’]", "")
or, even better, escape the Unicode characters in the source code:
a.replaceAll("[\u2018\u2019]", "")
This question already has answers here:
How to split a string, but also keep the delimiters?
I have a string "role1#role2#role3#role4$arole" separated with the delimiters # and $. I used the Java code below:
String str = "role1#role2#role3#role4$arole";
String[] values = StringUtils.splitPreserveAllTokens(str, "\\#\\$");
for (String value : values) {
System.out.println(value);
}
And got this result:
role1
role2
role3
role4
arole
But my requirement is to preserve the delimiters in the result, so the output needs to be:
role1
#role2
#role3
#role4
$arole
I looked through the Apache Commons StringUtils methods for a way to do that, but could not find one.
Is there any library class that produces the intended results above?
You may use a simple split with a positive lookahead:
String str = "role1#role2#role3#role4$arole";
String[] res = str.split("(?=[#$])");
System.out.println(Arrays.toString(res));
// => [role1, #role2, #role3, #role4, $arole]
See the Java demo
The (?=[#$]) regex matches any location in the string that is followed by a # or $ symbol (note that the $ does not have to be escaped inside a [...] character class).
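As a fully self-contained sketch (the class name is just for illustration):
import java.util.Arrays;

public class SplitKeepDelimiters {
    public static void main(String[] args) {
        String str = "role1#role2#role3#role4$arole";
        // Split at every position that is immediately followed by '#' or '$',
        // so each delimiter stays attached to the token that follows it.
        String[] res = str.split("(?=[#$])");
        System.out.println(Arrays.toString(res));
        // => [role1, #role2, #role3, #role4, $arole]
    }
}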
I am working on a DSL in which I need to call Java functions I have written. How can they be called from Xtext grammar definition rules?
Example
Sample.xtext
Data:
'Data'':'
(objectRules += ObjectRule)+ //Call to Java Function here
;
I am writing the grammar and I want to invoke a Java function to perform further processing, such as pasting a block of code when the rule is encountered.
Please refer to the documentation on Xbase if you want to use Java from your Xtext languages.
The pattern would be something like this:
grammar org.acme.MyDsl with org.eclipse.xtext.xbase.Xbase
generate ..
MyConcept:
operation=ID '(' ')' body = XBlockExpression
;
This would allow things like:
myOperation() {
System.out.println("")
}
The code is in Scala. It is extremely similar to Java code.
Code that our map indexer uses to create index: https://gist.github.com/a16e5946b67c6d12b2b8
Utilities that the above code uses to create index and mapping: https://gist.github.com/4f88033204cd761abec0
Errors that java gives: https://gist.github.com/d6c835233e2b606a7074
Response of http://elasticsearch.domain/maps/_settings after running code and getting errors: https://gist.github.com/06ca7112ce1b01de3944
JSON FILES:
https://gist.github.com/bbab15d699137f04ad87
https://gist.github.com/73222e300be9fffd6380
Attached are the JSON files I'm loading in. I have confirmed that the code is loading the right JSON files and properly passing them as strings into .loadFromSource and .setSource.
Any ideas why it can't find the analyzers even though they are in _settings? If I load these JSON files via curl, they work fine and properly set up the mapping.
The code I was using to create the index (found here: Define custom ElasticSearch Analyzer using Java API) was creating settings in the index like:
"index.settings.analysis.filter.my_snow.type: "stemmer","
It had settings in the setting path.
I changed my indexing code to the following to fix this:
def createIndex(client: Client, indexName: String, indexFile: String) {
//Create index
client.admin().indices().prepareCreate(indexName)
.setSource(Utils.loadFileAsString(indexFile))
.execute()
.actionGet()
}
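For reference, a rough plain-Java sketch of the same fix (the class name is illustrative, and the file loading is inlined instead of using the Utils helper from the gist):
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import org.elasticsearch.client.Client;

public class IndexCreator {
    // Create the index from a single JSON source containing both settings and mappings,
    // so the analyzers end up under index.analysis rather than index.settings.analysis.
    public static void createIndex(Client client, String indexName, String indexFile) throws IOException {
        String source = new String(Files.readAllBytes(Paths.get(indexFile)), StandardCharsets.UTF_8);
        client.admin().indices()
              .prepareCreate(indexName)
              .setSource(source)
              .execute()
              .actionGet();
    }
}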
How do I submit JSON to SOLR using Zend_Http_Client?
Assume the JSON I am using is the following (it was taken from the SOLR wiki, so I assume it is right).
$JSON ='[{"id" : "3", "title" : "test3","description":"toottoto totot ototot "}]';
I see no error in the SOLR error log. This is the code I use to submit:
DOES NOT WORK
$url = 'http://localhost:8983/solr/update/json';
$Client = new Zend_Http_Client($url);
$Client->resetParameters();
$Client->setMethod(Zend_Http_Client::POST);
$Client->setHeaders('Content-type','application/json');
$Client->setParameterPost($JSON);//***** WRONG *****
$Client->setRawData($JSON); //***** RIGHT (FROM THE ANSWER BELOW), STILL NEED TO ENCODE IT! *****
$response = $Client->request();
THIS WORKS FROM THE COMMAND LINE!
sudo curl http://localhost:8983/solr/update/json -H 'Content-type:application/json' -d '
[{"id" : "3", "title" : "test3","description":"toottoto totot ototot "}]'
The setParameterPost() method takes two arguments, the parameter name and its value, like this:
$client->setParameterPost('name', 'john'); // results in name=john
Try using setRawData() instead; this will let you set the raw POST data.