I tried to delete a specific document from the Solr admin console with the following approach. The query executed successfully, but the document was not deleted.
Go to the admin console.
Go to the Documents page.
Select the /update request handler.
Select XML as the document type.
Commit: true
Commit within: 1000
Query: <delete><query>source_type:xyz</query></delete>
Note:
The screenshot does not show source_type:xyz, but we did try the <delete><query>source_type:xyz</query></delete> query.
source_type is one of the fields in our schema.
We would really appreciate it if someone could help us with this.
I found the way to do this: double quotes worked for me.
<delete><query>source_type:"xyz"</query></delete>
I have to integrate a Kibana dashboard (iframe) with my own Elasticsearch query,
so using rison-node, how can I pass the Elasticsearch query into the dashboard through the URL?
This is what I tried:
https://discuss.elastic.co/t/dashboard-search-parameter-via-url/84385/2
Not the best solution, but a quick and dirty one.
I would start by getting two URLs from the browser: first, the URL that links to the plain dashboard; second, the same dashboard with a filter applied.
Now compare the two URLs online or with a tool like BeyondCompare. This will reveal the changes required to add a filter.
All words no code :|
For example, I tried this on my own dashboard URL. Here is the part of this huge URL that changed.
filters:!(),options:(darkTheme:!f),panels:!((col:1,id:AWbJ883y-laqWN-SkuG2,panelIndex:1,row:4,size_x:6,size_y:3,type:visualization),(col:7,id:AWbJ9BBX-laqWN-SkuG3,panelIndex:2,row:1,size_x:6,size_y:3,type:vis
filters:!(('$state':(store:appState),meta:(alias:!n,disabled:!f,index:AWbJsP0d-laqWN-SkuGu,key:user.keyword,negate:!f,type:phrase,value:aditya),query:(match:(user.keyword:(query:aditya,type:phrase))))),options:(darkTheme
Here, as you can see, the filters section is empty in the first case, while the second case does contain my filter query. Now you can easily create dynamic URLs based on this approach.
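As a rough sketch of how the fragment could then be generated dynamically (in Java; the index id, field and value are taken from the example above, and the helper name is mine), the idea is simply to replace the empty filters:!() section of the plain dashboard URL with a populated one:

public class KibanaFilterUrl {

    // builds a rison phrase filter in the shape observed in the captured URL above
    static String risonPhraseFilter(String indexId, String field, String value) {
        return "filters:!(('$state':(store:appState),"
             + "meta:(alias:!n,disabled:!f,index:" + indexId
             + ",key:" + field + ",negate:!f,type:phrase,value:" + value + "),"
             + "query:(match:(" + field + ":(query:" + value + ",type:phrase)))))";
    }

    public static void main(String[] args) {
        String plainPart = "filters:!(),options:(darkTheme:!f)";
        String withFilter = plainPart.replace("filters:!()",
                risonPhraseFilter("AWbJsP0d-laqWN-SkuGu", "user.keyword", "aditya"));
        System.out.println(withFilter);  // splice this back into the full dashboard URL
    }
}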
Performing a test with BIRT, I was able to create a report and render it as PDF, but unfortunately I'm not getting the expected result.
For my DataSource I created a Scripted DataSource, and no code was needed there (as far as I could tell from the documentation for what I'm trying to achieve).
For my DataSet I created a Scripted DataSet using my Scripted DataSource as its source. In it I defined the open script like this:
importPackage(Packages.org.springframework.context);
importPackage(Packages.org.springframework.web.context.support);
// look up the Spring application context from the servlet context of the running web app
var sc = reportContext.getHttpServletRequest().getSession().getServletContext();
var spring = WebApplicationContextUtils.getWebApplicationContext(sc);
// load the POJO once per report run, using the report parameter as the id
myPojo = spring.getBean("myDao").findById(params["pojoId"]);
And the fetch script like this:
if (myPojo != null) {
    row["title"] = myPojo.getTitle();
    myPojo = null;   // only one row to emit, so clear the reference
    return true;     // a row was produced
}
return false;        // no more rows
As the row is populated at runtime, I wasn't able to get the DataSet columns automatically, so I created one manually with the following configuration: name: columnTitle (as this is the name used to populate the row object in the fetch code).
Afterwards I edited the layout of my report and added the column to it.
I was able to confirm that spring.getBean("myDao").findById(params["pojoId"]); is executed, but my rendered report is not showing the title. If I double-click on my column label in the report layout, I can see that the expression there is dataSetRow["columnTitle"]. Is that right, even though I'm using row in my fetch script? What am I missing here?
Well, what is contractVersion?
It is obviously not initialized in the open event.
Should this read myPojo.contractVersion or perhaps myPojo.getContractVersion()?
Another point: is the DataSet with the column "columnTitle" actually bound to the layout?
You should also run your report as HTML or in the previewer to check for script errors.
Unfortunately, these are silently ignored when generating the report as PDF...
The problem was that batik was pulled in twice (two different versions), one as a dependency of BIRT and the other of DOCX4J.
The issue is quite difficult to identify because there is no log output when rendering PDF files.
Rendering as HTML, I could see an error message that I could investigate to find the problem.
In my case I could simply remove DocX4j from the Maven POM.
This is my first time with intellisense, so please go easy on me :)
I'm using Java Swing to build a search engine for viewing XML files.
At the moment the XML file is uploaded and searched successfully; however, I would still
like to add the intellisense element to the project.
It's very hard to be specific since the code is pretty big, but I'll do my best to be
as specific as I can.
Here is a visual picture of the XML search engine:
Now, after the user has uploaded the XML file (using the open button on the left),
he enters a query in the "current path:" box. Each part of the query is separated by /, so what I want is to offer the user my options when he hits /, which are:
String[] axesTypes = {"child::", "attribute::", "descendant::",
"descendant-or-self::", "slef::", "parent::",
"following-sibling::", "preceding-sibling::",
"ancestor-or-self::", "ancestor::", "following::",
"preceding::" ,"preceding::", "namespace::", "node()"};
I'd appreciate it if someone could give an explanation of how to add that element to my project.
If you need me to post the code, please say so and I will.
Regards
Sorry in advance for my poor grammar.
I have created a pipeline with the GATE API and run it successfully.
I created a SerialAnalyserController like this: pipeline = (SerialAnalyserController) Factory.createResource("gate.creole.SerialAnalyserController");
Then I load a corpus of files (previously populated):
pipeline.setCorpus(foo)
And last, pipeline.execute().
It all works great and I see the results. My problem is that I cannot find a way to get the AnnotationSet for each document that was processed in the corpus. For example, I want to find the AnnotationSet ("sentences") to see at which offsets the sentences start and stop in the original text file. The API does not tell me how to get the annotations from the SerialAnalyserController, i.e. how to get each gate.Document after the processing pipeline has finished.
Thanks in advance
Ok, found it!
I get the corpus back; then, because the Corpus is a list, I get the document I want with get(x) and then I get its annotation sets.
Thanks
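A sketch of this in Java (assuming the corpus foo from above; the annotation type name "Sentence", as produced by ANNIE's sentence splitter, is an assumption, and your pipeline may write to a named annotation set instead of the default one):

import gate.Annotation;
import gate.AnnotationSet;
import gate.Corpus;
import gate.Document;

public class SentenceOffsets {

    // prints the start/end offsets of every "Sentence" annotation in every document
    static void printSentenceOffsets(Corpus corpus) {
        for (int i = 0; i < corpus.size(); i++) {
            Document doc = (Document) corpus.get(i);          // Corpus is a List of Documents
            AnnotationSet sentences = doc.getAnnotations().get("Sentence");
            for (Annotation a : sentences) {
                long start = a.getStartNode().getOffset();    // offsets in the original text
                long end = a.getEndNode().getOffset();
                System.out.println(doc.getName() + ": sentence at [" + start + ", " + end + "]");
            }
        }
    }
}

This would be called after pipeline.execute(), e.g. printSentenceOffsets(foo).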
I am working in Solr and making some filter queries. One of my filters contains a space,
e.g. "fq=listing_type:New home".
But this gives an error; no results come out.
I also tried "fq=Listing_type:New+home".
This did not give an error, but no results come out either, even though there are XML documents that have these values.
Can anyone tell me where my error is?
Here you can see the schema.xml:
Have you tried fq=listing_type:"New home"? Why don't you index this field as "NewHome"?
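A sketch of the same thing with SolrJ (the URL and core name are placeholders): quoting the phrase keeps the space from splitting the filter into two clauses, and the client takes care of the URL encoding.

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class FilterQueryWithSpace {
    public static void main(String[] args) throws Exception {
        SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr/mycore").build();
        SolrQuery query = new SolrQuery("*:*");
        query.addFilterQuery("listing_type:\"New home\"");   // phrase filter, quotes included
        QueryResponse response = client.query(query);
        System.out.println(response.getResults().getNumFound() + " documents matched");
        client.close();
    }
}

If you build the URL by hand instead, the quotes and the space have to be URL-encoded, i.e. fq=listing_type:%22New%20home%22.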