I have to integrate a Kibana dashboard (iframe) with my own Elasticsearch query.
How can I use rison-node to pass the query into the dashboard through the URL?
Here is what I have tried:
https://discuss.elastic.co/t/dashboard-search-parameter-via-url/84385/2
Not the best solution, but a quick and dirty one.
I would start by grabbing two URLs from the browser: first the URL of the plain dashboard, then the same dashboard with a filter applied.
Now compare the two URLs, either online or with a tool like BeyondCompare. The diff will reveal the changes required to add a filter.
All words, no code :|
For example, I tried this on my own dashboard URL. Here is the part of that huge URL that changed.
Without a filter:
filters:!(),options:(darkTheme:!f),panels:!((col:1,id:AWbJ883y-laqWN-SkuG2,panelIndex:1,row:4,size_x:6,size_y:3,type:visualization),(col:7,id:AWbJ9BBX-laqWN-SkuG3,panelIndex:2,row:1,size_x:6,size_y:3,type:vis
With the filter applied:
filters:!(('$state':(store:appState),meta:(alias:!n,disabled:!f,index:AWbJsP0d-laqWN-SkuGu,key:user.keyword,negate:!f,type:phrase,value:aditya),query:(match:(user.keyword:(query:aditya,type:phrase))))),options:(darkTheme
As you can see, the filters section is empty in the first case, while the second case contains my filter query. You can now build dynamic URLs based on this pattern.
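For what it's worth, here is a minimal Java sketch of that approach (the field, value, Kibana host, and dashboard id are all hypothetical placeholders; rison-node would let you build the same fragment programmatically in JavaScript). It interpolates a phrase filter into the rison app-state fragment, modeled on the diff above, and URL-encodes it for use as the iframe's src:

import java.net.URLEncoder;

public class KibanaUrlBuilder {
    public static void main(String[] args) throws Exception {
        String field = "user.keyword"; // hypothetical filter field
        String value = "aditya";       // hypothetical filter value
        // Rison app state modeled on the filter section shown above (index id omitted)
        String appState = "(filters:!(('$state':(store:appState),"
                + "meta:(alias:!n,disabled:!f,key:" + field + ",negate:!f,type:phrase,value:" + value + "),"
                + "query:(match:(" + field + ":(query:" + value + ",type:phrase))))))";
        String iframeSrc = "http://localhost:5601/app/kibana#/dashboard/my-dashboard"
                + "?embed=true&_a=" + URLEncoder.encode(appState, "UTF-8");
        System.out.println(iframeSrc); // use this as the iframe src
    }
}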
I am using Jsoup to fetch documents from a website.
Below is my code:
String webPageUrl = "https://mwcc.ms.gov/#/electronicDataInterchange";
Document doc = Jsoup.connect(webPageUrl).get();
Elements links = doc.getElementsByAttribute("a[href]");
The line of code below is not working. It is supposed to return elements but doesn't:
doc.getElementsByAttribute("a[href]")
Can someone please point out the mistake in my code?
That page seems to be an Angular application, which means it loads some (probably most or all) of its content via JavaScript.
The fact that the URL contains the fragment separator # is already a strong indicator: in an HTTP request, everything after the # is cut off (i.e. not sent to the server), so the actual request is simply for https://mwcc.ms.gov/.
As far as I know, Jsoup does not support running JavaScript, so you might need to look into a more involved scraping tool (possibly one that runs a full browser engine).
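If you do need the JavaScript-rendered links, below is a minimal sketch using Selenium WebDriver (my suggestion, not something the question uses; it assumes chromedriver is installed). As a side note on the original snippet: Jsoup's getElementsByAttribute() expects a bare attribute name such as "href", while the CSS-selector form of that query would be doc.select("a[href]").

import java.util.List;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;

public class FetchRenderedLinks {
    public static void main(String[] args) throws Exception {
        WebDriver driver = new ChromeDriver(); // requires chromedriver on the PATH
        try {
            driver.get("https://mwcc.ms.gov/#/electronicDataInterchange");
            Thread.sleep(5000); // crude wait for the Angular app; prefer WebDriverWait in real code
            List<WebElement> links = driver.findElements(By.cssSelector("a[href]"));
            for (WebElement link : links) {
                System.out.println(link.getAttribute("href"));
            }
        } finally {
            driver.quit();
        }
    }
}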
I'm using the Java SDK to connect to Box. I find the root folder (this is a small dev instance, so I don't mind searching from there), execute the search query, and get results. My problem is that the search parameters do not seem to work consistently, or at all.
For example, this query
Iterator resultSet = rootFolder.search("query=NR_chewy_chic_swt_pot_app&file_extensions=jpg&content_type=name&type=file").iterator();
returns three entries.
NR_chewy_chic_swt_pot_app.jpg
NR Chewy Chicken AD02.xls
PreInvoice_M197301-3644756_NR Chewy Treats SURP.pdf
I remove the substring "&file_extensions=jpg" (because it doesn't seem to do anything), save/recompile/run, and get the same three results.
I change "&type=file" to "&type=folder" and I get the same three results.
I change "query=NR_chewy_chic_swt_pot_app" to "query=NR" and I get NO results. Even though SO user Peter (who appears to work for Box) says that partial strings should match1.
I've tried rearranging the order of the search parameters to no avail. What am I missing?
Thanks,
Eric B.
Advanced search has yet to be implemented in the SDK (since it's still in beta), but it will be added in the coming weeks.
The reason your call doesn't work is that the query method parameter is sent as the "query" URL parameter of the API call, so the ampersands in your query string are being escaped as part of that single value.
If you need an immediate workaround, you can use the BoxAPIRequest and BoxAPIResponse classes to make a custom search request. For example:
BoxAPIConnection api = new BoxAPIConnection("token");
// Build the full search URL yourself so the extra parameters aren't escaped
URL url = new URL("https://api.box.com/2.0/search?query=NR_chewy_chic_swt_pot_app&file_extensions=jpg&content_type=name&type=file");
BoxAPIRequest request = new BoxAPIRequest(api, url, "GET");
BoxJSONResponse response = (BoxJSONResponse) request.send();
String json = response.getJSON();
Sorry that this wasn't clear. We'll update the documentation to make it more explicit that query represents the query field and not the URL query string.
I'm trying to programmatically retrieve the list of all design documents in a given bucket via CouchbaseClient. I have followed the creating-views-from-sdk documentation, but it only explains how to create a view. What I need is a way to retrieve all design documents and their views. Is there any solution out there?
So far I have only been able to get a single design document, and its name has to be hardcoded rather than coming from the server, e.g.
CouchbaseClient client = new CouchbaseClient(urls, bucketName, bucketPassword);
DesignDocument dc = client.getDesignDocument("MY-HARDCODED-DOC-NAME");
List<View> views = (List<View>) dc.getViews();
for (View view : views)
{
// process view data
}
What I'm trying to accomplish is a utility to import/export views from a given Couchbase bucket, since, strangely enough, this basic function can't be found anywhere in the admin tools that ship with Couchbase.
I don't think you can do this with the Java client, but there is an endpoint you can hit with an HTTP client from Java to get this information:
http://localhost:8091/pools/default/buckets/mybucketname/ddocs
Just replace mybucketname with the bucket whose design documents you want. You will need to supply a basic-auth header to hit this endpoint, so be sure not to forget that part. You will get back JSON that you can parse for the names (and views) of the design documents in the bucket.
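A minimal sketch of that request with plain java.net (the bucket name and admin credentials are placeholders; java.util.Base64 assumes Java 8+):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Base64;

public class ListDesignDocs {
    public static void main(String[] args) throws Exception {
        String bucket = "mybucketname"; // your bucket name
        URL url = new URL("http://localhost:8091/pools/default/buckets/" + bucket + "/ddocs");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        String credentials = "Administrator:password"; // placeholder credentials
        conn.setRequestProperty("Authorization",
                "Basic " + Base64.getEncoder().encodeToString(credentials.getBytes("UTF-8")));
        BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream(), "UTF-8"));
        StringBuilder json = new StringBuilder();
        String line;
        while ((line = in.readLine()) != null) {
            json.append(line);
        }
        in.close();
        System.out.println(json); // parse this JSON for the design doc names and their views
    }
}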
We're using Nutch 1.6 to crawl the web. Per the Nutch configuration, one supplies a seed list and a domain URL filter to keep the crawl within specified domains. However, we want to fetch newly discovered URLs if they fall under a given suffix, let's say co.uk (and only that suffix). We can manage this by appending the domains of newly discovered URLs to a file (or a DB, whatever), stopping the crawler, updating the domain URL filter and the seed list, and then restarting it. But how can we do this dynamically, without stopping the crawler?
Thanks in advance.
P.S.: the co.uk suffix is just an example; we might also want to allow more than one suffix.
Got it.
You can add suffixes such as "gov.uk" to domain-urlfilter.txt. As the DomainURLFilter source code shows on lines 186-189:
if (domainSet.contains(suffix) || domainSet.contains(domain)
|| domainSet.contains(host)) {
return url;
}
the filter matches against the suffix, the domain, and the host.
Also, you could keep the domain URLs in an HBase table and manage them via your own filter plugin instead of using DomainURLFilter, as sketched below.
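A rough sketch of such a plugin (the class name, the suffix store, and the reload mechanism are all hypothetical; only the URLFilter interface comes from Nutch):

import java.net.URL;
import java.util.Set;
import java.util.concurrent.CopyOnWriteArraySet;
import org.apache.hadoop.conf.Configuration;
import org.apache.nutch.net.URLFilter;

// Accepts URLs whose host ends with an allowed suffix; the suffix set can be
// refreshed from a DB/HBase table in the background without stopping the crawl.
public class DynamicSuffixURLFilter implements URLFilter {
    private Configuration conf;
    private final Set<String> allowedSuffixes = new CopyOnWriteArraySet<String>();

    public String filter(String urlString) {
        try {
            String host = new URL(urlString).getHost();
            for (String suffix : allowedSuffixes) {
                if (host.endsWith(suffix)) {
                    return urlString; // accept the URL
                }
            }
        } catch (Exception e) {
            // malformed URL: fall through to reject
        }
        return null; // reject the URL
    }

    public void setConf(Configuration conf) {
        this.conf = conf;
        // e.g. start a background task here that reloads allowedSuffixes from your store
    }

    public Configuration getConf() {
        return conf;
    }
}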
I am trying to map my requests in a special way to achieve a very simple purpose.
Say the root website is abc.com and has several users. Each user has a home page, admin page, requests page, etc.
Let us assume we have two users, user1 and user2.
I want the urls to be coded as:
abc.com/user1/admin
abc.com/user1/home
abc.com/user1/requests
So basically, abc.com/user1/home is the home page for user1 and abc.com/user1/admin is the admin page for user1.
I have tried using request mapping in Wicket with named parameters, etc. I can encode my URLs as abc.com/home/user1, but I cannot get the encoding I want.
Any help is welcome.
Thanks
Anant
I'm just getting started with version 1.5 of Wicket, but I think the new mapping system will solve this quite easily:
mountPage("{userCode}/home", UserHomePage.class);
mountPage("{userCode}/admin", UserAdminPage.class);
Then, in the page, you just retrieve the page parameter to load your model:
String userCode = getPageParameters().get("userCode").toString();
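Putting it together, a minimal sketch for Wicket 1.5 (the application and page class names are placeholders):

import org.apache.wicket.Page;
import org.apache.wicket.markup.html.WebPage;
import org.apache.wicket.protocol.http.WebApplication;
import org.apache.wicket.request.mapper.parameter.PageParameters;

public class AbcApplication extends WebApplication {
    @Override
    protected void init() {
        super.init();
        // ${userCode} marks a required path segment captured as a page parameter
        mountPage("/${userCode}/home", UserHomePage.class);
        mountPage("/${userCode}/admin", UserAdminPage.class);
        mountPage("/${userCode}/requests", UserRequestsPage.class);
    }

    @Override
    public Class<? extends Page> getHomePage() {
        return UserHomePage.class; // placeholder; point this at your real home page
    }
}

And the mounted page reads the segment back out of its parameters:

public class UserHomePage extends WebPage {
    public UserHomePage(PageParameters parameters) {
        super(parameters);
        String userCode = parameters.get("userCode").toString();
        // load this user's model from userCode, e.g. "user1"
    }
}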