GeoTools WFS without DescribeFeatureType calls - java

I'm trying to access a WFS source by using the GeoTools WFS plugin as presented here.
However, the layer I'm trying to fetch is only accessible through a proxy, and the GetCapabilities document advertises the plain URLs (i.e., not the proxy URLs). Thus, my requests fail as the WFS plugin attempts a DescribeFeatureType request to the wrong URL.
Is there any way to just fetch a given layer via GetFeature, without having to query the feature schema? Could I somehow supply the schema to the plugin so that no query is made?

The WFS-NG module is now the preferred GeoTools WFS module. However, I'm afraid it won't help with this issue either.
The best solution is to fix the server you are using (if it is GeoServer you can change the base URL). Alternatively, you may be able to apply an XSLT transform (or a regexp) to the incoming capabilities document.
Finally, you could look at modifying the GeoTools code to allow the end user to supply a URL (or to guess it from the GetCapabilities URL). Please read the contribution guidelines. It would certainly do no harm to create a feature request in the issue tracker.
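The regexp idea above can be sketched in a few lines: fetch the capabilities document yourself, rewrite the advertised base URLs to point at the proxy, and only then hand the document to the WFS plugin. The two base URLs below are hypothetical placeholders, and a plain string replace stands in for a full XSLT transform:

```java
// Sketch: rewrite the plain base URLs a GetCapabilities document advertises
// so that subsequent DescribeFeatureType/GetFeature requests hit the proxy.
// Both base URLs are hypothetical; a real capabilities document repeats the
// server's base URL in the DCP/HTTP Get and Post hrefs.
class CapabilitiesRewriter {

    static String rewriteBaseUrls(String capabilitiesXml,
                                  String plainBase, String proxyBase) {
        // A simple string replace covers every occurrence of the base URL.
        return capabilitiesXml.replace(plainBase, proxyBase);
    }
}
```

This keeps the XML well-formed as long as the proxy base URL needs no extra escaping, which is usually the case for plain http(s) URLs.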


Is there a way to restrict access and visibility to end points when using google cloud endpoints?

We generate our Swagger/OpenAPI documents based on annotations in our Java code. There are endpoints in the code that exist just for our internal usage, and we don't want them accessible, or even visible, publicly.
It's possible, I suppose, that we could post-process the Swagger file and remove these endpoints, but that adds another step to the build process. What would really be nice is if we could just tag them in such a way that, when the Google Cloud Endpoints proxy saw the tag, it would ignore the endpoint. Is such a thing possible?
I imagine we could do something like this by marking them as requiring access and then not configuring anyone to have access, but I wondered if there was another path that might yield the same or better results.
Right now you can only manage who can access your endpoint with:
API keys
Firebase authentication
Auth0
Service accounts
But these only handle authentication; what you want to accomplish is not possible at the moment. I would suggest you create a feature request, so the GCP team knows you are interested in the functionality and may implement it in the future.

Elasticsearch Rest Client Update Operation

I am working on a project which uses Elasticsearch 5.4 as a datastore. I need to update a field in all the documents in an index. We are using the Java RestClient for the connection between Java and Elasticsearch, and I am not able to find any documentation or resources for an Update API in RestClient 5.4.3.
Can someone please guide me on how to proceed from here?
Note : I cannot use Transport client.
Thanks
Did you try performing a POST request on the _update_by_query endpoint?
Please take a look at the Update By Query API:
The simplest usage of _update_by_query just performs an update on every document in the index without changing the source.
and
So far we’ve only been updating documents without changing their source. That is genuinely useful for things like picking up new properties but it’s only half the fun. _update_by_query supports a script object to update the document.
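With the 5.x low-level RestClient there is no dedicated update-by-query method; you simply POST the JSON body to the endpoint yourself. A minimal sketch of building that body (the index name, field and value are hypothetical placeholders; ES 5.4 uses "inline" for script source):

```java
// Sketch: build the JSON body for _update_by_query with a Painless script
// that sets one field on every document. Field name and value are placeholders.
class UpdateByQuerySketch {

    static String buildBody(String field, String value) {
        return "{"
             + "\"script\": {\"inline\": \"ctx._source." + field
             + " = '" + value + "'\", \"lang\": \"painless\"},"
             + "\"query\": {\"match_all\": {}}"
             + "}";
    }

    // With the low-level RestClient the request would then look roughly like:
    //   HttpEntity entity = new NStringEntity(buildBody("status", "archived"),
    //                                         ContentType.APPLICATION_JSON);
    //   Response r = restClient.performRequest("POST",
    //       "/my-index/_update_by_query", Collections.emptyMap(), entity);
}
```

The commented-out lines show the shape of the call; index name and field values there are assumptions for illustration.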

Preprocessing input text before calling ElasticSearch API

I have a Java client that allows indexing documents on a local ElasticSearch server.
I now want to build a simple Web UI that allows users to query the ES index by typing in some text in a form.
My problem is that, before calling ES APIs to issue the query, I want to preprocess the user input by calling some Java code.
What is the easiest and "cleanest" way to achieve this?
Should I create my own APIs so that the UI can access my Java code?
Should I build the UI with JSP so that I can directly call my Java code?
Can I somehow make ElasticSearch execute my Java code before the query is executed? (Perhaps by creating my own ElasticSearch plugin?)
In the end, I opted for the simple solution of using JSON-based RESTful APIs. Time proved this to be quite flexible and effective for my case, so I thought I should share it:
My Java code exposes its ability to query an ElasticSearch index by running an HTTP server and responding to client requests with JSON-formatted ES results. I created the HTTP server with a few lines of code, using the JDK's built-in com.sun.net.httpserver.HttpServer. There are more serious/complex HTTP servers out there (such as Tomcat), but this was very quick to adopt and required zero configuration headaches.
My Web UI makes HTTP GET requests to the Java server, receives JSON-formatted data and consumes it happily. My UI is implemented in PHP, but any web language does the job, as long as you can issue HTTP requests.
This solution works really well in my case, because it lets me avoid any dependency on ES plugins. I can do any sort of pre-processing before calling ES, and even post-process ES output before sending the results back to the UI.
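The setup described above fits in a few lines with the JDK's built-in HTTP server. In this sketch the /search path is a placeholder, and the pre-processing step is a stand-in (trim and lower-case); the comment marks where the real ES call would go:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

class SearchFrontend {

    // Placeholder pre-processing: trim and lower-case the raw user query.
    // In the real application this is where the ES query would be built.
    static String preprocess(String userQuery) {
        return userQuery.trim().toLowerCase();
    }

    // Starts the JDK's built-in HTTP server on the given port (0 = ephemeral).
    static HttpServer start(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/search", exchange -> {
            // URI.getQuery() hands back the already-decoded query string.
            String raw = exchange.getRequestURI().getQuery();   // e.g. "q=Foo"
            String q = raw == null ? "" : raw.replaceFirst("^q=", "");
            // Here the preprocessed query would be forwarded to Elasticsearch;
            // this sketch just echoes it back as JSON.
            byte[] body = ("{\"query\": \"" + preprocess(q) + "\"}")
                    .getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }
}
```

A PHP (or any other) front end can then simply GET http://host:port/search?q=... and consume the JSON response.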
Depending on the type of pre-processing, you can create an Elasticsearch plugin as a custom analyser or custom filter: you essentially extend the appropriate Lucene class(es) and wrap everything into an Elasticsearch plugin. Once the plugin is loaded, you can configure the custom analyser and apply it to the related fields. There are a lot of analysers and filters already available in Elasticsearch, so you might want to have a look at those before writing your own.
Elasticsearch plugins: https://www.elastic.co/guide/en/elasticsearch/reference/1.6/modules-plugins.html (a list of known plugins at the end)
Defining custom analysers: https://www.elastic.co/guide/en/elasticsearch/guide/current/custom-analyzers.html

Validating webservice parameters for XSS attack - Axis2, Java

We have a webservice which saves data and presents it on the user interface for viewing the transactions. My requirement is to validate all the input parameters in the web service request, to make sure that malicious content is not shown on the UI. I am looking for solutions to validate the input params in the web service request before they are saved to the database.
Some of the solutions that I have are below:
Use a Java Filter along with any parser API (DOM, SAX, etc.) and validate all the input parameters. But this approach might put a lot of burden on the server.
Before saving the data into our database, validate each parameter from the Java object and, if any of them fails, fail the transaction. This looks fine, but it adds maintenance overhead as and when we add a new service.
Are there any APIs or jars which can be integrated with Axis2 or Java that take care of validating the request params, rather than doing it manually?
Please suggest the best way.
Thanks,
Harika
As you mentioned, approach 2 is the ideal one. You can use the Apache Commons Lang library's StringEscapeUtils, which has methods escapeHtml, escapeJavaScript and escapeXml that can neutralize front-end code before saving it into the database.
This will prevent XSS, but it cannot guarantee protection against SQL injection.
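If pulling in Commons Lang is not an option, the core of what escapeHtml does for XSS purposes can be hand-rolled. This is a minimal sketch covering only the critical characters; the real StringEscapeUtils handles many more entities:

```java
// Sketch: escape the handful of characters that enable HTML and attribute
// injection. Commons Lang's StringEscapeUtils.escapeHtml covers far more
// entities; this illustrates the principle only.
class XssEscaper {

    static String escapeHtml(String input) {
        StringBuilder sb = new StringBuilder(input.length());
        for (char c : input.toCharArray()) {
            switch (c) {
                case '&':  sb.append("&amp;");  break;
                case '<':  sb.append("&lt;");   break;
                case '>':  sb.append("&gt;");   break;
                case '"':  sb.append("&quot;"); break;
                case '\'': sb.append("&#39;");  break;
                default:   sb.append(c);
            }
        }
        return sb.toString();
    }
}
```

Applied to each request parameter before persisting, this keeps script tags from ever reaching the UI as live markup.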

CMIS vs REST. Which client would be easier to implement from scratch?

I'm working on a Java project that needs to upload files using either REST or CMIS (both services are available). I'm completely new to these APIs and would like to ask which one would be easier and more straightforward to implement.
I can't use external libraries in the project, so I will need to implement the client from scratch.
Note: the only requirement is to upload files.
Thanks in advance.
The goal of the Content Management Interoperability Services (CMIS) specification is to provide a set of services for working with rich content repositories. It provides an entire specification for ECM applications, with bindings for both REST (AtomPub) and SOAP.
CMIS provides a specification for operations that control folders, documents, relationships and policies.
I think that for your upload, using CMIS would be like killing a fly with a bomb.
While I admit that I don't know CMIS, file upload with REST is just classic HTTP file upload where you interpret the path name as indicating the resource to update or replace. Basic REST usage would have you do (an HTTP) GET (method) as “read the file”, POST as “create the file while picking a new name” (typically with a redirect afterwards so that the client can find out what name was picked), PUT as “create the file with the given name or replace that file's contents”, and DELETE as “delete the file”. Moreover, you don't need to support all those methods; implement as few as you want (but it's a good idea to support some GET requests, even if just to let people confirm that their uploads worked).
However, when implementing, you want in all cases to avoid holding much of the file's data in memory; that doesn't scale. It's better to take the time to implement streaming transfers so that you never need to buffer more than a few kilobytes. You can certainly do this with REST/HTTP. (You could even do it with SOAP using MTOM, but that's probably out of scope for you…)
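A no-dependency streaming PUT along those lines can be written with nothing but HttpURLConnection. The target URL is a placeholder; the buffered copy loop is the part that keeps memory use flat:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

class RestUploader {

    // Copies the stream in small chunks so the whole file is never in memory.
    static long copyStream(InputStream in, OutputStream out) throws IOException {
        byte[] buf = new byte[8192];
        long total = 0;
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
            total += n;
        }
        return total;
    }

    // PUT as "create the file with the given name or replace its contents";
    // the URL's path names the resource. The target URL is hypothetical.
    static int put(URL target, InputStream file) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) target.openConnection();
        conn.setRequestMethod("PUT");
        conn.setDoOutput(true);
        conn.setChunkedStreamingMode(8192);   // stream the body, don't buffer it
        try (OutputStream out = conn.getOutputStream()) {
            copyStream(file, out);
        }
        return conn.getResponseCode();
    }
}
```

Without setChunkedStreamingMode, HttpURLConnection buffers the whole request body to compute Content-Length, which defeats the point for large files.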
