I tried adding members using the ecwid Java library and a plain POST request, and I am able to do that, but I don't see how we can add members in batches using the API.
V3.0 of the API has not implemented batch operations yet. What about V2.0? Has anybody been able to do it?
In v2.0, the call you're looking for is batchSubscribe().
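If it helps, here is a rough, unverified sketch of that call made directly over HTTP. The lists/batch-subscribe.json endpoint and payload shape are how I remember the v2.0 docs, so check them against the documentation; the API key, list id and addresses are placeholders.

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    public class BatchSubscribeExample {
        public static void main(String[] args) throws Exception {
            // "us1" is the datacenter suffix of your API key; replace as needed.
            String dc = "us1";
            URL url = new URL("https://" + dc + ".api.mailchimp.com/2.0/lists/batch-subscribe.json");

            // One POST carries the whole batch: the list id plus an array of email structs.
            String payload = "{"
                    + "\"apikey\":\"YOUR_API_KEY\","
                    + "\"id\":\"YOUR_LIST_ID\","
                    + "\"batch\":["
                    + "{\"email\":{\"email\":\"first@example.com\"}},"
                    + "{\"email\":{\"email\":\"second@example.com\"}}"
                    + "]}";

            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Content-Type", "application/json");
            conn.setDoOutput(true);
            try (OutputStream out = conn.getOutputStream()) {
                out.write(payload.getBytes(StandardCharsets.UTF_8));
            }
            System.out.println("HTTP " + conn.getResponseCode());
        }
    }

The ecwid wrapper you are already using should expose an equivalent v2.0 batch-subscribe method; the raw request above is only meant to show the shape of the call.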
Context: I'm creating objects that contain some data, and in my case the data cannot be found in a single source. So I'm consuming data coming from a Kinesis stream, and I need to call a REST API, passing some data that came from the stream, to get the other fields required to build the final object.
Question: I know that Apache Beam has not yet released an official way to communicate with REST APIs, but it looks like this is still in progress - https://issues.apache.org/jira/browse/BEAM-1946
What I want to know is the best and proper way to call a REST API inside a Beam pipeline. I would highly appreciate it if you could share any resources/examples along with your feedback.
I tried the standard Java way (the HttpClient API) of calling a REST API, but I want to know whether there is a better way to achieve this. I am expecting answers and some resources if possible.
TIA
Apache Beam offers a way to define arbitrary computations in the form of DoFn implementations (user-defined functions) used with ParDo transforms. This should allow you to add logic that connects to any REST API. As you mentioned, there is currently no official REST Beam connector, but that should not block you from calling any custom API within your pipeline.
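A minimal sketch of that idea follows. Beam's DoFn, ParDo, @Setup and @ProcessElement are real; the class name, endpoint and output format are made up for illustration.

    import java.io.IOException;
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    import org.apache.beam.sdk.transforms.DoFn;

    // Enriches each record coming off the stream by calling an external REST API.
    public class EnrichViaRestFn extends DoFn<String, String> {

        // The HTTP client is not serializable, so build it per worker in @Setup.
        private transient HttpClient httpClient;

        @Setup
        public void setup() {
            httpClient = HttpClient.newHttpClient();
        }

        @ProcessElement
        public void processElement(@Element String recordId, OutputReceiver<String> out)
                throws IOException, InterruptedException {
            // Placeholder endpoint: look up the extra fields for this record.
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://example.com/api/items/" + recordId))
                    .GET()
                    .build();
            HttpResponse<String> response =
                    httpClient.send(request, HttpResponse.BodyHandlers.ofString());
            // Emit the original record combined with whatever the API returned.
            out.output(recordId + "|" + response.body());
        }
    }

You would plug it into the pipeline with something like input.apply(ParDo.of(new EnrichViaRestFn())). Bear in mind that one HTTP call per element can quickly become the bottleneck, so batching requests or caching responses inside the DoFn is usually worth considering.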
I am working on a project which uses Elasticsearch 5.4 as a datastore. I need to update a field in all the documents in an index. We are using the Java RestClient for the connection between Java and Elasticsearch, and I am not able to find any documentation or resources for the Update API in RestClient 5.4.3.
Can someone please guide me on how to proceed from here?
Note: I cannot use the Transport client.
Thanks
Did you try performing a POST request on the _update_by_query endpoint?
Please take a look at the Update By Query API:
The simplest usage of _update_by_query just performs an update on every document in the index without changing the source.
and
So far we’ve only been updating documents without changing their source. That is genuinely useful for things like picking up new properties but it’s only half the fun. _update_by_query supports a script object to update the document.
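For example, with the 5.x low-level RestClient you can hit that endpoint directly. The index name, field and script below are made up for illustration (in 5.x the script body uses "inline"; later versions renamed it to "source").

    import java.util.Collections;

    import org.apache.http.HttpHost;
    import org.apache.http.entity.ContentType;
    import org.apache.http.nio.entity.NStringEntity;
    import org.elasticsearch.client.Response;
    import org.elasticsearch.client.RestClient;

    public class UpdateByQueryExample {
        public static void main(String[] args) throws Exception {
            RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build();

            // Example script: set "status" to "archived" on every document in "my-index".
            String body = "{"
                    + "\"script\": {"
                    + "\"inline\": \"ctx._source.status = 'archived'\","
                    + "\"lang\": \"painless\""
                    + "}}";

            Response response = client.performRequest(
                    "POST",
                    "/my-index/_update_by_query",
                    Collections.<String, String>emptyMap(),
                    new NStringEntity(body, ContentType.APPLICATION_JSON));

            System.out.println(response.getStatusLine());
            client.close();
        }
    }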
I have looked at already-answered questions, but they seem old enough that I couldn't use them. I tried the example given at https://www.elastic.co/blog/found-java-clients-for-elasticsearch, which has the code written but not in an organized manner that would help me. The libraries are old and the code gives me errors.
I looked at the Spring Data project, but that only allows a specific type of document/class to be indexed and needs the model to be predefined, which is not my use case. My goal is to build a Java web application which can ingest any data documents fed to Elasticsearch so that we can analyze them with Kibana. I need to know how I can fire a REST call or curl for bulk data. Can anyone give an example with all the parts, please?
Use the REST client.
The Java High Level REST Client works on top of the Java Low Level REST client. Its main goal is to expose API specific methods, that accept request objects as an argument and return response objects, so that request marshalling and response un-marshalling is handled by the client itself.
To upload data from a Java application to ES, you can use the Bulk API.
For the list of available APIs, check the link.
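A minimal sketch of a bulk upload with the High Level REST Client, assuming a recent (7.x-style) client; older versions differ slightly, e.g. they require a document type on the IndexRequest and take no RequestOptions argument. The index name and documents are arbitrary.

    import org.apache.http.HttpHost;
    import org.elasticsearch.action.bulk.BulkRequest;
    import org.elasticsearch.action.bulk.BulkResponse;
    import org.elasticsearch.action.index.IndexRequest;
    import org.elasticsearch.client.RequestOptions;
    import org.elasticsearch.client.RestClient;
    import org.elasticsearch.client.RestHighLevelClient;
    import org.elasticsearch.common.xcontent.XContentType;

    public class BulkUploadExample {
        public static void main(String[] args) throws Exception {
            // Connects to a local node; host and port are placeholders.
            RestHighLevelClient client = new RestHighLevelClient(
                    RestClient.builder(new HttpHost("localhost", 9200, "http")));

            // Arbitrary JSON documents; no predefined model class is required.
            BulkRequest bulk = new BulkRequest();
            bulk.add(new IndexRequest("my-index").id("1")
                    .source("{\"user\":\"alice\",\"age\":30}", XContentType.JSON));
            bulk.add(new IndexRequest("my-index").id("2")
                    .source("{\"user\":\"bob\",\"age\":25}", XContentType.JSON));

            BulkResponse response = client.bulk(bulk, RequestOptions.DEFAULT);
            System.out.println("Bulk had failures: " + response.hasFailures());

            client.close();
        }
    }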
I was using the SOAP API earlier to create and update records from a remote system in Salesforce. Then I added OAuth 2.0 to my application, but after making the connection with OAuth, SOAP's upsert (and every other query method) refused to accept that connection. So I tried to switch to the REST API.
Again there is an issue, since records can only be sent one by one via REST. The last option for me is the Bulk API, but I could not find any sample example of creating or updating records from a list. Everywhere, the Bulk API is used with a CSV file, but I need to pass records from a list of objects, as is done with SOAP.
Can anyone provide a sample example? Or, if this can only be done with a CSV file, is there any alternative so that the same approach works for a list?
If you are looking for code examples of data upload with the Bulk API, you can find them in the Salesforce Developer documentation, for instance Walk Through the Sample Code. There is an example of uploading data from a CSV file.
I also recommend reading the Bulk API Developer Guide, and especially considering the Bulk API Limits, to better understand how suitable it is for your case.
UPD: Here is a repo with an example of using the Salesforce Bulk API from C#.
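If the only blocker is the CSV file itself, note that a Bulk API batch body is just CSV text, so you can build it in memory from your list and never touch the disk. Below is a minimal sketch under that assumption; the Contact class is hypothetical and real code would need proper CSV escaping of the field values.

    import java.util.Arrays;
    import java.util.List;

    // Hypothetical record type for the objects you already build for the SOAP upsert.
    class Contact {
        final String firstName;
        final String lastName;
        final String email;
        Contact(String firstName, String lastName, String email) {
            this.firstName = firstName;
            this.lastName = lastName;
            this.email = email;
        }
    }

    public class ListToCsvExample {
        // Builds the CSV body straight from an in-memory list, so no file is needed.
        static String toCsv(List<Contact> contacts) {
            StringBuilder csv = new StringBuilder("FirstName,LastName,Email\n");
            for (Contact c : contacts) {
                csv.append(c.firstName).append(',')
                   .append(c.lastName).append(',')
                   .append(c.email).append('\n');
            }
            return csv.toString();
        }

        public static void main(String[] args) {
            List<Contact> contacts = Arrays.asList(
                    new Contact("Ada", "Lovelace", "ada@example.com"),
                    new Contact("Alan", "Turing", "alan@example.com"));
            System.out.println(toCsv(contacts));
        }
    }

The resulting string is then submitted as the batch body of a Bulk API job (Content-Type: text/csv), exactly as the file-based samples do.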
I have a Java client that allows indexing documents on a local ElasticSearch server.
I now want to build a simple Web UI that allows users to query the ES index by typing in some text in a form.
My problem is that, before calling ES APIs to issue the query, I want to preprocess the user input by calling some Java code.
What is the easiest and "cleanest" way to achieve this?
Should I create my own APIs so that the UI can access my Java code?
Should I build the UI with JSP so that I can directly call my Java code?
Can I somehow make ElasticSearch execute my Java code before the query is executed? (Perhaps by creating my own ElasticSearch plugin?)
In the end, I opted for the simple solution of using JSON-based RESTful APIs. Time proved this to be quite flexible and effective for my case, so I thought I should share it:
My Java code exposes its ability to query an ElasticSearch index by running an HTTP server and responding to client requests with JSON-formatted ES results. I created the HTTP server with a few lines of code, using com.sun.net.httpserver.HttpServer. There are more serious/complex HTTP servers out there (such as Tomcat), but this was very quick to adopt and required zero configuration headaches.
My Web UI makes HTTP GET requests to the Java server, receives JSON-formatted data and consumes it happily. My UI is implemented in PHP, but any web language does the job, as long as you can issue HTTP requests.
This solution works really well in my case, because it avoids any dependency on ES plugins. I can do any sort of pre-processing before calling ES, and even post-process the ES output before sending the results back to the UI.
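For reference, the server side of this can be as small as the sketch below; the /search path and the queryElasticsearch placeholder are made up, and real code would need error handling and input validation.

    import java.io.IOException;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;

    import com.sun.net.httpserver.HttpExchange;
    import com.sun.net.httpserver.HttpServer;

    public class SearchGateway {

        public static void main(String[] args) throws IOException {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/search", SearchGateway::handleSearch);
            server.start();
        }

        static void handleSearch(HttpExchange exchange) throws IOException {
            // Raw user input arrives as the query string, e.g. /search?q=hello
            String query = exchange.getRequestURI().getQuery();

            // Pre-process the input here (normalisation, spell checking, etc.)
            // before forwarding it to Elasticsearch.
            String json = queryElasticsearch(query);

            byte[] body = json.getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        }

        static String queryElasticsearch(String query) {
            // Placeholder: call your existing Java ES client here.
            return "{\"query\":\"" + query + "\",\"hits\":[]}";
        }
    }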
Depending on the type of pre-processing, you can create an Elasticsearch plugin as a custom analyser or custom filter: you essentially extend the appropriate Lucene class(es) and wrap everything into an Elasticsearch plugin. Once the plugin is loaded, you can configure the custom analyser and apply it to the relevant fields. There are a lot of analysers and filters already available in Elasticsearch, so you might want to have a look at those before writing your own.
Elasticsearch plugins: https://www.elastic.co/guide/en/elasticsearch/reference/1.6/modules-plugins.html (a list of known plugins at the end)
Defining custom analysers: https://www.elastic.co/guide/en/elasticsearch/guide/current/custom-analyzers.html
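Also worth noting: if the pre-processing you need can be expressed with analysers and token filters that already ship with Elasticsearch, no plugin is required at all; you can declare a custom analyser in the index settings and reference it from the mappings of the relevant fields. A rough sketch (the index name and filter chain are arbitrary):

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    public class CustomAnalyzerSetup {
        public static void main(String[] args) throws Exception {
            // Arbitrary example: a custom analyser built from built-in pieces
            // (standard tokenizer + lowercase + asciifolding filters).
            String settings = "{"
                    + "\"settings\": {"
                    + "  \"analysis\": {"
                    + "    \"analyzer\": {"
                    + "      \"my_custom_analyzer\": {"
                    + "        \"type\": \"custom\","
                    + "        \"tokenizer\": \"standard\","
                    + "        \"filter\": [\"lowercase\", \"asciifolding\"]"
                    + "      }"
                    + "    }"
                    + "  }"
                    + "}}";

            // PUT the settings when creating the index; "my-index" is a placeholder.
            URL url = new URL("http://localhost:9200/my-index");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("PUT");
            conn.setRequestProperty("Content-Type", "application/json");
            conn.setDoOutput(true);
            try (OutputStream out = conn.getOutputStream()) {
                out.write(settings.getBytes(StandardCharsets.UTF_8));
            }
            System.out.println("HTTP " + conn.getResponseCode());
        }
    }

You would then reference my_custom_analyzer in the mapping of the fields you want it applied to.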