How to get the channel list from Slack API? - java

I'm using Slack's API from Java. I have already worked out how to send messages through incoming webhooks, but now I want to retrieve the list of available channels in a getChannels function.
The problem is that I can't find any Java examples for this.
My code so far is:
package slack;

import com.github.seratch.jslack.Slack;
import java.io.IOException;
import com.github.seratch.jslack.api.methods.SlackApiException;
import com.github.seratch.jslack.api.webhook.*;

public class SlackManager {
    private String token_ = "{myToken}";
    private Slack slack_ = Slack.getInstance();
    private String url_ = "{url}";

    public void sendMessage(String text, String channel, String name) throws IOException, SlackApiException {
        Payload payload = Payload.builder()
                .channel("#" + channel)
                .username(name)
                .iconEmoji(":smile_cat:")
                .text(text)
                .build();
        WebhookResponse response = slack_.send(url_, payload);
        System.out.println(response.getMessage().toString());
    }

    public void getChannels() {
        // I don't know how to get the channel list!!!
    }
}
I tried this:
public void getChannels() throws IOException, SlackApiException {
    List<String> channels = slack_.methods().channelsList(ChannelsListRequest.builder().token(token_).build())
            .getChannels().stream()
            .map(c -> c.getId()).collect(Collectors.toList());
    for (String string : channels) {
        System.out.println(string);
    }
}
but the result was a NullPointerException. Must the token be a String?

Slack's incoming webhooks won't be able to provide this functionality - you'll need to use Slack's Web API to get what you need.
Using the Web API, try following this example from the jslack library you're using:
List<String> channels = slack.methods().channelsList(ChannelsListRequest.builder().token(token).build())
        .getChannels().stream()
        .map(c -> c.getId()).collect(toList());
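For a fuller picture, here is a sketch of a getChannels() method for the SlackManager class above. The isOk()/getError() check is my addition based on jslack's usual response shape, and the exact request/response package names may differ between jslack versions; a failed call (for example with a bad or missing token) typically leaves getChannels() null, which would explain the NullPointerException you saw:
// additional imports for SlackManager (paths assume the usual jslack layout)
import com.github.seratch.jslack.api.methods.request.channels.ChannelsListRequest;
import com.github.seratch.jslack.api.methods.response.channels.ChannelsListResponse;
import java.util.List;
import java.util.stream.Collectors;

public void getChannels() throws IOException, SlackApiException {
    ChannelsListResponse response = slack_.methods()
            .channelsList(ChannelsListRequest.builder().token(token_).build());
    if (!response.isOk()) {
        // e.g. "invalid_auth" or "not_authed" when the token is wrong or missing
        System.err.println("channels.list failed: " + response.getError());
        return;
    }
    List<String> channelIds = response.getChannels().stream()
            .map(c -> c.getId())
            .collect(Collectors.toList());
    for (String id : channelIds) {
        System.out.println(id);
    }
}
The token is just a plain String (the token_ field you already have).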

Related

How to send JsonArray data in apache-pulsar-client?

I am a beginner who has just started developing a Pulsar client with Spring Boot.
First of all, I learned the basics through the Pulsar docs and Git, but I got stuck testing batch transmission of messages from the pulsar-client producer.
In particular, I want to send JsonArray data in batches, but I keep getting a JsonArray.getAsInt error.
Please take a look at my code and tell me what's wrong.
package com.refactorizando.example.pulsar.producer;
import static java.util.stream.Collectors.toList;
import com.refactorizando.example.pulsar.config.PulsarConfiguration;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.TimeUnit;
import java.util.stream.IntStream;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import net.sf.json.JSONArray;
import org.apache.pulsar.client.api.CompressionType;
import org.apache.pulsar.client.api.Message;
import org.apache.pulsar.client.api.MessageId;
import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.PulsarClientException;
import org.apache.pulsar.client.impl.schema.JSONSchema;
import org.apache.pulsar.shade.com.google.gson.JsonArray;
import org.apache.pulsar.shade.com.google.gson.JsonElement;
import org.apache.pulsar.shade.com.google.gson.JsonObject;
import org.apache.pulsar.shade.com.google.gson.JsonParser;
import org.springframework.context.annotation.Bean;
import org.springframework.stereotype.Component;
@Component
@RequiredArgsConstructor
@Slf4j
public class PulsarProducer {
private static final String TOPIC_NAME = "Json_Test";
private final PulsarClient client;
@Bean(name = "producer")
public void producer() throws PulsarClientException {
// batching
Producer<JsonArray> producer = client.newProducer(JSONSchema.of(JsonArray.class))
.topic(TOPIC_NAME)
.batchingMaxPublishDelay(60, TimeUnit.SECONDS)
.batchingMaxMessages(2)
.enableBatching(true)
.compressionType(CompressionType.LZ4)
.create();
String data = "{'users': [{'userId': 1,'firstName': 'AAAAA'},{'userId': 2,'firstName': 'BBBB'},{'userId': 3,'firstName': 'CCCCC'},{'userId': 4,'firstName': 'DDDDD'},{'userId': 5,'firstName': 'EEEEE'}]}";
JsonElement element = JsonParser.parseString(data);
JsonObject obj = element.getAsJsonObject();
JsonArray arr = obj.getAsJsonArray("users");
try {
producer.send(arr);
} catch (Exception e) {
log.error("Error sending message");
e.printStackTrace();
}
producer.close();
}
}
I'm still a beginner developer, and I couldn't find an answer on Stack Overflow because I wasn't sure what to search for. If you have anything related, please leave a link and I'll delete the question.
Thanks for reading my question and have a nice day!
I tried several things, such as converting to a JsonObject and sending it, or converting to a String and sending it, but the same error came up.
Hi cho,
Welcome to Pulsar and Spring Pulsar! I believe there are a few things to cover to fully answer your question.
Spring Pulsar Usage
In your example you are crafting a Producer directly from the PulsarClient. There is absolutely nothing wrong/bad about using that API directly. However, if you want to use Spring Pulsar, the recommended approach to send messages in a Spring Boot app using Spring Pulsar is via the auto-configured PulsarTemplate (or ReactivePulsarTemplate if using Reactive). It simplifies usage and allows configuring the template/producer using configuration properties. For example, instead of building up and then using Producer.send() you would instead inject the pulsar template and use it as follows:
pulsarTemplate.newMessage(foo)
.withTopic("Json_Test")
.withSchema(Schema.JSON(Foo.class))
.withProducerCustomizer((producerBuilder) -> {
producerBuilder
.batchingMaxPublishDelay(60, TimeUnit.SECONDS)
.batchingMaxMessages(2)
.enableBatching(true)
.compressionType(CompressionType.LZ4);
})
.send();
Furthermore, you can replace the builder calls with configuration properties like:
spring:
  pulsar:
    producer:
      batching-enabled: true
      batching-max-publish-delay: 60s
      batching-max-messages: 2
      compression-type: lz4
and then your code becomes:
pulsarTemplate.newMessage(foo)
.withTopic("Json_Test")
.withSchema(Schema.JSON(Foo.class))
.send();
NOTE: I replaced the JSON array with Foo for simplicity.
Schemas
In Pulsar, the Schema knows how to serialize and deserialize the data. The built-in Pulsar Schema.JSON uses the Jackson JSON library by default to de/serialize the data. This requires that the data can be handled by Jackson's ObjectMapper readValue/writeValue methods. It handles POJOs really well, but it does not handle the JSON implementation you are using.
I noticed the latest json-lib is 2.4, which (AFAICT) has 9 CVEs against it and was last released in 2010. If I had to use a JSON-level API for my data I would pick a more contemporary and well-supported library such as Jackson or Gson.
I switched your sample to use Jackson's ArrayNode and it worked well. I did have to replace the single quotes in your data string with escaped double quotes, as Jackson by default does not accept single-quoted JSON. Here is the reworked sample app using Jackson ArrayNode:
@SpringBootApplication
public class HyunginChoSpringPulsarUserApp {
public static void main(String[] args) {
SpringApplication.run(HyunginChoSpringPulsarUserApp.class, args);
}
@Bean
ApplicationRunner sendDataOnStartup(PulsarTemplate<ArrayNode> pulsarTemplate) {
return (args) -> {
String data2 = "{\"users\": [{\"userId\": 1,\"firstName\": \"AAAAA\"},{\"userId\": 2,\"firstName\": \"BBBB\"},{\"userId\": 3,\"firstName\": \"CCCCC\"},{\"userId\": 4,\"firstName\": \"DDDDD\"},{\"userId\": 5,\"firstName\": \"EEEEE\"}]}";
ArrayNode jsonArray = (ArrayNode) ObjectMapperFactory.create().readTree(data2).get("users");
System.out.printf("*** SENDING: %s%n", jsonArray);
pulsarTemplate.newMessage(jsonArray)
.withTopic("Json_Test")
.withSchema(Schema.JSON(ArrayNode.class))
.send();
};
}
#PulsarListener(topics = "Json_Test", schemaType = SchemaType.JSON, batch = true)
public void listenForData(List<ArrayNode> user) {
System.out.printf("***** LISTEN: %s%n".formatted(user));
}
}
The output looks like:
*** SENDING: [{"userId":1,"firstName":"AAAAA"},{"userId":2,"firstName":"BBBB"},{"userId":3,"firstName":"CCCCC"},{"userId":4,"firstName":"DDDDD"},{"userId":5,"firstName":"EEEEE"}]
***** LISTEN: [{"userId":1,"firstName":"AAAAA"},{"userId":2,"firstName":"BBBB"},{"userId":3,"firstName":"CCCCC"},{"userId":4,"firstName":"DDDDD"},{"userId":5,"firstName":"EEEEE"}]
Data Model
Your data is an array of users. Do you have a requirement to use a JSON-level API, or could you instead deal with a List<User> of POJOs? That would simplify things and make for a much better experience. A Java record is a great choice, such as:
public record User(String userId, String firstName) {}
then you can pass in a List<User> to your PulsarTemplate and everything will work well. For example:
@SpringBootApplication
public class HyunginChoSpringPulsarUserApp {
public static void main(String[] args) {
SpringApplication.run(HyunginChoSpringPulsarUserApp.class, args);
}
@Bean
ApplicationRunner sendDataOnStartup(PulsarTemplate<User> pulsarTemplate) {
return (args) -> {
String data2 = "{\"users\": [{\"userId\": 1,\"firstName\": \"AAAAA\"},{\"userId\": 2,\"firstName\": \"BBBB\"},{\"userId\": 3,\"firstName\": \"CCCCC\"},{\"userId\": 4,\"firstName\": \"DDDDD\"},{\"userId\": 5,\"firstName\": \"EEEEE\"}]}";
ObjectMapper objectMapper = ObjectMapperFactory.create();
JsonNode usersNode = objectMapper.readTree(data2).get("users");
List<User> users = objectMapper.convertValue(usersNode, new TypeReference<>() {});
System.out.printf("*** SENDING: %s%n", users);
for (User user : users) {
pulsarTemplate.newMessage(user)
.withTopic("Json_Test2")
.withSchema(Schema.JSON(User.class))
.send();
}
};
}
#PulsarListener(topics = "Json_Test2", schemaType = SchemaType.JSON, batch = true)
public void listenForData(List<User> users) {
users.forEach((user) -> System.out.printf("***** LISTEN: %s%n".formatted(user)));
}
public record User(String userId, String firstName) {}
}
*** SENDING: [User[userId=1, firstName=AAAAA], User[userId=2, firstName=BBBB], User[userId=3, firstName=CCCCC], User[userId=4, firstName=DDDDD], User[userId=5, firstName=EEEEE]]
...
***** LISTEN: User[userId=1, firstName=AAAAA]
***** LISTEN: User[userId=2, firstName=BBBB]
***** LISTEN: User[userId=3, firstName=CCCCC]
***** LISTEN: User[userId=4, firstName=DDDDD]
***** LISTEN: User[userId=5, firstName=EEEEE]
I hope this helps. Take care.

I'm using JDA 4.4.0_352 and my Discord Bot is unable to send messages in the channel

Here's my code in Bot.java.
import net.dv8tion.jda.api.JDA;
import net.dv8tion.jda.api.JDABuilder;
import javax.security.auth.login.LoginException;
public class Bot {
private JDA api;
public Bot() throws LoginException, InterruptedException {
api = JDABuilder.createDefault("token")
.addEventListeners(new search()).build()
.awaitReady();
}
public static void main(String[] args) throws LoginException, InterruptedException {
new Bot();
}
}
token is replaced with an actual token in my code :)
And here's my code in search.java
package core;
import net.dv8tion.jda.api.hooks.ListenerAdapter;
import net.dv8tion.jda.api.events.message.MessageReceivedEvent;
public class search extends ListenerAdapter {
@Override
public void onMessageReceived(MessageReceivedEvent event) {
String msg = event.getMessage().getContentRaw();
if (event.getAuthor().isBot())
return;
if (msg.equalsIgnoreCase("!search")) {
event.getPrivateChannel()
.sendMessage("Please enter the course name: ").queue();
}
}
}
I'm not sure why my bot doesn't send the appropriate message when I type !search in my Discord channel (I tried both private and guild channels).
Please let me know if you know where the problem is, thank you!
The Bot's intents/permissions need to be set in the createDefault method (and possibly inside the discord developer portal). For direct messages it'd be:
JDABuilder.createDefault("token", GatewayIntent.DIRECT_MESSAGES)
To handle messages in your guild, I think it'd require Message Content Intent enabled in the discord developer portal; and it'd be:
JDABuilder.createDefault("token", GatewayIntent.GUILD_MESSAGES)
The GatewayIntent parameter inside the createDefault method uses varargs, so you can tack on any number of intents:
JDABuilder.createDefault("token", GatewayIntent.DIRECT_MESSAGES, GatewayIntent.GUILD_MESSAGES)
Also, event.getPrivateChannel() will throw an IllegalStateException in the event that the message is not from a private channel; I'd recommend first checking if event.getChannelType() == ChannelType.PRIVATE, or a try/catch.
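A rough sketch of that check applied to the search listener (names are taken from the JDA 4.x API as I recall it; treat this as illustrative rather than definitive):
import net.dv8tion.jda.api.entities.ChannelType;
import net.dv8tion.jda.api.events.message.MessageReceivedEvent;
import net.dv8tion.jda.api.hooks.ListenerAdapter;

public class search extends ListenerAdapter {
    @Override
    public void onMessageReceived(MessageReceivedEvent event) {
        if (event.getAuthor().isBot())
            return;
        String msg = event.getMessage().getContentRaw();
        if (msg.equalsIgnoreCase("!search")) {
            // Reply in whichever channel the message came from instead of
            // assuming a private channel exists.
            if (event.getChannelType() == ChannelType.PRIVATE) {
                event.getPrivateChannel()
                     .sendMessage("Please enter the course name: ").queue();
            } else {
                event.getChannel()
                     .sendMessage("Please enter the course name: ").queue();
            }
        }
    }
}
With the intents set as above, this replies in DMs or in guild channels without hitting the IllegalStateException.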

ElasticSearch Java API Client - Send already serialized data and avoid serialization

I have a Kafka topic with JSON data. Now I'm trying to send those JSON strings to an ES index using the new "Java API Client" (https://www.elastic.co/guide/en/elasticsearch/client/java-api-client/7.17/index.html), but I'm running into a parser exception:
co.elastic.clients.elasticsearch._types.ElasticsearchException: [es/index] failed: [mapper_parsing_exception] failed to parse
at co.elastic.clients.transport.rest_client.RestClientTransport.getHighLevelResponse(RestClientTransport.java:281)
at co.elastic.clients.transport.rest_client.RestClientTransport.performRequest(RestClientTransport.java:147)
at co.elastic.clients.elasticsearch.ElasticsearchClient.index(ElasticsearchClient.java:953)
This exception occurs in the last line of the following code:
final IndexRequest<String> request =
new IndexRequest.Builder<String>()
.index("myIndex")
.id(String.valueOf(UUID.randomUUID()))
.document(consumerRecord.value()) //already serialized json data
.build();
elasticsearchClient.index(request);
As far as I understand, this exception occurs because the ES client tries to serialize the data I'm providing, which is already serialized, resulting in a malformed JSON string.
Is there any way to get around this and just send plain JSON strings? Also, I believe this was possible with the earlier low-level Java library, right? And yes, I know there are ways to connect Kafka and ES without writing a consumer.
Thanks for any hints.
If you use a JacksonJsonpMapper when creating your ElasticsearchTransport, you can use a custom PreserializedJson class to send already-serialized JSON.
ElasticsearchTransport transport = new RestClientTransport(
createLowLevelRestClient(), // supply your own!
new JacksonJsonpMapper()
);
ElasticsearchClient client = new ElasticsearchClient(transport);
IndexResponse response = client.index(indexReq -> indexReq
.index("my-index")
.id("docId")
.document(new PreserializedJson("{\"foo\":\"bar\"}"))
);
System.out.println(response);
Here is the source for PreserializedJson:
import com.fasterxml.jackson.core.JsonGenerator;
import com.fasterxml.jackson.databind.SerializerProvider;
import com.fasterxml.jackson.databind.annotation.JsonSerialize;
import com.fasterxml.jackson.databind.ser.std.StdSerializer;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import static java.util.Objects.requireNonNull;
@JsonSerialize(using = PreserializedJson.Serializer.class)
public class PreserializedJson {
private final String value;
public PreserializedJson(String value) {
this.value = requireNonNull(value);
}
public PreserializedJson(byte[] value) {
this(new String(value, StandardCharsets.UTF_8));
}
public static class Serializer extends StdSerializer<PreserializedJson> {
public Serializer() {
super(PreserializedJson.class);
}
@Override
public void serialize(PreserializedJson value, JsonGenerator gen, SerializerProvider provider) throws IOException {
gen.writeRaw(value.value);
}
}
}
I solved the problem by substituting the "Java API Client" (https://www.elastic.co/guide/en/elasticsearch/client/java-api-client/current/introduction.html) with the "Java Low Level REST Client" (https://www.elastic.co/guide/en/elasticsearch/client/java-api-client/current/java-rest-low.html).
This library allows sending arbitrary JSON strings to ES:
final Request request = new Request("POST", "/twitter/_doc");
request.setJsonEntity(record.value());
restClient.performRequest(request);
With the new API Client, you can natively insert raw JSON into it.
As specified here: Using raw json data
IndexRequest<JsonData> request = IndexRequest.of(i -> i
.index("logs")
.withJson(input)
);
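In that snippet, input is a Reader or InputStream. For an already-serialized String such as your Kafka record value, something along these lines should work (a sketch reusing the client and index name from the question; not tested against your setup):
import co.elastic.clients.elasticsearch.ElasticsearchClient;
import co.elastic.clients.elasticsearch.core.IndexRequest;
import co.elastic.clients.json.JsonData;
import java.io.IOException;
import java.io.StringReader;
import java.util.UUID;

public void indexRawJson(ElasticsearchClient elasticsearchClient, String rawJson) throws IOException {
    // rawJson is the already-serialized JSON string, e.g. consumerRecord.value()
    IndexRequest<JsonData> request = IndexRequest.of(i -> i
            .index("myIndex")
            .id(String.valueOf(UUID.randomUUID()))
            .withJson(new StringReader(rawJson)));
    elasticsearchClient.index(request);
}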

How do you read and print a chunked HTTP response using java.net.http as chunks arrive?

Java 11 introduces a new package, java.net.http, for making HTTP requests. For general usage, it's pretty straightforward.
My question is: how do I use java.net.http to handle chunked responses as each chunk is received by the client?
java.net.http contains a reactive BodySubscriber which appears to be what I want, but I can't find an example of how it's used.
http_get_demo.py
Below is a Python implementation that prints chunks as they arrive; I'd like to do the same thing with java.net.http:
import argparse
import requests
def main(url: str):
    with requests.get(url, stream=True) as r:
        for c in r.iter_content(chunk_size=1):
            print(c.decode("UTF-8"), end="")

if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        description="Read from a URL and print as text as chunks arrive")
    parser.add_argument('url', type=str, help="A URL to read from")
    args = parser.parse_args()
    main(args.url)
HttpGetDemo.java
Just for completeness, here's a simple example of making a blocking request using java.net.http:
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpResponse;
import java.net.http.HttpRequest;
public class HttpGetDemo {
public static void main(String[] args) throws Exception {
var request = HttpRequest.newBuilder()
.uri(URI.create(args[0]))
.build();
var bodyHandler = HttpResponse.BodyHandlers
.ofString();
var client = HttpClient.newHttpClient();
var response = client.send(request, bodyHandler);
System.out.println(response.body());
}
}
HttpAsyncGetDemo.java
And here's the example making a non-blocking/async request:
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpResponse;
import java.net.http.HttpRequest;
/**
* ReadChunked
*/
public class HttpAsyncGetDemo {
public static void main(String[] args) throws Exception {
var request = HttpRequest.newBuilder()
.uri(URI.create(args[0]))
.build();
var bodyHandler = HttpResponse.BodyHandlers
.ofString();
var client = HttpClient.newHttpClient();
client.sendAsync(request, bodyHandler)
.thenApply(HttpResponse::body)
.thenAccept(System.out::println)
.join();
}
}
The Python code does not ensure that the response body data is made available one HTTP chunk at a time. It just provides small amounts of data to the application, reducing the amount of memory consumed at the application level (it could be buffered lower in the stack). The Java 11 HTTP Client supports streaming through one of the streaming body handlers in HttpResponse.BodyHandlers: ofInputStream, ofByteArrayConsumer, ofLines, etc.
Or write your own handler / subscriber, as demonstrated here:
https://www.youtube.com/watch?v=qiaC0QMLz5Y
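Before writing a custom subscriber, a minimal sketch using one of the built-in streaming handlers, ofInputStream, looks like this (the URL comes from the command line, as in the other demos in this question):
import java.io.InputStream;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class StreamingGetDemo {
    public static void main(String[] args) throws Exception {
        var request = HttpRequest.newBuilder()
                .uri(URI.create(args[0]))
                .build();
        var client = HttpClient.newHttpClient();
        // The body is exposed as an InputStream that fills as data arrives.
        HttpResponse<InputStream> response =
                client.send(request, HttpResponse.BodyHandlers.ofInputStream());
        try (InputStream in = response.body()) {
            byte[] buf = new byte[1024];
            int n;
            while ((n = in.read(buf)) != -1) {
                System.out.write(buf, 0, n); // print data as soon as it is read
            }
        }
        System.out.flush();
    }
}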
You can print ByteBuffers as they come, but there's no guarantee that a ByteBuffer corresponds to a chunk. Chunks are handled by the stack: one ByteBuffer slice will be pushed for every chunk, but if there isn't enough space remaining in the buffer, a partial chunk will be pushed. All the consumer sees is a stream of ByteBuffers that contain the data, with no guarantee that each one corresponds exactly to one chunk as sent by the server.
Note: If the body of your response is text based, then you can use
BodyHandlers.fromLineSubscriber(Subscriber<? super String> subscriber) with a custom Subscriber<String> that will print each line as it comes.
BodyHandlers.fromLineSubscriber does the hard work of decoding bytes into chars using the charset indicated in the response headers, buffering bytes if needed until they can be decoded (a ByteBuffer might end in the middle of an encoding sequence if the text contains chars encoded over multiple bytes), and splitting them at line boundaries. The Subscriber::onNext method will be invoked once for each line in the text. See https://download.java.net/java/early_access/jdk11/docs/api/java.net.http/java/net/http/HttpResponse.BodyHandlers.html#fromLineSubscriber(java.util.concurrent.Flow.Subscriber) for more info.
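A minimal sketch of that approach, with a custom Flow.Subscriber<String> that prints each line as it arrives (the class and variable names here are my own):
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.Flow;

public class LinePrinterDemo {
    // Prints each decoded line as the HTTP client pushes it.
    static class LinePrinter implements Flow.Subscriber<String> {
        private Flow.Subscription subscription;
        @Override
        public void onSubscribe(Flow.Subscription subscription) {
            this.subscription = subscription;
            subscription.request(1); // request the first line
        }
        @Override
        public void onNext(String line) {
            System.out.println(line);
            subscription.request(1); // request the next line
        }
        @Override
        public void onError(Throwable t) {
            t.printStackTrace();
        }
        @Override
        public void onComplete() {
            System.out.println("--- done");
        }
    }

    public static void main(String[] args) throws Exception {
        var request = HttpRequest.newBuilder().uri(URI.create(args[0])).build();
        var client = HttpClient.newHttpClient();
        client.send(request, HttpResponse.BodyHandlers.fromLineSubscriber(new LinePrinter()));
    }
}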
Thanks to @pavel and @chegar999 for their partial answers. They led me to my solution.
Overview
The solution I came up with is below. Basically, the solution is to use a custom java.net.http.HttpResponse.BodySubscriber. A BodySubscriber contains reactive methods (onSubscribe, onNext, onError, and onComplete) and a getBody method that basically returns a Java CompletableFuture that will eventually produce the body of the HTTP response. Once you have your BodySubscriber in hand you can use it like:
HttpClient client = HttpClient.newHttpClient();
HttpRequest request = HttpRequest.newBuilder()
.uri(URI.create(uri))
.build();
return client.sendAsync(request, responseInfo -> new StringSubscriber())
.whenComplete((r, t) -> System.out.println("--- Status code " + r.statusCode()))
.thenApply(HttpResponse::body);
Note the line:
client.sendAsync(request, responseInfo -> new StringSubscriber())
That's where we register our custom BodySubscriber; in this case, my custom class is named StringSubscriber.
CustomSubscriber.java
This is a complete working example. Using Java 11, you can run it without compiling it. Just paste it into a file named CustomSubscriber.java, then run the command java CustomSubscriber.java <some url>. It prints the contents of each chunk as it arrives. It also collects them and returns them as the body when the response has completed.
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.net.http.HttpResponse.BodyHandlers;
import java.net.http.HttpResponse.BodySubscriber;
import java.net.URI;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.Flow;
import java.util.stream.Collectors;
import java.util.List;
public class CustomSubscriber {
public static void main(String[] args) {
CustomSubscriber cs = new CustomSubscriber();
String body = cs.get(args[0]).join();
System.out.println("--- Response body:\n: ..." + body + "...");
}
public CompletableFuture<String> get(String uri) {
HttpClient client = HttpClient.newHttpClient();
HttpRequest request = HttpRequest.newBuilder()
.uri(URI.create(uri))
.build();
return client.sendAsync(request, responseInfo -> new StringSubscriber())
.whenComplete((r, t) -> System.out.println("--- Status code " + r.statusCode()))
.thenApply(HttpResponse::body);
}
static class StringSubscriber implements BodySubscriber<String> {
final CompletableFuture<String> bodyCF = new CompletableFuture<>();
Flow.Subscription subscription;
List<ByteBuffer> responseData = new CopyOnWriteArrayList<>();
@Override
public CompletionStage<String> getBody() {
return bodyCF;
}
@Override
public void onSubscribe(Flow.Subscription subscription) {
this.subscription = subscription;
subscription.request(1); // Request first item
}
@Override
public void onNext(List<ByteBuffer> buffers) {
System.out.println("-- onNext " + buffers);
try {
System.out.println("\tBuffer Content:\n" + asString(buffers));
}
catch (Exception e) {
System.out.println("\tUnable to print buffer content");
}
buffers.forEach(ByteBuffer::rewind); // Rewind after reading
responseData.addAll(buffers);
subscription.request(1); // Request next item
}
@Override
public void onError(Throwable throwable) {
bodyCF.completeExceptionally(throwable);
}
@Override
public void onComplete() {
bodyCF.complete(asString(responseData));
}
private String asString(List<ByteBuffer> buffers) {
return new String(toBytes(buffers), StandardCharsets.UTF_8);
}
private byte[] toBytes(List<ByteBuffer> buffers) {
int size = buffers.stream()
.mapToInt(ByteBuffer::remaining)
.sum();
byte[] bs = new byte[size];
int offset = 0;
for (ByteBuffer buffer : buffers) {
int remaining = buffer.remaining();
buffer.get(bs, offset, remaining);
offset += remaining;
}
return bs;
}
}
}
Trying it out
To test this solution, you'll need a server that sends a response that uses Transfer-encoding: chunked and sends it slow enough to watch the chunks arrive. I've created one at https://github.com/hohonuuli/demo-chunk-server but you can spin it up using Docker like so:
docker run -p 8080:8080 hohonuuli/demo-chunk-server
Then run the CustomSubscriber.java code using java CustomSubscriber.java http://localhost:8080/chunk/10
There is now a new Java library that addresses this kind of requirement:
RxSON: https://github.com/rxson/rxson
It utilizes JsonPath with RxJava to read JSON chunks streamed in the response as soon as they arrive and parse them into Java objects.
Example:
String serviceURL = "https://think.cs.vt.edu/corgis/datasets/json/airlines/airlines.json";
HttpRequest req = HttpRequest.newBuilder(URI.create(serviceURL)).GET().build();
RxSON rxson = new RxSON.Builder().build();
String jsonPath = "$[*].Airport.Name";
Flowable<String> airportStream = rxson.create(String.class, req, jsonPath);
airportStream
.doOnNext(it -> System.out.println("Received new item: " + it))
//Just for test
.toList()
.blockingGet();

Good Zookeeper Hello world Program with Java client

I was trying to use ZooKeeper in our project. I could run the server and even test it using zkCli.sh; all good.
But I couldn't find a good tutorial for connecting to this server from Java! All I need in the Java API is a method
public String getServiceURL ( String serviceName )
I tried https://cwiki.apache.org/confluence/display/ZOOKEEPER/Index --> not good for me.
http://zookeeper.apache.org/doc/trunk/javaExample.html : sort of OK, but I couldn't understand the concepts clearly! I feel it is not explained well.
Finally, this is the simplest and most basic program I came up with which will help you with ZooKeeper "Getting Started":
package core.framework.zookeeper;
import java.util.Date;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.Watcher.Event.KeeperState;
import org.apache.zookeeper.ZooDefs.Ids;
import org.apache.zookeeper.ZooKeeper;
public class ZkConnect {
private ZooKeeper zk;
private CountDownLatch connSignal = new CountDownLatch(1); // count of 1 so await() blocks until the connection event arrives
//host should be 127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002
public ZooKeeper connect(String host) throws Exception {
zk = new ZooKeeper(host, 3000, new Watcher() {
public void process(WatchedEvent event) {
if (event.getState() == KeeperState.SyncConnected) {
connSignal.countDown();
}
}
});
connSignal.await();
return zk;
}
public void close() throws InterruptedException {
zk.close();
}
public void createNode(String path, byte[] data) throws Exception
{
zk.create(path, data, Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
}
public void updateNode(String path, byte[] data) throws Exception
{
zk.setData(path, data, zk.exists(path, true).getVersion());
}
public void deleteNode(String path) throws Exception
{
zk.delete(path, zk.exists(path, true).getVersion());
}
public static void main (String args[]) throws Exception
{
ZkConnect connector = new ZkConnect();
ZooKeeper zk = connector.connect("54.169.132.0,52.74.51.0");
String newNode = "/deepakDate"+new Date();
connector.createNode(newNode, new Date().toString().getBytes());
List<String> zNodes = zk.getChildren("/", true);
for (String zNode: zNodes)
{
System.out.println("ChildrenNode " + zNode);
}
byte[] data = zk.getData(newNode, true, zk.exists(newNode, true));
System.out.println("GetData before setting");
for ( byte dataPoint : data)
{
System.out.print ((char)dataPoint);
}
System.out.println("GetData after setting");
connector.updateNode(newNode, "Modified data".getBytes());
data = zk.getData(newNode, true, zk.exists(newNode, true));
for ( byte dataPoint : data)
{
System.out.print ((char)dataPoint);
}
connector.deleteNode(newNode);
}
}
This post covers almost all of the operations required to interact with ZooKeeper; a minimal sketch of these operations follows the list below.
https://www.tutorialspoint.com/zookeeper/zookeeper_api.htm
Create ZNode with data
Delete ZNode
Get list of ZNodes (children)
Check whether a ZNode exists or not
Edit the content of a ZNode...
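A compact sketch of those operations against a plain ZooKeeper client (the connection string and paths are placeholders; error handling is omitted):
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs.Ids;
import org.apache.zookeeper.ZooKeeper;

public class ZkCrudSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder connection string; the watcher lambda ignores events for brevity.
        // In real code, wait for the connection event before issuing operations,
        // as in the ZkConnect example above.
        ZooKeeper zk = new ZooKeeper("127.0.0.1:2181", 3000, event -> { });

        String path = "/demo";
        // Create a ZNode with data
        zk.create(path, "hello".getBytes(), Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        // Check whether the ZNode exists and read its data
        if (zk.exists(path, false) != null) {
            System.out.println(new String(zk.getData(path, false, null)));
        }
        // Edit the content (-1 matches any version)
        zk.setData(path, "updated".getBytes(), -1);
        // Get the list of children of the root
        zk.getChildren("/", false).forEach(System.out::println);
        // Delete the ZNode
        zk.delete(path, -1);
        zk.close();
    }
}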
This blog post, Zookeeper Java API examples, includes some good examples if you are looking for Java examples to start with. ZooKeeper also provides a client API library (C and Java) that is very easy to use.
ZooKeeper is one of the best open-source servers and services for reliably coordinating distributed processes. ZooKeeper is a CP system (refer to the CAP theorem) that provides consistency and partition tolerance. Replication of the ZooKeeper state across all the nodes makes it an eventually consistent distributed service.
This is about as simple as you can get. I am building a tool which will use ZK to lock files that are being processed (hence the class name):
package mypackage;
import java.io.IOException;
import java.util.List;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.Watcher;
public class ZooKeeperFileLock {
public static void main(String[] args) throws IOException, KeeperException, InterruptedException {
String zkConnString = "<zknode1>:2181,<zknode2>:2181,<zknode3>:2181";
ZooKeeperWatcher zkWatcher = new ZooKeeperWatcher();
ZooKeeper client = new ZooKeeper(zkConnString, 10000, zkWatcher);
List<String> zkNodes = client.getChildren("/", true);
for(String node : zkNodes) {
System.out.println(node);
}
}
public static class ZooKeeperWatcher implements Watcher {
@Override
public void process(WatchedEvent event) {
}
}
}
If you are on AWS, you can now create an internal ELB which supports redirection based on URI, which can really solve this problem with high availability already baked in.
