Adding REST route to an existing Jetty endpoint in Camel at runtime - java

I have been inventing a way to work around the problem of adding consumers to a jetty endpoint (it does not allow multiple consumers). The way we do it in our company is to build our own router and a broadcasting endpoint that consumes from jetty and routes requests to the underlying "subscriptions"; only one of them eventually processes the request. It sort of works, but it's not completely OK: recently, when updating to the latest Camel, we found that our custom-built component leaks memory, and in general I prefer built-in functionality over custom hacks.
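For context, a rough sketch of that kind of broadcasting setup (endpoint URIs and route names here are made up for illustration, not our actual component): a single jetty consumer owns the port and dispatches to plain direct: routes, which can be added and removed independently.
import org.apache.camel.builder.RouteBuilder;

public class BroadcastingRouteBuilder extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // the single jetty consumer that owns the port
        from("jetty:http://localhost:8080/api?matchOnUriPrefix=true")
            // dispatch on the remaining path, e.g. /api/issues -> direct:/issues
            .toD("direct:${header.CamelHttpPath}");

        // a "subscription" route; more of these can be added/removed at runtime
        from("direct:/issues")
            .process(e -> e.getOut().setBody("Here's an issue"));
    }
}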
I started investigating the Camel REST API and found it very nice, pretty much replacing our home-grown component apart from one thing - you cannot reconfigure it at runtime; you basically have to stop the context for this to work. Below I include my unit test with a happy path and the path that fails. Frankly, I think this is a bug, but if there is a legitimate way to achieve what I want, I'd like to hear sound advice:
package com.anydoby.camel;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.fail;
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;
import org.apache.commons.io.IOUtils;
import org.junit.Before;
import org.junit.Test;
/**
* Test tries to add/remove routes at runtime.
*/
public class RoutesTest {
private DefaultCamelContext ctx;
@Before
public void pre() throws Exception {
ctx = new DefaultCamelContext();
new RouteBuilder(ctx) {
@Override
public void configure() throws Exception {
restConfiguration("jetty").host("localhost").port(8080);
rest("/")
.get("/issues/{isin}").route().id("issues")
.process(e -> e.getOut().setBody("Here's your issue " + e.getIn().getHeader("isin"))).endRest()
.get("/listings").route().id("listings").process(e -> e.getOut().setBody("some listings"));
}
}.addRoutesToCamelContext(ctx);
ctx.start();
}
@Test
public void test() throws IOException {
{
InputStream stream = new URL("http://localhost:8080/issues/35").openStream();
assertEquals("Here's your issue 35", IOUtils.toString(stream));
}
{
InputStream stream = new URL("http://localhost:8080/listings").openStream();
assertEquals("some listings", IOUtils.toString(stream));
}
}
@Test
public void disableRoute() throws Exception {
ctx.stopRoute("issues");
ctx.removeRoute("issues");
try (InputStream stream = new URL("http://localhost:8080/issues/35").openStream()) {
fail();
} catch (Exception e) {
}
new RouteBuilder(ctx) {
@Override
public void configure() throws Exception {
rest().get("/issues/{isin}/{sedol}").route().id("issues")
.process(e -> e.getOut()
.setBody("Here's your issue " + e.getIn().getHeader("isin") + ":" + e.getIn().getHeader("sedol")))
.endRest();
}
}.addRoutesToCamelContext(ctx);
{
InputStream stream = new URL("http://localhost:8080/issues/35/65").openStream();
assertEquals("Here's your issue 35:65", IOUtils.toString(stream));
}
}
}
The disableRoute() test fails since I cannot add another consumer to an existing endpoint.
So my question is: "is there a way to add a new URL mapping to a RESTful camel-jetty endpoint?" If you do it during the first configuration it works fine, but when you later want to reconfigure one of the routes, the error is:
org.apache.camel.FailedToStartRouteException: Failed to start route because of Multiple consumers for the same endpoint is not allowed: jetty:http://localhost:8080/issues/%7Bisin%7D/%7Bsedol%7D?httpMethodRestrict=GET

Related

Migrate from HystrixCommand to Resilience4j

Resilience4j version: 1.7.0
Java version: 1.8
I have a challenge in implementing the TimeLimiter feature of Resilience4j. I am able to get the Circuit Breaker (CB) to work.
We have two services, let's say ServiceA and ServiceB. We use the Command design pattern, which encapsulates the logic to communicate with ServiceB. RabbitMQ is used to establish inter-microservice communication. We had implemented the Hystrix CB by making all our Command classes extend HystrixCommand. When we decided to move to Resilience4j, the main challenge was to retain the existing design pattern rather than just configuring the Resilience4j CB.
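For illustration, a command in that old setup looked roughly like this (class name, group key, and payload are made up):
import com.netflix.hystrix.HystrixCommand;
import com.netflix.hystrix.HystrixCommandGroupKey;

public class GetContractDetailsCommand extends HystrixCommand<String> {
    private final String contractId;

    public GetContractDetailsCommand(String contractId) {
        super(HystrixCommandGroupKey.Factory.asKey("ServiceB"));
        this.contractId = contractId;
    }

    @Override
    protected String run() {
        // call ServiceB over RabbitMQ and return the reply
        return "details for " + contractId;
    }

    @Override
    protected String getFallback() {
        // executed when the command fails or the circuit is open
        return "fallback details";
    }
}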
At present we have synchronous communication between ServiceA and ServiceB. Although RabbitMQ is asynchronous by nature, with the help of the Spring wrapper method RabbitTemplate.convertSendAndReceive() we are able to achieve a synchronous mode of communication over RabbitMQ.
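For illustration, the synchronous round trip is just a call along these lines (the exchange and routing key names are placeholders):
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.stereotype.Component;

@Component
public class ServiceBClient {

    private final RabbitTemplate rabbitTemplate;

    public ServiceBClient(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
    }

    // Blocks until ServiceB replies (or the template's reply timeout expires),
    // which is what makes the RabbitMQ round trip look synchronous to the caller.
    public Object call(Object request) {
        return rabbitTemplate.convertSendAndReceive("serviceB.exchange", "serviceB.requests", request);
    }
}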
When I removed the HystrixCommand reference, which was the base class for all my Command classes, there was naturally a need to implement a custom base Command class built using Resilience4j decorators.
I managed to introduce a Resilience4JCommand abstract class that implements execute() and invokes run() for all my command classes. I also defined an abstract run() method which all my existing Command classes override to implement the business logic.
I understood from many of the discussions that the method implementing the CB pattern needs to return a CompletableFuture, and also from many places that the fallback method must have the same return type. My base command class Resilience4JCommand looks something like below:
import java.io.PrintWriter;
import java.io.StringWriter;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeoutException;
import java.util.function.Supplier;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.stereotype.Component;
import com.ge.hc.XYZ.exception.ResourceNotFoundException;
import io.github.resilience4j.bulkhead.annotation.Bulkhead;
import io.github.resilience4j.bulkhead.annotation.Bulkhead.Type;
import io.github.resilience4j.circuitbreaker.annotation.CircuitBreaker;
import io.github.resilience4j.timelimiter.annotation.TimeLimiter;
@Component
public abstract class Resilience4JCommand<R> {
/** The class logger. */
protected static final Logger LOGGER = LoggerFactory.getLogger(Resilience4JCommand.class);
public R execute() {
R result = null;
try {
result = executeWithCircuitBreaker().get();
} catch (Exception e) {
System.out.println("Inside Catch block of executeAsync ...........**************\n\n ");
e.printStackTrace();
throw new RuntimeException(e);
}
return result;
}
@Bulkhead(name = "XYZStreamingServer3", fallbackMethod = "getFallback", type = Bulkhead.Type.THREADPOOL)
@TimeLimiter(name = "XYZStreamingServer2", fallbackMethod = "getFallback")
@CircuitBreaker(name = "XYZStreamingServer1", fallbackMethod = "getFallback")
public CompletableFuture<R> executeWithCircuitBreaker() {
return CompletableFuture.supplyAsync(new Supplier<R>() {
@Override
public R get() {
return run();
}
});
}
protected abstract R run();
public CompletableFuture<R> getFallback(Throwable e) {
StringWriter sw = new StringWriter();
PrintWriter pw = new PrintWriter(sw);
if (e != null) {
e.printStackTrace(pw);
}
String reason = sw.toString();
LOGGER.error("Calling XYZ-hystrix fallback method for command: {}; fallback reason: {}",
this.getClass().getSimpleName(), (reason.isEmpty() ? "unknown" : reason));
throw new ResourceNotFoundException("Circuit Breaker ");
}
}
But nothing works with the above setup. I am able to get the CB alone working without writing the new executeWithCircuitBreaker() method that returns a CompletableFuture; I can make the CB work with just the execute() below. However, Bulkhead and TimeLimiter do not work with a return type other than CompletableFuture:
@CircuitBreaker(name = SCHEME_NAME, fallbackMethod = "getFallback")
public R execute() {
return run();
}
I have spent more than a week setting this up. It would be helpful if someone could point out what I am missing 😢
My application.properties looks something like below:
management.health.circuitbreakers.enabled=true
management.endpoints.web.exposure.include=health
management.endpoint.health.show-details=always
resilience4j.circuitbreaker.instances.XYZStreamingServer1.registerHealthIndicator=true
resilience4j.circuitbreaker.instances.XYZStreamingServer1.eventConsumerBufferSize=10
resilience4j.circuitbreaker.instances.XYZStreamingServer1.failureRateThreshold=50
resilience4j.circuitbreaker.instances.XYZStreamingServer1.minimumNumberOfCalls=5
resilience4j.circuitbreaker.instances.XYZStreamingServer1.automaticTransitionFromOpenToHalfOpenEnabled=true
resilience4j.circuitbreaker.instances.XYZStreamingServer1.waitDurationInOpenState=5s
resilience4j.circuitbreaker.instances.XYZStreamingServer1.permittedNumberOfCallsInHalfOpenState=3
resilience4j.circuitbreaker.instances.XYZStreamingServer1.slidingWindowSize=10
resilience4j.circuitbreaker.instances.XYZStreamingServer1.slidingWindowType=COUNT_BASED
resilience4j.timelimiter.instances.XYZStreamingServer2.timeoutDuration=5s
resilience4j.timelimiter.instances.XYZStreamingServer2.cancelRunningFuture=true
resilience4j.thread-pool-bulkhead.instances.XYZStreamingServer3.maxThreadPoolSize=10
resilience4j.thread-pool-bulkhead.instances.XYZStreamingServer3.coreThreadPoolSize=5
resilience4j.thread-pool-bulkhead.instances.XYZStreamingServer3.queueCapacity=5

How to investigate data on Apache Camel Route?

My project is working on getting data from one system to another. We are using Apache Camel routes to send the data between JBoss EAP v7 servers. My question is: is there a way to investigate what the content of the messages is as they come across the different routes?
We have tried upping the logging but our files/console just get flooded. We have also tried to use Hawtio on the server to see the messages coming across the routes but have had no success identifying where our message is getting "stuck".
Any help is appreciated!
You can use unit tests to test your routes locally and log the contents of the exchange at specific points using adviceWith and the weave methods.
With unit tests you can easily debug your routes in your favourite IDE, even if you normally run Camel in something like Karaf or Red Hat Fuse.
package com.example;
import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.apache.camel.RoutesBuilder;
import org.apache.camel.builder.AdviceWithRouteBuilder;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.mock.MockEndpoint;
import org.apache.camel.model.dataformat.JsonLibrary;
import org.apache.camel.test.junit4.CamelTestSupport;
import org.junit.Test;
public class ExampleRouteTests extends CamelTestSupport {
@Test
public void exampleTest() throws Exception
{
ContractDetails testDetails = new ContractDetails(1512, 1215);
mockJDBCEndpoints();
context.getRouteDefinition("exampleRoute")
.adviceWith(context, new AdviceWithRouteBuilder(){
@Override
public void configure() throws Exception {
replaceFromWith("direct:start");
weaveByToUri("direct:getDetailsFromAPI")
.replace()
.to("log:testLogger?showAll=true")
.to("mock:api")
.setBody(constant(testDetails));
weaveByToUri("direct:saveToDatabase")
.replace()
.to("log:testLogger?showAll=true")
.to("mock:db");
}
});
MockEndpoint apiMockEndpoint = getMockEndpoint("mock:api");
apiMockEndpoint.expectedMessageCount(1);
MockEndpoint dbMockEndpoint = getMockEndpoint("mock:db");
dbMockEndpoint.expectedMessageCount(1);
context.start();
String body = "{\"name\":\"Bob\",\"age\":10}";
template.sendBody("direct:start", body);
apiMockEndpoint.assertIsSatisfied();
dbMockEndpoint.assertIsSatisfied();
}
@Override
protected RoutesBuilder createRouteBuilder() throws Exception {
return new RouteBuilder(){
@Override
public void configure() throws Exception {
from("amqp:queue:example")
.routeId("exampleRoute")
.unmarshal().json(JsonLibrary.Jackson,
Person.class)
.to("direct:getDetailsFromAPI")
.process(new SomeProcessor())
.to("direct:saveToDatabase");
from("direct:saveToDatabase")
.routeId("saveToDatabaseRoute")
.to("velocity:sql/insertQueryTemplate.vt")
.to("jdbc:exampleDatabase");
from("direct:getDetailsFromAPI")
.removeHeaders("*")
.toD("http4:someAPI?name=${body.getName()}")
.unmarshal().json(JsonLibrary.Jackson,
ContractDetails.class);
}
};
}
void mockJDBCEndpoints() throws Exception {
context.getRouteDefinition("saveToDatabaseRoute")
.adviceWith(context, new AdviceWithRouteBuilder(){
@Override
public void configure() throws Exception {
weaveByToUri("jdbc:*")
.replace()
.to("mock:db");
}
});
}
@Override
public boolean isUseAdviceWith() {
return true;
}
}
Now, for troubleshooting problems that do not occur with unit tests, you can configure generic or route-specific exception handling with onException and use a dead letter channel to process and store information about the failed exchange. Alternatively, you can just use the stream or file component to save information about the exception and the failed exchange to a separate file to avoid flooding the logs.
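A rough sketch of that (endpoint URIs and route IDs are made up) could look like this inside a RouteBuilder:
import org.apache.camel.builder.RouteBuilder;

public class ErrorHandlingRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // dead letter channel: failed exchanges end up as files instead of flooding the log
        errorHandler(deadLetterChannel("file:target/failed-exchanges").maximumRedeliveries(3));

        // route-specific handling: log the body and headers of the failing exchange once
        onException(Exception.class)
            .log("Failed exchange, body: ${body}, headers: ${headers}");

        from("amqp:queue:example")
            .routeId("exampleRoute")
            .to("direct:saveToDatabase");
    }
}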

How to start multiple message consumers in Quarkus?

I'm trying to migrate from Vert.x to Quarkus. In Vert.x, when I write message consumers for Kafka/AMQP etc., I have to scale the number of verticles to maximize performance across multiple cores (i.e. vertical scaling) - is this possible in Quarkus? I see a similar question here, but it wasn't answered.
For example, with Kafka I might create a consumer inside a verticle and then scale that verticle, say, 10 times (that is, specify the number of instances in the deployment to be 10) after doing performance testing to determine that's the optimal number. My understanding is that by default 1 verticle = 1 event loop, and it does not scale across multiple cores.
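For reference, in plain Vert.x that scaling is just the instance count on the deployment options (the consumer verticle class name here is illustrative):
import io.vertx.core.DeploymentOptions;
import io.vertx.core.Vertx;

public class ConsumerScalingExample {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        // deploy ten instances of the consumer verticle so the work is spread over the event loops
        vertx.deployVerticle("com.example.KafkaConsumerVerticle",
                new DeploymentOptions().setInstances(10));
    }
}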
I know that it's possible to use Vert.x verticles in Quarkus, but is there another way to scale things like the number of Kafka consumers across multiple cores?
I see that this type of scalability is configurable for things like Quarkus HTTP but I can't find anything about message consumers.
Here's the Vert.x Verticle approach that overall I'm very happy with, but I wish there were better documentation on how to do this.
UPDATE - Field injection doesn't work with this example but constructor injection does work.
Let's say I want to inject this:
@ApplicationScoped
public class CoffeeRepositoryService {
public CoffeeRepositoryService() {
System.out.println("Injection succeeded!");
}
}
Here's my Verticle
package org.acme;
import io.smallrye.mutiny.Uni;
import io.smallrye.mutiny.vertx.core.AbstractVerticle;
import io.vertx.core.impl.logging.Logger;
import io.vertx.core.impl.logging.LoggerFactory;
import io.vertx.mutiny.core.eventbus.EventBus;
import io.vertx.mutiny.rabbitmq.RabbitMQClient;
import io.vertx.mutiny.rabbitmq.RabbitMQConsumer;
import io.vertx.rabbitmq.QueueOptions;
import io.vertx.rabbitmq.RabbitMQOptions;
import javax.inject.Inject;
public class RQVerticle extends AbstractVerticle {
private final Logger LOGGER = LoggerFactory.getLogger(org.acme.RQVerticle.class);
//This doesn't work - returns null
@Inject
CoffeeRepositoryService coffeeRepositoryService;
RQVerticle() {} // dummy constructor needed
@Inject // constructor injection - this does work
RQVerticle(CoffeeRepositoryService coffeeRepositoryService) {
//Here coffeeRepositoryService is injected properly
}
@Override
public Uni<Void> asyncStart() {
LOGGER.info(
"Creating RabbitMQ Connection after Quarkus successful initialization");
RabbitMQOptions config = new RabbitMQOptions();
config.setUri("amqp://localhost:5672");
RabbitMQClient client = RabbitMQClient.create(vertx, config);
Uni<Void> clientResp = client.start();
clientResp.subscribe()
.with(asyncResult -> {
LOGGER.info("RabbitMQ successfully connected!");
});
return clientResp;
}
}
Main Class - injection doesn't work like this
package org.acme;
import io.quarkus.runtime.Quarkus;
import io.quarkus.runtime.QuarkusApplication;
import io.quarkus.runtime.annotations.QuarkusMain;
import io.vertx.core.DeploymentOptions;
import io.vertx.mutiny.core.Vertx;
@QuarkusMain
public class Main {
public static void main(String... args) {
Quarkus.run(MyApp.class, args);
}
public static class MyApp implements QuarkusApplication {
@Override
public int run(String... args) throws Exception {
var vertx = Vertx.vertx();
System.out.println("Deployment Starting");
DeploymentOptions options = new DeploymentOptions()
.setInstances(2);
vertx.deployVerticleAndAwait(RQVerticle::new, options);
System.out.println("Deployment completed");
Quarkus.waitForExit();
return 0;
}
}
}
Main Class with working injection but cannot deploy more than one instance
package org.acme;
import io.quarkus.runtime.StartupEvent;
import io.vertx.core.DeploymentOptions;
import io.vertx.mutiny.core.Vertx;
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.event.Observes;
import org.jboss.logging.Logger;
@ApplicationScoped
public class MainVerticles {
private static final Logger LOGGER = Logger.getLogger(MainVerticles.class);
public void init(@Observes StartupEvent e, Vertx vertx, RQVerticle verticle) {
DeploymentOptions options = new DeploymentOptions()
.setInstances(2);
vertx.deployVerticle(verticle,options).await().indefinitely();
}
}
Std Out - first main class looks good
2021-09-15 15:48:12,052 INFO [org.acm.RQVerticle] (vert.x-eventloop-thread-2) Creating RabbitMQ Connection after Quarkus successful initialization
2021-09-15 15:48:12,053 INFO [org.acm.RQVerticle] (vert.x-eventloop-thread-3) Creating RabbitMQ Connection after Quarkus successful initialization
Std Out - second main class
2021-09-22 15:48:11,986 ERROR [io.qua.run.Application] (Quarkus Main Thread) Failed to start application (with profile dev): java.lang.IllegalArgumentException: Can't specify > 1 instances for already created verticle
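For what it's worth, here is a sketch of a way around that last error, assuming the supplier-based deployVerticle overload and a CDI Instance are available (class and package names follow the example above); each of the two deployments gets its own verticle object from the container instead of reusing a single instance.
package org.acme;

import io.quarkus.runtime.StartupEvent;
import io.vertx.core.DeploymentOptions;
import io.vertx.mutiny.core.Vertx;
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.event.Observes;
import javax.enterprise.inject.Instance;

@ApplicationScoped
public class MainVerticles {
    public void init(@Observes StartupEvent e, Vertx vertx, Instance<RQVerticle> verticles) {
        DeploymentOptions options = new DeploymentOptions().setInstances(2);
        // a supplier gives Vert.x a fresh verticle per instance, avoiding
        // "Can't specify > 1 instances for already created verticle"
        vertx.deployVerticle(verticles::get, options).await().indefinitely();
    }
}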

Unit testing a verticle deployment

I have a simple Verticle that reads configuration from a properties file and loads it into the vertx config. I have written a unit test to test the deployment of this verticle, and a possible cause of test failure is the non-availability of the properties file at the specified location.
When I run the test, the unit test passes irrespective of whether I change the properties file name or path, and the handler says the verticle was deployed successfully.
Am I doing something wrong here? Below is my code:
import io.vertx.config.ConfigRetrieverOptions;
import io.vertx.config.ConfigStoreOptions;
import io.vertx.core.DeploymentOptions;
import io.vertx.core.json.JsonObject;
import io.vertx.rxjava.config.ConfigRetriever;
import io.vertx.rxjava.core.AbstractVerticle;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
/**
* This is the main launcher verticle, the following operations will be executed in start() method of this verticle:
* 1. Read configurations from application.properties file
* 2. Deploy all other verticles in the application
*/
public class LauncherVerticle extends AbstractVerticle {
private static final Logger log = LoggerFactory.getLogger(LauncherVerticle.class);
@Override
public void start() throws Exception {
//set up configuration from the properties file
ConfigStoreOptions fileStore = new ConfigStoreOptions()
.setType("file")
.setFormat("properties")
.setConfig(new JsonObject().put("path", System.getProperty("vertex.config.path")));
//create config retriever options add properties to filestore
ConfigRetrieverOptions options = new ConfigRetrieverOptions().addStore(fileStore);
ConfigRetriever configRetriever = ConfigRetriever.create(vertx, options);
DeploymentOptions deploymentOptions = new DeploymentOptions();
//Deploy verticles after the config has been loaded
//The configurations are loaded into JsonConfig object
//This JsonConfig object can be accessed in other verticles using the config() method.
configRetriever.rxGetConfig().subscribe(s -> {
//pass on the JsonConfig object to other verticles through deployment options
deploymentOptions.setConfig(s);
vertx.deployVerticle(AnotherVerticle.class.getName(), deploymentOptions);
}, e -> {
log.error("Failed to start application : " + e.getMessage(), e);
try {
stop();
} catch (Exception e1) {
log.error("Unable to stop vertx, terminate the process manually : "+e1.getMessage(), e1);
}
});
}
}
This is my unit test
import io.vertx.ext.unit.TestContext;
import io.vertx.ext.unit.junit.VertxUnitRunner;
import io.vertx.rxjava.core.Vertx;
import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;
import rx.Single;
@RunWith(VertxUnitRunner.class)
public class LoadConfigurationTest {
/**
* Config should be loaded successfully
*
@param context
*/
@Test
public void loadConfigTest(TestContext context) {
/*
* Set the system property "vertx.config.path" with value "application.properties"
* This system property will be used in the Launcher verticle to read the config file
*/
System.setProperty("vertx.config.path", "/opt/vertx/config/application.properties");
//create vertx instance
Vertx vertx = Vertx.vertx();
Single<String> single = vertx.rxDeployVerticle(LauncherVerticle.class.getName());
single.subscribe(s -> {
vertx.rxUndeploy(s);
}, e -> {
Assert.fail(e.getMessage());
});
}
/**
* Test for negative use case - file not available in the specified location
*
@param context
*/
@Test
public void loadConfigFailTest(TestContext context) {
//set path = non existing path
System.setProperty("vertx.config.path", "/non/existing/path/application.properties");
//create vertx instance
Vertx vertx = Vertx.vertx();
Single single = vertx.rxDeployVerticle(LauncherVerticle.class.getName());
single.subscribe(s -> {
//not executing this statement
Assert.fail("Was expecting error but Verticle deployed successfully");
}, e -> {
//not executing this statement either
System.out.println("pass");
});
}
}
Can you try the code below inside your LauncherVerticle? The only change is using AbstractVerticle's start(Future) method, which is a neat way of handling the config loading and everything around it during your startup.
public class LauncherVerticle extends AbstractVerticle {
@Override
public void start(Future<Void> startFuture) throws Exception {
ConfigStoreOptions fileStore = new ConfigStoreOptions()
.setType("file")
.setFormat("properties")
.setConfig(new JsonObject().put("path", System.getProperty("vertex.config.path")));
ConfigRetrieverOptions options = new ConfigRetrieverOptions().addStore(fileStore);
ConfigRetriever configRetriever = ConfigRetriever.create(vertx, options);
DeploymentOptions deploymentOptions = new DeploymentOptions();
configRetriever.rxGetConfig().subscribe(s -> {
deploymentOptions.setConfig(s);
vertx.deployVerticle(AnotherVerticle.class.getName(),
deploymentOptions,
result -> startFuture.complete()
);
},
startFuture::fail
);
}
}
The startFuture there helps you control the state of your verticle loading.
Also remember that @Constantine's way of handling the test is the best approach: use Async to prevent your tests from passing without actually asserting anything.
Seems like there is nothing wrong with your verticle. However, there is something in the tests - the asynchronous nature of verticle deployment is not taken into account. These test methods finish immediately instead of waiting for the verticle deployment, and a JUnit test that does not result in an AssertionError is a passed test. You have to signal completion explicitly using Async.
Please see an example for your negative scenario below:
import io.vertx.ext.unit.Async;
import io.vertx.ext.unit.TestContext;
import io.vertx.ext.unit.junit.RunTestOnContext;
import io.vertx.ext.unit.junit.VertxUnitRunner;
import io.vertx.rxjava.core.Vertx;
import org.junit.Rule;
import org.junit.Test;
import org.junit.runner.RunWith;
@RunWith(VertxUnitRunner.class)
public class LoadConfigurationTest {
@Rule
public RunTestOnContext runTestOnContextRule = new RunTestOnContext();
@Test
public void testConfigLoading_shouldFail_whenConfigDoesNotExist(TestContext context) {
// create an Async instance that controls the completion of the test
Async async = context.async();
// set non existing path
System.setProperty("vertx.config.path", "/non/existing/path/application.properties");
// take vertx instance and wrap it with rx-ified version
Vertx vertx = Vertx.newInstance(runTestOnContextRule.vertx());
vertx.rxDeployVerticle(LauncherVerticle.class.getName()).subscribe(s -> {
context.fail("Was expecting error but Verticle deployed successfully"); // failure
}, e -> {
async.complete(); // success
});
}
}
Also please note that you can take a Vertx instance from RunTestOnContext rule (as in the snippet above).

Good Zookeeper Hello world Program with Java client

I was trying to use ZooKeeper in our project. I could run the server and even test it using zkCli.sh - all good.
But I couldn't find a good tutorial on connecting to this server using Java! All I need in the Java API is a method:
public String getServiceURL ( String serviceName )
I tried https://cwiki.apache.org/confluence/display/ZOOKEEPER/Index --> not good for me.
http://zookeeper.apache.org/doc/trunk/javaExample.html: sort of OK, but I couldn't understand the concepts clearly! I feel it is not explained well.
Finally, this is the simplest and most basic program I came up with which will help you with ZooKeeper "Getting Started":
package core.framework.zookeeper;
import java.util.Date;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.Watcher.Event.KeeperState;
import org.apache.zookeeper.ZooDefs.Ids;
import org.apache.zookeeper.ZooKeeper;
public class ZkConnect {
private ZooKeeper zk;
private CountDownLatch connSignal = new CountDownLatch(1); // latch of 1 so connect() actually waits for SyncConnected
//host should be 127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002
public ZooKeeper connect(String host) throws Exception {
zk = new ZooKeeper(host, 3000, new Watcher() {
public void process(WatchedEvent event) {
if (event.getState() == KeeperState.SyncConnected) {
connSignal.countDown();
}
}
});
connSignal.await();
return zk;
}
public void close() throws InterruptedException {
zk.close();
}
public void createNode(String path, byte[] data) throws Exception
{
zk.create(path, data, Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
}
public void updateNode(String path, byte[] data) throws Exception
{
zk.setData(path, data, zk.exists(path, true).getVersion());
}
public void deleteNode(String path) throws Exception
{
zk.delete(path, zk.exists(path, true).getVersion());
}
public static void main (String args[]) throws Exception
{
ZkConnect connector = new ZkConnect();
ZooKeeper zk = connector.connect("54.169.132.0,52.74.51.0");
String newNode = "/deepakDate"+new Date();
connector.createNode(newNode, new Date().toString().getBytes());
List<String> zNodes = zk.getChildren("/", true);
for (String zNode: zNodes)
{
System.out.println("ChildrenNode " + zNode);
}
byte[] data = zk.getData(newNode, true, zk.exists(newNode, true));
System.out.println("GetData before setting");
for ( byte dataPoint : data)
{
System.out.print ((char)dataPoint);
}
System.out.println("GetData after setting");
connector.updateNode(newNode, "Modified data".getBytes());
data = zk.getData(newNode, true, zk.exists(newNode, true));
for ( byte dataPoint : data)
{
System.out.print ((char)dataPoint);
}
connector.deleteNode(newNode);
}
}
This post has almost all operations required to interact with Zookeeper.
https://www.tutorialspoint.com/zookeeper/zookeeper_api.htm
Create a ZNode with data
Delete a ZNode
Get a list of ZNodes (children)
Check whether a ZNode exists or not
Edit the content of a ZNode...
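Building on the ZkConnect class above, the getServiceURL method asked about in the question could be sketched roughly like this (the /services/<serviceName> znode layout is an assumption, not a ZooKeeper convention):
// add to ZkConnect: assumes each service registers its URL as the data of /services/<serviceName>
public String getServiceURL(String serviceName) throws Exception {
    byte[] data = zk.getData("/services/" + serviceName, false, null);
    return data == null ? null : new String(data, java.nio.charset.StandardCharsets.UTF_8);
}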
This blog post, Zookeeper Java API examples, includes some good examples if you are looking for Java examples to start with. ZooKeeper also provides a client API library (C and Java) that is very easy to use.
ZooKeeper is one of the best open-source servers and services for reliably coordinating distributed processes. ZooKeeper is a CP system (refer to the CAP theorem) that provides consistency and partition tolerance. Replication of ZooKeeper state across all the nodes makes it an eventually consistent distributed service.
This is about as simple as you can get. I am building a tool which will use ZK to lock files that are being processed (hence the class name):
package mypackage;
import java.io.IOException;
import java.util.List;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.Watcher;
public class ZooKeeperFileLock {
public static void main(String[] args) throws IOException, KeeperException, InterruptedException {
String zkConnString = "<zknode1>:2181,<zknode2>:2181,<zknode3>:2181";
ZooKeeperWatcher zkWatcher = new ZooKeeperWatcher();
ZooKeeper client = new ZooKeeper(zkConnString, 10000, zkWatcher);
List<String> zkNodes = client.getChildren("/", true);
for(String node : zkNodes) {
System.out.println(node);
}
}
public static class ZooKeeperWatcher implements Watcher {
@Override
public void process(WatchedEvent event) {
}
}
}
If you are on AWS, you can now create an internal ELB which supports redirection based on URI, which can really solve this problem with high availability already baked in.
