I have installed IntelliJ IDEA Community 2018.1 on Windows 7 Professional, running JDK 1.8.0_172.
I am running simple Apache Storm Java code from an Edureka tutorial.
Very simple and basic: a spout emits a few integers, and a bolt just doubles each number and re-emits it.
I am not running any local or remote ZooKeeper/Storm cluster; I am relying on the ZK/Storm instances that the code somehow spins up itself. I do not have any Storm directory locally. All I have is IntelliJ and a few dependency lines in pom.xml.
This is what I have in my Windows hosts file:
localhost sandbox.hortonworks.com sandbox-hdp.hortonworks.com sandbox-hdf.hortonworks.com
127.0.0.1 sandbox.hortonworks.com sandbox-hdp.hortonworks.com sandbox-hdf.hortonworks.com
When I run the Java program, I consistently get this error:
org.apache.storm.shade.org.apache.zookeeper.server.ServerCnxn$EndOfStreamException: Unable to read additional data from client sessionid 0x164ebfb3e3e000f, likely client has closed socket
Here are the steps I followed to write my code:
(1) Create a new Maven project in IntelliJ.
(2) Add a dependencies section to pom.xml so that the storm-core libraries are imported. Everything validates fine.
<dependencies>
    <dependency>
        <groupId>org.apache.storm</groupId>
        <artifactId>storm-core</artifactId>
        <version>1.1.1</version>
        <scope>compile</scope>
    </dependency>
</dependencies>
(3) Create a Java class for the Spout in IntelliJ.
import org.apache.storm.spout.SpoutOutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichSpout;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Values;
import java.util.Map;

public class IntegerSpout extends BaseRichSpout {
    private SpoutOutputCollector myspoutOutputCollector;
    private Integer index = 2;

    @Override
    public void open(Map map, TopologyContext topologyContext, SpoutOutputCollector spoutOutputCollector) {
        this.myspoutOutputCollector = spoutOutputCollector;
    }

    @Override
    public void nextTuple() {
        // Emit the integers 2..99 from the spout, one per nextTuple() call.
        if (index < 100) {
            System.out.println("Index is " + Integer.toString(index));
            this.myspoutOutputCollector.emit(new Values(index));
            index++;
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer outputFieldsDeclarer) {
        outputFieldsDeclarer.declare(new Fields("field"));
    }
}
(4) Feed these numbers from the Spout into a MultiplierBolt, which doubles each number and emits it onward. Simple and straightforward.
import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

public class MultiplierBolt extends BaseBasicBolt {
    @Override
    public void execute(Tuple tuple, BasicOutputCollector basicOutputCollector) {
        // Double the incoming number and emit it onward.
        Integer number = tuple.getInteger(0);
        number *= 2;
        basicOutputCollector.emit(new Values(number));
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer outputFieldsDeclarer) {
        outputFieldsDeclarer.declare(new Fields("field"));
    }
}
(5) Now write a Main class whose main() defines a topology, connects the spout to the bolt, and submits it for execution.
import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.topology.TopologyBuilder;

public class MainTopology {
    public static void main(String[] args) {
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("IntegerSpout", new IntegerSpout());
        builder.setBolt("MultiplierBolt", new MultiplierBolt()).shuffleGrouping("IntegerSpout");
        Config config = new Config();
        config.setDebug(true);
        LocalCluster localCluster = new LocalCluster();
        try {
            localCluster.submitTopology("HelloTopology", config, builder.createTopology());
            // The topology only gets one second to run before the finally
            // block shuts the local cluster (and its in-process ZK) down.
            Thread.sleep(1000);
        } catch (Exception e) {
            System.out.println("Exception Raised");
            e.printStackTrace();
        } finally {
            localCluster.shutdown();
        }
    }
}
That's it, folks. Now just compile and run. I mostly get exceptions and errors in my log:
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2000] WARN o.a.s.s.o.a.z.s.NIOServerCnxn - caught end of stream exception org.apache.storm.shade.org.apache.zookeeper.server.ServerCnxn$EndOfStreamException: Unable to read additional data from client sessionid 0x164ebf470410009, likely client has closed socket
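One thing I notice (just a guess on my part): LocalCluster spins up an in-process ZooKeeper (on port 2000, per the log), and my main() sleeps for only one second before shutdown() tears everything down, so the warning may simply be clients getting disconnected during teardown. A variant of the try block that lets the topology run longer:

try {
    localCluster.submitTopology("HelloTopology", config, builder.createTopology());
    // Let the topology actually process tuples before tearing it down;
    // 10 seconds is an arbitrary choice for this sketch.
    Thread.sleep(10000);
    localCluster.killTopology("HelloTopology");
} catch (Exception e) {
    e.printStackTrace();
} finally {
    localCluster.shutdown();
}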
Any pointers of help will be appreciated.
TIA.
Amit
I'm trying to migrate from Vert.x to Quarkus. In Vert.x, when I write message consumers (Kafka/AMQP etc.), I have to scale the number of verticles to maximize performance across multiple cores, i.e. vertical scaling. Is this possible in Quarkus? I see a similar question here, but it wasn't answered.
For example, with Kafka I might create a consumer inside a verticle and then scale that verticle, say, 10 times (that is, specify the number of instances in the deployment to be 10) after doing performance testing to determine that's the optimal number. My understanding is that by default 1 verticle = 1 event loop, and it does not scale across multiple cores.
I know that it's possible to use Vert.x verticles in Quarkus, but is there another way to scale things like the number of Kafka consumers across multiple cores?
I see that this type of scalability is configurable for things like Quarkus HTTP, but I can't find anything about message consumers.
Here's the Vert.x Verticle approach. Overall I'm very happy with it, but I wish there were better documentation on how to do this.
UPDATE: field injection doesn't work with this example, but constructor injection does.
Let's say I want to inject this:
@ApplicationScoped
public class CoffeeRepositoryService {
    public CoffeeRepositoryService() {
        System.out.println("Injection succeeded!");
    }
}
Here's my Verticle
package org.acme;

import javax.inject.Inject;

import io.smallrye.mutiny.Uni;
import io.smallrye.mutiny.vertx.core.AbstractVerticle;
import io.vertx.core.impl.logging.Logger;
import io.vertx.core.impl.logging.LoggerFactory;
import io.vertx.mutiny.core.eventbus.EventBus;
import io.vertx.mutiny.rabbitmq.RabbitMQClient;
import io.vertx.mutiny.rabbitmq.RabbitMQConsumer;
import io.vertx.rabbitmq.QueueOptions;
import io.vertx.rabbitmq.RabbitMQOptions;

public class RQVerticle extends AbstractVerticle {
    private final Logger LOGGER = LoggerFactory.getLogger(RQVerticle.class);

    // This doesn't work - returns null
    @Inject
    CoffeeRepositoryService coffeeRepositoryService;

    RQVerticle() {} // dummy constructor needed

    @Inject // constructor injection - this does work
    RQVerticle(CoffeeRepositoryService coffeeRepositoryService) {
        // Here coffeeRepositoryService is injected properly
    }

    @Override
    public Uni<Void> asyncStart() {
        LOGGER.info("Creating RabbitMQ Connection after Quarkus successful initialization");
        RabbitMQOptions config = new RabbitMQOptions();
        config.setUri("amqp://localhost:5672");
        RabbitMQClient client = RabbitMQClient.create(vertx, config);
        Uni<Void> clientResp = client.start();
        clientResp.subscribe()
            .with(asyncResult -> LOGGER.info("RabbitMQ successfully connected!"));
        return clientResp;
    }
}
Main class (injection doesn't work like this):
package org.acme;

import io.quarkus.runtime.Quarkus;
import io.quarkus.runtime.QuarkusApplication;
import io.quarkus.runtime.annotations.QuarkusMain;
import io.vertx.core.DeploymentOptions;
import io.vertx.mutiny.core.Vertx;

@QuarkusMain
public class Main {
    public static void main(String... args) {
        Quarkus.run(MyApp.class, args);
    }

    public static class MyApp implements QuarkusApplication {
        @Override
        public int run(String... args) throws Exception {
            var vertx = Vertx.vertx();
            System.out.println("Deployment Starting");
            DeploymentOptions options = new DeploymentOptions()
                .setInstances(2);
            vertx.deployVerticleAndAwait(RQVerticle::new, options);
            System.out.println("Deployment completed");
            Quarkus.waitForExit();
            return 0;
        }
    }
}
Main class with working injection, but it cannot deploy more than one instance:
package org.acme;

import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.event.Observes;

import io.quarkus.runtime.StartupEvent;
import io.vertx.core.DeploymentOptions;
import io.vertx.mutiny.core.Vertx;
import org.jboss.logging.Logger;

@ApplicationScoped
public class MainVerticles {
    private static final Logger LOGGER = Logger.getLogger(MainVerticles.class);

    public void init(@Observes StartupEvent e, Vertx vertx, RQVerticle verticle) {
        DeploymentOptions options = new DeploymentOptions()
            .setInstances(2);
        vertx.deployVerticle(verticle, options).await().indefinitely();
    }
}
Std Out - first main class looks good
2021-09-15 15:48:12,052 INFO [org.acm.RQVerticle] (vert.x-eventloop-thread-2) Creating RabbitMQ Connection after Quarkus successful initialization
2021-09-15 15:48:12,053 INFO [org.acm.RQVerticle] (vert.x-eventloop-thread-3) Creating RabbitMQ Connection after Quarkus successful initialization
Std Out - second main class
2021-09-22 15:48:11,986 ERROR [io.qua.run.Application] (Quarkus Main Thread) Failed to start application (with profile dev): java.lang.IllegalArgumentException: Can't specify > 1 instances for already created verticle
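An untested sketch of what might reconcile the two approaches: pass a Supplier to deployVerticle() instead of an already-created instance (the Supplier overload is what the first main class uses, and it allows instances > 1), and have the supplier resolve each verticle from CDI so injection still happens. This assumes RQVerticle is registered as a @Dependent-scoped CDI bean so each select() yields a fresh, injected instance:

package org.acme;

import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.event.Observes;
import javax.enterprise.inject.spi.CDI;

import io.quarkus.runtime.StartupEvent;
import io.vertx.core.DeploymentOptions;
import io.vertx.mutiny.core.Vertx;

@ApplicationScoped
public class MainVerticles {

    public void init(@Observes StartupEvent e, Vertx vertx) {
        DeploymentOptions options = new DeploymentOptions().setInstances(2);
        // The Supplier overload lets Vert.x create one verticle per instance;
        // resolving each from CDI keeps injection working (assumes RQVerticle
        // is a @Dependent bean).
        vertx.deployVerticle(() -> CDI.current().select(RQVerticle.class).get(), options)
             .await().indefinitely();
    }
}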
We are using Apache Camel 2.17.*. In order to use the maxPollRecords parameter, I am trying to upgrade to 2.18.0. After the upgrade, the consumer no longer seems to be recognized by the broker. Below is the sample consumer I tried to create. I can produce messages from the CLI to the topic, and a consumer created in the CLI consumes them, but the consumer created through Apache Camel does not.
Also, with the consumer-group describe CLI command, I see the consumer-id as blank if I run only the Apache Camel consumer instance. While I was running 2.17.5, the broker used to recognize the consumer and assign it to the partition. I can't find a working example.
Please help.
package com.test;
import org.apache.camel.CamelContext;
import org.apache.camel.Exchange;
import org.apache.camel.Message;
import org.apache.camel.Processor;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.properties.PropertiesComponent;
import org.apache.camel.impl.DefaultCamelContext;
public class CamelConsumer {
public static void main(String argv[]){
CamelContext camelContext = new DefaultCamelContext();
// Add route to send messages to Kafka
try {
camelContext.addRoutes(new RouteBuilder() {
public void configure() {
PropertiesComponent pc = getContext().getComponent("properties", PropertiesComponent.class);
pc.setLocation("classpath:application.properties");
System.out.println("About to start route: Kafka Server -> Log ");
from("kafka:{{consumer.topic}}?brokers={{kafka.host}}:{{kafka.port}}"
+ "&maxPollRecords={{consumer.maxPollRecords}}" + "&consumersCount={{consumer.consumersCount}}"
+ "&groupId={{consumer.group}}").routeId("FromKafka")
.process(new Processor() {
@Override
public void process(Exchange exchange) throws Exception {
Message message = exchange.getIn();
Object data = message.getBody();
System.out.println(data);
}
});
}
});
camelContext.start();
Thread.sleep(5 * 60 * 1000);
camelContext.stop();
} catch (Exception e) {
e.printStackTrace();
}
}
}
I do not get any exception either, and I cannot find any documentation related to this. Please help.
consumer.topic=test
kafka.host=localhost
kafka.port=9092
consumer.maxPollRecords=1
consumer.consumersCount=1
consumer.group=test
I could find working example code from 2.19* in the following repositories:
https://github.com/Talend/apache-camel/branches (forked branch)
https://github.com/apache/camel (actual Camel repository)
Finally it worked with version 2.21.5, and I had to bump the Apache Kafka Maven version to 1.0.0 from 0.9*.
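For reference, here is roughly what that dependency bump looks like in pom.xml (a sketch from my setup; the exact artifacts your project pulls in may differ, and the original pom is not shown above):

<dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-kafka</artifactId>
    <version>2.21.5</version>
</dependency>
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>1.0.0</version>
</dependency>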
I am trying to send OSC (Open Sound Control) to Figure 53's QLab. So far I have come up with this code.
import com.illposed.osc.transport.udp.OSCPortOut;
import com.illposed.osc.OSCMessage;
import java.io.IOException;
import java.net.InetAddress;
import java.net.UnknownHostException;
public class Controller {
public static void main(String[] args) throws UnknownHostException, IOException {
OSCPortOut sender = new OSCPortOut(InetAddress.getByName("10.67.192.113"), 53000);
OSCMessage msg = new OSCMessage("/cue/1");
try {
sender.send(msg);
} catch (Exception e) {
e.printStackTrace();
}
}
}
I have successfully added Illposed's JavaOSC library to my project, but it then says I need SLF4J. When I add slf4j-api-1.7.30.jar and run the above code with both libraries (SLF4J and JavaOSC), it says: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
You will need to use Maven to handle the dependencies JavaOSC needs.
Here's a good tutorial for Eclipse. Just use a simple project; you do not need to do archetype selection.
Then add the following at the end of your new project's pom.xml file, before </project> but after everything else:
<dependency>
    <groupId>com.illposed.osc</groupId>
    <artifactId>javaosc-core</artifactId>
    <version>0.7</version>
</dependency>
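The "Failed to load class org.slf4j.impl.StaticLoggerBinder" message just means slf4j-api is on the classpath with no binding behind it. Adding a binding alongside the API should silence it; slf4j-simple is one minimal option:

<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-simple</artifactId>
    <version>1.7.30</version>
</dependency>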
I have been inventing a way to work around the problem of adding consumers to a jetty endpoint (it does not allow multiple consumers). The way we do it in our company is to build our own router and a broadcasting endpoint that consumes from jetty and routes requests to underlying "subscriptions"; only one of them eventually processes the request. It kind of works, but it is not completely OK: recently, when updating to the latest Camel, we found that our custom-built component leaks memory, and in general I prefer built-in functionality over custom hacks.
I started investigating the Camel REST API and found it very nice, pretty much replacing our home-grown component apart from one thing: you cannot reconfigure it at runtime; you basically have to stop the context for this to work. Below I include my unit test with a happy path and the path that fails. Frankly, I think this is a bug, but if there is a legitimate way to achieve what I want, I'd like to hear sound advice:
package com.anydoby.camel;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.fail;
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;
import org.apache.commons.io.IOUtils;
import org.junit.Before;
import org.junit.Test;
/**
* Test tries to add/remove routes at runtime.
*/
public class RoutesTest {
private DefaultCamelContext ctx;
@Before
public void pre() throws Exception {
ctx = new DefaultCamelContext();
new RouteBuilder(ctx) {
@Override
public void configure() throws Exception {
restConfiguration("jetty").host("localhost").port(8080);
rest("/")
.get("/issues/{isin}").route().id("issues")
.process(e -> e.getOut().setBody("Here's your issue " + e.getIn().getHeader("isin"))).endRest()
.get("/listings").route().id("listings").process(e -> e.getOut().setBody("some listings"));
}
}.addRoutesToCamelContext(ctx);
ctx.start();
}
@Test
public void test() throws IOException {
{
InputStream stream = new URL("http://localhost:8080/issues/35").openStream();
assertEquals("Here's your issue 35", IOUtils.toString(stream));
}
{
InputStream stream = new URL("http://localhost:8080/listings").openStream();
assertEquals("some listings", IOUtils.toString(stream));
}
}
@Test
public void disableRoute() throws Exception {
ctx.stopRoute("issues");
ctx.removeRoute("issues");
try (InputStream stream = new URL("http://localhost:8080/issues/35").openStream()) {
fail();
} catch (Exception e) {
}
new RouteBuilder(ctx) {
@Override
public void configure() throws Exception {
rest().get("/issues/{isin}/{sedol}").route().id("issues")
.process(e -> e.getOut()
.setBody("Here's your issue " + e.getIn().getHeader("isin") + ":" + e.getIn().getHeader("sedol")))
.endRest();
}
}.addRoutesToCamelContext(ctx);
{
InputStream stream = new URL("http://localhost:8080/issues/35/65").openStream();
assertEquals("Here's your issue 35:65", IOUtils.toString(stream));
}
}
}
The disableRoute() test fails since I cannot add another consumer to an existing endpoint.
So my question is: is there a way to add a new URL mapping to a RESTful camel-jetty endpoint? If you do it during the first configuration it works fine, but when you later want to reconfigure one of the routes, the error is:
org.apache.camel.FailedToStartRouteException: Failed to start route because of Multiple consumers for the same endpoint is not allowed: jetty:http://localhost:8080/issues/%7Bisin%7D/%7Bsedol%7D?httpMethodRestrict=GET
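One direction I have been considering (an untested sketch, not a confirmed fix): evict the stale jetty endpoint from the context in addition to removing the route, since the old endpoint still registered in the context is what trips the multiple-consumers check:

// In disableRoute(), after stopping/removing the route and before
// adding the new RouteBuilder:
ctx.stopRoute("issues");
ctx.removeRoute("issues");
// removeEndpoints() takes a pattern; the pattern below is a guess and
// should be adjusted to the endpoint URI shown in the error message.
ctx.removeEndpoints("jetty:http://localhost:8080/issues/*");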
I was trying to use ZooKeeper in our project. I could run the server and even test it using zkCli.sh. All good.
But I couldn't find a good tutorial on connecting to this server using Java! All I need in the Java API is a method:
public String getServiceURL(String serviceName)
I tried https://cwiki.apache.org/confluence/display/ZOOKEEPER/Index, which was not good for me.
http://zookeeper.apache.org/doc/trunk/javaExample.html is sort of OK, but I couldn't understand the concepts clearly. I feel it is not explained well.
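To make the goal concrete, this is the shape of what I am after, assuming each service registers its URL as the data of a znode under /services/<serviceName> (that layout is my own convention for illustration, not something ZooKeeper prescribes):

import java.nio.charset.StandardCharsets;

import org.apache.zookeeper.ZooKeeper;

public class ServiceDirectory {
    private final ZooKeeper zk;

    public ServiceDirectory(ZooKeeper zk) {
        this.zk = zk; // an already-connected handle, e.g. from a helper like ZkConnect below
    }

    public String getServiceURL(String serviceName) throws Exception {
        // Read the payload of /services/<serviceName>; the path layout is my
        // own assumption, not a ZooKeeper-defined convention.
        byte[] data = zk.getData("/services/" + serviceName, false, null);
        return new String(data, StandardCharsets.UTF_8);
    }
}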
Finally, here is the simplest and most basic program I came up with, which will help you get started with ZooKeeper:
package core.framework.zookeeper;
import java.util.Date;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.Watcher.Event.KeeperState;
import org.apache.zookeeper.ZooDefs.Ids;
import org.apache.zookeeper.ZooKeeper;
public class ZkConnect {
private ZooKeeper zk;
private CountDownLatch connSignal = new CountDownLatch(1); // must be 1 so connect() actually blocks until SyncConnected
//host should be 127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002
public ZooKeeper connect(String host) throws Exception {
zk = new ZooKeeper(host, 3000, new Watcher() {
public void process(WatchedEvent event) {
if (event.getState() == KeeperState.SyncConnected) {
connSignal.countDown();
}
}
});
connSignal.await();
return zk;
}
public void close() throws InterruptedException {
zk.close();
}
public void createNode(String path, byte[] data) throws Exception
{
zk.create(path, data, Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
}
public void updateNode(String path, byte[] data) throws Exception
{
zk.setData(path, data, zk.exists(path, true).getVersion());
}
public void deleteNode(String path) throws Exception
{
zk.delete(path, zk.exists(path, true).getVersion());
}
public static void main (String args[]) throws Exception
{
ZkConnect connector = new ZkConnect();
ZooKeeper zk = connector.connect("54.169.132.0,52.74.51.0");
String newNode = "/deepakDate"+new Date();
connector.createNode(newNode, new Date().toString().getBytes());
List<String> zNodes = zk.getChildren("/", true);
for (String zNode: zNodes)
{
System.out.println("ChildrenNode " + zNode);
}
byte[] data = zk.getData(newNode, true, zk.exists(newNode, true));
System.out.println("GetData before setting");
for ( byte dataPoint : data)
{
System.out.print ((char)dataPoint);
}
System.out.println("GetData after setting");
connector.updateNode(newNode, "Modified data".getBytes());
data = zk.getData(newNode, true, zk.exists(newNode, true));
for ( byte dataPoint : data)
{
System.out.print ((char)dataPoint);
}
connector.deleteNode(newNode);
}
}
This post has almost all operations required to interact with Zookeeper.
https://www.tutorialspoint.com/zookeeper/zookeeper_api.htm
Create ZNode with data
Delete ZNode
Get list of ZNodes (children)
Check whether a ZNode exists
Edit the content of a ZNode...
This blog post, Zookeeper Java API examples, includes some good examples if you are looking for Java examples to start with. ZooKeeper also provides a client API library (C and Java) that is very easy to use.
ZooKeeper is one of the best open-source servers and services for reliably coordinating distributed processes. ZooKeeper is a CP system (see the CAP theorem) that provides consistency and partition tolerance. Replication of ZooKeeper state across all the nodes makes it an eventually consistent distributed service.
This is about as simple as you can get. I am building a tool which will use ZK to lock files that are being processed (hence the class name):
package mypackage;
import java.io.IOException;
import java.util.List;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.Watcher;
public class ZooKeeperFileLock {
public static void main(String[] args) throws IOException, KeeperException, InterruptedException {
String zkConnString = "<zknode1>:2181,<zknode2>:2181,<zknode3>:2181";
ZooKeeperWatcher zkWatcher = new ZooKeeperWatcher();
ZooKeeper client = new ZooKeeper(zkConnString, 10000, zkWatcher);
List<String> zkNodes = client.getChildren("/", true);
for(String node : zkNodes) {
System.out.println(node);
}
}
    public static class ZooKeeperWatcher implements Watcher {
        @Override
        public void process(WatchedEvent event) {
            // no-op watcher; connection events are ignored in this example
        }
    }
}
If you are on AWS, you can now create an internal ELB that supports redirection based on URI, which can really solve this problem with high availability already baked in.