I'm programmatically configuring Logback and trying to add custom headers and footers to the log files when they roll over. To do this I've extended PatternLayout with a custom class:
class LogbackAdapterLayout extends PatternLayout {
    @Override
    public String getPresentationHeader() {
        return "head";
    }

    @Override
    public String getPresentationFooter() {
        return "foot";
    }
}
But I end up with this:
head
08:49:52.464 [main] TRACE com.example.test - Testing 0
08:49:52.467 [main] TRACE com.example.test - Testing 1
08:49:52.467 [main] TRACE com.example.test - Testing 2
08:49:52.467 [main] TRACE com.example.test - Testing 3
08:49:52.467 [main] TRACE com.example.test - Testing 4
08:49:52.467 [main] TRACE com.example.test - Testing 5
08:49:52.467 [main] TRACE com.example.test - Testing 6
08:49:52.467 [main] TRACE com.example.test - Testing 7
08:49:52.467 [main] TRACE com.example.test - Testing 8
08:49:52.467 [main] TRACE com.example.test - Testing 9
head
foot
I'm using Logback 1.1.3 and configuring it programmatically: the LogbackAdapterLayout is wrapped in a LayoutWrappingEncoder, which is added to a RollingFileAppender (with a SizeBasedTriggeringPolicy and a FixedWindowRollingPolicy).
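For reference, this is roughly how I wire it up (a minimal sketch; the file name, pattern and max size here are placeholders rather than my real values):

import ch.qos.logback.classic.Logger;
import ch.qos.logback.classic.LoggerContext;
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.encoder.LayoutWrappingEncoder;
import ch.qos.logback.core.rolling.FixedWindowRollingPolicy;
import ch.qos.logback.core.rolling.RollingFileAppender;
import ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy;
import org.slf4j.LoggerFactory;

LoggerContext context = (LoggerContext) LoggerFactory.getILoggerFactory();

// custom layout providing the header/footer
LogbackAdapterLayout layout = new LogbackAdapterLayout();
layout.setContext(context);
layout.setPattern("%d{HH:mm:ss.SSS} [%thread] %-5level %logger - %msg%n");
layout.start();

LayoutWrappingEncoder<ILoggingEvent> encoder = new LayoutWrappingEncoder<>();
encoder.setContext(context);
encoder.setLayout(layout);
encoder.start();

RollingFileAppender<ILoggingEvent> appender = new RollingFileAppender<>();
appender.setContext(context);
appender.setFile("test.log");

FixedWindowRollingPolicy rollingPolicy = new FixedWindowRollingPolicy();
rollingPolicy.setContext(context);
rollingPolicy.setParent(appender);
rollingPolicy.setFileNamePattern("test.%i.log");
rollingPolicy.setMinIndex(1);
rollingPolicy.setMaxIndex(3);
rollingPolicy.start();

SizeBasedTriggeringPolicy<ILoggingEvent> triggeringPolicy = new SizeBasedTriggeringPolicy<>();
triggeringPolicy.setContext(context);
triggeringPolicy.setMaxFileSize("1KB");
triggeringPolicy.start();

appender.setRollingPolicy(rollingPolicy);
appender.setTriggeringPolicy(triggeringPolicy);
appender.setEncoder(encoder);
appender.start();

Logger root = context.getLogger(org.slf4j.Logger.ROOT_LOGGER_NAME);
root.addAppender(appender);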
Is there something I've configured incorrectly? Is there a way to stop the header from being written at the bottom of the log file?
EDIT: A little more information: the header at the top is added when the new log file is created, while the "header" at the bottom is added on rollover. If I change LogbackAdapterLayout and run again, the new header string appears at the bottom of the rolled-over log, with the old one unchanged at the top.
When writing the logs of a WAR into the Payara server logs, Payara is not able to determine the logger name.
When I look at it in the admin console, the Logger value is blank, and the Log Level is printed as SEVERE even though it is just an INFO-level log.
Following is the entry in the Payara server log:
Log Entry Detail
Timestamp : Mar 25, 2022 12:37:03.439
Log Level : SEVERE
Logger :
Name-Value Pairs : {levelValue=1000, timeMillis=1648211823439}
Record Number : 679
Message ID :
Complete Message : [http-thread-pool::http-listener-1(18)] INFO com.test.LogTest - com.test.LogTest
Sample code looks like the following:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LogTest {
    static Logger logger = LoggerFactory.getLogger(LogTest.class);

    public static void main(String[] args) {
        logger.info(logger.getName());
        logger.debug("debug");
        logger.info("info");
    }
}
Locally I was able to get the following output:
[main] INFO com.test.LogTest - com.test.LogTest
[main] DEBUG com.test.LogTest - debug
[main] INFO com.test.LogTest - info
I am using SimpleLogger (slf4j) as the logging library.
How can I get the Logger value set correctly in the Payara server logs for the messages that I log?
Working on a project where I migrated some methods from inline @QueryParam parameter lists to @BeanParam, I noticed a significant and unexplained increase in latency.
I am unsure which internals cause this, because it does not seem to be just the bean creation: an empty bean is as fast as no bean at all. Also, @QueryParam fields added to the bean increase latency proportionally, while the same definitely does not happen at the method level.
The difference between the two forms is huge in terms of latency; the @BeanParam version takes 20% longer despite doing essentially the same thing.
#Path("test1")
public Response test1(#QueryParam("instring") String one, #QueryParam("instring2") String two)
vs
public class Params {
#QueryParam("instring") String one;
#QueryParam("instring2") String two;
}
#Path("test2")
public Response test2(#BeanParam Params params)
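Fully spelled out, the two endpoints look roughly like this (a sketch; the resource class name, the @GET annotations and the method bodies are placeholders here, not the real project code):

import javax.ws.rs.BeanParam;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.Response;

@Path("/")
public class TestResource {

    // Query params bound directly at the method level
    @GET
    @Path("test1")
    public Response test1(@QueryParam("instring") String one,
                          @QueryParam("instring2") String two) {
        return Response.ok(one + " " + two).build();
    }

    // The same params bound through a bean
    @GET
    @Path("test2")
    public Response test2(@BeanParam Params params) {
        return Response.ok(params.one + " " + params.two).build();
    }

    public static class Params {
        @QueryParam("instring") String one;
        @QueryParam("instring2") String two;
    }
}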
I have created a minimal example here.
My goal is to find a workaround, since I really find @BeanParam very nice for organising groups of params, and it's not clear to me where the latency increase comes from; perhaps there is some kind of hint I can supply to eliminate the performance hit.
As @paul-samsotha mentioned, the additional latency you are encountering seems to be associated with the reflection used to associate query params with the bean fields after the bean is constructed.
Taking your example, I commented out all the Params bean fields initially and ran the tests multiple times, adding a field back into the Params bean on each run. The results:
No Fields: Commenting all fields out of Params class
15:14:53.664 [main] INFO org.example.ProofOfConcept - Warming up
15:15:35.562 [main] INFO org.example.ProofOfConcept - Warmed up
15:15:49.244 [main] INFO org.example.ProofOfConcept - Reqs1: 731.101 10000 in 13.68
15:16:02.968 [main] INFO org.example.ProofOfConcept - Reqs2: 728.7037 10000 in 13.72
15:16:17.016 [main] INFO org.example.ProofOfConcept - Req2/Req1 1.0032899
Single Field: First param only
15:16:57.160 [main] INFO org.example.ProofOfConcept - Warming up
15:17:26.052 [main] INFO org.example.ProofOfConcept - Warmed up
15:17:39.715 [main] INFO org.example.ProofOfConcept - Reqs1: 732.06445 10000 in 13.66
15:17:54.582 [main] INFO org.example.ProofOfConcept - Reqs2: 672.6759 10000 in 14.87
...
3 Fields: First 3 string params
15:20:33.870 [main] INFO org.example.ProofOfConcept - Warming up
15:21:01.859 [main] INFO org.example.ProofOfConcept - Warmed up
15:21:15.825 [main] INFO org.example.ProofOfConcept - Reqs1: 716.17847 10000 in 13.96
15:21:30.926 [main] INFO org.example.ProofOfConcept - Reqs2: 662.2078 10000 in 15.10
...
All params
15:23:55.339 [main] INFO org.example.ProofOfConcept - Warming up
15:24:25.717 [main] INFO org.example.ProofOfConcept - Warmed up
15:24:39.376 [main] INFO org.example.ProofOfConcept - Reqs1: 732.2789 10000 in 13.66
15:24:55.676 [main] INFO org.example.ProofOfConcept - Reqs2: 613.5346 10000 in 16.30
15:24:55.676 [main] INFO org.example.ProofOfConcept - Req2/Req1 1.1935413
The performance gradually gets worse as more Bean fields are added.
As a workaround you could update your Bean class as follows:
// @ToString is Lombok; nonNull is statically imported from java.util.Objects
@ToString
public static class Params {

    String instring;
    String inint;
    int inint2;
    int inint3;
    String inint4;
    String inint5;
    String inint6;
    String inint7;

    Params(@Context UriInfo allUri) {
        MultivaluedMap<String, String> params = allUri.getQueryParameters();
        instring = params.getFirst("instring");
        inint = params.getFirst("inint");
        inint2 = toInt(params.getFirst("inint2"));
        inint3 = toInt(params.getFirst("inint3"));
        inint4 = params.getFirst("inint4");
        inint5 = params.getFirst("inint5");
        inint6 = params.getFirst("inint6");
        inint7 = params.getFirst("inint7");
    }

    int toInt(String value) {
        return nonNull(value) ? Integer.parseInt(value) : -1;
    }
}
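The resource method itself can stay exactly as in your question, e.g. (the body here is just a placeholder):

@GET
@Path("test2")
public Response test2(@BeanParam Params params) {
    return Response.ok(params.toString()).build();
}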
This update should eliminate the reflection mapping stage and result in improved performance:
15:35:16.713 [main] INFO org.example.ProofOfConcept - Warming up
15:35:45.513 [main] INFO org.example.ProofOfConcept - Warmed up
15:35:59.493 [main] INFO org.example.ProofOfConcept - Reqs1: 715.5123 10000 in 13.98
15:36:13.536 [main] INFO org.example.ProofOfConcept - Reqs2: 712.0986 10000 in 14.04
15:36:13.536 [main] INFO org.example.ProofOfConcept - Req2/Req1 1.004794
I've set up a Riak server on Ubuntu.
http://192.168.0.102:8098/ping returns "OK".
I'm trying to connect to it remotely using the Riak Java client (2.1.1) with the following code. client.execute() never returns. I'm attaching the log as well.
import java.net.UnknownHostException;
import java.util.concurrent.ExecutionException;

import com.basho.riak.client.api.RiakClient;
import com.basho.riak.client.api.commands.kv.FetchValue;
import com.basho.riak.client.api.commands.kv.StoreValue;
import com.basho.riak.client.api.commands.kv.StoreValue.Response;
import com.basho.riak.client.core.query.Location;
import com.basho.riak.client.core.query.Namespace;

public class Testing {
    public static void main(String[] args) throws ExecutionException,
            InterruptedException, UnknownHostException {
        RiakClient client = RiakClient.newClient(8098, "192.168.0.102");

        // put some stuff
        Namespace ns = new Namespace("TestBucket");
        Location location = new Location(ns, "TestKey");
        String myData = "TestValue";
        StoreValue store = new StoreValue.Builder(myData)
                .withLocation(location).build();
        Response rv = client.execute(store); // << NEVER GETS PAST THIS
        System.out.println("write done");

        // get some stuff
        FetchValue fv = new FetchValue.Builder(location).build();
        FetchValue.Response response = client.execute(fv);
        String obj = response.getValue(String.class);
        System.out.println(obj);
        System.out.println("fetch done");
    }
}
The log on the console is:
17:19:40.841 [main] DEBUG i.n.u.i.l.InternalLoggerFactory - Using SLF4J as the default logging framework
17:19:40.865 [main] DEBUG i.n.c.MultithreadEventLoopGroup - -Dio.netty.eventLoopThreads: 16
17:19:40.891 [main] DEBUG i.n.util.internal.PlatformDependent0 - java.nio.Buffer.address: available
17:19:40.892 [main] DEBUG i.n.util.internal.PlatformDependent0 - sun.misc.Unsafe.theUnsafe: available
17:19:40.893 [main] DEBUG i.n.util.internal.PlatformDependent0 - sun.misc.Unsafe.copyMemory: available
17:19:40.894 [main] DEBUG i.n.util.internal.PlatformDependent0 - direct buffer constructor: available
17:19:40.894 [main] DEBUG i.n.util.internal.PlatformDependent0 - java.nio.Bits.unaligned: available, true
17:19:40.894 [main] DEBUG i.n.util.internal.PlatformDependent0 - java.nio.DirectByteBuffer.<init>(long, int): available
17:19:40.896 [main] DEBUG io.netty.util.internal.Cleaner0 - java.nio.ByteBuffer.cleaner(): available
17:19:40.896 [main] DEBUG i.n.util.internal.PlatformDependent - Platform: Windows
17:19:40.897 [main] DEBUG i.n.util.internal.PlatformDependent - Java version: 8
17:19:40.897 [main] DEBUG i.n.util.internal.PlatformDependent - -Dio.netty.noUnsafe: false
17:19:40.897 [main] DEBUG i.n.util.internal.PlatformDependent - sun.misc.Unsafe: available
17:19:40.898 [main] DEBUG i.n.util.internal.PlatformDependent - -Dio.netty.noJavassist: false
17:19:40.899 [main] DEBUG i.n.util.internal.PlatformDependent - Javassist: unavailable
17:19:40.899 [main] DEBUG i.n.util.internal.PlatformDependent - You don't have Javassist in your class path or you don't have enough permission to load dynamically generated classes. Please check the configuration for better performance.
17:19:40.899 [main] DEBUG i.n.util.internal.PlatformDependent - -Dio.netty.tmpdir: C:\Users\Rakesh\AppData\Local\Temp (java.io.tmpdir)
17:19:40.900 [main] DEBUG i.n.util.internal.PlatformDependent - -Dio.netty.bitMode: 32 (sun.arch.data.model)
17:19:40.900 [main] DEBUG i.n.util.internal.PlatformDependent - -Dio.netty.noPreferDirect: false
17:19:40.900 [main] DEBUG i.n.util.internal.PlatformDependent - io.netty.maxDirectMemory: 259522560 bytes
17:19:40.921 [main] DEBUG io.netty.channel.nio.NioEventLoop - -Dio.netty.noKeySetOptimization: false
17:19:40.921 [main] DEBUG io.netty.channel.nio.NioEventLoop - -Dio.netty.selectorAutoRebuildThreshold: 512
17:19:40.922 [main] DEBUG i.n.util.internal.PlatformDependent - org.jctools-core.MpscChunkedArrayQueue: available
17:19:41.039 [main] DEBUG io.netty.channel.DefaultChannelId - -Dio.netty.processId: 2924 (auto-detected)
17:19:41.041 [main] DEBUG io.netty.util.NetUtil - -Djava.net.preferIPv4Stack: false
17:19:41.041 [main] DEBUG io.netty.util.NetUtil - -Djava.net.preferIPv6Addresses: false
17:19:41.162 [main] DEBUG io.netty.util.NetUtil - Loopback interface: lo (Software Loopback Interface 1, 127.0.0.1)
17:19:41.163 [main] DEBUG io.netty.util.NetUtil - \proc\sys\net\core\somaxconn: 200 (non-existent)
17:19:41.321 [main] DEBUG io.netty.channel.DefaultChannelId - -Dio.netty.machineId: e4:b3:18:ff:fe:6c:52:eb (auto-detected)
17:19:41.321 [main] DEBUG i.n.util.internal.ThreadLocalRandom - -Dio.netty.initialSeedUniquifier: 0xb620b93d4006e503
17:19:41.333 [main] DEBUG io.netty.util.ResourceLeakDetector - -Dio.netty.leakDetection.level: simple
17:19:41.333 [main] DEBUG io.netty.util.ResourceLeakDetector - -Dio.netty.leakDetection.maxRecords: 4
17:19:41.355 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.numHeapArenas: 2
17:19:41.355 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.numDirectArenas: 2
17:19:41.355 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.pageSize: 8192
17:19:41.355 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxOrder: 11
17:19:41.355 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.chunkSize: 16777216
17:19:41.355 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.tinyCacheSize: 512
17:19:41.355 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.smallCacheSize: 256
17:19:41.355 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.normalCacheSize: 64
17:19:41.355 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxCachedBufferCapacity: 32768
17:19:41.355 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.cacheTrimInterval: 8192
17:19:41.364 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.allocator.type: pooled
17:19:41.365 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.threadLocalDirectBufferSize: 65536
17:19:41.365 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.maxThreadLocalCharBufferSize: 16384
17:19:41.406 [main] INFO com.basho.riak.client.core.RiakNode - RiakNode started; 192.168.0.102:8098
17:19:41.407 [main] INFO c.basho.riak.client.core.RiakCluster - RiakCluster is starting.
17:19:41.408 [main] INFO c.b.r.c.core.util.DefaultCharset - No desired charset found in system properties, the default charset 'windows-1252' will be used
17:19:41.408 [main] INFO c.b.r.c.core.util.DefaultCharset - Initializing client charset to: windows-1252
17:19:41.443 [main] DEBUG com.basho.riak.client.core.RiakNode - Attempting to acquire channel permit
17:19:41.445 [main] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.maxCapacityPerThread: 32768
17:19:41.445 [main] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.maxSharedCapacityFactor: 2
17:19:41.445 [main] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.linkCapacity: 16
17:19:41.445 [main] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.ratio: 8
17:19:41.447 [main] DEBUG com.basho.riak.client.core.RiakNode - Operation 28144878 being executed on RiakNode 192.168.0.102:8098
17:19:41.461 [nioEventLoopGroup-2-10] DEBUG io.netty.buffer.AbstractByteBuf - -Dio.netty.buffer.bytebuf.checkAccessible: true
17:19:41.463 [nioEventLoopGroup-2-10] DEBUG i.n.util.ResourceLeakDetectorFactory - Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector@1536e36
Call stack of suspended thread
Thread [main] (Suspended)
Unsafe.park(boolean, long) line: not available [native method]
LockSupport.park(Object) line: not available
CountDownLatch$Sync(AbstractQueuedSynchronizer).parkAndCheckInterrupt() line: not available
CountDownLatch$Sync(AbstractQueuedSynchronizer).doAcquireSharedInterruptibly(int) line: not available
CountDownLatch$Sync(AbstractQueuedSynchronizer).acquireSharedInterruptibly(int) line: not available
CountDownLatch.await() line: not available
StoreOperation(FutureOperation<T,U,S>).await() line: 387
GenericRiakCommand$1(CoreFutureAdapter<T2,S2,T,S>).await() line: 90
StoreValue(RiakCommand<T,S>).execute(RiakCluster) line: 92
RiakClient.execute(RiakCommand<T,S>) line: 355
Testing.main(String[]) line: 29
A simple code addition after the following line of your code should fix things for you:
Response rv = client.execute(store);
add:
client.shutdown();
to release that connection and continue execution.
Note that you will need to create a new connection for your next request against the database (since you closed the client), or use .executeAsync() in place of .execute().
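If you go the async route instead, a rough sketch would be (assuming the same store command as in your code; RiakFuture and its generic parameters are from the 2.x client, shown here from memory):

import com.basho.riak.client.core.RiakFuture;

// Fire the store without blocking on execute(); the generic parameters assume
// StoreValue's future type is <StoreValue.Response, Location>.
RiakFuture<StoreValue.Response, Location> future = client.executeAsync(store);
future.await();                      // or add a listener instead of blocking
if (future.isSuccess()) {
    System.out.println("write done");
}
client.shutdown();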
It appears you are expecting the Riak Java client to connect using the HTTP API. The Riak Java client only connects using Protocol Buffers; pointing it at the HTTP address and port will hang.
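In practice that means pointing the client at the Protocol Buffers port (8087 by default, assuming a stock Riak configuration) instead of 8098:

// Protocol Buffers port, not the HTTP port
RiakClient client = RiakClient.newClient(8087, "192.168.0.102");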
You have to use this; it works fine:
public static void main(String[] args) throws ExecutionException,
        InterruptedException, UnknownHostException {
    RiakClient client = RiakClient.newClient(8087, "192.168.0.65");

    // put some stuff
    Namespace ns = new Namespace("TestBucket");
    Location location = new Location(ns, "TestKey");
    String myData = "TestValue";
    StoreValue store = new StoreValue.Builder(myData)
            .withLocation(location).build();
    client.execute(store);
    System.out.println("write done");

    // get some stuff
    FetchValue fv = new FetchValue.Builder(location).build();
    FetchValue.Response response = client.execute(fv);
    String obj = response.getValue(String.class);
    System.out.println(obj);
    System.out.println("fetch done");
}
Hope this works for you too!
Using Scala and Spark, I'm trying to build a simple app. To avoid too many logs, I have set the log level to Level.ERROR, but all the logs still appear. Here is my code:
import org.apache.log4j.{BasicConfigurator, Level, Logger}
import org.apache.spark.{SparkConf, SparkContext}

/**
  * Created by hduser on 16/08/16.
  */
object ALS_Test {
  def main(command: Array[String]): Unit = {
    Logger.getRootLogger
    Logger.getLogger(this.getClass).setLevel(Level.ERROR)
    Logger.getLogger("org.spark_project").setLevel(Level.ERROR)
    BasicConfigurator.configure()

    val sparkConf = new SparkConf().setAppName("AppName").setMaster("local[4]")
    val sc = new SparkContext(sparkConf)

    println("test 123")
  }
}
The output when running still has many unwanted logs:
0 [main] INFO org.apache.spark.SparkContext - Running Spark version 2.0.0
163 [main] DEBUG org.apache.hadoop.metrics2.lib.MutableMetricsFactory - field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, about=, sampleName=Ops, type=DEFAULT, value=[Rate of successful kerberos logins and latency (milliseconds)], valueName=Time)
175 [main] DEBUG org.apache.hadoop.metrics2.lib.MutableMetricsFactory - field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, about=, sampleName=Ops, type=DEFAULT, value=[Rate of failed kerberos logins and latency (milliseconds)], valueName=Time)
176 [main] DEBUG org.apache.hadoop.metrics2.lib.MutableMetricsFactory - field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, about=, sampleName=Ops, type=DEFAULT, value=[GetGroups], valueName=Time)
177 [main] DEBUG org.apache.hadoop.metrics2.impl.MetricsSystemImpl - UgiMetrics, User and group related metrics
555 [main] DEBUG org.apache.hadoop.util.Shell - Failed to detect a valid hadoop home directory
java.io.IOException: HADOOP_HOME or hadoop.home.dir are not set.
I'm using Storm Flux (0.10.0) DSL to deploy the following topology (simplified to keep only the relevant parts of it):
---
name: "my-topology"

components:
  - id: "esConfig"
    className: "java.util.HashMap"
    configMethods:
      - name: "put"
        args:
          - "es.nodes"
          - "${es.nodes}"

bolts:
  - id: "es-bolt"
    className: "org.elasticsearch.storm.EsBolt"
    constructorArgs:
      - "myindex/docs"
      - ref: "esConfig"
    parallelism: 1

# ... other bolts, spouts and streams here
As you can see, one of the bolts I use is org.elasticsearch.storm.EsBolt, which has the following constructors (see the code):
public EsBolt(String target) { ... }
public EsBolt(String target, boolean writeAck) { ... }
public EsBolt(String target, Map configuration) { ... }
The last one should be called because I pass a String and a Map in the constructorArgs. But when I deploy the topology I get the following exception, as if Flux wasn't able to infer the right constructor from the types (String, Map):
storm jar mytopology-1.0.0.jar org.apache.storm.flux.Flux --local mytopology.yml --filter mytopology.properties
...
Version: 0.10.0
Parsing file: mytopology.yml
958 [main] INFO o.a.s.f.p.FluxParser - loading YAML from input stream...
965 [main] INFO o.a.s.f.p.FluxParser - Performing property substitution.
969 [main] INFO o.a.s.f.p.FluxParser - Not performing environment variable substitution.
1252 [main] INFO o.a.s.f.FluxBuilder - Detected DSL topology...
1431 [main] WARN o.a.s.f.FluxBuilder - Found multiple invokable constructors for class class org.elasticsearch.storm.EsBolt, given arguments [myindex/docs, {es.nodes=localhost}]. Using the last one found.
Exception in thread "main" java.lang.IllegalArgumentException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at org.apache.storm.flux.FluxBuilder.buildObject(FluxBuilder.java:291)
at org.apache.storm.flux.FluxBuilder.buildBolts(FluxBuilder.java:372)
at org.apache.storm.flux.FluxBuilder.buildTopology(FluxBuilder.java:88)
at org.apache.storm.flux.Flux.runCli(Flux.java:153)
at org.apache.storm.flux.Flux.main(Flux.java:98)
Any idea what could be happening? Here is how Storm Flux finds a compatible constructor; the magic is in the canInvokeWithArgs method.
These are Flux debug logs, where you see how FluxBuilder finds the most appropriate constructor:
Version: 0.10.0
Parsing file: mytopology.yml
559 [main] INFO o.a.s.f.p.FluxParser - loading YAML from input stream...
566 [main] INFO o.a.s.f.p.FluxParser - Performing property substitution.
569 [main] INFO o.a.s.f.p.FluxParser - Not performing environment variable substitution.
804 [main] INFO o.a.s.f.FluxBuilder - Detected DSL topology...
org.apache.logging.slf4j.Log4jLogger@3b69e7d1
1006 [main] DEBUG o.a.s.f.FluxBuilder - Found constructor arguments in definition: java.util.ArrayList
1006 [main] DEBUG o.a.s.f.FluxBuilder - Checking arguments for references.
1010 [main] DEBUG o.a.s.f.FluxBuilder - Target class: org.elasticsearch.storm.EsBolt
1011 [main] DEBUG o.a.s.f.FluxBuilder - found constructor with same number of args..
1011 [main] DEBUG o.a.s.f.FluxBuilder - Comparing parameter class class java.lang.String to object class class java.lang.String to see if assignment is possible.
1011 [main] DEBUG o.a.s.f.FluxBuilder - Yes, they are the same class.
1012 [main] DEBUG o.a.s.f.FluxBuilder - ** invokable --> true
1012 [main] DEBUG o.a.s.f.FluxBuilder - found constructor with same number of args..
1012 [main] DEBUG o.a.s.f.FluxBuilder - Comparing parameter class class java.lang.String to object class class java.lang.String to see if assignment is possible.
1012 [main] DEBUG o.a.s.f.FluxBuilder - Yes, they are the same class.
1012 [main] DEBUG o.a.s.f.FluxBuilder - ** invokable --> true
1012 [main] DEBUG o.a.s.f.FluxBuilder - Skipping constructor with wrong number of arguments.
1012 [main] WARN o.a.s.f.FluxBuilder - Found multiple invokable constructors for class class org.elasticsearch.storm.EsBolt, given arguments [myindex/docs, {es.nodes=localhost}]. Using the last one found.
1014 [main] DEBUG o.a.s.f.FluxBuilder - Found something seemingly compatible, attempting invocation...
1044 [main] DEBUG o.a.s.f.FluxBuilder - Comparing parameter class class java.lang.String to object class class java.lang.String to see if assignment is possible.
1044 [main] DEBUG o.a.s.f.FluxBuilder - They are the same class.
1044 [main] DEBUG o.a.s.f.FluxBuilder - Comparing parameter class boolean to object class class java.util.HashMap to see if assignment is possible.
Exception in thread "main" java.lang.IllegalArgumentException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at org.apache.storm.flux.FluxBuilder.buildObject(FluxBuilder.java:291)
...
This issue was fixed on Nov 18, 2015.
See this: https://github.com/apache/storm/commit/69b9cf581fd977f6c28b3a78a116deddadc44014
And the next version of Storm with this fix should be released within a month.
As a crappy workaround, I've finally extended EsBolt to expose only the constructors I need and avoid collisions.
package com.mypackage;

import java.util.Map;

import org.elasticsearch.storm.EsBolt;

public class EsBoltWrapper extends EsBolt {

    public EsBoltWrapper(String target) {
        super(target);
    }

    public EsBoltWrapper(String target, Map configuration) {
        super(target, configuration);
    }
}
Now my topology looks like this:
bolts:
  - id: "es-bolt"
    className: "com.mypackage.EsBoltWrapper" # THE NEW CLASS
    constructorArgs:
      - "myindex/docs"
      - ref: "esConfig"
    parallelism: 1
It seems to be a bug in Storm Flux.