I am getting an error when using flume - java

I am using the configuration below:
ClouderaTwitterAgent.sources = Twitter
ClouderaTwitterAgent.channels = MemChannel
ClouderaTwitterAgent.sinks = HDFS
ClouderaTwitterAgent.sources.Twitter.type = com.cloudera.flume.source.TwitterSource
ClouderaTwitterAgent.sources.Twitter.channels = MemChannel
ClouderaTwitterAgent.sources.Twitter.consumerKey = xxxxxxxxxxxxx
ClouderaTwitterAgent.sources.Twitter.consumerSecret = xxxxxxxxxxxxx
ClouderaTwitterAgent.sources.Twitter.accessToken = xxxxxxxxxxxxxxxx
ClouderaTwitterAgent.sources.Twitter.accessTokenSecret = xxxxxxxxxxxxxx
ClouderaTwitterAgent.sources.Twitter.keywords = Sully
ClouderaTwitterAgent.sinks.HDFS.channel = MemChannel
ClouderaTwitterAgent.sinks.HDFS.type = hdfs
ClouderaTwitterAgent.sinks.HDFS.hdfs.path = hdfs://localhost:9000/user/tweets
ClouderaTwitterAgent.sinks.HDFS.hdfs.fileType = DataStream
ClouderaTwitterAgent.sinks.HDFS.hdfs.writeFormat = Text
ClouderaTwitterAgent.sinks.HDFS.hdfs.batchSize = 1000
ClouderaTwitterAgent.sinks.HDFS.hdfs.rollSize = 0
ClouderaTwitterAgent.sinks.HDFS.hdfs.rollCount = 10000
ClouderaTwitterAgent.channels.MemChannel.type = memory
ClouderaTwitterAgent.channels.MemChannel.capacity = 10000
ClouderaTwitterAgent.channels.MemChannel.transactionCapacity = 100
and this is the command that I am using to run Flume:
`bin/flume-ng agent --conf ./conf/ -f conf/flume-cloudera.conf -Dflume.root.logger=DEBUG,console -n ClouderaTwitterAgent`
and this is the error that I am getting:
2016-09-20 14:53:14,245 (Twitter4J Async Dispatcher[0]) [DEBUG - com.cloudera.flume.source.TwitterSource$1.onStatus(TwitterSource.java:121)] tweet arrived
2016-09-20 14:53:16,073 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.BucketWriter.open(BucketWriter.java:234)] Creating hdfs://localhost:9000/user/tweets/FlumeData.1474363316543.tmp
2016-09-20 14:53:16,113 (SinkRunner-PollingRunner-DefaultSinkProcessor) [ERROR - org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:459)] process failed
java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:131)
at org.apache.hadoop.security.Groups.<init>(Groups.java:64)
at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:240)
at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:255)
at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:232)
at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:718)
at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:703)
at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:605)
at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2554)
at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2546)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2412)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:243)
at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:235)
at org.apache.flume.sink.hdfs.BucketWriter$9$1.run(BucketWriter.java:679)
at org.apache.flume.auth.SimpleAuthenticator.execute(SimpleAuthenticator.java:50)
at org.apache.flume.sink.hdfs.BucketWriter$9.call(BucketWriter.java:676)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
... 21 more
Caused by: java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.security.JniBasedUnixGroupsMapping
at org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.<init>(JniBasedUnixGroupsMappingWithFallback.java:38)
... 26 more
2016-09-20 14:53:16,121 (SinkRunner-PollingRunner-DefaultSinkProcessor) [ERROR - org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:160)] Unable to deliver event. Exception follows.
Can anyone please help me? I am new to this; I got this configuration from the internet and followed every instruction.
I have also searched for a solution to this problem.
I will list what I have done:
1. I checked my system clock.
2. I removed the HDFS folder and then created a new one.
3. I formatted the NameNode.
4. I restarted the agent several times.
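The class that fails to initialize, org.apache.hadoop.security.JniBasedUnixGroupsMapping, usually points to the Hadoop client jars or native libraries visible to Flume rather than to the Flume configuration itself. As a quick sanity check (not a fix), here is a minimal standalone Java sketch that uses the same NameNode URI as the sink configuration; the class name HdfsSmokeTest is just for illustration, and it should be run with the same Hadoop jars that Flume sees (for example, whatever FLUME_CLASSPATH points to in flume-env.sh):

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsSmokeTest {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Same NameNode URI as in the Flume sink configuration
        FileSystem fs = FileSystem.get(URI.create("hdfs://localhost:9000"), conf);
        Path dir = new Path("/user/tweets");
        System.out.println("Directory exists: " + fs.exists(dir));
        // Try writing a small test file with the same client libraries Flume uses
        try (FSDataOutputStream out = fs.create(new Path(dir, "flume-smoke-test.txt"), true)) {
            out.writeUTF("hello from the flume classpath check");
        }
        System.out.println("Write succeeded");
        fs.close();
    }
}

If this program fails with the same NoClassDefFoundError, the problem is in the Hadoop libraries on the JVM's classpath, not in the agent configuration.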

Related

problem when creating audio context in LWJGL

When I try to create an audio context in LWJGL 3.3.1, I get this error:
[ALSOFT] (EE) Failed to set real-time priority for thread: Operation not permitted (1)
Here is the code:
// Query the name of the default device and open it
String defaultDeviceName = ALC10.alcGetString(0, ALC10.ALC_DEFAULT_DEVICE_SPECIFIER);
audioDevice = ALC10.alcOpenDevice(defaultDeviceName);
// Create a context on the opened device and make it current
audioContext = ALC10.alcCreateContext(audioDevice, (IntBuffer) null);
ALC10.alcMakeContextCurrent(audioContext);
// Initialize the ALC and AL capability sets
ALCCapabilities alcCapabilities = ALC.createCapabilities(audioDevice);
ALCapabilities alCapabilities = AL.createCapabilities(alcCapabilities);
I would be grateful for any advice.
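A minimal, self-contained sketch (assuming LWJGL 3.3.x; the class name OpenAlInitCheck is only for illustration) that checks each handle and the ALC error state during initialization. The [ALSOFT] (EE) Failed to set real-time priority message comes from OpenAL Soft itself and is usually non-fatal, so verifying the handles helps separate that message from an actual initialization failure:

import java.nio.ByteBuffer;
import java.nio.IntBuffer;

import org.lwjgl.openal.AL;
import org.lwjgl.openal.ALC;
import org.lwjgl.openal.ALC10;
import org.lwjgl.openal.ALCCapabilities;
import org.lwjgl.system.MemoryUtil;

public class OpenAlInitCheck {
    public static void main(String[] args) {
        // Open the default device (a null specifier selects the default)
        long device = ALC10.alcOpenDevice((ByteBuffer) null);
        if (device == MemoryUtil.NULL) {
            throw new IllegalStateException("alcOpenDevice failed: no audio device available");
        }
        long context = ALC10.alcCreateContext(device, (IntBuffer) null);
        if (context == MemoryUtil.NULL || !ALC10.alcMakeContextCurrent(context)) {
            throw new IllegalStateException("Could not create or activate the ALC context");
        }
        // Report any pending ALC error before creating the capability sets
        int err = ALC10.alcGetError(device);
        if (err != ALC10.ALC_NO_ERROR) {
            System.err.println("ALC error during initialization: " + err);
        }
        ALCCapabilities alcCaps = ALC.createCapabilities(device);
        AL.createCapabilities(alcCaps);
        System.out.println("OpenAL context initialized");
    }
}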

How to increase Dataflow read parallelism from Cassandra

I am trying to export a lot of data (2 TB, about 30 billion rows) from Cassandra to BigQuery. All my infrastructure is on GCP. My Cassandra cluster has 4 nodes (4 vCPUs, 26 GB memory, 2000 GB PD (HDD) each). There is one seed node in the cluster. I need to transform my data before writing to BQ, so I am using Dataflow. The worker type is n1-highmem-2. Workers and Cassandra instances are in the same zone, europe-west1-c. My limits for Cassandra:
The part of my pipeline code responsible for the read transform is located here.
Autoscaling
The problem is that when I don't set --numWorkers, autoscaling sets the number of workers like this (2 workers on average):
Load balancing
When I set --numWorkers=15, the read rate doesn't increase and only 2 workers communicate with Cassandra (I can tell from iftop, and only these workers have ~60% CPU load).
At the same time, the Cassandra nodes don't have a lot of load (CPU usage 20-30%). Network and disk usage of the seed node is about 2 times higher than on the others, but not too high, I think:
And here it is for a non-seed node:
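For reference, the worker count can also be pinned programmatically instead of through the --numWorkers flag; a small sketch using the standard Dataflow pipeline options (the class name and values are only illustrative):

import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class PipelineOptionsSketch {
    public static void main(String[] args) {
        DataflowPipelineOptions options =
                PipelineOptionsFactory.fromArgs(args).withValidation().as(DataflowPipelineOptions.class);
        // Equivalent to passing --numWorkers / --maxNumWorkers on the command line
        options.setNumWorkers(15);
        options.setMaxNumWorkers(40);
        Pipeline p = Pipeline.create(options);
        // ... attach the CassandraIO read and the rest of the pipeline here ...
    }
}

Note that this only controls how many workers exist; how many of them actually read from Cassandra is governed by the number of splits discussed below.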
Pipeline launch warnings
I get some warnings when the pipeline is launching:
WARNING: Size estimation of the source failed:
org.apache.beam.sdk.io.cassandra.CassandraIO$CassandraSource@7569ea63
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /10.132.9.101:9042 (com.datastax.driver.core.exceptions.TransportException: [/10.132.9.101:9042] Cannot connect), /10.132.9.102:9042 (com.datastax.driver.core.exceptions.TransportException: [/10.132.9.102:9042] Cannot connect), /10.132.9.103:9042 (com.datastax.driver.core.exceptions.TransportException: [/10.132.9.103:9042] Cannot connect), /10.132.9.104:9042 [only showing errors of first 3 hosts, use getErrors() for more details])
My Cassandra cluster is in a GCE local network, and it seems that some queries are made from my local machine and cannot reach the cluster (I am launching the pipeline with the Dataflow Eclipse plugin as described here). These queries are about size estimation of the tables. Can I specify the size estimation by hand, or launch the pipeline from a GCE instance? Or can I just ignore these warnings? Does it affect the read rate?
I've tried launching the pipeline from a GCE VM. There is no longer a problem with connectivity. I don't have varchar columns in my tables, but I still get warnings like this (no codec in the DataStax driver for [varchar <-> java.lang.Long]):
WARNING: Can't estimate the size
com.datastax.driver.core.exceptions.CodecNotFoundException: Codec not found for requested operation: [varchar <-> java.lang.Long]
at com.datastax.driver.core.CodecRegistry.notFound(CodecRegistry.java:741)
at com.datastax.driver.core.CodecRegistry.createCodec(CodecRegistry.java:588)
at com.datastax.driver.core.CodecRegistry.access$500(CodecRegistry.java:137)
at com.datastax.driver.core.CodecRegistry$TypeCodecCacheLoader.load(CodecRegistry.java:246)
at com.datastax.driver.core.CodecRegistry$TypeCodecCacheLoader.load(CodecRegistry.java:232)
at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3628)
at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2336)
at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2295)
at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2208)
at com.google.common.cache.LocalCache.get(LocalCache.java:4053)
at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:4057)
at com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4986)
at com.datastax.driver.core.CodecRegistry.lookupCodec(CodecRegistry.java:522)
at com.datastax.driver.core.CodecRegistry.codecFor(CodecRegistry.java:485)
at com.datastax.driver.core.CodecRegistry.codecFor(CodecRegistry.java:467)
at com.datastax.driver.core.AbstractGettableByIndexData.codecFor(AbstractGettableByIndexData.java:69)
at com.datastax.driver.core.AbstractGettableByIndexData.getLong(AbstractGettableByIndexData.java:152)
at com.datastax.driver.core.AbstractGettableData.getLong(AbstractGettableData.java:26)
at com.datastax.driver.core.AbstractGettableData.getLong(AbstractGettableData.java:95)
at org.apache.beam.sdk.io.cassandra.CassandraServiceImpl.getTokenRanges(CassandraServiceImpl.java:279)
at org.apache.beam.sdk.io.cassandra.CassandraServiceImpl.getEstimatedSizeBytes(CassandraServiceImpl.java:135)
at org.apache.beam.sdk.io.cassandra.CassandraIO$CassandraSource.getEstimatedSizeBytes(CassandraIO.java:308)
at org.apache.beam.runners.direct.BoundedReadEvaluatorFactory$BoundedReadEvaluator.startDynamicSplitThread(BoundedReadEvaluatorFactory.java:166)
at org.apache.beam.runners.direct.BoundedReadEvaluatorFactory$BoundedReadEvaluator.processElement(BoundedReadEvaluatorFactory.java:142)
at org.apache.beam.runners.direct.TransformExecutor.processElements(TransformExecutor.java:146)
at org.apache.beam.runners.direct.TransformExecutor.run(TransformExecutor.java:110)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Pipeline read code
// Read data from Cassandra table
PCollection<Model> pcollection = p.apply(CassandraIO.<Model>read()
        .withHosts(Arrays.asList("10.10.10.101", "10.10.10.102", "10.10.10.103", "10.10.10.104")).withPort(9042)
        .withKeyspace(keyspaceName).withTable(tableName)
        .withEntity(Model.class).withCoder(SerializableCoder.of(Model.class))
        .withConsistencyLevel(CASSA_CONSISTENCY_LEVEL));
// Transform pcollection to KV PCollection by rowName
PCollection<KV<Long, Model>> pcollection_by_rowName = pcollection
        .apply(ParDo.of(new DoFn<Model, KV<Long, Model>>() {
            @ProcessElement
            public void processElement(ProcessContext c) {
                c.output(KV.of(c.element().rowName, c.element()));
            }
        }));
Number of splits (Stackdriver log)
W Number of splits is less than 0 (0), fallback to 1
I Number of splits is 1
W Number of splits is less than 0 (0), fallback to 1
I Number of splits is 1
W Number of splits is less than 0 (0), fallback to 1
I Number of splits is 1
What I've tried
No effect:
set read consistency level to ONE
nodetool setstreamthroughput 1000, nodetool setinterdcstreamthroughput 1000
increase Cassandra read concurrency (in cassandra.yaml): concurrent_reads: 32
setting different numbers of workers (1-40).
Some effect:
1. I've set numSplits = 10 as @jkff proposed. Now I can see in the logs:
I Murmur3Partitioner detected, splitting
W Can't estimate the size
W Can't estimate the size
W Number of splits is less than 0 (0), fallback to 10
I Number of splits is 10
W Number of splits is less than 0 (0), fallback to 10
I Number of splits is 10
I Splitting source org.apache.beam.sdk.io.cassandra.CassandraIO$CassandraSource@6d83ee93 produced 10 bundles with total serialized response size 20799
I Splitting source org.apache.beam.sdk.io.cassandra.CassandraIO$CassandraSource@25d02f5c produced 10 bundles with total serialized response size 19359
I Splitting source [0, 1) produced 1 bundles with total serialized response size 1091
I Murmur3Partitioner detected, splitting
W Can't estimate the size
I Splitting source [0, 0) produced 0 bundles with total serialized response size 76
W Number of splits is less than 0 (0), fallback to 10
I Number of splits is 10
I Splitting source org.apache.beam.sdk.io.cassandra.CassandraIO$CassandraSource@2661dcf3 produced 10 bundles with total serialized response size 18527
But I've got another exception:
java.io.IOException: Failed to start reading from source: org.apache.beam.sdk.io.cassandra.Cassandra...
(5d6339652002918d): java.io.IOException: Failed to start reading from source: org.apache.beam.sdk.io.cassandra.CassandraIO$CassandraSource@5f18c296
at com.google.cloud.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.start(WorkerCustomSources.java:582)
at com.google.cloud.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.start(ReadOperation.java:347)
at com.google.cloud.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:183)
at com.google.cloud.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:148)
at com.google.cloud.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:68)
at com.google.cloud.dataflow.worker.DataflowWorker.executeWork(DataflowWorker.java:336)
at com.google.cloud.dataflow.worker.DataflowWorker.doWork(DataflowWorker.java:294)
at com.google.cloud.dataflow.worker.DataflowWorker.getAndPerformWork(DataflowWorker.java:244)
at com.google.cloud.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:135)
at com.google.cloud.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:115)
at com.google.cloud.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:102)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: com.datastax.driver.core.exceptions.SyntaxError: line 1:53 mismatched character 'p' expecting '$'
at com.datastax.driver.core.exceptions.SyntaxError.copy(SyntaxError.java:58)
at com.datastax.driver.core.exceptions.SyntaxError.copy(SyntaxError.java:24)
at com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:245)
at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:68)
at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:43)
at org.apache.beam.sdk.io.cassandra.CassandraServiceImpl$CassandraReaderImpl.start(CassandraServiceImpl.java:80)
at com.google.cloud.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.start(WorkerCustomSources.java:579)
... 14 more
Caused by: com.datastax.driver.core.exceptions.SyntaxError: line 1:53 mismatched character 'p' expecting '$'
at com.datastax.driver.core.Responses$Error.asException(Responses.java:144)
at com.datastax.driver.core.DefaultResultSetFuture.onSet(DefaultResultSetFuture.java:179)
at com.datastax.driver.core.RequestHandler.setFinalResult(RequestHandler.java:186)
at com.datastax.driver.core.RequestHandler.access$2500(RequestHandler.java:50)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution.setFinalResult(RequestHandler.java:817)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution.onSet(RequestHandler.java:651)
at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:1077)
at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:1000)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:349)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:341)
at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:349)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:341)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:349)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:341)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:349)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:341)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:349)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:129)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:642)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:565)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:479)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:441)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
... 1 more
Maybe there is a mistake: CassandraServiceImpl.java#L220
And this statement looks like a typo: CassandraServiceImpl.java#L207
Changes I've made to the CassandraIO code
As @jkff proposed, I've changed CassandraIO in the way I needed:
@VisibleForTesting
protected List<BoundedSource<T>> split(CassandraIO.Read<T> spec,
                                       long desiredBundleSizeBytes,
                                       long estimatedSizeBytes) {
  long numSplits = 1;
  List<BoundedSource<T>> sourceList = new ArrayList<>();
  if (desiredBundleSizeBytes > 0) {
    numSplits = estimatedSizeBytes / desiredBundleSizeBytes;
  }
  if (numSplits <= 0) {
    LOG.warn("Number of splits is less than 0 ({}), fallback to 10", numSplits);
    numSplits = 10;
  }
  LOG.info("Number of splits is {}", numSplits);

  Long startRange = MIN_TOKEN;
  Long endRange = MAX_TOKEN;
  Long startToken, endToken;

  String pk = "$pk";
  switch (spec.table()) {
    case "table1":
      pk = "table1_pk";
      break;
    case "table2":
    case "table3":
      pk = "table23_pk";
      break;
  }

  endToken = startRange;
  Long incrementValue = endRange / numSplits - startRange / numSplits;
  String splitQuery;
  if (numSplits == 1) {
    // we have a unique split
    splitQuery = QueryBuilder.select().from(spec.keyspace(), spec.table()).toString();
    sourceList.add(new CassandraIO.CassandraSource<T>(spec, splitQuery));
  } else {
    // we have more than one split
    for (int i = 0; i < numSplits; i++) {
      startToken = endToken;
      endToken = startToken + incrementValue;
      Select.Where builder = QueryBuilder.select().from(spec.keyspace(), spec.table()).where();
      if (i > 0) {
        builder = builder.and(QueryBuilder.gte("token(" + pk + ")", startToken));
      }
      if (i < (numSplits - 1)) {
        builder = builder.and(QueryBuilder.lt("token(" + pk + ")", endToken));
      }
      sourceList.add(new CassandraIO.CassandraSource(spec, builder.toString()));
    }
  }
  return sourceList;
}
I think this should be classified as a bug in CassandraIO. I filed BEAM-3424. You can try building your own version of Beam with that default of 1 changed to 100 or something like that, while this issue is being fixed.
I also filed BEAM-3425 for the bug during size estimation.

Couchbase + Scala + Java SDK : IndexOutOfBounds

I am relatively new to Scala and to Couchbase, but I need to learn both fast. Recently, while trying to run a sample application using the Java Couchbase SDK from Scala, I ran into the following problem.
[cb-core-3-2] WARN com.couchbase.client.core.CouchbaseCore - Exception while Handling Request Events RequestEvent{request=null}
java.lang.IndexOutOfBoundsException: Index: 1854, Size: 0
at java.util.ArrayList.rangeCheck(ArrayList.java:653)
at java.util.ArrayList.get(ArrayList.java:429)
at com.couchbase.client.core.config.DefaultCouchbaseBucketConfig.nodeIndexForMaster(DefaultCouchbaseBucketConfig.java:135)
at com.couchbase.client.core.node.locate.KeyValueLocator.calculateNodeId(KeyValueLocator.java:165)
at com.couchbase.client.core.node.locate.KeyValueLocator.locateForCouchbaseBucket(KeyValueLocator.java:124)
at com.couchbase.client.core.node.locate.KeyValueLocator.locateAndDispatch(KeyValueLocator.java:84)
at com.couchbase.client.core.RequestHandler.dispatchRequest(RequestHandler.java:219)
at com.couchbase.client.core.RequestHandler.onEvent(RequestHandler.java:176)
at com.couchbase.client.core.RequestHandler.onEvent(RequestHandler.java:71)
at com.couchbase.client.deps.com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:129)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at com.couchbase.client.deps.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:745)
[error] (run-main-0) java.lang.RuntimeException: java.util.concurrent.TimeoutException
java.lang.RuntimeException: java.util.concurrent.TimeoutException
at com.couchbase.client.java.util.Blocking.blockForSingle(Blocking.java:71)
at com.couchbase.client.java.CouchbaseBucket.upsert(CouchbaseBucket.java:354)
at com.couchbase.client.java.CouchbaseBucket.upsert(CouchbaseBucket.java:349)
at App$.main(Application.scala:28)
at App.main(Application.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
Caused by: java.util.concurrent.TimeoutException
at com.couchbase.client.java.util.Blocking.blockForSingle(Blocking.java:71)
at com.couchbase.client.java.CouchbaseBucket.upsert(CouchbaseBucket.java:354)
at com.couchbase.client.java.CouchbaseBucket.upsert(CouchbaseBucket.java:349)
at App$.main(Application.scala:28)
at App.main(Application.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
[trace] Stack trace suppressed: run last compile:run for the full output.
[cb-core-3-1] WARN com.couchbase.client.core.CouchbaseCore - Exception while Handling Response Events null
java.lang.InterruptedException
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2048)
at com.couchbase.client.deps.com.lmax.disruptor.BlockingWaitStrategy.waitFor(BlockingWaitStrategy.java:45)
at com.couchbase.client.deps.com.lmax.disruptor.ProcessingSequenceBarrier.waitFor(ProcessingSequenceBarrier.java:56)
at com.couchbase.client.deps.com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:124)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at com.couchbase.client.deps.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:745)
[cb-core-3-2] WARN com.couchbase.client.core.CouchbaseCore - Exception while Handling Request Events RequestEvent{request=null}
java.lang.InterruptedException
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2048)
at com.couchbase.client.deps.com.lmax.disruptor.BlockingWaitStrategy.waitFor(BlockingWaitStrategy.java:45)
at com.couchbase.client.deps.com.lmax.disruptor.ProcessingSequenceBarrier.waitFor(ProcessingSequenceBarrier.java:56)
at com.couchbase.client.deps.com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:124)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at com.couchbase.client.deps.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:745)
java.lang.RuntimeException: Nonzero exit code: 1
at scala.sys.package$.error(package.scala:27)
And this is the code that generated the error:
import com.couchbase.client.java._
import com.couchbase.client.core.time._
import com.couchbase.client.java.document._
import com.couchbase.client.java.document.json._
import com.couchbase.client.java.query._
import com.couchbase.client.java.env.DefaultCouchbaseEnvironment

object App {
  def main(args: Array[String]): Unit = {
    // Initialize the Connection
    // Connects to localhost
    val env = DefaultCouchbaseEnvironment.builder()
      .connectTimeout(5000)
      .bootstrapCarrierEnabled(false)
      .build()
    val cluster = CouchbaseCluster.create(env, "127.0.0.1")
    // Opens the "default" bucket
    val bucket = cluster.openBucket("default")
    // Create a JSON Document
    val user: JsonObject = JsonObject.create()
      .put("firstname", "Walter")
      .put("lastname", "White")
      .put("job", "chemistry teacher")
      .put("age", 50)
    val stored: JsonDocument = bucket.upsert(JsonDocument.create("walter", user))
    // Load the Document and print it
    // Prints Content and Metadata of the stored Document
    println(bucket.get("walter"))
    // Just close a single bucket
    bucket.close()
    // Disconnect and close all buckets
    cluster.disconnect()
  }
}
EDIT:
I am a little new to this, but here is what I managed to get out of the debugger:
this = {DefaultCouchbaseBucketConfig#2385} "DefaultCouchbaseBucketConfig{name='testBucket', locator=VBUCKET, uri='/pools/default/buckets/testBucket?bucket_uuid=54c1356c57dea1d640837c678f87d5e4', streamingUri='/pools/default/bucketsStreaming/testBucket?bucket_uuid=54c1356c57dea1d640837c678f87d5e4', nodeInfo=[NodeInfo{, hostname=localhost/127.0.0.1, configPort=0, directServices={CONFIG=8091, QUERY=8093, VIEW=8092, BINARY=11210}, sslServices={CONFIG=18091, QUERY=18093, VIEW=18092, BINARY=11207}}], partitionInfo=PartitionInfo{numberOfReplicas=1, partitionHosts=[localhost], partitions=[], tainted=false}, tainted=false, rev=23}"
partitionInfo = {CouchbasePartitionInfo#2391} "PartitionInfo{numberOfReplicas=1, partitionHosts=[localhost], partitions=[], tainted=false}"
numberOfReplicas = 1
partitionHosts = {String[1]#2428}
partitions = {ArrayList#2390} size = 0
forwardPartitions = null
tainted = false
partitionHosts = {ArrayList#2396} size = 1
0 = {DefaultNodeInfo#2425} "NodeInfo{, hostname=localhost/127.0.0.1, configPort=8091, directServices={CONFIG=8091, BINARY=11210, VIEW=8092}, sslServices={}}"
nodesWithPrimaryPartitions = {HashSet#2397} size = 0
tainted = false
rev = 23
name = "testBucket"
value = {char[10]#2422}
hash = 1241531676
password = ""
value = {char[0]#2421}
hash = 0
locator = {BucketNodeLocator#2400} "VBUCKET"
name = "VBUCKET"
ordinal = 0
uri = "/pools/default/buckets/testBucket?bucket_uuid=54c1356c57dea1d640837c678f87d5e4"
value = {char[78]#2411}
hash = 0
streamingUri = "/pools/default/bucketsStreaming/testBucket?bucket_uuid=54c1356c57dea1d640837c678f87d5e4"
value = {char[87]#2420}
hash = 0
nodeInfo = {ArrayList#2403} size = 1
0 = {DefaultNodeInfo#2408} "NodeInfo{, hostname=localhost/127.0.0.1, configPort=0, directServices={CONFIG=8091, QUERY=8093, VIEW=8092, BINARY=11210}, sslServices={CONFIG=18091, QUERY=18093, VIEW=18092, BINARY=11207}}"
enabledServices = 15
partition = 7620
useFastForward = false
EDIT 2:
I had a look at the Couchbase console log and I am constantly getting the following:
Service 'memcached' exited with status 1. Restarting. Messages: Failed to open library "/Users/luishreis/Downloads/couchbase-server-enterprise_4/Couchbase Server.app/Contents/Resources/couchbase-core/lib/memcached/stdin_term_handler.so": dlopen(/Users/luishreis/Downloads/couchbase-server-enterprise_4/Couchbase Server.app/Contents/Resources/couchbase-core/lib/memcached/stdin_term_handler.dylib, 6): image not found
Unable to load extension /Users/luishreis/Downloads/couchbase-server-enterprise_4/Couchbase Server.app/Contents/Resources/couchbase-core/lib/memcached/stdin_term_handler.so using the config
Any help on the matter would be appreciated.
Many thanks.
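One way to narrow this down (a sketch only, not a definitive fix): since the Scala program uses the Java SDK underneath, the same connection can be tried from a plain Java program with the default environment settings and a longer connect timeout, leaving carrier bootstrap enabled. The class name CouchbaseSmokeTest and the timeout value are only illustrative; the bucket name and document follow the example above:

import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.Cluster;
import com.couchbase.client.java.CouchbaseCluster;
import com.couchbase.client.java.document.JsonDocument;
import com.couchbase.client.java.document.json.JsonObject;
import com.couchbase.client.java.env.CouchbaseEnvironment;
import com.couchbase.client.java.env.DefaultCouchbaseEnvironment;

public class CouchbaseSmokeTest {
    public static void main(String[] args) {
        // Default environment, only the connect timeout raised; carrier bootstrap left enabled
        CouchbaseEnvironment env = DefaultCouchbaseEnvironment.builder()
                .connectTimeout(10000)
                .build();
        Cluster cluster = CouchbaseCluster.create(env, "127.0.0.1");
        Bucket bucket = cluster.openBucket("default");
        JsonObject user = JsonObject.create().put("firstname", "Walter");
        bucket.upsert(JsonDocument.create("walter", user));
        System.out.println(bucket.get("walter"));
        cluster.disconnect();
    }
}

Given the memcached restart loop shown in EDIT 2, the empty partition map (partitions=[] in the debugger output) may simply mean the data service never came up, in which case no client-side setting will help until the server side is healthy.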

MySQL: Intermittent com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure

I am getting this exception intermittently and am having a really hard time tracking down how to fix it.
The load on this server is pretty low, and the application is on the same host as the MySQL server, connecting via localhost (MySQL 5.6). I have reasonable values for 'connect_timeout' (10), 'interactive_timeout' (28800) and 'wait_timeout' (28800), and there is no firewall to cause issues.
Yet every so often this error pops up:
Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
Last packet sent to the server was 1 ms ago.
at sun.reflect.GeneratedConstructorAccessor206.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:406)
at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1074)
at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:2985)
at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:2871)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3414)
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1936)
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2060)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2536)
at com.mysql.jdbc.ConnectionImpl.setAutoCommit(ConnectionImpl.java:4874)
at com.mchange.v2.c3p0.impl.NewProxyConnection.setAutoCommit(NewProxyConnection.java:881)
at org.hibernate.c3p0.internal.C3P0ConnectionProvider.getConnection(C3P0ConnectionProvider.java:94)
at org.hibernate.internal.AbstractSessionImpl$NonContextualJdbcConnectionAccess.obtainConnection(AbstractSessionImpl.java:380)
at org.hibernate.engine.jdbc.internal.LogicalConnectionImpl.obtainConnection(LogicalConnectionImpl.java:228)
... 76 more
Caused by: java.io.EOFException: Can not read response from server. Expected to read 4 bytes, read 0 bytes before connection was unexpectedly lost.
at com.mysql.jdbc.MysqlIO.readFully(MysqlIO.java:2431)
at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:2882)
This is what I have for my Hibernate configuration:
.setProperty("hibernate.hbm2ddl.auto", "validate")
.setProperty("hibernate.dialect", "org.hibernate.dialect.MySQL5InnoDBDialect")
.setProperty("hibernate.connection.provider_class", "org.hibernate.connection.C3P0ConnectionProvider")
.setProperty("hibernate.c3p0.idle_test_period", "1000")
.setProperty("hibernate.c3p0.min_size", "1")
.setProperty("hibernate.c3p0.max_size", "20")
.setProperty("hibernate.c3p0.timeout", "1800")
.setProperty("hibernate.c3p0.max_statements", "50")
.setProperty("hibernate.c3p0.debugUnreturnedConnectionStackTraces", "true")
.setProperty("hibernate.c3p0.unreturnedConnectionTimeout", "20")
And this is my MySQL config:
[mysqld]
## General
ignore-db-dir = lost+found
datadir = /var/lib/mysql
socket = /var/lib/mysql/mysql.sock
tmpdir = /var/lib/mysqltmp
## Cache
table-definition-cache = 4096
table-open-cache = 4096
#table-open-cache-instances = 1
#thread-cache-size = 16
#query-cache-size = 32M
#query-cache-type = 1
## Per-thread Buffers
#join-buffer-size = 512K
#read-buffer-size = 512K
#read-rnd-buffer-size = 512K
#sort-buffer-size = 512K
## Temp Tables
#max-heap-table-size = 64M
#tmp-table-size = 32M
## Networking
#interactive-timeout = 3600
max-connections = 400
max-connect-errors = 1000000
max-allowed-packet = 16M
skip-name-resolve
wait-timeout = 600
## MyISAM
key-buffer-size = 32M
#myisam-recover = FORCE,BACKUP
myisam-sort-buffer-size = 128M
## InnoDB
#innodb-buffer-pool-size = 256M
innodb-file-format = Barracuda
#innodb-file-per-table = 1
#innodb-flush-method = O_DIRECT
#innodb-log-file-size = 128M
## Replication and PITR
#binlog-format = ROW
expire-logs-days = 7
#log-bin = /var/log/mysql/bin-log
#log-slave-updates = 1
#max-binlog-size = 128M
#read-only = 1
#relay-log = /var/log/mysql/relay-log
relay-log-space-limit = 16G
server-id = 1
## Logging
#log-output = FILE
#log-slow-admin-statements
#log-slow-slave-statements
#log-warnings = 0
#long-query-time = 2
#slow-query-log = 1
#slow-query-log-file = /var/log/mysql/slow-log
[mysqld_safe]
log-error = /var/log/mysqld.log
#malloc-lib = /usr/lib64/libjemalloc.so.1
open-files-limit = 65535
[mysql]
no-auto-rehash
If the load is low, you can try setting testConnectionOnCheckout to true.
testConnectionOnCheckout must be set in c3p0.properties (C3P0 default: false). If set to true, an operation will be performed at every connection checkout to verify that the connection is valid.
However, this check is expensive.
A better choice is to verify connections periodically using c3p0.idleConnectionTestPeriod. Try reducing the value and check how it works.
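As a sketch of how that could look with the Hibernate setup shown above (hibernate.c3p0.* properties are passed through to c3p0, which the existing debugUnreturnedConnectionStackTraces entry already relies on; the values are only illustrative):

import org.hibernate.cfg.Configuration;

// Sketch only: "cfg" stands in for whatever Configuration object the .setProperty chain above is called on.
Configuration cfg = new Configuration()
        .setProperty("hibernate.c3p0.idle_test_period", "300")           // test idle connections every 300 s instead of 1000
        .setProperty("hibernate.c3p0.preferredTestQuery", "SELECT 1")    // cheap validation query used by the tests
        .setProperty("hibernate.c3p0.testConnectionOnCheckout", "true"); // optional and more expensive: validate on every checkout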

java.lang.RuntimeException: java.net.SocketTimeoutException: Read timed out

I am developing a project using Eclipse Java EE, Maven and GeoTools. This is the part of the code that I am going to talk about:
Map connectionParameters = new HashMap();
// Connection parameters
connectionParameters.put(WFSDataStoreFactory.URL.key, getCapabilities);
connectionParameters.put(WFSDataStoreFactory.PROTOCOL.key, false);
connectionParameters.put(WFSDataStoreFactory.LENIENT.key, true);
connectionParameters.put(WFSDataStoreFactory.MAXFEATURES.key, 30);
connectionParameters.put(WFSDataStoreFactory.TIMEOUT.key, 600000);
try { // The WFSDataStoreFactory dsf is already created before
    WFSDataStore dataStore = dsf.createDataStore(connectionParameters);
    // We get the source and then the features from it
    SimpleFeatureSource source = dataStore.getFeatureSource("gmgml_AREAOBRA");
    FeatureCollection<SimpleFeatureType, SimpleFeature> fc = source.getFeatures();
    // We go one by one and print the attribute, using a single FeatureIterator
    // instead of calling fc.features() on every loop pass (each call opens a new iterator)
    FeatureIterator<SimpleFeature> features = fc.features();
    while (features.hasNext()) {
        SimpleFeature sf = features.next();
        System.out.println(sf.getAttribute("IDOBRA")); // It crashes here
    }
    features.close();
} catch (IOException e) {
    e.printStackTrace();
}
The thing is that I have read every post about the following error that it gives me after crashing:
SEVERE: Failed to execute request http://mapa20.ewise.es/WFS_EGIOS_SITUATIONROOM/service.svc/get?TYPENAME=gmgml%3AAREAOBRA&REQUEST=GetFeature&OUTPUTFORMAT=text%2Fxml%3B+subtype%3Dgml%2F3.1.1&VERSION=1.1.0&SERVICE=WFS
java.lang.RuntimeException: java.net.SocketTimeoutException: Read timed out
at org.geotools.data.store.ContentFeatureCollection.features(ContentFeatureCollection.java:176)
at org.geotools.data.store.ContentFeatureCollection.features(ContentFeatureCollection.java:58)
at com.sitep.imi.acefat.server.daemon.InsertarBBDDDaemon.dataAccess(InsertarBBDDDaemon.java:229)
at com.sitep.imi.acefat.server.daemon.InsertarBBDDDaemon.insertData(InsertarBBDDDaemon.java:98)
at com.sitep.imi.acefat.App.main(App.java:17)
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:170)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:704)
at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:647)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1535)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1440)
at org.geotools.data.ows.SimpleHttpClient$SimpleHTTPResponse.<init>(SimpleHttpClient.java:171)
at org.geotools.data.ows.SimpleHttpClient.get(SimpleHttpClient.java:102)
at org.geotools.data.ows.AbstractOpenWebService.internalIssueRequest(AbstractOpenWebService.java:426)
at org.geotools.data.wfs.internal.WFSClient.internalIssueRequest(WFSClient.java:286)
at org.geotools.data.wfs.internal.WFSClient.issueRequest(WFSClient.java:326)
at org.geotools.data.wfs.WFSFeatureSource.getReaderInternal(WFSFeatureSource.java:256)
at org.geotools.data.store.ContentFeatureSource.getReader(ContentFeatureSource.java:634)
at org.geotools.data.store.ContentFeatureCollection.features(ContentFeatureCollection.java:173)
But I cannot find a specific answer for my problem.
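Not a definitive fix, but a small sketch (assuming the same source object and GeoTools version as in the code above, and the type name used in the question) showing how to request a smaller batch through a GeoTools Query; if a limited request succeeds, the timeout is likely the server taking too long to stream the full GetFeature response rather than a client-side bug:

import org.geotools.data.Query;
import org.geotools.data.simple.SimpleFeatureCollection;
import org.geotools.data.simple.SimpleFeatureIterator;

// Sketch: fetch only a handful of features to see whether a smaller response avoids the read timeout.
Query query = new Query("gmgml_AREAOBRA");
query.setMaxFeatures(10); // illustrative value
SimpleFeatureCollection limited = source.getFeatures(query);
try (SimpleFeatureIterator it = limited.features()) {
    while (it.hasNext()) {
        System.out.println(it.next().getAttribute("IDOBRA"));
    }
}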
