NiFi PutHiveStreaming processor with Hive: Failed connecting to EndPoint - java

Could someone help with this issue with NiFi 1.3.0 and Hive? I get the same error with Hive 1.2 and Hive 2.1.1. The Hive table is partitioned, bucketed, and stored as ORC.
The partition is created on HDFS, but the write stage fails. Please check the logs below:
[5:07 AM] papesdiop: Failed connecting to EndPoint {metaStoreUri='thrift://localhost:9083', database='mydb', table='guys', partitionVals=[dev] }
[5:13 AM] papesdiop: I also see the following in the log, hope it might help too:
[5:13 AM] papesdiop: Caused by: org.apache.hive.hcatalog.streaming.TransactionError: Unable to acquire lock on {metaStoreUri='thrift://localhost:9083', database='mydb', table='guys', partitionVals=[dev] }
  at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.beginNextTransactionImpl(HiveEndPoint.java:578)
FULL TRACE LOGS:
reconnect.
org.apache.thrift.transport.TTransportException: null
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:429)
at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:318)
at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:219)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_lock(ThriftHiveMetastore.java:3906)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.lock(ThriftHiveMetastore.java:3893)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.lock(HiveMetaStoreClient.java:1863)
at sun.reflect.GeneratedMethodAccessor380.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:152)
at com.sun.proxy.$Proxy126.lock(Unknown Source)
at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.beginNextTransactionImpl(HiveEndPoint.java:573)
at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.beginNextTransaction(HiveEndPoint.java:547)
at org.apache.nifi.util.hive.HiveWriter.nextTxnBatch(HiveWriter.java:261)
at org.apache.nifi.util.hive.HiveWriter.&lt;init&gt;(HiveWriter.java:73)
at org.apache.nifi.util.hive.HiveUtils.makeHiveWriter(HiveUtils.java:46)
at org.apache.nifi.processors.hive.PutHiveStreaming.makeHiveWriter(PutHiveStreaming.java:964)
at org.apache.nifi.processors.hive.PutHiveStreaming.getOrCreateWriter(PutHiveStreaming.java:875)
at org.apache.nifi.processors.hive.PutHiveStreaming.lambda$null$40(PutHiveStreaming.java:676)
at org.apache.nifi.processor.util.pattern.ExceptionHandler.execute(ExceptionHandler.java:127)
at org.apache.nifi.processors.hive.PutHiveStreaming.lambda$onTrigger$44(PutHiveStreaming.java:673)
at org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2136)
at org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2106)
at org.apache.nifi.processors.hive.PutHiveStreaming.onTrigger(PutHiveStreaming.java:627)
at org.apache.nifi.processors.hive.PutHiveStreaming.lambda$onTrigger$36(PutHiveStreaming.java:551)
at org.apache.nifi.processor.util.pattern.PartialFunctions.onTrigger(PartialFunctions.java:114)
at org.apache.nifi.processor.util.pattern.RollbackOnFailure.onTrigger(RollbackOnFailure.java:184)
at org.apache.nifi.processors.hive.PutHiveStreaming.onTrigger(PutHiveStreaming.java:551)
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1120)
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2017-09-07 06:41:31,015 DEBUG [Timer-4] o.a.n.processors.hive.PutHiveStreaming PutHiveStreaming[id=13ed53d2-015e-1000-c7b1-5af434c38751] Start sending heartbeat on all writers
2017-09-07 06:41:31,890 INFO [Timer-Driven Process Thread-3] hive.metastore Trying to connect to metastore with URI thrift://localhost:9083
2017-09-07 06:41:31,893 INFO [Timer-Driven Process Thread-3] hive.metastore Connected to metastore.
2017-09-07 06:41:31,911 ERROR [Timer-Driven Process Thread-3] o.a.n.processors.hive.PutHiveStreaming PutHiveStreaming[id=13ed53d2-015e-1000-c7b1-5af434c38751] Failed to create HiveWriter for endpoint: {metaStoreUri='thrift://localhost:9083', database='default', table='guys', partitionVals=[dev] }: org.apache.nifi.util.hive.HiveWriter$ConnectFailure: Failed connecting to EndPoint {metaStoreUri='thrift://localhost:9083', database='default', table='guys', partitionVals=[dev] }
org.apache.nifi.util.hive.HiveWriter$ConnectFailure: Failed connecting to EndPoint {metaStoreUri='thrift://localhost:9083', database='default', table='guys', partitionVals=[dev] }
at org.apache.nifi.util.hive.HiveWriter.&lt;init&gt;(HiveWriter.java:79)
at org.apache.nifi.util.hive.HiveUtils.makeHiveWriter(HiveUtils.java:46)
at org.apache.nifi.processors.hive.PutHiveStreaming.makeHiveWriter(PutHiveStreaming.java:964)
at org.apache.nifi.processors.hive.PutHiveStreaming.getOrCreateWriter(PutHiveStreaming.java:875)
at org.apache.nifi.processors.hive.PutHiveStreaming.lambda$null$40(PutHiveStreaming.java:676)
at org.apache.nifi.processor.util.pattern.ExceptionHandler.execute(ExceptionHandler.java:127)
at org.apache.nifi.processors.hive.PutHiveStreaming.lambda$onTrigger$44(PutHiveStreaming.java:673)
at org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2136)
at org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2106)
at org.apache.nifi.processors.hive.PutHiveStreaming.onTrigger(PutHiveStreaming.java:627)
at org.apache.nifi.processors.hive.PutHiveStreaming.lambda$onTrigger$36(PutHiveStreaming.java:551)
at org.apache.nifi.processor.util.pattern.PartialFunctions.onTrigger(PartialFunctions.java:114)
at org.apache.nifi.processor.util.pattern.RollbackOnFailure.onTrigger(RollbackOnFailure.java:184)
at org.apache.nifi.processors.hive.PutHiveStreaming.onTrigger(PutHiveStreaming.java:551)
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1120)
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.nifi.util.hive.HiveWriter$TxnBatchFailure: Failed acquiring Transaction Batch from EndPoint: {metaStoreUri='thrift://localhost:9083', database='default', table='guys', partitionVals=[dev] }
at org.apache.nifi.util.hive.HiveWriter.nextTxnBatch(HiveWriter.java:264)
at org.apache.nifi.util.hive.HiveWriter.&lt;init&gt;(HiveWriter.java:73)
... 24 common frames omitted
Caused by: org.apache.hive.hcatalog.streaming.TransactionError: Unable to acquire lock on {metaStoreUri='thrift://localhost:9083', database='default', table='guys', partitionVals=[dev] }
at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.beginNextTransactionImpl(HiveEndPoint.java:578)
at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.beginNextTransaction(HiveEndPoint.java:547)
at org.apache.nifi.util.hive.HiveWriter.nextTxnBatch(HiveWriter.java:261)
... 25 common frames omitted
Caused by: org.apache.thrift.transport.TTransportException: null
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:429)
at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:318)
at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:219)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_lock(ThriftHiveMetastore.java:3906)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.lock(ThriftHiveMetastore.java:3893)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.lock(HiveMetaStoreClient.java:1863)
at sun.reflect.GeneratedMethodAccessor380.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:152)
at com.sun.proxy.$Proxy126.lock(Unknown Source)
at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.beginNextTransactionImpl(HiveEndPoint.java:573)
... 27 common frames omitted
2017-09-07 06:41:31,911 ERROR [Timer-Driven Process Thread-3] o.a.n.processors.hive.PutHiveStreaming PutHiveStreaming[id=13ed53d2-015e-1000-c7b1-5af434c38751] Error connecting to Hive endpoint: table guys at thrift://localhost:9083
2017-09-07 06:41:31,911 DEBUG [Timer-Driven Process Thread-3] o.a.n.processors.hive.PutHiveStreaming PutHiveStreaming[id=13ed53d2-015e-1000-c7b1-5af434c38751] has chosen to yield its resources; will not be scheduled to run again for 1000 milliseconds
2017-09-07 06:41:31,912 ERROR [Timer-Driven Process Thread-3] o.a.n.processors.hive.PutHiveStreaming PutHiveStreaming[id=13ed53d2-015e-1000-c7b1-5af434c38751] Hive Streaming connect/write error, flow file will be penalized and routed to retry. org.apache.nifi.util.hive.HiveWriter$ConnectFailure: Failed connecting to EndPoint {metaStoreUri='thrift://localhost:9083', database='default', table='guys', partitionVals=
The Hive table:
CREATE TABLE mydb.guys(
  firstname string,
  lastname string)
PARTITIONED BY (
  job string)
CLUSTERED BY (
  firstname)
INTO 10 BUCKETS
ROW FORMAT SERDE
  'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
STORED AS ORC
LOCATION
  'hdfs://localhost:9000/user/papesdiop/guys'
TBLPROPERTIES ( 'transactional'='true')
Thanks in advance.

If this is failing during the write to HDFS, perhaps your user does not have permission to write to the target directory. If you have more information from the full stack trace, please add it to your question, as it will help diagnose the problem. When I had this issue a while ago, it was because my NiFi user needed to be created on the target OS and added to the appropriate HDFS group(s) in order for PutHiveStreaming to have permission to write out the ORC file(s) to HDFS.
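Another thing worth ruling out, given that the root cause in the trace is a TransactionError ("Unable to acquire lock"): Hive Streaming only works against a metastore with ACID transactions enabled. A minimal hive-site.xml sketch of the usual settings (these are standard Hive property names; the worker-thread count is an assumption for a small setup):
<!-- Hive Streaming requires ACID support on the metastore side. -->
<property>
  <name>hive.support.concurrency</name>
  <value>true</value>
</property>
<property>
  <name>hive.txn.manager</name>
  <value>org.apache.hadoop.hive.ql.lockmgr.DbTxnManager</value>
</property>
<property>
  <name>hive.compactor.initiator.on</name>
  <value>true</value>
</property>
<property>
  <name>hive.compactor.worker.threads</name>
  <value>1</value>
</property>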

Related

Failed to connect to my own azure cosmos db instance using ScalarDB

While I was testing the Cosmos DB functionality of ScalarDB (2.2.0), I couldn't connect to the Azure Cosmos DB instance via ScalarDB when I tried to initialize the DistributedTransaction or the Cosmos object with the following code:
DatabaseConfig cosmosDBConfig = new DatabaseConfig(f);
Cosmos cosmos = new Cosmos(cosmosDBConfig);
DistributedTransactionManager db = Guice.createInjector(new TransactionModule(cosmosDBConfig))
.getInstance(TransactionService.class); // failed here
// also tried with this
// Cosmos cosmos = new Cosmos(cosmosDBConfig); // fail here
and I got the following errors:
Network failure
java.lang.IllegalArgumentException: port out of range:-1
at java.net.InetSocketAddress.checkPort(InetSocketAddress.java:143)
at java.net.InetSocketAddress.<init>(InetSocketAddress.java:188)
at reactor.netty.tcp.InetSocketAddressUtil.createForIpString(InetSocketAddressUtil.java:88)
at reactor.netty.tcp.InetSocketAddressUtil.createInetSocketAddress(InetSocketAddressUtil.java:74)
at reactor.netty.tcp.InetSocketAddressUtil.createUnresolved(InetSocketAddressUtil.java:48)
at reactor.netty.tcp.TcpUtils._updatePort(TcpUtils.java:91)
at reactor.netty.tcp.TcpUtils.updatePort(TcpUtils.java:73)
at reactor.netty.tcp.TcpClient.lambda$port$5(TcpClient.java:411)
at reactor.netty.tcp.TcpClientBootstrap.configure(TcpClientBootstrap.java:39)
at reactor.netty.tcp.TcpClientBootstrap.configure(TcpClientBootstrap.java:39)
at reactor.netty.tcp.TcpClientBootstrap.configure(TcpClientBootstrap.java:39)
at reactor.netty.tcp.TcpClientBootstrap.configure(TcpClientBootstrap.java:39)
at reactor.netty.tcp.TcpClient.connect(TcpClient.java:196)
at reactor.netty.http.client.HttpClientFinalizer.connect(HttpClientFinalizer.java:68)
at reactor.netty.http.client.HttpClientFinalizer.responseConnection(HttpClientFinalizer.java:85)
at com.azure.cosmos.implementation.http.ReactorNettyClient.send(ReactorNettyClient.java:123)
at com.azure.cosmos.implementation.RxGatewayStoreModel.performRequest(RxGatewayStoreModel.java:159)
at com.azure.cosmos.implementation.RxGatewayStoreModel.read(RxGatewayStoreModel.java:91)
at com.azure.cosmos.implementation.RxGatewayStoreModel.invokeAsyncInternal(RxGatewayStoreModel.java:351)
at com.azure.cosmos.implementation.RxGatewayStoreModel.lambda$invokeAsync$3(RxGatewayStoreModel.java:366)
at com.azure.cosmos.implementation.BackoffRetryUtility.lambda$executeRetry$0(BackoffRetryUtility.java:34)
at reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:44)
at reactor.core.publisher.FluxRetryWhen.subscribe(FluxRetryWhen.java:79)
at reactor.core.publisher.MonoRetryWhen.subscribeOrReturn(MonoRetryWhen.java:46)
at reactor.core.publisher.FluxFromMonoOperator.subscribe(FluxFromMonoOperator.java:76)
at reactor.core.publisher.FluxDefer.subscribe(FluxDefer.java:54)
at reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:64)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:150)
at reactor.core.publisher.MonoDelay$MonoDelayRunnable.run(MonoDelay.java:117)
at reactor.core.scheduler.SchedulerTask.call(SchedulerTask.java:68)
at reactor.core.scheduler.SchedulerTask.call(SchedulerTask.java:28)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Failed to retrieve database account information. java.lang.IllegalArgumentException: port out of range:-1
Fail to reach global gateway [scalardb-test-cosmos.cassandra.cosmos.azure.com], [null]
startRefreshLocationTimerAsync() - Unable to refresh database account from any location. Exception: CosmosException{userAgent=Linux/4.18.0-147.8.1.el8_1.x86_64 JRE/1.8.0_252 cosmosdb-java-sdk/4.1.0, error=null, resourceAddress='null', requestUri='null', statusCode=0, message=null, causeInfo=[class: class java.lang.IllegalArgumentException, message: port out of range:-1], responseHeaders={}, requestHeaders=[Accept=application/json, x-ms-date=Thu, 01 Oct 2020 09:28:49 GMT]}
CosmosException{userAgent=Linux/4.18.0-147.8.1.el8_1.x86_64 JRE/1.8.0_252 cosmosdb-java-sdk/4.1.0, error=null, resourceAddress='null', requestUri='null', statusCode=0, message=null, causeInfo=[class: class java.lang.IllegalArgumentException, message: port out of range:-1], responseHeaders={}, requestHeaders=[Accept=application/json, x-ms-date=Thu, 01 Oct 2020 09:28:49 GMT]}
at com.azure.cosmos.BridgeInternal.createCosmosException(BridgeInternal.java:281)
at com.azure.cosmos.implementation.RxGatewayStoreModel.lambda$toDocumentServiceResponse$2(RxGatewayStoreModel.java:301)
at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onError(FluxOnErrorResume.java:88)
at reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onError(FluxMapFuseable.java:134)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onError(MonoFlatMap.java:165)
at reactor.core.publisher.MonoSingle$SingleSubscriber.onError(MonoSingle.java:141)
at reactor.core.publisher.Operators.error(Operators.java:196)
at reactor.core.publisher.FluxFlatMap.trySubscribeScalarMap(FluxFlatMap.java:134)
at reactor.core.publisher.MonoFlatMapMany.subscribeOrReturn(MonoFlatMapMany.java:49)
at reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:57)
at reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:52)
at reactor.core.publisher.FluxRetryWhen.subscribe(FluxRetryWhen.java:79)
at reactor.core.publisher.MonoRetryWhen.subscribeOrReturn(MonoRetryWhen.java:46)
at reactor.core.publisher.FluxFromMonoOperator.subscribe(FluxFromMonoOperator.java:76)
at reactor.core.publisher.FluxDefer.subscribe(FluxDefer.java:54)
at reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:64)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:150)
at reactor.core.publisher.MonoDelay$MonoDelayRunnable.run(MonoDelay.java:117)
at reactor.core.scheduler.SchedulerTask.call(SchedulerTask.java:68)
at reactor.core.scheduler.SchedulerTask.call(SchedulerTask.java:28)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.IllegalArgumentException: port out of range:-1
at java.net.InetSocketAddress.checkPort(InetSocketAddress.java:143)
at java.net.InetSocketAddress.<init>(InetSocketAddress.java:188)
at reactor.netty.tcp.InetSocketAddressUtil.createForIpString(InetSocketAddressUtil.java:88)
at reactor.netty.tcp.InetSocketAddressUtil.createInetSocketAddress(InetSocketAddressUtil.java:74)
at reactor.netty.tcp.InetSocketAddressUtil.createUnresolved(InetSocketAddressUtil.java:48)
at reactor.netty.tcp.TcpUtils._updatePort(TcpUtils.java:91)
at reactor.netty.tcp.TcpUtils.updatePort(TcpUtils.java:73)
at reactor.netty.tcp.TcpClient.lambda$port$5(TcpClient.java:411)
at reactor.netty.tcp.TcpClientBootstrap.configure(TcpClientBootstrap.java:39)
at reactor.netty.tcp.TcpClientBootstrap.configure(TcpClientBootstrap.java:39)
at reactor.netty.tcp.TcpClientBootstrap.configure(TcpClientBootstrap.java:39)
at reactor.netty.tcp.TcpClientBootstrap.configure(TcpClientBootstrap.java:39)
at reactor.netty.tcp.TcpClient.connect(TcpClient.java:196)
at reactor.netty.http.client.HttpClientFinalizer.connect(HttpClientFinalizer.java:68)
at reactor.netty.http.client.HttpClientFinalizer.responseConnection(HttpClientFinalizer.java:85)
at com.azure.cosmos.implementation.http.ReactorNettyClient.send(ReactorNettyClient.java:123)
at com.azure.cosmos.implementation.RxGatewayStoreModel.performRequest(RxGatewayStoreModel.java:159)
at com.azure.cosmos.implementation.RxGatewayStoreModel.read(RxGatewayStoreModel.java:91)
at com.azure.cosmos.implementation.RxGatewayStoreModel.invokeAsyncInternal(RxGatewayStoreModel.java:351)
at com.azure.cosmos.implementation.RxGatewayStoreModel.lambda$invokeAsync$3(RxGatewayStoreModel.java:366)
at com.azure.cosmos.implementation.BackoffRetryUtility.lambda$executeRetry$0(BackoffRetryUtility.java:34)
at reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:44)
... 15 more
Exception in thread "main" java.lang.NullPointerException
at com.azure.cosmos.BridgeInternal.isEnableMultipleWriteLocations(BridgeInternal.java:163)
at com.azure.cosmos.implementation.RxDocumentClientImpl.initializeGatewayConfigurationReader(RxDocumentClientImpl.java:264)
at com.azure.cosmos.implementation.RxDocumentClientImpl.init(RxDocumentClientImpl.java:281)
at com.azure.cosmos.implementation.AsyncDocumentClient$Builder.build(AsyncDocumentClient.java:203)
at com.azure.cosmos.CosmosAsyncClient.<init>(CosmosAsyncClient.java:79)
at com.azure.cosmos.CosmosClientBuilder.buildAsyncClient(CosmosClientBuilder.java:649)
at com.azure.cosmos.CosmosClient.<init>(CosmosClient.java:30)
at com.azure.cosmos.CosmosClientBuilder.buildClient(CosmosClientBuilder.java:661)
at com.scalar.db.storage.cosmos.Cosmos.<init>(Cosmos.java:58)
However, I was able to insert data into the instance using the Cosmos DB SDK (without ScalarDB) from both a Node.js and a Java client. I also tried running the ScalarDB initialization code on one of my Azure VMs, but that failed with the same error.
Below is my ScalarDB configuration:
scalar.db.contact_points=<InstanceContactPoints>
scalar.db.contact_port=<DefaultPort>
scalar.db.username=<InstanceUsername>
scalar.db.password=<InstancePassword>
scalar.db.storage=cosmos
Is my configuration wrong? How should I set it up? Thanks in advance.
Your code looks good.
I'm wondering whether scalar.db.contact_points includes the port (like ...:443) in your Scalar DB configuration.
scalar.db.contact_port isn't required for Cosmos DB because the contact point already carries the port, so you can remove the scalar.db.contact_port line.
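For reference, a sketch of the corrected properties, keeping the placeholders from the question (the :443 suffix on the contact point is the change being suggested):
scalar.db.contact_points=<InstanceContactPoints>:443
scalar.db.username=<InstanceUsername>
scalar.db.password=<InstancePassword>
scalar.db.storage=cosmos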

org.springframework.messaging.MessagingException: Problem occurred while synchronizing remote to local directory

I have a Spring Batch application where the batch runs continuously, pulling data from another server through SFTP. The log keeps showing the error below, yet the batch seems to work as expected: it pulls files from the other server without any issue, so I don't know why this error is logged. Stranger still, the same code doesn't log any error in the QA environment; it only happens in prod.
The batch is doing its job, but it is also logging this error.
Can anyone suggest how to get rid of it?
Below is the code for creating the session factory:
public SessionFactory<LsEntry> pimSftpSessionFactory() {
    Resource resource = new FileSystemResource(sftpProperties.privateKeyLocation);
    Properties config = new Properties();
    config.put("PreferredAuthentications", "publickey,password");
    DefaultSftpSessionFactory sftpSessionFactory = new DefaultSftpSessionFactory();
    sftpSessionFactory.setHost(sftpProperties.hostName);
    sftpSessionFactory.setPort(sftpProperties.port);
    sftpSessionFactory.setUser(sftpProperties.username);
    sftpSessionFactory.setKnownHosts(sftpProperties.knownHosts);
    sftpSessionFactory.setPrivateKey(resource);
    sftpSessionFactory.setSessionConfig(config);
    return sftpSessionFactory;
}
Pom entry:
<dependency>
    <groupId>org.springframework.integration</groupId>
    <artifactId>spring-integration-sftp</artifactId>
</dependency>
Error:
03 Sep 2020 01:47:56.688 ERROR o.s.i.handler.LoggingHandler - org.springframework.messaging.MessagingException: Problem occurred while synchronizing remote to local directory; nested exception is org.springframework.messaging.MessagingException: Failed to obtain pooled item; nested exception is java.lang.IllegalStateException: failed to create SFTP Session
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizer.synchronizeToLocalDirectory(AbstractInboundFileSynchronizer.java:303)
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizingMessageSource.doReceive(AbstractInboundFileSynchronizingMessageSource.java:200)
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizingMessageSource.doReceive(AbstractInboundFileSynchronizingMessageSource.java:62)
at org.springframework.integration.endpoint.AbstractMessageSource.receive(AbstractMessageSource.java:134)
at org.springframework.integration.endpoint.SourcePollingChannelAdapter.receiveMessage(SourcePollingChannelAdapter.java:224)
at org.springframework.integration.endpoint.AbstractPollingEndpoint.doPoll(AbstractPollingEndpoint.java:245)
at org.springframework.integration.endpoint.AbstractPollingEndpoint.access$000(AbstractPollingEndpoint.java:58)
at org.springframework.integration.endpoint.AbstractPollingEndpoint$1.call(AbstractPollingEndpoint.java:190)
at org.springframework.integration.endpoint.AbstractPollingEndpoint$1.call(AbstractPollingEndpoint.java:186)
at org.springframework.integration.endpoint.AbstractPollingEndpoint$Poller$1.run(AbstractPollingEndpoint.java:353)
at org.springframework.integration.util.ErrorHandlingTaskExecutor$1.run(ErrorHandlingTaskExecutor.java:55)
at org.springframework.core.task.SyncTaskExecutor.execute(SyncTaskExecutor.java:50)
at org.springframework.integration.util.ErrorHandlingTaskExecutor.execute(ErrorHandlingTaskExecutor.java:51)
at org.springframework.integration.endpoint.AbstractPollingEndpoint$Poller.run(AbstractPollingEndpoint.java:344)
at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
at org.springframework.scheduling.concurrent.ReschedulingRunnable.run(ReschedulingRunnable.java:81)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.springframework.messaging.MessagingException: Failed to obtain pooled item; nested exception is java.lang.IllegalStateException: failed to create SFTP Session
at org.springframework.integration.util.SimplePool.getItem(SimplePool.java:178)
at org.springframework.integration.file.remote.session.CachingSessionFactory.getSession(CachingSessionFactory.java:123)
at org.springframework.integration.file.remote.RemoteFileTemplate.execute(RemoteFileTemplate.java:441)
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizer.synchronizeToLocalDirectory(AbstractInboundFileSynchronizer.java:264)
... 22 more
Caused by: java.lang.IllegalStateException: failed to create SFTP Session
at org.springframework.integration.sftp.session.DefaultSftpSessionFactory.getSession(DefaultSftpSessionFactory.java:393)
at org.springframework.integration.sftp.session.DefaultSftpSessionFactory.getSession(DefaultSftpSessionFactory.java:57)
at org.springframework.integration.file.remote.session.CachingSessionFactory$1.createForPool(CachingSessionFactory.java:81)
at org.springframework.integration.file.remote.session.CachingSessionFactory$1.createForPool(CachingSessionFactory.java:78)
at org.springframework.integration.util.SimplePool.doGetItem(SimplePool.java:188)
at org.springframework.integration.util.SimplePool.getItem(SimplePool.java:169)
... 25 more
Caused by: java.lang.IllegalStateException: failed to connect
at org.springframework.integration.sftp.session.SftpSession.connect(SftpSession.java:273)
at org.springframework.integration.sftp.session.DefaultSftpSessionFactory.getSession(DefaultSftpSessionFactory.java:388)
... 30 more
Caused by: com.jcraft.jsch.JSchException: java.io.IOException: Pipe closed
at com.jcraft.jsch.ChannelSftp.start(ChannelSftp.java:315)
at com.jcraft.jsch.Channel.connect(Channel.java:152)
at com.jcraft.jsch.Channel.connect(Channel.java:145)
at org.springframework.integration.sftp.session.SftpSession.connect(SftpSession.java:268)
... 31 more
Caused by: java.io.IOException: Pipe closed
at java.io.PipedInputStream.read(PipedInputStream.java:307)
at java.io.PipedInputStream.read(PipedInputStream.java:377)
at com.jcraft.jsch.ChannelSftp.fill(ChannelSftp.java:2909)
at com.jcraft.jsch.ChannelSftp.header(ChannelSftp.java:2935)
at com.jcraft.jsch.ChannelSftp.start(ChannelSftp.java:262)
... 34 more

Hbase client API not connecting to Hbase throwing SocketTimeoutException

I'm trying to connect to HBase using the HBase client API in a Kerberized Cloudera cluster.
Sample code:
Configuration hbaseConf = HBaseConfiguration.create();
/*
hbaseConf.set("hbase.master", "somenode.net:2181");
hbaseConf.set("hbase.client.scanner.timeout.period", "1200000");
hbaseConf.set("hbase.zookeeper.quorum", "somenode.net,somenode2.net");
hbaseConf.set("zookeeper.znode.parent", "/hbase");
*/
hbaseConf.setInt("timeout", 120000);
hbaseConf.set(TableInputFormat.INPUT_TABLE, tableName);
//hbaseConf.addResource("src/main/resources/hbase-site.xml");
UserGroupInformation.setConfiguration(hbaseConf);
UserGroupInformation.loginUserFromKeytab("principal", "keytab");
JavaPairRDD<ImmutableBytesWritable, Result> javaPairRdd = ctx
        .newAPIHadoopRDD(hbaseConf, TableInputFormat.class,
                ImmutableBytesWritable.class, Result.class);
I tried putting hbase-site.xml in the Maven project resources and also passed it as a jar in the spark-submit command using --jars, but nothing works.
Error log:
Caused by: java.net.SocketTimeoutException: callTimeout=60000, callDuration=68545: row '¨namespace:test,,00000000000000' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=hostname.net,60020,1511970022474, seqNum=0
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:159)
at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:65)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Broken pipe
at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
at sun.nio.ch.IOUtil.write(IOUtil.java:65)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:487)
at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)
at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
at java.io.DataOutputStream.flush(DataOutputStream.java:123)
at org.apache.hadoop.hbase.ipc.IPCUtil.write(IPCUtil.java:278)
at org.apache.hadoop.hbase.ipc.IPCUtil.write(IPCUtil.java:266)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:920)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:873)
at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1242)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:34094)
at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:394)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:203)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:64)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:360)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:334)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
... 4 more
18/02/26 16:25:42 INFO spark.SparkContext: Invoking stop() from shutdown hook
The problem you are facing is that your environment is not properly set up.
I have answered my own question about it here.
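For what it's worth, a minimal sketch of the kind of setup that usually matters in a Kerberized cluster (the paths, principal, and keytab below are placeholders, not values from this question): load the cluster's real configuration files and complete the Kerberos login through UserGroupInformation before making any HBase call.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.security.UserGroupInformation;

public class HBaseKerberosSetup {
    public static Configuration kerberizedConf() throws Exception {
        // Load the actual cluster configs instead of relying on bundled defaults.
        Configuration conf = HBaseConfiguration.create();
        conf.addResource(new Path("/etc/hbase/conf/hbase-site.xml")); // placeholder path
        conf.addResource(new Path("/etc/hadoop/conf/core-site.xml")); // placeholder path
        conf.set("hadoop.security.authentication", "kerberos");

        // Log in before creating any HBase connection or Spark RDD.
        UserGroupInformation.setConfiguration(conf);
        UserGroupInformation.loginUserFromKeytab("user@EXAMPLE.COM", "/path/to/user.keytab");
        return conf;
    }
}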

Nifi GenerateTableFetch gives error when partition size is set to zero

While setting Partition Size to 0 so as to fetch all the rows in the given table with the GenerateTableFetch processor in NiFi, I am getting the following error:
ERROR [Timer-Driven Process Thread-4] o.a.n.p.standard.GenerateTableFetch GenerateTableFetch[id=d0932834-015d-1000-8224-c230630b6fa6] GenerateTableFetch[id=d0932834-015d-1000-8224-c230630b6fa6] failed to process session due to java.lang.NullPointerException: {}
java.lang.NullPointerException: null
at org.apache.nifi.processors.standard.GenerateTableFetch.onTrigger(GenerateTableFetch.java:300)
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1120)
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
When I set Partition Size > 0 it works. How can I resolve this error?
Unfortunately this is a bug in the processor code for the case where Partition Size is 0.
I have created this JIRA for it:
https://issues.apache.org/jira/browse/NIFI-4286
Until it is fixed, any Partition Size greater than zero (as you observed) avoids the NullPointerException.

Cassandra NoHostAvailableException caused by DriverException while traversing a resultset

I am getting this error repeatedly while traversing a result set of 100,000 rows. I have used
.withReconnectionPolicy(new ConstantReconnectionPolicy(1000))
.withRetryPolicy(DefaultRetryPolicy.INSTANCE)
.withQueryOptions(new QueryOptions().setFetchSize(2000))
in my connector, but I still get this error. Typically it fails after fetching about 80,000 rows.
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /172.16.12.143:9042 (com.datastax.driver.core.exceptions.DriverException: Timed out waiting for server response), /172.16.12.141:9042 (com.datastax.driver.core.exceptions.DriverException: Timed out waiting for server response))
at com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:65)
at com.datastax.driver.core.DefaultResultSetFuture.extractCauseFromExecutionException(DefaultResultSetFuture.java:259)
at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:175)
at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:52)
at com.payu.merchantAnalytics.FunnelHourlyUtilCql.groupByHourCql(FunnelHourlyUtilCql.java:87)
at com.payu.merchantAnalytics.FunnelHourlyUtilCql.main(FunnelHourlyUtilCql.java:49)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.spark.deploy.worker.DriverWrapper$.main(DriverWrapper.scala:58)
at org.apache.spark.deploy.worker.DriverWrapper.main(DriverWrapper.scala)
Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /172.16.12.143:9042 (com.datastax.driver.core.exceptions.DriverException: Timed out waiting for server response), /172.16.12.141:9042 (com.datastax.driver.core.exceptions.DriverException: Timed out waiting for server response))
at com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:108)
at com.datastax.driver.core.RequestHandler$1.run(RequestHandler.java:179)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Solved. Thanks Andy Tolbert for the answer. Setting the read timeout via SocketOptions.setReadTimeoutMillis(100000) worked.
Now the code looks like:
// Raise the per-request read timeout; long page fetches on a large result set
// can exceed the driver's default 12-second read timeout.
SocketOptions socketOptions = new SocketOptions().setReadTimeoutMillis(100000);
Cluster.builder()
    .addContactPoints(nodes)
    .withReconnectionPolicy(new ConstantReconnectionPolicy(1000))
    .withRetryPolicy(DefaultRetryPolicy.INSTANCE)
    .withQueryOptions(new QueryOptions().setFetchSize(2000))
    .withSocketOptions(socketOptions)
    .withCredentials(username, password)
    .build();
Thanks A Lot :)
