While connecting to a Cassandra database and creating a keyspace, I am getting a NoHostAvailableException (full stack trace below).
This is my code:
package com.hadoop.reloaded.cassandra;

import com.datastax.driver.core.*;

public class CassandraOne {
    public static void main(String[] args) {
        Cluster cluster1 = Cluster.builder()
                .withPort(9042)
                .addContactPoint("127.0.0.1")
                .build();
        Session session = cluster1.connect();

        String query = "CREATE KEYSPACE test WITH replication "
                + "= {'class':'SimpleStrategy', 'replication_factor':1};";
        session.execute(query);
        session.execute("USE test");
    }
}
and this exception is thrown when I run it:
Exception in thread "main" com.datastax.driver.core.exceptions.NoHostAvailableException:
All host(s) tried for query failed (tried: /127.0.0.1:9042 (
com.datastax.driver.core.ConnectionException: [/127.0.0.1:9042]
Unexpected error during transport initialization (
com.datastax.driver.core.TransportException: [/127.0.0.1:9042]
Unexpected exception triggered (java.lang.IndexOutOfBoundsException: Not enough readable bytes - Need 4, maximum is 0))))
at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:196)
at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:79)
at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1104)
at com.datastax.driver.core.Cluster.init(Cluster.java:121)
at com.datastax.driver.core.Cluster.connect(Cluster.java:198)
at com.hadoop.reloaded.cassandra.CassandraOne.main(CassandraOne.java:13)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
My guess is that you are using a version of the DataStax Java Driver that is too old. Check whether you are running version 3.0.0 or later, as this is required to connect to a Cassandra 3.x cluster.
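If you are unsure which driver version actually ends up on your classpath, you can ask the driver itself. This is a minimal sketch; Cluster.getDriverVersion() is available in driver 3.x, and older releases may not have it:

// Prints the version of the DataStax driver found on the classpath
// (Cluster.getDriverVersion() exists in driver 3.x; older versions may lack it).
System.out.println("Driver version: " + com.datastax.driver.core.Cluster.getDriverVersion());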
I faced the same issue. After upgrading the driver to cassandra-driver-core-3.0.3.jar and the matching Netty dependency, the issue was resolved.
Maybe an encoding issue?
More details on this link.
Related
I am currently facing a weird issue. I CAN connect to the MongoDB server from MongoDB Compass and IntelliJ. But when I try to connect to the Mongo database from my Java application it continuously throws
Caused by: com.mongodb.spi.dns.DnsWithResponseCodeException: DNS name not found [response code 3]
Caused by: javax.naming.NameNotFoundException: DNS name not found [response code 3]
Background:
I have a simple Java application, which I run with Gradle's bootRun task. I have added the mongodb-driver-core, mongodb-driver-sync, and bson dependencies from org.mongodb, and spring-data-mongodb from org.springframework.data. I have added the connection string as a property in a yml file.
The connection string is:
mongodb+srv://<username>:<password>@<domain>/<dbname>?ssl=false&authMechanism=PLAIN&connectTimeoutMS=30000&maxIdleTimeMS=600000
and here is my mongo config class:
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.mongodb.core.MongoTemplate;

@Configuration
public class MongoConfiguration {

    @Bean
    public MongoClient mongoClient(@Value("${spring.data.mongodb.uri}") String url) {
        return MongoClients.create(url);
    }

    @Bean
    public MongoTemplate mongoTemplate(MongoClient mongo,
                                       @Value("${mongo.database}") String databaseName) {
        MongoTemplate mongoTemplate = new MongoTemplate(mongo, databaseName);
        mongoTemplate.createCollection("Collection_Name");
        return mongoTemplate;
    }
}
The weird part is even if the application is throwing DNS name not found error, I can connect to the database from MongoDB Compass with the same connection string.
Also, one important thing to note here is the same code works for all of my teammates. They can access the database from the code.
I have tried a few things:
Flushed the DNS cache on Windows
Added inbound and outbound firewall rules for the port (30000 in my case)
Removed the srv part from my connection URI, according to the answer in this post
but none of them worked. Any help is appreciated!
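Since the driver resolves SRV records through JNDI, a standalone lookup can show whether this machine can resolve the record at all. This is a minimal sketch, reusing the <domain> placeholder from the connection string:

import java.util.Hashtable;
import javax.naming.directory.Attributes;
import javax.naming.directory.InitialDirContext;

public class SrvLookupCheck {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put("java.naming.factory.initial", "com.sun.jndi.dns.DnsContextFactory");
        env.put("java.naming.provider.url", "dns:");
        // The driver prepends _mongodb._tcp. to the host part of a mongodb+srv URI
        Attributes attrs = new InitialDirContext(env)
                .getAttributes("_mongodb._tcp.<domain>", new String[] { "SRV" });
        System.out.println(attrs.get("SRV"));
    }
}

If this also fails with NameNotFoundException, the problem is in the machine's DNS configuration rather than in the application.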
Here's my complete stacktrace:
[INFO ] 2023-01-28 10:11:19 o.m.d.cluster [cluster-ClusterId{value='63d53b1769d90d388d824a76', description='null'}-srv-<domain>]: Exception while resolving SRV records
com.mongodb.MongoConfigurationException: Failed looking up SRV record for '_mongodb._tcp.<domain>'.
at com.mongodb.internal.dns.DefaultDnsResolver.resolveHostFromSrvRecords(DefaultDnsResolver.java:92) ~[mongodb-driver-core-4.8.2.jar:?]
at com.mongodb.internal.connection.DefaultDnsSrvRecordMonitor$DnsSrvRecordMonitorRunnable.run(DefaultDnsSrvRecordMonitor.java:80) [mongodb-driver-core-4.8.2.jar:?]
at java.lang.Thread.run(Thread.java:829) [?:?]
Caused by: com.mongodb.spi.dns.DnsWithResponseCodeException: DNS name not found [response code 3]
at com.mongodb.internal.dns.JndiDnsClient.getResourceRecordData(JndiDnsClient.java:52) ~[mongodb-driver-core-4.8.2.jar:?]
at com.mongodb.internal.dns.DefaultDnsResolver.resolveHostFromSrvRecords(DefaultDnsResolver.java:74) ~[mongodb-driver-core-4.8.2.jar:?]
... 2 more
Caused by: javax.naming.NameNotFoundException: DNS name not found [response code 3]
at com.sun.jndi.dns.DnsClient.checkResponseCode(DnsClient.java:661) ~[jdk.naming.dns:?]
at com.sun.jndi.dns.DnsClient.isMatchResponse(DnsClient.java:579) ~[jdk.naming.dns:?]
at com.sun.jndi.dns.DnsClient.doUdpQuery(DnsClient.java:427) ~[jdk.naming.dns:?]
at com.sun.jndi.dns.DnsClient.query(DnsClient.java:212) ~[jdk.naming.dns:?]
at com.sun.jndi.dns.Resolver.query(Resolver.java:81) ~[jdk.naming.dns:?]
at com.sun.jndi.dns.DnsContext.c_getAttributes(DnsContext.java:434) ~[jdk.naming.dns:?]
at com.sun.jndi.toolkit.ctx.ComponentDirContext.p_getAttributes(ComponentDirContext.java:235) ~[?:?]
at com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.getAttributes(PartialCompositeDirContext.java:141) ~[?:?]
at com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.getAttributes(PartialCompositeDirContext.java:129) ~[?:?]
at javax.naming.directory.InitialDirContext.getAttributes(InitialDirContext.java:142) ~[?:?]
at com.mongodb.internal.dns.JndiDnsClient.getResourceRecordData(JndiDnsClient.java:41) ~[mongodb-driver-core-4.8.2.jar:?]
at com.mongodb.internal.dns.DefaultDnsResolver.resolveHostFromSrvRecords(DefaultDnsResolver.java:74) ~[mongodb-driver-core-4.8.2.jar:?]
... 2 more
[WARN ] 2023-01-28 10:11:19 o.s.d.c.CustomConversions [main]: Registering converter from class java.time.LocalDateTime to class org.joda.time.LocalDateTime as reading converter although it doesn't convert from a store-supported type; You might want to check your annotation setup at the converter implementation
I am trying to load/write data from a Redis cache in my Spark SQL Java-based application.
Here is my code:
SparkSession sprk = SparkSession.builder().appName("Bulk processing").master("local")
        .config("spark.redis.host", "127.0.0.1")
        .config("spark.redis.port", "57855")
        .getOrCreate();
Dataset<Row> employeeDF = sprk.read().format("org.apache.spark.sql.redis")
        .option("table", "employees")
        .load();
employeeDF.show();
This is the exception I am getting:
Exception in thread "main" redis.clients.jedis.exceptions.JedisConnectionException: java.net.SocketException: Connection reset
at redis.clients.jedis.util.RedisInputStream.ensureFill(RedisInputStream.java:205)
at redis.clients.jedis.util.RedisInputStream.readByte(RedisInputStream.java:43)
at redis.clients.jedis.Protocol.process(Protocol.java:155)
at redis.clients.jedis.Protocol.read(Protocol.java:220)
at redis.clients.jedis.Connection.readProtocolWithCheckingBroken(Connection.java:318)
at redis.clients.jedis.Connection.getBinaryBulkReply(Connection.java:255)
at redis.clients.jedis.Connection.getBulkReply(Connection.java:245)
at redis.clients.jedis.BinaryJedis.info(BinaryJedis.java:2972)
at com.redislabs.provider.redis.RedisConfig.clusterEnabled(RedisConfig.scala:194)
at com.redislabs.provider.redis.RedisConfig.getNodes(RedisConfig.scala:317)
at com.redislabs.provider.redis.RedisConfig.getHosts(RedisConfig.scala:233)
at com.redislabs.provider.redis.RedisConfig.<init>(RedisConfig.scala:132)
at org.apache.spark.sql.redis.RedisSourceRelation.<init>(RedisSourceRelation.scala:34)
at org.apache.spark.sql.redis.DefaultSource.createRelation(DefaultSource.scala:13)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:318)
at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:223)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:211)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:167)
at com.apps.employees.spark.Main.main(Main.java:41)
Caused by: java.net.SocketException: Connection reset
at java.base/java.net.SocketInputStream.read(SocketInputStream.java:186)
at java.base/java.net.SocketInputStream.read(SocketInputStream.java:140)
at java.base/java.net.SocketInputStream.read(SocketInputStream.java:126)
at redis.clients.jedis.util.RedisInputStream.ensureFill(RedisInputStream.java:199)
... 18 more
I have started Redis using Docker and kubectl on a Windows machine.
I am not sure what is causing the issue. Could someone help me resolve it?
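To rule out spark-redis itself, a plain Jedis connection test against the same host and forwarded port can show whether the socket is usable at all. This is a minimal sketch, assuming port 57855 as in the Spark config above:

import redis.clients.jedis.Jedis;

public class RedisPing {
    public static void main(String[] args) {
        // Same host/port as spark.redis.host / spark.redis.port above
        try (Jedis jedis = new Jedis("127.0.0.1", 57855)) {
            System.out.println(jedis.ping()); // expect PONG if the port-forward works
        }
    }
}

A connection reset here as well would point at the kubectl port-forward or the container rather than at Spark.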
I am running a MapReduce job that takes data from a table in Accumulo as input and stores the result in another Accumulo table. To do this, I am using the AccumuloInputFormat and AccumuloOutputFormat classes. Here is the code:
public int run(String[] args) throws Exception {
    Opts opts = new Opts();
    opts.parseArgs(PivotTable.class.getName(), args);

    Configuration conf = getConf();
    conf.set("formula", opts.formula);

    Job job = Job.getInstance(conf);
    job.setJobName("Pivot Table Generation");
    job.setJarByClass(PivotTable.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(Text.class);
    job.setMapperClass(PivotTableMapper.class);
    job.setCombinerClass(PivotTableCombiber.class);
    job.setReducerClass(PivotTableReducer.class);

    job.setInputFormatClass(AccumuloInputFormat.class);
    ClientConfiguration zkConfig = new ClientConfiguration()
            .withInstance(opts.getInstance().getInstanceName())
            .withZkHosts(opts.getInstance().getZooKeepers());
    AccumuloInputFormat.setInputTableName(job, opts.dataTable);
    AccumuloInputFormat.setZooKeeperInstance(job, zkConfig);
    AccumuloInputFormat.setConnectorInfo(job, opts.getPrincipal(), new PasswordToken(opts.getPassword().value));

    job.setOutputFormatClass(AccumuloOutputFormat.class);
    BatchWriterConfig bwConfig = new BatchWriterConfig();
    AccumuloOutputFormat.setBatchWriterOptions(job, bwConfig);
    AccumuloOutputFormat.setZooKeeperInstance(job, zkConfig);
    AccumuloOutputFormat.setConnectorInfo(job, opts.getPrincipal(), new PasswordToken(opts.getPassword().value));
    AccumuloOutputFormat.setDefaultTableName(job, opts.pivotTable);
    AccumuloOutputFormat.setCreateTables(job, true);

    return job.waitForCompletion(true) ? 0 : 1;
}
PivotTable is the name of the class that contains the main method (and this one too). I have written the mapper, combiner, and reducer classes as well. But when I try to execute this job, I get an error:
Exception in thread "main" java.io.IOException: org.apache.accumulo.core.client.AccumuloException: org.apache.thrift.TApplicationException: Internal error processing hasTablePermission
at org.apache.accumulo.core.client.mapreduce.lib.impl.InputConfigurator.validatePermissions(InputConfigurator.java:707)
at org.apache.accumulo.core.client.mapreduce.AbstractInputFormat.validateOptions(AbstractInputFormat.java:397)
at org.apache.accumulo.core.client.mapreduce.AbstractInputFormat.getSplits(AbstractInputFormat.java:668)
at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:301)
at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:318)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:196)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
at com.latize.ulysses.accumulo.postprocess.PivotTable.run(PivotTable.java:247)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at com.latize.ulysses.accumulo.postprocess.PivotTable.main(PivotTable.java:251)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: org.apache.accumulo.core.client.AccumuloException: org.apache.thrift.TApplicationException: Internal error processing hasTablePermission
at org.apache.accumulo.core.client.impl.SecurityOperationsImpl.execute(SecurityOperationsImpl.java:87)
at org.apache.accumulo.core.client.impl.SecurityOperationsImpl.hasTablePermission(SecurityOperationsImpl.java:220)
at org.apache.accumulo.core.client.mapreduce.lib.impl.InputConfigurator.validatePermissions(InputConfigurator.java:692)
... 21 more
Caused by: org.apache.thrift.TApplicationException: Internal error processing hasTablePermission
at org.apache.thrift.TApplicationException.read(TApplicationException.java:111)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:71)
at org.apache.accumulo.core.client.impl.thrift.ClientService$Client.recv_hasTablePermission(ClientService.java:641)
at org.apache.accumulo.core.client.impl.thrift.ClientService$Client.hasTablePermission(ClientService.java:624)
at org.apache.accumulo.core.client.impl.SecurityOperationsImpl$8.execute(SecurityOperationsImpl.java:223)
at org.apache.accumulo.core.client.impl.SecurityOperationsImpl$8.execute(SecurityOperationsImpl.java:220)
at org.apache.accumulo.core.client.impl.ServerClient.executeRaw(ServerClient.java:79)
at org.apache.accumulo.core.client.impl.SecurityOperationsImpl.execute(SecurityOperationsImpl.java:73)
Can someone tell me what I am doing wrong here? Any help would be appreciated.
EDIT: I am running Accumulo 1.7.0.
A TApplicationException indicates the error occurred on the Accumulo tablet server, rather than in your client (MapReduce) code. You'll need to examine your tablet server logs to get more information about the particular error wherever you see TApplicationException.
Table permissions are usually retrieved from ZooKeeper, so it may indicate a problem with the tserver connecting to ZooKeeper.
Unfortunately, I don't see the hostname or the IP in the stack trace, so you may have to check all the tserver logs to find it.
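To confirm where it breaks, you can also reproduce the failing call outside MapReduce with a plain client. This is a minimal sketch against the 1.7 API; the instance name, ZooKeeper hosts, credentials, and table name are placeholders:

import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.ZooKeeperInstance;
import org.apache.accumulo.core.client.security.tokens.PasswordToken;
import org.apache.accumulo.core.security.TablePermission;

public class PermissionCheck {
    public static void main(String[] args) throws Exception {
        // Placeholders: use the same instance/ZooKeeper/credentials as the job
        Connector conn = new ZooKeeperInstance("instanceName", "zkHost:2181")
                .getConnector("principal", new PasswordToken("password"));
        // The same call that fails inside InputConfigurator.validatePermissions
        boolean canRead = conn.securityOperations()
                .hasTablePermission("principal", "dataTable", TablePermission.READ);
        System.out.println("READ on dataTable: " + canRead);
    }
}

If this throws the same TApplicationException, watch the tserver logs while it runs to catch the server-side stack trace.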
I am getting this error repeatedly while traversing a result set of 100,000 rows. I have used
.withReconnectionPolicy(new ConstantReconnectionPolicy(1000))
.withRetryPolicy(DefaultRetryPolicy.INSTANCE)
.withQueryOptions(new QueryOptions().setFetchSize(2000))
in my connector. Still I am getting this error. Typically it fails after fetching 80,000 rows.
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /172.16.12.143:9042 (com.datastax.driver.core.exceptions.DriverException: Timed out waiting for server response), /172.16.12.141:9042 (com.datastax.driver.core.exceptions.DriverException: Timed out waiting for server response))
at com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:65)
at com.datastax.driver.core.DefaultResultSetFuture.extractCauseFromExecutionException(DefaultResultSetFuture.java:259)
at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:175)
at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:52)
at com.payu.merchantAnalytics.FunnelHourlyUtilCql.groupByHourCql(FunnelHourlyUtilCql.java:87)
at com.payu.merchantAnalytics.FunnelHourlyUtilCql.main(FunnelHourlyUtilCql.java:49)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.spark.deploy.worker.DriverWrapper$.main(DriverWrapper.scala:58)
at org.apache.spark.deploy.worker.DriverWrapper.main(DriverWrapper.scala)
Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /172.16.12.143:9042 (com.datastax.driver.core.exceptions.DriverException: Timed out waiting for server response), /172.16.12.141:9042 (com.datastax.driver.core.exceptions.DriverException: Timed out waiting for server response))
at com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:108)
at com.datastax.driver.core.RequestHandler$1.run(RequestHandler.java:179)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Solved
Thanks Andy Tolbert for the answer. Setting the readTimeoutMillis using SocketOptions.setReadTimeoutMillis(100000) worked.
Now the code looks like this:
SocketOptions socketOptions = new SocketOptions().setReadTimeoutMillis(100000);
Cluster cluster = Cluster.builder()
        .addContactPoints(nodes)
        .withReconnectionPolicy(new ConstantReconnectionPolicy(1000))
        .withRetryPolicy(DefaultRetryPolicy.INSTANCE)
        .withQueryOptions(new QueryOptions().setFetchSize(2000))
        .withSocketOptions(socketOptions)
        .withCredentials(username, password)
        .build();
Thanks A Lot :)
I am trying to insert a blob into a table in an Informix database using a JDBC query. However, I am getting this error:
java.sql.SQLException: Smart-large-object error.
at com.informix.util.IfxErrMsg.getSQLException(IfxErrMsg.java:413)
at com.informix.jdbc.IfxSqli.a(IfxSqli.java:3494)
at com.informix.jdbc.IfxSqli.E(IfxSqli.java:3807)
at com.informix.jdbc.IfxSqli.dispatchMsg(IfxSqli.java:2610)
at com.informix.jdbc.IfxSqli.receiveMessage(IfxSqli.java:2526)
at com.informix.jdbc.IfxSqli.executeCommand(IfxSqli.java:940)
at com.informix.jdbc.IfxResultSet.b(IfxResultSet.java:303)
at com.informix.jdbc.IfxStatement.c(IfxStatement.java:1273)
at com.informix.jdbc.IfxPreparedStatement.executeUpdate(IfxPreparedStatement.java:421)
at etaxarchive.FillDataManager.insertIntoTable(FillDataManager.java:196)
at etaxarchive.FillDataManager.fillTableData(FillDataManager.java:112)
at etaxarchive.ETaxArchiveManager.archiveData(ETaxArchiveManager.java:89)
at etaxarchive.ETaxArchive.main(ETaxArchive.java:33)
Caused by: java.sql.SQLException: ISAM error: Lock Timeout Expired
at com.informix.util.IfxErrMsg.getSQLException(IfxErrMsg.java:413)
at com.informix.jdbc.IfxSqli.E(IfxSqli.java:3812)
... 10 more
Does anybody know how to resolve this problem?
I was getting the BLOB using this line of code:
InputStream binaryStream = rs.getBlob(i).getBinaryStream();
I changed it to this line of code:
InputStream test = rs.getBinaryStream(i);
and I did not get this exception again.
Maybe this was not the real problem, but it works for me.
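For completeness, here is a minimal sketch of draining that stream to a file (java.io classes; the buffer size and output path are just examples):

// A minimal sketch of consuming the stream; the output path is an example.
try (InputStream in = rs.getBinaryStream(i);
     OutputStream out = new FileOutputStream("blob.dat")) {
    byte[] buf = new byte[8192];
    int n;
    while ((n = in.read(buf)) != -1) {
        out.write(buf, 0, n);
    }
}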