I recently updated to MongoDB 2.6.3 via the Ubuntu debs and also switched to the MongoDB Java driver 2.12.2. When I now execute
final MongoClient m = new MongoClient( "localhost" );
DB db = m.getDB( "test" );
System.out.println( db.getName( ) );
System.out.println( db.collectionExists( "Customer" ) );
then "test" is printed, but the collectionExists() call times out:
Exception in thread "main" com.mongodb.MongoTimeoutException: Timed out while waiting to connect after 4996 ms
at com.mongodb.BaseCluster.getDescription(BaseCluster.java:114)
at com.mongodb.DBTCPConnector.getClusterDescription(DBTCPConnector.java:396)
at com.mongodb.DBTCPConnector.getMaxBsonObjectSize(DBTCPConnector.java:641)
at com.mongodb.Mongo.getMaxBsonObjectSize(Mongo.java:641)
at com.mongodb.DBCollectionImpl.find(DBCollectionImpl.java:81)
at com.mongodb.DBCollectionImpl.find(DBCollectionImpl.java:66)
at com.mongodb.DB.getCollectionNames(DB.java:510)
at com.mongodb.DB.collectionExists(DB.java:553)
at com.apiomat.backend.persistence.MongoFacade.main(MongoFacade.java:342)
I can connect to MongoDB via the command-line client and query whatever I want without problems.
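The stack trace shows the wait happening in the driver's cluster discovery (BaseCluster.getDescription) rather than in the query itself. Below is a minimal diagnostic sketch, assuming mongod is still listening on localhost:27017 and using illustrative 15-second timeouts, that pins the server address explicitly, raises the connect/socket timeouts, and repeats the call that fails above:

import com.mongodb.DB;
import com.mongodb.MongoClient;
import com.mongodb.MongoClientOptions;
import com.mongodb.ServerAddress;

public class MongoConnectCheck {
    public static void main(String[] args) {
        MongoClientOptions options = MongoClientOptions.builder()
                .connectTimeout(15000)   // ms, illustrative value
                .socketTimeout(15000)    // ms, illustrative value
                .build();
        MongoClient m = new MongoClient(new ServerAddress("localhost", 27017), options);
        DB db = m.getDB("test");
        // getCollectionNames() is the call underneath collectionExists() in the
        // stack trace above, so it makes a good end-to-end check.
        System.out.println(db.getCollectionNames());
        m.close();
    }
}

If this still times out, it is worth double-checking the bind address and port in the mongod configuration after the upgrade, since the shell and the driver may not be pointing at the same endpoint.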
I have a Grails 2 based application which uses the Tomcat JDBC pool. Recently I have been running into a problem where all the connections in the pool get used up and I start getting:
org.springframework.transaction.CannotCreateTransactionException: Could not open Hibernate Session for transaction; nested exception is org.apache.tomcat.jdbc.pool.PoolExhaustedException: [http-nio-8443-exec-38] Timeout: Pool empty. Unable to fetch a connection in 10 seconds, none available[size:100; busy:100; idle:0; lastwait:10000].; nested exception is org.springframework.transaction.CannotCreateTransactionException: Could not open Hibernate Session for transaction; nested exception is org.apache.tomcat.jdbc.pool.PoolExhaustedException: [http-nio-8443-exec-38] Timeout: Pool empty. Unable to fetch a connection in 10 seconds, none available[size:100; busy:100; idle:0; lastwait:10000].
I have a few queries that require heavy joins and some stored procedures that run for about 2 - 3 minutes; for these I manually get the connection from the dataSource bean:
currentConnection = dataSource.connection
sqlInstance = new Sql(currentConnection)
sqlInstance.execute(query)
sqlInstance.close()
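For comparison, here is a minimal Java sketch of the same pattern with the connection scoped to a try-with-resources block, assuming the leak comes from close() being skipped whenever execute() throws; dataSource and query stand for the bean and SQL used above:

import java.sql.Connection;
import java.sql.Statement;
import javax.sql.DataSource;

class PooledQueryRunner {
    // 'dataSource' and 'query' are placeholders for the bean and SQL above.
    static void runQuery(DataSource dataSource, String query) throws Exception {
        try (Connection connection = dataSource.getConnection();
             Statement statement = connection.createStatement()) {
            statement.execute(query);
        } // the connection is returned to the pool even if execute() throws
    }
}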
I've logged the total number of active connections to stdout and I can see that it keeps rising and never drops; it eventually reaches 100, the maximum allowed, and then I start getting the pool-exhaustion errors. Can anyone give me an idea of what I might be missing or where the connections might be leaking? Here is my connection configuration:
dataSource {
    pooled = true
    driverClassName = "com.mysql.jdbc.Driver"
    url = "jdbc:mysql://something:3306/something?zeroDateTimeBehavior=convertToNull&autoReconnect=true&relaxAutoCommit=true"
    username = "#####"
    password = '#$#$$$$$$$'
    dbCreate = "update"
    properties {
        initialSize = 5
        maxActive = 100
        minIdle = 5
        maxIdle = 25
        maxWait = 10000
        maxAge = 10 * 60000
        timeBetweenEvictionRunsMillis = 5000
        minEvictableIdleTimeMillis = 60000
        validationQuery = "SELECT 1"
        validationInterval = 15000
        testWhileIdle = true
        testOnBorrow = true
        testOnReturn = true
        removeAbandoned = true
        removeAbandonedTimeout = 400
        logAbandoned = true
        jdbcInterceptors = "ConnectionState"
        defaultTransactionIsolation = java.sql.Connection.TRANSACTION_READ_COMMITTED
    }
}
Using: Ubuntu 16.04.3
I'm connecting to a replica set through the MongoDB Java driver v3.4.3 (or 3.5.0). I can create the MongoClient either by providing the IPs or by using the hostnames I have defined in the /etc/hosts file, like so:
new MongoClient(
        Arrays.asList(
                new ServerAddress("X.X.Y.A", 27017),
                new ServerAddress("X.X.Y.B", 27017),
                new ServerAddress("X.X.Y.C", 27017))
);
or
new MongoClient(new MongoClientURI("mongodb://X.X.Y.A:27017/?replicaSet=my-rs"));
or
//myserver1 on /etc/hosts as X.X.Y.A myserver1
new MongoClient(new MongoClientURI("mongodb://myserver1:27017/?replicaSet=my-rs"));
In each of these cases, the driver tries to monitor the replica set members as "mongodb1", "mongodb2" and "mongodb3", keeps logging the INFO message below, and the application never gets past it:
Oct 18, 2017 9:12:13 AM com.mongodb.diagnostics.logging.JULLogger log
INFO: Exception in monitor thread while connecting to server mongodb3:27017
com.mongodb.MongoSocketException: mongodb1
at com.mongodb.ServerAddress.getSocketAddress(ServerAddress.java:188)
at com.mongodb.connection.SocketStreamHelper.initialize(SocketStreamHelper.java:57)
at com.mongodb.connection.SocketStream.open(SocketStream.java:58)
at com.mongodb.connection.InternalStreamConnection.open(InternalStreamConnection.java:115)
at com.mongodb.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:113)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.UnknownHostException: mongodb1
at java.net.InetAddress.getAllByName0(InetAddress.java:1280)
at java.net.InetAddress.getAllByName(InetAddress.java:1192)
at java.net.InetAddress.getAllByName(InetAddress.java:1126)
at java.net.InetAddress.getByName(InetAddress.java:1076)
at com.mongodb.ServerAddress.getSocketAddress(ServerAddress.java:186)
... 5 more
Similar exceptions are thrown for "mongodb2" and "mongodb3".
But when I add the mongodb1 (etc.) entries to the /etc/hosts file, everything works:
X.X.Y.A mongodb1
X.X.Y.B mongodb2
X.X.Y.C mongodb3
This leads me to think that the Mongo driver monitors the N members of the replica set under hard-coded names of the form "mongodbN:port". Is this a bug in the MongoDB Java driver, or just an incredibly unhelpful way of doing it?
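One way to see where those names come from, assuming a single reachable seed at X.X.Y.A: ask a member which hostnames its replica set configuration advertises via the isMaster command, since the driver monitors whatever appears in that reply's hosts list rather than the seed addresses you pass in:

import com.mongodb.MongoClient;
import com.mongodb.ServerAddress;
import org.bson.Document;

public class ReplicaSetHostsCheck {
    public static void main(String[] args) {
        // X.X.Y.A is the placeholder address used above.
        MongoClient client = new MongoClient(new ServerAddress("X.X.Y.A", 27017));
        try {
            Document isMaster = client.getDatabase("admin")
                                      .runCommand(new Document("isMaster", 1));
            // The driver monitors the members listed here, by these exact names.
            System.out.println(isMaster.get("hosts"));
        } finally {
            client.close();
        }
    }
}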
I'm trying to access a remote Cassandra cluster using Spark in Java. However, when I try to execute an aggregation function (count), the following error occurs:
Exception in thread "main" com.datastax.driver.core.exceptions.TransportException: [/192.168.1.103:9042] Connection has been closed
at com.datastax.driver.core.exceptions.TransportException.copy(TransportException.java:38)
at com.datastax.driver.core.exceptions.TransportException.copy(TransportException.java:24)
at com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:245)
I have already set the timeouts in cassandra.yaml to a large value.
Here is my code:
SparkConf conf = new SparkConf();
conf.setAppName("Test");
conf.setMaster("local[*]");
conf.set("spark.cassandra.connection.host", "host");
Spark app = new Spark(conf);
app.run();
// ...
CassandraConnector connector = CassandraConnector.apply(sc.getConf());
// Prepare the schema
try (Session session = connector.openSession()) {
    session.execute("USE keyspace0");
    ResultSet results = session.execute("SELECT count(*) FROM table0");
}
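A hedged sketch of the usual first mitigation when a CQL count(*) over a whole table outlives the request timeout: raise the connector's connection and read timeouts when building the SparkConf. The property names below are assumed from the Spark Cassandra Connector's configuration reference and the values are arbitrary, so verify both against the connector version in use:

import org.apache.spark.SparkConf;

public class TimeoutConf {
    // Timeout property names and values are assumptions; check the connector docs.
    static SparkConf build() {
        return new SparkConf()
                .setAppName("Test")
                .setMaster("local[*]")
                .set("spark.cassandra.connection.host", "host")
                .set("spark.cassandra.connection.timeout_ms", "30000")   // connect timeout
                .set("spark.cassandra.read.timeout_ms", "300000");       // per-request read timeout
    }
}

Some connector versions also expose a distributed cassandraCount() on the table RDD, which avoids funnelling the whole count through a single coordinator.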
I have 2 Cassandra nodes with replication_factor=2. I am trying to run select().all() from my code and I used setFetchSize(50000). When I start iterating over the result, after some time it throws a ReadTimeoutException: "Cassandra timeout during read query at consistency ONE (1 responses were required but only 0 replica responded)". Could anyone please give me some suggestions?
I am creating the cluster using the code below:
PoolingOptions poolingOptions = new PoolingOptions();
poolingOptions.setCoreConnectionsPerHost(HostDistance.LOCAL, 52)
              .setMaxConnectionsPerHost(HostDistance.LOCAL, 80)
              .setMaxRequestsPerConnection(HostDistance.LOCAL, 500);
SocketOptions socketOption = new SocketOptions();
socketOption.setReadTimeoutMillis(600000)
            .setReceiveBufferSize(1024 * 512)
            .setSendBufferSize(1024 * 512)
            .setKeepAlive(true).setConnectTimeoutMillis(1800000);
cluster = Cluster.builder()
        .addContactPoints(cassandraHosts.get("HOST_1"), cassandraHosts.get("HOST_2"))
        .withPoolingOptions(poolingOptions)
        .withPort(cassandraPort)
        .withSocketOptions(socketOption)
        .withLoadBalancingPolicy(new TokenAwarePolicy(new DCAwareRoundRobinPolicy()))
        .build();
Session session = cluster.connect(cassandraDB);
Cassandra version: 2.2.1
Java 7
Is there any other way to execute a select-all query without hitting the read timeout exception?
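For reference, a minimal sketch of the paging approach that usually avoids this: keep the select-all but use a much smaller fetch size, so that each page stays well under the server's read timeout, and let the driver page lazily while you iterate. The keyspace and table names are placeholders, and session is the Session created above:

import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Statement;
import com.datastax.driver.core.querybuilder.QueryBuilder;

Statement stmt = QueryBuilder.select().all()
        .from("my_keyspace", "my_table")   // placeholder names
        .setFetchSize(1000);               // small pages instead of 50000
ResultSet rs = session.execute(stmt);
for (Row row : rs) {                       // the driver fetches the next page lazily
    // process(row) ...
}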
I am not able to connect to HBase through the Apache Phoenix driver.
Env info:
hadoop-2.6.0
hbase-0.98.9-hadoop2
phoenix-4.1.0-server-hadoop2 (kept on all region servers)
phoenix-4.1.0-client-hadoop2 (using this jar to create a JDBC connection)
On the Java client side, I am getting this exception:
Caused by: org.apache.phoenix.exception.PhoenixIOException: org.apache.hadoop.hbase.DoNotRetryIOException: java.io.IOException: Class org.apache.phoenix.coprocessor.MetaDataRegionObserver cannot be loaded
at ...
Caused by: java.io.IOException: Class org.apache.phoenix.coprocessor.MetaDataRegionObserver cannot be loaded
at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.testTableCoprocessorAttrs...
In the HBase master node logs I am getting this error:
2015-02-02 12:48:11,550 DEBUG [FifoRpcScheduler.handler1-thread-14] util.FSTableDescriptors: Exception during readTableDecriptor. Current table name = SYSTEM.CATALOG
org.apache.hadoop.hbase.TableInfoMissingException: No table descriptor file under hdfs://HadoopNode:9000/home/hduser/Data/hbase/data/default/SYSTEM.CATALOG
Code which I am using to create the Phoenix connection:
String zkQuorum = "HbaseMasterNode:2222";
try
{
    Class.forName("org.apache.phoenix.jdbc.PhoenixDriver");
    String connectionURL = "jdbc:phoenix:" + zkQuorum;
    Connection connection = DriverManager.getConnection(connectionURL);
    System.out.println(connection);
}
catch (Exception e)
{
    throw new IllegalArgumentException("Create phoenix connection(" + zkQuorum + ") throw exception", e);
}
With the basic HBase Java APIs I am able to connect; I face this issue only if I try to use the Phoenix driver for HBase.
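For context, here is a minimal sketch of the kind of plain-HBase check meant by that last sentence, assuming the same ZooKeeper quorum as in the Phoenix URL; it confirms that the cluster is reachable outside Phoenix and whether the SYSTEM.CATALOG table from the master log exists at all:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class HBaseReachabilityCheck {
    public static void main(String[] args) throws Exception {
        // Same ZooKeeper quorum as the Phoenix URL above (assumption).
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "HbaseMasterNode");
        conf.set("hbase.zookeeper.property.clientPort", "2222");
        HBaseAdmin admin = new HBaseAdmin(conf);
        // SYSTEM.CATALOG is the Phoenix metadata table whose descriptor the
        // master log above reports as missing.
        System.out.println(admin.tableExists("SYSTEM.CATALOG"));
        admin.close();
    }
}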