Java BerkeleyDB: remove a couple of records

How can I remove a couple of records in one transaction?
Configs:
EnvironmentConfig myEnvConfig = new EnvironmentConfig();
StoreConfig storeConfig = new StoreConfig();
myEnvConfig.setReadOnly(readOnly);
storeConfig.setReadOnly(readOnly);
// If the environment is opened for write, then we want to be
// able to create the environment and entity store if
// they do not exist.
myEnvConfig.setAllowCreate(!readOnly);
storeConfig.setAllowCreate(!readOnly);
// Allow transactions if we are writing to the store.
myEnvConfig.setTransactional(!readOnly);
storeConfig.setTransactional(!readOnly);
// Open the environment and entity store
bklEnv = new Environment(envHome, myEnvConfig);
//bklEnv.openDatabase(null, envHome.getAbsolutePath(), myEnvConfig);
bklstore = new EntityStore(bklEnv, entryStore, storeConfig);
Old data is cleared cyclically. Here we clear data by taking firstKey() from the index:
public void clearOldDBData(Integer maxCount) throws DatabaseException {
    TransactionConfig config = new TransactionConfig();
    config.setReadUncommitted(true);
    Transaction txn = berkeleyDbEnv.getBklEnv().beginTransaction(null, config);
    txn.setTxnTimeout(1000);
    Long keyV = null;
    try {
        PrimaryIndex<Long, MemoryBTB> memoryBTBIndex =
            berkeleyDbEnv.getBklstore().getPrimaryIndex(Long.class, MemoryBTB.class);
        if (!memoryBTBIndex.sortedMap().isEmpty() && memoryBTBIndex.sortedMap().keySet().size() > maxCount) {
            for (int i = 0; i < memoryBTBIndex.sortedMap().keySet().size() - maxCount; i++) {
                log.trace(BERKELEYDB_CLEAR_DATA);
                System.out.println("**************************************************");
                PrimaryIndex<Long, MemoryBTB> memoryBTBIndexInternal =
                    berkeleyDbEnv.getBklstore().getPrimaryIndex(Long.class, MemoryBTB.class);
                memoryBTBIndexInternal.delete(txn, memoryBTBIndexInternal.sortedMap().firstKey());
            }
        }
        txn.commit();
        System.out.println("+++++++++++++++++++++++++++++++++++++++++++++++++++");
    } catch (DatabaseException dbe) {
        // try the delete one more time
        try {
            Thread.sleep(100);
            dataAccessor.getMemoryBTB().delete(txn, keyV);
            txn.commit();
        } catch (DatabaseException dbeInternal) {
            log.trace(String.format(TXN_ABORT, dbeInternal.getMessage()));
            txn.abort();
        } catch (InterruptedException e) {
            e.printStackTrace();
            throw dbe;
        }
    }
}
Stacktrace:
[12/12 10:35:20] - TRACE - BerkeleyRepository - Berkeley DB clear data
**************************************************
[12/12 10:35:20] - TRACE - BerkeleyRepository - Berkeley DB clear data
**************************************************
[12/12 10:35:21] - TRACE - MemService - Berkeley DB JSON produce error: (JE 3.3.75) Lock expired. Locker 7752330 -1_Thread-295_ThreadLocker: waited for lock on database=persist#MemoryEntityStore#com.company.memcheck.persists.MemoryBTB LockAddr:1554328 node=333 type=READ grant=WAIT_NEW timeoutMillis=500 startTime=1386862520718 endTime=1386862521218
Owners: [<LockInfo locker="31510392 17395_Thread-295_Txn" type="WRITE"/>]
Waiters: []
com.sleepycat.util.RuntimeExceptionWrapper: (JE 3.3.75) Lock expired. Locker 7752330 -1_Thread-295_ThreadLocker: waited for lock on database=persist#MemoryEntityStore#com.company.memcheck.persists.MemoryBTB LockAddr:1554328 node=333 type=READ grant=WAIT_NEW timeoutMillis=500 startTime=1386862520718 endTime=1386862521218
Owners: [<LockInfo locker="31510392 17395_Thread-295_Txn" type="WRITE"/>]
Waiters: []
at com.sleepycat.collections.StoredContainer.convertException(StoredContainer.java:466)
at com.sleepycat.collections.StoredSortedMap.getFirstOrLastKey(StoredSortedMap.java:216)
at com.sleepycat.collections.StoredSortedMap.firstKey(StoredSortedMap.java:185)
at com.company.memcheck.repository.BerkeleyRepositoryImpl.clearOldDBData(BerkeleyRepositoryImpl.java:142)
at com.company.memcheck.service.MemServiceImpl.removeOldData(MemServiceImpl.java:305)
at com.company.memcheck.service.MemServiceImpl.access$3(MemServiceImpl.java:299)
at com.company.memcheck.service.MemServiceImpl$2.run(MemServiceImpl.java:129)
at java.lang.Thread.run(Thread.java:662)
As you can see, only one iteration of the loop (marked by the "*******" line) succeeded, and the rest failed.
Should I use a cursor for this?
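One way to delete several records in a single transaction is to walk the primary index with an EntityCursor instead of calling sortedMap().firstKey() and delete() repeatedly. The trace suggests why the loop stalls: the waiter is a non-transactional ThreadLocker (sortedMap().firstKey() does not use your transaction), and it blocks on a WRITE lock held by the very transaction doing the deletes, until the 500 ms lock timeout expires. Below is a minimal, untested sketch against the DPL API used above; env and store stand for berkeleyDbEnv.getBklEnv() and berkeleyDbEnv.getBklstore() from the question, and you may also need a larger lock timeout than the default 500 ms (configurable on EnvironmentConfig).

import com.sleepycat.je.DatabaseException;
import com.sleepycat.je.Environment;
import com.sleepycat.je.Transaction;
import com.sleepycat.persist.EntityCursor;
import com.sleepycat.persist.EntityStore;
import com.sleepycat.persist.PrimaryIndex;

// Sketch only: delete the oldest (count - maxCount) entries in one transaction.
public void clearOldDbDataWithCursor(Environment env, EntityStore store, int maxCount)
        throws DatabaseException {
    PrimaryIndex<Long, MemoryBTB> index =
            store.getPrimaryIndex(Long.class, MemoryBTB.class);
    Transaction txn = env.beginTransaction(null, null);
    EntityCursor<MemoryBTB> cursor = null;
    try {
        long toDelete = index.count() - maxCount;   // how many of the oldest entries to drop
        cursor = index.entities(txn, null);         // cursor iterates in primary-key order
        while (toDelete-- > 0 && cursor.next() != null) {
            cursor.delete();                        // delete the record the cursor is positioned on
        }
        cursor.close();
        cursor = null;
        txn.commit();                               // all deletes become visible together
    } catch (DatabaseException e) {
        if (cursor != null) {
            cursor.close();                         // cursors must be closed before abort
        }
        txn.abort();
        throw e;
    }
}

Note that all the deletes then hold their write locks until commit, so concurrent readers using the default 500 ms lock timeout can still time out; deleting in smaller transactional batches is a common compromise.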

Related

Spring application's timer threads keep increasing

I have some Java app using Spring Batch. I've got a table used as a queue which contains information on jobs that were requested by clients (as a client requests for a task to be executed, a row is added to this queue).
In one of my classes, a while loop runs until someone deactivates a flag:
protected void runJobLaunchingLoop() {
    while (!isTerminated()) {
        try {
            if (isActivated()) {
                QueueEntryDTO queueEntry = dequeueJobEntry();
                launchJob(queueEntry);
            }
        }
        catch (EmptyQueueException ignored) {}
        catch (Exception exception) {
            logger.error("There was a problem while de-queuing a job ('" + exception.getMessage() + "').");
        }
        finally {
            pauseProcessor();
        }
    }
}
The pauseProcessor() method calls Thread.sleep(). When I run this app in a Docker container, it looks like the number of threads run by the application keeps increasing. The threads are named "Timer-X", where X is an integer that auto-increments.
I looked at the stack trace of one of these:
"Timer-14" - Thread t#128
java.lang.Thread.State: WAITING
at java.base#11.0.6/java.lang.Object.wait(Native Method)
- waiting on <25e60c31> (a java.util.TaskQueue)
at java.base#11.0.6/java.lang.Object.wait(Unknown Source)
at java.base#11.0.6/java.util.TimerThread.mainLoop(Unknown Source)
- locked <25e60c31> (a java.util.TaskQueue)
at java.base#11.0.6/java.util.TimerThread.run(Unknown Source)
Locked ownable synchronizers:
- None
Any idea what could be causing this? When I run the app locally from IntelliJ rather than in a container, the problem does not seem to occur, though I'm not certain, because it sometimes takes a while before the thread count starts increasing.
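For background on the symptom (this is illustrative code, not the asker's): every new java.util.Timer() starts a dedicated "Timer-N" thread, and that thread stays alive as long as the Timer object remains reachable and cancel() is never called. A minimal sketch of how such threads pile up, and how to list them:

import java.util.ArrayList;
import java.util.List;
import java.util.Timer;
import java.util.TimerTask;

public class TimerLeakDemo {
    // Keeping each Timer reachable (e.g. cached in a field) without ever calling
    // cancel() keeps its "Timer-N" worker thread alive indefinitely.
    private static final List<Timer> LEAKED = new ArrayList<>();

    public static void main(String[] args) throws Exception {
        for (int i = 0; i < 5; i++) {
            Timer timer = new Timer();               // spawns a thread named "Timer-<n>"
            timer.schedule(new TimerTask() {
                @Override
                public void run() { /* no-op */ }
            }, 1_000L);
            LEAKED.add(timer);                       // never cancelled -> thread never exits
        }
        Thread.sleep(2_000L);
        // List the leaked timer threads, which sit in WAITING just like the dump above.
        Thread.getAllStackTraces().keySet().stream()
              .filter(t -> t.getName().startsWith("Timer-"))
              .forEach(t -> System.out.println(t.getName() + " is still alive"));
    }
}

If a library creates a Timer per connection or per query timeout and never cancels it, the application's thread count grows exactly like this, which matches the JCC driver issue described in the answer below.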
EDIT : Some relevant code ...
protected QueueEntryDTO dequeueJobEntry() {
    Collection<QueueEntryDTO> collection = getQueueService().dequeueEntry();
    if (collection.isEmpty())
        throw new EmptyQueueException();
    return collection.iterator().next();
}

@Transactional
public Collection<QueueEntryDTO> dequeueEntry() {
    Optional<QueueEntry> optionalEntry = this.queueEntryDAO.findTopByStatusCode(QueueStatusEnum.WAITING.getStatusCode());
    if (optionalEntry.isPresent()) {
        QueueEntry entry = (QueueEntry) optionalEntry.get();
        QueueEntry updatedEntry = this.saveEntryStatus(entry, QueueStatusEnum.PROCESSING, (String) null);
        return Collections.singleton(this.queueEntryDTOMapper.toDTO(updatedEntry));
    } else {
        return new ArrayList<>();
    }
}

private void pauseProcessor() {
    try {
        Long sleepDuration = generalProperties.getQueueProcessingSleepDuration();
        sleepDuration = Objects.requireNonNullElseGet(
                sleepDuration,
                () -> Double.valueOf(Math.pow(2.0, getRetries()) * 1000.0).longValue());
        Thread.sleep(sleepDuration);
        if (getRetries() < 4)
            setRetries(getRetries() + 1);
    }
    catch (Exception ignored) {
        logger.warn("Failed to pause job queue processor.");
    }
}
It seems like this was caused by a bug that was resolved in a more recent version of DB2 than I was using.
Applications are getting large number of timer threads when API
timerLevelforQueryTimeout value is not set explicitly in an
application using JCC driver version 11.5 GA (JCC 4.26.14) or
later.
This issue is fixed in 11.5 M4 FP0(JCC 4.27.25).
I updated the version to a newer one (11.5.6) in my POM file, but this didn't fix the issue. It turned out my K8s pod was still using 11.5.0 and Maven acted weird. I then applied this technique (using dependencyManagement in the POM file) and the newer version was loaded.
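A quick way to confirm which JCC build actually ends up on the runtime classpath inside the container (a hypothetical check, not from the original post; only the driver class name com.ibm.db2.jcc.DB2Driver is taken as given) is to print the jar location and implementation version at startup:

// Hypothetical runtime check: print where the DB2 JCC driver class was loaded from
// and which version it reports, to catch a stale jar baked into the container image.
public final class JccVersionCheck {
    public static void main(String[] args) throws ClassNotFoundException {
        Class<?> driver = Class.forName("com.ibm.db2.jcc.DB2Driver");
        System.out.println("Loaded from: "
                + driver.getProtectionDomain().getCodeSource().getLocation());
        Package pkg = driver.getPackage();
        System.out.println("Implementation version: "
                + (pkg != null ? pkg.getImplementationVersion() : "unknown"));
    }
}

Running something like this inside the pod should show which jar is really loaded, even when the POM says otherwise.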

Spark DataFrame java.lang.OutOfMemoryError: GC overhead limit exceeded on long loop run

I'm running a Spark application (Spark 1.6.3 cluster), which does some calculations on 2 small data sets, and writes the result into an S3 Parquet file.
Here is my code:
public void doWork(JavaSparkContext sc, Date writeStartDate, Date writeEndDate, String[] extraArgs) throws Exception {
    SQLContext sqlContext = new org.apache.spark.sql.SQLContext(sc);
    S3Client s3Client = new S3Client(ConfigTestingUtils.getBasicAWSCredentials());
    boolean clearOutputBeforeSaving = false;
    if (extraArgs != null && extraArgs.length > 0) {
        if (extraArgs[0].equals("clearOutput")) {
            clearOutputBeforeSaving = true;
        } else {
            logger.warn("Unknown param " + extraArgs[0]);
        }
    }
    Date currRunDate = new Date(writeStartDate.getTime());
    while (currRunDate.getTime() < writeEndDate.getTime()) {
        try {
            SparkReader<FirstData> sparkReader = new SparkReader<>(sc);
            JavaRDD<FirstData> data1 = sparkReader.readDataPoints(
                    inputDir,
                    currRunDate,
                    getMinOfEndDateAndNextDay(currRunDate, writeEndDate));
            // Normalize to 1 hours & 0.25 degrees
            JavaRDD<FirstData> distinctData1 = data1.distinct();
            // Floor all (distinct) values to 6 hour windows
            JavaRDD<FirstData> basicData1BySixHours = distinctData1.map(d1 -> new FirstData(
                    d1.getId(),
                    TimeUtils.floorTimePerSixHourWindow(d1.getTimeStamp()),
                    d1.getLatitude(),
                    d1.getLongitude()));
            // Convert Data1 to Dataframes
            DataFrame data1DF = sqlContext.createDataFrame(basicData1BySixHours, FirstData.class);
            data1DF.registerTempTable("data1");
            // Read Data2 DataFrame
            String currDateString = TimeUtils.getSimpleDailyStringFromDate(currRunDate);
            String inputS3Path = basedirInput + "/dt=" + currDateString;
            DataFrame data2DF = sqlContext.read().parquet(inputS3Path);
            data2DF.registerTempTable("data2");
            // Join data1 and data2
            DataFrame mergedDataDF = sqlContext.sql("SELECT D1.Id,D2.beaufort,COUNT(1) AS hours " +
                    "FROM data1 as D1,data2 as D2 " +
                    "WHERE D1.latitude=D2.latitude AND D1.longitude=D2.longitude AND D1.timeStamp=D2.dataTimestamp " +
                    "GROUP BY D1.Id,D1.timeStamp,D1.longitude,D1.latitude,D2.beaufort");
            // Create histogram per ID
            JavaPairRDD<String, Iterable<Row>> mergedDataRows = mergedDataDF.toJavaRDD().groupBy(md -> md.getAs("Id"));
            JavaRDD<MergedHistogram> mergedHistogram = mergedDataRows.map(new MergedHistogramCreator());
            logger.info("Number of data1 results: " + data1DF.select("lId").distinct().count());
            logger.info("Number of coordinates with data: " + data1DF.select("longitude","latitude").distinct().count());
            logger.info("Number of results with beaufort histograms: " + mergedDataDF.select("Id").distinct().count());
            // Save to parquet
            String outputS3Path = basedirOutput + "/dt=" + TimeUtils.getSimpleDailyStringFromDate(currRunDate);
            if (clearOutputBeforeSaving) {
                writeWithCleanup(outputS3Path, mergedHistogram, MergedHistogram.class, sqlContext, s3Client);
            } else {
                write(outputS3Path, mergedHistogram, MergedHistogram.class, sqlContext);
            }
        } finally {
            TimeUtils.progressToNextDay(currRunDate);
        }
    }
}

public void write(String outputS3Path, JavaRDD<MergedHistogram> outputRDD, Class outputClass, SQLContext sqlContext) {
    // Apply a schema to an RDD of JavaBeans and save it as Parquet.
    DataFrame fullDataDF = sqlContext.createDataFrame(outputRDD, outputClass);
    fullDataDF.write().parquet(outputS3Path);
}

public void writeWithCleanup(String outputS3Path, JavaRDD<MergedHistogram> outputRDD, Class outputClass,
                             SQLContext sqlContext, S3Client s3Client) {
    String fileKey = S3Utils.getS3Key(outputS3Path);
    String bucket = S3Utils.getS3Bucket(outputS3Path);
    logger.info("Deleting existing dir: " + outputS3Path);
    s3Client.deleteAll(bucket, fileKey);
    write(outputS3Path, outputRDD, outputClass, sqlContext);
}

public Date getMinOfEndDateAndNextDay(Date startTime, Date proposedEndTime) {
    long endOfDay = startTime.getTime() - startTime.getTime() % MILLIS_PER_DAY + MILLIS_PER_DAY;
    if (endOfDay < proposedEndTime.getTime()) {
        return new Date(endOfDay);
    }
    return proposedEndTime;
}
The size of data1 is around 150,000 and data2 is around 500,000.
My code basically does some data manipulation, merges the two data sets, does a bit more manipulation, prints some statistics, and saves the result to Parquet.
Spark has 25GB of memory per server, and the code runs fine.
Each iteration takes about 2-3 minutes.
The problem starts when I run it on a large set of dates.
After a while, I get an OutOfMemoryError:
java.lang.OutOfMemoryError: GC overhead limit exceeded
at scala.collection.immutable.List.$colon$colon$colon(List.scala:127)
at org.json4s.JsonDSL$JsonListAssoc.$tilde(JsonDSL.scala:98)
at org.apache.spark.util.JsonProtocol$.taskEndToJson(JsonProtocol.scala:139)
at org.apache.spark.util.JsonProtocol$.sparkEventToJson(JsonProtocol.scala:72)
at org.apache.spark.scheduler.EventLoggingListener.logEvent(EventLoggingListener.scala:144)
at org.apache.spark.scheduler.EventLoggingListener.onTaskEnd(EventLoggingListener.scala:164)
at org.apache.spark.scheduler.SparkListenerBus$class.onPostEvent(SparkListenerBus.scala:42)
at org.apache.spark.scheduler.LiveListenerBus.onPostEvent(LiveListenerBus.scala:31)
at org.apache.spark.scheduler.LiveListenerBus.onPostEvent(LiveListenerBus.scala:31)
at org.apache.spark.util.ListenerBus$class.postToAll(ListenerBus.scala:55)
at org.apache.spark.util.AsynchronousListenerBus.postToAll(AsynchronousListenerBus.scala:38)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(AsynchronousListenerBus.scala:87)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(AsynchronousListenerBus.scala:72)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(AsynchronousListenerBus.scala:72)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1.apply$mcV$sp(AsynchronousListenerBus.scala:71)
at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1181)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1.run(AsynchronousListenerBus.scala:70)
Last time it ran, it crashed after 233 iterations.
The line it crashed on was this:
logger.info("Number of coordinates with data: " + data1DF.select("longitude","latitude").distinct().count());
Can anyone please tell me what could be the reason for the eventual crashes?
I'm not sure that everyone will find this solution viable, but upgrading the Spark cluster to 2.2.0 seems to have resolved the issue.
I have run my application for several days now and have had no crashes yet.
This error occurs when GC takes up over 98% of the total execution time of the process. You can monitor the GC time in your Spark Web UI by going to the Stages tab at http://master:4040.
Try increasing the driver/executor memory (whichever is generating this error) using spark.{driver/executor}.memory, passed with --conf when submitting the Spark application.
Another thing to try is to change the garbage collector that the JVM is using. Read this article for that: https://databricks.com/blog/2015/05/28/tuning-java-garbage-collection-for-spark-applications.html. It explains very clearly why the GC overhead error occurs and which garbage collector is best for your application.
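As a rough illustration of those two suggestions (the class name, app name, and memory values below are placeholders, not recommendations): executor memory and the executor GC can be set on a SparkConf before the context is created, while driver memory generally has to be supplied on the spark-submit command line because the driver JVM is already running by the time application code executes.

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public final class SubmitConfigSketch {
    public static void main(String[] args) {
        // Illustrative values only. spark.executor.memory takes effect when set
        // before the SparkContext is created; spark.driver.memory usually needs to be
        // passed at submit time instead (e.g. --conf spark.driver.memory=8g).
        SparkConf conf = new SparkConf()
                .setAppName("histogram-job")                                 // placeholder name
                .set("spark.executor.memory", "8g")                          // placeholder size
                .set("spark.executor.extraJavaOptions", "-XX:+UseG1GC");     // G1, as in the linked article
        JavaSparkContext sc = new JavaSparkContext(conf);
        try {
            // ... run doWork(sc, ...) as in the question ...
        } finally {
            sc.stop();
        }
    }
}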

Kafka 0.9-Java : consumer skipping offsets during application restart

I have a Java application with the properties below:
kafkaProperties = new Properties();
kafkaProperties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaBrokersList);
kafkaProperties.put(ConsumerConfig.GROUP_ID_CONFIG, consumerGroupName);
kafkaProperties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
kafkaProperties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
kafkaProperties.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, consumerSessionTimeoutMs);
kafkaProperties.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, maxPartitionFetchBytes);
kafkaProperties.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
I've created 15 consumer threads and let them process the runnable below. I don't have any other consumer consuming with this consumer group name.
@Override
public void run() {
    try {
        logger.info("Starting ConsumerWorker, consumerId={}", consumerId);
        consumer.subscribe(Arrays.asList(kafkaTopic), offsetLoggingCallback);
        while (true) {
            boolean isPollFirstRecord = true;
            logger.debug("consumerId={}; about to call consumer.poll() ...", consumerId);
            ConsumerRecords<String, String> records = consumer.poll(pollIntervalMs);
            Map<Integer, Long> partitionOffsetMap = new HashMap<>();
            for (ConsumerRecord<String, String> record : records) {
                if (isPollFirstRecord) {
                    isPollFirstRecord = false;
                    logger.info("Start offset for partition {} in this poll : {}", record.partition(), record.offset());
                }
                messageProcessor.processMessage(record.value(), record.offset());
                partitionOffsetMap.put(record.partition(), record.offset());
            }
            if (!records.isEmpty()) {
                logger.info("Invoking commit for partition/offset : {}", partitionOffsetMap);
                consumer.commitAsync(offsetLoggingCallback);
            }
        }
    } catch (WakeupException e) {
        logger.warn("ConsumerWorker [consumerId={}] got WakeupException - exiting ... Exception: {}",
                consumerId, e.getMessage());
    } catch (Exception e) {
        logger.error("ConsumerWorker [consumerId={}] got Exception - exiting ... Exception: {}",
                consumerId, e.getMessage());
    } finally {
        logger.warn("ConsumerWorker [consumerId={}] is shutting down ...", consumerId);
        consumer.close();
    }
}
I also have an OffsetCommitCallbackImpl like the one below. It basically maintains the partitions and their committed offsets as a map. It also logs whenever an offset is committed.
@Override
public void onComplete(Map<TopicPartition, OffsetAndMetadata> offsets, Exception exception) {
    if (exception == null) {
        offsets.forEach((topicPartition, offsetAndMetadata) -> {
            partitionOffsetMap.put(topicPartition, offsetAndMetadata);
            logger.info("Offset position during the commit for consumerId : {}, partition : {}, offset : {}",
                    Thread.currentThread().getName(), topicPartition.partition(), offsetAndMetadata.offset());
        });
    } else {
        offsets.forEach((topicPartition, offsetAndMetadata) ->
                logger.error("Offset commit error, and partition offset info : {}, partition : {}, offset : {}",
                        exception.getMessage(), topicPartition.partition(), offsetAndMetadata.offset()));
    }
}
Problem/Issue:
I noticed that I miss events/messages whenever I bring the application down and back up (restart). When I looked closely at the logging and compared the offsets committed before shutdown (using the OffsetCommitCallback logging) with the offsets picked up for processing after restart, I saw that for certain partitions we did not resume from the offset where we left off before shutdown. Sometimes the start offsets for certain partitions are about 1000 higher than the committed offsets.
NOTE: This happens to about 8 out of 40 partitions.
If you look closely at the logging in the run method, there is one log statement where I print the offset before invoking the async commit. For example, if the last such log before shutdown shows 10 for partition 1, then after restart the first offset we process for partition 1 is something like 100, and I validated that we are missing exactly 90 messages.
Can anyone think of a reason why this would be happening?
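Not a diagnosis, but a defensive pattern worth comparing against: commit explicit per-partition offsets (last processed offset + 1) and issue one final synchronous commit when the worker is asked to shut down, so nothing depends on an async commit that may still be in flight when the process exits. A rough sketch under those assumptions; names such as ExplicitCommitLoop, shutdown, and process() are placeholders for the question's fields, not the original code.

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicBoolean;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.errors.WakeupException;

// Sketch: track the exact offsets that were processed and commit them explicitly,
// finishing with a synchronous commit on shutdown.
class ExplicitCommitLoop implements Runnable {
    private final KafkaConsumer<String, String> consumer;
    private final String topic;
    private final AtomicBoolean shutdown = new AtomicBoolean(false);

    ExplicitCommitLoop(KafkaConsumer<String, String> consumer, String topic) {
        this.consumer = consumer;
        this.topic = topic;
    }

    @Override
    public void run() {
        Map<TopicPartition, OffsetAndMetadata> toCommit = new HashMap<>();
        try {
            consumer.subscribe(Collections.singletonList(topic));
            while (!shutdown.get()) {
                ConsumerRecords<String, String> records = consumer.poll(1000L);
                for (ConsumerRecord<String, String> record : records) {
                    process(record);
                    // Commit the *next* offset to read, i.e. last processed + 1.
                    toCommit.put(new TopicPartition(record.topic(), record.partition()),
                            new OffsetAndMetadata(record.offset() + 1));
                }
                if (!toCommit.isEmpty()) {
                    consumer.commitAsync(toCommit, null); // async during normal operation
                }
            }
        } catch (WakeupException ignored) {
            // expected when stop() calls consumer.wakeup()
        } finally {
            try {
                if (!toCommit.isEmpty()) {
                    consumer.commitSync(toCommit); // block until the final offsets are stored
                }
            } finally {
                consumer.close();
            }
        }
    }

    void stop() {                // call from a shutdown hook
        shutdown.set(true);
        consumer.wakeup();       // breaks out of a blocking poll()
    }

    private void process(ConsumerRecord<String, String> record) {
        // placeholder for messageProcessor.processMessage(...)
    }
}

Comparing the final commitSync log against the first offsets seen after restart should at least show whether the gap is created on the commit side or on the restart side (for example by auto.offset.reset kicking in when the group's offsets are not found).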

Grails groovy too many hibernate connections

I am struggling with manual transaction management. Background: I need to run Quartz crons which run batch processes. For batch processing it is recommended to decide manually when to flush to the DB, so as not to slow down the application too much.
I have a pooled Hibernate data source configured as follows:
dataSource {
    pooled = true
    driverClassName = "com.mysql.jdbc.Driver"
    dialect = "org.hibernate.dialect.MySQL5InnoDBDialect"
    properties {
        maxActive = 50
        maxIdle = 25
        minIdle = 1
        initialSize = 1
        minEvictableIdleTimeMillis = 60000
        timeBetweenEvictionRunsMillis = 60000
        numTestsPerEvictionRun = 3
        maxWait = 10000
        testOnBorrow = true
        testWhileIdle = true
        testOnReturn = false
        validationQuery = "SELECT 1"
        validationQueryTimeout = 3
        validationInterval = 15000
        jmxEnabled = true
        maxAge = 10 * 60000
        // http://tomcat.apache.org/tomcat-7.0-doc/jdbc-pool.html#JDBC_interceptors
        jdbcInterceptors = "ConnectionState;StatementCache(max=200)"
    }
}
hibernate {
    cache.use_second_level_cache = false
    cache.use_query_cache = false
    cache.region.factory_class = 'net.sf.ehcache.hibernate.EhCacheRegionFactory'
    show_sql = false
    logSql = false
}
The cron job calls a service; in the service I do the following:
for (int g = 0; g < checkResults.size(); g++) {
    def tmpSearchTerm = SearchTerm.findById((int) results[g + i][0])
    tmpSearchTerm.count = ((String) checkResults[g]).toInteger()
    batch.add(tmpSearchTerm)
}
// increase counter
i += requestSizeTMP
if (i % (requestSize * 4) == 0 || i + 1 == results.size()) {
    println "PREPARATION TO WRITE:" + i
    SearchTerm.withSession {
        def tx = session.beginTransaction()
        for (SearchTerm s : batch) {
            s.save()
        }
        batch.clear()
        tx.commit()
        println ">>>>>>>>>>>>>>>>>>>>>writing: ${i}<<<<<<<<<<<<<<<<<<<<<<"
    }
    session.flush()
    session.clear()
}
} // end of enclosing loop (not shown above)
So I am adding things to a batch until I have enough (4x the request size, or I hit the last item) and then I try to write the batch to the DB.
Everything works fine, but somehow the code seems to open Hibernate transactions and not close them. I don't really understand why, but I get a hard error and Tomcat crashes with "too many connections". I have two problems with this that I do not understand:
1) If the dataSource is pooled and maxActive is 50, how can I get a "too many connections" error if the Tomcat limit is 500?
2) How do I explicitly terminate the transaction so that I do not have so many open connections?
You can use withTransaction, because it will manage the transaction for you.
For example:
Account.withTransaction { status ->
    def source = Account.get(params.from)
    def dest = Account.get(params.to)
    int amount = params.amount.toInteger()
    if (source.active) {
        source.balance -= amount
        if (dest.active) {
            dest.amount += amount
        }
        else {
            status.setRollbackOnly()
        }
    }
}
You can read about withTransaction at http://grails.org/doc/latest/ref/Domain%20Classes/withTransaction.html
You can see the difference between withSession and withTransaction at https://stackoverflow.com/a/19692615/1610918
------UPDATE-----------
But I would prefer that you use a service, which can then be called from the job.

Operation timed out using CouchbaseClient

I am getting Timeout exceptions even though there is not much load on the Couchbase server.
net.spy.memcached.OperationTimeoutException: Timeout waiting for value
at net.spy.memcached.MemcachedClient.get(MemcachedClient.java:1003)
at net.spy.memcached.MemcachedClient.get(MemcachedClient.java:1018)
at com.eos.cache.CacheClient.get(CacheClient.java:280)
at com.eos.cache.GenericCacheAccessObject.get(GenericCacheAccessObject.java:55)
...
...
Caused by: net.spy.memcached.internal.CheckedOperationTimeoutException: Timed out waiting for operation - failing node: /192.168.4.12:11210
at net.spy.memcached.internal.OperationFuture.get(OperationFuture.java:157)
at net.spy.memcached.internal.GetFuture.get(GetFuture.java:62)
at net.spy.memcached.MemcachedClient.get(MemcachedClient.java:997)
...30 more
This is how I am creating the client.
List<URI> uris = new ArrayList<URI>();
String[] serverTokens = getServers().split(" ");
for (int index = 0; index < serverTokens.length; index++) {
    uris.add(new URI(serverTokens[index]));
}
CouchbaseConnectionFactoryBuilder ccfb = new CouchbaseConnectionFactoryBuilder();
ccfb.setProtocol(Protocol.BINARY);
ccfb.setOpTimeout(10000);          // wait up to 10 seconds for an operation to succeed
ccfb.setOpQueueMaxBlockTime(5000); // wait up to 5 seconds when trying to enqueue an operation
ccfb.setMaxReconnectDelay(1500);
CouchbaseConnectionFactory cf = ccfb.buildCouchbaseConnection(uris, bucket, "");
CouchbaseClient client = new CouchbaseClient(cf);
I am maintaining a pool of persistent clients in our web server, and we are not even touching the max connection limit, which is set to only 15.
Please help me solve this.
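Not an explanation of the root cause, but a common stop-gap while investigating: wrap the get() in a small retry helper so that a single slow node does not immediately fail the request. The sketch below assumes the same spymemcached-based CouchbaseClient as above; the retry count and back-off are arbitrary placeholders.

import net.spy.memcached.OperationTimeoutException;

import com.couchbase.client.CouchbaseClient;

// Sketch: retry a Couchbase get() a few times before giving up.
// Retry count and sleep are placeholder values, not tuned recommendations.
public final class RetryingGet {
    public static Object getWithRetry(CouchbaseClient client, String key, int attempts)
            throws InterruptedException {
        OperationTimeoutException last = null;
        for (int i = 0; i < attempts; i++) {
            try {
                return client.get(key);            // may return null if the key is absent
            } catch (OperationTimeoutException e) {
                last = e;                          // node was slow or briefly unreachable
                Thread.sleep(50L * (i + 1));       // simple linear back-off
            }
        }
        throw last;                                // surface the last timeout to the caller
    }
}

If the timeouts keep naming the same node (failing node: /192.168.4.12:11210 above), checking that node's health and the network path to it is usually more productive than tuning client timeouts.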
