I am using the Spanner client library for Java and I configure the client using Spring.
After a while, the application starts to log the following message, but I don't understand why. The application concurrency is minimal, and it seems the sessions aren't being reused. Any suggestions?
RESOURCE_EXHAUSTED: No session available in the pool. Maximum number
of sessions in the pool can be overridden by invoking
SessionPoolOptions#Builder#setMaxSessions. Client can be made to block
rather than fail by setting
SessionPoolOptions#Builder#setBlockIfPoolExhausted.
@Configuration
public class SpannerConfig {

    @Value("${datasource.instanceId}")
    private String instance;

    @Value("${datasource.databaseId}")
    private String database;

    @Bean
    public Spanner spannerService() throws IOException {
        SessionPoolOptions sessionPoolOptions = SessionPoolOptions.newBuilder()
            .setFailIfPoolExhausted()
            .setMinSessions(5)
            .setMaxSessions(100)
            .build();

        SpannerOptions options = SpannerOptions.newBuilder()
            .setSessionPoolOption(sessionPoolOptions)
            .build();

        return options.getService();
    }

    @Bean
    public DatabaseClient spannerClient(Spanner spannerService) {
        DatabaseId databaseId = DatabaseId.of(spannerService.getOptions().getProjectId(), instance, database);
        return spannerService.getDatabaseClient(databaseId);
    }
}
It sounds like you have a session leak. Make sure that you're using a try-with-resources block around any DatabaseClient.singleUse* or DatabaseClient.readOnlyTransaction() calls, so that the transaction or ResultSet gets closed and the corresponding session is returned to the session pool.
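For example, a minimal sketch of the pattern (databaseClient here is your DatabaseClient bean, and the query is just a placeholder):
try (ResultSet resultSet = databaseClient
        .singleUse()
        .executeQuery(Statement.of("SELECT 1"))) {  // placeholder query
    while (resultSet.next()) {
        // process the row
    }
}  // closing the ResultSet releases the session back to the pool
The same applies to readOnlyTransaction(): close the ReadOnlyTransaction when you are done with it.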
You are setting .setMaxSessions(100), so once that limit is reached the pool obviously cannot hand out any more sessions.
In principle, when the client has already allocated all 100 sessions, the next request can allocate none.
the documentation for the sessions reads:
Note: The Cloud Spanner client libraries manage sessions automatically.
... after reading into the source code, I'm fairly sure that this error message is only thrown when .setFailIfPoolExhausted() is used. If Stackdriver monitoring says the pool is not actually exhausted, the fact that it is reported as exhausted might possibly be a bug.
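If failing is not what you want, the pool can be configured to block instead, as the error message itself points out; a sketch of that configuration (note that blocking only hides the symptom if sessions are in fact leaking):
SessionPoolOptions sessionPoolOptions = SessionPoolOptions.newBuilder()
    .setBlockIfPoolExhausted()  // wait for a free session instead of throwing
    .setMinSessions(5)
    .setMaxSessions(100)
    .build();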
Related
WebSocket connections work with Spring WebSocket 4.1.6's TextWebSocketHandler. For connection establishment we have a HandshakeInterceptor, which in its beforeHandshake() method sets the user context:
AuthenticatedUser authUser = (AuthenticatedUser) httpServletRequest
        .getSession().getAttribute(WEB_SOCKET_USER);
UserContextHolder.setUserContext(new UserContext(authUser));
The UserContextHolder class is modeled like Spring’s org.springframework.context.i18n.LocaleContextHolder - a class to hold the current UserContext in thread local storage:
private static ThreadLocal<UserContext> userContextHolder = new InheritableThreadLocal<UserContext>();

public static void setUserContext(UserContext userContext) {
    userContextHolder.set(userContext);
    if (userContext != null) {
        LocaleContextHolder.setLocaleContext(userContext);
    } else {
        LocaleContextHolder.setLocaleContext(new LocaleContext() {
            public Locale getLocale() {
                return UserContext.getDefaultLocale();
            }
        });
    }
}
This UserContext holds the thread's authenticated user information for the entire app, not only Spring (other software, such as Quartz, coexists in this codebase on the same JVM, so we need to share the context between them).
Everything worked fine when we ran on Tomcat 7 with the standard BIO connector. The problem arises with the upgrade to Tomcat 8, where the new NIO connector is enabled by default.
When WebSocket messages arrive, they are processed in calls to service methods guarded by the @SecurityValidation annotation, whose MethodInterceptor checks whether the given thread has the UserContext set:
AuthenticatedUser user = UserContextHolder.getUser();
if (user == null) {
    throw new Exception(/* ... */);
}
But it is null, so the exception gets thrown.
We believe the problem lies in the threading change that comes with the switch from the BIO to the NIO connector.
BIO scenario – we have one thread per WebSocket, so one handshake sets one UserContext and all further work happens on that exact thread. It works fine even when there are more sockets; e.g., when we have 4 different WebSockets open, there are 4 different threads handling them, which is why the ThreadLocal usage works well.
NIO scenario – the non-blocking IO concept is about reducing the number of threads (simplifying for our case). Internally, the NIO connector uses NIO's Selector to manage the workload on a single thread (with an event loop, I guess; still need to confirm). As we now have just one thread handling all the WebSockets (or at least some of them), the unexpected exception is thrown.
I'm not sure why a UserContext that has been set gets nullified later; investigating the code gave us no clues, which is why we think this might be a bug (or something similar).
The UserContext's ThreadLocal being null under NIO seems to be the cause of the exception. Has anyone used ThreadLocal with the NIO connector? What is the best way to migrate a Tomcat 7 BIO implementation to a Tomcat 8 NIO implementation when using ThreadLocal in WebSocket communication? Thanks!
I am trying to identify where a suspected memory / resource leak is occurring with regard to a JMS queue I have built. I am new to JMS queues, so I have used many of the standard JMS classes to ensure stability, but somewhere in my code or configuration I am doing something wrong: my queue is filling up, or resources are slowing down, perhaps due to unknown deficiencies in the architecture I am attempting to implement.
When load testing my API (using Gatling), I can run 20 messages a second through (which is a tiny load) for most of a ten-minute duration. After that, the messages seem to back up, and the ability to process them slows to a crawl. Generally, time-out errors begin to occur once requests take more than 60 seconds to complete. There is more business logic that processes data and persists it to a relational database, but none of that appears to be an issue.
Interestingly, subsequent test runs continue with the poor performance, indicating that whatever resource is leaking transcends the individual tests. A restart of the application clears out whatever has become bloated or is leaking; the tests then run fast again for the first seven or eight minutes, upon which the cycle repeats itself. Only a restart of the app clears the issue. Since the issue doesn't correct itself, even after waiting for a period of time, something has filled up its resources.
When I pull the JMS calls from the logic, I am able to process hundreds of messages a second, and I can run back-to-back test runs without leaking or filling up the queue.
Although this is a Spring project, I am not using Spring's JmsTemplate, so I wrote my own Connection object, which I injected as a Spring bean and implemented as a single connection to avoid creating a new connection for every JMS message I send.
Likewise, I configured my JMS Session as an injected bean that uses the Connection bean. That way I can keep my Connection and Session objects alive for sending all of my JMS messages, which are sent one at a time. A Qpid server I am calling receives these messages. While it is possible I am exceeding its capacity to consume the messages I am producing, I expect that the resource leak is in my code, not the JMS server.
Here are some code snippets to give you an idea of my approach. Any feedback is appreciated.
JmsConfiguration (key methods)
@Bean
public ConnectionFactory jmsConnectionFactory() {
    return new JmsConnectionFactory(user, pass, host);
}

@Bean(name = "jmsSession")
public Session jmsConnection() throws JMSException {
    Connection conn = jmsConnectionFactory().createConnection();
    Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
    return session; // Injected as singleton
}

@Bean(name = "jmsQueue")
public Queue jmsQueue() throws JMSException {
    return jmsConnection().createQueue(queue);
}

// Jackson's ObjectMapper is heavy enough to warrant injecting and re-using it.
@Bean
public ObjectMapper objectMapper() {
    return new ObjectMapper();
}
MessageJmsEnqueuer
@Component
public class MessageJmsEnqueuer extends CommonThreadScope {

    @Autowired
    @Qualifier("jmsSession")
    private Session jmsSession;

    @Autowired
    @Qualifier("jmsQueue")
    private Queue jmsQueue;

    @Value("${acme.jms.queue}")
    private String jmsQueueName;

    @Autowired
    private ObjectMapper jmsObjectMapper;

    public void enqueue(String message, String dataType) {
        try {
            String messageAsJson = jmsObjectMapper.writeValueAsString(message);
            MessageProducer jmsMessageProducer = jmsSession.createProducer(jmsQueue);
            TextMessage textMessage = jmsSession.createTextMessage(messageAsJson);
            textMessage.setStringProperty("dataType", dataType);
            jmsMessageProducer.send(textMessage);
            logger.log(Level.INFO, "Message successfully sent. Queue=" + jmsQueueName + ", Message -> " + messageAsJson);
        } catch (JMSException | JsonProcessingException jmsre) {
            String msg = "JMS Message Processing encountered an error...";
            logService.severe(logger, messagesBuilder() ... msg);
        }
        //Skip the close() method to persist connection...
        //Reconnect logic exists to reset an expired connection from server.
    }
}
I was able to solve my resource leak / deadlock issue simply by rewriting my code to use the simplified API provided with the release of JMS 2.0. Although I was never able to determine which of the Connection / Session / Queue objects was giving my code grief, using the Context object to build my connection and session was the golden ticket in this case.
Upon switching to the simplified API (since I was already pulling in the JMS 2.0 dependency), the resource leak immediately vanished! This leads me to believe that the simplified API does more than just provide an easier interface for the developer to code against. While that alone is already an advantage (even allowing for the few features the simplified API doesn't support), it is now clear to me that the underlying connection and session objects are managed by the API, which resolved whatever was filling up or deadlocking.
Furthermore, because the resource build-up was no longer occurring, I was able to triple the number of messages I passed through, allowing me to process 60 users a second, instead of 20. That is a significant increase, and I have fixed the compatibility issues that prevented me from using the simplified JMS API to begin with.
While I would have liked to identify precisely what was fouling up the code, this works as a solution. Plus, the fact that version 2.0 of JMS was released in April of 2013 would indicate that the simplified API is definitely the preferred solution.
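For anyone interested, the simplified-API version ends up looking roughly like this (a sketch, not my exact code; bean wiring and names are illustrative):
@Bean
public JMSContext jmsContext(ConnectionFactory jmsConnectionFactory) {
    // JMS 2.0 simplified API: JMSContext manages the underlying connection and session
    return jmsConnectionFactory.createContext(JMSContext.AUTO_ACKNOWLEDGE);
}

// sending a message through the context's producer
jmsContext.createProducer().send(jmsQueue, messageAsJson);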
Just a guess, but MessageProducer extends AutoCloseable, suggesting it should be closed once it is no longer needed. Since you're not using try-with-resources or explicitly closing it afterwards, the jmsSession may accumulate more and more producers over time. I am not sure, though, whether you should close the producer per method call or re-use a single created producer.
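For example, something along these lines (just a sketch adapted from your enqueue method; it relies on the JMS 2.0 API, where MessageProducer is AutoCloseable):
void sendOne(Session jmsSession, Queue jmsQueue, String messageAsJson, String dataType) throws JMSException {
    // let try-with-resources close the producer so it doesn't pile up on the session
    try (MessageProducer producer = jmsSession.createProducer(jmsQueue)) {
        TextMessage textMessage = jmsSession.createTextMessage(messageAsJson);
        textMessage.setStringProperty("dataType", dataType);
        producer.send(textMessage);
    }
}
Alternatively, create the producer once and reuse it for every send.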
Have you tried using a profiler such as VisualVM to visualize the heap and metaspace? If so, did you find any significant changes over time?
I have a JSP that makes 25 HL7 HAPI FHIR calls asynchronously using DSTU2. As suggested in the best practices, I create the FhirContext once via static initialization and reuse it for every service call. However, service calls fail intermittently with the stack trace below. (If I initialize the FhirContext for every service call instead, the issue goes away, but this slows down the calls. Could someone suggest an alternative approach or tell me what I am doing wrong?)
Caused by: org.apache.http.conn.ConnectionPoolTimeoutException: Timeout waiting for connection from pool
public class MyFHIRContext {

    public static FhirContext ctx;

    static {
        ctx = FhirContext.forDstu2();
        ctx.getRestfulClientFactory().setSocketTimeout(60 * 1000);
        ctx.getRestfulClientFactory().setConnectTimeout(60 * 1000);
        ctx.getRestfulClientFactory().setServerValidationMode(ServerValidationModeEnum.NEVER);
    }
}
calling code:
IGenericClient client = MyFHIRContext.ctx.newRestfulGenericClient("server url");
The exception suggests that your connection pool is not big enough to support that many overlapping requests.
You could either make the pool bigger or, better, reduce the number of requests by issuing them all (or groups of them) as batch requests - see http://hl7.org/fhir/DSTU2/http.html#transaction for the details.
We make extensive use of batch requests in our FHIR clients to good effect.
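For example, a rough DSTU2 sketch of combining several reads into one batch (going from memory of the HAPI DSTU2 model API; the resource references are made up):
// Bundle, BundleTypeEnum and HTTPVerbEnum come from ca.uhn.fhir.model.dstu2
Bundle batch = new Bundle();
batch.setType(BundleTypeEnum.BATCH);
batch.addEntry().getRequest().setMethod(HTTPVerbEnum.GET).setUrl("Patient/123");      // made-up reference
batch.addEntry().getRequest().setMethod(HTTPVerbEnum.GET).setUrl("Observation/456");  // made-up reference

Bundle response = client.transaction().withBundle(batch).execute();
The entries come back in the response bundle in the same order, so the 25 calls collapse into a single HTTP round trip on one pooled connection.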
I am working on a web project in Java with MongoDB as the back-end database. To open a connection once and reuse it for each service in the project, I am following the approach from this question:
mongodb open connection issue. To close the opened connection, I call MongoDBClass.INSTANCE.close(); when the user logs out of the session on the website. The problem is that when the user logs in again, the following error is produced: java.lang.IllegalStateException: state should be: open. That means the connection is not reopened; the MongoDBClass INSTANCE is not reinitialized, so MongoClient does not reopen the connection. After a server restart, login works fine the first time. How can I build a new connection again after calling the close method on user logout, without restarting the server? I am using the following code:
public enum MongoDBClass {

    INSTANCE;

    private static final String MONGO_DB_HOST = "hostURL";
    private Mongo mongoObject;
    private DB someDB;
    String DB_NAME = null;
    MongoClientOptions options = null;

    MongoDBClass() {
        options = MongoClientOptions.builder().connectionsPerHost(100)
                .readPreference(ReadPreference.secondaryPreferred()).build();
        mongoObject = new MongoClient(new ServerAddress(MONGO_DB_HOST, 27001),
                options);
        someDB = mongoObject.getDB(Nutans_Mongo.getNameOFDB());
    }

    public DB getSomeDB() {
        return someDB;
    }

    public void setSomeDB(String dbName) {
        someDB = mongoObject.getDB(dbName);
        DB_NAME = dbName;
    }

    public String close() {
        mongoObject.close();
        return "true";
    }
}
MongoClient maintains a connection pool internally, so there is no need to open and close a client for each request. Also, Java enums are not meant to be used this way: any state an enum holds should be globally usable, as there will only be one instance of an enum value per ClassLoader/VM. When you call close(), you're globally closing that enum's MongoClient, and since you open the connection in the constructor it never gets reopened, because another INSTANCE is never created.
There are several approaches to ensuring a singleton-like lifecycle for objects in a servlet context. Using CDI to create and inject a MongoClient into your servlet is one way. Using a ServletContextListener and a static field is another, if slightly less savory, approach.
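For example, a minimal sketch of the listener approach (class name, host, and port are illustrative):
@WebListener
public class MongoHolder implements ServletContextListener {

    private static MongoClient mongoClient;

    public static MongoClient getClient() {
        return mongoClient;
    }

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // one client for the whole application; it pools connections internally
        mongoClient = new MongoClient(new ServerAddress("hostURL", 27001));
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // close only when the application shuts down, not on user logout
        if (mongoClient != null) {
            mongoClient.close();
        }
    }
}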
I have the same problem. I'm using the Mongo Java driver 3.0.0.
I upgraded my database from 2.4 to 2.6. But the problem persists.
When I don't close the connection, the next login connects successfully, but in that case the number of open connections rises quickly.
So from time to time we see exceptions like these:
java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:150)
at java.net.SocketInputStream.read(SocketInputStream.java:121)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
at org.bson.io.Bits.readFully(Bits.java:48)
at org.bson.io.Bits.readFully(Bits.java:35)
at org.bson.io.Bits.readFully(Bits.java:30)
at com.mongodb.Response.<init>(Response.java:42)
at com.mongodb.DBPort$1.execute(DBPort.java:141)
at com.mongodb.DBPort$1.execute(DBPort.java:135)
at com.mongodb.DBPort.doOperation(DBPort.java:164)
at com.mongodb.DBPort.call(DBPort.java:135)
at com.mongodb.DBTCPConnector.innerCall(DBTCPConnector.java:292)
at com.mongodb.DBTCPConnector.call(DBTCPConnector.java:271)
at com.mongodb.DBCollectionImpl.find(DBCollectionImpl.java:84)
at com.mongodb.DBCollectionImpl.find(DBCollectionImpl.java:66)
at com.mongodb.DBCollection.findOne(DBCollection.java:870)
at com.mongodb.DBCollection.findOne(DBCollection.java:844)
at com.mongodb.DBCollection.findOne(DBCollection.java:790)
at org.springframework.data.mongodb.core.MongoTemplate$FindOneCallback.doInCollection(MongoTemplate.java:2000)
What's the best way to handle and recover from these in code?
Do we need to put a 'retry' around each and every MongoDB call?
You can use the MongoClientOptions object to set various optional connection parameters. Here you are looking at setting the heartbeat frequency, so the driver keeps re-checking and retrying the connection, and the socket timeout, so an operation does not hang for too long.
MinHeartbeatFrequency: in the event that the driver has to frequently re-check a server's availability, it will wait at least this long since the previous check to avoid wasted effort. The default value is 10ms.
HeartbeatSocketTimeout: timeout for the heartbeat check.
SocketTimeout: timeout for socket operations on the connection.
Reference API
To avoid too much code duplication, you can optionally follow a pattern like the one below.
The basic idea is to avoid database-connection configuration being littered all over the project.
/**
 * This class is an abstraction for all mongo connection config
 **/
@Component
public class MongoConnection {

    MongoClient mongoClient = null;
    ...

    @PostConstruct
    public void init() throws Exception {
        // Please watch out for deprecated methods in new versions of the driver.
        mongoClient = new MongoClient(new ServerAddress(url, port),
                MongoClientOptions.builder()
                        .socketTimeout(3000)
                        .minHeartbeatFrequency(25)
                        .heartbeatSocketTimeout(3000)
                        .build());
        mongoDb = mongoClient.getDB(db);
        .....
    }

    public DBCollection getCollection(String name) {
        return mongoDb.getCollection(name);
    }
}
Now you can use MongoConnection in your DAOs:
@Repository
public class ExampleDao {

    @Autowired
    MongoConnection mongoConnection;

    public void insert(BasicDBObject document) {
        mongoConnection.getCollection("example").insert(document);
    }
}
You can also implement all the database operations inside MongoConnection to introduce common functionality across the board, for example logging for all inserts.
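For instance, a small sketch of such a method inside MongoConnection (the logger field is assumed to exist):
public void insert(String collectionName, BasicDBObject document) {
    long start = System.currentTimeMillis();
    getCollection(collectionName).insert(document);
    // cross-cutting concern (timing/logging) handled in one place
    logger.info("insert into " + collectionName + " took "
            + (System.currentTimeMillis() - start) + " ms");
}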
One of the many options to handle retries is the Spring Retry project:
https://github.com/spring-projects/spring-retry
It provides declarative retry support for Spring applications.
This is basically Spring's answer to this problem; it is used in Spring Batch, Spring Integration, and Spring for Apache Hadoop (amongst others).
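A minimal sketch of what that can look like (assuming spring-retry on the classpath, @EnableRetry on a configuration class, and the MongoConnection bean from above; the collection name and DAO are illustrative):
@Repository
public class UserDao {

    @Autowired
    MongoConnection mongoConnection;

    // retried up to 3 times with a short backoff whenever the driver throws a MongoException
    @Retryable(value = MongoException.class, maxAttempts = 3, backoff = @Backoff(delay = 200))
    public DBObject findUser(String id) {
        return mongoConnection.getCollection("users").findOne(new BasicDBObject("_id", id));
    }
}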
If you want to address timeout (and related) problems not only for MongoDB but also for any other external dependency, you should try Netflix's Hystrix (https://github.com/Netflix/Hystrix).
It is an awesome library that integrates nicely with RxJava and the asynchronous processing style that has become so much more popular lately.
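A rough sketch of wrapping a MongoDB call in a Hystrix command (class and collection names are illustrative):
public class FindUserCommand extends HystrixCommand<DBObject> {

    private final DBCollection collection;
    private final String id;

    public FindUserCommand(DBCollection collection, String id) {
        // group key; timeout and circuit-breaker settings use Hystrix defaults here
        super(HystrixCommandGroupKey.Factory.asKey("MongoDB"));
        this.collection = collection;
        this.id = id;
    }

    @Override
    protected DBObject run() {
        return collection.findOne(new BasicDBObject("_id", id));
    }

    @Override
    protected DBObject getFallback() {
        return null;  // or a cached/default value
    }
}

// usage: new FindUserCommand(collection, "42").execute();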
If I'm not mistaken, I think you need to configure properties such as the timeouts when you build the connection, or tune them properly in the connection pool.
Alternatively, check your network or machine, and split your request data into more, smaller requests to reduce the network transfer time per call.
https://github.com/Netflix/Hystrix is your tool for handling dependencies.