I am having a connection issue with Spring JDBC and an SQL database. On the first try, my method creates n threads, they query the database, and no issues occur. If I immediately run the method again, same thing: no issues. Note that I am not restarting the application between tries.
The issue occurs when I wait a few minutes before running the method again, so I assume there is a timeout issue somewhere, or threads are being abandoned.
And the catch is that when I run a single-threaded version of this method, it works perfectly fine, so I believe the actual URL/user/password/driver setup is OK. I am new to multithreading, so I suspect there is a flaw somewhere in my implementation.
I am using Spring JDBC with Apache Tomcat JNDI connection pooling:
Java Multithreading:
public List<Item> getSetPoints(List<Item> items) {
    ExecutorService executorService = Executors.newFixedThreadPool(10);
    for (Item item : items) {
        executorService.submit(new ProcessItem(item));
    }
    executorService.shutdown();
    // the items are populated by the worker threads; this returns before they finish (see the sketch below)
    return items;
}
class ProcessItem implements Runnable {
    private final Item item;

    public ProcessItem(Item item) {
        this.item = item;
    }

    public void run() {
        // piDAO is a field of the enclosing service class (PiService, per the stack trace)
        Item newItem = piDAO.retrieveSetPoint(item);
    }
}
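(Editor's sketch, not the original poster's code: a variant of the method above that waits for the queries to finish and surfaces worker exceptions instead of silently dropping them. Item and ProcessItem are the classes shown above; the snippet needs the java.util and java.util.concurrent imports.)
public List<Item> getSetPoints(List<Item> items) throws InterruptedException {
    ExecutorService executorService = Executors.newFixedThreadPool(10);
    List<Future<?>> futures = new ArrayList<>();
    for (Item item : items) {
        futures.add(executorService.submit(new ProcessItem(item)));
    }
    executorService.shutdown();
    // block until every query has completed, so callers see fully populated items
    executorService.awaitTermination(5, TimeUnit.MINUTES);
    for (Future<?> f : futures) {
        try {
            f.get(); // rethrows anything a worker threw, instead of losing it
        } catch (ExecutionException e) {
            e.getCause().printStackTrace();
        }
    }
    return items;
}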
DAO:
#Component("PIDAO")
public class PIDAO {
private NamedParameterJdbcTemplate jdbc;
#Resource(name="pijdbc")
public void setPiDataSource(DataSource jdbc) {
this.jdbc = new NamedParameterJdbcTemplate(jdbc);
}
    public Item retrieveSetPoint(Item item) {
        MapSqlParameterSource params = new MapSqlParameterSource();
        params.addValue("tag", item.getTagName());
        String sql = "SELECT TOP 1 time, value, status FROM piarchive.picomp2 WHERE tag = :tag AND status=0 AND questionable = false ORDER BY time DESC";
        try {
            return jdbc.queryForObject(sql, params, (rs, rowNum) -> {
                item.setPiDate(rs.getString("time"));
                item.setPiValue(rs.getString("value"));
                return item;
            });
        } catch (Exception e) {
            System.out.println(e);
            return null; // no result when the query fails
        }
    }
}
Spring DAO Container:
<jee:jndi-lookup jndi-name="jdbc/PI" id="pijdbc"
expected-type="javax.sql.DataSource">
</jee:jndi-lookup>
JNDI Configuration:
<Resource
name="jdbc/PI"
auth="Container"
type="javax.sql.DataSource"
maxTotal ="25"
maxIdle="30"
maxWaitMillis ="10000"
driverClassName="com.osisoft.jdbc.Driver"
url="**Valid URL**"
username="**Valid Username**"
password="**Valid Password**"
/>
Stacktrace when error occurs:
org.springframework.jdbc.UncategorizedSQLException: PreparedStatementCallback; uncategorized SQLException for SQL [SELECT TOP 1 time, value, status FROM piarchive.picomp2 WHERE tag = ? AND status=0 AND questionable = false ORDER BY time DESC]; SQL state [null]; error code [0]; [Orb.Channel] The channel is not registered on server.; nested exception is java.sql.SQLException: [Orb.Channel] The channel is not registered on server.
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:84)
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:81)
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:81)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:645)
at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:680)
at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:707)
at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:757)
at org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate.queryForObject(NamedParameterJdbcTemplate.java:211)
at btv.app.dao.PIDAO.retrieveSetPoint(PIDAO.java:36)
at btv.app.service.PiService$ProcessItem.run(PiService.java:91)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.sql.SQLException: [Orb.Channel] The channel is not registered on server.
at com.osisoft.jdbc.PreparedStatementImpl.executeQuery(PreparedStatementImpl.java:167)
at org.apache.tomcat.dbcp.dbcp2.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:82)
at org.apache.tomcat.dbcp.dbcp2.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:82)
at org.springframework.jdbc.core.JdbcTemplate$1.doInPreparedStatement(JdbcTemplate.java:688)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:629)
... 11 more
The error java.sql.SQLException: [Orb.Channel] The channel is not registered on server is unique to the particular driver I am using.
After some research, it essentially means that the connection to the server was dropped; however, the logs on the SQL server indicate it was not dropped on the server side.
Try creating a fresh NamedParameterJdbcTemplate within the retrieveSetPoint method to see if this eliminates the timeout problems you're having:
#Component("PIDAO")
public class PIDAO {
private DataSource jdbc;
#Resource(name="pijdbc")
public void setPiDataSource(DataSource jdbc) {
this.jdbc = jdbc;
}
    public Item retrieveSetPoint(Item item) {
        MapSqlParameterSource params = new MapSqlParameterSource();
        params.addValue("tag", item.getTagName());
        String sql = "SELECT TOP 1 time, value, status FROM piarchive.picomp2 WHERE tag = :tag AND status=0 AND questionable = false ORDER BY time DESC";
        try {
            return new NamedParameterJdbcTemplate(jdbc).queryForObject(sql, params, (rs, rowNum) -> {
                item.setPiDate(rs.getString("time"));
                item.setPiValue(rs.getString("value"));
                return item;
            });
        } catch (Exception e) {
            System.out.println(e);
            return null; // no result when the query fails
        }
    }
}
Alternatively, you can reuse the NamedParameterJdbcTemplates and refresh them when they time out; this benefits from explicit pooling, e.g.:
private final int poolSize = 10;

public Collection<Item> getSetPoints(List<Item> items) {
    ExecutorService executorService = Executors.newFixedThreadPool(poolSize);
    Queue<Item> queue = new ConcurrentLinkedQueue<>(items);
    Collection<Item> output = new ConcurrentLinkedQueue<>();
    for (int i = 0; i < poolSize; i++) {
        executorService.submit(new ProcessItem(queue, output));
    }
    executorService.shutdown();
    try {
        // wait for the workers to drain the queue so the caller gets complete results
        executorService.awaitTermination(5, TimeUnit.MINUTES);
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
    return output;
}
class ProcessItem implements Runnable {
    private final Queue<Item> queue;
    private final Collection<Item> output;
    private NamedParameterJdbcTemplate jdbc;

    public ProcessItem(Queue<Item> queue, Collection<Item> output) {
        this.queue = queue;
        this.output = output;
        this.jdbc = piDAO.getNamedJdbcTemplate(); // piDAO is a field of the enclosing service
    }

    public void run() {
        Item item;
        while ((item = queue.poll()) != null) {
            try {
                output.add(piDAO.retrieveSetPoint(item, jdbc));
            } catch (SQLException e) {
                // the template's underlying connection may have gone stale: grab a fresh one and retry once
                this.jdbc = piDAO.getNamedJdbcTemplate();
                try {
                    output.add(piDAO.retrieveSetPoint(item, jdbc));
                } catch (SQLException retryFailed) {
                    retryFailed.printStackTrace();
                }
            }
        }
    }
}
#Component("PIDAO")
public class PIDAO {
private DataSource jdbc;
#Resource(name="pijdbc")
public void setPiDataSource(DataSource jdbc) {
this.jdbc = jdbc;
}
public NamedParameterJdbcTemplate getNamedJdbcTemplate() {
return new NamedParameterJdbcTemplate(jdbc);
}
public Item retrieveSetPoint(Item item, NamedParameterJdbcTemplate template) throws SQLException {
MapSqlParameterSource params = new MapSqlParameterSource();
params.addValue("tag", item.getTagName());
String sql = "SELECT TOP 1 time, value, status FROM piarchive.picomp2 WHERE tag = :tag AND status=0 AND questionable = false ORDER BY time DESC";
return template.queryForObject(sql, params, (rs, rowNum) -> {
item.setPiDate(rs.getString("time"));
item.setPiValue(rs.getString("value"));
return item;
});
}
}
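(Editor's note: independent of how the templates are handled, failures that appear only after a few idle minutes often mean the pool is handing out physical connections that were silently dropped in the meantime. Tomcat's DBCP2 pool can validate connections before handing them out and evict stale idle ones. A sketch of the JNDI resource with those attributes added; the attribute names are DBCP2's, while the values and the validation query are assumptions that must suit the PI driver:)
<Resource
    name="jdbc/PI"
    auth="Container"
    type="javax.sql.DataSource"
    maxTotal="25"
    maxIdle="30"
    maxWaitMillis="10000"
    testOnBorrow="true"
    validationQuery="SELECT 1"
    testWhileIdle="true"
    timeBetweenEvictionRunsMillis="300000"
    minEvictableIdleTimeMillis="600000"
    driverClassName="com.osisoft.jdbc.Driver"
    url="**Valid URL**"
    username="**Valid Username**"
    password="**Valid Password**"
/>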
I'm using AWS Keyspaces (Cassandra 3.11.2), run from Apache Flink on AWS EMR. Sometimes the query below throws an exception; the same code run on AWS Lambda also hit the same NoHost exception. What did I do wrong?
String query = "INSERT INTO TEST (field1, field2) VALUES(?, ?)";
PreparedStatement prepared = CassandraConnector.prepare(query);
int i = 0;
BoundStatement bound = prepared.bind().setString(i++, "Field1").setString(i++, "Field2")
.setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM);
ResultSet rs = CassandraConnector.execute(bound);
com.datastax.oss.driver.api.core.NoNodeAvailableException: No node was available to execute the query
at com.datastax.oss.driver.api.core.NoNodeAvailableException.copy(NoNodeAvailableException.java:40)
at com.datastax.oss.driver.internal.core.util.concurrent.CompletableFutures.getUninterruptibly(CompletableFutures.java:149)
at com.datastax.oss.driver.internal.core.cql.CqlRequestSyncProcessor.process(CqlRequestSyncProcessor.java:53)
at com.datastax.oss.driver.internal.core.cql.CqlRequestSyncProcessor.process(CqlRequestSyncProcessor.java:30)
at com.datastax.oss.driver.internal.core.session.DefaultSession.execute(DefaultSession.java:230)
at com.datastax.oss.driver.api.core.cql.SyncCqlSession.execute(SyncCqlSession.java:53)
at com.test.manager.connectors.CassandraConnector.execute(CassandraConnector.java:16)
at com.test.repository.impl.BackupRepositoryImpl.insert(BackupRepositoryImpl.java:36)
at com.test.service.impl.BackupServiceImpl.insert(BackupServiceImpl.java:18)
at com.test.flink.function.AsyncBackupFunction.processMessage(AsyncBackupFunction.java:78)
at com.test.flink.function.AsyncBackupFunction.lambda$asyncInvoke$0(AsyncBackupFunction.java:35)
at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
at java.util.concurrent.CompletableFuture$AsyncSupply.exec(CompletableFuture.java:1596)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
This is my code:
CassandraConnector.java:
Because the cost of initializing a PreparedStatement is high, I cache them:
public class CassandraConnector {
private static final ConcurrentHashMap<String, PreparedStatement> preparedStatementCache = new ConcurrentHashMap<String, PreparedStatement>();
public static ResultSet execute(BoundStatement bound) {
CqlSession session = CassandraManager.getSessionInstance();
return session.execute(bound);
}
public static ResultSet execute(String query) {
CqlSession session = CassandraManager.getSessionInstance();
return session.execute(query);
}
public static PreparedStatement prepare(String query) {
PreparedStatement result = preparedStatementCache.get(query);
if (result == null) {
CqlSession session = CassandraManager.getSessionInstance();
result = session.prepare(query);
preparedStatementCache.putIfAbsent(query, result);
}
return result;
}
}
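(Editor's side note: the null-check-then-putIfAbsent pattern above can prepare the same query twice under concurrency; ConcurrentHashMap.computeIfAbsent does the caching atomically. A minimal sketch of the same method, using the same names as above:)
public static PreparedStatement prepare(String query) {
    // computeIfAbsent prepares each distinct query at most once, even with concurrent callers
    return preparedStatementCache.computeIfAbsent(
            query, q -> CassandraManager.getSessionInstance().prepare(q));
}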
CassandraManager.java:
I'm using a double-checked-locking singleton for the session object.
public class CassandraManager {
private static final Logger logger = LoggerFactory.getLogger(CassandraManager.class);
private static final String SSL_CASSANDRA_PASSWORD = "password";
private static volatile CqlSession session;
static {
try {
initSession();
} catch (Exception e) {
logger.error("Error CassandraManager getSessionInstance", e);
}
}
private static void initSession() {
List<InetSocketAddress> contactPoints = Collections.singletonList(InetSocketAddress.createUnresolved(
"cassandra.ap-southeast-1.amazonaws.com", 9142));
DriverConfigLoader loader = DriverConfigLoader.fromClasspath("application.conf");
Long start = BaseHelper.getTime();
session = CqlSession.builder().addContactPoints(contactPoints).withConfigLoader(loader)
.withAuthCredentials(AppUtil.getProperty("cassandra.username"),
AppUtil.getProperty("cassandra.password"))
.withSslContext(getSSLContext()).withLocalDatacenter("ap-southeast-1")
.withKeyspace(AppUtil.getProperty("cassandra.keyspace")).build();
logger.info("End connect: " + (new Date().getTime() - start));
}
public static CqlSession getSessionInstance() {
if (session == null || session.isClosed()) {
synchronized (CassandraManager.class) {
if (session == null || session.isClosed()) {
initSession();
}
}
}
return session;
}
public static SSLContext getSSLContext() {
InputStream in = null;
try {
KeyStore ks = KeyStore.getInstance("JKS");
in = CassandraManager.class.getClassLoader().getResourceAsStream("cassandra_truststore.jks");
ks.load(in, SSL_CASSANDRA_PASSWORD.toCharArray());
TrustManagerFactory tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
tmf.init(ks);
SSLContext ctx = SSLContext.getInstance("TLS");
ctx.init(null, tmf.getTrustManagers(), null);
return ctx;
} catch (Exception e) {
logger.error("Error CassandraConnector getSSLContext", e);
} finally {
if (in != null) {
try {
in.close();
} catch (IOException e) {
logger.error("", e);
}
}
}
return null;
}
}
application.conf
datastax-java-driver {
basic.request {
timeout = 5 seconds
consistency = LOCAL_ONE
}
advanced.connection {
max-requests-per-connection = 1024
pool {
local.size = 1
remote.size = 1
}
}
advanced.reconnect-on-init = true
advanced.reconnection-policy {
class = ExponentialReconnectionPolicy
base-delay = 1 second
max-delay = 60 seconds
}
advanced.retry-policy {
class = DefaultRetryPolicy
}
advanced.protocol {
version = V4
}
advanced.heartbeat {
interval = 30 seconds
timeout = 1 second
}
advanced.session-leak.threshold = 8
advanced.metadata.token-map.enabled = false
}
There are two scenarios where the driver would report NoNodeAvailableException:
Nodes are unresponsive/unavailable and the driver has marked all of them as down.
All the contact points provided are invalid.
If some inserts work but you eventually run into NoNodeAvailableException, that indicates to me that the nodes are getting overloaded and eventually become unresponsive, so the driver no longer picks a coordinator since they're all marked as "down".
If none of the requests work at all, it means that the contact points are unreachable or unresolvable, so the driver can't connect to the cluster. Cheers!
The NoHostAvailableException is a client-side exception thrown by the open-source driver after it has retried the available hosts. The open-source driver encapsulates the root cause of the retries, which can be confusing.
I suggest first improving your observability by setting up these CloudWatch metrics. You can follow this prebuilt CloudFormation template to get started; it only takes a few seconds.
Here is a setup for keyspace and table metrics for Amazon Keyspaces using CloudWatch:
https://github.com/aws-samples/amazon-keyspaces-cloudwatch-cloudformation-templates
You can also replace the retry policy with the examples found in this helper project. The retry policy in that project will either retry or throw the original exception, which removes the occurrences of NoHostAvailableException and gives you better transparency into your application. Here's the link to the GitHub repo: https://github.com/aws-samples/amazon-keyspaces-java-driver-helpers
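(Editor's note: such a policy is wired in through the driver's application.conf shown in the question; the class name below is my recollection of that repo's README and should be verified against the repo before use:)
advanced.retry-policy {
  class = com.aws.ssa.keyspaces.retry.AmazonKeyspacesRetryPolicy
  max-attempts = 3
}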
If you're using a private VPC endpoint, you want to add the permissions below to enable more entries in the system.peers table.
Amazon Keyspaces recently announced new functionality that provides more connection points when establishing a session with private VPC endpoints.
Here is a link about how Keyspaces now automatically optimizes client connections made through AWS PrivateLink to improve availability and reads and writes: https://aws.amazon.com/about-aws/whats-new/2021/07/amazon-keyspaces-for-apache-cassandra-now-automatically-optimi/
This link talks about using Amazon Keyspaces with interface VPC endpoints: https://docs.aws.amazon.com/keyspaces/latest/devguide/vpc-endpoints.html. To enable this new functionality, you will need to grant additional permissions for DescribeNetworkInterfaces and DescribeVpcEndpoints.
{
"Version":"2012-10-17",
"Statement":[
{
"Sid":"ListVPCEndpoints",
"Effect":"Allow",
"Action":[
"ec2:DescribeNetworkInterfaces",
"ec2:DescribeVpcEndpoints"
],
"Resource":"*"
}
]
}
I suspect that this:
.withLocalDatacenter(AppUtil.getProperty("cassandra.localdatacenter"))
pulls back a data center name which either does not match the keyspace replication definition or the configured data center name:
nodetool status | grep Datacenter
Basically, if your connection is defined with a local data center which does not exist, the driver will still try to read/write with replicas in that data center. This fails because it obviously cannot find nodes in a non-existent data center.
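(Editor's illustration: the name passed to the builder must exactly match a datacenter the cluster reports, e.g. a Datacenter line from nodetool status. A minimal sketch with a placeholder contact point:)
CqlSession session = CqlSession.builder()
        .addContactPoint(new InetSocketAddress("host", 9042))
        // must exactly match a "Datacenter:" name printed by `nodetool status`
        .withLocalDatacenter("ap-southeast-1")
        .build();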
A similar question is here: NoHostAvailable error in cqlsh console
I have a small Spring Boot app set up that connects to one or more topics on ActiveMQ, which are set in the application's application.properties file on startup, and then sends these messages on to a database.
This all works fine, but I am having some problems when trying to implement failover: the app will try to reconnect, but after a certain number of retries the application process just exits, preventing further retries (ideally, I would like the app to retry forever until killed manually, or until ActiveMQ becomes available again). I have tried explicitly setting the connection options (such as maxReconnectAttempts) in the connection URL (using url.options in application.properties) to -1/0/99999, but none of these seem to be right, as the behavior is the same each time. From the advice on Apache's own reference page I would also expect this to be the default behavior.
If anyone has any advice to force the app not to quit, I would be very grateful! The bits of my code that I think matter are below:
@Configuration
public class AmqConfig {
private static final Logger LOG = LogManager.getLogger(AmqConfig.class);
private static final String LOG_PREFIX = "[AmqConfig] ";
private String clientId;
private static ArrayList<String> amqUrls = new ArrayList<>();
private static String amqConnectionUrl;
private static Integer numSubs;
private static ArrayList<String> destinations = new ArrayList<>();
@Autowired
DatabaseService databaseService;
public AmqConfig (@Value("${amq.urls}") String[] amqUrl,
@Value("${amq.options}") String amqOptions,
@Value("${tocCodes}") String[] tocCodes,
@Value("${amq.numSubscribers}") Integer numSubs,
@Value("${clientId}") String clientId) throws UnknownHostException {
Arrays.asList(amqUrl).forEach(url -> {
amqUrls.add("tcp://" + url);
});
String amqServerAddress = "failover:(" + String.join(",", amqUrls) + ")";
String options = Strings.isNullOrEmpty(amqOptions) ? "" : "?" + amqOptions;
this.amqConnectionUrl = amqServerAddress + options;
this.numSubs = Optional.ofNullable(numSubs).orElse(4);
this.clientId = Strings.isNullOrEmpty(clientId) ? InetAddress.getLocalHost().getHostName() : clientId;
String topic = "Consumer." + this.clientId + ".VirtualTopic.Feed";
if (tocCodes.length > 0){
Arrays.asList(tocCodes).forEach(s -> destinations.add(topic + "_" + s));
} else { // no TOC codes = connecting to default feed
destinations.add(topic);
}
}
@Bean
public ActiveMQConnectionFactory connectionFactory() throws JMSException {
LOG.info("{}Connecting to AMQ at {}", LOG_PREFIX, amqConnectionUrl);
LOG.info("{}Using client id {}", LOG_PREFIX, clientId);
ActiveMQConnectionFactory connectionFactory =
new ActiveMQConnectionFactory(amqConnectionUrl);
Connection conn = connectionFactory.createConnection();
conn.setClientID(clientId);
conn.setExceptionListener(new AmqExceptionListener());
conn.start();
destinations.forEach(destinationName -> {
try {
for (int i = 0; i < numSubs; i++) {
Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
Destination destination = session.createQueue(destinationName);
MessageConsumer messageConsumer = session.createConsumer(destination);
messageConsumer.setMessageListener(new MessageReceiver(databaseService, destinationName));
}
} catch (JMSException e) {
LOG.error("{}Error setting up queue # {}", LOG_PREFIX, destinationName);
LOG.error(e.getMessage());
}
});
return connectionFactory;
}
}
public class MessageReceiver implements MessageListener, ExceptionListener {
public static final Logger LOG = LogManager.getLogger(MessageReceiver.class);
private static final String LOG_PREFIX = "[Message Receiver] ";
private DatabaseService databaseService;
public MessageReceiver(DatabaseService databaseService, String destinationName){
this.databaseService = databaseService;
LOG.info("{}Creating MessageReceiver for queue with destination: {}", LOG_PREFIX, destinationName);
}
@Override
public void onMessage(Message message) {
String messageText = null;
if (message instanceof TextMessage) {
TextMessage tm = (TextMessage) message;
try {
messageText = tm.getText();
} catch (JMSException e) {
LOG.error("{} Error getting message from AMQ", e);
}
} else if (message instanceof ActiveMQMessage) {
messageText = message.toString();
} else {
LOG.warn("{}Unrecognised message type, cannot process", LOG_PREFIX);
LOG.warn(message.toString());
}
try {
databaseService.sendMessageNoResponse(messageText);
} catch (Exception e) {
LOG.error("{}Unable to acknowledge message from AMQ. Message: {}", LOG_PREFIX, messageText, e);
}
}
}
public class AmqExceptionListener implements ExceptionListener {
public static final Logger LOG = LogManager.getLogger(AmqExceptionListener.class);
private static final String LOG_PREFIX = "[AmqExceptionListener ] ";
@Override
public void onException(JMSException e){
LOG.error("{}Exception thrown by ActiveMQ", LOG_PREFIX, e);
}
}
The console output I get from my application is just the below (apologies, as it is not much to go off)
[2019-12-12 14:43:30.292] [WARN ] Transport (tcp://[address]:61616) failed , attempting to automatically reconnect: java.io.EOFException
[2019-12-12 14:43:51.098] [WARN ] Failed to connect to [tcp://[address]:61616] after: 10 attempt(s) continuing to retry.
Process finished with exit code 0
Very interesting question!
Configuring maxReconnectAttempts=-1 will cause the connection attempts to be retried forever, but I think the problems here are as follows:
1) You are trying to connect to ActiveMQ while creating the bean at app startup. If ActiveMQ is not running when the app is starting up, the bean creation would retry the connection attempts forever, causing a timeout and not letting the app start.
2) When ActiveMQ stops running midway, you are not reattempting the connection, as that is done inside the @Bean method and will only happen on app startup.
Hence the connection shouldn't happen at bean creation time; it can instead be done after the app is up, for example inside a @PostConstruct block or an application-ready listener, as in the sketch below.
These are just pointers; you need to take it forward.
Hope this helps! Good luck!
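(A minimal sketch of that suggestion, an editor's assumption rather than the answerer's code: keep the @Bean as a plain factory and establish the connection after startup with a retry loop. The class and method names here are hypothetical; the JMS calls are the same ones used in the question.)
@Component
public class AmqStarter {

    @Autowired
    private ActiveMQConnectionFactory connectionFactory;

    // Runs once the application context is fully started, so a broker outage
    // can no longer prevent the app from booting.
    @EventListener(ApplicationReadyEvent.class)
    public void start() {
        while (true) {
            try {
                Connection conn = connectionFactory.createConnection();
                conn.start();
                // ... create sessions/consumers as in AmqConfig ...
                return;
            } catch (JMSException e) {
                // sleep and retry forever; once connected, the failover: transport
                // takes over reconnecting on its own
                try {
                    Thread.sleep(5000);
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        }
    }
}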
I have written a RESTful API using Apache Jersey. I am using MongoDB as my backend and Morphia (v1.3.4) to map and persist POJOs to the database. I tried to follow the "1 application, 1 connection" advice recommended everywhere, but I am not sure I am successful. I run my API in Tomcat 8 and also ran mongostat to see the details and connections. At the start, mongostat showed 1 connection to the MongoDB server. I tested my API using Postman and it worked fine. I then created a load test in SoapUI where I simulated 100 users per second. I saw the update in mongostat: there were 103 connections. Here is the gif which shows this behaviour.
I am not sure why there are so many connections. The interesting fact is that the number of Mongo connections is directly proportional to the number of users I create in SoapUI. Why is that? I found other similar questions, but I think I have implemented their suggestions:
Mongo connection leak with morphia
Spring data mongodb not closing mongodb connections
My code looks like this.
DatabaseConnection.java
// Some imports
public class DatabaseConnection {
private static volatile MongoClient instance;
private static String cloudhost="localhost";
private DatabaseConnection() { }
public synchronized static MongoClient getMongoClient() {
if (instance == null ) {
synchronized (DatabaseConnection.class) {
if (instance == null) {
ServerAddress addr = new ServerAddress(cloudhost, 27017);
List<MongoCredential> credentialsList = new ArrayList<MongoCredential>();
MongoCredential credentia = MongoCredential.createCredential(
"test", "test", "test".toCharArray());
credentialsList.add(credentia);
instance = new MongoClient(addr, credentialsList);
}
}
}
return instance;
}
}
PourService.java
@Secured
@Path("pours")
public class PourService {
final static Logger logger = Logger.getLogger(Pour.class);
private static final int POUR_SIZE = 30;
@POST
@Consumes(MediaType.APPLICATION_JSON)
@Produces(MediaType.APPLICATION_JSON)
public Response createPour(String request)
{
WebApiResponse response = new WebApiResponse();
Gson gson = new GsonBuilder().setDateFormat("dd/MM/yyyy HH:mm:ss").create();
String message = "Pour was not created.";
HashMap<String, Object> data = null;
try
{
Pour pour = gson.fromJson(request, Pour.class);
// Storing the pour to
PourRepository pourRepository = new PourRepository();
String id = pourRepository.createPour(pour);
data = new HashMap<String, Object>();
if ("" != id && null != id)
{
data.put("id", id);
message = "Pour was created successfully.";
logger.debug(message);
return response.build(true, message, data, 200);
}
logger.debug(message);
return response.build(false, message, data, 500);
}
catch (Exception e)
{
message = "Error while creating Pour.";
logger.error(message, e);
return response.build(false, message, new Object(),500);
}
}
}
PourDao.java
public class PourDao extends BasicDAO<Pour, String>{
public PourDao(Class<Pour> entityClass, Datastore ds) {
super(entityClass, ds);
}
}
PourRepository.java
public class PourRepository {
private PourDao pourDao;
final static Logger logger = Logger.getLogger(PourRepository.class);
public PourRepository ()
{
try
{
MongoClient mongoClient = DatabaseConnection.getMongoClient();
Datastore ds = new Morphia().map(Pour.class)
.createDatastore(mongoClient, "tilt45");
pourDao = new PourDao(Pour.class,ds);
}
catch (Exception e)
{
logger.error("Error while creating PourDao", e);
}
}
public String createPour (Pour pour)
{
try
{
return pourDao.save(pour).getId().toString();
}
catch (Exception e)
{
logger.error("Error while creating Pour.", e);
return null;
}
}
}
When I work with Mongo + Morphia I get better results using a factory pattern for the Datastore rather than for the MongoClient; for instance, check the following class:
public class DatastoreFactory {
    private final Datastore datastore;

    public DatastoreFactory(String dbHost, int dbPort, String dbName) {
        final Morphia morphia = new Morphia();
        MongoClientOptions.Builder options = MongoClientOptions.builder().socketKeepAlive(true);
        morphia.getMapper().getOptions().setStoreEmpties(true);
        final Datastore store = morphia.createDatastore(new MongoClient(new ServerAddress(dbHost, dbPort), options.build()), dbName);
        store.ensureIndexes();
        this.datastore = store;
    }

    public Datastore getDatastore() {
        return datastore;
    }
}
With that approach, every time you need a datastore you can use the one provided by the factory. Of course, this can be implemented better if you use a framework/library that supports the factory pattern (e.g. HK2 with org.glassfish.hk2.api.Factory), along with singleton binding.
Besides, you can check the documentation of MongoClientOptions's builder method; perhaps you can find better connection control there.
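(Editor's sketch of that HK2 idea, not the answerer's code: wrap the DatastoreFactory above in an org.glassfish.hk2.api.Factory and bind it as a singleton. Also worth noting: the 3.x Java driver defaults to 100 connections per host, which matches the ~100 connections seen in the load test; MongoClientOptions.builder().connectionsPerHost(...) caps that pool.)
public class HK2DatastoreFactory implements Factory<Datastore> {
    // reuses the DatastoreFactory from above; host/port/database are placeholder values
    private final DatastoreFactory factory = new DatastoreFactory("localhost", 27017, "tilt45");

    @Override
    public Datastore provide() {
        return factory.getDatastore();
    }

    @Override
    public void dispose(Datastore instance) {
        // nothing to clean up here; the MongoClient owns the connection pool
    }
}

// Registered e.g. in a Jersey ResourceConfig:
// register(new AbstractBinder() {
//     @Override
//     protected void configure() {
//         bindFactory(HK2DatastoreFactory.class).to(Datastore.class).in(Singleton.class);
//     }
// });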
Setup: Spring application deployed on Weblogic 12c, using JNDI lookup to get a datasource to the Oracle Database.
We have multiple services which will be polling the database regularly for new jobs. In order to prevent two services picking up the same job, we are using a native SELECT FOR UPDATE query in a CrudRepository. The application then takes the resulting job and updates it from WAITING to PROCESSING using the CrudRepository.save() method.
The problem is that I can't seem to get the save() to work within the FOR UPDATE transaction (at least this is my current working theory of what goes wrong), and as a result the entire polling freezes until the default 10-minute timeout occurs. I have tried putting @Transactional (with various propagation flags) basically everywhere, but I'm not able to get it to work (@EnableTransactionManagement is activated and working).
Obviously there must be some basic knowledge I'm missing. Is this even a possible setup? Unfortunately, just using @Transactional with a non-native CrudRepository SELECT query is not possible, as it apparently first makes a SELECT to see if the row is locked or not, and only then makes a new SELECT that locks it. Another service could very well pick up the same job in the meantime, which is why we need it to lock immediately.
Update in relation to @M. Deinum's comment: I should perhaps also mention that it's a setup wherein the central component that does the polling is a library used by all the other services (therefore the library has @SpringBootApplication, as does each service using it, so double component scanning is certainly present). Furthermore, the service has two separate classes for polling depending on the type of service, with a lot of common code shared in an AbstractTransactionHelper class. Below I've aggregated some code for the sake of brevity.
The library's main class:
@SpringBootApplication
@EnableTransactionManagement
@EnableJpaRepositories
public class JobsMain {
public static void initializeJobsMain(){
PersistenceProviderResolverHolder.setPersistenceProviderResolver(new PersistenceProviderResolver() {
@Override
public List<PersistenceProvider> getPersistenceProviders() {
return Collections.singletonList(new HibernatePersistenceProvider());
}
@Override
public void clearCachedProviders() {
//Not quite sure what this should do...
}
});
}
@Bean
public JtaTransactionManager transactionManager(){
return new WebLogicJtaTransactionManager();
}
public DataSource dataSource(){
final JndiDataSourceLookup dsLookup = new JndiDataSourceLookup();
dsLookup.setResourceRef(true);
DataSource dataSource = dsLookup.getDataSource("Jobs");
return dataSource;
}
}
The repository (we're returning a set with only one job as we had some other issues when returning a single object):
public interface JobRepository extends CrudRepository<Job, Integer> {
#Query(value = "SELECT * FROM JOB WHERE JOB.ID IN "
+ "(SELECT ID FROM "
+ "(SELECT * FROM JOB WHERE "
+ "JOB.STATUS = :status1 OR "
+ "JOB.STATUS = :status2 "
+ "ORDER BY JOB.PRIORITY ASC, JOB.CREATED ASC) "
+ "WHERE ROWNUM <= 1) "
+ "FOR UPDATE", nativeQuery = true)
public Set<Job> getNextJob(@Param("status1") String status1, @Param("status2") String status2);
}
The transaction handling class:
@Service
public class JobManagerTransactionHelper extends AbstractTransactionHelper{
    @Transactional
    @Override
    public Job getNextJobToProcess(){
        Set<Job> jobs = null;
        try {
            jobs = jobRepo.getNextJob(Status.DONE.name(), Status.FAILED.name());
        } catch (Exception ex) {
            logger.error(ex);
        }
        return extractSingleJobFromSet(jobs);
    }
}
Update 2: Some more code.
AbstractTransactionHelper:
@Service
public abstract class AbstractTransactionHelper {
    @Autowired
    QdbJobRepository jobRepo;
    @Autowired
    ArchivedJobRepository archive;

    protected Job extractSingleJobFromSet(Set<Job> jobs){
        Job job = null;
        if(jobs != null && !jobs.isEmpty()){
            for(Job candidate : jobs){
                if(this instanceof JobManagerTransactionHelper){
                    updateJob(candidate);
                }
                job = candidate;
            }
        }
        return job;
    }

    protected void updateJob(Job job){
        updateJob(job, Status.PROCESSING, null);
    }

    protected void updateJob(Job job, Status status, String serviceMessage){
        if(job != null){
            if(status != null){
                job.setStatus(status);
            }
            if(serviceMessage != null){
                job.setServiceMessage(serviceMessage);
            }
            saveJob(job);
        }
    }

    protected void saveJob(Job job){
        jobRepo.save(job);
        archive.save(Job.convertJobToArchivedJob(job));
    }
}
Update 4: Threading. newJob() is implemented by each service that uses the library.
@Service
public class JobManager{
@Autowired
private JobManagerTransactionHelper transactionHelper;
@Autowired
JobListener jobListener;
@Autowired
Config config;
protected final AtomicInteger atomicThreadCounter = new AtomicInteger(0);
protected boolean keepPolling;
protected Future<?> futurePoller;
protected ScheduledExecutorService pollService;
protected ThreadPoolExecutor threadPool;
public boolean start(){
if(!keepPolling){
ThreadFactory pollServiceThreadFactory = new ThreadFactoryBuilder()
.setNamePrefix(config.getService() + "ScheduledPollingPool-Thread").build();
ThreadFactory threadPoolThreadFactory = new ThreadFactoryBuilder()
.setNamePrefix(config.getService() + "ThreadPool-Thread").build();
keepPolling = true;
pollService = Executors.newSingleThreadScheduledExecutor(pollServiceThreadFactory);
threadPool = (ThreadPoolExecutor)Executors.newFixedThreadPool(getConfig().getThreadPoolSize(), threadPoolThreadFactory);
futurePoller = pollService.scheduleWithFixedDelay(getPollTask(), 0, getConfig().getPollingFrequency(), TimeUnit.MILLISECONDS);
return true;
}else{
return false;
}
}
protected Runnable getPollTask() {
return new Runnable(){
public void run(){
try{
while(atomicThreadCounter.get() < threadPool.getMaximumPoolSize() &&
threadPool.getActiveCount() < threadPool.getMaximumPoolSize() &&
keepPolling == true){
Job job = transactionHelper.getNextJobToProcess();
if(job != null){
threadPool.submit(getJobHandler(job));
atomicThreadCounter.incrementAndGet();//threadPool.getActiveCount() isn't updated fast enough the first loop
}else{
break;
}
}
}catch(Exception e){
logger.error(e);
}
}
};
}
protected Runnable getJobHandler(final Job job){
return new Runnable(){
public void run(){
try{
atomicThreadCounter.decrementAndGet();
jobListener.newJob(job);
}catch(Exception e){
logger.error(e);
}
}
};
}
}
As it turns out, the problem was the WebLogicJtaTransactionManager. My guess is that the FOR UPDATE resulted in a JPA transaction, but upon updating the object in the database, the WebLogicJtaTransactionManager was used, which failed to find an ongoing JTA transaction. Since we're deploying on WebLogic, we wrongly assumed we had to use the WebLogicJtaTransactionManager.
Either way, exchanging the transaction manager for a JpaTransactionManager (and explicitly setting the EntityManagerFactory and DataSource on it) basically solved all the problems.
@Bean
public PlatformTransactionManager transactionManager() {
JpaTransactionManager jpaTransactionManager = new JpaTransactionManager(entityManagerFactory().getObject());
jpaTransactionManager.setDataSource(dataSource());
jpaTransactionManager.setJpaDialect(new HibernateJpaDialect());
return jpaTransactionManager;
}
This assumes you have also added an EntityManagerFactoryBean, which is needed if you want to use multiple datasources in the same project (which we're doing, but not within single transactions, so there's no need for JTA).
@Bean
public LocalContainerEntityManagerFactoryBean entityManagerFactory() {
HibernateJpaVendorAdapter vendorAdapter = new HibernateJpaVendorAdapter();
LocalContainerEntityManagerFactoryBean factoryBean = new LocalContainerEntityManagerFactoryBean();
factoryBean.setDataSource(dataSource());
factoryBean.setJpaVendorAdapter(vendorAdapter);
factoryBean.setPackagesToScan("my.model");
return factoryBean;
}
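(Editor's note, not part of the original answer: with the JpaTransactionManager in place, Spring Data JPA can also emit SELECT ... FOR UPDATE without a native query by putting a pessimistic lock hint on a repository method, which avoids the read-then-lock behavior described in the question. A sketch, assuming the same Job entity and that its status field is a String; needs org.springframework.data.jpa.repository.Lock/Query and javax.persistence.LockModeType.)
public interface LockingJobRepository extends CrudRepository<Job, Integer> {

    // PESSIMISTIC_WRITE makes the JPA provider issue SELECT ... FOR UPDATE,
    // holding the row lock for the duration of the surrounding @Transactional method.
    @Lock(LockModeType.PESSIMISTIC_WRITE)
    @Query("SELECT j FROM Job j WHERE j.status = :status")
    List<Job> lockJobsByStatus(@Param("status") String status);
}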
I am using the MySQL JDBC Replication Driver com.mysql.jdbc.ReplicationDriver to shift load between Master and Slave.
I am using this connection URL:
jdbc.de.url=jdbc:mysql:replication://master:3306,slave1:3306,slave2:3306/myDatabase?zeroDateTimeBehavior=convertToNull&characterEncoding=UTF-8&roundRobinLoadBalance=true
As soon as I start my application, I only get the data from the point in time at which it was started, as if I were working on a locked snapshot of the database. If I do any CRUD operation, the new data cannot be read back and updates are not visible. MySQL replication itself is working just fine and I can query the correct data from the database.
There is no second-level cache active and I am using Hibernate with pooled connections.
If I use the normal JDBC driver com.mysql.jdbc.Driver, everything works just fine. So why am I always getting the same result sets, no matter what I change in the database?
Update 1
It seems like it is related to my aspect:
@Aspect
public class ReadOnlyConnectionInterceptor implements Ordered {
private class ReadOnly implements ReturningWork<Object> {
ProceedingJoinPoint pjp;
public ReadOnly(ProceedingJoinPoint pjp) {
this.pjp = pjp;
}
@Override
public Object execute(Connection connection) throws SQLException {
boolean autoCommit = connection.getAutoCommit();
boolean readOnly = connection.isReadOnly();
try {
connection.setAutoCommit(false);
connection.setReadOnly(true);
return pjp.proceed();
} catch (Throwable e) {
//if an exception was raised, return it
return e;
} finally {
// restore state
connection.setReadOnly(readOnly);
connection.setAutoCommit(autoCommit);
}
}
}
private int order;
private EntityManager entityManager;
public void setOrder(int order) {
this.order = order;
}
@Override
public int getOrder() {
return order;
}
@PersistenceContext
public void setEntityManager(EntityManager entityManager) {
this.entityManager = entityManager;
}
#Around("#annotation(readOnlyConnection)")
public Object proceed(ProceedingJoinPoint pjp,
ReadOnlyConnection readOnlyConnection) throws Throwable {
Session hibernateSession = entityManager.unwrap(Session.class);
Object result = hibernateSession.doReturningWork(new ReadOnly(pjp));
if (result == null) {
return result;
}
//If the returned object extends Throwable, throw it
if (Throwable.class.isAssignableFrom(result.getClass())) {
throw (Throwable) result;
}
return result;
}
}
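(For completeness, an editor's sketch of the marker annotation this aspect matches on; the post doesn't show it, but @Around("@annotation(readOnlyConnection)") requires a runtime-retained method annotation of this shape:)
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
public @interface ReadOnlyConnection {
    // marker only: methods carrying it are routed through the read-only aspect
}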
I annotate all my read-only requests with @ReadOnlyConnection. Before, I had all my service-layer methods annotated with it, even though they might call each other. Now I am only annotating the request method, and I am at the state where I get the database updates on the second call:
1) Doing initial call => getting data as expected
2) Changing data in the database
3) Doing same call again => getting the exact same data from the first call
4) Doing same call again => getting the changed data
The thing with connection.setAutoCommit(false) is that it seems not to commit when it is set back with connection.setAutoCommit(true). So after adding the following line to the aspect, everything worked as expected again:
try {
connection.setAutoCommit(false);
connection.setReadOnly(true);
return pjp.proceed();
} catch (Throwable e) {
return e;
} finally {
// restore state
connection.commit(); // THIS LINE
connection.setReadOnly(readOnly);
connection.setAutoCommit(autoCommit);
}