I'm using AWS Keyspaces (Cassandra 3.11.2) from Apache Flink running on AWS EMR. Sometimes the query below throws an exception, and the same code running on AWS Lambda throws the same NoHost exception. What did I do wrong?
String query = "INSERT INTO TEST (field1, field2) VALUES(?, ?)";
PreparedStatement prepared = CassandraConnector.prepare(query);
int i = 0;
BoundStatement bound = prepared.bind().setString(i++, "Field1").setString(i++, "Field2")
.setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM);
ResultSet rs = CassandraConnector.execute(bound);
at com.datastax.oss.driver.api.core.NoNodeAvailableException.copy(NoNodeAvailableException.java:40)
at com.datastax.oss.driver.internal.core.util.concurrent.CompletableFutures.getUninterruptibly(CompletableFutures.java:149)
at com.datastax.oss.driver.internal.core.cql.CqlRequestSyncProcessor.process(CqlRequestSyncProcessor.java:53)
at com.datastax.oss.driver.internal.core.cql.CqlRequestSyncProcessor.process(CqlRequestSyncProcessor.java:30)
at com.datastax.oss.driver.internal.core.session.DefaultSession.execute(DefaultSession.java:230)
at com.datastax.oss.driver.api.core.cql.SyncCqlSession.execute(SyncCqlSession.java:53)
at com.test.manager.connectors.CassandraConnector.execute(CassandraConnector.java:16)
at com.test.repository.impl.BackupRepositoryImpl.insert(BackupRepositoryImpl.java:36)
at com.test.service.impl.BackupServiceImpl.insert(BackupServiceImpl.java:18)
at com.test.flink.function.AsyncBackupFunction.processMessage(AsyncBackupFunction.java:78)
at com.test.flink.function.AsyncBackupFunction.lambda$asyncInvoke$0(AsyncBackupFunction.java:35)
at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
at java.util.concurrent.CompletableFuture$AsyncSupply.exec(CompletableFuture.java:1596)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
This is my code:
CassandraConnector.java:
Because the cost of initializing a PreparedStatement is high, I cache it.
public class CassandraConnector {
    private static final ConcurrentHashMap<String, PreparedStatement> preparedStatementCache =
            new ConcurrentHashMap<>();

    public static ResultSet execute(BoundStatement bound) {
        CqlSession session = CassandraManager.getSessionInstance();
        return session.execute(bound);
    }

    public static ResultSet execute(String query) {
        CqlSession session = CassandraManager.getSessionInstance();
        return session.execute(query);
    }

    public static PreparedStatement prepare(String query) {
        PreparedStatement result = preparedStatementCache.get(query);
        if (result == null) {
            CqlSession session = CassandraManager.getSessionInstance();
            result = session.prepare(query);
            preparedStatementCache.putIfAbsent(query, result);
        }
        return result;
    }
}
CassandraManager.java:
I'm using double-checked locking to create the session singleton.
public class CassandraManager {
    private static final Logger logger = LoggerFactory.getLogger(CassandraManager.class);
    private static final String SSL_CASSANDRA_PASSWORD = "password";
    private static volatile CqlSession session;

    static {
        try {
            initSession();
        } catch (Exception e) {
            logger.error("Error CassandraManager getSessionInstance", e);
        }
    }

    private static void initSession() {
        List<InetSocketAddress> contactPoints = Collections.singletonList(InetSocketAddress.createUnresolved(
                "cassandra.ap-southeast-1.amazonaws.com", 9142));
        DriverConfigLoader loader = DriverConfigLoader.fromClasspath("application.conf");
        Long start = BaseHelper.getTime();
        session = CqlSession.builder().addContactPoints(contactPoints).withConfigLoader(loader)
                .withAuthCredentials(AppUtil.getProperty("cassandra.username"),
                        AppUtil.getProperty("cassandra.password"))
                .withSslContext(getSSLContext()).withLocalDatacenter("ap-southeast-1")
                .withKeyspace(AppUtil.getProperty("cassandra.keyspace")).build();
        logger.info("End connect: " + (new Date().getTime() - start));
    }

    public static CqlSession getSessionInstance() {
        if (session == null || session.isClosed()) {
            synchronized (CassandraManager.class) {
                if (session == null || session.isClosed()) {
                    initSession();
                }
            }
        }
        return session;
    }

    public static SSLContext getSSLContext() {
        InputStream in = null;
        try {
            KeyStore ks = KeyStore.getInstance("JKS");
            in = CassandraManager.class.getClassLoader().getResourceAsStream("cassandra_truststore.jks");
            ks.load(in, SSL_CASSANDRA_PASSWORD.toCharArray());
            TrustManagerFactory tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
            tmf.init(ks);
            SSLContext ctx = SSLContext.getInstance("TLS");
            ctx.init(null, tmf.getTrustManagers(), null);
            return ctx;
        } catch (Exception e) {
            logger.error("Error CassandraConnector getSSLContext", e);
        } finally {
            if (in != null) {
                try {
                    in.close();
                } catch (IOException e) {
                    logger.error("", e);
                }
            }
        }
        return null;
    }
}
application.conf:
datastax-java-driver {
    basic.request {
        timeout = 5 seconds
        consistency = LOCAL_ONE
    }
    advanced.connection {
        max-requests-per-connection = 1024
        pool {
            local.size = 1
            remote.size = 1
        }
    }
    advanced.reconnect-on-init = true
    advanced.reconnection-policy {
        class = ExponentialReconnectionPolicy
        base-delay = 1 second
        max-delay = 60 seconds
    }
    advanced.retry-policy {
        class = DefaultRetryPolicy
    }
    advanced.protocol {
        version = V4
    }
    advanced.heartbeat {
        interval = 30 seconds
        timeout = 1 second
    }
    advanced.session-leak.threshold = 8
    advanced.metadata.token-map.enabled = false
}
There are two scenarios where the driver reports NoNodeAvailableException:
Nodes are unresponsive/unavailable and the driver has marked all of them as down.
All the contact points provided are invalid.
If some inserts work but eventually run into NoNodeAvailableException, that indicates to me that the nodes are getting overloaded and eventually become unresponsive, so the driver no longer picks a coordinator because they are all marked as "down".
If none of the requests work at all, it means the contact points are unreachable or unresolvable, so the driver can't connect to the cluster at all. Cheers!
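Edit: if it helps to tell these two cases apart, you can dump the driver's view of each node from the session metadata (a minimal sketch against the 4.x driver API; the helper method name is mine):

import com.datastax.oss.driver.api.core.CqlSession;

// Log each node's endpoint, DC, state (UP/DOWN/UNKNOWN) and open connection count.
public static void logNodeStates(CqlSession session) {
    session.getMetadata().getNodes().values().forEach(node ->
            System.out.printf("%s dc=%s state=%s connections=%d%n",
                    node.getEndPoint(), node.getDatacenter(),
                    node.getState(), node.getOpenConnections()));
}

If every node shows state DOWN, you are in the first scenario; if the node map is empty or the endpoints never resolved, you are in the second.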
The NoHostAvailableException is a client-side exception thrown by the open-source driver after it has retried the available hosts. The open-source driver encapsulates the root cause of those retries, which can be confusing.
I suggest first improving your observability by setting up these CloudWatch metrics. You can follow this prebuilt CloudFormation template to get started; it only takes a few seconds.
Here is a setup for keyspace and table metrics for Amazon Keyspaces using CloudWatch:
https://github.com/aws-samples/amazon-keyspaces-cloudwatch-cloudformation-templates
You can also replace the retry policy with the examples found in this helper project. The retry policy in that project will either retry or throw the original exception, which removes the occurrences of NoHostAvailableException and gives you better transparency into your application. Here's the link to the GitHub repo: https://github.com/aws-samples/amazon-keyspaces-java-driver-helpers
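For example, the helper project wires its Keyspaces-aware policy in through the standard driver config. A sketch of what that looks like in application.conf (the class name is taken from that repo's README, so verify it against the current version):

datastax-java-driver {
    advanced.retry-policy {
        class = com.aws.ssa.keyspaces.retry.AmazonKeyspacesRetryPolicy
        max-attempts = 3
    }
}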
If you're using a private VPC endpoint, you will want to add the following permissions to enable more entries in the system.peers table.
Amazon Keyspaces recently announced new functionality that provides more connection points when establishing a session through private VPC endpoints.
Here is a link about how Keyspaces now automatically optimizes client connections made through AWS PrivateLink to improve availability and read/write performance: https://aws.amazon.com/about-aws/whats-new/2021/07/amazon-keyspaces-for-apache-cassandra-now-automatically-optimi/
This link talks about using Amazon Keyspaces with interface VPC endpoints: https://docs.aws.amazon.com/keyspaces/latest/devguide/vpc-endpoints.html . To enable this new functionality you will need to grant the additional DescribeNetworkInterfaces and DescribeVpcEndpoints permissions:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ListVPCEndpoints",
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeNetworkInterfaces",
                "ec2:DescribeVpcEndpoints"
            ],
            "Resource": "*"
        }
    ]
}
I suspect that this:
.withLocalDatacenter(AppUtil.getProperty("cassandra.localdatacenter"))
Pulls back a data center name which either does not match the keyspace replication definition or the configured data center name:
nodetool status | grep Datacenter
Basically, if your connection is defined with a local data center which does not exist, it will still try to read/write with replicas in that data center. This will fail, because it obviously cannot find nodes in a non-existent data center.
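Note that you can't run nodetool against Amazon Keyspaces, but you can ask the endpoint which data center it reports and compare that with what you pass to withLocalDatacenter (for Keyspaces this is the AWS region, e.g. ap-southeast-1):

SELECT data_center FROM system.local;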
Similar question here: NoHostAvailable error in cqlsh console
Related
I have a small Spring Boot app that connects to one or more topics on ActiveMQ, which are set in the application's application.properties file on startup, and then sends these messages on to a database.
This all works fine, but I am having problems implementing failover: the app will try to reconnect, but after a certain number of retries the application process just exits, preventing further retries (ideally, I would like the app to retry forever until killed manually or until ActiveMQ becomes available again). I have tried explicitly setting the connection options (such as maxReconnectAttempts) in the connection URL (using url.options in application.properties) to -1/0/99999, but none of these seem to be right, as the behavior is the same each time. From the advice on Apache's own reference page, I would also expect the default behavior to work this way.
If anyone has any advice to stop the app from quitting, I would be very grateful! The bits of my code that I think matter are below:
@Configuration
public class AmqConfig {
    private static final Logger LOG = LogManager.getLogger(AmqConfig.class);
    private static final String LOG_PREFIX = "[AmqConfig] ";

    private String clientId;
    private static ArrayList<String> amqUrls = new ArrayList<>();
    private static String amqConnectionUrl;
    private static Integer numSubs;
    private static ArrayList<String> destinations = new ArrayList<>();

    @Autowired
    DatabaseService databaseService;

    public AmqConfig(@Value("${amq.urls}") String[] amqUrl,
                     @Value("${amq.options}") String amqOptions,
                     @Value("${tocCodes}") String[] tocCodes,
                     @Value("${amq.numSubscribers}") Integer numSubs,
                     @Value("${clientId}") String clientId) throws UnknownHostException {
        Arrays.asList(amqUrl).forEach(url -> {
            amqUrls.add("tcp://" + url);
        });
        String amqServerAddress = "failover:(" + String.join(",", amqUrls) + ")";
        String options = Strings.isNullOrEmpty(amqOptions) ? "" : "?" + amqOptions;
        this.amqConnectionUrl = amqServerAddress + options;
        this.numSubs = Optional.ofNullable(numSubs).orElse(4);
        this.clientId = Strings.isNullOrEmpty(clientId) ? InetAddress.getLocalHost().getHostName() : clientId;
        String topic = "Consumer." + this.clientId + ".VirtualTopic.Feed";
        if (tocCodes.length > 0) {
            Arrays.asList(tocCodes).forEach(s -> destinations.add(topic + "_" + s));
        } else { // no TOC codes = connecting to default feed
            destinations.add(topic);
        }
    }

    @Bean
    public ActiveMQConnectionFactory connectionFactory() throws JMSException {
        LOG.info("{}Connecting to AMQ at {}", LOG_PREFIX, amqConnectionUrl);
        LOG.info("{}Using client id {}", LOG_PREFIX, clientId);
        ActiveMQConnectionFactory connectionFactory =
                new ActiveMQConnectionFactory(amqConnectionUrl);
        Connection conn = connectionFactory.createConnection();
        conn.setClientID(clientId);
        conn.setExceptionListener(new AmqExceptionListener());
        conn.start();
        destinations.forEach(destinationName -> {
            try {
                for (int i = 0; i < numSubs; i++) {
                    Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
                    Destination destination = session.createQueue(destinationName);
                    MessageConsumer messageConsumer = session.createConsumer(destination);
                    messageConsumer.setMessageListener(new MessageReceiver(databaseService, destinationName));
                }
            } catch (JMSException e) {
                LOG.error("{}Error setting up queue # {}", LOG_PREFIX, destinationName);
                LOG.error(e.getMessage());
            }
        });
        return connectionFactory;
    }
}
public class MessageReceiver implements MessageListener, ExceptionListener {
    public static final Logger LOG = LogManager.getLogger(MessageReceiver.class);
    private static final String LOG_PREFIX = "[Message Receiver] ";

    private DatabaseService databaseService;

    public MessageReceiver(DatabaseService databaseService, String destinationName) {
        this.databaseService = databaseService;
        LOG.info("{}Creating MessageReceiver for queue with destination: {}", LOG_PREFIX, destinationName);
    }

    @Override
    public void onMessage(Message message) {
        String messageText = null;
        if (message instanceof TextMessage) {
            TextMessage tm = (TextMessage) message;
            try {
                messageText = tm.getText();
            } catch (JMSException e) {
                LOG.error("{}Error getting message from AMQ", LOG_PREFIX, e);
            }
        } else if (message instanceof ActiveMQMessage) {
            messageText = message.toString();
        } else {
            LOG.warn("{}Unrecognised message type, cannot process", LOG_PREFIX);
            LOG.warn(message.toString());
        }
        try {
            databaseService.sendMessageNoResponse(messageText);
        } catch (Exception e) {
            LOG.error("{}Unable to acknowledge message from AMQ. Message: {}", LOG_PREFIX, messageText, e);
        }
    }
}
public class AmqExceptionListener implements ExceptionListener {
    public static final Logger LOG = LogManager.getLogger(AmqExceptionListener.class);
    private static final String LOG_PREFIX = "[AmqExceptionListener] ";

    @Override
    public void onException(JMSException e) {
        LOG.error("{}Exception thrown by ActiveMQ", LOG_PREFIX, e);
    }
}
The console output I get from my application is just the below (apologies, it is not much to go on):
[2019-12-12 14:43:30.292] [WARN ] Transport (tcp://[address]:61616) failed , attempting to automatically reconnect: java.io.EOFException
[2019-12-12 14:43:51.098] [WARN ] Failed to connect to [tcp://[address]:61616] after: 10 attempt(s) continuing to retry.
Process finished with exit code 0
Very interesting question!
Configuring maxReconnectAttempts=-1 will cause the connection attempts to be retried forever, but I feel the problems here are as follows:
You are trying to connect to ActiveMQ while creating the bean at app startup. If ActiveMQ is not running when the app is starting up, bean creation will retry the connection forever, causing a timeout and not letting the app start.
Also, when ActiveMQ stops running midway, you are not reattempting the connection, because it is done inside @Bean and will only happen at app startup.
Hence the connection shouldn't happen at bean creation time; it can instead be done after the app is up (for example inside a @PostConstruct block or an application-ready listener), as in the sketch below.
These are just pointers; you'll need to take it forward from here.
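As a sketch of that last pointer (the class name, broker URL and options here are mine, not a verified drop-in fix): keep bean creation cheap, connect once the application is up, and put maxReconnectAttempts=-1 in the failover URL so the transport itself retries forever:

import javax.jms.Connection;
import javax.jms.JMSException;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.springframework.boot.context.event.ApplicationReadyEvent;
import org.springframework.context.event.EventListener;
import org.springframework.stereotype.Component;

@Component
public class AmqStarter {
    // hypothetical broker address; the failover options tell the transport to retry forever
    private static final String URL =
            "failover:(tcp://localhost:61616)?maxReconnectAttempts=-1&initialReconnectDelay=1000";

    @EventListener(ApplicationReadyEvent.class)
    public void connect() throws JMSException {
        // runs after startup, so a down broker cannot block or kill application start
        Connection conn = new ActiveMQConnectionFactory(URL).createConnection();
        conn.start();
        // ... create sessions and MessageConsumers here, as in connectionFactory() above
    }
}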
Hope this helps!
Good luck!
I have written a RESTful API using Apache Jersey, with MongoDB as my backend, using Morphia (v1.3.4) to map and persist POJOs to the database. I tried to follow "1 application, 1 connection" in my API as recommended everywhere, but I am not sure I succeeded. I run my API in Tomcat 8 and ran mongostat to watch the connections. At start, mongostat showed 1 connection to the MongoDB server. I tested my API using Postman and it worked fine. I then created a load test in SoapUI simulating 100 users per second, and mongostat showed 103 connections. Here is the gif which shows this behaviour.
I am not sure why there are so many connections. The interesting fact is that the number of Mongo connections is directly proportional to the number of users I create in SoapUI. Why is that? I found other similar questions, and I think I have implemented their suggestions:
Mongo connection leak with morphia
Spring data mongodb not closing mongodb connections
My code looks like this.
DatabaseConnection.java
// Some imports
public class DatabaseConnection {
    private static volatile MongoClient instance;
    private static String cloudhost = "localhost";

    private DatabaseConnection() { }

    // double-checked locking on the volatile field; the method itself need not be synchronized
    public static MongoClient getMongoClient() {
        if (instance == null) {
            synchronized (DatabaseConnection.class) {
                if (instance == null) {
                    ServerAddress addr = new ServerAddress(cloudhost, 27017);
                    List<MongoCredential> credentialsList = new ArrayList<>();
                    MongoCredential credential = MongoCredential.createCredential(
                            "test", "test", "test".toCharArray());
                    credentialsList.add(credential);
                    instance = new MongoClient(addr, credentialsList);
                }
            }
        }
        return instance;
    }
}
PourService.java
#Secured
#Path("pours")
public class PourService {
final static Logger logger = Logger.getLogger(Pour.class);
private static final int POUR_SIZE = 30;
#POST
#Consumes(MediaType.APPLICATION_JSON)
#Produces(MediaType.APPLICATION_JSON)
public Response createPour(String request)
{
WebApiResponse response = new WebApiResponse();
Gson gson = new GsonBuilder().setDateFormat("dd/MM/yyyy HH:mm:ss").create();
String message = "Pour was not created.";
HashMap<String, Object> data = null;
try
{
Pour pour = gson.fromJson(request, Pour.class);
// Storing the pour to
PourRepository pourRepository = new PourRepository();
String id = pourRepository.createPour(pour);
data = new HashMap<String, Object>();
if ("" != id && null != id)
{
data.put("id", id);
message = "Pour was created successfully.";
logger.debug(message);
return response.build(true, message, data, 200);
}
logger.debug(message);
return response.build(false, message, data, 500);
}
catch (Exception e)
{
message = "Error while creating Pour.";
logger.error(message, e);
return response.build(false, message, new Object(),500);
}
}
PourDao.java
public class PourDao extends BasicDAO<Pour, String> {
    public PourDao(Class<Pour> entityClass, Datastore ds) {
        super(entityClass, ds);
    }
}
PourRepository.java
public class PourRepository {
    private PourDao pourDao;
    final static Logger logger = Logger.getLogger(PourRepository.class);

    public PourRepository() {
        try {
            MongoClient mongoClient = DatabaseConnection.getMongoClient();
            Datastore ds = new Morphia().map(Pour.class)
                    .createDatastore(mongoClient, "tilt45");
            pourDao = new PourDao(Pour.class, ds);
        } catch (Exception e) {
            logger.error("Error while creating PourDao", e);
        }
    }

    public String createPour(Pour pour) {
        try {
            return pourDao.save(pour).getId().toString();
        } catch (Exception e) {
            logger.error("Error while creating Pour.", e);
            return null;
        }
    }
}
When I work with Mongo + Morphia, I get better results using a factory pattern for the Datastore rather than for the MongoClient; for instance, check the following class:
public class DatastoreFactory {
    private final Datastore datastore;
    public DatastoreFactory(String dbHost, int dbPort, String dbName) {
        final Morphia morphia = new Morphia();
        MongoClientOptions.Builder options = MongoClientOptions.builder().socketKeepAlive(true);
        morphia.getMapper().getOptions().setStoreEmpties(true);
        final Datastore store = morphia.createDatastore(
                new MongoClient(new ServerAddress(dbHost, dbPort), options.build()), dbName);
        store.ensureIndexes();
        this.datastore = store;
    }
    public Datastore getDatastore() {
        return datastore;
    }
}
With that approach, every time you need a Datastore you can use the one provided by the factory. Of course, this can be implemented better if you use a framework/library that supports the factory pattern (e.g. HK2 with org.glassfish.hk2.api.Factory) together with singleton binding.
Besides, you can check the documentation of the MongoClientOptions builder; perhaps you can find better connection control options there.
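For instance, with Jersey's built-in HK2 you could register such a factory as a singleton. A sketch (it assumes DatastoreFactory implements org.glassfish.hk2.api.Factory<Datastore> with provide()/dispose() methods):

import javax.inject.Singleton;
import org.glassfish.hk2.utilities.binding.AbstractBinder;
import org.glassfish.jersey.server.ResourceConfig;

// Resources can then simply @Inject Datastore instead of building their own.
public class AppConfig extends ResourceConfig {
    public AppConfig() {
        register(new AbstractBinder() {
            @Override
            protected void configure() {
                bindFactory(DatastoreFactory.class)
                        .to(Datastore.class)
                        .in(Singleton.class);
            }
        });
    }
}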
I am using DataStax Java driver 3.1.0 to connect to a Cassandra cluster, and my Cassandra cluster version is 2.0.10.
Below is the singleton class I am using to connect to the cluster.
public class CassUtil {
    private static final Logger LOGGER = Logger.getInstance(CassUtil.class);

    private Session session;
    private Cluster cluster;

    private static class Holder {
        private static final CassUtil INSTANCE = new CassUtil();
    }

    public static CassUtil getInstance() {
        return Holder.INSTANCE;
    }

    private CassUtil() {
        List<String> servers = TestUtils.HOSTNAMES;
        String username =
                TestUtils.loadCredentialFile().getProperty(TestUtils.USERNAME);
        String password =
                TestUtils.loadCredentialFile().getProperty(TestUtils.PASSWORD);
        // is this the right setting?
        PoolingOptions poolingOptions = new PoolingOptions();
        poolingOptions.setConnectionsPerHost(HostDistance.LOCAL, 4, 10).setConnectionsPerHost(
                HostDistance.REMOTE, 2, 4);
        Builder builder = Cluster.builder();
        cluster =
                builder
                        .addContactPoints(servers.toArray(new String[servers.size()]))
                        .withRetryPolicy(DowngradingConsistencyRetryPolicy.INSTANCE)
                        .withPoolingOptions(poolingOptions)
                        .withReconnectionPolicy(new ConstantReconnectionPolicy(100L))
                        .withLoadBalancingPolicy(
                                DCAwareRoundRobinPolicy
                                        .builder()
                                        .withLocalDc(
                                                !TestUtils.isProduction() ? "DC2" : TestUtils.getCurrentLocation()
                                                        .get().name().toLowerCase()).build())
                        .withCredentials(username, password).build();
        try {
            session = cluster.connect("testkeyspace");
            StringBuilder sb = new StringBuilder();
            Set<Host> allHosts = cluster.getMetadata().getAllHosts();
            for (Host host : allHosts) {
                sb.append("[");
                sb.append(host.getDatacenter());
                sb.append(host.getRack());
                sb.append(host.getAddress());
                sb.append("]");
            }
            LOGGER.logInfo("connected: " + sb.toString());
        } catch (NoHostAvailableException ex) {
            LOGGER.logError("error= ", ExceptionUtils.getStackTrace(ex));
        } catch (Exception ex) {
            LOGGER.logError("error= " + ExceptionUtils.getStackTrace(ex));
        }
    }

    public void shutdown() {
        LOGGER.logInfo("Shutting down the whole cassandra cluster");
        if (null != session) {
            session.close();
        }
        if (null != cluster) {
            cluster.close();
        }
    }

    public Session getSession() {
        if (session == null) {
            throw new IllegalStateException("No connection initialized");
        }
        return session;
    }

    public Cluster getCluster() {
        return cluster;
    }
}
What settings do I need to use so that the driver talks to local Cassandra nodes first and only talks to remote nodes if the local ones are down? Also, is my pooling configuration in the above code right?
By default the DataStax drivers will only connect to nodes in the local DC. If you do not use withLocalDc, the driver will attempt to discern the local data center from the DC of the contact point it is able to connect to.
If you want the driver to fail over to hosts in remote data center(s), you should use withUsedHostsPerRemoteDc, i.e.:
cluster.builder()
    .withLoadBalancingPolicy(DCAwareRoundRobinPolicy.builder()
        .withLocalDc("DC1")
        .withUsedHostsPerRemoteDc(3).build())
With this configuration, the driver will establish connections to 3 hosts in each remote DC, and only send queries to them if all hosts in the local data center are down.
There are other strategies for failing over to remote data centers. For example, you could run your application clients in the same physical data centers as your C* data centers, and then, when a physical data center fails, fail over at a higher level (such as at your load balancer).
Also, is my pooling configuration in the above code right?
I think what you have is fine. The defaults are fine too.
I am trying to use Cassandra as the database for an app I am working on. The app is a NetBeans Platform app.
In order to start the Cassandra server on my localhost I issue Runtime.getRuntime().exec(command),
where command is the string to start the Cassandra server, and I then connect to the Cassandra server with the DataStax driver. However I get the error:
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1:9042 (com.datastax.driver.core.TransportException: [/127.0.0.1:9042] Cannot connect))
at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:199)
at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:80)
at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1154)
at com.datastax.driver.core.Cluster.getMetadata(Cluster.java:318)
at org.dhviz.boot.DatabaseClient.connect(DatabaseClient.java:43)
at org.dhviz.boot.Installer.restored(Installer.java:67)
....
I figured out that the server requires some time to start, so I added the line Thread.sleep(MAX_DELAY_SERVER), which seems to resolve the problem.
Is there a more elegant way to solve this issue?
Thanks.
Code is below.
public class Installer extends ModuleInstall {
    private final int MAX_DELAY_SERVER = 12000;
    //private static final String pathSrc = "/org/dhviz/resources";

    @Override
    public void restored() {
        /*
        -*-*-*-*-*DESCRIPTION*-*-*-*-*-*
        IMPLEMENT THE CASSANDRA DATABASE
        *********************************
         */
        DatabaseClient d = new DatabaseClient();
        // launch an instance of the cassandra server
        d.loadDatabaseServer();
        // wait for MAX_DELAY_SERVER milliseconds before launching the other instructions
        try {
            Thread.sleep(MAX_DELAY_SERVER);
            Logger.getLogger(Installer.class.getName()).log(Level.INFO, "waited MAX_DELAY_SERVER milliseconds before connecting to the database");
        } catch (InterruptedException ex) {
            Exceptions.printStackTrace(ex);
            Logger.getLogger(Installer.class.getName()).log(Level.INFO, "exception in thread sleep");
        }
        d.connect("127.0.0.1");
    }
}
public class DatabaseClient {
    private Cluster cluster;
    private Session session;
    private ShellCommand shellCommand;
    private final String defaultKeyspace = "dhviz";
    final private String LOAD_CASSANDRA = "launchctl load /usr/local/Cellar/cassandra/2.1.2/homebrew.mxcl.cassandra.plist";
    final private String UNLOAD_CASSANDRA = "launchctl unload /usr/local/Cellar/cassandra/2.1.2/homebrew.mxcl.cassandra.plist";

    public DatabaseClient() {
        shellCommand = new ShellCommand();
    }

    public void connect(String node) {
        // this connects to the cassandra database
        cluster = Cluster.builder()
                .addContactPoint(node).build();
        // cluster.getConfiguration().getSocketOptions().setConnectTimeoutMillis(12000);
        Metadata metadata = cluster.getMetadata();
        System.out.printf("Connected to cluster: %s\n",
                metadata.getClusterName());
        for (Host host : metadata.getAllHosts()) {
            System.out.printf("Datacenter: %s; Host: %s; Rack: %s\n",
                    host.getDatacenter(), host.getAddress(), host.getRack());
        }
        session = cluster.connect();
        Logger.getLogger(DatabaseClient.class.getName()).log(Level.INFO, "connected to server");
    }

    public void loadDatabaseServer() {
        if (shellCommand == null) {
            shellCommand = new ShellCommand();
        }
        shellCommand.executeCommand(LOAD_CASSANDRA);
        Logger.getLogger(DatabaseClient.class.getName()).log(Level.INFO, "database cassandra loaded");
    }

    public void unloadDatabaseServer() {
        if (shellCommand == null) {
            shellCommand = new ShellCommand();
        }
        shellCommand.executeCommand(UNLOAD_CASSANDRA);
        Logger.getLogger(DatabaseClient.class.getName()).log(Level.INFO, "database cassandra unloaded");
    }
}
If you are calling cassandra without any parameters in Runtime.getRuntime().exec(command), it's likely that this spawns cassandra as a background process and returns before the cassandra node has fully started and is listening.
I'm not sure why you are attempting to embed cassandra in your app, but you may find using cassandra-unit useful for providing a mechanism to embed cassandra in your app. It's primarily used for running tests that require a cassandra instance, but it may also meet your use case.
The wiki provides a helpful example on how to start an embedded cassandra instance using cassandra-unit:
EmbeddedCassandraServerHelper.startEmbeddedCassandra();
In my experience cassandra-unit will wait until the server is up and listening before returning. You could also write a method that waits until a socket is in use, using logic opposite of this answer.
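A minimal version of that wait, instead of a fixed Thread.sleep (my own sketch): poll the CQL port until something accepts a TCP connection:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Returns true once host:port accepts connections, false if timeoutMillis elapses first.
public static boolean waitForPort(String host, int port, long timeoutMillis) {
    long deadline = System.currentTimeMillis() + timeoutMillis;
    while (System.currentTimeMillis() < deadline) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), 1000);
            return true; // Cassandra is listening
        } catch (IOException notYetUp) {
            try {
                Thread.sleep(500);
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
    }
    return false;
}

You would then call waitForPort("127.0.0.1", 9042, 60000) after loadDatabaseServer() and before connect().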
I have changed the code to the following, taking inspiration from the answers here. Thanks for your help!
boolean serverConnected = false;
while (!serverConnected) {
    try {
        try {
            Thread.sleep(MAX_DELAY_SERVER);
        } catch (InterruptedException ex) {
            Exceptions.printStackTrace(ex);
        }
        cluster = Cluster.builder()
                .addContactPoint(node).build();
        cluster.getConfiguration().getSocketOptions().setConnectTimeoutMillis(50000);
        session = cluster.connect();
        serverConnected = true;
    } catch (NoHostAvailableException ex) {
        Logger.getLogger(DatabaseClient.class.getName()).log(Level.INFO, "trying connection to cassandra server...");
        cluster.close(); // release the failed Cluster before retrying
    }
}
In my web application I'm using stateless sessions with Hibernate to get better performance on my inserts and updates.
It was working fine with the H2 database (the one used by Play Framework in dev mode).
But when I test it with MySQL I get the following exception:
ERROR ~ Lock wait timeout exceeded; try restarting transaction
ERROR ~ HHH000315: Exception executing batch [Lock wait timeout exceeded; try restarting transaction]
Here is the code :
public static void update() {
    Session session = (Session) JPA.em().getDelegate();
    StatelessSession stateless = session.getSessionFactory().openStatelessSession();
    try {
        stateless.beginTransaction();
        // Fetch all products
        {
            List<ProductType> list = ProductType.retrieveAllWithHistory();
            for (ProductType pt : list) {
                updatePrice(pt, stateless);
            }
        }
        // Fetch all raw materials
        {
            List<RawMaterialType> list = RawMaterialType.retrieveAllWithHistory();
            for (RawMaterialType rm : list) {
                updatePrice(rm, stateless);
            }
        }
    } catch (Exception ex) {
        play.Logger.error(ex.getMessage());
        ExceptionLog.log(ex, Thread.currentThread());
    } finally {
        stateless.getTransaction().commit();
        stateless.close();
    }
}
private static void updatePrice(ProductType pt, StatelessSession stateless) {
    pt.priceDelta = computeDelta();
    pt.unitPrice = computePrice();
    stateless.update(pt);
    PriceHistory ph = new PriceHistory(pt, price);
    stateless.insert(ph);
}

private static void updatePrice(RawMaterialType rm, StatelessSession stateless) {
    rm.priceDelta = computeDelta();
    rm.unitPrice = computePrice();
    stateless.update(rm);
    PriceHistory ph = new GoodPriceHistory(rm, price);
    stateless.insert(ph);
}
In this example I have three simple entities (ProductType, RawMaterialType and PriceHistory).
computeDelta and computePrice are just algorithmic functions with no DB access.
The retrieveAllWithHistory functions fetch some data from the database using Play Framework model functions.
So this code retrieves some data, edits some, creates new records and finally saves everything.
Why do I get a lock exception with MySQL and no exception with H2?
I'm not sure why you have the commit in a finally block. Give this structure a try:
try {
    factory.getCurrentSession().beginTransaction();
    // ... do your work here ...
    factory.getCurrentSession().getTransaction().commit();
} catch (RuntimeException e) {
    factory.getCurrentSession().getTransaction().rollback();
    throw e; // or display error message
}
Also, it might be helpful for you to check this documentation.