JDBC-JobStoreCMT lock when scheduling

I am using WebLogic + Spring + Quartz.
Quartz is configured to use JobStoreCMT.
I noticed that JobStoreCMT is acquiring a DB lock on the Quartz tables when jobs are scheduled.
Below is the JobStoreCMT snippet:
protected Object executeInLock(
        String lockName,
        TransactionCallback txCallback) throws JobPersistenceException {
    boolean transOwner = false;
    Connection conn = null;
    try {
        if (lockName != null) {
            // If we aren't using db locks, then delay getting DB connection
            // until after acquiring the lock since it isn't needed.
            if (getLockHandler().requiresConnection()) {
                conn = getConnection();
            }
            transOwner = getLockHandler().obtainLock(conn, lockName);
        }
        if (conn == null) {
            conn = getConnection();
        }
        return txCallback.execute(conn);
    } finally {
        try {
            releaseLock(conn, LOCK_TRIGGER_ACCESS, transOwner);
        } finally {
            cleanupConnection(conn);
        }
    }
}
After this method completes, I can see the triggers and jobs I scheduled inserted into the Quartz tables in the DB.
My question is: why does Quartz need a lock at the DB level at this phase?
I would see a need for the lock when the jobs start executing, finish, etc.
Thanks

I found some settings which solved my issue:
I set setLockOnInsert to false, because it is true by default.
public void setLockOnInsert(boolean lockOnInsert)
Whether or not to obtain locks when inserting new jobs/triggers. Defaults to true, which is safest - some db's (such as MS SQLServer) seem to require this to avoid deadlocks under high load, while others seem to do fine without.
Setting this property to false will provide a significant performance increase during the addition of new jobs and triggers.
I also set org.quartz.jobStore.acquireTriggersWithinLock to false (the default), not to true as I had configured it initially.
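For reference, here is a minimal sketch of how both settings could be applied through Spring's SchedulerFactoryBean (assuming Java-based Spring configuration; the bean wiring and data source are illustrative, not taken from the original setup):

import java.util.Properties;
import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.quartz.SchedulerFactoryBean;

@Configuration
public class QuartzConfig {

    @Bean
    public SchedulerFactoryBean schedulerFactoryBean(DataSource dataSource) {
        SchedulerFactoryBean factory = new SchedulerFactoryBean();
        factory.setDataSource(dataSource);

        Properties props = new Properties();
        // Skip the DB lock while inserting new jobs/triggers (defaults to true).
        props.setProperty("org.quartz.jobStore.lockOnInsert", "false");
        // Acquire triggers outside the lock, which is the default behaviour.
        props.setProperty("org.quartz.jobStore.acquireTriggersWithinLock", "false");
        factory.setQuartzProperties(props);
        return factory;
    }
}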

Related

How many roundtrips are made to a MongoDB server when using transactions?

I wonder how many roundtrips are made to the server when using transactions in MongoDB. For example, if the Java driver is used like this:
ClientSession clientSession = client.startSession();
TransactionOptions txnOptions = TransactionOptions.builder()
        .readPreference(ReadPreference.primary())
        .readConcern(ReadConcern.LOCAL)
        .writeConcern(WriteConcern.MAJORITY)
        .build();
TransactionBody<String> txnBody = new TransactionBody<String>() {
    public String execute() {
        MongoCollection<Document> coll1 = client.getDatabase("mydb1").getCollection("foo");
        MongoCollection<Document> coll2 = client.getDatabase("mydb2").getCollection("bar");
        coll1.insertOne(clientSession, new Document("abc", 1));
        coll2.insertOne(clientSession, new Document("xyz", 999));
        return "Inserted into collections in different databases";
    }
};
try {
    clientSession.withTransaction(txnBody, txnOptions);
} catch (RuntimeException e) {
    // some error handling
} finally {
    clientSession.close();
}
In this case two documents are stored in a transaction:
coll1.insertOne(clientSession, new Document("abc", 1));
coll2.insertOne(clientSession, new Document("xyz", 999));
Are the "insert operations" stacked up and sent to the server in one roundtrip or are two calls (or more?) actually made to the server?
Each insert is sent separately. You can use bulk writes to batch write operations together.
The commit at the end is also a separate operation.
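As a sketch of the bulk-write suggestion (note that bulkWrite targets a single collection, so the two inserts above, which go to different databases, could not be combined this way; the collection and documents here are illustrative):

import java.util.Arrays;
import java.util.List;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.InsertOneModel;
import com.mongodb.client.model.WriteModel;
import org.bson.Document;

// Two inserts into the same collection, batched into a single roundtrip.
MongoCollection<Document> coll = client.getDatabase("mydb1").getCollection("foo");
List<WriteModel<Document>> ops = Arrays.asList(
        new InsertOneModel<>(new Document("abc", 1)),
        new InsertOneModel<>(new Document("abc", 2)));
coll.bulkWrite(clientSession, ops);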

Connection pooling in a multi-tenant app: shared pool vs pool per tenant

I'm building a multi-tenant REST server application with Spring Boot 2.x, Hibernate 5.x, Spring Data REST, and MySQL 5.7.
Spring Boot 2.x uses Hikari for connection pooling.
I'm going to use a DB-per-tenant approach, so every tenant will have its own database.
I created my MultiTenantConnectionProvider in this way:
@Component
@Profile("prod")
public class MultiTenantConnectionProviderImpl implements MultiTenantConnectionProvider {
    private static final long serialVersionUID = 3193007611085791247L;
    private Logger log = LogManager.getLogger();
    private Map<String, HikariDataSource> dataSourceMap = new ConcurrentHashMap<String, HikariDataSource>();

    @Autowired
    private TenantRestClient tenantRestClient;

    @Autowired
    private PasswordEncrypt passwordEncrypt;

    @Override
    public void releaseAnyConnection(Connection connection) throws SQLException {
        connection.close();
    }

    @Override
    public Connection getAnyConnection() throws SQLException {
        Connection connection = getDataSource(TenantIdResolver.TENANT_DEFAULT).getConnection();
        return connection;
    }

    @Override
    public Connection getConnection(String tenantId) throws SQLException {
        Connection connection = getDataSource(tenantId).getConnection();
        return connection;
    }

    @Override
    public void releaseConnection(String tenantId, Connection connection) throws SQLException {
        log.info("releaseConnection " + tenantId);
        connection.close();
    }

    @Override
    public boolean supportsAggressiveRelease() {
        return false;
    }

    @Override
    public boolean isUnwrappableAs(Class unwrapType) {
        return false;
    }

    @Override
    public <T> T unwrap(Class<T> unwrapType) {
        return null;
    }

    public HikariDataSource getDataSource(@NotNull String tenantId) throws SQLException {
        // Note: containsKey/put is not atomic, so two concurrent first requests
        // for the same tenant could each create a pool; computeIfAbsent would close that gap.
        if (dataSourceMap.containsKey(tenantId)) {
            return dataSourceMap.get(tenantId);
        } else {
            HikariDataSource dataSource = createDataSource(tenantId);
            dataSourceMap.put(tenantId, dataSource);
            return dataSource;
        }
    }

    public HikariDataSource createDataSource(String tenantId) throws SQLException {
        log.info("Create Datasource for tenant {}", tenantId);
        try {
            Database database = tenantRestClient.getDatabase(tenantId);
            DatabaseInstance databaseInstance = tenantRestClient.getDatabaseInstance(tenantId);
            if (database != null && databaseInstance != null) {
                HikariConfig hikari = new HikariConfig();
                String driver = "";
                String options = "";
                switch (databaseInstance.getType()) {
                case MYSQL:
                    driver = "jdbc:mysql://";
                    options = "?useLegacyDatetimeCode=false&serverTimezone=UTC&useUnicode=yes&characterEncoding=UTF-8&useSSL=false";
                    break;
                default:
                    driver = "jdbc:mysql://";
                    options = "?useLegacyDatetimeCode=false&serverTimezone=UTC&useUnicode=yes&characterEncoding=UTF-8&useSSL=false";
                }
                hikari.setJdbcUrl(driver + databaseInstance.getHost() + ":" + databaseInstance.getPort() + "/" + database.getName() + options);
                hikari.setUsername(database.getUsername());
                hikari.setPassword(passwordEncrypt.decryptPassword(database.getPassword()));
                // MySQL optimizations, see
                // https://github.com/brettwooldridge/HikariCP/wiki/MySQL-Configuration
                hikari.addDataSourceProperty("cachePrepStmts", true);
                hikari.addDataSourceProperty("prepStmtCacheSize", "250");
                hikari.addDataSourceProperty("prepStmtCacheSqlLimit", "2048");
                hikari.addDataSourceProperty("useServerPrepStmts", "true");
                hikari.addDataSourceProperty("useLocalSessionState", "true");
                hikari.addDataSourceProperty("useLocalTransactionState", "true");
                hikari.addDataSourceProperty("rewriteBatchedStatements", "true");
                hikari.addDataSourceProperty("cacheResultSetMetadata", "true");
                hikari.addDataSourceProperty("cacheServerConfiguration", "true");
                hikari.addDataSourceProperty("elideSetAutoCommits", "true");
                hikari.addDataSourceProperty("maintainTimeStats", "false");
                hikari.setMinimumIdle(3);
                hikari.setMaximumPoolSize(5);
                hikari.setIdleTimeout(30000);
                hikari.setPoolName("JPAHikari_" + tenantId);
                // mysql wait_timeout 600 seconds
                hikari.setMaxLifetime(580000);
                hikari.setLeakDetectionThreshold(60 * 1000);
                HikariDataSource dataSource = new HikariDataSource(hikari);
                return dataSource;
            } else {
                throw new SQLException(String.format("DB not found for tenant %s!", tenantId));
            }
        } catch (Exception e) {
            throw new SQLException(e.getMessage());
        }
    }
}
In my implementation I read the tenantId and get information about the database instance from a central management system.
I create a new pool for each tenant, and I cache the pool in order to avoid recreating it each time.
I read this interesting question, but my question is quite different.
I'm thinking of using AWS (both for the server instance and the RDS DB instance).
Let's hypothesize a concrete scenario in which I have 100 tenants.
The application is management/point-of-sale software. It will be used only by agents. Let's say each tenant has an average of 3 agents working concurrently at any moment.
With those numbers in mind, and according to this article, the first thing I realize is that it seems hard to have a pool for each tenant.
For 100 tenants I would like to think that a db.r4.large (2 vCores, 15.25 GB RAM, and fast disk access) with Aurora should be enough (about 150€/month).
According to the formula to size a connection pool:
connections = ((core_count * 2) + effective_spindle_count)
I should have 2 cores * 2 + 1 = 5 connections in the pool.
From what I understand, this should be the maximum number of connections in the pool to maximize performance on that DB instance.
1st solution
So my first question is pretty simple: how can I create a separate connection pool for each tenant, given that I should use only 5 connections in total?
It seems impossible to me. Even if I assigned 2 connections to each tenant, I would have 200 connections to the DBMS!
According to this question, on a db.r4.large instance I could have at most 1300 connections, so it seems the instance should handle the load quite well.
But according to the article I mentioned before, it seems bad practice to use hundreds of connections to the db:
If you have 10,000 front-end users, having a connection pool of 10,000 would be shear insanity. 1000 still horrible. Even 100 connections, overkill. You want a small pool of a few dozen connections at most, and you want the rest of the application threads blocked on the pool awaiting connections.
2nd solution
The second solution I have in mind is to share a connection pool between tenants on the same DBMS. This means that all 100 tenants will use the same Hikari pool of 5 connections (honestly, that seems quite low to me).
Is this the right way to maximize performance and reduce the response time of the application?
Do you have a better idea of how to manage this scenario with Spring, Hibernate, and MySQL (hosted on AWS RDS Aurora)?
Most definitely, opening a connection pool per tenant is a very bad idea. All you need is a pool of connections shared across all users.
So the first step would be to find the load, or anticipate what it would be, based on some projections.
Decide how much latency is acceptable, what the burst/peak-time traffic looks like, etc.
Finally, arrive at the number of connections you will need and decide on the number of instances required. For instance, if your peak usage is 10k requests per second and each query takes 10 ms, then by Little's law you will need about 10,000/s * 0.01 s = 100 open connections to keep up.
Implement it without any binding to a user, i.e. the same pool shared across all of them, unless you have a case for grouping, say, premium/basic users into two sets of pools.
Finally, as you are doing this in AWS: if you need more than one instance based on point 3, see if you can autoscale up/down based on load to save costs.
Check these out for some comparison metrics.
This one is probably the most interesting in terms of spike demand:
https://github.com/brettwooldridge/HikariCP/blob/dev/documents/Welcome-To-The-Jungle.md
Some more...
https://github.com/brettwooldridge/HikariCP
https://www.wix.engineering/blog/how-does-hikaricp-compare-to-other-connection-pools
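To make the shared-pool suggestion concrete, here is a minimal sketch of a Hibernate MultiTenantConnectionProvider backed by a single Hikari pool, assuming a schema-per-tenant layout on one MySQL instance (the class name, the default schema, and switching schemas via setCatalog are illustrative assumptions, not part of the original answer):

import java.sql.Connection;
import java.sql.SQLException;
import com.zaxxer.hikari.HikariDataSource;
import org.hibernate.engine.jdbc.connections.spi.MultiTenantConnectionProvider;

public class SharedPoolConnectionProvider implements MultiTenantConnectionProvider {

    private static final long serialVersionUID = 1L;

    private final HikariDataSource dataSource; // one pool shared by all tenants
    private final String defaultSchema;        // schema to reset to on release

    public SharedPoolConnectionProvider(HikariDataSource dataSource, String defaultSchema) {
        this.dataSource = dataSource;
        this.defaultSchema = defaultSchema;
    }

    @Override
    public Connection getAnyConnection() throws SQLException {
        return dataSource.getConnection();
    }

    @Override
    public void releaseAnyConnection(Connection connection) throws SQLException {
        connection.close();
    }

    @Override
    public Connection getConnection(String tenantId) throws SQLException {
        Connection connection = dataSource.getConnection();
        // On MySQL, setCatalog issues a USE <schema>, pointing the borrowed
        // connection at the tenant's schema for this unit of work.
        connection.setCatalog(tenantId);
        return connection;
    }

    @Override
    public void releaseConnection(String tenantId, Connection connection) throws SQLException {
        connection.setCatalog(defaultSchema); // reset before returning to the pool
        connection.close();
    }

    @Override
    public boolean supportsAggressiveRelease() {
        return false;
    }

    @Override
    public boolean isUnwrappableAs(Class unwrapType) {
        return false;
    }

    @Override
    public <T> T unwrap(Class<T> unwrapType) {
        return null;
    }
}

The trade-off versus pool-per-tenant is that one noisy tenant can starve the shared pool, but the total connection count stays fixed no matter how many tenants you add.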
Following a previous Q&A, the strategy I selected for a multi-tenant environment is (surprisingly) a connection pool per tenant:
Strategy 2: each tenant has its own schema and its own connection pool in a single database
strategy 2 is more flexible and safe: every tenant cannot consume more than a given amount of connections (and this amount can be configured per tenant if you need it)
I suggest putting HikariCP's formula aside here, and using a smaller number of tenants, such as 10 (dynamic size?), with a low connection pool size, such as 2.
Focus more on the traffic you expect; note that the 10-connection pool size comment in HikariCP Pool Size may well suffice:
10 as a nice round number. Seem low? Give it a try, we'd wager that you could easily handle 3000 front-end users running simple queries at 6000 TPS on such a setup.
See also this comment, which indicates that hundreds of connections would be too much:
, but it would have to be a massive load to require 100s.
By @EssexBoy

Synchronizing Hibernate inserts using guarded block

I am trying to solve a problem with conflicting concurrent database inserts via Hibernate in MySQL.
I have a piece of code that can easily be executed by multiple threads at the same time. It checks the database for the existence of a record, and if the record does not exist, a new one is inserted. The same insert-if-nonexistent operation is performed on a related child record. I get a ConstraintViolationException if two threads try to persist the child record at the same time: both threads see that the record does not exist at the moment they query it, so both attempt to save the same record, which violates a unique constraint, and one of them fails.
I am trying to synchronize the query-insert operations at the application level using a guarded block, so that a thread waits for another thread to finish inserting the records before querying the database. But even though I can see that the synchronization works, querying for the record still returns no results, even when the record has been persisted by another thread. So the constraint violation still happens.
I am using Hibernate 5.1.0
I am managing database transactions manually
I have enabled query cache and second-level cache globally, but am using CacheMode.REFRESH for the SELECT queries
I am not using optimistic or pessimistic database locking or row versioning.
Here is a code example:
In each synchronized operation I try to persist a Product if it does not exist, and a related parent Supplier if it does not exist.
public class UpdateProcessor extends HttpServlet {
    // Indicator used for synchronizing read-insert operations
    public static Boolean newInsertInProgress = false;

    @Override
    public void doPost(HttpServletRequest request, HttpServletResponse response) throws IOException {
        Session hbSession = null;
        Transaction tx = null;
        try {
            hbSession = HibernateUtils.getNewSession();
            UpdateProcessor.waitForInsert(); // if there is an insert in progress, wait for it to finish
            UpdateProcessor.notifyInsertStarted(); // obtain lock
            tx = hbSession.beginTransaction();
            // sku and name are read from the request (extraction omitted here)
            Product existingProduct = findProductBySKU(sku);
            if (existingProduct == null) {
                Product newProduct = new Product();
                newProduct.setSKU(sku);
                Supplier existingSupplier = findSupplierByName(name);
                if (existingSupplier == null) {
                    Supplier newSupplier = new Supplier();
                    newSupplier.setName(name);
                    db.save(newSupplier);
                    newProduct.setSupplier(newSupplier);
                } else {
                    newProduct.setSupplier(existingSupplier);
                }
                db.save(newProduct);
            }
            tx.commit();
        } catch (Exception t) {
            // <rollback transaction>
            response.sendError(500);
        } finally {
            // Safeguard to avoid thread deadlock - always release the lock, if obtained
            if (UpdateProcessor.newInsertInProgress) {
                UpdateProcessor.notifyInsertFinished(); // release lock and notify next thread
            }
            // <close session>
        }
    }

    private static synchronized void waitForInsert() {
        if (!UpdateProcessor.newInsertInProgress) {
            log("Skipping wait - thread " + Thread.currentThread().getId() + " - " + System.currentTimeMillis());
            return;
        }
        boolean loggedEntering = false; // declared outside the loop so the message is logged once
        while (UpdateProcessor.newInsertInProgress) {
            if (!loggedEntering) {
                log("Entering wait - thread " + Thread.currentThread().getId() + " - " + System.currentTimeMillis());
                loggedEntering = true;
            }
            try {
                UpdateProcessor.class.wait();
            } catch (InterruptedException e) {}
        }
        log("Exiting wait - thread " + Thread.currentThread().getId() + " - " + System.currentTimeMillis());
    }

    private static synchronized void notifyInsertStarted() {
        UpdateProcessor.newInsertInProgress = true;
        UpdateProcessor.class.notify();
        log("Notify start - thread " + Thread.currentThread().getId() + " - " + System.currentTimeMillis());
    }

    private static synchronized void notifyInsertFinished() {
        UpdateProcessor.newInsertInProgress = false;
        UpdateProcessor.class.notify();
        log("Notify finish - thread " + Thread.currentThread().getId() + " - " + System.currentTimeMillis());
    }
}
The output after making the requests concurrently:
Skipping wait - thread 254 - 1478171162713
Notify start - thread 254 - 1478171162713
Entering wait - thread 255 - 1478171162713
Entering wait - thread 256 - 1478171162849
Notify finish - thread 254 - 1478171163050
Exiting wait - thread 255 - 1478171163051
Notify start - thread 255 - 1478171163051
Entering wait - thread 256 - 1478171163051
Error - thread 255:
org.hibernate.exception.ConstraintViolationException: could not execute statement
...
Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException: Duplicate entry '532-supplier-name-1' for key 'supplier_name_uniq'
Persisting the new supplier record still threw an exception in thread 255 because the unique constraint (id, name) is violated.
Why does the SELECT still return no records after a synchronized insert? Is a guarded block a correct way to avoid the multi-insert problem?
Based on Mechkov's answer above:
Short answer: I needed to include the Hibernate session creation in the synchronized piece of code.
Long answer:
The guarded block properly synchronized the query-insert block, but the problem was that even though one thread finishes persisting the records, the second thread cannot see the change in the database until a fresh Hibernate session is created. The effects of concurrent database modifications are not immediately visible to all threads; an up-to-date view of the database is only obtained by creating the session AFTER the insert has been made in the other thread. Including the session creation in the synchronized code ensures that this is the case.
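In terms of the code above, the fix amounts to reordering the first lines of doPost so that the session is opened only once the lock is held (a sketch using the question's own names):

// The session is now opened AFTER the lock is obtained, so it cannot
// predate a commit made by the thread that held the lock before us.
UpdateProcessor.waitForInsert();            // wait for any in-progress insert
UpdateProcessor.notifyInsertStarted();      // obtain the lock
hbSession = HibernateUtils.getNewSession(); // moved inside the guarded section
tx = hbSession.beginTransaction();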

Creating a MongoDB healthcheck (in Dropwizard)

Not necessarily specific to Dropwizard, but for the life of me I can't figure out how to easily create a healthcheck for MongoDB. This is in Java, using version 3.3.0 of MongoDB's own Java driver.
I was hoping there would be a method that doesn't change the state of the database if it succeeds, but throws an Exception when the query (or connection, or whatever) fails, in order to report a healthy or unhealthy state. Ideally I'd perform a find, but this doesn't throw an Exception as far as I can tell.
I would just list all collections in the database, like:
MongoClient client = new MongoClient(addr, opts);
MongoDatabase db = client.getDatabase(database);
try {
    MongoIterable<String> allCollections = db.listCollectionNames();
    for (String collection : allCollections) {
        System.out.println("MongoDB collection: " + collection);
    }
} catch (Exception me) {
    // problems with mongodb
}
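Wrapped in a Dropwizard healthcheck, this could look like the sketch below (the class name and the use of the ping command instead of listing collections are my assumptions; ping is a state-free roundtrip that throws if the server is unreachable):

import com.codahale.metrics.health.HealthCheck;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;

public class MongoHealthCheck extends HealthCheck {

    private final MongoDatabase db;

    public MongoHealthCheck(MongoDatabase db) {
        this.db = db;
    }

    @Override
    protected Result check() {
        try {
            // ping does not modify the database; it fails if the server is down.
            db.runCommand(new Document("ping", 1));
            return Result.healthy();
        } catch (Exception e) {
            return Result.unhealthy(e.getMessage());
        }
    }
}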

JDBC call causes UI to hang

Can somebody please help optimize the code below?
The problem statement is: I am trying to populate the struct array by looping through the List<Project>, which is causing a performance issue. Is there a way to do it without the loop?
The code below works as expected, but the UI hangs because of the loop.
public BigDecimal saveCSV(String dataSource, int rollNumber, String username, List<Project> projects) throws SQLException {
    Connection conn = getConnection(dataSource);
    Connection nativeConn = doGetNativeConnection(conn);
    nativeConn.setAutoCommit(false);
    CallableStatement cs = nativeConn.prepareCall(ProjectConstants.PROC);
    ArrayDescriptor des = ArrayDescriptor.createDescriptor("PROJECT_DETAILS_TYPE", nativeConn);
    Object[] data = projects.toArray();
    Array array_to_pass = new ARRAY(des, nativeConn, data);
    STRUCT[] structArrayOfProjects = new STRUCT[projects.size()];
    Object[] projObjectArray = null;
    for (int i = 0; i < projects.size(); ++i) {
        Project proj = projects.get(i);
        projObjectArray = new Object[] { proj.name, proj.activity };
        // Note: the descriptor is recreated on every iteration; it could be
        // created once before the loop.
        StructDescriptor desc = StructDescriptor.createDescriptor("PROJECT_DETAILS_TYPE", nativeConn);
        STRUCT structprojects = new STRUCT(desc, nativeConn, projObjectArray);
        structArrayOfProjects[i] = structprojects;
    }
    ArrayDescriptor projectTypeArrayDesc = ArrayDescriptor.createDescriptor("PROJECT_DETAILS_TAB_TYPE", nativeConn);
    // Note: arrayOfProjects is built from the STRUCTs but never passed below;
    // array_to_pass (built from projects.toArray()) is what is actually bound.
    ARRAY arrayOfProjects = new ARRAY(projectTypeArrayDesc, nativeConn, structArrayOfProjects);
    cs.setArray(1, array_to_pass);
    cs.setInt(2, rollNumber);
    cs.setString(3, username);
    cs.registerOutParameter(4, OracleTypes.ARRAY, "NUMBER_TAB_TYPE");
    cs.registerOutParameter(5, OracleTypes.ARRAY, "PROJECTS_ERROR_TAB_TYPE");
    cs.execute();
    nativeConn.commit();
    Array value = cs.getArray(4);
    BigDecimal[] projDetailsId = (BigDecimal[]) value.getArray();
    BigDecimal rmt_id = null;
    try {
        rmt_id = projDetailsId[0];
    } catch (Exception e) {
        e.printStackTrace();
    }
    return rmt_id;
}
Use a worker thread to perform DB tasks and the UI thread to update your GUI.
Doing I/O and CPU-intensive tasks on the UI thread is discouraged.
As you didn't specify what kind of user interface you are using, I assume Swing; if so, read this guide on how to handle such tasks.
UPDATE
After the OP's comment that the code runs in a Spring MVC environment, here is a suggestion.
The same logic applies to applications deployed into servlet containers.
When you have a long-running task in a request thread, you should use an ExecutorService to create an asynchronous task and return HTTP 202 immediately.
Then you need some polling method to periodically request the completion status (or use a WebSocket if possible).
Here are some examples : here, here or here.
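A minimal sketch of that pattern in Spring MVC (assuming the question's saveCSV method is reachable from the controller; the endpoint paths, pool size, and taskId bookkeeping are illustrative):

import java.math.BigDecimal;
import java.util.List;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ProjectUploadController {

    private final ExecutorService executor = Executors.newFixedThreadPool(4);
    private final Map<String, Future<BigDecimal>> tasks = new ConcurrentHashMap<>();

    @PostMapping("/projects")
    public ResponseEntity<String> saveAsync(@RequestBody List<Project> projects) {
        String taskId = UUID.randomUUID().toString();
        // Run the slow JDBC call off the request thread.
        tasks.put(taskId, executor.submit(() -> saveCSV("ds", 1, "user", projects)));
        // 202 Accepted: work started; poll the status endpoint for completion.
        return ResponseEntity.accepted().body(taskId);
    }

    @GetMapping("/projects/{taskId}")
    public ResponseEntity<String> status(@PathVariable String taskId) throws Exception {
        Future<BigDecimal> task = tasks.get(taskId);
        if (task == null) {
            return ResponseEntity.notFound().build();
        }
        return task.isDone()
                ? ResponseEntity.ok("done: " + task.get())
                : ResponseEntity.ok("in progress");
    }
}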
