Recover from idle in transaction with postgresql and BasicDataSource - java

I run postgresql with a transactional application where the stack is:
postgresql
tomcat
hibernate/spring
This is a production application with many customers connected at once. Each customer has its own PostgreSQL database, and each customer also has many users.
Occasionally, one of the customer databases locks up and I see idle in transaction tied to that customer's processes:
postgres: customer1 customer1 127.0.0.1(59738) idle in transaction
When one database locks up, the other databases continue to work fine. I cannot get this to unlock without restarting the server application.
The problem is often triggered by long-running reports that the customer runs. I believe it is a locking/blocking issue, with other users accessing the same data in the customer database.
The problem happens rarely and I have never been able to reproduce it, but when it does happen it is a serious issue. Mostly I just want to recover from it.
PostgreSQL itself seems to function fine when this occurs; for example, I can connect to the database with psql and run queries. So I think the problem, or rather the solution, centers on the data source.
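Before getting to the pool itself: when this happens, the stuck backends can in principle be killed server-side without restarting anything. Below is a rough sketch of that idea in plain JDBC; it assumes PostgreSQL 9.2+ (for the pid, state, and state_change columns of pg_stat_activity and for pg_terminate_backend) and a superuser login, so treat it as illustration rather than a confirmed fix.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

// Rough sketch: terminate backends that have sat "idle in transaction"
// for more than five minutes in one customer database. Column names
// assume PostgreSQL 9.2+; older releases used procpid/current_query.
public class IdleTxnReaper {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:postgresql://localhost:5432/customer1";
        try (Connection con = DriverManager.getConnection(url, "postgres", "password");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(
                     "SELECT pid, pg_terminate_backend(pid) AS killed " +
                     "FROM pg_stat_activity " +
                     "WHERE datname = 'customer1' " +
                     "  AND state = 'idle in transaction' " +
                     "  AND state_change < now() - interval '5 minutes'")) {
            while (rs.next()) {
                System.out.println("backend " + rs.getInt("pid")
                        + " terminated: " + rs.getBoolean("killed"));
            }
        }
    }
}

That said, I would rather have the data source recover on its own.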
I use the apache commons BasicDataSource.
<bean id="customer1DataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
<property name="driverClassName" value="org.postgresql.Driver"/>
<property name="url" value="jdbc:postgresql://localhost:5432/customer1"/>
<property name="username" value="customer1"/>
<property name="password" value="password"/>
</bean>
So, does anyone know of a setting on BasicDataSource that will kill these idle in transaction connections, throw an exception, or let them recover somehow?
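For what it's worth, DBCP's BasicDataSource does expose abandoned-connection cleanup (removeAbandoned, removeAbandonedTimeout, logAbandoned), which reclaims connections a caller has held longer than the timeout. Whether it catches this particular hang is not certain, so the sketch below (shown programmatically; the same names work as bean properties in the XML above) is just those knobs applied to the pool, not a confirmed fix.

import org.apache.commons.dbcp.BasicDataSource;

// Sketch: the customer1 pool from the XML above, plus DBCP 1.x's
// abandoned-connection settings. "Abandoned" means borrowed and not
// returned within removeAbandonedTimeout seconds; reclamation runs
// when the pool is close to exhaustion, not on a fixed schedule.
public class Customer1DataSourceFactory {
    public static BasicDataSource create() {
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("org.postgresql.Driver");
        ds.setUrl("jdbc:postgresql://localhost:5432/customer1");
        ds.setUsername("customer1");
        ds.setPassword("password");
        ds.setRemoveAbandoned(true);       // reclaim long-held connections
        ds.setRemoveAbandonedTimeout(300); // seconds a connection may be held
        ds.setLogAbandoned(true);          // log where the connection was borrowed
        return ds;
    }
}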

Related

Connecting to Oracle Wallet from Cloud web application

I have recently started deploying my web applications to Google Cloud Platform. Fortunately, I've solved every annoying error, exception, and problem on my own by researching on Stack Overflow and other platforms. My currently deployed application establishes a connection to my real Oracle database, which is located in Oracle Cloud Infrastructure. While running my web app on localhost it connects, of course, because the config file points to the wallet folder in my file system. But now, in the cloud, I don't know where to store my wallet so I can reference it from my Hibernate config XML. I also don't know if it's possible to reference somewhere other than the filesystem, like https://blablafileupload.com/mywalletfolder.
I'll provide my config file below. Can you help me if you know how to do this, and also tell me the best place to store such database wallets? (I guess storage in the same location as my deployment is a good place, but I don't know how.)
-<bean class="com.mchange.v2.c3p0.ComboPooledDataSource" destroy-method="close" id="myDataSource">
<property value="oracle.jdbc.driver.OracleDriver" name="driverClass"/>
<property value="jdbc:oracle:thin:#oraclesql_medium?TNS_ADMIN=/Users/user/Desktop/Fuad/Wallet_OracleSQL/" name="jdbcUrl"/>
<property value="TABLE" name="user"/>
<property value="********" name="password"/>
<!-- these are connection pool properties for C3P0 -->
<property value="5" name="minPoolSize"/>
<property value="20" name="maxPoolSize"/>
<property value="30000" name="maxIdleTime"/>
</bean>
As you can see on the 3rd line, it refers to the wallet folder in my filesystem. (I want to store that somewhere else and make the XML refer to it over the Internet.)
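One unconfirmed approach: ship the wallet with the deployment (or fetch it to local disk at startup) and point the driver at it through the oracle.net.tns_admin system property, so the path is resolved at runtime instead of being hard-coded in the URL. A hypothetical sketch follows; the ORACLE_WALLET_DIR environment variable and the paths are made up.

// Hypothetical bootstrap: resolve the wallet directory from an
// environment variable on the cloud host and hand it to the Oracle
// thin driver before the pool is created. oracle.net.tns_admin is
// the system-property counterpart of ?TNS_ADMIN=... in the URL.
public class WalletBootstrap {
    public static void init() {
        String walletDir = System.getenv("ORACLE_WALLET_DIR"); // assumed env var
        if (walletDir != null) {
            System.setProperty("oracle.net.tns_admin", walletDir);
        }
        // The jdbcUrl can then drop the TNS_ADMIN parameter:
        // jdbc:oracle:thin:@oraclesql_medium
    }
}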

how to set particular value for c3p0's property in hibernate?

I'm using Hibernate for my web application and it's working fine. I have set the connection pooling properties as below.
<property name="hibernate.c3p0.min_size">5</property>
<property name="hibernate.c3p0.max_size">20</property>
<property name="hibernate.c3p0.timeout">300</property>
<property name="hibernate.c3p0.max_statements">50</property>
<property name="hibernate.c3p0.idle_test_period">3000</property>
I have set min_size=5, max_size=20, max_statements=50, but it could just as well be min_size=1, max_size=100, max_statements=500. On what basis should I set these values? I have read some tutorials about Hibernate connection pooling but didn't get any specific idea of how to choose them.
That totally depends on how much load your application has. To check how much DB activity is happening on the c3p0 side, you have to monitor the internal stats of its pool objects, and you need JMX to see those stats; based on them you can manage and configure the pool. Please check the link below:
Configuring and Managing c3p0 via JMX
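If full JMX tooling is more than you need, the same numbers are also reachable in code. A minimal sketch against c3p0's runtime API (method names as in recent 0.9.x releases):

import com.mchange.v2.c3p0.C3P0Registry;
import com.mchange.v2.c3p0.PooledDataSource;

// Minimal sketch: print live stats for every c3p0 pool in the JVM.
// These are the same numbers the JMX MBeans expose.
public class PoolStatsDumper {
    public static void dump() throws Exception {
        for (Object o : C3P0Registry.getPooledDataSources()) {
            PooledDataSource pds = (PooledDataSource) o;
            System.out.println("busy=" + pds.getNumBusyConnectionsDefaultUser()
                    + ", idle=" + pds.getNumIdleConnectionsDefaultUser()
                    + ", total=" + pds.getNumConnectionsDefaultUser());
        }
    }
}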
I would also recommend checking out HikariCP, as it is much better than c3p0.
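For comparison, a pool sized like the c3p0 settings above looks roughly like this in HikariCP (URL and credentials are placeholders; note that idleTimeout is in milliseconds):

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

// Sketch: a HikariCP pool mirroring the c3p0 sizing above.
public class HikariPoolFactory {
    public static HikariDataSource create() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://localhost:5432/mydb"); // placeholder
        config.setUsername("user");     // placeholder
        config.setPassword("password"); // placeholder
        config.setMinimumIdle(5);       // roughly c3p0's min_size
        config.setMaximumPoolSize(20);  // roughly c3p0's max_size
        config.setIdleTimeout(300_000); // ms; c3p0's timeout of 300 is in seconds
        return new HikariDataSource(config);
    }
}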

Is it possible to combine a container managed and application managed entitymanager in a bean?

I am using container-managed transactions in my Java EE application but, as I understand it, container-managed entity managers lack support for batch inserts. Now I have a case where I will insert a lot of data into the DB. Is it possible, in some way, to combine a container-managed entity manager with an application-managed entity manager in a bean?
If so, I could make a method in my bean that commits the data after I have called entitymanager.persist(myEntity); several times, making it a batch insert.
But to get that working, I would have to set @TransactionManagement(TransactionManagementType.BEAN) for the whole class/bean, making the whole bean application-managed. I want my other methods to stay container-managed, with just the one method making batch inserts being application-managed.
Is that possible or are there any other approaches for cases like this?
JDBC batching is a cross-cutting concern and you can get it working for all entity manager configurations.
First you need to set the following Hibernate properties:
<property name="hibernate.order_updates" value="true"/>
<property name="hibernate.order_inserts" value="true"/>
<property name="hibernate.jdbc.batch_versioned_data" value="true"/>
<property name="hibernate.jdbc.fetch_size" value="20"/>
<property name="hibernate.jdbc.batch_size" value="50"/>
Also make sure you use SEQUENCE or TABLE identifier generators, since IDENTITY disables JDBC batching.
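With those settings in place, the usual pattern is to flush and clear the persistence context every batch_size entities, so each chunk goes out as one JDBC batch and the context does not grow without bound. A minimal sketch follows; MyEntity is a made-up placeholder, using SEQUENCE per the caveat above.

import java.util.List;
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;

// Made-up placeholder entity; SEQUENCE keeps JDBC batching enabled.
@Entity
class MyEntity {
    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE)
    Long id;
}

public class BatchInsertSketch {
    private static final int BATCH_SIZE = 50; // match hibernate.jdbc.batch_size

    public void insertAll(EntityManager em, List<MyEntity> entities) {
        for (int i = 0; i < entities.size(); i++) {
            em.persist(entities.get(i));
            if ((i + 1) % BATCH_SIZE == 0) {
                em.flush(); // send the pending inserts as one JDBC batch
                em.clear(); // detach them so the persistence context stays small
            }
        }
        // the surrounding transaction (container- or bean-managed) commits
    }
}

The same loop works with both container-managed and application-managed entity managers, which is the point above.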

Persistent Sessions with JDBC and Tomcat

We have a cluster of Tomcat servers that share a common web server running mod_jk. We currently use sticky sessions to take care of session handling, but we would like to move to JDBC session sharing. Does anyone have a good resource or step-by-step solution to deal with this?
I was not sure if this question was meant for stackoverflow, serverfault, or DBA, but here it is. :)
EDIT:
I think the content of my question must be confusing. The sessions to which I am referring are user sessions (JSESSIONID), not connections to the database. What I want to do is use the database to handle the user sessions so that when one server in the cluster goes down, the transition to another server is seamless to the user. Right now, the user is logged out when an error on the server occurs.
Most of this is available in Tomcat documentation, see Persistent Manager Implementation.
You can also look at this.
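One prerequisite that applies to any persistent session manager, JDBC store included: every object placed in the HttpSession must be serializable, or it cannot be swapped out. For example:

import java.io.Serializable;

// Session attributes must implement Serializable for a persistent
// (e.g. JDBC-backed) session manager to store them.
public class UserPrefs implements Serializable {
    private static final long serialVersionUID = 1L;

    private String theme = "default";

    public String getTheme() { return theme; }
    public void setTheme(String theme) { this.theme = theme; }
}

It is then stored as usual with session.setAttribute("prefs", new UserPrefs());.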
Since you say JDBC, I'm assuming you mean Java? Your question seems to have some ambiguity, so I'm not sure this is what you are looking for, but based on my understanding I'll give it a shot. Anyway, I use connection pooling (Apache Commons DBCP) and Spring, which makes it pretty easy.
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
<property name="driverClassName" value="com.mysql.jdbc.Driver"/>
<property name="url" value="jdbc:mysql://localhost/databasename"/>
<property name="username" value="root"/>
<property name="password" value="password"/>
Then in the code I use Spring's JdbcTemplate, and with this setup the connections to the database are pooled and reused. The data source is managed as a Spring bean and dependency-injected wherever it is used. Spring handles the sharing of the JDBC connections for you, and voila! Here is how I do the dependency injection with annotations:
private JdbcTemplate jdbcTemplate;

@Autowired
public void setDataSource(DataSource dataSource) {
    this.jdbcTemplate = new JdbcTemplate(dataSource);
}
Even if you aren't using Spring for MVC or anything else, the Spring JDBC tools are really nice.
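Once the template is injected, queries become one-liners. Continuing the class above (assumes Spring 3+; the users table is made up):

// Example usage of the injected template, inside the same class.
public int countUsers() {
    return jdbcTemplate.queryForObject("SELECT COUNT(*) FROM users", Integer.class);
}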

HSQLdb permissions regarding OpenJPA

I'm (still) having loads of issues with HSQLdb & OpenJPA.
Exception in thread "main" <openjpa-1.2.0-r422266:683325 fatal store error> org.apache.openjpa.persistence.RollbackException: user lacks privilege or object not found: OPENJPA_SEQUENCE_TABLE {SELECT SEQUENCE_VALUE FROM PUBLIC.OPENJPA_SEQUENCE_TABLE WHERE ID = ?} [code=-5501, state=42501]
    at org.apache.openjpa.persistence.EntityManagerImpl.commit(EntityManagerImpl.java:523)
    at model_layer.EntityManagerHelper.commit(EntityManagerHelper.java:46)
    at HSQLdb_mvn_openJPA_autoTables.App.main(App.java:23)
The HSQLdb is running as a server process, bound to port 9001 at my local machine. The user is SA. It's configured as follows:
<persistence-unit name="HSQLdb_mvn_openJPA_autoTablesPU"
                  transaction-type="RESOURCE_LOCAL">
    <provider>
        org.apache.openjpa.persistence.PersistenceProviderImpl
    </provider>
    <class>model_layer.Testobjekt</class>
    <class>model_layer.AbstractTestobjekt</class>
    <properties>
        <property name="openjpa.ConnectionUserName" value="SA" />
        <property name="openjpa.ConnectionPassword" value=""/>
        <property name="openjpa.ConnectionDriverName"
                  value="org.hsqldb.jdbc.JDBCDriver" />
        <property name="openjpa.ConnectionURL"
                  value="jdbc:hsqldb:hsql://localhost:9001/mydb" />
        <!--
        <property name="openjpa.jdbc.SynchronizeMappings"
                  value="buildSchema(ForeignKeys=true)" />
        -->
    </properties>
</persistence-unit>
I have made a successful connection from my ORM layer; I can create and connect to my EntityManager.
However, each time I call
EntityManagerHelper.commit();
it fails with that error, which makes no sense to me. SA is the standard admin user I used to create the table; it should be able to persist into HSQLDB as that user.
EDIT: after hours of debugging I found out why this fails. This kind of error message also appears if you do not set required (NOT NULL) column values, which nothing indicated to me. It seems the OpenJPA layer mistakes inserts that fail because of missing required values for permission problems. I therefore simply accepted the first answer. Thanks for reading :)
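For illustration, the kind of mapping that bit me looks like this (field names made up; my real classes live in model_layer). A field mapped to a NOT NULL column that is never populated before persist() fails at commit with the misleading privilege error above.

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

// Made-up illustration: "name" maps to a NOT NULL column. Persisting
// an instance without setting it failed at commit, and OpenJPA reported
// it as "user lacks privilege or object not found" rather than as a
// null-constraint violation.
@Entity
public class Testobjekt {
    @Id
    @GeneratedValue
    private long id;

    @Column(nullable = false)
    private String name; // left unset -> misleading error at commit

    public void setName(String name) { this.name = name; }
}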
I have the impression that HSQL has no rights to write its data file in the configured directory.
That happens to me all the time when I test my server manually as root/Administrator and then start it as a daemon/service, where it drops to a less privileged user; the files end up owned by a different user than the one the server runs as.
There could be other reasons: on Windows I have seen it when another process (another server instance) was still holding on to the files, or even when Eclipse, in its infinite wisdom, decided to index the database.
