Oracle 19c JDBC (Instant client basiclite) driver Issue - java

I am working on upgrading the Oracle JDBC driver from 11g (ojdbc6.jar) to 19c (ojdbc8.jar) in my Java application. The driver used is Instant Client (instantclient-basiclite-nt-19.11) with JRE 1.8.0_271. After changing to 19c, my application keeps hitting "ORA-02396: exceeded maximum idle time, please connect again" or "ORA-03113: end-of-file on communication channel" errors.
In the Oracle database profile there are some limits set: Idle-Time = 2 minutes and Connect-Time = 10 minutes. But I will not make any change on the database side, because this may cause high CPU if many users are using the application at the same time.
In the Java application, connections are stored in a connection pool. I added logging and can see that the connection is closed and returned to the pool after execution finishes. But when I run the application again after 2 minutes, the Oracle error is raised.
If I switch back to the 11g driver, I don't get this error and the application works fine after 2 minutes, with no change in code.
Is this a bug in the latest Oracle driver? I saw there is a ucp.jar (Universal Connection Pool) package available in 19c but not in 11g. Do I have to implement this, and how?

Your problem has nothing to do with the driver but with a database profile setting that limits the maximum allowed IDLE_TIME. Normally this is done to get rid of forgotten sessions.
You can check this using:
select a.username, b.profile, b.RESOURCE_NAME, b.LIMIT
from dba_users a, dba_profiles b
where b.resource_name = 'IDLE_TIME'
and a.profile = b.profile;
Find which profile is used for your user and see if your DBA is willing to change the limit.
If this happens to be the DEFAULT profile, it can be changed to unlimited with
ALTER PROFILE DEFAULT LIMIT IDLE_TIME UNLIMITED;
but it might be better to create a custom profile for your user.
If the IDLE_TIME limit cannot be changed, run a query every once in a while, like select 'keep me alive' from dual;
This also prevents closure by firewalls.
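Such a keep-alive can be scheduled from the Java side. A minimal sketch (the class name and the Supplier-based ping are my own, not from the question); in real use the pinger would run something like select 'keep me alive' from dual on a connection from the pool:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

// Fires a cheap "ping" at a fixed period so the session never sits idle
// long enough to trip the profile's IDLE_TIME limit.
public class KeepAlive {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor(r -> {
                Thread t = new Thread(r, "db-keepalive");
                t.setDaemon(true); // don't keep the JVM alive just for pings
                return t;
            });

    // pinger should return true on success; in real use it would execute
    // "select 'keep me alive' from dual" on a pooled java.sql.Connection.
    public void start(Supplier<Boolean> pinger, long periodMillis) {
        scheduler.scheduleAtFixedRate(pinger::get, 0, periodMillis, TimeUnit.MILLISECONDS);
    }

    public void stop() {
        scheduler.shutdownNow();
    }
}
```

With a 2-minute IDLE_TIME, a period well under 120 seconds (say 60) keeps the session active.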

Related

Closed connection issues after DB upgrade from 11G to 19C & ojdbc14 to ojdbc8

I managed to successfully perform a DB upgrade from 11g to 19c and from ojdbc14 to ojdbc8. However, my application is now facing closed-connection issues, java.sql.SQLRecoverableException: Closed Connection, for almost all my queries every few days. The issue usually goes away after I perform a server restart. I am running my application on JBoss WildFly 13.
I noticed this usually occurs when I run the query below, which returns around 170 records, most of them with an inactive status. I am thinking this could be an out-of-resources issue. Instead of doing a server restart every few days, how am I supposed to fix this and remove the inactive sessions completely? Could this be a leak within my Java application? I have made very minimal code changes to the application except for some autocommit changes which were causing errors.
select
substr(a.spid,1,9) pid,
substr(b.sid,1,5) sid,
substr(b.serial#,1,5) ser#,
substr(b.machine,1,6) box,
substr(b.username,1,10) username,
substr(b.osuser,1,8) os_user,
substr(b.program,1,30) program,
status
from v$session b, v$process a
where
b.paddr = a.addr
and type='USER'
order by spid;

How can I connect my Eclipse IDE with Oracle 11g on Windows 8?

I want my Java program to handle Oracle SQL queries. For that, I have written the following code.
On running this code, it shows the following errors.
I don't know how to connect my Eclipse IDE with Oracle. How can I connect Eclipse to Oracle and get rid of these errors? I don't want to connect to any online Oracle servers; when I googled, I only got suggestions for connecting to online servers.
How can I connect Oracle 11g with Eclipse Kepler on Windows 8?
Please check the following:
The driver version and the DB version should be compatible.
Try logging in with SQL*Plus or a client such as Toad or SQL Developer.
The number of allowed connections may have been exceeded, resulting in a TNS listener error; in that case you may see a "got minus one from a read call" error in the stack trace. If the connections are exceeded, kill the inactive sessions.
Try establishing connection like below
DriverManager.getConnection(DB_URL,USER,PASS);
You can get the above exception when the operating system has some internal environment problem.
I got the same problem with the type 4 driver, but the type 1 driver did not throw this exception, so try using the type 1 Oracle driver.
You can check the SID and port in the tnsnames.ora file, which is in the location below.
C:\oraclexe\app\oracle\product\10.2.0\server\NETWORK\ADMIN\SAMPLE\tnsnames.ora
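The thin-driver JDBC URL is assembled from exactly the host, port, and SID found in that tnsnames.ora. A small sketch (class and method names are mine; host/port/SID values are the common Oracle XE defaults, not taken from the question):

```java
// Builds the thin-driver JDBC URL from the host, port, and SID that
// appear in tnsnames.ora.
public class OracleUrl {
    public static String thinUrl(String host, int port, String sid) {
        return "jdbc:oracle:thin:@" + host + ":" + port + ":" + sid;
    }

    // Typical use, assuming ojdbc*.jar is on the Eclipse build path:
    //   Connection con = DriverManager.getConnection(
    //           thinUrl("localhost", 1521, "XE"), user, password);
}
```

In Eclipse, the only IDE-side step is adding the ojdbc jar to the project's build path; everything else is plain JDBC.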

Postgres query executed via Hibernate gets dropped when it runs for a long time, without any timeout exception

I am running a Postgres query that takes more than two hours.
The query is executed from a Java program using Hibernate.
After about 1.5 hours, the query stops showing up in the server status in pgAdmin.
Since the query disappeared from the list of active queries on the database, I expected either success or a timeout exception, but I get neither (no exception) and my thread is stuck in the wait state.
I know the query has not finished, because it was supposed to insert rows into a table and I cannot find the expected rows in that table.
I am using PgBouncer for connection pooling, and query_timeout is disabled.
Had it been a Hibernate timeout, I should have got an exception.
OS parameters on the DB machine and the client machine (the machine running the Java program):
tcp_keepalive_time is 7200 (seconds)
tcp_keepalive_intvl = 75
tcp_keepalive_probes = 9 (number of probes)
Both the machines run RHEL operating system.
I am unable to put my finger on the issue.
I found that the issue was caused by the TCP connection being dropped while the client kept hanging, waiting for the response.
I altered the following parameter at the OS level:
/proc/sys/net/ipv4/tcp_keepalive_time = 2700
The default value was 7200. This causes a keep-alive check every 2700 seconds instead of every 7200 seconds.
I am sure you would have already looked at the following resources:
PostgreSQL Timeout Docs
PgBouncer timeouts (which you already mention).
Hibernate timeout parameters, if any.
Once that is done (just like triaging permission issues during a new installation), I recommend that you run the following SQL in each of the scenarios given below and ascertain what is actually causing this timeout:
SELECT pg_sleep(7200);
Log in to the server (via psql) and see whether this SQL times out.
Log in through PgBouncer (again via psql) and see whether PgBouncer times out.
Execute this SQL via Hibernate (through PgBouncer) and see whether there is a timeout.
This should allow you to clearly isolate the cause for this.
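If none of those layers raises an error, a client-side guard at least turns the silent hang into an exception. A sketch (the names are mine, not a Hibernate or PgBouncer API) that bounds any blocking call, such as a Hibernate query execution, with a Future:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Bounds a blocking call with a client-side timeout, so a dropped TCP
// connection surfaces as a TimeoutException instead of an indefinite wait.
public class TimedCall {
    public static <T> T callWithTimeout(Callable<T> work, long timeoutMillis) throws Exception {
        ExecutorService ex = Executors.newSingleThreadExecutor();
        try {
            return ex.submit(work).get(timeoutMillis, TimeUnit.MILLISECONDS);
        } catch (ExecutionException e) {
            // Unwrap the real failure thrown by the work itself.
            throw e.getCause() instanceof Exception ? (Exception) e.getCause() : e;
        } finally {
            ex.shutdownNow();
        }
    }
}
```

Note this only unblocks the caller; the server-side query may keep running, so a server-side statement timeout is still the cleaner fix where it can be enabled.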

Oracle database alert opiodr aborting process ORA-609

I am running a batch Java application. The application runs every 10 or 20 minutes in my Production and UAT environments, and I get database alerts like this:
Thu Feb 06 15:15:08 2014
opiodr aborting process unknown ospid (28246400) as a result of ORA-609
After researching a bit on the internet the suggested fix for these alerts is to change INBOUND_CONNECT_TIMEOUT as:
Sqlnet.ora: SQLNET.INBOUND_CONNECT_TIMEOUT=180
Listener.ora: INBOUND_CONNECT_TIMEOUT_listener_name=120
We have changed the settings on the database server side, but we don't know where to change them in the client application. We are using c3p0 to create the connection pool, and we set only these parameters:
dataSource.setAcquireRetryDelay(30000);
dataSource.setMaxPoolSize(50);
dataSource.setMinPoolSize(20);
dataSource.setInitialPoolSize(10);
We have other web services running on the same server as the batch application; they use Tomcat's DBCP pool and don't seem to create any alerts. Also, strangely enough, our batch application doesn't generate the alerts in the lower test environments; they happen once in a while there, but the UAT and PROD environments get these alerts very frequently on the schedule. Any suggestions on what to configure in the c3p0 pool, or should I try changing to another pooling API like DBCP?
Update: I added the following parameters to the datasource, and the number of alerts has gone down from 15 an hour to 4 an hour.
dataSource.setLoginTimeout(120);
dataSource.setAcquireRetryAttempts(10);
dataSource.setCheckoutTimeout(60000);
I moved to DBCP connection pooling and it seems to have fixed the issue. I tried changing a few more c3p0 settings mentioned above, but the alerts were only reduced, not eliminated, so we decided to try DBCP. I am using all default values in DBCP except for the pool size, with the Tomcat version of DBCP available in Tomcat's lib folder (tomcat-dbcp.jar).
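For anyone staying on c3p0, idle-connection testing is usually what keeps pooled connections from going stale between batch runs. A sketch of the relevant settings in a c3p0.properties file (the values are illustrative, not taken from the question):

```properties
# Test a connection every time it is checked out of the pool.
c3p0.testConnectionOnCheckout=true
# Also test idle connections in the pool every 60 seconds.
c3p0.idleConnectionTestPeriod=60
# Cheap validation query for Oracle.
c3p0.preferredTestQuery=SELECT 1 FROM DUAL
```

Checkout testing adds a round trip per checkout; the idle test period alone is often enough and cheaper.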

How to reestablish a JDBC connection after a timeout?

I have a long-running method which executes a large number of native SQL queries through the EntityManager (TopLink Essentials). Each query takes only milliseconds to run, but there are many thousands of them, all within a single EJB transaction. After 15 minutes, the database closes the connection, which results in the following error:
Exception [TOPLINK-4002] (Oracle TopLink Essentials - 2.1 (Build b02-p04 (04/12/2010))): oracle.toplink.essentials.exceptions.DatabaseException
Internal Exception: java.sql.SQLException: Closed Connection
Error Code: 17008
Call: select ...
Query: DataReadQuery()
at oracle.toplink.essentials.exceptions.DatabaseException.sqlException(DatabaseException.java:319)
.
.
.
RAR5031:System Exception.
javax.resource.ResourceException: This Managed Connection is not valid as the phyiscal connection is not usable
at com.sun.gjc.spi.ManagedConnection.checkIfValid(ManagedConnection.java:612)
In the JDBC connection pool I set is-connection-validation-required="true" and connection-validation-method="table", but this did not help.
I assumed that JDBC connection validation is there to deal with precisely this kind of error. I also looked at the TopLink extensions (http://www.oracle.com/technetwork/middleware/ias/toplink-jpa-extensions-094393.html) for some kind of timeout setting but found nothing. There is also the TopLink session configuration file (http://download.oracle.com/docs/cd/B14099_19/web.1012/b15901/sessions003.htm), but I don't think there is anything useful there either.
I don't have access to the Oracle DBA tables, but I think that Oracle closes connections after 15 minutes according to the setting in CONNECT_TIME profile variable.
Is there any other way to make TopLink or the JDBC pool reestablish a closed connection?
The database is Oracle 10g, application server is Sun Glassfish 2.1.1.
All JPA implementations (running on a Java EE container) use a datasource with an associated connection pool to manage connectivity with the database.
The persistence context itself is associated with the datasource via an appropriate entry in persistence.xml. If you wish to change the connection timeout settings on the client-side, then the associated connection pool must be re-configured.
In Glassfish, the timeout settings associated with the connection pool can be reconfigured by editing the pool settings, as listed in the following links:
Changing timeout settings in GlassFish 3.1
Changing timeout settings in GlassFish 2.1
On the server-side (whose settings if lower than the client settings, would be more important), the Oracle database can be configured to have database profiles associated with user accounts. The session idle_time and connect_time parameters of a profile would constitute the timeout settings of importance in this aspect of the client-server interaction. If no profile has been set, then by default, the timeout is unlimited.
Unless you've got some sort of RAC failover, terminating the connection will end the session and the transaction.
The admins may have set these limits to prevent runaway transactions or a single job 'hogging' a connection in the pool. You generally don't want to lock a connection in a pool for an extended period.
If these queries aren't necessarily part of the same transaction, then you could try terminating the connection and starting a new one.
Are you able to restructure your code so that it completes in under 15 minutes? A stored procedure running in the background may be able to do the job a lot quicker than dragging the results of thousands of operations over the network.
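If the work can be split across transactions, a discard-and-reacquire retry mirrors what pool validation does on checkout. A sketch with hypothetical names, where in real use reconnect would close the dead connection and obtain a fresh one from the pool:

```java
import java.util.concurrent.Callable;

// Retries an operation after running a recovery step, mimicking the
// pool's "test, discard, reacquire" cycle for a stale connection.
public class RetryOnStale {
    public static <T> T withRetry(Callable<T> op, Runnable reconnect, int attempts) throws Exception {
        Exception last = null;
        for (int i = 0; i < attempts; i++) {
            try {
                return op.call();
            } catch (Exception e) {
                last = e;        // e.g. SQLRecoverableException: Closed Connection
                reconnect.run(); // drop the dead connection, get a fresh one
            }
        }
        throw last;
    }
}
```

This only makes sense for idempotent units of work; anything already committed in a previous attempt must not be redone.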
I see you set connection-validation-method="table" and is-connection-validation-required="true", but you do not mention the table you are validating against. Did you set validation-table-name="any_table_you_know_exists"? A validation-table-name pointing at an existing table is required.
See this article for more details on connection validation.
Related StackOverflow article with similar problem - he wants to flush the entire invalid connection pool.
