MySQL-Java batch insert handling OutOfMemoryError - java

I want to ask: is it normal to handle OutOfMemoryError while doing batch inserts?
I am using the following code to batch-insert into MySQL:
Connection con = null;
PreparedStatement ps = null;
try
{
    con = Manager.getInstance().getConnection();
    ps = con.prepareStatement("INSERT INTO" +
            " movie_release_date_pushed_to_subscriber"
            + " (movie_id, cinema_id, msisdn, sent_timestamp) VALUES (?, ?, ?, ?)");
    for (String msisdn : subscriberBatch)
    {
        try
        {
            ps.setInt(1, movieToBeReleased.getMovieId());
            ps.setInt(2, movieToBeReleased.getCinemaId());
            ps.setString(3, msisdn);
            ps.setTimestamp(4, new java.sql.Timestamp(new Date().getTime()));
            ps.addBatch();
        }
        catch (OutOfMemoryError oome)
        {
            ....
            ps.executeBatch();
        }
    }
    ps.executeBatch();
}
catch (Throwable e)
{
    ....
}
finally
{
    try
    {
        Manager.getInstance().close(ps);
        if (con != null)
        {
            con.close();
        }
    }
    catch (Throwable e)
    {
        ....
    }
}
NOTE: Any kind of advice/recommendation is most welcome.

No, it's not normal, and your catch handler is totally ineffective. Catching an OOME does not miraculously solve its root cause: exhaustion of program memory. You get that error after the runtime has made a best effort at reclaiming memory and failed. You should not be trying to execute code at that point; you may not even be able to log a message!
If you feel, for whatever reason, that your batch statement may cause an OOME, then you should either:
Break up the batch cycle into smaller 'buckets' (see the sketch below)
Make more memory available to the program (e.g. with the -Xmx JVM flag)
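A minimal sketch of the first option, reusing the names from the question; the BATCH_SIZE value is an assumed starting point to tune, not a recommendation:
// Flush the batch every BATCH_SIZE rows so the driver never has to buffer
// the whole subscriber list in memory at once.
final int BATCH_SIZE = 1000;
int count = 0;
for (String msisdn : subscriberBatch)
{
    ps.setInt(1, movieToBeReleased.getMovieId());
    ps.setInt(2, movieToBeReleased.getCinemaId());
    ps.setString(3, msisdn);
    ps.setTimestamp(4, new java.sql.Timestamp(System.currentTimeMillis()));
    ps.addBatch();
    if (++count % BATCH_SIZE == 0)
    {
        ps.executeBatch(); // sends this bucket; the batch is reset afterwards
    }
}
ps.executeBatch(); // flush the remaining rows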

It makes no sense to try to execute the batch if you get an OutOfMemoryError. However, you can replace your INSERT query with INSERT IGNORE INTO and, in case of an OOME, ask the user to run the batch again after restarting the JVM.
What INSERT IGNORE INTO will do is skip the insert if the primary key already exists in the table, so your batch will resume from where it crashed the app.
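For the query in the question, that change would look like the following sketch. It assumes the table has a primary or unique key covering the inserted columns; without one, IGNORE has nothing to deduplicate on:
// Rows whose key already exists are silently skipped, so re-running the
// whole batch after a crash only inserts the rows that are still missing.
ps = con.prepareStatement("INSERT IGNORE INTO"
        + " movie_release_date_pushed_to_subscriber"
        + " (movie_id, cinema_id, msisdn, sent_timestamp) VALUES (?, ?, ?, ?)");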
However, I have to warn you that this is probably a very dirty way to circumvent this situation.

Related

When will `GC` be triggered when an `OutOfMemoryError` is caught?

I have a Java package which connects to a database and fetches some data. In some rare cases I get a heap memory error, because the size of the fetched query data exceeds the Java heap space. Increasing the Java heap space is not something the business can consider for now.
The other option is to catch the exception and continue the flow without stopping the execution. (I know catching OOME is not a good idea, but here only my local variables are affected.) My code is below:
private boolean stepCollectCustomerData() {
    try {
        return generateMetricCSV();
    } catch (OutOfMemoryError e) {
        log.error("OutOfMemoryError while collecting data ");
        log.error(e.getMessage());
        return false;
    }
}

private boolean generateMetricCSV() {
    // Executing the PAC & BI cluster SQL queries.
    try (Connection connection = DriverManager.getConnection("connectionURL", "username", "password")) {
        connection.setAutoCommit(false);
        for (RedshiftQueryDefinition redshiftQueryDefinition : redshiftQueryDefinitions) {
            File csvFile = new File(dsarConfig.getDsarHomeDirectory() + dsarEntryId,
                    redshiftQueryDefinition.getCsvFileName());
            log.info("Running the query for metric: " + redshiftQueryDefinition.getMetricName());
            try (PreparedStatement preparedStatement = createPreparedStatement(connection,
                         redshiftQueryDefinition.getSqlQuery(), redshiftQueryDefinition.getArgumentsList());
                 ResultSet resultSet = preparedStatement.executeQuery();
                 CSVWriter writer = new CSVWriter(new FileWriter(csvFile))) {
                if (resultSet.next()) {
                    resultSet.beforeFirst();
                    log.info("Writing the data to CSV file.");
                    writer.writeAll(resultSet, true);
                    log.info("Metric written to csv file: " + csvFile.getAbsolutePath());
                    filesToZip.put(redshiftQueryDefinition.getCsvFileName(), csvFile);
                } else {
                    log.info("There is no data for the metric " + redshiftQueryDefinition.getCsvFileName());
                }
            } catch (SQLException | IOException e) {
                log.error("Exception while generating the CSV file: " + e);
                e.printStackTrace();
                return false;
            }
        }
    } catch (SQLException e) {
        log.error("Exception while creating connection to the Redshift cluster: " + e);
        return false;
    }
    return true;
}
We are getting the exception at the line "ResultSet resultSet = preparedStatement.executeQuery()" in the latter method, and I am catching it in the parent method. Now I need to be sure: when the exception is caught in the parent method, has the GC already been triggered and cleared the memory for the local variables (such as the connection and result set)? If not, when will that happen?
I am worried about the Java heap space because this is a continuous flow and I need to keep fetching data for other users.
The code I have provided is only to explain the underlying issue and flow, so kindly ignore syntax, etc. I am using JDK 8.
Thanks in advance.

maximum open cursors exceeded exception in java code

This is my code to execute an update query:
public boolean executeQuery(Connection con, String query) throws SQLException
{
    boolean flag = false;
    try
    {
        Statement st = con.createStatement();
        flag = st.execute(query);
        st.close();
        st = null;
        flag = true;
    }
    catch (Exception e)
    {
        flag = false;
        e.printStackTrace();
        throw new SQLException(" UNABLE TO FETCH INSERT");
    }
    return flag;
}
The maximum open cursors setting is 4000.
The code executes
update tableA set colA ='x',lst_upd_date = trunc(sysdate) where trunc(date) = to_date('"+date+"','dd-mm-yyyy')
as an update query around 8000 times, but after around 2000 days it throws the exception "maximum open cursors exceeded".
Please suggest code changes for this.
@TimBiegeleisen here is the code that gets the connection:
public Connection getConnection(String sessId)
{
    Connection connection = null;
    setLastAccessed(System.currentTimeMillis());
    connection = (Connection) sessionCon.get(sessId);
    try
    {
        if (connection == null || connection.isClosed())
        {
            if (ds == null)
            {
                InitialContext ic = new InitialContext();
                ds = (DataSource) ic.lookup("java:comp/env/iislDB");
            }
            connection = ds.getConnection();
            sessionCon.put(sessId, connection);
        }
    }
    catch (SQLException e)
    {
        e.printStackTrace();
    }
    catch (Exception e)
    {
        e.printStackTrace();
    }
    return connection;
}
The error stack is as below:
java.sql.SQLException: ORA-01000: maximum open cursors exceeded
at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:180)
at oracle.jdbc.ttc7.TTIoer.processError(TTIoer.java:208)
at oracle.jdbc.ttc7.Oopen.receive(Oopen.java:118)
at oracle.jdbc.ttc7.TTC7Protocol.open(TTC7Protocol.java:472)
at oracle.jdbc.driver.OracleStatement.<init>(OracleStatement.java:499)
at oracle.jdbc.driver.OracleConnection.privateCreateStatement(OracleConnection.java:683)
at oracle.jdbc.driver.OracleConnection.createStatement(OracleConnection.java:560)
at org.apache.tomcat.dbcp.dbcp.DelegatingConnection.createStatement(DelegatingConnection.java:257)
at org.apache.tomcat.dbcp.dbcp.PoolingDataSource$PoolGuardConnectionWrapper.createStatement(PoolingDataSource.java:216)
at com.iisl.business.adminbo.computeindex.MoviIndexComputeBO.calculateMoviValue(MoviIndexComputeBO.java:230)
Your code has a cursor leak; that's what is causing the error. It seems unlikely that your code can really go 2000 days (about 5.5 years) before encountering the error. If that were the case, I'd wager you'd be more than happy to restart a server twice a decade.
In your try block, you create a Statement. If an exception is thrown between the time the statement is created and the time st.close() is called, your code will leave the statement open and you will have leaked a cursor. Once a session has leaked 4000 cursors, you'll get the error. Increasing open_cursors will merely delay when the error occurs; it won't fix the underlying problem.
The underlying problem is that your try/catch block needs a finally that closes the Statement if the try left it open. For this to work, you'd need to declare st outside of the try:
finally {
    if (st != null) {
        st.close();
    }
}
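Put together, the fix looks like this sketch; st is declared before the try so the finally block can reach it, and close() is free to throw because the method already declares SQLException:
public boolean executeQuery(Connection con, String query) throws SQLException
{
    boolean flag = false;
    Statement st = null; // declared outside the try so finally can see it
    try
    {
        st = con.createStatement();
        flag = st.execute(query);
        flag = true;
    }
    finally
    {
        if (st != null)
        {
            st.close(); // always runs, so the cursor is never leaked
        }
    }
    return flag;
}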
As mentioned in another response, you will leak cursors if an exception is thrown during statement execution, because st.close() won't be executed. You can use Java's try-with-resources syntax to be sure that your statement object is closed:
boolean flag = false;
try (Statement st = con.createStatement())
{
    flag = st.execute(query);
    flag = true;
}
catch (Exception e)
{
    flag = false;
    e.printStackTrace();
    throw new SQLException(" UNABLE TO FETCH INSERT");
}
return flag;
One of the quickest solutions is to increase the number of cursors each connection can handle, by issuing the following command at the SQL prompt:
alter system set open_cursors = 1000
Also, add a finally block to your code and close the connection there, to help close cursors whenever an exception occurs.
Also, run this query to see where cursors are actually being opened:
select sid, sql_text, count(*) as "OPEN CURSORS", user_name from v$open_cursor group by sid, sql_text, user_name
finally {
    if (connection != null) {
        connection.close();
    }
}

Java-MySQL: How to create a Stored Procedure

I work on a Java 1.7 project with MySQL.
I have a method that inserts a lot of data into a table with PreparedStatement, but this causes an OutOfMemoryError in the GlassFish server.
Connection c = null;
String query = "INSERT INTO users.infos(name,phone,email,type,title) "
        + "VALUES (?, ?, ?, ?, ?)";
PreparedStatement statement = null;
try {
    c = users.getConnection();
    statement = c.prepareStatement(query);
} catch (SQLException e1) {
    // TODO Auto-generated catch block
    e1.printStackTrace();
}
try {
    int i = 0;
    for (Member member : members) {
        i++;
        statement.setString(1, member.getName());
        statement.setString(2, member.getPhone());
        statement.setString(3, member.getEmail());
        statement.setInt(4, member.getType());
        statement.setString(5, member.getTitle());
        statement.addBatch();
        if (i % 100000 == 0) {
            statement.executeBatch();
        }
    }
    statement.executeBatch();
} catch (Exception ex) {
    ex.printStackTrace();
} finally {
    if (statement != null) {
        try {
            statement.close();
        } catch (SQLException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
    }
    if (c != null) {
        try {
            c.close();
        } catch (SQLException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
    }
    c = null;
    statement = null;
}
I think I need to create a stored procedure to avoid this memory issue, but I don't know where to start, whether I should create a procedure or a function, and whether I will be able to get some kind of response in a return value or something.
What do you think?
Replacing the insert statement in your prepared statement with a call to a stored procedure will not affect memory consumption in any meaningful way.
You are running out of memory because you are using a very large batch size. You should test the performance of different batch sizes; you will find that the performance improvement of larger batches diminishes quickly beyond dozens to hundreds of rows.
You may be able to achieve greater insert rates by using LOAD DATA INFILE: take your many rows of data, write them to a text file, and load the file. For example, see this question.
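A rough sketch of that approach, assuming the CSV file has already been written out and that local_infile is enabled on both the client and the server (both are assumptions to verify for your setup):
// Load a pre-written CSV in a single statement instead of batching inserts.
String path = "/tmp/members.csv"; // hypothetical file produced beforehand
try (Statement st = c.createStatement()) {
    st.execute("LOAD DATA LOCAL INFILE '" + path + "'"
            + " INTO TABLE users.infos"
            + " FIELDS TERMINATED BY ','"
            + " (name, phone, email, type, title)");
}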
You may also consider parallelizing: for example, open multiple connections and insert rows on each connection from separate threads. Likewise, you could try doing parallel loads with LOAD DATA INFILE.
You will have to try the various techniques, batch sizes, number of threads (probably not more than one per core), etc. on your hardware setup to see what gives the best performance.
You may also want to look at tuning some of the MySQL parameters, dropping (and later recreating) indexes, etc.

Releasing JDBC resources

My application has a memory leak resulting from my usage of JDBC. I have verified this by looking at a visual dump of the heap and seeing thousands of instances of ResultSet and associated objects. My question, then, is how do I appropriately manage resources used by JDBC so they can be garbage collected? Do I need to call ".close()" for every statement that is used? Do I need to call ".close()" on the ResultSets themselves?
How would you free the memory used by the following call?
ResultSet rs = connection.createStatement().executeQuery("some sql query");
I see that there are other, very similar questions. Apologies if this is redundant, but either I don't quite follow the answers or they don't seem to apply universally. I am trying to get an authoritative answer on how to manage memory when using JDBC.
::EDIT:: Adding some code samples
I have a class that is basically a JDBC helper that I use to simplify database interactions, the main two methods are for executing an insert or update, and for executing select statements.
This one for executing insert or update statements:
public int executeCommand(String sqlCommand) throws SQLException {
    if (connection == null || connection.isClosed()) {
        sqlConnect();
    }
    Statement st = connection.createStatement();
    int ret = st.executeUpdate(sqlCommand);
    st.close();
    return ret;
}
And this one for returning ResultSets from a select:
public ResultSet executeSelect(String select) throws SQLException {
    if (connection == null || connection.isClosed()) {
        sqlConnect();
    }
    ResultSet rs = connection.createStatement().executeQuery(select);
    return rs;
}
After using the executeSelect() method, I always call resultset.getStatement().close()
Examining a heap dump with object allocation tracing on shows statements still being held onto from both of those methods...
You should close the Statement if you are not going to reuse it. It is usually good form to close the ResultSet first, as some implementations did not close the ResultSet automatically (even though they should).
If you are repeating the same queries, you should probably use a PreparedStatement to reduce parsing overhead. And if you add parameters to your query, you really should use a PreparedStatement to avoid the risk of SQL injection.
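As an illustration, the question's executeSelect() could be restructured so that nothing is left open when the method returns. This is only a sketch: RowMapper is a hypothetical callback interface, not part of JDBC, and the method is assumed to live in the same helper class as the original:
// Hypothetical callback: the caller decides how to turn one row into an object.
public interface RowMapper<T> {
    T mapRow(ResultSet rs) throws SQLException;
}

// Map the rows inside the method so try-with-resources can close the
// Statement and ResultSet before returning; the caller never sees them.
public <T> List<T> executeSelect(String select, RowMapper<T> mapper) throws SQLException {
    if (connection == null || connection.isClosed()) {
        sqlConnect();
    }
    List<T> results = new ArrayList<>();
    try (Statement st = connection.createStatement();
         ResultSet rs = st.executeQuery(select)) {
        while (rs.next()) {
            results.add(mapper.mapRow(rs));
        }
    }
    return results;
}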
Yes, ResultSets and Statements should always be closed in a finally block. Using a JDBC wrapper such as Spring's JdbcTemplate helps make the code less verbose and closes everything for you.
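For reference, the Spring version of the same idea might look like this (a sketch assuming spring-jdbc is on the classpath and a DataSource named dataSource is configured elsewhere):
// JdbcTemplate opens and closes the Connection, Statement and ResultSet on
// every call; the caller only ever sees the mapped objects.
JdbcTemplate template = new JdbcTemplate(dataSource);
List<String> names = template.query(
        "select name from sometable",
        (rs, rowNum) -> rs.getString("name"));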
I copied this from a project I have been working on. I am in the process of refactoring it to use Hibernate (from the code it should be clear why!). Using an ORM tool like Hibernate is one way to resolve your issue. Otherwise, here is the way I used plain DAOs to access the data. There is no memory leak in our code, so this may help as a template. Hope it helps; memory leaks are terrible!
@Override
public List<CampaignsDTO> getCampaign(String key) {
    ResultSet resultSet = null;
    PreparedStatement statement = null;
    try {
        statement = connection.prepareStatement(getSQL("CampaignsDAOImpl.getPendingCampaigns"));
        statement.setString(1, key);
        resultSet = statement.executeQuery();
        List<CampaignsDTO> list = new ArrayList<CampaignsDTO>();
        while (resultSet.next()) {
            list.add(new CampaignsDTO(
                    resultSet.getTimestamp(resultSet.findColumn("cmp_name")),
                    ...));
        }
        return list;
    } catch (SQLException e) {
        logger.fatal(LoggerCodes.DATABASE_ERROR, e);
        throw new RuntimeException(e);
    } finally {
        close(statement);
    }
}
The close() method looks like this:
public void close(PreparedStatement statement) {
    try {
        if (statement != null && !statement.isClosed())
            statement.close();
    } catch (SQLException e) {
        logger.debug(LoggerCodes.TRACE, "Warning! PreparedStatement could not be closed.");
    }
}
You should close JDBC statements when you are done. ResultSets should be released when associated statements are closed - but you can do it explicitly if you want.
You need to make sure that you also close all JDBC resources in exception cases.
Use a try-catch-finally block, e.g.:
Connection conn = null;
Statement stmt = null;
ResultSet rs = null;
try {
    conn = dataSource.getConnection();
    stmt = conn.createStatement();
    rs = stmt.executeQuery("select * from sometable");
    stmt.close();
    conn.close();
} catch (Throwable t) {
    // do error handling
} finally {
    try {
        if (stmt != null) {
            stmt.close();
        }
        if (conn != null) {
            conn.close();
        }
    } catch (Exception e) {
    }
}

Will the database omit an incoming request since it is busy?

In my application I have implemented a method to get the favourites of a particular user. If the user is a new one, there will be no entry in the table; in that case I add default favourites to the table. The code is shown below.
public String getUserFavourits(String username) {
    String s = "SELECT FAVOURITS FROM USERFAVOURITS WHERE USERID='" +
            username.trim() + "'";
    String a = "";
    Statement stm = null;
    ResultSet reset = null;
    DatabaseConnectionHandler handler = null;
    Connection conn = null;
    try {
        handler = DatabaseConnectionHandler.getInstance();
        conn = handler.getConnection();
        stm = conn.createStatement(ResultSet.TYPE_SCROLL_SENSITIVE, ResultSet.CONCUR_UPDATABLE);
        reset = stm.executeQuery(s);
        if (reset.next()) {
            a = reset.getString("FAVOURITS").toString();
        }
        reset.close();
        stm.close();
    }
    catch (SQLException ex) {
        ex.printStackTrace();
    }
    catch (Exception ex) {
        ex.printStackTrace();
    }
    finally {
        try {
            handler.returnConnectionToPool(conn);
            if (stm != null) {
                stm.close();
            }
            if (reset != null) {
                reset.close();
            }
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }
    if (a.equalsIgnoreCase("")) {
        a = updateNewUserFav(username);
    }
    return a;
}
You can see that after the finally block the updateNewUserFav(username) method is used to insert default favourites into the table. Normally users are forced to change these on their first login.
My problem is that many users have complained to me that they have lost their customized favourites and the defaults have been loaded at login. Going through the code, I noticed this can only happen if an exception occurs in the try block. When I debug, the code works fine. Can this be caused when the DB is busy?
Normally there are more than 1000 concurrent users in the system. Since it is a real-time application, there is a huge number of requests coming to the database (the DB is Oracle).
Can someone please explain?
Firstly, use jonearles' suggestion about bind variables (a sketch follows below). If a lot of your code is like this, with 1000 concurrent users, I'd hate to think what performance is like.
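A minimal sketch of the bind-variable version of the lookup, reusing the names from the question; a PreparedStatement lets Oracle reuse the parsed statement and removes the SQL injection risk:
// Bind the username instead of concatenating it into the SQL string.
String s = "SELECT FAVOURITS FROM USERFAVOURITS WHERE USERID = ?";
try (PreparedStatement pstm = conn.prepareStatement(s)) {
    pstm.setString(1, username.trim());
    try (ResultSet reset = pstm.executeQuery()) {
        if (reset.next()) {
            a = reset.getString("FAVOURITS");
        }
    }
}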
Secondly, if the database is busy then there is a chance of time-outs. As you say, if an exception is encountered then the code falls back to updateNewUserFav.
Really, it should only call that if NO exception is raised.
If an exception is raised, the function should fail. The current code is similar to:
"TURN THE IGNITION KEY TO START THE CAR"
"IF THERE IS A PROBLEM, RING GARAGE AND BOOK APPOINTMENT"
"PUT CAR INTO GEAR AND RELEASE HAND_BRAKE"
You really only want to release the hand-brake once the car has successfully started, otherwise you'll end up rolling down the hill until the sudden stop at the end (often involving an expensive CRUNCH sound).
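A minimal sketch of that fix in the question's getUserFavourits (the queryFailed flag is a made-up name):
// Only fall back to the defaults when the SELECT genuinely succeeded and
// returned no row; a timeout or any other failure must not wipe the
// user's customized favourites.
boolean queryFailed = false;
try {
    // ... run the SELECT as before ...
} catch (SQLException ex) {
    queryFailed = true;
    ex.printStackTrace();
}
if (!queryFailed && a.equalsIgnoreCase("")) {
    a = updateNewUserFav(username);
}
return a;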
