I have found the code below to be buggy, as it degrades the performance of an ExtJS 3 grid, and I am looking for optimization possibilities at the query or code level. Per my analysis, if we extract the query, there are two nested inner queries which respond slowly. In addition, the code inside the while loop tries to find the unique id; can't we have DISTINCT in the query, or joins rather than inner queries?
Please suggest the best practice to follow in order to achieve optimization.
public boolean isSCACreditOverviewGridVisible(String sessionId) {
    Connection conn = null;
    ResultSet rs = null;
    PreparedStatement ps = null;
    boolean result = false;
    try {
        CommonUtility commUtil = new CommonUtility();
        List<String> hmIds = new ArrayList<String>();
        Map<String, String> tmStockMap = new TreeMap<String, String>();
        Set<String> setRecentCertificate = new HashSet<String>();
        String managerAccountId = sessionInfo.getMembershipAccount();
        String stockQuery = " select memberId , RootCertficateId from stockposition sp where sp.stocktype = 'TR' and sp.memberId "
                + " IN ( select hm2.accountId from "
                + DATALINK
                + ".holdingmembers hm2 "
                + " where hm2.holdingId = ( select holdingId from "
                + DATALINK
                + ".holdingmembers hm1 where hm1.accountId = ? )) "
                + " order by sp.createdDate desc ";
        conn = getChildDBConnection();
        if (null != conn) {
            ps = conn.prepareStatement(stockQuery);
            ps.setString(1, managerAccountId);
            rs = ps.executeQuery();
            if (null != rs) {
                while (rs.next()) {
                    String memberId = rs.getString("memberId");
                    String rootCertficateId = rs.getString("RootCertficateId");
                    // keep only the first (most recent) row per root certificate
                    if (tmStockMap.containsKey(rootCertficateId)) {
                        continue;
                    }
                    hmIds.add(memberId);
                    tmStockMap.put(rootCertficateId, memberId);
                }
            }
            rs.close();
            ps.close();
            if (null != hmIds && !hmIds.isEmpty()) {
                String inIds = commUtil.getInStateParam(hmIds);
                String mostRecentLicense = "Select RootCertificateId , memberaccountid from "
                        + OctopusSchema.octopusSchema
                        + ".certificate c where c.memberaccountid IN ("
                        + inIds
                        + ") and c.isrootcertificate=0 and c.certificationstatusid > 1 order by c.modifieddate desc";
                ps = conn.prepareStatement(mostRecentLicense);
                rs = ps.executeQuery();
                if (null != rs) {
                    while (rs.next()) {
                        String rootCertficateId = rs.getString("RootCertificateId");
                        String memberaccountid = rs.getString("memberaccountid");
                        // consider only the most recent certificate per member account
                        if (setRecentCertificate.contains(memberaccountid)) {
                            continue;
                        }
                        setRecentCertificate.add(memberaccountid);
                        if (tmStockMap.containsKey(rootCertficateId)) {
                            result = true;
                            break;
                        }
                    }
                }
                rs.close();
                ps.close();
            } else {
                result = false;
            }
        }
    } catch (Exception e) {
        LOGGER.error(e);
    } finally {
        closeDBReferences(conn, ps, null, rs);
    }
    return result;
}
QUERY:
select RootCertficateId, memberId from stockposition sp
where sp.stocktype = 'TR' and sp.memberId
    IN ( select hm2.accountId from DATALINK.holdingmembers hm2
         where hm2.holdingId = ( select holdingId from DATALINK.holdingmembers hm1
                                 where hm1.accountId = '4937' ))
order by sp.createdDate DESC;
One quick approach would be to substitute EXISTS for your IN. If your inner queries return a lot of rows, it can be a lot more efficient; it depends on whether your subquery returns a lot of results.
SQL Server IN vs. EXISTS Performance
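For illustration, the first query could be folded into a single EXISTS with a self-join on holdingmembers, which also answers the joins-vs-subqueries question. This is a sketch only, reusing the names from the question and untested against your schema:

// Sketch: EXISTS plus a self-join instead of two nested IN subqueries.
// Table and column names are taken from the question; verify against your schema.
String stockQuery =
        " select sp.memberId, sp.RootCertficateId from stockposition sp "
      + " where sp.stocktype = 'TR' "
      + " and exists ( select 1 from " + DATALINK + ".holdingmembers hm2 "
      + "              join " + DATALINK + ".holdingmembers hm1 "
      + "                on hm1.holdingId = hm2.holdingId "
      + "              where hm2.accountId = sp.memberId "
      + "                and hm1.accountId = ? ) "
      + " order by sp.createdDate desc ";

Note that a plain DISTINCT cannot replace the dedupe loop directly, because the loop keeps the newest row per RootCertficateId; the query-level equivalent would be a window function such as ROW_NUMBER() OVER (PARTITION BY RootCertficateId ORDER BY createdDate DESC), if your database supports it.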
I have a class that throws an exception. What I want is to mock that exception, but with coverage (so a spy is needed). How do I mock it with Mockito so that the JUnit coverage counts those exception paths as covered?
For example:
private List<Data> _getSomeData(Key key) {
    log.debug(logPrefix + " GetSomeData");
    Connection dbc = null;
    PreparedStatement st = null;
    ResultSet rs = null;
    String q = null;
    ....
    try {
        dbc = DataSourceUtils.getConnection(dataSource);
        dbc.setAutoCommit(false);
        q = "SELECT value FROM table where x = ? and y = ? and z = ?";
        st = dbc.prepareStatement(q);
        int ix = 1;
        st.setInt(ix++, key.x);
        st.setInt(ix++, key.y);
        st.setInt(ix++, key.z);
        rs = st.executeQuery();
        while (rs.next()) {
            Data data = new Data();
            key.id = rs.getLong("x");
            key.y = y;
            ....
            dataList.add(data);
        }
    } catch (Exception e) {
        throw new DbException(e, q);
    } finally {
        DbUtil.cleanup(log, rs, st, dbc);
    }
    return dataList;
}
So, from the above, what I want is to cover the exception. How do I get it counted by the coverage?
JUnit test:
@Test
public void testException() {
    // DataImpl dataDao = new DataImpl();
    // dataDao.setLog(new LogImpl());
    DataImpl dataDao = Mockito.spy(new DataImpl());
    Key key = new Key();
    key.x = 1;
    key.y = 1;
    key.z = 1;
    String q =
        " SELECT data.* \n" +
        " FROM SOME_DATA d1\n" +
        " WHERE\n " +
        " d1.x = ? \n " +
        " AND ROUND (d1.y/ 1000 - 1) = ? \n" +
        " AND MOD (d1.z, 1000) = ?";
    Mockito.doThrow(new DbException(null, q)).when(dataDao)._getSomeData(key);
}
The above will work, but it will not be covered.
What you need here is dependency injection. Then you can mock the dependency and define how it behaves. Currently you are constructing a lot of objects inside your method (DataSourceUtils.getConnection(dataSource), new Data(), ...), which makes it hard to test.
You can create a class DbcProvider:

public class DbcProvider {
    public Connection newDbc() {
        return DataSourceUtils.getConnection(dataSource);
    }
}
Then in your test you mock DbcProvider, set it to throw an exception when newDbc is called, and then call your method _getSomeData.
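A minimal sketch of what that test could look like, assuming DataImpl is given a DbcProvider field with a setter (the setter name and the public entry point getSomeData are hypothetical here, not from the question):

@Test(expected = DbException.class)
public void testGetSomeDataThrowsDbException() throws Exception {
    // Mock the injected provider so that opening a connection fails.
    DbcProvider provider = Mockito.mock(DbcProvider.class);
    Mockito.when(provider.newDbc()).thenThrow(new RuntimeException("connection failed"));

    DataImpl dataDao = new DataImpl();
    dataDao.setDbcProvider(provider); // hypothetical injection point

    // The real catch block now runs and rethrows DbException,
    // so the exception path is executed and counted as covered.
    dataDao.getSomeData(new Key());   // hypothetical public wrapper of _getSomeData
}

Because the real method body executes up to the failing call, coverage tools count the catch block as covered, which a doThrow stub on a spy never achieves.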
I am trying to convert java.sql.Clob data into a String using the getSubString method (this method gives good performance compared with the others). The CLOB data is near or more than 32 MB. As per my observation, getSubString is able to return only up to 33554342 bytes.
If the CLOB data crosses 33554342 bytes, it throws the SQL exception below:
ORA-24817: Unable to allocate the given chunk for current lob operation
EDIT
CODE:
public static void main(String[] args) throws SQLException {
    Main main = new Main();
    Connection con = main.getConnection();
    if (con == null) {
        return;
    }
    PreparedStatement pstmt = null;
    ResultSet rs = null;
    String sql = "SELECT Table_ID,CLOB_FILE FROM TableName WHERE SOMECONDITION ";
    String table_Id = null;
    String directClobInStr = null;
    CLOB clobObj = null;
    String clobStr = null;
    Object obj = null;
    try {
        pstmt = con.prepareStatement(sql);
        rs = pstmt.executeQuery();
        while (rs.next()) {
            table_Id = rs.getString("Table_ID");
            directClobInStr = rs.getString("clob_FILE");
            obj = rs.getObject("CLOB_FILE");
            clobObj = (CLOB) obj;
            System.out.println("Table id " + table_Id);
            System.out.println("directClobInStr " + directClobInStr);
            clobStr = clobObj.getSubString(1L, (int) clobObj.length()); // 33554342
            System.out.println("clobDataStr = " + clobStr);
        }
    }
    catch (SQLException e) {
        e.printStackTrace();
        return;
    }
    catch (Exception e) {
        e.printStackTrace();
        return;
    }
    finally {
        try {
            rs.close();
            pstmt.close();
            con.close();
        }
        catch (Exception e) {
            System.out.println(e.getMessage());
        }
    }
}
NOTE: obj = rs.getObject("CLOB_FILE"); works, but it is not what I expect, because I receive the ResultSet value from elsewhere as an Object and have to convert it and get the data out of the CLOB.
Any idea how to achieve this?
Instead:
clobStr = clobObj.getSubString(1L, (int)clobObj.length() );
Try something like:
int toread = (int) clobObj.length();
int read = 0;
final int block_size = 8 * 1024 * 1024;
StringBuilder str = new StringBuilder(toread);
while (toread > 0) {
    int current_block = Math.min(toread, block_size);
    str.append(clobObj.getSubString(read + 1, current_block));
    read += current_block;
    toread -= current_block;
}
clobStr = str.toString();
It extracts the substrings in a loop, 8 MB per iteration.
But remember that, as far as I know, Java Strings are limited to 2 GB (this is the reason why read is declared as int instead of long), while Oracle CLOBs are limited to 128 TB.
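If the CLOB might not fit in a single String at all, a streaming variant avoids materializing it in memory. This is only a sketch, using the standard JDBC Clob.getCharacterStream() (so the oracle.sql.CLOB cast is not needed) and writing to whatever java.io.Writer you choose:

// Sketch: copy the CLOB contents to a Writer in 8 KB chunks instead of one big String.
try (Reader reader = clobObj.getCharacterStream()) {
    char[] buf = new char[8 * 1024];
    int n;
    while ((n = reader.read(buf)) != -1) {
        writer.write(buf, 0, n); // writer: e.g. a FileWriter opened by the caller
    }
}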
I have a Java client that sends a query to a server (jetty-xmlrpc) and receives the data from the server inside a HashMap. Sometimes the data is very big (e.g. 3645888 rows), and when this data is sent to the Java client I get an error (java heap space). How can I send the data in two batches, for example? Or please give me a way to fix it.
This is the server function that gets the data and sends it to the client:
public HashMap getFlickValues(String query, String query2) {
    System.out.println("Query is : " + query);
    System.out.println("Query2 is: " + query2);
    Connection c = null;
    Connection c2 = null;
    Statement st = null;
    Statement st2 = null;
    HashMap<String, Object[]> result = new HashMap<String, Object[]>();
    ArrayList<Double> vaArrL = new ArrayList<Double>();
    ArrayList<Double> vbArrL = new ArrayList<Double>();
    ArrayList<Double> vcArrL = new ArrayList<Double>();
    try {
        Class.forName("org.postgresql.Driver");
        String conString = "jdbc:postgresql://" + host + ":" + port + "/" + DBName +
                "?user=" + user + "&pass=" + pass;
        String conString1 = "jdbc:postgresql://" + host + ":" + port2 + "/" + DBName2 +
                "?user=" + user + "&pass=" + pass;
        //String conString1 = "jdbc:postgresql://127.0.0.1:5431/merkezdbram " +
        //        "?user=" + user + "&pass=" + pass;
        /*c = DriverManager.getConnection(conString);
        st = c.createStatement();
        ResultSet rs = st.executeQuery(query);
        while (rs.next()) {
            vaArrL.add(rs.getDouble("va"));
            vbArrL.add(rs.getDouble("vb"));
            vcArrL.add(rs.getDouble("vc"));
        }*/
        c = DriverManager.getConnection(conString);
        //c.setAutoCommit(false);
        c2 = DriverManager.getConnection(conString1);
        //c2.setAutoCommit(false);
        st = c.createStatement();
        //st.setFetchSize(1000);
        st2 = c2.createStatement();
        //st2.setFetchSize(1000);
        List<ResultSet> resultSets = new ArrayList<>();
        resultSets.add(st.executeQuery(query));
        resultSets.add(st2.executeQuery(query2));
        ResultSets rs = new ResultSets(resultSets);
        int count = 0;
        int ResultSetSize = rs.getFetchSize();
        System.out.println("ResultSetSize is " + ResultSetSize);
        while (rs.next()) {
            //count++;
            //if (count == 2200000) { break; }
            vaArrL.add(rs.getDoubleVa("va"));
            vbArrL.add(rs.getDoubleVb("vb"));
            vcArrL.add(rs.getDoubleVc("vc"));
        }
        int sz = vaArrL.size();
        result.put("va", vaArrL.toArray(new Object[sz]));
        result.put("vb", vbArrL.toArray(new Object[sz]));
        result.put("vc", vcArrL.toArray(new Object[sz]));
        //rs.close();
        st.close();
        c.close();
    } catch (Exception e) {
        System.out.println(e);
        e.printStackTrace();
    }
    System.out.println("Flicker vaArrL.size = " + vaArrL.size());
    return result;
}
and the ResultSets class is:
class ResultSets {
    private java.util.List<java.sql.ResultSet> resultSets;
    private java.sql.ResultSet current;

    public ResultSets(java.util.List<java.sql.ResultSet> resultSets) {
        this.resultSets = new java.util.ArrayList<>(resultSets);
        // remove from the copied list, not the caller's list,
        // so the first result set is not kept in the queue twice
        current = this.resultSets.remove(0);
    }

    public boolean next() throws SQLException {
        if (current.next()) {
            return true;
        } else if (!resultSets.isEmpty()) {
            current = resultSets.remove(0);
            return next();
        }
        return false;
    }

    // delegate so the rs.getFetchSize() call in getFlickValues compiles
    public int getFetchSize() throws SQLException {
        return current.getFetchSize();
    }

    public Double getDoubleVa(String va) throws SQLException {
        return current.getDouble("va");
    }

    public Double getDoubleVb(String vb) throws SQLException {
        return current.getDouble("vb");
    }

    public Double getDoubleVc(String vc) throws SQLException {
        return current.getDouble("vc");
    }
}
I want a way to return the data to the client without the java heap space error. I set -Xmx1024m as a VM argument, but the problem is the same. I would like a solution in my code.
Thanks.
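A sketch of one possible approach (the method name getFlickValuesPage is hypothetical, and LIMIT/OFFSET is assumed to be available since the server is PostgreSQL): page the query and let the client fetch one page per XML-RPC call, so neither side ever holds all 3,645,888 rows at once.

// Sketch: hypothetical paged variant of getFlickValues.
// The client calls it with pageNo = 0, 1, 2, ... until the "va" array comes back empty.
public HashMap<String, Object[]> getFlickValuesPage(String query, int pageSize, int pageNo) {
    HashMap<String, Object[]> result = new HashMap<String, Object[]>();
    ArrayList<Double> vaArrL = new ArrayList<Double>();
    String paged = query + " LIMIT " + pageSize + " OFFSET " + ((long) pageSize * pageNo);
    try {
        Connection c = DriverManager.getConnection(conString); // conString as built above
        Statement st = c.createStatement();
        ResultSet rs = st.executeQuery(paged);
        while (rs.next()) {
            vaArrL.add(rs.getDouble("va")); // vb and vc would be handled the same way
        }
        rs.close();
        st.close();
        c.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
    result.put("va", vaArrL.toArray(new Object[vaArrL.size()]));
    return result;
}

Separately, re-enabling the commented-out c.setAutoCommit(false) and st.setFetchSize(1000) lines matters on PostgreSQL: the JDBC driver only streams rows with a cursor when autocommit is off and a fetch size is set; otherwise each executeQuery buffers the entire result set in the server JVM's memory.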
I have a CSV file which has a few empty cells in some columns. Modifying the CSV file is not an option, as it has around 50k records in total.
Whenever I execute the code below, it throws the error "Incorrect integer value: '' for column 'ParentId' at row 1". My database table allows NULL for the column 'ParentId', but it still throws the error. How can I edit this code so that it fills in the correct values without giving an error?
import java.io.FileNotFoundException;
import java.io.FileReader;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.util.Date;
import org.apache.commons.lang.StringUtils;
import au.com.bytecode.opencsv.CSVReader;

public class CSVLoader {

    static int count;

    private static final
    //String SQL_INSERT = "INSERT INTO ${table}(${keys}) VALUES(${values})";
    String SQL_INSERT = "INSERT INTO ${table} VALUES(${values})";
    private static final String TABLE_REGEX = "\\$\\{table\\}";
    //private static final String KEYS_REGEX = "\\$\\{keys\\}";
    private static final String VALUES_REGEX = "\\$\\{values\\}";

    private Connection connection;
    private char seprator;

    /**
     * Public constructor to build CSVLoader object with
     * Connection details. The connection is closed on success
     * or failure.
     * @param connection
     */
    public CSVLoader(Connection connection) {
        this.connection = connection;
        //Set default separator
        this.seprator = ',';
    }

    /**
     * Parse CSV file using OpenCSV library and load in
     * given database table.
     * @param csvFile Input CSV file
     * @param tableName Database table name to import data
     * @param truncateBeforeLoad Truncate the table before inserting
     *        new records.
     * @throws Exception
     */
    public void loadCSV(String csvFile, String tableName,
            boolean truncateBeforeLoad) throws Exception {
        CSVReader csvReader = null;
        if (null == this.connection) {
            throw new Exception("Not a valid connection.");
        }
        try {
            csvReader = new CSVReader(new FileReader(csvFile), this.seprator);
        } catch (Exception e) {
            e.printStackTrace();
            throw new Exception("Error occured while executing file. "
                    + e.getMessage());
        }
        String[] headerRow = csvReader.readNext();
        count++;
        if (null == headerRow) {
            throw new FileNotFoundException(
                    "No columns defined in given CSV file." +
                    "Please check the CSV file format.");
        }
        /*String questionmarks = StringUtils.repeat("?,", headerRow.length);
        System.out.println(headerRow.length);
        questionmarks = (String) questionmarks.subSequence(0, questionmarks
                .length() - 1);
        System.out.println(SQL_INSERT);
        String query = SQL_INSERT.replaceFirst(TABLE_REGEX, tableName);
        //query = query
        //        .replaceFirst(KEYS_REGEX, StringUtils.join(headerRow, ","));
        query = query.replaceFirst(VALUES_REGEX, questionmarks);
        System.out.println("Query: " + query);
        */
        //String str1 = "insert into posts values(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)"
        String[] nextLine;
        Connection con = null;
        PreparedStatement ps = null;
        try {
            con = this.connection;
            con.setAutoCommit(false);
            ps = con.prepareStatement("insert into posts (Id,PostTypeId,AcceptedAnswerId,ParentId,CreationDate,Score,ViewCount,Body,OwnerUserId,OwnerDisplayName,LastEditorUserId,LastEditorDisplayName,LastEditDate,LastActivityDate,Title,Tags,AnswerCount,CommentCount,FavoriteCount,ClosedDate,CommunityOwnedDate,RowNum) values(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)");
            if (truncateBeforeLoad) {
                //delete data from table before loading csv
                con.createStatement().execute("DELETE FROM " + tableName);
            }
            final int batchSize = 1000;
            int count = 0;
            Date date = null;
            while ((nextLine = csvReader.readNext()) != null) {
                if (null != nextLine) {
                    int index = 1;
                    for (String string : nextLine) {
                        date = DateUtil.convertToDate(string);
                        if (null != date) {
                            ps.setDate(index++, new java.sql.Date(date.getTime()));
                        } else {
                            ps.setString(index++, string);
                        }
                    }
                    System.out.println(count);
                    ps.addBatch();
                    System.out.println(count);
                }
                if (++count % batchSize == 0) {
                    System.out.println(count);
                    ps.executeBatch();
                }
            }
            ps.executeBatch(); // insert remaining records
            con.commit();
        } catch (Exception e) {
            con.rollback();
            e.printStackTrace();
            throw new Exception(
                    "Error occured while loading data from file to database."
                    + e.getMessage());
        } finally {
            if (null != ps)
                ps.close();
            if (null != con)
                con.close();
            csvReader.close();
        }
    }

    public char getSeprator() {
        return seprator;
    }

    public void setSeprator(char seprator) {
        this.seprator = seprator;
    }
}
It might be the case that the int column is actually receiving blank strings; in such a case you can place a manual check as below and still go ahead. Hope this answers your doubt.

// check if the value is null or blank spaces
if (certainValue == null || certainValue.trim().length() == 0) {
    ps.setNull(4, java.sql.Types.INTEGER); // this will set a NULL value for the int column
}
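Wired into the loop from the question, the check could look like this. A sketch only: it sets NULL for every empty cell, and java.sql.Types.INTEGER follows the answer's example even though, strictly, the real column type of each position should be used:

while ((nextLine = csvReader.readNext()) != null) {
    int index = 1;
    for (String string : nextLine) {
        Date date = DateUtil.convertToDate(string);
        if (null != date) {
            ps.setDate(index++, new java.sql.Date(date.getTime()));
        } else if (string == null || string.trim().length() == 0) {
            // empty CSV cell: send SQL NULL instead of '' so that
            // integer columns such as ParentId accept the row
            ps.setNull(index++, java.sql.Types.INTEGER);
        } else {
            ps.setString(index++, string);
        }
    }
    ps.addBatch();
}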