I have a class that throws an exception. I want to mock that exception while still getting coverage (so a spy is needed). How do I mock this with Mockito so that the JUnit coverage report counts those exception lines as covered?
For example:
private List<Data> _getSomeData(Key key) {
log.debug(logPrefix + " GetSomeData");
Connection dbc = null;
PreparedStatement st = null;
ResultSet rs = null;
String q = null;
....
try {
dbc = DataSourceUtils.getConnection(dataSource);
dbc.setAutoCommit(false);
q = "SELECT value FROM table where x = ? and y = ? and z = ?";
st = dbc.prepareStatement(q);
int ix = 1;
st.setInt(ix++, key.x);
st.setInt(ix++, key.y);
st.setInt(ix++, key.z);
rs = st.executeQuery();
while (rs.next()) {
Data data=new Data();
key.id = rs.getLong("x");
key.y = y;
....
dataList.add(data);
}
} catch (Exception e) {
throw new DbException(e, q);
} finally {
DbUtil.cleanup(log, rs, st, dbc);
}
return dataList;
}
So what I want is to cover the exception path above. How do I get it counted in the coverage?
JUnit test:
@Test
public void testException(){
// DataImpl dataDao = new DataImpl();
// dataDao.setLog(new LogImpl());
DataImpl dataDao = Mockito.spy(new DataImpl());
Key key= new Key();
key.x = 1;
key.y = 1;
key.z = 1;
String q =
" SELECT data.* \n" +
" FROM SOME_DATA d1\n" +
" WHERE\n "+
" d1.x = ? \n " +
" AND ROUND (d1.y/ 1000 - 1) = ? \n" +
" AND MOD (d1.z, 1000) = ?";
Mockito.doThrow(new DbException(null, q)).when(dataDao)._getSomeData(key);
}
The above will work, but the exception will not be counted as covered.
What you need here is dependency injection. Then you can mock the dependency and define how it behaves. Currently you are constructing a lot of objects inside your method (DataSourceUtils.getConnection(dataSource), new Data(), ...), which makes them hard to test.
You can create a class DbcProvider:
public class DbcProvider {
private DataSource dataSource; // set or inject this however your DAO currently obtains its DataSource
public Connection newDbc() {
return DataSourceUtils.getConnection(dataSource);
}
}
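The DAO then asks the provider for its connections instead of calling DataSourceUtils directly. A minimal sketch of that refactoring, assuming a setter is added to DataImpl for injection (the setDbcProvider name is illustrative, not from the original code; other fields such as log stay as in your existing class):
public class DataImpl {

    private DbcProvider dbcProvider;

    // Hypothetical setter so tests (or your wiring code) can inject the provider.
    public void setDbcProvider(DbcProvider dbcProvider) {
        this.dbcProvider = dbcProvider;
    }

    private List<Data> _getSomeData(Key key) {
        Connection dbc = null;
        PreparedStatement st = null;
        ResultSet rs = null;
        String q = null;
        List<Data> dataList = new ArrayList<>();
        try {
            // The only structural change: the connection now comes from the injected provider.
            dbc = dbcProvider.newDbc();
            dbc.setAutoCommit(false);
            q = "SELECT value FROM table where x = ? and y = ? and z = ?";
            st = dbc.prepareStatement(q);
            st.setInt(1, key.x);
            st.setInt(2, key.y);
            st.setInt(3, key.z);
            rs = st.executeQuery();
            while (rs.next()) {
                Data data = new Data();
                // ... populate data from rs as before ...
                dataList.add(data);
            }
        } catch (Exception e) {
            throw new DbException(e, q);
        } finally {
            DbUtil.cleanup(log, rs, st, dbc);
        }
        return dataList;
    }
}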
Then in your test you mock DbcProvider, stub it to throw an exception when newDbc() is called, and call your method _getSomeData(); the catch block runs for real, so the coverage tool sees it.
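A minimal sketch of such a test, assuming the setter-based injection shown above and that _getSomeData is reached through a public entry point (the getSomeData method name here is hypothetical):
@Test(expected = DbException.class)
public void getSomeDataWrapsProviderFailureInDbException() throws Exception {
    DbcProvider provider = Mockito.mock(DbcProvider.class);
    Mockito.when(provider.newDbc()).thenThrow(new RuntimeException("connection failure"));

    DataImpl dataDao = new DataImpl();
    dataDao.setDbcProvider(provider); // hypothetical setter from the sketch above
    dataDao.setLog(new LogImpl());

    Key key = new Key();
    key.x = 1;
    key.y = 1;
    key.z = 1;

    // The real method body executes, enters the catch block and rethrows,
    // so the exception path is counted as covered.
    dataDao.getSomeData(key);
}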
I have found the code below to be problematic, as it degrades the performance of an ExtJS 3 grid. I am looking for optimization possibilities at the query or code level. As per my analysis, if we extract the query there are two nested inner queries which respond slowly; in addition, the code inside the while loop tries to find the unique id, so couldn't we use DISTINCT in the query, or joins rather than inner queries?
Please suggest the best practice to follow in order to optimize this.
public boolean isSCACreditOverviewGridVisible(String sessionId) {
Connection conn = null;
ResultSet rs = null;
PreparedStatement ps = null;
boolean result = false;
try {
CommonUtility commUtil = new CommonUtility();
List<String> hmIds = new ArrayList<String>();
Map<String, String> tmStockMap = new TreeMap<String, String>();
Set<String> setRecentCertificate = new HashSet<String>();
String managerAccountId = sessionInfo.getMembershipAccount();
String stockQuery = " select memberId , RootCertficateId from stockposition sp where sp.stocktype = 'TR' and sp.memberId "
+ " IN ( select hm2.accountId from "
+ DATALINK
+ ".holdingmembers hm2 "
+ " where hm2.holdingId = ( select holdingId from "
+ DATALINK
+ ".holdingmembers hm1 where hm1.accountId = ? )) "
+ " order by sp.createdDate desc ";
conn = getChildDBConnection();
if (null != conn) {
ps = conn.prepareStatement(stockQuery);
ps.setString(1, managerAccountId);
rs = ps.executeQuery();
if (null != rs) {
while (rs.next()) {
String memberId = rs.getString("memberId");
String rootCertficateId = rs
.getString("RootCertficateId");
if (tmStockMap.containsKey(rootCertficateId)) {
continue;
}
hmIds.add(memberId);
tmStockMap.put(rootCertficateId, memberId);
}
}
rs.close();
ps.close();
if (null != hmIds && !hmIds.isEmpty()) {
String inIds = commUtil.getInStateParam(hmIds);
String mostRecentLicense = "Select RootCertificateId , memberaccountid from "
+ OctopusSchema.octopusSchema
+ ".certificate c where c.memberaccountid IN ("
+ inIds
+ ") and c.isrootcertificate=0 and c.certificationstatusid > 1 order by c.modifieddate desc";
ps = conn.prepareStatement(mostRecentLicense);
rs = ps.executeQuery();
if (null != rs) {
while (rs.next()) {
String rootCertficateId = rs
.getString("RootCertificateId");
String memberaccountid = rs
.getString("memberaccountid");
if (setRecentCertificate.contains(memberaccountid)) {
continue;
}
setRecentCertificate.add(memberaccountid);
if (tmStockMap.containsKey(rootCertficateId)) {
result = true;
break;
}
}
}
rs.close();
ps.close();
} else {
result = false;
}
}
} catch (Exception e) {
LOGGER.error(e);
} finally {
closeDBReferences(conn, ps, null, rs);
}
return result;
}
QUERY:
select RootCertficateId,memberId from stockposition sp where sp.stocktype = 'TR' and sp.memberId
IN ( select hm2.accountId from
DATALINK.holdingmembers hm2
where hm2.holdingId = ( select holdingId from
DATALINK.holdingmembers hm1 where hm1.accountId = '4937' ))
order by sp.createdDate DESC;
One quick approach would be to substitute your IN with EXISTS. If your inner queries return a lot of rows, EXISTS can be much more efficient; whether it helps depends on how many results your subquery returns.
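For illustration, a sketch of the stock query rewritten with a correlated EXISTS, reusing the table and column names from the question (verify the semantics against your data before adopting it):
// Same lookup expressed with EXISTS instead of IN, so the database can stop
// probing holdingmembers as soon as it finds a match for each stockposition row.
String stockQuery = " select sp.memberId , sp.RootCertficateId from stockposition sp "
        + " where sp.stocktype = 'TR' "
        + " and exists ( select 1 from " + DATALINK + ".holdingmembers hm2 "
        + "   where hm2.accountId = sp.memberId "
        + "   and hm2.holdingId = ( select holdingId from " + DATALINK + ".holdingmembers hm1 "
        + "     where hm1.accountId = ? ) ) "
        + " order by sp.createdDate desc ";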
SQL Server IN vs. EXISTS Performance
I have a web application running under GlassFish 4.1 that contains a couple of features requiring JMS/MDB.
In particular, I am having problems with the generation of a report using JMS/MDB, that is, obtaining data from a table and dumping it into a file.
This is what happens: I have a JMS/MDB message that does a couple of tasks in an Oracle database and, once the final result is in a table, I would like to produce a CSV report from that table (which usually has 30M+ records).
Inside the JMS/MDB, this is what happens to generate the report:
public boolean handleReportContent() {
Connection conn = null;
try {
System.out.println("Handling report content... " + new Date());
conn = DriverManager.getConnection(data.getUrl(), data.getUsername(), data.getPassword());
int reportLine = 1;
String sql = "SELECT FIELD_NAME, VALUE_A, VALUE_B, DIFFERENCE FROM " + data.getDbTableName() + " WHERE SET_PK IN ( SELECT DISTINCT SET_PK FROM " + data.getDbTableName() + " WHERE IS_VALID=? )";
PreparedStatement ps = conn.prepareStatement(sql);
ps.setBoolean(1, false);
ResultSet rs = ps.executeQuery();
List<ReportLine> lst = new ArrayList<>();
int columns = data.getLstFormats().size();
int size = 0;
int linesDone = 0;
while (rs.next()) {
ReportLine rl = new ReportLine(reportLine, rs.getString("FIELD_NAME"), rs.getString("VALUE_A"), rs.getString("VALUE_B"), rs.getString("DIFFERENCE"));
lst.add(rl);
linesDone = columns * (reportLine - 1);
size++;
if ((size - linesDone) == columns) {
reportLine++;
if (lst.size() > 4000) {
appendReportContentNew(lst);
lst.clear();
}
}
}
if (lst.size() > 0) {
appendReportContentNew(lst);
lst.clear();
}
ps.close();
conn.close();
return true;
} catch (Exception e) {
System.out.println("exception handling report content new: " + e.toString());
return false;
}
}
This is working; I am aware it is slow and inefficient, and most likely there is a better way to perform the same operation.
What this method does is:
collect the data from the ResultSet;
dump it into a List;
for every 4,000 objects, call the method appendReportContentNew(), which
dumps the data in the List into the file.
public void appendReportContentNew(List<ReportLine> lst) {
File f = new File(data.getJobFilenamePath());
try {
if (!f.exists()) {
f.createNewFile();
}
FileWriter fw = new FileWriter(data.getJobFilenamePath(), true);
BufferedWriter bw = new BufferedWriter(fw);
for (ReportLine rl : lst) {
String rID = "R" + rl.getLine();
String fieldName = rl.getFieldName();
String rline = rID + "," + fieldName + "," + rl.getValue1() + "," + rl.getValue2() + "," + rl.getDifference();
bw.append(rline);
bw.append("\n");
}
bw.close();
} catch (IOException e) {
System.out.println("exception appending report content: " + e.toString());
}
}
With this method, in 20 minutes it wrote 800k lines (a 30 MB file); the report usually goes to 4 GB or more. This is what I want to improve, if possible.
So I decided to try OpenCSV, and I ended up with the following method:
public boolean handleReportContentv2() {
Connection conn = null;
try {
FileWriter fw = new FileWriter(data.getJobFilenamePath(), true);
System.out.println("Handling report content v2... " + new Date());
conn = DriverManager.getConnection(data.getUrl(), data.getUsername(), data.getPassword());
String sql = "SELECT NLINE, FIELD_NAME, VALUE_A, VALUE_B, DIFFERENCE FROM " + data.getDbTableName() + " WHERE SET_PK IN ( SELECT DISTINCT SET_PK FROM " + data.getDbTableName() + " WHERE IS_VALID=? )";
PreparedStatement ps = conn.prepareStatement(sql);
ps.setBoolean(1, false);
ps.setFetchSize(500);
ResultSet rs = ps.executeQuery();
BufferedWriter out = new BufferedWriter(fw);
CSVWriter writer = new CSVWriter(out, ',', CSVWriter.NO_QUOTE_CHARACTER);
writer.writeAll(rs, false);
writer.close(); // closing the CSVWriter also flushes and closes the underlying BufferedWriter and FileWriter
rs.close();
ps.close();
conn.close();
return true;
} catch (Exception e) {
System.out.println("exception handling report content v2: " + e.toString());
return false;
}
}
So I am collecting all the data from the ResultSet and dumping it into the CSVWriter. In the same 20 minutes, this operation only wrote 7k lines.
But if I use the same method outside the JMS/MDB, the difference is incredible: in just the first 4 minutes it wrote 3M rows to the file.
For the same 20 minutes, it generated a file of 500 MB+.
Clearly OpenCSV is by far the better option if I want to improve performance; my question is why it doesn't perform the same way inside the JMS/MDB.
If that is not possible, is there any other way to improve the same task?
I appreciate any feedback and help on this matter; I am trying to understand why the behavior/performance is different inside and outside of the JMS/MDB.
EDIT:
@MessageDriven(activationConfig = {
@ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
@ActivationConfigProperty(propertyName = "destinationLookup", propertyValue = "MessageQueue")})
public class JobProcessorBean implements MessageListener {
private static final int TYPE_A_ID = 0;
private static final int TYPE_B_ID = 1;
@Inject
JobDao jobsDao;
@Inject
private AsyncReport generator;
public JobProcessorBean() {
}
@Override
public void onMessage(Message message) {
int jobId = -1;
ObjectMessage msg = (ObjectMessage) message;
try {
boolean valid = true;
JobWrapper jobw = (JobWrapper) msg.getObject();
jobId = jobw.getJob().getJobId().intValue();
switch (jobw.getJob().getJobTypeId().getJobTypeId().intValue()) {
case TYPE_A_ID:
jobsDao.updateJobStatus(jobId, 0);
valid = processTask1(jobw);
if(valid) {
jobsDao.updateJobFileName(jobId, generator.getData().getJobFilename());
System.out.println(":: :: JOBW FileName :: "+generator.getData().getJobFilename());
jobsDao.updateJobStatus(jobId, 0);
}
else {
System.out.println("error...");
jobsDao.updateJobStatus(jobId, 1);
}
boolean validfile = handleReportContentv2();
if(!validfile) {
System.out.println("error file...");
jobsDao.updateJobStatus(jobId, 1);
}
break;
case TYPE_B_ID:
(...)
}
if(valid) {
jobsDao.updateJobStatus(jobw.getJob().getJobId().intValue(), 2); //updated to complete
}
System.out.println("***********---------Finished JOB " + jobId + "-----------****************");
System.out.println();
jobw = null;
} catch (JMSException ex) {
Logger.getLogger(JobProcessorBean.class.getName()).log(Level.SEVERE, null, ex);
jobsDao.updateJobStatus(jobId, 1);
} catch (Exception ex) {
Logger.getLogger(JobProcessorBean.class.getName()).log(Level.SEVERE, null, ex);
jobsDao.updateJobStatus(jobId, 1);
} finally {
msg = null;
}
}
private boolean processTask1(JobWrapper jobw) throws Exception {
boolean valid = true;
jobsDao.updateJobStatus(jobw.getJob().getJobId().intValue(), 0);
generator.setData(jobw.getData());
valid = generator.deployGenerator();
if(!valid) return false;
jobsDao.updateJobParameters(jobw.getJob().getJobId().intValue(),new ReportContent());
Logger.getLogger(JobProcessorBean.class.getName()).log(Level.INFO, null, "Job Finished");
return true;
}
So if the same method, handleReportContent(), is executed inside generator.deployGenerator(), it shows those slow results. If I wait for everything inside that method to finish and create the file in this JobProcessorBean, it is much faster. I am just trying to figure out why the behavior differs like this.
Adding the @TransactionAttribute(NOT_SUPPORTED) annotation on the bean might solve the problem (and it did, as your comment indicates).
Why is this so? Because if you don't put any transactional annotation on a message-driven bean, the default becomes @TransactionAttribute(REQUIRED), so everything the bean does is supervised by a transaction manager. Apparently, this slows things down.
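A minimal sketch of what that looks like on the bean from the question (only the annotation is new; the fields and method bodies stay as posted in the EDIT above):
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;

@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "destinationLookup", propertyValue = "MessageQueue")})
@TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED) // onMessage() no longer runs inside a container-managed transaction
public class JobProcessorBean implements MessageListener {
    // ... same injected DAOs and onMessage()/processTask1() bodies as in the question ...
}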
I have a Java client that sends a query to a server (jetty-xmlrpc) and receives the data from the server inside a HashMap. Sometimes the data is very big (e.g. 3645888 rows), and when this data is sent to the Java client I get an error (java heap space). How can I send the data in two parts, for example, or is there another way to fix this?
This is the server function that gets the data and sends it to the client:
public HashMap getFlickValues(String query,String query2){
System.out.println("Query is : "+query);
System.out.println("Query2 is: "+query2);
Connection c = null;
Connection c2 = null;
Statement st = null;
Statement st2 = null;
HashMap<String, Object[]> result = new HashMap<String, Object[]>();
ArrayList<Double> vaArrL = new ArrayList<Double>();
ArrayList<Double> vbArrL = new ArrayList<Double>();
ArrayList<Double> vcArrL = new ArrayList<Double>();
try {
Class.forName("org.postgresql.Driver");
String conString = "jdbc:postgresql://" + host + ":" + port + "/" + DBName +
"?user=" + user + "&pass=" + pass;
String conString1 = "jdbc:postgresql://" + host + ":" + port2 + "/" + DBName2 +
"?user=" + user + "&pass=" + pass;
//String conString1 = "jdbc:postgresql://127.0.0.1:5431/merkezdbram " +
// "?user=" + user + "&pass=" + pass;
/*c = DriverManager.getConnection(conString);
st = c.createStatement();
ResultSet rs = st.executeQuery(query);
while (rs.next()){
vaArrL.add(rs.getDouble("va"));
vbArrL.add(rs.getDouble("vb"));
vcArrL.add(rs.getDouble("vc"));
}*/
c = DriverManager.getConnection(conString);
//c.setAutoCommit(false);
c2 = DriverManager.getConnection(conString1);
//c2.setAutoCommit(false);
st = c.createStatement();
//st.setFetchSize(1000);
st2 = c2.createStatement();
//st2.setFetchSize(1000);
List<ResultSet> resultSets = new ArrayList<>();
resultSets.add(st.executeQuery(query));
resultSets.add(st2.executeQuery(query2));
ResultSets rs = new ResultSets(resultSets);
int count = 0;
int ResultSetSize = rs.getFetchSize();
System.out.println("ResultSetSize is "+ResultSetSize);
while (rs.next()){
//count++;
//if ( count == 2200000) { break;}
vaArrL.add(rs.getDoubleVa("va"));
vbArrL.add(rs.getDoubleVb("vb"));
vcArrL.add(rs.getDoubleVc("vc"));
}
int sz = vaArrL.size();
result.put("va", vaArrL.toArray(new Object[sz]));
result.put("vb", vbArrL.toArray(new Object[sz]));
result.put("vc", vcArrL.toArray(new Object[sz]));
//rs.close();
st.close();
c.close();
} catch ( Exception e ) {
System.out.println(e);
e.printStackTrace();
}
System.out.println("Flicker vaArrL.size = "+vaArrL.size());
return result;
}
and the ResultSets class is:
class ResultSets {
private java.util.List<java.sql.ResultSet> resultSets;
private java.sql.ResultSet current;
public ResultSets(java.util.List<java.sql.ResultSet> resultSets) {
this.resultSets = new java.util.ArrayList<>(resultSets);
current = resultSets.remove(0);
}
public boolean next() throws SQLException {
if (current.next()) {
return true;
}else if (!resultSets.isEmpty()) {
current = resultSets.remove(0);
return next();
}
return false;
}
public Double getDoubleVa(String va) throws SQLException{
return current.getDouble("va");
}
public Double getDoubleVb(String vb) throws SQLException{
return current.getDouble("vb");
}
public Double getDoubleVc(String vc) throws SQLException{
return current.getDouble("vc");
}
}
I want a way to return the data to the client without hitting java heap space. I set -Xmx1024m as a VM argument, but the problem is the same. I would like a solution in my code.
Thanks.
At the moment I'm working on a script that reads several values from different tables of one database. Every time I start a request, I have to open a statement and create a new ResultSet, which leads to horrible, repetitive code. What would be a good way of generalizing this, and how can it be done?
Here are some elements from my code. At the moment there is just one statement, and the closing still has to be inserted, which is one of the primary reasons I am asking this question.
public static void main(String[] args) throws Exception
{
Connection c = null;
Statement stmt = null;
try
{
//set up database connection
Class.forName("org.sqlite.JDBC");
c = DriverManager.getConnection("jdbc:sqlite:/nfs/home/mals/p/pu2002/workspace/Database2");
c.setAutoCommit(false);
stmt = c.createStatement();
//end
//get task id to work with
String Task_id = null;
if(args.length != 0) //if an argument was passed, Task_id will be the first element of the array args (arguments)
{
Task_id = args[0];
}
else if(args.length == 0) //if no arguments were passed, the highest number in the column id from tasks_task will be selected and set as Task_id
{
ResultSet TTask_id = stmt.executeQuery("SELECT max(id) FROM tasks_task");
TTask_id.next(); // position the cursor on the single result row before reading it
int t_id = TTask_id.getInt(1);
Task_id = String.valueOf(t_id);
TTask_id.close();
}
//end
//get solution IDs from taks_ids
ArrayList<Integer> List_solIDs = new ArrayList<Integer>(); //create an empty array list
ResultSet SSolution_task_id = stmt.executeQuery("SELECT id FROM solutions_solution WHERE task_id ="+Task_id + " AND final = 1;"); //Sqlite3-Ausdruck SELECT..., Task IDs verändern pro Aufgabe - "SELECT * FROM solutions_solution where task_id ="+Task_id +";"
while (SSolution_task_id.next()) //loops through all elements of SSolution_task_id
{
List_solIDs.add(SSolution_task_id.getInt("id")); //adds all elements of the resultset SSolution_task_id to the list List_solIDs
}
SSolution_task_id.close();
//end
//get logs according to content type
int count = List_solIDs.size();
String log_javaBuilder = null;
List<String> log_JunitChecker = new ArrayList<String>();
for (int i = 0; i < count; i++)
{
boolean sol_id_valid = false;
String solID = String.valueOf(List_solIDs.get(i));
try
{
ResultSet AAttestation_sol_id = stmt.executeQuery("SELECT * FROM attestation_attestation WHERE solution_id =" +solID+";");
int Returned = AAttestation_sol_id.getInt("final_grade_id");
}
catch(Exception e)
{
sol_id_valid = true;
}
if(sol_id_valid ==true)
{
try
{
ResultSet CCresult_javaBuilder = stmt.executeQuery("SELECT log FROM checker_checkerresult WHERE solution_id = " +solID+ " AND content_type_id = 22;"); //"SELECT id FROM checker_checkerresult where solution_id = " +List_solIDs.get(i)+ ";"
log_javaBuilder = CCresult_javaBuilder.getString("log");
CCresult_javaBuilder.close();
ResultSet CCresult_Junit_checker = stmt.executeQuery("SELECT log FROM checker_checkerresult WHERE solution_id = " +solID+ " AND content_type_id = 24;");
while (CCresult_Junit_checker.next())
{
log_JunitChecker.add(CCresult_Junit_checker.getString("log"));
}
CCresult_Junit_checker.close();
}
catch (Exception e)
{
log_JunitChecker.add(null);
}
//end
Any type of potential improvement is welcome.
P.S.: Tried googling.
It seems you want to look at using some ORM layer, e.g. http://hibernate.org/orm/
What you're looking for is probably a higher-level layer which abstracts you from the underlying lower-level JDBC type of coding.
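For a feel of what that buys you, here is an illustrative JPA sketch; the entity and column names are guesses based on the tables in the question, not the actual schema:
import java.util.List;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name = "solutions_solution")
public class Solution {
    @Id
    private Integer id;

    @Column(name = "task_id")
    private Integer taskId;

    @Column(name = "final")
    private Integer finalFlag;

    // getters and setters omitted

    // Fetching the final solution ids for a task becomes a single JPQL query,
    // with no Statement/ResultSet bookkeeping in your own code.
    public static List<Integer> finalSolutionIds(EntityManager em, int taskId) {
        return em.createQuery(
                "select s.id from Solution s where s.taskId = :taskId and s.finalFlag = 1",
                Integer.class)
            .setParameter("taskId", taskId)
            .getResultList();
    }
}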
Rather than writing a generic method yourself, it is usually better to use a framework. There are many JPA implementations out there which solve not only this issue but also take care of a lot of persistence-layer boilerplate code; start with the JPA documentation. You can also use the Spring JDBC template to solve the problem mentioned above; see the Spring JDBC documentation.
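As a taste of the Spring option, a sketch using JdbcTemplate (the DataSource setup is assumed to exist elsewhere; table and column names are taken from the question):
import java.util.List;
import javax.sql.DataSource;
import org.springframework.jdbc.core.JdbcTemplate;

public class SolutionQueries {

    private final JdbcTemplate jdbc;

    public SolutionQueries(DataSource dataSource) {
        this.jdbc = new JdbcTemplate(dataSource);
    }

    // Connections, statements and result sets are opened and closed by the template.
    public List<Integer> finalSolutionIds(int taskId) {
        return jdbc.query(
                "SELECT id FROM solutions_solution WHERE task_id = ? AND final = 1",
                (rs, rowNum) -> rs.getInt("id"),
                taskId);
    }

    public Integer latestTaskId() {
        return jdbc.queryForObject("SELECT max(id) FROM tasks_task", Integer.class);
    }
}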
Now, if you really don't want any framework dependency and need to finish this code quickly, you can define your own JdbcTemplate-style class which takes a query and a parameter list and returns the results. This class can handle opening the connection, executing the query, closing the connection, and so on.
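If you go the no-framework route, a minimal sketch of such a helper could look like this (names are illustrative; it maps each row through a small callback and closes everything with try-with-resources):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class SimpleJdbcTemplate {

    // Maps one row of a ResultSet to a value of type T.
    public interface RowMapper<T> {
        T mapRow(ResultSet rs) throws SQLException;
    }

    private final String url;

    public SimpleJdbcTemplate(String url) {
        this.url = url;
    }

    // Opens the connection, binds parameters, runs the query, maps the rows,
    // and closes everything via try-with-resources.
    public <T> List<T> query(String sql, RowMapper<T> mapper, Object... params) throws SQLException {
        List<T> result = new ArrayList<>();
        try (Connection c = DriverManager.getConnection(url);
             PreparedStatement ps = c.prepareStatement(sql)) {
            for (int i = 0; i < params.length; i++) {
                ps.setObject(i + 1, params[i]);
            }
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    result.add(mapper.mapRow(rs));
                }
            }
        }
        return result;
    }
}
A usage sketch: template.query("SELECT id FROM solutions_solution WHERE task_id = ? AND final = 1", rs -> rs.getInt("id"), taskId) returns the list of ids with no manual cleanup in the caller.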
What if you try to use generics on methods? This is a quick example, just for illustration; you should improve on all of this :)
resource: official docs
public static <T> List<T> getSingleValueList(ResultSet rs, Class<T> clazz, String colName) throws Exception {
ArrayList<T> list = new ArrayList<T>();
while (rs.next()) {//loops through all elements of generic list
list.add((T) rs.getObject(colName)); //adds all elements of the resultset rs to the list
}
rs.close();
return list;
}
public static <T> T getSingleValue(ResultSet rs, Class<T> clazz, String colName) throws Exception {
try {
if (rs.next()) {//loops through all elements of generic list
return (T) rs.getObject(colName);
} else {
throw new Exception("no value found.");
}
} finally {
rs.close();
}
}
public static void main(String[] args) throws Exception {
Connection c = null;
Statement stmt = null;
try {
//set up database connection
Class.forName("org.sqlite.JDBC");
c = DriverManager.getConnection("jdbc:sqlite:/nfs/home/mals/p/pu2002/workspace/Database2");
c.setAutoCommit(false);
stmt = c.createStatement();
//end
//get task id to work with
String Task_id = null;
if (args.length != 0) //if an argument was passed, Task_id will be the first element of the array args (arguments)
{
Task_id = args[0];
} else if (args.length == 0) //if no arguments were passed, the highest number in the column id from tasks_task will be selected and set as Task_id
{
ResultSet TTask_id = stmt.executeQuery("SELECT max(id) FROM tasks_task");
TTask_id.next(); // position the cursor on the single result row before reading it
int t_id = TTask_id.getInt(1);
Task_id = String.valueOf(t_id);
TTask_id.close();
}
//end
//get solution IDs from taks_ids
ResultSet SSolution_task_id = stmt.executeQuery("SELECT id FROM solutions_solution WHERE task_id =" + Task_id + " AND final = 1;"); //Sqlite3-Ausdruck SELECT..., Task IDs verändern pro Aufgabe - "SELECT * FROM solutions_solution where task_id ="+Task_id +";"
List<Integer> List_solIDs = getSingleValueList(SSolution_task_id, Integer.class, "id"); //create an empty array list
//end
//get logs according to content type
int count = List_solIDs.size();
String log_javaBuilder = null;
List<String> log_JunitChecker = new ArrayList<String>();
List<String> tmplog_JunitChecker;
for (int i = 0; i < count; i++) {
boolean sol_id_valid = false;
String solID = String.valueOf(List_solIDs.get(i));
try {
ResultSet AAttestation_sol_id = stmt.executeQuery("SELECT * FROM attestation_attestation WHERE solution_id =" + solID + ";");
Integer Returned = getSingleValue(AAttestation_sol_id, Integer.class, "final_grade_id");
} catch (Exception e) {
sol_id_valid = true;
}
if (sol_id_valid == true) {
try {
ResultSet CCresult_javaBuilder = stmt.executeQuery("SELECT log FROM checker_checkerresult WHERE solution_id = " + solID + " AND content_type_id = 22;"); //"SELECT id FROM checker_checkerresult where solution_id = " +List_solIDs.get(i)+ ";"
log_javaBuilder = getSingleValue(CCresult_javaBuilder, String.class, "log");
ResultSet CCresult_Junit_checker = stmt.executeQuery("SELECT log FROM checker_checkerresult WHERE solution_id = " + solID + " AND content_type_id = 24;");
tmplog_JunitChecker = getSingleValueList(CCresult_Junit_checker, String.class, "log");
log_JunitChecker.addAll(tmplog_JunitChecker);
} catch (Exception e) {
log_JunitChecker.add(null);
}
//end
}
}
} catch (Exception eeee) {
//handle it
}
}
I hope this sheds some light.
Anyway, frameworks in almost all cases help a lot.
I am trying to generate a random code to use as a licenseKey and check whether it exists in the database or not. If it does not exist, it is displayed on my JSP page; if it exists, I continue generating random codes. I got the error "java.lang.StackOverflowError". How do I solve this? Below is my code:
package com.raydar.hospital;
import com.raydar.hospital.DB_Connection;
import java.sql.*;
public class RandomCodeGenerator {
String licenseKey = "";
int noOfCAPSAlpha = 4;
int noOfDigits = 4;
int minLen = 8;
int maxLen = 8;
char[] code = RandomCode.generateCode(minLen, maxLen, noOfCAPSAlpha, noOfDigits);
public RandomCodeGenerator(){
}
public String getOutputCode() throws Exception{
String result ="";
result = isLicenseKeyExist();
System.out.println("4 + " +result);
if (result=="false"){
System.out.println("1 + " +new String(code));
licenseKey = new String(code);
}
else if (result=="true"){
System.out.println("2 + " +new String(code));
licenseKey = new String(code);
isLicenseKeyExist ();
}
return licenseKey;
}
private String isLicenseKeyExist () throws Exception{
String code = "";
code = getOutputCode();
Connection connection = null;
Statement statement = null;
ResultSet rs = null;
String result="";
System.out.println("3 + " +code);
try{
DB_Connection connect = new DB_Connection();
connection = connect.getDBConnection();
statement = connection.createStatement();
rs = statement.executeQuery("SELECT licenseKey FROM hospital WHERE licenseKey = '" +code+ "'");
if (rs.next()){
result = "true";
}
else{
result = "false";
}
}catch (Exception e){
System.out.println("Error retrieving data! "+e);
}
return result;
}
}
You create a recursive loop where isLicenseKeyExist() calls getOutputCode(), but then getOutputCode() calls isLicenseKeyExist(). So eventually you run out of stack space, and get this exception.
Here,
public String getOutputCode() throws Exception{
String result ="";
result = isLicenseKeyExist();
...
}
private String isLicenseKeyExist () throws Exception{
String code = "";
code = getOutputCode();
...
}
I think you want something like this. Remove the field called code from your class, and its initialiser, and put the call to RandomCode.generateCode inside your getOutputCode method like this. The reason is that you'll have to call it repeatedly if your code is already in the database.
public String getOutputCode() throws SQLException {
String code;
do {
code = new String(RandomCode.generateCode(minLen, maxLen, noOfCAPSAlpha, noOfDigits));
}
while(licenceKeyExists(code));
return code;
}
private boolean licenceKeyExists(String code) throws SQLException {
Connection connection = null;
Statement statement = null;
ResultSet rs = null;
try {
DB_Connection connect = new DB_Connection();
connection = connect.getDBConnection();
statement = connection.createStatement();
rs = statement.executeQuery("SELECT licenseKey FROM hospital WHERE licenseKey = '" + code + "'");
return rs.next();
}
finally {
try {
if (connection != null) {
connection.close();
}
} catch (SQLException ignored){}
}
}
@aween - @captureSteve has answered the first part of the question. So, straight to the "I want to call this function" comment. If I understand your question correctly, you want to generate a key and check if it is available in the DB using isLicenseKeyExist(). In that case, why don't you create the key first, then pass it to isLicenseKeyExist()? That function will then return true/false, based on which you can decide what to do.
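In other words, a sketch only, assuming isLicenseKeyExist is changed to accept the candidate key and return a boolean, as suggested above:
public String getOutputCode() throws Exception {
    String candidate;
    do {
        // generate first...
        candidate = new String(RandomCode.generateCode(minLen, maxLen, noOfCAPSAlpha, noOfDigits));
        // ...then ask the database; loop until we find a key that is not taken
    } while (isLicenseKeyExist(candidate));
    return candidate;
}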