How to get the size of a java.sql.ResultSet? - java

I want to get the size of the ResultSet inside the while loop.
I tried the code below and got the results I want, but it seems to interfere with result.next(): the while loop only runs once when I do this.
What's the proper way of doing this?
result.first();
while (result.next()) {
    System.out.println(result.getString(2));
    System.out.println("A. " + result.getString(5) + "\n" + "B. " + result.getString(6) + "\n" + "C. " + result.getString(7) + "\n" + "D. " + result.getString(8));
    System.out.println("Answer: ");
    answer = inputquiz.next();
    result.last();
    if (answer.equals(result.getString(10))) {
        score++;
        System.out.println(score + "/" + result.getRow());
    } else {
        System.out.println(score + "/" + result.getRow());
    }
}

What's the proper way of doing this?
Map it to a List<Entity>. Since your code is far from self-documenting (you're using indexes instead of column names), I can't give a well-suited example, so I'll take a Person as an example.
First create a javabean class representing whatever a single row contains.
public class Person {
    private Long id;
    private String firstName;
    private String lastName;
    private Date dateOfBirth;
    // Add/generate c'tors/getters/setters/equals/hashCode and other boilerplate.
}
(any decent IDE, such as Eclipse, can autogenerate them)
Then let JDBC do the following job.
List<Person> persons = new ArrayList<Person>();
while (resultSet.next()) {
    Person person = new Person();
    person.setId(resultSet.getLong("id"));
    person.setFirstName(resultSet.getString("firstName"));
    person.setLastName(resultSet.getString("lastName"));
    person.setDateOfBirth(resultSet.getDate("dateOfBirth"));
    persons.add(person);
}
// Close resultSet/statement/connection in finally block.
return persons;
Then you can just do
int size = persons.size();
And then, to replace your code example:
for (int i = 0; i < persons.size(); i++) {
    Person person = persons.get(i);
    System.out.println(person.getFirstName());
    int size = persons.size(); // Do with it whatever you want.
}
See also:
How to check if there is zero-or-one result or one-or-more results and their size

You could call result.last() and then result.getRow() (which retrieves the current row number) to get the count, but that forces all rows to be loaded, and if it's a big result set it might not be very efficient. The better way is to run a SELECT COUNT(*) on your query beforehand and get the count, as demonstrated in this post.
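For instance, a rough sketch of the COUNT(*) approach (the table name and the conn variable here are placeholders, not from the question):

// Hypothetical table name, for illustration only.
Statement countStmt = conn.createStatement();
ResultSet countRs = countStmt.executeQuery("SELECT COUNT(*) FROM quiz_questions");
int total = 0;
if (countRs.next()) {
    total = countRs.getInt(1); // COUNT(*) comes back as the first (and only) column
}
countRs.close();
countStmt.close();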

This is a tricky question.
Normally, result.last() scrolls to the end of the ResultSet, and you can't go back.
If you created the statement using one of the createStatement or prepareStatement methods with a "resultSetType" parameter, and you've set the parameter to ResultSet.TYPE_SCROLL_INSENSITIVE or ResultSet.TYPE_SCROLL_SENSITIVE, then you can scroll the ResultSet using first() or relative() or some other methods.
However, I'm not sure if all databases / JDBC drivers support scrollable result sets, and there are likely to be performance implications in doing this. (A scrollable result set implies that either the database or the JVM needs to buffer the entire resultset somewhere ... or recalculate it ... and that's expensive for a large resultset.)
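For illustration, a sketch of how such a scrollable statement could be created and used to get the row count, assuming your driver supports it (conn and the query are placeholders):

// Scrollable, read-only result set; support depends on the JDBC driver.
Statement stmt = conn.createStatement(
        ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
ResultSet rs = stmt.executeQuery("SELECT ..."); // your query here
rs.last();
int rowCount = rs.getRow();   // row number of the last row == total number of rows
rs.beforeFirst();             // rewind so a normal while (rs.next()) loop still works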

A way of getting the size of a ResultSet without using an ArrayList etc.:
int size = 0;
if (rs != null) {
    rs.beforeFirst();
    rs.last();
    size = rs.getRow();
}
Now you have the size. And if you want to print the ResultSet, use the following line of code before printing as well:
rs.beforeFirst();

There is also another way to get the count from the DB.
Note:
This column is only updated when the DBAs refresh the table statistics, so it may not reflect the realtime row count.
select num_rows from all_Tables where table_name ='<TABLE_NAME>';

Related

Java Mysql table data output formatting

Here is my MySql table:
I want to show the output of the query in commandline as below:
I have written the code below to loop, but I am getting only the first row. What do I have to modify?
ResultSet rs2 = stmt.executeQuery(table_retrive);
String[] cols = new String[itemList.size()];
int[] rec = new int[itemList.size()];
for (int i = 0; i < itemList.size(); i++) {
    while (rs2.next()) {
        cols[i] = (String) itemList.get(i);
        rec[i] = rs2.getInt(cols[i]);
        System.out.println(rec[i] + " ");
    }
}
Your two loops are nested the wrong way. You start at i = 0 and iterate once over the whole ResultSet, filling your first array position. When this is done, i is incremented and you try to iterate the ResultSet a second time, but the cursor is already at the end of the ResultSet, so rs2.next() returns false and the loop body is never executed again.
So you have two Solutions:
Handle the loops correctly. Unfortunately I do not know what you are trying to do, because this is C-like code without OOP that doesn't convey its semantics, and itemList seems to hold preset values that you read to decide which column to take for the i-th position, which seems odd. Maybe switching the loops does what you want: start with the while and nest the for inside it.
Reset the cursor of the ResultSet after the while loop with rs2.beforeFirst(). WARNING: this can throw a SQLFeatureNotSupportedException, as not all databases/drivers can move the cursor backwards. It is also a very ugly solution, since you should really parse each row only once (see the rough sketch below).
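A rough sketch of that second option, reusing the cols, rec, rs2 and itemList from the question and assuming the statement was created as scrollable:

for (int i = 0; i < itemList.size(); i++) {
    cols[i] = (String) itemList.get(i);
    while (rs2.next()) {
        rec[i] = rs2.getInt(cols[i]);
        System.out.println(rec[i] + " ");
    }
    rs2.beforeFirst(); // rewind for the next column; may throw SQLFeatureNotSupportedException
}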
Try using the printf() or format() method. It is the same as the printf method in C: you pass a format string and parameters. Look at link1 and link 2.
Example: System.out.printf("%d%5s%10d", 5, "|", 10);
Output: 5 | 10
Using this, I got all the values, but in one row:
while (rs2.next()) {
    for (int i = 0; i < itemList.size(); i++) {
        cols[i] = (String) itemList.get(i);
        rec[i] = rs2.getInt(cols[i]);
        System.out.print(rec[i] + " ");
    }
}
But I need to split the output up into rows.
The usage of the inner loop is your problem.
You can enhance your code to remove the second loop; it basically does nothing. You can loop over your result set and, in the same loop, use the incremented variable to persist the values accordingly.
The code in your question is only half implemented, so it is difficult to say exactly what needs to be done. Nevertheless, here's an attempt to resolve the problem for you:
while (rs2.next()) {
    System.out.println(rs2.getInt(1) + "\t |" + rs2.getString(2) + "\t |" + rs2.getString(3));
}
This is based on the column names from the table in the question, assuming that column 2 and column 3 are Strings.
You can add the necessary details to this code to complete it according to your use case; I've just taken the example of showing a record in one line.
EDIT:
The OP has his own way of programming, but to answer the question in the comment, this is how you can do it:
while (rs2.next()) {
    for (int i = 0; i < itemList.size(); i++) {
        cols[i] = (String) itemList.get(i);
        rec[i] = rs2.getInt(cols[i]);
        System.out.print(rec[i] + "\t |");
    }
    System.out.println();
}

Retrieving the number of entries in a ResultSet

A few answers here on SO recommend using ResultSet's getRow() method to get the number of entries in a query result. However, the following code
private static int getSizeOfResultSet(ResultSet result) throws SQLException {
    if (result != null) {
        int count = 0;
        while (result.next()) {
            count++;
        }
        System.out.println("Count of resultset is " + result.getRow() + " and count " + count);
        return result.getRow();
    } else {
        System.out.println("Result is null");
        return 0;
    }
}
is returning me things like
Count of resultset is 0 and count 1
even though entries are guaranteed to exist (in a JUnit test case). Note that this project has to stay on the Java 6 standard, so maybe that is the issue. Is there anything to consider when using the getRow() function?
Additionally, I tried with
result.last();
int c = result.getRow();
result.beforeFirst();
return c;
like mentioned here, but the result is still 0.
EDIT: I should mention that the while (result.next()) loop is only there for testing purposes. The behaviour without the loop is exactly the same (i.e. 0).
The javadoc states
returns the current row number; 0 if there is no current row
After you've visited all the rows with ResultSet#next() there is no current row.
Use your count value for the number of rows. Or use a SELECT count(*) ... style query.
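For example, a sketch of the counting approach applied to the method from the question, simply returning the loop count instead of getRow():

private static int getSizeOfResultSet(ResultSet result) throws SQLException {
    if (result == null) {
        return 0;
    }
    int count = 0;
    while (result.next()) {
        count++; // count rows as we visit them; the cursor ends up after the last row
    }
    return count; // getRow() would return 0 here, because there is no current row anymore
}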

Spring data / Neo4j path length with large data sets

I have been running the following query to find relatives within a certain "distance" of a given person:
#Query("start person=node({0}), relatives=node:__types__(className='Person') match p=person-[:PARTNER|CHILD*]-relatives where LENGTH(p) <= 2*{1} return distinct relatives")
Set<Person> getRelatives(Person person, int distance);
The 2*{1} comes from one conceptual "hop" between people being represented as two nodes - one Person and one Partnership.
This has been fine so far, on test populations. Now I'm moving on to actual data, which comes in sizes from 1 to 10 million, and this is taking forever (also from the data browser in the web interface).
Assuming the cost was from loading everything into ancestors, I rewrote the query as a test in the data browser:
start person=node(385716) match p=person-[:PARTNER|CHILD*1..10]-relatives where relatives.__type__! = 'Person' return distinct relatives
And that works fine, in fractions of a second on the same data store. But when I want to put it back into Java:
#Query("start person=node({0}) match p=person-[:PARTNER|CHILD*1..{1}]-relatives where relatives.__type__! = 'Person' return relatives")
Set<Person> getRelatives(Person person, int distance);
That won't work:
[...]
Nested exception is Properties on pattern elements are not allowed in MATCH.
"start person=node({0}) match p=person-[:PARTNER|CHILD*1..{1}]-relatives where relatives.__type__! = 'Neo4jPerson' return relatives"
^
Is there a better way of putting a path-length restriction in there? I would prefer not to use a WHERE clause, as that would involve loading ALL the paths, potentially loading millions of nodes when I only need to go to a depth of 10. This would presumably leave me no better off.
Any ideas would be greatly appreciated!
Michael to the rescue!
My solution:
public Set<Person> getRelatives(final Person person, final int distance) {
    final String query = "start person=node(" + person.getId() + ") "
            + "match p=person-[:PARTNER|CHILD*1.." + 2 * distance + "]-relatives "
            + "where relatives.__type__! = '" + Person.class.getSimpleName() + "' "
            + "return distinct relatives";
    return this.query(query);
    // Where I would previously instead have called
    // return personRepository.getRelatives(person, distance);
}

public Set<Person> query(final String q) {
    final EndResult<Person> result = this.template.query(q, MapUtil.map()).to(Neo4jPerson.class);
    final Set<Person> people = new HashSet<Person>();
    for (final Person p : result) {
        people.add(p);
    }
    return people;
}
Which runs very quickly!
You're almost there :)
Your first query is a full graph scan, which effectively loads the whole database into memory and pulls all nodes through this pattern match multiple times.
So it won't be fast, also it would return huge datasets, don't know if that's what you want.
The second query looks good; the only thing is that you cannot parametrize the min/max values of variable-length relationships, due to their effects on query optimization / caching.
So for right now you'd have to go with template.query() or different query methods in your repo for different max-values.
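To illustrate that second option, a sketch with separate repository methods per depth (the method names are made up; the Cypher is the OP's query with hard-coded max values, where max = 2 * distance):

// Hypothetical repository methods, one per supported depth, since {1} cannot be
// used as the max value of a variable-length relationship.
@Query("start person=node({0}) match p=person-[:PARTNER|CHILD*1..4]-relatives "
        + "where relatives.__type__! = 'Person' return distinct relatives")
Set<Person> getRelativesDepth2(Person person);

@Query("start person=node({0}) match p=person-[:PARTNER|CHILD*1..8]-relatives "
        + "where relatives.__type__! = 'Person' return distinct relatives")
Set<Person> getRelativesDepth4(Person person);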

What is the best alternative to BatchStatement execute for retriving values from database (MSSQL 2008)

I have a SQL query as shown below.
SELECT O_DEF,O_DATE,O_MOD from OBL_DEFINITVE WHERE OBL_DEFINITVE_ID =?
A collection of IDs is passed to this query and run as a batch query. It executes up to 10000 times to retrieve values from the database (someone else's mess).
public static Map getOBLDefinitionsAsMap(Collection oblIDs)
    throws java.sql.SQLException
{
    Map retVal = new HashMap();
    if (oblIDs != null && (!oblIDs.isEmpty()))
    {
        BatchStatementObject stmt = new BatchStatementObject();
        stmt.setSql("SELECT O_DEF,O_DATE,O_MOD from OBL_DEFINITVE WHERE OBL_DEFINITVE_ID=?");
        stmt.setParameters(
            PWMUtils.convertCollectionToSubLists(taskIDs, 1));
        stmt.setResultsAsArray(true);
        QueryResults rows = stmt.executeBatchSelect();
        int rowSize = rows.size();
        for (int i = 0; i < rowSize; i++)
        {
            QueryResults.Row aRow = (QueryResults.Row) rows.getRow(i);
            CoblDefinition ctd = new CoblDefinition(aRow);
            retVal.put(aRow.getLong(0), ctd);
        }
    }
    return retVal;
}
Now we have identified that the query can be modified to
SELECT O_DEF,O_DATE,O_MOD from OBL_DEFINITVE WHERE OBL_DEFINITVE_ID in (???)
so that we can reduce it to a single query.
The problem here is that MSSQL Server throws the exception
Prepared or callable statement has more than 2000 parameter
and we are stuck here. Can someone provide a better alternative to this?
There is a maximum number of allowed parameters, let's call it n. You can do one of the following:
If you have m*n + k parameters, you can create m batches (or m+1 batches, if k is not 0). If you have 10000 parameters and 2000 is the maximum allowed parameters, you will only need 5 batches.
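A rough sketch of that chunking idea with a plain PreparedStatement and an IN clause built per chunk (CHUNK_SIZE, the method name, and the row handling are assumptions, not from the question):

// A sketch only: split the IDs into chunks below the parameter limit and run one
// IN-clause query per chunk.
static final int CHUNK_SIZE = 2000; // stay at or below the server/driver limit

static void loadDefinitions(Connection conn, List<Long> ids) throws SQLException {
    for (int from = 0; from < ids.size(); from += CHUNK_SIZE) {
        List<Long> chunk = ids.subList(from, Math.min(from + CHUNK_SIZE, ids.size()));
        StringBuilder sql = new StringBuilder(
                "SELECT O_DEF,O_DATE,O_MOD from OBL_DEFINITVE WHERE OBL_DEFINITVE_ID in (");
        for (int i = 0; i < chunk.size(); i++) {
            sql.append(i == 0 ? "?" : ",?");
        }
        sql.append(")");
        PreparedStatement ps = conn.prepareStatement(sql.toString());
        try {
            for (int i = 0; i < chunk.size(); i++) {
                ps.setLong(i + 1, chunk.get(i)); // JDBC parameters are 1-based
            }
            ResultSet rs = ps.executeQuery();
            while (rs.next()) {
                // build the CoblDefinition / map entry for this row, as in the original code
            }
            rs.close();
        } finally {
            ps.close();
        }
    }
}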
Another solution is to generate the query string in your application and add your parameters as strings. This way you run your query only once. This is an obvious optimization in speed, but you'll have a query string generated in your application. You would build your where clause like this:
String myWhereClause = "where TaskID = " + taskIDs[0];
for (int i = 1; i < numberOfTaskIDs; i++)
{
    myWhereClause += " or TaskID = " + taskIDs[i];
}
It looks like you are using your own wrapper around PreparedStatement and addBatch(). You are clearly reaching a limit of how many statements/parameters can be batched at once. You will need to call executeBatch periodically (e.g. every 100 or 1000 statements), instead of letting the batch build up until the limit is reached.
Edit: Based on the comment below I reread the problem. The solution: make sure you use fewer than 2000 parameters when building the query, breaking it up into two or more queries as required.

how to speed up my ArrayList searching?

I currently have an ArrayList holding objects of a class I have created. I then iterate over the ArrayList in a for loop, comparing data from the ArrayList against some global variables that are loaded elsewhere. However, this ArrayList is constantly growing and will eventually have about 115 elements, which makes it take a very long time to search through. The function that does this is also called once for every line I read from a text file, and the text file will usually be around 400-500 lines long, so as you can tell it is a very slow process even when testing on small files. Is there a way to speed this up, maybe by using another collection instead of an ArrayList? My reasoning for using the ArrayList is that I have to know which index it is on when it finds a match.
Here is the class:
private ArrayList<PanelData> panelArray = new ArrayList<PanelData>(1);

public class PanelData {
    String dev = "";
    String inst = "";
    double tempStart = 0.0;
    double tempEnd = 0.0;
}
Function:
public void panelTimeHandler(double timeStart, double timeEnd) throws SQLException {
    PanelData temps = new PanelData();
    temps.dev = devIDStr;
    temps.inst = instanceStr;
    temps.tempStart = timeStart;
    temps.tempEnd = timeEnd;
    boolean flag = false;
    if (!flag) {
        panelArray.add(temps);
        flag = true;
    }
    for (int i = 0; i < panelArray.size(); ++i) {
        if (panelArray.get(i).dev.equals(devIDStr) && panelArray.get(i).inst.equals(instanceStr)) {
            if (panelArray.get(i).tempStart <= timeStart && panelArray.get(i).tempEnd >= timeEnd) {
                // Do Nothing
            } else {
                temps.dev = devIDStr;
                temps.inst = instanceStr;
                temps.tempStart = timeStart;
                temps.tempEnd = timeEnd;
                insert();
                panelArray.set(i, temps);
            }
        } else {
            temps.dev = devIDStr;
            temps.inst = instanceStr;
            temps.tempStart = timeStart;
            temps.tempEnd = timeEnd;
            panelArray.add(temps);
            insert();
        }
    }
}
If there is something more you would like to see just ask, thanks. Beef.
Update: Added insert() function
private void insert() throws SQLException {
    stmt = conn.createStatement();
    String sqlStm = "update ARRAY_BAC_SCH_Schedule set SCHEDULE_TIME = {t '" + finalEnd + "'} WHERE SCHEDULE_TIME >= {t '" + finalStart + "'} AND" +
            " SCHEDULE_TIME <= {t '" + finalEnd + "'} AND VALUE_ENUM = 0 AND DEV_ID = " + devIDStr + " and INSTANCE = " + instanceStr;
    int updateSuccess = stmt.executeUpdate(sqlStm);
    if (updateSuccess < 1) {
        sqlStm = "insert into ARRAY_BAC_SCH_Schedule (SITE_ID, DEV_ID, INSTANCE, DAY, SCHEDULE_TIME, VALUE_ENUM, Value_Type) " +
                " values (1, " + devIDStr + ", " + instanceStr + ", " + day + ", {t '" + finalStart + "'}, 1, 'Unsupported')";
        stmt.executeUpdate(sqlStm);
        sqlStm = "insert into ARRAY_BAC_SCH_Schedule (SITE_ID, DEV_ID, INSTANCE, DAY, SCHEDULE_TIME, VALUE_ENUM, Value_Type) " +
                " values (1," + devIDStr + ", " + instanceStr + ", " + day + ", {t '" + finalEnd + "'}, 0, 'Unsupported')";
        stmt.executeUpdate(sqlStm);
    }
    if (stmt != null)
        stmt.close();
}
Update:
Thank you to Matteo. I realized I was adding to the array even when I hadn't found a match yet: if a match was only found at the 10th element, the entry would already have been added to the array the first 9 times, which created many extra elements and was why it was so slow. I added some breaks and did a little tweaking in the function, and it improved the performance a lot. Thanks for all the input.
You can use a LinkedHashSet. It seems you only add elements to the end of the list, which is exactly what a LinkedHashSet does as well when inserting an element.
Note, however, that a LinkedHashSet will not allow duplicates, since it is a set.
Checking whether an element exists is O(1) using contains().
Using the LinkedHashSet will also allow you to keep track of where an element was added, and iterating it will be in order of insertion.
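A minimal sketch of that idea, assuming PanelData is given equals() and hashCode() based on dev and inst (that part is an assumption, not in the original class):

// Assumes PanelData overrides equals()/hashCode() on (dev, inst).
Set<PanelData> panelSet = new LinkedHashSet<PanelData>();

PanelData probe = new PanelData();
probe.dev = devIDStr;
probe.inst = instanceStr;

if (!panelSet.contains(probe)) {   // O(1) membership check instead of scanning the list
    panelSet.add(probe);           // keeps insertion order when iterating
}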
What about using a hashmap?
I would create a small class for the key:
class Key {
    String dev, instr;
    // todo: implement equals & hashCode
}
and create the map:
Map<Key, PanelData> map = new HashMap...
then you can easily find the element you need by invoking map.get(new Key(...)).
Instead of creating a new class, you could also tweak the PanelData class, implementing equals & hashCode so that two instances are equal iff their dev and instr are equal. In this case, your map becomes:
Map<PanelData, PanelData> map ...
// to add:
map.put(temps, temps)
// to search:
PanelData elem = map.get(new PanelData(desiredDev, desiredInstr));
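For completeness, one hypothetical way the Key class above could be filled in (assuming dev and instr are never null):

class Key {
    final String dev, instr;

    Key(String dev, String instr) {
        this.dev = dev;
        this.instr = instr;
    }

    @Override
    public int hashCode() {
        return 31 * dev.hashCode() + instr.hashCode();
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof Key)) {
            return false;
        }
        Key other = (Key) o;
        return dev.equals(other.dev) && instr.equals(other.instr);
    }
}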
Quite a few optimizations are possible here.
1) The call panelArray.get(i) is used repeatedly. Declare a PanelData variable outside the loop, and assign it once at the very beginning of each iteration:
PanelData pd = null;
for (int i = 0; i < panelArray.size(); ++i) {
    pd = panelArray.get(i);
    ...
}
2) If your dataset allows it, consider using a few maps to help speed up lookup times:
HashMap<String, PanelData> devToPanelDataMapping = new HashMap<String,PanelData>();
HashMap<String, PanelData> instToPanelDataMapping = new HashMap<String,PanelData>();
3) Consider hashing your strings into ints or longs, since String.equals() is slow compared to (int == int).
4) If the ArrayList will be read-only, perhaps a multithreaded solution may help. The thread that reads lines from the text file can hand out individual lines of data to different 'worker' threads.
1) Create panelArray with the max expected size + 10% when you first create it:
List<PanelData> panelArray = new ArrayList<PanelData>(130); this will prevent dynamic reallocations of the backing array, which will save processing time.
2) What does insert() do? Odds are that is your resource hog.
This problem might best be solved with a different data structure such as a HashMap or SortedSet.
In order to use a HashMap, you would need to define a class that can produce a hash code for the dev and inst string pairs. One solution is something like:
public class DevAndInstPair {
    private String dev, inst;

    @Override
    public int hashCode() {
        return ((dev.hashCode() * 0x490aac18) ^ inst.hashCode());
    }

    @Override
    public boolean equals(Object o) {
        if (o == null || !(o instanceof DevAndInstPair)) {
            return false;
        }
        DevAndInstPair other = (DevAndInstPair) o;
        return (dev.equals(other.dev) && inst.equals(other.inst));
    }
}
You would then use HashMap<DevAndInstPair, PanelData> as the map type.
Alternatively, if you know that a certain character never appears in dev strings, then you can use that character as a delimiter separating the dev value from the inst value. Supposing that this character is a hyphen ('-'), the key values would be dev + '-' + inst and the key type of the map would be String.
To use SortedSet, you would either have PanelData implement Comparable<PanelData> or write a class implementing Comparator<PanelData>. Remember that the compare operation must be consistent with equals.
A SortedSet is somewhat trickier to use than a HashMap, but I personally think that it is the more elegant solution to this problem.
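As a rough sketch of the SortedSet route, here is a Comparator ordering by dev and then inst, consistent with equality on those two fields (an illustration only, not the OP's code):

// Requires java.util.Comparator / java.util.SortedSet / java.util.TreeSet.
Comparator<PanelData> byDevAndInst = new Comparator<PanelData>() {
    public int compare(PanelData a, PanelData b) {
        int c = a.dev.compareTo(b.dev);
        return (c != 0) ? c : a.inst.compareTo(b.inst);
    }
};
SortedSet<PanelData> panels = new TreeSet<PanelData>(byDevAndInst);
panels.add(temps);                      // insertion is O(log n)
boolean known = panels.contains(temps); // lookup is O(log n), ordered by (dev, inst)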
