Spring JDBC Template check for Null results

I am a bit new to Spring and the Spring JDBC Template. I am retrieving some rows from my MySQL database. The query will sometimes produce null results, so I need a null check on the value retrieved through the JDBC template.
This is my code for retrieving data from the database:
try {
    String sqlArrears = "SELECT SUM(total_payable) FROM letter_delaypayments WHERE status = '1' AND customer_order_id = '" + customerOrderIdList.get(j) + "' AND year(row_added_date) = year(curdate()) AND month(row_added_date) = month(curdate())";
    double arrearsAmountForSingleCustomer = getSimpleJdbcTemplate().queryForObject(sqlArrears, Double.class);
} catch (Exception e) {
    System.out.println("EXCEPTION: While taking relevant arrears payments for customer order ids: " + e);
}
In some cases there may be no matching rows in the table, and then this call throws a NullPointerException.
So what I need to know is: can I check for the NULL value the query returns and throw an exception in that case, or do I have to first look up how many rows the query matches and then do the retrieval?
I.e. look up the row count the query matches;
if it is 0, throw an exception;
if it is greater than 0, retrieve the results using the query above.
What should I do here?
Could you please let me know whether there is a way to check for a null result at the same time the query retrieves the results?
Thank you!

I would suggest that such logic belongs in the service layer. Your DAO can return a Double (not double) and you can check for null in the service layer.
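A minimal sketch of that split, assuming DAO/service method and field names (findArrearsForCustomerOrder, getArrearsOrZero, arrearsDao) that are not in the original post:
// DAO method: returns the boxed Double, so a NULL SUM(...) comes back as null
// instead of failing on unboxing. The '?' placeholder also avoids building the
// SQL by string concatenation.
public Double findArrearsForCustomerOrder(String customerOrderId) {
    String sql = "SELECT SUM(total_payable) FROM letter_delaypayments "
               + "WHERE status = '1' AND customer_order_id = ? "
               + "AND year(row_added_date) = year(curdate()) "
               + "AND month(row_added_date) = month(curdate())";
    return getSimpleJdbcTemplate().queryForObject(sql, Double.class, customerOrderId);
}

// Service layer: decide here what a missing value means for the business logic.
public double getArrearsOrZero(String customerOrderId) {
    Double arrears = arrearsDao.findArrearsForCustomerOrder(customerOrderId);
    return arrears == null ? 0.0 : arrears;
}
The service decides whether a null sum means zero, an exception, or something else; the DAO just reports what the database returned.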

Related

How to write dynamic native queries in cosmos DB

I have a situation where I have a couple of fields that I have to pass when calling Cosmos DB, but those fields may not always have values. Some of them might be null when they are passed to the repository method. I am trying to do it like below:
public interface CachedRepository {
    @Query(value = "select * from abc a where (@base = null or a.base = @base) and (@position = null or a.position = @position) and (@active = null or a.active = @active)")
    List<BackList> getBackListOptions(@Param("base") String base, @Param("position") String position, @Param("active") String active);
}
The implementation class:
latestDetails = repositoryA.getBackListOptions(p.getBase(), p.getPosition(), p.getActive().get(0)); // active is a List and we are passing one value
I am trying to send the request without the active parameter (i.e. active is null in the request). The exception I am getting is:
Can not invoke "java.util.List.get(int)" because the return value of request.Pick.getActive() is null
The Cosmos document is:
{
    "_id": "a25777-j",
    "empId": 2436,
    "base": "JH",
    "position": "HG",
    "active": "J"
    .........
}
I am taking reference from this answer:
How to write dynamic sql in Spring Data Azure Cosmos DB
Please let me know where I am going wrong.
The code you have given should work fine if either null or an actual value is passed for any of the parameters. But in your case, for active, it is neither: nothing is passed, not even null.
It seems from the comment in your code that "active" is of type List, and you are extracting a value from that list directly in the method call with p.getActive().get(0). I think your implementation code is failing where p.getActive().get(0) is evaluated, before getBackListOptions is even invoked. The error you are getting is not returned by the Cosmos Spring client, which I believe is doing what it should. You need to handle the values properly before passing them, something like this:
String base = p.getBase(); // set the value here so it is clear where the failure is
String position = p.getPosition(); // set the value here so it is clear where the failure is
String active = p.getActive() == null ? null : p.getActive().get(0); // I think this line was failing before because p.getActive() was null, but this condition handles that case
latestDetails = repositoryA.getBackListOptions(base, position, active); // active is a List and we are passing one value

Safe data update in mySQL / Java

Here I have a dilemma.
Let's imagine that we have a SQL table like this (screenshot of the ticket table omitted).
It could be a problem when two or more users overwrite data in the table.
How should I check that the place hasn't already been taken before updating the data?
I have two options
in SQL query:
UPDATE ticket SET user_user_id = ? WHERE place = ? AND user_user_id is NULL
or in Service layer:
try {
    Ticket ticket = ticketDAO.read(place);
    if (ticket.getUser() == null) {
        ticket.setUser(user);
        ticketDAO.update(ticket);
    } else {
        throw new DAOException("Place has already been taken");
    }
} catch (DAOException e) {
    // handle the "place already taken" case
}
Which way is safer and more commonly used in practice?
Please share your advice.
A possible approach here is to go with the SQL query. After executing the query, check the number of rows modified in the ticketDAO.update method; if 0 rows were modified, throw a DAOException.
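A minimal sketch of that row-count check with Spring's JdbcTemplate (the jdbcTemplate field and the assignPlace method name are assumptions, not from the original post):
// The single UPDATE is atomic: it only succeeds if the place is still free.
public void assignPlace(long userId, String place) throws DAOException {
    String sql = "UPDATE ticket SET user_user_id = ? WHERE place = ? AND user_user_id IS NULL";
    int rowsModified = jdbcTemplate.update(sql, userId, place);
    if (rowsModified == 0) {
        // no row matched, so the place was already taken (or does not exist)
        throw new DAOException("Place has already been taken");
    }
}
Because the row is claimed and checked in one statement, two concurrent callers cannot both succeed; the database lets one UPDATE match and the other sees 0 modified rows.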

Proper way to insert record with unique attribute

I am using Spring, Hibernate and PostgreSQL.
Let's say I have a table looking like this:
CREATE TABLE test
(
    id integer NOT NULL,
    name character(10),
    CONSTRAINT test_unique UNIQUE (id)
)
So whenever I insert a record, the id attribute must be unique.
I would like to know which is the better way to insert a new record (in my Spring Java app):
1) Check whether a record with the given id exists and, if it doesn't, insert the record, something like this:
if (testDao.find(id) == null) {
    Test test = new Test(id, name);
    testDao.create(test);
}
2) Call the create method straight away and see whether it throws a DataAccessException...
Test test = new Test(id, name);
try {
    testDao.create(test);
} catch (DataAccessException e) {
    System.out.println("Error inserting record");
}
I consider the 1st way more appropriate, but it means more processing for the DB. What is your opinion?
Thank you in advance for any advice.
Option (1) is subject to a race condition: a concurrent session could create the record between your checking for it and your inserting it. This window is longer than you might expect, because the record might already have been inserted by another transaction that has not yet committed.
Option (2) is better, but will result in a lot of noise in the PostgreSQL error logs.
The best way is to use PostgreSQL 9.5's INSERT ... ON CONFLICT ... support to do a reliable, race-condition-free insert-if-not-exists operation.
On older versions you can use a loop in plpgsql.
Both those options require use of native queries, of course.
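For illustration, a minimal sketch of the ON CONFLICT route as a native query through JPA/Hibernate (the method shape and the DO NOTHING policy are assumptions, not something the answer prescribes):
// Insert-if-not-exists in a single round trip; requires PostgreSQL 9.5+.
// Named parameters in a native query are a Hibernate extension.
public boolean insertIfAbsent(EntityManager em, int id, String name) {
    int inserted = em.createNativeQuery(
            "INSERT INTO test (id, name) VALUES (:id, :name) ON CONFLICT (id) DO NOTHING")
        .setParameter("id", id)
        .setParameter("name", name)
        .executeUpdate();
    return inserted == 1; // 0 means a row with this id already existed
}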
It depends on the source of your ID. If you generate it yourself, you can assert uniqueness and rely on catching an exception, e.g. by using a java.util.UUID: http://docs.oracle.com/javase/1.5.0/docs/api/java/util/UUID.html
Another way would be to let Postgres generate the ID using the SERIAL data type:
http://www.postgresql.org/docs/8.1/interactive/datatype.html#DATATYPE-SERIAL
If you have to take the ID from an untrusted source, do the prior check.
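And a minimal sketch of the generated-ID approach with standard JPA annotations (the entity shape is illustrative; the original table uses a plain integer id):
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;

// With a database-generated id (backed by a serial/identity column in PostgreSQL),
// uniqueness is guaranteed by the database and no pre-insert check is needed.
@Entity
public class Test {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Integer id;

    private String name;

    // getters and setters omitted
}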

Java: Cached RowSet insertRow fails: SQLException

So I'm trying to understand how to use the RowSet API, specifically CachedRowSet, and I feel like I've been bashing my head against a wall for the last hour or so and could use some help.
I've got some very simple tables set up in a MySQL database that I'm using to test this. I should also add that everything I'm attempting to do with RowSet I've been able to do successfully with ResultSet, which leads me to believe that the issue is with my usage of the RowSet API rather than with the operation I'm attempting itself.
Anyway, I'm trying to insert a new row using CachedRowSet. I'll paste my code here, then add some notes about it below:
CachedRowSet rowSet = null;
try {
    RowSetFactory rsFactory = RowSetProvider.newFactory();
    rowSet = rsFactory.createCachedRowSet();
    rowSet.setUrl("jdbc:mysql://localhost:3306/van1");
    rowSet.setUsername("####");
    rowSet.setPassword("####");
    rowSet.setKeyColumns(new int[]{1});
} catch (SQLException e) {
    e.printStackTrace();
}
String query = "select * from phone";
try {
    rowSet.setCommand(query);
    rowSet.execute();
    printTable(rowSet);
    rowSet.moveToInsertRow();
    rowSet.setInt(1, 4);
    rowSet.setString(2, "Mobile");
    rowSet.setString(3, "1");
    rowSet.setString(4, "732");
    rowSet.setString(5, "555");
    rowSet.setString(6, "1234");
    rowSet.setString(7, "");
    rowSet.insertRow();
    rowSet.moveToCurrentRow();
    rowSet.acceptChanges();
    printTable(rowSet);
} catch (SQLException e) {
    e.printStackTrace();
}
So, as you can see, I'm trying to update a table of phone numbers with a new phone number. Here are the details:
1) All the phone number fields are datatype char, so that leading zeroes are not lost.
2) I'm using the default CachedRowSet implementation provided by the JDBC API, as opposed to anything specific from the MySQL driver. Not sure if that matters or not, but I'm putting it here just in case. Also, I didn't see an option to import CachedRowSet from the driver library anyway.
3) I'm setting a value for every column in the table, because the RowSet API doesn't allow for rows to be inserted without a value for every column.
4) I've tried the operation using both the setter methods and the update methods. Same result either way.
5) As far as I can tell, I'm on the insert row when executing the insertRow() method. I also return to the current row before invoking acceptChanges(), but since my code never gets that far I can't really comment on that part.
6) The exception is a SQLException (no chained exception within it) thrown on the invocation of the insertRow() method. Here is the stack trace:
java.sql.SQLException: Failed on insert row
at com.sun.rowset.CachedRowSetImpl.insertRow(Unknown Source)
at firsttry.RowSetPractice.rowSetTest(RowSetPractice.java:87)
at firsttry.RowSetPractice.main(RowSetPractice.java:20)
So, I'm out of ideas. Any help would be appreciated. I've searched every thread on this site that I could find; all I see is stuff about it failing on the acceptChanges() method rather than on insertRow().

DynamoDB's withLimit clause with DynamoDBMapper.query

I am using DynamoDBMapper for a class, let's say "User" (username being the primary key), which has a field on it called "Status". It is a Hash+Range key table, and every time a user's status changes (changes are extremely infrequent), we add a new entry to the table along with the timestamp (which is the range key). To fetch the current status, this is what I am doing:
DynamoDBQueryExpression expr =
    new DynamoDBQueryExpression(new AttributeValue().withS(userName))
        .withScanIndexForward(false).withLimit(1);
PaginatedQueryList<User> result =
    this.getMapper().query(User.class, expr);
if (result == null || result.size() == 0) {
    return null;
}
for (final User user : result) {
    System.out.println(user.getStatus());
}
This, for some reason, is printing all the statuses the user has had until now. I have set scanIndexForward to false so that the results are in descending order, and I set a limit of 1. I am expecting this to return only the latest entry in the table for that username.
However, when I look at the wire logs, I see a huge number of entries being returned, far more than 1. For now, I am using:
final String currentStatus = result.get(0).getStatus();
What I am trying to understand here is: what is the whole point of the withLimit clause in this case, or am I doing something wrong?
In March 2013, a user on the AWS forums complained about the same problem, and a representative from Amazon pointed him to the queryPage method.
It seems the limit is not a limit on the total number of elements returned, but rather on the number of elements retrieved in a single API call (one page), and queryPage might help.
You could also look into the pagination loading strategy configuration.
Also, you can always open a GitHub issue for the team.
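A minimal sketch of the queryPage route (queryPage and QueryResultPage are part of DynamoDBMapper; the hash-key constructor on User is an assumption):
// queryPage issues a single request, so withLimit(1) caps the result at one item
// instead of just sizing the pages of a lazily loaded PaginatedQueryList.
DynamoDBQueryExpression<User> expr = new DynamoDBQueryExpression<User>()
        .withHashKeyValues(new User(userName)) // object carrying the hash key
        .withScanIndexForward(false)           // newest range key (timestamp) first
        .withLimit(1);

QueryResultPage<User> page = this.getMapper().queryPage(User.class, expr);
List<User> items = page.getResults();
String currentStatus = items.isEmpty() ? null : items.get(0).getStatus();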
