I have a class that gets run every time an action happens; for example, when I log in, the User class is run. This class is passed a ResultSet containing that particular user's information.
Now what I'm trying to accomplish is to take the result and split it into "class variables" (I believe they're called fields). I've tried the following:
public User(ResultSet resultSet) throws SQLException {
this.username = resultSet.getString("username");
this.firstname = resultSet.getString("firstname");
// etc etc.
}
and that works, but since I have about two dozen elements in there, this would become a long list.
I also thought of a method where I'd loop through the results and then check if each one is, for example, a string. If it is, assign it to a variable whose name corresponds to the key. But that would only cut my problem in half, since I'd still need to declare all those variables.
I was wondering if there's a faster, perhaps more elegant way to tackle something like this.
Thanks.
You create a class for each type of information you retrieve from the database. You created a User class for user information. There's no shortcut where your database tables create Java objects, unless you use an ORM like Hibernate.
I have recently been working on a simple tool that does this. It accepts a Map&lt;String, Object&gt; and a class, and returns an instance of this class. Have a look at my repository for examples. If you use Maven, you can just add a dependency:
<dependency>
<groupId>uk.co.jpawlak</groupId>
<artifactId>map-to-object-converter</artifactId>
<version>1.1</version>
</dependency>
This will of course require you to convert the ResultSet into a Map first; however, this is pretty simple, and you will only have to write that code once.
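A minimal sketch of that ResultSet-to-Map step might look like the following (the class name is mine, and it assumes you have already positioned the ResultSet on the row you want, e.g. by calling next()):

```java
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.util.LinkedHashMap;
import java.util.Map;

public final class ResultSetToMap {

    // Converts the current row of a ResultSet into a column-label -> value map.
    public static Map<String, Object> currentRow(ResultSet rs) throws SQLException {
        ResultSetMetaData meta = rs.getMetaData();
        Map<String, Object> row = new LinkedHashMap<>();
        // JDBC column indexes are 1-based
        for (int i = 1; i <= meta.getColumnCount(); i++) {
            row.put(meta.getColumnLabel(i), rs.getObject(i));
        }
        return row;
    }
}
```

The resulting map can then be handed straight to the converter.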
There is nothing out of the box to help you accomplish something like that.
There are a couple of ways you could do it:
Using ORM would be the best option - if you are willing to spend the time and effort to configure and set up the framework and update your objects as necessary.
Using Java reflection along with ResultSetMetaData to map the ResultSet directly to objects would be another option. Something along these lines: http://oprsteny.com/?p=900
If you are lucky enough to have the object's field names exactly match the SQL column names, as in your example, you could write something like this:
import java.lang.reflect.Field;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
...
private void getUserDetails(ResultSet resultSet) throws SQLException, NoSuchFieldException,
        SecurityException, IllegalArgumentException, IllegalAccessException {
    ResultSetMetaData rsmd = resultSet.getMetaData();
    while (resultSet.next()) {
        // JDBC column indexes are 1-based
        for (int i = 1; i <= rsmd.getColumnCount(); i++) {
            String colName = rsmd.getColumnName(i);
            // *this* being the user object
            Field field = this.getClass().getDeclaredField(colName);
            field.setAccessible(true); // needed if the field is private
            field.set(this, resultSet.getObject(colName));
        }
    }
}
I need to store the values of an ArrayList that changes frequently and persist those values in case of an application crash. The application I'm working on uses a Redis database already, so it seemed like a good choice.
Below, I have boiled down a minimal example of a Spring Boot controller that connects to a localhost instance of Redis and uses it to store serialized objects. The value can be modified from a controller endpoint, or through a scheduled job that runs every 5 seconds. If you do a series of GET requests to localhost:8080/test, you'll see the scheduled job remove items from the ArrayList one at a time.
Is it possible for a value to get missed, or for something not thread-safe to happen here? I'm concerned a scheduled job might conflict with changes made from the controller endpoint if they try to modify the object or set the Redis value at the same time, especially if the network slows down, but I'm unsure if that would actually be a problem. Everything seems to work fine as it runs on my localhost, but I remain skeptical.
I read this article, among others, on thread safety, but it didn't answer if any of those things are even necessary for this particular situation. I'm also aware that Redis read and writes are atomic, but I thought, what if the commands get sent to Redis in the wrong order?
I was thinking that if this implementation has problems, then Lombok's @Synchronized annotation might be useful for an abstracted-out method for IO. I appreciate any input and time spent.
import com.google.gson.Gson;
import com.google.gson.reflect.TypeToken;
import io.lettuce.core.RedisClient;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.api.sync.RedisCommands;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import java.lang.reflect.Type;
import java.util.ArrayList;
import java.util.Collection;
@RestController
public class Example {
RedisClient redisClient = RedisClient.create("redis://localhost:6379/");
StatefulRedisConnection<String, String> connection = redisClient.connect();
RedisCommands<String, String> redis = connection.sync();
Gson gson = new Gson();
ArrayList<String> someList = new ArrayList<>();
public Example() {
if(redis.exists("somekey") == 1){
Type collectionType = new TypeToken<Collection<String>>(){}.getType();
someList = new ArrayList<>(gson.fromJson(redis.get("somekey"), collectionType));
}
}
@GetMapping("/test")
public void addToSomeList(){
someList.add("sample string");
redis.set("somekey",gson.toJson(someList));
System.out.println("New item added. " + someList.size() + " items in array");
}
@Scheduled(fixedRate = 5000)
public void popFromSomeList() {
if (!someList.isEmpty()) {
someList.remove(0);
redis.set("somekey", gson.toJson(someList));
System.out.println("Item removed. " + someList.size() + " items in array");
}
}
}
I'm using Java 1.8.
Most obviously, someList isn't thread-safe, so even if you ignore Redis, the code is broken.
Let's say we make it thread-safe with Collections.synchronizedList(new ArrayList<>()). Then add and pop still aren't atomic, although that might not matter too much for the functionality. You could end up with, for example, the following kind of execution:
someList.add("sample string");
someList.remove(0);
redis.set("somekey", gson.toJson(someList));
redis.set("somekey", gson.toJson(someList));
and the messages could be confusing, as it could show "New item added. 4 items in array", "New item added. 4 items in array", "Item removed. 4 items in array", due to the add/remove happening before the prints.
So for proper functionality for the given code (or similar), you would have to synchronize the methods or use an explicit shared lock. There is a possibility of sending the commands in the wrong order, but in the given example (provided the list is made thread-safe) there's no real danger, as it would only result in a duplicated set of the same data.
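A minimal sketch of the synchronized variant, with the Redis write stubbed out behind a persist() hook (class and method names are illustrative, following the shapes in the question):

```java
import java.util.ArrayList;
import java.util.List;

public class SynchronizedListHolder {

    private final Object lock = new Object();
    private final List<String> someList = new ArrayList<>();

    // add + persist happen atomically with respect to pop
    public void addToSomeList(String item) {
        synchronized (lock) {
            someList.add(item);
            persist();
        }
    }

    // pop + persist happen under the same lock, so no interleaving with add
    public void popFromSomeList() {
        synchronized (lock) {
            if (!someList.isEmpty()) {
                someList.remove(0);
                persist();
            }
        }
    }

    public int size() {
        synchronized (lock) {
            return someList.size();
        }
    }

    // Stub: in the real code this would be redis.set("somekey", gson.toJson(someList))
    protected void persist() {
    }
}
```

Because the list mutation and the (stubbed) Redis write sit inside the same synchronized block, the serialized snapshots reach Redis in the same order the mutations happened.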
I'm trying to fix some of the issues identified by SonarQube, and I've got several that are all essentially the same as the example shown here. It's telling me I should close the object, but I can't really do that if the object is what the method returns to the caller, can I?
package my.qe.codereview.myservice.data;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.Properties;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jdbc.core.PreparedStatementCreator;
import org.springframework.stereotype.Component;
import my.qe.review.integration.to.StatusHistoryInsertRequest;
@Component
public class InsertStatusHistoryPSCBuilder {
@Autowired
private Properties queryProps;
public PreparedStatementCreator build(final StatusHistoryInsertRequest request) {
return new PreparedStatementCreator() {
@Override
public PreparedStatement createPreparedStatement(Connection con) throws SQLException {
PreparedStatement pst = con.prepareStatement(queryProps.getProperty("insertStatusHistory"));
pst.setObject(1, request.getReviewStatusHistoryId());
pst.setObject(2, request.getReviewHistoryId());
return pst;
}
};
}
}
I googled on the issue, and the how-to-fix examples I could find all involved an object that only exists within the scope of the method in question. None addressed the situation where the object is returned by the method as is the case here. I have no control over what the caller does with the object after I return.
What can I do to address this issue? I assume you can't close an object and still return it (is that correct?). Or is the real problem that the method includes a call to another method that can throw an exception, and I need to catch, close, and rethrow?
Update: Using try-with-resources to create the Closeable object didn't work, as the object still gets closed under the covers before the method returns, which results in the caller throwing an exception because the returned object has been closed (which answers the question in the above paragraph). At this time I don't know of a way to return a Closeable object without SonarQube flagging it as a blocker.
The suggestion was made to mark it as a false positive. I'm not sure if that's an option in my work environment.
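If the false-positive route isn't available, one option I believe SonarJava supports is suppressing the rule at the method with @SuppressWarnings and the rule key (java:S2095 for the open-resource rule on recent versions, squid:S2095 on older ones; check the exact key your server reports). A sketch, with the class, SQL, and parameter made up:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class StatusHistoryStatementBuilder {

    private final String sql;

    public StatusHistoryStatementBuilder(String sql) {
        this.sql = sql;
    }

    // The statement is intentionally returned open: the caller (e.g. Spring's
    // JdbcTemplate) takes ownership and closes it. The rule key below is an
    // assumption; use whatever key your SonarQube server actually reports.
    @SuppressWarnings("java:S2095")
    public PreparedStatement createPreparedStatement(Connection con, Object arg) throws SQLException {
        PreparedStatement pst = con.prepareStatement(sql);
        pst.setObject(1, arg);
        return pst;
    }
}
```

This keeps the ownership-transfer decision documented next to the code that makes it, rather than buried in server-side issue state.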
I'm building a simple RESTful app (using Spark Java, although this question is more general).
The Handler below is called when the /users index route is requested. It just queries for all users and renders an HTML template (using Velocity, but again this question is more general).
package com.example.api;
import java.util.*;
import spark.Request;
import spark.ModelAndView;
import spark.template.velocity.VelocityTemplateEngine;
public class UsersIndexHandler {
private Map<String, Object> locals;
private UserDao userDao;
public UsersIndexHandler(UserDao userDao) {
this.locals = new HashMap<>();
this.userDao = userDao;
}
public String execute(Request req, boolean formatAsHtml) {
// Set locals so they are available in the view
locals.put("users", userDao.all());
// Render the view
String body = new VelocityTemplateEngine().render(new ModelAndView(locals, "views/users/index.vm"));
return body;
}
}
I'm trying to write a basic JUnit test for this scenario. I could test the contents of the String that's returned, but I have two issues with that:
The HTML for the page could be quite long, so it doesn't feel practical to do a direct comparison of the actual result with an expected test string
The content of the view should be tested in a view test.
What's the "right" (generally accepted) way to test this? Is there a way to set expectations on VelocityTemplateEngine() so we know that render() is called correctly AND with correct arguments?
Or should I focus on just testing the locals Map object, even though I'd have to expose that to access it during tests?
Thanks!
I'm in line with the views Tom mentioned in the comments, but if you still want to follow this pattern, PowerMockito (and plain PowerMock as well) has a way to do this. I'll just post an example here:
Employee mockEmployeeObject = mock(Employee.class);
// Return the mock whenever the code under test calls "new Employee(...)"
PowerMockito.whenNew(Employee.class)
    .withAnyArguments().thenReturn(mockEmployeeObject);
// ... exercise the code under test here ...
verify(mockEmployeeObject, times(1)).someMethod();
PowerMockito.whenNew(..) lets us return our mocked object whenever a new instance of that class is created. Since you want to verify the method parameters, it works well for you. You can add whatever verifications you need; I just posted an example.
Hope this helps!
Good luck!
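An alternative that avoids PowerMock entirely is to hand the handler its renderer through the constructor, so a plain stub can capture what execute() passes along. The Renderer interface below is my own illustrative seam (in the real code it would wrap VelocityTemplateEngine.render(...)); it is not part of Spark:

```java
import java.util.Map;

public class TestableUsersIndexHandler {

    // Illustrative seam: a real implementation would delegate to
    // new VelocityTemplateEngine().render(new ModelAndView(model, templatePath))
    public interface Renderer {
        String render(Map<String, Object> model, String templatePath);
    }

    private final Renderer renderer;

    public TestableUsersIndexHandler(Renderer renderer) {
        this.renderer = renderer;
    }

    public String execute(Map<String, Object> locals) {
        // A test can now assert on both the model and the template path
        return renderer.render(locals, "views/users/index.vm");
    }
}
```

A test then supplies a lambda as the Renderer, asserts the template path, and inspects the locals map, all without mocking constructors.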
We're currently using a PostgreSQL database and OrmLite. We now have a use case for using a Postgres hstore, but can't find any way of accessing that table through OrmLite. I'd prefer to avoid opening a separate database connection just to select from and insert into that one table, but I'm not seeing any other options.
At the very least I'd like a handle to the existing connection OrmLite is using so I can reuse it to build a prepared statement, but I haven't found a way to get a java.sql.Connection starting from an OrmLite ConnectionSource.
I see that OrmLite has a JdbcCompiledStatement, but that's just a wrapper around a PreparedStatement and requires the PreparedStatement to be passed in to the constructor. (Not sure what the use case for that is.)
I've tried to use DatabaseConnection.compileStatement(...), but that requires knowledge of the field types being used and OrmLite doesn't seem to know what an hstore is.
I've tried to use updateRaw(), but that function only exists on an OrmLite dao that I don't have because the table I would link the dao to has a field type OrmLite doesn't recognize. Is there some way to get a generic dao to issue raw queries?
I get that hstores are database specific and probably won't be supported by OrmLite, but I'd really like to find a way to transfer data to and from the database using unsupported fields instead of just unsupported queries.
It sounds like ConnectionSource may actually be implemented by JdbcConnectionSource, and will likely return a JdbcDatabaseConnection. That object has a getInternalConnection method that looks like what you are looking for.
@Gray I submitted an ORMLite patch on SourceForge that enables the "Other" data type. The patch ID is 3566779. With this patch, it's possible to support hstores.
Users will need to add the PGHStore class to their projects. The code for this class is here.
Users will also need to add a persister class as shown here:
package com.mydomain.db.persister;
import com.mydomain.db.PGHStore;
import com.j256.ormlite.field.FieldType;
import com.j256.ormlite.field.SqlType;
import com.j256.ormlite.field.types.BaseDataType;
import com.j256.ormlite.support.DatabaseResults;
import java.sql.SQLException;
public class PGHStorePersister extends BaseDataType {
private static final PGHStorePersister singleton = new PGHStorePersister();
public static PGHStorePersister getSingleton() {
return singleton;
}
protected PGHStorePersister() {
super(SqlType.OTHER, new Class<?>[] { PGHStore.class });
}
protected PGHStorePersister(SqlType sqlType, Class<?>[] classes) {
super(sqlType, classes);
}
@Override
public Object parseDefaultString(FieldType ft, String string) throws SQLException {
return new PGHStore(string);
}
@Override
public Object resultToSqlArg(FieldType fieldType, DatabaseResults results, int columnPos) throws SQLException {
return results.getString(columnPos);
}
@Override
public Object sqlArgToJava(FieldType fieldType, Object sqlArg, int columnPos) throws SQLException {
return new PGHStore((String) sqlArg);
}
@Override
public boolean isAppropriateId() {
return false;
}
}
Lastly, users will need to annotate their data to use the persister.
@DatabaseField(columnName = "myData", persisterClass = PGHStorePersister.class)
At the very least I'd like a handle to the existing connection OrmLite is using so I can reuse it to build a prepared statement...
OK, that's pretty easy. As @jsight mentioned, the ORMLite ConnectionSource for JDBC is JdbcConnectionSource. When you get a connection from that class using connectionSource.getReadOnlyConnection(), you will get a DatabaseConnection that is really a JdbcDatabaseConnection and can be cast to it. There is a JdbcDatabaseConnection.getInternalConnection() method which returns the associated java.sql.Connection.
I've tried to use updateRaw(), but that function only exists on an OrmLite dao that I don't have ...
You really can use any DAO class to perform a raw operation on any table. It is convenient to think of it as an unstructured update to that DAO object's table, but if you have any DAO, you can perform a raw update on any other table.
find a way to transfer data to and from the database using unsupported fields instead of just unsupported queries
If you are using unsupported fields, then you are going to have to do it as a raw statement -- either SELECT or UPDATE. If you edit your post to show the raw statement you've tried, I can help more specifically.
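For reference, raw hstore statements of the kind that could go through such a SELECT or UPDATE look something like this (the table and column names are made up; 'k => v' is Postgres's hstore literal syntax):

```sql
-- hypothetical table with an hstore column named attrs
UPDATE users SET attrs = 'color => red, size => large'::hstore WHERE id = 1;

-- reading a single key back out
SELECT attrs -> 'color' FROM users WHERE id = 1;
```

The ::hstore cast keeps the value typed on the database side even though the driver only ever sees a string.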
(I am using MyBatis v3, Java SE v6, Tomcat v6 and Spring v3 all over Teradata v12.)
One of the technical requirements for my current project is to use the query banding feature in Teradata. This is done by running a statement like the following whenever required:
SET QUERY_BAND='someKey=someValue;' FOR TRANSACTION;
I want to have a query band for all of my calls. However, I am unsure how to add this functionality in a clean and reusable manner without having to add it to each of my <select> statements in my mapper file like the following:
<sql id="queryBand">
SET QUERY_BAND='k=v;' FOR TRANSACTION;
</sql>
<select ...>
<include refid="queryBand"/>
... some SQL performing a SELECT
</select>
My issues with the above are:
1) The format of the query band is identical across all my mapper XML files with the exception of k & v, which I would want to customise on a per <select> (etc.) basis. I'm not sure how I can do this customisation without having to pass in the k and v values, which muddies my mapper interface.
2) There is duplication in the above code that makes me uneasy. Developers have to remember to include the queryBand SQL, which someone will forget at some stage (Murphy's Law).
Can someone point me in the direction of the solution to implementing the query banding in a cleaner way?
The solution is to use a MyBatis Interceptor plug-in. For example:
import java.sql.Connection;
import java.sql.Statement;
import java.util.Properties;
import org.apache.ibatis.executor.statement.StatementHandler;
import org.apache.ibatis.plugin.Interceptor;
import org.apache.ibatis.plugin.Intercepts;
import org.apache.ibatis.plugin.Invocation;
import org.apache.ibatis.plugin.Plugin;
import org.apache.ibatis.plugin.Signature;
@Intercepts({@Signature(
    type = StatementHandler.class,
    method = "prepare",
    args = { Connection.class })})
public class StatementInterceptor implements Interceptor {
@Override
public Object intercept(Invocation invocation) throws Throwable {
Connection conn = (Connection) invocation.getArgs()[0];
Statement stmt = conn.createStatement();
try {
    stmt.executeUpdate("SET QUERY_BAND = 'k=v;' FOR TRANSACTION;");
} finally {
    stmt.close(); // don't leak the statement (Java 6, so no try-with-resources)
}
return invocation.proceed();
}
@Override
public Object plugin(Object target) {
return Plugin.wrap(target, this);
}
@Override
public void setProperties(Properties properties) {}
}
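One thing the snippet doesn't show: the interceptor also has to be registered with MyBatis. With an XML configuration that's a &lt;plugins&gt; entry in mybatis-config.xml (the package name here is illustrative):

```xml
<plugins>
    <plugin interceptor="com.example.interceptor.StatementInterceptor"/>
</plugins>
```

Once registered, the interceptor fires on every statement prepare, so individual mappers no longer need the &lt;include&gt;.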
Let's say the query band should be prepended to every SQL string. I would try to find the method inside MyBatis/Spring that produces the final SQL. Using Spring AOP, that method could be intercepted and its result combined with the query band before being returned for further processing.
Finding a method to intercept can be hard, but not impossible. Download all the dependency sources and link them properly (with Maven this should be trivial; in Eclipse it's not that hard either), run the code in debug mode, and look for an appropriate method.