simple jdbc wrapper - java

To implement data access code in our application we need some framework to wrap around JDBC (an ORM is not our choice, because of scalability).
The coolest framework I have worked with is Spring JDBC. However, my company's policy is to avoid external dependencies, especially Spring, J2EE, etc.
So we are thinking about writing our own hand-made JDBC framework, with functionality similar to Spring JDBC: row mapping, error handling, support for Java 5 features, but without transaction support.
Does anyone have experience writing such a JDBC wrapper framework?
If anyone has experience using other JDBC wrapper frameworks, please share it.
Thanks in advance.

We wrote our own wrapper. This topic is worthy of a paper but I doubt I'll ever have time to write it, so here are some key points:
We embraced SQL and made no attempt to hide it. The only tweak was to add support for named parameters. Parameters are important because we discourage the use of on-the-fly SQL (for security reasons) and we always use PreparedStatements.
For connection management, we used Apache DBCP. This was convenient at the time, but it's unclear how much of it is needed with modern JDBC implementations (the documentation on this is lacking). DBCP also pools PreparedStatements.
We didn't bother with row mapping. Instead (for queries) we used something similar to Apache DbUtils' ResultSetHandler, which allows you to "feed" the result set into a method which can then dump the information wherever you'd like. This is more flexible, and in fact it wouldn't be hard to implement a ResultSetHandler for row mapping. For inserts/updates we created a generic record class (basically a hashmap with some extra bells and whistles). The biggest problem with row mapping (for us) is that you're stuck as soon as you do an "interesting" query: you may have fields that map to different classes; you may have a hierarchical class structure but a flat result set; or the mapping may be complex and data dependent.
We built in error logging. For exception handling: on a query we trap and log, but on an update we trap, log, and rethrow an unchecked exception.
We provided transaction support using a wrapper approach: the caller provides the code that performs the transaction, and we make sure that the transaction is properly managed, with no chance of forgetting to finish it and with rollback and error handling built in (see the sketch after this list).
Later on, we added a very simplistic relationship scheme that allows a single update/insert to apply to a record and all its dependencies. To keep things simple, we did not use this for queries, and we specifically decided not to support it for deletes because it is more reliable to use cascaded deletes.
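Not our actual code, but a minimal sketch of the transaction wrapper idea described above, assuming a DataSource-based setup; the interface and class names are illustrative:

import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;

// The caller hands in the transactional work; the wrapper guarantees commit/rollback.
interface TransactionCallback<T> {
    T doInTransaction(Connection connection) throws SQLException;
}

public class TransactionWrapper {

    private final DataSource dataSource;

    public TransactionWrapper(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public <T> T inTransaction(TransactionCallback<T> work) throws SQLException {
        try (Connection con = dataSource.getConnection()) {
            boolean oldAutoCommit = con.getAutoCommit();
            con.setAutoCommit(false);
            try {
                T result = work.doInTransaction(con);
                con.commit();
                return result;
            } catch (SQLException | RuntimeException e) {
                con.rollback(); // rollback and error handling built in; the caller cannot forget to finish
                throw e;
            } finally {
                con.setAutoCommit(oldAutoCommit);
            }
        }
    }
}

A caller then passes its transactional code as a TransactionCallback (an anonymous class pre-Java 8, a lambda later) and never touches commit/rollback directly.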
This wrapper has been successfully used in two projects to date. It is, of course, lightweight, but these days everyone says their code is lightweight. More importantly, it increases programmer productivity, decreases the number of bugs (and makes problems easier to track down), and it's relatively easy to trace through if need be because we don't believe in adding lots of layers just to provide beautiful architecture.

Spring-JDBC is fantastic. Consider that for an open source project like Spring the downside of an external dependency is minimized. You can adopt the most stable version of Spring that satisfies your JDBC abstraction requirements, and you know that you'll always be able to modify the source code yourselves if you ever run into an issue -- without depending on an external party. You can also examine the implementation for any security concerns that your organization might have with code written by an external party.

The one I prefer: Dalesbred. It's MIT licensed.
A simple example of getting all rows for a custom class (Department).
List<Department> departments = db.findAll(Department.class,
        "select id, name from department");
when the custom class is defined as:
public final class Department {
    private final int id;
    private final String name;

    public Department(int id, String name) {
        this.id = id;
        this.name = name;
    }
}
Disclaimer: it's by a company I work for.

This sounds like a very short-sighted decision. Consider the cost of developing/maintaining such a framework, especially when you can get it, and its source code, for free. Not only do you not have to do the development yourself, you can modify it at will if need be.
That being said, what you really need to duplicate is the notion of JdbcTemplate and its callbacks (PreparedStatementCreator, PreparedStatementCallback), as well as RowMapper/RowCallbackHandler. It shouldn't be too complicated to write something like this (especially considering you don't have to do transaction management).
However, as I've said, why write it when you can get it for free and modify the source code as you see fit?
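For reference, a rough sketch of such a home-grown JdbcTemplate/RowMapper pair, not Spring's actual code; the names and signatures here are my own assumptions:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;
import javax.sql.DataSource;

// Per-row callback, analogous to Spring's RowMapper.
interface RowMapper<T> {
    T mapRow(ResultSet rs, int rowNum) throws SQLException;
}

public class HomeGrownJdbcTemplate {

    private final DataSource dataSource;

    public HomeGrownJdbcTemplate(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // Binds positional parameters, runs the query and maps every row.
    public <T> List<T> query(String sql, RowMapper<T> mapper, Object... params) throws SQLException {
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement(sql)) {
            for (int i = 0; i < params.length; i++) {
                ps.setObject(i + 1, params[i]);
            }
            try (ResultSet rs = ps.executeQuery()) {
                List<T> result = new ArrayList<T>();
                int rowNum = 0;
                while (rs.next()) {
                    result.add(mapper.mapRow(rs, rowNum++));
                }
                return result;
            }
        }
    }
}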

Try JdbcSession from jcabi-jdbc. It's as simple as JDBC should be, for example:
String name = new JdbcSession(source)
    .sql("SELECT name FROM foo WHERE id = ?")
    .set(123)
    .select(new SingleOutcome<String>(String.class));
That's it.

Try my library as an alternative:
<dependency>
    <groupId>com.github.buckelieg</groupId>
    <artifactId>jdbc-fn</artifactId>
    <version>0.2</version>
</dependency>
More info here

Jedoo
There is a wrapper class called Jedoo out there that uses database connection pooling and a singleton pattern to access it as a shared variable. It has plenty of functions to run queries fast.
Usage
To use it you should add it to your project and import its singleton in a Java class:
import static com.pwwiur.util.database.Jedoo.database;
And using it is pretty easy as well:
if (database.count("users") < 100) {
    long id = database.insert("users", new Object[][]{
            {"name", "Amir"},
            {"username", "amirfo"}
    });
    database.setString("users", "name", "Amir Forsati", id);

    try (ResultSetHandler rsh = database.all("users")) {
        while (rsh.next()) {
            System.out.println("User ID: " + rsh.getLong("id"));
            System.out.println("User Name: " + rsh.getString("name"));
        }
    }
}
There are also some useful functions that you can find in the documentation linked above.

mJDBC: https://mjdbc.github.io/
I have used it for years and found it very useful (I'm the author of this library).
It is inspired by the JDBI library but has no dependencies, adds transaction support, provides performance counters, and allows you to switch easily to the lowest possible SQL level in Java (the plain old JDBC API) if you really need it.

Related

Details on how front end interacts with back end?

Consider this snippet of JavaScript inside a JSP page:
function clearCart() {
    cartForm.action = "cart_clear?method=clear";
    cartForm.submit();
}
Clearly it's trying to call a method on the back end to clear the cart. My question is: how does the server (Tomcat, most likely; correct me if I'm wrong) that hosts the site containing this snippet know how and where to find this method, and how does it "index" it with string values, etc.? In my Java file, the clear method is defined as:
public String clear() {
    this.request = ServletActionContext.getRequest();
    this.session = this.request.getSession();
    logger.info("Cart is clearing...");
    Cart cart = (Cart) this.session.getAttribute(Constants.SESSION_CART);
    cart.clear();
    for (Long id : cart.getCartItems().keySet()) {
        Item it = cart.getCartItems().get(id);
        System.out.println(it.getProduct().getName() + " " + it.getNumber());
    }
    return "cart";
}
By which module/mechanism does Tomcat know how to locate precisely that method? By copying online tutorials and textbooks I know how to write this code, but I want to get a bit closer to the bottom of it all, or at least the basics.
Here's my educated (or not so educated) guess: since I'm basing my entire project on Struts, Hibernate and Spring, I've inadvertently/invariably configured the build path and dependencies in such a way that when I hit the "compile" button, all the "associating" and "navigating" is done by these frameworks. In other words, as long as I correctly configure the project and get Spring etc. "involved" (sorry, I can't think of the technical jargon that's on the tip of my tongue), and as long as I inherit a class or implement an interface, the compiler will expose these Java methods to the JSP script when compiling; it's partly work done by the compiler, partly work done by the people who wrote the Spring framework. Or, using a really bad analogy, consider a C++ project where you use a third-party library that came in compiled binary form: all you have to do is include the right header (.h/.hpp file) and call the right function, and the function is resolved at run time - note that this really is a bad analogy.
Is that how it is done, or am I overthinking it? For example, is it all handled by Tomcat?
Sorry for all the verbosity. Things get lengthy when you need to express slightly more complicated and nuanced ideas. Also, please go deep and low-level, but not too deep; by that I mean you are free to lecture on how Hibernate and Spring etc. work and how their code is run on a server, but try not to touch the Java virtual machine, bytecode, C++ pointers, etc., unless of course it is helpful.
Thanks in advance!
Thanks in advance!
Tomcat doesn't do much except obey the Servlet specification. Spring tells Tomcat that all requests to http://myserver.com/ should be directed to Spring's DispatcherServlet, which is the main entry point.
Then it's up to Spring to further direct those requests to the code that handles them. There are different strategies for mapping a specific URL to the code that handles the request, but it's not set in stone and you could easily create your own strategy that would allow you to use whatever kind of URLs you want. For a simple (and stupid) example you could have http://myserver.com/1 that would execute the first method in a single massive handler class, http://myserver.com/2 would execute the second, etc.
The example is with Spring, but it's the same general idea with other frameworks. You have a mapper that maps a URL to the handler code.
These days it's all hidden under layers of abstraction so you don't have to care about the specifics of the mapping and can develop quickly and concentrate on the business code.
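To make the mapping idea concrete, here is a deliberately simplified, hypothetical front controller (not actual Struts or Spring code) that routes a URL like cart_clear?method=clear to a handler method via reflection, which is essentially what these frameworks do behind the scenes:

import java.io.IOException;
import java.lang.reflect.Method;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical front controller, mapped in web.xml to URLs such as /cart_clear.
public class TinyDispatcherServlet extends HttpServlet {

    // Stand-in for the action class from the question; a real framework finds it via configuration.
    public static class CartHandler {
        public String clear() {
            return "cart"; // would clear the session cart and return a view name
        }
    }

    private final CartHandler cartHandler = new CartHandler();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        String methodName = req.getParameter("method"); // e.g. "clear" from cart_clear?method=clear
        try {
            // The "index" the question asks about is just a lookup: name -> method, done here via reflection.
            Method handlerMethod = cartHandler.getClass().getMethod(methodName);
            String viewName = (String) handlerMethod.invoke(cartHandler);
            resp.getWriter().write("Would now forward to the view named: " + viewName);
        } catch (ReflectiveOperationException e) {
            resp.sendError(HttpServletResponse.SC_NOT_FOUND, "No handler method: " + methodName);
        }
    }
}

A real framework replaces the hard-coded handler and the response writing with configuration-driven lookup and view resolution, but the URL-to-method dispatch is the same idea.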

How to recover from legacy stuff in large system step by step?

Our team has been given a legacy system for further maintenance and development.
As this is true "legacy" stuff, there is a really, really small number of tests, and most of them are crap. This is an app with a web interface, so there are both container-managed components and plain Java classes (not tied to any framework, etc.) which are "new-ed" here and there, wherever you like.
As we work with this system, every time we touch a given part we try to break it into smaller pieces, discover and refactor dependencies, and push dependencies in instead of pulling them from the code.
My question is how to work with such a system, to break dependencies, make the code more testable, etc. When to stop, and how to deal with this?
Let me show you an example:
public class BillingSettingsAction {
    private TelSystemConfigurator configurator;
    private OperatorIdDao dao;

    public BillingSettingsAction(String zoneId) {
        configurator = TelSystemConfiguratorFactory.instance().getConfigurator(zoneId);
        dao = IdDaoFactory.getDao();
        ...
    }

    // methods using configurator and dao
}
This constructor definitely does too much. Also, testing this for further refactoring requires doing magic with PowerMock, etc. What I'd do is change it to:
public BillingSettingsAction(String zone, TelSystemConfigurator configurator, OperatorIdDao dao) {
    this.configurator = configurator;
    this.dao = dao;
    this.zone = zone;
}
or provide a constructor setting only the zone, with setters for the dependencies.
The problem I see is that if I provide dependencies in the constructor, I still need to provide them somewhere. So it is just moving the problem one level up. I know I can create a factory for wiring all the dependencies, but touching different parts of the app will end up producing a different factory for each. I obviously can't refactor the whole app at once and introduce e.g. Spring there.
Exposing setters (maybe with default implementations provided) is similar; moreover, it is like adding code for tests only.
So my question is: how do you deal with that? How do you make dependencies between objects better, more readable, and testable, without doing it all in one go?
I just recently started reading "Working Effectively with Legacy Code" by Michael Feathers.
The book is basically an answer to your very question. It presents very actionable "nuggets" and techniques to incrementally bring a legacy system under test and progressively improve the code base.
The navigation can be a little confusing, as the book references itself by pointing to specific techniques, almost from page 1, but I find that the content is very useful so far.
I am not affiliated with the author or anything like that, it's just that I am facing similar situations and found this a very interesting resource.
HTH
I'd try to establish a rule like the Boy Scout Rule: whenever you touch a file you have to improve it a little, apart from implementing whatever you wanted to implement.
In order to support that you can:
Agree on a fixed time budget for such improvements, e.g. for 2 hours of feature work we allow 1 hour of clean-up.
Have metrics visible showing the improvement over time. Often simple things like average file size and test coverage are sufficient.
Have a list of things you want to change, at least for the bigger stuff, like "Get rid of TelSystemConfiguratorFactory". Track which tasks you are already working on, and prefer finishing things that were already started over picking up new ones.
In any case make sure management agrees to your approach.
On the more technical side: the approach you showed is good. In many cases I would keep a second constructor with the old signature that creates the dependencies itself but delegates to the new constructor with the parameters. Mark that additional constructor as deprecated. When you touch clients of that class, make them use the new constructor.
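A sketch of that step, using the classes from the question; the old signature keeps working but delegates to the new constructor and is marked deprecated:

public class BillingSettingsAction {

    private final String zone;
    private final TelSystemConfigurator configurator;
    private final OperatorIdDao dao;

    // New constructor: dependencies are passed in, so the class is easy to test.
    public BillingSettingsAction(String zone, TelSystemConfigurator configurator, OperatorIdDao dao) {
        this.zone = zone;
        this.configurator = configurator;
        this.dao = dao;
    }

    // Old signature kept for existing callers; it only delegates and is scheduled for removal.
    @Deprecated
    public BillingSettingsAction(String zoneId) {
        this(zoneId,
                TelSystemConfiguratorFactory.instance().getConfigurator(zoneId),
                IdDaoFactory.getDao());
    }
}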
If you are going with Spring (or some other DI framework) you can start by replacing calls to static factories with lookups in the Spring context as an intermediate step, before actually creating the object via Spring and injecting all the dependencies.

JDBC-simulator for non-DB structures

Is there a framework to quickly build a JDBC-like interface for an internal data structure?
We have a complex internal data structure for which we have to generate reports. The data itself is persisted in a database, but there is a large amount of Java code which manages the dependencies, access rights, data aggregation, etc. While it is theoretically possible to write all this code again in SQL, it would be much simpler if we could add a JDBC-like API to our application and point the reporting framework at that.
Especially since "JDBC" doesn't mean "SQL"; we could use something like Commons JXPath to query our model or write our own simple query language.
[EDIT] What I'm looking for is something that implements most of the necessary boilerplate code, so you can write:
// Get column names and types from "Foo" by reflection
ReflectionResultSet rs = new ReflectionResultSet( Foo.class );
List<Foo> results = ...;
rs.setData( results );
and ReflectionResultSet takes care of cursor management, all the getters, etc.
It sounds like JoSQL (SQL for Java Objects) is exactly what you want.
Try googling "jdbc driver framework". The first result (for me) looks like a fit for you: http://jxdbcon.sourceforge.net/
Another option that might work (also in the Google results from the search above) is the Spring JDBC Template. Here is a write-up: http://www.zabada.com/tutorials/simplifying-jdbc-with-the-spring-jdbc-abstraction-framework.php
I think you'll have to create a new Driver implementation for your data structure. Usually, frameworks using JDBC just need to be given a URL and a driver, so if you define your custom driver (and all the things that go with it, for example a Connection), you'll be able to provide the JDBC-like API you want.
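A rough skeleton of what such a driver could look like; the class name and URL prefix are made up, and the custom Connection/Statement/ResultSet implementations (the real work) are not shown:

import java.sql.Connection;
import java.sql.Driver;
import java.sql.DriverManager;
import java.sql.DriverPropertyInfo;
import java.sql.SQLException;
import java.sql.SQLFeatureNotSupportedException;
import java.util.Properties;
import java.util.logging.Logger;

// Hypothetical driver exposing an in-memory object model through the JDBC API.
public class ObjectModelDriver implements Driver {

    private static final String URL_PREFIX = "jdbc:objectmodel:";

    static {
        try {
            // Registration lets DriverManager.getConnection("jdbc:objectmodel:...") find this driver.
            DriverManager.registerDriver(new ObjectModelDriver());
        } catch (SQLException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    @Override
    public boolean acceptsURL(String url) {
        return url != null && url.startsWith(URL_PREFIX);
    }

    @Override
    public Connection connect(String url, Properties info) throws SQLException {
        if (!acceptsURL(url)) {
            return null; // by JDBC convention, let other registered drivers try this URL
        }
        // Here you would return your own Connection implementation, whose Statements
        // and ResultSets are backed by the internal data structure instead of a database.
        throw new SQLException("Custom Connection not implemented in this sketch");
    }

    @Override
    public DriverPropertyInfo[] getPropertyInfo(String url, Properties info) {
        return new DriverPropertyInfo[0];
    }

    @Override
    public int getMajorVersion() { return 1; }

    @Override
    public int getMinorVersion() { return 0; }

    @Override
    public boolean jdbcCompliant() { return false; }

    @Override
    public Logger getParentLogger() throws SQLFeatureNotSupportedException {
        throw new SQLFeatureNotSupportedException();
    }
}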

Do you use Java annotations? [duplicate]

This question already has answers here:
Closed 12 years ago.
Possible Duplicates:
How and where are Annotations used in Java?
Java beans, annotations: What do they do? How do they help me?
Over and over, I read about Java 5's annotations being an 'advanced feature' of the language. Until recently, I hadn't used annotations much (other than the usual @Override, etc.), but work on a number of webservice-related projects has forced my hand. Since I learned Java pre-5, I never really took the time to sit down and grok the annotation system.
My question: do you actually use annotations? How helpful are they to you, day-to-day? How many StackOverflow-ers have had to write a custom annotation?
Perhaps the most useful and most common use of Java annotations is POJO + annotations instead of XML configuration files.
I use them a lot since (as you already stated) if you use a web framework (like Spring or Seam) it usually has plenty of annotations to help you.
I have recently written some annotations to build a custom state machine, for validation purposes, and annotations of annotations (using the metadata aspect). And IMO they help a lot in making the code cleaner, easier to understand and manage.
In my current project (200 KLOC), the annotations I use all the time are:
@NotNull / @Nullable
@Override
@Test
@Ignore
@ThreadSafe
@Immutable
But I haven't written my own annotation... Yet!
I have used annotations for:
Hibernate, so I don't need to keep those huge XML files;
XML Serialization, so I describe how the object should be rendered in the object itself;
Warning removal for warnings that I don't want to disable (and for which the particular case cannot be properly solved).
I have created annotations for:
Describing the state required in order for my method to be executed (for example, that a user must be logged in);
Marking my method as executable from a specific platform, with additional properties for that platform;
And probably some other similar operations.
The annotations that I have created are read with Reflection when I need to get more information about the object I am working with. It works and it works great.
Annotations are just for frameworks, and they do work great in Hibernate/JPA. Until you write a framework that needs some extra information from the objects passed to it, you won't write your own annotations.
However, there is a new and cool JUnit feature that lets you write your own annotations in tests: http://blog.mycila.com/2009/11/writing-your-own-junit-extensions-using.html
I use annotations daily and they are wonderful. I use them with JSF and JPA and find them much easier to manage and work with than the alternative XML configurations.
In my state synchronisation system I use annotations to describe which classes are specialisations of the annotated classes and the environment in which they should be used (when an object is created, it works out from its entity lists which entity classes are best to create for the nodes on the network; e.g., a Player entity for a server node becomes a ServerPlayer entity instead). Additionally, the attributes inside the classes are annotated to describe how they should be synchronised across machines.
We just used annotations to create a simple way to validate our POJOs:
@NotEmpty
@Pattern(regex = "I")
private String value;
Then we run this through the Hibernate validator which will do all our validation for us:
import org.hibernate.validator.ClassValidator;
import org.hibernate.validator.InvalidValue;

public <T> void validate(T validateMe) {
    ClassValidator<T> validator = new ClassValidator<T>((Class<T>) validateMe.getClass());
    InvalidValue[] errors = validator.getInvalidValues(validateMe);
}
Works great. Nice clean code.
We use custom annotations as a part of our integration testing system:
@Artifact: Associates an integration test with an issue ID. Trace matrices are then automatically generated for our testing and regulatory departments.
@Exclude: Ignores an integration test based on the browser platform/version. Keeps the IE 6 bugs from clogging up our nightly test runs :)
@SeleniumSession: Defines test-specific Selenium settings for each integration test.
They are a very powerful tool, but you gotta use them carefully. Just have a look at those early .NET Enterprise class files to see what a nightmare mandatory annotations can be :)
We have a report builder as part of our webapp. A user can add a large number of widgets that are all small variations on the same set of themes (graphs, tables, etc).
The UI builds itself based on custom annotations in the widget classes (e.g. an annotation might contain a default value and the valid values, which would render as a dropdown, or a flag indicating whether the field is mandatory).
It has turned out to be a good way to allow devs to crank out widgets without having to touch the UI.
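As an illustration only (not our actual code), a custom annotation along those lines and the reflection that reads it might look like this:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;
import java.util.Arrays;

// Hypothetical annotation carrying the UI hints mentioned above.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
@interface WidgetField {
    String defaultValue() default "";
    String[] validValues() default {};   // rendered as a dropdown when non-empty
    boolean mandatory() default false;
}

class GraphWidget {
    @WidgetField(defaultValue = "bar", validValues = {"bar", "line", "pie"}, mandatory = true)
    String chartType;
}

public class WidgetUiBuilder {
    public static void main(String[] args) {
        // The UI builder inspects the widget class via reflection and renders inputs accordingly.
        for (Field field : GraphWidget.class.getDeclaredFields()) {
            WidgetField meta = field.getAnnotation(WidgetField.class);
            if (meta != null) {
                System.out.println(field.getName()
                        + " default=" + meta.defaultValue()
                        + " options=" + Arrays.toString(meta.validValues())
                        + " mandatory=" + meta.mandatory());
            }
        }
    }
}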

Dynamically create table and Java classes at runtime

I have a requirement in my application. My tables won't be defined beforehand.
For example, if a user creates a form named Student and adds attributes like name, roll no, subject, class, etc., then at runtime a table named student should be created with columns name, roll no, subject, class, etc., along with its related class and its Hibernate mapping file.
Is there any way of doing so?
Thanks in advance,
Rima Desai
Hibernate supports dynamic models, that is, entities that are defined at run-time, but you have to write out a mapping file. You should note a couple of things about dynamic models:
You may be restricted in how you define these at run-time (viz. you will have to use the Session directly instead of using a helper method from HibernateTemplate or something like that).
Dynamic models are supported using Maps as the container for the fields of an entity, so you will lose typing and a POJO-style API at run-time (without doing something beyond the baked-in dynamic model support).
All of that said, you didn't mention whether it was a requirement that the dynamically defined tables be persistent across application sessions. That could complicate things, potentially.
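A rough sketch of what the dynamic-map mode looks like from the Java side, assuming a runtime-generated mapping that declares an entity named "Student"; the entity and property names are just examples:

import java.util.HashMap;
import java.util.Map;
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;

public class DynamicModelExample {

    // Assumes an hbm.xml generated at runtime, roughly:
    //   <class entity-name="Student"> <id name="id" .../> <property name="name" type="string"/> ... </class>
    public static void saveStudent(SessionFactory sessionFactory) {
        Session session = sessionFactory.openSession();
        Transaction tx = session.beginTransaction();
        try {
            Map<String, Object> student = new HashMap<String, Object>();
            student.put("name", "Rima");
            student.put("rollNo", 42);
            // The entity is addressed by name; its fields live in the Map, not in a generated POJO.
            session.save("Student", student);
            tx.commit();
        } catch (RuntimeException e) {
            tx.rollback();
            throw e;
        } finally {
            session.close();
        }
    }
}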
It's possible, but it's not clear why you would want to do something like that, so it's hard to suggest any specific solution.
But generally, yes, you can generate database tables, hibernate classes and mappings dynamically based on some input. The easiest approach is to use some template engine. I've used Velocity in the past and it was very good for this task, but there are others too if you want to try them.
EDIT:
Following the OP's clarification, a better approach is to use XML to store user-defined data.
The above solution is good, but it requires recompiling the application whenever forms are changed. If you don't want to stop and recompile after each user edit, XML is a much better answer.
To give you some head start:
@Entity
public class UserDefinedFormData {

    @Id
    private long id;

    @ManyToOne
    private FormMetadata formMetadata;

    @Lob
    private String xmlUserData;
}
Given a definition of the form it would trivial to save and load data saved as XML.
Add a comment if you would like some more clarifications.
Last week I was looking for the same solution, and then I got an idea from the com.sun.tools.javac.Main.compile class: you create the entity class using Java I/O and compile it using the Java tools. For this you need tools.jar on the classpath. Now I am looking for runtime Hibernate mapping without a restart.
Someone said in this post, regarding this type of requirement, "it's not clear why you would want to do something like that"; the answer is that this requirement comes up for a CMS (Content Management System), and I am doing the same. The code is below.
public static void createClass() {
    String className = "DynamicTestClass";
    String fileName = className + ".java";
    String methodName = "execute";
    String parameterName = "strParam";
    try {
        // Creates the DynamicTestClass.java file
        FileWriter fileWriter = new FileWriter(fileName, false);
        fileWriter.write("public class " + className + " {\n");
        fileWriter.write("public String " + methodName + "(String " + parameterName + ") {\n");
        fileWriter.write("System.out.println(\"Testing\");\n");
        fileWriter.write("return " + parameterName + " + \" is dumb\";\n}\n}");
        fileWriter.flush();
        fileWriter.close();
        // Compile the generated source (requires tools.jar on the classpath)
        String[] source = { fileName };
        com.sun.tools.javac.Main.compile(source);
    } catch (IOException e) {
        e.printStackTrace();
    }
}
