I want to create a class file dynamically. Here it goes...
With a given ResultSet, I want to extract the metadata and build a class file dynamically, with getter and setter methods for all the columns that exist in the ResultSet. I should also be able to use this generated class file wherever I want later on.
Can anybody suggest a better way to implement this? If any existing jar files are available for this, that would also be helpful.
Perhaps Apache Commons BeanUtils might suit your requirements?
See the section on DynaBeans
In particular:
3.3 ResultSetDynaClass (Wraps ResultSet in DynaBeans)
A very common use case for DynaBean APIs is to wrap other collections of "stuff" that do not normally present themselves as JavaBeans. One of the most common collections that would be nice to wrap is the java.sql.ResultSet that is returned when you ask a JDBC driver to perform a SQL SELECT statement. Commons BeanUtils offers a standard mechanism for making each row of the result set visible as a DynaBean, which you can utilize as shown in this example:
Connection conn = ...;
Statement stmt = conn.createStatement();
ResultSet rs = stmt.executeQuery("select account_id, name from customers");
Iterator rows = (new ResultSetDynaClass(rs)).iterator();
while (rows.hasNext()) {
    DynaBean row = (DynaBean) rows.next();
    System.out.println("Account number is " + row.get("account_id") +
                       " and name is " + row.get("name"));
}
rs.close();
stmt.close();
3.4 RowSetDynaClass (Disconnected ResultSet as DynaBeans)
Although ResultSetDynaClass is a very useful technique for representing the results of an SQL query as a series of DynaBeans, an important problem is that the underlying ResultSet must remain open throughout the period of time that the rows are being processed by your application. This hinders the ability to use ResultSetDynaClass as a means of communicating information from the model layer to the view layer in a model-view-controller architecture such as that provided by the Struts Framework, because there is no easy mechanism to assure that the result set is finally closed (and the underlying Connection returned to its connection pool, if you are using one).
The RowSetDynaClass class represents a different approach to this problem. When you construct such an instance, the underlying data is copied into a set of in-memory DynaBeans that represent the result. The advantage of this technique, of course, is that you can immediately close the ResultSet (and the corresponding Statement), normally before you even process the actual data that was returned. The disadvantage, of course, is that you must pay the performance and memory costs of copying the result data, and the result data must fit entirely into available heap memory. For many environments (particularly in web applications), this tradeoff is usually quite beneficial.
As an additional benefit, the RowSetDynaClass class is defined to implement java.io.Serializable, so that it (and the DynaBeans that correspond to each row of the result) can be conveniently serialized and deserialized (as long as the underlying column values are also Serializable). Thus, RowSetDynaClass represents a very convenient way to transmit the results of an SQL query to a remote Java-based client application (such as an applet).
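For contrast with the connected example above, here is a minimal sketch of the disconnected variant (conn is assumed to be an already-open Connection; RowSetDynaClass and its getRows() method are part of the Commons BeanUtils API):

Statement stmt = conn.createStatement();
ResultSet rs = stmt.executeQuery("select account_id, name from customers");
RowSetDynaClass rows = new RowSetDynaClass(rs);
// The rows have been copied into memory, so the JDBC resources
// can be released before the data is even processed.
rs.close();
stmt.close();
for (Object obj : rows.getRows()) {
    DynaBean row = (DynaBean) obj;
    System.out.println(row.get("account_id") + ": " + row.get("name"));
}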
The thing is though - from the sounds of your situation, I understand that you want to create this class at runtime, based on the contents of a ResultSet that you just got back from a database query. This is all well and good, and can be done with bytecode manipulation.
However, what benefit do you perceive you will get from this? Your other code will not be able to call any methods on this class (because it did not exist when they were compiled), so the only way to actually use the generated class would be either via reflection or via methods on its parent class or implemented interfaces (I'm going to assume it would implement ResultSet). You can do the latter without bytecode weaving (look at dynamic proxies for arbitrary runtime implementations of an interface, as sketched below), and if you're doing the former, I don't see how having a class and mechanically calling the getFoo method through reflection is better than just calling resultSet.getString("foo"); it will be slower, clunkier and less type-safe.
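For illustration, a minimal dynamic-proxy sketch (the CustomerRow interface and the getFoo-to-column naming convention are assumptions for illustration, not your actual model):

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.sql.ResultSet;
import java.sql.SQLException;

// Hypothetical interface whose getters mirror the query's columns.
interface CustomerRow {
    String getName() throws SQLException;
}

static CustomerRow wrap(final ResultSet rs) {
    return (CustomerRow) Proxy.newProxyInstance(
            CustomerRow.class.getClassLoader(),
            new Class<?>[] { CustomerRow.class },
            new InvocationHandler() {
                public Object invoke(Object proxy, Method method, Object[] args)
                        throws Throwable {
                    // Map getFoo() to rs.getString("foo").
                    String column = method.getName().substring(3).toLowerCase();
                    return rs.getString(column);
                }
            });
}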
So - are you sure you really want to create a class to achieve your goal?
You might want to look at BCEL, although I believe there are other bytecode manipulation libraries available too.
If you're using Java 6 you can write your code and directly call the Java compiler:
import java.io.File;
import java.util.Arrays;
import javax.tools.JavaCompiler;
import javax.tools.JavaFileObject;
import javax.tools.StandardJavaFileManager;
import javax.tools.ToolProvider;

File[] files1 = ... ; // input for first compilation task
File[] files2 = ... ; // input for second compilation task

JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
StandardJavaFileManager fileManager =
    compiler.getStandardFileManager(null, null, null);

Iterable<? extends JavaFileObject> compilationUnits1 =
    fileManager.getJavaFileObjectsFromFiles(Arrays.asList(files1));
compiler.getTask(null, fileManager, null, null, null, compilationUnits1).call();

Iterable<? extends JavaFileObject> compilationUnits2 =
    fileManager.getJavaFileObjects(files2); // use alternative method
// reuse the same file manager to allow caching of jar files
compiler.getTask(null, fileManager, null, null, null, compilationUnits2).call();

fileManager.close();
You will then have to load said class but you can do that easily enough with a class loader.
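For example, a minimal sketch with URLClassLoader (the "classes" output directory and the GeneratedBean class name are assumptions):

import java.io.File;
import java.net.URL;
import java.net.URLClassLoader;

URLClassLoader loader = new URLClassLoader(
        new URL[] { new File("classes").toURI().toURL() });
Class<?> generated = loader.loadClass("GeneratedBean");
Object instance = generated.newInstance(); // then access it via reflection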
Sadly this is what you have to do in Java.
In C# you just use the 'var' type.
I'm confused about the way it's supposed to work, and I don't think it's possible.
Here's why:
If you want to use the class's code in the rest of your application, you need an interface (or heavy use of reflection), and that would mean you know the column types beforehand, defeating the purpose of a generated class.
A generated class might clash at runtime with another one.
If you create a new class for each SQL call, you will end up with different classes for the same purpose, and these would probably not even pass a regular call to "equals".
You have to look up classes from previously executed statements, and you lose flexibility and/or fill your heap with classes.
I've done something probably similar, but I wouldn't create dynamic classes.
I had an object called Schema that would load the schema information of each table I'd need.
I had a Table object that referenced a Schema. Each Schema object had a columns attribute, while each Table object held values along with references to the corresponding Schema column attributes.
The Schema had everything you'd need to insert, select, delete and update data in the database.
And I had a mediator that handled the connection between the database and the Table objects.
Table t = new Table("Dog");
t.randomValue(); // needed for the purpose of my project
t.save();

Table u = Table.get(t);
u.delete();
But it could have been given something to get the value of a certain column by name easily.
Anyway, the principle is simple: my code would load the data contained in the information_schema tables; it could probably work with a DESCRIBE too.
I was able to load any table dynamically; since tables had dynamic attributes, the structure wasn't hardcoded. But there is no real need to create new classes for each table.
There is also something important to note: each table schema was loaded only once. Tables only had references to schemas, schemas had references to columns, columns had references to column types, etc.
It could have been interesting to find a better use than it had. I made it for unit tests of database replication; I had no real interest in coding a class for each of the 30 tables just to do inserts, deletes, updates and selects. That's the only reason I can see for creating something dynamic around SQL: when you don't need to know anything about the tables and only want to insert/delete junk into them.
If I had to redo my code, I'd use associative arrays more.
Anyway, good luck.
I second the comments made by dtsazza and Stroboskop; generating a new class at run time is probably not what you want to do in this case.
You haven't really gotten into why you want to do this, but it sounds like you are trying to roll your own object-relational mapper. That is a problem that's much harder to get right than it first seems.
Instead of building your own system from the ground up, you might want to look into existing solutions like Hibernate (a high-level system that manages most of your objects and queries for you) or iBatis (a bit more low-level; it handles object mapping, but you still get to write your own SQL).
I have found that in JSF, beans and maps can be used interchangeably. Hence, for handling results where you don't want to build a complete set of getters/setters but just populate an h:dataTable, it is much easier to create a list with a map for each row, where the key is the column name (or number) and the value is the column content.
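A minimal sketch of building such a list of maps (rs is assumed to be an open ResultSet):

import java.sql.ResultSetMetaData;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

List<Map<String, Object>> rows = new ArrayList<Map<String, Object>>();
ResultSetMetaData meta = rs.getMetaData();
int columns = meta.getColumnCount();
while (rs.next()) {
    Map<String, Object> row = new HashMap<String, Object>();
    for (int i = 1; i <= columns; i++) {
        row.put(meta.getColumnName(i), rs.getObject(i));
    }
    rows.add(row);
}
// Expose 'rows' as a managed-bean property and bind it to the h:dataTable.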
If you find it relevant later to make it more typesafe, you can then rework the backend code with beans, and keep your JSF-code unchanged.
What I am trying to do is create a copy of a Prolog instance and load that copy with JPL (the Java-Prolog Interface). I can think of several possible ways to do this, but none of them are completely worked out, and that is why I have come here.
First, I know I could save a copy of the state using qsave_program/2. This creates an exe file which I can run. However, I need to query this saved instance from Java using JPL. I've tried looking for documentation for this, but I couldn't find any (probably not a common issue). Is there any way I can run an instance saved using qsave_program/2 and query it from JPL?
The second idea would be to query the original instance for all dynamically asserted clauses. However, I cannot know what was asserted, so I cannot ask for those things directly, but rather I must collect these clauses based on the fact that they are dynamic. Then I could simply start another instance from JPL and assert these facts to create a copy. Is this possible? And would this effectively create a copy of the state?
Alright, here is the solution I've decided on. I can find all the predicates I will need to reassert with the following query:
predicate_property(X,dynamic),\+predicate_property(X,built_in),\+predicate_property(X,number_of_clauses(0)).
Here's why I think this will work for me.
predicate_property(X,dynamic) will give me all the dynamic predicates. The reason I don't stop here is because there are a lot of predicates that are dynamic whose clauses I don't need to explicitly assert in my new instance of prolog. Predicates that have the property built_in can be ignored, because those will be automatically defined when I create the new instance of my prolog query. Even if they are explicitly defined by the user, that definition will be reinstantiated because I am consulting the same file. I can also ignore those predicates that have no clauses (number_of_clauses(0)), because the predicates are not affecting the state if they have no clauses.
So, once I have all the dynamic predicates I want, I can find all solutions to those predicates, make a list of the Terms returned in Java through JPL, open a new consultation of the file, and reassert those Terms.
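A rough JPL sketch of that plan, assuming the org.jpl7 package names (older JPL versions use the jpl package) and a hypothetical theory.pl source file; call(X) is appended to the query so that X comes back with its arguments bound for each solution:

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import org.jpl7.Query;
import org.jpl7.Term;

// Collect one bound term per solution of every interesting dynamic predicate.
List<Term> state = new ArrayList<Term>();
Query collect = new Query(
        "predicate_property(X,dynamic), "
      + "\\+predicate_property(X,built_in), "
      + "\\+predicate_property(X,number_of_clauses(0)), "
      + "call(X)");
while (collect.hasMoreSolutions()) {
    Map<String, Term> solution = collect.nextSolution();
    state.add(solution.get("X"));
}
collect.close();

// Re-consult the source file in a fresh context, then reassert the state.
new Query("consult('theory.pl')").hasSolution(); // file name is an assumption
for (Term clause : state) {
    new Query("assertz", new Term[] { clause }).hasSolution();
}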
I have a large data set. I am creating a system which allows users to submit java source files, which will then be applied to the data set. To be more specific, each submitted java source file must contain a static method with a specific name, let's say toBeInvoked(). toBeInvoked will take a row of the data set as an array parameter. I want to call the toBeInvoked method of each submitted source file on each row in the data set. I also need to implement security measures (so toBeInvoked() can't do I/O, can't call exit, etc.).
Currently, my implementation is this: I have a list of the names of the java source files. For each file, I create an instance of the custom secure ClassLoader which I coded, which compiles the source file and returns the compiled class. I use reflection to extract the static method toBeInvoked() (e.g. method = c.getMethod("toBeInvoked", double[].class)). Then, I iterate over the rows of the data set, and invoke the method on each row.
There are at least two problems with my approach:
it appears to be painfully slow (I've heard reflection tends to be slow)
the code is more complicated than I would like
Is there a better way to accomplish what I am trying to do?
There is no significantly better approach given the constraints that you have set yourself.
For what it is worth, what makes this "painfully slow" is compiling the source files to class files and loading them. That is many orders of magnitude slower than the use of reflection to call the methods.
(Use of a common interface rather than static methods is not going to make a measurable difference to speed, and the reduction in complexity is relatively small.)
If you really want to simplify this and speed it up, change your architecture so that the code is provided as a JAR file containing all of the compiled classes.
Assuming your #toBeInvoked() could be defined in an interface rather than being static (it should be!), you could just load the class and cast it to the interface:
Class<? extends YourInterface> c = Class.forName("name", true, classLoader).asSubclass(YourInterface.class);
YourInterface i = c.newInstance();
Afterwards invoke #toBeInvoked() directly.
Also have a look into java.util.ServiceLoader, which could be helpful for finding the right class to load in case you have more than one source file.
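A minimal ServiceLoader sketch (YourInterface, classLoader and dataSet are assumptions carried over from the discussion above; each submitted JAR must list its implementation class in META-INF/services/com.example.YourInterface):

import java.util.ServiceLoader;

ServiceLoader<YourInterface> services =
        ServiceLoader.load(YourInterface.class, classLoader);
for (YourInterface implementation : services) {
    for (double[] row : dataSet) {
        implementation.toBeInvoked(row);
    }
}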
Personally, I would use an interface. This will allow you to have multiple instances with their own state (useful for multi-threading), but more importantly you can use the interface both to define which methods must be implemented and to call those methods.
Reflection is slow, but only relative to other options such as a direct method call. If you are scanning a large data set, the fact that you have to pull data from main memory is likely to be much more expensive.
I would suggest the following steps for your problem.
To check that a method does not contain any unwanted code, you need a check script which can perform these checks at upload time.
Create an interface having a method toBeInvoked() (not a static method); a sketch follows these steps.
All the classes which are uploaded must implement this interface and put their logic inside this method.
You can have your custom class loader scan a particular folder for new classes being added and load them accordingly.
When a file is uploaded and successfully validated, you can compile it and copy the class file to the folder which the class loader scans.
Your processor class can look for new files and then call the toBeInvoked() method on the loaded classes when required.
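A minimal sketch of steps 2 and 3 (the interface and class names are assumptions):

// Step 2: the shared contract every upload must implement.
public interface RowTask {
    void toBeInvoked(double[] row);
}

// Step 3: an example of an uploaded class.
public class UserSubmission implements RowTask {
    @Override
    public void toBeInvoked(double[] row) {
        // user-supplied logic operating on one row of the data set
    }
}

The processor can then do loadedClass.asSubclass(RowTask.class).newInstance() once per upload and call toBeInvoked() directly for each row, with no per-row reflection.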
Hope this helps. (Note that I have used a similar mechanism to load workflow step classes dynamically in a workflow engine tool that I developed.)
I have a little design issue on which I would like to get some advice:
I have several classes that inherit from the same base class, each one can accept the same data and analyze it in a slightly different way.
Analyzer
|
˪_ AnalyzerA
|
˪_ AnalyzerB
...
I have an input file (I do not have control over the file's format) that defines which analyzers should be invoked and their parameters. It also defines data-extractors in the same way, and other similar things too (by similar I mean an action that can have several variations).
I have a module that iterates over the different analyzers in the file and calls a factory that constructs the correct analyzer. I have a factory for each of the archetypes the input file can define, and so far so good.
But what if I want to extend it and to add a new type of analyzer?
The solution I was thinking about is using a property file for each factory, named after the factory, that holds a mapping between the input file's definition of whatever it wants me to execute and the actual classes that I use to execute the action.
This way I could load the class at run time, verify that it implements the right interface, and then execute it.
If some John Doe would like to create his own analyzer, he'd just need to add a new property to the correct file (I'm not quite sure what would be the best strategy to allow this kind of property customization).
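For instance, a minimal sketch using only JDK classes (the property file name, the key, and the Analyzer interface are assumptions):

import java.io.FileInputStream;
import java.io.InputStream;
import java.util.Properties;

Properties mapping = new Properties();
InputStream in = new FileInputStream("AnalyzerFactory.properties");
try {
    mapping.load(in);
} finally {
    in.close();
}

String className = mapping.getProperty("analyzerA"); // e.g. com.example.AnalyzerA
Class<?> raw = Class.forName(className);
if (!Analyzer.class.isAssignableFrom(raw)) {
    throw new IllegalArgumentException(className + " does not implement Analyzer");
}
Analyzer analyzer = raw.asSubclass(Analyzer.class).newInstance();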
So in short:
Is my solution too flawed?
If not, what would be the most user-friendly/convenient way to allow customization of properties?
P.S.
Unfortunately I'm confined to using only built-in JDK classes for the existing solution, so I can't just drop SF in.
I hope this question is not out of line; I'm just not used to having my wings clipped this way, without SF or something similar to help me implement an elegant solution.
Have a look at the way the java.sql.DriverManager.getConnection(connectionString) method is implemented. The best way is to read the source code.
Very rough summary of the idea (it is hidden inside a lot of private methods): it is more or less an implementation of chain of responsibility, although there is no linked list of drivers.
DriverManager manages a list of drivers.
Each driver must register itself with the DriverManager by calling its registerDriver() method.
Upon request for a connection, the getConnection(connectionString) method sequentially calls the drivers, passing them the connectionString.
Each driver KNOWS whether the given connection string is within its competence. If yes, it creates the connection and returns it. Otherwise control is passed to the next driver.
Analogy:
drivers = your concrete Analyzers
connection strings = types of your files to be analyzed
Advantages:
There is no need to explicitly bind the analyzers to the type of file they are meant for. Let each analyzer decide itself whether it is able to analyze the file. If not, null is returned (or an exception, or whatever) to tell the AnalyzerManager that the next analyzer in the row should be asked.
Adding a new analyzer just means adding a new call to the register() method. Complete decoupling. A minimal sketch of this registry follows.
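A rough sketch under stated assumptions (the Analyzer type and its accepts() method are invented for illustration):

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public final class AnalyzerManager {

    private static final List<Analyzer> ANALYZERS = new CopyOnWriteArrayList<Analyzer>();

    private AnalyzerManager() {}

    // Each Analyzer registers itself, e.g. from a static initializer.
    public static void register(Analyzer analyzer) {
        ANALYZERS.add(analyzer);
    }

    // Asks each registered analyzer in turn, exactly like
    // DriverManager.getConnection() asks each registered Driver.
    public static Analyzer forInput(String inputDefinition) {
        for (Analyzer candidate : ANALYZERS) {
            if (candidate.accepts(inputDefinition)) {
                return candidate;
            }
        }
        throw new IllegalArgumentException("No analyzer accepts: " + inputDefinition);
    }
}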
Coming from a Perl background, having done some simple OO there, I am struggling to grasp the Android/Java way of interacting with databases.
In my experience I would create a file or class for each object, and an object would match/represent a table in the database.
Within that object would be a constructor, variables for the data, methods/functions to return the individual variables, and also the DB queries to read from and write to the DB, performing all the necessary CRUD functions.
From what I read on the web, in Android I would create the objects similarly but without the DB interaction. This would happen in either a single class with all my DB functionality in it, or multiple DB classes, one per table.
My questions are all to do with best practices really.
Can I do my app the way I would in Perl? If not, why not? And if so, what are the pros, cons and limitations?
What do the terms DAO, Adapter and POJO mean?
Should I make an application class and globally declare the DB there?
Should I create one handler in each activity or in the application class only?
I have read so many tutorials now that my head is spinning, each with a different way of doing things, mostly covering only a single table, and few showing actual objects directly representing tables.
I'm happy to hear opinion, be linked to tutorials or just have the odd term explained.
Thanks in advance
If I am reading you correctly, ORMLite may be your best bet. It uses reflection for database creation and maintenance, which seems to be how Perl does it.
POJO means Plain Old Java Object, which is to say just a normal class.
An adapter would be the class that contains the CRUD stuff and manages the database itself. There are quite a few patterns around in the Android world, and talking about them could fill a book.
I prefer the pattern where I open the database once in my Application class and never close it (Android does that when it kills the app). A sample from a very old project of mine might show you the basic idea.
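In that spirit, a minimal sketch of the open-once pattern (MyOpenHelper, a hypothetical SQLiteOpenHelper subclass, and the class names are assumptions):

import android.app.Application;
import android.database.sqlite.SQLiteDatabase;

public class MyApplication extends Application {

    private static SQLiteDatabase sDatabase;

    @Override
    public void onCreate() {
        super.onCreate();
        // Opened once for the process; never explicitly closed.
        sDatabase = new MyOpenHelper(this).getWritableDatabase();
    }

    public static SQLiteDatabase getDatabase() {
        return sDatabase;
    }
}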
DAO means Data Access Object, a topic that can fill dozens of books. I would just start programming and see where you're heading...
The other poster is correct in putting forward ORMLite as a great way to manage code relationships that mirror your database. If you're looking to do it on your own, however, there are a ton of ways to do things, and I wouldn't say the community has really gravitated toward one over the other. Personally, I tend to have my entities represented by Plain Old Java Objects (POJO implies no special connectivity to other things, like databases), where the various attributes of the table correspond to field values. I then persist and retrieve those objects through a Data Access Object (DAO). The DAOs all have access to a shared, open Database object, against which they execute queries according to need.
For example: if I had a table foo, I would have a corresponding entity class Foo, with attributes corresponding to columns. The class FooDAO would have mechanisms to get a Foo:
public Foo getFooById(Integer id) {
    String[] selectionArgs = { id.toString() };
    String limit = "1";
    // The last argument is the limit and must be a String, not an int.
    Cursor c = mDatabase.query(FOO_TABLE, null, "id=?", selectionArgs,
            null, null, null, limit);
    // Create a new Foo from the returned cursor and close it
}
A second entity bar might have many foo. For that, we would, in Bar, reference the FooDAO to get all of bar's member foo:
public class Bar {
    public List<Foo> getFoo() {
        return mFooDAO.getFooByBar(this);
    }
}
etc. The scope of what one can do in rolling your own ORM like this is pretty vast, so do as much or as little as you need. Or just use ORMLite and skip the whole thing :)
Also, the Android engineers frown on subclassing Application for globally accessible objects, preferring singletons (see hackbod's answer), but opinions vary.
Is there a framework to quickly build a JDBC-like interface for an internal data structure?
We have a complex internal data structure for which we have to generate reports. The data itself is persisted in a database, but there is a large amount of Java code which manages the dependencies, access rights, data aggregation, etc. While it is theoretically possible to write all this code again in SQL, it would be much simpler if we could add a JDBC-like API to our application and point the reporting framework at that.
Especially since "JDBC" doesn't mean "SQL"; we could use something like commons jxpath to query our model or write our own simple query language.
[EDIT] What I'm looking for is something that implements most of the necessary boilerplate code, so you can write:
// Get column names and types from "Foo" by reflection
ReflectionResultSet rs = new ReflectionResultSet( Foo.class );
List<Foo> results = ...;
rs.setData( results );
and ReflectionResultSet takes care of cursor management, all the getters, etc.
It sounds like JoSQL (SQL for Java Objects) is exactly what you want.
try googling "jdbe driver framework" The first (for me) looks like a fit for you: http://jxdbcon.sourceforge.net/
Another option that might work (also on the google results from the search above) is the Spring JDBC Templage. Here is a writeup http://www.zabada.com/tutorials/simplifying-jdbc-with-the-spring-jdbc-abstraction-framework.php
I think you'll have to create a new Driver implementation for your data structure. Frameworks using JDBC usually just have to be given a URL and a driver, so if you define your custom driver (and all the things that go with it, for example Connection), you'll be able to expose the JDBC-like API you want.
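To make the shape of that concrete, a minimal sketch (the "jdbc:mymodel:" URL scheme and all class names are assumptions; the stubbed Connection proxy is where the real work against your internal model would go):

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.sql.Driver;
import java.sql.DriverManager;
import java.sql.DriverPropertyInfo;
import java.sql.SQLException;
import java.sql.SQLFeatureNotSupportedException;
import java.util.Properties;

public class ModelDriver implements Driver {

    static {
        try {
            DriverManager.registerDriver(new ModelDriver());
        } catch (SQLException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    public boolean acceptsURL(String url) {
        return url != null && url.startsWith("jdbc:mymodel:");
    }

    public Connection connect(String url, Properties info) throws SQLException {
        if (!acceptsURL(url)) {
            return null; // per the JDBC contract, lets the next driver try
        }
        // Stub Connection so the sketch compiles; a real implementation
        // would evaluate statements against the internal data structure.
        return (Connection) Proxy.newProxyInstance(
                getClass().getClassLoader(),
                new Class<?>[] { Connection.class },
                new InvocationHandler() {
                    public Object invoke(Object proxy, Method method, Object[] args)
                            throws Throwable {
                        throw new SQLFeatureNotSupportedException(method.getName());
                    }
                });
    }

    public int getMajorVersion() { return 1; }
    public int getMinorVersion() { return 0; }
    public boolean jdbcCompliant() { return false; }
    public DriverPropertyInfo[] getPropertyInfo(String url, Properties info) {
        return new DriverPropertyInfo[0];
    }
    // Required by JDBC 4.1 (Java 7+); harmless extra method on Java 6.
    public java.util.logging.Logger getParentLogger()
            throws SQLFeatureNotSupportedException {
        throw new SQLFeatureNotSupportedException();
    }
}

A reporting tool could then be pointed at the driver class and a "jdbc:mymodel:" URL like any other JDBC data source.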