Converting a SQL-update-based application to a Java rule-based application

I have a data-centric and data-sensitive application written in Java, but almost all of the business logic is maintained in .sql files.
These SQL files are executed one by one; a temporary table is created and then updated by them.
Internally, the SQL files fire update queries on the temporary table with the available data values under various conditions.
Finally, the temporary table is dumped into a physical table.
We are planning to move this to a Java rule-based application, as the SQL scripts are getting huge and hard to understand as well as maintain.
We plan to keep all the data in memory using Lucene and its RAMDirectory. What would be the preferred choice for building the rules (which are nothing but the update queries currently in SQL)?
I looked at scripting languages (Rhino, Groovy) for dynamic rules, but scripts have the same characteristics as the SQL files: hard to write and maintain.
Please post your suggestions.
Thanks in advance!

Our company uses Drools. It works really well for us. Drools normally has you write your rules in an XML-based format, but we extended some of their classes so we could write our rules in Java (which lets us debug the rules at runtime).

We also use JBoss Rules / Drools. The newer versions (4.0.0 and later) have a nice DSL that is perfectly readable and maintainable. No more XML is needed.
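To illustrate the "rules written in Java" idea from the answers above, here is a minimal, hypothetical sketch (not the Drools API): each rule is a condition plus an action applied to in-memory rows, so one rule object replaces one SQL UPDATE statement.

```java
import java.util.*;

// Hypothetical sketch: each Rule replaces one "UPDATE tmp SET ... WHERE ..." statement.
interface Rule {
    boolean matches(Map<String, Object> row);   // the WHERE clause
    void apply(Map<String, Object> row);        // the SET clause
}

public class RuleEngine {
    static void run(List<Map<String, Object>> table, List<Rule> rules) {
        for (Map<String, Object> row : table)
            for (Rule rule : rules)
                if (rule.matches(row)) rule.apply(row);
    }

    public static void main(String[] args) {
        List<Map<String, Object>> table = new ArrayList<>();
        table.add(new HashMap<>(Map.of("status", "NEW", "amount", 150)));
        table.add(new HashMap<>(Map.of("status", "NEW", "amount", 50)));

        // Equivalent of: UPDATE tmp SET status = 'REVIEW' WHERE amount > 100
        Rule largeAmount = new Rule() {
            public boolean matches(Map<String, Object> row) { return (int) row.get("amount") > 100; }
            public void apply(Map<String, Object> row) { row.put("status", "REVIEW"); }
        };

        run(table, List.of(largeAmount));
        System.out.println(table.get(0).get("status"));
        System.out.println(table.get(1).get("status"));
    }
}
```

The advantage over a pile of SQL scripts is that each rule is a small, named, individually testable unit; a real rule engine like Drools adds pattern matching and conflict resolution on top of this basic shape.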

Related

Generic approach of mirroring data from Oracle to another database

We have a source Oracle database with a lot of tables (say, 100) that we need to mirror to a target database, so we need to copy data increments to the target tables periodically. The target database is currently Oracle, but in the near future it will probably be changed to a different database technology.
Currently we could create a PL/SQL procedure that dynamically generates DML (insert, update, or merge statements) for each table from the Oracle metadata (assuming that the source and target tables have exactly the same attributes).
But we would rather create a database-technology-independent solution, so that when we change the target database to another (e.g. MS SQL or Postgres), we will not need to change the whole data-mirroring logic.
Does anyone have a suggestion for how to do this differently (preferably in Java)?
Thanks for any advice.
The problem you have is called CDC (change data capture). In the case of Oracle this is complicated, because Oracle usually charges money for it.
Your options:
Use PL/SQL or Java with plain SQL to incrementally detect changes in the data. This requires plenty of work, and performance is bad.
Use tools based on Oracle triggers, which detect data changes and push them into a queue.
Use a tool that can parse the contents of the Oracle archive logs. These are commercial products: GoldenGate (from Oracle) and SharePlex (from Quest). GoldenGate also contains a Java technology (XStreams) that lets you inject a Java visitor into the data stream. These technologies also support sending data changes into a Kafka stream.
There are plenty of tools, like Debezium, Informatica, and Tibco, which cannot parse the archived logs themselves but instead use Oracle's built-in LogMiner. These tools usually do not scale well and cannot cope with higher data volumes.
In summary: if you have money, pick GoldenGate or SharePlex. If you don't, pick Debezium or another Java CDC project based on LogMiner.
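The first option (incremental detection with plain SQL, generated from metadata so it stays vendor-neutral) can be sketched in Java roughly as below. This is only an illustration of the idea, not a robust CDC tool: it assumes every table has a `last_updated` column to act as a high-water mark, and the table/column names are made up.

```java
import java.util.*;

// Sketch: generate vendor-neutral incremental-copy SQL from table metadata,
// instead of hand-writing a PL/SQL procedure per table. Assumes (hypothetically)
// that every mirrored table has a LAST_UPDATED column for change detection.
public class MirrorSqlGenerator {
    static String incrementalSelect(String table, List<String> columns) {
        return "SELECT " + String.join(", ", columns)
             + " FROM " + table
             + " WHERE last_updated > ?";   // bind the high-water mark from the previous run
    }

    static String insert(String table, List<String> columns) {
        String placeholders = String.join(", ", Collections.nCopies(columns.size(), "?"));
        return "INSERT INTO " + table + " (" + String.join(", ", columns)
             + ") VALUES (" + placeholders + ")";
    }

    public static void main(String[] args) {
        List<String> cols = List.of("id", "name", "last_updated");
        System.out.println(incrementalSelect("customers", cols));
        System.out.println(insert("customers", cols));
    }
}
```

Because only ANSI SQL is emitted, the same generator works against Oracle, MS SQL, or Postgres over JDBC; the per-vendor part (reading column metadata) can come from `DatabaseMetaData`, which is itself vendor-neutral.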

Java application - can I store my SQL queries in the DB rather than in a file packaged inside the application?

As the application gets complicated, one thing that changes a lot is the queries, especially the complex ones. Wouldn't it be easier to maintain the queries in the DB rather than in the resources location inside the package, so that they can be enhanced without a code change? What are the drawbacks of this?
You can use stored procedures to save your queries in the database. Then your Java code can just call the procedure instead of building a complex query.
See wikipedia for a more detailed explanation about stored procedures:
https://en.wikipedia.org/wiki/Stored_procedure
You can find details about the implementation and usage in the documentation of your database system (MySql, MariaDb, Oracle...)
When you decide to move logic to the database, you should use a version control system for databases like liquibase: https://www.liquibase.org/get-started/quickstart
You can write the changes to your database code in XML, JSON, or even YAML and check them in to your version control system (SVN, Git, ...). This way you have a history of the changes and can roll back to a previous version of your procedure if something goes wrong.
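For reference, a minimal Liquibase changelog for a stored procedure might look like the following (the changeset id, author, and procedure body are made-up examples):

```xml
<databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog">
    <changeSet id="create-find-customer-proc" author="example">
        <createProcedure>
            CREATE PROCEDURE find_customer(IN p_id INT)
            BEGIN
                SELECT * FROM customers WHERE id = p_id;
            END
        </createProcedure>
        <rollback>DROP PROCEDURE find_customer</rollback>
    </changeSet>
</databaseChangeLog>
```

The explicit rollback element is what makes the "roll back to a previous version" workflow mentioned above possible for procedures, since Liquibase cannot auto-generate a rollback for raw procedure bodies.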
You also asked why some people use stored procedures and others keep their queries in the code.
Stored procedures can encapsulate the query and provide an interface to the data. They can be faster than queries. That is good.
But there are also problems:
you distribute the business logic of your application between the database and the program code. It can really be troublesome if the logic is spread through all the technical layers of your application.
it is no longer so simple to switch from an Oracle database to MariaDB if you use specific features of the database system. You have to migrate or rewrite the procedures.
you have to integrate Liquibase or another such system into your build pipeline to keep track of your database changes.
So it depends on the project and its size which of the solutions is better.

Java API for parsing/recognition but not executing SQL against a DB

I am working on building a huge metadata dictionary and lookup application for workload-automation processes that are pretty much all straight Sybase and Oracle SQL scripts. What I need to do is make an inventory of every SQL statement in every script and extract things like the statement type (e.g. INSERT, UPDATE, CREATE TABLE, etc.), the tables written to, the tables read from, and so on.
I have a prototype parsing application that does an OK job using mostly regex. Programming something like that is tedious beyond any nightmare and, frankly, I'd rather be digging 3' ditches in the midst of summer than covering all the possible use cases. Plus I feel like I am reinventing the wheel. I thought there must be some API out there that, if you feed it just a SQL statement, will tell you (1) whether it will compile, and then extract the structure for you so that you can access it through normal POJO facilities, like getStatementType(), getFromTables(), etc.
The scripts I am dealing with are written in just about every coding style with no standard format, and every kind of statement form and syntax is represented (aggregates, subqueries, you name it).
So my question is: is there an API that, given a SQL statement (and perhaps which vendor dialect it is), parses it and returns its normalized particulars?
I know there are plugins for Eclipse that do similar things (the DDL editor and QuantumDB), so I thought I could borrow some of their API that deals with SQL recognition. Thoughts?
You can use ANTLR with one of the vendor-specific or standard SQL grammars for parsing.
You can use the ZQL parser for all types of SQL queries. ZQL can perform all the SQL operations covered in its tutorial.
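As a toy illustration of the kind of POJO-style API the question describes (getStatementType / getFromTables are hypothetical names), here is a naive pure-Java sketch. It only handles the simplest cases; a real inventory should use a proper parser such as an ANTLR SQL grammar or ZQL, exactly because regex approaches like this break on joins, aliases, and subqueries.

```java
import java.util.*;
import java.util.regex.*;

// Naive sketch only -- not a substitute for a real SQL parser.
public class SqlInventory {
    // Statement type is just the first keyword (INSERT, UPDATE, CREATE, ...).
    static String getStatementType(String sql) {
        return sql.trim().split("\\s+")[0].toUpperCase();
    }

    // Handles only the simple "FROM a, b" case; joins and subqueries need a real parser.
    static List<String> getFromTables(String sql) {
        Matcher m = Pattern.compile("\\bFROM\\s+(\\w+(?:\\s*,\\s*\\w+)*)",
                                    Pattern.CASE_INSENSITIVE).matcher(sql);
        if (!m.find()) return List.of();
        List<String> tables = new ArrayList<>();
        for (String t : m.group(1).split(",")) tables.add(t.trim());
        return tables;
    }

    public static void main(String[] args) {
        String sql = "select x from orders, customers where x = 1";
        System.out.println(getStatementType(sql));
        System.out.println(getFromTables(sql));
    }
}
```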

How to test that SQL scripts are standard SQL in JUnit tests?

Is there any way to test that SQL scripts contain standard SQL with Java/JUnit tests?
Currently we have SQL scripts for creating a database etc. in a Postgres DB, but when using HSQLDB everything fails. That's why I wonder whether any Java tools exist for testing whether SQL statements are standard SQL.
Or would it just be wise to create different sets of scripts per database vendor?
If so, is there a way to test whether a given script works with Postgres/HSQLDB?
The H2 database supports different compatibility modes, which may help you with Postgres testing. I've found that our SQL often contains functions which are not supported by H2, but you can create your own "stored procedures" which actually invoke a static Java method to work around this. If you want to support different database vendors, you should go down the vendor-specific-script route, unless you are only doing really basic queries.
If you have the available resources I would recommend setting up a fully fledged UAT environment which you can use to test against a live postgres database, as even seemingly minor db configuration differences can impact query plans in unexpected ways.
I've usually made a very simple Java wrapper that tests this kind of code using a localhost connection with some standard user/pass settings. Remember to use a temporary database or a known test database so your tests don't destroy anything important. The reason for the above is that I have needed database-specific (non-standard) features.
If you only want to test standard SQL for JUnit tests (syntax, selects, etc.), I would consider using an embedded SQL database in Java (usually memory-only). That way it is easy to test lots of things without needing to install a DB, and also without the risk of destroying other installations.
It sounds like you're looking for an SQL syntax parser and validator. The only Java SQL parser with which I'm familiar is Zql, but I've never actually used it.
A similar question was asked early last year, and the best answer there turned out to be writing your own parser with ANTLR.
The best tool for checking SQL statements for conformance to Standard SQL is HSQLDB 2.0. This is especially true with data definition statements. HSQLDB has been written to the Standard, as opposed to adopting bits of the Standard over several versions.
PostgreSQL has been slowly moving towards standard SQL, but it still has some "legacy" data types. HSQLDB allows you to define types with the CREATE TYPE statement for compatibility with other dialects. It also allows you to define functions in SQL or Java for the same purpose.
The best policy is to use standard SQL as much as possible, with user-defined types and functions to support an alternative database's syntax.
ZQL works fine with JUnit:
ZqlParser parser = new ZqlParser(inputStream);
try {
    ZStatement statement = parser.readStatement();
} catch (ParseException e) {
    // invalid SQL is caught here
}
Different vendors expose different additional features in their SQL implementation.
You may decide the set of databases to test. Then use http://dbunit.sourceforge.net to simplify the testing job.

Alternatives for storing data other than databases like MySQL

I have completed my project, an address book in core Java, in which the data is stored in a database (MySQL).
The problem I am facing is that when I run my program on another computer, the whole database has to be created again.
So please tell me about any alternative for storing my data without using database software like MySQL.
You can use an in-memory database such as HSQLDB, Derby (a.k.a JavaDB), H2, ..
All of those can run without any additional software installation and can be made to act like just another library.
I would suggest using an embeddable, lightweight database such as SQLite. Check it out.
From the features page (under the section "Suggested Uses For SQLite"):
Application File Format. Rather than using fopen() to write XML or some proprietary format into disk files used by your application, use an SQLite database instead. You'll avoid having to write and troubleshoot a parser, your data will be more easily accessible and cross-platform, and your updates will be transactional.
You could store the data in the filesystem or in memory (using serialization, etc.), which are simple alternatives to a DB. You can even use HSQLDB, which can run completely in memory.
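The serialization route can look like the following sketch: the whole address book is written to a single file that travels with the application, so no database needs to be set up on another computer. The Contact class and file location are just examples.

```java
import java.io.*;
import java.util.*;

// Sketch: persist an address book with plain Java serialization instead of MySQL.
public class AddressBook {
    static class Contact implements Serializable {
        final String name, phone;
        Contact(String name, String phone) { this.name = name; this.phone = phone; }
    }

    public static void main(String[] args) throws Exception {
        File file = File.createTempFile("addressbook", ".ser"); // example location
        file.deleteOnExit();

        List<Contact> contacts = new ArrayList<>();
        contacts.add(new Contact("Alice", "555-0100"));

        // Save: the whole list goes into one file, no database required.
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(file))) {
            out.writeObject(contacts);
        }

        // Load it back on any machine that has the file.
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(file))) {
            @SuppressWarnings("unchecked")
            List<Contact> loaded = (List<Contact>) in.readObject();
            System.out.println(loaded.get(0).name + " " + loaded.get(0).phone);
        }
    }
}
```

The trade-off, as noted in the other answers, is that you lose querying: any search beyond "load everything and loop" has to be written by hand.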
If your data is not that big, you could use a simple text file and store everything in it, then load it into memory. But this changes the way you modify and query the data.
Database software like MySQL provides an abstraction that saves implementation effort. If you wish to avoid it, you can think about rolling your own storage, such as XML or flat files. XML is the better choice of the two, as XML parsers and handlers are readily available. Still, putting your data in a custom database or flat files will not be manageable in the long run.
Why don't you explore SQLite? It is file-based, meaning you don't need to install it separately, and you still have standard SQL to retrieve or interact with the data. I think SQLite would be a better choice.
Just use Prevayler (prevayler.org). It is faster and simpler than using a database.
I assume from your question that you want some form of persistent storage in the local filesystem of the machine your application runs on. In addition, you need to decide how the data in your application is to be used, and its volume. Do you need a database? Are you going to be searching the data by different fields? Do you need a query language? Is the data small enough to fit into a simple in-memory data structure? How resilient does it need to be? The answers to these questions will help lead to the correct choice of storage. It could be that all you need is a simple CSV file, XML, or similar. There are a host of lightweight databases such as SQLite, Berkeley DB, JavaDB, etc., but whether or not you need the power of a database depends on your requirements.
A store that I'm using a lot these days is Neo4j. It's a graph database that is not only easy to use but is also written entirely in Java and can be embedded. I much prefer it to a SQL alternative.
In addition to the other answers about embedded databases: I have been working on an object database that directly serializes Java objects without the need for an ORM. Its name is Sofof, and I use it in my projects. It has many features, which are described on its website.
