Dynamic table name in Hibernate - java

I am developing an application in Java that uses Hibernate to connect to MySQL database.
My application manages students of different batches. If a student joined in 2010, they belong to the 2010 batch, so whenever the administrators of the application create a new batch, my application has to create new tables for that batch. The schema is much the same as the old tables already in the database, but the table name changes. How do I accomplish this using Hibernate?
How do I create the XML files and the classes required dynamically?

If I understood your problem correctly, I think you want to look at Hibernate Shards. Note that this is an advanced feature, unsupported and not really tested (nor maintained), so use it at your own risk. You may want to pay special attention to the "Shard Selection Strategy" section:
http://docs.jboss.org/hibernate/stable/shards/reference/en/html_single/#shards-strategy-shardselection
From the documentation:
We expect many applications will want to implement attribute-based sharding, so for our example application that stores weather reports let's shard reports by the continents on which the reports originate
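For the batch-per-year case in the question, an attribute-based shard selection strategy would look roughly like the sketch below. This is only an illustration: the Student class and getBatchYear() method are made-up domain names, while ShardSelectionStrategy and ShardId come from the (unmaintained) Hibernate Shards API, so verify them against the version you actually use.

import org.hibernate.shards.ShardId;
import org.hibernate.shards.strategy.selection.ShardSelectionStrategy;

// Hedged sketch of an attribute-based shard selection strategy for the
// batch-per-year idea. Student/getBatchYear() are hypothetical domain names;
// ShardSelectionStrategy and ShardId are from the Hibernate Shards API.
public class BatchShardSelectionStrategy implements ShardSelectionStrategy {

    public ShardId selectShardIdForNewObject(Object obj) {
        if (obj instanceof Student) {
            // e.g. the 2010 batch goes to shard 0, 2011 to shard 1, and so on
            int year = ((Student) obj).getBatchYear();
            return new ShardId(year - 2010);
        }
        throw new IllegalArgumentException("No shard mapping for " + obj);
    }
}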
But as the others said: think twice before splitting your data. Do it only if you expect really large volumes of data. A couple million records are not really that much.

Related

What happens when changing ORMLite database structure within an app

I'm using an ORMLite database in my Android application, and now I want to change the whole structure of the database (renaming tables, adding/removing columns, changing relations, etc.).
The question is: would any conflicts happen on devices where my app was previously installed? In other words, when updating the app, does ORMLite leave any trace of the previous install that would conflict with the new one? So if I have a table named parent and I rename it to guardian, will I end up with two tables in the new release?
If the answer is no, then why is there something like a database version?
And if the answer is yes, how would I drop a table that no longer exists in my application? And can I just use the same class name with a different table-name annotation to override the previous table?
I have not used ORMLite specifically, but it's just an ORM, which means it won't decide whether a table should be dropped based on some condition. That is something the client has to do explicitly, based on its business rules. In Android there are specific ways you can upgrade the current database schema without dropping existing tables - https://developer.android.com/reference/android/database/sqlite/SQLiteOpenHelper.html
But upgrading a database schema on SQLite has a lot of limitations, i.e. many operations are not supported, unlike in a full-blown DBMS. That's part of the reason why SQLite is so light. Generally, during your development cycle, try to settle on a stable database schema as early as possible, one that only needs minor additions later (SQLite specifically does not support removing columns, etc.). Once you are in production and you don't want to play with users' data, implementing upgrade logic is the best bet you've got.
But if you still want to drop a table explicitly, I see there are APIs for that in ORMLite -
http://ormlite.com/javadoc/ormlite-core/com/j256/ormlite/table/TableUtils.html
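A minimal sketch of how that could fit into an Android upgrade path, assuming you bump the database version and your helper extends OrmLiteSqliteOpenHelper. The Parent/Guardian classes and the database name are just the example from the question; adjust to your own model.

import java.sql.SQLException;
import android.content.Context;
import android.database.sqlite.SQLiteDatabase;
import com.j256.ormlite.android.apptools.OrmLiteSqliteOpenHelper;
import com.j256.ormlite.support.ConnectionSource;
import com.j256.ormlite.table.TableUtils;

// Sketch: drop the obsolete table and create its replacement when the
// database version increases. Parent and Guardian are the example entities
// from the question.
public class DatabaseHelper extends OrmLiteSqliteOpenHelper {

    public DatabaseHelper(Context context) {
        super(context, "app.db", null, 2);   // version bumped from 1 to 2
    }

    @Override
    public void onCreate(SQLiteDatabase db, ConnectionSource connectionSource) {
        try {
            TableUtils.createTable(connectionSource, Guardian.class);
        } catch (SQLException e) {
            throw new RuntimeException("Could not create database", e);
        }
    }

    @Override
    public void onUpgrade(SQLiteDatabase db, ConnectionSource connectionSource,
                          int oldVersion, int newVersion) {
        try {
            if (oldVersion < 2) {
                // drop the table that no longer exists in the new model
                TableUtils.dropTable(connectionSource, Parent.class, true);
                // and create its replacement
                TableUtils.createTable(connectionSource, Guardian.class);
            }
        } catch (SQLException e) {
            throw new RuntimeException("Could not upgrade database", e);
        }
    }
}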

How to join a record set returned from a web service with one of your SQL tables

I thought about this solution: get the data from the web service, insert it into a table, and then join it with the other table, but that will affect performance, and afterwards I must delete all that data.
Are there other ways to do this?
You don't return a record set from a web service. HTTP knows nothing about your database or result sets.
HTTP requests and responses are strings. You'll have to parse out the data, turn it into queries, and manipulate it.
Performance depends a great deal on things like having proper indexes on columns in WHERE clauses, the nature of the queries, and a lot of details that you don't provide here.
This sounds like a classic case of "client versus server". Why don't you write a stored procedure that does all that work on the database server? You are describing a lot of work: bringing a chunk of data to the middle tier, manipulating it, putting it back, and then deleting it. I'd figure out how to have the database do it if I could.
No, you don't need to save anything into the database; there are a number of ways to convert XML to a table without saving it into the database.
For example, in an Oracle database you can use XMLTable/XMLType/XQuery/dbms_xml
to convert the XML result from the web service into a table and then use it in your queries.
For example:
If you use Oracle 12c you can use JSON_QUERY: Oracle 12c JSON
XMLTable: oracle-xmltable-tutorial
this week's discussion about converting XML into table data
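From Java, for instance, you could hand the raw XML string from the web service straight to a query that uses XMLTABLE and join it to your table without ever persisting it. This is only a sketch: the orders table, its columns, and the XML shape are invented for illustration, and for large payloads you would bind a CLOB instead of a plain string.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Sketch: join web-service XML to a regular table via XMLTABLE, no staging table.
// The orders table, its columns, and the XML structure are hypothetical.
public class XmlJoinExample {

    public void joinXmlWithTable(Connection connection, String xmlFromWebService)
            throws SQLException {
        String sql =
            "SELECT o.order_id, o.customer, x.qty "
          + "FROM XMLTABLE('/orders/order' PASSING XMLTYPE(?) "
          + "       COLUMNS id  NUMBER PATH 'id', "
          + "               qty NUMBER PATH 'qty') x "
          + "JOIN orders o ON o.order_id = x.id";

        try (PreparedStatement ps = connection.prepareStatement(sql)) {
            ps.setString(1, xmlFromWebService);   // use a CLOB bind for large documents
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // consume the joined rows here
                }
            }
        }
    }
}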
It is common to think about applications having a three-tier structure: user interface, "business logic"/middleware, and backend data management. The idea of pulling records from a web service and (temporarily) inserting them into a table in your SQL database has some advantages, as the "join" you wish to perform can be quickly implemented in SQL.
Oracle (like other SQL DBMSs) features temporary tables which are optimized for just such tasks.
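If you did go the staging route, an Oracle global temporary table keeps the cleanup automatic. A rough sketch, assuming a hypothetical ws_rows temporary table created once up front and a Row holder class for the parsed web-service records:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.List;

// Sketch of the staging approach with a global temporary table, created once as e.g.:
//   CREATE GLOBAL TEMPORARY TABLE ws_rows (id NUMBER, name VARCHAR2(100))
//   ON COMMIT DELETE ROWS;
// Row, ws_rows, and my_table are hypothetical names for illustration.
public class TempTableJoin {

    public void stageAndJoin(Connection conn, List<Row> rowsFromWebService) throws SQLException {
        conn.setAutoCommit(false);

        // stage the parsed web-service records
        try (PreparedStatement ins = conn.prepareStatement(
                "INSERT INTO ws_rows (id, name) VALUES (?, ?)")) {
            for (Row r : rowsFromWebService) {
                ins.setLong(1, r.id);
                ins.setString(2, r.name);
                ins.addBatch();
            }
            ins.executeBatch();
        }

        // join against the permanent table
        try (PreparedStatement join = conn.prepareStatement(
                "SELECT t.*, w.name FROM my_table t JOIN ws_rows w ON w.id = t.id");
             ResultSet rs = join.executeQuery()) {
            while (rs.next()) {
                // consume the joined rows
            }
        }

        // ON COMMIT DELETE ROWS clears the staged rows automatically at commit,
        // so there is nothing to clean up by hand.
        conn.commit();
    }

    public static class Row {
        final long id;
        final String name;
        public Row(long id, String name) { this.id = id; this.name = name; }
    }
}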
However, this might not be the best approach given your concerns about performance. It's a guess that your "middleware" layer is written in Java, given the tags placed on the question, and the lack of any explicit description suggests you may be attempting a two-tier design, where user-interface programs connect directly with the backend data-management resources.
Given your apparent investment in Oracle products, you might find it worthwhile to incorporate Oracle Middleware elements in your design. In particular Oracle Fusion Middleware promises to enable "data integration" between web services and databases.

What database architecture is a good choice for this application?

I have a servlet-based application that runs in a tomcat7 environment.
This application needs to manage users' files in such a way that they can be accessed in many ways and through different classification methods (for instance time-oriented classification and search, keywords, tags, author, and so on).
So I have a multidimensional search space and I need to organize a database-based grouping system.
Let's focus on a single, specific aspect.
Any user can upload his own files, so I'll have a path where these files are saved.
Then I also need a place to store the information associated with the files.
I thought it would be good to separate the files from their associated information (title, ...) and then create a third entity: a small string that uniquely identifies both the info and the file.
This way, once I know the file id I can get both the information (which is stored in a specific file) and the file itself, but I can save this id in any classification table without copying anything heavy.
So if I have the file id (fid) I can get the file and the information, and when I have to associate an object with a file, for example, I can simply associate that object with the fid.
Then every user must have their own table that collects the various fids of the files they uploaded.
Therefore I have one table for each user. Then for any other classification dimension I will have N tables (where N is the size of the dimension). So, for instance, if I want to classify files by keyword, I'll need N tables, one for each specific keyword (it would be too inefficient to search through all the users' files every time I want the files associated with the key AGAA).
So if I need to show the 50 most recent files associated with the keyword "AGAAA", I need a table for AGAAA, and so on.
This is crazy: as the number of users increases I get exponentially more tables.
I've heard about the table limit per database in MySQL.
Until now I've been using MySQL (MariaDB) with connection pooling.
I thought about splitting tables of a different "nature" (i.e. those for keywords, those for time, and so on) into different databases (also in order to organize the contents more clearly). But with connection pooling I need to declare the database name in the resource definition, so for different databases I would need different pools.
Now, the questions.
Using pooling, must I create a different pool resource for each different database I access?
If yes, is it good practice to use the same database for all the different kinds of tables?
If no, how can I change the database at runtime?
I thought I could manage different tables with different database systems; for example, I could use SQLite to manage the classification tables, MySQL to manage user interaction, and so on. Is this good practice?
Is SQLite in general faster than server-based databases in multi-user applications?
Can I use connection pooling with SQLite? I mean, what are SQLite connections if SQLite has no server? And does it make sense to think about connection pooling?
What database architecture do you suggest for this kind of problem?
Thanks
Why would each user or keyword need its own table? Tables can have many rows.
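A minimal sketch of what that looks like with ordinary rows instead of per-user/per-keyword tables: one files table, one file_keywords join table, and the "50 most recent files for AGAAA" lookup becomes a single parameterized query. All table and column names here are invented for illustration.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Sketch: one table for all files and one join table for all keywords,
// instead of one table per user or per keyword. Hypothetical schema:
//   files(fid, user_id, title, uploaded_at)
//   file_keywords(fid, keyword)
public class RecentFilesByKeyword {

    public void printRecent(Connection conn, String keyword) throws SQLException {
        String sql =
            "SELECT f.fid, f.title, f.uploaded_at "
          + "FROM files f "
          + "JOIN file_keywords k ON k.fid = f.fid "
          + "WHERE k.keyword = ? "
          + "ORDER BY f.uploaded_at DESC "
          + "LIMIT 50";

        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, keyword);   // e.g. "AGAAA"
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("fid") + "  " + rs.getString("title"));
                }
            }
        }
    }
}

With indexes on file_keywords(keyword, fid) and files(uploaded_at), this kind of query stays fast as users and keywords grow, without multiplying tables.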
Using pooling, must I create a different pool resource for each different database I access?
Your question has multiple meanings, but generally you create one pool for one application, and it manages itself.
If yes, is it good practice to use the same database for all the different kinds of tables? If no, how can I change the database at runtime?
Generally one would use one database for an application.
I thought I could manage different tables with different database systems; for example, I could use SQLite to manage the classification tables, MySQL to manage user interaction, and so on. Is this good practice?
You could, but that would be insane.
Is SQLite in general faster than server-based databases in multi-user applications?
Absolutely not. SQLite can only have one writer at a time, though it is fine for many readers.
Can I use connection pooling with SQLite? I mean, what are SQLite connections if SQLite has no server? And does it make sense to think about connection pooling?
I don't know, but you shouldn't use SQLite if you expect multiple concurrent users writing / uploading to the database.
What database architecture do you suggest for this kind of problem?
I would suggest you use a content repository like Apache JackRabbit, or a search server like Apache Solr.

Does a simple document-based database exist?

Is there a database out there that I can use for a really basic project that stores the schema in terms of documents representing an individual database table?
For example, if I have a schema made up of 5 tables (one, two, three, four and five), then the database would be made up of 5 documents in some sort of "simple" encoding (e.g. json, xml etc)
I'm writing a Java based app so I would need it to have a JDBC driver for this sort of database if one exists.
CouchDB, and you can use it with Java.
DBSlayer is also lightweight, with a MySQL adapter. I guess this will make life a little easier.
I haven't used it for a bit, but HyperSQL has worked well in the past, and it's quite quick to set up:
"... offers a small, fast multithreaded and transactional database engine which offers in-memory and disk-based tables and supports embedded and server modes."
CouchDB works well (#zengr). You may also want to look at MongoDB.
Comparing Mongo DB and Couch DB
Java Tutorial - MongoDB
Also check http://jackrabbit.apache.org/ ; it's not quite a DB, but it should also work.

Java Persistence frameworks

I am in need of some further information.
I am developing a small application which will be interacting with a PHP web application. The media server we are integrating with is extensible in Java.
I need very little access to the database inside the plugin we are developing: I only need to view rows in about 10% of the tables, and I only need to update data in one of the tables.
The schema as a whole is littered with foreign keys, but currently (and there is little chance this changes in the future) I do not need to modify any other information in the database except for the one column (which is not a foreign key).
I don't really want to model all of these relationships -- as there is no need to.
What is my best bet? Will Hibernate make me map all of these domain objects? Is myBatis (formerly iBATIS) a better choice, as the people I am handing off to are more comfortable with SQL? Does it matter which persistence framework I choose -- i.e. are they all going to make me model each of the tables?
These are mySQL InnoDB tables if it makes any difference.
Hibernate only requires you to map those items which you want to use within the context of your Java application. As a result, you can have objects mapped only to those tables to which you want access from the Java side.
A few caveats for the process though:
You will have to model all objects/relationships for all tables with which a given entity table will interact.
Things could get messy with two programs hitting the database at the same time. While Hibernate accounts for and handles this with locking, such things tend to fall by the wayside in PHP.
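So a single annotated class covering just the table and column you actually touch can be enough. A rough sketch, where the media_item table and status column are stand-ins for whatever your one updatable table really is:

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

// Sketch: map only the one table (and the one column you update),
// ignoring the rest of the schema and its foreign keys.
// Table and column names here are hypothetical.
@Entity
@Table(name = "media_item")
public class MediaItem {

    @Id
    @Column(name = "id")
    private Long id;

    @Column(name = "status")   // the single non-foreign-key column we update
    private String status;

    public Long getId() { return id; }
    public String getStatus() { return status; }
    public void setStatus(String status) { this.status = status; }
}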
I can't really speak about Hibernate, but myBatis won't make you model anything - just create a POJO that contains the properties you care about, then write mappings (in just straight SQL) that map whatever columns from whatever tables you want into your POJO.
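A rough sketch of that myBatis style, using an annotation-based mapper; the user table, its columns, and these names are invented for illustration:

import java.util.List;
import org.apache.ibatis.annotations.Param;
import org.apache.ibatis.annotations.Select;
import org.apache.ibatis.annotations.Update;

// Sketch: a plain POJO plus a mapper interface with straight SQL.
// The user table, its columns, and these names are hypothetical.
public interface UserMapper {

    // myBatis maps the selected columns onto the POJO's properties by name.
    @Select("SELECT id, email FROM user WHERE id = #{id}")
    User selectUser(@Param("id") long id);

    @Select("SELECT id, email FROM user WHERE email LIKE #{pattern}")
    List<User> searchByEmail(@Param("pattern") String pattern);

    // The single write we actually need: update one non-foreign-key column.
    @Update("UPDATE user SET email = #{email} WHERE id = #{id}")
    int updateEmail(@Param("id") long id, @Param("email") String email);

    // Minimal POJO holding only the properties we care about.
    class User {
        private long id;
        private String email;
        public long getId() { return id; }
        public void setId(long id) { this.id = id; }
        public String getEmail() { return email; }
        public void setEmail(String email) { this.email = email; }
    }
}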
With Hibernate, you only need to model the objects you will be working with, and the ddl2hbm tool may be able to generate the Java classes for you based on the existing database, depending on whether there are foreign keys linking to models you will not be using.
