I am learning about DynamoDB, and one of the benefits of NoSQL I have read about is that the data does not need to conform to a standardized schema. I was wondering whether it is possible in Java to support inserting into a DynamoDB table with an unknown number and type of attributes. Is there any way in the DynamoDBMapper or JPA that supports this? For example, reading from a spreadsheet that contains different columns depending on the sheet, but is guaranteed to have two specific columns (hash and range) regardless.
Thank you.
Is there any way in the DynamoDBMapper or JPA that supports this
JPA (or any object-mapping framework in general) maps strongly typed objects into DynamoDB, so the database provides more flexibility than the object framework itself; no issue there.
As long as you work with fixed objects, the DynamoDBMapper seems to be a good choice.
Spreadsheet that contains different columns depending on the sheet
Let's assume you don't know the sheet columns upfront and you need to store 'any column' that you encounter.
IMHO you would have no easy way to map 'any column' into strongly typed Java objects; for that use case, the best fit I see is a key/value map.
As far as I know, you cannot store a Map attribute with the DynamoDBMapper (please correct me if I am wrong), so for working with a flexible schema I'd skip the JPA or mapper layer completely.
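If it helps, here is a minimal sketch using the low-level AWS SDK for Java (v1) client instead of the mapper; each item is just a map of attribute names to values, so arbitrary columns can be written. The table name, the key attribute names ("id", "sortKey") and treating every extra column as a string are assumptions for illustration only:

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;

import java.util.HashMap;
import java.util.Map;

public class FlexibleItemWriter {

    private final AmazonDynamoDB client = AmazonDynamoDBClientBuilder.defaultClient();

    /**
     * Inserts one spreadsheet row into DynamoDB. Only the hash and range
     * attributes are known in advance; everything else comes from whatever
     * columns the current sheet happens to have.
     */
    public void putRow(String tableName,
                       String hashValue,
                       String rangeValue,
                       Map<String, String> extraColumns) {

        Map<String, AttributeValue> item = new HashMap<>();
        // The two guaranteed columns (hypothetical attribute names).
        item.put("id", new AttributeValue().withS(hashValue));
        item.put("sortKey", new AttributeValue().withS(rangeValue));

        // Any other column found in the sheet is stored as a string attribute.
        for (Map.Entry<String, String> column : extraColumns.entrySet()) {
            item.put(column.getKey(), new AttributeValue().withS(column.getValue()));
        }

        client.putItem(tableName, item);
    }
}

The spreadsheet-reading side would simply feed each row's extra cells into that map.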
Related
I have a SaaS product, which is built with Spring MVC and Hibernate. SaaS products generally allow users to customize the product, for example by adding extra fields to a table. So I want to give users the flexibility to create custom fields in the tables for themselves. Please provide all the viable solutions to achieve this. Thank you so much for your help.
I'm guessing you're trying to back this with a relational database. The primary problem is that relational databases store things in tables, and tables don't really handle free-form data well.
So one solution is to use a flexible document structure like XML (and perhaps ditch the database), but databases have features which are nice, so let's also consider the approaches that keep the database.
You could create a "custom field" table which would have columns (composite primary key) for:
ExtendedTable
ColumnName
but you'd also have to store the data somewhere:
(ExtendedKey)
DataItem
And now we get into the really nasty bits. How would you apply constraints to this data? I mean, what would the type of a DataItem be? A general solution would be quite complex (being a kind of free-form database). Hopefully you could limit the solution to solve only the problems you require solved.
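For illustration only, a rough JDBC sketch of what such a pair of tables might look like; every name is made up, the in-memory H2 URL is just a placeholder, and DataItem is stored as plain text, which is exactly the typing problem mentioned above:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;

public class CustomFieldSchema {

    public static void main(String[] args) throws Exception {
        // Connection details are placeholders.
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:demo")) {

            try (Statement ddl = conn.createStatement()) {
                // Which extra columns exist for which table.
                ddl.execute("CREATE TABLE custom_field ("
                        + " extended_table VARCHAR(64) NOT NULL,"
                        + " column_name    VARCHAR(64) NOT NULL,"
                        + " PRIMARY KEY (extended_table, column_name))");

                // The values themselves, one row per (record, column).
                ddl.execute("CREATE TABLE custom_field_value ("
                        + " extended_table VARCHAR(64)  NOT NULL,"
                        + " extended_key   BIGINT       NOT NULL,"
                        + " column_name    VARCHAR(64)  NOT NULL,"
                        + " data_item      VARCHAR(4000),"
                        + " PRIMARY KEY (extended_table, extended_key, column_name))");
            }

            // Store one custom value for row 42 of a hypothetical CUSTOMER table.
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO custom_field_value VALUES (?, ?, ?, ?)")) {
                ps.setString(1, "CUSTOMER");
                ps.setLong(2, 42L);
                ps.setString(3, "FAVOURITE_COLOUR");
                ps.setString(4, "green");
                ps.executeUpdate();
            }
        }
    }
}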
Another approach is to use a single "extra" column that contains an XML record which embeds its own "column and value" extensions, but if you wanted to display a table of the data efficiently, you'd have to parse out every XML document in every field, which is not ideal.
Neither one of these approaches will work well with the existing SQL query language, so you'll then start building your own query language.
I suggest you go back and look at real data requirements, instead of sweeping them under the table with a "and anything else one might want" set of columns on your table.
Your requirement is a use case best suited to NoSQL databases (like MongoDB).
Dynamically creating relational database tables and columns (modifying schemas) upon user requests in an application is not a best practice, as this involves DDL operations, which are very powerful; if you don't handle them carefully, the whole application's database can end up in an inconsistent state.
I have a requirement to store CSV data in an Oracle database for later retrieval by dynamic query scripts. The data needs to be stored such that any column of the CSV data can be queried using SQL and performance is key (some CSV files are 100k+ lines).
The content of the CSV files (number of columns, headings, data types) is not known ahead of time and the system needs to be able to handle multiple file structures (which are added to a config file so the system knows how to read them, by people who don't know SQL).
My current solution, in order to avoid an EAV model, is to have my code create new tables every time a new CSV structure is added to the config file. I'm curious to know if there is a better way to achieve what I'm trying to do. I'm not particularly fond of having my code create new tables in production at run-time.
The system is written in Groovy, in case it matters.
I am inclined to go with your current solution, which is a separate table for each type. Somehow, I'm most comfortable with storing data in well-defined tables with well-defined types.
An EAV (entity-attribute-value) solution is also viable. With 100k rows of data, the EAV solution should perform pretty well, unless you have lots of tables. One downside is the types of the columns. Without a lot of extra work, you are pretty much limited to strings for all the values.
Oracle does offer another possibility, which is an XML solution. This can give you the flexibility of dynamic column names along with the "simplicity" of not having to define a separate table for each one. You can read more about it in the documentation here.
It comes down to what you want to model. If you need to handle ad hoc queries against any of the columns in the CSV file, then I guess you need to model them all as Oracle columns. If you only need to retrieve a whole line based on a particular key, then you could model it as two columns, the key and the line, never mind that such a thing would not be in first normal form.
When you create an EAV model, you are making a flexible system that allows additional columns to be added or removed easily. Oracle is already a flexible system that allows additional columns to be added or removed easily. They've just put more thought into locking, performance, scalability and tool support than your naive EAV model might have.
Overall, I think what you are probably doing is best. It's not an easy problem and it's not exactly what Oracle was designed for so you might have issues with statistics and which indexes to create and so on.
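As a sketch of the "one table per CSV structure" route (not how the original system is actually written): the header line drives a CREATE TABLE statement and every column is typed as text; the connection string, file path and names are placeholders, and real code would validate the identifiers before putting them into DDL:

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import java.util.List;

public class CsvTableCreator {

    /**
     * Creates one table per CSV structure, with every column typed as text.
     * Table and column names are taken straight from the file, so in a real
     * system they would need validation/sanitising first.
     */
    public static void createTableFor(Connection conn, String tableName, Path csvFile) throws Exception {
        List<String> lines = Files.readAllLines(csvFile);
        String[] headers = lines.get(0).split(",");

        StringBuilder ddl = new StringBuilder("CREATE TABLE " + tableName + " (");
        for (int i = 0; i < headers.length; i++) {
            if (i > 0) {
                ddl.append(", ");
            }
            ddl.append(headers[i].trim()).append(" VARCHAR2(4000)");
        }
        ddl.append(")");

        try (Statement stmt = conn.createStatement()) {
            stmt.execute(ddl.toString());
        }
    }

    public static void main(String[] args) throws Exception {
        // Placeholder connection string and file path.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//localhost:1521/XE", "user", "pass")) {
            createTableFor(conn, "SALES_2024", Paths.get("sales_2024.csv"));
        }
    }
}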
I have a HashMap with key/value pairs.
The keys in this map are column names of a table. Now I want to insert the values into a table, say users_table;
I should be able to match each key to a column name and, if both are the same, insert that value into the table.
What I am doing now is writing a PreparedStatement with all the columns and then passing the HashMap values as parameters using the setter methods of the PreparedStatement.
To do this I need to know all the columns of the table, which is tedious work, as there could be any number of columns and the step would have to be repeated for any number of tables.
Any ideas on how to do this? Thanks in advance.
First off, use a LinkedHashMap to preserve the order of the columns. This will make a difference when iterating over the map to assign column names and then values.
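For example, a small sketch that builds the INSERT statement and the parameter bindings straight from the map; it assumes the map keys really are valid column names of the target table:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.util.LinkedHashMap;

public class MapInserter {

    /**
     * Builds "INSERT INTO users_table (col1, col2, ...) VALUES (?, ?, ...)"
     * from the map's keys and binds the values in the same order.
     */
    public static int insert(Connection conn, String table, LinkedHashMap<String, Object> row) throws Exception {
        StringBuilder columns = new StringBuilder();
        StringBuilder placeholders = new StringBuilder();
        for (String column : row.keySet()) {
            if (columns.length() > 0) {
                columns.append(", ");
                placeholders.append(", ");
            }
            columns.append(column);
            placeholders.append("?");
        }

        String sql = "INSERT INTO " + table + " (" + columns + ") VALUES (" + placeholders + ")";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            int index = 1;
            for (Object value : row.values()) {
                ps.setObject(index++, value);   // lets the driver pick the SQL type
            }
            return ps.executeUpdate();
        }
    }
}

Usage would be something like row.put("username", "alice"); row.put("email", "a@example.com"); insert(conn, "users_table", row);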
I'm not entirely sure what you're asking, but you're hinting at what is called Object Relational Mapping (ORM). Simply put, it's a way to map database tables to plain old Java objects (POJO). Though there's a lot more to it than that.
If you're interested in representing your database tables as objects, you should look into Hibernate, which is a popular Java ORM API.
Otherwise, create and keep to a standard that is uniform across both your database and your Java project and you'll be fine.
Edit:
If I understand your question a little better, you're having issues with knowing the names of the columns? This is something you have to know; there's not going to be an easy, dynamic, or efficient way of getting that information.
One example of setting that information is storing the column names in a String array of a class that represents your table. You can then access the array and iterate over it when saving to a database.
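For instance, a compact sketch of that idea, with the column names declared once in a class that represents the table (all names are hypothetical):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.util.Collections;
import java.util.Map;

public class UsersTable {

    public static final String NAME = "users_table";

    // Declared once, reused for every save.
    public static final String[] COLUMNS = { "id", "username", "email" };

    /** Binds values from the map in the order declared by COLUMNS. */
    public static void save(Connection conn, Map<String, Object> values) throws Exception {
        String columnList = String.join(", ", COLUMNS);
        String placeholders = String.join(", ", Collections.nCopies(COLUMNS.length, "?"));
        String sql = "INSERT INTO " + NAME + " (" + columnList + ") VALUES (" + placeholders + ")";

        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            for (int i = 0; i < COLUMNS.length; i++) {
                ps.setObject(i + 1, values.get(COLUMNS[i]));
            }
            ps.executeUpdate();
        }
    }
}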
And finally, if you feel like doing some reading, check out my answer (Store nested Pojo Objects as individuall Objects in Database). I go quite in-depth on how I manage Database to Java and vice versa.
I am trying to create an application in Java which pulls records out of the database and maps them to objects. It does that without knowing what the schema of the database looks like. All I want to do is fetch all rows from all tables and store them somewhere. There could be a thousand tables with thousands of records each. The application doesn't know the name of any table or attribute; it should map "on the fly". I looked at Hibernate but it doesn't give me what I want for this app. I don't want to create hard-coded XML files and classes for mapping. Any ideas how I can accomplish this?
Thanks
Oracle has a bunch of data dictionary views for metadata.
ALL_TABLES, ALL_TAB_COLUMNS would be first places to start. Then you'd build ad-hoc queries based on what you get out of there. Not sure whether you have to deal with all data types (dates, blobs, spatial, user-defined....).
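A hedged JDBC sketch of that idea (connection details are placeholders; it walks USER_TABLES, the current user's slice of ALL_TABLES, and reads column names from the result-set metadata rather than from ALL_TAB_COLUMNS):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.Statement;

public class SchemaWalker {

    public static void main(String[] args) throws Exception {
        // Connection details are placeholders.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//localhost:1521/XE", "user", "pass")) {

            // 1. Discover the tables owned by the current user.
            try (Statement stmt = conn.createStatement();
                 ResultSet tables = stmt.executeQuery("SELECT table_name FROM user_tables")) {

                while (tables.next()) {
                    String tableName = tables.getString("table_name");

                    // 2. Pull every row; column names come from the result set metadata,
                    //    so nothing about the schema needs to be hard-coded.
                    try (PreparedStatement ps = conn.prepareStatement("SELECT * FROM " + tableName);
                         ResultSet rows = ps.executeQuery()) {

                        ResultSetMetaData meta = rows.getMetaData();
                        while (rows.next()) {
                            for (int i = 1; i <= meta.getColumnCount(); i++) {
                                // Store or print each value here.
                                System.out.println(tableName + "." + meta.getColumnName(i)
                                        + " = " + rows.getObject(i));
                            }
                        }
                    }
                }
            }
        }
    }
}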
Not sure what you mean by "store them somewhere". If you start thinking CSV or XML files, you'll need to escape various characters from VARCHAR2 columns.
If you are looking for some generic extract/unload routines, you should look at what is already available in the database or open-source/commercially.
MyBatis provides a pretty simple way to map data results to objects and back, maybe check that out?
http://code.google.com/p/mybatis/
Not to be flip, but for this task you might want to check out Ruby on Rails and its ActiveRecord approach.
I have an application that needs to support a multilingual interface, five languages to be exact. For the main part of the interface the standard ResourceBundle approach can be used to handle this.
However, the database contains numerous tables whose elements contain human readable names, descriptions, abstracts etc. It needs to be possible to enter each of these in all five languages.
While I suppose I could simply have fields on each table like
NameLang1
NameLang2
...
I feel that that leads to a significant amount of largely identical code when writing the beans that represent each table.
From a purely object-oriented point of view, however, the solution is simple. Each class simply has a Text object that contains the relevant text in each of the languages. This is further helpful in that only one of the languages is mandated; the others have fallback rules (e.g. if language 4 is missing, return language 2, which falls back to language 1, which is mandatory).
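A minimal sketch of such a Text object; only the 4 → 2 → 1 fallback rule comes from the example above, the rest of the chain is invented for illustration:

import java.util.EnumMap;
import java.util.Map;

/** Holds one piece of text in up to five languages, with fallback rules. */
public class Text {

    public enum Language { LANG1, LANG2, LANG3, LANG4, LANG5 }

    // Hypothetical fallback chain: 5 -> 3 -> 1, 4 -> 2 -> 1; language 1 is mandatory.
    private static final Map<Language, Language> FALLBACK = new EnumMap<>(Language.class);
    static {
        FALLBACK.put(Language.LANG5, Language.LANG3);
        FALLBACK.put(Language.LANG4, Language.LANG2);
        FALLBACK.put(Language.LANG3, Language.LANG1);
        FALLBACK.put(Language.LANG2, Language.LANG1);
    }

    private final Map<Language, String> values = new EnumMap<>(Language.class);

    public void set(Language language, String value) {
        values.put(language, value);
    }

    /** Returns the text in the requested language, walking the fallback chain if needed. */
    public String get(Language language) {
        Language current = language;
        while (current != null) {
            String value = values.get(current);
            if (value != null) {
                return value;
            }
            current = FALLBACK.get(current);  // null once LANG1 has been tried
        }
        throw new IllegalStateException("Mandatory language LANG1 is missing");
    }
}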
Unfortunately, mapping this back to a relational database, means that I wind up with a single table that some 10-12 other tables FK to (some tables have more than one FK to it in fact).
This approach seems to work, and I've been able to map the data to POJOs with Hibernate. About the only thing you can't do is map from a Text object back to its parent (since you have no way of knowing which table you should link to), but then there is hardly any need to do that.
So, overall this seems to work but it just feels wrong to have multiple tables reference one table like this. Anyone got a better idea?
If it matters I'm using MySQL...
I had to do that once... multilingual text for some tables... I don't know if I found the best solution but what I did was have the table with the language-agnostic info and then a child table with all the multilingual fields. At least one record was required in the child table, for the default language; more languages could be added later.
With Hibernate you can map the info from the child tables as a Map and get the info for the language you want, implementing the fallback on your POJO like you said. You can have different getters for the multilingual fields that internally call the fallback method to get the appropriate child object for the needed language and then just return the required field.
This approach uses more tables (one extra table for every table that needs multilingual info), but the performance is much better, as is the maintenance, I think...
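A rough JPA-annotation sketch of that child-table mapping with a fallback getter; the entity, table and column names are invented, and the real mapping could of course be done in hbm.xml instead:

import javax.persistence.CollectionTable;
import javax.persistence.Column;
import javax.persistence.ElementCollection;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.JoinColumn;
import javax.persistence.MapKeyColumn;
import java.util.HashMap;
import java.util.Map;

/** Hypothetical "product" table with a child table holding one row per language. */
@Entity
public class Product {

    @Id
    private Long id;

    // language-agnostic columns would go here...

    @ElementCollection
    @CollectionTable(name = "product_translation", joinColumns = @JoinColumn(name = "product_id"))
    @MapKeyColumn(name = "language_code")
    @Column(name = "name")
    private Map<String, String> names = new HashMap<>();

    private static final String DEFAULT_LANGUAGE = "en";

    /** Returns the name in the requested language, falling back to the default one. */
    public String getName(String languageCode) {
        String name = names.get(languageCode);
        return name != null ? name : names.get(DEFAULT_LANGUAGE);
    }
}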
The standard translation approach as used, for example, in gettext is to use a single string to describe the concept and make a call to a translate method which translates to the destination language.
This way you only need to store a single string in the database (the canonical representation) and then call the translate method in your application to get the translated string. No FKs and total flexibility, at the cost of a little runtime performance (and maybe a bit more maintenance trouble, but with some thought there's no need for maintenance to become a problem in this scenario).
The approach I've seen in an application with a similar problem is to use a "text id" column to store a reference, and to have a single table with all the translations. This also provides some flexibility in reusing the same keys to reduce the amount of required translation, which is an expensive part of the project.
It also provides a good separation between the data and the translations, which in my opinion are more of a UI thing.
If it is the case that the strings you require are not that many after all, then you can just load them all in memory once and use some method to provide translations by checking a data structure in memory.
With this approach, your beans won't have getters for each language, but you would use some other translator object:
MyTranslator.translate(myBean.getNameTextId());
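A sketch of what such a translator could look like, loading a hypothetical translation table into memory once at startup; the table, its columns and the default language are all assumptions:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.HashMap;
import java.util.Map;

/** Loads the label table once and serves translations from memory. */
public class MyTranslator {

    // (textId, languageCode) -> label
    private static final Map<String, String> LABELS = new HashMap<>();

    /** Call once at startup; table and column names are placeholders. */
    public static void load(Connection conn) throws Exception {
        try (Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "SELECT text_id, language_code, label FROM translation")) {
            while (rs.next()) {
                LABELS.put(key(rs.getString("text_id"), rs.getString("language_code")),
                           rs.getString("label"));
            }
        }
    }

    public static String translate(String textId, String languageCode) {
        // Fall back to the raw text id if no translation is stored.
        return LABELS.getOrDefault(key(textId, languageCode), textId);
    }

    // In the snippet above the language is implicit; in practice it would
    // probably come from the user's session. Hypothetical convenience overload:
    public static String translate(String textId) {
        return translate(textId, "en");
    }

    private static String key(String textId, String languageCode) {
        return textId + "|" + languageCode;
    }
}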
Depending on your requirements, it may be best to have a separate label table for each table which needs to be multilingual, e.g. you have an XYZ table with an xyz_id column, and an XYZ_Label table with xyz_id, language_code, label, other_label, etc. columns.
The advantage of this, over having a single huge labels table, is that you can put unique constraints on the XYZ_Label table (e.g. the English name for XYZ must be unique), and you can do indexed lookups much more efficiently, since the index will only be covering a single table at a time (e.g. if you need to look up XYZ entities by English name).
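A small DDL sketch of that layout, run through JDBC against a placeholder in-memory database and using "product" as a stand-in for XYZ:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class LabelTableSchema {

    public static void main(String[] args) throws Exception {
        // Placeholder connection; "product" stands in for the XYZ table.
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:demo");
             Statement ddl = conn.createStatement()) {

            ddl.execute("CREATE TABLE product ("
                    + " product_id BIGINT PRIMARY KEY)");

            ddl.execute("CREATE TABLE product_label ("
                    + " product_id    BIGINT       NOT NULL REFERENCES product(product_id),"
                    + " language_code VARCHAR(5)   NOT NULL,"
                    + " label         VARCHAR(255) NOT NULL,"
                    + " PRIMARY KEY (product_id, language_code),"
                    + " CONSTRAINT uq_label_per_language UNIQUE (language_code, label))");
        }
    }
}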
What about this:
http://rob.purplerockscissors.com/2009/07/24/internationalizing-websites/
...that is what user "Chochos" says in response #2.