I am seeking advice on persisting my JTable data in an elegant manner. So far my research has indicated that I can iterate through the many columns and rows and extract the data for saving (which seems convoluted), or that I can save the table and related data as an Object in an object file.
I would love to hear some advice from those more versed in this area, as I am quite new to JTables and their workings. Are there other solutions available that might be a better choice?
It depends on what you want to do with the persisted data. If you only want to persist it so that you can display it again later, look at serializing it (via Java's Serializable or Externalizable) to a data stream that you store somewhere. Later you can read it back (deserialize it) and display it again.
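For instance, a minimal sketch of that approach, assuming the table is backed by a DefaultTableModel (whose data vector is serializable); the class and method names are made up for the example:

    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.ObjectInputStream;
    import java.io.ObjectOutputStream;
    import java.util.Vector;
    import javax.swing.JTable;
    import javax.swing.table.DefaultTableModel;

    public class TableSnapshot {

        // Writes the model's backing data vector to a file.
        static void save(JTable table, String path) throws Exception {
            DefaultTableModel model = (DefaultTableModel) table.getModel();
            try (ObjectOutputStream out =
                    new ObjectOutputStream(new FileOutputStream(path))) {
                out.writeObject(model.getDataVector());
            }
        }

        // Reads the rows back and appends them to the (already configured) model.
        @SuppressWarnings("unchecked")
        static void load(JTable table, String path) throws Exception {
            DefaultTableModel model = (DefaultTableModel) table.getModel();
            try (ObjectInputStream in =
                    new ObjectInputStream(new FileInputStream(path))) {
                Vector<Vector<Object>> rows =
                        (Vector<Vector<Object>>) in.readObject();
                for (Vector<Object> row : rows) {
                    model.addRow(row);
                }
            }
        }
    }

Note that this only persists the cell values, and only works if every cell value is itself serializable.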
If you want to put it in a database where the information is usable for other purposes, then you probably want to implement some object which models your data, to keep it clear and simple. Then you can present this in a Swing JTable by adapting your model to a table model. This still means you need to write the adaptation/transformation logic, but it shouldn't be onerous, and you get the most usable result. The TableModel is simply a way of looking at your data that a JTable can understand. Look at the Adapter pattern for one way to think about the mapping.
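To illustrate the adapter idea, here is a hedged sketch of a generic TableModel that presents an existing domain list through column accessors; every name in it is made up for the example:

    import java.util.List;
    import java.util.function.Function;
    import javax.swing.table.AbstractTableModel;

    // Adapts any List of domain objects to the TableModel interface:
    // each column is a name plus a function that extracts the cell value.
    public class DomainTableAdapter<T> extends AbstractTableModel {

        private final List<T> data;
        private final String[] names;
        private final List<Function<T, Object>> getters;

        public DomainTableAdapter(List<T> data, String[] names,
                                  List<Function<T, Object>> getters) {
            this.data = data;
            this.names = names;
            this.getters = getters;
        }

        @Override public int getRowCount() { return data.size(); }
        @Override public int getColumnCount() { return names.length; }
        @Override public String getColumnName(int col) { return names[col]; }

        @Override
        public Object getValueAt(int row, int col) {
            return getters.get(col).apply(data.get(row));
        }
    }

With a hypothetical Order class you would then write something like new JTable(new DomainTableAdapter<>(orders, new String[]{"Customer", "Total"}, List.of(Order::getCustomer, Order::getTotal))).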
Hope that helps.
Recently I came across a schema model like this:
The structure looks exactly the same; I have just renamed each entity with a generic name like Table (*).
Starting from Table C, all the tables (C through L) have close to 200 columns.
The reason for posting this is that I have never come across a structure like this before. If anyone has experienced or worked with something similar (or more complex), please share your ideas:
Is having a structure like this good or bad, and why?
Assume we need an API to save data for a table structure like this:
How should the API be designed?
How are we going to manage transactions across all these tables?
In the service code, there are a few cases where we might need to get data from these tables and transfer it to an external system.
The catch is that the external system accepts requests in a flattened structure, not in the hierarchy we have as mentioned above. If this data needs to be transferred to the external system, how can we manage marshalling and unmarshalling?
Last but not least, the API that manages data like this will be consumed at least 2,000 times a day.
What are your thoughts on this? I don't know exactly why we need it; it needs a detailed discussion and we need to break things up.
If I go with Spring Data JPA and Hibernate, what are all the things I need to consider?
More importantly, all these tables' row values will be limited based on the ownerId/tenantId, so the data needs to be consistent across all the tables.
I cannot comment on the general aspect of the structure, as that is pretty domain specific, and one would need to know why this structure was chosen to be able to say whether it's good or not. Either way, you probably can't change it anyway, so why bother asking?
Having said that, with such a model there are a few aspects that you should consider:
When updating data, it is pretty important to update only the columns that really changed, to avoid index thrashing and allow the DB to use spare storage in pages. This is a performance concern that usually comes up when using Hibernate with such models, as Hibernate by default updates all "updatable" columns, not just the dirty ones. There is an option to do dynamic updates, though. Without dynamic updates, you might produce a few more IOs per update and thus hold locks for longer, which affects overall scalability.
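In Hibernate, that option is the @DynamicUpdate annotation. A minimal sketch with made-up entity and column names (the imports assume a recent jakarta.persistence stack; older ones use javax.persistence):

    import jakarta.persistence.Entity;
    import jakarta.persistence.Id;
    import org.hibernate.annotations.DynamicUpdate;

    // With @DynamicUpdate, Hibernate regenerates the UPDATE statement per
    // flush and includes only the dirty columns instead of all ~200
    // updatable ones.
    @Entity
    @DynamicUpdate
    public class TableC {

        @Id
        private Long id;

        private String ownerId;  // the tenant discriminator from the question

        private String column1;  // stand-ins for the ~200 real columns
        private String column2;

        // getters and setters omitted for brevity
    }

The trade-off is that the update SQL can no longer be generated once and reused for every update of the entity, since it varies with the dirty columns.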
When reading data, it is very important not to use join fetching by default, as that might result in a result-set size explosion: joining several child collections in one query multiplies the rows of the result (a cartesian product).
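To make that concrete, here is a hedged sketch of the "one collection per query" pattern, with made-up entity names. Each query join-fetches a single collection onto the same roots, so each result set stays linear; the later queries simply initialize further collections on the roots already held in the persistence context:

    import jakarta.persistence.Entity;
    import jakarta.persistence.EntityManager;
    import jakarta.persistence.Id;
    import jakarta.persistence.ManyToOne;
    import jakarta.persistence.OneToMany;
    import java.util.List;

    @Entity
    class TableB {
        @Id Long id;
        String ownerId;
        @OneToMany(mappedBy = "parent") List<TableC> cRows;
        @OneToMany(mappedBy = "parent") List<TableD> dRows;
    }

    @Entity
    class TableC { @Id Long id; @ManyToOne TableB parent; }

    @Entity
    class TableD { @Id Long id; @ManyToOne TableB parent; }

    public class HierarchyLoader {

        public static List<TableB> loadForTenant(EntityManager em, String owner) {
            // First query: the roots plus their first collection.
            List<TableB> roots = em.createQuery(
                    "select distinct b from TableB b "
                  + "join fetch b.cRows where b.ownerId = :owner", TableB.class)
                    .setParameter("owner", owner)
                    .getResultList();

            // Second query: same roots, next collection. The returned list is
            // discarded; its side effect is initializing dRows on the roots.
            em.createQuery(
                    "select distinct b from TableB b "
                  + "join fetch b.dRows where b.ownerId = :owner", TableB.class)
                    .setParameter("owner", owner)
                    .getResultList();

            return roots;
        }
    }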
I have a requirement to store CSV data in an Oracle database for later retrieval by dynamic query scripts. The data needs to be stored such that any column of the CSV data can be queried using SQL and performance is key (some CSV files are 100k+ lines).
The content of the CSV files (number of columns, headings, data types) is not known ahead of time, and the system needs to handle multiple file structures (new structures are added to a config file, by people who don't know SQL, so the system knows how to read them).
My current solution, in order to avoid an EAV model, is to have my code create new tables every time a new CSV structure is added to the config file. I'm curious to know if there is a better way to achieve what I'm trying to do. I'm not particularly fond of having my code create new tables in production at run-time.
The system is written in groovy, in case it matters.
I am inclined to go with your current solution, which is a separate table for each type. Somehow, I'm most comfortable with storing data in well-defined tables with well-defined types.
An EAV (entity-attribute-value) solution is also viable. With 100k rows of data, the EAV solution should perform pretty well, unless you have lots of tables. One downside is the types of the columns. Without a lot of extra work, you are pretty much limited to strings for all the values.
Oracle does offer another possibility, which is an XML solution. This can give you the flexibility of dynamic column names along with the "simplicity" of not having to define a separate table for each one. You can read more about it in the documentation here.
It comes down to what you want to model. If you need to handle ad hoc queries against any of the columns in the CSV file, then I guess you need to model them all as Oracle columns. If you only need to retrieve a whole line based on a particular key, then you could model it as two columns: the key and the line. But if you need to model the individual columns, such a key/line design would not be in first normal form.
When you create an EAV model, you are making a flexible system that allows additional columns to be added or removed easily. Oracle is already a flexible system that allows additional columns to be added or removed easily; it has just had more thought put into locking, performance, scalability and tool support than your naive EAV model might have.
Overall, I think what you are probably doing is best. It's not an easy problem and it's not exactly what Oracle was designed for so you might have issues with statistics and which indexes to create and so on.
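To make the table-per-structure approach concrete, here is a minimal JDBC sketch in Java (the question mentions Groovy, where the same JDBC calls apply unchanged). All names are illustrative, the identity-column syntax assumes Oracle 12c or later, and headings from the config file must be validated because identifiers cannot be bound as parameters:

    import java.sql.Connection;
    import java.sql.Statement;
    import java.util.List;

    public class CsvTableCreator {

        // Builds and executes CREATE TABLE DDL from the CSV headings.
        // Every column is VARCHAR2 here for simplicity; a real config file
        // could map each heading to a proper Oracle type so values stay
        // typed and indexable.
        static void createTableFor(Connection conn, String tableName,
                                   List<String> headings) throws Exception {
            StringBuilder ddl = new StringBuilder("CREATE TABLE ")
                    .append(tableName)
                    .append(" (id NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY");
            for (String heading : headings) {
                // Reject anything that is not a plain identifier, to keep
                // user-edited config from reaching the DDL as-is.
                if (!heading.matches("[A-Za-z][A-Za-z0-9_]{0,29}")) {
                    throw new IllegalArgumentException("Bad column: " + heading);
                }
                ddl.append(", ").append(heading).append(" VARCHAR2(4000)");
            }
            ddl.append(")");
            try (Statement stmt = conn.createStatement()) {
                stmt.execute(ddl.toString());
            }
        }
    }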
I want to build a fairly simple Android application.
The basic data model of my app consists of several basic elements with relationships between them.
I want the data to be persistent, so I'm thinking about using an SQLite DB.
I looked at the Android Developer website, and the closest thing I could find that is relevant to my question is the "NotePad Tutorial", which makes use of an SQLite DB to store notes.
I think by now I've got a handle on the basics, but I still have some questions:
In the tutorial they have only one table in the DB. If my application requires a more complicated schema, should I still use the same method, that is, putting everything inside a subclass of SQLiteOpenHelper? Or is there a more "structured" way to go?
Should I bother creating classes for all the data elements stored in my DB? I mean, this is what I learned I should do, but the documentation gives no hint about it.
If I should create classes, how do I use them correctly? I mean, since the result of a query is a Cursor object and not a collection of rows, should/can I parse that into objects?
Thanks!
Define all the tables in the same subclass; this makes it easy to see all the tables in one place and to write SQL for upgrades, etc.
Yes, that makes manipulation on the Java side easier and keeps the code clean.
Read from the cursor and initialize an ArrayList of objects one by one.
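Putting those three points together, here is a minimal sketch with hypothetical note and tag tables, and a Note row class that cursor rows are parsed into (the try-with-resources on Cursor needs API 16+):

    import android.content.Context;
    import android.database.Cursor;
    import android.database.sqlite.SQLiteDatabase;
    import android.database.sqlite.SQLiteOpenHelper;
    import java.util.ArrayList;
    import java.util.List;

    public class AppDbHelper extends SQLiteOpenHelper {

        public AppDbHelper(Context context) {
            super(context, "app.db", null, 1);
        }

        // All tables live in one place, so the whole schema is visible here.
        @Override
        public void onCreate(SQLiteDatabase db) {
            db.execSQL("CREATE TABLE note (_id INTEGER PRIMARY KEY, title TEXT)");
            db.execSQL("CREATE TABLE tag (_id INTEGER PRIMARY KEY, name TEXT)");
        }

        @Override
        public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
            // Naive strategy for the sketch: drop everything and recreate.
            db.execSQL("DROP TABLE IF EXISTS note");
            db.execSQL("DROP TABLE IF EXISTS tag");
            onCreate(db);
        }

        // Reads every cursor row and builds a list of Note objects.
        public List<Note> loadNotes() {
            List<Note> notes = new ArrayList<>();
            try (Cursor c = getReadableDatabase()
                    .rawQuery("SELECT _id, title FROM note", null)) {
                while (c.moveToNext()) {
                    notes.add(new Note(c.getLong(0), c.getString(1)));
                }
            }
            return notes;
        }

        public static class Note {
            public final long id;
            public final String title;

            public Note(long id, String title) {
                this.id = id;
                this.title = title;
            }
        }
    }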
I'm working on an application where I need to map fields in one CSV file to fields in a data structure defined by the application. I'd thought of different ways of doing this, but the method that I like the best is the one where I would have a graphical user interface where the user could just drag columns from an entity representing the CSV file to an entity representing the internal data structure. This way, it would be all drag and drop.
Does anyone know of a Java library I can use to achieve something like this?
UPDATE
I'd like to point out that I am looking for components which can help me with the visualisation. I know that I won't find any ready-made components which take care of the entire mapping and data transformations for me. It's a matter of tracking down Swing components which can help me visualise relationships between entities and their fields (the CSV file being one entity and the internal data structure another).
Consider using JList or JTable containing a checkbox column, either of which would leverage the existing DnD support for those components. A common interface uses two parallel lists flanking a column of controls. For example,
[Image: two parallel lists flanking a column of transfer controls (source: java2s.com)]
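As a starting point, here is a hedged sketch of that parallel-lists layout using Swing's built-in string DnD support; all column and field names are invented for the example:

    import java.awt.GridLayout;
    import java.awt.datatransfer.DataFlavor;
    import java.awt.datatransfer.StringSelection;
    import java.awt.datatransfer.Transferable;
    import javax.swing.DefaultListModel;
    import javax.swing.DropMode;
    import javax.swing.JComponent;
    import javax.swing.JFrame;
    import javax.swing.JList;
    import javax.swing.JScrollPane;
    import javax.swing.SwingUtilities;
    import javax.swing.TransferHandler;

    // Two parallel lists: CSV column names on the left, internal field
    // names on the right, draggable between the lists.
    public class MappingDemo {

        static JList<String> makeList(String... items) {
            DefaultListModel<String> model = new DefaultListModel<>();
            for (String item : items) model.addElement(item);
            JList<String> list = new JList<>(model);
            list.setDragEnabled(true);          // allow dragging entries out
            list.setDropMode(DropMode.INSERT);  // show an insertion point on drop
            list.setTransferHandler(new TransferHandler() {
                @Override
                public int getSourceActions(JComponent c) {
                    return COPY; // keep the source entry; a real UI might MOVE
                }

                @Override
                protected Transferable createTransferable(JComponent c) {
                    return new StringSelection(list.getSelectedValue());
                }

                @Override
                public boolean canImport(TransferSupport support) {
                    return support.isDataFlavorSupported(DataFlavor.stringFlavor);
                }

                @Override
                public boolean importData(TransferSupport support) {
                    try {
                        String dropped = (String) support.getTransferable()
                                .getTransferData(DataFlavor.stringFlavor);
                        JList.DropLocation loc =
                                (JList.DropLocation) support.getDropLocation();
                        model.add(loc.getIndex(), dropped);
                        return true;
                    } catch (Exception ex) {
                        return false;
                    }
                }
            });
            return list;
        }

        public static void main(String[] args) {
            SwingUtilities.invokeLater(() -> {
                JFrame frame = new JFrame("CSV field mapping");
                frame.setLayout(new GridLayout(1, 2, 8, 0));
                frame.add(new JScrollPane(makeList("csv_name", "csv_age")));
                frame.add(new JScrollPane(makeList("fullName", "ageYears")));
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.pack();
                frame.setVisible(true);
            });
        }
    }

Dropping copies the dragged name into the target list; a real mapper would record the pairing and probably remove or grey out already-mapped entries.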
I know how to retrieve data (just text in a table) from a database in Java and how to show it in the console. But I want to load it in a JTable.
Is there a good (modern) way (tutorial), without using Vectors?
Check out this tutorial: How to Use Tables
Your question seems similar to these two:
How to fill data in a JTable with database?
Displaying data from database in JTable
Check out GlazedLists; it comes with ready-made TableModels that are based on modern collection interfaces.
http://www.glazedlists.com/
If you don't want to use an extra library, you can easily implement your own javax.swing.table.TableModel. I like to implement both TableModel and java.util.List, so I'm working with just a simple List and can hook up any List easily.
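For example, a minimal sketch of such a model, assuming a hypothetical person table with name and email columns; rows are copied out of the ResultSet into a plain List, so no Vectors are involved:

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import java.util.ArrayList;
    import java.util.List;
    import javax.swing.table.AbstractTableModel;

    public class PersonTableModel extends AbstractTableModel {

        public static final class Person {
            final String name;
            final String email;
            Person(String name, String email) { this.name = name; this.email = email; }
        }

        private final String[] columns = {"Name", "Email"};
        private final List<Person> rows = new ArrayList<>();

        // Copies the query result into the backing List, one object per row.
        public void load(Connection conn) throws Exception {
            rows.clear();
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery("SELECT name, email FROM person")) {
                while (rs.next()) {
                    rows.add(new Person(rs.getString("name"), rs.getString("email")));
                }
            }
            fireTableDataChanged(); // tell the JTable everything changed
        }

        @Override public int getRowCount() { return rows.size(); }
        @Override public int getColumnCount() { return columns.length; }
        @Override public String getColumnName(int col) { return columns[col]; }

        @Override
        public Object getValueAt(int row, int col) {
            Person p = rows.get(row);
            return col == 0 ? p.name : p.email;
        }
    }

You would then display it with new JTable(model) after calling load(connection).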