I'm working on a school project where I need to make a dynamic table in JavaFX, which should be saved in my SQL database.
The user should be able to specify the number of rows and columns needed.
Is it possible to save a dynamic table in SQL?
I've thought about exporting the dynamic table to a CSV file and then saving the information in the CSV file as a string. Then, when the table is loaded again, a method would be needed to convert the string back into a table. That seems like a clumsy and inefficient way to do it, though.
I've read some places that I should use XML or JSON in some way, but I never understood how.
I want to use a result set to hold my user data. Just one wrinkle - the data is in a CSV (comma delimited) format, not in a database.
The CSV file contains a header row at the top, then data rows. I would like to dynamically create a result set but not using an SQL query. The input would be the CSV file.
I am expecting (from my Java experience) that there would be methods like rs.newRow, rs.newColumn, rs.updateRow and so on, so I could use the structure of a result set without having any database query first.
Are there such methods on result sets? I read the docs and did not find any. It would be highly useful to just build a result set from the CSV and then use it as if I had run an SQL query. I find it difficult to use a Java bean, an ArrayList of beans, and such, because I want to read in CSV files with different structures, the equivalent of SELECT * FROM in SQL.
I don't know what the names or numbers of columns will be in advance. So other SO questions like "Manually add data to a Java ResultSet" do not answer my needs. In that case, he already has an SQL created result set, and wants to add rows.
In my case, I want to create a result set from scratch and add columns as the first row is read in. Now that I am thinking about it, I could create a query from the first row, using a statement like SELECT COL1, COL2, COL3 ... FROM DUMMY, and then use the result set's insert-row mechanism to read and insert the rest. (Later:) But on trying that, it fails because there is no real table associated with the result set.
The CsvJdbc open source project should do just that:
// Load the driver.
Class.forName("org.relique.jdbc.csv.CsvDriver");
// Create a connection using the directory containing the file(s)
Connection conn = DriverManager.getConnection("jdbc:relique:csv:/path/to/files");
// Create a Statement object to execute the query with.
Statement stmt = conn.createStatement();
// Query the table. The name of the table is the name of the file without ".csv"
ResultSet results = stmt.executeQuery("SELECT col1, col2 FROM myfile");
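From there the ResultSet behaves like any other; a minimal continuation of the snippet above (col1 and col2 are the columns selected in the query):

// Iterate over the rows and print them, then release the resources.
while (results.next()) {
    System.out.println(results.getString("col1") + ", " + results.getString("col2"));
}
results.close();
stmt.close();
conn.close();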
Just another idea to complement Mureinik's answer, which I find really good.
Alternatively, you could use any CSV reader library (there are many out there) and load the file into an in-memory table using an in-memory database such as H2, HyperSQL, or Derby. These give you a complete SQL engine, where you can run complex queries.
It requires more work but you get a lot of flexibility to use the data afterwards.
After you try the in-memory solution, switching to a persistent database is really easy (just change the URL). That way you could load the CSV file into the database only once; from the second execution on, the database would already be populated and there would be no need to load the CSV again.
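A minimal sketch of that idea using H2 (the file path and table name are placeholders, the H2 jar must be on the classpath, and H2's built-in CSVREAD function is used so no separate CSV reader library is needed):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CsvToH2 {
    public static void main(String[] args) throws Exception {
        // In-memory database; change the URL to e.g. "jdbc:h2:./mydata" to make it persistent.
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:csvdemo");
             Statement stmt = conn.createStatement()) {

            // Load the CSV into a real table; column names are taken from the header row.
            stmt.execute("CREATE TABLE people AS SELECT * FROM CSVREAD('/path/to/people.csv')");

            // From here on it is a normal SQL table you can query however you like.
            try (ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM people")) {
                if (rs.next()) {
                    System.out.println("Rows loaded: " + rs.getLong(1));
                }
            }
        }
    }
}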
I have a place where I store tables.
In this place I have a table. Let's name it table A. All columns in table A are varchars.
I want to create a process that will go through every table and create a new table in a different place in the database.
However, this table has to have the right data types instead of varchars.
So if a column in table A holds values like 1234, I want to create a field that will be an INT and not a varchar.
I don't have any idea how to do this in Java. Could somebody point me in the right direction on what I would have to learn, and whether it is too difficult?
Thanks
As I see it, your problem has a few parts:
EASY - Getting the existing table information. You'll have to get the DatabaseMetaData from your JDBC connection. Documentation on that class is here, and Google is your friend as to how to get that object in the first place. (Hint: this question should get you most of the way.)
MEDIUM - Figuring out what data type your data really is. If you try to narrow it down to just a few simple types like integers, strings, and dates, this might not actually be that hard. You'll just need to select a few rows for each column (where that column isn't null) and then try to parse them as the different types. If parsing succeeds, the value probably is that type; if it throws an exception, it's not. Integer.parseInt(), DateTimeFormatter, etc. are the things you'll be using.
If you want to get table metadata, just do a SELECT * FROM <table> and the ResultSetMetaData will have all the information you'll need to keep going (a small sketch combining this with the parsing idea follows below).
VERY HARD - Creating tables using the new, appropriate data types. This is hard not because of the stuff that's been mentioned so far, but because we haven't yet talked about foreign keys, indexes, and all that other good stuff. Mapping all of those things programmatically is a very hard problem, so I recommend NOT trying to program this part. Instead, have your script output the mapping of old data types to new data types for each table. Then go into your SQL manager and get the script to generate that table; almost every SQL database management tool can generate the creation script for a table. Edit that script with your new data types, give it a new name, and run it.
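For the MEDIUM part, a rough sketch of what that parsing-based guessing could look like (the candidate types, the helper names, and the single-row sampling are just illustrative assumptions):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.Statement;
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class ColumnTypeGuesser {

    // Guess a narrower type for a single string value.
    static String guessType(String value) {
        try {
            Integer.parseInt(value);
            return "INTEGER";
        } catch (NumberFormatException ignored) { }
        try {
            LocalDate.parse(value, DateTimeFormatter.ISO_LOCAL_DATE);
            return "DATE";
        } catch (Exception ignored) { }
        return "VARCHAR";
    }

    // Print a guessed type for every column of the given table,
    // based on a sample value from the first row.
    static void inspect(Connection conn, String tableName) throws Exception {
        try (Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT * FROM " + tableName)) {
            ResultSetMetaData meta = rs.getMetaData();
            if (rs.next()) {
                for (int i = 1; i <= meta.getColumnCount(); i++) {
                    String value = rs.getString(i);
                    System.out.println(meta.getColumnName(i) + " -> "
                            + (value == null ? "VARCHAR" : guessType(value)));
                }
            }
        }
    }
}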
I just designed a Postgres database and need to choose a way of populating it with data. The data consists of txt and CSV files, but can generally be any type of file containing delimited character data. I'm programming in Java so that the data ends up with a consistent structure (there are lots of different kinds of files, and I need to find out what each column of a file represents so I can associate it with a column of my DB). I thought of two ways:
Convert the files into one same type of file (JSON) and then get the DB to regularly check the JSON file and import its content.
Directly connect to the database via JDBC and send the strings to the DB (I still need to create a backup file containing what was inserted into the DB, so in both cases a file is created and written to).
Which would you go with, time-efficiency-wise? I'm kind of tempted by the first one, as it would be easier to handle a JSON file in the DB.
If you have any other suggestion that would also be welcome!
JSON or CSV
If you have the liberty of converting your data to either CSV or JSON format, CSV is the one to choose. This is because you will then be able to use COPY FROM to bulk load large amounts of data at once into PostgreSQL.
CSV is supported by COPY, but JSON is not.
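With the PostgreSQL JDBC driver you can issue that COPY from Java through its CopyManager API; a minimal sketch (connection details, table name, and file path are placeholders):

import java.io.FileReader;
import java.io.Reader;
import java.sql.Connection;
import java.sql.DriverManager;

import org.postgresql.PGConnection;
import org.postgresql.copy.CopyManager;

public class CsvCopyIn {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/mydb", "user", "password")) {

            // CopyManager exposes the COPY protocol to Java clients.
            CopyManager copyManager = conn.unwrap(PGConnection.class).getCopyAPI();

            try (Reader reader = new FileReader("/path/to/data.csv")) {
                // Bulk load the CSV into an existing table; HEADER skips the first line.
                long rows = copyManager.copyIn(
                        "COPY my_table FROM STDIN WITH (FORMAT csv, HEADER)", reader);
                System.out.println("Loaded " + rows + " rows");
            }
        }
    }
}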
Directly inserting values.
This is the approach to take if you only need to insert a few (or maybe even a few thousand) records, but it is not suited to large numbers of records because it will be slow.
If you choose this approach, you can create the backup using COPY TO. However, if you feel you need to create the backup file with your Java code, choosing CSV as the format means you would be able to bulk load it as discussed above.
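For that smaller-scale case, a plain JDBC batch insert is usually enough; a sketch (the table name, columns, and connection details are assumptions):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.List;

public class DirectInsert {
    // Insert already-parsed rows with a batched PreparedStatement.
    static void insertRows(List<String[]> rows) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/mydb", "user", "password")) {
            conn.setAutoCommit(false);
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO my_table (col_a, col_b) VALUES (?, ?)")) {
                for (String[] row : rows) {
                    ps.setString(1, row[0]);
                    ps.setString(2, row[1]);
                    ps.addBatch();
                }
                ps.executeBatch();
            }
            conn.commit();
        }
    }
}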
I am using IBM DB2 v9.1 and want to export the whole database to an XML file and then import it back when needed. There are 9 tables in my database.
I am using Java and Hibernate. What I have done so far is: fetch all data through Hibernate into POJO objects, then export the objects to an XML file. Now, for the import, I need to delete all existing data first and then import the XML file's data into the database.
The problem is with primary keys (ids). Once an id is deleted from DB2, data cannot be saved with that id again; it will be assigned a new id. This breaks the foreign key relations. What is the best possible solution for this?
If you want to export/import data for testing purposes, you may want to consider DbUnit: http://www.dbunit.org/index.html
Perhaps the MERGE statement can come to your rescue. If there already is a row with the matching id, it will let you update the row. If there is no row with a matching ID, then it will let you insert it.
So then the question might become, do you really need to delete the rows from DB2 when you create the XML files?
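A rough sketch of what such a MERGE could look like from JDBC (the customer table and its id/name columns are made-up names for illustration):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class MergeUpsert {
    // Insert the row if the id is new, otherwise update it, using DB2's MERGE statement.
    static void upsert(Connection conn, int id, String name) throws SQLException {
        String merge =
            "MERGE INTO customer AS t "
          + "USING (VALUES (CAST(? AS INTEGER), CAST(? AS VARCHAR(100)))) AS s (id, name) "
          + "ON t.id = s.id "
          + "WHEN MATCHED THEN UPDATE SET t.name = s.name "
          + "WHEN NOT MATCHED THEN INSERT (id, name) VALUES (s.id, s.name)";
        try (PreparedStatement ps = conn.prepareStatement(merge)) {
            ps.setInt(1, id);
            ps.setString(2, name);
            ps.executeUpdate();
        }
    }
}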
What do you mean by "all database"? All the data? Or the DDL as well?
I think you are exporting all the data and leaving the tables in place, to be refilled with the exported data.
The problems are the constraints and the generated values. There is a good article about generated values: http://www.ibm.com/developerworks/data/library/techarticle/0205pilaka/0205pilaka2.html
For the referential constraints, the best approach is to drop/deactivate them before the import, then import the data, and finally recreate/reactivate the referential constraints.
Here is a good stored procedure to enable/disable constraints: http://www.dzone.com/snippets/db2-enabledisable-constraints
After importing the file, all relations will be mapped to check their interrelations. New objects will be created after mapping the relations, and they will be saved in the database with new IDs, since DB2 will not save data under an old, deleted ID and instead assigns a new one.
I am trying to create an application in Java which pulls records out of the database and maps them to objects. It does that without knowing what the schema of the database looks like. All I want to do is fetch all rows from all tables and store them somewhere. There could be a thousand tables with thousands of records each. The application doesn't know the name of any table or attribute; it should map "on the fly". I looked at Hibernate, but it doesn't give me what I want for this app. I don't want to create hard-coded XML mapping files and classes. Any ideas how I can accomplish this?
Thanks
Oracle has a bunch of data dictionary views for metadata.
ALL_TABLES and ALL_TAB_COLUMNS would be the first places to start. Then you'd build ad-hoc queries based on what you get out of there. Not sure whether you have to deal with all data types (dates, BLOBs, spatial, user-defined...).
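A small sketch of querying those dictionary views from JDBC (nothing here is specific to your schema; it simply lists whatever the connected user can see):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class DictionaryWalk {
    // List every table column visible to the current user, with its declared Oracle data type.
    static void listColumns(Connection conn) throws Exception {
        String sql = "SELECT table_name, column_name, data_type "
                   + "FROM all_tab_columns ORDER BY table_name, column_id";
        try (PreparedStatement ps = conn.prepareStatement(sql);
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                System.out.println(rs.getString("table_name") + "."
                        + rs.getString("column_name") + " : " + rs.getString("data_type"));
            }
        }
    }
}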
Not sure what you mean by "store them somewhere". If you start thinking of CSV or XML files, you'll need to escape various characters from the VARCHAR2 columns.
If you are looking for some generic extract/unload routines, you should look at what is already available in the database or open-source/commercially.
MyBatis provides a pretty simple way to map data results to objects and back, maybe check that out?
http://code.google.com/p/mybatis/
Not to be flip, but for this task, you might want to check out Ruby on Rails and its ActiveRecord approach