I am reading from one db table and writing the same data to another db table.
Now, if a date field in the source has no data (null), then when the row is written to the destination table, the same field gets defaulted to the date "1/1/1900".
I have not set this default value anywhere in my program, and I am not sure why it is happening.
Any idea how I can prevent that, other than checking each field for null values and setting it to a different value?
I am using Eclipse and a SQL Server database.
Thanks.
One possibility is that the default value for the date columns is actually set to '1/1/1900'. If so, remove this default using SQL's ALTER TABLE. In SQL Server a column default is a named constraint, so you drop the constraint (substitute the actual name of your default constraint):
ALTER TABLE Table1 DROP CONSTRAINT DF_Table1_DateColumn1
Update: as it looks like this date is the out-of-the-box default, try altering the column to allow NULLs:
ALTER TABLE Table1 ALTER COLUMN DateColumn1 DATETIME NULL
Some important notes here:
Back up your database before you start altering tables(!)
Use the same datatype (e.g. DATETIME) in your ALTER command.
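Also note that a column default only kicks in when the INSERT omits that column entirely; if your copy code binds NULL explicitly, no default is applied. Here is a minimal JDBC sketch of that idea - the table, column, and connection details are all hypothetical placeholders:

import java.sql.*;

public class CopyWithNulls {
    public static void main(String[] args) throws SQLException {
        // Hypothetical SQL Server URL and tables; adjust to your environment
        try (Connection conn = DriverManager.getConnection(
                "jdbc:sqlserver://localhost;databaseName=MyDb;integratedSecurity=true");
             Statement src = conn.createStatement();
             ResultSet rs = src.executeQuery("SELECT Id, SomeDate FROM SourceTable");
             PreparedStatement ins = conn.prepareStatement(
                     "INSERT INTO DestTable (Id, SomeDate) VALUES (?, ?)")) {
            while (rs.next()) {
                ins.setInt(1, rs.getInt("Id"));
                Date d = rs.getDate("SomeDate");
                if (d == null) {
                    // Bind NULL explicitly so no column default can kick in
                    ins.setNull(2, Types.DATE);
                } else {
                    ins.setDate(2, d);
                }
                ins.executeUpdate();
            }
        }
    }
}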
I have a table facility_HEADER and I want to alter it to add a few columns. I want the newly added columns to hold NULL as their default value. The table is already loaded with 14 years of data; since it is partitioned by year for 2002-2014, the newly added columns should by default come back as NULL for the existing data.
CREATE TABLE facility_HEADER (
  A STRING,
  B INT,
  C INT
)
PARTITIONED BY (year INT COMMENT 'Date Year Incurred')
STORED AS PARQUET;
Alter Table Command
ALTER TABLE facility_HEADER ADD COLUMNS (MSCLMID BIGINT, NPI STRING, UNITS DECIMAL(10,2));
When I run DESCRIBE on the table, I can see the columns got appended at the end.
But when I run SELECT * against any of the partitions, it fails with this error:
Failed with exception
java.io.IOException:org.apache.hadoop.hive.ql.metadata.HiveException:
java.lang.ClassCastException: org.apache.hadoop.io.IntWritable cannot
be cast to org.apache.hadoop.io.LongWritable
My table holds 14 years of data, and I don't want to solve this by putting NULLs into the SELECT clause and giving aliases.
I tried the approaches described here and here.
Can anyone help me understand what actually happened to my table? It looks like I lost 14 years of data in one go.
Make a full backup of the files first. Then try to ALTER TABLE and drop the newly added columns; if you haven't written to the table since adding them, it should work. Check that the table is selectable again. Then create a new table with the new columns and INSERT OVERWRITE into it.
The ALTER command in Hive only changes the metadata, not the underlying data, which is why SELECT * from the table fails.
Hive tries to read the data files stored under "/apps/hive/warehouse/databasename.db/tablename/" using the table's row format and file format, so it cannot find values for the columns described in the new row format.
Note: because the data is STORED AS PARQUET, Hive finds no definition of the new columns in the existing Parquet files.
Workaround: create a new table, insert the data into it, and rename it to the old table name:
INSERT INTO TABLE facility_HEADER_new
SELECT A, B, C, NULL AS MSCLMID, NULL AS NPI, NULL AS UNITS FROM facility_HEADER;
I'm relatively new to working with JDBC and SQL. I have two tables, CustomerDetails and Cakes. I want to create a third table, called Transactions, which uses the 'Names' column from CustomerDetails, 'Description' column from Cakes, as well as two new columns of 'Cost' and 'Price'. I'm aware this is achievable through the use of relational databases, but I'm not exactly sure about how to go about it. One website I saw said this can be done using ResultSet, and another said using the metadata of the column. However, I have no idea how to go about either.
What you're probably looking for is a 'SQL view' (to simplify: a virtual table); see this documentation.
CREATE VIEW view_transactions AS
SELECT customerdetails.Name, cakes.Description -- ... etc.
FROM customerdetails
JOIN cakes ON ...;
Or something along those lines
That way you can then query the view view_transactions, for example, as if it were a proper table.
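From JDBC that might look like this minimal sketch - the database file name cakeshop.db and the Xerial sqlite-jdbc driver on the classpath are assumptions:

import java.sql.*;

public class QueryView {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:cakeshop.db");
             Statement st = conn.createStatement();
             // The view is queried exactly like a table
             ResultSet rs = st.executeQuery("SELECT Name, Description FROM view_transactions")) {
            while (rs.next()) {
                System.out.println(rs.getString("Name") + " bought " + rs.getString("Description"));
            }
        }
    }
}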
Also, why have you tagged this as mysql when you are using sqlite?
You should create the new table manually, i.e. outside of your program. Use the command-line client sqlite3, for example.
If you need to, you can use the command .schema CustomerDetails in that tool to show the DDL ("metadata" if you want) of the table.
Then you can write your new CREATE TABLE Transactions (...) defining your new columns, plus those from the old tables as they're shown by the .schema command before.
Note that the .schema is only used here to show you the exact column definitions of the existing tables, so you can create matching columns in your new table. If you already know the present column definitions, because you created those tables yourself, you can of course skip that step.
Also note that SELECT Name from CUSTOMERDETAILS will always return the data from that table, but never the structure, i.e. the column definition. That data is useless when trying to derive a column definition from it.
If you really want/have to access the DB's metadata programmatically, the documented way is to query the sqlite_master system table. See also SQLite Schema Information Metadata, for example.
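For illustration, a minimal JDBC sketch of that query follows - the database file name cakeshop.db is an assumption:

import java.sql.*;

public class ShowSchema {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:cakeshop.db");
             Statement st = conn.createStatement();
             // sqlite_master stores the original CREATE statement for every table
             ResultSet rs = st.executeQuery(
                     "SELECT name, sql FROM sqlite_master WHERE type = 'table'")) {
            while (rs.next()) {
                System.out.println(rs.getString("name") + ":");
                System.out.println(rs.getString("sql"));
            }
        }
    }
}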
You should read up on the concept of data modelling and how relational databases can help you with it; then your transactions table might look just like this:
CREATE TABLE transactions (
id int not null primary key
, customer_id int not null references customerdetails( id )
, cake_id int not null references cakes( id )
, price numeric( 8, 2 ) not null
, quantity int not null
);
This way you can ensure that for each transaction (which in this case would be just a single line item of an invoice), the cake and customer actually exist.
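As a usage illustration, inserting one transaction from JDBC could look like the following sketch; the id values and the SQLite connection URL are hypothetical:

import java.sql.*;

public class RecordTransaction {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:cakeshop.db");
             PreparedStatement ps = conn.prepareStatement(
                     "INSERT INTO transactions (id, customer_id, cake_id, price, quantity) "
                             + "VALUES (?, ?, ?, ?, ?)")) {
            ps.setInt(1, 1);   // transaction id
            ps.setInt(2, 42);  // must reference an existing customerdetails.id
            ps.setInt(3, 7);   // must reference an existing cakes.id
            ps.setBigDecimal(4, new java.math.BigDecimal("19.99"));
            ps.setInt(5, 2);
            ps.executeUpdate();
        }
    }
}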
And I agree with @hanno-binder that it's not the best idea to create all this in plain JDBC.
I am trying to detect any changes to the rows of some SQL Server 2008 tables from Java.
I have investigated some approaches, such as a timestamp column, the Change Tracking mechanism, and Change Data Capture.
But all of these approaches need some customization in the database, as follows:
1. A timestamp column must be present in each table.
2. Change Tracking requires a primary key in each table.
3. Change Data Capture requires the creation of system tables and other customizations.
I need an approach that does not require such heavy customization, because the database is crucial and altering its configuration is not allowed.
Can anyone help or suggest something in this regard?
The following changes can accomplish a data audit:
Create an identity column in all txn tables.
Fetch this identity data to the front end along with the transaction data.
Create history tables for all txn tables and move the original data into them prior to every transaction, using a version ID.
After modification in the UI, pass the data back to the database and compare it with the existing information using a SQL MERGE statement to perform the update/insert/delete.
Compare the latest version available in the history table with the data in the current table using the following logic:
New data inserted - an identity key exists in the current table and is NOT available in the latest version of the history table:
WHERE C.IdentityColumn NOT IN (Select identitycolumn from History H)
Data deleted - an identity key exists in the latest version of the history table and does NOT exist in the current table:
WHERE H.IdentityColumn NOT IN (Select identitycolumn from Current C)
Data updated - the identity key exists in both the current table and the latest version of the history table, and at least one column's data has been modified:
WHERE (C.IdentityColumn = H.IdentityColumn)
AND
(
C.Col1 <> H.Col1
OR
C.Col2 <> H.Col2
OR
C.ColN <> H.ColN
)
C - Current table
H - History table
Using the above logic, the modified data can be tracked in a separate audit table with columns like Record ID, Field Name, Old Value, New Value, Modification, Modified By, and Modified Date/Time.
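As a rough sketch of the update check from the Java side - the connection URL, table names, and column names below are all hypothetical - it could be run like this:

import java.sql.*;

public class ChangeScanner {
    public static void main(String[] args) throws SQLException {
        // Hypothetical schema: CurrentTable (C) vs. the latest version in HistoryTable (H).
        // Note: <> does not treat NULLs as changes; nullable columns need extra handling.
        String sql = "SELECT C.IdentityColumn FROM CurrentTable C "
                + "JOIN HistoryTable H ON C.IdentityColumn = H.IdentityColumn "
                + "WHERE C.Col1 <> H.Col1 OR C.Col2 <> H.Col2";
        try (Connection conn = DriverManager.getConnection(
                "jdbc:sqlserver://localhost;databaseName=MyDb;integratedSecurity=true");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(sql)) {
            while (rs.next()) {
                // Each row is a record whose data changed since the last version
                System.out.println("Updated record: " + rs.getLong("IdentityColumn"));
            }
        }
    }
}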
I am using an Excel sheet for reading values and updating them in the DB. I have two questions:
How do I avoid duplicates in the DB table when the same value is added in the Excel sheet?
Currently, if a new value is added to the Excel sheet, I have to run the Java console program again and execute a query in the DB to see the results. But I don't want that; instead, if any value is modified or added in the Excel sheet, it should automatically be reflected in the DB table.
Is there any way to do that?
1) To avoid duplicates in the DB table, just make the column unique. Non-unique inserts/updates will simply fail.
create table mytable (
id int primary key,
name varchar(255) unique not null
);
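On the Java side, a duplicate insert then surfaces as an exception you can catch. A minimal sketch - the MySQL connection URL and credentials are assumptions, and some drivers throw a plain SQLException instead:

import java.sql.*;

public class InsertRow {
    public static void main(String[] args) {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost/mydb", "user", "password");
             PreparedStatement ps = conn.prepareStatement(
                     "INSERT INTO mytable (id, name) VALUES (?, ?)")) {
            ps.setInt(1, 1);
            ps.setString(2, "Alice");
            ps.executeUpdate();
        } catch (SQLIntegrityConstraintViolationException e) {
            // The duplicate id or name was rejected by the unique constraint
            System.out.println("Duplicate row skipped: " + e.getMessage());
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }
}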
2) If you want it to reflect directly in the DB, I suggest you link MS Access directly to the DB table. It looks very much like Excel and is probably what you want.
You can also try a free Access-like alternative, such as OpenOffice.org Base.
Here is my situation and my constraints:
I am using Java 5, JDBC, and DB2 9.5
My database table contains a BIGINT value which represents the primary key. For various reasons that are too complicated to go into here, the way I insert records into the table is by executing an insert against a VIEW; an INSTEAD OF trigger retrieves the NEXT_VAL from a SEQUENCE and performs the INSERT into the target table.
I can change the triggers, but I cannot change the underlying table or the general approach of inserting through the view.
I want to retrieve the sequence value from JDBC as if it were a generated key.
Question: how can I get access to the value pulled from the SEQUENCE? Is there some message I can fire within DB2 to float this sequence value back to the JDBC driver?
Resolution:
I resorted to retrieving the PREVIOUS_VAL from the sequence in a separate JDBC call.
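For reference, that separate call can use DB2's PREVIOUS VALUE FOR expression; a minimal sketch, where the connection URL and the sequence name MY_SEQ are placeholders:

import java.sql.*;

public class ReadSequence {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:db2://localhost:50000/MYDB", "user", "password");
             Statement st = conn.createStatement();
             // PREVIOUS VALUE is per session: valid only after NEXT VALUE was used on this connection
             ResultSet rs = st.executeQuery(
                     "SELECT PREVIOUS VALUE FOR MY_SEQ FROM SYSIBM.SYSDUMMY1")) {
            if (rs.next()) {
                System.out.println("Last generated key: " + rs.getLong(1));
            }
        }
    }
}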
Have you looked at java.sql.Statement.getGeneratedKeys()? I wouldn't hold out much hope, since you're doing something so unusual, but you never know.
You should be able to do this using the FINAL TABLE syntax:
select * from final table (insert into yourview values (...) );
This will return the data after all triggers have been fired.
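Wrapped in JDBC, that might look like the following minimal sketch; the view name and its columns are placeholders from the question:

import java.sql.*;

public class InsertAndFetchKey {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:db2://localhost:50000/MYDB", "user", "password");
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT id FROM FINAL TABLE (INSERT INTO yourview (name) VALUES (?))")) {
            ps.setString(1, "example");
            try (ResultSet rs = ps.executeQuery()) {
                if (rs.next()) {
                    // The id the INSTEAD OF trigger pulled from the sequence
                    System.out.println("Generated key: " + rs.getLong("id"));
                }
            }
        }
    }
}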