I have a table facility_HEADER and I want to alter it to add a few columns. I want the newly added columns to default to NULL. The table is already loaded with 14 years of data, and since it is partitioned by year for 2002-2014, the values of these newly added columns should simply come back as NULL for the existing partitions.
create table facility_HEADER
(
A string,
B INT,
C INT
) PARTITIONED BY (year INT COMMENT 'Date Year Incurred')
STORED AS PARQUET;
The ALTER TABLE command:
ALTER TABLE facility_HEADER add columns (MSCLMID Bigint,NPI STRING,UNITS decimal(10,2));
When I run a DESCRIBE on the table, I can see the new columns appended at the end.
But when I run a SELECT * from any of the partitions, it fails with:
Failed with exception
java.io.IOException:org.apache.hadoop.hive.ql.metadata.HiveException:
java.lang.ClassCastException: org.apache.hadoop.io.IntWritable cannot
be cast to org.apache.hadoop.io.LongWritable
The table holds 14 years of data, and I don't want to fix this by putting NULLs into the SELECT clause and giving them aliases.
I tried the suggestions from here and from here.
Can anyone help me understand what actually happened to my table? I lost 14 years of data in one go.
Make a full backup of the files first. Then try to ALTER TABLE and drop the newly added columns; if you haven't written to the table since adding them, it should work. Check that the table is selectable again. Then create a new table with the new columns and INSERT OVERWRITE into it.
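If you want to try that, here is a sketch in HiveQL. Note that REPLACE COLUMNS only rewrites the column list in the metastore and, depending on your Hive version, may be rejected for Parquet-backed tables, so test on a copy first:

-- revert the metadata to the original three columns (assumes nothing was written after the ALTER)
ALTER TABLE facility_HEADER REPLACE COLUMNS (A STRING, B INT, C INT);
-- verify the table is readable again
SELECT * FROM facility_HEADER WHERE year = 2014 LIMIT 10;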
The ALTER command in Hive only changed the metadata, not the underlying data, which is why SELECT * from the table fails.
When Hive reads the data files stored under "/apps/hive/warehouse/databasename.db/tablename/" using the table's row format and file format, it cannot find the column values described by the new row format.
Note: the data is STORED AS PARQUET, and the existing Parquet files carry no definition of the new columns.
Workaround: create a new table, insert the data into it, and rename it to the old table name:
-- assuming the new table is named facility_HEADER_new
INSERT OVERWRITE TABLE facility_HEADER_new PARTITION (year)
SELECT A, B, C, NULL AS MSCLMID, NULL AS NPI, CAST(NULL AS DECIMAL(10,2)) AS UNITS, year
FROM facility_HEADER;
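A fuller sketch of the whole workaround, assuming the replacement table is named facility_HEADER_new (the table name and settings are illustrative):

-- 1. Create the replacement table with the full column list
CREATE TABLE facility_HEADER_new (
  A STRING,
  B INT,
  C INT,
  MSCLMID BIGINT,
  NPI STRING,
  UNITS DECIMAL(10,2)
) PARTITIONED BY (year INT COMMENT 'Date Year Incurred')
STORED AS PARQUET;

-- 2. Allow dynamic partitioning so all years are copied in one statement
SET hive.exec.dynamic.partition = true;
SET hive.exec.dynamic.partition.mode = nonstrict;

-- 3. Run the INSERT ... SELECT shown above to copy the 14 years of data

-- 4. Swap the tables
ALTER TABLE facility_HEADER RENAME TO facility_HEADER_old;
ALTER TABLE facility_HEADER_new RENAME TO facility_HEADER;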
Related
I'm fairly new to developing web apps in Java. I just connected the database and, as seen in the pictures, my ID_patient column is auto_increment, but the insert generated in NetBeans looks like this:
INSERT INTO sys.patient values('5','elif','nil','er','elif#hotmail.com','11111111111','1234a','istanbul')
The inserted record wants this value explicitly, while I want to be able to run
INSERT INTO sys.patient values('elif','nil','er','elif#hotmail.com','11111111111','1234a','istanbul')
and have auto_increment assign the IDs as 1, 2, 3, 4, etc.
How can I fix this?
Thank you.
(screenshots: the table in NetBeans and in MySQL)
If you want an insert query where you don't provide an ID and the database generates it automatically based on the table definition, then you need to write the insert query with the column names listed explicitly.
Example
INSERT INTO <TABLENAME> (COLUMN1, COLUMN2, COLUMN3, COLUMN4)
VALUES
(VALUE1, VALUE2, VALUE3, VALUE4);
So in your case it should be:
INSERT INTO PATIENT(FirstName,MiddleName,LastName,E_mail)
values
('myname','mymiddlename','mylastname','myemailid');
The order of the column names and their values is very important; they must match exactly. If you don't provide a value for a column and it is auto-increment, the DB will assign a value to it; otherwise it will insert a NULL value.
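As a minimal sketch, assuming a trimmed-down version of the patient table from the screenshots (adjust the columns to your actual schema), the table definition and insert could look like this:

CREATE TABLE patient (
  ID_patient  INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  FirstName   VARCHAR(50),
  MiddleName  VARCHAR(50),
  LastName    VARCHAR(50),
  E_mail      VARCHAR(100)
);

-- ID_patient is omitted, so MySQL assigns 1, 2, 3, ... automatically
INSERT INTO patient (FirstName, MiddleName, LastName, E_mail)
VALUES ('elif', 'nil', 'er', 'elif#hotmail.com');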
I'm relatively new to working with JDBC and SQL. I have two tables, CustomerDetails and Cakes. I want to create a third table, called Transactions, which uses the 'Names' column from CustomerDetails, 'Description' column from Cakes, as well as two new columns of 'Cost' and 'Price'. I'm aware this is achievable through the use of relational databases, but I'm not exactly sure about how to go about it. One website I saw said this can be done using ResultSet, and another said using the metadata of the column. However, I have no idea how to go about either.
What you're probably looking to do is create a 'SQL view' (to simplify: a virtual table); see this documentation.
CREATE VIEW view_transactions AS
SELECT customerdetails.Name, cakes.Description -- ... etc.
FROM customerdetails
JOIN cakes ON ...;  -- the join condition depends on how the tables are related
Or something along those lines.
That way you can then query the view view_transactions, for example, as if it were a proper table.
Also, why have you tagged this as mysql when you are using sqlite?
You should create the new table manually, i.e. outside of your program. Use the commandline 'client' sqlite3 for example.
If you need to, you can use the command .schema CustomerDetails in that tool to show the DDL ("metadata" if you want) of the table.
Then you can write your new CREATE TABLE Transactions (...) defining your new columns, plus those from the old tables as they're shown by the .schema command before.
Note that the .schema is only used here to show you the exact column definitions of the existing tables, so you can create matching columns in your new table. If you already know the present column definitions, because you created those tables yourself, you can of course skip that step.
Also note that SELECT Name from CUSTOMERDETAILS will always return the data from that table, but never the structure, i.e. the column definition. That data is useless when trying to derive a column definition from it.
If you really want/have to access the DB's metadata programmatically, the documented way is to query the sqlite_master system table. See also SQLite Schema Information Metadata for example.
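For example, a short sketch of reading a table's DDL from sqlite_master (using the CustomerDetails table from the question):

-- returns the original CREATE TABLE statement for CustomerDetails
SELECT name, sql
FROM sqlite_master
WHERE type = 'table' AND name = 'CustomerDetails';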
You should read up on the concept of data modelling and how relational databases can help you with it; your transactions table might then look just like this:
CREATE TABLE transactions (
id int not null primary key
, customer_id int not null references customerdetails( id )
, cake_id int not null references cakes( id )
, price numeric( 8, 2 ) not null
, quantity int not null
);
This way, you can ensure that for each transaction (which in this case would be just a single line item of an invoice), the referenced cake and customer actually exist.
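For example, assuming customerdetails and cakes each have an id column as the references above imply, recording a sale and reading it back could look like this (the values are made up):

INSERT INTO transactions (id, customer_id, cake_id, price, quantity)
VALUES (1, 42, 7, 19.99, 2);

SELECT cd.Name, ck.Description, t.price, t.quantity
FROM transactions t
JOIN customerdetails cd ON cd.id = t.customer_id
JOIN cakes ck ON ck.id = t.cake_id;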
And I agree with @hanno-binder that it's not the best idea to create all this in plain JDBC.
I am reading from one db table and writing same data to another db table.
Now, if the date fields in the source do not have any data (NULL), then when the row gets written to the destination table, the same fields get defaulted to the date "1/1/1900".
I have not set this default value anywhere in my program, so I am not sure why it is happening.
Any idea how I can prevent that, other than checking each field for null values and setting it to something else?
I am using Eclipse and SQL Server database.
Thanks.
One possibility is that the default value for the date columns is actually set to '1/1/1900'. If so, remove this default using SQL's ALTER TABLE. In SQL Server, column defaults are default constraints, so you drop the constraint by name (a sketch for looking the name up is at the end of this answer):
ALTER TABLE Table1 DROP CONSTRAINT DF_Table1_DateColumn1;  -- use your constraint's actual name
Update: As it looks like this date is the out-of-the-box default, try setting:
ALTER TABLE Table1 ALTER COLUMN DateColumn1 DATETIME NULL
Some important notes here:
Back up your database before you start altering tables(!)
Use the same datatype (e.g. DATETIME) in your ALTER command.
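If you don't know the default constraint's name, here is a sketch of how to look it up in SQL Server's catalog views (Table1 and DateColumn1 are the placeholder names used above):

SELECT dc.name
FROM sys.default_constraints dc
JOIN sys.columns c
  ON c.object_id = dc.parent_object_id
 AND c.column_id = dc.parent_column_id
WHERE dc.parent_object_id = OBJECT_ID('Table1')
  AND c.name = 'DateColumn1';

-- then drop it using the name returned above:
-- ALTER TABLE Table1 DROP CONSTRAINT <constraint_name>;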
I am using an Excel sheet for reading values and updating them in the DB. I have two questions:
How do I avoid duplicates in the DB table when the same value is added in the Excel sheet?
If a new value is added in the Excel sheet, I currently re-run the Java console program and execute the query in the DB to see the results. I don't want that; instead, whenever a value is modified or added in the Excel sheet, it should automatically be reflected in the DB table.
Is there any way to do that?
1) To avoid duplicates in the DB table, just make the column unique. Inserts/updates that would create a duplicate will simply fail (or you can handle them explicitly, as sketched after the example below).
create table mytable (
id int primary key,
name varchar(255) unique not null
);
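If you would rather handle duplicates explicitly than let the insert fail, a sketch using MySQL's INSERT IGNORE / ON DUPLICATE KEY UPDATE (column names follow the example table above; the values are made up):

-- skip rows that would violate the unique constraint
INSERT IGNORE INTO mytable (id, name) VALUES (1, 'chocolate cake');

-- or update the existing row instead of failing
INSERT INTO mytable (id, name) VALUES (1, 'chocolate cake')
ON DUPLICATE KEY UPDATE name = VALUES(name);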
2) If you want changes to be reflected directly in the DB, I suggest you just link MS Access directly to the DB table. It looks very much like Excel and is probably what you want.
You can also try a free Access alternative like OpenOffice.org Base.
I'm currently working on my first Java application based on a MySQL DB. I'm using EclipseLink 2.0 and NetBeans, and at the moment I am facing a behaviour I cannot explain; maybe someone has stumbled over this problem in the past and can help me. Here goes:
Table 1 has the PK of table 2 as an FK. On the application side, there is a UI where users can generate content for table 1. The value for the FK (Table2ID) is chosen with a dropdown menu, which gets its values by reading the collection of table 2 rows. Now, when I try to change the value of the FK to another (already existing) value, instead of doing just that, a new row with a fresh ID is generated in table 2, with all other column values cloned from the row I tried to point the FK at. So, for example, when I try to set table 1 rows 3, 4 and 5 to table1.fkcolumn = 6 (i.e. Table2ID = 6), the program instead clones the row with ID = 6 three times and points each of the table 1 rows at one of the clones.
Any help would be greatly appreciated.
The problem is that you are changing the primary key of an entity. In EclipseLink, when you change the PK of an entity, you effectively have a new entity. As such, EclipseLink inserts the new rows and leaves the old rows alone.
To get around this you have three choices:
1) Change the database. Primary keys really shouldn't be changed.
2) Have the application execute an update query that changes the key values directly, then re-query the affected entities (see the sketch below).
3) Delete the old rows and re-create with a new primary key.
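For option 2, a raw SQL sketch (assuming table1 has an id column and that its FK column is the Table2ID from the question; the names are illustrative):

-- point rows 3, 4 and 5 of table1 at the existing table2 row with ID 6
UPDATE table1 SET Table2ID = 6 WHERE id IN (3, 4, 5);

After running this (e.g. as a native query), refresh or re-query the affected entities so the persistence context picks up the new values.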