Searching for a garbled character `�` in the entire database - Oracle - java

We recently fixed an issue with the character encoding being read incorrectly into our system from the text files by making sure the file is UTF-8 and the Java code opens these files in UTF-8 encoding.
However, we had ended up adding a lot of records with incorrect characters across the database tables, i.e. °F was read as �F. So even though the encoding is fixed now, we need to clean up the database tables to rectify this anomaly.
Can anyone suggest ways to achieve this?

I had a similar problem a while back. Luckily, the number of columns that it affected was limited to a small number, and those columns had the same name throughout the database.
I solved this by writing a script that does the following:
disable foreign key constraints
build up a list of tables which contain the target columns
update all the tables in your list using a REGEXP_REPLACE
commit the data
re-enable the constraints
This used a healthy dose of dynamic SQL, pulling data from the user_constraints and user_tab_columns, filtering on the specific column names I was targeting.
Here's a rough skeleton to get you started. I've just thrown it together quickly, so it isn't tested. Also, if you have triggers to worry about, you'll need to disable those too:
-- disable constraints
BEGIN
  FOR c IN (
    SELECT c.owner, c.table_name, c.constraint_name, c.constraint_type
    FROM user_constraints c
    INNER JOIN user_tables t ON t.table_name = c.table_name
    WHERE c.status = 'ENABLED'
    AND c.constraint_type NOT IN ('C', 'P')
    -- disable foreign keys ('R') before the unique constraints they may reference
    ORDER BY c.constraint_type ASC
  )
  LOOP
    dbms_utility.exec_ddl_statement('alter table '||c.table_name||' disable constraint ' || c.constraint_name);
  END LOOP;
END;
/
-- do the updates
BEGIN
  FOR t IN (
    SELECT table_name, column_name
    FROM user_tab_columns
    WHERE column_name = 'TEMPERATURE'
    AND data_type = 'VARCHAR2'
  )
  LOOP
    -- UPDATE is DML, not DDL, so use EXECUTE IMMEDIATE here
    EXECUTE IMMEDIATE 'UPDATE '||t.table_name||' SET '||t.column_name||
                      ' = ''GOOD VALUE'' WHERE '||t.column_name||' = ''BAD VALUE''';
  END LOOP;
  COMMIT;
END;
/
-- re-enable constraints
BEGIN
  FOR c IN (
    SELECT c.owner, c.table_name, c.constraint_name, c.constraint_type
    FROM user_constraints c
    INNER JOIN user_tables t ON t.table_name = c.table_name
    WHERE c.status = 'DISABLED'
    AND c.constraint_type NOT IN ('C', 'P')
    -- re-enable unique constraints ('U') before the foreign keys that reference them
    ORDER BY c.constraint_type DESC
  )
  LOOP
    dbms_utility.exec_ddl_statement('alter table '||c.table_name||' enable constraint ' || c.constraint_name);
  END LOOP;
END;
/

Related

Convert Oracle Merge to MySQL Update

I'm attempting to convert an Oracle MERGE statement to a MySQL UPDATE statement. This particular MERGE statement only does an update (no inserts), so I am unclear why the previous engineer used a MERGE statement.
Regardless, I now need to convert this to MySQL and am not clear how this is done. (Side note: I'm doing this within a Java app.)
Here is the MERGE statement:
MERGE INTO table1 a
USING
(SELECT DISTINCT(ROWID) AS ROWID FROM table2
WHERE DATETIMEUTC >= TO_TIMESTAMP('formatter.format(dWV.getTime())','YYYY-MM-DD HH24:MI:SS')) b
ON(a.ROWID = b.ROWID and
a.STATE = 'WV' and a.LAST_DTE = trunc(SYSDATE))
WHEN MATCHED THEN UPDATE SET a.THISIND = 'S';
My attempt goes something like this:
UPDATE table1 a
INNER JOIN table2 b ON (a.ROWID = b.ROWID
and a.STATE = 'WV'
and a.LAST_DTE = date(sysdate()))
SET a.THISIND = 'S'
WHERE DATETIMEUTC >= TO_TIMESTAMP('formatter.form(dWV.getTime())', 'YYYY-MM-DD HH24:MI:SS')
However, I'm unclear whether this is actually doing the same thing.
As you noted, the original Oracle MERGE statement only performs updates, no inserts.
The general syntax of your MySQL query looks OK compared to the Oracle version. Here is an updated version:
UPDATE table1 a
INNER JOIN table2 b
ON a.ROWID = b.ROWID
AND b.DATETIMEUTC >= 'formatter.form(dWV.getTime())'
SET a.THISIND = 'S'
WHERE
a.STATE = 'WV'
AND a.LAST_DTE = CURDATE()
Changes:
the current date can be obtained with the function CURDATE()
'YYYY-MM-DD HH24:MI:SS' is the default format for MySQL dates, hence you do not need to convert it; you may just pass it as is (NB1: it is unclear what 'formatter.form(dWV.getTime())' actually means; NB2: if you ever need to translate a string to a date, STR_TO_DATE is your friend); see the sketch after this list
the filter conditions on table a are better placed in the WHERE clause, while those on table b belong in the INNER JOIN
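Since 'formatter.form(dWV.getTime())' looks like a Java expression spliced into the SQL string, one way to sidestep the date-format question entirely is to bind the value with a PreparedStatement. A minimal, untested sketch; the table and column names come from the question, while markSent and cutoff are made-up names:
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Timestamp;
import java.util.Date;

public class ThisIndUpdater {
    // cutoff would be dWV.getTime() from the question's code
    static int markSent(Connection conn, Date cutoff) throws SQLException {
        String sql =
            "UPDATE table1 a " +
            "INNER JOIN table2 b ON a.ROWID = b.ROWID AND b.DATETIMEUTC >= ? " +
            "SET a.THISIND = 'S' " +
            "WHERE a.STATE = 'WV' AND a.LAST_DTE = CURDATE()";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            // bind the timestamp directly; no TO_TIMESTAMP/STR_TO_DATE formatting needed
            ps.setTimestamp(1, new Timestamp(cutoff.getTime()));
            return ps.executeUpdate();
        }
    }
}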

Generate ANSI SQL INSERT INTO

I have an Oracle database with 10 tables. Some of the tables have CLOB text data. I need to export data from these tables pro-grammatically using java. The export data should be in ANSI INSERT INTO SQL format, for example:
INSERT INTO table_name (column1, column2, column3, ...)
VALUES (value1, value2, value3, ...);
The main idea is that I need to import this data into three different databases:
ORACLE, MSSQL and MySQL. As far as I know, all these databases support ANSI INSERT INTO, but I have not found any Java API/framework for generating SQL data scripts. And I do not know how to deal with CLOB data, how to export it.
What is the best way to export data from a database with java?
UPDATE: (01.07.2018)
I guess it is impossible to insert text data of more than 4000 bytes according to this answer. How can I generate PL/SQL scripts with Java programmatically? Or is there any other export format which supports ORACLE, MSSQL, etc.?
Have you ever thought about a proper ORM API? The first thing that comes to mind is Hibernate, or more abstractly JPA/JPQL. The framework knows all the main SQL dialects; all you need is to define your connections with your dialects. Then you retrieve the data from one database, it is mapped into POJOs, and you push (insert) the data to the other (different-dialect) connection. I think this should work well, even if I never did it myself. JPA is not new and is widely used for the sake of changing the database even when the software is already in production. This approach is a bit slow, though, since every row gets transformed into a POJO and, AFAIK, no bulk insertion is available.
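To illustrate, a minimal sketch of that idea, assuming two persistence units named "oraclePU" and "mysqlPU" configured in persistence.xml and a JPA-mapped entity MyRecord (all hypothetical names):
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;
import java.util.List;

public class TableCopier {
    public static void copy() {
        EntityManagerFactory srcEmf = Persistence.createEntityManagerFactory("oraclePU");
        EntityManagerFactory dstEmf = Persistence.createEntityManagerFactory("mysqlPU");
        EntityManager src = srcEmf.createEntityManager();
        EntityManager dst = dstEmf.createEntityManager();
        try {
            // rows come back from the source database as mapped POJOs...
            List<MyRecord> rows =
                src.createQuery("SELECT r FROM MyRecord r", MyRecord.class).getResultList();
            // ...and the provider emits the target dialect's SQL on the way back in
            dst.getTransaction().begin();
            for (MyRecord r : rows) {
                dst.merge(r); // row by row; as noted above, not bulk-fast
            }
            dst.getTransaction().commit();
        } finally {
            src.close(); dst.close(); srcEmf.close(); dstEmf.close();
        }
    }
}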
If you are looking for SQL generation, there are many SQL-builder libraries available which you can use.
You can use ResultSet metadata to get the column names and types from the SELECT * query and use them in the INSERT query.
See https://github.com/jkrasnay/sqlbuilder
More about it at http://john.krasnay.ca/2010/02/15/building-sql-in-java.html
If your need is to export tables from an Oracle database and insert them back into different types of databases, I would suggest a different approach.
This is the perfect use case for JPA (Java Persistence API), which allows you to create a model that represents your database structure. This is the current Java solution for managing different types of databases.
From your model you will be able to generate request compatible with all popular databases.
So my suggestion is, using Spring Boot + Spring Data + Spring Batch :
Create a first app from your model that exports the content of your tables to CSV format.
Create a second app from the same model that imports your CSV files. Depending on your JDBC URL, Spring Boot will automatically trigger the appropriate dialect for your target database and generate the right queries (this is also the case for the export).
This can be done within a reasonable amount of time and with decent performance.
I like "pro-grammatically" very much. :)
The best way to export that data is to iterate over the tables, query each of them, and output plain text with INSERT INTO statements. It can be a problem if you have binary data there, since different RDBMSs deal with it in slightly different ways. A sketch of that loop follows.
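A rough, untested sketch of that loop with plain JDBC; it renders every value as a quoted string, so numbers, dates and binary columns would still need type-aware handling:
import java.sql.*;

public class InsertScriptExporter {
    public static void export(Connection conn) throws SQLException {
        DatabaseMetaData md = conn.getMetaData();
        try (ResultSet tables = md.getTables(null, conn.getSchema(), "%", new String[] {"TABLE"})) {
            while (tables.next()) {
                String table = tables.getString("TABLE_NAME");
                try (Statement st = conn.createStatement();
                     ResultSet rs = st.executeQuery("SELECT * FROM " + table)) {
                    ResultSetMetaData rmd = rs.getMetaData();
                    while (rs.next()) {
                        StringBuilder sb = new StringBuilder("INSERT INTO ").append(table).append(" VALUES (");
                        for (int i = 1; i <= rmd.getColumnCount(); i++) {
                            if (i > 1) sb.append(", ");
                            String v = rs.getString(i); // CLOBs usually come back as Strings here (driver-dependent)
                            sb.append(v == null ? "NULL" : "'" + v.replace("'", "''") + "'");
                        }
                        System.out.println(sb.append(");"));
                    }
                }
            }
        }
    }
}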
Reading a BLOB/CLOB on the Java side means reading a stream. It can be a binary or a character stream. For Oracle, from the doc, you can do:
ResultSet rs = s.executeQuery("SELECT text FROM documents WHERE id = 1477");
while (rs.next()) {
    java.sql.Clob aclob = rs.getClob(1);
    java.io.InputStream ip = rs.getAsciiStream(1);
    int c = ip.read();
    while (c > 0) {
        System.out.print((char)c);
        c = ip.read();
    }
    System.out.print("\n");
}
From this answer, you can make it shorter:
Clob clob = resultSet.getClob("CLOB_COLUMN");
String clob_content = clob.getSubString(1, (int) clob.length());
Writing the output would perhaps require dealing with \t, \n and \r, depending on your needs and content. The docs have a full example of reading and writing; they use a prepared statement, which is why streams are needed at both ends. If your CLOB is not big (like 32k/64k) there can be other limits. If you post an example (say, a CREATE TABLE with 2-3 rows) it would be much easier for anyone to write code and provide something that works. A sketch of the escaping step is below.
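As an illustration of that escaping step, here is a tiny helper for rendering a Java string as a SQL literal; it only doubles embedded quotes, so decide for yourself how to treat tabs and newlines:
public class SqlText {
    /** Render a Java string as a single-quoted SQL literal (NULL-safe). */
    public static String literal(String s) {
        if (s == null) return "NULL";
        // double any embedded single quotes; \t \n \r are legal inside a
        // literal but may need normalizing if the script must stay one-line
        return "'" + s.replace("'", "''") + "'";
    }
}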
I need to export data from these tables programmatically using java
Come on! What is the matter? Java is a tool to operate on data, not to migrate it. If it's about ETL, please use the ETL environment of the target DBMS or write ETL code of your own.
For MSSQL and ORACLE you can use the MERGE syntax (with a USING clause for the data), which is ANSI standard:
MERGE INTO tablename USING table_reference ON (condition)
WHEN MATCHED THEN
UPDATE SET column1 = value1 [, column2 = value2 ...]
WHEN NOT MATCHED THEN
INSERT (column1 [, column2 ...]) VALUES (value1 [, value2 ...]);
For MySQL there is a different syntax (the ON DUPLICATE KEY UPDATE statement). For comparison, here is the Oracle MERGE version:
-- Insert new or merge into existing row.
MERGE INTO system_user target
USING (SELECT 1 AS system_user_id
, 'SYSADMIN' AS system_user_name
, 1 AS system_user_group_id
, 1 AS system_user_type
, 'Samuel' AS first_name
, 'the' AS middle_name
, 'Lamanite' AS last_name
, 1 AS created_by
, SYSDATE AS creation_date
, 1 AS last_updated_by
, SYSDATE AS last_update_date
FROM dual) SOURCE
ON (target.system_user_id = SOURCE.system_user_id)
WHEN MATCHED THEN
UPDATE SET first_name = 'Samuel'
, middle_name = 'the'
, last_name = 'Lamanite'
, last_updated_by = 1
, last_update_date = SYSDATE
WHEN NOT MATCHED THEN
INSERT
( target.system_user_id
, target.system_user_name
, target.system_user_group_id
, target.system_user_type
, target.first_name
, target.middle_name
, target.last_name
, target.created_by
, target.creation_date
, target.last_updated_by
, target.last_update_date )
VALUES
( SOURCE.system_user_id
, SOURCE.system_user_name
, SOURCE.system_user_group_id
, SOURCE.system_user_type
, SOURCE.first_name
, SOURCE.middle_name
, SOURCE.last_name
, SOURCE.created_by
, SOURCE.creation_date
, SOURCE.last_updated_by
, SOURCE.last_update_date );
and the MySQL equivalent:
-- Insert new or merge into existing row.
INSERT INTO system_user
( system_user_name
, system_user_group_id
, system_user_type
, first_name
, middle_name
, last_name
, created_by
, creation_date
, last_updated_by
, last_update_date )
VALUES
('SYSADMIN'
, 1
, 1
,'Samuel'
,'the'
,'Lamanite'
, 1
, NOW()
, 1
, NOW())
ON DUPLICATE KEY
UPDATE first_name = 'Samuel'
, middle_name = 'the'
, last_name = 'Lamanite'
, last_updated_by = 1
, last_update_date = UTC_DATE();
I would try Scriptella.
It is an open source ETL and script execution tool written in Java, in which you define the source and target connections, and transformations if needed, in an XML file. Connections can be JDBC or even text files, and there are features for batching support. The resulting XML file can be processed programmatically with Java, Ant or the command line.
In their two-minute tutorial there are examples for copying tables to another database and working with BLOBs.

How to upsert(update if exists, else insert) into a table using jdbcTemplate [duplicate]

The UPSERT operation either updates or inserts a row in a table, depending on whether the table already has a row that matches the data:
if a row exists in table t with key X:
update t set mystuff... where mykey=X
else
insert into t mystuff...
Since Oracle doesn't have a specific UPSERT statement, what's the best way to do this?
The MERGE statement merges data between two tables. Using DUAL allows us to use this command. Note that this is not protected against concurrent access.
create or replace procedure ups(xa number)
as
begin
    merge into mergetest m using dual on (a = xa)
    when not matched then insert (a,b) values (xa,1)
    when matched then update set b = b+1;
end ups;
/
drop table mergetest;
create table mergetest(a number, b number);
call ups(10);
call ups(10);
call ups(20);
select * from mergetest;
A                      B
---------------------- ----------------------
10                     2
20                     1
The dual example above, which is in PL/SQL, was great because I wanted to do something similar, but client-side... so here is the SQL I used to send a similar statement directly from some C#:
MERGE INTO Employee USING dual ON ( "id"=2097153 )
WHEN MATCHED THEN UPDATE SET "last"="smith" , "name"="john"
WHEN NOT MATCHED THEN INSERT ("id","last","name")
VALUES ( 2097153,"smith", "john" )
However, from a C# perspective this proved to be slower than doing the update, checking whether the rows affected was 0, and doing the insert if it was.
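Since the question title mentions jdbcTemplate: a hedged sketch of sending the same kind of MERGE through Spring's JdbcTemplate with bind variables. The employee table and its columns are stand-ins for the example above, assuming ordinary unquoted identifiers:
import javax.sql.DataSource;
import org.springframework.jdbc.core.JdbcTemplate;

public class EmployeeUpsertDao {
    private final JdbcTemplate jdbc;

    public EmployeeUpsertDao(DataSource ds) {
        this.jdbc = new JdbcTemplate(ds);
    }

    /** Updates the row if the id exists, inserts it otherwise. */
    public int upsert(long id, String last, String name) {
        String sql =
            "MERGE INTO employee e USING dual ON (e.id = ?) " +
            "WHEN MATCHED THEN UPDATE SET e.last = ?, e.name = ? " +
            "WHEN NOT MATCHED THEN INSERT (id, last, name) VALUES (?, ?, ?)";
        // JdbcTemplate.update runs any DML statement and returns the affected-row count
        return jdbc.update(sql, id, last, name, id, last, name);
    }
}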
An alternative to MERGE (the "old fashioned way"):
begin
    insert into t (mykey, mystuff)
    values ('X', 123);
exception
    when dup_val_on_index then
        update t
        set mystuff = 123
        where mykey = 'X';
end;
Another alternative without the exception check:
UPDATE tablename
SET val1 = in_val1,
val2 = in_val2
WHERE val3 = in_val3;
IF ( sql%rowcount = 0 )
THEN
INSERT INTO tablename
VALUES (in_val1, in_val2, in_val3);
END IF;
Insert if not exists, then update:
INSERT INTO mytable (id1, t1)
SELECT 11, 'x1' FROM DUAL
WHERE NOT EXISTS (SELECT id1 FROM mytable WHERE id1 = 11);
UPDATE mytable SET t1 = 'x1' WHERE id1 = 11;
None of the answers given so far is safe in the face of concurrent access, as pointed out in Tim Sylvester's comment; they will raise exceptions in case of races. To fix that, the insert/update combo must be wrapped in some kind of loop statement, so that in case of an exception the whole thing is retried.
As an example, here's how Grommit's code can be wrapped in a loop to make it safe when run concurrently:
PROCEDURE MyProc (
    ...
) IS
BEGIN
    LOOP
        BEGIN
            MERGE INTO Employee USING dual ON ( "id"=2097153 )
                WHEN MATCHED THEN UPDATE SET "last"="smith" , "name"="john"
                WHEN NOT MATCHED THEN INSERT ("id","last","name")
                    VALUES ( 2097153,"smith", "john" );
            EXIT; -- success? -> exit loop
        EXCEPTION
            WHEN NO_DATA_FOUND THEN -- the entry was concurrently deleted
                NULL; -- exception? -> no op, i.e. continue looping
            WHEN DUP_VAL_ON_INDEX THEN -- an entry was concurrently inserted
                NULL; -- exception? -> no op, i.e. continue looping
        END;
    END LOOP;
END;
N.B. In transaction mode SERIALIZABLE, which I don't recommend by the way, you might instead run into ORA-08177 (can't serialize access for this transaction) exceptions.
I like Grommit's answer, except that it requires duplicated values. I found a solution where each value may appear once: http://forums.devshed.com/showpost.php?p=1182653&postcount=2
MERGE INTO KBS.NUFUS_MUHTARLIK B
USING (
SELECT '028-01' CILT, '25' SAYFA, '6' KUTUK, '46603404838' MERNIS_NO
FROM DUAL
) E
ON (B.MERNIS_NO = E.MERNIS_NO)
WHEN MATCHED THEN
UPDATE SET B.CILT = E.CILT, B.SAYFA = E.SAYFA, B.KUTUK = E.KUTUK
WHEN NOT MATCHED THEN
INSERT ( CILT, SAYFA, KUTUK, MERNIS_NO)
VALUES (E.CILT, E.SAYFA, E.KUTUK, E.MERNIS_NO);
I've been using the first code sample for years. Notice sql%notfound rather than sql%rowcount.
UPDATE tablename SET val1 = in_val1, val2 = in_val2
WHERE val3 = in_val3;
IF ( sql%notfound ) THEN
INSERT INTO tablename
VALUES (in_val1, in_val2, in_val3);
END IF;
The code below is possibly the new and improved way:
MERGE INTO tablename USING dual ON ( val3 = in_val3 )
WHEN MATCHED THEN UPDATE SET val1 = in_val1, val2 = in_val2
WHEN NOT MATCHED THEN INSERT
VALUES (in_val1, in_val2, in_val3)
In the first example the update does an index lookup. It has to, in order to update the right row. Oracle opens an implicit cursor, and we use it to wrap a corresponding insert so we know that the insert will only happen when the key does not exist. But the insert is an independent command and it has to do a second lookup. I don't know the inner workings of the merge command but since the command is a single unit, Oracle could execute the correct insert or update with a single index lookup.
I think MERGE is better when you have some processing to do that means taking data from some tables and updating a table, possibly inserting or deleting rows. But for the single-row case, you may consider the first approach, since the syntax is more common.
A note regarding the two solutions that suggest:
1) Insert, if exception then update,
or
2) Update, if sql%rowcount = 0 then insert
The question of whether to insert or update first is also application dependent. Are you expecting more inserts or more updates? The one that is most likely to succeed should go first.
If you pick the wrong one you will get a bunch of unnecessary index reads. Not a huge deal but still something to consider.
Try this:
insert into b_building_property (
select
'AREA_IN_COMMON_USE_DOUBLE','Area in Common Use','DOUBLE', null, 9000, 9
from dual
)
minus
(
select * from b_building_property where id = 9
)
;
From http://www.praetoriate.com/oracle_tips_upserts.htm:
"In Oracle9i, an UPSERT can accomplish this task in a single statement:"
INSERT FIRST
    WHEN credit_limit >= 100000 THEN
        INTO rich_customers VALUES (cust_id, cust_credit_limit)
        INTO customers
    ELSE
        INTO customers
SELECT * FROM new_customers;

Detect, delete empty columns and update database in sql, oracle

I have 100 or so columns and some of them don't have any values inside (they are empty). How can I search for the empty columns, delete them from the table, and update the database? I tried this query but it doesn't work; it shows 0 rows selected. After selecting, how can I update the database?
select table_name, column_name
from all_tab_columns
where table_name='some_table'
and column_name is NULL;
Thanks,
You are querying a data dictionary view. It shows metadata, information about the database. This view, ALL_TAB_COLUMNS, shows information for every column of every table (you have privileges on). Necessarily COLUMN_NAME cannot be null, hence your query returns no rows.
Now what you want to do is query every table and find which columns have no data in them. This requires dynamic SQL. You will need to query ALL_TAB_COLUMNS, so you weren't completely off-base.
Because of dynamic SQL this is a programmatic solution, so the results are displayed with DBMS_OUTPUT.
set serveroutput on size unlimited
Here is an anonymous block; it might take some time to run. The join to USER_TABLES is necessary because columns from views are included in USER_TAB_COLUMNS and we don't want those in the result set.
declare
dsp varchar2(32767);
stmt varchar2(32767);
begin
<< tab_loop >>
for trec in ( select t.table_name
from user_tables t )
loop
stmt := 'select ';
dbms_output.put_line('table name = '|| trec.table_name);
<< col_loop >>
for crec in ( select c.column_name
, row_number() over (order by c.column_id) as rn
from user_tab_columns c
where c.table_name = trec.table_name
and c.nullable = 'Y'
order by c.column_id )
loop
if crec.rn > 1 then stmt := concat(stmt, '||'); end if;
stmt := stmt||''''||crec.column_name||'=''||'
||'to_char(count('||crec.column_name||')) ';
end loop col_loop;
stmt := stmt || ' from '||trec.table_name;
execute immediate stmt into dsp;
dbms_output.put_line(dsp);
end loop tab_loop;
end;
sample output:
table name = MY_PROFILER_RUN_EVENTS
TOT_EXECS=0TOT_TIME=0MIN_TIME=0MAX_TIME=0
table name = LOG_TABLE
PKG_NAME=0MODULE_NAME=0CLIENT_ID=0
PL/SQL procedure successfully completed.
SQL>
Any column where the COUNT=0 has no values in it.
Now whether you actually want to drop such columns is a different matter. You might break programs which depend on them. So you need an impact analysis first. This is why I have not produced a program which automatically drops the empty columns. I think that would be dangerous practice.
It is crucial that changes to our database structure are considered and audited. So if I were ever to undertake an exercise like this I would alter the output from the program above so that it produced a script of DROP COLUMN statements which I could review, edit and keep under source control. A sketch of that variant follows.
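As an illustration, a hedged Java/JDBC sketch of that variant: it counts values per nullable column (one query per column, so slower than the PL/SQL block above) and prints a reviewable script instead of executing anything. It assumes an open Connection to the schema in question:
import java.sql.*;

public class EmptyColumnScript {
    public static void print(Connection conn) throws SQLException {
        String cols =
            "SELECT c.table_name, c.column_name " +
            "FROM user_tab_columns c JOIN user_tables t ON t.table_name = c.table_name " +
            "WHERE c.nullable = 'Y'";
        try (Statement s = conn.createStatement(); ResultSet rs = s.executeQuery(cols)) {
            while (rs.next()) {
                String tab = rs.getString(1), col = rs.getString(2);
                try (Statement s2 = conn.createStatement();
                     ResultSet cnt = s2.executeQuery("SELECT COUNT(" + col + ") FROM " + tab)) {
                    cnt.next();
                    if (cnt.getLong(1) == 0) {
                        // emit a statement for review; deliberately do NOT execute it
                        System.out.println("ALTER TABLE " + tab + " DROP COLUMN " + col + ";");
                    }
                }
            }
        }
    }
}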

Listagg function and ORA-01489: result of string concatenation is too long

When I run the following query:
Select
tm.product_id,
listagg(tm.book_id || '(' || tm.score || ')',',')
within group (order by tm.product_id) as matches
from
tl_product_match tm
where
tm.book_id is not null
group by
tm.product_id
Oracle returns the following error:
ORA-01489: result of string concatenation is too long
I know that the reason it is failing is that the LISTAGG function is trying to concatenate values into a result longer than 4000 characters, which is not supported.
I have seen the alternative examples described here - http://www.oracle-base.com/articles/misc/string-aggregation-techniques.php - but they all require the use of functions or procedures.
Is there a solution that is pure SQL without having to call a function or stored procedure and being able to read the value using standard JDBC?
The other difficulty I have is that most string aggregation examples I have seen show how to read the value as is. In my example above I am modifying the value first (i.e. I am aggregating two columns).
You can use XML functions to do it, which return a CLOB. JDBC should be just fine with that; a sketch of the JDBC side follows the query.
select tm.product_id,
rtrim(extract(xmlagg(xmlelement(e, tm.book_id || '(' || tm.score || '),')),
'/E/text()').getclobval(), ',')
from tl_product_match tm
where tm.book_id is not null
group by tm.product_id;
eg: http://sqlfiddle.com/#!4/083a2/1
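On the JDBC side the CLOB can be read with plain java.sql types. A minimal sketch, assuming the XMLAGG query above is held in a String named sql and conn is an open Connection:
import java.sql.*;

public class MatchReader {
    static void dump(Connection conn, String sql) throws SQLException {
        try (Statement s = conn.createStatement(); ResultSet rs = s.executeQuery(sql)) {
            while (rs.next()) {
                long productId = rs.getLong(1);
                Clob clob = rs.getClob(2);
                // materializing the whole CLOB is fine here if the aggregate
                // only somewhat exceeds the 4000-character VARCHAR2 limit
                String matches = clob.getSubString(1, (int) clob.length());
                System.out.println(productId + " -> " + matches);
            }
        }
    }
}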
Why not use nested tables?
set echo on;
set display on;
set linesize 200;
drop table testA;
create table testA
(
col1 number,
col2 varchar2(50)
);
drop table testB;
create table testB
(
col1 number,
col2 varchar2(50)
);
create or replace type t_vchar_tab as table of varchar2(50);
insert into testA values (1,'A');
insert into testA values (2,'B');
insert into testB values (1,'X');
insert into testB values (1,'Y');
insert into testB values (1,'Z');
commit;
-- select all related testB.col2 values in a nested table for each testA.col1 value
select a.col1,
cast(multiset(select b.col2 from testB b where b.col1 = a.col1 order by b.col2) as t_vchar_tab) as testB_vals
from testA a;
-- test size > 4000
insert into testB
select 2 as col1, substr((object_name || object_type), 1, 50) as col2
from all_objects;
commit;
-- select all related testB.col2 values in a nested table for each testA.col1 value
select a.col1,
cast(multiset(select b.col2 from testB b where b.col1 = a.col1 order by b.col2) as t_vchar_tab) as testB_vals
from testA a;
I'm no Java expert, but this has been around for some time and I'm sure Java can pull the values out of the nested table; a sketch follows. And there's no need to tokenize a delimited string on the other end.
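For the Java side, a hedged sketch using java.sql.Array, assuming the Oracle JDBC driver and the final query from the block above held in a String named sql:
import java.sql.*;

public class NestedTableReader {
    static void dump(Connection conn, String sql) throws SQLException {
        try (Statement s = conn.createStatement(); ResultSet rs = s.executeQuery(sql)) {
            while (rs.next()) {
                long col1 = rs.getLong(1);
                Array arr = rs.getArray(2); // the T_VCHAR_TAB collection column
                Object[] vals = (Object[]) arr.getArray();
                System.out.println(col1 + " -> " + java.util.Arrays.toString(vals));
                arr.free();
            }
        }
    }
}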
I have seen the alternative examples described here - http://www.oracle-base.com/articles/misc/string-aggregation-techniques.php - but they all require the use of functions or procedures.
No they don't. Scroll down and you'll see several options that don't require pl/sql.
