Error while loading CSV file to Oracle DB using SQL Loader - java

I have a CSV file that contains multiline data. The important thing to note about the file is that each record ends with a CRLF, and an incomplete multiline record ends with an LF. If I use the .ctl file below for SQL*Loader, the records load successfully. Below is the CSV file snapshot (correct file.jpg) and the ctl file.
ctl File :
OPTIONS (
ERRORS=1,
SKIP=1
)
LOAD DATA
CHARACTERSET 'UTF8'
INFILE 'C:\Users\puspender.tanwar\Desktop\try_SQLLDR\csvfile.csv'
BADFILE 'C:\Users\puspender.tanwar\Desktop\try_SQLLDR\badfile.bad'
DISCARDFILE 'C:\Users\puspender.tanwar\Desktop\try_SQLLDR\DSCfile.dsc'
CONTINUEIF LAST != '"'
INTO TABLE ODI_DEV_TARGET.CASE
FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(
ID "REPLACE(:ID,'<br>',chr(10))",
ISDELETED "CASE WHEN :ISDELETED='true' then 'T' ELSE 'F' END",
CASENUMBER "REPLACE(:CASENUMBER,'<br>',chr(10))",
DESCRIPTION CHAR(30000)
)
Now if I load another CSV file that contains exactly the same data, the records fail to load. The only difference in this second CSV file is that the records end with LF and incomplete records end with CRLF. I used the same ctl file but got the error: Rejected - Error on table ODI_DEV_TARGET.CASE, column DESCRIPTION. second enclosure string not present. Below is a snapshot of the second CSV file.
I also noticed that if I change the INFILE option of the ctl file to INFILE 'C:\Users\puspender.tanwar\Desktop\try_SQLLDR\csvfile.csv' "str '\r\n'", the records also get loaded, but only for the first CSV. So I thought that if I used "str '\n'" instead of "str '\r\n'", the second CSV's records would load, but unfortunately that did not happen.
Please advise how to handle this by modifying the .ctl file, or any other way to resolve this.
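One workaround, if the control file cannot be made to cope, would be to normalize the second file in Java before calling sqlldr so that it follows the same convention as the first file (CRLF at real record ends, LF inside fields), and then reuse the ctl file that already works. This is only a sketch under that assumption; the file names are hypothetical and the whole file is read into memory:
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class SwapLineEndings {
    public static void main(String[] args) throws Exception {
        // Hypothetical input/output paths; adjust to the real files.
        String raw = new String(
                Files.readAllBytes(Paths.get("csvfile2.csv")), StandardCharsets.UTF_8);

        // Swap the two terminators: CRLF (line breaks inside fields) becomes LF,
        // and LF (real record ends) becomes CRLF, so the original .ctl file applies.
        String swapped = raw
                .replace("\r\n", "\u0000")   // park the embedded CRLFs
                .replace("\n", "\r\n")       // promote record-ending LFs to CRLF
                .replace("\u0000", "\n");    // restore the embedded breaks as LF

        Files.write(Paths.get("csvfile2_fixed.csv"),
                swapped.getBytes(StandardCharsets.UTF_8));
    }
}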

Related

Amazon S3 Select Issue: not supporting line break occurring inside fields

I am trying to use Amazon S3 Select to read records from a CSV file, and if a field contains a line break (\n), the record is not parsed as a single record. The line break inside the field is properly escaped by double quotes, as per the standard CSV format.
For example, the below CSV file
Id,Name,Age,FamilyName,Place
p1,Albert Einstein,25,"Einstein
Cambridge",Cambridge
p2,Thomas Edison,30,"Edison
Cardiff",Cardiff
is being parsed as
Line 1 : Id,Name,Age,FamilyName,Place
Line 2 : p1,Albert Einstein,25,"Einstein
Line 3 : Cambridge",Cambridge
Line 4 : p2,Thomas Edison,30,"Edison
Line 5 : Cardiff",Cardiff
Ideally it should have been parsed as given below:
Line 1:
Id,Name,Age,FamilyName,Place
Line 2:
p1,Albert Einstein,25,"Einstein
Cambridge",Cambridge
Line 3:
p2,Thomas Edison,30,"Edison
Cardiff",Cardiff
I'm setting AllowQuotedRecordDelimiter to TRUE in the SelectObjectContentRequest as given in their documentation. It's still not working.
Does anyone know if Amazon S3 Select supports line breaks inside fields as described in the case above? Or are there any other parameters I need to change or set to make this work?
This is being parsed and printed correctly; the confusion is that the literal newline is being printed in the output. You can test this by running the following expression on the given CSV:
SELECT COUNT(*) from s3Object s
Output: 2
Note that if you select only the FamilyName column (the fourth field), you get the correct value:
SELECT s._4 FROM s3Object s
You get only the parts of each line that make up that field:
"Einstein
Cambridge"
"Edison
Cardiff"
What's happening is that the newline character in the field is the same as the default CSVOutput.RecordDelimiter value (\n), which causes a clash. If you want to separate each record in a different way, you could add the following to the CSVOutput part of the OutputSerialization:
"RecordDelimiter": "\r\n"
or use some other one- or two-character sequence in place of \r\n.
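For reference, a sketch of how those serialization settings might look with the AWS SDK for Java v1; the bucket name, object key, and query are placeholders, not taken from the question:
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.*;

public class SelectWithCrLfOutput {
    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        SelectObjectContentRequest request = new SelectObjectContentRequest()
                .withBucketName("my-bucket")                    // placeholder bucket
                .withKey("people.csv")                          // placeholder key
                .withExpressionType(ExpressionType.SQL)
                .withExpression("SELECT s.FamilyName FROM s3Object s")
                .withInputSerialization(new InputSerialization()
                        .withCsv(new CSVInput()
                                .withFileHeaderInfo(FileHeaderInfo.USE)
                                .withAllowQuotedRecordDelimiter(true)))
                .withOutputSerialization(new OutputSerialization()
                        .withCsv(new CSVOutput()
                                .withRecordDelimiter("\r\n"))); // no clash with \n inside fields

        SelectObjectContentResult result = s3.selectObjectContent(request);
        // read the rows from result.getPayload().getRecordsInputStream() here
    }
}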

str in control file is not working to load CSV data having carriage return and line feed

I am trying to load (using sqlldr) a CSV file from a Linux system into an Oracle database, where one column has data containing carriage returns and line feeds.
Control File looks as below:
OPTIONS (DIRECT = TRUE, SKIP = 1, ERRORS=0)
unrecoverable load data
CHARACTERSET UTF8
infile 'abc.csv' "str '\r\n'"
into table USER1."ABC"
Append
fields terminated by "," optionally enclosed by '"'
TRAILING NULLCOLS
("COLUMN1" CONSTANT 100,
"COLUMN2",
"COLUMN3" CONSTANT 'XYZ',
"COLUMN4")
CSV File looks as below:
COLUMN2, COLUMN4
"abc1","abc2
welcome"
"ok","abc4"
I have tried the following things in the control file, but the load was reported successful with zero rows inserted into the table:
1. "str '\r\n'"
2. "str '#EOR#'"
3. "str x'0D'"
4. "str '\n'"
"str '\n'":This generates .bad file. Content of .bad file is as below:
"abc1","abc2
Is there anything I am missing? Kindly help. Thanks in advance.
Have The Data Adhere to the Stream Record Format You Have Identified
You are using the Stream Record Format and you are indicating each record ends with \r\n.
Based on the *.bad file, your data file records end with \n and not \r\n (standard Unix line ending behavior).
Can you change your stream record format's end-of-record marker to '|\n' and add a '|' at the end of every record in your data?
You would change this line:
infile 'abc.csv' "str '\r\n'
to
infile 'abc.csv' "str '|\n'
The data would change to this:
"abc1","abc2
welcome"|
"ok","abc4"|

Spark adding extra space when a record contains a "comma"

My input is a "|" (pipe) separated file. I can't change the input file.
The format is
HEADER_A|HEADER_B|HEADER_C
A|B|C
A D|B| => records without a comma generate output like "A D|B|"
A,D|B| => records with a comma generate output like " A,D|B| "
Spark config is:
sparkSession.read()
.option("header","true")
.option("delimiter","|")
.schema(schema) // assume this is valid and represents the correct schema
.csv(fileName)
.cache();
I've tried using the "sep" option, but that didn't work either.
If my delimiter is "|", why does Spark treat records containing a comma differently?
I found my error. As the record contains a comma, I should not use .csv(path) when writing the file.
Changing from
dataset.write()...
.csv(path)
to
dataset.write()...
.text(path)
solved it
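In full, the round trip might look like the sketch below; the column names come from the question's header and the input/output paths are placeholders. Since .text() expects a single string column, the fields are glued back together with the original "|" delimiter before writing:
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.concat_ws;

public class PipePassthrough {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().appName("pipe-passthrough").getOrCreate();

        Dataset<Row> df = spark.read()
                .option("header", "true")
                .option("delimiter", "|")
                .csv("input.csv");          // placeholder input path

        // Rebuild each record as one pipe-delimited string, then write it verbatim.
        df.select(concat_ws("|", col("HEADER_A"), col("HEADER_B"), col("HEADER_C")))
                .write()
                .text("output_dir");        // placeholder output directory
    }
}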

Error: Special characters are not uploaded from csv to database in Liferay 6.1

I am inserting records from a CSV file into a MySQL database in Liferay 6.1. I have already set up the portal-ext.properties file with
jdbc.default.driverClassName=com.mysql.jdbc.Driver
jdbc.default.url=jdbc:mysql://localhost:3306/lportal?useUnicode=true&characterEncoding=UTF-8&useFastDateParsing=false
jdbc.default.username=root
jdbc.default.password=root
When I try to upload the records, it throws an error for special characters like á.
Error details:
13:38:21,001 ERROR [JDBCExceptionReporter:75] Data truncation: Data too long for column 'providerName' at row 1
When I remove those characters, it persists the records without error.
Can anyone suggest how to resolve this problem?
Thank you
If your database is in UTF-8 and you have "special" characters in it, then most probably you are missing the "file.encoding=UTF-8" VM argument (-Dfile.encoding=UTF-8), or at least you should specify the encoding when opening the file/stream.
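As a minimal sketch of that second suggestion, reading the upload with an explicit charset instead of the platform default (the file name is a placeholder):
import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class ReadCsvUtf8 {
    public static void main(String[] args) throws Exception {
        // Open the uploaded CSV as UTF-8 so characters like 'á' are decoded correctly.
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(
                new FileInputStream("records.csv"), StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                // parse the fields and persist the record here
            }
        }
    }
}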

How to skip lines of a CSV file while using the LOAD DATA command?

I'm using the SQL LOAD DATA command to insert data from a CSV file into a MySQL database. The problem is that at the end of the file there are a few lines like ",,,,,,,,,,,,,,,,,," (the CSV file is a conversion of an Excel file). So when SQL gets to those lines, it gives me: #1366 - Incorrect integer value: '' for column 'Bug_ID' at row 661.
Bug_ID is an int and I have 32 columns.
How can I tell it to ignore those lines, considering that the number of filled lines is variable?
Thanks for your help.
MySQL supports a LINES STARTING BY "xxxx" clause when reading delimited text files. If you can, require your specific .CSV file to have a prefix on each data line and no prefix on non-data lines. This gives you the benefit of being able to put comments into a .CSV if desired.
MySQL Doc: Load Data InFile
You can:
step 1 - (optionally) export data:
SELECT *
INTO OUTFILE "myFile.csv"
COLUMNS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
ESCAPED BY '\\'
LINES STARTING BY 'DATA:'
TERMINATED BY '\n'
FROM table
step 2 - import data
LOAD DATA INFILE "myFile.csv"
INTO TABLE some_table
COLUMNS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
ESCAPED BY '\\'
LINES STARTING BY 'DATA:'
Effectively you can modify the .csv file to look like this:
# Comment for humans
// Comment for humans
Comments for us humans.
DATA:1,3,4,5,6,'asdf','abcd'
DATA:4,5,6,7,8,'qwerty','zxcv'
DATA:9,8,7,6,5,'yuio','hjlk'
# Comments for humans
// Comments for humans
Comments for humans
DATA:13,15,64,78,54,'bla bla','foo bar'
Only the lines with the 'DATA:' prefix will be interpreted/read by the statement.
I used this technique to create a 'config' file for a SQL script that needed external control information. But there was a human element that needed to be able to easily manipulate the .csv file and understand its contents.
-- J Jorgenson --
I fixed it: I just added a condition on the line in my CSV parser:
while ((line = is.readLine()) != null) {
    if (!line.equals(",,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,")) {
        Iterator e = csv.parse(line).iterator();
        ......
    }
}
