Setting UTF-8 encoding in Java when writing a CSV file [duplicate] - java

This question already has answers here:
How to add a UTF-8 BOM in Java?
(8 answers)
Closed 5 years ago.
I am using this code to add Persian words to a CSV file via OpenCSV:
String[] entries = "\u0645 \u062E\u062F\u0627".split("#");
try {
    CSVWriter writer = new CSVWriter(new OutputStreamWriter(new FileOutputStream("C:\\test.csv"), "UTF-8"));
    writer.writeNext(entries);
    writer.close();
} catch (IOException ioe) {
    ioe.printStackTrace();
}
When I open the resulting CSV file in Excel, it contains "ứỶờịỆ". Other programs such as notepad.exe don't have this problem, but all of my users use MS Excel.
Replacing OpenCSV with SuperCSV does not solve this problem.
When I type Persian characters into the CSV file manually, I don't have any problems.

I spent some time on this but found a solution to your problem.
First I opened Notepad and wrote the following line: שלום, hello, привет
Then I saved it as the file he-en-ru.csv using UTF-8.
Then I opened it with MS Excel and everything worked well.
Now, I wrote a simple Java program that prints this line to a file as follows:
PrintWriter w = new PrintWriter(new OutputStreamWriter(os, "UTF-8"));
w.print(line);
w.flush();
w.close();
When I opened this file using Excel I saw gibberish.
Then I read the contents of the two files and (as expected) saw that the file generated by Notepad contains a 3-byte prefix (decimal and hex):
239 (0xEF)
187 (0xBB)
191 (0xBF)
So, I modified my code to print this prefix first and the text after that:
String line = "שלום, hello, привет";
OutputStream os = new FileOutputStream("c:/temp/j.csv");
// write the 3-byte UTF-8 BOM prefix first
os.write(239); // 0xEF
os.write(187); // 0xBB
os.write(191); // 0xBF
PrintWriter w = new PrintWriter(new OutputStreamWriter(os, "UTF-8"));
w.print(line);
w.flush();
w.close();
And it worked! I opened the file using excel and saw text as I expected.
Bottom line: write these 3 bytes before writing the content. This prefix, the byte order mark (BOM), indicates that the content is 'UTF-8 with BOM' (otherwise it is just 'UTF-8 without BOM').
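If you need this in more than one place, the idea can be wrapped in a small helper. This is a minimal sketch under my own naming (the class BomCsvWriter and its path argument are made up, not part of any library):

import java.io.*;

public class BomCsvWriter {
    // Opens a writer on the given path that first emits the UTF-8 BOM
    // (0xEF 0xBB 0xBF) so that Excel detects the file as UTF-8.
    public static PrintWriter open(String path) throws IOException {
        OutputStream os = new FileOutputStream(path);
        os.write(0xEF);
        os.write(0xBB);
        os.write(0xBF);
        return new PrintWriter(new OutputStreamWriter(os, "UTF-8"));
    }
}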

Unfortunately, CSV is a very ad hoc format with no metadata and no real standard that would mandate a flexible encoding. As long as you use CSV, you can't reliably use any characters outside of ASCII.
Your alternatives:
Write to XML (which does have encoding metadata if you do it right) and have the users import the XML into Excel.
Use Apache POI to create actual Excel documents.
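The second alternative might look like this. A minimal sketch, assuming the poi-ooxml dependency is on the classpath; the file name and sheet name are made up:

import java.io.FileOutputStream;
import org.apache.poi.ss.usermodel.*;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;

public class XlsxExample {
    public static void main(String[] args) throws Exception {
        // a real .xlsx file carries its own encoding, so Excel cannot misread it
        try (Workbook wb = new XSSFWorkbook();
             FileOutputStream out = new FileOutputStream("test.xlsx")) {
            Sheet sheet = wb.createSheet("Data");
            Row row = sheet.createRow(0);
            row.createCell(0).setCellValue("\u0645 \u062E\u062F\u0627"); // the Persian sample text
            wb.write(out);
        }
    }
}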

Excel doesn't use UTF-8 to open CSV files. That's a known problem. The actual encoding used depends on the locale settings of Microsoft Windows. With a German locale, for example, Excel would open a CSV file with CP1252.
You could create an Excel file containing some Persian characters and save it as a CSV file. Then write a small Java program to read this file and test some common encodings. That's how I figured out the correct encoding for German umlauts in CSV files.
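Such a probe might look like this. A rough sketch only: the file path and the list of candidate charsets are assumptions, adjust them to your case:

import java.nio.charset.Charset;
import java.nio.file.*;

public class EncodingProbe {
    public static void main(String[] args) throws Exception {
        // read the raw bytes once, then decode them with each candidate charset
        byte[] raw = Files.readAllBytes(Paths.get("C:/test.csv"));
        for (String cs : new String[] {"UTF-8", "windows-1252", "ISO-8859-1", "IBM850"}) {
            System.out.println(cs + ": " + new String(raw, Charset.forName(cs)));
        }
    }
}

Whichever line prints the Persian (or umlaut) characters correctly tells you the encoding Excel used.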

Related

Generate CSV via Apache CSV in UTF-8

How to write a CSV file in UTF-8 via Apache CSV?
I am trying to generate a CSV with the following code, where Files.newBufferedWriter() encodes text as UTF-8 by default, but when I open the generated file in Excel there are senseless characters.
I create the CSVPrinter like this:
CSVPrinter csvPrinter = new CSVPrinter(Files.newBufferedWriter(Paths.get(filePath)), CSVFormat.EXCEL);
Next I set the headers:
csvPrinter.printRecord(headers);
and then in a loop I print the values into the writer like this:
csvPrinter.printRecord("value1", "value2", ...);
I also tried uploading the file to an online CSV lint validator, which reports that I am using ASCII-8BIT instead of UTF-8. What did I do wrong?
Microsoft software tends to assume windows-125x or UTF-16LE charsets unless the content starts with a byte order mark, which the software will use to identify the charset. Try adding a byte order mark at the start of your file:
try (BufferedWriter writer = Files.newBufferedWriter(Paths.get(filePath))) {
    writer.write('\ufeff'); // the BOM character; the UTF-8 writer encodes it as EF BB BF
    CSVPrinter csvPrinter = new CSVPrinter(writer, CSVFormat.EXCEL); // CSVPrinter needs a CSVFormat
    //...
}
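Note that writing the single character '\ufeff' through a UTF-8 writer produces exactly the three bytes EF BB BF at the start of the file, the same prefix discussed in the answer above, so no byte-level trickery is needed.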

Android UTF-8 encoding not working?

I am working with a CSV file right now.
In my program I am using an OutputStreamWriter to write data to the CSV file.
OutputStreamWriter myOutWriter = new OutputStreamWriter(fOut, Charset.forName("UTF-8").newEncoder());
I tried printing out the encoding of this writer and got the following:
Log.i(TAG, "BODY ENCODING: " + myOutWriter.getEncoding());
Logcat: BODY ENCODING: UTF-8
But when I try to open the CSV file on my desktop, it says that the file is in windows-1252, so I can't read the æøå characters which I need.
Am I missing something obvious here, or am I not understanding the concept of OutputStreamWriter? I have tried different types of encoding, but it doesn't seem to work :)
When I try to open it in Excel, the æøå characters are garbled.
Your file is actually UTF-8, not CP-1252. Your text editor/viewer mis-detected it (without a BOM and with few multi-byte characters, detection is a guess). You can help your editor by adding a byte order mark (BOM) at the beginning of the file, i.e.:
static final byte[] UTF8_BOM = {(byte) 0xEF, (byte) 0xBB, (byte) 0xBF}; // casts needed: these literals don't fit in a signed byte
...
fOut.write(UTF8_BOM);
OutputStreamWriter myOutWriter = new OutputStreamWriter(fOut, Charset.forName("UTF-8").newEncoder());
Did you try opening it in Excel? For Excel to recognize the file as UTF-8 it needs to have a BOM (https://en.wikipedia.org/wiki/Byte_order_mark).

CSV encoding specification

I am creating a CSV and writing content in UTF-8 to support German and English, specifying the encoding as below:
BufferedWriter outFile = new BufferedWriter( new OutputStreamWriter( outputStream, "UTF-8" ) );
The above works fine until I add the separator indication (sep=;) in the header of the CSV:
outFile.write( "sep=;" );
outFile.newLine();
Without this delimiter hint my CSV will be parsed wrongly, but when I include it the encoding fails and UTF-8 is no longer in effect.
Is there any other keyword like "sep=" that can be put in the header of the CSV to specify the encoding?
I tried encoding="UTF-8" and it is not working.
Thanks.
You cannot open a UTF-8 CSV file with Excel 2007. Microsoft has no understanding of the word "standards". Because of this, it is notoriously difficult to generate a CSV file which opens in every possible application that reads .csv files and keeps the correct encoding.
If you must use Excel 2007, I would suggest encoding with Microsoft's own "windows-1252", as it supports German characters. Don't use the sep= header, and also look into using tab as a separator. Yes, I know the C stands for comma, but tab seems to be more consistent with Excel 2007 if you save the file back again.
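That workaround might look like the sketch below. The file name and sample rows are made up; windows-1252 covers the German umlauts:

import java.io.*;

public class Windows1252Tsv {
    public static void main(String[] args) throws IOException {
        // tab-separated values, windows-1252 encoded, no sep= header and no BOM
        try (Writer w = new OutputStreamWriter(
                new FileOutputStream("bericht.csv"), "windows-1252")) {
            w.write("Name\tGröße\r\n");
            w.write("Müller\t1,80\r\n");
        }
    }
}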

Opening an xls file and saving it as a tsv file using Java, and UTF-16LE to UTF-8 conversion

I've two questions:
Is there a way to open an xls file and save it as a tsv file in Java?
EDIT:
Or is there a way to convert an xls file into a tsv file in Java?
Is there a way to convert a UTF-16LE file to UTF-8 in Java?
Thank you
I've two questions:
On StackOverflow you should split that into two different questions...
I'll answer your second question:
Is there a way in which we can convert a UTF-16LE file to UTF-8 using java?
Yes of course. And there's more than one way.
Basically you want to read your input file specifying the input encoding (UTF-16LE) and then write the file specifying the output encoding (UTF-8).
Say you have some UTF-16LE encoded file:
$ file testInput.txt
testInput.txt: Little-endian UTF-16 Unicode character data
You then basically could do something like this in Java (it's just an example: you'll want to fill in missing exception handling code, maybe not put a last newline at the end, maybe discard the BOM if any, etc.):
FileInputStream fis = new FileInputStream(new File("/home/.../testInput.txt") );
InputStreamReader isr = new InputStreamReader( fis, Charset.forName("UTF-16LE") );
BufferedReader br = new BufferedReader( isr );
FileOutputStream fos = new FileOutputStream(new File("/home/.../testOutput.txt"));
OutputStreamWriter osw = new OutputStreamWriter( fos, Charset.forName("UTF-8") );
BufferedWriter bw = new BufferedWriter( osw );
String line = null;
while ( (line = br.readLine()) != null ) {
bw.write(line);
bw.newLine(); // will add an unnecessary newline at the end of your file, fix this
}
bw.flush();
// take care of closing the streams here etc.
This will create a UTF-8 encoded file.
$ file testOutput.txt
testOutput.txt: UTF-8 Unicode (with BOM) text
The BOM can clearly be seen using, for example, hexdump:
$ hexdump testOutput.txt -C
00000000 ef bb bf ... (snip)
The BOM is encoded as three bytes in UTF-8 (ef bb bf) while it's encoded as two bytes in UTF-16. In UTF-16LE the BOM looks like this:
$ hexdump testInput.txt -C
00000000 ff fe ... (snip)
Note that UTF-8 encoded files may or may not (both are totally valid) have a "BOM" (byte order mark). A BOM in a UTF-8 file is not that silly: you don't care about the byte order, but it can help quickly identify a text file as being UTF-8 encoded. UTF-8 files with a BOM are fully legit according to the Unicode specs, and hence readers unable to deal with UTF-8 files starting with a BOM are broken. Plain and simple.
If for whatever reason you're working with broken UTF-8 readers unable to cope with BOMs, then you may want to remove the BOM from the first String before writing it to disk.
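For example, a tiny sketch of that removal (only the very first line read from the file can carry the BOM, so check it before the first write; br, bw and line come from the loop above):

boolean first = true;
while ((line = br.readLine()) != null) {
    if (first && line.startsWith("\uFEFF")) {
        line = line.substring(1); // drop the BOM character
    }
    first = false;
    bw.write(line);
    bw.newLine();
}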
More info on BOMs here:
http://unicode.org/faq/utf_bom.html
There is a library called jexcelapi that allows you to open/edit/save .xls files.
Once you have read the .xls file, it would not be hard to write something that outputs it as .tsv, along these lines:
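A rough sketch, assuming the jexcelapi (jxl) jar is on the classpath; the file names are made up and error handling is omitted:

import java.io.*;
import jxl.*;

public class XlsToTsv {
    public static void main(String[] args) throws Exception {
        // read every cell of the first sheet and write it out tab-separated as UTF-8
        Workbook workbook = Workbook.getWorkbook(new File("input.xls"));
        Sheet sheet = workbook.getSheet(0);
        try (PrintWriter out = new PrintWriter(new OutputStreamWriter(
                new FileOutputStream("output.tsv"), "UTF-8"))) {
            for (int r = 0; r < sheet.getRows(); r++) {
                StringBuilder row = new StringBuilder();
                for (int c = 0; c < sheet.getColumns(); c++) {
                    if (c > 0) row.append('\t');
                    row.append(sheet.getCell(c, r).getContents());
                }
                out.println(row);
            }
        }
        workbook.close();
    }
}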

Convert from Codepage 1252 (Windows) to Java, in Java

I have some strings in Java (originally from an Excel sheet) that I presume are in the Windows-1252 codepage. I want them converted to Java's own Unicode format. The Excel file was parsed using the JXL package, in case that matters.
To clarify: apparently the strings read from the Excel file already look like some kind of Unicode.
WorkbookSettings ws = new WorkbookSettings();
ws.setCharacterSet(someInteger);
Workbook workbook = Workbook.getWorkbook(new File(filename), ws);
Sheet s = workbook.getSheet(sheet);
row = s.getRow(4);
String contents = row[0].getContents();
This is where contents seems to contain something Unicode-like: the åäö are multi-byte characters, while the ASCII ones are normal single-byte characters. It is most definitely not Latin-1. If I print the contents string with println and redirect it to a hello.txt file, I find that the letter "ö" is represented with two bytes, C3 B6 in hex (195 and 182 in decimal).
[edit]
I have tried the suggestions with different codepages etc. given below, and tried converting from Cp1252 etc. There was some kind of conversion, because I would get some other kind of gibberish instead. As a reference I always printed an "ö" string hand-coded into the source code, to verify that there was nothing wrong with my terminal or typefaces or anything. The manually typed "ö" always worked.
[edit]
I also tried WorkbookSettings as suggested in the comments, but I looked in the code for JXL and the characterSet setting seems to be ignored by the parsing code. I think the parsing code just looks at whatever encoding the XLS file is supposed to be in.
WorkbookSettings ws = new WorkbookSettings();
ws.setEncoding("CP1250");
Worked for me.
If none of the answers above solve the problem, the trick might be done like this:
String myOutput = new String(myInput, "UTF-8"); // myInput must be a byte[]
This decodes the incoming bytes as UTF-8; note that it only applies while you still have raw bytes, since a Java String is already decoded.
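For instance, with a hypothetical byte array:

byte[] myInput = {(byte) 0xC3, (byte) 0xB6}; // the UTF-8 bytes for 'ö'
String myOutput = new String(myInput, "UTF-8"); // yields the one-character string "ö"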
When Java parses a file it uses some encoding to read the bytes on the disk and create bytes in memory. The default encoding varies from platform to platform. Java's internal String representation is Unicode already, so if it parses the file with the right encoding then you are already done; just write out the data in any encoding you want.
If your strings appear corrupted when you look at them in Java, it is probably because you are using the wrong encoding to read the data. Excel is probably using UTF-16 (Little-Endian, I think), but I'd expect a library like JXL to be able to detect it appropriately. I've looked at the Javadocs for JXL and it doesn't do anything with character encodings. I imagine it auto-detects any encodings as it needs to.
Do you just need to write the already loaded strings to a text file? If so, then something like the following will work:
String text = getCP1252Text(); // doesn't matter what the original encoding was, Java always uses Unicode
FileOutputStream fos = new FileOutputStream("test.txt"); // Open file
OutputStreamWriter osw = new OutputStreamWriter(fos, "UTF-16"); // Specify character encoding
PrintWriter pw = new PrintWriter(osw);
pw.print(text); // repeat as needed
pw.close(); // cleanup; closing pw also closes osw and fos
If your problem is something else please edit your question and provide more details.
You need to specify the correct encoding when the file is parsed - once you have a Java String based on the wrong encoding, it's too late.
JXL allows you to specify the encoding by passing a WorkbookSettings object to the factory method.
"windows-1252"/"Cp1252" is not required to be supported by JREs, but is by Sun's (and presumably most others). See the "Supported Encodings" in your JDK documentation. Then it's just a matter of using String, InputStreamReader or similar to decode the bytes into chars.
FileInputStream fis = new FileInputStream (yourFile);
BufferedReader reader = new BufferedReader(new InputStreamReader(fis,"CP1250"));
And do with the reader whatever you'd do directly with the file.
Your description indicates that the encoding is UTF-8 and indeed C3 B6 is the UTF-8 encoding for 'ö'.
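As a quick check of that claim: 'ö' is U+00F6, i.e. 11110110 in binary. The two-byte UTF-8 pattern is 110xxxxx 10xxxxxx; filling its 11 payload bits with 000 1111 0110 gives 11000011 10110110, which is exactly the bytes C3 B6.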
