So, I have a file in ISO-8859-1 encoding. I do the following:
InputStreamReader isr = new InputStreamReader(new FileInputStream(fileLocation));
System.out.println(isr.getEncoding());
And I get UTF8... It looks like FileInputStream or InputStreamReader converts it to UTF8.
Yes, I know about the following way:
BufferedReader br = new BufferedReader(
        new InputStreamReader(
                new FileInputStream(fileLocation), "ISO-8859-1"));
But I don't know beforehand what encoding my file will have.
How can I read the file while preserving its encoding?
Binary files (bytes) that are actually text in some encoding unfortunately do not store that encoding (charset) anywhere.
Sometimes there is an encoding somewhere: Unicode text could have an optional BOM character at the beginning of the file. HTML and XML can specify the charset.
If you downloaded the file from the internet, the charset could be mentioned in the header lines. Say it were an HTML file served with Content-Type: text/html; charset=Windows-1251. Then you could read the file as Windows-1251, and always store it as UTF-8, modifying/adding a <meta charset="UTF-8">.
But in general there is no reliable way to determine a file's encoding. You could do the following (a sketch is given after the list):
read the bytes
if they are convertible to UTF-8 without errors in the multibyte sequences, it is (almost certainly) UTF-8
otherwise assume a single-byte encoding; default to Windows-1252 (rather than ISO-8859-1)
maybe use word frequency tables of some languages together with encodings, and try those
decode the bytes with the determined encoding and write the text back out as UTF-8
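A minimal sketch of the "try UTF-8, otherwise fall back to Windows-1252" idea (the file names are placeholders; a decoder configured with CodingErrorAction.REPORT throws on invalid multibyte sequences instead of silently substituting):
import java.nio.ByteBuffer;
import java.nio.charset.*;
import java.nio.file.*;

byte[] bytes = Files.readAllBytes(Paths.get("input.txt"));
Charset charset;
CharsetDecoder utf8 = StandardCharsets.UTF_8.newDecoder()
        .onMalformedInput(CodingErrorAction.REPORT)
        .onUnmappableCharacter(CodingErrorAction.REPORT);
try {
    utf8.decode(ByteBuffer.wrap(bytes)); // throws CharacterCodingException on bad sequences
    charset = StandardCharsets.UTF_8;
} catch (CharacterCodingException e) {
    charset = Charset.forName("windows-1252"); // single-byte fallback
}
// Decode with the guessed charset, then always store the text as UTF-8.
String text = new String(bytes, charset);
Files.write(Paths.get("output.txt"), text.getBytes(StandardCharsets.UTF_8));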
There might be a library that does such a thing, combining language recognition and charset recognition.
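One such library is ICU4J, which includes a statistical CharsetDetector; a hedged sketch from memory (verify the class and method names against the ICU4J version you use, and the path is a placeholder):
import com.ibm.icu.text.CharsetDetector;
import com.ibm.icu.text.CharsetMatch;
import java.nio.file.*;

byte[] bytes = Files.readAllBytes(Paths.get("input.txt"));
CharsetDetector detector = new CharsetDetector();
detector.setText(bytes);
CharsetMatch match = detector.detect(); // best guess; may be null if nothing matches
if (match != null) {
    System.out.println(match.getName() + " (confidence " + match.getConfidence() + ")");
}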
I checked the default file encoding in jvm:
System.out.println("***file.encoding::" + System.getProperty("file.encoding"));
// ***file.encoding::Cp1252
But when I write a new file using FileWriter:
bf = new BufferedWriter(new FileWriter(file));
and then recheck the file's encoding on the command line:
file -i output-file.txt
output-file.txt: text/plain; charset=iso-8859-1
Why is the charset iso-8859-1 instead of Cp1252?
cp1252 and iso-8859-1 are very similar encodings, and file might not be able to tell the difference based on the content of your file if both encodings would be valid for it.
Text files don't contain any metadata about the file encoding, so the only way to know is to read some bytes and guess. For byte values below 128 (i.e., most normal English text) the two encodings are identical, so there is no way to tell which one was used to write the file if those are the only characters in it.
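A quick way to convince yourself (a throwaway sketch, the sample string is arbitrary): for pure ASCII content both charsets produce byte-for-byte identical output, so nothing on disk distinguishes them.
import java.nio.charset.Charset;
import java.util.Arrays;

String s = "plain ASCII text";
byte[] cp1252 = s.getBytes(Charset.forName("windows-1252"));
byte[] latin1 = s.getBytes(Charset.forName("ISO-8859-1"));
System.out.println(Arrays.equals(cp1252, latin1)); // true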
I have tried to create a UTF-8 file using Java using different writers. But after creating it, when I open the file it is not read as UTF-8 encoded (I opened it in Notepad++ and it was UTF-8 without BOM).
File fileDir = new File("c:\\temp\\test.txt");
Writer out1 = new BufferedWriter(
        new OutputStreamWriter(
                new FileOutputStream(fileDir),
                Charset.forName("UTF-8").newEncoder())
);
Writer out = new OutputStreamWriter(
        new FileOutputStream(fileDir),
        Charset.forName("UTF-8")
);
out.append("Website UTF-8").append("\r\n");
out.append("?? UTF-8").append("\r\n");
out.append("??????? UTF-8").append("\r\n");
out.flush();
out.close();
You are correctly writing a file in the UTF-8 encoding. (Note that you're not using out1 and it's unnecessary).
Notepad++ tells you that the file is "UTF-8 without BOM". Why do you think this is not UTF-8?
BOM stands for byte order mark. It's a special Unicode character to indicate if the bytes in a file are in little-endian or big-endian order. But for UTF-8 it has no meaning and its use is not recommended. From the Wikipedia article:
The UTF-8 representation of the BOM is the byte sequence 0xEF,0xBB,0xBF. A text editor or web browser interpreting the text as ISO-8859-1 or CP1252 will display the characters ï»¿ for this.
The Unicode Standard permits the BOM in UTF-8, but does not require nor recommend its use. Byte order has no meaning in UTF-8, so its only use in UTF-8 is to signal at the start that the text stream is encoded in UTF-8. The BOM may also appear when UTF-8 data is converted from other encodings that use a BOM.
Is there a special reason why you need a BOM to be included? If not, then don't worry about it. Some Java XML parsers cannot deal with a UTF-8 BOM properly and will give an error when you try to parse a UTF-8 encoded XML document that starts with a BOM.
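If you do need a BOM (for example because some downstream tool insists on one), you can simply write the character U+FEFF before the rest of the content; a small sketch (the file name is just an example):
import java.io.*;
import java.nio.charset.StandardCharsets;

Writer out = new OutputStreamWriter(
        new FileOutputStream("c:\\temp\\test.txt"), StandardCharsets.UTF_8);
out.write('\uFEFF'); // encoded by UTF-8 as the BOM bytes EF BB BF
out.write("Website UTF-8\r\n");
out.close();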
I am using "FileInputStream" and "FileReader" to read a data from a file which contains unicode characters.
When i am setting the default encoding to "cp-1252" both are reading junk data, when i am setting default encoding to UTF-8 both are reading fine.
Is it true that both these use System Default Encoding to read the data?
Then whats the benifit of using Character stream if it depends on System Encoding.
Is there any way apart from:
BufferedReader fis = new BufferedReader(new InputStreamReader(new FileInputStream("some unicode file"),"UTF-8"));
to read the data correctly when the default encoding is other than UTF-8.
FileReader and FileWriter should IMHO be deprecated.
Use
new InputStreamReader(new FileInputStream(file), "UTF-8")
or so.
Here too there exists an overloaded version without the encoding parameter, which uses the default platform encoding: System.getProperty("file.encoding").
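For completeness, a couple of equivalent ways to read with an explicit charset (a sketch; the path is a placeholder, and the FileReader constructor with a Charset parameter only exists from Java 11 on, if I remember correctly):
import java.io.*;
import java.nio.charset.StandardCharsets;
import java.nio.file.*;

// Classic stream wrapping, works on any Java version:
BufferedReader r1 = new BufferedReader(new InputStreamReader(
        new FileInputStream("some unicode file"), StandardCharsets.UTF_8));

// java.nio convenience method, also with an explicit charset:
BufferedReader r2 = Files.newBufferedReader(Paths.get("some unicode file"), StandardCharsets.UTF_8);

// Java 11+: FileReader with an explicit charset:
// BufferedReader r3 = new BufferedReader(new FileReader("some unicode file", StandardCharsets.UTF_8));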
I've two questions:
Is there a way through which we can open an xls file and save it as a tsv file through Java?
EDIT:
Or is there a way through which we can convert an xls file into a tsv file through Java?
Is there a way in which we can convert a UTF-16LE file to UTF-8 using Java?
Thank you
I've two questions:
On StackOverflow you should split that into two different questions...
I'll answer your second question:
Is there a way in which we can convert a UTF-16LE file to UTF-8 using Java?
Yes of course. And there's more than one way.
Basically you want to read your input file specifying the input encoding (UTF-16LE) and then write the file specifying the output encoding (UTF-8).
Say you have some UTF-16LE encoded file:
... $ file testInput.txt
testInput.txt: Little-endian UTF-16 Unicode character data
You then basically could do something like this in Java (it's just an example: you'll want to fill in missing exception handling code, maybe not put a last newline at the end, maybe discard the BOM if any, etc.):
FileInputStream fis = new FileInputStream(new File("/home/.../testInput.txt") );
InputStreamReader isr = new InputStreamReader( fis, Charset.forName("UTF-16LE") );
BufferedReader br = new BufferedReader( isr );
FileOutputStream fos = new FileOutputStream(new File("/home/.../testOutput.txt"));
OutputStreamWriter osw = new OutputStreamWriter( fos, Charset.forName("UTF-8") );
BufferedWriter bw = new BufferedWriter( osw );
String line = null;
while ( (line = br.readLine()) != null ) {
    bw.write(line);
    bw.newLine(); // will add an unnecessary newline at the end of your file, fix this
}
bw.flush();
// take care of closing the streams here etc.
This will create a UTF-8 encoded file.
$ file testOutput.txt
testOutput.txt: UTF-8 Unicode (with BOM) text
The BOM can clearly be seen using, for example, hexdump:
$ hexdump testOutput.txt -C
00000000 ef bb bf ... (snip)
The BOM is encoded as three bytes in UTF-8 (ef bb bf) while it's encoded as two bytes in UTF-16. In UTF-16LE the BOM looks like this:
$ hexdump testInput.txt -C
00000000 ff fe ... (snip)
Note that UTF-8 encoded files may or may not (both are totally valid) have a "BOM" (byte order mark). A BOM in a UTF-8 file is not that silly: you don't care about the byte order, but it can help quickly identify a text file as being UTF-8 encoded. UTF-8 files with a BOM are fully legit according to the Unicode specs, and hence readers unable to deal with UTF-8 files starting with a BOM are broken. Plain and simple.
If for whatever reason you're working with broken UTF-8 readers unable to cope with BOMs, then you may want to remove the BOM from the first String before writing it to disk.
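One way to do that (a sketch that assumes you read the first line separately before entering the copy loop; \uFEFF is the BOM once it has been decoded into a character):
String first = br.readLine(); // the first line may start with the decoded BOM
if (first != null && first.startsWith("\uFEFF")) {
    first = first.substring(1); // drop the BOM before writing the line out
}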
More infos on BOMs here:
http://unicode.org/faq/utf_bom.html
There is a library called jexcelapi that allows you to open/edit/save .xls files.
Once you have read the .xls file it would not be hard to write something that would output it as .tsv.
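Roughly along these lines, from memory (a hedged sketch: double-check the jxl class and method names against the jexcelapi docs, and the file names are placeholders):
import java.io.*;
import jxl.Sheet;
import jxl.Workbook;

Workbook workbook = Workbook.getWorkbook(new File("input.xls"));
Sheet sheet = workbook.getSheet(0);
PrintWriter tsv = new PrintWriter(new File("output.tsv"), "UTF-8");
for (int row = 0; row < sheet.getRows(); row++) {
    StringBuilder lineOut = new StringBuilder();
    for (int col = 0; col < sheet.getColumns(); col++) {
        if (col > 0) lineOut.append('\t');
        lineOut.append(sheet.getCell(col, row).getContents());
    }
    tsv.println(lineOut);
}
tsv.close();
workbook.close();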
I'm trying to read a file from the SD card and I've been told it's in unicode format. However, when I try to read the file I get the following:
This is the code I'm using to read the file:
InputStreamReader fw = new InputStreamReader(new FileInputStream(root.getAbsolutePath()+"/Drive/sdk/cmd.62.out"), "UTF-8");
char[] buf = new char[255];
fw.read(buf);
String readString = new String(buf);
Log.d("courierread",readString);
fw.close();
If I write that output to a file this is what I get when I open it in a hex editor:
Any thoughts on what I need to do to read the file correctly?
Does the file have a byte-order mark? In that case look at Reading UTF-8 - BOM marker
EDIT (from comment): That looks like little-endian UTF-16 to me. Try the charset "UTF-16LE".
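A quick way to check for a BOM yourself before choosing the charset (a sketch; the path is the one from the question, only the two UTF-16 BOMs are handled, and everything else falls back to UTF-8 here):
import java.io.*;
import java.nio.charset.Charset;

String path = root.getAbsolutePath() + "/Drive/sdk/cmd.62.out";
FileInputStream probe = new FileInputStream(path);
int b0 = probe.read(), b1 = probe.read();
probe.close();

Charset cs;
if (b0 == 0xFF && b1 == 0xFE) {
    cs = Charset.forName("UTF-16LE"); // FF FE: little-endian UTF-16
} else if (b0 == 0xFE && b1 == 0xFF) {
    cs = Charset.forName("UTF-16BE"); // FE FF: big-endian UTF-16
} else {
    cs = Charset.forName("UTF-8");    // no UTF-16 BOM: assume UTF-8 here
}

// Re-open and read with the detected charset; note that the decoded text
// will still start with U+FEFF when a BOM was present, so strip it if needed.
BufferedReader reader = new BufferedReader(new InputStreamReader(new FileInputStream(path), cs));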
The file you show in the hex editor is not UTF-8 encoded, it looks more like UTF-16. This means you must specify UTF-16 as the encoding in your code (probably the UTF-16LE variant).
If it were UTF-8 encoded, then it would represent all characters in the ASCII range using just a single byte each.