Character inferno - java

I need some help. I have to read data from a file and store it into an Oracle db. I run into trouble when characters like 'à' or 'À' appear in the data. For example, 'à' is read in and becomes a garbled multi-character sequence in my application, so when I try to save the data, the db sometimes complains that a value is too big for the field it is being saved into. I also tried
Normalizer.normalize(row, Form.NFD).replaceAll("\\p{InCombiningDiacriticalMarks}+", "");
I paid attention to encoding too. I noticed that if I run my application on the data file, a Cp1252 file, on a Windows machine, I get no errors. Sadly, I get errors when I run the same thing on a Linux machine. I'm using Java 6. TIA.

So, the default character encoding on your Windows machine is probably windows-1252 (a superset of Latin-1). That means that if you don't specify the charset when reading in the file, Java will fall back to your system default and happen to get it right.
On your Linux machine, your default charset is probably UTF-8. That means that if you don't explicitly specify a charset while reading the file, it will default to UTF-8 . . . which, in this case, is wrong.
You didn't post how you're reading in your file, but for example:
InputStreamReader isr = new InputStreamReader(new FileInputStream(file), "UTF-8");
This would create an input stream reader for reading a file formatted in UTF-8.
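In this particular case the data file is Cp1252, so naming that charset explicitly makes the code behave the same on Windows and Linux. A minimal sketch (the file name is a placeholder):
BufferedReader reader = new BufferedReader(
        new InputStreamReader(new FileInputStream("data.txt"), "Cp1252"));
try {
    String line;
    while ((line = reader.readLine()) != null) {
        // 'à' now arrives as a single character, so field lengths match
    }
} finally {
    reader.close();
}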

Related

Reading CSV file using Java is working fine in test environment but it's not behaving correctly in production environment

I am using CSV files which have contents in German and French. I am reading these files in Java using a file reader. It works fine in the test environment, but in production the contents of the CSV file come out changed for some symbols after reading (I can see it in the logs).
You should never use FileReader. It always reads a file using the platform's default charset.
The default charset in your test environment is not the same as your default charset in your production environment. It is exactly for this reason that you should never rely on the default charset. I assume you know what the actual charset of the CSV file is, so specify it explicitly in your code:
String charset = "ISO-8859-1";
Reader csvReader = new BufferedReader(
new InputStreamReader(new FileInputStream(fileName), charset));
A list of all known charsets can be found in the documentation for the Charset class.

Java unicode conversion on Linux not working on Mac OS X

I am writing a java application on Ubuntu Linux that reads in a text file and creates an xml file from the data. Some of the text contains curly apostrophes and quotes that I convert to straight apostrophes and quotes using the following code:
dataLine = dataLine.replaceAll( "[\u2018\u2019]", "\u0027" ).replaceAll( "[\u201C\u201D]", "\u005c\u0022" );
This works fine, but when I port the jar file to a Mac OS X machine, I get three question marks where I should get straight apostrophes and quotes. I created a test application on the Mac using the same line of code to do the conversion and the same test file for input, and it worked fine. Why doesn't the jar file created on the Linux machine work correctly on a Mac? I thought Java was supposed to be cross-platform compatible.
Chances are you're not reading the file correctly to start with. You haven't shown how you're reading the file, but my guess is that you're just using FileReader, or an InputStreamReader without specifying the encoding. In that case, the default platform encoding is used - and if that's not the actual encoding of the file, you won't be reading the right characters. You should be able to detect that without doing any replacement at all.
Instead, you should use a FileInputStream and wrap it in an InputStreamReader with the correct encoding - which is likely to be UTF-8 as it's XML. (You should be able to check this easily.)
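For example, a minimal sketch (assuming the file really is UTF-8; the file name is a placeholder):
BufferedReader in = new BufferedReader(
        new InputStreamReader(new FileInputStream("input.txt"), "UTF-8"));
String dataLine = in.readLine(); // curly quotes now decode correctly on any platform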

International characters with Java

I am building an app that takes information from Java and builds an Excel spreadsheet. Some of the information contains international characters. I am having an issue where Russian characters, for example, are rendered correctly in Java, but when I send those characters to Excel, they are not rendered properly. I initially thought the issue was an encoding problem, but I am now thinking that the problem is simply not having the Russian language pack loaded on my Windows 7 machine.
I need to know if there is a way for a Java application to "force" Excel to show international characters.
Thanks
Check the file encoding you're using if characters don't show up. Java defaults to the platform's native encoding (windows-1252 on Windows) instead of UTF-8. You can explicitly set the writers to use UTF-8 or a Cyrillic encoding.
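For example, a minimal sketch assuming you're producing a CSV file that Excel will open (the file name and contents are placeholders). Note that Excel generally needs a byte-order mark at the start of a UTF-8 text file to detect the encoding:
Writer out = new OutputStreamWriter(new FileOutputStream("report.csv"), "UTF-8");
try {
    out.write('\uFEFF'); // UTF-8 BOM so Excel detects the encoding
    out.write("имя,значение\n"); // Russian header row renders correctly
} finally {
    out.close();
}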

Issue with encoding UTF-8 when FTPing files

I am able to have my application upload files via FTP using the FTPClient Java library.
(I happen to be uploading to an Oracle XML DB repository.)
Everything uploads fine unless the XML file has curly quotes in it, in which case I get the error:
LPX-00200: could not convert from encoding UTF-8 to UCS2
I can upload what I believe to be the same file using the Windows command-line FTP tool. I am wondering if there is some encoding setting that the Windows tool uses that maybe I need to set in my Java code.
Anyone know stuff about this? Thanks!!
I don't know that application, but you could try using -Dfile.encoding=UTF-8 on your JVM command line.
I'm not familiar with Oracle XML DB repositories, but can they accept compressed uploads? Zipping or gzipping your file would save resources and defeat any ASCII file-type autodetection in use.
If you transfer the file in binary mode, this problem goes away.
FTPClient.setType(FTPClient.TYPE_BINARY);
http://www.sauronsoftware.it/projects/ftp4j/manual.php#3
If your file contains curly quotes, they are encoded as single bytes with the high-order bit set in the windows-1252 character set. In UTF-8, those same characters take three bytes each.
It's quite possible that you've accidentally encoded the XML file as windows-1252 (or something similar) instead of UTF-8. That would result in a conversion error, because a byte with the high-order bit set is only valid in UTF-8 as part of a multi-byte sequence.
If you're on Windows, open the file in Notepad and re-save it using Save As... with the UTF-8 encoding, then upload the changed file. On Unix, use iconv or a similar tool to convert from windows-1252 to UTF-8 before uploading.
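If you'd rather do the conversion in Java, a minimal sketch (assuming the source file really is windows-1252; the file names are placeholders):
Reader reader = new InputStreamReader(new FileInputStream("in.xml"), "windows-1252");
Writer writer = new OutputStreamWriter(new FileOutputStream("out.xml"), "UTF-8");
try {
    char[] buf = new char[8192];
    int n;
    while ((n = reader.read(buf)) != -1) {
        writer.write(buf, 0, n); // decode as windows-1252, re-encode as UTF-8
    }
} finally {
    reader.close();
    writer.close();
}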
If the XML document explicitly declares its encoding, make sure the declaration matches the actual encoding (e.g. UTF-8). Many XML parsers can handle iso-8859-1 or windows-1252 encoded XML as long as it's declared as such.

How do I properly store and retrieve internationalized Strings in properties files?

I'm experimenting with internationalization by making a Hello World program that uses properties files + ResourceBundle to get different strings.
Specifically, I have a file "messages_en_US.properties" that stores "hello.world=Hello World!", which works fine of course.
I then have a file "messages_ja_JP.properties" which I've tried all sorts of things with, but it always appears as some type of garbled string when printed to the console or in Swing. The problem is obviously with the reading of the content into a Java string, as a Java string in Japanese typed directly into the source can print fine.
Things I've tried:
The .properties file in UTF-8 encoding with the Japanese string as-is for the value. Something I read indicates that Java expects a properties file to be in the native encoding of the system...? It didn't work either way.
The file in default encoding (ISO-8859-1) and the value stored as escaped Unicode created by the native2ascii program included with Java. Tried with a source file in various Japanese encodings... SHIFT-JIS, EUC-JP, ISO-2022-JP.
Edit:
I actually figured this out while I was typing this, but I figured I'd post it anyway and answer it in case it helps anyone.
I realized that native2ascii was assuming (surprise) that it was converting from my operating system's default encoding each time, and as such not producing the correct escaped Unicode string.
Running native2ascii with the "-encoding encoding_name" option where encoding_name was the name of the source file's encoding (SHIFT-JIS in this case) produced the correct result and everything works fine.
Ant also has a native2ascii task that runs native2ascii on a set of input files and sends the output files wherever you want, so I was able to add a builder in Eclipse that does that. My source folder keeps the strings in their original encoding for easy editing, and building automatically puts converted files of the same name in the output folder.
As of JDK 1.6, Properties has a load() method that accepts a Reader. That means you can save all the property files as UTF-8 and read them all directly by passing an InputStreamReader to load(). I think that's the most elegant solution, but it requires your app to run on a Java 6 runtime.
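For example, a minimal sketch (the file name is a placeholder):
Properties props = new Properties();
Reader reader = new InputStreamReader(
        new FileInputStream("messages_ja_JP.properties"), "UTF-8");
try {
    props.load(reader); // load(Reader) decodes with the reader's charset
} finally {
    reader.close();
}
System.out.println(props.getProperty("hello.world"));
Note that ResourceBundle.getBundle() doesn't take this path by itself; to keep using ResourceBundle you'd plug in a custom ResourceBundle.Control (also new in Java 6) whose newBundle() builds a PropertyResourceBundle from a UTF-8 Reader.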
Historically, load() only accepted an InputStream, and the stream was decoded as ISO-8859-1. Not the system default encoding, always ISO-8859-1. That's important, because it makes a certain hack possible. Say your property file is stored as UTF-8. After you retrieve a property, you can re-encode it as ISO-8859-1 and decode it again as UTF-8, like this:
String realProp = new String(prop.getBytes("ISO-8859-1"), "UTF-8");
It's ugly and fragile, but it does work. But I think the best solution, at least for the next few years, is the one you found: bulk-convert the files with native2ascii using a build tool like Ant.
An alternative way to handle the properties files is:
http://www.unipad.org/main/
This is an editor which can read/write files in the \u unicode-escape format, which is the format native2ascii creates.
I don't know how well it works with Japanese; I've used it for Hungarian.
