In a Java portlet I'm offering files to download through the serveResource(...) method.
I'm calling
response.getPortletOutputStream().write(byteArray);
This byte array contains some special characters in German, for example Ä, Ü or ö. The file format of the resulting file is csv.
When I open the file in a text editor, the special characters are displayed correctly.
However, when I open it in Microsoft Excel, they appear garbled: for example, ü shows up as Ã¼.
Do you have any ideas of what could be the cause of this problem?
Notepad++ displays the file as
ANSI as UTF-8
This might help you: Microsoft Excel mangles Diacritics in .csv files?
Basically, you'd need to add a byte order mark (BOM) to your CSV file.
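A minimal sketch of that in Java, assuming the CSV content is already UTF-8 text (the class and method names are illustrative, not from the original portlet):

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

public class CsvBomWriter {
    // UTF-8 byte order mark: 0xEF 0xBB 0xBF
    private static final byte[] UTF8_BOM = {(byte) 0xEF, (byte) 0xBB, (byte) 0xBF};

    /** Writes the BOM first so Excel detects the CSV as UTF-8. */
    public static void writeCsv(OutputStream out, String csvContent) throws IOException {
        out.write(UTF8_BOM);
        out.write(csvContent.getBytes(StandardCharsets.UTF_8));
        out.flush();
    }
}
```

In the portlet, the `OutputStream` would be the one returned by `response.getPortletOutputStream()`.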
Related
I am new to Java. I need Java code to convert a text file coming from a Unix system into a text file for a Linux server, i.e. a character conversion from UTF-16 to UTF-8. The text file goes through an Oracle database before it reaches the Linux server. I need this conversion because some special symbols are getting converted to garbage values. Please help, Java experts :)
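A small sketch of such a conversion with plain JDK APIs, assuming the source file is UTF-16 with a byte order mark (the class and method names are illustrative):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class EncodingConverter {
    /** Reads a UTF-16 text file and rewrites it as UTF-8. */
    public static void utf16ToUtf8(Path source, Path target) throws IOException {
        // StandardCharsets.UTF_16 detects byte order from the BOM;
        // use UTF_16LE or UTF_16BE explicitly if the file has no BOM.
        String content = new String(Files.readAllBytes(source), StandardCharsets.UTF_16);
        Files.write(target, content.getBytes(StandardCharsets.UTF_8));
    }
}
```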
I am writing content to a PDF file.
When I write Hebrew letters ("שלום"), the letters don't appear in the PDF.
Maybe it's an encoding issue; either way, how can I write Hebrew to a PDF file?
It could be an encoding issue, but it is difficult to tell without knowing how you are writing to the PDF file (what library, what encoding etc...).
Another thing to look at are the embedded fonts used in the PDF - by default there wouldn't be any and you would need to embed a Hebrew font to be used for Hebrew text. You would need to ensure that you have the rights to embed and distribute such a font before doing so.
You need the font with Hebrew letters (glyphs) embedded in PDF.
I am building an app that takes information from Java and builds an Excel spreadsheet. Some of the information contains international characters. I am having an issue where Russian characters, for example, are rendered correctly in Java, but when I send those characters to Excel, they are not rendered properly. I initially thought the issue was an encoding problem, but I am now thinking that the problem is simply not having the Russian language pack loaded on my Windows 7 machine.
I need to know if there is a way for a Java application to "force" Excel to show international characters.
Thanks
Check the file encoding you're using if characters don't show up. Java defaults to the platform's native encoding (windows-1252 on Windows) instead of UTF-8. You can explicitly set your writers to use UTF-8 or a Cyrillic encoding.
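For example, a hedged sketch of writing a CSV with an explicit UTF-8 charset rather than the platform default (the class and method names are illustrative; the leading BOM helps Excel detect UTF-8):

```java
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class RussianCsv {
    /** Writes a CSV row with an explicit UTF-8 charset instead of the platform default. */
    public static void write(Path file, String row) throws IOException {
        try (BufferedWriter w = Files.newBufferedWriter(file, StandardCharsets.UTF_8)) {
            w.write('\uFEFF'); // BOM so Excel recognizes the file as UTF-8
            w.write(row);
        }
    }
}
```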
I have a Java application that is generating JasperReports. It will create as many as three JasperPrints from a single report: one prints on the printer, one is serialized and saved to the database, and the third is exported to PDF using Jasper's built-in export capability.
The problem is that when exporting to PDF, characters outside 7-bit ASCII show up as empty squares, meaning Acrobat Reader is not able to display that character. The printed version is correct, and loading the database version and printing it also displays correctly. If I change the exported version to a different format, e.g. XML, the characters show up fine in a web browser.
Based on the evidence, I believe the issue is something specific to font handling in PDFs, but I am not sure what.
The font used is Lucida Sans Typewriter, a Unicode monospaced font. The Windows "font" directory is listed in the Java classpath: without this step, PDF exporting fails miserably with zero text at all, so I know it is finding the font.
The specific characters not displayed are the accented characters used in Spanish text: á, é, í, ó, and ú. I haven't checked ñ, but I am guessing that won't work either.
Any ideas what the problem is, areas of the system to check, or maybe parameters I need to send to the export process?
The PDF encoding used for exporting was UTF-8, and apparently the font didn't support that properly. When I changed it to ISO-8859-1, every character showed up correctly in the PDF output.
In iReport, try setting the Pdf Embedded property of your TextFields to true.
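In the JRXML itself, that combination of encoding and embedding maps onto the per-text-element font attributes; a sketch, with the PDF font file name as an assumption (newer JasperReports versions prefer font extensions over these attributes):

```xml
<textField>
  <reportElement x="0" y="0" width="200" height="20"/>
  <textElement>
    <!-- pdfEncoding must be one the chosen font actually supports -->
    <font fontName="Lucida Sans Typewriter"
          pdfFontName="LucidaSansTypewriter.ttf"
          pdfEncoding="ISO-8859-1"
          isPdfEmbedded="true"/>
  </textElement>
  <textFieldExpression><![CDATA[$F{description}]]></textFieldExpression>
</textField>
```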
I'm using JasperReports 6. My team spent a few days trying to display Khmer Unicode, and I finally found a solution; everything works as expected.
Follow this: https://community.jaspersoft.com/wiki/custom-font-font-extension
After you export the font extension, upload the jar file to the lib folder and restart your JasperReports Server.
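For reference, a font extension jar essentially consists of two files; a hedged sketch, with the font name and file paths as assumptions (the property names follow the documented `SimpleFontExtensionsRegistryFactory` pattern):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- fonts/fontsfamily.xml inside the extension jar -->
<fontFamilies>
  <fontFamily name="Khmer OS">
    <normal>fonts/KhmerOS.ttf</normal>
    <pdfEncoding>Identity-H</pdfEncoding>
    <pdfEmbedded>true</pdfEmbedded>
  </fontFamily>
</fontFamilies>
```

plus a `jasperreports_extension.properties` pointing at it:

```
net.sf.jasperreports.extension.registry.factory.simple.font.families=net.sf.jasperreports.engine.fonts.SimpleFontExtensionsRegistryFactory
net.sf.jasperreports.extension.simple.font.families.myfamilies=fonts/fontsfamily.xml
```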
I am able to have my application upload files via FTP using the FTPClient Java library.
(I happen to be uploading to an Oracle XML DB repository.)
Everything uploads fine unless the XML file has curly quotes in it, in which case I get the error:
LPX-00200: could not convert from encoding UTF-8 to UCS2
I can upload what I believe to be the same file using the Windows command-line FTP tool, so I am wondering if there is some encoding setting that the Windows tool uses that I need to set in my Java code.
Anyone know stuff about this? Thanks!!
I don't know that application, but you could try using -Dfile.encoding=UTF-8 on your JVM command line.
Not familiar with Oracle XML DB repositories—can they accept compressed uploads? Zipping or gzipping your file would save resources and frustrate any ASCII file type autodetection in use.
In binary transfer mode this problem goes away:
FTPClient.setType(FTPClient.TYPE_BINARY);
http://www.sauronsoftware.it/projects/ftp4j/manual.php#3
If your file contains curly quotes, they sit in the high-order-bit range of the windows-1252 character set (iso-8859-1 has no curly quotes at all, but any byte there with the high bit set raises the same issue). In UTF-8, those characters take three bytes each.
It's quite possible that the XML file was accidentally saved in one of those single-byte encodings instead of UTF-8. That would cause a conversion error, because a byte with the high-order bit set is only valid in UTF-8 as part of a multi-byte sequence.
If you're on Windows, open the file in Notepad and re-save it using Save As... with the UTF-8 encoding, then upload the changed file. On Unix, use iconv or a similar tool to convert from iso-8859-1 to UTF-8 before uploading.
If the XML document explicitly declares its encoding, make sure the declaration matches the actual encoding (e.g. UTF-8). Many XML parsers can handle iso-8859-1 or windows-1252 input as long as the document is declared as such.
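On the Java side, the re-encoding can also be done directly before upload; a sketch with plain JDK APIs (the class and method names are illustrative):

```java
import java.io.IOException;
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class Windows1252ToUtf8 {
    /** Re-encodes a windows-1252 text file (curly quotes included) as UTF-8. */
    public static void convert(Path source, Path target) throws IOException {
        Charset cp1252 = Charset.forName("windows-1252");
        String text = new String(Files.readAllBytes(source), cp1252);
        Files.write(target, text.getBytes(StandardCharsets.UTF_8));
    }
}
```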