FreeMarker encoding - question marks in place of accented characters - java

I am trying to print accented characters with FreeMarker, but in place of the accented characters I get only question marks. I have verified that the following statement holds:
"UTF-8" == Environment.getCurrentEnvironment().getConfiguration().getDefaultEncoding()
I can easily see that the accented characters are correctly held in the variable before it is passed to the template.
My FreeMarker context can be found here: https://gist.github.com/1975239
For instance, instead of:
Jedinečný živý koncert, kde nejen, že uslyšíte, ale i uvidíte splynutí metalové kapely s padesátičlenným orchestrem včetně.
I keep getting:
Jedine?ný ?ivý koncert, kde nejen, ?e usly?íte, ale i uvidíte splynutí metalové kapely s padesáti?lenným orchestrem v?etn?.
Thanks.

I was able to resolve a similar issue with non-standard symbols (like ™) by setting the content-type on the FreeMarkerViewResolver:
<bean id="viewResolver" class="org.springframework.web.servlet.view.freemarker.FreeMarkerViewResolver">
...
<property name="contentType" value="text/html;charset=UTF-8"/>
...
</bean>

For Dropwizard users: passing the UTF-8 Charset to the View constructor worked:
import io.dropwizard.views.View;
import java.nio.charset.Charset;

public class SomeView extends View {
    public SomeView() {
        // Tell Dropwizard explicitly which charset to render the view with.
        super("/views/some.ftl", Charset.forName("UTF-8"));
    }
}

FreeMarker always treats text as UNICODE, so it doesn't generate question marks. Since the accented letters aren't coming from the templates (if I understand it well), it must be your output encoding that's improper. See also: http://freemarker.org/docs/app_faq.html#faq_questionmark
BTW, getDefaultEncoding() has no role in this. It influences the decoding used when the templates are loaded, but you are saying the accented characters aren't coming from the template file; also, I don't think you can get ?-s from decoding (except perhaps for invalid UTF-8 byte sequences). As for the encoding of the output, FreeMarker just writes to a Writer (as opposed to an OutputStream), so it can't influence that.
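In other words, the output encoding is decided entirely by how you construct the Writer you hand to FreeMarker. A minimal sketch (the template name and loading setup are my assumptions; the API calls are standard FreeMarker):

import freemarker.template.Configuration;
import freemarker.template.Template;

import java.io.OutputStreamWriter;
import java.io.Writer;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

public class Utf8OutputDemo {
    public static void main(String[] args) throws Exception {
        Configuration cfg = new Configuration();
        cfg.setDefaultEncoding("UTF-8"); // affects how template *files* are decoded
        cfg.setClassForTemplateLoading(Utf8OutputDemo.class, "/views"); // assumed layout

        Template template = cfg.getTemplate("concert.ftl"); // hypothetical template
        Map<String, Object> model = new HashMap<>();
        model.put("text", "Jedinečný živý koncert");

        // The bytes written depend on the Writer's charset, not on FreeMarker:
        try (Writer out = new OutputStreamWriter(System.out, StandardCharsets.UTF_8)) {
            template.process(model, out);
        }
    }
}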

For the FreeMarker servlet there are init parameters for the encoding of the template and of the output. You might compare them with your configuration.
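For reference, a hedged web.xml sketch: FreemarkerServlet is the real class name, but treat the init-param names as assumptions to verify against your FreeMarker version's FreemarkerServlet documentation:

<servlet>
    <servlet-name>freemarker</servlet-name>
    <servlet-class>freemarker.ext.servlet.FreemarkerServlet</servlet-class>
    <!-- how the .ftl files on disk are decoded (assumed param name) -->
    <init-param>
        <param-name>default_encoding</param-name>
        <param-value>UTF-8</param-value>
    </init-param>
    <!-- charset advertised and used for the response -->
    <init-param>
        <param-name>ContentType</param-name>
        <param-value>text/html; charset=UTF-8</param-value>
    </init-param>
</servlet>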

In a Dropwizard project this can be fixed by adding <#ftl encoding="utf-8"> at the start of the template file, as described in FreeMarker's FAQ. This works because Dropwizard uses the encoding of the template for the output.

Related

CharConversionException while transforming xml file

I have a Java program which processes XML files. When transforming one XML file into another based on a certain schema (XSD/XSL), it throws the following error.
The error is thrown only for one XML file, which contains a tag like this:
<abc>xxx yyyy “ggggg vvvv” uuuu</abc>
But after removing or re-typing the two quotes, the error no longer occurs.
Could anybody please help me resolve this issue?
java.io.CharConversionException: Character larger than 4 bytes are not supported: byte 0x93 implies a length of more than 4 bytes
at org.apache.xmlbeans.impl.piccolo.xml.UTF8XMLDecoder.decode(UTF8XMLDecoder.java:162)
<?xml version="1.0" encoding="UTF-8" standalone="yes"?><xyz xmlns="http://pqr.yy"><Header><abc> aaa “cccc” aaaaa vvv</abc></Header></xyz>
As others have reported in comments, it failed because the typographical quotation marks are encoded in Windows-1252, not in UTF-8, so the parser could not decode them.
The encoding declared in the XML declaration must match the actual encoding used for the characters.
To find out how this error arose, and to prevent it happening again, we would need to know where this (wannabe) XML file came from, and how it was created.
My guess would be that someone used a "smart" editor; Microsoft editors in particular are notorious for changing what you type to what Microsoft think you wanted to type. If you're editing XML by hand it's best to use an XML-aware editor.
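If you need to salvage such a file rather than fix the producer, a minimal sketch (the file paths are hypothetical; it assumes the whole file really is Windows-1252):

import java.io.IOException;
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ReencodeToUtf8 {
    public static void main(String[] args) throws IOException {
        Path in = Paths.get("broken.xml");   // hypothetical input
        Path out = Paths.get("fixed.xml");

        // Decode with the encoding the bytes were actually written in...
        String text = new String(Files.readAllBytes(in), Charset.forName("windows-1252"));
        // ...then write it back as the UTF-8 the XML declaration promises.
        Files.write(out, text.getBytes(StandardCharsets.UTF_8));
    }
}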

Gradle/Eclipse: Different behavior of German "Umlaute" when using equality?

I am experiencing weird behavior with German "Umlaute" (ä, ö, ü, ß) when using Java's equality checks (either directly or indirectly).
Everything works as expected when running, debugging or testing from Eclipse and input containing "Umlaute" is treated as equal or not as expected.
However when I build the application using Spring Boot and run it, these equality checks fail for words that contain "Umlaute", i.e. for words like "Nationalität".
Input is retrieved from a webpage via Jsoup and content of a table is extracted for some keywords. The encoding of the page is UTF-8 and I have handling in place for Jsoup to convert it if this is not the case.
The encoding of the source files is UTF-8 as well.
Connection connection = Jsoup.connect(url)
        .header("accept-language", "de-de, de, en")
        .userAgent("Mozilla/5.0")
        .timeout(10000)
        .method(Method.GET);
Response response = connection.execute();
if (logger.isDebugEnabled())
    logger.debug("Encoding of response: " + response.charset());
Document doc;
if (response.charset().equalsIgnoreCase("UTF-8")) {
    logger.debug("Response has expected charset");
    doc = Jsoup.parse(response.body(), baseURL);
} else {
    logger.debug("Response doesn't have expected charset and is converted");
    doc = Jsoup.parse(new String(response.bodyAsBytes(), "UTF-8"), baseURL);
}
logger.debug("Encoding of document: " + doc.charset());
if (!doc.charset().equals(Charset.forName("UTF-8"))) {
    logger.debug("Changing encoding of document from " + doc.charset());
    doc.updateMetaCharsetElement(true);
    doc.charset(Charset.forName("UTF-8"));
    logger.debug("Changed encoding of document to: " + doc.charset());
}
return doc;
Example log output (from deployed app) of reading content.
Encoding of response: utf-8
Response has expected charset
Encoding of document: UTF-8
Example input:
<tr><th>Nationalität:</th> <td> [...] </td> </tr>
Example code that fails for words containing ä, ö, ü or ß but works fine for other words:
Element header = row.select("th").first();
String text = header.ownText();
if ("Nationalität:".equals(text)) {
    // goes here in Eclipse
} else {
    // and here in the deployed Spring Boot app
}
Is there any difference between running from Eclipse and a built & deployed app that I am missing? Where else could this behavior come from, and how can it be resolved?
As far as I can see this is not (directly) an encoding issue since the input shows "Umlaute" correctly...
Since this is not reproducible when debugging, I am having a hard time figuring out what exactly goes wrong.
Edit: While the input looks fine in the logs (i.e. the diacritics show up correctly), I realized that it doesn't look correct in the console:
<th>Nationalität:</th>
I am currently using a Normalizer, as suggested by Mirko, like this:
Normalizer.normalize(input, Form.NFC);
(also tried it with NFD).
How do the (Spring Boot) console and the (logback) log output differ?
Diacritics like umlauts can often be represented in two different ways in Unicode: as a single-codepoint character or as a composition of two characters. This isn't a problem of the encoding; it can happen in UTF-8, UTF-16, UTF-32, etc.
Java's equals method may not consider composite characters equal to single-codepoint characters, even though they look exactly the same.
Try to have a look at the binary representation of the strings you are comparing, this way you should be able to track down the differences.
You could also use the methods of the "Character" class to iterate through the strings and print out the properties of all the characters. Maybe this helps, too, to figure out differences.
In any case, it could help if you use java.text.Normalizer on both "sides" of the "equals", to normalize the text to, for example, Unicode Normalization Form C. This way, differences like the aforementioned should be straightened out and the strings should compare as expected.
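A minimal sketch of that difference (the string literals are my assumptions; NFC composition makes the two forms compare equal):

import java.text.Normalizer;

public class NormalizerDemo {
    public static void main(String[] args) {
        String composed = "Nationalit\u00E4t:";    // "ä" as one code point (U+00E4)
        String decomposed = "Nationalita\u0308t:"; // "a" + combining diaeresis (U+0308)

        System.out.println(composed.equals(decomposed)); // false: different code points
        System.out.println(Normalizer.normalize(composed, Normalizer.Form.NFC)
                .equals(Normalizer.normalize(decomposed, Normalizer.Form.NFC))); // true
    }
}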
Have you tried printing the character codes to the console to see if they actually match when compiled? Maybe Eclipse is handling the charset gracefully, but when it's compiled it's down to some Java/system settings?
I think I tracked this down to the build of the standalone app being the culprit.
As described above, when running from Eclipse all is fine, the problem only occurred when I ran the standalone Spring Boot app.
This is being built with Gradle. In my build.gradle I have
compileJava.options.encoding = 'UTF-8'
in order to force UTF-8 to be used for encoding. This should (usually) be enough. However, I also use AspectJ (via the gradle-aspectj plugin), which apparently breaks this behavior (involuntarily?) and results in the default encoding being used instead of the one explicitly defined.
In order to solve this I added
compileAspect {
    additionalAjcArgs = ['encoding': 'UTF-8']
}
to my build.gradle which passes the encoding option on to the ajc compiler. This seems to have fixed the problem for the regular build.
The problem still occurs however when tests are run from gradle. I was not yet able to find out what needs to be done there and why the above configuration is not enough.
This is now tracked in a separate question.
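As an aside (my untested assumption, not part of the original answer): Gradle compiles test sources with a separate compileTestJava task, so the encoding may need to be applied to every JavaCompile task, and possibly to the test JVM as well:

// Hypothetical build.gradle fragment; verify against your Gradle/AspectJ setup.
tasks.withType(JavaCompile) {
    options.encoding = 'UTF-8'   // covers compileJava and compileTestJava
}
test {
    systemProperty 'file.encoding', 'UTF-8'  // default charset of the test JVM
}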

getBytes() With UTF-8 Doesn't Work for Upper-Case German Umlauts

For development I'm using ResourceBundle to read a UTF-8 encoded properties file (I set that in Eclipse's file properties for that file) directly from my resources directory in the IDE (native2ascii is used on the way to production), e.g.:
menu.file.open.label=&Öffnen...
label.btn.add.name=&Hinzufügen
label.btn.remove.name=&Löschen
Since that causes issues with the character encoding when using non-ASCII characters, I thought I'd be happy with:
ResourceBundle resourceBundle = ResourceBundle.getBundle("messages", Locale.getDefault());
String value = resourceBundle.getString(key);
value = new String(value.getBytes(), "UTF-8");
Well, it does work nicely for lower-case German umlauts, but not for the upper-case ones, the ß also doesn't work. Here's the value read with getString(key) and the value after the conversion with new String(value.getBytes(), "UTF-8"):
&Löschen => &Löschen
&Hinzufügen => &Hinzufügen
&Ã?ber => &??ber
&SchlieÃ?en => &Schlie??en
&Ã?ffnen... => &??ffnen...
The last three should be:
&Ã?ber => &Über
&SchlieÃ?en => &Schließen
&Ã?ffnen... => &Öffnen...
I guess that I'm not too far away from the truth, but what am I missing here?
Google found something similar, but that remained unanswered.
EDIT: a little more code
The problem is you're calling String.getBytes() without specifying an encoding - which will use the default platform encoding. You're then using the binary result of that operation as if it were in UTF-8.
If you use UTF-8 in both directions, it'll be fine:
// Should be a round-trip
value = new String(value.getBytes("UTF-8"), "UTF-8");
... but if you were trying to use this to read a UTF-8-encoded property file without telling the code which is performing the initial read, that won't work.
The code you've presented is basically always the wrong approach. Your "Since that causes issues with the character encoding" suggests that you'd already run across an earlier problem - so I'd go back to that, instead of trying to apply a broken fix. If you've already lost data when constructing the ResourceBundle, it's too late to go back later... you need to make sure the ResourceBundle itself is loaded correctly.
Please tell us exactly what problems you had with the ResourceBundle, and we can see if we can fix the root cause.
EDIT: It's not clear how you're running native2ascii. The fix may be as simple as changing to use:
native2ascii -encoding UTF-8 input.properties output.properties
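If the root cause turns out to be that the bundle must be read as UTF-8 in the first place, one common approach (my sketch using the standard java.util API, not part of the original answer; note that Java 9+ already reads .properties bundles as UTF-8 by default) is a custom ResourceBundle.Control:

import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.Locale;
import java.util.PropertyResourceBundle;
import java.util.ResourceBundle;

public class Utf8Bundles {
    public static ResourceBundle get(String baseName) {
        return ResourceBundle.getBundle(baseName, Locale.getDefault(),
                new ResourceBundle.Control() {
                    @Override
                    public ResourceBundle newBundle(String baseName, Locale locale, String format,
                            ClassLoader loader, boolean reload) throws IOException {
                        // Locate the .properties resource as usual, but decode it as UTF-8.
                        String name = toResourceName(toBundleName(baseName, locale), "properties");
                        try (InputStream is = loader.getResourceAsStream(name)) {
                            return is == null ? null : new PropertyResourceBundle(
                                    new InputStreamReader(is, StandardCharsets.UTF_8));
                        }
                    }
                });
    }
}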
Some notes:
Once it is a String, it is UTF-16 internally; if the bytes were decoded wrongly, it is already a corrupt string (and too late to fix).
new String(value.getBytes(), "UTF-8"); - this code will (at best) do nothing on a system that uses UTF-8 as the default encoding; otherwise it will corrupt the string.
.properties files must be ISO 8859-1 (the Properties type supports other formats and encodings, but I don't know how you would tell ResourceBundle that.)
System.out can introduce its own transcoding bugs (the PrintStream encodes UTF-16 strings to the default encoding; the receiving device must decode the bytes using the same encoding.)
I suspect you are trying to fix your problems in the wrong place.
You are encoding the text with a different encoding to the one you are decoding with.
Try instead using the same character set for encoding and decoding.
value = new String(value.getBytes("UTF-8"), "UTF-8");
String s = "ßßßßß";
s += s.toUpperCase();
s = new String(s.getBytes("UTF-8"), "UTF-8");
System.out.println(s);
prints
ßßßßßSSSSSSSSSS
Today I was talking to one of my colleagues and he was pretty much on the same path as the other answers. So I tried to achieve what Jon Skeet had suggested, namely creating the same file as in production. Since rebuilding the project after each change of a resource is out of the question, and since I hadn't done any of what solved this before (and I guess it will be new to some), let me line it out (even if it may be just for personal reference ;) ). In short, this uses Eclipse's project builders.
Create an Ant-style build.xml
<?xml version="1.0" encoding="UTF-8"?>
<project>
<property name="dir.resources" value="src/main/resources" />
<property name="dir.target" value="bin/main" />
<target name="native-to-ascii">
<delete dir="${dir.target}" includes="**/*.properties" />
<native2ascii src="${dir.resources}" dest="${dir.target}" includes="**/*.properties" />
</target>
</project>
Its intention is to delete the properties-files in the target directory and use native2ascii to recreate them. The delete is necessary as native2ascii won't overwrite existing files.
In Eclipse go to the project properties and select "Builders", click "New...", pick "Ant Builder" (that's the slightly enhanced editor for run configurations)
In "Main" let "Buildfile" point to the Ant-script, set "Base Directory" to ${project_loc}
In "Refresh" tick "Refresh resources upon completion" and pick "The project containing the selected resource"
In "Targets" click "Set Targets" next to the "Auto Build" and pick native-to-ascii there (note that for some reason I had to do this later again)
This might not be necessary for everybody, but in "JRE" pick a proper execution environment
In "Build Options" tick off "Allocate Console" (however, you may want to keep this ticked on until you see that it's all working)
"Apply", "OK"
I was told that the newly created builder should be somewhere underneath the Java Builder (use Up/Down-button)
In the "Java Build Path" select the source folder with the resources (src/main/resources for me) and add an exclusion for **/*.properties
That should have been it. If you edit a properties file and save it, it should automatically be converted to ASCII in the output folder. You can try entering ü, which should end up as \u00fc.
Note that if you have a lot of properties-files, this may take some time. Just don't save after every keypress. :)

UTF-8 encoding of GET parameters in JSF

I have a search form in JSF that is implemented using a RichFaces 4 autocomplete component and the following JSF 2 page and Java bean. I use Tomcat 6 & 7 to run the application.
...
<h:commandButton value="#{msg.search}" styleClass="search-btn" action="#{autoCompletBean.doSearch}" />
...
In the AutoCompleteBean
public String doSearch() {
    // some logic here
    return "/path/to/page/with/multiple_results?query=" + searchQuery + "&faces-redirect=true";
}
This works well as long as everything within the searchQuery String is in Latin-1; it does not work if it is outside of Latin-1.
For instance, a search for "bodø" will automatically be encoded as "bod%F8". However, a search for "Kra Ðong" will not work, since it is unable to encode "Ð".
I have now tried several different approaches to solve this, but none of them works.
I have tried encoding the searchQuery myself using URLEncoder, but this only leads to double encoding, since % is encoded as %25.
I have tried using java.net.URI to get the encoding, but it gives the same result as URLEncoder.
I have tried turning on UTF-8 in Tomcat using URIEncoding="UTF-8" on the Connector, but this only worsens the problem, since non-ASCII characters then do not work at all.
So to my questions:
Can I change the way JSF 2 encodes the GET parameters?
If I cannot change the way JSF 2 encodes the GET parameters, can I turn off the encoding and do it manually?
Am I doing something strange here? This seems like something that should be supported out-of-the-box, but I cannot find any others with the same problem.
I think you've hit a corner case bug in JSF. The query string is URL-encoded by ExternalContext#encodeRedirectURL() which uses the response character encoding as obtained by ExternalContext#getResponseCharacterEncoding(). However, while JSF by default uses UTF-8 as response character encoding, this is only been set if the view is actually to be rendered, not when the response is to be redirected, so the response character encoding still returns the platform default of ISO-8859-1 which causes your characters to be URL-encoded using this wrong encoding.
I've reported this as issue 2440. In the meanwhile your best bet is to explicitly set the response character encoding yourself beforehand.
FacesContext.getCurrentInstance().getExternalContext().setResponseCharacterEncoding("UTF-8");
Note that this still requires that the container itself uses the same character encoding to decode the request URL, so you certainly need to set URIEncoding="UTF-8" in Tomcat's configuration. This won't mess up the characters anymore as they will be really UTF-8 now.
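For completeness, a sketch of the Tomcat side (in server.xml; the port, protocol, and timeouts are whatever your installation already uses):

<Connector port="8080" protocol="HTTP/1.1"
           URIEncoding="UTF-8"
           connectionTimeout="20000"
           redirectPort="8443" />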
The only character encoding accepted for HTTP URLs and headers is US-ASCII, so you need to URL-encode these characters to send them back to the application. The simplest way to do this in Java would be:
public String doSearch() {
    // some logic here
    String encodedSearchQuery = java.net.URLEncoder.encode(searchQuery, "UTF-8");
    return "/path/to/page/with/multiple_results?query=" + encodedSearchQuery + "&faces-redirect=true";
}
And then it should work for any character that you use.
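One caveat worth knowing (my note, not the answerer's): URLEncoder implements application/x-www-form-urlencoded encoding, so spaces become + rather than %20:

import java.net.URLEncoder;

public class EncodeDemo {
    public static void main(String[] args) throws Exception {
        // Prints "Kra+%C3%90ong": the space becomes '+',
        // and Ð (U+00D0) becomes the UTF-8 byte pair %C3%90.
        System.out.println(URLEncoder.encode("Kra Ðong", "UTF-8"));
    }
}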

How to check encoding in Java?

I am facing a problem with encoding.
For example, I have a message in XML whose declared encoding is "UTF-8":
<message>
    <product_name>apple</product_name>
    <price>1.3</price>
    <product_name>orange</product_name>
    <price>1.2</price>
    .......
</message>
Now, this message supports multiple languages:
Traditional Chinese (Big5),
Simplified Chinese (GB),
English (UTF-8)
And it will only change the encoding in specific fields.
For example (Traditional Chinese),
<product_name>蘋果</product_name>
<price>1.3</price>
<product_name>橙</product_name>
<price>1.2</price>
.......
Only "蘋果" and "橙" are using big5, "<product_name>" and "</product_name>" are still using utf-8.
<price>1.3</price> and <price>1.2</price> are using utf-8.
How do I know which word is using different encoding?
It looks like whoever is providing the XML is providing incorrect XML. They should be using a consistent encoding.
http://sourceforge.net/projects/jchardet/files/ is a pretty good heuristic charset detector.
It's a port of the one used in Firefox to detect the encoding of pages that are missing a charset in content-type or a BOM.
You could use that to try and figure out the encoding for substrings in a malformed XML file if you can't get the provider to fix their output.
You should use only one encoding in one XML file. The Big5 characters all have counterparts in UTF-8.
Because I cannot get the provider to fix the output, I have to handle it myself, and I cannot use an external library in this project.
I can only solve it like this,
String str = new String(big5String.getBytes("UTF-8"));
before displaying the message.
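If shipping a detector library is not an option, a rough standard-library heuristic (my sketch, not the asker's code) is to try a strict UTF-8 decode of each suspect field and fall back to Big5 when it fails:

import java.nio.ByteBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.Charset;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CodingErrorAction;
import java.nio.charset.StandardCharsets;

public class CharsetGuess {
    // Decode as UTF-8 if the bytes are valid UTF-8, otherwise assume Big5.
    static String decode(byte[] bytes) {
        CharsetDecoder strictUtf8 = StandardCharsets.UTF_8.newDecoder()
                .onMalformedInput(CodingErrorAction.REPORT)
                .onUnmappableCharacter(CodingErrorAction.REPORT);
        try {
            return strictUtf8.decode(ByteBuffer.wrap(bytes)).toString();
        } catch (CharacterCodingException e) {
            return new String(bytes, Charset.forName("Big5"));
        }
    }
}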
