Strange behaviour with Jersey multipart request for UTF-8 encoding - java

I have seen strange behavior with Jersey and Tomcat multipart requests.
I have files named in different languages, for example
минуты назад.txt or 您好.txt
With the help of another post I figured out that we need to convert the name to UTF-8,
something like
String fileName = new String(bodyPart.getContentDisposition().getFileName().getBytes(), "UTF-8");
With this I see that the names are converted back, but some characters are garbled into question marks. The file names above come out as
мин�?�?�? назад.txt and �?�好.txt
I am not sure why only a few characters are lost. In the code above, bodyPart is a Jersey FormDataBodyPart.
Is any additional configuration needed in Tomcat? I tried adding URIEncoding="UTF-8", but that did not help.
I need some help understanding this.
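One likely cause (an assumption, since the question does not show where the header bytes are decoded): the container decoded the Content-Disposition header as ISO-8859-1, while the bare getBytes() in the snippet above re-encodes with the platform default charset, mangling anything that charset cannot represent. A minimal sketch of the round trip with the charsets spelled out on both sides (requires Java 7+ for StandardCharsets):

```java
import java.nio.charset.StandardCharsets;

public class FileNameFix {

    // Re-decode a header value that the container read as ISO-8859-1
    // but that the client actually sent as UTF-8 bytes.
    static String fixFileName(String raw) {
        return new String(raw.getBytes(StandardCharsets.ISO_8859_1),
                          StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // Simulate the container's mistake: UTF-8 bytes decoded as Latin-1.
        String garbled = new String(
                "минуты назад.txt".getBytes(StandardCharsets.UTF_8),
                StandardCharsets.ISO_8859_1);
        System.out.println(fixFileName(garbled)); // минуты назад.txt
    }
}
```

The bare getBytes() in the question uses the JVM's default charset, so characters that charset cannot represent become question marks, which would explain why only some characters are lost.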

Related

Tomcat text file encoding

I have a Java webapp which reads from a file on disk and returns the needed values. The file on disk contains UTF-8 characters.
Example of the file content:
lähedus teeb korterist atraktiivse üüriobjekti välismaalastele
When the webapp is run on localhost, the servlet reads from disk and returns:
lähedus teeb korterist atraktiivse üüriobjekti välismaalastele
When I run the same app on a separate server the same request returns this:
l??hedus teeb korterist atraktiivse ????riobjekti v??lismaalastele
This is purely an encoding issue but I don't know how to solve it.
What I have tried:
I added this to config/server.xml
<Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443"
           URIEncoding="UTF-8"/> <!-- THIS PART -->
But it didn't help.
What should I change in config to have it working on server as well?
Thanks!
EDIT
I am reading from a txt file on the server containing JSON strings.
I am using a Java BufferedReader to read the content. As I mentioned in the comments, the problem is not caused by the reader, because the same code works on localhost.
I am sending the response via a servlet which just flushes the JSON string out. Again, the same story as with the reader.
I get the question marks on any client from which I make the request (browser, Android, etc.).
Your file seems to be in UTF-8, and somewhere it is being wrongly converted to a single-byte encoding: each special character is a multi-byte sequence in UTF-8, which is why one character turns into two unconvertible characters (??).
The application is reading the file without specifying an encoding, so it uses the system default. That is not something you want.
You then need to find the code doing the wrong reading: there is often an overloaded method where you can pass the encoding. FileReader is notorious here; that utility class always uses the default encoding. Check occurrences of:
InputStreamReader
new String
String.getBytes
Scanner
For good order, though probably not the issue here: any response yielding that text should also specify the charset in its Content-Type header.
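As a sketch of the fix on the reading side (assuming Java 7+ and that the file really is UTF-8): Files.newBufferedReader forces you to pass a Charset, unlike FileReader, so the platform default never gets a chance to interfere.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class ReadUtf8File {

    // Reads a whole text file as UTF-8, independent of the platform default.
    static String readAll(Path path) throws IOException {
        StringBuilder sb = new StringBuilder();
        try (BufferedReader reader =
                 Files.newBufferedReader(path, StandardCharsets.UTF_8)) {
            String line;
            while ((line = reader.readLine()) != null) {
                sb.append(line).append('\n');
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("demo", ".txt");
        Files.write(tmp, "lähedus üüriobjekti".getBytes(StandardCharsets.UTF_8));
        System.out.print(readAll(tmp)); // lähedus üüriobjekti
    }
}
```

On Java 6, the equivalent is new BufferedReader(new InputStreamReader(new FileInputStream(file), "UTF-8")).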

encoding trouble while sending info on server

I am trying to send data to the server. The charset encoding is set to UTF-8, and the JSP page encoding is also set to UTF-8. I use Spring MVC.
I build a JSON payload and try to send it to the server, but when I look at the response body I see strange symbols between the words, such as attributeCategory%5B0%5D%5Batt.
I searched, and all the suggestions were to use UTF-8 encoding to resolve such problems.
UPDATE
When I added URLDecoder.decode(body, "ISO-8859-1"); on the server side, everything was decoded in normal form. So my question is: what do I need to change in my JSON, or elsewhere, to make my program work with UTF-8 encoding?
%5B = [ (hexadecimal code 5B)
%5D = ] (hexadecimal code 5D)
This might stem from HTML INPUT fields with the same name, so in fact attributeCategory[0][att was meant (probably miscopied here).
It could also be JavaScript.
It is URL encoding for HTTP transmission of characters like [ and ] that are not allowed literally in URLs. It has nothing to do with a character-set problem.
I hope this points to some cause of this error.
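To illustrate the point (the parameter names and values below are made up, since the question only shows a truncated fragment): %5B and %5D decode to [ and ] under any ASCII-compatible charset, and the charset you pass to URLDecoder only matters once non-ASCII data appears.

```java
import java.io.UnsupportedEncodingException;
import java.net.URLDecoder;
import java.net.URLEncoder;

public class FormBodyDecode {

    public static void main(String[] args) throws UnsupportedEncodingException {
        // Hypothetical form-encoded body in the style of the question.
        String body = "attributeCategory%5B0%5D%5Bname%5D=value";
        // Brackets are plain percent-encoded ASCII; charset choice is irrelevant here.
        System.out.println(URLDecoder.decode(body, "UTF-8"));
        // → attributeCategory[0][name]=value

        // The charset only matters for non-ASCII payloads:
        String encoded = URLEncoder.encode("значение", "UTF-8");
        System.out.println(URLDecoder.decode(encoded, "UTF-8"));
        // → значение
    }
}
```

So if decoding with ISO-8859-1 "works" for the bracket names but garbles the values, the sender is likely encoding the payload with a charset other than UTF-8.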

UTF-8 encoding of GET parameters in JSF

I have a search form in JSF that is implemented using a RichFaces 4 autocomplete component and the following JSF 2 page and Java bean. I use Tomcat 6 & 7 to run the application.
...
<h:commandButton value="#{msg.search}" styleClass="search-btn" action="#{autoCompletBean.doSearch}" />
...
In the AutoCompleteBean
public String doSearch() {
    // some logic here
    return "/path/to/page/with/multiple_results?query=" + searchQuery + "&faces-redirect=true";
}
This works well as long as everything within the searchQuery string is in Latin-1; it does not work if it contains characters outside Latin-1.
For instance a search for "bodø" will be automatically encoded as "bod%F8". However a search for "Kra Ðong" will not work since it is unable to encode "Ð".
I have now tried several different approaches to solve this, but none of them works.
I have tried encoding searchQuery myself using URLEncoder, but this only leads to double encoding, since % is then encoded as %25.
I have tried using java.net.URI to get the encoding, but it gives the same result as URLEncoder.
I have tried turning on UTF-8 in Tomcat using URIEncoding="UTF-8" in the Connector, but this only worsens the problem, since then non-ASCII characters do not work at all.
So to my questions:
Can I change the way JSF 2 encodes the GET parameters?
If I cannot change the way JSF 2 encodes the GET parameters, can I turn off the encoding and do it manually?
Am I doing something strange here? This seems like something that should be supported out of the box, but I cannot find anyone else with the same problem.
I think you've hit a corner-case bug in JSF. The query string is URL-encoded by ExternalContext#encodeRedirectURL(), which uses the response character encoding as obtained by ExternalContext#getResponseCharacterEncoding(). However, while JSF uses UTF-8 as the response character encoding by default, this is only set when the view is actually rendered, not when the response is redirected. So the response character encoding still returns the platform default of ISO-8859-1, which causes your characters to be URL-encoded using the wrong encoding.
I've reported this as issue 2440. In the meantime, your best bet is to explicitly set the response character encoding yourself beforehand:
FacesContext.getCurrentInstance().getExternalContext().setResponseCharacterEncoding("UTF-8");
Note that this still requires that the container itself uses the same character encoding to decode the request URL, so you certainly need to set URIEncoding="UTF-8" in Tomcat's configuration. This won't mess up the characters anymore as they will be really UTF-8 now.
The only character encoding accepted in HTTP URLs and headers is US-ASCII, so you need to URL-encode these characters to send them back to the application. The simplest way to do this in Java would be:
public String doSearch() {
    // some logic here
    String encodedSearchQuery = java.net.URLEncoder.encode(searchQuery, "UTF-8");
    return "/path/to/page/with/multiple_results?query=" + encodedSearchQuery + "&faces-redirect=true";
}
And then it should work for any character that you use.

How to check encoding in java?

I am facing a problem about encoding.
For example, I have a message in XML, whose format encoding is "UTF-8".
<message>
<product_name>apple</product_name>
<price>1.3</price>
<product_name>orange</product_name>
<price>1.2</price>
.......
</message>
Now, this message has to support multiple languages:
Traditional Chinese (Big5),
Simplified Chinese (GB),
English (UTF-8)
and the encoding changes only in specific fields.
For example (Traditional Chinese),
<message>
<product_name>蘋果</product_name>
<price>1.3</price>
<product_name>橙</product_name>
<price>1.2</price>
.......
</message>
Only "蘋果" and "橙" are using big5, "<product_name>" and "</product_name>" are still using utf-8.
<price>1.3</price> and <price>1.2</price> are using utf-8.
How do I know which word is using different encoding?
It looks like whoever is providing the XML is providing incorrect XML. They should be using a consistent encoding.
http://sourceforge.net/projects/jchardet/files/ is a pretty good heuristic charset detector.
It's a port of the one used in Firefox to detect the encoding of pages that are missing a charset in content-type or a BOM.
You could use that to try and figure out the encoding for substrings in a malformed XML file if you can't get the provider to fix their output.
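If an external detector is not an option, a cruder check with only the standard library is to attempt a strict decode and see whether it throws. This is a heuristic sketch, not the jchardet approach: several encodings can happen to accept the same bytes, so it can rule encodings out but never prove one.

```java
import java.nio.ByteBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.Charset;
import java.nio.charset.CodingErrorAction;

public class StrictDecodeCheck {

    // Returns true if the bytes form a valid sequence in the named charset.
    static boolean decodesAs(byte[] bytes, String charsetName) {
        try {
            Charset.forName(charsetName).newDecoder()
                   .onMalformedInput(CodingErrorAction.REPORT)
                   .onUnmappableCharacter(CodingErrorAction.REPORT)
                   .decode(ByteBuffer.wrap(bytes));
            return true;
        } catch (CharacterCodingException e) {
            return false;
        }
    }

    public static void main(String[] args) throws Exception {
        byte[] utf8 = "蘋果".getBytes("UTF-8");
        System.out.println(decodesAs(utf8, "UTF-8")); // true
        // Big5 bytes for the same text usually fail a strict UTF-8 decode,
        // but accidental matches are possible, so treat this as a heuristic.
        System.out.println(decodesAs("蘋果".getBytes("Big5"), "UTF-8"));
    }
}
```

Run this over the suspect substrings (the element content between the tags) rather than the whole document, since the markup itself is valid in both encodings.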
You should use only one encoding in one XML file. The Big5 characters all have counterparts in UTF-8.
Because I cannot get the provider to fix the output, I have to handle it myself, and I cannot use an external library in this project.
I can only solve it like this, before displaying the message:
String str = new String(big5String.getBytes("UTF-8"));

encoding problem in servlet

I have a servlet which receives some parameters from the client and then does some work.
The parameters from the client are Chinese, so I often get invalid characters in the servlet.
For example, if I enter
http://localhost:8080/Servlet?q=中文&type=test
then in the servlet the parameter 'type' is correct (test), but the parameter 'q' is not correctly decoded; it becomes invalid characters that cannot be parsed.
However, if I press Enter in the address bar again, the URL changes to:
http://localhost:8080/Servlet?q=%D6%D0%CE%C4&type=test
Now my servlet gets the right value for 'q'.
What is the problem?
UPDATE
BTW, it works well when I send the form with POST.
When I send the parameters with Ajax, for example:
url="http://..q='中文',
xmlhttp.open("POST",url,true);
then the server side also gets the invalid characters.
It seems that only when the Chinese characters are percent-encoded like %xx does the server side get the right result.
That is to say, http://.../q=中文 does not work, while
http://.../q=%D6%D0%CE%C4 works.
But why does "http://www.google.com.hk/search?hl=zh-CN&newwindow=1&safe=strict&q=%E4%B8%AD%E6%96%87&btnG=Google+%E6%90%9C%E7%B4%A2&aq=f&aqi=&aql=&oq=&gs_rfai=" work?
Ensure that the encoding of the page containing the form is also UTF-8 and that the browser is instructed to read the page as UTF-8. Assuming it's JSP, just put this at the very top of the page to achieve that:
<%@ page pageEncoding="UTF-8" %>
Then, to process the GET query string as UTF-8, ensure that the servlet container in question is configured to do so. It's unclear which one you're using, so here's a Tomcat example: set the URIEncoding attribute of the <Connector> element in /conf/server.xml to UTF-8.
<Connector URIEncoding="UTF-8">
If you'd like to use POST instead, you need to ensure that the HttpServletRequest is instructed to parse the POST request body using UTF-8.
request.setCharacterEncoding("UTF-8");
Call this before you access the first parameter. A Filter is the best place for this.
See also:
Unicode - How to get the characters right?
Using non-ASCII characters as GET parameters (i.e. in URLs) is generally problematic. RFC 3986 recommends using UTF-8 and then percent encoding, but that's AFAIK not an official standard. And what you are using in the case where it works isn't UTF-8!
It would probably be safest to switch to POST requests.
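The two URLs in the question can be decoded to show exactly this: the form of 中文 that "works" for the asker is percent-encoded GBK bytes, while Google's URL percent-encodes UTF-8 bytes for the same text (a quick sketch; GBK is inferred here because %D6%D0%CE%C4 matches its byte sequence for 中文).

```java
import java.io.UnsupportedEncodingException;
import java.net.URLDecoder;

public class QueryCharsetDemo {

    public static void main(String[] args) throws UnsupportedEncodingException {
        // The asker's "working" URL: percent-encoded GBK bytes.
        System.out.println(URLDecoder.decode("%D6%D0%CE%C4", "GBK"));         // 中文
        // Google's URL: percent-encoded UTF-8 bytes for the same text.
        System.out.println(URLDecoder.decode("%E4%B8%AD%E6%96%87", "UTF-8")); // 中文
    }
}
```

Both decode to the same string; the only difference is which bytes were percent-encoded, which is why Tomcat's URIEncoding setting has to match what the client actually sends.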
I believe the problem is on the sending side. As I understand from your description, when you type the URL into the browser you get a "correctly" encoded request. That job is done by the browser: it knows how to convert Unicode characters into sequences of codes like %xx.
So check how you send the request; it should be encoded on sending.
Another possibility is to use the POST method instead of GET.
Do read this article on the URL encoding format: www.blooberry.com/indexdot/html/topics/urlencoding.htm.
If you want, you could convert the characters to hex or Base64 and put them in the parameters of the URL.
I think it's better to put them in the body (POST) than in the URL (GET).
