Here's my problem: I am receiving a string from a SOAP web service which seems to contain the UTF-8 encoded sequence %c3%89. This string is a URL I have to reach to get a picture, and the picture contains a part of the URL in it.
My problem is that the server generating the picture doesn't recognize the %c3%89 encoding and thus doesn't create the right picture. When it is replaced with its normal representation (i.e. É), the server generates the picture correctly.
My question is: how can I replace the encoded character in the string?
PS: I don't have access to the server side.
Here's my code:
URL aURL = new URL(URLDecoder.decode(url));
URLConnection conn = aURL.openConnection();
conn.connect();
InputStream is = conn.getInputStream();
BufferedInputStream bis = new BufferedInputStream(is);
bm = BitmapFactory.decodeStream(bis);
Thanks a lot :)
Hush
You need to pass the character encoding as the second argument to URLDecoder#decode(); otherwise it will use the platform default character encoding.
System.out.println(URLDecoder.decode("%c3%89", "ISO-8859-1")); // Ã?
System.out.println(URLDecoder.decode("%c3%89", "UTF-8")); // É
I just realized that the URL was perfectly understood by the website with earlier versions of Android (let's say before 2.2). I'm starting to wonder what has changed in the URLConnection framework since that version... Anyway, I will try to get around this problem by hosting the required picture on the web service rather than returning the URL.
Thank you
Related
I am working on legacy web service client code where JSON data is sent to the web service. Recently it was found that, for some requests, the service returns an HTTP 400 response due to invalid (non-UTF-8) characters in the JSON body.
Below is one example of the data which is causing the issue.
String value = "zu3z5eq tô‰U\f‹Á‹€z";
I am using the org.json.JSONObject.toString() method to generate the JSON string. Can you please let me know how I can ensure that the JSON string is UTF-8 encoded?
I have already tried a few of the solutions available online, like converting to a byte array and back, using the Java Charset methods, etc., but they did not work. Either they also convert valid values such as Chinese/Japanese characters, or they don't work at all.
Can you please provide some input on this?
You need to set the character encoding for OutputStreamWriter when you create it:
httpConn.connect();
wr = new OutputStreamWriter(httpConn.getOutputStream(), StandardCharsets.UTF_8);
wr.write(jsonObject.toString());
wr.flush();
Otherwise it defaults to the "platform default encoding," which is whatever encoding has historically been used for text files on the system you are running on.
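As an additional hedged sketch (the header name and value are standard HTTP, but check your service's documentation; httpConn and jsonObject are the names used above), you can also declare the charset in the request header, set before connecting, so the server knows the body is UTF-8:
// declare the body encoding to the server; must be set before the connection is opened
httpConn.setRequestProperty("Content-Type", "application/json; charset=UTF-8");
httpConn.setDoOutput(true);
try (OutputStreamWriter wr =
        new OutputStreamWriter(httpConn.getOutputStream(), StandardCharsets.UTF_8)) {
    wr.write(jsonObject.toString()); // written as UTF-8 bytes
}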
Use Base64 encoding to convert the value to a byte[].
String value = "zu3z5eq tô‰U\f‹Á‹€z";
// WHILE SENDING ENCODE THE VALUE
byte[] encodedBytes = Base64.getEncoder().encode(value.getBytes("UTF-8"));
String encodedValue = new String(encodedBytes, "UTF-8");
// TRANSPORT....
// ON RECEIVING END DECODE THE VALUE
byte[] decodedBytes = Base64.getDecoder().decode(encodedValue.getBytes("UTF-8"));
System.out.println( new String(decodedBytes, "UTF-8"));
I have produced a little app that searches and displays data I retrieve from Google Books in a neat but simple fashion. Everything works so far, but there is an issue directly at the source: though Google correctly provides me with German-language search results, for some reason it displays all special German characters (Ä, Ö, Ü and ß, most likely) as the "�" dummy character or sometimes just "?".
I was able to confirm that the JSONObject built from the InputStream already contains those mistakes, so it seems the original input stream from Google is not being read correctly. The weird thing is that I already pass "UTF-8" (which should cover German characters) to my InputStreamReader, but apparently to no avail.
Here is the http-request procedure I am using:
public class HttpRequest {
public static String request(String urlString) throws IOException {
URL url = new URL(urlString);
URLConnection connection = url.openConnection();
connection.setConnectTimeout(5000);
connection.setReadTimeout(10000);
BufferedReader in = new BufferedReader(new InputStreamReader(connection.getInputStream(), "UTF-8"));
StringBuilder builder = new StringBuilder();
String inputLine;
while((inputLine = in.readLine()) != null)
builder.append(inputLine);
in.close();
return builder.toString();
}
}
What else could be going wrong? I checked the StringBuilder already, but the mistakes are already present in the inputLine(s) read from the BufferedReader.
Also, I was unable to find any language- or encoding-specific settings in the official Google Books API guide, so I guess the results come with a universal encoding, but then the "UTF-8" flag should handle them, shouldn't it?
The easiest approach is to check the raw data another way, such as in a browser. Looking at a Google Books API URL response in the browser is quite simple: just open the URL and the response comes back as JSON. Optionally install a JSON viewer plugin, but it isn't needed for this.
For example use this url:
https://www.googleapis.com/books/v1/volumes?q=Latein+key=NO
Checking the HTTP headers (in the browser developer tools, for example), you can see that the header lists the content as having the expected encoding:
content-type: application/json; charset=UTF-8
Looking at the specific content for some German results, we can see that the German special characters are correct for some books but not for all, depending on the book in question.
Conclusion: UTF-8 is indeed correct, and the source/raw data itself contains missing/wrong German characters for some texts.
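If you prefer to do the same check from Java instead of the browser, a small sketch (using the example URL above) is to print the Content-Type header of the response:
URLConnection conn = new URL(
        "https://www.googleapis.com/books/v1/volumes?q=Latein+key=NO").openConnection();
System.out.println(conn.getContentType()); // expected: application/json; charset=UTF-8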
I'm trying to read XML data from the Google weather web service. The response contains some Spanish characters. The problem is that these characters are not displayed properly. I've tried to convert everything to UTF-8, but that does not seem to help. The code is given below:
public static void main(String[] args) {
try {
URL url = new URL("http://www.google.com/ig/api?weather=Noja&hl=es");
HttpURLConnection con = (HttpURLConnection) url.openConnection();
BufferedReader in = new BufferedReader(new InputStreamReader(
con.getInputStream(), "UTF-8"));
String str = in.readLine();
//this does not work even
//String str = new String(in.readLine().getBytes("UTF-8"),"UTF-8");
System.out.println(str);
in.close();
} catch (IOException e) {
e.printStackTrace();
}
}
}
The output is given below (trimmed to keep the post within limits). Notice "mi�" and "s�b":
trimmed to keep max char limit
<day_of_week data="mi�"/><day_of_week data="s�b"/><low data="11"/><high data="16"/><icon data="/ig/images/weather/chance_of_rain.gif"/><condition data="Posibilidad de lluvia"/></forecast_conditions></weather></xml_api_reply>
If that page is XML then you should usually pass the InputStream directly to the XML parser and let it detect the encoding automatically. Otherwise you should look at the charset parameter of the Content-Type response header to determine the correct encoding and create the appropriate InputStreamReader.
Edit: that server is indeed responding with different encodings to the browser and to the Java client, probably depending on the Accept-Charset request header. For Firefox this header has the value
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7\r\n
This means both charsets are accepted, with no preference for either one. The server responds with a Content-Type header of text/xml; charset=UTF-8. The Java client does not send this header, and the server responds with text/xml; charset=ISO-8859-1.
To use the charset supplied by the server you can use code like the following:
String contentType = con.getContentType();
System.out.println(contentType); // e.g. text/xml; charset=ISO-8859-1
Matcher matcher = Pattern.compile("charset\\s*=\\s*([^ ;]+)").matcher(contentType);
String charset = "utf-8"; // default
if (matcher.find()) {
    charset = matcher.group(1);
}
BufferedReader in = new BufferedReader(new InputStreamReader(
        con.getInputStream(), charset));
Edit 2: it turns out the server decides which charset to use based on the User-Agent header. If you add the following line, it responds with a charset of utf-8.
con.setRequestProperty("User-Agent", "Mozilla/5.0");
Anyway, the Content-Type response header contains the correct charset to use.
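Putting both edits together, a sketch of the whole request could look like this (the User-Agent value and the utf-8 fallback are simply the ones mentioned above; Pattern and Matcher come from java.util.regex):
HttpURLConnection con = (HttpURLConnection) new URL(
        "http://www.google.com/ig/api?weather=Noja&hl=es").openConnection();
con.setRequestProperty("User-Agent", "Mozilla/5.0"); // server then answers with UTF-8
String contentType = con.getContentType();           // e.g. text/xml; charset=UTF-8
String charset = "utf-8";                            // fallback
Matcher matcher = Pattern.compile("charset\\s*=\\s*([^ ;]+)").matcher(contentType);
if (matcher.find()) {
    charset = matcher.group(1);
}
BufferedReader in = new BufferedReader(new InputStreamReader(
        con.getInputStream(), charset));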
Your input may be correct, although I would use an XML parser to read the XML rather than trying to interpret it as a line-by-line feed. However, your output may be incorrect.
What's the default character encoding of your JVM? Check (and set) the confusingly named property -Dfile.encoding=UTF-8.
Do the requisite fonts etc. exist on your system? Can you check the actual character codes you're outputting rather than relying on your terminal settings? I suspect this is the case, since the encoding/decoding appears to work and you're just missing those individual characters.
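A minimal sketch of what "check the actual character codes" could look like, assuming str holds the line you read (it prints each code point numerically, so the terminal's font and encoding don't matter):
for (int i = 0; i < str.length(); ) {
    int cp = str.codePointAt(i);
    System.out.printf("U+%04X ", cp); // e.g. U+00E9 for 'é', U+FFFD for the � replacement char
    i += Character.charCount(cp);
}
System.out.println();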
In my application there is support for both Arabic and English, but I am facing a problem: when the mobile receives an Arabic SMS, it is displayed as ??? ???? (question marks), even though the mobile I am using for testing supports Arabic and all the Arabic in the application works fine. The problem only occurs when an Arabic SMS is received by my mobile.
String ff = new String(smsContent.getBytes("UTF-8"), "UTF-8");
StringWriter stringBuffer = new StringWriter();
PrintWriter pOut = new PrintWriter(stringBuffer);
pOut.print("<?xml version=\"1.0\" encoding=\"utf-8\"?>");
pOut.print("<!DOCTYPE MESSAGE SYSTEM \"http://127.0.0.1/psms/dtd/messagev12.dtd\" >");
pOut.print("<MESSAGE VER=\"1.2\"><USER USERNAME=\""+userName+"\" PASSWORD=\""+password+"\"/>");
pOut.print("<SMS UDH=\"0\" CODING=\"1\" TEXT=\""+ff+"\" PROPERTY=\"0\" ID=\"2\">");
pOut.print("<ADDRESS FROM=\""+fromNo+"\" TO=\""+toNO+"\" SEQ=\"1\" TAG=\"\" />");
pOut.print("</SMS>");
pOut.print("</MESSAGE>");
pOut.flush();
pOut.close();
URL url = new URL("url");
HttpURLConnection connection = (HttpURLConnection)url.openConnection();
connection.setDoOutput(true);
BufferedWriter out = new BufferedWriter(new OutputStreamWriter(connection.getOutputStream()));
out.write("data="+message+"&action=send");
out.flush();
SMS in English works fine in my application.
First, new String(smsContent.getBytes("UTF-8"), "UTF-8") is a redundant round trip, equivalent to just smsContent: you first encode the string into bytes using UTF-8 and then immediately decode it back from those bytes again.
Second, your method of puzzling together XML is completely broken. You can't just concatenate strings and hope to end up with well-formed XML. Just for example, think about what happens if someone tries to send a double quote (") in the message text. Use an XML library.
Third, you're implicitly using the platform default encoding for your OutputStreamWriter instead of explicitly specifying one, which means your code only works on machines that happen to have the correct encoding as their default. I'm guessing yours does not.
Fourth, your method of puzzling together POST parameters is broken. You haven't specified what the variable message is. I'm guessing it's the complete XML document, but then you're trying to send it as a POST parameter to some kind of HTTP service, in which case it needs to be escaped/URL-encoded. Just for example, what happens if someone tries to send the message &data=<whatever>&? Please clarify.
See also Using java.net.URLConnection to fire and handle HTTP requests
Fifth, since you're sending to some HTTP service, there's probably documentation for that service about what encoding to send or how to specify it, possibly via an HTTP header (probably Content-Type: application/x-www-form-urlencoded; charset=UTF-8?). Point us to the documentation if you can't figure it out yourself.
Edit: Found the documentation: http://www.google.se/search?q=valuefirst+pace
It pretty clearly states that you need to url encode the XML document, so that's probably what you're missing, in which case the encoding for the OutputStreamWriter won't matter as long as it's ASCII-compatible.
However, the documentation does not specify which character encoding to use for url-encoding, which is pretty weak. UTF-8 is the most likely though.
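A minimal sketch of that url-encoding step, assuming UTF-8 as discussed and reusing the message and out variables from the question (URLEncoder takes care of &, = and non-ASCII characters inside the XML):
String body = "data=" + URLEncoder.encode(message, "UTF-8") + "&action=send";
out.write(body);
out.flush();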
From what I've read on some web pages, SMS in Arabic (and other languages too) are encoded with UCS-2, not UTF-8. Changing the encoding is worth a try.
You are using your platform's default encoding for the request data, which may very well differ from UTF-8. Try specifying UTF-8 in the OutputStreamWriter:
... new OutputStreamWriter(connection.getOutputStream(), "UTF-8") ...
Another issue is of course that your hand-made XML document will break as soon as any of your parameters contains characters that have to be escaped in XML, but that's a different story. Why don't you use an XML library instead?
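For the "use an XML library" part, here is a hedged sketch using the JDK's built-in DOM and Transformer classes; element and attribute names are copied from the hand-built strings in the question, the variables userName, password, smsContent, fromNo and toNO come from there too, and exception handling is omitted:
// imports: javax.xml.parsers.DocumentBuilderFactory, javax.xml.transform.*,
//          javax.xml.transform.dom.DOMSource, javax.xml.transform.stream.StreamResult,
//          org.w3c.dom.Document, org.w3c.dom.Element
Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder().newDocument();

Element msg = doc.createElement("MESSAGE");
msg.setAttribute("VER", "1.2");
doc.appendChild(msg);

Element user = doc.createElement("USER");
user.setAttribute("USERNAME", userName);
user.setAttribute("PASSWORD", password);
msg.appendChild(user);

Element sms = doc.createElement("SMS");
sms.setAttribute("UDH", "0");
sms.setAttribute("CODING", "1");
sms.setAttribute("TEXT", smsContent); // the library escapes ", & and < for you
sms.setAttribute("PROPERTY", "0");
sms.setAttribute("ID", "2");
msg.appendChild(sms);

Element address = doc.createElement("ADDRESS");
address.setAttribute("FROM", fromNo);
address.setAttribute("TO", toNO);
address.setAttribute("SEQ", "1");
address.setAttribute("TAG", "");
sms.appendChild(address);

// serialize the DOM to a string, declaring UTF-8 and the DTD from the question
StringWriter xml = new StringWriter();
Transformer t = TransformerFactory.newInstance().newTransformer();
t.setOutputProperty(OutputKeys.ENCODING, "UTF-8");
t.setOutputProperty(OutputKeys.DOCTYPE_SYSTEM, "http://127.0.0.1/psms/dtd/messagev12.dtd");
t.transform(new DOMSource(doc), new StreamResult(xml));
String xmlDocument = xml.toString(); // then url-encode this before POSTing, as noted above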
Just an additional piece of information: the documentation Christoffer points to also explains that the request example you are using is only suitable for text messages with characters in the standard SMS character set. For Unicode character support, you have to use a different request.
I'm trying to read in an image URL. As mentioned in the Java documentation, I tried converting the URL to a URI with
String imageURL = "http://www.shefinds.com/files/Christian-Louboutin-Décolleté-100-pumps.jpg";
URL url = new URL(imageURL);
url = new URI(url.getProtocol(), url.getHost(), url.getFile(), null).toURL();
URLConnection conn = url.openConnection();
InputStream is = conn.getInputStream();
I get a java.io.FileNotFoundException for the file
http://www.shefinds.com/files/Christian-Louboutin-Décolleté-100-pumps.jpg
What am I doing wrong and what is the right way to encode this URL?
Update:
I'm using Rome to read in RSS feeds. Taking BalusC's suggestions, I printed out the raw input at various stages, and it seems that the ROME RSS parser is using ISO-8859-1 instead of UTF-8.
Works fine here (it returns a 403, but at least that's not a 404):
URL url = new URL("http://www.shefinds.com/files/Christian-Louboutin-Décolleté-100-pumps.jpg");
URLConnection connection = url.openConnection();
InputStream input = connection.getInputStream();
When I fix it so that it doesn't return a 403, the picture is retrieved correctly:
URL url = new URL("http://www.shefinds.com/files/Christian-Louboutin-Décolleté-100-pumps.jpg");
URLConnection connection = url.openConnection();
connection.setRequestProperty("User-Agent", "Mozilla/4.0");
InputStream input = connection.getInputStream();
OutputStream output = new FileOutputStream("/pic.jpg");
for (int data = 0; (data = input.read()) != -1;) {
output.write(data);
}
So your problem lies somewhere else. Converting is actually not needed. The initial URL is valid.
Maybe you're obtaining the actual URL from some binary source using the wrong character encoding? The transition of é to Ã© namely suggests that the original source was UTF-8 encoded and that the code has incorrectly read it in using ISO-8859-1 instead of UTF-8.
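That transition is easy to reproduce in one line (a sketch of the suspected mistake):
// the UTF-8 bytes of "é" are 0xC3 0xA9; read back as ISO-8859-1 they become "Ã©"
System.out.println(new String("é".getBytes("UTF-8"), "ISO-8859-1")); // prints Ã©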
Update: or maybe you've actually hardcoded it in the Java source code and are saving the source file itself using the wrong encoding. I've configured my editor (Eclipse) to save files using UTF-8, and my -Dfile.encoding also defaults to UTF-8, which would explain why it works on my machine ;)
Update 2: as per the comments, in a nutshell, everything should work fine if the encoding used to save the source file matches the default -Dfile.encoding of the runtime platform (and the character encoding in question supports é). To avoid those unforeseen clashes whenever you want to distribute the code, it's indeed better to replace hardcoded non-ASCII characters with Unicode escapes.
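A tiny sketch of that last suggestion, applied to the URL from the question (\u00e9 is 'é', so the string is identical at runtime, but the .java file stays pure ASCII and is immune to source-file encoding mix-ups):
URL url = new URL("http://www.shefinds.com/files/Christian-Louboutin-D\u00e9collet\u00e9-100-pumps.jpg");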
I think the technical answer is "you can't." Non-ASCII characters can't be used in a URL according to the standard, and even some ASCII characters must be escaped with the "%XX" syntax, where XX is the hexadecimal value of the byte.
If anything, you can escape 'é' as '%E9', but this relies on the server interpreting it as an encoding of the character according to ISO-8859-1. While this isn't technically allowed, I believe many servers will do it.
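For illustration, URLEncoder produces exactly that escape when asked to encode with ISO-8859-1 (a sketch; whether the server accepts it is another matter, as noted above):
System.out.println(URLEncoder.encode("é", "ISO-8859-1")); // %E9
System.out.println(URLEncoder.encode("é", "UTF-8"));      // %C3%A9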
The encoding of your source file is to blame. Using your IDE, set it to UTF-8, and then repaste the URL.