UTF Encoding for Chinese Characters in Java - java

I am receiving a String via an object from an Axis web service. Because I'm not getting the string I expected, I did a check by converting the string into bytes, and I get C3A4C2 BDC2A0 C3A5C2 A5C2BD C3A5C2 90C297 in hex when I'm expecting E4BDA0 E5A5BD E59097, which is 你好吗 in UTF-8.
Any ideas what might be causing 你好吗 to become C3A4C2 BDC2A0 C3A5C2 A5C2BD C3A5C2 90C297? I did a Google search, but all I got was a Chinese website describing a similar problem in Python. Any insights would be great, thanks!

You have what is known as a double encoding.
You have the three-character sequence "你好吗", which, as you correctly point out, is encoded in UTF-8 as E4BDA0 E5A5BD E59097.
But now take each byte of THAT encoding and encode it in UTF-8 again. Start with E4: treated as the codepoint U+00E4, its UTF-8 encoding is C3 A4. Try it!
You get the idea.... :-)
Here is a Java app which illustrates this:
public class DoubleEncoding {
    public static void main(String[] args) throws Exception {
        // The correct UTF-8 encoding of the three characters.
        byte[] encoding1 = "你好吗".getBytes("UTF-8");
        // Misinterpret those bytes as ISO8859-1 characters.
        String string1 = new String(encoding1, "ISO8859-1");
        for (byte b : encoding1) {
            System.out.printf("%2x ", b);
        }
        System.out.println();
        // Encode the misinterpreted string in UTF-8 again: the double encoding.
        byte[] encoding2 = string1.getBytes("UTF-8");
        for (byte b : encoding2) {
            System.out.printf("%2x ", b);
        }
        System.out.println();
    }
}
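If you cannot fix the sender, you can usually reverse the damage by running the decoding in the other direction: re-encode the received string as ISO8859-1 and decode those bytes as UTF-8. A minimal sketch (it assumes the received string really is UTF-8 bytes misread as ISO8859-1, exactly as above):

public class FixDoubleEncoding {
    public static void main(String[] args) throws Exception {
        // Simulate the double-encoded string received from the web service.
        String received = new String("你好吗".getBytes("UTF-8"), "ISO8859-1");
        // Reverse the damage: back to bytes via ISO8859-1, then decode as UTF-8.
        String fixed = new String(received.getBytes("ISO8859-1"), "UTF-8");
        System.out.println(fixed); // prints 你好吗
    }
}

The better fix, of course, is to stop the string from being encoded twice in the first place, wherever in the pipeline that happens.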

public class Encoder {
    public static void main(String[] args) throws Exception {
        String requestString = "你好";
        // Simulate the mojibake: gb2312 bytes misread as ISO8859-1 characters.
        String iso = new String(requestString.getBytes("gb2312"), "ISO8859-1");
        // Reverse it: take the ISO8859-1 bytes back and decode them as gb2312.
        String plaintxt = new String(iso.getBytes("ISO8859-1"), "gb2312");
        // Re-encode the recovered string as UTF-8.
        byte[] utf8 = plaintxt.getBytes("UTF-8");
        System.out.println(plaintxt);
    }
}

Related

Simulating Java Base64Url.decode by using Visual Basic

I am using Visual Basic to simulate the Java program's Base64Url.decode; this is my code:
Java
import org.jose4j.base64url.Base64Url;

public class ss {
    public static void main(String[] args) {
        String Base64str = "q128qjafnAdcbkfrhfWim783bCvDWsak9bfe2ERjnAc";
        byte[] decodeResult = Base64Url.decode(Base64str);
    }
}
Visual Basic
Dim Base64str As String = "q128qjafnAdcbkfrhfWim783bCvDWsak9bfe2ERjnAc"
Dim decodeResult() As Byte = System.Convert.FromBase64String(Base64str)
It throws System.FormatException - Invalid length for a Base-64 char array.
How can I solve this?
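Base64Url is not the same scheme as standard Base64: it uses '-' and '_' in place of '+' and '/', and it omits the trailing '=' padding, so the 43-character string fails the length check in Convert.FromBase64String, which expects padded standard Base64. Translating the alphabet back and restoring the padding fixes it. Here is a sketch of both routes in Java (java.util.Base64 ships with Java 8 and later):

import java.util.Arrays;
import java.util.Base64;

public class Base64UrlDemo {
    public static void main(String[] args) {
        String s = "q128qjafnAdcbkfrhfWim783bCvDWsak9bfe2ERjnAc";
        // Route 1: convert to the standard alphabet and restore '=' padding,
        // which is also what Convert.FromBase64String needs on the VB side.
        String std = s.replace('-', '+').replace('_', '/');
        while (std.length() % 4 != 0) {
            std += "=";
        }
        byte[] viaPadding = Base64.getDecoder().decode(std);
        // Route 2: the URL-safe decoder accepts unpadded input directly,
        // just like jose4j's Base64Url.decode.
        byte[] direct = Base64.getUrlDecoder().decode(s);
        System.out.println(viaPadding.length + " bytes, equal: "
                + Arrays.equals(viaPadding, direct));
    }
}

In Visual Basic, applying the same Replace and padding steps to Base64str before calling System.Convert.FromBase64String should make the FormatException go away.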

convert file utf8 to utf16 java

I'm trying to convert a file from UTF-8 to UTF-16 with a Java application, but my output turned out like this:
蓘Ꟙ괠��Ꟙ돘ꨊ੕䥎潴楦楣慴楯渮瑩瑬攮佲摥牁摤敤乯瑩晩捡瑩潮偬畧楮㷘께뇛賘꼠���藙蘊啉乯瑩晩捡瑩潮⹬慢敬⹏牤敲䅤摥摎潴楦楣慴楯湐汵杩渽��藘귘뗙裙萠��藘꿛賘뇛賘ꨠ
Eventually, the output should be the same
utf8= سلام utf16=\u0633\u0644\u0627\u0645
import java.io.*;

class WriteUTF8Data<inbytes> {
    WriteUTF8Data() throws UnsupportedEncodingException {
    }

    public static void main(String[] args) throws IOException {
        System.setProperty("file.encoding", "UTF-8");
        byte[] inbytes = new byte[1024];
        FileInputStream fis = new FileInputStream("/home/mehrad/Desktop/PerkStoreNotification(1).properties");
        fis.read(inbytes);
        FileOutputStream fos = new FileOutputStream("/home/mehrad/Desktop/PerkStoreNotification(2).properties");
        String in = new String(inbytes, "UTF16");
        fos.write(in.getBytes());
    }
}
You're currently converting from UTF-16 into whatever your system default encoding is. If you want to convert from UTF-8, you need to specify that when you're converting the binary data. There are other issues with your code, though: you're assuming that InputStream.read fills the whole buffer, and that that's all there is in the file. You'd be better off using a Reader and a Writer, looping round, reading into a char array, and then writing the relevant part of that array to the writer.
Here's some sample code that does that. It may well not be the best way of doing it these days, but it should at least work:
import java.io.*;
import java.nio.charset.*;
import java.nio.file.*;

public class ConvertUtf8ToUtf16 {
    public static void main(String[] args) throws IOException {
        Path inputPath = Paths.get(args[0]);
        Path outputPath = Paths.get(args[1]);
        char[] buffer = new char[4096];
        // UTF-8 is actually the default for Files.newBufferedReader,
        // but let's be explicit.
        try (Reader reader = Files.newBufferedReader(inputPath, StandardCharsets.UTF_8)) {
            try (Writer writer = Files.newBufferedWriter(outputPath, StandardCharsets.UTF_16)) {
                int charsRead;
                while ((charsRead = reader.read(buffer)) != -1) {
                    writer.write(buffer, 0, charsRead);
                }
            }
        }
    }
}
First of all, the answer by Jon Skeet is correct and will work. The problem with your code is that you take bytes that were produced as UTF-8 (your file's content) and try to create a new String from them as if they were UTF-16, and that's why you get garbled output.
Java keeps Strings internally in its own encoding (UTF-16). So when you have a String, you can tell Java to produce bytes from it in whatever charset you want; for the same valid String, getBytes("UTF-8") and getBytes("UTF-16") produce different byte sequences. So if you read your original content and you know it is UTF-8, you need to create the String as UTF-8, String inString = new String(inbytes, "UTF-8"), and then, when writing, produce your byte array from that String: fos.write(inString.getBytes("UTF-16"));
Also, I would suggest a utility that can help you understand the internal workings of String: it converts any String into a Unicode sequence and vice versa.
String result = "Hello World";
result = StringUnicodeEncoderDecoder.encodeStringToUnicodeSequence(result);
System.out.println(result);
result = StringUnicodeEncoderDecoder.decodeUnicodeSequenceToString(result);
System.out.println(result);
The output of this code is:
\u0048\u0065\u006c\u006c\u006f\u0020\u0057\u006f\u0072\u006c\u0064
Hello World
The library that contains this utility is called MgntUtils and can be found on Maven Central or on GitHub. It comes as a Maven artifact with sources and Javadoc. Here is the Javadoc for the class StringUnicodeEncoderDecoder, and here is a link to an article that describes the MgntUtils open-source library: Open Source Java library with stack trace filtering, Silent String parsing Unicode converter and Version comparison.
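To make the getBytes point concrete, here is a small sketch using the asker's sample word (the byte counts are what Java produces; the UTF-16 output starts with a byte order mark):

import java.util.Arrays;

public class GetBytesDemo {
    public static void main(String[] args) throws Exception {
        String s = "سلام";
        // 8 bytes: each of the four Arabic letters takes two bytes in UTF-8.
        System.out.println(Arrays.toString(s.getBytes("UTF-8")));
        // 10 bytes: a 2-byte BOM plus two bytes per letter in UTF-16.
        System.out.println(Arrays.toString(s.getBytes("UTF-16")));
    }
}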

Java GZip makes small differences when compressing file and decompressing it again

After a week of work I designed a binary file format and wrote a Java reader for it. It's just an experiment, and it works fine unless I use the GZip compression function.
I called my binary type MBDF (Minimal Binary Database Format), and it can store 8 different types:
Integer (There is nothing like a byte, short, long or anything like that, since it is stored in flexible space (bigger numbers take more space))
Float-32 (32-bits floating point format, like java's float type)
Float-64 (64-bits floating point format, like java's double type)
String (A string in UTF-16 format)
Boolean
Null (Just specifies a null value)
Array (Something like java's ArrayList<Object>)
Compound (A String - Object map)
I used this data as test data:
COMPOUND {
float1: FLOAT_32 3.3
bool2: BOOLEAN true
float2: FLOAT_64 3.3
int1: INTEGER 3
compound1: COMPOUND {
xml: STRING "two length compound"
int: INTEGER 23
}
string1: STRING "Hello world!"
string2: STRING "3"
arr1: ARRAY [
STRING "Hello world!"
INTEGER 3
STRING "3"
FLOAT_32 3.29
FLOAT_64 249.2992
BOOLEAN true
COMPOUND {
str: STRING "one length compound"
}
BOOLEAN false
NULL null
]
bool1: BOOLEAN false
null1: NULL null
}
The xml key in a compound does matter!!
I made a file from it using this java code:
MBDFFile.writeMBDFToFile(
    "/Users/<anonymous>/Documents/Java/MBDF/resources/file.mbdf",
    b.makeMBDF(false)
);
Here, the variable b is an MBDFBinary object containing all the data given above. The makeMBDF function generates the ISO 8859-1 encoded string and, if the given boolean is true, compresses the string using GZip. Then, when writing, an extra information character is added at the beginning of the file, describing how to read it back.
Then, after writing the file, I read it back into Java and parse it:
MBDF mbdf = MBDFFile.readMBDFFromFile("/Users/<anonymous>/Documents/Java/MBDF/resources/file.mbdf");
System.out.println(mbdf.getBinaryObject().parse());
This prints exactly the information mentioned above.
Then I try to use compression:
MBDFFile.writeMBDFToFile(
    "/Users/<anonymous>/Documents/Java/MBDF/resources/file.mbdf",
    b.makeMBDF(true)
);
I do exactly the same to read it back as I did with the uncompressed file, which should work. It prints this information:
COMPOUND {
float1: FLOAT_32 3.3
bool2: BOOLEAN true
float2: FLOAT_64 3.3
int1: INTEGER 3
compound1: COMPOUND {
xUT: STRING 'two length compound'
int: INTEGER 23
}
string1: STRING 'Hello world!'
string2: STRING '3'
arr1: ARRAY [
STRING 'Hello world!'
INTEGER 3
STRING '3'
FLOAT_32 3.29
FLOAT_64 249.2992
BOOLEAN true
COMPOUND {
str: STRING 'one length compound'
}
BOOLEAN false
NULL null
]
bool1: BOOLEAN false
null1: NULL null
}
Comparing it to the initial information, the name xml changed into xUT for some reason...
After some research I found small differences in the binary data from before and after the compression: patterns such as 110011 change into 101010.
When I make the name longer, like xmldm, it is parsed as xmldm correctly for some reason.
So far I have only seen the problem occur with three-character names.
Directly compressing and decompressing the generated string (without saving it to a file and reading it back) does work, so maybe the bug is caused by the file encoding.
As far as I know, the string output is in ISO 8859-1 format, but I couldn't get the file encoding right: when a file is read, it should be read back exactly as written, with all characters interpreted as ISO 8859-1.
I have a few hypotheses, though I don't know how to test them:
The GZip output has a different encoding than the uncompressed output, causing small differences when it is stored as a file.
The file is stored in UTF-8 format, ignoring my request for ISO 8859-1 encoding (I don't know how to explain it better :) ).
There is a small bug in the Java GZip libraries.
But which one is true, and if none of them is, what is the real cause of this bug?
I couldn't figure it out.
The MBDFFile class, reading and storing the files:
/* MBDFFile.java */
package com.redgalaxy.mbdf;

import java.io.*;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class MBDFFile {
    public static MBDF readMBDFFromFile(String filename) throws IOException {
//        FileInputStream is = new FileInputStream(filename);
//        InputStreamReader isr = new InputStreamReader(is, "ISO-8859-1");
//        BufferedReader br = new BufferedReader(isr);
//
//        StringBuilder builder = new StringBuilder();
//
//        String currentLine;
//
//        while ((currentLine = br.readLine()) != null) {
//            builder.append(currentLine);
//            builder.append("\n");
//        }
//
//        builder.deleteCharAt(builder.length() - 1);
//
//        br.close();

        Path path = Paths.get(filename);
        byte[] data = Files.readAllBytes(path);
        return new MBDF(new String(data, "ISO-8859-1"));
    }

    private static void writeToFile(String filename, byte[] txt) throws IOException {
//        BufferedWriter writer = new BufferedWriter(new FileWriter(filename));
////        FileWriter writer = new FileWriter(filename);
//        writer.write(txt.getBytes("ISO-8859-1"));
//        writer.close();
//        PrintWriter pw = new PrintWriter(filename, "ISO-8859-1");

        FileOutputStream stream = new FileOutputStream(filename);
        stream.write(txt);
        stream.close();
    }

    public static void writeMBDFToFile(String filename, MBDF info) throws IOException {
        writeToFile(filename, info.pack().getBytes("ISO-8859-1"));
    }
}
The pack function generates the final string for the file, in ISO 8859-1 format.
For all the other code, see my MBDF Github repository.
The commented-out code shows the things I've already tried.
My workspace:
- Macbook Air '11 (High Sierra)
- IntelliJ Community 2017.3
- JDK 1.8
I hope this is enough information; this seemed like the only way to make clear what I'm doing and what exactly isn't working.
Edit: MBDF.java
/* MBDF.java */
package com.redgalaxy.mbdf;

import java.io.IOException;
import java.io.UnsupportedEncodingException;

public class MBDF {
    private String data;
    private InfoTag tag;

    public MBDF(String data) {
        this.tag = new InfoTag((byte) data.charAt(0));
        this.data = data.substring(1);
    }

    public MBDF(String data, InfoTag tag) {
        this.tag = tag;
        this.data = data;
    }

    public MBDFBinary getBinaryObject() throws IOException {
        String uncompressed = data;
        if (tag.isCompressed) {
            uncompressed = GZipUtils.decompress(data);
        }
        Binary binary = getBinaryFrom8Bit(uncompressed);
        return new MBDFBinary(binary.subBit(0, binary.getLen() - tag.trailing));
    }

    public static Binary getBinaryFrom8Bit(String s8bit) {
        try {
            byte[] bytes = s8bit.getBytes("ISO-8859-1");
            return new Binary(bytes, bytes.length * 8);
        } catch (UnsupportedEncodingException ignored) {
            // This is not gonna happen because encoding 'ISO-8859-1' is always supported.
            return new Binary(new byte[0], 0);
        }
    }

    public static String get8BitFromBinary(Binary binary) {
        try {
            return new String(binary.getByteArray(), "ISO-8859-1");
        } catch (UnsupportedEncodingException ignored) {
            // This is not gonna happen because encoding 'ISO-8859-1' is always supported.
            return "";
        }
    }

    /*
     * Adds leading zeroes to the binary string, so that the final amount of bits is 16 (or 8).
     */
    private static String addLeadingZeroes(String bin, boolean is16) {
        int len = bin.length();
        long amount = (long) (is16 ? 16 : 8) - len;
        // Create zeroes and append the binary string.
        StringBuilder zeroes = new StringBuilder();
        for (int i = 0; i < amount; i++) {
            zeroes.append(0);
        }
        zeroes.append(bin);
        return zeroes.toString();
    }

    public String pack() {
        return tag.getFilePrefixChar() + data;
    }

    public String getData() {
        return data;
    }

    public InfoTag getTag() {
        return tag;
    }
}
This class contains the pack() method; data is already compressed here (if it should be).
For the other classes, please see the GitHub repository; I don't want to make my question too long.
Solved it myself!
It turned out to be the reading and writing system. When I exported a file, I made a string using the ISO-8859-1 table to turn bytes into characters, and then wrote that string to a text file, which was UTF-8. The big problem was that I used FileWriter instances to write it, and those are meant for text files.
Reading used the inverse system: the complete file was read into memory as a string (memory consuming!!) and was then decoded.
I hadn't realized that a file is just binary data, and that text formats such as ISO-8859-1 and UTF-8 are specific interpretations of those bytes. I had problems with UTF-8 because it splits some characters into two bytes, which I couldn't manage...
My solution was to use streams. Java has FileInputStream and FileOutputStream, which can be used for reading and writing binary files. I hadn't used streams because I thought there was no big difference ("files are text, so what's the problem?"), but there is... I implemented this (by writing a new, similar library), and I'm now able to pass any input stream to the decoder and any output stream to the encoder. For uncompressed files you pass a FileOutputStream; GZipped files can use a GZIPOutputStream wrapping a FileOutputStream; and if someone wants the binary data in memory, a ByteArrayOutputStream can be used. The same rules apply to reading, using the InputStream variants of the streams mentioned.
No more UTF-8 or ISO-8859-1 problems, and it works, even with GZip!
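A minimal sketch of that stream-based approach (this is not the actual MBDF code; the class name, file name, and payload are made up):

import java.io.*;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class BinaryStreamDemo {
    public static void main(String[] args) throws IOException {
        byte[] payload = "any binary data".getBytes("ISO-8859-1");

        // Write the bytes gzipped, without ever turning them into a String.
        try (OutputStream out = new GZIPOutputStream(new FileOutputStream("data.bin.gz"))) {
            out.write(payload);
        }

        // Read them back the same way, as bytes.
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (InputStream in = new GZIPInputStream(new FileInputStream("data.bin.gz"))) {
            byte[] chunk = new byte[4096];
            int n;
            while ((n = in.read(chunk)) != -1) {
                buffer.write(chunk, 0, n);
            }
        }

        // true: the bytes survive the round trip exactly.
        System.out.println(java.util.Arrays.equals(payload, buffer.toByteArray()));
    }
}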

GSON - use unicode characters

In my GSON testing class I have a class with a string that has to be serialized.
The problem is that special Unicode characters like \u06A4 or ► are converted to ?. That is not how I want this to work.
Here's my class:
import java.lang.reflect.Modifier;

import com.google.gson.Gson;
import com.google.gson.GsonBuilder;

public final class JSONvsBinary {
    public static final void run() throws Exception {
        A a1 = new A();
        a1.a = "bla, blu., € # xyz Ø, \u06A4 ►";

        GsonBuilder builder = new GsonBuilder();
        builder.excludeFieldsWithModifiers(Modifier.TRANSIENT);
        builder.setPrettyPrinting();
        builder.disableHtmlEscaping();
        builder.serializeNulls();
        builder.serializeSpecialFloatingPointValues();
        Gson gson = builder.create();

        final String gsonString = gson.toJson(a1, A.class);
        final byte[] gsonBytes = gsonString.getBytes("UTF8");
        System.out.println("GSON:\n" + new String(gsonBytes, "UTF8"));
        System.out.println("GSON bytes: " + gsonBytes.length);
    }

    @SuppressWarnings("unused")
    private static final class A {
        public String a;
    }
}
And that's the output:
GSON:
{
"a": "bla, blu., € # xyz Ø, ? ?"
}
GSON bytes: 44
I set the byte encoding to UTF-8 but it doesn't work...
One, make sure your compiler and editor are using the same encoding. This is usually not an issue in an IDE.
The problem is probably here: System.out.println.
From the documentation for PrintStream:
All characters printed by a PrintStream are converted into bytes using the platform's default character encoding.
So, depending on the platform encoding, System.out can lose data.
On top of this, the rendering engine of the device System.out is sending data to must support a grapheme for rendering each code point in the character data.
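One way to take the platform default out of the equation is to wrap System.out in a PrintStream with an explicit encoding. A small sketch (whether the glyphs then actually display still depends on the console's own encoding and font, as noted above):

import java.io.PrintStream;
import java.io.UnsupportedEncodingException;

public class Utf8Console {
    public static void main(String[] args) throws UnsupportedEncodingException {
        // Explicit UTF-8 instead of the platform's default character encoding.
        PrintStream out = new PrintStream(System.out, true, "UTF-8");
        out.println("bla, blu., € # xyz Ø, \u06A4 ►");
    }
}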

How to replace � in a string

I have a string that contains the character �, and I haven't been able to replace it correctly.
String.replace("�", "");
doesn't work. Does anyone know how to remove/replace the � in the string?
That's the Unicode Replacement Character, \uFFFD. (info)
Something like this should work:
String strImport = "For some reason my �double quotes� were lost.";
strImport = strImport.replaceAll("\uFFFD", "\"");
Character issues like this are difficult to diagnose because information is easily lost through misinterpretation of characters via application bugs, misconfiguration, cut'n'paste, etc.
As I (and apparently others) see it, you've pasted three characters:
codepoint glyph escaped windows-1252 info
=======================================================================
U+00ef ï \u00ef ef, LATIN_1_SUPPLEMENT, LOWERCASE_LETTER
U+00bf ¿ \u00bf bf, LATIN_1_SUPPLEMENT, OTHER_PUNCTUATION
U+00bd ½ \u00bd bd, LATIN_1_SUPPLEMENT, OTHER_NUMBER
To identify the character, download and run the program from this page. Paste your character into the text field and select the glyph mode; paste the report into your question. It'll help people identify the problematic character.
You are asking to replace the character "�", but for me that is coming through as three characters: 'ï', '¿' and '½'. This might be your problem...
If you are using a Java version prior to 1.5, you only get UCS-2, that is, only the first 65K Unicode characters.
Based on other comments, it is most likely that the character you are looking for is '�', the Unicode replacement character. This is the character that is "used to replace an incoming character whose value is unknown or unrepresentable in Unicode".
Actually, looking at the comment from Kathy, the other issue that you might be having is that javac is not interpreting your .java file as UTF-8, assuming that you are writing it in UTF-8. Try using:
javac -encoding UTF-8 xx.java
Or, modify your source code to do:
String.replaceAll("\uFFFD", "");
As others have said, you posted 3 characters instead of one. I suggest you run this little snippet of code to see what's actually in your string:
public static void dumpString(String text) {
    for (int i = 0; i < text.length(); i++) {
        System.out.println("U+" + Integer.toString(text.charAt(i), 16)
                + " " + text.charAt(i));
    }
}
If you post the results of that, it'll be easier to work out what's going on. (I haven't bothered padding the string - we can do that by inspection...)
Change the encoding to UTF-8 while parsing; this will remove the special characters.
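For example, when reading the input yourself before parsing, specify the charset explicitly instead of relying on the platform default. A sketch (the file name is made up):

import java.io.*;
import java.nio.charset.StandardCharsets;

public class ReadUtf8 {
    public static void main(String[] args) throws IOException {
        // Decode the bytes as UTF-8 explicitly; a wrong default here is a
        // common source of � replacement characters.
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(
                new FileInputStream("input.txt"), StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}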
Use the unicode escape sequence. First you'll have to find the codepoint for the character you seek to replace (let's just say it is ABCD in hex):
str = str.replaceAll("\uABCD", "");
For more detail:
import java.io.UnsupportedEncodingException;

/**
 * File: BOM.java
 *
 * Check if the BOM character is present in the given string. Print the string
 * after skipping the UTF-8 BOM bytes, and print the string as a UTF-8 string
 * on a UTF-8 console.
 */
public class BOM {
    // Prefixed with U+FEFF so that the UTF-8 bytes actually start with a BOM.
    private final static String BOM_STRING = "\uFEFFHello World";
    private final static String ISO_ENCODING = "ISO-8859-1";
    private final static String UTF8_ENCODING = "UTF-8";
    private final static int UTF8_BOM_LENGTH = 3;

    public static void main(String[] args) throws UnsupportedEncodingException {
        final byte[] bytes = BOM_STRING.getBytes(UTF8_ENCODING);
        if (isUTF8(bytes)) {
            printSkippedBomString(bytes);
            printUTF8String(bytes);
        }
    }

    private static void printSkippedBomString(final byte[] bytes) throws UnsupportedEncodingException {
        int length = bytes.length - UTF8_BOM_LENGTH;
        byte[] barray = new byte[length];
        System.arraycopy(bytes, UTF8_BOM_LENGTH, barray, 0, barray.length);
        System.out.println(new String(barray, ISO_ENCODING));
    }

    private static void printUTF8String(final byte[] bytes) throws UnsupportedEncodingException {
        System.out.println(new String(bytes, UTF8_ENCODING));
    }

    private static boolean isUTF8(byte[] bytes) {
        // The UTF-8 byte order mark is EF BB BF.
        return bytes.length >= UTF8_BOM_LENGTH &&
                (bytes[0] & 0xFF) == 0xEF &&
                (bytes[1] & 0xFF) == 0xBB &&
                (bytes[2] & 0xFF) == 0xBF;
    }
}
Dissecting the URL code and the Unicode error: this symbol has come up for me as well, on Google Translate with Armenian text and sometimes with broken Burmese.
profilage bas� sur l'analyse de l'esprit (French)
should read:
profilage basé sur l'analyse de l'esprit
so, in this case, � = é.
None of the above answers resolved my issue. When I download XML, it prepends  to my XML. I simply do:
xml = parser.getXmlFromUrl(url);
xml = xml.substring(3); // removes the first three characters (the UTF-8 BOM) from the string
and now it runs accurately.
