When I run the following program:
public static void main(String[] args) throws Exception {
    byte[] str = {(byte) 0xEC, (byte) 0x96, (byte) 0xB4};
    String s = new String(str, "UTF-8");
}
on Linux and inspect the value of s in jdb, I correctly get:
s = "ì–´"
on Windows, I incorrectly get:
s = "?"
My byte sequence is a valid UTF-8 encoding of a Korean character, so why does it produce two very different results?
It correctly prints "어" on my computer (Ubuntu Linux), as described in the Code Table Korean Hangul. The Windows command prompt is known to have issues with encoding, so don't spend time fighting it.
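If you do want to experiment with cmd.exe anyway, you can switch its code page to UTF-8 first (note that rendering also requires a console font that has Hangul glyphs):
chcp 65001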
Your code is fine.
It gives 어 for me. This means your console is probably not configured to display UTF-8; it is a printing/display problem rather than a problem with the string's representation.
You get the correct string, it's Windows console that does not display the string correctly.
Here is a link to an article that discusses a way to make the Java console produce correct Unicode output using JNI.
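If you don't want to go the JNI route, a simpler thing to try first (a minimal sketch; whether the Hangul actually renders still depends on the console's code page and font) is forcing UTF-8 on the output stream:
import java.io.PrintStream;

public class Utf8Console {
    public static void main(String[] args) throws Exception {
        byte[] str = {(byte) 0xEC, (byte) 0x96, (byte) 0xB4};
        String s = new String(str, "UTF-8");
        // Emit UTF-8 bytes instead of the platform-default encoding
        PrintStream out = new PrintStream(System.out, true, "UTF-8");
        out.println(s);
    }
}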
JDB is displaying the data incorrectly. The code works the same on both Windows and Linux. Try running this more definitive test:
import java.math.BigInteger;

public class Test {
    public static void main(String[] args) throws Exception {
        byte[] str = {(byte) 0xEC, (byte) 0x96, (byte) 0xB4};
        String s = new String(str, "UTF-8");
        for (int i = 0; i < s.length(); i++) {
            // Print each char's code point in hex, bypassing console encoding
            System.out.println(BigInteger.valueOf((int) s.charAt(i)).toString(16));
        }
    }
}
This prints out the hex value of every character in the string. This will correctly print out "c5b4" in both Windows and Linux.
Using ProcessBuilder, I need to be able to send non-ASCII parameters to another Java program.
In this case, a program Abc needs to send e.g. Arabic characters to Def program through the parameters. I have control of Abc code, but not of Def.
Using ProcessBuilder the normal way, without any tweaking of the encoding, it is not possible (as mentioned here): Def receives question marks, "?????".
However, I am able to get some results, though a different encoding may be needed in different scenarios.
For example, I try every available encoding when sending to the recipient, and compare the result against what is expected.
Windows, IntelliJ console:
Default charset: UTF-8
Found charsets: windows-1252, windows-1254 and windows-1258
Windows, command prompt:
Default charset: windows-1252
Found charsets: CESU-8 and UTF-8
Ubuntu, command prompt:
Default charset: ISO-8859-1
Found charsets: ISO-2022-CN, ISO-2022-KR, ISO-8859-1, ISO-8859-15, ISO-8859-9, x-IBM1129, x-ISO-2022-CN-CNS and x-ISO-2022-CN-GB
My question is: how to programmatically know which correct encoding to use, since I need to have something universal?
In other words, what is the relation between the default charset and the found ones?
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.Charset;
import java.nio.file.Path;
import java.nio.file.Paths;
public class Abc {
private static final Path PATH = Paths.get("."); // With Maven: ./target/classes
public static void main(String[] args) throws Exception {
var string = "hello أحمد";
var bytes = string.getBytes();
System.out.println("Original string: " + string);
System.out.println("Default charset: " + Charset.defaultCharset());
for (var c : Charset.availableCharsets().values()) {
var newString = new String(bytes, c);
var process = new ProcessBuilder().command("java", "-cp",
PATH.toAbsolutePath().toString(),
"Def", newString).start();
process.waitFor();
var output = asString(process.getInputStream());
if (output.contains(string)) {
System.out.println("Found " + c + " " + output);
}
}
}
private static String asString(InputStream is) throws IOException {
try (var reader = new BufferedReader(new InputStreamReader(is))) {
var builder = new StringBuilder();
String line;
while ((line = reader.readLine()) != null) {
if (builder.length() != 0) {
builder.append(System.lineSeparator());
}
builder.append(line);
}
return builder.toString();
}
}
}
public class Def {
public static void main(String[] args) {
System.out.println(args[0]);
}
}
Under the hood, what's actually being passed around is bytes, not chars. Normally, you'd expect the java method that ends up turning characters into bytes to have an overload that lets you specify charset, but, for whatever reason, it does not exist here.
How it should work is thusly:
You pass a string to ProcessBuilder
PB will turn that string into bytes using Charset.defaultCharset() (why? Because PB is all about making the OS do things, and the default charset reflects the OS's preferred charset).
These bytes are then fed to the process.
The process starts up. If it is java, and we're talking the args in psv main(String[] args), the same is done in reverse: Java takes the bytes and turns them back to characters via Charset.defaultCharset(), again.
This does show an immediate issue: If the default charset is not capable of representing a certain character, then in theory you are out of luck.
That would strongly suggest that using java to fire up java.exe should ordinarily mean you can pass whatever you want (unless the characters involved aren't representable in the system's charset).
Your code is odd. In particular, this line is the problem:
var bytes = string.getBytes();
This is short for string.getBytes(Charset.defaultCharset()). So now you have your bytes in the provided charset.
var newString = new String(bytes, c);
and now you're taking those bytes and turning them into a string using a completely different charset. I'm not sure what you're trying to accomplish with this. Pure gobbledygook would come out.
In other words, what is the relation between the default charset and the found ones?
What do you mean by 'found ones'? The string "Found charsets" appears nowhere in your code. If you mean: What Charset.availableCharsets() returns - there is no relationship at all. availableCharsets isn't relevant for ProcessBuilder.
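To illustrate the mechanism described above, here is a stripped-down Abc that simply passes the string through (a sketch reusing the question's Def class and path layout; it works as long as both JVMs share a default charset that can represent the characters):
import java.nio.file.Path;
import java.nio.file.Paths;

public class AbcDirect {
    private static final Path PATH = Paths.get("."); // same layout as the question

    public static void main(String[] args) throws Exception {
        var string = "hello أحمد";
        // No getBytes()/new String() round-trip: ProcessBuilder encodes the
        // argument with Charset.defaultCharset(), and the child JVM decodes
        // its argv with the same default charset.
        var process = new ProcessBuilder()
                .command("java", "-cp", PATH.toAbsolutePath().toString(), "Def", string)
                .inheritIO()
                .start();
        process.waitFor();
    }
}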
One possibility is to convert your String to a string of Unicode escape sequences and then pass that to the other process, where you convert it back to a regular String. A string of Unicode escape sequences will always contain only ASCII characters. Here is how it may look:
String encoded = StringUnicodeEncoderDecoder.encodeStringToUnicodeSequence("hello أحمد");
The result will be that the String encoded will hold this value:
"\u0068\u0065\u006c\u006c\u006f\u0020\u0623\u062d\u0645\u062f"
This String you can safely pass to another process. In that other process, you can do the following:
String originalString = StringUnicodeEncoderDecoder.decodeUnicodeSequenceToString(encodedString);
And the result will be that originalString will now hold this value:
"hello أحمد"
Class StringUnicodeEncoderDecoder can be found in an open-source library called MgntUtils. You can get this library as a Maven artifact or on GitHub (including source code and Javadoc). The Javadoc is available online here.
This library and this particular feature are used and well tested by multiple users.
Disclaimer: this library is written by me.
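If you would rather avoid the extra dependency, the same escape-and-restore idea can be sketched with the plain JDK (the class and method names below are hypothetical, not part of MgntUtils):
public final class UnicodeEscapes {
    // Encode every char as a \\uXXXX escape so the result contains only ASCII
    static String encode(String s) {
        StringBuilder sb = new StringBuilder();
        for (char c : s.toCharArray()) {
            sb.append(String.format("\\u%04x", c));
        }
        return sb.toString();
    }

    // Reverse of encode(): parse each six-character \\uXXXX group back to a char
    static String decode(String s) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i + 6 <= s.length(); i += 6) {
            sb.append((char) Integer.parseInt(s.substring(i + 2, i + 6), 16));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String original = "hello أحمد";
        System.out.println(decode(encode(original)).equals(original)); // true
    }
}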
So I've been trying to print out some lines of "-" characters. Why does the following not work?
StringBuilder horizonRule = new StringBuilder();
for(int i = 0 ; i < 12 ; i++) {
horizonRule.append("─");
System.out.println(horizonRule.toString());
}
The correct output is several lines like
─
──
───
────
and so on, but the incorrect output is
â??
â??â??
â??â??â??
I'm guessing the string is not being properly decoded by println or something
The character in your code is not a hyphen but the Unicode box-drawing character U+2500 (─).
The terminal your application is printing to doesn't seem to expect UTF-8 content, so the issue is not inside your application.
Replace it with a real hyphen (-) or make sure the tool that displays the output supports UTF-8.
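If in doubt, a tiny sketch like this shows which character is really in your source:
public class WhichChar {
    public static void main(String[] args) {
        String s = "─"; // the character from the question
        // Prints U+2500 (BOX DRAWINGS LIGHT HORIZONTAL), not U+002D (hyphen)
        System.out.printf("U+%04X%n", (int) s.charAt(0));
    }
}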
You say that the IDE wants to save as UTF-8, so you have probably saved the file as UTF-8.
However, the compiler is likely to compile in whatever encoding your system uses by default.
If you write your code as UTF-8, make sure to compile it with the same encoding:
javac -encoding utf8 MyClass.java
I tried your code (I literally just copy'n'paste) using BeanShell, and it worked perfectly. So there's nothing wrong with the code. It will be your environment.
stewart$ bsh
Picked up JAVA_TOOL_OPTIONS: -Djava.awt.headless=true -Dapple.awt.UIElement=true
BeanShell 2.0b4 - by Pat Niemeyer (pat@pat.net)
bsh % StringBuilder horizonRule = new StringBuilder();
bsh % for(int i=0; i<12; i++) {
horizonRule.append("─");
System.out.println(horizonRule.toString());
}
─
──
───
────
─────
──────
───────
────────
─────────
──────────
───────────
────────────
bsh %
public class myTest1 {
public static void main(String[] args) {
StringBuilder horizonRule = new StringBuilder();
for (int i = 0 ; i <= 13 ; i++){
horizonRule.append('_');
System.out.println(horizonRule.toString());
}
}
}
is correct for me; maybe you are using a different encoding? Check your environment settings.
So I'm working on a localization example, and the normal method of doing it with ResourceBundle doesn't seem to support UTF-8, so I'm moving on to Properties.
I can read the actual properties fine, but in the Spanish file the accents come out wrong. I am reading the file as UTF-8, but that makes no difference; it just displays different wrong symbols than before.
Output:
íHola!
┐C¾mo estßs?
íAdi¾s!
Expected Output:
¡Hola!
¿Cómo estás?
¡Adiós!
Properties File:
greetings = ¡Hola!
farewell = ¡Adiós!
inquiry = ¿Cómo estás?
Code:
import java.util.*;
import java.io.*;
public class Test{
public static void main(String[] args) throws IOException {
String language;
String country;
if (args.length != 2) {
language = "en";
country = "GB";
} else {
language = args[0];
country = args[1];
}
String file = String.format("lang_%s_%s.properties",language,country);
InputStream utf8in = Test.class.getClassLoader().getResourceAsStream(file);
Reader reader = new InputStreamReader(utf8in, "UTF-8");
Properties props = new Properties();
props.load(reader);
System.out.println(props.getProperty("greetings"));
System.out.println(props.getProperty("inquiry"));
System.out.println(props.getProperty("farewell"));
}
}
I've just spent about 40 minutes reading everything I could find and they were either the exact same as what I've got now or slightly different and when trying, produced the same results.
Can someone please tell me how I can get my expected output?
In Eclipse, I can reproduce the problem. Here are the steps:
Create a Java project and set Text file encoding to CP850.
Create a Run/Debug Configuration and set VM arguments to -Dfile.encoding=ISO8859-1.
Confirm that the Encoding setting in the Common tab is CP850.
Run the Java program.
When the Java program prints to standard output, the characters become ISO8859-1 bytes.
Those bytes are then re-encoded using CP850 and displayed in the Console view.
This is a configuration problem. Make sure the console Encoding is the same as the file.encoding of the running program.
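Alternatively, you can make the program independent of the console's file.encoding by printing through an explicit UTF-8 stream (a sketch based on the question's code; the resource name lang_es_ES.properties is assumed from its String.format pattern, and the console still has to be set to UTF-8 for the glyphs to render):
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.PrintStream;
import java.io.Reader;
import java.util.Properties;

public class Utf8PropsTest {
    public static void main(String[] args) throws Exception {
        InputStream in = Utf8PropsTest.class.getClassLoader()
                .getResourceAsStream("lang_es_ES.properties");
        Properties props = new Properties();
        try (Reader reader = new InputStreamReader(in, "UTF-8")) {
            props.load(reader);
        }
        // Encode the output explicitly as UTF-8, independent of file.encoding
        PrintStream out = new PrintStream(System.out, true, "UTF-8");
        out.println(props.getProperty("greetings"));
        out.println(props.getProperty("inquiry"));
        out.println(props.getProperty("farewell"));
    }
}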
I have a program which runs on a console, and its umlauts and other special characters are being output as "?" on Macs. Here's a simple test program:
public static void main(String[] args) {
    System.out.println("höhößüä");
    System.console().printf("höhößüä");
}
On a default Mac console (with default UTF-8 encoding), this prints:
h?h????
h?h????
But after manually setting the Mac terminal's encoding to "Mac OS Roman", it correctly printed
höhößüä
höhößüä
Note that on Windows systems using System.console() works:
h÷h÷▀³õ
höhößüä
So how do I make my program...rolleyes..."run everywhere"?
Try the following command-line argument when starting your application:
-Dfile.encoding=utf-8
This changes the default encoding of the JVM for I/O operations.
You can also try:
System.setOut(new PrintStream(System.out, true, "utf-8"));
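For example, applied to the test program from the question (a sketch; note that System.setOut does not affect System.console(), so the printf line is left out):
import java.io.PrintStream;

public class Umlauts {
    public static void main(String[] args) throws Exception {
        // Replace stdout with a PrintStream that encodes as UTF-8
        System.setOut(new PrintStream(System.out, true, "utf-8"));
        System.out.println("höhößüä");
    }
}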
Epaga: have a look right here. You can set the output encoding on a PrintStream; you just have to determine, or be absolutely sure about, which encoding the console expects.
import java.io.PrintStream;
import java.io.UnsupportedEncodingException;
public class Test {
public static void main (String[] argv) throws UnsupportedEncodingException {
String unicodeMessage =
"\u7686\u3055\u3093\u3001\u3053\u3093\u306b\u3061\u306f";
PrintStream out = new PrintStream(System.out, true, "UTF-8");
out.println(unicodeMessage);
}
}
To determine the console encoding you could use the system command "locale" and parse the output, which on a German UTF-8 system looks like:
LANG="de_DE.UTF-8"
LC_COLLATE="de_DE.UTF-8"
LC_CTYPE="de_DE.UTF-8"
LC_MESSAGES="de_DE.UTF-8"
LC_MONETARY="de_DE.UTF-8"
LC_NUMERIC="de_DE.UTF-8"
LC_TIME="de_DE.UTF-8"
LC_ALL=
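A rough sketch of that idea (the parsing is deliberately naive, and the locale command is assumed to exist on the PATH, so this only works on Unix-like systems):
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class ConsoleEncoding {
    public static void main(String[] args) throws Exception {
        // Run the locale command and scan its output for LC_CTYPE
        Process p = new ProcessBuilder("locale").start();
        String encoding = null;
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                // A line looks like: LC_CTYPE="de_DE.UTF-8"
                if (line.startsWith("LC_CTYPE=") && line.contains(".")) {
                    encoding = line.substring(line.indexOf('.') + 1).replace("\"", "");
                }
            }
        }
        System.out.println("Detected console encoding: " + encoding);
    }
}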
I am trying to decode some UTF-8 strings in Java.
These strings contain some combining Unicode characters, such as CC 88 (combining diaeresis).
The character sequence seems ok, according to http://www.fileformat.info/info/unicode/char/0308/index.htm
But the output after conversion to String is invalid.
Any idea?
byte[] utf8 = { 105, -52, -120 };
System.out.print("{{");
for(int i = 0; i < utf8.length; ++i)
{
int value = utf8[i] & 0xFF;
System.out.print(Integer.toHexString(value));
}
System.out.println("}}");
System.out.println(">" + new String(utf8, "UTF-8"));
Output:
{{69cc88}}
>i?
The console which you're outputting to (e.g. windows) may not support unicode, and may mangle the characters. The console output is not a good representation of the data.
Try writing the output to a file instead, making sure the encoding is correct on the FileWriter, then open the file in a unicode-friendly editor.
Alternatively, use a debugger to make sure the characters are what you expect. Just don't trust the console.
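For instance (a minimal sketch; the file name out.txt is arbitrary, and an OutputStreamWriter is used because FileWriter could not take an encoding before Java 11):
import java.io.FileOutputStream;
import java.io.OutputStreamWriter;
import java.io.Writer;

public class WriteUtf8File {
    public static void main(String[] args) throws Exception {
        byte[] utf8 = { 105, -52, -120 }; // "i" followed by U+0308
        String s = new String(utf8, "UTF-8");
        // Write with an explicit encoding, bypassing the console entirely
        try (Writer w = new OutputStreamWriter(new FileOutputStream("out.txt"), "UTF-8")) {
            w.write(s);
        }
    }
}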
Here how I finally solved the problem, in Eclipse on Windows:
Click Run Configuration.
Click Arguments tab.
Add -Dfile.encoding=UTF-8
Click Common tab.
Set Console Encoding to UTF-8.
Modify the code:
byte[] utf8 = { 105, -52, -120 };
System.out.print("{{");
for(int i = 0; i < utf8.length; ++i)
{
int value = utf8[i] & 0xFF;
System.out.print(Integer.toHexString(value));
}
System.out.println("}}");
PrintStream sysout = new PrintStream(System.out, true, "UTF-8");
sysout.print(">" + new String(utf8, "UTF-8"));
Output:
{{69cc88}}
> ï
The code is fine, but as skaffman said your console probably doesn't support the appropriate character.
To test for sure, you need to print out the unicode values of the character:
public class Test {
public static void main(String[] args) throws Exception {
byte[] utf8 = { 105, -52, -120 };
String text = new String(utf8, "UTF-8");
for (int i=0; i < text.length(); i++) {
System.out.println(Integer.toHexString(text.charAt(i)));
}
}
}
This prints 69, 308 - which is correct (U+0069, U+0308).
Java, not unreasonably, encodes Unicode characters into native system encoded bytes before it writes them to stdout. Some operating systems, like many Linux distros, use UTF-8 as their default character set, which is nice.
Things are a bit different on Windows for a variety of backwards-compatibility reasons. The default system encoding will be one of the "ANSI" codepages and if you open the default command prompt (cmd.exe) it will be one of the old "OEM" DOS codepages (though it is possible to get ANSI and Unicode there with a bit of work).
Since U+0308 isn't in any of the "ANSI" character sets (probably 1252 in your case), it'll get encoded as an error character (usually a question mark).
An alternative to Unicode-enabling everything is to normalize the combining sequence U+0069 U+0308 to the single character U+00EF:
import java.io.IOException;
import java.text.Normalizer;

public class NormalizeDemo {
    public static void emit(String foo) throws IOException {
        System.out.println("Literal: " + foo);
        System.out.print("Hex: ");
        for (char ch : foo.toCharArray()) {
            System.out.print(Integer.toHexString(ch & 0xFFFF) + " ");
        }
        System.out.println();
    }

    public static void main(String[] args) throws IOException {
        String foo = "\u0069\u0308";
        emit(foo);
        foo = Normalizer.normalize(foo, Normalizer.Form.NFC);
        emit(foo);
    }
}
Under windows-1252, this code will emit:
Literal: i?
Hex: 69 308
Literal: ï
Hex: ef