When I do
System.out.println('说');
It just prints "?"
In the bottom right corner it says UTF-8 (so that is good).
I have no idea what I am doing wrong, any help much appreciated.
PS: When I make a Python file and print it, it prints properly. But not in Java :(
I tried doing System.setProperty("file.encoding", "UTF-8"); but got the same result, sadly. I tried running the code on repl.it, and it works. But not in Visual Studio Code.
Note that the Windows locale is set to support UTF-8. And I am using the Consolas font, which should support UTF-8.
I also tried uninstalling VS Code and installing it again; it didn't fix anything.
I am also using the terminal for all output.
What is your system language? Try changing the system language to Chinese and modifying the system locale to Chinese (you may need to restart the computer), then restart VS Code and print the Chinese characters again.
Another simple and effective way is to use the Code Runner extension. Install the extension and execute the script with Run Code, and the OUTPUT panel will display the result.
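If changing the system locale is not an option, a common workaround is to wrap System.out in a PrintStream that encodes UTF-8 explicitly. This is a sketch, not a guaranteed fix: the terminal must still be in a UTF-8 code page (chcp 65001 on Windows) and use a font that actually has the glyph, or you will still see "?".

```java
import java.io.PrintStream;
import java.io.UnsupportedEncodingException;

public class Utf8Console {
    public static void main(String[] args) throws UnsupportedEncodingException {
        // Replace the default (platform-encoded) stdout with a UTF-8 one,
        // so the character is at least emitted as correct UTF-8 bytes.
        PrintStream out = new PrintStream(System.out, true, "UTF-8");
        out.println("说");
    }
}
```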
I have a text file that I am reading through Scalding's TextLine function. The problem is that my file has multiple £ signs in it, but as the default language is en_US, Eclipse by default converts each £ into a �. I'm sure that I have to change the language somewhere to en_UK, but I don't know where to do that.
I have tried going to Window -> Preferences -> Java -> Installed JREs and adding
-Duser.language=en_UK -Duser.country=UK
to Default VM arguments, but the output remains the same.
PS: using Eclipse Kepler.
Any recommendations are welcome.
I'm not sure if I'm getting you right, but I guess you could solve your problem either way.
If Eclipse just can't display the correct sign, then you should tell Eclipse to use Unicode characters (explained here: http://eclipsesource.com/blogs/2013/02/21/pro-tip-unicode-characters-in-the-eclipse-console/)
If you read in your file programmatically, you have to use the correct charset, again UTF-8. But this is hard to answer because you don't provide any code.
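For the programmatic case, here is a minimal sketch of reading a file with an explicit charset instead of relying on the platform default (the file name prices.txt is made up for the example):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class ReadUtf8 {
    public static void main(String[] args) throws IOException {
        // Specify the charset explicitly, so "£" survives
        // regardless of the JVM's file.encoding default.
        try (BufferedReader reader = Files.newBufferedReader(
                Paths.get("prices.txt"), StandardCharsets.UTF_8)) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}
```

The same idea applies to any InputStreamReader/OutputStreamWriter: always pass a Charset rather than letting the default encoding decide.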
I am looking for a way to print italics out of the Eclipse console. So that some variation of:
System.out.println((Some code)"Hello World");
Outputs:
Hello World
Is this even possible?
Thanks for your help!!
There are a few options available to you with this, but first off, I'll make it clear that this is not a default Java ability. It depends entirely on the Operating System and the output console you are using (Eclipse, Terminal, Command, etc).
JCurses
You can use the JCurses Library to give you additional functions over the default console window. You can find a tutorial here that might help you.
ANSI Escape Sequences
I found a link here that uses ANSI Escape Sequences to modify the text, and ran the code myself to double-check it worked fine. It certainly changed the font, and there are some escape sequences listed here that might help you.
I believe italics should be System.out.println("\033[3mHello World!\033[0m"); (note \033, the octal escape for the ESC character, not \030).
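Here is a self-contained sketch of that escape-sequence approach; it assumes a terminal that honors ANSI SGR codes (the plain Eclipse console generally does not, so try it from a real terminal):

```java
public class AnsiItalics {
    // \u001B is the ESC character (decimal 27); ESC[3m turns italics on,
    // ESC[0m resets all attributes back to normal.
    private static final String ITALIC = "\u001B[3m";
    private static final String RESET = "\u001B[0m";

    public static void main(String[] args) {
        System.out.println(ITALIC + "Hello World" + RESET);
    }
}
```

Whether "[3m" renders as italics (or at all) is entirely up to the terminal emulator; some render it as reverse video or ignore it.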
I have this problem that has been dropped on me, and it has been a couple of days of unsuccessful searches and workaround attempts.
I have an internal Java Swing program, distributed by JNLP/Web Start to OS X and Windows computers, that, among other things, downloads some files from WebDAV.
Recently, on a test machine with OSX 10.8 and Java 7, filenames and directory names with accented characters started having those replaced by question marks.
No problem on OSX with versions of Java before 7.
example :
XXXYYY_è_ABCD/
becomes
XXXYYY_?_ABCD/
Using java.text.Normalizer (NFD, NFC, NFKD, NFKC) on the original string, the result is different but still wrong:
XXXYYY_e?_ABCD/
or
XXXYYY_e_ABCD/
I know, from correspondence between [andrew.brygin at oracle.com] and [mik3hall at gmail.com] that
Yes, file.encoding is set based on the locale that the jvm is running
on, and if you run your java vm in xxxx.UTF-8 locale, the
file.encoding should be UTF-8, set to MacRoman will be problematic.
So I believe Oracle/OpenJDK7 behaves correctly. That said, as Andrew
Thompson pointed out, if all previous Apple JDK releases use MacRoman
as the file.encoding for english/UTF-8 locale, there is a
"compatibility" concern here, it might worth putting something in the
release note to give Oracle/OpenJDK MacOS user a heads up.
original mail
From Joni Salonen's blog (java-and-file-names-with-invalid-characters) I know that:
You probably know that Java uses a “default character encoding” to
convert binary data to Strings. To read or write text using another
encoding you can use an InputStreamReader or OutputStreamWriter. But
for data-to-text conversions deep in the API you have no choice but to
change the default encoding.
and
What about file.encoding?
The file.encoding system property can also be used to set the default
character encoding that Java uses for I/O. Unfortunately it seems to
have no effect on how file names are decoded into Strings.
Executing locale from inside the JNLP invariably prints
LANG=
LC_COLLATE="C"
LC_CTYPE="C"
LC_MESSAGES="C"
LC_MONETARY="C"
LC_NUMERIC="C"
LC_TIME="C"
LC_ALL=
The most similar problem on Stack Overflow with a solution is this:
encoding-issues-on-java-7-file-names-in-os-x
but the solution is wrapping the execution of the java program in a script with
#!/bin/bash
export LC_CTYPE="UTF-8" # Try other options if this doesn't work
exec java your.program.Here
but I don't think this option is available to me because of Web Start, and I haven't found any way to set the LC_CTYPE environment variable from within the program.
Any solutions or workarounds?
P.S. :
If we run the program directly from shell, it writes the file/directory correctly even on OSX 10+Java 7.
The problem appears only with the combination of JNLP+OSX+Java7
I take it it's acceptable to have a maximal-ASCII representation of the file name, which works in virtually any encoding.
First, you want to use specifically NFKD, so that maximum information is retained in the ASCII form. For example, "2⁵" becomes "25" rather than just "2", and the ligature "ﬁ" becomes "fi" rather than being dropped entirely, once the non-ASCII and non-control characters are filtered out.
import java.text.Normalizer;

String str = "XXXYYY_è_ABCD/";
// NFKD decomposes è into e + a combining grave accent
str = Normalizer.normalize(str, Normalizer.Form.NFKD);
// strip everything outside printable ASCII (this drops the combining accent)
str = str.replaceAll("[^\\x20-\\x7E]", "");
// The file name will be XXXYYY_e_ABCD no matter what the system encoding is
You would then always pass filenames through this filter to get their filesystem name. All you lose is some uniqueness, i.e. the file asdé.txt maps to the same name as asde.txt, and in this scheme the two cannot be differentiated.
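Wrapped up as a reusable helper (the method name toAsciiName is mine, not part of any API), the filter and its usage look like this:

```java
import java.text.Normalizer;

public class AsciiNames {
    // Decompose with NFKD, then drop everything outside printable ASCII;
    // combining accents are outside that range and get removed.
    static String toAsciiName(String name) {
        String decomposed = Normalizer.normalize(name, Normalizer.Form.NFKD);
        return decomposed.replaceAll("[^\\x20-\\x7E]", "");
    }

    public static void main(String[] args) {
        System.out.println(toAsciiName("XXXYYY_è_ABCD")); // XXXYYY_e_ABCD
    }
}
```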
EDIT: After experimenting with OS X some more I realized my answer was totally wrong, so I'm redoing it.
If your JVM supports -Dfile.encoding=UTF-8 on the JVM command line, that might fix the issue. I believe that is a standard property but I'm not certain about that.
HFS Plus, like other POSIX-compliant file systems, stores filenames as bytes. But unlike Linux's ext3 filesystem, it forces filenames to be valid decomposed UTF-8. This can be seen here with the Python interpreter on my OS X system, starting in an empty directory.
$ python
Python 2.7.1 (r271:86832, Jul 31 2011, 19:30:53)
>>> import os
>>> os.mkdir('\xc3\xa8')
>>> os.mkdir('e\xcc\x80')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
OSError: [Errno 17] File exists: 'e\xcc\x80'
>>> os.mkdir('\x8f')
>>> os.listdir('.')
['%8F', 'e\xcc\x80']
>>> ^D
$ ls
%8F è
This proves that the directory name on your filesystem cannot be Mac-Roman encoded (i.e. with byte value 8F where the è is seen), as long as it's an HFS Plus filesystem. But of course, the JVM is not assured of an HFS Plus filesystem, and SMB and NFS do not have the same encoding guarantees, so the JVM should not assume this scheme.
Therefore, you have to convince the JVM to interpret file and directory names with UTF-8 encoding, in order to read the names as java.lang.String objects correctly.
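To see what the Web Start JVM actually picked up, a quick diagnostic can help. Note that sun.jnu.encoding is an internal, HotSpot-specific property (it governs file *name* encoding, whereas file.encoding governs default I/O), so treat it as an implementation detail that may be absent on other JVMs:

```java
public class EncodingCheck {
    public static void main(String[] args) {
        // file.encoding: default charset for reading/writing file contents
        // sun.jnu.encoding: charset used to decode file and path names (internal)
        System.out.println("file.encoding    = " + System.getProperty("file.encoding"));
        System.out.println("sun.jnu.encoding = " + System.getProperty("sun.jnu.encoding"));
        System.out.println("LANG             = " + System.getenv("LANG"));
    }
}
```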
Shot in the dark: the file encoding does not influence how the file names are created, just how the content gets written into the file; check this guy here: http://jonisalonen.com/2012/java-and-file-names-with-invalid-characters/
Here is a short entry from Apple: http://developer.apple.com/library/mac/#qa/qa1173/_index.html
Comparing this to http://docs.oracle.com/javase/tutorial/i18n/text/normalizerapi.html I would assume you want to use
normalized_string = Normalizer.normalize(target_chars, Normalizer.Form.NFD);
to normalize the file names before you pass them to the File constructor. Does this help?
I don't think there is a real solution to this problem, right now.
In the meantime I came to the conclusion that the "C" environment variables printed from inside the program come from the Java Web Start sandbox, and (by design, apparently) you can't influence those from the JNLP.
The accepted (as accepted by the company) workaround/compromise was launching the JNLP with javaws from a bash script.
Apparently, launching the JNLP from the browser or from Finder creates a new sandbox environment with LANG not set (so it defaults to "C", which is ASCII).
Launching the JNLP from the command line instead picks up the right LANG from the system default, inheriting it from the shell.
This at least preserves the auto-updating feature of the JNLP and its dependencies.
Anyway, we sent a bug report to Oracle, but personally I'm not hopeful it will be resolved anytime soon, if ever.
It's a bug in the old-school java.io.File API, maybe just on a Mac? Anyway, the new java.nio API works much better. I had several files with Unicode characters in their names and content that failed to load using java.io.File and related classes. After converting all my code to use java.nio.Path, EVERYTHING started working. And I replaced org.apache.commons.io.FileUtils (which has the same problem) with java.nio.Files...
...and be sure to read and write the content of file using an appropriate charset, for example:
Files.readAllLines(myPath, StandardCharsets.UTF_8)
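A minimal sketch of that java.nio pattern (the accented file name here is just an example):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.List;

public class NioUnicode {
    public static void main(String[] args) throws IOException {
        // A path whose name contains an accented character
        Path path = Paths.get("café.txt");
        // Always pass the charset explicitly for content I/O
        Files.write(path, Arrays.asList("héllo"), StandardCharsets.UTF_8);
        List<String> lines = Files.readAllLines(path, StandardCharsets.UTF_8);
        System.out.println(lines.get(0)); // héllo
        Files.delete(path);
    }
}
```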
I'm using Serbian Latin keyboard on CentOS 6.1. When I press Alt Gr + N I get }. Everywhere, except in NetBeans.
Also, I'm unable to type any bracket []{} or \|. Did anyone come across solution to this?
Changing keyboard for every brace or other symbol is not an option.
The solution was to install Sun/Oracle Java and reinstall NetBeans.
Actually, it is an X11/distro bug.
KDE and GTK apps use their own keyboard mechanism, so they don't show this problem;
but Java (and thus NetBeans) uses the X11 keyboard mechanism for input.
The problem is in how X11 handles your locale: if it is set properly it works; if not, it doesn't.
X11 doesn't have any "default" rule; if your locale isn't known to X11, you get nothing.
Also, X11's locale support isn't updated much either.
X11, in order to allow proper altgr/compose rules has to load a proper "Compose" file.
It loads it (or not) depending on the locale: in a /usr/share/X11/locale/compose.dir file (your path may vary) there are lines like:
en_US.UTF-8/Compose en_US.UTF-8
en_US.UTF-8/Compose sr_CS.UTF-8
en_US.UTF-8/Compose: en_US.UTF-8
en_US.UTF-8/Compose: sr_CS.UTF-8
etc.
(yes, two lines per locale, with and without the colon; one is used by old programs, the other by new ones, but I don't remember which is which)
There must be a line for the locale you use (shown with the "locale" command).
Note that if the system uses locales like "en_US.utf8" there must be an alias
(in the locale.alias file); something like:
sr_CS.utf8 sr_CS.UTF-8
...
sr_CS.utf8: sr_CS.UTF-8
(again, duplicate with and without colon)
To solve your problem, you can either set LC_ALL=en_US.UTF-8 before launching Java programs, or edit (you need to be root, and redo it at each X11 update) the compose.dir (and locale.dir and/or locale.alias) files: copy the en_US.UTF-8 lines and adapt them to your locale.
You can also report to your distro so they patch those .dir/.alias files to work properly for all locales provided by the distro.