I'm currently developing a chess project. The main idea is to run it in the console (CMD).
It currently works with an array[8][8] in which I store the chess pieces. The main problem is:
When I want to print a piece symbol such as ♜, ♞, ♝, and so on, the output displays it as ?.
I have already tried a few things: UTF-8, the Emoji-Java library, changing the console's output font to compatible fonts... I've tried for hours and searched around the internet, but I can't find anything. If you can help me, I'd appreciate it.
[?][?][?][?][?][?][?][?]//♜, ♞, ♝, ♛, ♚, ♝, ♞, ♜.
[null][null][null][null][null][null][null][null]//Null= available space to move
[null][null][null][null][null][null][null][null]
[null][null][null][null][null][null][null][null]
[null][null][null][null][null][null][null][null]
[null][null][null][null][null][null][null][null]
[null][null][null][null][null][null][null][null]
[null][null][null][null][null][null][null][null]
//Please ignore the null values; they will be dealt with once this problem is solved...
It's complicated, very complicated: it differs by OS, it differs by OS version (Windows 7 vs. 10), and it even differs by patch level (e.g. Windows 10 before and after the 2004 update).
So let me save you hours of further heartache by suggesting that you use a UI instead where you can control the underlying character set. For example, using Swing or JavaFX.
However, if you insist on using the console then you need to take a number of steps.
The first is to use a PrintWriter in your code so that characters are written out with the correct encoding:
PrintWriter consoleOut = new PrintWriter(new OutputStreamWriter(System.out, StandardCharsets.UTF_8));
consoleOut.println("your character here");
The next step is to pre-configure the console to use your character set. For example, on Windows you might use the chcp command before starting your jar file:
chcp 65001
java -jar .....
But not only that, you should also pass the -Dfile.encoding flag when you start your jar:
java -Dfile.encoding=UTF-8 -jar yourChessApplication.jar
Now, assuming you got all those steps right, it might work, but it might not. You also need to ensure that all your source files are encoded in UTF-8. I won't go into that here because it differs by IDE, but if you are using something like NetBeans you can configure the source encoding in the project properties.
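If you compile outside an IDE, the source encoding can also be passed straight to the compiler; roughly like this (the file name is just a placeholder):
javac -encoding UTF-8 ChessBoard.java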
I would also encourage you to use the Unicode character definition rather than the actual symbol in your code:
//Avoid this, it may fail for a number of reasons (mostly encoding related)
consoleOut.println("♜");
//The better way to write the character using the unicode definition
consoleOut.println("\u265C");
Now even with all this you still need to ensure that your chosen console uses the correct character set. Here are the steps to follow for PowerShell: "Using UTF-8 Encoding (CHCP 65001) in Command Prompt / Windows PowerShell (Windows 10)". Or for Windows cmd you can take a look here: "How to make Unicode charset in cmd.exe by default".
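As one concrete illustration, in PowerShell you can usually switch the session's output encoding before launching the jar; a sketch (behaviour still varies by Windows build):
[Console]::OutputEncoding = [System.Text.Encoding]::UTF8
java -Dfile.encoding=UTF-8 -jar yourChessApplication.jar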
So with all of those steps completed you can compile this code:
PrintWriter consoleOut = new PrintWriter(new OutputStreamWriter(System.out, StandardCharsets.UTF_8));
consoleOut.println("Using UTF_8 output with the character: ♜");
consoleOut.println("Using UTF_8 output with the unicode definition: \u265C");
consoleOut.close();
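For clarity, here is the same snippet as a complete, compilable class (the class name is just an example):
import java.io.OutputStreamWriter;
import java.io.PrintWriter;
import java.nio.charset.StandardCharsets;

public class ConsoleDemo {
    public static void main(String[] args) {
        // Wrap System.out in a writer that explicitly encodes to UTF-8
        PrintWriter consoleOut = new PrintWriter(new OutputStreamWriter(System.out, StandardCharsets.UTF_8));
        consoleOut.println("Using UTF_8 output with the character: ♜");
        consoleOut.println("Using UTF_8 output with the unicode definition: \u265C");
        consoleOut.close();
    }
}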
And then run your compiled jar file in your console (PowerShell in this example) something like this (you won't need chcp 65001 if you configured the PowerShell console correctly):
chcp 65001
java -Dfile.encoding=UTF-8 -jar yourChessApplication.jar
And the output should give the following result:
Using UTF_8 output with the character: ♜
Using UTF_8 output with the unicode definition: ♜
But it might still fail to show correctly, in which case see my opening section about using a UI, or try a different console... It's complicated.
Related
I have got a problem printing a Unicode symbol in the Windows console.
Here's the Java code that prints the Unicode symbol:
System.out.print("\u22A2 ");
The problem doesn't exist when I run the program in Eclipse with the encoding set to UTF-8; however, in the Windows console the symbol gets replaced by a question mark.
The following was done to try to overcome this problem, with no success:
Changed the font of the Windows console to Lucida Console.
Changed the encoding settings every time I open the Windows console, i.e. with chcp 65001.
An extra step I've tried a few times was running the class with an argument, i.e. java -Dfile.encoding=UTF-8 Filter (where "Filter" is the name of the class).
By default, the code page used in the Windows CMD is 437. You can test this by running this command at the prompt:
C:\>chcp
Active code page: 437
This code page prevents you from showing Unicode characters properly! You have to change the code page to 65001 AND use -Dfile.encoding=UTF-8 for that purpose.
C:\>chcp 65001
Active code page: 65001
C:\>java -jar -Dfile.encoding=UTF-8 path/to/your/runnable/jar
In addition to the steps you have taken, you also need a PrintStream/PrintWriter that encodes the printed characters to UTF-8.
Unfortunately, Java's designers chose to open the standard streams with the so-called "default" encoding, which is almost always unusable*) under Windows. Hence, using System.out and System.err naively will make your program's output appear differently depending on where you run it. This goes straight against the goal: compile once, run anywhere.
*) It will be some non-standard "code page" that nobody except Microsoft recognizes on this planet. And AFAIK, if for example you have a German keyboard and a "German" OEM Windows and you want date and time in your home time zone, there is just no way to say: but I want UTF-8 input/output in my CMD window. This is one reason why I have my dual-boot Ubuntu running most of the time, where it goes without saying that the terminal does UTF-8.
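To see which default your JVM actually picked up, a quick diagnostic sketch:
// Prints the JVM default charset and the file.encoding property it is usually derived from
System.out.println(java.nio.charset.Charset.defaultCharset());
System.out.println(System.getProperty("file.encoding"));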
The following usually works for me in JDK7:
public static PrintWriter stdout = new PrintWriter(
        new OutputStreamWriter(System.out, StandardCharsets.UTF_8),
        true);
For ancient Java versions, I replace StandardCharsets.UTF_8 by Charset.forName("UTF-8")
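That variant would look something like this (the same writer, just with the older Charset lookup):
public static PrintWriter stdout = new PrintWriter(
        new OutputStreamWriter(System.out, Charset.forName("UTF-8")),
        true);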
For the Arabic language I used the following code:
PrintWriter stdout = new PrintWriter(
        new OutputStreamWriter(System.out, StandardCharsets.ISO_8859_1), true);
I have a problem with the characters of all the JTextFields in my program (when compiled into a JAR).
When I run it from Eclipse everything works fine... the problem appears when it runs from the already compiled JAR.
The problem is that when I type text with special characters, such as "Ñandú?", into a JTextField, the text Java extracts from the input comes back with strange characters.
For example: System.out.println( myTextField.getText() ); writes garbled characters to the console.
I have tried all kinds of ways to convert the characters, but when I write them back to the console or the interface the strange signs reappear. I've even tried the Commons Lang 3.1 library, but I have not been successful :(
I hope someone knows what to do! The only way it works is by passing -Dfile.encoding=UTF-8 when running the jar file, but that cannot be the answer.
Sorry for the English. Thanks!!!
Java uses the default encoding of your computer, which on Windows is a legacy code page (typically Cp1252) and doesn't support the full Unicode range. Run your program with the following command in the terminal:
java -jar -Dfile.encoding=utf-8 <path to your .jar>
Eclipse runs your application like this if you have any Unicode in your source, but outside of it you're on your own.
The only other possible way I can think of is writing a .bat script with this command and putting it in the same folder as the application.
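A sketch of such a script (the jar name is just a placeholder):
@echo off
chcp 65001 > nul
java -Dfile.encoding=UTF-8 -jar MyApplication.jar
pause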
I have seen numerous questions like mine, but they don't answer my question because I'm using Ant and not Eclipse. I run ant clean dist and it tells me numerous times: warning: unmappable character for encoding UTF8.
I see that the Java compiler has an -encoding option, but that doesn't help me because I'm using Ant.
I'm on Linux and I'm trying to run the developer version of Sentrick. I haven't modified anything; I just downloaded it and followed all of their instructions, and it makes no difference. I emailed the developer and they told me it was this problem, but I suspect it actually has something to do with this error at the end:
BUILD FAILED
/home/daniel/sentricksrc/sentrick/build.xml:22: The following error occurred while executing this line:
/home/daniel/sentricksrc/sentrick/ant/common-targets.xml:83: Test de.denkselbst.sentrick.tokeniser.components.DetectedAbbreviationAnnotatorTest failed
I'm not sure what I'm going to do now because I really need this to work.
Try changing the file encoding of your source files, and set the default Java file encoding to UTF-8 as well.
For Ant:
add -Dfile.encoding=UTF8 to your ANT_OPTS environment variable
Setting the Default Java File Encoding to UTF-8:
export JAVA_TOOL_OPTIONS=-Dfile.encoding=UTF8
Or you can start up java with an argument -Dfile.encoding=UTF8
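If you can edit the build file, the encoding can also be set on the javac task itself; something like this (the paths are placeholders):
<javac srcdir="src" destdir="build" encoding="UTF-8"/>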
The problem is not Eclipse or Ant. The problem is that you have a build file with special characters in it, like smart quotes or em dashes from MS Word. In other words, you have characters in your XML file that are not valid in the UTF-8 encoding the build expects. So you should fix your XML to remove those invalid characters and replace them with similar-looking but valid UTF-8 versions. Look for special characters like @ © — ® etc. and replace them with (c) or whatever is useful to you.
BTW, the bad character is in common-targets.xml at line 83
Changing the encoding to Cp1252 worked for my project, which had the same error. I tried changing the Eclipse properties several times, but it did not help me in any way. I added an encoding property to my pom.xml file and the error was gone. http://ctrlaltsolve.blogspot.in/2015/11/encoding-properties-in-maven.html
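Presumably the property in question is the standard source-encoding property in pom.xml, something along these lines (with the value they describe; UTF-8 is the more usual choice):
<properties>
    <project.build.sourceEncoding>Cp1252</project.build.sourceEncoding>
</properties>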
I am using ImageMagick in my application. Our development machine is Windows and the live server is Linux. It is working fine online, but not on the development machine. I downloaded and installed the latest ImageMagick release for Windows, and when I try the command below at the DOS prompt, it works fine.
convert -sample 100x100 D:\test.jpg D:\test-cropped.jpg
But when I run the same command line from my Java program, it does not work and does not give any error either.
My code is:
Runtime.getRuntime().exec("convert -sample 250x150 "+pathName+digest+".jpg "+pathName+digest+"_thumb.jpg");
Any help is appreciated.
convert.exe lives in the ImageMagick installation directory, so you need to add the ImageMagick installation directory to your PATH environment variable.
Another option is to provide the complete path of convert.exe, as in:
Runtime.getRuntime().exec("C:\\program files\\ImageMagick\\convert -sample 250x150 "+pathName+digest+".jpg "+pathName+digest+"_thumb.jpg");
Try the following:
execute convert using its absolute path
quote your input-file and output-file parameters, in case they contain spaces
I suspect the problem is spaces in pathnames, but the solution is NOT to use escapes or quotes. The exec(String) method splits the string into "arguments" in a completely naive fashion by looking for white-space. It pays no attention whatsoever to quoting, etcetera. Instead, you will end up with command names and arguments that have quote characters, etcetera embedded in them.
The solution is to use the overload of exec that takes a String[], and do the argument splitting yourself; e.g.
Runtime.getRuntime().exec(new String[]{
        "convert",  // or "D:\\Program Files (x86)\\ImageMagick-6.8.0-Q16\\convert\\"
        "-sample",
        "250x150",
        pathName + digest + ".jpg",
        pathName + digest + "_thumb.jpg"
});
The other thing you could do is capture and print any output that is written to the process's stdout and stderr.
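A rough sketch of that, reusing the String[] form above (inFile and outFile are placeholder variables, and the stream handling is kept deliberately simple):
Process p = Runtime.getRuntime().exec(new String[]{
        "convert", "-sample", "250x150", inFile, outFile});
// Drain stdout and stderr so any error message from convert becomes visible
try (BufferedReader out = new BufferedReader(new InputStreamReader(p.getInputStream()));
     BufferedReader err = new BufferedReader(new InputStreamReader(p.getErrorStream()))) {
    String line;
    while ((line = out.readLine()) != null) System.out.println(line);
    while ((line = err.readLine()) != null) System.err.println(line);
}
System.out.println("convert exited with " + p.waitFor());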
In my case, the problem I was facing was that the compare command worked fine from Java using Runtime.getRuntime().exec(), but convert did not work and returned an exit value of 4.
The compare execution returned exit value 0, indicating that it executed successfully.
I had the system path updated with ImageMagick's installation directory, but it still was not picking up the convert exe file. So I started giving the complete path to the convert.exe file instead of writing only convert,
e.g.:
Runtime.getRuntime().exec("C:/Program files/ImageMagic......../convert.exe myImage1 -draw .... myImage2") and it worked fine this time.
Somehow the system was not able to pick up the convert application, and giving the full path sorted it out. Maybe this solution will help someone facing the same type of issue.
I am looking at some Java code in Eclipse on Windows. The line termination characters (DOS-style) do not display properly (empty lines everywhere...).
The problem is that the code is from a Windows ClearCase VOB for which I do not have check-in permissions, so it is read-only (changing the line termination characters with auto-format is not possible). Creating a full copy and changing the line terminators is out of the question, as the code might change while I am looking at it...
I found Preferences->Workspace->"New text file line delimiter", but it seems that this does not make existing files display their line termination characters properly.
How do I make Eclipse display the text file as it was meant to be displayed?
Edit:
Notepad displays the file correctly. UltraEdit also detects it as Unix-style and suggests converting it to DOS (but displays it properly when I decline).
gvim detects the file as Unix and displays ^M at the end of each line.
I have checked the file in binary, and it does not contain any \n characters that do not follow a \r character. Could there be any other way that Eclipse distinguishes Unix from DOS-style line endings?
I found this sequence of characters: 0d0d 0a0d 0d0a (\r\r\n\r\r\n). I suppose this is why it does not work...
What OS are you running on?
Eclipse auto-detects line terminators.
I have never seen it fail and display extra newlines. Is it possible that your file actually does contain double newlines?
Maybe try viewing it with another editor (Notepad++, EditPlus).
Go to Preferences->General->Workspace
You will see "Text file encoding", where you can change it to the encoding you prefer,
and there is "New text file line delimiter", where you have the option of using Unix, Windows, or Mac OS.
I had the same issue with end-of-lines, and I noticed the problem of poor visibility in Eclipse (compared with e.g. Notepad++), but it can be quickly identified by using a built-in feature:
In the toolbar there is a button "Show Whitespace Characters" (pilcrow symbol: ¶), and when you press it you will see one of these characters at the end of each line:
¤¶ for a file with Windows EOL (CRLF)
¶ for a file with Unix EOL (LF)
That way you can see at a glance which EOL style you have and convert it using the File menu.
If you use git, for example, you can also set up an option to automatically convert all commits to a specific EOL. But sometimes, for an unknown reason, files end up with Windows EOL even though I am using Eclipse in a Linux VM. The Eclipse built-in feature lets you see at a glance which line ending is in use.
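If you go that route, a minimal .gitattributes along these lines is one way to do it (adjust to your needs):
# Store text files with LF in the repository; working-tree endings follow platform defaults
* text=auto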
We are in a mixed-environment with ClearCase (Unix VOB server) and Windows ClearCase clients, and with Eclipse.
I haven't observed such an issue, but I know that certain file types can be managed by ClearCase in a specific way: see "Working with Rational ClearCase Unicode Type Manager".
By describing (cleartool describe) the type of the files causing the problem, you might see a special element type, which could be a first explanation.
Another classic cause would be a ClearCase trigger which somehow corrupts the content of said file.