No Cyrillic symbols in a string in IntelliJ IDEA - Java

I've started developing in Java in IntelliJ, created a new Gradle project, and added a few libraries, but ran into a problem: a string is not being filled with Cyrillic symbols - instead I get incorrect symbols (screenshot attached). How can I fix it? It has something to do with encodings - I tried a lot of suggestions on the web but nothing helped. Cheers!

Just change all encodings to UTF-8 (or another encoding you prefer) and reload all Java files as UTF-8.
You probably know how to do the first part (if you don't: File (top left corner of your window) > Settings > Editor > File Encodings).
The second part is also easy: just click on the encoding shown in the bottom right corner of the window and choose UTF-8 (or another encoding you prefer).
Then you will get a dialog like the one shown below. Choose "Reload" and enjoy. If you still have questions, feel free to ask.
A better description is here: https://www.jetbrains.com/help/idea/encoding.html
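Since the question mentions a Gradle project, it may also help to make the compiler itself read the sources as UTF-8 when the project is built outside the IDE; otherwise string literals with Cyrillic characters can be mangled at compile time. A minimal sketch for build.gradle, assuming the standard Java plugin is applied:

// build.gradle: tell the Java compiler to read all sources as UTF-8
tasks.withType(JavaCompile) {
    options.encoding = 'UTF-8'
}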

Related

Set Scala Eclipse language to english_uk

I have a text file that I am reading through Scalding's TextLine function. The problem is that my file has multiple £ signs in it, but as the default language is en_US, Eclipse by default converts each £ into a �. I'm sure that I have to change the language somewhere to en_UK, but I don't know where to do that.
I have tried going to Window -> Preferences -> Java -> Installed JREs and adding
-Duser.language=en_UK -Duser.country=UK
to the Default VM arguments, but the output remains the same.
PS: I'm using Eclipse Kepler.
Any recommendations are welcome.
I'm not sure if I'm getting you right, but I guess you could solve your problem either way.
If Eclipse just can't display the sign correctly, then you should tell Eclipse to use a Unicode encoding for its console (explained here: http://eclipsesource.com/blogs/2013/02/21/pro-tip-unicode-characters-in-the-eclipse-console/)
If you read in your file programmatically, you have to use the correct charset, again UTF-8. But this is hard to answer precisely because you don't provide any code.
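To illustrate the second point in plain Java (the question did not include code, so the file name here is just a placeholder): pass the charset explicitly instead of relying on the platform default, and make sure it matches how the file was actually written.

import java.io.BufferedReader;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class PoundSignDemo {
    public static void main(String[] args) throws IOException {
        // "prices.txt" is a placeholder; the charset must match the file's actual encoding.
        try (BufferedReader reader = Files.newBufferedReader(
                Paths.get("prices.txt"), StandardCharsets.UTF_8)) {
            reader.lines().forEach(System.out::println);
        }
    }
}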

Copy and paste java code spacing gives errors

I copied and pasted code and Eclipse is giving me an error. I can fix it by deleting the spacing at the beginning of the lines but that is very tedious. It is the most recent version. Any ideas?
Try turning on the display of whitespace characters: under Preferences > General > Editors > Text Editors there is a checkbox labeled "Show whitespace characters". Then you could potentially do a global replace, if that is the issue.
I have seen such errors in the past in another language (ActionScript) in Eclipse. The problem there was that the whitespace character used was not the usual one encoding a blank, but the one encoding a blank that should not be broken across two lines (a non-breaking space). Normally I would expect this not to be an issue in Java because it uses Unicode ... but probably your file has some other encoding (not Unicode)!?
You should check:
Whether the encoding of your file supports Unicode (check the Eclipse "Properties" of the file). Changing the encoding will probably solve your problem.
Open the problematic file in a hex editor and check the character code of the problematic blanks. You probably copied and pasted your code from some web page, and such a character was used there to prevent the browser from breaking the code across multiple lines.
There are various invisible characters. What I do is paste the code into a text editor (such as Notepad++) and remove the formatting there. That strips these invisible characters, and I can then copy from it and paste into Eclipse without any problem.
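If the culprit really is a non-breaking space (U+00A0), as described above, a small sketch like the following can replace it in a pasted file. The path is a placeholder and this is just one way to do it:

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class FixPastedWhitespace {
    public static void main(String[] args) throws IOException {
        Path source = Paths.get("Example.java"); // placeholder path
        String code = new String(Files.readAllBytes(source), StandardCharsets.UTF_8);
        // Replace non-breaking spaces (U+00A0) with ordinary spaces.
        String fixed = code.replace('\u00A0', ' ');
        if (!fixed.equals(code)) {
            Files.write(source, fixed.getBytes(StandardCharsets.UTF_8));
            System.out.println("Replaced non-breaking spaces in " + source);
        }
    }
}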

Troubles with local letters in Pentaho PDF Reporting

I am using Pentaho Report Designer 3.8.3 and I have a small, aesthetic problem with fonts.
I have implemented, or better said, I am implementing the OpenSans font in my Pentaho.
The current state is that I have installed this font on Linux (which my Pentaho is running on) and also into Java. But I still have 2 problems with fonts:
1.) I see the OpenSans font in HTML only when I open it on a PC where the font is actually installed. Whenever I open a report with the OpenSans font on a machine where it is not installed, it is changed to something else, Arial for example.
(I have added OpenSans to: '/usr/lib/jvm/' and also into: '/usr/share/fonts')
2.) After publishing the report to PDF, I see only '?' instead of accented characters. But in HTML I see no '?'; each letter is as it should be.
(I have added
org.pentaho.reporting.engine.classic.core.modules.output.pageable.pdf.Encoding=ISO 639-1 #my language code, Slovak
into '/home/pentaho/pentaho/biserver-ce/tomcat/webapps/pentaho/WEB-INF/classes/classic-engine.properties' file)
Does anyone have an idea what else I should try to make it work properly?
I'm not sure what you can do with HTML export other than the usual approach of setting the font-family so it has something to fall back to.
However, for PDF, look carefully at the PDF export options; you'll see there's an "embed fonts" option. Enable that and your PDF will work anywhere!

Eclipse UTF-8 - weird characters

I am writing a program in Java with the Eclipse IDE, and I want to write my comments in Greek. So I changed the encoding under Window -> Preferences -> General -> Content Types -> Text -> Java Source File to UTF-8. The comments in my code are OK, but when I run my program some words contain weird characters, e.g. San Germ�n (San Germán). If I change the encoding to ISO-8859-1, everything is OK when I run the program, but the comments in my code are not (weird characters!). So, what is going wrong?
Edit: my program uses Java Swing, and the weird characters with UTF-8 are Strings in cells of a JTable.
EDIT (2): OK, I solved my problem. I keep the UTF-8 encoding for the Java file but change the encoding of the strings: String k = new String(myStringInByteArray, "ISO-8859-1");
This is most likely due to the compiler not using the correct character encoding when reading your source. This is a very common source of error when moving between systems.
The typical way to solve it is to use plain ASCII (which is identical in both Windows-1252 and UTF-8) and the "\u1234" escape scheme (Unicode character 0x1234), but it is a bit cumbersome to handle, as Eclipse (last time I looked) did not transparently support this.
The property file editor does, though, so a reasonable suggestion could be that you put all your strings in a property file, and load the strings as resources when needing to display them. This is also an excellent introduction to Locales which are needed when you want to have your application be able to speak more than one language.
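A minimal sketch of that property-file approach (the bundle name "labels" and the key are made up for this example; per the answer above, the property file editor stores non-ASCII values as \uXXXX escapes for you):

import java.util.Locale;
import java.util.ResourceBundle;

public class Labels {
    public static void main(String[] args) {
        // Loads labels_el.properties for Greek, falling back to labels.properties.
        ResourceBundle bundle = ResourceBundle.getBundle("labels", new Locale("el"));
        System.out.println(bundle.getString("city.sanGerman")); // hypothetical key
    }
}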

Hebrew appears as question marks in NetBeans

I am using NetBeans 6.1 on 2 computers.
On one of them the program:
public static void main(String argv[])
{
    System.out.println("שלום");
}
prints normally, and on the other it prints question marks.
What can the difference between the 2 environments be?
edit:
on both computers
Control Panel \ Regional and Language Options \ Advanced
is set to Hebrew.
edit:
Thank you Michael Burr,
but the value of the encoding is already UTF-8.
Maybe this is something with the JVM?
edit:
I have installed Eclipse and the problem occurs there as well.
I also tried reading the Hebrew text from a file, with the same result.
edit:
System.getProperty("file.encoding");
returns "Cp1252"
I tried
System.setProperty("file.encoding","UTF-8")
but the question marks remain.
Thanks,
Ido
Make sure that NetBeans is set up with an encoding that supports Hebrew characters. From the NetBeans Wiki:
To change the language encoding for a project:
Right-click a project node in the Projects window and choose Properties.
Under Sources, select an encoding value from the Encoding drop-down field.
You can't set the "file.encoding" property with System.setProperty(); it has to be set on the command line when the JVM is started with -Dfile.encoding=UTF-8. The value of this property is read during JVM initialization and cached. By the time your main method is invoked, the value has been cached and changes to the property are ignored.
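For example, assuming a main class called MainClass (a placeholder), the flag goes on the command line that starts the JVM:

java -Dfile.encoding=UTF-8 MainClass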
Is Hebrew installed by default? Could be that a language pack isn't installed?
Control Panel > Regional and Language Options > Languages. Select the 'Install files for complex script and right-to-left languages (including Thai)' option. This will install support for Hebrew. You'll probably need an OS disc.
How exactly are you running the program? Where does it print its output? It could be as simple as netbeans or the console using different fonts, one of which does not include Hebrew characters.
To eliminate encoding problems during compilation, try replacing the Hebrew characters with their Unicode escape sequences and see if the result is different.
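For instance, the string from the question can be written with Unicode escapes so the source file stays pure ASCII:

System.out.println("\u05E9\u05DC\u05D5\u05DD"); // the escape sequence for "שלום"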
I think I misunderstood your problem (I thought that the characters were not being displayed properly in the NetBeans editor). The exact steps to solve your problem might depend on the version of the OS you're running. Win2K, WinXP, and Vista all have slightly different dialogs and wording, unfortunately.
Take a look at this help page for the JVM:
http://java.com/en/download/help/locale.xml
It sounds like you've already configured the system like it should be, but the devil is in the details - there are several different 'locale' settings on a system that might affect this (and for all I know the JVM might throw in one or two on its own).
http://www.siao2.com/2005/02/01/364707.aspx
Usually it's the default encoding on:
Control Panel \ Regional and Language Options \ Advanced
(Select Hebrew on the combo)
You'll have to restart after changing this setting.
What helped me is this (on Win7):
None of the answers above worked for me.
I spent about an hour, but figured out that the problem is not in the String encoding, but in the default encoding used by the IDE from start-up.
So, to get Hebrew, Arabic, Russian, etc. symbols in the NetBeans console output you need to modify netbeans.conf.
Search for the key netbeans_default_options and add -J-Dfile.encoding=UTF-8 into the quotes.
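A sketch of what the edited line can end up looking like (keep whatever options are already inside the quotes; the exact set varies by installation):

# in netbeans.conf
netbeans_default_options="<existing options> -J-Dfile.encoding=UTF-8"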
