I am new to Java and I'm not quite sure how to output an integer raised to a power as a string. I know that
Math.pow(double, double)
will actually compute the value of raising a double to a power. But if I want to output "2^6" (with the 6 as a superscript rather than with the caret), how do I do that?
EDIT: This is for an Android app. I'm passing in the integer raised to the power as a string and I would like to know how to convert this to superscript in the UI for the phone.
Unicode does have superscript versions of the digits 0 to 9: http://en.wikipedia.org/wiki/Unicode_subscripts_and_superscripts
This should print 2⁶:
System.out.println("2⁶");
System.out.println("2\u2076");
If you're outputting the text to the GUI then you can use HTML formatting and the <sup> tag to get a superscript. Otherwise, you'll have to use Unicode characters to get the other superscripts. Wikipedia has a nice article on superscripts and subscripts in Unicode:
http://en.wikipedia.org/wiki/Unicode_subscripts_and_superscripts
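Since the question mentions Android, here is a minimal sketch of the HTML route for a TextView; Html.fromHtml(String) is deprecated as of API 24 but still handles simple tags like <sup>, and the view id powerText is just a placeholder:

import android.text.Html;
import android.widget.TextView;

// Inside an Activity, assuming a TextView with id powerText in the layout
TextView powerText = findViewById(R.id.powerText);
powerText.setText(Html.fromHtml("2<sup>6</sup>"));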
This answer applies only if you're using Eclipse (a Java IDE). By default, Eclipse's console supports only certain Unicode symbols: superscript 1, 2 and 3 work out of the box, but for anything else you have to adjust Eclipse's settings, which isn't too hard. The culprit is Windows-1252, which seems to be the default for a lot of software, including being the default encoding for files in Eclipse. So whenever you print to the console, the output is interpreted in that encoding, and since it's a 1-byte encoding, it doesn't support the full Unicode character set. This isn't actually a problem with Java; it's a problem with Eclipse, which means you need to configure Eclipse to use UTF-8 instead. You can do this in multiple places. If you just want the Unicode characters displayed when running one file, right-click the file and go to Properties -> Resource -> Text file encoding, and change from the default to Other: UTF-8. If you want to do this for your whole workspace, go to Window -> Preferences -> General -> Workspace -> Text file encoding. You can also apply this project-wide or even package-wide following similar steps, depending on what you're going for.
I have about 30 languages that my application needs to support. I have some fairly simple text that was provided for each of them, but within that text I do need to make one choice using {0, choice, 0# ...|0<...}
At present I have not even got as far as testing whether this works, because I am having a lot of trouble trying to convince my text editor to let me combine left-to-right and right-to-left text. But what I really want to know is whether this is even possible...
Question: Is it possible to use the Java message properties embedded choice format with languages flowing from right to left?
If anyone can think of any additional tags to use for this question, I would be grateful.
The short answer is yes, it is possible. It is a thorny issue, but BIDI (bidirectional) support is a concern of the text editor, not of your code, so if your text editor supports it (and most editors do) then it is possible. First, make sure that you use an encoding (character set) that supports multiple languages: UTF-8 is recommended (UTF-16 and some others may also work), as opposed to the ISO-8859-X family (where X is a single digit), each member of which supports only a limited set of languages. You can also write your strings in a properties file, or anywhere in the code, as Unicode escape sequences.
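As a quick sanity check that the choice format itself is indifferent to text direction, here is a minimal sketch using Hebrew strings written as Unicode escapes (the words are illustrative: \u05D0\u05D9\u05DF means "no" and \u05E7\u05D1\u05E6\u05D9\u05DD means "files"):

import java.text.MessageFormat;

String pattern = "{0,choice,0#\u05D0\u05D9\u05DF \u05E7\u05D1\u05E6\u05D9\u05DD|0<{0} \u05E7\u05D1\u05E6\u05D9\u05DD}";
System.out.println(MessageFormat.format(pattern, 0)); // "no files" in Hebrew
System.out.println(MessageFormat.format(pattern, 5)); // "5 files" in Hebrew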
There is an open-source Java library, MgntUtils, that has a utility that converts strings in any language (including special characters and emojis) to Unicode escape sequences and vice versa:
result = "Hello World";
result = StringUnicodeEncoderDecoder.encodeStringToUnicodeSequence(result);
System.out.println(result);
result = StringUnicodeEncoderDecoder.decodeUnicodeSequenceToString(result);
System.out.println(result);
The output of this code is:
\u0048\u0065\u006c\u006c\u006f\u0020\u0057\u006f\u0072\u006c\u0064
Hello World
The library can be found at Maven Central or at Github. It comes as a Maven artifact with sources and javadoc.
Here is javadoc for the class StringUnicodeEncoderDecoder
I don't know the exact use of \f, the form feed escape sequence, so I searched for some examples, and from them I came to know that it has been used for page breaks ("insert a form feed in the text at this point").
So when I ran the following program, I got some unexpected results for \f, \b and \r. Is there any reason for this?
Share your opinion on this, and point out if I have done anything wrong.
java version "1.8.0_181"
System.out.println("Backspace : " + "ABCDE\bFGHIJ");
System.out.println("Formfeed : " + "ABCDE\fFGHIJ");
System.out.println("Backslash : " + "ABCDE\\FGHIJ");
System.out.println("Carriage Return: " + "ABCDE\rFGHIJ");
OUTPUT
Backspace : ABCDEFGHIJ
Formfeed : ABCDEFGHIJ
Backslash : ABCDE\FGHIJ
Carriage Return: ABCDE
FGHIJ
Well obviously something has changed between the time you saw your expected results and today when you are getting your unexpected results. But I doubt it has anything to do with the escapes themselves as, from a backward-compatibility point of view, they must continue to represent the same characters in every new version of Java.
So, any differences you're seeing are due to how the characters are handled once they've been sent out to System.out. It could be something to do with the particular version of Java, either in the JDK libraries or in the JVM. But it could equally well be something entirely outside of Java. For instance, a DOS-emulation box on Windows might display these characters differently than an output window in an IDE such as NetBeans or Eclipse. There might even be a difference between the DOS boxes on different versions of Windows, or the output windows on different versions of the same IDE.
The only way to know for sure is to run your program under different Java versions in controlled conditions where everything else besides the Java version remains identical. However, there's no guarantee it's even possible to achieve those test conditions. So if I were you I'd just accept that there's no guarantee about how these characters will be displayed and not let it bother you any further.
If you're curious about form feed ('\f') specifically, its historical purpose was, as you may have surmised, to cause an output device, usually a printer, to advance to the top of a new page. But for software-emulated display pseudo-devices like DOS boxes and IDE output windows, which don't have a well-defined concept of what a "page" is, advancing to a new page is basically meaningless. So what should happen when you send a form feed to one of these displays? Clear the window? Jump down ten lines (or five, or twenty)? Display a little one-character-wide "FF" graphic? Display a blank space? Ignore it and display nothing at all? Who knows? It all depends on what the programmers of DOS boxes, IDE output windows, and other such software-emulated display pseudo-devices decided they ought to do.
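One way to convince yourself that the escape characters themselves haven't changed is to print their numeric values, which are fixed by the Java Language Specification no matter how any particular console chooses to render them:

System.out.println((int) '\b'); // 8  (backspace)
System.out.println((int) '\f'); // 12 (form feed)
System.out.println((int) '\r'); // 13 (carriage return)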
I'm trying to display Arabic text in Java but it shows junk characters (example: ¤[ï߯[î), or sometimes only question marks when I print. How do I make it print Arabic? I heard that it's something related to Unicode and UTF-8. This is the first time I'm working with languages, so I have no idea. I'm using the Eclipse Indigo IDE.
EDIT:
If I use UTF-8 encoding, then the "¤[ï߯[î" characters become "????????" characters.
For starters you could take a look here. This should allow you to make Eclipse print Unicode in its console (I do not know whether Eclipse supports this out of the box without any extra tweaks).
If that does not solve your problem you most likely have an issue with the encoding your program is using, so you might want to create strings in some manner similar to this:
String str = new String("تعطي يونيكود رقما فريدا لكل حرف".getBytes(), StandardCharsets.UTF_8); // java.nio.charset.StandardCharsets avoids the checked exception thrown by the "UTF-8" string form
This at least works for me.
If you embed the text literally in the code, make sure you set the encoding for your project correctly.
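If you compile from the command line instead of through Eclipse, you can state the source encoding explicitly with javac's -encoding flag (the file name here is just a placeholder):

javac -encoding UTF-8 ArabicDemo.java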
Is this for Java SE, Java EE, or Java ME?
If this is for Java ME, you have to make custom GlyphUtils if you use LWUIT.
Download this file:
http://dl.dropbox.com/u/55295133/U0600.pdf
Look at the list of Unicode encodings.
And look at this thread:
https://stackoverflow.com/a/9172732/1061371
In the answer (post) by Mohamed Nazar, as edited by Alex Kliuchnikau:
"The below code can be used for displaying Arabic text in J2ME: String s = new String("\u0628\u06A9".getBytes(), "UTF-8"); where \u0628\u06A9 is the Unicode of two Arabic letters"
Look at the U0600.pdf file, and you can see that Mohamed Nazar and Alex Kliuchnikau give an example that creates the "ba" and "kaf" characters in Arabic.
The last point you must consider is: make sure your UI supports Unicode (that is, Arabic) characters. For example, LWUIT does not yet support Arabic characters, so you will need custom code if your app uses LWUIT.
I'm trying to find out what has happened in an integration project. We just can't get the encoding right at the end.
A Lithuanian file was imported to the AS/400, where text is stored in an EBCDIC encoding. The data was then exported to an ANSI file and read as windows-1257. ASCII characters work fine, and some Lithuanian does too, but the rest looks like garbage, with characters like ~, ¶ and ].
Example string going through the pipeline:
Start file:
Tuskulënö
AS/400 (hex bytes shown vertically, high nibble over low nibble):
Tuskulënö
EAA9A9596
34224335A
Exported file (after conversion to windows-1257):
Tuskulėnö
Expected result for exported file:
Tuskulėnų
Any ideas?
Regards,
Karl
EBCDIC isn't a single encoding, it's a family of encodings (in this case called codepages), similar to how ISO-8859-* is a family of encodings: the encodings within the families share about half the codes for "basic" letters (roughly what is present in ASCII) and differ on the other half.
So if you say that it's stored in EBCDIC, you need to tell us which codepage is used.
A similar problem exists with ANSI: when used for an encoding it refers to a Windows default encoding. Unfortunately the default encoding of a Windows installation can vary based on the locale configured.
So again: you need to find out which actual encoding is used here (these are usually from the Windows-* family; the "normal" English one is Windows-1252).
Once you actually know what encoding you have and want at each point, you can go towards the second step: fixing it.
My personal preference for this kind of problem is: have only one step where encodings are converted. Take whatever the initial tool produces and convert it to UTF-8 in the first step. From then on, always use UTF-8 to handle that data. If necessary, convert UTF-8 to some other encoding in the last step (but avoid this if possible).
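A minimal sketch of that approach, assuming the AS/400 data is in CCSID 1112 (EBCDIC Baltic multilingual, exposed by most JDKs as the extended charset x-IBM1112; both that codepage and the file names here are assumptions):

import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class ConvertOnce {
    public static void main(String[] args) throws Exception {
        // Decode the raw EBCDIC bytes exactly once...
        byte[] raw = Files.readAllBytes(Paths.get("export.ebcdic"));
        String text = new String(raw, Charset.forName("x-IBM1112"));

        // ...then write UTF-8 and stay in UTF-8 from here on
        Files.write(Paths.get("export.utf8.txt"), text.getBytes(StandardCharsets.UTF_8));
    }
}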
I have a string containing ISO-8859-1 characters as octal escapes (\350, ...). How do I convert them to their normal form, for example "\350" -> "è", in Java?
Octal 350 is the proper code for è. Is this what you're seeing in a console, or in a file that is being displayed in a console? If so, I suspect the problem is with your terminal emulator or console configuration. The text in the actual file or screen buffer is in ISO-8859-1; your terminal simply can't display it, so it writes the octal equivalent instead.
Edit: I've been faced with similar sequences of characters showing up in files, and had stared for hours trying to figure out why they had been replaced in the file, and it turned out that they had not been. It was the software that I was using to view the file that was doing the substitution. In my case it was PuTTY. If you think this might be the case, I recommend you do a hex dump on the file to verify.
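If it turns out the string really does contain literal backslash-octal sequences and you need to turn them back into characters, here is a minimal sketch (assuming one to three octal digits per escape):

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class OctalUnescape {
    // Matches a backslash followed by one to three octal digits, e.g. \350
    private static final Pattern OCTAL = Pattern.compile("\\\\([0-7]{1,3})");

    static String unescape(String input) {
        Matcher m = OCTAL.matcher(input);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            char c = (char) Integer.parseInt(m.group(1), 8);
            m.appendReplacement(out, Matcher.quoteReplacement(String.valueOf(c)));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(unescape("Tuskul\\350n")); // prints Tuskulèn
    }
}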