I'm trying to make a UUencode algorithm and part of my code contains this:
for (int x = 0; x < my_chars.length; x++)
{
    if ((x + 1) % 3 == 0)
    {
        char first = my_chars[x - 2];
        char second = my_chars[x - 1];
        char third = my_chars[x];
        int first_binary = Integer.parseInt(Integer.toBinaryString(first));
        int second_binary = Integer.parseInt(Integer.toBinaryString(second));
        int third_binary = Integer.parseInt(Integer.toBinaryString(third));
        int n = (((first << 8) | second) << 8) | third;
        System.out.print(my_chars[x - 2] + "" + my_chars[x - 1] + my_chars[x] + Integer.toBinaryString(n));
    }
}
System.out.println();
System.out.println(Integer.toBinaryString('s'));
What I'm trying to achieve is to combine the 8 bits of each of those chars into a big 24-bit int. The problem I'm facing is that the result is a 23-bit int. Say my first 3 chars were:
'T' with a binary representation of 01010100
'u' with a binary representation of 01110101
'r' with a binary representation of 01110010
The result that I get from my program is an int formed from these bits:
10101000111010101110010
Which is missing the 0 at the beginning from the representation of 'T'.
Also, I have included the last 2 lines of code because the binary string that I get from 's' is 1110011, which is also missing the 0 at the beginning.
I have checked whether I scrolled to the right by mistake, but it does not seem that I have.
The method Integer.toBinaryString() does not zero-pad the results on the left; you'll have to zero-pad it yourself.
This value is converted to a string of ASCII digits in binary (base 2)
with no extra leading 0s.
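Note that the int n already holds the correct 24-bit value; only the string returned by Integer.toBinaryString() lacks the leading zero. A minimal sketch of one way to left-pad with String.format (other approaches work just as well):

char c = 'T';
String bits8 = String.format("%8s", Integer.toBinaryString(c)).replace(' ', '0');
System.out.println(bits8);   // 01010100

int n = ((('T' << 8) | 'u') << 8) | 'r';
String bits24 = String.format("%24s", Integer.toBinaryString(n)).replace(' ', '0');
System.out.println(bits24);  // 010101000111010101110010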
Related
I know this is trivial, but I can't find the proper explanation. I have the following code:
str="1230"
int rez=str.charAt(3) - '0';
rez=3;
How does this parsing work?
As long as the character is a digit, you can get the equivalent int value by subtracting '0'. The ASCII coding for '0' is decimal 48, '1' is decimal 49, etc.
So '8' - '0' = 56 - 48 = 8;
For your number, you can parse the entire string like this (assuming all the characters are digits, otherwise the result wouldn't make sense).
String v = "1230";
int result = 0; // starting point
for (int i = 0; i < v.length(); i++) {
    result = result * 10 + v.charAt(i) - '0';
}
System.out.println(result);
Prints
1230
Explanation
In the above loop, first time through:
result = 0 * 10 + '1' - '0' = 1
second time through:
result = 1 * 10 + '2' - '0' = 12
third time through:
result = 12 * 10 + '3' - '0' = 123
last time through:
result = 123 * 10 + '0' - '0' = 1230
"Behind the scene" a char is just an int with a specific range (very simplified explanation).
Try following code to convince yourself:
System.out.println((int) '0'); // this won't output 0
System.out.println((int) 'a'); // this works as well
This is why arithmetic operations are possible on chars.
I've been searching for a solution to my problem for days but can't get a spot-on answer from the previously answered questions, blogs, tutorials, etc. all over the internet.
My aim is to write a program which takes a decimal number as input, calculates the hexadecimal number, and also prints the Unicode symbol of said hexadecimal number (\uXXXX).
My problem is I can't "convert" the hexadecimal number to Unicode. (It has to be written in this format: \uXXXX)
Example:
Input:
122 (= Decimal)
Output:
Hexadecimal: 7A
Unicode: \u007A | Unicode Symbol: Latin small letter "z"
The only thing I've managed to do is print the Unicode escape (\u007A), but I want the symbol ("z").
I thought that since the Unicode escape only has 4 numbers/letters, I would just need to "copy" the hexadecimal into the code and fill up the remaining places with 0's, and that kind of worked, but as I said I need the symbol, not the code. So I tried and tried, but I just couldn't get the symbol.
By my understanding, if you want the symbol you need to print it as a string.
But when trying it with a string I get the error "illegal unicode escape".
It's like you can only print pre-determined Unicode escapes and not "random" ones generated on the spot in relation to your input.
I'm only a couple days into Java, so apologies if I have missed anything.
Thank you for reading.
My code:
int dec;
int quotient;
int rest;
int[] hex = new int[10];
char[] chars = new char[] { 'F', 'E', 'D', 'C', 'B', 'A' };
String unicode;

// Input number
System.out.println("Input decimal number:");
Scanner input = new Scanner(System.in);
dec = input.nextInt();

// "Converting" to hexadecimal
quotient = dec / 16;
rest = dec % 16;
hex[0] = rest;
int j = 1;
while (quotient != 0) {
    rest = quotient % 16;
    quotient = quotient / 16;
    hex[j] = rest;
    j++;
}

/*if (j == 1) {
    unicode = '\u000';
}
if (j == 2) {
    unicode = '\u00';
}
if (j == 3) {
    unicode = '\u0';
}*/

System.out.println("Your number: " + dec);
System.out.print("The corresponding Hexadecimal number: ");
for (int i = j - 1; i >= 0; i--) {
    if (hex[i] > 9) {
        if (j == 1) {
            unicode = "\u000" + String.valueOf(chars[16 - hex[i] - 1]);
        }
        if (j == 2) {
            unicode = "\u00" + String.valueOf(chars[16 - hex[i] - 1]);
        }
        if (j == 3) {
            unicode = "\u0" + String.valueOf(chars[16 - hex[i] - 1]);
        }
        System.out.print(chars[16 - hex[i] - 1]);
    } else {
        if (j == 1) {
            unicode = "\u000" + Character.valueOf(hex[i]);
        }
        if (j == 2) {
            unicode = "\u00" + Character.valueOf(hex[i]);
        }
        if (j == 3) {
            unicode = "\u0" + Character.valueOf(hex[i]);
        }
        System.out.print(hex[i]);
    }
}
System.out.println();
System.out.print("Unicode: " + (unicode));
}
It's not advanced code whatsoever; I wrote it exactly how I would calculate it on paper.
Divide the number by 16 until I get 0, and what remains along the way is the hexadecimal equivalent.
So I put it in a while loop: since I would divide the number n times until I got 0, the condition is to repeat the division until the quotient equals zero.
While doing so, the remainder of each division gives the numbers/letters of my hexadecimal number, so I need them to be saved. I chose an integer array to do so: rest (remainder) = hex[j].
I also threw in a variable called "j", so I would know how many times the division was repeated and could determine how long the hexadecimal is.
In the example it would be 2 letters/numbers long (7A), so j = 2.
The variable would then be used to determine how many 0's I would need to fill up the unicode with.
If I have only 2 letters/numbers, it means there are 2 empty spots after \u, so we add two zeros, to get \u007A instead of \u7A.
Also, the next if statement replaces any number higher than 9 with a character from the char array above, basically just like you would do on paper.
I'm very sorry for this insanely long question.
U+007A is the Unicode code point, an int (code points need up to 3 bytes).
\u007A is the UTF-16 char escape.
A Unicode code point (symbol) beyond U+FFFF is converted to two chars (a surrogate pair), and then the hexadecimal values of those chars no longer agree with the code point; hence working with code points is best. UTF-16 is just an encoding scheme using 16-bit units, where the surrogate pairs for 3-byte Unicode values are marked by a fixed high-bit pattern.
int hex = 0x7A;
hex = Integer.parseUnsignedInt("007A", 16);                        // parse the hex digits
char ch = (char) hex;                                              // fine for code points up to U+FFFF
String stringWith1CodePoint = new String(new int[] { hex }, 0, 1); // works for any code point
int[] codePoints = stringWith1CodePoint.codePoints().toArray();
String s = "𝄞"; // U+1D11E = "\uD834\uDD1E", one code point but two chars
You can simply use System.out.printf or String.format to do what you want.
Example:
int decimal = 122;
System.out.printf("Hexadecimal: %X\n", decimal);
System.out.printf("Unicode: u%04X\n", decimal);
System.out.printf("Latin small letter: %c\n", (char)decimal);
Output:
Hexadecimal: 7A
Unicode: \u007A
Latin small letter: z
I am doing some kind of cipher in Java for school homework. The task is to change the value of a certain char to a new one with a specific offset, which is given by the user and can range from negative to positive numbers (within the alphabet).
Now I have a problem with negative offsets. I have created a String with the alphabet which helps to find the new char. For example, with an offset of 7 I get this: encrypt("TEST") = "ALZA". So my code grabs the index of the char in the alphabet string and uses that index to look up the new char. However, when I have the char 'E' and a negative offset, i.e. -7, it gives -3 as the new index of the new char (I hope that makes sense). Since there is no char at index -3, I get an error.
So how can I wrap around to the end of the string instead of going further and further into negative index numbers?
Add 26 then mod 26:
i = (i + 26) % 26;
This always works for indexes down to -26. If that's not enough, just add some zeroes:
i = (i + 26000000) % 26;
Your general problem appears to be that letters are represented by only 26 indices (0 through 25), but the actual index variable you use might be larger than that, or even less than zero (negative). One way to handle this problem is to use the mod operator to safely wrap your index around so it always points at a valid letter.
Here is logic which can do that:
if (index < 0) {
    index = (index % 26) + 26;
}
else {
    index = index % 26;
}
Assuming the letter E is at position 5 and you apply a shift of -7, this would mean that the new index would be -2. This new position can be mapped using the above logic as follows, where index = -2 in this case:
5 - 7 = -2
(-2 % 26) + 26
-2 + 26
24
And the character in position 24 is the letter X.
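As a small sketch of how this wrap-around could be applied to the cipher itself (the alphabet string and 0-based indices here are assumptions for illustration, not taken from the question):

String alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"; // assumed alphabet string
char letter = 'E';
int offset = -7;                                 // user-supplied shift
int index = alphabet.indexOf(letter) + offset;   // 4 - 7 = -3
index = ((index % 26) + 26) % 26;                // wraps to 23
System.out.println(alphabet.charAt(index));      // prints X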
If you can constrain shift values to be positive, you can use the remainder operator:
int newIndex = (index + shift) % 26;
If there are negatives to be expected:
int newIndex = Math.floorMod(index + shift, 26); would do the trick.
Actually you need the mathematical modulo, but the % operator is not quite that.
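A quick sketch illustrating that difference for a negative index:

int index = 4;       // 'E'
int shift = -7;
System.out.println((index + shift) % 26);             // -3: % keeps the sign of the dividend
System.out.println(Math.floorMod(index + shift, 26)); // 23: floorMod always yields 0..25 here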
I'm trying to create a new byte from a certain number of bits:
char prostie1 = theRepChars[j-3];
char prostie2 = theRepChars[j-2];
char prostie3 = theRepChars[j-1];
char prostie4 = theRepChars[j];
String prostiaMare = prostie4 + prostie3 + prostie2 + prostie1 + "";
Byte theChar = new Byte(prostiaMare);
When I do this I get a NumberFormatException with the value 196.
I have no idea what my problem might be.
--EDIT--
OK, I think I might have to give some more details since I wasn't very clear. I'm trying to do a UUencode algorithm, and by following the logic of the algorithm my byte should not end up with a value bigger than 194. Here is a bunch of my code.
if (my_chars.length % 3 == 0)
{
    for (int x = 0; x < my_chars.length; x++)
    {
        if ((x + 1) % 3 == 0)
        {
            char first = my_chars[x - 2];
            char second = my_chars[x - 1];
            char third = my_chars[x];

            int n = (((first << 8) | second) << 8) | third;
            String theRep = Integer.toBinaryString(n);
            while (theRep.length() < 24 - 1)
            {
                theRep = 0 + theRep;
            }
            // 0-padded theRep
            for (int j = 0; j < theRepChars.length; j++)
            {
                if ((j + 1) % 4 == 0)
                {
                    char prostie1 = theRepChars[j - 3];
                    char prostie2 = theRepChars[j - 2];
                    char prostie3 = theRepChars[j - 1];
                    char prostie4 = theRepChars[j];

                    String prostiaMare = prostie4 + prostie3 + prostie2 + prostie1 + "";
                    System.out.println(prostiaMare);
                }
            }
        }
    }
}
And trying to create a new byte with the value that prostiaMare has gives me the NumberFormatException. I'm not sure whether I have followed the algorithm right (http://www.herongyang.com/encoding/UUEncode-Algorithm.html).
196 is outside the range of byte, a signed value. Bytes can range from -128 to 127.
I'm not sure why you're casting to String. If you just want a byte with bits equivalent to those of the sum of the four chars, cast directly to byte:
(byte) (prostie4 + prostie3 + prostie2 + prostie1)
If you intended to construct a String from the four chars, you are not currently doing that. Use:
"" + prostie4 + prostie3 + prostie2 + prostie1
and, if the result is in the range of a byte, you can create a byte as you have been.
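A small sketch of the difference between the two expressions (values chosen just for illustration):

char a = '1', b = '0';
System.out.println(a + b + "");                    // 97: the chars are summed numerically (49 + 48)
System.out.println("" + a + b);                    // 10: a leading String forces concatenation
System.out.println(Byte.parseByte("" + a + b, 2)); // 2: "10" parsed as a binary (radix 2) string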
Bytes are signed in Java, which means a byte, which is 8 bits long, has a minimum value of -2^7 (-128) and a max value of 2^7 - 1 (127). Java has no unsigned primitive types apart from char (unsigned, 16-bit).
Therefore 196 is unparseable --> NumberFormatException.
You don't have much to work around this with, except to read into a larger type and mask with & 0xff to obtain the byte:
final int i = Integer.parseInt(theString);
final byte b = (byte) (i & 0xff);
Or do yourself a favour and use Guava, which has UnsignedBytes:
final byte b = UnsignedBytes.parseUnsignedByte(theString);
But it appears that you want to do comparisons anyway; so just use a larger type than byte. And no, this won't waste memory: don't forget about alignment.
As mentioned in the docs
An exception of type NumberFormatException is thrown if any of the following situations occurs:
The first argument is null or is a string of length zero.
The radix is either smaller than Character.MIN_RADIX or larger than Character.MAX_RADIX.
Any character of the string is not a digit of the specified radix, except that the first - character may be a minus sign '-' ('\u002D') provided that the string is longer than length 1.
The value represented by the string is not a value of type byte.
In your case it's the last one, since 196 can't be represented as a byte. The valid range is -128 to 127.
I have a decimal value, like 65, and I want to break this value down into powers of 2.
For example, I have this type of rule:
If I get 42 as a decimal number, I first want to break 42 down into powers of 2. Then I want to output only the exponents, like:
Output: 1,3,5
For example, if I have 65 as a decimal number, then I want 6,0 as its output, because (2 raised to 6) + (2 raised to 0) = 65.
Thanks
Can anybody help me with how I can achieve this in Java?
You can repeatedly compare the least significant bit, counting as you go, and right-shifting the number to look at each bit in turn:
int n = 65;
int d = 0;
while (n > 0) {
    if ((n & 1) == 1) { // check LSB
        System.out.println(d);
    }
    n >>>= 1; // shift right
    ++d;      // inc digit count
}
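For n = 65 this prints 0 and then 6 (the exponents of the set bits, least significant first); for n = 42 it prints 1, 3 and 5.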
Integer.toString(65, 2);
outputs the following:
1000001
Then you work on the String.
This can be improved, but I think it'll do the job.
int n = 42;
String binary = Integer.toBinaryString(n);
for (int i = binary.length() - 1; i >= 0; i--) {
    if (binary.charAt(i) == '1')
        System.out.print((binary.length() - 1 - i) + " "); // the exponent of this set bit
}
Here is the algorithm:
Find the log base 2 of the given number: x = log(2, input).
Take the floor of the result, y = floor(x), so that 2^y is the largest power of 2 that does not exceed the input, and output y.
Calculate diff = input - 2^y and repeat the same steps on diff recursively until diff = 0.
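A minimal sketch of that greedy idea; to sidestep floating-point rounding with log it uses Integer.highestOneBit, which yields the same quantity 2^floor(log2(input)) exactly:

int input = 65;                                 // example input
while (input > 0) {
    int highest = Integer.highestOneBit(input); // largest power of 2 <= input
    System.out.print(Integer.numberOfTrailingZeros(highest) + " "); // its exponent
    input -= highest;                           // repeat on the remainder
}
// prints "6 0"; for input = 42 it would print "5 3 1"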