This is the code I had for a recursion question.
Can anyone run me through how the output is 24?
To show how confused I am, I thought the output would have been 6, 12, 20, 1.
package Examples;

public class QuestionDemo {
    public static void main(String[] args) {
        System.out.println(recCall(2));
    }

    public static int recCall(int num) {
        if (num == 5) {
            return 1;
        } else {
            return num * recCall(++num);
        }
    }
}
You have 4 recursive calls:
the first one when you call recCall(2),
then recCall(3), recCall(4) and recCall(5).
recCall(5) returns 1
recCall(4) returns 4*1
recCall(3) returns 3*4*1
recCall(2) returns 2*3*4*1 = 24
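If you want to watch this unwinding happen, here is a minimal sketch of an instrumented copy of the method (the name recCallTraced is mine; it uses num + 1 instead of ++num, which has the same effect here because the left operand num is evaluated before the increment):

public static int recCallTraced(int num) {
    if (num == 5) {
        System.out.println("recCall(5) -> 1");
        return 1;
    }
    int result = num * recCallTraced(num + 1); // same effect as num * recCall(++num)
    System.out.println("recCall(" + num + ") -> " + result);
    return result;
}

Calling recCallTraced(2) prints recCall(5) -> 1, recCall(4) -> 4, recCall(3) -> 12 and recCall(2) -> 24, in that order: the deepest call finishes first.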
recCall(int num):

recCall(2)
 |    |
 2 * recCall(3)
      |    |
      3 * recCall(4)
           |    |
           4 * recCall(5)
                  |
                  1

recCall(2) = 24
 |    |
 2 * 12 = 24
      |   |
      3 * 4 = 12
           |   |
           4 * 1 = 4
               |
               1
It's because you are using recCall(++num): the pre-increment operator increases the value before the method is called.
Please read how the ++ pre-increment operator works in Java: How do the post increment (i++) and pre increment (++i) operators work in Java?
So your recursive calls will look like this:
f(2) = 2 * f(3) = 2 * 12 = 24
f(3) = 3 * f(4) = 3 * 4 = 12
f(4) = 4 * f(5) = 4 * 1 = 4
f(5) = 1
Hence it returns 24.
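To see why the pre-increment matters, here is a minimal sketch (the names recCallPre and recCallPost are mine, added only for the comparison): with post-increment the old value of num is passed back down, so the recursion never reaches the base case.

public static int recCallPre(int num) {   // original behaviour
    if (num == 5) return 1;
    return num * recCallPre(++num);        // passes 3, 4, 5 down; returns 24
}

public static int recCallPost(int num) {   // hypothetical post-increment variant
    if (num == 5) return 1;
    // num++ yields the OLD value, so recCallPost(2) keeps calling
    // recCallPost(2) until a StackOverflowError is thrown.
    return num * recCallPost(num++);
}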
This recursion takes a bottom-up approach: it first goes all the way down to the base case, and only then can the other recursive calls get their values.
This is why you're getting output = 24:

recCall(4) = 4 * recCall(5) = 4      // recCall(5) is your base case; it returns 1.
recCall(3) = 3 * recCall(4) = 12
-----------------------------------
recCall(2) = 2 * recCall(3) = 24     // This was your first call to the recursive function.

You're not traversing through values, but rather multiplying them together as the calls return back up.
Yes: because you write recCall(++num), the incremented value is passed into the next call, so the result builds up as 2 * 3 * 4, and when num reaches 5 the call returns 1, giving 2 * 3 * 4 * 1 = 24.
Related
Just trying to understand how this code works. Say the integer is 4: I understand that 4 is checked against the base case, and then the method calls itself again, this time with the integer 3, and the same pattern occurs until the integer is 1. My question is: how is the summation part being done? What would be the final result?
public int sum(int num)
{
    int result;
    if (num == 1)
        result = 1;
    else
        result = num + sum(num - 1);
    return result;
}
As I think you realize based on your post, the magic happens here: result = num + sum(num-1); Think of it as a chain of method calls. Logically, the whole process could be represented like this:
sum(4);
evaluates to
4 + sum(3);
which evaluates to
4 + 3 + sum(2);
which evaluates to
4 + 3 + 2 + sum(1);
which evaluates to
4 + 3 + 2 + 1
which is equal to
10
The summation happens as the recursive calls return back up the stack. See here:
sum(4)                          # The initial function call
  |
  |--------------|
  | 4 + sum(4-1) |              # num + recursive call
        |
        |--------------|
        | 3 + sum(3-1) |        # the next num + the next recursive call
              |
              |--------------|
              | 2 + sum(2-1) |
                    |
                    |---|
                    | 1 |       # Base case num == 1
If you populate each recursive sum(...) call with the value below it, what do you get? The sums added up. That's where the addition occurs.
Trace this and find out what the value should be for yourself. Or, run the code.
It happens here:
result = num + sum(num-1);
together with
return result;
Iteration n calls sum() again (triggering iteration n+1). The result of iteration n+1 is returned and added to num, giving the result of iteration n (because of the return statement that follows).
And for the record: I did not include the final solution in my answer, as you can figure that out easily yourself, either by running the code or by using pen and paper to "run" it manually.
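If pen and paper gets tedious, here is a minimal sketch of an instrumented copy of the method (the name sumTraced is mine) that prints each return value as the stack unwinds:

public int sumTraced(int num) {
    int result;
    if (num == 1)
        result = 1;
    else
        result = num + sumTraced(num - 1);
    System.out.println("sum(" + num + ") returns " + result);
    return result;
}

Calling sumTraced(4) prints sum(1) returns 1, sum(2) returns 3, sum(3) returns 6 and finally sum(4) returns 10, matching the hand trace in the answer above.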
Recently, I came across a requirement in one of my project groups where I need to create sets from different types of variables. Here is an example:
I have a list of (let's say) strings where every string is associated with a type. So a list has: [var11: Type1, var12: Type1, var21: Type2, var31: Type3, var32: Type3, var33: Type3].
Now, I want to create a function:
public List<String> getSetsWithTypes(String[] types) {
    // Iterate through types and create sets
}
So, if I call the function with:
1. types = {"Type1", "Type2"}, it must return:
["var11:var21", "var12:var21"]
2. types = {"Type1", "Type3"}, it must return:
["var11:var31", "var11:var32", "var11:var33", "var12:var31", "var12:var32", "var12:var33"]
3. types = {"Type1", "Type2", "Type3"}, it must return:
["var11:var21:var31", "var11:var21:var32", "var11:var21:var33", "var12:var21:var31"...and so on]
Both the types and the number of variables are dynamic in nature.
Any help is appreciated. Thanks in advance.
So, I found a way to resolve this. Here is the explanation:
For example, we have:
Types          : Type1 | Type2 | Type3 | Type4 |
Number of vars :   2   |   3   |   1   |   2   |
Toggle rate    :   6   |   2   |   2   |   1   |
Toggle rate T(i) = N(i+1) * N(i+2) * ... * N(n), in our case n = 4 and N(i) = number of vars of Type i.
/* Assumption: each type has its own column, so a variable index
   in column 4 will always be for Type 4.
*/
int currentColumn = numberOfTypes.length;
while (--currentColumn > -1) {
    // For each currentColumn calculate the combination.
    // numberOfVariablesAtCurrentCol = typeSize[currentColumn]
    // Only needed if there is more than 1 variable of this type.
    if (typeSize[currentColumn] > 1) {
        // Toggle rate is the number of rows after which the var index changes for this column.
        final int toggleRate = findToggleRate(params, currentColumn);
        int currentTogglePos = 1, varIndex = 0;
        for (int currentRow = 0; currentRow < rows; currentRow++) {
            variableCombination[currentRow][currentColumn] = varIndex;
            // Reset currentTogglePos, if needed
            if (++currentTogglePos > toggleRate) {
                currentTogglePos = 1;
                // Reset varIndex, if required, only at a toggle boundary
                if (++varIndex >= typeSize[currentColumn])
                    varIndex = 0;
            }
        }
    }
}
So, what I finally get is:
Types          : Type1 | Type2 | Type3 | Type4 |
Number of vars :   2   |   3   |   1   |   2   |
Toggle rate    :   6   |   2   |   2   |   1   |
---------------------------------------------------
VariableIndex (every column gives its type):
0 0 0 0
0 0 0 1
0 1 0 0
0 1 0 1
0 2 0 0
0 2 0 1
1 0 0 0
1 0 0 1
1 1 0 0
1 1 0 1
1 2 0 0
1 2 0 1
Well, this is the only way I could think of. Please suggest if there is a more efficient way.
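For comparison, here is a minimal recursive sketch of the same cartesian product. The Map layout, the class name and the helper method build are my own assumptions, not part of the original code:

import java.util.*;

public class TypeCombinations {
    // varsByType maps each type name to its variables, e.g. "Type1" -> ["var11", "var12"].
    static Map<String, List<String>> varsByType = new HashMap<>();

    public static List<String> getSetsWithTypes(String[] types) {
        List<String> result = new ArrayList<>();
        build(types, 0, "", result);
        return result;
    }

    // Appends one variable of types[index] to the prefix, then recurses on the next type.
    private static void build(String[] types, int index, String prefix, List<String> result) {
        if (index == types.length) {
            result.add(prefix);
            return;
        }
        for (String v : varsByType.get(types[index])) {
            String next = prefix.isEmpty() ? v : prefix + ":" + v;
            build(types, index + 1, next, result);
        }
    }

    public static void main(String[] args) {
        varsByType.put("Type1", Arrays.asList("var11", "var12"));
        varsByType.put("Type2", Arrays.asList("var21"));
        varsByType.put("Type3", Arrays.asList("var31", "var32", "var33"));
        System.out.println(getSetsWithTypes(new String[]{"Type1", "Type3"}));
        // [var11:var31, var11:var32, var11:var33, var12:var31, var12:var32, var12:var33]
    }
}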
I want to convert an int array to a hex string, but I am unsure if I am doing this correctly.
I create an int[] in another class and get it via msg.obj. I am getting some values in hex but am unsure if they are correct.
int[] readBuf = (int[]) msg.obj; // int array comes from another class

StringBuffer output = new StringBuffer();
for (int a : readBuf) {
    int val1 = a & 0xff;
    output.append(Integer.toHexString(val1));
}
dataView.setText(output);
Assuming I understand your intention, there are two problems with the code:
int val1 = a & 0xff;
You're taking only the last byte of your int. If you want to convert the whole integer, remove the &0xff.
You want to make sure that the output of Integer.toHexString is always padded with zeroes in front so that its length is always 8 characters (since every byte of the 4-byte int requires 2 characters). Otherwise both the array {1,2,3} and the array {291} will give you the same string: 123.
Here's a quick and dirty working code example:
public static String byteToUnsignedHex(int i) {
    String hex = Integer.toHexString(i);
    while (hex.length() < 8) {
        hex = "0" + hex;
    }
    return hex;
}

public static String intArrToHex(int[] arr) {
    StringBuilder builder = new StringBuilder(arr.length * 8);
    for (int b : arr) {
        builder.append(byteToUnsignedHex(b));
    }
    return builder.toString();
}

public static void main(String[] args) {
    System.out.println(intArrToHex(new int[]{1, 2, 3}));
    System.out.println(intArrToHex(new int[]{291}));
    System.out.println(intArrToHex(new int[]{0xFFFFFFFF}));
}
Output:
000000010000000200000003
00000123
ffffffff
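As a side note, the manual padding loop above could likely be replaced with String.format, which does the zero-padding for you; a minimal sketch, equivalent in output to byteToUnsignedHex above:

public static String byteToUnsignedHex(int i) {
    // %08x formats the int as 8 lowercase hex digits, left-padded with zeros.
    return String.format("%08x", i);
}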
@Malt's answer definitely highlights the problem with your code: it doesn't 0-pad the int hex values, and it masks the int to take only the last 8 bits using a & 0xff. Your original question implies you are only after the last byte in each int, but it really isn't clear.
You say you get results every second from your remote object. On a slow machine with large arrays it could take a significant number of milliseconds to convert a long int[] to a hex string using your method (or rather Malt's corrected version of it).
A much faster method would be to get each 4-bit nibble from each int using bit shifting, and get the appropriate hex character from a static hex lookup array (note this does base-16 encoding; you would get shorter strings from something like base-64 encoding):
public class AltConverter {
    final protected static char[] encoding = "0123456789ABCDEF".toCharArray();

    public String convertToString(int[] arr) {
        char[] encodedChars = new char[arr.length * 4 * 2];
        for (int i = 0; i < arr.length; i++) {
            int v = arr[i];
            int idx = i * 4 * 2;
            for (int j = 0; j < 8; j++) {
                encodedChars[idx + j] = encoding[(v >>> ((7 - j) * 4)) & 0x0F];
            }
        }
        return new String(encodedChars);
    }
}
Testing this vs your original method using caliper (microbenchmark results here) initially showed this to be around 11x faster † (caveat: on my machine), even for a single-element array; as explained below, the real figure is closer to 2x to 4x. EDIT: For anyone interested in running this and comparing the results, there is a gist here with the source code.
The original microbenchmark used Caliper, as I happened to be trying it out at the time. I have since rewritten it to use JMH. While doing so I found that the results I linked to and copied here originally used an array that was only ever filled with 0 for each int element. This caused the JVM to optimise the AltConverter code for arrays with length > 1, yielding artificial 10x to 11x improvements of AltConverter over SimpleConverter. JMH and Caliper produce very similar results for both the flawed and the corrected benchmark. (Updated benchmark project for maven eclipse here.)
This is around 2x to 4x faster depending on array length (on my machine™). The mean runtime results (in ns) are:
Average run times in nanoseconds
Original method: SimpleConverter
New method: AltConverter
| N          | Alt / ns    | Error / ns | Simple / ns | Error / ns | Speed up |
| ---------: | ----------: | ---------: | ----------: | ---------: | -------: |
| 1          | 30          | 1          | 61          | 2          | 2.0x     |
| 100        | 852         | 19         | 3,724       | 99         | 4.4x     |
| 1,000      | 7,517       | 200        | 36,484      | 879        | 4.9x     |
| 10,000     | 82,641      | 1,416      | 360,670     | 5,728      | 4.4x     |
| 100,000    | 1,014,612   | 241,089    | 4,006,940   | 91,870     | 3.9x     |
| 1,000,000  | 9,929,510   | 174,006    | 41,077,214  | 1,181,322  | 4.1x     |
| 10,000,000 | 182,698,229 | 16,571,654 | 432,730,259 | 13,310,797 | 2.4x     |
† Disclaimer: Micro-benchmarking is dangerous to rely on as an indication of performance in a real-world app, but caliper is a good benchmarking framework, and jmh is, imho, better. A performance difference of 4x, with a very small standard deviation (and, in caliper, a good t-test result), is enough to indicate a good performance increase even inside a more complex application.
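For reference, here is a minimal sketch of how such a JMH benchmark might be set up (the class, field and method names are my own; the real benchmark project is the one linked above):

import java.util.Random;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Param;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

@State(Scope.Benchmark)
public class ConverterBenchmark {
    @Param({"1", "100", "1000"})
    int n;

    int[] data;

    @Setup
    public void fill() {
        // Random, non-zero data so the JIT cannot specialise on an all-zero array.
        data = new Random(42).ints(n).toArray();
    }

    @Benchmark
    public String alt() {
        return new AltConverter().convertToString(data);
    }
}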
I don't understand how this works. I know basic factorial recursion, but this is of a mixed type. Could someone explain step by step what's going on with the output based on this exact code snippet? Even just a few of the first values (commented at the end of the code)?
Thanks :)
public class recursion1st
{
    public static String recFun(int x)
    {
        if (x <= 0) return "/";
        return recFun(x-3) + x + recFun(x-2) + x;
    }
    public static void main(String[] args)
    {
        System.out.println(recFun(8));
    }
}
// Produces '/2/25/3/1/1358/3/1/136/1/14/2/2468' (?)
3| A function taking an int argument.
5| This is your base case: return "/" if x <= 0.
6| This is your recursion: call recFun(x-3) + x /* this is your current value */ + recFun(x-2) + x. Basically this is going to go on until your x reaches 0 or less. Note that the right recursive call will go through more recursions before it hits the base case, because it passes x-2 every time instead of x-3. Also realize this statement calls recursively twice.
Make a chart on a big piece of paper and map out passing 8 in:
8
root
return recFun(5) + 8 + recFun(6) + 8
1st branch
return recFun(2) + 5 + recFun(3) + 5
2nd branch
return recFun(3) + 6 + recFun(4) + 6
.......................
Keep going it will become clear.
In the main function, it is calling recFun() to print the data. In that function, which is recursive, the first if condition is to break out of recursion, when that condition is met (when x is negative or equal to zero).
Else it will return a string, in turn again calling itself.
Here, in the first return, it calls recFun(5) and recFun(6), and likewise, if you analyse it step by step, the cycle continues until the break condition is met. Basically the return statement is a concatenation of strings, and it separates the data by placing "/" on breaking out of the recursion.
EDIT:
Value returned by return statement is
rec(5)+8+rec(6)+8......(1)
where
rec(5) returns rec(2)+5+rec(3)+5......(2)
and
rec(6) returns rec(3)+6+rec(4)+6.......(3)
where
rec(3) returns "/"+3+rec(1)+3.......(4)
and
rec(4) returns rec(1)+4+rec(2)+4.....(5)
and
rec(2) returns "/" +2 +"/"+2.....(6)
and
rec(1) returns "/"+1+"/"+1 .......(7)
Substitute (7) and (6) in (5)
Substitute (7) in (4)
Substitute (4) and (5) in (3)
Substitute (4) and (6) in (2)
and substitute (2) and (3) in (1)
You will need a sheet for this. Try and work it out using this. Hope it helps. :)
take a paper and write down step by step what the function does:
recFun(5)
    |
recFun(2) + 5 + recFun(3) + 5
    |                  |
recFun(-1)+2+recFun(0)+2     recFun(0)+3+recFun(1)+3
   |          |                 |         |
  "/"        "/"               "/"   recFun(-2)+1+recFun(-1)+1
                                          |           |
                                         "/"         "/"
1 call recFun with 5
2 5 leads to recFun with 2 & recFun with 3
3 2 leads to recFun with -1 & recFun with 0 , 3 leads to recFun with 0 & recFun with 1
4 -1 leads to "/", 0 leads to "/", 0 leads to "/", 1 leads to recFun with -2 & recFun(-1)
5 -2 leads to "/", -1 leads to "/"
Until every single call is finished executing (until x <= 0 and a "/" is returned), no call made before will return anything. The function starts returning Strings from the end of that chain of calls (in this pyramid: bottom to top and from right to left):
/ 1 / 1
/ 2 / 2 / 3 3
5 5
-> / 2 / 2 5 / 3 / 1 / 1 3 5
I know, recursion is confusing. The only thing that helps me is: first make every last call of the recursive function and only when the last call returns anything then roll back. Sorry if this isn't any more precise :)
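If it helps, here is a minimal sketch of an instrumented copy of the method (the name recFunTraced and the indent parameter are mine) that prints every call and its return value with nesting:

public static String recFunTraced(int x, String indent) {
    System.out.println(indent + "recFun(" + x + ")");
    String result;
    if (x <= 0) {
        result = "/";
    } else {
        result = recFunTraced(x - 3, indent + "  ") + x
               + recFunTraced(x - 2, indent + "  ") + x;
    }
    System.out.println(indent + "recFun(" + x + ") returns \"" + result + "\"");
    return result;
}

Calling System.out.println(recFunTraced(8, "")) prints the whole call tree and then the final string, so you can see exactly which call produces which piece of the output.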
I discovered this oddity:
for (long l = 4946144450195624l; l > 0; l >>= 5)
System.out.print((char) (((l & 31 | 64) % 95) + 32));
Output:
hello world
How does this work?
The number 4946144450195624 fits 64 bits, and its binary representation is:
10001100100100111110111111110111101100011000010101000
The program decodes a character for every 5-bits group, from right to left
00100|01100|10010|01111|10111|11111|01111|01100|01100|00101|01000
d | l | r | o | w | | o | l | l | e | h
5-bit codification
For 5 bits, it is possible to represent 2⁵ = 32 characters. The English alphabet contains 26 letters, and this leaves room for 32 - 26 = 6 symbols
apart from letters. With this codification scheme, you can have all 26 (one case) English letters and 6 symbols (space being among them).
Algorithm description
The >>= 5 in the for loop jumps from group to group, and then the 5-bit group is isolated by ANDing the number with the mask 31₁₀ = 11111₂ in the expression l & 31.
Now the code maps the 5-bit value to its corresponding 7-bit ASCII character. This is the tricky part. Check the binary representations for the lowercase
alphabet letters in the following table:
ASCII | ASCII | ASCII | Algorithm
character | decimal value | binary value | 5-bit codification
--------------------------------------------------------------
space | 32 | 0100000 | 11111
a | 97 | 1100001 | 00001
b | 98 | 1100010 | 00010
c | 99 | 1100011 | 00011
d | 100 | 1100100 | 00100
e | 101 | 1100101 | 00101
f | 102 | 1100110 | 00110
g | 103 | 1100111 | 00111
h | 104 | 1101000 | 01000
i | 105 | 1101001 | 01001
j | 106 | 1101010 | 01010
k | 107 | 1101011 | 01011
l | 108 | 1101100 | 01100
m | 109 | 1101101 | 01101
n | 110 | 1101110 | 01110
o | 111 | 1101111 | 01111
p | 112 | 1110000 | 10000
q | 113 | 1110001 | 10001
r | 114 | 1110010 | 10010
s | 115 | 1110011 | 10011
t | 116 | 1110100 | 10100
u | 117 | 1110101 | 10101
v | 118 | 1110110 | 10110
w | 119 | 1110111 | 10111
x | 120 | 1111000 | 11000
y | 121 | 1111001 | 11001
z | 122 | 1111010 | 11010
Here you can see that the ASCII characters, we want to map, begin with the 7th and 6th bit set (11xxxxx₂) (except for space, which only has the 6th bit on). You could OR the 5-bit
codification with 96 (96₁₀ = 1100000₂) and that should be enough to do the mapping, but that wouldn't work for space (darn space!).
Now we know that special care has to be taken to process the space at the same time as the other characters. To achieve this, the code turns the 7th bit on (but not the 6th) on the extracted 5-bit group with an OR against 64 (64₁₀ = 1000000₂): l & 31 | 64.
So far the 5-bit group is of the form: 10xxxxx₂ (space would be 1011111₂ = 95₁₀).
If we can map space to 0 unaffecting other values, then we can turn the 6th bit on and that should be all.
Here is where the mod 95 part comes into play. Space is 1011111₂ = 95₁₀, so with the modulus operation (l & 31 | 64) % 95, only the space wraps back to 0. After this, the code turns the 6th bit on by adding 32₁₀ = 100000₂ to the previous result, ((l & 31 | 64) % 95) + 32, transforming the 5-bit value into a valid ASCII character.
   isolates 5 bits -+         +---- takes 'space' (and only 'space') back to 0
                    |         |
                    v         v
              ((l & 31 | 64) % 95) + 32
                       ^            ^
                       |            |
 turns the 7th bit on -+            +-- turns the 6th bit on
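Putting the pieces together, here is an expanded sketch of the original one-liner with each intermediate step named (the class and variable names are mine):

public class Decoder {
    public static void main(String[] args) {
        for (long l = 4946144450195624L; l > 0; l >>= 5) {
            long group = l & 31;           // isolate the lowest 5-bit group
            long with7thBit = group | 64;  // turn the 7th bit on -> 10xxxxx
            long mapped = with7thBit % 95; // only 1011111 (space) wraps to 0
            char c = (char) (mapped + 32); // add 32 to land in printable ASCII
            System.out.print(c);           // prints "hello world"
        }
        System.out.println();
    }
}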
The following code does the inverse process, given a lowercase string (maximum 12 characters), returns the 64-bit long value that could be used with the OP's code:
public class D {
    public static void main(String... args) {
        String v = "hello test";
        int len = Math.min(12, v.length());
        long res = 0L;
        for (int i = 0; i < len; i++) {
            long c = (long) v.charAt(i) & 31;
            res |= ((((31 - c) / 31) * 31) | c) << 5 * i;
        }
        System.out.println(res);
    }
}
The following Groovy script prints intermediate values.
String getBits(long l) {
    return Long.toBinaryString(l).padLeft(8, '0');
}

for (long l = 4946144450195624l; l > 0; l >>= 5) {
    println ''
    print String.valueOf(l).toString().padLeft(16, '0')
    print '|' + getBits((l & 31))
    print '|' + getBits(((l & 31 | 64)))
    print '|' + getBits(((l & 31 | 64) % 95))
    print '|' + getBits(((l & 31 | 64) % 95 + 32))
    print '|';
    System.out.print((char) (((l & 31 | 64) % 95) + 32));
}
Here it is:
4946144450195624|00001000|01001000|01001000|01101000|h
0154567014068613|00000101|01000101|01000101|01100101|e
0004830219189644|00001100|01001100|01001100|01101100|l
0000150944349676|00001100|01001100|01001100|01101100|l
0000004717010927|00001111|01001111|01001111|01101111|o
0000000147406591|00011111|01011111|00000000|00100000|
0000000004606455|00010111|01010111|01010111|01110111|w
0000000000143951|00001111|01001111|01001111|01101111|o
0000000000004498|00010010|01010010|01010010|01110010|r
0000000000000140|00001100|01001100|01001100|01101100|l
0000000000000004|00000100|01000100|01000100|01100100|d
Interesting!
Standard visible ASCII characters are in the range of 32 to 127.
That's why you see 32 and 95 (127 - 32) there.
In fact, each character is mapped to 5 bits here (you can work out the 5-bit combination for each character), and then all the bits are concatenated to form a large number.
Positive longs are 63-bit numbers, large enough to hold the encoded form of 12 characters. So it is large enough to hold hello world, but for longer texts you would have to use larger numbers, or even a BigInteger.
In an application we wanted to transfer visible English characters, Persian characters and symbols via SMS. As you can see, there are 32 (number of Persian characters) + 95 (number of English characters and standard visible symbols) = 127 possible values, which can be represented with 7 bits.
We converted each UTF-8 (16-bit) character to 7 bits, and gained more than a 56% compression ratio. So we could send texts twice the length in the same number of SMSes. (Somehow, the same thing happened here.)
You are getting a result which happens to be the char representation of the values below:
104 -> h
101 -> e
108 -> l
108 -> l
111 -> o
32 -> (space)
119 -> w
111 -> o
114 -> r
108 -> l
100 -> d
You've encoded characters as 5-bit values and packed 11 of them into a 64 bit long.
(packedValues >> 5*i) & 31 is the i-th encoded value with a range 0-31.
The hard part, as you say, is encoding the space. The lowercase English letters occupy the contiguous range 97-122 in Unicode (and ASCII, and most other encodings), but the space is 32.
To overcome this, you used some arithmetic. ((x+64)%95)+32 is almost the same as x + 96 (note how bitwise OR is equivalent to addition, in this case), but when x=31, we get 32.
It prints "hello world" for a similar reason this does:
for (int k=1587463874; k>0; k>>=3)
System.out.print((char) (100 + Math.pow(2,2*(((k&7^1)-1)>>3 + 1) + (k&7&3)) + 10*((k&7)>>2) + (((k&7)-7)>>3) + 1 - ((-(k&7^5)>>3) + 1)*80));
But for a somewhat different reason than this:
for (int k=2011378; k>0; k>>=2)
System.out.print((char) (110 + Math.pow(2,2*(((k^1)-1)>>21 + 1) + (k&3)) - ((k&8192)/8192 + 7.9*(-(k^1964)>>21) - .1*(-((k&35)^35)>>21) + .3*(-((k&120)^120)>>21) + (-((k|7)^7)>>21) + 9.1)*10));
I mostly work with Oracle databases, so I would use some Oracle knowledge to interpret and explain :-)
Let's convert the number 4946144450195624 into binary. For that I use a small function called dec2bin, i.e., decimal-to-binary.
SQL> CREATE OR REPLACE FUNCTION dec2bin (N in number) RETURN varchar2 IS
2 binval varchar2(64);
3 N2 number := N;
4 BEGIN
5 while ( N2 > 0 ) loop
6 binval := mod(N2, 2) || binval;
7 N2 := trunc( N2 / 2 );
8 end loop;
9 return binval;
10 END dec2bin;
11 /
Function created.
SQL> show errors
No errors.
SQL>
Let's use the function to get the binary value -
SQL> SELECT dec2bin(4946144450195624) FROM dual;
DEC2BIN(4946144450195624)
--------------------------------------------------------------------------------
10001100100100111110111111110111101100011000010101000
SQL>
Now the catch is the 5-bit conversion. Start grouping from right to left with 5 digits in each group. We get:
100|01100|10010|01111|10111|11111|01111|01100|01100|00101|01000
We are finally left with just 3 digits in the last (leftmost) group, because we had a total of 53 digits in the binary representation.
SQL> SELECT LENGTH(dec2bin(4946144450195624)) FROM dual;
LENGTH(DEC2BIN(4946144450195624))
---------------------------------
53
SQL>
hello world has a total of 11 characters (including the space), so we need 11 × 5 = 55 bits; we therefore pad the leftmost group, which was left with just three bits after grouping, with two leading zero bits.
So, now we have:
00100|01100|10010|01111|10111|11111|01111|01100|01100|00101|01000
Now, we need to convert each group to a 7-bit ASCII value. For the characters it is easy: we just need to set the 6th and 7th bits, i.e., prepend 11 to each 5-bit group above.
That gives:
1100100|1101100|1110010|1101111|1110111|1111111|1101111|1101100|1101100|1100101|1101000
Let's interpret the binary values. I will use the binary to decimal conversion function.
SQL> CREATE OR REPLACE FUNCTION bin2dec (binval in char) RETURN number IS
2 i number;
3 digits number;
4 result number := 0;
5 current_digit char(1);
6 current_digit_dec number;
7 BEGIN
8 digits := length(binval);
9 for i in 1..digits loop
10 current_digit := SUBSTR(binval, i, 1);
11 current_digit_dec := to_number(current_digit);
12 result := (result * 2) + current_digit_dec;
13 end loop;
14 return result;
15 END bin2dec;
16 /
Function created.
SQL> show errors;
No errors.
SQL>
Let's look at each binary value -
SQL> set linesize 1000
SQL>
SQL> SELECT bin2dec('1100100') val,
2 bin2dec('1101100') val,
3 bin2dec('1110010') val,
4 bin2dec('1101111') val,
5 bin2dec('1110111') val,
6 bin2dec('1111111') val,
7 bin2dec('1101111') val,
8 bin2dec('1101100') val,
9 bin2dec('1101100') val,
10 bin2dec('1100101') val,
11 bin2dec('1101000') val
12 FROM dual;
VAL VAL VAL VAL VAL VAL VAL VAL VAL VAL VAL
---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------
100 108 114 111 119 127 111 108 108 101 104
SQL>
Let's look at what characters they are:
SQL> SELECT chr(bin2dec('1100100')) character,
2 chr(bin2dec('1101100')) character,
3 chr(bin2dec('1110010')) character,
4 chr(bin2dec('1101111')) character,
5 chr(bin2dec('1110111')) character,
6 chr(bin2dec('1111111')) character,
7 chr(bin2dec('1101111')) character,
8 chr(bin2dec('1101100')) character,
9 chr(bin2dec('1101100')) character,
10 chr(bin2dec('1100101')) character,
11 chr(bin2dec('1101000')) character
12 FROM dual;
CHARACTER CHARACTER CHARACTER CHARACTER CHARACTER CHARACTER CHARACTER CHARACTER CHARACTER CHARACTER CHARACTER
--------- --------- --------- --------- --------- --------- --------- --------- --------- --------- ---------
d l r o w ⌂ o l l e h
SQL>
So, what do we get in the output?
d l r o w ⌂ o l l e h
That is hello⌂world in reverse. The only issue is the space, and the reason is well explained by @higuaro in his answer. I honestly couldn't work out the space issue myself at first, until I saw the explanation given in his answer.
I found the code slightly easier to understand when translated into PHP, as follows:
<?php
$result = 0;
$bignum = 4946144450195624;
for (; $bignum > 0; $bignum >>= 5) {
    $result = (($bignum & 31 | 64) % 95) + 32;
    echo chr($result);
}
See live code
Use
out.println((char) (((l & 31 | 64) % 95) + 32 / 1002439 * 1002439));
to make it capitalised. With integer arithmetic, 32 / 1002439 * 1002439 evaluates to 0, so the +32 offset disappears and each letter keeps its 10xxxxx pattern, which is its uppercase ASCII code (the space becomes a non-printing NUL).