I'm working on a program that is an implementation of the RSA encryption algorithm, just as a personal exercise; it's not guarding anyone's information or anything. I am trying to understand how a plaintext passage is interpreted numerically, allowing it to be encrypted. I understand that most UTF-8 characters end up using only 1 byte of space, and not the 2 bytes one might think, but that's about it. Here's my code:
BigInteger ONE = new BigInteger("1");
SecureRandom rand = new SecureRandom();
BigInteger d, e, n;
BigInteger p = BigInteger.probablePrime(128, rand);
BigInteger q = BigInteger.probablePrime(128, rand);
BigInteger phi = (p.subtract(ONE)).multiply(q.subtract(ONE));
n = p.multiply(q);
e = new BigInteger("65537");
d = e.modInverse(phi);
String string = "test";
BigInteger plainText = new BigInteger(string.getBytes("UTF-8"));
BigInteger cipherText = plainText.modPow(e, n);
BigInteger originalMessage = cipherText.modPow(d, n);
String decrypted = new String(originalMessage.toByteArray(),"UTF-8");
System.out.println("original: " + string);
System.out.println("decrypted: " + decrypted);
System.out.println(plainText);
System.out.println(cipherText);
System.out.println(originalMessage);
System.out.println(string.getBytes("UTF-8"));
byte[] byteArray = string.getBytes("UTF-8");
for (byte littleByte : byteArray) {
    System.out.println(littleByte);
}
It outputs:
original: test
decrypted: test
1952805748
16521882695662254558772281277528769227027759103787217998376216650996467552436
1952805748
[B@60d70b42
116
101
115
116
Maybe more specifically, I am wondering about this line:
BigInteger plainText = new BigInteger(string.getBytes("UTF-8"));
Does each letter of "test" have a value, and are those values literally added together here? Say t=1, e=2, s=3, t=1 for example: if you get the bytes from that string, do you end up with 7, or are the values just put together like 1231? And why does
BigInteger plainText = new BigInteger(string.getBytes("UTF-8")); output 1952805748?
I am trying to understand how a plaintext passage is being interpreted numerically, allowing it to be encrypted.
It really boils down to understanding what this line does:
BigInteger plainText = new BigInteger(string.getBytes("UTF-8"));
Lets break it down.
We start with a String (string). A Java string is a sequence of characters represented as Unicode code points (encoded internally in UTF-16 ...).
The getBytes("UTF-8") then encodes the characters as a sequence of bytes, and returns them in a newly allocated byte array.
The BigInteger(byte[]) constructor interprets that byte array as a number. As the javadoc says:
Translates a byte array containing the two's-complement binary representation of a BigInteger into a BigInteger. The input array is
assumed to be in big-endian byte-order: the most significant byte is
in the zeroth element.
The method being used here does not give an intrinsically meaningful number, just one that corresponds to the byte-encoded string. Going from the byte array to the number simply treats the bytes as a bit sequence that represents an integer in two's-complement form ... which is the most common representation for integers on modern hardware.
The key thing is that the transformation from the text to the (unencrypted) BigInteger is lossless and reversible. Any other transformation with those properties could be used.
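For example, here is a minimal round-trip sketch (an illustration only, assuming UTF-8 via the StandardCharsets constant rather than the checked-exception overload used in the question):

import java.math.BigInteger;
import java.nio.charset.StandardCharsets;

public class RoundTrip {
    public static void main(String[] args) {
        byte[] bytes = "test".getBytes(StandardCharsets.UTF_8);  // [116, 101, 115, 116]
        BigInteger asNumber = new BigInteger(bytes);             // 1952805748
        byte[] back = asNumber.toByteArray();                    // the same four bytes
        System.out.println(asNumber);                                  // 1952805748
        System.out.println(new String(back, StandardCharsets.UTF_8));  // test
        // Caveat: if the first byte were 0x80 or higher, toByteArray() would prepend
        // a 0x00 sign byte that would have to be stripped before decoding.
    }
}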
References:
The Wikipedia page on 2's Complement representation
The Wikipedia page on the UTF-8 text encoding scheme
javadoc BigInteger(byte[])
javadoc String.getBytes(String)
I'm still not quite understanding how the UTF-8 values for each character in "test" (116, 101, 115, 116 respectively) come together to form 1952805748.
Convert the numbers 116,101,115,116 to hex.
Convert the number 1952805748 to hex
Compare them
See the pattern?
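For instance, a quick check (values taken from the output above):

System.out.println(Integer.toHexString(116));        // 74
System.out.println(Integer.toHexString(101));        // 65
System.out.println(Integer.toHexString(115));        // 73
System.out.println(Integer.toHexString(1952805748)); // 74657374

The four byte values simply become the four bytes of the 32-bit number.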
The answer is in the output: "test" is encoded into an array of 4 bytes, [116, 101, 115, 116]. This is then interpreted by BigInteger as a binary integer representation. The value can be calculated this way:
value = (116 << 24) + (101 << 16) + (115 << 8) + 116;
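You can verify that formula directly:

int value = (116 << 24) + (101 << 16) + (115 << 8) + 116;
System.out.println(value); // 1952805748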
Related
I am attempting to convert some BigInteger objects and math from Java to C#.
The Java flow is as follows:
1. Construct 2 BigIntegers from a base-10 string (i.e. 0-9 values).
2. Construct a third BigInteger from an inputted byte array.
3. Create a fourth BigInteger as third.modPow(first, second).
4. Return the byte result of fourth.
The main complications in converting to C# seem to consist of endianness and signed/unsigned values.
I have tried a couple different ways to convert the initial 2 BigIntegers from Java->C#. I believe that using the base-10 string with BigInteger.Parse will work as intended, but I am not completely sure.
Another complication comes from the use of a BinaryReader/BinaryWriter implementation in C# that is already big-endian (like Java). I use the BinaryReader/BinaryWriter to supply the byte array that creates the third BigInteger and to consume the byte array produced by the modPow (the fourth BigInteger).
I have tried reversing the byte arrays for input and output in every way, and still do not get the expected output.
Java:
public static byte[] doMath(byte[] input)
{
    BigInteger exponent = new BigInteger("BASE-10-STRING");
    BigInteger mod = new BigInteger("BASE-10-STRING");
    BigInteger bigInput = new BigInteger(input);
    return bigInput.modPow(exponent, mod).toByteArray();
}
C#:
public static byte[] CSharpDoMath(byte[] input)
{
    BigInteger exponent = BigInteger.Parse("BASE-10-STRING");
    BigInteger mod = BigInteger.Parse("BASE-10-STRING");

    // big -> little endian
    byte[] reversedBytes = input.Reverse().ToArray();
    BigInteger bigInput = new BigInteger(reversedBytes);
    BigInteger output = BigInteger.ModPow(bigInput, exponent, mod);

    // little -> big endian
    byte[] bigOutput = output.ToByteArray().Reverse().ToArray();
    return bigOutput;
}
I need the same output from both.
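One way to narrow a mismatch like this down (a debugging suggestion only, using made-up small parameters in place of the real BASE-10-STRING values) is to run both implementations on a tiny input whose intermediate values can be checked by hand, e.g. on the Java side:

import java.math.BigInteger;

// Hypothetical small parameters, chosen only so the result is easy to verify by hand.
BigInteger exponent = new BigInteger("7");
BigInteger mod = new BigInteger("33");
byte[] input = { 0x04 };

BigInteger bigInput = new BigInteger(input);        // 4
BigInteger result = bigInput.modPow(exponent, mod); // 4^7 mod 33 = 16
System.out.println(result);                         // 16
System.out.println(result.toByteArray().length);    // 1 (a single 0x10 byte)

If the C# port already diverges for an input like this, the problem is in the byte-array handling rather than in modPow itself.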
I was expecting the two constructors in the BigInteger class, BigInteger(String) and BigInteger(byte[]), to behave similarly but they don't.
Why are the two BigIntegers not equal? How can I create a BigInteger from the byte array?
String hex = "94B4";
byte[] b = DatatypeConverter.parseHexBinary(hex); // -108, -76
BigInteger b1 = new BigInteger(hex, 16); //38068
BigInteger b2 = new BigInteger(b); //-27468
Looks like the byte[] constructor treats input as regular 2's complement data, whereas the hex constructor treats it as, well, a hex string.
Using the new BigInteger(int signum, byte[] magnitude) allows you to force the value to be positive, so new BigInteger(1, b) will be 38068.
It will come as no surprise that 38068 + 27468 is 65536.
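Putting the three constructions side by side (same values as in the question):

import java.math.BigInteger;
import javax.xml.bind.DatatypeConverter;

byte[] b = DatatypeConverter.parseHexBinary("94B4"); // two bytes: -108, -76
System.out.println(new BigInteger("94B4", 16));      // 38068  (hex string, positive)
System.out.println(new BigInteger(b));               // -27468 (two's-complement bytes)
System.out.println(new BigInteger(1, b));            // 38068  (signum 1 forces a positive magnitude)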
Remember that a java.lang.String is an array of characters, and a char in Java is an unsigned 16-bit type, so the hex-string constructor parses "94B4" as a positive number.
BigInteger b2 = new BigInteger(b); circumvents that: it interprets the two bytes as a 16-bit two's-complement (signed) value.
Hence the difference.
This is going to be a long question, but I have a really weird bug. I use OpenSSL in C++ to compute an HMAC and compare it to a similar implementation using javax.crypto.Mac. For some keys the HMAC calculation is correct, and for others there is a difference in the HMAC. I believe the problem occurs when the keys get too big. Here are the details.
Here is the most important code for C++:
void computeHMAC(std::string message, std::string key){
    unsigned int digestLength = 20;
    HMAC_CTX hmac_ctx_;
    BIGNUM* key_ = BN_new();
    BN_hex2bn(&key_, key.c_str());
    unsigned char convertedKey[BN_num_bytes(key_)];
    BIGNUM* bn = BN_new();
    HMAC_CTX_init(&hmac_ctx_);
    BN_bn2bin(bn, convertedKey);
    int length = BN_bn2bin(key_, convertedKey);
    HMAC_Init_ex(&hmac_ctx_, convertedKey, length, EVP_sha1(), NULL);

    /* Calc HMAC */
    std::transform(message.begin(), message.end(), message.begin(), ::tolower);
    unsigned char digest[digestLength];
    HMAC_Update(&hmac_ctx_, reinterpret_cast<const unsigned char*>(message.c_str()),
                message.length());
    HMAC_Final(&hmac_ctx_, digest, &digestLength);

    char mdString[41]; // 40 hex characters plus the terminating NUL
    for(unsigned int i = 0; i < 20; ++i){
        sprintf(&mdString[i*2], "%02x", (unsigned int)digest[i]);
    }

    std::cout << "\n\nMSG:\n" << message << "\nKEY:\n" + std::string(BN_bn2hex(key_)) + "\nHMAC\n" + std::string(mdString) + "\n\n";
}
The java test looks like this:
public String calculateKey(String msg, String key) throws Exception {
    Mac HMAC = Mac.getInstance("HmacSHA1");
    BigInteger k = new BigInteger(key, 16);
    HMAC.init(new SecretKeySpec(k.toByteArray(), "HmacSHA1"));
    msg = msg.toLowerCase();
    HMAC.update(msg.getBytes());
    byte[] digest = HMAC.doFinal();
    System.out.println("Key:\n" + k.toString(16) + "\n");
    System.out.println("HMAC:\n" + DatatypeConverter.printHexBinary(digest).toLowerCase() + "\n");
    return DatatypeConverter.printHexBinary(digest).toLowerCase();
}
Some test runs with different keys (all strings are interpreted as hex):
Key 1:
736A66B29072C49AB6DC93BB2BA41A53E169D14621872B0345F01EBBF117FCE48EEEA2409CFC1BD92B0428BA0A34092E3117BEB4A8A14F03391C661994863DAC1A75ED437C1394DA0741B16740D018CA243A800DA25311FDFB9CA4361743E8511E220B79C2A3483FCC29C7A54F1EB804481B2DC87E54A3A7D8A94253A60AC77FA4584A525EDC42BF82AE2A1FD6E3746F626E0AFB211F6984367B34C954B0E08E3F612590EFB8396ECD9AE77F15D5222A6DB106E8325C3ABEA54BB59E060F9EA0
Msg:
test
Hmac OpenSSL:
b37f79df52afdbbc4282d3146f9fe7a254dd23b3
Hmac Java Mac:
b37f79df52afdbbc4282d3146f9fe7a254dd23b3
Key 2: 636A66B29072C49AB6DC93BB2BA41A53E169D14621872B0345F01EBBF117FCE48EEEA2409CFC1BD92B0428BA0A34092E3117BEB4A8A14F03391C661994863DAC1A75ED437C1394DA0741B16740D018CA243A800DA25311FDFB9CA4361743E8511E220B79C2A3483FCC29C7A54F1EB804481B2DC87E54A3A7D8A94253A60AC77FA4584A525EDC42BF82AE2A1FD6E3746F626E0AFB211F6984367B34C954B0E08E3F612590EFB8396ECD9AE77F15D5222A6DB106E8325C3ABEA54BB59E060F9EA0
Msg:
test
Hmac OpenSSL:
bac64a905fa6ae3f7bf5131be06ca037b3b498d7
Hmac Java Mac:
bac64a905fa6ae3f7bf5131be06ca037b3b498d7
Key 3: 836A66B29072C49AB6DC93BB2BA41A53E169D14621872B0345F01EBBF117FCE48EEEA2409CFC1BD92B0428BA0A34092E3117BEB4A8A14F03391C661994863DAC1A75ED437C1394DA0741B16740D018CA243A800DA25311FDFB9CA4361743E8511E220B79C2A3483FCC29C7A54F1EB804481B2DC87E54A3A7D8A94253A60AC77FA4584A525EDC42BF82AE2A1FD6E3746F626E0AFB211F6984367B34C954B0E08E3F612590EFB8396ECD9AE77F15D5222A6DB106E8325C3ABEA54BB59E060F9EA0
Msg:
test
Hmac OpenSSL:
c189c637317b67cee04361e78c3ef576c3530aa7
Hmac Java Mac:
472d734762c264bea19b043094ad0416d1b2cd9c
As the data shows, when the key gets too big, an error occurs. I have no idea which implementation is faulty. I have also tried with bigger and smaller keys, but I haven't determined the exact threshold. Can anyone spot the problem? Can anyone tell me which HMAC is incorrect in the last case by running a simulation with different software, or point me to a third implementation I could use to check mine?
Kind regards,
Roel Storms
When you convert a hexadecimal string to a BigInteger in Java, it assumes the number is positive (unless the string includes a - sign).
But the internal representation is two's complement, meaning that one bit is used for the sign.
If you are converting a value that starts with a hex between 00 and 7F inclusive, then that's not a problem. It can convert the byte directly, because the leftmost bit is zero, which means that the number is considered positive.
But if you are converting a value that starts with 80 through FF, then the leftmost bit is 1, which will be considered negative. To avoid this, and keep the BigInteger value exactly as it is supplied, it adds another zero byte at the beginning.
So, internally, the conversion of a number such as 7ABCDE is the byte array
0x7a 0xbc 0xde
But the conversion of a number such as FABCDE (only the first byte is different!), is:
0x00 0xfa 0xbc 0xde
This means that for keys that begin with a byte in the range 80-FF, the BigInteger.toByteArray() is not producing the same array that your C++ program produced, but an array one byte longer.
There are several ways to work around this - like using your own hex-to-byte-array parser or finding an existing one in some library. If you want to use the one produced by BigInteger, you could do something like this:
BigInteger k = new BigInteger(key, 16);
byte[] kByteArr = k.toByteArray();
if (kByteArr.length > (key.length() + 1) / 2) {
    kByteArr = Arrays.copyOfRange(kByteArr, 1, kByteArr.length);
}
Now you can use the kByteArr to perform the operation properly.
Another issue you should watch out for is keys whose length is odd. In general, you shouldn't have a hex octet string with an odd length. A string like F8ACB is actually 0F8ACB (which is not going to cause an extra byte in BigInteger) and should be interpreted as such. This is why I wrote (key.length() + 1) in my formula: if key is odd-length, it should be interpreted as one octet longer. This is also important to watch out for if you write your own hex-to-byte-array converter: if the length is odd, you should add a zero at the beginning before you start converting.
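If you would rather go with the first workaround (your own parser), a minimal sketch could look like this (the method name hexToBytes is just for illustration):

// Parses a hex string into bytes without ever introducing a sign byte.
static byte[] hexToBytes(String hex) {
    if (hex.length() % 2 != 0) {
        hex = "0" + hex;                 // pad odd-length input, e.g. F8ACB -> 0F8ACB
    }
    byte[] out = new byte[hex.length() / 2];
    for (int i = 0; i < out.length; i++) {
        out[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
    }
    return out;
}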
I have a binary string, String A = "1000000110101110". I want to convert this string into a byte array of length 2 in Java.
I have taken help from this link and have tried to convert the string in a couple of ways.
1. I converted the string into a decimal value first and then applied this code to store it into the byte array:
int aInt = Integer.parseInt(A, 2);
byte[] xByte = new byte[2];
xByte[0] = (byte) ((aInt >> 8) & 0xFF);
xByte[1] = (byte) (aInt & 0xFF);
System.arraycopy(xByte, 0, record, 0, xByte.length);
But the values that get stored into the byte array are negative:
xByte[0]: -127
xByte[1]: -82
which are wrong values.
2. I have also tried using:
byte[] xByte = ByteBuffer.allocate(2).order(ByteOrder.BIG_ENDIAN).putInt(aInt).array();
But it throws an exception at the above line:
java.nio.Buffer.nextPutIndex(Buffer.java:519) at
java.nio.HeapByteBuffer.putInt(HeapByteBuffer.java:366) at
org.com.app.convert.generateTemplate(convert.java:266)
What should I do now to convert the binary string to a byte array of 2 bytes? Is there any built-in function in Java to get the byte array?
The answer you are getting,
xByte[0]: -127
xByte[1]: -82
is right.
This is called two's complement representation.
The first (most significant) bit is used as the sign bit:
0 for positive
1 for negative
If the first bit is 0, the value is calculated as usual.
But if the first bit is 1, the value of the last 7 bits is deducted from 128 and the answer is presented in negative form.
In your case the first value is 10000001,
so the leading 1 makes it negative, and 128 - 1 (the last seven bits) = 127,
so the value is -127.
For more detail, read about two's complement representation.
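A quick check of that rule:

int bits = 0b10000001;                      // the first byte's bit pattern, as an int (129)
System.out.println((byte) bits);            // -127: the same bits read as a signed byte
System.out.println(-(128 - (bits & 0x7F))); // -127: the "128 minus the last 7 bits" rule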
Use putShort for putting a two-byte value; an int has four bytes.
// big endian is the default order
byte[] xByte = ByteBuffer.allocate(2).putShort((short)aInt).array();
By the way, your first attempt is perfect. You can’t change the negative sign of the bytes, as the most significant bit of these bytes is set. That’s always interpreted as a negative value.
10000001₂ == -127
10101110₂ == -82
Try this:
String s = "1000000110101110";
int i = Integer.parseInt(s, 2);
byte[] a = {(byte) ( i >> 8), (byte) i};
System.out.println(Arrays.toString(a));
System.out.print(Integer.toBinaryString(0xFF & a[0]) + " " + Integer.toBinaryString(0xFF & a[1]));
output
[-127, -82]
10000001 10101110
That is, -127 is 10000001 in binary and -82 is 10101110.
Bytes are signed 8 bit integers. As such your result is completely correct.
That is: 01111111 is 127, but 10000000 is -128. If you want to get numbers in the 0-255 range, you need to use a bigger variable type, like short.
You can print byte as unsigned like this:
public static String toString(byte b) {
return String.valueOf(((short)b) & 0xFF);
}
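For example, applied to the two bytes from the question (using the toString helper above):

byte[] a = { (byte) 0b10000001, (byte) 0b10101110 }; // stored as -127 and -82
System.out.println(toString(a[0])); // 129
System.out.println(toString(a[1])); // 174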
How does one obtain the complementary hexadecimal value for a given input?
This could be made a bit more generic, i.e. given an array of X possible values, how do you map an element like arr[x] --> arr[arr.length - arr.indexOf(x)]?
Please ignore the syntax.
The following code snippet will find the 16's complement of a hexadecimal number:
// input; you could take it from the user after validation
BigInteger subtrahend = new BigInteger("2D", 16);

// construct a character array of the same length as the input
char[] array = new char[subtrahend.toString(16).length()];

// fill the array with 'F'; see the first source, where 2D is subtracted from FF
Arrays.fill(array, 'F');

// construct an FFF... BigInteger of that length
BigInteger minuend = new BigInteger(new String(array), 16);

// subtract
BigInteger difference = minuend.subtract(subtrahend);

// add one to it
BigInteger result = difference.add(BigInteger.ONE);

// print it in hex format
System.out.println(result.toString(16));
Hope this will help you. Thank you.
Source:
Binary and Hexadecimal Arithmetic
Digital Principles and Logic Design
First find the 15's complement of the given input by subtracting it from the number FF... which is of the same length as the input. Then add 1 to it.
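For example, with input 2D: the 15's complement is FF - 2D = D2, and adding 1 gives D3, which is what the snippet above prints (d3).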