iOS and Android AES Encryption (No UINT in Java) - java

All,
I am new to encryption, so I'm not sure what information I need to share to get help, but I'll edit this question as I learn more about how to ask it well :)
I am performing AES encryption on both an iOS and an Android app that communicate with a device over Bluetooth. I am using AES-CTR encryption, and it is fully implemented and functional on iOS. The problem I'm running into is that when I convert items such as my IV to a byte array, Java bytes are signed and Swift bytes are unsigned, so while I can encrypt and decrypt my string in Java, the result is different from what I see on iOS.
How are other people dealing with this unsigned int issue? I feel like there's got to be some straightforward thing I'm doing wrong. I'm really not sure what code to post. For Android I'm using hex-string-to-byte conversion functions I found here on Stack Overflow, and they are working correctly... they're just signed instead of unsigned, so the values are different from the unsigned byte arrays on iOS.
iOS Implementation:
let aesPrivateKey = "********************************"
print("MacAddress:-> \(macAddress)")
var index = 0
let aesPrivateKeyStartIndex = aesPrivateKey.startIndex
let macAddressStartIndex = macAddress.startIndex
//Perform an XOR to get the device key
var deviceKeyArray: Array<Character> = Array(repeating: "?", count: 32)
for _ in macAddress {
    let nextPrivateKeyIndex = aesPrivateKey.index(aesPrivateKeyStartIndex, offsetBy: index)
    let nextMacAddressIndex = macAddress.index(macAddressStartIndex, offsetBy: index)
    let nextPrivateKeyString = String(aesPrivateKey[nextPrivateKeyIndex])
    let nextMacAddressString = String(macAddress[nextMacAddressIndex])
    let nextPrivateKeyByte = Int(nextPrivateKeyString, radix: 16)
    let nextMacAddressByte = Int(nextMacAddressString, radix: 16)
    let nextCombinedByte = nextPrivateKeyByte! ^ nextMacAddressByte!
    let nextCombinedString = nextCombinedByte.hexString
    deviceKeyArray[index] = nextCombinedString[nextCombinedString.index(nextCombinedString.startIndex, offsetBy: 1)]
    index += 1
}
while index < 32 {
    let nextPrivateKeyIndex = aesPrivateKey.index(aesPrivateKeyStartIndex, offsetBy: index)
    deviceKeyArray[index] = aesPrivateKey[nextPrivateKeyIndex]
    index += 1
}
//Convert the device key to a byte array
let deviceKey = "0x" + String(deviceKeyArray)
let deviceKeyByte = Array<UInt8>(hex: deviceKey)
//Convert the password to a byte array
let passwordByte : Array<UInt8> = password.bytes
//Convert the initialization vector to a byte array
let aesIVHex = "0x" + AESIV
let aesIVByte = Array<UInt8>(hex: aesIVHex)
//Encrypt the password
var encrypted = [Unicode.UTF8.CodeUnit]()
do {
    encrypted = try AES(key: deviceKeyByte, blockMode: CTR(iv: aesIVByte)).encrypt(passwordByte)
}
catch {
    print(error)
}
print("The Encrypted Password Data: \(encrypted)")
let encryptedData = encrypted.toHexString()
//Write password to bluetooth and check result
UserDefaultUtils.setObject(encryptedData as AnyObject, key: userDefaults.password)
DeviceLockManager.shared().isEncrypted = false
DeviceLockManager.shared().setVerifyPasswordForDevice(isGunboxDevice: true)
Android implementation:
System.out.println("ble_ Password:"+str_password+"\nble_ AesKey:"+aesDeviceKey+"\nble_ AesIV:"+aesIV);
byte[] encryptedData = encrypt(
        str_password.getBytes(),
        Utility.getInstance().hexStringToByteArray(aesDeviceKey),
        Utility.getInstance().hexStringToByteArray(aesIV));
String encryptedPassword = Utility.getInstance().bytesToHexString(encryptedData);
System.out.println("ble_ AES Encrypted password " + encryptedPassword);
byte[] decryptedData = decrypt(encryptedData, aesDeviceKey.getBytes(), aesIV.getBytes());
System.out.println("ble_ Cipher Decrypt:"+new String(decryptedData));
//Write password to bluetooth and check result
deviceManager.writePassword(encryptedPassword);
Utility.getInstance().sleep(100);
deviceManager.readPasswordResult();
All input values match exactly until I call the function hexStringToByteArray. At this point, the iOS byte arrays are unsigned and the Android byte arrays are signed.
Here is that function for reference:
public static byte[] hexStringToByteArray(String s) {
    byte[] b = new byte[s.length() / 2];
    for (int i = 0; i < b.length; i++) {
        int index = i * 2;
        int v = Integer.parseInt(s.substring(index, index + 2), 16);
        b[i] = (byte) v;
    }
    return b;
}
Sample IV Byte Array:
iOS vs Android:
43, 34, 95, 101, 57, 150, 75, 100, 250, 178, 194, 70, 253, 236, 92, 70
43, 34, 95, 101, 57, -106, 75, 100, -6, -78, -62, 70, -3, -20, 92, 70

You might notice a difference between the two printed arrays because Java by default displays a byte as a signed value. But in reality those arrays are equal. To make it clearer, I'll add a little table with the last five values of the example IV array you provided.
|----------|-----|-----|-----|-----|-----|
| hex      |  46 |  FD |  EC |  5C |  46 |
| unsigned |  70 | 253 | 236 |  92 |  70 |
| signed   |  70 |  -3 | -20 |  92 |  70 |
|----------|-----|-----|-----|-----|-----|
So they are actually the same (bit-wise); they are only printed differently because they are interpreted as different values. If you want to make sure things are correct, I would suggest looking at a few numbers with a calculator in programmer mode. Usually there is a way to set the byte/word length, so you can play around with the signed vs unsigned interpretation of the same hexadecimal value (there should also be a bit representation of the value).
As an alternative, I found a small website containing a signed vs unsigned type/bit/hex converter, which will do the trick as well (make sure you select the char type, otherwise the signed values will be incorrect).
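Equivalently, here is a small Java sketch (using the sample IV values from the question) that prints the signed bytes as the unsigned values iOS shows:
public class UnsignedPrint {
    public static void main(String[] args) {
        byte[] iv = { 43, 34, 95, 101, 57, (byte) 150, 75, 100, (byte) 250,
                      (byte) 178, (byte) 194, 70, (byte) 253, (byte) 236, 92, 70 };
        StringBuilder sb = new StringBuilder();
        for (byte b : iv) {
            sb.append(b & 0xFF).append(' ');   // same value as Byte.toUnsignedInt(b)
        }
        // Prints: 43 34 95 101 57 150 75 100 250 178 194 70 253 236 92 70
        System.out.println(sb.toString().trim());
    }
}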
So in the IV-bytes part of the code there shouldn't be any problem. There might be one, however, when you create your String using only a byte array as a parameter, i.e.:
byte[] decryptedData = decrypt(encryptedData, aesDeviceKey.getBytes(), aesIV.getBytes());
System.out.println("ble_ Cipher Decrypt:" + new String(decryptedData));
This is a problem since the Charset being used is most likely not UTF-8 (you can determine that by calling Charset#defaultCharset and checking its value). The alternative would be:
new String(decryptedData, StandardCharsets.UTF_8)
or:
new String(decryptedData, "UTF-8");
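For completeness, a small self-contained sketch (the decryptedData bytes here are just a stand-in for the question's decrypted output) showing the default-charset check and the explicit UTF-8 decode:
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class CharsetCheck {
    public static void main(String[] args) {
        // See which charset new String(byte[]) silently picks on this device/JVM.
        System.out.println("Default charset: " + Charset.defaultCharset());

        // Round-trip a UTF-8 string explicitly, the way the decrypted bytes
        // should be turned back into text.
        byte[] decryptedData = "pass\u00e9word".getBytes(StandardCharsets.UTF_8);
        System.out.println(new String(decryptedData, StandardCharsets.UTF_8));
    }
}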

Related

How to convert a binary containing surrogate pairs to Unicode?

Can anyone help figure out a surrogate pair problem?
The source binary is #{EDA0BDEDB883} (encoded by Hessian/Java); how do I decode it to 😃 or "^(1F603)"?
I have checked the UTF-16 wiki, but it only told me half the story.
My problem is how to convert #{EDA0BDEDB883} to \ud83d and \ude03.
My aim is to rewrite the Python 3 program in Rebol or Red-lang, just parsing binary data, without any libraries.
This is how Python 3 does it:
def _decode_surrogate_pair(c1, c2):
    """
    Python 3 no longer decodes surrogate pairs for us; we have to do it
    ourselves.
    """
    # print(c1.encode('utf-8'))  # \ud83d
    # print(c2.encode('utf-8'))  # \ude03
    if not ('\uD800' <= c1 <= '\uDBFF') or not ('\uDC00' <= c2 <= '\uDFFF'):
        raise Exception("Invalid UTF-16 surrogate pair")
    code = 0x10000
    code += (ord(c1) & 0x03FF) << 10
    code += (ord(c2) & 0x03FF)
    return chr(code)

def _decode_byte_array(bytes):
    s = ''
    while len(bytes):
        b, bytes = bytes[0], bytes[1:]
        c = b.decode('utf-8', 'surrogatepass')
        if '\uD800' <= c <= '\uDBFF':
            b, bytes = bytes[0], bytes[1:]
            c2 = b.decode('utf-8', 'surrogatepass')
            c = _decode_surrogate_pair(c, c2)
        s += c
    return s

bytes = [b'\xed\xa0\xbd', b'\xed\xb8\x83']
print(_decode_byte_array(bytes))
public static void main(String[] args) throws Exception {
    // "😃"
    // "\uD83D\uDE03"
    final byte[] bytes1 = "😃".getBytes(StandardCharsets.UTF_16);
    // [-2, -1, -40, 61, -34, 3]
    // #{FFFFFFFE} #{FFFFFFFF} #{FFFFFFD8} #{0000003D} #{FFFFFFDE} #{00000003}
    System.out.println(Arrays.toString(bytes1));
}
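For reference, a small Java sketch of the manual decoding step (the helper decodeTriple is made up for illustration; the byte pattern is the CESU-8 style encoding that Hessian appears to use). It rebuilds \uD83D and \uDE03 from the two 3-byte sequences in #{EDA0BDEDB883} and combines them into U+1F603 with Character.toCodePoint:
public class SurrogateDecode {

    // Decode one 3-byte CESU-8 style sequence (ED A0..BF xx) into a single
    // UTF-16 code unit: 1110xxxx 10yyyyyy 10zzzzzz -> xxxxyyyyyyzzzzzz.
    static char decodeTriple(byte b0, byte b1, byte b2) {
        return (char) (((b0 & 0x0F) << 12) | ((b1 & 0x3F) << 6) | (b2 & 0x3F));
    }

    public static void main(String[] args) {
        byte[] source = { (byte) 0xED, (byte) 0xA0, (byte) 0xBD,
                          (byte) 0xED, (byte) 0xB8, (byte) 0x83 };
        char high = decodeTriple(source[0], source[1], source[2]); // \uD83D
        char low  = decodeTriple(source[3], source[4], source[5]); // \uDE03
        int codePoint = Character.toCodePoint(high, low);          // 0x1F603
        System.out.printf("%04X %04X -> U+%X %s%n",
                (int) high, (int) low, codePoint,
                new String(Character.toChars(codePoint)));
    }
}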

Understanding SHA-256 Hashing

I am using SHA-256 hashing in my Java program, and it is working as expected.
I am actually a bit confused about the function I have used for SHA-256.
Following is the code of the function:
// Function for generating the hash of the file content..
public static String generateHash( String fileContent )
{
    String hashtext = EMPTY_STRING;
    try {
        // SHA-256 message digest..
        MessageDigest shaDigest = MessageDigest.getInstance( "SHA-256" );
        // digest() method is called
        // to calculate the message digest of the input string,
        // returned as an array of bytes
        byte[] messageDigest = shaDigest.digest( fileContent.getBytes() );
        // Convert byte array into signum representation
        BigInteger no = new BigInteger( 1, messageDigest );
        // Convert message digest into hex value
        hashtext = no.toString( 16 );
        // Add preceding 0s to make it 32 characters long
        while ( hashtext.length() < 32 ) {
            hashtext = "0" + hashtext;
        }
    }
    catch ( Exception hashingException ) {
        System.out.println( "Exception in Hashing of Content = " + hashingException );
    }
    // return the hash text
    return hashtext;
}
Now, I am confused by three statements, as I am unaware of their actual purpose; I have searched for them on the internet but didn't find any explanation. Can someone explain these three steps to me?
STATEMENT 1
BigInteger no = new BigInteger( 1, messageDigest );
STATEMENT 2
hashtext = no.toString( 16 );
STATEMENT 3
while ( hashtext.length() < 32 ) {
    hashtext = "0" + hashtext;
}
BigInteger no = new BigInteger( 1, messageDigest );
Convert the bytes to a positive sign-magnitude representation. Read the Javadoc for more info.
hashtext = no.toString( 16 );
Convert the BigInteger to a base-16 (hexadecimal) string.
while ( hashtext.length() < 32 ) {
    hashtext = "0" + hashtext;
}
Prepend "0" until hashtext has a length of 32.
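To see the three statements together, here is a small self-contained sketch (hashing the literal "hello" rather than the question's file content) that compares the BigInteger route with building the hex string byte by byte:
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class DigestHexDemo {
    public static void main(String[] args) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest("hello".getBytes(StandardCharsets.UTF_8));

        // Statements 1 + 2: treat the digest bytes as one positive number,
        // then print that number in base 16.
        String viaBigInteger = new BigInteger(1, digest).toString(16);

        // The same result built byte by byte (leading zeros are kept here).
        StringBuilder sb = new StringBuilder();
        for (byte b : digest) {
            sb.append(String.format("%02x", b & 0xFF));
        }
        String viaFormat = sb.toString();

        // The two differ only when the digest starts with one or more zero hex
        // digits, which is what the while-loop padding (statement 3) tries to
        // compensate for.
        System.out.println(viaBigInteger);
        System.out.println(viaFormat);
    }
}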

AES CMAC calculation - output length for host cryptogram incorrect length

I have the function below, which is supposed to return an 8-byte host cryptogram based on the length of the derived data "L", but I am getting 16 bytes of data. Although the key is 128 bits, I was expecting the BC AESCMAC function to return data based on the value of L in the derivation data. If this is not the case, do I need to extract the most significant 8 bytes from the output? Below is my function:
private String scp03CalculatehostCryptogram(byte[] derivedSMACSessionKey, String hostChallenge, String cardChallenge) throws InvalidKeyException, NoSuchAlgorithmException, NoSuchProviderException, UnsupportedEncodingException {
    // Reference : GPC_2.2_D_SCP03_v1.1.1 > 6.2.2.3 Host Authentication Cryptogram - The host cryptogram (8 bytes) is calculated using the data derivation scheme defined in section 4.1.5 with the session key S-MAC and the derivation constant set to “host authentication cryptogram generation”. The length of the cryptogram shall be reflected in the parameter “L” (i.e. '0040'). The “context” parameter shall be set to the concatenation of the host challenge (8 bytes) and the card challenge (8 bytes).
    String labelForSMAC = "000000000000000000000001";
    String separationIndicator = "00";
    String lInteger = "0040";
    String counter = "01";
    String context = hostChallenge.concat(cardChallenge);
    String hostCryptogramDerivationData = labelForSMAC.concat(separationIndicator).concat(lInteger).concat(counter).concat(context);
    byte[] hostCryptogramDerivationDataBytes = DatatypeConverter.parseHexBinary(hostCryptogramDerivationData);
    System.out.println(" Host Cryptogram Derivation data : " + DatatypeConverter.printHexBinary(hostCryptogramDerivationDataBytes));
    Mac aescmac = Mac.getInstance("AESCMAC", "BC");
    SecretKey scpENCKeyObject = new SecretKeySpec(derivedSMACSessionKey, "AES");
    aescmac.init(scpENCKeyObject);
    aescmac.update(hostCryptogramDerivationDataBytes);
    byte[] hostCryptogram = aescmac.doFinal();
    System.out.println(" Calculated Host Cryptogram : " + DatatypeConverter.printHexBinary(hostCryptogram));
    return DatatypeConverter.printHexBinary(hostCryptogram);
}
Output :
Host Cryptogram Derivation data : 0000000000000000000000010000400161BD435249EC20B7AA984A2D47AD4302
Calculated Host Cryptogram : 6F405B9FD1438A4633A4289B618A1FB5
Example - derived smac session key : 47297387E512687FBEB37D1C1F4B8F4C
What am I doing wrong?
The length L is included in the input of the cryptogram calculation to make the output of the cryptogram as specific as possible.
Obviously the MAC algorithm itself pays no attention to what the input means. A MAC simply takes a key and input and produces a predefined amount of data. Your function is supposed to create the cryptogram, and that cryptogram requires the size of the output data L as a parameter, so if you're not producing the required amount of output data, that's up to your function to fix.
And yes, in general, if the output of a PRF (e.g. your function) needs to be resized, the leftmost bytes are taken.
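As a minimal sketch of that last point (assuming the leftmost 8 bytes are wanted, since L = '0040' means 64 bits), the question's method could truncate the 16-byte CMAC output like this:
byte[] fullCmac = aescmac.doFinal();                           // 16 bytes from AES-CMAC
byte[] hostCryptogram = java.util.Arrays.copyOf(fullCmac, 8);  // keep the leftmost 8 bytes
return DatatypeConverter.printHexBinary(hostCryptogram);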

About Java Android BASE64 decoding to ASCII String

I have Base64 string data that I received from a service.
I am able to decode this data and get a byte array.
But when I create a new string from that byte array, my server is not able to read the data properly.
However, the same process in C on a Linux-based device works fine with my server. That is to say, if I Base64-decode the same string on that device (using OpenSSL, getting a char array) and send it to my server, the server is able to read it properly.
Now, I tried a sample in Eclipse to understand the problem. Below is the sample:
String base1 =
    "sUqVKrgErEId6j3rH8BMMpzvXuTf05rj0PlO/eLOoJwQb3rXrsplAl28unkZP0WvrXRTlpAmT3Y" +
    "ohtPFl2+zyUaCSrYfug5JtVHLoVsJ9++Afpx6A5dupn3KJQ9L9ItfWvatIlamQyMo2S5nDypCw79" +
    "B2HNAR/PG1wfgYG5OPMNjNSC801kQSE9ljMg3hH6nrRJhXvEVFlllKIOXOYuR/NORAH9k5W+rQeQ" +
    "7ONsnao2zvYjfiKO6eGleL6/DF3MKCnGx1sbci9488EQhEBBOG5FGJ7KjTPEQzn/rq3m1Yj9Le/r" +
    "KsmzbRNcJN2p/wy1xz9oHy8jWDm81iwRYndJYAQ==";
byte[] b3 = Base64.getDecoder().decode(base1.getBytes());
System.out.println("B3Len:" + b3.length );
String s2 = new String(b3);
System.out.println("S2Len:" + s2.length() );
System.out.println("B3Hex: " + bytesToHex(b3) );
System.out.println("B3HexLen: " + bytesToHex(b3).length() );
byte[] b2 = s2.getBytes();
System.out.println("B2Len:" + b2.length );
int count = 0;
for (int i = 0; i < b3.length; i++) {
    if (b3[i] != b2[i]) {
        count++;
        System.out.println("Byte: " + i + " >> " + b3[i] + " != " + b2[i]);
    }
}
System.out.println("Count: " + count);
System.out.println("B2Hex: " + bytesToHex(b2) );
System.out.println("B2HexLen: " + bytesToHex(b2).length() );
Below is output:
B3Len:256
S2Len:256
B3Hex:
b14a952ab804ac421dea3deb1fc04c329cef5ee4dfd39ae3d0f94efde2cea09c106f7ad7aeca
65025dbcba79193f45afad74539690264f762886d3c5976fb3c946824ab61fba0e49b551cba1
5b09f7ef807e9c7a03976ea67dca250f4bf48b5f5af6ad2256a6432328d92e670f2a42c3bf41
d8734047f3c6d707e0606e4e3cc3633520bcd35910484f658cc837847ea7ad12615ef1151659
65288397398b91fcd391007f64e56fab41e43b38db276a8db3bd88df88a3ba78695e2fafc317
730a0a71b1d6c6dc8bde3cf0442110104e1b914627b2a34cf110ce7febab79b5623f4b7bfaca
b26cdb44d709376a7fc32d71cfda07cbc8d60e6f358b04589dd25801
B3HexLen: 512
B2Len:256
Byte: 52 >> -112 != 63
Byte: 175 >> -115 != 63
Byte: 252 >> -99 != 63
Count: 3
B2Hex:
b14a952ab804ac421dea3deb1fc04c329cef5ee4dfd39ae3d0f94efde2cea09c106f7ad7aeca
65025dbcba79193f45afad7453963f264f762886d3c5976fb3c946824ab61fba0e49b551cba1
5b09f7ef807e9c7a03976ea67dca250f4bf48b5f5af6ad2256a6432328d92e670f2a42c3bf41
d8734047f3c6d707e0606e4e3cc3633520bcd35910484f658cc837847ea7ad12615ef1151659
65288397398b91fcd391007f64e56fab41e43b38db276a3fb3bd88df88a3ba78695e2fafc317
730a0a71b1d6c6dc8bde3cf0442110104e1b914627b2a34cf110ce7febab79b5623f4b7bfaca
b26cdb44d709376a7fc32d71cfda07cbc8d60e6f358b04583fd25801
B2HexLen: 512
I understand that there are extended characters in this string.
So, here we can see that converting the bytes to a string and back is not working properly, because of the differences between the two byte arrays.
I actually need this to work because I have a much larger Base64 string than the one in this sample that I need to send to my server, which is trying to read an ASCII string.
Alternatively,
can anyone give me a solution that produces an ASCII String output identical to the char array output from C (OpenSSL decoding) on the Linux device?
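As an illustration of what is happening at bytes 52, 175 and 252 (a sketch of the charset issue, not necessarily the fix your server needs): decoding arbitrary bytes with the platform default charset and encoding them back can replace some byte values with '?' (63), whereas ISO-8859-1 maps every byte value 0x00-0xFF to exactly one character and back:
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class ByteRoundTripDemo {
    public static void main(String[] args) {
        // 0x90 is one of the bytes that came back as 63 ('?') in the question.
        byte[] original = { (byte) 0xB1, 0x4A, (byte) 0x95, (byte) 0x90 };

        // Default charset: the round trip may not give back the same bytes.
        byte[] viaDefault = new String(original).getBytes();

        // ISO-8859-1 maps each byte 0x00..0xFF to exactly one char and back.
        byte[] viaLatin1 = new String(original, StandardCharsets.ISO_8859_1)
                .getBytes(StandardCharsets.ISO_8859_1);

        System.out.println(Arrays.equals(original, viaDefault)); // may be false
        System.out.println(Arrays.equals(original, viaLatin1));  // true
    }
}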

Java CRC32 does not match MySQL CRC32

I need Java's CRC32 to match MySQL's CRC32, but I'm getting different results.
MySQL
MariaDB> SELECT CRC32(148595460);
+------------------+
| CRC32(148595460) |
+------------------+
|       4137475682 |
+------------------+
1 row in set (0.00 sec)
Java
Checksum checksum = new CRC32();
checksum.update(ByteBuffer.allocate(4).putInt(148595460).array(), 0, 4);
System.out.println("Checksum: " + checksum.getValue());
Result: Checksum: 747753823
Why does this happen? I'm guessing MySQL interprets the number as a string?
I believe your observations can be explained by a close look at the APIs for MariaDB and Java:
MariaDB:
Computes a cyclic redundancy check value and returns a 32-bit unsigned value. The result is NULL if the argument is NULL. The argument is expected to be a string and (if possible) is treated as one if it is not.
In other words, when you call CRC32(148595460) in MariaDB it is using the string 148595460. Now let's look at Java.
Java:
From the documentation for CRC32.update:
Updates the CRC-32 checksum with the specified array of bytes.
In other words, you passed in the int value 148595460 which then was converted to a byte array.
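To make the difference concrete, here is a small sketch (not part of the original answer) that prints the two inputs actually being checksummed: the 4 big-endian bytes of the int versus the 9 ASCII digit characters:
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class Crc32InputDemo {
    public static void main(String[] args) {
        byte[] asInt = ByteBuffer.allocate(4).putInt(148595460).array();
        byte[] asText = "148595460".getBytes(StandardCharsets.US_ASCII);
        System.out.println(Arrays.toString(asInt));  // [8, -37, 99, 4]
        System.out.println(Arrays.toString(asText)); // [49, 52, 56, 53, 57, 53, 52, 54, 48]
    }
}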
If you try the following Java code I believe you will get the behavior you want:
Checksum checksum = new CRC32();
byte[] b = "148595460".getBytes();
checksum.update(b);
System.out.println("Checksum: " + checksum.getValue());
The problem is just as I assumed: MySQL interprets the number as a string of characters. Passing a string of digits in Java solves the problem:
Checksum checksum = new CRC32();
String id = "148595460";
checksum.update(id.getBytes(), 0, id.length());
System.out.println("Checksum: " + checksum.getValue());
The result: Checksum: 4137475682
