Converting HMAC-SHA1 from node.js to Java - java

I have been tasked with converting an existing piece of node.js code to Java. I think I'm well on my way, but I'm kind of stuck right now. The outputs of the two methods do not seem to match.
What I'm doing is creating a SHA-1 signature based on a query string. This query string contains some query-related data (not relevant for this question) and an API key.
Important
The api_secret string in node.js is equivalent to Config.API_SECRET in Java.
Example query string (these are equal in the node.js and Java program):
/events?festival=imaginate&pretty=1&size=100&from=0&key=SOME_KEY
Actual code
The SHA-1 HMAC is initialized as follows in node.js:
const hmac = crypto.createHmac('sha1', api_secret);
The SHA-1 Mac is initialized as follows in Java:
final SecretKeySpec secretKeySpec = new SecretKeySpec(Config.API_SECRET.getBytes("UTF-8"), "HmacSHA1");
final Mac hmac = Mac.getInstance("HmacSHA1");
hmac.init(secretKeySpec);
Next, the node.js program updates the hmac as such (query parameter is as listed above):
hmac.update(query, 'ascii');
Which I replicated in Java like this (query parameter is equal to the node.js query parameter):
hmac.update(query.getBytes("US-ASCII"));
Finally, the node.js program finalizes the HMAC and hex-encodes it like this:
const signature = hmac.digest('hex');
I couldn't find an exact translation to Java, but this was my attempt, which I think does about the same thing:
Byte array to hex function
public static String byteArrayToHex(byte[] a) {
    StringBuilder sb = new StringBuilder(a.length * 2);
    for (byte b : a)
        sb.append(String.format("%02x", b & 0xff));
    return sb.toString();
}
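On Java 17 and newer, java.util.HexFormat can replace a hand-rolled converter like this one; the equivalent one-liner would be:
String hex = java.util.HexFormat.of().formatHex(a);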
Actual usage
byte[] result = hmac.doFinal();
MessageDigest md = MessageDigest.getInstance("SHA-1");
String sha1Hash = byteArrayToHex(md.digest(result));
However, this is where I get confused. The node.js program returns this hash:
18cf4fce7bd6163c64d3b2ea8d935b0f16720fe3
But my Java program gives this hash as output:
f65f8738cce89134dc73709e3353d94c83ccf1fb
I can't figure out where I went wrong and I really hope someone can shed some light on this.

I figured it out!
Turns out I was doing one unnecessary step.
This line:
byte[] result = hmac.doFinal();
already contains the HMAC signature. I only needed to convert that byte array to a hex string, not take a SHA-1 digest of it first.
So the working code was simply:
byte[] result = hmac.doFinal();
return byteArrayToHex(result);
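For completeness, a minimal end-to-end sketch of the working approach (class and method names here are illustrative, not from the original project):

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;

public final class HmacSha1Signer {
    // Mirrors crypto.createHmac('sha1', apiSecret).update(query, 'ascii').digest('hex')
    public static String sign(String query, String apiSecret) throws Exception {
        Mac hmac = Mac.getInstance("HmacSHA1");
        hmac.init(new SecretKeySpec(apiSecret.getBytes(StandardCharsets.UTF_8), "HmacSHA1"));
        byte[] signature = hmac.doFinal(query.getBytes(StandardCharsets.US_ASCII));
        StringBuilder sb = new StringBuilder(signature.length * 2);
        for (byte b : signature) {
            sb.append(String.format("%02x", b & 0xff));
        }
        return sb.toString();
    }
}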

Related

Ed25519 in JDK 15, Parse public key from byte array and verify

Since Ed25519 has not been around for long (in JDK), there are very few resources on how to use it.
While their example is very neat and useful, I have some trouble understanding what I am doing wrong regarding key parsing.
The public key is being read from a packet sent by an iDevice.
(Let's just say, it's an array of bytes)
From the searching and trying my best to understand how the keys are encoded, I stumbled upon this message.
4. The public key A is the encoding of the point [s]B. First,
encode the y-coordinate (in the range 0 <= y < p) as a little-
endian string of 32 octets. The most significant bit of the
final octet is always zero. To form the encoding of the point
[s]B, copy the least significant bit of the x coordinate to the
most significant bit of the final octet. The result is the
public key.
That means that if I want to get y and isXOdd I have to do some work.
(If I understood correctly)
Below is the code for it, yet the verifying still fails.
I think I did it correctly by reversing the array to get it back into big-endian for BigInteger to use.
My questions are:
Is this the correct way to parse the public key from a byte array?
If it is, what could possibly be the reason for it to fail the verification process?
// devicePublicKey: ByteArray
val lastIndex = devicePublicKey.lastIndex
val lastByte = devicePublicKey[lastIndex]
val lastByteAsInt = lastByte.toInt()
val isXOdd = lastByteAsInt.and(255).shr(7) == 1
devicePublicKey[lastIndex] = (lastByteAsInt and 127).toByte()
val y = devicePublicKey.reversedArray().asBigInteger
val keyFactory = KeyFactory.getInstance("Ed25519")
val nameSpec = NamedParameterSpec.ED25519
val point = EdECPoint(isXOdd, y)
val keySpec = EdECPublicKeySpec(nameSpec, point)
val key = keyFactory.generatePublic(keySpec)
Signature.getInstance("Ed25519").apply {
    initVerify(key)
    update(deviceInfo)
    println(verify(deviceSignature))
}
And the data (before manipulation) (all in HEX):
Device identifier: 34444432393531392d463432322d343237442d414436302d444644393737354244443533
Device public key: e0a611c84db0ae91abfe2e6db91b6a457a4b41f9d8e09afdc7207ce3e4942e94
Device signature: a0383afb3bcbd43d08b04274a9214036f16195dc890c07a81aa06e964668955b29c5026d73d8ddefb12160529eeb66f843be4a925b804b575e6a259871259907
Device info: a86a71d42874b36e81a0acc65df0f2a84551b263b80b61d2f70929cd737176a434444432393531392d463432322d343237442d414436302d444644393737354244443533e0a611c84db0ae91abfe2e6db91b6a457a4b41f9d8e09afdc7207ce3e4942e94
// Device info is simply concatenated [hkdf, identifier, public key]
And the public key after the manipulation:
e0a611c84db0ae91abfe2e6db91b6a457a4b41f9d8e09afdc7207ce3e4942e14
Thank you very much, and every bit of help is greatly appreciated.
This will help many more people who stumble upon this problem later, when the Ed25519 implementation is no longer so fresh.
Helped me a lot. Would never have figured it out without your example.
I did it in Java.
public static PublicKey getPublicKey(byte[] pk)
        throws NoSuchAlgorithmException, InvalidKeySpecException, InvalidParameterSpecException {
    // key is already converted from hex string to a byte array.
    KeyFactory kf = KeyFactory.getInstance("Ed25519");
    // determine if x was odd.
    boolean xisodd = false;
    int lastbyteInt = pk[pk.length - 1];
    if ((lastbyteInt & 255) >> 7 == 1) {
        xisodd = true;
    }
    // make sure most significant bit will be 0 - after reversing.
    pk[pk.length - 1] &= 127;
    // apparently we must reverse the byte array...
    pk = ReverseBytes(pk);
    BigInteger y = new BigInteger(1, pk);
    NamedParameterSpec paramSpec = new NamedParameterSpec("Ed25519");
    EdECPoint ep = new EdECPoint(xisodd, y);
    EdECPublicKeySpec pubSpec = new EdECPublicKeySpec(paramSpec, ep);
    PublicKey pub = kf.generatePublic(pubSpec);
    return pub;
}
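ReverseBytes isn't shown above; assuming it simply reverses the array into a new one, a minimal version could look like this:

static byte[] ReverseBytes(byte[] in) {
    byte[] out = new byte[in.length];
    for (int i = 0; i < in.length; i++) {
        out[i] = in[in.length - 1 - i];
    }
    return out;
}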
Actually, the whole encoding and decoding is correct.
The one thing that turned out to be the problem was that I had (by mistake) reversed the array I read one time too many.
The reversing is needed because these keys are encoded in little-endian, and in order to represent the value as a BigInteger on the JVM you have to reverse the little-endian bytes so they become big-endian.
Hopefully this helps everyone in the future who gets stuck on any similar problems.
If there are any questions, simply comment here or send me a message.
I'll do my best to help you out.
I think the code here is way, way more than you probably need. I investigated this and came up with what I think is equivalent, only much simpler. Anyhow, here is the blog piece: https://www.tbray.org/ongoing/When/202x/2021/04/19/PKI-Detective and here is the Java code: https://github.com/timbray/blueskidjava
You can check how it is done in the OpenJDK implementation:
https://github.com/openjdk/jdk15/blob/master/src/jdk.crypto.ec/share/classes/sun/security/ec/ed/EdDSAPublicKeyImpl.java#L65
Basically encodedPoint is your byte array (just the plain bytes, without ASN.1 encoding).

HMACSHA256.ComputeHash(byte[] arr, int, int) is not matching the output of Java Mac.doFinal(byte[] arr) [duplicate]

The following is an extract from using AES encryption in Java:
encryptedData = encryptCipher.doFinal(strToEncrypt.getBytes());
The following is an extract in c#
DecryptStringFromBytes_Aes(encrypted, myAes.Key, myAes.IV);
Both use a byte array, one to encrypt and the other to decrypt. The encryption in Java produces some negative values stored in a byte array.
C# uses a byte array to decrypt, but a byte in C# is defined as containing only the numbers 0..255, while Java defines its byte type as -128 to 127.
Therefore, I cannot send encrypted data to the remote application, which is written in C#, because it cannot decrypt using the byte array that has been sent from the Java application.
Has anyone come up with a solution that would allow me to tell Java not to produce negative numbers when encrypting?
The code is from Microsoft; the MemoryStream requires a byte[] to create the stream for the crypto code...
As mentioned, I replaced byte[] with sbyte[], but to no avail, as MemoryStream requires byte[].
static string DecryptStringFromBytes_Aes(sbyte[] cipherText, byte[] Key, byte[] IV)
{
    // Check arguments.
    if (cipherText == null || cipherText.Length <= 0)
        throw new ArgumentNullException("cipherText");
    if (Key == null || Key.Length <= 0)
        throw new ArgumentNullException("Key");
    if (IV == null || IV.Length <= 0)
        throw new ArgumentNullException("Key");

    // Declare the string used to hold
    // the decrypted text.
    string plaintext = null;

    // Create an Aes object
    // with the specified key and IV.
    using (Aes aesAlg = Aes.Create())
    {
        aesAlg.Key = Key;
        aesAlg.IV = IV;

        // Create a decryptor to perform the stream transform.
        ICryptoTransform decryptor = aesAlg.CreateDecryptor(aesAlg.Key, aesAlg.IV);

        // Create the streams used for decryption.
        using (MemoryStream msDecrypt = new MemoryStream((byte)cipherText))
        {
            using (CryptoStream csDecrypt = new CryptoStream(msDecrypt, decryptor, CryptoStreamMode.Read))
            {
                using (StreamReader srDecrypt = new StreamReader(csDecrypt))
                {
                    // Read the decrypted bytes from the decrypting stream
                    // and place them in a string.
                    plaintext = srDecrypt.ReadToEnd();
                }
            }
        }
    }
    return plaintext;
}
Java's bytes are signed, C# bytes are unsigned (there's also an sbyte type in C#, that no one uses, which works like Java's bytes).
It doesn't matter. They are different in some regards, namely
when converted to int, C#'s bytes will be zero-extended, Java's bytes will be sign-extended (which is why you almost always see & 0xFF when bytes are used in Java).
when converted to a string, Java's bytes will have their 128 to 255 range mapped to -128 to -1. Just ignore that.
The actual value of those bytes (that is, their bit-pattern) is what actually matters, a byte that is 0xAA will be 0xAA regardless of whether you interpret it as 170 (as in C#) or -86 (as in Java). It's the same thing, just a different way to print it as string.
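A quick way to see this from the Java side (purely an illustration, not part of the code above):

byte b = (byte) 0xAA;
System.out.println(b);        // -86: the byte is sign-extended when widened to int
System.out.println(b & 0xFF); // 170: same bit pattern, zero-extended by the usual & 0xFF mask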
new MemoryStream((byte)cipherText)) definitely doesn't do the right thing (or anything, it shouldn't even compile). The related new MemoryStream((byte[])cipherText)) wouldn't work either, you can't cast between primitive arrays like that. cipherText should just be a byte[] to begin with.
You could turn the ciphertext into a string with some standard encoding such as Base64, for example:
encryptedData = encryptCipher.doFinal(strToEncrypt.getBytes());
String s = Base64.getEncoder().encodeToString(encryptedData);
Using the same standardized encoding, both C# and Java should be able to reconstruct each other's encrypted data from that string.

Replicating Java password hashing code in Python (PBKDF2WithHmacSHA1)

I have been trying to replicate the Java password authentication in Python, but the resulting hash is different.
password: abcd1234
password token (java): $31$16$sWy1dDEx52vwQUCswXDYMQMzTJC39g1_nmrK384T4-w
generated password token (python): $pbkdf2$16$c1d5MWRERXg1MnZ3UVVDcw$qPQvE4QbrnYJTmRXk0M7wlfhH5U
From the Java code, the iteration count is 16, the salt should be the first 16 characters of sWy1dDEx52vwQUCswXDYMQMzTJC39g1_nmrK384T4-w, which is sWy1dDEx52vwQUCs, and the hash should be wXDYMQMzTJC39g1_nmrK384T4-w.
However, applying those values in Python gave me a different hash, qPQvE4QbrnYJTmRXk0M7wlfhH5U, which does not match Java's hash.
Where did I go wrong?
Java:
private static final String ALGORITHM = "PBKDF2WithHmacSHA1";
private static final int SIZE = 128;
private static final Pattern layout = Pattern.compile("\\$31\\$(\\d\\d?)\\$(.{43})");

public boolean authenticate(char[] password, String token)
{
    Matcher m = layout.matcher(token);
    if (!m.matches())
        throw new IllegalArgumentException("Invalid token format");
    int iterations = iterations(Integer.parseInt(m.group(1)));
    byte[] hash = Base64.getUrlDecoder().decode(m.group(2));
    byte[] salt = Arrays.copyOfRange(hash, 0, SIZE / 8);
    byte[] check = pbkdf2(password, salt, iterations);
    int zero = 0;
    for (int idx = 0; idx < check.length; ++idx)
        zero |= hash[salt.length + idx] ^ check[idx];
    return zero == 0;
}
Python:
from passlib.hash import pbkdf2_sha1

def hasher(password):
    size = 128
    key0 = "abcd1234"
    iter = int(password.split("$")[2])
    salt0 = password.split("$")[3][0:16]
    hash = pbkdf2_sha1.using(rounds=iter, salt=salt0.encode()).hash(key0)
    print(hash.split('$')[4])
    return hash
Original Link for Java code: How can I hash a password in Java?
There are a bunch of differences between how that Java code does things and how passlib's pbkdf2_sha1 hasher does things.
The Java hash string contains a log cost parameter, which needs passing through 1<<cost to get the number of rounds / iterations.
The salt+digest needs to be base64 decoded, then the first 16 bytes taken as the salt (which actually corresponds to the first 21 1/3 characters of the base64 data).
Similarly, since the digest's bits start in the middle of a base64 character, when the salt+digest is decoded and the digest is then encoded separately, the base64 string will be
AzNMkLf2DX-easrfzhPj7A (noticeably different from the original encoded string).
Based on that, the following bit of code converts a Java hash into the format used by pbkdf2_sha1.verify:
from passlib.utils.binary import b64s_decode, ab64_encode

def adapt_java_hash(jhash):
    _, ident, cost, data = jhash.split("$")
    assert ident == "31"
    data = b64s_decode(data.replace("_", ".").replace("-", "+"))
    return "$pbkdf2$%d$%s$%s" % (1 << int(cost), ab64_encode(data[:16]),
                                 ab64_encode(data[16:]))
>>> adapt_java_hash("$31$16$sWy1dDEx52vwQUCswXDYMQMzTJC39g1_nmrK384T4-w")
'$pbkdf2$65536$sWy1dDEx52vwQUCswXDYMQ$AzNMkLf2DX.easrfzhPj7A'
The resulting string should be suitable for passing into pbkdf2_sha1.verify("abcd1234", hash), except for one issue:
The Java code truncates the SHA-1 digest to 16 bytes, rather than the full 20 bytes, and the way passlib's hasher is coded, the digest must be the full 20 bytes.
If you alter the Java code to use SIZE=160 instead of SIZE=128, running the hash through the above adapt_java_hash() function should then work in passlib.
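For reference, a minimal sketch of what the pbkdf2 step in that Java code boils down to with SIZE bumped to 160 bits; the helper name and structure are assumptions based on the linked answer, not a verbatim copy:

// uses java.security.spec.KeySpec, javax.crypto.SecretKeyFactory, javax.crypto.spec.PBEKeySpec
private static final int SIZE = 160; // full SHA-1 output, so passlib can verify the digest

private static byte[] pbkdf2(char[] password, byte[] salt, int iterations) throws Exception {
    KeySpec spec = new PBEKeySpec(password, salt, iterations, SIZE);
    SecretKeyFactory f = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
    return f.generateSecret(spec).getEncoded();
}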

Java Mac HMAC vs C++ OpenSSL hmac

This is going to be a long question, but I have a really weird bug. I use OpenSSL in C++ to compute an HMAC and compare it to a similar implementation using javax.crypto.Mac. For some keys the HMAC calculation is correct and for others there is a difference in the HMAC. I believe the problem occurs when the keys get too big. Here are the details.
Here is the most important code for C++:
void computeHMAC(std::string message, std::string key){
    unsigned int digestLength = 20;
    HMAC_CTX hmac_ctx_;
    BIGNUM* key_ = BN_new();
    BN_hex2bn(&key_, key.c_str());
    unsigned char convertedKey[BN_num_bytes(key_)];

    HMAC_CTX_init(&hmac_ctx_);
    int length = BN_bn2bin(key_, convertedKey);
    HMAC_Init_ex(&hmac_ctx_, convertedKey, length, EVP_sha1(), NULL);

    /* Calc HMAC */
    std::transform(message.begin(), message.end(), message.begin(), ::tolower);
    unsigned char digest[digestLength];
    HMAC_Update(&hmac_ctx_, reinterpret_cast<const unsigned char*>(message.c_str()),
                message.length());
    HMAC_Final(&hmac_ctx_, digest, &digestLength);

    char mdString[41];
    for(unsigned int i = 0; i < 20; ++i){
        sprintf(&mdString[i*2], "%02x", (unsigned int)digest[i]);
    }

    std::cout << "\n\nMSG:\n" << message << "\nKEY:\n" + std::string(BN_bn2hex(key_)) + "\nHMAC\n" + std::string(mdString) + "\n\n";
}
The java test looks like this:
public String calculateKey(String msg, String key) throws Exception {
    Mac HMAC = Mac.getInstance("HmacSHA1");
    BigInteger k = new BigInteger(key, 16);
    HMAC.init(new SecretKeySpec(k.toByteArray(), "HmacSHA1"));
    msg = msg.toLowerCase();
    HMAC.update(msg.getBytes());
    byte[] digest = HMAC.doFinal();
    System.out.println("Key:\n" + k.toString(16) + "\n");
    System.out.println("HMAC:\n" + DatatypeConverter.printHexBinary(digest).toLowerCase() + "\n");
    return DatatypeConverter.printHexBinary(digest).toLowerCase();
}
Some test runs with different keys (all strings are interpreted as hex):
Key1:
736A66B29072C49AB6DC93BB2BA41A53E169D14621872B0345F01EBBF117FCE48EEEA2409CFC1BD92B0428BA0A34092E3117BEB4A8A14F03391C661994863DAC1A75ED437C1394DA0741B16740D018CA243A800DA25311FDFB9CA4361743E8511E220B79C2A3483FCC29C7A54F1EB804481B2DC87E54A3A7D8A94253A60AC77FA4584A525EDC42BF82AE2A1FD6E3746F626E0AFB211F6984367B34C954B0E08E3F612590EFB8396ECD9AE77F15D5222A6DB106E8325C3ABEA54BB59E060F9EA0
Msg:
test
Hmac OpenSSL:
b37f79df52afdbbc4282d3146f9fe7a254dd23b3
Hmac Java Mac:
b37f79df52afdbbc4282d3146f9fe7a254dd23b3
Key 2: 636A66B29072C49AB6DC93BB2BA41A53E169D14621872B0345F01EBBF117FCE48EEEA2409CFC1BD92B0428BA0A34092E3117BEB4A8A14F03391C661994863DAC1A75ED437C1394DA0741B16740D018CA243A800DA25311FDFB9CA4361743E8511E220B79C2A3483FCC29C7A54F1EB804481B2DC87E54A3A7D8A94253A60AC77FA4584A525EDC42BF82AE2A1FD6E3746F626E0AFB211F6984367B34C954B0E08E3F612590EFB8396ECD9AE77F15D5222A6DB106E8325C3ABEA54BB59E060F9EA0
Msg:
test
Hmac OpenSSL:
bac64a905fa6ae3f7bf5131be06ca037b3b498d7
Hmac Java Mac:
bac64a905fa6ae3f7bf5131be06ca037b3b498d7
Key 3: 836A66B29072C49AB6DC93BB2BA41A53E169D14621872B0345F01EBBF117FCE48EEEA2409CFC1BD92B0428BA0A34092E3117BEB4A8A14F03391C661994863DAC1A75ED437C1394DA0741B16740D018CA243A800DA25311FDFB9CA4361743E8511E220B79C2A3483FCC29C7A54F1EB804481B2DC87E54A3A7D8A94253A60AC77FA4584A525EDC42BF82AE2A1FD6E3746F626E0AFB211F6984367B34C954B0E08E3F612590EFB8396ECD9AE77F15D5222A6DB106E8325C3ABEA54BB59E060F9EA0
Msg:
test
Hmac OpenSSL:
c189c637317b67cee04361e78c3ef576c3530aa7
Hmac Java Mac:
472d734762c264bea19b043094ad0416d1b2cd9c
As the data shows, when the key gets too big, an error occurs. I have no idea which implementation is faulty. I have also tried with bigger keys and smaller keys, but I haven't determined the exact threshold. Can anyone spot the problem? Can anyone tell me which HMAC is incorrect in the last case by doing a simulation with different software, or suggest a third implementation I could use to check mine?
Kind regards,
Roel Storms
When you convert a hexadecimal string to a BigInteger in Java, it assumes the number is positive (unless the string includes a - sign).
But its internal representation is two's complement, meaning that one bit is used for the sign.
If you are converting a value that starts with an octet between 00 and 7F inclusive, that's not a problem. It can convert the byte directly, because the leftmost bit is zero, which means the number is considered positive.
But if you are converting a value that starts with 80 through FF, then the leftmost bit is 1, which would be considered negative. To avoid this, and keep the BigInteger value exactly as supplied, it adds another zero byte at the beginning.
So, internally, the conversion of a number such as 7ABCDE is the byte array
0x7a 0xbc 0xde
But the conversion of a number such as FABCDE (only the first byte is different!), is:
0x00 0xfa 0xbc 0xde
This means that for keys that begin with a byte in the range 80-FF, the BigInteger.toByteArray() is not producing the same array that your C++ program produced, but an array one byte longer.
There are several ways to work around this - like using your own hex-to-byte-array parser or finding an existing one in some library. If you want to use the one produced by BigInteger, you could do something like this:
BigInteger k = new BigInteger(key, 16);
byte[] kByteArr = k.toByteArray();
if (kByteArr.length > (key.length() + 1) / 2) {
    kByteArr = Arrays.copyOfRange(kByteArr, 1, kByteArr.length);
}
Now you can use the kByteArr to perform the operation properly.
Another issue you should watch out for is keys whose length is odd. In general, you shouldn't have a hex octet string with an odd length. A string like F8ACB is actually 0F8ACB (which is not going to cause an extra byte in BigInteger) and should be interpreted as such. This is why I wrote (key.length() + 1) in my formula: if key is odd-length, it should be interpreted as one octet longer. This is also important to watch out for if you write your own hex-to-byte-array converter: if the length is odd, you should add a zero at the beginning before you start converting.
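If you'd rather skip BigInteger entirely, a minimal hand-rolled hex-to-byte-array parser along those lines (including the leading zero for odd-length input) might look like this:

static byte[] hexToBytes(String hex) {
    if (hex.length() % 2 != 0) {
        hex = "0" + hex; // treat odd-length input as having a leading 0 nibble
    }
    byte[] out = new byte[hex.length() / 2];
    for (int i = 0; i < out.length; i++) {
        out[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
    }
    return out;
}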

How to iteratively sha256 in Python using native lib (ie hashlib), using byte[] as input and not hex string

Background: I have an iterative hash algorithm I need to compute from a Python script and a Java web application.
Pseudocode:
hash = sha256(raw)
for x=1 to 64000 hash = sha256(hash)
where hash is a byte array of length 32, and not a hex string of length 64.
The reason I want to keep it in bytes is because, though Python can convert to hex string in between each iteration and keep the processing time under a second, Java takes 3 seconds for the String overhead.
So, the Java code looks like this:
// hash one time...
byte[] result = sha256(raw.getBytes("UTF-8"));

// then hash 64k-1 more times
for (int x = 0; x < 64000-1; x++) {
    result = sha256(result);
}

// hex encode and print result
StringBuilder sb = new StringBuilder();
Formatter formatter = new Formatter(sb);
for (int i = 0; i < result.length; i++) {
    formatter.format("%02x", result[i]);
}
System.out.println(sb.toString());
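(The sha256(...) helper isn't shown in the question; presumably it just wraps java.security.MessageDigest, something along these lines:)

static byte[] sha256(byte[] input) throws NoSuchAlgorithmException {
    // a fresh MessageDigest per call, so each iteration hashes only its own input
    return MessageDigest.getInstance("SHA-256").digest(input);
}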
And the Python code looks like this:
import hashlib

# hash 1 time...
hasher = hashlib.sha256()
hasher.update(raw)
digest = hasher.digest()

# then hash 64k-1 times
for x in range(0, 64000-1):
    # expect digest is bytes and not hex string
    hasher.update(digest)
    digest = hasher.digest()

print digest.encode("hex")
The Python result calculated the hash on the hex representation of the first digest (String), rather than the raw digest bytes. So, I get varying outputs.
The .update method of a hasher appends its argument to the previously supplied data (see the Python docs). Instead, you should create a new hasher each time you want to compute a digest.
import hashlib

# hash 1 time...
digest = hashlib.sha256(raw).digest()

# then hash 64k-1 times
for x in range(0, 64000-1):
    digest = hashlib.sha256(digest).digest()

print digest.encode("hex")
