TOTP / HOTP / HmacSHA256 with unsigned bytes key in Java

As we can see from the following questions:
Java HmacSHA256 with key
Java vs. Golang for HOTP (rfc-4226)
Java doesn't really play nicely when using a key in a TOTP / HOTP / HmacSHA256 use case. My analysis is that the following cause trouble:
String.getBytes will (of course) give negative byte values for characters with a character value > 127 (see the sketch right after this list);
javax.crypto.Mac and javax.crypto.spec.SecretKeySpec both externally and internally use byte[] for accepting and transforming the key.
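To illustrate the String.getBytes point, here is a small sketch (not from the original question) showing how a character above 127 becomes two bytes under UTF-8 but stays a single byte under ISO-8859-1:

import java.nio.charset.StandardCharsets;

public class CharsetDemo {
    public static void main(String[] args) {
        String s = "\u00FF"; // 'ÿ', character value 255
        // UTF-8 encodes it as two bytes (0xC3 0xBF), ISO-8859-1 as the single byte 0xFF
        System.out.println(s.getBytes(StandardCharsets.UTF_8).length);      // 2
        System.out.println(s.getBytes(StandardCharsets.ISO_8859_1).length); // 1
    }
}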
We have acquired a number of Feitian C-200 Single Button OTP devices, and they come with a hexadecimal string secret that decodes to byte values > 127.
We have successfully created a PoC in Ruby for these tokens, which works flawlessly. Since we want to integrate these in Keycloak, we need to find a Java solution.
Since every implementation of TOTP / HOTP / HmacSHA256 we have seen makes use of the javax.crypto library and byte[], we fear we would have to rewrite all of the classes involved to use int in order to support this scenario.
Q: Is there another way? How can we use secrets whose bytes have values > 127 in an HmacSHA256 calculation in Java, without having to rewrite everything?
Update
I was looking in the wrong direction. My problem was that the key was represented as a String (UTF-16 in Java), which contained Unicode characters that were exploded into two bytes by getBytes() before being passed into the SecretKeySpec.
Forcing StandardCharsets.ISO_8859_1 on this conversion fixes the problem.

Signed vs. unsigned is a presentation issue that's mainly relevant to humans. The computer doesn't know or care whether 0xFF means -1 or 255 to you. So no, you don't need to use ints; using byte[] works just fine.
This doesn't mean that you can't break things, since some operations work based on default signed variable types. For example:
byte b = (byte)255; // b is -1 or 255, depending on how you interpret it
int i = b; // i is -1 (or 4294967295 if read as unsigned) instead of 255: the sign bit is extended
int u = b & 0xFF; // u is 255
It seems to throw many people off that Java has only signed primitives (boolean and char notwithstanding). However, Java is perfectly capable of performing cryptographic operations, so all these questions where something is "impossible" are just user errors, which is not something you want when writing security-sensitive code.
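For example, a minimal HMAC-SHA256 sketch with a key containing byte values above 127 (the key and message here are made up for illustration):

import java.nio.charset.StandardCharsets;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class HmacSha256Demo {
    public static void main(String[] args) throws Exception {
        // Key bytes above 127 simply show up as negative byte values in Java;
        // the MAC is computed over the same 8 bits either way.
        byte[] key = { (byte) 0xFF, (byte) 0x80, (byte) 0xC3, 0x7F, 0x01 };
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        byte[] tag = mac.doFinal("message".getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : tag) hex.append(String.format("%02x", b & 0xFF));
        System.out.println(hex); // same digest as any other implementation given the same key bytes
    }
}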

Don't be afraid of Java :) I've tested dozens of tokens from different vendors, and everything works fine in Java; you just need to pick the correct converter.
It's a common mistake to get bytes from a String with getBytes() instead of using a proper converter. The file you have from your vendor represents the secret keys in hex format, so just google 'java hex string to byte array' and choose a solution that works for you.
Hex, Base32 and Base64 are just representations, and you can easily convert from one to another.
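For the hex case, a hand-rolled decoder along these lines is one possible sketch (it assumes an even-length, well-formed hex string; error handling omitted):

// Minimal hex-string-to-byte-array decoder; drop it into any utility class.
public static byte[] hexToBytes(String hex) {
    byte[] out = new byte[hex.length() / 2];
    for (int i = 0; i < out.length; i++) {
        int hi = Character.digit(hex.charAt(2 * i), 16);
        int lo = Character.digit(hex.charAt(2 * i + 1), 16);
        out[i] = (byte) ((hi << 4) | lo);
    }
    return out;
}

The resulting byte[] can be handed straight to SecretKeySpec without any String round trip.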

I ran into exactly the same issue (some years later): we got Feitian devices and had to set up the server-side code for them.
None of the available implementations worked with them (neither PHP nor Java).
Solution: Feitian devices come with seeds in hexadecimal. First you have to decode the seed into raw binary (e.g. in PHP using hex2bin()). That data is the correct input to the TOTP/HOTP functions.
The Java equivalent of hex2bin() is a bit tricky, and the solution is spelled out in the OP's question (the update).
(Long story short: if you turn the hex2bin result into a String, you have to interpret it with StandardCharsets.ISO_8859_1, otherwise some characters will be expanded into two-byte UTF-16 characters, which yields a different passcode in the end.)
String hex = "1234567890ABCDEF"; // original seed from Feitian, as a hex string
String secretKey = new String(hex2bin(hex), StandardCharsets.ISO_8859_1); // hex2bin: hex string -> raw bytes
Key key = new SecretKeySpec(secretKey.getBytes(StandardCharsets.ISO_8859_1), "RAW");
// or without the String round trip:
Key key = new SecretKeySpec(hex2bin(hex), "RAW");

Related

How can I directly work with bits in Clojure?

I began working through the first problem set over at https://cryptopals.com the other day. I'm trying to learn Clojure simultaneously, so I figured I'd implement all of the exercises in Clojure. These exercises are for learning purposes of course, but I'm going out of my way to not use any libraries besides clojure.core and the Java standard library.
The first exercise asks you to write code that takes in a string encoded in hexadecimal and spit out a string encoded in base64. The algorithm for doing this is fairly straightforward:
Get the byte associated with each couplet of hex digits (for example, the hex 49 becomes 01001001).
Once all bytes for the hex string have been retrieved, turn the list of bytes into a sequence of individual bits.
For every 6 bits, return a base64 character (they're all represented as units of 6 bits).
I'm having trouble actually representing and working with bits and bytes in Clojure (operating on raw bytes is one of the requirements of the exercise). I know I can do byte-array on the initial hex values and get back an array of bytes, but how do I access the raw bits so that I can translate from a series of bytes into a base64 encoded string?
Any help or direction would be greatly appreciated.
Always keep a browser tab open to the Clojure CheatSheet.
For detailed bit work, you want functions like bit-and, bit-test, etc.
If you are just parsing a hex string, see the java.lang.BigInteger constructor with the radix option: https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/math/BigInteger.html#%3Cinit%3E(java.lang.String,int)
java.lang.Long/parseLong(string, radix) is also useful.
For the base64 part, you may be interested in the tupelo.base64 functions. This library function is all you really need to convert a string of hex into a base-64 string, although it may not count for your homework!
Please note that Java includes base-64 functions:
https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/Base64.html
Remember, also, that you can get ideas by looking at the source code for both Clojure & the Tupelo lib.
And also, keep in mind that one of Clojure's super-powers is the ability to write low-level or performance-critical code in native Java and then link all the *.clj and *.java files together into one program (you can use Leiningen to compile & link everything in one step).
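Putting the JDK suggestions above together, a rough Java sketch of the hex-to-Base64 conversion could look like this (the input string is just an example; note the BigInteger caveat in the comments):

import java.math.BigInteger;
import java.util.Base64;

public class HexToBase64 {
    public static void main(String[] args) {
        String hex = "48656c6c6f"; // "Hello" in hex, used here only as an example
        // BigInteger parses the hex digits; toByteArray() returns the raw bytes.
        // Caveat: BigInteger strips leading zero bytes and may prepend a sign byte,
        // so a per-couplet decoder is more robust for arbitrary input.
        byte[] raw = new BigInteger(hex, 16).toByteArray();
        System.out.println(Base64.getEncoder().encodeToString(raw)); // SGVsbG8=
    }
}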

What integer format when reading binary data from Java DataOutputStream in PHP?

I'm aware that this is probably not the best idea but I've been playing around trying to read a file in PHP that was encoded using Java's DataOutputStream.
Specifically, in Java I use:
dataOutputStream.writeInt(number);
Then in PHP I read the file using:
$data = fread($handle, 4);
$number = unpack('N', $data);
The strange thing is that the only format character in PHP that gives the correct value is 'N', which is supposed to represent "unsigned long (always 32 bit, big endian byte order)". I thought that int in Java was always signed?
Is it possible to reliably read data encoded in Java in this way or not? In this case the integer will only ever need to be positive. It may also need to be quite large so writeShort() is not possible. Otherwise of course I could use XML or JSON or something.
This is fine, as long as you don't need that extra bit. 'l' (instead of 'N') would work on a big-endian machine.
Note, however, that the maximum number that you can store is 2,147,483,647 unless you want to do some math on the Java side to get the proper negative integer to represent the desired unsigned integer.
Note that a signed Java integer uses the two's complement method to represent a negative number, so it's not as easy as flipping a bit.
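As a rough example of that "math on the Java side": to make PHP's 'N' read the unsigned value 3,000,000,000, the Java side would have to write the signed int with the same bit pattern (dataOutputStream being the stream from the question):

long unsigned = 3_000_000_000L;        // the value you want PHP to see via unpack('N', ...)
int sameBits = (int) unsigned;         // -1294967296 in Java's signed view, same 32 bits
dataOutputStream.writeInt(sameBits);   // writes the bytes B2 D0 5E 00, i.e. 3,000,000,000 unsigned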
DataOutputStream.writeInt:
Writes an int to the underlying output stream as four bytes, high byte first.
The formats available for the unpack function for signed integers all use machine-dependent byte order. My guess is that your machine uses a different byte order than Java. If that is true, the DataOutputStream + unpack combination will not work for any signed primitive.
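To see the byte order writeInt actually produces, here is a tiny self-contained sketch (nothing beyond the JDK is assumed):

import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;

public class WriteIntDemo {
    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        new DataOutputStream(buf).writeInt(1);
        // Prints "00 00 00 01": high byte first (big endian), which matches
        // PHP's unpack('N', ...) regardless of the host machine's byte order.
        for (byte b : buf.toByteArray()) System.out.printf("%02x ", b);
    }
}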

Problems with reproducing the same HMAC MD5 on Java and C

I am currently stumped on recreating, in C, an HMAC MD5 hash generated by a Java program. Any help, suggestions, corrections and recommendations would be greatly appreciated.
The Java program creates the HMAC MD5 string (encoded to a base 16 hex string, 32 characters long) using UTF-16LE and Mac; what I need is to recreate the same result in a C program.
I am using the RSA source for MD5 and the HMAC-MD5 code is from RFC 2104 (http://www.koders.com/c/fidBA892645B9DFAD21A2B5ED526824968A1204C781.aspx)
I have "simulated" UTF-16LE in the C implementation by padding every even byte with 0s. The hex/int representation seems to be consistent on both ends when I do this; but is this the correct way to do it? I figured this would be the best way because the HMAC-MD5 function call only allows for a byte array (there is no such thing as a double-byte array call in the RFC 2104 implementation, but that's irrelevant).
When I run the string to be HMAC'd through - you naturally get "garbage". Now my problem is that not even the "garbage" is consistent across the systems (excluding the fact that perhaps the base 16 encoding could be inconsistent). What I mean by this is "�����ԙ���," might be the result from Java HMAC-MD5 but C might give "v ����?��!��{� " (Just an example, not actual data).
I have 2 things I would like to confirm:
Did padding every even byte with 0 mess up the HMAC-MD5 algorithms? (either because it would come across a null immediately after the first byte or whatever)
Is the fact that I see different "garbage" because C and Java are using different character encodings? (same machine running Ubuntu)
I am going to read through the HMAC-MD5 and MD5 code to see how they treat the byte array going in (whether or not the null even bytes is causing a problem). I am also having a hard time writing a proper encoding function on the C side to convert the resultant string into a 32 character hex string. Any input/help would be greatly appreciated.
Update (Feb 3rd): Would passing a signed vs. unsigned byte array alter the output of HMAC-MD5? The Java implementation takes a byte array (which is SIGNED), but the C implementation takes an UNSIGNED byte array. I think this might also be a factor in producing different results. If this does affect the final output, what can I really do? Would I pass a SIGNED byte array in C (the method takes an unsigned byte array), or would I cast the SIGNED byte array to unsigned?
Thanks!
Clement
The problem is probably due to your naive creation of the UTF-16 string. Any character greater than 0x7F (see unicode explanation) needs to be expanded into the UTF encoding scheme.
I would work on first getting the same byte string between the C and Java implementation as that is probably where your problem lies -- so I would agree with your assumption (1)
Have you tried to calculate the MD5 without padding the C-string, but rather just converting it to UTF -- you can use iconv to make experiments with the encoding.
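To pin down what the C code has to reproduce, here is a sketch of what (as far as I understand) the Java side is doing; the key, the message, and the assumption that the key is also UTF-16LE encoded are all placeholders:

import java.nio.charset.StandardCharsets;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class HmacMd5Reference {
    public static void main(String[] args) throws Exception {
        byte[] key = "secret".getBytes(StandardCharsets.UTF_16LE);   // placeholder key
        byte[] msg = "message".getBytes(StandardCharsets.UTF_16LE);  // placeholder message
        Mac mac = Mac.getInstance("HmacMD5");
        mac.init(new SecretKeySpec(key, "HmacMD5"));
        byte[] tag = mac.doFinal(msg);
        StringBuilder hex = new StringBuilder();
        for (byte b : tag) hex.append(String.format("%02x", b & 0xFF));
        System.out.println(hex); // the 32-character hex string the C program must match
    }
}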
The problem was that I used the RSA reference implementation. After I switched to OpenSSL all my problems were resolved. The RSA implementation did not take into consideration all the necessary details of cross-platform support (including 32-bit/64-bit processors).
Always use OpenSSL, because they have already resolved all the cross-platform issues.

Network communication between C# server and JAVA client: how to handle that transition?

I have a C# server that cannot be altered. In C#, a byte ranges from 0 to 255, while in JAVA it ranges from -128 to 127.
I have read about the problem with unsigned byte/ints/etc and the only real option as I have found out is to use "more memory" to represent the unsigned thing:
http://darksleep.com/player/JavaAndUnsignedTypes.html
Is that really true?
So when having network communication between the JAVA client and the C# server, the JAVA client receives byte arrays from the server. The server sends them "as unsigned" but when received they will be interpreted as signed bytes, right?
Do I then have to typecast each byte into an Int and then add 127 to each of them?
I'm not sure here... but how do I interpret it back to the same values (int, strings etc) as I had on the C# server?
I find this whole situation extremely messy (not to mention the endianess-problems, but that's for another post).
A byte is 0-255 in C#, not 0-254.
However, you really don't need to worry in most cases - basically in both C# and Java, a byte is 8 bits. If you send a byte array from C#, you'll receive the same bits in Java and vice versa. If you then convert parts of that byte array to 32-bit integers, strings etc, it'll all work fine. The signed-ness of bytes in Java is almost always irrelevant - it's only if you treat them numerically that it's a problem.
What's more of a problem is the potential for different endianness when (say) converting a 32-bit integer into 4 bytes in Java and then reading the data in C# or vice versa. You'd have to give more information about what you're trying to do in order for us to help you there. In particular do you already have a protocol that you need to adhere to? If so, that pretty much decides what you need to do - and how hard it will be depends on what the protocol is.
If you get to choose the protocol, you may wish to use a platform-independent serialization format such as Protocol Buffers which is available for both .NET and Java.
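If it turns out the C# side writes little-endian data (which C#'s BinaryWriter does by default), one possible sketch of reading such an int in Java uses ByteBuffer:

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class ReadLittleEndianInt {
    // Java's DataInputStream always assumes big endian; ByteBuffer lets you
    // pick the byte order explicitly to match whatever the server sends.
    public static int readInt(byte[] fourBytes) {
        return ByteBuffer.wrap(fourBytes).order(ByteOrder.LITTLE_ENDIAN).getInt();
    }
}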
Unfortunately, yes ... the answer is to "use more memory", at least on some level.
You can store the data as a byte array in java, but when you need to use that data numerically you'll need to move up to an int and add 256 to negative values. A bitwise & will do this for you quickly and efficiently.
// 'bytes' is the byte[] received from the server
int foo = bytes[3] & 0xFF; // 0-255; the mask effectively adds 256 to negative byte values
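And if you need a whole unsigned 32-bit value out of four received bytes (assuming big-endian order here; adjust if the server sends them the other way round), the same masking idea extends naturally:

// 'bytes' is the byte[] received from the server; a long is needed because
// the top bit of an unsigned 32-bit value does not fit in a signed int.
long value = ((bytes[0] & 0xFFL) << 24)
           | ((bytes[1] & 0xFFL) << 16)
           | ((bytes[2] & 0xFFL) << 8)
           |  (bytes[3] & 0xFFL);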

JAVA RSAES-OAEP attack

I need to implement an RSAES-OAEP PKCS#1 v2.1 attack, using a Unix executable oracle and an ASCII-format challenge file. The format of the challenge file is
{n}
{e}
{c}
where N (an integer) is a 1024-bit modulus, e (an integer) is the public exponent and c (an
octet string) is the ciphertext corresponding to the RSAES-OAEP encryption of some unknown
plaintext m (an octet string) under the public key (N, e). Note that the plaintext is ASCII text
(i.e., each octet is an ASCII encoded character), and that the RSAES-OAEP encryption will
have used SHA-1 as the hash function and a null label (i.e., in all cases the label is an octet
string of length zero).
The executable represents an RSAES-OAEP decryption oracle: when executed from a BASH
shell using the command
bash$ ./USER < USER.challenge
it tries to decrypt the ciphertext read from stdin using the private key (N, d). Note that N is
read from stdin (i.e., from the challenge) but d (an integer) is a private exponent embedded
into the oracle (i.e., you do not have access to it).
The challenge file is as follows:
99046A2DB3D185D6D2728E799D66AC44F10DDAEE1C0A1AC5D7F34F04EDE17B96A5B486D95D927AA9B58FC91865DBF3A1685141345CC31B92E13F06E8212BAB22529F7D06B503AAFEEB89800E12EABA50C3F3BBE86F5966A88CCCF5C843281F8B98DF97A3111458FCA89B8085A96AE68EAEBAE270831D41C956159B81D29503
80A3C4043F940BE6AC16B11A0A77016DBA96B0239311AF182DD70E214E07E7DF3523CE1E269B176A3AAA0BA8F02C59262F693D6A248F22F2D561ED7ECC3CB9ABD0FE7B7393FA0A16C4D07181EEF6E27D97F48B83B90C58F51FD40DCDA71EF5E3C3E97D1697DC8E26B694B5CAFE59E427B12EE82A93064C81AAB74431F3A735
57D808889DE1417235C790CB7742EB76E537F55FD49941EBC862681735733F8BB095EDBB3C0DA44AB8F1176E69A61BBD3F0D31EB997071758A5DD850730A1D171E9EC92788EBA358974CE521537EE4A809BF1607D04EFD4A407866970981B88F44D5260D25C9E8864D5FC2AFB2CB90994DD1934BCEA728B38A00D4712AE0EE
Any ideas as to how to proceed for this attack?!
thanks
Can anyone guide me on this?
The first thing you could try is to find out whether you can apply the attack by
J. Manger from the paper "A Chosen Ciphertext Attack on RSA Optimal Asymmetric Encryption
Padding (OAEP) as Standardized in PKCS #1 v2.0." Crypto 2001.
That means you have to find out what kind of information you can get from the oracle.
I.e., choose two arbitrary integers m0, m1 such that m1 is a 1024-bit integer smaller than n and m0 is at most 1023 bits long. If you pass m0^e mod n and m1^e mod n to the oracle, do you get a different response? If so, then you might be able to apply the attack in the paper above. Otherwise you will have to search for another flaw in the decryption oracle.
Another approach that might work is to try to modify the modulus n. If the oracle really reads the modulus from user-supplied input, then it looks like modifying the modulus should work and the attack becomes quite easy. I don't have access to the implementation of the oracle, so I can only guess what might be possible. If you can check for any chosen n', c' whether c'^d mod n' is a valid OAEP-encoded plaintext, then decrypting the original message is not all you can do; in fact, you can also recover d and hence factor the original RSA modulus.
(Furthermore this would indeed be a very nice puzzle, so I don't want to spoil the fun by giving a step-by-step recipe for how to solve it.)
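Without spoiling the attack itself, here is a minimal sketch of the plumbing in Java: parsing n and e from the challenge file (hex values, one per line, as described above) and producing a chosen ciphertext m^e mod n to feed to the oracle. The file-reading details are an assumption about the layout, not taken from the oracle itself.

import java.math.BigInteger;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

public class OracleProbe {
    public static void main(String[] args) throws Exception {
        // Assumed challenge layout: n, e, c as hex strings, one per line.
        List<String> lines = Files.readAllLines(Paths.get(args[0]));
        BigInteger n = new BigInteger(lines.get(0), 16);
        BigInteger e = new BigInteger(lines.get(1), 16);
        // A chosen plaintext (here simply 2) encrypted under the public key;
        // submitting values like this and observing the oracle's error
        // behaviour is the starting point of Manger's attack.
        BigInteger probe = BigInteger.valueOf(2).modPow(e, n);
        System.out.println(probe.toString(16).toUpperCase());
    }
}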
