So basically I have these snippets of code and would like them to produce the same output:
require 'openssl'
aes = OpenSSL::Cipher::Cipher.new("AES-128-CBC")
aes.key = "aaaaaaaaaaaaaaaa"
aes.iv = "aaaaaaaaaaaaaaaa"
aes.encrypt
encrypted = aes.update("1234567890123456") << aes.final
puts encrypted.unpack('H*').join
This prints:
8d3bbffade308f8e4e80cb77ecb8df19ee933f75438cec1315c4a491bd1b83f4
And this Java code:
Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
String key = "aaaaaaaaaaaaaaaa";
String textToEncrypt = "1234567890123456";
SecretKeySpec keyspec = new SecretKeySpec(key.getBytes(), "AES");
IvParameterSpec ivspec = new IvParameterSpec(key.getBytes());
cipher.init(Cipher.ENCRYPT_MODE, keyspec, ivspec);
byte[] encrypted = cipher.doFinal(textToEncrypt.getBytes());
System.out.println(Crypto.bytesToHex(encrypted));
Prints:
2d3760f53b8b3dee722aed83224f418f9dd70e089ecfe9dc689147cfe0927ddb
The annoying thing is that it was working a couple of days ago, so I am not sure what happened. What's wrong with this code? Do you see anything unusual?
The Ruby script is wrong. You have to call the encrypt method first, and then set the key and IV:
require 'openssl'
aes = OpenSSL::Cipher::Cipher.new("AES-128-CBC")
aes.encrypt
aes.key = "aaaaaaaaaaaaaaaa"
aes.iv = "aaaaaaaaaaaaaaaa"
encrypted = aes.update("1234567890123456") << aes.final
puts encrypted.unpack('H*').join
I figured this out because, when trying to decrypt an encrypted string, I got:
aescrypt.rb:13:in `final': bad decrypt (OpenSSL::Cipher::CipherError)
from aescrypt.rb:13:in `<main>'
It seems you already found the reason why your script gives different results.
Some more things to consider:
Don't ever hardcode the key in the program - that way you can't easily change it, and if someone gets access to your program code, she also gets to see the key.
Don't ever use a constant initialization vector. Instead, generate a random one and send it together with the ciphertext. Alternatively, if you generate the key from a password and some salt, you can also generate the IV from the same ... but don't use the key directly as IV.
Your key/IV values are strings, not bytes. String.getBytes() (in Java) converts the string to bytes using some encoding. The encoding used is system-dependent, and none of the usual string encodings (UTF-8, Latin-1, ...) can represent all bytes as printable (and typeable) characters. Preferably use something like Base64 or hex encoding if you have to store your key as a string.
And whenever you transform a string to bytes, specify an encoding (and use the same encoding later for retrieving it).
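A minimal Java sketch of those points, assuming keyBytes already comes from a key store or a key-derivation function and plaintext is the string to protect (both are placeholders, not code from the question):
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

// Fresh random IV for every message -- never a constant.
byte[] ivBytes = new byte[16];
new SecureRandom().nextBytes(ivBytes);

Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(keyBytes, "AES"), new IvParameterSpec(ivBytes));
// Always name the encoding explicitly when turning strings into bytes.
byte[] ct = cipher.doFinal(plaintext.getBytes(StandardCharsets.UTF_8));

// Send the IV together with the ciphertext, e.g. prepended, then Base64 for transport.
byte[] out = new byte[ivBytes.length + ct.length];
System.arraycopy(ivBytes, 0, out, 0, ivBytes.length);
System.arraycopy(ct, 0, out, ivBytes.length, ct.length);
String transport = Base64.getEncoder().encodeToString(out);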
@Cristian: for the key and initialization vector, you can create a function using today's date plus a fixed secret keyword.
E.g.: key = January 8, 2012 + Key
And for the initialization vector:
E.g.: iv = January 8, 2012 + IV
Then feed that data (key and IV) into MD5; it will produce a 16-byte output that you can use for the key and IV. The key and IV will then change every day.
Make sure both systems use the same date format and are set to the same date.
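A sketch of that suggestion in Java; the date pattern and the fixed keywords ("Key", "IV") are assumptions that both systems would have to agree on:
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;

String today = new SimpleDateFormat("MMMM d, yyyy", Locale.US).format(new Date()); // e.g. "January 8, 2012"
MessageDigest md5 = MessageDigest.getInstance("MD5");
// MD5 always yields 16 bytes -- exactly one AES-128 key or IV.
byte[] key = md5.digest((today + "Key").getBytes(StandardCharsets.UTF_8));
byte[] iv  = md5.digest((today + "IV").getBytes(StandardCharsets.UTF_8));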
As a bit of context to this: I am converting a Java file to Python and am on the last operation. I'm at about 200 LOC, so it makes it that much more edge-of-the-seat...
Anyway, in Java the operation is:
Cipher cipher = Cipher.getInstance("AES/ecb/nopadding");
cipher.init(Cipher.DECRYPT_MODE, new SecretKeySpec(keys[i], "AES"));
//doFinal(byte[] input, int inputOffset, int inputLen, byte[] output, int outputOffset)
cipher.doFinal(save, saveOffset, 16, save, saveOffset);
In Python I have this:
from Crypto.Cipher import AES
cipher = AES.new(bytes(keys[i]), AES.MODE_ECB)
cipher.decrypt(?????)
This is taken from the package documentation under decrypt():
:Parameters:
ciphertext : bytes/bytearray/memoryview
The piece of data to decrypt.
The length must be multiple of the cipher block length.
:Keywords:
output : bytearray/memoryview
The location where the plaintext must be written to.
If ``None``, the plaintext is returned.
:Return:
If ``output`` is ``None``, the plaintext is returned as ``bytes``.
Otherwise, ``None``.
As you can see, .decrypt() doesn't really have an input for offsets, but I was wondering if there is some way around this?
This is why I decided to post on SO. Would I be able to do:
temp_bytes = save[offset]
temp_decrypt = cipher.decrypt(temp_bytes)
save[offset] = temp_decrypt
Or, when decrypting, does it use the whole file as context, so I would get the wrong output? I would love to just try it, but the output is gibberish that I will have to write another program to parse (another Java-to-Python project).
What ended up working was:
save[saveOffset:saveOffset+16] = cipher.decrypt(save[saveOffset:saveOffset+16])
Did not expect it to work so easily. (It works because ECB mode decrypts each 16-byte block independently, so decrypting one block on its own gives the same result as decrypting it as part of the whole buffer.)
I'm developing an Android app and I want to do encryption with AES-128.
I use a secret key and text in hexadecimal.
I use this website: http://testprotect.com/appendix/AEScalc to compute the AES encryption with the key "c4dcc3c6ce0acaec4327b6098260b0be"; my text to encrypt is "6F4B1B252A5F0C3F2992E1A65E56E5B8" (hexadecimal).
So the result this website gives me is "859499d0802de8cc6ba4f208da648a8f", and that's the result I want to get.
http://hpics.li/d89105a
I found some code on the net, but a lot of it uses a seed to generate a random secret key, and I want a fixed secret key.
I'm currently using the following code. I think it's the code I need, but the result it gives me is D/resultdebug: 72adc67b6d11e1c5fb89ddf5faeb0e030686b91f8bfaf6c41335f08955343f87
which is not the same result as the website's, which is the one I want.
String text16 = "6F4B1B252A5F0C3F2992E1A65E56E5B8";
String secret16 = "c4dcc3c6ce0acaec4327b6098260b0be";
SecretKeySpec sks = new SecretKeySpec(secret16.getBytes(),"AES");
Cipher c = Cipher.getInstance("AES");
c.init(Cipher.ENCRYPT_MODE, sks);
c.update(text16.getBytes());
byte[] ciphertext = c.doFinal();
Log.d("resultdebug",new String(Hex.encode(ciphertext), "ASCII"));
Could you please tell me what's wrong? Thanks.
You are not converting the hex to binary; you are getting the bytes of the characters themselves, not the values that they hex-encode.
Thanks zaph, that solved my problem.
I added these two lines to my code:
byte[] text = stringhextobyte(text16);
byte[] secret = stringhextobyte(secret16);
The function stringhextobyte converts my hex string to a byte array, and I now get the expected result.
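The question doesn't show stringhextobyte itself; a minimal sketch of what such a helper presumably looks like (the name is the OP's, the body is an assumption):
static byte[] stringhextobyte(String hex) {
    // Parse each pair of hex digits into one byte.
    byte[] out = new byte[hex.length() / 2];
    for (int i = 0; i < out.length; i++) {
        out[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
    }
    return out;
}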
I am using the AES algorithm in my application to encrypt my data. My key is 256 bits. The encrypted token formed looks like this:
pRplOI4vTs41FICGeQ5mlWUoq5F3bcviHcTZ2hN
Now if I change one bit of the token, flipping one character from upper case to lower case, like this:
prplOI4vTs41FICGeQ5mlWUoq5F3bcviHcTZ2hN
some of the token still decrypts correctly, along with junk. My concern is why any part of the data is still visible when one bit has been changed. My code for encryption is as follows:
cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
Key secretKeySpecification = secretKeyData.getKey();
cipher.init(
Cipher.ENCRYPT_MODE,
secretKeySpecification,
new IvParameterSpec(secretKeyData.getIV().getBytes("UTF-8")));
byte[] bytesdata = cipher.doFinal(data.getBytes());
String encodedData = new BASE64Encoder().encode(bytesdata);
My code for decryption is:
Key secretKeySpecification = decryptionKeyDetails.getKey();
cipher.init(Cipher.DECRYPT_MODE, secretKeySpecification,
new IvParameterSpec(decryptionKeyDetails.getIV()
.getBytes("UTF-8")));
byte[] bytesdata;
byte[] tempStr = new BASE64Decoder()
.decodeBuffer(splitedData[0]);
bytesdata = cipher.doFinal(tempStr);
return new String(bytesdata);
Ciphertext modes of operation have specific forms of error propagation. In CBC mode, for example, flipping one ciphertext bit completely garbles the plaintext of that one block and flips the corresponding single bit in the next block, while all other blocks decrypt normally; that is why most of your token still decrypts. There is such a thing as Bi-IGE (Bi-directional Infinite Garble Extension), which does change the whole plaintext if any error is introduced. However, it requires more than one pass, and it still won't protect you from getting random data if a bit was changed.
In the end, listen to Oleg and Codes (and Wikipedia, and even me) and add an authentication tag to your ciphertext. Validate the authentication tag (e.g. an HMAC) before decryption. Don't forget to include the other data in your protocol, such as the IV, or you may end up with a plaintext whose first block has been changed.
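A minimal sketch of that encrypt-then-MAC pattern in Java; macKeyBytes, ivBytes and ciphertextBytes are placeholders (the MAC key must be a second key, separate from the AES key):
import java.security.MessageDigest;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Sender: MAC the IV plus ciphertext after encrypting.
Mac hmac = Mac.getInstance("HmacSHA256");
hmac.init(new SecretKeySpec(macKeyBytes, "HmacSHA256"));
hmac.update(ivBytes);
byte[] tag = hmac.doFinal(ciphertextBytes);
// transmit ivBytes + ciphertextBytes + tag

// Receiver: recompute and compare the tag in constant time BEFORE decrypting.
hmac.reset();
hmac.update(ivBytes);
byte[] expected = hmac.doFinal(ciphertextBytes);
if (!MessageDigest.isEqual(expected, tag)) {
    throw new SecurityException("authentication tag mismatch -- refusing to decrypt");
}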
I'm working on some interoperable code for encrypting/decrypting strings between Java and node.js, and have managed to get node.js to decrypt what Java has encrypted; the final piece of a successful decryption was the secret key.
To derive a secret key in Java, we write:
private static Key deriveSecretKey(String secretKeyAlgorithm, String secretKey, String salt) throws Exception {
SecretKeyFactory factory = SecretKeyFactory.getInstance(SECRET_KEY_FACTORY_ALGORITHM);
KeySpec spec = new PBEKeySpec(secretKey.toCharArray(), char2byte(salt), 65536, 128);
SecretKey tmp = factory.generateSecret(spec);
SecretKey secret = new SecretKeySpec(tmp.getEncoded(), secretKeyAlgorithm);
return secret;
}
Notice the key length passed to PBEKeySpec() is 128 here. In node.js, however, I get an "Invalid key length" if I try to use 128 and actually have to use 16 here instead:
crypto.pbkdf2(key_value, salt_value, 65536, 16, function(err, key) {
var decipher = crypto.createDecipheriv('aes-128-cbc', key, iv);
// decipher.setAutoPadding(false);
var decoded = decipher.update(ciphertext, 'binary', 'utf8');
decoded += decipher.final('utf8');
console.log('Result: ' + decoded);
});
Console output:
Result: Super secret stuff -- right here.
Curious as to why key lengths are specified differently between these two functions. Thanks!
Normally, key sizes are defined in bits. However, most cryptographic libraries don't handle bit sizes that cannot be divided by 8 particularly well; the output is almost always in octets (8-bit bytes). So it is up to the designer of the API whether the user has to specify the size in bits or as the number of octets in the octet string (byte array).
The only way to really know why bits or bytes are being chosen is to ask the person who designed the library. In my own code, I do try to keep to (ad-hoc) standards - so bits for key sizes. If it's unclear from the context which is which, it is probably best to use names such as blockSizeBits or blockSizeBytes. Documentation may be of help too of course, but using specific identifiers is best in my opinion.
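For the two APIs in the question specifically: Java's PBEKeySpec takes the length in bits while Node's crypto.pbkdf2 takes it in bytes, so 128 and 16 describe the same key. A quick check on the Java side (the passphrase and salt here are made-up placeholders):
import java.nio.charset.StandardCharsets;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
byte[] salt = "some-salt".getBytes(StandardCharsets.UTF_8);
// 128 here is a length in BITS...
byte[] key = factory.generateSecret(new PBEKeySpec("passphrase".toCharArray(), salt, 65536, 128)).getEncoded();
System.out.println(key.length); // ...which prints 16, the byte count Node's pbkdf2 expects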
I have observed the following while working with Cipher.
Encryption code:
Cipher aes = Cipher.getInstance("AES");
aes.init(Cipher.ENCRYPT_MODE, generateKey());
byte[] ciphertext = aes.doFinal(rawPassword.getBytes());
Decryption code:
Cipher aes = Cipher.getInstance("AES");
aes.init(Cipher.DECRYPT_MODE, generateKey());
byte[] ciphertext = aes.doFinal(rawPassword.getBytes());
I get an IllegalBlockSizeException ("Input length must be multiple of 16 when ...") when running the decryption code.
But if I change the decryption code to
Cipher aes = Cipher.getInstance("AES/ECB/PKCS5Padding"); //I am passing the padding too
aes.init(Cipher.DECRYPT_MODE, generateKey());
byte[] ciphertext = aes.doFinal(rawPassword.getBytes());
It works fine.
I understand that the transformation follows the pattern algorithm/mode/padding, so I thought it was because I didn't mention the padding. So I tried giving the mode and padding during encryption too:
Encryption code:
Cipher aes = Cipher.getInstance("AES/ECB/PKCS5Padding");//Gave padding during encryption too
aes.init(Cipher.ENCRYPT_MODE, generateKey());
byte[] ciphertext = aes.doFinal(rawPassword.getBytes());
Decryption code:
Cipher aes = Cipher.getInstance("AES/ECB/PKCS5Padding");
aes.init(Cipher.DECRYPT_MODE, generateKey());
byte[] ciphertext = aes.doFinal(rawPassword.getBytes());
But it fails with IllegalBlockSizeException.
What is the reason for the exception, and what exactly is happening underneath?
If anyone can help, thanks in advance.
UPDATE
It looks like the issue is with the strings I am encrypting and decrypting, because even the code that I said works doesn't always work. I am basically encrypting UUIDs (e.g. 8e7307a2-ef01-4d7d-b854-e81ce152bbf6). It works with certain strings and not with others.
The length of the encrypted string is 64, which is divisible by 16. Yes, I am running it on the same machine.
Method for secret key generation:
private Key generateKey() throws NoSuchAlgorithmException {
MessageDigest digest = MessageDigest.getInstance("SHA");
String passphrase = "blahbl blahbla blah";
digest.update(passphrase.getBytes());
return new SecretKeySpec(digest.digest(), 0, 16, "AES");
}
During decryption, one can only get an IllegalBlockSizeException if the input data is not a multiple of the block-size (16 bytes for AES).
If the key or the data was invalid (but correct in length), you would get a BadPaddingException because the PKCS #5 padding would be wrong in the plaintext. Very occasionally the padding would appear correct by chance and you would have no exception at all.
N.B. I would recommend you always specify the padding and mode. If you don't, you are liable to be surprised if the provider changes the defaults. AFAIK, the Sun provider converts "AES" to "AES/ECB/PKCS5Padding".
Though I haven't fully understood the internals, I have found what the issue is.
I fetch the encrypted string as a GET request parameter. As the string contains characters that are unsafe in URLs, it gets corrupted in transit. The solution is to URL-encode and URL-decode it.
I am able to do this successfully using URLEncoder and URLDecoder.
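A minimal sketch of that fix, assuming the Base64 ciphertext travels as a query parameter (base64Ciphertext and tokenParam are illustrative names, not from the question):
import java.net.URLDecoder;
import java.net.URLEncoder;

// Sender: Base64 can contain '+', '/' and '=', which are unsafe in URLs.
String encoded = URLEncoder.encode(base64Ciphertext, "UTF-8");
String url = "https://example.com/decrypt?token=" + encoded;

// Receiver: URL-decode the parameter before Base64-decoding and decrypting.
String base64Again = URLDecoder.decode(tokenParam, "UTF-8");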
Now the results are consistent. Thanks :)
I would be grateful if anyone can contribute more to this.