I'm trying to take the hash of a gzipped string in Python, and I need it to be identical to Java's. But Python's gzip implementation seems to be different from Java's GZIPOutputStream.
Python gzip:
import gzip
import hashlib
gzip_bytes = gzip.compress(bytes('test', 'utf-8'))
gzip_hex = gzip_bytes.hex().upper()
md5 = hashlib.md5(gzip_bytes).hexdigest().upper()
>>> gzip_hex
'1F8B0800678B186002FF2B492D2E01000C7E7FD804000000'
>>> md5
'C4C763E9A0143D36F52306CF4CCC84B8'
Java GZIPOutputStream:
import java.io.ByteArrayOutputStream;
import java.util.zip.GZIPOutputStream;
import java.io.IOException;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
public class HelloWorld {

    private static final char[] HEX_ARRAY = "0123456789ABCDEF".toCharArray();

    public static String bytesToHex(byte[] bytes) {
        char[] hexChars = new char[bytes.length * 2];
        for (int j = 0; j < bytes.length; j++) {
            int v = bytes[j] & 0xFF;
            hexChars[j * 2] = HEX_ARRAY[v >>> 4];
            hexChars[j * 2 + 1] = HEX_ARRAY[v & 0x0F];
        }
        return new String(hexChars);
    }

    public static String md5(byte[] bytes) {
        try {
            MessageDigest md = MessageDigest.getInstance("MD5");
            byte[] thedigest = md.digest(bytes);
            return bytesToHex(thedigest);
        }
        catch (NoSuchAlgorithmException e) {
            // the exception must actually be thrown, not just constructed
            throw new RuntimeException("MD5 Failed", e);
        }
    }

    public static void main(String[] args) {
        String string = "test";
        final byte[] bytes = string.getBytes();
        try {
            final ByteArrayOutputStream bos = new ByteArrayOutputStream();
            final GZIPOutputStream gout = new GZIPOutputStream(bos);
            gout.write(bytes);
            gout.close();
            final byte[] encoded = bos.toByteArray();
            System.out.println("gzip: " + bytesToHex(encoded));
            System.out.println("md5: " + md5(encoded));
        }
        catch (IOException e) {
            throw new RuntimeException("Failed", e);
        }
    }
}
Prints:
gzip: 1F8B08000000000000002B492D2E01000C7E7FD804000000
md5: 1ED3B12D0249E2565B01B146026C389D
So both gzip byte outputs seem very similar, but slightly different.
1F8B0800678B186002FF2B492D2E01000C7E7FD804000000
1F8B08000000000000002B492D2E01000C7E7FD804000000
Python's gzip.compress() method accepts a compresslevel argument in the range 0-9. I tried all of them, but none gives the desired result.
Is there any way to get the same result as Java's GZIPOutputStream in Python?
Your requirement "hash of gzipped string in Python and need it to be identical to Java's" cannot be met in general. You need to change your requirement, implementing your need differently. I would recommend requiring simply that the decompressed data have identical hashes. In fact, there is a 32-bit hash (a CRC-32) of the decompressed data already there in the two gzip strings, which are identical (0xd87f7e0c). If you want a longer hash, then you can append one. The last four bytes is the uncompressed length, modulo 232, so you can compare those as well. Just compare the last eight bytes of the two strings and check that they are the same.
The difference between the two gzip strings in your question illustrates the issue. One has a time stamp in the header, and the other does not (it is set to zeros). Even if they both had time stamps, they would still very likely be different. Some other header bytes differ as well, such as the originating operating system.
Furthermore, the compressed data in your examples is extremely short, so it just so happens to be identical in this case. However, for any reasonable amount of data, the compressed data generated by two gzippers will be different unless they happen to be made with exactly the same deflate code, the same version of that code, and the same memory-size and compression-level settings. If you are not in control of all of those, you will never be able to assure the same compressed data coming out of them, given identical uncompressed data.
In short, don't waste your time trying to get identical compressed strings.
Related
I want to load the MD5 of many different files. I am following this answer to do that, but the main problem is that the time taken to compute the MD5 of the files (possibly hundreds of them) is excessive.
Is there any way to find the MD5 of a file without consuming much time?
Note: the size of each file may be large (it may go up to 300 MB).
This is the code I am using:
import java.io.*;
import java.security.MessageDigest;
public class MD5Checksum {

    public static byte[] createChecksum(String filename) throws Exception {
        InputStream fis = new FileInputStream(filename);
        byte[] buffer = new byte[1024];
        MessageDigest complete = MessageDigest.getInstance("MD5");
        int numRead;
        do {
            numRead = fis.read(buffer);
            if (numRead > 0) {
                complete.update(buffer, 0, numRead);
            }
        } while (numRead != -1);
        fis.close();
        return complete.digest();
    }

    // see this How-to for a faster way to convert
    // a byte array to a HEX string
    public static String getMD5Checksum(String filename) throws Exception {
        byte[] b = createChecksum(filename);
        String result = "";
        for (int i = 0; i < b.length; i++) {
            result += Integer.toString((b[i] & 0xff) + 0x100, 16).substring(1);
        }
        return result;
    }

    public static void main(String args[]) {
        try {
            System.out.println(getMD5Checksum("apache-tomcat-5.5.17.exe"));
            // output :
            // 0bb2827c5eacf570b6064e24e0e6653b
            // ref :
            // http://www.apache.org/dist/
            //     tomcat/tomcat-5/v5.5.17/bin
            //     /apache-tomcat-5.5.17.exe.MD5
            // 0bb2827c5eacf570b6064e24e0e6653b *apache-tomcat-5.5.17.exe
        }
        catch (Exception e) {
            e.printStackTrace();
        }
    }
}
You cannot use hashes to determine any similarity of content.
For instance, generating the MD5 of hellostackoverflow1 and hellostackoverflow2 yields two hashes whose string representations share no matching characters (7c35[...]85fa vs b283[...]3d19). That's because a hash is calculated from the binary data of the file; thus two different formats of the same thing - e.g. a .txt and a .docx of the same text - have different hashes.
But as already noted, some speed might be achieved by using native code, i.e. via the NDK. Additionally, if you still want to compare files for exact matches, first compare their sizes in bytes, and after that use a hashing algorithm with enough speed and a low risk of collisions. As stated, CRC32 is fine.
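For illustration, a sketch of that two-step comparison (the class and method names are mine, and the 8 KB buffer is an arbitrary choice):

import java.io.File;
import java.io.FileInputStream;
import java.io.InputStream;
import java.util.zip.CRC32;

public class QuickCompare {
    static boolean probablySame(File a, File b) throws Exception {
        if (a.length() != b.length()) return false; // cheap size check first
        return crc32Of(a) == crc32Of(b);            // then the faster hash
    }

    static long crc32Of(File f) throws Exception {
        CRC32 crc = new CRC32();
        try (InputStream in = new FileInputStream(f)) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                crc.update(buf, 0, n);
            }
        }
        return crc.getValue();
    }
}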
Hash/CRC calculation takes some time as the file has to be read completely.
The code of createChecksum you presented is nearly optimal. The only part that can be tweaked is the read buffer size (I would use a buffer of 2048 bytes or larger). However, this may get you at most a 1-2% speed improvement.
If this is still too slow, the only option left is to implement the hashing in C/C++ and use it as a native method. Besides that, there is nothing you can do.
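For what it's worth, here is a sketch of that buffer-size tweak, using java.security.DigestInputStream; the class name and the 64 KiB buffer are my choices, not from the question:

import java.io.FileInputStream;
import java.io.InputStream;
import java.security.DigestInputStream;
import java.security.MessageDigest;

public class BufferedMd5 {
    public static byte[] createChecksum(String filename) throws Exception {
        MessageDigest md = MessageDigest.getInstance("MD5");
        // DigestInputStream feeds every byte read into md as a side effect
        try (InputStream in = new DigestInputStream(new FileInputStream(filename), md)) {
            byte[] buffer = new byte[64 * 1024]; // larger reads, fewer syscalls
            while (in.read(buffer) != -1) {
                // nothing to do; the digest is updated by read()
            }
        }
        return md.digest();
    }
}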
Question
Are the Java 8 java.util.Base64 MIME Encoder and Decoder a drop-in replacement for the unsupported, internal Java API sun.misc.BASE64Encoder and sun.misc.BASE64Decoder?
EDIT (Clarification): By drop-in replacement
I mean that I can switch legacy code using sun.misc.BASE64Encoder and sun.misc.BASE64Decoder to the Java 8 MIME Base64 encoder/decoder transparently for any other existing client code.
What I think so far and why
Based on my investigation and quick tests (see code below) it should be a drop-in replacement because
sun.misc.BASE64Encoder, based on its JavaDoc, is a "BASE64 Character encoder as specified in RFC1521". This RFC is part of the MIME specification...
java.util.Base64, based on its JavaDoc, "Uses 'The Base64 Alphabet' as specified in Table 1 of RFC 2045 for encoding and decoding operation"... under MIME.
Assuming no significant changes between RFC 1521 and RFC 2045 (I could not find any), and based on my quick test, using the Java 8 Base64 MIME encoder/decoder should be fine.
What I am looking for
an authoritative source confirming or disproving the "drop-in replacement" point OR
a counterexample which shows a case where java.util.Base64 behaves differently from the OpenJDK Java 8 (8u40-b25) implementation of sun.misc.BASE64Encoder/BASE64Decoder, OR
whatever you think answers the above question definitively
For reference
My test code
import java.io.IOException;
import java.nio.charset.Charset;
import java.util.Base64;
import java.util.Objects;

public class Base64EncodingDecodingRoundTripTest {

    public static void main(String[] args) throws IOException {
        String test1 = " ~!##$%^& *()_+=`| }{[]\\;: \"?><,./ ";
        String test2 = test1 + test1;
        encodeDecode(test1);
        encodeDecode(test2);
    }

    static void encodeDecode(final String testInputString) throws IOException {
        sun.misc.BASE64Encoder unsupportedEncoder = new sun.misc.BASE64Encoder();
        sun.misc.BASE64Decoder unsupportedDecoder = new sun.misc.BASE64Decoder();
        Base64.Encoder mimeEncoder = java.util.Base64.getMimeEncoder();
        Base64.Decoder mimeDecoder = java.util.Base64.getMimeDecoder();

        String sunEncoded = unsupportedEncoder.encode(testInputString.getBytes());
        System.out.println("sun.misc encoded: " + sunEncoded);

        String mimeEncoded = mimeEncoder.encodeToString(testInputString.getBytes());
        System.out.println("Java 8 Base64 MIME encoded: " + mimeEncoded);

        byte[] mimeDecoded = mimeDecoder.decode(sunEncoded);
        String mimeDecodedString = new String(mimeDecoded, Charset.forName("UTF-8"));

        byte[] sunDecoded = unsupportedDecoder.decodeBuffer(mimeEncoded); // throws IOException
        String sunDecodedString = new String(sunDecoded, Charset.forName("UTF-8"));

        System.out.println(String.format("sun.misc decoded: %s | Java 8 Base64 decoded: %s", sunDecodedString, mimeDecodedString));
        System.out.println("Decoded results are both equal: " + Objects.equals(sunDecodedString, mimeDecodedString));
        System.out.println("Mime decoded result is equal to test input string: " + Objects.equals(testInputString, mimeDecodedString));
        System.out.println("\n");
    }
}
Here's a small test program that illustrates a difference in the encoded strings:
byte[] bytes = new byte[57];
String enc1 = new sun.misc.BASE64Encoder().encode(bytes);
String enc2 = new String(java.util.Base64.getMimeEncoder().encode(bytes),
StandardCharsets.UTF_8);
System.out.println("enc1 = <" + enc1 + ">");
System.out.println("enc2 = <" + enc2 + ">");
System.out.println(enc1.equals(enc2));
Its output is:
enc1 = <AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
>
enc2 = <AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA>
false
Note that the encoded output of sun.misc.BASE64Encoder has a newline at the end. It doesn't always append a newline, but it happens to do so if the encoded string has exactly 76 characters on its last line. (The author of java.util.Base64 considered this to be a small bug in the sun.misc.BASE64Encoder implementation – see the review thread).
This might seem like a triviality, but if you had a program that relied on this specific behavior, switching encoders might result in malformed output. Therefore, I conclude that java.util.Base64 is not a drop-in replacement for sun.misc.BASE64Encoder.
Of course, the intent of java.util.Base64 is that it's a functionally equivalent, RFC-conformant, high-performance, fully supported and specified replacement that's intended to support migration of code away from sun.misc.BASE64Encoder. You need to be aware of some edge cases like this when migrating, though.
I had the same issue when I moved from sun.misc to java.util.Base64; org.apache.commons.codec.binary.Base64 solved my problem.
There are no changes to the base64 specification between rfc1521 and rfc2045.
All base64 implementations could be considered drop-in replacements for one another; the only differences between base64 implementations are:
the alphabet used;
the APIs provided (e.g. some might only act on a full input buffer, while others might be finite-state machines allowing you to continue to push chunks of input through them until you are done).
The MIME base64 alphabet has remained constant between RFC versions (it has to, or older software would break) and is: ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/
As Wikipedia notes, only the last 2 characters may change between base64 implementations.
As an example of a base64 implementation that does change the last 2 characters, the IMAP MUTF-7 specification uses the following base64 alphabet: ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+,
The reason for the change is that the / character is often used as a path delimiter, and since the MUTF-7 encoding is used to flatten non-ASCII directory paths into ASCII, the / character needed to be avoided in encoded segments.
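As a quick illustration of such an alphabet variant in the JDK itself (my example, not part of the answer above), the URL-safe encoder in java.util.Base64 swaps '+' and '/' for '-' and '_':

import java.util.Base64;

public class AlphabetDemo {
    public static void main(String[] args) {
        byte[] data = {(byte) 0xfb, (byte) 0xff};
        System.out.println(Base64.getEncoder().encodeToString(data));    // +/8=
        System.out.println(Base64.getUrlEncoder().encodeToString(data)); // -_8=
    }
}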
Assuming both encoders are bug-free, the RFC requires distinct encodings for every 0-byte, 1-byte, 2-byte and 3-byte sequence. Longer sequences are broken down into as many 3-byte sequences as needed, followed by a final sequence. Hence, if the two implementations handle all 16,843,009 (1 + 256 + 65,536 + 16,777,216) possible sequences correctly, then the two implementations are also identical.
These tests only take a few minutes to run. By slightly changing your test code, I have done that, and my Java 8 installation passed all the tests. Hence the public implementation can be used to safely replace the sun.misc implementation.
Here is my test code:
import java.util.Base64;
import java.util.Arrays;
import java.io.IOException;
public class Base64EncodingDecodingRoundTripTest {

    public static void main(String[] args) throws IOException {
        System.out.println("Testing zero byte encoding");
        encodeDecode(new byte[0]);

        System.out.println("Testing single byte encodings");
        byte[] test = new byte[1];
        for (int i = 0; i < 256; i++) {
            test[0] = (byte) i;
            encodeDecode(test);
        }

        System.out.println("Testing double byte encodings");
        test = new byte[2];
        for (int i = 0; i < 65536; i++) {
            test[0] = (byte) i;
            test[1] = (byte) (i >>> 8);
            encodeDecode(test);
        }

        System.out.println("Testing triple byte encodings");
        test = new byte[3];
        for (int i = 0; i < 16777216; i++) {
            test[0] = (byte) i;
            test[1] = (byte) (i >>> 8);
            test[2] = (byte) (i >>> 16);
            encodeDecode(test);
        }

        System.out.println("All tests passed");
    }

    static void encodeDecode(final byte[] testInput) throws IOException {
        sun.misc.BASE64Encoder unsupportedEncoder = new sun.misc.BASE64Encoder();
        sun.misc.BASE64Decoder unsupportedDecoder = new sun.misc.BASE64Decoder();
        Base64.Encoder mimeEncoder = java.util.Base64.getMimeEncoder();
        Base64.Decoder mimeDecoder = java.util.Base64.getMimeDecoder();

        String sunEncoded = unsupportedEncoder.encode(testInput);
        String mimeEncoded = mimeEncoder.encodeToString(testInput);

        // check encodings are equal
        if (!sunEncoded.equals(mimeEncoded)) {
            throw new IOException("Input " + Arrays.toString(testInput) + " produced different encodings (sun=\"" + sunEncoded + "\", mime=\"" + mimeEncoded + "\")");
        }

        // check cross-decodes are equal; note the encoded forms are identical
        byte[] mimeDecoded = mimeDecoder.decode(sunEncoded);
        byte[] sunDecoded = unsupportedDecoder.decodeBuffer(mimeEncoded); // throws IOException
        if (!Arrays.equals(mimeDecoded, sunDecoded)) {
            throw new IOException("Input " + Arrays.toString(testInput) + " was encoded as \"" + sunEncoded + "\", but decoded as sun=" + Arrays.toString(sunDecoded) + " and mime=" + Arrays.toString(mimeDecoded));
        }
    }
}
Stuart Marks' answer is almost correct. The getMimeEncoder in his example above should be configured like this to emulate sun.misc:
String enc2 = new String(java.util.Base64.getMimeEncoder(76, new byte[]{0xa}).encode(bytes),
StandardCharsets.UTF_8);
At this point, it will be a drop-in as requested in the original post.
My application requires a list of doubles encoded as a byte array with little-endian encoding, zlib-compressed, and then encoded as base64. I wrote up a harness to test my encoding; it wasn't working at first, but I was able to make progress.
However, I noticed that when I attempt to decompress to a fixed-size buffer, I can come up with input such that the decompressed byte array is smaller than the original byte array, which obviously isn't right. Coincident with this, the last double in the list disappears. On most inputs, the fixed-buffer version reproduces the input. Does anyone know why that would be? I am guessing the error is in the way I am encoding the data, but I can't figure out what is going wrong.
When I try using a ByteArrayOutputStream to handle variable-length output of arbitrary size (which will be important for the real version of the code, as I can't guarantee maximum size limits), the inflate method of Inflater continuously returns 0. I looked up the documentation, and it says this means it needs more data. Since there is no more data, I again suspect my encoding, and I guess the same issue causes the previously explained behavior.
In my code I've included an example of data that works fine with the fixed buffer size, as well as data that doesn't. Both data sets will cause the variable-buffer error I explained.
Any clues as to what I am doing wrong? Many thanks.
import java.io.ByteArrayOutputStream;
import java.io.UnsupportedEncodingException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.ArrayList;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;
import org.apache.commons.codec.binary.Base64;
public class BinaryReaderWriter {

    public static void main(String[] args) throws UnsupportedEncodingException, DataFormatException {
        // this input will break the fixed buffer method
        //double[] centroids = {123.1212234143345453223123123, 28464632322456781.23, 3123121.0};

        // this input works with the fixed buffer method
        double[] centroids = {123.1212234143345453223123123, 28464632322456781.23, 31.0};

        BinaryReaderWriter brw = new BinaryReaderWriter();
        String output = brw.compressCentroids(centroids);
        brw.decompressCentroids(output);
    }

    void decompressCentroids(String encoded) throws DataFormatException {
        byte[] binArray = Base64.decodeBase64(encoded);

        // This block of code is the fixed buffer version
        System.out.println("binArray length " + binArray.length);
        Inflater deCompressor = new Inflater();
        deCompressor.setInput(binArray, 0, binArray.length);
        byte[] decompressed = new byte[1024];
        int decompressedLength = deCompressor.inflate(decompressed);
        deCompressor.end();
        System.out.println("decompressedLength = " + decompressedLength);
        byte[] decompressedData = new byte[decompressedLength];
        for (int i = 0; i < decompressedLength; i++) {
            decompressedData[i] = decompressed[i];
        }

        /*
        // This block of code is the variable buffer version
        ByteArrayOutputStream bos = new ByteArrayOutputStream(binArray.length);
        Inflater deCompressor = new Inflater();
        deCompressor.setInput(binArray, 0, binArray.length);
        byte[] decompressed = new byte[1024];
        while (!deCompressor.finished()) {
            int decompressedLength = deCompressor.inflate(decompressed);
            bos.write(decompressed, 0, decompressedLength);
        }
        deCompressor.end();
        byte[] decompressedData = bos.toByteArray();
        */

        ByteBuffer bb = ByteBuffer.wrap(decompressedData);
        bb.order(ByteOrder.LITTLE_ENDIAN);
        System.out.println("decompressedData length = " + decompressedData.length);
        double[] doubleValues = new double[decompressedData.length / 8];
        for (int i = 0; i < doubleValues.length; i++) {
            doubleValues[i] = bb.getDouble(i * 8);
        }
        for (double dbl : doubleValues) {
            System.out.println(dbl);
        }
    }

    String compressCentroids(double[] centroids) {
        byte[] cinput = new byte[centroids.length * 8];
        ByteBuffer buf = ByteBuffer.wrap(cinput);
        buf.order(ByteOrder.LITTLE_ENDIAN);
        for (double cent : centroids) {
            buf.putDouble(cent);
        }
        byte[] input = buf.array();
        System.out.println("raw length = " + input.length);

        byte[] output = new byte[input.length];
        Deflater compresser = new Deflater();
        compresser.setInput(input);
        compresser.finish();
        int compressedLength = compresser.deflate(output);
        compresser.end();
        System.out.println("Compressed length = " + compressedLength);
        byte[] compressed = new byte[compressedLength];
        for (int i = 0; i < compressedLength; i++) {
            compressed[i] = output[i];
        }

        String encoded = Base64.encodeBase64String(compressed);
        return encoded;
    }
}
When compressing data, what we are really doing is re-encoding it to increase the entropy of the data. During the re-encoding process we have to add metadata that tells us how we encoded the data, so it can be converted back to what it was previously.
Compression will only be successful if the metadata size is less than the space we save by re-encoding the data.
Consider Huffman encoding:
Huffman is a simple encoding scheme where we replace the fixed-width character set with a variable-width character set, plus a code-length table. The table size will be greater than zero for obvious reasons. If all characters appear with a near-equal distribution, we will not be able to save any space, so our compressed data ends up being larger than our uncompressed data.
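Assuming that expansion is what overflows the fixed-size output buffer in the question's compressCentroids (deflate output can be larger than the input, and a single deflate() call into a buffer sized to the input would then truncate the stream), here is a sketch of a deflate loop that drains the Deflater until it reports completion; the class and method names are mine:

import java.io.ByteArrayOutputStream;
import java.util.zip.Deflater;

public class DeflateFully {
    static byte[] deflateFully(byte[] input) {
        Deflater deflater = new Deflater();
        deflater.setInput(input);
        deflater.finish();
        ByteArrayOutputStream bos = new ByteArrayOutputStream(input.length);
        byte[] chunk = new byte[1024];
        // keep draining until the deflater reports completion, so compressed
        // output larger than the input is handled as well
        while (!deflater.finished()) {
            int n = deflater.deflate(chunk);
            bos.write(chunk, 0, n);
        }
        deflater.end();
        return bos.toByteArray();
    }
}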
According to the specification: http://wiki.theory.org/BitTorrentSpecification
info_hash: urlencoded 20-byte SHA1 hash of the value of the info key from the Metainfo file. Note that the value will be a bencoded dictionary, given the definition of the info key above.
torrentMap is my dictionary; I get the info key, which is another dictionary, calculate its hash, and URL-encode it.
But I always get an invalid info_hash message when I try to send it to the tracker.
This is my code:
public String GetInfo_hash() {
    String info_hash = "";
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    ObjectOutput out = null;
    try {
        out = new ObjectOutputStream(bos);
        out.writeObject(torrentMap.get("info"));
        byte[] bytes = bos.toByteArray();        // Map => byte[]
        MessageDigest md = MessageDigest.getInstance("SHA1");
        info_hash = urlencode(md.digest(bytes)); // hashing and URL-encoding
        out.close();
        bos.close();
    } catch (Exception ex) { }
    return info_hash;
}

private String urlencode(byte[] bs) {
    StringBuffer sb = new StringBuffer(bs.length * 3);
    for (int i = 0; i < bs.length; i++) {
        int c = bs[i] & 0xFF;
        sb.append('%');
        if (c < 16) {
            sb.append('0');
        }
        sb.append(Integer.toHexString(c));
    }
    return sb.toString();
}
This is almost certainly the problem:
out = new ObjectOutputStream(bos);
out.writeObject(torrentMap.get("info"));
What you're going to be hashing is the Java binary serialization format of the value of torrentMap.get("info"). I find it very hard to believe that all BitTorrent programs are meant to know about that.
It's not immediately clear to me from the specification what the value of the "info" key is meant to be, but you need to work out some other way of turning it into a byte array. If it's a string, I'd expect some well-specified encoding (e.g. UTF-8). If it's already binary data, then use that byte array directly.
EDIT: Actually, it sounds like the value will be a "bencoded dictionary" as per your quote, which looks like it will be a string. Quite how you're meant to encode that string (which sounds like it may include values which aren't in ASCII, for example) before hashing it is up for grabs. If your sample strings are all ASCII, then using "ASCII" and "UTF-8" as the encoding names for String.getBytes(...) will give the same result anyway, of course...
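For illustration, a minimal sketch of the "use that byte array directly" route; it assumes you kept the exact bencoded byte span of the info value as it appears in the .torrent file (the class and method names are mine):

import java.security.MessageDigest;

public class InfoHash {
    // bencodedInfo must be the raw bytes of the "info" value, sliced
    // straight out of the metainfo file, not a re-serialized Map
    static byte[] infoHash(byte[] bencodedInfo) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-1");
        return md.digest(bencodedInfo);
    }
}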
Is it possible to convert a string to a byte array and then convert it back to the original string in Java or Android?
My objective is to send some strings to a microcontroller (Arduino) and store them in its EEPROM (which is only 1 KB). I tried to use an MD5 hash, but it seems it's only one-way encryption. What can I do to deal with this issue?
I would suggest using the members of String, but with an explicit encoding:
byte[] bytes = text.getBytes("UTF-8");
String text = new String(bytes, "UTF-8");
By using an explicit encoding (and one which supports all of Unicode) you avoid the problems of just calling text.getBytes() etc:
You're explicitly using a specific encoding, so you know which encoding to use later, rather than relying on the platform default.
You know it will support all of Unicode (as opposed to, say, ISO-Latin-1).
EDIT: Even though UTF-8 is the default encoding on Android, I'd definitely be explicit about this. For example, this question only says "in Java or Android" - so it's entirely possible that the code will end up being used on other platforms.
Basically given that the normal Java platform can have different default encodings, I think it's best to be absolutely explicit. I've seen way too many people using the default encoding and losing data to take that risk.
EDIT: In my haste I forgot to mention that you don't have to use the encoding's name - you can use a Charset instead. Using Guava I'd really use:
byte[] bytes = text.getBytes(Charsets.UTF_8);
String text = new String(bytes, Charsets.UTF_8);
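If you would rather stay within the JDK (Java 7 and later), java.nio.charset.StandardCharsets provides the same overloads without Guava:

import java.nio.charset.StandardCharsets;

byte[] bytes = text.getBytes(StandardCharsets.UTF_8);
String text = new String(bytes, StandardCharsets.UTF_8);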
You can do it like this.
String to byte array
String stringToConvert = "This String is 76 characters long and will be converted to an array of bytes";
byte[] theByteArray = stringToConvert.getBytes();
http://www.javadb.com/convert-string-to-byte-array
Byte array to String
byte[] byteArray = new byte[] {87, 79, 87, 46, 46, 46};
String value = new String(byteArray);
http://www.javadb.com/convert-byte-array-to-string
Use String.getBytes() to convert to bytes, and use the String(byte[] data) constructor to convert back to a string.
byte[] pdfBytes = Base64.decode(myPdfBase64String, Base64.DEFAULT);
import java.io.FileInputStream;
import java.io.InputStream;
import java.security.MessageDigest;
import org.apache.commons.codec.binary.Hex;

public class FileHashStream
{
    // a method that provides the hash bytes, reading from an input stream
    public static byte[] read(InputStream is) throws Exception
    {
        // the original snippet never declared 'md'; MD5 is assumed here
        MessageDigest md = MessageDigest.getInstance("MD5");

        // we need a byte buffer; we will use 16 kilobytes
        byte[] buf = new byte[1024 * 16];
        int len = 0;

        // use the buffer to update our "MessageDigest" instance
        while (true)
        {
            len = is.read(buf);
            if (len < 0) break;
            md.update(buf, 0, len);
        }

        // close the input stream
        is.close();

        // call the "digest" method for obtaining the final hash result
        byte[] ret = md.digest();
        System.out.println("Length of Hash: " + ret.length);
        for (byte b : ret)
        {
            System.out.println(b + ", ");
        }
        return ret;
    }

    public static void main(String[] args) throws Exception
    {
        String path = /* type in the absolute path for the 'commons-codec-1.10-bin.zip' */;
        byte[] ret = read(new FileInputStream(path));

        String compare = "49276d206b696c6c696e6720796f757220627261696e206c696b65206120706f69736f6e6f7573206d757368726f6f6d";
        String verification = Hex.encodeHexString(ret);
        System.out.println();
        System.out.println("===");
        System.out.println(verification);
        System.out.println("Equals? " + verification.equals(compare));
    }
}