Understanding SHA-256 Hashing - java

I am using SHA-256 hashing in my Java program, and it works as expected.
However, I am a bit confused about the function I have used for SHA-256.
Following is the code of the function:
// Function for generating the hash of the file content
public static String generateHash( String fileContent )
{
    String hashtext = EMPTY_STRING;
    try {
        // SHA-256 message digest
        MessageDigest shaDigest = MessageDigest.getInstance( "SHA-256" );
        // digest() calculates the message digest of the input string
        // and returns it as an array of bytes
        byte[] messageDigest = shaDigest.digest( fileContent.getBytes() );
        // Convert the byte array into signum representation
        BigInteger no = new BigInteger( 1, messageDigest );
        // Convert the message digest into a hex value
        hashtext = no.toString( 16 );
        // Add leading 0s to pad the hex string to 32 characters
        while ( hashtext.length() < 32 ) {
            hashtext = "0" + hashtext;
        }
    }
    catch ( Exception hashingException ) {
        System.out.println( "Exception in Hashing of Content = " + hashingException );
    }
    // return the hash text
    return hashtext;
}
Now, I am confused about three statements, as I am unaware of their actual purpose. I have searched for them on the internet but did not find anything explanatory. Can someone explain these three steps to me?
STATEMENT 1
BigInteger no = new BigInteger( 1, messageDigest );
STATEMENT 2
hashtext = no.toString( 16 );
STATEMENT 3
while ( hashtext.length() < 32 ) {
hashtext = "0" + hashtext;
}

BigInteger no = new BigInteger( 1, messageDigest );
Convert the bytes to a positive number: the 1 is the signum, and the digest bytes are interpreted as the big-endian magnitude of a non-negative BigInteger. See the BigInteger(int signum, byte[] magnitude) Javadoc for more details.
hashtext = no.toString( 16 );
Convert the BigInteger to a base-16 (hexadecimal) String.
while ( hashtext.length() < 32 ) {
hashtext = "0" + hashtext;
}
Prepend "0" until hashtext is 32 characters long. Note that toString(16) drops leading zeros, and a SHA-256 digest is 32 bytes, i.e. 64 hex characters, so the 32 here looks like a leftover from an MD5 example; padding to 64 would be the correct bound for SHA-256.
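For reference, a minimal alternative sketch (my own illustration, not part of the original code; the class and method names are invented): converting the digest byte by byte always yields exactly two hex characters per byte, so no padding loop is needed and the result is the full 64-character SHA-256 hex string.
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class HexDigest {
    public static String sha256Hex(String content) throws NoSuchAlgorithmException {
        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        // hash the UTF-8 bytes of the input (the original code uses the platform default charset)
        byte[] digest = sha.digest(content.getBytes(StandardCharsets.UTF_8));
        StringBuilder sb = new StringBuilder(digest.length * 2);
        for (byte b : digest) {
            sb.append(String.format("%02x", b & 0xff)); // two hex characters per byte
        }
        return sb.toString(); // 64 characters for a 32-byte SHA-256 digest
    }
}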

Related

About Java Android BASE64 decoding to ASCII String

I have Base64 string data that I have received from a service.
I am able to decode this data and get a byte array.
But when I create a new String from that byte array, my server is not able to read the data properly.
However, the same process in C on a Linux-based device works fine with my server: if I Base64-decode the same string there (using OpenSSL, getting a char array) and send it to my server, the server reads it properly.
To understand the problem, I tried a sample in Eclipse. Below is the sample:
String base1 =
    "sUqVKrgErEId6j3rH8BMMpzvXuTf05rj0PlO/eLOoJwQb3rXrsplAl28unkZP0WvrXRTlpAmT3Y" +
    "ohtPFl2+zyUaCSrYfug5JtVHLoVsJ9++Afpx6A5dupn3KJQ9L9ItfWvatIlamQyMo2S5nDypCw79" +
    "B2HNAR/PG1wfgYG5OPMNjNSC801kQSE9ljMg3hH6nrRJhXvEVFlllKIOXOYuR/NORAH9k5W+rQeQ" +
    "7ONsnao2zvYjfiKO6eGleL6/DF3MKCnGx1sbci9488EQhEBBOG5FGJ7KjTPEQzn/rq3m1Yj9Le/r" +
    "KsmzbRNcJN2p/wy1xz9oHy8jWDm81iwRYndJYAQ==";
byte[] b3 = Base64.getDecoder().decode(base1.getBytes());
System.out.println("B3Len:" + b3.length );
String s2 = new String(b3);
System.out.println("S2Len:" + s2.length() );
System.out.println("B3Hex: " + bytesToHex(b3) );
System.out.println("B3HexLen: " + bytesToHex(b3).length() );
byte[] b2 = s2.getBytes();
System.out.println("B2Len:" + b2.length );
int count = 0;
for (int i = 0; i < b3.length; i++) {
    if (b3[i] != b2[i]) {
        count++;
        System.out.println("Byte: " + i + " >> " + b3[i] + " != " + b2[i]);
    }
}
System.out.println("Count: " + count);
System.out.println("B2Hex: " + bytesToHex(b2) );
System.out.println("B2HexLen: " + bytesToHex(b2).length() );
Below is the output:
B3Len:256
S2Len:256
B3Hex:
b14a952ab804ac421dea3deb1fc04c329cef5ee4dfd39ae3d0f94efde2cea09c106f7ad7aeca
65025dbcba79193f45afad74539690264f762886d3c5976fb3c946824ab61fba0e49b551cba1
5b09f7ef807e9c7a03976ea67dca250f4bf48b5f5af6ad2256a6432328d92e670f2a42c3bf41
d8734047f3c6d707e0606e4e3cc3633520bcd35910484f658cc837847ea7ad12615ef1151659
65288397398b91fcd391007f64e56fab41e43b38db276a8db3bd88df88a3ba78695e2fafc317
730a0a71b1d6c6dc8bde3cf0442110104e1b914627b2a34cf110ce7febab79b5623f4b7bfaca
b26cdb44d709376a7fc32d71cfda07cbc8d60e6f358b04589dd25801
B3HexLen: 512
B2Len:256
Byte: 52 >> -112 != 63
Byte: 175 >> -115 != 63
Byte: 252 >> -99 != 63
Count: 3
B2Hex:
b14a952ab804ac421dea3deb1fc04c329cef5ee4dfd39ae3d0f94efde2cea09c106f7ad7aeca
65025dbcba79193f45afad7453963f264f762886d3c5976fb3c946824ab61fba0e49b551cba1
5b09f7ef807e9c7a03976ea67dca250f4bf48b5f5af6ad2256a6432328d92e670f2a42c3bf41
d8734047f3c6d707e0606e4e3cc3633520bcd35910484f658cc837847ea7ad12615ef1151659
65288397398b91fcd391007f64e56fab41e43b38db276a3fb3bd88df88a3ba78695e2fafc317
730a0a71b1d6c6dc8bde3cf0442110104e1b914627b2a34cf110ce7febab79b5623f4b7bfaca
b26cdb44d709376a7fc32d71cfda07cbc8d60e6f358b04583fd25801
B2HexLen: 512
I understand that there are extended (non-ASCII) characters in this data.
So we can see that converting the bytes to a String and back is not working properly, because the two byte arrays differ.
I actually need this to work, because the real Base64 string I have to send to my server is much larger than the one in this sample, and the server expects an ASCII string.
Or, can anyone give me a solution that produces a String output identical to the char array output of the C (OpenSSL) decoding on the Linux device?
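For what it's worth, the three differing bytes all become 63, which is '?': new String(b3) decodes the bytes with the platform default charset (most likely windows-1252 or similar), bytes that are not valid in that charset get replaced, and s2.getBytes() then encodes the replacement as '?'. Below is a minimal sketch of a lossless round trip, assuming the goal is simply to carry the raw bytes inside a String; the class name and the shortened sample string are mine, just for illustration. ISO-8859-1 maps every byte value 0-255 to exactly one character, so nothing is replaced.
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.Base64;

public class LosslessRoundTrip {
    public static void main(String[] args) {
        // shortened sample value, standing in for the real Base64 payload
        String base1 = "sUqVKrgErEId6j3rH8BMMpzvXuTf05rj0PlO/eLOoJw=";
        byte[] decoded = Base64.getDecoder().decode(base1);
        // ISO-8859-1 maps each byte to exactly one char, so the round trip is lossless
        String asLatin1 = new String(decoded, StandardCharsets.ISO_8859_1);
        byte[] back = asLatin1.getBytes(StandardCharsets.ISO_8859_1);
        System.out.println(Arrays.equals(decoded, back)); // true
    }
}
Whether the server can consume such a String is a separate question; if it really expects printable ASCII, the data has to stay Base64 (or hex) encoded on the wire.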

Xor a string that is uint16 or uint32

I am trying to recreate in Swift the following logic that I wrote in Java:
public String xorMessage(String message, String key) {
    try {
        if (message == null || key == null) return null;
        char[] keys = key.toCharArray();
        char[] mesg = message.toCharArray();
        int ml = mesg.length;
        int kl = keys.length;
        char[] newmsg = new char[ml];
        for (int i = 0; i < ml; i++) {
            newmsg[i] = (char) (mesg[i] ^ keys[i % kl]);
        } // for i
        return new String(newmsg);
    } catch (Exception e) {
        return null;
    }
}
This is how far I have got while coding it in Swift 3:
import UIKit
import Foundation
let t = "22-Jun-2017 12:30 pm"
let m = "message"
print(UInt8(t))
let a :[UInt8] = Array(t.utf8)
let v = m.characters.map{String ($0) }
print(v)
func encodeWithXorByte(key: UInt8 , Input : String) -> String {
return String(bytes: Input.utf8.map{$0 ^ key}, encoding: String.Encoding.utf8) ?? ""
}
var ml :Int = Int( m.characters.count )
var kl :Int = Int (t.characters.count)
var f = [String]()
for i in 0..<ml{
let key = a[i%kl]
let input = v[i]
f.append(String(bytes: input.utf8.map{$0 ^ key} , encoding : String.Encoding.utf8)!)
// f.append(<#T##newElement: Character##Character#>)
//m[i] = input.utf8.map{$0 ^ key}
}
I am trying to obtain a string (the message) that has been XOR'ed with a key passed into the above function. But my Swift code is not working: it returns an array of characters, and I want a string. If I join the characters into a string, it does not show the Unicode escapes like \u{0001} in the string.
Suppose I get the following output:
["_", "W", "^", "9", "\u{14}", "\t", "H"]
When I then convert it to a string, I get this:
_W^9 H
I want :
_W^9\u{14}\tH
Please help.
There are several problems here. First, if your intention is to print
"unprintable" characters in a string in \u{} escaped form, you can use
the debugDescription property. Example:
let s = "a\u{04}\u{08}b"
print(s) // ab
print(s.debugDescription) // "a\u{04}\u{08}b"
Next, your Swift code converts the string to UTF-8, xor's the bytes
and then converts the result back to a String. That can easily fail
if the xor'ed byte sequence is not valid UTF-8.
The Java code operates on UTF-16 code units, so the equivalent Swift
code would be
func xorMessage(message: String, key: String) -> String {
    let keyChars = Array(key.utf16)
    let keyLen = keyChars.count
    let newMsg = message.utf16.enumerated().map { $1 ^ keyChars[$0 % keyLen] }
    return String(utf16CodeUnits: newMsg, count: newMsg.count)
}
Example:
let t = "22-Jun-2017 12:30 pm"
let m = "message"
let encrypted = xorMessage(message: m, key: t)
print(encrypted.debugDescription) // "_W^9\u{14}\tH"
Finally, even that can produce unexpected results unless you restrict
the input (key and message) to ASCII characters. Example:
let m = "😀"
print(Array(m.utf16).map { String($0, radix: 16)} ) // ["d83d", "de00"]
let t = "a€"
print(Array(t.utf16).map { String($0, radix: 16)} ) // ["61", "20ac"]
let e = xorMessage(message: m, key: t)
print(Array(e.utf16).map { String($0, radix: 16)} ) // ["fffd", "feac"]
let d = xorMessage(message: e, key: t)
print(Array(d.utf16).map { String($0, radix: 16)} ) // ["ff9c", "fffd"]
print(d) // ワ�
print(d == m) // false
The problem is that the xor'ing produces an invalid UTF-16 sequence
(an unbalanced surrogate pair), which is then replaced by the
"replacement character" U+FFFD.
I don't know exactly how Java handles this, but Swift strings cannot contain invalid
Unicode scalar values, so the only robust solution would be to represent
the result as a [UInt16] array instead of a String.
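For reference, a small Java sketch (my own, not from the original question or answer) showing how Java behaves in the same situation: java.lang.String stores raw UTF-16 code units and tolerates unpaired surrogates, so the XOR round trip succeeds and nothing is replaced until you actually encode the string to bytes.
public class XorRoundTrip {
    static String xorMessage(String message, String key) {
        char[] keys = key.toCharArray();
        char[] mesg = message.toCharArray();
        char[] out = new char[mesg.length];
        for (int i = 0; i < mesg.length; i++) {
            out[i] = (char) (mesg[i] ^ keys[i % keys.length]);
        }
        return new String(out);
    }

    public static void main(String[] args) {
        String m = "\uD83D\uDE00"; // the emoji as a surrogate pair, two UTF-16 code units
        String t = "a\u20AC";      // "a" followed by the euro sign
        String e = xorMessage(m, t);     // may contain an unpaired surrogate
        String d = xorMessage(e, t);     // XOR again with the same key
        System.out.println(d.equals(m)); // true: the code units were preserved in memory
    }
}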

how to convert digits to ascii value or characters and store in a string array

I am trying to generate a prime key from a phone number provided by users of my application.
For example, a user provides the following phone number:
Phone number: 033232532523
Now, I want to generate some kind of key by converting those digits to letters, special characters, ASCII values or something of that sort, so that I end up with a key something like this (dummy):
ab743kdhad$
e.g. replacing 0 with a, taking the ASCII value of 3, and so on.
The code I am trying to write is something like this:
public class PrimeKeyGenerator {
    public static void main( String[] args ) {
        String phoneNumber = "123456342";
        // could we convert the digits to characters or replace the digits with their ASCII value?
        String characters = convertNumToCharacters( phoneNumber );
        System.out.println( "Generated Prime Key: " + characters );
    }

    private static String convertNumToCharacters( String phoneNumber ) {
        return null;
    }
}
You could convert the digits to a byte[], apply a SHA-1 hash, and then Base64-encode the result. Something like:
private static String convertNumToCharacters(String phoneNumber) {
    byte[] digits = new byte[phoneNumber.length()];
    for (int i = 0; i < digits.length; i++) {
        digits[i] = (byte) Character.digit(phoneNumber.charAt(i), 10);
    }
    try {
        MessageDigest md = MessageDigest.getInstance("SHA1");
        return Base64.getEncoder().encodeToString(md.digest(digits));
    } catch (NoSuchAlgorithmException e) {
        e.printStackTrace();
    }
    return null;
}
Which returns (with your input "123456342")
Generated Prime Key: wlwRLSZuhzMBn5Yw6RVfw+dwegM=
and (with my phone #)
Generated Prime Key: botMioqy/9B4tu/KvLv5Cc/Ykak=

how to detect base64 encoded strings? [duplicate]

I want to decode a Base64 encoded string, then store it in my database. If the input is not Base64 encoded, I need to throw an error.
How can I check if a string is Base64 encoded?
You can use the following regular expression to check if a string constitutes a valid base64 encoding:
^([A-Za-z0-9+/]{4})*([A-Za-z0-9+/]{3}=|[A-Za-z0-9+/]{2}==)?$
In Base64 encoding, the character set is A-Z, a-z, 0-9, + and /. If the final group is shorter than 4 characters, the string is padded with '=' characters.
^([A-Za-z0-9+/]{4})* means the string starts with 0 or more base64 groups.
([A-Za-z0-9+/]{4}|[A-Za-z0-9+/]{3}=|[A-Za-z0-9+/]{2}==)$ means the string ends in one of three forms: [A-Za-z0-9+/]{4}, [A-Za-z0-9+/]{3}= or [A-Za-z0-9+/]{2}==.
If you are using Java, you can use the commons-codec library:
import org.apache.commons.codec.binary.Base64;
String stringToBeChecked = "...";
boolean isBase64 = Base64.isArrayByteBase64(stringToBeChecked.getBytes());
[UPDATE 1] Deprecation Notice
Use instead
Base64.isBase64(value);
/**
 * Tests a given byte array to see if it contains only valid characters within the Base64 alphabet. Currently the
 * method treats whitespace as valid.
 *
 * @param arrayOctet
 *            byte array to test
 * @return {@code true} if all bytes are valid characters in the Base64 alphabet or if the byte array is empty;
 *         {@code false}, otherwise
 * @deprecated 1.5 Use {@link #isBase64(byte[])}, will be removed in 2.0.
 */
@Deprecated
public static boolean isArrayByteBase64(final byte[] arrayOctet) {
    return isBase64(arrayOctet);
}
Well you can:
Check that the length is a multiple of 4 characters
Check that every character is in the set A-Z, a-z, 0-9, +, / except for padding at the end which is 0, 1 or 2 '=' characters
If you're expecting that it will be base64, then you can probably just use whatever library is available on your platform to try to decode it to a byte array, throwing an exception if it's not valid base 64. That depends on your platform, of course.
As of Java 8, you can simply use java.util.Base64 to try and decode the string:
String someString = "...";
Base64.Decoder decoder = Base64.getDecoder();
try {
decoder.decode(someString);
} catch(IllegalArgumentException iae) {
// That string wasn't valid.
}
Try this for PHP 5:
//where $json is some data that can be base64 encoded
$json = some_data;
//this will check whether the data is base64 encoded or not
//(strict mode returns false for invalid input, so compare against false)
if (base64_decode($json, true) !== false)
{
    echo "base64 encoded";
}
else
{
    echo "not base64 encoded";
}
Use this for PHP7
//$string parameter can be base64 encoded or not
function is_base64_encoded($string){
    //this will check if $string is base64 encoded and return true, if it is.
    if (base64_decode($string, true) !== false){
        return true;
    } else {
        return false;
    }
}
var base64Regex = /^(?:[A-Z0-9+\/]{4})*(?:[A-Z0-9+\/]{2}==|[A-Z0-9+\/]{3}=|[A-Z0-9+\/]{4})$/i;
var isBase64Valid = base64Regex.test(base64Data); // base64Data is the Base64 string
if (isBase64Valid) {
    // true if in Base64 format
    console.log('It is Base64');
} else {
    // false if not in Base64 format
    console.log('It is not Base64');
}
Try this:
public void checkForEncode(String string) {
    String pattern = "^([A-Za-z0-9+/]{4})*([A-Za-z0-9+/]{4}|[A-Za-z0-9+/]{3}=|[A-Za-z0-9+/]{2}==)$";
    Pattern r = Pattern.compile(pattern);
    Matcher m = r.matcher(string);
    if (m.find()) {
        System.out.println("true");
    } else {
        System.out.println("false");
    }
}
It is impossible to check if a string is base64 encoded or not. It is only possible to validate if that string is of a base64 encoded string format, which would mean that it could be a string produced by base64 encoding (to check that, string could be validated against a regexp or a library could be used, many other answers to this question provide good ways to check this, so I won't go into details).
For example, string flow is a valid base64 encoded string. But it is impossible to know if it is just a simple string, an English word flow, or is it base 64 encoded string ~Z0
There are many variants of Base64, so consider just determining whether your string resembles the variant you expect to handle. As such, you may need to adjust the regex below with respect to the index and padding characters (i.e. +, /, =).
class String
  def resembles_base64?
    self.length % 4 == 0 && self =~ /^[A-Za-z0-9+\/=]+\Z/
  end
end
Usage:
raise 'the string does not resemble Base64' unless my_string.resembles_base64?
Check to see if the string's length is a multiple of 4. Afterwards, use this regex to make sure all characters in the string are Base64 characters.
\A[a-zA-Z\d\/+]+={,2}\z
If the library you use adds a newline as a way of observing the 76 max chars per line rule, replace them with empty strings.
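A rough Java translation of the check described above (my own sketch, assuming plain un-chunked Base64; the class and method names are made up): the length must be a multiple of 4 and every character must come from the Base64 alphabet, with at most two '=' at the end.
import java.util.regex.Pattern;

public class Base64Shape {
    // Base64 alphabet followed by at most two padding characters
    private static final Pattern ALPHABET = Pattern.compile("^[A-Za-z0-9+/]+={0,2}$");

    public static boolean resemblesBase64(String s) {
        // strip line breaks first, in case the encoder wrapped lines at 76 characters
        String stripped = s.replace("\r", "").replace("\n", "");
        return stripped.length() % 4 == 0 && ALPHABET.matcher(stripped).matches();
    }
}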
/^([A-Za-z0-9+\/]{4})*([A-Za-z0-9+\/]{4}|[A-Za-z0-9+\/]{3}=|[A-Za-z0-9+\/]{2}==)$/
This regular expression helped me identify Base64 strings in my Rails application. I only had one problem: it also matches the string "errorDescripcion", which caused an error for me; to work around that, I simply validate the length of the string as well.
For Flutter, I tested a couple of the answers above and translated them into a Dart function as follows:
static bool isBase64(dynamic value) {
  if (value.runtimeType == String) {
    final RegExp rx = RegExp(
      r'^([A-Za-z0-9+/]{4})*([A-Za-z0-9+/]{3}=|[A-Za-z0-9+/]{2}==)?$',
      multiLine: true,
      unicode: true,
    );
    return rx.hasMatch(value);
  } else {
    return false;
  }
}
In Java, the code below worked for me:
public static boolean isBase64Encoded(String s) {
    String pattern = "^([A-Za-z0-9+/]{4})*([A-Za-z0-9+/]{3}=|[A-Za-z0-9+/]{2}==)?$";
    Pattern r = Pattern.compile(pattern);
    Matcher m = r.matcher(s);
    return m.find();
}
This works in Python:
import base64

def IsBase64(s):
    try:
        base64.b64decode(s)
        return True
    except Exception:
        return False

if IsBase64("ABC"):
    print("ABC is Base64-encoded and its result after decoding is: " + str(base64.b64decode("ABC")).replace("b'", "").replace("'", ""))
else:
    print("ABC is NOT Base64-encoded.")

if IsBase64("QUJD"):
    print("QUJD is Base64-encoded and its result after decoding is: " + str(base64.b64decode("QUJD")).replace("b'", "").replace("'", ""))
else:
    print("QUJD is NOT Base64-encoded.")
Summary: IsBase64("string here") returns true if string here is Base64-encoded, and it returns false if string here was NOT Base64-encoded.
C#
This is performing great:
static readonly Regex _base64RegexPattern = new Regex(BASE64_REGEX_STRING, RegexOptions.Compiled);

private const String BASE64_REGEX_STRING = @"^[a-zA-Z0-9\+/]*={0,3}$";

private static bool IsBase64(this String base64String)
{
    var rs = (!string.IsNullOrEmpty(base64String)
              && !string.IsNullOrWhiteSpace(base64String)
              && base64String.Length != 0
              && base64String.Length % 4 == 0
              && !base64String.Contains(" ")
              && !base64String.Contains("\t")
              && !base64String.Contains("\r")
              && !base64String.Contains("\n"))
             && (base64String.Length % 4 == 0
                 && _base64RegexPattern.Match(base64String, 0).Success);
    return rs;
}
There is no way to distinguish an arbitrary string from a Base64-encoded one, unless the strings in your system have some specific limitation or identifying marker.
This snippet may be useful when you know the length of the original content (e.g. a checksum). It checks that the encoded form has the correct length.
public static boolean isValidBase64( final int initialLength, final String string ) {
    final int padding;
    final String regexEnd;
    switch ( initialLength % 3 ) {
        case 1 :
            padding = 2;
            regexEnd = "==";
            break;
        case 2 :
            padding = 1;
            regexEnd = "=";
            break;
        default :
            padding = 0;
            regexEnd = "";
    }
    final int encodedLength = ( ( initialLength / 3 ) + ( padding > 0 ? 1 : 0 ) ) * 4;
    final String regex = "[a-zA-Z0-9/\\+]{" + ( encodedLength - padding ) + "}" + regexEnd;
    return Pattern.compile( regex ).matcher( string ).matches();
}
If the regex approach does not work and you know the format of the original content, you can reverse the logic and test for that format instead.
For example, I work with Base64-encoded XML files and simply check whether the file contains valid XML markup. If it does not, I can assume that it is Base64-encoded. This is not very dynamic, but it works fine for my small application.
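A minimal Java sketch of that reverse check (my own illustration, assuming the original payload is expected to be XML; the class and method names are made up): decode the candidate and see whether the result looks like markup.
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class ReverseCheck {
    public static boolean looksLikeBase64EncodedXml(String candidate) {
        try {
            byte[] decoded = Base64.getDecoder().decode(candidate);
            String text = new String(decoded, StandardCharsets.UTF_8).trim();
            return text.startsWith("<"); // crude markup test; a real XML parse would be stricter
        } catch (IllegalArgumentException notBase64) {
            return false; // not even valid Base64
        }
    }
}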
This works in Python:
import re

def is_base64(string):
    if len(string) % 4 == 0 and re.match(r'^[A-Za-z0-9+/=]+\Z', string):
        return True
    else:
        return False
Try this, using a previously mentioned regex:
String regex = "^([A-Za-z0-9+/]{4})*([A-Za-z0-9+/]{4}|[A-Za-z0-9+/]{3}=|[A-Za-z0-9+/]{2}==)$";
if ("TXkgdGVzdCBzdHJpbmc/".matches(regex)) {
    System.out.println("it's a Base64");
}
We can also do a simpler sanity check: if the string contains spaces, it cannot be Base64:
String myString = "Hello World";
if (myString.contains(" ")) {
    System.out.println("Not B64");
} else {
    System.out.println("Could be B64 encoded, since it has no spaces");
}
The idea here: if decoding the string yields only ASCII characters, the string was probably Base64 encoded; if the decoded output contains non-ASCII bytes, it probably was not.
(RoR) Ruby solution:
def encoded?(str)
  Base64.decode64(str.downcase).scan(/[^[:ascii:]]/).count.zero?
end

def decoded?(str)
  Base64.decode64(str.downcase).scan(/[^[:ascii:]]/).count > 0
end
Function Check_If_Base64(ByVal msgFile As String) As Boolean
Dim I As Long
Dim Buffer As String
Dim Car As String
Check_If_Base64 = True
Buffer = Leggi_File(msgFile)
Buffer = Replace(Buffer, vbCrLf, "")
For I = 1 To Len(Buffer)
Car = Mid(Buffer, I, 1)
If (Car < "A" Or Car > "Z") _
And (Car < "a" Or Car > "z") _
And (Car < "0" Or Car > "9") _
And (Car <> "+" And Car <> "/" And Car <> "=") Then
Check_If_Base64 = False
Exit For
End If
Next I
End Function
Function Leggi_File(PathAndFileName As String) As String
Dim FF As Integer
FF = FreeFile()
Open PathAndFileName For Binary As #FF
Leggi_File = Input(LOF(FF), #FF)
Close #FF
End Function
import java.util.Base64;

public static String encodeBase64(String s) {
    return Base64.getEncoder().encodeToString(s.getBytes());
}

public static String decodeBase64(String s) {
    try {
        if (isBase64(s)) {
            return new String(Base64.getDecoder().decode(s));
        } else {
            return s;
        }
    } catch (Exception e) {
        return s;
    }
}

public static boolean isBase64(String s) {
    String pattern = "^([A-Za-z0-9+/]{4})*([A-Za-z0-9+/]{4}|[A-Za-z0-9+/]{3}=|[A-Za-z0-9+/]{2}==)$";
    Pattern r = Pattern.compile(pattern);
    Matcher m = r.matcher(s);
    return m.find();
}
For the Java flavour, I actually use the following regex:
"([A-Za-z0-9+]{4})*([A-Za-z0-9+]{3}=|[A-Za-z0-9+]{2}(==){0,2})?"
This also has the == as optional in some cases.
Best!
I tried using this, and it works:
^([A-Za-z0-9+/]{4})*([A-Za-z0-9+/]{3}=|[A-Za-z0-9+/]{2}==)?$
but I added an extra condition to check that the string contains at least one '=' (expected near the end):
string.lastIndexOf("=") >= 0

Java encryption and Force.com apex encryption

I need to convert this Java code to Force.com Apex. I tried to use the Crypto class to get the same encryption, but I cannot work out how to get the same value for the variable "fingerprintHash" at the end in Apex. Can anyone help me with this?
Random generator = new Random();
sequence =Long.parseLong(sequence+""+generator.nextInt(1000));
timeStamp = System.currentTimeMillis() / 1000;
try {
SecretKey key = new SecretKeySpec(transactionKey.getBytes(), "HmacMD5");
Mac mac = Mac.getInstance("HmacMD5");
mac.init(key);
String inputstring = loginID + "^" + sequence + "^" + timeStamp + "^" + amount + "^";
byte[] result = mac.doFinal(inputstring.getBytes());
StringBuffer strbuf = new StringBuffer(result.length * 2);
for (int i = 0; i < result.length; i++) {
if (((int) result[i] & 0xff) < 0x10) {
strbuf.append("0");
}
strbuf.append(Long.toString((int) result[i] & 0xff, 16));
}
fingerprintHash = strbuf.toString(); //need this result for variable x_fp_hash
The Apex code I was trying is:
String API_Login_Id='6########';
String TXn_Key='6###############';
String amount='55';
sequence = '300';
long timeStamp = System.currentTimeMillis()/1000;
String inputStr = API_Login_Id + '^' + sequence + '^' + timeStamp + '^' + amount + '^';
String algorithmName = 'hmacMD5';
Blob mac = Crypto.generateMac(algorithmName,Blob.valueOf(inputStr),Blob.valueOf( TXn_Key));
String macUrl =EncodingUtil.urlEncode(EncodingUtil.base64Encode(mac), 'UTF-8');
The problem would seem to be that you are hex-encoding the output on the Java side, but Base64-encoding it on the Apex side. Try using EncodingUtil.convertToHex instead of EncodingUtil.base64Encode.
It looks like you're heading along the right lines with the encryption; however, you're using a timestamp as part of your input string, so unless you're astronomically lucky you're always encoding different strings. While you're porting the code, remove the timestamp so that you can be sure your input strings are the same; if they're not the same, you'll never get the same result.
Once you've established that your encryption is working as desired, you can put the timestamp back into the code, safe in the knowledge that it will behave the same way as the original Java code.
