I'm currently implementing a chat program that lets the user choose between RC4 and TEA encryption. It's a partner assignment, and I have taken RC4. I've mostly worked from the Wikipedia page, as well as our book (whose code is, I believe, the same as the Wikipedia version).
https://en.wikipedia.org/wiki/RC4
Going off the code there, I've translated it into Java. Note that my convertKey method will eventually take a key from a separate class that uses Diffie-Hellman to derive it (the key is hard-coded right now so I can verify RC4 on its own). Only my RC4 class is non-functional, so unless specifically requested I will just paste the RC4 class to avoid wasted space; the other classes work fine and this one doesn't currently use them.
STACK TRACE:
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: -24
    at cryptochat2.RC4.PRGA(RC4.java:80)
    at cryptochat2.RC4.main(RC4.java:132)
Java Result: 1
package cryptochat2;
/**
*
* @author Braydon
*/
//Initialize array of 256 bytes
// Run KSA (key-scheduling algorithm) on them
// Run PRGA (pseudo-random generation algorithm) on the result to generate the keystream
// XOR the data with the keystream
import java.util.Arrays;
import java.util.*;
import java.util.stream.IntStream;
// The interesting feature of RC4 is that the key can be of any length from 1 to 256
//bytes. And again, the key is only used to initialize the permutation S.
//Note that the 256-byte array K is filled by simply repeating the key until
//the array is full.
public class RC4 {
byte[] lookUpTable = new byte[256]; // S
byte[] key = new byte[256];
byte[] initializedKey = new byte[256]; // key
byte[] keyStream;
byte keyStreamByte;
boolean generatingOutput;
//int key;
int keylength = 256;
// Call KSA then PRGA
public void KSA() {
// uses identity permutation while also converting to bytes
// then must process S
for (int k = 0; k < 256; k++) {
lookUpTable[k] = (byte) k;
}
// now processing permutation of array
int j = 0;
int tempOne = 0;
int tempTwo = 0;
for (int k = 0; k < 256; k++) {
j = (j + lookUpTable[k] + initializedKey[k % keylength]) % 256;
// Switching S[i] and S[j]
byte tmp;
for (int i = 0; i < 256; i++) {
j = (j + lookUpTable[i] + initializedKey[i]) & 0xFF;
tmp = lookUpTable[j];
lookUpTable[j] = lookUpTable[i];
lookUpTable[i] = tmp;
}
}
}
//Error in PRGA-- arrayIndexOutOfBounds, value differs based on key but
// error remains the same. It's an issue with the following method.
public void PRGA() {
int i = 0;
int j = 0;
byte tmp;
boolean generatingOutput = true;
while (generatingOutput) {
i = (i + 1) % 256;
j = (j + lookUpTable[i]) % 256;
for (int k = 0; k < 256; k++) {
j = (j + lookUpTable[i] + initializedKey[i]) & 0xFF;
tmp = lookUpTable[j];
lookUpTable[j] = lookUpTable[i];
lookUpTable[i] = tmp;
}
int keyStreamByteTemp = ((lookUpTable[i] + lookUpTable[j]) % 256);
try {// error's in this part vvvvvv
keyStreamByte = lookUpTable[keyStreamByteTemp];
System.out.println("keystream byte: " + keyStreamByte);
} catch (IndexOutOfBoundsException exception){
System.out.println(keyStreamByte + " has given an exception");
}
}
}
public void convertKey(int key) {
// call this first. it gives us our key.
int nonByte = key;
byte bytedKey = (byte) key;
// We create an int array and from it we initialize our key byte array
int[] data = IntStream.generate(() -> bytedKey).limit(256).toArray();
for (int i = 0; i < 256; i++) {
initializedKey[i] = (byte) data[i];
}
keylength = numberOfLeadingZeros(key);
}
public static int numberOfLeadingZeros(int i) {
// http://stackoverflow.com/questions/2935793/count-bits-used-in-int
if (i == 0) {
return 32;
}
int n = 1;
if (i >>> 16 == 0) {
n += 16;
i <<= 16;
}
if (i >>> 24 == 0) {
n += 8;
i <<= 8;
}
if (i >>> 28 == 0) {
n += 4;
i <<= 4;
}
if (i >>> 30 == 0) {
n += 2;
i <<= 2;
}
n -= i >>> 31;
return n;
}
public static void main(String[] args) {
RC4 RC4 = new RC4();
// temp hard coded key
RC4.convertKey(16);
RC4.KSA();
RC4.PRGA();
}
}
While I have a general understanding of RC4, it's very possible that I have an element or two confused or incorrect. The try/catch block gives me an endless stream of numbers, positive and negative, some of which throw IndexOutOfBounds exceptions, until I stop the program (endless, I'd imagine, because of the generatingOutput boolean and because we aren't passing in an actual message yet).
keystream byte: -53
keystream byte: 105
105 has given an exception
keystream byte: 6
6 has given an exception
keystream byte: -111
keystream byte: 73
73 has given an exception
keystream byte: 66
keystream byte: -86
keystream byte: -104
keystream byte: -114
keystream byte: -117
keystream byte: 56
keystream byte: 67
67 has given an exception
keystream byte: 121
keystream byte: 10
keystream byte: -7
keystream byte: 16
keystream byte: 103
103 has given an exception
keystream byte: -65
-65 has given an exception
keystream byte: 31
31 has given an exception
keystream byte: 21
I would really appreciate any help, I'd like to learn this well and not disappoint my professor or myself.
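For comparison, here is a minimal reference sketch of RC4 translated directly from the Wikipedia pseudocode linked above (the class name RC4Reference is illustrative, not from the assignment). It keeps S in an int[] and masks every index with & 0xFF, so the index arithmetic can never go negative, which is a common pitfall with Java's signed bytes:
// Reference sketch of RC4 (KSA + PRGA), following the Wikipedia pseudocode.
import java.util.Arrays;

public class RC4Reference {
    private final int[] S = new int[256];
    private int i = 0, j = 0;

    public RC4Reference(byte[] key) {
        // KSA: key-scheduling algorithm
        for (int k = 0; k < 256; k++) {
            S[k] = k;
        }
        int jj = 0;
        for (int k = 0; k < 256; k++) {
            jj = (jj + S[k] + (key[k % key.length] & 0xFF)) & 0xFF;
            int tmp = S[k]; S[k] = S[jj]; S[jj] = tmp;   // swap S[k] and S[jj]
        }
    }

    // PRGA: produce one keystream byte per call
    public byte nextByte() {
        i = (i + 1) & 0xFF;
        j = (j + S[i]) & 0xFF;
        int tmp = S[i]; S[i] = S[j]; S[j] = tmp;         // swap S[i] and S[j]
        return (byte) S[(S[i] + S[j]) & 0xFF];
    }

    // XOR the keystream with the data; the same call encrypts and decrypts
    public byte[] crypt(byte[] data) {
        byte[] out = new byte[data.length];
        for (int k = 0; k < data.length; k++) {
            out[k] = (byte) (data[k] ^ nextByte());
        }
        return out;
    }

    public static void main(String[] args) {
        RC4Reference rc4 = new RC4Reference("Key".getBytes());
        System.out.println(Arrays.toString(rc4.crypt("Plaintext".getBytes())));
    }
}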
Related
So I am saving an audio file, and I have to convert float[] to byte[]. This works fine:
final byte[] byteBuffer = new byte[buffer.length * 2];
int bufferIndex = 0;
for (int i = 0; i < byteBuffer.length; i++) {
final int x = (int) (buffer[bufferIndex++] * 32767.0);
byteBuffer[i] = (byte) x;
i++;
byteBuffer[i] = (byte) (x >>> 8);
if (bufferIndex < 5) {
System.out.println(buffer[bufferIndex]);
System.out.println(byteBuffer[i - 1]);
System.out.println(byteBuffer[i]);
}
}
But when I want to read the bytes and convert them back to floats, only the first four numbers match the old ones:
for (int i =0; i < length; i++) {
i++;
float val = (((audioB[i]) & 0xff) << 8) | ((audioB[i-1]) & 0xff);
val = (float) (val /32767.0);
if (bufferindex < 5) {
System.out.println(val);
System.out.println(audioB[i-1]);
System.out.println(audioB[i]);
}
bufferindex++;
}
The output:
0.07973075
0
0
0.149165
52
10
0.19944257
23
19
0.22437502
-121
25
---------
0.0
0
0
0.07971435
52
10
0.14914395
23
19
0.19943845
-121
25
0.22437209
Why?
Rather than implementing your own bit shifting magic, why not use the java.nio.ByteBuffer class?
byte[] bytes = ByteBuffer.allocate(8).putFloat(1.0F).putFloat(2.0F).array();
ByteBuffer bb = ByteBuffer.wrap(bytes);
float f1 = bb.getFloat();
float f2 = bb.getFloat();
You are stuffing 4-byte float data into just 2 bytes per sample. This will obviously lose some precision. Hint: do the backwards calculation (2 bytes -> float) in your first loop, compare the result with the original value, and look at intermediate values like x.
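A minimal sketch of that hinted check, assuming the same 16-bit packing as the first loop in the question (the cast through short restores the sign when reassembling):
for (int i = 0; i < buffer.length; i++) {
    int x = (int) (buffer[i] * 32767.0);                      // float -> 16-bit sample
    byte lo = (byte) x;
    byte hi = (byte) (x >>> 8);
    short back = (short) (((hi & 0xff) << 8) | (lo & 0xff));  // reassemble, sign-extended
    float restored = (float) (back / 32767.0);
    if (i < 5) {
        System.out.println(buffer[i] + " -> " + restored);    // compare original vs. round trip
    }
}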
I am a newbie to applets, and I used this link: working with Java Card Wallet to create a Wallet project.
Previously I could credit the card amount with this command: 80 30 00 00 01 1A 00.
Now I want to add 5000 to the present amount. 5000 in hex is 0x1388, which is 2 bytes, so I must send the two data bytes 13 and 88 to the card.
I created the command below and sent it to the card, but I get '67 00 Wrong length' as the response:
80 30 00 00 02 13 88 00
How can I credit or debit more than 1 byte to/from the card?
You'll have to change the code of the Applet you're pointing to of course:
if ((numBytes != 1) || (byteRead != 1)) {
ISOException.throwIt(ISO7816.SW_WRONG_LENGTH); // constant with value 0x6700
}
So you must make sure that it allows 2 bytes to be sent; then you can use the Util.getShort method to convert the bytes to a 16-bit signed value (using big-endian two's-complement notation, as usual).
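As a side note, a minimal sketch (assuming the amount sits at ISO7816.OFFSET_CDATA, as in the code below): the two data bytes can also be read straight out of the APDU buffer without copying them first:
byte[] buffer = apdu.getBuffer();
// reads a big-endian 16-bit signed value starting at the data field
short creditAmount = Util.getShort(buffer, ISO7816.OFFSET_CDATA);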
Replace the credit() method with this one. But remember that henceforth you must use a two-byte value for crediting your wallet, even for values less than 255 (0xFF); i.e. you must use 0x00FF to credit your wallet with $255.
private void credit(APDU apdu) {
// access authentication
if (!pin.isValidated()) {
ISOException.throwIt(SW_PIN_VERIFICATION_REQUIRED);
}
byte[] buffer = apdu.getBuffer();
// Lc byte denotes the number of bytes in the
// data field of the command APDU
byte numBytes = buffer[ISO7816.OFFSET_LC];
// indicate that this APDU has incoming data
// and receive data starting from the offset
// ISO7816.OFFSET_CDATA following the 5 header
// bytes.
byte byteRead = (byte) (apdu.setIncomingAndReceive());
// it is an error if the number of data bytes
// read does not match the number in Lc byte
if ((numBytes != 2) || (byteRead != 2)) {
ISOException.throwIt(ISO7816.SW_WRONG_LENGTH);
}
// get the creditBytes
byte[] creditBytes = new byte[2];
creditBytes[0]=buffer[ISO7816.OFFSET_CDATA];
creditBytes[1]=buffer[ISO7816.OFFSET_CDATA+1];
// convert the 2 bytes of creditBytes to a single short value.
short creditAmount = Util.getShort(creditBytes,(short)0);
// check the credit amount
if ((creditAmount > MAX_TRANSACTION_AMOUNT) || (creditAmount < 0)) {
ISOException.throwIt(SW_INVALID_TRANSACTION_AMOUNT);
}
// check the new balance
if ((short) (balance + creditAmount) > MAX_BALANCE) {
ISOException.throwIt(SW_EXCEED_MAXIMUM_BALANCE);
}
// credit the amount
balance = (short) (balance + creditAmount);
}
I propose using BCD addition and BCD subtraction, as follows:
Each byte represents two BCD digits, e.g. 0x99 represents 99 instead of 153.
All data included in the addition and subtraction shall have the same length, e.g. 6 bytes represent 12 digits. This should cover most cases, but if you need more, simply change the constant.
Your applet loops through the bytes to do the addition or subtraction. Encode and decode operations between BCD and the numeric value are needed before and after the operation.
Here is a sample implementation. It is not tested yet, but it should give you an idea of how it works:
public class BCD {
public static final short NUMBER_OF_BYTES = 6;
static void add(byte[] augend, byte[] addend, byte[] result) {
byte carry = 0;
short temp = 0;
for (short i = (short) (NUMBER_OF_BYTES - 1); i >= 0; i--) {
temp = (short) (decode(augend[i]) + decode(addend[i]) + carry);
carry = (byte) ((temp >= 100) ? 1 : 0); // carry when the two-digit sum reaches 100
result[i] = encode((byte) temp);
}
if (carry == 1) {
// TODO: result more than maximum
// you can set all digits to 9 or throw exception
}
}
static void subtract(byte[] minuend, byte[] subtrahend, byte[] result) {
byte borrow = 0;
short temp = 0;
for (short i = (short) (NUMBER_OF_BYTES - 1); i >= 0; i--) {
temp = (short) (100 + decode(minuend[i]) - decode(subtrahend[i]) - borrow);
borrow = (byte) ((temp < 100) ? 1 : 0);
result[i] = encode((byte) temp);
}
if (borrow == 1) {
// TODO: subtrahend > minuend,
// you can set all digits to 0 or throw exception
}
}
static byte encode(byte value) {
value %= 100; // only convert two digits, ignore borrow/carry
return (byte) (((value / 10) << 4) | (value % 10));
}
static byte decode(byte bcdByte) {
byte highNibble = (byte) ((bcdByte >> 4) & 0x0F);
byte lowNibble = (byte) (bcdByte & 0x0F);
if ((highNibble > 9) || (lowNibble > 9)) {
// found 'A' to 'F' character which should be invalid
// you can change this line, e.g. throwing exception
return 0;
}
return (byte) ((highNibble * 10) + lowNibble);
}
}
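A hypothetical usage of the sketch above, adding 1234 and 78 as 6-byte BCD numbers (in a real applet the arrays would be allocated once rather than per transaction):
byte[] augend = { 0x00, 0x00, 0x00, 0x00, 0x12, 0x34 };  // 000000001234
byte[] addend = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x78 };  // 000000000078
byte[] result = new byte[BCD.NUMBER_OF_BYTES];
BCD.add(augend, addend, result);   // result = 00 00 00 00 13 12, i.e. 1234 + 78 = 1312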
Is it possible to put a byte[] (byte array) into JSON?
If so, how can I do that in Java, and then read that JSON and convert the field back to a byte[]?
Here is a good example of Base64-encoding byte arrays. It gets more complicated when you throw Unicode characters into the mix to send things like PDF documents. After encoding a byte array, the encoded string can be used as a JSON property value.
Apache commons offers good utilities:
byte[] bytes = getByteArr();
String base64String = Base64.encodeBase64String(bytes);
byte[] backToBytes = Base64.decodeBase64(base64String);
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Base64_encoding_and_decoding
Java server side example:
public String getUnsecureContentBase64(String url)
throws ClientProtocolException, IOException {
//getUnsecureContent will generate some byte[]
byte[] result = getUnsecureContent(url);
// use apache org.apache.commons.codec.binary.Base64
// if you're sending back as a http request result you may have to
// org.apache.commons.httpclient.util.URIUtil.encodeQuery
return Base64.encodeBase64String(result);
}
JavaScript decode:
//decode URL encoding if encoded before returning result
var uriEncodedString = decodeURIComponent(response);
var byteArr = base64DecToArr(uriEncodedString);
//from mozilla
function b64ToUint6 (nChr) {
return nChr > 64 && nChr < 91 ?
nChr - 65
: nChr > 96 && nChr < 123 ?
nChr - 71
: nChr > 47 && nChr < 58 ?
nChr + 4
: nChr === 43 ?
62
: nChr === 47 ?
63
:
0;
}
function base64DecToArr (sBase64, nBlocksSize) {
var
sB64Enc = sBase64.replace(/[^A-Za-z0-9\+\/]/g, ""), nInLen = sB64Enc.length,
nOutLen = nBlocksSize ? Math.ceil((nInLen * 3 + 1 >> 2) / nBlocksSize) * nBlocksSize : nInLen * 3 + 1 >> 2, taBytes = new Uint8Array(nOutLen);
for (var nMod3, nMod4, nUint24 = 0, nOutIdx = 0, nInIdx = 0; nInIdx < nInLen; nInIdx++) {
nMod4 = nInIdx & 3;
nUint24 |= b64ToUint6(sB64Enc.charCodeAt(nInIdx)) << 18 - 6 * nMod4;
if (nMod4 === 3 || nInLen - nInIdx === 1) {
for (nMod3 = 0; nMod3 < 3 && nOutIdx < nOutLen; nMod3++, nOutIdx++) {
taBytes[nOutIdx] = nUint24 >>> (16 >>> nMod3 & 24) & 255;
}
nUint24 = 0;
}
}
return taBytes;
}
The typical way to send binary in JSON is to Base64-encode it.
Java provides different ways to Base64 encode and decode a byte[]. One of these is DatatypeConverter.
Very simply
byte[] originalBytes = new byte[] { 1, 2, 3, 4, 5};
String base64Encoded = DatatypeConverter.printBase64Binary(originalBytes);
byte[] base64Decoded = DatatypeConverter.parseBase64Binary(base64Encoded);
You'll have to adapt this conversion to the JSON parser/generator library you use.
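Since Java 8, the same thing can also be done without external libraries via java.util.Base64; a minimal sketch (org.json is used here only as one example of a JSON library):
import java.util.Base64;
import org.json.JSONObject;

public class ByteArrayJsonExample {
    public static void main(String[] args) {
        byte[] original = { 1, 2, 3, 4, 5 };
        // encode and store as an ordinary JSON string value
        String encoded = Base64.getEncoder().encodeToString(original);
        JSONObject json = new JSONObject().put("data", encoded);
        // read it back and decode
        byte[] restored = Base64.getDecoder().decode(json.getString("data"));
        System.out.println(json + " -> " + java.util.Arrays.toString(restored));
    }
}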
Amazingly, org.json now lets you put a byte[] object directly into a JSON object and it remains readable. You can even send the resulting object over a WebSocket and it will be readable on the other side. I am not sure yet whether the resulting object is bigger or smaller than if you converted your byte array to base64; it would certainly be neat if it were smaller.
It seems to be incredibly hard to measure how much space such a JSON object takes up in Java. If your JSON consists merely of strings, you can measure it by simply stringifying it, but with a byte array inside I fear it is not as straightforward.
Stringifying our JSON in Java replaces the byte array with a 10-character string that looks like an ID. Doing the same in Node.js replaces the byte[] with an unquoted value reading <Buffered Array: f0 ff ff ...>; the length of the latter indicates a size increase of ~300%, as would be expected.
In line with @Qwertie's suggestion, but going further on the lazy side, you could just pretend that each byte is an ISO-8859-1 character. For the uninitiated, ISO-8859-1 is a single-byte encoding that matches the first 256 code points of Unicode.
So @Ash's answer is actually redeemable with a charset:
byte[] args2 = getByteArry();
String byteStr = new String(args2, Charset.forName("ISO-8859-1"));
This encoding has the same readability as BAIS, with the advantage that it is processed faster than either BAIS or base64 as less branching is required. It might look like the JSON parser is doing a bit more, but it's fine because dealing with non-ASCII by escaping or by UTF-8 is part of a JSON parser's job anyways. It could map better to some formats like MessagePack with a profile.
Space-wise however, it is usually a loss, as nobody would be using UTF-16 for JSON. With UTF-8 each non-ASCII byte would occupy 2 bytes, while BAIS uses (2+4n + r?(r+1):0) bytes for every run of 3n+r such bytes (r is the remainder).
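A minimal sketch of that round trip (getByteArr() stands in for whatever produces the bytes):
import java.nio.charset.StandardCharsets;

byte[] original = getByteArr();   // hypothetical source of bytes
// every byte maps 1:1 to a Unicode code point 0-255, so this round-trips losslessly
String asText = new String(original, StandardCharsets.ISO_8859_1);
byte[] restored = asText.getBytes(StandardCharsets.ISO_8859_1);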
If your byte array may contain runs of ASCII characters that you'd like to be able to see, you might prefer BAIS (Byte Array In String) format instead of Base64. The nice thing about BAIS is that if all the bytes happen to be ASCII, they are converted 1-to-1 to a string (e.g. the byte array {65,66,67} becomes simply "ABC"). Also, BAIS often gives you a smaller file size than Base64 (though this isn't guaranteed).
After converting the byte array to a BAIS string, write it to JSON like you would any other string.
Here is a Java class (ported from the original C#) that converts byte arrays to string and back.
import java.io.*;
import java.lang.*;
import java.util.*;
public class ByteArrayInString
{
// Encodes a byte array to a string with BAIS encoding, which
// preserves runs of ASCII characters unchanged.
//
// For simplicity, this method's base-64 encoding always encodes groups of
// three bytes if possible (as four characters). This decision may
// unfortunately cut off the beginning of some ASCII runs.
public static String convert(byte[] bytes) { return convert(bytes, true); }
public static String convert(byte[] bytes, boolean allowControlChars)
{
StringBuilder sb = new StringBuilder();
int i = 0;
int b;
while (i < bytes.length)
{
b = get(bytes,i++);
if (isAscii(b, allowControlChars))
sb.append((char)b);
else {
sb.append('\b');
// Do binary encoding in groups of 3 bytes
for (;; b = get(bytes,i++)) {
int accum = b;
if (i < bytes.length) {
b = get(bytes,i++);
accum = (accum << 8) | b;
if (i < bytes.length) {
b = get(bytes,i++);
accum = (accum << 8) | b;
sb.append(encodeBase64Digit(accum >> 18));
sb.append(encodeBase64Digit(accum >> 12));
sb.append(encodeBase64Digit(accum >> 6));
sb.append(encodeBase64Digit(accum));
if (i >= bytes.length)
break;
} else {
sb.append(encodeBase64Digit(accum >> 10));
sb.append(encodeBase64Digit(accum >> 4));
sb.append(encodeBase64Digit(accum << 2));
break;
}
} else {
sb.append(encodeBase64Digit(accum >> 2));
sb.append(encodeBase64Digit(accum << 4));
break;
}
if (isAscii(get(bytes,i), allowControlChars) &&
(i+1 >= bytes.length || isAscii(get(bytes,i+1), allowControlChars)) &&
(i+2 >= bytes.length || isAscii(get(bytes,i+2), allowControlChars))) {
sb.append('!'); // return to ASCII mode
break;
}
}
}
}
return sb.toString();
}
// Decodes a BAIS string back to a byte array.
public static byte[] convert(String s)
{
byte[] b;
try {
b = s.getBytes("UTF8");
} catch(UnsupportedEncodingException e) {
throw new RuntimeException(e.getMessage());
}
for (int i = 0; i < b.length - 1; ++i) {
if (b[i] == '\b') {
int iOut = i++;
for (;;) {
int cur;
if (i >= b.length || ((cur = get(b, i)) < 63 || cur > 126))
throw new RuntimeException("String cannot be interpreted as a BAIS array");
int digit = (cur - 64) & 63;
int zeros = 16 - 6; // number of 0 bits on right side of accum
int accum = digit << zeros;
while (++i < b.length)
{
if ((cur = get(b, i)) < 63 || cur > 126)
break;
digit = (cur - 64) & 63;
zeros -= 6;
accum |= digit << zeros;
if (zeros <= 8)
{
b[iOut++] = (byte)(accum >> 8);
accum <<= 8;
zeros += 8;
}
}
if ((accum & 0xFF00) != 0 || (i < b.length && b[i] != '!'))
throw new RuntimeException("String cannot be interpreted as BAIS array");
i++;
// Start taking bytes verbatim
while (i < b.length && b[i] != '\b')
b[iOut++] = b[i++];
if (i >= b.length)
return Arrays.copyOfRange(b, 0, iOut);
i++;
}
}
}
return b;
}
static int get(byte[] bytes, int i) { return ((int)bytes[i]) & 0xFF; }
public static int decodeBase64Digit(char digit)
{ return digit >= 63 && digit <= 126 ? (digit - 64) & 63 : -1; }
public static char encodeBase64Digit(int digit)
{ return (char)((digit + 1 & 63) + 63); }
static boolean isAscii(int b, boolean allowControlChars)
{ return b < 127 && (b >= 32 || (allowControlChars && b != '\b')); }
}
See also: C# unit tests.
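A hypothetical round trip with the class above; as described, an all-ASCII array stays readable:
byte[] original = { 65, 66, 67 };                        // "ABC"
String asBais = ByteArrayInString.convert(original);     // -> "ABC"
byte[] restored = ByteArrayInString.convert(asBais);     // -> { 65, 66, 67 }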
What about simply this:
byte[] args2 = getByteArry();
String byteStr = new String(args2);
I have to implement a variant of the Vigenère cipher. I got the encryption part without issues, but I have a bug in the decryption code and I don't understand what I'm doing wrong.
The requirements are:
the key can only contain A - Z (uppercase)
code values for the key characters are 0 for A, 1 for B, ..., and 25 for Z
do not encode a character if the code is < 32 (preserve control characters)
encrypted character code = original character code + key character code
the final encrypted character must be between 32 and 126; specifically, if the final encrypted character is > 126 it must be brought back into the 32 - 126 range by adding 32 to the value and then subtracting 126
The encryption code:
// it works ok
// I have tested it with some provided strings and the results are as expected
public String encrypt(String plainText)
{
StringBuilder sb = new StringBuilder();
for (int i = 0; i < plainText.length(); i++) {
char c = plainText.charAt(i);
if (c >= 32) {
int keyCharValue = theKey.charAt(i % theKey.length()) - 'A';
c += keyCharValue;
if (c > 126) {
c = (char) (c + 32 - 126);
}
}
sb.append(c);
}
return sb.toString();
}
The decryption code:
// there probably is an off-by-one error somewhere
// everything is decrypted ok, except '~' which gets decrypted to ' ' (space)
public String decrypt(String cipherText)
{
StringBuilder sb = new StringBuilder();
for (int i = 0; i < cipherText.length(); i++) {
char c = cipherText.charAt(i);
if (c >= 32) {
int keyCharValue = theKey.charAt(i % theKey.length()) - 'A';
c -= keyCharValue;
if (c < 32) {
c = (char) (c + 126 - 32);
}
}
sb.append(c);
}
return sb.toString();
}
Example (with key ABCDEFGHIJKLMNOPQRSTUVWXYZ):
original ~~~~~~~~~~~~~~~~~~~~~~~~~~
encrypted ~!"#$%&'()*+,-./0123456789
decrypted ~ ('~' followed by spaces)
EDIT:
Here is the code I use for testing (it tests every character from 0 to 126 repeated as a string):
public static void main(String[] args) {
int passed = 0;
int failed = 0;
String key = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
for (int c = 0; c <= 126; c++) {
StringBuilder sbString = new StringBuilder();
for (int i = 0; i <= 25; i++) {
sbString.append((char) c);
}
String original = sbString.toString();
Cipher cipher = new Cipher(key);
String encrypted = cipher.encrypt(original);
String decrypted = cipher.decrypt(encrypted);
if (!original.equals(decrypted)) {
failed++;
System.out.println("--FAILED--");
System.out.println(original);
System.out.println(encrypted);
System.out.println(decrypted);
} else {
passed++;
}
}
int tests = passed + failed;
System.out.println(tests + " tests");
System.out.println("passed: " + passed);
System.out.println("failed: " + failed);
}
I believe the If(c < 32) in the decryption needs to be If (c <= 32).
Reasoning: if you take the case of Char(126) or '~' then add one to it in the encryption you get 127, which goes through the encryption transform and becomes 33.
When decrypting that you get 33 minus that same 1 which leaves you with 32, which won't trigger the special decryption case. By including 32 in that statement it will trigger the special decryption and change the 32 (" ") to 126 ("~")
You are correct it is an off by one error, but it is kinda subtle
EDIT: there is a collision error because char(32) and char(126) map to the same encrypted value. In my previous example that value would be 33; the equation needs to be changed so that char(126) wraps to 32.
Changing c = (char) (c + 32 - 126); to c = (char) (c + 32 - 127); frees up the extra value and prevents the collision. The decrypt will also have to be changed from c = (char) (c + 126 - 32); to c = (char) (c + 127 - 32);
And someone posted that in my comments.
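Putting the edit together, a sketch of the corrected wrap (the printable range 32-126 holds 95 values, so both directions must wrap by 95):
// encrypt
c += keyCharValue;
if (c > 126) {
    c = (char) (c - 95);   // same as c + 32 - 127
}

// decrypt
c -= keyCharValue;
if (c < 32) {
    c = (char) (c + 95);   // same as c + 127 - 32
}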
Is there any standard method in Java to convert an IBM 370 floating-point value (in the form of bytes) to IEEE format? Any algorithm for the conversion would help.
I tried writing Java code, but I fail to see where I'm going wrong. When I give the input -2.000000000000000E+02 I get -140.0 in IEEE format, and when I give the input 3.140000000000000E+00 I get 3.1712502374909226 in IEEE format. Any help on this would be highly appreciated.
private void conversion() {
byte[] buffer = //bytes to be read(8 bytes);
int sign = (buffer[0] & 0x80);
// Extract exponent.
int exp = ((buffer[0] & 0x7f) - 64) * 4 - 1;
//Normalize the mantissa.
for (int i = 0; i < 4; i++) {//since 4 bits per hex digit
if ((buffer[1] & 0x80) == 0) {
buffer = leftShift(buffer);
exp = exp - 1;
}
}
// Put sign and mantissa back in 8-byte number
buffer = rightShift(buffer);// make room for longer exponent
buffer = rightShift(buffer);
buffer = rightShift(buffer);
exp = exp + 1023;//Excess 1023 format
int temp = exp & 0x0f;//Low 4 bits go into B(1)
buffer[1]= (byte)((buffer[1]&0xf) | (temp *16));
buffer[0]= (byte)(sign | ((exp/16) & 0x7f));
}
private byte[] rightShift(byte[] buf) {
int newCarry = 0;
int oldCarry = 0;
for(int i = 1; i<buf.length; i++) {
newCarry = buf[i] & 1;
buf[i] = (byte)((buf[i] & 0xFE)/2 + (oldCarry != 0 ? 0x80 : 0));
oldCarry = newCarry;
}
return buf;
}
private byte[] leftShift(byte[] buf) {
int newCarry = 0;
int oldCarry = 0;
for(int i = buf.length-1; i>0; i--) {
newCarry = buf[i] & 1;
buf[i] = (byte)((buf[i] & 0x7F)*2 + (oldCarry != 0 ? 1 : 0));
oldCarry = newCarry;
}
return buf;
}
I can see a couple of different solutions to your question:
Use the text representation as an intermediate reference
Do a straight conversion ported from C code
This IBM Technical Article includes algorithms for converting from IBM floating-point formats to IEEE floating point.
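For reference, a minimal standalone sketch of that conversion for the 8-byte System/360-370 hexadecimal format (1 sign bit, 7-bit excess-64 base-16 exponent, 56-bit fraction); this is only an illustration of the algorithm, not the article's code:
// Decode an 8-byte IBM hexadecimal floating-point value into a Java double.
static double ibmDoubleToIeee(byte[] b) {
    long bits = 0;
    for (int k = 0; k < 8; k++) {
        bits = (bits << 8) | (b[k] & 0xFFL);              // big-endian assembly
    }
    int sign = (int) (bits >>> 63);                       // 1 sign bit
    int exponent = (int) ((bits >>> 56) & 0x7F) - 64;     // excess-64 power of 16
    long fraction = bits & 0x00FFFFFFFFFFFFFFL;           // 56-bit fraction
    double mantissa = fraction / Math.pow(2, 56);         // 0 <= mantissa < 1
    double value = mantissa * Math.pow(16, exponent);
    return sign == 0 ? value : -value;
}
With the -118.625 example mentioned below (bytes C2 76 A0 00 00 00 00 00), this gives sign 1, exponent 2 and mantissa 0.46337890625, i.e. -118.625.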
There is a bug in the leftShift() function, where you should mask with 0x80 instead of 1. Here is the corrected function.
private byte[] leftShift(byte[] buf) {
int newCarry = 0;
int oldCarry = 0;
for(int i = buf.length-1; i>0; i--) {
newCarry = buf[i] & 0x80;
buf[i] = (byte)((buf[i] & 0x7F)*2 + (oldCarry != 0 ? 1 : 0));
oldCarry = newCarry;
}
return buf;
}
I tested with the wiki example -118.625. If I understand correctly, the bias for IBM double is also 64, so the binary will be 11000010 01110110 10100000 00000000 00000000 00000000 00000000 00000000. After the fix, the program produces -118.625 correctly.
I know it is an old post, but I recently ran into the same situation too.