Hi
I have some data being received over a Bluetooth connection.
The data includes a 16-bit CRC-16-CCITT block, which I want to use to verify that the data was transferred successfully and without error.
Is there any built-in method in Java or Android that can help me, or do I need to implement it myself? Will I need to encode the data and compare? I have a code snippet for doing that which I found online, but I'm not sure it is correct or efficient.
It is found at: http://introcs.cs.princeton.edu/java/51data/CRC16CCITT.java.html and the code is:
int crc = 0xFFFF;        // initial value
int polynomial = 0x1021; // 0001 0000 0010 0001  (0, 5, 12)
// byte[] testBytes = "123456789".getBytes("ASCII");
byte[] bytes = args[0].getBytes();
for (byte b : bytes) {
    for (int i = 0; i < 8; i++) {
        boolean bit = ((b >> (7 - i) & 1) == 1);
        boolean c15 = ((crc >> 15 & 1) == 1);
        crc <<= 1;
        if (c15 ^ bit) crc ^= polynomial;
    }
}
crc &= 0xffff;
System.out.println("CRC16-CCITT = " + Integer.toHexString(crc));
I also saw that Java has an implementation of CRC-32 at http://download.oracle.com/javase/1.4.2/docs/api/java/util/zip/CRC32.html. Is that something I can use here?
Thanks.
It is very inefficient. There is a table-driven version floating around the Internet, originally written in C in the 1980s, that runs at least eight times as fast. The Wikipedia article appears to provide some links.
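For reference, here is a sketch of such a table-driven version in Java (same parameters as the snippet above: polynomial 0x1021, initial value 0xFFFF, data consumed MSB-first; the method name is mine). For the test input "123456789" both versions should print 29b1. As for java.util.zip.CRC32: it implements a different, 32-bit polynomial, so it cannot verify a 16-bit CRC field.

// 256-entry lookup table so the loop can consume a whole byte per step.
private static final int[] CRC16_TABLE = new int[256];
static {
    for (int n = 0; n < 256; n++) {
        int c = n << 8;
        for (int k = 0; k < 8; k++) {
            c = ((c & 0x8000) != 0) ? (c << 1) ^ 0x1021 : c << 1;
        }
        CRC16_TABLE[n] = c & 0xFFFF;
    }
}

static int crc16Ccitt(byte[] bytes) {
    int crc = 0xFFFF; // same initial value as above
    for (byte b : bytes) {
        crc = ((crc << 8) ^ CRC16_TABLE[((crc >> 8) ^ (b & 0xFF)) & 0xFF]) & 0xFFFF;
    }
    return crc;
}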
I am trying to port some existing C# code that uses BitConverter to Java. I have found various other threads, but then happened upon a GitHub class that appears to do the trick. However, ToUInt16 does not match the output from my C# code, while ToInt16 and ToInt32 appear to return the same values. Can you help me understand what is wrong with this implementation (or possibly what I am doing wrong)?
Code Ref: Java BitConverter
ToUInt16:
public static int ToUInt16(byte[] bytes, int offset) {
    int result = (int)bytes[offset + 1] & 0xff;
    result |= ((int)bytes[offset] & 0xff) << 8;
    if (Lysis.bDebug)
        System.out.println(result & 0xffff);
    return result & 0xffff;
}
ToUInt32:
public static long ToUInt32(byte[] bytes, int offset) {
    long result = (int)bytes[offset] & 0xff;
    result |= ((int)bytes[offset + 1] & 0xff) << 8;
    result |= ((int)bytes[offset + 2] & 0xff) << 16;
    result |= ((int)bytes[offset + 3] & 0xff) << 24;
    if (Lysis.bDebug)
        System.out.println(result & 0xFFFFFFFFL);
    return result & 0xFFFFFFFFL;
}
My code snippet:
byte[] byteArray = from some byte array
int offset = currentOffset
int msTime = BitConverter.ToUInt16(byteArray, offset)
msTime does not match what is coming from C#.
C# example (the string from the vendor gets converted using Convert.FromBase64String):
byte[] rawData = Convert.FromBase64String(vendorRawData);
byte[] sampleDataRaw = rawData;
Assert.AreEqual(15616, sampleDataRaw.Length);
//Show byte data for msTime
Console.WriteLine(sampleDataRaw[7]);
Console.WriteLine(sampleDataRaw[6]);
//Show byte data for dmWindow
Console.WriteLine(sampleDataRaw[14]);
Console.WriteLine(sampleDataRaw[13]);
var msTime = BitConverter.ToUInt16(sampleDataRaw, 6);
var dmWindow = BitConverter.ToUInt16(sampleDataRaw, 13);
Assert.AreEqual(399, msTime);
Assert.AreEqual(10, dmWindow);
C# Console Output for byte values:
1
143
0
10
Groovy example (the string from the vendor gets converted using Groovy's decodeBase64()):
def rawData = vendorRawData.decodeBase64()
def sampleDataRaw = rawData
Assert.assertEquals(15616, rawData.length)
//Show byte data for msTime
println sampleDataRaw[7]
println sampleDataRaw[6]
//Show byte data for dmWindow
println sampleDataRaw[14]
println sampleDataRaw[13]
def msTime = ToUInt16(sampleDataRaw, 6)
def dmWindow = ToUInt16(sampleDataRaw, 13)
Assert.assertEquals(399, msTime)
Assert.assertEquals(10, dmWindow)
**Asserts fail with:**
399 for msTime is actually 36609
10 for dmWindow is actually 2560
Groovy output from the byte values in println:
1
-113
0
10
There is a discrepancy between the two methods. The first one, ToUInt16, assumes big-endian byte order, i.e. the first byte is the most significant byte.
But ToUInt32 assumes little-endian byte order (a strange choice), so the first byte is the least significant.
A corrected implementation would look like:
public static long toUInt32(byte[] bytes, int offset) {
    long result = Byte.toUnsignedLong(bytes[offset + 3]);
    result |= Byte.toUnsignedLong(bytes[offset + 2]) << 8;
    result |= Byte.toUnsignedLong(bytes[offset + 1]) << 16;
    result |= Byte.toUnsignedLong(bytes[offset]) << 24;
    return result;
}
Here the array indexing is 'reversed'.
(I also changed the hacky-looking bit masking to clearer calls to Byte.toUnsignedLong, which does the same thing.)
What I actually found was that ToInt16 is giving me the results I wanted, not the ToUInt16 in the solution. I have checked quite a few results and they all match the .NET output.
The link from @pinkfloydx33, where I could see the source code, is actually what led me to try ToInt16 instead of ToUInt16.
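For completeness: C#'s BitConverter uses the platform's byte order, which is little-endian on x86/x64. That is why the big-endian ToUInt16 above turns 399 (stored as 0x8F at offset 6 and 0x01 at offset 7, matching the console output) into 0x8F01 = 36609. A minimal little-endian reader using java.nio (the helper name is hypothetical):

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Reads an unsigned 16-bit value the way C#'s BitConverter does on a
// little-endian machine: low byte first.
public static int toUInt16LittleEndian(byte[] bytes, int offset) {
    return ByteBuffer.wrap(bytes, offset, 2)
                     .order(ByteOrder.LITTLE_ENDIAN)
                     .getShort() & 0xFFFF;
}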
I received a CRC function written in C from a hardware partner. Messages sent by his devices are signed using this code. Can anyone help me translate it to Java?
int8 crc8(int8 *buf, int8 data_byte)
{
    int8 crc = 0x00;
    int8 data_bit = 0x80;
    while (data_byte > 0)
    {
        if (((crc & 0x01) != 0) != ((buf[data_byte] & data_bit) != 0))
        {
            crc >>= 1;
            crc ^= 0xCD;
        }
        else
            crc >>= 1;
        data_bit >>= 1;
        if (!data_bit)
        {
            data_bit = 0x80;
            data_byte--;
        }
    }
    return (crc);
}
I tried to convert this to Java, but the result is not what I expect.
public static byte crc8(byte[] buf, byte data_byte)
{
    byte crc = 0x00;
    byte data_bit = (byte)0x80;
    while (data_byte > 0)
    {
        if (((crc & 0x01) != 0) != ((buf[data_byte] & data_bit) != 0))
        {
            crc >>= 1;
            crc ^= 0xCD;
        }
        else
        {
            crc >>= 1;
        }
        data_bit >>= 1;
        if (data_bit == 0)
        {
            data_bit = (byte)0x80;
            data_byte--;
        }
    }
    return crc;
}
I suppose that this is the error: if(data_bit != 0)
EDIT:
I changed the code to use byte in my conversion method. I receive my data from a socket, convert it to a String, and then get a byte array out of that.
An input example is 16, 0, 1, -15, 43, 6, 1, 6, 8, 0, 111, 0, 0, 49,
where the last field (49) should be the checksum.
I also tried Durandal's version, but my result is still not valid.
This is how I read the data:
BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(socket.getInputStream()));
char[] buffer = new char[14];
int count= bufferedReader.read(buffer, 0, 14);
String msg = new String(buffer, 0, count);
byte[] content = msg.getBytes();
if(!data_bit)
translates to
if(data_bit == 0)
You really need to use bytes and not shorts. To get around the problem you had using bytes, use this
byte data_bit = (byte)0x80;
Also, as Mark says, you need to use >>> instead of >>.
Translate the code 1:1, paying extra attention to all operations done on bytes, to account for Java's implicit widening to int (e.g. (byte >>> 1) is absolutely worthless, because the byte is first extended to int, shifted, and then cast back, making it effectively a signed shift no matter what).
Therefore local variables are best declared as int, and values loaded from a byte array should be masked to get an unsigned extension: int x = bytes[i] & 0xFF;. Since in the only place where that matters the data is already masked down to a single bit (in the if), there is nothing special to be done here.
Applying this to the C code yields:
int crc8(byte[] buf, int dataCount) {
    int crc = 0;
    int data_bit = 0x80;
    while (dataCount > 0) {
        if (((crc & 0x01) != 0) != ((buf[dataCount] & data_bit) != 0)) {
            crc >>= 1;
            crc ^= 0xCD;
        } else {
            crc >>= 1;
        }
        data_bit >>= 1;
        if (data_bit == 0) {
            data_bit = 0x80;
            dataCount--;
        }
    }
    return crc;
}
That said, the code isn't very efficient: it processes the input bit by bit. There are faster implementations that process entire bytes, using a lookup table for each possible byte value, though you probably don't care for this use case.
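As a sketch only (untested against real device output): a table-driven equivalent has to reproduce this routine's quirks. The buffer is walked in reverse order, and each byte's bits are consumed MSB-first while the register shifts right, so every input byte must be bit-reversed before the usual table lookup.

// Lookup table for the right-shifting register with polynomial 0xCD.
private static final int[] CRC8_TABLE = new int[256];
static {
    for (int n = 0; n < 256; n++) {
        int c = n;
        for (int k = 0; k < 8; k++) {
            c = ((c & 1) != 0) ? (c >>> 1) ^ 0xCD : c >>> 1;
        }
        CRC8_TABLE[n] = c;
    }
}

static int crc8Table(byte[] buf, int dataCount) {
    int crc = 0;
    for (int i = dataCount; i > 0; i--) {                      // same reverse walk as the C code
        int reversed = Integer.reverse(buf[i] & 0xFF) >>> 24;  // feed the bits MSB-first
        crc = CRC8_TABLE[(crc ^ reversed) & 0xFF];
    }
    return crc;
}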
Also, beware when you compare the CRC from this method to a byte: you must mask the byte properly with 0xFF, otherwise the comparison will fail for values >= 0x80:
crc == (crcByte & 0xFF) // crc is the int returned above, crcByte the received checksum byte
EDIT:
What worries me even more about the original code is that data_byte is clearly intended to specify a length, yet the routine processes the buffer in reverse order and also reads the byte at index data_byte itself (data_byte is not decremented before the loop), i.e. one element past the end of a zero-based buffer of that length, while buf[0] is never read. I suspect the original is (already) broken code, or the calls to it are very messy.
I have a BinaryStream class in JavaScript that reads bytes from an array downloaded via XMLHttpRequest and has a function next() which returns an unsigned byte (technically an integer). I need to read a double from the stream, which is basically the same as DataInputStream.readDouble() from Java, which uses the method Double.longBitsToDouble(long). I can't figure out how the longBitsToDouble method works.
This is the code I have:
var bits = stream.nextLong();
if (bits == 0x7ff0000000000000)
    this.variables = [Infinity];
else if (bits == 0xfff0000000000000)
    this.variables = [-Infinity];
else if (bits >= 0x7ff0000000000001 && bits <= 0x7fffffffffffffff
      || bits >= 0xfff0000000000001 && bits <= 0xfff0000000000001)
    this.variables = [NaN];
else
{
    var s = ((bits >> 63) == 0) ? 1 : -1;
    var e = ((bits >> 52) & 0x7ff);
    this.variables = [(e == 0) ? (bits & 0xfffffffffffff) << 1 : (bits & 0xfffffffffffff) | 0x10000000000000];
    // This must be incorrect because it returns a number many times higher than it should
}
console.log(this.variables[0]);
I found a JavaScript library that can encode and decode many different types of numbers from an array of bytes here.
http://code.google.com/p/quake2-gwt-port/source/browse/src/com/google/gwt/corp/compatibility/Numbers.java has an implementation of floatToLongBits. I'd start with that. You should be able to just tweak the mantissa mask and size and the exponent mask and size.
Joel Webber (of GWT) advises caution:
The Quake code has only ever been tested on WebKit (Nitro & V8), so I would check it carefully before using. float/doubleToLongBits() in JavaScript is a truly, truly foul piece of work that is horrifically inefficient. Use with caution.
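For what it's worth, the decoding rule itself is spelled out in the Double.longBitsToDouble Javadoc: for finite values the result is sign * m * 2^(e - 1075), where e is the biased exponent and m is the mantissa with the implicit leading bit restored when e != 0. A small Java sketch of that formula (it deliberately ignores the infinity/NaN cases handled above):

// Decodes a finite IEEE 754 bit pattern per the longBitsToDouble Javadoc.
static double decodeFiniteBits(long bits) {
    int s = ((bits >> 63) == 0) ? 1 : -1;
    int e = (int) ((bits >> 52) & 0x7ffL);
    long m = (e == 0)
            ? (bits & 0xfffffffffffffL) << 1                 // subnormal: no implicit bit
            : (bits & 0xfffffffffffffL) | 0x10000000000000L; // normal: restore implicit bit
    return s * m * Math.pow(2, e - 1075);
}

The missing step in the question's else branch is exactly that last line: the mantissa has to be multiplied by the sign and by the power of two derived from the exponent.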
I want to store some data in byte arrays in Java, basically just numbers which can take up to two bytes each.
I'd like to know how I can convert an integer into a two-byte array and vice versa. I found a lot of solutions while googling, but most of them don't explain what happens in the code. There's a lot of shifting going on that I don't really understand, so I would appreciate a basic explanation.
Use the classes found in the java.nio package, in particular ByteBuffer. It can do all the work for you.
byte[] arr = { 0x00, 0x01 };
ByteBuffer wrapped = ByteBuffer.wrap(arr); // big-endian by default
short num = wrapped.getShort(); // 1
ByteBuffer dbuf = ByteBuffer.allocate(2);
dbuf.putShort(num);
byte[] bytes = dbuf.array(); // { 0, 1 }
byte[] toByteArray(int value) {
    return ByteBuffer.allocate(4).putInt(value).array();
}

byte[] toByteArray(int value) {
    return new byte[] {
        (byte)(value >> 24),
        (byte)(value >> 16),
        (byte)(value >> 8),
        (byte)value };
}

int fromByteArray(byte[] bytes) {
    return ByteBuffer.wrap(bytes).getInt();
}
// packing an array of 4 bytes to an int, big endian, minimal parentheses
// operator precedence: <<, &, |
// when operators of equal precedence (here bitwise OR) appear in the same expression, they are evaluated from left to right
int fromByteArray(byte[] bytes) {
    return bytes[0] << 24 | (bytes[1] & 0xFF) << 16 | (bytes[2] & 0xFF) << 8 | (bytes[3] & 0xFF);
}

// packing an array of 4 bytes to an int, big endian, clean code
int fromByteArray(byte[] bytes) {
    return ((bytes[0] & 0xFF) << 24) |
           ((bytes[1] & 0xFF) << 16) |
           ((bytes[2] & 0xFF) << 8 ) |
           ((bytes[3] & 0xFF) << 0 );
}
When packing signed bytes into an int, each byte needs to be masked off because it is sign-extended to 32 bits (rather than zero-extended) due to the arithmetic promotion rule (described in JLS, Conversions and Promotions).
There's an interesting puzzle related to this, described in Java Puzzlers ("A Big Delight in Every Byte") by Joshua Bloch and Neal Gafter. When comparing a byte value to an int value, the byte is sign-extended to an int, and then this value is compared to the other int:
byte[] bytes = (…)
if (bytes[0] == 0xFF) {
    // dead code, bytes[0] is in the range [-128,127] and thus never equal to 255
}
Note that all numeric types in Java are signed, with the exception of char, which is a 16-bit unsigned integer type.
You can also use BigInteger for a variable number of bytes. You can convert it to a long, int or short, whichever suits your needs.
new BigInteger(bytes).intValue();
or, to specify the sign explicitly (here, non-negative):
new BigInteger(1, bytes).intValue();
To get the bytes back, just:
new BigInteger(bytes).toByteArray()
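A quick illustration of why the signum argument matters, using a hypothetical two-byte value whose high bit is set:

byte[] data = { (byte) 0x8F, 0x2B };
System.out.println(new BigInteger(data).intValue());    // -28885 (two's complement)
System.out.println(new BigInteger(1, data).intValue()); //  36651 (unsigned magnitude)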
Although simple, I just wanted to point out that if you run this many times in a loop, this could lead to a lot of garbage collection. This may be a concern depending on your use case.
A basic implementation would be something like this:
public class Test {
    public static void main(String[] args) {
        int[] input = new int[] { 0x1234, 0x5678, 0x9abc };
        byte[] output = new byte[input.length * 2];
        for (int i = 0, j = 0; i < input.length; i++, j += 2) {
            output[j] = (byte)(input[i] & 0xff);
            output[j + 1] = (byte)((input[i] >> 8) & 0xff);
        }
        for (int i = 0; i < output.length; i++)
            System.out.format("%02x\n", output[i]);
    }
}
To understand what is going on here, you can read the Wikipedia article on endianness: http://en.wikipedia.org/wiki/Endianness
The source code above will output 34 12 78 56 bc 9a. The first two bytes (34 12) represent the first integer, and so on; the code encodes the integers in little-endian format.
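To go the other way, here is a sketch of the inverse loop (assuming the same little-endian layout produced above):

// Rebuilds the 16-bit values from the little-endian byte array.
int[] decoded = new int[output.length / 2];
for (int i = 0, j = 0; i < decoded.length; i++, j += 2) {
    decoded[i] = (output[j] & 0xff) | ((output[j + 1] & 0xff) << 8);
}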
/** Packs up to four big-endian bytes; length must be at most 4 (to fit in an int). **/
public long byteToInt(byte[] bytes, int length) {
    if (length > 4) throw new RuntimeException("Too big to fit in int");
    long val = 0;
    for (int i = 0; i < length; i++) {
        val = val << 8;
        val = val | (bytes[i] & 0xFF);
    }
    return val;
}
As is often the case, Guava has what you need.
To go from a byte array to an int: Ints.fromByteArray, doc here
To go from an int to a byte array: Ints.toByteArray, doc here
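A short usage sketch (Guava's Ints works big-endian):

import com.google.common.primitives.Ints;

byte[] encoded = Ints.toByteArray(0x12345678); // {0x12, 0x34, 0x56, 0x78}
int decoded = Ints.fromByteArray(encoded);     // 0x12345678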
If you have to read from bits, let's say only 3 bits, but you need a signed integer, then use the following (data is of type java.util.BitSet):
new BigInteger(data.toByteArray()).intValue() << 32 - 3 >> 32 - 3
The magic number 3 can be replaced with the number of bits (not bytes) you are using.
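A worked example of the shift trick on its own, for a 3-bit field holding 0b101:

int raw = 0b101;                      // 5 as an unsigned 3-bit field
int signed = raw << 32 - 3 >> 32 - 3; // shift the field to the top, then arithmetic-shift back
System.out.println(signed);           // prints -3: the field's top bit acted as the sign bit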
I think this is the best way to cast to an int:
public int byteToInt(Byte b) {
    String comb = b + "";              // first convert the byte to a String
    int out = Integer.parseInt(comb);  // then parse the String to an int
    if (out < 0)
        out = out + 256;               // map the signed range [-128,127] onto [0,255]
    return out;
}
First convert the byte to a String:
comb = b + "";
The next step is to convert it to an int:
out = Integer.parseInt(comb);
But a byte is in the range -128 to 127; for this reason, I think it is better to use the range 0 to 255, and then you only need to do this when the value is negative:
out = out + 256;
The following code snippet, sourced from Core Java Vol. 2 (7th Ed.), shows how to create SHA-1 and MD5 fingerprints using Java.
It turns out that the only case that works is when I load the cleartext from a text file.
How does MessageDigestFrame.computeDigest() work out the fingerprint, and, in particular, what is the use of the bit-shifting pattern (lines 171-172)?
public void computeDigest(byte[] b)
{
    currentAlgorithm.reset();
    currentAlgorithm.update(b);
    byte[] hash = currentAlgorithm.digest();
    String d = "";
    for (int i = 0; i < hash.length; i++)
    {
        int v = hash[i] & 0xFF;
        if (v < 16) d += "0";
        d += Integer.toString(v, 16).toUpperCase() + " ";
    }
    digest.setText(d);
}
The method should work fine whatever you give it - if you're getting the wrong results, I suspect you're loading the file incorrectly. Please show that code, and we can help you work out what's going wrong.
In terms of the code, this line:
int v = hash[i] & 0xFF;
is basically used to treat a byte as unsigned. Bytes are signed in Java - an acknowledged design mistake in the language - but we want to print out the hex value as if it were an unsigned integer. The bitwise AND with just the bottom 8 bits effectively converts it to the integer value of the byte treated as unsigned.
(There are better ways to convert a byte array to a hex string, but that's a separate matter.)
It is not bit shifting, it is bit masking. hash[i] is a byte. When it is widened to an integer, you need to mask off the higher bits because of possible sign extension.
byte b = (byte)0xEF;
System.out.println("No masking: " + (int)b);
System.out.println("Masking: " + (int)(b & 0xFF));
This snippet:
int v = hash[i] & 0xFF;
if (v < 16) d += "0";
d += Integer.toString(v, 16).toUpperCase() + " ";
First, you set all but the lowest 8 bits of v to 0 (because 0xFF is 11111111 in binary).
Then, if the resulting number is only one digit in hex (< 16), you add a leading "0".
Finally, you convert the result to hex and append it to the string.
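For reference, one of the "better ways" alluded to earlier is to let String.format do the zero-padding; a sketch producing the same per-byte output:

// Equivalent hex dump using String.format; the 0xFF mask widens each
// byte to its unsigned int value before formatting.
StringBuilder sb = new StringBuilder();
for (byte x : hash) {
    sb.append(String.format("%02X ", x & 0xFF));
}
String d = sb.toString();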