How do I convert a long to 4 bytes? I am receiving some output from a C program and it uses unsigned long. I need to read this output and convert it to 4 bytes.
However, Java uses a signed long, which is 64 bits. Is there any way to do this conversion?
To read 4 bytes as an unsigned 32-bit value, assuming it is little-endian, the simplest thing to do is to use ByteBuffer:
byte[] bytes = { 1, 2, 3, 4 };
long l = ByteBuffer.wrap(bytes)
        .order(ByteOrder.LITTLE_ENDIAN).getInt() & 0xFFFFFFFFL;
While l is a signed 64-bit value, it will only ever hold values between 0 and 2^32 − 1, which is the range of an unsigned 32-bit value.
You can use the java.nio.ByteBuffer. It can parse the long, and it does the byte ordering for you.
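For example, a minimal sketch of writing the low 32 bits of a long into 4 bytes (little-endian is an assumption here; swap the order if the C side expects big-endian, and the value is just a placeholder):
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

long value = 4294967295L; // example unsigned 32-bit value held in a Java long
byte[] out = ByteBuffer.allocate(4)
        .order(ByteOrder.LITTLE_ENDIAN)
        .putInt((int) value) // the cast keeps only the low 32 bits
        .array();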
You can code a loop where you divide the long by 256 and take the remainder: that gives you the least significant byte, and so on.
(Depending on whether you want little-endian or big-endian you can loop forwards or backwards.)
long l = (3 * 256 * 256 * 256 + 1 * 256 * 256 + 4 * 256 + 8);

private byte[] convertLongToByteArray(long l) {
    byte[] b = new byte[4];
    if (java.nio.ByteOrder.nativeOrder() == ByteOrder.LITTLE_ENDIAN) {
        for (int i = 0; i < 4; i++) {
            b[i] = (byte) (l % 256);
            l = l / 256;
        }
    } else {
        for (int i = 3; i >= 0; i--) {
            b[i] = (byte) (l % 256);
            l = l / 256;
        }
    }
    return b;
}
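A quick usage check (just a sketch; Arrays.toString is only used here to print the result, and the value is the same example as above):
import java.util.Arrays;

long l = (3L << 24) | (1L << 16) | (4L << 8) | 8L; // same value, written with shifts
System.out.println(Arrays.toString(convertLongToByteArray(l)));
// little-endian machines print [8, 4, 1, 3], big-endian ones [3, 1, 4, 8]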
So I am saving an audio file, and I have to convert a float[] to a byte[]. This works fine:
final byte[] byteBuffer = new byte[buffer.length * 2];
int bufferIndex = 0;
for (int i = 0; i < byteBuffer.length; i++) {
    final int x = (int) (buffer[bufferIndex++] * 32767.0);
    byteBuffer[i] = (byte) x;
    i++;
    byteBuffer[i] = (byte) (x >>> 8);
    if (bufferIndex < 5) {
        System.out.println(buffer[bufferIndex]);
        System.out.println(byteBuffer[i - 1]);
        System.out.println(byteBuffer[i]);
    }
}
But when I want to read the bytes and convert them back to floats, only the first 4 numbers match the old ones:
for (int i = 0; i < length; i++) {
    i++;
    float val = (((audioB[i]) & 0xff) << 8) | ((audioB[i - 1]) & 0xff);
    val = (float) (val / 32767.0);
    if (bufferindex < 5) {
        System.out.println(val);
        System.out.println(audioB[i - 1]);
        System.out.println(audioB[i]);
    }
    bufferindex++;
}
The output:
0.07973075
0
0
0.149165
52
10
0.19944257
23
19
0.22437502
-121
25
---------
0.0
0
0
0.07971435
52
10
0.14914395
23
19
0.19943845
-121
25
0.22437209
Why?
Rather than implementing your own bit shifting magic, why not use the java.nio.ByteBuffer class?
byte[] bytes = ByteBuffer.allocate(8).putFloat(1.0F).putFloat(2.0F).array();
ByteBuffer bb = ByteBuffer.wrap(bytes);
float f1 = bb.getFloat();
float f2 = bb.getFloat();
You are stuffing 4-byte float data into just 2 bytes per sample, so this will obviously lose some precision. Hint: do the backwards calculation (2 bytes -> float) in your first loop and compare the result with the original value, and look at intermediate values like x.
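A minimal sketch of that check (the value is taken from your output; it only illustrates the idea of quantizing to 16 bits and comparing with the original):
float original = 0.07973075f;
int x = (int) (original * 32767.0);       // forward: float -> 16-bit sample
float roundTrip = (float) (x / 32767.0);  // backward: 16-bit sample -> float
System.out.println(original + " -> " + roundTrip); // the small difference is the quantization error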
I have written a function that computes the checksum for a given TCP packet. However, when I capture a TCP packet sent over IPv4 in Wireshark and let my function compute its checksum, it is not the same checksum as in the Wireshark capture. I checked, and the bytes I give to the computeChecksum function are exactly the same as the TCP packet bytes I captured with Wireshark.
I computed the checksum according to RFC 793. Does anybody see if there's anything wrong in my code?
public long computeChecksum( byte[] buf, int src, int dst ){
    int length = buf.length;          // nr of bytes of the tcp packet in total.
    int pseudoHeaderLength = 12;      // nr of bytes of pseudoheader.
    int i = 0;
    long sum = 0;
    long data;

    buf[16] = (byte)0x0; // set checksum to 0 bytes
    buf[17] = (byte)0x0;

    // create the pseudoheader as specified in the rfc.
    ByteBuffer pseudoHeaderByteBuffer = ByteBuffer.allocate( 12 );
    pseudoHeaderByteBuffer.putInt( src );
    pseudoHeaderByteBuffer.putInt( dst );
    pseudoHeaderByteBuffer.put( (byte)0x0 );            // store the 0x0 byte
    pseudoHeaderByteBuffer.put( (byte)PROTO_NUM_TCP );  // stores the protocol number
    pseudoHeaderByteBuffer.putShort( (short) length );  // store the length of the packet.
    byte[] pbuf = pseudoHeaderByteBuffer.array();

    // loop through all 16-bit words of the pseudo header
    int bytesLeft = pseudoHeaderLength;
    while( bytesLeft > 0 ){
        // store the bytes at pbuf[i] and pbuf[i+1] in data.
        data = ( ((pbuf[i] << 8) & 0xFF00) | ((pbuf[i + 1]) & 0x00FF) );
        sum += data;
        // Check whether the sum has bit 17 or higher set by masking off the 16 least significant bits.
        if( (sum & 0xFFFFFFFF0000) > 0 ){
            sum = sum & 0xFFFF; // discard all but the 16 least significant bits.
            sum += 1;           // add 1 (because we have to do a one's complement sum where you add the carry bit to the sum).
        }
        i += 2; // point to the next two bytes.
        bytesLeft -= 2;
    }

    // loop through all 16-bit words of the TCP packet (ie. until there's only 1 or 0 bytes left).
    bytesLeft = length;
    i = 0;
    while( bytesLeft > 1 ){ // note that with the pseudo-header we could never have an odd byte remaining.
        // We do exactly the same as with the pseudo-header, but for the TCP packet bytes.
        data = ( ((buf[i] << 8) & 0xFF00) | ((buf[i + 1]) & 0x00FF) );
        sum += data;
        if( (sum & 0xFFFF0000) > 0 ){
            sum = sum & 0xFFFF;
            sum += 1;
        }
        i += 2;
        bytesLeft -= 2;
    }

    // If the data has an odd number of bytes, then after adding all 16 bit words we remain with 8 bits.
    // In that case the missing 8 bits are considered to be all 0's.
    if( bytesLeft > 0 ){ // ie. there are 8 bits of data remaining.
        sum += (buf[i] << 8 & 0xFF00); // construct a 16 bit word holding buf[i] and 0x00 and add it to the sum.
        if( (sum & 0xFFFF0000) > 0 ) {
            sum = sum & 0xFFFF;
            sum += 1;
        }
    }

    sum = ~sum;         // Flip all bits (ie. take the one's complement as stated by the rfc)
    sum = sum & 0xFFFF; // keep only the 16 least significant bits.
    return sum;
}
If you don't see anything wrong with the code then let me know that too. In that case I know to look somewhere else for the problem.
I've tested your code and it works correctly. Here is what I did:
Configure Wireshark to "Validate the TCP checksum if possible", in order to avoid testing against a packet with an incorrect checksum.
Add the long type suffix L to the constant 0xFFFFFFFF0000, in order to avoid the compile-time error "integer number too large" (Java 8).
Use a hexadecimal representation of a TCP segment coming from Wireshark:
String tcpSegment = "0050dc6e5add5b4fa9bf9ad8a01243e0c67c0000020405b4010303000101080a00079999000d4e0e";
Use a method to convert a hexadecimal string to a byte array:
public static byte[] toByteArray(String strPacket) {
    int len = strPacket.length();
    byte[] data = new byte[len / 2];
    for (int i = 0; i < len; i += 2) {
        data[i / 2] = (byte) ((Character.digit(strPacket.charAt(i), 16) << 4)
                + Character.digit(strPacket.charAt(i + 1), 16));
    }
    return data;
}
Use a ByteBuffer to convert the source and destination addresses into ints:
int src = ByteBuffer.wrap(toByteArray("c0a80001")).getInt();
int dst = ByteBuffer.wrap(toByteArray("c0a8000a")).getInt();
With this, I obtain a checksum of C67C, the same as in Wireshark.
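Putting the pieces together, the test call looks roughly like this (using your computeChecksum and the helper above):
byte[] segment = toByteArray(tcpSegment);
long checksum = computeChecksum(segment, src, dst);
System.out.println(Long.toHexString(checksum)); // prints c67c for this capture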
P.S.: There is an error in your code where you do
pseudoHeaderByteBuffer.putShort( (short) length );
You store the length in two's complement inside the pseudo-header, which will be a problem if the length is greater than 2^15. You'd better use char, which is a 16-bit unsigned type.
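In other words, something like this (a sketch; the rest of the pseudo-header code stays the same):
pseudoHeaderByteBuffer.putChar( (char) length ); // char is 16-bit unsigned, so lengths up to 65535 are stored as intended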
I am working with Local Binary Patterns (LBP) which produce numbers in the range 0-255.
That means they fit in a byte (256 different values can be encoded in one byte), which explains why many (if not all) Java implementations I have found use byte[] to store these values.
The problem is that I am interested in the rank of these values, and when they are converted to byte (from int, for example) they do not keep the order they had as ints, because byte is signed (as are all Java integer types except char, I think) and so the upper half of the 0-255 range (128 and above) becomes negative numbers. Furthermore, I think the negative ones end up in inverted order.
Some examples to be more specific:
(int) 0 = (byte) 0
(int) 20 = (byte) 20
(int) 40 = (byte) 40
(int) 60 = (byte) 60
(int) 80 = (byte) 80
(int) 100 = (byte) 100
(int) 120 = (byte) 120
(int) 140 = (byte) -116
(int) 160 = (byte) -96
(int) 180 = (byte) -76
(int) 200 = (byte) -56
(int) 220 = (byte) -36
(int) 240 = (byte) -16
My question is whether there is a specific way to maintain the order of int values when they are converted to byte (meaning 240 > 60 should also hold for the byte versions, whereas after a plain cast -16 < 60), while keeping memory use to a minimum (i.e. using only 8 bits if 8 bits suffice). I know I could compare the bytes in a more complex way (for example, treat every negative byte as greater than every positive one, and if both bytes are negative invert the order), but I don't find that satisfactory.
Is there any other way to convert to byte besides (byte) i?
You could subtract 128 from the value:
byte x = (byte) (value - 128);
That would be order-preserving, and reversible later by simply adding 128 again. Be careful to make sure you do add 128 later on though... It's as simple as:
int value = x + 128;
So for example, if you wanted to convert between an int[] and byte[] in a reversible way:
public byte[] toByteArray(int[] values) {
    byte[] ret = new byte[values.length];
    for (int i = 0; i < values.length; i++) {
        ret[i] = (byte) (values[i] - 128);
    }
    return ret;
}

public int[] toIntArray(byte[] values) {
    int[] ret = new int[values.length];
    for (int i = 0; i < values.length; i++) {
        ret[i] = values[i] + 128;
    }
    return ret;
}
If you wanted to keep the original values though, the byte comparison wouldn't need to be particularly complex:
int unsigned1 = byte1 & 0xff;
int unsigned2 = byte2 & 0xff;
// Now just compare unsigned1 and unsigned2...
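If you need this for sorting, the same masking idea fits into a Comparator (a sketch; on Java 8+ you could equally use Byte.toUnsignedInt):
import java.util.Comparator;

Comparator<Byte> unsignedOrder = (a, b) -> Integer.compare(a & 0xff, b & 0xff);
// unsignedOrder.compare((byte) -16, (byte) 60) is positive, i.e. 240 sorts after 60 as intended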
I'm facing some problems with WAV files in Java.
WAV format: PCM_SIGNED 44100.0 Hz, 24-bit, stereo, 6 bytes/frame, little-endian.
I extracted the WAV data to a byte array with no problems.
I'm trying to convert the byte array to a double array, but some of the doubles come out as NaN.
Code:
ByteBuffer byteBuffer = ByteBuffer.wrap(byteArray);
double[] doubles = new double[byteArray.length / 8];
for (int i = 0; i < doubles.length; i++) {
    doubles[i] = byteBuffer.getDouble(i * 8);
}
The fact that the format can be 16/24/32-bit and mono/stereo confuses me.
I intend to pass the double[] to an FFT algorithm and get the audio frequencies.
try this:
public static byte[] toByteArray(double[] doubleArray) {
    int times = Double.SIZE / Byte.SIZE;
    byte[] bytes = new byte[doubleArray.length * times];
    for (int i = 0; i < doubleArray.length; i++) {
        ByteBuffer.wrap(bytes, i * times, times).putDouble(doubleArray[i]);
    }
    return bytes;
}

public static double[] toDoubleArray(byte[] byteArray) {
    int times = Double.SIZE / Byte.SIZE;
    double[] doubles = new double[byteArray.length / times];
    for (int i = 0; i < doubles.length; i++) {
        doubles[i] = ByteBuffer.wrap(byteArray, i * times, times).getDouble();
    }
    return doubles;
}

public static byte[] toByteArray(int[] intArray) {
    int times = Integer.SIZE / Byte.SIZE;
    byte[] bytes = new byte[intArray.length * times];
    for (int i = 0; i < intArray.length; i++) {
        ByteBuffer.wrap(bytes, i * times, times).putInt(intArray[i]);
    }
    return bytes;
}

public static int[] toIntArray(byte[] byteArray) {
    int times = Integer.SIZE / Byte.SIZE;
    int[] ints = new int[byteArray.length / times];
    for (int i = 0; i < ints.length; i++) {
        ints[i] = ByteBuffer.wrap(byteArray, i * times, times).getInt();
    }
    return ints;
}
Your WAV format is 24-bit, but a double uses 64 bits, so the quantities stored in your WAV can't be doubles. You have one 24-bit signed integer per frame and channel, which amounts to the 6 bytes per frame mentioned.
You could do something like this:
private static double readDouble(ByteBuffer buf) {
    int v = (buf.get() & 0xff);
    v |= (buf.get() & 0xff) << 8;
    v |= buf.get() << 16;
    return (double) v;
}
You'd call that method once for the left channel and once for the right. Not sure about the correct order, but I guess left first. The bytes are read from least significant one to most significant one, as little-endian indicates. The lower two bytes are masked with 0xff in order to treat them as unsigned. The most significant byte is treated as signed, since it will contain the sign of the signed 24 bit integer.
If you operate on arrays, you can do it without the ByteBuffer, e.g. like this:
double[] doubles = new double[byteArray.length / 3];
for (int i = 0, j = 0; i != doubles.length; ++i, j += 3) {
    doubles[i] = (double) ((byteArray[j] & 0xff)
            | ((byteArray[j + 1] & 0xff) << 8)
            | (byteArray[j + 2] << 16));
}
You will get samples for both channels interleaved, so you might want to separate these afterwards.
If you have mono, you won't have two channels interleaved but only one. For 16 bit you can use byteBuffer.getShort(), for 32 bit you can use byteBuffer.getInt(). But 24 bit isn't commonly used for computation, so ByteBuffer doesn't have a method for it. If you have unsigned samples, you'll have to mask all the signs and offset the result, but I guess unsigned WAV is rather uncommon.
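For the 16-bit case, for example, the loop could look roughly like this (a sketch, assuming little-endian input as in your format, with java.nio.ByteBuffer and java.nio.ByteOrder imported):
ByteBuffer bb = ByteBuffer.wrap(byteArray).order(ByteOrder.LITTLE_ENDIAN);
double[] doubles = new double[byteArray.length / 2];
for (int i = 0; i < doubles.length; i++) {
    doubles[i] = bb.getShort(); // one signed 16-bit sample per value, channels still interleaved if stereo
}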
For floating-point types in DSP, values are usually kept in the range [0, 1] or [0, 1), so you should divide each element by 2^24 − 1. Do it like the answer by MvG above, but with some changes:
int t = ((byteArray[j] & 0xff) << 0) |
        ((byteArray[j + 1] & 0xff) << 8) |
        (byteArray[j + 2] << 16);
return t / (double) 0xFFFFFF;
But double is really a waste of space and CPU for data-processing purposes. I would recommend converting to 32-bit int instead, or to float, which has the same precision (24 bits) but a bigger range. In practice, 32-bit int or float is the largest type you'll use for a single data channel in audio or video processing.
Finally, you can use multithreading and SIMD to accelerate the conversion.
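As a sketch of the float variant (same scaling as the snippet above, just stored in a float[]):
float[] floats = new float[byteArray.length / 3];
for (int i = 0, j = 0; i < floats.length; i++, j += 3) {
    int t = (byteArray[j] & 0xff)
            | ((byteArray[j + 1] & 0xff) << 8)
            | (byteArray[j + 2] << 16);
    floats[i] = t / (float) 0xFFFFFF; // normalize the signed 24-bit sample
}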
I am trying to convert a hex sequence to a String encoded in either ISO-8859-1, UTF-8 or UTF-16BE. That is, I have a String like "0422043504410442"; this represents the characters "Тест" ("Test") in UTF-16BE.
The code I used to convert between the two formats was:
private static String hex2String(String hex, String encoding) throws UnsupportedEncodingException {
    char[] hexArray = hex.toCharArray();
    int length = hex.length() / 2;
    byte[] rawData = new byte[length];
    for (int i = 0; i < length; i++) {
        int high = Character.digit(hexArray[i * 2], 16);
        int low = Character.digit(hexArray[i * 2 + 1], 16);
        int value = (high << 4) | low;
        if (value > 127)
            value -= 256;
        rawData[i] = (byte) value;
    }
    return new String(rawData, encoding);
}
This seems to work fine for me, but I still have two questions regarding this:
Is there any simpler way (preferably without bit-handling) to do this conversion?
How am I to interpret the line: int value = (high << 4) | low;?
I am familiar with the basics of bit-handling, though not with the Java syntax. I believe the first part shifts all bits to the left by 4 steps, but I don't understand the rest, or why it is helpful in this particular situation.
I apologize for any confusion in my question, please let me know if I should clarify anything.
Thank you.
//Abeansits
Is there any simpler way (preferably without bit-handling) to do this conversion?
None that I know of; the only simplification would be to parse a whole byte at once rather than digit by digit (e.g. using int value = Integer.parseInt(hex.substring(i * 2, i * 2 + 2), 16);):
public static byte[] hexToBytes(final String hex) {
    final byte[] bytes = new byte[hex.length() / 2];
    for (int i = 0; i < bytes.length; i++) {
        bytes[i] = (byte) Integer.parseInt(hex.substring(i * 2, i * 2 + 2), 16);
    }
    return bytes;
}
How am I to interpret the line: int value = (high << 4) | low;?
look at this example for your last two digits (42):
int high = 4; // binary 0100
int low = 2; // binary 0010
int value = (high << 4) | low;
int value = (0100 << 4) | 0010; // shift 4 to left
int value = 01000000 | 0010; // bitwise or
int value = 01000010;
int value = 66; // 01000010 == 0x42 == 66
You can replace the << and | in this case with * and +, but I don't recommend it.
The expression
int value = (high << 4) | low;
is equivalent to
int value = high * 16 + low;
The subtraction of 256 to get a value between -128 and 127 is unnecessary. Simply casting, for example, 128 to a byte will produce the correct result. The lowest 8 bits of the int 128 have the same pattern as the byte -128: 0x80.
I'd write it simply as:
rawData[i] = (byte) ((high << 4) | low);
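A tiny check that shows why the plain cast is enough:
int value = 128;
byte b = (byte) value;   // b is -128, but its bit pattern is still 0x80
int back = b & 0xff;     // masking with 0xff recovers 128
System.out.println(b + " " + back); // prints: -128 128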
Is there any simpler way (preferably without bit-handling) to do this conversion?
You can use the Hex class in Apache Commons, but internally it will do the same thing, perhaps with minor differences.
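With Commons Codec that would look something like this (a sketch; both calls throw checked exceptions, so wrap or declare them):
import org.apache.commons.codec.binary.Hex;

byte[] rawData = Hex.decodeHex("0422043504410442".toCharArray());
String text = new String(rawData, "UTF-16BE"); // "Тест"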
How am I to interpret the line: int value = (high << 4) | low;?
This combines two hex digits, each of which represents 4 bits, into one unsigned 8-bit value stored as an int. The next two lines convert this to a signed Java byte.