I'm working on a Java project that analyses network traffic. I created classes that load data from input and output files in the pcap format (from them I read each packet's time and size). I have a problem with the main algorithm, whose task is to check the buffer's state. I have to check how many times and for how long the buffer was empty, how many times there was an unsuccessful attempt to output a packet from the buffer, and the final buffer size.
I put the logic in the analysis_buffer method, which processes the times chronologically, meaning: if the output time is greater than the input time, the input size is added to the buffer (buffer_size); if the input time is greater, the output size is subtracted from the buffer.
I'm using nested loops: the first over the input file length, the second (inner) over the output file length. The application runs, but the final output is not correct: the variables are all equal to 0 (only the buffer size is nonzero, and much too big). I have no idea what else I have to change in the code, any suggestions?
import org.jnetpcap.util.PcapPacketArrayList;

public class Buffer {
    int buffer_size = 0;    // initial size of the buffer
    int failed_attempt = 0; // counts unsuccessful attempts to output a packet from the buffer
    double time_empty = 0;  // time for which the buffer was empty
    int count_empty = 0;    // counts how many times the buffer was empty

    public Buffer(int buffer_size) {
        this.buffer_size = buffer_size;
    }

    public Buffer() {
        // TODO Auto-generated constructor stub
    }

    public void analysis_buffer(PcapPacketArrayList packetArrayList, PcapPacketArrayList packetArrayList2, double[] input_time, double[] output_time, int[] input_size, int[] output_size) {
        for (int i = 0; i < packetArrayList.size(); i++) {
            for (int j = 0; j < packetArrayList2.size(); j++) {
                // if output time is greater than input time, add the input size to the main buffer size
                if (input_time[i] < output_time[j]) {
                    buffer_size = buffer_size + input_size[i];
                }
                // if input time is greater than output time, we have 3 options:
                // 1. buffer size is greater than the output packet size, and the output size is subtracted from the buffer size
                // 2. buffer size is smaller than the output packet - buffer size is automatically set to 0
                // 3. buffer size is equal to 0: I increment the unsuccessful attempts to get data out of the buffer and record the time during which the buffer was empty
                else if (input_time[i] > output_time[j]) {
                    if (output_size[j] < buffer_size) {
                        buffer_size = buffer_size - output_size[j];
                    }
                    if (output_size[j] > buffer_size) {
                        buffer_size = 0;
                        count_empty++;
                    }
                    if (buffer_size == 0) {
                        failed_attempt++;
                        time_empty = time_empty + (output_time[j+1] - output_time[j]);
                    }
                }
                // if input time equals output time, add and subtract the packets from the buffer size, or the buffer size is automatically set to 0
                else if (input_time[i] == output_time[j]) {
                    if (output_size[j] < buffer_size) {
                        buffer_size = buffer_size + input_size[i] - output_size[j];
                    }
                    if (output_size[j] > (buffer_size + input_size[i])) {
                        buffer_size = 0;
                        count_empty++;
                    }
                }
            }
        }
    }

    public void check_buffer() {
        System.out.println("Initial buffer size was 0");
        System.out.println("Final buffer size: " + buffer_size + ".");
        System.out.println("Buffer was empty " + count_empty + " times.");
        System.out.println("Failed attempt to output data from the buffer: " + failed_attempt + " times.");
        System.out.println("Total time for which the buffer was empty: " + time_empty + " seconds.");
    }
}
My output:
Initial buffer size was 0
Final buffer size: 1227700210.
Buffer was empty 1 times.
Failed attempt to output data from the buffer: 0 times.
Total time for which the buffer was empty: 0.0 seconds.
I have tried your code with these values:
String[] p = new String[4];
String[] p2 = new String[4];
double[] it = {5432d, 4234d, 6345d, 64320d, 8534d};
double[] ot = {5436d, 4234d, 6342d, 64326d, 8534d};
int[] is = {45, 654, 79, 16354, 4563};
int[] os = {65, 641, 98, 23346, 9846};
Buffer buffer = new Buffer();
buffer.analysis_buffer(p, p2, it, ot, is, os);
buffer.check_buffer();
And I got this result:
Initial buffer size was 0
Final buffer size: 16354.
Buffer was empty 3 times.
Failed attempt to output data from the buffer: 3 times.
Total time for which the buffer was empty: 62200.0 seconds.
The failed_attempt and time_empty variables are modified only in this place:
if(buffer_size == 0) {
failed_attempt++;
time_empty = time_empty + (output_time[j+1]-output_time[j]);
}
I think that condition was never met (with your test values), and maybe neither were the following ones:
if(output_size[j] > buffer_size) {
buffer_size = 0;
count_empty++;
}
and
else if (input_time[i] > output_time[j]) {
BTW, try to fix this:
java.lang.ArrayIndexOutOfBoundsException: 5
at Buffer.analysis_buffer(Buffer.java:47)
With these values:
String[] p = new String[5];
String[] p2 = new String[5];
double[] it = {5432d, 4234d, 6345d, 64320d, 8534d};
double[] ot = {5436d, 4234d, 6342d, 64326d, 8534d};
int[] is = {45, 654, 79, 16354, 4563};
int[] os = {65, 641, 98, 23346, 9846};
Buffer buffer = new Buffer();
buffer.analysis_buffer(p, p2, it, ot, is, os);
buffer.check_buffer();
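By the way, here is just a sketch (not your code) of how the nested loops could be replaced by a single pass with two indices. It assumes both time arrays are sorted in ascending order and treats equal timestamps as arrivals first; whether the empty-time bookkeeping matches what you intend is up to you, and it also guards against the j+1 overflow shown above:
int i = 0, j = 0;
while (i < input_time.length && j < output_time.length) {
    if (input_time[i] <= output_time[j]) {
        // arrival: the incoming packet is added to the buffer
        buffer_size += input_size[i];
        i++;
    } else {
        // departure attempt at output_time[j]
        if (buffer_size == 0) {
            failed_attempt++;
            if (j + 1 < output_time.length) { // guard against the j+1 overflow
                time_empty += output_time[j + 1] - output_time[j];
            }
        } else if (buffer_size <= output_size[j]) {
            buffer_size = 0; // the buffer is drained completely
            count_empty++;
        } else {
            buffer_size -= output_size[j];
        }
        j++;
    }
}
// any arrivals left after the last departure
while (i < input_time.length) {
    buffer_size += input_size[i];
    i++;
}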
Related
I have a problem with this method
private static boolean getBlocks(File file1, File file2) throws IOException {
    FileChannel channel1 = new FileInputStream(file1).getChannel();
    FileChannel channel2 = new FileInputStream(file2).getChannel();
    int SIZE = (int) Math.min((8192), channel1.size());
    int point = 0;
    MappedByteBuffer buffer1 = channel1.map(FileChannel.MapMode.READ_ONLY, 0, channel1.size());
    MappedByteBuffer buffer2 = channel2.map(FileChannel.MapMode.READ_ONLY, 0, channel2.size());
    byte[] bytes1 = new byte[SIZE];
    byte[] bytes2 = new byte[SIZE];
    while (point < channel1.size() - SIZE) {
        buffer1.get(bytes1, point, SIZE);
        buffer2.get(bytes2, point, SIZE);
        if (!compareBlocks(bytes1, bytes2)) {
            return false;
        }
        point += SIZE;
    }
    return true;
}

private static boolean compareBlocks(byte[] bytes1, byte[] bytes2) {
    for (int i = 0; i < bytes1.length; i++) {
        if (bytes1[i] != bytes2[i]) {
            return false;
        }
    }
    return true;
}
In a result I caught IndexOutOfBoundsException in while loop.
How can I get around this problem and compare two files by blocks?
Yeah... it has to crash.
You create a byte array of length SIZE and then access it at position point, which is incremented by SIZE on every iteration.
For example:
int SIZE = 10;
int point = 0;
while (point < channel.size() - SIZE) {
    buffer1.get(bytes1, point, SIZE);
    // Your logic here
    point += SIZE;
}
In buffer.get(dst, offset, length) the offset is an index into the destination array, not into the buffer, so when you do the above, point quickly grows larger than the array's length and you try to access the byte array at a position beyond its size.
So your logic for accessing the array position is wrong. As the error line says, you're accessing an index out of bounds (higher than the limit).
I hope I could help you.
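For what it's worth, here is a minimal sketch (not your original code) of the same block loop with the offset fixed. It relies on the relative bulk get, whose offset argument is an index into the destination array while the buffer's own position advances, and, like your loop, it still skips a trailing block shorter than SIZE:
byte[] bytes1 = new byte[SIZE];
byte[] bytes2 = new byte[SIZE];
while (buffer1.remaining() >= SIZE && buffer2.remaining() >= SIZE) {
    buffer1.get(bytes1, 0, SIZE); // offset 0 into bytes1; buffer1's position advances by SIZE
    buffer2.get(bytes2, 0, SIZE);
    if (!compareBlocks(bytes1, bytes2)) {
        return false;
    }
}
return true;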
How can I remove the first n number of bytes from a ByteBuffer without changing or lowering the capacity? The result should be that the 0th byte is the n+1 byte. Is there a better data type in Java to do this type of action?
You could try something like this:
public void removeBytesFromStart(ByteBuffer bf, int n) {
    int index = 0;
    for (int i = n; i < bf.position(); i++) {
        bf.put(index++, bf.get(i));
        bf.put(i, (byte) 0);
    }
    bf.position(index);
}
Or something like this:
public void removeBytesFromStart2(ByteBuffer bf, int n) {
    int index = 0;
    for (int i = n; i < bf.limit(); i++) {
        bf.put(index++, bf.get(i));
        bf.put(i, (byte) 0);
    }
    bf.position(bf.position() - n);
}
This uses the absolute get and put methods of the ByteBuffer class and sets the position to the next write position.
Note that the absolute put method is optional, which means that a class extending the abstract class ByteBuffer may not provide an implementation for it; for example, it might throw a ReadOnlyBufferException.
Whether you choose to loop till position or till limit depends on how you use the buffer: for example, if you manually set the position you might want to loop till limit. If you do not, then looping till position is enough and more efficient.
Here are some tests:
@Test
public void removeBytesFromStart() {
    ByteBuffer bf = ByteBuffer.allocate(16);
    int expectedCapacity = bf.capacity();
    bf.put("abcdefg".getBytes());
    ByteBuffer expected = ByteBuffer.allocate(16);
    expected.put("defg".getBytes());
    removeBytesFromStart(bf, 3);
    Assert.assertEquals(expectedCapacity, bf.capacity());
    Assert.assertEquals(0, bf.compareTo(expected));
}
@Test
public void removeBytesFromStartInt() {
    ByteBuffer bf = ByteBuffer.allocate(16);
    int expectedCapacity = bf.capacity();
    bf.putInt(1);
    bf.putInt(2);
    bf.putInt(3);
    bf.putInt(4);
    ByteBuffer expected = ByteBuffer.allocate(16);
    expected.putInt(2);
    expected.putInt(3);
    expected.putInt(4);
    removeBytesFromStart2(bf, 4);
    Assert.assertEquals(expectedCapacity, bf.capacity());
    Assert.assertEquals(0, bf.compareTo(expected));
}
I think the method you are looking for is the ByteBuffer's compact() method
Even though the documentation says:
"The bytes between the buffer's current position and its limit, if any, are copied to the beginning of the buffer. That is, the byte at index p = position() is copied to index zero, the byte at index p + 1 is copied to index one, and so forth until the byte at index limit() - 1 is copied to index n = limit() - 1 - p. The buffer's position is then set to n+1 and its limit is set to its capacity."
I am not sure that this method really does that, because when I debug it, it seems like the method just does buffer.limit = buffer.capacity.
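For what it's worth, a quick check of my own (not from the posts above) suggests compact() really does copy the bytes; the flip()/get() calls just give the buffer a nonzero position before compacting:
ByteBuffer buf = ByteBuffer.allocate(8);
buf.put("abcdef".getBytes()); // position = 6
buf.flip();                   // position = 0, limit = 6
buf.get();                    // consume 'a'
buf.get();                    // consume 'b', position = 2
buf.compact();                // "cdef" is copied to index 0; position = 4, limit = 8
System.out.println(new String(buf.array(), 0, 4)); // prints "cdef"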
Do you mean to shift all the elements to the beginning of the buffer? Like this:
int n = 4;
// allocate a buffer of capacity 10
ByteBuffer b = ByteBuffer.allocate(10);
// add data to the buffer
for (int i = 0; i < b.limit(); i++) {
    b.put((byte) i);
}
// print the buffer
for (int i = 0; i < b.limit(); i++) {
    System.out.print(b.get(i) + " ");
}
// shift the elements of the buffer to the left
// and pad the end with zeros
for (int i = n; i < b.limit() + n; i++) {
    if (i < b.limit()) {
        b.put(i - n, b.get(i));
    } else {
        b.put(i - n, (byte) 0);
    }
}
// print the buffer again
System.out.println();
for (int i = 0; i < b.limit(); i++) {
    System.out.print(b.get(i) + " ");
}
For n=4 it will print:
0 1 2 3 4 5 6 7 8 9
4 5 6 7 8 9 0 0 0 0
Use the compact method for that, e.g.:
ByteBuffer b = ByteBuffer.allocate(32);
b.put("hello,world".getBytes());
b.position(6);
b.compact();
System.out.println(new String(b.array()));
I am trying to figure out a way of taking data from a file, and I want to store every 4 bytes as a bitset(32). I really have no idea how to do this. I have played about with storing each byte from the file in an array and then tried to convert every 4 bytes to a bitset, but I really cannot wrap my head around using bitsets. Any ideas on how to go about this?
FileInputStream data = null;
try
{
    data = new FileInputStream(myFile);
}
catch (FileNotFoundException e)
{
    e.printStackTrace();
}
ByteArrayOutputStream bos = new ByteArrayOutputStream();
byte[] b = new byte[1024];
int bytesRead;
while ((bytesRead = data.read(b)) != -1)
{
    bos.write(b, 0, bytesRead);
}
byte[] bytes = bos.toByteArray();
Ok, you got your byte array. Now you have to convert each group of 4 bytes to a BitSet.
// Is the number of bytes divisible by 4?
boolean divisibleByFour = bytes.length % 4 == 0;
// Initialize the BitSet array
BitSet[] bitSetArray = new BitSet[bytes.length / 4 + (divisibleByFour ? 0 : 1)];
// Here you convert each 4 bytes to a BitSet.
// You will handle the last BitSet later.
int i;
for (i = 0; i < bitSetArray.length - 1; i++) {
    int bi = i * 4;
    bitSetArray[i] = BitSet.valueOf(new byte[] { bytes[bi], bytes[bi+1], bytes[bi+2], bytes[bi+3] });
}
// Now handle the last BitSet.
// You do it here because there may remain fewer than 4 bytes for the last BitSet.
byte[] lastBitSet = new byte[bytes.length - i * 4];
for (int j = 0; j < lastBitSet.length; j++) {
    lastBitSet[j] = bytes[i * 4 + j];
}
// Put the last BitSet into your bitSetArray
bitSetArray[i] = BitSet.valueOf(lastBitSet);
I hope this works for you; I wrote it quickly and did not check whether it works. But it gives you the basic idea, which was my intention at the beginning.
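One detail worth knowing with this approach (my note, not part of the answer above): BitSet.valueOf(byte[]) uses a little-endian bit order, i.e. bit n of the BitSet corresponds to bytes[n / 8] & (1 << (n % 8)), which may or may not be the order in which you want to read your 32 bits. A quick check with java.util.BitSet:
BitSet bs = BitSet.valueOf(new byte[] { 0b0000_0101 }); // bits 0 and 2 of the first byte are set
System.out.println(bs);        // prints {0, 2}
System.out.println(bs.get(0)); // true
System.out.println(bs.get(7)); // false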
Currently I'm working on a project regarding "delayed auditory feedback" (DAF). Basically I want to record sound from a microphone, delay it by a specific amount of time and then play it back. With a delay of around 200 ms and a person wearing a headset, this feedback shuts down the person's ability to speak fluently. (Pretty much fun: DAF on youtube)
Right now I am trying to build this loop with SourceDataLine and TargetDataLine using a byte[] buffer of 256 bytes. If the buffer gets bigger, so does the delay. My problem is: I can't tell what the delay in milliseconds is.
Is there any way to calculate the real delay in ms from the buffer size? Or is there maybe another approach to get this result?
This is what my loop looks like at the moment:
private int mBufferSize; // 256
private TargetDataLine mLineOutput;
private SourceDataLine mLineInput;

public void run() {
    ... creating the DataLines and getting the lines from AudioSystem ...
    // byte buffer for audio
    byte[] data = new byte[mBufferSize];
    // start the data lines
    mLineOutput.start();
    mLineInput.start();
    // start recording and playing back
    while (running) {
        mLineOutput.read(data, 0, mBufferSize);
        mLineInput.write(data, 0, mBufferSize);
    }
    ... closing the lines and exiting ...
}
You can calculate the delay easily, as it's dependent on the sample rate of the audio. Assuming this is CD-quality (mono) audio, the sample rate is 44,100 samples per second. 200 milliseconds is 0.2 seconds, so 44,100 X 0.2 = 8820.
So your audio playback needs to be delayed by 8820 samples (or 17640 bytes). If you make your recording and playback buffers exactly this size (17640 bytes) it will make your code pretty simple. As each recording buffer is filled you pass it to playback; this will achieve a playback lag of exactly one buffer's duration.
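If you want to go the other way and compute the delay from a given buffer size, a small helper along these lines works (my sketch; format is a javax.sound.sampled.AudioFormat). For 16-bit mono at 44,100 Hz, a 256-byte buffer holds 128 frames, i.e. only about 2.9 ms, which is why the buffers above are sized to 17640 bytes instead:
// Converts a byte-buffer size into the delay it represents, in milliseconds.
// For PCM, getFrameSize() == channels * bytesPerSample.
static double bufferDelayMillis(int bufferSizeBytes, AudioFormat format) {
    double frames = (double) bufferSizeBytes / format.getFrameSize();
    return 1000.0 * frames / format.getSampleRate();
}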
There is some delay inherent in Android that you should account for, but aside from that...
Create a circular buffer. It doesn't matter how big, as long as it is more than big enough for N zero samples. Now pre-fill it with N '0' samples.
N in this case is (delay in seconds) * (sample rate in hertz).
Example: 200ms with 16kHz stereo:
0.2s*16000Hz*(2 channels)=3200*2 samples = 6400 samples
You will probably be working with pcm data too, which is 16-bit, so use short instead of byte.
After filling the buffer with the right amount of zeroes, start reading data for the speaker while filling with data from the microphone.
PCM Fifo:
public class PcmQueue
{
    private short mBuf[] = null;
    private int mWrIdx = 0;
    private int mRdIdx = 0;
    private int mCount = 0;
    private int mBufSz = 0;
    private Object mSync = new Object();

    private PcmQueue() {}

    public PcmQueue( int nBufSz )
    {
        try {
            mBuf = new short[nBufSz];
            mBufSz = nBufSz; // remember the allocated size; doWrite/doRead use it for their bounds
        } catch (Exception e) {
            Log.e(this.getClass().getName(), "AudioQueue allocation failed.", e);
            mBuf = null;
            mBufSz = 0;
        }
    }

    public int doWrite( final short pWrBuf[], final int nWrBufIdx, final int nLen )
    {
        int sampsWritten = 0;
        if ( nLen > 0 ) {
            int toWrite;
            synchronized(mSync) {
                // Write nothing if there isn't room in the buffer.
                toWrite = (nLen <= (mBufSz - mCount)) ? nLen : 0;
            }
            // We can definitely write toWrite shorts.
            while (toWrite > 0)
            {
                // Calculate how many contiguous shorts to the end of the buffer
                final int sampsToCopy = Math.min( toWrite, (mBufSz - mWrIdx) );
                // Copy that many shorts.
                System.arraycopy(pWrBuf, sampsWritten + nWrBufIdx, mBuf, mWrIdx, sampsToCopy);
                // Circular buffering.
                mWrIdx += sampsToCopy;
                if (mWrIdx >= mBufSz) {
                    mWrIdx -= mBufSz;
                }
                // Increment the number of shorts written.
                sampsWritten += sampsToCopy;
                toWrite -= sampsToCopy;
            }
            synchronized(mSync) {
                // Increment the count.
                mCount = mCount + sampsWritten;
            }
        }
        return sampsWritten;
    }

    public int doRead( short pcmBuffer[], final int nRdBufIdx, final int nRdBufLen )
    {
        int sampsRead = 0;
        final int nSampsToRead = Math.min( nRdBufLen, pcmBuffer.length - nRdBufIdx );
        if ( nSampsToRead > 0 ) {
            int sampsToRead;
            synchronized(mSync) {
                // Calculate how many shorts can be read from the buffer.
                sampsToRead = Math.min(mCount, nSampsToRead);
            }
            // We can definitely read sampsToRead shorts.
            while (sampsToRead > 0)
            {
                // Calculate how many contiguous shorts to the end of the buffer
                final int sampsToCopy = Math.min( sampsToRead, (mBufSz - mRdIdx) );
                // Copy that many shorts.
                System.arraycopy( mBuf, mRdIdx, pcmBuffer, sampsRead + nRdBufIdx, sampsToCopy);
                // Circular buffering.
                mRdIdx += sampsToCopy;
                if (mRdIdx >= mBufSz) {
                    mRdIdx -= mBufSz;
                }
                // Increment the number of shorts read.
                sampsRead += sampsToCopy;
                sampsToRead -= sampsToCopy;
            }
            // Decrement the count.
            synchronized(mSync) {
                mCount = mCount - sampsRead;
            }
        }
        return sampsRead;
    }
}
And your code, modified for the FIFO... I have no experience with TargetDataLine/SourceDataLine so if they only handle byte arrays, modify the FIFO for byte instead of short.
private int mBufferSize; // 256
private TargetDataLine mLineOutput;
private SourceDataLine mLineInput;

public void run() {
    ... creating the DataLines and getting the lines from AudioSystem ...
    // short buffer for audio
    short[] data = new short[256];
    final int emptySamples = (int)(44100.0 * 0.2);
    final int bufferSize = emptySamples * 2;
    PcmQueue pcmQueue = new PcmQueue( bufferSize );
    // Create a temporary buffer of silence and write it to the PCM queue
    {
        short[] emptyBuf = new short[emptySamples];
        Arrays.fill(emptyBuf, (short) 0);
        pcmQueue.doWrite(emptyBuf, 0, emptySamples);
    }
    // start recording and playing back
    while (running) {
        mLineOutput.read(data, 0, mBufferSize);
        pcmQueue.doWrite(data, 0, mBufferSize);
        pcmQueue.doRead(data, 0, mBufferSize);
        mLineInput.write(data, 0, mBufferSize);
    }
    ... closing the lines and exiting ...
}
I have some code that does not seem to operate the way it should. The whole point is to take a 256x128x256x2 array of integers, split it into 256 16x128x16x2 chunks, process each chunk into a byte array, then add that byte array to a main array of bytes to be saved. chunkdata[] is fine before saving, but after saving the whole file is blank except for the first 4096 bytes: the location table (the location of each chunk in the file) is there and the first four-byte "chunk header" is there, but everything else is 0s, which isn't supposed to happen.
public void createFile(int[][][][] map){
    byte[] file = new byte[fileLength]; // 22,024,192 bytes long
    System.arraycopy(Sector.locationTable, 0, file, 0, Sector.locationTable.length); // This works as it should
    for(int cx = 0; cx < 16; cx++)
    {
        for(int cz = 0; cz < 16; cz++)
        {
            int start = sectorLength+cx*(sectorLength*chunkSectorLength)+cz*(chunkRows*sectorLength*chunkSectorLength); // this algorithm works, just rather hideous
            int[][][][] chunk = getChunk(map, cx * 16, cz * 16); // This works as it should
            byte[] chunkdata = putChunk(chunk); // The data from this is correct
            int counter = 0;
            for(int i=start;i<chunkdata.length;i++){
                file[i]=chunkdata[counter]; // Data loss here?
                counter++;
            }
        }
    }
    System.out.println("Saving file...");
    writeFile(file, fileLocation);
}

public static void writeFile(byte[] file, String filename){
    try{
        FileOutputStream fos = new FileOutputStream(filename);
        fos.write(file);
        fos.close();
        Messages.showSuccessfulSave();
    }catch(Exception ex){
        Messages.showFileSavingError(ex);
    }
}
So, assuming putChunk and getChunk work as intended, and my hideous algorithms, what could cause everything past the first 4096 bytes to be blank?
Thanks in advance.
Why are you comparing i against chunkdata.length when i is initialized with start? I think counter should be used instead.
Current:
int counter = 0;
for(int i=start;i<chunkdata.length;i++){
    file[i]=chunkdata[counter]; // Data loss here?
    counter++;
}
Instead, you want to write something like this:
int counter = 0;
for(int i=start;counter<chunkdata.length;i++){
    file[i]=chunkdata[counter];
    counter++;
}
or, in a more compact way:
for(int i=start,counter=0;counter<chunkdata.length;i++,counter++){
    file[i]=chunkdata[counter];
}