I need to create FloatBuffers from a dynamic set of floats (that is, I don't know the length ahead of time). The only way I've found to do this is rather inelegant (below). I assume I'm missing something and there must be a cleaner/simpler method.
My solution:
Vector<Float> work = new Vector<Float>();
// add stuff to work
ByteBuffer bb = ByteBuffer.allocateDirect( work.size() * 4 /* sizeof(float) */ );
bb.order( ByteOrder.nativeOrder() );
FloatBuffer floatBuf = bb.asFloatBuffer();
for ( Float f : work )
    floatBuf.put( f );
floatBuf.position( 0 );
I am using my buffers for OpenGL commands, so I need to keep them around (that is, the resulting FloatBuffer is not just temporary scratch space).
If you're using the OpenGL API through Java, I assume you're using LWJGL as the go-between. If so, there's a simple solution for this, which is to use the BufferUtils class in the org.lwjgl package. BufferUtils.createFloatBuffer() gives you a direct, natively ordered FloatBuffer in one call, which you can then fill from an array; if you're using a Vector, that's a simple conversion. Although it's not much better than your method, it does away with the explicit byte buffer, which is nasty enough, and allows for a few quick conversions. The code for this appears in the new LWJGL tutorials for OpenGL 3.2+ here.
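For example, something along these lines (a sketch assuming LWJGL is on the classpath and work is the collection from your question):

// createFloatBuffer returns a direct FloatBuffer in native byte order,
// sized to the dynamic collection - no explicit ByteBuffer needed.
Vector<Float> work = ...; // filled elsewhere, as in the question
FloatBuffer floatBuf = BufferUtils.createFloatBuffer(work.size());
for (float f : work)
    floatBuf.put(f);
floatBuf.flip(); // rewind so OpenGL reads from the start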
Hope this helps.
I would use a plain ByteBuffer and write out the data when the buffer fills (or do whatever you planned to do with it).
e.g.
SocketChannel sc = ...
ByteBuffer bb = ByteBuffer.allocateDirect(32 * 1024).order(ByteOrder.LITTLE_ENDIAN);
for (int i = 0; i < 100000000; i++) {
    float f = i;
    // move to a checkFree(4) method.
    if (bb.remaining() < 4) {
        bb.flip();
        while (bb.remaining() > 0)
            sc.write(bb);
        bb.clear(); // reset for refilling; without this the next putFloat overflows
    }
    // end of method
    bb.putFloat(f);
}
// after the loop, remember to flip and drain whatever is still buffered
Creating really large buffers can actually be slower than processing the data as you generate it.
Note: this creates almost no garbage. There is only one object, which is the ByteBuffer.
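For what it's worth, the checkFree(4) helper that the inline comment alludes to might look like this sketch:

static void checkFree(ByteBuffer bb, SocketChannel sc, int needed) throws IOException {
    // drain the buffer to the channel when fewer than `needed` bytes remain
    if (bb.remaining() < needed) {
        bb.flip();
        while (bb.remaining() > 0)
            sc.write(bb);
        bb.clear(); // make the whole buffer writable again
    }
}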
I created a custom DataSetIterator. It works by randomly generating two INDArrays (one for input and one for output) in the next method and creating a DataSet out of them:
int[][] inputArray = new int[num][NUM_INPUTS];
int[][] expectedOutputArray = new int[num][];
for (int i = 0; i < num; i++) { // just fill the arrays with some data
    int sum = 0;
    int product = 1;
    for (int j = 0; j < inputArray[i].length; j++) {
        inputArray[i][j] = rand.nextInt();
        sum += inputArray[i][j];
        product *= inputArray[i][j];
    }
    expectedOutputArray[i] = new int[] { sum, product, sum / inputArray[i].length };
}
INDArray inputs = Nd4j.createFromArray(inputArray); // never closed
INDArray desiredOutputs = Nd4j.createFromArray(expectedOutputArray); // never closed
return new DataSet(inputs, desiredOutputs);
However, INDArray implements AutoCloseable, and the javadoc for close() states:
This method releases exclusive off-heap resources used by this INDArray instance. If the INDArray relies on shared resources, an exception will be thrown instead. PLEASE NOTE: This method is NOT safe by any means
Do I need to close the INDArrays?
If so, when do I need to close the INDArrays?
I have tried using try-with-resources, but it threw an exception because the INDArray was already closed when it was used in the fit method.
The documentation of createFromArray(int[][]) does not seem to explain this.
You don't really need to close them; we take care of that automatically with JavaCPP. You can choose to close them, but AutoCloseable was implemented for people who want more control over the memory management of the ndarrays.
Edit: JavaCPP is the underlying native integration we use to connect to the native libraries we maintain, written in C++, and to other libraries. All of our calculations and data are based on native code and off-heap memory.
close() just forces us to deallocate those buffers sooner; JavaCPP has automatic deallocation built into it already.
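If you do want deterministic cleanup anyway, and you control the fit call directly (rather than handing the iterator itself to fit), one option is to close the arrays only after everything that consumes the DataSet has finished. A sketch, where model stands in for whatever network you're fitting:

INDArray inputs = Nd4j.createFromArray(inputArray);
INDArray desiredOutputs = Nd4j.createFromArray(expectedOutputArray);
try {
    model.fit(new DataSet(inputs, desiredOutputs)); // data is consumed here
} finally {
    inputs.close();          // optional: releases off-heap memory eagerly
    desiredOutputs.close();  // otherwise JavaCPP deallocates it later
}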
I'm reading about buffered streams. I searched and found many answers that clarified my concepts, but I still have a few more questions.
After searching, I have come to understand that a buffer is temporary memory (RAM) which helps a program read data quickly instead of going to the hard disk, and that when the buffer is empty, the native input API is called.
After reading a little more, I got this answer from here:
Reading data from disk byte-by-byte is very inefficient. One way to speed it up is to use a buffer: instead of reading one byte at a time, you read a few thousand bytes at once, and put them in a buffer, in memory. Then you can look at the bytes in the buffer one by one.
I have two points of confusion.
1: How, and by what, is the data loaded into the buffer? (How does the native API do it?) As the quote above says, a few thousand bytes are filled in at once, but who fills them, and won't it consume the same time? Suppose I have 5 MB of data, and the 5 MB is loaded into the buffer in 5 seconds, and then the program uses this data from the buffer in another 5 seconds: 10 seconds total. But if I skip buffering, the program gets the data directly from the hard disk at 1 MB per 2 seconds, the same 10 seconds total. Please clear up this confusion.
2: How does this line work?
BufferedReader inputStream = new BufferedReader(new FileReader("xanadu.txt"));
My thinking is that FileReader writes data into a buffer, and then BufferedReader reads the data from that buffer memory. Please explain this as well.
Thanks.
As for the performance of using buffering during read/write, its impact is probably minimal, since the OS will cache too; however, buffering reduces the number of calls to the OS, which does have an impact.
When you add other operations on top, such as character encoding/decoding or compression/decompression, the impact is greater, as those operations are more efficient when done in blocks.
Your second question said:
My thinking is that FileReader writes data into a buffer, and then BufferedReader reads the data from that buffer memory. Please explain this as well.
I believe your thinking is wrong. Yes, technically the FileReader will write data to a buffer, but that buffer is not defined by the FileReader; it's defined by the caller of the FileReader.read(buffer) method.
The operation is initiated from the outside, when some code calls BufferedReader.read() (any of the overloads). BufferedReader will then check its buffer, and if enough data is available there, it will return the data without involving the FileReader. If more data is needed, the BufferedReader will call the FileReader.read(buffer) method to get the next chunk of data.
It's a pull operation, not a push, meaning the data is pulled out of the readers by the caller.
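To make the pull model concrete, here is a deliberately simplified sketch (my own names, not the JDK's) of what a buffered reader does:

import java.io.IOException;
import java.io.Reader;

// Illustrative only - not the real BufferedReader. The caller drives the
// reads; the buffered layer refills its char[] from the underlying Reader
// (e.g. a FileReader) only when it runs dry.
class MiniBufferedReader {
    private final Reader in;
    private final char[] buf = new char[8192];
    private int pos = 0, count = 0;

    MiniBufferedReader(Reader in) { this.in = in; }

    int read() throws IOException {
        if (pos >= count) {
            count = in.read(buf, 0, buf.length); // pull the next chunk
            pos = 0;
            if (count <= 0) return -1; // end of stream
        }
        return buf[pos++]; // serve from memory, no OS call
    }
}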
All of this work is done by a private method named fill(), which I reproduce here for educational purposes; any Java IDE will let you see the source code yourself:
private void fill() throws IOException {
    int dst;
    if (markedChar <= UNMARKED) {
        /* No mark */
        dst = 0;
    } else {
        /* Marked */
        int delta = nextChar - markedChar;
        if (delta >= readAheadLimit) {
            /* Gone past read-ahead limit: Invalidate mark */
            markedChar = INVALIDATED;
            readAheadLimit = 0;
            dst = 0;
        } else {
            if (readAheadLimit <= cb.length) {
                /* Shuffle in the current buffer */
                // here the read chars are copied within the in-memory buffer named cb
                System.arraycopy(cb, markedChar, cb, 0, delta);
                markedChar = 0;
                dst = delta;
            } else {
                /* Reallocate buffer to accommodate read-ahead limit */
                char ncb[] = new char[readAheadLimit];
                System.arraycopy(cb, markedChar, ncb, 0, delta);
                cb = ncb;
                markedChar = 0;
                dst = delta;
            }
            nextChar = nChars = delta;
        }
    }
    int n;
    do {
        n = in.read(cb, dst, cb.length - dst);
    } while (n == 0);
    if (n > 0) {
        nChars = dst + n;
        nextChar = dst;
    }
}
I'm trying to normalize an audio file of speech.
Specifically, where an audio file contains peaks in volume, I'm trying to level it out, so the quiet sections are louder, and the peaks are quieter.
I know very little about audio manipulation, beyond what I've learnt from working on this task. Also, my math is embarrassingly weak.
I've done some research, and the Xuggle site provides a sample which shows reducing the volume using the following code: (full version here)
@Override
public void onAudioSamples(IAudioSamplesEvent event)
{
    // get the raw audio bytes and adjust their values
    ShortBuffer buffer = event.getAudioSamples().getByteBuffer().asShortBuffer();
    for (int i = 0; i < buffer.limit(); ++i)
        buffer.put(i, (short) (buffer.get(i) * mVolume));
    super.onAudioSamples(event);
}
Here, they scale each sample in getAudioSamples() by the constant mVolume.
Building on this approach, I've attempted a normalisation that maps each sample in getAudioSamples() to a normalised value, taking the max/min in the file into account (see below for details). I have a simple filter to leave "silence" alone (i.e., anything below a threshold value).
I'm finding that the output file is very noisy (i.e., the quality is seriously degraded). I assume that the error is either in my normalisation algorithm or in the way I manipulate the samples. However, I'm unsure of where to go next.
Here's an abridged version of what I'm currently doing.
Step 1: Find peaks in file:
Reads the full audio file, and finds the highest and lowest values of buffer.get() across all AudioSamples:
@Override
public void onAudioSamples(IAudioSamplesEvent event) {
    IAudioSamples audioSamples = event.getAudioSamples();
    ShortBuffer buffer = audioSamples.getByteBuffer().asShortBuffer();

    short min = Short.MAX_VALUE;
    short max = Short.MIN_VALUE;
    for (int i = 0; i < buffer.limit(); ++i) {
        short value = buffer.get(i);
        min = (short) Math.min(min, value);
        max = (short) Math.max(max, value);
    }
    // assignment of min/max omitted for brevity.
    super.onAudioSamples(event);
}
Step 2: Normalize all values:
In a loop similar to step 1, replace the buffer contents with normalised values, calling:
buffer.put(i, normalize(buffer.get(i)));
public short normalize(short value) {
    if (isBackgroundNoise(value))
        return value;

    short rawMin = // min from step 1
    short rawMax = // max from step 1
    short targetRangeMin = 1000;
    short targetRangeMax = 8000;

    int abs = Math.abs(value);
    double a = (abs - rawMin) * (targetRangeMax - targetRangeMin);
    double b = (rawMax - rawMin);
    double result = targetRangeMin + (a / b);

    // Copy the sign of value to result.
    result = Math.copySign(result, value);
    return (short) result;
}
Questions:
Is this a valid approach for attempting to normalize an audio file?
Is my math in normalize() valid?
Why would this cause the file to become noisy, where a similar approach in the demo code doesn't?
I don't think the concept of a "minimum sample value" is very meaningful, since a sample value just represents the current "height" of the sound wave at a certain instant in time; its absolute value will vary between the peak value of the audio clip and zero. Thus, having a targetRangeMin seems wrong and will probably cause some distortion of the waveform.
I think a better approach might be some sort of weighting function that decreases the sample value based on its size, i.e. bigger values are decreased by a larger percentage than smaller values. This would also introduce some distortion, but probably not a very noticeable amount.
Edit: here is a sample implementation of such a method:
public short normalize(short value) {
    short rawMax = // max from step 1
    short targetMax = 8000;

    // This is the maximum volume reduction
    double maxReduce = 1 - targetMax / (double) rawMax;

    int abs = Math.abs(value);
    double factor = maxReduce * abs / (double) rawMax;
    return (short) Math.round((1 - factor) * value);
}
For reference, this is what your algorithm did to a sine curve with an amplitude of 10000 (plot not reproduced here); the distortion explains why the audio quality becomes much worse after being normalized.
This is the result after running with my suggested normalize method (plot likewise not reproduced).
"normalization" of audio is the process of increasing the level of the audio such that the maximum is equal to some given value, usually the maximum possible value. Today, in another question, someone explained how to do this (see #1): audio volume normalization
However, you go on to say "Specifically, where an audio file contains peaks in volume, I'm trying to level it out, so the quiet sections are louder, and the peaks are quieter." This is called "compression" or "limiting" (not to be confused with the type of compression such as that used in encoding MP3s!). You can read more about that here: http://en.wikipedia.org/wiki/Dynamic_range_compression
A simple compressor is not particularly hard to implement, but you say your math "is embarrassingly weak", so you might want to find one that's already built. You might be able to find a compressor implemented in http://sox.sourceforge.net/ and convert it from C to Java. The only Java implementation of a compressor I know of whose source is available (and it's not very good) is in this book.
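To give a feel for what is involved, here is a minimal hard-knee compressor sketch for a single 16-bit sample; the threshold and ratio are assumed parameters you would tune by ear, not values from any of the libraries above:

public static short compress(short value, int threshold, double ratio) {
    int abs = Math.abs(value);
    if (abs <= threshold)
        return value; // below the threshold: pass through unchanged
    // above the threshold: scale only the excess down by the ratio
    double out = threshold + (abs - threshold) / ratio;
    return (short) Math.copySign(Math.min(out, Short.MAX_VALUE), value);
}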
As an alternative to solve your problem, you might be able to normalize your file in segments of say 1/2 a second each, and then connect the gain values you use for each segment using linear interpolation. You can read about linear interpolation for audio here: http://blog.bjornroche.com/2010/10/linear-interpolation-for-audio-in-c-c.html
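A rough sketch of that segment idea (the method name and the per-segment gains are mine; segmentGain[k] might be the target peak divided by the measured peak of window k):

static void applyInterpolatedGain(short[] samples, double[] segmentGain, int samplesPerSegment) {
    for (int i = 0; i < samples.length; i++) {
        double pos = (double) i / samplesPerSegment;     // position in segment units
        int seg = Math.min((int) pos, segmentGain.length - 2);
        double frac = Math.min(pos - seg, 1.0);          // clamp at the final segment
        // linear interpolation between this segment's gain and the next one's
        double gain = segmentGain[seg] * (1 - frac) + segmentGain[seg + 1] * frac;
        long scaled = Math.round(samples[i] * gain);
        samples[i] = (short) Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, scaled));
    }
}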
I don't know if the source code for The Levelator is available, but that's something else you can try.
I'm currently stumped. I've been looking around and experimenting with audio comparison. I've found quite a bit of material, and a ton of references to different libraries and methods to do it.
As of now I've taken Audacity and exported a 3 min wav file called "long.wav" and then split the first 30 seconds of that into a file called "short.wav". I figured somewhere along the line I could visually log (log.txt) the data through Java for each file and should be able to see at least some visual similarities among the values... here's some code.
Main method:
int totalFramesRead = 0;
File fileIn = new File(filePath);
BufferedWriter writer = new BufferedWriter(new FileWriter(outPath));
writer.flush();
writer.write("");
try {
    AudioInputStream audioInputStream = AudioSystem.getAudioInputStream(fileIn);
    int bytesPerFrame = audioInputStream.getFormat().getFrameSize();
    if (bytesPerFrame == AudioSystem.NOT_SPECIFIED) {
        // some audio formats may have unspecified frame size;
        // in that case we may read any amount of bytes
        bytesPerFrame = 1;
    }
    // Set an arbitrary buffer size of 1024 frames.
    int numBytes = 1024 * bytesPerFrame;
    byte[] audioBytes = new byte[numBytes];
    try {
        int numBytesRead = 0;
        int numFramesRead = 0;
        // Try to read numBytes bytes from the file.
        while ((numBytesRead = audioInputStream.read(audioBytes)) != -1) {
            // Calculate the number of frames actually read.
            numFramesRead = numBytesRead / bytesPerFrame;
            totalFramesRead += numFramesRead;
            // Here, do something useful with the audio data that's
            // now in the audioBytes array...
            if (totalFramesRead <= 4096 * 100) {
                Complex[][] results = PerformFFT(audioBytes);
                int[][] lines = GetKeyPoints(results);
                DumpToFile(lines, writer);
            }
        }
    } catch (Exception ex) {
        // Handle the error...
    }
    audioInputStream.close();
} catch (Exception e) {
    // Handle the error...
}
writer.close();
Then PerformFFT:
public static Complex[][] PerformFFT(byte[] data) throws IOException {
    final int totalSize = data.length;
    int amountPossible = totalSize / Harvester.CHUNK_SIZE;

    // When turning into the frequency domain we'll need complex numbers:
    Complex[][] results = new Complex[amountPossible][];

    // For all the chunks:
    for (int times = 0; times < amountPossible; times++) {
        Complex[] complex = new Complex[Harvester.CHUNK_SIZE];
        for (int i = 0; i < Harvester.CHUNK_SIZE; i++) {
            // Put the time-domain data into a complex number with imaginary part 0:
            complex[i] = new Complex(data[(times * Harvester.CHUNK_SIZE) + i], 0);
        }
        // Perform FFT analysis on the chunk:
        results[times] = FFT.fft(complex);
    }
    return results;
}
At this point I've tried logging everywhere: audioBytes before transforms, Complex values, and FFT results.
The problem: no matter what values I log, the log.txt of each wav file is completely different. I don't understand it. Given that I took short.wav from long.wav (and they have all the same properties), there should be a very heavy similarity somewhere: in the raw wav byte[] data, or the Complex[][] FFT data, or somewhere along the way.
How can I possibly compare these files if the data isn't even close to similar at any point in these calculations?
I know I'm missing quite a bit of knowledge with regards to audio analysis, and this is why I come to the board for help! Thanks for any info, help, or fixes you can offer!!
Have you looked at MARF? It is a well-documented Java library used for audio recognition.
It is used to recognize speakers (for transcription or for securing software), but the same features should be usable to classify audio samples. I'm not familiar with it, but it looks like you'd want to use the FeatureExtraction class to extract an array of features from each audio sample and then create a unique ID.
For 16-bit audio, 3e-05 isn't really that different from zero, so a file of values that small is pretty much the same as a file of zeros (maybe missing equality by some tiny rounding errors).
ADDED:
For your comparison, read in and plot, using some Java plotting library, a portion of each of the two waveforms once they get past the part that's mostly (close to) zero.
I think for debugging you'd do better to use MATLAB to plot things out, since MATLAB is much more powerful for dealing with this problem.
Use wavread on the file, and stft to get the short-time Fourier transform, which is a matrix of complex numbers. Then abs(Matrix) simply gives the magnitude of each complex number. Show the image with imshow(abs(Matrix),[]).
I don't know how you would compare the whole file with the 30-second clip (by looking at the STFT images?).
I don't know how you are comparing both audio files, but judging from services that offer music recognition (like TrackID or MotoID), these services take a small sample of the music you're hearing (10-20 seconds) and then process it on their servers. I theorize that they keep samples that long or shorter, and that they have a database of patterns for those samples (or compute them on the fly); in your case, the patterns are Fourier transforms. You may need to break your long audio file into chunks the same size as, or smaller than, your sample data. In the first case you may find the specific chunk that most resembles the pattern in your sample data; in the second case your smaller chunks may resemble a part of your sample data, and you can calculate the probability that the sample data belongs to a given audio file.
I think you are looking for acoustic fingerprinting.
It's hard, and there are libraries to do it.
If you want to implement it yourself, this is a whitepaper on the Shazam algorithm.
I'm trying to grab a set of mipmap levels and save them to a local cache file to avoid rebuilding them each time (it's not practical to pre-generate them...).
I've got the mipmap levels into a set of bitmaps OK, and now want to write them to my cache file, but whatever variety of buffer I use (direct or not, setting the byte order or not), hasArray() always comes back false on the IntBuffer. I must be doing something silly here, but I can't see the wood for the trees anymore.
I've not been using Java for long, so this is probably a noob error ;)
Code looks like this:
int tsize = 256;
ByteBuffer xbb = ByteBuffer.allocate(tsize * tsize * 4);
// or any other variety of create, like wrap etc. - makes no difference
xbb.order(ByteOrder.nativeOrder()); // in or out - makes no difference
IntBuffer ib = xbb.asIntBuffer();
for (int i = 0; i < tbm.length; i++) {
    // ib.array() throws an exception; ib.hasArray() returns false
    tbm[i].getPixels(ib.array(), 0, tsize, 0, 0, tsize, tsize);
    ou.write(xbb.array(), 0, tsize * tsize * 4);
    tsize = tsize / 2;
}
See this link - it talks about the same problem. There are answers there; however, it concludes with:
In another post
http://forum.java.sun.com/thread.jsp?forum=4&thread=437539
it is suggested to implement a version of DataBuffer which is
backed by a (Mapped)ByteBuffer *gasp* ...
Note that forum.java.sun.com has since moved to Oracle's forums.
I hope this helps. Anyhow, if you find a better answer, let me know too :-)
In my testing, changing the ByteBuffer into an IntBuffer using ByteBuffer.asIntBuffer() causes you to lose the backing array.
If you use IntBuffer.allocate(tsize*tsize) instead, you should be able to get the backing array.
ByteBuffer buf = ByteBuffer.allocate(10);
IntBuffer ib = buf.asIntBuffer();
System.out.printf("Buf %s, buf.hasArray: %s, ib.hasArray %s\n", ib, buf.hasArray(), ib.hasArray());
buf = ByteBuffer.allocateDirect(10);
ib = IntBuffer.allocate(10);
System.out.printf("Buf %s, buf.hasArray: %s, ib.hasArray %s\n", ib, buf.hasArray(), ib.hasArray());
Produces:
Buf java.nio.ByteBufferAsIntBufferB[pos=0 lim=2 cap=2], buf.hasArray: true, ib.hasArray false
Buf java.nio.HeapIntBuffer[pos=0 lim=10 cap=10], buf.hasArray: false, ib.hasArray true
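Applied to the code in the question, a sketch of the workaround might look like this (assuming tbm is an array of Android Bitmaps and ou is an OutputStream, as in the original snippet):

int tsize = 256;
for (int i = 0; i < tbm.length; i++) {
    // a plain int[] for getPixels - heap IntBuffers are backed by one anyway
    int[] pixels = new int[tsize * tsize];
    tbm[i].getPixels(pixels, 0, tsize, 0, 0, tsize, tsize);
    // copy the ints into a heap ByteBuffer so we can write its backing array
    ByteBuffer xbb = ByteBuffer.allocate(tsize * tsize * 4).order(ByteOrder.nativeOrder());
    xbb.asIntBuffer().put(pixels);
    ou.write(xbb.array(), 0, tsize * tsize * 4);
    tsize = tsize / 2;
}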