I know the first thing you are thinking is "look in the documentation", but the documentation is not clear about this.
I use the library to get the FFT and I followed this short guide:
http://www.digiphd.com/android-java-reconstruction-fast-fourier-transform-real-signal-libgdx-fft/
The problem arises when it uses:
fft.forward(array);
fft_cpx=fft.getSpectrum();
tmpi = fft.getImaginaryPart();
tmpr = fft.getRealPart();
Both "fft_cpx", "tmpi", "tmpr" are float vectors. While "tmpi" and "tmpr" are used for calculate the magnitude, "fft_cpx" is not used anymore.
I thought that getSpectrum() was the combination of getRealPart() and getImaginaryPart(), but the values are all different.
Maybe the results from getSpectrum() are complex values, but what is their representation?
I tried removing fft_cpx=fft.getSpectrum(); and it seems to work correctly, but I'd like to know whether it is actually necessary and what the difference is between getSpectrum(), getRealPart() and getImaginaryPart().
The documentation is at:
http://libgdx-android.com/docs/api/com/badlogic/gdx/audio/analysis/FFT.html
public float[] getSpectrum()
Returns: the spectrum of the last FourierTransform.forward() call.
public float[] getRealPart()
Returns: the real part of the last FourierTransform.forward() call.
public float[] getImaginaryPart()
Returns: the imaginary part of the last FourierTransform.forward() call.
Thanks!
getSpectrum() returns the absolute values (magnitudes) of the complex numbers.
It is calculated like this:
for (int i = 0; i < spectrum.length; i++) {
    spectrum[i] = (float) Math.sqrt(real[i] * real[i] + imag[i] * imag[i]);
}
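In other words, calling getSpectrum() is redundant if you compute the magnitude from the real and imaginary parts yourself. A minimal sketch, assuming the libGDX FFT API quoted above (the window size and sample rate are just example values):

FFT fft = new FFT(1024, 44100);      // 1024-sample window at 44.1 kHz (example values)
fft.forward(array);                  // array holds 1024 real-valued samples
float[] re = fft.getRealPart();
float[] im = fft.getImaginaryPart();
float[] spectrum = fft.getSpectrum();
for (int i = 0; i < spectrum.length; i++) {
    float magnitude = (float) Math.sqrt(re[i] * re[i] + im[i] * im[i]);
    // magnitude should agree with spectrum[i] up to rounding
}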
I have a task to sort search results not only by the relevance of string fields of the indexed documents, but also by the distance from a given geographical point to a point associated with each indexed document. It should be mentioned that only the top ten or so matched documents need to be included in the result set. Also, sorting by precise distance is not important; only rough "distance levels" from the given point matter.
Technically I have successfully implemented the task. The geographical part of the task was implemented as a CustomScoreQuery-derived class:
private static class DistanceQuery extends CustomScoreQuery {
    public DistanceQuery(final Query _subQuery, final SpatialStrategy _strategy, final Point _bp) {
        super(_subQuery, new FunctionQuery(_strategy.makeDistanceValueSource(_bp)));
    }
    @Override
    protected CustomScoreProvider getCustomScoreProvider(AtomicReaderContext _context) throws IOException {
        return new CustomScoreProvider(_context) {
            @Override
            public float customScore(int _doc, float _subQueryScore, float _valSrcScore) throws IOException {
                // the spatial strategy's makeDistanceValueSource creates a ValueSource whose score varies
                // from almost 0 for nearby points to 2.7-2.8 for distant points,
                // so I voluntarily chose 2 as the normalization factor and increase subQueryScore by that factor at most
                logger.debug("customScore for document {}: [subQuery={}, valScore={}]", this.context.reader().document(_doc).getField(IndexedField.id.name()).numericValue().toString(), _subQueryScore, _valSrcScore);
                return (_valSrcScore > 2 || _valSrcScore < 0) ? _subQueryScore : _subQueryScore + (2 - _valSrcScore);
            }
        };
    }
}
and wrap a given "textual" query with this geospatial "enhancement".
Generally speaking, the chosen strategy gives me pretty reasonable results. As one may see, the final score only slightly exceeds the initial query score (by 2 at most). With typical result scores of a dozen and more, this geospatial addition works just as a way to "post-sort" otherwise similar documents.
With a few hundred or a few thousand test documents in the index, performance of the wrapped query was also good enough: about 10-50 milliseconds per search, which is just 2-5 times slower than an unwrapped query.
But when I switched from a test DB to a real-world one and the number of documents in the index rose from a thousand to approximately 10 million (with an estimate of a hundred million in the near future), the situation changed dramatically. Actually, I can't get any search results anymore because the JVM runs out of memory and processor time. Currently it can't finish the search even with -Xmx6g and more.
Certainly I could buy better hardware for the task, but the problem can more likely be solved by choosing a more appropriate sorting strategy.
One solution is to completely avoid the geo-sorting provided by Lucene and manually sort the top N items of the result set when their relevance scores are similar. I'm going to choose this way if nothing else helps.
But my question is whether more adequate solutions exist. Maybe I can somehow split result items into equivalence classes (with the same or similar enough scores) and apply geospatial sorting only to the first few classes? Please suggest.
Look at how Elasticsearch implements this in the function_score query. You can probably reuse a few things from what they do. If I remember correctly, they can optionally use faster but less accurate distance calculation algorithms as well. You probably want to do something similar.
I'm using another CustomScoreProvider for DistanceQuery:
public class DistanceQueryScoreProvider extends CustomScoreProvider {
    private double x;
    private double y;

    public DistanceQueryScoreProvider(LeafReaderContext context, double x, double y) {
        super(context);
        this.x = x;
        this.y = y;
    }

    @Override
    public float customScore(int doc, float subQueryScore, float valSrcScore) throws IOException {
        Document d = context.reader().document(doc);
        double geomX = d.getField(Consts.GEOM_X_FIELD).numericValue().doubleValue();
        double geomY = d.getField(Consts.GEOM_Y_FIELD).numericValue().doubleValue();
        // equirectangular approximation: ~110.25 km per degree of latitude;
        // note that Math.cos expects radians, so the latitude is converted first
        double deglen = 110.25;
        double deltaX = geomY - y;
        double deltaY = (geomX - x) * Math.cos(Math.toRadians(y));
        // negate the distance so that nearer documents score higher
        return -Double.valueOf(deglen * Math.sqrt(deltaX * deltaX + deltaY * deltaY)).floatValue();
    }
}
The Elasticsearch implementation of the plane distance function from Sorting by Distance was slower than the customScore function above, which was implemented based on the article Geographic distance can be simple and fast.
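For completeness, here is a sketch of how such a provider might be wired into a query (my own illustration, not the poster's code; it assumes a Lucene 5.x-era CustomScoreQuery, and GeoDistanceQuery is a hypothetical name):

public class GeoDistanceQuery extends CustomScoreQuery {
    private final double x;
    private final double y;

    public GeoDistanceQuery(Query subQuery, double x, double y) {
        super(subQuery);
        this.x = x;
        this.y = y;
    }

    @Override
    protected CustomScoreProvider getCustomScoreProvider(LeafReaderContext context) throws IOException {
        // each index segment gets its own provider carrying the query point
        return new DistanceQueryScoreProvider(context, x, y);
    }
}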
user3159253, maybe you have your answer for this thread?
I'm trying to solve a constrained non-linear 267-dimensional optimization problem with the Java optimization library supplied by Apache Commons.
After 3 days of deciphering, this is what I have:
public class optimize2 {
    public static void main(String[] args) {
        double[] point = {1., 2.};
        double[] cost = {3., 2.};
        MultivariateFunction function = new MultivariateFunction() {
            public double value(double[] point) {
                double x = point[0];
                double y = point[1];
                return x * y;
            }
        };
        MultivariateOptimizer optimize = new BOBYQAOptimizer(5);
        optimize.optimize(
                new MaxEval(200),
                GoalType.MAXIMIZE,
                new InitialGuess(point),
                new ObjectiveFunction(function),
                new LinearConstraint(cost, Relationship.EQ, 30));
    }
}
For whatever reason optimize.optimize() is throwing a NullPointerException. Maybe I'm just being dumb, but I can't figure out how to get this to work.
Here is the error:
Exception in thread "main" java.lang.NullPointerException
at org.apache.commons.math3.optim.nonlinear.scalar.noderiv.BOBYQAOptimizer.setup(BOBYQAOptimizer.java:2401)
at org.apache.commons.math3.optim.nonlinear.scalar.noderiv.BOBYQAOptimizer.doOptimize(BOBYQAOptimizer.java:236)
at org.apache.commons.math3.optim.nonlinear.scalar.noderiv.BOBYQAOptimizer.doOptimize(BOBYQAOptimizer.java:49)
at org.apache.commons.math3.optim.BaseOptimizer.optimize(BaseOptimizer.java:143)
at org.apache.commons.math3.optim.BaseMultivariateOptimizer.optimize(BaseMultivariateOptimizer.java:66)
at org.apache.commons.math3.optim.nonlinear.scalar.MultivariateOptimizer.optimize(MultivariateOptimizer.java:64)
at Test.Code.optimize2.main(optimize2.java:39)
Looking directly into the BOBYQA code, it actually seems like the problem is that you have not explicitly defined any variable bounds. Line 2401 (in the setup method) reads as follows:
boundDifference[i] = upperBound[i] - lowerBound[i];
In the doOptimize method, prior to calling setup, the bounds are set using these methods:
final double[] lowerBound = getLowerBound();
final double[] upperBound = getUpperBound();
These methods are defined in BaseMultivariateOptimizer like this:
public double[] getLowerBound() {
    return lowerBound == null ? null : lowerBound.clone();
}
(and analogously for getUpperBound()). But lowerBound and upperBound in BaseMultivariateOptimizer are only set if the optimization data in the optimize call contains bounds information. If the bounds are not set in the call to optimize, you should therefore receive a NullPointerException.
Looking at the BOBYQA test code, it seems it should be sufficient to add the following argument to the optimize call:
SimpleBounds.unbounded(point.length)
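Putting it together, the corrected call might look like this (a sketch based on your snippet; I have dropped the LinearConstraint because, as far as I can tell, BOBYQA does not consume constraint data anyway):

optimize.optimize(
        new MaxEval(200),
        GoalType.MAXIMIZE,
        new InitialGuess(point),
        new ObjectiveFunction(function),
        // BOBYQA requires bounds; unbounded() supplies -infinity/+infinity per variable
        SimpleBounds.unbounded(point.length));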
Having said that, I also do not think you will be able to completely solve your problem using any of the nonlinear optimizers in Apache Commons Math, since as far as I can tell none of these optimizers can handle linear or nonlinear constraints. I recommend that you take a look at for example Michael Powell's COBYLA2 algorithm instead. I have migrated the original FORTRAN code of this algorithm to Java, and you can find the code here and here.
I have a problem in using the apache commons math library.
I just want to create a function like f(x) = 4x^2 + 2x and compute its derivative: f'(x) = 8x + 2.
I read the article about Differentiation (http://commons.apache.org/proper/commons-math/userguide/analysis.html, section 4.7).
There is an example which I don't understand:
int params = 1;
int order = 3;
double xRealValue = 2.5;
DerivativeStructure x = new DerivativeStructure(params, order, 0, xRealValue);
DerivativeStructure y = f(x); //COMPILE ERROR
System.out.println("y = " + y.getValue();
System.out.println("y' = " + y.getPartialDerivative(1);
System.out.println("y'' = " + y.getPartialDerivative(2);
System.out.println("y''' = " + y.getPartialDerivative(3);
In line 5 a compile error occurs, of course: the function f(x) is called but never defined. What am I getting wrong?
Has anyone any experience with the differentiation/derivation with the apache commons math library or does anyone know another library/framework which can help me?
Thanks
In the paragraph below that example, the author describes ways to create DerivativeStructures. It isn't magic. In the example you quoted, someone was supposed to write the function f. Well, that wasn't very clear.
There are several ways a user can create an implementation of the UnivariateDifferentiableFunction interface. The first method is to simply write it directly using the appropriate methods from DerivativeStructure to compute addition, subtraction, sine, cosine... This is often quite straightforward and there is no need to remember the rules for differentiation: the user code only represents the function itself, the differentials will be computed automatically under the hood. The second method is to write a classical UnivariateFunction and to pass it to an existing implementation of the UnivariateFunctionDifferentiator interface to retrieve a differentiated version of the same function. The first method is more suited to small functions for which the user already controls all the underlying code. The second method is more suited to either large functions that would be cumbersome to write using the DerivativeStructure API, or functions for which the user does not have control over the full underlying code (for example functions that call external libraries).
Use the first idea.
// Function of 1 variable, keep track of 3 derivatives with respect to that variable,
// use 2.5 as the current value. Basically, the identity function.
DerivativeStructure x = new DerivativeStructure(1, 3, 0, 2.5);
// Basically, x --> x^2.
DerivativeStructure x2 = x.pow(2);
//Linear combination: y = 4x^2 + 2x
DerivativeStructure y = new DerivativeStructure(4.0, x2, 2.0, x);
System.out.println("y = " + y.getValue());
System.out.println("y' = " + y.getPartialDerivative(1));
System.out.println("y'' = " + y.getPartialDerivative(2));
System.out.println("y''' = " + y.getPartialDerivative(3));
The following thread from the Apache mailing list seems to illustrate the two possible ways of how the derivative of a UnivariateDifferentiableFunction can be defined. I am adding a new answer as I'm unable to comment on the previous one (insufficient reputation).
The sample function used is f(x) = x^2.
(1) Using a DerivativeStructure:
public DerivativeStructure value(DerivativeStructure t) {
    return t.multiply(t);
}
(2) By writing a classical UnivariateFunction:
public UnivariateRealFunction derivative() {
    return new UnivariateRealFunction() {
        public double value(double x) {
            // example derivative
            return 2. * x;
        }
    };
}
If I understand correctly, the advantage of the first case is that the derivative does not need to be obtained manually, as in the second case. In case the derivative is known, there should thus be no advantage in defining a DerivativeStructure, right? The application I have in mind is a Newton-Raphson solver, for which generally both the function value and its derivative need to be known.
The full example is provided on the aforementioned web site (authors are Thomas Neidhart and Franz Simons). Any further comments are most welcome!
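To make the Newton-Raphson use case concrete, here is a small self-contained sketch (my own, assuming commons-math3's NewtonRaphsonSolver, which takes a UnivariateDifferentiableFunction and derives f' automatically from the DerivativeStructure):

import org.apache.commons.math3.analysis.differentiation.DerivativeStructure;
import org.apache.commons.math3.analysis.differentiation.UnivariateDifferentiableFunction;
import org.apache.commons.math3.analysis.solvers.NewtonRaphsonSolver;

public class NewtonExample {
    public static void main(String[] args) {
        // f(x) = x^2 - 2, whose positive root is sqrt(2)
        UnivariateDifferentiableFunction f = new UnivariateDifferentiableFunction() {
            public double value(double x) {
                return x * x - 2;
            }
            public DerivativeStructure value(DerivativeStructure t) {
                // the solver extracts f'(x) = 2x from this structure on its own
                return t.multiply(t).subtract(2);
            }
        };
        NewtonRaphsonSolver solver = new NewtonRaphsonSolver();
        double root = solver.solve(100, f, 0.0, 5.0); // at most 100 evaluations in [0, 5]
        System.out.println("root = " + root);         // ~1.4142135623730951
    }
}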
There have been other questions and answers on this site suggesting that, to create an echo or delay effect, you need only add one audio sample to a stored audio sample from the past. As such, I have the following Java class:
public class DelayAMod extends AudioMod {
    private int delay = 500;
    private float decay = 0.1f;
    private boolean feedback = false;
    private int delaySamples;
    private short[] samples;
    private int rrPointer;

    @Override
    public void init() {
        this.setDelay(this.delay);
        this.samples = new short[44100];
        this.rrPointer = 0;
    }

    public void setDecay(final float decay) {
        this.decay = Math.max(0.0f, Math.min(decay, 0.99f));
    }

    public void setDelay(final int msDelay) {
        this.delay = msDelay;
        this.delaySamples = 44100 / (1000 / this.delay);
        System.out.println("Delay samples:" + this.delaySamples);
    }

    @Override
    public short process(short sample) {
        System.out.println("Got:" + sample);
        if (this.feedback) {
            // Delay should feed back into the loop:
            sample = (this.samples[this.rrPointer] = this.apply(sample));
        } else {
            // No feedback - store base data, then add echo:
            this.samples[this.rrPointer] = sample;
            sample = this.apply(sample);
        }
        ++this.rrPointer;
        if (this.rrPointer >= this.samples.length) {
            this.rrPointer = 0;
        }
        System.out.println("Returning:" + sample);
        return sample;
    }

    private short apply(short sample) {
        int loc = this.rrPointer - this.delaySamples;
        if (loc < 0) {
            loc += this.samples.length;
        }
        System.out.println("Found:" + this.samples[loc] + " at " + loc);
        System.out.println("Adding:" + (this.samples[loc] * this.decay));
        return (short) Math.max(Short.MIN_VALUE, Math.min(sample + (int) (this.samples[loc] * this.decay), (int) Short.MAX_VALUE));
    }
}
It accepts one 16-bit sample at a time from an input stream, finds an earlier sample, and adds them together accordingly. However, the output is just horrible noisy static, especially when the decay is raised to a level that would actually cause any appreciable result. Reducing the decay to 0.01 barely allows the original audio to come through, but there's certainly no echo at that point.
Basic troubleshooting facts:
The audio stream sounds fine if this processing is skipped.
The audio stream sounds fine if decay is 0 (nothing to add).
The stored samples are indeed stored and accessed in the proper order and the proper locations.
The stored samples are being decayed and added to the input samples properly.
All numbers from the call of process() to return sample are precisely what I would expect from this algorithm, and remain so even outside this class.
The problem seems to arise from simply adding signed shorts together, and the resulting waveform is an absolute catastrophe. I've seen this specific method implemented in a variety of places - C#, C++, even on microcontrollers - so why is it failing so hard here?
EDIT: It seems I've been going about this entirely wrong. I don't know if it's FFmpeg/avconv, or some other factor, but I am not working with a normal PCM signal here. Through graphing of the waveform, as well as a failed attempt at a tone generator and the resulting analysis, I have determined that this is some version of differential pulse-code modulation; pitch is determined by change from one sample to the next, and halving the intended "volume" multiplier on a pure sine wave actually lowers the pitch and leaves volume the same. (Messing with the volume multiplier on a non-sine sequence creates the same static as this echo algorithm.) As this and other DSP algorithms are intended to work on linear pulse-code modulation, I'm going to need some way to get the proper audio stream first.
It should definitely work unless you have significant clipping.
For example, this is a text file with two columns. The leftmost column is the 16-bit input. The second column is the sum of the first and a version delayed by 4001 samples. The sample rate is 22 kHz.
Each sample in the second column is the result of summing x[k] and x[k-4001] (e.g. y[5000] = x[5000] + x[999] = -13840 + 9181 = -4659). You can clearly hear the echo signal when playing the samples in the second column.
Try this signal with your code and see if you get identical results.
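For reference, a minimal offline version of that sum (my own sketch; the delay length and decay factor are arbitrary):

// y[k] = x[k] + decay * x[k - d], clamped to the 16-bit range
static short[] echo(short[] x, int d, float decay) {
    short[] y = new short[x.length];
    for (int k = 0; k < x.length; k++) {
        int sum = x[k];
        if (k >= d) {
            sum += (int) (x[k - d] * decay);
        }
        y[k] = (short) Math.max(Short.MIN_VALUE, Math.min(sum, Short.MAX_VALUE));
    }
    return y;
}

If this produces a clean echo on your data but the streaming class above produces static, the input is probably not plain linear PCM, which matches your edit.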
I'm trying to normalize an audio file of speech.
Specifically, where an audio file contains peaks in volume, I'm trying to level it out, so the quiet sections are louder, and the peaks are quieter.
I know very little about audio manipulation, beyond what I've learnt from working on this task. Also, my math is embarrassingly weak.
I've done some research, and the Xuggle site provides a sample which shows reducing the volume using the following code: (full version here)
@Override
public void onAudioSamples(IAudioSamplesEvent event)
{
    // get the raw audio bytes and adjust their values
    ShortBuffer buffer = event.getAudioSamples().getByteBuffer().asShortBuffer();
    for (int i = 0; i < buffer.limit(); ++i)
        buffer.put(i, (short) (buffer.get(i) * mVolume));
    super.onAudioSamples(event);
}
Here, they scale each sample in getAudioSamples() by a constant mVolume.
Building on this approach, I've attempted a normalisation that replaces each sample with a normalised value, taking the max/min in the file into account (see below for details). I have a simple filter to leave "silence" alone (i.e., anything below a threshold value).
I'm finding that the output file is very noisy (i.e., the quality is seriously degraded). I assume that the error is either in my normalisation algorithm or in the way I manipulate the bytes. However, I'm unsure where to go next.
Here's an abridged version of what I'm currently doing.
Step 1: Find peaks in file:
Reads the full audio file and finds the highest and lowest values of buffer.get() across all AudioSamples:
@Override
public void onAudioSamples(IAudioSamplesEvent event) {
    IAudioSamples audioSamples = event.getAudioSamples();
    ShortBuffer buffer = audioSamples.getByteBuffer().asShortBuffer();
    short min = Short.MAX_VALUE;
    short max = Short.MIN_VALUE;
    for (int i = 0; i < buffer.limit(); ++i) {
        short value = buffer.get(i);
        min = (short) Math.min(min, value);
        max = (short) Math.max(max, value);
    }
    // assignment of min/max omitted for brevity
    super.onAudioSamples(event);
}
Step 2: Normalize all values:
In a loop similar to step 1, replace each value in the buffer with its normalized value, calling:
buffer.put(i, normalize(buffer.get(i)));
public short normalize(short value) {
    if (isBackgroundNoise(value))
        return value;
    short rawMin = // min from step 1
    short rawMax = // max from step 1
    short targetRangeMin = 1000;
    short targetRangeMax = 8000;
    int abs = Math.abs(value);
    double a = (abs - rawMin) * (targetRangeMax - targetRangeMin);
    double b = (rawMax - rawMin);
    double result = targetRangeMin + (a / b);
    // Copy the sign of value to result.
    result = Math.copySign(result, value);
    return (short) result;
}
Questions:
Is this a valid approach for attempting to normalize an audio file?
Is my math in normalize() valid?
Why would this cause the file to become noisy, where a similar approach in the demo code doesn't?
I don't think the concept of a "minimum sample value" is very meaningful, since a sample value just represents the current "height" of the sound wave at a certain time instant. Its absolute value will vary between the peak value of the audio clip and zero. Thus, having a targetRangeMin seems wrong and will probably cause some distortion of the waveform.
I think a better approach might be to have some sort of weight function that decreases the sample value based on its size, i.e. bigger values are decreased by a larger percentage than smaller values. This would also introduce some distortion, but probably not a very noticeable one.
Edit: here is a sample implementation of such a method:
public short normalize(short value) {
    short rawMax = // max from step 1
    short targetMax = 8000;
    // This is the maximum volume reduction
    double maxReduce = 1 - targetMax / (double) rawMax;
    int abs = Math.abs(value);
    double factor = (maxReduce * abs / (double) rawMax);
    return (short) Math.round((1 - factor) * value);
}
For reference, a plot (omitted here) showed what your algorithm did to a sine curve with an amplitude of 10000. This explains why the audio quality becomes much worse after being normalized. A second plot (also omitted) showed the result after running with my suggested normalize method.
"normalization" of audio is the process of increasing the level of the audio such that the maximum is equal to some given value, usually the maximum possible value. Today, in another question, someone explained how to do this (see #1): audio volume normalization
However, you go on to say "Specifically, where an audio file contains peaks in volume, I'm trying to level it out, so the quiet sections are louder, and the peaks are quieter." This is called "compression" or "limiting" (not to be confused with the type of compression such as that used in encoding MP3s!). You can read more about that here: http://en.wikipedia.org/wiki/Dynamic_range_compression
A simple compressor is not particularly hard to implement, but you say your math "is embarrassingly weak." So you might want to find one that's already built. You might be able to find a compressor implemented in http://sox.sourceforge.net/ and convert it from C to Java. The only Java implementation of a compressor I know of whose source is available (and it's not very good) is in this book.
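To give a feel for what's involved, here is a minimal feed-forward compressor sketch (my own illustration, not the book's or SoX's code; the threshold, ratio and envelope coefficients are arbitrary starting points):

public class SimpleCompressor {
    private final double threshold;         // level above which compression kicks in, e.g. 8000
    private final double ratio;             // e.g. 4.0 means 4:1 compression above the threshold
    private final double attack = 0.01;     // how quickly the envelope rises
    private final double release = 0.0005;  // how slowly it falls
    private double envelope = 0.0;          // smoothed magnitude of the signal

    public SimpleCompressor(double threshold, double ratio) {
        this.threshold = threshold;
        this.ratio = ratio;
    }

    public short process(short in) {
        double abs = Math.abs(in);
        // envelope follower: fast attack, slow release, to avoid clicks
        envelope += (abs > envelope ? attack : release) * (abs - envelope);
        double gain = 1.0;
        if (envelope > threshold) {
            // scale down only the portion of the envelope above the threshold
            gain = (threshold + (envelope - threshold) / ratio) / envelope;
        }
        long out = Math.round(in * gain);
        return (short) Math.max(Short.MIN_VALUE, Math.min(out, Short.MAX_VALUE));
    }
}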
As an alternative solution to your problem, you might be able to normalize your file in segments of, say, half a second each, and then connect the gain values you use for each segment using linear interpolation. You can read about linear interpolation for audio here: http://blog.bjornroche.com/2010/10/linear-interpolation-for-audio-in-c-c.html
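A sketch of that segment-based idea (my own, with made-up names; gains[s] would be the gain that brings segment s to the target peak):

// Apply per-segment gains to 16-bit samples, linearly interpolating from each
// segment's gain to the next one so the loudness changes without audible jumps.
static void applySegmentGains(short[] samples, double[] gains, int segLen) {
    for (int i = 0; i < samples.length; i++) {
        int seg = i / segLen;
        double t = (i % segLen) / (double) segLen; // position inside the segment, in [0, 1)
        double g0 = gains[seg];
        double g1 = (seg + 1 < gains.length) ? gains[seg + 1] : g0;
        double gain = g0 + t * (g1 - g0);          // linear interpolation
        long v = Math.round(samples[i] * gain);
        samples[i] = (short) Math.max(Short.MIN_VALUE, Math.min(v, Short.MAX_VALUE));
    }
}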
I don't know if the source code is available for The Levelator, but that's something else you can try.