I'm trying to draw a sine waveform (think Siri) that picks up and is immediately influenced by a user's voice. If I could accomplish exactly this in Android, with that much fluidity, on a device such as the S4, I would be extremely satisfied, so any helpful information is greatly appreciated.
Right now, I understand how to use the MediaRecorder to grab the max amplitude at a "tick" (reference), and I can store several of these integer values in an array while the MediaRecorder is recording and picking up audio, but I have no idea how to transform this array of integers into something like the Github project that I posted above. I'd also appreciate a suggestion for how large this array should be, since I want to drop old data as quickly as possible to keep the animation fast and use as little memory as possible.
My approach would be as follows: you could store, say, the last 5 values (your example shows about 5-6 lines at a time).
Then, for each value in these 5 values:
Take the maximum amplitude value you can get for reference, and use it to calculate a percentage for the current value. Use that percentage along with the Math.sin function to shape the curved line:
example:
MAX_AMPL: 1200
CURR_VALUE: 240 --> 20% of MAX_AMPL
Use Android drawing primitives to draw f(x) = (CURR_VALUE / MAX_AMPL) * Math.sin(x).
If you draw the function between 0 and 2*Pi, I think you will get the same number of waves as in your example.
The more recent the value (by its position in the ArrayList), the wider the line, for a vanishing effect.
Last, draw your graphs from the oldest to the newest.
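A minimal sketch of the scaling step above in plain Java (all class and method names are mine; 32767 is assumed as the 16-bit ceiling of getMaxAmplitude(); the actual Canvas drawing is omitted):

```java
import java.util.ArrayList;
import java.util.List;

public class WaveformShaper {
    static final int MAX_AMPLITUDE = 32767; // assumed 16-bit amplitude ceiling

    // y-values of (amplitude / MAX_AMPLITUDE) * sin(x), sampled over [0, 2*pi].
    static double[] curveFor(int amplitude, int samples) {
        double scale = (double) amplitude / MAX_AMPLITUDE;
        double[] ys = new double[samples];
        for (int i = 0; i < samples; i++) {
            double x = 2 * Math.PI * i / (samples - 1);
            ys[i] = scale * Math.sin(x);
        }
        return ys;
    }

    // Keep only the newest 5 readings, oldest first, matching the 5-6 visible lines.
    static List<Integer> push(List<Integer> recent, int value) {
        recent.add(value);
        if (recent.size() > 5) recent.remove(0);
        return recent;
    }
}
```

On each tick you would push the new reading, then draw one scaled curve per stored value, oldest first.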
I'm trying to simply display a WAV file in its frequency domain using this FFT. I try to accomplish this via a Short-Time Fourier Transform with a set window size, etc.
The output is drawn with pixels on a JFrame (the higher the amplitude, the brighter the dot), which is really basic, I know.
The signal I try to plot is a chirp from 1000 Hz to 10000 Hz over 10 s with a 44100 Hz sample rate. The output should look like the following:
Chirp-Signal
But instead my output looks like this:
Chirp-Signal with artifacts
which is kind of right... but as you can see, there is some sort of noise pattern, and it doesn't seem to be random.
One thing I tried was simply reducing the gain of the pixels, but that solves it poorly:
Chirp-Signal with reduced gain
I used a Hann window from here.
Maybe the way I try to do it is faulty. So please let me briefly explain:
I have a WAV file; I remove the header to get just the data part. A 10 s signal at 44100 Hz should give me 441000 samples, which it does.
I read the array in chunks of 2048 samples, with a window step of 256 samples.
Each chunk is run through the Hann window and then the FFT; the result is added to a pixel array which then gets drawn.
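For illustration, the chunking loop described above might look like this in Java (all names are mine; a naive DFT stands in for the actual FFT library, just to keep the sketch self-contained):

```java
public class Stft {
    // Hann window of length n.
    static double[] hann(int n) {
        double[] w = new double[n];
        for (int i = 0; i < n; i++)
            w[i] = 0.5 * (1 - Math.cos(2 * Math.PI * i / (n - 1)));
        return w;
    }

    // Naive DFT magnitude spectrum (O(n^2); a real FFT would replace this).
    static double[] magnitudes(double[] frame) {
        int n = frame.length;
        double[] mag = new double[n / 2];
        for (int k = 0; k < n / 2; k++) {
            double re = 0, im = 0;
            for (int t = 0; t < n; t++) {
                double a = -2 * Math.PI * k * t / n;
                re += frame[t] * Math.cos(a);
                im += frame[t] * Math.sin(a);
            }
            mag[k] = Math.hypot(re, im);
        }
        return mag;
    }

    // Slide a Hann-windowed frame of `size` samples across the signal in steps of `hop`.
    static double[][] stft(double[] samples, int size, int hop) {
        double[] win = hann(size);
        int frames = (samples.length - size) / hop + 1;
        double[][] out = new double[frames][];
        for (int f = 0; f < frames; f++) {
            double[] frame = new double[size];
            for (int i = 0; i < size; i++)
                frame[i] = samples[f * hop + i] * win[i];
            out[f] = magnitudes(frame);
        }
        return out;
    }
}
```

In the question's setup the call would be stft(samples, 2048, 256), and each row of the result becomes one column of pixels.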
Is there something obvious I am missing that can be seen in the screenshots?
Is the FFT algorithm I used not "good enough"?
Please tell me if you need further information or if my explanation isn't good enough. Thank you in advance.
Your contrast is too high, or the color map range is too wide. Try rescaling: raise the lowest FFT magnitudes that map to black, and reduce the gain on the FFT output so that less of it maps to white. That will get rid of the numerical and quantization noise (rounding, etc.) and bring some of the levels that were blown out to white back into greyscale range in your plot.
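A sketch of that remapping in Java (the dB floor and ceiling are tuning assumptions, not prescribed values):

```java
public class GrayMap {
    // Map an FFT magnitude to a 0..255 gray level. Everything at or below
    // floorDb becomes black, everything at or above ceilDb becomes white.
    static int toGray(double magnitude, double floorDb, double ceilDb) {
        double db = 20 * Math.log10(Math.max(magnitude, 1e-12)); // avoid log(0)
        double t = (db - floorDb) / (ceilDb - floorDb);
        t = Math.max(0, Math.min(1, t)); // clamp: quiet noise stays black
        return (int) Math.round(255 * t);
    }
}
```

Widening floorDb hides the quantization-noise floor; lowering ceilDb recovers detail in the blown-out white regions.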
I have written my own function plotter in Java, which works quite well.
All you have to do is iterate over the width (in pixels) of the panel and calculate the y-value, then plot it with a polyline onto the screen, and that's it.
But here comes my problem: There is a scale factor between the number of pixels and the value which I want to plot.
For example, say I'm at the 304th iteration (iterating over the width of the plot panel). I calculate the corresponding x-value for this pixel position (304) by the rule of three, which gives me 1.45436. Then I calculate the sine of this value, which is a transcendental number. Then I use the rule of three again to determine which y-pixel this value corresponds to. Here I have to round, because the pixel coordinate is an integer, and that is my data loss. This loss may give me the following result:
This does not look nice. If I play around with resizing the window, I sometimes get a smooth result.
How can I fix this problem? I've actually never seen such plots in any other function plotter.
If you do this in Java, you might consider composing your data points into a Path2D. That has floating-point coordinates, and the drawing engine will take care of smoothing things down. You might have to disable stroke control, though.
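A rough sketch, assuming a Swing panel of widthPx by heightPx and the sine example from the question (names are mine):

```java
import java.awt.geom.Path2D;

public class SmoothPlot {
    // Build the polyline with double coordinates; rounding to pixels happens
    // only at render time, inside the antialiased drawing engine.
    static Path2D.Double sinePath(int widthPx, int heightPx) {
        Path2D.Double path = new Path2D.Double();
        for (int px = 0; px < widthPx; px++) {
            double x = 2 * Math.PI * px / (widthPx - 1);    // rule of three, kept as double
            double y = heightPx / 2.0 * (1 - Math.sin(x));  // no int rounding here
            if (px == 0) path.moveTo(px, y); else path.lineTo(px, y);
        }
        return path;
    }
}
// In paintComponent, disable stroke control and enable antialiasing before drawing:
//   g2.setRenderingHint(RenderingHints.KEY_STROKE_CONTROL, RenderingHints.VALUE_STROKE_PURE);
//   g2.setRenderingHint(RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON);
//   g2.draw(path);
```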
I need to create a heatmap for android google maps. I have geolocation and points that have negative and positive weight attributed to them that I would like to visually represent. Unlike the majority of heatmaps, I want these positive and negative weights to destructively interfere; that is, when two points are close to each other and one is positive and the other is negative, the overlap of them destructively interferes, effectively not rendering areas that cancel out completely.
I plan on using the Android Google Maps TileOverlay/TileProvider classes, whose job is to create/render tiles based on a given location and zoom. (I don't have an issue with this part.)
How should I go about rendering these Tiles? I plan on using java's Graphics class but the best that I can think of is going through each pixel, calculating what color it should be based on the surrounding data points, and rendering that pixel. This seems very inefficient, however, and I was looking for suggestions on a better approach.
Edit: I've considered everything from using a non-Android Google Map inside a WebView, to a TileOverlay, to a GroundOverlay. What I am now considering is keeping a large two-dimensional array of "squares." Each square would have a longitude, latitude, and a running total of +/- weights. When a new data point is added, instead of rendering it exactly where it is, its weight is added to the square it falls in, and I then use the GoogleMap Polygon object to render the square on the map. The ratio of + points to - points determines the rendered color, with a ratio close to 1:1 being clear, >1 being blue (a cold point), and <1 being red (a hot point).
Edit: a.k.a. clustering the data into small regional groups
I suggest trying
going through each pixel, calculating what color it should be based on the surrounding data points, and rendering that pixel.
Even if it is slow, it will work. There are not too many tiles on the screen, there are not too many pixels in each tile, and all of this is done on a background thread.
All this is still followed by translating the Bitmap into a byte[]. The byte[] is a representation of a PNG or JPG file, so it's not a simple pixel dump of the Bitmap. This last operation takes some time too and may well require more processing power than your whole algorithm.
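For illustration, a desktop-Java sketch of the brute-force approach, with BufferedImage standing in for Android's Bitmap. The WeightedPoint type, the Gaussian falloff, and the blue-cold/red-hot convention follow the question's edit but are otherwise my assumptions:

```java
import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import javax.imageio.ImageIO;

public class TileRenderer {
    static class WeightedPoint {
        double x, y, weight;
        WeightedPoint(double x, double y, double w) { this.x = x; this.y = y; this.weight = w; }
    }

    // Sum signed influences so positive and negative points cancel out.
    static double influence(WeightedPoint[] pts, double px, double py, double radius) {
        double sum = 0;
        for (WeightedPoint p : pts) {
            double d2 = (p.x - px) * (p.x - px) + (p.y - py) * (p.y - py);
            sum += p.weight * Math.exp(-d2 / (2 * radius * radius)); // Gaussian falloff
        }
        return sum;
    }

    // Render one tile pixel by pixel, then encode to the byte[] mentioned above.
    static byte[] renderTile(WeightedPoint[] pts, int size, double radius) {
        BufferedImage img = new BufferedImage(size, size, BufferedImage.TYPE_INT_ARGB);
        for (int y = 0; y < size; y++)
            for (int x = 0; x < size; x++) {
                double v = influence(pts, x, y, radius);
                int alpha = (int) Math.min(255, Math.abs(v) * 255);
                if (alpha < 8) alpha = 0; // near-cancelled areas stay transparent
                int rgb = v > 0 ? 0x0000FF : 0xFF0000; // +heavy = blue (cold), -heavy = red (hot)
                img.setRGB(x, y, (alpha << 24) | rgb);
            }
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try {
            ImageIO.write(img, "png", out); // the encoding step discussed above
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        return out.toByteArray();
    }
}
```

On Android the same loop would fill a Bitmap and run Bitmap.compress instead of ImageIO.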
Edit (moved from comment):
What you describe in the edit sounds like a simple clustering on LatLng. I can't say it's a better or worse idea, but it's something worth a try.
I'm developing software that compares images, and I need to do it fast! Currently I compare them using plain C, but it's too slow.
I want to compare them using shaders and a couple of GL surfaces (textures), using C and not Java (though this doesn't change the situation much), and get back a list of changed parts, but I really don't know where to start.
Basically, I want to use something like SIMD NEON instructions to compare pixel colors to check for changes (well, I only need to check the first color component, e.g. only red; these are photos, so it's unrealistic that it doesn't change), but instead of NEON instructions I want to use pixel shaders to do the comparison and get the list of changed parts back.
Also, if possible, I want to run the comparison in parallel on the same image by splitting it into blocks :)
Can someone give a hint?
Note: I know that I can't output a list of things directly, but using a third texture as output would work for me (if I can put two ushorts on the texture indicating x and y I'm fine, with a uint at the end of the texture reporting the number of changed pixels).
OpenGL ES 1.1 doesn't have shaders, and the best route I can think of for what you want to do ends with a 50% reduction in colour precision. Issues are:
Without extensions there's additive blending, but not subtractive. No problem: just upload the second of your textures with all colour values inverted.
OpenGL clamps output colours to the range [0, 1], and without extensions you're limited to one byte per channel. So you'd need to upload textures with 7-bit colour channels to ensure you got correct results within the 8 bits coming back.
Shaders would allow a slightly circuitous route around that, because you can add or subtract or do whatever you want, and can split up the results. If you're sending two three-channel 24-bit images in to get a four-channel 32-bit image out, there's obviously enough space to fit 9 bits per source channel, even though you're going to have to divide the data oddly and reconstruct it later.
In practice you're going to pay quite a lot for uploading and downloading images from the GPU, so NEON might be a better choice not just to avoid packing peculiarities. Assuming the Android kit supplies the same compiler intrinsics as the iPhone kit (likely, since they'll both include GCC), this page has a bit of an introduction showing how to convert an image to greyscale. So it's not exactly what you're looking for, but it's image processing in C using NEON so it should be a good start.
In both cases you're likely to end up with an image of the differences rather than a simple count and list. A count is a reduction over the whole image, whichever way you think about it, so it isn't really something you'd do in GL or via NEON; you'd need to inspect the final image to work it out.
I have large data sets (10 Hz data, so 864k points per 24 hours) which I need to plot in real time. The idea is that the user can zoom and pan into highly detailed scatter plots.
The data is not very continuous and there are spikes. Since the data set is so large, I can't plot every point each time the plot refreshes.
But I also can't just plot every nth point or else I will miss major features like large but short spikes.
Matlab does it right. You can give it a 864k vector full of zeros and just set any one point to 1 and it will plot correctly in real-time with zooms and pans.
How does Matlab do it?
My target system is Java, so I would be generating views of this plot in Swing/Java2D.
You should try the file from MATLAB Central:
https://mathworks.com/matlabcentral/fileexchange/15850-dsplot-downsampled-plot
From the author:
This version of "plot" will allow you to visualize data that has very large number of elements. Plotting large data set makes your graphics sluggish, but most times you don't need all of the information displayed in the plot. Your screen only has so many pixels, and your eyes won't be able to detect any information not captured on the screen.
This function will downsample the data and plot only a subset of the data, thus improving the memory requirement. When the plot is zoomed in, more information gets displayed. Some work is done to make sure that outliers are captured.
Syntax:
dsplot(x, y)
dsplot(y)
dsplot(x, y, numpoints)
Example:
x =linspace(0, 2*pi, 1000000);
y1=sin(x)+.02*cos(200*x)+0.001*sin(2000*x)+0.0001*cos(20000*x);
dsplot(x,y1);
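The core idea behind this kind of downsampling, keeping per-bucket extremes so short spikes survive, can be sketched in Java (names are mine; this assumes the length divides evenly into the bucket count):

```java
public class DownSampler {
    // Keep the min and max of each bucket, so a one-sample spike is never dropped,
    // unlike plotting every nth point.
    static double[] decimate(double[] y, int buckets) {
        double[] out = new double[buckets * 2];
        int per = y.length / buckets; // assumes y.length is divisible by buckets
        for (int b = 0; b < buckets; b++) {
            double lo = Double.POSITIVE_INFINITY, hi = Double.NEGATIVE_INFINITY;
            for (int i = b * per; i < (b + 1) * per; i++) {
                lo = Math.min(lo, y[i]);
                hi = Math.max(hi, y[i]);
            }
            out[2 * b] = lo;     // draw a vertical segment from lo to hi per column
            out[2 * b + 1] = hi;
        }
        return out;
    }
}
```

This matches the 864k-zeros-plus-one-spike test from the question: the single 1 always lands in some bucket's max.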
I don't know how Matlab does it, but I'd start with Quadtrees.
Dump all your data points into the quadtree, then to render at a given zoom level, you walk down the quadtree (starting with the areas that overlap what you're viewing) until you reach areas which are comparable to the size of a pixel. Stick a pixel in the middle of that area.
added: Doing your drawing with OpenGL/JOGL will also help you get faster drawing. Especially if you can predict panning, and build up the points to show in a display list or something, so that you don't have to do any CPU work for the new frames.
10 Hz data means that you only have to plot 10 frames per second. That should be easy, since many games achieve >100 fps with much more complex graphics.
If you plot 10 pixels per second for each possible data point, you can display a minute's worth of data in a 600-pixel-wide widget. If you save the index of the 600th-to-last sample, it is easy to draw only the latest data.
If you don't have a new data-point every 10th of a second you have to come up with a way to insert an interpolated data-point. Three choices come to mind:
Repeat the last data-point.
Insert an "empty" data-point. This will cause gaps in the graph.
Don't update the graph until the next data-point arrives. Then insert all the pixels you didn't draw at once, with linear interpolation between the data-points.
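The third option's linear interpolation might look like this (a hypothetical helper, names are mine):

```java
public class GapFiller {
    // Values for the `missed` ticks strictly between the previous and next readings.
    static double[] interpolate(double prev, double next, int missed) {
        double[] filled = new double[missed];
        for (int i = 0; i < missed; i++)
            filled[i] = prev + (next - prev) * (i + 1) / (missed + 1.0);
        return filled;
    }
}
```

When the late data point finally arrives, the gap is filled with these values and all the skipped pixels are drawn at once.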
To make the animation smooth use double-buffering. If your target language supports a canvas widget it probably supports double-buffering.
When zooming you have the same three choices as above, as the zoomed data-points are not continuous even if the original data-points were.
This might help for implementing it in Java.