Scale factor of a function plotter - java

I have written my own function plotter in Java, and it works quite well.
All you have to do is iterate over the width (in pixels) of the panel and calculate the y-value for each column. Then plot the points as a polyline onto the screen and that's it.
But here is my problem: there is a scale factor between the number of pixels and the values I want to plot.
For example, say I'm at the 304th iteration (iterating over the width of the plot panel). Now I calculate the corresponding x-value for this pixel position (304) by the rule of three; this gives me 1.45436. Then I take the sine of this value, which is a transcendental number. Then I use the rule of three again to determine which y-pixel this value corresponds to. Doing so, I have to round, because a pixel coordinate is an integer, and that is where I lose precision. This loss may give me the following result:
This does not look very nice. If I play around with resizing the window, I sometimes get a smooth result.
How can I fix this problem? I've actually never seen such plots in any other function plotter.

If you do this in Java, you might consider composing your data points into a Path2D. That has floating-point coordinates, and the drawing engine will take care of smoothing things out. You might have to disable stroke control, though.
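For illustration, a rough, self-contained sketch of that idea could look like the following (the sin(x) function, the 0..2*pi range and the window size are just assumptions for the example); the point is that the y-coordinates stay floating point all the way into the Path2D, and only the renderer decides how they land on the pixel grid:

import java.awt.*;
import java.awt.geom.Path2D;
import javax.swing.*;

// Sketch: plot sin(x) into a Path2D with double precision and let Java2D
// do the sub-pixel rendering instead of rounding y to an int yourself.
public class SmoothPlotPanel extends JPanel {
    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        Graphics2D g2 = (Graphics2D) g;
        g2.setRenderingHint(RenderingHints.KEY_ANTIALIASING,
                            RenderingHints.VALUE_ANTIALIAS_ON);
        // Disable stroke normalization so coordinates are not snapped to pixel centers.
        g2.setRenderingHint(RenderingHints.KEY_STROKE_CONTROL,
                            RenderingHints.VALUE_STROKE_PURE);

        int w = getWidth(), h = getHeight();
        if (w < 2 || h < 2) return;
        Path2D.Double path = new Path2D.Double();
        for (int px = 0; px < w; px++) {
            double x = px * 2 * Math.PI / (w - 1);   // pixel column -> function domain
            double y = Math.sin(x);                  // function value in [-1, 1]
            double py = (1 - y) * (h - 1) / 2.0;     // domain -> pixel, kept as double
            if (px == 0) path.moveTo(px, py); else path.lineTo(px, py);
        }
        g2.draw(path);
    }

    public static void main(String[] args) {
        JFrame f = new JFrame("Path2D plot sketch");
        f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        f.add(new SmoothPlotPanel());
        f.setSize(640, 480);
        f.setVisible(true);
    }
}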

Related

Improvement Suggestion for Color Measurement Algorithm

I have been working on an image processing project for a while, which consists of a way to measure and classify some types of sugar on the production line by their color. Until now, my biggest concern was searching for and implementing the appropriate mathematical techniques to calculate the distance between two colors (a reference color and the color being analysed), and then turning this value into something more meaningful, such as an industry-standard measure.
From this, I'm trying to figure out how to reliably extract the average color value from an image, since the frame captured by a video camera may contain noise or dirt in the sugar (most likely almost-black dots).
Language: Java with OpenCV library.
Current solution: Before taking the average image value, I'm applying the fastNlMeansDenoisingColored function provided by OpenCV. It removes some white dots, at the cost of less defined detail. I couldn't remove the black dots with it (not shown in the following images).
From there, I'm using the org.opencv.core.Core.mean function to compute the mean value of the array elements independently for each channel, so that I have a scalar value to use in my calculations.
I tried several kinds of image thresholding filters to get rid of the black and white dots and then calculated the mean with a mask; that kind of works too. I also tried to find a weighted-average function that could return scalar values as well, but without success.
I don't know if these pre-processing techniques are robust enough for such an application; the mean values can vary easily. Am I on the right track? Would you suggest a better way to get a reliable value that represents the sugar's color?
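For reference, the masked-mean idea mentioned above might look roughly like this with OpenCV's Java bindings (this assumes the 3.x/4.x API; the input file name and the 40/230 brightness cutoffs are placeholder assumptions):

import org.opencv.core.*;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

// Sketch: build a mask that excludes very dark and very bright pixels
// (the dirt / specular dots) and average only the remaining pixels.
public class SugarColorMean {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        Mat bgr = Imgcodecs.imread("sugar_sample.png");   // hypothetical input frame
        Mat gray = new Mat();
        Imgproc.cvtColor(bgr, gray, Imgproc.COLOR_BGR2GRAY);

        // Keep only pixels whose brightness lies between the two cutoffs.
        Mat mask = new Mat();
        Core.inRange(gray, new Scalar(40), new Scalar(230), mask);

        // Per-channel mean computed only over the unmasked (valid) pixels.
        Scalar meanColor = Core.mean(bgr, mask);
        System.out.println("Mean BGR over valid pixels: " + meanColor);
    }
}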

How do you draw smooth curves given integer amplitude points?

I'm trying to draw a sine waveform (think Siri) that picks up and is immediately influenced by the user's voice. If I could accomplish exactly this in Android, with as much fluidity, on a device such as the S4, I would be extremely satisfied, so any helpful information is greatly appreciated.
Right now, I understand how to use the MediaRecorder to grab the max amplitude at a "tick" (reference), and I can store several of these integer values in an array as the MediaRecorder is recording and picking up audio, but I have no idea how to transform this array of integers into something like the Github project that I posted above. I'd also appreciate it if someone could suggest how large this array should be, since I want to discard old data as quickly as possible to keep the animation fast and use as little memory as possible.
My approach would be as follows: you could store, say, the last 5 values (your example shows about 5-6 lines at a time).
Then, for each of these 5 values:
Take the maximum amplitude value you can get as a reference, and use it to calculate a percentage for the current value. Use that percentage together with the Math.sin(x) function to shape the curve.
Example:
MAX_AMPL: 1200
CURR_VALUE: 240 --> 20% of MAX_AMPL
Use the Android drawing primitives (drawing on Android) to draw f(x) = (CURR_VALUE / MAX_AMPL) * Math.sin(x).
If you draw the function between 0 and 2*Pi, I think you will get the same number of waves as in your example.
The more recent the value (its position in the value ArrayList), the wider the line, for a vanishing effect.
Last, draw your curves from the oldest to the newest.
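A minimal sketch of that idea as a custom Android View might look like the following (MAX_AMPL, the 5-value history and the stroke widths are assumptions taken from the example above):

import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.Path;
import android.view.View;

// Sketch: keep the last few amplitude readings and draw one scaled sine curve
// per reading, oldest first, with newer curves drawn wider.
public class WaveView extends View {
    private static final int MAX_AMPL = 1200;
    private final int[] history = new int[5];   // oldest at index 0, newest last

    public WaveView(Context context) {
        super(context);
    }

    public void pushAmplitude(int amplitude) {
        System.arraycopy(history, 1, history, 0, history.length - 1);
        history[history.length - 1] = amplitude;
        invalidate();                            // request a redraw
    }

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        int w = getWidth(), h = getHeight();
        if (w < 2 || h < 2) return;
        Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
        paint.setStyle(Paint.Style.STROKE);

        for (int i = 0; i < history.length; i++) {       // oldest to newest
            float scale = history[i] / (float) MAX_AMPL; // percentage of max amplitude
            paint.setStrokeWidth(1 + i);                 // newer curves drawn wider
            Path path = new Path();
            for (int px = 0; px < w; px++) {
                double x = px * 2 * Math.PI / (w - 1);   // map pixel column to [0, 2*pi]
                float y = (float) (h / 2.0 - scale * Math.sin(x) * h / 2.0);
                if (px == 0) path.moveTo(px, y); else path.lineTo(px, y);
            }
            canvas.drawPath(path, paint);
        }
    }
}

You would call pushAmplitude() from wherever you poll MediaRecorder's max amplitude on each tick.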

Need Conceptual Help Rendering a Heat Map

I need to create a heatmap for android google maps. I have geolocation and points that have negative and positive weight attributed to them that I would like to visually represent. Unlike the majority of heatmaps, I want these positive and negative weights to destructively interfere; that is, when two points are close to each other and one is positive and the other is negative, the overlap of them destructively interferes, effectively not rendering areas that cancel out completely.
I plan on using the Android Google Maps TileOverlay/TileProvider classes, whose job is to create/render tiles based on a given location and zoom. (I don't have an issue with this part.)
How should I go about rendering these tiles? I plan on using Java's Graphics class, but the best that I can think of is going through each pixel, calculating what color it should be based on the surrounding data points, and rendering that pixel. This seems very inefficient, however, and I was looking for suggestions on a better approach.
Edit: I've considered everything from using a non-android Google Map inside of a WebView to using a TileOverlay to using a GroundOverlay. What I am now considering doing is having a large 2 dimensional array of "squares." Each square would have a long, lat, and total +/- weights. When a new data point is added, instead of rendering it exactly where it is, it will be added to the "square" that it is in. The weight of this data point will be added to the square and then I will use the GoogleMap Polygon object to render the square on the map. The ratio of +points to -points will determine the color that is rendered, with a ratio closer to 1:1 being clear, >1 being blue (cold point), and <1 being red (hot point).
Edit: a.k.a. clustering the data into small regional groups
I suggest trying
going through each pixel, calculating what color it should be based on the surrounding data points, and rendering that pixel.
Even if it is slow, it will work. There are not too many tiles on the screen, there are not too many pixels in each tile, and all of this is done on a background thread.
All of this is still followed by translating the Bitmap into a byte[]. The byte[] is a representation of a PNG or JPG file, so it's not a simple pixel mapping from the Bitmap. That last operation takes some time too, and may well require more processing power than your whole algorithm.
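To make that concrete, a TileProvider along these lines might look roughly like the following sketch; colorForPixel() is a placeholder for whatever combination of surrounding data-point weights you settle on:

import android.graphics.Bitmap;
import android.graphics.Color;
import com.google.android.gms.maps.model.Tile;
import com.google.android.gms.maps.model.TileProvider;
import java.io.ByteArrayOutputStream;

// Sketch: render each 256x256 tile pixel by pixel into a Bitmap, then compress
// it to PNG bytes for the Tile object.
public class HeatTileProvider implements TileProvider {
    private static final int TILE_SIZE = 256;

    @Override
    public Tile getTile(int x, int y, int zoom) {
        Bitmap bitmap = Bitmap.createBitmap(TILE_SIZE, TILE_SIZE, Bitmap.Config.ARGB_8888);
        for (int px = 0; px < TILE_SIZE; px++) {
            for (int py = 0; py < TILE_SIZE; py++) {
                bitmap.setPixel(px, py, colorForPixel(x, y, zoom, px, py));
            }
        }
        // This encode step is the Bitmap-to-PNG translation mentioned above; it is not free.
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        bitmap.compress(Bitmap.CompressFormat.PNG, 100, out);
        return new Tile(TILE_SIZE, TILE_SIZE, out.toByteArray());
    }

    // Placeholder: combine the positive and negative weights of nearby points here.
    private int colorForPixel(int tileX, int tileY, int zoom, int px, int py) {
        return Color.TRANSPARENT;
    }
}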
Edit (moved from comment):
What you describe in the edit sounds like a simple clustering on LatLng. I can't say it's a better or worse idea, but it's something worth a try.
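If you want to try the clustering route, a bare-bones sketch of the grid-bucketing part might look like this (the 0.01-degree square size and the alpha scaling are arbitrary assumptions; the color choice follows the blue-cold/red-hot scheme from the question's edit):

import java.util.HashMap;
import java.util.Map;

// Sketch: bucket points into fixed-size lat/lng squares, accumulate +/- weights,
// and derive a color from the balance between them.
public class SquareClusterer {
    static final double SQUARE_SIZE_DEG = 0.01;
    // key = "row:col" of the square, value = [positiveWeight, negativeWeight]
    private final Map<String, double[]> squares = new HashMap<>();

    void addPoint(double lat, double lng, double weight) {
        long row = (long) Math.floor(lat / SQUARE_SIZE_DEG);
        long col = (long) Math.floor(lng / SQUARE_SIZE_DEG);
        double[] w = squares.computeIfAbsent(row + ":" + col, k -> new double[2]);
        if (weight >= 0) w[0] += weight; else w[1] += -weight;
    }

    int argbForSquare(String key) {
        double[] w = squares.get(key);
        double total = w[0] + w[1];
        if (total == 0) return 0x00000000;                 // nothing here: fully transparent
        double balance = (w[0] - w[1]) / total;            // -1 .. +1; near 0 means ~1:1 ratio
        int alpha = (int) (Math.abs(balance) * 160);       // regions that cancel out stay clear
        return balance > 0 ? (alpha << 24) | 0x000000FF    // more positive: blue (cold), per the edit above
                           : (alpha << 24) | 0x00FF0000;   // more negative: red (hot)
    }
}

Each non-transparent square would then be drawn as a Polygon (or into a tile) with the returned color.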

What is the meaning of jitter in the Visualize tab of Weka

In Weka I load an ARFF file. I can view the relationships between attributes using the Visualize tab.
However, I can't understand the meaning of the jitter slider. What is its purpose?
You can find the answer in the mailing list archives:
The jitter function in the Visualize panel just adds artificial random
noise to the coordinates of the plotted points in order to spread the
data out a bit (so that you can see points that might have been
obscured by others).
I don't know Weka, but generally jitter is a term for the deviation of a periodic signal from its reference timing. I'm guessing the slider allows you to set some range or threshold below which data points are treated as being regular, or to modify the output to introduce some variation. The Wikipedia entry can give you some background.
Update: according to this PDF, the jitter slider is for this purpose:
"Jitter" option to deal with nominal attributes (and to detect "hidden" data points)
Based on the accompanying slide it looks like it introduces some variation in the visualisation, perhaps to show when two data points overlap.
Update 2: This Google Books extract (from Data Mining by Ian H. Witten and Eibe Frank) seems to confirm my guess:
[jitter] is a random displacement applied to X and Y values to separate points that lie on top of one another. Without jitter, 1000 instances at the same data point would look just the same as 1 instance
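To make the idea concrete, here is a tiny illustration (not Weka's actual code) of jitter as a random displacement applied to plotted coordinates; the 0.2 jitter amount is arbitrary:

import java.util.Random;

// Sketch: add a small random offset to each plotted coordinate so that
// coincident points become visible as a cloud instead of a single dot.
public class Jitter {
    private static final Random RANDOM = new Random();

    static double jitter(double value, double jitterAmount) {
        // Uniform displacement in [-jitterAmount/2, +jitterAmount/2]
        return value + (RANDOM.nextDouble() - 0.5) * jitterAmount;
    }

    public static void main(String[] args) {
        // 1000 instances at the same data point now spread into a visible cloud.
        for (int i = 0; i < 5; i++) {
            System.out.printf("(%.3f, %.3f)%n", jitter(4.0, 0.2), jitter(7.0, 0.2));
        }
    }
}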
I don't know the products you mention, but jittering generally means randomising the sample positions. For example, in ray tracing you would normally render a ray through each pixel on the screen. Jittering adds a random offset to each ray to reduce issues caused by regular aliasing.

How to plot large data vectors accurately at all zoom levels in real time?

I have large data sets (10 Hz data, so 864k points per 24 hours) which I need to plot in real time. The idea is that the user can zoom and pan into highly detailed scatter plots.
The data is not very continuous and there are spikes. Since the data set is so large, I can't plot every point each time the plot refreshes.
But I also can't just plot every nth point or else I will miss major features like large but short spikes.
Matlab does it right. You can give it an 864k vector full of zeros, set any one point to 1, and it will plot correctly in real time with zooms and pans.
How does Matlab do it?
My target system is Java, so I would be generating views of this plot in Swing/Java2D.
You should try the file from MATLAB Central:
https://mathworks.com/matlabcentral/fileexchange/15850-dsplot-downsampled-plot
From the author:
This version of "plot" will allow you to visualize data that has a very large number of elements. Plotting large data sets makes your graphics sluggish, but most of the time you don't need all of the information displayed in the plot. Your screen only has so many pixels, and your eyes won't be able to detect any information not captured on the screen.
This function will downsample the data and plot only a subset of the data, thus improving the memory requirement. When the plot is zoomed in, more information gets displayed. Some work is done to make sure that outliers are captured.
Syntax:
dsplot(x, y)
dsplot(y)
dsplot(x, y, numpoints)
Example:
x = linspace(0, 2*pi, 1000000);
y1 = sin(x) + 0.02*cos(200*x) + 0.001*sin(2000*x) + 0.0001*cos(20000*x);
dsplot(x, y1);
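A minimal Java sketch of that downsampling idea, keeping the minimum and maximum sample per pixel column so that short spikes still show up, could look like this (the index-to-column mapping assumes the visible range is spread evenly across the widget width):

import java.util.ArrayList;
import java.util.List;

// Sketch: reduce an arbitrarily large sample range to two y-values per pixel
// column (min and max), so plotting cost depends on the widget width, not on
// the number of samples, and outliers are never dropped.
public final class Downsampler {
    // Returns one {min, max} pair per pixel column for samples firstIndex..lastIndex.
    static List<double[]> downsample(double[] y, int firstIndex, int lastIndex, int widthPx) {
        List<double[]> columns = new ArrayList<>(widthPx);
        int span = lastIndex - firstIndex + 1;
        for (int px = 0; px < widthPx; px++) {
            int start = firstIndex + (int) ((long) px * span / widthPx);
            int end = firstIndex + (int) ((long) (px + 1) * span / widthPx);
            double min = Double.POSITIVE_INFINITY, max = Double.NEGATIVE_INFINITY;
            for (int i = start; i < Math.max(end, start + 1) && i <= lastIndex; i++) {
                min = Math.min(min, y[i]);
                max = Math.max(max, y[i]);
            }
            columns.add(new double[] { min, max });
        }
        return columns;
    }
}

You would rerun this whenever the zoom or pan changes and draw the resulting min/max pairs as a polyline or as vertical bars.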
I don't know how Matlab does it, but I'd start with Quadtrees.
Dump all your data points into the quadtree, then to render at a given zoom level, you walk down the quadtree (starting with the areas that overlap what you're viewing) until you reach areas which are comparable to the size of a pixel. Stick a pixel in the middle of that area.
Added: Doing your drawing with OpenGL/JOGL will also help you draw faster, especially if you can predict panning and build up the points to show in a display list or something, so that you don't have to do any CPU work for the new frames.
10Hz data means that you only have to plot 10 frames per second. It should be easy, since many games achieve >100 fps with much more complex graphics.
If you plot 10 pixels per second (one for each possible data point), you can display a minute's worth of data using a 600-pixel-wide widget. If you save the index of the 600th-to-last sample, it should be easy to draw only the latest data.
If you don't have a new data-point every 10th of a second you have to come up with a way to insert an interpolated data-point. Three choices come to mind:
Repeat the last data-point.
Insert an "empty" data-point. This will cause gaps in the graph.
Don't update the graph until the next data-point arrives. Then insert all the pixels you didn't draw at once, with linear interpolation between the data-points.
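For the third choice, the interpolation step itself is tiny; here is a sketch (the helper name and parameters are illustrative only):

// Sketch: when a gap of n missing ticks ends, fill the skipped positions by
// linear interpolation between the last drawn value and the newly arrived one.
public final class GapFiller {
    // Returns the values for the 'missingTicks' skipped positions plus the new one.
    static double[] interpolate(double lastValue, double newValue, int missingTicks) {
        double[] filled = new double[missingTicks + 1];
        for (int i = 1; i <= missingTicks + 1; i++) {
            double t = i / (double) (missingTicks + 1);   // fraction of the way to the new point
            filled[i - 1] = lastValue + t * (newValue - lastValue);
        }
        return filled;
    }
}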
To make the animation smooth use double-buffering. If your target language supports a canvas widget it probably supports double-buffering.
When zooming you have the same three choices as above, as the zoomed data-points are not continuous even if the original data-points were.
This might help for implementing it in Java.
