Here's what we have today:
* NxM grid of points in 3D
* we draw these using legacy OpenGL calls.
* we have rubber-band selection and single-point selection, using the selection buffer.
Today we can hold CTRL and add to the selection piece by piece until we have what we want, but that gets very tedious when you have a 200x500 grid and want to select a circle, a star, or anything that is not a rectangle.
I've tried to find information on how to create a lasso selection. Some people use a unique color for each object and then use glReadPixels to see what was selected; we can't use this because all of our points need to be the same color.
There's a pretty good illustrated tutorial on color picking at Lighthouse3D.com:
http://www.lighthouse3d.com/opengl/picking/index.php?color1
It's quite fast, and I have implemented this technique in apps with millions of polygons. It's way faster than bounding-box checks since you only test what's under the cursor (or lasso region). It's also compatible with OpenGL ES, as the feedback-buffer style of selection is on its way out.
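A minimal sketch of the off-screen colour-picking pass, assuming JOGL's GL2 bindings; pickPoints, gridPoints, and the lasso rectangle parameters (x, y, w, h in window coordinates) are illustrative names, not part of any existing API:

import com.jogamp.opengl.GL2;
import com.jogamp.common.nio.Buffers;
import java.nio.ByteBuffer;
import java.util.HashSet;
import java.util.Set;

Set<Integer> pickPoints(GL2 gl, float[][] gridPoints, int x, int y, int w, int h) {
    // Draw each point with a unique colour into the back buffer (never swapped),
    // so the on-screen colour of the points is not affected.
    gl.glDisable(GL2.GL_LIGHTING);    // colours must reach the framebuffer untouched
    gl.glDisable(GL2.GL_DITHER);
    gl.glClearColor(0f, 0f, 0f, 1f);  // background decodes to 0, which we reserve
    gl.glClear(GL2.GL_COLOR_BUFFER_BIT | GL2.GL_DEPTH_BUFFER_BIT);

    gl.glBegin(GL2.GL_POINTS);
    for (int id = 0; id < gridPoints.length; id++) {
        int code = id + 1;            // 0 is reserved for the background
        gl.glColor3ub((byte) (code & 0xFF),
                      (byte) ((code >> 8) & 0xFF),
                      (byte) ((code >> 16) & 0xFF));
        gl.glVertex3f(gridPoints[id][0], gridPoints[id][1], gridPoints[id][2]);
    }
    gl.glEnd();

    // Read back only the lasso's bounding rectangle and decode the ids.
    gl.glPixelStorei(GL2.GL_PACK_ALIGNMENT, 1);
    ByteBuffer pixels = Buffers.newDirectByteBuffer(w * h * 3);
    gl.glReadPixels(x, y, w, h, GL2.GL_RGB, GL2.GL_UNSIGNED_BYTE, pixels);

    Set<Integer> hit = new HashSet<Integer>();
    for (int i = 0; i < w * h; i++) {
        int r = pixels.get(3 * i) & 0xFF;
        int g = pixels.get(3 * i + 1) & 0xFF;
        int b = pixels.get(3 * i + 2) & 0xFF;
        int code = r | (g << 8) | (b << 16);
        // Pixels inside the rectangle but outside the actual lasso polygon
        // would be filtered out here with a point-in-polygon test.
        if (code != 0) hit.add(code - 1);
    }
    return hit;
}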
I want to make the cells of a palette clickable in Vuforia (without Unity) by tapping on the screen.
I found the Dominoes example with similar functionality and did the following:
create one plate object and multiple cell objects,
call the isTapOnSetColor function with parameters x, y (the tap coordinates) on tap and get the coordinates;
the coordinates are correct, but the id/name of the object part I get back is wrong.
I think the problem is with this line:
boolean bool = checkIntersectionLine(matrix44F, lineStart, lineEnd);
In the Dominoes example this was:
bool intersection = checkIntersectionLine(domino->pickingTransform, lineStart, lineEnd);
But I don't know what domino->pickingTransform does, so instead of it I passed in the model-view matrix (Tool.convertPose2GLMatrix(trackableResult.getPose()).getData()).
Full code of my touch function: http://pastebin.com/My4CkxHa
Can you help me get the clicks working, or suggest another way (without Unity) to do this?
Basically, domino->pickingTransform is pretty much the final matrix each domino object is drawn with. The Dominoes sample works in a way that, for each object (domino), the app takes the projected ray of the screen touch and checks whether it intersects that object's matrix. The picking matrix is not exactly the same as the drawing matrix: to make picking more forgiving, you make it a little wider than the drawing matrix.
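To illustrate the idea (this is not the actual Dominoes code), a picking transform can be built by copying each cell's own model-view matrix and enlarging it slightly with android.opengl.Matrix; cellModelView and PICK_SCALE are placeholder names:

import android.opengl.Matrix;

// Build a slightly enlarged copy of a cell's model-view matrix to use for
// hit testing, so taps near the edge of the cell still register.
static final float PICK_SCALE = 1.2f;   // 20% wider than the drawn cell

static float[] buildPickingTransform(float[] cellModelView) {
    float[] picking = new float[16];
    System.arraycopy(cellModelView, 0, picking, 0, 16);
    // Scale in place around the cell's local origin; translation is preserved.
    Matrix.scaleM(picking, 0, PICK_SCALE, PICK_SCALE, PICK_SCALE);
    return picking;
}

The important part is that each cell gets its own transform passed to checkIntersectionLine, rather than the shared trackable model-view matrix, otherwise every cell is tested against the same box.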
You said you are getting a wrong id, but the question is: is it always the same id for different cells? If not, this is probably a small calculation error in your matrix transformations. I would suggest doing some visual debugging - add a graphical indication of the detected id, so you can see which cell the application thinks you clicked. This should help you progress towards the solution.
Is it possible to change the edge shape in JUNG? For example, I would like the edge to change its color gradually, in a way similar to a progress bar. What about the edge label font size?
Thanks.
Yes, sort of... Also - I'm not sure which version of JUNG you're using, but this works in the latest JUNG 2 release (I realize JUNG 3 might be under development currently, but last time I checked, it wasn't stable enough to be used for production-level code).
1. Labelling: First, you need to implement the Transformer<EdgeType,Font> interface that converts your edge instances into Font instances. Then call [VisualizationViewer instance].getRenderContext().setEdgeFontTransformer([Transformer<EdgeType,Font> instance]).
2. Color/Stroke Customization: This is a little trickier, because the only way I'm aware of to change the color gradually is to create a Transformer<EdgeType,Paint> that returns a different Paint for an edge instance over time. There are several transformers used for edges - they control the draw paint, the fill paint, and the Stroke - and they are registered with method names similar to the one mentioned for the labelling in step 1. You will either need to trigger the graph panel's repaints manually or make sure JUNG's animation renderer is turned on so that repaints happen continuously. A rough Java sketch covering both steps follows this list.
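Here is that sketch, assuming JUNG 2; MyVertex, MyEdge, and MyEdge.getProgress() (a value between 0 and 1) are made-up names standing in for your own graph classes:

import java.awt.Color;
import java.awt.Font;
import java.awt.Paint;
import org.apache.commons.collections15.Transformer;
import edu.uci.ics.jung.visualization.VisualizationViewer;

// Install edge transformers on an existing viewer.
static void installEdgeTransformers(final VisualizationViewer<MyVertex, MyEdge> vv) {
    // 1. Edge label font: grow the font a little as the edge "progresses".
    vv.getRenderContext().setEdgeFontTransformer(new Transformer<MyEdge, Font>() {
        public Font transform(MyEdge e) {
            return new Font("SansSerif", Font.PLAIN, 10 + (int) (6 * e.getProgress()));
        }
    });

    // 2. Edge draw paint: fade from gray to green as progress goes from 0 to 1.
    vv.getRenderContext().setEdgeDrawPaintTransformer(new Transformer<MyEdge, Paint>() {
        public Paint transform(MyEdge e) {
            float p = e.getProgress();
            return new Color(0.5f * (1 - p), 0.5f + 0.5f * p, 0.5f * (1 - p));
        }
    });

    // Repaint (or keep the animation renderer running) so paint changes show up.
    vv.repaint();
}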
If I have an image of a table of boxes, with some coloured in, is there an image processing library that can help me turn this into an array?
Thanks
You can use a thresholding function to binarize the image into dark/light pixels so dark pixels are 0 and light ones are 1.
Then you would want to clean up image artifacts and noise using dilation and erosion functions (all of these are well defined on Wikipedia).
Finally, if you know where the boxes are, you can just read the value at the center of each box to determine the array value, or sample an area near the center and take the prevailing value (i.e. more 0's means a filled-in square, more 1's means an empty square).
If you are scanning these boxes and there is a lot of variation in the position of the boxes, you will have to perform some level of image registration using known points, or fiducials.
As far as what tools to use, I'd recommend first trying this manually with a tool like ImageJ, which has a UI and can also be used programmatically since it is written entirely in Java.
Other good libraries for this include OpenCV and the Java Advanced Imaging API.
Your results will definitely vary depending on the input images and how consistently they are lit and positioned.
The best way to see how it will work for your data is to apply these processing steps manually and see what your threshold value should be and how much dilation/erosion you need to get consistent results.
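As a starting point, here is a plain-Java sketch of the threshold-then-sample-the-centres step. It assumes you already know the grid geometry (the gridX/gridY/cellW/cellH parameters are placeholders, and the dilation/erosion clean-up is skipped):

import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

// Threshold the scan and read a small patch at the centre of each known cell.
// Returns 1 for a filled (dark) box and 0 for an empty one.
static int[][] readGrid(File scan, int rows, int cols,
                        int gridX, int gridY, int cellW, int cellH,
                        int threshold) throws IOException {
    BufferedImage img = ImageIO.read(scan);
    int[][] result = new int[rows][cols];
    for (int r = 0; r < rows; r++) {
        for (int c = 0; c < cols; c++) {
            int cx = gridX + c * cellW + cellW / 2;
            int cy = gridY + r * cellH + cellH / 2;
            // Majority vote over a 5x5 neighbourhood instead of a single pixel,
            // which stands in for the "prevailing value" idea above.
            int dark = 0, total = 0;
            for (int dy = -2; dy <= 2; dy++) {
                for (int dx = -2; dx <= 2; dx++) {
                    int rgb = img.getRGB(cx + dx, cy + dy);
                    int gray = ((rgb >> 16 & 0xFF) + (rgb >> 8 & 0xFF) + (rgb & 0xFF)) / 3;
                    if (gray < threshold) dark++;
                    total++;
                }
            }
            result[r][c] = (dark * 2 > total) ? 1 : 0;
        }
    }
    return result;
}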
My question is similar to this one, but is more specific in scope.
In my card game application, I would like for users to be able to click on words located in a scanned jpeg image. Please see this sample Pokemon trading card.
In this case, the user should be able to hover his mouse over the text "Scratch", upon which a pulsing rectangular border will appear around the text, indicating that it is clickable. The problem is how to detect the border of the text. There will be an array of words KNOWN BEFOREHAND that the user may click on (these will be retrieved from a database on a card-by-card basis). To continue our example, the array in this case will be ["Scratch", "Live Coal"]. Once the user clicks on "Scratch", the application must know via a call-back that "Scratch" was chosen instead of "Live Coal".
I was thinking of using optical character recognition libraries to solve this problem, but the open-source options for this are poor in quality (e.g. GOCR) and/or not well-tested on multiple platforms (e.g. Tesseract). I only care about Windows and Mac compatibility. Am I missing an obvious/simpler solution/algorithm that does not require OCR? I cannot simply hand-code in bounding boxes for each card, as there will be thousands of scanned cards in my database. The user may also upload his own custom card scans with an accompanying array of clickable text.
Text color is not always black. See this panorama of different card and text styles that will be permitted. The black cards have white text, and the third-to-last card (Zekrom) has black text with a white outline.
Solutions in any programming language are appreciated. However, please note that I am looking for open-source algorithms and/or libraries. If there is a solution in Ruby or Java, even better, as my code is primarily in these two languages.
EDIT: I forgot to mention that the order of the words/phrases in the array will be the same as on the card. Thus, the array will be ["Scratch", "Live Coal"] instead of ["Live Coal", "Scratch"]. I am mentioning this because it can potentially simplify the task. Thus, for this example, I can simply look for black pixels (though I have to watch out for the black star in the white circle). However, there will be more difficult cases where there is descriptive text under the attack name in a smaller font (again, see the panorama for examples).
For simplicity, I would just write a program that lets you visually draw a bounding box around your text, but you could also do this by detecting differences in pixel color. Since the text is black, you could look for the upper-left-most black pixel that has no large indents and sits within the bottom half of the card.
When the cursor is stationary, check whether there is a black pixel underneath it or within 4 pixels of it. If there is, scan left, right, up, and down from the cursor until you hit three consecutive non-black pixels in each direction (three, because there may still be a non-black gap between the letters). Use those four positions to draw the rectangle. You can use OpenCV for the pixel access.
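A rough Java sketch of that scan, using BufferedImage and treating any sufficiently dark pixel as "black"; the gap-of-three rule and the darkness threshold are the assumptions described above:

import java.awt.Rectangle;
import java.awt.image.BufferedImage;

// Expand left/right/up/down from the cursor until `gap` consecutive non-dark
// pixels are seen in that direction, then report the resulting rectangle.
static Rectangle boxAround(BufferedImage img, int cx, int cy, int gap, int threshold) {
    if (!isDark(img, cx, cy, threshold)) return null;   // nothing under the cursor
    int left = cx, right = cx, top = cy, bottom = cy;

    for (int x = cx, miss = 0; x > 0 && miss < gap; x--) {
        if (isDark(img, x, cy, threshold)) { left = x; miss = 0; } else miss++;
    }
    for (int x = cx, miss = 0; x < img.getWidth() - 1 && miss < gap; x++) {
        if (isDark(img, x, cy, threshold)) { right = x; miss = 0; } else miss++;
    }
    for (int y = cy, miss = 0; y > 0 && miss < gap; y--) {
        if (isDark(img, cx, y, threshold)) { top = y; miss = 0; } else miss++;
    }
    for (int y = cy, miss = 0; y < img.getHeight() - 1 && miss < gap; y++) {
        if (isDark(img, cx, y, threshold)) { bottom = y; miss = 0; } else miss++;
    }
    return new Rectangle(left, top, right - left + 1, bottom - top + 1);
}

static boolean isDark(BufferedImage img, int x, int y, int threshold) {
    int rgb = img.getRGB(x, y);
    int gray = ((rgb >> 16 & 0xFF) + (rgb >> 8 & 0xFF) + (rgb & 0xFF)) / 3;
    return gray < threshold;
}

Since your EDIT says the clickable phrases are known in advance and appear in the same order as on the card, the boxes you detect from top to bottom can then be matched to the array by index, with no OCR involved.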
I'm writing some Java OpenGL code (though the principles are the same in C++ OpenGL). I have a situation where I want to render certain items on top of others. I can do that by disabling the depth test (or by setting the depth function to GL_ALWAYS) for those items, and that works well. The issue is that the colors of the items on top seem to be darkened by the items underneath them. I'm not sure if it's a lighting issue or a blending issue, but I want to show each item's color without it being affected by the colors around it, regardless of the item's z-position (since depth testing is set to ALWAYS). Is there a lighting setting or blending setting I should be using for this?
thanks,
Jeff
I think in this situation, I'd leave the depth settings alone, but adjust the Z value of the objects based on the drawing order (for those items you want drawn based on order instead of normal depth).
glBegin(GL_WHATEVER);                                   // e.g. GL_POINTS or GL_QUADS
for (int i = 0; i < num_objects; i++)
    glVertex3f(object[i].x, object[i].y, i / -100.0f);  // Z offset derived from draw order
glEnd();                                                // glEnd() takes no arguments