Yesterday I spent the entire day trying to solve a problem, and it is still unsolved. I searched Google and elsewhere with every combination of words I could imagine, but without success. :(
The problem boils down to this:
I started programming with GLES20 (not GLES10!) on Android. Matrices and objects are computed in a completely different way there, so there are no methods like "push" or "pop" matrix.
I want to rotate a globe/sphere just by touching the screen and moving my fingers. The touch handling etc. works fine, but the rotation itself never does. Every time I rotate around the x-axis first and then around the y-axis, the rotation is still computed around the object's local-space axes and not around two global axes. This happens whatever I do... :/
Some people have searched for a solution to the same problem, but mostly for GLES10 or completely different programming languages, never GLES20 and Java.
I will also post some code later, when I get access to a computer.
Perhaps someone already understands what my problem is.
Thank you so much! :)
Chrise
In OpenGL ES 2.0 there aren't any of the fixed-function matrix calls; you're supposed to take care of your own matrices. So, to get push/pop behaviour, when you're drawing you copy your matrix to a temporary one, apply whatever transformation you need to it, and pass the temporary matrix to the vertex shader.
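As a minimal sketch of that idea in Java, here is a hand-rolled matrix stack built on the real android.opengl.Matrix helpers (the MatrixStack class itself is illustrative, not an Android API):

    import android.opengl.Matrix;
    import java.util.ArrayDeque;
    import java.util.Deque;

    public class MatrixStack {
        private final Deque<float[]> stack = new ArrayDeque<>();
        private float[] current = new float[16];

        public MatrixStack() {
            Matrix.setIdentityM(current, 0);
        }

        // The GLES10-style "glPushMatrix": save a copy of the current matrix.
        public void push() {
            float[] copy = new float[16];
            System.arraycopy(current, 0, copy, 0, 16);
            stack.push(copy);
        }

        // The GLES10-style "glPopMatrix": restore the last saved matrix.
        public void pop() {
            current = stack.pop();
        }

        // Example transformation: rotate about the y-axis.
        public void rotateY(float degrees) {
            Matrix.rotateM(current, 0, degrees, 0f, 1f, 0f);
        }

        // Pass this to the vertex shader via glUniformMatrix4fv.
        public float[] top() {
            return current;
        }
    }

On the question's local-axis symptom specifically: with Matrix.multiplyMM, multiplying the new touch rotation on the left of the accumulated matrix (Matrix.multiplyMM(result, 0, delta, 0, accumulated, 0)) applies it around fixed world axes, while multiplying it on the right applies it around the object's current local axes, which matches the behaviour described in the question.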
I would like to know if there is a way to align the RGB picture and the depth data of a Kinect V2, using the colour data as a starting point, in Java. I am using the "Java for Kinect" wrapper, and it does not seem to offer that possibility. Is there any way to do that?
I finally got around it by using @Spektre's answer here; I had to play around with the formulas to make them work, but the result seems fine to me.
Adjusted for my needs, it gives:

    int alignx = (((x - 512) << 8) / 241) + Width;
    int aligny = (((y - 424) << 8) / 240) + 25 + Height;

It works fine as long as the Kinect is at the same level as the object you want to target (i.e. no pitch is used).
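For anyone who wants to apply this per pixel, here is a minimal sketch built around the two formulas above. Everything besides the formulas (the method names, the loop, the width/height parameters being passed in) is my own illustration, not a Kinect API:

    // Illustrative only: maps each depth-pixel coordinate into colour-image
    // space using the formulas from the answer above. `width` and `height`
    // play the role of the Width/Height offsets in the answer.
    public final class DepthToColorAligner {
        static final int DEPTH_W = 512;  // Kinect V2 depth frame width
        static final int DEPTH_H = 424;  // Kinect V2 depth frame height

        static int alignX(int x, int width) {
            return (((x - 512) << 8) / 241) + width;
        }

        static int alignY(int y, int height) {
            return (((y - 424) << 8) / 240) + 25 + height;
        }

        // Copies each depth value into its aligned position on a
        // colour-sized grid; pixels that map outside are dropped.
        static int[] align(int[] depth, int colorW, int colorH,
                           int width, int height) {
            int[] aligned = new int[colorW * colorH];
            for (int y = 0; y < DEPTH_H; y++) {
                for (int x = 0; x < DEPTH_W; x++) {
                    int ax = alignX(x, width);
                    int ay = alignY(y, height);
                    if (ax >= 0 && ax < colorW && ay >= 0 && ay < colorH) {
                        aligned[ay * colorW + ax] = depth[y * DEPTH_W + x];
                    }
                }
            }
            return aligned;
        }
    }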
I don't quite agree with Alex Acquier's answer; I feel it is not the right approach. I have faced the same issue, and I know I am answering 8 months late, but in the interest of others who come here searching for the solution, I present it here:
The thing is, you don't have to align the RGB and depth frames manually. There is a class already available that can do that for you, "IMultiSourceFrameReader". Using this as a source, you can be sure you are building the point cloud in the right way.
Now this is fine if you just want to use the feeds. But if, somewhere down your code, you are going to use some kind of coordinate system and you need the coordinates of the RGB and depth pixels, then you would expect them to be the same, right? Because you are using the aligned coordinates, after all? But you won't get aligned coordinates until you use the "ICoordinateMapper" class. This class aligns the coordinates from all the different sensors, RGB and infrared, and returns the aligned coordinates.
Please refer to this source; it has been my go-to reference for Kinect V2 for a long time.
I have many unsolved problems, and I'm kind of new to LWJGL.
Here is a screen: http://image.noelshack.com/fichiers/2015/07/1423885261-sans-titre.png
(this is a simple 20x20x20 arrangement of cubes)
But as you can see, my FPS never goes above 40, and every face of every cube is being drawn. How can I fix the FPS drop and hide the blocks that are hidden behind other ones?
I have glEnable(GL_DEPTH_TEST); and glEnable(GL_CULL_FACE);, but culling only works from INSIDE the blocks :x ...
Sorry for my English too, but I really need help :p
Culling
If culling only works while you are inside a block, your vertex winding order is most likely mixed up. If so, you might want to change the front-face definition from the default GL_CCW to GL_CW, or fix your vertex order to match the default. Reference here
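A minimal sketch of that fix in LWJGL (the glFrontFace(GL_CW) line assumes your cube vertices really are wound clockwise; if they are not, fix the vertex order instead and keep the default):

    import static org.lwjgl.opengl.GL11.*;

    public class CullingSetup {
        // Call once after the GL context has been created.
        static void enableBackFaceCulling() {
            glEnable(GL_CULL_FACE);
            glCullFace(GL_BACK); // discard faces pointing away from the camera
            glFrontFace(GL_CW);  // only if your vertices are wound clockwise;
                                 // otherwise keep the GL_CCW default
        }
    }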
SpeedUp
For this, your question gives too little information. If you are not already doing so, you might want to switch to using a Vertex Buffer Object, preferably with a single cube geometry that is only translated; a sketch follows below.
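A minimal LWJGL sketch of that suggestion, assuming the old fixed-function pipeline the question seems to be using (the vertex layout and GL_QUADS are assumptions):

    import static org.lwjgl.opengl.GL11.*;
    import static org.lwjgl.opengl.GL15.*;

    import java.nio.FloatBuffer;
    import org.lwjgl.BufferUtils;

    public class CubeVbo {
        // Upload the cube's vertex data to the GPU once.
        static int createVbo(float[] vertices) {
            FloatBuffer data = BufferUtils.createFloatBuffer(vertices.length);
            data.put(vertices).flip();

            int vbo = glGenBuffers();
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glBufferData(GL_ARRAY_BUFFER, data, GL_STATIC_DRAW);
            glBindBuffer(GL_ARRAY_BUFFER, 0);
            return vbo;
        }

        // Draw the shared geometry at (x, y, z); one VBO serves every cube.
        static void drawCubeAt(int vbo, int vertexCount,
                               float x, float y, float z) {
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glEnableClientState(GL_VERTEX_ARRAY);
            glVertexPointer(3, GL_FLOAT, 0, 0);

            glPushMatrix();
            glTranslatef(x, y, z);
            glDrawArrays(GL_QUADS, 0, vertexCount);
            glPopMatrix();

            glDisableClientState(GL_VERTEX_ARRAY);
            glBindBuffer(GL_ARRAY_BUFFER, 0);
        }
    }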
An additional approach would be to render only the objects that are in the camera's line of sight. One method for this is a binary space partitioning (BSP) tree.
I've not found anything here or on Google. I'm looking for a way to identify shapes (circle, square, triangle, and various other shapes) in an image file. Some examples:
You get the general idea. I'm not sure whether BoofCV is the best choice here; it looks like it should be straightforward enough to use, but again, I know nothing about it. I've looked at some of the examples, and before I get in over my head (which is not hard to do some days), I thought I would ask if there is any info out there.
I'm taking a class on Knowledge-Based AI solving Raven's Progressive Matrices problems, and the final assignment will use strictly visual images instead of text files with attributes. We are not being graded on the visual part, since we only have a few weeks to work on this section of the project, and we are encouraged to share this information. SO has always been my go-to source for information, and I'm hoping someone out there might have some ideas on where to start with this...
Essentially, what I want to do is detect the shapes (i.e. convert them into 2D geometry), then make some assumptions about attributes such as size, fill, placement, etc., create a text file with these attributes, and then send that through the existing code base I wrote for my other projects to solve the problems.
Any suggestions?
There are a lot of ways you can do it. One way is to find the contour of the shape, then fit a polygon or an ellipse to it. If you fit a polygon and there are 4 sides of almost equal length, then it's a square. The contour can be found with binary blobs (my recommendation for the images above) or a Canny edge detector.
http://boofcv.org/index.php?title=Example_Fit_Polygon
http://boofcv.org/index.php?title=Example_Fit_Ellipse
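To make the classification step concrete, here is a library-free sketch of the "count the sides, compare their lengths" idea. The fitted vertex list would come from something like the polygon-fit example linked above; the Point class and the 0.9 tolerance are my own illustration:

    import java.util.List;

    public class ShapeClassifier {
        static final class Point {
            final double x, y;
            Point(double x, double y) { this.x = x; this.y = y; }
        }

        static String classify(List<Point> vertices) {
            int n = vertices.size();
            if (n == 3) return "triangle";
            if (n == 4) {
                double min = Double.MAX_VALUE, max = 0;
                for (int i = 0; i < 4; i++) {
                    Point a = vertices.get(i);
                    Point b = vertices.get((i + 1) % 4);
                    double len = Math.hypot(a.x - b.x, a.y - b.y);
                    min = Math.min(min, len);
                    max = Math.max(max, len);
                }
                // Four sides of almost equal length -> square.
                return (min / max > 0.9) ? "square" : "quadrilateral";
            }
            // Many short sides usually means the contour is really round;
            // an ellipse fit (second link above) is the more robust test.
            return (n > 8) ? "circle?" : "polygon with " + n + " sides";
        }
    }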
So I'm doing the project for an introduction-to-Java course, and it seems that I chose something that goes way beyond what I'm able to do. :P
Any help would be greatly appreciated. This is what I'm having problems with:
You have a cursor that is controlled by a player (goes forward or
turns 90°) which leaves a colored line as it goes. If you manage to go
over your own line and close a polygon of any shape (only right angles
though), its surface changes color into the color of your line.
I can detect when this situation arises, but I am kind of lost as to how to actually fill the polygon that was just closed. I can't come up with an algorithm that would cover every possible case.
I looked at the scanline fill algorithm, but I think it would start having problems once some polygons have already been filled in on the map.
The flood fill algorithm would be perfect if I had a way of finding a point inside the polygon, but since there are so many different possibilities, I can't think of a general rule for choosing one.
I'm using a 2D array of integers where each color is represented by a number.
Does anyone have an idea on how to approach this problem?
If you can detect the situation, then this can be solved in a very simple manner. The question is which point to choose as the starting point for the flood fill. The simple answer is: try all of them. Of course, it makes sense to start only with the points adjacent to the one where your cursor is located, so you will have at most 8 points to check. Even better: at least 2 of them are definitely painted already if the current point closes a polygon.
So you have at most 8 points to check; launch the flood fill from each of them in turn.
Two things you should probably keep in mind (a sketch follows below):
You should fill the area in a cloned version of your field, so that you can roll back if the flood fill does not stay inside a polygon.
When launching the flood fill for the second and later points, reuse this cloned version of your field to see whether a point was already filled there. This lets you check every point at most once, which makes the 8 flood fills almost as fast as a single one.
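A minimal sketch of that strategy on the 2D int grid the question describes. The grid encoding (0 = empty), the OPEN marker, and the rule that a region counts as enclosed only if it never reaches the board edge are all my own assumptions:

    import java.util.ArrayDeque;

    public class PolygonFiller {
        static final int EMPTY = 0;   // assumption: 0 means "not painted"
        static final int OPEN  = -1;  // marks regions a failed fill proved open

        // Try to fill an enclosed empty region next to the cursor at (cx, cy).
        static boolean fillClosedRegion(int[][] grid, int cx, int cy, int color) {
            int[][] work = deepCopy(grid);
            int[][] around = {{-1,-1},{0,-1},{1,-1},{-1,0},{1,0},{-1,1},{0,1},{1,1}};
            for (int[] d : around) {
                int x = cx + d[0], y = cy + d[1];
                // Skip painted cells and regions a previous attempt proved open.
                if (!inside(work, x, y) || work[y][x] != EMPTY) continue;
                if (floodFill(work, x, y, color)) {
                    copyInto(work, grid);  // commit the successful fill
                    return true;
                }
            }
            return false;
        }

        // 4-connected flood fill; fails if the region touches the board edge.
        // Failed regions are left marked OPEN so later attempts skip them -
        // the "reuse the clone" trick from the answer above.
        static boolean floodFill(int[][] g, int sx, int sy, int color) {
            ArrayDeque<int[]> stack = new ArrayDeque<>();
            ArrayDeque<int[]> visited = new ArrayDeque<>();
            stack.push(new int[]{sx, sy});
            g[sy][sx] = color;
            boolean closed = true;
            int[][] n4 = {{1,0},{-1,0},{0,1},{0,-1}};
            while (!stack.isEmpty()) {
                int[] p = stack.pop();
                visited.push(p);
                for (int[] d : n4) {
                    int nx = p[0] + d[0], ny = p[1] + d[1];
                    if (!inside(g, nx, ny)) { closed = false; continue; }
                    if (g[ny][nx] == EMPTY) {
                        g[ny][nx] = color;
                        stack.push(new int[]{nx, ny});
                    }
                }
            }
            if (!closed) {
                for (int[] p : visited) g[p[1]][p[0]] = OPEN;
            }
            return closed;
        }

        static boolean inside(int[][] g, int x, int y) {
            return y >= 0 && y < g.length && x >= 0 && x < g[0].length;
        }

        static int[][] deepCopy(int[][] g) {
            int[][] c = new int[g.length][];
            for (int i = 0; i < g.length; i++) c[i] = g[i].clone();
            return c;
        }

        // Commit the fill, turning scratch OPEN markers back into EMPTY.
        static void copyInto(int[][] src, int[][] dst) {
            for (int y = 0; y < src.length; y++)
                for (int x = 0; x < src[y].length; x++)
                    dst[y][x] = (src[y][x] == OPEN) ? EMPTY : src[y][x];
        }
    }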
Check this question about using Graphics2D and Polygon to fill an arbitrary polygon: java swing : Polygon fill color problem
Finding out whether a point is inside or outside a polygon: http://en.wikipedia.org/wiki/Point_in_polygon
Make sure you use double buffering. If you set individual pixels and don't use double buffering, the component may redraw after every single pixel is set.
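A minimal Swing sketch of both points together: a java.awt.Polygon filled via Graphics2D inside paintComponent (JPanel is double-buffered by default, so the pixels never appear one at a time). The L-shaped coordinates are just an example:

    import java.awt.*;
    import javax.swing.*;

    public class PolygonPanel extends JPanel {
        // A right-angled "L" shape, like one the cursor might close.
        private final Polygon polygon = new Polygon(
                new int[]{50, 150, 150, 100, 100, 50},  // x coordinates
                new int[]{50, 50, 150, 150, 100, 100},  // y coordinates
                6);

        @Override
        protected void paintComponent(Graphics g) {
            super.paintComponent(g);
            Graphics2D g2 = (Graphics2D) g;
            g2.setColor(Color.BLUE);
            g2.fillPolygon(polygon);  // fills the interior, any shape
        }

        public static void main(String[] args) {
            JFrame frame = new JFrame("Polygon fill");
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.add(new PolygonPanel());
            frame.setSize(250, 250);
            frame.setVisible(true);
        }
    }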
I'm developing a 2D side-scrolling game using AndEngine.
I'm using its SVG extension (i.e. vector graphics).
But I discovered a strange and ugly effect while moving my player (more exactly, while the camera is chasing the player, i.e. while the camera is changing its position).
My sprites' images just look different: they seem blurred, or there is an effect as if the images were moving (not changing their position, just a jittery effect; it's really hard to explain and to name this effect properly). Hopefully this image explains it a bit:
This is more or less how it looks in the game, where:
a) the "FIRST" image shows the square while the player is moving (and the CAMERA isn't); the image looks as it should
b) the "SECOND" image is the same image, but with this strange effect (which looks like the image moving/blurring while the camera is moving [chasing the player])
A friend of mine told me that it might be a hardware problem:
"the blurring that you notice is actually a hardware problem. Some phones "smooth" the content on the screen to give a nicer feel to applications. I don't know if it's the screen or the graphics processor, but it doesn't occur on my wife's Samsung Captivate. It happens on my Atrix and Xoom though. It's really noticable on the Xoom due to the large screen size."
But it seems there is a way to prevent it, since I have tested many similar games where the camera chases the player, and I could not notice such an effect there.
Is there a way to turn this off in code?
I'm grateful for the previous answers; unfortunately, the problem still exists.
So far, I have tried:
casting to (int) in the setCenter method, which is executed in updateChaseEntity
testing the game with PNG images instead of the SVG extension and vector graphics
different TextureOptions
hardwareAcceleration
If someone has a different idea about what may be causing this strange effect, I would be really grateful for help. Thank you.
Some devices (e.g. the Xperia Play) bleed everywhere when trying to draw things that are moving quickly. For example, a red icon on the application list leaves a blur behind it. You could try hardwareAcceleration in the manifest (both on and off) to see if it makes a difference.
You'd probably get the same effect even if you weren't using SVG.
When your player is moving to the right and the camera begins chasing him, all sprites except the player move to the left. Try printing the absolute coordinates of your "blurring" sprite (or some of its anchor points) to the log. The sprite's x-coordinate should decrease linearly; if you notice that it sometimes increases, that could be the cause of the blur.
Hope this will help.
It sounds like it's due to the camera moving in fractional increments, which makes the SVG components rest on non-integer bounds, and the SVG renderer's anti-aliasing then makes this visible. Try moving the camera in integer increments by casting the camera values to int.
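One way to do that, sketched for the AndEngine GLES2 branch (Camera and its setCenter(float, float) are real AndEngine APIs; wrapping them in a subclass like this is just one illustrative approach):

    import org.andengine.engine.camera.Camera;

    // Snaps the chase camera to whole-pixel positions so sprites never
    // land on fractional coordinates while the camera follows the player.
    public class PixelSnappedCamera extends Camera {
        public PixelSnappedCamera(float x, float y, float width, float height) {
            super(x, y, width, height);
        }

        @Override
        public void setCenter(float centerX, float centerY) {
            super.setCenter((int) centerX, (int) centerY);
        }
    }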
I'm not familiar with this engine, but I wonder why you would use vector graphics for a pixelated art style. I'd be surprised if the character in your screenshot is really vector art... maybe it's a texture imported into an SVG? Back in the day I attempted to use Flash a few times, and I made the same mistake... I'm not saying it's impossible, but Flash and other vector software are not intended for creating pixel art. There is a reason most Flash games have a similar look.
The best way to debug it is to try a different-looking sprite.
Maybe it is just the slow response time of your device display.
I'm also an AndEngine developer, and I have never seen such behavior.
Sometimes jittering is fixed by using FixedStepEngine; it might help (see the sketch below).
If you can post your code, maybe we can help you better.
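A minimal sketch of plugging in FixedStepEngine (FixedStepEngine and the onCreateEngine hook are real AndEngine GLES2 APIs; the 60 steps per second is just an example value):

    import org.andengine.engine.Engine;
    import org.andengine.engine.FixedStepEngine;
    import org.andengine.engine.options.EngineOptions;
    import org.andengine.ui.activity.BaseGameActivity;

    // Updating the world in fixed time steps makes entity positions (and
    // the chase camera) advance consistently, which often removes jitter.
    public abstract class MyGameActivity extends BaseGameActivity {
        @Override
        public Engine onCreateEngine(final EngineOptions pEngineOptions) {
            return new FixedStepEngine(pEngineOptions, 60); // 60 updates/sec
        }
    }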