I am making a simple platformer with a tile-based map. However, when the camera moves there is white flickering, which I think occurs along the tile boundaries, though it's hard to tell because it flickers so quickly.
What I have tried:
Adding padding to my Tileset. I have 4px of padding around each tile, so it's not image bleeding.
Setting the TextureFilter to Nearest. It was never set to anything else, so Linear wasn't the culprit.
Casting the camera position to an int. Not only does this not fix the flickering, it also makes my camera jerky, so it's the worst possible solution.
Setting config.useCPUSync to false and config.vSync to true. I have set vSync to true, but I can't set useCPUSync to false because, as far as I am aware, it is no longer an option; I get a compile-time error when I try.
I am just displaying the map by calling TiledMapRenderer.render(), so I don't know whether the padding from my Tileset or my Nearest TextureFilter is actually being applied correctly, but that is the only possible issue in my rendering process I can think of.
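For reference, here is roughly what my rendering setup boils down to (a minimal sketch; the file name and camera size are placeholders, and here the filter is forced through the loader parameters so it is definitely applied):

```java
import com.badlogic.gdx.ScreenAdapter;
import com.badlogic.gdx.graphics.OrthographicCamera;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.maps.tiled.TiledMap;
import com.badlogic.gdx.maps.tiled.TmxMapLoader;
import com.badlogic.gdx.maps.tiled.renderers.OrthogonalTiledMapRenderer;

public class MapScreen extends ScreenAdapter {
    private TiledMap map;
    private OrthogonalTiledMapRenderer renderer;
    private OrthographicCamera camera;

    @Override
    public void show() {
        // Force Nearest filtering on every tileset texture at load time,
        // so there is no doubt about whether the filter is applied.
        TmxMapLoader.Parameters params = new TmxMapLoader.Parameters();
        params.textureMinFilter = Texture.TextureFilter.Nearest;
        params.textureMagFilter = Texture.TextureFilter.Nearest;
        map = new TmxMapLoader().load("level1.tmx", params);
        renderer = new OrthogonalTiledMapRenderer(map);
        camera = new OrthographicCamera(30, 20); // viewport in world units
    }

    @Override
    public void render(float delta) {
        camera.update();
        renderer.setView(camera);
        renderer.render();
    }
}
```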
Any other ideas?
Edit:
So I tried rendering manually and I learned a few things.
Even if I cast every coordinate of each tile to an int, and every coordinate of the camera, there is still flickering, so that is definitely not the answer. However, I then set the TextureFilter on each tile to Linear, and that DID stop the flickering, but I don't like how the textures look, so it's not really a solution.
This took me forever to figure out: everywhere else I looked gave the solutions I tried above.
What I eventually did was extrude the outer pixels by 1, the same way I would have if I had expected blur. I think this worked because, when the tiles were scaled, the application sometimes had to choose between using the outer pixel or a transparent pixel (my margins), and it chose a transparent pixel. Now the margin around each tile is just another pixel of the same color, so if it chooses that pixel it looks the same to us.
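In case it helps anyone else, the extrusion itself can be done in code with libGDX's Pixmap. This is just a sketch: it assumes each tile has at least 1px of margin on every side, and `extrude` is a name I made up.

```java
import com.badlogic.gdx.graphics.Pixmap;

public class TileExtruder {
    // Copy each tile's outer row/column one pixel outward into the margin, so a
    // sample that lands in the margin has the same color as the tile's edge.
    public static void extrude(Pixmap p, int x, int y, int size) {
        p.drawPixmap(p, x, y, size, 1, x, y - 1, size, 1);               // top edge, up
        p.drawPixmap(p, x, y + size - 1, size, 1, x, y + size, size, 1); // bottom edge, down
        p.drawPixmap(p, x, y, 1, size, x - 1, y, 1, size);               // left edge, left
        p.drawPixmap(p, x + size - 1, y, 1, size, x + size, y, 1, size); // right edge, right
        // Corner pixels, one per diagonal.
        p.drawPixmap(p, x, y, 1, 1, x - 1, y - 1, 1, 1);
        p.drawPixmap(p, x + size - 1, y, 1, 1, x + size, y - 1, 1, 1);
        p.drawPixmap(p, x, y + size - 1, 1, 1, x - 1, y + size, 1, 1);
        p.drawPixmap(p, x + size - 1, y + size - 1, 1, 1, x + size, y + size, 1, 1);
    }
}
```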
So this is what I have: a camera filming a set of dice. The blue rects indicate regions of interest and in each ROI I would want to see if there's a die placed there and which side of the die is showing (these dice are just placeholders, I'm not working on anything related to Dungeon Roll!).
I think the problem is rather plain to see: since I'm using a very wide-angle camera, I see not only the side of the die with the scroll (which is the side the camera is pointing at), but also, for example, the swords in the top right and bottom left.
I'm having a hard time thinking of how to get this to work. How can I go about figuring out which side is the "correct" side that I'm trying to identify?
Thank you!
So, the solution is more hardware- than software-based, but it works like a charm!
I put a clear plastic folder with a bumpy surface on top of the glass panel and then put the dice on that. The result: only things close to the refractive surface are displayed clearly, and everything farther away more or less disappears. Play with the number of folders and the edge-detection settings and the result will be satisfactory!
Though I'm sure Micka's and stacker's comments would've led me to the solution, this quick fix is good enough for my current testing. Thank you!
I've recently been looking into LibGDX and seem to have hit a wall. As seen in the picture, the blue dot represents the user's finger; the map generation itself is where I get stuck. Does LibGDX provide a method of dynamically drawing curved objects? I could simply generate them myself as images, but then the image is hugely stretched, to the point that the gap for the finger could fit three of them! It would also need to be thousands of pixels tall to accommodate the whole level design.
Is it the case that I should be drawing hundreds of polygons close together to make a curved line?
On a side note, I'll also need a way of determining when the object has moved from bottom to top so I can generate another 'chunk' of map.
You don't need hundreds of polygons to make a curve like you drew. You could get away with 40 quads on the left, and 40 on the right, and it would look pretty smooth. Raise that to 100 on each side and it will look almost perfectly smooth, and no modern device is going to have any trouble running that at 60fps.
You could use the Mesh class to generate a procedural mesh for each side. You can make the mesh stay in one spot, locked to the camera, and modify its vertices and UVs to make it look like you are panning down an infinitely long corridor. This will take a fair amount of math up front, but it should be smooth sailing once you have that down.
Basically, your level design could be based on some kind of equation that takes Y offset as an input. Or it could be a long array of offsets, and you could use a spline equation or linear equation to interpolate between them. The output would be the UV and X coordinates which can be used to update each of the vertices of your two meshes.
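For example, the "long array of offsets" version could be as small as this sketch (the control-point values, spacing, and class name are all made up):

```java
import com.badlogic.gdx.math.MathUtils;

public class Corridor {
    // Corridor center X at evenly spaced Y control points.
    private final float[] centers = { 0f, 40f, -25f, 10f, 60f };
    private final float spacing = 128f; // world units between control points

    // Linearly interpolate the corridor's center X for an arbitrary world Y.
    public float centerAt(float worldY) {
        float t = worldY / spacing;
        int i = MathUtils.clamp((int) t, 0, centers.length - 2);
        return MathUtils.lerp(centers[i], centers[i + 1], t - i);
    }
}
```

Swapping the linear interpolation for a Catmull-Rom spline would give smoother curves from the same control points.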
You can use the vertex shader to efficiently update the UV coordinates, using a constant offset uniform parameter that you update each frame. That way you don't have to move UV data to the GPU every frame.
For the vertex positions, use your Mesh's underlying float[] and call setVertices() each frame to update it.
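Here is a rough sketch of both pieces, the per-frame setVertices() upload and the scroll uniform, reusing the Corridor sketch above (the attribute layout, vertex format, `u_scroll` uniform name, and the elided vertex-filling are all assumptions):

```java
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.Mesh;
import com.badlogic.gdx.graphics.VertexAttribute;
import com.badlogic.gdx.graphics.VertexAttributes.Usage;
import com.badlogic.gdx.graphics.glutils.ShaderProgram;

public class WallMesh {
    private static final int COLUMNS = 40; // quads along the wall
    private static final int FLOATS = 4;   // x, y, u, v per vertex
    private final float[] verts = new float[(COLUMNS + 1) * 2 * FLOATS];
    private final Mesh mesh = new Mesh(false, (COLUMNS + 1) * 2, 0,
            new VertexAttribute(Usage.Position, 2, "a_position"),
            new VertexAttribute(Usage.TextureCoordinates, 2, "a_texCoord0"));

    // Each frame: recompute the x positions from the corridor function
    // and push the whole array back to the GPU.
    public void update(Corridor corridor, float scrollY, float rowHeight) {
        int i = 0;
        for (int row = 0; row <= COLUMNS; row++) {
            float x = corridor.centerAt(scrollY + row * rowHeight);
            // ... write the left/right vertex pair for this row into verts[i..] ...
            i += 2 * FLOATS;
        }
        mesh.setVertices(verts);
    }

    // Scroll the texture in the vertex shader instead of rewriting UVs.
    public void render(ShaderProgram shader, float scrollY) {
        shader.bind();
        shader.setUniformf("u_scroll", scrollY);
        mesh.render(shader, GL20.GL_TRIANGLE_STRIP);
    }
}
```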
Actually, it might look better if you leave the UVs and the X positions alone and just scroll the Y positions up. Keep a couple of quads of padding off the top and bottom of the screen, and just move the top quad to the bottom after it scrolls off screen.
How about creating a set of curved forms that can be put together in varying combinations? For example, the gap would be in the middle at both the top and bottom of each image (with the same curvature at the beginning and end points)...
And in between the start and end points you can go crazy on the shape.
And finally, you can randomly put those images together and get an endless world.
If you don't want the gap to return to the middle each time, you could also have, say, three entry and exit points (left, middle, right)... and after an image that ends left, you of course need to add an image that starts left, but it might end somewhere else...
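A tiny sketch of that matching rule (the Chunk type and lane names are made up for illustration):

```java
import java.util.List;
import java.util.Random;

public class ChunkPicker {
    enum Lane { LEFT, MIDDLE, RIGHT }

    // A pre-drawn map image tagged with where its gap enters and exits.
    record Chunk(String image, Lane entry, Lane exit) {}

    // Pick any chunk whose entry lane matches the previous chunk's exit lane.
    static Chunk next(List<Chunk> all, Chunk previous, Random rng) {
        List<Chunk> fits = all.stream()
                .filter(c -> c.entry() == previous.exit())
                .toList();
        return fits.get(rng.nextInt(fits.size()));
    }
}
```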
A portion of my app involves the user drawing images that will be later strung together in a PDF. The user is free to use the entire screen to draw. Once the user is done drawing, I'd like to trim off all of the white space before adding the image to a PDF. This is where I am having problems. I've thought of two different methods to determine the location of trimmable white space and both seem clumsy.
My first thought was to have each stylus motion event check whether it has gone outside the bounding box so far; if it has, I would expand the box to accommodate it. Unfortunately, I could see polling on every motion event being bad for performance. I can't just look at up and down events because the user could draw something like the letter V.
Then I thought I could look at all the pixels (using getPixel()) and see where the highest, lowest, rightmost, and leftmost black pixels are. Again, this seems like a really inefficient way to find the box. I'm sure I could skip some pixels to improve performance, but I can't skip too many.
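To make that second idea concrete, the brute-force scan I'm imagining would look something like this (a sketch of the approach I'd rather avoid):

```java
import android.graphics.Bitmap;
import android.graphics.Color;
import android.graphics.Rect;

public class InkBounds {
    // Walk every pixel and track the bounding box of non-white pixels.
    // O(width * height), which is exactly why it feels too slow.
    public static Rect find(Bitmap bmp) {
        int minX = bmp.getWidth(), minY = bmp.getHeight(), maxX = -1, maxY = -1;
        for (int y = 0; y < bmp.getHeight(); y++) {
            for (int x = 0; x < bmp.getWidth(); x++) {
                if (bmp.getPixel(x, y) != Color.WHITE) {
                    if (x < minX) minX = x;
                    if (x > maxX) maxX = x;
                    if (y < minY) minY = y;
                    if (y > maxY) maxY = y;
                }
            }
        }
        return maxX < 0 ? null : new Rect(minX, minY, maxX + 1, maxY + 1);
    }
}
```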
Is there a standard way of doing what I want to do? I haven't been able to find anything.
Inside your editor, at the point where you record that a pixel has been drawn on, you can update the maximum and minimum X and Y, and then use them later to crop the image.
If the user is drawing, aren't you already handling the onTouchEvent callback in order to capture the drawing events? If so, it shouldn't be a big deal to keep a minX, maxX, minY and maxY and check each recorded drawing event against these values.
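A minimal sketch of that, assuming the drawing happens in a custom View subclass (the field names are placeholders; fast strokes may also need the event's historical points, and you'd want to pad the box by the stroke width):

```java
// Inside the drawing View subclass:
private float minX = Float.MAX_VALUE, minY = Float.MAX_VALUE;
private float maxX = -Float.MAX_VALUE, maxY = -Float.MAX_VALUE;

@Override
public boolean onTouchEvent(android.view.MotionEvent event) {
    // Grow the bounding box with every touch sample.
    minX = Math.min(minX, event.getX());
    maxX = Math.max(maxX, event.getX());
    minY = Math.min(minY, event.getY());
    maxY = Math.max(maxY, event.getY());
    // ... existing drawing code ...
    return true;
}

// When the user is done, crop the full-screen bitmap to the tracked box.
public android.graphics.Bitmap trim(android.graphics.Bitmap full) {
    return android.graphics.Bitmap.createBitmap(full,
            (int) minX, (int) minY,
            (int) (maxX - minX), (int) (maxY - minY));
}
```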
I'm not complaining, just wondering: why does Java use the top-left point of the drawing surface as the origin? I would assume it's more natural to choose the bottom-left corner as the origin, with the axes increasing as they go up and to the right (similar to Quartz).
Computer graphics has had the origin in the upper left since the dawn of time, with QuickDraw included. Using the lower left (as in math) is a PostScript/PDF thing. Since Quartz is based on PDF, it uses its coordinates, but that is mostly a unique decision among graphics libraries.
It always worked like this.
Back in the assembly days, pixel one has always been on the top left corner. It was the first pixel or character that the user could read.
This way of numbering things allows you to have an infinitely long image or text. If you started from the bottom left and wanted to add a new line, you'd have to shift all your content and recalculate the coordinates for everything.
If you go back far enough, like 1981, you can find some exceptions!
http://central.kaserver5.org/Kasoft/Typeset/BBC/Ch08.html
"Imagine a graphics window which has its edges a, b, c and d 'graphics units' away from the bottom left hand corner of the screen (which is always the starting point for graphics)."
It could also be due to the CRT monitor, where the electron gun draws the image from left to right and top to bottom.
It probably came from the television standard, where the scan runs from top to bottom.
With a right-handed coordinate system where X and Y are placed at the top-left corner, Z goes into the screen. The graphics engine then knows which points are farther from the screen: the larger the Z, the farther away the point. That is very useful when rendering multiple objects placed in space, where some objects hide others.
Answers like "it always worked like this" don't really answer the question of why, so I'm confused as to why the top-voted answers mostly restate the status quo with extra information.
Eric mentions that “[back] in the assembly days, pixel one has always been on the top left corner”, but he doesn't mention why. He proceeds to explain that if we start from the bottom left corner, and want to add a new line to a body of text, then we'd have to do it by basically overwriting everything on the screen from the bottom upwards (if you started from the bottom left the previous time, then you didn't leave space for this new line; stuff has to be shifted up to add new lines). User Irreputable commented that this makes sense with only some languages (but I don't know any languages that start from the bottom upwards, which is what really matters anyway), and that it doesn't make much sense when it comes to images or graphics; and I concur, he's right about the latter.
Ubieto gives perhaps the most helpful answer: That it perhaps has to do with how the electron gun of CRT monitors draws the image from top to bottom, left to right.
However, all these answers perhaps miss one important point: the reason people ask about the top left being the origin of the axes is not just the location of the point, but also that, contrary to the Cartesian coordinate system we are all used to from primary school, where the y axis increases upwards, the y axis of the computer graphics and Java coordinate system increases downwards! This is one of the most jarring and confusing aspects of this system. If the system had the origin at the top left of the screen but the y axis decreased downwards (into negative numbers), then the CRT monitor's electron gun would've explained the whole mystery, at least for me. After all, we would then understand why the (0,0) point is at the top left, and everything else would work as we'd expect from our math education.
However, that's not the case with the Java and computer graphics 2D coordinate system; its y axis increases downwards, surprisingly enough. Why? I think that's the biggest mystery once we've considered the CRT and the origins of screen technology. In an attempt to answer that why, I can only think of one possibility: computer scientists wanted the 2D graphics coordinate system to be simpler and to avoid the potential confusion of always pairing a positive x coordinate with a negative y coordinate. If we assume that the top-left origin was a necessity of the screen technology of the time, with its electron gun (to avoid screen tearing with that technology), then we realize that computer scientists had the option of:
Treating the screen as the 4th quadrant, as the Cartesian coordinate system would, with every pixel in that quadrant (on the screen) having a positive x coordinate and a negative y coordinate, like (5,-5); or
Flipping the y axis across the x axis (pointing it downwards), bringing the 1st quadrant down, so that every pixel on the screen has both a positive x and a positive y coordinate, like (5,5). Perhaps computer scientists simply saw that as a convenience and a way of doing things that minimizes confusion; two positive numbers are arguably much less confusing and easier to calculate with and visualize than a positive and a negative number.
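To make the flip concrete: converting between the two conventions is a single subtraction (a sketch, where `height` is the surface height in pixels):

```java
// Cartesian (origin bottom-left, y up)  ->  screen (origin top-left, y down)
int screenY = (height - 1) - cartesianY;
// The same formula converts back: cartesianY = (height - 1) - screenY.
```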
In summary, there are two aspects to the question: the mystery of (0,0) being at the top left instead of the bottom left, and the mystery of the y axis increasing downwards. The first is probably best explained by early monitor technology that worked top to bottom, left to right. The second is probably best explained by a desire for simplicity and clarity: adopting a coordinate system with two positive coordinates, rather than a potentially confusing one that permanently pairs a positive x coordinate with a negative y coordinate.
I think it's to be compatible with window frames minimizing and maximizing: with a top-left origin, content keeps its coordinates when the window is resized from the bottom or right.
The obvious focusing area is where the first word on a page written in English appears, namely the top left, which is the most natural choice, except when it comes to the graphical representation of some math in the first quadrant, which effectively becomes the second quadrant with the y axis positive (reflected, or rotated 180° about the origin). (I came to this while googling for a way to solve that.)
Actually, the decision was made a long time before the computer and CRT age.
It's just an implementation choice. Screen coordinates in Windows and other OSes are given the same way, so I'm guessing they chose that to be consistent with the OS's choice, which is likely a legacy thing.
It also has the benefit of being similar to 2-D arrays in programs, where [0][0] refers to the upper-left element.
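For example, a row-major loop over a 2-D array visits elements in exactly the order a screen is drawn, top row first:

```java
char[][] screen = {
        { 'a', 'b' },  // row 0 is the top "scanline"
        { 'c', 'd' },  // row 1 is the next one down
};
// screen[0][0] is the upper-left element; the row index grows downward,
// just like screen y.
for (char[] row : screen) {
    System.out.println(new String(row));
}
```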