The end goal is to render images of arbitrary size in JOGL, and to do it fast on basic graphics cards.
My initial attempt was to achieve this using textures. However, I ran into problems on some graphics cards (more precisely, virtual machine graphics cards): some images exceed GL_MAX_TEXTURE_SIZE, and some cards do not support non-power-of-two textures (gl.isNPOTTextureAvailable() returns false).
I then followed several samples ([1], [2]) that use glDrawPixels to render the image directly:
gl.glBlendFunc(GL.GL_SRC_ALPHA, GL.GL_ONE_MINUS_SRC_ALPHA);
gl.glEnable(GL.GL_BLEND);
gl.glColor3f(0.0f, 0.0f, 0.0f);
gl.glRasterPos2i(10, 300);
gl.glDrawPixels(dukeWidth, dukeHeight,
                GL.GL_RGBA, GL.GL_UNSIGNED_BYTE,
                dukeRGBA);
This works fine, except when the raster position moves outside the viewport: as soon as part of the image (the bottom-left corner) leaves the viewport, the whole image is no longer displayed.
[1] https://today.java.net/pub/a/today/2003/09/11/jogl2d.html
[2] http://www.java-tips.org/other-api-tips/jogl/drawing-pixels-and-showing-the-effect-of-gldrawpixels-glcopypixels-and-glpix.htm
I have managed to solve the disappearing-image problem by replacing glRasterPos2i with glWindowPos2d, but that led to another problem: glWindowPos2d is only supported from OpenGL 1.4, and my virtual machines only support 1.1.
What is wrong with my approach?
Should I be handling non-power-of-two images by padding them up to power-of-two textures?
Should I split large images into many textures (like a quilt) so that the maximum texture size is not exceeded? I am worried about performance in this case.
I tried Mesa3D to obtain a higher OpenGL version, but I cannot make it compile for Windows. Are there any other software renderers you would recommend? (I am waiting on SwiftShader support.)
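For reference, this is roughly what I mean by the padding option (an untested sketch using java.awt; the class and method names are made up):

import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

final class TexturePadding {
    // Smallest power of two >= n (illustrative helper).
    static int nextPowerOfTwo(int n) {
        int highest = Integer.highestOneBit(n);
        return (highest == n) ? n : highest << 1;
    }

    // Copy the source image into the top-left corner of a power-of-two canvas.
    static BufferedImage padToPowerOfTwo(BufferedImage src) {
        int w = nextPowerOfTwo(src.getWidth());
        int h = nextPowerOfTwo(src.getHeight());
        BufferedImage padded = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = padded.createGraphics();
        g.drawImage(src, 0, 0, null);
        g.dispose();
        // When texturing, only sample up to src.getWidth() / (float) w
        // and src.getHeight() / (float) h.
        return padded;
    }
}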
I have managed to figure out an answer to my own question.
http://www.opengl.org/wiki/Raster_Position_And_Clipping
How do I draw glBitmap() or glDrawPixels() primitives that have an initial glRasterPos() outside the window's left or bottom edge?
When the raster position is set outside the window, it's often outside the view volume and subsequently marked as invalid. Rendering the glBitmap and glDrawPixels primitives won't occur with an invalid raster position. Because glBitmap/glDrawPixels produce pixels up and to the right of the raster position, it appears impossible to render this type of primitive clipped by the left and/or bottom edges of the window.
However, here's an often-used trick: Set the raster position to a valid value inside the view volume. Then make the following call:
glBitmap (0, 0, 0, 0, xMove, yMove, NULL);
This tells OpenGL to render a no-op bitmap, but move the current raster position by (xMove,yMove). Your application will supply (xMove,yMove) values that place the raster position outside the view volume. Follow this call with the glBitmap() or glDrawPixels() to do the rendering you desire.
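Applied to the JOGL code above, the trick looks roughly like this (a sketch; targetX and targetY stand for the off-screen raster position I actually want, and the exact glBitmap overload depends on the JOGL version):

gl.glRasterPos2i(0, 0);  // any position that is valid inside the view volume
// No-op bitmap that only moves the current raster position off-screen.
// If your JOGL binding rejects a null buffer, pass an empty direct ByteBuffer instead.
gl.glBitmap(0, 0, 0, 0, targetX, targetY, (java.nio.ByteBuffer) null);
gl.glDrawPixels(dukeWidth, dukeHeight,
                GL.GL_RGBA, GL.GL_UNSIGNED_BYTE,
                dukeRGBA);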
Related
However, I have a weird issue: when drawing, it seems the outer 1px border of an image is stretched to fit the rectangle, but the inside is only stretched to an extent. I was drawing 48x48 tiles, but drew a 500x500 tile to show the issue (the 500x500 tile draws fine).
The worst part is that it seems to choose when to stretch and when not to, and also what to stretch. I'm sorry, this is hard to explain, but I have attached an image that I hope does a better job.
It could just be a misunderstanding of how to draw with SpriteBatch.
Edit: the tile is 48x48, not 64x64; I've just been working all day.
This is because you are not rendering "pixel perfect", which means your image does not line up with the pixel grid of your monitor. A quick fix might be to set a linear filter for your textures: by default they use nearest filtering, so a pixel on the screen simply inherits the closest color it can get. A linear filter will interpolate colors and make that line "look" thinner.
texture.setFilter(Texture.TextureFilter.Linear, Texture.TextureFilter.Linear);
If you are using TexturePacker you can do this in one go by altering its settings.
texturePackerSetting.filterMin = Texture.TextureFilter.Linear;
texturePackerSetting.filterMag = Texture.TextureFilter.Linear;
Or you could edit the atlas file itself by changing the filter parameter to:
filter: Linear,Linear
This obviously costs more processing power, since more calculations are needed for each pixel drawn to the screen, but I would not worry about it unless drawing becomes a bottleneck.
Another solution is to draw pixel perfect, which means you need to set your viewport to the size of the device (Gdx.graphics.getWidth(), Gdx.graphics.getHeight()), in other words a ScreenViewport, and draw your textures at exactly the sizes you want them. Of course, this means a screen with more pixels sees more of your game world than a screen with fewer pixels, and the more pixels a device has, the smaller your textures will look. Another drawback is that you have to forget about zooming, or draw sprites for each level of zoom so they line up with the pixel grid of the device again.
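A minimal sketch of that pixel-perfect setup (names like tileTexture are illustrative):

import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.OrthographicCamera;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.utils.viewport.ScreenViewport;

// Pixel-perfect setup: one world unit equals one screen pixel.
OrthographicCamera camera = new OrthographicCamera();
ScreenViewport viewport = new ScreenViewport(camera);
viewport.update(Gdx.graphics.getWidth(), Gdx.graphics.getHeight(), true);

SpriteBatch batch = new SpriteBatch();
batch.setProjectionMatrix(camera.combined);
batch.begin();
// Draw at integer coordinates and at the texture's native size (e.g. 48x48)
// so the image lines up exactly with the screen's pixel grid.
batch.draw(tileTexture, 100, 100, 48, 48);
batch.end();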
For a JavaFX graphics program that I'm trying to create I would like to be able to divide each pixel into 4 subpixels and draw them. Is this possible in JavaFX (or in any other Java graphics library)?
JavaFX will render sub pixels for text when FontSmoothingType.LCD is used.
As far as I know, there is no mechanism in JavaFX where you can specify sub-pixel rendering for graphics primitives other than text (e.g. lines and circles).
Even though you can't specify the rendering type for other primitives, they may or may not be rendered using sub-pixel rendering (I don't know, though I would guess that sub-pixel rendering is not used even when the primitives are rendered with anti-aliasing).
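For text, requesting LCD sub-pixel smoothing looks like this (a small sketch):

import javafx.scene.text.FontSmoothingType;
import javafx.scene.text.Text;

// Request LCD (sub-pixel) smoothing for a text node.
Text label = new Text("Sub-pixel rendered text");
label.setFontSmoothingType(FontSmoothingType.LCD);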
Yes, you can. By default if you draw to an integer pixel on a canvas, it will draw a dot "between" actual screen pixels, i.e. it will anti-alias with the neighboring pixels. If you offset by 0.5, then it will draw exactly on the pixel. So you can draw sub-pixels by using values between 0.5 and 1.5. For example if you draw a horizontal line with Y1=100.5 and Y2=101.5, you will see anti-aliasing in between. You can also set line widths non-integer, in which case it will dim or anti-alias the pixel an appropriate amount. I'm not sure about FontSmoothing, but this is part of the canvas GraphicsContext.
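As a sketch of what that looks like on a Canvas (the coordinates are arbitrary):

import javafx.scene.canvas.Canvas;
import javafx.scene.canvas.GraphicsContext;

// Drawing at fractional coordinates so anti-aliasing spreads a 1px line
// across neighbouring screen pixels.
Canvas canvas = new Canvas(400, 400);
GraphicsContext gc = canvas.getGraphicsContext2D();
gc.setLineWidth(1.0);
gc.strokeLine(50, 100.5, 350, 100.5); // lands exactly on one pixel row
gc.strokeLine(50, 103.0, 350, 103.0); // straddles two rows, appears dimmer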
I've created an isometric tile-based game in libGDX. The textures I'm using are 64x64, packed using TexturePacker into a TextureAtlas, and then drawn onto the screen. However, while moving around, the pixelated edges of the 64x64 textures flicker and are distorted, as can be seen in the images below. I have tried all the filters available in TexturePacker; below you can see the results of the Linear and Nearest filters. Apart from the flickering, the linear filter adds a black outline to the textures. I would be fine with this if it weren't for the flickering when the camera moves around.
How the tile should appear:
Linear filtering (You can clearly see the black lines distorting):
Nearest filtering (Harder to see, but the pixelated lines are not straight):
The easiest place to spot it is on the top and bottom of the brown cube. The distortion happens in different places depending on camera movement (this causes flickering).
Anyone know what causes this, or has a possible solution? I'm not sure if any code snippets are needed.
It is also worth mentioning that the camera is set to windowWidth/ppm and windowHeight/ppm (ppm = 64), and the textures are then drawn onto a batch whose projection matrix is set to camera.combined.
Edit: Somehow it's better when reducing the window height from 800 to 710 (nearest):
Turn on the premultiplyAlpha option in TexturePacker and call setBlendFunction(GL20.GL_ONE, GL20.GL_ONE_MINUS_SRC_ALPHA) on the SpriteBatch. This should get rid of the flickering black fringing. Basically, when using linear filtering, if the sprite's edges don't exactly line up with the pixels on the screen, the color of a screen pixel is linearly sampled from an image pixel on the edge of your sprite and an image pixel in the invisible black space (RGBA = 0000) next to it, so the edges can appear darker and more transparent than intended. Pre-multiplying the alpha cures this problem by changing the order of operations of the interpolation. Detailed explanations here and here.
Also, use a filterMin of MipMapLinearNearest or MipMapLinearLinear to make sure you aren't getting minification artifacts. (The first performs better; the second looks better at certain zoom levels and should be used if your camera zooms in and out.)
And finally, filterMag should be Linear.
Nearest filtering will always produce uneven artifacts if the sprites are not drawn at exactly 1X, 2X, 3X, etc. of their original size, because there will be certain rows and columns of the screen where a pixel in the image is drawn twice.
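Putting the above together, the packer settings and batch setup would look roughly like this (a sketch assuming the gdx-tools TexturePacker; paths and names are illustrative):

import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.tools.texturepacker.TexturePacker;

// Pack with premultiplied alpha and a mipmapped minification filter.
TexturePacker.Settings settings = new TexturePacker.Settings();
settings.premultiplyAlpha = true;
settings.filterMin = Texture.TextureFilter.MipMapLinearLinear; // or MipMapLinearNearest
settings.filterMag = Texture.TextureFilter.Linear;
// TexturePacker.process(settings, "input/", "output/", "tiles");

// At render time, match the blend function to the premultiplied alpha.
SpriteBatch batch = new SpriteBatch();
batch.setBlendFunction(GL20.GL_ONE, GL20.GL_ONE_MINUS_SRC_ALPHA);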
I've been looking around and I couldn't find an answer to this. What I have done is create a cube/box, and the shapes squash and stretch depending on where the camera is looking. This all seems to resolve itself when the screen is perfectly square, but when I'm using 16:9 the shapes are stretched and squashed. Is it possible to change this?
16:9
and this is 500px X 500px
As a side question would it be possible to change the color of background "sky"?
OpenGL uses a cube [-1,1]^3 to represent the frustum in normalized device coordinates. The viewport transform stretches this in the x and y directions to [0,width] and [0,height]. So to get the correct output aspect ratio, you have to take the viewport dimensions into account when transforming the vertices into clip space. Usually, this is part of the projection matrix. The old fixed-function gluPerspective() function has a parameter to directly create a frustum for a given aspect ratio. As you do not show any code, it is hard to suggest what you actually should change, but it should be quite easy, as it boils down to a simple scale operation along x and y.
To the side question: that color is defined by the value the color buffer is set to when clearing it. You can set the color via glClearColor().
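In JOGL (assuming that is what is being used here), both points would typically be handled roughly like this (a sketch; the field of view and clip planes are arbitrary):

import com.jogamp.opengl.GL2;
import com.jogamp.opengl.glu.GLU;

// Build the projection with the viewport's aspect ratio so shapes are not
// squashed, and pick the background "sky" color with glClearColor.
void reshape(GL2 gl, int width, int height) {
    GLU glu = new GLU();
    float aspect = (float) width / Math.max(height, 1);
    gl.glViewport(0, 0, width, height);
    gl.glMatrixMode(GL2.GL_PROJECTION);
    gl.glLoadIdentity();
    glu.gluPerspective(60.0, aspect, 0.1, 100.0); // fovY, aspect, zNear, zFar
    gl.glMatrixMode(GL2.GL_MODELVIEW);
}

void init(GL2 gl) {
    gl.glClearColor(0.5f, 0.7f, 1.0f, 1.0f); // light blue "sky"
}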
I have rendered a 3D scene in OpenGL viewed from the gluOrtho perspective. In my application I am looking at the front face of a cube of volume 100x70x60mm (which I have as 1000x700x600 pixels). Inside this cube I have rendered a simple blue sphere which sits exactly in the middle and 'fills' the cube (radius 300 pixels).
I now want to read the color value of pixels (in 3D) at specific points within the cube; i.e. I wish to know if say point (100,100,-200) is blue or blank (black).
glReadPixels only allows 2D extraction of color, and I have tried it with DEPTH_COMPONENT but am unsure what this should return in byte form. Is there a way to combine the two? Am I missing something?
I am using Eclipse with Java and JOGL.
This can't be done in the context of OpenGL--you'll need some sort of scene graph or other space partitioning scheme working in concert with your application's data structures.
The reason is simple: the frame buffer only stores the color and depth of the fragment nearest to the eye at each pixel location (assuming a normal GL_LESS depth function). The depth value stored in the Z-buffer is used to determine if each subsequent fragment is closer or farther from the eye than the existing fragment, and thus whether the new fragment should replace the old or not. The frame buffer only stores color and depth values from the most recent winner of the depth test, not the entire set of fragments that would have mapped to that pixel location. Indeed, there would be no way to bound the amount of graphics memory required if that were the case.
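For this particular scene, the query can be answered from the application's own scene description instead of the framebuffer, for example (a sketch; the centre coordinates are an assumption about where the question places the origin):

// Sphere of radius 300 centred in the 1000x700x600 box, per the question.
static boolean isInsideSphere(double x, double y, double z) {
    double cx = 500, cy = 350, cz = -300; // assumed centre in pixel units
    double r = 300;
    double dx = x - cx, dy = y - cy, dz = z - cz;
    return dx * dx + dy * dy + dz * dz <= r * r;
}

// Example: isInsideSphere(100, 100, -200) would mean "blue", otherwise "black".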
You're not the first to fall for this misconception, so I'll say it in the most blunt way possible: OpenGL doesn't work that way. OpenGL never(!) deals with objects or any complex scenes. The only things OpenGL knows about are framebuffers, shaders and single triangles. Whenever you draw an object, usually composed of triangles, OpenGL will only see one triangle at a time. And once something has been drawn to the framebuffer, whatever was there before is lost.
There are algorithms, based on the concepts of rasterizers (which OpenGL is), that decompose a rendered scene into its parts; depth peeling would be one of them.