I'm following a tutorial on creating a Game Engine in Java using OpenGL.
I'm trying to render a triangle on the screen. Everything is running fine and I can change the background color but the triangle won't show. I've also tried running the code provided as part of the tutorial series and it still doesn't work.
Link to the tutorial: http://bit.ly/1EUnvz4
Link to the code used in the video: http://bit.ly/1z7XUlE
Setup
I've tried checking the OpenGL version and believe I have 2.1.
Mac OSX
Java - Eclipse
Mesh.java
import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.GL15.*;
import static org.lwjgl.opengl.GL20.*;

public class Mesh
{
    private int vbo;  //pointer to the buffer
    private int size; //size of the data to buffer

    public Mesh()
    {
        vbo = glGenBuffers();
        size = 0;
    }

    public void addVertices(Vertex[] vertices)
    {
        size = vertices.length;
        //add the data by first binding the buffer
        glBindBuffer(GL_ARRAY_BUFFER, vbo); //vbo is now the buffer
        //and then buffering the data
        glBufferData(GL_ARRAY_BUFFER, Util.createFlippedBuffer(vertices), GL_STATIC_DRAW);
    }

    public void draw()
    {
        glEnableVertexAttribArray(0); //divide up the data into a segment
        glBindBuffer(GL_ARRAY_BUFFER, vbo); //vbo is now the buffer
        //tell OpenGL more about the segment:
        //segment = 0, elements = 3, type = float, normalize? = false, stride in bytes, where to start = 0
        glVertexAttribPointer(0, 3, GL_FLOAT, false, Vertex.SIZE * 4, 0);
        //draw GL_TRIANGLES starting from vertex 0 with a given 'size'
        glDrawArrays(GL_TRIANGLES, 0, size);
        glDisableVertexAttribArray(0);
    }
}
RenderUtil.java
import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.GL30.*;

public class RenderUtil
{
    public static void clearScreen()
    {
        //TODO: Stencil Buffer
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    }

    //set everything to engine defaults
    public static void initGraphics()
    {
        glClearColor(0.0f, 0.0f, 0.0f, 0.0f); // default color
        glFrontFace(GL_CW);                   // direction for visible faces
        glCullFace(GL_BACK);                  // direction for back faces
        glEnable(GL_CULL_FACE);               // don't draw back faces
        glEnable(GL_DEPTH_TEST);              // determines draw order by pixel depth testing
        //TODO: Depth clamp for later
        glEnable(GL_FRAMEBUFFER_SRGB);        // do exponential correction on gamma so we don't have to
    }
}
Util.java
import java.nio.FloatBuffer;
import org.lwjgl.BufferUtils;

public class Util
{
    //create a float buffer (we need this because Java is weird)
    public static FloatBuffer createFloatBuffer(int size)
    {
        return BufferUtils.createFloatBuffer(size);
    }

    //flip the buffer to fit what OpenGL expects
    public static FloatBuffer createFlippedBuffer(Vertex[] vertices)
    {
        FloatBuffer buffer = createFloatBuffer(vertices.length * Vertex.SIZE);
        for (int i = 0; i < vertices.length; i++)
        {
            buffer.put(vertices[i].getPos().getX());
            buffer.put(vertices[i].getPos().getY());
            buffer.put(vertices[i].getPos().getZ());
        }
        buffer.flip();
        return buffer;
    }
}
You are using an invalid mix of legacy and modern OpenGL.
The glVertexAttribPointer() and glEnableVertexAttribArray() functions you are calling are used for setting up generic vertex attributes. This is the only way to set up vertex attributes in current versions of OpenGL (the Core Profile of desktop OpenGL, or OpenGL ES 2.0 and later). They can be used in older versions of OpenGL as well, but only in combination with providing your own shaders implemented in GLSL.
If you are just getting started, your best option is probably to stick with what you have, and study how to start implementing your own shaders. If you wanted to get the code working with the legacy fixed pipeline (which is only supported in the Compatibility Profile of OpenGL), you would need to use the glVertexPointer() and glEnableClientState() functions instead.
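For comparison, a fixed-pipeline version of the draw() method in Mesh might look roughly like the sketch below. This is an untested illustration only: it assumes a Compatibility Profile context and the same LWJGL static imports and fields (vbo, size, Vertex.SIZE) as in Mesh.java above; the method name is mine.

```java
// Hypothetical fixed-pipeline variant of Mesh.draw() (sketch, not drop-in code).
// glEnableClientState/glVertexPointer replace the generic-attribute calls.
public void drawFixedPipeline()
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableClientState(GL_VERTEX_ARRAY);
    // 3 floats per position; stride is Vertex.SIZE floats * 4 bytes
    glVertexPointer(3, GL_FLOAT, Vertex.SIZE * 4, 0);
    glDrawArrays(GL_TRIANGLES, 0, size);
    glDisableClientState(GL_VERTEX_ARRAY);
}
```

Note this only works in a context where the fixed-function pipeline still exists; with a Core Profile context you must stay with generic attributes and shaders.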
Try a single import?
import static org.lwjgl.opengl.GL11.*;
I only have one import in mine; also try importing the packages you need separately. One thing you may be doing wrong is importing multiple versions of OpenGL.
Related
I am trying to make a simple 2D game, and I store the world in a 2D array of Block (an enum, with each value having its texture).
Since these are all simple opaque tiles, when rendering I sort them by texture and then render them by translating to their coordinate. However, I also need to specify the texture coordinates and the vertex for each tile that I draw, even though these are also the same.
Here's what I currently have:
public static void render() {
    // Sorting...
    for (SolidBlock block : xValues.keySet()) {
        block.getTexture().bind();
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        for (int coordinateIndex = 0; coordinateIndex < xValues.get(block).size(); coordinateIndex++) {
            int x = xValues.get(block).get(coordinateIndex);
            int y = yValues.get(block).get(coordinateIndex);
            glTranslatef(x, y, Integer.MIN_VALUE);
            // Here I use MIN_VALUE because I'll later have to do z sorting with other tiles
            glBegin(GL_QUADS);
            loadModel();
            glEnd();
            glLoadIdentity();
        }
        xValues.get(block).clear();
        yValues.get(block).clear();
    }
}

private static void loadModel() {
    glTexCoord2f(0, 0);
    glVertex2f(0, 0);
    glTexCoord2f(1, 0);
    glVertex2f(1, 0);
    glTexCoord2f(1, 1);
    glVertex2f(1, 1);
    glTexCoord2f(0, 1);
    glVertex2f(0, 1);
}
I'd like to know if it is possible to put loadModel() before the main loop, to avoid having to load the model thousands of times with the same data, and also what else could be moved to make it as fast as possible!
Some quick optimizations:
glTexParameteri only needs to be called once per parameter per texture. You should put it in the part of your code where you load the textures.
You can draw multiple quads in one glBegin/glEnd pair simply by adding more vertices. However, you cannot do any coordinate changes between glBegin and glEnd (such as glTranslatef or glLoadIdentity or glPushMatrix) so you'll have to pass x and y to your loadModel function (which really should be called addQuad for accuracy). It's also not allowed to rebind textures between glBegin/glEnd, so you'll have to use one set of glBegin/glEnd per texture.
Minor, but instead of calling xValues.get(block) a whole bunch of times, just say List<Integer> blockXValues = xValues.get(block) at the beginning of your outer loop and then use blockXValues from there on.
Some more involved optimizations:
Legacy OpenGL has display lists, which are basically macros for OpenGL. You can make OpenGL record all the OpenGL calls you're doing between glNewList and glEndList (with some exceptions), and store them somehow. The next time you want to run those exact OpenGL calls, you can use glCallList to make OpenGL do just that for you. Some optimizations will be done on the display list in order to speed up subsequent draws.
Texture switching is relatively expensive, which you're probably already aware of since you sorted your quads by texture, but there is a better solution than sorting textures: Put all your textures into a single texture atlas. You'll want to store the subtexture coordinates of each block inside your SolidBlocks, and then pass block to addQuad as well so you can pass the appropriate subtexture coordinates to glTexCoord2f. Once you've done that, you don't need to sort by texture anymore and can just iterate over x and y coordinates.
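To make the atlas idea concrete, here's a small hypothetical helper (the names are mine, not from the question) that maps a tile index in a uniform grid atlas to the UV rectangle you'd feed to glTexCoord2f:

```java
public class AtlasUV {
    // For a grid atlas of tilesPerRow x tilesPerCol equally sized tiles,
    // returns {u0, v0, u1, v1} for the tile at the given index (row-major order).
    public static float[] uvRect(int tileIndex, int tilesPerRow, int tilesPerCol) {
        int col = tileIndex % tilesPerRow;
        int row = tileIndex / tilesPerRow;
        float tileW = 1f / tilesPerRow;
        float tileH = 1f / tilesPerCol;
        return new float[] { col * tileW, row * tileH, (col + 1) * tileW, (row + 1) * tileH };
    }
}
```

addQuad would then call glTexCoord2f(uv[0], uv[1]) and so on for the four corners instead of the fixed 0/1 coordinates.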
Good practices:
Only use glLoadIdentity once per frame, at the beginning of your draw process. Then use glPushMatrix paired with glPopMatrix to save and restore the state of matrices. That way the inner parts of your code don't need to know about the matrix transformations the outer parts may or may not have done beforehand.
Don't use Integer.MIN_VALUE as a vertex coordinate. Use a constant of your own choosing, preferably one that won't make your depth range huge (the last two arguments to glOrtho which I assume you're using). Depth buffer precision is limited, you'll run into Z-fighting issues if you try to use Z coordinates of 1 or 2 or so after setting your Z range from Integer.MIN_VALUE to Integer.MAX_VALUE. Also, you're using float coordinates, so int constants don't make sense here anyway.
Here's the code after a quick pass (without the texture atlas changes):
private static final float BLOCK_Z_DEPTH = -1; // change to whatever works for you

private static int blockCallList;
private static boolean regenerateBlockCallList; // set to true whenever you need to update some blocks

public static void init() {
    blockCallList = glGenLists(1);
    regenerateBlockCallList = true;
}

public static void render() {
    if (regenerateBlockCallList) {
        glNewList(blockCallList, GL_COMPILE_AND_EXECUTE);
        drawBlocks();
        glEndList();
        regenerateBlockCallList = false;
    } else {
        glCallList(blockCallList);
    }
}

private static void drawBlocks() {
    // Sorting...
    glPushMatrix();
    glTranslatef(0, 0, BLOCK_Z_DEPTH);
    for (SolidBlock block : xValues.keySet()) {
        List<Integer> blockXValues = xValues.get(block);
        List<Integer> blockYValues = yValues.get(block);
        block.getTexture().bind();
        glBegin(GL_QUADS);
        for (int coordinateIndex = 0; coordinateIndex < blockXValues.size(); coordinateIndex++) {
            int x = blockXValues.get(coordinateIndex);
            int y = blockYValues.get(coordinateIndex);
            addQuad(x, y);
        }
        glEnd();
        blockXValues.clear();
        blockYValues.clear();
    }
    glPopMatrix();
}

private static void addQuad(float x, float y) {
    glTexCoord2f(0, 0);
    glVertex2f(x, y);
    glTexCoord2f(1, 0);
    glVertex2f(x + 1, y);
    glTexCoord2f(1, 1);
    glVertex2f(x + 1, y + 1);
    glTexCoord2f(0, 1);
    glVertex2f(x, y + 1);
}
With modern OpenGL (vertex buffers, shaders and instancing instead of display lists, matrix transformations and passing vertices one by one) you'd approach this problem very differently, but I'll keep that beyond the scope of my answer.
I'm currently trying to get a very simple program to work. It just displays a white cross on a black background. The problem is that the rendering of my cross only works under strange conditions. These are all the conditions I have figured out thus far:
The layout of the vertex shader position input has to be greater than 2
Any call to glBindVertexArray(0) is causing the cross not to render even after calling glBindVertexArray(array)
I have to call glUseProgram before every draw call
As you might see, I have no idea anymore what is actually happening here. How do I fix this bug?
Here is the code:
int axesVBO;
int axesVAO;
int vert, frag;
int program;

@Override
public void display(GLAutoDrawable drawable) {
    System.out.println("Render");
    GL4 gl = drawable.getGL().getGL4();
    gl.glClear(GL4.GL_COLOR_BUFFER_BIT | GL4.GL_DEPTH_BUFFER_BIT);
    gl.glBindVertexArray(axesVAO);
    gl.glUseProgram(program); // Doesn't work without
    gl.glDrawArrays(GL4.GL_LINES, 0, 2);
    gl.glDrawArrays(GL4.GL_LINES, 2, 2);
    gl.glBindVertexArray(0); // After this line the cross isn't rendered anymore
}

@Override
public void dispose(GLAutoDrawable drawable) {
    GL4 gl = drawable.getGL().getGL4();
    gl.glDeleteBuffers(1, IntBuffer.wrap(new int[]{axesVBO}));
    gl.glDeleteVertexArrays(1, IntBuffer.wrap(new int[]{axesVAO}));
    gl.glDeleteProgram(program);
    gl.glDeleteShader(vert);
    gl.glDeleteShader(frag);
}

@Override
public void init(GLAutoDrawable drawable) {
    System.out.println("Init");
    GL4 gl = drawable.getGL().getGL4();
    IntBuffer buffer = Buffers.newDirectIntBuffer(2);
    gl.glGenBuffers(1, buffer);
    axesVBO = buffer.get(0);
    vert = gl.glCreateShader(GL4.GL_VERTEX_SHADER);
    frag = gl.glCreateShader(GL4.GL_FRAGMENT_SHADER);
    gl.glShaderSource(vert, 1, new String[]{"#version 410\n in vec2 pos;void main() {gl_Position = vec4(pos, 0, 1);}"}, null);
    gl.glShaderSource(frag, 1, new String[]{"#version 410\n out vec4 FragColor;void main() {FragColor = vec4(1, 1, 1, 1);}"}, null);
    gl.glCompileShader(vert);
    gl.glCompileShader(frag);
    if (GLUtils.getShaderiv(gl, vert, GL4.GL_COMPILE_STATUS) == GL.GL_FALSE) {
        System.out.println("Vertex shader compilation failed:");
        System.out.println(GLUtils.getShaderInfoLog(gl, vert));
    } else {
        System.out.println("Vertex shader compilation successful");
    }
    if (GLUtils.getShaderiv(gl, frag, GL4.GL_COMPILE_STATUS) == GL.GL_FALSE) {
        System.out.println("Fragment shader compilation failed:");
        System.out.println(GLUtils.getShaderInfoLog(gl, frag));
    } else {
        System.out.println("Fragment shader compilation successful");
    }
    program = gl.glCreateProgram();
    gl.glAttachShader(program, vert);
    gl.glAttachShader(program, frag);
    gl.glBindAttribLocation(program, 2, "pos"); // Only works when location is > 2
    gl.glLinkProgram(program);
    if (GLUtils.getProgramiv(gl, program, GL4.GL_LINK_STATUS) == GL.GL_FALSE) {
        System.out.println("Program linking failed:");
        System.out.println(GLUtils.getProgramInfoLog(gl, program));
    } else {
        System.out.println("Program linking successful");
    }
    gl.glBindBuffer(GL4.GL_ARRAY_BUFFER, axesVBO);
    gl.glBufferData(GL4.GL_ARRAY_BUFFER, Float.BYTES * 8, FloatBuffer.wrap(new float[]{-1f, 0, 1f, 0, 0, 1f, 0, -1f}), GL4.GL_STATIC_DRAW);
    gl.glUseProgram(program);
    buffer.clear();
    gl.glGenVertexArrays(1, buffer);
    axesVAO = buffer.get();
    gl.glBindVertexArray(axesVAO);
    int pos = gl.glGetAttribLocation(program, "pos");
    gl.glEnableVertexAttribArray(pos);
    gl.glBindBuffer(GL4.GL_ARRAY_BUFFER, axesVBO);
    gl.glVertexAttribPointer(pos, 2, GL4.GL_FLOAT, false, 0, 0);
    // Commented out for testing reasons (doesn't work when active)
    //gl.glBindVertexArray(0);
    gl.glClearColor(0f, 0f, 0f, 1f);
}
The conditions you figured out do look strange. Anyway, in general, having clean and simple code helps a lot to avoid nasty bugs. Start clean and simple and then build it up :)
A few considerations:
don't use int for the vbo and vao; use direct buffers directly
there's no need to declare vert and frag globally if they are only used in init; declare them locally in that method instead
prefer generating direct buffers with the JOGL utility GLBuffers.newDirect*Buffer(...)
prefer, at least at the beginning, the JOGL utilities (ShaderCode.create and ShaderProgram) to compile your shaders; they offload you from work and potential bugs and include deeper checks on every step of shader creation (sometimes even too much, but nowadays shaders compile so fast it doesn't matter)
if you have ARB_explicit_attrib_location (you can check with gl4.isExtensionAvailable("GL_ARB_explicit_attrib_location")), use it everywhere you can; it avoids a lot of potential bugs and overhead with any kind of location lookup (such as glBindAttribLocation and glGetAttribLocation)
better to pass a direct buffer to glBufferData, so that JOGL doesn't have to create one underneath and you can keep track of it to deallocate it later
keep the init clean and readable; you are mixing a lot of stuff together. For example, you generate the vbo at the beginning, then you create the program, then you upload data to the vbo.
calling gl.glUseProgram(program) in init makes no sense unless your idea is to bind it and leave it bound. Normally binding the program is part of the setup right before a rendering call, so it's better to move it into display().
prefer glClearBuffer to glClear
note that in gl.glDrawArrays(GL4.GL_LINES, 0, 2) the second argument is the index of the first vertex and the third is the vertex count, so each of your calls draws two vertices; the 0 is not the number of vertices
if you need inspiration, take a look at this Hello Triangle
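To illustrate the explicit-location point: with GL_ARB_explicit_attrib_location (core since OpenGL 3.3) the vertex shader can pin its own attribute location, so glBindAttribLocation and glGetAttribLocation become unnecessary. A sketch of such shader sources (these strings are my illustration, not the original code):

```java
public class ShaderSources {
    // The vertex shader fixes its own attribute location; the Java side can
    // then hard-code location 0 in glEnableVertexAttribArray/glVertexAttribPointer.
    public static final String VERT =
        "#version 410\n" +
        "layout(location = 0) in vec2 pos;\n" +
        "void main() { gl_Position = vec4(pos, 0, 1); }\n";

    public static final String FRAG =
        "#version 410\n" +
        "out vec4 FragColor;\n" +
        "void main() { FragColor = vec4(1, 1, 1, 1); }\n";
}
```

With this, the strange dependency on the bound location number disappears from the Java code entirely.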
I'm hoping to scale 12MP images from a machine vision camera using LWJGL 3 and an SWT GLCanvas.
Scaling is obviously computationally intensive so I'd like to get the GPU to take care of this for me, but I am very unfamiliar with OpenGL. Further, every example I've looked at for LWJGL appears to be for much older versions of LWJGL or use deprecated methods; it appears LWJGL has undergone radical changes throughout its life.
I've provided a sample class which should describe how I'm desiring to implement this functionality, but I need help filling in the blanks (preferably using modern OpenGL and LWJGL 3):
import org.eclipse.swt.SWT;
import org.eclipse.swt.layout.FillLayout;
import org.eclipse.swt.opengl.GLCanvas;
import org.eclipse.swt.opengl.GLData;
import org.eclipse.swt.widgets.Composite;
import org.lwjgl.opengl.GLContext;

public class LiveCameraView extends Composite
{
    private GLCanvas canvas;

    public LiveCameraView(Composite parent, int style)
    {
        super(parent, style);
        this.setLayout(new FillLayout());
        GLData data = new GLData();
        data.doubleBuffer = true;
        canvas = new GLCanvas(this, SWT.NONE, data);
    }

    public void updateImage(byte[] bgrPixels, int imageWidth, int imageHeight)
    {
        canvas.setCurrent();
        GLContext.createFromCurrent();

        /*
         * STEP 1: Translate pixels into a GL texture from the 3-byte BGR byte[]
         * buffer.
         */

        /*
         * STEP 2: Now that the GPU has the full sized image, we'll get the GPU to
         * scale the image appropriately.
         */
        double scalingFactor = getScalingFactor(imageWidth, imageHeight);

        canvas.swapBuffers();
    }

    private double getScalingFactor(int originalWidth, int originalHeight)
    {
        int availableWidth = canvas.getBounds().width;
        int availableHeight = canvas.getBounds().height;

        // We can either scale to the available width or the available height, but
        // in order to guarantee that the whole image is visible we choose the
        // smaller of the two scaling factors.
        double scaleWidth = (double) availableWidth / (double) originalWidth;
        double scaleHeight = (double) availableHeight / (double) originalHeight;
        double scale = Math.min(scaleWidth, scaleHeight);
        return scale;
    }
}
In a separate thread new images are being acquired from the camera continuously. Ideally that thread will asynchronously invoke the updateImage(...) method and provide the raw BGR data of the most recent image.
I believe this should be achievable using the outlined paradigm, but I could be way off base. I appreciate any good direction.
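For STEP 1, the usual approach (hedged sketch; glTexImage2D with GL_BGR/GL_UNSIGNED_BYTE is the standard OpenGL upload path, but the surrounding wiring here is my assumption, not tested code) is to wrap bgrPixels in a direct ByteBuffer, upload it once per frame, and let the GPU do the scaling for free when the textured quad is rasterized. The pure part of the computation, the on-screen size that preserves aspect ratio, can be factored out like this:

```java
public class FitMath {
    // On-screen width/height after uniform scaling so the whole image fits,
    // mirroring the getScalingFactor() logic in the class above.
    public static int[] fitInside(int imgW, int imgH, int availW, int availH) {
        double scale = Math.min((double) availW / imgW, (double) availH / imgH);
        return new int[] { (int) (imgW * scale), (int) (imgH * scale) };
    }
}
```

The upload itself would then look roughly like glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, imageWidth, imageHeight, 0, GL_BGR, GL_UNSIGNED_BYTE, buffer), followed by drawing one quad at the fitted size.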
As a final note, this question arose from my initial question asked here: My initial question concerning the general paradigm
When I use a ShapeRenderer, it always comes out pixelated. But if I draw the shape in photoshop with the same dimensions, it's very smooth and clean-looking.
My method is just as follows:
package com.me.actors;

import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.Color;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g2d.Sprite;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.graphics.glutils.ShapeRenderer;
import com.badlogic.gdx.graphics.glutils.ShapeRenderer.ShapeType;
import com.badlogic.gdx.scenes.scene2d.Actor;

public class bub_actors extends Actor {

    private ShapeRenderer shapes;
    private Texture text;
    private Sprite sprite;

    public bub_actors() {
        shapes = new ShapeRenderer();
        text = new Texture(Gdx.files.internal("data/circle.png"));
        sprite = new Sprite();
        sprite.setRegion(text);
    }

    @Override
    public void draw(SpriteBatch batch, float parentAlpha) {
        batch.draw(sprite, 200, 200, 64, 64);
        shapes.begin(ShapeType.FilledCircle);
        shapes.filledCircle(50, 50, 32);
        shapes.setColor(Color.BLACK);
        shapes.end();
    }
}
Here's an image of the output:
Any ideas as to why this happens? Is it possible to make the ShapeRenderer look like the image (so I don't have to create a SpriteBatch of different-colored circles...).
The difference is anti-aliasing that Photoshop applies to the image it generates. If you zoom in on the edges of the two circles, you'll see the anti-aliased one has some semi-black pixels around the edge, where the ShapeRenderer generated circle just shows pixels entirely on or off.
The libGDX ShapeRenderer was designed as a quick and simple way to get debugging shapes on the screen; it does not support anti-aliasing. The easiest way to get consistent anti-aliased rendering is to use a texture. (It's also possible with an OpenGL shader.)
That said, you do not have to create different sprites just to render different colored circles. Just use a white circle with a transparent background, and then render it with a color. (Assuming you want a variety of solid-colored circles).
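A sketch of the tinting approach (untested; it assumes a white-on-transparent circle in data/circle.png, reusing the path from the question, and a SpriteBatch that is already between begin() and end()):

```java
// Draw the same white circle texture in several colors by tinting it.
Sprite circle = new Sprite(new Texture(Gdx.files.internal("data/circle.png")));
circle.setBounds(200, 200, 64, 64);
circle.setColor(Color.RED);  // the batch multiplies texel colors by this tint
circle.draw(batch);
circle.setColor(Color.BLUE); // same texture, different solid color
circle.setPosition(300, 200);
circle.draw(batch);
```

Because the source pixels are white, the tint color comes through exactly, so one anti-aliased texture covers every circle color you need.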
Here is a really simple way to achieve smooth, good-looking shapes without using a texture and SpriteBatch.
All you have to do is render a couple of shapes with slightly larger size and
lower alpha along with the first one.
The more passes, the better the result, but, of course, consider the ppi of your screen.
...
float alphaMultiplier = 0.5f; // you may play with different coefficients
float radiusStep = radius / 200;
int sampleRate = 3;
...
// do not forget to enable blending
Gdx.gl.glEnable(GL20.GL_BLEND);
Gdx.gl.glBlendFunc(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA);

shapeRenderer.begin(ShapeType.Filled);

// first rendering
shapeRenderer.setColor(r, g, b, a);
shapeRenderer.circle(x, y, radius);

// additional renderings
for (int i = 0; i < sampleRate; i++) {
    a *= alphaMultiplier;
    radius += radiusStep;
    shapeRenderer.setColor(r, g, b, a);
    shapeRenderer.circle(x, y, radius);
}

shapeRenderer.end();
...
Here is a screenshot of what can you achieve.
If you're not teetering on the edge of losing frames, you can enable antialiasing in your launcher. You can increase the sample count for better results, but it's really diminishing returns.
LWJGL3 : config.setBackBufferConfig(8, 8, 8, 8, 16, 0, 2);
LWJGL2 : config.samples = 2;
GWT : config.antialiasing = true;
Android: config.numSamples = 2;
iOS : config.multisample = GLKViewDrawableMultisample._4X;
Okay, so I'm trying to get the tutorial here to work: http://lwjgl.org/wiki/index.php?title=GLSL_Shaders_with_LWJGL
My question is why aren't my shaders doing anything in this example? I'm very new to GLSL.
here is the code for the main class
import org.lwjgl.opengl.GL11;
import org.lwjgl.opengl.Display;
import org.lwjgl.opengl.DisplayMode;
import org.lwjgl.util.glu.GLU;

/*
 * Sets up the Display, the GL context, and runs the main game loop.
 *
 * @author Stephen Jones
 */
public class GLSLTest {

    Box box;
    private boolean done = false; // game runs until done is set to true

    public GLSLTest() {
        init();
        while (!done) {
            if (Display.isCloseRequested())
                done = true;
            render();
            Display.update();
        }
        Display.destroy();
    }

    private void render() {
        GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);
        GL11.glLoadIdentity();
        box.draw();
    }

    private void init() {
        int w = 1024;
        int h = 768;
        try {
            Display.setDisplayMode(new DisplayMode(w, h));
            Display.setVSyncEnabled(true);
            Display.setTitle("Shader Setup");
            Display.create();
        } catch (Exception e) {
            System.out.println("Error setting up display");
            System.exit(0);
        }
        GL11.glViewport(0, 0, w, h);
        GL11.glMatrixMode(GL11.GL_PROJECTION);
        GL11.glLoadIdentity();
        GLU.gluPerspective(45.0f, ((float) w / (float) h), 0.1f, 100.0f);
        GL11.glMatrixMode(GL11.GL_MODELVIEW);
        GL11.glLoadIdentity();
        GL11.glShadeModel(GL11.GL_SMOOTH);
        GL11.glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
        GL11.glClearDepth(1.0f);
        GL11.glEnable(GL11.GL_DEPTH_TEST);
        GL11.glDepthFunc(GL11.GL_LEQUAL);
        GL11.glHint(GL11.GL_PERSPECTIVE_CORRECTION_HINT, GL11.GL_NICEST);
        box = new Box();
    }

    public static void main(String[] args) {
        new GLSLTest();
    }
}
Here is the code for the Box class:
import org.lwjgl.opengl.GL11;
import java.io.BufferedReader;
import java.io.FileReader;
import java.nio.ByteBuffer;
import java.nio.IntBuffer;
import org.lwjgl.BufferUtils;
import org.lwjgl.opengl.ARBShaderObjects;
import org.lwjgl.opengl.ARBVertexShader;
import org.lwjgl.opengl.ARBFragmentShader;
import org.lwjgl.opengl.Util;

/**
 * The vertex and fragment shaders are set up when the box object is
 * constructed. They are applied to the GL state prior to the box
 * being drawn, and released from that state after drawing.
 * @author Stephen Jones
 */
public class Box {

    /*
     * if the shaders are set up OK we can use shaders, otherwise we just
     * use default settings
     */
    private boolean useShader = true;

    /*
     * program shader, to which the vertex and fragment shaders are attached.
     * They are set to 0 as a check because GL will assign unique int
     * values to each
     */
    private int shader = 0;
    private int vertShader = 0;
    private int fragShader = 0;

    public Box() {
        /*
         * create the shader program. If OK, create vertex
         * and fragment shaders
         */
        shader = ARBShaderObjects.glCreateProgramObjectARB();
        if (shader != 0) {
            vertShader = createVertShader("screen.vert");
            fragShader = createFragShader("screen.frag");
        } else
            useShader = false;

        /*
         * if the vertex and fragment shaders set up successfully,
         * attach them to the shader program, link the shader program
         * (into the GL context I suppose), and validate
         */
        if (vertShader != 0 && fragShader != 0) {
            ARBShaderObjects.glAttachObjectARB(shader, vertShader);
            ARBShaderObjects.glAttachObjectARB(shader, fragShader);
            ARBShaderObjects.glLinkProgramARB(shader);
            ARBShaderObjects.glValidateProgramARB(shader);
            useShader = printLogInfo(shader);
        } else
            useShader = false;
    }

    /*
     * If the shader was set up successfully, we use the shader. Otherwise
     * we run normal drawing code.
     */
    public void draw() {
        if (useShader) {
            ARBShaderObjects.glUseProgramObjectARB(shader);
        }
        GL11.glLoadIdentity();
        GL11.glTranslatef(0.0f, 0.0f, -10.0f);
        GL11.glColor3f(1.0f, 1.0f, 1.0f); // white

        GL11.glBegin(GL11.GL_QUADS);
        GL11.glVertex3f(-1.0f, 1.0f, 0.0f);
        GL11.glVertex3f(1.0f, 1.0f, 0.0f);
        GL11.glVertex3f(1.0f, -1.0f, 0.0f);
        GL11.glVertex3f(-1.0f, -1.0f, 0.0f);
        GL11.glEnd();

        // release the shader
        ARBShaderObjects.glUseProgramObjectARB(0);
    }

    /*
     * With the exception of syntax, setting up vertex and fragment shaders
     * is the same.
     * @param filename the name and path of the vertex shader
     */
    private int createVertShader(String filename) {
        // vertShader will be non-zero if successfully created
        vertShader = ARBShaderObjects.glCreateShaderObjectARB(ARBVertexShader.GL_VERTEX_SHADER_ARB);
        // if created, convert the vertex shader code to a String
        if (vertShader == 0) { return 0; }
        String vertexCode = "";
        String line;
        try {
            BufferedReader reader = new BufferedReader(new FileReader(filename));
            while ((line = reader.readLine()) != null) {
                vertexCode += line + "\n";
            }
        } catch (Exception e) {
            System.out.println("Failed reading vertex shader code");
            return 0;
        }
        /*
         * associate the vertex code String with the created vertex shader
         * and compile
         */
        ARBShaderObjects.glShaderSourceARB(vertShader, vertexCode);
        ARBShaderObjects.glCompileShaderARB(vertShader);
        // if there was a problem compiling, reset vertShader to zero
        if (!printLogInfo(vertShader)) {
            vertShader = 0;
        }
        // if zero we won't be using the shader
        return vertShader;
    }

    // same as per the vertex shader except for method syntax
    private int createFragShader(String filename) {
        fragShader = ARBShaderObjects.glCreateShaderObjectARB(ARBFragmentShader.GL_FRAGMENT_SHADER_ARB);
        if (fragShader == 0) { return 0; }
        String fragCode = "";
        String line;
        try {
            BufferedReader reader = new BufferedReader(new FileReader(filename));
            while ((line = reader.readLine()) != null) {
                fragCode += line + "\n";
            }
        } catch (Exception e) {
            System.out.println("Failed reading fragment shader code");
            return 0;
        }
        ARBShaderObjects.glShaderSourceARB(fragShader, fragCode);
        ARBShaderObjects.glCompileShaderARB(fragShader);
        if (!printLogInfo(fragShader)) {
            fragShader = 0;
        }
        return fragShader;
    }

    private static boolean printLogInfo(int obj) {
        IntBuffer iVal = BufferUtils.createIntBuffer(1);
        ARBShaderObjects.glGetObjectParameterARB(obj,
                ARBShaderObjects.GL_OBJECT_INFO_LOG_LENGTH_ARB, iVal);
        int length = iVal.get();
        if (length > 1) {
            // We have some info we need to output.
            ByteBuffer infoLog = BufferUtils.createByteBuffer(length);
            iVal.flip();
            ARBShaderObjects.glGetInfoLogARB(obj, iVal, infoLog);
            byte[] infoBytes = new byte[length];
            infoLog.get(infoBytes);
            String out = new String(infoBytes);
            System.out.println("Info log:\n" + out);
        } else
            return true;
        return false;
    }
}
Here is the code for the fragment shader
varying vec4 vertColor;

void main() {
    gl_FragColor = vertColor;
}
and vertex shader:
varying vec4 vertColor;

void main() {
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    vertColor = vec4(0.6, 0.3, 0.4, 1.0);
}
Here is the output I get when I run the code:
Info log:
Vertex shader was successfully compiled to run on hardware.
Info log:
Fragment shader was successfully compiled to run on hardware.
And here is a screenshot: http://dl.dropbox.com/u/28109593/glslss.png
The problem is due to the fact that you're seeing anything in the Info log at all, i.e. the driver is yielding a message on success, the way some OpenGL drivers are inclined to do. printLogInfo is also doubling as a validation function, returning false whenever there is any log output, which tells createVertShader and createFragShader to zero out your perfectly good shader IDs and return failure. It's really not a good design, for reasons exactly like this (and I know it came from someone else, so know that I'm not slagging you off personally :)
A quick workaround for this program ONLY would be to simply make printLogInfo always return true. What you ultimately need to do is check the return status, using glGetShader(id, param), like so:
glCompileShader(obj);
if (glGetShader(obj, GL_COMPILE_STATUS) == GL_FALSE)
    ... handle error here ...
Then for linking and validating:
glLinkProgram(obj);
if (glGetProgram(obj, GL_LINK_STATUS) == GL_FALSE)
    ... handle error here ...
glValidateProgram(obj);
if (glGetProgram(obj, GL_VALIDATE_STATUS) == GL_FALSE)
    ... handle error here ...
http://www.opengl.org/sdk/docs/man/xhtml/glGetShader.xml
http://www.opengl.org/sdk/docs/man/xhtml/glGetProgram.xml
I don't know the ARB_* equivalents of these, sorry, but you probably should be using OpenGL 2.0 API instead of ARB extensions for something this basic. Anything that still only supports shaders as an extension is probably not worth doing shaders on anyway.
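In LWJGL's OpenGL 2.0 API, the checks sketched above would read roughly as follows (an untested sketch, not a drop-in replacement for the ARB code; shaderId and programId are assumed to come from GL20.glCreateShader / GL20.glCreateProgram):

```java
// GL20-based status checks (LWJGL 2 style). Unlike printLogInfo above, this
// keys off the actual status flag, not the presence of an info log.
GL20.glCompileShader(shaderId);
if (GL20.glGetShaderi(shaderId, GL20.GL_COMPILE_STATUS) == GL11.GL_FALSE) {
    System.err.println(GL20.glGetShaderInfoLog(shaderId, 1024));
}
GL20.glLinkProgram(programId);
if (GL20.glGetProgrami(programId, GL20.GL_LINK_STATUS) == GL11.GL_FALSE) {
    System.err.println(GL20.glGetProgramInfoLog(programId, 1024));
}
```

The info log is then printed only for diagnostics; success or failure is decided by the status query alone.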
The shaders are doing what they're supposed to do.
Don't think of shaders in the sense of what you hear in video games; shaders are simply programs that run directly on the GPU.
The reason you're not seeing a difference in color is that the color isn't being output; try changing what the fragment shader writes to gl_FragColor.
If you're looking for more information on shader-driven OpenGL, I'd suggest taking a look at the online book
Learning Modern 3D Graphics Programming.
If you google that title, it should be the first result.
Hope this helps!