OpenGL wireframe render of an OBJ file - Java

I am writing a program that just needs to read a .obj file and render it in wireframe mode.
I have already read the .obj file (correctly, I believe) that I want to render, but I am having some problems... It is supposed to be in wireframe, but instead... (image below)
Here is the code:
public void render(GL gl){
    float xMiddle = (m.getXVertexMax() + m.getXVertexMin())/2;
    float yMiddle = (m.getYVertexMax() + m.getYVertexMin())/2;
    float zMiddle = (m.getZVertexMax() + m.getZVertexMin())/2;
    gl.glScalef(1/(m.getXVertexMax() - m.getXVertexMin()), 1, 1);
    gl.glScalef(1, 1/(m.getYVertexMax() - m.getYVertexMin()), 1);
    gl.glScalef(1, 1, 1/(m.getZVertexMax() - m.getZVertexMin()));
    gl.glBegin(GL.GL_TRIANGLES);
    {
        gl.glEnable(GL.GL_DEPTH_TEST);
        gl.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_LINES);
        for(int i = 0; i < m.faces.size(); i++){
            Vector3f n1 = m.normals.get(m.faces.get(i).getNormalIndices()[0] - 1);
            Vector3f v1 = m.vertices.get(m.faces.get(i).getVertexIndices()[0] - 1);
            gl.glVertex3f(v1.x - xMiddle, v1.y - yMiddle, v1.z - zMiddle);
            gl.glNormal3f(n1.x, n1.y, n1.z);
            Vector3f n2 = m.normals.get(m.faces.get(i).getNormalIndices()[1] - 1);
            Vector3f v2 = m.vertices.get(m.faces.get(i).getVertexIndices()[1] - 1);
            gl.glVertex3f(v2.x - xMiddle, v2.y - yMiddle, v2.z - zMiddle);
            gl.glNormal3f(n2.x, n2.y, n2.z);
            Vector3f n3 = m.normals.get(m.faces.get(i).getNormalIndices()[2] - 1);
            Vector3f v3 = m.vertices.get(m.faces.get(i).getVertexIndices()[2] - 1);
            gl.glVertex3f(v3.x - xMiddle, v3.y - yMiddle, v3.z - zMiddle);
            gl.glNormal3f(n3.x, n3.y, n3.z);
        }
    }
    gl.glEnd();
}
NOTE: Vector3f in the code is a data structure that I made.
I have tried everything I could find, but it still won't render the model in wireframe! :-/
Can anyone give a hand?

gl.glBegin(GL.GL_TRIANGLES);
{
    gl.glEnable(GL.GL_DEPTH_TEST);
    gl.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_LINES);
    ...
Very few GL commands are valid inside a glBegin()/glEnd() block:
Only a subset of GL commands can be used between glBegin and glEnd.
The commands are
glVertex, glColor, glSecondaryColor, glIndex, glNormal, glFogCoord, glTexCoord, glMultiTexCoord, glVertexAttrib, glEvalCoord, glEvalPoint,
glArrayElement, glMaterial, and glEdgeFlag. Also, it is acceptable
to use glCallList or glCallLists to execute display lists that include
only the preceding commands.
If any other GL command is executed between glBegin and glEnd, the error flag is set and the command is ignored.
glEnable() and glPolygonMode() are not on that list.
Move them outside your glBegin() block. Note also that glPolygonMode() takes the polygon-mode constant GL_LINE, not the primitive type GL_LINES; passing GL_LINES raises GL_INVALID_ENUM and leaves the mode unchanged.
gl.glVertex3f(v1.x - xMiddle, v1.y - yMiddle, v1.z - zMiddle);
gl.glNormal3f(n1.x, n1.y, n1.z);
Wrong way around. You want normal, then vertex:
gl.glNormal3f(n1.x, n1.y, n1.z);
gl.glVertex3f(v1.x - xMiddle, v1.y - yMiddle, v1.z - zMiddle);
glNormal() only sets the current normal; glVertex() is what actually sends a vertex (with that normal attached) down the pipeline.
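Putting both fixes together, here is a minimal sketch of the corrected method (assuming the same mesh object m and JOGL GL interface as above, and keeping the original centering/scaling lines):

public void render(GL gl){
    // ... centering and scaling code unchanged ...
    gl.glEnable(GL.GL_DEPTH_TEST);                       // state calls outside glBegin()/glEnd()
    gl.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_LINE);  // GL_LINE, not GL_LINES
    gl.glBegin(GL.GL_TRIANGLES);
    for(int i = 0; i < m.faces.size(); i++){
        for(int j = 0; j < 3; j++){
            Vector3f n = m.normals.get(m.faces.get(i).getNormalIndices()[j] - 1);
            Vector3f v = m.vertices.get(m.faces.get(i).getVertexIndices()[j] - 1);
            gl.glNormal3f(n.x, n.y, n.z);                // current normal first...
            gl.glVertex3f(v.x - xMiddle, v.y - yMiddle, v.z - zMiddle); // ...then the vertex
        }
    }
    gl.glEnd();
}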

Related

Keras Neural Network output different than Java TensorFlowInferenceInterface output

I have created a neural network in Keras using the InceptionV3 pretrained model:
base_model = applications.inception_v3.InceptionV3(weights='imagenet', include_top=False)
# add a global spatial average pooling layer
x = base_model.output
x = GlobalAveragePooling2D()(x)
# let's add a fully-connected layer
x = Dense(2048, activation='relu')(x)
x = Dropout(0.5)(x)
predictions = Dense(len(labels_list), activation='sigmoid')(x)
I trained the model successfully and now want to predict the following image: https://imgur.com/a/hoNjDfR. The image is cropped to 299x299 and normalized (just divided by 255):
def img_to_array(img, data_format='channels_last', dtype='float32'):
    if data_format not in {'channels_first', 'channels_last'}:
        raise ValueError('Unknown data_format: %s' % data_format)
    # Numpy array x has format (height, width, channel)
    # or (channel, height, width)
    # but original PIL image has format (width, height, channel)
    x = np.asarray(img, dtype=dtype)
    if len(x.shape) == 3:
        if data_format == 'channels_first':
            x = x.transpose(2, 0, 1)
    elif len(x.shape) == 2:
        if data_format == 'channels_first':
            x = x.reshape((1, x.shape[0], x.shape[1]))
        else:
            x = x.reshape((x.shape[0], x.shape[1], 1))
    else:
        raise ValueError('Unsupported image shape: %s' % (x.shape,))
    return x
def load_image_as_array(path):
    if pil_image is not None:
        _PIL_INTERPOLATION_METHODS = {
            'nearest': pil_image.NEAREST,
            'bilinear': pil_image.BILINEAR,
            'bicubic': pil_image.BICUBIC,
        }
        # These methods were only introduced in version 3.4.0 (2016).
        if hasattr(pil_image, 'HAMMING'):
            _PIL_INTERPOLATION_METHODS['hamming'] = pil_image.HAMMING
        if hasattr(pil_image, 'BOX'):
            _PIL_INTERPOLATION_METHODS['box'] = pil_image.BOX
        # This method is new in version 1.1.3 (2013).
        if hasattr(pil_image, 'LANCZOS'):
            _PIL_INTERPOLATION_METHODS['lanczos'] = pil_image.LANCZOS
    with open(path, 'rb') as f:
        img = pil_image.open(io.BytesIO(f.read()))
        width_height_tuple = (IMG_HEIGHT, IMG_WIDTH)
        resample = _PIL_INTERPOLATION_METHODS['nearest']
        img = img.resize(width_height_tuple, resample)
        return img_to_array(img, data_format=K.image_data_format())
img_array = load_image_as_array('https://imgur.com/a/hoNjDfR')
img_array = img_array/255
Then I predict it with the trained model in Keras:
predict(img_array.reshape(1,img_array.shape[0],img_array.shape[1],img_array.shape[2]))
The result is the following:
array([[0.02083278, 0.00425783, 0.8858412 , 0.17453966, 0.2628744 ,
0.00428194, 0.2307986 , 0.01038828, 0.07561868, 0.00983179,
0.09568241, 0.03087404, 0.00751176, 0.00651798, 0.03731382,
0.02220723, 0.0187968 , 0.02018479, 0.3416505 , 0.00586909,
0.02030778, 0.01660049, 0.00960067, 0.02457979, 0.9711478 ,
0.00666443, 0.01468313, 0.0035468 , 0.00694743, 0.03057212,
0.00429407, 0.01556832, 0.03173089, 0.01407397, 0.35166138,
0.00734553, 0.0508953 , 0.00336689, 0.0169737 , 0.07512951,
0.00484502, 0.01656419, 0.01643038, 0.02031735, 0.8343202 ,
0.02500874, 0.02459189, 0.01325032, 0.00414564, 0.08371573,
0.00484318]], dtype=float32)
The important point is that it has four values greater than 0.8:
>>> y[y>=0.8]
array([0.9100583 , 0.96635956, 0.91707945, 0.9711707 ], dtype=float32)
Now I have converted my network to .pb and imported it into an Android project. I wanted to predict the same image on Android, so I also resize the image and normalize it like I did in Python, using the following code:
// Resize image:
InputStream imageStream = getAssets().open("test3.jpg");
Bitmap bitmap = BitmapFactory.decodeStream(imageStream);
Bitmap resized_image = utils.processBitmap(bitmap,299);
and then normalize by using the following function:
public static float[] normalizeBitmap(Bitmap source, int size){
    float[] output = new float[size * size * 3];
    int[] intValues = new int[source.getHeight() * source.getWidth()];
    source.getPixels(intValues, 0, source.getWidth(), 0, 0, source.getWidth(), source.getHeight());
    for (int i = 0; i < intValues.length; ++i) {
        final int val = intValues[i];
        output[i * 3] = Color.blue(val) / 255.0f;
        output[i * 3 + 1] = Color.green(val) / 255.0f;
        output[i * 3 + 2] = Color.red(val) / 255.0f;
    }
    return output;
}
But in Java I get different values: none of the four indices has a value greater than 0.8; they are all between 0.1 and 0.4!
I have checked my code several times, but I don't understand why I don't get the same values for the same image on Android. Any idea or hint?
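One mismatch worth checking (an observation about the code shown above, not a verified fix): PIL hands Keras the pixels in RGB order, while normalizeBitmap() above writes blue, green, red into the float array. A quick sketch to compare the first pixel across both pipelines; using test3.jpg as a local copy of the image is an assumption:

# Sketch: print the first pixel as the Keras pipeline sees it; compare it to
# the first three floats Java's normalizeBitmap() emits for the same image.
img_array = load_image_as_array('test3.jpg') / 255.0
print(img_array[0, 0, :])  # PIL order: R, G, B -- the Java loop emits B, G, R

Note also that the Python code resizes with PIL's NEAREST filter, while utils.processBitmap() presumably uses Android's own scaler, so small per-pixel differences are expected even once the channel order agrees.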

How to edit an image with OpenCV to read text with OCR

I'm developing an Android app to recognize the text on a particular plate, as in the photo here:
I have to recognize the text in white (e.g. near "Mod."). I'm using Google ML Kit's text recognition APIs, but it fails. So I'm using OpenCV to edit the image, but I don't know how to emphasize the (white) text so the OCR recognizes it. I have tried several things, such as contrast, brightness, gamma correction, and adaptive thresholding, but the results vary a lot depending on how the photo is taken. Do you have any ideas?
Thank you very much.
I coded this example in Python (since OpenCV's SIFT on Android is part of the paid/non-free modules), but you can still use it to understand how to solve your case.
First I created this image as a template:
Step 1: Load images
""" 1. Load images """
# load image of plate
src_path = "nRHzD.jpg"
src = cv2.imread(src_path)
# load template of plate (to be looked for)
src_template_path = "nRHzD_template.jpg"
src_template = cv2.imread(src_template_path)
Step 2: Find the template using SIFT and perspective transformation
# convert images to gray scale
src_gray = cv2.cvtColor(src, cv2.COLOR_BGR2GRAY)
src_template_gray = cv2.cvtColor(src_template, cv2.COLOR_BGR2GRAY)
# use SIFT to find template
n_matches_min = 10
template_found, homography = find_template(src_gray, src_template_gray, n_matches_min)
warp = transform_perspective_and_crop(homography, src, src_gray, src_template)
warp_gray = cv2.cvtColor(warp, cv2.COLOR_BGR2GRAY)
warp_hsv = cv2.cvtColor(warp, cv2.COLOR_BGR2HSV)
template_hsv = cv2.cvtColor(src_template, cv2.COLOR_BGR2HSV)
Step 3: Find regions of interest (using the green parts of the template image)
green_hsv_lower_bound = [50, 250, 250]
green_hsv_upper_bound = [60, 255, 255]
mask_rois, mask_rois_img = crop_img_in_hsv_range(warp, template_hsv, green_hsv_lower_bound, green_hsv_upper_bound)
roi_list = separate_rois(mask_rois, warp_gray)
# sort the rois by distance to top right corner -> x (value[1]) + y (value[2])
roi_list = sorted(roi_list, key=lambda values: values[1]+values[2])
Step 4: Apply a Canny Edge detection to the rois (regions of interest)
for i, roi in enumerate(roi_list):
    roi_img, roi_x_offset, roi_y_offset = roi
    print("#roi:{} x:{} y:{}".format(i, roi_x_offset, roi_y_offset))
    roi_img_blur_threshold = cv2.Canny(roi_img, 40, 200)
    cv2.imshow("ROI image", roi_img_blur_threshold)
    cv2.waitKey()
There are many ways for you to detect the digits; one of the easiest approaches is to run K-Means clustering on each of the contours (a rough sketch follows the full code below).
Full code:
""" This code shows a way of getting the digit's edges in a pre-defined position (in green) """
import cv2
import numpy as np
def find_template(src_gray, src_template_gray, n_matches_min):
# Initiate SIFT detector
sift = cv2.xfeatures2d.SIFT_create()
""" find grid using SIFT """
# find the keypoints and descriptors with SIFT
kp1, des1 = sift.detectAndCompute(src_template_gray, None)
kp2, des2 = sift.detectAndCompute(src_gray, None)
FLANN_INDEX_KDTREE = 0
index_params = dict(algorithm = FLANN_INDEX_KDTREE, trees = 5)
search_params = dict(checks = 50)
flann = cv2.FlannBasedMatcher(index_params, search_params)
matches = flann.knnMatch(des1, des2, k=2)
# store all the good matches as per Lowe's ratio test.
good = []
for m,n in matches:
if m.distance < 0.7*n.distance:
good.append(m)
if len(good) > n_matches_min:
src_pts = np.float32([kp1[m.queryIdx].pt for m in good ]).reshape(-1,1,2)
dst_pts = np.float32([kp2[m.trainIdx].pt for m in good ]).reshape(-1,1,2)
M, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC,5.0)
matchesMask = mask.ravel().tolist()
h_template, w_template = src_template_gray.shape
pts = np.float32([[0, 0], [0, h_template - 1], [w_template - 1, h_template - 1], [w_template - 1,0]]).reshape(-1,1,2)
homography = cv2.perspectiveTransform(pts, M)
else:
print "Not enough matches are found - %d/%d" % (len(good), n_matches_min)
matchesMask = None
# show matches
draw_params = dict(matchColor = (0, 255, 0), # draw matches in green color
singlePointColor = None,
matchesMask = matchesMask, # draw only inliers
flags = 2)
if matchesMask:
src_gray_copy = src_gray.copy()
sift_matches = cv2.polylines(src_gray_copy, [np.int32(homography)], True, 255, 2, cv2.LINE_AA)
sift_matches = cv2.drawMatches(src_template_gray, kp1, src_gray_copy, kp2, good, None, **draw_params)
return sift_matches, homography
def transform_perspective_and_crop(homography, src, src_gray, src_template_gray):
""" get mask and contour of template """
mask_img_template = np.zeros(src_gray.shape, dtype=np.uint8)
mask_img_template = cv2.polylines(mask_img_template, [np.int32(homography)], True, 255, 1, cv2.LINE_AA)
_ret, contours, hierarchy = cv2.findContours(mask_img_template, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
template_contour = None
# approximate the contour
c = contours[0]
peri = cv2.arcLength(c, True)
approx = cv2.approxPolyDP(c, 0.02 * peri, True)
# if our approximated contour has four points, then
# we can assume that we have found our template
warp = None
if len(approx) == 4:
template_contour = approx
cv2.drawContours(mask_img_template, [template_contour] , -1, (255,0,0), -1)
""" Transform perspective """
# now that we have our template contour, we need to determine
# the top-left, top-right, bottom-right, and bottom-left
# points so that we can later warp the image -- we'll start
# by reshaping our contour to be our finals and initializing
# our output rectangle in top-left, top-right, bottom-right,
# and bottom-left order
pts = template_contour.reshape(4, 2)
rect = np.zeros((4, 2), dtype = "float32")
# the top-left point has the smallest sum whereas the
# bottom-right has the largest sum
s = pts.sum(axis = 1)
rect[0] = pts[np.argmin(s)]
rect[2] = pts[np.argmax(s)]
# compute the difference between the points -- the top-right
# will have the minumum difference and the bottom-left will
# have the maximum difference
diff = np.diff(pts, axis = 1)
rect[1] = pts[np.argmin(diff)]
rect[3] = pts[np.argmax(diff)]
# now that we have our rectangle of points, let's compute
# the width of our new image
(tl, tr, br, bl) = rect
widthA = np.sqrt(((br[0] - bl[0]) ** 2) + ((br[1] - bl[1]) ** 2))
widthB = np.sqrt(((tr[0] - tl[0]) ** 2) + ((tr[1] - tl[1]) ** 2))
# ...and now for the height of our new image
heightA = np.sqrt(((tr[0] - br[0]) ** 2) + ((tr[1] - br[1]) ** 2))
heightB = np.sqrt(((tl[0] - bl[0]) ** 2) + ((tl[1] - bl[1]) ** 2))
# take the maximum of the width and height values to reach
# our final dimensions
maxWidth = max(int(widthA), int(widthB))
maxHeight = max(int(heightA), int(heightB))
# construct our destination points which will be used to
# map the screen to a top-down, "birds eye" view
homography = np.array([
[0, 0],
[maxWidth - 1, 0],
[maxWidth - 1, maxHeight - 1],
[0, maxHeight - 1]], dtype = "float32")
# calculate the perspective transform matrix and warp
# the perspective to grab the screen
M = cv2.getPerspectiveTransform(rect, homography)
warp = cv2.warpPerspective(src, M, (maxWidth, maxHeight))
# resize warp
h_template, w_template, _n_channels = src_template_gray.shape
warp = cv2.resize(warp, (w_template, h_template), interpolation=cv2.INTER_AREA)
return warp
def crop_img_in_hsv_range(img, hsv, lower_bound, upper_bound):
mask = cv2.inRange(hsv, np.array(lower_bound), np.array(upper_bound))
# do an MORPH_OPEN (erosion followed by dilation) to remove isolated pixels
kernel = np.ones((5,5), np.uint8)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
# Bitwise-AND mask and original image
res = cv2.bitwise_and(img, img, mask=mask)
return mask, res
def separate_rois(column_mask, img_gray):
# go through each of the boxes
# https://stackoverflow.com/questions/41592039/contouring-a-binary-mask-with-opencv-python
border = cv2.copyMakeBorder(column_mask, 1, 1, 1, 1, cv2.BORDER_CONSTANT, value=0)
_, contours, hierarchy = cv2.findContours(border, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE, offset=(-1, -1))
cell_list = []
for contour in contours:
cell_mask = np.zeros_like(img_gray) # Create mask where white is what we want, black otherwise
cv2.drawContours(cell_mask, [contour], -1, 255, -1) # Draw filled contour in mask
# turn that mask into a rectangle
(x,y,w,h) = cv2.boundingRect(contour)
#print("x:{} y:{} w:{} h:{}".format(x, y, w, h))
cv2.rectangle(cell_mask, (x, y), (x+w, y+h), 255, -1)
# copy the img_gray using that mask
img_tmp_region = cv2.bitwise_and(img_gray, img_gray, mask= cell_mask)
# Now crop
(y, x) = np.where(cell_mask == 255)
(top_y, top_x) = (np.min(y), np.min(x))
(bottom_y, bottom_x) = (np.max(y), np.max(x))
img_tmp_region = img_tmp_region[top_y:bottom_y+1, top_x:bottom_x+1]
cell_list.append([img_tmp_region, top_x, top_y])
return cell_list
""" 1. Load images """
# load image of plate
src_path = "nRHzD.jpg"
src = cv2.imread(src_path)
# load template of plate (to be looked for)
src_template_path = "nRHzD_template.jpg"
src_template = cv2.imread(src_template_path)
""" 2. Find the plate (using the template image) and crop it into a rectangle """
# convert images to gray scale
src_gray = cv2.cvtColor(src, cv2.COLOR_BGR2GRAY)
src_template_gray = cv2.cvtColor(src_template, cv2.COLOR_BGR2GRAY)
# use SIFT to find template
n_matches_min = 10
template_found, homography = find_template(src_gray, src_template_gray, n_matches_min)
warp = transform_perspective_and_crop(homography, src, src_gray, src_template)
warp_gray = cv2.cvtColor(warp, cv2.COLOR_BGR2GRAY)
warp_hsv = cv2.cvtColor(warp, cv2.COLOR_BGR2HSV)
template_hsv = cv2.cvtColor(src_template, cv2.COLOR_BGR2HSV)
""" 3. Find regions of interest (using the green parts of the template image) """
green_hsv_lower_bound = [50, 250, 250]
green_hsv_upper_bound = [60, 255, 255]
mask_rois, mask_rois_img = crop_img_in_hsv_range(warp, template_hsv, green_hsv_lower_bound, green_hsv_upper_bound)
roi_list = separate_rois(mask_rois, warp_gray)
# sort the rois by distance to top right corner -> x (value[1]) + y (value[2])
roi_list = sorted(roi_list, key=lambda values: values[1]+values[2])
""" 4. Apply a Canny Edge detection to the rois (regions of interest) """
for i, roi in enumerate(roi_list):
roi_img, roi_x_offset, roi_y_offset = roi
print("#roi:{} x:{} y:{}".format(i, roi_x_offset, roi_y_offset))
roi_img_blur_threshold = cv2.Canny(roi_img, 40, 200)
cv2.imshow("ROI image", roi_img_blur_threshold)
cv2.waitKey()
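As a rough illustration of the K-Means idea mentioned above (a sketch only, not part of the tested pipeline): cluster the x-coordinates of the edge pixels in one ROI, one cluster per expected digit, to get an approximate horizontal position for each digit.

# Sketch: cluster edge-pixel x-coordinates of one ROI into k groups, one per
# expected digit. The digit count k is a hypothetical value for this ROI.
edges = roi_img_blur_threshold  # Canny output from step 4
ys, xs = np.where(edges > 0)
samples = np.float32(xs).reshape(-1, 1)
k = 4  # hypothetical number of digits in this ROI
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
_compactness, labels, centers = cv2.kmeans(samples, k, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)
for c in sorted(centers.ravel()):
    print("digit cluster centred at x ~ {:.0f}".format(c))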

Template matching false positive

I am using template matching in OpenCV Java to identify whether a subimage exists in a larger image. I want to get the coordinates of a match only if the exact subimage is present in the larger image. I am using the code below but getting a lot of false positives. Attached are the subimage and the larger image. The subimage is not present in the larger image, but I am getting a match at (873, 715): larger image subimage
public void run(String inFile, String templateFile, String outFile,
        int match_method) {
    System.out.println("Running Template Matching");
    Mat img = Imgcodecs.imread(inFile);
    Mat templ = Imgcodecs.imread(templateFile);
    // Create the result matrix
    int result_cols = img.cols() - templ.cols() + 1;
    int result_rows = img.rows() - templ.rows() + 1;
    Mat result = new Mat(result_rows, result_cols, CvType.CV_32FC1);
    // Do the Matching and Normalize
    Imgproc.matchTemplate(img, templ, result, match_method);
    // Core.normalize(result, result, 0, 1, Core.NORM_MINMAX, -1, new Mat());
    Imgproc.threshold(result, result, 0.1, 1.0, Imgproc.THRESH_TOZERO);
    // Localizing the best match with minMaxLoc
    MinMaxLocResult mmr = Core.minMaxLoc(result);
    Point matchLoc;
    if (match_method == Imgproc.TM_SQDIFF
            || match_method == Imgproc.TM_SQDIFF_NORMED) {
        matchLoc = mmr.minLoc;
    } else {
        matchLoc = mmr.maxLoc;
    }
    double threshold = 1.0;
    if (mmr.maxVal > threshold) {
        System.out.println(matchLoc.x + " " + matchLoc.y);
        Imgproc.rectangle(img, matchLoc, new Point(matchLoc.x + templ.cols(),
                matchLoc.y + templ.rows()), new Scalar(0, 255, 0));
    }
    // Save the visualized detection.
    Imgcodecs.imwrite(outFile, img);
}
Here is an answer to the same question: Determine if an image exists within a larger image, and if so, find it, using Python
You will have to convert the Python code to Java.
I am not familiar with OpenCV in Java, only OpenCV C++, but I don't think the following code is necessary:
Imgproc.threshold(result, result, 0.1, 1.0, Imgproc.THRESH_TOZERO);
The min/max values of the result Mat will be between -1 and 1 if you use a normalized match method. Therefore, the following check will not work, because your threshold is 1.0:
if (mmr.maxVal > threshold)
Also, if you use CV_TM_SQDIFF, the check should instead be
if (mmr.minVal < threshold)
with a proper threshold.
How about drawing minMaxLoc before comparing minVal/maxVal against the threshold, to see whether it gives a correct result? Because a match at (873, 715) is ridiculous.
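Putting that advice together, a short sketch of the corrected check (the 0.95 threshold is an assumption to tune, not a confirmed value):

// Sketch: with a normalized method, a near-exact match scores close to 1.0,
// so threshold strictly below 1.0; for SQDIFF methods test minVal instead.
Imgproc.matchTemplate(img, templ, result, Imgproc.TM_CCOEFF_NORMED);
MinMaxLocResult mmr = Core.minMaxLoc(result);
double threshold = 0.95; // hypothetical; tune on your own images
if (mmr.maxVal >= threshold) {
    System.out.println("match at " + mmr.maxLoc.x + " " + mmr.maxLoc.y);
} else {
    System.out.println("no sufficiently exact match found");
}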

JavaCV assertion failed

I am trying to use the JavaCV K-means implementation, but I get the following error:
OpenCV Error: Assertion failed (!centers.empty()) in cvKMeans2, file src\matrix.cpp, line 4233
My source code is:
IplImage src = cvLoadImage(fileName, CV_LOAD_IMAGE_COLOR);
int cluster_count = 3;
int attempts = 10;
CvTermCriteria termCriteria = new CvTermCriteria(TermCriteria.EPS + TermCriteria.MAX_ITER, 10, 1.0);
cvReshape(src, src.asCvMat(), 1, src.height() * src.width());
IplImage samples = cvCreateImage(cvGetSize(src), src.depth(), 1);
cvConvertImage(src, samples, CV_32F);
IplImage labels = cvCreateImage(new CvSize(samples.height()), 1, CV_8U);
IplImage centers = cvCreateImage(new CvSize(cluster_count), 1, CV_32F);
cvKMeans2(samples, cluster_count, labels, termCriteria, 1, new long[attempts], KMEANS_RANDOM_CENTERS, centers, new double[attempts]);
I'm a beginner with JavaCV and would like to know: what am I doing wrong in this code?
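For comparison, here is a minimal sketch of the same clustering done with OpenCV's own Java bindings rather than JavaCV's C wrappers; Core.kmeans allocates the labels and centers matrices itself, which avoids constructing them by hand as above. The file name and cluster count are placeholders:

// Sketch with OpenCV's Java API (not JavaCV). kmeans() requires a CV_32F
// matrix with one sample per row, so reshape the image to an N x 3 matrix.
System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
Mat src = Imgcodecs.imread("input.jpg");                 // placeholder path
Mat samples = src.reshape(1, src.rows() * src.cols());   // N x 3 pixel rows
samples.convertTo(samples, CvType.CV_32F);               // kmeans needs floats
Mat labels = new Mat();
Mat centers = new Mat();
TermCriteria criteria = new TermCriteria(TermCriteria.EPS + TermCriteria.MAX_ITER, 10, 1.0);
Core.kmeans(samples, 3, labels, criteria, 10, Core.KMEANS_RANDOM_CENTERS, centers);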

Android OGLES2.0: all applications getting errors

I'm new to the Android SDK and to programming with OpenGL ES 2.0. My problem is that most of the programs will not run on my PC.
I'm using an Android Virtual Device (Nexus 4) with 512 MB RAM, 64 MB VM heap, 512 MB internal storage, and Android 4.3 with API 18 (no SD card).
The sample code I'm trying to run is:
package com.example.mynewsample;
//
// Book:      OpenGL(R) ES 2.0 Programming Guide
// Authors:   Aaftab Munshi, Dan Ginsburg, Dave Shreiner
// ISBN-10:   0321502795
// ISBN-13:   9780321502797
// Publisher: Addison-Wesley Professional
// URLs:      http://safari.informit.com/9780321563835
//            http://www.opengles-book.com
//
// Hello_Triangle
//
// This is a simple example that draws a single triangle with
// a minimal vertex/fragment shader. The purpose of this
// example is to demonstrate the basic concepts of
// OpenGL ES 2.0 rendering.
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;
import android.content.Context;
import android.opengl.GLES20;
import android.opengl.GLSurfaceView;
import android.util.Log;

public class myTriangleRenderer implements GLSurfaceView.Renderer
{
    ///
    // Constructor
    //
    public myTriangleRenderer(Context context)
    {
        mVertices = ByteBuffer.allocateDirect(mVerticesData.length * 4)
                .order(ByteOrder.nativeOrder()).asFloatBuffer();
        mVertices.put(mVerticesData).position(0);
    }

    ///
    // Create a shader object, load the shader source, and
    // compile the shader.
    //
    private int LoadShader(int type, String shaderSrc)
    {
        int shader;
        int[] compiled = new int[1];
        // Create the shader object
        shader = GLES20.glCreateShader(type);
        if (shader == 0)
            return 0;
        // Load the shader source
        GLES20.glShaderSource(shader, shaderSrc);
        // Compile the shader
        GLES20.glCompileShader(shader);
        // Check the compile status
        GLES20.glGetShaderiv(shader, GLES20.GL_COMPILE_STATUS, compiled, 0);
        if (compiled[0] == 0)
        {
            Log.e(TAG, GLES20.glGetShaderInfoLog(shader));
            GLES20.glDeleteShader(shader);
            return 0;
        }
        return shader;
    }

    ///
    // Initialize the shader and program object
    //
    public void onSurfaceCreated(GL10 glUnused, EGLConfig config)
    {
        String vShaderStr =
                  "attribute vec4 vPosition;   \n"
                + "void main()                 \n"
                + "{                           \n"
                + "   gl_Position = vPosition; \n"
                + "}                           \n";
        String fShaderStr =
                  "precision mediump float;                     \n"
                + "void main()                                  \n"
                + "{                                            \n"
                + "  gl_FragColor = vec4 ( 1.0, 0.0, 0.0, 1.0 );\n"
                + "}                                            \n";
        int vertexShader;
        int fragmentShader;
        int programObject;
        int[] linked = new int[1];
        // Load the vertex/fragment shaders
        vertexShader = LoadShader(GLES20.GL_VERTEX_SHADER, vShaderStr);
        fragmentShader = LoadShader(GLES20.GL_FRAGMENT_SHADER, fShaderStr);
        // Create the program object
        programObject = GLES20.glCreateProgram();
        if (programObject == 0)
            return;
        GLES20.glAttachShader(programObject, vertexShader);
        GLES20.glAttachShader(programObject, fragmentShader);
        // Bind vPosition to attribute 0
        GLES20.glBindAttribLocation(programObject, 0, "vPosition");
        // Link the program
        GLES20.glLinkProgram(programObject);
        // Check the link status
        GLES20.glGetProgramiv(programObject, GLES20.GL_LINK_STATUS, linked, 0);
        if (linked[0] == 0)
        {
            Log.e(TAG, "Error linking program:");
            Log.e(TAG, GLES20.glGetProgramInfoLog(programObject));
            GLES20.glDeleteProgram(programObject);
            return;
        }
        // Store the program object
        mProgramObject = programObject;
        GLES20.glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    }

    ///
    // Draw a triangle using the shader pair created in onSurfaceCreated()
    //
    public void onDrawFrame(GL10 glUnused)
    {
        // Set the viewport
        GLES20.glViewport(0, 0, mWidth, mHeight);
        // Clear the color buffer
        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
        // Use the program object
        GLES20.glUseProgram(mProgramObject);
        // Load the vertex data
        GLES20.glVertexAttribPointer(0, 3, GLES20.GL_FLOAT, false, 0, mVertices);
        GLES20.glEnableVertexAttribArray(0);
        GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, 3);
    }

    ///
    // Handle surface changes
    //
    public void onSurfaceChanged(GL10 glUnused, int width, int height)
    {
        mWidth = width;
        mHeight = height;
    }

    // Member variables
    private int mProgramObject;
    private int mWidth;
    private int mHeight;
    private FloatBuffer mVertices;
    private static String TAG = "HelloTriangleRenderer";
    private final float[] mVerticesData =
            { 0.0f, 0.5f, 0.0f, -0.5f, -0.5f, 0.0f, 0.5f, -0.5f, 0.0f };
}
I have tried different virtual devices, but each time the app stops with the "Unfortunately, ... has stopped" message.
I am getting this with all OpenGL ES 2.0 programs that don't use a Canvas; a Canvas program runs correctly.
My experience thus far has always been that the Android emulator does not fully support OpenGL ES 2.0, only ES 1.x. By far the easiest approach is to test on a physical device.
However, please check out this question, which suggests it can now be done:
Android OpenGL ES 2.0 emulator
OpenGL ES 2.0 emulation on AVDs actually works pretty well now, since the Jelly Bean version. However, the critical factor is the underlying OpenGL driver installed on your host development system: it really must be a recent Nvidia or AMD driver. Also, installing Intel's HAXM makes it run much faster. See the third article here:
http://montgomery1.com/opengl/
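One more pitfall worth ruling out (a standard Android check, sketched here as an assumption about the Activity code not shown in the question, rather than a confirmed diagnosis): if the GLSurfaceView never requests an ES 2.0 context, every GLES20 call crashes at runtime, on emulators and devices alike.

// Sketch of the Activity-side setup; MainActivity is a hypothetical name,
// the renderer class matches the question.
public class MainActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        ActivityManager am = (ActivityManager) getSystemService(Context.ACTIVITY_SERVICE);
        ConfigurationInfo info = am.getDeviceConfigurationInfo();
        boolean supportsEs2 = info.reqGlEsVersion >= 0x20000;
        GLSurfaceView view = new GLSurfaceView(this);
        if (supportsEs2) {
            view.setEGLContextClientVersion(2); // must precede setRenderer()
            view.setRenderer(new myTriangleRenderer(this));
            setContentView(view);
        }
    }
}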
