Drawing OpenGL ES 2.0 Line with onTouch Android - java

I've downloaded and run the OpenGL sample from Android. Now I'm trying to draw a line with OpenGL ES 2.0 while the user moves their finger. So far this is what I've done.
In my GLSurfaceView class's onTouchEvent method:
@Override
public boolean onTouchEvent(MotionEvent e) {
mScaleDetector.onTouchEvent(e);
switch (e.getAction()) {
case MotionEvent.ACTION_DOWN: {
float x = e.getX();
float y = e.getY();
mPreviousX = x;
mPreviousY = y;
// Save the ID of this pointer
mActivePointerId = e.getPointerId(0);
break;
}
case MotionEvent.ACTION_MOVE: {
final int pointerIndex = e.findPointerIndex(mActivePointerId);
final float x = e.getX(pointerIndex);
final float y = e.getY(pointerIndex);
if (!mScaleDetector.isInProgress()) {
mRenderer.setLineCoordinates(mPreviousX, mPreviousY, x, y);
requestRender();
}
mPreviousX = x;
mPreviousY = y;
break;
}
case MotionEvent.ACTION_UP: {
mActivePointerId = INVALID_POINTER_ID;
break;
}
case MotionEvent.ACTION_CANCEL: {
mActivePointerId = INVALID_POINTER_ID;
break;
}
case MotionEvent.ACTION_POINTER_UP: {
final int pointerIndex = (e.getAction() & MotionEvent.ACTION_POINTER_INDEX_MASK)
>> MotionEvent.ACTION_POINTER_INDEX_SHIFT;
final int pointerId = e.getPointerId(pointerIndex);
if (pointerId == mActivePointerId) {
// This was our active pointer going up. Choose a new
// active pointer and adjust accordingly.
final int newPointerIndex = pointerIndex == 0 ? 1 : 0;
mPreviousX = e.getX(newPointerIndex);
mPreviousY = e.getY(newPointerIndex);
mActivePointerId = e.getPointerId(newPointerIndex);
}
break;
}
}
return true;
}
In my renderer class:
public void onDrawFrame(GL10 gl) {
// Draw background color
gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
// Set GL_MODELVIEW transformation mode
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity(); // reset the matrix to its default state
// When using GL_MODELVIEW, you must set the view point
GLU.gluLookAt(gl, 0, 0, -3, 0f, 0f, 0f, 0f, 1.0f, 0.0f);
gl.glScalef(factor, factor, 0.0f);
// Draw triangle
mTriangle.draw(gl);
mLine.draw(gl);
}
And the method inside the renderer class that updates the line coordinates:
public void setLineCoordinates(float mPreviousX
, float mPreviousY, float x, float y) {
// TODO Auto-generated method stub
lineCoords[0] = (float) (mPreviousX * 2.0 / WIDTH - 1.0);
lineCoords[1] = (float) (mPreviousY * -2.0 / HEIGHT + 1.0);
lineCoords[2] = 0.0f;
lineCoords[3] = (float) (x * 2.0 / WIDTH - 1.0);
lineCoords[4] = (float) (y * -2.0 / HEIGHT + 1.0);
lineCoords[5] = 0.0f;
}
When I move my finger, there's no persistent line, just a tiny piece of line that moves with my finger.
How can I draw the whole line and keep drawing it while I keep moving my finger, and also start a new line when I touch down on the screen again?
UPDATED
Thanks to @Gil Moshayof, I can now draw lines with my version, like this:
public void setLineCoordinates(float mPreviousX
, float mPreviousY, float x, float y) {
// TODO Auto-generated method stub
float lineCoords[] = new float[6];
lineCoords[0] = (float) (mPreviousX * 2.0 / WIDTH - 1.0);
lineCoords[1] = (float) (mPreviousY * -2.0 / HEIGHT + 1.0);
lineCoords[2] = 0.0f;
lineCoords[3] = (float) (x * 2.0 / WIDTH - 1.0);
lineCoords[4] = (float) (y * -2.0 / HEIGHT + 1.0);
lineCoords[5] = 0.0f;
bufferOfArrays.add(new Line(lineCoords));
if(isFirst) {
isFirst = false;
listOfArrays.addAll(bufferOfArrays);
view.requestRender();
}
}
public void setFirst(boolean isFirst) {
this.isFirst = isFirst;
}
and then:
@Override
public void onDrawFrame(GL10 gl) {
//... color, matrix, look at etc..
// Draw triangle
//mTriangle.draw(gl);
//TODO mLine.draw(gl);
for(Line line : listOfArrays) {
line.draw(gl);
}
isFirst = true;
}
But my app starts to slow down while I'm moving my finger, and I still have no idea why.

The problem here is that you're always setting mPreviousX & mPreviousY on every move. Actually, the problem starts with the name you chose for these parameters. A better name would be mLineStartX/Y.
Set mLineStartX/Y on the touch down event, then update the x/y on every move and don't change the mLineStartX/Y.
As for adding multiple lines, you'll need to implement a bit more logic. You'll need to maintain an array list of some class you define which holds 2 points (2 PointFs). The moment the user lifts their finger (touch up / touch cancel), create a new instance with the 2 points, and in your render method, make sure you go over all such lines and render them.
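A minimal sketch of that approach (names such as mLineStartX, mLines, commitCurrentLine and toLineCoords are illustrative, not taken from the original code):
// In the GLSurfaceView: fix the start point on ACTION_DOWN and only move the end point.
case MotionEvent.ACTION_DOWN: {
    mLineStartX = e.getX();
    mLineStartY = e.getY();
    break;
}
case MotionEvent.ACTION_MOVE: {
    mRenderer.setLineCoordinates(mLineStartX, mLineStartY, e.getX(), e.getY());
    requestRender();
    break;
}
case MotionEvent.ACTION_UP: {
    mRenderer.commitCurrentLine(); // store the finished line
    requestRender();
    break;
}
// In the renderer: one line currently being drawn plus a list of finished lines.
private final List<Line> mLines = new ArrayList<Line>();
private Line mCurrentLine;

public void setLineCoordinates(float startX, float startY, float endX, float endY) {
    // toLineCoords would do the same screen-to-GL conversion as before.
    mCurrentLine = new Line(toLineCoords(startX, startY, endX, endY));
}

public void commitCurrentLine() {
    if (mCurrentLine != null) {
        mLines.add(mCurrentLine);
        mCurrentLine = null;
    }
}

public void onDrawFrame(GL10 gl) {
    // ...clear, matrices, look-at, etc...
    for (Line line : mLines) {
        line.draw(gl);
    }
    if (mCurrentLine != null) {
        mCurrentLine.draw(gl);
    }
}
Because finished lines are stored once on touch up instead of on every move, the per-frame work stays roughly constant, which avoids the slowdown mentioned above.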

Related

How can I retrieve information on a canvas

I want to do a "zoomable paint", I mean a paint that I can zoom/zoom out and pan/drag the canvas and then draw on it.
I have a problem that I can't solve: when I draw while the canvas is zoomed, I retrieve the X and Y coordinate and effectively drawing it on the canvas. But these coordinates are not correct because of the zoomed canvas.
I tried to correct these (multiply by (zoomHeigh/screenHeight)) but I can't find a way to retrieve where I must draw on the original/none-zoomed screen
This is my code :
public class PaintView extends View {
public static int BRUSH_SIZE = 20;
public static final int DEFAULT_COLOR = Color.BLACK;
public static final int DEFAULT_BG_COLOR = Color.WHITE;
private static final float TOUCH_TOLERANCE = 4;
private float mX, mY;
private SerializablePath mPath;
private Paint mPaint;
private ArrayList<FingerPath> paths = new ArrayList<>();
private ArrayList<FingerPath> tempPaths = new ArrayList<>();
private int currentColor;
private int backgroundColor = DEFAULT_BG_COLOR;
private int strokeWidth;
private boolean emboss;
private boolean blur;
private boolean eraser;
private MaskFilter mEmboss;
private MaskFilter mBlur;
private Bitmap mBitmap;
private Canvas mCanvas;
private Paint mBitmapPaint = new Paint(Paint.DITHER_FLAG);
public boolean zoomViewActivated = false;
//These two constants specify the minimum and maximum zoom
private static float MIN_ZOOM = 1f;
private static float MAX_ZOOM = 5f;
private float scaleFactor = 1.f;
private ScaleGestureDetector detector;
//These constants specify the mode that we're in
private static int NONE = 0;
private static int DRAG = 1;
private static int ZOOM = 2;
private int mode;
//These two variables keep track of the X and Y coordinate of the finger when it first
//touches the screen
private float startX = 0f;
private float startY = 0f;
//These two variables keep track of the amount we need to translate the canvas along the X
//and the Y coordinate
private float translateX = 0f;
private float translateY = 0f;
//These two variables keep track of the amount we translated the X and Y coordinates, the last time we
//panned.
private float previousTranslateX = 0f;
private float previousTranslateY = 0f;
int currentPositionX = 0;
int currentPositionY = 0;
private class ScaleListener extends ScaleGestureDetector.SimpleOnScaleGestureListener {
@Override
public boolean onScale(ScaleGestureDetector detector) {
scaleFactor *= detector.getScaleFactor();
scaleFactor = Math.max(MIN_ZOOM, Math.min(scaleFactor, MAX_ZOOM));
return true;
}
}
public ArrayList<FingerPath> getDividedPaths(float i){
for(FingerPath p : this.paths){
Matrix scaleMatrix = new Matrix();
scaleMatrix.setScale(i, i);
p.path.transform(scaleMatrix);
}
return this.paths;
}
public ArrayList<FingerPath> getPaths() {
return paths;
}
public void dividePath(float i) {
for(FingerPath p : this.paths){
Matrix scaleMatrix = new Matrix();
scaleMatrix.setScale(i, i);
p.path.transform(scaleMatrix);
}
}
public void setPaths(ArrayList<FingerPath> paths){
this.paths = paths;
}
public void setStrokeWidth(int value){
strokeWidth = value;
}
public PaintView(Context context) {
this(context, null);
detector = new ScaleGestureDetector(getContext(), new ScaleListener());
}
public PaintView(Context context, AttributeSet attrs) {
super(context, attrs);
mPaint = new Paint();
mPaint.setAntiAlias(true);
mPaint.setDither(true);
mPaint.setColor(DEFAULT_COLOR);
mPaint.setStyle(Paint.Style.STROKE);
mPaint.setStrokeJoin(Paint.Join.ROUND);
mPaint.setStrokeCap(Paint.Cap.ROUND);
mPaint.setXfermode(null);
mPaint.setAlpha(0xff);
mEmboss = new EmbossMaskFilter(new float[] {1, 1, 1}, 0.4f, 6, 3.5f);
mBlur = new BlurMaskFilter(5, BlurMaskFilter.Blur.NORMAL);
detector = new ScaleGestureDetector(getContext(), new ScaleListener());
}
public void init(DisplayMetrics metrics) {
int height = metrics.heightPixels;
int width = metrics.widthPixels;
mBitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
mCanvas = new Canvas(mBitmap);
currentColor = DEFAULT_COLOR;
strokeWidth = BRUSH_SIZE;
}
public void normal() {
emboss = false;
blur = false;
eraser = false;
}
public void emboss() {
emboss = true;
blur = false;
eraser = false;
}
public void blur() {
emboss = false;
blur = true;
eraser = false;
}
public void eraser() {
eraser = true;
}
public void cancel(){
if(paths.size() != 0){
tempPaths.add(paths.get(paths.size()-1));
paths.remove(paths.size()-1);
invalidate();
}
}
public void redo(){
if(tempPaths.size() != 0){
paths.add(tempPaths.get(tempPaths.size()-1));
tempPaths.remove(tempPaths.size()-1);
invalidate();
}
}
public void clear() {
backgroundColor = DEFAULT_BG_COLOR;
paths.clear();
normal();
invalidate();
}
@Override
protected void onDraw(Canvas canvas) {
super.onDraw(canvas);
canvas.save();
//We're going to scale the X and Y coordinates by the same amount
canvas.scale(scaleFactor, scaleFactor);
//If translateX times -1 is lesser than zero, let's set it to zero. This takes care of the left bound
if((translateX * -1) < 0) {
translateX = 0;
}
//This is where we take care of the right bound. We compare translateX times -1 to (scaleFactor - 1) * displayWidth.
//If translateX is greater than that value, then we know that we've gone over the bound. So we set the value of
//translateX to (1 - scaleFactor) times the display width. Notice that the terms are interchanged; it's the same
//as doing -1 * (scaleFactor - 1) * displayWidth
else if((translateX * -1) > (scaleFactor - 1) * getWidth()) {
translateX = (1 - scaleFactor) * getWidth();
}
if(translateY * -1 < 0) {
translateY = 0;
}
//We do the exact same thing for the bottom bound, except in this case we use the height of the display
else if((translateY * -1) > (scaleFactor - 1) * getHeight()) {
translateY = (1 - scaleFactor) * getHeight();
}
//We need to divide by the scale factor here, otherwise we end up with excessive panning based on our zoom level
//because the translation amount also gets scaled according to how much we've zoomed into the canvas.
canvas.translate(translateX / scaleFactor, translateY / scaleFactor);
/* The rest of your canvas-drawing code */
mCanvas.drawColor(backgroundColor);
if(paths != null){
for (FingerPath fp : paths) {
mPaint.setColor(fp.color);
mPaint.setStrokeWidth(fp.strokeWidth);
mPaint.setMaskFilter(null);
if (fp.emboss)
mPaint.setMaskFilter(mEmboss);
else if (fp.blur)
mPaint.setMaskFilter(mBlur);
if(fp.eraser) {
mPaint.setColor(DEFAULT_BG_COLOR);
}
mCanvas.drawPath(fp.path, mPaint);
}
}
canvas.drawBitmap(mBitmap, 0, 0, mBitmapPaint);
canvas.restore();
}
private void touchStart(float x, float y) {
mPath = new SerializablePath();
FingerPath fp = new FingerPath(currentColor, emboss, blur, eraser, strokeWidth, mPath);
paths.add(fp);
tempPaths = new ArrayList<>();
mPath.reset();
mPath.moveTo(x, y);
mX = x;
mY = y;
}
private void touchMove(float x, float y) {
float dx = Math.abs(x - mX);
float dy = Math.abs(y - mY);
if (dx >= TOUCH_TOLERANCE || dy >= TOUCH_TOLERANCE) {
mPath.quadTo(mX, mY, (x + mX) / 2, (y + mY) / 2);
mX = x;
mY = y;
}
}
private void touchUp() {
mPath.lineTo(mX, mY);
}
@Override
public boolean onTouchEvent(MotionEvent event) {
if(zoomViewActivated){
boolean dragged = false;
switch (event.getAction() & MotionEvent.ACTION_MASK) {
case MotionEvent.ACTION_DOWN:
mode = DRAG;
//We assign the current X and Y coordinate of the finger to startX and startY minus the previously translated
//amount for each coordinates This works even when we are translating the first time because the initial
//values for these two variables is zero.
startX = event.getX() - previousTranslateX;
startY = event.getY() - previousTranslateY;
break;
case MotionEvent.ACTION_MOVE:
translateX = event.getX() - startX;
translateY = event.getY() - startY;
//We cannot use startX and startY directly because we have adjusted their values using the previous translation values.
//This is why we need to add those values to startX and startY so that we can get the actual coordinates of the finger.
double distance = Math.sqrt(Math.pow(event.getX() - (startX + previousTranslateX), 2) +
Math.pow(event.getY() - (startY + previousTranslateY), 2)
);
if(distance > 0) {
dragged = true;
}
break;
case MotionEvent.ACTION_POINTER_DOWN:
mode = ZOOM;
break;
case MotionEvent.ACTION_UP:
mode = NONE;
dragged = false;
//All fingers went up, so let's save the value of translateX and translateY into previousTranslateX and
//previousTranslate
previousTranslateX = translateX;
previousTranslateY = translateY;
currentPositionX += previousTranslateX;
currentPositionY += previousTranslateY;
break;
case MotionEvent.ACTION_POINTER_UP:
mode = DRAG;
//This is not strictly necessary; we save the value of translateX and translateY into previousTranslateX
//and previousTranslateY when the second finger goes up
previousTranslateX = translateX;
previousTranslateY = translateY;
break;
}
detector.onTouchEvent(event);
//We redraw the canvas only in the following cases:
//
// o The mode is ZOOM
// OR
// o The mode is DRAG and the scale factor is not equal to 1 (meaning we have zoomed) and dragged is
// set to true (meaning the finger has actually moved)
if ((mode == DRAG && scaleFactor != 1f && dragged) || mode == ZOOM) {
invalidate();
}
}else{
float x = event.getX()*(getHeight()/scaleFactor)/getHeight()+currentPositionX;
float y = event.getY()*(getWidth()/scaleFactor)/getWidth()+currentPositionY;
switch(event.getAction()) {
case MotionEvent.ACTION_DOWN :
touchStart(x, y);
invalidate();
break;
case MotionEvent.ACTION_MOVE :
touchMove(x, y);
invalidate();
break;
case MotionEvent.ACTION_UP :
touchUp();
invalidate();
break;
}
}
return true;
}
}

Pixel Perfect Collision detection between a custom view and an ImageView

I have a CustomView and an ImageView. The CustomView is a ball that moves around the screen and bounces off the walls. The ImageView is a quarter circle that you can rotate on touch. I am trying to make my game so that when the filled pixels from the CustomView cross paths with the filled pixels from the ImageView, a collision is detected. The problem I am having is that I do not know how to retrieve where the filled pixels are in each view.
Here is my XML code:
<com.leytontaylor.bouncyballz.AnimatedView
android:id="#+id/anim_view"
android:layout_height="fill_parent"
android:layout_width="fill_parent"
/>
<ImageView
android:layout_width="fill_parent"
android:layout_height="fill_parent"
android:id="#+id/quartCircle"
android:layout_gravity="center_horizontal"
android:src="#drawable/quartercircle"
android:scaleType="matrix"/>
My MainActivity
public class MainActivity extends AppCompatActivity {
private static Bitmap imageOriginal, imageScaled;
private static Matrix matrix;
private ImageView dialer;
private int dialerHeight, dialerWidth;
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
// load the image only once
if (imageOriginal == null) {
imageOriginal = BitmapFactory.decodeResource(getResources(), R.drawable.quartercircle);
}
// initialize the matrix only once
if (matrix == null) {
matrix = new Matrix();
} else {
// not needed, you can also post the matrix immediately to restore the old state
matrix.reset();
}
dialer = (ImageView) findViewById(R.id.quartCircle);
dialer.setOnTouchListener(new MyOnTouchListener());
dialer.getViewTreeObserver().addOnGlobalLayoutListener(new ViewTreeObserver.OnGlobalLayoutListener() {
@Override
public void onGlobalLayout() {
// method called more than once, but the values only need to be initialized one time
if (dialerHeight == 0 || dialerWidth == 0) {
dialerHeight = dialer.getHeight();
dialerWidth = dialer.getWidth();
// resize
Matrix resize = new Matrix();
resize.postScale((float) Math.min(dialerWidth, dialerHeight) / (float) imageOriginal.getWidth(), (float) Math.min(dialerWidth, dialerHeight) / (float) imageOriginal.getHeight());
imageScaled = Bitmap.createBitmap(imageOriginal, 0, 0, imageOriginal.getWidth(), imageOriginal.getHeight(), resize, false);
// translate to the image view's center
float translateX = dialerWidth / 2 - imageScaled.getWidth() / 2;
float translateY = dialerHeight / 2 - imageScaled.getHeight() / 2;
matrix.postTranslate(translateX, translateY);
dialer.setImageBitmap(imageScaled);
dialer.setImageMatrix(matrix);
}
}
});
}
MyOnTouchListener class:
private class MyOnTouchListener implements View.OnTouchListener {
private double startAngle;
@Override
public boolean onTouch(View v, MotionEvent event) {
switch (event.getAction()) {
case MotionEvent.ACTION_DOWN:
startAngle = getAngle(event.getX(), event.getY());
break;
case MotionEvent.ACTION_MOVE:
double currentAngle = getAngle(event.getX(), event.getY());
rotateDialer((float) (startAngle - currentAngle));
startAngle = currentAngle;
break;
case MotionEvent.ACTION_UP:
break;
}
return true;
}
}
private double getAngle(double xTouch, double yTouch) {
double x = xTouch - (dialerWidth / 2d);
double y = dialerHeight - yTouch - (dialerHeight / 2d);
switch (getQuadrant(x, y)) {
case 1:
return Math.asin(y / Math.hypot(x, y)) * 180 / Math.PI;
case 2:
return 180 - Math.asin(y / Math.hypot(x, y)) * 180 / Math.PI;
case 3:
return 180 + (-1 * Math.asin(y / Math.hypot(x, y)) * 180 / Math.PI);
case 4:
return 360 + Math.asin(y / Math.hypot(x, y)) * 180 / Math.PI;
default:
return 0;
}
}
/**
* @return The selected quadrant.
*/
private static int getQuadrant(double x, double y) {
if (x >= 0) {
return y >= 0 ? 1 : 4;
} else {
return y >= 0 ? 2 : 3;
}
}
/**
* Rotate the dialer.
*
* @param degrees The degrees by which the dialer should be rotated.
*/
private void rotateDialer(float degrees) {
matrix.postRotate(degrees, dialerWidth / 2, dialerHeight / 2);
dialer.setImageMatrix(matrix);
}
And my AnimatedView
public class AnimatedView extends ImageView {
private Context mContext;
int x = -1;
int y = -1;
private int xVelocity = 10;
private int yVelocity = 5;
private Handler h;
private final int FRAME_RATE = 60;
public AnimatedView(Context context, AttributeSet attrs) {
super(context, attrs);
mContext = context;
h = new Handler();
}
private Runnable r = new Runnable() {
@Override
public void run() {
invalidate();
}
};
protected void onDraw(Canvas c) {
BitmapDrawable ball = (BitmapDrawable) mContext.getResources().getDrawable(R.drawable.smallerball);
if (x<0 && y <0) {
x = this.getWidth()/2;
y = this.getHeight()/2;
} else {
x += xVelocity;
y += yVelocity;
if ((x > this.getWidth() - ball.getBitmap().getWidth()) || (x < 0)) {
xVelocity = xVelocity*-1;
}
if ((y > this.getHeight() - ball.getBitmap().getHeight()) || (y < 0)) {
yVelocity = yVelocity*-1;
}
}
c.drawBitmap(ball.getBitmap(), x, y, null);
h.postDelayed(r, FRAME_RATE);
}
public float getX() {
return x;
}
public float getY() {
return y;
}
}
My question is: How can I retrieve the filled pixels from both of these views, and pass them through a function that detects a collision.
Thanks in advance for the help!:)
You could encapsulate your images with an array of points marking your border coordinates (and update them on move / calculate them based on the origin). Then you'll be able to decide whether the opposing objects are touching or not (if any of the opposing arrays share the same point).
You really need to define "filled pixels". I assume you mean the non-transparent pixels. The easiest way to find those is by converting your entire view into a bitmap and iterating through its pixels. You can convert a View into a Bitmap like this:
private Bitmap getBitmapFromView(View view) {
view.buildDrawingCache();
Bitmap returnedBitmap = Bitmap.createBitmap(view.getMeasuredWidth(),
view.getMeasuredHeight(),
Bitmap.Config.ARGB_8888);
Canvas canvas = new Canvas(returnedBitmap);
canvas.drawColor(Color.WHITE, PorterDuff.Mode.SRC_IN);
Drawable drawable = view.getBackground();
if (drawable != null) drawable.draw(canvas);
view.draw(canvas);
return returnedBitmap;
}
You'd also need to get the absolute location of the views:
Point getViewLocationOnScreen(View v) {
int[] coor = new int[]{0, 0};
v.getLocationOnScreen(coor);
return new Point(coor[0], coor[1]);
}
Then you just have to iterate through the pixels of each Bitmap, check their colors to know whether they're "filled", and also check whether they're overlapping based on their coordinates inside the bitmaps and the absolute location of the views on screen.
Iterating through a bitmap's pixels is done like this:
for (int x = 0; x < myBitmap.getWidth(); x++) {
for (int y = 0; y < myBitmap.getHeight(); y++) {
int color = myBitmap.getPixel(x, y);
}
}
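Putting the pieces together, an overlap check could look roughly like this (a sketch, assuming "filled" means the pixel's alpha is above some threshold; pixelsCollide is an illustrative name, and the positions come from the getViewLocationOnScreen helper above):
private boolean pixelsCollide(Bitmap a, Point aPos, Bitmap b, Point bPos) {
    // Overlapping rectangle of the two views in screen coordinates.
    int left = Math.max(aPos.x, bPos.x);
    int top = Math.max(aPos.y, bPos.y);
    int right = Math.min(aPos.x + a.getWidth(), bPos.x + b.getWidth());
    int bottom = Math.min(aPos.y + a.getHeight(), bPos.y + b.getHeight());
    if (left >= right || top >= bottom) return false; // bounding boxes don't even intersect
    for (int x = left; x < right; x++) {
        for (int y = top; y < bottom; y++) {
            // A pixel counts as "filled" if it is not (nearly) transparent.
            boolean filledA = Color.alpha(a.getPixel(x - aPos.x, y - aPos.y)) > 20;
            boolean filledB = Color.alpha(b.getPixel(x - bPos.x, y - bPos.y)) > 20;
            if (filledA && filledB) return true;
        }
    }
    return false;
}
You would call it with the bitmaps and locations of the two views, for example pixelsCollide(getBitmapFromView(animView), getViewLocationOnScreen(animView), getBitmapFromView(dialer), getViewLocationOnScreen(dialer)), where animView stands for the AnimatedView instance.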
But I must say, I don't think performing this sort of heavy computation on the UI thread is really a good idea. There are dozens of much better ways to detect collisions than pixel-perfect checking. This will probably come out extremely laggy.
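For example, since the ball is essentially a circle, a much cheaper test is a simple shape check (purely illustrative, not part of the answer above):
// Cheaper alternative: treat the ball as a circle and test it against a bounding Rect.
private boolean circleIntersectsRect(float cx, float cy, float radius, Rect rect) {
    // Clamp the circle's center to the rectangle, then compare the distance to the radius.
    float nearestX = Math.max(rect.left, Math.min(cx, rect.right));
    float nearestY = Math.max(rect.top, Math.min(cy, rect.bottom));
    float dx = cx - nearestX;
    float dy = cy - nearestY;
    return dx * dx + dy * dy <= radius * radius;
}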

coordinate X Y draw Android

I programmed a drawing application and I want to retrieve all the X/Y points of my drawing. That is, each time I touch the screen I want to put the x and y coordinates into a two-dimensional array.
I made a toast to find out when the coordinates change, and I found that they change in the moveTouche method, so I declared an array in that method and made another toast to show the 10th row of my array. The toast showed the coordinates changing, so I realised the values are overwritten every time x and y change, and that's where I'm stuck.
public boolean onTouchEvent(MotionEvent event) {
float x = event.getX();
float y = event.getY();
switch (event.getAction()) {
case MotionEvent.ACTION_DOWN:
startTouch(x, y);
invalidate();
break;
case MotionEvent.ACTION_UP:
upTouch();
invalidate();
break;
case MotionEvent.ACTION_MOVE:
moveTouche(x, y);
invalidate();
break;
}
return true;
}
The moveTouche method:
public void moveTouche (float x,float y ) {
if ((canDraw)&& drawing) {
float dx = Math.abs(x - mX);
float dy = Math.abs(y - mY);
if(dx >= Tolerance || dy >= Tolerance){
path.quadTo(mX,mY,(x+mX)/2,(y+mY)/2);
mX = x ;
mY = y;
double[][] point = new double [99][2];
for (int i = 0; i < 99; i++) {
point[i][0]=x;
point[i][1]=y;
}
Toast.makeText(getContext(),"y = "+point[10][1]+" ",Toast.LENGTH_LONG).show();
}}
}
You can read as many points as you want from any path. For example, here is how to read the coordinates of the middle point of a path:
PathMeasure pm = new PathMeasure(myPath, false);
//coordinates will be here
float aCoordinates[] = {0f, 0f};
//get coordinates of the middle point
pm.getPosTan(pm.getLength() * 0.5f, aCoordinates, null);
You can pass any distance from the path start to get point coordinates.
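Applied to the question (collecting every point of the drawing), you can sample the whole path at a fixed step instead of just the middle; a sketch, where the 5-pixel step is an arbitrary choice:
PathMeasure pm = new PathMeasure(myPath, false);
float length = pm.getLength();
float[] pos = {0f, 0f};
List<float[]> points = new ArrayList<>();
// Walk along the path and record a point every 5 pixels.
for (float distance = 0f; distance <= length; distance += 5f) {
    pm.getPosTan(distance, pos, null);
    points.add(new float[] {pos[0], pos[1]}); // copy, because pos is reused on each call
}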

openGL in java: moving camera with TouchEvent

I need to move / change the position of the camera along the x and y axes by touching the screen. I've read many of the previous questions, but haven't found anything that solves my problem.
How can I use this code?
class ESSurfaceView extends GLSurfaceView {
private final float TOUCH_SCALE_FACTOR = 180.0f / 320;
private float mPreviousX;
private float mPreviousY;
@Override
public boolean onTouchEvent(MotionEvent e) {
float x = e.getX();
float y = e.getY();
switch (e.getAction()) {
case MotionEvent.ACTION_MOVE:
float dx = x - mPreviousX;
float dy = y - mPreviousY;
if (y > getHeight() / 2) {
dx = dx * -1 ;
}
if (x < getWidth() / 2) {
dy = dy * -1 ;
}
GLRenderer.setAngle(
GLRenderer.getAngle() +
((dx + dy) * TOUCH_SCALE_FACTOR));
requestRender();
}
mPreviousX = x;
mPreviousY = y;
return true;
}
public ESSurfaceView(Context context)
{
super(context);
setEGLContextClientVersion(2);
GLRenderer renderer = new GLRenderer();
setRenderer(renderer);
// Render the view only when there is a change in the drawing data
//setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY);
}}
The official Android training has an example (almost identical to yours) for that:
https://developer.android.com/training/graphics/opengl/touch.html
It also offers complete source.
By the way, you never move the camera in OpenGL; you move the world.
For further understanding please read:
http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/
This should give you a better understanding of how things work.
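For reference, a minimal sketch of how the angle set from onTouchEvent is typically consumed in an ES 2.0 renderer, following the pattern on the linked training page (mAngle and mRotationMatrix are the names used there; mMVPMatrix and mTriangle are assumed to be set up elsewhere as in that sample):
private final float[] mRotationMatrix = new float[16];
private volatile float mAngle; // written from the UI thread, read on the GL thread

public void setAngle(float angle) { mAngle = angle; }
public float getAngle() { return mAngle; }

public void onDrawFrame(GL10 unused) {
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
    // Rotate the scene (the "world") around the Z axis by the touch-driven angle.
    Matrix.setRotateM(mRotationMatrix, 0, mAngle, 0, 0, -1.0f);
    // Combine the rotation with the projection/view matrix, then draw.
    float[] scratch = new float[16];
    Matrix.multiplyMM(scratch, 0, mMVPMatrix, 0, mRotationMatrix, 0);
    mTriangle.draw(scratch);
}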

Moving cube in opengl on android help

After a lot of trial and error I was able to draw a cube and a tile map for the cube to sit in front of. I added input so that when you move your finger, the camera moves. But now I want to add the next step in my game's progression: I want my cube to move to where I press my finger in 3D space. It sounds a bit difficult. What I wanted to do is use one finger to slide the camera around the map and the other to double tap and move. So far my cube jitters and wobbles in place but doesn't move to where I want it to go. Of course none of my code is perfect, and my use of gluUnProject might be far from correct. Here is my code so far.
The cube creator and update code (it's short):
public shapemaker(int x, int y, int z,int width)
{
centerx =x;
centery =y;
centerz =z;
mysquare1= new square(centerx,centery,centerz-5,1,.0f,.0f,1f,"vert");
mysquare2= new square(centerx-1,centery,centerz-6f,1,.0f,1.0f,.0f,"side");
mysquare3= new square(centerx,centery,centerz-7,1,1.0f,.0f,.0f,"vert");
mysquare4= new square(centerx+1,centery,centerz-6f,1,.0f,.5f,.5f,"side");
mysquare5= new square(centerx,centery-1,centerz-6f,1,.5f,.5f,.0f,"horiz");
mysquare6= new square(centerx,centery+1,centerz-6f,1,.5f,.0f,.5f,"horiz");
}
public void updatecube(float x, float y, float z)
{
centerx =x;
centery =y;
centerz =z;
mysquare1= new square(centerx,centery,centerz-5,1,.0f,.0f,1f,"vert");
mysquare2= new square(centerx-1,centery,centerz-6f,1,.0f,1.0f,.0f,"side");
mysquare3= new square(centerx,centery,centerz-7,1,1.0f,.0f,.0f,"vert");
mysquare4= new square(centerx+1,centery,centerz-6f,1,.0f,.5f,.5f,"side");
mysquare5= new square(centerx,centery-1,centerz-6f,1,.5f,.5f,.0f,"horiz");
mysquare6= new square(centerx,centery+1,centerz-6f,1,.5f,.0f,.5f,"horiz");
}
public void drawcube(GL10 gl)
{
mysquare1.draw(gl);
mysquare2.draw(gl);
mysquare3.draw(gl);
mysquare4.draw(gl);
mysquare5.draw(gl);
mysquare6.draw(gl);
}
public float getcenterx()
{
return centerx;
}
public float getcentery()
{
return centery;
}
public float getcenterz()
{
return centerz;
}
Here is the actual implementation code; it is based on Google's intro to OpenGL:
class ClearGLSurfaceView extends GLSurfaceView {
private static final String BufferUtils = null;
float x =0;
float y =0;
float z =0f;
float prevx=0;
float prevy=0;
GL10 mygl;
// GL gl;
float mywidth;
float myheight;
public ClearGLSurfaceView(Context context,float width, float height) {
super(context);
mRenderer = new ClearRenderer();
setRenderer(mRenderer);
mywidth = width;
myheight = height;
}
@Override
public boolean onTouchEvent(final MotionEvent event)
{
queueEvent(new Runnable() {
// Find the index of the active pointer and fetch its position
public void run()
{
float curx = event.getX();
float cury = event.getY();
final int action = event.getAction();
if(action == MotionEvent.ACTION_MOVE)
{
if(curx>prevx)
{
x-=.1f;
}
if(curx<prevx)
{
x+=.1f;
}
if(cury>prevy)
{
y+=.1f;
}
if(cury<prevy)
{
y-=.1f;
}
}
if(action == MotionEvent.ACTION_DOWN)
{
// GLU.gluUnProject(winX, winY, winZ, model,
// modelOffset, project, projectOffset, view, viewOffset, obj, objOffset)
vector2d moveto = new vector2d(0,0);
moveto = mRenderer.getworkcoords(curx,cury);
Log.i("move to ", "x "+moveto.x+" y "+ moveto.y+" z " + moveto.z);
mRenderer.updatemoveto(moveto.x, moveto.y);
}
prevx= curx;
prevy = cury;
mRenderer.updatecamera(x, y, z);
}
});
return true;
}
ClearRenderer mRenderer;
}
class ClearRenderer implements GLSurfaceView.Renderer {
GL10 mygl;
GL11 mygl11;
int viewport[] = new int[4];
float modelview[] = new float[16];
float projection[] = new float[16];
float wcoord[] = new float[4];
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
gl.glClearColor(0.0f, 0.0f, 0.0f, 0.5f); // OpenGL docs.
gl.glShadeModel(GL10.GL_SMOOTH);// OpenGL docs.
gl.glClearDepthf(1.0f);// OpenGL docs.
gl.glEnable(GL10.GL_DEPTH_TEST);// OpenGL docs.
gl.glDepthFunc(GL10.GL_LEQUAL);// OpenGL docs.
gl.glHint(GL10.GL_PERSPECTIVE_CORRECTION_HINT, // OpenGL docs.
GL10.GL_NICEST);
mygl = gl;
mygl11 = (GL11) gl;
int index;
int counter = 0;
float zcoord = -7.0f;
mygl11.glGetFloatv(GL11.GL_MODELVIEW_MATRIX, modelview, 0);
mygl11.glGetFloatv(GL11.GL_PROJECTION_MATRIX, projection, 0);
mygl11.glGetIntegerv(GL11.GL_VIEWPORT, viewport, 0);
for(int x=0;x<11;x++)
{
for(int y =0;y<10;y++)
{
index = tilemap1[y][x];
if(index==0)
{
tiles[counter]=new square(x,y,zcoord,0.5f,1.0f,0.0f,0f,"vert");
}
if(index==1)
{
tiles[counter]=new square(x,y,zcoord,0.5f,0f,1.0f,0f,"vert");
}
if(index==2)
{
tiles[counter]=new square(x,y,zcoord,0.5f,0.0f,0.0f,1f,"vert");
}
counter++;
}
}
}
public vector2d getworkcoords(float width,float height)
{
float[] depth = new float[1];
float winx = width;
float winy =viewport[3]-height;
//vector2d position = new vector2d(0,0);
int test = GLU.gluUnProject(winx, winy,0.0f, modelview, 0, projection,
0, viewport, 0, wcoord, 0);
vector2d v = new vector2d(0,0);
v.x = wcoord[0];
v.y = wcoord[1];
v.z = wcoord[2];
return v ;
}
public void onSurfaceChanged(GL10 gl, int w, int h) {
gl.glViewport(0, 0, w, h);
gl.glMatrixMode(GL10.GL_PROJECTION);// OpenGL docs.
gl.glLoadIdentity();// OpenGL docs.
GLU.gluPerspective(gl, 45.0f,
(float) w / (float) h,
0.1f, 100.0f);
gl.glMatrixMode(GL10.GL_MODELVIEW);// OpenGL docs.
gl.glLoadIdentity();// OpenGL docs.
}
public float movetox;
public float movetoy;
float currentposx;
float currentposy;
public void updatemoveto(float x , float y)
{
movetox = x;
movetoy = y;
}
public void onDrawFrame(GL10 gl) {
gl.glClearColor(mRed, mGreen, mBlue, 1.0f);
gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
currentposx = mycube.getcenterx();
currentposy = mycube.getcentery();
float x = movetox -currentposx;
float y = movetoy - currentposy;
double angle = Math.atan2(y, x);
double mx =.5f* Math.cos(angle);
double my = .5f* Math.sin(angle);
double mmx = mx+ currentposx;
double mmy = my+ currentposy;
mycube.updatecube((float)(mmx), (float)(mmy),0);
mycube.drawcube(gl);
for(int i = 0;i<110;i++)
{
tiles[i].draw(gl);
}
}
public void setColor(float r, float g, float b) {
mRed = r;
mGreen = g;
mBlue = b;
}
public void updatecamera(float myx, float myy, float myz)
{
mygl.glLoadIdentity();// OpenGL docs.
GLU.gluLookAt(mygl, myx, myy, myz, myx, myy, myz-10, 0, 1, 0);
}
private float mRed;
private float mGreen;
private float mBlue;
int tilemap1[][] = {
{0,0,0,0,0,0,0,0,0,0,0},
{0,1,2,1,1,1,1,1,1,1,0},
{0,1,2,1,1,1,1,1,1,1,0},
{0,1,2,1,1,1,1,1,1,1,0},
{0,1,2,2,2,1,1,1,1,1,0},
{0,1,1,1,2,1,1,1,1,1,0},
{0,1,1,1,2,1,1,1,1,1,0},
{0,1,1,1,2,1,1,1,1,1,0},
{0,1,1,1,2,2,1,1,1,1,0},
{0,0,0,0,0,0,0,0,0,0,0},
};
square[] tiles = new square[110];
shapemaker mycube = new shapemaker(0,0,0,1);
}
I did not include the onCreate and onPause methods. I would also like advice on designing the code structure, because my implementation is far from perfect.
So, in short: I would like help figuring out how to modify my code to make the cube move to where I press, and how you would improve the code.
Thanks
First of all,
you don't need to recreate the vertices on every move. That is an expensive operation and it will lower the frame rate (FPS). In your code you have a model matrix. Usually it is responsible for translating, scaling and rotating your model (the mesh; the cube in your example). In the common case you translate the model matrix and then multiply the model, view and projection matrices, which gives you the matrix for your device screen. Then, by multiplying a vertex position by this matrix, you get the final position of the vertex.
you can look at this question/answer
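Concretely, with the fixed-function (GL10) code above that means building the cube once and moving it with the model-view matrix each frame instead of calling updatecube; a rough sketch, assuming the cube is constructed around the origin:
public void onDrawFrame(GL10 gl) {
    gl.glClearColor(mRed, mGreen, mBlue, 1.0f);
    gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
    gl.glMatrixMode(GL10.GL_MODELVIEW);
    gl.glPushMatrix();
    // Translate the model-view matrix instead of rebuilding the six squares every frame.
    gl.glTranslatef(currentposx, currentposy, 0f);
    mycube.drawcube(gl); // cube geometry created once, at the origin
    gl.glPopMatrix();
    for (int i = 0; i < 110; i++) {
        tiles[i].draw(gl);
    }
}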
