I'm migrating my native Android game to libGDX, which is why I use flipped graphics. Apparently NinePatches can't be flipped (they become invisible or look strange).
What would be more efficient:
use one big TextureAtlas containing all graphic files and load it twice (flipped and unflipped) or
use one big TextureAtlas for the flipped graphic files and a second small one for the NinePatch graphics?
Type A:
public static TextureAtlas atlas, atlas2;
public static NinePatch button, dialog;
public static void load() {
// big atlas (1024 x 1024)
atlas = new TextureAtlas(Gdx.files.internal("game.atlas"), true);
// find many AtlasRegions here
// Same TextureAtlas. Loaded into memory twice?
atlas2 = new TextureAtlas(Gdx.files.internal("game.atlas"), false);
button = atlas2.createPatch("button");
dialog = atlas2.createPatch("dialog");
}
Type B:
public static TextureAtlas atlas, ninepatch;
public static NinePatch button, dialog;
public static void load() {
// big atlas (1024 x 1024)
atlas = new TextureAtlas(Gdx.files.internal("game.atlas"), true);
// find many AtlasRegions here
// small atlas (128 x 64)
ninepatch = new TextureAtlas(Gdx.files.internal("ninepatch.atlas"), false);
button = ninepatch.createPatch("button");
dialog = ninepatch.createPatch("dialog");
}
I don't have time to test this right now, so I'm just sketching the idea, but I think it can work. It is based on a plain Texture rather than a TextureAtlas, for simplicity:
short metadata = 2;
Texture yourTextureMetadata = new Texture(Gdx.files.internal(metaDataTexture));
int width = yourTextureMetadata.getWidth() - metadata;
int height = yourTextureMetadata.getHeight() - metadata;
TextureRegion yourTextureClean = new TextureRegion(yourTextureMetadata,
1, 1, width, height);
I assume the metadata has a total size of two pixels; I don't remember exactly now, sorry.
The idea is to take the larger texture, with its metadata, and then cut it down so that you have a clean version on the other side, which you can then flip. I hope it works.
For a TextureAtlas it would be similar: find the regions with findRegion(), cut off the metadata, and save them without it.
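Since the snippet above is libGDX-specific, here is the same cropping math as a plain-Java sketch. The 1 px-per-side metadata border is an assumption matching the snippet above, and `CleanRegionDemo` is a hypothetical name:

```java
// Plain-Java sketch of the cropping math above (no libGDX needed).
// Assumption: the NinePatch metadata is a 1 px border on each side,
// so the clean region starts at (1, 1) and is 2 px smaller per axis.
public class CleanRegionDemo {
    static int[] cleanRegion(int texWidth, int texHeight) {
        int metadata = 2; // 1 px on each side, as in the snippet above
        return new int[] { 1, 1, texWidth - metadata, texHeight - metadata };
    }

    public static void main(String[] args) {
        int[] r = cleanRegion(128, 64); // e.g. the small 128 x 64 atlas
        System.out.println(r[0] + "," + r[1] + "," + r[2] + "," + r[3]); // 1,1,126,62
    }
}
```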
On a separate note, be aware that you are using static textures. When Android switches context (the user goes to another application and then comes back to yours), you can get rendering glitches: your images may display as black.
I am trying to develop an app that allows me to draw on a video while recording it, and to then save both the recording and the drawing in one mp4 file for later use. Also, I want to use the camera2 library, especially since I need my app to run on devices with API 21 and higher, and I always avoid deprecated libraries.
I tried many ways to do it, including FFmpeg, in which I placed an overlay of the TextureView.getBitmap() (from the camera) and a bitmap taken from the canvas. It worked, but since it is a slow function, the video couldn't capture enough frames (not even 25 fps), and so it played back too fast. I want audio to be included as well.
I thought about the MediaProjection library, but I am not sure if it can capture only the layout containing the camera and the drawing inside its VirtualDisplay, because the app user may add text on the video as well, and I don't want the keyboard to appear.
Please help, it's been a week of research and I found nothing that worked fine for me.
P.S.: I don't have a problem if a little processing time is needed after the user presses the "Stop Recording" button.
EDITED:
Now, after Eddy's answer, I am using the shadercam app to draw on the camera surface, since it handles the video rendering. The workaround is to render my canvas into a bitmap and then into a GL texture, but I have not been able to do it successfully. I need your help guys, I need to finish the app :S
I am using the shadercam library (https://github.com/googlecreativelab/shadercam), and I replaced the "ExampleRenderer" file with the following code:
public class WriteDrawRenderer extends CameraRenderer
{
private float offsetR = 1f;
private float offsetG = 1f;
private float offsetB = 1f;
private float touchX = 1000000000;
private float touchY = 1000000000;
private Bitmap textBitmap;
private int textureId;
private boolean isFirstTime = true;
//creates a new canvas that will draw into a bitmap instead of rendering into the screen
private Canvas bitmapCanvas;
/**
* If we don't modify anything, the default shaders from shadercam's assets folder will be used.
*
* Base all shaders off those, since there are some default uniforms/textures that will
* be passed every time for the camera coordinates and texture coordinates
*/
public WriteDrawRenderer(Context context, SurfaceTexture previewSurface, int width, int height)
{
super(context, previewSurface, width, height, "touchcolor.frag.glsl", "touchcolor.vert.glsl");
//other setup if need be done here
}
/**
* we override {@link #setUniformsAndAttribs()} and make sure to call the super so we can add
* our own uniforms to our shaders here. CameraRenderer handles the rest for us automatically
*/
@Override
protected void setUniformsAndAttribs()
{
super.setUniformsAndAttribs();
int offsetRLoc = GLES20.glGetUniformLocation(mCameraShaderProgram, "offsetR");
int offsetGLoc = GLES20.glGetUniformLocation(mCameraShaderProgram, "offsetG");
int offsetBLoc = GLES20.glGetUniformLocation(mCameraShaderProgram, "offsetB");
GLES20.glUniform1f(offsetRLoc, offsetR);
GLES20.glUniform1f(offsetGLoc, offsetG);
GLES20.glUniform1f(offsetBLoc, offsetB);
if (touchX < 1000000000 && touchY < 1000000000)
{
//creates a Paint object
Paint yellowPaint = new Paint();
//makes it yellow
yellowPaint.setColor(Color.YELLOW);
//sets the anti-aliasing for texts
yellowPaint.setAntiAlias(true);
yellowPaint.setTextSize(70);
if (isFirstTime)
{
textBitmap = Bitmap.createBitmap(mSurfaceWidth, mSurfaceHeight, Bitmap.Config.ARGB_8888);
bitmapCanvas = new Canvas(textBitmap);
}
bitmapCanvas.drawText("Test Text", touchX, touchY, yellowPaint);
if (isFirstTime)
{
textureId = addTexture(textBitmap, "textBitmap");
isFirstTime = false;
}
else
{
updateTexture(textureId, textBitmap);
}
touchX = 1000000000;
touchY = 1000000000;
}
}
/**
* take touch points on that textureview and turn them into multipliers for the color channels
* of our shader, simple, yet effective way to illustrate how easy it is to integrate app
* interaction into our glsl shaders
* @param rawX raw x on screen
* @param rawY raw y on screen
*/
public void setTouchPoint(float rawX, float rawY)
{
this.touchX = rawX;
this.touchY = rawY;
}
}
Please help guys, it's been a month and I am still stuck on the same app :( and I have no idea about OpenGL. I've spent two weeks trying to use this project for my app, and nothing is being rendered on the video.
Thanks in advance!
Here's a rough outline that should work, but it's quite a bit of work:
1. Set up an android.media.MediaRecorder for recording the video and audio.
2. Get a Surface from the MediaRecorder and create an EGL window surface from it with eglCreateWindowSurface (https://developer.android.com/reference/android/opengl/EGL14.html#eglCreateWindowSurface(android.opengl.EGLDisplay, android.opengl.EGLConfig, java.lang.Object, int[], int)); you'll need a whole OpenGL context and setup for this. Then you'll need to set that EGL surface as your render target.
3. Create a SurfaceTexture within that GL context.
4. Configure the camera to send data to that SurfaceTexture.
5. Start the MediaRecorder.
6. On each frame received from the camera, convert the drawing done by the user to a GL texture, and composite the camera texture and the user drawing.
7. Finally, call eglSwapBuffers to send the composited frame to the video recorder.
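The steps above can be sketched roughly as follows. This is only a hedged outline against the Android APIs: eglDisplay, eglConfig, eglContext, cameraTexId, and outputPath are assumed to be created elsewhere, and error handling, sizes, and the actual compositing draw calls are elided.

```java
// Sketch only: EGL/GL setup and the compositing pass are elided.
MediaRecorder recorder = new MediaRecorder();
recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
recorder.setVideoSource(MediaRecorder.VideoSource.SURFACE);
recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
recorder.setOutputFile(outputPath);
recorder.prepare();

// Wrap the recorder's input Surface as the EGL render target.
EGLSurface eglSurface = EGL14.eglCreateWindowSurface(
        eglDisplay, eglConfig, recorder.getSurface(),
        new int[] { EGL14.EGL_NONE }, 0);
EGL14.eglMakeCurrent(eglDisplay, eglSurface, eglSurface, eglContext);

// Camera frames arrive on a SurfaceTexture bound to a GL texture id.
SurfaceTexture cameraTexture = new SurfaceTexture(cameraTexId);

recorder.start();

// Per frame: update the camera texture, draw it plus the user's
// drawing texture into eglSurface, then push the frame to the recorder.
cameraTexture.updateTexImage();
// ...composite camera texture + user-drawing texture here...
EGL14.eglSwapBuffers(eglDisplay, eglSurface);
```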
So I'm creating a 2D game and currently following a tutorial for adding custom font to the screen.
What I did is shown in the code below. I only copied the part of code relevant to this topic.
public class GameScreen implements Screen, InputProcessor {
private SpriteBatch batch = null;
private OrthographicCamera mCamera = null;
private BitmapFont scoreFont = null;
private Texture font_texture = null;
private int score = 0;
@Override
public void show() {
mCamera = new OrthographicCamera(1920, 1080);
font_texture = new Texture(Gdx.files.internal("font.png"));
font_texture.setFilter(Texture.TextureFilter.Linear, Texture.TextureFilter.Linear);
scoreFont = new BitmapFont(Gdx.files.internal("font.fnt"), new TextureRegion(font_texture), false);
batch = new SpriteBatch();
}
@Override
public void render(float delta) {
Gdx.gl.glClearColor(0, 0, 0, 0);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
batch.setProjectionMatrix(mCamera.combined);
batch.begin();
this.scoreFont.draw(batch, "" + score, 100, 220);
batch.end();
}
The problem is, the text is not showing. Could this be an outdated approach, or what am I missing?
If you want to draw fonts on your screen then you have many ways to do that:
A. Using .fnt format
1- Hiero: you can run it from your desktop project with Right Click -> Run As -> Java Application, then choose Hiero. Create any style you want for the font and save it as .fnt; the tool saves the image automatically. Then move these files into the assets folder of your Android project and load the font with the following simple code:
BitmapFont bFont = new BitmapFont(Gdx.files.internal("fonts/___.fnt")); // initialization
bFont.draw(batch, "" + score, 100, 220); // in render()
2- ShoeBox is a free Adobe AIR based app for Windows and Mac OS X with game and UI related tools. Each tool uses drag-and-drop or clipboard interaction for a quick workflow.
3- Glyph Designer is a powerful bitmap font designer, redesigned specifically for OS X Yosemite to take advantage of the latest features. Create beautiful designs using highly configurable effects, definable backgrounds, custom images, editable glyph metrics and more. Target hundreds of devices on multiple platforms with support for over 15 frameworks out of the box.
4- Glyphite is a browser-based Bitmap font generator that can create detailed Bitmap fonts in seconds and export them in most major formats.
B. Using .ttf format
1- First you must add the FreeType extension to your project; then you can use the following simple code:
FreeTypeFontGenerator generator = new FreeTypeFontGenerator(Gdx.files.internal("fonts/___.ttf"));
FreeTypeFontParameter parameter = new FreeTypeFontParameter();
parameter.size = 12;
BitmapFont font12 = generator.generateFont(parameter); // font size 12 pixels
generator.dispose(); // don't forget to dispose to avoid memory leaks!
Scene2d is a good library and makes it easy to handle Actors in a nice hierarchy:
//initialization
LabelStyle style = new LabelStyle(bFont, Color.BLUE);
Label label = new Label("", style);
stage.addActor(label);
//in render
label.setText("" + score);
stage.act();
stage.draw();
I'm hoping to scale 12MP images from a machine vision camera using LWJGL 3 and an SWT GLCanvas.
Scaling is obviously computationally intensive, so I'd like the GPU to take care of it for me, but I am very unfamiliar with OpenGL. Further, every example I've looked at for LWJGL appears to be for much older versions of LWJGL or uses deprecated methods; it appears LWJGL has undergone radical changes throughout its life.
I've provided a sample class which should describe how I'm desiring to implement this functionality, but I need help filling in the blanks (preferably using modern OpenGL and LWJGL 3):
import org.eclipse.swt.SWT;
import org.eclipse.swt.layout.FillLayout;
import org.eclipse.swt.opengl.GLCanvas;
import org.eclipse.swt.opengl.GLData;
import org.eclipse.swt.widgets.Composite;
import org.lwjgl.opengl.GLContext;
public class LiveCameraView extends Composite
{
private GLCanvas canvas;
public LiveCameraView(Composite parent, int style)
{
super(parent, style);
this.setLayout(new FillLayout());
GLData data = new GLData();
data.doubleBuffer = true;
canvas = new GLCanvas(this, SWT.NONE, data);
}
public void updateImage(byte[] bgrPixels, int imageWidth, int imageHeight)
{
canvas.setCurrent();
GLContext.createFromCurrent(); // note: in current LWJGL 3 builds this is GL.createCapabilities()
/*
* STEP 1: Translate pixels into a GL texture from the 3-byte BGR byte[]
* buffer.
*/
/*
* STEP 2: Now that the GPU has the full sized image, we'll get the GPU to
* scale the image appropriately.
*/
double scalingFactor = getScalingFactor(imageWidth, imageHeight);
canvas.swapBuffers();
}
private double getScalingFactor(int originalWidth, int originalHeight)
{
int availableWidth = canvas.getBounds().width;
int availableHeight = canvas.getBounds().height;
// We can either scale to the available width or the available height, but
// in order to guarantee that the whole image is visible we choose the
// smaller of the two scaling factors.
double scaleWidth = (double) availableWidth / (double) originalWidth;
double scaleHeight = (double) availableHeight / (double) originalHeight;
double scale = Math.min(scaleWidth, scaleHeight);
return scale;
}
}
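As a standalone sanity check of the aspect-fit math in getScalingFactor, here is a plain-Java sketch with no OpenGL involved; the 4000 x 3000 and 1280 x 720 numbers are just hypothetical examples, and `AspectFitDemo` is a made-up name:

```java
// Standalone check of the aspect-fit math used by getScalingFactor(...):
// pick the smaller of the two per-axis scale factors so the whole
// image fits inside the available area.
public class AspectFitDemo {
    static double getScalingFactor(int availW, int availH, int origW, int origH) {
        double scaleWidth = (double) availW / (double) origW;
        double scaleHeight = (double) availH / (double) origH;
        return Math.min(scaleWidth, scaleHeight);
    }

    public static void main(String[] args) {
        // Hypothetical numbers: a 4000 x 3000 (12 MP) frame in a 1280 x 720 canvas.
        double scale = getScalingFactor(1280, 720, 4000, 3000);
        int drawW = (int) Math.round(4000 * scale);
        int drawH = (int) Math.round(3000 * scale);
        System.out.println(drawW + "x" + drawH); // 960x720
    }
}
```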
In a separate thread, new images are continuously acquired from the camera. Ideally that thread will asynchronously invoke the updateImage(...) method and provide the raw BGR data of the most recent image.
I believe this should be achievable using the outlined paradigm, but I could be way off base. I'd appreciate any pointers in the right direction.
As a final note, this question arose from my initial question asked here: My initial question concerning the general paradigm
The context of the question is OpenGL ES 2.0 in the Android environment. I have a texture; displaying or using it is no problem.
Is there a method to get its width, height, and other info (like internal format) starting only from its binding id?
I need to save texture to bitmap without knowing the texture size.
Not in ES 2.0. It's actually kind of surprising that the functionality is not there. You can get the size of a renderbuffer, but not the size of a texture, which seems inconsistent.
The only thing available are the values you can get with glGetTexParameteriv(), which are the FILTER and WRAP parameters for the texture.
It's still not in ES 3.0 either. Only in ES 3.1, glGetTexLevelParameteriv() was added, which gives you access to all the values you're looking for. For example to get the width and height of the currently bound texture:
int[] texDims = new int[2];
GLES31.glGetTexLevelParameteriv(GLES31.GL_TEXTURE_2D, 0, GLES31.GL_TEXTURE_WIDTH, texDims, 0);
GLES31.glGetTexLevelParameteriv(GLES31.GL_TEXTURE_2D, 0, GLES31.GL_TEXTURE_HEIGHT, texDims, 1);
As @Reto Koradi said, there is no way to do it, but you can store the width and height of a texture when you load it from the Android context, before you bind it in OpenGL.
AssetManager am = context.getAssets();
InputStream is = null;
try {
is = am.open(name);
} catch (IOException e) {
e.printStackTrace();
}
final Bitmap bitmap = BitmapFactory.decodeStream(is);
int width = bitmap.getWidth();
int height = bitmap.getHeight();
// here is where you bind your texture in OpenGL, using the stored width and height
I'll suggest a hack for doing this: use ESSL's textureSize() function. To access its result from the CPU side, you'll have to pass the texture as a uniform to a shader and write the texture size into the r & g components of your shader output. Apply this shader to a 1x1 px primitive drawn into a 1x1 px FBO, then read the drawn value back from the GPU with glReadPixels.
You'll have to be careful with rounding, clamping, and FBO formats; you may need a 16-bit integer FBO format.
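A minimal fragment shader for this hack might look like the following sketch. Note that textureSize() requires ES Shading Language 3.00 (i.e. an ES 3.0 context), and a plain RGBA8 FBO will clamp the output to [0, 1], hence the format warning above; uTex and fragColor are assumed names:

```glsl
#version 300 es
precision highp float;
uniform sampler2D uTex;   // the texture whose size you want
out vec4 fragColor;

void main() {
    // Dimensions of mip level 0 of the bound texture.
    ivec2 size = textureSize(uTex, 0);
    // Pack width/height into r & g; needs a float or integer FBO
    // format, since RGBA8 would clamp these values to [0, 1].
    fragColor = vec4(float(size.x), float(size.y), 0.0, 1.0);
}
```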
I'm working on the Android half of a cross-platform Android/iOS framework that lets you write apps in JS that work on both platforms. I mention this because it means I can't use things like 9-patches to get this effect. Full code at https://github.com/mschulkind/cordova-true-native-android
Here are two screenshots of the problem:
-Images redacted because I'm too new to be this useful. I will have to add them when I'm no longer a newbie.-
Here's the code that generates the drawable from https://github.com/mschulkind/cordova-true-native-android/blob/master/src/org/apache/cordova/plugins/truenative/ViewPlugin.java#L146
// Borrowed from:
// http://www.betaful.com/2012/01/programmatic-shapes-in-android/
private class ViewBackground extends ShapeDrawable {
private final Paint mFillPaint, mStrokePaint;
private final int mBorderWidth;
public ViewBackground(
Shape s, int backgroundColor, int borderColor, int borderWidth) {
super(s);
mFillPaint = new Paint(this.getPaint());
mFillPaint.setColor(backgroundColor);
mStrokePaint = new Paint(mFillPaint);
mStrokePaint.setStyle(Paint.Style.STROKE);
mStrokePaint.setStrokeWidth(borderWidth);
mStrokePaint.setColor(borderColor);
mBorderWidth = borderWidth;
}
@Override
protected void onDraw(Shape shape, Canvas canvas, Paint paint) {
shape.resize(canvas.getClipBounds().right, canvas.getClipBounds().bottom);
Matrix matrix = new Matrix();
matrix.setRectToRect(
new RectF(
0, 0,
canvas.getClipBounds().right, canvas.getClipBounds().bottom),
new RectF(
mBorderWidth/2, mBorderWidth/2,
canvas.getClipBounds().right - mBorderWidth/2,
canvas.getClipBounds().bottom - mBorderWidth/2),
Matrix.ScaleToFit.FILL);
canvas.concat(matrix);
shape.draw(canvas, mFillPaint);
if (mBorderWidth > 0) {
shape.draw(canvas, mStrokePaint);
}
}
}
This has happened both when the drawable was set as the background of the EditText directly and when I set it as the background of a parent view around the EditText.
Anyone have an idea of what's going on here or what avenues I should explore?
Looks like you want to draw a rounded rectangle.
To achieve such a style, it is simpler to use an XML drawable.
Simply put an XML file into the drawable/ directory describing the desired shape.
Some documentation about XML drawables is here : http://idunnolol.com/android/drawables.html
Look at the <shape> tag.
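A minimal sketch of such a drawable; the file name, colors, and corner radius are hypothetical and should be adjusted to match the desired look:

```xml
<!-- res/drawable/rounded_bg.xml (hypothetical name) -->
<shape xmlns:android="http://schemas.android.com/apk/res/android"
    android:shape="rectangle">
    <!-- fill color -->
    <solid android:color="#FFFFFF" />
    <!-- border color and width -->
    <stroke android:width="2dp" android:color="#888888" />
    <!-- rounded corners -->
    <corners android:radius="8dp" />
</shape>
```

Then set it with android:background="@drawable/rounded_bg" on the EditText (or its parent view).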