I am new to Android, and my group is currently creating a graphing application using a GLSurfaceView with OpenGL ES 2.0.
We have recently displayed the grid and tick marks on the plot, and I have now been assigned the task of implementing a numeric scale and labeling the x and y axes as "X" and "Y".
After doing a lot of research I have decided to accomplish this by rendering a string of characters to a bitmap. I have encountered many problems in achieving this. I understand the basic concept. I know I will need the characters "0123456789", "XY", and "-" (for the negative x and y scales). I have seen many different examples and have tried extensively to follow JVitela's example here.
I am beginning to grasp the concept, but as far as my string goes, I know I have 13 characters, so how large should my bitmap be?
Also, in JVitela's example I am dumbfounded by this code:
Drawable background = context.getResources().getDrawable(R.drawable.background);
I don't understand what exactly is going on, and when I write this I receive a syntax error on context.
For my application I understand I would need to save the string into a bitmap much like the one below. I would create a bitmap, but how big should it be? Then I would create a canvas from the bitmap and canvas.drawText into it.
[ 0 1 2 3 4 ]
| 5 6 7 8 9 |
[ X Y - ]
Basically I am asking:
How do I create the bitmap shown above?
How would I draw single-digit numbers from the bitmap?
How would I draw numbers with more than one digit?
You're asking a lot of questions, but I'll try to answer a few:
so how large should my bitmap be?
It's really up to you, depending on how crisp you want the text to be. You could allocate a huge bitmap with hundreds of pixels for each character that would zoom very well, or a very small bitmap with limited resolution. I'd say whatever "font size" you want to have, allocate at least that many pixels in height for each character. So if you want to draw something with a font size of 20, then for a 5x3 grid of characters you need a bitmap of roughly 5x20 by 3x20, or 100x60.
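As a rough, untested sketch of how such an atlas could be built on Android with Canvas.drawText (the 20-pixel cell size, the Paint settings, and the final GLUtils call are just example choices, not something from your code):

// Sketch only: a 100x60 atlas holding the 13 characters in a 5x3 grid of 20px cells.
String chars = "0123456789XY-";
int cell = 20, cols = 5, rows = 3;
Bitmap atlas = Bitmap.createBitmap(cols * cell, rows * cell, Bitmap.Config.ARGB_8888);
Canvas canvas = new Canvas(atlas);
canvas.drawColor(Color.TRANSPARENT);
Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
paint.setColor(Color.WHITE);
paint.setTextSize(cell);
paint.setTextAlign(Paint.Align.CENTER);
for (int i = 0; i < chars.length(); i++) {
    int col = i % cols, row = i / cols;
    float x = col * cell + cell / 2f;               // centre of the cell
    float y = row * cell + cell - paint.descent();  // rough baseline inside the cell
    canvas.drawText(chars, i, i + 1, x, y, paint);
}
// Then upload "atlas" as a texture, e.g. with GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, atlas, 0).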
How would I draw single digit numbers from the bit map?
You'll draw a quad with opengl in the place where you want to draw a letter, and you use the texture coordinates of that quad to pick a letter.
For example, if I want to draw an X, then you draw a quad on the screen and assign its texcoords from (0,0) to (0.2, 0.33), which selects the left 1/5th of the texture and the bottom 1/3rd of the texture. You'll see how a box like this lines up with the position of the "X" in your texture.
How would I draw numbers with more than one digit?
You just draw two independent single digits right next to each other.
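A small sketch of that idea, continuing the hypothetical 5x3 atlas from above (drawTexturedQuad stands in for whatever GLES 2.0 quad routine you already have; none of these names come from your code):

// Map a character to its cell in the 13-character atlas "0123456789XY-".
static final String ATLAS_CHARS = "0123456789XY-";
static final int COLS = 5, ROWS = 3;

// Returns {uMin, vMin, uMax, vMax} for one character.
static float[] texCoordsFor(char c) {
    int i = ATLAS_CHARS.indexOf(c);
    int col = i % COLS, row = i / COLS;
    float u0 = col / (float) COLS, v0 = row / (float) ROWS;
    // Depending on how the bitmap is uploaded you may need to flip v (use 1 - v)
    // to get coordinates like the (0,0) to (0.2, 0.33) example above for "X".
    return new float[] { u0, v0, u0 + 1f / COLS, v0 + 1f / ROWS };
}

// Numbers with more than one digit: one quad per character, advancing x each time.
static void drawNumber(String text, float x, float y, float glyphW, float glyphH) {
    for (int i = 0; i < text.length(); i++) {
        float[] uv = texCoordsFor(text.charAt(i));
        drawTexturedQuad(x + i * glyphW, y, glyphW, glyphH, uv);
    }
}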
If your only goal here is to draw text in Android, it might be easier to just use a FrameLayout and layer TextViews on top of your GLSurfaceView. OpenGL isn't designed for text, which makes it somewhat cumbersome.
In short, I'm making a simulation with a bunch of creatures that can see each other. The way I want to do this is to capture an area around each creature and feed it to its neural network, and make them evolve to recognize their surroundings. I am coding this using libGDX, and I don't plan on taking a screenshot every single frame, because I can imagine that is already a very poor idea. The problem is that I don't know how to get the pixels inside a defined square without capturing the entire screen and then cherry-picking what I want for each creature, which would cause a massive lag spike, since the area these creatures live in is 2000x2000: that is 4 million pixels, or about 12 million RGB values.
Each creature is about 5 pixels wide and tall, so my idea is to give each one a 16x16 area around it, which is why iterating through the entire frame buffer won't work; it would pointlessly walk through millions of values before finding the ones I asked for.
I would also need to be able to capture areas outside of the screen (that is, beyond the window's boundaries), if that is even possible.
How can I achieve this? I'm aiming for performance, but I do not mind distributing the load between multiple frames or even multithreading.
The problem is you can't query pixels in a framebuffer.
You can capture a texture from a framebuffer, and you can convert a texture to a pixmap.
libgdx TextureRegion to Pixmap
You can then getPixel(int x, int y) against the pixmap.
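For the framebuffer route, a minimal libGDX sketch might look like the following; creatureX/creatureY are hypothetical screen coordinates, and the exact method name depends on your libGDX version (ScreenUtils.getFrameBufferPixmap in older releases, Pixmap.createFromFrameBuffer in newer ones):

// Grab only the 16x16 region around one creature instead of the whole screen.
Pixmap region = ScreenUtils.getFrameBufferPixmap(creatureX - 8, creatureY - 8, 16, 16);
for (int y = 0; y < 16; y++) {
    for (int x = 0; x < 16; x++) {
        int rgba8888 = region.getPixel(x, y);   // feed these values to the neural network
    }
}
region.dispose();   // Pixmaps are native memory; always dispose them
// Note: this reads the currently bound framebuffer, so for areas outside the window
// you would render the world into an offscreen FrameBuffer first.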
However, maybe going the other way would be better.
Start with a pixmap, work with the pixmap, and for each frame convert the pixmap to a texture and render that texture fullscreen. This also removes the need for the creatures' environment to match the screen resolution (although you could still set it up like that).
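A rough sketch of that pixmap-first approach (the 2000x2000 size comes from your question; sense(), creatureX/creatureY and batch are placeholders for your own code):

// Keep the world in a Pixmap, read each creature's view from it, and upload it once per frame.
Pixmap world = new Pixmap(2000, 2000, Pixmap.Format.RGBA8888);
Texture worldTexture = new Texture(world);          // created once

// Each simulation step:
world.setColor(Color.BLACK);
world.fill();
// ... draw creatures, food, etc. with world.fillCircle(...), world.drawPixel(...) ...

// A creature "sees" a 16x16 block around itself, even if that part is off screen
// (keep the creature at least 8 pixels inside the world, or clamp the coordinates).
for (int dy = -8; dy < 8; dy++)
    for (int dx = -8; dx < 8; dx++)
        sense(world.getPixel(creatureX + dx, creatureY + dy));

worldTexture.draw(world, 0, 0);                     // re-upload the pixels
batch.begin();
batch.draw(worldTexture, 0, 0);                     // draw it at whatever scale you like
batch.end();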
Referring to my previous question Check if a curve is closed
I would like to know how to reduce the thickness of the curve to a single pixel.
For example, imagine that every pixel is a green square. If I have this part of the curve:
Before - thickness of many pixels
I would like to be able to transform it like this:
After - 1 pixel
(or even variants, as long as any stretch that is continuous remains continuous)
My input will be a BufferedImage of a white curve on a black background.
The family of algorithms you are looking for is called skeletonization or homotopic thinning.
Homotopic thinning is a conditional erosion, where a pixel is not removed if removing it would change the topology.
Skeletonization can be implemented using homotopic thinning, but also in other ways. The result of a skeletonization is a one-pixel thick line that goes through the center of an object.
These are not trivial algorithms to implement, and I am not going to explain how they work here. You should use a library with image-processing functionality; don't reinvent the wheel.
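For example (my own sketch, not tested against your images), ImageJ exposes this as ByteProcessor.skeletonize(); depending on ImageJ's foreground/background convention you may need the invert calls, or need to drop them:

import ij.ImagePlus;
import ij.process.ByteProcessor;
import java.awt.image.BufferedImage;

public class CurveThinner {
    // Reduces a white-on-black curve in a BufferedImage to a one-pixel-wide skeleton.
    public static BufferedImage thin(BufferedImage input) {
        ByteProcessor bp = (ByteProcessor) new ImagePlus("", input).getProcessor().convertToByte(false);
        bp.invert();         // ImageJ's binary ops may expect the opposite polarity; adjust as needed
        bp.skeletonize();
        bp.invert();
        return bp.getBufferedImage();
    }
}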
How to draw warped text like this picture in libgdx?
There are different methods to do this – and they do not come standard in libgdx, so you will have to implement one yourself.
Convert the text to outlines. Warp each of the coordinates. Draw polyfilled objects using these warped coordinates. This is what professional software such as Adobe Illustrator and CorelDraw do.
Draw the text into a bitmap. Warp the bitmap. For a better result, draw the bitmap at twice the output size so you can use subsampling.
(Based on the rather poor quality of the sample image) Draw each of the characters slightly rotated. You can base the amount of rotation on the total number of characters (quick, dirty, and simple), or ever so slightly improve it by using the individual widths of each character to determine its relative position inside the entire string, and base the amount of rotation on that.
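As a rough illustration of that last option (my own untested sketch; text, font, batch, centerX, centerY, radius and arcDegrees are all assumed variables, and the layout math is made up):

// Place each character of "text" on an arc and rotate it to follow the curve.
GlyphLayout layout = new GlyphLayout();
Matrix4 oldTransform = batch.getTransformMatrix().cpy();
for (int i = 0; i < text.length(); i++) {
    String ch = String.valueOf(text.charAt(i));
    layout.setText(font, ch);
    float t = text.length() > 1 ? i / (float) (text.length() - 1) : 0.5f;
    float angle = 90f + arcDegrees * (0.5f - t);           // sweep centred on "straight up"
    float rad = (float) Math.toRadians(angle);
    float x = centerX + radius * (float) Math.cos(rad);
    float y = centerY + radius * (float) Math.sin(rad);
    Matrix4 m = new Matrix4().translate(x, y, 0).rotate(0, 0, 1, angle - 90f);
    batch.setTransformMatrix(m);                           // forces a flush, so keep strings short
    font.draw(batch, ch, -layout.width / 2f, layout.height / 2f);
}
batch.setTransformMatrix(oldTransform);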
Are you going to use this picture for some motion, or do you just need it for display? If it's the latter, why don't you just draw it in GIMP, Photoshop, or even Paint, and position/scale it where you need it on the screen as a normal sprite/actor?
I have to find the contour of the image. After that, I want to find out how to fill in the holes in the number characters, but not in the other spaces. The image is the following.
http://i.stack.imgur.com/jlLYE.jpg
Actually, if that is not possible, is there any other method for me to perform segmentation of this image using OpenCV on the Java platform? I want the image to contain the characters only. Thank you.
http://i.stack.imgur.com/kY4Dh.png
Here is a simple method (but I am not sure if it will work everywhere; test it yourself).
NB: Code is in Python, I don't do Java, sorry about that :(
1. Load the grayscale image
2. Apply Otsu's binarization
import cv2
import numpy as np
img = cv2.imread('test.png',0)
ret, thresh = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)
Below is the thresholded image:
Now you can try two methods:
3a. Median Blurring with a 3x3 kernel
res = cv2.medianBlur(thresh,3)
Result:
3b. Erosion with a 3x1 (vertical) kernel. 3x1 because all the lines in your image are roughly horizontal. If there are vertical lines in other images, you may need a 3x3 kernel (not sure; check it).
kernel = np.ones((3,1))
cls = cv2.erode(thresh, kernel)
If you think you are also losing some parts of the digits, you can apply a dilation after the erosion, or replace both steps with a single morphological opening.
Result:
Finally, find contours. This will also pick up some noise left in the preprocessed image, but you can filter it out by checking aspect ratio, area, etc.
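Since the question asks about Java, the equivalent final step with the Java OpenCV bindings might look roughly like this (the area and aspect-ratio thresholds are made-up example values, and cleaned stands for the preprocessed Mat from step 3):

List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
Mat hierarchy = new Mat();
Imgproc.findContours(cleaned, contours, hierarchy, Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
for (MatOfPoint contour : contours) {
    Rect box = Imgproc.boundingRect(contour);
    double area = Imgproc.contourArea(contour);
    double aspect = box.width / (double) box.height;
    if (area < 20 || aspect > 5.0) {
        continue;                  // tiny blobs and long thin streaks are probably noise
    }
    // keep this contour: it is likely part of a character
}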
There is a not-that-complicated solution. Think about the vertical run-length representation of the image: since you only have black and white, each vertical line of the image can be seen as a list containing only 1 (black pixel) and 0 (white pixel), so you will have, for instance, 01111111110000000000000111. This can be compressed by keeping only the lengths of the runs of 1s and 0s, so instead of 0000000001111111000111111111111 you will have 0 9 7 3 12; it starts with 0 because, let's say, you always start with the count of black pixels, and since there are no black pixels at the beginning you put a 0 (it is much easier to work like this).
Once you have this representation, take the maximal and the minimal white-run lengths (a run is just such a count of consecutive white or black pixels) and go through all the white runs. For each of them, check whether its length is closer to the smallest white run than to the largest one, and if so, remove it.
This algorithm should work for the given image ;)
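A plain-Java sketch of that idea (my own illustration, not the answerer's code), working column by column on a binary BufferedImage where anything non-black counts as white:

// For each column: measure the vertical white runs, then erase the runs whose
// length is closer to the shortest run in that column than to the longest one.
static void removeThinWhiteRuns(BufferedImage img) {
    int w = img.getWidth(), h = img.getHeight();
    for (int x = 0; x < w; x++) {
        int min = Integer.MAX_VALUE, max = 0;
        for (int y = 0; y < h; ) {                     // first pass: run statistics
            if ((img.getRGB(x, y) & 0xFFFFFF) != 0) {
                int start = y;
                while (y < h && (img.getRGB(x, y) & 0xFFFFFF) != 0) y++;
                min = Math.min(min, y - start);
                max = Math.max(max, y - start);
            } else y++;
        }
        if (max == 0) continue;                        // no white pixels in this column
        for (int y = 0; y < h; ) {                     // second pass: erase the short runs
            if ((img.getRGB(x, y) & 0xFFFFFF) != 0) {
                int start = y;
                while (y < h && (img.getRGB(x, y) & 0xFFFFFF) != 0) y++;
                int len = y - start;
                if (len - min < max - len) {
                    for (int k = start; k < y; k++) img.setRGB(x, k, 0xFF000000);
                }
            } else y++;
        }
    }
}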
I am attempting to create a "hole in the fog" effect. I have a background grid image, and overlaid on that I have a "fog" texture that I use to show that certain areas are not in view. I am attempting to cut a chunk out of the fog that will show the area that is currently in view; in other words, to "mask" a part of the fog off the screen.
I made some images to help explain what I am after:
Background:
"Mask Image" (The full transparency has to be on the inside and not the outer rim for what I am going to use it for):
Fog (Sorry, hard to see.. Mostly Transparent):
What I want as a final Product:
I have tried:
Stencil-Buffer: I got this fully working except for one fact... I wasn't able to figure out how to retain the fading transparency of the "mask" image.
glBlendFunc: I have tried many different combinations of parameters and many other methods alongside it (glColorMask, glBlendEquation, glBlendFuncSeparate). I started with some parameters I found on this website: here. I used glBlendFunc(GL_ZERO, GL_ONE_MINUS_SRC_ALPHA), as this seemed to be what I was looking for, but this is what ended up happening as a result: (it's hard to tell what is happening here, but there is fog covering the grid in the background, and the mask just ends up as a fully opaque black blob when it's supposed to be a transparent part of the fog).
Some previous code:
glEnable(GL_BLEND); // This is not really called here... It is called on the init function of the program as it is needed all the way through the rendering cycle.
renderFogTexture(delta, 0.55f); // This renders the fog texture over the background the 0.55f is the transparency of the image.
glBlendFunc(GL_ZERO, GL11.GL_ONE_MINUS_SRC_ALPHA); // This is the one I tried from one of the many website I have been to today.
renderFogCircles(delta); // This just draws one (or more) of the mask images to remove the fog in key places.
(I would have posted more code, but after trying many things I started removing some of the old code as it was getting very cluttered; I "backed it up" in block comments.)
This is doable, provided that you're not doing anything with the alpha of the framebuffer currently.
Step 1: Make sure that the alpha of the framebuffer is cleared to zero. So your glClearColor call needs to set the alpha to zero. Then call glClear as normal.
Step 2: Draw the mask image before you draw the "fog". As Tim said, once you blend with your fog, you can't undo that. So you need the mask data there first.
However, you also need to render the mask specially. You only want the mask to modify the framebuffer's alpha. You don't want it to mess with the RGB color. To do that, use this function: glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_TRUE). This turns off writes to the RGB part of the color; thus, only the alpha will be modified.
Your mask texture seems to have zero where it is visible and one where it isn't. However, the algorithm needs the opposite, so you should either fix your texture or use a glTexEnv mode that will effectively flip the alpha.
After this step, your framebuffer should have an alpha of 0 where we want to see the fog, and an alpha of 1 where we don't.
Also, don't forget to undo the glColorMask call after rendering the mask. You need to get those colors back.
Step 3: Render the fog. That's easy enough; to make the masking work, you need a special blend mode. Like this one:
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
glBlendFuncSeparate(GL_ONE_MINUS_DST_ALPHA, GL_DST_ALPHA, GL_ZERO, GL_ONE);
The separation between the RGB and A blend portions is important. You don't want to change the framebuffer's alpha (just in case you want to render more than one layer of fog).
And you're done.
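Putting the three steps together, a condensed and untested sketch in the LWJGL-style Java of the question (renderFogCircles and renderFogTexture are your own methods; renderBackground is just a placeholder for whatever draws the grid):

GL11.glClearColor(0f, 0f, 0f, 0f);                          // step 1: clear the framebuffer alpha to 0
GL11.glClear(GL11.GL_COLOR_BUFFER_BIT);
renderBackground();                                          // the grid, drawn normally

GL11.glDisable(GL11.GL_BLEND);                               // write the mask's alpha directly, unblended
GL11.glColorMask(false, false, false, true);                 // step 2: touch only the alpha channel
renderFogCircles(delta);                                     // mask: alpha = 1 where the fog should be punched out
                                                             // (remember the note above about flipping the mask's alpha)
GL11.glColorMask(true, true, true, true);

GL11.glEnable(GL11.GL_BLEND);                                // step 3: fog weighted by destination alpha
GL14.glBlendEquation(GL14.GL_FUNC_ADD);
GL14.glBlendFuncSeparate(GL11.GL_ONE_MINUS_DST_ALPHA, GL11.GL_DST_ALPHA, GL11.GL_ZERO, GL11.GL_ONE);
renderFogTexture(delta, 0.55f);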
The approach you're currently taking will not work, as once you draw the fog over the whole screen, there's no way to 'erase' it.
If you're using fixed pipeline:
You can use multitexturing (glTexEnv) with the fixed pipeline to combine the fog and circle textures in a single pass. This function is probably kind of confusing if you haven't used it before; you'll probably have to spend some time studying the man page. You'll do something like bind the fog to glActiveTexture 0 and the mask to glActiveTexture 1, enable multitexturing, and then combine them with glTexEnv. I don't remember exactly the right parameters for this.
If you're using shaders:
Use a multitexturing shader where you multiply the fog alpha with the circle texture (to zero out the alpha in the circle region), and then blend this combined texture onto the background in a single pass. This is probably the more conceptually straightforward approach, but I'm not sure whether you're using shaders.
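For reference, a fragment-shader sketch of that idea (my own illustration; u_fog, u_mask and v_texCoord are made-up names), written here as a Java string:

String fragmentShader =
      "varying vec2 v_texCoord;\n"
    + "uniform sampler2D u_fog;\n"     // the fog texture
    + "uniform sampler2D u_mask;\n"    // the circle texture, alpha = 0 inside the hole
    + "void main() {\n"
    + "    vec4 fog  = texture2D(u_fog,  v_texCoord);\n"
    + "    vec4 mask = texture2D(u_mask, v_texCoord);\n"
    + "    gl_FragColor = vec4(fog.rgb, fog.a * mask.a);\n"  // zero alpha in the circle region
    + "}\n";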
I'm not sure there's a way you can do this where you draw the fog and the mask in separate passes; since they both have their own alpha values, it will be difficult to combine them to get the right color result.