@Override
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
    Mat rgb = inputFrame.rgba();
    for (int y = 0; y < rgb.cols(); ++y) {
        for (int x = 0; x < rgb.rows(); ++x) {
            double[] pixelValues = rgb.get(x, y);
            double redVal = pixelValues[0];
            double greenVal = pixelValues[1];
            double blueVal = pixelValues[2];
            double alphaVal = pixelValues[3];
            double grayVal = (redVal + greenVal + blueVal) / 3;
            rgb.put(x, y, grayVal, grayVal, grayVal, alphaVal);
        }
    }
    return rgb;
}
This is my code to change the pixel values of a camera stream displayed inside a JavaCameraView component. The problem is that it is very slow: less than 3 fps. I know there are faster ways to get grayscale images (.rgba() => .gray(), or using Imgproc.cvtColor()), but I need the freedom to handle the pixels myself; my ultimate goal is something that lets me adjust the red, green and blue channels as I like.
Is there a way to make this reach at least 30 fps (a smooth frame rate)?
Get and put operations for every pixel will surely result in poor fps because of the overhead of the native calls. Instead, you should call get and put only once, i.e. outside the for loop. A single call to get/put moves the whole frame into a Java primitive array, and you do your operations on that. This way you can still do your own pixel operations, and it will boost performance greatly (much closer to Imgproc.cvtColor()).
http://answers.opencv.org/question/5/how-to-get-and-modify-the-pixel-of-mat-in-java/
This shows an example of how to make single get/put calls (see not only the accepted answer but also the comments).
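To make the idea concrete, here is a minimal sketch of the bulk approach. The OpenCV part is only the two bracketing calls shown in the comment (one `Mat.get` to copy the frame out, one `Mat.put` to copy it back); the per-pixel grayscale loop itself is plain Java on a byte array, so it is shown as a runnable helper. The class and method names here are illustrative, not from the original code.

```java
// Sketch of the "single get/put" approach, assuming an OpenCV Mat named rgb:
//
//   byte[] buf = new byte[(int) (rgb.total() * rgb.channels())];
//   rgb.get(0, 0, buf);   // ONE native call: copy the whole frame out
//   PixelOps.grayInPlace(buf);
//   rgb.put(0, 0, buf);   // ONE native call: copy the whole frame back

final class PixelOps {
    /** Averages R, G and B of each RGBA pixel in place; alpha is untouched. */
    static void grayInPlace(byte[] rgba) {
        for (int i = 0; i + 3 < rgba.length; i += 4) {
            int r = rgba[i] & 0xFF;       // bytes are signed in Java,
            int g = rgba[i + 1] & 0xFF;   // so mask to get 0..255
            int b = rgba[i + 2] & 0xFF;
            byte gray = (byte) ((r + g + b) / 3);
            rgba[i] = gray;
            rgba[i + 1] = gray;
            rgba[i + 2] = gray;
            // rgba[i + 3] (alpha) is left as-is
        }
    }
}
```

The same loop is where you would later apply any per-channel adjustment (e.g. scaling `r`, `g`, `b` independently) instead of averaging.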
Hello I am an inexperienced programmer and this is my first question on Stack Overflow!
I am attempting to implement 'fog of war' in my Java game. This means most of my map begins off black and then as one of my characters moves around parts of the map will be revealed. I have searched around including here and found a few suggestions and tried tweaking them myself. Each of my approaches works, however I run into significant runtime issues with each. For comparison, before any of my fog of war attempts I was getting 250-300 FPS.
Here is my basic approach:
Render my background and all objects on my JPanel
Create a black BufferedImage (fogofwarBI)
Work out which areas of my map need to be visible
Set the relevant pixels on my fogofwarBI to be fully transparent
Render my fogofwarBI, thus covering parts of the screen with black and in transparent sections allowing the background and objects to be seen.
For initialising the buffered image I have done the following in my FogOfWar() class:
private BufferedImage blackBI = loader.loadImage("/map_black_2160x1620.png");
private BufferedImage fogofwarBI = new BufferedImage(blackBI.getWidth(), blackBI.getHeight(), BufferedImage.TYPE_INT_ARGB);
public FogOfWar() {
    fogofwarBI.getGraphics().drawImage(blackBI, 0, 0, null);
}
In each of my attempts I start the character in a middle of 'visible' terrain, ie. in a section of my map which has no fog (where my fogofwarBI will have fully transparent pixels).
Attempt 1: setRGB
First I find the 'new' coordinates in my character's field of vision if it has moved. ie. not every pixel within the character's range of sight, but just the pixels at the edge of his range of vision in the direction he is moving. This is done with a for loop, and will go through up to 400 or so pixels.
I feed each of these x and y coordinates into my FogOfWar class.
I check if these x,y coordinates are already visible (in which case I don't do anything to them, to save time). I do this check by maintaining a Set of Lists, where each List contains two elements: an x and a y value. The Set starts empty, and I add x,y coordinates to it to represent transparent pixels. I use a Set to keep the collection unique and because I understand Set.contains is a fast way of doing this check, and I store the coordinates in a List to avoid mixing up x and y.
If a given x,y position on my fogofwarBI is not currently visible, I set the RGB to transparent using .setRGB and add the coordinate to my transparentPoints Set so it will not be edited again in future.
Set<List<Integer>> transparentPoints = new HashSet<List<Integer>>();

public void editFog(int x, int y) {
    if (!transparentPoints.contains(Arrays.asList(x, y))) {
        fogofwarBI.setRGB(x, y, 0); // 0 is transparent in ARGB
        transparentPoints.add(Arrays.asList(x, y));
    }
}
I then render it using
public void render(Graphics g, Camera camera) {
    g.drawImage(fogofwarBI, 0, 0, Game.v_WIDTH, Game.v_HEIGHT,
            camera.getX() - Game.v_WIDTH/2, camera.getY() - Game.v_HEIGHT/2,
            camera.getX() + Game.v_WIDTH/2, camera.getY() + Game.v_HEIGHT/2, null);
}
Where I am basically applying the correct part of my fogofwarBI to my JPanel (800*600) based on where my game camera is.
Results:
Works correctly.
FPS of 20-30 when moving through fog, otherwise normal (250-300).
This method is slow due to the .setRGB function being run up to 400 times each time my game 'ticks'.
Attempt 2: Raster
In this attempt I create a raster of my fogofwarBI to play with the pixels directly in an array format.
private BufferedImage blackBI = loader.loadImage("/map_black_2160x1620.png");
private BufferedImage fogofwarBI = new BufferedImage(blackBI.getWidth(), blackBI.getHeight(), BufferedImage.TYPE_INT_ARGB);
WritableRaster raster = fogofwarBI.getRaster();
DataBufferInt dataBuffer = (DataBufferInt) raster.getDataBuffer();
int[] pixels = dataBuffer.getData();

public FogOfWar() {
    fogofwarBI.getGraphics().drawImage(blackBI, 0, 0, null);
}
My editFog method then looks like this:
public void editFog(int x, int y) {
    if (!transparentPoints.contains(Arrays.asList(x, y))) {
        pixels[x + y * Game.m_WIDTH] = 0; // 0 is transparent in ARGB
        transparentPoints.add(Arrays.asList(x, y));
    }
}
My understanding is that the raster is in (constant?) communication with the pixels array, and so I render the BI in the same way as in attempt 1.
Results:
Works correctly.
A constant FPS of around 15.
I believe it is constantly this slow (regardless of whether my character is moving through fog or not) because whilst manipulating the pixels array is quick, the raster is constantly working.
Attempt 3: Smaller Raster
This is a variation on attempt 2.
I read somewhere that constantly resizing a BufferedImage using the 10-argument version of .drawImage is slow. I also thought that having a raster for a 2160*1620 BufferedImage might be slow.
Therefore I tried having my 'fog layer' only equal to the size of my view (800*600), and updating every pixel using a for loop, based on whether the current pixel should be black or visible from my standard transparentPoints Set and based on my camera position.
So now my editFog method just updates the Set of invisible pixels, and my render method looks like this:
public void render(Graphics g, Camera camera) {
    int xOffset = camera.getX() - Game.v_WIDTH/2;
    int yOffset = camera.getY() - Game.v_HEIGHT/2;
    for (int i = 0; i < Game.v_WIDTH; i++) {
        for (int j = 0; j < Game.v_HEIGHT; j++) {
            if (transparentPoints.contains(Arrays.asList(i + xOffset, j + yOffset))) {
                pixels[i + j * Game.v_WIDTH] = 0;
            } else {
                pixels[i + j * Game.v_WIDTH] = myBlackARGB;
            }
        }
    }
    g.drawImage(fogofwarBI, 0, 0, null);
}
So I am no longer resizing my fogofwarBI on the fly, but I am updating every single pixel every time.
Result:
Works correctly.
FPS: Constantly 1 FPS - worst result yet!
I guess that any savings of not resizing my fogofwarBI and having it smaller are massively outweighed by updating 800*600 pixels in the raster rather than around 400.
I have run out of ideas and none of my internet searching is getting me any further in trying to do this in a better way. I think there must be a way to do fog of war effectively, but perhaps I am not yet familiar enough with Java or the available tools.
Any pointers as to whether my current attempts could be improved, or whether I should be trying something else altogether, would be very much appreciated.
Thanks!
This is a good question. I am not familiar with AWT/Swing rendering, so I can only try to explain a possible solution to the problem.
From a performance standpoint I think it is a better choice to chunk/raster the FOW into bigger sections of the map rather than using a pixel-based system. That will reduce the number of checks per tick, and updating it will also take fewer resources, as only a small portion of the window/map needs to update. The larger the grid, the fewer checks, but there is a visual penalty the bigger you go.
Leaving it like that would make the FOW look blocky/pixelated, but it's not something you can't fix.
For the direct surroundings of the player, you can add a circle texture with the player at its center. You can then use blending (I believe the term in AWT/Swing is composite) to 'override' the alpha where the circle overlaps the FOW texture. This way the pixel-based updating is done by the rendering API, which usually uses hardware-accelerated methods to achieve these things. (For custom pixel-based rendering, something like shader scripts is often used, if supported by the rendering API.)
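In AWT/Swing terms, the composite idea above could look roughly like this: fill the fog image once, then erase a circle around the player with AlphaComposite.Clear, which writes alpha 0 wherever the shape is drawn. This is a minimal sketch; the class and method names are illustrative, and in a real game you would keep the Graphics2D around rather than recreating it every frame.

```java
import java.awt.AlphaComposite;
import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.image.BufferedImage;

final class FogDemo {
    /** Punches a fully transparent circle of the given radius into an ARGB fog image. */
    static void revealCircle(BufferedImage fog, int cx, int cy, int radius) {
        Graphics2D g2 = fog.createGraphics();
        g2.setRenderingHint(RenderingHints.KEY_ANTIALIASING,
                            RenderingHints.VALUE_ANTIALIAS_ON);
        // AlphaComposite.Clear sets every touched pixel to alpha 0,
        // so the fog under the circle becomes transparent.
        g2.setComposite(AlphaComposite.Clear);
        g2.fillOval(cx - radius, cy - radius, radius * 2, radius * 2);
        g2.dispose();
    }
}
```

Because the fill is done by the rendering pipeline rather than a per-pixel Java loop, this is typically much cheaper than calling setRGB in a loop, and the antialiased edge also softens the reveal boundary for free.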
This is enough if you only need temporary vision in the FOW (if you don't need to 'remember' the map); you don't even need a texture grid for the FOW then. But I suspect you do want to 'remember' the map, so in that case:
The blocky/pixelated look can be fixed like they do with grid-based terrain: basically, add small additional textures/shapes based on the surroundings to make things look nice. The link below provides good examples and a detailed explanation of how to do these 'terrain transitions', as they are called.
https://www.gamedev.net/articles/programming/general-and-gameplay-programming/tilemap-based-game-techniques-handling-terrai-r934/
I hope this gives a better result. If you cannot get a better result, I would advise switching to something like OpenGL for the render engine, as it is meant for games, while the AWT/Swing API is primarily used for UI/application rendering.
I have a fixed size frame buffer (1200x720 RGBA efficiently converted from YUV) in a java byte array.
I would like to set a certain shade of a color (white in my case, regardless of its alpha value) to fully transparent.
Currently I am doing this on the CPU by traversing the byte array and zeroing the pixel if RGB > 0xC8. This somewhat works but is obviously extremely slow (>1 sec/frame) for a live stream.
I've been researching methods to do this via the GPU/OpenGL on Android, and I see mentions of alpha test, blending, and color keying. It seems the alpha test is not useful here since it relies on the alpha information rather than the RGB values.
Any idea how to do this on Android using OpenGL/java?
It seems the alpha test is not useful here
The logic for an alpha test is implemented in the fragment shader, so rather than testing alpha, just change the test to check the RGB value. The technique is generic and 100% flexible: the underlying operation you are looking for is a fragment shader that triggers the discard operation when the color key matches.
Alternatively, you can use the same conditional check but, rather than calling discard, set the output color to vec4(0.0) and use blending to avoid modifying the framebuffer for that fragment. On the whole I would expect this to be more efficient; discard tends to have odd performance side-effects.
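A possible shape for that fragment shader, shown here as a Java string constant since the question is about Android/Java, might look like the following. The uniform and varying names are hypothetical, and the threshold 0.784 is just 0xC8 / 255.0 from the question; the same predicate is also given in plain Java on 8-bit channels so it can be checked on the CPU side.

```java
final class ColorKey {
    // Hypothetical GLSL ES fragment shader: discard fragments whose R, G and B
    // are all above the key threshold (0xC8 / 255.0 ~= 0.784), else pass through.
    static final String FRAGMENT_SHADER =
        "precision mediump float;\n" +
        "uniform sampler2D uTexture;\n" +
        "varying vec2 vTexCoord;\n" +
        "void main() {\n" +
        "    vec4 c = texture2D(uTexture, vTexCoord);\n" +
        "    if (all(greaterThan(c.rgb, vec3(0.784)))) {\n" +
        "        discard;\n" +
        "    }\n" +
        "    gl_FragColor = c;\n" +
        "}\n";

    /** The same color-key predicate on 8-bit channels, as in the CPU loop. */
    static boolean isKeyed(int r, int g, int b) {
        return r > 0xC8 && g > 0xC8 && b > 0xC8;
    }
}
```

For the blending variant, you would replace the discard branch with `gl_FragColor = vec4(0.0);` and enable standard SrcOver blending on the GL side.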
You could also create a custom RenderScript kernel to convert those pixels; you can use it to convert from YUV as well, so you only process the pixels in the buffer once.
If I try to allocate any memory during onDraw in my View-derived class in my Android app, Eclipse/lint warns me that I shouldn't be allocating memory during the execution of onDraw. So I'm trying to think of the best way to append a rotated rectangle to a path that may get used to define clipping bounds. I'm also going to want to figure out how to add a rotated ellipse to such a path.
I have considered using Matrix.mapPoints with the 4 corners of the rectangle (using a pre-allocated matrix), but I don't currently have a pre-allocated array of floats to use with that, and I'm not sure I want to do that if there's another way. Should I use Math.atan2 to get polar coordinates, offset the result, and then use sin and cos to calculate new coordinates, or is that going to have a lot more overhead than the matrix multiplication?
Are there other ways of adding rotated rectangles and ellipses to clipping boundaries that I should consider?
Edit: I'm also not clear if calling other functions that have local variables would be considered memory allocation. If I create a function like this:
private void drawOperation(Operation op, Canvas canvas) {
    float[] coords = {0, 0, 0, 0, 0, 0, 0, 0};
    ....
}
Does that array get created on the heap or the stack? Does it still constitute something that should be avoided during onDraw?
I am considering code like this, where mMatrix, mRotationPath, mPoint and mPath are pre-allocated objects:
mMatrix.setRotate(angle, mPoint.x, mPoint.y);
mRotationPath.rewind();
mRotationPath.addRect(mRect, Path.Direction.CW);
mPath.addPath(mRotationPath, mMatrix);
Hi, so I have to make a script (it doesn't matter what programming language, but I'll use Java here as an example) that compares two black-and-white images and tells which one is more blurred.
So I have to make a function like this:
int getImageBlurPercentage() {
    ArrayList<Integer> colorList = new ArrayList<Integer>();

    // Part 1: fill colorList with color values (0 = black, 255 = white)
    // go through the Y axis
    //   go through the X axis
    //     colorList -> add the color value of each pixel [i.e. 0 to 255]

    // Part 2: process (this is the part where I need help!)
    int lastColor = 0;
    for (int color : colorList) {
        // Something has to be done here
        // to compare pixel by pixel
        // and get a noise result or difference result
        // and convert it to a percentage (0% - 100%)
        // This is where I need your help!
    }
}
So this is where I need your help, guys; I don't really know how to handle this.
I think this needs some math formulas, which I suck at.
I would appreciate it if someone helps or gives a hint that could lead me down the right path. Thank you.
When you blur an image (let's say you use a Gaussian blur), you are actually doing some "averaging" over the pixels of the image, which means you make your edges "smoother".
So to check whether one image has "smoother" edges than the other, you can look at the gradients of the image, as Jan Dvorak suggested, but don't forget to normalize by the number of pixels in the image (otherwise larger images will get larger results).
If you want to compare two entirely different images, the test will be much more complex, because different scenes naturally have different smoothness.
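A minimal sketch of the normalized-gradient idea might look like this (the class and method names are hypothetical): sum the absolute horizontal and vertical differences between neighboring pixels, then divide by the number of samples so image size doesn't skew the result. The blurrier of two versions of the same scene will score lower.

```java
final class Sharpness {
    /**
     * Mean absolute gradient of a grayscale image (values 0..255),
     * normalized by the number of sampled differences so image size
     * doesn't matter. Higher scores mean sharper edges.
     */
    static double score(int[][] gray) {
        int h = gray.length, w = gray[0].length;
        long sum = 0;
        long count = 0;
        for (int y = 0; y < h - 1; y++) {
            for (int x = 0; x < w - 1; x++) {
                sum += Math.abs(gray[y][x + 1] - gray[y][x]);   // horizontal gradient
                sum += Math.abs(gray[y + 1][x] - gray[y][x]);   // vertical gradient
                count += 2;
            }
        }
        return (double) sum / count;
    }
}
```

To answer the original question, you would compute `score` for both images and report the one with the lower value as the more blurred, e.g. mapping the ratio of the two scores onto a 0-100% scale.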
I'm designing a Canvas object which is being used to draw a BufferedImage of size 228x262 pixels.
That image is drawn using the Graphics2D.drawImage(...) method. I'm doing per-pixel color manipulation within given offset ranges. A sample of the code is below:
for (int i = frameOffset; i < colorClock; i++) {
    rgb[i] = new Color(this.colorBK).getRGB();
}
Where rgb is the pixel array of the BufferedImage I'm changing.
The problem is that this code paints slowly.
I'm creating the image using GraphicsConfiguration.createCompatibleImage, and I'm using double buffering via BufferStrategy.
Any pointers, please?
Thanks in advance.
If you run the loop every time you draw the image, the loop might be the bottleneck. There is a completely unnecessary object allocation that will make the garbage collector run quite often.
I'm assuming that colorBK is an int. If so, you create and initialize a Color object just to ask it for an RGB value that is assigned into the rgb array; what actually happens is that you assign the value of colorBK into the rgb array. So an equivalent and more efficient implementation would be rgb[i] = colorBK.
To optimize this even more, you could assign the value of colorBK to a final local variable. This avoids fetching the field over and over again, so the loop could look like this:
final int color = colorBK;
for (int i = frameOffset; i < colorClock; i++) {
    rgb[i] = color;
}
To get an even bigger performance gain, you should consider whether there is a completely different way of doing this. As the above example just changes some pixels to a certain color, I would assume this could be done with an image and a couple of fillRects.
So you would fill a rect behind the image with the color you want (in this case colorBK). If the image has transparent pixels in the areas the above loop changes, they remain unchanged on the canvas and the same effect is achieved. This may well be more efficient, as the graphics methods are better optimized and do not involve heavy array usage.
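The fillRect suggestion above can be sketched as follows: paint the background color once with fillRect, then draw the (partially transparent) image over it with a normal drawImage, so the transparent pixels show the fill color with no per-pixel array writes. The class and method names here are illustrative only.

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

final class FillDemo {
    /** Composites sprite over a solid background color; transparent sprite
     *  pixels end up showing bgColor, with no per-pixel array writes. */
    static BufferedImage compose(BufferedImage sprite, int bgColor) {
        BufferedImage out = new BufferedImage(
                sprite.getWidth(), sprite.getHeight(), BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = out.createGraphics();
        g.setColor(new Color(bgColor, true));               // ARGB, alpha honored
        g.fillRect(0, 0, out.getWidth(), out.getHeight());  // background color, once
        g.drawImage(sprite, 0, 0, null);                    // normal SrcOver blend
        g.dispose();
        return out;
    }
}
```

In the original setup, the fillRect would go onto the BufferStrategy's back buffer just before drawing the image, replacing the pixel loop entirely.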
Don't create a new Color just to extract an RGB integer for every pixel in your image. The only single-parameter constructor I can find for Color is one that takes an int RGB; can you not just use colorBK directly?
Also, if you are doing this conversion on every paint that will be slow; you should only need to do the conversion once.