I want to create an Android app that takes two pictures with the phone camera, then combines the top half of the first picture with the bottom half of the second into a final picture.
I'm thinking of converting each image to a byte array, taking the first half of the values from the first image's array and the second half from the other, merging them into a final array, and converting that array back to an image. Is this feasible? Is it a good solution, or is there a better practice for this?
Well, I guess I found the solution. There is a class in the Java 6 API called BufferedImage. It has getRGB and setRGB methods that let you read and write the int RGB value of the pixel you specify. This way you can read a pixel's colour from the source image and set it in the target image. (Note that BufferedImage lives in java.awt, which is desktop Java; on Android itself you would use the Bitmap class with getPixel/setPixel instead.)
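A minimal sketch of the getRGB/setRGB approach on the desktop side (the class name and the assumption that both images share the same dimensions are mine):

```java
import java.awt.image.BufferedImage;

public class MergeHalves {
    // Combine the top half of img1 with the bottom half of img2.
    // Assumes both images have the same width and height.
    static BufferedImage merge(BufferedImage img1, BufferedImage img2) {
        int w = img1.getWidth(), h = img1.getHeight();
        BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < h; y++) {
            // rows in the top half come from img1, the rest from img2
            BufferedImage src = (y < h / 2) ? img1 : img2;
            for (int x = 0; x < w; x++) {
                out.setRGB(x, y, src.getRGB(x, y));
            }
        }
        return out;
    }
}
```

Copying row by row like this avoids the byte-array juggling from the question entirely, since the pixel accessors already give you positioned values.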
Try using OpenCV. It will be very fast, since the images are handled in native code. Convert your Bitmap objects into OpenCV Mat objects and pass their addresses to the native code, where you can do these computations very easily. If any code is required, do let me know.
When I use the Robot class to take multiple screenshots of my screen and then convert each BufferedImage to a byte array, the lengths of the byte arrays vary, sometimes by a lot. Should this be happening? I would have expected every picture to contain the same number of bytes.
For background, I am trying to speed up a simple screen-sharing program. Right now I send each picture as a complete byte array, which works but is slow. I would like to keep every picture in a buffer and send only the changes (and their indexes in the byte array) from one picture to the next, cutting down on the data sent over the socket. This isn't working out, because each screenshot has a different-sized byte array.
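The delta idea in the question does work, but only on fixed-size, uncompressed buffers (e.g. the raw pixel data, not an encoded image). A minimal sketch of recording and replaying per-byte changes, with class and method names of my own choosing:

```java
import java.util.ArrayList;
import java.util.List;

public class FrameDiff {
    // One changed byte: its index in the frame and its new value.
    static final class Change {
        final int index;
        final byte value;
        Change(int index, byte value) { this.index = index; this.value = value; }
    }

    // Compare two equal-length raw frames and collect the bytes that differ.
    static List<Change> diff(byte[] prev, byte[] next) {
        List<Change> changes = new ArrayList<>();
        for (int i = 0; i < next.length; i++) {
            if (prev[i] != next[i]) changes.add(new Change(i, next[i]));
        }
        return changes;
    }

    // Patch the previous frame in place with the recorded changes.
    static void apply(byte[] frame, List<Change> changes) {
        for (Change c : changes) frame[c.index] = c.value;
    }
}
```

For a mostly static screen the change list is far smaller than the full frame, which is exactly the saving the question is after.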
Your screenshots are different sizes because they've got different amounts of complexity.
You haven't included a lot of detail in your question, so I'm going to assume that the screenshot routine you're using returns a PNG image.
The PNG format isn't just an array of (red, green, blue) data with one entry for every pixel of your image. Instead, the format compresses the image data. This means that (for example) an image that was just a single flat colour would be much smaller than one where every pixel was a different colour.
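You can see this compression effect directly by encoding two same-sized images in memory and comparing the byte counts (class name and test images are mine):

```java
import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.Random;
import javax.imageio.ImageIO;

public class PngSize {
    // Encode an image to PNG in memory and return the resulting byte count.
    static int pngBytes(BufferedImage img) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ImageIO.write(img, "png", out);
        return out.size();
    }

    public static void main(String[] args) throws IOException {
        int w = 200, h = 200;
        BufferedImage flat = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        BufferedImage noisy = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        Random rnd = new Random(42);
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                flat.setRGB(x, y, 0x3366CC);       // one flat colour everywhere
                noisy.setRGB(x, y, rnd.nextInt()); // every pixel different
            }
        // The flat image compresses to a tiny fraction of the noisy one,
        // even though both hold exactly the same number of pixels.
        System.out.println("flat:  " + pngBytes(flat) + " bytes");
        System.out.println("noisy: " + pngBytes(noisy) + " bytes");
    }
}
```

This is why two screenshots of the same resolution can produce very differently sized byte arrays.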
So, say I have two images: one is a .bmp of some text and the other is a BufferedImage. How would I go about finding whether the .bmp is inside the BufferedImage?
I'm really lost on how to find an image within an image. A color is easier, as it's just one value to search for, but an image seems much harder...
One solution to this problem is "template matching".
This means sliding your template (the image you want to find) over the image you want to search in, and at every position comparing the similarity of the overlapping pixels.
The position of your template in the image is wherever this procedure returns its maximum similarity.
As suggested in the comments, you can use OpenCV for this task; it supports template matching.
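The sliding-and-comparing procedure can be sketched in plain Java using a sum of squared differences as the (inverse) similarity score; the class name is mine, and OpenCV's native matchTemplate does the same job far faster:

```java
import java.awt.image.BufferedImage;

public class TemplateMatch {
    // Naive template matching: slide the template over the image and
    // return the (x, y) position with the smallest sum of squared
    // per-channel pixel differences (i.e. the best match).
    static int[] bestMatch(BufferedImage image, BufferedImage tmpl) {
        int bestX = 0, bestY = 0;
        long bestScore = Long.MAX_VALUE;
        for (int y = 0; y <= image.getHeight() - tmpl.getHeight(); y++) {
            for (int x = 0; x <= image.getWidth() - tmpl.getWidth(); x++) {
                long score = 0;
                for (int ty = 0; ty < tmpl.getHeight(); ty++) {
                    for (int tx = 0; tx < tmpl.getWidth(); tx++) {
                        int a = image.getRGB(x + tx, y + ty);
                        int b = tmpl.getRGB(tx, ty);
                        // compare blue, green and red channels separately
                        for (int shift = 0; shift <= 16; shift += 8) {
                            int d = ((a >> shift) & 0xFF) - ((b >> shift) & 0xFF);
                            score += (long) d * d;
                        }
                    }
                }
                if (score < bestScore) {
                    bestScore = score;
                    bestX = x;
                    bestY = y;
                }
            }
        }
        return new int[] { bestX, bestY };
    }
}
```

This brute-force version is O(image area × template area), which is exactly why the native OpenCV implementation is the practical choice for anything large.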
I need to be able to take certain images from one big image laid out on a grid, sort of like the texture packs that retexture the game Minecraft: to keep file size down there aren't many actual picture files; instead, all the different block textures sit together on a grid in one picture.
I need to do something similar to this, but using this picture: http://f.cl.ly/items/122C0G3R3P422R2I452o/fontes_blanches_alpha.png
Specifically, I want to be able to call each character from this picture out of an ArrayList, sort of like:
(Pseudo code)
ArrayList<Pictures> chars = new ArrayList<Pictures>();
JFrame.add(chars.get(x));
So, how would I separate the pictures so as to display only part of the sheet?
You could try loading the font image into a BufferedImage object. Then you can call bufferedImage.getSubimage(x, y, w, h) to get a sub-image, itself of type BufferedImage. Once you have a sub-image, you can add it to your "chars" ArrayList.
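Slicing the whole sheet into a list then becomes a pair of loops over the grid; a sketch, where the class name and the assumption of uniform cell sizes are mine (measure the actual cell size from your font image):

```java
import java.awt.image.BufferedImage;
import java.util.ArrayList;
import java.util.List;

public class SpriteSlicer {
    // Slice a sheet laid out as a uniform grid of cellW x cellH tiles,
    // row by row, into a list of subimages.
    static List<BufferedImage> slice(BufferedImage sheet, int cellW, int cellH) {
        List<BufferedImage> tiles = new ArrayList<>();
        for (int y = 0; y + cellH <= sheet.getHeight(); y += cellH) {
            for (int x = 0; x + cellW <= sheet.getWidth(); x += cellW) {
                tiles.add(sheet.getSubimage(x, y, cellW, cellH));
            }
        }
        return tiles;
    }
}
```

One caveat: getSubimage shares the parent image's data buffer, so copy a tile first if you plan to modify it independently of the sheet.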
How can I crop and display a character in an image without using the mouse?
The image contains only one character and nothing else.
For example, a scanned copy of a paper with a character drawn on it.
This will require some image processing, and there are a lot of libraries available for this task. The general processing sequence would be:
convert image to b&w image
calculate integral image over it
using integral image determine glyph boundaries
( if nothing of the above make sense to you, read some image processing books first )
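The last two steps above can be sketched as follows, assuming the image has already been binarised into a boolean black/white grid (the class name and the shrink-each-side strategy for finding the boundaries are my own choices):

```java
public class GlyphBounds {
    // Integral image: sum[y][x] holds the number of black pixels in the
    // rectangle from (0,0) up to but not including (x,y).
    static long[][] integral(boolean[][] black) {
        int h = black.length, w = black[0].length;
        long[][] sum = new long[h + 1][w + 1];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                sum[y + 1][x + 1] = sum[y][x + 1] + sum[y + 1][x] - sum[y][x]
                        + (black[y][x] ? 1 : 0);
        return sum;
    }

    // Count of black pixels in the rectangle [x0,x1) x [y0,y1), in O(1).
    static long count(long[][] sum, int x0, int y0, int x1, int y1) {
        return sum[y1][x1] - sum[y0][x1] - sum[y1][x0] + sum[y0][x0];
    }

    // Glyph bounding box: shrink each side while the strip it would drop
    // contains no black pixels. Returns {x0, y0, x1, y1}, ends exclusive.
    static int[] bounds(boolean[][] black) {
        int h = black.length, w = black[0].length;
        long[][] sum = integral(black);
        int x0 = 0, y0 = 0, x1 = w, y1 = h;
        while (x0 < x1 && count(sum, x0, y0, x0 + 1, y1) == 0) x0++;
        while (x1 > x0 && count(sum, x1 - 1, y0, x1, y1) == 0) x1--;
        while (y0 < y1 && count(sum, x0, y0, x1, y0 + 1) == 0) y0++;
        while (y1 > y0 && count(sum, x0, y1 - 1, x1, y1) == 0) y1--;
        return new int[] { x0, y0, x1, y1 };
    }
}
```

With the bounding box in hand, cropping is a single getSubimage call on the original image.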
You could get the Slick2D library and then use its SpriteSheet class to crop it. It works like this:
SpriteSheet s = new SpriteSheet("imagelocation", tileWidth, tileHeight);
Image cropped = s.getSubImage(x, y, width, height);
Worked for me :D
Is there an efficient way to determine whether an image is in greyscale or in colour? By efficient I don't mean reading all the pixels of the image and checking every single RGB value.
For example, Python's imaging library has a function called getcolors that returns a hash of pairs { (R, G, B) -> count } for the whole image, and I just have to iterate over that hash looking for a single entry that is in colour.
UPDATE:
For future readers of this post: I implemented a solution that reads the image pixel by pixel (as @npinti suggested in his link) and it seems fast enough for me (it's worth taking the time to implement it yourself; it won't take you more than 10 minutes). The Python pixel-by-pixel implementation, by contrast, seems to be really bad (inefficient and slow).
If you are using a BufferedImage, this previous SO post should prove helpful.
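For Java readers, the pixel-by-pixel check is quite tolerable in practice because it can exit early: the moment one pixel has unequal channels the image cannot be greyscale. A sketch (class name mine):

```java
import java.awt.image.BufferedImage;

public class GreyscaleCheck {
    // An image is greyscale iff every pixel has R == G == B.
    // Returns false on the first colour pixel found, so on typical
    // colour images this stops long before scanning everything.
    static boolean isGreyscale(BufferedImage img) {
        for (int y = 0; y < img.getHeight(); y++) {
            for (int x = 0; x < img.getWidth(); x++) {
                int rgb = img.getRGB(x, y);
                int r = (rgb >> 16) & 0xFF;
                int g = (rgb >> 8) & 0xFF;
                int b = rgb & 0xFF;
                if (r != g || g != b) return false;
            }
        }
        return true;
    }
}
```

Only a genuinely greyscale image forces the full scan, which matches the questioner's observation that the approach is fast enough.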