I'm doing a project which involves taking a live camera feed and displaying it on a window for the user.
As the camera image is the wrong way round by default, I'm flipping it using cvFlip (so the computer screen is like a mirror) like so:
while (true)
{
    IplImage currentImage = grabber.grab();
    cvFlip(currentImage, currentImage, 1);
    // Image then displayed here on the window.
}
This works fine most of the time. However, for a lot of users (mostly on faster PCs), the camera feed flickers violently. Basically an unflipped image is displayed, then a flipped image, then unflipped, over and over.
So I then changed things a bit to detect the problem...
while (true)
{
    IplImage currentImage = grabber.grab();
    IplImage flippedImage = null;
    cvFlip(currentImage, flippedImage, 1); // flipMode 1 = flip around the y-axis (left-right mirror)
    if (flippedImage == null)
    {
        System.out.println("The flipped image is null");
        continue;
    }
    else
    {
        System.out.println("The flipped image isn't null");
        continue;
    }
}
The flipped image appears to always return null. Why? What am I doing wrong? This is driving me crazy.
If this is an issue with cvFlip(), what other ways are there to flip an IplImage?
Thanks to anyone who helps!
You need to initialise the flipped image with an allocated (empty) image rather than null before you can store a result in it. Also, you should create the image only once and then reuse its memory for efficiency. So a better way to do this would be something like below (untested):
IplImage current = null;
IplImage flipped = null;
while (true) {
    current = grabber.grab();
    // Initialise the flipped image once the source image information
    // becomes available for the first time.
    if (flipped == null) {
        flipped = cvCreateImage(
            current.cvSize(), current.depth(), current.nChannels()
        );
    }
    cvFlip(current, flipped, 1);
}
I am using SikuliX to check when a video on a website has ended.
I do this by comparing my region (which is my active web browser window) to a screen capture of the region I have taken while the video is playing.
If it doesn't match, that means the video is still playing and I will take a new screen capture, which will be run through the while loop again for comparison.
If it matches, it means the video has stopped and will exit the while loop.
It works the first time through the loop: region.exists(...) returns null, which means the video is playing. However, on the second iteration it exits the while loop and tells me the video has stopped, even though it clearly hasn't.
Is my logic flawed?
// Get dimensions of the bounding rectangle of the specified window
WinDef.HWND hwnd = User32.INSTANCE.GetForegroundWindow();
WinDef.RECT dimensions = new WinDef.RECT();
// Get screen coordinates of upper-left and lower-right corners of the window in dimensions
User32.INSTANCE.GetWindowRect(hwnd, dimensions);
Rectangle window = new Rectangle(dimensions.toRectangle());
int x = window.x;
int y = window.y;
int width = window.width;
int height = window.height;
// Initialize screen region for Sikuli to match
Region region = new Region(x, y, width, height);
Robot robot;
Image image;
Pattern p;
try {
    robot = new Robot(); // Gets and saves a reference to a new Robot object
} catch (AWTException e) {
    throw new RuntimeException("Failed to initialize robot...");
}
robot.delay(3000); // Delay robot for 3 seconds
// Take a screen capture of the region
BufferedImage capture = robot.createScreenCapture(dimensions.toRectangle());
image = new Image(capture);
p = new Pattern(image);
region.wait(1.0); // Wait 1 second
// Check if region content is still the same
while (region.exists(p.similar((float) 0.99), 0) == null) {
    System.out.println("Video is playing");
    // Take a new screen capture of the region
    BufferedImage captureLoop = robot.createScreenCapture(dimensions.toRectangle());
    image = new Image(captureLoop);
    p = new Pattern(image);
    region.wait(1.0); // Wait 1 second
}
System.out.println("Video has stopped");
Instead of while (region.exists(p.similar((float) 0.99), 0) == null), using while (region.compare(image).getScore() == 1.0) to compare the region against the screen capture gave the results I wanted.
Currently I am developing a project, and it is a requirement that I screenshot the current active window on the screen (assuming one monitor) and save it as an image.
I have worked at the following code which screenshots the entire screen:
int x = 0,y = 0;
Color suit = new Robot().getPixelColor(x, y);
Rectangle fs = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
BufferedImage rank = new Robot().createScreenCapture(fs);
ImageIO.write(rank, "bmp", new File("hi.bmp"));
and I am of the understanding that, to get a handle to the current active window (from which its size can be obtained), one must use a method such as this:
public static long getHWnd(Frame f) {
    return f.getPeer() != null ? ((WComponentPeer) f.getPeer()).getHWnd() : 0;
}
However, I am having trouble integrating this method into my code, and I have no previous experience working with frames or rectangles.
Could I be pointed in the right direction in terms of where to go next?
Thanks.
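One possible direction (a sketch only, for Windows, assuming the JNA Platform library is available): the same GetForegroundWindow/GetWindowRect calls used in the SikuliX example above give the bounding rectangle of the active window, which can be passed straight to Robot.createScreenCapture (the output file name here is just an example):
// Sketch: capture only the active window instead of the whole screen.
// Uses the same JNA (User32, WinDef) and AWT (Robot, Rectangle, ImageIO) classes
// that appear in the snippets above.
WinDef.HWND hwnd = User32.INSTANCE.GetForegroundWindow();
WinDef.RECT dimensions = new WinDef.RECT();
User32.INSTANCE.GetWindowRect(hwnd, dimensions);
Rectangle bounds = dimensions.toRectangle();
BufferedImage shot = new Robot().createScreenCapture(bounds);
ImageIO.write(shot, "bmp", new File("activeWindow.bmp"));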
For example, say I take a screenshot of a very small portion of the desktop (pretend the coordinates cannot be saved when the screenshot is taken). Next, a screenshot of the full screen is taken. How do I find the location of the small screenshot within the big screenshot?
Is this even possible in Java?
public void RobotScreenCoordinateFinder() throws AWTException
{
    Robot robot = new Robot();
    BufferedImage capture = robot.createScreenCapture(screenRect); // screenRect = the small region to find
}
My idea is to use this:
BufferedImage screen = robot.createScreenCapture(new Rectangle(Toolkit.getDefaultToolkit().getScreenSize()));
BufferedImage capture = robot.createScreenCapture(screenRect); // assuming that screenRect is your capture
boolean screenMatches = false;
int screenX = 0;
int screenY = 0;
// Use <= so a capture flush with the right or bottom edge can still be found.
for (int i = 0; i <= screen.getWidth() - capture.getWidth(); i++)
{
    for (int j = 0; j <= screen.getHeight() - capture.getHeight(); j++)
    {
        boolean matches = true;
        for (int x = 0; x < capture.getWidth(); x++)
        {
            for (int y = 0; y < capture.getHeight(); y++)
            {
                if (screen.getRGB(i + x, j + y) != capture.getRGB(x, y))
                {
                    matches = false;
                    break;
                }
            }
            if (!matches) break;
        }
        if (matches)
        {
            screenMatches = true;
            screenX = i;
            screenY = j;
            break;
        }
    }
    if (screenMatches) break;
}
// Now, if the capture is somewhere on the screen, screenMatches is true and the
// coordinates of its upper-left corner are stored in screenX and screenY.
if (screenMatches)
{
    System.out.println("Found match with coordinates " + screenX + "," + screenY);
}
I have a game engine that tries to load resources efficiently by storing Images in a HashMap. When I want to load an image for a Sprite object, I simply say Sprite.image = LOAD_IMAGE(imageName);
Now I'm making a map editor for this game. I'd like to be able to rotate certain tiles, such as a tile of a house, to face different directions through Java image manipulation. I have that working; however, the error arises in the way I set the image to the new rotated image. I say Sprite.image = rotateImage(Sprite.image); This causes ALL tiles of the same type to have the same new rotated image.
What can I implement in my game so that each Sprite can be rotated without affecting the other Sprites' images? I would still like to keep my HashMap if possible, as I think it really helps the efficiency of my game.
This is how I implement my rotation:
terrain.terrainImage = ImageManipulator.mirror(currentTile.terrain.terrainImage, false);
This is my "ImageBank":
public HashMap<String, Image> images = new HashMap<String, Image>(30);
public synchronized Image getImage(String imgName) {
    // Check if the HashMap already holds the specified image
    // by looking it up by name.
    if (imgExists(imgName)) {
        // Sprite already loaded.
        return images.get(imgName);
    } else {
        // Sprite hasn't been loaded yet.
        Image img = loadImage(imgName);
        if (img == null) {
            // An error occurred, couldn't load sprite.
            return null;
        } // else move on
        // Save the loaded sprite (and now transparent) into the HashMap
        images.put(imgName, img);
        // and then return the sprite.
        return img;
    }
}
Either you store, for each tile, whether and by which angle it is rotated (and then rotate it each time you draw it), or you store rotated copies of the image.
I recommend storing rotated versions.
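A minimal sketch of the second option, assuming a rotateImage(Image, int) helper along the lines of the rotateImage mentioned in the question: rotated variants are cached under a composite key, so the shared base image in the HashMap is never modified in place and other sprites of the same type are unaffected.
public synchronized Image getRotatedImage(String imgName, int angleDegrees) {
    // Hypothetical helper: caches each rotated variant under "name@angle" so the
    // original cached image stays untouched. rotateImage is assumed to return a
    // new Image rather than mutating its argument.
    String key = imgName + "@" + angleDegrees;
    Image rotated = images.get(key);
    if (rotated == null) {
        Image base = getImage(imgName);            // loads/caches the unrotated image
        rotated = rotateImage(base, angleDegrees); // assumed rotation helper
        images.put(key, rotated);                  // reuse this copy next time
    }
    return rotated;
}
Each Sprite then asks the bank for its name plus its own angle, instead of overwriting Sprite.image with a rotated version of the shared image.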
I am a beginner who is learning to write games in Java.
In the game I am writing, I am trying to support multiple display modes. First, let me explain a little about how I set the display mode in the first place.
At the beginning of the code, I have a list of display modes I wish to support:
//List of supported display modes
private static DisplayMode modes[] = {
    new DisplayMode(640, 480, 32, 0),
    new DisplayMode(1024, 768, 32, 0),
};
I then get the list of display modes supported by the video card, compare it against my list, and use the first matching display mode.
/////////////////////////////////////////////
////Variable Declaration
/////////////////////////////////////////////
private GraphicsDevice vc;
/////////////////////////////////////////////
//Give Video Card Access to Monitor Screen
/////////////////////////////////////////////
public ScreenManager(){
    GraphicsEnvironment e = GraphicsEnvironment.getLocalGraphicsEnvironment();
    vc = e.getDefaultScreenDevice();
}
/////////////////////////////////////////////
//Find Compatible display mode
/////////////////////////////////////////////
//Compare Display mode supported by the application and display modes supported by the video card
//Use the first matching display mode;
public DisplayMode findFirstCompatibleMode(DisplayMode modes[]){
    DisplayMode goodModes[] = vc.getDisplayModes();
    for(int x = 0; x < modes.length; x++){
        for(int y = 0; y < goodModes.length; y++){
            if (displayModesMatch(modes[x], goodModes[y])){
                return modes[x];
            }
        }
    }
    return null;
}
/////////////////////////////////////////////
//Checks if two Display Modes match each other
/////////////////////////////////////////////
public boolean displayModesMatch(DisplayMode m1, DisplayMode m2){
    //Test Resolution
    if (m1.getWidth() != m2.getWidth() || m1.getHeight() != m2.getHeight()){
        return false;
    }
    //Test BitDepth
    if (m1.getBitDepth() != DisplayMode.BIT_DEPTH_MULTI && m2.getBitDepth() != DisplayMode.BIT_DEPTH_MULTI
            && m1.getBitDepth() != m2.getBitDepth()){
        return false;
    }
    //Test Refresh Rate
    if (m1.getRefreshRate() != DisplayMode.REFRESH_RATE_UNKNOWN &&
            m2.getRefreshRate() != DisplayMode.REFRESH_RATE_UNKNOWN &&
            m1.getRefreshRate() != m2.getRefreshRate()){
        return false;
    }
    return true;
}
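For context, a rough sketch (not part of the question) of how a mode found this way is typically applied: the window has to be placed in full-screen exclusive mode before the display mode can be changed. The frame variable here is a hypothetical JFrame.
// Sketch: applying the matched mode (assumes a JFrame named frame and the
// ScreenManager fields above).
DisplayMode match = findFirstCompatibleMode(modes);
if (match != null) {
    frame.setUndecorated(true);
    vc.setFullScreenWindow(frame);        // enter full-screen exclusive mode
    if (vc.isDisplayChangeSupported()) {
        vc.setDisplayMode(match);         // switch to the matched resolution
    }
}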
Currently, I am only supporting two resolutions, 640x480 and 1024x768.
In order to have every element of my game available in both resolutions, I first find how much the screen is scaled and store this value in a variable called resizeRatio:
private void getResizeRatio(){
    resizeRatio = (double)1024/(double)s.getWidth();
    //s.getWidth() returns the current width of the screen.
}
Then, with every image I import, I divide the image width and height by this resizeRatio.
/////////////////////////////////////////////
//Scale the image to the right proportion for the resolution
/////////////////////////////////////////////
protected Image scaleImage(Image in){
    Image out = in.getScaledInstance((int)(in.getWidth(null)/resizeRatio), (int)(in.getHeight(null)/resizeRatio), Image.SCALE_SMOOTH);
    return out;
}
This was all fine and good until my application grew bigger and bigger. Soon I realised I had forgotten to resize some of the icons, and they all end up in the wrong place when the resolution is 640x480.
Additionally, I realise I must scale not just the size of all my images, but all the movement speeds and all the positions as well, since having my character move at 5px per refresh makes him move significantly faster when displayed at 640x480 than when displayed at 1024x768.
So my question is: instead of individually scaling every image, every icon, and every movement, is there a way to scale everything all at once? Or, if there is a better way of doing this altogether, could someone please point me to it?
Thank you for reading and any help would be much appreciated.
In the paintComponent(Graphics g) or paint method you can use Graphics2D.scale:
private double scale = 0.75;
@Override
public void paintComponent(Graphics g) {
    Graphics2D g2 = (Graphics2D) g; // The newer, more elaborate child class.
    g2.scale(scale, scale);
    ...
    g2.scale(1/scale, 1/scale);
}
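A slightly safer variant (just a sketch, not required by the approach above): save and restore the original transform instead of undoing the scale with 1/scale, which avoids accumulating rounding error. Note that any mouse or input coordinates then have to be divided by scale to map them back into the unscaled game coordinates.
@Override
public void paintComponent(Graphics g) {
    super.paintComponent(g);
    Graphics2D g2 = (Graphics2D) g;
    // java.awt.geom.AffineTransform
    AffineTransform saved = g2.getTransform(); // remember the original transform
    g2.scale(scale, scale);
    // ... draw the whole scene in 1024x768 "design" coordinates ...
    g2.setTransform(saved);                    // restore exactly, without 1/scale rounding
}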