I am trying to design a program in Java that periodically (every 100 milliseconds or so) takes screenshots of my display and computes the average pixel RGB values of the entire screen. I need this to work with video games and iTunes/QuickTime videos. However, I have tried using JNA and Robot to capture the screen, and it only works when I am not capturing a full-screen video game or an iTunes video. For instance, I tested my code by saving an image to examine what is happening: when I am in a video game, I only see a screenshot of a blank window. I think this is because games use DirectX or OpenGL and communicate with the hardware differently than a typical app.
If I use this method for capturing a screenshot instead of Robot or JNA, will it solve my problem? It looks like it copies data over from the OpenGL screen buffer. What about DirectX applications?
I basically just want to be able to get the perceived pixel data on the screen at all times, regardless of whether or not it's a full-screen DirectX or OpenGL application. Thanks in advance for your help.
I'm going to guess this is for a homebrew version of the amBX lighting system. Even if it's not, the following page may help you; it contains both the Java code and the Arduino code for a DIY ambient lighting setup, which needs to accomplish the same thing:
http://siliconrepublic.blogspot.com/2011/02/arduino-based-pc-ambient-lighting.html
Things to consider:
1. For processing-speed reasons, that sample code purposely ignores some of the pixels on the screen (a sketch of the same idea follows this list)
2. Depending on what you're displaying (racing games vs. first-person shooters vs. top-down-view strategy or MOBA games vs. movies) you may want to purposely segment the display into separate sectors. For example, for a racing game you may want both the left and right sides to be more independent and very responsive to rapid changes, whereas for general film viewing you may want a more general output because you're dealing with a wider variety of ways the camera can move.
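For reference, here is a minimal sketch of that sampled-averaging idea using plain java.awt.Robot. The stride constant and class name are just illustrative, and note that, as described in the question, Robot will generally not see exclusive-fullscreen DirectX/OpenGL content:

```java
import java.awt.Dimension;
import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.Toolkit;
import java.awt.image.BufferedImage;

public class ScreenAverager {
    // Sampling stride: reading every 4th pixel in each direction cuts the
    // work by roughly 16x at the cost of a little accuracy.
    private static final int STEP = 4;

    public static int[] averageScreenRGB() throws Exception {
        Dimension size = Toolkit.getDefaultToolkit().getScreenSize();
        BufferedImage shot = new Robot().createScreenCapture(new Rectangle(size));
        long r = 0, g = 0, b = 0, count = 0;
        for (int y = 0; y < shot.getHeight(); y += STEP) {
            for (int x = 0; x < shot.getWidth(); x += STEP) {
                int rgb = shot.getRGB(x, y);
                r += (rgb >> 16) & 0xFF;
                g += (rgb >> 8) & 0xFF;
                b += rgb & 0xFF;
                count++;
            }
        }
        return new int[] { (int) (r / count), (int) (g / count), (int) (b / count) };
    }
}
```

Calling this from a timer every 100 ms gives the average color; the stride is the main knob for trading accuracy against speed.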
I have spent some time researching whether it is possible to draw on top of a VLCJ movie within a Java application. I have found a few bits of conflicting advice, some saying it is not possible and some referencing articles which have since moved on oracle.com.
Can someone clarify whether or not it is possible to draw Java2D graphics like rectangles/lines with transparent backgrounds, so the video stream underneath can be viewed while the shapes are present on screen?
If this is not possible with vlcj, what would be a good alternative for a Linux- and Windows-compatible media player allowing annotation over a playing video stream? Please note I do not have to be limited to Java, but something where I can get reuse out of developed drawing routines across multiple platforms would be ideal.
Yes, you can do it. For the normal hardware-rendered video player, you need at least Java 6u10 (preferably 7) and achieve this by overlaying a transparent JWindow on top of the VLC canvas. (It's not too hard to add listeners to the canvas to check for updates in position/size and then move the overlaid window correspondingly; a minimal sketch follows.)
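Here is a rough sketch of such an overlay using the Java 7+ per-pixel translucency API (the drawn rectangle and the helper name are just placeholders; the platform must support translucent windows, and in a real application you would also attach a ComponentListener to keep the overlay aligned with the canvas):

```java
import java.awt.Color;
import java.awt.Component;
import java.awt.Graphics;
import javax.swing.JPanel;
import javax.swing.JWindow;

// Creates a transparent window sitting exactly over the video canvas.
// Call only after the canvas is visible on screen.
static JWindow createOverlay(Component videoCanvas) {
    JWindow overlay = new JWindow();
    overlay.setBackground(new Color(0, 0, 0, 0)); // per-pixel transparent (Java 7+)
    JPanel panel = new JPanel() {
        @Override
        protected void paintComponent(Graphics g) {
            super.paintComponent(g);
            g.setColor(Color.RED);
            g.drawRect(50, 50, 200, 100); // example annotation
        }
    };
    panel.setOpaque(false); // let the video show through
    overlay.setContentPane(panel);
    overlay.setBounds(videoCanvas.getLocationOnScreen().x,
                      videoCanvas.getLocationOnScreen().y,
                      videoCanvas.getWidth(), videoCanvas.getHeight());
    overlay.setAlwaysOnTop(true);
    overlay.setVisible(true);
    return overlay;
}
```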
The other way, which doesn't involve overlaid windows, is to use a DirectMediaPlayer, where you have access to the framebuffer directly (and can therefore do what you like with the pixels, including wrapping them as textures around 3D objects and so on). With this approach, you could simply draw what you want onto the frame buffer before rendering it to screen in whatever way you choose. This is the most flexible approach, but it comes with the downside that if you're not very careful about your implementation, you lose all the GPU acceleration and end up crippling the CPU, especially for HD video.
If a simple overlay would do the trick, I'd try that first, and just resort to a DirectMediaPlayer if you have to.
I'm creating a UI system for an Android game that will have a large (up to 4096x4096) background area in which menus can be placed anywhere, and a camera will fly to that location when a different menu is needed. Instead of having a large static image, I'd like to be able to animate this slightly. What I'd like to know is how to do this efficiently without lagging the device. These are the methods I've come up with so far, but maybe there is something better:
1) Have 3 separate 4096x4096 static layers for the background: one is the sky, one is the terrain, and one is things like clouds and trees. Each layer is placed on top of the others with a slight difference in Z space to give a little parallax effect when the camera moves.
2) Have a large stationary background image, with a layer on top of that containing individual sprites of clouds, trees and other things that should be animated. I think this might be the most efficient route, as I can choose not to animate parts that are not in view, but it also limits reusability, as every object will have to be placed manually in space. My goal is to be able to simply swap the assets and have a whole new game.
3) Have 1 large background layer with several frames that plays almost like a video. I feel like this will be the worst for performance (loading several 4096x4096 frames and drawing a different one 30 times a second), but it would give me the scene exactly how I want it, directly out of After Effects. I doubt this one is even feasible, not just because of the drawing but because storage space on Android devices just for the menu UI wouldn't allow for several 6MB frames.
Are any of these in the right direction? I have seen a few similar questions asked, but none fit close enough to what I need (a large, moving background that isn't made of tiles).
Any help is appreciated.
Since your question is tagged for Android, I would recommend the 2nd solution.
The main reason is that solutions #1 and #3 involve loading numerous 4096x4096 textures.
Quick calculation: one 32-bit 4096x4096 texture uses 4096 × 4096 × 4 bytes = 64MB, so three of them take at least 192MB of video RAM. That means you can immediately discard a lot of Android devices.
On the other hand, solution #2 would involve only two big textures: a large stationary background image, and a texture atlas containing the specific sprites of clouds, trees...
This solution is much more memory-friendly and will lead to the same aesthetic output.
TL;DR: all 3 solutions would work, but only #2 fits an embedded device.
I am working on a photobooth-type app for iPhone and Android. On iPhone, I know the exact resolution of the front camera, and I am able to always output 4 mini pics predictably and make a photostrip from them. But for Android, I need a way to resize the 4 images I have taken to a width of 48px and a height of 320px per image. This way, I can build the same size photostrip I built for the iPhone version and easily display the photostrips consistently on a website (I don't want their size to vary by platform). On Android, how can I resize to that resolution (48x320), even if the Android camera doesn't output that aspect ratio? Basically, I'd like to resize on Android and have it automatically zoom as necessary until 48x320 is reached, without looking stretched/distorted. I'm OK with part of the image (like the outside border) being lost in favor of getting a 48x320 image. Maybe this is just a straight Java question...
Thanks so much!
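A minimal sketch of the center-crop-and-scale described above (the method name and the crop policy are illustrative; Android's ThumbnailUtils.extractThumbnail(src, 48, 320) does essentially the same thing in one call):

```java
import android.graphics.Bitmap;

// Crop the source to the target aspect ratio around its center, then scale.
// Nothing is stretched; the overflowing border is discarded instead.
static Bitmap cropAndResize(Bitmap src, int targetW, int targetH) {
    float targetRatio = (float) targetW / targetH;
    float srcRatio = (float) src.getWidth() / src.getHeight();
    int cropW = src.getWidth();
    int cropH = src.getHeight();
    if (srcRatio > targetRatio) {
        cropW = Math.round(cropH * targetRatio); // source too wide: trim the sides
    } else {
        cropH = Math.round(cropW / targetRatio); // source too tall: trim top/bottom
    }
    int x = (src.getWidth() - cropW) / 2;
    int y = (src.getHeight() - cropH) / 2;
    Bitmap cropped = Bitmap.createBitmap(src, x, y, cropW, cropH);
    return Bitmap.createScaledBitmap(cropped, targetW, targetH, true);
}
```

Calling cropAndResize(shot, 48, 320) on each of the four shots yields uniform strips regardless of the camera's native aspect ratio.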
Here's my task, which I want to solve with as little effort as possible (preferably with Qt & C++ or Java): I want to use webcam video input to detect whether there is a crate (or more than one) in front of the camera lens or not. The scene can change from "clear" to "there is a crate in front of the lens" and back while the cam feeds its video signal to my application. For prototype testing/learning I have 2-3 images of the "empty" scene, and 2-3 images with one or more crates.
Do you have a straightforward idea of how to tackle this task? I found OpenCV, but isn't this framework too bulky for such a simple task? I'm new to the field of computer vision. Is this generally a hard task, or is it simple and robust to detect whether there's an obstacle in front of the cam in live feeds? Your expert opinion is deeply appreciated!
Here's an approach I've heard of, which may yield some success:
Perform edge detection on your image to translate it into a black-and-white image, whereby edges are shown as black pixels.
Now create a histogram to record the frequency of black pixels in each vertical column of the image. The theory here is that a high frequency value in or around one bucket of the histogram is indicative of a vertical edge, which could be the edge of a crate.
You could also consider a second histogram measuring the pixels in each row of the image, to catch horizontal edges the same way.
Obviously this is a fairly simple approach and is highly dependent on "simple" input, i.e. plain boxes with "hard" edges against a blank background (preferably a background that contrasts heavily with the box).
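A minimal sketch of that column-histogram idea in plain Java, assuming frames arrive as BufferedImages (the simple gradient test stands in for a real edge detector, and the threshold values are arbitrary):

```java
import java.awt.image.BufferedImage;

// Count strong horizontal luminance jumps per column; a tall spike in the
// returned histogram suggests a vertical edge, e.g. the side of a crate.
static int[] columnEdgeHistogram(BufferedImage img, int threshold) {
    int w = img.getWidth(), h = img.getHeight();
    int[] hist = new int[w];
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w - 1; x++) {
            int diff = Math.abs(luminance(img.getRGB(x, y))
                              - luminance(img.getRGB(x + 1, y)));
            if (diff > threshold) hist[x]++;
        }
    }
    return hist;
}

static int luminance(int rgb) {
    int r = (rgb >> 16) & 0xFF, g = (rgb >> 8) & 0xFF, b = rgb & 0xFF;
    return (r * 299 + g * 587 + b * 114) / 1000; // integer Rec. 601 weighting
}
```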
You don't need a full-blown computer-vision library to detect whether there is a crate in front of the camera or not. You can just take a snapshot and make a color histogram (simple). To capture the snapshot, take a look here:
http://msdn.microsoft.com/en-us/library/dd742882%28VS.85%29.aspx
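To illustrate the color-histogram idea, here is a hedged sketch in Java (the bin count and the L1 comparison are arbitrary choices, and the decision threshold has to be tuned against your sample images):

```java
import java.awt.image.BufferedImage;

// Coarse 4x4x4-bin RGB histogram: 2 bits per channel, 64 bins total.
static int[] colorHistogram(BufferedImage img) {
    int[] bins = new int[64];
    for (int y = 0; y < img.getHeight(); y++) {
        for (int x = 0; x < img.getWidth(); x++) {
            int rgb = img.getRGB(x, y);
            int r = ((rgb >> 16) & 0xFF) >> 6;
            int g = ((rgb >> 8) & 0xFF) >> 6;
            int b = (rgb & 0xFF) >> 6;
            bins[(r << 4) | (g << 2) | b]++;
        }
    }
    return bins;
}

// L1 distance between two histograms; above a tuned threshold, assume the
// scene has changed (a crate has appeared or disappeared).
static long histogramDistance(int[] a, int[] b) {
    long d = 0;
    for (int i = 0; i < a.length; i++) d += Math.abs(a[i] - b[i]);
    return d;
}
```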
Lots of variables here, including any possible changes in ambient lighting and any other activity in the field of view. Look at implementing a Canny edge detector (which both OpenCV and the Intel Performance Primitives provide) to look for the outline of the shape of interest. If you roughly know where the box will be, you can perhaps sum pixels in the region of interest. If the box can appear anywhere in the field of view, this is more challenging.
This is not something you should start in Java. When I have this kind of problem, I start with Matlab (with the OpenCV library) or something similar, see if the solution works there, and then port it to Java.
To answer your question: I did something similar by XOR-ing the 'reference' image (no crate, in your case) with the current image, then either working on the histogram (clustered pixels at the right end mean a large difference) or just summing the remaining visible pixels and comparing the sum with a threshold. XOR is not really precise, but it is fast.
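A minimal sketch of that XOR-and-threshold test, assuming both frames are BufferedImages of the same size (both thresholds are arbitrary and need tuning):

```java
import java.awt.image.BufferedImage;

// Count pixels whose RGB bits differ "enough" from the reference frame.
// Crude, as noted above, but cheap enough to run on every frame.
static boolean sceneChanged(BufferedImage reference, BufferedImage current,
                            int bitThreshold, int countThreshold) {
    int changed = 0;
    for (int y = 0; y < reference.getHeight(); y++) {
        for (int x = 0; x < reference.getWidth(); x++) {
            int xor = (reference.getRGB(x, y) ^ current.getRGB(x, y)) & 0xFFFFFF;
            if (Integer.bitCount(xor) > bitThreshold) changed++;
        }
    }
    return changed > countThreshold;
}
```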
My point is, it took me 2 hours to install Scilab and the toolkits and write a proof of concept. It would have taken me two days in Java, and if the first solution didn't work, each additional algorithm (already done in Mat-/Scilab) would cost another few hours. IMHO you are approaching the problem from the wrong angle.
If Java/C++ really are just incidental tools that don't matter, then drop them and use Scilab or some other Matlab clone; prototyping and fine-tuning will be much faster.
There are 2 parts involved in object detection: one is feature extraction, the other is similarity calculation. Some obvious features of the crate are its geometry, edges, texture, etc.
So you can find algorithms to extract these features from your crate images, then compare those features against the ones from your training sample images.
I am developing a 2D platformer game for the Android platform, so I don't really care about the screen DPI, but much more about the actual resolution in pixels. From what I've gathered on the net, there are a couple of different resolutions (and aspect ratios) present. According to my research, the two resolutions that are currently widespread are 480x320 (ratio 1.5) and 800x480 (ratio 1.67), is that right? I'd like to target these two resolutions to reach most customers.
Now, I can deal with the different aspect ratios by showing a black border of 40 pixels on each side of the bigger display, essentially reducing it to 720x480 pixels and a ratio of 1.5.
The problem with my game is that it is essential for gameplay that the players see the same amount of the world on each screen. Otherwise, some players would get an unfair advantage. Furthermore, I trigger some events depending on the visibility. For example, an enemy is only allowed to start shooting when the player starts seeing it. Otherwise, the enemies' bullets would seem to come from nowhere.
So I figured I need to either create my graphics for one resolution and scale them for the other, or create separate graphics for each resolution. Is that right? Unfortunately, both ways are suboptimal for pixel graphics.
On another note: how can I restrict my game to these resolutions only (especially for the Android Market)? I know about the "supports-screens" tag in the manifest, but that works based on the effective screen size, not the size in pixels, or am I mistaken?
I am also interested in the personal experiences of other Android game developers when it comes to resolution independence.
Thanks!
My question would be: what would you do on a PC? For game development, Android should be looked at much more like a PC target than a console. You just intrinsically need to accept that there will be some diversity of screens that you can't totally predict up-front.
So I think there are two main approaches to take:
(1) Use a constant "display size", as if you were setting a fixed video resolution on a PC and letting the user's monitor deal with it. On these devices there is of course no monitor, just one fixed display, so it doesn't make sense to modify the core resolution. Instead, you can set up the SurfaceView showing your game to have a fixed resolution, and let the platform's compositor take care of scaling it (in hardware) as it composites to the screen (see the sketch after these options).
(2) More intelligently adjust to the actual resolution of the screen you find yourself running on. Scale your graphics up or down yourself to create the playing area you want. Maybe have some different sizes of textures and select the appropriate ones for the screen resolution.
You could probably also do a combination of these, where you have a couple of fixed sizes to pick for the surface view, depending on the total resolution available, which the game can run well with. In either case, you can letterbox as appropriate to keep your aspect ratio constant on different screens, if that is what you want.
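For option (1), the fixed-resolution surface is a one-call affair (the 800x480 target and the view variable are illustrative):

```java
import android.view.SurfaceView;

// Render the game at a fixed 800x480; the hardware compositor scales the
// surface to the physical screen when it composites.
void useFixedGameResolution(SurfaceView gameView) {
    gameView.getHolder().setFixedSize(800, 480);
}
```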
There are three approaches to differences in aspect ratio:
1. Show opaque borders on some ratios ("letterboxing").
2. Show more of the game world on some ratios.
3. Don't work at all on some ratios.
With approach (1) you waste screen space on some devices. Not such a big deal for televisions, but miserable on handheld devices where screen space is limited. With approach (2) players on some devices get advantages (they can see more of the world) and disadvantages (sprites are smaller, so touch precision is harder). Approach (3) just sucks.
Obviously it depends on the details of your game which is better, but as a player I much prefer approach (2). The constituency who care if players on other devices get a bit of a hypothetical advantage is pretty small compared to the constituency who care if their screen is partly obscured by unnecessary black bars.
(Similar approaches and remarks apply to differences in resolution.)