Creating an application that supports different resolutions/display modes in Java

I am a beginner who is learning to write games in Java.
In the game I am writing, I am trying to support multiple display modes. First let me tell you a little about how I'm setting the display mode in the first place.
At the beginning of the code, I have a list of display modes I wish to support:
// List of supported display modes
private static DisplayMode modes[] = {
    new DisplayMode(640, 480, 32, 0),
    new DisplayMode(1024, 768, 32, 0),
};
I then get the list of display modes supported by the video card, compare the two lists, and use the first matching display mode.
/////////////////////////////////////////////
// Variable Declaration
/////////////////////////////////////////////
private GraphicsDevice vc;

/////////////////////////////////////////////
// Give Video Card Access to Monitor Screen
/////////////////////////////////////////////
public ScreenManager() {
    GraphicsEnvironment e = GraphicsEnvironment.getLocalGraphicsEnvironment();
    vc = e.getDefaultScreenDevice();
}

/////////////////////////////////////////////
// Find Compatible Display Mode
/////////////////////////////////////////////
// Compare display modes supported by the application with display modes
// supported by the video card; use the first matching display mode.
public DisplayMode findFirstCompatibleMode(DisplayMode modes[]) {
    DisplayMode goodModes[] = vc.getDisplayModes();
    for (int x = 0; x < modes.length; x++) {
        for (int y = 0; y < goodModes.length; y++) {
            if (displayModesMatch(modes[x], goodModes[y])) {
                return modes[x];
            }
        }
    }
    return null;
}
/////////////////////////////////////////////
// Checks if two display modes match each other
/////////////////////////////////////////////
public boolean displayModesMatch(DisplayMode m1, DisplayMode m2) {
    // Test resolution
    if (m1.getWidth() != m2.getWidth() || m1.getHeight() != m2.getHeight()) {
        return false;
    }
    // Test bit depth
    if (m1.getBitDepth() != DisplayMode.BIT_DEPTH_MULTI
            && m2.getBitDepth() != DisplayMode.BIT_DEPTH_MULTI
            && m1.getBitDepth() != m2.getBitDepth()) {
        return false;
    }
    // Test refresh rate
    if (m1.getRefreshRate() != DisplayMode.REFRESH_RATE_UNKNOWN
            && m2.getRefreshRate() != DisplayMode.REFRESH_RATE_UNKNOWN
            && m1.getRefreshRate() != m2.getRefreshRate()) {
        return false;
    }
    return true;
}
Currently, I am only supporting two resolutions, 640x480 and 1024x768.
In order to have every element of my game available in both resolutions, I first find how much the screen is resized and store this value in a variable called resizeRatio:
private void getResizeRatio() {
    // s.getWidth() returns the current width of the screen.
    resizeRatio = (double) 1024 / (double) s.getWidth();
}
And with every image I import, I divide the image height and width by this resizeRatio.
/////////////////////////////////////////////
// Scale the image to the right proportion for the resolution
/////////////////////////////////////////////
protected Image scaleImage(Image in) {
    Image out = in.getScaledInstance(
            (int) (in.getWidth(null) / resizeRatio),
            (int) (in.getHeight(null) / resizeRatio),
            Image.SCALE_SMOOTH);
    return out;
}
This was all fine and good until my application grew bigger and bigger. Soon I realized I had forgotten to resize some of the icons, and they all ended up in the wrong place at 640x480.
Additionally, I realized I must scale not just the size of all my images, but all the movement speeds and positions as well, since having my character move at 5px per refresh makes him move significantly faster at 640x480 than at 1024x768.
So my question is: instead of individually scaling every image, every icon, and every movement, is there a way to scale everything all at once? Or, if there is a better way of doing this, could someone please tell me?
Thank you for reading and any help would be much appreciated.

In the paintComponent(Graphics g) (or paint) method you can scale everything at once with Graphics2D.scale:
private double scale = 0.75;

@Override
public void paintComponent(Graphics g) {
    Graphics2D g2 = (Graphics2D) g; // The newer, more elaborate child class.
    g2.scale(scale, scale);
    ...
    g2.scale(1 / scale, 1 / scale); // Undo the scaling for anything painted afterwards.
}
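Building on that idea, one way to avoid scaling each asset by hand is to draw the whole scene in a fixed "design" resolution (say 1024x768) and apply a single transform derived from the actual panel size. A minimal sketch, assuming the design constants and the placeholder draw call below (they are illustrative, not from the question):

```java
import java.awt.Graphics;
import java.awt.Graphics2D;
import javax.swing.JPanel;

class ScaledGamePanel extends JPanel {
    // All game logic and drawing use these logical coordinates.
    static final int DESIGN_WIDTH = 1024;
    static final int DESIGN_HEIGHT = 768;

    // Uniform scale factor mapping design space onto the actual panel size.
    static double computeScale(int panelWidth, int panelHeight) {
        return Math.min((double) panelWidth / DESIGN_WIDTH,
                        (double) panelHeight / DESIGN_HEIGHT);
    }

    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        Graphics2D g2 = (Graphics2D) g.create(); // Copy, so the transform stays local.
        double s = computeScale(getWidth(), getHeight());
        g2.scale(s, s);
        // Draw everything in design coordinates; positions, sizes and speeds
        // never need to change per resolution.
        g2.fillRect(100, 100, 50, 50);
        g2.dispose();
    }
}
```

With this approach a 640x480 window simply gets a scale of 0.625, and sprite positions and movement speeds stay in one coordinate system.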

Related

Limiting the detection area in Google Vision, text recognition

I have been searching the whole day for a solution. I've checked out several threads regarding my problem:
Custom detector object
Reduce bar code tracking window
and more...
But they didn't help me much. Basically, I want the camera preview to be fullscreen, but text should only be recognized in the center of the screen, where a rectangle is drawn.
Technologies I am using:
Google Mobile Vision API for optical character recognition (OCR)
Dependency: play-services-vision
My current state: I created a BoxDetector class:
public class BoxDetector extends Detector {
    private Detector mDelegate;
    private int mBoxWidth, mBoxHeight;

    public BoxDetector(Detector delegate, int boxWidth, int boxHeight) {
        mDelegate = delegate;
        mBoxWidth = boxWidth;
        mBoxHeight = boxHeight;
    }

    public SparseArray detect(Frame frame) {
        int width = frame.getMetadata().getWidth();
        int height = frame.getMetadata().getHeight();
        int right = (width / 2) + (mBoxHeight / 2);
        int left = (width / 2) - (mBoxHeight / 2);
        int bottom = (height / 2) + (mBoxWidth / 2);
        int top = (height / 2) - (mBoxWidth / 2);
        YuvImage yuvImage = new YuvImage(frame.getGrayscaleImageData().array(),
                ImageFormat.NV21, width, height, null);
        ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
        yuvImage.compressToJpeg(new Rect(left, top, right, bottom), 100, byteArrayOutputStream);
        byte[] jpegArray = byteArrayOutputStream.toByteArray();
        Bitmap bitmap = BitmapFactory.decodeByteArray(jpegArray, 0, jpegArray.length);
        Frame croppedFrame = new Frame.Builder()
                .setBitmap(bitmap)
                .setRotation(frame.getMetadata().getRotation())
                .build();
        return mDelegate.detect(croppedFrame);
    }

    public boolean isOperational() {
        return mDelegate.isOperational();
    }

    public boolean setFocus(int id) {
        return mDelegate.setFocus(id);
    }

    @Override
    public void receiveFrame(Frame frame) {
        mDelegate.receiveFrame(frame);
    }
}
And implemented an instance of this class here:
final TextRecognizer textRecognizer = new TextRecognizer.Builder(App.getContext()).build();

// Instantiate the created box detector in order to limit the text detector's scan area
BoxDetector boxDetector = new BoxDetector(textRecognizer, width, height);

// Set the TextRecognizer's processor, but using the box detector
boxDetector.setProcessor(new Detector.Processor<TextBlock>() {
    @Override
    public void release() {
    }

    /*
     * Detect all the text from the camera using TextBlock,
     * append the values to a StringBuilder, and set it on the TextView.
     */
    @Override
    public void receiveDetections(Detector.Detections<TextBlock> detections) {
        final SparseArray<TextBlock> items = detections.getDetectedItems();
        if (items.size() != 0) {
            mTextView.post(new Runnable() {
                @Override
                public void run() {
                    StringBuilder stringBuilder = new StringBuilder();
                    for (int i = 0; i < items.size(); i++) {
                        TextBlock item = items.valueAt(i);
                        stringBuilder.append(item.getValue());
                        stringBuilder.append("\n");
                    }
                    mTextView.setText(stringBuilder.toString());
                }
            });
        }
    }
});

mCameraSource = new CameraSource.Builder(App.getContext(), boxDetector)
        .setFacing(CameraSource.CAMERA_FACING_BACK)
        .setRequestedPreviewSize(height, width)
        .setAutoFocusEnabled(true)
        .setRequestedFps(15.0f)
        .build();
On execution this exception is thrown:
Exception thrown from receiver.
java.lang.IllegalStateException: Detector processor must first be set with setProcessor in order to receive detection results.
    at com.google.android.gms.vision.Detector.receiveFrame(com.google.android.gms:play-services-vision-common@@19.0.0:17)
    at com.spectures.shopendings.Helpers.BoxDetector.receiveFrame(BoxDetector.java:62)
    at com.google.android.gms.vision.CameraSource$zzb.run(com.google.android.gms:play-services-vision-common@@19.0.0:47)
    at java.lang.Thread.run(Thread.java:919)
If anyone has a clue what my fault is, or has any alternatives, I would really appreciate it. Thank you!
This is what I want to achieve, a rectangular text area scanner:
Google Vision detection takes a Frame as input. A Frame is image data with an associated width and height. You can process this frame (crop it to a smaller, centered frame) before passing it to the detector. This processing must be fast, since it runs alongside the camera's image processing.
Check out my GitHub below and search for FrameProcessingRunnable. You can see the frame input there and do the processing yourself.
CameraSource
You can try to pre-parse the CameraSource feed as @Thành Hà Văn mentioned (which I tried first myself, but discarded after trying to adjust for the old and new camera APIs), but I found it easier to just limit the search area and use the detections returned by the default Vision detector and CameraSource. You can do it in several ways. For example:
(1) limiting the area of the screen by setting bounds based on the screen/preview size
(2) creating a custom class that can be used to dynamically set the detection area
I chose option 2 (I can post my custom class if needed), and then filtered the detections to only those within the specified area:
for (j in 0 until detections.size()) {
    val textBlock = detections.valueAt(j) as TextBlock
    for (line in textBlock.components) {
        if ((line.boundingBox.top.toFloat() * hScale) >= scanView.top.toFloat() &&
            (line.boundingBox.bottom.toFloat() * hScale) <= scanView.bottom.toFloat()) {
            canvas.drawRect(line.boundingBox, linePainter)
            if (scanning)
                if (((line.boundingBox.top.toFloat() * hScale) <= yTouch && (line.boundingBox.bottom.toFloat() * hScale) >= yTouch) &&
                    ((line.boundingBox.left.toFloat() * wScale) <= xTouch && (line.boundingBox.right.toFloat() * wScale) >= xTouch)) {
                    acceptDetection(line, scanCount)
                }
        }
    }
}
The scanning section is just some custom code I used to allow the user to select which detections to keep. You would replace everything inside the if (line...) block with your custom code, so it only acts on the cropped detection area. Note that this example only crops vertically, but you could crop horizontally instead, or in both directions.
In google-vision you can get the coordinates of a detected text as described in How to get position of text in an image using Mobile Vision API?
You get the TextBlocks from the TextRecognizer, then you filter the TextBlocks by their coordinates, which can be determined by the getBoundingBox() or getCornerPoints() methods of the TextBlock class:
TextRecognizer
Recognition results are returned by detect(Frame). The OCR algorithm
tries to infer the text layout and organizes each paragraph into
TextBlock instances. If any text is detected, at least one TextBlock
instance will be returned.
[..]
Public Methods
public SparseArray<TextBlock> detect (Frame frame) Detects and recognizes text in an image. Only supports bitmap and NV21 for now.
Returns mapping of int to TextBlock, where the int domain represents an opaque ID for the text block.
source : https://developers.google.com/android/reference/com/google/android/gms/vision/text/TextRecognizer
TextBlock
public class TextBlock extends Object implements Text
A block of text (think of it as a paragraph) as deemed by the OCR
engine.
Public Method Summary
Rect getBoundingBox() Returns the TextBlock's axis-aligned bounding box.
List<? extends Text> getComponents() Smaller components that comprise this entity, if any.
Point[] getCornerPoints() 4 corner points in clockwise direction starting with top-left.
String getLanguage() Prevailing language in the TextBlock.
String getValue() Retrieve the recognized text as a string.
source : https://developers.google.com/android/reference/com/google/android/gms/vision/text/TextBlock
So you basically proceed as in How to get position of text in an image using Mobile Vision API?; however, you do not split each block into lines and each line into words, like
//Loop through each `Block`
foreach (TextBlock textBlock in blocks)
{
    IList<IText> textLines = textBlock.Components;
    //Loop through each `Line`
    foreach (IText currentLine in textLines)
    {
        IList<IText> words = currentLine.Components;
        //Loop through each `Word`
        foreach (IText currentword in words)
        {
            //Get the Rectangle/boundingBox of the word
            RectF rect = new RectF(currentword.BoundingBox);
            rectPaint.Color = Color.Black;
            //Finally draw the Rectangle/boundingBox around the word
            canvas.DrawRect(rect, rectPaint);
            //Set the image on the `View`
            imgView.SetImageDrawable(new BitmapDrawable(Resources, tempBitmap));
        }
    }
}
Instead, you get the bounding boxes of all text blocks and then select the bounding box whose coordinates are closest to the center of the screen/frame, or closest to a rectangle that you specify (i.e. How can i get center x,y of my view in android?). For this you use the getBoundingBox() or getCornerPoints() methods of TextBlock ...
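The selection step itself is plain geometry. A sketch of it, using java.awt.Rectangle and java.awt.Point standing in for Android's android.graphics.Rect and Point (the class and method names here are my own, not from the Vision API):

```java
import java.awt.Point;
import java.awt.Rectangle;
import java.util.List;

class CenterBlockPicker {
    // Returns the rectangle whose center is closest to the given target point
    // (e.g. the center of the preview), or null if the list is empty.
    static Rectangle closestToCenter(List<Rectangle> boxes, Point target) {
        Rectangle best = null;
        long bestDist = Long.MAX_VALUE;
        for (Rectangle r : boxes) {
            long dx = (r.x + r.width / 2) - target.x;
            long dy = (r.y + r.height / 2) - target.y;
            long dist = dx * dx + dy * dy; // Squared distance suffices for comparison.
            if (dist < bestDist) {
                bestDist = dist;
                best = r;
            }
        }
        return best;
    }
}
```

On Android you would feed this the boxes returned by getBoundingBox() for each detected TextBlock.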

Setting icons of different size depending on screen resolution [duplicate]

I've got a Java desktop app that runs, amongst others, on OS X.
Now the new MacBook Pro has a Retina display and I'm concerned: how is it going to work regarding Swing?
What about when a Java app uses both Swing components and some bitmap graphics (like custom icons / ImageIcon)?
Will all desktop Java apps be automatically resized (for example by quadrupling every pixel), or am I going to need to create two versions of my icon set (for example one with 24x24 icons and the other with 96x96 icons) and somehow determine that the app is running on a Retina display?
Use the IconLoader library. It supports HiDPI images: http://bulenkov.com/iconloader/ It also provides a way to work with HiDPI images (drawing, etc.).
On Apple's Java 6 you can provide multiple versions of the same image. Depending on the screen (retina or not), one or the other image is picked and drawn.
However, those images have to be loaded in a special way:
Toolkit.getDefaultToolkit().getImage("NSImage://your_image_name_without_extension");
For example, if your (regular resolution) image is called "scissor.png", you have to create a high resolution version "scissor@2x.png" (following the Apple naming conventions) and place both images in the Resources directory of your app bundle (yes, you need to bundle your app).
Then call:
Image img = Toolkit.getDefaultToolkit().getImage("NSImage://scissor");
You can use the resulting image in your buttons and it will be drawn with the right resolution magically.
There are two other "tricks" you can use:
Using an AffineTransform of (0.5, 0.5) on your Graphics2D object before drawing an Image. Also see this java-dev message
Creating a high dpi version of your image programmatically using this hack
The first "trick" (0.5 scaling) by now also works on Oracle's Java 7/8.
I.e. if you draw an image with 0.5 scaling directly to the component's Graphics object, it will be rendered in high resolution on Retina displays (and also with half its original size).
Update
Starting with Java 9, there is better built-in support for images with different resolutions via the MultiResolutionImage interface. For more details, please see this answer.
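As a brief illustration of that Java 9+ API: a BaseMultiResolutionImage bundles a base image with higher-resolution variants, and getResolutionVariant picks the variant for a requested size (the 16/32 pixel sizes below are arbitrary examples):

```java
import java.awt.Image;
import java.awt.image.BaseMultiResolutionImage;
import java.awt.image.BufferedImage;

class MultiResDemo {
    static BaseMultiResolutionImage makeIcon() {
        // Base image plus a 2x variant, analogous to icon.png and icon@2x.png.
        BufferedImage base = new BufferedImage(16, 16, BufferedImage.TYPE_INT_ARGB);
        BufferedImage retina = new BufferedImage(32, 32, BufferedImage.TYPE_INT_ARGB);
        return new BaseMultiResolutionImage(base, retina);
    }

    public static void main(String[] args) {
        BaseMultiResolutionImage icon = makeIcon();
        // Graphics2D.drawImage is supposed to pick the right variant automatically
        // based on the current transform; the lookup can also be done directly:
        Image forHiDpi = icon.getResolutionVariant(32, 32);
        System.out.println(forHiDpi.getWidth(null)); // 32x32 variant chosen
    }
}
```

getResolutionVariant returns the first variant whose dimensions are at least the requested size, so the 16x16 base is used for small destinations and the 32x32 variant for HiDPI rendering.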
I can confirm that scaling your images works on Oracle Java 1.8. I cannot get the NSImage hack to work on Java 1.7 or 1.8; I think it only works with Apple's Java 6.
Unless someone else has a better solution, what I do is the following:
Create two sets of icons.
If you have a 48px-wide icon, create one at 48px at normal DPI and another at 96px at 2x DPI. Rename the 2x DPI image with the @2x.png suffix to conform with Apple's naming standard.
Subclass ImageIcon and call it RetinaIcon or whatever.
You can test for a Retina display as follows:
public static boolean isRetina() {
    boolean isRetina = false;
    GraphicsDevice graphicsDevice =
            GraphicsEnvironment.getLocalGraphicsEnvironment().getDefaultScreenDevice();
    try {
        // Apple's JVM exposes the backing scale factor in a private "scale" field.
        Field field = graphicsDevice.getClass().getDeclaredField("scale");
        if (field != null) {
            field.setAccessible(true);
            Object scale = field.get(graphicsDevice);
            if (scale instanceof Integer && ((Integer) scale).intValue() == 2) {
                isRetina = true;
            }
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
    return isRetina;
}
Make sure to @Override the width and height of the new ImageIcon class as follows:
@Override
public int getIconWidth()
{
    if (isRetina())
    {
        return super.getIconWidth() / 2;
    }
    return super.getIconWidth();
}

@Override
public int getIconHeight()
{
    if (isRetina())
    {
        return super.getIconHeight() / 2;
    }
    return super.getIconHeight();
}
Once you have a test for the Retina screen and your custom width/height methods overridden, you can customise the paintIcon method as follows:
@Override
public synchronized void paintIcon(Component c, Graphics g, int x, int y)
{
    ImageObserver observer = getImageObserver();
    if (observer == null)
    {
        observer = c;
    }
    Image image = getImage();
    int width = image.getWidth(observer);
    int height = image.getHeight(observer);
    final Graphics2D g2d = (Graphics2D) g.create(x, y, width, height);
    if (isRetina())
    {
        // Draw the 2x image at half size; the OS renders it at full pixel density.
        g2d.scale(0.5, 0.5);
    }
    g2d.drawImage(image, 0, 0, observer);
    g2d.dispose();
}
I do not know how this will work with multiple screens, though; is there anyone else who can help out with that?
Hope this code helps out anyway!
Jason Barraclough.
Here is an example of using the scaling as mentioned above:
RetinaIcon is on the left. ImageIcon is on the right
Here is a solution that also works when the icons are used in the Apple menu, where the icon is automatically greyed out. So I have implemented a class DenseIcon which paints densely:
public synchronized void paintIcon(Component c, Graphics g, int x, int y) {
    if (getImageObserver() == null) {
        g.drawImage(getImage0(), x, y, getIconWidth(), getIconHeight(), c);
    } else {
        g.drawImage(getImage0(), x, y, getIconWidth(), getIconHeight(), getImageObserver());
    }
}
How to hook into the greying out I have not yet figured out, so as a kludge we return a low-resolution image so that the menu can do its modifications:
public Image getImage() {
    Image image = getImage0().getScaledInstance(
            getIconWidth(),
            getIconHeight(),
            Image.SCALE_SMOOTH);
    ImageIcon icon = new ImageIcon(image, getDescription());
    return icon.getImage();
}
You can find the code of the full class here on gist. You need to instantiate the icon class with a URL to an image that is twice the size. Works for 2K displays.
This is how icons look on my Retina MacBook from 2012:
On the left are icons in IntelliJ IDEA 11 (a Swing app), and on the right IDEA 12, which is claimed to be retinized. As you can see, the automatically resized icons (on the left) look pretty ugly.
As far as I know, they, just like the Chrome team, did it by providing double-sized icons.

Efficient way to color image regions with transparent overlay from a map of [1,0]

I avoided the word bitmap in the title, as bitmap in this context usually (?) refers to the bitmap of the underlying image.
I have an image that is segmented into a number of different regions. For each region I have a map of ones and zeros (a bitmap), where 1 represents inside the region and 0 outside it. Not every part of the image is covered by a region, and the regions may overlap. The images have dimensions 480x360.
What I would like to do is overlay the image with transparent red when you hover over a region with the mouse. My problem is that my current method is very slow: it takes a second or two before the overlay appears.
My current approach uses a JLayer over my ImagePanel (an extension of JPanel drawing a BufferedImage). My instance of LayerUI then draws the overlay when the mouse is moved:
public class ImageHighlightLayerUI extends LayerUI<JPanel> {
    private boolean mouseActive;
    private Point mousePoint;
    private byte[][][] masks;

    public void paint(Graphics g, JComponent c) {
        super.paint(g, c);
        if (mouseActive) {
            byte[][] curMask = null;
            // Find which region the mouse intersects
            for (int i = 0; i < masks.length; i++) {
                if (masks[i][mousePoint.x][mousePoint.y] == 1) {
                    curMask = masks[i];
                    break;
                }
            }
            // Outside every region --> don't draw an overlay
            if (curMask == null) return;
            // Transparent red
            g.setColor(new Color(1.0f, 0.0f, 0.0f, 0.8f));
            // Draw the mask, one pixel at a time
            for (int x = 0; x < curMask.length; x++)
                for (int y = 0; y < curMask[x].length; y++)
                    if (curMask[x][y] == 1)
                        g.fillRect(x, y, 1, 1);
        }
    }
}
So, how can I make this more efficient? I am open to suggestions other than a JLayer. Can I use my bitmap in some "magic" way with a Swing method? Can I mix it with the underlying bitmap of the BufferedImage? Is removing transparency the only thing that will help? (Transparency is something I would like to keep.)
Two other side problems which are not necessarily related to the question, but I have yet to solve:
The overlay is repainted every time the mouse moves. This seems like a waste of resources.
When regions are overlapping, how do I choose which one to paint?
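One way to avoid the per-pixel fillRect loop, sketched here as an assumption rather than taken from the question, is to pre-render each mask once into a transparent ARGB image, so that painting collapses into a single drawImage call (the class and method names are illustrative):

```java
import java.awt.image.BufferedImage;

class MaskOverlay {
    // Pre-render a 0/1 mask into an ARGB image: red with ~80% opacity where
    // the mask is 1, fully transparent elsewhere. Done once, not per repaint.
    static BufferedImage toOverlay(byte[][] mask) {
        int w = mask.length;
        int h = mask[0].length;
        BufferedImage img = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
        int red = 0xCCFF0000; // alpha 0xCC (~80%), full red, no green/blue
        for (int x = 0; x < w; x++) {
            for (int y = 0; y < h; y++) {
                if (mask[x][y] == 1) {
                    img.setRGB(x, y, red);
                }
            }
        }
        return img;
    }
}
```

Inside paint, the overlay for the hovered region then reduces to g.drawImage(overlay, 0, 0, null), which the graphics pipeline can composite far faster than thousands of individual fillRect calls.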

How to know if a JFrame is on screen in a multi screen environment

My application is used in multi-screen environments. The application stores its location on close and starts at the last position.
I obtain the position by calling frame.getLocation().
This gives me a positive value if the frame is on the main screen or to the right of it. Frames located on a screen to the left of the main screen get negative X values.
The problem comes up when the screen configuration changes (for example, multiple users share one Citrix account and have different screen resolutions).
My problem now is to determine whether the stored position is visible on a screen. According to some other posts, I should use GraphicsEnvironment to get the sizes of the available screens, but I can't get the positions of the different screens.
Example: getLocation() gives Point(-250,10)
GraphicsEnvironment gives
Device1-Width: 1920
Device2-Width: 1280
Now, depending on the ordering of the screens (whether the secondary monitor is placed on the left or on the right of the primary one), the frame might be visible, or not.
Can you tell me how to fix this problem?
Thanks a lot
This is a little rudimentary, but if all you want to know is whether the frame would be visible on screen, you can calculate the "virtual" bounds of the desktop and test whether the frame is contained within them.
public class ScreenCheck {

    public static void main(String[] args) {
        JFrame frame = new JFrame();
        frame.setBounds(-200, -200, 200, 200);
        Rectangle virtualBounds = getVirtualBounds();
        System.out.println(virtualBounds.contains(frame.getBounds()));
    }

    public static Rectangle getVirtualBounds() {
        Rectangle bounds = new Rectangle(0, 0, 0, 0);
        GraphicsEnvironment ge = GraphicsEnvironment.getLocalGraphicsEnvironment();
        GraphicsDevice lstGDs[] = ge.getScreenDevices();
        for (GraphicsDevice gd : lstGDs) {
            bounds.add(gd.getDefaultConfiguration().getBounds());
        }
        return bounds;
    }
}
Now, this uses the Rectangle of the frame, but you could use its location instead.
Equally, you could use each GraphicsDevice individually and check each one in turn...
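That per-device check can be sketched by separating the pure geometry from the AWT calls, which also keeps it easy to test; the class and method names below are illustrative, not from the answer above:

```java
import java.awt.GraphicsDevice;
import java.awt.GraphicsEnvironment;
import java.awt.Point;
import java.awt.Rectangle;

class ScreenLocator {
    // Pure geometry: is the point inside any of the given screen bounds?
    static boolean isOnAnyScreen(Point p, Rectangle[] screens) {
        for (Rectangle screen : screens) {
            if (screen.contains(p)) {
                return true;
            }
        }
        return false;
    }

    // Gather the actual per-device bounds; unlike the virtual-bounds approach,
    // each screen is kept separate, so gaps between monitors are not counted.
    static Rectangle[] currentScreenBounds() {
        GraphicsDevice[] devices =
                GraphicsEnvironment.getLocalGraphicsEnvironment().getScreenDevices();
        Rectangle[] bounds = new Rectangle[devices.length];
        for (int i = 0; i < devices.length; i++) {
            bounds[i] = devices[i].getDefaultConfiguration().getBounds();
        }
        return bounds;
    }
}
```

For the stored Point(-250, 10) from the question, this returns true only when a screen's bounds actually extend into negative X, i.e. when a monitor sits to the left of the primary one.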
This might be helpful to others looking for a similar solution.
I wanted to know if any part of my Swing application was off screen. This method calculates the application's area and determines whether all of it is visible, even if it's split across multiple screens. It helps when you save your application's location, then restart it with a different display configuration.
public static boolean isClipped(Rectangle rec) {
    boolean isClipped = false;
    int recArea = rec.width * rec.height;
    GraphicsEnvironment ge = GraphicsEnvironment.getLocalGraphicsEnvironment();
    GraphicsDevice sd[] = ge.getScreenDevices();
    Rectangle bounds;
    int boundsArea = 0;
    for (GraphicsDevice gd : sd) {
        bounds = gd.getDefaultConfiguration().getBounds();
        if (bounds.intersects(rec)) {
            bounds = bounds.intersection(rec);
            boundsArea = boundsArea + (bounds.width * bounds.height);
        }
    }
    if (boundsArea != recArea) {
        isClipped = true;
    }
    return isClipped;
}

cvFlip() flickers or returns null

I'm doing a project which involves taking a live camera feed and displaying it on a window for the user.
As the camera image is the wrong way round by default, I'm flipping it using cvFlip (so the computer screen acts like a mirror), like so:
while (true)
{
    IplImage currentImage = grabber.grab();
    cvFlip(currentImage, currentImage, 1);
    // Image then displayed here on the window.
}
This works fine most of the time. However, for a lot of users (mostly on faster PCs), the camera feed flickers violently. Basically an unflipped image is displayed, then a flipped image, then unflipped, over and over.
So I then changed things a bit to detect the problem...
while (true)
{
    IplImage currentImage = grabber.grab();
    IplImage flippedImage = null;
    cvFlip(currentImage, flippedImage, 1); // l-r = 90_degrees_steps_anti_clockwise
    if (flippedImage == null)
    {
        System.out.println("The flipped image is null");
        continue;
    }
    else
    {
        System.out.println("The flipped image isn't null");
        continue;
    }
}
The flipped image always turns out to be null. Why? What am I doing wrong? This is driving me crazy.
If this is an issue with cvFlip(), what other ways are there to flip an IplImage?
Thanks to anyone who helps!
You need to initialise the flipped image with an empty image, rather than null, before you can store a result in it. Also, you should create the image only once and then reuse the memory for more efficiency. A better way to do this would be something like the following (untested):
IplImage current = null;
IplImage flipped = null;
while (true) {
    current = grabber.grab();
    // Initialise the flipped image once the source image information
    // becomes available for the first time.
    if (flipped == null) {
        flipped = cvCreateImage(current.cvSize(), current.depth(), current.nChannels());
    }
    cvFlip(current, flipped, 1);
}
