First problem: you have 400 pixels of width to work with, and need to fit some text within that constraint as large as possible (that is, the text should use the full width).
Throw in a new constraint: if the text is just "A", it should not be zoomed above 100 pixels in height (or some specific font size).
Then, a final situation: line breaks. Fit some text at the largest possible size within, e.g., 400 x 150 pixels.
An obvious way is to simply start at point size 1 and increase it until the text no longer fits. That would work for all three problems, but it would be very crude. Fitting a single line within bounds could also be done by drawing it at some fixed point size, checking the resulting pixel bounds of the text, and then simply scaling it with a transform (the text then scales properly too; check out TransformUI).
Any ideas of other ways to attack this would be greatly appreciated!
Since what you are modelling is complex, especially with line breaks, your initial proposal of trying all sizes is along the right lines, especially if the result needs to be accurate.
However, rather than testing each value, you can use a binary search to find the appropriate font size. You know the size is somewhere between 1 and 100 (your upper range). Using a binary search, each test sets the font size and checks the resulting layout: if the text is too large, search the lower half of the current range of possible values; if it fits, search the upper half. The search will take at most 7 attempts (log base 2 of 100, rounded up), it is exact, finding the largest size without going over, and it stays flexible if you later need to add more requirements, such as a mix of fonts or more stringent constraints on the layout.
I'm assuming you are using a text component that does line wrapping, and that you can set the maximum width to 400. So, you set the font size and it does the layout giving you back the required height, laying out text within the given width.
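A minimal sketch of that search, assuming a hypothetical fitsAt() check that you would replace with your own layout measurement:

```java
// Binary search for the largest font size whose layout fits the bounds.
public class FontSizeSearch {

    static int largestFittingSize(int min, int max) {
        int best = min;
        while (min <= max) {
            int mid = (min + max) / 2;
            if (fitsAt(mid)) {
                best = mid;       // fits: try the upper half
                min = mid + 1;
            } else {
                max = mid - 1;    // too large: try the lower half
            }
        }
        return best;
    }

    // Hypothetical fit test; replace with a real layout measurement,
    // e.g. set the font on a wrapping text component constrained to
    // 400 px wide and compare the resulting height against 150 px.
    static boolean fitsAt(int size) {
        return size <= 42; // stand-in so the sketch runs
    }

    public static void main(String[] args) {
        System.out.println(largestFittingSize(1, 100)); // prints 42
    }
}
```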
You can use hints to guide the algorithm to the result more quickly, such as making your first guess close to the expected size, but text rendering is fast enough that the performance gain may not be worth the implementation effort.
See Wikipedia - Binary Search Algorithm
I would do the following:
Assume you want W pixels wide text.
Pick an arbitrary size, say 10pt, and see what bounding box the text string gets at that size. Let's say it comes out N pixels wide.
Set the new size to 10pt * W/N, and repeat from the measurement step until you are within a reasonable threshold. (Hopefully one iteration is enough.)
This relies on the fact that the width of the string is roughly proportional to the point size of the font.
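A rough sketch of that loop, measuring with Font.getStringBounds() (the font name, starting size and threshold here are arbitrary):

```java
import java.awt.Font;
import java.awt.font.FontRenderContext;
import java.awt.geom.AffineTransform;

public class ProportionalFit {
    public static void main(String[] args) {
        String text = "Hello, world";
        float targetWidth = 400f; // W
        FontRenderContext frc =
                new FontRenderContext(new AffineTransform(), true, true);

        Font font = new Font("SansSerif", Font.PLAIN, 10);
        for (int i = 0; i < 3; i++) {              // a few refinement passes
            double n = font.getStringBounds(text, frc).getWidth(); // N
            if (Math.abs(n - targetWidth) < 1.0) break;            // close enough
            font = font.deriveFont((float) (font.getSize2D() * targetWidth / n));
        }
        System.out.println("Estimated size: " + font.getSize2D());
    }
}
```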
I'd instantiate the Font at the largest desired size: say 72 for one inch glyphs at 72 dpi. Use TextLayout to get the bounds and scale using AffineTransform (direct) or AffineTransformOp (offscreen), while preserving the aspect ratio. Suitable RenderingHints help, too.
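Roughly like this, sketched as a helper you would call from paintComponent() (the 72 pt base and font face are placeholders):

```java
import java.awt.Font;
import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.font.TextLayout;
import java.awt.geom.AffineTransform;
import java.awt.geom.Rectangle2D;

// Lay out the text at a fixed 72 pt, then scale the Graphics2D by the
// same factor on both axes so the glyphs fill targetWidth while the
// aspect ratio is preserved.
static void drawFitted(Graphics2D g2, String text, double targetWidth,
                       double x, double y) {
    g2.setRenderingHint(RenderingHints.KEY_TEXT_ANTIALIASING,
                        RenderingHints.VALUE_TEXT_ANTIALIAS_ON);
    Font font = new Font("Serif", Font.PLAIN, 72);
    TextLayout layout = new TextLayout(text, font, g2.getFontRenderContext());
    Rectangle2D bounds = layout.getBounds();
    double scale = targetWidth / bounds.getWidth();

    AffineTransform saved = g2.getTransform();
    g2.translate(x, y);
    g2.scale(scale, scale); // uniform scale: aspect ratio preserved
    layout.draw(g2, 0f, 0f);
    g2.setTransform(saved);
}
```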
Related
I'm making a retro-style game in Java with a fixed 256*192 resolution, and I want to scale the game by a whole number based on how many times the player's resizable window can fit that resolution inside it.
For example, if the window is default 256*192, the scale is 1.
If the window is 512*384, the scale is 2.
But if the window resolution is a different aspect ratio, such as 560*490, the maximum number of times the original resolution can fit is 2, so the scale would be 2 and I'd just fill the extra bounds with black or some basic pattern.
Maybe it's a stupidly simple answer and I just haven't had enough coffee yet, but I can't figure out how to find the number for scale.
Help?
Say your game is a*b and the window is x*y. Compare x/a to y/b: your scale is the smaller of those two values, since that is the dimension that constrains the fit. If you want an integer scale, just round that value down.
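As a one-line sketch (hypothetical method name, integer arguments):

```java
// a*b game, x*y window: integer division already rounds down.
static int scaleFor(int x, int y, int a, int b) {
    return Math.max(1, Math.min(x / a, y / b)); // never below 1
}
// scaleFor(256, 192, 256, 192) == 1
// scaleFor(512, 384, 256, 192) == 2
// scaleFor(560, 490, 256, 192) == 2
```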
I'm curious why there is no implementation for Sheet.setColumnWidth that accepts a Points parameter.
There is a method to set the height of a Row in points, which is setHeightInPoints
The only available method for setting the width accepts a rather odd unit, defined as
width = Truncate([{Number of Visible Characters} * {Maximum Digit Width} + {5 pixel padding}]/{Maximum Digit Width}*256)/256
This means it depends on the width of a digit in the default font; the parameter is expressed in 1/256ths of a character width rather than in some direct representation of a width. In Excel itself you set the width in points, too.
This means the behaviour of row height and column width is not symmetric.
Is there any rational reason for only having that version of setColumnWidth?
This leads to serious problems: the best I can get produces a different result on every computer, because the width that is set depends on the user's default font setting.
I believe it has something to do with displaying nicely on many different setups, as the comment of pnuts suggests. But it is only usable in a very narrow field.
At the moment I believe there is no simple workaround, and I cannot find one right now (perhaps just one that works for my particular case).
Is there any good way to calculate a column width value from a desired points value?
E.g., I want 120 points on any computer that uses the Excel export functionality. What width value should I pass as the parameter here to get the wanted width in points?
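Not exactly, because of the font dependency, but here is a rough sketch of the conversion under loudly stated assumptions: rendering at 96 DPI, and a default font whose maximum digit width (MDW) is about 7 px (roughly Calibri 11). Those two assumptions are precisely what varies between machines, so treat it as an approximation:

```java
import org.apache.poi.ss.usermodel.Sheet;

// Approximate points -> column width units, inverting the formula
// quoted above: widthUnits / 256 * MDW is roughly the rendered pixel
// width of the column.
static void setColumnWidthInPoints(Sheet sheet, int column, double points) {
    final double MDW = 7.0;                  // ASSUMED max digit width in px
    double pixels = points * 96.0 / 72.0;    // points -> pixels at 96 DPI
    int widthUnits = (int) Math.round(pixels / MDW * 256.0);
    sheet.setColumnWidth(column, widthUnits);
}
```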
I am using a BufferedImage to hold a 10 by 10 sample of an image. With this Image I would like to find an approximate average color (as a Color object) that represents this image. Currently I have two ideas on how to implement this feature:
Create a scaled instance of the image at 1 by 1 size and take the color of the newly created image as the average color
Use two nested for loops: the inner loop averages each line pixel by pixel, and the outer loop combines the per-line averages.
I really like the idea of the first solution, but I am not sure how accurate it would be. The second solution would be as accurate as they come, but it seems incredibly tedious. I also believe the getColor call is processor intensive at a scale like this (I am performing this averaging roughly 640 to 1920 times a second); please correct me if I am wrong. Since this method will be very CPU intensive, I would like a fairly efficient algorithm.
It depends what you mean by average. If half the pixels are red and half are blue, would the average be purple? In that case I think you can just add up all the channel values and divide by the number of pixels.
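A straightforward sketch of that mean, reading each pixel with getRGB():

```java
import java.awt.Color;
import java.awt.image.BufferedImage;

// Sum each channel over all pixels, then divide by the pixel count.
static Color averageColor(BufferedImage img) {
    long r = 0, g = 0, b = 0;
    int n = img.getWidth() * img.getHeight();
    for (int y = 0; y < img.getHeight(); y++) {
        for (int x = 0; x < img.getWidth(); x++) {
            int rgb = img.getRGB(x, y);
            r += (rgb >> 16) & 0xFF;
            g += (rgb >> 8) & 0xFF;
            b += rgb & 0xFF;
        }
    }
    return new Color((int) (r / n), (int) (g / n), (int) (b / n));
}
```

For a 10 by 10 sample this is only 100 getRGB() calls; if it ever shows up in profiling, the bulk getRGB(x, y, w, h, ...) overload fetches the whole block in one call.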
However, I suspect that rather than the average, you want the dominant colour?
In that case one alternative could be to discretise the colours into 'buckets' (say at intervals of 100, or in the extreme case just three: one for red, one for green and one for blue) and build a histogram (a simple array of counts). You would then take the bucket with the highest count.
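A sketch of that bucket idea, quantising each channel to a small number of levels (the level count is a parameter; same imports as the averaging sketch above):

```java
// Count pixels per quantised (r, g, b) bucket and return the centre
// of the fullest bucket. levels = 4 gives 64 buckets.
static Color dominantColor(BufferedImage img, int levels) {
    int[] counts = new int[levels * levels * levels];
    for (int y = 0; y < img.getHeight(); y++) {
        for (int x = 0; x < img.getWidth(); x++) {
            int rgb = img.getRGB(x, y);
            int r = ((rgb >> 16) & 0xFF) * levels / 256; // 0..levels-1
            int g = ((rgb >> 8) & 0xFF) * levels / 256;
            int b = (rgb & 0xFF) * levels / 256;
            counts[(r * levels + g) * levels + b]++;
        }
    }
    int best = 0;
    for (int i = 1; i < counts.length; i++)
        if (counts[i] > counts[best]) best = i;
    int b = best % levels, g = (best / levels) % levels, r = best / (levels * levels);
    return new Color((r * 256 + 128) / levels,   // bucket centres
                     (g * 256 + 128) / levels,
                     (b * 256 + 128) / levels);
}
```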
Be careful with idea 1. Remember that scaling often takes place by sampling. Since you have a very small image, you have already lost a lot of information. Scaling down further will probably just sample a few pixels and not really average all of them. Better check what algorithm your scaling process is using.
I'm currently drawing a string to a canvas with a specified font. I would, however, like to scale this font based on the window size.
Given a target string, how do I find the point size of a particular font face so that printing the target string will be either h units tall, or w units wide? Is there a linear relationship between point size and font dimensions?
I can think of very smelly ways to determine a relative point size (pick an arbitrary size and shrink / grow until the dimensions are within some epsilon of the target), but would rather do it more cleanly.
I want to do this with fonts-only, if possible, and not resort to affine transformations.
For the best metrics, I prefer TextLayout, illustrated here, but deriveFont(), suggested by @StanislavL among the answers here, is surprisingly agile and not at all malodorous.
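For example, a sketch of the deriveFont() route, measuring once at an arbitrary reference size and leaning on the near-linear relationship between point size and advance width (the method name is mine):

```java
import java.awt.Font;
import java.awt.font.FontRenderContext;
import java.awt.font.TextLayout;

// Measure the string at a 100 pt reference size, then derive the size
// that hits the target width. One pass is usually within a pixel; add
// a second measurement if you need tighter tolerances.
static Font fitToWidth(Font base, String text, float targetWidth,
                       FontRenderContext frc) {
    Font probe = base.deriveFont(100f);
    double measured = new TextLayout(text, probe, frc).getBounds().getWidth();
    return base.deriveFont((float) (100.0 * targetWidth / measured));
}
```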
I am trying to solve a problem of compositing two images in Java. The program will take a part of the first image and paste it onto the second image. The goal is to make the boundary between the two images less visible. The boundary must be chosen so that the difference between the two images along it is small.
My Tasks:
To write a method that chooses the boundary between the two images. The method will receive the overlapping parts of the input images. These must first be transformed so that the boundary always runs from the top-left corner to the bottom-right corner.
NOTE:
The returned image should not be the joined image, but should indicate which parts of the two images were used.
The pixels of the boundary line can be marked with a constant (SEAM). Pixels from the first image can be marked with the integer 0, pixels from the second image with the integer 1. After choosing the boundary line, a flood-fill algorithm can be used to fill the remaining pixels with 0 or 1.
NOTE: The image can be represented as a graph in which each pixel is connected to its left, right, top and bottom neighbors, so the flood fill works like a depth-first search (a sketch follows below).
The shortest path algorithm must be used to choose the boundary in order to make it small.
NOTE: I can not use any java data structure except Arrays (not even ArrayList)
I'm new to this area and trying to solve it. What steps must I follow to solve this problem? Or can you point me to a tutorial?
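Not a full solution, but the flood-fill step can be sketched under the arrays-only constraint with an explicit int stack (the SEAM value and method name are mine):

```java
// DFS flood fill over a label grid: spread `value` (0 or 1) from the
// start pixel, stopping at the SEAM boundary, using only plain arrays.
static final int SEAM = 2;

static void floodFill(int[][] labels, int startX, int startY, int value) {
    int h = labels.length, w = labels[0].length;
    int[] stack = new int[4 * w * h + 1];  // room for duplicate pushes
    int top = 0;
    stack[top++] = startX + startY * w;    // encode (x, y) as one int
    while (top > 0) {
        int p = stack[--top];
        int x = p % w, y = p / w;
        if (labels[y][x] == SEAM || labels[y][x] == value) continue;
        labels[y][x] = value;
        // push the 4-connected neighbors
        if (x > 0)     stack[top++] = (x - 1) + y * w;
        if (x < w - 1) stack[top++] = (x + 1) + y * w;
        if (y > 0)     stack[top++] = x + (y - 1) * w;
        if (y < h - 1) stack[top++] = x + (y + 1) * w;
    }
}
```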
I would do it like this:
First choose the width of the border strip to check, at your discretion. Then:
1. Find the maximal possible shift in pixels; call it D.
2. For every shift in the square (±D, ±D), compute k (the correlation coefficient) over the border strip, taken in the middle of the overlap.
3. The shift with the largest k is the best; take it as fixed.
4. Now move the border itself, scoring candidate positions by k the same way, and keep the best placement. Done.
If D is large and the process is slow, do it in two (or more) stages: in the earlier stages use a large step when scanning shifts for k, and use a step of 1 only in the last stage. You could also apply some filtering beforehand.
If the border or the images' relative position can be rotated, the algorithm doesn't change in principle: additionally search for the best k among slightly rotated positions, and later try rotating the border as well.
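A sketch of the score itself, taking "k" to be Pearson's correlation coefficient between the two images' luminances over the border strip (strip extraction and the shift scan are left out):

```java
// Correlation between two equal-length luminance strips: 1.0 means a
// perfect linear match, values near 0 mean no relationship.
static double correlation(int[] a, int[] b) {
    int n = a.length;
    double ma = 0, mb = 0;
    for (int i = 0; i < n; i++) { ma += a[i]; mb += b[i]; }
    ma /= n; mb /= n;
    double cov = 0, va = 0, vb = 0;
    for (int i = 0; i < n; i++) {
        cov += (a[i] - ma) * (b[i] - mb);
        va  += (a[i] - ma) * (a[i] - ma);
        vb  += (b[i] - mb) * (b[i] - mb);
    }
    return cov / Math.sqrt(va * vb); // undefined if either strip is flat
}
```

Scan every shift in (±D, ±D), score the strip with this, and keep the shift with the highest k.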