Java Splash Screen scale factor

Java seems to scale the splash screen passed as a JVM switch, e.g.
java -splash:splash_file.png
depending on the size of the monitor.
In the source I can see a reference to a natively calculated scale factor. Does anybody know how this scale factor is calculated?

I would assume it's calculated in the standard way for graphics: take an image of a given size in an unbounded "world" (world coordinates), transform it to a normalized device (think unit square), and then transform it again to screen coordinates. The transformations consist of a translation and a scaling of the points.
Given the splash screen's window in the world (the way it should appear without translation or scaling), the normalized (x, y) values are obtained as follows:

x_norm = (x_world - x_min) * 1 / (x_max - x_min)
y_norm = (y_world - y_min) * 1 / (y_max - y_min)

The first part is the translation and the second is the scale factor. This reduces the image to fit inside a 1 x 1 square, so all the (x, y) values are fractional.
To go from the normalized to the screen coordinate system, the values are calculated as follows:

x_screen = x_norm * screen_width
y_screen = y_norm * screen_height
These operations are typically done efficiently with the help of translation and scaling matrix multiplications. Rotation can also be applied.
This is actually a low-level view of how you could take images, shapes, etc. drawn any way you like and present them consistently across screens of any size. I'm not sure exactly how it's done in the example you give, but it is likely some variation of this. See the beginning of this presentation for a visual representation.
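As an illustration of the idea only (not the JDK's actual implementation), here is a minimal Java sketch that composes the translation and scaling described above with java.awt.geom.AffineTransform; the world bounds, screen size, and class name are made-up values:

import java.awt.geom.AffineTransform;
import java.awt.geom.Point2D;

public class WorldToScreen {
    public static void main(String[] args) {
        // Hypothetical world window and target screen size.
        double xMin = -10, xMax = 10, yMin = -5, yMax = 5;
        int screenWidth = 1920, screenHeight = 1080;

        // Scale from the unit square up to screen coordinates...
        AffineTransform toScreen = AffineTransform.getScaleInstance(screenWidth, screenHeight);
        // ...after normalizing world coordinates into the unit square
        // (translate by -min, then scale by 1 / (max - min)).
        toScreen.scale(1.0 / (xMax - xMin), 1.0 / (yMax - yMin));
        toScreen.translate(-xMin, -yMin);

        // A point in the middle of the world window lands in the middle of the screen.
        Point2D screenPoint = toScreen.transform(new Point2D.Double(0, 0), null);
        System.out.println(screenPoint); // roughly (960.0, 540.0)
    }
}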

This value is actually both JDK implementation-dependent and architecture-dependent.
I browsed the OpenJDK code, and for a lot of architectures it's simply hardcoded to 1.
For example, in the Windows bindings, you'll find:
SPLASHEXPORT char*
SplashGetScaledImageName(const char* jarName, const char* fileName,
                         float *scaleFactor)
{
    *scaleFactor = 1;
    return NULL;
}
The scaleFactor value is then stored into a struct that is accessed via JNI through the _GetScaleFactor method.
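If you just want to see what scale is actually applied on a given machine, a small sketch (assuming the JVM was started with the -splash:splash_file.png switch from the question) is to compare the on-screen bounds reported by java.awt.SplashScreen with the pixel size of the original image:

import java.awt.Rectangle;
import java.awt.SplashScreen;

public class SplashInfo {
    public static void main(String[] args) {
        // Returns null unless the JVM was launched with -splash:<file>.
        SplashScreen splash = SplashScreen.getSplashScreen();
        if (splash == null) {
            System.out.println("No splash screen (run with -splash:splash_file.png)");
            return;
        }
        Rectangle bounds = splash.getBounds(); // on-screen position and size
        System.out.println("Splash image: " + splash.getImageURL());
        System.out.println("On-screen bounds: " + bounds);
        // Comparing bounds.width/height with the pixel size of the original
        // image shows the effective scale factor on this machine.
    }
}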

Related

What does a virtual display in Android do with image pixels when the display dimensions are set smaller than the grabbed ones?

I don't understand what happens to the pixels in an Android Virtual Display when the output dimensions are reduced compared to the input ones.
When, for example, the input = the size of my display = 1920x960 and I set the output to be 1920/3 and 960/3, what happens to the image pixels:
the pixel density is increased, or
maybe it takes only a smaller, centered part of the screen with dimensions 640x320, or
something else?
Additionally, is there a way that I can grab only the center part of the screen, as in the picture below?
By digging in the AOSP, watching, looking at a thread, and experimenting,
I have come to my own (possibly oversimplified) conclusion.
Android calls into the native side of the SDK, which produces the information needed to render a bitmap/pixels in a Java program.
If the results are the same, there is no need to pass/copy them to the GPU again.
If the results are not the same, they are passed/copied to the GPU to be "invalidated" and then re-rendered.
Now to your question.
By looking at the Bitmap class and at this thread, it appears that resizing depends on the scaling ratio passed to the Matrix class.
If resized, it will expensively create a new Bitmap that looks like either a pretty bad higher pixel density or a not-so-smooth lower pixel density.
If the pixel density is increased (smaller dimensions, your case), it will look squashed and, if need be, the colors are averaged with the nearest neighbouring pixels ("kind of" like how JPEG works).
After resizing it will still keep its origin (the top-left part of the rendered object), which is defined by its X and Y coordinates.
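For illustration only (this is not the exact path Android takes internally), a downscale with an explicit Matrix looks roughly like this; the 1/3 ratios match the example dimensions above, and 'source' is assumed to already exist:

import android.graphics.Bitmap;
import android.graphics.Matrix;

public final class BitmapScaling {
    // Scales e.g. a 1920x960 source down to 640x320 using a Matrix.
    static Bitmap downscale(Bitmap source) {
        Matrix matrix = new Matrix();
        matrix.setScale(1f / 3f, 1f / 3f);
        // createBitmap applies the matrix to the selected region;
        // 'true' enables filtering to smooth the result.
        return Bitmap.createBitmap(source, 0, 0,
                source.getWidth(), source.getHeight(), matrix, true);
    }
}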
For your second question, about screen grabbing, you can take a look at this and then programmatically crop the image by doing something like this:
//...
// createBitmap(source, x, y, width, height) copies a width x height
// region starting at (x, y) of the source bitmap.
Bitmap cropped = Bitmap.createBitmap(screenshot_bitmap, x, y, width, height);
//...
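To grab only the center part, a sketch (assuming a full-screen screenshot Bitmap is already available; the method and parameter names are made up) would compute the offsets from the desired output size:

import android.graphics.Bitmap;

public final class CenterCrop {
    // Crops the centered outWidth x outHeight region out of a screenshot,
    // e.g. 640x320 out of a 1920x960 capture.
    static Bitmap centerCrop(Bitmap screenshot, int outWidth, int outHeight) {
        int x = (screenshot.getWidth() - outWidth) / 2;
        int y = (screenshot.getHeight() - outHeight) / 2;
        return Bitmap.createBitmap(screenshot, x, y, outWidth, outHeight);
    }
}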

Improvement Suggestion for Color Measurement Algorithm

I've been working on an image processing project for a while, which consists of a way to measure and classify some types of sugar on the production line by their color. Until now, my biggest concern has been searching for and implementing the appropriate mathematical techniques to calculate the distance between two colors (a reference color and the color being analysed), and then turning this value into something more meaningful, such as an industry-standard measure.
From this, I'm trying to figure out how to reliably extract the average color value from an image, since the frame captured by a video camera may contain noise or dirt in the sugar (most likely almost-black dots).
Language: Java with OpenCV library.
Current solution: Before taking the average image value, I'm applying the fastNlMeansDenoisingColored function provided by OpenCV. It removes some white dots, at the cost of losing finer detail. I couldn't remove the black dots with it (not shown in the following images).
From there, I'm using the org.opencv.core.Core.mean function to compute the mean value of the array elements independently for each channel, so that I have a scalar value to use in my calculations.
I tried some image thresholding filters to get rid of the black and white dots and then calculated the mean with a mask; that kind of works too. I also tried to find a weighted-average function that could return scalar values, but without success.
I don't know if these pre-processing techniques are robust enough for such an application, since the mean values can vary easily. Am I on the right track? Would you suggest a better way to get a reliable value that represents my sugar's color?
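For reference, a minimal sketch of the masked-mean idea mentioned above, using OpenCV's Java bindings; the file name and the threshold bounds are placeholders that would need tuning for the actual footage:

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Scalar;
import org.opencv.imgcodecs.Imgcodecs;

public class SugarColor {
    static {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
    }

    public static void main(String[] args) {
        Mat frame = Imgcodecs.imread("sugar_frame.png"); // BGR frame (placeholder name)

        // Mask out near-black and near-white pixels (dirt / bright specks);
        // the 30 and 225 bounds are arbitrary and need tuning.
        Mat mask = new Mat();
        Core.inRange(frame, new Scalar(30, 30, 30), new Scalar(225, 225, 225), mask);

        // Per-channel mean computed only over the masked-in pixels.
        Scalar meanBGR = Core.mean(frame, mask);
        System.out.println("Mean BGR: " + meanBGR);
    }
}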

Why does LWJGL have a function that uses floats as coordinates?

I was fixing a problem in my code that I knew had something to do with this: GL11.glTexCoord2f(x, y); I changed it, of course, but I wondered why it uses floats as coordinates. I'm sure there is a reason, but it just seems extremely stupid to me. There is probably something I'm missing :P but whatever.
This is not specific to Java or LWJGL. As you might know, LWJGL is largely a 1:1 mapping of the original OpenGL API. And the different flavors of the glTexCoord function have been in OpenGL since version 1.0: https://www.opengl.org/sdk/docs/man2/xhtml/glTexCoord.xml
The reason why texture coordinates are usually given as floating-point values is that they refer to an "abstract" image (the texture). The size of this texture is not known in advance. For example, you may have a mesh, and as its texture you may use a JPG image that is 200x200 pixels, or an image that is 800x800 pixels.
In order to allow arbitrary images to be used as the texture, texture coordinates are almost always given as values between (0.0, 0.0) and (1.0, 1.0). They refer to the total size of the image, regardless of what that size is.
Imagine a vertex with texture coordinates (0.5, 0.5). If the texture is 200x200 pixels, then the actual "pixel" that will be used at this vertex is the pixel at (100, 100). If the texture is 800x800 pixels, then the pixel used at this vertex will be the pixel at (400, 400).
Using this sort of "normalized" coordinate (in the range [0.0, 1.0]) is very common in graphics and other applications.
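To make that concrete, a small immediate-mode sketch (which is what GL11.glTexCoord2f implies), assuming a current OpenGL context and a bound texture; the vertex positions are arbitrary:

import org.lwjgl.opengl.GL11;

public class TexturedQuad {
    // Draws one quad with the whole texture stretched over it.
    // Whatever the texture's pixel size, (1.0f, 1.0f) always refers to
    // the corner of the image opposite (0.0f, 0.0f).
    static void drawQuad() {
        GL11.glBegin(GL11.GL_QUADS);
        GL11.glTexCoord2f(0.0f, 0.0f); GL11.glVertex2f(-1.0f, -1.0f);
        GL11.glTexCoord2f(1.0f, 0.0f); GL11.glVertex2f( 1.0f, -1.0f);
        GL11.glTexCoord2f(1.0f, 1.0f); GL11.glVertex2f( 1.0f,  1.0f);
        GL11.glTexCoord2f(0.0f, 1.0f); GL11.glVertex2f(-1.0f,  1.0f);
        GL11.glEnd();
    }
}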
What else should it be using? The double data type may not be supported on all GPUs, although some support is coming up. And since a texture UV coordinate should be between 0.0 and 1.0 (as a percentage of the height/width), I can't think of any other logical data type that could be used here.
So is there any specific problem with using floats? And if so, please edit your post and provide more information, as for now we cannot help you any further.

Need Conceptual Help Rendering a Heat Map

I need to create a heatmap for Android Google Maps. I have geolocation data and points that have negative and positive weights attributed to them that I would like to represent visually. Unlike the majority of heatmaps, I want these positive and negative weights to interfere destructively; that is, when two points are close to each other and one is positive and the other is negative, their overlap interferes destructively, effectively not rendering areas that cancel out completely.
I plan on using the Android Google Maps TileOverlay/TileProvider classes, whose job is creating/rendering tiles based on a given location and zoom. (I don't have an issue with this part.)
How should I go about rendering these tiles? I plan on using Java's Graphics class, but the best approach I can think of is going through each pixel, calculating what color it should be based on the surrounding data points, and rendering that pixel. This seems very inefficient, however, and I was looking for suggestions on a better approach.
Edit: I've considered everything from using a non-Android Google Map inside a WebView, to using a TileOverlay, to using a GroundOverlay. What I am now considering is having a large two-dimensional array of "squares." Each square would have a longitude, a latitude, and the total +/- weights. When a new data point is added, instead of rendering it exactly where it is, it will be added to the "square" it falls in. The weight of the data point will be added to the square, and I will then use the GoogleMap Polygon object to render the square on the map. The ratio of + points to - points will determine the color that is rendered, with a ratio close to 1:1 being clear, >1 being blue (cold point), and <1 being red (hot point).
Edit: a.k.a. clustering the data into small regional groups
I suggest trying
going through each pixel, calculating what color it should be based on the surrounding data points, and rendering that pixel.
Even if it is slow, it will work. There are not too many tiles on the screen, there are not too many pixels in each tile, and all of this is done on a background thread.
All of this is still followed by translating the Bitmap into a byte[]. The byte[] is a representation of a PNG or JPG file, so it's not a simple pixel mapping from the Bitmap. That last operation takes some time too and may well require more processing power than your whole algorithm.
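For what it's worth, a minimal sketch of that per-pixel loop plus the Bitmap-to-byte[] step; netWeightAt and colorFor are hypothetical placeholders, and 256x256 is the commonly used tile size:

import android.graphics.Bitmap;
import android.graphics.Color;
import java.io.ByteArrayOutputStream;

public final class HeatTileRenderer {
    static final int TILE_SIZE = 256;

    // Renders one tile; netWeightAt(...) stands in for whatever lookup sums
    // the +/- weights of the data points influencing pixel (px, py).
    static byte[] renderTile(int tileX, int tileY, int zoom) {
        Bitmap bitmap = Bitmap.createBitmap(TILE_SIZE, TILE_SIZE, Bitmap.Config.ARGB_8888);
        for (int py = 0; py < TILE_SIZE; py++) {
            for (int px = 0; px < TILE_SIZE; px++) {
                double net = netWeightAt(tileX, tileY, zoom, px, py);
                bitmap.setPixel(px, py, colorFor(net));
            }
        }
        // The tile overlay expects compressed image bytes, not raw pixels.
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        bitmap.compress(Bitmap.CompressFormat.PNG, 100, out);
        return out.toByteArray();
    }

    // Placeholder mapping: positive -> translucent red, negative -> translucent
    // blue, near zero -> fully transparent (the "cancelled out" case).
    static int colorFor(double net) {
        int alpha = Math.min(255, (int) (Math.abs(net) * 64));
        return net >= 0 ? Color.argb(alpha, 255, 0, 0) : Color.argb(alpha, 0, 0, 255);
    }

    static double netWeightAt(int tileX, int tileY, int zoom, int px, int py) {
        return 0; // to be replaced with a real lookup over nearby data points
    }
}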
Edit (moved from comment):
What you describe in the edit sounds like a simple clustering on LatLng. I can't say it's a better or worse idea, but it's something worth a try.
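A rough sketch of the grid-cell bookkeeping described in the question's edit (the cell size and render threshold are arbitrary; it tracks a net weight per cell rather than a +/- ratio, but the idea is the same):

import java.util.HashMap;
import java.util.Map;

public final class WeightGrid {
    static final double CELL_SIZE_DEG = 0.01; // arbitrary cell size in degrees

    // Net weight per cell, keyed by the cell's integer grid coordinates.
    final Map<String, Double> cells = new HashMap<>();

    void addPoint(double lat, double lng, double weight) {
        long row = (long) Math.floor(lat / CELL_SIZE_DEG);
        long col = (long) Math.floor(lng / CELL_SIZE_DEG);
        cells.merge(row + ":" + col, weight, Double::sum);
    }

    // Cells whose weights nearly cancel out are simply not drawn.
    static boolean shouldRender(double netWeight) {
        return Math.abs(netWeight) > 0.05;
    }
}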

What is the purpose of double precision drawing in Graphics2D

Could anybody explain it to me?
You can't draw between pixels, so why should I use float or double measurements when drawing? Oracle's docs say something about printer devices, but a printer also can't paint between its smallest points. I don't understand it.
Take a simple line with a stroke width of 1.3f. What happens to it when it's drawn on:
a display in Windows (I believe it has 96 DPI)?
a printer with 300 DPI?
AFAIK Java uses 72 DPI internally. So how does the math work?
Several use cases come to mind.
Your graphics device might be scaled. For example, I know of several applications which draw a window-filling image of the unit circle, i.e. a circle of radius 1, using an appropriate scaling of the graphics context.
You might be producing output for a vector-oriented target, like a PDF file. In that case, users might zoom in arbitrarily and might expect a fair amount of precision even at high resolutions.
Printers, like you mention, might print at a resolution much higher than the screen, which is accomplished by a built-in zoom factor that maps default coordinate units to several times the device pixel size.
Anti-aliasing implies sub-pixel resolution. The amount of color applied to a given pixel at the boundary of a geometric object depends on the sub-pixel coordinates of that object.
None of the above would readily rule out single-precision floats, and in fact most Graphics2D operations are available using floats as well. Using doubles is only important for really large zooms, really strong precision demands, and similar applications. But on the other hand, most computations are performed on doubles anyway, and the overhead of carrying them as far through the graphics pipeline as possible is often negligible. So when you ask me why to use double instead of float, I ask you "why not?"
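As a sketch of the first use case above (a window-filling unit circle; the panel size, stroke width, and class name are made up), fractional user-space coordinates only make sense because the context's transform maps them to device pixels:

import java.awt.BasicStroke;
import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.geom.Ellipse2D;
import javax.swing.JFrame;
import javax.swing.JPanel;

public class UnitCirclePanel extends JPanel {
    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        Graphics2D g2 = (Graphics2D) g.create();
        // Sub-pixel geometry only pays off visually with anti-aliasing on.
        g2.setRenderingHint(RenderingHints.KEY_ANTIALIASING,
                RenderingHints.VALUE_ANTIALIAS_ON);

        // Scale the context so the [-1, 1] x [-1, 1] square fills the panel.
        g2.translate(getWidth() / 2.0, getHeight() / 2.0);
        g2.scale(getWidth() / 2.0, getHeight() / 2.0);

        // A circle of radius 1; the 0.01f stroke is in user units, so on a
        // 400x400 panel (scale factor ~200) it ends up about 2 device pixels wide.
        g2.setStroke(new BasicStroke(0.01f));
        g2.draw(new Ellipse2D.Double(-1, -1, 2, 2));
        g2.dispose();
    }

    public static void main(String[] args) {
        JFrame frame = new JFrame("Unit circle");
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.add(new UnitCirclePanel());
        frame.setSize(400, 400);
        frame.setVisible(true);
    }
}

As for the printer question above: Java's printing user space is defined as 72 units per inch, so on a 300 DPI printer a 1.3f-wide stroke corresponds to roughly 1.3 * 300 / 72 ≈ 5.4 device dots, while on a screen with the default identity transform it covers about 1.3 pixels, which anti-aliasing spreads across the neighbouring pixels.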
