Tinting pixels in Java - Need a faster method

I'm making a Doom-style pseudo-3D game.
The world is rendered pixel by pixel into a buffered image, which is later displayed on the JPanel. I want to keep this approach so that lighting individual pixels will be easier.
I want to be able to color the textures in the game to many different colors.
Coloring the whole texture and storing it in a separate buffered image takes too much time and memory for my purpose. So I am tinting each pixel of the texture during the rendering stage.
The problem I am having is that tinting each pixel is quite expensive. When an uncolored wall covers the entire screen, I get around 65 fps. And when a colored wall covers the screen, I get 30 fps.
This is my function for tinting the pixels:
//Change the color of the pixel using its brightness.
public static int tintABGRPixel(int pixelColor, Color tintColor) {
    //Calculate the luminance. The decimal values are pre-determined.
    double lum = ((pixelColor>>16 & 0xff) * 0.2126 +
                  (pixelColor>>8 & 0xff) * 0.7152 +
                  (pixelColor & 0xff) * 0.0722) / 255;
    //Calculate the new tinted color of the pixel and return it.
    return ((pixelColor>>24 & 0xff) << 24) |
           ((int)(tintColor.getBlue()*lum) & 0xff) |
           (((int)(tintColor.getGreen()*lum) & 0xff) << 8) |
           (((int)(tintColor.getRed()*lum) & 0xff) << 16);
}
Sorry for the illegible code. This function calculates the brightness of the original pixel, multiplies the new color by the brightness, and converts it back into an int.
It only contains simple operations, but this function is called up to a million times per frame in the worst case. The bottleneck is the calculation in the return statement.
Is there a more efficient way to calculate the new color?
Would it be best if I changed my approach?
Thanks

Do the work in Parallel
Threads aren't necessarily the only way to parallelise code: CPUs often have instruction sets such as SIMD which let you perform the same arithmetic on multiple numbers at once. GPUs take this idea and run with it, allowing you to run the same function on hundreds to thousands of numbers in parallel. I don't know how to do this in Java, but I'm sure with some googling it's possible to find a method that works.
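The most accessible form of this in Java is a parallel stream over the pixel array. A minimal sketch (the class and method names are my own; the body just repeats the tint arithmetic from the question):

```java
import java.util.stream.IntStream;

public class ParallelTint {
    // Tint every pixel of the frame, spreading the work across available cores.
    // Pixels use the same int layout as the question (alpha in the top byte).
    public static void tintAll(int[] pixels, int tintR, int tintG, int tintB) {
        IntStream.range(0, pixels.length).parallel().forEach(i -> {
            int p = pixels[i];
            double lum = ((p >> 16 & 0xff) * 0.2126
                        + (p >> 8 & 0xff) * 0.7152
                        + (p & 0xff) * 0.0722) / 255;
            pixels[i] = ((p >> 24 & 0xff) << 24)
                      | ((int) (tintB * lum) & 0xff)
                      | (((int) (tintG * lum) & 0xff) << 8)
                      | (((int) (tintR * lum) & 0xff) << 16);
        });
    }
}
```

Whether this wins depends on frame size and core count; the per-element work is small, so measure before committing to it.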
Algorithm - Do less work
Is it possible to reduce the amount of time the function needs to be called? Calling any function a million times per frame is going to hurt. Unless the overhead of each function call is managed (inlining it, reusing the stack frame, caching the result if possible), you'll want to do less work.
Possible options could be:
Make the window/resolution of the game smaller.
Work with a different representation. Are you doing a lot of operations that are easier to do when pixels are HSV instead of RGB? Then only convert to RGB when you are about to render the pixel.
Use a limited number of colours for each pixel. That way you can work out the possible tints in advance, so they are only a lookup away, as opposed to a function call.
Tint as little as possible. Maybe there is some UI that is tinted and shouldn't be. Maybe lighting effects only travel so far.
As a last resort, make tinted the default. If tinting pixels is done so much then possibly "untinting" happens far less and you can get better performance by doing that.
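The lookup idea from the list above can be sketched like this: for a fixed tint colour there are only 256 possible luminance levels, so the tinted RGB triple can be precomputed once and tinting a pixel becomes an array index (the class and method names are my own):

```java
public class TintTable {
    // table[lum] = packed RGB result of tinting at that luminance (0..255)
    private final int[] table = new int[256];

    public TintTable(int tintR, int tintG, int tintB) {
        for (int lum = 0; lum < 256; lum++) {
            table[lum] = (tintB * lum / 255)
                       | (tintG * lum / 255) << 8
                       | (tintR * lum / 255) << 16;
        }
    }

    // Integer luminance of the pixel, 0..255, same weights as the question.
    static int luminance(int p) {
        return (int) ((p >> 16 & 0xff) * 0.2126
                    + (p >> 8 & 0xff) * 0.7152
                    + (p & 0xff) * 0.0722);
    }

    public int tint(int pixelColor) {
        return (pixelColor & 0xff000000) | table[luminance(pixelColor)];
    }
}
```

Building a table costs 256 iterations per distinct tint colour, so this pays off as soon as a tint covers more than a few hundred pixels.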
Performance - (Micro-)optimising the code
If you can settle for an "approximate tint", this SO answer gives an approximation for the brightness (lum) of a pixel that should be cheaper to compute. (The formula from the link is Y = 0.33 R + 0.5 G + 0.16 B, which can be written Y = (R+R+B+G+G+G)/6.)
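As a sketch of what that buys you, the approximate version needs no floating point at all (the helper name is my own; the weights follow the linked formula rather than the exact ones from the question):

```java
public class ApproxLum {
    // Approximate brightness: Y = (R + R + B + G + G + G) / 6, integer ops only.
    public static int approxLum(int pixelColor) {
        int r = pixelColor >> 16 & 0xff;
        int g = pixelColor >> 8 & 0xff;
        int b = pixelColor & 0xff;
        return (r + r + b + g + g + g) / 6; // result in 0..255
    }
}
```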
The next step is to time your code (profiling is a good term to know for googling) to see what takes up the most resources. It may well be that it isn't this function, but another piece of code, or waiting for textures to load.
From this point on we will assume the function provided in the question takes up the most time. Let's see what it is spending its time on. I don't have the rest of your code, so I can't benchmark all of it, but I can compile it and look at the bytecode that is produced. Using javap on a class containing the function I get the following (bytecode has been cut where there are repeats).
public static int tintABGRPixel(int, Color);
Code:
0: iload_0
1: bipush 16
3: ishr
4: sipush 255
7: iand
8: i2d
9: ldc2_w #2 // double 0.2126d
12: dmul
13: iload_0
...
37: dadd
38: ldc2_w #8 // double 255.0d
41: ddiv
42: dstore_2
43: iload_0
44: bipush 24
46: ishr
47: sipush 255
50: iand
51: bipush 24
53: ishl
54: aload_1
55: pop
56: invokestatic #10 // Method Color.getBlue:()I
59: i2d
60: dload_2
61: dmul
62: d2i
63: sipush 255
66: iand
67: ior
68: aload_1
69: pop
...
102: ireturn
This can look scary at first, but Java bytecode is nice, in that you can match each line (or instruction) to a point in your function. It hasn't done anything crazy like rewrite it or vectorize it or anything that makes it unrecognizable.
The general method to see if a change has made an improvement, is to measure the code before and after. With that knowledge you can decide if a change is worth keeping. Once the performance is good enough, stop.
Our poor man's profiling is to look at each instruction and see (on average, according to online sources) how expensive it is. This is a little naive, as how long each instruction takes to execute can depend on a multitude of things, such as the hardware it is running on, the versions of software on the computer, and the instructions around it.
I don't have a comprehensive list of the time cost for each instruction, so I'm going to go with some heuristics.
integer operations are faster than floating operations.
constants are faster than local memory, which is faster than global memory.
powers of two can allow for powerful optimisations.
I stared at the bytecode for a while, and all I noticed was that from lines [8 - 42] there are a lot of floating point operations. This section of code works out lum (the brightness). Other than that, nothing else stands out, so let's rewrite the code with our first heuristic in mind. If you don't care for the explanation, I'll provide the final code at the end.
Let us just consider what the blue colour value (which we will label B) will be by the end of the function. The changes will apply to red and green too, but we will leave them out for brevity.
double lum = ((pixelColor>>16 & 0xff) * 0.2126 +
(pixelColor>>8 & 0xff) * 0.7152 +
(pixelColor & 0xff) * 0.0722) / 255;
...
... | ((int)(tintColor.getBlue()*lum) & 0xff) | ...
This can be rewritten as
int x = (pixelColor>>16 & 0xff), y = (pixelColor>>8 & 0xff), z = (pixelColor & 0xff);
double a = 0.2126, b = 0.7152, c = 0.0722;
double lum = (a*x + b*y + c*z) / 255;
int B = (int)(tintColor.getBlue()*lum) & 0xff;
We don't want to be doing as many floating point operations, so let us do some refactoring. The idea is that the floating point constants can be written as fractions. For example, 0.2126 can be written as 2126/10000.
int x = (pixelColor>>16 & 0xff), y = (pixelColor>>8 & 0xff), z = (pixelColor & 0xff);
int a = 2126, b = 7152, c = 722;
int top = a*x + b*y + c*z;
double temp = (double)(tintColor.getBlue() * top) / 10000 / 255;
int B = (int)temp & 0xff;
So now we do three integer multiplications (imul) instead of three dmuls. The cost is one extra floating division, which alone would probably not be worth it. We can avoid this issue by piggybacking on the other division that we are already doing. Combining the two sequential divisions into one division is as simple as changing / 10000 / 255 to /2550000. We can also setup the code for one more optimization by moving the casting and division to one line.
int x = (pixelColor>>16 & 0xff), y = (pixelColor>>8 & 0xff), z = (pixelColor & 0xff);
int a = 2126, b = 7152, c = 722;
int top = a*x + b*y + c*z;
int temp = (int)((double)(tintColor.getBlue()*top) / 2550000);
int B = temp & 0xff;
This could be a good place to stop. However, if you need to squeeze a tiny bit more performance out of this function, we can optimise dividing by a constant and casting a double to an int (which I believe are two expensive operations) to a multiply (by a long) and a shift.
int x = (pixelColor>>16 & 0xff), y = (pixelColor>>8 & 0xff), z = (pixelColor & 0xff);
int a = 2126, b = 7152, c = 722;
int top = a*x + b*y + c*z;
int Btemp = (int)((tintColor.getBlue() * top * 1766117501L) >> 52);
int B = Btemp & 0xff;
where the magic numbers (the multiplier and the shift) are ones that clang magicked up when I compiled a C++ version of the code. I am not able to explain how to produce this magic, but it works as far as I have tested with a couple of values for x, y, z, and tintColor.getBlue(). When testing I assumed all the values are in the range [0 - 256), and I tried only a couple of examples.
The final code is below. Be warned that this is not well tested and may have edge cases that I've missed, so let me know if there are any bugs. Hopefully it is fast enough.
public static int tintABGRPixel(int pixelColor, Color tintColor) {
    // Calculate the luminance. The decimal values are pre-determined.
    int x = pixelColor>>16 & 0xff, y = pixelColor>>8 & 0xff, z = pixelColor & 0xff;
    int top = 2126*x + 7152*y + 722*z;
    int Btemp = (int)((tintColor.getBlue() * top * 1766117501L) >> 52);
    int Gtemp = (int)((tintColor.getGreen() * top * 1766117501L) >> 52);
    int Rtemp = (int)((tintColor.getRed() * top * 1766117501L) >> 52);
    //Calculate the new tinted color of the pixel and return it.
    return ((pixelColor>>24 & 0xff) << 24) | Btemp & 0xff | (Gtemp & 0xff) << 8 | (Rtemp & 0xff) << 16;
}
EDIT: Alex found that the magic number should be 1755488566L instead of 1766117501L.

To get better performance you'll have to get rid of objects like Color during image manipulation. Also, if you know that a method is going to be called a million times (image.width * image.height times), then it's best to inline it. In general the JVM would probably inline this method itself, but you should not take the risk.
You can use PixelGrabber to get all the pixels into an array. Here's a general usage
final int[] pixels = new int[width * height];
final PixelGrabber pixelgrabber = new PixelGrabber(image, 0, 0, width, height, pixels, 0, width);
pixelgrabber.grabPixels(); // blocks until the pixels have been delivered
for (int i = 0; i < height; i++) {
    for (int j = 0; j < width; j++) {
        int p = pixels[i * width + j]; // same as image.getRGB(j, i);
        int alpha = (p >> 24) & 0xff;
        int red   = (p >> 16) & 0xff;
        int green = (p >> 8) & 0xff;
        int blue  = p & 0xff;
        // do something, i.e. apply luminance
    }
}
The above is just an example of how to iterate row and column indexes; in your case, however, the nested loop is not needed. This should improve performance reasonably.
This can probably also be parallelized easily using Java 8 streams; however, be careful before using streams while dealing with images, as streams are a lot slower than plain old loops.
You can also try replacing int with byte where applicable (i.e. individual color components don't need to be stored in an int). Basically, try to use primitive datatypes, and even among primitive datatypes use the smallest that's applicable.

At this point you are really close to the metal on this calculation. I think you'll have to change your approach to really improve things, but a quick idea is to cache the lum calculation. It is a simple function of the pixel color, and your lum isn't dependent on anything but that. If you cache it, that could save you a lot of calcs. While you're caching, you could cache this calc too:
((pixelColor>>24 & 0xff) << 24)
I don't know if that'll save you a ton of time, but I think at this point that is just about all you could do from a micro-optimization stand point.
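One way to realise that cache without a 16-million-entry table is three 256-entry tables, one per channel, since lum is a sum of independent per-channel contributions. A sketch (the names and the fixed-point scaling are my own choices, not from the question):

```java
public class LumCache {
    // Per-channel luminance contributions, scaled by 1<<16 to stay in integers.
    static final int[] LUM_R = new int[256], LUM_G = new int[256], LUM_B = new int[256];
    static {
        for (int v = 0; v < 256; v++) {
            LUM_R[v] = (int) (v * 0.2126 / 255 * (1 << 16));
            LUM_G[v] = (int) (v * 0.7152 / 255 * (1 << 16));
            LUM_B[v] = (int) (v * 0.0722 / 255 * (1 << 16));
        }
    }

    // Luminance scaled by 1<<16; multiply a channel by this and shift right 16.
    static int lumFixed(int p) {
        return LUM_R[p >> 16 & 0xff] + LUM_G[p >> 8 & 0xff] + LUM_B[p & 0xff];
    }

    public static int tint(int p, int tintR, int tintG, int tintB) {
        int lum = lumFixed(p);
        return (p & 0xff000000)
             | (tintB * lum >> 16 & 0xff)
             | (tintG * lum >> 16 & 0xff) << 8
             | (tintR * lum >> 16 & 0xff) << 16;
    }
}
```

The tables replace three double multiplications and a division per pixel with three array reads and two integer adds; the fixed-point truncation can be off by one step of brightness.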
Now you could refactor your pixel loop to use parallelism and do those pixel calcs in parallel on your CPU; this might set you up for the next idea too.
If neither of those ideas works, I think you might need to try to push the color calculations off to the GPU. This is all bare-metal math that has to happen millions of times, which is what graphics cards do best. Unfortunately this is a deep topic with lots of education that has to happen in order to pick the best option. Here are some interesting things to research:
https://code.google.com/archive/p/java-gpu/
https://github.com/nativelibs4java/JavaCL
http://jogamp.org/jogl/www/
https://www.lwjgl.org/
I know some of those are huge frameworks, which isn't what you asked for. But they might contain other relatively unknown libs that you could use to push these math calcs off to the GPU. The @Parallel annotation (from java-gpu) looked like it could be the most useful, or the JavaCL bindings.

Related

More efficient way to blend pixels (semi-transparency)?

I'm working on drawing semi-transparent images on top of other images for a small 2d game. To currently blend the images I'm using the formula found here: https://en.wikipedia.org/wiki/Alpha_compositing#Alpha_blending
My implementation of this is as follows;
private static int blend(int source, int dest, int trans)
{
    double alpha = ((double) trans / 255.0);
    int sourceRed = (source >> 16 & 0xff);
    int sourceGreen = (source >> 8 & 0xff);
    int sourceBlue = (source & 0xff);
    int destRed = (dest >> 16 & 0xff);
    int destGreen = (dest >> 8 & 0xff);
    int destBlue = (dest & 0xff);
    int blendedRed = (int) (alpha * sourceRed + (1.0 - alpha) * destRed);
    int blendedGreen = (int) (alpha * sourceGreen + (1.0 - alpha) * destGreen);
    int blendedBlue = (int) (alpha * sourceBlue + (1.0 - alpha) * destBlue);
    return (blendedRed << 16) + (blendedGreen << 8) + blendedBlue;
}
Now, it works fine, but it has a pretty high overhead since it's being called for every single pixel every single frame. I get a performance drop of around 30% FPS as opposed to simply rendering the image without blending.
I just wanted to know if anyone can think of a better way to optimise this code as I'm probably doing too many bit operations.
not a java coder (so read with prejudice) but you are doing some things really wrong (from my C++ and low-level gfx perspective):
mixing integers and floating point
that requires conversions which are sometimes really costly... It's much better to use integer weights (alpha) in range <0..255> and then just divide by 255 or bitshift by 8. That would most likely be much faster.
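In the asker's Java, that integer-only version might look like this (a sketch; I divide by 255 rather than shifting by 8, trading a little speed for exactness at full and zero alpha):

```java
public class IntBlend {
    // Blend src over dest with an integer alpha in 0..255; no floating point,
    // so no int<->double conversions per pixel.
    public static int blend(int src, int dest, int alpha) {
        int inv = 255 - alpha;
        int r = ((src >> 16 & 0xff) * alpha + (dest >> 16 & 0xff) * inv) / 255;
        int g = ((src >> 8 & 0xff) * alpha + (dest >> 8 & 0xff) * inv) / 255;
        int b = ((src & 0xff) * alpha + (dest & 0xff) * inv) / 255;
        return r << 16 | g << 8 | b;
    }
}
```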
bitshifting/masking to obtain bytes
yes its fine but there are simpler and faster methods simply by using
enum{
    _b=0, // db[] index of each channel
    _g=1,
    _r=2,
    _a=3,
};
union color
{
    DWORD dd;   // 1x32 bit unsigned int
    BYTE db[4]; // 4x8 bit unsigned int
};
color col;
col.dd=some_rgba_color;
r = col.db[_r]; // get red channel
col.db[_b]=5;   // set blue channel
decent compilers could optimize some parts of your code to this internally on their own, but I doubt they can do it everywhere...
You can also use pointers instead of union in the same way...
function overhead
you've got a function blending a single pixel. That means it will be called a lot. It's usually much faster to blend a region (rectangle) per single call than to call stuff on a per-pixel basis, because you trash the stack that way. To limit this you can try these (for functions that are called massively):
Recode your app so you can blend regions instead of pixels causing much less function calls.
Lower the stack trashing by reducing the operands, return values and internal variables of the called function, to limit the amount of RAM being allocated/freed/overwritten/copied on each call. For example, use static or global variables for things that rarely change: the alpha will most likely not be changing much, or you can use the alpha encoded in the color directly instead of passing it as an operand.
use inline functions or macros like #define to place the source code directly at the call site instead of making a function call.
For starters I would try to recode your function body to something like this:
enum{
    _b=0, // db[] index of each channel
    _g=1,
    _r=2,
    _a=3,
};
union color
{
    unsigned int dd;     // 1x32 bit unsigned int
    unsigned char db[4]; // 4x8 bit unsigned int
};
static unsigned int blend(unsigned int src, unsigned int dst, unsigned int alpha)
{
    unsigned int i, a, _alpha = 255 - alpha;
    color s, d;
    s.dd = src;
    d.dd = dst;
    for (i = 0; i < 3; i++)
    {
        a = (((unsigned int)(s.db[i])) * alpha) + (((unsigned int)(d.db[i])) * _alpha);
        a >>= 8;
        d.db[i] = a;
    }
    return d.dd;
}
However if you want true speed use GPU (OpenGL Blending).

TODO-FIXME: In Java 8's Integer class?

While reading through Java 8's Integer class, I came upon the following FIX-ME (line 379):
// TODO-FIXME: convert (x * 52429) into the equiv shift-add
// sequence.
The entire comment reads:
// I use the "[invariant division by multiplication][2]" trick to
// accelerate Integer.toString. In particular we want to
// avoid division by 10.
//
// The "trick" has roughly the same performance characteristics
// as the "classic" Integer.toString code on a non-JIT VM.
// The trick avoids .rem and .div calls but has a longer code
// path and is thus dominated by dispatch overhead. In the
// JIT case the dispatch overhead doesn't exist and the
// "trick" is considerably faster than the classic code.
//
// TODO-FIXME: convert (x * 52429) into the equiv shift-add
// sequence.
//
// RE: Division by Invariant Integers using Multiplication
// T Gralund, P Montgomery
// ACM PLDI 1994
//
I cannot imagine that I should be worried about this, as this has been present for quite a while.
But, can someone shed light on what this FIX-ME means and if has any side-effects?
Side notes:
I see this has been removed from the JDK 10
The paper referenced in the link does not seem to address the issue directly.
52429 is the closest integer to (2 ^ 19) / 10, so division by 10 can be achieved by multiplying by 52429, and then dividing by 2 ^ 19, where the latter is a trivial bit shift operation instead of requiring a full division.
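The basic trick is easy to check in Java (a sketch; the full product must stay within 32 unsigned bits, which holds for all the 16-bit inputs the JDK code feeds it, up to about x = 81919):

```java
public class Div10 {
    // (x * 52429) >>> 19 == x / 10 as long as the true product x * 52429
    // stays below 2^32; the unsigned shift then sees the full product.
    public static int div10(int x) {
        return (x * 52429) >>> 19;
    }
}
```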
The code author appears to be suggesting that the multiplication could be done more optimally using shift/add operations instead, per this (C language) snippet:
uint32_t div10(uint16_t in)
{
    // divides by multiplying by 52429 / (2 ^ 19)
    // 52429 = 0xcccd
    uint32_t x = in << 2; // multiply by 4   : total = 0x0004
    x += (x << 1);        // multiply by 3   : total = 0x000c
    x += (x << 4);        // multiply by 17  : total = 0x00cc
    x += (x << 8);        // multiply by 257 : total = 0xcccc
    x += in;              // one more makes  : total = 0xcccd
    return x >> 19;
}
What I can't answer is why they apparently thought this might be more optimal than a straight multiplication in a Java environment.
At the machine code level it would only be more optimal on a (nowadays rare) CPU without a hardware multiplier where the simplest (albeit perhaps naïve) multiply function would need 16 shift/add operations to multiply two 16-bit numbers.
On the other hand a hand-crafted function like the above can perform the multiplication by a constant in fewer steps by exploiting the numeric properties of that constant, in this case reducing it to four shift/add operations instead of 16.
FWIW (and somewhat impressively) the clang compiler on macOS even with just the -O1 optimisation flag actually converts that code above back into a single multiplication:
_div10: ## #div10
pushq %rbp
movq %rsp, %rbp
imull $52429, %edi, %eax ## imm = 0xCCCD
shrl $19, %eax
popq %rbp
retq
It also turns:
uint32_t div10(uint16_t in)
{
    return in / 10;
}
into exactly the same assembly code, which just goes to show that modern compilers really do know best.

Make a binary addition behave like a (packed-) decimal addition

I'm currently working on a restrictive environment where the only types allowed are :
byte, byte[], short, short[].
I am almost certain that I can't import external libraries, since I'm working on a JavaCard, and have already tried such things, which didn't turn out good.
So, here I have to manage a byte array with a size of 6 bytes, which represents the balance of the card (in euros); the last byte is the cents, but this is not important now.
Given that I don't have access to integers, I don't know how I can add two byte in the way I want.
Let's have an example :
User puts in (Add) 0x00 0x00 0x00 0x00 0x00 0x57, which, to the user, means add 57 cents. Let's now say that the balance is 0x00 ... 0x26.
I want to be able to create a method that modifies the balance array (with carries) so that after adding, the cents are 83, represented as 0x83.
I have to handle subtractions as well, but I guess I can figure that out for myself afterwards.
My first guess was to mask out each digit from each byte and work on them separately at first, but that got me nowhere.
I'm obviously not asking for a full solution, because I believe my problem is almost impossible, but if you have any thoughts on how to approach this, I'd be very grateful.
So how can I add two arrays containing binary coded decimals to each other on Java Card?
EDIT 1: A common array would look like this :
{ 0x00 , 0x00 , 0x01, 0x52, 0x45, 0x52}
and would represent 15 245€ and 52 cents in a big-endian BCD encoded integer.
EDIT 2 : Well, as I suspected, my card doesn't support the package framework.math, so I can't use BCDUtil or BigNumbers, which would've been useful.
The below implementation goes through the BCD byte-by-byte and digit by digit. This allows it to use 8 bit registers that are efficient on most smart card processors. It explicitly allows for the carry to be handled correctly and returns the carry in case of overflow.
/**
* Adds two values to each other and stores it in the location of the first value.
* The values are represented by big endian, packed BCD encoding with a static size.
* No validation is performed if the arrays do indeed contain packed BCD;
* the result of the calculation is indeterminate if the arrays contain anything other than packed BCD.
* This calculation should be constant time;
* it should only leak information about the values if one of the basic byte calculations leaks timing information.
*
* @param x the first buffer containing the packed BCD
* @param xOff the offset in the first buffer of the packed BCD
* @param y the second buffer containing the packed BCD
* @param yOff the offset in the second buffer of the packed BCD
* @param packedBytes the number of bytes that contain two BCD digits in both buffers
* @return zero or one depending on whether the full calculation generates a carry, i.e. overflows
* @throws ArrayIndexOutOfBoundsException if a packed BCD value is out of bounds
*/
public static byte addPackedBCD(byte[] x, short xOff, byte[] y, short yOff, short packedBytes) {
    // declare temporary variables, we'll handle bytes only
    byte xd, yd, zd, z;
    // set the initial carry to zero, c will only be 0 or 1
    byte c = 0;
    // go through the bytes backwards (least significant bytes first)
    // as we need to take the carry into account
    for (short i = (short) (packedBytes - 1); i >= 0; i--) {
        // retrieve the two least significant digits of the current byte in the arrays
        xd = (byte) (x[xOff + i] & 0b00001111);
        yd = (byte) (y[yOff + i] & 0b00001111);
        // zd is the addition of the lower two BCD digits plus the carry
        zd = (byte) (xd + yd + c);
        // c is set to 1 if the final number is 10 or larger, otherwise c is set to zero
        // i.e. the value is at least 16 or the value is at least 8 + 4 or 8 + 2
        c = (byte) (((zd & 0b10000) >> 4)
                | (((zd & 0b01000) >> 3)
                & (((zd & 0b00100) >> 2) | ((zd & 0b00010) >> 1))));
        // subtract 10 if there is a carry and then assign the value to z
        z = (byte) (zd - c * 10);
        // retrieve the two most significant digits of the current byte in the arrays
        xd = (byte) ((x[xOff + i] >>> 4) & 0b00001111);
        yd = (byte) ((y[yOff + i] >>> 4) & 0b00001111);
        // zd is the addition of the higher two BCD digits plus the carry
        zd = (byte) (xd + yd + c);
        // c is set to 1 if the final number is 10 or larger, otherwise c is set to zero
        // i.e. the value is at least 16 or the value is at least 8 + 4 or 8 + 2
        c = (byte) (((zd & 0b10000) >> 4)
                | (((zd & 0b01000) >> 3)
                & (((zd & 0b00100) >> 2) | ((zd & 0b00010) >> 1))));
        // subtract 10 if there is a carry and then assign the value to the 4 msb digits of z
        z |= (zd - c * 10) << 4;
        // assign z to the first byte array
        x[xOff + i] = z;
    }
    // finally, return the last carry
    return c;
}
Note that I have only tested this for two arrays containing a single byte / two BCD digits. However, the carry works and as all 65536 combinations have been tested the approach must be valid.
To top it off, you may want to test the correctness of the packed BCD encoding before performing any operation. The same approach could be integrated into the for loop of the addition for higher efficiency. Tested against all single byte values as in the previous block of code.
/**
* Checks if the buffer contains a valid packed BCD representation.
* The values are represented by packed BCD encoding with a static size.
* This calculation should be constant time;
* it should only leak information about the values if one of the basic byte calculations leaks timing information.
*
* @param x the buffer containing the packed BCD
* @param xOff the offset in the buffer of the packed BCD
* @param packedBytes the number of bytes of packed BCD in the buffer
* @return true if and only if the value is valid, packed BCD
* @throws ArrayIndexOutOfBoundsException if the packed BCD value is out of bounds
*/
public static boolean validPackedBCD(byte[] x, short xOff, short packedBytes) {
    // declare temporary variable, we'll handle bytes only
    byte xdd;
    // c is the correctness of the digits; it will be non-zero if invalid encoding is encountered
    byte c = 0;
    short end = (short) (xOff + packedBytes);
    // go through the bytes, reusing xOff for efficiency
    for (; xOff < end; xOff++) {
        xdd = x[xOff];
        // c will be set to non-zero if the high bit of an encoded decimal is set ...
        // and either of the next two bits is also set, as that would indicate a value of 10 or higher
        // i.e. only values 8 + 4 or 8 + 2 are 10 or higher if you look at the bits in the digits
        c |= ((xdd & 0b1000_1000) >> 2) & (((xdd & 0b0100_0100) >> 1) | (xdd & 0b0010_0010));
    }
    // finally, return the result - c is zero in case all bytes encode two packed BCD values
    return c == 0;
}
Note that this one is also implemented in BCDUtil in Java Card. I do however dislike that class design and I don't think it is that well documented, so I decided for a different tack on it. It's also in javacardx which means that it could theoretically throw an exception if not implemented.
The answer of EJP isn't applicable, other than to indicate that the used encoding is that of packed BCD. The addition that Jones proposes is fast, but it doesn't show how to handle the carry between the 32 bit words:
Note that the most significant digit of the sum will exceed 9 if there should have been a carry out of that position. Furthermore, there is no easy way to detect this carry!
This is of course required for Java Card, as it only has 16-bit signed shorts as its base integer type. For that reason the method that Jones proposes is not directly applicable; any answer that utilizes Jones's approach should indicate how to handle the carry between the bytes or shorts used in Java Card.
This is not really hex, it is packed decimal, one of the forms of BCD.
You can do packed-decimal addition and subtraction a byte at a time, with internal carry. There is a trick of adding 6 to force a carry into the MS digit, if necessary, and then masking and shifting it out again if it carried, to correct the LS digit. It's too broad to explain here.
See Jones on BCD arithmetic which shows how to efficiently use bit operands on 32 bit words to implement packed decimal arithmetic.
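The add-6 trick can be sketched in Java for one packed byte (two digits), following Jones's bit-parallel scheme; the helper name is my own, and bit 8 of the result is the outgoing carry that the next byte's addition would consume:

```java
public class BcdAdd {
    // Add two packed-BCD bytes (two digits each). Bits 0-7 of the result are
    // the packed-BCD sum; bit 8 is the carry out.
    public static int addBcdByte(int x, int y) {
        int t1 = x + 0x66;              // bias each digit by 6 to force decimal carries
        int t2 = t1 + y;                // binary sum, with the bias included
        int t3 = t1 ^ y;                // what the sum would be with no carries at all
        int t4 = t2 ^ t3;               // carry-in bits: bits 4 and 8 mark digit carries
        int t5 = ~t4 & 0x110;           // digits that did NOT produce a decimal carry
        int t6 = (t5 >> 2) | (t5 >> 3); // turn those into a -6 correction per digit
        return t2 - t6;                 // undo the bias where no carry occurred
    }
}
```

For the question's example, 0x26 plus 0x57 comes out as 0x83 with no carry.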

Is there a bit-wise trick for checking the divisibility of a number by 2 or 3?

I am looking for a bit-wise test equivalent to (num%2) == 0 || (num%3) == 0.
I can replace num%2 with num&1, but I'm still stuck with num%3 and with the logical-or.
This expression is also equivalent to (num%2)*(num%3) == 0, but I'm not sure how that helps.
Yes, though it's not very pretty, you can do something analogous to the old "sum all the decimal digits until you have only one left" trick to test if a number is divisible by 9, except in binary and with divisibility by 3. You can use the same principle for other numbers as well, but many combinations of base/divisor introduce annoying scaling factors so you're not just summing digits anymore.
Anyway, 16^n - 1 is divisible by 3, so you can use radix 16, that is, sum the nibbles. Then you're left with one nibble (well, 5 bits really), and you can just look that up. So for example in C# (slightly tested) edit: brute-force tested, definitely works
static bool IsMultipleOf3(uint x)
{
    const uint lookuptable = 0x49249249;
    uint t = (x & 0x0F0F0F0F) + ((x & 0xF0F0F0F0) >> 4);
    t = (t & 0x00FF00FF) + ((t & 0xFF00FF00) >> 8);
    t = (t & 0x000000FF) + ((t & 0x00FF0000) >> 16);
    t = (t & 0xF) + ((t & 0xF0) >> 4);
    return ((lookuptable >> (int)t) & 1) != 0;
}
The trick from my comment, x * 0xaaaaaaab <= 0x55555555, works through a modular multiplicative inverse trick. 0xaaaaaaab * 3 = 1 mod 2^32, which means that 0xaaaaaaab * x = x / 3 if and only if x % 3 = 0. "if" because 0xaaaaaaab * 3 * y = y (because 1 * y = y), so if x is of the form 3 * y then it will map back to y. "only if" because no two inputs are mapped to the same output, so everything not divisible by 3 will map to something higher than the highest thing you can get by dividing anything by 3 (which is 0xFFFFFFFF / 3 = 0x55555555).
You can read more about this (including the more general form, which includes a rotation) in Division by Invariant Integers using Multiplication (T. Granlund and P. L. Montgomery).
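In Java the comparison has to be unsigned, which Integer.compareUnsigned provides (a sketch treating the argument as an unsigned 32-bit value; the class name is my own):

```java
public class Mod3 {
    // x * inverse(3) wraps around to exactly x/3 when 3 divides x; anything
    // else lands above 0xFFFFFFFF/3, so one unsigned compare decides it.
    public static boolean isMultipleOf3(int x) {
        return Integer.compareUnsigned(x * 0xAAAAAAAB, 0x55555555) <= 0;
    }
}
```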
Your compiler may not know this trick. For example this:
uint32_t foo(uint32_t x)
{
    return x % 3 == 0;
}
Becomes, on Clang 3.4.1 for x64,
movl %edi, %eax
movl $2863311531, %ecx # imm = 0xAAAAAAAB
imulq %rax, %rcx
shrq $33, %rcx
leal (%rcx,%rcx,2), %eax
cmpl %eax, %edi
sete %al
movzbl %al, %eax
ret
G++ 4.8:
mov eax, edi
mov edx, -1431655765
mul edx
shr edx
lea eax, [rdx+rdx*2]
cmp edi, eax
sete al
movzx eax, al
ret
What it should be:
imul eax, edi, 0xaaaaaaab
cmp eax, 0x55555555
setbe al
movzx eax, al
ret
I guess I'm a bit late to this party, but here's a slightly faster (and slightly prettier) solution than the one from harold:
bool is_multiple_of_3(std::uint32_t i)
{
    i = (i & 0x0000FFFF) + (i >> 16);
    i = (i & 0x00FF) + (i >> 8);
    i = (i & 0x0F) + (i >> 4);
    i = (i & 0x3) + (i >> 2);
    const std::uint32_t lookuptable = 0x49249249;
    return ((lookuptable >> i) & 1) != 0;
}
It's C++11, but that doesn't really matter for this piece of code. It's also brute-force tested for 32-bit unsigned ints. It saves you at least one bit-fiddling op for each of the first four steps. It also scales beautifully to 64 bits - only one additional step needed at the beginning.
The last two lines are obviously and shamelessly taken from harold's solution (nice one, I wouldn't have done that so elegantly).
Possible further optimizations:
The & ops in the first two steps will be optimized away by just using the lower-half registers on architectures that have them (x86, for example).
The largest possible output from the third step is 60, and from the fourth step it's 15 (when the function argument is 0xFFFFFFFF). Given that, we can eliminate the fourth step, use a 64-bit lookuptable and shift directly into that following the third step. This turns out to be a bad idea for Visual C++ 2013 in 32-bit mode, as the right shift turns into a non-inline call to code that does a lot of tests and jumps. However, it should be a good idea if 64-bit registers are natively available.
The point above needs to be reevaluated if the function is modified to take a 64-bit argument. The maximum outputs from the last two steps (which will be steps 4 and 5 after adding one step at the beginning) will be 75 and 21 respectively, which means we can no longer eliminate the last step.
The first four steps are based on the fact that a 32-bit number can be written as
(high 16 bits) * 65536 + (low 16 bits) =
(high 16 bits) * 65535 + (high 16 bits) + (low 16 bits) =
(high 16 bits) * 21845 * 3 + ((high 16 bits) + (low 16 bits))
So the whole thing is divisible by 3 if and only if the right parenthesis is divisible by 3. And so on, as this holds for 256 = 85 * 3 + 1, 16 = 5 * 3 + 1, and 4 = 3 + 1. (Of course, this is generally true for even powers of two; odd powers are one less than the nearest multiple of 3.)
The numbers that are input into the following steps will be larger than 16-bit, 8-bit, and 4-bit respectively in some cases, but that's not a problem, as we're not dropping any high-order bits when shifting right.

How to get Color object in java from css-style string which describes color?

For example, I have strings #0f0, #00FF00, green and in all cases I want to transform them to Color.GREEN.
Are there any standard ways or maybe some libraries have necessary functionality?
First, I apologize if the below isn't helpful - that is, if you know how to do this already and were just looking for a library to do it for you. I don't know of any libraries that do this, though they certainly may exist.
Of the 3 strings you gave as an example, #00FF00 is the easiest to transform.
String colorAsString = "#00FF00";
int colorAsInt = Integer.parseInt(colorAsString.substring(1), 16);
Color color = new Color(colorAsInt);
If you have #0f0...
String colorAsString = "#0f0";
int colorAsInt = Integer.parseInt(colorAsString.substring(1), 16);
int R = colorAsInt >> 8;
int G = colorAsInt >> 4 & 0xF;
int B = colorAsInt & 0xF;
// my attempt to normalize the colors - repeat the hex digit to get 8 bits
Color color = new Color(R << 4 | R, G << 4 | G, B << 4 | B);
If you have a color word like green, then you'll want to check first that all CSS-recognized colors exist among the Java constants. If so, you can maybe use reflection to get the constant values from them (uppercase the names first).
If not, you may need to create a map of CSS strings to colors yourself. This is probably the cleanest method anyway.
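A sketch of that map-based approach, pulling the three cases together (the names are my own; note that the CSS keyword green is actually #008000, which differs from Java's Color.GREEN at #00FF00, so reflection on Color's constants would disagree with CSS for that keyword):

```java
import java.awt.Color;
import java.util.HashMap;
import java.util.Map;

public class CssColors {
    // A few named CSS colours; a real map would list all 140+ keywords.
    private static final Map<String, Color> NAMED = new HashMap<>();
    static {
        NAMED.put("green", new Color(0x00, 0x80, 0x00)); // CSS green is #008000
        NAMED.put("red",   new Color(0xFF, 0x00, 0x00));
        NAMED.put("blue",  new Color(0x00, 0x00, 0xFF));
    }

    public static Color parse(String s) {
        s = s.trim().toLowerCase();
        if (s.startsWith("#")) {
            int v = Integer.parseInt(s.substring(1), 16);
            if (s.length() == 4) { // #rgb: widen each nibble to a full byte
                int r = v >> 8, g = v >> 4 & 0xF, b = v & 0xF;
                return new Color(r << 4 | r, g << 4 | g, b << 4 | b);
            }
            return new Color(v);
        }
        return NAMED.get(s); // null for unknown keywords
    }
}
```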
