Consider a Java project doing lots of floating-point operations where efficiency and memory consumption can be important factors - such as a game. If this project targets multiple platforms, typically Android and the desktop, or more generally 32- and 64-bit machines, you might want to be able to build a single-precision and a double-precision build of your software.
In C/C++ and other lower level languages, this is easily achieved by typedef statements. You can have:
typedef float myfloat;
and the day you want to go 64 bit just change that to:
typedef double myfloat;
provided you use myfloat throughout your code.
How would one achieve a similar effect in Java?
A global search and replace of "float" by "double" (or vice versa) has the huge drawback of breaking compatibility with external libraries that only offer one flavor of floating-point precision, chief among them certain functions of the java.lang.Math class.
Having a high-level polymorphic approach is less than ideal when you wish to remain efficient and keep memory tight (by having lots of primitive type arrays, for instance).
Have you ever dealt with such a situation and if so, what is in your opinion the most elegant approach to this problem?
The official Android documentation says this about float vs. double performance:
In speed terms, there's no difference between float and double on the more modern hardware. Space-wise, double is 2x larger. As with desktop machines, assuming space isn't an issue, you should prefer double to float.
So you shouldn't have to worry about performance too much. Just use the type that is reasonable for solving your problem.
Apart from that, if you really want the ability to switch between double and float, you could wrap your floating-point value in a class and work with that. But I would expect such a solution to be slower than using any floating-point primitive directly. As Java does not support operator overloading, it would also make your math code much more complicated. Think of something like
double d = (a+b)/c;
when using primitives versus
MyFloat d = a.add(b).div(c);
when working with wrapper objects. In my experience, the polymorphic approach makes maintaining your code much harder.
I will omit the part saying that, for example, double should be just fine; others have covered that well enough. I'm just assuming you want to do it - no matter what. Even just as an experiment to see what the performance/memory difference is - it's interesting.
So, a preprocessor would be great here. Java doesn't provide one.
But you can use your own; several implementations exist. Using javapp, for example, you get #define.
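For illustration, here is a hedged sketch of what a preprocessed source file could look like (the directive syntax varies between tools, so treat this as an assumption rather than javapp's documented behavior):

// Particle.java.pp - run through the preprocessor before javac sees it;
// the build selects the precision by defining REAL as float or double.
#define REAL float

public class Particle {
    REAL x, y;          // expands to: float x, y;
    REAL[] velocities;  // arrays of primitives stay compact

    REAL speedSquared(REAL vx, REAL vy) {
        return vx * vx + vy * vy;
    }
}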
This is not practical without great pain.
While you could define your high-level APIs to work with wrapper types (e.g. use Number instead of a specific type and have multiple implementations of the API that use Float or Double under the hood), chances are that the wrappers will eat more performance than you could ever gain by selecting a less precise type.
You could define high-level objects as interfaces (e.g. Polygon etc.) and hide their actual data representation in the implementation. That means you will have to maintain two implementations, one using float and one using double; it probably requires considerable code duplication. A sketch of this approach follows below.
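A minimal sketch of that idea (all names invented for illustration; only the float implementation is shown, its double twin would be identical except for the field types):

public interface Polygon {
    int vertexCount();
    double area(); // results are widened to double in the public API either way
}

final class FloatPolygon implements Polygon {
    private final float[] xs, ys; // compact primitive arrays, the point of the exercise

    FloatPolygon(float[] xs, float[] ys) {
        this.xs = xs;
        this.ys = ys;
    }

    @Override public int vertexCount() { return xs.length; }

    @Override public double area() {
        double sum = 0; // shoelace formula
        for (int i = 0, n = xs.length; i < n; i++) {
            int j = (i + 1) % n;
            sum += (double) xs[i] * ys[j] - (double) xs[j] * ys[i];
        }
        return Math.abs(sum) / 2;
    }
}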
Personally, I think you are attempting to solve a non-existent conundrum. Either you need double precision, in which case float isn't an option, or float is "good enough", in which case there is no advantage to ever using double.
Simply use the smallest type that fits the requirements. Especially for a game, float vs. double should make little difference. It's unlikely you spend that much time in math (in Java code) - most likely your graphics will determine how fast you can go.
Generally use float and only switch to double for parts where you need the precision and the question disappears.
Java does not have such functionality, aside from brute-force find-and-replace. However, you can create a helper class. In the code below, the type you change to switch the floating-point precision is called F; since Java has no type aliases, F simply marks the one spot where you write float (or double):
public class VarFloat {
    F boxedVal; // F stands for the primitive you pick: float today, double tomorrow

    public VarFloat(F f){
        this.boxedVal = f;
    }

    public F getVal() { return boxedVal; }
    public double getDoubleVal() { return (double)boxedVal; }
    public float getFloatVal() { return (float)boxedVal; }
}
Where at all possible, you should use getVal as opposed to any of the type-specific ones. You can also consider adding methods like add, addLocal, etc. For example, the two add methods would be:
public VarFloat add(VarFloat vf){
    return new VarFloat(this.boxedVal + vf.boxedVal);
}

public VarFloat addLocal(VarFloat vf){
    this.boxedVal += vf.boxedVal;
    return this; // for method chaining
}
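Assuming F currently stands for float, usage would look like this:

VarFloat a = new VarFloat(1.5f);
VarFloat b = new VarFloat(2.5f);
VarFloat sum = a.add(b); // allocates a new instance
a.addLocal(b);           // mutates a in place and returns it for chaining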
The question is about the strategic approach to defining a square root algorithm in a generic numerical interface. I am aware of the existence of algorithms solving the problem under various conditions. I'm interested in algorithms that:
Solve the problem using only selected functions;
Don't care whether the objects manipulated are integers, floating points or something else, provided those objects can be added, multiplied and compared;
Return an exact solution if the input is a perfect square.
Because of the subtlety of the distinction, and for the sake of clarity, I will define the problem in a very verbose way. Beware the wall of text!
Suppose we have a Java interface Constant<C extends Constant<C>> with the following abstract methods, which we will call base functions:
C add(C a);
C subtract(C a);
C multiply(C a);
C[] divideAndRemainder(C b);
C additiveInverse();
C multiplicativeInverse();
C additiveIdentity();
C multiplicativeIdentity();
int compareTo(C arg1);
It is not known whether C represents an integer or a floating-point type, nor should this be relevant in the following discussion.
Using only those methods it is possible to create static or default implementations of some mathematical algorithms on numbers: for example, divideAndRemainder(C b) and compareTo(C arg1) allow one to build algorithms for the greatest common divisor, the Bézout identity, and so on.
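For example, a Euclidean GCD needs nothing beyond the base functions. A sketch (the unchecked cast of this to C is the usual self-type idiom and is my assumption, not part of the interface as given):

public default C gcd(C other) {
    @SuppressWarnings("unchecked")
    C a = (C) this;
    C b = other;
    // Euclid's algorithm: only divideAndRemainder, compareTo and
    // additiveIdentity are needed.
    while (b.compareTo(b.additiveIdentity()) != 0) {
        C r = a.divideAndRemainder(b)[1]; // remainder of a / b
        a = b;
        b = r;
    }
    return a;
}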
Now suppose our interface has a default method for exponentiation:
public default C pow(int n){
    if(n < 0) return this.multiplicativeInverse().pow(-n);
    if(n == 0) return multiplicativeIdentity();
    // square-and-multiply; the cast is the usual self-type idiom
    @SuppressWarnings("unchecked")
    C base = (C) this;
    C output = multiplicativeIdentity();
    int m = n;
    while(m > 0)
    {
        if(m % 2 == 1) output = output.multiply(base);
        base = base.multiply(base);
        m = m/2;
    }
    return output;
}
The goal is to define two default methods, C root(int n) and C maximumErrorAllowed(), such that:
x.equals(y.pow(n)) implies x.root(n).equals(y);
C root(int n); is actually implemented using only base functions and methods created from the base functions;
The interface can still be applied to any kind of numbers, including but not limited to integers and floating points;
this.subtract(this.root(n).pow(n)).compareTo(maximumErrorAllowed()) == -1 for every this such that this.root(n) != null, i.e. any approximation has an error smaller than maximumErrorAllowed();
Is that possible? If yes, how and what would be an estimation of the computational complexity?
I spent some time working on a custom number interface for Java; it's amazingly hard--one of the most disappointing experiences I've had with Java.
The problem is that you have to start over from scratch--you can't really re-use anything in Java, so if you want to have implementations for int, float, long, BigInteger, rational, Complex and Vector you have to implement all the methods yourself for every single class, and then don't expect the Math package to be of much help.
It got particularly nasty implementing the "Composed" classes like "Complex" which is made from two of the "Generic" floating point types, or "Rational" which composes two generic integer types.
And math operators are right out--this can be especially frustrating.
The way I got it to work reasonably well was to implement the classes in Java and then write some of the higher-level stuff in Groovy. If you name the operations correctly, Groovy can just pick them up, like if your class implements ".plus()" then groovy will let you do instance1+instance2.
IIRC, because of being dynamic, Groovy often handled cross-class pieces nicely: if you said Complex + Integer, you could supply a conversion from Integer to Complex and Groovy would promote the Integer to Complex to do the operation and return a Complex.
Groovy is pretty interchangeable with Java. You can usually just rename a Java class ".groovy" and compile it and it will work, so it was a pretty good compromise.
This was a long time ago though, now you might get some traction with Java 8's default methods in your "Number" interface--that could make implementing some of the classes easier but might not help--I'd have to try it again to find out and I'm not sure I want to re-open that can o' worms.
Is that possible? If yes, how?
In theory, yes. There are approximation algorithms for root(), for example the n-th root algorithm. You will run into problems with precision, however, which you might want to solve on a case-by-case basis (e.g. use a look-up table for integers). As such, I'd recommend against a default implementation in an interface.
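That said, if you do want one anyway, here is a hedged sketch for the integer-like case only: the floor of the n-th root by binary search, built from nothing but the base functions and pow (floating-point types would instead iterate until the error drops below maximumErrorAllowed()). It performs O(log x) bisection steps of one pow each, i.e. O(log x * log n) multiplications:

public default C root(int n) {
    @SuppressWarnings("unchecked")
    C self = (C) this;
    C one = multiplicativeIdentity();
    C two = one.add(one);
    C lo = additiveIdentity();  // invariant: lo^n <= self < hi^n
    C hi = self.add(one);
    while (hi.subtract(lo).compareTo(one) > 0) {
        C mid = lo.add(hi).divideAndRemainder(two)[0];
        if (mid.pow(n).compareTo(self) <= 0) lo = mid;
        else hi = mid;
    }
    return lo; // exact whenever self is a perfect n-th power
}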
What would be an estimation of the computational complexity?
This, too, varies with the implementation and the type of number, and depends on your precision. For integers, you can create an implementation with a look-up table, and the complexity would be O(1).
If you want a better answer for the complexity of the operation itself, you might want to check out Computational complexity of calculating the nth root of a real number.
In Java, for example, there is the primitive data type "int", which represents a 32-bit value, and there is "Integer", which is just a class with a single "int" property (and some methods, of course). That means a Java "Integer" still uses primitives behind the scenes. And that's the reason Java is not a pure object-oriented programming language.
Where could a value be stored if there were no primitives? For example, imagine this pseudo class:
class Integer
{
    private Integer i = 12;

    public Integer getInteger()
    {
        return this.i;
    }
}
This would be recursive.
How can a programming language be implemented without primitives?
I appreciate any help solving my confusion.
Behind the scenes there will always be primitives, because in the end it is just bits in memory. But some languages hide the primitives so that you can work only with objects. Java lets you work with both objects and primitives.
If by primitives you mean value types, then yes, you can live without them as a user and use Integer instead of int, paying for the overhead of heap allocation and GC. But this doesn't come for free; you have to pay the cost. Primitives like 32-bit/64-bit integers and IEEE 754 floating-point numbers will always be faster because there is hardware support for them.
From a compiler writer point of view you have to use what the machine supports to make things work.
LISP is a very simple functional language. Basic LISP did not have a primitive int; one solution to integers was to represent 3 as successor of successor of successor of zero.
This actually had some advantages: integers are open-ended, there is no overflow, so operations really are commutative, associative, and so on. Some nice optimizations become possible. And of course succ(succ(succ(zero))) could be encoded in a more tuple-like way (probably better not in LISP).
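To make the successor encoding concrete, here is a hedged sketch in Java rather than LISP (class names invented):

// Peano-style naturals: a number is either ZERO or succ(predecessor).
abstract class Nat {
    static final Nat ZERO = new Nat() {};

    static Nat succ(Nat n) { return new Succ(n); }

    private static final class Succ extends Nat {
        final Nat pred;
        Succ(Nat pred) { this.pred = pred; }
    }

    // a + 0 = a;  a + succ(b) = succ(a + b) -- no overflow, ever.
    static Nat add(Nat a, Nat b) {
        return (b instanceof Succ) ? succ(add(a, ((Succ) b).pred)) : a;
    }
}

// Nat three = Nat.succ(Nat.succ(Nat.succ(Nat.ZERO)));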
In a later, normal LISP, '3' would be an atom, as would 123, with math operators working on such atoms.
Then there are symbol-manipulating languages (SNOBOL) that could do math on numerical strings: ['4', '0'] * ['3'].
So names are objects (atoms), like a char 'a' or an int 42.
It might help to show you the analogous code in a language that takes the "everything is an object" design principle much more seriously than Java does. Namely, Smalltalk. Imagine what it would be like if Java had only int, not Integer, but everything you used to need to use Integer for was possible with int. That's Smalltalk.
This is an excerpt of the code defining the SmallInteger class in Squeak 5.0:
Integer immediateSubclass: #SmallInteger
instanceVariableNames: ''
classVariableNames: ''
poolDictionaries: ''
category: 'Kernel-Numbers'!
!SmallInteger commentStamp: 'eem 11/20/2014 08:41' prior: 0!
My instances are at least 31-bit numbers, stored in twos complement
form. The allowable range in 32-bits is approximately +- 10^9
(+- 1billion). In 64-bits my instances are 61-bit numbers,
stored in twos complement form. The allowable range is
approximately +- 10^18 (+- 1 quintillion). The actual
values are computed at start-up. See SmallInteger class startUp:,
minVal, maxVal.!
!SmallInteger methodsFor: 'arithmetic' stamp: 'di 2/1/1999 21:31'!
+ aNumber
"Primitive. Add the receiver to the argument and answer with the result
if it is a SmallInteger. Fail if the argument or the result is not a
SmallInteger.
Essential, No Lookup. See Object documentation whatIsAPrimitive."
<primitive: 1>
^ super + aNumber! !
!SmallInteger class methodsFor: 'instance creation' stamp: 'tk 4/20/1999 14:17'!
basicNew
self error: 'SmallIntegers can only be created by performing arithmetic'! !
Don't sweat the fine details of syntax or semantics. What you should get out of this is: SmallInteger is defined as an object class just like everything else in the language, and arithmetic operations are methods just like every other piece of code in the language. But it's a little odd. It has no instance variables, you can only create instances by performing arithmetic, and most of the methods look like they're being defined circularly.
"Under the hood", the implementation maps arithmetic to the appropriate machine instructions (the <primitive: 1> thing is a hint to the implementation about that) and stores SmallIntegers as nothing more than the integer itself. The restricted range, relative to the hardware, is because a couple of bits are reserved to mark memory words as integers, rather than pointers to objects ("tagged pointers").
Without being able to eventually access real data (e.g. primitives or actual bits), directly or indirectly, on a machine, it is no longer a programming language; it is an Interface Description Language.
(I'll rephrase the question to what I believe you're asking. If you think I've got it wrong, feel free to comment.)
How can a type system that's based on composition and inheritance define any useful type, if there are no intrinsic types to start from? Unless the language implementation knows about at least one intrinsic type to start from, any defined types would be doomed to be either recursive or empty. Is this inevitable?
Yes, in every C-family language that I know of, this is pretty much inevitable.
If every type is composed of other types then, at the very least, you need to have an intrinsic type to build upon - for example, an intrinsic type that represents a bit, in order to construct the byte type out of it through composition, then the word type, then various integer types, and so on. Then you'd need to define the operations that can be performed on these types, by manipulating the bits that make up their internal representation.
And even though all you need is one intrinsic type to build upon, it would likely be terribly inefficient - you don't want to waste space or CPU cycles and you do want to take advantage of the various storage locations and instructions that your target architecture offers, including FP registers and other stuff.
Thus, a good compromise between performance and "purity" is to offer in the language some intrinsic types that are likely to be recognizable by modern CPUs (like int32, int64, float, double, etc) and build the rest of the type system upon them. In Java, they decided to call these intrinsic types primitives and make them separate from classes.
Eventually everything comes back to bits in memory and instructions to the computer. The difference between assembler, compiled, procedural, object oriented, and all the other things is how much abstraction there is between you and the bits and how much benefit (or cost) you get from that abstraction.
In Java 8, there is a new method String.chars() which returns a stream of ints (IntStream) that represent the character codes. I guess many people would expect a stream of chars here instead. What was the motivation to design the API this way?
As others have already mentioned, the design decision behind this was to prevent the explosion of methods and classes.
Still, I personally think this was a very bad decision. Given that they did not want to add CharStream (which is reasonable), there should have been different methods instead of chars(); I would think of:
Stream<Character> chars(), which gives a stream of boxed characters and carries a slight performance penalty.
IntStream unboxedChars(), which would be used for performance-critical code.
However, instead of focusing on why it is done this way currently, I think this answer should focus on showing a way to do it with the API that we have gotten with Java 8.
In Java 7 I would have done it like this:
for (int i = 0; i < hello.length(); i++) {
System.out.println(hello.charAt(i));
}
And I think a reasonable method to do it in Java 8 is the following:
hello.chars()
.mapToObj(i -> (char)i)
.forEach(System.out::println);
Here I obtain an IntStream and map it to an object via the lambda i -> (char)i; this will automatically box it into a Stream<Character>, and then we can do what we want, and still use method references as a plus.
Be aware though that you must use mapToObj; if you forget and use map instead, nothing will complain, but you will still end up with an IntStream, and you might be left wondering why it prints the integer values instead of the characters.
Other ugly alternatives for Java 8:
By remaining in an IntStream and wanting to print them ultimately, you cannot use method references anymore for printing:
hello.chars()
.forEach(i -> System.out.println((char)i));
Moreover, using a method reference to your own method does not work anymore! Consider the following:
private void print(char c) {
System.out.println(c);
}
and then
hello.chars()
.forEach(this::print);
This will give a compile error, as there possibly is a lossy conversion.
Conclusion:
The API was designed this way to avoid adding CharStream. I personally think the method should return a Stream<Character>; the workaround for now is to use mapToObj(i -> (char)i) on the IntStream to work with characters properly.
The answer from skiwi covered many of the major points already. I'll fill in a bit more background.
The design of any API is a series of tradeoffs. In Java, one of the difficult issues is dealing with design decisions that were made long ago.
Primitives have been in Java since 1.0. They make Java an "impure" object-oriented language, since the primitives are not objects. The addition of primitives was, I believe, a pragmatic decision to improve performance at the expense of object-oriented purity.
This is a tradeoff we're still living with today, nearly 20 years later. The autoboxing feature added in Java 5 mostly eliminated the need to clutter source code with boxing and unboxing method calls, but the overhead is still there. In many cases it's not noticeable. However, if you were to perform boxing or unboxing within an inner loop, you'd see that it can impose significant CPU and garbage collection overhead.
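As a small illustration (my example, not part of the original answer), compare a boxed accumulator with a primitive one; the boxed version unboxes, adds and re-boxes on every iteration:

// Boxed: each += unboxes, adds, and allocates a new Integer
// (outside the small-value cache), feeding the garbage collector.
Integer boxedSum = 0;
for (int i = 0; i < 1_000_000; i++) {
    boxedSum += i;
}

// Primitive: plain int arithmetic, no allocation at all.
int primitiveSum = 0;
for (int i = 0; i < 1_000_000; i++) {
    primitiveSum += i;
}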
When designing the Streams API, it was clear that we had to support primitives. The boxing/unboxing overhead would kill any performance benefit from parallelism. We didn't want to support all of the primitives, though, since that would have added a huge amount of clutter to the API. (Can you really see a use for a ShortStream?) "All" or "none" are comfortable places for a design to be, yet neither was acceptable. So we had to find a reasonable value of "some". We ended up with primitive specializations for int, long, and double. (Personally I would have left out int but that's just me.)
For CharSequence.chars() we considered returning Stream<Character> (an early prototype might have implemented this) but it was rejected because of boxing overhead. Considering that a String has char values as primitives, it would seem to be a mistake to impose boxing unconditionally when the caller would probably just do a bit of processing on the value and unbox it right back into a string.
We also considered a CharStream primitive specialization, but its use would seem to be quite narrow compared to the amount of bulk it would add to the API. It didn't seem worthwhile to add it.
The penalty this imposes on callers is that they have to know that the IntStream contains char values represented as ints and that casting must be done at the proper place. This is doubly confusing because there are overloaded API calls like PrintStream.print(char) and PrintStream.print(int) that differ markedly in their behavior. An additional point of confusion possibly arises because the codePoints() call also returns an IntStream but the values it contains are quite different.
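Both pitfalls are easy to demonstrate (a small sketch):

// Which println overload runs decides what is printed:
"A".chars().forEach(System.out::println);               // 65, via println(int)
"A".chars().forEach(c -> System.out.println((char) c)); // A, via println(char)

// chars() and codePoints() also disagree outside the BMP:
String s = "A\uD83D\uDE00";                  // "A" followed by an emoji
System.out.println(s.chars().count());       // 3: the emoji is two char values
System.out.println(s.codePoints().count());  // 2: the emoji is one code point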
So, this boils down to choosing pragmatically among several alternatives:
We could provide no primitive specializations, resulting in a simple, elegant, consistent API, but which imposes a high performance and GC overhead;
we could provide a complete set of primitive specializations, at the cost of cluttering up the API and imposing a maintenance burden on JDK developers; or
we could provide a subset of primitive specializations, giving a moderately sized, high performing API that imposes a relatively small burden on callers in a fairly narrow range of use cases (char processing).
We chose the last one.
My questions are motivated by a C++ codebase which is not mine and which I am currently trying to understand. Nevertheless, I think this question can be answered by OO developers in general (I have seen this case in Java code as well, for example).
Reading through the code, I noticed that the developer always works using side effects (most functions have a void return type, except for getters and some rare cases) instead of returning results directly. He sometimes uses return values, but only for control flow (error codes... instead of exceptions).
Here are two possible examples of his prototypes (in pseudo-code):
For a function that should return min, max and avg of the float values in a matrix M:
void computeStatistics(float min, float max, float avg, Matrix M);
OR
void computeStatistics(List myStat, Matrix M);
For a function that should return some objects in a given list that verifies a certain criteria and the number of objects found:
int controlValue findObjects(List result, int nbObjectsFound, Object myCriteria, List givenList)
I am not familiar with C++, as you can probably see in my very-pseudo-code, but rather with Matlab, where it is possible to return everything you want from a function, for example an int and a List side by side (which could be useful for the second example). I know it is not possible in C++, and that could explain the second prototype, but it doesn't explain the choice in the first example, where he could have done:
List myStat computeStat(Matrix M)
Finally, here are my questions:
What are the possible reasons that could motivate this choice? Is it a good practice, a convention or just a development choice? Are there advantages of one way over the other (returning values vs. side effects way)?
In terms of C++:
IMO using return values is clearer than passing values by reference, and presents, in most cases, no overhead (have a look at RVO and copy elision).
However if you do use return values for your control flow, using references is not a bad thing and is still clear for most developers.
So I guess we could say that the choice is yours.
Keep also in mind that many developers are not aware of what black magic your C++ compiler is doing and so using return values might offend them.
In the past it was common practice to use reference parameters as output, since returning complex objects was very slow without return value optimization and move semantics. Today I believe that in most cases returning the value is the best choice.
Want Speed? Pass by Value.
Provided that the List type has a copy constructor, I would consider writing the following inappropriate:
void computeStatistics(List myStat, Matrix M);
Instead (again, provided that List is copyable) you should write:
List computeStat(Matrix M);
However, the call-by-reference approach can be motivated if your object is not copyable: then you won't need to allocate it on the heap; instead you can allocate it on the stack and pass your function a pointer to it.
Regarding:
void computeStatistics(float min, float max, float avg, Matrix M);
My personal opinion is that best practice is one method, one purpose, so I would do this like:
float computeMin(Matrix M);
float computeMax(Matrix M);
float computeAvg(Matrix M);
The only reason that I can see for making all this in one function would be because the calculations are not done separately (more work to do it in separate functions).
If you do need several return values from one method, I would do it with call-by-reference. For example:
void SomeMethod(input1, input2, &output1, &output2, &output3)
Is it possible to use some construct to replace all floats with doubles (or the opposite) without refactoring?
For example you may be implementing some mathematical system that works perfectly interchangeably with floats or doubles. In C you may use: typedef float real and then use real in your code. Changing to double involves only replacing one line of code.
Is something like this possible in Java? Or is there some generic numeric type?
This is not possible in Java in the straightforward case which you describe. However, depending on how your code works, you could write your math classes to interfaces, and have all methods that return values be implemented with both a double and a float return type. Then, you could write two implementation classes, and switch between them depending on which one you wanted to use.
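A minimal sketch of that idea (all names invented for illustration):

// One API, two precisions; callers code against the interface only.
interface RealOps {
    double add(double a, double b);
    double sqrt(double a);
}

final class DoubleOps implements RealOps {
    public double add(double a, double b) { return a + b; }
    public double sqrt(double a) { return Math.sqrt(a); }
}

// Computes in float internally but still exposes double, so callers
// can switch implementations without changing their own code.
final class FloatOps implements RealOps {
    public double add(double a, double b) { return (float) a + (float) b; }
    public double sqrt(double a) { return (float) Math.sqrt((float) a); }
}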
This seems like overkill. Why do you want to do this?
It's actually recommended to use BigDecimal instead of float/double where exactness matters. I don't think Java has anything similar to typedef float real.
No, there is no way to achieve this in Java with primitive types. There is simply no typedef equivalent, and there are also no template classes. From a functional view, you could make this work in an object-oriented way: the methods would take a wrapper class/interface type (something like java.lang.Number) and also return results as a wrapped type.
However, I would just scrap the entire idea and only implement the double version. Callers that want to work with float can just use the double version of any method - parameters will be automatically widened to double. The results then need to be cast back to float by the caller. The conversions to and from double will cost a little speed. Or if double was just nice to have and you can make do with float, create only a float version.
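For instance (length is a made-up utility method, shown only to illustrate the widening and the cast back):

// Only a double version exists:
static double length(double x, double y) {
    return Math.sqrt(x * x + y * y);
}

// A float caller: the float arguments widen to double implicitly,
// and only the result needs an explicit cast back.
float f = (float) length(3.0f, 4.0f);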
In terms of raw computation speed, there is little to no difference between float and double (on a desktop CPU). The speed advantage with float usually mostly comes from the halved memory bandwidth requirements.
If it's just one or a few utility classes, you could also have two sets of them (e.g. FloatMathUtil and DoubleMathUtil). It would then be up to the user to decide which one to code against (they would be entirely unrelated classes in terms of API).
You can use an object-oriented approach.
Create your own class that implements the methods your mathematical system needs, and use this class instead of float. Internally it can use whatever you want: float, double or BigDecimal. You can later change how your class works without changing the rest of your system.
Take a look at Double; it will give you the general idea of how to build it.
Implement methods for addition, multiplication, etc.
E.g.:
public class MyDecimal
{
    private float value;

    public MyDecimal(int value)
    {
        this.value = value;
    }

    public MyDecimal(float value)
    {
        this.value = value;
    }

    public MyDecimal multiply(MyDecimal by)
    {
        return new MyDecimal(value * by.value);
    }

    ...
}
So, if you want to use double instead of float you only need to change this class.