Dynamic comparator or 2 separate comparators? - java

I'm working on a solution that includes two types of sorting based on a property. Which do you think is the better approach?
public class ABComparator implements Comparator<O> {
    private final Property property;

    public ABComparator(Request request) {
        property = request.getProperty();
    }

    @Override
    public int compare(O o1, O o2) {
        if (property.someLogic()) {
            // first type of sorting
        } else {
            // another type of sorting
        }
    }
}
Or is it better to have two classes, each with its own logic, and choose one in the class where the sort actually happens?
Thanks

Problems grow exponentially as you incorporate more mutations and more "choices" into your algorithms.
You can check what SonarCloud says about cognitive complexity and cyclomatic complexity: https://www.sonarsource.com/resources/cognitive-complexity/
To put it simply: fewer choices and less state to manage give you more robust code and fewer bugs. Adding 2 classes is low in complexity, especially if they don't depend on anything else, and the added code volume should be small. I would personally use 2 classes with 2 implementations that share nothing between themselves if possible, so there is no if condition in the implementation of your compare function.
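A minimal sketch of the two-class approach; Item, ByRank and ByName are hypothetical names standing in for the question's O and its two sort orders. The single branch lives at the selection point, so each compare() stays branch-free:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

class Item {
    int rank;
    String name;
    Item(int rank, String name) { this.rank = rank; this.name = name; }
}

// One comparator per sort order, each with its own logic.
class ByRank implements Comparator<Item> {
    @Override public int compare(Item a, Item b) {
        return Integer.compare(a.rank, b.rank);
    }
}

class ByName implements Comparator<Item> {
    @Override public int compare(Item a, Item b) {
        return a.name.compareTo(b.name);
    }
}

public class ComparatorChoice {
    // The caller picks the comparator once, up front.
    static Comparator<Item> choose(boolean someLogic) {
        return someLogic ? new ByRank() : new ByName();
    }

    public static void main(String[] args) {
        List<Item> items = new ArrayList<>(List.of(new Item(2, "b"), new Item(1, "a")));
        items.sort(choose(true));
        System.out.println(items.get(0).rank); // prints 1
    }
}
```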

Related

How to compare Java function object to a specific method? [duplicate]

Say I have a List of objects which were defined using lambda expressions (closures). Is there a way to inspect them so they can be compared?
The code I am most interested in is
List<Strategy> strategies = getStrategies();
Strategy a = (Strategy) this::a;
if (strategies.contains(a)) { // ...
The full code is
import java.util.Arrays;
import java.util.List;

public class ClosureEqualsMain {
    interface Strategy {
        void invoke(/*args*/);
        default boolean equals(Object o) { // doesn't compile
            return Closures.equals(this, o);
        }
    }

    public void a() { }
    public void b() { }
    public void c() { }

    public List<Strategy> getStrategies() {
        return Arrays.asList(this::a, this::b, this::c);
    }

    private void testStrategies() {
        List<Strategy> strategies = getStrategies();
        System.out.println(strategies);
        Strategy a = (Strategy) this::a;
        // prints false
        System.out.println("strategies.contains(this::a) is " + strategies.contains(a));
    }

    public static void main(String... ignored) {
        new ClosureEqualsMain().testStrategies();
    }

    enum Closures {;
        public static <Closure> boolean equals(Closure c1, Closure c2) {
            // This doesn't compare the contents
            // like other immutables, e.g. String
            return c1.equals(c2);
        }
        public static <Closure> int hashCode(Closure c) {
            return // a hashCode which can detect duplicates for a Set<Strategy>
        }
        public static <Closure> String asString(Closure c) {
            return // something better than Object.toString();
        }
    }

    public String toString() {
        return "my-ClosureEqualsMain";
    }
}
It would appear the only solution is to define each lambda as a field and only use those fields. If you want to print out the method called, you are better off using Method. Is there a better way with lambda expressions?
Also, is it possible to print a lambda and get something human-readable? If you print this::a, then instead of
ClosureEqualsMain$$Lambda$1/821270929@3f99bd52
you would get something like
ClosureEqualsMain.a()
or even, using this.toString() plus the method name,
my-ClosureEqualsMain.a();
This question could be interpreted relative to the specification or the implementation. Obviously, implementations could change, but you might be willing to rewrite your code when that happens, so I'll answer from both perspectives.
It also depends on what you want to do. Are you looking to optimize, or are you looking for ironclad guarantees that two instances are (or are not) the same function? (If the latter, you're going to find yourself at odds with computational physics, in that even problems as simple as asking whether two functions compute the same thing are undecidable.)
From a specification perspective, the language spec promises only that the result of evaluating (not invoking) a lambda expression is an instance of a class implementing the target functional interface. It makes no promises about the identity, or degree of aliasing, of the result. This is by design, to give implementations maximal flexibility to offer better performance (this is how lambdas can be faster than inner classes; we're not tied to the "must create unique instance" constraint that inner classes are.)
So basically, the spec doesn't give you much, except obviously that two lambdas that are reference-equal (==) are going to compute the same function.
From an implementation perspective, you can conclude a little more. There is (currently, may change) a 1:1 relationship between the synthetic classes that implement lambdas, and the capture sites in the program. So two separate bits of code that capture "x -> x + 1" may well be mapped to different classes. But if you evaluate the same lambda at the same capture site, and that lambda is non-capturing, you get the same instance, which can be compared with reference equality.
If your lambdas are serializable, they'll give up their state more easily, in exchange for sacrificing some performance and security (no free lunch.)
One area where it might be practical to tweak the definition of equality is with method references because this would enable them to be used as listeners and be properly unregistered. This is under consideration.
I think what you're trying to get to is: if two lambdas are converted to the same functional interface, are represented by the same behavior function, and have identical captured args, they're the same.
Unfortunately, this is both hard to do (for non-serializable lambdas, you can't get at all the components of that) and not enough (because two separately compiled files could convert the same lambda to the same functional interface type, and you wouldn't be able to tell.)
The EG discussed whether to expose enough information to be able to make these judgments, as well as discussing whether lambdas should implement more selective equals/hashCode or more descriptive toString. The conclusion was that we were not willing to pay anything in performance cost to make this information available to the caller (bad tradeoff, punishing 99.99% of users for something that benefits .01%).
A definitive conclusion on toString was not reached but left open to be revisited in the future. However, there were some good arguments made on both sides on this issue; this is not a slam-dunk.
To compare lambdas I usually let the interface extend Serializable and then compare the serialized bytes. Not very nice, but it works in most cases.
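A sketch of that serialized-bytes trick, under the assumption that the functional interface extends Serializable. Note that it compares serialized forms, not behavior: two behaviorally identical lambdas from different capture sites will still compare as different.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.io.UncheckedIOException;
import java.util.Arrays;

public class LambdaBytes {
    // The interface must extend Serializable for lambdas targeting it
    // to be serializable at all.
    interface SerRunnable extends Runnable, Serializable {}

    static byte[] serialize(Object o) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                oos.writeObject(o);
            }
            return bos.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Two lambdas are considered "equal" if their serialized forms match.
    static boolean sameLambda(Object a, Object b) {
        return Arrays.equals(serialize(a), serialize(b));
    }

    public static void main(String[] args) {
        SerRunnable a = () -> System.out.println("a");
        SerRunnable b = () -> System.out.println("b");
        System.out.println(sameLambda(a, a)); // true: same lambda, same state
        System.out.println(sameLambda(a, b)); // false: different synthetic impl methods
    }
}
```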
I don't see a possibility to get this information from the closure itself; closures don't expose their state.
But you can use Java reflection if you want to inspect and compare the methods.
Of course that is not a very beautiful solution, because of the performance cost and the exceptions that have to be caught. But this way you get that meta-information.

Does using a map truly reduce cyclomatic complexity?

Suppose I have the original method below.
public String someMethod(String str) {
    String returnStr;
    if ("BLAH".equals(str)) {
        returnStr = "ok";
    } else if ("BLING".equals(str)) {
        returnStr = "not ok";
    } else if ("BONG".equals(str)) {
        returnStr = "ok";
    }
    return returnStr;
}
Does converting to below truly reduce CC?
Map<String, String> validator = new HashMap<String, String>();
validator.put("BLAH", "ok");
validator.put("BLING", "not ok");
validator.put("BONG", "ok");

public String someMethod(String str) {
    return validator.get(str);
}
Yes, in your case. In simple terms, cyclomatic complexity is the number of linearly independent paths from the starting point of a piece of code to its end. So any conditional operator increases the CC of your code.
(In case OP's question is related to the testing tag:) However, reducing CC doesn't reduce the number of unit tests that have to be written to cover your code: CC gives you only a lower bound on the test count. For good coverage, unit tests should cover all specific cases, and in the second version you don't reduce the number of specific cases; you only make your code more readable.
Yes, because cyclomatic complexity is defined as the number of linearly independent paths in the control flow graph plus one. In your second example there is only one path; the first has multiple paths through the if branches. However, it does not seem that cyclomatic complexity is really a problem here. You could rewrite your method like this to make it more readable:
public String someMethod(String str) {
    switch (str) {
        case "BLAH":
        case "BONG": return "ok";
        case "BLING": return "not ok";
        default: return null;
    }
}
Short answer: yes, using a HashMap in your case does reduce cyclomatic complexity.
Detailed answer: cyclomatic complexity, per Wikipedia, is
a quantitative measure of the number of linearly independent paths through a program's source code.
There are various ways to tackle if-else chains. If-else statements make code less readable and harder to understand. They are also problematic because each addition, deletion, or modification of a case forces you to change existing files whose other business logic stays the same, and those files then need to be retested. This leads to maintenance issues, since every such place must handle all the cases. The same issues exist with switch statements, though they are a little more readable.
The approach you used also reduces the number of distinct logical execution paths. Another alternative is the following.
Create an interface, say IPair, that defines an abstract method public String getValue();. Define a class for each case, say Blah.java, Bling.java, and Bong.java, each implementing IPair and returning the appropriate String from getValue().
public String someMethod(IPair pair) {
    return pair.getValue();
}
The advantage of the above approach is that if you need a new pair later, you can simply create a new class and pass in an object of it; the class only has to implement IPair.
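The IPair approach described above might look like the sketch below; the class names Blah and Bling follow the answer's hypothetical cases:

```java
// Each case becomes its own class, so someMethod() has no branching at all.
interface IPair {
    String getValue();
}

class Blah implements IPair {
    @Override public String getValue() { return "ok"; }
}

class Bling implements IPair {
    @Override public String getValue() { return "not ok"; }
}

public class PairDemo {
    static String someMethod(IPair pair) {
        return pair.getValue();
    }

    public static void main(String[] args) {
        System.out.println(someMethod(new Blah()));  // prints ok
        System.out.println(someMethod(new Bling())); // prints not ok
    }
}
```

Adding a new case later means adding a new class, not editing someMethod().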

Ability-object with multiple effects/features (game)

I'm trying to make a class which represents an Ability of a Champion in my game. The impediment I've faced is that I don't know how to design an Ability class with multiple properties, i.e. damaging, stunning, slowing. My idea was to create an enum which represents all these properties and then assign them to my abilities, for example:
public enum Effect {
    DAMAGE,
    SLOW,
    STUN,
    ...
}
But what if I have an ability that stuns and deals damage at the same time? Should I create an Effect[] array and deal with it somehow, or should I create marker interfaces like Serializable (the craziest idea I've had)? Or maybe there is some technique for cases like that?
An array of Effect objects seems fine; a list would be even better. You can even create your own wrapper class to provide additional methods that calculate entity properties based on the applied effects.
For example:
public class Effects {
    private final List<Effect> effects = new ArrayList<>();

    public int calculateDamage(int baseDamage) {
        int damage = baseDamage;
        if (effects.contains(Effect.DAMAGE)) {
            // some effect stacking :)
            damage *= 1.5 * Collections.frequency(effects, Effect.DAMAGE);
        }
        if (effects.contains(Effect.STUN)) {
            damage = 0;
        }
        return damage;
    }
}
Collections.frequency() is a utility method from the JDK's own java.util.Collections class (since Java 5); the Apache Commons Collections library, which I highly recommend, offers a similar CollectionUtils.cardinality().
You can create an Effect class with all the properties as boolean attributes and then assign true or false to them as you like. If you want to extend the properties to ones you can't decide at compile time, think about putting a HashMap in the Effect class.
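The boolean-attribute idea from this answer might look like the sketch below; the class and member names (AbilityEffect, damages, etc.) are made up for illustration, and the HashMap holds effects not known at compile time:

```java
import java.util.HashMap;
import java.util.Map;

public class AbilityEffect {
    // Compile-time-known effects as plain boolean flags.
    boolean damages;
    boolean slows;
    boolean stuns;

    // Effects you can't enumerate at compile time.
    private final Map<String, Boolean> extra = new HashMap<>();

    void setExtra(String name, boolean value) { extra.put(name, value); }

    boolean hasExtra(String name) { return extra.getOrDefault(name, false); }
}
```

An ability that stuns and deals damage at the same time simply sets both damages and stuns to true on its AbilityEffect.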

Using intermediate array for hashCode and equals

As it's a pain to handle structural changes of the class in two places, I often do:
class A {
    class C {}
    class B {}

    private B bChild;
    private C cChild;

    private Object[] structure() {
        return new Object[]{bChild, cChild};
    }

    public int hashCode() {
        return Arrays.hashCode(structure());
    }

    public boolean equals(Object that) {
        // type check here
        return Arrays.equals(this.structure(), ((A) that).structure());
    }
}
What's bad about this approach besides boxing of primitives?
Can it be improved?
It's a clever way to reuse library methods, which is generally a good idea; but it does a great deal of excess allocation and array manipulation, which can be terribly inefficient in such frequently used methods. All in all, I'd say it's cute, but it wouldn't pass a review.
In JDK 7 they added the java.util.Objects class. It implements hash and equals utilities in a manner reminiscent of what you wrote. The point being that this approach is actually sanctioned by the JDK developers. Ernest Friedman-Hill has a point, but in the majority of cases I don't think the extra few machine instructions are worth saving at the expense of readability.
For example: the hash utility method is implemented as:
public static int hash(Object... values) {
    return Arrays.hashCode(values);
}
Someone familiarizing themselves with the code will have a bit more difficulty seeing what's going on; it's less "obvious" than listing the individual fields, as demonstrated by my previously erroneous answer. It is true that equals() is generally implemented with an Object passed in, so this is debatable, but there the input is cast only after a reference-equality check, which is not the case here.
One improvement might be to store the array as a private data member rather than create it with the structure method, sacrificing a bit of memory to avoid the boxing.
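For comparison, a version of the example class using java.util.Objects directly (the field names b and c are made up for illustration) might look like this; Objects.hash builds the varargs array internally, so there is no hand-rolled structure() method to keep in sync:

```java
import java.util.Objects;

class A {
    private final String b;
    private final int c;

    A(String b, int c) { this.b = b; this.c = c; }

    @Override
    public int hashCode() {
        return Objects.hash(b, c); // still boxes and allocates, like structure()
    }

    @Override
    public boolean equals(Object that) {
        if (this == that) return true;          // reference-equality fast path
        if (!(that instanceof A)) return false; // type check before the cast
        A other = (A) that;
        return Objects.equals(b, other.b) && c == other.c;
    }
}
```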

sorting objects in java

I want to do nested sorting. I have a Course object which has a set of Applications. Applications have attributes like time and priority. Now I want to sort them according to priority first, and within the same priority, by time.
For example, given this class (public fields only for brevity):
public class Job {
    public int prio;
    public int timeElapsed;
}
you might implement sorting by time using the static sort(List, Comparator) method in the java.util.Collections class. Here, an anonymous inner class is created to implement the Comparator for Job. This is sometimes referred to as an alternative to function pointers (since Java does not have those).
public void sortByTime() {
    List<Job> list = new ArrayList<Job>();
    // add some items
    Collections.sort(list, new Comparator<Job>() {
        public int compare(Job j1, Job j2) {
            return j1.timeElapsed - j2.timeElapsed;
        }
    });
}
Mind the contract model of the compare() method: http://java.sun.com/javase/6/docs/api/java/util/Comparator.html#compare(T,%20T)
Take a look at the Google Collections Ordering class at http://google-collections.googlecode.com/svn/trunk/javadoc/index.html?com/google/common/collect/Ordering.html. It should have everything you need plus more. In particular you should take a look at the compound method to get your second ordering.
To sort on multiple criteria, there are a couple of common approaches using the Comparable interface:
write your compareTo() method so that it compares one field, then goes on to compare the other if it can't return an ordering based on the first;
if you're careful, you can instead translate a combination of both criteria into a single integer in compareTo() and compare that.
The first of these approaches is usually preferable and more likely to be correct (even though the code ends up looking a bit more cumbersome).
See the example on my web site of making Java objects sortable, which shows an example of sorting playing cards by suit and number within the suits.
You've already asked this question elsewhere. Write an implementation of java.util.Comparator.
Subtracting the two numbers as in the example above is not always a good idea.
Consider what would happen if you compared -2,147,483,644 with 2,147,483,645. Subtracting them causes an integer overflow and thus a positive result, so the comparator would claim that -2,147,483,644 is larger than 2,147,483,645.
-5 - 6 = -11 (negative, as expected)
-2,147,483,644 - 2,147,483,645 = 7 (overflow: positive, the wrong sign)
Subtracting to find the compare value is even more dangerous when you compare longs or doubles, since they have to be cast back to ints, providing another opportunity for an overflow. For example, never do this:
class ZardozComparorator implements Comparator<Zardoz> {
    public int compare(Zardoz z1, Zardoz z2) {
        Long z1long = Long.valueOf(z1.getName());
        Long z2long = Long.valueOf(z2.getName());
        return (int) (z1long - z2long); // overflow-prone
    }
}
Instead, use the compare method of the object you are comparing. That way you avoid overflows, and if needed you can override the compare method.
class ZardozComparorator implements Comparator<Zardoz> {
    public int compare(Zardoz z1, Zardoz z2) {
        Long z1long = Long.valueOf(z1.getName());
        Long z2long = Long.valueOf(z2.getName());
        return z1long.compareTo(z2long);
    }
}
Here is my opinion on this 7-year-old question that still gets visited sometimes:
Make a static method in your class like this (useful if you use other libraries to autogenerate getters and setters):
public static String getNameFrom(Order order) {
    return order.name;
}
Then try to use something like this:
Collections.sort(orders, Comparator.comparing(Order::getNameFrom));
For a more elegant approach, I always prefer not to change the entity but to use lambdas instead. For example:
Collections.sort(orders, (order1, order2) ->
    order1.name.compareTo(order2.name));
