I have this:
public class Superclass {
public int getMaxLevel() {
return 1;
}
}
public class Subclass extends Superclass {
public int getMaxLevel() {
return 4;
}
}
and I need it to be this:
public class Superclass {
public int getMaxLevel() {
return 3; //injected
//return 1; //original
}
}
public class Subclass extends Superclass {
public int getMaxLevel() {
return 3; //injected
//return 4; //original
}
}
Both superclass and subclass are in a library. Also, due to my use-case being a Minecraft mod, I aim at compatibility, so it needs to be done in the superclass, not the subclass.
I would prefer answers that use Java only (require no external libraries).
Edit: Modding framework used is Forge 1.16.5, and I do not have access to all subclasses (so there may be additional classes extending Superclass), which is why the injection needs to be performed in Superclass.
You don't mention the modding toolchain you're using, but if it's Fabric or Forge 1.16.5, the Mixin bytecode-weaving framework will let you do what you're looking for. Read more about Mixin on the Fabric wiki.
@Mixin(Superclass.class)
public class SuperclassMixin {
@Inject(method = "getMaxLevel", at = @At("RETURN"), cancellable = true)
public void getMaxLevelReturnInject(CallbackInfoReturnable<Integer> cir) {
cir.setReturnValue(3);
}
}
and repeat for Subclass.
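For reference, the repeated mixin for the subclass would be a sketch along the same lines (assuming Subclass keeps the same method signature); both mixin classes also have to be listed in your mod's mixin configuration JSON:

    @Mixin(Subclass.class)
    public class SubclassMixin {
        @Inject(method = "getMaxLevel", at = @At("RETURN"), cancellable = true)
        public void getMaxLevelReturnInject(CallbackInfoReturnable<Integer> cir) {
            cir.setReturnValue(3);
        }
    }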
Related
So, I am trying to learn how interfaces in Java work, and I'm really confused about them.
I want to make it work like a method in a normal class file, like this:
public class APIClass {
private int davs;
public int setInt(int dav) {
this.davs = dav;
return davs;
}
public int getInt() {
return davs;
}
}
Two methods: one that sets the int "davs", and one that gets the int "davs".
What I want to do in the interface is something like that. I have seen in other interface files that they have something like this:
public interface MyInterface {
public MyInterface setInt(int davs);
public MyInterface getInt();
}
EDIT:
My question is that I can't see what the interface is actually for. Everyone I have seen use one declares the same method again in a new class file, so it looks like they don't really need the interface file at all. So what is it for?
Interfaces in Java are meant as an abstraction. You're expected to use them strictly for deriving other classes. You don't define any method implementations in them at all.
So if you have an interface like this:
public interface MyInterface {
int setInt(int davs); // this should probably return void
int getInt();
}
And you implement it in a class like this:
public class APIClass implements MyInterface {
private int da;
public int setInt(int davs) {
// return da; <- this doesn't make a whole lot of sense
da = davs; // I assume you meant this
return da; // usually you don't return anything from a setter
}
public int getInt() {
return da;
}
}
And another class like this:
public class SecondAPIClass implements MyInterface {
private int dada = 0;
public int setInt(int davs) { // note that you have to keep the same method signature in all derived classes
dada = davs + 5;
return dada;
}
public int getInt() {
return dada;
}
}
You can use the interface to group them both. This is an important part of object-oriented design. Its usefulness is probably too broad to explain fully in a single Stack Overflow answer, but here's a simple example of it:
import java.util.ArrayList;
// wrapper class added so the snippet compiles as-is
public class Demo {
public static void main(String[] args)
{
APIClass first = new APIClass();
SecondAPIClass second = new SecondAPIClass();
first.setInt(20);
second.setInt(20);
ArrayList<MyInterface> list = new ArrayList<MyInterface>();
list.add(first);
list.add(second);
for(MyInterface item : list) {
System.out.println(item.getInt());
}
}
}
The output should be this:
20
25
This example might be more helpful:
Consider you have several vehicles. All vehicles can drive, but driving a boat is different from driving a car, or a helicopter. This is where interfaces are useful. You can declare what a Vehicle should do, without dictating how it should do it.
public interface Vehicle {
void drive();
}
So when you derive it in a class Car, you can state how you want this vehicle to drive.
public class Car implements Vehicle {
public void drive() {
// drive like a car
}
}
Now boats are vehicles, and they can drive too, but driving a boat is much different than driving a car.
public class Boat implements Vehicle {
public void drive() {
// drive like a boat
}
}
In summary, interfaces are useful when you have an abstract concept in mind, where you know what derived objects should do but can't dictate how they do it.
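A minimal usage sketch (the Garage class name and the use of Arrays.asList are just for illustration):

    import java.util.Arrays;
    import java.util.List;

    public class Garage {
        public static void main(String[] args) {
            // both objects are Vehicles, so we can treat them uniformly
            List<Vehicle> vehicles = Arrays.asList(new Car(), new Boat());
            for (Vehicle v : vehicles) {
                v.drive(); // each class decides how it drives
            }
        }
    }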
I am attempting to write generic/parameterized code that I can call from multiple classes I have. I will write several classes that have methods with the same names, so I'm hoping to create a generic way to call them.
Say I have 3 different classes that all have a getAmount() method that returns an int and a toString() method that returns a String.
Then I want a generic class that could possibly reference any of those three.
public class Stuff<Project> {
private Project p;
public Stuff(Project aProject) {
this.p = aProject;
}
public int getValue() {
return p.getAmount();
}
public String toString() {
return p.toString();
}
}
Is there anything in Java that would get this functionality for me, or am I thinking in C?
I've tried using Object.getClass() in various ways to attempt to cast things, and neither the other generics-related questions on this site nor the docs.oracle.com pages seem to have what I'm looking for. Is this not possible because of the way type erasure works?
Don't use generics here, use interfaces. In Java, you use interfaces to tell the compiler that a class implements certain methods, without telling it how these methods are implemented.
public interface Project {
int getAmount();
}
public class Stuff {
private Project p;
public Stuff(Project aProject) {
this.p = aProject;
}
public int getValue() {
return p.getAmount();
}
}
You can pass an instance of any class to Stuff's constructor, as long as it implements the Project interface:
public class Construction implements Project {
public int getAmount() {
// implementation
}
}
...
Stuff s = new Stuff(new Construction());
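Any other class works the same way; for example, a hypothetical Renovation class (the name is made up here):

    public class Renovation implements Project {
        public int getAmount() {
            return 2500; // example value
        }
    }

    Stuff s1 = new Stuff(new Construction());
    Stuff s2 = new Stuff(new Renovation());
    // both calls go through the Project interface
    System.out.println(s1.getValue());
    System.out.println(s2.getValue());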
You can use an interface, an abstract class or reflection. I would avoid using reflection unless you really need it. This looks like the perfect job for an interface.
public interface Ammount {
public int getAmmount();
}
public class BankAccount implements Ammount {
@Override
public int getAmmount() {
return -10; // broke
}
}
public class PiggyBank implements Ammount {
@Override
public int getAmmount() {
return 12; // rich
}
}
You can then use some code like
BankAccount myBankAccount = new BankAccount();
Ammount accountAmmount = myBankAccount;
accountAmmount.getAmmount();
PiggyBank myPiggyBank = new PiggyBank();
Ammount piggyAmmount = myPiggyBank;
piggyAmmount.getAmmount();
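Since both classes share the Ammount interface, you can also write logic once against the interface; a small sketch (the sumAmmounts helper is made up for the example):

    public static int sumAmmounts(Ammount... ammounts) {
        int total = 0;
        for (Ammount a : ammounts) {
            // works for BankAccount, PiggyBank, or any future implementation
            total += a.getAmmount();
        }
        return total;
    }

    // sumAmmounts(new BankAccount(), new PiggyBank()) returns -10 + 12 = 2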
I have played around with Scala for a while now, and I know that traits can act as the Scala equivalent of both interfaces and abstract classes. How exactly are traits compiled into Java bytecode?
I found some short explanations that stated traits are compiled exactly like Java interfaces when possible, and interfaces with an additional class otherwise. I still don't understand, however, how Scala achieves class linearization, a feature not available in Java.
Is there a good source explaining how traits compile to Java bytecode?
I'm not an expert, but here is my understanding:
Traits are compiled into an interface and corresponding class.
trait Foo {
def bar = { println("bar!") }
}
becomes the equivalent of...
public interface Foo {
public void bar();
}
public class Foo$class {
public static void bar(Foo self) { System.out.println("bar!"); }
}
Which leaves the question: How does the static bar method in Foo$class get called? This magic is done by the compiler in the class that the Foo trait is mixed into.
class Baz extends Foo
becomes something like...
public class Baz implements Foo {
public void bar() { Foo$class.bar(this); }
}
Class linearization just implements the appropriate version of the method (calling the static method in the Xxxx$class class) according to the linearization rules defined in the language specification.
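As a sketch (not actual compiler output): if another trait Foo2 extends Foo and overrides bar, then a class mixing in both sits closer to Foo2 in the linearization, so the generated forwarder targets Foo2's static implementation class:

    public class Baz2 implements Foo, Foo2 {
        // the linearization puts Foo2 before Foo, so its implementation wins
        public void bar() { Foo2$class.bar(this); }
    }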
For the sake of discussion, let's look the following Scala example using multiple traits with both abstract and concrete methods:
trait A {
def foo(i: Int) = ???
def abstractBar(i: Int): Int
}
trait B {
def baz(i: Int) = ???
}
class C extends A with B {
override def abstractBar(i: Int) = ???
}
At the moment (i.e. as of Scala 2.11), a single trait is encoded as:
an interface containing abstract declarations for all the trait's methods (both abstract and concrete)
an abstract static class containing static methods for all the trait's concrete methods, taking an extra parameter $this (in older versions of Scala, this class wasn't abstract, but it doesn't make sense to instantiate it)
at every point in the inheritance hierarchy where the trait is mixed in, synthetic forwarder methods for all the concrete methods in the trait that forward to the static methods of the static class
The primary advantage of this encoding is that a trait without concrete members (which is isomorphic to an interface) actually is compiled to an interface.
interface A {
int foo(int i);
int abstractBar(int i);
}
abstract class A$class {
static void $init$(A $this) {}
static int foo(A $this, int i) { return ???; }
}
interface B {
int baz(int i);
}
abstract class B$class {
static void $init$(B $this) {}
static int baz(B $this, int i) { return ???; }
}
class C implements A, B {
public C() {
A$class.$init$(this);
B$class.$init$(this);
}
@Override public int baz(int i) { return B$class.baz(this, i); }
@Override public int foo(int i) { return A$class.foo(this, i); }
@Override public int abstractBar(int i) { return ???; }
}
However, Scala 2.12 requires Java 8, and thus is able to use default methods and static methods in interfaces, and the result looks more like this:
interface A {
static void $init$(A $this) {}
static int foo$(A $this, int i) { return ???; }
default int foo(int i) { return A.foo$(this, i); };
int abstractBar(int i);
}
interface B {
static void $init$(B $this) {}
static int baz$(B $this, int i) { return ???; }
default int baz(int i) { return B.baz$(this, i); }
}
class C implements A, B {
public C() {
A.$init$(this);
B.$init$(this);
}
@Override public int abstractBar(int i) { return ???; }
}
As you can see, the old design with the static methods and forwarders has been retained, they are just folded into the interface. The trait's concrete methods have now been moved into the interface itself as static methods, the forwarder methods aren't synthesized in every class but defined once as default methods, and the static $init$ method (which represents the code in the trait body) has been moved into the interface as well, making the companion static class unnecessary.
It could probably be simplified like this:
interface A {
static void $init$(A $this) {}
default int foo(int i) { return ???; };
int abstractBar(int i);
}
interface B {
static void $init$(B $this) {}
default int baz(int i) { return ???; }
}
class C implements A, B {
public C() {
A.$init$(this);
B.$init$(this);
}
@Override public int abstractBar(int i) { return ???; }
}
I'm not sure why this wasn't done. At first glance, the current encoding might give us a bit of forwards compatibility: you could use traits compiled with a new compiler together with classes compiled by an old compiler, and those old classes would simply override the default forwarder methods they inherit from the interface with identical ones. Except that the forwarder methods would try to call the static methods on A$class and B$class, which no longer exist, so that hypothetical forwards compatibility doesn't actually work.
A very good explanation of this is in:
The busy Java developer's guide to Scala: Of traits and behaviors - Traits in the JVM
Quote:
In this case, it [the compiler] drops the method implementations and field declarations defined in the trait into the class that implements the trait
In the context of Scala 2.12 and Java 8, you can see another explanation in commit 8020cd6:
Better inliner support for 2.12 trait encoding
Some changes to the trait encoding came late in the 2.12 cycle, and the
inliner was not adapted to support it in the best possible way.
In 2.12.0 concrete trait methods are encoded as
interface T {
default int m() { return 1 }
static int m$(T $this) { <invokespecial $this.m()> }
}
class C implements T {
public int m() { return T.m$(this) }
}
If a trait method is selected for inlining, the 2.12.0 inliner would
copy its body into the static super accessor T.m$, and from there into
the mixin forwarder C.m.
This commit special-cases the inliner:
We don't inline into static super accessors and mixin forwarders.
Instead, when inlining an invocation of a mixin forwarder, the inliner also follows through the two forwarders and inlines the trait method body.
Suppose I have two interfaces:
public interface I1
{
default String getGreeting() {
return "Good Morning!";
}
}
public interface I2
{
default String getGreeting() {
return "Good Afternoon!";
}
}
If I want to implement both of them, what implementation will be used?
public class C1 implements I1, I2
{
public static void main(String[] args)
{
System.out.println(new C1().getGreeting());
}
}
This is a compile-time error. You cannot inherit two conflicting implementations of the same method from two interfaces.
However, it compiles if you override the getGreeting method in C1:
public class C1 implements I1, I2 // this will compile, because we have overridden getGreeting()
{
public static void main(String[] args)
{
System.out.println(new C1().getGreeting());
}
@Override public String getGreeting()
{
return "Good Evening!";
}
}
I just want to add that even if the method in I1 is abstract and default in I2, you still cannot inherit both without overriding the method. So this is also a compile-time error:
public interface I1
{
String getGreeting();
}
public interface I2
{
default String getGreeting() {
return "Good afternoon!";
}
}
public class C1 implements I1, I2 // won't compile
{
public static void main(String[] args)
{
System.out.println(new C1().getGreeting());
}
}
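The fix is the same as before: override the method in C1. Inside the override you can still delegate to I2's default implementation (a sketch; I1's version is abstract, so it is the only default available to delegate to):

    public class C1 implements I1, I2 {
        @Override
        public String getGreeting() {
            return I2.super.getGreeting();
        }

        public static void main(String[] args) {
            System.out.println(new C1().getGreeting()); // prints "Good afternoon!"
        }
    }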
This is not specific to the question, but I still think it adds some value to the context. As an addition to @toni77's answer, I would like to add that a default method can be invoked from an implementing class, as shown below. In the code below, the default method getGreeting() from interface I1 is invoked from an overridden method:
public interface I1 {
default String getGreeting() {
return "Good Morning!";
}
}
public interface I2 {
default String getGreeting() {
return "Good Night!";
}
}
public class C1 implements I1, I2 {
@Override
public String getGreeting() {
return I1.super.getGreeting();
}
}
If a class implements 2 interfaces that both have a Java 8 default method with the same signature (as in your example), the implementing class is obliged to override the method. The class can still access the default methods using I1.super.getGreeting(); it can access either, both, or neither. So the following would be a valid implementation of C1:
public class C1 implements I1, I2{
public static void main(String[] args)
{
System.out.println(new C1().getGreeting());
}
@Override //class is obliged to override this method
public String getGreeting() {
//can use both default methods
return I1.super.getGreeting()+I2.super.getGreeting();
}
public String useOne() {
//can use the default method within another method
return "One "+I1.super.getGreeting();
}
public String useTheOther() {
//can use the default method within another method
return "Two "+I2.super.getGreeting();
}
}
There is a case where this actually works according to the resolution rules. If one of the interfaces extends one of the others.
Using the example from above:
public interface I2 extends I1 {
default String getGreeting() {
return "Good Afternoon!";
}
}
The result would be:
Good Afternoon!
However, I believe this is going to be a big problem. The whole reason for default methods is to allow library developers to evolve APIs without breaking implementers.
Understandably, the compiler doesn't allow the conflicting methods to compile unless the interfaces are related by extension, because a library developer could otherwise hijack behavior.
However, this has the potential to be self-defeating. If a class implements two interfaces that are not related hierarchically, but both define the same default method signature, then that class will not compile (as demonstrated above).
It is conceivable that two different library developers could add default methods with common signatures at different times; in fact, it is probable that this will happen in libraries that implement similar concepts, such as math libraries. If you happen to be the sorry soul implementing both interfaces in the same class, you will be broken on update.
I believe the rule is that the class implementing the duplicate default methods must override the implementation. The following compiles and runs fine:
public class DupeDefaultInterfaceMethods {
interface FirstAbility {
public default boolean doSomething() {
return true;
}
}
interface SecondAbility {
public default boolean doSomething() {
return true;
}
}
class Dupe implements FirstAbility, SecondAbility {
@Override
public boolean doSomething() {
return false;
}
}
public static void main(String[] args) {
DupeDefaultInterfaceMethods ddif = new DupeDefaultInterfaceMethods();
Dupe dupe = ddif.new Dupe();
System.out.println(dupe.doSomething());
}
}
> false
This is the simple way:
public interface Circle{
default String shape() {
return "Circle drawn...";
}
}
public interface Rectangle{
default String shape() {
return "Rectangle drawn...";
}
}
public class Main implements Circle, Rectangle{
@Override
public String shape() {
return Circle.super.shape();// called using InterfaceName.super.methodName
}
}
Output:
Circle drawn...
Default methods in Java 8 can be viewed as a form of multiple inheritance (except that state, i.e. fields, cannot be inherited).
The main motivation behind default methods is that if at some point we need to add a method to an existing interface, we can add it without changing the existing implementation classes. In this way, the interface stays compatible with older versions. This is a cool feature. However, we should remember the motivation for using default methods and should keep the separation of interface and implementation; a small sketch of that evolution scenario follows the example below.
interface First {
// default method
default void show() {
System.out.println("Default method implementation of First interface.");
}
}
interface Second {
// Default method
default void show() {
System.out.println("Default method implementation of Second interface.");
}
}
// Implementation class code
public class Example implements First, Second {
// Overriding default show method
public void show() {
First.super.show();
Second.super.show();
}
public static void main(String args[]) {
Example e = new Example();
e.show();
}
}
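To illustrate the evolution motivation mentioned above, here is a small sketch (the Shape and Square names are made up): Square was written before describe() existed, yet it keeps compiling after the default method is added to the interface.

    interface Shape {
        double area();

        // added later; existing implementations keep compiling because a body is supplied
        default String describe() {
            return "A shape with area " + area();
        }
    }

    // written before describe() existed, but still valid afterwards
    class Square implements Shape {
        private final double side;
        Square(double side) { this.side = side; }
        public double area() { return side * side; }
    }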
I have a question regarding the best design pattern for code reuse when dealing with Java enums. Basically, what I'm trying to achieve is being able to define several enums that model static business collections (sets of constants), but I'd also like to share behavior between them, with minimal coding.
This is trivial to achieve with class inheritance from abstract classes but, since Java enums cannot be extended (they can only implement interfaces), this type of work is tedious and involves a lot of error prone copy/paste work (copying the code from enum to enum). Examples of "business logic" that should be shared among all enums includes converting from/to Strings, instance and logical comparison, etc.
My best shot right now is using helper classes in conjunction with business interfaces, but this only goes so far in reducing code complexity (as all enums still have to declare and invoke the helper classes). See example (just to clarify):
public enum MyEnum {
A, B, C;
// Just about any method fits the description - equals() is a mere example
public boolean equals(MyEnum that) {
return ObjectUtils.equals(this, that);
}
}
How do StackOverflowers deal with this "language feature"?
You can move the reusable logic to dedicated (non-enum) classes and then have the enums delegate to those classes. Here's an example:
[Side note: the inheritance of PlusTwo extends PlusOne is not recommended (because PlusTwo is not a PlusOne). It is here just to illustrate the point of being able to extend existing logic.]
public interface Logic {
public int calc(int n);
}
public static class PlusOne implements Logic {
public int calc(int n) { return n + 1; }
}
public static class PlusTwo extends PlusOne {
#Override
public int calc(int n) { return super.calc(n) + 1; }
}
public static enum X {
X1, X2;
public Logic logic;
public int doSomething() {
return logic.calc(10);
}
}
public static enum Y {
Y1, Y2;
public Logic logic;
public String doSomethingElse() {
return "Your result is '" + logic.calc(10) + "'";
}
}
public static void main(String[] args) {
// One time setup of your logic:
X.X1.logic = new PlusOne();
X.X2.logic = new PlusTwo();
Y.Y1.logic = new PlusOne();
Y.Y2.logic = new PlusTwo();
System.out.println(X.X1.doSomething());
System.out.println(X.X2.doSomething());
System.out.println(Y.Y1.doSomethingElse());
System.out.println(Y.Y2.doSomethingElse());
}
I would do the same, or combine the Enums into a super-enum.
With Java 8 this will be easier. You will be able to define a default implementation for interface methods and have the enum extend the interface.
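A sketch of what that looks like with Java 8 default methods (all names here are illustrative):

    public interface HasDisplayName {
        String name(); // every enum constant already provides this

        default String displayName() {
            // shared behavior: "DARK_RED" -> "Dark red"
            String n = name().toLowerCase().replace('_', ' ');
            return Character.toUpperCase(n.charAt(0)) + n.substring(1);
        }
    }

    public enum Color implements HasDisplayName { DARK_RED, LIGHT_BLUE }
    public enum Size implements HasDisplayName { EXTRA_LARGE, SMALL }

    // Color.DARK_RED.displayName() returns "Dark red"
    // Size.EXTRA_LARGE.displayName() returns "Extra large"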
I rarely find enums useful, except for representing finite states in which case they do not need behavior.
I would suggest refactoring enums that need behavior into classes with a Factory.
This might look a bit ugly, but generally can offer you the required functionality.
You can have interface
public interface MyEnumInterface<T extends Enum<T>> {
String getBusinessName();
T getEnum();
}
Implementation
public enum OneOfMyEnums implements MyEnumInterface<OneOfMyEnums>{
X, Y, Z;
@Override
public String getBusinessName() {
return "[OneOfMyEnums]" + name();
}
@Override
public OneOfMyEnums getEnum() {
return this;
}
}
And utility class instead of your parent class
public class MyEnumUtils {
public static <T extends Enum<T>> String doSomething(MyEnumInterface<T> e){
e.getBusinessName(); // can use MyEnumInterface methods
e.getEnum().name(); // can use Enum methods as well
return null;
}
}
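Usage would then look something like this (based on the classes above; note that doSomething returns null in this sketch):

    MyEnumInterface<OneOfMyEnums> e = OneOfMyEnums.Y;
    System.out.println(e.getBusinessName()); // prints "[OneOfMyEnums]Y"
    System.out.println(MyEnumUtils.doSomething(OneOfMyEnums.X)); // prints "null" in this sketch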