Java method with unknown parameter type

I'm new to Java.
There is repeating code in multiple files in a project I'm working on.
Object types can be Thing1, Thing2, Thing3 etc.
So the code looks like:
if (Thing1.getStuff() instanceof String) {
myObj.setString("Hello");
} else {
myObj.setString("World");
}
I want to extend myObj with a class and method to handle this, as such:
public class myObj extends DoStuff {...}
--
class DoStuff {
public String doThis(*WHAT_TYPE_TO_USE* input) {
    String value = input.myMethod(); // I need to call this method.
    return "String after some logic";
}
}
Which should allow me to run:
myObj.doThis("Something");
However, I can't specify input to be a specific type in the method as it could be Thing1, Thing2 etc. Also, Thing1 and Thing2 can't be dynamically imported, can they? How can I run myMethod (which exists in Thing1 and Thing2)?
Any advice is appreciated.

You need your Thing classes to implement a common interface such as
public interface Thing {
public String myMethod();
}
public class Thing1 implements Thing {
...
}
If they don't have a common supertype, then the two myMethod methods are unrelated. The fact that they have the same name is irrelevant to Java; they're distinct methods defined in distinct classes. You can access them with reflection shenanigans, but then you're giving up a lot of type safety (at that point, you would just take an Object and trust the user to provide a value of the correct type; it's ugly and messy and I don't recommend it).
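For completeness, a rough sketch of the reflection route just described, which this answer advises against; DoStuff and myMethod come from the question, everything else is illustrative:
import java.lang.reflect.Method;

class DoStuff {
    // Takes any Object and simply trusts that it has a public no-arg myMethod() returning String.
    public String doThis(Object input) throws ReflectiveOperationException {
        Method m = input.getClass().getMethod("myMethod");
        String value = (String) m.invoke(input);
        return "String after some logic: " + value;
    }
}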
If the classes in question are third-party classes (i.e. that you don't control) and don't implement a common interface, then you need the adapter pattern. Basically, you define a new class that does implement the interface and wraps an instance of the original.
public class Thing1Adapter implements Thing {
    private Thing1 impl;

    public Thing1Adapter(Thing1 impl) {
        this.impl = impl;
    }

    @Override
    public String myMethod() {
        return this.impl.myMethod();
    }
}
...
myObj.doThis(new Thing1Adapter(myThing1));
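To connect this back to the question, a minimal sketch of DoStuff written against the shared interface (the body is placeholder logic):
class DoStuff {
    // Accepts anything that implements Thing, so Thing1, Thing2, ... (or their adapters) all work.
    public String doThis(Thing input) {
        String value = input.myMethod(); // the call the question asks about
        return "String after some logic: " + value;
    }
}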

Related

Object Creation depending on Generic Type

I have the following generic interface:
public interface I<T> {
void method(T key);
}
which is implemented by two different classes (A and B).
public class A implements I<Integer> {
    @Override
    public void method(Integer key) {
        // do smth
    }
}
public class B implements I<String> {
    @Override
    public void method(String key) {
        // do smth
    }
}
Furthermore, there is a class MyClass where a new instance of A or B should be created depending on the type parameter T.
public class MyClass<T> {
public void f() {
I<T> object = //here is the problem
}
}
My question is the following:
Is it possible to achieve it without passing the object of T class?
Pass a Supplier.
class MyClass<T> {
public void f(Supplier<I<T>> supplier) {
I<T> object = supplier.get();
}
}
new MyClass<String>().f(B::new);
new MyClass<Integer>().f(A::new);
No. You have to have something concrete to disambiguate the instantiation. Remember, at runtime the generic bindings are gone (they are only syntactic sugar); if you doubt this, compile the same code with and without the generic hints and the output classes will be bytewise identical.
You basically have to have "some concrete reference to a type", either as presented by Igor above, or something else (Class.forName("ClassName"), ClassName.class, etc.), or dynamically build a class via java.lang.reflect.Proxy.
Igor's example just creates an anonymous factory as a lambda, but in the end he's still passing a reference to a class, wrapped in a factory method and bound as a lambda.
Now, something you could do if you want to pass the Class: change your binding to accept a Class reference (or something similar) and use it for instantiation. Then you can do something like
_pass_in_ref.newInstance();
or
_pass_in_ref::new
etc. (see the sketch below). Caveat emptor.
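A minimal sketch of that Class-based variant, assuming the I, A and B types from the question; the parameter name implClass is made up, and getDeclaredConstructor().newInstance() stands in for Class.newInstance(), which is deprecated since Java 9:
class MyClass<T> {
    // Accept a Class token for some I<T> implementation and instantiate it reflectively.
    public void f(Class<? extends I<T>> implClass) throws ReflectiveOperationException {
        I<T> object = implClass.getDeclaredConstructor().newInstance();
        // ... use object
    }
}

// usage (A and B need accessible no-arg constructors):
// new MyClass<Integer>().f(A.class);
// new MyClass<String>().f(B.class);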

Using an anonymous class to return values in Java

Consider the following code:
public interface MyClass {
public String getMyObject1();
public String getMyObject2();
}
public class MyClass1 implements MyClass {
private String myObject1;
private String myObject2;
public MyClass1(String myObject1, String myObject2) {
this.myObject1 = myObject1;
this.myObject2 = myObject2;
}
public String getMyObject1() {
return myObject1;
}
public String getMyObject2() {
return myObject2;
}
}
public interface MyClass2 extends MyClass {
public static MyClass2 newInstance(String myObject1, String myObject2) {
return new MyClass2() {
public String getMyObject1() {
return myObject1;
}
public String getMyObject2() {
return myObject2;
}
};
}
}
And I use them like
public static void func(MyClass m) {
m.getMyObject1();
m.getMyObject2();
}
func(new MyClass1(o1, o2));
func(MyClass2.newInstance(o1, o2));
I wonder how they differ. If I only need to read the values (i.e. use MyClass as a "struct" to pass values around), is the anonymous class the simpler approach?
Otherwise, what are the drawbacks?
One core rule of programming: try to not surprise your readers.
Your approach here of using a static factory method within an interface is very surprising (and believe me: I have seen a lot of Java code).
If anything, the more "common" way of handling such things is to create a companion class with a slightly similar name; you know, like java.lang.Objects, which carries some useful static helper methods for java.lang.Object.
And beyond that, there is already a class in Java that helps with arbitrary numbers of "named" values; it is called a Map!
Finally: there are some good arguments for "DTO"s (data transfer objects), but especially for "beginners", you should rather look into "real" OO designs based on the SOLID principles. In that sense: design real classes that exactly model your problem domain and provide helpful abstractions. A struct with an arbitrary number of members doesn't fall into either category.
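A rough illustration of the Map alternative mentioned above; the names are made up, and Map.of requires Java 9 or later:
import java.util.Map;

public class MapStructExample {
    public static void main(String[] args) {
        // Instead of a hand-rolled two-getter "struct", pass the named values in a read-only map.
        Map<String, String> values = Map.of("myObject1", "o1", "myObject2", "o2");
        System.out.println(values.get("myObject1")); // prints o1
    }
}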
The problem here is not the code necessarily but the design. I would be interested to know the real world use case you are trying to design here.
Surely there are limitations in the second approach: you cannot update the values at all once the object is created, since you only have a way to read the passed objects back.
Coming back to design:
An interface is supposed to describe an action which your class can perform if it implements that interface. In your case you are returning the values of two instance variables through the two methods of your interface, which is a kind of action, but it ignores the basic principle of encapsulation.
If your class defines/owns those instance variables, it should have the getters and setters for them; you should not need an interface for that. So ideally your interface should not be required at all: any other class that uses a MyClass1 object should use the getters and setters of MyClass1 directly.

Implicit object type in Java?

This isn't exactly the definition of implicit type conversion, but I'm curious how many standards I'm breaking with this one...
I'm creating an abstract class in Java that basically casts its variables depending on a string passed into the constructor.
For example:
public abstract class MyClass {
    Object that;

    public MyClass(String input) {
        if ("test1".equals(input)) {
            that = new Test1();
        } else {
            that = new Test2();
        }
    }

    public void doSomething() {
        if (that instanceof Test1) {
            // specific test1 method or variable
        } else if (that instanceof Test2) {
            // specific test2 method or variable
        } else {
            // something horrible happened
        }
    }
}
You see what I'm getting at? Now the problem I run into is that my compiler wants me to explicitly cast that into Test1 or Test2 in the doSomething method - which I understand, as the compiler won't assume that it's a certain object type even though the if statements pretty much guarantee the type.
I guess what I'm getting at is, is this a valid solution?
I have other classes that all basically do the same thing but use two different libraries depending on a simple difference and figure this class can help me easily track and make changes to all of those other objects.
You are right. This is a horrible way to achieve polymorphism in design. Have you considered using a factory? A strategy object? It sounds like what you are trying to achieve can be implemented in a more loosely-coupled way using a combination of these patterns (and perhaps others).
For the polymorphism of doSomething, for example:
interface Thing {
public void doThing();
}
class Test1 implements Thing {
public void doThing() {
// specific Test1 behavior
}
}
class Test2 implements Thing {
public void doThing() {
// specific Test2 behavior
}
}
class MyClass {
Thing _thing;
public void doSomething() {
_thing.doThing(); // a proper polymorphism will take care of the dispatch,
// effectively eliminating usage of `instanceof`
}
}
Of course, you need to unify the behaviors of Test1 and Test2 (and other concrete Thing classes, present and planned) under a set of common interface(s).
PS: This design is commonly known as Strategy Pattern.
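A hedged sketch of how MyClass might be given its Thing; constructor injection is one common choice here, and the names follow the answer's code:
class MyClass {
    private final Thing _thing;

    // The concrete strategy is injected from outside; MyClass itself never needs instanceof.
    MyClass(Thing thing) {
        _thing = thing;
    }

    public void doSomething() {
        _thing.doThing(); // dynamic dispatch selects the Test1 or Test2 behavior
    }
}

// usage:
// new MyClass(new Test1()).doSomething();
// new MyClass(new Test2()).doSomething();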
I would create a separate class file. So you would have something like this:
1. Your abstract "MyClass"
-> Within "MyClass", define an abstract method called doSomething; this forces its subclasses to provide the specific implementation of that method.
2. Test1 would be an implementation of MyClass, containing the implementation of the doSomething method.
3. Create a utility class that does the "instanceof" check; that check should not be in the constructor, it belongs in another class.
So in the end you would have three class files: an abstract class, an implementation of the abstract class, and a class that does the "instanceof" check. I know this sounds like a lot, but it's the proper way to design for what I think you are attempting to do. You should pick up a design patterns book; I think it would help you a lot with questions like these.
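A rough sketch of the three files this answer describes; the utility class name is made up for illustration:
// File 1: the abstract class with the abstract method.
public abstract class MyClass {
    public abstract void doSomething();
}

// File 2: a concrete implementation of the abstract method.
public class Test1 extends MyClass {
    @Override
    public void doSomething() {
        // Test1-specific behavior
    }
}

// File 3: a separate utility that owns the "instanceof" check instead of the constructor.
public final class TypeChecks {
    public static boolean isTest1(Object candidate) {
        return candidate instanceof Test1;
    }
}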
The Open-Closed principle would be better satisfied by moving the object creation outside of this class.
Consider changing the constructor to accept an object that implements an interface.
public class MyClass {
    private final ITest m_tester;
    public MyClass( ITest tester ) { m_tester = tester; }
    public void doSomething() { m_tester.doTest(); }
}
This makes it possible to change the behavior of the class (open to extension) without modifying its code (closed to modification).
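For instance, a sketch under the answer's assumptions (ITest and doTest come from the snippet above; LoggingTest is a made-up implementation):
public interface ITest {
    void doTest();
}

// New behavior is added by writing another ITest implementation,
// not by editing MyClass: closed to modification, open to extension.
public class LoggingTest implements ITest {
    @Override
    public void doTest() {
        System.out.println("running test");
    }
}

// usage:
// new MyClass(new LoggingTest()).doSomething();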
The better way to do this is to create an interface which will specify a set of methods that can be guaranteed to be called on the object.
Here's an example:
public interface TestInterface
{
void doTest();
}
Now you can write your classes to implement this interface. This means that you need to provide a full definition for all methods in the interface, in this case doTest().
public class Test implements TestInterface
{
public void doTest()
{
// do Test-specific stuff
}
}
public class Test1 implements TestInterface
{
public void doTest()
{
// do Test1-specific stuff
}
}
Looks really boring and pointless, right? Lots of extra work, I hear you say.
The true value comes in the calling code...
public abstract class MyObject
{
TestInterface that;
// [...]
public void doSomething()
{
that.doTest();
}
}
No if statements, no instanceof, no ugly blocks, nothing. That's all moved to the class definitions, in the common interface method(s) (again, here that is doTest()).

Why not abstract fields?

Why can't Java classes have abstract fields like they can with abstract methods?
For example: I have two classes that extend the same abstract base class. These two classes each have a method that is identical except for a String constant, which happens to be an error message, within them. If fields could be abstract, I could make this constant abstract and pull the method up into the base class. Instead, I have to create an abstract method, called getErrMsg() in this case, that returns the String, override this method in the two derived classes, and then I can pull up the method (which now calls the abstract method).
Why couldn't I just make the field abstract to begin with? Could Java have been designed to allow this?
You can do what you described by having a final field in your abstract class that is initialised in its constructor (untested code):
abstract class Base {
final String errMsg;
Base(String msg) {
errMsg = msg;
}
abstract String doSomething();
}
class Sub extends Base {
Sub() {
super("Sub message");
}
String doSomething() {
return errMsg + " from something";
}
}
If your child class "forgets" to initialise the final field through the super constructor, the compiler will give an error, just like when an abstract method is not implemented.
I see no point in that. You can move the function to the abstract class and just override some protected field. I don't know if this works with constants but the effect is the same:
public abstract class Abstract {
protected String errorMsg = "";
public String getErrMsg() {
return this.errorMsg;
}
}
public class Foo extends Abstract {
public Foo() {
this.errorMsg = "Foo";
}
}
public class Bar extends Abstract {
public Bar() {
this.errorMsg = "Bar";
}
}
So your point is that you want to enforce the implementation/overriding/whatever of errorMsg in the subclasses? I thought you just wanted to have the method in the base class and didn't know how to deal with the field then.
Obviously it could have been designed to allow this, but under the covers it'd still have to do dynamic dispatch, and hence a method call. Java's design (at least in the early days) was, to some extent, an attempt to be minimalist. That is, the designers tried to avoid adding new features if they could be easily simulated by other features already in the language.
Reading your title, I thought you were referring to abstract instance members; and I couldn't see much use for them. But abstract static members is another matter entirely.
I have often wished that I could declare a method like the following in Java:
public abstract class MyClass {
public static abstract MyClass createInstance();
// more stuff...
}
Basically, I would like to insist that concrete implementations of my parent class provide a static factory method with a specific signature. This would allow me to get a reference to a concrete class with Class.forName() and be certain that I could construct one in a convention of my choosing.
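A hedged sketch of how that convention can be approximated today with reflection; createInstance is the agreed-upon factory name from the answer, while FactoryLoader and its details are assumptions:
import java.lang.reflect.Method;

public final class FactoryLoader {
    // Loads a concrete subclass by name and invokes its static createInstance() factory.
    public static MyClass load(String className) throws ReflectiveOperationException {
        Class<?> concrete = Class.forName(className);
        Method factory = concrete.getMethod("createInstance");
        return (MyClass) factory.invoke(null); // null target because the method is static
    }
}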
Another option is to define the field as public (and final, if you like) in the base class, and then initialize it in the constructor of the base class depending on which subclass is currently being used. It's a bit shady in that it introduces a circular dependency. But at least it's not a dependency that can ever change; i.e., the subclass will either exist or not, but the subclass's methods or fields cannot influence the value of the field.
public abstract class Base {
public final int field;
public Base() {
if (this instanceof SubClassOne) {
field = 1;
} else if (this instanceof SubClassTwo) {
field = 2;
} else {
// assertion, thrown exception, set to -1, whatever you want to do
// to trigger an error
field = -1;
}
}
}

Practical side of the ability to define a class within an interface in Java?

What would be the practical side of the ability to define a class within an interface in Java:
interface IFoo
{
class Bar
{
void foobar ()
{
System.out.println("foobaring...");
}
}
}
I can think of another usage besides those linked by Eric P: defining a default/no-op implementation of the interface.
interface IEmployee
{
    void workHard();
    void procrastinate();

    class DefaultEmployee implements IEmployee
    {
        public void workHard() { procrastinate(); }
        public void procrastinate() {}
    }
}
Yet another sample: an implementation of the Null Object Pattern:
interface IFoo
{
    void doFoo();

    IFoo NULL_FOO = new NullFoo();

    final class NullFoo implements IFoo
    {
        public void doFoo() {}
        private NullFoo() {}
    }
}
...
IFoo foo = IFoo.NULL_FOO;
...
bar.addFooListener (foo);
...
I think this page explains one example pretty well. You would use it to tightly bind a certain type to an interface.
Shamelessly ripped off from the above link:
interface employee{
class Role{
public String rolename;
public int roleId;
}
Role getRole();
// other methods
}
In the above interface you are binding the Role type strongly to the employee interface (employee.Role).
One use (for better or worse) would be as a workaround for the fact that Java doesn't support static methods in interfaces.
interface Foo {
    int[] getData();

    class _ {
        static int sum(Foo foo) {
            int sum = 0;
            for (int i : foo.getData()) {
                sum += i;
            }
            return sum;
        }
    }
}
Then you'd call it with:
int sum = Foo._.sum(myFoo);
I can say without hesitation that I've never done that. I can't think of a reason why you would either. Classes nested within classes? Sure, lots of reasons to do that. In those cases I tend to consider those inner classes to be an implementation detail. Obviously an interface has no implementation details.
One place this idiom is used heavily is in XMLBeans. The purpose of that project is to take an XML Schema and generate a set of Java classes that you can use bidirectionally to work with XML documents corresponding to the schema. So, it lets you parse XML into xml beans or create the xml beans and output to xml.
In general, most of the xml schema types are mapped to a Java interface. That interface has within it a Factory that is used to generate instances of that interface in the default implementation:
public interface Foo extends XmlObject {
public boolean getBar();
public boolean isSetBar();
public void setBar(boolean bar);
public static final SchemaType type = ...
public static final class Factory {
public static Foo newInstance() {
return (Foo)XmlBeans.getContextTypeLoader().newInstance(Foo.type, null);
}
// other factory and parsing methods
}
}
When I first encountered this it seemed wrong to bind all this implementation gunk into the interface definition. However, I actually grew to like it as it let everything get defined in terms of interfaces but have a uniform way to get instances of the interface (as opposed to having another external factory / builder class).
I picked it up for classes where this made sense (particularly those where I had a great deal of control over the interface/impls) and found it to be fairly clean.
I guess you could define a class that is used as the return type or parameter type for methods within the interface. Doesn't seem particularly useful. You might as well just define the class separately. The only possible advantage is that it declares the class as "belonging" to the interface in some sense.
Google Web Toolkit uses such classes to bind 'normal' interface to asynchronous call interface:
public interface LoginService extends RemoteService {
/**
* Utility/Convenience class.
* Use LoginService.App.getInstance() to access static instance of LoginServiceAsync
*/
class App {
public static synchronized LoginServiceAsync getInstance() {
...
}
}
}
With a static class inside an interface you can shorten a common programming fragment: checking whether an object is an instance of an interface and, if so, calling a method of that interface. Look at this example:
public interface Printable {
    void print();

    public static class Caller {
        public static void print(Object mightBePrintable) {
            if (mightBePrintable instanceof Printable) {
                ((Printable) mightBePrintable).print();
            }
        }
    }
}
Now instead of doing this:
void genericPrintMethod(Object obj) {
if (obj instanceof Printable) {
((Printable) obj).print();
}
}
You can write:
void genericPrintMethod(Object obj) {
Printable.Caller.print(obj);
}
Doing this seems to have "Bad design decision" written all over it.
I would urge caution whenever it seems like a good idea to create a non-private nested class. You are almost certainly better off going straight for an outer class. But if you are going to create a public nested class, it doesn't seem any more strange to put it in an interface than a class. The abstractness of the outer class is not necessarily related to the abstractness of a nested class.
This approach can be used to define many classes in the same file. It has worked well for me in the past where I had many simple implementations of an interface. However, if I were to do this again, I would use an enum that implements the interface, which would have been a more elegant solution.
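A small sketch of that enum idea, with made-up names: several trivial implementations of one interface live in a single file as enum constants.
interface Greeting {
    String greet(String name);
}

// Each constant supplies one simple implementation of the interface.
enum Greetings implements Greeting {
    FORMAL {
        public String greet(String name) { return "Good day, " + name + "."; }
    },
    CASUAL {
        public String greet(String name) { return "Hey " + name + "!"; }
    }
}

// usage:
// String s = Greetings.CASUAL.greet("Sam");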
