How to determine if a method has been overridden? - Java

Consider the following scenario:
I am working on a framework that allows users to subclass a particular abstract class provided by the framework, which will allow them to insert their own functionality into the flow of the application.
For example a class like this:
public abstract class MyAbstractClass {
    protected String step1(String input) {
        // default implementation
        return input; // placeholder return
    }

    public final void process(MyObject object) {
        // some steps
        String someData = step1(object.getInput());
        // some more steps
    }
}
Now, in a newer version of the framework, I want to change the step1 method to use some object (which can be converted to the argument of the current step1 method) as a parameter. However, I do not want to break the code for the users who have already extended this class.
I am currently planning to do it this way:
public abstract class MyAbstractClass {
    private boolean isOldStep1Overridden = true;

    protected String step1(MyObject input) {
        String oldValue = step1(input.getInput());
        if (isOldStep1Overridden) {
            // the method has been overridden - some logic here to handle this (assume this logic is known)
            return oldValue;
        } else {
            // the method has not been overridden - ignore the old value and carry out the new default processing
            return null; // placeholder for the new default processing
        }
    }

    @Deprecated
    protected String step1(String input) {
        // default implementation
        isOldStep1Overridden = false;
        return input; // placeholder return
    }

    public final void process(MyObject object) {
        // some steps
        String someData = step1(object);
        // some more steps
    }
}
Are there any other ways to implement such a use case?
The following options have also been considered and put on hold:
Create a new abstract class for the new implementation - This would prevent existing users from getting the new features without modifying their code.
Use reflection - I would prefer not to, if possible
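For reference, the reflection-based check that last option refers to is typically a one-time lookup of where the deprecated overload is declared. A minimal sketch, assuming it lives inside MyAbstractClass and uses the method names from the example above:

private boolean isOldStep1Overridden() {
    // Walk from the runtime class up to (but not including) MyAbstractClass and
    // check whether any class on the way re-declares the old step1(String) overload.
    for (Class<?> c = getClass(); c != MyAbstractClass.class; c = c.getSuperclass()) {
        try {
            c.getDeclaredMethod("step1", String.class);
            return true; // a subclass declares its own step1(String), i.e. it is overridden
        } catch (NoSuchMethodException ignored) {
            // not declared here, keep looking further up the hierarchy
        }
    }
    return false;
}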
EDIT: Some more relevant background information:
The framework in question is actually an application which is deployed to a shared location, and users (developers) of this framework submit extensions to be installed in it. So the application itself could be upgraded without the developers of the extensions having made any changes to their code. To take an example of a similar system, consider Firefox and its add-ons: both are developed by separate teams, and Firefox can be updated without the add-on developers' knowledge.

Related

How to use the Java type system to create classes handling network messages, which determine their proper handler using a field in the message?

I have a lot of classes implementing a "common" interface called Setter.
public interface Setter {
Result set(Config config, int entityId);
enum Result {
HANDLED, HANDLING_ERROR, REJECTED
}
}
An example implementation looks like this; it sets a 'number' value in the world for a given 'entity', identified by its entityId:
public class NumberSetter implements Setter {
private World world;
private Assets assets;
public NumberSetter(World world, Assets assets) {
this.world = world;
this.assets = assets;
}
@Override
public Result set(Config config, int entityId) {
if (config instanceof NumberConfig numberConfig) {
world.passNumber(entityId, numberConfig.number);
return Result.HANDLED;
} else {
return Result.REJECTED;
}
}
}
Please note that the Config object is cast to the specific NumberConfig; otherwise the Setter implementation signals that it didn't handle the argument.
I am using a Set of these Setters in a network-enabled class, which tries to match a super-type Config object against one of the Setters from the Set. (The naming might be subject to change.) The code below handles a network package by passing it to all of the Setters in the Set and checks whether there were any errors or whether no Setter handled the package. If that check is true, the package wasn't handled properly and the Handler returns NOT_HANDLED, which later crashes the program because I'm still at the development stage.
public class ConfigNetworkHandler implements NetworkHandler {
private final Assets assets;
private final Set<Setter> setterSet;
public ConfigNetworkHandler(
Assets assets,
Set<Setter> setterSet
) {
this.assets = assets;
this.setterSet = setterSet;
}
@Override
public boolean handle(WebSocket webSocket, int worldEntity, Component component) {
var configId = ((ConfigId) component).getId();
var config = assets.getConfigs().get(configId);
var setterResults = setterSet.stream()
.map(setter -> setter.set(config, worldEntity))
.toList();
var anyErrors = setterResults.stream().anyMatch(HANDLING_ERROR::equals);
var wasHandled = setterResults.stream().anyMatch(HANDLED::equals);
if (anyErrors || !wasHandled) {
return NOT_HANDLED;
}
return FULLY_HANDLED;
}
}
I don't like how I am not using Java's type system properly. I don't know how to do it otherwise without manually providing a Map between ConfigIds and the Setters, which I would rather not do, because the ConfigIds aren't known at compile time. The NetworkHandler-type stuff is kind of similar, but there are far fewer of those and they will probably be refactored in a similar way, so it's not a practical issue.
I like the current solution because it lets me add and remove Setters without worrying about the other ones, and I don't need to change the implementation of ConfigNetworkHandler, because it is provided a Set. I don't like it because it requires traversing the list, doesn't seem "idiomatic" for Java, returns weird Results instead of simply not being called for types it doesn't accept, and it FEELS like there should be something else.
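For illustration, the Map idea mentioned above could also be keyed by the Config's runtime class rather than by ConfigId, so the keys stay known at compile time. A rough sketch under that assumption; supportedConfigType() is a hypothetical method that does not exist in the code above:

import java.util.HashMap;
import java.util.Map;

// Hypothetical variant: each setter reports the Config subtype it handles, and
// the handler looks the setter up by the config's runtime class instead of
// probing every setter in the Set.
interface TypedSetter<C extends Config> {
    Class<C> supportedConfigType();
    Setter.Result set(C config, int entityId);
}

class TypedSetterRegistry {
    private final Map<Class<?>, TypedSetter<?>> byType = new HashMap<>();

    <C extends Config> void register(TypedSetter<C> setter) {
        byType.put(setter.supportedConfigType(), setter);
    }

    @SuppressWarnings("unchecked")
    <C extends Config> Setter.Result dispatch(C config, int entityId) {
        TypedSetter<C> setter = (TypedSetter<C>) byType.get(config.getClass());
        return setter == null ? Setter.Result.REJECTED : setter.set(config, entityId);
    }
}

The trade-off is that registration happens per Config class, which keeps the traversal out of the hot path but gives up some of the "just add another Setter to the Set" convenience.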
Do you have an idea how to approach this differently?

Best approach to test several similar classes in spock test?

I have a lot of similar classes (actually different types of events with one parent class). There are about 30 classes already and the number will keep growing. Every class has its own processing logic, but there are several fields that exist in every class. I want to be sure that every event's flow takes care of the common fields. It becomes more complex as new event types and new flows get added. The best approach would be some dynamic test that checks that the common fields are processed. By 'dynamically' I mean the ability of the test to automatically discover new classes and put them into the test pack. We are using Spock, but it is not possible to dynamically generate the 'where' section of a test. I came up with a rather strange approach that does not work, but it illustrates my idea:
def "dynamic test"() {
given:
def classes = methodToGetListOfEventClass()
when:
for(Class clazz : classes) {
ParentEvent event = clazz.getDeclaredConstructor().newInstance() as ParentEvent
service.sendEvent(event)
}
}
then:
for(Class clazz : classes) {
ParentEvent event = clazz.getDeclaredConstructor().newInstance() as ParentEvent
1 * sendExternalEvent("someId", event.getClass().getName(), Collections.emptyMap())
//check common fields exists
}
}
}
}
So I just try to create an instance of every class, pass it into the event handler and check that the created external event has all common fields set. It looks ugly and does not work. Are there any suggestions on how to implement such a dynamic test?
You can use dynamic data pipes. Here is a simple example, based on your pseudo code and the limited information you provided. Because you did not say if you use Spock 1.3 or 2.x, I made sure that the example works on 1.3, too.
Given a situation as follows (all Groovy code, but the classes under test can be Java ones, too):
interface Event {
void init()
void sendExternalEvent(String id, String className, Map options)
}
class Service {
void sendEvent(Event event) {
event.sendExternalEvent("123", event.class.name, [:])
}
}
abstract class BaseEvent implements Event {
private static final Random random = new Random()
private static final String alphabet = (('A'..'Z') + ('0'..'9')).join()
protected int id
protected String name
@Override
void init() {
id = 1 + random.nextInt(100)
name = (1..10).collect { alphabet[random.nextInt(alphabet.length())] }.join()
}
}
class FirstEvent extends BaseEvent {
@Override
void sendExternalEvent(String id, String className, Map options) {}
String doFirst() { "first" }
}
class SecondEvent extends BaseEvent {
@Override
void sendExternalEvent(String id, String className, Map options) {}
String doSecond() { "second" }
}
class ThirdEvent extends BaseEvent {
@Override
void sendExternalEvent(String id, String className, Map options) {}
int doThird() { 3 }
}
You can implement your dynamic test for BaseEvent subclasses like this:
import spock.lang.Specification
import spock.lang.Unroll
class DynamicBaseClassTest extends Specification {
#Unroll("verify #className")
def "basic event class functionality"() {
given:
def service = new Service()
def event = Spy(baseEventClass.getConstructor().newInstance())
when:
event.init()
then:
// '.id' and '.name' should be enough, but on Spock 2.1 there is a problem
// when not explicitly using the '@' notation for direct field access.
event.@id > 0
event.@name.length() == 10
when:
service.sendEvent(event)
then:
1 * event.sendExternalEvent(_, event.class.name, [:])
where:
baseEventClass << getEventClasses()
className = baseEventClass.simpleName
}
static List<Class<? extends BaseEvent>> getEventClasses() {
[FirstEvent, SecondEvent, ThirdEvent]
}
}
Try it in the Groovy web console.
The notable things are:
where:
baseEventClass << getEventClasses()
The data pipe is declared to call a data provider method, just like in your example. What getEventClasses() does is totally up to you: return a fixed list, scan the classpath, or whatever.
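If you do want to discover the classes instead of listing them by hand, one common option is the third-party Reflections library (org.reflections). A minimal sketch, written as plain Java and assuming the events live under a package such as com.example.events (both the library choice and the package name are assumptions):

import java.lang.reflect.Modifier;
import java.util.List;
import java.util.stream.Collectors;
import org.reflections.Reflections;

// Sketch only: discovers concrete BaseEvent subclasses on the classpath.
static List<Class<? extends BaseEvent>> getEventClasses() {
    return new Reflections("com.example.events").getSubTypesOf(BaseEvent.class).stream()
            .filter(c -> !Modifier.isAbstract(c.getModifiers()))
            .collect(Collectors.toList());
}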
def event = Spy(baseEventClass.getConstructor().newInstance())
The spy is necessary to have both the real behaviour for the class under test - you do not want to mock it, of course - and the ability to verify interactions on it later:
then:
1 * event.sendExternalEvent(_, event.class.name, [:])
BTW, if you are unfamiliar with @Unroll, it makes the spec look like this in an IDE or a test report:

Iterate Fields of Fields continuously with reflection

Please avoid answers that are Kotlin-only or that require an Android API level higher than 21.
I'm trying to build an API parser that makes use of class hierarchy logic to represent the API hierarchy itself. With this structure I am able to parse the API in an uncomplicated fashion and I was able to achieve this already, but I'd like to improve it further.
I'll begin explaining what I already have implemented.
This is an example URL that my app will receive via GET, parse and dispatch internally:
http://www.example.com/news/article/1105
In the app the base domain is irrelevant, but what comes after is the API structure.
In this case we have a mixture of commands and variables:
news (command)
article (command)
1105 (variable)
To establish what is a command and what is a variable I built the following class structures:
public class API {
    public static final class News extends AbstractNews {}
}
public class AbstractNews {
    public static final class Article extends AbstractArticle {}
}
public class AbstractArticle {
    public static void GET(String articleId) {
        // ...
    }
}
I then split the URL and iterate through each part, matching each command against the classes (and nested classes), starting from the API class. Until I reach the end of the split URL, any parts that fail to match are stored in a separate list as variables.
The process is as follows for the example provided above:
Split the URL on each forward slash (ignoring the base domain):
/news/article/1105
List<String> stringList = Arrays.asList(
        "news",
        "article",
        "1105"
);
Iterate over each item in the split list and match it against the API class structure (the following is just a sample; it is not 100% of what I currently have implemented):
List<String> variableList = new ArrayList<>();
Class<?> lastClass = API.class;
for (String stringItem : stringList) {
    Class<?> matchedClass = classHasSubClass(lastClass, stringItem);
    if (matchedClass != null) {
        lastClass = matchedClass;
        continue;
    }
    variableList.add(stringItem);
}
Once the end of the list is reached, I check whether the last class contains the request method (in this case GET) and invoke it with the variable list.
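That final step is not shown in the question; as a rough sketch (assuming a static GET method and exactly one collected variable, as in the example above), it could look something like this:

// Sketch only: look up the static GET(String) method on the last matched class
// and invoke it with the collected variable (uses java.lang.reflect.Method).
Method getMethod = lastClass.getMethod("GET", String.class);
getMethod.invoke(null, variableList.get(0)); // null target because GET is static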
Like I said before, this is working perfectly fine, but it leaves every class directly exposed; as a result, they can be accessed directly and incorrectly by anyone else working on the project, so I am trying to make the hierarchy more contained.
I want to keep the ability to access the methods via hierarchy as well, so the following can still be possible:
API.News.Article.GET(42334);
While at the same time I don't want it to be possible to do the following as well:
AbstractArticle.GET(42334);
I have tried making each nested class into an instance field instead:
public class API {
// this one is static on purpose to avoid having to instantiate
// the API class before accessing its fields
public static final AbstractNews News = new AbstractNews();
}
public class AbstractNews {
public final AbstractArticle Article = new AbstractArticle();
}
public class AbstractArticle {
public void GET(String articleId) {
// ...
}
}
This works well for the two points I wanted to achieve before; however, I am not able to find a way to iterate the class fields that allows me to invoke the final methods correctly.
For the previous logic all I needed to iterate was the following:
private static Class classHasSubClass(Class<?> currentClass, String fieldName) {
Class[] classes;
classes = currentClass.getClasses();
for (final Class classItem : classes) {
if (classItem.getSimpleName().toLowerCase().equals(fieldName)) {
return classItem;
}
}
return null;
}
But for the second logic attempt with fields I was not able to invoke the final method correctly, probably because the resulting logic was in fact trying to do the following:
AbstractArticle.GET(42334);
Instead of
API.News.Article.GET(42334);
I suspect it is because the first parameter of the invoke method can no longer be null like I was doing before and has to be the correct equivalent of API.News.Article.GET(42334);
Is there a way to make this work or is there a better/different way of doing this?
I discovered that I was on the right path with the instance fields, but was missing part of the necessary information to invoke the method correctly at the end.
When iterating the fields I was only using the Class of each field, which was working perfectly fine before with the static class references since those weren't instances, but now it requires the instance of the field in order to work correctly.
In the end the iterating method used in place of classHasSubClass that got this to work is as follows:
private static Object getFieldClass(Class<?> currentClass, Object currentObject, final String fieldName) {
Field[] fieldList;
fieldList = currentClass.getDeclaredFields();
for (final Field field : fieldList) {
if (field.getName().toLowerCase().equals(fieldName)) {
try {
return field.get(currentObject);
} catch (IllegalAccessException e) {
e.printStackTrace();
break;
}
}
}
return null;
}
With this I always keep an object reference to the instance whose method I ultimately want to invoke, so I can pass it as the first parameter (someMethod.invoke(objectInstance)) instead of null.
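Put together, the walk over the URL parts might look roughly like this (a sketch based on the snippets above; it assumes GET takes a single String, as in the example, and that only API's field is static):

// Sketch: walk the instance fields starting from the API class, collect unmatched
// parts as variables, then invoke GET on the last matched field instance.
Object current = null;              // owner instance for the next field lookup (null works for API's static field)
Class<?> currentClass = API.class;
List<String> variables = new ArrayList<>();

for (String part : stringList) {
    Object next = getFieldClass(currentClass, current, part);
    if (next != null) {
        current = next;
        currentClass = next.getClass();
    } else {
        variables.add(part);
    }
}

Method get = currentClass.getMethod("GET", String.class);
get.invoke(current, variables.get(0)); // invoked on the field instance, not on null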

Force a user of my library to implement an interface or extend an abstract class

I'm developing an Android library (.aar) and I was wondering if it is possible to, as the title suggests, force a user to implement an interface or extend an abstract class of my library.
I already know that I could just go with a class like this in my library:
public class MyLibrary
{
public interface VariablesInterface
{
void createVariables();
}
private static VariablesInterface vi = null;
public void setVariablesInterface(VariablesInterface v)
{
vi = v;
}
private static void SomeWork()
{
if (vi == null)
{
throw new RuntimeException("You noob.");
}
else
{
// do work
}
}
}
The library will work "alone" at some point, and when it will come to SomeWork(), if the interface isn't implemented it will crash, but this could only be seen at runtime.
Is there a way to have this behaviour when compiling the user's application ?
The goal is to avoid the user forgetting that he have to implement this without having to write it in the documentation and hope the user will probably read it.
Thanks for reading !
EDIT
I think that this question need some enhancement and background.
The purpose of the library is to provide classes that create variables which manage preferences, e.g.:
public class VarPreferenceBoolean extends VarPreference
{
private boolean defaultValue;
public VarPreferenceBoolean(String key, boolean defaultValue)
{
super(key, true);
this.defaultValue = defaultValue;
}
public void setValue(Context context, boolean value)
{
SharedPreferences.Editor e = context.getSharedPreferences(PropertiesManager.preferenceFileName, Context.MODE_PRIVATE).edit();
e.putBoolean(key, value);
e.commit();
}
public boolean getValue(Context context)
{
readPropFile(context);
SharedPreferences sp = context.getSharedPreferences(PropertiesManager.preferenceFileName, Context.MODE_PRIVATE);
return sp.getBoolean(key, defaultValue);
}
}
The same goes for int, string and so on.
In the super class, I add each VarPreference to a List to keep the library aware of all the available variables.
Note the readPropFile inside the getter.
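For context, that list-based registration in the super class presumably looks roughly like the sketch below. The field names and the meaning of the second constructor argument are assumptions, since the VarPreference base class is not shown in the question:

import java.util.ArrayList;
import java.util.List;

// Sketch only: a static registry in the VarPreference super class, as described
// in the text above. Names are assumed; the real class is not shown.
public abstract class VarPreference {
    private static final List<VarPreference> ALL_PREFERENCES = new ArrayList<>();

    protected final String key;

    protected VarPreference(String key, boolean flag) { // meaning of 'flag' not shown in the question
        this.key = key;
        ALL_PREFERENCES.add(this); // keeps the library aware of every variable created
    }

    public static List<VarPreference> allPreferences() {
        return ALL_PREFERENCES;
    }
}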
Then, the user uses the library in his project like this:
public class Constants
{
public static final VarPreferenceInt FILETYPE;
public static final VarPreferenceInt DATAMODE;
public static final VarPreferenceString URL_ONLINE;
public static final VarPreferenceBoolean UPDATING;
public static final VarPreferenceLong LAST_UPDATE;
static
{
FILETYPE = new VarPreferenceInt("FileType", MyFile.FileType.LOCAL.getValue());
DATAMODE = new VarPreferenceInt("DataMode", DataProvider.DataMode.OFFLINE.getValue());
URL_ONLINE = new VarPreferenceString("UrlOnline", "http://pouetpouet.fr");
UPDATING = new VarPreferenceBoolean("Updating", false);
LAST_UPDATE = new VarPreferenceLong("LastUpdate", 0L);
}
}
Now, when the user calls an accessor, readPropFile will first check whether a .properties file exists and, if it finds matches between the list of VarPreferences and the properties in the file, modify the preferences accordingly. Then it will delete the file, and the accessor will return the value.
This is what exists today.
Now we want another application (let's say Pilot) to be able to get the VarPreferences of the user's application (let's say Client). Both include the library.
Pilot sends an Intent asking for the VarPreference list of Client, putting the package name of Client in an extra.
The library receives the intent, verifies the package name, and if it is Client, it sends back the list.
The problem is that if Client hasn't started, no VarPreference exists, and the list is empty.
I need to force the user to create his VarPreferences in a method that my library knows about, so that I can call it whenever I want and create the user's VarPreferences when necessary.
Hope this is clearer!
EDIT
I rethought all of this with a colleague, and it just hit us that the whole approach was flawed.
I didn't explain it well, and even though I mentioned it, I didn't take enough account of this: everything needs to be done from the library.
So even if I give an interface to the library, the application would have to run and perform this assignment first in order to let the library work on its own.
We are heading towards introspection now.
(This is the goal, it may not be possible...)
There will be an abstract class inside the library, with an abstract method where the user will place all of the VarPreference creations. The user will have to extend this class and call the method in order to create his VarPreferences.
In the library, a method will search by introspection for a child of the abstract class, create an instance of that child and call the method that creates the VarPreferences.
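On Android there is no cheap way to scan for subclasses at runtime, so the lookup usually boils down to resolving a fully-qualified class name that the user declares somewhere the library can read (for example a <meta-data> entry in the manifest). A rough sketch under that assumption; VarPreferenceInitializer, createVariables() and initVarPreferences() are hypothetical names:

// Sketch only: the library resolves a user-provided subclass by name and calls
// its initialisation method, so it can create the VarPreferences on demand.
public abstract class VarPreferenceInitializer {
    public abstract void createVariables();
}

// Somewhere inside the library, with the class name read from e.g. manifest meta-data:
static void initVarPreferences(String initializerClassName) throws Exception {
    Class<? extends VarPreferenceInitializer> clazz =
            Class.forName(initializerClassName).asSubclass(VarPreferenceInitializer.class);
    VarPreferenceInitializer initializer = clazz.getDeclaredConstructor().newInstance();
    initializer.createVariables(); // the user's code creates its VarPreferences here
}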
I would leave the abstract classes and interfaces in the main library and load the rest of your code via classloader from another. JDBC works like this.
Is there a way to have this behaviour when compiling the user's application?
I see no way to force a compilation failure. However, if you force them to supply a VariablesInterface in the constructor, then it will fail immediately. Make the VariablesInterface field final and only initialize it in the constructor:
public class MyLibrary {
private final VariablesInterface vi;
public MyLibrary(VariablesInterface vi) {
if (vi == null) {
throw new IllegalArgumentException("vi can't be null");
}
this.vi = vi;
}
...
If you can't change the constructor, then you can also add to any of the SomeWork-style public methods some sort of configuration check to make sure the vi wiring has been done properly, but this requires careful programming to make sure all public methods are covered.
public void somePublicMethod() {
checkWiring();
...
}
private void checkWiring() {
if (vi == null) {
throw new IllegalStateException("vi needs to be specified");
}
}

ByteBuddy: How to implement field access interceptor?

I'm trying to make an OGM to translate an object to a Vertex for OrientDB. Currently I'm using CGLib, but I read that ByteBuddy could implement two critical things that, if they work, will improve the OGM's speed.
Could ByteBuddy implement field access control? I read the docs but it's not clear, or I do not understand it.
Dynamically add a default empty constructor.
The current problem is this: we do not know the class definition that will be passed as a parameter. The idea is to redefine the class and implement the empty constructor if it does not have one, add a field named __BB__Dirty to mark the object as dirty if an assignment operation is detected, and force the implementation of an interface to talk with the object.
Example:
A generic class:
public class Example {
int i = 0;
String stringField;
public Example(String s) {
stringField = s;
}
public void addToI(){
i++;
}
}
Now we have an interface like this:
public interface DirtyCheck {
public boolean isDirty();
}
So, I want to force the Example class to implement the interface, the method isDirty(), a field to work on and a default contructor so the class should be translated to:
public class Example implements DirtyCheck {
int i = 0;
String stringField;
boolean __BB__dirty = false;
public Example() {
}
public Example(String s) {
stringField = s;
}
public void addToI(){
i++;
}
public boolean isDirty() {
return this.__BB__dirty;
}
}
and some magic assignment interception so that if any field (except __BB__dirty) is modified, the __BB__dirty field is set to true.
I have tried the first part of this but I failed :(
...
ByteBuddyAgent.install();
Example ex = new ByteBuddy()
.redefine(Example.class)
.defineField("__BB__Dirty", boolean.class, Visibility.PUBLIC)
.make()
.load(Example.class.getClassLoader(), ClassReloadingStrategy.fromInstalledAgent())
.getLoaded().newInstance();
....
ex.addToI(); // <--- this should set __BB__dirty to true since it
             //      assigns a value to i
But i get this error:
Exception in thread "main" java.lang.UnsupportedOperationException: class redefinition failed: attempted to change the schema (add/remove fields)
at sun.instrument.InstrumentationImpl.redefineClasses0(Native Method)
at sun.instrument.InstrumentationImpl.redefineClasses(InstrumentationImpl.java:170)
at net.bytebuddy.dynamic.loading.ClassReloadingStrategy$Strategy$1.apply(ClassReloadingStrategy.java:297)
at net.bytebuddy.dynamic.loading.ClassReloadingStrategy.load(ClassReloadingStrategy.java:173)
at net.bytebuddy.dynamic.DynamicType$Default$Unloaded.load(DynamicType.java:4350)
at Test.TestBB.<init>(TestBB.java:33)
at Test.TestBB.main(TestBB.java:23)
I'm stuck at the very first stage of solving the problem with ByteBuddy.
Thanks
The Java virtual machine does not support changing the layout of classes that are already loaded when redefining a class. This is not a limitation of Byte Buddy but of the VM implementation.
In order to do what you want, you should look at the AgentBuilder API, which allows you to modify classes before they are loaded. Creating an agent does, however, require you to add it explicitly as an agent on startup (as opposed to just adding the library to the class path).
You can implement the interface by calling:
.implement(DirtyCheck.class).intercept(FieldAccessor.ofField("__dirty__"));
You can also add a default constructor by simply defining one:
.defineConstructor(Visibility.PUBLIC).intercept(SuperMethodCall.INSTANCE)
The latter definition requires the super class to define a default constructor.
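Putting the pieces together, an agent-based version could look roughly like the sketch below. This is only an illustration: it uses the question's field name __BB__dirty, assumes Example sits in the default package, and the exact transformer lambda signature varies between Byte Buddy versions:

import static net.bytebuddy.matcher.ElementMatchers.named;

import java.lang.instrument.Instrumentation;
import net.bytebuddy.agent.builder.AgentBuilder;
import net.bytebuddy.description.modifier.Visibility;
import net.bytebuddy.implementation.FieldAccessor;
import net.bytebuddy.implementation.SuperMethodCall;

// Sketch only: transforms Example before it is loaded, adding the dirty flag,
// the DirtyCheck interface backed by that field, and a default constructor.
public class DirtyTrackingAgent {
    public static void premain(String arguments, Instrumentation instrumentation) {
        new AgentBuilder.Default()
                .type(named("Example"))
                .transform((builder, typeDescription, classLoader, module) -> builder
                        .defineField("__BB__dirty", boolean.class, Visibility.PUBLIC)
                        .implement(DirtyCheck.class).intercept(FieldAccessor.ofField("__BB__dirty"))
                        .defineConstructor(Visibility.PUBLIC).intercept(SuperMethodCall.INSTANCE))
                .installOn(instrumentation);
    }
}

Tracking the actual field writes (flipping __BB__dirty on every assignment) is a separate problem and is not covered by this sketch.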
