Sharing Realm fields on Android - java

Realm on Android doesn't support model inheritance/polymorphism.
So is there any way to share fields on Android? We have 5 models that all share the same synchronization-related fields and code. We use inheritance in our current SQLite models; if we switch to Realm, is our only choice to duplicate the sync fields across each of the 5 model classes?
As a workaround, I'm thinking about having those classes implement a Syncable interface with getters and setters for the shared fields, which at least lets me share sync functionality. Is there a better way?
To share sync functionality, my best guess is to make a static Synchronizer class and pass it Syncable model objects. Synchronizer methods will use the Syncable interface to operate directly on model objects' shared, sync-related fields and delegate operations on type-specific fields. Alternatively, I could provide each model object its own Synchronizer instance...
Trying to find the right way to work around the inheritance limitation is really stretching my OOP skills... help is appreciated!
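The Syncable/Synchronizer idea from the question could be sketched roughly like this. Invoice, the syncId/dirty fields, and markSynced are hypothetical names, and the RealmObject base class is omitted so the sketch stays self-contained:

```java
// Hypothetical interface exposing the shared sync-related fields.
interface Syncable {
    String getSyncId();
    void setSyncId(String syncId);
    boolean isDirty();
    void setDirty(boolean dirty);
}

// Static helper that operates on any Syncable through the interface,
// so the sync logic is written once for all five models.
final class Synchronizer {
    private Synchronizer() {}

    static void markSynced(Syncable model) {
        model.setDirty(false);
    }
}

// One of the five models: the shared fields are still duplicated per class,
// but the sync behavior is shared. In real code this would also extend RealmObject.
class Invoice implements Syncable {
    private String syncId;
    private boolean dirty = true;

    public String getSyncId() { return syncId; }
    public void setSyncId(String syncId) { this.syncId = syncId; }
    public boolean isDirty() { return dirty; }
    public void setDirty(boolean dirty) { this.dirty = dirty; }
}
```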

I had the same issue when I found out that Realm model objects must inherit directly from the RealmObject class (no support for further inheritance).
To get back the benefits of polymorphism, I considered a solution similar to yours, combined with some composition tricks to avoid duplicating attributes.
"Talk is cheap. Show me the code."
Code examples
interface IPerson {
    String getName();
}

class Person extends RealmObject implements IPerson {
    String name;

    @Override
    public String getName() {
        return name;
    }
}
interface IWorker extends IPerson {
    int getSalary();
}

class Worker extends RealmObject implements IWorker {
    Person person;
    int salary;

    @Override
    public String getName() {
        return person.getName();
    }

    @Override
    public int getSalary() {
        return salary;
    }

    public Person getPerson() {
        return person;
    }
}
Some benefits
You won't have to duplicate your attributes in each extending class.
Polymorphism is back! For example, now you can simulate a cast (with getPerson() in this example).
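As a rough self-contained sketch of that benefit (the RealmObject base class is omitted so the snippet compiles on its own, and constructors are added for brevity):

```java
// Simplified copies of the classes above, minus the RealmObject base class.
interface IPerson {
    String getName();
}

class Person implements IPerson {
    String name;

    Person(String name) { this.name = name; }

    public String getName() { return name; }
}

class Worker implements IPerson {
    Person person; // composition instead of inheritance
    int salary;

    Worker(Person person, int salary) {
        this.person = person;
        this.salary = salary;
    }

    public String getName() { return person.getName(); }
    public int getSalary() { return salary; }
    public Person getPerson() { return person; } // the "simulated cast"
}
```

Both a Person and a Worker can now be handled through IPerson, and getPerson() recovers the Person view of a Worker.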
Some limits
When using a serialization library that relies on reflection (say, Gson), your serialized models will have their parent attributes embedded as a nested object. That's not something you would have had with classic inheritance.
Example with JSON
Let's suppose John Doe is making $500 a month (he's a Worker, and therefore a Person, right?).
With classic inheritance, John Doe would look like this:
{
    "name": "John Doe",
    "salary": 500
}
But with this inheritance workaround, he looks like this:
{
    "person": {
        "name": "John Doe"
    },
    "salary": 500
}
Hope this helps!
Note
Primary keys unfortunately still have to be duplicated in each class.
Bonus
You might want to check RealmFieldNamesHelper, a library made by Christian Melchior "to make Realm queries more type safe".

If you use Kotlin, sharing the fields via an interface becomes even more trivial:
interface PersonBase {
var name: String?
var salary: Int
}
Then
class Person : RealmObject(), PersonBase {
    override var name: String? = null
    override var salary: Int = 0
}

Related

@JsonView: different perspectives

I'm coding a Spring web service that uses Jackson by default. I'm using @JsonView to indicate which properties should be serialized into my JSON output. So, the problem is: many objects are used in different classes, but not with exactly all of their properties, for example:
class Professor {
    @JsonView({Views.Public.class, Views.Internal.class})
    private int id;

    @JsonView(Views.Internal.class)
    private String name;
    ...
}

class Classroom {
    @JsonView({Views.Public.class, Views.Internal.class})
    private int id;

    @JsonView(Views.Internal.class)
    private String name;
    ...
}

class Lecture {
    @JsonView(Views.Public.class)
    private Professor professor;

    @JsonView(Views.Public.class)
    private Classroom classroom;
    ...
}
What if I need more than two 'perspectives'? Would I have to create more interfaces/classes to do that (like Views.Professor, Views.Principal, ...)? Is this really good practice?
I'd like to hear some suggestions or alternatives to solve that. I'm a little bit confused about being on the right track.
Generic names
You can always define more views if you need more perspectives; that's the idea behind Jackson's JSON Views, and it's what makes them flexible.
If you use generic names in your views classes, such as Basic, Extended, Public, Private and so on, you'll find it easier to reuse them across multiple beans.
Inheritance
You can always rely on inheritance with @JsonView. Consider the following example, where Views.Private extends Views.Public:
public class Views {
    interface Public {}
    interface Private extends Public {}
}
Serialization of properties annotated with #JsonView(Views.Private.class) will also include properties annotated with #JsonView(Views.Public.class).
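The mechanism behind this is assignability: a property declared for a given view is included when that view class is a supertype of (assignable from) the view requested at serialization time. The subtype relationship can be checked with plain Java, no Jackson required (ViewCheck and activeFor are hypothetical names for illustration):

```java
// The view hierarchy from the example above.
class Views {
    interface Public {}
    interface Private extends Public {}
}

class ViewCheck {
    // A property declared for declaredView is active when serializing with
    // activeView if declaredView is a supertype of activeView.
    static boolean activeFor(Class<?> declaredView, Class<?> activeView) {
        return declaredView.isAssignableFrom(activeView);
    }
}
```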

Instantiate entity class using access modifiers in Java

Yesterday I asked about creating an instance of a Java entity object as an instance variable: https://stackoverflow.com/questions/42239761/declare-entity-in-java-as-private
I'm not satisfied with the answers I got, and now I want to clarify my question.
I have an entity called User.java:
@Entity(naming = NamingType.SNAKE_LOWER_CASE)
public class User {
    private String name;

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}
And I have a class called UserImpl.java, where I did this:
public class UserImpl implements UserLogic {
    private User userEntity = new User(); // ------> Bad practice?
    /** Methods **/
}
As you can see I declared an instance for User entity in UserImpl class. Is that a bad practice at all?
Any answers will be much appreciated.
The simple act of creating a private instance of one class in another class is not a bad practice at all. In fact, it is very common; Java would be pointless if you couldn't do this.
I feel like there is still some missing information here. Depending on the function of the "UserImpl" class, this might be a bad practice. But this would entirely depend on how you go about implementing the "UserImpl" class.
For example, if you were planning on creating a variety of methods in the "UserImpl" class that would all center around changing "userEntity," then that might be a bad idea because all of those methods could instead go into the "User" class itself, which would be much simpler and more intuitive.
tl;dr: This isn't bad practice as it stands, but depending on the purpose of your "UserImpl" class, it could turn out to be.
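To illustrate the distinction (all names here are hypothetical), behavior that only touches the entity's own state can live on the entity itself, while a wrapper class earns its keep when it does more than forward calls:

```java
// The entity owns behavior about its own fields.
class User {
    private String name;

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    // Only needs User's own state, so it belongs here, not in a wrapper.
    public boolean hasName() { return name != null && !name.isEmpty(); }
}

// A service-style class is justified when it coordinates more than the entity,
// e.g. validation, auditing, or persistence around it.
class UserService {
    private final User user = new User(); // composing an entity is fine

    public void rename(String newName) {
        if (newName == null || newName.isEmpty()) {
            throw new IllegalArgumentException("name must not be empty");
        }
        user.setName(newName);
    }

    public User getUser() { return user; }
}
```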

Is there a creational pattern for creating data objects based on a dependency?

I have a class that only contains data but depends on a connection object from which the data is fetched. The connection object manages a low-level socket connection to a much bigger API (an old single-header C API). Currently I have this:
public class MyData {
    private int data1;

    public MyData(Connection connection) {
        data1 = connection.getLowLevelConnection().someDataFetchCall();
    }
    // getters, setters
}
It works, but it feels wrong. Intuitively I would say that the data should be acquired by a function that returns an initialized data object:
public class DataFetcher {
    public static MyData getMyDataFromConnection(Connection connection) { ... }
    // ... more data fetching functions
}
Is there a named pattern for this kind of task? That would make naming these classes easier.
You have tied the data itself to its loading mechanism; you should decouple them. I don't think there is a specific pattern to apply here. It's a matter of design principles, especially the single responsibility principle and dependency inversion. The single responsibility principle says you should not separate things that belong together, nor tie together things that don't belong together. The dependency inversion principle says: do not depend on concretions, depend on abstractions.
The design should finally come up with one class to represent the data and one class to load the data, satisfying the single responsibility principle. If you want to be independent of the loading mechanism, introduce an abstraction (e.g. an interface), applying dependency inversion.
It's true that the repository "pattern" defines such a structure.
A side note: I personally don't accept the label "pattern" for it. It's only a label for a beneficial structure that you will reach anyway if you apply the SOLID principles. Contrast this with the 23 design patterns from the GoF: they were not made up, they were identified; they have intrinsic meaning. The "pattern" label suggests that the repository "pattern" falls into the same category, but it doesn't: it lacks naturalness and atomicity.
It works, but feels wrong.
It does.
Sticking to DDD, I'd create a repository if MyData were an aggregate, or a service if MyData were an entity. Either way, I'd put the implementation classes in the infrastructure layer and the interfaces together with MyData in the domain layer. For example:
MyData entity:
package myapp.domain.model;

public class MyData {
    private int data1;
    // other fields

    public MyData(int data1) {
        this.data1 = data1;
    }
    // other methods
}
Service interface:
package myapp.domain.service;

import myapp.domain.model.MyData;

public interface MyService {
    MyData getMyData();
}
Service implementation:
package myapp.infrastructure.oldapi;

import myapp.domain.model.MyData;
import myapp.domain.service.MyService;

public class MyServiceImpl implements MyService {
    private final Connection connection;

    public MyServiceImpl(Connection connection) {
        this.connection = connection;
    }

    @Override
    public MyData getMyData() {
        return new MyData(connection.getLowLevelConnection().someDataFetchCall());
    }
}
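Wired together, the domain layer depends only on the MyService interface, so the connection-backed implementation can be swapped for a stub in tests. A minimal sketch (the stub class and the value 42 are made up for illustration):

```java
// Domain-layer data class, mirroring MyData above.
class MyData {
    private final int data1;

    MyData(int data1) { this.data1 = data1; }

    int getData1() { return data1; }
}

// Domain-layer abstraction.
interface MyService {
    MyData getMyData();
}

// Stand-in implementation; the real one would wrap the socket-backed Connection.
class StubMyService implements MyService {
    public MyData getMyData() {
        return new MyData(42); // pretend this came from the low-level API
    }
}
```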

Force the usage of a JPA AttributeConverter for enums

We're trying to figure out a robust way of persisting enums using JPA. The common approach of using @Enumerated is not desirable, because it's too easy to break the mappings when refactoring. Each enum should have a separate database value that can differ from the enum name/order, so that you can safely change the name or internal ordering (e.g. the ordinal values) of the enum without breaking anything. E.g. this blog post has an example of how to achieve this, but we feel the suggested solution adds too much clutter to the code. We'd like to achieve a similar result using the new AttributeConverter mechanism introduced in JPA 2.1. We have an interface that each enum should implement that defines a method for getting the value used to store the enum in the database. Example:
public interface PersistableEnum {
    String getDatabaseValue();
}
...
public enum SomeEnum implements PersistableEnum {
    FOO("foo"), BAR("bar");

    private final String databaseValue;

    private SomeEnum(String databaseValue) {
        this.databaseValue = databaseValue;
    }

    public String getDatabaseValue() {
        return databaseValue;
    }
}
We also have a base converter that contains the logic for converting enums to Strings and vice versa, and separate concrete converter classes for each enum type (AFAIK, a fully generic enum converter is not possible to implement; this is also noted in this SO answer). The concrete converters then simply call the base class that does the conversion, like this:
public abstract class EnumConverter<E extends Enum<E> & PersistableEnum> {
    protected String toDatabaseValue(E value) {
        return value == null ? null : value.getDatabaseValue();
    }

    protected E toEntityAttribute(Class<E> enumClass, String value) {
        for (E constant : enumClass.getEnumConstants()) {
            if (constant.getDatabaseValue().equals(value)) {
                return constant;
            }
        }
        throw new IllegalArgumentException("Unknown database value: " + value);
    }
}
...
@Converter(autoApply = true)
public class SomeEnumConverter extends EnumConverter<SomeEnum>
        implements AttributeConverter<SomeEnum, String> {

    @Override
    public String convertToDatabaseColumn(SomeEnum attribute) {
        return toDatabaseValue(attribute);
    }

    @Override
    public SomeEnum convertToEntityAttribute(String dbData) {
        return toEntityAttribute(SomeEnum.class, dbData);
    }
}
However, while this approach works very nicely in a technical sense, there's still a pretty nasty pitfall: whenever someone creates a new enum class whose values need to be stored to the database, that person also needs to remember to make the new enum implement the PersistableEnum interface and write a converter class for it. Without this, the enum will get persisted without a problem, but the conversion will default to using @Enumerated(EnumType.ORDINAL), which is exactly what we want to avoid.
How could we prevent this? Is there a way to make JPA (in our case, Hibernate) NOT default to any mapping, but e.g. throw an exception if no @Enumerated is defined on a field and no converter can be found for the type? Or could we create a "catch all" converter that is called for all enums that don't have their own specific converter class and always throw an exception from there? Or do we just have to suck it up and try to remember the additional steps each time?
You want to ensure that all enums are instances of PersistableEnum.
You need to register a default entity listener (an entity listener whose callbacks apply to all entities in the persistence unit).
In the default entity listener class, implement a @PrePersist method and make sure all the enums are instances of PersistableEnum.
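The reflective check inside such a listener could be sketched as below. The JPA wiring (registering the class as a default entity listener and annotating the method with @PrePersist) is omitted so the core stays self-contained, and the sample entities are hypothetical; PersistableEnum is the interface from the question:

```java
import java.lang.reflect.Field;

interface PersistableEnum {
    String getDatabaseValue();
}

class EnumMappingValidator {
    // Throws if the entity declares an enum-typed field whose enum type does
    // not implement PersistableEnum. In JPA this would be called from a
    // @PrePersist callback on a default entity listener.
    static void validate(Object entity) {
        for (Field f : entity.getClass().getDeclaredFields()) {
            Class<?> type = f.getType();
            if (type.isEnum() && !PersistableEnum.class.isAssignableFrom(type)) {
                throw new IllegalStateException(
                    "Enum field " + f.getName() + " of "
                    + entity.getClass().getSimpleName()
                    + " does not implement PersistableEnum");
            }
        }
    }
}

// Hypothetical sample types to exercise the check.
enum GoodEnum implements PersistableEnum {
    FOO;
    public String getDatabaseValue() { return "foo"; }
}

enum BadEnum { BAR }

class GoodEntity { GoodEnum e = GoodEnum.FOO; }

class BadEntity { BadEnum e = BadEnum.BAR; }
```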

Annotation with User Defined class

While writing a simple skills system for a game, I have run into a small hiccup. My basic convention for my system uses annotations to define if a skill or ability meets all predefined requirements, but I want it to be extendable.
To do this I have implemented an Enum class with an interface and Annotated the Enums within to gather basic information. I have found that this limits my ability to create another annotation that can be placed on other classes to see if that skill needs to be "trained".
Something like:
public enum AthleticSkills implements AnnotatedSkill {
    @Skill(name = "Jump", base = Agility.class, trained = true)
    JUMP {
        public void use() {}
    }
}
This enum could be any number of skill types, such as StrengthSkills, etc., all implementing AnnotatedSkill, which does let me use AnnotatedSkill as a type in methods and parameters. But I wanted to do something similar to:
public @interface TrainedSkill {
    AnnotatedSkill[] value();
}
Trained skills are kept in a Set<AnnotatedSkill>.
I know this isn't possible, but I wanted to stick with the convention I have set up. I am willing to redefine how I set up skills if necessary, but if anyone has a way to make this work while keeping it similar to this, I would be most grateful.
Some more explanation of how this is used:
Skills are simple: everyone has them (think old D&D skills) and everyone can use them. They use stat classes (Agility.class, Strength.class, etc.) to adjust their effectiveness. Along with this there is the ability to train a skill: a one-shot, you know it or not. Trained skills are added to the Set<AnnotatedSkill> within the player object and just mean you get additional modifiers when you use the skill.
The use() method within the enum is gated by the trained flag in the annotation: if true, it can only be used if the skill is trained; if false, anyone can use it.
There is also another annotation that can be added if you wish to limit who can train the skill (@Requirements), which can limit the training/using to classes/races/etc.
I assume this would fall more into a Framework for skills/classes/races/etc.
Assume the following:
@Types(types = {EffectTypes.BUFF, EffectTypes.HEALTH, EffectTypes.HEAL})
@Requirements(
    abilities = {
        @RequireAbility(required = TimedHealAbility.class)})
@TrainedSkill(AthleticSkills.JUMP)
public class TimedHealEffect extends EffectComponent {}
This effect will only happen if they have trained AthleticSkills.JUMP, and if they meet the requirements from @Requirements.
Tracking Trained skills:
public class SkillTracking {
    private Set<AnnotatedSkill> trainedSkills =
        Collections.newSetFromMap(new ConcurrentHashMap<AnnotatedSkill, Boolean>());

    public void addSkill(AnnotatedSkill skill) {
        trainedSkills.add(skill);
    }

    public boolean removeSkill(AnnotatedSkill skill) {
        return trainedSkills.remove(skill);
    }

    public boolean hasSkill(AnnotatedSkill skill) {
        return trainedSkills.contains(skill);
    }
}
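Since annotation members cannot hold arbitrary object references, one workaround is to keep the enum constant itself in the annotation (as in @TrainedSkill(AthleticSkills.JUMP) above) and read the @Skill metadata off the constant reflectively at runtime: enum constants are public static fields of the enum class, so their annotations can be read back. A self-contained sketch, with the annotation simplified from the question (the base stat class member is dropped here):

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Simplified version of the @Skill annotation from the question.
@Retention(RetentionPolicy.RUNTIME)
@interface Skill {
    String name();
    boolean trained() default false;
}

enum AthleticSkills {
    @Skill(name = "Jump", trained = true)
    JUMP;

    // Looks up this constant's @Skill annotation via reflection.
    Skill skill() {
        try {
            return getDeclaringClass().getField(name()).getAnnotation(Skill.class);
        } catch (NoSuchFieldException e) {
            throw new AssertionError(e); // enum constants are always public fields
        }
    }
}
```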
