Project Lombok - customise generated Setter - java

Is there a way to customise the generated code for @Setter?
Consider the following simple class:
@Entity
@Getter
@Setter
@NoArgsConstructor
public class MyEntity implements Serializable {

    @Id private long id;

    @OneToMany
    private Set<AttributeColumn> columns = new HashSet<>();

    public void setColumns(Set<AttributeColumn> columns) {
        this.columns.clear();
        this.columns.addAll(columns);
    }
}
I want Lombok to generate the setter for columns as I implemented it in the example above. This should only be done on classes annotated with @Entity and on attributes that are a Collection. The setter for other attributes, in this example id, should be generated as usual.
Is there a way to customise the generation of the setter code depending on those criteria?

No.
No, there's no such feature and no plans for it.
As already stated in a comment, you could do it yourself, but it's not easy at all. Moreover, you'd have to decide to either hardcode the logic (simple but probably unusable for others) or interpret something like
@SetterWhen(@Or(
    @Condition(annotatedWith = Entity.class),
    @Condition(declaredType = Collection.class)))
which is close to impossible to implement (as this information is unavailable when Lombok runs).
Currently, all you can do is allow or suppress the generation on a per-field basis. There's no way to generate a different setter body; however,
there's a related feature: @Singular, which may or may not help you.
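For completeness, here is a rough sketch of how @Singular is used. It works together with @Builder rather than @Setter, so it changes how the collection is populated when the object is built, not the generated setter (the class is reused from the question; the extra constructor annotations are assumptions needed to keep JPA and @Builder happy):

@Entity
@Getter
@NoArgsConstructor
@AllArgsConstructor
@Builder
public class MyEntity implements Serializable {

    @Id
    private long id;

    @OneToMany
    @Singular
    private Set<AttributeColumn> columns;
}

// The builder gains column(..), columns(..) and clearColumns() methods:
MyEntity entity = MyEntity.builder()
        .id(1L)
        .column(someAttributeColumn)   // someAttributeColumn is a placeholder
        .build();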

Related

Is it possible to map custom final classes in Hibernate?

Suppose I have an already-made class which I wish to persist. I can't change its code, i.e. I can't put any annotations inside. Also, the class does not follow the bean convention.
That is, it is an arbitrarily complex class I wish to persist.
Is it possible to write some sort of custom serializer and deserializer (not sure what to call it) in Hibernate, so that I am able to read these classes as usual POJOs?
Hello. The first question is: can I map a final class? The answer is YES, as long as you don't use Hibernate enhancement or some other sort of instrumentation.
Now the second question: a bean not following bean conventions. I guess this means no getters and setters. You can use attribute (field) level access, so this is again not a problem.
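As a general illustration of attribute (field) level access in JPA (names here are made up): when the mapping annotations sit on fields, or access="FIELD" is declared in XML mappings, the provider reads and writes the fields directly via reflection, so no getters or setters are required.

@Entity
@Access(AccessType.FIELD)
public class LegacyRecord {          // hypothetical entity without accessors

    @Id
    private Long id;
    private String payload;

    protected LegacyRecord() { }     // JPA still requires a no-arg constructor
}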
Is it possible to write a custom serializer in Hibernate? The answer here is NO. Why? Because Hibernate is not about serialization; Hibernate is about SQL. There is no strict requirement that a Hibernate entity should be serializable.
Even though Hibernate does not enforce serialization, can I still make my final class serializable even though it does not implement Serializable or Externalizable? Yes: you need to wrap it in a class implementing Serializable or Externalizable and implement the read/write methods (readExternal/writeExternal) yourself.
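A minimal sketch of that wrapping idea, assuming a final third-party class ComplexFinalClass whose state is reachable through getName() and getCount() and a matching constructor (all of these names are hypothetical):

import java.io.Externalizable;
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;

// Wraps the non-serializable final class and handles the byte-level
// representation itself via Externalizable's two callbacks.
public class ComplexFinalClassWrapper implements Externalizable {

    private ComplexFinalClass wrapped;   // the class we cannot modify

    public ComplexFinalClassWrapper() { }            // required by Externalizable

    public ComplexFinalClassWrapper(ComplexFinalClass wrapped) {
        this.wrapped = wrapped;
    }

    @Override
    public void writeExternal(ObjectOutput out) throws IOException {
        // write whatever state is reachable through the wrapped object's API
        out.writeUTF(wrapped.getName());
        out.writeInt(wrapped.getCount());
    }

    @Override
    public void readExternal(ObjectInput in) throws IOException, ClassNotFoundException {
        // rebuild the wrapped instance from the stream
        this.wrapped = new ComplexFinalClass(in.readUTF(), in.readInt());
    }

    public ComplexFinalClass unwrap() {
        return wrapped;
    }
}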
Serialization to JSON or XML is not part of Hibernate, nor is it part of JPA. Serialization to these two formats is defined as part of the JAXB and JAX-RS specifications.
Have a look at Hibernate's UserType and CompositeUserType, with the well-known EnumUserType example.
Enums are a bit like your case: final class, no getters nor setters. They are not complex though, so you might need a CompositeUserType, which allows mapping several columns for one type, rather than a UserType.
Then you would use it like this in your class:
public class MyClass {

    @Id
    private Long id;

    @Type(type = "com...MyCompositeUserType")
    private ComplexFinalClassNotPojo complexObject;
}
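For reference, here is a skeleton of a single-column UserType, roughly following the Hibernate 5.x org.hibernate.usertype.UserType contract (exact method signatures differ between Hibernate versions, and a multi-column mapping such as the MyCompositeUserType referenced above would implement the larger CompositeUserType contract instead; this just shows the shape of the approach). ComplexFinalClassNotPojo comes from the snippet above; ComplexFinalClassCodec and its two methods are hypothetical helpers standing in for whatever way the final class exposes its state:

import java.io.Serializable;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Types;

import org.hibernate.HibernateException;
import org.hibernate.engine.spi.SharedSessionContractImplementor;
import org.hibernate.usertype.UserType;

// Sketch only: maps ComplexFinalClassNotPojo to a single VARCHAR column.
public class MyUserType implements UserType {

    @Override
    public int[] sqlTypes() {
        return new int[] { Types.VARCHAR };
    }

    @Override
    public Class<?> returnedClass() {
        return ComplexFinalClassNotPojo.class;
    }

    @Override
    public boolean equals(Object x, Object y) throws HibernateException {
        return x == null ? y == null : x.equals(y);
    }

    @Override
    public int hashCode(Object x) throws HibernateException {
        return x == null ? 0 : x.hashCode();
    }

    @Override
    public Object nullSafeGet(ResultSet rs, String[] names,
            SharedSessionContractImplementor session, Object owner)
            throws HibernateException, SQLException {
        String raw = rs.getString(names[0]);
        return raw == null ? null : ComplexFinalClassCodec.fromDatabaseString(raw);
    }

    @Override
    public void nullSafeSet(PreparedStatement st, Object value, int index,
            SharedSessionContractImplementor session)
            throws HibernateException, SQLException {
        if (value == null) {
            st.setNull(index, Types.VARCHAR);
        } else {
            st.setString(index, ComplexFinalClassCodec.toDatabaseString(
                    (ComplexFinalClassNotPojo) value));
        }
    }

    @Override
    public Object deepCopy(Object value) throws HibernateException {
        return value;   // fine if the mapped class is effectively immutable
    }

    @Override
    public boolean isMutable() {
        return false;
    }

    @Override
    public Serializable disassemble(Object value) throws HibernateException {
        // only used for second-level caching; store the string form
        return value == null ? null
                : ComplexFinalClassCodec.toDatabaseString((ComplexFinalClassNotPojo) value);
    }

    @Override
    public Object assemble(Serializable cached, Object owner) throws HibernateException {
        return cached == null ? null
                : ComplexFinalClassCodec.fromDatabaseString((String) cached);
    }

    @Override
    public Object replace(Object original, Object target, Object owner) throws HibernateException {
        return original;
    }
}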

datanucleus/JDO: a relation to many different classes

I need to create a database with 2 kinds of 'modules':
domain-focused classes
metadata classes
The first group is just a simple (or rather complex) RDBMS schema. The second 'block' consists of metadata classes which collect information about the classes from the first block.
What I have done:
Created an Entity class which is the parent of everything from the 1st part:
@PersistenceAware
@Inheritance(strategy = InheritanceStrategy.NEW_TABLE)
public abstract class Entity implements Serializable {
    private static final long serialVersionUID = 1L;
}
Created a normal schema with all entities somehow inheriting from the Entity class.
Created an InternalMapping class as the parent of the whole concept:
@PersistenceCapable
@Inheritance(strategy = InheritanceStrategy.NEW_TABLE)
public abstract class InternalMapping implements Serializable {
    private static final long serialVersionUID = 1L;
    private Entity entity;
    // .. getter and setter cut off
}
Created an InternalMapping child which should have that feature.
Finally I found that it does not work, probably because Entity does not have any fields. But if it did, I would expect 2 fields: a primary key and the class name. That way I would map every entity by 2 coordinates: ID and class name.
Any idea how to solve that issue? And finally, what would the JDOQL look like?
PS. I know that an RDBMS is not the best solution for that kind of problem, but the people with whom I work wish to have a relational database.
Finally I found a solution to my problem. I am able to keep entities of different classes in one table, and I am able to run a JDOQL request filtering instances of a particular class.
The example is inside a GitHub repository here: https://github.com/jgrzebyta/samples-jdo/tree/metalink, within the metalink branch. It is a slightly modified Tutorial project from the datanucleus examples.
So.
The lowest level in the inheritance hierarchy is the Core interface with the PK defined inside.
The MyIndex class collects different implementations of the Core interface, i.e. Book and Product. I have also added a new column called type, which stores class names only. I am able to retrieve implementations of the Core interface and build a query filter against the type field, because a query like core instanceof Book simply does not work. That is a consequence of the identity mapping strategy which I have used in my solution: DataNucleus JDO Objects.
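For example, a JDOQL query filtering on that type column could look roughly like this (assuming MyIndex has fields named core and type as described above, and pm is an open PersistenceManager):

// Fetch only the MyIndex rows whose 'type' column holds the Book class name.
Query query = pm.newQuery(MyIndex.class);
query.setFilter("type == :className");
@SuppressWarnings("unchecked")
List<MyIndex> bookIndexes = (List<MyIndex>) query.execute(Book.class.getName());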
PS. If you run the command mvn -Pschema-gen compile then you will receive a DDL file.

Too much boilerplate, how can I reduce my POJO builders?

I have several different POJOs that use a builder pattern, but after adding a builder for each one and generating Object.toString, Object.hashCode and Object.equals, my classes end up being around 100 lines of code. There has to be a better way to handle this. I think some sort of reflective builder would help out a lot, but I'm not sure that would be good practice, and I'm also not sure how exactly I'd make it happen. In other words, is there a way to implement a builder like this?
A simple POJO:
public class Foo {
    public int id;
    public String title;
    public boolean change;
    ...
}
Then some sort of reflective builder:
Foo foo = ReflectiveBuilder.from(Foo.class).id(1).title("title").change(false).build();
Short answer: no, what you ask for is not possible. Reflection looks at the code at runtime and invokes methods dynamically; it cannot generate actual methods.
What you could do would be:
Foo foo = ReflectiveBuilder.from(Foo.class)
        .set("id", 1)
        .set("title", "title")
        .build();
This has three massive problems:
the fields are Strings - a typo causes a runtime error rather than a compile time one,
the values are Objects - the wrong type causes a runtime error rather than a compile time one, and
it would be much slower than the alternative as Reflection is very slow.
So a reflection-based solution, whilst possible (see Apache Commons BeanUtils' BeanMap), is not at all practical.
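To make the trade-off concrete, a bare-bones version of such a reflective builder could look roughly like this (ReflectiveBuilder is the hypothetical class from the snippet above, sketched with plain java.lang.reflect rather than BeanMap):

import java.lang.reflect.Field;

// Builds an instance by setting fields looked up by name at runtime.
public final class ReflectiveBuilder<T> {

    private final T instance;

    private ReflectiveBuilder(T instance) {
        this.instance = instance;
    }

    public static <T> ReflectiveBuilder<T> from(Class<T> type) {
        try {
            return new ReflectiveBuilder<>(type.getDeclaredConstructor().newInstance());
        } catch (ReflectiveOperationException e) {
            throw new IllegalArgumentException("Cannot instantiate " + type, e);
        }
    }

    public ReflectiveBuilder<T> set(String fieldName, Object value) {
        try {
            Field field = instance.getClass().getDeclaredField(fieldName);
            field.setAccessible(true);
            field.set(instance, value);   // wrong value type -> runtime error
        } catch (ReflectiveOperationException e) {
            throw new IllegalArgumentException("Cannot set " + fieldName, e);
        }
        return this;
    }

    public T build() {
        return instance;
    }
}

Everything a hand-written builder checks at compile time (field names, value types) is deferred to runtime here, which is exactly the problem listed above.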
Long answer: if you're willing to allow some compile-time magic, you can use Project Lombok. The idea behind Lombok is to generate boilerplate code from annotations using Java's annotation processing system.
The really magical thing is that all IDEs (well, the big 3 at least) understand annotation processing, so code completion will still function correctly even though the code doesn't really exist in the source.
In the case of a POJO with a builder you can use @Data and @Builder:
@Data
@Builder
public class Foo {
    public int id;
    public String title;
    public boolean change;
    ...
}
The @Data annotation will generate:
a required-arguments constructor (that takes all final fields),
equals and hashCode methods that use all fields (can be configured with the @EqualsAndHashCode annotation),
a toString method over all fields (can be configured with the @ToString annotation), and
public getters and setters for all fields (can be configured using the @Getter / @Setter annotations on fields).
The @Builder annotation will generate an inner builder class whose instances you obtain via Foo.builder().
Do make sure you configure the equals, hashCode and toString methods: if you have two Lombok-annotated classes that reference each other, you will end up with an infinite loop in the default case, as each class includes the other in these methods.
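For example, with two classes referencing each other you would exclude the back-reference; a sketch with made-up class names (on older Lombok versions the same effect is achieved with @EqualsAndHashCode(exclude = "parent") and @ToString(exclude = "parent")):

@Data
public class Parent {
    private String name;

    @ToString.Exclude
    @EqualsAndHashCode.Exclude
    private List<Child> children;
}

@Data
public class Child {
    private String name;

    @ToString.Exclude
    @EqualsAndHashCode.Exclude
    private Parent parent;
}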
There is also a newer configuration system that allows you to use, for example, fluent or chained setters, so you can more or less do away with the builder if your POJO is mutable:
new Foo().setId(3).setTitle("title")...
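A sketch of that setup using Lombok's @Accessors annotation (from lombok.experimental), which makes the generated setters return this so they can be chained:

@Data
@Accessors(chain = true)
public class Foo {
    private int id;
    private String title;
    private boolean change;
}

// The generated setters now return Foo, so this compiles:
Foo foo = new Foo().setId(3).setTitle("title").setChange(true);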
For another approach you can look at Aspect-Oriented Programming (AOP) and AspectJ. AOP allows you to chop your classes up into "aspects" and then stick them together according to certain rules using a pre-compiler. For example, you could implement exactly what Lombok does using custom annotations and an aspect. This is a fairly advanced topic, however, and might well be overkill.
Maybe Project Lombok (yes the website is ugly) is an option for you. Lombok injects code into your classes based on annotations.
With Lombok you use the @Data annotation to generate getters, setters, toString(), hashCode() and equals():
@Data
public class Foo {
    public int id;
    public String title;
    public boolean change;
}
Have a look at the example in the @Data documentation to see the generated code.
Lombok also provides @Builder, which generates a builder for your class. But be aware that this is an experimental feature:
@Builder
public class Foo {
    public int id;
    public String title;
    public boolean change;
}
Now you can do:
Foo foo = Foo.builder()
        .id(123)
        .title("some title")
        .change(true)
        .build();
I personally use this website to create all the boilerplate code for the POJOs for me. All you need to do is to paste the JSON that you want to parse, and it will generate all the classes for you. Then I just use Retrofit to do the requests/caching/parsing of the information. Here is an example of Retrofit and POJOs in my Github account.
I hope it helps!
I created a small library, CakeMold, to do fluent initialization of POJOs. It uses reflection, which is certainly not fast, but it can be very helpful when you need to write tests.
Person person = CakeMold.of(Person.class)
        .set("firstName", "Bob")
        .set("lastName", "SquarePants")
        .set("email", "sponge.bob@bikinibottom.io")
        .set("age", 22)
        .cook();

How to Avoid Annotations in POJOs

Let's say I have the following POJO class:
public class Example {
    private String name;
    private int id;
    private Object o;
    // more fields
    // getters/setters
}
Now let's assume I want to persist my entity using JPA; I will end up with the following example POJO class:
@Id
@GeneratedValue(strategy = GenerationType.AUTO)
@Column(name = "ID")
private int id;

@OneToMany(mappedBy = "directive")
private String name;
In my opinion this is bad, because if I want to use e.g. Spring Data MongoDB, the annotations would be useless/wrong.
The only way I can think of to avoid this is defining an interface or an abstract class, for example Storable, which defines getter/setter methods.
But then I have violated the POJO definition (and one can argue it was not a POJO to begin with).
Are there any best practices for defining model classes?
When using JPA, you can leave your classes untouched and keep ALL your configuration in XML files. Many people prefer annotations, but if changing the persistence implementation is a requirement, you should consider using external configuration.
I am not sure about other frameworks/specs besides JPA, but XML configuration goes a long way in Java. I am sure many frameworks offer such possibilities.
There is also a pattern called DTO (data transfer object) that can be used to separate persistence concerns from business concerns.
The gist is: you use your annotated, DB-centric classes for your DB connection only. Your main application only uses business-oriented classes and is persistence-agnostic. The data could come from a DB or from a flat file; as long as you can convert it to your business objects, all is well.
EDIT: DTOs sound like a lot of work, but you gain clarity and testability by separating concerns. Hexagonal architecture and clean architecture emphasize this approach.
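A tiny sketch of that separation, with a persistence-annotated class kept at the edge and a plain business object used everywhere else (all names here are made up):

// DB-centric class: the persistence annotations live only here.
@Entity
public class CustomerEntity {
    @Id
    private Long id;
    private String name;
    // getters/setters omitted
}

// Business-oriented class: no persistence knowledge at all.
public class Customer {
    private final long id;
    private final String name;

    public Customer(long id, String name) {
        this.id = id;
        this.name = name;
    }
    // getters omitted
}

// Conversion at the boundary between the two worlds.
public final class CustomerMapper {
    public static Customer toDomain(CustomerEntity e) {
        return new Customer(e.getId(), e.getName());
    }
}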

How do I extend a hibernate annotated class to point a field to a different hibernate entity?

Let's say I have the following class structure:
/** Boring bits snipped */
@Entity
@Table(name = "Foo")
public class Foo {
    @JoinColumn(name = "id")
    private Bar bar;

    /** Other flat data goes here */
}

@Entity
@Table(name = "Bar")
public class Bar {
    /** Some data goes here */
}
For reasons I'm not going to go into, I have copies of these tables which I also want to map, and which should appear in Java as Foo and Bar objects too. Most importantly, the relationships should be between the copied tables when dealing with copied objects.
What is the most correct way of doing this?
I'm guessing I can probably do something like this:
@Entity
@Table(name = "OtherFoo")
public class OtherFoo extends Foo {
    @JoinColumn(name = "id")
    private OtherBar bar;
}

@Entity
@Table(name = "OtherBar")
public class OtherBar extends Bar {
}
But is that the right way to do it?
You're close, but you can't just inherit from another entity and change the table like that. Entity inheritance has to follow one of the provided inheritance models. For your use case it may be as simple as adding @Inheritance(strategy = InheritanceType.TABLE_PER_CLASS) to the superclass. There are some limitations to this if you have more complicated mappings with other classes: since the provider can't tell which table a superclass-based mapping is actually in, it can't join through it, and mappings to the superclass will require checking both tables every time. You also, of course, need unique ID generation across all the tables in the hierarchy. You may want to consider using an abstract superclass and having both concrete entities be leaf classes; then at least you can always work with just a single table when you know which one it is.
Alternatively, you can declare your column mappings in a @MappedSuperclass, and each subclass can then be an entity with its own table mapping. That might work better if it's legacy data and you don't have unique IDs across the 'regular' and 'copy' tables.
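A rough sketch of the @MappedSuperclass variant, reusing the names from the question (the @ManyToOne annotations are assumptions, since the original snippet only shows @JoinColumn):

@MappedSuperclass
public abstract class AbstractFoo {
    @Id
    private Long id;
    /** Other flat data shared by both tables goes here */
}

@Entity
@Table(name = "Foo")
public class Foo extends AbstractFoo {
    @ManyToOne
    @JoinColumn(name = "id")
    private Bar bar;
}

@Entity
@Table(name = "OtherFoo")
public class OtherFoo extends AbstractFoo {
    @ManyToOne
    @JoinColumn(name = "id")
    private OtherBar bar;   // points at the copied Bar table
}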
