Copy specific properties using spring BeanUtils - java

I have a model class which has around 45 properties. I have created another DTO class which has exactly the same properties.
At runtime, some requirements don't need me to show all the properties to the user. Hence I want to copy only some properties from my model class to my DTO class and then send that object to the client.
I am using Spring's BeanUtils.copyProperties. But here I only see the option to ignore properties which I don't want. As my list of unwanted properties is long, is there a way in which I can specify only the list I want?
I searched on the net and found a solution:
"org.springframework.beans.BeanUtils.copyProperties(Object source, Object target, Class editable) throws BeansException
Ensure the target implements the interface editable which defines the properties which would be copied."
But I am not able to work my head around this editable interface. I tried using an interface which has all the properties I want and tried to use it here, but it gave me an error saying that it is expecting a class. Can somebody help me with this editable interface?

I think creating another DTO with just the properties you want will be a lot easier.

Instead of creating copies of the object containing only a few properties, you could define various interfaces that provide only the getter methods for the properties that you want to expose to the consumer of your object. Your DTO can implement all the different interfaces, and you hand the consumer the interface instead of the concrete class.
public class MyDTO implements Fooable, Barable {
    private String foo;
    private String bar;

    public String getFoo() {
        return foo;
    }

    public String getBar() {
        return bar;
    }
}

public interface Fooable {
    String getFoo();
}

public interface Barable {
    String getBar();
}
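For what it's worth, the three-argument copyProperties overload from the question plays well with this interface idea, with one caveat: Spring copies through the write methods it finds on the "editable" type, so the interface has to declare setters as well as getters or nothing will actually be copied. A rough sketch, assuming a source bean MyModel with the same properties as MyDTO (MyModel, FooView and CopyDemo are placeholders, not types from the question):
import org.springframework.beans.BeanUtils;

public interface FooView {
    String getFoo();
    void setFoo(String foo);
}

public class CopyDemo {
    public static void main(String[] args) {
        MyModel model = new MyModel(); // hypothetical source bean exposing foo and bar
        MyDTO dto = new MyDTO();       // would need to implement FooView, the "editable" type
        // Copies only the properties declared on FooView (here: "foo"); "bar" is ignored
        BeanUtils.copyProperties(model, dto, FooView.class);
    }
}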

Related

Java: make static methods of one class mirror instance methods of another class

I have a POJO like this:
public class Foo {
    private String bar1;
    private String bar2;
    //...

    public String getBar1() { return bar1; }
    public void setBar1(String bar1) { this.bar1 = bar1; }
    public String getBar2() { return bar2; }
    public void setBar2(String bar2) { this.bar2 = bar2; }
    //...
}
As an alternative to Java reflection (which is quite slow in general), I would like to define a class with static methods like this:
public class FooStatic {
    public static String getBar1(Foo foo) { return foo.getBar1(); }
    public static void setBar1(Foo foo, String bar1) { foo.setBar1(bar1); }
    public static String getBar2(Foo foo) { return foo.getBar2(); }
    public static void setBar2(Foo foo, String bar2) { foo.setBar2(bar2); }
    //...
}
which enforces the creation/deprecation of a static method every time the Foo class is updated. For example, if the field bar2 is deleted in Foo, then the static methods getBar2() and setBar2() in FooStatic should be flagged by the compiler for removal. If a new variable bar3 is added to Foo with getters and setters, the compiler should enforce the creation of new static methods getBar3() and setBar3() in FooStatic. Moreover, I have multiple POJOs, and would like a solution which scales. Is this possible?
Yes... sort of. It's very complicated.
Annotation Processors are compiler plugins that run at certain times during the compilation process. It gets complex fast - IDEs and build tools are 'incremental' (they don't want to recompile your entire code base every time you change a single character, of course), for example.
Annotation processors can do a few things:
They can run as part of the compilation processes. This can be done automatically - they just need to be on the classpath, is all
They can be triggered due to the presence of an annotation.
They can read the signatures of existing files (the names of fields and methods, the parameter names, parameter types, return type, and throws clause, and the type of fields, and the extends and implements clauses, and the param names and types of the constructors). They can't read the body content (initializing expressions, method and constructor bodies). But I think you just need the signatures here.
They can make new files. They can even make new java files which will then automatically get compiled along with the rest.
Thus, you have a route here: Make an annotation, then make an annotation processor. For example, you could set it up so that you manually write:
@com.foo.Hossmeister.Singletonify
class Example {
    void foo1() {}
    String foo2(String arg) throws IOException { return arg; }
}
and have an annotation processor (which registers itself for that com.foo.Hossmeister.Singletonify annotation) which, if it is on the classpath, automatically generates this file and ensures that all other code can see it:
// Generated
class ExampleSingleton {
    private ExampleSingleton() {}
    private static final Example INSTANCE = new Example();

    public static void foo1() {
        INSTANCE.foo1();
    }

    public static String foo2(String arg) throws IOException {
        return INSTANCE.foo2(arg);
    }
}
But, annotation processors are tricky beasts to write, and they can be quite a drag on the compilation process. Still, that's the only way to get what you want. Now you have something to search the web for / read up on :)
You start by making a separate project that defines the annotation and contains the annotation processor (a class that extends AbstractProcessor), pack that into a jar, and make sure the jar includes an SPI file (META-INF/services/javax.annotation.processing.Processor) naming your processor class; then it'll be picked up automatically. I'll give you the annotation definition:
In a file named Singletonify.java:
@Retention(RetentionPolicy.CLASS)
@Target(ElementType.TYPE)
public @interface Singletonify {}
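And a bare-bones skeleton of the processor itself might look roughly like this (the class name and what you actually do inside process() are up to you; this is just a sketch of the shape):
import java.util.Set;
import javax.annotation.processing.AbstractProcessor;
import javax.annotation.processing.RoundEnvironment;
import javax.annotation.processing.SupportedAnnotationTypes;
import javax.lang.model.element.Element;
import javax.lang.model.element.TypeElement;

@SupportedAnnotationTypes("com.foo.Hossmeister.Singletonify")
public class SingletonifyProcessor extends AbstractProcessor {
    @Override
    public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) {
        for (TypeElement annotation : annotations) {
            for (Element element : roundEnv.getElementsAnnotatedWith(annotation)) {
                // read the annotated type's signatures via element.getEnclosedElements(),
                // then generate a new source file with
                // processingEnv.getFiler().createSourceFile(...)
            }
        }
        return true; // claim the annotation so no other processor handles it again
    }
}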
But... wait!
The concept of singletons is often problematic. Singletons should be 'stateless' - and if they are stateless, why isn't your Foo class just filled with entirely static methods, obviating the need for your "static mirror class"? If it is stateful, you now have global state which is a virtually universally decried anti-pattern. You don't want global state, it makes reasoning about control flow impossible.
A second problem is testability - because static stuff doesn't 'do' inheritance, you can't (easily) make test implementations of static methods. With non-static stuff this is much easier.
This problem is more generally solved by so-called Dependency Injection frameworks such as Dagger, Guice, or Spring. They let you write code that just 'gets' an instance of your Foo class, without callers having to actually figure out where to get this instance from: The Dependency Injection framework takes care of it. It lets you do things like "Have a singleton of this object... per web session". Which is pretty powerful stuff.
I think what you probably want is a DI system. You may want to investigate a bit before spending the 2 weeks writing that annotation processor.

Force the usage of a JPA AttributeConverter for enums

We're trying to figure out a robust way of persisting enums using JPA. The common approach of using @Enumerated is not desirable, because it's too easy to break the mappings when refactoring. Each enum should have a separate database value that can be different from the enum name/order, so that you can safely change the name or internal ordering (e.g. the ordinal values) of the enum without breaking anything. E.g. this blog post has an example of how to achieve this, but we feel the suggested solution adds too much clutter to the code.
We'd like to achieve a similar result by using the new AttributeConverter mechanism introduced in JPA 2.1. We have an interface that each enum should implement that defines a method for getting the value that is used to store the enum in the database. Example:
public interface PersistableEnum {
    String getDatabaseValue();
}
...
public enum SomeEnum implements PersistableEnum {
    FOO("foo"), BAR("bar");

    private String databaseValue;

    private SomeEnum(String databaseValue) {
        this.databaseValue = databaseValue;
    }

    public String getDatabaseValue() {
        return databaseValue;
    }
}
We also have a base converter that has the logic for converting enums to Strings and vice versa, and separate concrete converter classes for each enum type (AFAIK, a fully generic enum converter is not possible to implement; this is also noted in this SO answer). The concrete converters then simply call the base class that does the conversion, like this:
public abstract class EnumConverter<E extends PersistableEnum> {
    protected String toDatabaseValue(E value) {
        // Do the conversion...
    }
    protected E toEntityAttribute(Class<E> enumClass, String value) {
        // Do the conversion...
    }
}
...
@Converter(autoApply = true)
public class SomeEnumConverter extends EnumConverter<SomeEnum>
        implements AttributeConverter<SomeEnum, String> {

    public String convertToDatabaseColumn(SomeEnum attribute) {
        return toDatabaseValue(attribute);
    }

    public SomeEnum convertToEntityAttribute(String dbData) {
        return toEntityAttribute(SomeEnum.class, dbData);
    }
}
However, while this approach works very nicely in a technical sense, there's still a pretty nasty pitfall: whenever someone creates a new enum class whose values need to be stored to the database, that person also needs to remember to make the new enum implement the PersistableEnum interface and write a converter class for it. Without this, the enum will get persisted without a problem, but the conversion will default to using @Enumerated(EnumType.ORDINAL), which is exactly what we want to avoid.
How could we prevent this? Is there a way to make JPA (in our case, Hibernate) NOT default to any mapping, but e.g. throw an exception if no @Enumerated is defined on a field and no converter can be found for the type? Or could we create a "catch all" converter that is called for all enums that don't have their own specific converter class and always throw an exception from there? Or do we just have to suck it up and try to remember the additional steps each time?
You want to ensure that all Enums are instances of PersistableEnum.
You need to set a Default Entity Listener (an entity listener whose callbacks apply to all entities in the persistence unit).
In the default entity listener class, implement the @PrePersist callback and make sure all the enums are instances of PersistableEnum.
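Something along these lines (a sketch only; the listener class name is made up, and you would still register it as a default entity listener in orm.xml):
import java.lang.reflect.Field;
import javax.persistence.PrePersist;

public class EnumMappingListener {

    @PrePersist
    public void checkEnums(Object entity) {
        // Reject any entity that has an enum-typed field whose enum does not implement PersistableEnum
        for (Field field : entity.getClass().getDeclaredFields()) {
            if (field.getType().isEnum()
                    && !PersistableEnum.class.isAssignableFrom(field.getType())) {
                throw new IllegalStateException(
                        entity.getClass().getSimpleName() + "." + field.getName()
                        + " is an enum that does not implement PersistableEnum");
            }
        }
    }
}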

Canonicalizing Java bean property names

I have a bunch of third-party Java classes that use different property names for what are essentially the same property:
public class Foo {
    public String getReferenceID();
    public void setReferenceID(String id);
    public String getFilename();
    public void setFilename(String fileName);
}

public class Bar {
    public String getRefID();
    public void setRefID(String id);
    public String getFileName();
    public void setFileName(String fileName);
}
I'd like to be able to address these in a canonicalized form, so that I can treat them polymorphically, and so that I can do stuff with Apache BeanUtils like:
PropertyUtils.copyProperties(object1,object2);
Clearly it would be trivial to write an Adapter for each class ...
public class CanonicalizedBar implements CanonicalizedBazBean {
    public String getReferenceID() {
        return this.delegate.getRefID();
    }
    // etc.
}
But I wonder is there something out there more generalized and dynamic? Something that would take a one-to-many map of property name equivalences, and a delegate class, and produce the Adapter?
I've never used it, but I think you're looking for Dozer:
Dozer is a Java Bean to Java Bean mapper that recursively copies data
from one object to another. Typically, these Java Beans will be of
different complex types.
Dozer supports simple property mapping, complex type mapping,
bi-directional mapping, implicit-explicit mapping, as well as
recursive mapping. This includes mapping collection attributes that
also need mapping at the element level.
Dozer not only supports mapping between attribute names, but also
automatically converting between types. Most conversion scenarios are
supported out of the box, but Dozer also allows you to specify custom
conversions via XML.
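I haven't verified this against the question's classes, but with Dozer's Java mapping API the Foo/Bar case above could be wired up roughly like this (Dozer 5.x style; treat it as a sketch rather than a tested configuration):
import org.dozer.DozerBeanMapper;
import org.dozer.loader.api.BeanMappingBuilder;

DozerBeanMapper mapper = new DozerBeanMapper();
mapper.addMapping(new BeanMappingBuilder() {
    @Override
    protected void configure() {
        // map the differently named properties onto each other
        mapping(Foo.class, Bar.class)
            .fields("referenceID", "refID")
            .fields("filename", "fileName");
    }
});
Bar bar = mapper.map(foo, Bar.class); // copies referenceID -> refID, filename -> fileName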
First option is Dozer.
Second option is the Smooks Framework, with a tweak. It would be beneficial to use Smooks' graphical mapper.
Another option would be XStream with a custom Mapper.
Maybe something like this:
public class CanonicalizedBar implements CanonicalizedBazBean {
    public String getReferenceID() {
        try {
            Method m;
            try {
                m = this.delegate.getClass().getMethod("getReferenceID");
            } catch (NoSuchMethodException e) {
                m = this.delegate.getClass().getMethod("getRefID");
            }
            ...
            return (String) m.invoke(this.delegate);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }
    // etc.
}
Although I personally have never used it, I noticed that a project called Orika is noted as having the best performance and the ability to automatically understand many such mappings.
At any rate it also supports custom mappings and uses generated code to implicitly define the adapters.
You can also define a custom mapper: if you know how to canonicalize the member names, you can use that knowledge to build a mapping that holds for all your objects. For instance:
DefaultFieldMapper myDefaultMapper = new DefaultFieldMapper() {
    public String suggestMapping(String propertyName, Type<?> fromPropertyType) {
        // split word according to camel case (apache commons lang)
        String[] words = StringUtils.splitByCharacterTypeCamelCase(propertyName);
        if (words[0].length() > 6) {
            // trim first camel-cased word of property name to 3 letters
            words[0] = words[0].substring(0, 3);
            return StringUtils.join(words);
        } else {
            // remains unchanged
            return propertyName;
        }
    }
};
mapperFactory.registerDefaultFieldMapper(myDefaultMapper);
I haven't done much with it, but you may be able to use Aspect Oriented Programming to do this.
What you should be able to do, I think, is add a method to each of the classes that internally calls the real method. See this article; about halfway down it talks about mixins.
AspectJ is probably the most popular implementation.

Magically call methods in Java

Is there some way of using magic methods in Java like there is in PHP with __call?
For instance:
class foo {
    @Setter @Getter
    int id;

    @Getter
    Map<String, ClassInFoo> myMap;

    protected class ClassInFoo {
        @Setter @Getter
        String name;
    }

    @Setter
    String defaultKey;
}
I'm using Project Lombok annotations for getter and setter methods to simplify the code.
Let's consider that my map contains several items mapped by String keys and that defaultKey defines the default one.
What I would like is to be able to call foo.getName() which would return the default name as foo.myMap.get(defaultKey).getName().
The reason I can't just write all the getters manually is that the Foo class is in fact subclassed with generics and the inner class might be different.
I sort of need something like:
function Object __call(method) {
    if (exist_method(this.method))
        return this.method();
    else
        return this.myMap.get(defaultKey).method();
}
Is this somehow possible in Java?
EDIT:
I made a more precise example of what I am trying to achieve here: https://gist.github.com/1864457
The only reason for doing this is to "shorthand" the methods in the inner class.
You absolutely can, through reflection, using features like
public Method getMethod(String name, Class<?>... parameterTypes)
which can be used to see if a class has a given method defined. But I don't see how your problem couldn't be solved with a proper use of interfaces, inheritance and overriding of methods.
Features like reflection are provided to manage certain, otherwise unsolvable, issues, but Java is not PHP, so you should try to avoid reflection when possible, since it's not in the philosophy of the language.
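For illustration, the fallback lookup the question describes could be sketched with plain reflection like this; the method would sit inside the Foo class from the question, and the name call is just a placeholder (this is not a drop-in solution):
import java.lang.reflect.Method;

public Object call(String methodName) throws Exception {
    try {
        // use the method if this class (or a superclass) declares it...
        Method m = getClass().getMethod(methodName);
        return m.invoke(this);
    } catch (NoSuchMethodException e) {
        // ...otherwise delegate to the default entry of the map
        Object target = myMap.get(defaultKey);
        return target.getClass().getMethod(methodName).invoke(target);
    }
}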
Isn't it the whole point of inheritance and overriding?
Base class:
public Object foo() {
    return this.myMap.get(defaultKey).method();
}
Subclass:
@Override
public Object foo() {
    return whateverIWant;
}

JavaBean class rules

What are the correct rules for writing a JavaBean class?
I'm confused because some books use MUST while others use SHOULD or COULD to describe the rules, e.g.:
a bean class MUST implement Serializable, or SHOULD it?
the instance variables MUST be private, or SHOULD they be?
A JavaBean is defined by its properties (i.e. its getter and setter methods), not its fields. Although the terms are used interchangeably, that is actually not correct. The Introspector mechanism ignores fields completely.
Example
Take this (awfully designed) Javabean:
public class TestBean {
    private int baz;
    private char[] phleem;

    public String getFoo() {
        return new String(phleem);
    }

    public void setFoo(final String foo) {
        this.phleem = foo.toCharArray();
    }

    public long getBar() {
        return baz;
    }

    public void setBar(final long bar) {
        this.baz = (int) bar;
    }
}
You'd think the properties are:
"baz" (int)
"phleem" (char[])
but now let's inspect it with the Javabeans introspector:
for (PropertyDescriptor descriptor : Introspector
        .getBeanInfo(TestBean.class, Object.class)
        .getPropertyDescriptors()) {
    System.out.println("Name: " + descriptor.getName() +
            ", type: " + descriptor.getPropertyType());
}
Here's the output:
Name: bar, type: long
Name: foo, type: class java.lang.String
Conclusion:
Getters and setters are what define a JavaBean property. It's a convention that they are backed by fields of the same name and type, but the fields are not actually part of the JavaBean properties (although much documentation will suggest otherwise).
On re-reading my answer: it is meant as an addendum to the other answers. If you want a short and simple answer, go with skaffman's.
It is a public class.
It has a public parameterless constructor (though it may have other constructors as well).
It implements the Serializable interface (i.e. it can be made persistent, so its state can be saved).
It has properties with "getter" and "setter" methods named by following the JavaBeans naming patterns.
It has events which follow the standard Java event model, with the registration methods named by following the JavaBeans naming patterns.
It may have other methods which do not follow the naming patterns. These methods are not exposed by a builder tool.
Adding to the previous poster (skaffman): it is always a good practice to override toString(), hashCode() and equals(), and finally to write an overloaded constructor that takes all of the fields of the class as input.
Be sure not to use other references (like List, HashMap, etc.) in the toString() and hashCode() implementations.
On a side note, Eclipse has built-in functionality to generate them for you.
A Java Bean is a Java class that should follow the following conventions:
It should have a no-arg constructor.
It should be Serializable.
It should provide methods to set and get the values of the properties, known as getter and setter methods.
In addition to all of the above, it should not extend or implement other classes or interfaces. The one relaxation is that it may implement the Serializable interface, because Serializable is a marker interface.
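Put together, a minimal class following these conventions looks like this (Person is just an illustrative name):
import java.io.Serializable;

public class Person implements Serializable {

    private String name; // private field backing the "name" property

    public Person() {
        // public no-arg constructor
    }

    public String getName() { // getter following the JavaBeans naming pattern
        return name;
    }

    public void setName(String name) { // setter following the JavaBeans naming pattern
        this.name = name;
    }
}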
