How to convert map with flat properties (prop1.prop2.prop3) into POJO - java

I have the following use case:
I have a map with these properties:
Map<String, String> args = new HashMap<String, String>() {{
    put("requester.firstName", "Alice");
    put("requester.lastName", "Smith");
    put("requester.address", "Glasgow Av");
    put("publication.type", "print");
    put("publication.subtype", "book");
}};
I need to convert it to this POJO:
public class WebRequest {
    private Requester requester;
    private Publication publication;
}

class Requester {
    private String firstName;
    private String lastName;
    private String address;
}

class Publication {
    private String type;
    private String subtype;
}
Can I use Jackson to run the conversion? If not, what is the most suitable library for this?
Thanks,
Nadiia

You might be able to use BeanUtils from Apache Commons. That framework is capable of mapping from a Map to a bean almost instantly.
Map<String, String> yourMap = new HashMap<String, String>();
yourMap.put("name", "Joan");
yourMap.put("age", "30");

YourBean p = new YourBean();
try {
    BeanUtils.populate(p, yourMap);
} catch (IllegalAccessException | InvocationTargetException e) {
    // do something...
}
What I'm not sure about is whether it automatically recognizes nested objects and their properties, but maybe you can do this splitting manually (by offering two maps, etc.).
More information can be found here: BeanUtils.
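For the nested case, BeanUtils.setProperty does understand dotted ("nested") property names, but it will not create the intermediate beans for you. A minimal sketch, assuming WebRequest, Requester and Publication all have standard getters and setters and that the nested beans are instantiated up front:
WebRequest request = new WebRequest();
request.setRequester(new Requester());
request.setPublication(new Publication());

try {
    for (Map.Entry<String, String> entry : args.entrySet()) {
        // e.g. "requester.firstName" -> request.getRequester().setFirstName("Alice")
        BeanUtils.setProperty(request, entry.getKey(), entry.getValue());
    }
} catch (ReflectiveOperationException e) {
    // handle reflection/conversion failures
}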

I found an excellent solution for my problem here on Stack Overflow. Similar question:
BeanUtils converting java.util.Map to nested bean
One of the answers was perfect:
You should use Spring's BeanWrapper class. It supports nested properties and can optionally create the inner beans for you:
BeanOne one = new BeanOne();

BeanWrapper wrapper = PropertyAccessorFactory.forBeanPropertyAccess(one);
wrapper.setAutoGrowNestedPaths(true);

Map<String, Object> map = new HashMap<>();
map.put("fieldOne", "fieldOneValue");
map.put("fieldTwo.fieldOne", "fieldOneValue");

wrapper.setPropertyValues(map);

assertEquals("fieldOneValue", one.getFieldOne());

BeanTwo two = one.getFieldTwo();
assertNotNull(two);
assertEquals("fieldOneValue", two.getFieldOne());
I hope it will help somebody with a similar problem.
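Applied to the WebRequest example from the question, a rough sketch might look like this (it assumes WebRequest, Requester and Publication have default constructors and standard getters/setters, which the auto-growing needs):
WebRequest request = new WebRequest();

BeanWrapper wrapper = PropertyAccessorFactory.forBeanPropertyAccess(request);
wrapper.setAutoGrowNestedPaths(true);   // creates requester/publication on demand
wrapper.setPropertyValues(args);        // args is the Map<String, String> from the question

// request.getRequester().getFirstName() should now be "Alice"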

How to map a Java Map to Object using Orika

I am trying to map a Java Map to a POJO but am having problems using Orika.
I have a LinkedHashMap and am trying to map it to a POJO. I've been reading https://www.baeldung.com/orika-mapping, specifically section 4.2.
This is how I define my Orika mapper:
factory.classMap(Map.class, TestDto.class)
       .field("nest['name']", "name")
       .toClassMap();
and this is the LinkedHashMap I'm trying to map:
Map<Object, Object> nest = new LinkedHashMap<>();
nest.put("name", "myname");
Map<Object, Object> obj = new LinkedHashMap<>();
obj.put("nest", nest);
and this is the POJO I'm trying to map to:
public class TestDto {

    private String name;

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}
When I do:
TestDto testDto = mapper.map(obj, TestDto.class);
I get null for name. Is it possible to map this scenario, or do I have to do more customization of Orika?
Yes, the name will return null for this mapping:
TestDto testDto = mapper.map(obj, TestDto.class);
This is because the variable in the TestDto class is called name; there is no variable called nest.
To map your POJO with a different key, please add @JsonProperty to your get() method.
Example:
@JsonProperty("nest")
public String getName() { return name; }
I figured it out. You can define a mapper like this:
factory.classMap(Map.class, TestDto.class)
       .field("nest:{get('nest')|type=Map}.name:{get('name')|type=java.lang.String}", "name")
       .toClassMap();
which is outlined here:
https://orika-mapper.github.io/orika-docs/advanced-mappings.html
This is really only needed if you don't have control over the POJO and/or do not want to make a copy of the POJO.
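For completeness, a rough sketch of how the whole thing might be wired up (assuming the class map is registered with register() on a default MapperFactory, rather than only built with toClassMap()):
MapperFactory factory = new DefaultMapperFactory.Builder().build();

factory.classMap(Map.class, TestDto.class)
       .field("nest:{get('nest')|type=Map}.name:{get('name')|type=java.lang.String}", "name")
       .register();

MapperFacade mapper = factory.getMapperFacade();
TestDto testDto = mapper.map(obj, TestDto.class);   // testDto.getName() should be "myname"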

How do I get QueryFields from custom class when defining cache?

I'm trying to get Ignite cache data through JDBC. For that purpose I define a new custom class and annotate its fields like this:
public class MyClass implements Serializable {

    @QuerySqlField(index = true)
    public Integer id;

    @QuerySqlField(index = true)
    public String records_offset;

    @QuerySqlField(index = true)
    public Integer session_id;

    ...
}
Then I start Ignite this way:
CacheConfiguration conf = new CacheConfiguration();
conf.setBackups(1);
conf.setName("test");

QueryEntity queryEntity = new QueryEntity();
queryEntity.setKeyType(Integer.class.getName());
queryEntity.setValueType(CDR.class.getName());
queryEntity.setTableName("CDR");
conf.setQueryEntities(Arrays.asList(queryEntity));

IgniteConfiguration iconf = new IgniteConfiguration();
iconf.setCacheConfiguration(conf);
iconf.setPeerClassLoadingEnabled(true);

this.ignite = Ignition.start(iconf);
this.cache = ignite.getOrCreateCache("test");
Now when I try to get data through JDBC, I get an error:
Error: class org.apache.ignite.binary.BinaryObjectException: Custom objects are not supported (state=50000,code=0)
I could define a set of fields to be able to fetch data through JDBC:
LinkedHashMap<String, String> fields = new LinkedHashMap<>();
fields.put("session_id", Integer.class.getName());
fields.put("records_offset", String.class.getName());
queryEntity.setFields(fields);
But why do I need to do this if I've already annotated the fields in the class definition?
You have three options to define the SQL schema:

1. Annotations and CacheConfiguration.setIndexedTypes:
   https://apacheignite.readme.io/docs/cache-queries#section-query-configuration-by-annotations
2. QueryEntity configuration:
   https://apacheignite.readme.io/docs/cache-queries#section-query-configuration-using-queryentity
3. Pure SQL (CREATE TABLE):
   https://apacheignite-sql.readme.io/docs/create-table
In your case, you mixed [1] and [2]: you registered the key and value types for indexing with a QueryEntity, but defined the fields with annotations. Mixing the two approaches doesn't work; you need to stick to one specific way, as you already did by registering the key and value types for indexing with the CacheConfiguration.setIndexedTypes method. So you can get rid of the QueryEntity now.
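For reference, a minimal sketch of that annotation-based variant (using the annotated MyClass and the "test" cache name from the question; swap in your actual value class) might look like:
CacheConfiguration<Integer, MyClass> conf = new CacheConfiguration<>("test");
conf.setBackups(1);
// Registers the annotated key/value classes, so the @QuerySqlField annotations are picked up;
// no QueryEntity is needed.
conf.setIndexedTypes(Integer.class, MyClass.class);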

Inject all keys and Values from property file as Map in Spring

Can someone give me an idea how to inject all dynamic keys and values from a property file and pass them as a Map to the DBConstants class using setter injection with a collection?
The keys are not known in advance and can vary.
# Example property file that stores all DB-related details
# db.properties
db.username.admin=root
db.password.admin=password12
db.username.user=admin
db.password.user=password13
DBConstants contains the map dbConstants, into which all keys and values need to be injected.
Please provide a bean definition that injects all keys and values into the Map dbConstants.
public class DBConstants {

    private Map<String, String> dbConstants;

    public Map<String, String> getDbConstants() {
        return dbConstants;
    }

    public void setDbConstants(Map<String, String> dbConstants) {
        this.dbConstants = dbConstants;
    }
}
You can create a PropertiesFactoryBean with your properties file and then inject it with the @Resource annotation where you want to use it as a map.
@Bean(name = "myProperties")
public static PropertiesFactoryBean mapper() {
    PropertiesFactoryBean bean = new PropertiesFactoryBean();
    bean.setLocation(new ClassPathResource("prop_file_name.properties"));
    return bean;
}
Usage:
@Resource(name = "myProperties")
private Map<String, String> myProperties;
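If you specifically want the values pushed into DBConstants through its setter, a possible sketch, placed in the same @Configuration class as the myProperties bean above, is the following (the PropertiesFactoryBean produces a java.util.Properties, so it is copied into a Map here):
@Bean
public DBConstants dbConstants(@Qualifier("myProperties") Properties props) {
    Map<String, String> map = new HashMap<>();
    for (String key : props.stringPropertyNames()) {
        map.put(key, props.getProperty(key));
    }
    DBConstants constants = new DBConstants();
    constants.setDbConstants(map);   // setter injection of all keys and values
    return constants;
}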
You can use @Value.
Properties file:
dbConstants={key1:'value1',key2:'value2'}
Java code:
@Value("#{${dbConstants}}")
private Map<String, String> dbConstants;
You have to include spaces, like this:
hash.key = {indoor: 'reading', outdoor: 'fishing'}
Then read the map as below:
@Value("#{${hash.key}}")
private Map<String, String> hobbies;

Runtime annotations design and performance

I have a Java API which performs an external resource lookup and then maps the values to a POJO. To do this, the API needs the field names of the POJO as string values, something like:
public <F> F populatePojoFields(String primaryField, String secondaryField);
This works fine; however, passing the POJO field names as Strings to the API does not feel right. I was able to change this by writing marker annotations for the POJO, so now it looks like this:
public class POJO {

    @Primary   // custom marker annotation
    private int mojo;

    @Secondary // custom marker annotation
    private String jojo;
}

String primaryField = getFieldNameUsingReflection(POJO.class, Primary.class);
String secondaryField = getFieldNameUsingReflection(POJO.class, Secondary.class);
POJO pojo = populatePojoFields(primaryField, secondaryField);
This way I don't have to keep track of string values; I can just add marker annotations to the POJO fields. This works fine, but I'm worried about performance. Is this a standard way to do things, given that keeping hardcoded string values is more efficient than looking up the field names every time we need to call the API? Is there a better way to do this?
If you call getFieldNameUsingReflection often, consider caching the result of this call.
You can use a singleton class with an internal Map, with code like the following:
public class SingletonMapPrimarySecondary {

    Map<Class<?>, String> mapPrimary;
    Map<Class<?>, String> mapSecondary;

    // TODO: Handle mapPrimary and mapSecondary creation and singleton pattern

    public String getPrimary(Class<?> clazz) {
        String primary = mapPrimary.get(clazz);
        if (primary == null) {
            primary = getFieldNameUsingReflection(clazz, Primary.class);
            mapPrimary.put(clazz, primary);
        }
        return primary;
    }

    public String getSecondary(Class<?> clazz) {
        // TODO: Similar to getPrimary
        return null;
    }
}
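The question does not show getFieldNameUsingReflection itself; a possible implementation is sketched below (it uses java.lang.reflect.Field and java.lang.annotation.Annotation, and the marker annotations must be declared with @Retention(RetentionPolicy.RUNTIME) for reflection to see them):
static String getFieldNameUsingReflection(Class<?> pojoClass, Class<? extends Annotation> marker) {
    for (Field field : pojoClass.getDeclaredFields()) {
        if (field.isAnnotationPresent(marker)) {
            return field.getName();
        }
    }
    throw new IllegalArgumentException(
            "No field annotated with @" + marker.getSimpleName() + " in " + pojoClass.getName());
}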

Jackson : avoiding exceptions due to unmodeled fields

I have some beans, and they model (explicitly) the core data types in a JSON. However, sometimes the JSONs I'm reading have extra data in them.
Is there a way to annotate/define a bean in Jackson so that it uses explicit field names for some of the fields (the ones I know of, for example), while cramming the extra fields into a map/list?
Yes there is. Assuming you really do want to retain all the extra/unrecognized parameters, do something like this:
public class MyBean {

    private String field1;
    private String field2;
    private Integer field3;

    private Map<String, Object> unknownParameters;

    public MyBean() {
        super();
        unknownParameters = new HashMap<String, Object>(16);
    }

    // Getters & Setters here

    // Handle unknown deserialization parameters
    @JsonAnySetter
    protected void handleUnknown(String key, Object value) {
        unknownParameters.put(key, value);
    }
}
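A quick sketch of the catch-all behaviour (it assumes a Jackson 2.x ObjectMapper and a getUnknownParameters() accessor among the omitted getters/setters):
public static void main(String[] args) throws Exception {
    ObjectMapper mapper = new ObjectMapper();

    // "field4" is not a declared property of MyBean, so it is routed to handleUnknown(...)
    String json = "{\"field1\":\"a\",\"field2\":\"b\",\"field3\":3,\"field4\":\"extra\"}";
    MyBean bean = mapper.readValue(json, MyBean.class);

    System.out.println(bean.getUnknownParameters().get("field4"));   // prints "extra"
}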
To configure global handling of unknown parameters, you can define an implementation of DeserializationProblemHandler and register it globally with the ObjectMapper configuration:
DeserializationProblemHandler handler = new MyDeserializationProblemHandler();
objectMapper.getDeserializationConfig().addHandler(handler);
If you find you really do not care about the unknown parameters, then you can simply turn them off: on a per-class basis with @JsonIgnoreProperties(ignoreUnknown = true), or globally by configuring the ObjectMapper:
objectMapper.configure(DeserializationConfig.Feature.FAIL_ON_UNKNOWN_PROPERTIES, false);
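Note that the configuration calls above use the Jackson 1.x API; on Jackson 2.x the equivalent switch and handler registration would be:
objectMapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
objectMapper.addHandler(handler);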
