I need to use two different database catalogs in one Java app. For now I'm setting the values with static text:
@Table(name = "jhi_user", catalog = "some_catalog")
@Table(name = "jhi_persistent_audit_event", catalog = "some_other_catalog")
I want to read those values from a class marked with the @ConfigurationProperties annotation, but I have no idea how to do it correctly.
I've tried creating an enum with a static nested class, but that only works with constant values.
public enum MyData {
    DB1(Constants.DB1),
    DB2(Constants.DB2);

    private final String catalog;

    MyData(String catalog) {
        this.catalog = catalog;
    }

    public static class Constants {
        public static final String DB1 = ApplicationProperties.getDb1(); // <-- attribute value must be constant
        public static final String DB2 = "db2"; // <-- works
    }
}
Is there any way to change the database catalog for different classes without recompiling the code? I suppose I could leave it blank and then change it with reflection, but is there a better way?
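Since annotation attributes must be compile-time constants, one workaround (my suggestion, not from this thread) is to leave the catalog out of the annotation and remap it at bootstrap time with a Hibernate PhysicalNamingStrategy, which is free to read externalized configuration. A minimal sketch; the `resolveCatalog` lookup is a hypothetical hook standing in for a @ConfigurationProperties bean:

```java
import org.hibernate.boot.model.naming.Identifier;
import org.hibernate.boot.model.naming.PhysicalNamingStrategyStandardImpl;
import org.hibernate.engine.jdbc.env.spi.JdbcEnvironment;

public class ConfigurableCatalogStrategy extends PhysicalNamingStrategyStandardImpl {

    // Hypothetical lookup into externalized config (e.g. populated from a
    // @ConfigurationProperties bean at startup); not part of the original post.
    static String resolveCatalog(String logicalName) {
        return System.getProperty("app.catalog." + logicalName);
    }

    @Override
    public Identifier toPhysicalCatalogName(Identifier name, JdbcEnvironment context) {
        if (name == null) {
            return null;
        }
        // Replace the logical catalog name with the configured one, if any.
        String configured = resolveCatalog(name.getText());
        return configured == null ? name : Identifier.toIdentifier(configured);
    }
}
```

In Spring Boot such a strategy would be registered via the `spring.jpa.hibernate.naming.physical-strategy` property, so the entities themselves stay free of hard-coded catalogs.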
Related
I have a scenario where I need to maintain some key-value pairs in the code. The key names are to be read from a properties file, which I plan to do like this in Spring Boot:
@Data
@ConfigurationProperties(prefix = "project.keysList")
public class KeyNames {
    private String key1Name;
    private String key2Name;
    ...
}
I'll get the corresponding values at runtime. How can I store the values elegantly? These are the options I'm considering:
Define one more model object called KeyValues and store them there.
Define a HashMap called KeyValues and store the key names and corresponding values.
Is there a better way than these two, where I don't need to define a new object for storing the values I get at runtime?
I'd appreciate any help with this.
Please ask for clarification in case my question is not clear.
Define your keys and values in the application.properties file under the resources folder, and annotate the fields of your configuration class with @Value.
For example:
application.properties file
key1.name=valuename
key2.name=valuename
============================
@ConfigurationProperties(prefix = "project.keysList")
public class KeyNames {

    @Value("${key1.name}")
    private String key1Name;

    @Value("${key2.name}")
    private String key2Name;
}
@Data
@ConfigurationProperties(prefix = "custom")
public class CustomProperties {
    private Long id;
    private String name;
    private String phone;
}
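If the goal is to avoid declaring one field (or an extra model class) per key, Spring Boot can also bind a whole group of properties into a Map. A minimal sketch, assuming relaxed binding of the question's `project.keysList` prefix; the example property names are illustrative:

```java
import java.util.HashMap;
import java.util.Map;
import org.springframework.boot.context.properties.ConfigurationProperties;

// Binds every property under "project.keysList.*" into a single Map, so no
// dedicated field per key is needed and new keys require no code change.
@ConfigurationProperties(prefix = "project")
public class KeyNames {

    // application.properties (illustrative):
    //   project.keysList.key1=value1
    //   project.keysList.key2=value2
    private Map<String, String> keysList = new HashMap<>();

    public Map<String, String> getKeysList() {
        return keysList;
    }

    public void setKeysList(Map<String, String> keysList) {
        this.keysList = keysList;
    }
}
```

The runtime values can then be stored in the same map (or a second `Map<String, String>`), avoiding a new model object entirely.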
I'm extending code from an existing Java class that serializes to and from XML. The existing class is somewhat like this:
@Getter
@JacksonXmlRootElement(localName = "element")
public class Element {

    @JacksonXmlProperty(localName = "type", isAttribute = true)
    private String type;
}
The type field has a finite set of possible values so I created an enum Type with all possible values and (to avoid breaking existing functionality) added a new field to the class, like so:
@Getter
@JacksonXmlRootElement(localName = "element")
public class Element {

    @JacksonXmlProperty(localName = "type", isAttribute = true)
    private String type;

    @JacksonXmlProperty(localName = "type", isAttribute = true)
    @JsonDeserialize(using = TypeDeserializer.class)
    private Type typeEnum;
}
This gives me the following error:
Multiple fields representing property "type": Element#type vs Element#typeEnum
I understand why this is a problem: when Jackson tries to serialize the class, two fields map onto the same attribute in the output XML.
I tried adding @JsonIgnore on one of the fields and it gets rid of the error, but it has the side effect of not populating the ignored field either. Is there a way to annotate that a field should be deserialized (while reading XML) but not serialized (while writing XML)?
I really need to keep both fields in the class to not disturb any legacy code that might be using the first field, but at the same time allow newer code to leverage the second field.
Thank you!
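One possible approach (my own sketch, not an answer from this thread) is to map only one property to the XML attribute and derive the enum in its setter, so both fields are populated on read but only one is written. The inline `Type` enum and its `fromXml` factory are hypothetical stand-ins for the poster's `Type` and `TypeDeserializer`:

```java
import com.fasterxml.jackson.annotation.JsonIgnore;
import com.fasterxml.jackson.dataformat.xml.annotation.JacksonXmlProperty;
import com.fasterxml.jackson.dataformat.xml.annotation.JacksonXmlRootElement;

@JacksonXmlRootElement(localName = "element")
public class Element {

    // Hypothetical stand-in for the poster's Type enum and TypeDeserializer.
    public enum Type {
        FOO, BAR;
        static Type fromXml(String raw) {
            return raw == null ? null : Type.valueOf(raw.toUpperCase());
        }
    }

    private String type;
    private Type typeEnum;

    // The only property Jackson maps to the "type" attribute, so there is no
    // duplicate-property conflict in either direction.
    @JacksonXmlProperty(localName = "type", isAttribute = true)
    public String getType() {
        return type;
    }

    public void setType(String type) {
        this.type = type;                   // legacy callers keep working
        this.typeEnum = Type.fromXml(type); // newer code gets the enum
    }

    // Derived field: readable by new code, never serialized.
    @JsonIgnore
    public Type getTypeEnum() {
        return typeEnum;
    }
}
```

The trade-off of this sketch is that `typeEnum` becomes derived state rather than an independently mapped property, which may or may not fit the existing deserializer logic.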
I followed everything that is outlined here - https://github.com/derjust/spring-data-dynamodb/wiki/Use-Hash-Range-keys. But still no luck.
I have a DynamoDB table with a hash key and a sort key.
Here is my entity class RecentlyPlayed.class
@DynamoDBTable(tableName = "some-table")
public class RecentlyPlayed {

    @Id
    private RecentlyPlayedId recentlyPlayedId;

    // ----- Constructor methods -----

    @DynamoDBHashKey(attributeName = "keyA")
    // Getter and setter

    @DynamoDBRangeKey(attributeName = "keyB")
    // Getter and setter
}
Here is my key class RecentlyPlayedId.class
public class RecentlyPlayedId implements Serializable {
    private static final long serialVersionUID = 1L;

    private String keyA;
    private String keyB;

    public RecentlyPlayedId(String keyA, String keyB) {
        this.keyA = keyA;
        this.keyB = keyB;
    }

    @DynamoDBHashKey
    // Getter and setter

    @DynamoDBRangeKey
    // Getter and setter
}
Here is my repository interface RecentlyPlayedRepository
@EnableScan
public interface RecentlyPlayedRepository extends CrudRepository<RecentlyPlayed, RecentlyPlayedId> {

    List<RecentlyPlayed> findAllByKeyA(@Param("keyA") String keyA);

    // Finding the entry for keyA with the highest keyB
    RecentlyPlayed findTop1ByKeyAOrderByKeyBDesc(@Param("keyA") String keyA);
}
I am trying to save an object like this
RecentlyPlayed rp = new RecentlyPlayed(...);
dynamoDBMapper.save(rp); // Throws that error
recentlyPlayedRepository.save(rp); // Also throws the same error
I am using Spring v2.0.1.RELEASE. The wiki in the original docs warns about this error and describes how to mitigate it. I did exactly what it says, but still no luck.
The link to that wiki is here - https://github.com/derjust/spring-data-dynamodb/wiki/Use-Hash-Range-keys
DynamoDB only supports primitive data types; it does not know how to convert your complex field (recentlyPlayedId) into a primitive such as a String.
To show that this is the case, you can add the @DynamoDBIgnore annotation to your recentlyPlayedId attribute like this:
@DynamoDBIgnore
private RecentlyPlayedId recentlyPlayedId;
You also need to remove the @Id annotation.
Your save function will then work, but the recentlyPlayedId will not be stored in the item. If you do want to save this field, you need to use the @DynamoDBTypeConverted annotation and write a converter class. The converter class defines how to convert the complex field into a String, and how to convert the String back into the complex field.
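A minimal sketch of such a converter, assuming RecentlyPlayedId exposes getters for its two string keys; the delimiter and format are my own choice, not from the answer:

```java
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBTypeConverter;

// Converts RecentlyPlayedId to and from a single String attribute.
public class RecentlyPlayedIdConverter
        implements DynamoDBTypeConverter<String, RecentlyPlayedId> {

    @Override
    public String convert(RecentlyPlayedId id) {
        // "|" as delimiter is an arbitrary choice; pick one that cannot
        // appear inside the keys themselves.
        return id.getKeyA() + "|" + id.getKeyB();
    }

    @Override
    public RecentlyPlayedId unconvert(String value) {
        String[] parts = value.split("\\|", 2);
        return new RecentlyPlayedId(parts[0], parts[1]);
    }
}
```

The field would then be annotated with @DynamoDBTypeConverted(converter = RecentlyPlayedIdConverter.class) instead of @DynamoDBIgnore.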
Removing the getters/setters for the @Id field fixed the problem for me. This is suggested in https://github.com/derjust/spring-data-dynamodb/wiki/Use-Hash-Range-keys
not supported; requires @DynamoDBTyped or @DynamoDBTypeConverted
I was getting this error when I defined a model class with a JsonNode field. I converted it to Map<String, String> and now it is working fine.
In my DB entities I have byte[] fields:
import javax.persistence.*;
/**
* Account
*/
@Entity
@Table(name = TABLE)
public class Account {

    public static final String TABLE = "Account";
    ...
    public static final String COLUMN_PASSWORD_HASH = "passwordHash";

    @Column(name = COLUMN_PASSWORD_HASH, nullable = false)
    public byte[] passwordHash;
    ...
I want to keep my DB entities clear of any vendor dependency, so I use only JPA annotations and try to avoid any ORMLite or Hibernate annotations.
However, when trying to save such an entity with ORMLite I get the following error:
java.sql.SQLException: ORMLite does not know how to store class [B for field 'passwordHash'. byte[] fields must specify dataType=DataType.BYTE_ARRAY or SERIALIZABLE
As far as I understand, for some reason ORMLite does not prefer BYTE_ARRAY for byte[] and requires marking the fields with the com.j256.ormlite.field.DataType ORMLite annotation, which introduces an explicit dependency on the ormlite-core module; this is what I want to avoid (I have a Hibernate DAO impl and an ORMLite DAO impl and I don't want to mix everything).
My original intention was to configure ORMLite to prefer BYTE_ARRAY for byte[] fields. How can I do it? Should I introduce a custom persister? Any other suggestions?
I solved it by adding the following custom data persister (without adding a dependency on ormlite-core to my entities, as I wanted):
package name.antonsmirnov.zzz.dao.types;
import com.j256.ormlite.field.SqlType;
import com.j256.ormlite.field.types.ByteArrayType;
/**
 * ByteArray type that prefers storing byte[] as BYTE_ARRAY
 */
public class PreferByteArrayType extends ByteArrayType {

    public PreferByteArrayType() {
        super(SqlType.BYTE_ARRAY, new Class[] { byte[].class });
    }

    private static final PreferByteArrayType singleTon = new PreferByteArrayType();

    public static PreferByteArrayType getSingleton() {
        return singleTon;
    }
}
Register it just like any other custom persister:
DataPersisterManager.registerDataPersisters(PreferByteArrayType.getSingleton());
Note that you can't use the default ByteArrayType because it has an empty classes array, which causes it to become the persister for autogenerated fields and throws an exception saying byte[] fields can't be id fields.
I've verified that it uses the BLOB field type for MySQL:
com.mysql.jdbc.Field#39a2bb97[catalog=test_db,tableName=account,originalTableName=account,columnName=passwordHash,originalColumnName=passwordHash,mysqlType=252(FIELD_TYPE_BLOB),flags= BINARY BLOB, charsetIndex=63, charsetName=ISO-8859-1]
Suppose you have this entity:
class Foo {
    String propA;
    String propB;
}
and you want to serialize it for one API like:
{ "propA": "ola", "propB": "Holla" }
and for another API like:
{ "fooPropA": "ola", "fooPropB": "Holla" }
How can this be achieved with Jackson while using the same entity? Creating two different entities is not an option :)
There are several ways in which you can achieve this. You can enable a custom serializer (already covered by @se_vedem), register an annotation introspector which changes the property names for the corresponding class, and so on.
However, if you are willing to only add a string prefix to all the property names, then the Jackson property name strategy is probably the best fit. The naming strategy class has the access to the serialized object type information, so you can make a decision whether to change the property name or not.
Here is an example using a custom annotation that defines the prefix:
public class JacksonNameStrategy {

    @Retention(RetentionPolicy.RUNTIME)
    public static @interface PropertyPrefix {
        String value();
    }

    @PropertyPrefix("foo_")
    public static class Foo {
        public String propA;
        public String propB;

        public Foo(String propA, String propB) {
            this.propA = propA;
            this.propB = propB;
        }
    }

    public static void main(String[] args) throws JsonProcessingException {
        ObjectMapper mapper = new ObjectMapper();
        mapper.setPropertyNamingStrategy(new MyPropertyNamingStrategyBase());
        System.out.println(mapper.writeValueAsString(new Foo("old", "Holla")));
    }

    private static class MyPropertyNamingStrategyBase extends PropertyNamingStrategy {
        @Override
        public String nameForField(MapperConfig<?> config,
                                   AnnotatedField field,
                                   String defaultName) {
            PropertyPrefix ann = field.getDeclaringClass().getAnnotation(PropertyPrefix.class);
            if (ann != null) {
                return ann.value() + defaultName;
            }
            return super.nameForField(config, field, defaultName);
        }
    }
}
Output:
{"foo_propA":"old","foo_propB":"Holla"}
In your API method you choose between two ObjectMapper instances: one with the default naming strategy and one with the custom one.
You can achieve this by using the modules feature of Jackson.
Basically, each API would have its own ObjectMapper, and they would be configured with different modules. This way you can create two serializers for the same class and register each on the appropriate module. More can be read here: http://wiki.fasterxml.com/JacksonFeatureModules
However, be aware that serializers are resolved in a particular order. Jackson first looks for annotated ones; only if none is found does it fall back to those registered by modules. So, for example, if your class is annotated with a serializer, that serializer (FooSerializer) would be chosen instead of the one configured in the module (MySecondFooSerializer).
@JsonSerialize(using = FooSerializer.class)
class Foo {
    String propA;
    String propB;
}
module.addSerializer(Foo.class, new MySecondFooSerializer());
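A minimal, self-contained sketch of the module approach described above; the serializer class name and the `foo` prefix are illustrative, not from the answer:

```java
import java.io.IOException;
import com.fasterxml.jackson.core.JsonGenerator;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.SerializerProvider;
import com.fasterxml.jackson.databind.module.SimpleModule;
import com.fasterxml.jackson.databind.ser.std.StdSerializer;

public class TwoApisExample {

    static class Foo {
        String propA;
        String propB;

        Foo(String propA, String propB) {
            this.propA = propA;
            this.propB = propB;
        }
    }

    // Serializer for the second API: same entity, prefixed property names.
    static class PrefixedFooSerializer extends StdSerializer<Foo> {
        PrefixedFooSerializer() {
            super(Foo.class);
        }

        @Override
        public void serialize(Foo foo, JsonGenerator gen, SerializerProvider provider)
                throws IOException {
            gen.writeStartObject();
            gen.writeStringField("fooPropA", foo.propA);
            gen.writeStringField("fooPropB", foo.propB);
            gen.writeEndObject();
        }
    }

    public static void main(String[] args) throws Exception {
        SimpleModule module = new SimpleModule();
        module.addSerializer(Foo.class, new PrefixedFooSerializer());

        // This mapper serves the second API; the first API would use a plain
        // ObjectMapper without the module.
        ObjectMapper secondApiMapper = new ObjectMapper();
        secondApiMapper.registerModule(module);

        System.out.println(secondApiMapper.writeValueAsString(new Foo("ola", "Holla")));
        // {"fooPropA":"ola","fooPropB":"Holla"}
    }
}
```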