Spring Data Solr - custom converter for dynamic fields - Java

I have some fields in Solr that I'm currently mapping to Java like so:
#Field("file_*") #Dynamic private final Map<String, List<String>> fileDetails;
this matches a number of dynamic fields like:
file_name_1
file_type_1
file_size_1
...
file_name_2
file_type_2
file_size_2
...
...
I then convert that fileDetails map into a FileAttachments object to be used elsewhere in the code.
As a result, this class currently contains code specific to how the data is stored in Solr, just to convert it into the POJO.
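For illustration, a minimal sketch of the kind of Solr-specific conversion code this forces into the class (the FileAttachment properties and the suffix-parsing logic are assumptions based on the field names above; java.util imports omitted):

private FileAttachments toFileAttachments(Map<String, List<String>> fileDetails) {
    // Group "file_name_1", "file_type_1", "file_size_1", ... by their numeric suffix.
    Map<Integer, FileAttachment> byIndex = new TreeMap<>();
    for (Map.Entry<String, List<String>> entry : fileDetails.entrySet()) {
        String key = entry.getKey();                       // e.g. "file_name_1"
        int lastUnderscore = key.lastIndexOf('_');
        int index = Integer.parseInt(key.substring(lastUnderscore + 1));
        String property = key.substring("file_".length(), lastUnderscore);
        String value = entry.getValue().get(0);

        FileAttachment attachment = byIndex.computeIfAbsent(index, i -> new FileAttachment());
        switch (property) {
            case "name": attachment.setName(value); break;
            case "type": attachment.setType(value); break;
            case "size": attachment.setSize(Long.parseLong(value)); break;
        }
    }
    return new FileAttachments(new ArrayList<>(byIndex.values()));
}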
Ideally, I'd want to map it more like so:
#Field("file_*") private final FileAttachments attachments;
or even better (but possibly even harder to accomplish):
#Field("file_*") private final List<FileAttachment> attachments;
Then (aside from annotations) the class knows nothing about how the data is stored in Solr.
However, the problem is that when I try to use a custom converter for the field (as detailed here), the converter doesn't get passed a Map<String, List<String>> instance holding all the fields that matched file_*; instead, it simply gets passed a String instance holding the value of the first field that matched the pattern.
Is there any way to get it to pass the Map<String, List<String>> instance, or any other thoughts as to how I can get this conversion to happen outside of the class?
In an ideal world I'd change the Solr document so it was a child object (well, a list of them), but that isn't possible as it's part of a legacy system.
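For reference, this is the shape of converter I'd like to be able to register, using Spring's Converter interface (a sketch; FileAttachments.fromDynamicFields is a hypothetical factory wrapping the grouping logic shown earlier, and in practice the converter currently receives a String rather than this map):

import org.springframework.core.convert.converter.Converter;

public class FileDetailsToAttachmentsConverter
        implements Converter<Map<String, List<String>>, FileAttachments> {

    @Override
    public FileAttachments convert(Map<String, List<String>> source) {
        // Desired behaviour: receive every matched file_* field at once,
        // so the entity class stays free of Solr storage details.
        return FileAttachments.fromDynamicFields(source);
    }
}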

Related

When serializing my Java map key-value pair to JSON, how can I add JSON keys?

I have a Java map: Map<String, String> map = Collections.singletonMap("dummyKey", "dummyValue").
When I use ObjectMapper to serialize it, I get {"dummyKey":"dummyValue"}.
Is there any way that I can insert JSON keys in the process of serialization so my JSON string looks like {"key":"dummyKey", "value":"dummyValue"}?
I'd like to be able to do this without having to create a separate class, if possible. I'm looking for something like the ALLOW_EXPLICIT_PROPERTY_RENAMING mapper feature.
I looked at the other mapper features provided, but can't find anything useful. I don't want to create a different class because I want to retain the ability to be able to use it as a map (look up by key) when I deserialize it later. I'm looking for something that'll only change the way serialization and deserialization happen.
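One workaround that avoids a separate bean class (a sketch; note it changes the JSON shape to a list of key/value objects, so deserialization needs the reverse transform):

import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.*;
import java.util.stream.Collectors;

Map<String, String> map = Collections.singletonMap("dummyKey", "dummyValue");

// Rewrite each entry as a {"key": ..., "value": ...} object before serializing.
List<Map<String, String>> entries = map.entrySet().stream()
        .map(e -> {
            Map<String, String> wrapped = new LinkedHashMap<>();
            wrapped.put("key", e.getKey());
            wrapped.put("value", e.getValue());
            return wrapped;
        })
        .collect(Collectors.toList());

String json = new ObjectMapper().writeValueAsString(entries);
// json == [{"key":"dummyKey","value":"dummyValue"}]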

Working with a heterogeneous map as if it were typed in Kotlin/Java

I read some items from the DB as JSON and create a Map<String, Any> from each of them. I want to process the items as Maps, because instantiating objects using Jackson ObjectMapper's .convertValue took too long (tenths of a millisecond more than with a Map, per item). However, the type of the map is not uniform, and when I need to, for example, get some nested prop (e.g. map.child.prop), I have to go like this:
val prop = (parentMap["child"] as Map<String, Any>)["prop"]
I am aware there is Apache Commons' PropertyUtils.getProperty(map, "map.child.prop"), but it is not exactly what I want. The object is quite complex and I need to work with the whole map as if it were typed. Is there such an option? The casting gets crazy as the item is a big map, but it is currently the only solution I have.
What I tried:
Converting the map to a class: takes too long.
Using the typed map library https://github.com/broo2s/typedmap: the map type seems to be built incrementally as you add each key/value, but I am not building it incrementally, I am converting from JSON.
Using Kotlin delegated properties: works well for a flat structure, but needs casting when going deeper.
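Absent a fully typed view, one way to at least confine the casting to a single place is a small path-walking helper (a sketch in Java to match the rest of this page; the getPath name is made up):

import java.util.Map;

// Walks nested maps by key path so the unchecked cast appears only once.
@SuppressWarnings("unchecked")
static Object getPath(Map<String, Object> root, String... keys) {
    Object current = root;
    for (String key : keys) {
        if (!(current instanceof Map)) {
            throw new IllegalArgumentException("No nested map at key: " + key);
        }
        current = ((Map<String, Object>) current).get(key);
    }
    return current;
}

// Usage: Object prop = getPath(parentMap, "child", "prop");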

Apache Flink: How do I use a stream of Java Map (or Map containing DTOs)?

I am using Flink and have a stream of JSON strings arriving in my system with dynamically changing fields and nested fields. So I can't model this incoming JSON as a static POJO and I have to rely on a Map instead.
My first transformation is to convert the JSON string stream into a Map object stream using GSON parsing and then I wrap the map in a DTO called Data.
(inside the first map transformation)
LinkedTreeMap map = gson.fromJson(input, LinkedTreeMap.class);
Data data = new Data(map); // Data has getters, setters for the map and implements Serializable
The problem arises when, right after this transformation, I attempt to feed the resultant stream into my custom Flink sink: the invoke function does not get called in the sink. The sink works, however, if I change from this Map-containing DTO to a primitive or a regular DTO with no Map.
My DTO looks like this:
public class FakeDTO {
    private String id;
    private LinkedTreeMap map; // com.google.gson.internal
    // getters and setters
    // constructors, empty and with fields
}
I have tried the two following solutions:
env.getConfig().addDefaultKryoSerializer(LinkedTreeMap.class, MapSerializer.class);
env.getConfig().disableGenericTypes();
Any expert advice I could use in this situation?
I was able to resolve this issue. In my Flink logs I saw that a Kryo class called ReflectionSerializerFactory could not be found. I updated the Kryo version in Maven and used a Map type for my map that the Flink documentation says Flink supports.
Just make sure to have generic types specified in your code and add getters and setters inside your POJOs for Maps.
I also use the .returns(xyz.class) type declaration to avoid the effects of type erasure.
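Putting that together, a minimal sketch of the shape that ended up working (the class and field names are assumptions; the key points are the concrete generic Map type, a no-arg constructor, getters/setters, and the explicit .returns(...) hint):

import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;

public class Data implements Serializable {
    // A concrete, generically typed Map that Flink's type extraction can handle.
    private Map<String, Object> map = new HashMap<>();

    public Data() {}
    public Data(Map<String, Object> map) { this.map = map; }

    public Map<String, Object> getMap() { return map; }
    public void setMap(Map<String, Object> map) { this.map = map; }
}

// In the pipeline, declare the produced type explicitly to survive type erasure,
// e.g.: DataStream<Data> stream = jsonStrings.map(json -> new Data(parse(json))).returns(Data.class);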

Key-Value on top of Appengine

Although App Engine is already schema-less, the entities to be stored in the Datastore still need to be defined through the DataNucleus persistence layer. So I am thinking of a way to get around this: a layer that stores key-value pairs at runtime, instead of compile-time entities.
The way this is done with Redis is by creating a key like this:
private static final String USER_ID_FORMAT = "user:id:%s";
private static final String USER_NAME_FORMAT = "user:name:%s";
From the docs, the Redis types are: String, Linked-list, Set, Sorted set. I am not sure if there are more.
As far as the GAE datastore is concerned, a String "Key" and a "Value" have to be the entity that will be stored.
Like:
public class KeyValue {
    private String key;
    private Value value; // value can be a String, Linked-list, Set or Sorted set etc.
    // Code omitted
}
The justification of this scheme is rooted to the Restful access to the datastore (that is provided by Datanucleus-api-rest)
Using this REST API, to persist an object or entity:
POST http://datanucleus.appspot.com/dn/guestbook.Greeting
{"author":null,
"class":"guestbook.Greeting",
"content":"test insert",
"date":1239213923232}
The problem with this approach is that in order to persist an entity, the actual class needs to be defined at compile time; with a key-value store mechanism, by contrast, we can simplify the call:
POST http://datanucleus.appspot.com/dn/org.myframework.KeyValue
{"class":"org.myframework.KeyValue",
"key":"user:id:johnsmith;followers",
"value":"the_list"}
Passing a single string as the "value" is fairly easy; I can use a JSON array for a list, set or sorted list. The real question is how to persist the different types of data passed into the interface. Should there be multiple KeyValue entities, each representing one of the basic types it supports: KeyValueString, KeyValueList, etc.?
Looks like you're using a JSON based REST API, so why not just store Value as a JSON string?
You do not need to use the Datanucleus layer, or any of the other fine ORM layers (like Twig or Objectify). Those are optional, and are all based on the low-level API. If I interpret what you are saying properly, perhaps it already has the functionality that you want. See: https://developers.google.com/appengine/docs/java/datastore/entities
Datanucleus is a specific framework that runs on top of GAE. You can however access the database at a lower, less structured, more key/value-like level - the low-level API. That's the lowest level you can access directly.
BTW, the low-level "GAE datastore" internally runs on 6 global Google Megastore tables, which in turn are hosted on the Google Bigtable database system.
Saving JSON as a String works fine. But you will need ways to retrieve your objects other than by ID. That is, you need a way to index your data to support any kind of useful query on it.
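As a concrete illustration of that low-level route (a sketch; the KeyValue kind, the key string, and the property name are assumptions):

import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.KeyFactory;
import com.google.appengine.api.datastore.Text;

DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();

// Store the value as a JSON string under a caller-chosen key name.
Entity kv = new Entity("KeyValue", "user:id:johnsmith;followers");
kv.setProperty("value", new Text("[\"follower1\",\"follower2\"]")); // Text fits large strings
datastore.put(kv);

// Read it back by key (get throws EntityNotFoundException if absent).
Entity fetched = datastore.get(KeyFactory.createKey("KeyValue", "user:id:johnsmith;followers"));
String json = ((Text) fetched.getProperty("value")).getValue();

Note that Text properties are unindexed, which is exactly the trade-off the last answer points out: you will need some other way to query the data.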

Jackson - suppressing serialization (write) of properties dynamically

I am trying to convert a Java object to JSON in Tomcat/Jersey using Jackson, and I want to suppress serialization (write) of certain properties dynamically.
I can use @JsonIgnore, but I want to make the ignore decision at runtime. Any ideas?
So, as an example, in the class below I want to suppress the "id" field when I serialize the User object to JSON:
new ObjectMapper().writeValueAsString(user);
class User {
    private String id = null;
    private String firstName = null;
    private String lastName = null;
    // getters
    // setters
}
Yes, JSON Views are the way to go.
If you, for example, need to let the client decide which fields to marshal, this example might help: http://svn.codehaus.org/jackson/tags/1.6/1.6.3/src/sample/CustomSerializationView.java
Check ObjectMapper.configure(SerializationConfig.Feature f, boolean value) and the org.codehaus.jackson.annotate.JsonIgnore annotation.
This will work only when you want all instances of a certain type to ignore id on serialization. If you truly want dynamic (i.e. per-instance) customization, you will probably have to hack the Jackson library yourself.
I don't see any way of doing that. If you need to dynamically decide which properties are marshalled, then I suggest you manually construct a Map of keys to values for your objects, and then pass that Map to Jackson, rather than passing the User object directly.
Have you tried using JSON Views?
Views provide an annotation-based mechanism for defining different profiles, so if you just need slightly differing views for different users, this could work for you.
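For concreteness, a sketch of the JSON Views approach, written against the newer Jackson 2 (com.fasterxml) API rather than the Codehaus 1.x API referenced above (the Views class and view names are made up):

import com.fasterxml.jackson.annotation.JsonView;
import com.fasterxml.jackson.databind.ObjectMapper;

class Views {
    static class Public {}
    static class Internal extends Public {}
}

class User {
    @JsonView(Views.Internal.class) private String id;
    @JsonView(Views.Public.class) private String firstName;
    @JsonView(Views.Public.class) private String lastName;
    // getters and setters omitted
}

// Pick the view at runtime, per call:
// new ObjectMapper().writerWithView(Views.Public.class).writeValueAsString(user)
// writes firstName and lastName but suppresses id.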
