I have a field of type "text" in MySQL and I am storing JSON data in it.
E.g.: ["android_app","iphone_app","windows_app"]
I interact with MySQL through Hibernate, and when reading this field I deserialize it into an ArrayList in Java.
My question is: is this the best and fastest way to handle such cases, or are there better ways of doing it?
If you're able to take advantage of some JPA 2.1 features, you could use an AttributeConverter to handle this for you automatically, without having to deal with it in your business code.
@Entity
public class YourEntity {
    // other stuff

    @Convert(converter = StringArrayToJsonConverter.class)
    private List<String> textValues;
}
Then you just define the converter as follows:
@Converter
public class StringArrayToJsonConverter
        implements AttributeConverter<List<String>, String> {

    @Override
    public String convertToDatabaseColumn(List<String> list) {
        // convert the list to a JSON string here and return it
    }

    @Override
    public List<String> convertToEntityAttribute(String dbValue) {
        // convert the JSON string to the array list here and return it
    }
}
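For completeness, here is one way those method bodies might be filled in. This is a sketch that assumes Jackson is on the classpath; the original answer leaves the choice of JSON library open.

import java.util.List;
import javax.persistence.AttributeConverter;
import javax.persistence.Converter;
import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;

@Converter
public class StringArrayToJsonConverter
        implements AttributeConverter<List<String>, String> {

    // Assumption: Jackson is available; any JSON library would work here
    private static final ObjectMapper MAPPER = new ObjectMapper();

    @Override
    public String convertToDatabaseColumn(List<String> list) {
        try {
            return list == null ? null : MAPPER.writeValueAsString(list);
        } catch (Exception e) {
            throw new IllegalArgumentException("Could not serialize list to JSON", e);
        }
    }

    @Override
    public List<String> convertToEntityAttribute(String dbValue) {
        try {
            return dbValue == null ? null
                    : MAPPER.readValue(dbValue, new TypeReference<List<String>>() {});
        } catch (Exception e) {
            throw new IllegalArgumentException("Could not deserialize JSON to list", e);
        }
    }
}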
The best part is that this becomes a reusable component: you can drop it in anywhere you need to represent a JSON array as a List<> in your Java classes while storing it in the database as JSON in a single text field.
Another alternative is to avoid storing the data as JSON and instead use a real table, which would allow you to actually query on the individual values. To do this, you'd rewrite that mapping using JPA's @ElementCollection:
@ElementCollection
private List<String> textValues;
Internally, Hibernate creates a secondary table in which it stores the string values of the list along with a reference to the owning entity's primary key; the entity's primary key and the string value together form the primary key of this secondary table.
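If you want control over the naming, the mapping can be made explicit with @CollectionTable; the table and column names in this sketch are assumptions, not values from the original answer.

@Entity
public class YourEntity {

    @Id
    @GeneratedValue
    private Long id;

    // Each string becomes a row in the collection table,
    // linked back to this entity's primary key
    @ElementCollection
    @CollectionTable(
            name = "your_entity_text_values",
            joinColumns = @JoinColumn(name = "your_entity_id"))
    @Column(name = "text_value")
    private List<String> textValues;
}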
You then handle serializing the List<> as a JSON array in your controller/business code, which avoids mixing persistence with that type of concern, particularly given that most databases have not yet introduced a real JSON data type :).
I'm using Hibernate, JPA and Spring Boot.
I would like to persist a custom object of type "Version". "Version" is an object with complex processing, but it corresponds in the table to a simple VARCHAR field. I'm able to construct a Version from a String and a String from a Version object.
@Entity
class MyEntity {

    @Column(name = "version_sw")
    private Version version; // <-- I would like to persist version as a String
}
I'm looking at the different ways to create a Hibernate type:
UserType: seems overkill to me, because my object maps to only one column in the database.
BasicType (with AbstractTypeDescriptor): seems good, but I'm not sure it's the right way to do this.
CompositeUserType: does not seem to fit my needs.
Is there a simple way to do this that I'm missing?
Thanks !!
If I understand correctly, you just need to convert your object to a String and save it in the database. To do that you can use a JPA attribute converter: convert your complex object to a String (some kind of JSON, for example) and save it in the table. When reading it back into your system, the converter turns the String into an object again.
Try this: https://thoughts-on-java.org/jpa-21-how-to-implement-type-converter/
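A minimal sketch of that idea for the Version case, assuming (as the question states) that a Version can be built from a String and rendered back to one; the constructor and toString() calls below are assumptions about Version's API.

import javax.persistence.AttributeConverter;
import javax.persistence.Converter;

@Converter(autoApply = true) // applied to every Version attribute automatically
public class VersionConverter implements AttributeConverter<Version, String> {

    @Override
    public String convertToDatabaseColumn(Version version) {
        // Assumption: toString() yields the VARCHAR representation
        return version == null ? null : version.toString();
    }

    @Override
    public Version convertToEntityAttribute(String dbValue) {
        // Assumption: Version has a String-based constructor
        return dbValue == null ? null : new Version(dbValue);
    }
}

With autoApply = true, the version_sw column maps through the converter without any extra annotation on the entity field.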
I have a problem with the declaration of one of the fields in my object.
public class PhotoEntity {

    @SerializedName("mentions")
    public List<Integer> mentions = new ArrayList<>();

    // Other fields
}
mentions is part of the incoming JSON and looks like:
[120,55,32]
I have a question: how do I correctly map this ArrayList onto an SQL database?
As far as I know, the best approach is to create a separate table and set up a foreign key, but in my case that is not applicable, because the structure of PhotoEntity is directly tied to the JSON contract.
My Java application makes use of complex object graphs that are Jackson-annotated and serialized to JSON in their entirety for client-side use. Recently I had to change one of the objects in the domain model so that instead of having two children of type X it now contains a Set<X>. This changed object is referenced by several types of objects in the model.
The problem now is that I have a large quantity of test data in JSON form for running my unit tests, and I need to convert it to this new object model. My first thought was to use the old version of the object model to deserialize the JSON data, create new objects using the new object model, hydrate the new objects from the old ones, and finally serialize the new objects back to JSON. I realized, though, that programmatically creating matching object graphs and then hydrating them could be just as tedious as fixing the JSON by hand, since the object graphs are relatively deep and it's not a simple clone.
I'm wondering how I can avoid fixing these JSON files entirely by hand. I'm open to any suggestions, even non-Java JSON transformation or parsing tools.
One possibility, if the objects in question are structurally close enough, is to just read using one data-binding configuration and write using another.
For example, if using Jackson, you could consider implementing custom set and get methods, so that setters exist for the child types but the getter exists only for the combined value. Something like:
```
public class POJO {
    private X a, b;

    public void setA(X value) { a = value; }
    public void setB(X value) { b = value; }

    public X[] getValues() {
        return new X[] { a, b };
    }
}
```
would, just as an example, read a structure where POJO has two object-valued properties, "a" and "b", but write a structure that has one property, "values", containing a JSON array of the two objects.
This is just an example of the basic idea that reading in (deserialization) and writing out (serialization) need not be symmetric or identical.
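Applied to the migration described in the question, a sketch along these lines could read the old shape and write the new one in a single pass. Everything below (the MigrationPojo and X names, the file names) is hypothetical; only Jackson's ObjectMapper API is assumed.

import java.io.File;
import java.util.LinkedHashSet;
import java.util.Set;
import com.fasterxml.jackson.databind.ObjectMapper;

public class MigrationPojo {

    // Stub standing in for the real child type from the question
    public static class X {
        public String name;
    }

    private final Set<X> values = new LinkedHashSet<>();

    // Old shape: two named children, "a" and "b", absorbed into the set on read
    public void setA(X value) { values.add(value); }
    public void setB(X value) { values.add(value); }

    // New shape: a single "values" collection, written out on serialization
    public Set<X> getValues() { return values; }

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        // Read test data in the old shape, write it back in the new shape
        MigrationPojo pojo = mapper.readValue(new File("old.json"), MigrationPojo.class);
        mapper.writerWithDefaultPrettyPrinter().writeValue(new File("new.json"), pojo);
    }
}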
I have a MongoDB database that represents snippets of public gene information like so:
{
    _id: 1,
    symbol: "GENEA",
    db_references: {
        "DB A": "DBA000123",
        "DB B": ["ABC123", "DEF456"]
    }
}
I am trying to map this to a @Document-annotated POJO class, like this:
@Document
public class Gene {

    @Id
    private int id;
    private String symbol;
    private Map<String, Object> db_references;

    // getters and setters
}
Because of MongoDB's schema-less design, the db_references field can contain a long list of possible keys, with values that are sometimes arrays or further key-value pairs. My primary concern is the speed at which I can fetch multiple Gene documents and slice up their db_references.
My question: what is the best way to represent this field to optimize fetching performance? Should I define a custom POJO and map the field to it? Should I make it a BasicDBObject? Or would it be best not to map the documents with Spring Data at all, and instead use the MongoDB Java driver directly and parse the returned DBObjects?
Sorry to see your question hasn't been answered yet.
If db_references represents an actual concept within the domain, you are much better off capturing that domain knowledge in a class. It is always a good idea, and MongoDB helps with it a lot.
You can then store this list of nested objects inside the MongoDB document and fetch the whole aggregate in a single query. Spring Data should handle the deserialization as well.
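As a sketch, the nested class might look like the following; the DbReference shape (a database name plus one or more accession IDs) is an assumption based on the sample document above.

// Gene.java
@Document
public class Gene {

    @Id
    private int id;
    private String symbol;
    private List<DbReference> dbReferences;

    // getters and setters
}

// DbReference.java
public class DbReference {

    private String database;         // e.g. "DB A"
    private List<String> accessions; // e.g. ["ABC123", "DEF456"]; a single ID becomes a one-element list

    // getters and setters
}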
Although App Engine is already schema-less, you still need to define the entities to be stored in the Datastore through the DataNucleus persistence layer. So I am thinking of a way to get around this: a layer that stores key-value pairs at runtime, instead of compile-time entities.
The way this is done with Redis is by creating keys like these:
private static final String USER_ID_FORMAT = "user:id:%s";
private static final String USER_NAME_FORMAT = "user:name:%s";
From the docs, the Redis types are: String, linked list, set, and sorted set. I am not sure if there are more.
As far as the GAE Datastore is concerned, a String "key" and a "value" make up the entity that will be stored.
Like:
public class KeyValue {
    private String key;
    private Value value; // value can be a String, linked list, set, sorted set, etc.

    // Code omitted
}
The justification for this scheme is rooted in the RESTful access to the Datastore (provided by datanucleus-api-rest).
Using this REST API, to persist an object or entity:
POST http://datanucleus.appspot.com/dn/guestbook.Greeting

{
    "author": null,
    "class": "guestbook.Greeting",
    "content": "test insert",
    "date": 1239213923232
}
The problem with this approach is that in order to persist an entity, the actual class needs to be defined at compile time. With a key-value store mechanism, by contrast, the call could be simplified to:
POST http://datanucleus.appspot.com/dn/org.myframework.KeyValue

{
    "class": "org.myframework.KeyValue",
    "key": "user:id:johnsmith;followers",
    "value": "the_list"
}
Passing a single string as "value" is fairly easy; I can use a JSON array for a list, set, or sorted list. The real question is how to actually persist the different types of data passed into the interface. Should there be multiple KeyValue entities, each representing one of the basic types it supports: KeyValueString? KeyValueList? etc.
Looks like you're using a JSON-based REST API, so why not just store the value as a JSON string?
You do not need to use the DataNucleus layer, or any of the other fine ORM layers (like Twig or Objectify). Those are optional and are all built on the low-level API. If I interpret what you are saying properly, perhaps it already has the functionality that you want. See: https://developers.google.com/appengine/docs/java/datastore/entities
DataNucleus is a specific framework that runs on top of GAE. You can, however, access the database at a lower, less structured, more key-value-like level: the low-level API. That's the lowest level you can access directly.
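A minimal sketch of that low-level approach, storing the JSON payload as an opaque string; the kind name and wrapper class here are just illustrative, while the Datastore calls are the real low-level API.

import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.Text;

public class KeyValueStore {

    private final DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();

    public void put(String key, String jsonValue) {
        // One entity per key; no compile-time entity class needed beyond this wrapper
        Entity kv = new Entity("KeyValue", key);
        // Text holds strings longer than the indexed String property limit
        kv.setProperty("value", new Text(jsonValue));
        datastore.put(kv);
    }
}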
BTW, the low-level "GAE Datastore" internally runs on six global Google Megastore tables, which in turn are hosted on the Google Bigtable database system.
Saving JSON as a String works fine. But you will need ways to retrieve your objects other than by ID. That is, you need a way to index your data to support any kind of useful query on it.
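Concretely, that can mean storing an extra indexed property alongside the JSON blob and querying on it. The "owner" property below is hypothetical; note that Text values are not indexed, so anything you filter on must be stored as a plain property.

import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.Query;
import com.google.appengine.api.datastore.Query.FilterOperator;
import com.google.appengine.api.datastore.Query.FilterPredicate;
import com.google.appengine.api.datastore.Text;

public class KeyValueQueries {

    private final DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();

    public void putWithIndex(String key, String owner, String jsonValue) {
        Entity kv = new Entity("KeyValue", key);
        kv.setProperty("owner", owner);               // plain String: indexed, queryable
        kv.setProperty("value", new Text(jsonValue)); // Text: stored but not indexed
        datastore.put(kv);
    }

    public Iterable<Entity> findByOwner(String owner) {
        Query q = new Query("KeyValue")
                .setFilter(new FilterPredicate("owner", FilterOperator.EQUAL, owner));
        return datastore.prepare(q).asIterable();
    }
}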