Suppose I have an entity with a field of type FOO, but an external process writes the invalid value BAR into the database. The next time I try to read this entity with Hibernate, I get an exception like this:
org.hibernate.PropertyAccessException: Could not set field value [BAR] value by reflection
Unfortunately I also get this exception when I call the method getAllFoo() and the database contains 999 valid entities plus the one invalid entity. I would like to get the 999 valid entities, plus a warning of some sort for the invalid one.
Is that even possible in Hibernate?
I can think of 3 options:
(1): Add a WHERE field IN (<list of valid values>) clause to all queries on this table+field.
(2): Clean up the database and add a constraint to the field, so that no invalid values can end up in there.
(3): Add the invalid option (I assume it's an enum?) to your entity and filter it out in your own code. This may break proper pagination of queries, and it will get messy quickly if there is a large variety of invalid values.
Edit
(4): Create a custom Hibernate type (UserType). There you can parse the value manually from the prepared statement / result set. This allows you, instead of throwing an exception, to map 'invalid' values to null or to any other meaningful value that you can process.
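For example, here is a minimal sketch of such a UserType that maps unparseable values to null instead of failing. The Foo enum and the class name are made up, and the signatures follow Hibernate 5.2+ (older versions take a SessionImplementor instead of SharedSessionContractImplementor):

import java.io.Serializable;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Types;
import org.hibernate.engine.spi.SharedSessionContractImplementor;
import org.hibernate.usertype.UserType;

// Hypothetical enum of the valid values; BAR is deliberately not among them.
enum Foo { FOO, BAZ }

public class LenientFooType implements UserType {

    @Override
    public int[] sqlTypes() { return new int[] { Types.VARCHAR }; }

    @Override
    public Class<?> returnedClass() { return Foo.class; }

    @Override
    public Object nullSafeGet(ResultSet rs, String[] names,
            SharedSessionContractImplementor session, Object owner) throws SQLException {
        String raw = rs.getString(names[0]);
        if (raw == null) {
            return null;
        }
        try {
            return Foo.valueOf(raw);
        } catch (IllegalArgumentException e) {
            // Invalid database value (e.g. BAR): map it to null (and perhaps log
            // a warning here) instead of failing the whole query.
            return null;
        }
    }

    @Override
    public void nullSafeSet(PreparedStatement st, Object value, int index,
            SharedSessionContractImplementor session) throws SQLException {
        if (value == null) {
            st.setNull(index, Types.VARCHAR);
        } else {
            st.setString(index, ((Foo) value).name());
        }
    }

    // Enums are immutable, so the remaining contract methods are trivial.
    @Override public boolean equals(Object x, Object y) { return x == y; }
    @Override public int hashCode(Object x) { return x == null ? 0 : x.hashCode(); }
    @Override public Object deepCopy(Object value) { return value; }
    @Override public boolean isMutable() { return false; }
    @Override public Serializable disassemble(Object value) { return (Serializable) value; }
    @Override public Object assemble(Serializable cached, Object owner) { return cached; }
    @Override public Object replace(Object original, Object target, Object owner) { return original; }
}

The field would then be mapped with @Type(type = "com.example.LenientFooType") (the package name is hypothetical), so reading the 999 valid entities succeeds and the invalid one simply comes back with a null field.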
Related
I'm writing a simple webapp to show my coding skills to potential employers. It connects to an API and receives a JSON file, which is then deserialized using Jackson and displayed as a table in the browser. I want to enable the user to persist the Java object in a Postgres database using Hibernate. I got it to work and it does the job nicely, but I want to make it more efficient.
Whenever there is no data in the JSON response for one of the object's fields (right now every possible JSON attribute is present in the Java class/Hibernate entity as a String field), I put in an empty String (''), so that every field holds something and no nulls are stored in the database.
Should I only store what I have and put no empty strings in the DB (using nulls instead) or is what I'm doing now the right way?
Null is the absence of a value; an empty string is a value. The difference has little impact on memory. If you display the data repeatedly and don't want to convert null to an empty string on retrieval, you can go with the empty string ''.
But if you want a unique constraint on values other than the empty string '', then use null.
Sometimes null and the empty string '' are used to distinguish whether data was known or not: use the empty string for known but unavailable data, and null for unknown data.
Use NULL when there isn't a known value.
Never use the empty string.
For example, if you have a customer who didn't supply an address, don't say the address is '', say it is NULL. NULL unambiguously states "no value".
For database columns that must have a value for your web application to work, create the backing table with NOT NULL data constraints on those columns.
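In JPA/Hibernate terms the same rule can be declared on the entity, so the generated schema enforces it. A sketch (the Customer entity and its fields are made up for illustration):

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
public class Customer {

    @Id
    private Long id;

    // Required for the application to work: NOT NULL in the generated DDL.
    @Column(nullable = false)
    private String name;

    // Optional: NULL unambiguously means "no address supplied".
    @Column
    private String address;
}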
In your unit tests, cover the NULL case (e.g. a test named ..._address_is_null_) and check for success or failure, depending on whether the test should trigger no errors or trigger an exception.
The use of '' in databases as a sentinel, a special value that means something other than '', is discouraged, because later readers won't know what you meant it to mean. Also, there might be more than one special case, and if you use '' for the first one, it makes restructuring to add others more difficult (unless you fall into the really bad practice of using even more special strings to enumerate the other cases, like "deleted" and so on).
I have the below class structure:
class A {
    int id;
    List<B> blist;
    List<C> clist;
    List<D> dlist;
}
I get JSON as input, which a mapper maps to an object A. Now I have an object A which holds the lists of B, C and D objects. I want to use batching to reduce the insert time taken. I went through the documentation, which describes the solution for saving multiple parent objects. How would I use the batching capability in my case, with nested lists of objects of multiple types?
I have enabled batch inserts using
<property name="hibernate.jdbc.batch_size">50</property>
This by itself doesn't give me any batching unless I clear and flush the session. Any suggestions on how I should go about this?
The problem is that you're using the IDENTITY strategy.
Whenever you save a new entity, Hibernate places it into the Session's first-level cache (1LC); however, in order to do that, the identifier must be known. The problem with the IDENTITY strategy is that Hibernate must actually perform the insert to determine the identifier value.
As a result, batch insert capabilities are disabled.
You should either load your data using business key values that are known up front or, worst case, use the SEQUENCE generation type with a sequence optimizer to minimize the database hits. This will allow batch inserts to work.
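As a sketch, SEQUENCE generation with an allocation-size optimizer could look like this (the sequence name is made up; allocationSize = 50 matches the batch size configured above, so Hibernate only hits the sequence once per 50 inserts):

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.SequenceGenerator;

@Entity
public class A {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "a_seq")
    // Hibernate allocates 50 identifiers per database round trip, so the
    // identifier is known up front and JDBC batching stays enabled.
    @SequenceGenerator(name = "a_seq", sequenceName = "a_seq", allocationSize = 50)
    private int id;

    // ...
}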
UPDATE
For situations where you have no business key that defines the uniqueness of a row and your database doesn't have SEQUENCE support, you can manage the identifiers yourself. You can either do this with a custom identifier generator or just do it in your loop as code.
The caveat here is that this solution is not thread-safe. You should guarantee that at no point would you ever be running this logic in two threads simultaneously, which is typically not something one does anyway with bulk data loads.
Define a variable to store your identifier in. We need to initialize this variable based on the existing maximum value of the identifier in the database; if no rows exist yet, we likely want to initialize it to 1.
// Initialize from the current maximum id; start at 1 for an empty table
Long value = (Long) session.createQuery("SELECT MAX(e.id) FROM YourEntity e").uniqueResult();
value = (value == null ? 1L : value + 1);
The next step is to change the @Id annotated field. It must not be annotated with @GeneratedValue, since we're going to let the application provide the value.
For each row you're going to insert, simply call your setId(value) method with the value variable generated in step 1.
Increment your value variable by 1.
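Put together, the loop could look roughly like this, assuming a plain Hibernate Session, the class A from the question, and the 50-row batch size configured earlier:

// Step 1: initialize the next identifier from the existing max in the database.
Integer max = (Integer) session.createQuery("select max(a.id) from A a").uniqueResult();
int nextId = (max == null ? 1 : max + 1);

int count = 0;
for (A entity : entities) {
    entity.setId(nextId++);      // application-assigned id, no @GeneratedValue
    session.persist(entity);
    if (++count % 50 == 0) {     // align with hibernate.jdbc.batch_size
        session.flush();         // send the current batch of INSERTs
        session.clear();         // evict the persisted entities from the 1LC
    }
}
session.flush();                 // write the final partial batch
session.clear();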
I have implemented the following JPA repository query for some common search functionality.
But because age is an Integer and the bound parameter is a String, I get the exception below. Is there any mechanism to implicitly cast the parameter to the relevant data type, instead of doing it ourselves? Thanks.
Query with common parameter
#Query("select u from User u where u.firstname = :searchText or u.age = :searchText")
List<User> findBySearchText(#Param("searchText") String searchText);
Exception
Caused by: org.postgresql.util.PSQLException: ERROR: operator does not exist: integer = character varying
Hint: No operator matches the given name and argument type(s). You might need to add explicit type casts.
The issue you see has nothing to do with the binding itself. Spring Data basically binds the value you give to the named parameter searchText.
It looks like what happens next is that your persistence provider builds SQL from it in which there is apparently a type mismatch. age doesn't seem to be of type String, is it? That said, trying to bind an arbitrary String to an integer (which it presumably is) is a very weird approach in the first place.
SQL is not really built to support arbitrary text-search features, and the schema is helping you detect invalid criteria (which it does in this case). Have you thought about adding a full-text search store (Elasticsearch, Solr or the like) and doing the text searches there?
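That said, if you want to keep the query in JPA for now, one possible workaround (a sketch, not an official Spring Data feature) is to parse the text on the Java side and bind a separate, correctly typed parameter:

import java.util.List;
import org.springframework.data.repository.CrudRepository;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.query.Param;

interface UserRepository extends CrudRepository<User, Long> {

    // "u.age = null" is never true in SQL, so passing a null age simply
    // disables the numeric branch for non-numeric search text.
    @Query("select u from User u where u.firstname = :text or u.age = :age")
    List<User> findBySearchText(@Param("text") String text, @Param("age") Integer age);
}

The caller would attempt Integer.valueOf(searchText) and pass null for age whenever that throws a NumberFormatException.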
I am trying to use the Objectify @IgnoreSave annotation along with a simple If condition (IfEmpty, IfNull), but it seems that it is not working. Without the If condition the actual value is not persisted, as expected; however, when I use some If condition, it is always persisted (e.g. if the IfNull condition is used and a null value is provided, it is persisted and hence the original value in the datastore is deleted).
...
@IgnoreSave(IfNull.class)
private String email;
...
...
this.objectify.save().entity(userDetails).now();
...
Is there any additional configuration needed? Or has anyone experienced the same?
From "hence original value in datastore deleted" it sounds like you misunderstand a fundamental characteristic of the GAE datastore - entities are stored whole. If you #IgnoreSave a field, it will be ignored during save and thus the field will not be present in the datastore. You do not get to update some fields and not others.
It seems that jOOQ completely ignores the default values of database columns. The ActiveRecord object is neither updated with the default value, nor is the column skipped on INSERT. Instead jOOQ tries to set it to NULL, which fails on NOT NULL columns.
Example:
CREATE TABLE bug (
foo int,
bar int not null default 42
);
BugRecord b = jooq.newRecord(BUG);
b.setFoo(3);
b.store();
assertNotNull(b.getBar()); // fails
Record r = jooq.select().from(BUG).fetchOne();
assertEquals(new Integer(42), r.getValue(BUG.BAR)); // fails
// DataMapper pattern
Bug b = new Bug();
b.setFoo(3);
bugDao.insert(b); // Fails because it tries to set "bar" to NULL
The behaviour I would expect is that either newRecord() initializes all defaulted fields with the correct values (although I understand that this could be difficult if the default is the outcome of a custom function :-)), or that the INSERT INTO omits all unmodified columns that have default values and is then followed by a SELECT fetching the now-existing values from the database (similar to a RETURNING clause).
Is this really a bug/limitation, or am I missing some config option that makes it possible to use "not null default" columns?
You've spotted a couple of things here (all relevant to jOOQ 3.1 and previous versions):
Returning default values from inserts:
BugRecord b = jooq.newRecord(BUG);
b.setFoo(3);
b.store();
assertNotNull(b.getBar()); // fails
That would be a nice-to-have feature, indeed. Currently, jOOQ only fetches IDENTITY column values. You can use the INSERT .. RETURNING syntax or the UPDATE .. RETURNING syntax to explicitly choose which columns ought to be returned after an insert or update. But being able to do so in regular CRUD operations would be much better.
This had also been mentioned in this thread. The relevant feature request for this is #1859.
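As a sketch, the explicit variant would look roughly like this with the BUG table from the question:

// Explicitly ask the database to send back the defaulted column.
BugRecord b = jooq.insertInto(BUG)
                  .set(BUG.FOO, 3)
                  .returning(BUG.BAR)
                  .fetchOne();
// b.getBar() now holds the database default (42).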
You can work around this issue by calling
b.refresh(); // Refresh all columns
b.refresh(BUG.BAR, ...); // Refresh only some columns
Inserting NULL vs. inserting DEFAULTs through UpdatableRecord:
Record r = jooq.select().from(BUG).fetchOne();
assertEquals(new Integer(42), r.getValue(BUG.BAR)); // fails
This is a bug, in my opinion. jOOQ's CRUD operations should be DEFAULT value safe. Only those values that have been set explicitly prior to a store() / insert() / update() operation ought to be rendered in the generated SQL. I have registered #2698 for this.
Inserting NULL vs. inserting DEFAULTs through DAO:
// DataMapper pattern
Bug b = new Bug();
b.setFoo(3);
bugDao.insert(b); // Fails because it tries to set "bar" to NULL
Nice catch. This is non-trivial to solve / enhance, as a POJO does not ship with an internal "changed" / "dirty" flag per column. It is thus not possible to know the meaning of a null reference in a POJO.
On the other hand, jOOQ already knows whether a column is nullable. If jOOQ also maintained metadata about the presence of a DEFAULT clause on a column, it could deduce that the combination NOT NULL DEFAULT would have to lead to:
INSERT INTO bug(foo, bar)
VALUES(3, DEFAULT)
And to
UPDATE bug SET bar = DEFAULT WHERE foo = 3
I have registered:
#2699: Adding some metadata information to generated code
#2700: Leveraging the above metadata in SQL from DAOs