jOOQ does not respect the default value inside the generated POJOs - java

I'm trying to create a new schema with a new NOT NULL column that has a default value.
Somehow, in the generated code, I can see that jOOQ doesn't respect or propagate the default value.
Would you happen to have any idea how to fix this?
The generated code looks like this:
@Override
public ReleaseBundleVersionRecord setInternal(Short value) {
    set(20, value);
    return this;
}
But I expected to see something like:
@Override
public ReleaseBundleVersionRecord setInternal(Short value) {
    if (value == null) {
        set(1, (short) 0); // the column's DEFAULT value
    } else {
        set(1, value);
    }
    return this;
}
Please help...
I expected to get a generated POJO that includes my NOT NULL default value of 0, i.e. a setter like the expected snippet shown above.

I'm assuming you have some sort of SQL DEFAULT expression on your CREATE TABLE statement, such as:
CREATE TABLE release_bundle_version (
    ..
    internal smallint DEFAULT 0,
    ..
);
And now you're expecting jOOQ's code generator to take this DEFAULT expression and also generate it in various places of generated code.
This has been requested a few times, but it isn't practically possible. While your specific DEFAULT value is a constant that could be translated to Java code, there is nothing keeping you from using a runtime-evaluated expression, e.g. DEFAULT y + z (deterministic) or DEFAULT current_timestamp (non-deterministic), which would be hard to evaluate in the client.
As such, what you're expecting isn't possible, and probably won't be in the future, either.
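That said, the database itself still applies the DEFAULT. A minimal sketch, assuming the generated names from the question (RELEASE_BUNDLE_VERSION, ReleaseBundleVersionRecord, a hypothetical "name" column) and jOOQ's usual behaviour of rendering only changed fields in the INSERT:
import static com.example.generated.Tables.RELEASE_BUNDLE_VERSION; // assumed generated name

import com.example.generated.tables.records.ReleaseBundleVersionRecord; // assumed generated name
import org.jooq.DSLContext;

public class InsertWithDatabaseDefault {
    static void insert(DSLContext ctx) {
        ReleaseBundleVersionRecord record = ctx.newRecord(RELEASE_BUNDLE_VERSION);
        record.setName("1.0.0"); // hypothetical other column
        // setInternal(...) is deliberately not called: the column is then omitted
        // from the generated INSERT, so the database applies DEFAULT 0.
        record.insert();
    }
}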

Related

Java records with nullable components

I really like the addition of records in Java 14, at least as a preview feature, as it helps to reduce my need to use lombok for simple, immutable "data holders". But I'm having an issue with the implementation of nullable components. I'm trying to avoid returning null in my codebase to indicate that a value might not be present. Therefore I currently often use something like the following pattern with lombok.
@Value
public class MyClass {
    String id;
    @Nullable String value;

    Optional<String> getValue() { // overwrite the generated getter
        return Optional.ofNullable(this.value);
    }
}
When I try the same pattern now with records, this is not allowed; the compiler rejects it with "incorrect component accessor return type".
record MyRecord(String id, @Nullable String value) {
    Optional<String> value() {
        return Optional.ofNullable(this.value);
    }
}
Since I thought the usage of Optionals as return types is now preferred, I'm really wondering why this restriction is in place. Is my understanding of the usage wrong? How can I achieve the same, without adding another accessor with another signature which does not hide the default one? Should Optional not be used in this case at all?
A record comprises attributes that primarily define its state. The derivation of the accessors, constructors, etc. is based entirely on this state.
Now in your example, the state of the attribute value is null, hence access through the default accessor ends up reporting that true state. To provide customized access to this attribute, you are instead looking for an overridden API that wraps the actual state and further provides an Optional return type.
Of course, as you mentioned one of the ways to deal with it would be to have a custom implementation included in the record definition itself
record MyClass(String id, String value) {
    Optional<String> getValue() {
        return Optional.ofNullable(value());
    }
}
Alternatively, you could decouple the read and write APIs from the data carrier in a separate class and pass on the record instance to them for custom accesses.
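A rough sketch of that decoupling might look like this (the helper class name is purely illustrative):
// The record stays a plain data carrier; a separate helper provides the
// Optional-returning read API.
record MyRecord(String id, String value) { }

class MyRecordAccess {
    static Optional<String> value(MyRecord r) {
        return Optional.ofNullable(r.value());
    }
}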
The most relevant quote from JEP 384: Records that I found would be (formatting mine):

A record declares its state -- the group of variables -- and commits to an API that matches that state. This means that records give up a freedom that classes usually enjoy -- the ability to decouple a class's API from its internal representation -- but in return, records become significantly more concise.
Due to restrictions placed on records, namely that the canonical constructor's parameter types need to match the accessor return types, a pragmatic way to use Optional with records would be to define it as the property type:
record MyRecord (String id, Optional<String> value){
}
A point has been made that this is problematic because null might be passed as a value to the constructor. This can be solved by rejecting such invalid MyRecord instances in the canonical constructor:
record MyRecord(String id, Optional<String> value) {
    MyRecord(String id, Optional<String> value) {
        this.id = id;
        this.value = Objects.requireNonNull(value);
    }
}
In practice, most common libraries and frameworks (e.g. Jackson, Spring) have support for recognizing the Optional type and translating null into Optional.empty() automatically, so whether this is an issue that needs to be tackled in your particular case depends on context. I recommend researching the support for Optional in your codebase before possibly cluttering your code unnecessarily.
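For example, with Jackson this typically only needs the jackson-datatype-jdk8 module registered; a sketch, where the module and the Jackson version requirement are my assumptions:
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.datatype.jdk8.Jdk8Module;

// Sketch, assuming jackson-databind 2.12+ (record support) and the
// jackson-datatype-jdk8 module on the classpath.
public class JacksonOptionalExample {
    record MyRecord(String id, java.util.Optional<String> value) { }

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper().registerModule(new Jdk8Module());
        MyRecord r = mapper.readValue("{\"id\":\"1\",\"value\":null}", MyRecord.class);
        System.out.println(r.value().isPresent()); // false: the JSON null became Optional.empty()
    }
}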
Credits go to Holger! I really like his proposed way of questioning the actual need for null. So, with a short example, I wanted to give his approach a bit more space, even if it is a bit convoluted for this use case.
interface ConversionResult<T> {
    String raw();

    default Optional<T> value() {
        return Optional.empty();
    }

    default Optional<String> error() {
        return Optional.empty();
    }

    default void ifOk(Consumer<T> okAction) {
        value().ifPresent(okAction);
    }

    default void okOrError(Consumer<T> okAction, Consumer<String> errorAction) {
        value().ifPresent(okAction);
        error().ifPresent(errorAction);
    }

    static ConversionResult<LocalDate> ofDate(String raw, String pattern) {
        try {
            var value = LocalDate.parse(raw, DateTimeFormatter.ofPattern(pattern));
            return new Ok<>(raw, value);
        } catch (Exception e) {
            var error = String.format("Invalid date value '%s'. Expected pattern '%s'.", raw, pattern);
            return new Error<>(raw, error);
        }
    }

    // more conversion operations
}
record Ok<T>(String raw, T actualValue) implements ConversionResult<T> {
    public Optional<T> value() {
        return Optional.of(actualValue);
    }
}

record Error<T>(String raw, String actualError) implements ConversionResult<T> {
    public Optional<String> error() {
        return Optional.of(actualError);
    }
}
Usage would be something like
var okConv = ConversionResult.ofDate("12.03.2020", "dd.MM.yyyy");
okConv.okOrError(
        v -> System.out.println("SUCCESS: " + v),
        e -> System.err.println("FAILURE: " + e)
);
System.out.println(okConv);
System.out.println();

var failedConv = ConversionResult.ofDate("12.03.2020", "yyyy-MM-dd");
failedConv.okOrError(
        v -> System.out.println("SUCCESS: " + v),
        e -> System.err.println("FAILURE: " + e)
);
System.out.println(failedConv);
which leads to the following output...
SUCCESS: 2020-03-12
Ok[raw=12.03.2020, actualValue=2020-03-12]
FAILURE: Invalid date value '12.03.2020'. Expected pattern 'yyyy-MM-dd'.
Error[raw=12.03.2020, actualError=Invalid date value '12.03.2020'. Expected pattern 'yyyy-MM-dd'.]
The only minor issue is that toString now prints the actualValue / actualError components. And of course we do not NEED to use records for this.
Don't have the rep to comment, but I just wanted to point out that you've essentially reinvented the Either datatype. https://hackage.haskell.org/package/base-4.14.0.0/docs/Data-Either.html or https://www.scala-lang.org/api/2.9.3/scala/Either.html. I find Try, Either, and Validation to be incredibly useful for parsing and there are a few java libraries with this functionality that I use: https://github.com/aol/cyclops/tree/master/cyclops and https://www.vavr.io/vavr-docs/#_either.
Unfortunately, I think your main question is still open (and I'd be interested in finding an answer).
doing something like
RecordA(String a)
RecordAandB(String a, Integer b)
to deal with an immutable data carrier with a null b seems bad, but wrapping recordA(String a, Integer b) to have an Optional getB somewhere else seems counterproductive. There's almost no point to the record class then, and I think the Lombok @Value approach is still the best answer. I'm just concerned that it won't play well with deconstruction for pattern matching.

Enum inner field as parameter of Spring Data @Query statement

I have an enum with an inner value:
public enum Test {
    A("a");

    private String value;

    Test(String value) {
        this.value = value;
    }

    @Override
    public String toString() {
        return this.value;
    }
}
And I use it as a parameter in a Spring Data Cassandra @Query.
I also added read and write converters, just for parsing "a" -> A and back.
I expected that if I send A as a parameter, the query would be generated through the converters too. But the query came out as "... my_enum = 'A'" instead of 'a'.
I was trying to debug the Spring sources and found this part of the code in org/springframework/data/spring-data-cassandra/1.5.9.RELEASE/spring-data-cassandra-1.5.9.RELEASE-sources.jar!/org/springframework/data/cassandra/convert/MappingCassandraConverter.java:799:
// Cassandra has no default enum handling - convert it either to string
// or - if requested - to a different type
if (Enum.class.isAssignableFrom(value.getClass())) {
    if (requestedTargetType != null && !requestedTargetType.isEnum()
            && getConversionService().canConvert(value.getClass(), requestedTargetType)) {
        return getConversionService().convert(value, requestedTargetType);
    }
    return ((Enum<?>) value).name();
}
It looks like the converters are never used if the parameter is an enum.
How can I deal with this?
Thank you.
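For context, the write converter described above would be something along these lines (a sketch only; the class name is illustrative and the registration details depend on the Spring Data Cassandra version):
import org.springframework.core.convert.converter.Converter;
import org.springframework.data.convert.WritingConverter;

// Illustrative write converter: maps the enum to its inner value ("a")
// rather than its name ("A").
@WritingConverter
public class TestWriteConverter implements Converter<Test, String> {
    @Override
    public String convert(Test source) {
        return source.toString();
    }
}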

Convert empty string to null using javax.validation annotations

I have the following variable annotated for data validation:
@Size(min=8, max=16, message="the size of the parameter must be between 8 and 16")
private String param;
However, the param can be null. It is required to be 8-16 chars long only if it is not null. The problem I face is that if the client app (a JSON API) supplies an empty string, I want to treat it as though it were not supplied at all, i.e. as null. I was wondering if there is an elegant way to do this using the javax.validation annotations, i.e. convert an empty string to null, as opposed to the plain Java way I'm doing it right now:
public void setParameter(String _param) {
    if (_param != null && !_param.trim().isEmpty()) {
        this.param = _param;
    } else {
        this.param = null;
    }
}
I would like to have a very simple setter:
public void setParameter(String _param) {
    this.param = _param;
}
and have the is-empty-string boilerplate done by an annotation. Is there a way to do it?
You can implement your own custom constraint validator.
See here; I've used this many times and it works like a charm.
https://docs.jboss.org/hibernate/validator/5.0/reference/en-US/html/validator-customconstraints.html
You would just need to handle this condition (if null, return "" or vice versa) in the isValid method.
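A minimal sketch of such a validator, assuming the standard javax.validation API; the annotation name and the "treat blank as absent" semantics are illustrative:
import javax.validation.Constraint;
import javax.validation.ConstraintValidator;
import javax.validation.ConstraintValidatorContext;
import javax.validation.Payload;
import java.lang.annotation.*;

// Hypothetical constraint: blank strings count as "not supplied",
// otherwise the usual size range is enforced.
@Documented
@Constraint(validatedBy = SizeIfPresentValidator.class)
@Target({ElementType.FIELD, ElementType.METHOD, ElementType.PARAMETER})
@Retention(RetentionPolicy.RUNTIME)
public @interface SizeIfPresent {
    String message() default "the size of the parameter must be between {min} and {max}";
    Class<?>[] groups() default {};
    Class<? extends Payload>[] payload() default {};
    int min() default 0;
    int max() default Integer.MAX_VALUE;
}

class SizeIfPresentValidator implements ConstraintValidator<SizeIfPresent, String> {
    private int min;
    private int max;

    @Override
    public void initialize(SizeIfPresent annotation) {
        this.min = annotation.min();
        this.max = annotation.max();
    }

    @Override
    public boolean isValid(String value, ConstraintValidatorContext context) {
        // Null or blank counts as "not supplied" and is therefore valid here.
        if (value == null || value.trim().isEmpty()) {
            return true;
        }
        return value.length() >= min && value.length() <= max;
    }
}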

Automatically apply field conversion function in Hibernate

I have a database table with a field that I need to read from and write to via Hibernate. It is a string field, but the contents are encrypted. And for various reasons (e.g. a need to sort the plain text values), the encrypt/decrypt functions are implemented inside the database, not in Java.
The problem I'm struggling with now is finding a way to invoke the encrypt/decrypt functions in Hibernate-generated SQL everywhere that the field is referenced and in a way that's transparent to my application code. Is this possible? I've looked into Hibernate's support for "derived" properties, but unfortunately, that approach doesn't support read-write fields. Any ideas appreciated.
I don't think there's a way to make encryption as you've described it completely transparent to your application. The closest thing you can get is to make it transparent outside of the entity. In your entity class:
@Entity
@SQLInsert(sql = "INSERT INTO my_table(my_column, id) VALUES(encrypt(?), ?)")
@SQLUpdate(sql = "UPDATE my_table SET my_column = encrypt(?) WHERE id = ?")
public class MyEntity {
    private String myValue;

    ...

    @Formula("decrypt(my_column)")
    public String getValue() {
        return myValue;
    }

    public void setValue(String value) {
        myValue = value;
    }

    @Column(name = "my_column")
    private String getValueCopy() {
        return myValue;
    }

    private void setValueCopy(String value) {
    }
}
value is mapped as a derived property; you should be able to use it in queries.
valueCopy is private and is used to get around the derived property being read-only.
SQLInsert and SQLUpdate are black voodoo magic to force encryption on insert / update. Note that parameter order IS important: you need to find out what order Hibernate would generate the parameters in without the custom insert / update and then replicate it.
You could have a trigger internal to the database that, on retrieval, decrypts the value and replaces the returned result and on insert encrypts the value and replaces the stored result with the encrypted value. You could also do this with a view wrapper - i.e. have an insert trigger on the view, and have the view automatically decrypt the value.
To better explain: have a view that decrypts the value, and an on insert trigger that encrypts the value that is linked to the view.
Actually, in the end, I went a different route and submitted a patch to Hibernate. It was committed to trunk last week and so I think it will be in the next release following 3.5. Now, in property mappings, you can specify SQL "read" and "write" expressions to call SQL functions or perform some other kind of database-side conversion.
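For reference, in later Hibernate versions this appears as column read/write expressions, exposed (as far as I know) through the org.hibernate.annotations.ColumnTransformer annotation. A sketch reusing the encrypt/decrypt functions from the question:
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;

import org.hibernate.annotations.ColumnTransformer;

@Entity
public class MyEntity {
    @Id
    private Long id;

    // The database-side functions are applied whenever Hibernate reads or
    // writes this column, so application code only ever sees the plain text.
    @Column(name = "my_column")
    @ColumnTransformer(read = "decrypt(my_column)", write = "encrypt(?)")
    private String value;
}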
Assuming you have access to the encrypt/decrypt algorithm from within Java, I would set up my mapped class something like
public class EncryptedTable {
    @Column(name = "encrypted_field")
    private String encryptedValue;

    @Transient
    private String value;

    public String getEncryptedValue() {
        return encryptedValue;
    }

    public String getValue() {
        return value;
    }

    public void setEncryptedValue(String encryptedValue) {
        this.encryptedValue = encryptedValue;
        this.value = decrypt(encryptedValue);
    }

    public void setValue(String value) {
        this.value = value;
        this.encryptedValue = encrypt(value);
    }
}
And then use get/set Value as the accessors within your program, and leave get/set EncryptedValue for Hibernate's use when accessing the database.
Why not just use the SQL server encryption that seems to already be in place, by calling a stored procedure from Hibernate instead of letting Hibernate generate a query?

JAXB Marshalling with null fields

This is a pretty simple request, but I just didn't find a way to do it.
I'm basically trying to set up a rule in JAXB which says that whenever a null field is encountered, instead of omitting it from the output, set it to an empty value. So for the class:
@XmlRootElement
class Foo {
    Integer num;
    Date date;
    ...
}
When this is marshalled into XML, if the date field is null, my output does not have that element in it. What I want is to include all the fields in the output, and if they are null, replace them with, say, a blank. So the output should be:
<foo>
    <num>123</num>
    <date></date>
</foo>
Thanks,
Jalpesh.
Thanks guys for your answers.
Chris Dail - I tried your approach, and it didn't really do what I wanted. JAXB was still ignoring my null values, in spite of defining a default value for my fields.
I did stumble across the answer after somebody in the Jersey forums pointed me to documentation section 2.2.12.8 No Value.
Basically, all I had to do was to add the following to my fields :
@XmlElement(nillable = true)
Once I added that, JAXB would output those fields when marshalling to XML, like this:
...
<num>5</num>
<date xsi:nil="true" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"/>
....
But, but, but... an empty string is not a valid lexical representation for a date, so you can't do that. That is, if you generate an XML document with an empty value for a date field, it won't validate properly.
In other words, if your date element has a minOccurs of 1 or more and is not nillable, then you absolutely must have (1 or more) dates, which can't be null (or blank, or other non-values).
As indicated in the other answer, an empty <date></date> is invalid since it is not a valid date. I had a similar issue where I wanted to handle that empty element specially. Since you cannot use null, you can use the default value mechanism in JAXB. The following will default the value if none is specified. You can then detect this special date in code and handle this exception case.
@XmlElement(defaultValue="1970-01-01T00:00:00.0-00:00")
So it is possible to detect an empty date value, but you just cannot use null to do it.
In MOXy you can specify how the JSON provider must do its job for JAXB.
So when doing JAX-RS, add the following code to your class derived from Application.
I used this code on Tomcat 7 with good results (EclipseLink 2.4.1).
@ApplicationPath("/rest")
public class RestApplication extends Application
{
    ...

    public Set<Object> getSingletons()
    {
        HashSet<Object> set = new HashSet<Object>(1);
        set.add(newMoxyJsonProvider());
        return set;
    }

    public static MOXyJsonProvider newMoxyJsonProvider()
    {
        MOXyJsonProvider result = new MOXyJsonProvider();
        //result.setAttributePrefix("#");
        result.setFormattedOutput(false);
        result.setIncludeRoot(false);
        result.setMarshalEmptyCollections(true);
        //result.setValueWrapper("$");
        return result;
    }
}
On Glassfish 3.1.2 and WAS 8.5, however, newMoxyJsonProvider() is not needed; there the JAXB provider gets configured by the server.
In the case of Glassfish, which comes with MOXy, I witnessed the same problems with null values.
I have not checked yet, but I guess the answer lies in configuring JAXB at the application server level, if that is possible at all.
Try this:
marshal.setListener(new MarshallerListener());
with
public class MarshallerListener extends Marshaller.Listener {
    public static final String BLANK_CHAR = "";

    @Override
    public void beforeMarshal(Object source) {
        super.beforeMarshal(source);
        Field[] fields = source.getClass().getDeclaredFields();
        for (Field f : fields) {
            f.setAccessible(true);
            try {
                if (f.getType() == String.class && f.get(source) == null) {
                    f.set(source, BLANK_CHAR);
                }
            } catch (IllegalAccessException e) {
                e.printStackTrace();
            }
        }
    }
}
