I'm currently using QueryDSL and I need to add a WHERE clause to filter by a field that stores a large amount of readable text and is created as a LOB field.
This is the field in my entity:
@Lob
@Column(name = "MY_FIELD", nullable = true)
private byte[] myField;
which is generated in this way in my "Q" class:
public final ArrayPath<byte[], Byte> myField = createArray("myField", byte[].class);
I can read the information in this field without a problem. However, when I tried to add the filtering clause, I realized the ArrayPath object doesn't have a like method, so I looked for a different way.
I tried several approaches and came up with this:
Expressions.predicate(Ops.LIKE, Expressions.stringPath("MY_FIELD"), Expressions.constant(stringValue));
The SQL code generated with the previous predicate is the following:
...
WHERE
MY_FIELD like '%?%' escape '!'
...
If I execute this SQL command directly against my database, it works perfectly and returns the correct rows depending on the "?" param. However, my application doesn't return any of them even though it executes the very same SQL command.
Is there anything I'm missing? Could it be done in a different way?
Thank you very much in advance.
PS: I'm using SQL Server 2011.
By default a byte[] is mapped to an ArrayPath. In the case of a (C)LOB, you want to map it to a StringPath instead. You can hint the code generator by specifying the QueryType:
@Lob
@QueryType(PropertyType.STRING)
@Column(name = "MY_FIELD", nullable = true)
private byte[] myField;
However, @Column(name = "MY_FIELD", nullable = true) seems to imply that you're querying JPA instead of plain SQL. Be aware that some JPA vendors may not support the like function for CLOBs.
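With the QueryType hint in place, the regenerated "Q" class exposes the field as a StringPath, so the predicate can be written directly (a sketch; QMyEntity/MyEntity and queryFactory are placeholder names for your generated query type, entity, and JPAQueryFactory):

```java
QMyEntity entity = QMyEntity.myEntity;
List<MyEntity> rows = queryFactory
        .selectFrom(entity)
        // contains(...) renders as MY_FIELD LIKE '%...%'
        .where(entity.myField.contains(stringValue))
        .fetch();
```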
Related
I'm trying to lazily fetch a single byte[] content property using Hibernate under Spring Boot, against a PostgreSQL database. So I pulled together a test app to try different solutions. One of them required me to use the @Lob annotation on said property, so I did. Now reading the entity from the database leads to a very curious error, namely:
Bad value for type long : \x454545454545445455
The value \x45... is the value of the bytea column, not the bigint one. Why is it trying to force it into a long even though it's the wrong column? Why does an annotation on one column somehow affect another one?
As for a fix, removing @Lob seems to work (at least in my stack), but the problem remains unexplained to me and I would like to know what is going on rather than just blindly moving on. Is it a bug, or am I misunderstanding something completely?
Entity:
@Entity
@Table(name = "blobentity")
public class BlobEntity {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    @Lob // this annotation breaks the code
    @Column(name = "content")
    @Basic(fetch = FetchType.LAZY)
    private byte[] content;

    @Column(name = "name")
    private String name;

    // getters/setters
}
Repository:
@Repository
public interface BlobRepo extends JpaRepository<BlobEntity, Long> {
}
Calling code:
@Autowired
BlobRepo blobrepo;

@GetMapping("lazyBlob")
public String blob() {
    var t = blobrepo.findAll().get(0);
    var name = t.getName();
    var dataAccessedIfLazy = t.getContent();
    return t.getName();
}
Postgres DDL:
CREATE TABLE test.blobentity (
id bigserial NOT NULL DEFAULT nextval('test.blobentity_id_seq'::regclass),
"name" varchar NULL,
"content" bytea NULL,
CONSTRAINT blobentity_pk PRIMARY KEY (id)
);
Used version:
PostgreSQL 10.4; springframework.boot 2.4.2; hibernate version that comes with this spring boot version
The bytea type is inlined into the table, whereas other types are chunked into a separate table, which on PostgreSQL is called TOAST. To access those values, databases have a concept often referred to as a LOB locator, which is essentially just an id for doing the lookup. Some drivers/databases just work either way, but others need to match the actual physical representation. In your case, using @Lob is just wrong because, AFAIK, bytea is inlined up to a certain size and de-TOASTed, i.e. materialized, automatically behind the scenes if necessary. If you were using the varbinary/blob type or something like that, you would have to use @Lob, as in that case the main table only contains the LOB locator, which is a long. The driver then knows, when you ask for the value via getBlob, that it has to execute something like a select get_lob(?) query to retrieve the actual contents.
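In mapping terms, the two situations look roughly like this (a sketch; the column definitions and field names are assumptions for PostgreSQL with Hibernate):

```java
// bytea: stored inline (TOASTed transparently when large) --
// the driver hands back the bytes directly, so no @Lob:
@Column(name = "content", columnDefinition = "bytea")
@Basic(fetch = FetchType.LAZY)
private byte[] content;

// Large object: the table column is an oid (a long LOB locator),
// resolved via the driver's large-object API, so @Lob is needed:
@Lob
@Column(name = "content_lo", columnDefinition = "oid")
private byte[] contentLo;
```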
We have an SQL table with a column of type "real" that we are trying to read using Hibernate. When we try to read from it, we expect a float but get this error:
found [real (Types#REAL)], but expecting [float (Types#FLOAT)]
Currently we do not have this field annotated with anything other than this:
@Column(name = "BatteryVoltage")
private float batteryVoltage;
However, I expect we might need to use precision and/or scale. Does Hibernate have a solution for this, or is it necessary to alter the table configuration?
The problem here is that Hibernate's "validate" step was failing to map the float type to a real type. I don't believe there was actually a run-time error. The solution, as this other post goes into, is to use the @Column(columnDefinition=...) annotation.
This solution worked for us:
@Column(name = "BatteryVoltage", columnDefinition = "real")
private float batteryVoltage;
I have a bunch of DTOs that are not commented at all. However, there are comments in the SQL database, which I can get by sending a query and reading the ResultSet.
My task is to create a javadoc API (as HTML) with the comments from the SQL database in order to make the codebase more understandable.
After asking about this task HERE, I looked into creating my own doclet. I then wrote my own doclet by rewriting the Standard-, Abstract- and HtmlDoclet from Java.Tools. My results are working fine and I can create javadoc HTML pages WITH the comments from the database.
However, it's a massive hack IMHO. There are two main tasks that need to be done in order to get the database comments:
know the table name
know the column name
How it should be done: (which is what I want to ask - How do I implement it like this?)
For 1.: Find the @Table annotation. Read name = "tablename".
For 2.: For each variable:
Is there a @Column annotation ? return "columnName" : return ""
How I do it right now:
For 1.: I read the RootDoc.name() variable and then scan the String char by char: find a capital letter, insert '_', and at the end turn everything .toUpperCase(). So "testFile" turns into "TEST_FILE".
This sometimes does not work. In the example class below, the class name is "SaklTAdrkla" but the database table name is SAKL_T_ADRKLAS. Parsing the name from RootDoc.name() results in "SAKL_T_ADRKLA", which is missing the character 'S' at the end, so it won't find the table in the database.
For 2.: I get all fields from the ClassDoc and then parse Field.name() the same way I parsed the RootDoc.name() variable.
This won't work, for the same reason as 1., but also because some field names are not the same as their mapped names. In the example class, the field sakgTAklgrpAklAkgid is mapped in the database as AKL_AKGID.
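The guessing approach above can be sketched in a few lines of plain Java, which also makes the failure mode visible (a minimal sketch; NameGuesser and toUpperSnake are illustrative names, not part of the doclet API):

```java
// Naive camelCase -> UPPER_SNAKE guesser, as described above:
// insert '_' before every capital (except the first char), then uppercase.
public class NameGuesser {

    static String toUpperSnake(String javaName) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < javaName.length(); i++) {
            char c = javaName.charAt(i);
            if (i > 0 && Character.isUpperCase(c)) {
                sb.append('_');
            }
            sb.append(c);
        }
        return sb.toString().toUpperCase();
    }

    public static void main(String[] args) {
        System.out.println(toUpperSnake("testFile"));    // TEST_FILE
        // The real table is SAKL_T_ADRKLAS -- the guess drops the final S:
        System.out.println(toUpperSnake("SaklTAdrkla")); // SAKL_T_ADRKLA
    }
}
```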
I am able to find the annotation itself by calling FieldDoc.annotations(). But that's ONLY the annotation, without the String (name = "xyz"), which is the most important part for me!
I have found the Jax-Doclet, which can parse the javax annotations. However, after downloading the jar source file and adding the java files, there are numerous dependency issues that are not resolvable because the referenced classes no longer exist in the Java 8 tools.jar.
Is there another solution, that is capable of reading the javax annotations?
Can I implement something into my doclet so it can read the javax annotations?
Edit:
I found out you can call .elementValues() on the AnnotationDesc class, which I can get from FieldDoc.annotations(). However, I always get a com.sun.jdi.ClassNotLoadedException: "Type has not been loaded occurred while retrieving component type of array". To fix it, I manually load the classes AnnotationDesc and AnnotationDesc.ElementValuePair by calling Class.forName(). However, now the array with the elementValuePairs is empty..?
Example class:
/**
 * The persistent class for the SAKL_T_ADRKLAS database table.
 */
@Entity
@IdClass(SaklTAdrklaPK.class)
@Table(name = "SAKL_T_ADRKLAS")
@NamedQuery(name = "SaklTAdrkla.findAll", query = "SELECT s FROM SaklTAdrkla s")
public class SaklTAdrkla implements Serializable, IModelEntity {
    private static final long serialVersionUID = 1L;

    @Id @Column(name = "AKL_AKLID") private String aklAklid;

    @Id
    // uni-directional many-to-one association to SakgTAklgrp
    @JsonBackReference(value = "sakgTAklgrpAklAkgid") @ManyToOne @JoinColumn(name = "AKL_AKGID") private SakgTAklgrp sakgTAklgrpAklAkgid;

    @Temporal(TemporalType.TIMESTAMP) @Column(name = "AKL_AEND") private Date aklAend;
    @Column(name = "AKL_DEFLT") private BigDecimal aklDeflt;
    @Column(name = "AKL_SPERRE") private BigDecimal aklSperre;
    @Column(name = "AKL_T_BEZ") private String aklTBez;
    @Column(name = "AKL_USRID") private String aklUsrid;

    public SaklTAdrkla() {
    }
It took me quite a while to figure this out, but I finally did.
The problem was that my doclet could not resolve the annotations, which it reported in the console as errors:
error: cannot find symbol @Column(name = "TST_USER") private String tstUser;
What I also found, among the many errors that got thrown, was this message:
error: package javax.persistence does not exist import javax.persistence.*;
So I imported javax.persistence.jar into my project.
I also added com.fasterxml.jackson.annotations.jar to the project, since it would not work without it either.
Surprise Surprise! IT WORKS!
I can get all the annotations and annotation values by using annotation.elementValues().
I no longer get an empty array, nor do I get a ClassNotLoadedException.
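For reference, reading the name element of @Column with the old com.sun.javadoc API looks roughly like this (a sketch; columnNameOf is an illustrative helper, and this API was removed after Java 8):

```java
// Returns the name element of a field's @Column annotation, or "" if absent.
static String columnNameOf(FieldDoc field) {
    for (AnnotationDesc annotation : field.annotations()) {
        String type = annotation.annotationType().qualifiedName();
        if (type.equals("javax.persistence.Column")) {
            for (AnnotationDesc.ElementValuePair pair : annotation.elementValues()) {
                if (pair.element().name().equals("name")) {
                    return pair.value().value().toString();
                }
            }
        }
    }
    return "";
}
```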
In our database table, we record a large string and its corresponding MD5 value. In MySQL 5, we insert such a record with
insert (md5,content) values (md5(content), hex(content));
Moving to hibernate, I have annotated the entity
@Column(name = "content", columnDefinition = "MEDIUMTEXT")
@ColumnTransformer(read = "unhex(content)", write = "hex(?)")
private String content;
which works great. But I don't see how to annotate the md5 column so that it can be automatically generated on insert. In particular, a ColumnTransformer won't work, since the ? in the annotation refers to the md5 field, not the content field.
Any observations or help appreciated.
You can use Hibernate interceptors, specifically the pre-save event (in this case), to compute the hash.
Read about Hibernate interceptors here: Hibernate Interceptors
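Alternatively, a standard JPA lifecycle callback avoids a full interceptor: compute the hash in a @PrePersist/@PreUpdate method on the entity. The hashing itself is plain Java (a sketch; Md5Util and md5Hex are illustrative names):

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class Md5Util {

    // Hex-encoded MD5 of a string, matching MySQL's md5() output.
    static String md5Hex(String content) {
        try {
            byte[] digest = MessageDigest.getInstance("MD5")
                    .digest(content.getBytes(StandardCharsets.UTF_8));
            // Pad to 32 lowercase hex chars, as MySQL does.
            return String.format("%032x", new BigInteger(1, digest));
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // MD5 is always available
        }
    }

    // In the entity, a callback keeps the column in sync on insert/update:
    //   @PrePersist
    //   @PreUpdate
    //   void computeMd5() { this.md5 = Md5Util.md5Hex(this.content); }

    public static void main(String[] args) {
        System.out.println(md5Hex("abc")); // 900150983cd24fb0d6963f7d28e17f72
    }
}
```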
If I use JPA (EclipseLink) to create tables, a String type results in a varchar2(255). How can I tell JPA (via an annotation) to create a varchar2(20) attribute?
If I have a List, JPA creates a BLOB(4000), but I would like a varchar2 (my serialized object's string is short).
How is this possible? Do I have to do it by hand?
You need to use the columnDefinition property of the @Column annotation, i.e.
@Column(columnDefinition="varchar2(20)")
If I use JPA (EclipseLink) to create tables a String type results in a varchar2(255). How could I tell JPA (via Annotation) to create a varchar2(20) attribute.
Using the columnDefinition can break portability from one database to another. For a string-valued column, prefer using the length element (which defaults to 255):
#Column(length=20)
String someString;
You can set the length on your #Column annotation, as such:
#Column(length = 20)
Note that length applies only to text columns. For numeric columns, you can use precision and scale.
@Column(name = "doc_number", columnDefinition = "varchar2(20)")
Please try that.