ORA-01460: unimplemented or unreasonable conversion requested using Hibernate @Lob - java

I have a byte[] that I am persisting to a Lob as follows:
@Basic(fetch = FetchType.LAZY)
@Column(name = "ABF", length = Integer.MAX_VALUE)
@Lob
private byte[] abf;
Seems simple enough, but when I attempt to store anything sizable in it (more than 4000 characters) I get the following exception when I try to commit:
java.sql.SQLException: ORA-01460: unimplemented or unreasonable conversion requested
None of the files I am attempting to store are anywhere near 32,000 characters. Is there some other gotcha here?

See this post.
In a nutshell:
<property name="hibernate.connection.SetBigStringTryClob">true</property>
<property name="hibernate.jdbc.batch_size">0</property>
It can also be:
An old Oracle JDBC driver (although I think the limit was 2k back then)
A driver/DB version mismatch
The wrong Oracle dialect specified in the Hibernate config
For DB stuff it's always helpful to supply driver and DB version info :)
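If you configure Hibernate in code rather than (or in addition to) hibernate.cfg.xml, the same two settings can be applied programmatically. A minimal sketch, assuming the rest of the configuration still comes from your existing cfg.xml:
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class HibernateBootstrap {
    public static SessionFactory buildSessionFactory() {
        // Load the existing hibernate.cfg.xml, then override the two LOB-related settings
        Configuration cfg = new Configuration().configure();
        cfg.setProperty("hibernate.connection.SetBigStringTryClob", "true");
        cfg.setProperty("hibernate.jdbc.batch_size", "0");
        return cfg.buildSessionFactory();
    }
}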

I just updated the Oracle driver and it worked fine.
It is mainly due to an Oracle driver mismatch.
If you have the right version of the JDBC driver for your Oracle version, this should not be an issue.

Sometimes it helps to do things in this order (see the sketch after the list):
insert the new entity with an empty lob;
commit;
populate the lob on the newly created entity;
update and commit.
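A minimal sketch of that pattern with plain JPA, using the abf field from the question (the entity class name, the largeBytes variable, and the EntityManager setup are placeholders):
EntityManager em = entityManagerFactory.createEntityManager();

em.getTransaction().begin();
MyEntity entity = new MyEntity();
entity.setAbf(new byte[0]);      // 1) insert the new entity with an empty LOB
em.persist(entity);
em.getTransaction().commit();    // 2) commit

em.getTransaction().begin();
entity.setAbf(largeBytes);       // 3) populate the LOB on the newly created entity
em.merge(entity);
em.getTransaction().commit();    // 4) update and commit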

@Dave Newton set me on the right path. The answer involved a few things. As Dave pointed out, I added these lines to hibernate.cfg.xml:
<property name="hibernate.connection.SetBigStringTryClob">true</property>
<property name="hibernate.jdbc.batch_size">0</property>
I was previously using hsqldb-2.0.0.jar. I updated this to the current version (hsqldb-2.2.5.jar). I think this was the main culprit, and I swear I've noticed a database performance increase since doing this.
I also updated to the current version of ojdbc14.jar (10.2.0.5). I was previously on some older version, but I don't know exactly which one. It should be noted that even after updating to version 10.2.0.5 the problem did not go away. It wasn't until I updated the hsqldb.jar version that the problem was resolved.

Related

Cassandra Java datastax 2.1.8 Cannot connect to keyspace with quotes

I have a simple piece of code for removing data from Cassandra 2:
Cluster myCluster = Cluster.builder().addContactPoint(myhost).withPort(myport).build();
Session session = myCluster.connect(keyspaceName);
session.execute(deleteStatement); // it is just a simple Delete.Where
So basically, when I try to do something on (for example) keyspaceName = "test",
it will execute my delete statement without a problem, but if I try the same thing for (for example) keyspace = "\"DONT_WORK\"" (since I have a keyspace name in quotes in Cassandra), it won't work and will throw:
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: localhost/127.0.0.1:16660 (com.datastax.driver.core.ConnectionException: [localhost/127.0.0.1:16660] Pool is shutdown))
at com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:84)
at com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:214)
I need help, please.
PS. I even used the Metadata.quote() static method from the DataStax library; it still isn't working.
You should not need to quote the keyspace name for connecting. Quoting is important to protect case sensitivity in the context of a CQL string, but you do not need to protect it if you are passing the keyspace name as an API parameter.
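A sketch of the distinction this answer is drawing, against the 2.x driver API (the contact point, port, and table name are placeholders; whether your particular keyspace is case-sensitive depends on how it was created):
Cluster cluster = Cluster.builder()
        .addContactPoint("127.0.0.1")
        .withPort(9042)
        .build();

// As the answer suggests, pass the keyspace name as an API parameter without adding quotes
Session session = cluster.connect("DONT_WORK");

// Inside a CQL string, quoting does matter, because it protects case sensitivity
session.execute("DELETE FROM \"DONT_WORK\".my_table WHERE id = 1");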
OK, there is no need for further investigation of this problem. The issue was that I accidentally used the 2.1.8 DataStax library against Cassandra version 2.0.8. I have to stop using the numeric keyboard. A simple mistake, but it sure made quite a fuss.

UnsupportedCharsetException: Cp1027 with DB2 JDBC Driver

I am creating a simple database table with a column of type TIMESTAMP on IBM DB2 on mainframes, from a JDBC client, like this:
CREATE TABLE scma.timetest(
T_TYPE VARCHAR(8),
T_DATE TIMESTAMP
);
With or without inserting any records, if I do a select * from scma.timetest; I end up getting the exception below:
java.nio.charset.UnsupportedCharsetException: Cp1027
If I don't have the TIMESTAMP type column, everything works fine. I have tried starting the JDBC client with -Dfile.encoding=UTF-8, to no avail. I tried the same thing from a Java program as well, and it results in the same error.
It is not the same problem mentioned here; I don't get a ClassNotFoundException. Any pointer on what could be wrong? Here is the full exception if it helps:
Exception in thread "main" java.nio.charset.UnsupportedCharsetException: Cp1027
at java.nio.charset.Charset.forName(Charset.java:531)
at com.ibm.db2.jcc.am.t.<init>(t.java:13)
at com.ibm.db2.jcc.am.s.a(s.java:12)
at com.ibm.db2.jcc.am.o.a(o.java:444)
at com.ibm.db2.jcc.t4.cc.a(cc.java:2412)
at com.ibm.db2.jcc.t4.cb.a(cb.java:3513)
at com.ibm.db2.jcc.t4.cb.a(cb.java:2006)
at com.ibm.db2.jcc.t4.cb.a(cb.java:1931)
at com.ibm.db2.jcc.t4.cb.m(cb.java:765)
at com.ibm.db2.jcc.t4.cb.i(cb.java:253)
at com.ibm.db2.jcc.t4.cb.c(cb.java:55)
at com.ibm.db2.jcc.t4.q.c(q.java:44)
at com.ibm.db2.jcc.t4.rb.j(rb.java:147)
at com.ibm.db2.jcc.am.mn.kb(mn.java:2107)
at com.ibm.db2.jcc.am.mn.a(mn.java:3099)
at com.ibm.db2.jcc.am.mn.a(mn.java:686)
at com.ibm.db2.jcc.am.mn.executeQuery(mn.java:670)
Moving this here from comments:
Legacy DB2 for z/OS often uses the EBCDIC (also known as CP1027) encoding for character data. Also, I believe DB2 sends timestamp values to the client as character strings, although they are stored differently internally. I suspect that the Java runtime you are using does not support CP1027, so it doesn't know how to convert the EBCDIC data to whatever it needs on the client. I cannot explain, though, why the VARCHAR value comes through OK.
For more details about DB2 encoding you can check the manual.
You can force DB2 to create the table using a different encoding, which is likely to be supported by Java:
CREATE TABLE scma.timetest(...) CCSID UNICODE
Another alternative might be to use a different Java runtime that supports the EBCDIC (CP1027) encoding; the IBM JDK, which comes with some DB2 client packages, would be a good candidate.
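A quick way to check whether the runtime you are on knows that encoding at all is a one-liner against the standard charset API (the charset name is the one from the stack trace):
import java.nio.charset.Charset;

public class CharsetCheck {
    public static void main(String[] args) {
        // Prints false on runtimes that do not ship the EBCDIC Cp1027 charset,
        // which is when the DB2 JDBC driver fails with UnsupportedCharsetException
        System.out.println("Cp1027 supported: " + Charset.isSupported("Cp1027"));
    }
}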
You (well, not you but the mainframe system programmers) can also configure the default encoding scheme for the database (subsystem).

Liquibase fails during checking of non-existence of primary key

While replacing mysql-connector with the MariaDB driver, I ran into a situation where Liquibase fails on the changeset where I check for the non-existence of a primary key:
<preConditions onFail="MARK_RAN">
<not>
<primaryKeyExists tableName="users"/>
</not>
</preConditions>
It fails with a NullPointerException:
Error: null java.lang.NullPointerException at liquibase.snapshot.jvm.MySQLDatabaseSnapshotGenerator.convertPrimaryKeyName(MySQLDatabaseSnapshotGenerator.java:124)
at liquibase.snapshot.jvm.JdbcDatabaseSnapshotGenerator.readPrimaryKeys(JdbcDatabaseSnapshotGenerator.java:759)
at liquibase.snapshot.jvm.JdbcDatabaseSnapshotGenerator.createSnapshot(JdbcDatabaseSnapshotGenerator.java:243)
at liquibase.snapshot.DatabaseSnapshotGeneratorFactory.createSnapshot(DatabaseSnapshotGeneratorFactory.java:69)
at liquibase.precondition.core.PrimaryKeyExistsPrecondition.check(PrimaryKeyExistsPrecondition.java:52)
at liquibase.precondition.core.NotPrecondition.check(NotPrecondition.java:30)
at liquibase.precondition.core.AndPrecondition.check(AndPrecondition.java:34)
at liquibase.precondition.core.PreconditionContainer.check(PreconditionContainer.java:199)
at liquibase.changelog.ChangeSet.execute(ChangeSet.java:249)
If I remove this clause, Liquibase works fine. The interesting thing is that other preConditions work fine, for example ones that check for the existence of a table.
After diving into the code I found that the issue is in JdbcDatabaseSnapshotGenerator#readPrimaryKeys, where the primary keys are fetched. Of course, there are different implementations for different databases, and it seems that with MariaDB I get a slightly different ResultSet (with a null column for the primary key name). The funny thing, however, is that the method where it fails (in MySQLDatabaseSnapshotGenerator) looks like this:
@Override
protected String convertPrimaryKeyName(String pkName) throws SQLException {
    if (pkName.equals("PRIMARY")) {
        return null;
    } else {
        return pkName;
    }
}
So, if it were just the other way around, it would work for me :) Like this, I mean:
if ("PRIMARY".equals(pkName))
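For reference, a null-safe version of that whole method would look like this (my own suggestion, not Liquibase's actual code):
@Override
protected String convertPrimaryKeyName(String pkName) throws SQLException {
    // null-safe comparison: a null pkName coming back from the MariaDB driver
    // no longer triggers a NullPointerException
    if ("PRIMARY".equals(pkName)) {
        return null;
    }
    return pkName;
}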
THE QUESTION: Is this a bug in Liquibase, or am I doing something wrong?
Based on my research, I came to the following conclusion.
It is probably a bug in Liquibase, but we were also using the quite old version 2.0.5.
Upgrading to the current 3.2.0 didn't help at all, because then even the first changeset fails, no matter which driver (mysql-connector or mariadb) or which database (MySQL or PostgreSQL) I use. One of the main things I found was that Liquibase doesn't actually have support for MariaDB, according to this ticket:
https://liquibase.jira.com/browse/CORE-1411
Moreover, I was thinking that maybe the MariaDB driver has some other version, but there seems to be only one for now:
http://mvnrepository.com/artifact/org.jumpmind.symmetric.jdbc/mariadb-java-client
So, in general, removing these preConditions fixes my problem; the databases end up the same, at least with a clean installation. However, I still think it shouldn't be like this, so it would be nice to hear some other thoughts if anyone has them.

Cannot Execute Stored Procedure using JDBC

I use a Sybase database and am trying to update some values in the database.
While trying to run this, it throws an exception:
com.sybase.jdbc2.jdbc.SybSQLException: The identifier that starts with 'WeeklyStudentEventClassArchiv' is too long. Maximum length is 30.
The table is in another database, and thus I have to use the database name along with the table name, as shown below:
StudActive..WeeklyStudentEventClassArchiv, which apparently exceeds 30 characters.
I have to use databasename..tablename in the stored procedure, but it throws an exception.
This happens even if I physically embed the SQL in the Java code.
How can this be solved?
The stored procedure is shown below:
create proc dbo.sp_getStudentList(
    @stDate int,
    @endDate int
)
as
begin
    set nocount on
    select distinct studCode
    from StudActive..WeeklyStudentEventClassArchive
    where studCode > 0
    and courseStartDate between @stDate and @endDate
end
StudActive..WeeklyStudentEventClassArchiv which apparently exceeds 30 characters.
Yes - I count 41.
Rename the table and/or the stored proc and you should be fine. It sounds like a limitation of either the JDBC driver or the database.
Your JDBC driver is out of date. Updating to a later version might help solve your problem.
First, download a more recent jConnect driver from the Sybase website, and update your code to use the new driver package; the package name of the driver changes with each new version of the specification. (The current package is com.sybase.jdbcx...)
Take a look at the programmer's reference for more information.
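For example, once the newer driver jar is on the classpath, the call could look roughly like this (a sketch; the driver class shown is the jConnect 7.x one, and the host, port, database, credentials, and date values are placeholders):
Class.forName("com.sybase.jdbc4.jdbc.SybDriver");
Connection conn = DriverManager.getConnection(
        "jdbc:sybase:Tds:dbhost:5000/mydb", "user", "password");

// Call the stored procedure through the standard JDBC escape syntax
CallableStatement cs = conn.prepareCall("{call dbo.sp_getStudentList(?, ?)}");
cs.setInt(1, 20090101);   // @stDate
cs.setInt(2, 20091231);   // @endDate
ResultSet rs = cs.executeQuery();
while (rs.next()) {
    System.out.println(rs.getInt("studCode"));
}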

Problem persisting a java.util.Date into MySql using Hibernate

I've been debugging this problem for the last couple of hours with no success and figured I'd throw it out to SO and see where that goes.
I'm developing a Java program that persists data into a MySQL database using Hibernate and the DAO/DTO pattern. In my database, I have a memberprofile table with a firstLoginDate column; the SQL type of that column is DATETIME. The corresponding section of the Hibernate XML file is:
<property name="firstLoginDate" type="timestamp">
<column name="firstLoginDate" sql-type="DATETIME"/>
</property>
However, when I try to save a Date into that table (in Java), the "date" (year/month/day) part is persisted correctly, but the "time of day" part (hours:minutes:seconds) is not. For instance, if I try to save a Java date representing 2009-09-01 14:02:23, what ends up in the database is 2009-09-01 00:00:00 instead.
I've already confirmed that my own code isn't stomping on the time component; as far as I can see in the source (while debugging), the time component remains correct. However, after committing the changes, I can examine the relevant row using the MySQL Query Browser (or just grab it back out of the database in my Java code), and indeed the time component is missing. Any ideas?
I did try persisting a java.sql.Timestamp instead of a java.util.Date, but the problem remained. Also, I have a very similar column in another table that does not exhibit this behavior at all.
I expect you guys will have questions, so I'll edit this as needed. Thanks!
Edit @Nate:
...
MemberProfile mp = ...
Date now = new Date();
mp.setFirstLoginDate(now);
...
MemberProfile is pretty much a wrapper class for the DTO; setting the first login date sets a field of the DTO and then commits the changes.
Edit 2: It seems to only occur on my machine. I've already tried rebuilding the table schema and wiping out all of my local source and re-checking it out from CVS, with no improvement. Now I'm really stumped.
Edit 3: Completely wiping my MySql installation, reinstalling it, and restoring the database from a known good copy also did not fix the problem.
I have a similar setup to you (except mine works), and my mapping file looks like this:
<property name="firstLoginDate" type="timestamp">
<column name="firstLoginDate" length="19"/>
</property>
My database shows the column definition as datetime.
Edit:
Some more things to check:
Check that the MySQL driver is the same on your local machine as on the working machines.
Try dropping the table and have Hibernate recreate it for you. If that works, then there's a problem in the mapping.
This may or may not be your problem, but we have had serious problems with date/time info: if your database server is in a different time zone than the machine submitting the data, you can get inconsistencies in the saved data.
Beyond that, with our annotation configuration, it looks something like the following:
@Column(name="COLUMN_NAME", length=11)
If it is viable for you, consider using the Joda-Time DateTime class, which is much nicer than the built-in classes; you can also persist it using Hibernate via its Hibernate support.
Using it, I mark my fields or getters with the annotation for custom Hibernate types:
@org.hibernate.annotations.Type(type = "org.joda.time.contrib.hibernate.PersistentDateTime")
@Column(name = "date")
This works fine for me, and it also generates the correct SQL for schema generation. It works fine in MySQL.
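Put together on an entity field, that looks roughly like this (the field name is just an example):
@org.hibernate.annotations.Type(type = "org.joda.time.contrib.hibernate.PersistentDateTime")
@Column(name = "date")
private DateTime firstLoginDate;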
Use TemporalType.TIMESTAMP in your @Temporal annotation.
Please check the example below.
@Temporal(TemporalType.TIMESTAMP)
public Date getCreated() {
    return this.created;
}
