Oracle RDF table throws ORA-22835 error when saving triple - java

We're running a Java application that saves data to the Oracle RDF triplestore using the Jena adapter. Our version of Oracle is 11gR2.
Recently, we've been getting the following error during the save of a large triple:
ERROR http-bio-8080-exec-4 oracle.spatial.rdf.client.jena.GraphOracleSem:
Could not add triple java.sql.SQLException:
ORA-22835: Buffer too small for CLOB to CHAR or BLOB to RAW conversion (actual: 5223, maximum: 4000)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:439)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:395)
at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:802)
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:436)
at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:186)
at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:521)
at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:205)
at oracle.jdbc.driver.T4CPreparedStatement.executeForRows(T4CPreparedStatement.java:1008)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1307)
at oracle.jdbc.driver.OraclePreparedStatement.sendBatch(OraclePreparedStatement.java:3753)
at oracle.jdbc.driver.OraclePreparedStatement.processCompletedBindRow(OraclePreparedStatement.java:2112)
at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3444)
at oracle.jdbc.driver.OraclePreparedStatement.executeUpdate(OraclePreparedStatement.java:3530)
at oracle.jdbc.driver.OraclePreparedStatementWrapper.executeUpdate(OraclePreparedStatementWrapper.java:1350)
at oracle.spatial.rdf.client.jena.GraphOracleSem.performAdd(GraphOracleSem.java:3509)
at oracle.spatial.rdf.client.jena.OracleBulkUpdateHandler.add(OracleBulkUpdateHandler.java:1226)
at oracle.spatial.rdf.client.jena.OracleBulkUpdateHandler.addIterator(OracleBulkUpdateHandler.java:1257)
at oracle.spatial.rdf.client.jena.OracleBulkUpdateHandler.add(OracleBulkUpdateHandler.java:1278)
at oracle.spatial.rdf.client.jena.OracleBulkUpdateHandler.add(OracleBulkUpdateHandler.java:1268)
at com.hp.hpl.jena.sparql.modify.UpdateProcessorVisitor$1.exec(UpdateProcessorVisitor.java:51)
at com.hp.hpl.jena.sparql.modify.GraphStoreUtils.action(GraphStoreUtils.java:60)
at com.hp.hpl.jena.sparql.modify.UpdateProcessorVisitor.visit(UpdateProcessorVisitor.java:48)
at com.hp.hpl.jena.sparql.modify.op.UpdateInsertData.visit(UpdateInsertData.java:16)
at com.hp.hpl.jena.sparql.modify.UpdateProcessorMain.execute(UpdateProcessorMain.java:34)
at com.hp.hpl.jena.update.UpdateAction.execute(UpdateAction.java:253)
at com.hp.hpl.jena.update.UpdateAction.parseExecute(UpdateAction.java:176)
at com.hp.hpl.jena.update.UpdateAction.parseExecute(UpdateAction.java:143)
at com.hp.hpl.jena.update.UpdateAction.parseExecute(UpdateAction.java:105)
As the error indicates, it occurs when the data string exceeds 4000 bytes. Though the error doesn't name the table or column, the Oracle documentation suggests this case is supposed to be handled automatically:
RDF_VALUE$ Table:
LONG_VALUE: CLOB - The character string if the length of the lexical value is greater than 4000 bytes. Otherwise, this column has a null value.
VALUE_NAME: VARCHAR2(4000) - This is a computed column. If length of the lexical value is 4000 bytes or less, the value of this column is the concatenation of the values of VNAME_PREFIX column and the VNAME_SUFFIX column.
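Based on the documented 4000-byte cutoff, a client-side check of the literal's byte length can show which storage path a lexical value should take. This is a minimal sketch (the helper is hypothetical, not part of the Jena adapter API); note the limit is in bytes, not characters, so multi-byte UTF-8 data crosses it sooner:

```java
import java.nio.charset.StandardCharsets;

public class LexicalValueCheck {
    // Per the RDF_VALUE$ documentation, lexical values longer than
    // 4000 bytes belong in the LONG_VALUE CLOB column; shorter ones
    // go into the VARCHAR2(4000) VALUE_NAME column.
    static boolean needsClob(String lexicalValue) {
        return lexicalValue.getBytes(StandardCharsets.UTF_8).length > 4000;
    }

    public static void main(String[] args) {
        // A 5223-character ASCII literal, matching the "actual: 5223"
        // size reported in the ORA-22835 error above.
        String big = new String(new char[5223]).replace('\0', 'x');
        System.out.println(needsClob(big));     // exceeds the cutoff
        System.out.println(needsClob("short")); // fits in VARCHAR2
    }
}
```

For the failing 5223-byte value, the check returns true, meaning the adapter should be routing it to LONG_VALUE rather than the VARCHAR2 column.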
Some users are not seeing this error, though it may be that they just haven't tried to save anything big enough. We've tried clearing out the user's triplestore model, which seemed to work for a couple of days, but then the error came back.
Does anybody have any hints about where to start debugging this? Thank you.

I had the same problem a couple of years ago. Which version of the Jena adapter are you using? I got a patch that solved the problem; maybe you can check whether it is still available on Oracle Support. These are the instructions I received:
Log in to support.oracle.com,
then click on the Patches & Updates tab.
In the Patch Search panel, click the Search tab and type 10186312 in the text box next to the Patch Name or Number button.
Click the Search button. It should return one match.
Click on the patch name 10186312, then click Download.

Related

Oracle sql developer save hebrew data - converted to gibberish

I'm trying to insert Hebrew content into a table, using Oracle SQL Developer.
This is my code using MyBatis:
<insert id="insertTaskTempImage" parameterType="com.ladpc.mobile.entities.TaskTempImage" useGeneratedKeys="false">
INSERT INTO TASKS_TEMP_IMAGES (
TASK_ID,
RASHUT_ID,
COMMENTS,
CREATION_DATE,
IMAGE,
FILE_NAME
)
VALUES (
#{taskId,jdbcType=NUMERIC},
#{rashutId,jdbcType=NUMERIC},
#{comments, jdbcType=VARCHAR},
#{creationDate, jdbcType=TIMESTAMP},
#{image,jdbcType=BLOB},
#{fileName,jdbcType=VARCHAR}
)
</insert>
After I insert the fileName (containing Hebrew characters) into the table, the stored content is gibberish,
and when I load this content and show it in the UI, it is also gibberish.
What do I need to do to resolve this issue?
edit:
My NLS settings are set to Hebrew, but it's still not working...
Thank you!
Try ALTER SESSION SET NLS_LANGUAGE; you will have to find the right value for the language you want.
Tools > Preferences > Code Editor > Fonts.
Set it to something friendly like Tahoma.
Query your data, save it.
And then open said file.
Oracle SQL Developer is a Java app - it's got full Unicode support out of the box. Nine times out of ten, display issues are due to an incompatible font.
I don't read Hebrew, I used an ipsum generator for this text, so if it's offensive, my apologies.
Sorry, I found the solution...
My client needed to send this data in UTF-8 encoding.
The problem wasn't in the server's inserts; the server was receiving the string already as gibberish...
Thanks for all the answers!
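The asker's resolution (the client sending data in the wrong encoding) can be illustrated with a minimal sketch: the same UTF-8 bytes decoded with the wrong charset come out as gibberish, while decoding them as UTF-8 restores the original text. The Hebrew string is written with Unicode escapes to keep the source ASCII-safe:

```java
import java.nio.charset.StandardCharsets;

public class Mojibake {
    public static void main(String[] args) {
        // "shalom", written with Unicode escapes
        String hebrew = "\u05E9\u05DC\u05D5\u05DD";
        byte[] utf8 = hebrew.getBytes(StandardCharsets.UTF_8);

        // Decoding the UTF-8 bytes with the wrong charset yields gibberish...
        String wrong = new String(utf8, StandardCharsets.ISO_8859_1);
        // ...while decoding with the charset the client actually used restores it.
        String right = new String(utf8, StandardCharsets.UTF_8);

        System.out.println(hebrew.equals(right)); // round-trip succeeds
        System.out.println(hebrew.equals(wrong)); // round-trip fails
    }
}
```

The same mismatch between the encoding the client sends and the encoding the server assumes is what turns Hebrew file names into gibberish before they ever reach the table.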

UnsupportedCharsetException: Cp1027 with DB2 JDBC Driver

I am creating a simple database table with a column of type Timestamp on IBM DB2 on a mainframe, from a JDBC client, like this:
CREATE TABLE scma.timetest(
T_TYPE VARCHAR(8),
T_DATE TIMESTAMP
);
With or without inserting any records, if I do a select * from scma.timetest; I get the exception below:
java.nio.charset.UnsupportedCharsetException: Cp1027
If I don't have the Timestamp column, everything works fine. I have tried starting the JDBC client with -Dfile.encoding=UTF-8, to no avail. I tried the same thing from a Java program as well, and it results in the same error.
It is not the same problem mentioned here; I don't get a ClassNotFoundException. Any pointers on what could be wrong? Here is the full exception if it helps:
Exception in thread "main" java.nio.charset.UnsupportedCharsetException: Cp1027
at java.nio.charset.Charset.forName(Charset.java:531)
at com.ibm.db2.jcc.am.t.<init>(t.java:13)
at com.ibm.db2.jcc.am.s.a(s.java:12)
at com.ibm.db2.jcc.am.o.a(o.java:444)
at com.ibm.db2.jcc.t4.cc.a(cc.java:2412)
at com.ibm.db2.jcc.t4.cb.a(cb.java:3513)
at com.ibm.db2.jcc.t4.cb.a(cb.java:2006)
at com.ibm.db2.jcc.t4.cb.a(cb.java:1931)
at com.ibm.db2.jcc.t4.cb.m(cb.java:765)
at com.ibm.db2.jcc.t4.cb.i(cb.java:253)
at com.ibm.db2.jcc.t4.cb.c(cb.java:55)
at com.ibm.db2.jcc.t4.q.c(q.java:44)
at com.ibm.db2.jcc.t4.rb.j(rb.java:147)
at com.ibm.db2.jcc.am.mn.kb(mn.java:2107)
at com.ibm.db2.jcc.am.mn.a(mn.java:3099)
at com.ibm.db2.jcc.am.mn.a(mn.java:686)
at com.ibm.db2.jcc.am.mn.executeQuery(mn.java:670)
Moving this here from comments:
Legacy DB2 for z/OS subsystems often use EBCDIC encodings (CP1027 is one such code page) for character data. Also, I believe DB2 sends timestamp values to the client as character strings, although they are stored differently internally. I suspect that the Java runtime you are using does not support CP1027, so it doesn't know how to convert the EBCDIC data to whatever it needs on the client. I can't explain, though, why the VARCHAR value comes through OK.
For more details about DB2 encoding you can check the manual.
You can force DB2 to create a table using different encoding, which will likely be supported by Java:
CREATE TABLE scma.timetest(...) CCSID UNICODE
Another alternative might be to use a different Java runtime that supports the EBCDIC (CP1027) encoding. The IBM JDK, which comes with some DB2 client packages, would be a good candidate.
You (well, not you but the mainframe system programmers) can also configure the default encoding scheme for the database (subsystem).
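Before changing the table or the runtime, it may help to probe which code pages the client JVM actually supports. This sketch just queries java.nio.charset.Charset; the Cp1027 result will vary by JDK vendor (the IBM JDK mentioned above is more likely to report true):

```java
import java.nio.charset.Charset;

public class CharsetCheck {
    public static void main(String[] args) {
        // Cp1027 is the code page from the stack trace; Cp037 is another
        // common EBCDIC code page; UTF-8 is always supported.
        for (String name : new String[] {"Cp1027", "Cp037", "UTF-8"}) {
            System.out.println(name + " supported: " + Charset.isSupported(name));
        }
    }
}
```

If the runtime reports false for the server's code page, the options above (CCSID UNICODE, a different JDK, or reconfiguring the subsystem's default encoding) are the ways out.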

empire db code-gen double parse failure in reading "0.-127" from table meta data

I'm trying to reverse engineer my project's Oracle tables using the empire-db code generator.
For a primary key column (ID), the metadata returned has invalid values (COLUMN_SIZE is 0 and DECIMAL_DIGITS is -127), which causes the exception: For input string: "0.-127"
Can anyone enlighten me on why DECIMAL_DIGITS is -127?
Exception in thread "main" java.lang.NumberFormatException: For input string: "0.-127"
at sun.misc.FloatingDecimal.readJavaFormatString(FloatingDecimal.java:1241)
at java.lang.Double.parseDouble(Double.java:540)
at org.apache.empire.db.codegen.CodeGenParser.addColumn(CodeGenParser.java:368)
at org.apache.empire.db.codegen.CodeGenParser.populateTable(CodeGenParser.java:300)
at org.apache.empire.db.codegen.CodeGenParser.populateDatabase(CodeGenParser.java:168)
at org.apache.empire.db.codegen.CodeGenParser.loadDbModel(CodeGenParser.java:96)
at org.apache.empire.db.codegen.CodeGenerator.generate(CodeGenerator.java:57)
at org.apache.empire.db.codegen.CodeGenerator.generate(CodeGenerator.java:72)
at org.apache.empire.db.codegen.CodeGenerator.main(CodeGenerator.java:45)
I invoke the CodeGenerator using mvn generate-sources
PS: I've tried both the ojdbc6 and ojdbc14 jars; neither worked.
It seems to be related to the issue reported here: DatabaseMetaData.getColumns() returns unexpected values in COLUMN_SIZE and DECIMAL_DIGITS for INTEGER columns (the -127 is coming from "int decimalDig = rs.getInt("DECIMAL_DIGITS");")
And running the JVM with the option "-Doracle.jdbc.J2EE13Compliant=true" seems to fix it. Could you try?
Looks like this is a bug in Oracle JDBC:
https://community.oracle.com/thread/3677876
As a workaround, you could change line 368 in CodeGenParser.java to handle the case where rs.getInt("DECIMAL_DIGITS") is -127.
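A sketch of that workaround (the method name is hypothetical; it mirrors the idea of guarding the DECIMAL_DIGITS read before empire-db builds the "size.scale" string that Double.parseDouble chokes on):

```java
public class DecimalDigitsFix {
    // Oracle's JDBC driver reports DECIMAL_DIGITS = -127 for NUMBER
    // columns declared without an explicit scale; empire-db then builds
    // the string "0.-127" and Double.parseDouble throws. Clamping the
    // sentinel to zero before the parse keeps the generator running.
    static double columnSize(int columnSize, int decimalDigits) {
        if (decimalDigits < 0) {
            decimalDigits = 0; // -127 sentinel: treat as no scale
        }
        return Double.parseDouble(columnSize + "." + decimalDigits);
    }

    public static void main(String[] args) {
        System.out.println(columnSize(0, -127)); // the failing metadata from the question
        System.out.println(columnSize(10, 2));   // a well-formed NUMBER(10,2)
    }
}
```

The -Doracle.jdbc.J2EE13Compliant=true flag suggested above avoids the need for this by making the driver report JDBC-compliant metadata in the first place.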

Error While Inserting Data to Database from Java Program

I am trying to develop an inventory management system as part of my mini project.
When I try to insert data into my Bill_Master table, it returns an error:
java.sql.SQLException: [Microsoft][ODBC driver for Oracle][Oracle]ORA-01858: a non-numeric character was found where a numeric was expected
bqty=Integer.parseInt(iqty.getText());
bamount=Float.parseFloat(famnt.getText());
bdsc=Integer.parseInt(dsc.getText());
bnet=Float.parseFloat(netamnt.getText());
billid=Integer.parseInt(billn.getText());
code=Integer.parseInt(icode.getText());
bqty=Integer.parseInt(iqty.getText());
rate=getRate(code);
iamount=rate*bqty;
amt.setText(Float.toString(iamount));
total=total+iamount;
try
{
billdetailid++;
stmt.executeUpdate("insert into Bill_Master values('"+billid+"','"+date+"','"+cname+"','"+total+"','"+bdsc+"','"+total+"','"+uid+"')");//Error Causing Line.
Values are
(1,'27-oct-2013','n/a',900.00,0.0,900.00,'Desk')
Table Structure
Bill_Id (Primary Key INT ):-Stores Bill Number
Bill_Date (Date): Stores Date Of Bill
Customer_Name ( VARCHAR(50)): Customer Name
Total_amt (NUMBER(6)) :Total Bill Amount
Cash_Disc (Number(2)):Discount
Grand_Total(Number(6)):Grand Total
UID(VARCHAR(10)) Stores Who Generated the bill.(EMPLOYEE ID)
Connection Type :ODBC
Please help to solve this issue.
You are putting single quotes around each of your values, including bill_Id, which is defined as an int. The SQL database reads these as strings and complains. Also (as was already pointed out), PreparedStatements make this a lot easier and more secure.
Try this:
stmt.executeUpdate("insert into Bill_Master values('"+billid+"',to_date('"+date+"', 'dd-MON-yyyy'),'"+cname+"','"+total+"','"+bdsc+"','"+total+"','"+uid+"')");
Firstly, that is one horrible way of writing SQL queries in Java!
I'm guessing you have just started learning. Please check out PreparedStatements;
data-type-related bugs will become much easier to debug.
Also, that is not the way to write repeated string appends. Check out StringBuilder and StringBuffer.
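On the date front specifically, ORA-01858 typically means a string was bound where Oracle expected something it could parse as a date. Parsing the UI text into a java.sql.Date first, so it can be bound with PreparedStatement.setDate rather than concatenated into the SQL, avoids the problem entirely. A minimal sketch of just the parsing step (the "27-oct-2013" format is from the question; the helper name is hypothetical):

```java
import java.sql.Date;
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Locale;

public class BillDate {
    // Turn the UI's "27-oct-2013" style string into a java.sql.Date
    // suitable for PreparedStatement.setDate(...), instead of pasting
    // it into the SQL text and hoping the session's NLS settings match.
    static Date parseBillDate(String text) throws ParseException {
        SimpleDateFormat fmt = new SimpleDateFormat("dd-MMM-yyyy", Locale.ENGLISH);
        fmt.setLenient(false); // reject malformed input instead of guessing
        return new Date(fmt.parse(text).getTime());
    }

    public static void main(String[] args) throws ParseException {
        System.out.println(parseBillDate("27-oct-2013"));
    }
}
```

With a PreparedStatement the other numeric columns (Bill_Id, Total_amt, and so on) would likewise be bound with setInt/setFloat, removing the quoting problem described above.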
I found the exact problem. The reason was that I was trying to insert the Label component instead of the text in the label.
The correct statement is:
stmt.executeUpdate("insert into Bill_Master values('"+billid+"','"+date.getText()+"','"+cname.getText()+"','"+total+"','"+bdsc+"','"+total+"','"+uid+"')");

Error: Special characters are not uploaded from csv to database in Liferay 6.1

I am inserting records from a csv file into a MySQL database in Liferay 6.1. I have already set up the portal-ext.properties file with:
jdbc.default.driverClassName=com.mysql.jdbc.Driver
jdbc.default.url=jdbc:mysql://localhost:3306/lportal?useUnicode=true&characterEncoding=UTF-8&useFastDateParsing=false
jdbc.default.username=root
jdbc.default.password=root
When I try to upload the records, it throws an error for special characters like á.
Error details:
13:38:21,001 ERROR [JDBCExceptionReporter:75] Data truncation: Data too long for column 'providerName' at row 1
When I remove those characters, the records persist without error.
Can anyone suggest how to resolve this problem?
Thank you
If your database is in UTF-8 and you have "special" characters in it, then most probably you are missing the "file.encoding=UTF-8" VM argument (-Dfile.encoding=UTF-8), or at least you should specify the encoding when opening the file/stream.
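The second suggestion, specifying the encoding when opening the stream, might look like the sketch below (the file and column name are hypothetical; the point is passing StandardCharsets.UTF_8 explicitly instead of relying on the platform default):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.PrintStream;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class CsvEncoding {
    public static void main(String[] args) throws IOException {
        // A tiny stand-in for the uploaded CSV: one provider name
        // with an accented character, stored as UTF-8 bytes.
        Path csv = Files.createTempFile("providers", ".csv");
        Files.write(csv, "providerName\nFundaci\u00f3n\n".getBytes(StandardCharsets.UTF_8));

        // Pass the charset explicitly when opening the reader; with the
        // platform default you may get mojibake (and byte-inflated strings
        // that overflow a short VARCHAR column).
        try (BufferedReader in = Files.newBufferedReader(csv, StandardCharsets.UTF_8)) {
            in.readLine(); // skip header
            PrintStream out = new PrintStream(System.out, true, "UTF-8");
            out.println(in.readLine());
        }
        Files.delete(csv);
    }
}
```

Mis-decoded accented characters expand into multiple bytes, which is one way an otherwise short value can trip a "Data too long for column" error.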
