I am connected to an IBM DB2 database from Java, but the data is stored in a binary format, so any value I fetch comes back in binary/hexadecimal form. How can I convert this binary data to UTF-8 at the query level?
Sample code to fetch the data:
String sql = "SELECT poMast.ORDNO from AMFLIBL.POMAST AS poMast ";
Class.forName("com.ddtek.jdbc.db2.DB2Driver");
String url = "jdbc:datadirect:db2://hostname:port;DatabaseName=dbName;";
Connection con = DriverManager.getConnection(url, "username","password");
PreparedStatement preparedStatement = con.prepareStatement(sql);
ResultSet rs = preparedStatement.executeQuery();
System.out.println("ResultSet : \n");
System.out.println(" VNDNO");
while (rs.next())
{
System.out.println(rs.getString("ORDNO"));
}
You probably need to use the CAST expression:
SELECT CAST(poMast.ORDNO as VARCHAR(50)) from AMFLIBL.POMAST AS poMast
Adjust the VARCHAR length to your needs. The string is in the database codepage (often UTF-8 these days) and converted to the client/application codepage when fetched.
you can "cast" the result from your select to utf8 like below.
String sql = "SELECT poMast.ORDNO, CAST(poMast.ORDNO AS VARCHAR(255) CCSID UNICODE) FROM AMFLIBL.POMAST AS poMast ";
Source: Db2 CAST documentation
In my case, bad UTF-8 data had somehow gotten into VARCHAR columns in a 1208/UTF-8 database. Before the data was fixed, querying it through the JDBC driver returned error -4220. This is fixable at the JDBC driver level by adding this property:
java -Ddb2.jcc.charsetDecoderEncoder=3 MyApp
see:
https://www.ibm.com/support/pages/sqlexception-message-caught-javaiocharconversionexception-and-errorcode-4220
The Db2 LUW Command Line Processor had this fixed long ago via an APAR, so the error is only seen through the JDBC driver when the property above is not set.
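If you cannot change the launch command, the same system property can also be set from code before the first connection is made; a minimal sketch (this applies to the IBM JCC driver, and the URL and credentials are placeholders):

// Must run before the Db2 JCC driver is initialized, e.g. at the top of main().
// Value 3 makes the driver replace invalid UTF-8 sequences instead of throwing
// java.io.CharConversionException (error -4220).
System.setProperty("db2.jcc.charsetDecoderEncoder", "3");

Connection con = DriverManager.getConnection(
        "jdbc:db2://hostname:port/dbName", "username", "password");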
But if you want to fix the data in the database, this works:
update <table_name> set <bad_data_col> = cast(cast( <bad_data_col> as vargraphic) as varchar);
First, Db2 treats (casts) the bad data as binary, where "anything goes", and then converts (casts) it back to valid UTF-8. After the casts, the JDBC driver shows the same result with or without the special property set and returns no errors.
Related
I am trying to extract data from an XMLTYPE column "ATTRIBUTE_XML2" (stored as SECUREFILE BINARY XML) in an Oracle 12c database.
I am using this select query in my code:
select xmlserialize(document a.xmlrecord as clob) as xmlrecord from tablename a
ResultSet rset = stmt.executeQuery();
OracleResultSet orset = (OracleResultSet) rset;
while (orset.next()) {
    oracle.sql.CLOB xmlrecord = (oracle.sql.CLOB) orset.getClob(1);
    Reader reader = new BufferedReader(xmlrecord.getCharacterStream());
}
Here "orset.getClob" is taking more memory in oracle DB and we are getting out of process memory in the oracle database. Currently we have the XML type storage as CLOB and business is interested to change it to BINARY XML.
Is there any option for retrieving the binary XML from the oracle result set?
Please note that i have tried "orset.getClob" which results in memory error, since it is changing the binary XML to clob.
Also tried with " XMLType xml = (XMLType) orset.getObject(1);" this is working fine, but it is taking 27 minutes for fetching 1 million XML records.
Whereas the same 1 million completed in 5 minutes if the table type storage is CLOB instead of BINARY XML.
Is there any other option for retrieving the BINARY XML ?
The Oracle documentation for Using JDBC to Access XML Documents in Oracle XML DB states that:
You can select XMLType data using JDBC in any of these ways:
Use SQL/XML function XMLSerialize in SQL, and obtain the result as an oracle.sql.CLOB, java.lang.String or oracle.sql.BLOB in Java. The Java snippet in Example 13-2 illustrates this.
Call method getObject() in the PreparedStatement to obtain the whole XMLType instance. The return value of this method is of type oracle.xdb.XMLType. Then you can use Java functions on class XMLType to access the data. Example 13-3 shows how to do this.
So you should be able to use XMLSERIALIZE( DOCUMENT your_binary_xml_column AS BLOB ) in SQL and then use OracleResultSet#getBLOB(int) to get the binary data.
Paraphrasing Oracle's Example 13-2 to cast to a BLOB instead of a CLOB:
DriverManager.registerDriver(new oracle.jdbc.driver.OracleDriver());
Connection conn = DriverManager.getConnection("jdbc:oracle:oci8:@", "QUINE", "CURRY");
OraclePreparedStatement stmt = (OraclePreparedStatement) conn.prepareStatement(
"SELECT XMLSerialize(DOCUMENT e.poDoc AS BLOB) poDoc FROM po_xml_tab e");
ResultSet rset = stmt.executeQuery();
OracleResultSet orset = (OracleResultSet) rset;
while (orset.next())
{
    // the first column is now a BLOB rather than a CLOB
    oracle.sql.BLOB blob = orset.getBLOB(1);
    // now use the BLOB inside the program
}
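Inside that loop the serialized XML can then be streamed straight out of the BLOB instead of being materialized as a CLOB; a sketch, assuming the documents are serialized as UTF-8:

// continuing inside the while (orset.next()) loop
try (Reader reader = new BufferedReader(
        new InputStreamReader(blob.getBinaryStream(), StandardCharsets.UTF_8))) {
    // parse or copy the XML document from 'reader' here
}
blob.free(); // release the LOB locator before fetching the next row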
Is there any difference between java.sql.Clob and java.sql.NClob? The java.sql.NClob interface adds no new methods. I tried the following:
The setup SQL:
create table tab(id number(2), clobcol clob, nclobcol nclob)
insert into tab values (1, to_clob('你好'), to_nclob('你好'))
JDBC code:
conn = getConnection();
stmt = conn.createStatement();
rs = stmt.executeQuery("select * from tab");
rs.next();
Clob c = rs.getClob(2);
NClob nc = rs.getNClob(3);
InputStream inputStream1 = c.getAsciiStream();
InputStream inputStream2 = nc.getAsciiStream();
System.out.println(inputStream1.available());
System.out.println(inputStream2.available());
c.free();
nc.free();
I have also tried some other methods, and it looks like there is no difference in the output. Is there a specific case where I can see some differences?
Adding the character sets configured in the database:
SELECT parameter, value
FROM v$nls_parameters
WHERE parameter LIKE '%CHARACTERSET';
PARAMETER VALUE
--------------------------------- --------------------
NLS_CHARACTERSET AL32UTF8
NLS_NCHAR_CHARACTERSET AL16UTF16
In the old days (the 1980s) many databases were created using US7ASCII (in the US) or ISO Latin-1 (in Europe) as the character set. For those databases that still exist today (after many upgrades), the only way to store non-ASCII character String data is to use the special types NVARCHAR or NCLOB. These Nxxx types are not needed by newer databases that were created directly with UTF-8 (now the default in Oracle) as the encoding.
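If you want to compare the actual character content rather than byte counts, read both columns through getCharacterStream() instead of getAsciiStream(); a sketch against the same tab table (in an AL32UTF8/AL16UTF16 database both columns should print 你好):

rs = stmt.executeQuery("select clobcol, nclobcol from tab where id = 1");
rs.next();
Clob c = rs.getClob(1);
NClob nc = rs.getNClob(2);
// unlike getAsciiStream(), getCharacterStream() preserves non-ASCII characters
try (BufferedReader r1 = new BufferedReader(c.getCharacterStream());
     BufferedReader r2 = new BufferedReader(nc.getCharacterStream())) {
    System.out.println(r1.readLine()); // 你好
    System.out.println(r2.readLine()); // 你好
}
c.free();
nc.free();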
In my application the storage is an IBM DB2 database and the data service layer is implemented in Node.js. I have established a JDBC connection to the IBM DB2 iSeries database using the DataDirect approach provided by Progress (db2.jar). When I execute any SELECT query, the result returned from the DB is a hexadecimal value rather than the readable value I want.
To solve this I could use the CAST function at the query level on each column, but that is not efficient since the CAST has to be applied to every column. I am therefore looking for a generic solution at the connection level, similar to "translate binary=true" in JTOpen, so that I do not have to cast each column.
Below are the results with the SELECT query.
Without CAST function :
Query = SELECT poMast.ORDNO from AMFLIBL.POMAST AS poMast WHERE poMast.ORDNO = 'P544901'
Result in Hex format = D7F5F4F4F9F0F1
With CAST function :
Query = SELECT CAST(poMast.ORDNO AS CHAR(7) CCSID 37) AS ORDNO from AMFLIBL.POMAST AS poMast WHERE poMast.ORDNO IS NOT NULL
Result in proper format = P544901
Connection URL = "jdbc:datadirect:db2://hostname:port;DatabaseName=dbname;"
Any help will be appreciated.
Maybe try to modify your connection string like this:
Connection URL = "jdbc:datadirect:db2://hostname:port;DatabaseName=dbname;translate binary=true;ccsid=37"
or like this
Connection URL = "jdbc:db2://hostname:port;DatabaseName=dbname;translate binary=true;ccsid=37"
I want to insert a special character like ✪ into a database.
When I do it like this in the Java code:
String message = "✪";
preparedStatement = connection.prepareStatement("INSERT INTO `messages` (`message`) VALUES (?)");
preparedStatement.setString(1, message);
preparedStatement.executeUpdate();
It just inserts a ? instead of ✪. But when I execute the SQL command on phpMyAdmin it works fine and ✪ is inserted.
The column message in the database is of the type varchar(2048) and collation utf8_general_ci.
And the text file encoding of the java project is UTF-8 as well.
I have added the parameter ?characterEncoding=UTF-8 to the JDBC URL, as @Mathisca pointed out.
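For reference, the resulting URL looks something like this (host and database name are placeholders):

String url = "jdbc:mysql://localhost:3306/mydb?useUnicode=true&characterEncoding=UTF-8";
Connection connection = DriverManager.getConnection(url, "user", "password");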
insertSQL = "insert into TELBP_INPUT_LOG (SERIAL_NO, INPUT_XML) values (?, ?)";
statement = connection.prepareStatement(insertSQL);
statement.setString(1, serialNo);
statement.setString(2, inXml);
//statement.setString(2, "test");
insertCount = statement.executeUpdate();
When the program reaches executeUpdate(), the error
java.sql.SQLException: ORA-01461: can bind a LONG value only for insert into a LONG column
is thrown, but if I copy the values of serialNo and inXml and run the statement in SQL Developer, no error is reported. What is the reason?
Oracle version: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit
Columns:
SERIAL_NO VARCHAR2(22)
INPUT_XML CLOB
WebSphere: WebSphere 5.1
JDBC: both ojdbc14 and ojdbc6 were tried; both give the same error.
You cannot write a String into a Clob column.
Instead of
statement.setString(2, inXml);
use
statement.setClob(2, xmlClob);
You first need to create xmlClob:
Clob xmlClob = connection.createClob();
Writer clobWriter = xmlClob.setCharacterStream(1);
clobWriter.write(inXml);
clobWriter.close();
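Alternatively, the string can be bound as a character stream, which also avoids ORA-01461 without creating a temporary Clob first (a sketch, assuming a JDBC 4 capable driver such as ojdbc6):

// bind the XML text as a stream so the driver sends it as a CLOB rather than a LONG
statement.setCharacterStream(2, new StringReader(inXml), inXml.length());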
For a CLOB field you can make use of .setClob(..).
setString(): Sets the designated parameter to the given Java String value. The driver converts this to an SQL VARCHAR or LONGVARCHAR value (depending on the argument's size relative to the driver's limits on VARCHAR values) when it sends it to the database.
(See the PreparedStatement.setString API doc and the Clob API doc.)
Java (the underlying driver) treats a CLOB as a character stream. Whenever you set a String, the underlying driver implementation automatically does the relevant conversion (String to VARCHAR, etc.). As CLOB is a special type, it is the programmer's responsibility to take the necessary steps. Follow this link to learn how to insert a CLOB using Java:
http://docs.oracle.com/javase/tutorial/jdbc/basics/blob.html