We are using Oracle version 12.1.0.2.0 with ojdbc7-1.0.jar and JDK 8.
We use HikariCP as the connection pool, if that's relevant, and it uses oracle.jdbc.driver.OracleDriver.
This is the driver Oracle recommends for our database version:
Oracle Database version: 12.1 or 12cR1
JDBC driver (per the compatibility table): ojdbc7.jar with JDK 7 and JDK 8
In our Spring application we use PreparedStatement, and we want to support national characters (NCHAR/NVARCHAR2).
We use setNString, which works for new and updated queries. Its Javadoc says:
Sets the designated parameter to the given String object. The driver converts this to a SQL NCHAR or NVARCHAR or LONGNVARCHAR value (depending on the argument's size relative to the driver's limits on NVARCHAR values) when it sends it to the database.
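A minimal sketch of such a migrated code path (using the same SQL_STATEMENT and THAI_TEXT placeholders as the snippet further down):
try (PreparedStatement ps = conn.prepareStatement(SQL_STATEMENT)) {
    ps.setNString(1, THAI_TEXT); // sent to the database as national characters
    ps.executeUpdate();
}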
Can we add support (platform/config level) for existing code using setString without code change?
try (PreparedStatement ps = conn.prepareStatement(SQL_STATEMENT)) {
    ps.setString(1, THAI_TEXT);
    ps.executeUpdate();
}
Currently setString doesn't handle national characters (they are saved as ?), even though the columns are defined as NVARCHAR2(50).
Must we replace all setString calls with setNString?
Or is there a flag/property/upgrade/fix that can add national-character support to setString?
You might get away with oracle.jdbc.defaultNChar=true; see section 19.2 in the Oracle JDBC Developer's Guide. (However, I find Oracle and the JDBC API highly confusing in this regard; it should work with setString.)
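A minimal sketch of wiring that property through HikariCP (the JDBC URL and credentials are placeholders, and the property name is taken from the Oracle guide, so verify it against your driver version; requires com.zaxxer.hikari.HikariConfig and HikariDataSource on the classpath):
// Option 1: JVM-wide, as a system property
//   java -Doracle.jdbc.defaultNChar=true -jar app.jar
// Option 2: per pool, as a driver connection property
HikariConfig config = new HikariConfig();
config.setJdbcUrl("jdbc:oracle:thin:@//dbhost:1521/ORCL"); // placeholder URL
config.setUsername("app_user");                            // placeholder credentials
config.setPassword("secret");
config.setDriverClassName("oracle.jdbc.driver.OracleDriver");
config.addDataSourceProperty("oracle.jdbc.defaultNChar", "true");
HikariDataSource dataSource = new HikariDataSource(config);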
Related
This is just a theoretical question, but I am building a program that gets data from Facebook using the CData JDBC Plugin, and I want to know whether all JDBC Plugins have the same syntax. For example, if I just changed the driver JAR to a Twitter one and changed the names of the tables and columns I am accessing, would it still work?
By a plugin I mean a driver. To put it more clearly: if I was developing a MySQL app and switched from the stock Connector/J driver to the CData driver, would I need to change the code?
As long as the underlying schema you store data in stays the same, switching the JDBC driver will yield the same result.
Note: both Twitter and Facebook have to be exposed through the JDBC model for this to work.
However, if you expect to change drivers, you can also consider using Apache MetaModel (see the linked reference).
JDBC is a standard that has been established and vetted over the years. As long as the drivers you're working with are written to that standard (which as a CData employee, I can say that ours are) you can expect your code referencing a JDBC driver to be essentially identical, regardless of the manufacturer of the driver or the data source you're connecting to.
//optional: register the driver with the DriverManager
Class.forName(myDriverName);

//obtain a Connection instance from the DriverManager
Connection conn = null;
try {
    conn = DriverManager.getConnection(myJDBCurl);
    //execute a select query
    Statement stmt = conn.createStatement();
    ResultSet rs = stmt.executeQuery("SELECT foo FROM bar");
} catch (SQLException ex) {
    //handle any errors
}
As you can see, the code to utilize the JDBC driver can be generalized with variables to use any driver or to use different connections under a single driver (if, for instance, you wanted to connect to different Facebook accounts).
JDBC is an interesting standard. It was intentionally designed to load the driver at run-time, so no vendor classes are used during compilation.
It also has some JDBC-specific mechanisms, for schema metadata (DatabaseMetaData) and for things such as doing an INSERT with an auto-increment key and retrieving that key (getGeneratedKeys).
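A minimal sketch of that generated-key mechanism (the table and column names are made up for illustration):
try (PreparedStatement ps = conn.prepareStatement(
        "INSERT INTO bar (foo) VALUES (?)", Statement.RETURN_GENERATED_KEYS)) {
    ps.setString(1, "value");
    ps.executeUpdate();
    try (ResultSet keys = ps.getGeneratedKeys()) {
        if (keys.next()) {
            long generatedId = keys.getLong(1); // the auto-increment key assigned by the database
        }
    }
}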
However, the SQL itself is far from standardized across vendors, despite standardization efforts. For example, just getting the first 10 rows differs per database, as sketched below.
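A rough illustration (not from the original answer) of how the same "first 10 rows" query reads in different dialects, while the surrounding JDBC calls stay identical:
String pg        = "SELECT foo FROM bar LIMIT 10";                 // PostgreSQL, MySQL
String standard  = "SELECT foo FROM bar FETCH FIRST 10 ROWS ONLY"; // SQL:2008, DB2, Oracle 12c+
String oracle11g = "SELECT foo FROM bar WHERE ROWNUM <= 10";       // older Oracle
String sqlServer = "SELECT TOP 10 foo FROM bar";                   // SQL Server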
Unfortunately, the visionaries behind JDBC seem to be no longer around.
But it is a sound basis for professional use.
I am working with some legacy code that performs database operations in a generic way, so that the User/developer can work with a different database by changing only the JDBC driver.
I have a problem with PostgreSQL JDBC driver. My test case:
//ddl
CREATE TABLE test
(
    id numeric,
    name text
)
//java code
String sqlCmd = "INSERT INTO test values (?, ?)";
PreparedStatement ps = connection.prepareStatement( sqlCmd );
ps.setString( 1, "1" );      // column "id" is numeric
ps.setString( 2, "name1" );  // column "name" is text
ps.executeUpdate();
With Postgres, the result of this case is an exception with message: "can't cast string to int..."
Is it inappropriate to use PreparedStatement.setString() to set values that database expects to be numeric?
Should I expect the JDBC driver to automatically convert java types to database types?
This test passes with other databases, including H2 and MySQL. Does the failure with PostgreSQL reflect a bug in the JDBC driver? Is it possible to make this case work without changing code?
The documentation for java.sql.PreparedStatement has this to say:
Note: The setter methods (setShort, setString, and so on) for setting IN parameter values must specify types that are compatible with the defined SQL type of the input parameter. For instance, if the IN parameter has SQL type INTEGER, then the method setInt should be used.
Whether a particular database or JDBC driver allows you to be sloppy about that is its own affair, but you are not justified in expecting that all drivers will allow such slop, even if certain ones do.
While migrating from an Oracle database to PostgreSQL, I found something that may help you use setString with numeric types and date types as well.
You just have to use the connection parameter stringtype=unspecified, as mentioned in the documentation:
https://jdbc.postgresql.org/documentation/head/connect.html
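A minimal sketch of that connection parameter in use (URL and credentials are placeholders; the test table is the one from the question):
// With stringtype=unspecified, values bound via setString are sent untyped
// and the server infers the target column type (numeric, date, ...).
String url = "jdbc:postgresql://localhost:5432/testdb?stringtype=unspecified";
try (Connection conn = DriverManager.getConnection(url, "user", "password");
     PreparedStatement ps = conn.prepareStatement("INSERT INTO test values (?, ?)")) {
    ps.setString(1, "1");      // column "id" is numeric
    ps.setString(2, "name1");
    ps.executeUpdate();
}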
You are using the setString() method to insert integers, and Postgres cannot do that. Use:
ps.setInt(1, INTEGER_VALUE);
I am developing an application on WAS 8.0.0.5 that interacts with a DB2 database.
I am getting the column name using the java.sql.ResultSetMetaData method getColumnName(). On my development WAS everything works great.
ResultSetMetaData rsmd = rs.getMetaData();
String columnName = rsmd.getColumnName(i + 1);
When I try to install on a WAS 8.0.0.6, instead of getting the column name, I get the column index!
The driver set for the connection string is com.ibm.db2.jcc.DB2Driver
As a side note, I've confirmed that WAS 8.0.0.5 uses DB2 driver 3.62 (works) and 8.0.0.6 uses 4.12 (doesn't work).
What is wrong?
The behaviour of getColumnName() and getColumnLabel() has changed in the IBM Data Server Driver for JDBC version 4. I believe it now conforms to the JDBC specification. You can use the connection property useJDBC4ColumnNameAndLabelSemantics to modify this behaviour, as explained here: http://pic.dhe.ibm.com/infocenter/db2luw/v10r5/topic/com.ibm.db2.luw.apdv.java.doc/src/tpc/imjcc_c0052593.html.
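A minimal sketch of setting that property on a plain JDBC connection (URL and credentials are placeholders; in WAS you would add the same property under the data source's custom properties, and the exact value to use is described on the IBM page linked above):
Properties props = new Properties();
props.setProperty("user", "db2inst1");   // placeholder credentials
props.setProperty("password", "secret");
// 2 corresponds to DB2BaseDataSource.NO, i.e. the pre-JDBC-4 behaviour;
// verify against the linked documentation for your driver version.
props.setProperty("useJDBC4ColumnNameAndLabelSemantics", "2");
Connection conn = DriverManager.getConnection("jdbc:db2://dbhost:50000/SAMPLE", props);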
Thanks for the response.
Unfortunately it wasn't the solution. The behavior was that the column index was being returned, instead of the query's label or column name itself.
The problem was that the db2jcc.jar version configured in the WAS JDBC resources was too old (version 3.59). I replaced it with 4.12 and now it works.
The latest Java JDBC drivers for postgres claim to support UUIDs natively; working against Postgres 9.2 (mac).
Indeed, when a PreparedStatement is used, I can step through the driver code, and even walk through the specialised 'setUuid' function in AbstractJdbc3gStatement.java. By all indications, it should 'just work'.
However, it does not work. The database flings back an error, which I receive thus:
Caused by: org.postgresql.util.PSQLException: ERROR: operator does not exist: uuid = bytea
Hint: No operator matches the given name and argument type(s). You might need to add explicit type casts.
Position: 139
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2157) ~[postgresql-9.2-1002.jdbc4.jar:na]
Yes, indeed, setUuid in the JDBC driver does send that as a bytea :
private void setUuid(int parameterIndex, UUID uuid) throws SQLException {
    if (connection.binaryTransferSend(Oid.UUID)) {
        byte[] val = new byte[16];
        ByteConverter.int8(val, 0, uuid.getMostSignificantBits());
        ByteConverter.int8(val, 8, uuid.getLeastSignificantBits());
        bindBytes(parameterIndex, val, Oid.UUID);
    } else {
        bindLiteral(parameterIndex, uuid.toString(), Oid.UUID);
    }
}
What gives?
Is there some magic rune required in the actual database to bless this conversion ?
tl;dr
myPreparedStatement.setObject(
… ,
java.util.UUID.randomUUID()
)
Details
(a) Show us your code.
PreparedStatement::setObject does work when passing a java.util.UUID. You likely have some other issue in your code.
(b) See my blog post UUID Values From JDBC to Postgres for a bit of discussion and example code.
// Generate or obtain data to store in database.
java.util.UUID uuid = java.util.UUID.randomUUID(); // Generate a random UUID.
String foodName = "Croissant";
// JDBC Prepared Statement.
PreparedStatement preparedStatement = conn.prepareStatement( "INSERT INTO food_ (pkey_, food_name_ ) VALUES (?,?)" );
int nthPlaceholder = 1; // 1-based counting (not an index).
preparedStatement.setObject( nthPlaceholder++, uuid );
preparedStatement.setString( nthPlaceholder++, foodName );
// Execute SQL.
if ( !( preparedStatement.executeUpdate() == 1 ) ) {
    // If the SQL reports other than one row inserted…
    this.logger.error( "Failed to insert row into database." );
}
(c) I'm not sure what you mean by
The latest Java JDBC drivers for postgres claim to support UUIDs natively
Which driver? There are at least two open-source JDBC drivers for Postgres, the current/legacy one and a new rewrite "next generation" one. And there are other commercial drivers as well.
"natively"? Can you link to the documentation you read? The SQL spec has no data type for UUID (unfortunately ☹), therefore the JDBC spec has no data type for UUID. As a workaround, the JDBC driver for Postgres uses the setObject and getObject methods on PreparedStatement move the UUID across the chasm between Java ↔ SQL ↔ Postgres. See the example code above.
As the PreparedStatement JDBC doc says:
If arbitrary parameter type conversions are required, the method setObject should be used with a target SQL type.
Perhaps by "natively", you confused Postgres' native support for UUID as a data type with JDBC having a UUID data type. Postgres does indeed support UUID as a data type, which means the value is stored as 128-bits rather than multiple times that if it were stored as as ASCII or Unicode hex string. And being native also means Postgres knows how to build an index on a column of that type.
The point of my blog post mentioned above was that I was pleasantly surprised by how simple it is to bridge that chasm between Java ↔ SQL ↔ Postgres. In my first uneducated attempts, I was working too hard.
Another note about Postgres supporting UUID… Postgres knows how to store, index, and retrieve existing UUID values. To generate UUID values, you must enable the Postgres extension (plugin) uuid-ossp. This extension wraps a library provided by The OSSP Project for generating a variety of kinds of UUID values. See my blog for instructions, and the sketch below.
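A minimal sketch (assuming the extension is installed on the server and your role is allowed to create it; uuid_generate_v4() is one of the functions the extension provides):
try (Statement stmt = conn.createStatement()) {
    stmt.execute("CREATE EXTENSION IF NOT EXISTS \"uuid-ossp\"");
    try (ResultSet rs = stmt.executeQuery("SELECT uuid_generate_v4()")) {
        if (rs.next()) {
            java.util.UUID generated = (java.util.UUID) rs.getObject(1); // driver maps uuid to java.util.UUID
        }
    }
}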
By the way…
If I knew how to petition the JDBC expert group or JSR team to make JDBC aware of UUID, I certainly would. They are doing just that for the new date-time types being defined in JSR 310: Date and Time API.
Similarly, if I knew how to petition the SQL standards committee to add a data type of UUID, I would. But apparently that committee is more secretive than the Soviet Politburo and slower than a glacier.
I used the following approach to add UUID and other objects to postgres:
PGobject toInsertUUID = new PGobject();
toInsertUUID.setType("uuid");
toInsertUUID.setValue(uuid.toString());
PreparedStatement stmt = conn.prepareStatement(query);
stmt.setObject(placeHolder,toInsertUUID);
stmt.execute();
This way you avoid having to do explicit type casts. This piece of code worked perfectly for me for any type, even json for example.
This worked for me using org.postgresql:postgresql 42.2.5:
myPreparedStatement.setObject(4, UUID.randomUUID(), java.sql.Types.OTHER)
Without java.sql.Types.OTHER I got an error.
try
.setParameter("uuid", uuid, PostgresUUIDType.INSTANCE);
I'm using official JDBC driver for PostgreSQL, but I'm stuck with the following issues:
No support for PostgreSQL-ish data structures such as UUIDs.
Common JDBC weirdness, such as:
No function to escape values for consumption by PostgreSQL.
Limited support for executing heterogeneous statements in batch.
No rewriting of multiple insert statements into single insert statement when inserting many rows in one table.
So, the question: is there any PostgreSQL database driver which can leverage the full power of PostgreSQL without much boilerplate? I also use Scala for development, so a driver designed specifically for Scala would be awesome.
Some of this seems to be (unless I'm not understanding) user error in using JDBC. JDBC is a pretty ugly API, so never ask if you can do it elegantly, just ask if you can do it at all.
Escaping and inserting multiple rows should be handled, as @ColinD and @a_horse pointed out, with prepared statements and batch operations. Under the hood, I would expect a good JDBC implementation to do the things you want (I am not familiar with PostgreSQL's implementation).
Regarding UUIDs, here is a solution:
All that PostgreSQL can do is convert string literals to uuid.
You can make use of this by using the data type
org.postgresql.util.PGobject, which is a general class used to
represent data types unknown to JDBC.
You can define a helper class:
public class UUID extends org.postgresql.util.PGobject {
    public static final long serialVersionUID = 668353936136517917L;

    public UUID(String s) throws java.sql.SQLException {
        super();
        this.setType("uuid");
        this.setValue(s);
    }
}
Then the following piece of code will succeed:
java.sql.PreparedStatement stmt =
conn.prepareStatement("UPDATE t SET uid = ? WHERE id = 1");
stmt.setObject(1, new UUID("a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11"));
stmt.executeUpdate();
The driver supports batched statements to speed up bulk inserts.
And using batched statements is a lot more portable than using proprietary INSERT syntax (and as far as I can tell, there is no big difference between a multi-row insert and batched inserts).
Check out PreparedStatement.addBatch()
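A minimal sketch of batched inserts (the table and column names are illustrative, and flushing every 100 rows is an arbitrary choice):
// Reuse one PreparedStatement and queue many parameter sets, sending them
// to the server together when executeBatch() is called.
try (PreparedStatement ps = conn.prepareStatement("INSERT INTO foo (id, data) VALUES (?, ?)")) {
    for (int i = 1; i <= 1000; i++) {
        ps.setInt(1, i);
        ps.setString(2, "row " + i);
        ps.addBatch();
        if (i % 100 == 0) {
            ps.executeBatch(); // flush every 100 rows
        }
    }
    ps.executeBatch(); // flush the remaining rows
}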
The reason why UUID is not supported is probably that UUID is not part of the Postgres core, just a contrib module.
Edit
Regarding executing heterogeneous statements:
The Postgres driver does support different types of statements in a batch.
The following works fine:
Connection con = DriverManager.getConnection("jdbc:postgresql://localhost/postgres", "foo", "bar");
con.setAutoCommit(false);
Statement stmt = con.createStatement();
stmt.addBatch("create table foo (id integer, data varchar(100))");
stmt.addBatch("insert into foo values (1, 'one')");
stmt.addBatch("insert into foo values (2, 'two')");
stmt.addBatch("update foo set data = 'one_other' where id = 1");
stmt.executeBatch();
con.commit();
Although you do lose the automatic escaping that PreparedStatement gives you.
I realise this doesn't answer your entire question, but hopefully it will be useful all the same.
I'm using Java 6 and Postgres 8.4. The driver I'm using is in my Maven POM file as:
<dependency>
<groupId>postgresql</groupId>
<artifactId>postgresql</artifactId>
<version>8.4-702.jdbc4</version>
</dependency>
I'm using ResultSet.getObject() and PreparedStatement.setObject() with Java's java.util.UUID class to retrieve and store UUIDs.
For example:
pstm.setObject(1, guid); //where pstm is a PreparedStatement and guid is a UUID
and:
//where rs is a ResultSet
UUID myGuid = (UUID) rs.getObject("my_uuid_column_name");
Works fine.
With newer drivers, the following is also supported:
UUID myGuid = rs.getObject("my_uuid_column_name", UUID.class);
No support for PostgreSQL-ish data structures such as UUIDs.
On the contrary, the current JDBC driver (9.2-1002 JDBC 4) for Postgres 9.x does indeed support UUID via the setObject and getObject commands. You cannot get any more direct or simpler than that (in any database, Postgres or any other) because JDBC does not recognize UUID as a data type.
As far as I can tell, there is no need to create a helper class as suggested in another answer by Yishai.
No need to do any casting or go through strings.
See my blog post for more discussion and code example.
Code example excerpt:
java.util.UUID uuid = java.util.UUID.randomUUID();
…
preparedStatement.setObject( nthPlaceholder++, uuid ); // Pass UUID to database.
Take a look at O/R Broker, which is a Scala JDBC-based library for relational database access.