I have a Cassandra schema with a table containing a column that is a SET of a user-defined type (UDT). That UDT itself has a field that is a SET of another UDT.
I can create the types and the table in cqlsh, but when I try to use this schema from my Java (actually Scala) code I get a "missing codec" error.
Does anyone know if the DataStax Java driver supports this?
CREATE TYPE testname(firstname text, lastname text);
CREATE TYPE testuser(testname FROZEN<SET<FROZEN<testname>>>);
CREATE TABLE testobjects(
simplename text,
testusers SET<FROZEN<testuser>>
) WITH CLUSTERING ORDER BY (simplename DESC);
I've registered codecs for the two UDT types, but when I try to bind a prepared statement I get the error:
can't find codec for:
cqlType: frozen<set<frozen<testname>>>
javaType: TestNameUDT
Because while there is a codec mapping testname to TestNameUDT, there is no codec mapping a set of testnames to a TestNameUDT.
So I'm wondering if anyone knows whether the Java driver supports this... has anyone created nested sets of UDTs? Thanks.
DataStax has acknowledged that this is a Cassandra defect and that it does not currently work.
With Spring Data Cassandra, yes, but the nested UDT must be declared without @CassandraType:
https://jira.spring.io/browse/DATACASS-506
Hope it helps.
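For example, a minimal mapping sketch (Spring Data Cassandra 2.x annotations assumed; the class and field names are illustrative, not from the question):

import java.util.Set;
import org.springframework.data.cassandra.core.mapping.UserDefinedType;

@UserDefinedType("testname")
class TestName {
    private String firstname;
    private String lastname;
    // getters/setters omitted
}

@UserDefinedType("testuser")
class TestUser {
    // No @CassandraType annotation here, per DATACASS-506;
    // Spring Data infers frozen<set<frozen<testname>>> from the field type.
    private Set<TestName> testname;
}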
Goal
Get the java.sql.Types value for a column without creating the table.
Detail
If I define a column like this: "a JSON DEFAULT NULL"
how can I get the java.sql.Types value for column 'a'?
What I have tried
I use JDBC, so I call connection.getMetaData().getTypeInfo() to get all supported types:
public static Map<String, Integer> load(final DatabaseMetaData databaseMetaData) throws SQLException {
    Map<String, Integer> result = new TreeMap<>(String.CASE_INSENSITIVE_ORDER);
    try (ResultSet resultSet = databaseMetaData.getTypeInfo()) {
        while (resultSet.next()) {
            result.put(resultSet.getString("TYPE_NAME"), resultSet.getInt("DATA_TYPE"));
        }
    }
    return result;
}
Then I get a map like this:
BIGINT -> -5
BIGINT UNSIGNED -> -5
And so on.
But there is no JSON.
What's more
If I define a column like this:
"id_card LONG CHAR VARYING"
then how can I get the java.sql.Types value?
The code that you are using looks to be the correct way to get the supported column types.
One possible explanation for your not seeing JSON in the output is that your database does not support JSON as a column type. For instance, although MySQL 8.x supports the JSON data type, according to the manual MySQL 5.6 does not support it.
It could also be the database driver. For instance, I don't know what would happen vis a vis JSON support if you tried to use a Connector/J 5.1 driver with a MySQL 8.x database.
UPDATE
I had a look at the source code of MySQL Connector/J 8.x (latest), and I think that the reason you can't see JSON is... it is a bug in the JDBC driver.
Looking at the implementation class for DatabaseMetaData, at line 4012 (or thereabouts), we see that getTypeInfo() assembles a ResultSet from a hard-wired list of types. The JSON type is not there, and there is a comment that says:
// TODO add missed types (aliases)
(FWIW, I couldn't see any logic in the driver to check which data types are actually supported by the database. That is potentially another bug.)
So, my "possible explanations" (see above) were not the actual explanation in your particular case.
Solution? I don't think there is a good one in this case, but I recommend that you submit a bug report. (I couldn't find an existing report for this particular issue, though there were some old bugs about missing types in MySQL Connector/J 5.x.)
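If you need a workable mapping in the meantime, one option is to patch the map after loading it. A minimal sketch, with the caveat that mapping JSON to Types.LONGVARCHAR is my assumption, not something the driver reports:

// Add entries for types the server supports but getTypeInfo() omits.
// JSON -> LONGVARCHAR is an assumption; use whatever constant your code expects.
Map<String, Integer> types = load(connection.getMetaData());
types.putIfAbsent("JSON", java.sql.Types.LONGVARCHAR);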
I'm working on the data retrieval part of a Cassandra application using the Java driver.
I have a custom data type:
CREATE TYPE ynapanalyticsteam.ynapnestedmap (
so_nestedmap map<text, text>
);
and a column of that type mapped as below:
order_line map<text, frozen<ynapnestedmap>>
I am trying to retrieve the value of this column using TypeToken as below:
row.getMap("order_line", TypeToken.of(String.class), new TypeToken<Map<String,String>>() {});
But I am still getting a CodecNotFoundException.
You need to define a codec for your nested user-defined type, not for Map<String, String> - they are different types...
The Java driver documentation has a good description of this process.
The code that you are trying to use would work for a column defined like:
order_line map<text, frozen<map<text, text>>>
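Alternatively, if you'd rather not write a custom codec at all, a sketch along these lines (driver 3.x API assumed) reads the UDT generically as a UDTValue and unpacks the nested map:

// UDTValue is com.datastax.driver.core.UDTValue; TypeToken is Guava's
// com.google.common.reflect.TypeToken. Read the column as
// Map<String, UDTValue>, then unpack each value's so_nestedmap field;
// no custom codec is needed for this path.
Map<String, UDTValue> orderLine =
        row.getMap("order_line", TypeToken.of(String.class), TypeToken.of(UDTValue.class));
for (Map.Entry<String, UDTValue> e : orderLine.entrySet()) {
    Map<String, String> nested =
            e.getValue().getMap("so_nestedmap", String.class, String.class);
}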
I have a simple piece of code for removing data from Cassandra 2:
Cluster myCluster = Cluster.builder().addContactPoint(myhost).withPort(myport).build();
Session session = myCluster.connect(keyspaceName);
session.execute(deleteStatement); // just a simple Delete.Where
So basically, when I try to do something on (for example) keyspaceName = "test",
it will execute my delete statement without a problem, but if I try the same thing for (for example) keyspaceName = "\"DONT_WORK\"" (since I have a keyspace name in quotes in Cassandra), it won't work and will throw:
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: localhost/127.0.0.1:16660 (com.datastax.driver.core.ConnectionException: [localhost/127.0.0.1:16660] Pool is shutdown))
    at com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:84)
    at com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
    at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:214)
I need help, please.
PS. I even used the Metadata.quote() static method from the DataStax library - it still doesn't work.
You should not need to quote the keyspace name for connecting. Quoting is important to protect case sensitivity in the context of a CQL string, but you do not need to protect it if you are passing the keyspace name as an API parameter.
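In other words, a minimal sketch reusing the names from the question:

// Pass the keyspace name with no embedded quote characters; quoting only
// matters inside CQL strings, not in API parameters like connect().
Cluster myCluster = Cluster.builder().addContactPoint(myhost).withPort(myport).build();
Session session = myCluster.connect("DONT_WORK");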
OK, there is no need for further investigation of this problem. The issue was that I accidentally used the 2.1.8 DataStax library against Cassandra version 2.0.8. I have to stop using the numeric keyboard. A simple mistake, but it sure made quite a fuss.
I am creating a simple database table with a column of type TIMESTAMP on IBM DB2 on mainframes, from a JDBC client, like this:
CREATE TABLE scma.timetest(
T_TYPE VARCHAR(8),
T_DATE TIMESTAMP
);
With or without inserting any record, if I do a select * from scma.timetest; I end up getting the exception below:
java.nio.charset.UnsupportedCharsetException: Cp1027
If I don't have the TIMESTAMP column, everything works fine. I have tried starting the JDBC client with -Dfile.encoding=UTF-8, to no avail. I tried the same thing from a Java program as well, and it results in the same error.
It is not the same problem mentioned here; I don't get a ClassNotFoundException. Any pointers on what could be wrong? Here is the full exception, if it helps:
Exception in thread "main" java.nio.charset.UnsupportedCharsetException: Cp1027
    at java.nio.charset.Charset.forName(Charset.java:531)
    at com.ibm.db2.jcc.am.t.<init>(t.java:13)
    at com.ibm.db2.jcc.am.s.a(s.java:12)
    at com.ibm.db2.jcc.am.o.a(o.java:444)
    at com.ibm.db2.jcc.t4.cc.a(cc.java:2412)
    at com.ibm.db2.jcc.t4.cb.a(cb.java:3513)
    at com.ibm.db2.jcc.t4.cb.a(cb.java:2006)
    at com.ibm.db2.jcc.t4.cb.a(cb.java:1931)
    at com.ibm.db2.jcc.t4.cb.m(cb.java:765)
    at com.ibm.db2.jcc.t4.cb.i(cb.java:253)
    at com.ibm.db2.jcc.t4.cb.c(cb.java:55)
    at com.ibm.db2.jcc.t4.q.c(q.java:44)
    at com.ibm.db2.jcc.t4.rb.j(rb.java:147)
    at com.ibm.db2.jcc.am.mn.kb(mn.java:2107)
    at com.ibm.db2.jcc.am.mn.a(mn.java:3099)
    at com.ibm.db2.jcc.am.mn.a(mn.java:686)
    at com.ibm.db2.jcc.am.mn.executeQuery(mn.java:670)
Moving this here from comments:
Legacy DB2 for z/OS often uses EBCDIC (here CP1027) encoding for character data. Also, I believe DB2 sends timestamp values to the client as character strings, although they are stored differently internally. I suspect that the Java runtime you are using does not support CP1027, so it doesn't know how to convert the EBCDIC data to whatever it needs on the client. I cannot explain, though, why the VARCHAR value comes through OK.
For more details about DB2 encoding you can check the manual.
You can force DB2 to create a table using a different encoding, which will likely be supported by Java:
CREATE TABLE scma.timetest(...) CCSID UNICODE
Another alternative might be to use a different Java runtime that supports the EBCDIC (CP1027) encoding. The IBM JDK, which comes with some DB2 client packages, would be a good candidate.
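As a quick way to tell whether your current runtime is the problem, a minimal check (sketch):

import java.nio.charset.Charset;

public class CharsetCheck {
    public static void main(String[] args) {
        // Prints false on a JRE that lacks the CP1027 (EBCDIC) charset, which
        // would explain the UnsupportedCharsetException above.
        System.out.println("Cp1027 supported: " + Charset.isSupported("Cp1027"));
    }
}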
You (well, not you but the mainframe system programmers) can also configure the default encoding scheme for the database (subsystem).
I'm trying to use a SQL Array type with PostgreSQL 8.4 and the JDBC4 driver.
My column is defined as follows:
nicknames CHARACTER VARYING(255)[] NOT NULL
and I'm trying to update it thusly:
row.updateArray("nicknames",
connection.createArrayOf("CHARACTER VARYING", p.getNicknames().toArray()));
(p.getNicknames() returns a List<String>)
but I'm seeing:
org.postgresql.util.PSQLException: Unable to find server array type for provided name CHARACTER VARYING.
    at org.postgresql.jdbc4.AbstractJdbc4Connection.createArrayOf(AbstractJdbc4Connection.java:67)
    at org.postgresql.jdbc4.Jdbc4Connection.createArrayOf(Jdbc4Connection.java:21)
Unfortunately, the Array types don't seem to be well documented - I've not found mention of exactly how to do this for PostgreSQL anywhere :(
Any ideas?
Change "CHARACTER VARYING" to "varchar". The command-line psql client accepts the type name "CHARACTER VARYING", but the JDBC driver does not.
The source for org.postgresql.jdbc2.TypeInfoCache contains a list of accepted type names.
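With that change, the update from the question would look like:

// Same call as in the question, but with the type name the driver accepts.
row.updateArray("nicknames",
        connection.createArrayOf("varchar", p.getNicknames().toArray()));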
Consider part of the ambiguously-worded contract for createArrayOf():
The typeName is a database-specific name which may be the name of a built-in type, a user-defined type or a standard SQL type supported by this database.
I always assumed driver implementors interpret the phrases "database-specific name" and "supported by this database" to mean "accept whatever you want". But maybe you could file this as a bug against the Postgres JDBC driver.
Good luck.