Cassandra Java DataStax 2.1.8 cannot connect to keyspace with quotes

I have a simple piece of code for removing data from Cassandra 2:
Cluster myCluster = Cluster.builder().addContactPoint(myhost).withPort(myport).build();
Session session = myCluster.connect(keyspaceName);
session.execute(deleteStatement); // deleteStatement is just a simple Delete.Where
So basically, when I try to do something with (for example) keyspaceName = "test", it executes my delete statement without trouble. But if I try the same thing with (for example) keyspaceName = "\"DONT_WORK\"" (the keyspace name is quoted in Cassandra because it is case sensitive), it won't work and throws
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: localhost/127.0.0.1:16660 (com.datastax.driver.core.ConnectionException: [localhost/127.0.0.1:16660] Pool is shutdown))
at com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:84)
at com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:214)
I need help, please.
PS: I even used the Metadata.quote() static method from the DataStax library; it still doesn't work.

You should not need to quote the keyspace name for connecting. Quoting is important to protect case sensitivity in the context of a CQL string, but you do not need to protect it if you are passing the keyspace name as an API parameter.
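For illustration, a minimal sketch of what this answer describes (host, port, and table name are placeholders, not from the question): the case-sensitive keyspace name is passed as-is to connect(), and Metadata.quote() is only applied when the name is embedded in a CQL string.

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Metadata;
import com.datastax.driver.core.Session;

public class QuotingSketch {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1")
                .withPort(9042)
                .build();

        // API parameter: pass the case-sensitive name as-is, no extra quotes.
        Session session = cluster.connect("DONT_WORK");

        // CQL string: here quoting matters, so protect case sensitivity
        // with Metadata.quote() (or write the double quotes yourself).
        session.execute("SELECT * FROM " + Metadata.quote("DONT_WORK") + ".mytable");

        cluster.close();
    }
}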

OK, there is no need for further investigation of this problem. The issue was that I accidentally used the 2.1.8 DataStax library against Cassandra version 2.0.8. I have to stop using the numeric keypad. A simple mistake, but it sure made quite a fuss.

Related

JPA HIBERNATE : Cannot get a connection as the driver manager is not properly initialized [duplicate]

I am trying to configure Hibernate (5.3.3.Final) with Tomcat (8.5) and MySQL (v8.0.12). When I launch my HibernateTest.java (very simple code from a tutorial, no problem there) with hibernate.connection.url set to 'jdbc:mysql://localhost:3306/sakila', I encounter the following error:
Caused by: java.sql.SQLException: The server time zone value 'Paris, Madrid
(heure d'été)' is unrecognized or represents more than one time zone. You
must configure either the server or JDBC driver (via the serverTimezone
configuration property) to use a more specifc time zone value if you want to
utilize time zone support.
MySQL is currently set to the 'SYSTEM' time zone for both the global and the session scope (mysql> select @@global.time_zone, @@session.time_zone). And my system time zone is indeed Paris/Madrid.
In my hibernate.cfg.xml file, when I write the connection URL as:
jdbc:mysql://localhost:3306/sakila?useJDBCCompliantTimezoneShift=true;useLegacyDatetimeCode=false;serverTimezone=UTC;
The error is:
com.mysql.cj.exceptions.WrongArgumentException: Malformed database URL, failed to parse the connection string near '=false;serverTimezone=UTC;'.
It is not the problem mentioned in the Stack Overflow post 'issues with mysql server timezone and jdbc connection', because the '&' is rejected by Eclipse; see the attached screenshot of the hibernate.cfg.xml file:
[The reference to entity "useLegacyDatetimeCode" must end with the delimiter ';']
It is not an invisible character between 'mysql:' and '//localhost', as mentioned in the Stack Overflow post 'Malformed database URL, failed to parse the main URL sections'.
I've tried to work around the problem by setting, via MySQL Workbench, the option for local time (default-time-zone = '+02:00'), which matches the summer time for Madrid/Paris (my case here). It doesn't change a thing.
Any idea? Do I have to configure it somewhere else?
Thank you for your help, I've been on this one for 3 days now, without success.
You need to escape the & as &amp; in the XML file:
jdbc:mysql://localhost:3306/sakila?useSSL=false&amp;serverTimezone=UTC
See more here: https://docs.jboss.org/exojcr/1.12.13-GA/developer/en-US/html/ch-db-configuration-hibernate.html
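As a cross-check, a minimal sketch that sets the same URL programmatically through Hibernate's Configuration API, where no XML escaping is needed (the credentials and dialect are placeholder assumptions, not from the question):

import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class HibernateUrlSketch {
    public static void main(String[] args) {
        Configuration cfg = new Configuration()
                // In Java source a plain '&' is fine; only XML needs '&amp;'.
                .setProperty("hibernate.connection.url",
                        "jdbc:mysql://localhost:3306/sakila?useSSL=false&serverTimezone=UTC")
                .setProperty("hibernate.connection.driver_class", "com.mysql.cj.jdbc.Driver")
                .setProperty("hibernate.connection.username", "root")   // placeholder
                .setProperty("hibernate.connection.password", "secret") // placeholder
                .setProperty("hibernate.dialect", "org.hibernate.dialect.MySQL8Dialect");

        SessionFactory sessionFactory = cfg.buildSessionFactory();
        // ... use the SessionFactory ...
        sessionFactory.close();
    }
}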
I finally came across a solution.
Since neither ';' nor '&' seemed to do the trick for adding more than one parameter, I took out all the parameters and tried only one:
jdbc:mysql://localhost:3306/sakila?serverTimezone=UTC
And it did the trick; I no longer have problems with this.
The following URL produced the error: jdbc:mysql://localhost:3306/db?useLegacyDatetimeCode=false&serverTimezone=CET
This worked for me:
replace & with &amp;.

Update Statement Issues with Apache Ignite (2.13.0) + Java Spring Boot

We are facing issues while updating tables that have a column of datatype timestamp.
Insert and update work fine if we use the Ignite repository for both.
Insert and update work fine if we use native queries for both.
Insert via the Ignite repository followed by update via a native query results in the error below:
class org.apache.ignite.binary.BinaryObjectException: Invalid flag value: 32
at org.apache.ignite.internal.binary.builder.BinaryBuilderReader.parseValue(BinaryBuilderReader.java:863)
at org.apache.ignite.internal.binary.builder.BinaryObjectBuilderImpl.serializeTo(BinaryObjectBuilderImpl.java:290)
at org.apache.ignite.internal.binary.builder.BinaryBuilderSerializer.writeValue(BinaryBuilderSerializer.java:103)
at org.apache.ignite.internal.binary.builder.BinaryBuilderSerializer.writeValue(BinaryBuilderSerializer.java:56)
at org.apache.ignite.internal.binary.builder.BinaryObjectBuilderImpl.serializeTo(BinaryObjectBuilderImpl.java:297)
at org.apache.ignite.internal.binary.builder.BinaryBuilderSerializer.writeValue(BinaryBuilderSerializer.java:103)
at org.apache.ignite.internal.binary.builder.BinaryBuilderSerializer.writeValue(BinaryBuilderSerializer.java:56)
at org.apache.ignite.internal.binary.builder.BinaryObjectBuilderImpl.serializeTo(BinaryObjectBuilderImpl.java:297)
If you can post example code, this would make a good bug report.
https://github.com/apache/ignite/blob/876a2ca190dbd88f42bc7acecff8b7783ce7ce54/modules/core/src/main/java/org/apache/ignite/internal/binary/builder/BinaryBuilderReader.java#L515
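For what it's worth, a hypothetical minimal reproducer along the lines described in the question (entity, repository, and cache names are invented; the annotations come from the ignite-spring-data extension, whose package names vary between versions):

import java.sql.Timestamp;
import org.apache.ignite.springdata.repository.IgniteRepository;
import org.apache.ignite.springdata.repository.config.Query;
import org.apache.ignite.springdata.repository.config.RepositoryConfig;

// Hypothetical value object: a table with a timestamp column.
class Event {
    Long id;
    Timestamp createdAt;
    // getters/setters omitted for brevity
}

// Hypothetical repository bound to an "events" cache.
@RepositoryConfig(cacheName = "events")
interface EventRepository extends IgniteRepository<Event, Long> {

    // Native SQL update touching the timestamp column.
    @Query("UPDATE Event SET createdAt = ? WHERE id = ?")
    int touch(Timestamp createdAt, Long id);
}

// Mixing the two paths is what reportedly triggers the error:
//   repo.save(42L, event);    // insert via repository (binary marshaller)
//   repo.touch(newTs, 42L);   // update via native SQL on the same row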

Error while using Lucene with H2 Database

I want to implement a small full-text search in my project, which uses the H2 database (embedded). As far as I know, I have to use Lucene as the full-text engine to get relevance-ranked results (not just containment matches).
But I can't get it to work. This block is the Lucene initialization:
FullTextLucene.init(connection);
FullTextLucene.createIndex(connection, "PUBLIC", Tables.COURSES_DETAIL, Columns.NAME);
I also tried this way:
stmt.execute("create alias if not exists FTL_INIT for \"org.h2.fulltext.FullTextLucene.init\"");
stmt.execute("call FTL_INIT()");
stmt.execute(String.format("CALL FTL_CREATE_INDEX('PUBLIC', '%s', %s)", Tables.COURSES_DETAIL, "NULL"));
But this error happens at runtime:
Error creating or initializing trigger "FTL_COURSES_DETAIL" object, class "org.h2.fulltext.FullTextLucene$FullTextTrigger", cause: "org.h2.message.DbException: Class ""org.h2.fulltext.FullTextLucene$FullTextTrigger"" not found [90086-197]"; see root cause for details; SQL statement:
CREATE TRIGGER IF NOT EXISTS "PUBLIC"."FTL_COURSES_DETAIL" AFTER INSERT, UPDATE, DELETE, ROLLBACK ON "PUBLIC"."COURSES_DETAIL" FOR EACH ROW CALL "org.h2.fulltext.FullTextLucene$FullTextTrigger"
After I downgraded the H2 library to the latest 'stable' version (1.4.196), the error changed:
Caused by: java.lang.NoSuchMethodError: org.apache.lucene.store.FSDirectory.open(Ljava/io/File;)Lorg/apache/lucene/store/FSDirectory;
and sometimes this error:
Exception calling user-defined function: "init(conn1: url=jdbc:default:connection user=INFC): org.apache.lucene.store.FSDirectory.open(Ljava/io/File;)Lorg/apache/lucene/store/FSDirectory;"; SQL statement:
call FTL_INIT()
I found a solution, although I know it isn't the best one.
I downgraded the Lucene library to 3.6.2 and used plain queries instead of the FullTextLucene functions.
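For reference, a minimal sketch of the plain-query approach, assuming the index was created as shown in the question; FTL_SEARCH is the SQL alias that H2's FullTextLucene registers, and its result set exposes QUERY (a row locator) and SCORE columns. The JDBC URL and search term are placeholders.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class FullTextSearchSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:h2:~/testdb", "sa", "")) {
            // FTL_SEARCH(query, limit, offset); limit 0 means "no limit".
            try (PreparedStatement ps =
                     conn.prepareStatement("SELECT * FROM FTL_SEARCH(?, 0, 0)")) {
                ps.setString(1, "some course name");
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString("QUERY") + " -> " + rs.getFloat("SCORE"));
                    }
                }
            }
        }
    }
}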

How to fix the exception "Saving data in the Hive serde table ... Please use the insertInto() API as an alternative" (Spark 2.1.0)

We are trying to save a DataFrame to a Hive table using the saveAsTable() method, but we get the exception below. We are trying to store the data as TextInputFormat.
Exception in thread "main" org.apache.spark.sql.AnalysisException: Saving data in the Hive serde table `cdx_network`.`inv_devices_incr` is not supported yet. Please use the insertInto() API as an alternative..;
reducedFN.write().mode(SaveMode.Append).saveAsTable("cdx_network.alert_pas_incr");
I tried insertInto() together with enableHiveSupport(), and it works. But I want to use saveAsTable().
I want to understand why saveAsTable() does not work. I tried going through the documentation and also the code, but did not gain much understanding; it is supposed to work. I have seen issues raised by people using the Parquet format, but for TextInputFormat I did not see any issues.
Table definition
CREATE TABLE `cdx_network.alert_pas_incr`(
`alertid` string,
`alerttype` string,
`alert_pas_documentid` string)
ROW FORMAT SERDE
'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
STORED AS INPUTFORMAT
'org.apache.hadoop.mapred.TextInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION
'maprfs:/apps/cdx-dev/alert_pas_incr'
TBLPROPERTIES (
'COLUMN_STATS_ACCURATE'='{\"BASIC_STATS\":\"true\"}',
'numFiles'='0',
'numRows'='0',
'rawDataSize'='0',
'totalSize'='0',
'transient_lastDdlTime'='1524121971')
Looks like this is a bug. I did a little research and found the issue SPARK-19152. The fix version is 2.2.0. Unfortunately I can't verify it, because my company's cluster uses version 2.1.0.
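A minimal sketch of the insertInto() workaround mentioned above, reusing the table name from the question (the SparkSession setup and the source of reducedFN are placeholder assumptions):

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;
import org.apache.spark.sql.SparkSession;

public class InsertIntoWorkaround {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("insertInto-workaround")
                .enableHiveSupport() // needed for Hive serde tables
                .getOrCreate();

        // Stand-in for the DataFrame from the question; assumed to have a
        // schema matching the target table's column order.
        Dataset<Row> reducedFN = spark.table("cdx_network.inv_devices_incr");

        // insertInto() appends into the existing Hive table definition,
        // which works where saveAsTable() raises the AnalysisException.
        reducedFN.write().mode(SaveMode.Append).insertInto("cdx_network.alert_pas_incr");
    }
}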

Can Cassandra nest Sets of UDTs?

I have a Cassandra schema with a table that has a column that is a SET of a user-defined type (UDT). That UDT itself has a column that is a SET of another UDT.
I can create the types and table in cqlsh, but when I try to use this schema from my Java (actually Scala) code, I get a "missing codec" error.
Does anyone know if the DataStax Java driver supports this?
CREATE TYPE testname (firstname text, lastname text);
CREATE TYPE testuser (testname FROZEN<SET<FROZEN<testname>>>);
CREATE TABLE testobjects (
    simplename text,
    testusers SET<FROZEN<testuser>>
) WITH CLUSTERING ORDER BY (simplename DESC);
I've registered codecs for the two UDT types but when I try to bind a prepared statement I get the error:
can't find codec for:
cqlType: frozen<set<frozen<testname>>>
javaType: TestNameUDT
because while there is a codec mapping testname to TestNameUDT, there is no codec mapping a set of testnames to a TestNameUDT.
So I'm wondering if anyone knows whether the Java driver supports this: has anyone created nested sets of UDTs? Thanks.
DataStax has acknowledged that this is a Cassandra defect and that it does not currently work.
With Spring Data Cassandra, yes, but the nested UDT must be declared without @CassandraType:
https://jira.spring.io/browse/DATACASS-506
Hope it helps.
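A minimal sketch of that mapping, assuming Spring Data Cassandra's @UserDefinedType annotation; the class and field names are invented to mirror the schema above, and, per the answer, the nested set carries no @CassandraType:

import java.util.Set;
import org.springframework.data.cassandra.core.mapping.UserDefinedType;

// Maps the inner UDT.
@UserDefinedType("testname")
class TestName {
    private String firstname;
    private String lastname;
    // getters/setters omitted
}

// Maps the outer UDT. The nested set is deliberately left without
// @CassandraType so the driver can infer the frozen UDT mapping.
@UserDefinedType("testuser")
class TestUser {
    private Set<TestName> testname;
    // getters/setters omitted
}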
