Dash in Schema Name - Groovy SQL query error - java

A quick Groovy syntax question here:
I'm working with Groovy's SQL capabilities (Groovy 2.4).
My schema name contains dashes, like "SAMPLE-SCHEMA", and my table is called "SAMPLE_TABLE".
When I run the following I'm getting an exception that the relation does not exist.
I'm running against Postgres 9.6 with the correct driver.
def sql = Sql.newInstance(...)
sql.eachRow('SELECT SAMPLE_COLUMN FROM "SAMPLE-SCHEMA".SAMPLE_TABLE') { row ->
    // do something with row here
}
If I query another schema without dashes, it works flawlessly.
The exception message is:
Caught: org.postgresql.util.PSQLException: ERROR: relation "SAMPLE-SCHEMA.SAMPLE_TABLE" does not exist
How can I adjust my query to make it work? Thanks

OK, I've found the answer: quoted identifiers in PostgreSQL are case sensitive, so by mistake I wrote "SAMPLE-SCHEMA" when it should have been "sample-schema" instead.
I'm not deleting the question because it might help someone else.
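The root cause generalizes: PostgreSQL folds unquoted identifiers to lower case, while double-quoted identifiers are matched case-sensitively (quoting is also the only way to use a dash in a name). Here is a stdlib-only sketch of that rule; the helper names are mine, not a PostgreSQL or Groovy API:

```java
// Sketch of PostgreSQL's identifier-matching rules (no database required).
// Unquoted identifiers are folded to lower case; quoted ones keep their case.
public class PgIdentifiers {

    /** How PostgreSQL stores an identifier that was written without quotes. */
    static String foldUnquoted(String ident) {
        return ident.toLowerCase();
    }

    /** Quote an identifier so its case (and any dashes) survive; embedded quotes are doubled. */
    static String quote(String ident) {
        return "\"" + ident.replace("\"", "\"\"") + "\"";
    }

    public static void main(String[] args) {
        // A schema created as: CREATE SCHEMA "sample-schema" is stored as sample-schema.
        String stored = "sample-schema";
        System.out.println(stored.equals("SAMPLE-SCHEMA")); // false: quoted "SAMPLE-SCHEMA" does not match
        System.out.println(stored.equals("sample-schema")); // true: quoted "sample-schema" matches
        System.out.println(quote("sample-schema"));
    }
}
```

This is why the quoted "SAMPLE-SCHEMA" in the query failed against a schema stored as sample-schema.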


Cannot save to geometry field from app deployed in Glassfish to Postgres13 with Postgis3.1.7

I have this weird issue where I cannot read or write to geometry fields in my Database.
INSERT INTO digital_addresses(gps_coordinates, digital_address) VALUES (ST_SetSRID(ST_MakePoint(?, ?), 4326), ?) fails with the error:
exception in get getOrSaveDigitalAddress. Reason: ERROR: function st_makepoint(real, real) does not exist
Hint: No function matches the given name and argument types. You might need to add explicit type casts.
Position: 233
I know PostGIS is installed correctly because I am able to execute the same query from pgAdmin.
I have tried putting postgis-geometry-2.5.1.jar, postgis-jdbc-2.5.0.jar and postgresql-42.2.25.jar, in different permutations and different versions, into {domain_dir}/domain1/lib/ext, but the issue persists.
select postgis_full_version():
POSTGIS="3.1.7 aafe1ff" [EXTENSION] PGSQL="130" GEOS="3.9.2-CAPI-1.14.3" PROJ="8.2.1" LIBXML="2.9.7" LIBJSON="0.13.1" LIBPROTOBUF="1.3.0" WAGYU="0.5.0 (Internal)"
Does anyone know what might be wrong, or what versions of PostgreSQL and PostGIS should work for my setup?
Thanks!
If the gps_coordinates field type is GEOMETRY, try this one:
INSERT INTO digital_addresses (gps_coordinates, digital_address) VALUES (ST_GeomFromText('POINT(-70.064544 40.28787)', 4326), ?)
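Another angle on the original error: "function st_makepoint(real, real) does not exist" suggests the JDBC driver bound the placeholders as real (float4), while ST_MakePoint is declared for double precision. A common fix is to bind doubles and/or add explicit casts in the SQL. A sketch, assuming the table and column names from the question; only the SQL-building part runs without a database:

```java
// Sketch: make the parameter types match ST_MakePoint(double precision, double precision).
// The explicit ::double precision casts let the planner resolve the function even if
// the driver would otherwise send the parameters as real.
public class GeomInsertSql {

    static String insertSql() {
        return "INSERT INTO digital_addresses (gps_coordinates, digital_address) "
             + "VALUES (ST_SetSRID(ST_MakePoint(?::double precision, ?::double precision), 4326), ?)";
    }

    public static void main(String[] args) {
        System.out.println(insertSql());
        // Against a live connection you would then bind doubles, e.g.:
        // try (PreparedStatement ps = conn.prepareStatement(insertSql())) {
        //     ps.setDouble(1, -70.064544); // longitude
        //     ps.setDouble(2, 40.28787);   // latitude
        //     ps.setString(3, digitalAddress);
        //     ps.executeUpdate();
        // }
    }
}
```

This keeps ST_MakePoint/ST_SetSRID (as in the original query) rather than switching to ST_GeomFromText.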

Error while using Lucene with H2 Database

I want to implement a small full-text search in my project, which uses an embedded H2 database. As far as I know, I have to use Lucene as the full-text engine to get relevance-ranked results (not just substring matches).
But I can't get it to work. This block is the Lucene initialization:
FullTextLucene.init(connection);
FullTextLucene.createIndex(connection, "PUBLIC", Tables.COURSES_DETAIL, Columns.NAME);
I also tried this way:
stmt.execute(
    "create alias if not exists FTL_INIT for \"org.h2.fulltext.FullTextLucene.init\"");
stmt.execute("call FTL_INIT()");
stmt.execute(
    String.format("CALL FTL_CREATE_INDEX('PUBLIC','%s',%s)", Tables.COURSES_DETAIL, "NULL"));
But this error happens at runtime:
Error creating or initializing trigger "FTL_COURSES_DETAIL" object, class "org.h2.fulltext.FullTextLucene$FullTextTrigger", cause: "org.h2.message.DbException: Class ""org.h2.fulltext.FullTextLucene$FullTextTrigger"" not found [90086-197]"; see root cause for details; SQL statement:
CREATE TRIGGER IF NOT EXISTS "PUBLIC"."FTL_COURSES_DETAIL" AFTER INSERT, UPDATE, DELETE, ROLLBACK ON "PUBLIC"."COURSES_DETAIL" FOR EACH ROW CALL "org.h2.fulltext.FullTextLucene$FullTextTrigger"
After I downgraded the H2 library to the latest 'stable' version (1.4.196), the error changed:
Caused by: java.lang.NoSuchMethodError: org.apache.lucene.store.FSDirectory.open(Ljava/io/File;)Lorg/apache/lucene/store/FSDirectory;
and sometimes this error:
Exception calling user-defined function: "init(conn1: url=jdbc:default:connection user=INFC): org.apache.lucene.store.FSDirectory.open(Ljava/io/File;)Lorg/apache/lucene/store/FSDirectory;"; SQL statement:
call FTL_INIT()
I found a solution, though I know it isn't the best one: I downgraded the Lucene lib to 3.6.2 and used plain queries instead of the FullTextLucene functions.
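The NoSuchMethodError points at a Lucene version mismatch rather than an H2 bug: FSDirectory.open(java.io.File) exists in the Lucene 3.x/4.x API but was replaced by FSDirectory.open(java.nio.file.Path) in Lucene 5, so H2 1.4.x's FullTextLucene fails against a newer Lucene on the classpath. A dependency sketch matching the versions used in this answer (verify the exact Lucene version against your H2 release notes):

```xml
<!-- Versions taken from this question/answer; pin Lucene to the API level H2 was built against. -->
<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
    <version>1.4.196</version>
</dependency>
<dependency>
    <groupId>org.apache.lucene</groupId>
    <artifactId>lucene-core</artifactId>
    <version>3.6.2</version>
</dependency>
```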

Cassandra Java datastax 2.1.8 Cannot connect to keyspace with quotes

I have a simple piece of code for removing data from Cassandra 2:
Cluster myCluster = Cluster.builder().addContactPoint(myhost).withPort(myport).build();
Session session = myCluster.connect(keyspaceName);
session.execute(deleteStatement); // it is just a simple Delete.Where
So basically, when I try to do something on (for example) keyspaceName = "test",
it will happily execute my delete statement. But if I try the same thing for (for example) keyspaceName = "\"DONT_WORK\"" (since I have a quoted keyspace name in Cassandra), it won't work, and will throw:
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: localhost/127.0.0.1:16660 (com.datastax.driver.core.ConnectionException: [localhost/127.0.0.1:16660] Pool is shutdown))
at com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:84)
at com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:214)
I need help, please.
PS. I even used the Metadata.quote() static method from the DataStax library; it still isn't working.
You should not need to quote the keyspace name for connecting. Quoting is important to protect case sensitivity in the context of a CQL string, but you do not need to protect it if you are passing the keyspace name as an API parameter.
OK, there is no need for further investigation of this problem. The issue was that I accidentally used the 2.1.8 DataStax library against Cassandra version 2.0.8. I have to stop using the numeric keypad. A simple mistake, but it sure made quite a fuss.
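As the answer above notes, quoting belongs inside CQL strings, not in API parameters. A stdlib-only sketch of that distinction; quoteForCql is my own helper mimicking what the driver's Metadata.quote produces, not the driver API itself:

```java
// Sketch: CQL identifier quoting vs. API parameters.
// Inside a CQL statement a case-sensitive keyspace must be double-quoted
// (embedded quotes doubled). As an argument to Cluster.connect(...), pass the raw name.
public class CqlQuoting {

    /** Roughly what Metadata.quote(...) produces, for use inside CQL text only. */
    static String quoteForCql(String ident) {
        return "\"" + ident.replace("\"", "\"\"") + "\"";
    }

    public static void main(String[] args) {
        String keyspace = "DONT_WORK";
        // Correct inside a CQL statement:
        String cql = "DELETE FROM " + quoteForCql(keyspace) + ".my_table WHERE id = 1";
        System.out.println(cql);
        // Correct as an API parameter (no quotes added):
        //   Session session = myCluster.connect(keyspace);
    }
}
```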

mysql with jdbc returning fully qualified column names

I have a Java web application with MySQL 5.7 as the database, and I'm using jOOQ 3.8.2 to execute JDBC code.
Executing the following select:
Result<Record> result = dsl.select(TABLE1.fields())
    .select(TABLE2.fields())
    .from(TABLE1)
    .join(TABLE2).on(TABLE1.FK.eq(TABLE2.ID))
    .fetch();
for (Record rec : result) {
    Table1Record tb1Rec = rec.into(TABLE1);
    Table2Record tb2Rec = rec.into(TABLE2);
}
After updating jOOQ to version 3.8.2, my logs show the following message:
INFO org.jooq.impl.Fields - Ambiguous match found for ....
To me, it's clear that the problem is that both tables have some columns with the same names. So I tried (as an example):
Field tb1Field = TABLE1.FIELD1;
dsl.select(tb1Field.as("TABLE1_" + tb1Field.getName()))
But then
Table1Record tb1Rec = rec.into(TABLE1);
returns null.
My preferred solution would be, in some way, to force MySQL to return fully qualified column names, but I can't find any option to do that.
Any help?
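Since JDBC result sets expose only column labels, not qualified names, the usual workaround is the one this question already started: alias every column with a table prefix so labels stay unique across the join. A stdlib-only sketch of that labeling scheme (the table/column names here are illustrative, not from a real schema):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: build unique column labels by prefixing each column with its table name,
// mirroring the "TABLE1_" + field.getName() idea from the question.
public class ColumnAliases {

    static List<String> prefixed(String table, List<String> columns) {
        List<String> out = new ArrayList<>();
        for (String c : columns) {
            out.add(table + "_" + c); // e.g. TABLE1_ID and TABLE2_ID stay distinct
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(prefixed("TABLE1", List.of("ID", "NAME")));
        System.out.println(prefixed("TABLE2", List.of("ID", "NAME")));
    }
}
```

Note the trade-off observed in the question: once fields are aliased this way, mapping records back per-table needs to go through the aliased fields rather than the original table fields.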

IntelliJ IDEA code inspection: HQL custom dialect & registered functions

My question is about
using registered functions for date/time manipulations in Hibernate Query Language and
IntelliJ IDEA's code inspection for these registered functions in HQL.
I'm using Hibernate 4.2.5 with Java 7, SQL Server 2008 R2 as the database, and IntelliJ IDEA 12.1.6.
In an HQL query I need to perform the TSQL DATEADD function - or the equivalent HQL date operation. This doesn't seem to exist.
Here's what I'd like to achieve:
update MyTable set startTime = GETDATE(), targetTime = DATEADD(HOUR, allocatedTime, GETDATE()), endTime = null where faultReport.faultReportId = :faultReportId and slaTypeId = :slaTypeId
Searching for answers online has been disappointingly no help, and the most common advice (like the comment seen here: https://stackoverflow.com/a/18150333/2753571) seems to be "don't use date manipulation in hql." I don't see how I can get around performing the operation in the SQL statement in the general case (e.g. when you want to update one column based on the value in another column in multiple rows).
In a similar fashion to the advice in this post: Date operations in HQL, I've subclassed a SQLServerDialect implementation and registered new functions:
registerFunction("get_date", new NoArgSQLFunction("GETDATE", StandardBasicTypes.TIMESTAMP)); // this function is a duplication of "current_timestamp" but is here for testing / illustration
registerFunction("add_hours", new VarArgsSQLFunction(TimestampType.INSTANCE, "DATEADD(HOUR,", ",", ")"));
and added this property to my persistence.xml:
<property name="hibernate.dialect" value="my.project.dialect.SqlServerDialectExtended" />
and then I'm testing with a simple (meaningless, admitted) query like this:
select x, get_date(), add_hours(1, get_date()) from MyTable x
The functions appear to be successfully registered, and that query seems to be working because the following SQL is generated and the results are correct:
select
faultrepor0_.FaultReportSLATrackingId as col_0_0_,
GETDATE() as col_1_0_,
DATEADD(HOUR,
1,
GETDATE()) as col_2_0_,
... etc.
But I now have this problem with IntelliJ IDEA: where get_date() is used in the HQL, the code inspection complains "<expression> expected, got ')'". This is marked as an error and the file is marked in red as a compilation failure.
Can someone explain how to deal with this, please, or explain what a better approach is? Am I using the incorrect SQLFunction template (VarArgsSQLFunction)? If yes, which is the best one to use?
I'd like the usage of the registered function to not be marked as invalid in my IDE. Ideally, if someone can suggest a better way altogether than creating a new dialect subclass, that would be awesome.
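To sanity-check which SQLFunction template fits, it helps to see how VarArgsSQLFunction composes SQL: it concatenates the begin fragment, the rendered arguments joined by the separator, and the end fragment. A stdlib-only sketch mimicking that behaviour (my own stand-in, not Hibernate's actual class):

```java
import java.util.List;

// Sketch of how Hibernate's VarArgsSQLFunction(type, begin, sep, end) renders a call:
// begin + arg1 + sep + arg2 + ... + end.
public class VarArgsRender {

    static String render(String begin, String sep, String end, List<String> args) {
        return begin + String.join(sep, args) + end;
    }

    public static void main(String[] args) {
        // add_hours(1, get_date()) from the question, after its args are rendered:
        System.out.println(render("DATEADD(HOUR,", ",", ")", List.of("1", "GETDATE()")));
        // -> DATEADD(HOUR,1,GETDATE())  (matches the generated SQL shown above)
    }
}
```

This matches the correct SQL the dialect produced, which supports the reading that the remaining problem is IDE inspection of the HQL, not the registered functions themselves.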
