Error while using Lucene with H2 Database - java

I want to implement a small full-text search in my project, which uses an embedded H2 database. As far as I know, I have to use Lucene as the full-text engine to get relevance-ranked results (not just results that contain the term).
But I can't get it to work. This is the Lucene initialization block:
FullTextLucene.init(connection);
FullTextLucene.createIndex(connection, "PUBLIC", Tables.COURSES_DETAIL, Columns.NAME);
I also tried this approach:
stmt.execute(
"create alias if not exists FTL_INIT for \"org.h2.fulltext.FullTextLucene.init\"");
stmt.execute("call FTL_INIT()");
stmt.execute(
String.format("CALL FTL_CREATE_INDEX('PUBLIC','%s',%s)", Tables.COURSES_DETAIL, "NULL"));
But this error happens at runtime:
Error creating or initializing trigger "FTL_COURSES_DETAIL" object, class "org.h2.fulltext.FullTextLucene$FullTextTrigger", cause: "org.h2.message.DbException: Class ""org.h2.fulltext.FullTextLucene$FullTextTrigger"" not found [90086-197]"; see root cause for details; SQL statement:
CREATE TRIGGER IF NOT EXISTS "PUBLIC"."FTL_COURSES_DETAIL" AFTER INSERT, UPDATE, DELETE, ROLLBACK ON "PUBLIC"."COURSES_DETAIL" FOR EACH ROW CALL "org.h2.fulltext.FullTextLucene$FullTextTrigger"
After I downgraded the H2 library to the latest 'stable' version (1.4.196), the error changed:
Caused by: java.lang.NoSuchMethodError: org.apache.lucene.store.FSDirectory.open(Ljava/io/File;)Lorg/apache/lucene/store/FSDirectory;
and sometimes this error:
Exception calling user-defined function: "init(conn1: url=jdbc:default:connection user=INFC): org.apache.lucene.store.FSDirectory.open(Ljava/io/File;)Lorg/apache/lucene/store/FSDirectory;"; SQL statement:
call FTL_INIT()

I found a solution, though I know it isn't the best one:
I downgraded the Lucene library to 3.6.2 and used plain SQL queries instead of the FullTextLucene functions.
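For reference, here is a minimal sketch of that workaround (a sketch, not a drop-in solution): it assumes Lucene 3.6.2 is on the classpath next to H2; the JDBC URL and the search term 'math' are placeholders, while the table and column names are the ones from the question.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class FullTextDemo {
    public static void main(String[] args) throws Exception {
        // JDBC URL is a placeholder; COURSES_DETAIL/NAME are from the question.
        try (Connection conn = DriverManager.getConnection("jdbc:h2:./testdb");
             Statement stmt = conn.createStatement()) {
            stmt.execute("CREATE ALIAS IF NOT EXISTS FTL_INIT FOR "
                    + "\"org.h2.fulltext.FullTextLucene.init\"");
            stmt.execute("CALL FTL_INIT()");
            // Index only the NAME column; pass NULL to index all columns.
            stmt.execute("CALL FTL_CREATE_INDEX('PUBLIC', 'COURSES_DETAIL', 'NAME')");
            // FTL_SEARCH returns a QUERY locator and a relevance SCORE per hit.
            try (ResultSet rs = stmt.executeQuery(
                    "SELECT QUERY, SCORE FROM FTL_SEARCH('math', 0, 0)")) {
                while (rs.next()) {
                    System.out.println(rs.getString("QUERY")
                            + "  score=" + rs.getDouble("SCORE"));
                }
            }
        }
    }
}
```

The relevance SCORE per hit is exactly the ranking that plain LIKE queries cannot provide.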

Related

Update Statement Issues with Apache Ignite (2.13.0) + Java Spring Boot

We are facing issues while updating tables that have a column of type timestamp.
Insert and update work fine if we use the Ignite repository for both.
Insert and update work fine if we use native queries for both.
Insert via the Ignite repository followed by an update via a native query results in the error below:
```
class org.apache.ignite.binary.BinaryObjectException: Invalid flag value: 32
at org.apache.ignite.internal.binary.builder.BinaryBuilderReader.parseValue(BinaryBuilderReader.java:863)
at org.apache.ignite.internal.binary.builder.BinaryObjectBuilderImpl.serializeTo(BinaryObjectBuilderImpl.java:290)
at org.apache.ignite.internal.binary.builder.BinaryBuilderSerializer.writeValue(BinaryBuilderSerializer.java:103)
at org.apache.ignite.internal.binary.builder.BinaryBuilderSerializer.writeValue(BinaryBuilderSerializer.java:56)
at org.apache.ignite.internal.binary.builder.BinaryObjectBuilderImpl.serializeTo(BinaryObjectBuilderImpl.java:297)
at org.apache.ignite.internal.binary.builder.BinaryBuilderSerializer.writeValue(BinaryBuilderSerializer.java:103)
at org.apache.ignite.internal.binary.builder.BinaryBuilderSerializer.writeValue(BinaryBuilderSerializer.java:56)
at org.apache.ignite.internal.binary.builder.BinaryObjectBuilderImpl.serializeTo(BinaryObjectBuilderImpl.java:297)
```
If you can post example code, this would make a good bug report.
https://github.com/apache/ignite/blob/876a2ca190dbd88f42bc7acecff8b7783ce7ce54/modules/core/src/main/java/org/apache/ignite/internal/binary/builder/BinaryBuilderReader.java#L515
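To help with that, here is a hypothetical, self-contained reproduction of the mixed write path described above; the cache name, the Event class, and the createdAt column are made-up stand-ins, not taken from the question:

```java
import java.sql.Timestamp;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.cache.query.annotations.QuerySqlField;
import org.apache.ignite.configuration.CacheConfiguration;

public class TimestampRepro {
    static class Event {
        @QuerySqlField
        Timestamp createdAt;
        Event(Timestamp createdAt) { this.createdAt = createdAt; }
    }

    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            CacheConfiguration<Long, Event> cfg =
                    new CacheConfiguration<Long, Event>("events")
                            .setIndexedTypes(Long.class, Event.class);
            IgniteCache<Long, Event> cache = ignite.getOrCreateCache(cfg);

            // Insert through the cache/binary path (what the repository does).
            cache.put(1L, new Event(new Timestamp(System.currentTimeMillis())));

            // Update the TIMESTAMP column through native SQL.
            cache.query(new SqlFieldsQuery(
                            "UPDATE Event SET createdAt = ? WHERE _key = ?")
                    .setArgs(new Timestamp(System.currentTimeMillis()), 1L)).getAll();
        }
    }
}
```

If this binary-path insert followed by the SQL update of the timestamp column triggers the same BinaryObjectException, it should make a usable attachment for the bug report.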

Can't read data from an existing table in HBase via Phoenix

When querying HBase via Phoenix from Java, I encounter the following problem:
My connection is fine, and I can get all the data from SYSTEM.CATALOG using this query:
SELECT * FROM SYSTEM.CATALOG
It gives me a result like this:
TENANT_ID TABLE_SCHEM TABLE_NAME ...
null DEVLOCAL BASE_COMMENTS ...
null SYSTEM CATALOG ...
null g edges ...
null g messages ...
...
So I assume that g.edges exists, and I try:
Select * from g.edges
And the problem starts here:
Exception in thread "main" org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table undefined. tableName=G.EDGES
at org.apache.phoenix.query.ConnectionQueryServicesImpl.getAllTableRegions(ConnectionQueryServicesImpl.java:575)
at org.apache.phoenix.iterate.DefaultParallelScanGrouper.getRegionBoundaries(DefaultParallelScanGrouper.java:72)
at org.apache.phoenix.iterate.BaseResultIterators.getRegionBoundaries(BaseResultIterators.java:529)
at org.apache.phoenix.iterate.BaseResultIterators.getParallelScans(BaseResultIterators.java:696)
at org.apache.phoenix.iterate.BaseResultIterators.getParallelScans(BaseResultIterators.java:627)
at org.apache.phoenix.iterate.BaseResultIterators.<init>(BaseResultIterators.java:499)
at org.apache.phoenix.iterate.ParallelIterators.<init>(ParallelIterators.java:62)
at org.apache.phoenix.execute.ScanPlan.newIterator(ScanPlan.java:242)
at org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:351)
at org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:212)
at org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:207)
at org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:202)
at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:310)
at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:290)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:289)
at org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:283)
at org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:1706)
at HbaseDataProvider.query(HbaseDataProvider.java:29)
at Main.main(Main.java:10)
... saying that G.EDGES does not exist. I tried "g"."edges" to keep the names lowercase, but it still gives me the same kind of error.
Please give me an idea, or show me where I am misunderstanding something.
Thank you!
Oh, I fixed my problem. It was caused by a limitation of Phoenix: Apache Phoenix does not automatically sync its metadata from HBase, so if you have tables that were created using the HBase shell, you first have to create tables with the same table name and schema name in Phoenix so that Phoenix can map its metadata to the HBase tables. My fault was that I did not read the docs carefully.
Here is the reference: https://phoenix.apache.org/faq.html
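For anyone hitting the same wall, a sketch of what that mapping step might look like; the column family and column names below are pure assumptions, only the "g"."edges" schema and table names come from the question:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PhoenixMappingDemo {
    public static void main(String[] args) throws Exception {
        try (Connection conn =
                     DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement()) {
            // Quoted identifiers preserve the lowercase "g"."edges" names.
            // A VIEW gives read-only access to a table created outside Phoenix.
            stmt.execute("CREATE VIEW IF NOT EXISTS \"g\".\"edges\" ("
                    + "pk VARCHAR PRIMARY KEY, "
                    + "\"cf\".\"src\" VARCHAR, "
                    + "\"cf\".\"dst\" VARCHAR)");
            try (ResultSet rs = stmt.executeQuery(
                    "SELECT * FROM \"g\".\"edges\" LIMIT 10")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }
    }
}
```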

Dash in Schema Name - Groovy SQL query error

A quick Groovy syntax question here:
I'm working with Groovy's SQL capabilities (Groovy 2.4).
My schema name contains dashes, like "SAMPLE-SCHEMA", and my table is called "SAMPLE_TABLE".
When I run the following, I get an exception saying the relation does not exist.
I'm running against Postgres 9.6 with the correct driver.
def sql = Sql.newInstance(...)
sql.eachRow('SELECT SAMPLE_COLUMN FROM "SAMPLE-SCHEMA".SAMPLE_TABLE') { row ->
    // do something with row here
}
If I query another schema, one without dashes, it works flawlessly.
The exception message is:
Caught: org.postgresql.util.PSQLException: ERROR: relation "SAMPLE-SCHEMA.SAMPLE_TABLE" does not exist
How can I adjust my query to make it work? Thanks!
OK, I've found the answer: quoted identifiers in PostgreSQL are case sensitive, so by mistake I wrote "SAMPLE-SCHEMA" when it should have been "sample-schema" instead.
I'm not deleting the question because it might help someone.
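For completeness, the same fix shown as a plain JDBC sketch (connection details are placeholders); the only change that matters is the lowercase quoted schema name:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class QuotedSchemaDemo {
    public static void main(String[] args) throws Exception {
        // URL and credentials are placeholders. Quoted identifiers are case
        // sensitive in PostgreSQL, so the schema must be written exactly as
        // it was created: "sample-schema", not "SAMPLE-SCHEMA".
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:postgresql://localhost:5432/mydb", "user", "secret");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "SELECT SAMPLE_COLUMN FROM \"sample-schema\".SAMPLE_TABLE")) {
            while (rs.next()) {
                System.out.println(rs.getString("SAMPLE_COLUMN"));
            }
        }
    }
}
```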

Cassandra Java DataStax 2.1.8: cannot connect to keyspace with quotes

I have a simple piece of code for removing data from Cassandra 2:
Cluster myCluster = Cluster.builder().addContactPoint(myhost).withPort(myport).build();
Session session = myCluster.connect(keyspaceName);
session.execute(deleteStatement); // it is just a simple Delete.Where
So basically, when I try to do something with (for example) keyspaceName = "test",
it will execute my delete statement just fine. But if I try the same thing with (for example) keyspaceName = "\"DONT_WORK\"" (since the keyspace name is quoted in Cassandra), it won't work and will throw:
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: localhost/127.0.0.1:16660 (com.datastax.driver.core.ConnectionException: [localhost/127.0.0.1:16660] Pool is shutdown))
at com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:84)
at com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:214)
I need help, please.
PS: I even tried the Metadata.quote() static method from the DataStax library; it still doesn't work.
You should not need to quote the keyspace name when connecting. Quoting is important to protect case sensitivity in the context of a CQL string, but you do not need to protect it when you pass the keyspace name as an API parameter.
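A minimal sketch of that advice; the host, port, and table name are placeholders:

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class KeyspaceConnectDemo {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1")
                .withPort(9042)
                .build()) {
            // API parameter: pass the keyspace name without extra quotes.
            Session session = cluster.connect("DONT_WORK");
            // CQL text: quotes are needed here to preserve case sensitivity.
            session.execute("DELETE FROM \"DONT_WORK\".mytable WHERE id = 1");
        }
    }
}
```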
OK, there is no need for further investigation of this problem. The issue was that I accidentally used the 2.1.8 DataStax library against Cassandra version 2.0.8. I have to stop using the numeric keypad. A simple mistake, but it sure made quite a fuss.

Unable to remove elements from MongoDB using Hibernate OGM

I am using the following code to remove all the elements from a MongoDB collection that have a given parent_id:
final String strQuery = "db.Child.remove({'$query':{'PARENT_ID':'" + parentId + "'}})";
final Query query = entityManager.createNativeQuery(strQuery, Child.class);
query.executeUpdate();
However, I am getting the following exception:
Unexpected Exception
com.mongodb.util.JSONParseException:
db.Child.remove({'$query':{'CHILD_ID':'7313c076-dbaa-4557-b80f-68d040b65d82'}})
If I replace remove with find, I get the results back. I don't know what is causing the JSON parse error in the above-mentioned native query.
I am using Hibernate OGM version 4.3.Final with MongoDB 3.2.
Hibernate OGM 4.3 did not support the remove operation for native queries.
You should give OGM 5.0.2.Final a try: it should solve your issue, as we added support for quite a lot of other operations (and a lot of other fixes and improvements).
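If upgrading OGM is not an option right away, one possible fallback, sketched here with placeholder host and database names, is to issue the delete directly through the MongoDB Java driver that OGM already uses:

```java
import com.mongodb.MongoClient;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

public class ChildCleanup {
    public static void main(String[] args) {
        String parentId = "7313c076-dbaa-4557-b80f-68d040b65d82"; // from the log
        // Host and database name are placeholders.
        MongoClient client = new MongoClient("localhost", 27017);
        try {
            MongoCollection<Document> children =
                    client.getDatabase("mydb").getCollection("Child");
            // Delete every Child document with the given PARENT_ID,
            // bypassing the OGM native-query parser entirely.
            children.deleteMany(new Document("PARENT_ID", parentId));
        } finally {
            client.close();
        }
    }
}
```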
