We are facing a serious problem with the H2 database, version 1.4.199, in server mode. The application data layer programmatically creates a table if it does not exist, for example:
CREATE TABLE IF NOT EXISTS mytable (...);
CREATE INDEX IF NOT EXISTS idx_mytable ON mytable(mycol);
and works fine for days, writing data into the table above. After restarting the service, on the first connection attempt the engine throws this error:
org.h2.jdbc.JdbcSQLSyntaxErrorException: Table "mytable" not found; SQL statement:
CREATE INDEX "PUBLIC"."IDX_MYTABLE" ON "PUBLIC"."MYTABLE"("MYCOL")
If we try recovering the database, the generated SQL script no longer contains "mytable", so the data are definitively lost! We have hundreds of installations of the software, but the error occurs only occasionally, on about 10% of them.
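For context, here is a minimal sketch of the kind of startup DDL the data layer runs over JDBC; the server-mode URL, credentials and column definition are assumptions based on the snippets above, not the actual application code:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class SchemaInit {
    public static void main(String[] args) throws SQLException {
        // Assumed server-mode URL; host, port, database path and credentials are placeholders.
        String url = "jdbc:h2:tcp://localhost:9092/~/mydb";
        try (Connection conn = DriverManager.getConnection(url, "sa", "");
             Statement st = conn.createStatement()) {
            // Idempotent DDL, as in the question: table first, then the index on it.
            st.execute("CREATE TABLE IF NOT EXISTS mytable (mycol VARCHAR(255))");
            st.execute("CREATE INDEX IF NOT EXISTS idx_mytable ON mytable(mycol)");
        }
    }
}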
Please share the H2 properties you used.
"spring.jpa.hibernate.ddl-auto" should be "update".
Related
In Talend Data Quality, I have configured a JDBC connection to an OpenEdge database and it's working fine.
I can pull the list of tables and select columns to analyse, but when executing the analysis, I get this:
Table "DBGSS.SGSSGSS" cannot be found.
This is because it does not specify a schema, only the database name - DBGSS.
How can I make it specify the database, the schema and then the table name? Or just the table name; that would work too.
Thanks!
You can use a tDBConnection component, which lets you specify a schema.
Then use it with the Use existing connection option.
See the documentation: https://help.talend.com/r/en-US/7.3/db-generic/tdbconnection
I am working with the Java Quartz scheduler and everything is working as expected.
Now I have changed the MySQL property lower_case_table_names=1 (I need this setting on purpose).
But now I am getting a "table doesn't exist" error when I try to query the Quartz-related tables;
queries on all other tables (named in both lower case and upper case) work without any issue.
See my query:
select * from QRTZ_LOCKS;
ERROR 1146 (42S02): Table 'schema.qrtz_locks' doesn't exist
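One way to check how the server actually stores the Quartz table names after the change is to list them from information_schema; a minimal JDBC sketch follows (the URL, schema name and credentials are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class ListQuartzTables {
    public static void main(String[] args) throws Exception {
        // Placeholders: adjust the URL, schema name and credentials to your environment.
        String url = "jdbc:mysql://localhost:3306/schema";
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT table_name FROM information_schema.tables "
                   + "WHERE table_schema = ? AND LOWER(table_name) LIKE 'qrtz%'")) {
            ps.setString(1, "schema");
            try (ResultSet rs = ps.executeQuery()) {
                // Prints the table names exactly as the server knows them,
                // so their case can be compared with what Quartz queries.
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }
    }
}

If the stored names turn out to be upper case while the server now lowercases all identifiers, the MySQL documentation recommends renaming such tables to lower case before switching lower_case_table_names.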
In my project we use HSQLDB for running unit test cases and Oracle in production. Liquibase is used to run queries on the environments. I have an issue with creating a table with the LONGVARCHAR datatype. I am already using this statement to enable Oracle syntax in HSQLDB:
SET DATABASE SQL SYNTAX ORA TRUE
When I try to create the table in HSQLDB, this query seems to work:
CREATE TABLE A (DATA LONGVARCHAR);
And when I try to create the table in Oracle, the following works:
CREATE TABLE A (DATA LONG VARCHAR);
How can I write a homogeneous query that works on both database servers?
Use a CLOB
CREATE TABLE A (DATA CLOB);
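For what it's worth, plain JDBC access to that column then looks the same on both engines; a minimal sketch assuming the CLOB definition above (the in-memory HSQLDB URL and credentials are placeholders; for very large values setCharacterStream may be preferable to setString):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class ClobExample {
    public static void main(String[] args) throws Exception {
        // Placeholder in-memory HSQLDB URL; in production the Oracle URL would be used instead.
        String url = "jdbc:hsqldb:mem:testdb";
        try (Connection conn = DriverManager.getConnection(url, "SA", "")) {
            try (PreparedStatement ddl = conn.prepareStatement("CREATE TABLE A (DATA CLOB)")) {
                ddl.execute();
            }
            // setString/getString work for CLOB columns with both the HSQLDB and Oracle drivers.
            try (PreparedStatement ins = conn.prepareStatement("INSERT INTO A (DATA) VALUES (?)")) {
                ins.setString(1, "some long text ...");
                ins.executeUpdate();
            }
            try (PreparedStatement sel = conn.prepareStatement("SELECT DATA FROM A");
                 ResultSet rs = sel.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("DATA"));
                }
            }
        }
    }
}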
I'm trying to import data from a DWH SQL Server table that uses a clustered columnstore index into Kudu through Flume. However, after my custom Flume source retrieves a certain number of rows from the database, this exception occurs:
SqlExceptionHelper: Cursors are not supported on a table which has a clustered columnstore index
I'm using a type 4 JDBC SQL Server driver, and apparently it uses cursors to iterate over the result set. Therefore, I tried setting the fetch size to the number of rows the query is limited to, but nothing changed.
How can I stop the JDBC driver from using cursors and thus get all rows imported into a Kudu table?
Thanks in advance.
Try setting selectmethod=direct in the connection properties. Source:
If set to direct (the default), the database server sends the complete result set in a single response to the driver when responding to a query. A server-side database cursor is not created if the requested result set type is a forward-only result set. Typically, responses are not cached by the driver. Using this method, the driver must process the entire response to a query before another query is submitted. If another query is submitted (using a different statement on the same connection, for example), the driver caches the response to the first query before submitting the second query. Typically, the Direct method performs better than the Cursor method.
Of course, you need to define your result set as FORWARD_ONLY to guarantee this.
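A minimal sketch of that combination, assuming the Microsoft JDBC driver (where the property is spelled selectMethod); the host, database, table name and credentials are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DirectSelect {
    public static void main(String[] args) throws Exception {
        // selectMethod=direct asks the driver to stream the whole result set
        // instead of opening a server-side cursor; connection details are placeholders.
        String url = "jdbc:sqlserver://localhost:1433;databaseName=dwh;selectMethod=direct";
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             // A forward-only, read-only statement so no scrollable cursor is required.
             Statement st = conn.createStatement(ResultSet.TYPE_FORWARD_ONLY,
                                                 ResultSet.CONCUR_READ_ONLY);
             ResultSet rs = st.executeQuery("SELECT * FROM dbo.MyColumnstoreTable")) {
            while (rs.next()) {
                // hand each row to the Flume source / Kudu writer here
            }
        }
    }
}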
I had the same problem with a table that has a clustered columnstore index.
A simple SELECT statement over ODBC failed with "Cursors are not supported on a table which has a clustered columnstore index".
My workaround is to create a view that contains the statement:
select * from dbo.TableName
It works for me.
I have two different databases, one MySQL and one Oracle. Each has one table, with a different table name and different column names. Now I have to perform some DB operations on each database from a single Java application. Suppose for the MySQL DB I have an Emp table with columns Id, Name, Dept, and for the Oracle DB I have a Student table with StudentName and StudentDept. How can I manage the two DBs without changing code? I can put all the connection-related data (connection URL, username, password) in a properties file, but to execute a query I still have to mention the table name and column names in code. How can I manage this dynamically, without altering the code, so that if in the future a new DB with a different table name and column names is added, I only need to add it to the properties file and not touch the code? Please suggest.
This might not be the prettiest, but one way to do this:
1. On application launch, parse the properties files to get all DB connections. Store these however you want: a list of connection pools, a list of single connections, a list of connection strings, etc. It doesn't matter.
2. Run a predefined stored procedure or SELECT query to retrieve all table names from each database found in step 1. In Sybase you can do this with:
select name from sysobjects where type = 'U'
3. Build a Map where the key is the table name and the value is either the DB name, connection, connection string, or whatever you are using to manage your DB connections, taken from the result set of step 2. Anything that can be passed to your DB connection manager to identify which database it should connect to will work as the value.
4. In code, when a table name is passed in, look up the required DB in the map.
5. Execute the query against the DB info returned from the map you created in step 3.
As long as the tables are distinct in each DB, this will work. Once this is set up, new DBs can be added to the properties file and the cache can be refreshed with an application restart. However, if new tables/columns are being sent to the code, how are these being passed without any code change?
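A rough sketch of steps 1 to 5, assuming the per-database JDBC URLs and credentials come from the properties file; it uses JDBC's DatabaseMetaData.getTables in place of a vendor-specific query like the Sybase example, and the single shared user/password is a simplification:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class TableRouter {
    // Key: table name (upper-cased); value: the JDBC URL of the database that owns it.
    private final Map<String, String> tableToUrl = new HashMap<>();
    private final String user;
    private final String password;

    public TableRouter(List<String> dbUrls, String user, String password) throws Exception {
        this.user = user;
        this.password = password;
        // Steps 2 and 3: list the tables of each configured database and remember where each lives.
        for (String url : dbUrls) {
            try (Connection conn = DriverManager.getConnection(url, user, password);
                 ResultSet rs = conn.getMetaData()
                                    .getTables(null, null, "%", new String[] {"TABLE"})) {
                while (rs.next()) {
                    tableToUrl.put(rs.getString("TABLE_NAME").toUpperCase(), url);
                }
            }
        }
    }

    // Steps 4 and 5: look up which database owns the table, then run the query there.
    public void runQuery(String tableName, String sql) throws Exception {
        String url = tableToUrl.get(tableName.toUpperCase());
        try (Connection conn = DriverManager.getConnection(url, user, password);
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(sql)) {
            while (rs.next()) {
                // process each row here
            }
        }
    }
}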