I am using jOOQ version 3.11.9 and have MySQL 8.0.11 installed locally. While initiating a jOOQ connection to MySQL I get the following error:
org.jooq.exception.DataAccessException: SQL [select 1 as `one` from dual where exists (select 1 as `one` from `mysql`.`proc`)]; Table 'mysql.proc' doesn't exist
I understand that MySQL 8.0.11 no longer contains this table. So what is the solution? I cannot downgrade MySQL, as other projects are already running on this version.
As you can see in the MySQL release notes:
Previously, information about stored routines and events was stored in the proc and event tables of the mysql system database. Those tables are no longer used. Instead, information about stored routines and events is stored in the routines, events, and parameters data dictionary tables in the mysql system database. The old tables used the MyISAM (nontransactional) storage engine. The new tables use the InnoDB (transactional) engine.
That query is there precisely to check whether you're running on MySQL 8+. It should not cause an error or even a stack trace (but maybe a debug message). You can safely ignore it.
If you found an error or stack trace message, or if this causes your code generation to fail, it might be a bug in jOOQ's logging configuration, which I would invite you to file here: https://github.com/jOOQ/jOOQ/issues/new
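If the message is merely noisy, raising the log level of the org.jooq logger hides it. A minimal sketch, assuming jOOQ is logging through java.util.logging (its fallback when slf4j/log4j are not on the classpath); with slf4j/logback you would instead set the org.jooq logger to INFO in your logging configuration:

import java.util.logging.Level;
import java.util.logging.Logger;

public class QuietJooq {
    public static void main(String[] args) {
        // Raise org.jooq above FINE/DEBUG so the harmless mysql.proc probe
        // message is no longer printed, then run code generation as usual.
        Logger.getLogger("org.jooq").setLevel(Level.INFO);
    }
}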
I have multiple application servers configured to run flyway at startup. Each server attempts to apply the same set of migrations across multiple schemas in the same Oracle 11g database. These servers are started at the same time. This works most of the time. On occasion, however, a server fails during migration because it encounters a unique constraint violation.
Unable to insert row for version '0' in metadata table "FOO"."SCHEMA_VERSION"
SQL State : 23000
Error Code : 1
Message : ORA-00001: unique constraint (FOO.SCHEMA_VERSION_pk) violated
at org.flywaydb.core.internal.metadatatable.MetaDataTableImpl.addAppliedMigration(MetaDataTableImpl.java:242)
at org.flywaydb.core.internal.metadatatable.MetaDataTableImpl.addBaselineMarker(MetaDataTableImpl.java:334)
at org.flywaydb.core.internal.command.DbBaseline$2.call(DbBaseline.java:135)
at org.flywaydb.core.internal.command.DbBaseline$2.call(DbBaseline.java:112)
at org.flywaydb.core.internal.util.jdbc.TransactionTemplate.execute(TransactionTemplate.java:75)
at org.flywaydb.core.internal.command.DbBaseline.baseline(DbBaseline.java:112)
at org.flywaydb.core.Flyway$1.execute(Flyway.java:990)
at org.flywaydb.core.Flyway$1.execute(Flyway.java:971)
at org.flywaydb.core.Flyway.execute(Flyway.java:1464)
at org.flywaydb.core.Flyway.migrate(Flyway.java:971)
...
I thought that flyway would be able to handle this situation based on the following:
https://flywaydb.org/documentation/faq#parallel
Shouldn't a Flyway instance detect that the schema version table is locked and move on to the next schema?
Is there a setting that can ensure the schema version table is locked, or is this a bug?
The OracleTable class locks the table in exclusive mode. Should it add the NOWAIT clause and handle any resulting Oracle exception?
It should work. We test this with every build, and the behavior without NOWAIT is the one we want: block until the lock is released. If you can reliably reproduce this or see a clear mistake in our code, then by all means please file a bug with the necessary details in the issue tracker.
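For context, a minimal sketch of the difference being discussed, using plain JDBC with hypothetical connection details and table name: without NOWAIT the LOCK TABLE call blocks until the lock is released, with NOWAIT Oracle fails fast with ORA-00054:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class LockDemo {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//localhost:1521/XE", "foo", "secret")) {
            conn.setAutoCommit(false);
            try (Statement stmt = conn.createStatement()) {
                // Without NOWAIT this call blocks until the other session's
                // lock is released (Flyway's chosen behavior). With NOWAIT,
                // Oracle raises ORA-00054 immediately instead.
                stmt.execute("LOCK TABLE SCHEMA_VERSION IN EXCLUSIVE MODE NOWAIT");
            } catch (SQLException e) {
                if (e.getErrorCode() == 54) {
                    System.out.println("Table locked by another instance; retry later");
                } else {
                    throw e;
                }
            }
        }
    }
}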
We recently configured our Oracle database to use so-called native encryption (Oracle Advanced Security Option).
As development goes on, some SQL queries are badly written, so an error should be returned by the JDBC driver (ojdbc7 v12.1.0.2). Instead, a Checksum Fail IOException is raised.
So the problem is that we no longer get any syntax or database integrity errors at all. The problem is the same in SQL GUI editors like DBeaver, SQLDeveloper or SQuirreL.
With driver ojdbc7 12.1.0.1 the correct VM parameter names are as follows:
-Doracle.net.crypto_checksum_client=REQUIRED
-Doracle.net.crypto_checksum_types_client=SHA1
Driver versions 12.1 and earlier have a bug in the SHA-2 functions.
If possible, force the server to handshake with SHA-1:
-Doracle.net.crypto_checksum_client=REQUIRED
-Doracle.net.crypto_checksum_types=SHA1
This is fixed in ojdbc8.jar version 12.2
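If setting JVM-wide -D flags is not an option, the same settings can be passed as per-connection properties, which recent thin-driver versions accept. A sketch with hypothetical credentials and URL:

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class ChecksumConnect {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "scott");      // hypothetical credentials
        props.setProperty("password", "tiger");
        // Same settings as the -D flags above, scoped to this connection
        // rather than the whole JVM.
        props.setProperty("oracle.net.crypto_checksum_client", "REQUIRED");
        props.setProperty("oracle.net.crypto_checksum_types_client", "SHA1");
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//db-host:1521/ORCL", props)) {
            System.out.println("Connected, checksum negotiated with SHA1");
        }
    }
}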
It's a known issue in the Oracle JDBC thin driver. If you can use SSL instead of ASO then this problem will go away.
Our team also experienced the same issue.
We determined that setting the WebLogic connection pool to use either SHA1 or MD5 for checksum encryption resolved the issue (we also had to add the chosen value to the list of approved algorithms in the DB server's sqlnet.ora file, of course).
Attempts to use any checksum value on the client side other than SHA1 or MD5 produced the Checksum Fail error message whenever Oracle attempted to return a 'standard' error, e.g. a constraint violation.
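For reference, a sketch of the corresponding server-side sqlnet.ora entries (the values are illustrative; align them with your security policy):

SQLNET.CRYPTO_CHECKSUM_SERVER = REQUIRED
SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER = (SHA1, MD5)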
If you are inserting a record into the database and see this error, check your insert values and your schema: you might be inserting a null value into an FK reference, or a null into a NOT NULL column.
Oracle won't give you the correct information for this error.
I have a Cassandra process that was created and defined in early Cassandra versions. So far we have used the Hector driver to connect to it. I'm in the process of switching the driver to DataStax to take advantage of the new CQL features and to allow asynchronous access.
I have encountered some problems making that transition. I've read this upgrade guide, which shed some light, though I still encounter some problems.
The biggest one is that I can't access the keyspace with a protocol version greater than 1. When I try the following Python code:
from cassandra.cluster import Cluster

cass4 = Cluster(['MyIp'])
cass4.protocol_version = 2
session = cass4.connect('myKeySpace')
This code yields the following errors and warnings:
ERROR:cassandra.connection:Closing connection <AsyncoreConnection(4849045328) IP:9042> due to protocol error: code=000a [Protocol error] message="Invalid or unsupported protocol version: 2"
WARNING:cassandra.cluster:Downgrading core protocol version from 2 to 1 for IP
WARNING:cassandra.metadata:Building table metadata with no column meta for keyspace
With the Java driver, I simply get a NoHostAvailableException ("All host(s) tried for query failed") if I try to connect with a protocol version greater than 1.
This connection problem is causing me a lot of trouble building an appropriate Java DAO. For example, if I try to do a batch update, e.g.:
BatchStatement batch = new BatchStatement();
batch.add(somePreparedStatement);
cqlSession.executeAsync(batch);
I get the following error:
com.datastax.driver.core.exceptions.UnsupportedFeatureException: Unsupported feature with the native protocol version 1 (which is currently in use): Protocol level batching is not supported
Running a "BEGIN BATCH.." operation directly on a cluster node using cqlsh works, So I know this CQL command can be executed, but I dont know how to prepare it in Java and execute it with protocol version 1. Also, the cassandra and CQL version an the cluster seems appropriate:
[cqlsh 3.1.7 | Cassandra 1.2.13.2 | CQL spec 3.0.0 | Thrift protocol 19.36.2]
So, questions are:
Why is this happening?
Can I connect to that keyspace with a protocol version greater than 1?
If not, can I somehow bypass this batch update problem?
The answer for this issue was eventually found here:
Can I combine Batches and PreparedStatements?
Starting with Cassandra 2.0 and the corresponding versions of the C#, Java, and Python drivers, PreparedStatements can be used in batch operations (n.b. before that you could still prepare a complete batch operation, but you'd need to know a priori the number of statements that will be included).
Since my Cassandra version is 1.2.xx, I can't use prepared statements in batch updates.
A workaround is to build the batch query as a string (yes, that's dirty) and then execute that string query.
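A minimal sketch of that workaround, assuming a 2.0.x DataStax Java driver (where Cluster and Session are Closeable) and a hypothetical users table:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class StringBatchDemo {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("myKeySpace")) {
            // Protocol v1 rejects BatchStatement objects, but a batch written
            // as a single CQL string runs like any other statement.
            StringBuilder cql = new StringBuilder("BEGIN BATCH ");
            cql.append("UPDATE users SET name = 'a' WHERE id = 1; ");
            cql.append("UPDATE users SET name = 'b' WHERE id = 2; ");
            cql.append("APPLY BATCH;");
            session.execute(cql.toString());
        }
    }
}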
I'm an absolute newbie with Pentaho, and I tried to install the tool.
My problem came when configuring the data source. When I connect and set the parameters for the DB, it connects fine and loads all the tables. Afterwards, when I configure the joins for the tables in step 3, I don't get the columns for them.
The command line shows the following message when I select the tables:
Couldn't close query: resultset or prepared statements
You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'OPTION SQL_SELECT_LIMIT=DEFAULT' at line 1
The installation was on Windows 7 x86 with MySQL 5.6, Java JDK 1.7 and Pentaho 5.1.
The connection to MySQL was made with Connector/ODBC 5.3.
Thank you all!! :D
You can solve it by adding ANSI_QUOTES to sql_mode in MySQL. Also, use the newest JDBC/ODBC connector.
Try "SHOW VARIABLES" in the mysql console (or Workbench) to make sure ANSI_QUOTES is available.
Upgrading the driver in PENTAHO\biserver-ce\tomcat\lib worked for me. It shipped with version 5.1.17, and the current one is 5.1.34 at the moment this answer is written.
You can download it from the Oracle web page.
I successfully ran a V1 migration with a CREATE TABLE DDL. I copied the same statement to the V2 file and ran it, and got the expected validation error message:
Migrating to version 1.0.002
com.googlecode.flyway.core.exception.FlywayException: Error executing statement at line 1: create table people(id number(10) primary key, name varchar2(301))
Caused by java.sql.SQLSyntaxErrorException: ORA-00955: name is already used by an existing object
MigrationException: Migration to version 1.0.002 failed! Please restore backups and roll back database and code
I corrected the V2 file and ran flyway migrate again. It gives back the error message:
Current schema version: 1.0.002
MigrationException: Migration to version 1.0.002 failed! Please restore backups and roll back database and code
I am not at a stage where a database backup is taken; I am simply trying to execute a fixed DDL. I don't currently see a solution short of flyway clean. Why can't Flyway try to execute FAILED versions again (if the checksum has changed)? Or shouldn't there be a flyway rollback command?
I know I could very well modify the code to make it behave that way, but was there any reason why you chose this behavior?
The problem with simply re-executing is that some changes might already have been applied, which would cause the migration to fail again.
There are two solutions to this:
Use a database that supports DDL transactions, such as PostgreSQL, SQL Server or DB2
Perform a manual cleanup of the modified structures and the metadata table before reapplying (a sketch follows)
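A minimal sketch of that manual cleanup for the example above, assuming the metadata table uses the old default name schema_version (adjust names to your configuration):

DELETE FROM schema_version WHERE version = '1.0.002';  -- clear the failed entry
COMMIT;
-- plus DROP any objects the failed migration did manage to create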