Flyway 4.2.0 Multiple Nodes In Parallel Failure with Oracle 11g - java

I have multiple application servers configured to run flyway at startup. Each server attempts to apply the same set of migrations across multiple schemas in the same Oracle 11g database. These servers are started at the same time. This works most of the time. On occasion, however, a server fails during migration because it encounters a unique constraint violation.
Unable to insert row for version '0' in metadata table "FOO"."SCHEMA_VERSION"
SQL State : 23000
Error Code : 1
Message : ORA-00001: unique constraint (FOO.SCHEMA_VERSION_pk) violated
at org.flywaydb.core.internal.metadatatable.MetaDataTableImpl.addAppliedMigration(MetaDataTableImpl.java:242)
at org.flywaydb.core.internal.metadatatable.MetaDataTableImpl.addBaselineMarker(MetaDataTableImpl.java:334)
at org.flywaydb.core.internal.command.DbBaseline$2.call(DbBaseline.java:135)
at org.flywaydb.core.internal.command.DbBaseline$2.call(DbBaseline.java:112)
at org.flywaydb.core.internal.util.jdbc.TransactionTemplate.execute(TransactionTemplate.java:75)
at org.flywaydb.core.internal.command.DbBaseline.baseline(DbBaseline.java:112)
at org.flywaydb.core.Flyway$1.execute(Flyway.java:990)
at org.flywaydb.core.Flyway$1.execute(Flyway.java:971)
at org.flywaydb.core.Flyway.execute(Flyway.java:1464)
at org.flywaydb.core.Flyway.migrate(Flyway.java:971)
...
I thought that flyway would be able to handle this situation based on the following:
https://flywaydb.org/documentation/faq#parallel
Shouldn't a flyway instance detect that the schema version table is locked and move on to the next schema?
Is there a setting that can ensure the schema version table is locked, or is this a bug?
The OracleTable class locks the table in exclusive mode. Should it add the NOWAIT clause and handle any resulting Oracle exception?

It should work. We test this with every build, and the behavior without NOWAIT is the one we want (block until the lock is released). If you can reliably reproduce this or see a clear mistake in our code, then by all means please file a bug with the necessary details in the issue tracker.
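For illustration, here is a minimal JDBC sketch of the two locking strategies being discussed (this shows general Oracle behavior, not Flyway's actual OracleTable code; the connection details are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class MetadataLockSketch {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "FOO", "secret")) {
            conn.setAutoCommit(false);
            try (Statement stmt = conn.createStatement()) {
                // What the answer describes: take an exclusive lock that BLOCKS
                // until the other node's transaction commits or rolls back.
                stmt.execute("LOCK TABLE \"FOO\".\"SCHEMA_VERSION\" IN EXCLUSIVE MODE");

                // The NOWAIT variant from the question would instead fail
                // immediately with ORA-00054 if another session holds the lock,
                // and the caller would have to catch that and skip or retry.
                // stmt.execute("LOCK TABLE \"FOO\".\"SCHEMA_VERSION\" IN EXCLUSIVE MODE NOWAIT");

                // ... apply the migration / insert the metadata row here ...
            }
            conn.commit(); // committing releases the lock for the next node
        }
    }
}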

Related

jooq 3.11.9 with MySQL Ver 8.0.11

I am using jOOQ version 3.11.9 and have MySQL 8.0.11 installed locally. While initiating the jOOQ connection to MySQL I get the following error:
org.jooq.exception.DataAccessException: SQL [select 1 as `one` from dual where exists (select 1 as `one` from `mysql`.`proc`)]; Table 'mysql.proc' doesn't exist
I understand MySQL 8.0.11 no longer contains this table. So what is the solution? I cannot downgrade MySQL, as other projects are already running on this version.
As you can see in the MySQL release notes:
Previously, information about stored routines and events was stored in the proc and event tables of the mysql system database. Those tables are no longer used. Instead, information about stored routines and events is stored in the routines, events, and parameters data dictionary tables in the mysql system database. The old tables used the MyISAM (nontransactional) storage engine. The new tables use the InnoDB (transactional) engine.
That query is there precisely to check whether you're running on MySQL 8+. It should not cause an error or even a stack trace (but maybe a debug message). You can safely ignore it.
If you found an error or stack trace message, or if this causes your code generation to fail, it might be a bug in jOOQ's logging configuration, which I would invite you to file here: https://github.com/jOOQ/jOOQ/issues/new
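The probe is just a harmless existence check. A rough sketch of the pattern (not jOOQ's actual implementation, only an illustration of how such a capability probe works):

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class MySqlProcProbe {

    // Returns true if the legacy mysql.proc table still exists, i.e. the
    // server is older than MySQL 8. On MySQL 8+ the query fails, and that
    // failure itself is the answer - it can be logged at debug level and ignored.
    static boolean hasLegacyProcTable(Connection conn) {
        try (Statement stmt = conn.createStatement()) {
            stmt.executeQuery(
                "select 1 as `one` from dual "
                + "where exists (select 1 as `one` from `mysql`.`proc`)");
            return true;
        } catch (SQLException expectedOnMySql8) {
            return false;
        }
    }
}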

Error while mysql table gets updated from java

I am trying to update one particular record in a MySQL table from Java code. The update function does a lot of different work, including the update operation, within the same transaction.
But the same table row cannot be updated from outside (MySQL Workbench) or from another transaction until the updating transaction commits, and the attempt generates the following error:
Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLTransactionRollbackException: Lock wait timeout exceeded; try restarting transaction
Is there any workaround to fix this issue? Please suggest.
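For context, this is the standard InnoDB row-lock wait: while one transaction holds the row lock, any other session's UPDATE of that row waits up to innodb_lock_wait_timeout seconds (50 by default) and then fails with exactly this error. The usual mitigations are to keep the updating transaction short (commit sooner) or to raise innodb_lock_wait_timeout. A minimal sketch of the scenario (table, column, and connection details are made up):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class LockWaitSketch {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:mysql://localhost:3306/test"; // hypothetical database

        try (Connection tx1 = DriverManager.getConnection(url, "user", "pass");
             Connection tx2 = DriverManager.getConnection(url, "user", "pass")) {

            tx1.setAutoCommit(false);
            try (Statement s1 = tx1.createStatement()) {
                // tx1 locks the row and then keeps doing "lots of different activities"
                s1.executeUpdate("UPDATE person SET name = 'a' WHERE id = 1");
            }

            try (Statement s2 = tx2.createStatement()) {
                // A second session (e.g. MySQL Workbench) updating the same row now
                // blocks, and after innodb_lock_wait_timeout seconds fails with
                // "Lock wait timeout exceeded; try restarting transaction".
                s2.executeUpdate("UPDATE person SET name = 'b' WHERE id = 1");
            }

            tx1.commit(); // committing the long transaction releases the row lock
        }
    }
}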

Why do I have "Checksum fail" on every bad SQL request in Oracle when Native Encryption (ASO) is enabled?

We recently configured our Oracle database to use so-called native encryption (Oracle Advanced Security Option).
As development goes on, some SQL queries are occasionally badly written, so an error should be returned by the JDBC driver (ojdbc7 v12.1.0.2). Instead, a Checksum Fail IOException is raised.
So the problem is that we no longer get any syntax or database integrity errors at all. The behavior is the same in SQL GUI editors such as DBeaver, SQL Developer, or SQuirreL.
With driver ojdbc7 12.1.0.1 the correct VM parameter names are as follows:
-Doracle.net.crypto_checksum_client=REQUIRED
-Doracle.net.crypto_checksum_types_client=SHA1
Driver versions 12.1 and earlier have a bug in the SHA-2 checksum functions.
If possible, force the server to handshake with SHA-1:
-Doracle.net.crypto_checksum_client=REQUIRED
-Doracle.net.crypto_checksum_types_client=SHA1
This is fixed in ojdbc8.jar version 12.2.
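If you cannot pass -D flags to the JVM, the same settings can be applied as system properties before the first connection is opened. A sketch (the JDBC URL and credentials are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class AsoChecksumWorkaround {
    public static void main(String[] args) throws SQLException {
        // Programmatic equivalent of the -D flags above; must run before the
        // Oracle thin driver opens its first connection.
        System.setProperty("oracle.net.crypto_checksum_client", "REQUIRED");
        System.setProperty("oracle.net.crypto_checksum_types_client", "SHA1");

        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger")) {
            // Bad SQL should now surface the real ORA- error instead of a
            // "Checksum Fail" IOException.
        }
    }
}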
It's a known issue in the Oracle JDBC thin driver. If you can use SSL instead of ASO then this problem will go away.
Our team is also experiencing the same issue.
We determined that setting the WebLogic connection pool to use either SHA1 or MD5 for checksum encryption resolved the issue (we also had to add the chosen value to the list of approved algorithms in the DB server's sqlnet.ora file, of course).
Attempts to use any checksum value on the client side other than SHA1 or MD5 produced the Checksum Fail error message whenever Oracle attempted to return a 'standard' error, i.e. a constraint violation.
If you are inserting a record into the database and see this error, check your insert values and schema: you might be inserting a null value into an FK reference, or a null into a NOT NULL column.
Oracle won't report the real cause of the error in this situation.

Batch update fails when connecting to an "old" column family with DataStax driver

I have a Cassandra process that was created and defined in early Cassandra versions. So far, we have used the Hector driver to connect to it. I'm in the process of switching the driver to DataStax to take advantage of the new CQL features and to allow asynchronous access.
I'm running into some problems making that transition. I've read this upgrade guide, which shed some light, though I still hit issues.
The biggest one is that I can't access the keyspace with a protocol version greater than 1. When I try the following Python code:
cass4 = Cluster(['MyIp'])
cass4.protocol_version = 2
session = cass4.connect('myKeySpace')
This code yields the following errors and warnings:
ERROR:cassandra.connection:Closing connection <AsyncoreConnection(4849045328) IP:9042> due to protocol error: code=000a [Protocol error] message="Invalid or unsupported protocol version: 2"
WARNING:cassandra.cluster:Downgrading core protocol version from 2 to 1 for IP
WARNING:cassandra.metadata:Building table metadata with no column meta for keyspace
With the Java driver, I simply get a NoHostAvailableException: All host(s) tried for query failed connection error if I try to connect with a protocol version greater than 1.
This connection problem is causing me a lot of trouble building an appropriate Java DAO. For example, if I try to do a batch update:
BatchStatement batch = new BatchStatement();
batch.add(somePreparedStatement);
cqlSession.executeAsync(batch);
I get the following error:
com.datastax.driver.core.exceptions.UnsupportedFeatureException: Unsupported feature with the native protocol version 1 (which is currently in use): Protocol level batching is not supported
Running a "BEGIN BATCH.." operation directly on a cluster node using cqlsh works, So I know this CQL command can be executed, but I dont know how to prepare it in Java and execute it with protocol version 1. Also, the cassandra and CQL version an the cluster seems appropriate:
[cqlsh 3.1.7 | Cassandra 1.2.13.2 | CQL spec 3.0.0 | Thrift protocol 19.36.2]
So, questions are:
Why is this happening?
Can I connect to that keyspace with a protocol version greater than 1?
If not, can I somehow bypass this batch update problem?
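For the connection error itself: Cassandra 1.2 only speaks native protocol v1, so the Java driver has to be told the version explicitly (the Python driver downgrades on its own, as the warning shows). A sketch with the DataStax Java driver 2.1.x, reusing the placeholder contact point and keyspace from above:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ProtocolVersion;
import com.datastax.driver.core.Session;

public class ProtocolV1Connect {
    public static void main(String[] args) {
        // Pin the protocol version up front instead of letting negotiation fail
        // with NoHostAvailableException against a Cassandra 1.2 node.
        try (Cluster cluster = Cluster.builder()
                .addContactPoint("MyIp")                  // placeholder contact point
                .withProtocolVersion(ProtocolVersion.V1)  // Cassandra 1.2 = protocol v1
                .build()) {
            Session session = cluster.connect("myKeySpace");
            // ... use the session; note that protocol v1 has no protocol-level batching
        }
    }
}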
The answer for this issue was eventually found here:
Can I combine Batches and PreparedStatements?
Starting with Cassandra 2.0 and the corresponding versions of the C#,
Java, and Python drivers, PreparedStatements can be used in batch
operations (nb before that you could still prepare a complete batch
operation, but you’d need to know apriori the number of statements
that will be included).
Since my Cassandra version is 1.2.x, I can't use prepared statements in batch updates.
A workaround is to build the query as a string (yes, that's dirty) and then execute the string query.
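For example, something along these lines with the DataStax 2.x Java driver (the table and column names here are invented for illustration):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class StringBatchWorkaround {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("MyIp").build()) {
            Session session = cluster.connect("myKeySpace");

            // Build the whole batch as one CQL string, since protocol v1 cannot
            // batch prepared statements. Values are inlined for brevity; in real
            // code they must be escaped or validated carefully.
            StringBuilder cql = new StringBuilder("BEGIN BATCH ");
            cql.append("UPDATE users SET name = 'alice' WHERE id = 1; ");
            cql.append("UPDATE users SET name = 'bob' WHERE id = 2; ");
            cql.append("APPLY BATCH;");

            session.execute(cql.toString());
        }
    }
}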

flyway commandline tool - what option to re-execute failed DDL?

I successfully ran a v1 migration with a CREATE TABLE DDL. I copied the same DDL to a v2 file and ran it, and got the expected error message:
Migrating to version 1.0.002
com.googlecode.flyway.core.exception.FlywayException: Error executing statement
at line 1: create table people(id number(10) primary key, name varchar2(301))
Caused by java.sql.SQLSyntaxErrorException: ORA-00955: name is already used by a
n existing object
MigrationException: Migration to version 1.0.002 failed! Please restore backups
and roll back database and code
I corrected the v2 file and ran flyway migrate again. It gave back the error message:
Current schema version: 1.0.002
MigrationException: Migration to version 1.0.002 failed! Please restore backups
and roll back database and code
I am not at a stage where database backups are taken - I'm simply trying to execute a fixed DDL. I don't currently see a solution short of flyway clean. Why can't Flyway try to execute FAILED versions again (if the checksum has changed)? Or shouldn't there be a flyway rollback command?
I know I could very well modify the code to make it behave that way, but was there any reason why you chose this behavior?
The problem with simply re-executing is that some changes might already have been applied, which would cause the migration to fail again.
There are two solutions to this:
Use a database that supports DDL transactions, such as PostgreSQL, SQL Server, or DB2
Perform a manual cleanup of the modified structures and the metadata table before reapplying (see the sketch below)
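For this concrete example, the second option boils down to something like the following (a sketch that assumes the default metadata table name schema_version; adjust table names, credentials, and the failed version to your setup):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class ManualCleanupSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details; connect as the schema Flyway migrates.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "app_user", "secret")) {
            conn.setAutoCommit(false);
            try (Statement stmt = conn.createStatement()) {
                // 1. Undo anything the failed v2 script managed to apply. Here the
                //    CREATE TABLE failed outright (ORA-00955), so there is nothing
                //    to clean up in the schema itself.

                // 2. Remove the failed entry from the metadata table so Flyway will
                //    run the corrected 1.0.002 script on the next migrate. The table
                //    name is an assumption; check what your Flyway version uses.
                stmt.executeUpdate("delete from schema_version where version = '1.0.002'");
            }
            conn.commit();
        }
    }
}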
