In my project, we use HSQLDB for running unit tests and Oracle in production. Liquibase is used to run queries on both environments. I have an issue creating a table with the LONGVARCHAR data type. I am already using this statement to enable Oracle syntax in HSQLDB:
SET DATABASE SQL SYNTAX ORA TRUE
When I create the table in HSQLDB, this query works:
CREATE TABLE A (DATA LONGVARCHAR);
And when I create the table in Oracle, the following works:
CREATE TABLE A (DATA LONG VARCHAR);
How can I write a single query that works on both database servers?
Use a CLOB
CREATE TABLE A (DATA CLOB);
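For reference, here is a minimal JDBC sketch showing the CLOB column working in HSQLDB with Oracle syntax compatibility; the in-memory URL and the sql.syntax_ora connection property are assumptions about your test setup:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class ClobCheck {
    public static void main(String[] args) throws Exception {
        // Oracle syntax compatibility enabled via a connection property,
        // equivalent to SET DATABASE SQL SYNTAX ORA TRUE
        try (Connection c = DriverManager.getConnection(
                "jdbc:hsqldb:mem:test;sql.syntax_ora=true", "SA", "");
             Statement s = c.createStatement()) {
            s.execute("CREATE TABLE A (DATA CLOB)");
            s.execute("INSERT INTO A VALUES ('some long text')");
        }
    }
}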
In Talend Data Quality, I have configured a JDBC connection to an OpenEdge database and it's working fine.
I can pull the list of tables and select columns to analyse, but when executing an analysis, I get this:
Table "DBGSS.SGSSGSS" cannot be found.
This is because it does not specify a schema, only the database name - DBGSS.
How can I make it specify the database, the schema and then the table name? Or just the table name; that would work too.
Thanks!
You can use a tDBConnection component, which lets you specify a schema.
Then use it with the Use existing connection option.
See the documentation: https://help.talend.com/r/en-US/7.3/db-generic/tdbconnection
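If you want to verify the schema-qualified access outside Talend, here is a minimal JDBC sketch; the DataDirect URL format and the PUB schema (OpenEdge's default) are assumptions, so adjust them to your environment:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SchemaCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical OpenEdge JDBC URL; adjust host, port and credentials
        try (Connection c = DriverManager.getConnection(
                "jdbc:datadirect:openedge://localhost:5566;databaseName=DBGSS",
                "user", "password");
             Statement s = c.createStatement();
             // Qualify the table with its schema, not the database name
             ResultSet rs = s.executeQuery("SELECT COUNT(*) FROM PUB.SGSSGSS")) {
            while (rs.next()) {
                System.out.println(rs.getLong(1));
            }
        }
    }
}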
We are facing a serious problem with H2 database, version 1.4.199 - server mode. The application data layer creates a table programmatically if not exists, for example:
CREATE TABLE IF NOT EXISTS mytable (...);
CREATE INDEX IF NOT EXISTS idx_mytable ON mytable(mycol);
and it works fine for days, writing data into the above table. After restarting the service, at the first connection attempt, the engine throws the error
org.h2.jdbc.JdbcSQLSyntaxErrorException: Table "mytable" not found; SQL statement:
CREATE INDEX "PUBLIC"."IDX_MYTABLE" ON "PUBLIC"."MYTABLE"("MYCOL")
If we try recovering the database, the SQL script no longer contains "mytable", so the data is definitively lost! We have hundreds of installations of the software, but the error happens only occasionally, on about 10% of them.
Please share the H2 properties you used.
"spring.jpa.hibernate.ddl-auto" should be set to "update" (i.e. spring.jpa.hibernate.ddl-auto=update in your Spring configuration).
I have an application that I need to backport to MySQL 5.6.
This application uses rather large composite keys, which work fine on MySQL 5.7 because innodb-large-prefix is enabled by default.
I can configure MySQL 5.6 to use innodb-large-prefix, but it also requires creating tables with ROW_FORMAT=DYNAMIC or COMPRESSED.
Here is the SQL I would like to produce using jOOQ:
CREATE TABLE `domain` (
`path` varchar(300) NOT NULL,
UNIQUE KEY `index1` (`path`)
) ROW_FORMAT=DYNAMIC;
Here are the relevant MySQL 5.6 documentation pages for reference:
https://dev.mysql.com/doc/refman/5.6/en/innodb-restrictions.html
https://dev.mysql.com/doc/refman/5.6/en/innodb-row-format.html
https://dev.mysql.com/doc/refman/5.6/en/innodb-parameters.html#sysvar_innodb_large_prefix
You can add custom storage clauses to CREATE TABLE statements by using the CreateTableStorageStep.storage() method. E.g.
// Static imports assumed:
// import static org.jooq.impl.DSL.constraint;
// import static org.jooq.impl.SQLDataType.VARCHAR;
ctx.createTable("domain")
   .column("path", VARCHAR(300).nullable(false))
   .constraint(constraint("index1").unique("path"))
   .storage("ROW_FORMAT=DYNAMIC")
   .execute();
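With the MySQL dialect, this should render approximately the CREATE TABLE statement shown above; the string passed to storage() is appended verbatim at the end of the generated DDL.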
I need some sort of persistence component to store an id (long) and a value (object) for my Java application.
All the caching systems I looked at were either not persistent enough (if the process died, the cache would erase itself) or too slow.
I tried embedded databases like Derby and HSQLDB, but they were not as fast as H2 for SELECT and INSERT.
For some reason, the UPDATE query takes 1-2 seconds for a single row when I update a row with a BLOB.
Does anyone know why it is this slow?
Queries:
CREATE TABLE ENTITIES(ID BIGINT PRIMARY KEY, DATA BLOB)
INSERT INTO ENTITIES(DATA, ID) VALUES(?, ?)
UPDATE ENTITIES SET DATA = ? WHERE ID = ?
I am using JDBC with PreparedStatement
Edit:
The connection string is:
jdbc:h2:C:\temp\h2db;FILE_LOCK=NO;
I tried adding CACHE_SIZE=102400 and PAGE_SIZE=209715200, but it didn't help.
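For context, here is a minimal sketch of the update path described above; the payload size and the streaming approach via setBinaryStream are assumptions, not necessarily what the original code does:

import java.io.ByteArrayInputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BlobUpdateTest {
    public static void main(String[] args) throws Exception {
        byte[] data = new byte[1024 * 1024]; // 1 MB payload for illustration
        try (Connection c = DriverManager.getConnection(
                "jdbc:h2:C:\\temp\\h2db;FILE_LOCK=NO", "sa", "");
             PreparedStatement ps = c.prepareStatement(
                 "UPDATE ENTITIES SET DATA = ? WHERE ID = ?")) {
            // Stream the BLOB instead of binding a byte[] so the whole
            // value does not have to be materialized in memory at once
            ps.setBinaryStream(1, new ByteArrayInputStream(data), data.length);
            ps.setLong(2, 1L);
            long start = System.currentTimeMillis();
            ps.executeUpdate();
            System.out.println("UPDATE took " + (System.currentTimeMillis() - start) + " ms");
        }
    }
}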
I recently rewrote a Java EE web application (running on a MySQL database) in Rails 3.1. The problem now is that the database model of the new application is not the same as the old one, because I added, removed and renamed some attributes. The database table names are also different.
Is there a way of migrating this data? The only way I can imagine doing it is writing a stored procedure with many ALTER TABLE and CREATE TABLE statements to bring the database to the new model.
Thanks in advance.
Solution:
I finally used INSERT ... SELECT statements in a MySQL stored procedure to migrate the data: INSERT INTO new_schema.new_table SELECT ... FROM old_schema.old_table. I am now considering writing a Rake task to call that procedure and do the remaining work.
The only way is to write a script that takes the data from the old DB and inserts it into the new DB. Or you can connect to the two databases in some way and then run some SELECT and INSERT queries, something like
insert into new_db.table select field1, ... from old_db.table
or
insert into new_db.table (field_1, field_2) select field_1, ... from old_db.table
Either way, it is a manual process, even if it can be automated to some extent with a script.
Instead of a stored procedure, you can try this with Rails and some SQL in the Rails console, using MySQL's information_schema:
sql = ActiveRecord::Base.connection
old_tables = sql.execute "select table_name from information_schema.tables where table_schema = 'your_old_schema'"
old_tables.each do |old_table|
  old_fields = sql.execute "select distinct column_name, data_type from information_schema.columns where table_name = '#{old_table}' and table_schema = 'your_old_schema'"
  new_fields = sql.execute "select distinct column_name, data_type from information_schema.columns where table_name = '#{old_table}' and table_schema = 'your_new_schema'"
  # compare fields and so on...
end
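From there, a natural next step is to generate one INSERT ... SELECT statement per table from the matching column pairs; the mapping of renamed attributes is application-specific, so that part has to be filled in by hand.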