I'm a junior CS major working on an MVC project using Spring, and I am quite new to it.
I set up an H2 in-memory database and wrote two SQL scripts: one to populate the database and another to flush it empty.
As far as I know, the practical difference between TRUNCATE TABLE <table name> and DELETE FROM <table name> is that TRUNCATE TABLE is DDL, so it's faster, and it also resets auto-incremented columns. The problem is that H2's TRUNCATE TABLE does not seem to reset auto-incremented columns. Could you point out where I'm messing up?
I'm using spring's SqlGroup annotation to run the scripts.
@SqlGroup({
    @Sql(executionPhase = Sql.ExecutionPhase.BEFORE_TEST_METHOD, scripts = "classpath:populate.sql"),
    @Sql(executionPhase = Sql.ExecutionPhase.AFTER_TEST_METHOD, scripts = "classpath:cleanup.sql")
})
Here's the cleanup.sql that is supposed to truncate every table.
SET FOREIGN_KEY_CHECKS=0;
TRUNCATE TABLE `criteria`;
TRUNCATE TABLE `job_title`;
TRUNCATE TABLE `organization`;
TRUNCATE TABLE `organization_teams`;
TRUNCATE TABLE `organization_users`;
TRUNCATE TABLE `organization_job_titles`;
TRUNCATE TABLE `review`;
TRUNCATE TABLE `review_evaluation`;
TRUNCATE TABLE `team`;
TRUNCATE TABLE `team_members`;
TRUNCATE TABLE `user`;
TRUNCATE TABLE `user_criteria_list`;
SET FOREIGN_KEY_CHECKS=1;
And populate.sql simply inserts a bunch of rows into those tables without violating integrity constraints.
So when I run my test class, only the first method passes. For the rest I get errors like this:
Referential integrity constraint violation: "FK3MAB5XYC980PSHDJ3JJ6XNMMT: PUBLIC.ORGANIZATION_JOB_TITLES FOREIGN KEY(JOB_TITLES_ID) REFERENCES PUBLIC.JOB_TITLE(ID) (1)"; SQL statement:
INSERT INTO organization_job_titles(organization_id, job_titles_id) VALUES(1, 1)
I thought the problem arose from the auto-incremented columns not being reset, so I wrote a different test class:
monAutoIncTest.java
@Sql(executionPhase = Sql.ExecutionPhase.AFTER_TEST_METHOD, scripts = "classpath:truncateTable.sql")
public class monAutoIncTest {

    @Autowired
    OrganizationService organizationService;

    @Test
    public void h2truncate_pls() throws BaseException {
        organizationService.add(new Organization("Doogle"));
        Organization fetched = organizationService.getByName("Doogle");
        Assert.assertEquals(1, fetched.getId());
    }

    @Test
    public void h2truncate_plz() throws BaseException {
        organizationService.add(new Organization("Zoogle"));
        Organization fetched = organizationService.getByName("Zoogle");
        Assert.assertEquals(1, fetched.getId());
    }
}
With truncateTable.sql
SET FOREIGN_KEY_CHECKS=0;
TRUNCATE TABLE `organization`;
SET FOREIGN_KEY_CHECKS=1;
When the tests run, whichever method runs first passes, and the other gives me this:
java.lang.AssertionError:
Expected :1
Actual :2
For now, H2 does not support that feature, but we can see such plans in their roadmap:
TRUNCATE should reset the identity columns as in MySQL and MS SQL Server (and possibly other databases).
Try using this statement in combination with TRUNCATE:
ALTER TABLE [table] ALTER COLUMN [column] RESTART WITH [initial value]
Alternatively, execute this SQL script for each table name:
TRUNCATE TABLE my_table RESTART IDENTITY;
ALTER SEQUENCE my_table_id_seq RESTART WITH 1;
This requires H2 version 1.4.200 or higher.
See https://www.h2database.com/html/commands.html
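Putting this together, a cleanup.sql for H2 1.4.200+ might look like the sketch below. Note that SET REFERENTIAL_INTEGRITY is H2's own equivalent of MySQL's SET FOREIGN_KEY_CHECKS; the table names are taken from the question, and the remaining tables would be truncated the same way:

```sql
-- H2 syntax for disabling FK enforcement during cleanup
SET REFERENTIAL_INTEGRITY FALSE;

-- RESTART IDENTITY resets the auto-increment counters (H2 1.4.200+)
TRUNCATE TABLE organization RESTART IDENTITY;
TRUNCATE TABLE job_title RESTART IDENTITY;
-- ...repeat for the remaining tables from the question...

SET REFERENTIAL_INTEGRITY TRUE;
```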
I have a jOOQ MockConnection/DSL set up for unit tests to be able to do inserts, but in at least one instance of my testing I also have to implement a MockResult for a subsequent select statement.
My question is, why does jOOQ execute a select statement in org.jooq.impl.AbstractDMLQuery#executeReturningGeneratedKeysFetchAdditionalRows -> selectReturning for my insert?
My insert is a simple myRecord.insert(), and the mocked DSL looks something like this:
// Simplified
var connection = new MockConnection(ctx -> {
var sql = ctx.sql();
if (sql.startsWith("insert")) {
return mockResultCount(1); // Impl elsewhere
}
return null;
});
var dsl = DSL.using(connection, SQLDialect.MYSQL);
[...]
var myRecord = new MyRecord();
myRecord.setX(...).setY(...);
dsl.attach(myRecord);
// Why does this require a mocked insert result AND a mocked select result?
myRecord.insert();
And my test fails because jOOQ needs the DSL to return a result for a select on something like SELECT ID_COLUMN, UNIQUE_KEY_COLUMN WHERE UNIQUE_KEY_COLUMN = ?
The only thing I can think of is that this table has a unique key?
Anyone know why a simple record.insert(); requires a select statement to be executed?
This depends on a variety of factors.
First off, the dialect. MySQL, for example, cannot fetch values other than the identity values via JDBC's Statement.getGeneratedKeys(). This is the main reason why there might be an additional query at all.
Then, for example, your Settings.returnAllOnUpdatableRecord configuration might cause this behaviour. If you have turned this on, then a separate query is required in MySQL to fetch the non-identity values.
Or your identity column (in MySQL, the AUTO_INCREMENT column) might not coincide with your primary key, which seems to be the case given your logged SQL statement, where you distinguish between an ID_COLUMN and a UNIQUE_KEY_COLUMN.
The reason for this fetching is that jOOQ assumes that such values may be generated (e.g. by triggers). I guess that in the special case where
- the identity and the primary key do not coincide, and
- the primary key has been supplied fully by the user,
we could attempt not to fetch the possibly generated primary key value, and fetch only the identity value. I've created a feature request for this: https://github.com/jOOQ/jOOQ/issues/9125
I'm trying to execute a CTAS (CREATE TABLE AS SELECT) command on Oracle 11g using Spring's JdbcTemplate.
private void ctasTest(JdbcTemplate jdbcTemplate) {
    String ctas = "CREATE TABLE TARGET_DATA NOLOGGING AS SELECT ID, "
            + "NTILE(10) OVER (ORDER BY ID) AS CONTAINER_COLUMN FROM SOURCE_DATA";
    jdbcTemplate.execute(ctas);
}
When run against a new database, the TARGET_DATA table is created, but with 0 rows, even though the SOURCE_DATA table has 1000 rows.
If I then use SQLDeveloper to drop the empty TARGET_DATA table, and run the same command, it is successful, and the table contains 1000 rows.
I can then drop the table and re-run my Java code and it will succeed and the TARGET_DATA will contain 1000 rows.
Is SQLDeveloper providing something in the background that I need to include in my Java code? I've tried the same thing in plain JDBC, and on Oracle 12c, and get the same results.
Try this as your statement:
String ctas = "CREATE TABLE TARGET_DATA NOLOGGING AS (SELECT ID, "
        + "NTILE(10) OVER (ORDER BY ID) AS CONTAINER_COLUMN FROM SOURCE_DATA)";
Notice that the select portion is enclosed in parentheses.
It turns out I hadn't committed my test data. It was therefore still visible in the SQLDeveloper session, but not visible to the Java application.
I have a table in Oracle with a primary key and an auto-increment attribute, but sometimes I have a preset ID value for the record.
So when I try to insert the record, I get an exception saying I'm trying to insert into a table with an auto-increment field.
In SQLDeveloper I worked around it by disabling the triggers, inserting the values, and then re-enabling the triggers, which worked perfectly:
ALTER TABLE TABLE_NAME DISABLE ALL TRIGGERS;
INSERT INTO TABLE_NAME SELECT * FROM ARCHIVE_TABLE_NAME WHERE TABLE_NAME_COLUMN >= '27-JUN-16 10.35.12.945000000';
ALTER TABLE TABLE_NAME ENABLE ALL TRIGGERS;
But I would like to do this programmatically through Hibernate.
So I have the following questions:
1) Is there any other way of inserting records into a table with an auto-increment field?
2) If not, how do I execute the above 3 statements in Hibernate?
You can insert records with an auto-increment key by using the following JPA annotations on your auto-incremented field:
@GeneratedValue(generator = "InvSeq")
@SequenceGenerator(name = "InvSeq", sequenceName = "INV_SEQ", allocationSize = 5)
private long autoIncId;
See this link: http://www.oracle.com/technetwork/middleware/ias/id-generation-083058.html
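For the annotations above to work, the sequence referenced by sequenceName must actually exist in the Oracle schema (and the ID-assigning trigger from the question should be removed or disabled, so it doesn't override Hibernate's values). A minimal, assumed DDL sketch, where INV_SEQ and the increment of 5 mirror the annotation's sequenceName and allocationSize:

```sql
-- Hypothetical DDL: INCREMENT BY should match allocationSize
-- so Hibernate's pre-allocated ID blocks do not collide.
CREATE SEQUENCE INV_SEQ START WITH 1 INCREMENT BY 5;
```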
I am trying to create a Vertica table with JOOQ 3.5.x:
Connection connection = create();
DSLContext dslContext = DSL.using(connection);
Field<String> myColumn = DSL.field("my_column", SQLDataType.VARCHAR);
Table table = DSL.tableByName("my_schema", "my_table");
dslContext.createTable(table)
.column(myColumn, myColumn.getDataType())
.execute();
This fails with the error: Schema "my_schema" does not exist.
I can solve it with:
dslContext.execute("create schema if not exists my_schema");
But is there a more elegant way to create a schema with JOOQ?
Currently jOOQ covers just a subset of the possible DDL statements that can be executed against a server, and schema management is not yet included, so you have to drop back to plain old SQL.
If you need to do a lot of DDL work, you should look at the latest version, 3.8, as it has extended the capabilities to include:
DEFAULT column values in CREATE TABLE or ALTER TABLE statements
IF EXISTS in DROP statements
IF NOT EXISTS in CREATE statements
ALTER TABLE .. { RENAME | RENAME COLUMN | RENAME CONSTRAINT } statements
Version 3.6 added
ALTER TABLE ADD CONSTRAINT (with UNIQUE, PRIMARY KEY, FOREIGN KEY, CHECK)
ALTER TABLE DROP CONSTRAINT
CREATE TEMPORARY TABLE
Hi, I have a table in Postgres, say email_messages. It is partitioned, so whatever inserts I do using my Java application do not affect the master table, as the data actually gets inserted into the child tables. Here is the problem: I want to get an auto-generated column value (say email_message_id, which is of bigserial type). Postgres returns it as null, since no insert is done on the master table. For Oracle I used GeneratedKeyHolder to get that value, but I'm unable to do the same for a partitioned table in Postgres. Please help me out.
Here is the code snippet we used for oracle
public void createAndFetchPKImpl(final Entity pEntity,
final String pStatementId, final String pPKColumnName) {
final SqlParameterSource parameterSource =
new BeanPropertySqlParameterSource(pEntity);
final String[] columnNames = new String[]{"email_message_id"};
final KeyHolder keyHolder = new GeneratedKeyHolder();
final int numberOfRowsEffected = mNamedParameterJdbcTemplate.update(
getStatement(pStatementId), parameterSource, keyHolder, columnNames);
pEntity.setId(ConversionUtil.getLongValue(keyHolder.getKey()));
}
When you use trigger-based partitioning in PostgreSQL, it is normal for the JDBC driver to report that zero rows were affected/inserted. This is because the original SQL UPDATE, INSERT or DELETE didn't actually take effect: no rows were changed on the main table. Instead, the operations were performed on one or more sub-tables.
Basically, partitioning in PostgreSQL is a bit of a hack and this is one of the more visible limitations of it.
The workarounds are:
INSERT/UPDATE/DELETE directly against the sub-table(s), rather than the top-level table;
Ignore the result rowcount; or
Use RULEs and INSERT ... RETURNING instead (but they have their own problems, and this won't work for partitions).
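As a sketch of the first workaround, with hypothetical child-table and column names based on the question: inserting directly into the child partition lets INSERT ... RETURNING hand back the generated key, since the child tables draw from the same bigserial sequence as the parent:

```sql
-- Hypothetical June-2016 partition of email_messages;
-- email_message_id comes from the sequence owned by the parent table.
INSERT INTO email_messages_2016_06 (subject, body)
VALUES ('test subject', 'test body')
RETURNING email_message_id;
```

With Spring, pointing the INSERT statement at the child table should also make the GeneratedKeyHolder code from the question work, because the PostgreSQL JDBC driver implements generated-key retrieval via RETURNING on the table actually being inserted into.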