I am trying to use jOOQ to do an INSERT into a PostgreSQL database. The query fails with SQL state 42601 (SYNTAX ERROR) if the String includes a backslash character.
jOOQ: 3.4.4
PostgreSQL JDBC driver: 8.4-702.jdbc4
PostgreSQL: 8.4.20 on x86_64-redhat-linux-gnu, compiled by GCC gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-4), 64-bit
JDK 1.8.0_25
Spring Tool Suite 3.6.0.RELEASE
Database:
CREATE TABLE datahub.test (
body TEXT NOT NULL
);
jOOQ code generated using Maven:
jooq-codegen-maven version 3.4.4
generator.name: org.jooq.util.DefaultGenerator
generator.database.name: org.jooq.util.postgres.PostgresDatabase
Unit test
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = {"/spring-config.xml"})
public class BatchExceptionJooqTest {
private static Logger log = LogManager.getLogger(BatchExceptionJooqTest.class);
@Autowired
private DSLContext db;
@Test
public void runBasicJooqTest(){
try{
final List<InsertQuery<TestRecord>> batchUpdate = Lists.newLinkedList();
InsertQuery<TestRecord> insertQuery = db.insertQuery(TEST);
insertQuery.addValue(TEST.BODY, "It's a bit more complicated than just doing copy and paste... :\\");
batchUpdate.add(insertQuery);
db.batch(batchUpdate).execute();
}catch(Exception e){
log.error(e);
}
}
}
Problem
The test fails with an exception:
2014-12-26 17:11:16,490 [main] ERROR BatchExceptionJooqTest:36 :runBasicJooqTest - org.jooq.exception.DataAccessException: SQL [null]; Batch entry 0 insert into "datahub"."test" ("body") values ('It''s a bit more complicated than just doing copy and paste... :\') was aborted. Call getNextException to see the cause.
The test passes if, instead of the String "It's a bit more complicated than just doing copy and paste... :\\", I use the String "It's a bit more complicated than just doing copy and paste... :\\\\". This seems a bit inconsistent when compared to what happens to the single quote during the operation: it is correctly doubled so as to get through the SQL parser, but the backslash is not.
I read somewhere that escaping a backslash with another backslash is not part of the SQL standard and that PostgreSQL changed its default behavior in this area relatively recently. However, I am not clear on the meaning of section 4.1.2.2 of the manual; it seems to indicate that doubled backslashes should work, and there is no obvious reason for jOOQ not to produce them.
So, could someone please explain whether the described situation in jOOQ:
Is the desired behavior, and there is no workaround besides doubling all incoming backslashes my application is processing?
Is the desired behavior, but there is a configuration change I can make so that jOOQ processes backslashes in a similar manner to single quotes?
Is a bug?
Or whether I am simply doing something incorrectly?
Thank you
You are using PostgreSQL 8.x. In that version, the system defaulted to accepting backslash-escaped string literals even without the preceding E.
To avoid this, you should set the server configuration variable standard_conforming_strings to ON.
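If changing the server configuration file is not an option, the same setting can also be applied per session. Below is a minimal JDBC sketch under that assumption; the connection details are hypothetical placeholders for the database from the question.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class StandardStringsDemo {
    public static void main(String[] args) throws Exception {
        // Connection details are placeholders for illustration only
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/datahub", "user", "secret");
             Statement st = conn.createStatement()) {
            // Ask this session to treat backslashes in string literals as ordinary
            // characters, i.e. SQL-standard behaviour (the default from PostgreSQL 9.1 on)
            st.execute("SET standard_conforming_strings = on");
            st.execute("INSERT INTO datahub.test (body) VALUES "
                    + "('It''s a bit more complicated than just doing copy and paste... :\\')");
        }
    }
}
Setting it in postgresql.conf (or per database, e.g. with ALTER DATABASE ... SET) is the more permanent fix; the session-level SET above is only a per-connection workaround.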
It is, of course, strongly recommended that you migrate to a version of PostgreSQL higher than 8.x, as the 8.x versions have reached end-of-life and are no longer supported.
jOOQ 3.5 has introduced org.jooq.conf.Settings.backslashEscaping (https://github.com/jOOQ/jOOQ/issues/3000). This was mainly introduced for MySQL, which still today defaults to non-standards compliant string literal escaping using backslashes.
Note that this setting affects only inlined bind values, so it will not escape backslashes when binding values to a PreparedStatement.
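For completeness, here is a sketch of how that setting might be applied when constructing the DSLContext, assuming jOOQ 3.5+ and that bind values are inlined via StatementType.STATIC_STATEMENT; "connection" is an existing java.sql.Connection supplied by the caller.
import java.sql.Connection;

import org.jooq.DSLContext;
import org.jooq.SQLDialect;
import org.jooq.conf.BackslashEscaping;
import org.jooq.conf.Settings;
import org.jooq.conf.StatementType;
import org.jooq.impl.DSL;

public class BackslashEscapingDemo {
    static DSLContext createContext(Connection connection) {
        Settings settings = new Settings()
            // Inline bind values so that the backslashEscaping setting actually applies
            .withStatementType(StatementType.STATIC_STATEMENT)
            // Tell jOOQ to escape backslashes in inlined string literals
            .withBackslashEscaping(BackslashEscaping.ON);
        return DSL.using(connection, SQLDialect.POSTGRES, settings);
    }
}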
I agree with RealSkeptic's answer, which suggests you change the database behaviour or upgrade to a newer PostgreSQL version.
I've tried
CacheConfiguration<?, ?> cacheCfg = new CacheConfiguration<>(cacheTemplateName).setSqlSchema("PUBLIC"); //create table can only be executed on public schema
cacheCfg.setSqlEscapeAll(false); //otherwise ignite tries to quote after we've quoted and there are cases we have to quote before ignite gets it
cacheCfg.setCacheMode(CacheMode.PARTITIONED);
ignite.addCacheConfiguration(cacheCfg); //required to register cacheTemplateName as a template, see WITH section of https://apacheignite-sql.readme.io/docs/create-table
Unfortunately nothing I try seems to work.
I've debugged through and isSqlEscapeAll() always returned true.
FYI in the CREATE TABLE statement I've set TEMPLATE=MyTPLName.
Is it possible to disable this behaviour? My queries are already appropriately quoted.
This flag doesn't work for dynamic caches because it could cause some ambiguity with table names, as described in this thread on the Ignite dev list: http://apache-ignite-developers.2346864.n4.nabble.com/Remove-deprecate-CacheConfiguration-sqlEscapeAll-property-td17966.html
By the way, what is the problem you want to solve using this flag?
What collation is available to H2 Database that does not ignore spaces but at the same time recognizes characters with umlauts and without as the same?
For example, it should treat "Ilkka Seppälä" and "Ilkka Seppala" as the same. It also needs to treat "MSaifAsif" and "M Saif Asif" as different (because of the spaces).
I found the answer to my question. To get my desired outcome to work, I had to do two things:
Add icu4j as a dependency to the project, which makes H2 use the ICU4J collator:
testCompile 'com.ibm.icu:icu4j:55.1'
This is mentioned in the documentation (H2 DB Reference - SET COLLATION). (It does not, though, explain the difference between the default collator and ICU4J's.)
Add SET COLLATION ENGLISH STRENGTH PRIMARY to the JDBC url:
jdbc:h2:mem:test;MODE=MySQL;INIT=CREATE SCHEMA IF NOT EXISTS "public"\;SET COLLATION ENGLISH STRENGTH PRIMARY
A snippet of my unit test which works after adding ICU4J:
@Test
public void testUnicode() throws Exception {
Author authorWithUnicode = new Author();
authorWithUnicode.setName("Ilkka Seppälä");
authorRepository.save(authorWithUnicode);
Author authorWithSpaces = new Author();
authorWithSpaces.setName("M Saif Asif");
authorRepository.save(authorWithSpaces);
assertThat(authorRepository.findByName("Ilkka Seppälä").get()).isNotNull();
assertThat(authorRepository.findByName("Ilkka Seppala").get()).isNotNull();
assertThat(authorRepository.findByName("M Saif Asif").get()).isNotNull();
assertThat(authorRepository.findByName("MSaifAsif")).isEqualTo(Optional.empty());
}
Previously, without ICU4J, if H2 was initialized with SET COLLATION ENGLISH STRENGTH PRIMARY, the 4th assert would fail because H2 would treat the String with spaces as the same as the one without spaces. Without SET COLLATION, the second assert would fail because H2 would treat the name with the umlauted "ä" as different from the one without.
I am creating a simple database table with a column of type Timestamp on IBM DB2 on mainframes, from a JDBC client, like this:
CREATE TABLE scma.timetest(
T_TYPE VARCHAR(8),
T_DATE TIMESTAMP
);
Whether or not any records have been inserted, if I do a select * from scma.timetest; I end up getting the exception below:
java.nio.charset.UnsupportedCharsetException: Cp1027
If I don't have the Timestamp type column, everything works fine. I have tried starting the JDBC client with -Dfile.encoding=UTF-8, to no avail. The same thing happens when I try it from a Java program; it results in the same error.
It is not the same problem mentioned here, as I don't get a ClassNotFoundException. Any pointers on what could be wrong? Here is the full exception, in case it helps:
Exception in thread "main" java.nio.charset.UnsupportedCharsetException: Cp1027
at java.nio.charset.Charset.forName(Charset.java:531)
at com.ibm.db2.jcc.am.t.<init>(t.java:13)
at com.ibm.db2.jcc.am.s.a(s.java:12)
at com.ibm.db2.jcc.am.o.a(o.java:444)
at com.ibm.db2.jcc.t4.cc.a(cc.java:2412)
at com.ibm.db2.jcc.t4.cb.a(cb.java:3513)
at com.ibm.db2.jcc.t4.cb.a(cb.java:2006)
at com.ibm.db2.jcc.t4.cb.a(cb.java:1931)
at com.ibm.db2.jcc.t4.cb.m(cb.java:765)
at com.ibm.db2.jcc.t4.cb.i(cb.java:253)
at com.ibm.db2.jcc.t4.cb.c(cb.java:55)
at com.ibm.db2.jcc.t4.q.c(q.java:44)
at com.ibm.db2.jcc.t4.rb.j(rb.java:147)
at com.ibm.db2.jcc.am.mn.kb(mn.java:2107)
at com.ibm.db2.jcc.am.mn.a(mn.java:3099)
at com.ibm.db2.jcc.am.mn.a(mn.java:686)
at com.ibm.db2.jcc.am.mn.executeQuery(mn.java:670)
Moving this here from comments:
Legacy DB2 for z/OS often uses EBCDIC (here CP1027) encoding for character data. Also, I believe DB2 sends timestamp values to the client as character strings, although they are stored differently internally. I suspect that the Java runtime you are using does not support CP1027, so it doesn't know how to convert the EBCDIC data to whatever it needs on the client. I cannot explain, though, why the VARCHAR value comes through OK.
For more details about DB2 encoding you can check the manual.
You can force DB2 to create a table using different encoding, which will likely be supported by Java:
CREATE TABLE scma.timetest(...) CCSID UNICODE
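For illustration, issuing that DDL and re-running the failing query through plain JDBC might look like the sketch below; "conn" is an assumed, already-opened connection via the IBM JCC driver.
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class UnicodeTableDemo {
    static void recreateAndQuery(Connection conn) throws Exception {
        try (Statement st = conn.createStatement()) {
            // Create the table with Unicode encoding so the client never needs the Cp1027 charset
            st.execute("CREATE TABLE scma.timetest ("
                    + "T_TYPE VARCHAR(8), "
                    + "T_DATE TIMESTAMP) CCSID UNICODE");
            // The SELECT that previously failed with UnsupportedCharsetException: Cp1027
            try (ResultSet rs = st.executeQuery("SELECT * FROM scma.timetest")) {
                while (rs.next()) {
                    System.out.println(rs.getString("T_TYPE") + " " + rs.getTimestamp("T_DATE"));
                }
            }
        }
    }
}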
Another alternative might be to use a different Java runtime that supports the EBCDIC (CP1027) encoding. The IBM JDK, which comes with some DB2 client packages, would be a good candidate.
You (well, not you but the mainframe system programmers) can also configure the default encoding scheme for the database (subsystem).
I am using Flyway version 2.3. I have an SQL patch which inserts a varchar into a table, and the value contains a character sequence that Flyway treats as a placeholder. I want Flyway to ignore placeholders and run the script as is.
The script file is
insert into test_data (value) values ("${Email}");
And the Java code is
package foobar;
import com.googlecode.flyway.core.Flyway;
public class App
{
public static void main( String[] args )
{
// Create the Flyway instance
Flyway flyway = new Flyway();
// Point it to the database
flyway.setDataSource("jdbc:mysql://localhost:3306/flywaytest", "alpha", "beta");
// Start the migration
flyway.migrate();
}
}
This can be done by splitting $ and { in the expression:
insert into test_data (value) values ('$' || '{Email}')
You can change the placeholder prefix or suffix to a different value and you should be OK.
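For example, with the Flyway 2.x API used in the question, that might look roughly like the sketch below; the prefix and suffix values are arbitrary placeholders, just pick something that cannot occur in your scripts.
import com.googlecode.flyway.core.Flyway;

public class App {
    public static void main(String[] args) {
        Flyway flyway = new Flyway();
        flyway.setDataSource("jdbc:mysql://localhost:3306/flywaytest", "alpha", "beta");
        // Use placeholder markers that never occur in the migration scripts,
        // so the literal ${Email} in the INSERT is left untouched
        flyway.setPlaceholderPrefix("$[[");
        flyway.setPlaceholderSuffix("]]");
        flyway.migrate();
    }
}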
Try these properties:
final var flyway = Flyway.configure()
.dataSource(DataSourceProvider.getInstanceDataSource())
.locations("path")
.outOfOrder(true)
.validateOnMigrate(false)
.placeholderReplacement(false)
.load();
In my MySQL migration script this worked:
I just escaped the opening { characters, like this:
'...<p>\nProgram name: $\{programName}<br />\nStart of studies: $\{startOfStudies}<br />\n($\{semesterNote})\n</p>...'
This way Flyway didn't recognize them as placeholders, and the string that is finally stored doesn't contain the escape characters.
...<p>
Program name: ${programName}<br />
Start of studies: ${startOfStudies}<br />
(${semesterNote})
</p>...
I had exactly the same problem, but the accepted answer didn't fit my requirements. So I solved the problem in another way and am posting this answer in the hope that it'll be useful to other people coming here from a Google search.
If you cannot change the placeholder suffix and prefix, you can trick Flyway into believing there are no placeholders by using an expression. E.g.:
INSERT INTO test_data(value) VALUES (REPLACE("#{Email}", "#{", "${"));
This is useful if you've already used placeholders in lots of previous migrations. (If you just change placeholder suffix and prefix, you'll have to change them in previous migration scripts, too. But then the migration script checksums won't match, Flyway will rightfully complain, and you'll have to change checksums in the schema_version table by calling Flyway#repair() or manually altering the table.)
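If you do go that route, a programmatic repair might look like the sketch below; note that what repair() actually does varies by Flyway version, from removing failed migration entries in older releases to also realigning checksums in newer ones.
import com.googlecode.flyway.core.Flyway;

public class RepairApp {
    public static void main(String[] args) {
        Flyway flyway = new Flyway();
        flyway.setDataSource("jdbc:mysql://localhost:3306/flywaytest", "alpha", "beta");
        // Bring the metadata table back in line with the migration scripts on disk
        flyway.repair();
    }
}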
Just add a property to your bootstrap.properties (or whatever you use)
flyway.placeholder-replacement = false
In 2021, the simple answer is to set the placeholderReplacement boolean to false:
flyway -placeholderReplacement="false"
The configuration parameter placeholderReplacement determines whether placeholders should be replaced.
Reference: https://flywaydb.org/documentation/configuration/parameters/placeholderReplacement
Well, this is a bit strange; could anyone help me point out where this function may be wrong?
I have a function similar to
CREATE FUNCTION check_password(uname TEXT, pass TEXT)
RETURNS BOOLEAN AS $$
DECLARE passed BOOLEAN;
BEGIN
SELECT (pwd = $2) INTO passed
FROM pwds
WHERE username = $1;
RETURN passed;
END;
$$ LANGUAGE plpgsql
When I run it directly in the pgAdmin SQL console, there are no errors, but when running it in a migration script using db-migration-maven-plugin I get the error:
Error executing: CREATE FUNCTION check_password(uname TEXT, pass TEXT)
RETURNS BOOLEAN AS $$ DECLARE passed BOOLEAN
org.postgresql.util.PSQLException: ERROR: unterminated dollar-quoted
string at or near "$$ DECLARE passed BOOLEAN"
Position: 74
The SQL generated by your migration scripts probably has some kind of $$ quotes in it that gets interpreted as a string somewhere.
A quick and dirty fix could be to change $$ to $func$ or even $check_password$, though there might be other functions further down that suffer the same problem.
The better, more long term approach will be to locate the offending $$.
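For illustration, here is the function from the question with a named dollar-quote tag, sent to the server as a single statement through plain JDBC (a sketch only; the connection details are hypothetical, and the JDBC driver passes the statement through without splitting it).
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreateFunctionDemo {
    public static void main(String[] args) throws Exception {
        String ddl =
              "CREATE FUNCTION check_password(uname TEXT, pass TEXT) "
            + "RETURNS BOOLEAN AS $check_password$ "
            + "DECLARE passed BOOLEAN; "
            + "BEGIN "
            + "  SELECT (pwd = $2) INTO passed FROM pwds WHERE username = $1; "
            + "  RETURN passed; "
            + "END; "
            + "$check_password$ LANGUAGE plpgsql";
        // Hypothetical connection details for illustration only
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/mydb", "user", "secret");
             Statement st = conn.createStatement()) {
            st.execute(ddl);
        }
    }
}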
@ivanorone: There is a bug filed for db-migration-maven-plugin: https://code.google.com/p/c5-db-migration/issues/detail?id=9
There is a patch included but looking at its source, it doesn't really fix the problem properly. Besides that, the project seems to be idling (last commit 2010).
There is another plugin that I am trying to use instead, Flyway: http://flywaydb.org/. Switching to it was pretty easy and it has worked fine so far.
Execute your query as a single batch (hint: use Ctrl+F5). When I ran your query in PostgreSQL (Greenplum interface) I got a similar error to the one you stated above. I found that the database client executes the query by splitting it on the statement terminators in the query (semicolons). As your query contains three semicolons, it is executed one statement at a time. So, to run it as a whole, use the "execute as single batch" option.
I hope this helps you. :)
Solution for Grails Database Migration Plugin
changeSet(author: "...", id: "...") {
sql(splitStatements: false, '''
CREATE FUNCTION trigger_func() RETURNS TRIGGER AS $$
DECLARE var text;
BEGIN
...
END $$ LANGUAGE plpgsql;
''')
}
The best solution seems to be to tell the plugin to only accept the semicolon as a delimiter when it's on a new line.
This works well where you need to define code blocks like this.
Ref: sql-maven-plugin with multiple delimiters