Issue retrieving generated keys with SimpleJdbcInsert and Sybase - java

I'm having an odd problem with SimpleJdbcInsert.executeAndReturnKey with Sybase (jTDS driver) and certain data.
Take the following example:
SimpleJdbcInsert insert = new SimpleJdbcInsert(jdbcTemplate)
        .withTableName("TABLE_NAME")
        .usingGeneratedKeyColumns("ID");

List<String> columns = new ArrayList<String>();
columns.add("SOME_NUMERIC_DATA");
columns.add("SOME_STRING_DATA");

Map<String, Object> params = new HashMap<String, Object>();
params.put("SOME_NUMERIC_DATA", 10.02);
params.put("SOME_STRING_DATA", "AAAA");

Number insertId = insert.executeAndReturnKey(params);
The above will fail with:
DataIntegrityViolationException: Unable to retrieve the generated key for the insert
The insert itself is fine: if I call insert.execute(params) instead, the row is inserted correctly (but I need the generated column value).
If I insert null instead of 10.02 for the SOME_NUMERIC_DATA column, it works correctly and returns the generated column value. It also works correctly if all of the fields are VARCHAR/String.
Can anyone see anything here that might be causing this with a combination of string and numeric fields?
I should also add that the exact same code works every time against an H2 database - this seems to be related to Sybase/jTDS.

I had the same problem with SQL Server and fixed it by calling this configuration method right before the call to executeAndReturnKey():
mySimpleJdbcInsert.setAccessTableColumnMetaData(false);
I suspect the error has to do with database metadata: as explained in the Spring reference (http://docs.spring.io/spring-framework/docs/current/spring-framework-reference/html/jdbc.html), SimpleJdbcInsert uses database metadata to construct the actual insert statement.
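For the Sybase/jTDS case in the question, a minimal sketch of that fix might look like this (an assumption on my part: with the metadata lookup disabled, the columns have to be listed explicitly via usingColumns()):
SimpleJdbcInsert insert = new SimpleJdbcInsert(jdbcTemplate)
        .withTableName("TABLE_NAME")
        .usingColumns("SOME_NUMERIC_DATA", "SOME_STRING_DATA")  // listed explicitly, since metadata lookup is off
        .usingGeneratedKeyColumns("ID");
insert.setAccessTableColumnMetaData(false);  // skip the metadata lookup that trips up jTDS/Sybase
Number insertId = insert.executeAndReturnKey(params);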
One could also use the SQL OUTPUT clause, such as:
INSERT INTO myTable (Name, Age)
OUTPUT Inserted.Id
VALUES (?,?)
and use a more generic JdbcTemplate method to run the insert and read the key back.
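A sketch of that approach, assuming the driver is willing to run an INSERT ... OUTPUT statement as a query (SQL Server syntax; "Alice" and 30 are made-up example values):
// The OUTPUT clause makes the INSERT return a one-row result set with the
// generated Id, so we read it back with a ResultSetExtractor.
Long generatedId = jdbcTemplate.query(
        "INSERT INTO myTable (Name, Age) OUTPUT Inserted.Id VALUES (?, ?)",
        rs -> rs.next() ? rs.getLong(1) : null,
        "Alice", 30);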

Related

Insert data into BigQuery using a native INSERT query from Java

I insert rows into BigQuery with the insertAll method from Java. It always works fine. But when I try to update the same row from the Java code, I get the error below:
com.google.cloud.bigquery.BigQueryException UPDATE or DELETE DML statements over table project123:mydataset.test would affect rows in the streaming buffer, which is not supported
So I tried from the BigQuery console.
I inserted a row using an INSERT query and then immediately updated the same row with an UPDATE. It worked fine.
The BigQuery articles I have read say that both insertAll from Java and an INSERT query from the console go through the streaming buffer. In that case, the console query execution should have failed as well.
Why does the console query work fine, while insertAll from Java throws this exception?
It would be really helpful if anyone could explain the exact details.
Any suggestions on using a native INSERT query from Java instead of insertAll would also be a great help.
Please find the code snippets below.
First I insert the values into BigQuery using this snippet:
Map<String, Object> map = new HashMap<>();
map.put("name", "John");  // row content: column name -> value
map.put("age", "30");
BigQuery bQuery = BigQueryOptions.newBuilder()
        .setCredentials(credentials)
        .setProjectId(id)
        .build()
        .getService();
InsertAllResponse response = bQuery.insertAll(
        InsertAllRequest.newBuilder(tableId).addRow(map).build());
Once it is inserted, I try to update the row in that table with the following snippet:
// Three placeholders need three arguments: table name, new value, filter value.
String updateQuery = String.format(
        "UPDATE `%s` SET name = \"%s\" WHERE age = \"%s\"", tableName, name, age);
QueryJobConfiguration queryConfig = QueryJobConfiguration.newBuilder(updateQuery).build();
bQuery.query(queryConfig);
The insert works fine; when I try to update the same inserted row, I get the streaming buffer error.
Thanks in advance.
When you read the documentation, it's clear that insertAll performs a streaming write into BigQuery.
When you use INSERT DML (INSERT INTO <table> [VALUES ...|SELECT ...]), you perform a query, not a streaming write, so the data management isn't the same. The performance characteristics are also different: streaming writes can handle up to a million rows per second, while DML runs query by query and takes more time for less data.
I don't know your full code or what you want to achieve, but if you want to use ordinary queries (INSERT, UPDATE, DELETE), use the query API.
EDIT
I tried to adapt your code (it was broken, so I made some assumptions) and I can propose this. Simply perform a query, not a load job or a streaming write.
String tableName = "YOUR_TABLE_NAME";
// QueryJobConfiguration matches the API used in the question
// (it replaces the older QueryRequest builder).
String insertQuery = String.format(
        "INSERT INTO `%s` (name, age) VALUES ('John', '30')", tableName);  // example values
QueryJobConfiguration insertConfig = QueryJobConfiguration.newBuilder(insertQuery).build();
bQuery.query(insertConfig);

String updateQuery = String.format(
        "UPDATE `%s` SET name = \"%s\" WHERE age = \"%s\"", tableName, name, age);
QueryJobConfiguration updateConfig = QueryJobConfiguration.newBuilder(updateQuery).build();
bQuery.query(updateConfig);

Flyway Migration: NamedParameterJdbcTemplate

Is there any way to create a Flyway migration class utilizing the NamedParameterJdbcTemplate rather than the standard JdbcTemplate that comes from the implementation of SpringJdbcMigration?
I have an upgrade I need to run where I need to convert a column type from text to integer (replacing a string value with an internal id associated with that value).
The way I'm doing this is to temporarily store the string values for a reverse lookup, drop the column and re-add it with the proper type, and then run an UPDATE to set the appropriate id on the records. I have code similar to the following that I want to execute as part of the migration:
String sql = "UPDATE my_table SET my_field = :my_field WHERE my_id IN (:my_ids)";
MapSqlParameterSource source = new MapSqlParameterSource();
source.addValue("my_field", someIntValue); // the internal id of the string I want to use.
source.addValue("my_ids", someListOfPKIds); // List of PK ids.
namedTemplate.update(sql, source); // namedTemplate is a NamedParameterJdbcTemplate
However, it seems as if I can't take advantage of the NamedParameterJdbcTemplate. Am I incorrect in this?
According to the Flyway sources, a new JdbcTemplate is created in SpringJdbcMigrationExecutor.
However, you can create a NamedParameterJdbcTemplate inside your migration from the classic JdbcTemplate, using the wrapping constructor: new NamedParameterJdbcTemplate(jdbcTemplate).
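A minimal sketch of such a migration (assuming Flyway's SpringJdbcMigration interface and the hypothetical table/column names from the question):
import java.util.Arrays;

import org.flywaydb.core.api.migration.spring.SpringJdbcMigration;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.core.namedparam.MapSqlParameterSource;
import org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate;

public class V2__ConvertMyFieldToId implements SpringJdbcMigration {

    @Override
    public void migrate(JdbcTemplate jdbcTemplate) throws Exception {
        // Wrap the template Flyway hands us; the wrapper delegates to the
        // same DataSource, so the update runs inside the migration as usual.
        NamedParameterJdbcTemplate namedTemplate =
                new NamedParameterJdbcTemplate(jdbcTemplate);

        String sql = "UPDATE my_table SET my_field = :my_field WHERE my_id IN (:my_ids)";
        MapSqlParameterSource source = new MapSqlParameterSource()
                .addValue("my_field", 42)                        // hypothetical internal id
                .addValue("my_ids", Arrays.asList(1L, 2L, 3L));  // hypothetical PK ids
        namedTemplate.update(sql, source);
    }
}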

Insert map and other complex types in hive using jdbc

I have a Java Map (Map<String, String>) and a JDBC connection to a Hive server.
The schema of the table on the server contains a column of type MAP<STRING, STRING>.
Is it possible to insert the Java Map into the Hive table column of matching type using JDBC?
I tried:
"create table test(key string, value Map<String, String>)"
"insert into table test values ('keywer', map('subkey', 'subvalue')) from dummy limit 1;"
ref: Hive inserting values to an array complex type column
but the insert failed with:
"Error: Error while compiling statement: FAILED: ParseException line 1:69 missing EOF at 'from' near ')' (state=42000,code=40000)"
[EDIT]
Hive version is 0.14.0.
Thanks
The manual clearly says you cannot insert into a Map data type using SQL:
"Hive does not support literals for complex types (array, map, struct, union), so it is not possible to use them in INSERT INTO...VALUES clauses. This means that the user cannot insert data into a complex datatype column using the INSERT INTO...VALUES clause.”
So while the DDL itself is fine:
CREATE TABLE test(key STRING, value MAP<STRING, STRING>);
the INSERT cannot use a VALUES clause for the MAP column (and mixing VALUES with from dummy limit 1 is a syntax error in any case, which is what the ParseException is pointing at).
A working method to insert a complex type from a JDBC client is:
insert into table test select "key",map("key1","value1","key2","value2") from dummy limit 1;
where dummy is another table which has at least one row.
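For completeness, a sketch of running that statement over JDBC (host, port and credentials are placeholders):
// Runs the insert-select through the HiveServer2 JDBC driver.
try (java.sql.Connection conn = java.sql.DriverManager.getConnection(
        "jdbc:hive2://localhost:10000/default", "user", "");
     java.sql.Statement stmt = conn.createStatement()) {
    stmt.execute("insert into table test "
            + "select 'keywer', map('subkey','subvalue') from dummy limit 1");
}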

Why doesn't Spring's JDBC template use the table's default value?

I have a table in MySQL and I am using JDBC templates to do an insert into that table.
One of the columns has a default value, and I am not specifying it in the Map<String, Object> parameters map.
I am getting an exception: Column 'colName' cannot be null.
Can anyone explain this please?
Thanks
*Edit: code*
contactDetailsInsertTemplate = new SimpleJdbcInsert(dataSource)
        .withTableName("contactdetails")
        .usingGeneratedKeyColumns("contactcode");
Map<String, Object> parameters = new HashMap<String, Object>();
Number newId = contactDetailsInsertTemplate.executeAndReturnKey(parameters);
You need to limit the columns specified in the SQL Insert.
See SimpleJdbcInsert.usingColumns()
If you don't do this, the SQL statement will need values for ALL columns and the default values cannot be used.
e.g. use
insert into mytable (col1, col2) values (val1, val2);
instead of
insert into mytable values (val1, val2);
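Applied to the code from the question, a sketch might look like this (the column names are placeholders for the ones you actually want to set):
contactDetailsInsertTemplate = new SimpleJdbcInsert(dataSource)
        .withTableName("contactdetails")
        .usingColumns("name", "email")  // only these appear in the generated INSERT
        .usingGeneratedKeyColumns("contactcode");
// Columns left out (e.g. colName with its DEFAULT) are omitted from the
// generated statement, so the database default applies.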
The problem is somewhere else: when I use SimpleJdbcInsert and pass a map of parameters to it, I simply expect it to create the insert statement with only the parameters I provided. I have a NOT NULL column with a DEFAULT value in the table and I don't want to specify it. I also don't want to explicitly list all the column names in usingColumns(), since I already did that when creating the parameter map. Why should I do it again? The problem is that SimpleJdbcInsert wants to be smarter and adds all columns to the insert statement, which is not necessary. Why not create the statement using only the columns provided in the parameter map?
See: http://forum.spring.io/forum/spring-projects/data/54209-simplejdbcinsert-and-columns-with-default-values
INSERT INTO mytable VALUES (1, null) is not the same as INSERT INTO mytable (col1) VALUES (1). The former explicitly inserts NULL; the latter leaves the decision to the DB, which applies the default value. Spring JDBC is doing the former.
You can add the columns which have a default value as parameters to usingGeneratedKeyColumns().
Column names included as parameters of that method will be omitted from the generated INSERT statement.
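A sketch of that trick, assuming colName is the defaulted column you want omitted:
contactDetailsInsertTemplate = new SimpleJdbcInsert(dataSource)
        .withTableName("contactdetails")
        // both the real generated key and the defaulted column are listed here,
        // so neither appears in the generated INSERT
        .usingGeneratedKeyColumns("contactcode", "colName");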
Since the OP is supplying a Map to the SimpleJdbcInsert, this is what I did.
I made sure the Map only has the columns that I want inserted.
I extracted the Map keys and converted them into a String[] as below:
Set<String> set = map.keySet();
String[] columnNames = set.toArray(new String[set.size()]);
When initializing the SimpleJdbcInsert, I passed in the columns I want to be used, as below:
SimpleJdbcInsert sji = new SimpleJdbcInsert(dataSource).withTableName("table_name");
sji.usingColumns(columnNames);
Hope it helps.

Get generated key value while inserting in partitioned table

Hi, I have a table in Postgres, say email_messages. It is partitioned, so whatever inserts I do using my Java application do not affect the master table, as the data actually gets inserted into the child tables. Here is the problem: I want to get an auto-generated column value (say email_message_id, which is of bigserial type). Postgres returns it as null since no insert is done on the master table. For Oracle I used GeneratedKeyHolder to get that value, but I'm unable to do the same for a partitioned table in Postgres. Please help me out.
Here is the code snippet we used for Oracle:
public void createAndFetchPKImpl(final Entity pEntity,
        final String pStatementId, final String pPKColumnName) {
    final SqlParameterSource parameterSource =
            new BeanPropertySqlParameterSource(pEntity);
    final String[] columnNames = new String[] {"email_message_id"};
    final KeyHolder keyHolder = new GeneratedKeyHolder();
    final int numberOfRowsAffected = mNamedParameterJdbcTemplate.update(
            getStatement(pStatementId), parameterSource, keyHolder, columnNames);
    pEntity.setId(ConversionUtil.getLongValue(keyHolder.getKey()));
}
When you use trigger-based partitioning in PostgreSQL it is normal for the JDBC driver to report that zero rows were affected/inserted. This is because the original SQL UPDATE, INSERT or DELETE didn't actually take effect: no rows were changed on the main table; instead, operations were performed on one or more sub-tables.
Basically, partitioning in PostgreSQL is a bit of a hack and this is one of the more visible limitations of it.
The workarounds are:
INSERT/UPDATE/DELETE directly against the sub-table(s), rather than the top-level table (see the sketch below);
Ignore the result rowcount; or
Use RULEs and INSERT ... RETURNING instead (but they have their own problems, and this won't work for partitions).
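A sketch of the first workaround, adapted from the Oracle snippet above (the child table name and columns are hypothetical):
// Target the child table directly so the driver has a real inserted row
// to report generated keys for.
String sql = "INSERT INTO email_messages_2015_01 (subject, body) "
        + "VALUES (:subject, :body)";
SqlParameterSource params = new MapSqlParameterSource()
        .addValue("subject", "hello")
        .addValue("body", "world");
KeyHolder keyHolder = new GeneratedKeyHolder();
mNamedParameterJdbcTemplate.update(sql, params, keyHolder,
        new String[] {"email_message_id"});  // ask for this generated column
long emailMessageId = keyHolder.getKey().longValue();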
