I have exported some data from schema A (table x) to XML, and I am reading the XML and inserting the data into schema B (table y). While inserting the data, after 20000 records it fails with:
com.ibm.db2.jcc.am.SqlSyntaxErrorException: [jcc][t4][20111][11366][3.63.75] The value of a host variable is too large for its corresponding use. Host variable=1. ERRORCODE=-4461, SQLSTATE=42815
com.ibm.db2.jcc.am.BatchUpdateException: [jcc][t4][102][10040][3.63.75] Batch failure.
The batch was submitted, but at least one exception occurred on an individual member of the batch.
I compared the data types of the corresponding columns in table x and table y, and they are the same: BIGINT for the identity (auto-increment) column and LONG VARCHAR in both source and destination.
Kindly help in resolving this issue.
I had a similar problem once. I solved it by adding the queue size to the XML. In my case it was something like this:
<task>
<name>Ventas MCC</name>
<queueSize>100</queueSize>
<queueNames>trashQueue</queueNames>
<queryTasks>
<queryTask>...</queryTask>
</queryTasks>
</task>
With the queueSize set, the queries were launched in batches.
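The asker's import tool isn't named, so here is a rough JDBC sketch of the same idea (the target table and column names are hypothetical): flush the insert batch every N rows instead of accumulating the whole file.

import java.sql.Connection;
import java.sql.PreparedStatement;

public class ChunkedInsert {
    private static final int BATCH_SIZE = 100; // analogous to queueSize above

    public static void insertRows(Connection con, Iterable<String[]> rows) throws Exception {
        // Hypothetical target table: y(id BIGINT, payload LONG VARCHAR)
        try (PreparedStatement ps = con.prepareStatement(
                "INSERT INTO y (id, payload) VALUES (?, ?)")) {
            int count = 0;
            for (String[] row : rows) {
                ps.setLong(1, Long.parseLong(row[0]));
                ps.setString(2, row[1]);
                ps.addBatch();
                if (++count % BATCH_SIZE == 0) {
                    ps.executeBatch(); // flush every BATCH_SIZE rows
                }
            }
            ps.executeBatch(); // flush the remainder
        }
    }
}

Smaller batches also make it easier to pinpoint which record triggers the -4461 error, since the failing batch narrows down the range.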
I'm trying to import data from a DWH SQL Server table that uses a clustered columnstore index into Kudu through Flume. However, after my custom Flume source retrieves a certain number of rows from the database, this exception occurs:
SqlExceptionHelper: Cursors are not supported on a table which has a clustered columnstore index
I'm using the type 4 JDBC SQL Server driver, and apparently it uses cursors to iterate over the result set. Therefore, I tried setting the fetch size to the number of rows the query is limited to, but nothing changed.
How can I stop the JDBC driver from using cursors, so that all rows get imported into the Kudu table?
Thanks in advance.
Try setting selectmethod=direct in the connection properties. Source:
If set to direct (the default), the database server sends the complete result set in a single response to the driver when responding to a query. A server-side database cursor is not created if the requested result set type is a forward-only result set. Typically, responses are not cached by the driver. Using this method, the driver must process the entire response to a query before another query is submitted. If another query is submitted (using a different statement on the same connection, for example), the driver caches the response to the first query before submitting the second query. Typically, the Direct method performs better than the Cursor method.
Of course, you need to define your result set as FORWARD_ONLY to guarantee this.
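As a minimal sketch, assuming the Microsoft SQL Server JDBC driver (host, database, credentials, and table name are placeholders), the property can go straight into the connection URL, and the statement is created forward-only and read-only:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DirectSelect {
    public static void main(String[] args) throws Exception {
        // selectMethod=direct asks the server to stream the complete result set
        // instead of opening a server-side cursor.
        String url = "jdbc:sqlserver://dwhhost:1433;databaseName=dwh;selectMethod=direct";
        try (Connection con = DriverManager.getConnection(url, "user", "password");
             // Forward-only, read-only keeps the driver on the cursorless path.
             Statement st = con.createStatement(
                     ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
             ResultSet rs = st.executeQuery("SELECT col1 FROM dbo.FactTable")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}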
I had the same problem with a table that has a clustered columnstore index.
A simple SELECT statement over ODBC failed with "Cursors are not supported on a table which has a clustered columnstore index".
My workaround is to create a view over the table (the view name here is arbitrary):
create view dbo.TableNameView as select * from dbo.TableName
It works for me.
I have a Java Map (Map<String, String>) and a JDBC connection to a Hive server.
The schema of the table on the server contains a column of type Map<String, String>.
Is it possible to insert the Java Map into the Hive table column with the matching data type using JDBC?
I tried:
"create table test(key string, value Map<String, String>)"
"insert into table test values ('keywer', map('subkey', 'subvalue')) from dummy limit 1;"
ref: Hive inserting values to an array complex type column
but the insert failed with:
"Error: Error while compiling statement: FAILED: ParseException line 1:69 missing EOF at 'from' near ')' (state=42000,code=40000)"
[EDIT]
Hive version is 0.14.0
Thanks
The manual clearly says you cannot insert into a Map data type using SQL:
"Hive does not support literals for complex types (array, map, struct, union), so it is not possible to use them in INSERT INTO...VALUES clauses. This means that the user cannot insert data into a complex datatype column using the INSERT INTO...VALUES clause.”
I think the corrected DDL would be:
CREATE TABLE test(key STRING, value MAP<STRING, STRING>);
but, per the quote above, an INSERT INTO ... VALUES cannot populate the map column, and mixing VALUES with a FROM clause is exactly what produced the ParseException you saw.
A working method to insert a complex type from a JDBC client is:
insert into table test select "key",map("key1","value1","key2","value2") from dummy limit 1;
where dummy is another table that has at least one row.
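For completeness, a minimal JDBC sketch of issuing that statement, assuming the Hive JDBC driver is on the classpath and the connection URL is a placeholder:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class HiveMapInsert {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                     "jdbc:hive2://hivehost:10000/default", "user", "");
             Statement st = con.createStatement()) {
            // The map() constructor is only valid in a SELECT, hence the
            // insert ... select against a one-row table.
            st.execute("insert into table test "
                    + "select 'keywer', map('subkey', 'subvalue') from dummy limit 1");
        }
    }
}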
I have two different databases: one is MySQL, the other is Oracle. Each has one table, with different names and different column names. Now I have to perform some DB operations on each DB from a single Java application. Suppose for the MySQL DB I have an Emp table with columns Id, Name, Dept, and for the Oracle DB I have a Student table with StudentName and StudentDept. Without changing code, how can I manage the two DBs? I can put all the connection-related data (connection URL, username, password) in a properties file, but to execute a query I have to mention the table name and column names in code. How can I manage this dynamically, without altering the code, so that if a new DB with a different table and column names is added in future, I only add it to the properties file and never touch the code? Please suggest.
This might not be the prettiest, but here is one way to do it:
On application launch, parse the properties file to get all DB connections. Store these however you want: a list of connection pools, a list of single connections, a list of connection strings, etc. It doesn't matter.
Run a predefined stored procedure or select query to retrieve all table names from each database found in step 1. In Sybase you can do this with
select name from sysobjects where type = 'U'
Build a map where the key is the table name and the value is the DB name, connection, connection string, or whatever you use to manage your DB connections, populated from the result set of step 2. Anything your DB connection manager can use to identify which database it should connect to will work as the value.
In code, when a table name is passed, look up the required DB in the map (see the sketch below).
Execute the query against the DB info returned from the map you built in step 3.
As long as the table names are distinct across DBs, this will work. Once this is set up, new DBs can be added to the properties file and the cache refreshed with an application restart. However, if new tables/columns are being sent to the code, how are those being passed without any code change?
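A minimal sketch of steps 2-4, assuming hypothetical property keys (url.user / url.password) and a generic catalog query; swap in the vendor-specific query (e.g. the Sybase one above) per database:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

public class TableRouter {
    // Step 3: table name -> JDBC URL of the database that owns it
    private final Map<String, String> tableToUrl = new HashMap<>();
    private final Properties props; // credentials per URL, from the properties file

    public TableRouter(List<String> jdbcUrls, Properties props) throws Exception {
        this.props = props;
        for (String url : jdbcUrls) {
            try (Connection con = open(url);
                 Statement st = con.createStatement();
                 // Hypothetical catalog query; use whatever fits each vendor.
                 ResultSet rs = st.executeQuery(
                         "select table_name from information_schema.tables")) {
                while (rs.next()) {
                    tableToUrl.put(rs.getString(1), url); // cache step 2 results
                }
            }
        }
    }

    // Step 4: given a table name, connect to whichever DB owns it.
    public Connection connectionFor(String tableName) throws Exception {
        String url = tableToUrl.get(tableName);
        if (url == null) {
            throw new IllegalArgumentException("Unknown table: " + tableName);
        }
        return open(url);
    }

    private Connection open(String url) throws Exception {
        return DriverManager.getConnection(url,
                props.getProperty(url + ".user"),
                props.getProperty(url + ".password"));
    }
}

DatabaseMetaData.getTables() would be a vendor-neutral alternative to a raw catalog query, at the cost of a little more plumbing.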
I have two different tables. I am fetching the value from a column of Table1 and then inserting it into a column of Table2. No processing is done on that data in between before inserting into Table2. Both source and destination columns have the same datatype: varchar(50). Example data in the column: CSC123
When I query the destination table with SQL (after the insertion is done), only a 0 appears. When I export the results to Excel, I can see some special characters before that 0 (something like squares). This issue happens only in some cases, and the actual data is missing.
Application : J2EE
Framework : Hibernate
Database : Oracle
Please suggest a solution
Thanks
We have a database function to cache sequence values. It accepts a sequence object name and a fetch size, increments the sequence, and returns the values.
The return type is an Oracle collection.
Here is the definition of the db types used by the function:
create or replace type icn_num_type as object(v_inc_num number);
create or replace type icn_num_type_table as table of icn_num_type; --this is returned
The values returned by the function are cached on the application side. We are using iBATIS for the DAO layer. All of this worked well when the function, types, and sequence objects were in the same schema.
Now the function, types, and sequences are defined in one parent schema, and the user schema has synonyms to all of the above-mentioned objects. I am now facing the following error:
--- The error occurred while executing query procedure.
--- Check the {? = call seq_inc(?, ?)}.
--- Check the output parameters (register output parameters failed).
--- Cause: java.sql.SQLSyntaxErrorException: ORA-00942: table or view does not exist
ORA-06512: at "SYS.DBMS_PICKLER", line 18
ORA-06512: at "SYS.DBMS_PICKLER", line 58
ORA-06512: at line 1
However, when we access the function from SQL Developer (user schema), it works fine.
Could someone help me with this issue?
It seems there were issues (restrictions/bugs) with synonyms and object types back in 9iR2 for Java. Google for ORA-21700 and DBMS_PICKLER.
I suspect you've got an issue with the JDBC driver used by iBATIS that is resolved in the JDBC version used by SQL Developer.
Grab something like SQuirreL SQL Client and try the JDBC driver you are using with iBATIS there, to see if you can reproduce the problem.
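One workaround often suggested for DBMS_PICKLER errors involving synonyms is to register the collection out-parameter under its fully qualified type name rather than the synonym. Below is a minimal plain-JDBC sketch of the call; the connection URL, credentials, parent schema, and sequence name are all placeholders, and this is an illustration rather than a confirmed fix:

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import oracle.jdbc.OracleTypes;

public class SeqIncCall {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//dbhost:1521/ORCL", "app_user", "secret"); // placeholders
             CallableStatement cs = con.prepareCall("{? = call seq_inc(?, ?)}")) {
            // Register the collection return type under its fully qualified name,
            // not the synonym: a commonly cited workaround for ORA-00942
            // raised from SYS.DBMS_PICKLER.
            cs.registerOutParameter(1, OracleTypes.ARRAY, "PARENT_SCHEMA.ICN_NUM_TYPE_TABLE");
            cs.setString(2, "MY_SEQUENCE"); // sequence object name (placeholder)
            cs.setInt(3, 100);              // fetch size
            cs.execute();
            Object[] rows = (Object[]) cs.getArray(1).getArray();
            System.out.println("Fetched " + rows.length + " sequence values");
        }
    }
}

If the fully qualified name works where the synonym fails, that points at synonym/type-metadata resolution rather than the driver version.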