I have two different databases, one MySQL and one Oracle. Each has one table, with a different name and different column names. Now I have to perform some DB operations on each database from a single Java application. Suppose for the MySQL DB I have an Emp table with columns Id, Name, Dept, and for the Oracle DB I have a Student table with StudentName and StudentDept.
Without changing code, how can I manage the two DBs? I can put all connection-related data (connection URL, username, password) in a properties file, but to execute a query I still have to mention the table name and column names in code. How can I manage this dynamically, without altering the code, so that if in future a new DB with a different table name and column names is added, I only need to add it to the properties file and never touch the code? Please suggest.
This might not be the prettiest, but here is one way to do it:
1. On application launch, parse the properties file to get all DB connections. Store these however you want: a list of connection pools, a list of single connections, a list of connection strings, etc. It doesn't matter.
2. Run a predefined stored procedure or SELECT query to retrieve all table names from each database found in step 1. In Sybase you can do this with
select name from sysobjects where type = 'U'
3. Build a Map where the key is the table name and the value is the DB name, connection, connection string, or whatever you use to manage your DB connections, taken from the result set of step 2. Anything that can be passed to your DB connection manager to identify which database it should connect to will work as the value.
4. In code, when a table name is passed in, look up the required DB in the map.
5. Execute the query against the DB info returned from the map you created in step 3.
As long as the table names are distinct across the DBs, this will work. Once it is set up, new DBs can be added to the properties file and the cache refreshed with an application restart. However, if new tables/columns are being sent to the code, how are those being passed without any code change?
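To make the steps more concrete, here is a rough sketch in plain JDBC. The property keys (databases, db1.url, db1.user, db1.password) are just an assumed layout for illustration, and it uses DatabaseMetaData.getTables() instead of a vendor-specific catalog query so the same lookup works for both MySQL and Oracle:

import java.sql.*;
import java.util.*;

public class TableRouter {

    private final Properties props;
    // step 3: table name -> key of the database that owns it
    private final Map<String, String> tableToDbKey = new HashMap<>();

    public TableRouter(Properties props) throws SQLException {
        this.props = props;
        // steps 1/2: for every configured database, list its tables
        for (String dbKey : props.getProperty("databases").split(",")) {
            try (Connection conn = open(dbKey);
                 ResultSet rs = conn.getMetaData()
                         .getTables(null, null, "%", new String[] {"TABLE"})) {
                while (rs.next()) {
                    tableToDbKey.put(rs.getString("TABLE_NAME").toUpperCase(), dbKey);
                }
            }
        }
    }

    private Connection open(String dbKey) throws SQLException {
        return DriverManager.getConnection(
                props.getProperty(dbKey + ".url"),
                props.getProperty(dbKey + ".user"),
                props.getProperty(dbKey + ".password"));
    }

    // steps 4/5: given a table name, connect to whichever database owns it
    public Connection connectionFor(String tableName) throws SQLException {
        return open(tableToDbKey.get(tableName.toUpperCase()));
    }
}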
In Talend Data Quality, I have configured a JDBC connection to an OpenEdge database and it's working fine.
I can pull the list of tables and select columns to analyse, but when executing the analysis, I get this:
Table "DBGSS.SGSSGSS" cannot be found.
This is because it does not specify a schema, only the database name - DBGSS.
How can I make it specify the database, the schema and then the table name? Or just the table name; that would work too.
Thanks!
You can use a tDBConnection component, which lets you specify a schema.
Then use it with the Use Existing Connection option in your input components.
See the documentation: https://help.talend.com/r/en-US/7.3/db-generic/tdbconnection
I'm trying to use the createWithParams method programmatically in ADF
to insert a new record into the database, but it doesn't work.
I have a DB table with 2 values generated by before-insert triggers,
and I will pass 2 values myself.
This is my code:
OperationBinding operation = ADFUtils.findOperation("CreateWithParams");
Object result = operation.execute();
From the edit action binding, I've referenced the 2 values I want to pass:
#{pageFlowScope.userBean.investorNumber}
#{pageFlowScope.userBean.tempCode}
But nothing is inserted into the database, and there is nothing in the log.
Given that you said "nothing is inserted into the database", I have to ask: do you understand how ADF BC (EO, VO, AM) works? When you submit a page, for example with createWithParams, it updates the EO and VOs in the ADF BC middle-tier model, in memory. Nothing is written to the database. You must issue a COMMIT through the enclosing Application Module to get the data written to the DB.
This might help.
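In code that means executing two operation bindings, for example (a rough sketch reusing the ADFUtils helper from your question; the Commit operation binding has to be added to the page definition first):

OperationBinding create = ADFUtils.findOperation("CreateWithParams");
create.execute();

// At this point the new row only exists in the ADF BC model (in memory).
// Commit the Application Module transaction to actually write it to the database.
OperationBinding commit = ADFUtils.findOperation("Commit");
commit.execute();

if (!commit.getErrors().isEmpty()) {
    // inspect/log the binding-layer errors instead of assuming success
}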
I'm trying to import data from a DWH SQL Server table that uses a clustered columnstore index into Kudu through Flume. However, after my custom Flume source retrieves a certain number of rows from the database, this exception occurs:
SqlExceptionHelper: Cursors are not supported on a table which has a clustered columnstore index
I'm using the JDBC SQL Server driver (type 4), and apparently it uses cursors to iterate the result set. Therefore, I tried setting the fetch size to the number of rows the query is limited to, but nothing changed.
How can I stop the JDBC driver from using cursors and thus get all rows imported into the Kudu table?
Thanks in advance.
Try setting selectmethod=direct in the connection properties. Source:
If set to direct (the default), the database server sends the complete result set in a single response to the driver when responding to a query. A server-side database cursor is not created if the requested result set type is a forward-only result set. Typically, responses are not cached by the driver. Using this method, the driver must process the entire response to a query before another query is submitted. If another query is submitted (using a different statement on the same connection, for example), the driver caches the response to the first query before submitting the second query. Typically, the Direct method performs better than the Cursor method.
Of course, you need to define your result set as FORWARD_ONLY to guarantee this.
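As a rough illustration, here are both pieces in plain JDBC. The host, database, credentials and query are placeholders, and the exact spelling of the selectMethod property depends on the driver build, so treat it as an assumption to verify against your driver's documentation:

import java.sql.*;
import java.util.Properties;

public class DirectSelectExample {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:sqlserver://dwh-host:1433;databaseName=DWH"; // placeholder

        Properties props = new Properties();
        props.setProperty("user", "etl_user");   // placeholder
        props.setProperty("password", "secret"); // placeholder
        // Ask the driver to stream the whole result set instead of opening a server-side cursor.
        props.setProperty("selectMethod", "direct");

        try (Connection conn = DriverManager.getConnection(url, props);
             // A forward-only, read-only result set lets the server skip creating a cursor.
             Statement stmt = conn.createStatement(
                     ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
             ResultSet rs = stmt.executeQuery("SELECT * FROM dbo.FactTable")) {
            while (rs.next()) {
                // hand each row to the Flume channel / Kudu sink here
            }
        }
    }
}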
I had the same problem with a table that has a clustered columnstore index.
A simple SELECT statement over ODBC failed with "Cursors are not supported on a table which has a clustered columnstore index".
My workaround is to create a view that contains the statement:
select * from dbo.TableName
It works for me.
I am creating a job to pull data from a database into CSV files using Talend Open Studio. There are 100s of tables, and the data types and number of columns differ between the tables. I want to pull the data from the database tables with a single job and a customizable SQL query. I know how to create and use context variables.
If I understood you correctly, you should be using tMap's "Reload at each row" option and defining the table names in an Excel sheet or in a tFixedFlowInput.
tMap settings
Whole job and results
SQL Script:
"SELECT TOP(1) Name, Code from mdm." + (String)globalMap.get("row4.table")
I used Microsoft SQL Server for this example, but the same script works with MySQL as well.
You can simply use a context variable, which you set via the --context_param argument, in a tWhicheverDatabaseInput. E.g. define a context variable "my_sql" which you can set on the command line as
my_job.sh --context_param my_sql="select a,b,c from a_test_table"
and then use context.my_sql as the SQL in your database input component.
However, as garpitmzn already mentioned, you will need dynamic schemas to actually work with this unknown structure in Talend. This feature only exists in the enterprise version.
If the enterprise version is available to you, simply declare a single column of type "Dynamic", which you can pass through the rest of your flow.
Declare a local context variable, say query, of type String.
Prepare a context file with the variable query: query=select name from employee
Execute the query: in the tOracle input component, use context.query.
The query throws an error when you have WHERE conditions on a string type.
I need to investigate more on that; otherwise it works.
In H2 there are two ways to create a new in-memory database. In the first, you explicitly create the database with a CREATE DATABASE ... SQL statement. In the other, if you attempt to connect to a non-existent database, H2 simply creates it. I've chosen the first way because, if I don't get some kind of error back, how will I know when to create the single table (with only two columns)?
The problem is that H2 doesn't like the SQL I'm using and flags an error. This SQL statement:
String sql = "CREATE DATABASE Tickets, " + USER + ", " + PASS;
throws this exception:
org.h2.jdbc.JdbcSQLException: Syntax error in SQL statement "CREATE DATABASE[*] TICKETS, USERNAME, PASSWORD "; expected "OR, FORCE, VIEW, ALIAS, SEQUENCE, USER, TRIGGER, ROLE, SCHEMA, CONSTANT, DOMAIN, TYPE, DATATYPE, AGGREGATE, LINKED, MEMORY, CACHED, LOCAL, GLOBAL, TEMP, TEMPORARY, TABLE, PRIMARY, UNIQUE, HASH, SPATIAL, INDEX"; SQL statement:
Any idea what's going on in the above? Or can you tell me how to tell that the DB was auto-created, so that I can proceed to create the table?
I don't believe you're correct when you suggest that you can create an H2 database via SQL; I think that's your basic issue.
Just connect to your DB (it's the JDBC URL that defines which database is involved), and if you don't get an exception, carry on and use it (create your table, etc.).
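For example, here is a minimal sketch of that connect-and-create approach (the URL, credentials and table definition are placeholders; DB_CLOSE_DELAY=-1 just keeps the in-memory DB alive for the life of the JVM):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class H2InMemoryExample {
    public static void main(String[] args) throws Exception {
        // Connecting creates the in-memory database if it does not exist yet.
        String url = "jdbc:h2:mem:tickets;DB_CLOSE_DELAY=-1";

        try (Connection conn = DriverManager.getConnection(url, "sa", "");
             Statement stmt = conn.createStatement()) {
            // Placeholder two-column table standing in for the one in the question.
            stmt.execute("CREATE TABLE IF NOT EXISTS Tickets ("
                    + "id INT PRIMARY KEY, description VARCHAR(255))");
        }
    }
}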