Consider a table:
ID   COUNTRY_CODE
1    ab-cd-ef
2    gh-ef
3    cd-ab-pq-xy
I need an SQL query that selects the records containing a specific country code.
The traditional LIKE approach works, of course:
select ID from TableName where COUNTRY_CODE like '%cd%';
The concern is that this query would run over millions of records, increasing the total cost of the operation. Because of those cost issues, nested tables are not an option here.
Note: the query can be parameterized from Java, if needed.
Is there any cost-effective way to handle such searchable columns?
Do not store lists as delimited strings. Either create a separate table:
CREATE TABLE TableName ( id NUMBER PRIMARY KEY );
CREATE TABLE TableName__Country_Codes(
id NUMBER REFERENCES TableName ( id ),
country_code CHAR(2),
PRIMARY KEY ( id, country_code )
);
Then you could use
SELECT ID
FROM TableName__Country_codes
WHERE 'cd' = COUNTRY_CODE;
Or use a nested table:
CREATE TYPE Char2List IS TABLE OF CHAR(2)
/
CREATE TABLE TableName(
ID NUMBER PRIMARY KEY,
Country_Code Char2List
) NESTED TABLE Country_code STORE AS Table_Name__Country_Codes;
Then you could use
SELECT ID
FROM TableName
WHERE 'cd' MEMBER OF COUNTRY_CODE;
If you have to use a delimited string then your query should include the delimiters in the search expression:
SELECT ID
FROM TableName
WHERE '-' || COUNTRY_CODE || '-' like '%-cd-%';
Otherwise, if you have longer list elements, %cd% could match not only cd but also cda or bbcdbb.
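Since the question mentions that the query can be parameterized from Java, the same delimiter trick also works with a bind variable; a minimal sketch, assuming a bind variable named :code is supplied by the caller:
SELECT ID
FROM TableName
WHERE '-' || COUNTRY_CODE || '-' LIKE '%-' || :code || '-%';
The caller binds the bare country code (e.g. cd) and the delimiters are added inside the query.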
So I have an object that I'm trying to insert into a database; the object has the following structure
int id
String name
Array tags
and I want to insert the first two fields into the following table
CREATE TABLE foo (
id number(20) PRIMARY KEY,
name varchar2(50) NOT NULL
);
and the array into this table
CREATE TABLE fooTags (
id number(20) PRIMARY KEY,
fooId number(20), -- foreign key to foo. I don't know what the SQL is for that.
tagName varchar2(50)
);
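As an aside, regarding the comment above: the foreign key could be declared roughly like this (a sketch; the constraint name fooTags_foo_fk is made up):
CREATE TABLE fooTags (
  id      number(20) PRIMARY KEY,
  fooId   number(20) NOT NULL,
  tagName varchar2(50),
  -- the constraint name is arbitrary; REFERENCES foo(id) is what makes fooId a foreign key
  CONSTRAINT fooTags_foo_fk FOREIGN KEY (fooId) REFERENCES foo (id)
);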
How would a follow-up insert that uses the id created by the initial insert work? I assume a SELECT is needed, but I'm unsure how the statements should be ordered so that the right information ends up in the right place for each object.
I wrote two procedures.
If you know the sequence names used for the ids:
create or replace procedure FOO_INSERT(foo_name in varchar2, FooTags_tagName in varchar2)
is
  foo_seq_val number;
  footag_seq_val number;
begin
  select foo_seq.nextval into foo_seq_val from dual;
  insert into foo (id, name) values (foo_seq_val, foo_name);
  select footag_seq.nextval into footag_seq_val from dual;
  insert into footags (id, fooid, tagname) values (footag_seq_val, foo_seq_val, FooTags_tagName);
  commit;
end;
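The procedure assumes that the two sequences already exist; a minimal sketch of that setup and of a call (the sample values are placeholders):
CREATE SEQUENCE foo_seq;
CREATE SEQUENCE footag_seq;

CALL FOO_INSERT('some foo name', 'some tag name');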
If you cannot find out the sequence names for the ids:
create or replace procedure FOO_INSERT_T(foo_name in varchar2, FooTags_tagName in varchar2)
is
  foo_seq_val number;
begin
  insert into foo_T (name) values (foo_name);
  select id into foo_seq_val from foo_T where name = foo_name;
  insert into footags_T (fooid, tagname) values (foo_seq_val, FooTags_tagName);
  commit;
end;
If you pass the ids yourself:
insert into foo (id, name) values (123, 'foo_name');
insert into footags (id, fooid, tagname) select 444, id, 'tag_name' from foo where id = 123;
commit;
For the second procedure I assume your foo_T.name values are unique, or that the other column values in each row make the inserted row identifiable; you can narrow the SELECT by appending further conditions with AND ... AND ...
Note that there is a single COMMIT in each procedure: if an error occurs, the transaction rolls back all the inserts for both the foo and footags tables, which keeps the data consistent.
My solution uses two insert queries: the first for the parent object (the foo table), the second for its tags (the fooTags table):
<insert id="fooInsert" useGeneratedKeys="true" keyProperty="id" keyColumn="id">
INSERT INTO foo (name) VALUES (#{name})
</insert>
<insert id="fooTagsInsert">
INSERT INTO fooTags ("fooId", "tagName") VALUES
<foreach item="tag" collection="tags" separator=",">
(#{id}, #{tag})
</foreach>
</insert>
The attributes "useGeneratedKeys", "keyProperty" and "keyColumn" are used to reload newly generated keys from the database, if JDBC driver supports the getGeneratedKeys function. Alternatively we must reload the id using a select query. More info: http://www.mybatis.org/mybatis-3/sqlmap-xml.html#insert_update_and_delete
The tags insert uses "foreach" to iterate through the tag names (in this case a String array, but they could be objects). The "inner" insert refers to the "id" of the "foo" object and to "tag", the String of the current iteration. In the case of an object, inner fields can be accessed with "tag.", e.g. "tag.name".
Usage in Java code:
Foo foo = new Foo();
foo.setName("James");
foo.setTags(new String[] {"one", "two", "three"});
fooMapper.fooInsert(foo);
fooMapper.fooTagsInsert(foo);
Table definitions (tested with PostgreSQL; the seq_foo_id sequence is assumed to already exist):
CREATE TABLE public.foo (
id numeric NOT NULL DEFAULT nextval('seq_foo_id'::regclass),
"name" varchar NULL,
CONSTRAINT foo_pk PRIMARY KEY (id)
)
CREATE TABLE public.footags (
id numeric NOT NULL DEFAULT nextval('seq_foo_id'::regclass),
"fooId" numeric NULL,
"tagName" varchar NULL,
CONSTRAINT footags_pk PRIMARY KEY (id),
CONSTRAINT footags_foo_fk FOREIGN KEY ("fooId") REFERENCES public.foo(id)
)
I have an SQLite database. I am trying to insert values (users_id, lessoninfo_id) into the table bookmarks, but only if that combination does not already exist in a row.
INSERT INTO bookmarks(users_id,lessoninfo_id)
VALUES(
(SELECT _id FROM Users WHERE User='"+$('#user_lesson').html()+"'),
(SELECT _id FROM lessoninfo
WHERE Lesson="+lesson_no+" AND cast(starttime AS int)="+Math.floor(result_set.rows.item(markerCount-1).starttime)+")
WHERE NOT EXISTS (
SELECT users_id,lessoninfo_id from bookmarks
WHERE users_id=(SELECT _id FROM Users
WHERE User='"+$('#user_lesson').html()+"') AND lessoninfo_id=(
SELECT _id FROM lessoninfo
WHERE Lesson="+lesson_no+")))
This gives an error saying:
db error near where syntax.
If you never want to have duplicates, you should declare this as a table constraint:
CREATE TABLE bookmarks(
users_id INTEGER,
lessoninfo_id INTEGER,
UNIQUE(users_id, lessoninfo_id)
);
(A primary key over both columns would have the same effect.)
It is then possible to tell the database that you want to silently ignore records that would violate such a constraint:
INSERT OR IGNORE INTO bookmarks(users_id, lessoninfo_id) VALUES(123, 456)
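Applied to the insert from the question, that becomes something along these lines (a sketch; the JavaScript string concatenation is replaced with named parameters :user, :lesson and :starttime, which are assumptions):
INSERT OR IGNORE INTO bookmarks(users_id, lessoninfo_id)
VALUES(
  (SELECT _id FROM Users WHERE User = :user),
  (SELECT _id FROM lessoninfo
   WHERE Lesson = :lesson AND CAST(starttime AS int) = :starttime)
);
-- rows violating UNIQUE(users_id, lessoninfo_id) are silently skipped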
If you have a table called memos with two columns, id and text, you should be able to do it like this:
INSERT INTO memos(id,text)
SELECT 5, 'text to insert'
WHERE NOT EXISTS(SELECT 1 FROM memos WHERE id = 5 AND text = 'text to insert');
If the table already contains a row where text equals 'text to insert' and id equals 5, the insert operation will be ignored.
I don't know if this will work for your particular query, but perhaps it gives you a hint on how to proceed.
I would advise that you instead design your table so that no duplicates are allowed, as explained in CL's answer.
For a unique column, use this:
INSERT OR REPLACE INTO tableName (...) values(...);
For more information, see: sqlite.org/lang_insert
insert into bookmarks (users_id, lessoninfo_id)
select 1, 167
EXCEPT
select users_id, lessoninfo_id
from bookmarks
where users_id=1
and lessoninfo_id=167;
This is the fastest way.
For some other SQL engines, which require a FROM clause, you can select from a dummy table containing a single record (such as DUAL in Oracle), e.g.:
select 1, 167 from ONE_RECORD_DUMMY_TABLE
I'm trying to figure out how to use jOOQ with bridge tables.
Suppose you have
CREATE TABLE TableA (
  id BIGSERIAL PRIMARY KEY
);
CREATE TABLE TableB (
  id BIGSERIAL PRIMARY KEY
);
CREATE TABLE TableBridge (
  id BIGSERIAL,
  table_a_id INTEGER NOT NULL,
  table_b_id INTEGER NOT NULL,
  CONSTRAINT tablea_fk_id FOREIGN KEY (table_a_id)
    REFERENCES TableA (id) MATCH SIMPLE,
  CONSTRAINT tableb_fk_id FOREIGN KEY (table_b_id)
    REFERENCES TableB (id) MATCH SIMPLE
);
When mapping this schema using jOOQ there will be three record classes, TableARecord, TableBRecord and TableBridgeRecord.
If I want to persist a TableA record through an insert, should I simply first create and persist the TableA record, then persist the TableB records, and then manually add the TableBridge rows? Isn't there any way to also save the rows in the bridge table automatically?
There are several ways to solve this kind of problem:
1. Do it with a "single" jOOQ statement (running three SQL statements)
The most idiomatic way to solve this kind of problem with standard jOOQ would be to write a single SQL statement that takes care of all three insertions in one go:
ctx.insertInto(TABLE_BRIDGE)
.columns(TABLE_BRIDGE.TABLE_A_ID, TABLE_BRIDGE.TABLE_B_ID)
.values(
ctx.insertInto(TABLE_A)
.columns(TABLE_A.VAL)
.values(aVal)
.returning(TABLE_A.ID)
.fetchOne()
.get(TABLE_A.ID),
ctx.insertInto(TABLE_B)
.columns(TABLE_B.VAL)
.values(bVal)
.returning(TABLE_B.ID)
.fetchOne()
.get(TABLE_B.ID)
)
.execute();
The above works with jOOQ 3.8. Quite possibly, future versions will remove some of the verbosity around returning() .. fetchOne() .. get().
2. Do it with a single SQL statement
Judging from your use of the BIGSERIAL data type, I assume you're using PostgreSQL, so the following SQL statement might be an option for you as well:
WITH
new_a(id) AS (INSERT INTO table_a (val) VALUES (:aVal) RETURNING id),
new_b(id) AS (INSERT INTO table_b (val) VALUES (:bVal) RETURNING id)
INSERT INTO table_bridge (table_a_id, table_b_id)
SELECT new_a.id, new_b.id
FROM new_a, new_b
The above query is currently not supported entirely via jOOQ 3.8 API, but you can work around the jOOQ API's limitations by using some plain SQL:
ctx.execute(
"WITH "
+ " new_a(id) AS ({0}), "
+ " new_b(id) AS ({1}) "
+ "{2}",
// {0}
insertInto(TABLE_A)
.columns(TABLE_A.VAL)
.values(aVal)
.returning(TABLE_A.ID),
// {1}
insertInto(TABLE_B)
.columns(TABLE_B.VAL)
.values(bVal)
.returning(TABLE_B.ID),
// {2}
insertInto(TABLE_BRIDGE)
.columns(TABLE_BRIDGE.TABLE_A_ID, TABLE_BRIDGE.TABLE_B_ID)
.select(
select(field("new_a.id", Long.class), field("new_b.id", Long.class))
.from("new_a, new_b")
)
);
Clearly also here, there will be improvements in future jOOQ APIs.
3. Do it with UpdatableRecords
In this particular simple case, you could get away simply by calling:
TableARecord a = ctx.newRecord(TABLE_A);
a.setVal(aVal);
a.store();
TableBRecord b = ctx.newRecord(TABLE_B);
b.setVal(bVal);
b.store();
TableBridgeRecord bridge = ctx.newRecord(TABLE_BRIDGE);
bridge.setTableAId(a.getId());
bridge.setTableBId(b.getId());
bridge.store();
I have created a column family with two columns, id and name. I have inserted records into it, and now I want to update the name field of a particular id.
I have used the following CQL query
UPDATE keyspaceName/columnFalmilyName SET name='name' WHERE id = 'id'
While executing this query, it throws the following exception:
InvalidRequestException(why:line 1:56 mismatched input 'id' expecting K_KEY)...
If the query as framed is wrong, how do I update the record using CQL?
Inserts and updates require specifying the row key, whereas you seem to be trying to use a column name. What key have you used when inserting new id,name pairs?
Refer to the CQL documentation at http://cassandra.apache.org/doc/cql3/CQL.html
<update-stmt> ::= UPDATE <tablename>
( USING <option> ( AND <option> )* )?
SET <assignment> ( ',' <assignment> )*
WHERE <where-clause>
The UPDATE statement writes one or more columns for a given row in a table. The
<where-clause> is used to select the row to update and must include all columns
composing the PRIMARY KEY. Other columns values are specified through <assignment>
after the SET keyword.
You need to take a look at your ColumnFamily definition and see what your primary key is.
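For example, assuming CQL 3 and a table whose PRIMARY KEY column really is named id, the statement would look roughly like this (keyspace and table are separated by a dot, not a slash; all names and values here are placeholders):
UPDATE keyspace_name.table_name SET name = 'new name' WHERE id = 'some id';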
Have you tried:
UPDATE keyspaceName/columnFalmilyName SET name='name' WHERE id = ... (i.e. the id value without the quotes)?
I have a table as follows:
id || name    || Desg         || Sal  || deptId
1  || ajay    || MD           || 999  || 1
2  || Kaushal || Engg         || 100  || 2
3  || Vidhi   || HR           || 5000 || 3
4  || Sonu    || SSP          || 200  || 1
5  || Jay     || Asst Manager || 120  || 3
6  || Uvi     || Utra         || 450  || 5
id is the primary key column. This is just one table, named person.
I want to get the values of primary key column (here id).
My Java method will receive the table name and a where clause, and will return an ArrayList of the values of the primary key column. The problem is that which column is the primary key has to be worked out from the table name alone. Is there any query that can give these values:
<<Part of query to get values of the primary key column>> where sal > 150 AND deptId != 1 (the where clause that the method receives)
Database metadata can be extracted from the INFORMATION_SCHEMA tables, as documented in the MySQL reference manual.
I think the table you'd be looking at would be COLUMNS:
SELECT COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA = '<<your schema name>>'
AND TABLE_NAME = '<<your table name>>'
AND COLUMN_KEY = 'PRI'
although that's only for a simplistic case.
For proper index analysis, the STATISTICS table can be consulted. From (somewhat vague) memory, the index name or type can be used to figure out whether an index is the primary key index.
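For example, in MySQL the primary key index is named PRIMARY, so a sketch of that lookup (untested, in the same spirit as the COLUMNS query above) could be:
SELECT COLUMN_NAME
FROM INFORMATION_SCHEMA.STATISTICS
WHERE TABLE_SCHEMA = '<<your schema name>>'
  AND TABLE_NAME = '<<your table name>>'
  AND INDEX_NAME = 'PRIMARY'
ORDER BY SEQ_IN_INDEX;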
Once you have the primary key column name, you can simply construct another query based on that.
In other words (untested):
String prim_colm = getPrimaryKeyColumn (tableName);
String newQuery = "select " + prim_colm + " from " + tableName;
then execute newQuery.
To answer the second part of your question
<<Part of Query to get values of primary column key>>
use...
SELECT <column name> FROM <table name> WHERE....
where <column name> and <table name> come from wherever you decide to work them out: by querying the database metadata or just using a switch.