I want to call PostgreSQL stored procedures by name dynamically in jOOQ:
final Field<Object> function = DSL.function("report_" + name, Object.class, params.toArray(new Field[0]));
dsl().select(function).fetchArrays();
For example it generates:
select report_total_requests('83.84.85.3184');
Which returns:
report_total_requests
-----------------------
(3683,2111,0)
(29303,10644,1)
And in Java it is an array of "(3683,2111,0)" record objects.
I want to generate:
select * from report_total_requests('83.84.85.3184')
To produce:
total | users | priority
------+-------+----------
3683 | 2111 | 0
29303 | 10644 | 1
That is, in Java, an array of arrays of objects.
Any ideas?
The way forward is to use plain SQL as follows:
Name functionName = DSL.name("report_" + name);
QueryPart arguments = DSL.list(params);
dsl().select().from("{0}({1})", functionName, arguments).fetch();
Note that I've wrapped the function name in a DSL.name() object to prevent SQL injection.
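Putting it together, a minimal sketch of the plain SQL approach (assuming, as in the question, that dsl() returns a DSLContext and params is a List<Field<?>>; callReport is just an illustrative wrapper name):

import static org.jooq.impl.DSL.list;
import static org.jooq.impl.DSL.name;

import java.util.List;
import org.jooq.DSLContext;
import org.jooq.Field;
import org.jooq.Name;
import org.jooq.QueryPart;

public Object[][] callReport(DSLContext dsl, String name, List<Field<?>> params) {
    // name() quotes the identifier, preventing SQL injection
    Name functionName = name("report_" + name);
    // list() renders the arguments as a comma-separated list
    QueryPart arguments = list(params);

    // "select * from report_xyz(...)" expands the function's composite
    // return type into individual columns instead of one record value
    return dsl.select()
              .from("{0}({1})", functionName, arguments)
              .fetchArrays();
}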
I am developing an app for ordering products online. I will first present part of the ER diagram and then explain.
Here the "products" are food items and they go into the fresh_products table. As you can see, a fresh_product consists of a product species, category, type, size and grade.
When I place an order, the order id and order_status are saved in the order table. All the ordered items are saved in the order_item table. In the order_item table you can see I have a direct connection to fresh_products.
At certain times, management will decide to change the existing fresh_products. For example, let's take fish products such as Tuna. This fish item has a category called Loin (fished with Loin); after some time, management decides to remove or rename it because they no longer fish with Loin.
Now, my design will be affected by the above change. Why? When you make a change to any of the fresh_product fields, it directly affects the order_item rows that hold the information of already ordered items. So if you rename Loin, all order_items which have Loin will now be linked to the new name, which is wrong. And if you decide to delete a fresh_product, you can't do that either, because existing order history is bound to that fresh_product.
My Suggestion
As a solution to this problem, I am thinking of removing the relationship between fresh_products and order_item. Instead, I will add String fields to order_item representing all fields of fresh_products, for example productType, productCategory and so on.
Now I don't have a direct connection to fresh_products but have all the information needed. At the same time, any fresh_product item can undergo any change without affecting already purchased items in order_item.
My question is, is my suggestion the best way to solve this issue? If you have better solutions, I am open.
Consider the following; in this example, only the product name changes, but you could easily add columns for each attribute (although at some point this would move from the sublime to the ridiculous)...
Also, I rarely use correlated subqueries, so apologies if there's an error there...
create table product_history
(product_id int not null
,product_name varchar(12) not null
,date date not null
,primary key (product_id,date)
);
insert into product_history values
(1,'soda','2020-01-01'),
(1,'pop','2020-01-04'),
(1,'cola','2020-01-07');
create table order_detail
(order_id int not null
,product_id int not null
,quantity int not null default 1
,primary key(order_id,product_id)
);
insert into order_detail values
(100,1,4);
create table orders
(order_id serial primary key
,customer_id int not null
,order_date date not null
);
insert into orders values
(100,22,'2020-01-05');
SELECT o.order_id
, o.customer_id
, o.order_date
, ph.product_id
, ph.product_name
, od.quantity
FROM orders o
JOIN order_detail od
ON od.order_id = o.order_id
JOIN product_history ph
ON ph.product_id = od.product_id
WHERE ph.date IN
( SELECT MAX(x.date)
FROM product_history x
WHERE x.product_id = ph.product_id
AND x.date <= o.order_date
);
| order_id | customer_id | order_date | product_id | product_name | quantity |
| -------- | ----------- | ---------- | ---------- | ------------ | -------- |
| 100 | 22 | 2020-01-05 | 1 | pop | 4 |
So I have a table named BLOG_REPORTERS like:
blog_id | reporter_id
--------+------------
      1 |           1
      2 |           3
And REPORTER:
reporter_id | name | etc
------------+------+-----
          1 | asd  | etc
I need to perform a JOIN between them. I generated QueryDSL classes for them and tried something like:
new JPAQuery<>(entityManager)
.select(QReporter.reporter)
.from(QReporter.reporter)
.join(QBlogReporters.blogreporters)
but this is wrong, because the join() method accepts an EntityPath<P>, while QBlogReporters extends BeanPath<T>.
Is there any way to do this?
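One common way around this - a sketch, not a verified answer - is to map the join table as a @ManyToMany collection on the Reporter entity (say, Set<Blog> blogs) instead of as a standalone BeanPath-backed type; QueryDSL can then join through the generated collection path. QBlog, the blogs collection and blogId are assumed names here:

QReporter reporter = QReporter.reporter;
QBlog blog = QBlog.blog; // assumes a Blog entity with its own Q-type

List<Reporter> reporters = new JPAQuery<Reporter>(entityManager)
        .select(reporter)
        .from(reporter)
        .join(reporter.blogs, blog) // collection join accepts the mapped path
        .where(blog.id.eq(blogId))
        .fetch();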
One of the functions we implemented in SAP is not working correctly.
In SAP itself all functions work correctly and return the right values; however, when called from Java via JCo, the client expects a structure rather than a String or int.
When extracting the structure from the parameter, it gives a structure that has two unnamed fields, each with a byte length of zero.
Metadata:
{[],[]}
0,0
We tried different data types in SAP for the input parameter "I_REZEPT", such as int8 and char12.
private String sollwerte(JSONObject jsonin) throws JSONException, JCoException {
    String id = String.valueOf(jsonin.getInt("rezeptid"));
    JCoStructure in = input.getStructure("I_REZEPT");
    System.out.println("Fieldcount: " + in.getFieldCount());
    input.setValue("I_REZEPT", id); // throws the ConversionException below
    function.execute(destination);
...
Stacktrace:
com.sap.conn.jco.ConversionException: (122) JCO_ERROR_CONVERSION: Cannot convert a value of '1' from type java.lang.String to STRUCTURE at field I_REZEPT
at com.sap.conn.jco.rt.AbstractRecord.createConversionException(AbstractRecord.java:436)
at com.sap.conn.jco.rt.AbstractRecord.createConversionException(AbstractRecord.java:430)
at com.sap.conn.jco.rt.AbstractRecord.setValue(AbstractRecord.java:2824)
at com.sap.conn.jco.rt.AbstractRecord.setValue(AbstractRecord.java:3933)
at edu.hsalbsig.intellifarm.connector.sap.IntellifarmSapFunction.sollwerte(IntellifarmSapFunction.java:226)
at edu.hsalbsig.intellifarm.connector.sap.IntellifarmSapFunction.execute(IntellifarmSapFunction.java:61)
at edu.hsalbsig.intellifarm.connector.mqtt.IntellifarmMqttClient.messageArrived(IntellifarmMqttClient.java:98)
at org.eclipse.paho.client.mqttv3.internal.CommsCallback.deliverMessage(CommsCallback.java:513)
at org.eclipse.paho.client.mqttv3.internal.CommsCallback.handleMessage(CommsCallback.java:416)
at org.eclipse.paho.client.mqttv3.internal.CommsCallback.run(CommsCallback.java:213)
at java.base/java.lang.Thread.run(Thread.java:834)
While debugging, the function from SAP looks like this:
Input:
|--------|
| PARAMETERS 'INPUT'
|--------|
|I_REZEPT|
|--------|
| |
|--------|
|I_REZEPT|
|--------|
Expected was something like this:
Input:
|------------------|
| PARAMETERS 'INPUT'
|------------------|
|I_REZEPT |
|------------------|
|012345678901234567|
|------------------|
| |
|------------------|
Without knowing your function interface definition from the ABAP side, it is difficult to help here. But if input.getStructure("I_REZEPT") works, this import parameter I_REZEPT seems to be a structure. Therefore you cannot call input.setValue("I_REZEPT", id) to set a String for it, and this is what the exception is showing. I_REZEPT is an IMPORT parameter of type STRUCTURE; it is not a STRING or a CHAR type parameter. It contains various other fields - at least one.
Instead, you may call in.setValue(0, id) to set the first field of this structure, or in.setValue("FIELDNAME", id) using the correct field name within the structure.
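For illustration, a corrected sketch of the failing part of sollwerte() (FIELDNAME is a placeholder for whatever the structure's field is actually called in the ABAP dictionary):

JCoStructure in = input.getStructure("I_REZEPT");
in.setValue(0, id);             // set the first field of the structure by index
// or, by the real field name: in.setValue("FIELDNAME", id);
function.execute(destination);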
I use Vlad Mihalcea's library to map SQL arrays (PostgreSQL in my case) to JPA. Let's imagine I have an entity, e.g.
@TypeDefs({
    @TypeDef(name = "string-array", typeClass = StringArrayType.class)
})
@Entity
public class Entity {

    @Type(type = "string-array")
    @Column(columnDefinition = "text[]")
    private String[] tags;
}
The appropriate SQL is:
CREATE TABLE entity (
tags text[]
);
Using QueryDSL, I'd like to fetch rows whose tags contain all the given ones. The raw SQL could be:
SELECT * FROM entity WHERE tags #> '{"someTag","anotherTag"}'::text[];
(taken from: https://www.postgresql.org/docs/9.1/static/functions-array.html)
Is it possible to do this with QueryDSL? Something like the code below?
predicate.and(entity.tags.eqAll(<whatever>));
The 1st step is to generate the proper SQL: WHERE tags #> '{"someTag","anotherTag"}'::text[]
The 2nd step is described by coladict (thanks a lot!): figure out which functions back the operators: #> is arraycontains and ::text[] is string_to_array.
The 3rd step is to call them properly. After hours of debugging I figured out that HQL doesn't treat function calls as boolean expressions unless I add a comparison (in my case: ... = true), so the final solution looks like this:
predicate.and(
Expressions.booleanTemplate("arraycontains({0}, string_to_array({1}, ',')) = true",
entity.tags,
tagsStr)
);
where tagsStr is a String with values separated by commas.
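For completeness, a short sketch of how the pieces fit together (entity is the generated Q-type from the question; note this breaks if a tag itself contains a comma):

List<String> tags = Arrays.asList("someTag", "anotherTag");
String tagsStr = String.join(",", tags); // "someTag,anotherTag"

BooleanBuilder predicate = new BooleanBuilder();
predicate.and(Expressions.booleanTemplate(
        "arraycontains({0}, string_to_array({1}, ',')) = true",
        entity.tags, tagsStr));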
Since you can't use custom operators, you will have to use their functional equivalents. You can look them up in the psql console with \doS+. For \doS+ #> we get several results, but this is the one you want:
List of operators
Schema | Name | Left arg type | Right arg type | Result type | Function | Description
------------+------+---------------+----------------+-------------+---------------------+-------------
pg_catalog | #> | anyarray | anyarray | boolean | arraycontains | contains
It tells us the function used is called arraycontains, so now we look up that function to see its parameters, using \df arraycontains:
List of functions
Schema | Name | Result data type | Argument data types | Type
------------+---------------+------------------+---------------------+--------
pg_catalog | arraycontains | boolean | anyarray, anyarray | normal
From here, we transform the target query you're aiming for into:
SELECT * FROM entity WHERE arraycontains(tags, '{"someTag","anotherTag"}'::text[]);
You should then be able to use the builder's function call to create this condition.
ParameterExpression<String[]> tags = cb.parameter(String[].class);
Expression<Boolean> tagcheck = cb.function("arraycontains", Boolean.class, root.get(Entity_.tags), tags);
Though I use a different array solution (might publish soon), I believe it should work, unless there are bugs in the underlying implementation.
An alternative to this method would be to compile the escaped string form of the array and pass it on as the second parameter. It's easier to print if you don't treat the double quotes as optional. In that case, you have to replace String[] with String in the ParameterExpression line above.
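A sketch of building that escaped string form (double quotes written unconditionally, as suggested; backslashes and quotes inside values are escaped):

String literal = tags.stream()
        .map(t -> "\"" + t.replace("\\", "\\\\").replace("\"", "\\\"") + "\"")
        .collect(Collectors.joining(",", "{", "}"));
// e.g. {"someTag","anotherTag"} - bind it through the
// ParameterExpression<String> and cast to text[] on the database side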
For EclipseLink I created a function
CREATE OR REPLACE FUNCTION check_array(array_val text[], string_comma character varying ) RETURNS bool AS $$
BEGIN
RETURN arraycontains(array_val, string_to_array(string_comma, ','));
END;
$$ LANGUAGE plpgsql;
As pointed out by Serhii, you can then use Expressions.booleanTemplate("FUNCTION('check_array', {0}, {1}) = true", entity.tags, tagsStr).
The use case that we are working to solve with Cassandra is this: We need to retrieve a list of entity UUIDs that have been updated within a certain time range within the last 90 days. Imagine that we're building a document tracking system, so our relevant entity is a Document, whose key is a UUID.
The query we need to support in this use case is: Find all Document UUIDs that have changed between StartDateTime and EndDateTime.
Question 1: What's the best Cassandra table design to support this query?
I think the answer is as follows:
CREATE TABLE document_change_events (
event_uuid TIMEUUID,
document_uuid uuid,
PRIMARY KEY ((event_uuid), document_uuid)
) WITH default_time_to_live = 7776000;
And given that we can't do range queries on partition keys, we'd need to use the token() function. The query would then be:
SELECT document_uuid FROM document_change_events
WHERE token(event_uuid) > token(minTimeuuid(?))
AND token(event_uuid) < token(maxTimeuuid(?))
For example:
SELECT document_uuid FROM document_change_events
WHERE token(event_uuid) > token(minTimeuuid('2015-05-10 00:00+0000'))
AND token(event_uuid) < token(maxTimeuuid('2015-05-20 00:00+0000'))
Question 2: I can't seem to get the following Java code, using DataStax's driver, to reliably return the correct results.
If I run the following code 10 times, pausing 30 seconds between runs, I will have 10 rows in this table:
private void addEvent() {
String cql = "INSERT INTO document_change_events (event_uuid, document_uuid) VALUES(?,?)";
PreparedStatement preparedStatement = cassandraSession.prepare(cql);
BoundStatement boundStatement = new BoundStatement(preparedStatement);
boundStatement.setConsistencyLevel(ConsistencyLevel.ANY);
boundStatement.setUUID("event_uuid", UUIDs.timeBased());
boundStatement.setUUID("document_uuid", UUIDs.random());
cassandraSession.execute(boundStatement);
}
Here are the results:
cqlsh:> select event_uuid, dateOf(event_uuid), document_uuid from document_change_events;
event_uuid | dateOf(event_uuid) | document_uuid
--------------------------------------+--------------------------+--------------------------------------
414decc0-0014-11e5-93a9-51f9a7931084 | 2015-05-21 18:51:09-0500 | 92b6fb6a-9ded-47b0-a91c-68c63f45d338
9abb4be0-0014-11e5-93a9-51f9a7931084 | 2015-05-21 18:53:39-0500 | 548b320a-10f6-409f-a921-d4a1170a576e
6512b960-0014-11e5-93a9-51f9a7931084 | 2015-05-21 18:52:09-0500 | 970e5e77-1e07-40ea-870a-84637c9fc280
53307a20-0014-11e5-93a9-51f9a7931084 | 2015-05-21 18:51:39-0500 | 11b4a49c-b73d-4c8d-9f88-078a6f303167
ac9e0050-0014-11e5-93a9-51f9a7931084 | 2015-05-21 18:54:10-0500 | b29e7915-7c17-4900-b784-8ac24e9e72e2
88d7fb30-0014-11e5-93a9-51f9a7931084 | 2015-05-21 18:53:09-0500 | c8188b73-1b97-4b32-a897-7facdeecea35
0ba5cf70-0014-11e5-93a9-51f9a7931084 | 2015-05-21 18:49:39-0500 | a079b30f-be80-4a99-ae0e-a784d82f0432
76f56dd0-0014-11e5-93a9-51f9a7931084 | 2015-05-21 18:52:39-0500 | 3b593ca6-220c-4a8b-8c16-27dc1fb5adde
1d88f910-0014-11e5-93a9-51f9a7931084 | 2015-05-21 18:50:09-0500 | ec155e0b-39a5-4d2f-98f0-0cd7a5a07ec8
2f6b3850-0014-11e5-93a9-51f9a7931084 | 2015-05-21 18:50:39-0500 | db42271b-04f2-45d1-9ae7-0c8f9371a4db
(10 rows)
But if I then run this code:
private static void retrieveEvents(Instant startInstant, Instant endInstant) {
String cql = "SELECT document_uuid FROM document_change_events " +
"WHERE token(event_uuid) > token(?) AND token(event_uuid) < token(?)";
PreparedStatement preparedStatement = cassandraSession.prepare(cql);
BoundStatement boundStatement = new BoundStatement(preparedStatement);
boundStatement.setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM);
boundStatement.bind(UUIDs.startOf(Date.from(startInstant).getTime()),
UUIDs.endOf(Date.from(endInstant).getTime()));
ResultSet resultSet = cassandraSession.execute(boundStatement);
if (resultSet == null) {
System.out.println("None found.");
return;
}
while (!resultSet.isExhausted()) {
System.out.println(resultSet.one().getUUID("document_uuid"));
}
}
It only retrieves three results:
3b593ca6-220c-4a8b-8c16-27dc1fb5adde
ec155e0b-39a5-4d2f-98f0-0cd7a5a07ec8
db42271b-04f2-45d1-9ae7-0c8f9371a4db
Why didn't it retrieve all 10 results? And what do I need to change to achieve the correct results to support this use case?
For reference, I've tested this against dsc-2.1.1, dse-4.6 and using the DataStax Java Driver v2.1.6.
First of all, please only ask one question at a time. Both of your questions here could easily stand on their own. I know these are related, but it just makes the readers come down with a case of tl;dr.
I'll answer your 2nd question first, because the answer ties into a fundamental understanding that is central to getting the data model correct. When I INSERT your rows and run the following query, this is what I get:
aploetz#cqlsh:stackoverflow2> SELECT document_uuid FROM document_change_events
WHERE token(event_uuid) > token(minTimeuuid('2015-05-10 00:00-0500'))
AND token(event_uuid) < token(maxTimeuuid('2015-05-22 00:00-0500'));
document_uuid
--------------------------------------
a079b30f-be80-4a99-ae0e-a784d82f0432
3b593ca6-220c-4a8b-8c16-27dc1fb5adde
ec155e0b-39a5-4d2f-98f0-0cd7a5a07ec8
db42271b-04f2-45d1-9ae7-0c8f9371a4db
(4 rows)
Which is similar to what you are seeing. Why didn't that return all 10? Well, the answer becomes apparent when I include token(event_uuid) in my SELECT:
aploetz#cqlsh:stackoverflow2> SELECT token(event_uuid),document_uuid FROM document_change_events WHERE token(event_uuid) > token(minTimeuuid('2015-05-10 00:00-0500')) AND token(event_uuid) < token(maxTimeuuid('2015-05-22 00:00-0500'));
token(event_uuid) | document_uuid
----------------------+--------------------------------------
-2112897298583224342 | a079b30f-be80-4a99-ae0e-a784d82f0432
2990331690803078123 | 3b593ca6-220c-4a8b-8c16-27dc1fb5adde
5049638908563824288 | ec155e0b-39a5-4d2f-98f0-0cd7a5a07ec8
5577339174953240576 | db42271b-04f2-45d1-9ae7-0c8f9371a4db
(4 rows)
Cassandra stores partition keys (event_uuid in your case) in order of their hashed token values, not of the values themselves. You can see this when using the token function. Cassandra generates partition tokens with a process called consistent hashing to ensure even distribution across the cluster, and that hash is not order-preserving: timeuuids that are adjacent in time can land on tokens that are far apart. A token-range filter therefore returns whichever rows happen to hash between the two endpoint tokens, not the rows in your time interval.
Getting back to your first question, this means you will have to partition on a different column. My suggestion is to use a time-series technique called a "date bucket." Picking the date bucket can be tricky, as it depends on your requirements and query patterns... so that's really up to you to pick a useful one.
For the purposes of this example, I'll pick "month." So I'll re-create your table partitioning on month and clustering by event_uuid:
CREATE TABLE document_change_events2 (
event_uuid TIMEUUID,
document_uuid uuid,
month text,
PRIMARY KEY ((month), event_uuid, document_uuid)
) WITH default_time_to_live = 7776000;
Now I can query by a date range within a given month bucket:
aploetz#cqlsh:stackoverflow2> SELECT document_uuid FROM document_change_events2
WHERE month='201505'
AND event_uuid > minTimeuuid('2015-05-10 00:00-0500')
AND event_uuid < maxTimeuuid('2015-05-22 00:00-0500');
document_uuid
--------------------------------------
a079b30f-be80-4a99-ae0e-a784d82f0432
ec155e0b-39a5-4d2f-98f0-0cd7a5a07ec8
db42271b-04f2-45d1-9ae7-0c8f9371a4db
92b6fb6a-9ded-47b0-a91c-68c63f45d338
11b4a49c-b73d-4c8d-9f88-078a6f303167
970e5e77-1e07-40ea-870a-84637c9fc280
3b593ca6-220c-4a8b-8c16-27dc1fb5adde
c8188b73-1b97-4b32-a897-7facdeecea35
548b320a-10f6-409f-a921-d4a1170a576e
b29e7915-7c17-4900-b784-8ac24e9e72e2
(10 rows)
Again, month may not work for your application. So put some thought into coming up with an appropriate column to partition on, and then you should be able to solve this.
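To round this off, a sketch of the read side against the bucketed table, using the same driver APIs as the question's retrieveEvents() (the "yyyyMM" month format is an assumption, and a range that crosses months needs one query per bucket):

private static List<UUID> retrieveEventsByMonth(Instant start, Instant end, String month) {
    // month partition restricts the query to one node's data;
    // the timeuuid clustering column supports the range predicate directly
    String cql = "SELECT document_uuid FROM document_change_events2 " +
                 "WHERE month = ? AND event_uuid > ? AND event_uuid < ?";
    PreparedStatement preparedStatement = cassandraSession.prepare(cql);
    BoundStatement boundStatement = new BoundStatement(preparedStatement);
    boundStatement.bind(month,
            UUIDs.startOf(Date.from(start).getTime()),
            UUIDs.endOf(Date.from(end).getTime()));

    List<UUID> documentUuids = new ArrayList<>();
    for (Row row : cassandraSession.execute(boundStatement)) {
        documentUuids.add(row.getUUID("document_uuid"));
    }
    return documentUuids;
}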