How to generate runtime data using DB-stored GString definitions (Java/Groovy)

Hi, how can I use database-stored GString definitions for dynamically generated data? I was able to use a GString to pick and choose row attributes when the format is defined in the code:
code_format = "${-> row.ACCOUNT} ${-> row.ACCOUNT_OWNER}"
However, when the same definition is pulled from the database, my code does not work.
Sql sql = Sql.newInstance(url, login, password, driver)
sql.eachRow(SQL) { row ->
    code_format = "${-> row.ACCOUNT} ${-> row.ACCOUNT_OWNER}"
    database_format = "${-> row.REPORT_ATTRIBUTES}"
    println "1- " + code_format
    println "2- " + database_format
    println "CODE : " + code_format.dump()
    println "DB : " + database_format.dump()
}
When I run this code I get the following output:
1- FlowerHouse Joe
2- ${-> row.ACCOUNT} ${-> row.ACCOUNT_OWNER}
CODE : <org.codehaus.groovy.runtime.GStringImpl#463cf024 strings=[, , ] values=[GString$_run_closure1_closure2#44f289ee, GString$_run_closure1_closure3#f3d8b9f]>
DB : org.codehaus.groovy.runtime.GStringImpl#4f5e9da9 strings=[, ] values=[GString$_run_closure1_closure4#11997b8a]

row.REPORT_ATTRIBUTES returns a plain String, because the database knows nothing about Groovy's GString format.
A GString is a template, and a template can be created from a String at runtime.
So you can do something like:
def engine = new groovy.text.SimpleTemplateEngine()
println engine.createTemplate(row.REPORT_ATTRIBUTES).make([row:row]).toString()
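For instance, applied inside the original loop (a minimal sketch, untested; it assumes REPORT_ATTRIBUTES holds a template such as ${row.ACCOUNT} ${row.ACCOUNT_OWNER}, and the ${-> row.ACCOUNT} closure form should be accepted the same way; url, login, password, driver and SQL are the same values as in the question):
import groovy.sql.Sql
import groovy.text.SimpleTemplateEngine

def engine = new SimpleTemplateEngine()
Sql sql = Sql.newInstance(url, login, password, driver)
sql.eachRow(SQL) { row ->
    // Compile the DB-stored format into a template, then bind the current row.
    def database_format = engine.createTemplate(row.REPORT_ATTRIBUTES)
                                .make([row: row])
                                .toString()
    println "DB : " + database_format
}
If the same format string repeats across rows, consider caching the compiled template rather than calling createTemplate once per row.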


How can I get all the entities used in a SPARQL-DL Query?

I'm currently working on a project where I need to graphically represent a SPARQL-DL query.
To do that, I need to get all the entities used in the query (and, in the end, the entities in its results as well). I'm struggling with getting the entities of the query. Is there an easy way to get all the atoms of a query?
The library I'm using is OWL-API 4.2.8 with the latest SPARQL-DL-API, and I'm using the Example_Basic.java file to try my method.
Here's the query I used as an example (it returns all the wines located in New Zealand):
PREFIX wine: <http://www.w3.org/TR/2003/PR-owl-guide-20031209/wine#>
SELECT ?wine
WHERE {
PropertyValue(?wine, wine:locatedIn, wine:NewZealandRegion)
}
The method I use:
private void extractAllQueryEntities(QueryResult result) {
    List<QueryAtomGroup> queryAtomGroups = result.getQuery().getAtomGroups();
    for (QueryAtomGroup queryAtomGroup : queryAtomGroups) {
        List<QueryAtom> atoms = queryAtomGroup.getAtoms();
        System.out.println("Size of the atoms: " + atoms.size());
        Iterator<QueryAtom> queryAtom = atoms.iterator();
        while (queryAtom.hasNext()) {
            QueryAtom element = queryAtom.next();
            System.out.println("atom: " + element);
            List<QueryArgument> arguments = element.getArguments();
            for (QueryArgument argument : arguments) {
                System.out.println("type: " + argument.getType()
                        + " : value: " + argument.getValueAsString());
            }
        }
    }
}
And here's the result I get from my method:
Results:
.
.
.
some wines
.
.
.
Size of the atoms: 1
atom: PropertyValue(?de.derivo.sparqldlapi.Var#37b009, http://www.w3.org/TR/2003/PR-owl-guide-20031209/wine#locatedIn, http://www.w3.org/TR/2003/PR-owl-guide-20031209/wine#NewZealandRegion)
type: VAR : value: wine
type: URI : value: http://www.w3.org/TR/2003/PR-owl-guide-20031209/wine#locatedIn
type: URI : value: http://www.w3.org/TR/2003/PR-owl-guide-20031209/wine#NewZealandRegion
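A small variation of the method above (a sketch, untested) that collects the entity IRIs into a set instead of printing them, using only the accessors already shown; whether variables like ?wine should be kept too depends on the graphical representation:
import java.util.LinkedHashSet;
import java.util.Set;

private Set<String> extractAllQueryEntities(QueryResult result) {
    Set<String> entities = new LinkedHashSet<>();
    for (QueryAtomGroup group : result.getQuery().getAtomGroups()) {
        for (QueryAtom atom : group.getAtoms()) {
            for (QueryArgument argument : atom.getArguments()) {
                // URI arguments are named entities; VAR arguments are query variables.
                if ("URI".equals(String.valueOf(argument.getType()))) {
                    entities.add(argument.getValueAsString());
                }
            }
        }
    }
    return entities;
}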

SparkSql and REGEX

In my case I use a Dataset (DataFrame) in Java Spark SQL.
The Dataset comes from a JSON file, which is made up of key-value pairs. When I launch a query to see a value, I write, for example:
SELECT key1.name from table
Example JSON file:
{
  "key1":
    { "name": ".....", ... },
  "key2":
    { "name": "....", ... }
}
My question is: when I want to access all the keys, I believe I should use a regex like
select key*.name from table
but I don't know the regex! Please help.
I am afraid no such syntax is available in (Spark) SQL.
You may want to construct your query programmatically, though. Something like:
String sql = Stream.of(ds.schema().fieldNames())
        .filter(name -> name.startsWith("key"))
        .collect(Collectors.joining(", ", "select ", " from table"));
System.out.println(sql);
or even
Dataset<Row> result = spark.table("table")
        .select(Stream.of(ds.schema().fieldNames())
                .filter(name -> name.startsWith("key"))
                .map(name -> ds.col(name))
                .toArray(Column[]::new));
result.show();
HTH!
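Putting the first variant together into a runnable sketch (untested; the data.json path, the view name, and the appended .name are assumptions made to match the key*.name intent of the question):
import java.util.stream.Collectors;
import java.util.stream.Stream;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class SelectAllKeys {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("select-all-keys")
                .master("local[*]")
                .getOrCreate();

        // Hypothetical input file; adjust the path to your JSON.
        Dataset<Row> ds = spark.read().json("data.json");
        ds.createOrReplaceTempView("table");

        // Build "select key1.name, key2.name, ... from `table`" from the schema.
        String sql = Stream.of(ds.schema().fieldNames())
                .filter(name -> name.startsWith("key"))
                .map(name -> name + ".name")
                .collect(Collectors.joining(", ", "select ", " from `table`"));

        spark.sql(sql).show();
        spark.stop();
    }
}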

How do I read Windows Event Log field names using JNA?

I am using JNA to read Windows event logs. I can get a fair amount of data out of each record but I can't quite get the field names.
To read the logs I am doing:
EventLogIterator iter = new EventLogIterator("Security");
while (iter.hasNext()) {
    EventLogRecord record = iter.next();
    System.out.println("Event ID: " + record.getEventId()
            + ", Event Type: " + record.getType()
            + ", Event Source: " + record.getSource());
    String[] strings = record.getStrings();
    for (String str : strings) {
        System.out.println(str);
    }
}
I can easily get data like the ID, type, and source. I can also get the list of strings, which may correspond to SubjectUserSid, SubjectUserName, etc.
What I'm trying to get is that data together with its field names. Is there an easy way to extract the field names/headers for each of the strings from record.getStrings()? I noticed there is a byte[] data variable in the record; I have tried to read it but haven't been able to get any useful information from it. I know I can get the data length and offset for certain variables, so I think I could extract the data I want that way, but I was wondering whether that is correct or whether there is an easier way.
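Regarding the length/offset route: JNA exposes the raw record header as WinNT.EVENTLOGRECORD via record.getRecord(), so a sketch along these lines (untested, assuming the same com.sun.jna.platform.win32 classes used above) shows where the strings and data sit inside the record's byte buffer:
// Sketch: inspect the raw EVENTLOGRECORD header fields.
EventLogIterator iter = new EventLogIterator("Security");
while (iter.hasNext()) {
    EventLogRecord record = iter.next();
    WinNT.EVENTLOGRECORD raw = record.getRecord();
    System.out.println("NumStrings: " + raw.NumStrings
            + ", StringOffset: " + raw.StringOffset
            + ", DataLength: " + raw.DataLength
            + ", DataOffset: " + raw.DataOffset);
}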

SpagoBI multi value parameter

I'm trying to create a multi-value parameter in SpagoBI.
Here is my dataset query, whose last line appears to be causing an issue:
select C."CUSTOMERNAME", C."CITY", D."YEAR", P."NAME"
from "CUSTOMER" C, "DAY" D, "PRODUCT" P, "TRANSACTIONS" T
where C."CUSTOMERID" = T."CUSTOMERID"
and D."DAYID" = T."DAYID"
and P."PRODUCTID" = T."PRODUCTID"
and _CITY_
I created a beforeOpen script in my dataset which looks like this:
this.queryText = this.queryText.replace(_CITY_, " CUSTOMER.CITY in ( "+params["cp"].value+" ) ");
My parameter is set as a string, display type dynamic list box.
When I run the report I get the following error:
org.eclipse.birt.report.engine.api.EngineException: There are errors evaluating script "
this.queryText = this.queryText.replace(_CITY_, " CUSTOMER.CITY in ( "+params["cp"].value+" ) ");
":
Fail to execute script in function __bm_beforeOpen(). Source:
Could anyone please help me?
Hello, I managed to solve the problem. Here is my code:
var substring = "";
var strParamValsSelected = reportContext.getParameterValue("citytext");
// Note: the "?" before the parameter values is required here.
substring += "?," + strParamValsSelected;
this.queryText = this.queryText.replace("'xxx'", substring);
As you can see, the "?" is necessary before my parameter. Maybe it will help somebody. Thank you so much for your comments.
If you are using SpagoBI server with the Highcharts (JFreeChart engine) / JSChart engine, you can just use ($P{param_url}) in the query, or build a dynamic query using JavaScript / Groovy script.
So your query could also be:
select C."CUSTOMERNAME", C."CITY", D."YEAR", P."NAME"
from "CUSTOMER" C, "DAY" D, "PRODUCT" P, "TRANSACTIONS" T
where C."CUSTOMERID" = T."CUSTOMERID"
and D."DAYID" = T."DAYID"
and P."PRODUCTID" = T."PRODUCTID"
and C."CITY" in ('$P{param_url}')

How to create and publish an index programmatically using the Java client

Is it possible to programmatically create and publish secondary indexes using Couchbase's Java client 2.2.2? I want to be able to create and publish my custom secondary indexes, running Couchbase 4.1. I know this is possible with Couchbase views, but I can't find the same for indexes.
couchbase-java-client 2.3.1 is needed in order to programmatically create primary or secondary indexes. Some of the usable methods can be found on the BucketManager, the same one that is used to upsert views. Additionally, the static method createIndex can be used; it supports both DSL and String syntax.
There are a few options to create your secondary indexes.
Option #1:
// Requires: import static com.couchbase.client.java.query.Index.createIndex;
//           import static com.couchbase.client.java.query.dsl.Expression.x;
Statement query = createIndex(name).on(bucket.name(), x(fieldName));
N1qlQueryResult result = bucket.query(N1qlQuery.simple(query));
Option #2:
String query = "BUILD INDEX ON `" + bucket.name() + "` (" + fieldName + ")";
N1qlQueryResult result = bucket.query(N1qlQuery.simple(query));
Option #3 (actually multiple options here, since the method createN1qlIndex is overloaded):
bucket.bucketManager().createN1qlIndex(indexName, fields, where, true, false);
Primary index:
// Create a N1QL Primary Index (ignore if it exists)
bucket.bucketManager().createN1qlPrimaryIndex(true /* ignore if exists */, false /* defer flag */);
Secondary Index:
// Create a N1QL Index (ignore if it exists)
bucket.bucketManager().createN1qlIndex(
        "my_idx_1",
        true,   // ignoreIfExists
        false,  // defer
        Expression.path("field1.id"),
        Expression.path("field2.id"));
or
// Create a N1QL Index (ignore if it exists)
bucket.bucketManager().createN1qlIndex(
        "my_idx_2",
        true,   // ignoreIfExists
        false,  // defer
        "field1.id",
        "field2.id");
The first secondary index (my_idx_1) is helpful if your document is something like this:
{
  "field1" : {
    "id" : "value"
  },
  "field2" : {
    "id" : "value"
  }
}
The second secondary index (my_idx_2) is helpful if your document is something like this:
{
  "field1.id" : "value",
  "field2.id" : "value"
}
You should be able to do this with any 2.x version, once you have a Bucket:
bucket.query(N1qlQuery.simple(queryString))
where queryString is something like:
String queryString = "CREATE PRIMARY INDEX ON " + bucketName + " USING GSI;";
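Put together, a minimal end-to-end sketch with the 2.x SDK (untested; the hostname and bucket name are placeholders):
import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.Cluster;
import com.couchbase.client.java.CouchbaseCluster;
import com.couchbase.client.java.query.N1qlQuery;
import com.couchbase.client.java.query.N1qlQueryResult;

public class CreatePrimaryIndexExample {
    public static void main(String[] args) {
        Cluster cluster = CouchbaseCluster.create("localhost"); // placeholder host
        Bucket bucket = cluster.openBucket("default");          // placeholder bucket

        String queryString = "CREATE PRIMARY INDEX ON `" + bucket.name() + "` USING GSI;";
        N1qlQueryResult result = bucket.query(N1qlQuery.simple(queryString));
        System.out.println("Index created: " + result.finalSuccess());

        cluster.disconnect();
    }
}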
As of java-client 3.x+ there is a QueryIndexManager (obtained via cluster.queryIndexes()) which provides an indexing API with the following methods to create indexes:
createIndex(String bucketName, String indexName, Collection<String> fields)
createIndex(String bucketName, String indexName, Collection<String> fields, CreateQueryIndexOptions options)
createPrimaryIndex(String bucketName)
createPrimaryIndex(String bucketName, CreatePrimaryQueryIndexOptions options)
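A minimal sketch against that 3.x API (untested; the connection string, credentials, and bucket name are placeholders):
import java.util.Arrays;

import com.couchbase.client.java.Cluster;
import com.couchbase.client.java.manager.query.QueryIndexManager;

public class CreateIndex3x {
    public static void main(String[] args) {
        // Placeholder connection details.
        Cluster cluster = Cluster.connect("localhost", "Administrator", "password");
        QueryIndexManager indexManager = cluster.queryIndexes();

        // Primary index on the bucket, then a secondary index on two fields.
        indexManager.createPrimaryIndex("my-bucket");
        indexManager.createIndex("my-bucket", "my_idx", Arrays.asList("field1", "field2"));

        cluster.disconnect();
    }
}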
