How to configure the @CosmosDBTrigger using Java?

I'm setting up @CosmosDBTrigger and need help with the code below; also, what needs to go in the name field?
I'm using the tech stack below:
JDK 1.8.0-211
Apache Maven 3.5.3
Azure CLI 2.0.71
.NET Core 2.2.401
Java:
public class Function {
    @FunctionName("CosmosTrigger")
    public void membershipProfileTrigger(
            @CosmosDBTrigger(name = "?", databaseName = "*database_name*",
                collectionName = "*collection_name*",
                leaseCollectionName = "leases",
                createLeaseCollectionIfNotExists = true,
                connectionStringSetting = "DBConnection") String[] items,
            final ExecutionContext context) {
        context.getLogger().info("item(s) changed");
    }
}
What do we need to provide in the name field?
local.settings.json:
{
    "IsEncrypted": false,
    "Values": {
        "DBConnection": "AccountEndpoint=*Account_Endpoint*"
    }
}
Expected: function starts
Result:
"Microsoft.Azure.WebJobs.Host: Error indexing method 'Functions.Cosmostrigger'. Microsoft.Azure.WebJobs.Extensions.CosmosDB: Cannot create Collection Information for collection_name in database database_name with lease leases in database database_name : Unexpected character encountered while parsing value: <. Path '', line 0, position 0. Newtonsoft.Json: Unexpected character encountered while parsing value: <. Path '', line 0, position 0."

Follow this example: https://github.com/microsoft/inventory-hub-java-on-azure/blob/master/function-apps/Notify-Inventory/src/main/java/org/inventory/hub/NotifyInventoryUpdate.java
@CosmosDBTrigger(name = "document", databaseName = "db1", collectionName = "col1", connectionStringSetting = "dbstr", leaseCollectionName = "lease1", createLeaseCollectionIfNotExists = true) String document,
Now, when you publish, put the value of dbstr as your connection string in the Application Settings of the Azure portal; after setting the properties, just restart the function app.

See the official samples here: https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-cosmosdb-v2#trigger---java-example
name is just an identifier for the binding in your function. The error you are getting is because you are telling the trigger that the collection you want to monitor for changes is called "collection_name" and that it lives in a database called "database_name".
Please use the real values; they should point to an existing collection, and your connection string DBConnection needs to be in the correct format of AccountEndpoint=https://<your-account-name>.documents.azure.com:443/;AccountKey=<your-account-key>; (you can get it from the Azure portal).
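For example, local.settings.json would then look something like this (a sketch with placeholder values; keep the real account key out of source control):
{
    "IsEncrypted": false,
    "Values": {
        "DBConnection": "AccountEndpoint=https://<your-account-name>.documents.azure.com:443/;AccountKey=<your-account-key>;"
    }
}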

Related

BigQueryIO: Query configured via options, but "Value only available at runtime"

Apache Beam 2.9.0
I have set up a pipeline that pulls data from BigQuery and does a series of transforms on it. The options have a start date attached to them using a ValueProvider:
ValueProvider<String> getStartTime();
void setStartTime(ValueProvider<String> startTime);
I then go to pull the data with BigQueryIO (changing things around a bit for the sake of making it explicit what is going on):
BigQueryIO.read(
        (SerializableFunction<SchemaAndRecord, AggregatedRowRecord>)
            input -> new BigQueryParser().apply(input.getRecord()))
    .withoutValidation()
    .withTemplateCompatibility()
    .fromQuery(
        ValueProvider.NestedValueProvider.of(
            opts.getStartTime(),
            (SerializableFunction<String, String>)
                input -> {
                    Instant instant = Instant.parse(input);
                    return String.format(
                        <large SQL statement with a %s in it>,
                        String.format(
                            "%d_%d_%d",
                            instant.get(ChronoField.YEAR),
                            instant.get(ChronoField.MONTH_OF_YEAR),
                            instant.get(ChronoField.DAY_OF_MONTH)));
                }))
    .withCoder(<coder for AggregatedRowRecords>)
    .usingStandardSql()
This is then added to a pipeline normally (p.apply(<above>)).
Now I run it:
--project=<project> \
--tempLocation=<directory> \
--stagingLocation=<directory> \
--network=dataflow \
--subnetwork=<subnetwork> \
--defaultWorkerLogLevel=DEBUG
--appName=<name>
--runner=DirectRunner
This causes the following error:
org.apache.beam.sdk.Pipeline$PipelineExecutionException: java.lang.IllegalStateException: Value only available at runtime, but accessed from a non-runtime context: RuntimeValueProvider{propertyName=startTime, default=null}
at org.apache.beam.runners.direct.DirectRunner$DirectPipelineResult.waitUntilFinish(DirectRunner.java:332)
at org.apache.beam.runners.direct.DirectRunner$DirectPipelineResult.waitUntilFinish(DirectRunner.java:302)
at org.apache.beam.runners.direct.DirectRunner.run(DirectRunner.java:197)
at org.apache.beam.runners.direct.DirectRunner.run(DirectRunner.java:64)
at org.apache.beam.sdk.Pipeline.run(Pipeline.java:313)
at org.apache.beam.sdk.Pipeline.run(Pipeline.java:299)
at <class>.main(<class>.java:<>)
Caused by: java.lang.IllegalStateException: Value only available at runtime, but accessed from a non-runtime context: RuntimeValueProvider{propertyName=startTime, default=null}
at org.apache.beam.sdk.options.ValueProvider$RuntimeValueProvider.get(ValueProvider.java:228)
at org.apache.beam.sdk.options.ValueProvider$NestedValueProvider.get(ValueProvider.java:131)
at org.apache.beam.sdk.io.gcp.bigquery.BigQueryQuerySource.createBasicQueryConfig(BigQueryQuerySource.java:230)
at org.apache.beam.sdk.io.gcp.bigquery.BigQueryQuerySource.dryRunQueryIfNeeded(BigQueryQuerySource.java:175)
at org.apache.beam.sdk.io.gcp.bigquery.BigQueryQuerySource.getTableToExtract(BigQueryQuerySource.java:115)
at org.apache.beam.sdk.io.gcp.bigquery.BigQuerySourceBase.extractFiles(BigQuerySourceBase.java:102)
at org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO$TypedRead$2.processElement(BigQueryIO.java:783)
The use of NestedValueProvider comes from this example on setting up templates:
The user provides a substring for a BigQuery query, such as a specific date. The transform uses the substring to create the full query. Calling .get() returns the full query.
Removing the value provider logic doesn't seem to help, however. Removing the ValueProvider entirely from the withQuery section works fine, but defeats the purpose of being able to set it via options.
The exception explains the issue: Apache Beam first builds the pipeline and its classes, and only then starts running data through it. During that build stage you can't access the options; they are just metadata for constructing the pipeline.
The way to overcome this is to create a ParDo function / PTransform that receives the options it needs as constructor parameters, so it can access them in its logic.
See the example below (my own use case; I faced the same issue in recent days).
The pipeline:
HistoryProcessingOptions options = PipelineOptionsFactory.fromArgs(args)
    .withValidation()
    .as(HistoryProcessingOptions.class);
Pipeline pipeline = Pipeline.create(options);
pipeline.apply(SourceRead.of(options.getSourceBigQueryTable().get(),
    options.getSourceBigQueryDataset().get(),
    options.getSourceBigQueryProject().get(),
    options.getFromDate().get(),
    options.getToDate().get()
));
The transformer itself:
public class SourceRead extends PTransform<PBegin, PCollection<TableRow>> {
    private String sourceBigQueryTable;
    private String sourceBigQueryDataset;
    private String sourceBigQueryProject;
    private String fromDate;
    private String toDate;
    private static Logger logger = LoggerFactory.getLogger(SourceRead.class);

    public SourceRead(String sourceBigQueryTable, String sourceBigQueryDataset,
                      String sourceBigQueryProject, String fromDate, String toDate) {
        this.sourceBigQueryTable = sourceBigQueryTable;
        this.sourceBigQueryDataset = sourceBigQueryDataset;
        this.sourceBigQueryProject = sourceBigQueryProject;
        this.fromDate = fromDate;
        this.toDate = toDate;
    }

    public static SourceRead of(String sourceBigQueryTable, String sourceBigQueryDataset,
                                String sourceBigQueryProject, String fromDate, String toDate) {
        return new SourceRead(sourceBigQueryTable, sourceBigQueryDataset,
            sourceBigQueryProject, fromDate, toDate);
    }

    @Override
    public PCollection<TableRow> expand(PBegin input) {
        String query = "SELECT * FROM TABLE_DATE_RANGE([" + sourceBigQueryProject + ":"
            + sourceBigQueryDataset + "." + sourceBigQueryTable + "],"
            + "TIMESTAMP('" + fromDate + "'),"
            + "TIMESTAMP('" + toDate + "'))";
        logger.info("query is " + query);
        return input.apply(BigQueryIO.readTableRows()
            .fromQuery(query));
    }
}
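For completeness, a hedged sketch of what the HistoryProcessingOptions interface could look like (hypothetical; inferred from the getter calls above, with only the date options spelled out):
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.ValueProvider;

public interface HistoryProcessingOptions extends PipelineOptions {
    // ValueProvider-typed options can be filled in at template execution time;
    // calling .get() at construction time only works when a value was passed
    // on the command line, e.g. --fromDate=2019-01-01
    ValueProvider<String> getFromDate();
    void setFromDate(ValueProvider<String> value);

    ValueProvider<String> getToDate();
    void setToDate(ValueProvider<String> value);

    // getSourceBigQueryTable/Dataset/Project follow the same pattern
}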

PASSWD_CANT_CHANGE flag not present in UserAccountControl attribute

I need to check through LDAP whether an Active Directory user has the PASSWD_CANT_CHANGE flag set. I found the UserAccountControl attribute (https://learn.microsoft.com/it-it/windows/desktop/ADSchema/a-useraccountcontrol): it works for all other flags, but it doesn't work for this one. I only need to read it, not write it.
I'm using Java with UnboundID LDAP SDK (https://ldap.com/unboundid-ldap-sdk-for-java/).
Here is my JUnit test code.
public static enum UACFlags {
    SCRIPT(0x0001),
    ACCOUNTDISABLE(0x0002),
    HOMEDIR_REQUIRED(0x0008),
    LOCKOUT(0x0010),
    PASSWD_NOTREQD(0x0020),
    PASSWD_CANT_CHANGE(0x0040),
    ENCRYPTED_TEXT_PWD_ALLOWED(0x0080),
    TEMP_DUPLICATE_ACCOUNT(0x0100),
    NORMAL_ACCOUNT(0x0200),
    INTERDOMAIN_TRUST_ACCOUNT(0x0800),
    WORKSTATION_TRUST_ACCOUNT(0x1000),
    SERVER_TRUST_ACCOUNT(0x2000),
    DONT_EXPIRE_PASSWORD(0x10000),
    MNS_LOGON_ACCOUNT(0x20000),
    SMARTCARD_REQUIRED(0x40000),
    TRUSTED_FOR_DELEGATION(0x80000),
    NOT_DELEGATED(0x100000),
    USE_DES_KEY_ONLY(0x200000),
    DONT_REQ_PREAUTH(0x400000),
    PASSWORD_EXPIRED(0x800000),
    TRUSTED_TO_AUTH_FOR_DELEGATION(0x1000000);

    private final int flag;

    private UACFlags(int flag) {
        this.flag = flag;
    }
}
@Test
public void testLDAP() throws LDAPException {
    LDAPConnection connection = //GET CONNECTION
    String username = "....";
    String search = "(sAMAccountName=" + username + ")";
    SearchRequest request = new SearchRequest("DC=....,DC=....", SearchScope.SUB,
        search, SearchRequest.ALL_USER_ATTRIBUTES);
    SearchResult result = connection.search(request);
    SearchResultEntry entry = result.getSearchEntries().get(0);
    Attribute a = entry.getAttribute("userAccountControl");
    int val = a.getValueAsInteger();
    System.out.println(Integer.toHexString(val));
    EnumSet<UACFlags> flags = EnumSet.noneOf(UACFlags.class);
    for (UACFlags f : UACFlags.values()) {
        if ((val & f.flag) == f.flag) {
            flags.add(f);
        }
    }
    System.out.println("FLAGS: " + flags);
}
I set up the flag in AD Users and Computers and it works as expected. I only want to check the flag programmatically, using Java and LDAP. Solutions other than the UserAccountControl attribute are OK!
Thanks!!
That is, unfortunately, expected.
Microsoft uses the ADS_USER_FLAG_ENUM enumeration in a couple places:
The userAccountControl attribute when using LDAP, and
The userFlags property when using the WinNT provider.
The ADS_UF_PASSWD_CANT_CHANGE flag can only be used when using the WinNT provider, which I'm not sure you can do from Java.
When you click that 'User cannot change password' checkbox in AD Users and Computers, it doesn't actually change the userAccountControl attribute. In reality, it adds two permissions on the account:
Deny Change Password to 'Everyone'
Deny Change Password to 'SELF'
There is a description of how to look for those permissions here, but the examples are in C++ and VBScript. I don't know how to view the permissions in Java. It seems difficult and I can't find any real examples.
UPDATE: From AD 2008 on, it appears this is not a "real" value in the attribute, but rather an ACE within the ACL of the entry, so the approach below NO LONGER WORKS, as far as I can tell.
Microsoft Active Directory has a neat extensible-matching rule called LDAP_MATCHING_RULE_BIT_AND, so a simple LDAP query filter like:
(userAccountControl:1.2.840.113556.1.4.803:=64)
should do the trick.
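For flags that genuinely live in userAccountControl, here is a minimal UnboundID sketch of that filter (host, credentials, and DNs are placeholders, and as noted above this particular bit is not populated for "user cannot change password" on modern AD):
import com.unboundid.ldap.sdk.Filter;
import com.unboundid.ldap.sdk.LDAPConnection;
import com.unboundid.ldap.sdk.LDAPException;
import com.unboundid.ldap.sdk.SearchResult;
import com.unboundid.ldap.sdk.SearchScope;

public class BitAndFilterTest {
    public static void main(String[] args) throws LDAPException {
        LDAPConnection connection = new LDAPConnection(
            "ad.example.com", 389, "CN=binduser,DC=example,DC=com", "password");
        // LDAP_MATCHING_RULE_BIT_AND (1.2.840.113556.1.4.803) matches entries in
        // which ALL bits of the asserted value are set in the integer attribute.
        Filter filter = Filter.create(
            "(&(sAMAccountName=jdoe)(userAccountControl:1.2.840.113556.1.4.803:=64))");
        SearchResult result = connection.search(
            "DC=example,DC=com", SearchScope.SUB, filter, "userAccountControl");
        System.out.println("Matching entries: " + result.getEntryCount());
        connection.close();
    }
}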

Get token string from tokenID using Stanford Parser in GATE

I am trying to use some Java RHS code to get the string values of dependent tokens using the Stanford dependency parser in GATE, and to add them as features of a new annotation.
I am having problems targeting just the 'dependencies' feature of the token and getting the string value from the token ID.
Specifying only 'dependencies' as below also throws a Java NullPointerException:
for (Annotation lookupAnn : tokens.inDocumentOrder()) {
    FeatureMap lookupFeatures = lookupAnn.getFeatures();
    token = lookupFeatures.get("dependencies").toString();
}
I can use gate.Utils.inDocumentOrder to get all the features of a token, but it returns all of them, including the dependent token IDs, i.e.:
dependencies = [nsubj(8390), dobj(8394)]
I would like to get just the dependent tokens' string values from these token IDs.
Is there any way to access the dependent tokens' string values and add them as features to the annotation?
Many thanks for your help
Here is a working JAPE example. It only prints to GATE's message window (stdout); it doesn't create any new annotations with the features you asked for, so please finish it yourself...
The Stanford_CoreNLP plugin has to be loaded in GATE to make this JAPE file loadable; otherwise you will get a ClassNotFoundException for the DependencyRelation class.
Imports: {
    import gate.stanford.DependencyRelation;
}

Phase: GetTokenDepsPhase
Input: Token
Options: control = all

Rule: GetTokenDepsRule
(
    {Token}
): token
-->
:token {
    // note that tokenAnnots contains only a single annotation, so the loop could be avoided...
    for (Annotation token : tokenAnnots) {
        Object deps = token.getFeatures().get("dependencies");
        // sometimes the dependencies feature is missing - skip such tokens
        if (deps == null) continue;
        // token.getFeatures().get("string") could be used instead of gate.Utils.stringFor(doc, token)...
        System.out.println("Dependencies for token " + gate.Utils.stringFor(doc, token));
        // the dependencies feature has to be cast to List<DependencyRelation>
        List<DependencyRelation> typedDeps = (List<DependencyRelation>) deps;
        for (DependencyRelation r : typedDeps) {
            // use DependencyRelation.getTargetId() to get the id of the target token
            // and inputAS.get(id) to get the annotation for that id
            Annotation targetToken = inputAS.get(r.getTargetId());
            // use DependencyRelation.getType() to get the dependency type
            System.out.println("  " + r.getType() + ": " + gate.Utils.stringFor(doc, targetToken));
        }
    }
}
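To finish it along the lines the question asks for, here is a hedged sketch of what could replace the println calls inside the loop: collect the dependency strings and attach them as a feature of a new annotation over the token. The annotation type "TokenDeps" and feature name "dependencyStrings" are made-up names, and gate.Utils.addAnn is assumed to be available in your GATE version.
// Hypothetical continuation of the :token block above
// (java.util is assumed importable in the JAPE RHS).
// Collect "type: surface string" pairs for this token's dependencies...
List<String> depStrings = new ArrayList<String>();
for (DependencyRelation r : typedDeps) {
    Annotation targetToken = inputAS.get(r.getTargetId());
    depStrings.add(r.getType() + ": " + gate.Utils.stringFor(doc, targetToken));
}
// ...and attach them as a feature of a new annotation spanning the token.
FeatureMap fm = gate.Factory.newFeatureMap();
fm.put("dependencyStrings", depStrings);
gate.Utils.addAnn(outputAS, token, "TokenDeps", fm);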

REST Service - org.hibernate.QueryException: could not resolve property error

I am trying to send data from my Android app to my REST service so that it can be stored in a database. I have successfully done this in another app I worked on, but when I try it with this project I get this error:
org.hibernate.QueryException: could not resolve property: line_type
In my Oracle Database the field is named "LINE_TYPE" and is VARCHAR2(20 BYTE).
Here is the code in my REST service entity (which I have reverse-engineered from my Oracle database):
private String lineType;

@Column(name = "LINE_TYPE")
@Size(max = 20, message = "Line Type has a max size of 20 characters.")
public String getLineType() {
    return this.lineType;
}

public void setLineType(String lineType) {
    this.lineType = lineType;
}
Also in my tableCriteria I have the getters and setters:
public String getLineType() {
    return lineType;
}

public void setLineType(String lineType) {
    this.lineType = lineType;
}
The last time I had this error it was caused by a spelling mistake or case sensitivity, but I have double- and triple-checked and that is not the case here.
I have debugged the entity that the REST service receives in NetBeans and I can see that it is receiving the data. So why can't the property be resolved?
Does anyone see anything I don't?
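One thing worth checking, since the query code isn't shown (a hedged guess): Hibernate resolves Java property names, not database column names, so a criteria restriction must reference lineType, not line_type. A minimal sketch, where MyEntity stands in for the reverse-engineered entity class:
import java.util.List;
import org.hibernate.Criteria;
import org.hibernate.Session;
import org.hibernate.criterion.Restrictions;

public class LineTypeQuery {
    public static List<?> findByLineType(Session session, String value) {
        Criteria criteria = session.createCriteria(MyEntity.class);
        // Fails: Restrictions.eq("line_type", value) throws
        // "could not resolve property: line_type" - HQL/Criteria see the
        // Java property name, not the column name.
        // Works: "lineType" is the bean property mapped by @Column(name = "LINE_TYPE").
        criteria.add(Restrictions.eq("lineType", value));
        return criteria.list();
    }
}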

How to create and publish an index programmatically using the Java client

Is it possible to programmatically create and publish secondary indexes using Couchbase's Java client 2.2.2? I want to be able to create and publish my custom secondary indexes, running Couchbase 4.1. I know this is possible with Couchbase views, but I can't find the same for indexes.
couchbase-java-client 2.3.1 is needed in order to programmatically create indexes, primary or secondary. Some of the usable methods can be found on the BucketManager, the same one that is used to upsert views. Additionally, the static method createIndex can be used; it supports both DSL and String syntax.
There are a few options to create your secondary indexes.
Option #1:
Statement query = createIndex(name).on(bucket.name(), x(fieldName));
N1qlQueryResult result = bucket.query(N1qlQuery.simple(query));
Option #2:
String query = "BUILD INDEX ON `" + bucket.name() + "` (" + fieldName + ")";
N1qlQueryResult result = bucket.query(N1qlQuery.simple(query));
Option #3 (actually multiple options here, since the method createN1qlIndex is overloaded):
bucket.bucketManager().createN1qlIndex(indexName, fields, where, true, false);
Primary index:
// Create a N1QL Primary Index (ignore if it exists)
bucket.bucketManager().createN1qlPrimaryIndex(true /* ignore if exists */, false /* defer flag */);
Secondary Index:
// Create a N1QL Index (ignore if it exists)
bucket.bucketManager().createN1qlIndex(
    "my_idx_1",
    true,  // ignoreIfExists
    false, // defer
    Expression.path("field1.id"),
    Expression.path("field2.id"));
or
// Create a N1QL Index (ignore if it exists)
bucket.bucketManager().createN1qlIndex(
    "my_idx_2",
    true,  // ignoreIfExists
    false, // defer
    "field1.id",
    "field2.id");
The first secondary index (my_idx_1) is helpful if your document is something like this:
{
    "field1": {
        "id": "value"
    },
    "field2": {
        "id": "value"
    }
}
The second secondary index (my_idx_2) is helpful if your document is something like this:
{
    "field1.id": "value",
    "field2.id": "value"
}
You should be able to do this with any 2.x, once you have a Bucket
bucket.query(N1qlQuery.simple(queryString))
where queryString is something like
String queryString = "CREATE PRIMARY INDEX ON " + bucketName + " USING GSI;";
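Put together, a minimal 2.x sketch (the cluster address and bucket name are placeholders):
import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.Cluster;
import com.couchbase.client.java.CouchbaseCluster;
import com.couchbase.client.java.query.N1qlQuery;
import com.couchbase.client.java.query.N1qlQueryResult;

public class CreatePrimaryIndex2x {
    public static void main(String[] args) {
        Cluster cluster = CouchbaseCluster.create("127.0.0.1");
        Bucket bucket = cluster.openBucket("myBucket");
        String queryString = "CREATE PRIMARY INDEX ON `" + bucket.name() + "` USING GSI;";
        N1qlQueryResult result = bucket.query(N1qlQuery.simple(queryString));
        System.out.println("Success: " + result.finalSuccess());
        cluster.disconnect();
    }
}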
As of java-client 3.x+ there is a QueryIndexManager (obtained via cluster.queryIndexes()) which provides an indexing API with the specific methods below for creating indexes:
createIndex(String bucketName, String indexName, Collection<String> fields)
createIndex(String bucketName, String indexName, Collection<String> fields, CreateQueryIndexOptions options)
createPrimaryIndex(String bucketName)
createPrimaryIndex(String bucketName, CreatePrimaryQueryIndexOptions options)
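A hedged 3.x sketch using that manager (connection details, bucket, and field names are placeholders):
import java.util.Arrays;
import com.couchbase.client.java.Cluster;
import com.couchbase.client.java.manager.query.QueryIndexManager;

public class CreateIndex3x {
    public static void main(String[] args) {
        Cluster cluster = Cluster.connect("127.0.0.1", "Administrator", "password");
        QueryIndexManager indexManager = cluster.queryIndexes();
        // Named secondary index on two fields.
        indexManager.createIndex("myBucket", "my_idx_1", Arrays.asList("field1", "field2"));
        // Primary index on the same bucket.
        indexManager.createPrimaryIndex("myBucket");
        cluster.disconnect();
    }
}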
