createUserDefinedFunction: what if it already exists? - Java

I'm using the azure-documentdb Java SDK to create and use user-defined functions (UDFs).
From the official documentation I finally found how to create a UDF with the Java client:
String regexUdfJson = "{"
        + "\"id\":\"REGEX_MATCH\","
        + "\"body\":\"function (input, pattern) { return input.match(pattern) !== null; }\""
        + "}";
UserDefinedFunction udfREGEX = new UserDefinedFunction(regexUdfJson);
getDC().createUserDefinedFunction(
        myCollection.getSelfLink(),
        udfREGEX,
        new RequestOptions());
And here is a sample query:
SELECT * FROM root r WHERE udf.REGEX_MATCH(r.name, "mytest_.*")
I have to create the UDF only once, because I get an exception if I try to recreate an existing UDF:
DocumentClientException: Message: {"Errors":["The input name presented is already taken. Ensure to provide a unique name property for this resource type."]}
How can I check whether the UDF already exists?
I tried to use the readUserDefinedFunctions method without success. Any examples or other ideas?
Maybe, for the long term, we should suggest a createOrReplaceUserDefinedFunction(...) method on Azure feedback.

You can check for existing UDFs by running a query with queryUserDefinedFunctions.
Example:
List<UserDefinedFunction> udfs = client.queryUserDefinedFunctions(
        myCollection.getSelfLink(),
        new SqlQuerySpec("SELECT * FROM root r WHERE r.id=@id",
                new SqlParameterCollection(new SqlParameter("@id", myUdfId))),
        null).getQueryIterable().toList();
if (udfs.size() > 0) {
    // Found the UDF.
}
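Alternatively, a minimal create-if-missing sketch, reusing client, myCollection, and udfREGEX from above (treating a 409 Conflict as "already exists" is an assumption about how the SDK reports the error):
try {
    client.createUserDefinedFunction(myCollection.getSelfLink(), udfREGEX, new RequestOptions());
} catch (DocumentClientException e) {
    if (e.getStatusCode() == 409) {
        // 409 Conflict: the UDF already exists, so it is safe to continue
        // (or replace it here if the body has changed).
    } else {
        throw e;
    }
}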

An answer for .NET users.
var collectionAltLink = documentCollections["myCollection"].AltLink; // target collection's AltLink
var udfLink = $"{collectionAltLink}/udfs/{sampleUdfId}"; // sampleUdfId is your UDF id
var result = await _client.ReadUserDefinedFunctionAsync(udfLink);
var resource = result.Resource;
if (resource != null)
{
    // The UDF with sampleUdfId exists.
}
Here _client is Azure's DocumentClient and documentCollections is a dictionary of your DocumentDB collections.
If there is no such UDF in the given collection, ReadUserDefinedFunctionAsync throws a DocumentClientException with a NotFound status.
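The same read-by-link approach should also work from the Java SDK (a sketch; the readUserDefinedFunction call and the AltLink composition mirror the .NET answer above):
String udfLink = myCollection.getAltLink() + "/udfs/REGEX_MATCH";
try {
    client.readUserDefinedFunction(udfLink, new RequestOptions());
    // The UDF exists.
} catch (DocumentClientException e) {
    if (e.getStatusCode() == 404) {
        // The UDF does not exist.
    }
}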


How to use UpdateEventSourceMappingRequest in Java?

I'm trying to use something like this:
UpdateEventSourceMappingRequest request = new UpdateEventSourceMappingRequest()
        .withFunctionName("arn:aws:lambda:us-east-1:9999999999:function:" + functionName)
        .withEnabled(false);
But I received an error because I also have to call .withUUID(uuid):
UpdateEventSourceMappingRequest request = new UpdateEventSourceMappingRequest()
        .withUUID(uuid)
        .withFunctionName("arn:aws:lambda:us-east-1:9999999999:function:" + functionName)
        .withEnabled(false);
I don't know how to get the value of uuid (the UUID of the event source mapping in AWS Lambda).
Can you help me solve this problem?
You need to provide the UUID identifier of the event source mapping in order to update it (this field is mandatory); the update request is not intended to create the mapping.
When you create an event source mapping (here), AWS returns a response with a UUID identifier, which you can then use in the update request.
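As a sketch of that flow (assuming the AWS SDK for Java v1 with an awsLambda client already built; the ARN is illustrative):
CreateEventSourceMappingRequest createRequest = new CreateEventSourceMappingRequest()
        .withEventSourceArn("arn:aws:sqs:us-east-1:9999999999:test")
        .withFunctionName(functionName);
CreateEventSourceMappingResult createResult = awsLambda.createEventSourceMapping(createRequest);
String uuid = createResult.getUUID(); // keep this for later update requests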
Here is the solution I found:
String strUUID = "";
// List the event source mappings attached to the SQS queue that triggers the Lambda.
ListEventSourceMappingsRequest requestList = new ListEventSourceMappingsRequest()
        .withEventSourceArn("arn:aws:sqs:us-east-1:9999999999:test");
ListEventSourceMappingsResult result = awsLambda.listEventSourceMappings(requestList);
List<EventSourceMappingConfiguration> eventSourceMappings = result.getEventSourceMappings();
for (EventSourceMappingConfiguration eventLambda : eventSourceMappings) {
    strUUID = eventLambda.getUUID();
}
System.out.println("Output UUID " + strUUID);
Note that we have to use the ARN of the SQS queue that triggers the Lambda function.
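With the UUID retrieved this way, the update from the question then goes through (a sketch reusing awsLambda, strUUID, and functionName from this thread):
UpdateEventSourceMappingRequest request = new UpdateEventSourceMappingRequest()
        .withUUID(strUUID)
        .withFunctionName("arn:aws:lambda:us-east-1:9999999999:function:" + functionName)
        .withEnabled(false);
awsLambda.updateEventSourceMapping(request);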

Use "Selector modules" with DataMovement SDK MarkLogic [Java] [MarkLogic] [dmsdk] [data-movement-sdk][ml-java-api]

I'm using the Data Movement SDK from the MarkLogic Java API to transform several documents. So far I can transform documents by using a query batcher and a transform, but I'm only able to select URIs through StructuredQuery objects.
My question is: how can I use a selector module from my database instead of defining the selection in my Java application?
Update:
I already have code that looks up document URIs and applies a transform to them. I want to change that query batcher to use a (selector) module instead of selecting all documents in a directory:
public TransformExecutionResults applyTransformByModule(String transformName, String filterText, int batchSize,
        int threadCount, String selectorModuleName, Map<String, String> parameters) {
    final ConcurrentHashMap<String, TransformExecutionResults> transformResult = new ConcurrentHashMap<>();
    try {
        // Specify a server-side transformation module (stored procedure) by name.
        ServerTransform transform = new ServerTransform(transformName);
        ApplyTransformListener transformListener = new ApplyTransformListener()
                .withTransform(transform)
                .withApplyResult(ApplyResult.REPLACE) // transform in place, i.e. rewrite
                .onSuccess(batch -> {
                    transformResult.compute(transformName, (k, v) -> TransformExecutionResults.Success);
                    System.out.println("Transformation " + transformName + " executed successfully.");
                }).onSkipped(batch -> {
                    System.out.println("Transformation " + transformName + " was skipped.");
                    transformResult.compute(transformName, (k, v) -> TransformExecutionResults.Skipped);
                }).onFailure((batchListener, throwable) -> {
                    System.err.println("Transformation " + transformName + " executed with errors.");
                    transformResult.compute(transformName, (k, v) -> TransformExecutionResults.Failed);
                });
        // Apply the transformation only to the documents that match a query.
        QueryManager qm = DbClient.newQueryManager();
        StructuredQueryBuilder sqb = qm.newStructuredQueryBuilder();
        // Instead of this StructuredQueryDefinition, I want to use a module to get all URIs.
        StructuredQueryDefinition queryBySubdirectory = sqb.directory(true, "/temp/" + filterText + "/");
        final QueryBatcher batcher = DMManager.newQueryBatcher(queryBySubdirectory);
        batcher.withBatchSize(batchSize);
        batcher.withThreadCount(threadCount);
        batcher.withConsistentSnapshot();
        batcher.onUrisReady(transformListener).onQueryFailure(exception -> {
            exception.printStackTrace();
            System.out.println("There was an error in the transform process.");
        });
        final JobTicket ticket = DMManager.startJob(batcher);
        batcher.awaitCompletion();
        DMManager.stopJob(ticket);
    } catch (Exception fault) {
        transformResult.compute(transformName, (k, v) -> TransformExecutionResults.GeneralException);
    }
    return transformResult.get(transformName);
}
If the job is small enough, you can implement the document rewriting within your e-node code, either by making a call to a resource service extension:
http://docs.marklogic.com/guide/java/resourceservices#id_27702
http://docs.marklogic.com/javadoc/client/com/marklogic/client/extensions/ResourceServices.html
or by invoking a main module:
http://docs.marklogic.com/guide/java/resourceservices#id_84134
If the job is too long to fit in a single transaction, you can create a QueryBatcher with a document URI iterator instead of a query. See:
http://docs.marklogic.com/javadoc/client/com/marklogic/client/datamovement/DataMovementManager.html#newQueryBatcher-java.util.Iterator-
For some examples illustrating the approach, see the second half of the second example in the class description for QueryBatcher:
http://docs.marklogic.com/javadoc/client/com/marklogic/client/datamovement/QueryBatcher.html
as well as the second half of this example:
http://docs.marklogic.com/javadoc/client/com/marklogic/client/datamovement/UrisToWriterListener.html
In your case, you could implement an Iterator that calls a resource service extension or invokes a main module to get and return the URIs (preferably with read-ahead), blocking when necessary; a sketch follows below.
By returning the URIs to the client, it's also easy to log them for later audit.
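A minimal sketch of that Iterator-based approach, reusing DbClient and DMManager from the question (the module path and its one-URI-per-line output contract are illustrative assumptions):
// Evaluate a server-side selector module that prints one URI per line.
ServerEvaluationCall call = DbClient.newServerEval()
        .modulePath("/ext/my-uri-selector.sjs"); // hypothetical selector module
Iterator<String> uris = Arrays.asList(call.evalAs(String.class).split("\n")).iterator();
// Build the batcher from the URI iterator instead of a structured query.
QueryBatcher batcher = DMManager.newQueryBatcher(uris)
        .withBatchSize(batchSize)
        .withThreadCount(threadCount)
        .onUrisReady(transformListener)
        .onQueryFailure(exception -> exception.printStackTrace());
JobTicket ticket = DMManager.startJob(batcher);
batcher.awaitCompletion();
DMManager.stopJob(ticket);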
Hoping that helps,

Azure Document DB - Java 1.9.5 | Authorization Error

I have a collection with some documents in it. In my application I create this collection first and then insert documents. Based on the requirements I also need to truncate the collection (delete all documents). Using the DocumentDB Java API, I have written the following code for this purpose:
DocumentClient documentClient = getConnection(masterkey, server, portNo);
List<Database> databaseList = documentClient
        .queryDatabases("SELECT * FROM root r WHERE r.id='" + schemaName + "'", null)
        .getQueryIterable().toList();
DocumentCollection collection = null;
Database databaseCache = databaseList.get(0);
List<DocumentCollection> collectionList = documentClient
        .queryCollections(databaseCache.getSelfLink(),
                "SELECT * FROM root r WHERE r.id='" + collectionName + "'", null)
        .getQueryIterable().toList();
// truncate logic
if (collectionList.size() > 0) {
    collection = collectionList.get(0);
    if (truncate) {
        try {
            documentClient.deleteDocument(collection.getSelfLink(), null);
        } catch (DocumentClientException e) {
            e.printStackTrace();
        }
    }
} else { // create logic
    RequestOptions requestOptions = new RequestOptions();
    requestOptions.setOfferType("S1");
    collection = new DocumentCollection();
    collection.setId(collectionName);
    try {
        collection = documentClient
                .createCollection(databaseCache.getSelfLink(), collection, requestOptions)
                .getResource();
    } catch (DocumentClientException e) {
        e.printStackTrace();
    }
}
With the above code I am able to create a new collection successfully, and I am able to insert documents into it. But while truncating the collection I get the error below:
com.microsoft.azure.documentdb.DocumentClientException: The input authorization token can't serve the request. Please check that the expected payload is built as per the protocol, and check the key being used. Server used the following payload to sign: 'delete
colls
eyckqjnw0ae=
I am using Azure DocumentDB Java API version 1.9.5.
It would be of great help if you could point out the error in my code, or suggest a better way of truncating a collection. I would really appreciate any help.
According to your description and code, I think the issue is caused by the code below.
try {
    documentClient.deleteDocument(collection.getSelfLink(), null);
} catch (DocumentClientException e) {
    e.printStackTrace();
}
It seems that you want to delete a document with the code above, but you are passing a collection link as the documentLink argument.
If your real intention is to delete a collection, use the method DocumentClient.deleteCollection(collectionLink, options) instead.
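For the original goal of truncating (deleting all documents while keeping the collection), a minimal sketch reusing documentClient and collection from the question:
// Delete every document in the collection, one by one.
for (Document doc : documentClient
        .queryDocuments(collection.getSelfLink(), "SELECT * FROM root r", null)
        .getQueryIterable()) {
    documentClient.deleteDocument(doc.getSelfLink(), null);
}
// Or drop the whole collection instead:
// documentClient.deleteCollection(collection.getSelfLink(), null);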

Call MongoDB function from Java

I'm trying to call a stored JavaScript function from the MongoDB Java driver.
I have been following this guide to store the function on the DB server, and I'm able to call the function from the mongo shell and have the result returned.
However, I cannot figure out how to call the same function from Java.
According to this http://api.mongodb.org/java/current/com/mongodb/DB.html#doEval-java.lang.String-java.lang.Object...- there's a method called doEval
I have also tried to use it with this method:
public static String callFunction() {
    try (MongoClient client = new MongoClient("localhost")) {
        com.mongodb.DB db = client.getDB("TestDB");
        return db.doEval("echoFunction", 3).toString();
    }
}
But when I call the method, this is what I get:
{ "retval" : { "$code" : "function (x) {\n return x;\n}"} , "ok" : 1.0}
and I would expect to get the number 3 back in this case.
Another problem with the above code is that client.getDB() is deprecated. As I understand it, the new method to call is client.getDatabase(), which returns a MongoDatabase object, but according to the API there is no method on it to execute a function.
So my question is: is it possible to execute a stored JavaScript function on the database server from Java and get back its result? If it is possible, I would appreciate some help on how to do it.
Thank you.
Edit:
According to a comment on Calling server js function on mongodb from java:
"It seems like getNextSequence is a function written in the mongo
javascript shell. Neither the database (mongod) nor the Java side
knows this function exists and neither is able to interprete the
Javascript code the function contains. You will have to reimplement it
in Java. "
The function I'm trying to implement is a bit more complex than the example above - it's supposed to return a collection of documents, and that does not seem to work with the db.doEval method.
So I guess the comment is correct?
You can do all of this with the Java driver.
MongoClient mongoClient = new MongoClient();
MongoDatabase mdb = mongoClient.getDatabase("TestDB");
/* run this snippet once at bootstrap */
BsonDocument echoFunction = new BsonDocument("value",
        new BsonJavaScript("function(x1) { return x1; }"));
BsonDocument myAddFunction = new BsonDocument("value",
        new BsonJavaScript("function (x, y) { return x + y; }"));
mdb.getCollection("system.js").updateOne(
        new Document("_id", "echoFunction"),
        new Document("$set", echoFunction),
        new UpdateOptions().upsert(true));
mdb.getCollection("system.js").updateOne(
        new Document("_id", "myAddFunction"),
        new Document("$set", myAddFunction),
        new UpdateOptions().upsert(true));
mdb.runCommand(new Document("$eval", "db.loadServerScripts()"));
/* end of bootstrap snippet */
Document doc1 = mdb.runCommand(new Document("$eval", "echoFunction(5)"));
System.out.println(doc1);
The output is:
Document{{retval=5.0, ok=1.0}}
You should do this instead:
return db.doEval("echoFunction(3)").toString();
If you use just the function name in eval, you only refer to the server-side JavaScript variable that stores the function's code; it doesn't execute the function. When you use parentheses, you ask the server to actually execute it. If you need to send something more complex than a number, I would advise using a JSON serializer.
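Relatedly, doEval can also pass arguments separately instead of concatenating them into the code string, which avoids quoting problems (a sketch reusing the db handle from the question; it assumes echoFunction is already stored in system.js):
// Pass 3 as an argument to a wrapper function that calls the stored one.
CommandResult result = db.doEval("function(x) { return echoFunction(x); }", 3);
Object retval = result.get("retval"); // 3.0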
I resolved the same issue in the following way:
I ran a command in the mongo shell to create my stored JavaScript functions:
db.system.js.save(
    {
        _id: "echoFunction",
        value: function(x1) { return x1; }
    }
)
db.system.js.save(
    {
        _id: "myAddFunction",
        value: function (x, y) { return x + y; }
    }
);
db.system.js.save(
    {
        _id: "fullFillCollumns",
        value: function () {
            for (i = 0; i < 2000; i++) {
                db.numbers.save({num: i});
            }
        }
    }
);
To execute these functions from the MongoDB Java driver:
MongoClient mongoClient = new MongoClient();
MongoDatabase db = mongoClient.getDatabase("databaseName");
db.runCommand(new Document("$eval", "fullFillCollumns()"));
Document doc1 = db.runCommand(new Document("$eval", "echoFunction(5)"));
System.out.println(doc1);
Document doc2 = db.runCommand(new Document("$eval", "myAddFunction(5,8)"));
System.out.println(doc2);
I can see that the numbers collection was created and filled with values. In the IntelliJ IDEA console I see:
Document{{retval=5.0, ok=1.0}}
Document{{retval=13.0, ok=1.0}}

MongoDB self-join query on a single collection

I'd like to do something like
SELECT e1.sender
FROM email as e1, email as e2
WHERE e1.sender = e2.receiver;
but in MongoDB. I found many forum posts about joins, which can be implemented via MapReduce in MongoDB, but I don't understand how to do it in this self-join example.
I was thinking about something like this:
var map1 = function() {
    var output = {
        sender: db.collectionSender.email,
        receiver: db.collectionReceiver.findOne({email: db.collectionSender.email}).email
    };
    emit(this.email, output);
};
var reduce1 = function(key, values) {
    var outs = {sender: null, receiver: null};
    values.forEach(function(v) {
        if (outs.sender == null) {
            outs.sender = v.sender;
        }
        if (outs.receiver == null) {
            outs.receiver = v.receiver;
        }
    });
    return outs;
};
db.email.mapReduce(map1, reduce1, {out: 'rec_send_email'})
to create two new collections - collectionReceiver containing only receiver emails and collectionSender containing only sender emails,
OR
var map2 = function() {
    var output = {
        sender: this.sender,
        receiver: db.email.findOne({receiver: this.sender})
    };
    emit(this.sender, output);
};
var reduce2 = function(key, values) {
    var outs = {sender: null, receiver: null};
    values.forEach(function(v) {
        if (outs.sender == null) {
            outs.sender = v.sender;
        }
        if (outs.receiver == null) {
            outs.receiver = v.receiver;
        }
    });
    return outs;
};
db.email.mapReduce(map2, reduce2, {out: 'rec_send_email'})
but neither of them works, and I don't understand this MapReduce thing well. Could somebody explain it to me, please? I was inspired by this article: http://tebros.com/2011/07/using-mongodb-mapreduce-to-join-2-collections/
Additionally, I need to write it in Java. Is there any way to solve it?
If you need to implement a "self-join" when using MongoDB then you may have structured your schema incorrectly (or sub-optimally).
In MongoDB (and noSQL in general) the schema structure should reflect the queries you will need to run against them.
It looks like you are assuming a collection of emails where each document has one sender and one receiver, and you now want to find all senders who also happen to be receivers of email. The only way to do this is via two simple queries, not via map/reduce (which would be far more complex and unnecessary; besides, the way you've written them wouldn't work, since you can't run queries from within a map function).
You are writing in Java - why not make two queries: the first to get all unique senders, and the second to find all unique receivers who are also in the list of senders?
In the shell it would be:
var senderList = db.email.distinct("sender");
var receiverList = db.email.distinct("receiver", {"receiver":{$in:senderList}})
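And a minimal sketch of the same two queries with the MongoDB Java driver (the database and collection names are assumptions):
import com.mongodb.MongoClient;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import org.bson.Document;
import java.util.ArrayList;
import java.util.List;

MongoClient client = new MongoClient();
MongoCollection<Document> email = client.getDatabase("test").getCollection("email");
// First query: all distinct senders.
List<String> senders = email.distinct("sender", String.class).into(new ArrayList<>());
// Second query: distinct receivers that also appear in the sender list.
List<String> sendersWhoReceive = email
        .distinct("receiver", Filters.in("receiver", senders), String.class)
        .into(new ArrayList<>());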
