How to create and publish Index using Java Client Programmatically - java

Is it possible to programmatically create and publish secondary indexes using Couchbase's Java Client 2.2.2? I want to be able to create and publish my custom secondary indexes, running Couchbase 4.1. I know this is possible with Couchbase views, but I can't find the same for indexes.

couchbase-java-client 2.3.1 is needed in order to programmatically create indexes, primary or secondary. Some of the usable methods can be found on the BucketManager, the same one that is used to upsert views. Additionally, the static method createIndex can be used; it supports both the DSL and String syntax.
There are a few options to create your secondary indexes.
Option #1:
// requires: import static com.couchbase.client.java.query.Index.createIndex;
//           import static com.couchbase.client.java.query.dsl.Expression.x;
Statement query = createIndex(name).on(bucket.name(), x(fieldName));
N1qlQueryResult result = bucket.query(N1qlQuery.simple(query));
Option #2:
String query = "CREATE INDEX `" + name + "` ON `" + bucket.name() + "`(" + fieldName + ")";
N1qlQueryResult result = bucket.query(N1qlQuery.simple(query));
Option #3 (actually multiple options here, since the method createN1qlIndex is overloaded):
// signature: createN1qlIndex(name, fields, whereClause, ignoreIfExist, defer)
bucket.bucketManager().createN1qlIndex(indexName, fields, where, true, false);

Primary index:
// Create a N1QL Primary Index (ignore if it exists)
bucket.bucketManager().createN1qlPrimaryIndex(true /* ignore if exists */, false /* defer flag */);
Secondary index:
// Create a N1QL Index (ignore if it exists)
bucket.bucketManager().createN1qlIndex(
        "my_idx_1",
        true,  // ignoreIfExists
        false, // defer
        Expression.path("field1.id"),
        Expression.path("field2.id"));
or
// Create a N1QL Index (ignore if it exists)
bucket.bucketManager().createN1qlIndex(
        "my_idx_2",
        true,  // ignoreIfExists
        false, // defer
        "field1.id",
        "field2.id");
The first secondary index (my_idx_1) is helpful if your document is something like this:
{
    "field1" : {
        "id" : "value"
    },
    "field2" : {
        "id" : "value"
    }
}
The second secondary index (my_idx_2) is helpful if your document is something like this:
{
    "field1.id" : "value",
    "field2.id" : "value"
}
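To illustrate the difference, here is a hedged sketch of queries each index could serve (the field values are illustrative):
// my_idx_1 covers the nested paths field1.id / field2.id:
N1qlQueryResult r1 = bucket.query(N1qlQuery.simple(
        "SELECT META().id FROM `" + bucket.name() + "` WHERE field1.id = 'value'"));
// my_idx_2 covers literal field names containing dots, which must be backtick-escaped:
N1qlQueryResult r2 = bucket.query(N1qlQuery.simple(
        "SELECT META().id FROM `" + bucket.name() + "` WHERE `field1.id` = 'value'"));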

You should be able to do this with any 2.x client, once you have a Bucket:
bucket.query(N1qlQuery.simple(queryString))
where queryString is something like
String queryString = "CREATE PRIMARY INDEX ON " + bucketName + " USING GSI;";

As of java-client 3.x there is a QueryIndexManager (obtained via cluster.queryIndexes()) which provides an indexing API, including the following methods to create indexes:
createIndex(String bucketName, String indexName, Collection<String> fields)
createIndex(String bucketName, String indexName, Collection<String> fields, CreateQueryIndexOptions options)
createPrimaryIndex(String bucketName)
createPrimaryIndex(String bucketName, CreatePrimaryQueryIndexOptions options)
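A minimal sketch of the 3.x API, assuming an already-connected Cluster (the connection string, credentials, bucket and index names are illustrative):
import java.util.Arrays;
import com.couchbase.client.java.Cluster;
import com.couchbase.client.java.manager.query.CreatePrimaryQueryIndexOptions;
import com.couchbase.client.java.manager.query.QueryIndexManager;

Cluster cluster = Cluster.connect("127.0.0.1", "Administrator", "password");
QueryIndexManager indexManager = cluster.queryIndexes();
// Primary index; skip silently if one already exists
indexManager.createPrimaryIndex("myBucket",
        CreatePrimaryQueryIndexOptions.createPrimaryQueryIndexOptions().ignoreIfExists(true));
// Secondary index on two fields
indexManager.createIndex("myBucket", "my_idx_1", Arrays.asList("field1.id", "field2.id"));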

Related

How to generate runtime data using DB stored GString definitions

Hi, how can I use database-stored GString definitions for dynamically generated data? I was able to use a GString to pick and choose row attributes if the format is defined in the code:
code_format = "${-> row.ACCOUNT} ${-> row.ACCOUNT_OWNER}"
However if same definition is extracted from database my code is not working.
Sql sql = Sql.newInstance(url, login, password, driver);
sql.eachRow(SQL) { row ->
    code_format = "${-> row.ACCOUNT} ${-> row.ACCOUNT_OWNER}"
    database_format = "${-> row.REPORT_ATTRIBUTES}"
    println "1- " + code_format
    println "2- " + database_format
    println "CODE : " + code_format.dump()
    println "DB : " + database_format.dump()
}
When I run this code, I get the following output:
1- FlowerHouse Joe
2- ${-> row.ACCOUNT} ${-> row.ACCOUNT_OWNER}
CODE : <org.codehaus.groovy.runtime.GStringImpl@463cf024 strings=[, , ] values=[GString$_run_closure1_closure2@44f289ee, GString$_run_closure1_closure3@f3d8b9f]>
DB : <org.codehaus.groovy.runtime.GStringImpl@4f5e9da9 strings=[, ] values=[GString$_run_closure1_closure4@11997b8a]>
row.REPORT_ATTRIBUTES returns a String, because the database knows nothing about Groovy's GString format.
A GString is a template, which can be created from a string.
So you can do something like:
def engine = new groovy.text.SimpleTemplateEngine()
println engine.createTemplate(row.REPORT_ATTRIBUTES).make([row:row]).toString()

Jackson - Variable key

I'm facing a little problem. I think some workaround can be found, but I'm searching for the proper way to do it.
I use Selenium with Grid, and I configure all my nodes with JSON files. Some browsers (Chrome, IE) need specific drivers.
These drivers are defined by a key of the form webdriver.browser.driver, with browser = chrome or ie. So we've got, for example:
{"browserName": "chrome",
"maxInstances": 5,
"platform": "WINDOWS",
"webdriver.chrome.driver": "C:/Program Files (x86)/Google/Chrome/Application/chrome.exe" }
{"browserName": "internet explorer",
"maxInstances": 1,
"platform": "WINDOWS",
"webdriver.ie.driver": "C:/Program Files (x86)/Internet Explorer/iexplore.exe" }
I want to get the value and put it into the private String driverPath field of my POJO.
Is there a way to get the value of the key dynamically? Like a regex?
Conceptually speaking, in both of your objects the attributes "webdriver.chrome.driver" and "webdriver.ie.driver" represent the same entity: the driver. So the attribute should just be called driver.
Usually a POJO <-> JSON conversion is intended to be one to one, so one Java field per JSON field.
If you cannot change the JSON, or you feel that you need the keys to stay as they are, then at least in Jackson and Gson you can register a custom deserializer and manually do the parsing for that value.
You can see examples here: http://www.baeldung.com/jackson-deserialization
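If you would rather keep a plain POJO, a lighter-weight sketch (assuming Jackson 2.x; the class and field names are hypothetical) is to catch unknown keys with @JsonAnySetter and match the webdriver.*.driver pattern yourself:
import com.fasterxml.jackson.annotation.JsonAnySetter;

public class NodeConfig {
    public String browserName;
    public int maxInstances;
    public String platform;
    private String driverPath; // filled from whichever webdriver.*.driver key is present

    // Jackson calls this for every JSON key that has no matching field
    @JsonAnySetter
    public void setDynamicKey(String key, Object value) {
        if (key.matches("webdriver\\..+\\.driver")) {
            this.driverPath = (String) value;
        }
    }

    public String getDriverPath() {
        return driverPath;
    }
}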
Parse it as a List, for example using Jackson:
List<Map> list = (List<Map>) new ObjectMapper().readValue(json, List.class);
Here's some working code:
String json = "[{\"browserName\": \"chrome\", \"maxInstances\": 5,\n \"platform\": \"WINDOWS\",\n" +
" \"webdriver.chrome.driver\": \"C:/Program Files (x86)/Google/Chrome/Application/chrome.exe\" }," +
"{\"browserName\": \"internet explorer\", \"maxInstances\": 1," +
" \"platform\": \"WINDOWS\", \"webdriver.ie.driver\": \"C:/Program Files (x86)/Internet Explorer/iexplore.exe\" }]";
List<Map> list = (List<Map>) new ObjectMapper().readValue(json, List.class);
String browser = "chrome";
String driver = list.stream()
        .<String>map(m -> (String) m.get("webdriver." + browser + ".driver"))
        .filter(s -> s != null)
        .findFirst()
        .orElse(null); // C:/Program Files (x86)/Google/Chrome/Application/chrome.exe
driver will be null if the key is not found.

Cassandra Lucene Index boolean syntax

I am building a user search system on my Cassandra database. For that purpose I installed the Cassandra Lucene Index from Stratio.
I am able to look up users by username, but the problem is as follows:
This is my Cassandra users table and the Lucene index:
CREATE TABLE user (
    username text PRIMARY KEY,
    email text,
    password text,
    is_verified boolean,
    lucene text
);
CREATE CUSTOM INDEX search_main ON user (lucene) USING 'com.stratio.cassandra.lucene.Index' WITH OPTIONS = {
    'refresh_seconds': '3600',
    'schema': '{
        fields : {
            username : {type : "string"},
            is_verified : {type : "boolean"}
        }
    }'
};
This is a normal query performed to look up a user by username:
SELECT * FROM user WHERE lucene = '{filter: {type : "wildcard", field : "username", value : "*%s*"}}' LIMIT 15;
My Question is:
How could I sort the returned results to ensure that any verified users are among the first 15 results of the query? (The limit is 15.)
You can use this search:
SELECT * FROM user WHERE lucene = '{filter: {type : "boolean", must : [
    {type : "wildcard", field : "username", value : "*%s*"},
    {type : "match", field : "is_verified", value : true}
]}}' LIMIT 15;
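Note that the must clause above returns only verified users. If you want unverified users included as well, just ranked below verified ones, a should clause on a scored query may work instead (a hedged sketch based on Stratio's boolean syntax; unlike filter, query sorts results by relevance):
SELECT * FROM user WHERE lucene = '{query: {type : "boolean",
    must : [{type : "wildcard", field : "username", value : "*%s*"}],
    should : [{type : "match", field : "is_verified", value : true}]
}}' LIMIT 15;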

Update collection in MongoDb via Apache Spark using Mongo-Hadoop connector

I would like to update a specific collection in MongoDb via Spark in Java.
I am using the MongoDB Connector for Hadoop to retrieve and save information from Apache Spark to MongoDb in Java.
After following Sampo Niskanen's excellent post regarding retrieving and saving collections to MongoDb via Spark, I got stuck with updating collections.
MongoOutputFormat.java includes a constructor taking String[] updateKeys, which I am guessing refers to a list of keys to compare against existing documents when performing an update. However, using Spark's saveAsNewAPIHadoopFile() method with the parameter MongoOutputFormat.class, I am wondering how to use that update constructor.
save.saveAsNewAPIHadoopFile("file:///bogus", Object.class, Object.class, MongoOutputFormat.class, config);
Prior to this, MongoUpdateWritable.java was being used to perform collection updates. From examples I've seen on Hadoop, this is normally set on mongo.job.output.value, maybe like this in Spark:
save.saveAsNewAPIHadoopFile("file:///bogus", Object.class, MongoUpdateWritable.class, MongoOutputFormat.class, config);
However, I'm still wondering how to specify the update keys in MongoUpdateWritable.java.
Admittedly, as a hacky way, I've set the "_id" of the object as my document's KeyValue so that when a save is performed, the collection will overwrite the documents having the same KeyValue as _id.
JavaPairRDD<BSONObject, ?> analyticsResult; // JavaPairRDD of (mongoObject, result)
JavaPairRDD<Object, BSONObject> save = analyticsResult.mapToPair(s -> {
    BSONObject o = (BSONObject) s._1;
    // for all keys, build an _id of the form "key:value_"
    String id = "";
    for (String key : o.keySet()) {
        id += key + ":" + (String) o.get(key) + "_";
    }
    o.put("_id", id);
    o.put("result", s._2);
    return new Tuple2<>(null, o);
});
save.saveAsNewAPIHadoopFile("file:///bogus", Object.class, Object.class, MongoOutputFormat.class, config);
I would like to perform the mongodb collection update via Spark using MongoOutputFormat or MongoUpdateWritable or Configuration, ideally using the saveAsNewAPIHadoopFile() method. Is it possible? If not, is there any other way that does not involve specifically setting the _id to the key values I want to update on?
I tried several combinations of config.set("mongo.job.output.value","....") and several combinations of
.saveAsNewAPIHadoopFile(
"file:///bogus",
classOf[Any],
classOf[Any],
classOf[com.mongodb.hadoop.MongoOutputFormat[Any, Any]],
mongo_config
)
and none of them worked.
I made it work by using the MongoUpdateWritable class as the output of my map method:
items.map(row => {
    val mongo_id = new ObjectId(row("id").toString)
    // select the document to update by _id
    val query = new BasicBSONObject()
    query.append("_id", mongo_id)
    // apply a $set on the target field
    val update = new BasicBSONObject()
    update.append("$set", new BasicBSONObject().append("field_name", row("new_value")))
    // upsert = false, multiUpdate = true
    val muw = new MongoUpdateWritable(query, update, false, true)
    (null, muw)
})
.saveAsNewAPIHadoopFile(
"file:///bogus",
classOf[Any],
classOf[Any],
classOf[com.mongodb.hadoop.MongoOutputFormat[Any, Any]],
mongo_config
)
The raw query executed in mongo is then something like this:
2014-11-09T13:32:11.609-0800 [conn438] update db.users query: { _id: ObjectId('5436edd3e4b051de6a505af9') } update: { $set: { value: 10 } } nMatched:1 nModified:0 keyUpdates:0 numYields:0 locks(micros) w:24 3ms
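Since the question asked for Java, here is a hedged translation of the same approach (assuming mongo-hadoop's MongoUpdateWritable(query, modifiers, upsert, multiUpdate) constructor and an items JavaRDD whose rows behave like Maps; names are illustrative):
import org.bson.BasicBSONObject;
import org.bson.types.ObjectId;
import com.mongodb.hadoop.MongoOutputFormat;
import com.mongodb.hadoop.io.MongoUpdateWritable;
import scala.Tuple2;

JavaPairRDD<Object, MongoUpdateWritable> updates = items.mapToPair(row -> {
    // select the document to update by _id
    BasicBSONObject query = new BasicBSONObject("_id", new ObjectId(row.get("id").toString()));
    // apply a $set on the target field
    BasicBSONObject update = new BasicBSONObject("$set",
            new BasicBSONObject("field_name", row.get("new_value")));
    // upsert = false, multiUpdate = true
    return new Tuple2<>(null, new MongoUpdateWritable(query, update, false, true));
});
updates.saveAsNewAPIHadoopFile("file:///bogus", Object.class, MongoUpdateWritable.class,
        MongoOutputFormat.class, mongo_config);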

DWR addRows() with Element ID's

Calling All DWR Gurus!
I am currently using reverse Ajax to add data to a table in a web page dynamically.
When I run the following method:
public static void addRows(String tableBdId, String[][] data) {
    Util dwrUtil = new Util(getSessionForPage()); // Get all page sessions
    dwrUtil.addRows(tableBdId, data);
}
The new row gets created in my web page as required.
However, in order to update these newly created values later on, the generated tags need to have an element ID for me to access them.
I have had a look at the DWR Javadoc; you can specify some additional options (see http://directwebremoting.org/dwr/browser/addRows), but this makes little sense to me, and the documentation is very sparse.
If anyone could give me a clue as to how I could specify the element IDs for the created td elements, I would be most grateful. Alternatively, if anyone knows of an alternative approach, I would be keen to know.
Kind Regards
Karl
The closest I could get was to pass in some arguments to give the element an id. See below:
public static void addRows(String tableBdId, String[] data, String rowId) {
    Util dwrUtil = new Util(getSessionForPage()); // Get all page sessions
    // Create the options, which is needed to add a row ID
    String options = "{" +
            " rowCreator:function(options) {" +
            " var row = document.createElement(\"tr\");" +
            " row.setAttribute('id','" + rowId + "'); " +
            " return row;" +
            " }," +
            " cellCreator:function(options) {" +
            " var td = document.createElement(\"td\");" +
            " return td;" +
            " }," +
            " escapeHtml:true}";
    // Wrap the supplied row into an array to match the API
    String[][] args1 = new String[][] { data };
    dwrUtil.addRows(tableBdId, args1, options);
}
Is this line of your code really working??
dwrUtil.addRows(tableBdId, data);
The DWR addRows method needs at least 3 of its 4 parameters to work. They are:
id: The id of the table element (preferably a tbody element);
array: Array (or object from DWR 1.1) containing one entry for each row in the updated table;
cellfuncs: An array of functions (one per column) for extracting cell data from the passed row data;
options: An object containing various options.
The id, array and cellfuncs are required, and in your case you'll also have to pass the options, because you want to customize the row creation and set the td IDs.
Inside the options argument, you need to use a parameter called "cellCreator" to supply your own way of creating the td HTML element.
Check it out:
// Use the cellFuncs var to set the values you want to display inside the table rows.
// The syntax is object.property;
// use one function(data) for each property you need to show in your table.
var cellFuncs = [
    function(data) { return data.name_of_the_first_object_property; },
    function(data) { return data.name_of_the_second_object_property; }
];
DWRUtil.addRows(
    tableBdId,
    data,
    cellFuncs,
    {
        // This function lets you customize the generated td element
        cellCreator:function(options) {
            var td = document.createElement("td");
            // setting the td element id using the rowIndex;
            // just implement your own id below
            td.id = options.rowIndex;
            return td;
        }
    });
