Add field with value to existing document in MongoDB via Java API

The following code hasn't worked for me:
public void addFieldWithValueToDoc(String DBName, String collName, String docID, String key, String value) {
    BasicDBObject setNewFieldQuery = new BasicDBObject().append("$set", new BasicDBObject().append(key, value));
    mongoClient.getDB(DBName).getCollection(collName).update(new BasicDBObject().append("_id", docID), setNewFieldQuery);
}
Where the mongoClient variable's type is MongoClient.
It's inspired by Add new field to a collection in MongoDB.
What's wrong and how to do it right?
Thanks.

I've written a JUnit test to prove that your code does work:
@Test
public void shouldUpdateAnExistingDocumentWithANewKeyAndValue() {
    // Given
    String docID = "someId";
    collection.save(new BasicDBObject("_id", docID));
    assertThat(collection.find().count(), is(1));

    // When
    String key = "newKeyName";
    String value = "newKeyValue";
    addFieldWithValueToDoc(db.getName(), collection.getName(), docID, key, value);

    // Then
    assertThat(collection.findOne().get(key).toString(), is(value));
}
public void addFieldWithValueToDoc(String DBName, String collName, String docID, String key, String value) {
    BasicDBObject setNewFieldQuery = new BasicDBObject().append("$set", new BasicDBObject().append(key, value));
    mongoClient.getDB(DBName).getCollection(collName).update(new BasicDBObject().append("_id", docID), setNewFieldQuery);
}
So your code is correct, although I'd like to point out some comments on style that would make it more readable:
Parameters and variables should start with a lower-case letter: DBName should be dbName.
You don't need new BasicDBObject().append(key, value); use new BasicDBObject(key, value) instead.
This code does the same thing as your code, but is shorter and simpler:
public void addFieldWithValueToDoc(String dbName, String collName, String docID, String key, String value) {
    mongoClient.getDB(dbName).getCollection(collName).update(new BasicDBObject("_id", docID),
            new BasicDBObject("$set", new BasicDBObject(key, value)));
}

To update existing documents in a collection, you can use the collection's updateOne() or updateMany() methods.
The updateOne() method has the following form:

    db.collection.updateOne(filter, update, options)

filter - the selection criteria for the update. The same query selectors as in the find() method are available. Specify an empty document { } to update the first document returned in the collection.
update - the modifications to apply.
So, if you want to add one more field using the MongoDB Java driver 3.4+, it will be:

    collection.updateOne(new Document("flag", true),
            new Document("$set", new Document("title", "Portable Space Ball")));

This operation updates a single document where flag: true.
Or, with the same logic, using a Filters helper (eq is a static import from com.mongodb.client.model.Filters):

    collection.updateOne(eq("flag", true),
            new Document("$set", new Document("title", "Portable Space Ball")));
If the title field does not exist, $set will add a new field with the specified value, provided that the new field does not violate a type constraint. If you specify a dotted path for a non-existent field, $set will create the embedded documents as needed to fulfill the dotted path to the field.
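For instance, here's a minimal sketch of that dotted-path behaviour with the same 3.4+ driver API; the specs.capacity path and its value are made-up names for illustration:

    // If "specs" does not yet exist on the matched document, $set creates it
    // as an embedded document and then sets its "capacity" field.
    collection.updateOne(eq("flag", true),
            new Document("$set", new Document("specs.capacity", "8 GB")));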

Related

Lucene: How to search by specific term

I'm trying to do a Lucene search by a specific string term.
E.g.: I have the tags 1-"Hello World", 2-"Hello, Steve", 3-"Helloween" and finally 4-"Hello". If I search for the last tag (hello), Lucene will bring back all the tags, because all of them contain "hello" at some point. I need an operator or some logic that makes the search work without a "like".
There is a way to avoid this using the "must_not" clause (the - operator), where the query would be: term:hello -term:world. But this is not an option here, because I would need to list every other word that should not be in the search.
private <T> Query createQuery(final Class<T> clazz, String s, final String[] fields, final SearchFactory searchFactory, final Boolean allowLeadingWildcard) throws ParseException {
    final Analyzer analyzer = searchFactory.getAnalyzer(clazz);
    final QueryParser parser = new MultiFieldQueryParser(Version.LUCENE_36, fields, analyzer);
    Query query = null;
    try {
        query = parser.parse(s);
    } catch (...) {...}
    return query;
}
My knowledge of Lucene is limited, so here is an SQL example that may be easier to understand:

    /* This is what Lucene is doing. It will bring back "HELLO", "HELLO WORLD", "Hello, Steve"... */
    WHERE table.tag LIKE "%HELLO%"

    /* This is what I want. Match exactly the term "HELLO" and nothing more. */
    WHERE table.tag = "HELLO"
I guess that this is the Analyzer used in the application:
public class AnalyserCustom extends Analyzer {
    @Override
    public TokenStream tokenStream(final String fieldName, final Reader reader) {
        final StandardTokenizer tokenizer = new StandardTokenizer(Version.LUCENE_36, reader);
        TokenStream stream = new StandardFilter(Version.LUCENE_36, tokenizer);
        stream = new LowerCaseFilter(Version.LUCENE_36, stream);
        return new ASCIIFoldingFilter(stream);
    }
}
And the TAG attribute is this:

    ...
    @Field
    private String tagname;
    ...
Any suggestions?
PS: I'm new to Lucene.
You have to index the field in a way that generates one single token for the whole string; try KeywordAnalyzer.
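For example, here's a minimal, self-contained sketch against the Lucene 3.6 API used in the question (the in-memory index setup and tag values are just for illustration); because KeywordAnalyzer emits the whole field value as a single token, a TermQuery matches only the exact tag:

    import java.io.IOException;

    import org.apache.lucene.analysis.KeywordAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.index.IndexReader;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.TermQuery;
    import org.apache.lucene.search.TopDocs;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.RAMDirectory;
    import org.apache.lucene.util.Version;

    public class ExactTagSearch {
        public static void main(String[] args) throws IOException {
            Directory dir = new RAMDirectory();
            // KeywordAnalyzer keeps the whole field value as one token,
            // so "Hello World" is indexed as the single token "Hello World"
            IndexWriterConfig config = new IndexWriterConfig(Version.LUCENE_36, new KeywordAnalyzer());
            IndexWriter writer = new IndexWriter(dir, config);
            for (String tag : new String[] {"Hello World", "Hello, Steve", "Helloween", "Hello"}) {
                Document doc = new Document();
                doc.add(new Field("tagname", tag, Field.Store.YES, Field.Index.ANALYZED));
                writer.addDocument(doc);
            }
            writer.close();

            // A TermQuery is not analyzed at query time, so it matches the token exactly
            IndexReader reader = IndexReader.open(dir);
            IndexSearcher searcher = new IndexSearcher(reader);
            TopDocs hits = searcher.search(new TermQuery(new Term("tagname", "Hello")), 10);
            System.out.println(hits.totalHits); // 1: only the exact "Hello" tag matches
            searcher.close();
            reader.close();
        }
    }

Note that exact-token matching like this is case-sensitive; if you need case-insensitive exact matches, lowercase the value at index time and lowercase the query term yourself before building the TermQuery. Since the question appears to use Hibernate Search (SearchFactory), the equivalent there would be to map the field so it isn't tokenized, e.g. with @Analyzer(impl = KeywordAnalyzer.class) on tagname; check your Hibernate Search version for the exact annotation.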

Best way to Query for single Entity based on ONE property value?

The ask: a function to retrieve a single Entity from the Google App Engine Datastore based on a property that is not its Key or otherwise return null if no such object is found.
Here is the function I have currently:
public Entity find(DatastoreService datastore, String kind, String property, String value) {
    Filter propertyFilter =
            new FilterPredicate(property, FilterOperator.EQUAL, value);
    Query q = new Query(kind).setFilter(propertyFilter);
    List<Entity> results =
            datastore.prepare(q).asList(FetchOptions.Builder.withDefaults());
    if (results.isEmpty()) {
        return null;
    }
    return results.get(0);
}
Is there a one-shot API I could use instead or are there any optimization suggestions?
You could use PreparedQuery#asSingleEntity():

    public Entity find(DatastoreService datastore,
            String kind, String property, String value)
            throws TooManyResultsException {
        Filter propertyFilter =
                new FilterPredicate(property, FilterOperator.EQUAL, value);
        Query q = new Query(kind).setFilter(propertyFilter);
        return datastore.prepare(q).asSingleEntity();
    }
The only real difference between this and your code is that the underlying query sets a LIMIT of 2: asSingleEntity() returns null when there are no results and throws TooManyResultsException when more than one entity matches.
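A hypothetical call site (the User kind, email property and value are invented for this example), showing how both outcomes surface:

    Entity user;
    try {
        user = find(datastore, "User", "email", "alice@example.com");
    } catch (PreparedQuery.TooManyResultsException e) {
        // more than one entity matched a property we expected to be unique
        user = null;
    }
    if (user == null) {
        // not found (or ambiguous): handle accordingly
    }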

Rebuilding a URL without a query string parameter

Let's say I have a page which lists things and has various filters for that list in a sidebar; as an example, consider a product-listing page on ebuyer.com.
The filters in the sidebar are controlled by query string parameters, and the link to remove one of those filters contains the URL of the current page but without that one query string parameter in it.
Is there a way in JSP of easily constructing that "remove" link? I.e., is there a quick way to reproduce the current URL, but with a single query string parameter removed, or do I have to manually rebuild the URL by reading the query string parameters, adding them to the base URL, and skipping the one that I want to leave out?
My current plan is to make something like the following method available as a custom EL function:
public String removeQueryStringParameter(
        HttpServletRequest request,
        String paramName,
        String paramValue) throws UnsupportedEncodingException {
    StringBuilder url = new StringBuilder(request.getRequestURI());
    boolean first = true;
    for (Map.Entry<String, String[]> param : request.getParameterMap().entrySet()) {
        String key = param.getKey();
        String encodedKey = URLEncoder.encode(key, "UTF-8");
        for (String value : param.getValue()) {
            if (key.equals(paramName) && value.equals(paramValue)) {
                continue;
            }
            if (first) {
                url.append('?');
                first = false;
            } else {
                url.append('&');
            }
            url.append(encodedKey);
            url.append('=');
            url.append(URLEncoder.encode(value, "UTF-8"));
        }
    }
    return url.toString();
}
But is there a better way?
The better way is to use UrlEncodedQueryString.
UrlEncodedQueryString can be used to set, append or remove parameters from a query string:
    URI uri = new URI("/forum/article.jsp?id=2&para=4");
    UrlEncodedQueryString queryString = UrlEncodedQueryString.parse(uri);
    queryString.set("id", 3);
    queryString.remove("para");
    System.out.println(queryString);
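Note that UrlEncodedQueryString is not part of the standard JDK; it's a standalone utility class, so you'd need to add its source or library to your project. After the set and remove calls above, printing queryString should give id=3.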

How to update a Map or a List on AWS DynamoDB document API?

The new AWS DynamoDB document API allows two new data types that correspond directly to the underlying JSON representation: Map (aka JSON object) and List (aka JSON array).
However, I can't find a way to update attributes of these data types without completely overwriting them. In contrast, a Number attribute can be updated by ADDing another number, so in Java you can do something like:

    new AttributeUpdate("Some numeric attribute").addNumeric(17);

Similarly, you can addElements to an attribute of a Set data type. (In the old API you would use AttributeAction.ADD for both purposes.)
But for a Map or a List, it seems you must update the previous value locally, then PUT it in place of that value, for example in Java:

    List<String> list = item.getList("Some list attribute");
    list.add("new element");
    new AttributeUpdate("Some list attribute").put(list);

This is much less readable, and under some circumstances much less efficient.
So my questions are:
Is there a way to update an attribute of a Map or a List data type without overwriting the previous value? For example, to add an element to a List, or to put an element in a Map?
How would you implement it using the Java API?
Do you know of plans to support this in the future?
Please take a look at UpdateExpression in the UpdateItem API
For example, given an item with a list:

    {
        "hashkey": {"S": "my_key"},
        "my_list": {"L":
            [{"N": "3"}, {"N": "7"}]
        }
    }
You can update the list with code like the following:
    UpdateItemRequest request = new UpdateItemRequest();
    request.setTableName("myTableName");
    request.setKey(Collections.singletonMap("hashkey",
            new AttributeValue().withS("my_key")));
    // list_append takes two lists, so wrap the prepended number in a list
    request.setUpdateExpression("SET my_list = list_append(:prepend_value, my_list)");
    request.setExpressionAttributeValues(
            Collections.singletonMap(":prepend_value",
                    new AttributeValue().withL(new AttributeValue().withN("1"))));
    dynamodb.updateItem(request);
You can also append to the list by reversing the order of the arguments in the list_append expression.
An expression like SET user.address.zipcode = :zip would address a JSON map element, combined with the expression attribute values {":zip" : {"N":"12345"}}.
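A sketch of that nested-map update with the same low-level Java client, reusing the table and key from the list example above; the #u alias is needed because user happens to be a DynamoDB reserved word:

    UpdateItemRequest mapUpdate = new UpdateItemRequest()
            .withTableName("myTableName")
            .withKey(Collections.singletonMap("hashkey",
                    new AttributeValue().withS("my_key")))
            // #u aliases the attribute name "user" (a DynamoDB reserved word)
            .withUpdateExpression("SET #u.address.zipcode = :zip")
            .withExpressionAttributeNames(Collections.singletonMap("#u", "user"))
            .withExpressionAttributeValues(Collections.singletonMap(":zip",
                    new AttributeValue().withN("12345")));
    dynamodb.updateItem(mapUpdate);

Note that SET on a document path requires every segment except the last to already exist on the item (here, the user map and its address sub-map).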
Based on the DynamoDB examples, this also works (Scala):
    val updateItemSpec: UpdateItemSpec = new UpdateItemSpec()
        .withPrimaryKey("hashkey", my_key)
        .withUpdateExpression("set my_list = list_append(:prepend_value, my_list)")
        .withValueMap(new ValueMap()
            .withList(":prepend_value", "1"))
        .withReturnValues(ReturnValue.UPDATED_NEW)

    println("Updating the item...")
    val outcome: UpdateItemOutcome = table.updateItem(updateItemSpec)
    println("UpdateItem succeeded:\n" + outcome.getItem.toJSONPretty)
A generic function to add or update key/value pairs in a map attribute. The attribute attributeName should be of type Map; the function updates tableName, putting newKey: newValue into that map on the item where primaryKey = primaryKeyValue.
public boolean insertKeyValue(String tableName, String primaryKey,
        String primaryKeyValue, String attributeName, String newKey, String newValue) {
    // Configuration to connect to DynamoDB
    Table table = dynamoDB.getTable(tableName);
    boolean insertAppendStatus = false;
    try {
        // Updates when the map already exists in the table
        UpdateItemSpec updateItemSpec = new UpdateItemSpec()
                .withPrimaryKey(primaryKey, primaryKeyValue)
                .withReturnValues(ReturnValue.ALL_NEW)
                .withUpdateExpression("set #columnName." + newKey + " = :columnValue")
                .withNameMap(new NameMap().with("#columnName", attributeName))
                .withValueMap(new ValueMap().with(":columnValue", newValue))
                .withConditionExpression("attribute_exists(" + attributeName + ")");
        table.updateItem(updateItemSpec);
        insertAppendStatus = true;
    // Adds the map column when it does not exist in the table yet
    } catch (ConditionalCheckFailedException e) {
        HashMap<String, String> map = new HashMap<>();
        map.put(newKey, newValue);
        UpdateItemSpec updateItemSpec = new UpdateItemSpec()
                .withPrimaryKey(primaryKey, primaryKeyValue)
                .withReturnValues(ReturnValue.ALL_NEW)
                .withUpdateExpression("set #columnName = :m")
                .withNameMap(new NameMap().with("#columnName", attributeName))
                .withValueMap(new ValueMap().withMap(":m", map));
        table.updateItem(updateItemSpec);
        insertAppendStatus = true;
    } catch (Exception e) {
        e.printStackTrace();
    }
    return insertAppendStatus;
}
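A hypothetical call, with made-up table, key and attribute names:

    // Puts phone: 555-0100 into the contact_info map on the item whose id is
    // user-42, creating the map first if it does not exist yet.
    boolean ok = insertKeyValue("Customers", "id", "user-42",
            "contact_info", "phone", "555-0100");
    System.out.println(ok ? "Map entry written" : "Update failed");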

Adding a document into another Document through java service

I am writing a Java service where I am building the document for output.
My structure should be: the output doc is the top-level doc. Inside it I want another doc, say an intermediate doc, and in this intermediate doc I want to have key/value pairs.
My question is how I can insert one doc into another. I see that IDataUtil has a put method which asks for the key as a string, and the value can be an object.
My code is: IDataUtil.put(idcvalueDoc, "Body", FullValue.toString());
But this Body should not be a string, it should be a document. I want to insert one doc into another.
Please help me.
To accomplish what you're after, you will need to do the following:
Create an intermediateDoc IData object
Add key value tuples to the intermediateDoc as required
Create an outputDoc IData object
Add the intermediateDoc as a key value tuple to the outputDoc
Add the outputDoc to the pipeline
The following is an example Java service that demonstrates this (note the key value tuples added to the intermediateDoc are hard-coded here for convenience):
public static final void exampleService(IData pipeline) throws ServiceException {
    IDataCursor pipelineCursor = pipeline.getCursor();
    try {
        // create an intermediateDoc IData object
        IData intermediateDoc = IDataFactory.create();
        // create a cursor to use to add key value tuples to the intermediateDoc
        IDataCursor intermediateCursor = intermediateDoc.getCursor();
        // add key value tuples as required to the intermediateDoc
        IDataUtil.put(intermediateCursor, "key1", "value1");
        IDataUtil.put(intermediateCursor, "key2", "value2");
        // ...
        // destroy the intermediateCursor when done adding key value tuples
        intermediateCursor.destroy();

        // create an outputDoc IData object
        IData outputDoc = IDataFactory.create();
        // create a cursor to use to add key value tuples to the outputDoc
        IDataCursor outputCursor = outputDoc.getCursor();
        // add the intermediateDoc to the outputDoc
        IDataUtil.put(outputCursor, "intermediateDoc", intermediateDoc);
        // destroy the outputCursor when done adding key value tuples
        outputCursor.destroy();

        // add the outputDoc to the pipeline
        IDataUtil.put(pipelineCursor, "outputDoc", outputDoc);
    } finally {
        // destroy the pipelineCursor
        pipelineCursor.destroy();
    }
}
Assumptions and input/output design:
Assuming the document list ValuesInput[] is the input, with the same structure as the Values[] output.
The columnValue document under ValuesInput[] has a string variable named additionalString (a document would make no sense with no variables inside it).
(The original answer illustrated this design with a screenshot.)
Of course, you could generate the code after designing the input/output by right-clicking and choosing generate code -> For implementing this service. But instead of using the generated code, I'll give an example with IDataMap, which you can find in the webMethods Javadoc as com.softwareag.util.IDataMap, and which is very convenient to use.
IDataMap
IDataMap combines the functionality of IData, IDataCursor, IDataUtil, and IDataFactory. IDataMap implements the java.util.Map interface from the Java Collections Framework, providing a familiar and simple interface. IDataMap expands the Map interface, adding getAs<Type> methods, which convert the returned value to a specific type.
And it goes like this:
public static final void mapDocument(IData pipeline) throws ServiceException {
    // pipeline input via IDataMap
    IDataMap pipelineMap = new IDataMap(pipeline);
    // extract the Values input into an IData[] variable array
    IData[] ValuesInput = pipelineMap.getAsIDataArray("ValuesInput");

    // initiate OutDoc.Values length based on the ValuesInput length
    IData[] Values = new IData[ValuesInput.length];

    // iterate and copy each ValuesInput doc into OutDoc.Values
    for (int i = 0; i < ValuesInput.length; i++) {
        Values[i] = IDataUtil.clone(ValuesInput[i]);
    }

    // OutDoc
    IData OutDoc = IDataFactory.create();
    IDataMap outDocMap = new IDataMap(OutDoc);

    // OutDoc.TableName
    String TableName = "TableName is Never assigned";
    outDocMap.put("TableName", TableName);
    // OutDoc.Values
    outDocMap.put("Values", Values);

    // wrap the OutDoc into the pipeline
    pipelineMap.put("OutDoc", OutDoc);
}
With the wmboost-data library you can write this:
public static final void exampleService(IData pipeline) throws ServiceException {
    Document outputDoc = Documents.create();
    Document intermediateDoc = outputDoc.docEntry("intermediateDoc").putNew();
    intermediateDoc.entry("key1").put("value1");
    intermediateDoc.entry("key2").put("value2");

    Documents.wrap(pipeline).entry("outputDoc").put(outputDoc);
}
The code:
Creates the top-level document, outputDoc
Creates intermediateDoc as a nested document (an alternative is to create it independently and attach it to its parent later)
Assigns entry values to intermediateDoc
Adds outputDoc to the pipeline
Disclaimer: I'm the author of wmboost-data.
Just check if any of the following in the WmPublic package would help:
pub.list:appendToDocumentList
or
pub.document:insertDocument
