I use this search method in my Spring Data/MongoDB application:
@Query("{$text : { $search : ?0 } }")
List<DocumentFile> findBySearchString(final String searchString);
My question is: what do I have to pass in as searchString to get DocumentFiles where both terms exist (e.g. term1 AND term2, or term1 + term2)?
Is there something like AND with Spring Data and MongoDB?
Yes, you can achieve this using a Mongo repository:
List<DocumentFile> findByString1AndString2(String string1, String string2);
OK, based on your last round of comments, you can do it like this.
Using MongoTemplate:
Criteria regex = Criteria.where("name").regex(".*xxxx.*", "i");
mongoOperations.find(new Query().addCriteria(regex), DocumentFile.class);
Using a Mongo repository:
List<DocumentFile> documents = repository.findByQuery(".*ab.*");
interface repository extends MongoRepository<DocumentFile, String> {
@Query("{ 'name': { $regex: ?0 } }")
List<DocumentFile> findByQuery(String name);
}
Hope it will be useful.
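As a side note on the original $text question: in MongoDB text search, space-separated terms are ORed, but every phrase wrapped in escaped double quotes must be present, which gives AND semantics. A minimal sketch reusing the findBySearchString method from the question:
// Each quoted phrase is required, so this should match only documents
// containing both term1 and term2.
List<DocumentFile> result = repository.findBySearchString("\"term1\" \"term2\"");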
After reading Spring's docs at https://docs.spring.io/spring-data/mongodb/docs/current/reference/html/#mongodb.repositories.queries.update ,
I wrote this Repository method:
@Repository
interface TokenRepo: MongoRepository<TokenModel, String> {
@Query("{ authorizationState: ?0 }")
@Update("{ authorizationState: ?0, authorizationCode: ?1 }")
fun updateCode(state: String, code: String): Int
}
Then I use it like this:
@Test fun testUpdate() {
val token = TestTokenModels().makeToken()
val tokenSaved = tokenRepo.save(token)
assertThat(tokenSaved).isNotNull.isEqualTo(token)
assertThat(tokenSaved.requestTimestampMs).isNotNull()
assertThat(tokenRepo.findByState(token.authorizationState)).isNotNull.isEqualTo(token)
tokenRepo.updateCode(token.authorizationState, "someCode")
val tokenUpdated = tokenRepo.findByState(token.authorizationState) // FAILS!
assertThat(tokenUpdated).isNotNull
assertThat(tokenUpdated!!.authorizationCode).isNotNull.isEqualTo("someCode")
}
But this fails when reading back from the database, because almost all fields were set to null:
org.springframework.data.mapping.model.MappingInstantiationException:
Failed to instantiate com.tracker.bl.token.TokenModel using constructor
fun `<init>`(kotlin.String, com.tracker.bl.token.TokenModel.Status, kotlin.String, kotlin.String, kotlin.String, kotlin.Long, kotlin.String, kotlin.String, kotlin.String?, kotlin.String?, com.tracker.rest.Oauth2TokenType, kotlin.String?, kotlin.String?, kotlin.Long?, java.time.ZonedDateTime, kotlin.String?): com.tracker.bl.token.TokenModel with arguments 637e4686ae781b603ac77c12,null,null,null,null,null,null,tokenFlowStateVC8g80BT,null,null,null,null,null,null,null,null,65026,null
at org.springframework.data.mapping.model.KotlinClassGeneratingEntityInstantiator$DefaultingKotlinClassInstantiatorAdapter.createInstance(KotlinClassGeneratingEntityInstantiator.java:215)
How am I supposed to use @Update? Or is it only intended for things like $inc and $push? The docs are actually quite brief on this topic. I'm relatively new to MongoDB.
All right, that was quick. It was me being new to MongoDB.
Spring Data MongoDB is really just a thin layer, so one needs to follow the MongoDB query language, where an update happens through $set: { ... }.
So the method is supposed to be like this:
@Query("{ authorizationState: ?0 }")
@Update("{ '\$set': { authorizationCode: ?1 } }")
fun updateCode(state: String, code: String): Int
Context
I need to rewrite some code previously written with Jongo so that it uses Spring Data MongoDB. The previous code was:
eventsCollection
.aggregate("{$match:" + query + "}")
.and("{$group: {_id: '$domain', domain: {$first: '$domain'}, codes: {$push: '$code'}}}")
.and("{$project : { _id: 0, domain: 1 , codes: 1 } }")
.as(DomainCodes.class);
where eventsCollection is a Jongo MongoCollection, and query is a String containing the criteria.
Problem
The new code should probably look like:
Aggregation myAggregation = Aggregation.newAggregation(
Aggregation.match(/* something here */),
Aggregation.group("domain").first("domain").as("domain").push("code").as("codes")
);
mongoTemplate.aggregate(myAggregation, "collectionName", DomainCodes.class);
but I can't find a way to create the match criteria from a String (similar to BasicQuery, which can take a query String as its argument).
Question
In order to change as little code as possible, is there any way to use a query String as in Jongo?
Thank you,
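One possible approach (a sketch, untested, assuming query holds the same JSON criteria String that was passed to Jongo): Aggregation.match expects a CriteriaDefinition, but a custom AggregationOperation can wrap the raw String via the driver's Document.parse, keeping the Jongo-style query intact:
// Sketch: build the $match stage directly from the raw query String.
// Document.parse(...) is org.bson.Document from the MongoDB Java driver.
AggregationOperation matchFromString =
    context -> new Document("$match", Document.parse(query));
Aggregation myAggregation = Aggregation.newAggregation(
    matchFromString,
    Aggregation.group("domain").first("domain").as("domain").push("code").as("codes"),
    Aggregation.project("domain", "codes").andExclude("_id")
);
mongoTemplate.aggregate(myAggregation, "collectionName", DomainCodes.class);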
I am trying to find out how to use Mongo Atlas Search indexes from a Java application that uses spring-data-mongodb to query the data. Can anyone share an example?
What I found was the code below, but that is used for MongoDB text search. It is working, but I am not sure whether it is using the index defined in Atlas Search.
TextQuery textQuery = TextQuery.queryText(new TextCriteria().matchingAny(text)).sortByScore();
textQuery.fields().include("cast").include("title").include("id");
List<Movies> movies = mongoOperations
.find(textQuery, Movies.class);
I want sample Java code using spring-data-mongodb for the query below:
[
{
$search: {
index: 'cast-fullplot',
text: {
query: 'sandeep',
path: {
'wildcard': '*'
}
}
}
}
]
It would be helpful if anyone could explain how MongoDB text search differs from Mongo Atlas Search, and the correct way of using Atlas Search with Java spring-data-mongodb.
How do I write the below with spring-data-mongodb:
Arrays.asList(new Document("$search",
new Document("index", "cast-fullplot")
.append("text",
new Document("query", "sandeep")
.append("path",
new Document("wildcard", "*")))),
new Document())
Yes, spring-data-mongo supports the aggregation pipeline, which you'll use to execute your query.
You need to define a document list with the steps from your query, in the correct order. Atlas Search must be the first step in the pipeline, as it stands. You can translate your query to the aggregation pipeline using the Mongo Atlas interface; it has an option to export the pipeline array in the language of your choosing. Then, you just need to execute the query and map the list of responses to your entity class.
You can see an example below:
public class SearchRepositoryImpl implements SearchRepositoryCustom {
private final MongoClient mongoClient;
public SearchRepositoryImpl(MongoClient mongoClient) {
this.mongoClient = mongoClient;
}
@Override
public List<SearchEntity> searchByFilter(String text) {
// You can add codec configuration in your database object. This might be needed to map
// your object to the mongodb data
MongoDatabase database = mongoClient.getDatabase("aggregation");
MongoCollection<Document> collection = database.getCollection("restaurants");
List<Document> pipeline = List.of(new Document("$search", new Document("index", "default2")
.append("text", new Document("query", "Many people").append("path", new Document("wildcard", "*")))));
List<SearchEntity> searchEntityList = new ArrayList<>();
collection.aggregate(pipeline, SearchEntity.class).forEach(searchEntityList::add);
return searchEntityList;
}
}
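If you would rather stay on mongoTemplate instead of the raw driver, the same $search stage can be injected as a custom AggregationOperation. A sketch along those lines, reusing the index from the question (the collection name "movies" here is an assumption):
// Sketch: emit the $search stage as a raw Document and run it via mongoTemplate.
AggregationOperation searchStage = context -> new Document("$search",
    new Document("index", "cast-fullplot")
        .append("text", new Document("query", "sandeep")
            .append("path", new Document("wildcard", "*"))));
Aggregation aggregation = Aggregation.newAggregation(searchStage);
List<Movies> movies = mongoTemplate.aggregate(aggregation, "movies", Movies.class)
    .getMappedResults();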
How can I update a string field in a Mongo document, concatenating another string value, using Java and Spring Data Mongo? Example:
{
"languages": "python,java,c"
}
Concat "kotlin":
{
"languages": "python,java,c,kotlin"
}
Thanks so much.
Starting in MongoDB v4.2, the db.collection.update() method can accept an aggregation pipeline to modify a field using the values of the other fields in the Document.
Update with Aggregation Pipeline
Try this one:
UpdateResult result = mongoTemplate.updateMulti(Query.query(new Criteria()),
AggregationUpdate.update()
.set(SetOperation.set("languages")
.toValue(StringOperators.Concat.valueOf("languages").concat(",").concat("kotlin"))),
"collection"); //mongoTemplate.getCollectionName(Entity.class)
System.out.println(result);
//AcknowledgedUpdateResult{matchedCount=1, modifiedCount=1, upsertedId=null}
In MongoDB shell, it looks like this:
db.collection.updateMany({},
[
{ "$set" : { "languages" : { "$concat" : ["$languages", ",", "kotlin"]}}}
]
)
I would like to update a specific collection in MongoDb via Spark in Java.
I am using the MongoDB Connector for Hadoop to retrieve and save information from Apache Spark to MongoDb in Java.
After following Sampo Niskanen's excellent post regarding retrieving and saving collections to MongoDb via Spark, I got stuck with updating collections.
MongoOutputFormat.java includes a constructor taking String[] updateKeys, which I am guessing is referring to a possible list of keys to compare on existing collections and perform an update. However, using Spark's saveAsNewApiHadoopFile() method with parameter MongoOutputFormat.class, I am wondering how to use that update constructor.
save.saveAsNewAPIHadoopFile("file:///bogus", Object.class, Object.class, MongoOutputFormat.class, config);
Prior to this, MongoUpdateWritable.java was being used to perform collection updates. From examples I've seen on Hadoop, this is normally set on mongo.job.output.value, maybe like this in Spark:
save.saveAsNewAPIHadoopFile("file:///bogus", Object.class, MongoUpdateWritable.class, MongoOutputFormat.class, config);
However, I'm still wondering how to specify the update keys in MongoUpdateWritable.java.
Admittedly, as a hacky way, I've set the "_id" of the object as my document's KeyValue so that when a save is performed, the collection will overwrite the documents having the same KeyValue as _id.
JavaPairRDD<BSONObject,?> analyticsResult; //JavaPairRdd of (mongoObject,result)
JavaPairRDD<Object, BSONObject> save = analyticsResult.mapToPair(s -> {
BSONObject o = (BSONObject) s._1;
//for all keys, set _id to key:value_
String id = "";
for (String key : o.keySet()){
id += key + ":" + (String) o.get(key) + "_";
}
o.put("_id", id);
o.put("result", s._2);
return new Tuple2<>(null, o);
});
save.saveAsNewAPIHadoopFile("file:///bogus", Object.class, Object.class, MongoOutputFormat.class, config);
I would like to perform the mongodb collection update via Spark using MongoOutputFormat or MongoUpdateWritable or Configuration, ideally using the saveAsNewAPIHadoopFile() method. Is it possible? If not, is there any other way that does not involve specifically setting the _id to the key values I want to update on?
I tried several combinations of config.set("mongo.job.output.value","....") and several combinations of
.saveAsNewAPIHadoopFile(
"file:///bogus",
classOf[Any],
classOf[Any],
classOf[com.mongodb.hadoop.MongoOutputFormat[Any, Any]],
mongo_config
)
and none of them worked.
I made it work by using the MongoUpdateWritable class as the output of my map method:
items.map(row => {
val mongo_id = new ObjectId(row("id").toString)
val query = new BasicBSONObject()
query.append("_id", mongo_id)
val update = new BasicBSONObject()
update.append("$set", new BasicBSONObject().append("field_name", row("new_value")))
val muw = new MongoUpdateWritable(query,update,false,true)
(null, muw)
})
.saveAsNewAPIHadoopFile(
"file:///bogus",
classOf[Any],
classOf[Any],
classOf[com.mongodb.hadoop.MongoOutputFormat[Any, Any]],
mongo_config
)
The raw query executed in mongo is then something like this:
2014-11-09T13:32:11.609-0800 [conn438] update db.users query: { _id: ObjectId('5436edd3e4b051de6a505af9') } update: { $set: { value: 10 } } nMatched:1 nModified:0 keyUpdates:0 numYields:0 locks(micros) w:24 3ms