I am coding in Java and, to implement persistence for my game model and players, I want to create and use a MongoDB database. I have not set up any authentication, but MongoDB is throwing an error insisting that I am not authorized to do anything.
What would cause the program to think that authentication is necessary and how can I fix this?
Exception in thread "main" com.mongodb.MongoCommandException:
Command failed with error 13:
'not authorized on Catan2 to execute command
{
insert: "Players",
ordered: true,
documents: [ { _id: 20, Login: "{"username":"un","password":"p","ID":20}" } ]
}'
on server 127.0.0.1:27017.
The full response is
{
"ok" : 0.0,
"errmsg" : "not authorized on Catan2 to execute command
{
insert: \"Players\",
ordered: true,
documents: [ { _id: 20, Login: \"{\"username\":\"un\",\"password\":\"p\",\"ID\":20}\" } ]
}",
"code" : 13 }
I have included my code below to show the steps that I am taking.
import server.plugincode.iplugin.IPersistencePlugin;
import server.plugincode.iplugin.IPlayerDAO;
import com.mongodb.MongoClient;
import shared.jsonobject.Login;
import org.bson.Document;
import com.google.gson.GsonBuilder;
import com.mongodb.client.MongoDatabase;
public class MongoPersistencePlugin implements IPersistencePlugin {
private MongoClient mongoClient;
private PlayerDAO Players;
public MongoPersistencePlugin()
{
mongoClient = new MongoClient();
MongoDatabase mdb = mongoClient.getDatabase("Catan2");
GsonBuilder gson = new GsonBuilder();
gson.enableComplexMapKeySerialization();
String info = gson.create().toJson(new Login("un", "p", 20));
Document test = new Document("_id", 20).append("Login", info);
mdb.getCollection("Players").insertOne(test);
mdb.getCollection("Players").drop();
Players = new PlayerDAO(mdb);
// Games = new GameDAO(mdb);
// Commands = new CommandDAO(mdb);
// clear();
}
}
I am using MongoDB 3.0 and Java 8, and I am working on a Windows laptop, if that helps.
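One thing worth checking, since nothing in this code opts into authentication: error 13 typically means the mongod process itself was started with access control enabled (for example with --auth, or security.authorization: enabled in the config file), in which case every unauthenticated command is rejected. Either restart mongod without that setting, or pass credentials from the driver. Below is a minimal sketch for the 3.0 Java driver, assuming a user has already been created on the Catan2 database; the user name, password, and role are placeholders.
import java.util.Arrays;
import com.mongodb.MongoClient;
import com.mongodb.MongoCredential;
import com.mongodb.ServerAddress;
import com.mongodb.client.MongoDatabase;

public class AuthenticatedConnection {

    public static void main(String[] args) {
        // Hypothetical user, created beforehand in the mongo shell:
        //   use Catan2
        //   db.createUser({user: "gameUser", pwd: "secret",
        //                  roles: [{role: "readWrite", db: "Catan2"}]})
        MongoCredential credential = MongoCredential.createCredential(
                "gameUser", "Catan2", "secret".toCharArray());

        // The 3.0 driver accepts a list of credentials alongside the server address
        MongoClient mongoClient = new MongoClient(
                new ServerAddress("localhost", 27017),
                Arrays.asList(credential));

        MongoDatabase mdb = mongoClient.getDatabase("Catan2");
        System.out.println(mdb.getName()); // commands issued here are now authenticated
        mongoClient.close();
    }
}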
I need a MongoDB query, as well as the corresponding Java code using the aggregation framework, for the scenario below.
The scenario is: I need to search the transactions array for "seqNo": 4, matching documents on "aC", "aI", "aN", "aT", "bID", and "pD", in collection A.
Sample documents from the collection are shown below.
Collection A:
/*1*/
{
"_id" : ObjectId("6398b904aa0c28d6193bb853"),
"aC" : "AB",
"aI" : "ABCD",
"aN" : "040000000002",
"aT" : "CA",
"bID" : NumberLong(0),
"pD" : "2019-04-19",
"timeToLive" : ISODate("2019-04-19T00:00:00.000Z"),
"transactions" : [
{
"amt" : NumberDecimal("-12.340"),
"seqNo" : 2,
"valDt" : "2022-10-04"
},
{
"amt" : NumberDecimal("-6.800"),
"seqNo" : 5,
"valDt" : "2022-10-04"
}
]
}
/*2*/
{
"_id" : ObjectId("5d42daa7decf182710080d46"),
"aC" : "AB",
"aI" : "ABCD",
"aN" : "040000000002",
"aT" : "CA",
"bID" : NumberLong(1),
"pD" : "2019-04-19",
"timeToLive" : ISODate("2019-04-19T00:00:00.000Z"),
"transactions" : [
{
"seqNo" : 4,
"amt" : NumberDecimal("26074.000"),
"valDt" : "2019-04-19"
},
{
"seqNo" : 3,
"amt" : NumberDecimal("26074.000"),
"valDt" : "2019-04-19"
}
]
}
Please help me with the query; it would be really helpful if it were explained in detail.
Thanks in advance.
To kick off the answer, we will start with the mongo shell (JavaScript), because it outlines what we are going to do later in Java.
db.foo.aggregate([
{$match: {"aC": val_aC, "aI": val_aI}},
{$project: {transaction: {$arrayElemAt: [ {$filter: {
input: "$transactions",
cond: {$eq:["$$this.seqNo",4]}
}}, 0] }
}},
{$match: {transaction: {$exists: true}}}
]);
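Run against the two sample documents in the question, this pipeline should return only the second document, with the matching array element promoted to a single transaction field, roughly:
{
    "_id" : ObjectId("5d42daa7decf182710080d46"),
    "transaction" : {
        "seqNo" : 4,
        "amt" : NumberDecimal("26074.000"),
        "valDt" : "2019-04-19"
    }
}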
Java is always a little more verbose than Python or JavaScript, but here is how I do it. Because my editor matches braces and brackets, I find it easier to construct the query as JSON and then convert it into the required pipeline of Document objects.
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoDatabase;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoCursor;
import com.mongodb.client.AggregateIterable;
import org.bson.*;
import java.util.ArrayList;
import java.util.List;
public class agg4 {
private MongoClient mongoClient;
private MongoDatabase db;
private MongoCollection<Document> coll;
private static class StageHelper {
private StringBuilder txt;
public StageHelper() {
this.txt = new StringBuilder();
}
        public void add(String expr, Object ... subs) {
            expr = expr.replace("'", "\""); // This is the helpful part: single quotes become JSON double quotes.
            if(subs.length > 0) {
                expr = String.format(expr, subs); // This too: printf-style substitution of values.
            }
            txt.append(expr);
        }
public Document fetch() {
return Document.parse(txt.toString());
}
}
private List<Document> makePipeline() {
List<Document> pipeline = new ArrayList<Document>();
StageHelper s = new StageHelper();
// It is left as an exercise to add all the other individual fields
// that need to be matched and properly pass them in, etc.
String val_aC = "AB";
String val_aI = "ABCD";
s.add(" {'$match': {'aC':'%s','aI':'%s'}}", val_aC, val_aI );
pipeline.add(s.fetch());
s = new StageHelper();
        // This is the meat. Find seqNo = 4. Since I suspect seqNo is
        // unique, it does no extra good to filter the array down to an
        // array of one; changing the array of 1 (if found, of course) to a
        // *single* object has more utility, hence the use of $arrayElemAt:
s.add(" {'$project': {'transaction': {'$arrayElemAt': [ ");
s.add(" {'$filter': {'input': '$transactions', ");
s.add(" 'cond': {'$eq':['$$this.seqNo', 4]} ");
s.add(" }}, 0]} ");
s.add(" }}");
pipeline.add(s.fetch());
s = new StageHelper();
// If seqNo = 4 could not be found, transaction will be unset so
// use the following to exclude that doc.
s.add(" {'$match': {'transaction': {'$exists': true}}} ");
pipeline.add(s.fetch());
return pipeline;
}
private void doAgg() {
try {
List<Document> pipeline = makePipeline();
AggregateIterable<Document> output = coll.aggregate(pipeline);
MongoCursor<Document> iterator = output.iterator();
while (iterator.hasNext()) {
                Document doc = iterator.next(); // process each matching document here
}
} catch(Exception e) {
System.out.println("some fail: " + e);
}
}
public static void main(String[] args) {
(new agg4()).go(args);
}
public void go(String[] args) {
try {
String host = "mongodb://localhost:37017/?replicaSet=rs0";
String dbname = "testX";
String collname = "foo";
mongoClient = MongoClients.create(host);
db = mongoClient.getDatabase(dbname);
coll = db.getCollection(collname, Document.class);
doAgg();
} catch(Exception e) {
System.out.println("epic fail: " + e);
e.printStackTrace();
}
    }
}
If you are using Java 15 or higher, where text blocks are a standard feature, multi-line String literals make it even easier:
String s = """
{'$project': {'transaction': {'$arrayElemAt': [
{'$filter': {'input': '$transactions',
'cond': {'$eq':['$$this.seqNo', 4]}
}}, 0]}
}}
""";
pipeline.add(s.fetch());
I have read countless articles and code examples on MongoDB Change Streams, but I still can't manage to set them up properly. I'm trying to listen to a specific collection in my MongoDB, and whenever a document is inserted, updated, or deleted, I want to do something.
This is what I've tried:
@Data
@Document(collection = "teams")
public class Teams {

    @MongoId(FieldType.OBJECT_ID)
    private ObjectId id;
    private Integer teamId;
    private String name;
    private String description;
}
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import com.mongodb.client.model.Aggregates;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.changestream.FullDocument;
import com.mongodb.client.ChangeStreamIterable;
import org.bson.Document;
import org.bson.conversions.Bson;
import java.util.Arrays;
import java.util.List;
public class MongoDBChangeStream {
// connect to the local database server
MongoClient mongoClient = MongoClients.create("db uri goes here");
// Select the MongoDB database
MongoDatabase database = mongoClient.getDatabase("MyDatabase");
// Select the collection to query
MongoCollection<Document> collection = database.getCollection("teams");
// Create pipeline for operationType filter
List<Bson> pipeline = Arrays.asList(
Aggregates.match(
Filters.in("operationType",
Arrays.asList("insert", "update", "delete"))));
// Create the Change Stream
ChangeStreamIterable<Document> changeStream = collection.watch(pipeline)
.fullDocument(FullDocument.UPDATE_LOOKUP);
// Iterate over the Change Stream
for (Document changeEvent : changeStream) {
// Process the change event here
}
}
So this is what I have so far, and everything is good until the for-loop, which gives three errors:
There is a red line under 'for (', which says unexpected token.
There is a red line under ' :', which says ';' expected.
There is a red line under 'changeStream)', which says unknown class: 'changeStream'.
First of all, you should put your code inside a class method, not the class body. Second, the element type when iterating a ChangeStreamIterable&lt;Document&gt; is ChangeStreamDocument&lt;Document&gt;, not Document.
Summing things up:
import com.mongodb.client.model.changestream.ChangeStreamDocument; // needed in addition to your imports

public class MongoDBChangeStream {
public void someMethod() {
// connect to the local database server
MongoClient mongoClient = MongoClients.create("db uri goes here");
// Select the MongoDB database
MongoDatabase database = mongoClient.getDatabase("MyDatabase");
// Select the collection to query
MongoCollection<Document> collection = database.getCollection("teams");
// Create pipeline for operationType filter
List<Bson> pipeline = Arrays.asList(
Aggregates.match(
Filters.in(
"operationType",
Arrays.asList("insert", "update", "delete")
)));
// Create the Change Stream
ChangeStreamIterable<Document> changeStream = collection.watch(pipeline)
.fullDocument(FullDocument.UPDATE_LOOKUP);
// Iterate over the Change Stream
for (ChangeStreamDocument<Document> changeEvent : changeStream) {
// Process the change event here
}
}
}
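As a sketch of what the loop body might look like, using accessors from the driver's ChangeStreamDocument API (the printout format is just an illustration):
for (ChangeStreamDocument<Document> changeEvent : changeStream) {
    // operationType says whether this was an insert, update, or delete
    System.out.println("Operation: " + changeEvent.getOperationType());

    // With FullDocument.UPDATE_LOOKUP, inserts and updates carry the current
    // state of the document; for a delete this will be null
    Document fullDocument = changeEvent.getFullDocument();
    if (fullDocument != null) {
        System.out.println("Document: " + fullDocument.toJson());
    }

    // getDocumentKey() always holds the _id of the affected document
    System.out.println("Key: " + changeEvent.getDocumentKey());
}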
I've been hunting around for a solution to this question.
It appears to me that there is no way to embed reading and writing Parquet format in a Java program without pulling in dependencies on HDFS and Hadoop. Is this correct?
I want to read and write on a client machine, outside of a Hadoop cluster.
I started to get excited about Apache Drill, but it appears that it must run as a separate process. What I need is an in-process ability to read and write a file using the Parquet format.
You can write Parquet files outside a Hadoop cluster using the Java Parquet client API.
Here is sample Java code which writes Parquet format to local disk.
import java.io.File;
import java.io.IOException;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.avro.AvroSchemaConverter;
import org.apache.parquet.avro.AvroWriteSupport;
import org.apache.parquet.hadoop.ParquetWriter;
import org.apache.parquet.hadoop.api.WriteSupport;
import org.apache.parquet.hadoop.metadata.CompressionCodecName;
import org.apache.parquet.schema.MessageType;
public class Test {

    // Row-group and page sizes in bytes; these are the library defaults
    private static final int BLOCK_SIZE = ParquetWriter.DEFAULT_BLOCK_SIZE;
    private static final int PAGE_SIZE = ParquetWriter.DEFAULT_PAGE_SIZE;

    void test() throws IOException {
        final String schemaLocation = "/tmp/avro_format.json";
        final Schema avroSchema = new Schema.Parser().parse(new File(schemaLocation));
        final MessageType parquetSchema = new AvroSchemaConverter().convert(avroSchema);
        final WriteSupport<GenericRecord> writeSupport = new AvroWriteSupport(parquetSchema, avroSchema);
        final String parquetFile = "/tmp/parquet/data.parquet";
        final Path path = new Path(parquetFile);
        ParquetWriter<GenericRecord> parquetWriter = new ParquetWriter<GenericRecord>(
                path, writeSupport, CompressionCodecName.SNAPPY, BLOCK_SIZE, PAGE_SIZE);
final GenericRecord record = new GenericData.Record(avroSchema);
record.put("id", 1);
record.put("age", 10);
record.put("name", "ABC");
record.put("place", "BCD");
parquetWriter.write(record);
parquetWriter.close();
}
}
avro_format.json:
{
"type":"record",
"name":"Pojo",
"namespace":"com.xx.test",
"fields":[
{
"name":"id",
"type":[
"int",
"null"
]
},
{
"name":"age",
"type":[
"int",
"null"
]
},
{
"name":"name",
"type":[
"string",
"null"
]
},
{
"name":"place",
"type":[
"string",
"null"
]
}
]
}
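For the reading side of the question, here is a minimal sketch using AvroParquetReader against the same file; note that the Hadoop Path class is still needed as a compile-time dependency even though no cluster is involved.
import java.io.IOException;
import org.apache.avro.generic.GenericRecord;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.avro.AvroParquetReader;
import org.apache.parquet.hadoop.ParquetReader;

public class ReadTest {

    void read() throws IOException {
        ParquetReader<GenericRecord> reader = AvroParquetReader
                .<GenericRecord>builder(new Path("/tmp/parquet/data.parquet"))
                .build();

        // read() returns null once the file is exhausted
        GenericRecord record;
        while ((record = reader.read()) != null) {
            System.out.println(record);
        }
        reader.close();
    }
}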
Hope this helps.
I got a new API key using "Create credentials", but I didn't enter any billing details. Then I used the following code to access the Translate API in order to translate a text:
package traanslatorapi;
import com.google.api.services.translate.Translate;
import com.google.api.services.translate.model.TranslationsListResponse;
import com.google.api.services.translate.model.TranslationsResource;
import java.io.IOException;
import java.security.GeneralSecurityException;
import java.util.Arrays;
import java.util.logging.Level;
import java.util.logging.Logger;
/**
 *
 * @author User
 */
public class TraanslatorApi {

    /**
     * @param args the command line arguments
     */
public static void main(String[] args) throws IOException {
Translate t = null;
try {
t = new Translate.Builder(
com.google.api.client.googleapis.javanet.GoogleNetHttpTransport.newTrustedTransport(), com.google.api.client.json.gson.GsonFactory.getDefaultInstance(), null)
//Need to update this to your App-Name
.setApplicationName("OCRProject")
.build();
} catch (GeneralSecurityException ex) {
Logger.getLogger(TraanslatorApi.class.getName()).log(Level.SEVERE, null, ex);
}
Translate.Translations.List list = t.new Translations().list(
Arrays.asList(
//Pass in list of strings to be translated
"Hello World",
"How to use Google Translate from Java"),
//Target language
"ES");
//Set your API-Key from https://console.developers.google.com/
list.setKey("AIzaSyCX2O-pteDLJLeMivT47kD9pucEv67QECQ");
TranslationsListResponse response = list.execute();
for (TranslationsResource tr : response.getTranslations()) {
System.out.println(tr.getTranslatedText());
}
}
}
As the output, I got the following:
run:
Exception in thread "main" com.google.api.client.googleapis.json.GoogleJsonResponseException: 403 Forbidden
{
"code": 403,
"errors": [
{
"domain": "usageLimits",
"message": "Daily Limit Exceeded",
"reason": "dailyLimitExceeded"
}
],
"message": "Daily Limit Exceeded"
}
at com.google.api.client.googleapis.json.GoogleJsonResponseException.from(GoogleJsonResponseException.java:145)
at com.google.api.client.googleapis.services.json.AbstractGoogleJsonClientRequest.newExceptionOnError(AbstractGoogleJsonClientRequest.java:113)
at com.google.api.client.googleapis.services.json.AbstractGoogleJsonClientRequest.newExceptionOnError(AbstractGoogleJsonClientRequest.java:40)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest$1.interceptResponse(AbstractGoogleClientRequest.java:321)
at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:1056)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:419)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:352)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.execute(AbstractGoogleClientRequest.java:469)
at traanslatorapi.TraanslatorApi.main(TraanslatorApi.java:47)
Java Result: 1
BUILD SUCCESSFUL (total time: 4 seconds)
I used this application on the same day that I obtained the key, so I can't see how a daily limit could already have been exceeded. I can't find the reason for this.
Give it another try after entering your billing details; you need to do that before the API will accept your API key. Without billing enabled, the Translate API has no free quota, which is why even the very first request fails with the misleading "Daily Limit Exceeded" message.
import java.util.Date;
import java.util.List;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.mongodb.core.MongoOperations;
import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;
public class CustomQuery {
    @Autowired private MongoOperations mongoOperations;
public void customQuery(Date submittalDate) {
List<Question> q1s = mongoOperations.find(
new Query(Criteria.where("category").is("New")),
Question.class);
List<Question> q2s = mongoOperations.find(
new Query(
Criteria.where("submittalDate").gt(submittalDate).and("category").is("New")
),
Question.class);
}
}
The top Spring Java MongoDB query gives back the expected results in q1s.
The bottom query should return a subset of the top query. Instead, records which match ("submittalDate").gt(submittalDate) are in the q2s results regardless of whether or not they are in the "New" category.
i.e., it is as if the and("category").is("New") clause in the second query were being ignored.
Using MongoDB v2.0.6 (32-bit) with Spring Data.
Help appreciated.
Update 05/09/2012
Still doesn't work
Update 26/08/2012
This returns results on the Mongo command line:
db.foo.find( { "submittalDate":{ "$gte": ISODate("2012-07-31T23:00:00.000Z") }, "category" : "New" } )
In contrast, the Java code (for the same date parameter) doesn't work. For comparison, the query logged at DEBUG level from Java is:
[DEBUG] [http-8080-1] (MongoTemplate.java:doFind:1256) find using query:
{ "submittalDate" : { "$gte" : { "$date" : "2012-07-31T23:00:00.000Z"}} , "category" : "New"}
Yes, the log shows a date string, whereas to get the Mongo shell query working I needed to use ISODate(..).
But I'm using the MongoDB Java driver with the accepted type of java.util.Date, so how could the absence of ISODate(..) be the issue? The issue might have another cause.
I'm no Spring expert, but it seems like some of your imports may be conflicting with each other. It's difficult to diagnose exactly where you are going wrong from the documentation I've looked at. If you're not set on using the Spring framework for this, an alternative, more common approach would be the below.
import java.util.Date;
import com.mongodb.BasicDBObject;
import com.mongodb.DBCursor;
import com.mongodb.DBObject;

public class CustomQuery {

    public void customQuery(Date submittalDate) {
        // Build {"submittalDate": {"$gte": <date>}, "category": "New"} directly
        DBObject query = new BasicDBObject();
        query.put("submittalDate", new BasicDBObject("$gte", submittalDate));
        query.put("category", "New");

        // getDbCollection() is assumed to return the target DBCollection
        DBCursor cursor = getDbCollection().find(query);
    }
}
{ "$date" : "2012-07-31T23:00:00.000Z"}
is equivalent to
Date("2012-07-31T23:00:00.000Z")
and Date("2012-07-31T23:00:00.000Z") will return a string, not an ISODate()
(via http://www.mongodb.org/display/DOCS/Mongo+Extended+JSON).
I think this is a bug in org.springframework.data.mongodb.core.query.Criteria.
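If the chained and() really is misbehaving in this Spring Data version, one possible workaround (a sketch, not a confirmed fix) is to register each criterion on the Query separately:
import java.util.Date;
import java.util.List;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.mongodb.core.MongoOperations;
import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;

public class CustomQueryWorkaround {

    @Autowired private MongoOperations mongoOperations;

    public List<Question> customQuery(Date submittalDate) {
        // Each addCriteria() call contributes an independent top-level condition,
        // sidestepping the and() chaining that appears to be dropped
        Query query = new Query();
        query.addCriteria(Criteria.where("submittalDate").gt(submittalDate));
        query.addCriteria(Criteria.where("category").is("New"));
        return mongoOperations.find(query, Question.class);
    }
}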