Mongo dropAllUsersFromDatabase command does not work using MongoClient - Java

I am new to MongoDB and have the code below for dropping a database. Before dropping the database, I run the dropAllUsersFromDatabase command to remove all users from it. However, this command didn't remove those users.
private final MongoClient mongo;

public void drop(final String databaseName) {
    final MongoDatabase database = mongo.getDatabase(databaseName);
    final BasicDBObject commandArguments = new BasicDBObject();
    commandArguments.put("dropAllUsersFromDatabase", 1);
    final BasicDBObject command = new BasicDBObject(commandArguments);
    database.runCommand(command);
    database.drop();
}
Those users still remain in the system.users collection of the admin database, where MongoDB stores user authentication and authorization information. For example:
{
    "_id" : "57b3cc41851cb3aab00a3c84.57b3cc41851cb3aab00a3c84",
    "user" : "57b3cc41851cb3aab00a3c84",
    "db" : "57b3cc41851cb3aab00a3c84",
    "credentials" : {
        "SCRAM-SHA-1" : {
            "iterationCount" : 10000,
            "salt" : "8LjAxK/7z9yJLBfAchkLGg==",
            "storedKey" : "HmktZ1GF+uQ8MJqIzY3/lU7VJEg=",
            "serverKey" : "9CyQxvtK9BWT80la8p4o+kHyNwc="
        }
    },
    "roles" : [
        {
            "role" : "dbOwner",
            "db" : "57b3cc41851cb3aab00a3c84"
        }
    ]
}
But if I execute the command in the mongo shell as below, it deletes those users correctly.
db.runCommand({ dropAllUsersFromDatabase: 1});
The login authorization is the same, and no exception is thrown when running database.runCommand(command);
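One way to narrow this down: dropAllUsersFromDatabase reports how many users it removed in the n field of its reply, so inspecting the Document returned by database.runCommand(command), instead of discarding it, shows whether the command matched any users at all. A typical shell reply looks roughly like this (the count shown is illustrative):

```
> db.runCommand({ dropAllUsersFromDatabase: 1 })
{ "n" : 1, "ok" : 1 }
```

If the Java side gets n as 0 while the shell removes users, the two are almost certainly not addressing the same database, for example because of a different database name or a different mongod instance.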

Deletion of users (identities) from Google bucket IAM Policy does not work

In order to remove identities from a Google Cloud bucket, I use the example provided at the GCP examples repo: here. I am wondering if there is something I am missing; I have the correct root credentials for the cloud account, as well as the project ownership credentials. Basically, the removal operations do not work, either from Java code or using gsutil from the GCP web console.
Here is the original policy:
Policy{
    bindings={
        roles/storage.legacyBucketOwner=[
            projectOwner:csbauditor
        ],
        roles/storage.objectAdmin=[
            serviceAccount:company-kiehn-log@csbauditor.iam.gserviceaccount.com,
            serviceAccount:company-kiehn-file@csbauditor.iam.gserviceaccount.com,
            serviceAccount:company-howe-file@csbauditor.iam.gserviceaccount.com,
            serviceAccount:company-satterfield-log@csbauditor.iam.gserviceaccount.com,
            serviceAccount:customer-0c1e8536-8bf5-46f4-8e@csbauditor.iam.gserviceaccount.com,
            serviceAccount:company-fahey-log@csbauditor.iam.gserviceaccount.com,
            serviceAccount:company-hammes-file@csbauditor.iam.gserviceaccount.com,
            serviceAccount:company-howe-log@csbauditor.iam.gserviceaccount.com,
            serviceAccount:company-sipes-file@csbauditor.iam.gserviceaccount.com,
            serviceAccount:company-doyle-log@csbauditor.iam.gserviceaccount.com,
            serviceAccount:customer-6a53ee71-95eb-49b2-8a@csbauditor.iam.gserviceaccount.com,
            serviceAccount:company-bergnaum-file@csbauditor.iam.gserviceaccount.com
        ],
        roles/storage.legacyBucketReader=[
            projectViewer:csbauditor
        ],
        roles/storage.objectViewer=[
            serviceAccount:company-block-log@csbauditor.iam.gserviceaccount.com
        ]
    },
    etag=CLgE,
    version=0
}
Here is the second policy version, before writing to IAM:
Policy{
    bindings={
        roles/storage.legacyBucketOwner=[
            projectOwner:csbauditor
        ],
        roles/storage.objectAdmin=[
            serviceAccount:company-kiehn-log@csbauditor.iam.gserviceaccount.com,
            serviceAccount:company-kiehn-file@csbauditor.iam.gserviceaccount.com,
            serviceAccount:company-howe-file@csbauditor.iam.gserviceaccount.com,
            serviceAccount:company-satterfield-log@csbauditor.iam.gserviceaccount.com,
            serviceAccount:customer-0c1e8536-8bf5-46f4-8e@csbauditor.iam.gserviceaccount.com,
            serviceAccount:company-fahey-log@csbauditor.iam.gserviceaccount.com,
            serviceAccount:company-hammes-file@csbauditor.iam.gserviceaccount.com,
            serviceAccount:company-howe-log@csbauditor.iam.gserviceaccount.com,
            serviceAccount:company-sipes-file@csbauditor.iam.gserviceaccount.com,
            serviceAccount:company-doyle-log@csbauditor.iam.gserviceaccount.com,
            serviceAccount:customer-6a53ee71-95eb-49b2-8a@csbauditor.iam.gserviceaccount.com,
            serviceAccount:company-bergnaum-file@csbauditor.iam.gserviceaccount.com
        ],
        roles/storage.legacyBucketReader=[
            projectViewer:csbauditor
        ],
        roles/storage.objectViewer=[
            serviceAccount:company-block-log@csbauditor.iam.gserviceaccount.com
        ]
    },
    etag=CLgE,
    version=0
}
Here is my code snippet:
Read bucket policy and extract unwanted identities
Set<Identity> wrongIdentities = new HashSet<Identity>();
Role roler = null;
Policy p = Cache.GCSStorage.getIamPolicy("bucketxyz");
Map<Role, Set<Identity>> policyBindings = p.getBindings();
for (Map.Entry<Role, Set<Identity>> entry : policyBindings.entrySet()) {
    roler = entry.getKey();
    if (roler.getValue().equals("roles/storage.objectAdmin")) {
        Set<Identity> setidentities = entry.getValue();
        for (Identity set : setidentities) {
            if (set.equals("serviceAccount:attacker@csbauditor.iam.gserviceaccount.com")) {
                continue;
            } else {
                wrongIdentities.add(set);
            }
        }
    }
}
for (Identity identity : wrongIdentities) {
    removeBucketIamMember("bucketxyz", roler, identity);
}
Remove Unwanted Identities from policy
public static Policy removeBucketIamMember(String bucketName, Role role, Identity identity) {
    Storage storage = GoogleStorage.initStorage();
    Policy policy = storage.getIamPolicy(bucketName);
    System.out.println("policy " + policy);
    Policy updatedPolicy = policy.toBuilder()
            .removeIdentity(role, Identity.serviceAccount(identity.getValue()))
            .build();
    System.out.println("updatedPolicy " + updatedPolicy);
    storage.setIamPolicy(bucketName, updatedPolicy);
    if (updatedPolicy.getBindings().get(role) == null
            || !updatedPolicy.getBindings().get(role).contains(identity)) {
        System.out.printf("Removed %s with role %s from %s\n", identity, role, bucketName);
    }
    return updatedPolicy;
}
Update 01
I also tried using gsutil from within the web console; it still does not work.
myaccount@cloudshell:~ (csbauditor)$ gsutil iam ch -d user:company-sipes-file@csbauditor.iam.gserviceaccount.com gs://company-block-log-fce65e82-a0cd-4f71-8693-381100d93c18
No changes made to gs://company-block-log-fce65e82-a0cd-4f71-8693-381100d93c18/
Update 02: As advised by @JohnHanley, gsutil worked after I replaced user with serviceAccount. However, the Java code is not working yet.
I have found the issue in your code. I cannot be completely sure it was the only issue, since I wasn't able to compile your code as posted and had to change several classes as well.
Once I was able to compile and run it, I noticed that even when the "remove" function executed, nothing actually happened. After adding a few prints I saw that it was trying to remove the service accounts using the wrong role: the role variable is reassigned on every iteration of the for loop, so whenever a set was not equal to the attacker service account, the next iteration overwrote the role value.
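This stale-role effect can be reproduced with plain collections. The sketch below uses String stand-ins for the GCS Role and Identity types (none of these names are from the real API), showing that after the loop the single mutable variable names whichever binding came last, not the role the flagged identities were found under:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class StaleRoleDemo {

    // Mirrors the question's loop: one mutable "role" variable reassigned on
    // every iteration, while flagged identities are collected separately.
    static String staleRole(Map<String, List<String>> bindings) {
        String role = null;
        List<String> flagged = new ArrayList<>();
        for (Map.Entry<String, List<String>> e : bindings.entrySet()) {
            role = e.getKey(); // overwritten again on the next iteration
            if (role.equals("roles/storage.objectAdmin")) {
                flagged.addAll(e.getValue());
            }
        }
        // "flagged" holds the objectAdmin members, but "role" now names
        // whatever binding happened to come last, so a removal using it misses.
        return role;
    }

    public static void main(String[] args) {
        Map<String, List<String>> bindings = new LinkedHashMap<>();
        bindings.put("roles/storage.objectAdmin", List.of("sa:alice", "sa:bob"));
        bindings.put("roles/storage.objectViewer", List.of("sa:carol"));
        System.out.println(staleRole(bindings)); // roles/storage.objectViewer
    }
}
```

Fixing it by initializing the role once, outside the loop, is exactly what the class below does.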
Here's the code of my class (a modification of the example snippet):
package com.google.cloud.examples.storage.snippets;

import com.google.cloud.Identity;
import com.google.cloud.Policy;
import com.google.cloud.Role;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;
import com.google.cloud.storage.StorageRoles;

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

/** This class contains Bucket-level IAM snippets for the {@link Storage} interface. */
public class BucketIamSnippets {

    /** Example of listing the Bucket-Level IAM Roles and Members */
    public Policy listBucketIamMembers(String bucketName) {
        // [START view_bucket_iam_members]
        // Initialize a Cloud Storage client
        Storage storage = StorageOptions.getDefaultInstance().getService();
        // Get IAM Policy for a bucket
        Policy policy = storage.getIamPolicy(bucketName);
        // Print roles and their identities
        Map<Role, Set<Identity>> policyBindings = policy.getBindings();
        for (Map.Entry<Role, Set<Identity>> entry : policyBindings.entrySet()) {
            System.out.printf("Role: %s Identities: %s\n", entry.getKey(), entry.getValue());
        }
        // [END view_bucket_iam_members]
        return policy;
    }
    /** Example of adding a member to the Bucket-level IAM */
    public Policy addBucketIamMember(String bucketName, Role role, Identity identity) {
        // [START add_bucket_iam_member]
        // Initialize a Cloud Storage client
        Storage storage = StorageOptions.getDefaultInstance().getService();
        // Get IAM Policy for a bucket
        Policy policy = storage.getIamPolicy(bucketName);
        // Add identity to Bucket-level IAM role
        Policy updatedPolicy =
                storage.setIamPolicy(bucketName, policy.toBuilder().addIdentity(role, identity).build());
        if (updatedPolicy.getBindings().get(role).contains(identity)) {
            System.out.printf("Added %s with role %s to %s\n", identity, role, bucketName);
        }
        // [END add_bucket_iam_member]
        return updatedPolicy;
    }
    public static void removeUserFromBucketUsingEmail(String bucketName, Role role, String email) {
        Storage storage = StorageOptions.getDefaultInstance().getService();
        Policy policy = storage.getIamPolicy(bucketName);
        Identity identity = Identity.serviceAccount(email);
        String eTag = policy.getEtag();
        System.out.println("etag: " + eTag);
        Policy updatedPolicy = storage.setIamPolicy(bucketName, policy.toBuilder().removeIdentity(role, identity).build());
        if (updatedPolicy.getBindings().get(role) == null
                || !updatedPolicy.getBindings().get(role).contains(identity)) {
            System.out.printf("Removed %s with role %s from %s\n", identity, role, bucketName);
        }
    }
    public static void main(String... args) throws Exception {
        try {
            String bucketName = "my-bucket-name";
            BucketIamSnippets obj = new BucketIamSnippets();
            Role role_admin = StorageRoles.objectAdmin();
            String acc_1 = "test1@my.iam.gserviceaccount.com";
            String acc_2 = "test2@my.iam.gserviceaccount.com";
            Identity identity_1 = Identity.serviceAccount(acc_1);
            Identity identity_2 = Identity.serviceAccount(acc_2);
            System.out.println(obj.addBucketIamMember(bucketName, role_admin, identity_1));
            System.out.println(obj.addBucketIamMember(bucketName, role_admin, identity_2));
            Storage storage = StorageOptions.getDefaultInstance().getService();
            Policy policy = storage.getIamPolicy(bucketName);
            System.out.println(policy);
            //List<Role> roleList = new ArrayList<>();
            List<Set<Identity>> identities = new ArrayList<>();
            // Print roles and their identities
            Set<Identity> wrongIdentities = new HashSet<Identity>();
            Role aux = null;
            Map<Role, Set<Identity>> policyBindings = policy.getBindings();
            Set<Identity> setidentities = new HashSet<>();
            for (Map.Entry<Role, Set<Identity>> entry : policyBindings.entrySet()) {
                aux = entry.getKey();
                System.out.println("role plain " + aux);
                System.out.println("role other " + aux.getValue());
                if (aux.getValue().equals("roles/storage.objectAdmin")) {
                    System.out.println("role :" + aux.getValue());
                    System.out.println("Identities getV :" + entry.getValue());
                    System.out.println("Identities getK :" + entry.getKey());
                    setidentities = entry.getValue();
                    System.out.println("setidentities :" + setidentities);
                    System.out.println("setidentities size :" + setidentities.size());
                    for (Identity set : setidentities) {
                        if (set.equals(Identity.serviceAccount("test2@my.iam.gserviceaccount.com"))) {
                            System.out.println("kept one : " + set);
                            continue;
                        } else {
                            wrongIdentities.add(set);
                            System.out.println("wrong one : " + set);
                        }
                        System.out.println("wrongIdentities.size() : " + wrongIdentities.size());
                    }
                }
            }
            System.out.println("ww " + wrongIdentities);
            System.out.println("policyEtag " + policy.getEtag());
            //GCSFunctions function = new GCSFunctions();
            for (Identity identity : wrongIdentities) {
                BucketIamSnippets.removeUserFromBucketUsingEmail(bucketName, role_admin, identity.getValue());
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
Notes:
I added two test service accounts and then ran your code (with small modifications).
I initialized the role as objectAdmin directly, and that is what I pass to the removal function.
Modify the code to match your actual use case.
I compiled this with the same dependencies used in the example.

Recording sales amount in datadog metrics

I'm trying to record my website sales amount (in $) in Datadog. However, I'm getting way more than the actual value.
I'm using the java-dogstatsd client and Spring. My application runs on 3 hosts. I recorded all the metrics below (using the sendWebOrder method), but no luck.
@EnableConfigurationProperties({DataDogProperties.class})
@Component
public class DDMetrics {

    @Autowired
    DataDogProperties dataDogProperties;

    @Autowired
    private NonBlockingStatsDClient statsd;

    private Map<TopicPartition, Long> lags = new HashMap<>();

    @Bean
    private NonBlockingStatsDClient initClient() {
        NonBlockingStatsDClient metricsClient = new NonBlockingStatsDClient(
                dataDogProperties.getServiceName(),
                dataDogProperties.getHostname(),
                dataDogProperties.getPort());
        return metricsClient;
    }

    public void sendWebOrder(WebOrder webOrder) {
        List<String> tags = new ArrayList<>();
        tags.add("transactionType:" + webOrder.getTransactionType());
        tags.add("dataSourceType:" + webOrder.getDataSourceType());
        statsd.count("amount_count", webOrder.getAmount(), String.join(",", tags));
        statsd.recordDistributionValue("amount_dist", webOrder.getAmount(), String.join(",", tags));
        statsd.recordHistogramValue("amount_hist", webOrder.getAmount(), String.join(",", tags));
        statsd.recordGaugeValue("amount_gauge", webOrder.getAmount(), String.join(",", tags));
        statsd.incrementCounter("weborder", String.join(",", tags));
    }
}
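One thing worth double-checking (an assumption on my part, not something the question confirms): the java-dogstatsd-client metric methods accept tags as String varargs, so pre-joining them with commas hands the client one combined string rather than separate tags. A stdlib-only sketch of building the tag array instead (the statsd call itself is left as a comment since it needs a live client):

```java
import java.util.ArrayList;
import java.util.List;

public class TagArrayDemo {

    // Build one tag per element rather than a single joined string.
    static String[] buildTags(String transactionType, String dataSourceType) {
        List<String> tags = new ArrayList<>();
        tags.add("transactionType:" + transactionType);
        tags.add("dataSourceType:" + dataSourceType);
        return tags.toArray(new String[0]);
    }

    public static void main(String[] args) {
        String[] tags = buildTags("purchase", "web");
        // Hypothetical call shape: statsd.count("amount_count", amount, tags);
        System.out.println(tags.length); // 2 separate tags
        System.out.println(tags[0]);     // transactionType:purchase
    }
}
```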
I'm trying to generate a Datadog toplist by transactionType. I'm not getting the correct amount in any of the metrics (I tried mainly count, gauge, and histogram.sum). Here is my Datadog config:
{
    "viz": "toplist",
    "requests": [
        {
            "q": "top(sum:projecta.webtransactions.amount_histogram.sum{$TransactionType} by {transactiontype}, 10, 'sum', 'desc')",
            "type": "area",
            "style": {
                "palette": "dog_classic",
                "type": "solid",
                "width": "normal"
            },
            "aggregator": "sum",
            "conditional_formats": []
        }
    ],
    "autoscale": true
}
What am I missing? Is this the correct way to record a money value? Do I have to do any rollup in the config?
Any help is appreciated.

Table "myTable" not found error in while accessing the H2 DB

I am using the H2 in-memory database in my Grails project, and the application runs properly with it.
I want to connect to the H2 database using Groovy to get data out of it.
import groovy.sql.Sql
import java.sql.Driver

class psqlh2 {
    static void main(String[] args) {
        def driver = Class.forName('org.h2.Driver').newInstance() as Driver
        def props = new Properties()
        props.setProperty("user", "sa")
        props.setProperty("password", "")
        def conn = driver.connect("jdbc:h2:mem:~/databaseName;DB_CLOSE_DELAY=-1", props)
        def sql = new Sql(conn)
        def query = "SELECT * FROM company"
        try {
            sql.eachRow(query) { row ->
                println(row)
            }
        } finally {
            sql.close()
            conn.close()
        }
    }
}
WARNING: Failed to execute: SELECT * FROM company because: Table "COMPANY" not found; SQL statement:
SELECT * FROM company [42102-199]
Exception in thread "main" org.h2.jdbc.JdbcSQLSyntaxErrorException: Table "COMPANY" not found;
Please help me out.
Replace mem with file in the JDBC URL. An in-memory (mem:) database lives inside the JVM that created it, so a separate Groovy process cannot see the tables your Grails application created:
def driver = Class.forName('org.h2.Driver').newInstance() as Driver
def props = new Properties()
props.setProperty("user", username)
props.setProperty("password", password)
return driver.connect("jdbc:h2:file:${absolutePath};DB_CLOSE_DELAY=-1;IFEXISTS=true", props)
Also make sure to use the same version of the H2 jar that the H2 server is using.
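For reference, the main H2 URL modes differ in how visible the database is across processes (a summary based on H2's documented defaults):

```
jdbc:h2:mem:dbName         in-memory: private to the JVM that created it, lost on exit
jdbc:h2:file:~/dbName      file-based: one process at a time unless AUTO_SERVER=TRUE is set
jdbc:h2:tcp://host/dbName  server mode: shareable across processes over TCP
```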

How to enable journal in embedded MongoDB from Spring Boot test

I am trying to write a Spring Boot test that uses embedded MongoDB 4.0.2. The code under test uses Mongo Change Streams, which requires MongoDB to run as a replica set, and as of MongoDB v4 a replica set requires journaling to be enabled. I was not able to find a way to start the embedded instance with journaling enabled, so I posted this looking for answers; I subsequently found out how to do it (below).
I have spring-boot 2.1.3.RELEASE and spring-data-mongodb 2.1.5.RELEASE.
This is what I'd been trying:
@RunWith(SpringRunner.class)
@DataMongoTest(properties = {
        "spring.mongodb.embedded.version=4.0.2",
        "spring.mongodb.embedded.storage.repl-set-name=r_0",
        "spring.mongodb.embedded.storage.journal.enabled=true"
})
public class MyStreamWatcherTest {

    @SpringBootApplication
    @ComponentScan(basePackages = {"my.package.with.dao.classes"})
    @EnableMongoRepositories({"my.package.with.dao.repository"})
    static public class Application {
        public static void main(String[] args) {
            SpringApplication.run(Application.class, args);
        }
    }

    @Before
    public void startup() {
        MongoDatabase adminDb = mongoClient.getDatabase("admin");
        Document config = new Document("_id", "rs0");
        BasicDBList members = new BasicDBList();
        members.add(new Document("_id", 0).append("host", mongoClient.getConnectPoint()));
        config.put("members", members);
        adminDb.runCommand(new Document("replSetInitiate", config));
    }
}
However, when the test started, the options used to launch mongod did not include enabling the journal.
The fix was to add this class:
@Configuration
public class MyEmbeddedMongoConfiguration {

    private int localPort = 0;

    public int getLocalPort() {
        return localPort;
    }

    @Bean
    public IMongodConfig mongodConfig(EmbeddedMongoProperties embeddedProperties) throws IOException {
        MongodConfigBuilder builder = new MongodConfigBuilder()
                .version(Version.V4_0_2)
                .cmdOptions(new MongoCmdOptionsBuilder().useNoJournal(false).build());
        // Save the local port so the replica set initializer can come get it.
        this.localPort = Network.getFreeServerPort();
        builder.net(new Net("127.0.0.1", this.getLocalPort(), Network.localhostIsIPv6()));
        EmbeddedMongoProperties.Storage storage = embeddedProperties.getStorage();
        if (storage != null) {
            String databaseDir = storage.getDatabaseDir();
            String replSetName = "rs0"; // Should be able to: storage.getReplSetName();
            int oplogSize = (storage.getOplogSize() != null)
                    ? (int) storage.getOplogSize().toMegabytes() : 0;
            builder.replication(new Storage(databaseDir, replSetName, oplogSize));
        }
        return builder.build();
    }
}
This got the journal enabled, and mongod started with replica set support. Then I added another class to initialize the replica set:
@Configuration
public class EmbeddedMongoReplicaSetInitializer {

    @Autowired
    MyEmbeddedMongoConfiguration myEmbeddedMongoConfiguration;

    MongoClient mongoClient;

    // We don't use this MongoClient as it will try to wait for the replica set to stabilize
    // before address-fetching methods will return. It is specified here to order this class's
    // creation after MongoClient, so we can be sure mongod is running.
    EmbeddedMongoReplicaSetInitializer(MongoClient mongoClient) {
        this.mongoClient = mongoClient;
    }

    @PostConstruct
    public void initReplicaSet() {
        //List<ServerAddress> serverAddresses = mongoClient.getServerAddressList();
        MongoClient mongo = new MongoClient(new ServerAddress("127.0.0.1", myEmbeddedMongoConfiguration.getLocalPort()));
        MongoDatabase adminDb = mongo.getDatabase("admin");
        Document config = new Document("_id", "rs0");
        BasicDBList members = new BasicDBList();
        members.add(new Document("_id", 0).append("host", String.format("127.0.0.1:%d", myEmbeddedMongoConfiguration.getLocalPort())));
        config.put("members", members);
        adminDb.runCommand(new Document("replSetInitiate", config));
        mongo.close();
    }
}
That's getting the job done. If anyone has tips to make this easier, please post here.
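As a quick sanity check that the journal and replica set options actually took effect, these standard diagnostic commands can be run against the embedded instance from a mongo shell:

```
db.serverCmdLineOpts()                    // options mongod was actually started with
db.adminCommand({ replSetGetStatus: 1 })  // replica set state; errors if no replSet
```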

Connecting to MongoDB Atlas: com.mongodb.MongoCommandException: Command failed with error 8000

So I'm trying to connect to the MongoDB Atlas service, which is hosting my database (something I have done in the past with no problems), but I keep getting this error and I can't understand why. I cannot find anyone with the same issue, so I'm quite stuck.
Exception in thread "main" com.mongodb.MongoCommandException: Command failed with error 8000: 'not authorized on admin to execute command { insert: "adminCol", ordered: true, documents: [[{_id ObjectIdHex("5a4e3b6dd04f1c047975fdd5")} {id 512} {name peter jones} {hello hi}]] }' on server *******-shard-00-02-xygnn.mongodb.net:27017. The full response is { "ok" : 0, "errmsg" : "not authorized on admin to execute command { insert: \"adminCol\", ordered: true, documents: [[{_id ObjectIdHex(\"5a4e3b6dd04f1c047975fdd5\")} {id 512} {name peter jones} {hello hi}]] }", "code" : 8000, "codeName" : "AtlasError" }
at com.mongodb.connection.ProtocolHelper.getCommandFailureException(ProtocolHelper.java:164)
at com.mongodb.connection.InternalStreamConnection.receiveCommandMessageResponse(InternalStreamConnection.java:295)
at com.mongodb.connection.InternalStreamConnection.sendAndReceive(InternalStreamConnection.java:255)
at com.mongodb.connection.UsageTrackingInternalConnection.sendAndReceive(UsageTrackingInternalConnection.java:98)
at com.mongodb.connection.DefaultConnectionPool$PooledConnection.sendAndReceive(DefaultConnectionPool.java:441)
at com.mongodb.connection.CommandProtocolImpl.execute(CommandProtocolImpl.java:76)
at com.mongodb.connection.DefaultServer$DefaultServerProtocolExecutor.execute(DefaultServer.java:189)
at com.mongodb.connection.DefaultServerConnection.executeProtocol(DefaultServerConnection.java:263)
at com.mongodb.connection.DefaultServerConnection.command(DefaultServerConnection.java:126)
at com.mongodb.operation.MixedBulkWriteOperation.executeCommand(MixedBulkWriteOperation.java:373)
at com.mongodb.operation.MixedBulkWriteOperation.executeBulkWriteBatch(MixedBulkWriteOperation.java:255)
at com.mongodb.operation.MixedBulkWriteOperation.access$700(MixedBulkWriteOperation.java:66)
at com.mongodb.operation.MixedBulkWriteOperation$1.call(MixedBulkWriteOperation.java:199)
at com.mongodb.operation.MixedBulkWriteOperation$1.call(MixedBulkWriteOperation.java:190)
at com.mongodb.operation.OperationHelper.withReleasableConnection(OperationHelper.java:432)
at com.mongodb.operation.MixedBulkWriteOperation.execute(MixedBulkWriteOperation.java:190)
at com.mongodb.operation.MixedBulkWriteOperation.execute(MixedBulkWriteOperation.java:66)
at com.mongodb.Mongo$3.execute(Mongo.java:833)
at com.mongodb.MongoCollectionImpl.executeSingleWriteRequest(MongoCollectionImpl.java:1025)
at com.mongodb.MongoCollectionImpl.executeInsertOne(MongoCollectionImpl.java:513)
at com.mongodb.MongoCollectionImpl.insertOne(MongoCollectionImpl.java:493)
at com.mongodb.MongoCollectionImpl.insertOne(MongoCollectionImpl.java:487)
at AdminDB.addAdmin(AdminDB.java:51)
at AdminDB.main(AdminDB.java:76)
This is the code I'm using.
public class AdminDB {

    private MongoDatabase adminDB;
    private MongoCollection<Document> adminCollection;
    private String adminCol = "adminCol";

    public AdminDB() {
        MongoClientURI uri = new MongoClientURI(
                "mongodb://***********-shard-00-00-xygnn.mongodb.net:27017,*******-shard-00-01-xygnn.mongodb.net:27017,******-shard-00-02-xygnn.mongodb.net:27017/test?ssl=true&replicaSet=******-shard-0&authSource=admin");
        MongoClient mongoClient = new MongoClient(uri);
        adminDB = mongoClient.getDatabase("admin");
        adminCollection = adminDB.getCollection(adminCol);
    }

    public void addAdmin(JsonObject json) {
        adminCollection.insertOne(Document.parse(json.toString()));
    }

    public static void main(String[] args) {
        AdminDB adminDB = new AdminDB();
        JsonObject json = new JsonObject();
        json.addProperty("id", 512);
        json.addProperty("name", "peter jones");
        json.addProperty("hello", "hi");
        adminDB.addAdmin(json);
    }
}
To clarify: MongoDB Atlas M0 (Free Tier), M2, and M5 shared starter clusters do not support read/write operations on any collection within the admin database. See also Command Limitations in Free Tier Clusters.
Change the database name to insert the document into:
database = mongoClient.getDatabase("anotherDatabase");
