I have configured the Mongo URI in my property file as below:
spring.data.mongodb.uri=mongodb://db1.dev.com,db2.dev.com,db3.dev.com
spring.data.mongodb.database=mydb
I use mongoowl as a monitoring tool.
When I do a GET request, it shows hits on every MongoDB node, which ideally should show on only one DB, right?
No, you are actually opening a replica set (cluster) connection. With this connection type, Spring connects to all three hosts so it can fail over and fulfill a "read from secondary" preference if one is configured (hence you see hits on all three databases). However, reads and writes go only to the primary unless you have explicitly told it to read from a secondary.
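For example (adjust the hosts to your own cluster; this is just a sketch), a secondary read preference can be requested directly in the connection string:

spring.data.mongodb.uri=mongodb://db1.dev.com,db2.dev.com,db3.dev.com/?readPreference=secondaryPreferred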
Having an issue with the following configuration,
Driver version : 3.12.1, mongodb-driver for Java
Server Version: 3.2 of Mongo API for Azure Cosmos DB (Ancient, I know)
We run some fairly high read/write loads and may hit rate limiting from the Cosmos API for Mongo. In this case, I expect an exception to occur. We're doing pretty vanilla queries; the code looks similar to this:
public DatabaseQueryResult find(String collectionName, Map<String, Object> queryData) {
    Document toFind = new Document(queryData);
    MongoCollection<Document> collection = this.mongoDatabase.getCollection(collectionName);
    FindIterable<Document> findResults = collection.find(toFind);
    if (findResults != null) {
        Document dataFound = findResults.first();
        return new DatabaseQueryResult(dataFound.toJson(this.settings));
    }
    // other stuff...
}
When rate limited by Azure, you'll receive a response like so
{
"$err":"Message: {\"Errors\":[\"Request rate is large. More Request Units may be needed, so no changes were made. Please retry this request later. Learn more: http://aka.ms/cosmosdb-error-429\"]}\r\n s",
"code":16500,
"_t":"OKMongoResponse",
"errmsg":"Message: {\"Errors\":[\"Request rate is large. More Request Units may be needed, so no changes were made. Please retry this request later. Learn more: http://aka.ms/cosmosdb-error-429\"]}\r\n",
"ok":0
}
I expect an exception to be thrown here, but that doesn't seem to be the case with the later driver. What's happening is:
collection.find returns a FindIterable with the JSON error result above as its first document
we eventually return a DatabaseQueryResult with the JSON error as the query payload
I don't want this to happen. I'd much prefer the Mongo driver to throw a MongoCommandException/MongoQueryException if a query operation returns an OKMongoResponse where "ok" is 0. This seems fine on writes,
which use a CommandProtocol object and validate the response as I'd expect; it's just reads that seem to have changed.
Comparing the two driver versions, this seems to be a change in read behaviour, perhaps due to the retryable reads introduced in version 3.11? Response validation now seems to happen around this section.
Q: Is there a way to configure my Mongo client so that the driver will validate server responses on read operations and throw an exception if it receives an OKMongoResponse with ok == 0?
I can of course validate the results myself, but I'd prefer not to, and instead let the driver do this if possible.
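For reference, the manual check I'd rather avoid looks roughly like this (a sketch only; the "ok", "code" and "errmsg" fields are taken from the error payload above):

// Sketch: detect Cosmos's OKMongoResponse-style error document and surface it as an exception.
Document dataFound = findResults.first();
if (dataFound != null && dataFound.containsKey("ok")
        && dataFound.get("ok", Number.class).intValue() == 0) {
    // com.mongodb.MongoException carries the numeric error code and message from the payload
    throw new MongoException(dataFound.getInteger("code", -1), dataFound.getString("errmsg"));
}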
I'm not sure why Mongo changed this in the driver. There is something on the Cosmos side which may help: you can raise a support ticket and ask them to turn on server-side retries. This will change the behavior of Cosmos so that requests queue up rather than throwing 429s when there are too many.
This more closely reflects how MongoDB behaves when running on a VM or in Atlas (which also runs on VMs), rather than a multi-tenant service like Cosmos DB.
With 3.2-3.4 servers the drivers use the find command described here, not OP_QUERY.
The driver surely is not "returning OKMongoResponse" since it isn't written for cosmosdb.
If you think there is a driver issue, update the question with exact wire protocol response received and the exact result you receive from the driver.
Retryable writes require sessions (which Cosmos DB advertises but does not support, see Importing BSON to CosmosDB MongoDB API using mongorestore) and normally use the OP_MSG protocol, which comes with 3.6+ servers. I don't know what drivers would do if a 3.2 server advertises session support; this isn't a combination that is possible with MongoDB.
Note that MongoDB does not support cosmosdb (and consequently MongoDB drivers don't, officially, either).
How can I check whether a connection to the DB is active or lost when using Spring Data JPA?
Is the only way to run a "SELECT 1" query?
Nothing. Just execute your query. If the connection has died, either your JDBC driver will reconnect (if it supports that and you enabled it in your connection string; most don't support it) or else you'll get an exception.
If you check that the connection is up, it might still fall over before you actually execute your query, so you gain absolutely nothing by checking.
That said, a lot of connection pools validate a connection by doing something like SELECT 1 before handing connections out. But this is nothing more than just executing a query, so you might just as well execute your business query.
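In practice that validation is nothing special; for example (just a sketch, assuming a Spring JdbcTemplate is available to run the statement):

// Minimal sketch: run the same kind of validation query a pool would use.
try {
    jdbcTemplate.queryForObject("SELECT 1", Integer.class);
    // query succeeded, the connection is usable
} catch (DataAccessException e) {
    // the connection (or the database) is down
}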
Your best chance is to just perform a simple query against one table, e.g.:
select 1 from SOME_TABLE;
https://docs.oracle.com/javase/6/docs/api/java/sql/Connection.html#isValid%28int%29
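If you can get hold of the underlying JDBC Connection, its built-in isValid check (the method linked above) avoids writing any SQL at all. A minimal sketch, assuming a DataSource is available:

// Sketch: let JDBC validate the connection instead of running SELECT 1 yourself.
try (Connection connection = dataSource.getConnection()) {
    boolean alive = connection.isValid(2); // timeout in seconds
    System.out.println("Database reachable: " + alive);
} catch (SQLException e) {
    // could not even obtain a connection, so the database is unreachable
    e.printStackTrace();
}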
If you can use Spring Boot, Spring Boot Actuator is useful for you.
Actuator configures itself automatically, and once it is active you can get the database status by requesting the "health" endpoint:
http://[CONTEXT_ROOT]/health
It will return the database status like below:
{"status":"UP","db":{"status":"UP","database":"PostgreSQL","hello":1}}
For the last two days I've been searching for a suitable solution to the problem described below.
In my standalone notification-service module I have an abstract Message entity. Message has 'to', 'from', 'sentAt', 'receivedAt' and other attributes. The responsibility of the notification-service is to:
send new messages using different registered message providers (SMS, email, Skype, etc.)
receive new messages from registered message providers
update status for already sent messages.
The notification-service module is developed as a standalone module that is exposed over SOAP. Many clients can use this module to send messages or to search through already received ones.
Clients want to attach some properties (something like tags) when sending messages, so they can later search messages by these properties. These properties make sense only in the client's environment.
For example, Client A might want to send a message and save the following custom properties:
1. Internal system ID of the user to whom the system sends the message
2. A distinguishing flag (whether the ID relates to users, admins, or clients)
3. A notification flag (notification/alert/...)
Client B might want to send a message and save another set of custom properties:
1. Internal system operator ID (who sends the SMS)
2. The template ID that was used to send the message
Custom properties can be used by the clients to search already sent messages.
For example:
Client A could find SMS messages sent to administrator users in the period between [Date 1; Date 2] that have 'alert' status.
Client B could find all notifications sent with a specified template.
Of course, data should be fetched page by page.
At first I created the following database model:
(database schema diagram)
To find all messages with specified properties I tried to use query:
SELECT * FROM (SELECT message_id FROM custom_message_properties
WHERE CONCAT(CONCAT(key, ':'), value) IN ('property1:value1', 'property2:value2')
GROUP BY message_id having(count(*)) = 2)
as cmp JOIN message m ON cmp.message_id = m.id ORDER BY ID LIMIT 100 OFFSET 0
The query worked fine (although it doesn't seem very good to me) on a database with a small amount of data. I decided to check the results against roughly the real expected volume,
so I generated 10,000,000 messages with 40,000,000 custom properties and checked the result. The execution time was ~2 minutes. The most time-consuming operation was the following sub-select:
SELECT message_id FROM custom_message_properties
WHERE CONCAT(CONCAT(key, ':'), value) IN ('property1:value1', 'property2:value2')
I understand that the string comparison is very slow because no database index can be used. I decided to change the database structure and merge the 'key' and 'value' columns into a single one. So I updated my database schema:
(updated database schema diagram)
I checked the result again. Now the execution time was ~20 seconds. That's much better, but still not suitable for production use.
So now I have no idea how to improve performance without significant changes to the application's architecture.
The only thought I have is to create a separate table for each client with the required client properties, for example:
client(i)_custom_properties {
    mid bigint, // foreign key references message (id)
    p1 type1,
    p2 type2,
    ...
    pn type(n)
}
I have spent a lot of time trying to find any useful information. I have also analyzed the Stack Overflow database, because it seemed to me that it should be quite similar. But Stack Overflow has only about 50,000 distinct tags, which is not as many as my database could have.
Any help is appreciated. Thanks in advance!
Project environment that I use:
Postgres database (9.6)
Java 1.8
Spring modules (spring-boot, spring-data-jpa + hibernate, spring-ws, etc).
I have not found any suitable solution except creating an additional table with the client's properties for each client.
I know that solution is not very flexible,
but now the search query time is less than 1 second.
In the future, I will try to solve the same problem using NoSQL storage.
I was looking for some alternatives to the Quartz scheduler.
Though it is not a complete replacement, I was trying out the RabbitMQ Delayed Messages Plugin (it suits my use case).
I was able to get the scheduling to work, but I was not able to view the delayed messages (which are stored in Mnesia).
Is there a way to check the messages and/or number of messages in Mnesia?
Edit: I inferred that the messages are stored in Mnesia from a comment here.
There is no way to check the messages that RabbitMQ is persisting in its Mnesia database.
RabbitMQ is not a generalized datastore. It is a purpose-built message broker and queueing system. The datastore inside it is there to facilitate the persistence of messages, not to be queried and used as if it were a database on its own.
To view the data inside Mnesia you could:
Write a simple Erlang program like this; as a result you get:
(rabbit@gabrieles-MBP)5>
load:traverse_table_and_show('rabbit_delayed_messagerabbit@gabrieles-MBP').
{delay_entry,
{delay_key,1442258857832,
{exchange,
{resource,<<"/">>,exchange,<<"my-exchange">>},
'x-delayed-message',true,false,false,
[{<<"x-delayed-type">>,longstr,<<"direct">>}],
undefined,undefined, {[],[]}}},
{delivery,false,false,<0.2008.0>,
{basic_message,
{resource,<<"/">>,exchange,<<"my-exchange">>},
[<<>>],
{content,60,
{'P_basic',undefined,undefined,
[{<<"x-delay">>,signedint,100000}],
undefined,undefined,undefined,undefined,undefined,
undefined,undefined,undefined,undefined,undefined,
undefined},
..
Or in this way:
Start an Erlang shell session using:
erl -setcookie ABCDEFGHI -sname monitorNode@gabrielesMBP
You have to use the same cookie that RabbitMQ is using, typically found in $HOME/.erlang.cookie.
Then execute this command: observer:start().
Once you are connected to the rabbitmq node, open the Table Viewer and select the Mnesia tables view from the menu; there you can see your data, including the delayed-message table.
We are developing an application that uses Google Cloud Datastore; an important detail: it's not a GAE application!
Everything works fine for normal usage. We designed a test that fetches over 30,000 records, but when we tried to run it we got the following error:
java.net.SocketTimeoutException: Read timed out
We found that a Timeout Exception occurs after 30 seconds, so this explains the error.
I have two questions:
Is there a way to increase this timeout?
Is it possible to use pagination to query the Datastore? We found that when you have a GAE application you can use a cursor, but our application isn't one.
You can use cursors in the exact same way as a GAE app using Datastore. Take a look at this page for info.
In particular, the QueryResultBatch object has a .getEndCursor() method which you can then use when you reissue a Query with setStartCursor(...). Here's a code snippet from the page above:
Query q = ...
if (response.getBatch().getMoreResults() == QueryResultBatch.MoreResultsType.NOT_FINISHED) {
    ByteString endCursor = response.getBatch().getEndCursor();
    q.setStartCursor(endCursor);
    // reissue the query to get more results...
}
You should definitely use cursors to ensure that you get all your results. The RPC has additional constraints besides time, such as total RPC size, so you shouldn't depend on a single RPC answering your entire query.
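Putting the snippet above into a loop, a rough sketch of full pagination might look like this (runQuery(...) and handleEntities(...) are hypothetical helpers standing in for however your client issues the RPC and processes a page; q is the query builder from the snippet above):

// Sketch: keep reissuing the query from the last end cursor until the batch
// reports that there are no more results.
QueryResultBatch batch;
do {
    RunQueryResponse response = runQuery(q);   // hypothetical helper that sends the RPC
    batch = response.getBatch();
    handleEntities(batch);                     // hypothetical processing of this page's entities
    ByteString endCursor = batch.getEndCursor();
    q.setStartCursor(endCursor);               // the next request resumes where this one ended
} while (batch.getMoreResults() == QueryResultBatch.MoreResultsType.NOT_FINISHED);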