Where does micrometer store data? - java

I am unsure about how Micrometer and Prometheus behave in production: Prometheus pulls data from Micrometer and we use remote storage for Prometheus, but some data is also held by Micrometer. My question is: when my server runs in production, does the data Micrometer stores keep growing for as long as the process runs, or is it flushed automatically after some time? In other words, how does Micrometer store data in production?

Micrometer itself does not store data persistently. All data is kept in memory, so if the application restarts, the counters start from zero again.
It is the job of the time-series database to handle that. For example, Prometheus has functions like rate() and increase() that ignore these counter resets.
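As a minimal sketch of what "in memory" means here (the class and meter names below are made up for illustration):

import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;

public class InMemoryCounterDemo {
    public static void main(String[] args) {
        MeterRegistry registry = new SimpleMeterRegistry(); // lives purely in process memory
        Counter requests = Counter.builder("http.requests").register(registry);

        requests.increment();
        requests.increment();

        // Prints 2.0. After a JVM restart the same counter starts again at 0.0,
        // which is exactly the reset that rate()/increase() compensate for.
        System.out.println(registry.get("http.requests").counter().count());
    }
}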

No matter which environment your application runs Micrometer in (locally, dev, acceptance or production), Micrometer will behave the same way:
collecting and storing metrics in memory;
waiting for metrics analysis and visualization tools to collect the data, either by:
publishing the data when the tool uses a push model, or
exposing the needed endpoints for tools that use the pull model, which is the Prometheus case.
Micrometer, like any other metrics-collecting library, cannot decide when the collected data should be flushed or cleared, since it cannot know in advance which tools will be collecting which data and when.
That said, if you already have the full picture of your application architecture and you know that only Prometheus will be collecting the metrics, you can configure your endpoint to clear the MeterRegistry after a successful scrape (based on the official documentation sample, since you did not provide a snippet of your implementation):
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;

import com.sun.net.httpserver.HttpServer;

import io.micrometer.prometheus.PrometheusConfig;
import io.micrometer.prometheus.PrometheusMeterRegistry;

PrometheusMeterRegistry prometheusRegistry = new PrometheusMeterRegistry(PrometheusConfig.DEFAULT);
try {
    HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
    server.createContext("/prometheus", httpExchange -> {
        String response = prometheusRegistry.scrape();
        httpExchange.sendResponseHeaders(200, response.getBytes().length);
        try (OutputStream os = httpExchange.getResponseBody()) {
            os.write(response.getBytes());
            prometheusRegistry.clear(); // clear the registry once the scrape response has been written
        }
    });
    new Thread(server::start).start();
} catch (IOException e) {
    throw new RuntimeException(e);
}

Related

Mongo Driver for Azure Cosmos - Reads and Rate Limiting

Having an issue with the following configuration:
Driver version: 3.12.1, mongodb-driver for Java
Server version: 3.2 of Mongo API for Azure Cosmos DB (ancient, I know)
We run some fairly high read/write loads and may hit rate limiting from the Cosmos API for Mongo. In that case, I expect an exception to occur. We're doing pretty vanilla queries; the code looks similar to this:
public DatabaseQueryResult find(String collectionName, Map<String, Object> queryData) {
    Document toFind = new Document(queryData);
    MongoCollection<Document> collection = this.mongoDatabase.getCollection(collectionName);
    FindIterable<Document> findResults = collection.find(toFind);
    if (findResults != null) {
        Document dataFound = findResults.first();
        return new DatabaseQueryResult(dataFound.toJson(this.settings));
    }
    // other stuff...
}
When rate limited by Azure, you'll receive a response like so
{
"$err":"Message: {\"Errors\":[\"Request rate is large. More Request Units may be needed, so no changes were made. Please retry this request later. Learn more: http://aka.ms/cosmosdb-error-429\"]}\r\n s",
"code":16500,
"_t":"OKMongoResponse",
"errmsg":"Message: {\"Errors\":[\"Request rate is large. More Request Units may be needed, so no changes were made. Please retry this request later. Learn more: http://aka.ms/cosmosdb-error-429\"]}\r\n",
"ok":0
}
I expect an exception to be thrown here - but that doesn't seem to be the case with the later driver. What's happening is,
collection.find is returning a FindIterable with the JSON error result as above as the first document
We're eventually returning a DatabaseQueryResult with JSON error as the query payload
I don't want this to happen - I'd much prefer the mongo driver to throw a MongoCommandException/MongoQueryException if a query operation returns an OKMongoResponse where "ok" is 0. This seems fine on writes, which use a CommandProtocol object and validate the response as I'd expect - it's just reads that seem to have changed.
Comparing the 2 driver versions, this seems to be a change in read behaviour - perhaps due to retryable reads that were introduced in version 3.11? Response validation now seems to be around this section.
Q: Is there a way to configure my Mongo client so that the driver will validate server responses on read operations and throw an exception if it receives an OKMongoResponse with ok == 0?
I can of course validate the results myself, but I'd prefer not to and let the driver do this if possible
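For now I am validating the first document myself before using it, roughly like the sketch below (the helper name and the exception choice are mine, not anything from the driver):

import org.bson.Document;

final class CosmosReadValidator {

    private CosmosReadValidator() {
    }

    // Treats a document shaped like {"_t": "OKMongoResponse", "ok": 0, ...} as an error
    // instead of passing it on as query data.
    static Document requireOk(Document doc) {
        if (doc != null
                && "OKMongoResponse".equals(doc.getString("_t"))
                && doc.get("ok", Number.class) != null
                && doc.get("ok", Number.class).intValue() == 0) {
            throw new IllegalStateException("Cosmos DB error response: code=" + doc.get("code")
                    + ", errmsg=" + doc.getString("errmsg"));
        }
        return doc;
    }
}

// usage inside find():
// Document dataFound = CosmosReadValidator.requireOk(findResults.first());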
I'm not sure why Mongo changed this driver. There is something on the Cosmos side which may help. You can raise a support ticket and ask them to turn on server-side retries. This will change the behavior of Cosmos such that requests will queue up rather than throw 429's when there are too many.
This more reflects how Mongo behaves when running on a VM or in Atlas (which also runs on VM's) rather than a multi-tenant service like Cosmos DB.
With 3.2-3.4 servers the drivers use the find command described here, not OP_QUERY.
The driver surely is not "returning OKMongoResponse" since it isn't written for cosmosdb.
If you think there is a driver issue, update the question with exact wire protocol response received and the exact result you receive from the driver.
Retryable writes require sessions (which cosmosdb advertises but does not support, see Importing BSON to CosmosDB MongoDB API using mongorestore) and normally use the OP_MSG protocol, which comes with 3.6+ servers. I don't know what the drivers would do if a 3.2 server advertises session support; this isn't a combination that is possible with MongoDB.
Note that MongoDB does not support cosmosdb (and consequently MongoDB drivers don't, officially, either).

Using Redis Stream to Block HTTP response via HTTP long polling in Spring Boot App

I have a spring boot web application with the functionality to update an entity called StudioLinking. This entity describes a temporary, mutable, descriptive logical link between two IoT devices for which my web app is the cloud service. The links between these devices are ephemeral in nature, but the StudioLinking entity persists in the database for reporting purposes. StudioLinking is stored in the SQL-based datastore in the conventional way using Spring Data/Hibernate. From time to time this StudioLinking entity will be updated with new information from a REST API. When that link is updated the devices need to respond (change colors, volume, etc.). Right now this is handled with polling every 5 seconds, but this creates lag between when a human user enters an update into the system and when the IoT devices actually update. It could be as little as a millisecond or up to 5 seconds! Clearly increasing the frequency of the polling is unsustainable, and the vast majority of the time there are no updates at all!
So, I am trying to develop another REST API on this same application using HTTP long polling, which will return when a given StudioLinking entity is updated or after a timeout. The listeners do not support WebSocket or similar, leaving me with long polling. Long polling has a race condition to account for: with consecutive messages, one message may be "lost" because it arrives between HTTP requests (while the connection is closing and reopening, a new "update" might come in and not be "noticed" if I used plain Pub/Sub).
It is important to note that this "subscribe to updates" API should only ever return the LATEST and CURRENT version of the StudioLinking, and should only do so when there is an actual update or if an update happened since the last check-in. The "subscribe to updates" client will initially POST an API request to set up a new listening session and pass that along so the server knows who they are, because it is possible that multiple devices will need to monitor updates to the same StudioLinking entity. I believe I can accomplish this by using separately named consumers in the Redis XREAD (keep this in mind for later in the question).
After hours of research I believe the way to accomplish this is using Redis streams.
I have found these two links regarding Redis Streams in Spring Data Redis:
https://www.vinsguru.com/redis-reactive-stream-real-time-producing-consuming-streams-with-spring-boot/
https://medium.com/@amitptl.in/redis-stream-in-action-using-java-and-spring-data-redis-a73257f9a281
I have also read this link about long polling. Both of these links just use a sleep timer during the long poll, which is fine for demonstration purposes, but obviously I want to do something useful instead.
https://www.baeldung.com/spring-deferred-result
And both these links were very helpful. Right now I have no problem figuring out how to publish the updates to the Redis stream (this is untested "pseudo-code", but I don't anticipate having any issues implementing it):
// In my StudioLinking entity
@PostUpdate
public void postToRedis() {
    StudioLinking link = this;
    ObjectRecord<String, StudioLinking> record = StreamRecords.newRecord()
            .ofObject(link)
            .withStreamKey(streamKey); // I am creating a stream for each individual linking, probably?
    this.redisTemplate
            .opsForStream()
            .add(record)
            .subscribe(System.out::println);
    atomicInteger.incrementAndGet();
}
But I fall flat when it comes to subscribing to that stream. So basically, here is what I want to do - please excuse the butchered pseudocode, it is for illustrating the idea only. I am well aware that the code is in no way indicative of how the language and framework actually behave :)
// Path variable linkId identifies the StudioLinking that the requester wants to monitor
// updatesId is a unique token to track individual consumers in Redis
@GetMapping("/subscribe-to-updates/{linkId}/{updatesId}")
public DeferredResult<ResponseEntity<?>> subscribeToUpdates(@PathVariable("linkId") Integer linkId, @PathVariable("updatesId") Integer updatesId) {
    LOG.info("Received async-deferredresult request");
    DeferredResult<ResponseEntity<?>> output = new DeferredResult<>(5000L);
    output.onTimeout(() ->
            output.setErrorResult(
                    ResponseEntity.status(HttpStatus.REQUEST_TIMEOUT)
                            .body("IT WAS NOT UPDATED!")));
    ForkJoinPool.commonPool().submit(() -> {
        //----------------------------------------------
        // Made up stuff... here is where I want to subscribe to a stream and block!
        //----------------------------------------------
        LOG.info("Processing in separate thread");
        // Subscribe to the Redis stream, get any updates that happened between long polls,
        // then block until/if a new message comes over the stream
        var subscription = listenerContainer.receiveAutoAck(
                Consumer.from(String.valueOf(linkId), String.valueOf(updatesId)),
                StreamOffset.create(String.valueOf(linkId), ReadOffset.lastConsumed()),
                streamListener);
        listenerContainer.start();
        output.setResult(ResponseEntity.ok("IT WAS UPDATED!"));
    });
    LOG.info("servlet thread freed");
    return output;
}
So is there a good explanation of how I would go about this? I think the answer lies within https://docs.spring.io/spring-data/redis/docs/current/api/org/springframework/data/redis/core/ReactiveRedisTemplate.html but I am not a big enough Spring power user to really understand the terminology within the Java docs (the Spring documentation is really good, but the JavaDocs are written in dense technical language which I appreciate but don't quite understand yet).
There are two more hurdles to my implementation:
My understanding of Spring is not at 100% yet. I haven't yet reached that a-ha moment where I really fully understand why all these beans are floating around. I think this is the key to what I am not getting here... the Redis configuration is floating around in the Spring ether and I am not grasping how to just call it (see my guess in the sketch after this list). I really need to keep investigating this (it is a huge hurdle to Spring for me).
These StudioLinking entities are short-lived, so I need to do some cleanup too. I will implement this later once I get the whole thing off the ground; I do know it will be needed.
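For hurdle 1, my current guess (the class name is mine, and this assumes the Spring Boot data-redis starter is on the classpath) is that I just need to inject the template bean that Boot auto-configures instead of wiring up the Redis connection myself:

import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class StudioLinkingUpdatesController {

    private final StringRedisTemplate redisTemplate;

    // Constructor injection: Spring hands in the StringRedisTemplate bean that
    // spring-boot-starter-data-redis auto-configures, so there is no manual lookup.
    public StudioLinkingUpdatesController(StringRedisTemplate redisTemplate) {
        this.redisTemplate = redisTemplate;
    }
}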
Why don't you use a blocking polling mechanism? There is no need to use the fancy stuff in spring-data-redis. Just use a simple blocking read with a 5-second timeout, so this call might take around 6 seconds or so. You can decrease or increase the blocking timeout.
class LinkStatus {
    private final boolean updated;

    LinkStatus(boolean updated) {
        this.updated = updated;
    }
}

// Parameter linkId refers to the StudioLinking that the requester wants to monitor
// updatesId is a unique token to track individual consumers in Redis
@GetMapping("/subscribe-to-updates/{linkId}/{updatesId}")
public LinkStatus subscribeToUpdates(
        @PathVariable("linkId") Integer linkId, @PathVariable("updatesId") Integer updatesId) {
    StreamOperations<String, String, String> op = redisTemplate.opsForStream();
    Consumer consumer = Consumer.from("test-group", "test-consumer");
    // auto-ack blocking stream read of size 1 with a timeout of 5 seconds
    StreamReadOptions readOptions = StreamReadOptions.empty().block(Duration.ofSeconds(5)).count(1);
    List<MapRecord<String, String, String>> records =
            op.read(consumer, readOptions, StreamOffset.latest("test-stream"));
    return new LinkStatus(!CollectionUtils.isEmpty(records));
}

vertx-redis-client 3.7.0: Is it cheap to create redis client on every http request

I am using vertx-redis-client in one of my projects. I am creating redis client like this:
private void createRedisClient(final Handler<AsyncResult<Redis>> redisHandler) {
    Redis.createClient(vertx, AppSettings.REDIS_OPTIONS)
            .connect(onConnect -> {
                if (onConnect.succeeded()) {
                    System.out.println("Redis got connected");
                    Redis redisClient = onConnect.result();
                    redisHandler.handle(onConnect);
                    redisClient.exceptionHandler(e -> {
                        e.printStackTrace();
                        attemptReconnect(0, redisHandler);
                    });
                } else {
                    onConnect.cause().printStackTrace();
                    redisHandler.handle(onConnect);
                }
            });
}
But, I need to switch redis DB based on parameters of REST API input JSON. So, is it wise (performant) to create a redis client for every request and connect to required DB? Or should I pool my redis clients somehow?
It is not cheap at all.
If you have more than one Redis client, you should put them in some kind of concurrent map, and use atomic operations to get those clients depending on your parameters.
Creating a connection for every access to Redis is going to kill your application's performance.
Getting good performance from Redis is also about how well you design your data structures. Ideally, you should fetch (or write) all the data in a single call - for example, you could have all your keys in a single db and organize closely associated data with the same key, so that you can get your work done in a single HGET/HSET.
If that is not possible, I'd recommend that you create a pool of Redis clients that are already connected to the dbs that you will access. A single Redis client can have multiple connections open, since keep-alive is on by default.
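As a rough sketch of that idea (createClientForDb below is a placeholder for your existing createRedisClient logic, not part of the vertx-redis-client API), you could keep the already-connected clients in a ConcurrentHashMap keyed by DB index:

import io.vertx.redis.client.Redis;

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class RedisClientCache {

    private final ConcurrentMap<Integer, Redis> clientsByDb = new ConcurrentHashMap<>();

    // Atomically returns the already-connected client for this DB, creating it only once.
    public Redis clientFor(int dbIndex) {
        return clientsByDb.computeIfAbsent(dbIndex, this::createClientForDb);
    }

    private Redis createClientForDb(int dbIndex) {
        // Build and connect the client the same way you already do in createRedisClient,
        // pointing it at the requested DB.
        throw new UnsupportedOperationException("wire in your existing createRedisClient here");
    }
}

computeIfAbsent gives you the atomic get-or-create behaviour, so two concurrent requests for the same DB don't race to create two clients.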

Getting DB IO issue in java code because of ASYNC_NETWORK_IO

The DB team of our application has reported an ASYNC_NETWORK_IO issue along with a big but optimized query that executes in around 36 seconds and returns around 644,000 rows.
The main reasons behind this might be one of these:
A. A problem with the client application
B. Problems with the network (but we have 1 Gb Ethernet)
So this might be an issue on the code side, because the session must wait for the client application to process the data received from SQL Server before it can signal SQL Server that it is ready to accept new data.
This is a common scenario that may reflect bad application design, and it is the most frequent cause of excessive ASYNC_NETWORK_IO wait type values.
And here is how we are getting data from the DB in code:
try {
    queue.setTickets(jdbcTemplate.query(sql, params, new QueueMapper()));
} catch (DataAccessException e) {
    logger.error("get(QueueType queueType, long ticketId)", e);
}
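One thing I am considering (only a rough sketch; queue.addTicket is a hypothetical per-row consumer, and it assumes jdbcTemplate is a plain JdbcTemplate with Object[] params) is to stream the rows instead of materializing all 644,000 of them into a list first, so the application keeps draining the connection while SQL Server sends data:

import org.springframework.jdbc.core.RowCallbackHandler;

try {
    jdbcTemplate.setFetchSize(1_000); // ask the driver to fetch in chunks rather than buffer everything
    jdbcTemplate.query(sql, params, (RowCallbackHandler) rs ->
            // map and hand off each row as it arrives instead of building the full list first;
            // the rowNum argument is a placeholder, assuming QueueMapper ignores it
            queue.addTicket(new QueueMapper().mapRow(rs, 0)));
} catch (DataAccessException e) {
    logger.error("get(QueueType queueType, long ticketId)", e);
}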
Can anyone advise me on this?
Thanks in advance.

Use Persistent Context Mode in Spark Job Server with Java

I want to use the machine learning capabilities of Apache Spark through a RESTful API, so I am using the Spark Job Server. I have already developed an interface for the communication, but found that, while using the persistent context mode, I can't keep objects like a trained model alive between different job runs. I can't find any documentation on how to actually implement a persistent job in Java.
I am also quite new to Apache Spark and have no clue about Scala. I don't want to start the development process over, and would be very happy if somebody could share their experience of how to persist Java objects between Apache Spark Jobserver jobs, or point me to a good example or documentation.
For a start, it would even be sufficient to at least serialize an object and save it to disk. But I wasn't successful with that inside the Apache Spark Jobserver either. I used simple code like the following, but this probably doesn't work as simply in Spark:
try {
    FileInputStream streamIn = new FileInputStream("D:\\LRS.ser");
    ObjectInputStream objectinputstream = new ObjectInputStream(streamIn);
    LRService = (LogisticRegressionService) objectinputstream.readObject();
    objectinputstream.close();
} catch (Exception e) {
    e.printStackTrace();
}
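For completeness, the write side would be plain Java serialization along these lines (just a sketch; it assumes LogisticRegressionService implements java.io.Serializable):

import java.io.FileOutputStream;
import java.io.ObjectOutputStream;

try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream("D:\\LRS.ser"))) {
    // writes the whole object graph to disk; readObject above is the matching read
    out.writeObject(LRService);
} catch (Exception e) {
    e.printStackTrace();
}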
