I want to check how fast CRUD operations execute on a MongoDB instance.
Therefore I recorded the time with the following code:
long start = System.nanoTime();
FindIterable<Document> datasetFindIterable = this.collection.find(filter);
long finish = System.nanoTime();
long timeElapsed = finish - start;
I am aware that the FindIterable object's explain() output includes "executionStats" with an "executionTimeMillis" field:
JSONObject jsonobject = (JSONObject) parser.parse(datasetFindIterable.explain().toJson());
JSONObject executionStats = (JSONObject) jsonobject.get("executionStats");
Long executionTimeMillis = (Long) executionStats.get("executionTimeMillis");
However, I am a bit confused; I get the following results:
start (ns):               582918161918004
finish (ns):              582918161932511
timeElapsed (ns):         14507
executionTimeMillis (ms): 1234
14507 ns is 0.014507 ms.
How can it be that executionTimeMillis (1234 ms) is so much larger than the System.nanoTime() difference (0.014507 ms)? Shouldn't it be the other way around, since the System.nanoTime() calls themselves also take some time to execute?
If I recall correctly, there are asynchronous and synchronous MongoDB drivers available.
If you use an asynchronous driver, the issue could be that the
"long finish = System.nanoTime();"
command does not wait for the
"FindIterable<Document> datasetFindIterable = this.collection.find(filter);"
command to return with a value, so the time difference could be lower than the execution time stored in the FindIterable variable.
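Note also that, as far as I know, even with the synchronous driver find() only builds the iterable and defers the actual query until the results are iterated. A minimal sketch that forces the results to materialize inside the timed region, assuming the synchronous driver and the same collection and filter variables as above (requires java.util.ArrayList, java.util.List and org.bson.Document imports):
long start = System.nanoTime();
// into() exhausts the cursor, so the server round trip happens inside the timed region
List<Document> results = this.collection.find(filter).into(new ArrayList<>());
long finish = System.nanoTime();
long timeElapsed = finish - start; // now covers the actual query execution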
I have a REST API that fetches a list of entities that is then mapped to DTOs. The requests take quite a lot of time: for 10 fetched entities it's about 1.5 seconds. The query to the DB seems OK; it takes about 100 ms.
I found that this one piece of code in the entity->DTO mapping phase takes about 80% of the time:
Map<String, List<DailyTimeDTO>> dailyTimesGroupedByClientStream = timesheetReport.getDailyTimes()
.stream()
.filter(dailyTime -> dailyTime.getWorkTime() != null)
.map(DailyTimeDTO::create)
.collect(Collectors.groupingBy(DailyTimeDTO::getClient));
With about 30-40 DailyTime objects it takes 80-100 ms, and it is run for each entity while mapping, so when mapping 10 entities the request takes about 1.5 seconds, and for 100 entities it takes a whole lot more.
I tried implementing it without the stream (as below), but it didn't really help. I tried debugging what takes so much time (using System.nanoTime() as below), but each iteration of the loop takes about 5 microseconds (so 0.005 ms), while the whole loop takes about 80-100 ms. So where does so much more time get lost? Is it the overhead of iterating over a list multiple times? Is there something I can do about it besides figuring out a different way to perform this mapping?
dailyTimes.forEach( dailyTime -> {
long startTime = System.nanoTime();
if (dailyTime == null || dailyTime.getWorkTime() == null) return;
DailyTimeDTO dailyTimeDTO = DailyTimeDTO.create(dailyTime);
map.add(dailyTimeDTO.getClient(), dailyTimeDTO);
log.info("Iteration full {}", (System.nanoTime() - startTime) / 1000);
});
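To narrow down where the time goes, it may help to time the whole loop in addition to each iteration; a minimal sketch, assuming the same dailyTimes, map, and log variables as above:
long loopStart = System.nanoTime();
dailyTimes.forEach(dailyTime -> {
    if (dailyTime == null || dailyTime.getWorkTime() == null) return;
    DailyTimeDTO dailyTimeDTO = DailyTimeDTO.create(dailyTime);
    map.add(dailyTimeDTO.getClient(), dailyTimeDTO);
});
// total loop time in microseconds; compare against the sum of the per-iteration timings
log.info("Loop total {}", (System.nanoTime() - loopStart) / 1000);
If the total is much larger than the sum of the per-iteration numbers, the overhead sits between iterations; the per-iteration log.info calls themselves may account for part of it.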
The DailyTimeDTO::create is nothing special:
public static DailyTimeDTO create(DailyTime dailyTime) {
return DailyTimeDTO.builder()
.client(dailyTime.getClient() == null ? "" : dailyTime.getClient().getClientName())
.project(dailyTime.getProject() == null ? "" : dailyTime.getProject().getProjectName())
.hours(dailyTime.getWorkTime())
.build();
}
My first question has been answered. Now I am trying to interpret the results based on the given query.
METRIC ACQUISITION:
// globally done
static final Summary responseTime = Summary.build()
.name("http_response_time")
.labelNames("method", "handler", "status")
.help("Request completed")
.register();
// done BEFORE every request
final long start = System.nanoTime();
// "start" is saved as a request attribute and lateron read from the request
// done AFTER every request
final double latencyInSeconds =
SimpleTimer.elapsedSecondsFromNanos(start, System.nanoTime());
responseTime.labels(
request.getMethod(),
handlerLabel,
String.valueOf(response.getStatus())
)
.observe(latencyInSeconds);
QUERY:
rate(http_response_time_sum{application="myapp",handler="myHandler", status="200"}[1m])
/
rate(http_response_time_count{application="myapp",handler="myHandler", status="200"}[1m])
RESULT:
0.0020312920780360694
So, what is this? It is measured in nanoseconds and pushed to the summary object in seconds.
As far as I would interpret it, this tells me that all successful requests of the last minute have an average latency of 0.0020 seconds (2 ms).
Is that correct?
I will post my results here: the measured/calculated/interpreted value seems to be correct.
Anyway, I would prefer more detailed and mathematical documentation of the Prometheus methods.
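For reference, a sketch of the math, assuming rate() gives the per-second average increase of a counter over the selected window (which is how the Prometheus documentation describes it):
rate(sum[1m]) / rate(count[1m])
    = (Δsum / 60s) / (Δcount / 60s)
    = Δsum / Δcount
    = total observed seconds / number of observations
    = average request latency (in seconds) over the last minute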
I'm trying to correlate the timing information obtained from a Java job with the Linux performance monitoring tool perf (specifically perf stat).
The timing information from Java is obtained using
String tstamp0 = String.valueOf(System.currentTimeMillis()); (this is essentially the time in milliseconds since the epoch)
whereas perf gives the time the process began, and the subsequent records only show the time elapsed.
What I would like to do is convert the timing information obtained from perf stat to milliseconds, and this is where I'm failing. I'm approaching this problem in Python.
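Before touching the Python side, it may help to pin down what the Java timestamp means: System.currentTimeMillis() is UTC-based milliseconds since the Unix epoch, independent of the local timezone. A quick Java sanity check:
long nowMs = System.currentTimeMillis();
// Instant renders epoch millis as a UTC timestamp, e.g. 2017-05-11T08:56:54.203Z
System.out.println(java.time.Instant.ofEpochMilli(nowMs));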
This piece of code is giving me the timing information from perf
tailit = "head -n 1 " + dataset_path
process = subprocess.Popen(tailit, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = process.communicate()
date_time = out.split("\n")[0].split(" ")[4:]
date = date_time[3] + "-" + date_time[0] + "-" + date_time[1]
time = date_time[2]
#TIMESTAMP
INIT_TIME = datetime.datetime.strptime(date + ' ' + time, "%Y-%B-%d %H:%M:%S") + datetime.timedelta(seconds=0.01)
#df is pandas data frame
df['STAMPME'] = df['TCOUNT'].apply(lambda x: foobar(datetime.timedelta(seconds=x) + INIT_TIME))
Here foobar is the following function, meant to convert a string timestamp to milliseconds since the epoch, but the result doesn't make sense.
def foobar(INIT_TIME):
    # strftime('%s') formats as seconds since the epoch, but it is a non-portable
    # extension that interprets the time as *local* time and drops the milliseconds
    d = datetime.datetime.strptime(str(INIT_TIME), "%Y-%m-%d %H:%M:%S.%f").strftime('%s')
    d_in_ms = int(d) * 1000
    return d_in_ms
Any help will be appreciated.
EDIT: Prior questions were not addressing the problem of correlating the Java timestamp (currentTimeMillis()) with a datetime that has milliseconds.
For instance, with the function foobar:
with INIT_TIME set to 2017-05-11 10:56:54.203, the return value is 1494493014000 when it should instead be 1494500214203
I think I figured out the problem.
It looks like the foobar function returns the time in GMT+2, whereas the Java job returns the time in GMT, so with a timedelta of +2 hours I could solve it.
I am working on a Django and Java project in which I need to compare the time stored from Django to the current time in Java.
I am storing the enabled_time in my models as:
enabled_time = models.DateTimeField(auto_now = True, default=timezone.now())
The time gets populated in the db in the form:
2017-02-26 14:54:02
Now in my Java project a cron is running which checks whether enabled_time plus an expiry time is greater than the current time, something like:
Long editedTime = db.getEnabledTime() + (expiryTime * 60 * 1000); // expiryTime is in mins, converted to ms
if (System.currentTimeMillis() - editedTime > 0) {
    // do something
}
Here db is the database entity for that table.
But db.getEnabledTime() gives the result '2017'. What am I doing wrong?
PS: I am storing the time as a Long, which seems unsuitable to me. Can someone suggest which datatype I should choose, or does it work fine?
I'd like to know how long my SQL queries take to execute. It seems the JDBC layer doesn't report this, and I couldn't find it in the MyBatis logs either. I can't believe there is no way to easily get this.
You can use a StopWatch from the org.apache.commons.lang.time package. To run it in your Java code, you'd have something like this after you add the dependency and import StopWatch into your Java class:
import org.apache.commons.lang.time.StopWatch;

StopWatch sw = new StopWatch();
sw.start();
// query you want to measure the time for
sw.stop();
long timeInMilliseconds = sw.getTime();
System.out.println("Time in ms is: " + timeInMilliseconds);
// or maybe log it if you like?
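For example, wrapped around an actual MyBatis call (userMapper and selectActiveUsers are hypothetical names; substitute your own mapper and method):
StopWatch sw = new StopWatch();
sw.start();
List<User> users = userMapper.selectActiveUsers(); // hypothetical mapper method
sw.stop();
System.out.println("selectActiveUsers took " + sw.getTime() + " ms");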