Using the Spring Boot Actuator API, I need to count the number of API hits per client ID. How can I achieve this? Another challenge: my application is deployed on both AWS and Azure, and at any time I want to know the total API hit count across all environments.
There are multiple ways to do it. You can use a tool like New Relic to capture that.
It uses a Java agent that hooks into each API call.
Another option is to push logs to a logging system and then aggregate and display them with Splunk or Kibana; there you can build a dashboard over the logs to track API hits.
You can also implement your own approach, as an API interceptor/ControllerAdvice that records each hit on a separate async thread. But then you have to implement real-time aggregation of these hits yourself.
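As a minimal sketch of what the interceptor approach would record (all class and method names here are hypothetical, not part of Spring), the per-client counting itself can be a thread-safe map of counters that the interceptor updates on every request:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Hypothetical sketch: a Spring HandlerInterceptor would call hit() from
// preHandle(), extracting the client ID from a header or auth principal.
public class ApiHitCounter {
    private final Map<String, LongAdder> hitsPerClient = new ConcurrentHashMap<>();

    // Record one hit for the given client ID; thread-safe and lock-free.
    public void hit(String clientId) {
        hitsPerClient.computeIfAbsent(clientId, id -> new LongAdder()).increment();
    }

    // Current count for one client (0 if never seen).
    public long count(String clientId) {
        LongAdder adder = hitsPerClient.get(clientId);
        return adder == null ? 0 : adder.sum();
    }

    public static void main(String[] args) {
        ApiHitCounter counter = new ApiHitCounter();
        counter.hit("client-a");
        counter.hit("client-a");
        counter.hit("client-b");
        System.out.println(counter.count("client-a")); // 2
        System.out.println(counter.count("client-b")); // 1
    }
}
```

Note that this only counts hits inside one JVM; for the cross-cloud total, each instance would still have to publish these counters to a shared backend (New Relic, a log aggregator, or a metrics service, as described above).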
I have a web application running in EC2 instance. It has different API endpoints. I want to count the number of times each API is called. The web application is in Java.
Can anyone suggest some articles where I can find a proper Java implementation for integrating statsd with CloudWatch?
Refer to their doc page https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Agent-custom-metrics-statsd.html, which also covers publishing the metrics. For the client side, see https://github.com/etsy/statsd/wiki#client-implementations.
Usually I follow a simpler approach without statsd: log the events to a file and sync the file to CloudWatch. In CloudWatch you can configure metric filters and, based on those filters, increment custom metrics.
Install the CloudWatch Agent on your EC2 instance
Locate and open the CW Agent config file
Add a statsd section to the config file (JSON format):
{
  ...,
  "statsd": {
    "metrics_aggregation_interval": 60,
    "metrics_collection_interval": 10,
    "service_address": ":8125"
  }
}
The AWS CloudWatch agent is smart enough to understand custom tags, which lets you correctly split statistics gathered from different API methods ("correctly" here means splitting API method stats by dimension name, not by metric name). So you need a Java client library that supports tags, for example the DataDog client.
Configure the client instance as explained in the package documentation, and that's it. Now you can do things like this at the beginning of each REST API operation:
statsd.incrementCounter("InvocationCount", 1, "host:YOUR-EC2-INSTANCE-NAME", "operation:YOUR-REST-OPERATION-NAME");
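Under the hood, a call like that becomes a single UDP text datagram in the statsd wire format, with tags appended using the DataDog extension (`|#tag:value,...`). A dependency-free sketch that emits the same wire format (the metric name, tag values, and class name are purely illustrative):

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

// Minimal statsd sender: formats a counter increment with DataDog-style
// tags and fires it at the CloudWatch agent's statsd port (8125) over UDP.
public class MiniStatsd {

    // Build the statsd line: "<metric>:<value>|c|#tag1,tag2"
    static String counterLine(String metric, long value, String... tags) {
        StringBuilder sb = new StringBuilder(metric).append(':').append(value).append("|c");
        if (tags.length > 0) {
            sb.append("|#").append(String.join(",", tags));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        String line = counterLine("InvocationCount", 1,
                "host:my-ec2-instance", "operation:getUser");
        System.out.println(line); // InvocationCount:1|c|#host:my-ec2-instance,operation:getUser

        // Fire-and-forget UDP send to the local CloudWatch agent; UDP does
        // not error out even if nothing is listening on the port.
        byte[] payload = line.getBytes(StandardCharsets.UTF_8);
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.send(new DatagramPacket(payload, payload.length,
                    InetAddress.getLoopbackAddress(), 8125));
        }
    }
}
```

In practice you would use the DataDog client library rather than hand-rolling this, but seeing the wire format makes it clear why the agent can split stats by the host and operation tags.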
CloudWatch will handle everything else automatically. You will be able to see your metric data flowing into the AWS CloudWatch Console under the "CWAgent" namespace. Please be aware that the average delay between the statsd client call and the data becoming visible in the CW Console is about 10-15 minutes.
Manually writing statsd calls in each REST API operation may not be a good idea. Decorators (or, in Java, AOP-style interceptors) let you instrument this automatically with just a few lines of code.
I have a Java application running in a Google Compute Engine instance. I am attempting to publish a message to a Cloud Pub/Sub topic using the google-cloud library, and I am getting DEADLINE_EXCEEDED exceptions. The code looks like this:
PubSub pubSub = PubSubOptions.getDefaultInstance().toBuilder()
.build().getService();
String messageId = pubSub.publish(topic, message);
The result is:
com.google.cloud.pubsub.PubSubException: io.grpc.StatusRuntimeException: DEADLINE_EXCEEDED
The documentation suggests that this response is typically caused by networking issues. Is there something I need to configure in my Networking section to allow Compute Engine to reach Pub/Sub? The default-allow-internal firewall rule is present.
I have already made my Compute Engine service account an editor and publisher in the Pub/Sub topic's permissions.
The application resides in a Docker container within a Container Engine-managed Compute Engine instance. The Pub/Sub topic and the Compute Engine instance are in the same project. I am able to use the google-cloud library to connect to other Cloud Platform services, such as Datastore. I am also able to publish to the same Pub/Sub topic without fail from App Engine instances in the same project.
Would I have more luck using the google-api-services-pubsub API library instead of google-cloud?
I have the same problem at the moment and created an issue at the google-cloud-java issue tracker on GitHub since I couldn't find it there.
We switched from the old google-api-services-pubsub libraries (which worked) to the new ones and got the exception. Our Java application is also running on a Compute Engine instance.
While this can be caused by networking issues (the client cannot connect to the service), the more typical cause is publishing too fast. It is common to call the publish method in a tight loop, which can create thousands to hundreds of thousands of requests within the time it takes a typical request to return. The network stack on a machine will only send so many requests at a time, while the others sit waiting. If your machine is able to send N parallel requests and each request takes 0.1 s, then in a minute you can send 600N requests. If you publish at a faster rate than that, the additional requests will time out on the client with DEADLINE_EXCEEDED.
You can confirm this by looking at server-side metrics in Cloud Monitoring: you will not see the timed-out requests there, only the successful ones. The rate of those successful requests tells you the throughput capacity of your machines.
The solution to this is publisher flow control: effectively, limiting how fast you call the publish method. You can do this in most client libraries through simple configuration; refer to the publisher API documentation for your client library for details. In Java, for example, this is the FlowControlSettings property of the Publisher's BatchingSettings. In Python, it is set directly in the PublisherOptions.
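If it helps to see the mechanism rather than the library knob: client-side flow control amounts to capping the number of outstanding publishes and blocking the caller when the cap is reached. A dependency-free sketch of that idea (the real Pub/Sub Publisher's FlowControlSettings does this for you; the class below is purely illustrative, with the network call replaced by a counter):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;

// Cap outstanding async "publishes" at a fixed limit; callers block
// (back-pressure) instead of queueing unbounded work that would time out.
public class FlowControlledPublisher {
    private final Semaphore permits;
    private final AtomicInteger published = new AtomicInteger();

    public FlowControlledPublisher(int maxOutstanding) {
        this.permits = new Semaphore(maxOutstanding);
    }

    // Blocks when maxOutstanding publishes are already in flight.
    public CompletableFuture<Void> publish(String message) throws InterruptedException {
        permits.acquire();
        return CompletableFuture.runAsync(() -> {
            try {
                // Stand-in for the real network call.
                published.incrementAndGet();
            } finally {
                permits.release(); // free a slot for the next waiting caller
            }
        });
    }

    public int publishedCount() { return published.get(); }

    public static void main(String[] args) throws Exception {
        FlowControlledPublisher publisher = new FlowControlledPublisher(10);
        CompletableFuture<?>[] futures = new CompletableFuture<?>[100];
        for (int i = 0; i < 100; i++) {
            futures[i] = publisher.publish("msg-" + i); // never more than 10 in flight
        }
        CompletableFuture.allOf(futures).join();
        System.out.println(publisher.publishedCount()); // 100
    }
}
```

In the real Java client library the equivalent knobs are the maximum outstanding element count and byte count on FlowControlSettings, plus a limit-exceeded behavior (such as blocking), set on the Publisher's BatchingSettings.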
I would like to send one API request on the Google Compute Engine, instead of sending 100 requests, in order to reduce HTTP traffic.
Here is a description of how to do this when using plain HTTP calls. I am interested in achieving the same goal using Google's Java API client library.
I am also aware of creating groups of instances using templates, but I didn't find a way to attach extra disks that are not read-only.
I am going to integrate some applications using RabbitMQ, and I am facing a design issue. Right now I have one application producing messages and one application consuming them (more are possible in the future). Both applications have access to the same database. Application A is a kind of registration application: when it receives a registration request, it sends a message to RabbitMQ. Application B receives this message, and its task is to load the registration data into an Elasticsearch server. I have some options:
1. The consumer reads the message and ID from the queue, loads the data from the database, and sends it to the Elasticsearch server. This gives the fastest throughput, because things move asynchronously: another process, possibly running on a separate server, loads the data and sends it to Elasticsearch.
2. The consumer reads the message and ID from the queue and then calls a REST service to load the company data. This takes more time per request because of the network overhead; it saves the consumer the data-loading work but adds network delay, and it also bypasses the ESB (message broker). (Personally I think that using an ESB in my application does not mean I have to use it for every single method call.)
3. Send all the registration data in the message itself; the consumer receives it and simply uploads it to the Elasticsearch server.
Which approach should I follow?
Apparently there are many components to your application setup, which makes it hard to take everything into account and suggest a straightforward answer. I would suggest you look into each design and identify the I/O points, the calls made over the network, and the data volume exchanged over the network. Then, depending on the load you expect and the volume of data you expect to store over time, rank these bottlenecks, assigning a higher score to the more severe ones. Pick the solution with the lowest total score and go with that.
I would also suggest you benchmark the difference between sending only the ID and sending the whole object. I would expect the difference to be negligible.
One more suggestion: make your objects immutable. It is not directly relevant to what you are describing, but in situations like yours, where components operate "blindly", knowing that an object cannot change state is a big assurance.
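A minimal sketch of what an immutable registration message might look like (the class and field names are just illustrative, not taken from your application):

```java
// Immutable registration message: all fields final, no setters, class final,
// so any component that receives it can rely on its state never changing.
public final class RegistrationMessage {
    private final String registrationId;
    private final String companyName;

    public RegistrationMessage(String registrationId, String companyName) {
        this.registrationId = registrationId;
        this.companyName = companyName;
    }

    public String getRegistrationId() { return registrationId; }
    public String getCompanyName() { return companyName; }

    // "Changes" produce a new instance instead of mutating this one.
    public RegistrationMessage withCompanyName(String newName) {
        return new RegistrationMessage(registrationId, newName);
    }
}
```

With this shape, the producer, the broker, and the consumer can all hold references to the same logical message without any of them being able to surprise the others by modifying it.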
I'm looking for some advice on the simplest way to set up product registration. I have a Java desktop application whose license needs to be renewed every year. When a user downloads and installs this app (through JNLP), they get a limited demo version. There is code in place on the client to "register" the product and unlock all of the features.
My next step is to build the server-side components. There will be a database with customer ID numbers and other information about each customer. When the user clicks register, some contact information will be sent to the server along with a product registration ID. The server will check this against the database and then either give the client the OK to unlock the features or inform the user that the registration ID was not valid. This seems like a very standard thing, so what is the standard way to do it?
I have my own VPS and I'm running Tomcat, so I'm really free to implement this any way I choose. I was planning on building some web service, but I have never used REST before.
Use REST. REST is nothing more than using plain HTTP 'better'. Since you are already using HTTP, you are in a sense already making REST-like calls, and moving to full-fledged REST will be easy.
Implementing REST calls is easy. You have two approaches:
Low-end: using URLConnection objects on the client, servlets on the server, and following some REST conventions for HTTP methods and 'clean' URLs (see here). The advantage is that you need no 3rd-party library and minimize the footprint. Maintenance and evolution are harder though.
High-end: a framework and specifications like JAX-RS. Using Restlet you can be up and running with a REST server in a couple of hours without having to deploy a servlet container.
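As a dependency-free sketch of the low-end approach (the endpoint path, the hard-coded valid IDs, and the class name are all made up for illustration; in production the server side would be a servlet running in the Tomcat you already have), the JDK's built-in HTTP server and HttpURLConnection are enough to show both halves of the registration check:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Set;

// Minimal REST-style registration check: GET /register/{id} returns 200 if
// the ID is known, 404 otherwise. The client uses plain HttpURLConnection.
public class RegistrationServer {
    // Stand-in for the customer-database lookup.
    private static final Set<String> VALID_IDS = Set.of("ABC-123", "XYZ-789");

    public static HttpServer start(int port) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/register/", exchange -> {
            String id = exchange.getRequestURI().getPath().substring("/register/".length());
            boolean valid = VALID_IDS.contains(id);
            byte[] body = (valid ? "ok" : "invalid").getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(valid ? 200 : 404, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();
        return server;
    }

    // Client side: what the desktop app would do when the user clicks register.
    public static int checkRegistration(String baseUrl, String id) throws IOException {
        HttpURLConnection conn =
                (HttpURLConnection) new URL(baseUrl + "/register/" + id).openConnection();
        int status = conn.getResponseCode();
        conn.disconnect();
        return status;
    }

    public static void main(String[] args) throws IOException {
        HttpServer server = start(0); // port 0 = pick any free port
        int port = server.getAddress().getPort();
        try {
            System.out.println(checkRegistration("http://localhost:" + port, "ABC-123")); // 200
            System.out.println(checkRegistration("http://localhost:" + port, "NOPE"));    // 404
        } finally {
            server.stop(0);
        }
    }
}
```

The design choice that matters here is the convention, not the plumbing: a resource-shaped URL and an HTTP status code as the answer, which any client (URLConnection, a browser, curl) can consume without a shared library.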
Don't use SOAP. The only reason you would want SOAP is to contractualize, via a WSDL, what you are exposing (you can do the same with REST, by the way; see the Amazon documentation, for instance). Trust me, SOAP is way too heavy and confusing for what you are trying to do.