We are running our Spring applications on Tomcat, and over time we have added multiple REST endpoints to our application. We now want to trim it down and remove all the unused endpoints that our GUIs no longer use.
We do use Splunk, but it only reports hits on active endpoints, aggregated from Tomcat's localhost_access log file. We want to find the endpoints that have 0 hits.
The most straightforward way seems to be to write some kind of Python script that captures all the endpoints from Tomcat's startup output (or is fed them manually) and puts them in a hash map. The script would then go over the localhost_access files in the Tomcat server logs for the last few months, incrementing a counter whenever the corresponding endpoint appears, and finally print all the keys in the hash map whose value is 0.
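The counting idea can be sketched in Java (since the apps are Java anyway). The substring match against log lines and the way you obtain the endpoint list are assumptions; adjust to your log format:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

// Counts hits per endpoint across access-log lines and returns the
// endpoints that were never hit. A contains() check against lines like
// "GET /api/users HTTP/1.1" is a crude but workable match for a sketch.
public class UnusedEndpointFinder {
    public static Set<String> findUnused(List<String> endpoints, List<String> accessLogLines) {
        Map<String, Integer> hits = new HashMap<>();
        for (String ep : endpoints) {
            hits.put(ep, 0);
        }
        for (String line : accessLogLines) {
            for (String ep : endpoints) {
                if (line.contains(ep)) {
                    hits.merge(ep, 1, Integer::sum);
                }
            }
        }
        Set<String> unused = new TreeSet<>();
        for (Map.Entry<String, Integer> e : hits.entrySet()) {
            if (e.getValue() == 0) {
                unused.add(e.getKey());
            }
        }
        return unused;
    }
}
```

You would feed it the endpoint list (e.g. read with `Files.readAllLines`) and the concatenated contents of the localhost_access files for the period you care about.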
Is the above a feasible way to do so, or does there exist an easier method?
Splunk is essentially a search engine and, like any other search engine, cannot find something that is not there. Endpoints with no hits will not have data in Splunk and so will not appear in search results.
The usual approach to a problem like this is to start with a list of known objects and subtract those that are found by Splunk. The result is a list of unused objects. You touched on this concept yourself with your hash map idea.
Create a CSV file containing a list of all of your endpoints. I'll call it endpoints.csv. Then use it in a search like this one:
| inputlookup endpoints.csv | search NOT [ search index=foo endpoint=* | fields endpoint | dedup endpoint ]
One way to find unused endpoints: go through access.log for a few days and note which endpoints are being accessed. Over a period of time you'll learn which endpoints go unused.
I am working on a project where we are creating an auction site that works on a weekly basis. The plan is to start all auctions on Monday and end them all on Friday.
So far I have figured out that I need a database that holds the start and end dates so I can check how much time is left. But I need to be able to constantly check whether the time is up, and I do not know how to proceed. What is the proper way to do this?
We are using Java 8 with Spring, and React as the frontend.
Two solutions:
Use a websocket: the server sets a timer that is due on Friday and, once the timer expires, sends the event to the client.
The client side runs a timer as well.
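The server-side timer boils down to computing how long to wait until the next Friday close. A minimal sketch, assuming auctions close Friday at 18:00 (the hour is an assumption):

```java
import java.time.DayOfWeek;
import java.time.Duration;
import java.time.LocalDateTime;
import java.time.temporal.TemporalAdjusters;

// Computes how long a server-side timer should wait until the next
// Friday at 18:00. If we are already past this week's close, it rolls
// over to next Friday.
public class AuctionClock {
    public static Duration untilClose(LocalDateTime now) {
        LocalDateTime close = now.with(TemporalAdjusters.nextOrSame(DayOfWeek.FRIDAY))
                                 .withHour(18).withMinute(0).withSecond(0).withNano(0);
        if (!close.isAfter(now)) {
            close = close.with(TemporalAdjusters.next(DayOfWeek.FRIDAY));
        }
        return Duration.between(now, close);
    }
}
```

You could hand the resulting delay to a `ScheduledExecutorService` and, when the task fires, push an "auction ended" event to the connected websocket clients.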
You have 3 layers in play here:
Frontend (React)
Backend (Java8/Spring app)
Database
Now you need to figure out how to propagate data between those layers.
Propagating from backend to frontend can be done using either polling or websockets.
Propagating from database to backend can be done using either polling or database triggers.
I'd personally connect React with Spring App via websockets. Then I'd have some background task polling the database and pushing the discovered changes to connected websocket clients.
Here is Spring's own tutorial on websockets: https://spring.io/guides/gs/messaging-stomp-websocket/
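The poll-the-database-and-push idea can be sketched without any framework. In a real Spring app the supplier would run the database query and the subscribers would be websocket sends (e.g. via `SimpMessagingTemplate`); those wirings are assumptions here:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;
import java.util.function.Consumer;
import java.util.function.Supplier;

// Minimal change-poller: reads a value from the source and notifies
// subscribers (e.g. websocket sessions) only when the value changes.
public class ChangePoller<T> {
    private final Supplier<T> source;
    private final List<Consumer<T>> subscribers = new ArrayList<>();
    private T last;

    public ChangePoller(Supplier<T> source) {
        this.source = source;
    }

    public void subscribe(Consumer<T> subscriber) {
        subscribers.add(subscriber);
    }

    // Call this from a background task, e.g. @Scheduled(fixedDelay = 5000).
    public void poll() {
        T current = source.get();
        if (!Objects.equals(current, last)) {
            last = current;
            subscribers.forEach(s -> s.accept(current));
        }
    }
}
```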
I think you are looking for a pull model. Basically your Java application needs to pull the end date from the database at certain intervals. You can write a cron job for that. Quartz is one of the popular Java-based scheduling frameworks out there: http://www.quartz-scheduler.org/documentation/quartz-2.3.0/tutorials/crontrigger.html. It handles distributed systems as well, so if your application has multiple instances, Quartz can cover that for you.
Another variant of the pull model: you can keep the entries with end dates in a JVM-local cache or some other cache (Redis, Memcached) and run the cron against that. But then you have to maintain cache consistency with the database.
Which one you choose depends on your business use case (how frequently end dates change, how frequently you need to check for them, etc.).
The other option would be a push model, but a push model won't work with traditional databases for this case.
A possible option is to extend org.springframework.web.servlet.handler.HandlerInterceptorAdapter and put the logic that checks the current time against your time range in this class, throwing an exception if the check fails.
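The core check such an interceptor would run in `preHandle()` can be kept in a small, framework-free class. A sketch (the exception type and the throw-on-failure style are choices, not Spring requirements):

```java
import java.time.LocalDateTime;

// Holds an auction's time range and rejects requests outside it,
// mirroring the interceptor-throws-exception approach described above.
public class AuctionWindow {
    private final LocalDateTime start;
    private final LocalDateTime end;

    public AuctionWindow(LocalDateTime start, LocalDateTime end) {
        this.start = start;
        this.end = end;
    }

    // Open from start (inclusive) until end (exclusive).
    public boolean isOpen(LocalDateTime now) {
        return !now.isBefore(start) && now.isBefore(end);
    }

    public void check(LocalDateTime now) {
        if (!isOpen(now)) {
            throw new IllegalStateException("Auction closed; window ended at " + end);
        }
    }
}
```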
A potential optimization: cache the values from the DB (at least for some time, for example 15 minutes), as this will help decrease the number of actual calls to the database.
Currently we have several Java microservice apps that use Elasticsearch, and for debugging purposes we have the logging set to tracer. This outputs all ES requests and responses to the logs. We really only need requests, and only on non-production. For all environments we want to keep search response times, along with a custom header that we set for tracking purposes across multiple microservice apps.
I see that in .NET there is a custom solution that would work perfectly for us: https://www.elastic.co/guide/en/elasticsearch/client/net-api/current/logging-with-on-request-completed.html#logging-with-on-request-completed but sadly I can't seem to find a matching Java feature.
Is there a way to do this using Java?
If I got your question right, then you want the following:
Log every Elasticsearch query (and not the response) from the different microservices.
You only want it on your test clusters.
There is a workaround in Elasticsearch for this. Elasticsearch itself logs the queries made to it; you just need to set a threshold. Any query that takes more time than that threshold is logged to a separate slow log file in your logs folder. Simply set the threshold to 0 to log every query, and again, this can be done only in the testing environments for your particular use case.
There are a lot of configuration options for it; I'd recommend checking this: https://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules-slowlog.html
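For example, on a test index the search slow-log thresholds can be zeroed out so that every query is logged (the index name is a placeholder; see the link above for the full set of levels and thresholds):

```
PUT /my-index/_settings
{
  "index.search.slowlog.threshold.query.warn": "0ms",
  "index.search.slowlog.threshold.fetch.warn": "0ms"
}
```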
I'm integrating BMC remedy and JIRA to solve a problem.
Task: I run a REST service that reads BMC Remedy and automatically raises JIRA issues if there are any records of type hotfix. So basically a few fields from BMC are mapped to JIRA when the issues are created.
Problem: because the Remedy API accepts only one search criterion (which is "hotFix" in my case), every time my service runs it reads Remedy and fetches all the records of type "hotFix", even the ones I've already created JIRAs for, which is expected. But now I need to resolve this because I don't want to raise duplicate JIRAs for them.
I don't want to store all these things in a database, for certain reasons (well, infra cost).
Is there any way I can import this data without creating duplicates?
In your service, before creating a JIRA ticket (I assume it's an API call), check whether one already exists (by using the GET API from JIRA).
Given your constraints on querying BMC Remedy, this extra call to JIRA to check for a duplicate seems like an option.
Okay! I'm using a flat file.
As an alternative solution, I've used a flat file to store the "date created" of the last Remedy incident with the "HotFix" label (only one record! it gets updated every time my service is hit and there are new Remedy incidents). While fetching the data from Remedy, I order it by date created and store the most recent date in this file; it effectively serves as the comparison parameter the next time my service runs, to check whether JIRAs up to that particular date/time have already been created.
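The watermark logic behind that flat file can be sketched as two small functions. The `Record` type is a minimal stand-in for a Remedy incident (an assumption; real incidents carry more fields):

```java
import java.time.Instant;
import java.util.List;
import java.util.stream.Collectors;

// Watermark dedupe: given the "date created" stored from the previous
// run, keep only the incidents created after it, so no JIRA issue is
// raised twice.
public class Watermark {
    public static class Record {
        final String id;
        final Instant created;
        public Record(String id, Instant created) {
            this.id = id;
            this.created = created;
        }
    }

    public static List<Record> newSince(List<Record> fetched, Instant lastSeen) {
        return fetched.stream()
                      .filter(r -> r.created.isAfter(lastSeen))
                      .collect(Collectors.toList());
    }

    // The highest created date in this batch becomes the next watermark
    // (written back to the flat file).
    public static Instant nextWatermark(List<Record> fetched, Instant lastSeen) {
        return fetched.stream()
                      .map(r -> r.created)
                      .max(Instant::compareTo)
                      .orElse(lastSeen);
    }
}
```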
This has resolved my issue.
How can I track my projects? Let's say I have 5 different projects that are already in production, and I want to know, for a single user in a single day, how many times he logged in to each of the 5 applications and, after logging in, how many times he clicked on a button or used a function provided by the application. I want to track all of this with minimal changes to the existing code.
The only way I can think of is through logging, using log4j or something similar. Also, maybe check out Splunk.
If it is a JavaEE web application, you can use a Java Servlet Filter for auditing, where information about requests and their outcomes can be collected, stored, and distributed. Initially you can store these data in a database, and later on you can analyze them with a different application.
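Inside the filter's `doFilter()` you could update a counter like the one below for every request, then periodically flush it to the database. The "user|action" key scheme and the flush step are assumptions, not part of the Servlet API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Thread-safe in-memory usage counter a servlet filter could update on
// every request, keyed by user and action (e.g. "alice|/login").
public class UsageCounter {
    private final Map<String, LongAdder> counts = new ConcurrentHashMap<>();

    public void record(String user, String action) {
        counts.computeIfAbsent(user + "|" + action, k -> new LongAdder()).increment();
    }

    public long count(String user, String action) {
        LongAdder a = counts.get(user + "|" + action);
        return a == null ? 0 : a.sum();
    }
}
```

The filter itself only needs to call `record(request.getRemoteUser(), request.getRequestURI())`, so the existing application code stays untouched.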
We would like to start using Salesforce for managing sales contacts, but there are also some business functions regarding contacts that we would like to retain in our current system.
As far as I can see, that means we're going to need a two-way sync? I.e., when anything changes on Salesforce, we need to update it in our system, and vice versa.
I'm suggesting some kind of messaging product that can sit in the middle and retry failed messages, because I have a feeling that without it, things are going to get very messy, for example when one or the other service is down.
The manager on the project would like to keep it simple and feels that using messages rather than realtime point-to-point calls is overkill, but I feel that without it we're going to be in for a world of pain.
Does anyone have any experience with trying to do two-way syncs? (Actually, even a one-way sync suffers from the same risks, I think.)
Many thanks for your insights.
I can't speak for your system, but on the Salesforce API side, take a look at the getUpdated() and getDeleted() calls, which are designed for data replication. The SOAP API doc has a section that goes into detail about how to use them effectively.
We use Jitterbit to achieve a two-way sync between Salesforce and our billing system. Salesforce has a last-modified field, and so does our billing system (your system should have this too; if not, add a timestamp field to the table in its SQL storage). The only important thing is to choose one of the keys as primary (either the SF_ID or the other system's key) and create that key field in the other system, as it will be used for conflict resolution. The process is simple and multistep: load all modified SF data into a flat file, load all modified secondary-system data into another flat file, look for conflicts by comparing the two files over the common key field, notify an admin of any conflicts, and propagate all non-conflicting changes to the other system. We run this process every 10 minutes, and we store the last timestamp on both systems between cycle runs so that we only take records that were modified between two cycles.
In case two users edit at the same time, you will either encounter a conflict and resolve it manually, or you will get the "last-saved-wins" outcome.
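The per-record decision in such a sync cycle can be sketched as a small function over the two last-modified timestamps and the previous cycle's watermark. The action names are hypothetical; Jitterbit implements this internally:

```java
import java.time.Instant;

// Per-key decision for one two-way sync cycle: a side "changed" if its
// last-modified time is after the previous cycle's timestamp. Both sides
// changed means a conflict; resolveConflict() applies last-saved-wins.
public class SyncDecision {
    public enum Action { COPY_TO_OTHER, COPY_TO_SF, NONE, CONFLICT }

    public static Action decide(Instant sfModified, Instant otherModified, Instant lastCycle) {
        boolean sfChanged = sfModified != null && sfModified.isAfter(lastCycle);
        boolean otherChanged = otherModified != null && otherModified.isAfter(lastCycle);
        if (sfChanged && otherChanged) return Action.CONFLICT;
        if (sfChanged) return Action.COPY_TO_OTHER;
        if (otherChanged) return Action.COPY_TO_SF;
        return Action.NONE;
    }

    // Last-saved-wins: the more recently modified side overwrites the other.
    public static Action resolveConflict(Instant sfModified, Instant otherModified) {
        return sfModified.isAfter(otherModified) ? Action.COPY_TO_OTHER : Action.COPY_TO_SF;
    }
}
```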
You also have to cater for newly created records: on the SF side, use upsert instead of update (using the external or SF key, depending on which you chose above); on the other side it depends on the system.