I'm integrating BMC Remedy and JIRA to solve a problem.
Task: I run a REST service that reads BMC Remedy and automatically raises JIRA issues for any records of type hotfix. A few fields from BMC Remedy are mapped to each JIRA issue when it is created.
Problem: Because the Remedy API accepts only one search criterion ("hotFix" in my case), every time my service runs it reads Remedy and fetches all the records of type "hotFix", even the ones I've already created JIRA issues for, which is expected. I now need to resolve this because I don't want to raise duplicate JIRA issues for them.
I don't want to store all of this in a database, mainly because of the infrastructure cost.
Is there any way I can import this data without creating duplicates?
In your service, before creating a JIRA ticket (I assume it's an API call), check whether one already exists (by using a GET/search call to the JIRA API).
Given your constraints on querying BMC Remedy, this extra call to JIRA to check for duplicates seems like a reasonable option.
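For illustration, a minimal sketch of that duplicate check, assuming the Remedy incident number is written into the JIRA summary (or a label/custom field) so a JQL search can find it; the host, credentials, project key and JQL are placeholders, not real config:

```java
// Hypothetical duplicate check against JIRA's search REST endpoint.
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class JiraDuplicateCheck {

    private static final String JIRA_BASE = "https://jira.example.com"; // placeholder host
    private static final String AUTH = Base64.getEncoder()
            .encodeToString("user:apiToken".getBytes(StandardCharsets.UTF_8)); // placeholder credentials

    /** Returns true if a JIRA issue already references this Remedy incident. */
    static boolean jiraExistsFor(String remedyIncidentId) throws Exception {
        String jql = "project = OPS AND summary ~ \"" + remedyIncidentId + "\""; // assumed convention
        String url = JIRA_BASE + "/rest/api/2/search?maxResults=1&jql="
                + URLEncoder.encode(jql, StandardCharsets.UTF_8);

        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Authorization", "Basic " + AUTH)
                .header("Accept", "application/json")
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // Crude check: the search response contains "total":0 when nothing matches;
        // a JSON parser would be the proper way to read the result.
        return !response.body().contains("\"total\":0");
    }
}
```

The service would call jiraExistsFor() for each fetched Remedy record and only create a JIRA issue when it returns false.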
Okay! I'm using a flat file.
As an alternative solution I've used a flat file to store the "date created" of the last Remedy incident with the "HotFix" label (only one record! It gets updated every time my service is hit, if there are new Remedy incidents). While fetching the data from Remedy I order it by date created and store the most recent date in this file; that date then serves as the comparison parameter the next time my service is hit, to check whether JIRA issues up to that particular date/time have already been created.
This has resolved my issue.
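For illustration, a minimal sketch of that flat-file watermark, assuming a single ISO-8601 timestamp kept in a local file (the file name is a placeholder):

```java
// Single-record flat file holding the "date created" of the newest processed incident.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.time.Instant;

public class HotfixWatermark {

    private static final Path FILE = Path.of("last-hotfix-created-date.txt"); // assumed location

    /** Reads the last processed Remedy "date created", or EPOCH on the first run. */
    static Instant read() throws IOException {
        if (!Files.exists(FILE)) {
            return Instant.EPOCH;
        }
        return Instant.parse(Files.readString(FILE).trim());
    }

    /** Overwrites the single record with the newest "date created" just processed. */
    static void write(Instant newestCreatedDate) throws IOException {
        Files.writeString(FILE, newestCreatedDate.toString());
    }
}
```

On each run the service fetches the "hotFix" incidents ordered by creation date, skips everything at or before read(), raises JIRA issues for the rest, and finishes by calling write() with the newest date it saw.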
We want to do a zero-downtime migration of a Google App Engine Java 8 standard project to another region.
Unfortunately Google does not support this, so it has to be done manually.
One could export the Datastore and import it again, but there must be no downtime and the data must always remain consistent.
So the idea came up to create the project in the new region and embed Objectify 5 there with all the entities (definitions, not data) used in the old project. Any new data goes into the "new Datastore" attached to this new project.
Any data not found in this new Datastore would be queried (if necessary) using Objectify 6 connected to the "old" project via the Datastore API.
The advantage would be not having to export any data manually at all and only migrating the most important data on the fly, using the mechanism above. (There's a lot of unused garbage we never did housekeeping on, but also some vital data that must end up on the new system.)
Is this a valid approach? I know I'll probably have to integrate Objectify as source and change package names to avoid problems on the "code side".
If there is a better approach to migrating a project to another region, we'd be happy to hear it.
We searched for hours without a proper result.
Edit: I'm aware that we must instantly stop requests to the old service / disable writes there. We'd solve this by redirecting HTTP traffic from the old project to the new one and disabling writes.
This is a valid approach for the migration. The traffic from the new project can continue to do reads from the old Datastore and writes to the new one. I would like to add one more point.
However, soon after this switchover you should also plan a data migration from the old Datastore to the new one through a mass export and import. The app will then have to be pointed to the new Datastore even for reads. https://cloud.google.com/datastore/docs/export-import-entities
This can be done gracefully by introducing proxy connection logic in Java for connecting to the new Datastore: during the data migration, put a condition in OFY6 that checks the new Datastore for the entity and, if it is not there, reads the data from the old one. This ensures zero downtime, and in the background you can silently and safely turn off the old Datastore, assuming you already have its full export.
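As a rough sketch of that read-through fallback, assuming the new project reads through Objectify's ofy() and reaches the old project with the low-level com.google.cloud.datastore client; the Customer entity, its fields and the project id are illustrative, and ObjectifyService.register(Customer.class) is assumed at startup:

```java
// Read from the new Datastore first; on a miss, read from the old project and
// copy the entity over on the fly.
import com.google.cloud.datastore.Datastore;
import com.google.cloud.datastore.DatastoreOptions;
import com.google.cloud.datastore.Key;

import static com.googlecode.objectify.ObjectifyService.ofy;

public class CustomerReader {

    @com.googlecode.objectify.annotation.Entity
    public static class Customer {
        @com.googlecode.objectify.annotation.Id Long id;
        String name;

        static Customer fromEntity(com.google.cloud.datastore.Entity legacy) {
            Customer c = new Customer();
            c.id = legacy.getKey().getId();
            c.name = legacy.getString("name"); // illustrative field mapping
            return c;
        }
    }

    // Low-level client pointed at the old project (credentials/setup omitted).
    private final Datastore oldDatastore = DatastoreOptions.newBuilder()
            .setProjectId("old-project-id") // placeholder
            .build()
            .getService();

    Customer load(long id) {
        // 1. Try the new Datastore first.
        Customer fresh = ofy().load().type(Customer.class).id(id).now();
        if (fresh != null) {
            return fresh;
        }
        // 2. Fall back to the old project.
        Key oldKey = oldDatastore.newKeyFactory().setKind("Customer").newKey(id);
        com.google.cloud.datastore.Entity legacy = oldDatastore.get(oldKey);
        if (legacy == null) {
            return null;
        }
        Customer migrated = Customer.fromEntity(legacy);
        ofy().save().entity(migrated).now(); // lazily migrate into the new Datastore
        return migrated;
    }
}
```

Saving the fallback result through ofy() means each record is copied over the first time it is read, so the old Datastore only has to answer for records nobody has touched yet.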
Reading from both the old data source and the new data source is a valid way to do migrations.
I am not an expert in this so forgive any bad coding you may see in my project.
I am building an app that lets the user create, read, update and delete ingredients and recipes, which can then be used to make grocery lists. I am using Java with Spring Boot as the back end, MongoDB as the database and AngularJS as the front end. Creating, reading and deleting work just fine, but updating is not working for the recipe collection in the DB.
When I run the back-end server and use Postman to make a PUT request to update a recipe, the recipe in the body of the request gets inserted as a new document, even though I specify the ObjectId of an existing recipe. I don't have this problem with the ingredient collection. I suspect it has something to do with the fact that the recipe documents are nested JSON objects, while the ingredient documents are not, but I may be wrong.
I have uploaded the Java back end of the project to GitHub: https://github.com/firo8/grocerylists/tree/master/grocery-list/src/main/java/com/firo/grocerylist.
I have searched everywhere and can't find anyone else that has this problem with an answer I can understand. E.g. the following question seems to be similar, but the answer is not helpful: PUT makes POST instead of updating value in Spring Boot
What can I do to fix this?
Thanks in advance
Please refer to the docs below on the Spring Data MongoDB support page.
https://docs.spring.io/spring-data/mongodb/docs/3.0.3.RELEASE/reference/html/#mongo.core
You need to connect to the database by creating a Mongo client and use MongoTemplate to insert/update/get/delete documents in MongoDB.
Follow the docs to get it working.
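For illustration, a minimal MongoTemplate-based sketch of an update by id; the Recipe class, its fields and the service wiring are placeholders modeled on the question, not code from the linked repository:

```java
// Replace an existing recipe document by its id instead of inserting a new one.
import org.bson.types.ObjectId;
import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.mapping.Document;
import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;
import org.springframework.stereotype.Service;

@Service
public class RecipeService {

    @Document("recipes")
    public static class Recipe {
        @Id
        private ObjectId id;
        private String name;
        // nested ingredient list etc. omitted

        public void setId(ObjectId id) { this.id = id; }
    }

    private final MongoTemplate mongoTemplate;

    public RecipeService(MongoTemplate mongoTemplate) {
        this.mongoTemplate = mongoTemplate;
    }

    public boolean exists(String id) {
        Query byId = new Query(Criteria.where("_id").is(new ObjectId(id)));
        return mongoTemplate.exists(byId, Recipe.class);
    }

    public Recipe update(String id, Recipe incoming) {
        // The incoming object must carry the EXISTING id, stored with the same type;
        // if the id is null or doesn't match a document, save() inserts a new one.
        incoming.setId(new ObjectId(id));
        // save() upserts by id: it replaces the document when that id already exists.
        return mongoTemplate.save(incoming);
    }
}
```

The usual cause of a PUT inserting a new document is the id not making it onto the object being saved (or being saved as a different type, e.g. String vs ObjectId), so it is worth checking how the id is bound in the PUT handler as well.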
We are running our Spring applications on Tomcat, and over a period of time we have added multiple REST endpoints to our application. We now want to trim it down and remove all the endpoints that our GUIs no longer use.
We do use Splunk, but it will only give the number of hits on active endpoints from the log aggregator on Tomcat's localhost_access file. We want to find the endpoints that have 0 hits.
The most straightforward way is to write some kind of Python script that collects all the endpoints from Tomcat at startup (or is fed them manually) and puts them in a hash map, then goes over the localhost_access files in the Tomcat server logs for the last few months, incrementing a counter whenever the corresponding endpoint is hit, and finally prints out all the keys in the hash map whose value is 0.
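For illustration, the same counting idea sketched in Java (the rest of our stack's language); the log directory, the default access-log layout (request path as the 7th space-separated token) and the endpoints file are assumptions:

```java
// Seed a map with every known endpoint, count hits from Tomcat access logs,
// then print the endpoints that were never hit.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Stream;

public class UnusedEndpointFinder {

    public static void main(String[] args) throws IOException {
        // One endpoint per line, e.g. copied from the request-mapping log at startup.
        List<String> endpoints = Files.readAllLines(Path.of("endpoints.txt"));
        Map<String, Integer> hits = new HashMap<>();
        endpoints.forEach(e -> hits.put(e, 0));

        try (Stream<Path> logs = Files.list(Path.of("/var/log/tomcat"))) {
            logs.filter(p -> p.getFileName().toString().startsWith("localhost_access"))
                .forEach(log -> countHits(log, hits));
        }

        hits.forEach((endpoint, count) -> {
            if (count == 0) {
                System.out.println("UNUSED: " + endpoint);
            }
        });
    }

    private static void countHits(Path log, Map<String, Integer> hits) {
        try (Stream<String> lines = Files.lines(log)) {
            lines.forEach(line -> {
                String[] parts = line.split(" ");
                if (parts.length > 6) {
                    String path = parts[6].split("\\?")[0]; // request path without query string
                    hits.computeIfPresent(path, (k, v) -> v + 1);
                }
            });
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```

One caveat: endpoints with path variables (e.g. /users/{id}) won't match logged URLs by exact string, so those would need a pattern match instead of a plain map lookup.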
Is the above a feasible way to do so, or does there exist an easier method?
Splunk is essentially a search engine and, like any other search engine, cannot find something that is not there. Endpoints with no hits will not have data in Splunk and so will not appear in search results.
The usual approach to a problem like this is to start with a list of known objects and subtract those that are found by Splunk. The result is a list of unused objects. You touched on this concept yourself with your hash map idea.
Create a CSV file containing a list of all of your endpoints. I'll call it endpoints.csv. Then use it in a search like this one:
| inputlookup endpoints.csv | search NOT [ search index=foo endpoint=* | dedup endpoint | fields endpoint | format ]
One way to find unused endpoints: go to access.log and check a few days of logs to see which endpoints are being accessed. Over a period of time you'll get to know which endpoints are unused.
I have a use case in which my data is present in MySQL.
For each new row inserted in MySQL, I have to perform analytics on the new data.
How I am currently solving this problem is:
My application is a Spring Boot application, in which I use a scheduler that checks for new rows in the database every 2 seconds (roughly sketched below).
The problem with the current approach is:
Even if there is no new data in the MySQL table, the scheduler still fires a MySQL query to check whether new data is available.
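For reference, the current polling job looks roughly like this with Spring's @Scheduled; the table name, query and "last seen id" handling are placeholders:

```java
// Poll the table every 2 seconds and analyse anything newer than the last seen id.
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

import java.util.List;
import java.util.Map;

@Component
public class NewRowPoller {

    private final JdbcTemplate jdbcTemplate;
    private long lastSeenId = 0; // highest primary key already analysed

    public NewRowPoller(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    @Scheduled(fixedDelay = 2000) // every 2 seconds, as described above
    public void pollForNewRows() {
        List<Map<String, Object>> rows = jdbcTemplate.queryForList(
                "SELECT * FROM events WHERE id > ? ORDER BY id", lastSeenId);
        for (Map<String, Object> row : rows) {
            analyse(row);
            lastSeenId = ((Number) row.get("id")).longValue();
        }
    }

    private void analyse(Map<String, Object> row) {
        // analytics for the new row goes here
    }
}
```

(@EnableScheduling is assumed on a configuration class; the query fires whether or not new rows exist, which is exactly the problem.)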
One way to solve this type of problem in any SQL database is triggers.
But so far I have not been successful in creating MySQL triggers that can call a Java-based Spring application, or even a simple Java application.
My question is:
Is there any better way to solve the above use case? I am even open to switching to another storage (database) system if one is built for this type of use case.
This fundamentally sounds like an architecture issue. You're essentially using a database as an API, which, as you can see, causes all kinds of issues. Ideally, this db would be wrapped in a service that can manage notifying the systems that need to know about changes. Let's look at a few different options going forward.
Continue to poll
You didn't outline what the actual issue is with your current polling approach. Is running the job when it's not needed causing an issue of some kind? I'd be a proponent for just leaving it unless you're interested in making a larger change.
Database Trigger
While I'm unaware of a way to launch a Java process via a db trigger, you can do an HTTP POST from one. With that in mind, you can have your batch job staged in a web app that uses a POST to launch the job when the trigger fires.
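A minimal sketch of the receiving side of that idea: a small Spring controller that the database-side POST would hit to kick off the job. The path, payload shape and job interface are placeholders, not a known API:

```java
// Endpoint that a trigger-driven HTTP POST could call to launch the analytics job.
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class RowInsertedController {

    public record RowInsertedEvent(long rowId) {}

    public interface AnalyticsJob {
        void runFor(long rowId);
    }

    private final AnalyticsJob analyticsJob; // your existing batch/analytics logic

    public RowInsertedController(AnalyticsJob analyticsJob) {
        this.analyticsJob = analyticsJob;
    }

    @PostMapping("/internal/rows")
    public ResponseEntity<Void> onRowInserted(@RequestBody RowInsertedEvent event) {
        analyticsJob.runFor(event.rowId()); // launch the work for the new row
        return ResponseEntity.accepted().build();
    }
}
```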
Wrap existing datastore in a service
This is, IMHO, the best option. This allows there to be a system of record that provides an API that can be versioned, etc. It would also allow any logic around who to notify to be encapsulated in this service.
Replace data store with something that allows for better notifications
Without any real information on what the data being stored is, it's hard to say how practical this is. But something like Apache Kafka or Apache Geode would both be options that provide the ability to be notified when new data is persisted (Kafka by listening to the topic, Geode via a continuous query).
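For the Kafka option, a minimal sketch of the consuming side with spring-kafka; the topic name, group id and payload format are assumptions, and whatever writes the data would publish an event for each persisted record:

```java
// React to newly persisted records by listening to a topic instead of polling.
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class NewRecordListener {

    @KafkaListener(topics = "new-records", groupId = "analytics")
    public void onNewRecord(String payload) {
        // Invoked only when a record is actually published; no scheduled polling needed.
        runAnalytics(payload);
    }

    private void runAnalytics(String payload) {
        // analytics for the new record goes here
    }
}
```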
For the record, I'd advocate for the wrapping of the existing database in a service. That service would be the only way into the db and take on responsibility for any notifications required.
We would like to start using Salesforce for managing sales contacts, but there are also some business functions regarding contacts that we would like to retain in our current system.
As far as I can see, that means we're going to need a two-way sync? I.e., when anything changes in Salesforce, we need to update it in our system, and vice versa.
I'm suggesting some kind of messaging product that can sit in the middle and retry failed messages, because I have a feeling that without that, things are going to get very messy, e.g. when one or the other service is down.
The manager on the project would like to keep it simple and feels that using messages rather than real-time point-to-point calls is overkill, but I feel like without it we're going to be in for a world of pain.
Does anyone have any experience with trying to do two-way syncs? (Actually, even one-way suffers from the same risks, I think.)
Many thanks for your insights.
I can't speak for your system, but on the Salesforce API side, take a look at the getUpdated() and getDeleted() calls, which are designed for data replication. The SOAP API doc has a section that goes into detail about how to use them effectively.
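For illustration, a rough sketch of that replication call, assuming the Force.com WSC partner API (com.sforce.soap.partner); the connection setup, object name and time window are placeholders:

```java
// Pull the ids of Contacts changed in a recent window, replication-style.
import com.sforce.soap.partner.GetUpdatedResult;
import com.sforce.soap.partner.PartnerConnection;

import java.util.Calendar;

public class SalesforceReplication {

    static void pullRecentContactChanges(PartnerConnection connection) throws Exception {
        Calendar end = Calendar.getInstance();
        Calendar start = (Calendar) end.clone();
        start.add(Calendar.MINUTE, -30); // window since the last replication run

        GetUpdatedResult updated = connection.getUpdated("Contact", start, end);
        for (String id : updated.getIds()) {
            // retrieve() the full record by id here and apply it to the local system
        }
        // Persist updated.getLatestDateCovered() as the start of the next window.
    }
}
```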
We use Jitterbit to achieve a two-way sync between Salesforce and our billing system. Salesforce has a last-modified field and so does our billing system (your system should have this; if not, add a timestamp field to the table in its SQL storage). The only important thing is to choose one of the keys as primary (either the SF_ID or the other system's key) and create that key field in the other system, as it will be used for conflict resolution. The process is simple and multi-step: load all modified SF data into a flat file, load all modified secondary-system data into another flat file, look for conflicts by comparing the two files over the common key field (a rough sketch of this step is below), notify an admin of any conflicts, and propagate all non-conflicting changes to the other system. We run this process every 10 minutes, and we store the last timestamp on both systems between cycle runs so that we only take records that were modified between two cycles.
In case two users edit at the same time, you will either encounter a conflict and resolve it manually, or you will get the "last saved wins" outcome.
You also have to cater for newly created records: on the SF side use upsert instead of update (using the external or SF key, depending on which you chose above); on your other side it depends on the system.
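For illustration, a rough sketch of the comparison step described above: records modified on both sides since the last run (matched on the shared primary key) go to the admin as conflicts, everything else is propagated. The record shape and the push/notify helpers are placeholders:

```java
// Compare the two "modified since last run" sets over the common key field.
import java.util.Map;

public class SyncComparison {

    record ModifiedRecord(String key, String payload) {}

    static void reconcile(Map<String, ModifiedRecord> modifiedInSalesforce,
                          Map<String, ModifiedRecord> modifiedInBilling) {
        for (ModifiedRecord sf : modifiedInSalesforce.values()) {
            if (modifiedInBilling.containsKey(sf.key())) {
                notifyAdminOfConflict(sf, modifiedInBilling.get(sf.key()));
            } else {
                pushToBilling(sf);
            }
        }
        for (ModifiedRecord billing : modifiedInBilling.values()) {
            if (!modifiedInSalesforce.containsKey(billing.key())) {
                pushToSalesforce(billing); // upsert on the SF side, as noted above
            }
        }
    }

    static void notifyAdminOfConflict(ModifiedRecord a, ModifiedRecord b) { /* ... */ }
    static void pushToBilling(ModifiedRecord r) { /* ... */ }
    static void pushToSalesforce(ModifiedRecord r) { /* ... */ }
}
```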