Let's say we have a ProductType table in our database. This represents the types of products that can exist:
productTypeId | name | description    | capacity | pricePerNight
------------- | ---- | -------------- | -------- | -------------
1             | Tent | This is a tent | 4        | 20
Let's say we have a Product table in our database. This represents actual physical Products that exist (i.e. in a rental shop's inventory):
id | productTypeId (FK) | condition | lastUsed
-- | ------------------ | --------- | --------
6  | 1                  | good      | 18/11/21
7  | 1                  | bad       | 18/11/21
8  | 1                  | medium    | 18/11/21
Now, let's say I was making a public API which allows clients to query all available products and the price of those products for a specific set of dates.
My options are: (a) Return an edited version of the ProductType object with new fields (e.g. quantityAvailable and priceForDates) conveying the extra information requested:
{
"productTypeId": 1,
"name": "Tent",
"description": "This is a tent",
"capacity": 4,
"quantityAvailable": 1,
"priceForDates": 60,
}
(b) Wrap the ProductType object in a parent object then add extra fields (e.g. quantityAvailable and priceForDates) to the parent object:
{
"quantityAvailable": 3,
"priceForDates": 60,
"product":
{
"productTypeId": 1,
"name": "Tent",
"description": "This is a tent",
"capacity": 4,
}
}
I am confronted by this situation quite regularly and I'm wondering what is the best practice for a RESTful API.
IRL I am building a public API which needs to be as intuitive and easy to use as possible for our company's integrating partners.
This is borderline opinion-based, but here is my take.
I would go with option (b). The reason is that you might be exposing your ProductType as the following in other endpoints:
"product": {
"productTypeId": 1,
"name": "Tent",
"description": "This is a tent",
"capacity": 4,
}
In this case, it is important to be consistent and to represent ProductType with the same structure in all your endpoints.
On top of this, this looks like the response to a call that checks the availability of a product for a given period. Even the way you phrased it, "(...) allows clients to query all available products and the price of those products for a specific set of dates.", shows that the price and the number of available products are just additional information about a given ProductType. Hence, they should be included alongside ProductType, but not as part of ProductType. For example, priceForDates is clearly not a ProductType property.
Related
I'm having difficulties with aggregations over dynamic templates. I have values stored like this:
[
{
"country": "CZ",
"countryName": {
"en": "Czech Republic",
"es": "Republica checa",
"de": "Tschechische Republik"
},
"ownerName": "..."
},
{
"ownerName": "..."
}
]
The country field is a classic keyword; the mapping for countryName is indexed as a dynamic template, because I want to extend it with other languages when I need to.
{
"dynamic_templates": [
{
"countryName_lsi_object_template": {
"path_match": "countryName.*",
"mapping": {
"type": "keyword"
}
}
}
]
}
countryName and country are not mandatory parameters: when a document is not assigned to any country, countryName isn't filled either. However, I need to do a sorted aggregation over the country names according to a chosen key, and I also need to include buckets with null countries. Is there any way to do that?
Previously, I used TermsValuesSourceBuilder with an order on the "country" field, but I need the data sorted according to a specific language and name, and that can't be done over country codes.
(I'm using Elasticsearch 7.7.1 and Java 8, and recreating the index / changing the data structure is not an option.)
I tried to use the missing bucket option, but the response does not include buckets with "countryName" missing at all.
new TermsValuesSourceBuilder("countryName").field("countryName.en").missingBucket(true);
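For completeness, this is roughly how that source is wired into the composite aggregation (a sketch against the 7.x high-level REST client; the index name, page size, and client instance are placeholders):

import java.util.ArrayList;
import java.util.List;
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.search.aggregations.bucket.composite.CompositeAggregationBuilder;
import org.elasticsearch.search.aggregations.bucket.composite.CompositeValuesSourceBuilder;
import org.elasticsearch.search.aggregations.bucket.composite.TermsValuesSourceBuilder;
import org.elasticsearch.search.builder.SearchSourceBuilder;

// Wire the missing-bucket source into a composite aggregation; with
// missingBucket(true) I expect documents without countryName.en to
// still produce a bucket whose key is null.
SearchResponse searchByCountryName(RestHighLevelClient client) throws java.io.IOException {
    List<CompositeValuesSourceBuilder<?>> sources = new ArrayList<>();
    sources.add(new TermsValuesSourceBuilder("countryName")
            .field("countryName.en")
            .missingBucket(true));

    CompositeAggregationBuilder agg =
            new CompositeAggregationBuilder("by_country_name", sources).size(100);

    SearchRequest request = new SearchRequest("my_index") // placeholder index name
            .source(new SearchSourceBuilder().size(0).aggregation(agg));
    return client.search(request, RequestOptions.DEFAULT);
}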
This is about adding GraphQL to an existing Java API which takes in multiple lists as input.
Background:
I have an existing Java-based REST API, getFooInformationByRecognizer, which takes in a list of recognizers, where each recognizer object contains an id and its type, and returns the information corresponding to each id.
The only 3 types possible are A, B or C. The input can be any combination of these types.
Eg:
[{"id": "1", "type": "A" }, {"id": "2", "type": "B"},{"id": "3", "type": "C"}, {"id": "4", "type": "A"}, {"id":"5", "type": "B"}]
Here's its Java representation:
class FooRecognizer {
    String id;
    String fooType;
}
This API does a bit of processing.
First, it extracts all the input entries whose ids have type A and fetches the information corresponding to those ids.
Similarly, it extracts the ids of type B and fetches the information corresponding to those, and likewise for C.
So it fetches data from 3 different sources and finally collates the results into a single map and returns it.
Eg:
ids of type A --> A SERVICE --> <DATA SOURCE FOR A>
ids of type B --> B SERVICE --> <DATA SOURCE FOR B>
ids of type C --> C SERVICE --> <DATA SOURCE FOR C>
Finally it does this:
A information + B information + C information, and puts this into a Java HashMap.
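In code, the fan-out and collation step is roughly this (a simplified sketch; AService/BService/CService, fetchByIds and FooInformation are stand-ins for our internal types, and standard getters are assumed on FooRequest/FooRecognizer):

import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Group recognizers by type, fetch each group from its own source,
// then merge the three per-id result maps into one.
Map<String, List<FooRecognizer>> byType = request.getList().stream()
        .collect(Collectors.groupingBy(FooRecognizer::getFooType));

Map<String, FooInformation> collated = new HashMap<>();
collated.putAll(aService.fetchByIds(byType.getOrDefault("A", Collections.emptyList())));
collated.putAll(bService.fetchByIds(byType.getOrDefault("B", Collections.emptyList())));
collated.putAll(cService.fetchByIds(byType.getOrDefault("C", Collections.emptyList())));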
The Java representation of the request to this service is:
class FooRequest {
    private Bar bar;
    private List<FooRecognizer> list;
}
The Java representation of the response object from the service is:
class FooInformationResponse {
    private Map<String, FooRecognizer> fooInformationCollated;
}
Sample JSON output of the response is:
"output":{
"fooInformationCollated":{
"1":{
"someProperty": "somePropertyValue"
"listOfNestedProperties": [{"x": "xValue", "y": "yValue", "z","zValue"]
"nestedProperty":{
"anotherProperty":{
"furtherNestedProperty": "value"
}
}
}
"2":{
"someProperty": "somePropertyValue"
"listOfNestedProperties": [{"a": "aValue", "b": "bValue", "c","cValue"]
"nestedProperty":{
"anotherProperty":{
"furtherNestedProperty": "value"
}
}
}
}... and so on for other ids in the input
Now, I want to convert this service to GraphQL and here is my query.
query {
  getFooInformationByRecognizer(filterBy: {
    fooRecognizer: [{
      id: "1",
      fooType: A
    }],
    bar: {
      barId: "someId",
      ...<other bar info>
    }
  }) {
    fooInformationCollated {
      id
      fooInformation {
        someProperty
        listOfNestedProperties
        nestedProperty {
          anotherProperty {
            furtherNestedProperty
          }
        }
      }
    }
  }
}
Here is my GraphQL schema:
type Query {
  getFooInformationByRecognizer(filterBy: getFooByRecognizerTypeFilter!): getFooByRecognizerTypeFilterResponse
}

input getFooByRecognizerTypeFilter {
  bar: Bar!
  fooRecognizer: [FooRecognizer!]!
}

input Bar {
  barId: String!
  ....
}

input FooRecognizer {
  id: String!
  fooType: fooIdType!
}

enum fooIdType {
  A
  B
  C
}
I have a few questions here:
Would this be the best way / best practice to represent this query? Or should I model my query to take in 3 separate lists, e.g. query getFooInformationByRecognizer(barId, listOfAs, listOfBs, listOfCs)? What other choices do I have for modeling the query?
I found having a complex input type the easiest. In general, is there any specific reason to choose a complex input type over the other choices, or vice versa?
Is there anything related to query performance that I should be concerned about? I've tried looking into DataLoader / batch loading, but that doesn't quite seem to fit this case. I don't think the N+1 problem should be an issue, since I will also create separate individual resolvers for A, B and C, and the query, as can be seen, does not make further calls to the back end once the JSON is returned in the response.
The question is too broad to answer concretely, but here's my best attempt.
While there isn't a definitive answer on one complex input argument vs. multiple simpler arguments, one complex argument is generally more desirable: it's easier for clients to pass a single variable, and it keeps the GraphQL files smaller. This may matter more for mutations, but it is a good heuristic regardless. See the logic explained in more detail, e.g. in this article.
The logic explained above echoes your own observations.
For the specific scenario you listed, I don't see anything of importance for performance. You seem to fetch the whole list in one go (no N+1), so it's not much different from what you're doing for your REST endpoint. Now, I can't say how expensive it is to fetch the lower-level fields (e.g. whether you need JOINs or network calls), but if there's any non-trivial logic, you may want to optimize it by looking ahead into the sub-selection before resolving your top-level fields.
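For example, graphql-java exposes the sub-selection on the DataFetchingEnvironment, so the top-level resolver can check up front whether the expensive part was even requested (a sketch; the field path mirrors your query above, and the fetcher name is made up):

import graphql.schema.DataFetcher;
import graphql.schema.DataFetchingEnvironment;
import graphql.schema.DataFetchingFieldSelectionSet;

public class FooInformationFetcher implements DataFetcher<Object> {
    @Override
    public Object get(DataFetchingEnvironment env) {
        DataFetchingFieldSelectionSet selection = env.getSelectionSet();
        // Only pay for the nested fetch when the client actually asked for it.
        boolean wantsNested = selection.contains("fooInformation/nestedProperty");
        // ... fetch the A/B/C data, including nested properties only if wantsNested
        return null; // placeholder for the collated result
    }
}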
I am building a small app to track bike stations around the city, and I have an API, from the company that provides the service, that gives me the current availability of bikes in the bike stations.
My plan is to have a sort of interactive map with markers for each of the bike stations; when the user taps one of these, they get the information on that specific bike station. I already have all the locations coded in as markers on the map. What I need now is to be able to get the data for the specific bike station the user taps.
An example of a part of the JSON I get from the API is below:
"number": INT,
"contract_name": "STRING",
"name": "STRING",
"address": "STRING",
"position": {
"lat": DOUBLE,
"lng": DOUBLE
},
"banking": BOOLEAN,
"bonus": BOOLEAN,
"bike_stands": INT,
"available_bike_stands": INT,
"available_bikes": INT,
"status": "STRING",
"last_update": 1588583133000
},
....
This structure is the same for all 100+ nodes of the JSON which I get from the API.
My question is: how would I go about filtering one individual entry like this out of the rest of the JSON? The parameter number is an ID unique to each bike station.
Is there a library that can do this for me? My (very naive) idea was to save the whole JSON locally each time and then go through it looking for "number": X and parse out the data I needed, although I recognize this is obviously highly inefficient.
I am only interested in part of each node, to be shown to the user: the banking, bonus, available_bike_stands, available_bikes and status tags. The status tag is optional; it should simply tell me whether the bike station is open (available) or closed.
Get data from the API --> Retrofit
Save data locally --> SharedPreferences, Room
Get a part of each JSON node --> create a class that contains only the fields you need; when Retrofit fetches the data from the API, it will return the result you want:
import com.google.gson.annotations.SerializedName

class YourClass {
    @SerializedName("number")
    var number: Int? = null

    @SerializedName("banking")
    var banking: Boolean? = null

    @SerializedName("bonus")
    var bonus: Boolean? = null

    @SerializedName("available_bike_stands")
    var availableBikeStands: Int? = null

    //... fields you need
}
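The fetch-and-filter could then look like this in Java (a sketch; the base URL and endpoint path are assumptions, and getNumber() is the getter Kotlin generates for the number property):

import java.io.IOException;
import java.util.List;
import retrofit2.Call;
import retrofit2.Retrofit;
import retrofit2.converter.gson.GsonConverterFactory;
import retrofit2.http.GET;

interface StationApi {
    @GET("stations") // hypothetical endpoint returning the full station list
    Call<List<YourClass>> getStations();
}

// Blocks for brevity; on Android, call enqueue() instead of execute()
// so the request runs off the main thread.
static YourClass findStation(int tappedNumber) throws IOException {
    Retrofit retrofit = new Retrofit.Builder()
            .baseUrl("https://api.example.com/") // assumption
            .addConverterFactory(GsonConverterFactory.create())
            .build();

    List<YourClass> stations =
            retrofit.create(StationApi.class).getStations().execute().body();

    // "number" is the unique station id, so pick out the tapped one.
    return stations.stream()
            .filter(s -> s.getNumber() != null && s.getNumber() == tappedNumber)
            .findFirst()
            .orElse(null);
}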
//CouponCartRule - MongoDB
{
"id": 1,
"applicability": {
"validFrom": "12-MAR-2017T01:00:00Z",
"validTill": "12-MAR-2019T01:00:00Z"
},
"maxUsage": 100,
"currentUsage": 99,
"maxBudget": 1000,
"currrentBudget": 990
}
//UniqueCoupon collection - MongoDB
{
"ruleId": 1,
"couponCode": "CITIDEALMAR18",
"currentCouponUsage": 90,
"validFrom": "12-MAR-2018T01:00:00Z",
"validTill": "12-APR-2018T01:00:00Z"
}
{
"ruleId": 1,
"couponCode": "CITIDEALAPR18",
"currentCouponUsage": 9,
"validFrom": "12-JAN-2018T01:00:00Z",
"validTill": "12-FEB-2018T01:00:00Z"
}
//Order - MongoDB
{
"id": 112,
"total": {
"discountCode": "CITIDEALMAR18",
"discount": 10,
"total": 90,
"grandTotal": 90
},
"items": []
}
Problem Description
A CouponCartRule has the conditions that must hold for a coupon code to be applied to an order.
Each shopping cart rule can have n coupons, under which specific conditions might be overridden.
This is done to reuse a CouponCartRule while creating different coupons under it; the coupons under a CouponCartRule can grow to a max of 10K records.
When an order is successfully placed, the order will have the couponCode and discount applied.
The Order collection is managed by the OM team and is very big.
We receive an order add / cancel event when an order is created / cancelled.
I have a requirement to check that maxUsage and maxBudget are not breached during checkout.
I am planning to listen to the order add and cancel events and update the usage on each event.
Steps
Update currentUsage of the UniqueCoupon [using Mongo's inc operator]
Update the currentUsage & budget stats of the CouponCartRule
Any suggestions on making the code idempotent if there is downtime? If there is a failure in step 2, the listener would send the message to the DLQ and the count would be updated again, which is not desired.
One option I thought of is to track and record stats at the UniqueCoupon level and later aggregate at the SCR level [the aggregation operation will eventually become consistent when a subsequent coupon is used].
Since this code is invoked in real time, it should be efficient.
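For reference, this is roughly how I plan to do the atomic check in step 1 (a sketch with the MongoDB Java driver; it assumes maxUsage was already loaded from the CouponCartRule):

import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.Updates;
import org.bson.Document;

// Increment usage only while the cap has not been reached; a null
// return means maxUsage is already exhausted and the coupon should
// be rejected at checkout.
static boolean tryConsume(MongoCollection<Document> rules, int ruleId, int maxUsage) {
    Document updated = rules.findOneAndUpdate(
            Filters.and(Filters.eq("id", ruleId),
                        Filters.lt("currentUsage", maxUsage)),
            Updates.inc("currentUsage", 1));
    return updated != null;
}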
I ended up proposing the solution below, which is supported by our infra [RabbitMQ and MongoDB]. Here is the solution that was accepted after design review; hope this helps someone who is having similar issues.
The system receives order add / cancel events from Order Management.
If a coupon code was used in the order, we upsert a record into the DB keyed on the order ID.
Order add event: track couponCode, orderId, discount, couponRuleId, updated etc. Upsert based on orderId and set the couponUsed flag to "APPLIED".
Order cancel event: upsert based on orderId and set the couponUsed flag to "NOT_APPLIED".
Have a Mongo TTL on "coupon_usage_tracker" of 3 / 6 months.
Trigger a sync of the COUPON_RULE_SUMMARY and COUPON_SUMMARY dashboards based on the last update, passing the required data. The trigger-sync mechanism goes via MQ, since we can have automated retry on failure via the DLQ setup.
If the trigger sync fails for either COUPON_RULE_SUMMARY or COUPON_SUMMARY, it will be updated with the correct value on the next coupon code usage [eventual consistency].
The config collections will not have mutable state and will strictly hold configuration. This lets us cache the configs.
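The orderId-keyed upsert is what makes DLQ redelivery safe; roughly (a sketch with the MongoDB Java driver; method and variable names are illustrative, field names follow the samples below):

import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.UpdateOptions;
import com.mongodb.client.model.Updates;
import org.bson.Document;

// Keyed on the order id: redelivering the same order event simply
// rewrites the same tracker document instead of double counting.
static void recordUsage(MongoCollection<Document> tracker, long orderId,
                        int ruleId, String couponCode, String couponUsed) {
    tracker.updateOne(
            Filters.eq("order.id", orderId),
            Updates.combine(
                    Updates.set("ruleId", ruleId),
                    Updates.set("couponCode", couponCode),
                    Updates.set("couponUsed", couponUsed)), // "APPLIED" / "NOT_APPLIED"
            new UpdateOptions().upsert(true));
}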
//COUPON_USAGE_TRACKER
{
"ruleId": 948,
"couponCode": "TENOFF",
"couponId": 123,
"order": {
"created_at": "",
"udpated_at": "",
"member_id": 221930,
"member_order_count": 5,
"publicId": "4asdff-23ewea-232",
"id": 234123,
"total": {
"discount_code": "TENOFF",
"discount": "10",
"grand_total": 100,
"sub_total": 10
}
}
}
//COUPON_RULE_SUMMARY
{
"ruleId": 948,
"last_update_at": "",
"currentUsage": 100,
"currentBudget": 1000
}
//COUPON_SUMMARY
{
"ruleId": 948,
"couponId": 948,
"last_update_at": "",
"currentUsage": 100,
"currentBudget": 1000
}
Task at hand: consider the following model for a JSON response; every API response will conform to this shape, though obviously the data will vary. Let's call it ResponseModel:
{
"isErroneous": false,
"Message": "",
"Result": null,
"Data": {
"objId": 38,
"objName": "StackO",
"objDescription": "StackODesc",
"objOtherId": 0,
"objLocationId": 1
}
}
How can I deserialize this response regardless of what is in Data? Data could contain a single object of various types, e.g. a Dog or a Car. It could also contain a collection of Cars or Dogs, i.e. not just one as above.
It could also contain a car, the car's engine object, and the car's driver-seat object.
In short, the same envelope is always going to be present, but the value of Data can vary wildly. I want to deserialize this to some sort of Result.class for ALL possible scenarios; how do I best approach this? Setting up ResponseModel.class is easy for everything except the Data field.
Here is another example of something which could be returned:
{
    "isErroneous": false,
    "Message": "",
    "Result": null,
    "Data": [
        {
            "carId": 1,
            "carName": "car#1",
            "carDescription": "car#1",
            "carOtherId": 1
        },
        {
            "carId": 2,
            "carName": "car#2",
            "carDescription": "car#2",
            "carOtherId": 2
        },
        {
            "carId": 3,
            "carName": "car#3",
            "carDescription": "car#3",
            "carOtherId": 3
        }
    ]
}
As you can see, in the second example we are returning a list of cars, whereas in the first response we are returning just a single object. Am I trying to abstract this too much? Should I set up custom responses for each call of the API?
I need to write integration tests asserting that the deserialized object is equal to the one I expect every time.
You can check whether your JSON input contains "Data": { or "Data": [ and then parse it into a separate class accordingly.
But... maybe it's not the best way to do it.
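With Jackson, that branch is straightforward on the tree model (a sketch; Car is the hypothetical element type from the question, and error handling is elided):

import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.List;

// Parse the stable envelope first, then branch on the shape of "Data".
static Object parseData(String json) throws Exception {
    ObjectMapper mapper = new ObjectMapper();
    JsonNode data = mapper.readTree(json).get("Data");

    if (data.isArray()) {
        // "Data": [ ... ] -> a list of cars
        return mapper.convertValue(data, new TypeReference<List<Car>>() {});
    }
    // "Data": { ... } -> a single object
    return mapper.convertValue(data, Car.class);
}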