Parse.com preventing player from rating level multiple times - java

I am using parse.com cloud storage to implement level sharing/downloading and rating for the built-in level editor in my game. Players can build and test their own levels, and later share them with other players. This is how I upload a level to the Parse.com cloud storage:
ParseObject testObject = new ParseObject("Levels");
testObject.put("file", new ParseFile(name + ".lvl", levelString.getBytes()));
testObject.put("author", authorName);
testObject.put("email", authorEmail);
testObject.saveInBackground();
It works fine, but I also want to let players rate downloaded levels (let's say 1-5 stars). That could be simple: create two new fields, a ratings total and a ratings count, so every time someone votes I add the given rating to the total and increment the count.
The problem is: how do I prevent a player from rating a particular level multiple times? Thanks.

I have thought about this for a project of mine. In the end you will need two data points.
You need to track the counts per rank on the object (Level in your case)
You need to track UserLevelRating: at minimum a reference to the user, a reference to the target (Level), and the rating given (if you will let people change ratings)
Depending on how you want to implement it, whether to prevent rating something twice or to allow people to change a rating they have already given, you would run a query for the current user and the Level. If a record is returned, they have already voted, so prevent them from voting again.
You could add some cloud code using before-save or after-save logic to handle other things, such as changing the vote and updating the counts on the target (Level).
Here's a sample of the logic I would use for a simple single vote system without changing votes:
Test for existence of UserLevelRating record, if it exists prevent voting
Saving vote, include User=current user, Level=selected level, Rating=stars given
Cloud code, in after-save of UserLevelRating, looks at Level property, loads the level, calls increment on the property for the rating (e.g. if Rating=3, increment("Stars3") would be called)
Anytime you load a Level object you would have counts for each rating, and could produce the average.
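The counting scheme above can be sketched in plain Java. The per-star fields ("Stars1".."Stars5") are the answer's example names, and the class here is illustrative, not part of the Parse SDK:

```java
public class RatingSummary {
    // starCounts[i] = number of votes that gave (i + 1) stars,
    // i.e. the Stars1..Stars5 counters loaded from the Level object.
    public static double average(int[] starCounts) {
        long votes = 0;
        long total = 0;
        for (int i = 0; i < starCounts.length; i++) {
            votes += starCounts[i];
            total += (long) (i + 1) * starCounts[i];
        }
        return votes == 0 ? 0.0 : (double) total / votes;
    }

    public static void main(String[] args) {
        // e.g. 2 one-star, 0 two-star, 3 three-star, 1 four-star, 4 five-star votes
        int[] counts = {2, 0, 3, 1, 4};
        System.out.println(average(counts)); // (2 + 9 + 4 + 20) / 10 = 3.5
    }
}
```

Because only counters are stored on the Level, the average never requires loading the individual UserLevelRating rows.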

Related

Storing a list of entrants in a SQL table and adding a leaderboard

I'm making a java web application for my golf society.
So far I've made all the user accounts and roles. I've made an events table in my database where a new event can be created. An event contains id, name, course, date, cost and winner. Winner is null until the event has finished, when a winner is entered.
I want to move on now and allow members to enter an event via the app, and then show a list of current entrants for each event. But I am stuck on how to go about this. I've thought about having a column in the DB for entrants, which would hold an array of the people who have entered, but cannot find how to do it this way.
I have searched online for days and found nothing that helps.
Secondly once this is complete, the members who have entered would input their score for the round and this would create a leaderboard for each event.
I know this is possible, as I use an app at my club which does exactly the same thing, but I have spent days searching the internet and have come to a dead end.
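No answer is attached here, but the usual relational approach is a join table (event_id, member_id, score) instead of an array column; the leaderboard is then just one event's rows sorted by score. A minimal in-memory Java sketch of that shape (class and field names are made up for illustration, not from the question):

```java
import java.util.Comparator;
import java.util.List;

public class EventEntries {
    // One row of a hypothetical event_entrants join table: who entered which
    // event, and their score (null until the round is played).
    record Entry(int eventId, String member, Integer score) {}

    // Leaderboard for one event: only entries with a recorded score,
    // lowest score first (golf).
    static List<Entry> leaderboard(List<Entry> entries, int eventId) {
        return entries.stream()
                .filter(e -> e.eventId() == eventId && e.score() != null)
                .sorted(Comparator.comparingInt(Entry::score))
                .toList();
    }

    public static void main(String[] args) {
        List<Entry> entries = List.of(
                new Entry(1, "alice", 72),
                new Entry(1, "bob", 68),
                new Entry(2, "carol", 80),
                new Entry(1, "dave", null)); // entered, no score yet
        for (Entry e : leaderboard(entries, 1)) {
            System.out.println(e.member() + " " + e.score()); // bob 68, then alice 72
        }
    }
}
```

In the database the same idea is one extra table with foreign keys to events and members; "entrants for event X" and "leaderboard for event X" are then simple queries rather than array manipulation.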

Java in-memory cache with subcaches for each client

I have a ConcurrentMap which is my in-memory cache/database for my web app.
There I have stored my entity with the id as key.
This is basically what my entity looks like:
public class MyEntity {
private int id;
private String name;
private Date start;
private Date end;
...
}
Now I have multiple users who request different data from my map.
User1 has a filter on the start date, so for example he only gets items 1, 2 and 3 of my map. User2 also has a filter and only gets items 2, 3, 4 and 5 of the map.
So they only get a part of the complete map. I do the filtering on my server because the map is too big to send in full, and I need to check other attributes as well.
My problem now is that the entries in the map can be updated / removed / added from some other API calls and I want to live update the entries on the user side.
For now I am sending a notification to the users that the map has been updated and then every user loads the complete data which the user needs.
For example, item 8 has been updated. User1 gets a notification and loads items 1, 2 and 3 again, even though the update only affected item 8. In this case the reload is unnecessary for User1 because he doesn't need item 8.
Now I am searching for a good solution so that a user only receives the updates he actually needs.
One way I was thinking about was to temporarily store all the item ids a user requested. On an update notification I could check whether the updated item is in that list, and send the updated item only to the users whose lists contain it.
But I am concerned about the memory usage this would create if I have a lot of users, since each user's list of item ids can also be very big.
What would be a good solution to send only the added / updated / removed item to the user and only if the user needs that item?
So something like observing only a part of the base map (cache) but with a notification for every action like adding, updating and removing item.
Basically there is no "Silver Bullet". A possible solution depends on the usage patterns, your resources and requirements.
Questions to ask:
How big is the total dataset, now and in the future?
How big is the displayed dataset?
How many users with searches?
Is it a business application with a closed user group or a public web application that needs to scale?
What is the update frequency and what kind of updates are common?
How many different search patterns are present?
The biggest question of all:
Is it really needed?
Looking at a bunch of search interfaces, the user expects an update only when there is an interaction. If it is similar to a search, users will not expect an instant update. An instant update can be useful and "innovative", but you spend a lot of engineering effort on that problem.
So before jumping onto the engineering task, make sure that the benefit will justify the costs. Maybe check on these alternative approaches first:
Don't do it. Only update on a user interaction. Maybe add a reload button.
Only notify the user that an update occurred, and update only when the user hits "reload". This solves the problem that the user may not have the browser tab in focus, in which case the transfer is wasted.
But I am concerned about the memory usage this would create if I have a lot of users, since each user's list of item ids can also be very big.
In our applications we observe that there are not so many different search patterns / filters. Let's say you might have 1000 user sessions, but only 20 different popular searches. If that is true, you can do:
For each search filter, you can store a hash value of the result.
If you do the update of the main data, you run the searches again and only send an update to the user if the hash value changed.
If it is a public web application, you should optimize more. E.g. don't send an update when the application has no focus.
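The hash-per-search idea above can be sketched in plain Java, with a trivial predicate standing in for the real search filter (names are illustrative):

```java
import java.util.Collection;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

public class SearchHashNotifier {
    // Last known result hash per named filter; only ~20 popular searches
    // means this map stays small regardless of the number of sessions.
    private final Map<String, Integer> lastHash = new HashMap<>();

    // Re-runs the filter and reports whether its result set changed
    // since the previous run. Only then do subscribed users get pushed.
    public boolean resultChanged(String filterName, Collection<Integer> ids,
                                 Predicate<Integer> filter) {
        List<Integer> result = ids.stream().filter(filter).sorted().toList();
        int hash = result.hashCode(); // sorted -> stable order -> stable hash
        Integer previous = lastHash.put(filterName, hash);
        return previous == null || previous != hash;
    }
}
```

After any update to the main data, run each registered filter once and notify only the sessions whose filter's hash changed, instead of notifying every user.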

Last item in the cart

I am learning microservices and trying to design an e-commerce website. I can't figure out how big shopping sites take care of the last item in the cart problem.
For example, I selected an item from Amazon which had just a single unit available in stock. I logged in from two different accounts and placed the item in the cart in both. I even reached the payment page from both accounts, and the site didn't restrict me anywhere by saying the item was no longer available. I am not sure how Amazon handles it after the payment page, when payment from both accounts is in progress.
A few solutions come to my mind:
Accept payment from both accounts and later cancel the transaction for whichever one paid later. This is not good practice, though, as it results in a bad customer experience.
Keep a few items in reserve and use them in case of overbooking.
Forget what Amazon is doing, and implement quantity checks in the Order service via REST calls to the Item service at every stage of the order. But these checks can fail when a lot of people are ordering the same item, e.g. in flash sales.
Please share if you have worked on a similar problem and solved it, even with some limitations. If I need to add more detail, let me know.
I cannot answer how Amazon does it, nor do I think anyone could on a public forum, but I can tell you how I think this could be managed.
You have to take a lock on your inventory if you want to map inventory to an order precisely. If you intend to do that, the question is where you take the lock: when the item is added to the cart, when the user goes to payment, or when the payment is made. The problem with a lock is that it will make your system slow.
So that is something you should avoid.
The rest of the options you have already covered in your question, and it boils down to tradeoffs.
With the first option, user experience will suffer and you also incur the cost of the extra transaction.
The second option asks you to be ready to undersell or oversell.
When you keep reserves, you are basically saying that you will undersell. This can also backfire: say you decide to reserve 5 items but get 20 concurrent requests for checkout and payment; you are back to square one. But it can help in most scenarios, given you are willing to take the hit.
Doing an inventory check at checkout gives you better resolution on inventory, but it will not help when you literally have the last item in inventory and 10 people checking out on it. If the read calls for even two such requests coincide, you will promise both of them the inventory and be back to square one.
So what I do in such scenarios is this:
My inventory is tracked not as a raw number but as an enum: critical, low, medium, high, very high.
Depending on some analytics, we configure the inventory check per level. For high and very high we do no check and just book the item. For critical we take a lock (not exactly a DB lock; we reserve the inventory). For low and medium we check the inventory and proceed if we have enough. All these thresholds are configurable and help us mitigate the scenarios we see.
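A minimal Java sketch of the enum-based check described above. The thresholds and action names are invented example values, since the answer says they are configurable:

```java
public class InventoryPolicy {
    enum Level { CRITICAL, LOW, MEDIUM, HIGH, VERY_HIGH }

    // Map a raw unit count to a level. These cutoffs are illustrative;
    // in practice they come from configuration / analytics.
    static Level levelFor(int unitsLeft) {
        if (unitsLeft <= 2) return Level.CRITICAL;
        if (unitsLeft <= 10) return Level.LOW;
        if (unitsLeft <= 50) return Level.MEDIUM;
        if (unitsLeft <= 500) return Level.HIGH;
        return Level.VERY_HIGH;
    }

    // Checkout behavior per level, mirroring the answer: no check for
    // HIGH/VERY_HIGH, count check for LOW/MEDIUM, reserve for CRITICAL.
    static String checkoutAction(Level level) {
        return switch (level) {
            case HIGH, VERY_HIGH -> "book";
            case LOW, MEDIUM -> "check-then-book";
            case CRITICAL -> "reserve";
        };
    }
}
```

The point of the enum is that the expensive, lock-like path is taken only for the small fraction of items that are actually near stock-out.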
Another thing we are trying is to distribute inventory to inventory brokers and assign each broker to a set of services that see only that broker's inventory. Even if we reserve inventory on one broker, the others can continue selling freely. These brokers regularly update the inventory master about the status of their inventory. It works like this: the inventory master has 50 items and distributes 5 each to ten brokers. After 10 minutes they come back; if they need more inventory they ask for it, and if they have leftover (e.g. in case of failure) they return it to the master to be assigned to others.
The above approach will not resolve the issue precisely, but it gives you a certain degree of freedom in how you manage the inventory.
Consider doing:
On the payment page, re-check whether the product is still available. This can be a simple HTTP GET.
If the GET call is too slow for you, consider caching recently added products in an in-memory database (e.g. Redis). If the first user successfully completes payment, decrease the counter for that product id in Redis; before proceeding with payment for the second user, check the counter for that product id in Redis.
(Bonus: Redis offers atomic operations, so you can handle the race condition in ordering the product as well.)
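The counter approach depends on the decrement being atomic. The same pattern can be shown with a plain Java AtomicInteger standing in for the Redis counter (an illustrative stand-in, not a Redis client):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class StockCounter {
    private final AtomicInteger remaining;

    public StockCounter(int initialStock) {
        this.remaining = new AtomicInteger(initialStock);
    }

    // Atomically claim one unit. When only one unit is left and two
    // buyers race, exactly one compareAndSet succeeds; the loser sees
    // the stock gone before payment completes (same idea as an atomic
    // Redis decrement guarded by a >= 0 check).
    public boolean tryClaim() {
        while (true) {
            int current = remaining.get();
            if (current <= 0) return false; // sold out
            if (remaining.compareAndSet(current, current - 1)) return true;
            // lost the race to another thread: re-read and retry
        }
    }
}
```

A claim that later fails (payment declined) would be handed back with a matching atomic increment.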

Checking if a Set of items exist in database quickly

I have an external service which I'm grabbing a list of items from, and persisting locally a relationship between those items and a user. I feed that external service a name, and get back the associated items with that name. I am choosing to persist them locally because I'd like to keep my own attributes about those external items once they've been discovered by my application. The items themselves are pretty static objects, but the total number of them are unknown to me, and the only time I learn about new ones is if a new user has an association with them on the external service.
When I get a list of them back from the external service, I want to check whether each one exists in my database first and use that object; if it doesn't, I need to add it so I can set my own attributes and keep the association to my user.
Right now I have the following (pseudocode, since it's broken into service layers etc):
Set<ExternalItem> items = externalService.getItemsForUser(user.name);
for (ExternalItem externalItem : items) {
    // Look the item up by id; the external id matches our own.
    Item dbItem = sessionFactory.getCurrentSession().get(Item.class, externalItem.id);
    if (dbItem == null) {
        // Not in database, create it.
        dbItem = mapToItem(externalItem);
    }
    user.addItem(dbItem);
}
sessionFactory.getCurrentSession().save(user); // Saves the associated Items also.
The time this operation is taking is around 16 seconds for approximately 500 external items. The remote operation is around 1 second of that, and the save is negligible also. The drain that I'm noticing comes from the numerous session.get(Item.class,item.id) calls I'm doing.
Is there a better way to check for an existing Item in my database than this, given that I get a Set back from my external service?
Note: the external item's id is reliably the same as mine, and a single id will always represent the same external item.
I would definitely recommend a native query, as recommended in the comments.
I would not bother to chunk them, though, given the numbers you are talking about. Postgres should be able to handle an IN clause with 500 elements with no problems. I have had programmatically generated queries with many more items than that which performed fine.
This way you also have only one round trip, which, assuming the proper indexes are in place, really should complete in sub-second time.
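The one-round-trip shape can be sketched like this: a single IN query returns the ids that already exist (stubbed here as a plain Set, since the exact Hibernate call, something along the lines of an HQL "where id in (:ids)" with setParameterList, depends on your version), and the missing ones are computed locally:

```java
import java.util.HashSet;
import java.util.Set;

public class MissingItemCheck {
    // existingIds: result of ONE "select id from Item where id in (:ids)"
    // round trip. externalIds: ids of the ~500 items the external service
    // returned. The difference is what still needs to be created.
    static Set<Integer> missingIds(Set<Integer> existingIds, Set<Integer> externalIds) {
        Set<Integer> missing = new HashSet<>(externalIds);
        missing.removeAll(existingIds);
        return missing;
    }
}
```

Instead of 500 individual session.get() calls, the loop then only creates the items in the missing set and attaches the rest from the single query's results.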

Designing a count based access control

I would like to get some advice on designing count-based access control. For example, I want to restrict the number of users a customer can create in my system based on their account: by default a customer can create 2 users, but if they upgrade their account they get to create 5 users, and so on.
There are a few more features that I need to restrict on a similar basis.
The application follows a generic model so every feature exposed has a backing table and we have a class which handles the CRUD operation on that table. Also the application runs on multiple nodes and has a distributed cache.
The approach that I am taking to implement this is as follows
- I have a new table which captures the functionality to control and the allowed limit (stored per customer).
- I intercept the create method for all tables and check whether the table in question needs access control applied. If so, I fetch the count of created entities and compare it against the limit to decide whether to allow the creation.
- I am using the database to handle synchronization in case of concurrent requests. After the create method is called, I update the table using the where clause where ( count_column + 1 ) = #countInMemory#, i.e. the update succeeds only if the value stored in the DB plus 1 equals the value in memory. This ensures that even if two threads attempt a create at the same time, only one of them successfully updates; that thread wins and the other is rolled back. This way I do not need to synchronize any code in the application.
I would like to know if there is any other / better way of doing this. My application runs on Oracle and MySQL DB.
Thanks for the help.
When you roll back, do you retry (after fetching the new user count) or do you fail? I recommend the former, assuming that the new fetched user count would permit another user.
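The retry-on-conflict recommendation looks roughly like this, with the optimistic where-clause update stood in for by a compareAndSet (class and method names are illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CountedCreate {
    private final AtomicInteger userCount = new AtomicInteger(0);
    private final int limit;

    public CountedCreate(int limit) {
        this.limit = limit;
    }

    // Mirrors the answer's optimistic scheme: read the count, attempt the
    // conditional increment ("update ... where count = current"), and on
    // conflict re-fetch and retry instead of failing the request.
    public boolean tryCreateUser() {
        while (true) {
            int current = userCount.get();
            if (current >= limit) return false; // limit genuinely reached
            if (userCount.compareAndSet(current, current + 1)) return true;
            // another thread's create won the race: retry with fresh count
        }
    }
}
```

The retry only fails permanently when the re-fetched count shows the limit is truly exhausted, which is the behavior recommended above.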
I've dealt with a similar system recently, and a few things to consider:
Do you want CustomerA to be able to transfer their users to CustomerB? (This assumes customers are not independent; for example, in our system CustomerA might be an IT manager and CustomerB an accounting manager at the same company, and when one of CustomerA's employees moves to accounting, he wants this reflected in CustomerB's account.)
What happens to a customer's users when the customer is deleted? (In our case another customer/manager would need to adopt them, or else they would be deleted.)
How are you storing the customer's user limit: in a separate table (e.g. a customer has type "Level2", and the customer-type table says "Level2" customers can create 5 users), in the customer's row (more error-prone, but allows a per-customer override of the max user count), or a combination (a type column that says they can have 5 users, plus an override column granting an additional 3)?
But that's beside the point. Your DB synchronization is fine.
