How can we keep the cache and cookies in a native app until a session completes (for example, booking tickets) when using Retrofit (or other REST clients)?
For example, while booking tickets online, the session is kept alive until the last step of the process. I need to keep my session alive in a native app until the last step of a similar process.
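Would attaching a cookie store to the OkHttp client behind Retrofit be the right direction? A minimal sketch of what I mean (the base URL is a placeholder, and JavaNetCookieJar comes from the okhttp-urlconnection module):
import java.net.CookieManager;
import java.net.CookiePolicy;

import okhttp3.JavaNetCookieJar;
import okhttp3.OkHttpClient;
import retrofit2.Retrofit;
import retrofit2.converter.gson.GsonConverterFactory;

public class ApiClient {

    public static Retrofit create() {
        //In-memory cookie store: keeps the server's session cookie (e.g. JSESSIONID)
        //for the lifetime of the app process, i.e. until the booking flow finishes.
        CookieManager cookieManager = new CookieManager();
        cookieManager.setCookiePolicy(CookiePolicy.ACCEPT_ALL);

        OkHttpClient client = new OkHttpClient.Builder()
                .cookieJar(new JavaNetCookieJar(cookieManager))
                .build();

        return new Retrofit.Builder()
                .baseUrl("https://example.com/api/")
                .client(client)
                .addConverterFactory(GsonConverterFactory.create())
                .build();
    }
}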
I am working on creating a crypto address tracker using Java as my backend language.
Features:
1. Users can add multiple addresses to track.
2. The app will broadcast alerts via Email/WhatsApp/Telegram when a 'tracked' wallet address receives or sends transactions.
3. Support for ERC20 token transactions.
4. Support for the Polygon & Ethereum chains.
Current Progress:
I am currently using the Spring Framework to build this functionality and a 3rd-party API for pulling transactions:
- Workers (cron jobs) pull transactions and send alerts.
- Transactions are saved in the DB up to a certain block, and the last processed block number is kept in the DB.
- Alerts are sent for transactions from that block up to the current block, using SendGrid & Twilio.
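Roughly, each worker looks like this (heavily simplified; ExplorerClient, BlockCursorRepository, AlertService, TrackedAddress and Tx are placeholders for my own types):
import java.util.List;

import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class TransactionPollerWorker {

    private final ExplorerClient explorerClient;       //3rd-party API wrapper
    private final BlockCursorRepository cursorRepo;     //last processed block per chain
    private final AlertService alertService;            //SendGrid / Twilio behind this

    public TransactionPollerWorker(ExplorerClient explorerClient,
                                   BlockCursorRepository cursorRepo,
                                   AlertService alertService) {
        this.explorerClient = explorerClient;
        this.cursorRepo = cursorRepo;
        this.alertService = alertService;
    }

    //Every minute: pull transactions from the last processed block up to the current
    //head, alert the owners of tracked addresses, then advance the cursor in the DB.
    @Scheduled(fixedDelay = 60_000)
    public void poll() {
        long fromBlock = cursorRepo.lastProcessedBlock("ethereum");
        long toBlock = explorerClient.latestBlock("ethereum");

        for (TrackedAddress address : cursorRepo.trackedAddresses()) {
            List<Tx> txs = explorerClient.transactions(address.value(), fromBlock, toBlock);
            txs.forEach(tx -> alertService.notify(address.owner(), tx));
        }
        cursorRepo.saveLastProcessedBlock("ethereum", toBlock);
    }
}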
Is there any better system design to consider?
I am quite new to Hibernate and have been learning about first-level caching. I have a concern about first-level cache consistency.
Imagine I have two separate web applications that can read from and write to the same database. Both applications use Hibernate. The first application contains the following code segment:
//First Application code
//Open the hibernate session
Session session = HibernateUtil.getSessionFactory().openSession();
session.beginTransaction();
//fetch the client entity from database first time
Client client = (Client) session.load(Client.class, new Integer(77869));
System.out.println(client.getName());
//execute some code which runs for several minutes
//fetch the same entity again - within this session it is served from the first-level cache
client = (Client) session.load(Client.class, new Integer(77869));
System.out.println(client.getName());
session.getTransaction().commit();
The second application consists of the following code.
//Second Application code
//Open the hibernate session
Session session = HibernateUtil.getSessionFactory().openSession();
session.beginTransaction();
//update the client's name directly in the database
String hql = "UPDATE Client SET name = :name WHERE id = :client_id";
Query query = session.createQuery(hql);
query.setParameter("name", "Kumar");
query.setParameter("client_id", 77869);
int result = query.executeUpdate();
System.out.println("Rows affected: " + result);
session.getTransaction().commit();
Let's say the first application opens its session at 10:00 AM and keeps that session object alive for 10 minutes. Meanwhile, at 10:01 AM, the second application updates the database (UPDATE client SET name = 'Kumar' WHERE id = 77869).
So the first-level cache in the first application is outdated after 10:01 AM. Am I right? If so, is there any way to avoid this scenario?
There is no implicit way for your first application to know about the underlying changes triggered by the second application.
One possible way to handle this situation is the following:
1) Once one of the applications does an update/insert, it saves a flag of some sort in the database indicating that the data has changed.
2) In the other application, right after you start a new session, check whether that flag is set and therefore whether the data has been altered.
If so, you need to set the CacheMode of the session accordingly:
session.setCacheMode(CacheMode.REFRESH);
This ensures that during this session queried entities will not be taken from the cache but from the DB (updating the cache in the process). Most likely this will not pick up all the changes in the second-level cache, so you would need to set that session attribute periodically.
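In code, that check could look roughly like this (hasDataChangedFlag() stands in for however you query the flag the other application sets):
//Open a new session and decide whether to bypass the cache for it
Session session = HibernateUtil.getSessionFactory().openSession();
session.beginTransaction();
if (hasDataChangedFlag(session)) {
    //entities read in this session come from the DB and refresh the cache
    session.setCacheMode(CacheMode.REFRESH);
}
Client client = (Client) session.load(Client.class, new Integer(77869));
System.out.println(client.getName());
session.getTransaction().commit();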
Keep in mind that second-level caching of anything other than dictionary entities that rarely change is tricky in terms of consistency.
Евгений Кравцов:
I am developing a small service with an HTTP backend and an Android app, and recently I have felt a lack of knowledge about such systems.
Case:
- The client places an order in the app and sends a request to the server.
- The server successfully receives the data and creates a database row for the order.
- After the database work completes, the backend tries to respond to the app with a 200 success code.
- The user has internet connection problems and cannot receive the server's response. The app gets a timeout exception and notifies the user that the order was not successful.
- After some time, the internet connection on the user's device is restored and he sends another request with the same order.
- The backend receives this again and creates a duplicate of the previous order.
So I end up with 2 orders in the database, but the user wanted to create only one.
Question: How can I prevent such behavior?
You should add a key to your table in the DB, for example a compound unique key on (user_id, type), so a specific user cannot make 2 orders with the same order type.
This will cause the second DB insert to fail.
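For example, with a JPA/Hibernate mapping the constraint could be declared like this (table and column names are illustrative); the retried insert then fails with a ConstraintViolationException that the backend can treat as "order already exists" instead of creating a duplicate row:
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Table;
import javax.persistence.UniqueConstraint;

//Illustrative mapping: at most one order per (user_id, order_type)
@Entity
@Table(name = "orders",
       uniqueConstraints = @UniqueConstraint(columnNames = {"user_id", "order_type"}))
public class Order {

    @Id
    @GeneratedValue
    private Long id;

    @Column(name = "user_id", nullable = false)
    private Long userId;

    @Column(name = "order_type", nullable = false)
    private String orderType;

    //getters and setters omitted
}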
What is a session store in the context of web applications/websites?
Is it anything more than a temporary store of session variables?
Typically the user's first request to the site establishes a session. The session has a key which is passed to the user as a cookie, so that with every subsequent request the same session is retrieved.
The session store holds information about that user that you don't want (or can't, due to the length limit of cookies) put in a cookie, for example the currently logged-in user ID or the contents of a shopping cart. This is usually some kind of serialized data structure, depending on the language/framework in use.
The reason you might implement the session store in an external database rather than within the local web server is to account for having multiple web servers in a pool; that way, if the user's first request goes to server A and the next goes to server B, your web app can still retrieve the same session data every time.
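As a concrete illustration with the Java Servlet API (a minimal sketch; the attribute names are just examples), the code only reads and writes session attributes, while the container's session store keeps the serialized data keyed by the session ID that travels in the cookie:
import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

public class CartServlet extends HttpServlet {

    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        //Creates the session on the first request and sends its ID back as a cookie
        //(typically JSESSIONID); later requests with that cookie get the same session.
        HttpSession session = req.getSession(true);

        //These attributes live in the session store, not in the cookie itself.
        session.setAttribute("userId", 42L);
        session.setAttribute("cartItems", new java.util.ArrayList<String>());

        resp.getWriter().println("Session ID: " + session.getId());
    }
}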
I am trying to address session fixation/hijacking/sidejacking on an ATG/JBoss/Tomcat site. It seems that by far the most common recommendations are:
Grant a new session to the user when they log in. This prevents the attacker from being able to predict the session ID of the victim. I tried this approach first, but I fear it may not work in my case.
Use a servlet filter to invalidate the session any time a session ID (SID) is passed in the URL. The filter additionally prevents URL rewriting from creating links with SIDs.
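Roughly what I mean by the filter in #2 (simplified; the response wrapping that blocks URL rewriting via encodeURL() is omitted):
import java.io.IOException;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

//If the session ID arrived via the URL (;jsessionid=...), treat it as untrusted
//and invalidate that session before the request is processed.
public class UrlSessionIdFilter implements Filter {

    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest httpRequest = (HttpServletRequest) request;
        if (httpRequest.isRequestedSessionIdFromURL()) {
            HttpSession session = httpRequest.getSession(false);
            if (session != null) {
                session.invalidate();
            }
        }
        chain.doFilter(request, response);
    }

    public void init(FilterConfig filterConfig) {
    }

    public void destroy() {
    }
}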
What are the pros and cons of #2? Some that I've thought of:
Pros:
This seems like broader protection than #1: #1 protects against malicious URLs being passed to the victim, while #2 protects against any means of acquiring SIDs (insecure wireless networks, access to the machine, etc.) - you can't just pass the SID you want to use as a request parameter!
Cons:
Session management will be shot for users without cookies enabled.
Normal users will be logged out if they click a link with a jsessionid specified, though I don't believe there will be any legitimate links like that in the system, due to the behavior of the filter.
#2 is there to stop session fixation.
You also need to take CSRF (aka "session riding") into consideration; there are well-known methods of preventing CSRF, such as per-request synchronizer tokens.
Finally, don't forget the most overlooked OWASP item, A9 - Insufficient Transport Layer Protection. This means that your session ID must be transmitted over HTTPS at all times. If you don't do this, someone can use a tool like Firesheep to grab the account.
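For example, on a Servlet 3.0+ container the session cookie can be marked Secure and HttpOnly programmatically (a minimal sketch; older containers would configure the equivalent in web.xml):
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.SessionCookieConfig;
import javax.servlet.annotation.WebListener;

//Marks the session cookie Secure (sent only over HTTPS) and HttpOnly
//(not readable from JavaScript).
@WebListener
public class SessionCookieHardening implements ServletContextListener {

    public void contextInitialized(ServletContextEvent sce) {
        SessionCookieConfig cookieConfig = sce.getServletContext().getSessionCookieConfig();
        cookieConfig.setSecure(true);
        cookieConfig.setHttpOnly(true);
    }

    public void contextDestroyed(ServletContextEvent sce) {
    }
}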
You could store a variable in the session that contains the user's IP, user agent, etc., or a hash of them, and check it on every request, so that if the session is hijacked the hijackers would have to fake those values as well.
Not perfect, but it helps.
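A rough sketch of that check, as a fragment that would run on every request (request and response are the usual HttpServletRequest/HttpServletResponse; what exactly you hash is up to you):
//Bind a fingerprint of the client to the session on the first request,
//then reject the session if the fingerprint changes later.
HttpSession session = request.getSession(true);
int fingerprint = java.util.Objects.hash(request.getRemoteAddr(),
                                         request.getHeader("User-Agent"));
Integer stored = (Integer) session.getAttribute("fingerprint");
if (stored == null) {
    session.setAttribute("fingerprint", fingerprint);
} else if (stored.intValue() != fingerprint) {
    session.invalidate();
    response.sendError(HttpServletResponse.SC_UNAUTHORIZED);
    return;
}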