I'm currently implementing a REST web service using CouchDB and RESTlet. The RESTlet layer is mainly for authentication and some minor filtering of the JSON data served by CouchDB:
Clients <= HTTP => [ RESTlet <= HTTP => CouchDB ]
I'm also using CouchDB to store user login data, because I don't want to add an additional database server for that purpose. Thus, each request to my service causes two CouchDB requests conducted by RESTlet (auth data + "real" request). In order to keep the service as efficient as possible, I want to reduce the number of requests, in this case the redundant requests for login data.
My idea now is to provide a cache (e.g. an LRU cache via LinkedHashMap) within my RESTlet application that caches login data, because HTTP caching will probably not be enough. But how do I invalidate the cached data once a user changes their password, for instance? Thanks to REST, the application might run on several servers in parallel, and I don't want to create a central instance just to cache login data.
Currently, I save requested auth data in the cache and try to authenticate new requests against it. If authentication fails or no entry is available, I dispatch a GET request to my CouchDB storage in order to obtain the actual auth data.
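To illustrate, here is a minimal sketch of the kind of LRU cache I mean, using LinkedHashMap in access order with a size bound and a crude per-entry TTL (the class and field names are illustrative only):

import java.util.LinkedHashMap;
import java.util.Map;

// Simple bounded LRU cache for auth data, with a time-to-live per entry.
public class AuthCache {

    private static final int MAX_ENTRIES = 1000;
    private static final long TTL_MILLIS = 5 * 60 * 1000; // 5 minutes

    private static class Entry {
        final String passwordHash;
        final long storedAt;
        Entry(String passwordHash) {
            this.passwordHash = passwordHash;
            this.storedAt = System.currentTimeMillis();
        }
    }

    // accessOrder=true turns the LinkedHashMap into an LRU structure.
    private final Map<String, Entry> cache =
        new LinkedHashMap<String, Entry>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, Entry> eldest) {
                return size() > MAX_ENTRIES;
            }
        };

    public synchronized String get(String username) {
        Entry e = cache.get(username);
        if (e == null || System.currentTimeMillis() - e.storedAt > TTL_MILLIS) {
            cache.remove(username); // missing or expired: forces a fresh CouchDB lookup
            return null;
        }
        return e.passwordHash;
    }

    public synchronized void put(String username, String passwordHash) {
        cache.put(username, new Entry(passwordHash));
    }
}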
So in the worst case, users who have changed their data might still be able to log in with their old credentials for a while. How can I deal with that?
Or what is a good strategy to keep the cache(s) up-to-date in general?
Thanks in advance.
To me it looks like you've outgrown a hand-rolled cache and should use a "professional" cache solution (e.g. EHCache). Distributed caches allow data replication and invalidation among different nodes, so your problem is essentially already solved.
A distributed in-memory cache like memcached might be what you are looking for. You can configure object age, cache size and also expose callbacks to remove specific objects from the cache (like when the information is stale).
I have two WAR files, war1 and war2.
When I log in to the application, the session is created in war1, and when I then navigate to war2, I need the same session data there.
I tried crossContext=true in the server's context.xml; with that I can access the data by storing it in the ServletContext.
But the issue is that once I log in from Chrome, the session data is stored in the ServletContext and stays there as long as the application is running.
If I open the same URL in another browser such as IE, I can also read that ServletContext data, so instead of being taken to the login page, the corresponding screen is opened.
How can I overcome this issue in Java?
Is there any way to detect browser switching or the browser's incognito mode in Java?
Note: I am using Tomcat.
I have never dealt with your exact configuration problem, but even if you can make this work on a single Tomcat instance, you might have problems should your two web applications ever be distributed across multiple Tomcat instances.
So, I am going to suggest that you actually use a database to store state which needs to be passed between the two applications in a safe and reliable way. Note that the database approach also scales nicely in a distributed environment, so long as you have a single logical database.
While session replication can indeed be done in Tomcat (see here), I really suggest avoiding this type of issue by eliminating the server-side session altogether.
Session replication was a somewhat common approach 10-15 years ago, but nowadays, with many servers running in parallel to serve user requests and with elastic clusters, it is not good enough, simply because it doesn't scale well.
There are many ways to achieve what you want, though:
Use a shared database to store the session information. Add a session id to the response and require the client to pass this id back in all subsequent requests during the session. Then query the database by this id and retrieve the session information.
This solution also doesn't really scale well, but you can shard the session information if the database permits it...
Use Redis/Aerospike to save the session information of the currently connected user. This is somewhat like the DB approach, but since Redis runs in memory it will be much faster. This approach can also be used in conjunction with option 1, with Redis acting as an in-memory cache.
Encrypt the session information, or even just sign it cryptographically, and send it back to the client. The client then has to supply this information along with each request, without knowing which server will actually serve that request (see the sketch below).
Without delving into cryptography, encryption is used if you don't want the client to see the session information (even though it is that user's own information), and a signature is used to prevent tampering with the data before it is sent back to the server.
The data can be supplied from client to server via a header or a cookie, for instance.
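To make option 3 concrete, here is a minimal sketch of the signing variant using the standard javax.crypto HMAC API (the key handling and token layout are simplified assumptions, not a production design):

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Signs and verifies a session payload so the client cannot tamper with it.
public class SessionSigner {

    private final SecretKeySpec key;

    public SessionSigner(byte[] secret) {
        this.key = new SecretKeySpec(secret, "HmacSHA256");
    }

    // Returns "base64(payload).base64(signature)", suitable for a cookie or header value.
    public String sign(String payload) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(key);
        byte[] sig = mac.doFinal(payload.getBytes(StandardCharsets.UTF_8));
        return Base64.getUrlEncoder().encodeToString(payload.getBytes(StandardCharsets.UTF_8))
                + "." + Base64.getUrlEncoder().encodeToString(sig);
    }

    // Verifies the signature and returns the payload, or null if the token was tampered with.
    public String verify(String token) throws Exception {
        String[] parts = token.split("\\.");
        if (parts.length != 2) return null;
        String payload = new String(Base64.getUrlDecoder().decode(parts[0]), StandardCharsets.UTF_8);
        return sign(payload).equals(token) ? payload : null;
    }
}

In a real implementation you would use a constant-time comparison (e.g. MessageDigest.isEqual) and more likely a standard token format such as JWT rather than rolling your own.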
I know what I am asking is somewhat unusual. There is a web application (whose source code we don't have access to), and we want to expose a few of its features as web services.
I was thinking of using something like Selenium WebDriver, so I can simulate web clicks on the application according to the web service request.
I want to know whether this is a good approach, or whether there is a better solution or pattern for doing this.
I should mention that the application is written using Java, Spring MVC (it is not an SPA) and Spring Security, and there is a CAS server providing SSO.
There are multiple ways to implement this. In my opinion Selenium/PhantomJS is not the best option: if the web app is reasonably designed, you can interact with it using just the HTML it serves, or even some API, rather than loading all the CSS and executing the asynchronous JavaScript requests. Since your page is not an SPA, it's quite likely that an "API" already exists in the form of GET/POST requests, and you might be lucky enough that there's no CSRF protection.
First of all, you need to solve authentication against the CAS. There are multiple types of authentication in OAuth, but you should end up with an API token that grants you access to the application. This token should be added as an HTTP header or cookie on every single request. Ideally this token shouldn't expire; otherwise you'll need to implement re-authentication logic in your app.
Once the authentication part is resolved, you'll need quite a lot of patience: open the target website with the web inspector of your preferred browser, go to the Network panel, and execute the actions that you want to run programmatically. There you'll find each request with all its headers and content, as well as the response.
That's what you need to code. There are plenty of libraries to achieve that in Java. You can have a look at Jsoup if you need to parse HTML, but to run plain GET/POST requests, go for RestTemplate (in Spring) or the JAX-RS/Jersey 2 Client.
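To make that concrete, a rough sketch of replaying such a request with RestTemplate (the URL, header names and token are placeholders, not values from any real application):

import org.springframework.http.HttpEntity;
import org.springframework.http.HttpHeaders;
import org.springframework.http.HttpMethod;
import org.springframework.http.ResponseEntity;
import org.springframework.web.client.RestTemplate;

public class LegacyAppClient {

    private final RestTemplate restTemplate = new RestTemplate();

    // Replays a GET captured in the browser's Network panel, attaching the auth token.
    public String fetchOrders(String authToken) {
        HttpHeaders headers = new HttpHeaders();
        headers.set("Cookie", "SESSION=" + authToken);      // or an Authorization header,
        headers.set("X-Requested-With", "XMLHttpRequest");  // depending on what the inspector shows

        HttpEntity<Void> request = new HttpEntity<>(headers);
        ResponseEntity<String> response = restTemplate.exchange(
                "https://legacy.example.com/orders/list",   // placeholder URL
                HttpMethod.GET, request, String.class);
        return response.getBody();
    }
}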
You might consider implementing a cache layer to increase performance if the result of a query is stable over time, or if you can assume that within, say, 5 minutes the response to the same query will be identical.
You can create your app in your favourite language/framework. I'd recommend starting with Spring Boot + MVC + DevTools. That contains all you need, plus Jsoup if you have to parse some HTML. Later on you can add a cache provider if needed.
We do something similar to access web banking on behalf of a user, scrape his account data and obtain a credit score. In most cases, we have managed to reverse-engineer mobile apps and sniff traffic to use undocumented APIs. In others, we have to fall back to web scraping.
Broadly speaking, there are two types of applications you might scrape:
Data is essentially the same for any user, like product listings in Amazon
Data is specific to each user, like in a banking app.
In the first case, you could have your scraper run continuously and populate a local database, then use your local data to provide the web service. In the latter case, you cannot do that and need to scrape the site on each user request.
I understand from your explanation that you are in this latter case.
When web scraping, you can run into really difficult web apps:
Some may require you to send data from previous requests to the next
Others render most data on the client with JavaScript
If either of these applies in your case, Selenium will make your implementation easier, though not very performant.
Implementing the first case without Selenium will require lots of trial and error to get things working, because you will be simulating the requests and will need to know exactly what data the server expects from the client. If you use Selenium, on the other hand, you will be executing the same interactions you would in the browser and hence sending the expected data.
Implementing the second case requires your scraper to support JavaScript. AFAIK the best support is provided by Selenium. HtmlUnit claims to provide fair support, and I think Jsoup provides no JavaScript support at all.
Finally, if your solution takes too much time, you can mitigate the problem by providing your web service with a notification mechanism, similar to webhooks or REST hooks:
A client of your web service makes a request for data, providing a URI on which they would like to be notified when the results are ready.
Your service responds immediately with an id for the request and starts scraping the necessary info in the background.
If you use a skinny-payload model, then when the scraping is done you store the response in your data store under an id identifying the original request. This response is exposed as a resource.
You then execute an HTTP POST to the URI provided by the client, with the URI of the response resource in the request body.
The client can now GET the response resource, and because the request and response share the same id, the client can correlate the two.
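A minimal Spring-flavoured sketch of that notification flow (the endpoint paths, the scrape/store helpers and the callback payload are made up for illustration; Map.of requires Java 9+):

import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestTemplate;

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.CompletableFuture;

@RestController
public class ScrapeController {

    private final RestTemplate restTemplate = new RestTemplate();

    // The client asks for data and provides a callback URI.
    @PostMapping("/scrape-requests")
    public ResponseEntity<Map<String, String>> submit(@RequestParam String query,
                                                      @RequestParam String callbackUri) {
        String requestId = UUID.randomUUID().toString();

        // Scrape in the background, then notify the client's callback URI.
        CompletableFuture.runAsync(() -> {
            String result = scrape(query);                 // long-running scraping step
            storeResponse(requestId, result);              // exposed as /scrape-responses/{id}
            restTemplate.postForLocation(callbackUri,
                    Map.of("responseUri", "/scrape-responses/" + requestId));
        });

        return ResponseEntity.accepted().body(Map.of("requestId", requestId));
    }

    // Placeholder implementations, assumed to exist elsewhere.
    private String scrape(String query) { return "..."; }
    private void storeResponse(String id, String result) { /* persist to the data store */ }
}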
Selenium isn't the best way to consume web services. Selenium is primarily an automation tool, largely used for testing applications.
Assuming the services are already developed, the first thing we need to do is authenticate the user's request.
This can be done by adding an HTTP header with the key "Authorization" and the value "Basic " + Base64Encode(username + ":" + password).
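For example, a small sketch of building that header value in Java (the credentials are placeholders):

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuth {
    public static void main(String[] args) {
        String username = "alice";          // placeholder credentials
        String password = "s3cret";

        String credentials = username + ":" + password;
        String headerValue = "Basic " + Base64.getEncoder()
                .encodeToString(credentials.getBytes(StandardCharsets.UTF_8));

        // Sent as: Authorization: Basic YWxpY2U6czNjcmV0
        System.out.println("Authorization: " + headerValue);
    }
}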
If the user is valid (the login credentials match the credentials stored on the server), then generate a unique token, store the token on the server mapped to the user id, and set the same token in the response header or create a cookie containing the token.
By doing this we can avoid validating credentials on subsequent requests from the same user, by just looking for the token in the request header or cookie.
If the services are designed to check the login every time, the "Authorization" header needs to be set on every request.
I think using a WebDriver is a lot of overhead, but it depends on what you really want to achieve. With the info you provided I would rather go with a RestTemplate implementation sending the appropriate HTTP messages to the existing web app, wrap it in a nice @Service layer, and build your web service (REST or SOAP) on top of it.
The authentication is a matter of configuration: you can pack this into a microservice with @EnableOAuth2Sso and, thanks to Spring Boot, your RestTemplate bean will handle the underlying auth part for you.
Maybe overkill... but RPA? http://windowsitpro.com/scripting/review-automation-anywhere-enterprise
I have 4 separate software systems, implemented separately using Java EE, Spring, Hibernate, etc. I want to integrate all of them and build a master application, with a single login as well. Currently they have their own databases, and I want a single shared database too, because they have some common information.
What is the best method to achieve this with minimal changes to the currently implemented systems?
Do I have to implement a new service layer (e.g. using JAX-RS) on top of the new shared database that provides all DB access services, along with business logic, to the above software systems?
For the Database:
Spring/Hibernate applications are set up to connect to one database by default. If you want to connect to multiple databases (each app's own DB plus the common DB), you will have to take care of the database objects (JDBC connections + pools + lifecycle/transaction management + other DB initialisation) yourself; a minimal sketch of such a setup follows below.
In my opinion, handling DB connection and lifecycle initialisation yourself can be a huge pain and will take your focus away from solving real business cases. I would suggest using a single DB for the applications if possible. Most databases allow you to use file-per-table storage and even distribute the table files across multiple machines/servers (as an optimisation).
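If you do end up connecting each application to both its own DB and the shared one, a minimal Spring Java-config sketch might look like this (URLs, credentials and bean names are placeholders, and a real setup would add connection pooling and a transaction manager per DataSource):

import javax.sql.DataSource;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import org.springframework.jdbc.datasource.DriverManagerDataSource;

@Configuration
public class DataSourceConfig {

    // The application's own database (primary, used by default).
    @Bean
    @Primary
    public DataSource appDataSource() {
        DriverManagerDataSource ds = new DriverManagerDataSource();
        ds.setDriverClassName("com.mysql.jdbc.Driver");
        ds.setUrl("jdbc:mysql://localhost:3306/app1");   // placeholder
        ds.setUsername("app1");
        ds.setPassword("secret");
        return ds;
    }

    // The shared database holding the common data (users, login info, ...).
    @Bean
    public DataSource sharedDataSource() {
        DriverManagerDataSource ds = new DriverManagerDataSource();
        ds.setDriverClassName("com.mysql.jdbc.Driver");
        ds.setUrl("jdbc:mysql://shared-db:3306/common"); // placeholder
        ds.setUsername("common");
        ds.setPassword("secret");
        return ds;
    }
}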
Code Unification
For unifying the code base into one (I assume you want to unify the codebases), you can make each application a separate module, each with its own resource path. For example, if you have Service1, Service2 and Service3, then in your new code base all Service1 resources will be hosted under the /service1 path, Service2 resources under /service2, and so on. To do this you simply need to modify the path specifiers in your resource files (usually an @Path annotation).
Q: How do I change all the API calls to these services, now that their paths have changed?
A: If you already pick up the API paths for these services from a config file, great: just change the paths in your config file. Otherwise you can start using this config approach now and specify something like the following:
In your config file:
api-paths: {
service1: /service1/
service2: /service2/
...
}
Config Unification
You can put all your configs in a single file, which most frameworks support. Another option is to keep a separate config file for each service; for that, take a look at the Typesafe Config library, which allows you to use multiple config files with overrides, as sketched below.
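A small sketch of the per-service override idea with Typesafe Config (the file names and keys are just examples):

import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;

public class ConfigLoader {
    public static void main(String[] args) {
        // service1.conf holds service-specific values, common.conf holds shared defaults.
        Config service1 = ConfigFactory.parseResources("service1.conf");
        Config common = ConfigFactory.parseResources("common.conf");

        // Values in service1.conf override those in common.conf.
        Config merged = service1.withFallback(common).resolve();

        String service1Path = merged.getString("api-paths.service1"); // e.g. "/service1/"
        System.out.println(service1Path);
    }
}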
Note: if codebase unification is not needed, use a reverse proxy like nginx instead. That's how huge websites like Google/Facebook work: you see a single domain which hides all the microservices behind layers of reverse proxies and a CDN.
For Auth/Login
You can do this in a servlet filter. Keep an exclusion list in your config; these excluded paths can be accessed without login. For example, the /login path must be in the exclusion list so people can reach the login page without logging in first. Your servlet filter can then implement simple auth based on a client cookie plus a server-side session store (see the filter sketch after the flow below). You will need a password store as well.
The login flow will look like this:
User opens the /login page.
User enters username + password (credentials).
Server receives the login request with the credentials and checks them against its own credential/password store.
If successful, the server sends a response telling the client to set a cookie with some expiry time. If it fails, the server sends an HTTP Unauthorized response.
The server stores the cookie in its session store as well (cookies are stored per client: user1+Chrome = one cookie, user1+Firefox = another cookie, user2+anything = yet another set of cookies).
On further requests the client sends the cookie and the server (the servlet filter) verifies it against its session store. If verification passes, the server allows the API call to proceed.
If the cookie has expired or there is no cookie in the request, redirect the user to /login and continue from step 1.
Note: always hash credentials on the client end before sending them over the network, and on the server side store only hashed credentials, never plain-text passwords. If security is paramount, look at salting your credentials as well.
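A bare-bones sketch of the filter described above (the exclusion list, the SessionStore type and the cookie name are illustrative assumptions):

import java.io.IOException;
import java.util.Arrays;
import java.util.List;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class AuthFilter implements Filter {

    // Paths reachable without a login.
    private static final List<String> EXCLUDED = Arrays.asList("/login", "/static");

    private final SessionStore sessionStore = new SessionStore(); // assumed server-side session store

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;
        String path = request.getRequestURI();

        // Excluded paths pass straight through.
        for (String excluded : EXCLUDED) {
            if (path.startsWith(excluded)) {
                chain.doFilter(req, res);
                return;
            }
        }

        // Look for the auth cookie and verify it against the session store.
        Cookie[] cookies = request.getCookies();
        if (cookies != null) {
            for (Cookie c : cookies) {
                if ("AUTH_TOKEN".equals(c.getName()) && sessionStore.isValid(c.getValue())) {
                    chain.doFilter(req, res);
                    return;
                }
            }
        }

        // No valid cookie: redirect to the login page.
        response.sendRedirect("/login");
    }

    @Override
    public void init(FilterConfig filterConfig) { }

    @Override
    public void destroy() { }
}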
I think a good approach here is to have a look at the Netflix technology stack. There is a project called Zuul which acts as a reverse proxy. This proxy can route incoming requests to your underlying services and can be the front door to them, where a request can only pass through if it is authenticated.
Hope this will help a bit.
I am working on a Spring 4 MVC application with a MySQL database and a Tomcat server.
Basically, I am creating a Spring REST API which will be consumed by AngularJS.
Note: I am not using Spring Security.
In order to avoid session replication in a clustered environment, I am using a cookie approach. On login, I generate a unique session id (using a Java UUID), use it to create a cookie, and set the cookie in the response. I also store that session id in the database along with the user data.
In order to authenticate every REST API call, I have written a Spring interceptor which intercepts each call and checks whether the cookie is present in the request. If it is present, I fetch the session id value and make a database call to check whether it is valid. On logout, I delete the cookie.
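To illustrate, a stripped-down sketch of that interceptor (the cookie name and the SessionRepository lookup are simplified stand-ins, not my actual code):

import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.springframework.web.servlet.handler.HandlerInterceptorAdapter;

public class AuthInterceptor extends HandlerInterceptorAdapter {

    private final SessionRepository sessionRepository; // assumed DB-backed session lookup

    public AuthInterceptor(SessionRepository sessionRepository) {
        this.sessionRepository = sessionRepository;
    }

    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response,
                             Object handler) throws Exception {
        Cookie[] cookies = request.getCookies();
        if (cookies != null) {
            for (Cookie cookie : cookies) {
                if ("SESSION_ID".equals(cookie.getName())
                        && sessionRepository.isValid(cookie.getValue())) {
                    return true; // valid session id: let the request through
                }
            }
        }
        response.sendError(HttpServletResponse.SC_UNAUTHORIZED);
        return false;
    }
}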
Based on what I am doing as explained above, I have a few questions:
1) Is my approach correct, or do you see any flaws in it?
2) Let me know if there is any better method to achieve the same, i.e. to avoid session replication.
3) Since I am not using any HTTP session, how do I achieve something like a session timeout, or do I even need it?
1) Is my approach correct, or do you see any flaws in it?
It's a good approach. Just a couple of points, in order of precedence:
If your API services a lot of requests, think about using an in-memory cache rather than the DB; going to the DB is relatively much more expensive. "A lot" is subjective, I know, and it depends on your setup, but consider the DB only for data that should live beyond sessions. It's better to use a more temporary/faster store such as an in-memory cache for things like API tokens. If you run across a cluster, explore a distributed cache solution.
Using cookies is not necessarily a security risk, but have a read about CSRF. It is more secure to pass the token in an HTTP header rather than in the cookie itself, if you are concerned about CSRF (I do use the header approach in my own apps, but CSRF attacks are relatively rare and it depends on how sensitive your data is).
2) Let me know if there is any better method to achieve the same, i.e. to avoid session replication.
Nothing to add beyond the response to 1).
3) Since I am not using any HTTP session, how do I achieve something like a session timeout, or do I even need it?
Store a timestamp with the token (preferably in the cache) and refresh it on each transaction that uses the token. Then, when checking whether the token is valid, also check the timestamp; based on the time elapsed you can decide whether to remove the token and ask the client to re-authenticate.
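A minimal in-memory sketch of that sliding-timeout idea (a plain ConcurrentHashMap standing in for whatever cache you actually use; the timeout value is arbitrary):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Tracks a last-used timestamp per token and expires idle tokens.
public class TokenStore {

    private static final long IDLE_TIMEOUT_MILLIS = 30 * 60 * 1000; // 30 minutes

    private final Map<String, Long> lastUsed = new ConcurrentHashMap<>();

    public void register(String token) {
        lastUsed.put(token, System.currentTimeMillis());
    }

    // Returns true and refreshes the timestamp if the token is still valid.
    public boolean validateAndRefresh(String token) {
        Long last = lastUsed.get(token);
        if (last == null || System.currentTimeMillis() - last > IDLE_TIMEOUT_MILLIS) {
            lastUsed.remove(token); // expired or unknown: force the client to re-authenticate
            return false;
        }
        lastUsed.put(token, System.currentTimeMillis()); // sliding expiration
        return true;
    }
}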
We are planning on developing a layer of REST services to expose services hosted on a legacy system. These services will be used by a classic web application and native mobile phone applications.
This legacy system is secured in such a way that an initial username + password authentication is required (a process that can take 5 to 10 seconds). After the initial authentication, a time-constrained token is returned. This token must then be included in all further requests or else requests will be rejected.
Due to a security requirement, the legacy security token cannot be returned outside of the REST service layer. This means that the REST service layer needs to keep this token in some form of user session, or else the expensive username + password authentication process would need to be repeated for every call to the legacy system.
The REST service layer will be implemented using a Java 6 + Spring 3 + Spring Security 3 stack. At first sight, it looks like this setup will run fine: Spring-based REST services will be secured using a rather standard Spring Security configuration, the legacy security token will be stored in the user's HTTP session and every call will retrieve this token using the user's session and send it to the legacy system.
But there lies the question: how will REST clients send the necessary data so that the user's HTTP session is retrieved properly? This is normally done transparently by the web browser using the JSESSIONID cookie, but no browser is involved in this process. Sure, REST clients could add cookie management to their code, but is this an easy task for all Spring RestTemplate, iPhone, BlackBerry and Android clients?
The alternative would be to bypass the HTTP session at the REST service layer and use some other form of user session, maybe backed by a database, identified by some key that REST clients would send through an HTTP header or a simple query parameter. The question then becomes, how can Spring Security be configured to use this alternative session mechanism instead of the standard Servlet HttpSession?
Surely I am not the first dealing with this situation. What am I missing?
Thanks!
There's nothing magical about cookies. They're just strings in HTTP headers. Any decent client API can handle them, although many require explicit configuration to enable cookie processing.
An alternative to using cookies is to put the JSESSIONID into the URL. I don't know anything about Spring Security, but it seems that that's actually the default for at least some types of URL requests, unless disable-url-rewriting is explicitly set to true. This can be considered a security weakness, though.
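As to whether cookie handling is an easy task for a RestTemplate client: a small sketch using Apache HttpClient as the underlying request factory, which keeps the JSESSIONID automatically between calls (this assumes HttpClient 4.3+ on the classpath; the URLs are placeholders):

import org.apache.http.client.CookieStore;
import org.apache.http.impl.client.BasicCookieStore;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.springframework.http.client.HttpComponentsClientHttpRequestFactory;
import org.springframework.web.client.RestTemplate;

public class SessionAwareClient {

    public static RestTemplate cookieAwareRestTemplate() {
        // The cookie store keeps JSESSIONID (and any other cookies) across requests.
        CookieStore cookieStore = new BasicCookieStore();
        CloseableHttpClient httpClient = HttpClients.custom()
                .setDefaultCookieStore(cookieStore)
                .build();
        return new RestTemplate(new HttpComponentsClientHttpRequestFactory(httpClient));
    }

    public static void main(String[] args) {
        RestTemplate rest = cookieAwareRestTemplate();
        // The first call authenticates and receives the JSESSIONID cookie (placeholder URLs).
        rest.postForObject("https://rest.example.com/login", "user=...&pass=...", String.class);
        // Subsequent calls reuse the same session automatically.
        String data = rest.getForObject("https://rest.example.com/data", String.class);
        System.out.println(data);
    }
}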
Unfortunately authentication is highly problematic, a bit of a blind spot in terms of web standards and browser implementations. You are right that cookies are not considered "RESTful" by purists, but even on fully-featured browsers avoiding them takes quite a bit of hackery, as described in this article: Rest based authentication.
Unfortunately I haven't done any mobile development, so I can't suggest what the best compromise is. You might want to start by checking what authentication models each of your targeted platforms supports. In particular, the two main options are:
HTTP authentication (ideally "digest", not "basic")
Cookies
One possibility would be to provide both options. Obviously not ideal from a technical or security point of view, but could have merits in terms of usability.