I am creating a web application with Spring 4. It must work as a proxy for internal resources of our company.
When it receives a user request, it checks the request's correctness and the user's privileges, and if everything is in order, returns the result to the user.
So, is it possible to forward a request to a page like http://xxxxx.com:8983/solr?
If yes, please show an example,
so that the user enters the URL of my application and sees the page http://xxxxx.com:8983/solr.
P.S. I tried to find this on Google, but everywhere the answers were about redirects, not forwards.
Well, I think there are multiple ways of dealing with this. But my gut feeling is that you're going to have to write some code yourself.
Basically, I would use HttpClient to make a request to the proxied website, take the input stream from the HttpClient connection, and stream it to the output stream of your Spring application's response.
You could handle all this interaction in the controller itself, but I think using a specialized ViewResolver might be better.
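For illustration, a minimal sketch of that idea handled in a controller, assuming Apache HttpClient 4.x; the /solr mapping and the privilege check are placeholders, and the target URL is taken from the question:

import java.io.IOException;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.apache.http.HttpEntity;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.springframework.stereotype.Controller;
import org.springframework.util.StreamUtils;
import org.springframework.web.bind.annotation.RequestMapping;

@Controller
public class ProxyController {

    // Hypothetical target, taken from the question.
    private static final String TARGET = "http://xxxxx.com:8983/solr";

    @RequestMapping("/solr")
    public void proxy(HttpServletRequest request, HttpServletResponse response) throws IOException {
        // ... check request correctness and user privileges here before proxying ...
        try (CloseableHttpClient client = HttpClients.createDefault();
             CloseableHttpResponse upstream = client.execute(new HttpGet(TARGET))) {

            // Copy status and content type from the proxied response.
            response.setStatus(upstream.getStatusLine().getStatusCode());
            HttpEntity entity = upstream.getEntity();
            if (entity != null) {
                if (entity.getContentType() != null) {
                    response.setContentType(entity.getContentType().getValue());
                }
                // Stream the proxied body straight into our own response.
                StreamUtils.copy(entity.getContent(), response.getOutputStream());
            }
        }
    }
}

A real proxy would also need to copy the request method, query string, and relevant headers in both directions.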
Have you tried:
return "forward:/page/section/";
Note, though, that forward: is dispatched within your own servlet context, so it cannot forward to an external host like http://xxxxx.com:8983/solr.
I know what I am asking is somewhat unusual. There is a web application (whose source code we don't have access to), and we want to expose a few of its features as web services.
I was thinking of using something like Selenium WebDriver, so I can simulate web clicks on the application according to the web service request.
I want to know whether there is a better solution or pattern for doing this.
I should mention that the application is written using Java, Spring MVC (it is not a SPA), and Spring Security, and that there is a CAS server providing SSO.
There are multiple ways to implement this. In my opinion, Selenium/PhantomJS is not the best option: if the site is properly designed, you can interact with it using only the provided HTML, or even some API, rather than needing all the CSS and executing the asynchronous JavaScript requests. Since your application is not a SPA, it's quite likely that an "API" already exists in the form of GET/POST requests, and you might be lucky enough that there's no CSRF protection.
First of all, you need to solve authentication against the CAS. There are multiple types of authentication in OAuth, but you should obtain an API token that gives you access to the application. This token should be added as an HTTP header or cookie on every single request. Ideally this token shouldn't expire; otherwise you'll need to implement re-authentication logic in your app.
Once the authentication part is resolved, you'll need quite a lot of patience: open the target website with the web inspector of your preferred browser, go to the Network panel, and execute the actions you want to run programmatically. There you'll find each request with all its headers and content, along with the response.
That's what you need to code. There are plenty of libraries to achieve this in Java. You can have a look at Jsoup if you need to parse HTML, but to run plain GET/POST requests, go for RestTemplate (in Spring) or the JAX-RS/Jersey 2 client.
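As a rough sketch of replaying such a captured request with RestTemplate; the URL and cookie value are hypothetical placeholders for whatever the Network panel shows:

import org.springframework.http.HttpEntity;
import org.springframework.http.HttpHeaders;
import org.springframework.http.HttpMethod;
import org.springframework.http.ResponseEntity;
import org.springframework.web.client.RestTemplate;

public class LegacyAppClient {

    private final RestTemplate restTemplate = new RestTemplate();

    // Replays a request captured in the browser's Network panel.
    // URL and cookie name are hypothetical; copy the real ones from the inspector.
    public String fetchPage(String sessionToken) {
        HttpHeaders headers = new HttpHeaders();
        headers.add(HttpHeaders.COOKIE, "JSESSIONID=" + sessionToken);

        ResponseEntity<String> response = restTemplate.exchange(
                "http://internal-app.example.com/reports/list",
                HttpMethod.GET,
                new HttpEntity<Void>(headers),
                String.class);
        return response.getBody();
    }
}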
You might consider implementing a cache layer to increase performance if the result of a query is stable over time, or if you can assume that, for, say, 5 minutes, the response to the same query will be the same.
You can create your app in your favourite language/framework. I'd recommend starting with Spring Boot + MVC + DevTools; that contains everything you need, plus Jsoup if you need to parse some HTML. Later on you can add a cache provider if needed.
We do something similar to access web banking on behalf of a user, scrape their account data, and obtain a credit score. In most cases, we have managed to reverse-engineer mobile apps and sniff traffic to use undocumented APIs. In others, we have had to fall back to web scraping.
There are two types of applications you can scrape:
Data is essentially the same for any user, like product listings on Amazon
Data is specific to each user, like in a banking app
In the first case, you could have your scraper running and populating a local database, and use your local data to provide the web service. In the latter case, you cannot do that, and you need to scrape the site on the user's request.
I understand from your explanation that you are in this latter case.
When web scraping, you can run into really difficult web apps:
Some may require you to send data from previous requests to the next
Others render most data on the client with JavaScript
If either of these is your case, Selenium will make your implementation easier, though not more performant.
Implementing the first without Selenium will require lots of trial and error to get things working, because you will be simulating the requests and you will need to know exactly what data the server expects from the client. If you use Selenium instead, you will be executing the same interactions you perform with the browser, and hence sending the expected data.
Implementing the second case requires your scraper to support JavaScript. AFAIK the best support is provided by Selenium; HtmlUnit claims to provide fair support, and I believe Jsoup provides no JavaScript support at all.
Finally, if your solution takes too much time, you can mitigate the problem by providing your web service with a notification mechanism, similar to webhooks or REST Hooks:
A client of your web service would make a request for data, providing a URI at which they would like to be notified when the results are ready.
Your service would respond immediately with an id for the request and start scraping the necessary info in the background.
If you use the skinny-payload model, then when the scraping is done, you store the response in your data store with an id identifying the original request. This response will be exposed as a resource.
You would then execute an HTTP POST on the URI provided by the client, adding the URI of the response resource to the body of the request.
The client can now GET the response resource, and because the request and response share the same id, the client can correlate the two.
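A minimal sketch of that flow as a Spring MVC controller; the paths, the scrapeAndStore worker, and the use of CompletableFuture are all illustrative assumptions:

import java.util.UUID;
import java.util.concurrent.CompletableFuture;

import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestTemplate;

@RestController
public class ScrapeController {

    private final RestTemplate restTemplate = new RestTemplate();

    // The client asks for data and passes a callback URI to be notified on.
    @PostMapping("/scrape-requests")
    public ResponseEntity<String> submit(@RequestParam String query,
                                         @RequestParam String callbackUri) {
        String requestId = UUID.randomUUID().toString();
        CompletableFuture.runAsync(() -> {
            String responseUri = scrapeAndStore(requestId, query); // hypothetical worker
            // Skinny payload: notify the client with just the response resource URI.
            restTemplate.postForEntity(callbackUri, responseUri, Void.class);
        });
        return ResponseEntity.accepted().body(requestId); // respond immediately with the id
    }

    private String scrapeAndStore(String requestId, String query) {
        // ... run the scraper and persist the result under requestId ...
        return "/scrape-responses/" + requestId;
    }
}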
Selenium isn't the best way to consume web services. Selenium is primarily an automation tool, largely used for testing applications.
Assuming the services are already developed, the first thing we need to do is authenticate the user request.
This can be done by adding an HTTP header with the key "Authorization" and the value "Basic " + Base64Encode(username + ":" + password).
If the user is valid (the login credentials match the credentials on the server), then generate a unique token, store the token on the server mapped to the user id, and
set the same token in the response header, or create a cookie containing the token.
By doing this we can avoid validating credentials on subsequent requests from the same user, simply by looking for the token in the request header or cookie.
If the services are designed to check the login every time, the "Authorization" header needs to be set on every request.
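For illustration, building that header value with the JDK's Base64 encoder (Java 8+); the credentials are obviously placeholders:

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class AuthHeaders {

    // Builds the value for the "Authorization" header described above.
    static String basicAuth(String username, String password) {
        String credentials = username + ":" + password;
        return "Basic " + Base64.getEncoder()
                .encodeToString(credentials.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        // Prints: Basic YWxpY2U6czNjcjN0
        System.out.println(basicAuth("alice", "s3cr3t"));
    }
}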
I think using a WebDriver is a lot of overhead, but it depends on what you really want to achieve. With the info you provided, I would rather go with a RestTemplate implementation sending the appropriate HTTP messages to the existing webapp, wrap it with a nice @Service layer, and build your web service (REST or SOAP) on top of it.
Authentication is a matter of configuration: you can pack this into a microservice with @EnableOAuth2Sso, and your RestTemplate bean, thanks to Spring Boot, will handle the underlying auth part for you.
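A minimal sketch of that wiring, assuming Spring Boot 1.x with Spring Security OAuth on the classpath; the class name is hypothetical:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.autoconfigure.security.oauth2.client.EnableOAuth2Sso;
import org.springframework.context.annotation.Bean;
import org.springframework.security.oauth2.client.OAuth2ClientContext;
import org.springframework.security.oauth2.client.OAuth2RestTemplate;
import org.springframework.security.oauth2.client.resource.OAuth2ProtectedResourceDetails;

@EnableOAuth2Sso
@SpringBootApplication
public class GatewayApplication {

    // RestTemplate that transparently attaches the current user's OAuth2 token
    // to every outgoing request.
    @Bean
    public OAuth2RestTemplate oauth2RestTemplate(OAuth2ProtectedResourceDetails resource,
                                                 OAuth2ClientContext context) {
        return new OAuth2RestTemplate(resource, context);
    }

    public static void main(String[] args) {
        SpringApplication.run(GatewayApplication.class, args);
    }
}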
Maybe overkill... but RPA? http://windowsitpro.com/scripting/review-automation-anywhere-enterprise
So, I'm currently developing an app for a service which has a JSON-based (unfortunately) read-only API. Retrieving content is no problem at all; however, the only way to post content is using a form on their site whose action is a PHP script. The service is open source, so I know which fields the form expects, but whatever I send, it always results in a BAD REQUEST.
I captured the network traffic inside my browser, and as far as I can see, the browser constructs a multipart form request. However, when I copy the request and invoke it again using a REST client, a BAD REQUEST gets returned.
Is there a way to construct an HTTP request in Android that simulates a form post?
If it's read-only, I think you wouldn't be able to make POST requests (POST implies editing or adding things).
If I may offer a piece of advice, I recommend using this project as a library:
https://github.com/matessoftwaresolutions/AndroidHttpRestService
It makes it easy to deal with APIs, handle network problems, etc.
You can find a sample of its use there.
You only have to:
Build your URL
Tell the component to execute in POST mode
Build your JSON
As I said, I don't even know if it will work.
I hope it helps!
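If you'd rather hand-roll the multipart post the browser makes, here is a minimal sketch with plain HttpURLConnection; the URL and field name are hypothetical and the fields must match the captured request exactly:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class FormPost {

    // Posts a single text field as multipart/form-data, the way a browser form does.
    // URL and field name are placeholders; copy the real ones from the captured
    // request. On Android, call this off the main thread.
    static int postForm(String targetUrl, String fieldName, String value) throws Exception {
        String boundary = "----AppBoundary" + System.currentTimeMillis();
        HttpURLConnection conn = (HttpURLConnection) new URL(targetUrl).openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "multipart/form-data; boundary=" + boundary);

        String body = "--" + boundary + "\r\n"
                + "Content-Disposition: form-data; name=\"" + fieldName + "\"\r\n\r\n"
                + value + "\r\n"
                + "--" + boundary + "--\r\n";

        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }
        // A 400 here usually means a missing or misspelled field, or a CSRF token.
        return conn.getResponseCode();
    }
}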
Let's say I've created a mobile application named 'Foo' (iOS). This app talks to a Java backend at 'java.com' and works perfectly. Now I'm trying to create the website 'Foo.com' to let users enjoy the 'same' service in a browser. So far, I've found that almost all the calls the website needs to make to the API can be done in JavaScript directly against the backend at 'java.com', including a login function.
On the backend, I've implemented the standard 'doPost' method to handle the login, and I create a Cookie to attach to the response.
The problem, I think, is that the users get the JavaScript from 'Foo.com', and the JavaScript tries to log in with an AJAX call to 'java.com'; thus the cookie will be 'stamped' by 'www.java.com', not by 'www.foo.com', and the user will never receive the cookie. (At least, I don't receive a cookie now.)
I've been trying to find a way to accept cookies from 'java.com' into the application, but it doesn't look good. Honestly, I'm not even sure this is the actual problem causing me not to receive a cookie, but I've read in several places that cross-domain cookies aren't allowed. So I ask the general question: how should I proceed?
I've been toying with the idea of adding a PHP page to the server side of the website 'foo.com', and handling the requests from client to API from there, hopefully causing the cookies to be 'stamped' as 'foo.com' instead of 'java.com'. (In that case, I also wonder whether the PHP script can forward the information in the cookie, or something similar.)
But I really want to avoid as much traffic on the webhost as possible. An all-script website would be optimal, but I don't really see how cookies can work with that.
Is there anything else I can do to handle this? If I simply want a persistent login function for a client of 'foo.com' handled at 'java.com', are there any options, with or without the use of cookies?
Yesterday I started brainstorming for a project of mine, and I'm not sure if my approach is correct.
On my website I have a form (kind of an order form) which sends a POST to a target URL; this works with a simple cURL PHP script. The target is an external service (where I have no access, no rights, nothing). I only know that I will get a POST back from the service with the processed data, which I have to save into my DB.
In steps:
User fills out the (order) form, and the data is posted to the external URL from my website.
The data gets handled externally, and when that's finished, the service sends a POST back.
Read the incoming POST data.
Save the data into the DB.
Show a success page on my website.
My thought was to handle the incoming data with a servlet (Spring Maven project), but I'm not sure if this is the correct approach. Is there a better way to do this? Or is the first step, with the PHP script, wrong? Thanks for any help.
The simplest workflow could be:
1. Forward the initial request (the order form with values) to a servlet.
2. Invoke a POST request to the external URL from inside this servlet, using Java (Apache HttpClient, or libraries such as HtmlUnit).
3. Once you get the incoming response in your servlet, you can update your database.
If you are using Spring, the controller could forward the initial request to a business class which handles this post-processing and delegates the database update to the respective DAO.
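A minimal sketch of those steps as a Spring MVC controller, assuming the external service replies synchronously; the endpoint, view name, and OrderDao are hypothetical:

import org.springframework.stereotype.Controller;
import org.springframework.util.MultiValueMap;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.client.RestTemplate;

@Controller
public class OrderController {

    /** Hypothetical DAO that persists the processed order data. */
    interface OrderDao { void save(String processedData); }

    private final RestTemplate restTemplate = new RestTemplate();
    private final OrderDao orderDao;

    public OrderController(OrderDao orderDao) {
        this.orderDao = orderDao;
    }

    // Steps 1-2: receive the order form, then POST it to the external service.
    @PostMapping("/order")
    public String submitOrder(@RequestParam MultiValueMap<String, String> formData) {
        // Hypothetical external endpoint; the form data is sent as
        // application/x-www-form-urlencoded by Spring's form converter.
        String processed = restTemplate.postForObject(
                "https://external-service.example.com/process", formData, String.class);

        orderDao.save(processed);   // step 3: update the database
        return "orderSuccess";      // step 4: render the success page
    }
}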
There are a number of suitable ways to handle this, and the decision is largely a matter of preference and what you're familiar with. Spring can handle this sort of job quite well.
Note: Maven is a build system for Java and some other JVM languages. I recommend using it, but it's not part of Spring; what you're probably looking for is Spring MVC.
I am trying to add some logging to capture the raw HTTP request coming into my application. My Java code is inside a Spring MVC controller, so I have access to the "HttpServletRequest" object, but I could not find a way to get the raw HTTP request stream out of it. There is a reader, but it only reads the POST content. What I want is the whole shebang: the URL, the headers, the body. Is there an easy way to do this?
Thanks in advance.
No.
The Servlet API provides no such facility, and it would be hard to implement because (basically) you cannot read the same data twice from a socket. It is not difficult to get header information, but the raw headers are impossible to capture within a servlet container. To get the bodies, you need to capture them yourself as your application reads/writes the relevant streams.
Your alternatives are:
Write your own server-side implementation of the HTTP protocol. (Probably not right for your application.)
You may be able to get the header information you need with filters, though they don't show the raw requests.
Some servlet containers have request header logging; e.g. with Tomcat there's a beast called the RequestDumperValve that you can configure in your "server.xml" file.
Implement a proxy server that sits between the client and your "real" server.
Packet sniffing.
Which is best depends on what you are really trying to achieve.
FOLLOWUP:
If the "badness" is in the headers, the RequestDumperValve approach is probably the best for debugging. Go to the "$CATALINA_HOME/conf/server.xml" file, search for "RequestDumperValve" and uncomment the element. Then restart Tomcat. You can also do the equivalent in your webapp's "context.xml" file. The dumped requests and responses end up in "logs/catalina.out" by default. Note that this will give a LOT of output, so you don't want to do this in production ... except as a last resort.
If the badness is in the content of a POST or PUT request, you'll need to modify your application to save a copy of the content as it reads it from the input stream. I'm not aware of any shortcuts for this.
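A minimal sketch of that idea as a servlet filter, using Spring's ContentCachingRequestWrapper so a copy of the body is kept as the application reads it; logging to stdout is a placeholder for your real logger:

import java.io.IOException;

import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.springframework.web.filter.OncePerRequestFilter;
import org.springframework.web.util.ContentCachingRequestWrapper;

// Keeps a copy of the POST/PUT body as the application reads the input stream,
// then logs it after the request has been handled.
public class BodyLoggingFilter extends OncePerRequestFilter {

    @Override
    protected void doFilterInternal(HttpServletRequest request,
                                    HttpServletResponse response,
                                    FilterChain chain) throws ServletException, IOException {
        ContentCachingRequestWrapper wrapped = new ContentCachingRequestWrapper(request);
        chain.doFilter(wrapped, response); // the app consumes the stream as usual

        // getContentAsByteArray() holds whatever the application actually read.
        String encoding = wrapped.getCharacterEncoding() != null
                ? wrapped.getCharacterEncoding() : "UTF-8";
        String body = new String(wrapped.getContentAsByteArray(), encoding);
        System.out.println(request.getMethod() + " " + request.getRequestURI() + "\n" + body);
    }
}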
Also, if you want to leave logging on for long periods, you'll probably need to solve the problem yourself by calling the HttpServletRequest API and logging headers, etc. The RequestDumperValve generates too much output and dumps ALL requests, not just the bad ones.
No, servlets provide no API to get at the raw request; you might need a sniffer like Wireshark for that.
You can get at the parsed request headers and URI, though:
getHeaderNames()
getRequestURI()
etc.
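For example, a small helper that rebuilds an approximation of the request line and headers from those parsed values:

import java.util.Enumeration;

import javax.servlet.http.HttpServletRequest;

public class RequestDump {

    // Rebuilds an approximation of the request line and headers
    // from the parsed values the Servlet API exposes.
    static String dump(HttpServletRequest request) {
        StringBuilder sb = new StringBuilder();
        sb.append(request.getMethod()).append(' ')
          .append(request.getRequestURI());
        if (request.getQueryString() != null) {
            sb.append('?').append(request.getQueryString());
        }
        sb.append(' ').append(request.getProtocol()).append('\n');

        Enumeration<String> names = request.getHeaderNames();
        while (names.hasMoreElements()) {
            String name = names.nextElement();
            sb.append(name).append(": ").append(request.getHeader(name)).append('\n');
        }
        return sb.toString();
    }
}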
I managed to read the raw request in my web application deployed on Tomcat 5.5.
All I had to do was read the HttpServletRequest through my servlet/Spring controller
using request.getInputStream() only.
It must be the first API access to the request, before any filter or other component starts to mess with the request and causes it to be read completely by the web server.
What's the problem with that approach?