Get jsp content after forward - java

How can I get the content of a JSP page after a servlet has forwarded to it? At the moment I'm trying the following:
request.getRequestDispatcher(DESTINATION_PAGE).forward(request, response);
URL teamsURL = new URL(request.getScheme(), request.getServerName(),
        request.getServerPort(), request.getContextPath() + DESTINATION_PAGE);
URLConnection teamsCon = teamsURL.openConnection();
String fileName = request.getServletContext().getRealPath("/") + System.currentTimeMillis() + ".html";
System.out.println(fileName);
try (BufferedReader in = new BufferedReader(new InputStreamReader(teamsCon.getInputStream()));
     PrintWriter out = new PrintWriter(fileName)) {
    String inputLine = null;
    while ((inputLine = in.readLine()) != null) {
        out.println(inputLine);
    }
}
I get the HTML, but with empty divs; I want the same page I see in the browser.
Sorry for the messy post; ask for whatever info you need and I'll update accordingly.

If you're trying to get the response body after you've written it, you'll need a custom HttpServletResponse wrapper that keeps track of everything written to the OutputStream directly or through the Writer.
You do this in a servlet Filter, after chain.doFilter(request, yourResponseWrapper) returns. A simple example can be found here.
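For reference, a minimal sketch of such a wrapper, assuming a pre-3.1 servlet API (Servlet 3.1+ also requires overriding isReady() and setWriteListener() on ServletOutputStream); the class name is illustrative:

import java.io.*;
import java.nio.charset.StandardCharsets;
import javax.servlet.*;
import javax.servlet.http.*;

public class CapturingResponseWrapper extends HttpServletResponseWrapper {
    private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();

    public CapturingResponseWrapper(HttpServletResponse response) {
        super(response);
    }

    @Override
    public ServletOutputStream getOutputStream() {
        return new ServletOutputStream() {
            @Override
            public void write(int b) {
                buffer.write(b); // capture instead of sending to the client
            }
        };
    }

    @Override
    public PrintWriter getWriter() {
        return new PrintWriter(new OutputStreamWriter(buffer, StandardCharsets.UTF_8), true);
    }

    public byte[] getCaptured() {
        return buffer.toByteArray();
    }
}

In the Filter, pass the wrapper down the chain, then read back what the JSP rendered:

public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
        throws IOException, ServletException {
    CapturingResponseWrapper wrapper = new CapturingResponseWrapper((HttpServletResponse) res);
    chain.doFilter(req, wrapper);
    byte[] rendered = wrapper.getCaptured(); // the HTML the JSP produced
    res.getOutputStream().write(rendered);   // still deliver it to the client
}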

Related

Retrieve HTML content refreshed with Ajax

I tried to get the HTML content from a website, and this code did the job:
public String extractRoutes(String urlStringifyed) throws MalformedURLException, IOException {
    URL url = new URL(urlStringifyed);
    URLConnection c = url.openConnection();
    c.connect();
    // Read from the connection we just opened; url.openStream() would open a second one.
    StringBuilder sb = new StringBuilder();
    try (BufferedReader br = new BufferedReader(new InputStreamReader(c.getInputStream()))) {
        String line;
        while ((line = br.readLine()) != null) {
            sb.append(line);
        }
    }
    return sb.toString();
}
Now I want to get the content from a specific page that is loaded with Ajax and protected by reCAPTCHA, but I can't.
Below are the URLs. I'm passing all the arguments, but the response from the first link tells me the service is temporarily down and I should try again later. The thing I don't understand is that when I copy the URL and paste it into my browser, it works fine. The second link, which does not involve reCAPTCHA, returns the same message.
https://mersultrenurilor.infofer.ro/ro-RO/Itineraries?DepartureStationName=Ia%C8%99i&ArrivalStationName=Suceava&DepartureDate=21.01.2019&TimeSelectionId=0&MinutesInDay=0&ChangeStationName=&DepartureTrainRunningNumber=&ArrivalTrainRunningNumber=&ConnectionsTypeId=0&OrderingTypeId=0&g-recaptcha-response=03AO9ZY1ChGhLCoSKCnF49dyCskHENK7ZUYdJEK_UCDVPn7RYGp40CMRUxvA0Q_ni6fDhP9BRm6viymicOOudd78WJbaHb2vbbtCq0DLS7NzngWBAgBKaWBFBa94RKqetwMSR89p5G1a8oS3bknB6d2tyZ2zhUk1veesR2Ef-RNVXDMpy0GotKH_XGPylDTvL5ftIrDem1LmWb4lQYNY0CCJ7jFScQf6SRqSH18jBWHAGEXVSlsQjoK8X4Q6riSlo1LK_vMJR-F-HVig7vavBd6zTI6LjceGyBtlQZCK7tcIuj4cS9Yg-tMbRKn_laukwLkceOpN8Q88_Aafz9JPtyx-eJAN_5fMbuRw
http://mersultrenurilor.infofer.ro/ro-RO/Itineraries?DepartureStationName=Ia%C8%99i&ArrivalStationName=Suceava&DepartureDate=21.01.2019%200%3A00%3A00&AreOnlyTrainsWithReservation=False&ArrivalTrainRunningNumber=&DepartureTrainRunningNumber=&ConnectionsTypeId=0&MinutesInDay=0&OrderingTypeId=0&TimeSelectionId=0&ChangeStationName=&IsSearchWanted=False
How can I get the HTML content (I'm interested in the train routes that are shown) from this URL?

Download AJAX-generated content using Java

I have a webpage that displays a list of movies. The content is created using AJAX (as far as my limited knowledge can tell).
I want to download the content, in this case the movie playing times, using Java. I know how to download a simple website, but here my solution only gives me the following as a result, instead of the playing times:
ajaxpage('http://data.cineradoplex.de/mod/AndyCineradoProg/extern',
"kinoprogramm");
How do I make my program download the results this AJAX function gives?
Here is the code I use:
String line = "";
URL myUrl = null;
BufferedReader in = null;
try {
    myUrl = new URL("http://www.cineradoplex.de/programm/spielplan/");
    in = new BufferedReader(new InputStreamReader(myUrl.openStream()));
    while ((line = in.readLine()) != null) {
        System.out.println(line);
    }
} finally {
    if (in != null) {
        in.close();
    }
}
In that output you can see the address from which the actual data is retrieved:
http://data.cineradoplex.de/mod/AndyCineradoProg/extern
You can request its contents directly and parse that instead.
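A minimal sketch of that request (untested against the live endpoint, so treat the URL and the UTF-8 assumption as just that):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class FetchAjaxData {
    public static void main(String[] args) throws Exception {
        URL dataUrl = new URL("http://data.cineradoplex.de/mod/AndyCineradoProg/extern");
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(dataUrl.openStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // the data the ajaxpage() call would have loaded
            }
        }
    }
}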

Screen scraping in Java

I'm trying to create an application, written in Java, that uses my university's class search function. I am using a simple HTTP GET request with the following code:
public static String GET_Request(String urlToRead) {
    java.net.CookieManager cm = new java.net.CookieManager();
    java.net.CookieHandler.setDefault(cm);
    URL url;
    HttpURLConnection conn;
    BufferedReader rd;
    String line;
    String result = "";
    try {
        url = new URL(urlToRead);
        conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        rd = new BufferedReader(new InputStreamReader(conn.getInputStream()));
        while ((line = rd.readLine()) != null) {
            result += line;
        }
        rd.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
    return result;
}
But it is not working.
Here is the URL I am trying to scrape:
https://webapp4.asu.edu/catalog/classlist?c=TEMPE&s=CSE&n=100&t=2141&e=open&hon=F
I tried looking into jsoup, but when I go to their "try jsoup" tab and fetch the URL, it comes up with the same results as the GET request.
The repeated, failed result I get with both the HTTP GET request and jsoup is that they bring up the university's search page, but not the actual classes or any information about whether they are open.
What I am ultimately looking for is a way to scrape the site for whether classes have open seats. Once I get the contents of the page I can parse through it; I'm just not getting any good results.
Thanks!
You need to add a cookie to answer the initial course offerings question:
class search course catalog
Indicate which course offerings you wish to see
* ASU Campus
* ASU Online
You do this by simply adding
conn.setRequestProperty("Cookie", "onlineCampusSelection=C");
to the HttpURLConnection.
I found the cookie with Google Chrome's Developer Tools (Ctrl-Shift-I): open the Resources tab, then expand Cookies to see the webapp4.asu.edu cookies.
The following code (mostly yours) gets the HTML of the page you are looking for:
public static void main(String[] args) {
    System.out.println(download("https://webapp4.asu.edu/catalog/classlist?c=TEMPE&s=CSE&n=100&t=2141&e=open&hon=F"));
}

static String download(String urlToRead) {
    java.net.CookieManager cm = new java.net.CookieManager();
    java.net.CookieHandler.setDefault(cm);
    String result = "";
    try {
        URL url = new URL(urlToRead);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        // The cookie that answers the campus-selection question up front:
        conn.setRequestProperty("Cookie", "onlineCampusSelection=C");
        BufferedReader rd = new BufferedReader(new InputStreamReader(
                conn.getInputStream()));
        String line;
        while ((line = rd.readLine()) != null) {
            result += line + "\n";
        }
        rd.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
    return result;
}
That said, I'd use a real parser like jsoup or HTML Parser to do the actual parsing job.
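For example, a hedged jsoup sketch (the CSS selector is hypothetical; inspect the actual class-list markup to pick the right one):

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

Document doc = Jsoup.connect("https://webapp4.asu.edu/catalog/classlist?c=TEMPE&s=CSE&n=100&t=2141&e=open&hon=F")
        .cookie("onlineCampusSelection", "C") // same cookie as above
        .get();
for (Element row : doc.select("table tr")) { // hypothetical selector
    System.out.println(row.text());
}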

Building a Java server and I can't get my page to stop loading (using PrintWriter and BufferedReader)

I'm building a Java server and everything has been working as expected until now. I can serve up a static HTML page using two methods I wrote: body and header. Now I am trying to write a new method called bodyWithQueryString.
Problem:
It almost works, but after the page is loaded, the loading won't stop. It just loads and loads. This does not happen with my static pages.
The only difference between the old method and the new bodyWithQueryString() method is that in the new method I am using a BufferedReader and a PrintWriter. These are new-ish classes for me, so I'm guessing I'm not using them right.
Here's how my new method is supposed to work:
I want to pass my route and query string (queryArray) to bodyWithQueryString. The method should read the file (from the route) into a byte output stream, doing a replaceAll on the key/value pair of the query string as it reads, and finally return the bytes. The getResponse() method then sends the HTML to the browser.
Here's my code:
public void getResponse() throws Exception {
    String[] routeParts = parseRoute(route); // break apart route and query string
    File theFile = new File(routeParts[0]);
    if (theFile.canRead()) {
        out.write(header(twoHundredStatusCode, routeParts[0], contentType(routeParts[0])));
        if (routeParts.length > 1) { // there must be a query string
            String[] queryStringArray = parseQueryString(routeParts[1]); // break apart query string
            out.write(bodyWithQueryString(routeParts[0], queryStringArray)); // use new body method
        } else {
            out.write(body(routeParts[0])); // use original body method
        }
        out.flush();
    }
}
private byte[] bodyWithQueryString(String route, String[] queryArray)
        throws Exception {
    BufferedReader reader = new BufferedReader(new FileReader(route));
    ByteArrayOutputStream fileOut = new ByteArrayOutputStream();
    PrintWriter writer = new PrintWriter(fileOut);
    String line;
    while ((line = reader.readLine()) != null) {
        writer.println(line.replaceAll(queryArray[0], queryArray[1]));
    }
    writer.flush();
    writer.close();
    reader.close();
    return fileOut.toByteArray();
}
It seems to me that you are not returning a Content-Length header. Without it the browser cannot tell when the response ends, so it keeps waiting for more data.
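A minimal sketch of the fix, with the header block written by hand (socketOut, the raw OutputStream to the client, is an assumption about your server's plumbing): compute the body first, then advertise its length before sending it.

byte[] body = bodyWithQueryString(routeParts[0], queryStringArray);
String headers = "HTTP/1.1 200 OK\r\n"
        + "Content-Type: text/html\r\n"
        + "Content-Length: " + body.length + "\r\n"
        + "\r\n"; // the blank line ends the header block
socketOut.write(headers.getBytes(java.nio.charset.StandardCharsets.US_ASCII));
socketOut.write(body); // exactly Content-Length bytes follow the headers
socketOut.flush();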

How can I retrieve a feed in JSON from a Java Servlet?

I want to make an HTTP request and store the result in a JSONObject. I haven't worked much with servlets, so I am unsure whether I am (1) making the request properly, and (2) supposed to create the JSONObject here. I have imported the JSONObject and JSONArray classes, but I don't know where I ought to use them. Here's what I have:
public void doGet(HttpServletRequest req, HttpServletResponse resp)
        throws IOException {
    try {
        // Create the URL with a single string.
        URL url = new URL(FEED_URL);
        // Read all the text returned by the server.
        BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream()));
        String str;
        while ((str = in.readLine()) != null) {
            // str is one line of text; readLine() strips the newline character(s)
        }
        in.close();
    } catch (MalformedURLException e) {
        // ignored
    } catch (IOException e) {
        // ignored
    }
}
My FEED_URL is already written so that it returns a feed formatted as JSON.
This has been getting to me for hours. Thank you very much; you guys are an invaluable resource!
First gather the response into a String:
BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream()));
StringBuilder fullResponse = new StringBuilder();
String str;
while ((str = in.readLine()) != null) {
    fullResponse.append(str);
}
Then, if the string starts with "{", you can use:
JSONObject obj = new JSONObject(fullResponse.toString()); //[1]
and if it starts with "[", you can use:
JSONArray arr = new JSONArray(fullResponse.toString()); //[2]
[1] http://json.org/javadoc/org/json/JSONObject.html#JSONObject%28java.lang.String%29
[2] http://json.org/javadoc/org/json/JSONArray.html#JSONArray%28java.lang.String%29
Firstly, this is actually not a servlet problem. You don't have any problem with the javax.servlet API; you just have problems with the java.net API and the JSON API.
For parsing and formatting JSON strings, I would recommend using Gson (Google Gson) instead of the legacy JSON APIs. It has much better support for generics and nested properties, and it can convert a JSON string to a full-fledged JavaBean in a single call.
I've posted a complete code example before here. Hope you find it useful.
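To illustrate, a minimal Gson sketch; the Feed class and its fields are hypothetical and must mirror whatever JSON structure your FEED_URL actually returns:

import com.google.gson.Gson;
import java.util.List;

// Hypothetical bean mirroring the feed's JSON structure.
class Feed {
    String title;
    List<String> entries;
}

Gson gson = new Gson();
Feed feed = gson.fromJson(fullResponse.toString(), Feed.class);
System.out.println(feed.title); // fields are populated in that single call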
