Apologies in advance for my English.
I work with an API and I am writing Java software that uses it.
I need to perform a deletion. The software supplied for testing the API shows that, to remove a row, I have to send it in the request body, like this:
["email","Termine","13/03/2018 09:52:20",etc...,""]
The body must contain a string array with all the contents of the row to delete. I can make this work in the test software.
However, I cannot figure out how to make a DELETE request with a body in Java. Here is what I have so far:
// Requires java.net.URL, java.net.HttpURLConnection and java.io.* imports.
public static String delete(String json, String nomUrl) throws IOException {
    URL url = new URL(baseUrl + "survey/" + nomUrl + "/data");
    //String json = "[\"Marc#Houdijk.nl\",\"Contacte\",\"10/04/2018 11:30:05\",\"Avoriaz\",\"Office de Tourisme\",\"Accueil OT\",\"Neerlandais\",\"Semaine 6\",\"Periode 2\",\"16\",\"\",\"Hiver 2018\",\"BJBR-CDQB\",\"04/12/2018 14:15:13\",\"04/12/2018 14:15:13\",\"04/12/2018 14:15:13\",\"\",\"Direct\",\"\",\"\",\"\"]\n";
    HttpURLConnection con = (HttpURLConnection) url.openConnection();
    con.setRequestMethod("DELETE");
    con.setRequestProperty("Content-Type", "application/json");
    con.setRequestProperty("Accept", "application/json");
    con.setRequestProperty("Authorization", "Bearer " + token);
    con.setDoOutput(true); // allow a request body on the DELETE

    // Write the JSON array into the request body.
    DataOutputStream wr = new DataOutputStream(con.getOutputStream());
    wr.writeBytes(json);
    wr.flush();
    wr.close();

    int responseCode = con.getResponseCode();
    StringBuilder response = new StringBuilder();
    response.append("\nSending 'DELETE' request to URL : ").append(url);
    response.append("\nResponse Code : ").append(responseCode);

    // Read the response body.
    BufferedReader in = new BufferedReader(new InputStreamReader(con.getInputStream()));
    String inputLine;
    while ((inputLine = in.readLine()) != null) {
        response.append("\n").append(inputLine);
    }
    in.close();
    return response.toString();
}
I based this on what I did for my POST and GET requests, but I do not see how to correctly add a body containing my string array to my delete function; it does not work, and searching the internet did not help me...
Thank you in advance for your help!
EDIT: Finally, my code works, so if you want to send a DELETE with a body you can use this code. The problem came from the JSON: I am French, so there were accents and special characters in my strings. After cleaning the string, everything works.
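One possible reason the accented characters caused trouble here is that DataOutputStream.writeBytes discards the high eight bits of every char, so non-ASCII text can be mangled before it reaches the server. A minimal sketch that writes the body explicitly as UTF-8 instead, reusing the con and json variables from the code above (this is an assumption about the cause, not something confirmed in the edit):

// Assumes java.io.OutputStreamWriter and java.nio.charset.StandardCharsets are imported.
con.setRequestProperty("Content-Type", "application/json; charset=UTF-8");
try (OutputStreamWriter writer =
         new OutputStreamWriter(con.getOutputStream(), StandardCharsets.UTF_8)) {
    writer.write(json); // accents and other non-ASCII characters survive as UTF-8 bytes
}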
You can create a POJO class with the fields required by the request body and send it to the API by serializing the object (serialization here means converting Java objects into JSON, which can be done with the GSON library). On the API side you can easily get the ArrayList or whatever you want; you just need to create the same POJO class on the server side, and the request body will be deserialized into the appropriate class. Through an object of that class you can then read whatever fields you want. Hope this helps.
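A minimal sketch of that idea with Gson (the SurveyRow class and its field are hypothetical, purely to illustrate the round trip):

import com.google.gson.Gson;
import java.util.Arrays;
import java.util.List;

public class GsonExample {
    // Hypothetical POJO mirroring one row of survey data.
    static class SurveyRow {
        List<String> values;
        SurveyRow(List<String> values) { this.values = values; }
    }

    public static void main(String[] args) {
        Gson gson = new Gson();

        // Serialize the object to JSON before sending it in the request body.
        SurveyRow row = new SurveyRow(Arrays.asList("email", "Termine", "13/03/2018 09:52:20"));
        String json = gson.toJson(row);

        // On the receiving side, deserialize back into the same POJO.
        SurveyRow parsed = gson.fromJson(json, SurveyRow.class);
        System.out.println(parsed.values);
    }
}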
I'm using the code below to get the Cache-Control value from the headers of a given URL. I don't want to fetch the body of the URL. The request below takes 800 ms to process. Is there any change that can be made to this code? I'm using Google App Engine for development. Please suggest, thanks. I would prefer not to add an extra jar.
URL obj;
URLConnection conn = null;
String noTransform = "";
obj = new URL(url);
conn = obj.openConnection();
noTransform = conn.getHeaderField("cache-control");
if (noTransform != null && (noTransform.contains("no-transform") || noTransform.contains("private"))) {
    news.setIsGoogleLiteURL("false");
    return news;
} else {
    news.setIsGoogleLiteURL("false");
    return news;
}
Instead of making a GET request, try making a HEAD request.
https://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.4
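A minimal sketch of that suggestion with HttpURLConnection, reusing the url variable from the question (only the headers are requested, no body; whether it actually cuts the 800 ms depends on the server):

// Requires java.net.URL and java.net.HttpURLConnection.
HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
conn.setRequestMethod("HEAD"); // ask for headers only, no response body
String cacheControl = conn.getHeaderField("cache-control");
// ...apply the same no-transform / private check as in the question...
conn.disconnect();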
I am new to programming and know very little about HTTP, but I wrote code to scrape a website in Java. My code can scrape pages reached by "GET" HTTP calls (i.e. by typing in a URL), but I do not know how to scrape data returned by a "POST" HTTP call.
After a brief overview of HTTP, I believe I need to simulate what the browser does, but I do not know how to do that in Java. The website I have been trying to scrape is the one in the code below.
I need to scrape the source code of all the result pages, but the URL does not change as the next button is clicked. I have used Firefox's Firebug to look at what happens when the button is clicked, but I do not know exactly what I am looking for.
My code to scrape the data as of now is:
public class Scraper {

    private static String month = "11";
    private static String day = "4";
    // The input website to be scraped.
    private static String url = "http://cpdocket.cp.cuyahogacounty.us/SheriffSearch/results.aspx?q=searchType%3dSaleDate%26searchString%3d" + month + "%2f" + day + "%2f2013%26foreclosureType%3d%27NONT%27%2c+%27PAR%27%2c+%27COMM%27%2c+%27TXLN%27";
    public static String sourcetext; // The source code that has been scraped

    // scrapeWebsite scrapes the input URL and stores the page source as a string to be parsed.
    public static void scrapeWebsite() throws IOException {
        URL urlconnect = new URL(url); // creates the URL from the variable
        URLConnection connection = urlconnect.openConnection();
        BufferedReader in = new BufferedReader(new InputStreamReader(
                connection.getInputStream(), "UTF-8"));
        String inputLine;
        StringBuilder sourcecode = new StringBuilder(); // accumulates the source code
        while ((inputLine = in.readLine()) != null)
            sourcecode.append(inputLine);
        in.close();
        sourcetext = sourcecode.toString();
    }
}
What would be the best way to go about scraping all the pages for each "post" call?
Take a look at the Jersey client interface.
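For example, with the JAX-RS 2.0 client API that Jersey implements, a POST with form parameters might look like the sketch below. The form field names are hypothetical; for an ASP.NET page like this one you would have to copy the actual postback fields (e.g. __VIEWSTATE, __EVENTTARGET) that Firebug shows being sent when the next button is clicked.

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.Entity;
import javax.ws.rs.core.Form;
import javax.ws.rs.core.Response;

public class PostScraper {
    public static String fetchNextPage(String url, String viewState, String eventTarget) {
        Client client = ClientBuilder.newClient();
        try {
            // Replay the same form fields the browser sends on a postback (names are placeholders).
            Form form = new Form()
                    .param("__VIEWSTATE", viewState)
                    .param("__EVENTTARGET", eventTarget);
            Response response = client.target(url)
                    .request()
                    .post(Entity.form(form));
            return response.readEntity(String.class); // the HTML of the next results page
        } finally {
            client.close();
        }
    }
}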
View the source of each page, determine the pattern of the URL for the next and previous pages, and then loop through them.
I need to make a program that takes a valid URL of a webpage, like www.stackoverflow.com/questions, or its IP-address equivalent. The program should then fetch that webpage and return its status code, such as 200 OK or 404 NOT FOUND. If the webpage isn't reachable, a message should be returned explaining the situation.
Here's what I have done so far:
interface Result {
    public boolean ok();
    public String message();
}

class Page {
    public Result check(String wholeURL) throws Exception {
        throw new Exception("Not sure about the rest");
    }
}
Also, to check a page like http://www.stackoverflow.com, I would create an instance of Page and then do something like this:
Page page = new PageImplementation();
Result result = page.check("http://www.stackoverflow.com:60");
if (result.ok()) { ... }
else { ... }
The object that is returned is an instance of Result. Its ok() method should return true when the status code is 200 OK and false otherwise, and its message() method should return the status code as a string.
Have a look at the HttpURLConnection class in the JDK, or use Apache HttpComponents.
Basically, you try to connect to the URL and check the response headers, or wait for a timeout if the server isn't reachable at all.
With HttpURLConnection it might look like this:
URL url = new URL("http://www.stackoverflow.com");
HttpURLConnection connection = (HttpURLConnection)url.openConnection();
connection.connect();
int httpStatusCode = connection.getResponseCode(); //200, 404 etc.
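Building on that, a minimal sketch of how the check method from the question could be implemented (PageImplementation and the anonymous Result are only illustrative; the timeouts and the unreachable-host message are assumptions):

import java.net.HttpURLConnection;
import java.net.URL;

class PageImplementation extends Page {
    @Override
    public Result check(String wholeURL) {
        try {
            HttpURLConnection connection =
                    (HttpURLConnection) new URL(wholeURL).openConnection();
            connection.setConnectTimeout(5000); // fail fast if the host is unreachable
            connection.setReadTimeout(5000);
            connection.connect();
            final int code = connection.getResponseCode();
            final String text = code + " " + connection.getResponseMessage();
            connection.disconnect();
            return new Result() {
                public boolean ok() { return code == HttpURLConnection.HTTP_OK; }
                public String message() { return text; }
            };
        } catch (final Exception e) {
            // Malformed URL, unreachable host, timeout, etc.
            return new Result() {
                public boolean ok() { return false; }
                public String message() { return "Not reachable: " + e.getMessage(); }
            };
        }
    }
}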
You can use an API like Apache Commons HttpClient:
import org.apache.commons.httpclient.*;
import org.apache.commons.httpclient.methods.*;
import org.apache.commons.httpclient.params.HttpMethodParams;
..........
public Result check(String fullURL) throws Exception {
    HttpClient client = new HttpClient();
    GetMethod method = new GetMethod(fullURL);
    int statusCode = client.executeMethod(method);
    method.releaseConnection();
    // Build and return your Result object based on statusCode.
}
I have an error when loading data from a web service into the datastore. The problem is that the XML returned by the web service contains UTF-8 characters and App Engine is not interpreting them correctly; it renders them as ??.
I'm fairly sure I've tracked this down to the URL Fetch request. The basic flow is: task queue -> fetch the web-service data -> put the data into the datastore, so it definitely has nothing to do with the request or response encoding of the main site.
I put log messages before and after Apache Digester to see if that was the cause, but determined it was not. This is what I saw in the logs:
string from the XML: "Doppelg��nger"
After digester processed: "Doppelg??nger"
Here is my url fetching code:
public static String getUrl(String pageUrl) {
    StringBuilder data = new StringBuilder();
    log.info("Requesting: " + pageUrl);
    for (int i = 0; i < 5; i++) { // retry up to 5 times
        try {
            URL url = new URL(pageUrl);
            URLConnection connection = url.openConnection();
            connection.connect();
            // No charset is given here, so the reader falls back to the platform default.
            BufferedReader reader = new BufferedReader(new InputStreamReader(connection.getInputStream()));
            String line;
            while ((line = reader.readLine()) != null) {
                data.append(line);
            }
            reader.close();
            break;
        } catch (Exception e) {
            log.warn("Failed to load page: " + pageUrl, e);
        }
    }
    String resp = data.toString();
    if (resp.isEmpty()) {
        return null;
    }
    return resp;
}
Is there a way I can force this to read the input as UTF-8? I tested the page I am loading and the W3C validator recognizes it as valid UTF-8.
The issue only occurs on the App Engine servers; it works fine on the development server.
Thanks.
Try specifying the charset explicitly when creating the reader:
BufferedReader reader = new BufferedReader(new InputStreamReader(connection.getInputStream(), "UTF-8"));
I ran into the same issue about 3 months back, Mike, and I would assume your problem is the same.
Let me recollect it and put it down here; feel free to add anything I miss.
My setup was Tomcat and Struts, and the way I resolved it was through the correct configuration in Tomcat.
Basically Tomcat itself has to support UTF-8 characters: set useBodyEncodingForURI on the connector; this handles GET parameters.
In addition, you can use a character-encoding filter for POST parameters, as in the sketch below.
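For the POST side, a servlet filter along these lines is a common approach (a minimal sketch of my own, not code from the original setup; the filter still has to be mapped in web.xml):

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

// Forces UTF-8 as the request body encoding so POST parameters are decoded correctly.
public class CharacterEncodingFilter implements Filter {

    public void init(FilterConfig config) throws ServletException {
        // nothing to configure in this sketch
    }

    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        // Must be set before any parameter is read, otherwise the default encoding is used.
        if (request.getCharacterEncoding() == null) {
            request.setCharacterEncoding("UTF-8");
        }
        chain.doFilter(request, response);
    }

    public void destroy() {
        // no resources to release
    }
}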
A good resource where you can find all of this under one roof: Click here!
I had a problem in production afterwards, where I had the Apache web server forwarding requests to Tomcat :). You similarly have to enable UTF-8 there too. The moral of the story: resolve each problem as it comes :)