I am sending a request to a server URL but I am getting a FileNotFoundException; when I browse the same file through a web browser it loads fine.
URL url = new URL(serverUrl);
connection = getSecureConnection(url);
// Connect to server
connection.connect();
// Send parameters to server
writer = new BufferedWriter(new OutputStreamWriter(connection.getOutputStream(), "UTF-8"));
writer.write(parseParameters(CoreConstants.ACTION_PREFIX + actionName, parameters));
writer.flush();
// Read server's response
reader = new BufferedReader(new InputStreamReader(connection.getInputStream()));
The file-not-found error is thrown when I call getInputStream().
The target is an .aspx controller page.
If the request works fine in a browser but not in code, and you've verified that the URL is the same, then the problem probably has something to do with how you are sending your parameters to the server. Specifically, this part:
writer.write(parseParameters(CoreConstants.ACTION_PREFIX + actionName, parameters));
Perhaps there is a bug in the parseParameters() function?
But more generally, I would recommend using something a bit higher-level than a raw URLConnection. HtmlUnit and HttpClient are both fine choices, particularly since it seems like your request is a fairly simple one. I've used both to perform similar client/server interaction in a number of apps. I suggest revising your code to use one of these libraries, and then see if it still produces the error.
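As an illustration, here is a minimal sketch with Apache HttpClient 4.x (serverUrl and actionName are the asker's variables; the "action" parameter name and the 4.x API choice are assumptions):
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import org.apache.http.NameValuePair;
import org.apache.http.client.entity.UrlEncodedFormEntity;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.message.BasicNameValuePair;
import org.apache.http.util.EntityUtils;

// Sketch: POST the form parameters and read the response body.
CloseableHttpClient client = HttpClients.createDefault();
HttpPost post = new HttpPost(serverUrl);
List<NameValuePair> form = new ArrayList<>();
form.add(new BasicNameValuePair("action", actionName)); // parameter name assumed
post.setEntity(new UrlEncodedFormEntity(form, StandardCharsets.UTF_8));
try (CloseableHttpResponse response = client.execute(post)) {
    String body = EntityUtils.toString(response.getEntity(), StandardCharsets.UTF_8);
    System.out.println(body);
}
The library builds the request line, Host header, and Content-Type for you, which removes most of the places a hand-rolled URLConnection can go wrong.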
OK, I finally found that the problem was on the IIS side; it has been resolved in .NET 4.0. For earlier versions, go to your web.config and set validateRequest="false".
Related
I need to send data to another system from a Java application via HTTP POST. Using the Apache HttpClient library is not an option.
I create the URL and the HTTPS connection without problems. But when I send a special character like the Spanish Ñ, the receiving system complains it is getting Ã' (mojibake: the two UTF-8 bytes of Ñ read as single-byte characters) instead of Ñ.
I've read many posts, but I don't understand some things:
When doing a POST and writing to the connection's output stream, is it mandatory to apply URLEncoder.encode(data, encoding) to the data being sent?
When sending the data, some examples use conn.writeBytes(strData), and others use conn.write(strData.getBytes(encoding)). Which one is better? Is this choice related to the encoding step?
Update:
The current code:
URL url = new URL(URLstr);
conn1 = (HttpsURLConnection) url.openConnection();
conn1.setRequestMethod("POST");
conn1.setDoOutput(true);
DataOutputStream wr = new DataOutputStream(conn1.getOutputStream());
wr.writeBytes(strToSend);//data sent
wr.flush();
wr.close();
(later I get the response)
strToSend has previously been run through URLEncoder.encode(..., "UTF-8").
I still don't know whether I must URL-encode in my code and/or call setRequestProperty("Content-Type", "application/x-www-form-urlencoded").
Or whether I must use .write(strToSend.getBytes(??)).
Any ideas are welcome. I am also testing against the real server (I don't know much about it).
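For reference, a minimal sketch of the usual pattern (URLstr is the asker's variable; the parameter name "data" and the sample value are invented): URL-encode each value, declare the form content type, and write the UTF-8 bytes of the assembled body.
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

URL url = new URL(URLstr);
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
conn.setRequestMethod("POST");
conn.setDoOutput(true);
conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded; charset=UTF-8");
// Encode each value individually; do not encode the whole body a second time.
String body = "data=" + URLEncoder.encode("Ñoño", "UTF-8");
OutputStream os = conn.getOutputStream();
// write(getBytes("UTF-8")) sends real UTF-8 bytes; DataOutputStream.writeBytes()
// would truncate each char to its low 8 bits and mangle Ñ.
os.write(body.getBytes("UTF-8"));
os.flush();
os.close();
This answers both sub-questions: yes, encode form values with URLEncoder, and prefer write(getBytes(encoding)) over writeBytes() for anything outside ASCII.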
I have a Java application which opens an existing company's website using the Socket class:
Socket sockSite;
InputStream inFile = null;
BufferedWriter out = null;
try
{
    sockSite = new Socket( presetSite, 80 );
    inFile = sockSite.getInputStream();
    out = new BufferedWriter( new OutputStreamWriter( sockSite.getOutputStream() ) );
}
catch ( IOException e )
{
    ...
}
out.write( "GET " + presetPath + " HTTP/1.1\r\n\r\n" );
out.flush();
I would read the website with the stream inFile and life is good.
Recently this started to fail. I was getting an HTTP 301 "site has moved" response, but with no moved-to link. The site still exists and responds at the original address in any web browser, but the above code comes back with the HTTP 301.
I changed the code to this:
URL url;
InputStream inFile = null;
try
{
    url = new URL( presetSite + presetPath );
    inFile = url.openStream();
}
catch ( IOException e )
{
    ...
}
I read the site from the inFile stream with the original code, and it now works again.
This difference doesn't just occur in Java; it also occurs in Perl (opening the website's port 80 with IO::Socket::INET and then issuing a GET fails, but LWP::Simple's get just works). In other words, the request fails if I open port 80 myself and then do a GET, but it works fine if I use a class that does it "all at once" (one that just says, "get me the web page at such-and-such an HTTP address").
I thought I'd try the different approaches on http://www.microsoft.com and got an interesting result. With the "port 80" open followed by GET /, I received an HTTP 200 response whose page said, "Your current user agent appears to be from an automated process...". But if I use the second method (the URL class in Java, or LWP in Perl), I simply get their web page.
So my question is: how does the URL class (in Java) or the LWP module (in Perl) do its thing under the hood that makes it different from opening the website on port 80 and issuing a GET?
Most servers require the Host: header to support virtual hosting (multiple domains on one IP).
If you use packet-capturing software to see what's sent when URL is used, you'll realize that a lot more than just "GET /" goes over the wire; all sorts of additional header information is included. If a server gets just a bare "GET /", it's easy to deduce that there can't be a very sophisticated client on the other end.
Also, HTTP/1.0 is outdated; the current version is 1.1.
The Java URL implementation delegates to HttpURLConnection when the URL starts with "http:".
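For illustration, a hand-rolled request that sends the headers most servers expect (a sketch; presetSite and presetPath are the asker's variables from the question above):
import java.io.BufferedWriter;
import java.io.OutputStreamWriter;
import java.net.Socket;

Socket sock = new Socket(presetSite, 80);
BufferedWriter out = new BufferedWriter(new OutputStreamWriter(sock.getOutputStream(), "US-ASCII"));
out.write("GET " + presetPath + " HTTP/1.1\r\n");
out.write("Host: " + presetSite + "\r\n");   // required by HTTP/1.1 and by virtual hosts
out.write("Connection: close\r\n");          // ask the server to close after the response
out.write("\r\n");                           // blank line terminates the headers
out.flush();
The missing Host header is the most likely reason the original socket code started getting a 301.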
I am trying to create a proxy server.
I want to read websites byte by byte so that I can display images and everything else. I tried readLine() but I can't display images. Do you have any suggestions on how I can change my code so it sends all the data to the browser through the DataOutputStream object?
try {
    Socket s = new Socket(InetAddress.getByName(req.hostname), 80);
    String file = parcala(req.url);
    DataOutputStream out = new DataOutputStream(clientSocket.getOutputStream());
    BufferedReader dis = new BufferedReader(new InputStreamReader(s.getInputStream()));
    PrintWriter socketOut = new PrintWriter(s.getOutputStream());
    socketOut.print("GET " + req.url + "\n\n");
    //socketOut.print("Host: " + req.hostname);
    socketOut.flush();
    String line;
    while ((line = dis.readLine()) != null) {
        System.out.println(line);
    }
}
catch (Exception e) {
    // swallowing exceptions hides errors; at minimum log them
}
}
Edited Part
This is what I'm supposed to do. Right now I can block banned web sites, but I can't let allowed web sites through in my program.
In the filter program, you will open a TCP socket at the specified port and wait for connections. If a request comes (i.e. the client types a URL to access a web site), the application will process it to decide whether access is allowed or not and then, using the same socket, it will send the reply back to the client. After the client opened her connection to WebPolice (and her request has been checked and is allowed), the real web page needs to be shown to the client. Therefore, since the user already gave her request, now it is WebPolice's turn to forward the request so that the user can get the web page. Thus, WebPolice acts as a client and requests the web page. This means you need to open a connection to the web server (without closing the connection to the user), forward the request over this connection, get the reply and forward it back to the client. You will use threads to handle multiple connections (at the same time and/or at different times).
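A minimal sketch of the accept loop that description calls for (PORT is an assumed constant; handleClient() is a hypothetical method that would do the allow/deny check and the forwarding):
import java.net.ServerSocket;
import java.net.Socket;

ServerSocket server = new ServerSocket(PORT);
while (true) {
    final Socket client = server.accept();   // block until a browser connects
    new Thread(new Runnable() {
        public void run() {
            handleClient(client);            // hypothetical: filter, then proxy
        }
    }).start();
}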
I don't know exactly what you're trying to do, but crafting an HTTP request and reading its response involves somewhat more than you have done here. readLine() won't work on binary data anyway.
You can take a look at the URLConnection class (example stolen from here):
URL oracle = new URL("http://www.oracle.com/");
URLConnection yc = oracle.openConnection();
BufferedReader in = new BufferedReader(new InputStreamReader(yc.getInputStream()));
Then you can read textual or binary data from the in object.
readLine() will treat what it reads as a String, so unless you want to mess around with converting it back to bytes, I wouldn't recommend it.
I would just read bytes until you can't read any more, then write them out to a file. This should allow you to grab the images while keeping the file headers intact, which can be important when dealing with files other than text.
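Along those lines, a sketch of a raw byte-copy loop (s and clientSocket are the asker's variables) that forwards the server's bytes untouched, so images and other binary content arrive intact:
import java.io.InputStream;
import java.io.OutputStream;

InputStream fromServer = s.getInputStream();
OutputStream toClient = clientSocket.getOutputStream();
byte[] buf = new byte[4096];
int n;
while ((n = fromServer.read(buf)) != -1) {   // -1 means end of stream
    toClient.write(buf, 0, n);               // forward exactly what was read
}
toClient.flush();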
Hope this helps.
Instead of using a BufferedReader you can use an InputStream; it has several methods for reading bytes.
http://docs.oracle.com/javase/6/docs/api/java/io/InputStream.html
I have a Java Swing app that launches by web start and accesses data files on a server through a URL. The files are served by Apache2, with HTTP basic Auth. Java pops up a dialog box prompting for a login, and that works fine.
The trouble comes when a user has checked "save this password in your password list" and the password later changes or was wrong in the first place; then you're stuck. It's apparently not smart enough to give you another chance: if your saved login fails, you get a 401 error and that's it.
So, where is it storing saved passwords and how do you delete them?
The code involved looks like this:
// uri is a String
URL url = new URL(uri);
HttpURLConnection urlConnection = (HttpURLConnection)url.openConnection();
// check HTTP response code
int responseCode = urlConnection.getResponseCode();
if (responseCode != HttpURLConnection.HTTP_OK)
throw new IOException("\nHTTP response code: " + responseCode);
// read the file
BufferedReader reader = new BufferedReader(new InputStreamReader(urlConnection.getInputStream()));
... etc ...
That code works fine, except in this situation where the user has saved a bad password, in which case you get a 401.
My understanding is that Java WebStart puts its hooks into the java.net classes, so that you get things like the password prompt, which you wouldn't get by running the same code from the command line or from your IDE. So, this question is really about that behavior.
Thanks!
No Code? Now you get a vague answer. Depending on your HttpClient, it's probably stored in the cookies or something. Re-initializing your HttpClient would be a great first debugging step. If that doesn't work, posting a little code here would be very helpful.
On Windows, the passwords are saved in the file below:
{user_name}\AppData\LocalLow\Sun\Java\Deployment\security\auth.dat
You can delete this file to clear the saved passwords.
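As a programmatic workaround (a sketch, not a confirmed fix for the Web Start cache): installing your own Authenticator before opening the connection makes java.net ask your code for credentials instead of replaying whatever was saved.
import java.net.Authenticator;
import java.net.PasswordAuthentication;

Authenticator.setDefault(new Authenticator() {
    @Override
    protected PasswordAuthentication getPasswordAuthentication() {
        // Prompt the user or fetch fresh credentials here;
        // these literal values are placeholders.
        return new PasswordAuthentication("user", "secret".toCharArray());
    }
});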
I want to read data from a streaming ICY-protocol source. The problem is that all the libraries I've tried (dsj, MP3SPI) use HttpURLConnection to do this. When I tried it on my Windows 7 machine I received "Invalid Http response", which is understandable, because "ICY 200 OK" is different from "HTTP 200 OK". I know this could be accomplished with sockets, but I'm a beginner, so if anyone can provide a few lines of code to give me the idea, I would appreciate it. Also, if you have other solutions, please share them. Thanks, and have a nice day!
This is the code that I've tried:
URLConnection connection = new URL("http://89.47.247.59:8020").openConnection();
InputStream in = connection.getInputStream();
System.out.println("getting lots of bytes");
in.close();
The error is :
Exception in thread "main" java.io.IOException: Invalid Http response
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1328)
at javaapplication1.JavaApplication1.main(JavaApplication1.java:46)
Java Result: 1
Edit: I included the code from your comment below.
Try this instead:
URL url = new URL("http://89.47.247.59:8020");
Socket socket = new Socket(url.getHost(), url.getPort());
OutputStream os = socket.getOutputStream();
String userAgent = "WinampMPEG/5.09";
// HTTP/1.0 request that asks the server to include ICY metadata
String req = "GET / HTTP/1.0\r\nUser-Agent: " + userAgent + "\r\nIcy-MetaData: 1\r\nConnection: keep-alive\r\n\r\n";
os.write(req.getBytes());
os.flush();
InputStream is = socket.getInputStream();
This worked for me perfectly!
MP3SPI should work fine on all systems.
If you want to extract ICY metadata, check this code: https://gist.github.com/1008126. There's an IcyInputStream that opens the URL and returns a regular InputStream that you can attach to a decoder; it also extracts metadata like artist and track title.
I've written this code using information from here.