I'm trying to send an image to the client from a servlet and add a cookie containing the id of the image to the response (I don't want to display the same image more than N times).
It looks like Internet Explorer ignores the cookies: I always get a null reference when I call request.getCookies(). With Opera everything works great.
Chrome sees the cookies, but I get the following exception when I write the image to the output stream:
ClientAbortException: java.net.SocketException: Software caused connection abort: socket write error
I haven't tried Mozilla Firefox yet.
Is there a workaround for Internet Explorer other than cookies? Sessions do work in my Internet Explorer.
Any ideas about the exception raised when I use Chrome? (The image is less than 1 MB.)
Here's the servlet code:
public void doGet(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    response.setContentType("image/jpeg");
    response.addHeader("Content-Disposition", "attachment");
    response.setHeader("Cache-Control", "no-cache,no-store,must-revalidate");
    response.setHeader("Pragma", "no-cache");
    response.setDateHeader("Expires", 0);

    HttpSession session = request.getSession();
    String requestURI = request.getParameter("requestURI");
    String resolution = request.getParameter("resolution");
    Cookie[] cookies = request.getCookies();

    try {
        // coada holds the candidate banners; refresh it when the client sent no cookies.
        if (cookies == null) {
            coada = new BannerChooser().Choose(1);
        }

        String filePath = null;
        Iterator it = coada.iterator();
        boolean found = false;
        // Pick the first banner whose id is not already stored in a cookie.
        while (!found && it.hasNext()) {
            BannerNota candidate = (BannerNota) it.next();
            found = true;
            if (cookies != null) {
                for (int i = 0; i < cookies.length; i++) {
                    if (Integer.parseInt(cookies[i].getValue()) == candidate.getB().getId()) {
                        found = false;
                        break;
                    }
                }
            }
            if (found) {
                // Remember this banner in a cookie so it is not served again today.
                String id = candidate.getB().getId().toString();
                Cookie cookie = new Cookie(id, id);
                cookie.setMaxAge(60 * 60 * 24);
                cookie.setPath("/licenta");
                filePath = candidate.getB().getPath();
                response.addCookie(cookie);
            }
        }

        // Stream the chosen image file to the client.
        File f = new File("h:/program files/Workspace/licenta/WebRoot/" + filePath);
        FileInputStream fis = new FileInputStream(f);
        ServletOutputStream out = response.getOutputStream();
        byte[] buffer = new byte[8192];
        int n;
        while ((n = fis.read(buffer)) != -1) {
            out.write(buffer, 0, n);
        }
        fis.close();
        out.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Cookies are domain-based. Many browsers reject cookies on embedded resources (CSS/JS/images) that are served from a different domain than the one the page itself is served from.
You may want to manage the cookie with JavaScript instead; Google Analytics does it that way too. At quirksmode.org you can find a nice tutorial on managing cookies with JavaScript. Any cookie-based information can then be sent as a request parameter on the image URL, as in the sketch below.
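For illustration, a minimal sketch of the servlet side under that approach, assuming the page's JavaScript appends the ids of already-shown banners to the image URL as a hypothetical comma-separated shown parameter, and that coada is a List<BannerNota> (a fragment of doGet, not a drop-in fix):
// Hypothetical: the page script builds <img src="banner?shown=3,7,12"> from its own cookie,
// so the servlet no longer depends on request.getCookies().
String shownParam = request.getParameter("shown");
Set<Integer> shownIds = new HashSet<Integer>();
if (shownParam != null && !shownParam.isEmpty()) {
    for (String id : shownParam.split(",")) {
        shownIds.add(Integer.valueOf(id.trim()));
    }
}
// Pick the first banner whose id has not been sent to this client yet.
for (BannerNota candidate : coada) {
    if (!shownIds.contains(candidate.getB().getId())) {
        // serve candidate's image here; the page script then adds its id to the cookie
        break;
    }
}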
When I try to load the page at the server's context path "\manager", it should send the response of the manager.html page, complete with its CSS and the various libraries used. The server does send the page; the only problem is that it does not execute the JavaScript inside it (and the corresponding images and CSS are not loaded).
@Override
public void handle(HttpExchange he) throws IOException {
    String root = "html/manager";
    URI uri = he.getRequestURI();
    File file = new File(root + uri.getPath() + ".html").getCanonicalFile();
    OutputStream os = null;
    String response = "";
    System.out.println(root + uri.getPath());
    if (!file.isFile()) {
        // Object does not exist or is not a file: reject with 404 error.
        response = "404 (Not Found)\n";
        he.sendResponseHeaders(404, response.length());
        os = he.getResponseBody();
        os.write(response.getBytes());
    } else {
        // Object exists and is a file: accept with response code 200.
        he.sendResponseHeaders(200, 0);
        os = he.getResponseBody();
        FileInputStream fs = new FileInputStream(file);
        final byte[] buffer = new byte[0x10000];
        int count = 0;
        while ((count = fs.read(buffer)) >= 0) {
            os.write(buffer, 0, count);
        }
        fs.close();
    }
    os.close();
}
Console output:
html/manager/manager
(Screenshots in the original post show the HTML output from the server, the same .html file opened locally in the browser from
/.../Applicazione/Illumify/html/manager/manager.html
and the doc folder used inside manager.html.)
I had a problem a while back where Chrome and Firefox were changing the session ID when creating a connection from an applet to a backing bean via a servlet in JSF (see here). I got around it last time by manually setting the session ID on the HttpURLConnection.
The applet requests a ranking criteria object from the backing bean via the servlet. The user then customises the ranking criteria in the applet (and, in other code, submits the ranking criteria back to the backing bean in order to rank products according to the newly customised ranking criteria).
Now Chrome is setting the requested session ID to null, but Firefox and Internet Explorer work fine.
In the applet:
try
{
    // Get the URL for the servlet.
    URL url = new URL(getCodeBase(), "editCriteriaServlet");
    HttpURLConnection connection = (HttpURLConnection) url.openConnection();
    connection.setDoInput(true);
    connection.setDoOutput(true);
    connection.setUseCaches(false);
    connection.setRequestMethod("POST");
    connection.setRequestProperty("Content-Type", "text/plain");
    connection.setRequestProperty("Cookie", "JSESSIONID=" + sessionID);

    ObjectOutputStream out = new ObjectOutputStream(connection.getOutputStream());
    out.writeObject("Request criteria Object");
    out.flush();
    out.close();

    // Read in the search criteria object.
    ObjectInputStream in = new ObjectInputStream(connection.getInputStream());
    SealedObject sealedObject = (SealedObject) in.readObject();
    in.close();

    // Decrypt the sealed object and get the zipped data.
    SecretKey key = buildSecretKey(crypKeyString);
    Cipher cipher = Cipher.getInstance("DES/ECB/PKCS5Padding");
    cipher.init(Cipher.DECRYPT_MODE, key);
    byte[] baos = (byte[]) sealedObject.getObject(cipher);
    ByteArrayInputStream gis = new ByteArrayInputStream(baos);

    // Unzip and recover the original object.
    GZIPInputStream unzipped = new GZIPInputStream(gis);
    ObjectInputStream ois = new ObjectInputStream(unzipped);
    tempMultipleSlideDataObject = (MultipleSlideDataObject15) ois.readObject();
}
catch (MalformedURLException ex)
{
    errorMessage = "Submit criteria file Malformed URL." + ex.toString();
    fireActionPerformed(new ActionEvent(this, ActionEvent.ACTION_PERFORMED, "showErrorMessageDialog_"));
    System.out.println("Model_CriteriaInterface: loadCriteriaObject: MalformedURLException occurred");
}
catch (Exception e)
{
    errorMessage = "Submit criteria file ERROR exception: " + e.toString();
    fireActionPerformed(new ActionEvent(this, ActionEvent.ACTION_PERFORMED, "showErrorMessageDialog_"));
    System.out.println("Model_CriteriaInterface: loadCriteriaObject: Submit criteria file ERROR exception: " + e.toString());
}
In the servlet:
@Override
protected void doPost(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException
{
    System.out.println("Servlet: SessionID: " + request.getRequestedSessionId());
    response.setContentType("application/x-java-serialized-object");
    try
    {
        ObjectInputStream in = new ObjectInputStream(request.getInputStream());
        in.close();

        // Get the backing bean and then use that to get the search criteria object.
        ProductSelectionBean productSelection = (ProductSelectionBean) request.getSession().getAttribute("productSelectionBean");
        Object searchObject = productSelection.getSealedRankingCriteria();

        // Send the object, in the response, back to the applet.
        ObjectOutputStream outputToApplet = new ObjectOutputStream(response.getOutputStream());
        outputToApplet.writeObject(searchObject);
        outputToApplet.flush();
        outputToApplet.close();
    }
    catch (ClassNotFoundException ex)
    {
        Logger.getLogger(EditCriteriaServlet.class.getName()).log(Level.SEVERE, null, ex);
    }
}
The following line in the servlet:
System.out.println("Servlet: SessionID: " + request.getRequestedSessionId());
prints 'null', and the productSelection backing bean is null. Firefox and IE print the session ID fine.
I suspect that this is a bug in Chrome; however, no one from Chrome has gotten back to me.
Any thoughts? Suggestions to get around it?
Edit/update: I found the proper Chromium reporting site (I was Googling "Chrome" instead of "Chromium") here. I will see if they get back to me through this.
Many thanks in advance.
I am not exactly sure whether this is what fixed the problem in Chrome, but fixing the ViewExpiredException problem here, using BalusC's answer, also appeared to fix this one. Either that, or Google's developers have fixed it in the last day or so and it is just a coincidence.
Because the problem cookies have been cleared, I have been unable to reproduce the error to check whether that was actually the cause of the null pointer problem in Chrome. Why it only affects Chrome is anyone's guess.
I am having a problem calling the same https URL several times in a row. The first requests are successful, but after an indeterminate amount of time, a 401 HTTP error code exception is thrown, suggesting that the user credentials are invalid.
I discussed this problem with the person in charge of the database/server and he told me that the problem I was experiencing was normal because after some fixed amount of time, the server invalidates session data, causing subsequent calls to the same URL with the same user credentials to result in a 401 HTTP error code.
He indicated that if I let different URLConnection objects handle all the calls that need to be made, then I should not have to worry about expired session data.
His explanation seems to make sense, but as the snippet of code below shows, I am already using a brand new URLConnection object for each request to the same url with the same user credentials. So if what I was told is correct, then I guess that the problem is that the URLConnection objects are all using the same underlying connection and for that reason sharing the same session data.
Assuming that I am on the right track, how should I modify my code so that each time I make a new request to the same URL with the same user credentials I don't run into problems caused by expired session data? Is it just a matter of calling disconnect() on the underlying HttpsURLConnection object?
public static void main(String[] args)
{
    String url = "https://...";  // some https url
    int x = 0;
    while (true)
    {
        try
        {
            System.out.print("call#: " + (++x));
            // call download() with a valid username & password
            String result = download(url, "some-valid-username", "some-valid-password");
            System.out.println(result);
        }
        catch (Throwable e)
        {
            // after hundreds of successful calls,
            // a 401 HTTP error code exception
            e.printStackTrace();
            break;
        }
    }
}

public static String download(String url, String user, String pass) throws IOException
{
    // new URLConnection object
    java.net.URLConnection connection = new java.net.URL(url).openConnection();
    connection.setRequestProperty("Authorization",
            "Basic " +
            javax.xml.bind.DatatypeConverter.printBase64Binary(
                    (user + ":" + pass).getBytes("UTF-8")));

    // get response
    InputStream is = null;
    byte[] response = null;
    try
    {
        is = connection.getInputStream();
        ByteArrayOutputStream stream = new ByteArrayOutputStream();
        byte[] bytes = new byte[16384];
        int x = 0;
        while ((x = is.read(bytes, 0, bytes.length)) != -1) {
            stream.write(bytes, 0, x);
        }
        stream.flush();
        response = stream.toByteArray();
    }
    finally
    {
        if (is != null)
        {
            is.close();
        }
    }
    //((javax.net.ssl.HttpsURLConnection) connection).disconnect(); // ?
    return response != null ? new String(response, "UTF-8") : null;
}
Based on your login credentials, the server creates a session ID, and that session is maintained by the server. Your subsequent calls are validated against the session ID, not against your credentials.
You need to get the session ID on the first call and keep it in your application, then pass it to the server on subsequent calls, as in the sketch below. The session expires after a certain predefined period if no request is sent to the server.
Please read:
http://en.wikipedia.org/wiki/Session_%28computer_science%29
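A rough sketch of that idea, assuming the server issues a standard session cookie via the Set-Cookie header (the class and field names here are made up; only the header handling is standard):

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class SessionAwareClient {

    // Captured from the first response, e.g. "JSESSIONID=ABC123".
    private String sessionCookie;

    public InputStream open(String url, String basicAuthValue) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        if (sessionCookie == null) {
            // First call: authenticate with the credentials only.
            conn.setRequestProperty("Authorization", "Basic " + basicAuthValue);
        } else {
            // Subsequent calls: present the session cookie the server gave us.
            conn.setRequestProperty("Cookie", sessionCookie);
        }
        // Capture (or refresh) the session cookie from the response, if the server sent one.
        String setCookie = conn.getHeaderField("Set-Cookie");
        if (setCookie != null) {
            sessionCookie = setCookie.split(";", 2)[0]; // keep only "name=value"
        }
        return conn.getInputStream();
    }
}

If the server eventually invalidates that session anyway, the caller would still have to catch the 401, clear the stored cookie, and retry once with the credentials.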
I have a web service that generates a PDF. In my GAE application I have a button; when I click it, I use this ajax function:
$('#test').click(function(){
    $.ajax({
        url: 'provaws.do',
        type: 'get',
        dataType: 'html',
        success: function(data) {
        }
    });
});
This is the Java method that calls the web service, using UrlFetch:
@RequestMapping(method = RequestMethod.GET, value = PROVAWS_URL)
public void prova(HttpServletRequest httpRequest, HttpServletResponse httpResponse, HttpSession httpSession) throws IOException {
    try {
        URL url = new URL("http://XXXXX/sap/bc/zcl_getpdf/vbeln/yyyyyy");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestProperty("Authorization", "Basic " + Base64.encodeBase64String(("username:password").getBytes()));
        connection.setConnectTimeout(60000);
        if (connection.getResponseCode() == HttpURLConnection.HTTP_OK) {
            // OK: buffer the PDF bytes from the web service.
            ByteArrayOutputStream baos = new ByteArrayOutputStream();
            InputStream is = null;
            try {
                is = connection.getInputStream();
                byte[] byteChunk = new byte[4096];
                int n;
                while ((n = is.read(byteChunk)) > 0) {
                    baos.write(byteChunk, 0, n);
                }
            } catch (IOException e) {
                e.printStackTrace();
            } finally {
                if (is != null) { is.close(); }
            }
            // Write the raw bytes (not a String conversion) so the PDF is not corrupted.
            httpResponse.setContentType("application/pdf");
            httpResponse.setHeader("Content-Disposition", "attachment; filename=yyyyy.pdf");
            httpResponse.getOutputStream().write(baos.toByteArray());
            httpResponse.getOutputStream().flush();
        }
        ....
}
With Firebug I see the response:
%PDF-1.3
%âãÏÓ
2 0 obj
<<
/Type /FontDescriptor
/Ascent 720
/CapHeight 660
/Descent -270
/Flags 32
/FontBBox [-177 -269 1123 866]
/FontName /Helvetica-Bold
/ItalicAngle 0
....
What do I need to set in the ajax function to show the PDF?
Thanks in advance.
I don't know Java well, but in my understanding your mechanism may not be right.
Here are my corrections:
Instead of sending the file as a stream, the server-side code (Java) should generate the PDF on the backend, put the file in the file system (or other storage), and then return the URI of the file in the Ajax response.
The Ajax code then gets the URL from the server and shows that URL in the DOM, so the user can follow the link to read/download the PDF.
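As a rough Java-side sketch of that idea (the mapping reuses the names from the question, but the stored-PDF URL is hypothetical; on App Engine the file would have to live in something like Blobstore or Cloud Storage rather than the local file system):

@RequestMapping(method = RequestMethod.GET, value = PROVAWS_URL)
public void prova(HttpServletResponse httpResponse) throws IOException {
    // Generate/store the PDF elsewhere and return only its URL as plain text.
    String pdfUrl = "/pdfs/yyyyy.pdf"; // hypothetical location of the stored PDF
    httpResponse.setContentType("text/plain");
    httpResponse.getWriter().write(pdfUrl);
}

The ajax success callback then only has to open or link to the returned URL, for example with window.open(data).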
Side note:
I checked further, and there are methods for streaming data over Ajax, though jQuery's ajax() can't handle that. But I think that for rendering a PDF file, streaming is overkill.
Refs: jquery ajax, read the stream incrementally?, http://ajaxpatterns.org/HTTP_Streaming#In_A_Blink
I'm trying to download a file from
http://aula.au.dk/main/document/document.php?action=download&id=%2F%D8velsesvejledning+2012.pdf
but it doesn't appear to be a PDF when I try downloading it with this code:
import java.io.*;
import java.net.*;

public class DownloadFile {

    public static void download(String address, String localFileName) throws IOException {
        URL url1 = new URL(address);
        byte[] ba1 = new byte[1024];
        int baLength;
        FileOutputStream fos1 = new FileOutputStream(localFileName);
        try {
            // Contacting the URL
            System.out.print("Connecting to " + url1.toString() + " ... ");
            URLConnection urlConn = url1.openConnection();
            // Checking whether the URL contains a PDF
            if (!urlConn.getContentType().equalsIgnoreCase("application/pdf")) {
                System.out.println("FAILED.\n[Sorry. This is not a PDF.]");
            } else {
                try {
                    // Read the PDF from the URL and save to a local file
                    InputStream is1 = url1.openStream();
                    while ((baLength = is1.read(ba1)) != -1) {
                        fos1.write(ba1, 0, baLength);
                    }
                    fos1.flush();
                    fos1.close();
                    is1.close();
                } catch (ConnectException ce) {
                    System.out.println("FAILED.\n[" + ce.getMessage() + "]\n");
                }
            }
        } catch (NullPointerException npe) {
            System.out.println("FAILED.\n[" + npe.getMessage() + "]\n");
        }
    }
}
Can you help me out here?
http://aula.au.dk/main/document/document.php?action=download&id=%2F%D8velsesvejledning+2012.pdf is not a PDF. The website returns an error, which is why the script doesn't work:
SQL error in file /data/htdocs/dokeos184/www/main/inc/tool_navigation_menu.inc.php at line 70
As Marti said, the root cause of the problem is that the script on the website fails. I tested your program on a working PDF link, and it works just fine.
This wouldn't have helped you in this case, but HttpURLConnection is a specialized subclass of URLConnection that makes communicating with an HTTP server a lot easier, e.g. it gives direct access to error codes:
HttpURLConnection urlConn = (HttpURLConnection) url1.openConnection();
// check the response code for errors (4xx or 5xx)
int responseCode = urlConn.getResponseCode();
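For example, a hedged extension of that snippet which reports server errors instead of silently failing (building on the url1 from the question; nothing here is specific to this site):

HttpURLConnection urlConn = (HttpURLConnection) url1.openConnection();
int responseCode = urlConn.getResponseCode();
if (responseCode >= 400) {
    // On 4xx/5xx responses the body (e.g. that "SQL error ..." page) comes from getErrorStream().
    InputStream err = urlConn.getErrorStream();
    if (err != null) {
        BufferedReader reader = new BufferedReader(new InputStreamReader(err));
        String line;
        while ((line = reader.readLine()) != null) {
            System.out.println(line);
        }
        reader.close();
    }
} else if ("application/pdf".equalsIgnoreCase(urlConn.getContentType())) {
    // Safe to read urlConn.getInputStream() and save it as a PDF here.
}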
A two-step process with two libraries:
// 1. Use Jsoup to get the response.
Response response = Jsoup.connect(location)
        .ignoreContentType(true)
        // more method calls like user agent, referer, timeout
        .execute();

// 2. Use Apache Commons to write the file
FileUtils.writeByteArrayToFile(new File(path), response.bodyAsBytes());