I'm currently building a web app using Java Servlets on Tomcat 7.
This web app uses JUpload as a client-side applet to provide a more comfortable way of uploading multiple files to the server.
However, the applet currently sends the files one per POST request. My servlet reads the data from the input stream and stores it locally. That part works fine.
But in addition I have to store the filename, path and similar things in the DB. That's why I wanted to keep this information in objects, collect them in a list, and fill that list across the incoming requests from the applet.
The list is currently realized as a class variable.
public class UploadServlet extends HttpServlet {

    private static final long serialVersionUID = 1L;

    private ArrayList<ImageUploadInformation> uploadInfos; // I know, that's bad.

    public UploadServlet() {
        super();
        uploadInfos = new ArrayList<>();
    }

    protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
        // not relevant stuff...
    }

    protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
        // accessing the data stream directly
        // JUpload sends file information and file binary data in one stream, so we have to deal with mixed data in streams
        InputStream inputStream = request.getInputStream();
        DataInputStream dataInputStream = new DataInputStream(inputStream);
        // some other stuff, not relevant
        byte[] b = IOUtils.toByteArray(inputStream);
        File file = null;
        if (finalFilename != null) {
            file = new File(finalFilename);
        }
        if (file != null) {
            FileOutputStream fos = new FileOutputStream(file);
            BufferedOutputStream bos = new BufferedOutputStream(fos);
            bos.write(b);
            bos.close();
            fos.close();
        }
        else {
            throw new IOException("File Creation on Server failed!");
        }
        // adding meta information about the file to the list
        uploadInfos.add(new ImageUploadInformation(filename, relativeDir, "1"));
    }
}
But I read in some threads here that this is really a bad thing to do with regard to thread safety. I'm not very experienced in writing web applications, so maybe the following approach is completely wrong.
I tried to bind the list as a session attribute of the request:
request.getSession().setAttribute("uploadInfos", uploadInfos);
However, I cannot use this, because each upload from the applet arrives as an entirely new POST request, and that's why I don't have access to this list in another request.
I read something about binding objects to the ServletContext, but I think this is also bad practice; however, I couldn't find any proof of that. How can I store this list across multiple independent requests?
Would it be better if all files were sent to the servlet in a single POST request, so that I could create the list inside the doPost() method?
I think this is configurable within JUpload, but the files could be very large.
Is it common practice to send a large number of files in one request?
Thanks for any help and for links to additional literature on this kind of thing.
Edit: additional stuff
I also tried this:
if (request.getSession().getAttribute("uploadInfos") != null) {
    uploadInfos = (ArrayList<ImageUploadInformation>) request.getSession().getAttribute("uploadInfos");
    uploadInfos.add(new ImageUploadInformation(filename, relativeDir, "1"));
    System.out.println("UploadInfos found in Session, current size: " + uploadInfos.size());
    request.getSession().setAttribute("uploadInfos", uploadInfos);
}
else {
    System.out.println("No UploadInfos found, creating new List...");
    uploadInfos = new ArrayList<>();
    uploadInfos.add(new ImageUploadInformation(filename, relativeDir, "1"));
    request.getSession().setAttribute("uploadInfos", uploadInfos);
}
Here's the output of the test:
Incoming post request
No UploadInfos found, creating new List...
Incoming post request
No UploadInfos found, creating new List...
Incoming post request
No UploadInfos found, creating new List...
You're almost there. The session is where to store state, on the server, that you want to keep across requests.
Servlets may be called from multiple threads and from different clients, and the container may also create or destroy instances as it pleases, so servlets should not hold state themselves. The field "uploadInfos" needs to be removed. That list should be a thread-safe collection, e.g. CopyOnWriteArrayList, and stored in the session instead. First get the attribute from the session; if it's null, create a new list and store it in the session. Then add your entry to the list as before.
It's also worth mentioning that storing state between requests on the server is sometimes undesirable as it can make systems harder to scale out. An alternative would be to store the state on the client, using JavaScript. In your case, though, I wouldn't bother with that, just store it in a session. It's easier.
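As a minimal sketch of that pattern (assuming your existing ImageUploadInformation class and the filename/relativeDir variables from your doPost()), a hypothetical helper added to UploadServlet could look like this:

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

// Hypothetical helper for UploadServlet; it replaces the instance field entirely.
@SuppressWarnings("unchecked")
private List<ImageUploadInformation> sessionUploadInfos(HttpServletRequest request) {
    HttpSession session = request.getSession();
    List<ImageUploadInformation> infos =
            (List<ImageUploadInformation>) session.getAttribute("uploadInfos");
    if (infos == null) {
        // Thread-safe list, in case two uploads for the same session overlap.
        infos = new CopyOnWriteArrayList<>();
        session.setAttribute("uploadInfos", infos);
    }
    return infos;
}

In doPost() you would then call sessionUploadInfos(request).add(new ImageUploadInformation(filename, relativeDir, "1")). Note that this only helps if the applet sends the session cookie (JSESSIONID) with every POST; if it does not, each request gets a fresh session, which is what your log output suggests.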
Related
I want to send data (what I call a response) to a previous HTTP requester (assuming a newer request has since come to the same servlet) through the already established session (which is still alive).
The reason is that some event (which is NOT another request in the session in question) wants the servlet to say something (a string) through our old session to our old requester (assuming another requester has come in the meantime).
Is there a way (a hint) to do that?
(( Edit :
I know it's not usual for HTTP communication. And I'm afraid that if the socket (at layer 5 of the OSI model) closes after the "service" method ends, then my question doesn't hold in the first place, and I would probably have to stop the socket from closing by going into an API of the servlet container ("Tomcat" in my case).
This is probably the case, judging by this.
))
Anyway, here is what I've tried to do:
1) I tried to save the HttpServletResponse object (in the "service" method) of the session in question, but then when a newer request (from another requester) came in (and, as a consequence, the "service" method of the same servlet, call it "servlet1", was executed again), it appeared to me, oddly enough, that the HttpServletResponse I had saved was overwritten.
2) I found the same result (and was even more surprised) when I tried to save the PrintWriter object (in the code shown below) that I got from HttpServletResponse.getWriter(). Again, it was overwritten when another request came in.
3) I had some hope for HttpSession, but I doubt I'm able to write through it (check its methods here).
Again, I just want to write a string and receive it on (my) client side. If there is some sort of control I can take over the session to send a string, and a way to get it on the client, that would be good.
class Session {
    protected PrintWriter printWriter;
    protected int index;

    Session(PrintWriter pW, int index) {
        this.printWriter = pW;
        this.index = index;
    }
}

@WebServlet("/servlet1")
public class Servlet1 extends HttpServlet {

    private static int session_index = 0;
    protected static volatile ArrayList<Session> list = new ArrayList<Session>();

    protected void service(HttpServletRequest request, HttpServletResponse response) throws IOException {
        session_index++;
        System.out.println("Servlet1 : " + session_index);
        response.getWriter().append("Served at: ").append(request.getContextPath());
        list.add(new Session(response.getWriter(), session_index));
        if (list.get(0).printWriter.equals(response.getWriter())) {
            System.out.println("Servlet1 : same old PrintWriter object");
        }
    }
}
I expected the message "Servlet1 : same old PrintWriter object" not to appear in my console, but it did on all subsequent requests.
On the server side (localhost) I'm working with Tomcat 9.0.21 installed on Linux.
It looks like I will need to modify the servlet container itself, so I will download the source code from here and follow this guideline to build Tomcat. The modification must be made in such a way that the socket is not closed. I may add a method to the servlet to close the socket manually.
I currently have the following situation, which has been bothering me for a couple of months now.
The case
I have built a Java (FX) application which serves as a cash register for my shop. The application contains a lot of classes (such as Customer, Transaction, etc.) which are shared with the server API. The server API is hosted on Google App Engine.
Because we also have an online shop, I have chosen to build a cache of the entire database on startup of the application. To do this I call the GET of my Data API for each class/table:
protected QueryBuilder performGet(HttpServletRequest req, HttpServletResponse res)
        throws ServletException, IOException, ApiException, JSONException {
    Connection conn = connectToCloudSQL();
    log.info("Parameters: " + Functions.parameterMapToString(req.getParameterMap()));
    String tableName = this.getTableName(req);
    log.info("TableName: " + tableName);
    GetQueryBuilder queryBuilder = DataManager.executeGet(conn, req.getParameterMap(), tableName, null);

    //Get the correct method to create the objects
    String camelTableName = Functions.snakeToCamelCase(tableName);
    String parsedTableName = Character.toUpperCase(camelTableName.charAt(0)) + camelTableName.substring(1);

    List<Object> objects = new ArrayList<>();
    try {
        log.info("Parsed Table Name: " + parsedTableName);
        Method creationMethod = ObjectManager.class.getDeclaredMethod("create" + parsedTableName, ResultSet.class, boolean.class);
        while (queryBuilder.getResultSet().next()) {
            //Create new objects with the ObjectManager
            objects.add(creationMethod.invoke(null, queryBuilder.getResultSet(), false));
        }
        log.info("List of objects created");
        creationMethod = null;
    }
    catch (Exception e) {
        camelTableName = null;
        parsedTableName = null;
        objects = null;
        throw new ApiException(e, "Something went wrong while iterating through ResultSet.", ErrorStatus.NOT_VALID);
    }
    Functions.listOfObjectsToJson(objects, res.getOutputStream());
    log.info("GET Request succeeded");

    //Clean up objects
    camelTableName = null;
    parsedTableName = null;
    objects = null;

    closeConnection(conn);
    return queryBuilder;
}
It simply gets every row from the requested table in my Cloud SQL database. Then it creates the objects, using the class that is shared with the client application. Lastly, it converts these objects to JSON using Gson. Some of my tables have 10,000+ rows, and for those it takes approx. 5-10 seconds.
On the client, I convert this JSON back to a list of objects using the same shared class. First I load the essential classes sequentially (because otherwise the application won't start), and after that I load the rest of the classes in the background with separate threads.
The problem
Every time I load up the cache, there is a chance (about 1 in 4) that the server responds with a DeadlineExceededException on some of the bigger tables. I believe this has something to do with Google App Engine not being able to fire up a new instance in time, so the computation time exceeds the limit.
I know it has something to do with loading the objects in background threads, because they all start at the same time. When I delay the start of these threads by 3 seconds, the error occurs a lot less often, but it is still present. Because the application loads 15 classes in the background, delaying them is not ideal, since the application will only work partly until it is done. It is also not an option to load everything before starting, because that would take more than 2 minutes.
Does anyone know how to set up some load balancing on Google App Engine for this? I would like to solve this on the server side.
You clearly have an issue with warm-up requests and a query that takes quite long. You have the usual options:
Do some profiling and reduce the cost of your method invocations
Use caching (memcache) to cache some of the results
If those options don't work for you, you should parallelize your computations. One thing that comes to mind is that you could reliably reduce request times if you simply split your request into multiple parallel requests, like so:
Let's say your table contains 5k rows.
Then you create 50 requests, each handling 100 rows.
Aggregate the results on the server or client side and respond.
It'll be quite tough to do this on just the server side, but it should be possible if your now (much) smaller tasks return within a couple of seconds.
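As a rough sketch of that splitting idea (not your actual Data API), the server side could accept hypothetical offset/limit parameters so the client can fetch a big table in parallel slices:

import java.io.IOException;
import java.util.List;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Sketch only: the "offset"/"limit" parameters and fetchRows(...) are assumptions,
// not part of the poster's real API.
public class SlicedGetServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse res)
            throws ServletException, IOException {
        String tableName = req.getParameter("table");
        int offset = Integer.parseInt(req.getParameter("offset")); // e.g. 0, 100, 200, ...
        int limit = Integer.parseInt(req.getParameter("limit"));   // e.g. 100 rows per slice
        List<Object> slice = fetchRows(tableName, offset, limit);
        res.setContentType("application/json");
        Functions.listOfObjectsToJson(slice, res.getOutputStream()); // the existing JSON helper
    }

    // Hypothetical helper: wraps the existing DataManager/ObjectManager logic,
    // but with a "... LIMIT ? OFFSET ?" query for one slice of rows.
    private List<Object> fetchRows(String tableName, int offset, int limit) {
        throw new UnsupportedOperationException("wire up to DataManager here");
    }
}

The client then fires, say, 50 such slice requests in parallel and concatenates the returned lists; each small request should finish well within the App Engine deadline.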
Alternatively, you could return a job id right away and have the client poll for the result after a couple of seconds. This would, however, require a small change on the client side. It's the better option imho, especially if you want to use a task queue for creating your response.
I have a servlet, mapped to a URL, which does a long task and outputs some data while it's working.
What I want to do is call this URL and see the output in real time.
Let's take this as an example:
package com.tasks;
public class LongTaskWithOutput extends HttpServlet {
    private static final long serialVersionUID = 2945022862538743411L;

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
        response.addHeader("Content-Type", "text/plain");
        PrintStream out = new PrintStream(response.getOutputStream(), true);
        for (int i = 0; i < 10; i++) {
            out.println("# " + i);
            out.flush();
            try {
                Thread.sleep(1000);
            } catch (Exception e) {}
        }
    }
}
With the following in web.xml:
...
<servlet>
    <servlet-name>LongTaskServlet</servlet-name>
    <servlet-class>com.tasks.LongTaskWithOutput</servlet-class>
    <description>Long Task Servlet</description>
</servlet>
<servlet-mapping>
    <servlet-name>LongTaskServlet</servlet-name>
    <url-pattern>/longTask</url-pattern>
</servlet-mapping>
...
What happens
If I browse to localhost/myApp/longTask, the browser makes me wait 10 seconds and then prints out all the text at once.
What should happen
The text should be sent to the browser as soon as it's written to the output stream, and the browser should render one line every second.
As you can see, I already call out.flush() to make sure the stream is flushed every second, but it still doesn't work.
I also tried with response.flushBuffer(), but I had the same result.
Is there a way to achieve this?
Update
As @MadConan suggested, I tried to use the output stream directly:
OutputStream out = response.getOutputStream();
for (int i = 0; i < 10; i++) {
    out.write(("# " + i + "\n").getBytes());
    out.flush();
    try {
        Thread.sleep(1000);
    } catch (Exception e) {}
}
The result, unfortunately, is still the same.
This is an upstream issue. The browser is not necessarily going to display that data as it receives it. It may wait until the request is complete. You might have additional chunking if you are going through a proxy. If you snoop on the network traffic, I bet it will go through as expected.
You are flushing the content to the response stream, but your response is not committed to the client. Think of it this way: you give a package to somebody, who has to leave the room and hand it over to somebody else. Until that person leaves the room and hands it over, your package will not be delivered.
For your requirement, you can keep pushing the data to some global space and then have a PULL mechanism on the client read that space every X seconds and display the content. There is also the option of a PUSH mechanism to do the same; it depends on your project whether to use PUSH or PULL.
This basically means that within one request you cannot update the client, return to the server, and do the same over and over again. The request dies once the response is committed to the client. After that there has to be another PULL request from the client or a PUSH from the server.
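A minimal sketch of the PULL variant, assuming a hypothetical shared queue kept in the ServletContext: the long-running task offers its progress lines into the queue, and the client polls this separate servlet every X seconds:

import java.io.IOException;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical polling endpoint; the long-running task would offer() its
// progress lines into the same queue stored in the ServletContext.
public class ProgressServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse res)
            throws ServletException, IOException {
        @SuppressWarnings("unchecked")
        Queue<String> progress =
                (Queue<String>) getServletContext().getAttribute("progress");
        if (progress == null) {
            progress = new ConcurrentLinkedQueue<>();
            getServletContext().setAttribute("progress", progress);
        }
        res.setContentType("text/plain");
        // Drain and return everything produced since the last poll.
        String line;
        while ((line = progress.poll()) != null) {
            res.getWriter().println(line);
        }
    }
}

This is only one way to arrange the "global space"; a database table or a cache entry keyed by a task id works just as well.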
I have a question pertaining to the use of the ServletInputStream and ServletOutputStream available in Java Servlets. First I'll give some much needed context:
The assignment I am working on calls for implementing Task Queues on Google App Engine. I've been able to get tasks added to the app engine and the appropriate workers called. However, I am struggling to figure out how to pass an ArrayList<> of serializable objects to the worker's doPost() method. The prevailing method is apparently to use the input and output streams of the HTTP request and response objects, respectively, to handle this communication between servlets. I've googled extensively but haven't been able to find a clear example of how to prepare such an ArrayList for transmission on an output stream, add it to the response of the first servlet, then retrieve it from the request in the second servlet, and finally convert it back into an ArrayList for use in the doPost() method. So that is basically my question. Due to my inexperience with Java, it is difficult for me to figure it all out by myself, and I am mostly struggling to wrap my head around it.
To clarify a bit more, I'll post the doPost() method of the worker in question:
protected void doPost(HttpServletRequest req, HttpServletResponse resp)
        throws ServletException, IOException
{
    try
    {
        ArrayList<Quote> qs = /* Here the list needs to be read in. */ null;
        EntityManager manager = EMF.get().createEntityManager();
        CarRentalModel.get().confirmQuotes(qs, manager);
    }
    catch (ReservationException e)
    {
    }
}
Any help would be greatly appreciated.
Thank you in advance,
Kevin
It's worth following BalusC's advice. But if you are looking for a simple and quick solution, you can do it with Java serialization:
In your doPost() method, you can create an ObjectInputStream which reads data from the underlying servlet input stream and deserializes (makes objects out of) the data.
ServletInputStream sis = req.getInputStream();
ObjectInputStream ois = new ObjectInputStream(sis);
ArrayList<Quote> qs = (ArrayList<Quote>) ois.readObject();
You write the object on the other side analogously, with an ObjectOutputStream and its writeObject() method. If this doesn't work straight away, try to .flush() or .close() your output stream after finishing your write operations, to trigger sending any remaining buffered data.
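For the sending side, a minimal sketch (assuming your Quote class implements java.io.Serializable) is to serialize the whole list into a byte array and use that as the POST/task payload:

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.util.ArrayList;

// Sketch: turns the list into bytes that can be sent as the request body,
// e.g. as the payload of a task. Quote must implement Serializable.
public static byte[] serializeQuotes(ArrayList<Quote> quotes) throws IOException {
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    ObjectOutputStream oos = new ObjectOutputStream(baos);
    oos.writeObject(quotes); // writes the list and all its elements
    oos.flush();
    oos.close();
    return baos.toByteArray();
}

On the worker side, the readObject() snippet above then gives you the same ArrayList back, as long as both servlets share the same Quote class.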
This is my first question on Stack Overflow; I hope you can help me. I've done a bit of searching online, but I keep finding tutorials or answers that talk about either reading text files using a BufferedReader or reading bytes from files on the internet. Ideally, I'd like to have a file on my server at "http://ascistudent.com/scores.data" that stores all of the Score objects made by players of a game I have made.
The game is a simple "block-dropping" game where you try to get 3 of the same blocks touching to increase the score. When time runs out, the scores are loaded from a file and the player's score is inserted at the right position in a List of Score objects. After that the scores are saved again to the same file.
At the moment I get a java.io.EOFException on the highlighted line:
URL url = new URL("http://ascistudent.com/scores.data");
InputStream is = url.openStream();
Score s;
ObjectInputStream load;
//if(is.available()==0)return;
load = new ObjectInputStream(is); //----------java.io.EOFException
while ((s = (Score) load.readObject()) != null) {
    scores.add(s);
}
load.close();
I suspect that this is due to the file being empty. But when I catch this exception and write to the file anyway (after changing the Score list) with the following code, nothing appears to be written (and the exception keeps happening).
URL url = new URL("http://ascistudent.com/scores.data");
URLConnection ucon = url.openConnection();
ucon.setDoInput(true);
ucon.setDoOutput(true);
os = ucon.getOutputStream();
ObjectOutputStream save = new ObjectOutputStream(os);
for (Score s : scores) {
    save.writeObject(s);
}
save.close();
What am I doing wrong? Can anyone point me in the right direction?
Thanks very much,
Luke
Natively you can't write to a URLConnection unless that connection is writable.
What I mean is that you cannot directly write to a URL unless the other side accepts what you are going to send. In HTTP this is done through a POST request that attaches data from your client to the request itself.
On the server side you'll have to accept this POST request, take the data and add it to scores.data. You can't write to the file directly; you need to process the request in the web server, e.g.:
http://host/scores.data
provides the data, while
http://host/uploadscores
should be a different URL that accepts a POST request, processes it and remotely modifies scores.data
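A minimal sketch of such an upload endpoint (the servlet name and the file path are assumptions, not the real setup on that host): it deserializes the POSTed Score list and rewrites the file that /scores.data serves:

import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.List;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical servlet mapped to /uploadscores on the server.
public class UploadScoresServlet extends HttpServlet {

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        try (ObjectInputStream in = new ObjectInputStream(req.getInputStream())) {
            @SuppressWarnings("unchecked")
            List<Score> scores = (List<Score>) in.readObject();
            // Rewrite the file that the /scores.data URL serves (path is an assumption).
            try (ObjectOutputStream out =
                    new ObjectOutputStream(new FileOutputStream("/var/www/scores.data"))) {
                for (Score s : scores) {
                    out.writeObject(s);
                }
            }
        } catch (ClassNotFoundException e) {
            resp.sendError(HttpServletResponse.SC_BAD_REQUEST, "Unexpected payload");
        }
    }
}

The game client would then POST its serialized Score list to http://host/uploadscores instead of trying to write through the URLConnection of scores.data itself.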