I'm looking for a way within Spring MVC to put my JSP pages on a remote machine and load them when I need them.
The reason I want to do this is that my application receives page templates from users, and I have to save them somewhere and load them dynamically when the corresponding page gets requested! Since putting my users' JSP pages inside my web app at runtime is not possible, I have two choices:
1) Save them in a remote location and get a reference to them when a request comes in.
2) Save them in a database, which I don't think is a good idea because a user's page may have many visitors.
What solution do you suggest?
Are you using Unix? Maybe you could mount the remote server and create a symbolic link in the WEB-INF/jsp directory pointing to the remote mount.
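As a rough sketch, assuming an NFS export and a standard Tomcat layout (the host and paths below are made up):

    # mount the remote template directory (hypothetical host/paths)
    mount -t nfs templates.example.com:/exports/jsp /mnt/remote-jsp

    # link it into the webapp so the container can resolve the views
    ln -s /mnt/remote-jsp /var/lib/tomcat/webapps/myapp/WEB-INF/jsp/user-templates

One caveat: if I remember correctly, Tomcat will not follow symbolic links out of the context unless allowLinking is enabled on the Context.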
OK, I'm a beginner so this may be stupid, but I'm afraid that clients can modify static resources (CSS/JavaScript files) on the server if they can load them directly through a URL path. (Of course, I have to put CSS/JavaScript files outside of the WEB-INF folder.)
If my hypothesis is wrong, could you give me links or quotes to help me expand my knowledge? Thank you :)
When a user's browser requests resources from your server, it performs a GET request. This request will not directly change any file on your server: it goes through your web server and is processed there. In the case of resources such as CSS/JavaScript files, the web server sees that the user is requesting the file and sends its contents back.
There is no way the user can update the contents of those files on the server unless you write code on the server that allows them to update the files. If the user has direct access to the server via SSH or another protocol, and has permissions on the folder holding the resources, then they would be able to change them.
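To make that concrete, here is a minimal sketch of what serving a static resource boils down to on the server side (the servlet and document root are illustrative, not any framework's actual code). The handler only ever reads the file; nothing in a GET request can make it write to disk:

    import java.io.File;
    import java.io.IOException;
    import java.nio.file.Files;
    import javax.servlet.http.*;

    public class StaticResourceServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            // hypothetical document root; a real server also normalizes the
            // path to block ".." traversal
            File file = new File("/var/www/static", req.getPathInfo());
            if (!file.isFile()) {
                resp.sendError(HttpServletResponse.SC_NOT_FOUND);
                return;
            }
            resp.setContentType(getServletContext().getMimeType(file.getName()));
            Files.copy(file.toPath(), resp.getOutputStream()); // read-only: bytes flow out
        }
    }

To let users change a file, you would have to deliberately write and expose an endpoint that accepts data and saves it, which is exactly the "code on the server" mentioned above.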
The full process of going through the web server is much more complex, but it is left out here for brevity. Here is a good article that explains what really happens when you type an address into a browser:
https://medium.com/@maneesha.wijesinghe1/what-happens-when-you-type-an-url-in-the-browser-and-press-enter-bb0aa2449c1a
Is it possible to retrieve an index of all files on a web server when connecting to a website?
(Something similar to this image: http://www.linuxscrew.com/wp-content/uploads/2008/06/directory_index.png)
I understand that you can achieve a similar effect using a web crawler, but there might be some unlisted links on the website that are public but invisible. Is there any way to access those files?
Not unless the server is configured to expose such a list. In many cases, there are very few "files", per se, in the first place. Resources are database records that are processed by server-side routines to provide HTML content to the browser.
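For what it's worth, directory listings like the one in that screenshot are an explicit server setting. On Apache Tomcat, for example, it is the listings init-param of the DefaultServlet in conf/web.xml, and it ships disabled:

    <!-- conf/web.xml (Tomcat): listings must be switched to true on purpose -->
    <servlet>
        <servlet-name>default</servlet-name>
        <servlet-class>org.apache.catalina.servlets.DefaultServlet</servlet-class>
        <init-param>
            <param-name>listings</param-name>
            <param-value>false</param-value>
        </init-param>
    </servlet>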
We have a Java class that is supposed to fetch an HTML file, read some content from it based on the ids of certain divs, and then return that content to a frontend, which will render it.
We have a set of HTML files on a common file system somewhere on the network, which multiple applications will access. It is like a homegrown GUI help guide for our customer-facing screens, with centralized storage.
We have managed to load the HTML file in two ways:
1. Start an Apache web server and put all the HTML files in htdocs. The calling Java class then makes an HTTP call to http://someIP:80/helpguide/userguide.html#firstname, which fetches the help guide related to the FirstName field on the screen. The Apache service has to be managed; it is accessed live but only reachable within our network.
2. Create a shared directory and grant access to the Windows logon used to run the Windows service that runs Tomcat, where the client-facing web application is deployed. The Java client class then uses new File("<file location>") to load the file and read its content. This works as well.
Basically, we have two ways to load the HTML file, and we are unsure whether to use route 1 or route 2.
The HTML files won't be massive; they will be of reasonable size, possibly with inline CSS or YouTube video links embedded in them.
The downside of (2) is that if we want to include images later it won't work, while it should work with (1).
However, in terms of performance and efficiency, how are the two approaches different? (1) will open an HTTP socket connection over port 80 and get the HTML stream back. With (2), it will presumably use a FileInputStream to read the file on the server.
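As a rough sketch, the two code paths look like this (the URL and the UNC share path are placeholders); both end up streaming the same bytes, but (1) adds an HTTP round trip through Apache while (2) goes through the OS file-sharing layer:

    import java.io.*;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.*;

    public class HelpGuideLoader {

        // (1) HTTP round trip through the Apache server (placeholder URL)
        static String loadViaHttp() throws IOException {
            URL url = new URL("http://someIP/helpguide/userguide.html");
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            try (InputStream in = url.openStream()) {
                byte[] chunk = new byte[8192];
                int n;
                while ((n = in.read(chunk)) != -1) {
                    buf.write(chunk, 0, n);
                }
            }
            return buf.toString(StandardCharsets.UTF_8.name());
        }

        // (2) direct read from the shared directory (placeholder UNC path)
        static String loadViaFile() throws IOException {
            Path path = Paths.get("\\\\fileserver\\helpguide\\userguide.html");
            return new String(Files.readAllBytes(path), StandardCharsets.UTF_8);
        }
    }

For files of this size, the measurable difference is mostly the extra TCP connection and HTTP parsing in (1); if the pages are read often, caching the extracted content in the calling application will dwarf either transport cost.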
A web application uploads files (images only) from the client to a predefined location on the server (there is no database), and also saves the same file(s) from the server back to the client's machine.
Process:
1. Upload a file: <input type="file" />
2. Save the files into a predefined location on the server (Java).
3. Download the same files from the server to the client's machine by clicking a save button.
Problem: suppose two users upload different files with the same name at the same time into the predefined (or programmed) server folder.
How should I avoid this kind of naming conflict, and how do I track which file belongs to which client?
Possible solution: while uploading the file from client to server, create one folder for each client and save the file into that client's newly created folder.
Please note that there is no database in the application. Please suggest anything better.
Environment: Java Servlets, Apache Tomcat 6.0, XHTML.
Use the HttpServletRequest.getSession() method to get the client's unique session, and then HttpSession.getId() to get the session's identifier, which you can use when constructing directory/file names.
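A minimal sketch of that idea (the base directory is made up; parsing the multipart upload itself on Tomcat 6 / Servlet 2.5 needs a library such as Commons FileUpload and is elided here):

    import java.io.File;
    import java.io.IOException;
    import javax.servlet.http.HttpServletRequest;

    public class UploadDirs {

        // resolve a per-client folder from the session id (hypothetical base path)
        static File dirForClient(HttpServletRequest request) throws IOException {
            String sessionId = request.getSession().getId(); // unique per client session
            File userDir = new File("/data/uploads", sessionId);
            if (!userDir.exists() && !userDir.mkdirs()) {
                throw new IOException("could not create " + userDir);
            }
            return userDir; // save each upload as new File(userDir, fileName)
        }
    }

Two clients uploading photo.jpg at the same moment then land in /data/uploads/<sessionA>/photo.jpg and /data/uploads/<sessionB>/photo.jpg, so the names no longer collide and the owning client is encoded in the path.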
"create one folder for each client/user."
Seems like the obvious solution to me. Using the session id will result in many more directories being created. If the server-saved images are to be used later (which I assume they are, otherwise what's the point of saving them), a directory structure based on usernames (or similar) would be much less painful to navigate than anything else.
I have a web application which stores users' files in a directory under the webroot.
Suppose the web application is under 'fileupload' and all files are stored in an 'xyz' folder under 'fileupload'. Now if a user points to a URL like
www.xyzpqr.com/fileupload/xyz/abc.doc, he gets that file.
How do I restrict this from happening? I have thought of putting the xyz folder in the WEB-INF folder, but as my application is very big, I would have to make changes in too many places. So is there a way to achieve what I want without moving the folder to WEB-INF (restricted folders)?
In many cases, this is something you can set in the configuration of the web server your files are stored on. You can require passwords for directory access, or restrict things even further than that. Exactly which configuration file you need to look at varies by implementation, but two common ones are httpd.conf and .htaccess.
If you're unsure, it's probably worth it to contact your hosting company and/or network admin.
Hi, if you are using the Apache web server, you can password-protect a directory, but you need to be able to edit/create the .htaccess file.
Here's the solution for WAMP server:
http://php-mysql.develop.sitefrost.com/PHP/security.php
It is very similar on other platforms. When someone wants to access a protected directory, they must first submit a username and password.
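The classic shape of that protection, if memory serves, is a .htaccess such as the following, with the password file created by the htpasswd tool (the AuthUserFile path is a placeholder):

    AuthType Basic
    AuthName "Restricted files"
    AuthUserFile /path/to/.htpasswd
    Require valid-user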
I would suggest creating a filter which examines every URL to see if the user has access to that particular file and denies the request if not.
This gives you full control.
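A minimal sketch of such a filter, assuming the files live under /xyz/* and that the access check itself is your own logic (the session attribute name and mayAccess() are placeholders):

    import java.io.IOException;
    import javax.servlet.*;
    import javax.servlet.http.*;

    // guard the upload folder; map this filter to /xyz/* in web.xml
    public class FileAccessFilter implements Filter {

        public void init(FilterConfig config) { }

        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            HttpServletRequest request = (HttpServletRequest) req;
            HttpServletResponse response = (HttpServletResponse) res;

            // placeholder check: deny unless the logged-in user owns the file
            Object user = request.getSession().getAttribute("user");
            if (user == null || !mayAccess(user, request.getRequestURI())) {
                response.sendError(HttpServletResponse.SC_FORBIDDEN);
                return;
            }
            chain.doFilter(req, res); // allowed: let the container serve the file
        }

        private boolean mayAccess(Object user, String uri) {
            return true; // replace with your ownership/permission rules
        }

        public void destroy() { }
    }

The matching web.xml filter-mapping with url-pattern /xyz/* is what ensures every direct URL hit on the folder passes through this check first.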