I am creating a series of websites that will share a common Java code base but will each have a different look and feel, as well as make slightly different calls to a database. Each site will have a unique URL (www.siteA.com, www.siteB.com).
The necessary database information is stored in properties files that appear to be loaded when the applications are deployed (to a JBoss 4.2.3 server). The CSS and images are in static folders.
What I want:
The user enters www.siteA.com
The "unbranded" site is initialized
Java (or whatever needs to) checks the URL to see which files to load
siteA.properties and siteA.css are loaded from the siteA resources folder
siteA's customized site is served to the client
If www.siteB.com is entered, all of its info would be loaded. When I want to add a new site C, I will just create a siteC resources folder, put the siteC versions of the properties and CSS in it, and the underlying common code should take care of noticing that www.siteC.com was entered and grab from the new folder. All of this should happen without having to redeploy any of the elements common to all the sites.
I think I've mostly figured out how to get the CSS/images side of this working, but I can't get the properties files loaded this way.
Is this even possible? I haven't even been able to find a high-level discussion of the process.
Why don't you look up the Host HTTP header and output the relevant information for each server using a PHP script? You can output the common content using PHP's file() function to read an HTML file stored somewhere on the server.
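In a Java web app the equivalent of that idea is a servlet filter that reads the Host header (request.getServerName()) and picks the matching resources folder at request time. A minimal sketch, assuming the per-site folders live under an external path the container can read (all names illustrative):

// Hypothetical sketch: load siteX.properties based on the Host header, from a
// folder outside the deployed WAR, so adding siteC means adding a folder and a
// DNS entry - no redeploy of the common code.
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;

public class BrandingFilter implements Filter {

    private static final String RESOURCE_ROOT = "/opt/sites"; // assumed external folder

    public void init(FilterConfig config) { }

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        String host = ((HttpServletRequest) req).getServerName();                    // "www.siteA.com"
        String site = host.replaceFirst("^www\\.", "").replaceFirst("\\.com$", "");  // "siteA"

        Properties props = new Properties();
        FileInputStream in = new FileInputStream(
                RESOURCE_ROOT + "/" + site + "/" + site + ".properties");
        try {
            props.load(in);
        } finally {
            in.close();
        }

        req.setAttribute("siteProperties", props); // JSPs/actions read branding and DB info here
        chain.doFilter(req, res);
    }

    public void destroy() { }
}

A real version would cache the Properties per site instead of re-reading them on every request, but the lookup itself is this simple.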
I am working on some security alerts on one of our servers, whereby a 'file download' JSP file lets a user download the contents of WEB-INF for the web application (which is located in the root folder of the site). It is a very crude, simple file, written in 2007, that uses java.io.FileInputStream on unsanitised input to return a file to the user.
The alert actually claimed that this was a directory traversal problem, which it is in one way, as the following URI would download the web.xml for the user:
http://domain.com/filedownload.jsp?filename=../../WEB-INF/web.xml&filepath=some/directory/
Now obviously the 'directory traversal' part should be corrected by sanitising the user input (which this script does not yet do). However, the following URI also delivers the web.xml to the user, and input sanitisation for directory traversal would not help here, unless the sanitisation checks for 'WEB-INF' and other 'illegal' directories...
http://domain.com/filedownload.jsp?filename=web.xml&filepath=WEB-INF/
Is there a standardised way to prevent this in common servlet containers or does this need to be entirely managed by the developer of the code? I noticed that the Java 'normalize()' function would not strip out this directory from the user input.
I tried searching for an answer to this, but all I could find was information about preventing the 'serving' of WEB-INF directly, and nothing about preventing it from being accessed from a JSP file itself.
Thanks,
Tom...
You say the JSP page is using java.io.FileInputStream to read the file. That is a standard Java class that is not aware of the fact that it is running inside a servlet container.
So java.io.FileInputStream will be able to access any file that can be accessed by the user account the servlet container (JVM) process is running under. There's nothing you can configure in the servlet container to prevent that.
You might like to make sure that files in other areas of the filesystem completely unrelated to the servlet container can't be accessed, e.g. "/etc/passwd".
Assuming you're running on Linux, what does this URL do:
http://domain.com/filedownload.jsp?filename=passwd&filepath=/etc/
If it does return the file, you've got a bigger problem! Perhaps the security software (not sure what you're using?) that created the alerts will prevent the download. If not, operating system file permissions can help, as long as the web server isn't running under root or another privileged account, but that's a short-term emergency fix only.
So no, there is no standardised way to prevent this in common servlet containers, and yes, it does need to be entirely managed by the developer of the code.
When using java.io.FileInputStream, it's the responsibility of the writer/maintainer of the JSP page to ensure that only valid paths are accessed.
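The usual fix is to anchor all downloads to one base directory that contains nothing sensitive, canonicalise the requested path, and refuse anything that resolves outside that base. A sketch (the base directory is an assumption):

// Canonical-path check: "../" sequences can no longer climb out of BASE_DIR,
// and "WEB-INF/web.xml" no longer means the webapp's WEB-INF because requests
// are resolved against BASE_DIR rather than the webapp root.
import java.io.File;
import java.io.IOException;

public class SafeDownload {

    private static final File BASE_DIR = new File("/var/www/downloads"); // assumed

    public static File resolve(String filepath, String filename) throws IOException {
        File requested = new File(new File(BASE_DIR, filepath), filename);
        String canonical = requested.getCanonicalPath();
        if (!canonical.startsWith(BASE_DIR.getCanonicalPath() + File.separator)) {
            throw new IOException("Refusing path outside download directory: " + canonical);
        }
        return requested;
    }
}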
Task: Copy Folder and contents from one vdi to another vdi. This application is internally facing within the company.
Method:
In a JSP, have the user browse for a folder
The folder selection is in a text box; the folder path is passed into an action class
The folder path is placed into a teradata table
A script is called to query the table for the source path and target path (pre-determined) and make the copy
Due Diligence: So far I have tried <input type="file">, which selects a file, not a folder. Also, the file path is not passed through, for security reasons. I have read other possible solutions but none work.
Question: Are servlets a viable solution, and if so, how do I create one?
I'm going to go with no. There are several reasons for this.
A Java Enterprise Edition application (be it a Servlet or Java Server Page) is not supposed to access the file system directly.
It is inherently unsafe to expose internal infrastructure through an external website.
I think you need to break it up a bit more.
Save a list of the shares the server has access to in a data store of some sort, like a new Teradata table or, for a quick proof of concept, a plain text file (if you're on Linux you can use the output of something like showmount -e localhost); a proof-of-concept sketch follows below.
Let the user pick the src share from a combobox or something similar.
Continue from your step 2.
This gives you two immediately obvious advantages, which may or may not be relevant.
You can use the system without having access to the physical shares.
You can add metadata (like description or aliases).
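As a quick proof of concept of the first two steps, something like this (file locations and JSP name hypothetical):

// Read the allowed shares from a plain text file (e.g. one share per line,
// produced from "showmount -e localhost") and hand them to the JSP, so the
// user only ever picks from a fixed list instead of typing a raw path.
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ShareListServlet extends HttpServlet {

    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        List<String> shares = Files.readAllLines(
                Paths.get("/etc/copyapp/shares.txt"), StandardCharsets.UTF_8);
        req.setAttribute("shares", shares);
        // pickShare.jsp renders <select name="srcShare"> with one <option> per share
        req.getRequestDispatcher("/pickShare.jsp").forward(req, resp);
    }
}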
This was a question about testing file-upload functionality using a local Java server on the Windows 7 platform. Since the question evolved with Marko's input, I have edited it so that those who run into the same challenge do not waste time on the evolution details and reach conclusions sooner.
The challenge was to direct an uploaded file to a folder outside of the WAR structure and successfully read it from there. For example: upload an image into c:/tmp/ and then redirect to a confirmation page that displays the image with <img src="c:/tmp/test.jpg" />. The upload worked, but the image would not be displayed. Based on Marko's input, this makes sense, because a browser sitting at localhost will refuse to load anything from the local disk structure using c:. Maybe these are security considerations similar to those with the file input control, where we cannot set a default path...
The following tag will work in a locally created .html file, but when pasted into a JSP it won't work. The difference is that the browser uses localhost to get to the JSP.
<img src="c:/tmp/test.jpg" />
Solutions
I think that Marko's answer pretty much defines what needs to be done. While I didn't go with that approach, it clearly is the better way to do it and I will accept that as the answer. Thanks, Marko!
For those who don't want to bother installing a web server and are willing to live with a bit of a hack, here's what I have done. Again, I didn't want to upload files into my WAR structure because I would then need to remember to clear that folder before deploying to the server. But that upload folder still needs to be accessible, so I simply created another dummy project and put the upload folder under its WebContent. This works for the purposes of my local testing. The only nuisance is that after uploading a file, I need to refresh the dummy project's WebContent in Eclipse.
config.properties
#for uploading files
fileUploadDirectory=C:/javawork/modelsite/tmp/WebContent
#for building html links
publicFileServicePrefix=http://localhost:8080/tmp
<img src="http://localhost:8080/tmp/test.jpg" /> // this works - tmp is the name of my dummy project.
If you are citing literally the HTML that goes to the browser (the one that you access via "view source"), then this has nothing to do with Java. The browser is the one who interprets these links. If they fail to load, the problem is in the browser/file system.
UPDATE
According to the results of your additional diagnostics, I conclude that the browser (sensibly!) refuses to load anything from your local disk if it is referenced from an HTML file coming from an internet URL, even when that URL is localhost.
UPDATE 3
However you handle the files uploaded to the server, it's definitely not going to look like your solution -- the file is on the server's local filesystem, not the client's. This sort of thing can be handled at the Apache HTTP server level -- reserve a URL section for static content and configure Apache with a base directory from which to serve it. Even if you run the server locally, on the same machine where you test it, you still need to go through the network interface.
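For example, a fragment along these lines in httpd.conf (Apache 2.2 syntax; the paths are placeholders) reserves such a URL section:

# Serve everything under /var/www/uploads at http://yourhost/uploads/...
Alias /uploads "/var/www/uploads"
<Directory "/var/www/uploads">
    Order allow,deny
    Allow from all
</Directory>

The application then writes uploads to /var/www/uploads and emits links starting with /uploads/, and the browser fetches them over HTTP like any other static resource.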
I need to write a Java client application which, when given the below URL, will enumerate the directories/files recursively beneath it. I also need to get the last modified timestamp for each since I'm only concerned with changes since a known timestamp.
http://www.myserver.com/testproduct/
For example, suppose the following exist on the server.
http://www.myserver.com/testproduct/red/file1.txt
http://www.myserver.com/testproduct/red/file2.txt
http://www.myserver.com/testproduct/red/black/file3.txt
http://www.myserver.com/testproduct/red/black/file4.txt
http://www.myserver.com/testproduct/orange/anotherfile.html
http://www.myserver.com/testproduct/orange/mymovie.avi
http://www.myserver.com/testproduct/readme.txt
I need to, starting at the specified URL (http://www.myserver.com/testproduct/) enumerate the directories and files recursively beneath it along with the last modified timestamp of each. Once I have the list of directories/files, I'll be selectively downloading some of the files based on timestamp and other client-side filters.
The server is running Apache and is configured to allow directory listing.
I did some experimentation using Apache's HttpClient Java class, and when I request the contents of http://www.myserver.com/testproduct/ I get back an HTML file, which of course is the same thing you see if you go there in your browser: an HTML page showing the contents of the folder.
Is this the only way to do it, i.e. scraping the resulting HTML page to parse out the files and directories? Also, I'm not sure I can reliably distinguish files from directories based on the HTML returned.
Is there a better way to enumerate directories and files without page scraping the resultant HTML?
If you have any control over the server, you should ask them to implement WebDAV, which is meant for precisely this sort of scenario. Apache comes with mod_dav, which just needs to be configured. On the Java client side, see this question.
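For illustration, a client-side sketch using the Sardine WebDAV library (a third-party dependency named here as an assumption; any WebDAV client would do). PROPFIND gives you names, directory flags and last-modified dates directly, with no HTML scraping:

import java.util.List;
import com.github.sardine.DavResource;
import com.github.sardine.Sardine;
import com.github.sardine.SardineFactory;

public class DavLister {

    // Recursively print every file under the given collection with its
    // last-modified date.
    public static void list(Sardine sardine, String url) throws Exception {
        List<DavResource> resources = sardine.list(url);
        for (DavResource res : resources) {
            if (url.endsWith(res.getPath())) {
                continue; // the listing includes the collection itself; skip it
            }
            if (res.isDirectory()) {
                list(sardine, url + res.getName() + "/");
            } else {
                System.out.println(res.getPath() + "  " + res.getModified());
            }
        }
    }

    public static void main(String[] args) throws Exception {
        list(SardineFactory.begin(), "http://www.myserver.com/testproduct/");
    }
}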
If your application is not on the same machine as the server, then there isn't much you can do besides scraping the data you're looking for. If you know about all of the files that exist on your server, then you can just issue web requests for each file and you will get them. However, if you only know about the root path or a single product page, then you will essentially have to crawl the web site and extract the links from its pages, selecting a URL to crawl only if it's on the same host and you haven't seen/crawled it before; a sketch of such a crawl follows the example below.
For example:
if http://www.myserver.com/testproduct/ contains links to
http://www.myserver.com/testproduct/red/file1.txt
http://www.myserver.com/testproduct/red/file2.txt
http://www.devboost.com/
http://www.myspace.com/
http://blog.devboost.com/
http://beta.devboost.com/
http://www.myserver.com/testproduct/red/file2.txt
Then you would ignore any link that does not start with the host www.myserver.com.
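A sketch of that crawl, using jsoup for the link extraction (an assumed dependency; any HTML parser would do):

// Breadth-first walk of the Apache directory listing: follow only links under
// the root URL that haven't been seen; trailing-slash URLs are directories.
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Element;

public class ListingCrawler {

    public static void main(String[] args) throws Exception {
        String root = "http://www.myserver.com/testproduct/";
        Deque<String> queue = new ArrayDeque<String>();
        Set<String> seen = new HashSet<String>();
        queue.add(root);
        seen.add(root);

        while (!queue.isEmpty()) {
            String url = queue.poll();
            for (Element a : Jsoup.connect(url).get().select("a[href]")) {
                String next = a.absUrl("href");
                if (next.startsWith(root) && seen.add(next)) {
                    if (next.endsWith("/")) {
                        queue.add(next);          // sub-directory: crawl it too
                    } else {
                        System.out.println(next); // candidate file to download
                    }
                }
            }
        }
    }
}

Restricting to links under the root URL also skips the "Parent Directory" link that Apache puts at the top of each listing.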
Regarding directories and timestamps: as pointed out in the comments, HTTP does not support directory browsing, and if you're trying to get the timestamp when a file was last modified, you're out of luck there too.
More importantly, I don't know how much it would benefit you to know that a file has not changed when that file is generating dynamic content. For example: it's extremely likely that the file which is responsible for displaying a product page hasn't changed in a LONG time. Usually the same file will be responsible for displaying all of the products in the database, especially if it's part of an MVC-type framework. In other words: you would have to parse the HTML and determine if there are any changes which you care about, then process the file accordingly.
I have developed a command-line (read: no GUI) Java application which crunches through numbers based on a given dataset and a series of parameters, and spits out a series of HTML files as the resultant reports. These reports hold a large amount of data in tables, so in order to give the users an easy and quick overview of the results, I utilized the JUNG2 library and created a nice graph.
Here's where it gets interesting: since I would like the graph to be interactive, it should be deployed after the application has run and the files are generated, whenever the user wants to view the reports. I decided to go with an applet-based deployment; however, I am not too happy with the current setup, for the following reasons:
I want to make the software as simple to use as possible (my users won't be tech-savvy, and will even be tech-intimidated in most cases). I would really like to distribute one JAR only, which forced me to put the applet, with everything else it needs, in a package in the same JAR as the main application.
The applet and the main application need to communicate the results, so I create an XML-based report which is used to hold the information. As long as the files are on a local machine and are not moved around, it all works fine. Unfortunately, I also need the files to be movable: a user should be able to take the "results" folder to a USB stick, go anywhere, plug the stick into another computer, and be able to use the report as he/she likes.
For the time being the applets are implemented with the following html code:
<applet code="package.myapp.visualization.GraphApplet.class"
codebase="file:/home/user/myApp"
archive="myApp-0.2.6-r28.jar"
width="750" height="750">
<param name="input" value="results/test_name/results.fxml">
</applet>
As you can see this applet will not work if the parent folder is moved to another location.
As far as I know I have a couple of alternatives:
a) Change the codebase to point to a URL on our webserver where I could put the jar file. This however creates a problem with permissions, as the applet will not be able to read the results file. An alternative is to upload the results file to the server when the user wants to visualize the graph, although I am not sure that's a good option, due to server security and because I don't know whether the upload could happen automatically without bothering the user.
b) I can use a relative path in the codebase attribute, but then the whole folder hierarchy needs to stay intact upon copy. This could be a last resort if I can't come up with a better way to do it.
c) Change the deployment method (I would like to avoid this alternative, so as not to spend more time on the development phase).
Any ideas? Am I missing something? How could I tackle this problem?
Thanks,
I'm not sure I entirely understand your use-case, but from what I do understand, I would suggest this:
Dump the applet in favour of an application launched using Java Web Start. Have the JNLP file declare a file association for the fxml file type. When the user double-clicks an fxml file, it will be passed as an argument to the main(String[]) method of the JWS application.
A sandboxed JWS application can gain access to resources on the local file system using the JNLP API. Here is my demo of the JNLP API file services.
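For reference, a sketch of the JNLP side of that suggestion (codebase, main class and mime-type are made-up placeholders):

<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical graphviewer.jnlp: Java Web Start launches the viewer, and the
     association element registers it for double-clicked .fxml reports. -->
<jnlp spec="6.0+" codebase="http://www.example.com/myapp" href="graphviewer.jnlp">
  <information>
    <title>Report Graph Viewer</title>
    <vendor>Your Company</vendor>
    <offline-allowed/>
    <association extensions="fxml" mime-type="application/x-fxml"/>
  </information>
  <resources>
    <j2se version="1.6+"/>
    <jar href="myApp-0.2.6-r28.jar" main="true"/>
  </resources>
  <application-desc main-class="package.myapp.visualization.GraphViewer"/>
</jnlp>

When the user double-clicks an associated file, Web Start passes the file path (preceded by "-open") as the arguments to main(String[]).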