Intelligently serving jar files from a web server - java

I am writing a simple (generic) wrapper Java class that will execute on various computers separate from a deployed web server. I want to download, from that associated web server (currently Jetty 8), the latest version of the jar file that contains the application.
I have code like this:
// Get the jar URL which contains the application
URL jarFileURL = new URL("jar:http://localhost:8081/myapplication.jar!/");
JarURLConnection jcl = (JarURLConnection) jarFileURL.openConnection();
Attributes attr = jcl.getMainAttributes();
String mainClass = (attr != null)
? attr.getValue(Attributes.Name.MAIN_CLASS)
: null;
if (mainClass != null) // launch the program
This works well, except that myapplication.jar is a large jar file (a OneJar jar file, so a lot is packed in there). I would like this to be as efficient as possible. The jar file isn't going to change very often.
1. Can the jar file be saved to disk (I see how to get a JarFile object, but not how to save it)?
2. More importantly, but related to #1, can the jar file be cached somehow?
2.1 Can I (easily) request the MD5 of the jar file on the web server and only download it when that has changed?
2.2 If not, is there another caching mechanism, maybe requesting only the manifest? Version/build info could be stored there.
If anyone has done something similar, could you sketch out what to do in as much detail as possible?
UPDATES PER INITIAL RESPONSES
The suggestion is to use an If-Modified-Since header in the request and the openStream method on the URL to get the jar file to save.
Based on this feedback, I have added one critical piece of info and some more focused questions.
The Java program I am describing above runs the program downloaded from the referenced jar file. That program runs for anywhere from around 30 seconds to maybe 5 minutes or so, then it is done and exits. Some users may run it multiple times per day (say even up to 100 times), others may run it as infrequently as once every other week. Either way it should be smart enough to know whether it has the most current version of the jar file.
More Focused Questions:
Will the If-Modified-Since header still work in this usage? If so, will I need completely different code to add it? That is, can you show me how to modify the code presented to include it? The same question applies to saving the jar file: ultimately I am really surprised (frustrated!) that I can get a JarFile object but have no way to persist it. Will I even need the JarURLConnection class at all?
Bounty Question
I didn't initially realize the precise question I was trying to ask. It is this:
How can I save a jar file from a web server locally in a command-line program that exits and ONLY update that jar file when it has been changed on the server?
Any answer that, via code examples, shows how that may be done will be awarded the bounty.

Yes, the file can be saved to disk; you can get the input stream using the openStream() method of the URL class.
As per the comment by @fge, there is also a way to detect whether the file has been modified.
Sample Code:
// Requires imports from java.io, java.net, java.nio.file and java.util.jar
private void launch() throws IOException {
    String jarName = "myapplication.jar";
    // Plain HTTP URL used for the conditional download into the cache
    String strUrl = "http://localhost:8081/" + jarName;
    Path cacheDir = Paths.get("cache");
    Files.createDirectories(cacheDir);
    Path cachedJar = fetchUrl(cacheDir, jarName, strUrl);

    // Open the locally cached copy through the jar: protocol to read its manifest
    URL jarFileURL = new URL("jar:" + cachedJar.toUri().toURL() + "!/");
    JarURLConnection jcl = (JarURLConnection) jarFileURL.openConnection();
    Attributes attr = jcl.getMainAttributes();
    String mainClass = (attr != null) ? attr.getValue(Attributes.Name.MAIN_CLASS) : null;
    if (mainClass != null) {
        // launch the program
    }
}

private Path fetchUrl(Path cacheDir, String title, String strUrl) throws IOException {
    Path cacheFile = cacheDir.resolve(title);
    Path cacheFileDate = cacheDir.resolve(title + "_date");
    URL url = new URL(strUrl);
    URLConnection connection = url.openConnection();
    if (Files.exists(cacheFile) && Files.exists(cacheFileDate)) {
        // Echo back the Last-Modified value stored on the previous download
        String dateValue = Files.readAllLines(cacheFileDate).get(0);
        connection.addRequestProperty("If-Modified-Since", dateValue);
        String httpStatus = connection.getHeaderField(0); // status line, e.g. "HTTP/1.1 304 Not Modified"
        if (httpStatus.indexOf(" 304 ") == -1) { // assuming that we get status 200 here instead
            writeFiles(connection, cacheFile, cacheFileDate);
        } else { // not modified, so do not do anything, we return the cached file
            System.out.println("Using cached file");
        }
    } else {
        writeFiles(connection, cacheFile, cacheFileDate);
    }
    return cacheFile;
}

private void writeFiles(URLConnection connection, Path cacheFile, Path cacheFileDate) throws IOException {
    System.out.println("Creating cache entry");
    try (InputStream inputStream = connection.getInputStream()) {
        Files.copy(inputStream, cacheFile, StandardCopyOption.REPLACE_EXISTING);
    }
    // Remember the server's Last-Modified value so it can be sent back next time
    String lastModified = connection.getHeaderField("Last-Modified");
    Files.write(cacheFileDate, lastModified.getBytes());
    System.out.println(connection.getHeaderFields());
}
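One way to fill in the "launch the program" part (an illustrative sketch, not part of the original answer) is to load the cached jar with a URLClassLoader and invoke the manifest's main class reflectively; cachedJar is the Path returned by fetchUrl() and mainClass is the manifest value read above:
import java.lang.reflect.Method;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Path;

// Illustrative sketch: run the downloaded application from the locally cached jar.
public final class CachedJarLauncher {

    public static void run(Path cachedJar, String mainClass, String[] args) throws Exception {
        URL[] classpath = { cachedJar.toUri().toURL() };
        try (URLClassLoader loader = new URLClassLoader(classpath,
                CachedJarLauncher.class.getClassLoader())) {
            Class<?> clazz = Class.forName(mainClass, true, loader);
            Method main = clazz.getMethod("main", String[].class);
            main.invoke(null, (Object) args); // blocks until the application's main() returns
        }
    }
}
Since the jar in question is a OneJar archive, simply spawning a separate process with java -jar on the cached file would work as well.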

How can I save a jar file from a web server locally in a command-line program that exits and ONLY update that jar file when it has been changed on the server?
With JWS (Java Web Start). It has an API, so you can control it from your existing code. It already handles versioning and caching, and it comes with a JAR-serving servlet. A rough sketch of the relevant API is below.
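For illustration only, here is a rough sketch of the javax.jnlp DownloadService API, under the assumption that the wrapper itself is launched through Web Start and that the jar URL below matches your server; Web Start itself manages the cache and decides whether a cached copy is still current when the application is started from its JNLP file:
import java.net.URL;
import javax.jnlp.DownloadService;
import javax.jnlp.DownloadServiceListener;
import javax.jnlp.ServiceManager;

public final class JnlpJarFetcher {

    public static void fetchApplicationJar() throws Exception {
        // Only available when this code runs under Java Web Start
        DownloadService service =
                (DownloadService) ServiceManager.lookup("javax.jnlp.DownloadService");

        URL jarUrl = new URL("http://localhost:8081/myapplication.jar"); // assumed location
        DownloadServiceListener progress = service.getDefaultProgressWindow();

        if (!service.isResourceCached(jarUrl, null)) {
            // Not in the Web Start cache yet: download it into the cache
            service.loadResource(jarUrl, null, progress);
        }
        // From here on, the jar is served from the Web Start cache
    }
}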

I have assumed that a .md5 file will be available both locally and at the web server. The same logic applies if you want this to be a version-control file instead. The URLs given in the following code need to be updated according to your web server location and app context. Here is how your command-line code would go:
public class Main {
    public static void main(String[] args) {
        String jarPath = "/Users/nrj/Downloads/local/";
        String jarfile = "apache-storm-0.9.3.tar.gz";
        String md5File = jarfile + ".md5";
        try {
            // Update the URL to your real server location and application context
            URL url = new URL("http://localhost:8090/JarServer/myjar?hash=md5&file="
                    + URLEncoder.encode(jarfile, "UTF-8"));

            // Get the md5 value from the server
            BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream()));
            String servermd5 = in.readLine();
            in.close();

            // Read the local md5 file (assumed to already exist next to the local jar)
            in = new BufferedReader(new FileReader(jarPath + md5File));
            String localmd5 = in.readLine();
            in.close();

            // Compare
            if (null != servermd5 && null != localmd5
                    && localmd5.trim().equals(servermd5.trim())) {
                // TODO - Execute the existing jar
            } else {
                // Rename the old jar
                if (!(new File(jarPath + jarfile).renameTo(new File(jarPath + jarfile
                        + String.valueOf(System.currentTimeMillis()))))) {
                    System.err.println("Unable to rename old jar file.. please check write access");
                }
                // Download the new jar
                System.out.println("New jar file found...downloading from server");
                url = new URL("http://localhost:8090/JarServer/myjar?download=1&file="
                        + URLEncoder.encode(jarfile, "UTF-8"));
                byte[] buf = new byte[10240];
                int byteRead;
                BufferedOutputStream outStream = new BufferedOutputStream(
                        new FileOutputStream(jarPath + jarfile));
                InputStream is = url.openConnection().getInputStream();
                while ((byteRead = is.read(buf)) != -1) {
                    outStream.write(buf, 0, byteRead);
                }
                is.close();
                outStream.close();
                System.out.println("Downloaded Successfully.");

                // Now update the local md5 file with the new md5
                BufferedWriter bw = new BufferedWriter(new FileWriter(jarPath + md5File));
                bw.write(servermd5);
                bw.close();
                // TODO - Execute the jar, it is saved in the same path
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
And just in case you have control over the servlet code as well, this is how the servlet code goes:
@WebServlet(name = "jarservlet", urlPatterns = { "/myjar" })
public class JarServlet extends HttpServlet {
    private static final long serialVersionUID = 1L;

    // Remember to have a '/' at the end, otherwise the code will fail
    private static final String PATH_TO_FILES = "/Users/nrj/Downloads/";

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        String fileName = req.getParameter("file");
        if (null != fileName) {
            fileName = URLDecoder.decode(fileName, "UTF-8");
        }
        String hash = req.getParameter("hash");
        if (null != hash && hash.equalsIgnoreCase("md5")) {
            resp.getWriter().write(readMd5Hash(fileName));
            return;
        }
        String download = req.getParameter("download");
        if (null != download) {
            InputStream fis = new FileInputStream(PATH_TO_FILES + fileName);
            String mimeType = getServletContext().getMimeType(PATH_TO_FILES + fileName);
            resp.setContentType(mimeType != null ? mimeType : "application/octet-stream");
            resp.setContentLength((int) new File(PATH_TO_FILES + fileName).length());
            resp.setHeader("Content-Disposition", "attachment; filename=\"" + fileName + "\"");
            ServletOutputStream os = resp.getOutputStream();
            byte[] bufferData = new byte[10240];
            int read;
            while ((read = fis.read(bufferData)) != -1) {
                os.write(bufferData, 0, read);
            }
            os.close();
            fis.close();
            // Download finished
        }
    }

    private String readMd5Hash(String fileName) {
        // We are assuming there is a .md5 file present for each file,
        // so we read the hash file to return the hash
        try (BufferedReader br = new BufferedReader(new FileReader(
                PATH_TO_FILES + fileName + ".md5"))) {
            return br.readLine();
        } catch (IOException e) {
            e.printStackTrace();
        }
        return null;
    }
}

I can share the experience of solving the same problem in our team. We have several desktop products written in Java which are updated regularly.
A couple of years ago we had a separate update server for every product and the following update process: the client application has an updater wrapper that starts before the main logic and is stored in updater.jar. Before starting, the application sends a request to the update server with the MD5 hash of the application.jar file. The server compares the received hash with the one it has, and sends a new jar file to the updater if the hashes differ.
But after many cases where we got confused about which build was actually in production, and after update-server failures, we switched to continuous integration with TeamCity on top of it.
Every commit made by a developer is now tracked by the build server. After compilation and the tests pass, the build server assigns a build number to the application and shares the app distribution on the local network.
The update server is now a simple web server with a special structure of static files:
$WEB_SERVER_HOME/
    application-builds/
        987/
        988/
        989/
            libs/
                app.jar
                ...
            changes.txt      <- files that changed since the last build
        lastversion.txt      <- last build number
The updater on the client side requests lastversion.txt via HttpClient, retrieves the last build number and compares it with the client's build number stored in manifest.mf.
If an update is required, the updater harvests all changes made since the last update by iterating over the application-builds/$BUILD_NUM/changes.txt files. After that, the updater downloads the harvested list of files. These can be jar files, config files, additional resources, etc. A minimal sketch of the version check is shown below.
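For illustration, a minimal sketch of that client-side check might look like the following; the server URL and the Build-Number manifest attribute are assumptions, not the actual simple-updater code:
import java.io.InputStream;
import java.net.URL;
import java.util.Scanner;
import java.util.jar.Manifest;

public final class UpdateCheck {
    // Hypothetical update-server location
    private static final String UPDATE_ROOT = "http://updates.example.com/application-builds/";

    public static boolean updateRequired() throws Exception {
        // Build number this client was compiled with, taken from its own manifest
        // (may need adjusting if several jars are on the class path)
        int localBuild;
        try (InputStream in = UpdateCheck.class.getResourceAsStream("/META-INF/MANIFEST.MF")) {
            Manifest mf = new Manifest(in);
            localBuild = Integer.parseInt(mf.getMainAttributes().getValue("Build-Number"));
        }

        // Latest build number published by the build server
        int serverBuild;
        try (Scanner sc = new Scanner(new URL(UPDATE_ROOT + "lastversion.txt").openStream())) {
            serverBuild = Integer.parseInt(sc.nextLine().trim());
        }

        return serverBuild > localBuild;
    }
}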
This scheme may seem complex on the client-updater side, but in practice it is very clear and robust.
There is also a bash script that composes the structure of files on the update server. The script queries TeamCity every minute for new builds and calculates the diff between builds. We are also now upgrading this solution to integrate with a project-management system (Redmine, YouTrack or Jira). The aim is to let the product manager mark the builds that are approved for update.
UPDATE.
I've moved our updater to github, check here: github.com/ancalled/simple-updater
The project contains the updater client in Java, server-side bash scripts (which retrieve updates from the build server) and a sample application to test updates on.

Related

How do I find a file path given only the name of the file?

I am currently working on transferring a file from a server to a client via a TCP connection. The file exists somewhere within a shared root directory on the server. This is the sample code of my upload method for the server-to-client upload.
public void upload(String filename, DataOutputStream out) throws IOException {
    File fname = null;
    if (filename.contains(sharedroot)) { // the client provided a proper file path with the file name
        fname = new File(filename);
    } else { // the client only provided a file name without a path
        fname = new File(filename);
        // "..\\..\\"+ // I was working around with this, but somehow just making the file,
        // whether or not it contains the sharedroot, seems to give me the "best" output so far...
    }
    System.out.println(fname.getCanonicalPath());
    if (fname.isDirectory()) {
        System.out.println("File is a directory");
        String quit = "404 not found";
        sendOut(quit, out);
        return;
    }
    String path = fname.getAbsolutePath();
    System.out.println(path);
    if (fname.isFile()) {
        String canonpath = fname.getCanonicalPath();
        if (canonpath.contains(sharedroot)) {
            try {
                FileInputStream in = new FileInputStream(fname);
                byte[] buffer = new byte[1024];
                out.writeInt(fname.getName().length());
                // writes the file name only, not including the path
                out.write(fname.getName().getBytes(), 0, fname.getName().length());
                long size = fname.length();
                out.writeLong(size);
                while (size > 0) {
                    int len = in.read(buffer, 0, buffer.length);
                    out.write(buffer, 0, len);
                    size -= len;
                }
            } catch (Exception e) {
                System.out.println("Error occurred in uploading file to client. Please try again");
            }
        } else {
            System.out.println("File not in shared directory");
            String quit = "404 not found";
            sendOut(quit, out);
        }
    } else {
        System.out.println("File not exists");
        String quit = "404 not found";
        sendOut(quit, out);
    }
}
The output given by getCanonicalPath() and getAbsolutePath(), as seen below, is wrong because it resolves inside my Eclipse workspace directory and not the shared root directory. How can I get the file path of the file so that I can compare it to my shared root and ensure it exists within the shared root? The shared root would be, for example: D:\seant\2uniFiles\1. FINE2005 Year 3
D:\seant\eclipse-workspace\DCN3005\Lecture 1 Exercise.pdf
D:\seant\eclipse-workspace\DCN3005\Lecture 1 Exercise.pdf
File not exists
Your creation of File does not specify a dedicated directory. There are two constructors taking a (root) directory and a file name – one takes the directory as a File, the other as a String. I assume one of your paths is relative, but your else-branch creates the file the same way as the fully qualified path. You should instead pass the sharedRoot as the first parameter and the fileName as the second.
File fname = null;
// sharedRoot is more like a constant, and startsWith
// avoids reading somewhere else that looks similar
if (filename.startsWith(sharedRoot)) {
    fname = new File(filename);
} else {
    fname = new File(sharedRoot, filename);
}
In all other cases, relative paths are relative to the working directory of the VM process – and I mean process. If, for example, a user starts this in the user's HOME directory, it'll be relative to that. If an operating-system task starts the VM, it'll be relative to that OS process's working directory – which might be a Unix cron job or a Windows scheduled task.
Maybe introduce some sort of configuration for sharedRoot so you don't need to recompile if it changes in the future. A small sketch of a robust containment check follows below.
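For illustration, the containment check can be made more robust with getCanonicalFile() instead of contains(), which matches a substring anywhere in the path; the class and method names here are made up for the example:
import java.io.File;
import java.io.IOException;

public final class SharedRootCheck {

    // Returns the file resolved against the shared root, or null if the
    // canonical path escapes the shared root (e.g. via "..").
    public static File resolveInsideRoot(String sharedRoot, String filename) throws IOException {
        File root = new File(sharedRoot).getCanonicalFile();
        File requested = new File(root, filename).getCanonicalFile();
        if (!requested.getPath().startsWith(root.getPath() + File.separator)) {
            return null;
        }
        return requested;
    }
}
In upload() you would then send the "404 not found" reply whenever resolveInsideRoot(sharedroot, filename) returns null.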

SFTP Mule Client Java API - User Login Time out Issue

I am working on Java code to SFTP a file from an HDFS filesystem to a remote server. It works fine for smaller files, less than 200 MB. For larger files I get the following error.
17/08/08 02:44:49 ERROR sftp.SftpClient: Error writing data over SFTP service, error was: Failure
4: Failure
at com.jcraft.jsch.ChannelSftp.throwStatusError(ChannelSftp.java:2289)
at com.jcraft.jsch.ChannelSftp.checkStatus(ChannelSftp.java:1937)
at com.jcraft.jsch.ChannelSftp._put(ChannelSftp.java:541)
at com.jcraft.jsch.ChannelSftp.put(ChannelSftp.java:439)
at com.jcraft.jsch.ChannelSftp.put(ChannelSftp.java:406)
My code is as follows:
public static void sendFile(String targetDirectory, String sourceFileWithFullPath) throws IOException {
    SftpClient client = new SftpClient("karthick");
    BufferedInputStream bis = null;
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    FSDataInputStream fsdisPath = null;
    String filePath = null;
    try {
        filePath = sourceFileWithFullPath;
        Path inputPath = new Path(filePath);
        fsdisPath = fs.open(inputPath);
        bis = new BufferedInputStream(fsdisPath);
        client.login("karthick", "/karthick/id_rsa", null);
        client.changeWorkingDirectory(targetDirectory);
        client.storeFile(inputPath.getName(), bis);
        System.out.println("The actual path is " + client.getAbsolutePath(sourceFileWithFullPath));
    } finally {
        if (client != null) {
            client.disconnect();
        }
        if (bis != null) {
            bis.close();
        }
    }
}
I ensured that I have enough disk space, no memory issues and all the required permissions. What other possible ways are there to avoid this issue? I am planning to use this utility to copy 500 GB files. I am new to Java and learning the fundamentals now. Any suggestions would be greatly appreciated.
Update: I am also receiving this error: com.jcraft.jsch.JSchException: verify: false. I have added keys wherever necessary. How do I resolve this issue?
ERROR sftp.SftpClient: Error during login to karthick@karthick
com.jcraft.jsch.JSchException: verify: false
at com.jcraft.jsch.Session.connect(Session.java:295)
at com.jcraft.jsch.Session.connect(Session.java:150)
at org.mule.transport.sftp.SftpClient.login(SftpClient.java:178)
It looks like one of 4 reasons:
1) Permissions on the folder you're writing to.
2) Spaces in the file path (or name).
3) Wrong slash in the file path.
4) Timeout issues that happen while reading or writing the huge file.

Open file from network drive

I'm trying to create hyperlinks to open files from network drive G:
This is part of my testing servlet:
@WebServlet(name = "fileHandler", urlPatterns = { "/fileHandler/*" })
public class FileServlet extends HttpServlet {
    private static final int DEFAULT_BUFFER_SIZE = 10240; // 10KB.
    ...
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        String requestedFile = request.getPathInfo();
        ...
        File file = new File("G:/test_dir", URLDecoder.decode(requestedFile, "UTF-8")); // the path is read in every doGet
        String contentType = getServletContext().getMimeType(file.getName());
        response.reset();
        response.setBufferSize(DEFAULT_BUFFER_SIZE);
        response.setContentType(contentType);
        response.setHeader("Content-Length", String.valueOf(file.length()));
        response.setHeader("Content-Disposition", "attachment; filename=\"" + file.getName() + "\"");
        BufferedInputStream input = null;
        BufferedOutputStream output = null;
        try {
            input = new BufferedInputStream(new FileInputStream(file), DEFAULT_BUFFER_SIZE);
            output = new BufferedOutputStream(response.getOutputStream(), DEFAULT_BUFFER_SIZE);
            byte[] buffer = new byte[DEFAULT_BUFFER_SIZE];
            int length;
            while ((length = input.read(buffer)) > 0) {
                output.write(buffer, 0, length);
            }
        } finally {
            close(output);
            close(input);
        }
    }
}
My HTML component:
TEST FILE
Network drive is mapped on application server as G:\
Everything is working fine on my localhost application server: I'm able to open files from the local drive C: and even from the same network drive G:.
When I start the JSF application on the real server, I'm able to open files from the local drive only, not from the G: drive.
I've tried a simple Java application (to find out whether the Java instance has access to the network drive) and it works on both (server and dev PC):
public class Test {
    public static void main(String[] args) {
        String path = "G:/test_dir/test.txt";
        File file = new File(path);
        System.err.println(file.exists() ? "OK" : "NOK");
    }
}
I've tried different URI schemes:
G:/test_dir
G:\\test_dir
And the following don't work at all:
file://server/g:/test_dir
///server/g:/test_dir
\\\\server\\g\\test_dir ---> in fact, this should work
What could be different between my development PC and the application server?
SOLUTION:
I've found that links to the network drive don't work in standalone Tomcat, but do work in Eclipse + Tomcat, so I have to use the complete UNC path:
Case Eclipse + Tomcat: the path G:/test_dir/test.txt works
Case standalone Tomcat: the path \\\\server\\g\\test_dir\\test.txt works
If you can debug your server and look at the logs, try this:
String requestedFile = request.getPathInfo();
log.debug("requestedFile=" + requestedFile);
String decodedFilename = URLDecoder.decode(requestedFile, "UTF-8");
log.debug("decodedFilename=" + decodedFilename);
File dir = new File("G:/test_dir");
log.debug("File g:/test_dir is a dir: " + dir.isDirectory());
File file = new File(dir, decodedFilename);
log.debug("requested file = " + file.getAbsolutePath());
log.debug("file exists = " + file.isFile());
If you do not have a logging framework set up, you can use System.out.println() instead of log.debug(), but that's not recommended for production use.
This won't solve your problem, but you'll be able to see what's going on.

How to save and load an image on a Tomcat server [duplicate]

This question already has answers here:
Recommended way to save uploaded files in a servlet application
(2 answers)
Closed 7 years ago.
I'm writing a Java EE application, and I'm trying to get an image from a URL and then save it in my resource folder (through an AJAX request).
My problem is that if I don't restart my server, I'm not able to display this image, because it hasn't been loaded by the server.
I'm using Tomcat 7, Spring, Hibernate, PrimeFaces.
Here is my class to save my image:
public class ImageSaver {
    final static int SIZE = 1024;

    public static void fileUrl(String fAddress, String localFileName, String destinationDir) {
        OutputStream outStream = null;
        URLConnection uCon = null;
        InputStream is = null;
        try {
            URL url = new URL(fAddress);
            byte[] buf;
            int byteRead;
            int byteWritten = 0;
            outStream = new BufferedOutputStream(new FileOutputStream(destinationDir + "\\" + localFileName));
            uCon = url.openConnection();
            is = uCon.getInputStream();
            buf = new byte[SIZE];
            while ((byteRead = is.read(buf)) != -1) {
                outStream.write(buf, 0, byteRead);
                byteWritten += byteRead;
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            try {
                is.close();
                outStream.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }

    public static void fileDownload(String fAddress, String fileName, String destinationDir) {
        int slashIndex = fAddress.lastIndexOf('/');
        int periodIndex = fAddress.lastIndexOf('.');
        // String fileName = fAddress.substring(slashIndex + 1);
        if (periodIndex >= 1 && slashIndex >= 0 && slashIndex < fAddress.length() - 1) {
            fileUrl(fAddress, fileName, destinationDir);
        } else {
            System.err.println("path or file name.");
        }
    }
}
And the way I call the function:
ImageSaver.fileDownload("http://www.mywebsite.com/myImage.jpg", "myImage.jpg", "C:\\Users\\MyProject\\src\\main\\webapp\\resources\\images");
How can my uploaded image be loaded and served by the server automatically, without any reboot?
Which file allows the configuration of an upload folder? And how?
You should not write images retrieved dynamically into your webapp's folder - this opens you up for a whole category of problems. Instead, save them to a folder outside of the appserver's root directory and create a download servlet that will serve these resources.
The danger otherwise is that you'll retrieve some jsp file from external sources, save them to your appserver and on download the appserver will happily execute it server side.
Assume your webapp's directory to be non-writeable for your webapp. This will also ease backups and updates: imagine you need to install an update or migrate to a different server; the application you have on your server would then only partially be contained in its *.war file. If there's an explicit resource directory, you can back it up independently (or put it on a network share). A minimal sketch of such a download servlet follows below.
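For illustration only, here is a minimal sketch of such a download servlet. The storage directory (/var/myapp/images) and URL pattern are assumptions, and it omits caching headers and the path-traversal checks you would want in production:
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = { "/images/*" })
public class ImageDownloadServlet extends HttpServlet {
    // External directory, outside the webapp / *.war, where ImageSaver writes the files
    private static final String IMAGE_DIR = "/var/myapp/images";

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        String name = req.getPathInfo().substring(1); // strip the leading '/'
        File file = new File(IMAGE_DIR, name);
        if (!file.isFile()) {
            resp.sendError(HttpServletResponse.SC_NOT_FOUND);
            return;
        }
        String mimeType = getServletContext().getMimeType(file.getName());
        resp.setContentType(mimeType != null ? mimeType : "application/octet-stream");
        resp.setContentLength((int) file.length());
        try (InputStream in = new FileInputStream(file);
             OutputStream out = resp.getOutputStream()) {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
        }
    }
}
The image URL in the page then points at /images/myImage.jpg instead of a resource packaged inside the war, so newly saved files are visible immediately.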

Webapp deployed in Glassfish can't write to files

I have the following function inside a Stateless EJB running in Glassfish. All it does is write some data to a file. The first part of the function just creates the path to where the file needs to go. The second part actually writes the file.
private boolean createFile(String companyName, String fileName, byte[] data) {
    logger.log(Level.FINEST, "Creating file: {0} for company {1}", new Object[]{fileName, companyName});
    File companyFileDir = new File(LOCAL_FILE_DIR, companyName);
    if (companyFileDir.exists() == false) {
        boolean createFileDir = companyFileDir.mkdirs();
        if (createFileDir == false) {
            logger.log(Level.WARNING, "Could not create directory to place file in");
            return false;
        }
    }
    File newFile = new File(companyFileDir, fileName);
    try {
        FileOutputStream fileWriter = new FileOutputStream(newFile);
        fileWriter.write(data);
    } catch (IOException e) {
        logger.log(Level.SEVERE, "Could not write file to disk", e);
        return false;
    }
    logger.log(Level.FINEST, "File successfully written to file");
    return true;
}
The output I get after this code executes is:
WARNING: Could not create directory to place file in
So obviously GlassFish can't create this directory. I am assuming this has something to do with permissions. Can anyone give me a direction as to what might be wrong here?
I am running this on GlassFish 3.1.2 on Ubuntu 12.
Two different things:
1) Compare the spec (21.1.2 Programming Restrictions):
An enterprise bean must not use the java.io package to attempt to access files and directories in the file system.
I'm sure GF isn't enforcing this, but you should be aware of it.
2) The code itself is fine. Try chmod 777 on the LOCAL_FILE_DIR to get an idea of whether it has to do with rights in general (see the snippet below) ...
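As a quick diagnostic for point 2 (a sketch to drop into createFile(), assuming LOCAL_FILE_DIR is a plain path string), log where the directory actually resolves to and whether the GlassFish process can write there before calling mkdirs():
// Log the resolved base directory and its permissions as seen by the GlassFish process
File base = new File(LOCAL_FILE_DIR);
logger.log(Level.INFO, "Base dir: {0}, exists: {1}, writable: {2}",
        new Object[]{base.getAbsolutePath(), base.exists(), base.canWrite()});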
Hope that helps ...
