Getting a specific version of an image with Jib (Maven, Docker, testcontainers) - java

I'm trying to understand a comment that a colleague made. We're using testcontainers to create a fixture:
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.utility.DockerImageName;

public class SalesforceFixture extends GenericContainer<SalesforceFixture> {
    private static final String APPLICATION_NAME = "salesforce-emulator";

    public SalesforceFixture() {
        // super(ImageResolver.resolve(APPLICATION_NAME));
        super(DockerImageName.parse("gcr.io/ad-selfserve/salesforce-emulator:latest"));
        ...
    }
    ...
The commented code is what it used to be. The next line is my colleague's suggestion. And on that line he commented:
This is the part I don't know. The [ImageResolver] gets the specific version of the emulator, rather than the latest. You need a docker-info file for that though, which jib doesn't automatically generate (but I think it can).
This is what I know or have figured out so far:
SalesforceFixture is a class that will be used by other projects to write tests. It spins up a container in Docker, running a service that emulates the real service's API. It's like a local version of the service that behaves enough like the real thing that if one writes code and tests using the fixture, it should work the same in production. (This is where my knowledge ends.)
I looked into ImageResolver—it seems to be a class we wrote that searches a filesystem for something:
public static String resolve(String applicationName, File... roots) {
    // Look for the application's directory both beside and beneath each search root.
    Stream<File> searchPaths = Arrays.stream(roots).flatMap((value) -> {
        return Stream.of(new File(value, "../" + applicationName), new File(value, applicationName));
    });
    // A build is expected to have written the image name to target/docker/image-name.
    Optional<File> buildFile = searchPaths.flatMap((searchFile) -> {
        if (searchFile.exists()) {
            File imageFile = new File(searchFile, "target/docker/image-name");
            if (imageFile.exists()) {
                return Stream.of(imageFile);
            }
        }
        return Stream.empty();
    }).findAny();
    // Fall back to searching the classpath when no build file is found on disk.
    InputStream build = buildFile.map(ImageResolver::fileStream).orElseGet(() -> {
        return searchClasspath(applicationName);
    });
    if (build != null) {
        try {
            return IOUtils.toString(build, Charset.defaultCharset()).trim();
        } catch (IOException e) {
            throw new RuntimeException("An exception has occurred while reading build file", e);
        }
    } else {
        throw new RuntimeException("Could not resolve target image for application: " + applicationName);
    }
}
But I'm confused. What filesystem? Like, what is the present working directory? My local computer, wherever I ran the Java program from? Or is this from within some container? (I don't think so.) Or maybe the directory structure inside a .jar file? Or somewhere in gcr.io?
What does he mean about a "specific version number" vs. "latest"? I mean, when I build this project, whatever it built is all I have. Isn't that equivalent to "latest"? In what case would an older version of an image be present? (That's what made me think of gcr.io.)
Or does he mean that, in a project using this project's image, one will not be able to specify a version via Maven/pom.xml, and it will always spin up the latest?
Sorry this is long, just trying to "show my work." Any hints welcome. I'll keep looking.

I can't comment on the specifics of your own internal implementations, but ImageResolver seems to work on your local filesystem: it looks into your target/ directory and also touches the classpath. I can imagine this code was written just for resolving an image name (not an image itself), since it also returns a String.
Regarding latest, using a latest tag for a Docker image is generally considered an anti-pattern, so likely your colleague is commenting about this. Here is a random article from the web explaining some of the issues with latest tag:
https://vsupalov.com/docker-latest-tag/
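To illustrate what "specific version" buys you: a minimal sketch of the fixture with the old resolution restored, assuming (as the ImageResolver code above implies) that each build writes the fully qualified, tagged image name to target/docker/image-name. resolveImageOrLatest is an illustrative name, not part of your codebase:

public SalesforceFixture() {
    super(DockerImageName.parse(resolveImageOrLatest()));
}

private static String resolveImageOrLatest() {
    try {
        // e.g. "gcr.io/ad-selfserve/salesforce-emulator:1.4.2" as recorded at build time
        return ImageResolver.resolve(APPLICATION_NAME);
    } catch (RuntimeException e) {
        // ImageResolver throws when no build file is found; fall back to latest
        return "gcr.io/ad-selfserve/salesforce-emulator:latest";
    }
}

This way tests run against the exact image the build produced, rather than whatever the registry currently serves under the moving latest tag.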
Besides, I don't understand why you're asking these questions, which are very specific to your project, here on SO rather than asking your colleague.

Related

How to create a custom ClassLoader for nested JAR files

I am working with a Java library that has some nested JAR files in its lib package.
I have two issues:
I cannot see the referenced types in my IDE (I am using JetBrains IntelliJ).
Of course, I get "class not defined" errors at runtime.
I understand that I have to create and use a custom ClassLoader. Will that solve both problems?
Is this the recommended way of achieving this result?
The JAR file is an Italian-government-provided library and I cannot modify it, as it will be updated periodically as the regulation changes.
Yes, as far as I know, the standard ClassLoaders do not support nested JARs. Which is sad, since it would be a really nice idea, but Oracle just doesn't give a damn about it. Here is an 18-year-old ticket:
https://bugs.java.com/bugdatabase/view_bug.do?bug_id=4735639
If you are getting those JARs from somebody else, the best thing would be to contact the vendor and ask them for a delivery in a standards-compatible format. From your question I realize that this might be difficult to achieve, but I would still try to talk to them, because it's the right thing to do. I'm pretty sure that everybody else in your position has the same issue. By industry standards, such a situation would usually hint your vendor toward using a Maven repository for their deliverables.
If talking to your vendor fails, you can re-pack the JARs as you get them. I would recommend writing an automated script for that and making sure it gets run on each delivery. You can either put all .class files into one uber-JAR, or just move the nested JARs outside the enclosing JAR. Caveat 1: there can be more than one class with the same name, so you need to make sure to take the correct one. Caveat 2: if the JARs were signed, you will lose the signature (unless you sign them with your own).
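A minimal sketch of that re-packing step, here taking the "move the nested JARs outside" route; NestedJarExtractor and the assumption that the nested entries end in .jar are illustrative, not specific to your vendor's JAR:

import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import java.util.jar.JarFile;

public class NestedJarExtractor {
    // Usage: java NestedJarExtractor vendor.jar out/
    public static void main(String[] args) throws IOException {
        Path outDir = Files.createDirectories(Paths.get(args[1]));
        try (JarFile outer = new JarFile(args[0])) {
            outer.stream()
                 .filter(e -> e.getName().endsWith(".jar")) // e.g. entries under lib/
                 .forEach(e -> {
                     try (InputStream in = outer.getInputStream(e)) {
                         // flatten each nested JAR into the output directory
                         Path target = outDir.resolve(Paths.get(e.getName()).getFileName().toString());
                         Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING);
                     } catch (IOException ex) {
                         throw new UncheckedIOException(ex);
                     }
                 });
        }
    }
}

Each extracted JAR can then be added to the classpath as a normal entry.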
Option 3: you can always implement your own ClassLoader to load the classes from anywhere, even from the kitchen sink.
This guy did exactly this: https://www.ibm.com/developerworks/library/j-onejar/index.html
The short summary is that such a ClassLoader has to perform recursive unzipping, which is a bit of a pain-in-the-ass because archives are essentially made for stream access and not for random access, but apart from that it's perfectly doable.
You can use his solution as a "wrapper loader" which will replace your main class.
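If you want to roll your own instead, here is a minimal, hedged sketch of the idea: pre-read the classes of one nested JAR into memory and define them on demand. NestedJarClassLoader is an illustrative name, and real code would also need to handle resources and multiple nested JARs:

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.HashMap;
import java.util.Map;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;
import java.util.jar.JarInputStream;

public class NestedJarClassLoader extends ClassLoader {
    private final Map<String, byte[]> classBytes = new HashMap<>();

    // outerJarPath: the enclosing JAR; nestedJarEntry: e.g. "lib/inner.jar"
    public NestedJarClassLoader(String outerJarPath, String nestedJarEntry,
                                ClassLoader parent) throws IOException {
        super(parent);
        try (JarFile outer = new JarFile(outerJarPath);
             JarInputStream inner = new JarInputStream(
                     outer.getInputStream(outer.getJarEntry(nestedJarEntry)))) {
            // index every class of the nested JAR by its binary name
            for (JarEntry e; (e = inner.getNextJarEntry()) != null; ) {
                if (e.getName().endsWith(".class")) {
                    String name = e.getName().replace('/', '.').replaceAll("\\.class$", "");
                    classBytes.put(name, readAll(inner));
                }
            }
        }
    }

    @Override
    protected Class<?> findClass(String name) throws ClassNotFoundException {
        byte[] bytes = classBytes.get(name);
        if (bytes == null) throw new ClassNotFoundException(name);
        return defineClass(name, bytes, 0, bytes.length);
    }

    private static byte[] readAll(InputStream in) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[8192];
        for (int n; (n = in.read(buf)) != -1; ) out.write(buf, 0, n);
        return out.toByteArray();
    }
}

Usage would be something like new NestedJarClassLoader("outer.jar", "lib/inner.jar", getClass().getClassLoader()).loadClass("com.example.Foo").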
As far as IntelliJ IDEA goes, I don't believe it supports this functionality out-of-the box. The best thing would be either to re-package JARs as described above and add them as separate classpath entries, or to search if anybody has written a plugin for nested JAR support.
I don't know what you want to do after loading the jars.
In my case, I use dynamic jar loading for Servlet samples.
import java.io.File;
import java.io.FileFilter;
import java.io.FileInputStream;
import java.lang.reflect.Method;
import java.net.URL;
import java.net.URLClassLoader;
import java.util.jar.JarEntry;
import java.util.jar.JarInputStream;
import javax.servlet.ServletRegistration;
import javax.servlet.annotation.WebServlet;

// dir, logger, XMLog and servletContextEvent come from the surrounding listener code
try {
    final URLClassLoader loader = (URLClassLoader) ClassLoader.getSystemClassLoader();
    // URLClassLoader.addURL is protected, so open it up via reflection
    final Method method = URLClassLoader.class.getDeclaredMethod("addURL", URL.class);
    method.setAccessible(true);
    new File(dir).listFiles(new FileFilter() {
        @Override
        public boolean accept(File jar) {
            // only load the file if it is a jar
            if (jar.toString().toLowerCase().endsWith(".jar")) {
                try {
                    method.invoke(loader, new Object[]{jar.toURI().toURL()});
                    XMLog.info_arr(logger, jar, " is loaded.");
                    JarInputStream jarFile = new JarInputStream(new FileInputStream(jar));
                    JarEntry jarEntry;
                    // walk the jar and load every .class entry it contains
                    while ((jarEntry = jarFile.getNextJarEntry()) != null) {
                        if (jarEntry.getName().endsWith(".class")) {
                            Class loadedClass = Class.forName(
                                    jarEntry.getName().replaceAll("/", "\\.").replace(".class", ""));
                            /*
                             * In my case, I load jar files for Servlets.
                             * If you want to use this for another case, change the code below.
                             */
                            WebServlet annotations = (WebServlet) loadedClass.getAnnotation(WebServlet.class);
                            // register and map the class only if it is annotated as a Servlet
                            if (annotations != null && annotations.urlPatterns().length > 0) {
                                ServletRegistration.Dynamic registration = servletContextEvent
                                        .getServletContext()
                                        .addServlet(annotations.urlPatterns()[0], loadedClass);
                                registration.addMapping(annotations.urlPatterns());
                            }
                        }
                    }
                    jarFile.close();
                } catch (Exception e) {
                    System.err.println("Can't load classes in jar");
                }
            }
            return false;
        }
    });
} catch (Exception e) {
    throw new RuntimeException(e);
}
Interestingly I just solved a version of this problem for JesterJ, though I had the additional requirement of loading dependencies for the code in the jar file as well. JesterJ (as of this evening's commits!) runs from a fat jar and accepts an argument denoting a second fat jar containing the classes, dependencies and configuration for a document ingestion plan (the user's code that I need to run).
The way my solution works is I borrow the knowledge of how to load jars inside of jars from Uno-Jar (the library that produces the fat jar), and stuff my own classloader in above it to control the evaluation order of the class loaders.
The key bit from https://github.com/nsoft/jesterj/blob/jdk11/code/ingest/src/main/java/org/jesterj/ingest/Main.java looks like this:
JesterJLoader jesterJLoader;
File jarfile = new File(javaConfig);
URL planConfigJarURL;
try {
    planConfigJarURL = jarfile.toURI().toURL();
} catch (MalformedURLException e) {
    throw new RuntimeException(e); // boom
}
jesterJLoader = (JesterJLoader) ClassLoader.getSystemClassLoader();
ClassLoader loader;
if (isUnoJar) {
    JarClassLoader jarClassLoader = new JarClassLoader(jesterJLoader, planConfigJarURL.toString());
    jarClassLoader.load(null);
    loader = jarClassLoader;
} else {
    loader = new URLClassLoader(new URL[]{planConfigJarURL}, jesterJLoader);
}
jesterJLoader.addExtLoader(loader);
My JesterJLoader is here:
https://github.com/nsoft/jesterj/blob/jdk11/code/ingest/src/main/java/org/jesterj/ingest/utils/JesterJLoader.java
Though if you are happy to simply delegate up and rely on existing classes on the main classpath (rather than loading additional dependencies from the sub-fat-jar like I'm doing), yours could be much simpler. I go to a lot of effort to let it check the sub-jar first rather than delegating up to the parent immediately, and then have to keep track of what's been sent to the sub-jar to avoid loops and subsequent StackOverflowError...
Also note that the line where I get the system class loader is NOT going to be what you want; I'm also monkeying with the system loader to work around impolite things that some of my dependencies are doing with class loading.
If you decide to try to check out Uno-Jar pls note that resource loading for this nested scenario may yet be wonky and things definitely won't work before https://github.com/nsoft/uno-jar/commit/cf5af42c447c22edb9bbc6bd08293f0c23db86c2
Also: recently committed thinly tested code warning :)
Disclosure: I maintain both JesterJ and Uno-Jar (a fork of One-JAR the library featured in the link supplied by jurez) and welcome any bug reports or comments or even contributions!

NoClassDefFoundError when trying to pass an object from a project to a REST API

I have a problem with one of my projects. Here is a little more info about it:
Our teacher gave us a virtual machine (Ubuntu) which contains Hadoop and HBase, already set up.
The objective is pretty simple: we have a REST API with Tomcat 8.5 (RestServer, a web project), which intercepts GET requests (our teacher only wants us to have GET requests, for security reasons apparently), and we need to perform, according to the URL (for instance, /students/{id}/{program} will return the grade summary for that particular student (id) and year of study (program)), data selection and MapReduce jobs on HBase tables. And we have a BigData project, which contains simple Java code to scan and filter HBase tables. That is the short summary of the project.
Here is the structure we use for this project: [screenshot: project structure]
And here is the execution logic: we type our URL in the browser after launching our RestServer project (right click on RestServer -> Run as -> Run on server).
Here is what we get after doing so: [screenshot: RestServer in the browser]
The easy part stops there. The links we see in the previous image are just for demo; they are not what we need to do in this project. The idea is to intercept the GET request in the API, get the parameters in the method handling the request, pass them to the constructor of our response object, and return that object as the response (which will be transformed into JSON). The idea is to get this object (the response to our GET request) from the BigData project. So we need to make these two projects communicate.
Here is the code to intercept the request :
@GET
@Path("/students/{id}/{program}")
@Produces(MediaType.APPLICATION_JSON)
public Response getStudent(@PathParam("id") String ID, @PathParam("program") String program) throws IOException {
    System.out.println("ID : " + ID + " program " + program);
    if (ID != null) {
        System.out.println("Non nul");
        return Response.ok(new Response1(ID, program), MediaType.APPLICATION_JSON).build();
    } else {
        return Response.status(Response.Status.NOT_FOUND).entity("Student not found: " + ID).build();
    }
}
The Response1(ID, program) object is built in the BigData project. When I execute the code from the BigData project directly (as a Java application), I have absolutely no problem, no error. But the idea is to use the code from the BigData project to build the Response1 object and "give it back" to the REST API. The problem is there: I tried absolutely everything I know and found on the internet, but I can't resolve it. When I type my URL (which is http://localhost:8080/RestServer/v1/StudentService/students/2005000033/L3) I get this error: [screenshot: error]
From my research, I found that (correct me if I'm wrong) the program can't find the ByteArrayComparable class at runtime. I followed all the links I could find, and here is what I tried to resolve it:
Checked that the libraries for Hadoop and HBase are in both projects.
Checked that the projects contain hbase-client, which is supposed to contain the ByteArrayComparable class (yes, it is in both projects).
By doing right click on RestServer -> Properties -> Java Build Path:
Source tab: I added the src folder from the BigData project (and the bin folder, but I can't remember where; I believe it is in one of the tabs of Java Build Path).
Projects tab: I added the BigData project.
Order and Export tab: I checked the src folder (this folder is in the RestServer project, created after I added the src folder from the BigData project in the Source tab).
Deployment Assembly: I added the BigData project.
I copied the classes that are used in the BigData project into the src folder of my RestServer project.
I saw that it can be caused by conflicts between libraries, so I tried removing some in one project while leaving them in the other.
I cleaned and rebuilt the projects between each change.
I tried adding the import that seems to cause the problem, import org.apache.hadoop.hbase.filter.*;, in the files involved in the execution.
I have no idea what I can do now. Some of my friends have the same problem, even though we don't have the same code, so it seems the problem comes from the configuration. At this point, I haven't performed any MapReduce job; I'm just using the HBase Java API to scan the table with some filters.
Thanks for reading; I hope I'll find the answer. I'll keep testing and searching, and I'll edit this post if I find something.
Here is the code for the Response1 class :
package bdma.bigdata.project.rest.core;

import java.io.IOException;
import org.apache.hadoop.hbase.filter.Filter.*;

public class Response1 {
    private StudentBD student;
    private Semester semesters;

    public Response1(String id, String program) throws IOException {
        System.out.println("Building student");
        this.student = new StudentBD(id);
        System.out.println("Building semester");
        this.semesters = new Semester(id, program);
    }

    @Override
    public String toString() {
        return student.toString() + " " + semesters.toString();
    }

    public static void main(String[] args) throws IOException {
        Response1 r = new Response1("2005000100", "L1");
        System.out.println("AFFICHAGE TEST");
        System.out.println(r);
    }
}
Edit
I finally managed to resolve my problem. I'm putting the solution here in case it can help someone in the same situation in the future.
Once you've linked your two projects (in the Java Build Path section of the properties of the REST API project), you need to go, still in the properties, to Deployment Assembly (above Java Build Path). There, click Add... and add all of your jar files.

Read the jar version for a class

For a web service client I'd like to use Implementation-Title and Implementation-Version from the jar file as the user-agent string. The question is how to read the jar's manifest.
This question has been asked multiple times; however, the answers seem not applicable for me. (e.g. Reading my own Jar's Manifest)
The problem is that simply reading /META-INF/MANIFEST.MF almost always gives wrong results. In my case, it would almost always refer to JBoss.
The solution proposed in https://stackoverflow.com/a/1273196/4222206
is problematic for me, as you'd have to hardcode the library name to stop the iteration, and even then two versions of the same library may be on the classpath and you'd just return the first, not necessarily the right, hit.
The solution in https://stackoverflow.com/a/1273432/4222206
seems to work with jar:// URLs only, which completely fails within JBoss, where the application classloader produces vfs:// URLs.
Is there a way for code in a class to find its own manifest?
I tried the abovementioned approaches, which seem to run well in small applications launched from the java command line, but I'd like a portable solution, as I cannot predict where my library will be used later.
public static Manifest getManifest() {
    log.debug("getManifest()");
    synchronized (Version.class) {
        // 'manifest' is a static field used as a cache; 'log' is the class logger
        if (manifest == null) {
            try {
                // this works wrongly in JBoss
                //ClassLoader cl = Version.class.getProtectionDomain().getClassLoader();
                //log.debug("found classloader={}", cl);
                //URL manifesturl = cl.getResource("/META-INF/MANIFEST.MF");
                URL jar = Version.class.getProtectionDomain().getCodeSource().getLocation();
                log.debug("Class loaded from {}", jar);
                URL manifesturl = null;
                switch (jar.getProtocol()) {
                    case "file":
                        manifesturl = new URL(jar.toString() + "META-INF/MANIFEST.MF");
                        break;
                    default:
                        manifesturl = new URL(jar.toString() + "!/META-INF/MANIFEST.MF");
                }
                log.debug("Expecting manifest at {}", manifesturl);
                manifest = new Manifest(manifesturl.openStream());
            } catch (Exception e) {
                log.info("Could not read version", e);
            }
        }
    }
    return manifest;
}
The code detects the correct jar path. I assumed that modifying the URL to point to the manifest would give the required result; however, I get this:
Class loaded from vfs:/C:/Users/user/Documents/JavaLibs/wildfly-18.0.0.Final/bin/content/webapp.war/WEB-INF/lib/library-1.0-18.jar
Expecting manifest at vfs:/C:/Users/user/Documents/JavaLibs/wildfly-18.0.0.Final/bin/content/webapp.war/WEB-INF/lib/library-1.0-18.jar!/META-INF/MANIFEST.MF
Could not read version: java.io.FileNotFoundException: C:\Users\hiran\Documents\JavaLibs\wildfly-18.0.0.Final\standalone\tmp\vfs\temp\tempfc75b13f07296e98\content-e4d5ca96cbe6b35e\WEB-INF\lib\library-1.0-18.jar!\META-INF\MANIFEST.MF (The system cannot find the path specified)
I checked that path, and it seems even the first URL to the jar (obtained via Version.class.getProtectionDomain().getCodeSource().getLocation()) was already wrong. It should have been C:\Users\user\Documents\JavaLibs\wildfly-18.0.0.Final\standalone\tmp\vfs\temp\tempfc75b13f07296e98\content-e4d5ca96cbe6b35e\WEB-INF\lib\library-1.0.18.jar.
So this could even point to a problem in Wildfly?
It seems I found a suitable solution here:
https://stackoverflow.com/a/37325538/4222206
So in the end this code can display the correct version of the jar (at least in JBoss):
this.getClass().getPackage().getImplementationTitle();
this.getClass().getPackage().getImplementationVersion();
Hopefully I will find this answer when I search next time...
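To round this off, a minimal sketch of building the user-agent string from those two calls; MyClient is an illustrative class name, and the fallbacks cover running from an IDE or exploded classpath where no manifest is present:

Package pkg = MyClient.class.getPackage();
String title = pkg.getImplementationTitle();     // Implementation-Title from the manifest
String version = pkg.getImplementationVersion(); // Implementation-Version from the manifest
String userAgent = (title != null ? title : "my-client") + "/" + (version != null ? version : "unknown");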

Java 1.4.2 File.listFiles not working properly with CIFS mounts - workaround?

I'm using Java 1.4.2 and Debian 6.0.3. There's a shared Windows folder on the network, which is correctly mounted to /mnt/share/ via fstab using CIFS (i.e. it's fully visible from the OS and allows all operations). However, when I try to do this in Java:
System.out.println(new File("/mnt/share/").listFiles().length)
it always prints 0, meaning the File[] returned by listFiles is empty. The same problem applies to every subdirectory of /mnt/share/. list returns an empty array as well. Amusingly enough, other File functions like create, isDirectory or even delete work fine. Directories mounted from a USB flash drive (FAT32) also work fine.
I tested this on two different shared folders from different Windows systems; one using domain-based authentication, another using "simple sharing", that is, guest access. The situation seems weird, since mounted directories should become part of the file system, so any program could use them. Or so I thought, at least.
I want to delete a directory in my program, and I currently see no other way of doing it except recursively walking with listFiles, so this bug becomes rather annoying. The only "workaround" I can think of is to somehow run an external bash script, but that seems like a terrible solution.
Edit: It seems this is a 1.4.2-specific bug; everything works fine in Java 6. But I can't migrate, so the problem remains.
Could you suggest some workaround? Preferably without switching to third-party libs instead of native ones; I can't say I like the idea of rewriting the whole project for the sake of a single line of code.
Since Java 1.2 there has been the method File.getCanonicalFile(). In your case with the mounted directory, you should use exactly that, in this style:
new File("/mnt/share/").getCanonicalFile().listFiles()
So, two and a half years later, after giving up, I encountered the same problem again, still stuck with 1.4.2 because I need to embed the code into an obsolete Oracle Forms 10g version.
If someone, by chance, stumbles onto this problem and decides to solve it properly rather than hack their way through: it most probably has to do with the (highly) unusual inode mapping that CIFS does upon mounting the remote filesystem, causing other obscure bugs, some of which can be found on Server Fault. One of the side effects of such mapping is that all directories have a zero hard-link count. Another is that all directories have a "size" of exactly 0, instead of the usual "sector size or more", which can be checked even with ls.
I can't be sure without examining the (proprietary) source code, but I can guess that Java prior to 1.5 used some shortcut, like checking the link count internally, instead of actually calling readdir() as C code would, which works equally well for any mounted FS.
Anyway, the second side effect can be used to create a simple wrapper around File which won't rely on system calls unless it suspects a directory is mounted using CIFS. The other versions of the list and listFiles functions in java.io.File, even the ones using filters, rely on list() internally, so it's OK to override only it.
I didn't care about listFiles returning File[] rather than FileEx[], so I didn't bother to override it, but it should be simple enough. Obviously, this code can only work on Unix-like systems that have the ls command handy.
package FSTest;

import java.io.BufferedReader;
import java.io.File;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.ArrayList;

public class FileEx extends File
{
    public FileEx(String path)
    {
        super(path);
    }

    public FileEx(File f)
    {
        super(f.getAbsolutePath());
    }

    public String[] list()
    {
        if (this.canRead() && this.isDirectory())
        {
            /*
             * Checking the length of the dir is not the most reliable way to distinguish CIFS mounts.
             * However, zero directory length generally indicates something unusual,
             * so calling ls on it wouldn't hurt. Ordinary directories don't suffer any overhead this way.
             * If this "zero-size" behavior is ever changed by CIFS but list() still won't work,
             * it will be safer to call super.list() first and call this.listUsingExec if the returned array has 0 elements.
             * Though it might have serious performance implications, of course.
             */
            if (this.length() > 0)
                return super.list();
            else
                return this.listUsingExec();
        }
        else
            return null;
    }

    private String[] listUsingExec()
    {
        Process p;
        String command = "/bin/ls -1a " + this.getAbsolutePath();
        ArrayList list = new ArrayList();
        try
        {
            p = Runtime.getRuntime().exec(command);
            p.waitFor();
            BufferedReader reader = new BufferedReader(new InputStreamReader(p.getInputStream()));
            for (String line = reader.readLine(); line != null; line = reader.readLine())
            {
                if (!line.equalsIgnoreCase(".") && !line.equalsIgnoreCase(".."))
                    list.add(line);
            }
            String[] ret = new String[list.size()];
            list.toArray(ret);
            return ret;
        }
        catch (Exception e) // waitFor() can also throw InterruptedException, so catch both
        {
            return null;
        }
    }
}
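For what it's worth, a minimal sketch of the recursive delete the question asked about, built on the wrapper above (deleteRecursively is an illustrative name; the code sticks to Java 1.4-compatible syntax):

// Hypothetical usage: delete a directory tree on a CIFS mount,
// relying on FileEx.list() instead of the broken File.listFiles().
public static void deleteRecursively(FileEx dir)
{
    String[] children = dir.list(); // null for plain files and unreadable dirs
    if (children != null)
    {
        for (int i = 0; i < children.length; i++)
            deleteRecursively(new FileEx(dir.getAbsolutePath() + File.separator + children[i]));
    }
    dir.delete();
}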

How can I make OS X recognize drive letters?

I know. Heresy. But I'm in a bind. I have a lot of config files that use absolute path names, which creates an incompatibility between OS X and Windows. If I can get OS X (which I'm betting is the more flexible of the two) to recognize Q:/foo/bar/bim.properties as a valid absolute file name, it'll save me days of work spelunking through stack traces and config files.
In the end, I need this bit of Java test code to print "SUCCESS!" when it runs:
import java.io.*;

class DriveLetterTest {
    static public void main(String... args) {
        File f = new File("S:");
        if (f.isDirectory()) {
            System.out.println("SUCCESS!");
        } else {
            System.out.println("FAIL!");
        }
    }
}
Anyone know how this can be done?
UPDATE: Thanks for all the feedback, everyone. It's now obvious to me I really should have been clearer in my question.
Both the config files and the code that uses them belong to a third-party package I cannot change. (Well, I can change them, but that means incurring an ongoing maintenance load, which I want to avoid if at all possible.)
I'm in complete agreement with all of you who are appalled by this state of affairs. But the fact remains: I can't change the third-party code, and I really want to avoid forking the config files.
Short answer: No.
Long answer: For Java you should use System.getProperty("os.name").
Then you can load a Properties file or configuration based on what you find in os.name.
Alternate solution: just strip off the S: when you read the existing configuration files on non-Windows machines and replace it with the appropriate thing.
Opinion: Personally, I would bite the bullet and deal with the technical debt now: fix all the configuration files at build time when the deployment for OS X is built, and be done with it.
public class WhichOS
{
    public static void main(final String[] args)
    {
        System.out.format("System.getProperty(\"os.name\") = %s\n", System.getProperty("os.name"));
        System.out.format("System.getProperty(\"os.arch\") = %s\n", System.getProperty("os.arch"));
        System.out.format("System.getProperty(\"os.version\") = %s\n", System.getProperty("os.version"));
    }
}
the output on my iMac is:
System.getProperty("os.name") = Mac OS X
System.getProperty("os.arch") = x86_64
System.getProperty("os.version") = 10.6.4
Honestly, don't hard-code absolute paths in a program, even for a single-platform app. Do the correct thing.
The following is my wrong solution, saved to remind myself not to give misdirected advice again... shame on me.
Just create a symbolic link named Q: right at the root directory /, pointing to / itself.
$ cd /
$ ln -s / Q:
$ ln -s / S:
You might need to use sudo. Then, at the start of your program, just chdir to /.
If you don't want Q: and S: to show up in the Finder, perform
$ /Developer/Tools/SetFile -P -a V Q:
$ /Developer/Tools/SetFile -P -a V S:
which sets the invisible-to-the-Finder bit on the files.
The only way you can replace java.io.File is to replace that class in rt.jar.
I don't recommend that, but the best way to do it is to grab a BSD port of the OpenJDK code, make the necessary changes, build it, and redistribute the binary with your project. Write a shell script to use your own java binary and not the built-in one.
PS: Just change your config files! Practice your regex skills and save yourself a lot of time.
If you are not willing to change your config files per OS, what are they for in the first place?
Every installation should have its own set of config files and use them accordingly.
But if you insist... you just have to detect the OS version and, if it is not Windows, ignore the letter:
Something along these lines:
boolean isWindows = System.getProperty("os.name").toLowerCase()
        .contains("windows");
String folder = "S:";
if (!isWindows && folder.matches("\\w:")) {
    folder = "/";
} else if (!isWindows && folder.matches("\\w:.+")) {
    folder = folder.substring(2); // drop the leading drive letter and colon, e.g. "S:"
}
You get the idea
Most likely you'd have to provide a different java.io.File implementation that can parse the file paths correctly; maybe someone has already made one.
The real solution is to put this kind of stuff (hard-coded file paths) in configuration files and not in the source code.
Just tested something out and discovered something interesting: in Windows, if the current directory is on the same logical volume (i.e. the root is the same drive letter), you can leave off the drive letter when using a path. So you could just trim off all those drive letters and colons, and you should be fine as long as you aren't using paths to items on different disks.
Here's what I finally ended up doing:
I downloaded the source code for the java.io package and tweaked the code of java.io.File to look for path names that start with a letter and a colon. If it finds one, it prepends "/Volumes/" to the path name, coughs a warning into System.err, then continues as normal.
I've added symlinks under /Volumes to the "drives" I need mapped, so I have:
/Volumes/S:
/Volumes/Q:
I put it into its own jar, and put that jar at the front of the classpath for this project only. This way, the hack affects only me, and only this project.
Net result: java.io.File sees a path like "S:/bling.properties" and then checks the OS. If the OS is OS X, it prepends "/Volumes/" and looks for the file at /Volumes/S:/bling.properties, which is fine, because it can just follow the symlink.
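For clarity, a hypothetical sketch of that mapping logic (mapDriveLetter is an illustrative name; the real change lives inside the tweaked java.io.File):

// Rewrite "S:/bling.properties" to "/Volumes/S:/bling.properties" on OS X,
// coughing a warning into System.err as described above.
static String mapDriveLetter(String path) {
    boolean mac = System.getProperty("os.name").startsWith("Mac");
    if (mac && path.matches("(?i)^[a-z]:.*")) {
        System.err.println("WARNING: mapping Windows path " + path + " under /Volumes");
        return "/Volumes/" + path;
    }
    return path;
}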
Yeah, it's ugly as hell. But it gets the job done for today.
