On Unix, Files.walkFileTree calls back FileVisitor.visitFile with BasicFileAttributes that are actually sun.nio.fs.UnixFileAttributes$UnixAsBasicFileAttributes. As the debugger shows, the wrapped UnixFileAttributes already contain the permission information (the st_mode field is populated). Is there a (graceful) way to unwrap the UnixFileAttributes$UnixAsBasicFileAttributes in order to get at least PosixFileAttributes, so the permissions become accessible? Reflection does not work for me; it results in an IllegalAccessError when trying to invoke UnixFileAttributes$UnixAsBasicFileAttributes.unwrap.
Also, I want to avoid explicitly calling Files.getPosixFilePermissions(file) for every reported file, as this adds roughly 10% overhead in my test cases.
My JDK contains a package-private method for exactly this purpose: unwrapping the wrapped attributes. There is no guarantee that every JDK will contain this method, but for now it works for me, and I hope it does for you too.
I can call this method with the following code.
You came very close with your attempt at Java 9: How to retrieve the ctime in FileVisitor.visitFile()?. I simplified it a bit for this question.
try {
    Class<?> basicFileAttributesClass = Class.forName("java.nio.file.attribute.BasicFileAttributes");
    Class<?> unixFileAttributesClass = Class.forName("sun.nio.fs.UnixFileAttributes");
    Method toUnixFileAttributesMethod =
        unixFileAttributesClass.getDeclaredMethod("toUnixFileAttributes", basicFileAttributesClass);
    toUnixFileAttributesMethod.setAccessible(true);
    // static method, so the receiver argument to invoke() is ignored
    attrs = (BasicFileAttributes) toUnixFileAttributesMethod.invoke(null, attrs);
} catch (ReflectiveOperationException ex) {
    throw new RuntimeException(ex);
}
For Java 9 and up, setAccessible() additionally checks module access, which your module doesn't have. This can be unlocked with the VM option --add-opens java.base/sun.nio.fs=ALL-UNNAMED.
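For reference, the option goes on the command line of the JVM that performs the file walk (the main class name below is just a placeholder for your own entry point):

```shell
# com.example.WalkDemo is a placeholder; substitute your own main class
java --add-opens java.base/sun.nio.fs=ALL-UNNAMED com.example.WalkDemo
```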
Using Java, I want to check each subdirectory of a directory for a DOS-specific attribute, but I want the code to run on other file systems as well. I can do this, which seems to be what the Java API implies is the recommended approach:
try {
    DosFileAttributes attrs = Files.readAttributes(path, DosFileAttributes.class);
    //TODO check DOS attrs
} catch (UnsupportedOperationException ex) {
    //ignore the error; must be another file system
}
However that is ugly, against best practices (requires ignoring an exception), and probably less efficient because of the try/catch overhead. I'd prefer to do this (using Java 17 syntax to cast to DosFileAttributes on the fly):
BasicFileAttributes attrs = Files.readAttributes(path, BasicFileAttributes.class);
if (attrs instanceof DosFileAttributes dosFileAttributes) {
    //TODO check DOS attrs
}
That's much cleaner. Much more understandable. Probably more efficient.
Unfortunately it's not clear to me whether the API guarantees this to work (even though I see code all over the Internet assuming that the second approach will always work). In practice it would appear that OpenJDK will give me a DosFileAttributes (actually a WindowsFileAttributes) instance even though I requested only a BasicFileAttributes, because DosFileAttributes extends BasicFileAttributes and it's just as easy to always return the same object, since it works in all situations.
But does the API guarantee that? From my reading, it would seem that, because I only requested a BasicFileAttributes instance, some JDK implementation might (for various reasons, not just spite) decide to return only a BasicFileAttributes instance (perhaps it doesn't want to go lookup the DOS attributes unless it was specifically asked).
So am I stuck with the ugly and inefficient exception-based approach if I want my code to be guaranteed to work?
There is no such guarantee written anywhere. This, however, does not imply that you are stuck with using exceptions. You can, for example, use:
DosFileAttributeView view = Files.getFileAttributeView(path, DosFileAttributeView.class);
if (view != null) {
    DosFileAttributes attrs = view.readAttributes();
    // proceed
}
At first glance, you could pretest with Files.getFileStore(path).supportsFileAttributeView(DosFileAttributeView.class), but the documentation of supportsFileAttributeView says:
In the case of the default provider, this method cannot guarantee to give the correct result when the file store is not a local storage device. The reasons for this are implementation specific and therefore unspecified.
which is not very helpful. One thing you can do is test whether the particular file system supports the DosFileAttributeView at all. If path.getFileSystem().supportedFileAttributeViews().contains("dos") returns false, none of the paths of this file system will ever support this attribute view.
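A small sketch of that pretest (class and method names are my own). It only consults the file system's advertised capabilities, so it cannot fail on exotic stores; individual file stores may still lack the view, so pair it with the view != null check shown above:

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class DosCheck {
    // True if this file system can ever support the "dos" attribute view.
    static boolean fileSystemHasDosView(Path path) {
        return path.getFileSystem().supportedFileAttributeViews().contains("dos");
    }

    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("demo", ".txt");
        System.out.println("dos view possible: " + fileSystemHasDosView(tmp));
        Files.delete(tmp);
    }
}
```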
I decided to trace the source code to see what exactly it is doing (is it casting? is it a switch statement?).
I began with Files#readAttributes, which calls provider(path).
That returns the path's FileSystemProvider (obtained from its FileSystem).
I used LinuxFileSystemProvider as the reference.
Finally, I landed on LinuxFileSystemProvider's implementation of readAttributes. This may change between versions, but assuming it hasn't, it does some manual if-checking on the class:
@Override
@SuppressWarnings("unchecked")
public <A extends BasicFileAttributes> A readAttributes(Path file,
                                                        Class<A> type,
                                                        LinkOption... options)
    throws IOException
{
    if (type == DosFileAttributes.class) {
        DosFileAttributeView view =
            getFileAttributeView(file, DosFileAttributeView.class, options);
        return (A) view.readAttributes();
    } else {
        return super.readAttributes(file, type, options);
    }
}
Unfortunately, assuming future JDK versions take this same path, you will have to stick with the old exception-ignoring approach.
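If the exception-based fallback is unavoidable, it can at least be confined to one small helper so call sites stay clean (a sketch; the class and method names are my own invention):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.DosFileAttributes;
import java.util.Optional;

public class DosAttrs {
    // Wraps the UnsupportedOperationException dance in one place.
    static Optional<DosFileAttributes> tryReadDos(Path path) throws IOException {
        try {
            return Optional.of(Files.readAttributes(path, DosFileAttributes.class));
        } catch (UnsupportedOperationException ex) {
            return Optional.empty(); // file system has no DOS view
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("demo", ".bin");
        try {
            System.out.println(tryReadDos(tmp).isPresent() ? "DOS attrs available" : "no DOS view");
        } catch (IOException e) {
            System.out.println("store could not read DOS attributes");
        } finally {
            Files.delete(tmp);
        }
    }
}
```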
I managed to set up sort of a "Java sandbox" with the following code:
// #1
new File("xxx").exists();

// #2
PrivilegedExceptionAction<Boolean> untrusted = () -> new File("xxx").exists();
untrusted.run();

// #3
Policy.setPolicy(new Policy() {
    @Override
    public boolean implies(ProtectionDomain domain, Permission permission) { return true; }
});
System.setSecurityManager(new SecurityManager());

AccessControlContext noPermissionsAccessControlContext;
{
    Permissions noPermissions = new Permissions();
    noPermissions.setReadOnly();
    noPermissionsAccessControlContext = new AccessControlContext(
        new ProtectionDomain[] { new ProtectionDomain(null, noPermissions) }
    );
}

AccessControlContext allPermissionsAccessControlContext;
{
    Permissions allPermissions = new Permissions();
    allPermissions.add(new AllPermission());
    allPermissions.setReadOnly();
    allPermissionsAccessControlContext = new AccessControlContext(
        new ProtectionDomain[] { new ProtectionDomain(null, allPermissions) }
    );
}

// #4
try {
    AccessController.doPrivileged(untrusted, noPermissionsAccessControlContext);
    throw new AssertionError("AccessControlException expected");
} catch (AccessControlException ace) {
    ;
}

// #5
PrivilegedExceptionAction<Boolean> evil = () -> {
    return AccessController.doPrivileged(untrusted, allPermissionsAccessControlContext);
};
try {
    AccessController.doPrivileged(evil, noPermissionsAccessControlContext);
    throw new AssertionError("AccessControlException expected"); // Line #69
} catch (AccessControlException ace) {
    ;
}
#1 and #2 should be self-explanatory.
#3 is the code that sets up the sandbox: It sets a totally unrestrictive Policy (otherwise we'd lock ourselves out immediately), and then a system SecurityManager.
#4 then executes the "untrusted" code in a totally restrictive AccessControlContext, which causes an AccessControlException, which is what I want. Fine.
Now comes #5, where some evil code attempts to "escape" from the sandbox: It creates another, totally unrestricted AccessControlContext, and runs the untrusted code within that. I would expect that this would throw an AccessControlException as well, because the evil code successfully leaves the sandbox, but it doesn't:
Exception in thread "main" java.lang.AssertionError: AccessControlException expected
at sbtest.Demo.main(Demo.java:69)
From what I read in the JRE JAVADOC
doPrivileged
public static <T> T doPrivileged(PrivilegedExceptionAction<T> action, AccessControlContext context)
throws PrivilegedActionException
Performs the specified PrivilegedExceptionAction with privileges enabled and restricted by the
specified AccessControlContext. The action is performed with the intersection of the permissions
possessed by the caller's protection domain, and those possessed by the domains represented by the
specified AccessControlContext.
If the action's run method throws an unchecked exception, it will propagate through this method.
Parameters:
action - the action to be performed
context - an access control context representing the restriction to be applied to the caller's
domain's privileges before performing the specified action. If the context is null,
then no additional restriction is applied.
Returns:
the value returned by the action's run method
Throws:
PrivilegedActionException - if the specified action's run method threw a checked exception
NullPointerException - if the action is null
See Also:
doPrivileged(PrivilegedAction), doPrivileged(PrivilegedExceptionAction,AccessControlContext)
, I would expect that the untrusted code would execute with no permissions and thus throw an AccessControlException, but that does not happen.
Why?? What can I do to mend that security hole?
Well, the untrusted code does execute with no permissions initially... until it requests that its permissions get restored (nested doPrivileged call with AllPermission AccessControlContext). And that request gets honored, because, according to your Policy, all your code, including the "evil" action, is fully privileged. Put otherwise, doPrivileged is not an "on-demand sandboxing tool". It is only useful as a means for its immediate caller to limit or increase its privileges, within the confines of what has already been granted to it by the policy decision point (ClassLoader + SecurityManager/Policy). Callers further down the line are absolutely free to "revert" that change if entitled to -- once again according to the policy decision point, not the opinion of any previous caller. So this is as-intended behavior and not a security hole.
What workarounds are there?
For one, there certainly is a "canonical" / sane way of using the infrastructure. According to that best practice, trusted code is to be isolated from untrusted code by means of packaging and class loader, resulting in the two being associated with distinct domains that can be authorized individually. If the untrusted code were then only granted permission to, say, read from a particular file system directory, no amount of doPrivileged calls would enable it to, say, open a URL connection.
That aside, one may of course come up with a hundred and two alternatives of creatively (but not necessarily safely) utilizing the different moving pieces of the infrastructure to their advantage.
Here for example I had suggested a custom protection domain with a thread-local for accomplishing roughly what you desire, i.e., on-demand sandboxing of a normally privileged domain throughout the execution of an untrusted action.
Another way of selectively sandboxing code within a single protection domain is by establishing a default blacklist and using DomainCombiners to whitelist trustworthy execution paths. Note that it is only applicable when the SecurityManager is set programmatically.
First, one needs to ensure that no permissions are granted by default, neither via ClassLoader1 nor by Policy2.
Then a "special" AccessControlContext, coupled to a domain combiner that unconditionally yields AllPermission, is obtained as follows:
// this PD stack is equivalent to AllPermission
ProtectionDomain[] empty = new ProtectionDomain[0];
// this combiner, if bound to an ACC, will unconditionally
// cause it to evaluate to AllPermission
DomainCombiner combiner = (current, assigned) -> empty;
// bind combiner to an ACC (doesn't matter which one, since
// combiner stateless); note that this call will fail under
// a security manager (will check SecurityPermission
// "createAccessControlContext")
AccessControlContext wrapper = new AccessControlContext(
AccessController.getContext(), combiner);
// bind wrapper and thus combiner to current ACC; this will
// anew trigger a security check under a security manager.
// if this call succeeds, the returned ACC will have been
// marked "authorized" by the runtime, and can thus be
// reused to elevate permissions "on-demand" in the future
// without further a priori security checks.
AccessControlContext whitelisted = AccessController.doPrivileged(
(PrivilegedAction<AccessControlContext>) AccessController::getContext,
wrapper);
Now a standard security manager can be established to enforce the default blacklist. From this point onward all code will be blacklisted -- save for code holding a reference to the "whitelisted" / "backdoor" ACC, which it can leverage to escape the sandbox:
PrivilegedAction<Void> action = () -> {
System.getSecurityManager().checkPermission(new AllPermission());
return null;
};
// this will succeed, strange as it might appear
AccessController.doPrivileged(action, whitelisted);
// this won't
AccessController.doPrivileged(action);
// neither will this
action.run();
This leaves quite a bit of room for flexibility. One could directly call other trusted code within a "whitelisted" doPrivileged, and whitelisting would conveniently propagate up the call stack. One could alternatively expose the whitelisted ACC itself to trusted components so as to enable them to whitelist trusted execution paths of their own as they please. Or ACCs with limited permissions could be constructed, in the manner depicted above, and shared instead when it comes to code trusted less; or perhaps even specialized "pre-compiled" { action, limited pre-authorized ACC } capability objects.
Needless to say, whitelist propagation also widely opens up the door for bugs3. Unlike standard AccessController.doPrivileged(action) which is whitelist-opt-in, AccessController.doPrivileged(action, whitelisted) is effectively whitelist-opt-out; that is, for authorization to succeed under the former model, it is required that both the latest doPrivileged caller and every caller beyond it have the checked permission; whereas under the latter it suffices if merely the latest doPrivileged caller has it (provided no caller later on invokes standard doPrivileged, thereby reverting to the default blacklist).
Another prominent quirk4 to this approach lies with the fact that trusted third-party (library) code, reasonably expecting a call to standard doPrivileged to elevate its permissions, will be surprised to discover that it in fact causes the opposite.
1 The default application (aka "system") class loader only allows classes to read from their URL of origin (usually a JAR or directory on the local file system). Additionally permission to call System.exit is granted. If either is deemed "too much", a different class loader implementation will be necessitated.
2 Either by a custom subclass that unconditionally implies nothing, or by having the default sun.security.provider.PolicyFile implementation read an empty / "grant-nothing" configuration ("grant {};").
3 The convenient yet dangerous property of whitelist propagation could in theory be countered by use of a stateful combiner that emulates the behavior of standard doPrivileged by means of call stack inspection. Such a combiner might be provided a stable offset from the call stack's bottom upon instantiation, representing the invocation of some doWhitelistedSingleFrame(PrivilegedAction, Permission...) utility method exposed to trustworthy clients. Upon invocation of its combine method due to permission checks subsequently, the combiner would ensure that the stack has not gotten any deeper than offset + 1 (skipping frames associated with SecurityManager, AccessController, AccessControlContext, and itself), and, if it indeed hasn't, yield a modified PD stack evaluating to the desired permissions, while otherwise reverting to the default blacklist. Of course the combiner would have to be prepared to encounter synthetic frames (e.g. lambdas and bridges) at the anchored offset. Additionally it would have to safeguard itself from being used "out of context", via leakage to a different ACC or thread; and it would have to self-invalidate upon the utility method's return.
4 The only straightforward solution here would be to assign permissions to such code in the usual fashion (class loader and/or policy) -- which kind of defeats the goal of the exercise (circumventing the need for separate domains / class loaders / packaging).
For some methods I get the warning. This is what it says when expanded.
The following code (mkdirs()) gives the warning
if (!myDir.exists()) {
myDir.mkdirs();
}
Reports any calls to specific methods where the result of that call is ignored. Both methods specified in the inspection's settings and methods annotated with org.jetbrains.annotations.Contract(pure=true) are checked. For many methods, ignoring the result is perfectly legitimate, but for some methods it is almost certainly an error. Examples of methods where ignoring the result of a call is likely to be an error include java.io.InputStream.read(), which returns the number of bytes actually read, and any method on java.lang.String or java.math.BigInteger, as all of those methods are side-effect free and thus pointless if ignored.
What does it mean? How do I avoid it? How should it be addressed?
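To make the inspection's java.io.InputStream.read() example concrete, here is a minimal sketch (of my own) showing why ignoring that return value is dangerous:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ReadResultDemo {
    public static void main(String[] args) throws IOException {
        InputStream in = new ByteArrayInputStream(new byte[] {1, 2, 3});
        byte[] buf = new byte[8];
        int n = in.read(buf); // may fill only part of buf
        // n is 3, not 8 -- code that ignores n would process 5 stale bytes
        System.out.println(n);
    }
}
```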
It is possible to suppress the warning with an annotation:
@SuppressWarnings("ResultOfMethodCallIgnored")
public void someMethod() {
    ...
    myDir.mkdirs();
    ...
}
If the directory already exists, mkdirs() will return false; if it does not exist, it will be created (given the appropriate rights, of course). Therefore, IMHO, the check via exists() can be omitted.
However, as indicated above, if you are working with this directory, it is a good idea to make sure that it exists.
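A minimal sketch of these return-value semantics (names here are my own); a common idiom is to treat "already exists as a directory" as success:

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

public class MkdirsDemo {
    // Succeeds whether the directory was created just now or already existed.
    static void ensureDirectory(File dir) throws IOException {
        if (!dir.mkdirs() && !dir.isDirectory()) {
            throw new IOException("could not create " + dir);
        }
    }

    public static void main(String[] args) throws IOException {
        File dir = new File(Files.createTempDirectory("demo").toFile(), "sub");
        System.out.println(dir.mkdirs()); // true: freshly created
        System.out.println(dir.mkdirs()); // false: it already exists
        ensureDirectory(dir);             // fine either way
    }
}
```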
This method creates the directory at the path you specified, if it is not already there. It returns true/false depending on whether the directory was created. In some situations, e.g. low storage, it may not be created at the specified path, and if you then try to write file contents into that directory, an IOException will be thrown. So you should use an if condition:
if (myDir.mkdirs()) {
    // execute whatever you want to do
} else {
    // show an error message for any failure
}
My process simply adds some content to the system variable PATH. Currently I'm doing this with a Process that uses setx.exe:
public void changePath(String newPath ) {
String path = System.getenv("PATH") + ";";
String[] cmd = new String[]{"C:\\Windows\\System32\\setx.exe", "PATH",
path+newPath, "-m"};
ProcessBuilder builder = new ProcessBuilder(cmd);
...
}
So I tried to write a test case to it.
class UpdatePathTest {
    @Test
    public void testUpdatePath() {
        // call the method that updates the path
        changePath("C:\\somebin");
        assertTrue(System.getenv("PATH").contains("C:\\somebin")); // fails
        // ProcessBuilder with command String[]{"cmd", "/C", "echo", "%PATH%"} will fail too,
        // and the above in a new Thread will fail too.
    }
}
So, is there any way to get the new PATH value? Writing the new path is the only option, because I'm developing a jar that will install a desktop application.
I'm not sure changing the path in a unit test is a good idea. What if the test fails? You will have to make sure you do all the relevant tidy-up.
Consider inverting your dependencies and use dependency injection.
This article explains it quite well I think.
So instead of having a method that does:
public void method() {
String path = System.getenv("PATH") + ";";
//do stuff on path
}
consider doing:
public void method(String path) {
//do stuff on path
}
which allows you to stub the path. If you cannot change the signature of the method then consider using the factory pattern and using a test factory to get the path.
EDIT: after update to question
What you have to think about here is what you are testing. When you call C:\Windows\System32\setx.exe, you have read the API docs and are calling it with the correct parameters. This is much like calling another method in a Java API. For example, if you are manipulating a list, you "know" it is zero-based. You do not need to test this; you just read the API, and the community backs you up on this. For testing changePath, I think you probably want to test what is going into ProcessBuilder. Again, you have read the API docs and you have to assume that you are passing in the correct variables. (See //1 at the bottom.) And again, you have to assume that ProcessBuilder works properly and that the Oracle (or most likely Sun) guys have implemented it according to the API documents.
So what you want to do is check that you are passing variables to ProcessBuilder that match the specification as you understand it. For this you can mock ProcessBuilder and then verify that you are passing the correct parameters and calling the correct method on this class.
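One way to make that checkable without a mocking framework (a sketch; the class and method names are my own): extract the command construction into a pure function that the test can inspect before anything reaches ProcessBuilder:

```java
import java.util.Arrays;

public class PathCommand {
    // Builds the setx invocation; pure, so a unit test can assert on it
    // without launching any process.
    static String[] buildSetxCommand(String currentPath, String newEntry) {
        return new String[] {
            "C:\\Windows\\System32\\setx.exe", "PATH",
            currentPath + ";" + newEntry, "-m"
        };
    }

    public static void main(String[] args) {
        String[] cmd = buildSetxCommand("C:\\existing", "C:\\somebin");
        System.out.println(Arrays.toString(cmd));
    }
}
```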
In general it is a hard one to test because you don't want to test the windows functions but want to test java's interaction with them.
//1 The main problem I have had with calling these external commands is understanding the API documents correctly or setting up the command. Usually you have to get the command line out and check that you are using the methods correctly (especially cmd functions). This can mean that you work out how to use the cmd function, code it into ProcessBuilder, and then write a test (or vice versa on the ProcessBuilder/test). Not the ideal way, but sometimes documents are hard to understand.
I'm trying to load a class via a URLClassLoader (well, it doesn't work with a normal class loader either) and want it to not have any permissions.
Therefore, I created my own security manager, which generates a key on startup that can only be requested once (in the main thread).
The security manager has two lists: the applicationThread list, which will be granted every right, and the temporaryList, which will be granted one right just once (it's about the reflection).
As it is very hard to describe, I decided to upload the whole thing: see the link below.
OK, coming back: I created a WatchDog thread, which checks whether the thread takes too much time.
When I instantiate two classes from a URLClassLoader, I can call exactly 30 methods without getting any errors, but on the 31st call it tries to check permissions for the following, and this only happens after the 30th call:
(java.lang.RuntimePermission accessClassInPackage.sun.reflect)
Does anyone know what's going on there?
edit:
I had time to strip down the example.
http://myxcode.at/securitymanager.zip
I found out that the SecurityManager is not consulted synchronously. Just run this small piece of code and have a look at the red lines.
If the red lines come in the very first line, just run the program again; you will find that it seems a little uncontrolled.
The problem, more or less, is that I need the security manager to be synchronized.
Here is my output for those who cannot reproduce the error (bug?):
http://pastebin.com/E9yLRLif
Edit 2: maybe it's about the console? Maybe the console is too slow?
For me the check occurs when i=15:
checkPermission ( (java.lang.RuntimePermission accessClassInPackage.sun.reflect) ) for Thread[main,5,main]
The reason for the delayed permission check is the inflationThreshold of the ReflectionFactory class, which is used by the invoke method of NativeMethodAccessorImpl.java:
public Object invoke(Object obj, Object[] args)
    throws IllegalArgumentException, InvocationTargetException {
    if (++numInvocations > ReflectionFactory.inflationThreshold()) {
        MethodAccessorImpl acc = (MethodAccessorImpl)
            new MethodAccessorGenerator().generateMethod(
                method.getDeclaringClass(),
                method.getName(),
                method.getParameterTypes(),
                method.getReturnType(),
                method.getExceptionTypes(),
                method.getModifiers());
        parent.setDelegate(acc);
    }
    return invoke0(method, obj, args);
}
To disable the delay you could use the Reflection API :)
Field hack = Class.forName("sun.reflect.ReflectionFactory").getDeclaredField("inflationThreshold");
hack.setAccessible(true);
hack.set(null, 0);
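As an alternative to the reflection hack, HotSpot exposes system properties controlling inflation (undocumented and unsupported, so treat this as an assumption about OpenJDK internals). Forcing immediate bytecode generation should make the permission check happen on the very first reflective call:

```shell
# HotSpot-specific, unsupported property; MainClass is a placeholder
# for your own entry point
java -Dsun.reflect.noInflation=true MainClass
```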