So I have a bit of code that creates an instance of a class:
Class<?> c = Class.forName("MyClass");
Constructor<?> cons = c.getConstructor();
cons.setAccessible(true);
Object instance = cons.newInstance();
Now I want to set some restrictions on that instance. When I call:
instance.doSomething();
I want to set restrictions on that bit of code (of the instance), so that the methods called from that instance cannot do anything fishy (system calls, file operations, ...).
I have tried to set a security manager, but that restricts all of the code (I still want to read/write files for the rest of my code).
Is it possible to restrict only certain objects?
TL;DR: Code
The question is essentially "How do I invoke a method on a particular instance, with privileges lower than normal?". There are three requirements here:
Code is to be authorized on a per-instance basis. An instance is privileged by default.
An instance may be selectively blacklisted, i.e., it may be accorded lower privileges than it normally would have been, for the duration of a method invocation that it receives.
Blacklisting must propagate to code executed on the receiver's behalf, specifically any objects of the same type that it interacts with, itself included; otherwise, if, say, the receiver were in turn to call
AccessController.doPrivileged((PrivilegedAction<Void>) () -> {
    this.doSomethingElse();
    return null;
});
doSomethingElse() would escape the sandbox.
All three are problematic:
The first one is not really1 achievable, because it presupposes that the runtime maintain—and expose—information about the instances (rather than merely the classes) on threads' execution stacks, which it does not2.
The second and third are only achievable as long as any blacklisted code does not assert its own (default, class-based) privileges via AccessController.doPrivileged(...), which, by design, it may at any time choose to.
Is there an alternative?
Well, how far are you willing to go? Modify AccessController / AccessControlContext internals? Or worse yet, internals of the VM? Or perhaps provide your own SecurityManager that reimplements the aforementioned components' functionality from scratch, in a way that satisfies your requirements? If the answer to all is "no", then I fear that your options are limited.
As an aside, you should ideally be able to make a binary choice when asked "Can this particular code, i.e. class, be entrusted with the particular privileges or not?", for this would tremendously simplify3 things. Unfortunately you cannot. To make matters worse, you presumably cannot modify the implementation of the class either, so that all of its instances could be considered, with regard to a specific set of privileges, either trustworthy or not; nor do you wish to simply mark the class, and therefore all of its instances, as untrusted (which I do believe you should!) and live with it.
Moving on to the proposed workaround. To overcome the shortcomings listed earlier, the original question will be rephrased as follows: "How do I invoke a method with elevated privileges accorded to the method receiver's ProtectionDomain?" I am going to answer this derivative question instead, suggesting, in contrast to the original one, that:
Code is to be authorized by the ProtectionDomain of its class, as is normally the case. Code is sandboxed by default.
Code may be selectively whitelisted, for the duration of a method invocation under a particular thread.
Whitelisting must propagate4 to code of the same class called by the receiver.
The revised requirements will be satisfied by means of a custom ClassLoader and DomainCombiner. The purpose of the first is to assign a distinct ProtectionDomain per class5; the purpose of the second is to temporarily replace the domains of individual classes within the current AccessControlContext for "on-demand whitelisting" purposes. The SecurityManager is additionally extended to prevent thread creation by unprivileged code4.
Note: I relocated the code to this gist to keep the post's length below the limit.
Standard disclaimer: Proof-of-concept code—take with several tablespoons of salt!
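For a rough idea of the combiner half of the approach, here is a minimal sketch (the class name, the whitelisting criterion and the elevate-to-AllPermission strategy are illustrative assumptions of mine, not the actual code from the gist):

import java.security.*;
import java.util.*;

// Sketch: a DomainCombiner that, for the duration of the current context,
// swaps the ProtectionDomain of whitelisted classes for an equivalent domain
// holding AllPermission.
public class WhitelistingDomainCombiner implements DomainCombiner {

    private final Set<ProtectionDomain> whitelisted;

    public WhitelistingDomainCombiner(Set<ProtectionDomain> whitelisted) {
        this.whitelisted = whitelisted;
    }

    @Override
    public ProtectionDomain[] combine(ProtectionDomain[] current,
                                      ProtectionDomain[] assigned) {
        List<ProtectionDomain> result = new ArrayList<>();
        if (current != null) {
            for (ProtectionDomain pd : current) {
                result.add(whitelisted.contains(pd) ? elevate(pd) : pd);
            }
        }
        if (assigned != null) {
            result.addAll(Arrays.asList(assigned));
        }
        return result.toArray(new ProtectionDomain[0]);
    }

    // Same identity (code source, class loader, principals), but AllPermission.
    private static ProtectionDomain elevate(ProtectionDomain pd) {
        Permissions perms = new Permissions();
        perms.add(new AllPermission());
        return new ProtectionDomain(pd.getCodeSource(), perms,
                pd.getClassLoader(), pd.getPrincipals());
    }
}

Such a combiner would be activated by wrapping the current context, e.g. new AccessControlContext(AccessController.getContext(), combiner), and invoking the receiver's method through AccessController.doPrivileged(action, thatContext).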
Running the example
Compile and deploy the code as suggested by the example policy configuration file, i.e., there should be two6 unrelated classpath entries (e.g. sibling directories at the filesystem level)—one for classes of the com.example.trusted package, and another for com.example.untrusted.Nasty.
Ensure also that you have replaced the policy configuration with the example one, and have modified the paths therein as appropriate.
Lastly run (after having appropriately modified the classpath entries, of course):
java -cp /path/to/classpath-entry-for-trusted-code:/path/to/classpath-entry-for-untrusted-code -Djava.system.class.loader=com.example.trusted.DiscriminatingClasspathClassLoader com.example.trusted.Main
The first call to the untrusted method should hopefully succeed, and the second fail.
1 It would perhaps be possible for instances of a specially crafted class (having, e.g., a domain of their own, assigned by some trusted component) to exercise their own privileges themselves (which does not hold true in this case, since, it appears, you have no control over the implementation of the instance's class). Nevertheless, this would still not satisfy the second and third requirements.
2 Recall that, under the default SecurityManager, a Permission is granted when all ProtectionDomains—to which normally classes, rather than instances, are mapped—of the thread's AccessControlContext imply that permission.
3 You would then simply have to grant permissions at the policy level, if you deemed the class trustworthy, or otherwise grant nothing at all, rather than have to worry about permissions per instance per security context and whatnot.
4 This is a hard decision: If whitelisting did not affect subsequent callees of the same type, the instance would not be able to call any privilege-requiring methods on itself. Now that it does, on the other hand, any other instances of the same type that the original whitelisted method receiver interacts with become privileged too! Thus you must ensure that the receiver does not call any "untrusted" instances of its own kind. It is for the same reason a bad idea to allow the receiver to spawn any threads.
5 As opposed to the strategy employed by the default application ClassLoader, which is to group all classes that reside under the same classpath entry within a single ProtectionDomain.
6 The reason for the inconvenience is that the ProtectionDomain, which our custom application ClassLoader's class gets mapped to by its parent, has a CodeSource implying all CodeSources referring to files under the loader's classpath entry. So far so good. Now, when asked to load a class, our loader attempts to discern between system/extension classes (loading of which it delegates to its parent) and application classes, by testing whether the .class file is located below JAVA_HOME. Naturally, to be allowed to do so, it requires read access to the filesystem subtree beneath JAVA_HOME. Unfortunately, granting the corresponding privilege to our loader's (notoriously broad) domain, implicitly grants the privilege to the domains of all other classes residing beneath the loader's classpath entry, including untrusted ones, as well. And that should hopefully explain why classpath entry-level isolation between trusted and untrusted code is necessary. There are of course workarounds, as always; e.g. mandating that trusted code be additionally signed in order to accrue any privileges; or perhaps using a more flexible URL scheme for code source identification, and/or altering code source implication semantics.
Further reading:
Default Policy Implementation and Policy File Syntax
API for Privileged Blocks
Secure Coding Guidelines for Java SE - §9 - Access Control
Troubleshooting Security
Historical note: Originally this answer proposed a nearly identical solution that relied on JAAS's SubjectDomainCombiner, rather than a custom one, for dynamic privilege modification. A "special" Principal would be attached to specific domains, which would then accrue additional Permissions upon evaluation by the Policy, based on their composite CodeSource-Principal identity.
Related
I am working on developing a library that needs to instantiate and return untrusted objects downloaded from an external website. At a high-level, the library works as follows:
Clients of the library request a class from a remote source.
My library instantiates that object, then returns it to the user.
This is a major security risk, since the untrusted code can do just about anything. To address this, my library has the following design:
I enable the SecurityManager and, when instantiating the untrusted object, I use an AccessController to handle the instantiation in a context where there are no privileges.
Before returning the object back to the client, I wrap the object in a decorator that uses an AccessController to forward all method requests to the underlying object in a way that ensures that the untrusted code is never run with any permissions.
It occurs to me, though, that this might not be the most elegant solution. Fundamentally, I want to strip away all permissions from any object of any type downloaded from the remote source. My current use of AccessController is simply a way of faking this up by intercepting all requests and dropping privileges before executing them. The AccessController approach also has its own issues:
If the wrapped object has any methods that return objects, those returned objects have to themselves be wrapped.
The wrapper code will potentially be thousands of lines long, since every exported method has to be secured.
All of the methods exported by the downloaded object have to be known in advance in order to be wrapped.
My question is this: is there a way to load classes into the JVM (probably using a custom ClassLoader) such that any instances of those classes execute their methods with no permissions?
Thanks!
You will want to call defineClass with an untrusted ProtectionDomain.
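A minimal sketch of what that might look like (class and method names are placeholders; fetching the bytes from the remote source is omitted):

import java.security.*;

// Sketch: a loader that defines downloaded classes in a ProtectionDomain
// holding no permissions at all, so every permission check against it fails.
public class UntrustedClassLoader extends ClassLoader {

    private final ProtectionDomain untrustedDomain;

    public UntrustedClassLoader(ClassLoader parent) {
        super(parent);
        Permissions none = new Permissions();
        none.setReadOnly();
        CodeSource source =
                new CodeSource(null, (java.security.cert.Certificate[]) null);
        // Two-argument ProtectionDomain: a "static" domain that is never
        // re-consulted against the Policy, i.e. it stays permissionless.
        this.untrustedDomain = new ProtectionDomain(source, none);
    }

    public Class<?> defineUntrusted(String name, byte[] bytes) {
        return defineClass(name, bytes, 0, bytes.length, untrustedDomain);
    }
}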
Your current solution has a number of problems. It doesn't appear to cover the static initialiser. It may be possible to install code into some mutable arguments. Methods that use the immediate caller will still be privileged (AccessController.doPrivileged, say). But most of all, it falls apart when rubbing up against any kind of global - for instance running a finaliser.
Don't know if there's a way to directly do what you asked, but I think your approach can be simplified by using interfaces and dynamic proxies. Basically, if you have an interface for the object to be returned, and all its methods return either simple types or interfaces, then you can wrap all the methods and their return values automatically, without knowing the methods in advance. Just implement an InvocationHandler that does the AccessController magic in its invoke method, and create proxies using Proxy.newProxyInstance(...).
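To illustrate, a sketch under the assumption that the untrusted object implements a known interface (all names here are mine):

import java.lang.reflect.*;
import java.security.*;

// Sketch: wrap an untrusted object in a dynamic proxy that forwards every
// interface call inside a permissionless AccessControlContext.
public final class Sandboxer {

    private static final AccessControlContext NO_PERMISSIONS =
            new AccessControlContext(new ProtectionDomain[] {
                    new ProtectionDomain(null, new Permissions())
            });

    @SuppressWarnings("unchecked")
    public static <T> T sandbox(Class<T> iface, T target) {
        InvocationHandler handler = (proxy, method, args) -> {
            try {
                return AccessController.doPrivileged(
                        (PrivilegedExceptionAction<Object>) () ->
                                method.invoke(target, args),
                        NO_PERMISSIONS);
            } catch (PrivilegedActionException e) {
                // Rethrow the exception thrown inside the privileged action.
                throw e.getCause();
            }
        };
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(), new Class<?>[] { iface }, handler);
    }
}

As noted in the question, return values that are themselves objects would still need to be wrapped the same way.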
I always thought that the SecurityManager included a check method, called whenever Method/Field.setAccessible() was attempted, that was passed a Permission containing the name of the method/field's enclosing class, the member name, etc. Apparently it does not, which is a shock.
I had an idea that it would be possible to solve this problem by using a ClassLoader that rewrote attempts such as
Method.setAccessible()
to
MethodHelper.setAccessible( Method );
The MethodHelper method could set a thread local, which my security manager could look at and clear to get the actual Method.
This of course has some potential flaws, as it requires class-file rewriting, which can only happen for non-system classes.
The same approach could be taken for retrieving methods, fields, etc., which today do not make the member available to the SecurityManager in any form.
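To make the idea concrete, the rewritten call sites could target something like this (purely illustrative; the class-file rewriting itself is not shown):

import java.lang.reflect.Method;

// Sketch: the helper the rewritten bytecode would call instead of
// Method.setAccessible(true). The custom SecurityManager can inspect PENDING
// during its ReflectPermission("suppressAccessChecks") check to learn which
// member is being opened up.
public final class MethodHelper {

    public static final ThreadLocal<Method> PENDING = new ThreadLocal<>();

    public static void setAccessible(Method method) {
        PENDING.set(method);
        try {
            method.setAccessible(true); // triggers the SecurityManager check
        } finally {
            PENDING.remove();
        }
    }
}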
Are there any FOSS libraries that package the above functionality ?
I am playing around with the java .policy file and was wondering how I could go about doing something like preventing calls to java.util.Date(), as an example.
I just want to get a better sense of the .policy file works and how it can be used to sandbox code.
You'll be out of luck there I'm afraid.
As Paŭlo Ebermann says, package.access can block out package hierarchies. You can be more precise about this with a custom SecurityManager, which is usually a damn good indication you are doing something really dodgy.
In general you can make a ClassLoader that doesn't always delegate to its parent. Technically that's against the current Java SE spec, although the Java EE spec encourages it. You could block out java.util.Date. It's still accessible through reflection if any other class references it, or if you can get hold of an instance of it. You could block out the transitive closure of uses of Date, including anything that in some way returns a Date. However, to complete the scheme with your own minimal Date you'd have to load a java.util.Date in your class loader, which you can't do (as with all other java.* classes).
So, err, replace the java.util.Date class in rt.jar (possibly using a Java agent), and, in any class you don't want to restrict, substitute new Date() with new Date(System.currentTimeMillis()).
(Btw, +1 to anything that reduces the dependency on System.currentTimeMillis() and other magic methods.)
To restrict access to certain packages, you actually have to change not the .policy file, but the security.properties file. There is an entry package.access=... which lists the packages for which RuntimePermissions are needed. So you can't specifically restrict access to a single class, only to a whole package (including subpackages, if needed), e.g. java.util.
(You can alternatively access this via the Security.getProperty/setProperty methods.)
If you did this, you later can add the right RuntimePermission to the Policy to let the "good" code use it.
I think quite a good part of the JRE would cease working if you restricted access to java.util, so better try another package for testing.
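To make the mechanism concrete, something along these lines (the package name is just an example, and the code itself needs permission to change security properties):

import java.security.Security;

public class RestrictPackageDemo {
    public static void main(String[] args) {
        // Append a package to the package.access list; code lacking the
        // corresponding RuntimePermission("accessClassInPackage.com.example.internal")
        // will then be refused access to classes in that package.
        String current = Security.getProperty("package.access");
        Security.setProperty("package.access", current + ",com.example.internal");
    }
}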
The way the sandbox mostly works is that classes doing security-sensitive stuff call the current SecurityManager to check whether or not such a call should succeed. Since the Date class isn't perceived to be security-sensitive, no such calls exist in its code, and that's why - as explained by Tom and Paulo - it is very difficult to restrict it.
For example, in contrast: File operations are perceived to be security sensitive and that's why the File class has calls to the SecurityManager. As an example the delete method:
public boolean delete() {
    SecurityManager security = System.getSecurityManager();
    if (security != null) {
        security.checkDelete(path);
    }
    return fs.delete(this);
}
And thanks to the SecurityManager check in the File class you can restrict File delete operations in the .policy file with more ease.
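For instance, a grant along these lines (paths are placeholders) lets code from one codebase delete files under /tmp while everything else is denied:

grant codeBase "file:/path/to/trusted/classes/" {
    permission java.io.FilePermission "/tmp/-", "delete";
};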
I have been using quite a lot of
System.getProperty("property")
in order to obtain environmental information. However, it seems to me that Sun prefers the following:
(String) java.security.AccessController.doPrivileged(
        new sun.security.action.GetPropertyAction("property"));
The strange thing is that this code involves a cast and as a result should be slightly slower than the
System.getProperty
implementation, which only uses a security manager check and then instantly fetches the property from the instance variable props. My question is: why did Sun choose to use the second method to obtain most environmental variables in their code internally, while
System.getProperty
seems like the faster way to go?
Both methods have a different meaning, and thus the right one has to be used depending on what the current code needs to do.
The code System.getProperty("property") says "Give me the value of the property, if the current security context allows me to read it."
The code that uses doPrivileged says "Give me the value of the property, if the current class (where this line of code is in) is allowed to read it."
The difference comes into play, when the protection domain of the current class is different from the currently active security context.
For example, consider a framework which executes the code of a plugin, which is untrusted. So the framework uses a SecurityManager to restrict the actions of the untrusted plugin code. But of course the plugin may call some methods of the framework, and suppose that one of these methods needs to read a property. Now as the method is called from untrusted restricted code, it is itself restricted and thus reading the property would fail. But of course the framework trusts itself and wants itself to be able to read that property, even in the case that somewhere in the call stack is untrusted code. That's when you need to use doPrivileged. It basically says "no matter what is up there in the call stack, I am a piece of framework code, and I am allowed to do whatever the framework code is allowed to do". So reading the property using the second method succeeds.
Of course one needs to be careful when using doPrivileged in order not to let the (untrusted) calling code do too much. If, for example, the framework code offers the following method to the plugin:
public String getProp(String key) {
    return (String) java.security.AccessController.doPrivileged(
            new sun.security.action.GetPropertyAction(key));
}
this would completely invalidate the policy that the untrusted code is not allowed to read system properties, because it can just use your method.
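By contrast, a safer variant fixes the key inside the framework code, so callers cannot choose what gets read (a sketch; the property name is made up):

public String getFrameworkHome() {
    // Hard-coded key: untrusted callers cannot abuse this method to read
    // arbitrary system properties.
    return java.security.AccessController.doPrivileged(
            (java.security.PrivilegedAction<String>) () ->
                    System.getProperty("framework.home"));
}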
So use this method only when you know it is safe to do it, and only when you need it (which is, when you want your code to be able to do more than some other code should be able to do directly). Inside a normal application (which usually runs with no SecurityManager or the same security context for all code), there is no difference and the first method should be used.
I would recommend to stick with System.getProperty() since sun.security.action.GetPropertyAction seems to be proprietary to SUN and will not work on all Java VM implementations. Even the compiler warns you about it as:
warning: sun.security.action.GetPropertyAction is Sun proprietary API and may be removed in a future release
To understand what it actually means see this answer.
The reason to use a class like sun.security.action.GetPropertyAction is to avoid loading several, basically identical classes.
If you wrote:
(String) java.security.AccessController.doPrivileged(
    new java.security.PrivilegedAction<java.lang.String>() {
        public String run() {
            return System.getProperty("property");
        }
    }
);
Each place you wanted to get a system property, you would load a new, basically identical, anonymous class. Each class takes system resources and lives as long as the containing ClassLoader (forever, in the case of the boot class loader).
Check out the javap output for more details:
javap -c -v -p sun.security.action.GetPropertyAction
I have a class that manages user preferences for a large software project. Any class in the project that may need to set or retrieve a user preference from a persistent store is to call the static methods on this class. This centralized management allows the preferences to be completely wiped programmatically - which would be impossible if each pref was handled close to its use code, sprinkled throughout the project.
I ran into another implication of the centralization design in the course of this. The software has a public API. That API can be provided by itself in a jar. Classes in that API might refer to the pref management class. So, the pref manager has to go in the API jar.
Each preference might have a default value. Upon software startup, that default might be computed. The algorithm depends on the preference, and thus tends to reside near the use code. So if the pref manager needs to come up with a default, it calls the class in question.
But now that pref manager has become an "octopus class", sucking in all sorts of classes into the API jar that shouldn't be there. If it doesn't, then programs using the API jar quickly run into ClassDef exceptions. If it does, then the API jar is now bloated, as each of those other classes may refer to still others.
In general, do other Java programmers manage their preferences with a centralized class?
Does it make sense to distribute that static pref management class as part of a public API?
Should that pref manager be the keeper of the code for determining defaults?
IMHO, I think that the answer to your first question is "yes" and "no".
Preferences are commonly handled as a centralized class, in the sense that the class is a "sink" for many classes in the project. Trying to do it closer to the calling code means that if the same preference is later useful elsewhere, you are in trouble. In my experience, trying to put the preferences "too close" also results in a very inconsistent handling.
That being said, it is often preferable to use multiple preference classes or "preference set", each supporting a module or submodule. If you look at the main components of your architecture, you will often find that the set of preferences can be logically partitioned. This reduces the mess in each preference class. More importantly, it will allow you to split your program into multiple jars in the future. Now, the "default value" calculators can be placed in the module but still in a global enough area.
I would also suggest not exposing preferences directly as static methods, but rather using some getInstance()-like operation to obtain a shared instance of the preferences manager, and then operating on it. Depending on your semantics, you may want to lock that object for a while (e.g., while the user is editing preferences in a UI), and this is easier if you have an actual object.
For your other questions, I would say that your public API should have a way of letting users change preferences, but only if you can document well enough what the results of these changes could be.
If you use a single API function to get the "preference manager", you can give users the possibility of providing their own "default values calculator". The preference manager will ask this calculator first before resorting to the one you have provided by default.
Can't you just handle preferences in a really generic way? You'd then just use the preference manager to handle the persistence. So from a class you'd just say to the preference manager PreferenceManager.setPreference(key, value) and it doesn't care what it's saving in terms of the semantics of the data.
Or am I simplifying this too much?
I'm no Java Dev, but as far as the whole "octopus class" thing goes, can you not just supply an interface to the jar and connect the calls between the interface and the prefs manager at runtime, using the application configuration file to determine the prefs manager to instantiate?
Something like the .NET provider pattern?
That way you can decouple your jar from the prefs manager.
You might want to look at Cocoa's NSUserDefaults class for inspiration. It handles the problem you describe by having several layers of preferences, called domains. When you look up the value for a key, such as "PrintUsingAllCaps", it first checks for a value in the user's local domain. If it isn't found there, it can check the system-wide domain, or a network-level domain, and so on.
The absolute last place it checks is called the "Registration Domain", which is basically where hard coded defaults are supposed to go. So, at any point in my code, I can write a preference into the registration domain, and NSUserDefaults will only serve that value if the user hasn't overridden it.
So, in your case, you could provide a method for classes to set a default value for a key before it accesses the (possibly) user defined value. The preferences class doesn't have to know anything about the classes it is serving.
As somebody else suggested, if you need something more sophisticated, you could set a DefaultValueProvider callback object instead of a straight value.
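In Java terms, a sketch of that registration-domain idea might look like this (the class name and the java.util.prefs node are my own choices):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.prefs.Preferences;

// Sketch: callers register hard-coded defaults up front; lookups fall back to
// those defaults only when the user has not overridden the key in the store.
public final class PrefManager {

    private static final Map<String, String> REGISTERED_DEFAULTS =
            new ConcurrentHashMap<>();
    private static final Preferences STORE =
            Preferences.userRoot().node("myapp");

    private PrefManager() {}

    public static void registerDefault(String key, String value) {
        REGISTERED_DEFAULTS.put(key, value);
    }

    public static String get(String key) {
        return STORE.get(key, REGISTERED_DEFAULTS.get(key));
    }

    public static void set(String key, String value) {
        STORE.put(key, value);
    }
}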
I deleted my first answer since I misunderstood what the author was asking.
To actually address the actual question--it feels like your desire to place preferences (and the calculation of the default values) with the code that uses them makes sense.
Could you meet both requirements by using a preferences container class for each area that follows a pattern for that area, but having it register with a "Global" preference object collection?
Your global collection could do things like iterate over each set of preferences and reset it to defaults but your preferences themselves would still be locally defined and maintained so that it doesn't spider out into other parts of the code.
The only problem I can see is that if you allow the preference object to register itself when instantiated, you run the risk of trying to "reset to defaults" with some of the preferences not instantiated yet.
I suppose this could be fixed by having the main "preference" class instantiate all the others; then any piece of code could retrieve its local preference object from the central one through a static getter.
This seems to be a lot like how some loggers work. There is a central mechanism for maintaining log levels, output streams and such, but each class has its own "log" instance and logs to it.
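A rough sketch of that register-with-a-global-collection pattern (this is the self-registering variant, so the instantiation-order caveat above applies):

import java.util.ArrayList;
import java.util.List;

// Sketch: each module keeps its own preference container, which registers
// itself with a global collection that can reset everything in one pass.
abstract class PreferenceSet {
    protected PreferenceSet() {
        GlobalPreferences.register(this);
    }
    abstract void resetToDefaults();
}

final class GlobalPreferences {
    private static final List<PreferenceSet> SETS = new ArrayList<>();

    static synchronized void register(PreferenceSet set) {
        SETS.add(set);
    }

    static synchronized void resetAll() {
        for (PreferenceSet set : SETS) {
            set.resetToDefaults();
        }
    }
}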
I hope this was more on target. Oh, I also agree with the accepted answer, don't ever make all your methods public static, always use a getter--you'll be glad you did some day.
The JSR-10 (java.util.prefs.*) API uses a factory method with a Class<?> parameter to create Preferences instances. That way the API can store preferences from different classes belonging to the same package in a single file.
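For reference, a minimal usage sketch (class and key names are arbitrary); the factory method in question is Preferences.userNodeForPackage(Class<?>):

import java.util.prefs.Preferences;

public class PrefsDemo {
    public static void main(String[] args) {
        // Classes in the same package share the node returned here,
        // and hence the same backing store location.
        Preferences prefs = Preferences.userNodeForPackage(PrefsDemo.class);
        prefs.put("fontSize", "12");
        // The second argument is the fallback when the key is unset.
        System.out.println(prefs.get("fontSize", "10"));
    }
}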