Spring Security and Azure: is there an Active Directory group wildcard?

The tutorial at https://learn.microsoft.com/en-us/azure/developer/java/spring-framework/configure-spring-boot-starter-java-app-with-azure-active-directory explains how to set up Spring Security with authentication against Microsoft Azure Active Directory.
Apart from two small differences (explained in OpenID Connect login with Office 365 and Spring Security), this works fine.
In my application.properties there is this property:
azure.activedirectory.active-directory-groups=myADUserGroup
(Hint: azure.activedirectory.active-directory-groups seems to be the deprecated predecessor of the newer azure.activedirectory.user-group.allowed-groups ...)
I don't want to restrict access to particular groups. Every user with a valid Microsoft account is fine for my use case.
Leaving the property blank or even deleting the property leads to this exception:
Caused by: java.lang.IllegalStateException: One of the User Group Properties must be populated. Please populate azure.activedirectory.user-group.allowed-groups
at com.microsoft.azure.spring.autoconfigure.aad.AADAuthenticationProperties.validateUserGroupProperties(AADAuthenticationProperties.java:148) ~[azure-spring-boot-2.3.1.jar:na]
A possible workaround is to enter some arbitrary group name for the property in application.properties:
azure.activedirectory.active-directory-groups=some-arbitrary-group-name-doesnt-matter
and just do not use @PreAuthorize("hasRole('[group / role name]')").
This works (as long as your app is not interested in the role names) but it does not feel correct.
A) Is there a "right" way to set a wildcard active-directory-group?
B) org.springframework.security.core.Authentication.getAuthorities() seems to deliver only those group/role names that are entered in that property, so the workaround delivers none (apart from ROLE_USER). I want to read all of the user's groups/roles. So a second question: how can I get all roles from org.springframework.security.core.Authentication.getAuthorities() without knowing all of them in advance, and especially without entering all of them into the "azure.activedirectory.active-directory-groups" property?

For now, setting a wildcard for Azure Active Directory groups is not supported.
You can raise the idea on Azure AD feedback; if others have the same need, they will upvote it, and enough votes will help get the feature implemented.

It's not a group wildcard, but if stateless processing suits your need,
azure.activedirectory.active-directory-groups=...
may be replaced with
azure.activedirectory.session-stateless=true
This will activate AADAppRoleStatelessAuthenticationFilter instead of AADAuthenticationFilter; the stateless filter doesn't require specifying groups via azure.activedirectory.active-directory-groups.
The roles you want to use have to be declared in the application manifest of the app registration.
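For reference, app roles are declared in the appRoles section of the app registration's manifest. A minimal sketch with placeholder values (the id must be a freshly generated GUID):
"appRoles": [
    {
        "allowedMemberTypes": [ "User" ],
        "description": "Placeholder description",
        "displayName": "Designer",
        "id": "<generate-a-unique-guid>",
        "isEnabled": true,
        "value": "DESIGNER"
    }
]
The value field should be what shows up as the role name on the Spring Security side.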

As there is no support for a wildcard for groups at the moment, I built a workaround by ignoring whether the user group is valid or not.
For this I made a copy of com.microsoft.azure.spring.autoconfigure.aad.AzureADGraphClient and commented out this code snippet:
.filter(this::isValidUserGroupToGrantAuthority)
and I made a copy of com.microsoft.azure.spring.autoconfigure.aad.AADOAuth2UserService with
graphClient = new MyAzureADGraphClient(...
instead of
graphClient = new AzureADGraphClient(...
And in the SecurityConfiguration I injected the AAD properties:
@Autowired(required = false)
private AADAuthenticationProperties aadAuthenticationProperties;

@Autowired(required = false)
private ServiceEndpointsProperties serviceEndpointsProps;
and called my own AADOAuth2UserService in void configure(HttpSecurity http):
EvaAADOAuth2UserService oidcUserService = new EvaAADOAuth2UserService(aadAuthenticationProperties, serviceEndpointsProps);
httpSecurity.oauth2Login().loginPage(LOGIN_URL).permitAll().userInfoEndpoint().oidcUserService(oidcUserService);
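With that change in place, every AD group of the user should end up as a granted authority, so question B can be handled with plain Spring Security API. A minimal sketch, assuming the copied graph client no longer filters groups; only standard Spring Security classes are used:
import java.util.List;
import java.util.stream.Collectors;
import org.springframework.security.core.Authentication;
import org.springframework.security.core.GrantedAuthority;
import org.springframework.security.core.context.SecurityContextHolder;

// List all granted authorities, i.e. all groups/roles of the logged-in user.
Authentication authentication = SecurityContextHolder.getContext().getAuthentication();
List<String> groupNames = authentication.getAuthorities().stream()
        .map(GrantedAuthority::getAuthority)
        .collect(Collectors.toList());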

Related

Highlighting in Hibernate Search 6 and Elasticsearch backend

We're in the process of converting our Java application from Hibernate Search 5 to 6 with an Elasticsearch backend.
For some good background info, see How to do highlighting within HibernateSearch over Elasticsearch for a question we had when upgrading our highlighting code from a Lucene to Elasticsearch backend and how it was resolved.
Hibernate Search 6 seems to support using 2 backends at the same time, Lucene and Elasticsearch, so we'd like to use Elasticsearch for all our queries and Lucene for the highlighting, if that's possible.
Here is basically what we're trying to do:
public boolean matchPhoneNumbers() {
    String phoneNumber1 = "603-436-1234";
    String phoneNumber2 = "603-436-1234";

    LuceneBackend luceneBackend =
            Search.mapping(entityManager.getEntityManagerFactory())
                    .backend().unwrap(LuceneBackend.class);
    Analyzer analyzer = luceneBackend.analyzer("phoneNumberKeywordAnalyzer").get();

    //... builds a Lucene Query using the analyzer and phoneNumber1 term
    Query phoneNumberQuery = buildQuery(analyzer, phoneNumber1, ...);

    return isMatch("phoneNumberField", phoneNumber2, phoneNumberQuery, analyzer);
}

private boolean isMatch(String field, String target, Query sourceQ, Analyzer analyzer) {
    Highlighter highlighter = new Highlighter(new QueryScorer(sourceQ, field));
    highlighter.setTextFragmenter(new NullFragmenter());
    try {
        String result = highlighter.getBestFragment(analyzer, field, target);
        return StringUtils.hasText(result);
    } catch (IOException e) {
        ...
    }
}
What I've attempted so far is to configure two separate backends in the configuration properties, per the documentation, like this:
properties.setProperty("hibernate.search.backends.elasticsearch.analysis.configurer", "com.bt.demo.search.AnalysisConfigurer");
properties.setProperty("hibernate.search.backends.lucene.analysis.configurer", "com.bt.demo.search.CustomLuceneAnalysisConfigurer");
properties.setProperty("hibernate.search.backends.elasticsearch.type", "elasticsearch");
properties.setProperty("hibernate.search.backends.lucene.type", "lucene");
properties.setProperty("hibernate.search.backends.elasticsearch.uris", "http://127.0.0.1:9200");
The AnalysisConfigurer class implements ElasticsearchAnalysisConfigurer and
CustomLuceneAnalysisConfigurer implements LuceneAnalysisConfigurer.
Analyzers are defined twice, once in the Elasticsearch configurer and again in the Lucene configurer.
I don't know why both hibernate.search.backends.elasticsearch.type and hibernate.search.backends.lucene.type are necessary but if I don't include the lucene.type, I get Ambiguous backend type: configuration property 'hibernate.search.backends.lucene.type' is not set.
But if I do have both backend properties types set, I get
HSEARCH000575: No default backend. Check that at least one entity is configured to target the default backend, when attempting to retrieve the Lucene backend, like:
Search.mapping(entityManager.getEntityManagerFactory())
.backend().unwrap(LuceneBackend.class);
And the same error when trying to retrieve the Elasticsearch backend.
I've also added @Indexed(..., backend = "elasticsearch") to my entities, since I wish to have them saved into Elasticsearch and don't need them in Lucene. I also tried adding a fake entity with @Indexed(..., backend = "lucene"), but it made no difference.
What have I got configured wrong?
I don't know why both hibernate.search.backends.elasticsearch.type and hibernate.search.backends.lucene.type are necessary but if I don't include the lucene.type, I get Ambiguous backend type: configuration property 'hibernate.search.backends.lucene.type' is not set.
That's because the backend name is just that: a name. Hibernate Search doesn't infer particular information from it, even if you name your backend "lucene" or "elasticsearch". You could have multiple Elasticsearch backends for all it knows :)
But if I do have both backend properties types set, I get HSEARCH000575: No default backend. Check that at least one entity is configured to target the default backend, when attempting to retrieve the Lucene backend, like:
Search.mapping(entityManager.getEntityManagerFactory())
.backend().unwrap(LuceneBackend.class);
You called .backend(), which retrieves the default backend, i.e. the backend that doesn't have a name and is configured through hibernate.search.backend.* instead of hibernate.search.backends.<somename>.* (see https://docs.jboss.org/hibernate/stable/search/reference/en-US/html_single/#configuration-structure ).
But you are apparently mapping all your entities to named backends, one named elasticsearch and one named lucene, so the default backend just doesn't exist.
You should call this:
Search.mapping(entityManager.getEntityManagerFactory())
.backend("lucene").unwrap(LuceneBackend.class);
I've also added @Indexed(..., backend = "elasticsearch") to my entities since I wish to have them saved into Elasticsearch
Since you obviously only want to use one backend for indexing, I would recommend reverting that change (keeping @Indexed without setting @Indexed.backend) and simply using the default backend.
In short, remove the @Indexed.backend and replace this:
properties.setProperty("hibernate.search.backends.elasticsearch.analysis.configurer", "com.bt.demo.search.AnalysisConfigurer");
properties.setProperty("hibernate.search.backends.lucene.analysis.configurer", "com.bt.demo.search.CustomLuceneAnalysisConfigurer");
properties.setProperty("hibernate.search.backends.elasticsearch.type", "elasticsearch");
properties.setProperty("hibernate.search.backends.lucene.type", "lucene");
properties.setProperty("hibernate.search.backends.elasticsearch.uris", "http://127.0.0.1:9200");
With this:
properties.setProperty("hibernate.search.backend.analysis.configurer", "com.bt.demo.search.AnalysisConfigurer");
properties.setProperty("hibernate.search.backends.lucene.analysis.configurer", "com.bt.demo.search.CustomLuceneAnalysisConfigurer");
properties.setProperty("hibernate.search.backend.type", "elasticsearch");
properties.setProperty("hibernate.search.backends.lucene.type", "lucene");
properties.setProperty("hibernate.search.backend.uris", "http://127.0.0.1:9200");
You don't technically have to do that, but I think it will be simpler in the long term. It keeps the Lucene backend as a separate hack that doesn't affect your whole application.
I also tried adding a fake entity with @Indexed(..., backend = "lucene")
I confirm you will need that fake entity mapped to the "lucene" backend, otherwise Hibernate Search will not create the "lucene" backend.
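For illustration, here is a sketch of what such a fake entity could look like; the class name LuceneBackendHolder is made up, any indexed entity mapped to the named backend will do, and javax.persistence would become jakarta.persistence on newer ORM versions:
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import org.hibernate.search.mapper.pojo.mapping.definition.annotation.Indexed;

// Dummy entity whose only purpose is to make Hibernate Search create the named "lucene" backend.
@Entity
@Indexed(backend = "lucene")
public class LuceneBackendHolder {

    @Id
    @GeneratedValue
    private Long id;
}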

LdapTemplate: Can't find specific groups unless baseDN is the same OU the groups are in

I've built an Active Directory API. On one machine it works fine, but on another machine (different forest, different network) Spring's LdapTemplate does not find specific groups. All other groups are found. I've compared groups that can be found with those that can't, and I can't see any differences.
For some reason, the specific groups do show up if I set the base of the LDAP search to the organizational unit that the groups are in.
Also, if I search the AD domain with ldifde on my terminal, the missing groups show up with the others.
The code that I use is pretty basic. As I said before, it simply doesn't work on that one machine.
Here is how I set the ldap properties:
ldapContextSource().setUrl(domain.getAddress() + ":" + domain.getPort());
ldapContextSource().setBase(domain.getBase());
ldapContextSource().setUserDn(domain.getUserDn());
ldapContextSource().setPassword(domain.getDecryptedPassword());
ldapContextSource().afterPropertiesSet();
[...]
@Bean
public LdapTemplate ldapTemplate() {
    return new LdapTemplate(ldapContextSource());
}
I search for groups using
ldapTemplate.findAll(LdapGroup.class);
and this is what the LdapGroup class looks like:
@Entry(objectClasses = {"top", "group"})
public class LdapGroup {

    @JsonIgnore
    @Id
    private Name dn;

    [...]
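To illustrate the "set the base to the OU" observation above on a per-query basis (instead of via setBase on the context source), here is a sketch using Spring LDAP's LdapQueryBuilder; the OU name ou=Groups is an assumption and is relative to the configured base:
import java.util.List;
import org.springframework.ldap.query.LdapQuery;
import org.springframework.ldap.query.SearchScope;
import static org.springframework.ldap.query.LdapQueryBuilder.query;

// Search only below the OU that contains the groups, with SUBTREE scope.
LdapQuery groupQuery = query()
        .base("ou=Groups")                  // assumed OU, relative to domain.getBase()
        .searchScope(SearchScope.SUBTREE)
        .where("objectClass").is("group");

List<LdapGroup> groups = ldapTemplate.find(groupQuery, LdapGroup.class);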

How to get Solr field type

Is there any way of getting the metadata for a Solr core?
For instance, I know the core name and can obtain a SolrServer from it, and I also know the field name.
Is there any way to determine the metadata, though? Specifically, I would like to know whether the field type is an int or a double.
You can make a request to the luke request handler:
http://localhost:8983/solr/corename/admin/luke?show=schema&wt=json&_=1453816769771
The output will include the schema for the core, along with the defined fields, their settings and their types:
{"fields":{"xyz":{"type":"string","flags":"I-S-M---OF-----l","copyDests":[],"copySources":[]}, .... }
A neat trick to find these endpoints is to watch the 'Network' tab while browsing Solr's admin interface, as the admin interface is just a static HTML/JavaScript frontend that makes all the requests for actual content to the Solr server behind the scenes.
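If you would rather do this from Java than call the URL directly, SolrJ ships a LukeRequest that wraps the same handler. A sketch, assuming a recent SolrJ where the client type is SolrClient (older versions used SolrServer); the core name and field name are placeholders:
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.LukeRequest;
import org.apache.solr.client.solrj.response.LukeResponse;

static String lookupFieldType(String fieldName) throws Exception {
    SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr/corename").build();
    LukeRequest lukeRequest = new LukeRequest();
    lukeRequest.setShowSchema(true);                   // same as show=schema in the URL above
    LukeResponse lukeResponse = lukeRequest.process(client);
    LukeResponse.FieldInfo info = lukeResponse.getFieldInfo().get(fieldName);
    return (info != null) ? info.getType() : null;     // e.g. "string", "int", "double"
}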

Is it possible to use play-authenticate without javaEbean?

I've followed this sample code and tried to implement it with plain JPA. However, when I tried to sign up with a Google account or log in with an existing user account, it gave me this error.
[RuntimeException: No EntityManager bound to this thread. Try to annotate your action method with @play.db.jpa.Transactional]
private static List<User> getAuthUserFind(final AuthUserIdentity identity)
{
-> List<User> query = JPA.em().createQuery(
After googling for a while, many solutions suggest adding the @Transactional annotation to the calling Play action, but that action is in the play-authenticate code.
Is there a solution for this issue, or do I have to use Ebean?
I am using Play Framework 2.2.1 and implementing my program in Java.
It's not necessary to use Ebean.
I have used MyBatis as the persistence provider, but in order to save the user and log in without problems you should use the same hashing algorithm (the hashing algorithm is used to store the password).
To use your custom persistence provider, JPA or whatever you want, you should implement the authentication provider interfaces; see UsernamePasswordAuthProvider in the example project for more details.
Focus especially on the signupUser and loginUser methods.
I have modified play-authenticate to support login/password instead of email/password as the identityId; see Modified version of Play-Authenticate.
You could use JPA.withTransaction(callback). This is the better way when you can't put @Transactional on a method, or you don't want to.
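A hedged sketch of one way this can look with the Play 2.2 Java API; the User entity and the JPQL string are placeholders, the point is only that JPA.em() is called inside the transaction callback:
import java.util.List;
import play.db.jpa.JPA;
import play.libs.F;

List<User> users;
try {
    users = JPA.withTransaction(new F.Function0<List<User>>() {
        @Override
        public List<User> apply() throws Throwable {
            // JPA.em() now has an EntityManager bound to the current thread
            return JPA.em()
                    .createQuery("select u from User u", User.class)  // placeholder query
                    .getResultList();
        }
    });
} catch (Throwable t) {
    throw new RuntimeException(t);
}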

Does Java EE security model support ACL?

I use Java EE 6 with GlassFish v3.0.1, and I wonder whether the Java EE security model supports ACLs, and if so, how fine-grained does it get?
EDITED
I implement security using a JDBC realm in GlassFish v3: at runtime the realm looks into the USER table in the database, checking the password field for authentication and the role field for authorization. The role field contains only two values, either ADMINISTRATOR or DESIGNER, so it is a one-to-one mapping between user and role. At the managed bean level, I implemented this:
private Principal getLoggedInUser()
{
    HttpServletRequest request =
            (HttpServletRequest) FacesContext.getCurrentInstance().
                    getExternalContext().getRequest();
    if (request.isUserInRole("ADMINISTRATORS")) {
        admin = true;
    } else {
        admin = false;
    }
    return request.getUserPrincipal();
}

public boolean isUserNotLogin()
{
    Principal loginUser = getLoggedInUser();
    if (loginUser == null)
    {
        return true;
    }
    return false;
}

public String getLoginUserName()
{
    Principal loginUser = getLoggedInUser();
    if (loginUser != null)
    {
        return loginUser.getName();
    }
    return "None";
}
By calling isUserInRole, I can determine whether the user is an admin or not, and the JSF page then renders the content appropriately. However, that is not fine-grained enough (quick background: there are multiple projects, and a project contains multiple drawings), because if you are a DESIGNER, you can see all the drawings from all the projects. What if I only want Tom to work on project A, while Peter works on project B, and Cindy supervises both projects A and B? I want to be able, at runtime when I create a user, to specify which projects he/she can see. Is there a way to accomplish this? NOTE: there are more than just two projects; the above example is just for demonstration.
The Java EE security model authenticates a 'Principal', which may have one or more 'Roles'.
In the other dimension you have services and resources which need configurable 'Permissions' or 'Capabilities'.
In the configuration you determine which 'Principals' or 'Roles' have which 'Permissions' or 'Capabilities'.
In other words, yes it supports ACL and it is as fine grained as you want it to be, but you'll have to get used to the terminology.
Vineet's answer contains the excellent suggestion to create 'roles' per project ID. Since people must be assigned to projects anyhow, it is straightforward to add them to these groups at that time. Alternatively, a timed script can update the group memberships based on the roles. The latter approach can be preferable, because it is easier to verify security if these decisions are in one place instead of scattered all over the administration code.
Alternatively, you can use "coarse-grained" roles, e.g. DESIGNER, and make use of the database (or program logic) to restrict the views for the logged-in user:
SELECT p.* FROM projects p, assignments a WHERE p.id = a.projectId AND a.finishdate < NOW();
or
@Stateless
public class SomeThing {

    @Resource
    SessionContext ctx;

    @RolesAllowed("DESIGNER")
    public void doSomething(Project project) {
        String userName = ctx.getCallerPrincipal().getName();
        if (project.getTeamMembers().contains(userName)) {
            // do stuff
        }
    }
}
Note that the coarse grained access control has here been done with an annotation instead of code. This can move a lot of hard to test boilerplate out of the code and save a lot of time.
There are similar features to render webpages where you can render parts of the screen based on the current user using a tag typically.
Also because security is such a wide reaching concern, I think it is better to use the provided features to get at the context than to pass a battery of boolean flags like isAdmin around as this quickly becomes very messy. It increases coupling and it is another thing making the classes harder to unit-test.
In many JSF implementations there are tags which can help with rendering things conditionally. Here are examples for RichFaces and Seam:
<!-- richfaces -->
<rich:panel header="Admin panel" rendered="#{rich:isUserInRole('admin')}">
Very sensitive information
</rich:panel>
<!-- seam -->
<h:commandButton value="edit" rendered="#{isUserInRole['admin']}"/>
Here is an article explaining how to add it to ADF
The Java EE security model implements RBAC (Role Based Access Control). To a Java EE programmer, this effectively means that permissions to access a resource can be granted to users. Resources could include files, databases, or even code. Therefore, it is possible not only to restrict access to objects like files and tables in databases, but also to restrict access to executable code.
Now, permissions can be grouped together into roles that are eventually linked to users/subjects. This is the Java EE security model in a nutshell.
From the description of your problem, it appears that you wish to distinguish between two different projects as two different resources, and therefore have either two separate permission objects or two separate roles to account for them. Given that you already have roles (more appropriately termed user groups) like ADMINISTRATOR and DESIGNER, this cannot be achieved quite so easily in Java EE. The reason is that you are distinguishing access to resources for users in a role based on an additional property of the resource - the project ID. This technically falls into the area known as ABAC (Attribute Based Access Control).
One way of achieving ABAC in Java EE is to carry the properties/attributes granted to the role in the role name. So instead of the following code:
if (request.isUserInRole("DESIGNERS")) {
    access = true;
} else {
    access = false;
}
you ought to do something like the following. Note the ":" character used as a separator to distinguish the role name from the accompanying attribute.
if (request.isUserInRole("DESIGNERS" + ":" + projectId)) {
    access = true;
} else {
    access = false;
}
Of course, there is the part where your login module would have to be modified (either in configuration or in code) to return roles containing project IDs instead of plain role names. Do note that all of these suggested changes need to be reviewed comprehensively for issues - for instance, the separator character should be disallowed from appearing in role names, otherwise it is quite possible to perform privilege escalation attacks.
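To make the convention concrete, here is a small helper sketch; the class, the method and the "DESIGNERS:<projectId>" naming scheme are illustrative conventions, not a Java EE API:
import javax.servlet.http.HttpServletRequest;

public final class ProjectRoles {

    private static final String SEPARATOR = ":";

    private ProjectRoles() {
    }

    // True if the caller carries a role literally named e.g. "DESIGNERS:42".
    public static boolean isDesignerOnProject(HttpServletRequest request, long projectId) {
        return request.isUserInRole("DESIGNERS" + SEPARATOR + projectId);
    }
}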
If implementing the above proves to be a handful, you could look at systems like Shibboleth that provide support for ABAC, although I've never seen it being used in a Java EE application.
