What security framework do you use in your Java projects?
I have used both Spring Security and Apache Shiro, and they both look immature to me.
Spring Security flaws:
no native support for permissions;
no ability to use it explicitly in Java code (sometimes that's necessary);
too focused on classic (non-AJAX) web applications.
Apache Shiro flaws:
bugs in final release (like the problem with Spring integration);
no support for OpenID and some other widely used technologies;
performance issues reported.
There is also a lack of documentation for both of them.
Maybe most real projects develop their own security frameworks?
As for Apache Shiro:
I'm not sure why you've listed the things you did:
Every project in the world has release bugs, without question. The big key here, however, is that Shiro's team is responsive and fixes them ASAP. This is not something to evaluate a framework on; otherwise you'd eliminate every framework, including any you write yourself.
OpenID support will be released shortly in Shiro 1.2 - maybe a month out?
What performance issues? No one has ever reported performance issues to the dev list, especially since the caching support in Shiro is broad and first-class. Without clarifications or references, this comes across as FUD.
The documentation is actually really good now - some of the best in open source that I've seen lately (it was reworked two weeks ago). Do you have specific examples of where it falls short for you?
I'd love to help, but your concerns are generalizations that aren't supported by references or concrete examples. Maybe you could present specific things that your project needs and that you've failed to accomplish thus far?
Apache Shiro continues to be the most flexible and easiest to understand security framework for Java and JVM languages there is - I doubt you'll find better.
But, above all, and I mean this with all sincerity, please don't write your own security framework unless you plan on putting a ridiculous amount of time into it. Nearly every company I've ever seen that tries to do this themselves fails miserably. It is really hard to get 'right' (and secure). Trust me - after writing one for 8 years, that's one thing I'm absolutely sure of :)
Anyway, feel free to join the Shiro user list and you're sure to find that the community is happy and willing to work through whatever issues you may have. You'll find that we take care of the people that ask questions and do our best to help out.
HTH!
My current projects use SpringSecurity and involve doing all three things you claim to be flaws in SpringSecurity:
The projects implement fine-grained access rules that go beyond simple ROLEs, and variously involve state of domain objects, extra request parameters, and so on. These are implemented using custom "access policy objects" that get called within my MVC controllers. However, access check failures are handed back to SpringSecurity by throwing the relevant exception. (These could have been implemented as standard SpringSecurity method-level interceptors, but the checks typically involve examining domain objects.)
The projects support both web and AJAX access, and deal with access failures differently for the two cases. This is done by writing some custom Authentication entrypoint components for SpringSecurity that choose between different authentication behaviors depending on the request URL, etc.
In other words, it can be done ...
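To make those two points concrete, here is a rough sketch in Spring Security 3 terms. The policy class, the Order domain object and the entry point below are illustrative placeholders, not the actual project code:

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import org.springframework.security.access.AccessDeniedException;
    import org.springframework.security.core.AuthenticationException;
    import org.springframework.security.web.AuthenticationEntryPoint;

    /** Illustrative "access policy object": a plain bean the MVC controller calls directly. */
    class OrderAccessPolicy {

        /** Throws Spring Security's AccessDeniedException so the framework handles the failure. */
        void checkCanCancel(String username, Order order) {
            if (!order.getOwner().equals(username) || order.isShipped()) {
                throw new AccessDeniedException("Not allowed to cancel this order");
            }
        }
    }

    /** Minimal domain object for the sketch. */
    class Order {
        private final String owner;
        private final boolean shipped;

        Order(String owner, boolean shipped) {
            this.owner = owner;
            this.shipped = shipped;
        }

        String getOwner() { return owner; }
        boolean isShipped() { return shipped; }
    }

    /**
     * Entry point that treats AJAX and plain web requests differently:
     * AJAX callers get a 401 they can handle in script, browsers get the login page.
     */
    class AjaxAwareEntryPoint implements AuthenticationEntryPoint {

        @Override
        public void commence(HttpServletRequest request, HttpServletResponse response,
                             AuthenticationException authException) throws IOException, ServletException {
            if ("XMLHttpRequest".equals(request.getHeader("X-Requested-With"))) {
                response.sendError(HttpServletResponse.SC_UNAUTHORIZED, "Not authenticated");
            } else {
                response.sendRedirect(request.getContextPath() + "/login");
            }
        }
    }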
Having said that, I agree with you on a couple of points:
It is not easy to wire up this kind of thing. I kept running into roadblocks when using the <http> element and its associated configurer. For example, you want it to use a different version of component X, but to do that you have to replace Y, Z, P and Q as well.
The documentation is really sparse, and not helpful if you are trying to do something out of the ordinary.
Andrey, I think this answer comes too late to be helpful to you; it is intended for those who land on this thread later and I hope it helps.
My company recently released OACC, an advanced Java application security framework, as open source. OACC is designed for systems that require up to object-level security granularity.
OACC provides a high-performance API for permission-based authorization services. In a nutshell, OACC allows your application to enforce security by answering the question: is entity ‘A’ allowed to perform action ‘p’ on entity ‘B’?
One of the key abstractions in OACC is a resource. A resource serves as the placeholder in OACC for the object in the application domain that needs to be secured. Both the actors (e.g. users, processes) and the objects being secured (e.g. documents, servers) are represented as resources in OACC. The application domain objects that are actors, or are secured, simply store the resource id of the associated resource.
The resource abstraction allows OACC, unlike other major security frameworks, to provide a rich API that manages permissions between resources. OACC persists security relationships in RDBMS tables (DB2, Oracle, MS SQL Server and PostgreSQL are currently supported).
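To picture the resource model, here is a tiny, hypothetical sketch; the interface and method names are illustrative only and are not OACC's actual API (see the project documentation for the real one):

    // Illustrative only -- NOT OACC's actual API. It just models the core question:
    // is accessor A allowed to perform action p on secured resource B?
    interface ResourceAuthorizationService {
        boolean hasPermission(long accessorResourceId, String permission, long accessedResourceId);
    }

    public class DocumentService {

        private final ResourceAuthorizationService authz;

        public DocumentService(ResourceAuthorizationService authz) {
            this.authz = authz;
        }

        // Both the calling user and the document store the id of their associated resource.
        public void assertCanView(long userResourceId, long documentResourceId) {
            if (!authz.hasPermission(userResourceId, "VIEW", documentResourceId)) {
                throw new SecurityException("VIEW is not permitted on this document");
            }
        }
    }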
For more information please check out the project website: http://oaccframework.org
We use layered security in one of our projects. The layers are the following:
HTTPS as the protocol (Apache - AJP connectors - Tomcat servlets)
Only binary objects transferred between client and servlet
Individual elements in the passed objects (either way) are encrypted
The encryption key is dynamic, set up during the initial handshake, and valid for one session
Conceptually, the security consists of the encryption key, the encryption algorithm, and the data to which they are applied. We make sure that no more than one of the three is ever passed in the same communication. Hope that helps. Regards, - M.S.
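A minimal sketch of the per-session-key idea described above, assuming AES as the (unspecified) algorithm and Base64 for transport encoding; this is not the poster's actual implementation:

    import java.nio.charset.StandardCharsets;
    import java.util.Base64;
    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;

    public class SessionCrypto {

        // Generated during the initial handshake and kept only for the current session.
        private final SecretKey sessionKey;

        public SessionCrypto() throws Exception {
            KeyGenerator keyGen = KeyGenerator.getInstance("AES");
            keyGen.init(128);
            this.sessionKey = keyGen.generateKey();
        }

        // Encrypts a single element of the transferred object rather than the whole payload.
        // Note: "AES" alone defaults to ECB mode; a real implementation should use an
        // authenticated mode such as GCM with a fresh IV per message.
        public String encryptField(String plaintext) throws Exception {
            Cipher cipher = Cipher.getInstance("AES");
            cipher.init(Cipher.ENCRYPT_MODE, sessionKey);
            byte[] encrypted = cipher.doFinal(plaintext.getBytes(StandardCharsets.UTF_8));
            return Base64.getEncoder().encodeToString(encrypted);
        }

        public String decryptField(String encodedCiphertext) throws Exception {
            Cipher cipher = Cipher.getInstance("AES");
            cipher.init(Cipher.DECRYPT_MODE, sessionKey);
            byte[] decrypted = cipher.doFinal(Base64.getDecoder().decode(encodedCiphertext));
            return new String(decrypted, StandardCharsets.UTF_8);
        }
    }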
Maybe this question is broad and hard to answer at the moment.
But when I went through the different frameworks emerging one after another, like
Hadoop Distributed File System
HBase
Hive
Cassandra
Hypertable
Amazon S3
BigTable
DynamoDB
MongoDB
Redis
Riak
Neo4J
Stripes
Wicket
Compojure
Conjure
Grails
JRoR
JSF
Lift
Netty
Noir
Play
Scalatra
Seam
Sitemesh
Spark
Spring MVC
Struts
Tapestry
VRaptor
Vert.x
OpenXava
it always bugs me.
Each framework has some unique features. Each one promises to solve some particular testing, development, or production need with respect to a growing number of users, data expansion, distributed computing, security, performance, and more.
But much functionality is common to them all, and to get some unique piece of functionality we have to shift from one framework to another. As a Java developer, I would like to have the following features included in one framework:
Out-of-the-box support for unit and integration testing
Fast prototyping
Distributed multithreading, caching, logging, session management, modularization
Security extension
Framework extension
Easy integration with big data
Distributed data computation
Asynchronous operations
High performance
I would like to know what other features developers really want to have in a single framework. What are the necessary and essential features that every framework must include? Please share your ideas.
The reason they differ is that there is no consensus on these matters. Depending on your background and expertise, the answer will be different. Every project was started knowing full well what the alternatives were and not being content with them.
This question is useless I'm afraid.
I think XKCD describes it well (replace 'standard' with 'framework'):
Given a sufficiently complex problem, there is simply no way to solve it for all use cases and users.
The key to understanding this is not how much these frameworks have in common, but that most of them cover only part of your needs. For instance, Hadoop and Stripes have nothing in common. Only a few frameworks claim to cover everything (Java EE and Spring, in fact), but in reality they just collect several unrelated technologies under one brand name.
The real domains are: presentation layer, data access layer and (arguably) something else.
We have two separate products, both including web app and server.
We want to implement Single Sign On for both of them, so that when a user has logged into one product, they can automatically access their resources in the other product.
I have explored a little and found that SAML is a good approach we could take, but we are not sure how we want to proceed.
Is it a good idea to implement our own Service Provider? I have looked at Shib SP, but it looks like it won't be that easy to integrate into my products either.
So I am just looking for some suggestions from people who have encountered a similar problem before.
Another question: what resources can I study if I need to implement an SP using OpenSAML? It looks like there are not a lot of tutorials or examples that I can refer to.
I would also really appreciate it if anybody could point out the main procedures or components that my own SP needs to contain.
EDIT 1:
Just trying to provide more details about what I want. We have two separate products. Currently we are able to externalize the user database. For example, our products can be configured to connect to an LDAP server or any other external user DB as long as it implements a service properly.
Now our goal is SSO for both of our products. One scenario is that we have our own SP component (either implemented or integrated) in both products. The customer may have their own IdP. With some configuration, our SP can connect to their IdP and authenticate there, so a user doesn't need to log in twice to access both products. Of course, we can provide an out-of-the-box IdP if the customer doesn't have one.
The biggest difficulty with Shibboleth is that it is, effectively, a reference implementation of the SAML v2.0 specification.
For most routine installations, though, you actually need very little of the SAML spec to enable a couple of web apps for SSO.
But since Shibboleth implements the whole thing, with all of its capability, it can be a bear to configure.
We did a project with Shibboleth (admittedly an edge use case), and, for me, a SAML novice at the time, it was really a chore to get everything up and working.
For our next stab, I looked at the SAML spec for SSO via the Web Profile. If you read it, it's actually quite straightforward for this limited use case. And we decided that instead of using Shibboleth again, we'd write our own IdP and SP using the OpenSAML libraries.
Could we have got Shibboleth working faster? Probably. But I don't think we'd have the understanding of it that we do of our own. A bit of Not-Invented-Here, sure, but this stuff is confusing enough when you do understand the software and vocabulary, much less when you don't. And SAML is chock full of new vocabulary.
You can also consider using SimpleSAML as an IdP and writing your own SP for your web apps. SimpleSAML is in PHP, but it's a bit more user-friendly. You can just treat it as a self-contained Apache service.
I will say that our SP weighs in at around 1000 lines of javadoc'd code, but it's mostly wiring OpenSAML stuff together and some utility stuff. In truth it's not that scary. Be prepared to really enjoy reading signed XML blobs though.
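As a taste of that wiring, here is a rough sketch of building an AuthnRequest with the OpenSAML 2.x API (the version current when this was written); names may differ in later OpenSAML versions, so double-check against the one you use, and the URLs below are hypothetical:

    import org.joda.time.DateTime;
    import org.opensaml.DefaultBootstrap;
    import org.opensaml.common.SAMLObjectBuilder;
    import org.opensaml.common.xml.SAMLConstants;
    import org.opensaml.saml2.core.AuthnRequest;
    import org.opensaml.saml2.core.Issuer;
    import org.opensaml.xml.Configuration;
    import org.opensaml.xml.XMLObjectBuilderFactory;

    public class AuthnRequestSketch {

        @SuppressWarnings("unchecked")
        public static AuthnRequest buildAuthnRequest() throws Exception {
            // One-time library initialization.
            DefaultBootstrap.bootstrap();

            XMLObjectBuilderFactory builderFactory = Configuration.getBuilderFactory();

            SAMLObjectBuilder<Issuer> issuerBuilder =
                    (SAMLObjectBuilder<Issuer>) builderFactory.getBuilder(Issuer.DEFAULT_ELEMENT_NAME);
            Issuer issuer = issuerBuilder.buildObject();
            issuer.setValue("https://sp.example.com");   // hypothetical SP entity ID

            SAMLObjectBuilder<AuthnRequest> requestBuilder =
                    (SAMLObjectBuilder<AuthnRequest>) builderFactory.getBuilder(AuthnRequest.DEFAULT_ELEMENT_NAME);
            AuthnRequest request = requestBuilder.buildObject();
            request.setID("_" + System.nanoTime());
            request.setIssueInstant(new DateTime());
            request.setIssuer(issuer);
            request.setProtocolBinding(SAMLConstants.SAML2_POST_BINDING_URI);
            request.setDestination("https://idp.example.com/sso");                // hypothetical IdP endpoint
            request.setAssertionConsumerServiceURL("https://sp.example.com/acs"); // hypothetical ACS endpoint

            // The request would then be encoded (redirect or POST binding), signed if required,
            // and sent to the IdP; the SP's real work is validating the signed Response that comes back.
            return request;
        }
    }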
It is frustrating that this really isn't simpler, but it's a bit of a chicken/egg thing regarding adoption etc.
And if none of that suits you, you can look at OAuth2 and some of its profiles.
If you only want to implement SSO between two products, then yes, I think building something from scratch is easier. If it's Java, Shibboleth's OpenSAML is a very good library.
As you begin to implement more features and some complex scenarios, going for something already built is the better choice. You should also be aware that there are several things you'll likely have to write on a per-system basis (e.g. assertion generation, XML-DSig, validation, etc.).
At a glance it might seem like the already-built products are way too complex or difficult to scale or adapt to your particular needs, but the development effort you spend writing connectors and implementations can pay off when you want to exploit all of SAML's capabilities.
It would be very helpful, though, if you could explain in more detail what you want to achieve; I feel your question is quite open-ended...
I don't have personal experience with the Shibboleth Service Provider, but I am currently developing an architecture which uses the Shibboleth IdP, the Shibboleth Discovery Service and the Guanxi Service Provider. Integrating the lightweight Guard module from the Guanxi Service Provider with a Java webapp is a piece of cake, and you can easily obtain a Shibboleth-based architecture without writing your own modules. There is a localhost tutorial for setting up the Guanxi SP; just skip the parts about the Guanxi WAYF and IdP and use the Shibboleth components in their place.
I need to write integrations to multiple external web services. Some of them are SOAP (have a WSDL); others are pretty much ad hoc: HTTP(S), authentication either by basic auth or by parameters in the URL (!), and natural-language-like XML which does not really map nicely to domain classes.
For now, I've done the spike integrations using Spring Web 3.0 RestTemplate and binding using JAXB2 (Jaxb2Marshaller). Some kind of binding is needed because domain classes need to be cleaner than the XML.
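For reference, the current setup looks roughly like the sketch below: a RestTemplate with a MarshallingHttpMessageConverter wrapping a Jaxb2Marshaller. The OrderStatus class and the URL are hypothetical placeholders, not part of the actual project:

    import java.util.ArrayList;
    import java.util.List;
    import javax.xml.bind.annotation.XmlElement;
    import javax.xml.bind.annotation.XmlRootElement;
    import org.springframework.http.converter.HttpMessageConverter;
    import org.springframework.http.converter.xml.MarshallingHttpMessageConverter;
    import org.springframework.oxm.jaxb.Jaxb2Marshaller;
    import org.springframework.web.client.RestTemplate;

    public class ExternalServiceClient {

        // Hypothetical JAXB-annotated class kept cleaner than the raw XML it is bound to.
        @XmlRootElement(name = "orderStatus")
        public static class OrderStatus {
            @XmlElement
            public String code;
        }

        private final RestTemplate restTemplate;

        public ExternalServiceClient() {
            Jaxb2Marshaller marshaller = new Jaxb2Marshaller();
            marshaller.setClassesToBeBound(OrderStatus.class);

            List<HttpMessageConverter<?>> converters = new ArrayList<HttpMessageConverter<?>>();
            converters.add(new MarshallingHttpMessageConverter(marshaller, marshaller));

            restTemplate = new RestTemplate();
            restTemplate.setMessageConverters(converters);
        }

        public OrderStatus fetchStatus(String orderId) {
            // Basic auth or URL-parameter authentication would be layered on top of this call.
            return restTemplate.getForObject(
                    "https://example.com/orders/{id}/status", OrderStatus.class, orderId);
        }
    }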
It works, but it kind of feels bad. Obviously this is partly just because of how the services are built. One minor issue I have is the naming of RestTemplate, since these services have nothing to do with REST; that I can live with. JAXB2 feels a bit heavy, though.
So, I'm looking for some other alternatives. Ideas? I'd like to have a simple solution (so RestTemplate is fine), not too enterprisey..
While some of your services may be schemaless XML, they will still probably have a well-documented API. One of the techniques that the Spring folks seem to be pushing, at least from the web-service server side, is to use XPath/XQuery for retrieving only the information you really need from a request. I know that this may only end up being part of your solution, but I'm not sure that this is a situation where one particular binding framework is going to meet all your needs.
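To illustrate the XPath approach in plain JAXP terms (the XML payload and the expression below are made up, not from the original question):

    import java.io.StringReader;
    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.xpath.XPath;
    import javax.xml.xpath.XPathFactory;
    import org.w3c.dom.Document;
    import org.xml.sax.InputSource;

    public class XPathExtraction {

        public static void main(String[] args) throws Exception {
            // A hypothetical "natural-language like" response that doesn't map cleanly to domain classes.
            String xml = "<response><order><status>shipped on Friday</status></order></response>";

            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new InputSource(new StringReader(xml)));

            XPath xpath = XPathFactory.newInstance().newXPath();
            // Pull out just the one value the application cares about, ignoring the rest.
            String status = xpath.evaluate("/response/order/status", doc);
            System.out.println(status);
        }
    }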
If I understand correctly, you have one application that has to make calls to various external (web) services using different technologies. The first thing that comes to mind is to introduce an intermediate layer. While this could be something as elaborate as an ESB solution, my guess is that this is not what you're looking for.
You could, for example, build this intermediate layer as a class hierarchy with an interface 'Consumer' at its top, with a method to be implemented such as doConsume().
If you look into it you'll probably have the opportunity to make use of several design patterns such as Strategy or Template Method. Remember to be proactive and ask 'what if ...' a few times (as in: what if they need me to consume yet another service?).
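A bare-bones sketch of such a hierarchy, with hypothetical names chosen just for illustration:

    // Hypothetical names; the point is a thin abstraction the rest of the application
    // codes against, with one implementation (strategy) per external service or technology.
    public interface Consumer<REQ, RES> {
        RES doConsume(REQ request) throws ConsumerException;
    }

    class ConsumerException extends Exception {
        ConsumerException(String message, Throwable cause) {
            super(message, cause);
        }
    }

    // One strategy per transport: a WSDL-described SOAP service ...
    class SoapOrderStatusConsumer implements Consumer<String, String> {
        @Override
        public String doConsume(String orderId) throws ConsumerException {
            // ... call the SOAP service and map its response here ...
            return "shipped";
        }
    }

    // ... and the ad hoc HTTP(S)-plus-XML service behind the same interface.
    class HttpOrderStatusConsumer implements Consumer<String, String> {
        @Override
        public String doConsume(String orderId) throws ConsumerException {
            // ... call the ad hoc endpoint and extract the fields the application cares about ...
            return "shipped";
        }
    }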
If JAXB feels too heavy, there are other APIs to be found:
Axis
JAX-WS
CXF
others
Which one is better will depend on the situation. If you run into trouble with any of them, I'm sure you'll be able to find help here on SO, and from people who have more hands-on experience with them than me ;-)
In our company, we have several rich Java applications that are used both by internal users and external users. We would like to begin migrating these systems to support a single sign on mechanism, and potentially allow our external clients to use their own authentication mechanisms to validate their users.
For instance, if we have a client who has a large number of users, and they would like to have their users only have to login using their company login information, we would like to support that behavior.
We have looked into ticket- and certificate-based authentication systems (one of the common ones being Kerberos), and into using such an authentication mechanism to allow external authentication services to be used in our system.
Is this doable? Are there specific implementation details we need to be aware of? I am not as concerned about specific technologies (although suggestions are certainly welcome), more about the core concepts and making sure we are doing the right thing wherever possible.
What about authorization, i.e. access to different services? Is there a standard or best practice for how this is handled when dealing with (potentially) disconnected authentication services?
As an additional note, our front end systems are made in Java, so specific information related to implementing this behavior in a Java framework is definitely appreciated (i.e. libraries that are useful, potential pitfalls specific to Java, etc).
Is it doable? Yes.
Are there specific implementation details we need to be aware of? Yes.
Each type of security implementation has its own implementation details that you're just going to have to figure out. Each one is different and has its own nuances.
You should be able to implement whatever type of security you choose. Kerberos is a fine choice. You might also look into OpenID and CAS. There are many others, though.
To handle the actual security itself you might consider looking into Spring Security. Spring Security is able to handle authentication and authorization fairly well. However, most of Spring Security is focused on web security rather than client systems, so you will most likely have to implement much of the authentication mechanism yourself (using available libraries whenever possible, of course).
When designing your system, especially if you're going to have many different login types, try to make the login system as pluggable as you can; this will take time and a lot of trial and error.
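To make "pluggable" concrete, here is a rough Spring Security 3 style sketch of one login mechanism behind a standard interface; ExternalDirectory and the class names are hypothetical, not from the answer:

    import org.springframework.security.authentication.AuthenticationProvider;
    import org.springframework.security.authentication.BadCredentialsException;
    import org.springframework.security.authentication.UsernamePasswordAuthenticationToken;
    import org.springframework.security.core.Authentication;
    import org.springframework.security.core.AuthenticationException;
    import org.springframework.security.core.authority.AuthorityUtils;

    /**
     * Sketch of one pluggable login mechanism. A Kerberos-, CAS- or OpenID-backed
     * provider would be another implementation registered alongside this one.
     */
    public class ExternalDirectoryAuthenticationProvider implements AuthenticationProvider {

        /** Hypothetical client for whatever the customer exposes (LDAP, SSO gateway, ...). */
        public interface ExternalDirectory {
            boolean checkPassword(String username, String password);
        }

        private final ExternalDirectory directory;

        public ExternalDirectoryAuthenticationProvider(ExternalDirectory directory) {
            this.directory = directory;
        }

        @Override
        public Authentication authenticate(Authentication authentication) throws AuthenticationException {
            String username = authentication.getName();
            String password = String.valueOf(authentication.getCredentials());

            if (!directory.checkPassword(username, password)) {
                throw new BadCredentialsException("Rejected by external directory");
            }
            // A fully populated token marks the user as authenticated.
            return new UsernamePasswordAuthenticationToken(
                    username, password, AuthorityUtils.createAuthorityList("ROLE_USER"));
        }

        @Override
        public boolean supports(Class<?> authentication) {
            return UsernamePasswordAuthenticationToken.class.isAssignableFrom(authentication);
        }
    }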
I would look into the Spring Security 3 book. It isn't a great book, but it does explain a lot about how to properly implement security. Leveraging Spring's work is highly recommended, because trying to implement security all by yourself would be quite a daunting task.
Best of Luck.
I'm new to Flex development, and RIAs in general. I've got a CRUD-style Java + Spring + Hibernate service on top of which I'm writing a Flex UI. Currently I'm using BlazeDS. This is an internal application running on a local network.
It's become apparent to me that the way RIAs work is more similar to a desktop application than to a web application, in that we load up the entire model (or at least the portion we're interested in) and work with it directly on the client. This doesn't really jibe well with BlazeDS, because it only supports remoting and not data management, so it can become a lot of extra work to make sure that clients are in sync and to avoid reloading the model, which can be large (especially since lazy loading is not possible).
So it feels like what I'm left with is a situation where I have to treat my Flex application more like a regular old web application where I do a lot of fine grained loading of data.
LiveCycle is too expensive. The free version of WebOrb for Java really only does remoting.
Enter GraniteDS. As far as I can determine, it's the only free solution out there that has many of the data management features of LiveCycle. I've started to go through its documentation a bit and suddenly feel like it's yet another quagmire of a framework that I'll have to learn just to get an application running.
So my question(s) to the StackOverflow audience is:
1) Do you recommend GraniteDS, especially if my current Java stack is Spring + Hibernate?
2) At what point do you feel like it starts to pay off? That is, at what level of application complexity do you feel that using GraniteDS really starts to make development that much better? In what ways?
If you're committed to Spring and don't want to introduce Seam, then I don't think that GraniteDS will give you much beyond BlazeDS. There is a useful utility that ensures only a single instance of any one entity exists on the client at any one time, but it's actually pretty easy to do that yourself with a few instances of Dictionary with weak references and some post-processing applied to the server calls. A lot of the other features are Seam-specific, as alluded to here in the docs:
http://www.graniteds.org/confluence/display/DOC/6.+Tide+Data+Framework
Generally, the Tide approach is to minimize the amount of code needed to make things work between the client and the server. Its principles are very similar to the ones of JBoss Seam, which is the main reason why the first integration of Tide has been done with this framework. Integrations with Spring and EJB 3 are also available but are a little more limited.
I do however think that Granite's approach to data management is a big improvement over Livecycle's because they are indeed quite different. From the Granite docs:
All client/server interactions are done exclusively by method calls on services exposed by the server, and thus respect transaction boundaries and security defined by the remote services.
This is different from how LiveCycle DS uses "managed collections", where you invoke fill() to grab large swathes of data and then invoke commit() methods to persist changes en masse. This treats the backend like a raw data access API and starts to get complicated (or simply falls apart entirely) when you have fine-grained security requirements. Therefore I think Granite's approach is far more workable.
All data management features (serialization of JPA detached entities, client entity caching, data paging...) work with Spring.
GraniteDS does not mandate anything; you only need Seam if you want to use Seam on the server.
Actually, the free version of WebORB for Java does do data management. I've recently posted a comparison between WebORB for Java, LiveCycle DS, BlazeDS and GraniteDS. You can view this comparison chart here: http://bit.ly/d7RVnJ I'd be interested in your comments and feedback as we want this to be the most comprehensive feature comparison on the web.
Cheers,
Kathleen
Have you looked at the spring-blazeDS integration project?
GraniteDS with the Seam Framework, Hibernate and MySQL is a very nice combination. What I do is create the database, use seam-gen to generate the Hibernate entities, and then work from there.