OpenSAML validate vs ComponentSpace validation - java

I am using opensaml to sign my SAML and I am also successfully able to validate it using OpenSAML's SignatureValidator.
SignatureValidator signValidator = new SignatureValidator( publicCredential );
signValidator.validate( signature );
The IdP uses ComponentSpace to validate on their side; however, they are unable to validate it using the same SAML and public certificate.
bool retVal = SAMLMessageSignature.Verify(samlResponseXml, x509Certificate);// returning false
The signature algorithm in the SAML message is RSA-SHA1. From their logs we were able to see that the signature's hash value is different. Here are the logs:
http://pastebin.com/X27vUtbY

The computed and expected hashes are different, as you say, which indicates the XML has been modified after signing. We have no known interoperability issues with OpenSAML so I suspect there's something else going on here.
It's hard to say what the issue is from the limited information. I suggest the SP contact us, including the full SAML log, and we should be able to resolve the issue.
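One common cause of exactly this symptom is re-serialising or pretty-printing the XML after it was signed. A minimal OpenSAML 2 sketch (not taken from the thread) of the usual marshal, sign, then serialise order, where response is a previously built org.opensaml.saml2.core.Response and signature is the org.opensaml.xml.signature.Signature already attached to it:
// build the DOM first, then sign, then send those exact bytes
Configuration.getMarshallerFactory().getMarshaller(response).marshall(response);
Signer.signObject(signature);                                 // digest is computed over that DOM
String samlXml = XMLHelper.nodeToString(response.getDOM());  // transmit unchanged; re-marshalling or pretty-printing alters the digest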

Related

Validating a leaf certificate from a third party in java 8

I am looking for a way to validate a java.security.cert.X509Certificate object that is sent from a third party. The certificate provider is using DNS or LDAP to fetch the certificate. I have included a link with additional information on how the certificate is being retrieved.
http://wiki.directproject.org/w/images/2/24/Certificate_Discovery_for_Direct_Project_Implementation_Guide_v4.1.pdf
I also need to know protocols and default ports that would be used in any of the verification steps. The certificate needs to meet the following criteria from page 13 section 4 of this document:
http://wiki.directproject.org/w/images/e/e6/Applicability_Statement_for_Secure_Health_Transport_v1.2.pdf
Has not expired.
Has a valid signature with a valid message digest
Has not been revoked
Binding to the expected entity
Has a trusted certificate path
Item 1 is straightforward: compare the dates from the getNotAfter and getNotBefore methods on the certificate object to the current date, or use the checkValidity method, which throws a checked exception.
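For instance (standard JDK API; cert is the X509Certificate in question):
try {
    cert.checkValidity(); // or cert.checkValidity(someDate) to check a specific instant
} catch (CertificateExpiredException | CertificateNotYetValidException e) {
    // item 1 fails
}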
For item #2, I see a method to get the signature, but I am unsure how to generate the message digest and verify that the signature and message digest are both valid.
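For what it's worth, a minimal sketch of item 2: the certificate's own signature is checked against the issuer's public key, and verify() recomputes the message digest internally. issuerCert here is a hypothetical handle on the CA certificate that signed cert:
try {
    cert.verify(issuerCert.getPublicKey()); // recomputes the digest and verifies the signature
} catch (GeneralSecurityException e) {
    // item 2 fails: invalid signature or wrong issuer key
}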
For item #3, the certificate revocation list seems to be mixed with some other data returned by calling getExtensionValue("2.5.29.31") on the certificate. Retrieving the certificate revocation list data seems possible over HTTP, and OCSP also seems to be based on HTTP. I haven't been able to find how to do this in Java.
For item #4, I am not sure what binding means in the context of certificates, or what is involved in verifying it.
For item #5, it looks like the data for intermediary certificates is mixed with some other data returned by calling getExtensionValue("1.3.6.1.5.5.7.1.1") on the certificate. CertPathValidator looks like it may be able to help verify this information once the certificate data is retrieved over HTTP.
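For items 3 and 5 specifically, the JDK's PKIX machinery (Java 8) can walk the chain and check revocation using the OCSP/CRL URLs embedded in the certificates. A rough sketch, with trustAnchorCert and any intermediates as placeholder inputs:
CertificateFactory cf = CertificateFactory.getInstance("X.509");
// the chain to validate: end-entity first, plus any intermediates you already hold
CertPath path = cf.generateCertPath(Arrays.asList(cert /*, intermediateCerts... */));

PKIXParameters params = new PKIXParameters(Collections.singleton(new TrustAnchor(trustAnchorCert, null)));
CertPathValidator cpv = CertPathValidator.getInstance("PKIX");
PKIXRevocationChecker rc = (PKIXRevocationChecker) cpv.getRevocationChecker(); // prefers OCSP, falls back to CRLs
params.addCertPathChecker(rc);
cpv.validate(path, params); // throws CertPathValidatorException if the chain or revocation check fails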
Certificate validation is a complex task. You can perform all the validations you need manually (expiration, revocation, certification chain) using native Java 8 support or Bouncy Castle. But the option I would recommend is to use a specific library that has already taken all the possibilities into account.
Take a look at the DSS documentation and the Certificate Verification example.
// Trusted certificates sources, root and intermediates (#5 )
CertificateSource trustedCertSource = null;
CertificateSource adjunctCertSource = null;
// The certificate to be validated
CertificateToken token = DSSUtils.loadCertificate(new File("src/main/resources/keystore/ec.europa.eu.1.cer"));
// Creates a CertificateVerifier using Online sources. It checks the revocation status with the CRL lists URLs or OCSP server extracted from the certificate #3
CertificateVerifier cv = new CommonCertificateVerifier();
cv.setAdjunctCertSource(adjunctCertSource);
cv.setTrustedCertSource(trustedCertSource);
// Creates an instance of the CertificateValidator with the certificate
CertificateValidator validator = CertificateValidator.fromCertificate(token);
validator.setCertificateVerifier(cv);
// We execute the validation (#1, #2, #3, #5)
CertificateReports certificateReports = validator.validate();
// The final result. You also have a detailedReport and DiagnosticData
SimpleCertificateReport simpleReport = certificateReports.getSimpleReport();
The validation will perform all the steps you indicate, including expiration, signing of the certificate, revocation, and checking the chain of trust (including the download of intermediate certificates).
For step #4 I don't know exactly what you mean; I suppose it is to validate that the certificate corresponds to one of the certification entities in the trusted list.
To load the trusted certificate sources, see this:
CertificatePool certPool = new CertificatePool();
CommonCertificateSource ccc = new CommonCertificateSource(certPool);
CertificateToken cert = DSSUtils.loadCertificate(new File("root_ca.cer"));
CertificateToken addedCert = ccc.addCertificate(cert);
I will split the answer into three pieces: the background, the choice of library, and the implementation code (with the references I used for my own implementation, credited accordingly).
In the past I implemented a very similar use case. I had IoT devices fabricated by a vendor, and to onboard them I had to implement the same X509 verification process that you have mentioned.
Implementation:
For my implementation I referred to the following. You can include BC as your default provider (Security.addProvider) and use the following code, which directly solves 1, 3 and 5: https://nakov.com/blog/2009/12/01/x509-certificate-validation-in-java-build-and-verify-chain-and-verify-clr-with-bouncy-castle/
Now coming to 2, the answer depends on how you will get the certificate from your client and what additional data will be provided by the application.
The high level flow of that is as follows
a) Client produces the certificate
b) Client computes a digest using some acceptable algorithm; SHA-256 is quite popular, and you can increase the strength based on your needs and how much compute you have. Once the client creates the digest, to prove it is the owner of the certificate it signs the digest using the private key of the device. This can then be transmitted to the verifier application.
c) Once you have the certificate and the signature, you can use the public key associated with the certificate to verify the signature, applying the same digest. A good reference is here: http://www.java2s.com/Code/Java/Security/SignatureSignAndVerify.htm
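A compact sketch of that sign-and-verify round trip with java.security.Signature (the algorithm name and variable names are assumptions, not from the post):
// device side: sign the data with the device's private key
Signature signer = Signature.getInstance("SHA256withRSA");
signer.initSign(devicePrivateKey);
signer.update(dataToSign);
byte[] signatureBytes = signer.sign();

// verifier side: verify with the public key from the presented certificate
Signature verifier = Signature.getInstance("SHA256withRSA");
verifier.initVerify(cert.getPublicKey());
verifier.update(dataToSign);
boolean valid = verifier.verify(signatureBytes);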
I am not 100% sure what 4 means, but if it means proof of identity (that whoever produces the certificate is bound to be who they claim to be), then signing and verifying will provide the same.
Though you can realize these use cases with the Java security APIs, I used the Bouncy Castle core API. The Bouncy Castle APIs are far richer and battle tested, especially for the unusual EC curve algorithms we had to use, and you will find that many folks swear by Bouncy Castle.
Hope this helps!

Identify which Identity Provider sent the response - OneLogin SAML Java

I have OneLogin's SAML plugin in Java. While trying to process the login response, the API requires the same settings used during the login request. However, I have multiple instances of my web server running, so the response could go to a different server than the request. If the response is not encrypted, I can use the InResponseTo attribute to track the settings between instances of the web server. But if the response is encrypted, there is no way for me to track the settings.
InResponseTo="ONELOGIN_4fee3b046395c4e751011e97f8900b5273d56685"
How is it possible to identify the Identity Provider's configuration on receiving the response? Any help would be appreciated.
Auth auth = new Auth(settings, request, response);
// This settings object is needed to decrypt the response
auth.processResponse();
if (!auth.isAuthenticated()) {
out.println("Not authenticated");
}
If the whole SAML response is encrypted there is no way to find out the issuer. Otherwise the 'Issuer' element of the SAML response would tell you the entity ID of the SAML IdP.
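If only the assertion is encrypted rather than the entire response, the Issuer is still readable from the response XML. A rough DOM sketch, assuming samlResponseXml holds the already base64-decoded response:
DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
dbf.setNamespaceAware(true);
Document doc = dbf.newDocumentBuilder()
        .parse(new ByteArrayInputStream(samlResponseXml.getBytes(StandardCharsets.UTF_8)));
NodeList issuers = doc.getElementsByTagNameNS("urn:oasis:names:tc:SAML:2.0:assertion", "Issuer");
String idpEntityId = issuers.getLength() > 0 ? issuers.item(0).getTextContent() : null; // the IdP's entityID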
If I understand correctly, when handling the response you need to be able to identify where the request came from?
I think you should be able to do this with RelayState. In their docs (at https://github.com/onelogin/java-saml) they show how the login() method can take RelayState info:
We can set a 'returnTo' url parameter to the login function and that will be converted as a 'RelayState' parameter
String targetUrl = "https://example.com";
auth.login(targetUrl);
Calling the parameter returnTo is misleading in my opinion because it's implying behavior that isn't there. It should have just been called "relayState."
When that comes back in the response, it's just an additional request parameter. In the docs, they show an example of using the RelayState value to route the response, but you can do anything you want with that information:
String relayState = request.getParameter("RelayState");
//do whatever you want based on this request parameter
Question:
If the response is not encrypted, I can use the InResponseTo attribute to track the settings between instances of the web server.
But if the response is encrypted, there is no way for me to track the settings.
How is it possible to identify the Identity Provider's configuration on receiving the response?
And I need to find a way to track which IDP the request is going to without the use of session.
Answer:
(1) Each IdP needs to post the SAML response to an AssertionConsumerService (ACS) endpoint such as https://sp.example.org/Shibboleth.sso/SAML2/POST
(2) Different IdP should have different entityID.
(3) SAML 2 supports the encryption of assertions, NameIDs, and attributes.
(4) Quote "How is it possible to identify the Identity Provider's configuration on receiving the response?"
Whether the IdP encrypts assertions, NameIDs, and/or attributes is determined by the SP. Different SAML SPs may have different requirements, such as signing the assertion, signing the response, or encrypting assertions, NameIDs, and/or attributes.
SAML IdP needs to provide the corresponding configuration to meet the requirement of SAML SP.
Relying party/SP configuration of Shibboleth IdP at GitHub repository provides a configuration example for selected SAML SPs.
The SP uses the entityID of the IdP to identify the Identity Provider's configuration on receiving the response, if the SP has different requirements for different IdPs.
(5) Multiple IdPs can send responses to the same ACS. Usually, for each SP login, only one IdP sends the SAML response to the ACS. Then the SP uses its own private key to decrypt assertions, NameIDs, or attributes.
Resolution(Some commercial SAML SPs adopt this solution):
Create different AssertionConsumerService endpoints for different IdPs, that is,
https://your-sp.com/SAML2/POST/idp1
https://your-sp.com/SAML2/POST/idp2
https://your-sp.com/SAML2/POST/idp3
Then each IdP posts the SAML response to a different AssertionConsumerService endpoint.
The SP uses its own private key/cert to decrypt assertions, NameIDs, or attributes, and can then track which IdP the request went to without the use of a session.
Note that SP needs to provide different SP metadata with different ACS URL to different IdPs.
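At the receiving end, selecting the right settings can then be keyed off the request path. A hypothetical sketch with the OneLogin java-saml API, where settingsByIdp is an assumed Map<String, Saml2Settings> built at startup:
String path = request.getRequestURI();                              // e.g. /SAML2/POST/idp2
String idpKey = path.substring(path.lastIndexOf('/') + 1);          // "idp2"
Saml2Settings settings = settingsByIdp.get(idpKey);
Auth auth = new Auth(settings, request, response);
auth.processResponse();                                              // validated/decrypted with that IdP's settings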

Errors using signpost api for oauth in java

I have been trying for months to get access to a certain API (which has almost no documentation) working using signpost. The API uses OAuth 2.0 authentication. The problem is that I have never used OAuth before, but I have spent a long time researching, so I think I have a functional understanding of how it works. I thought that with the handy signpost API it wouldn't be too much trouble to hack through it, but alas I have encountered a wall. The API docs are here:
https://btcjam.com/faq/api
It gives three URLs that are needed for the OAuth authentication, which I am writing as Java here for consistency with some code below:
String Authorization= "https://btcjam.com/oauth/authorize";
String Token ="https://btcjam.com/oauth/token";
String Applications = "https://btcjam.com/oauth/applications";
I have an application with a name, key, and secret. I also have set my callback URL to be the localhost, i.e.
http://localhost:3000/users/auth/btcjam/callback.
Now, as I read the signpost docs, they tell me that in order to request an access token, I need to do something like the following:
OAuthProvider provider = new DefaultOAuthProvider(
REQUEST_TOKEN_ENDPOINT_URL, ACCESS_TOKEN_ENDPOINT_URL,
AUTHORIZE_WEBSITE_URL);
String url = provider.retrieveRequestToken(consumer, CALLBACK_URL);
However, I am unsure exactly what to put for the URLs in these various spots, and I am getting errors. The problem is that the names of the URLs required above do not correspond to the URLs given. The "authorization" and "callback" URLs seem to match up nicely, but I am not sure how the URLs "REQUEST_TOKEN_ENDPOINT_URL" and "ACCESS_TOKEN_ENDPOINT_URL" required in the signpost docs correspond to the URLs given by the API docs of the server I am trying to access. Of course, there are only two possible permutations, but when I try them both I get two different errors:
"Authorization failed (server replied with a 401). This can happen if the consumer key was not correct or the signatures did not match."
"Communication with the service provider failed: URLDecoder: Illegal hex characters in escape (%) pattern - For input string: " 1""
Could someone please help explain what might be going on here? Am I very close to getting this to work or do I have to take a bunch of steps back?
Any help is much appreciated.
Thanks,
Paul

PUT file to S3 with presigned URL

I've been playing with Amazon S3 presigned URLs all night attempting to PUT a file. I generate the presigned URL in Java code.
AWSCredentials credentials = new BasicAWSCredentials( accessKey, secretKey );
client = new AmazonS3Client( credentials );
GeneratePresignedUrlRequest request = new GeneratePresignedUrlRequest( bucketName, "myfilename", HttpMethod.PUT);
request.setExpiration( new Date( System.currentTimeMillis() + (120 * 60 * 1000) ));
return client.generatePresignedUrl( request ).toString();
I then want to use the generated, presigned URL to PUT a file using curl.
curl -v -H "content-type:image/jpg" -T mypicture.jpg https://mybucket.s3.amazonaws.com/myfilename?Expires=1334126943&AWSAccessKeyId=<accessKey>&Signature=<generatedSignature>
I assumed that, like a GET, this would work on a bucket which is not public (that's the point of presigned, right?) Well, I got access denied on every attempt. Finally out of frustration I changed the permission of the bucket to allow EVERYONE to write. Of course, then the presigned URL worked. I quickly removed the EVERYONE permission from the bucket. Now, I don't have permission to delete the item that was uploaded into my bucket by my own self-pre-signed URL. I see now that I probably should have put a x-amz-acl header on what I uploaded. I suspect I'll create several more undelete-able objects before I get that right.
This leads to a few questions:
How can I upload with curl using PUT and a generated presigned URL?
How can I delete the uploaded file and the bucket I created to test it with?
The end goal is that a mobile phone will use this presigned URL to PUT images. I'm trying to get it going in curl as a proof of concept.
Update: I asked a question on the amazon forums. If an answer is provided there I'll put it as an answer here.
This is indeed a bit puzzling, I consider it to be a bug in the AWS SDK for Java (see below) - but first and foremost, the following curl command will upload your file as such (assuming an updated pre-signed URL of course):
curl -v -T mypicture.jpg "https://mybucket.s3.amazonaws.com/myfilename?Expires=1334126943&AWSAccessKeyId=<accessKey>&Signature=<generatedSignature>"
That is, I've excluded the content type header, which yields application/octet-stream (or binary/octet-stream) as a result, which is obviously not desired; thus, further digging was in order.
Background / Analysis
Pre-signed URLs for PUT (and DELETE as well as HEAD) requests to Amazon S3 are known to work in principle, not the least evidenced in related questions on this site (see e.g. my answer to Upload to s3 with curl using pre-signed URL (getting 403)).
The facilitated Query String Request Authentication Alternative is documented to use the following pseudo-grammar that illustrates the query string request authentication method:
StringToSign = HTTP-VERB + "\n" +
Content-MD5 + "\n" +
Content-Type + "\n" +
Expires + "\n" +
CanonicalizedAmzHeaders +
CanonicalizedResource;
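Instantiated for the PUT in question, the string to sign would look roughly like this (expiry value taken from the URL above, illustration only); note that Content-Type is part of it, so the header sent with curl has to match whatever went into the signature:
// illustration of the StringToSign for the example PUT, per the pseudo-grammar above
String stringToSign = "PUT\n"      // HTTP verb
        + "\n"                     // Content-MD5 (empty)
        + "image/jpg\n"            // Content-Type
        + "1334126943\n"           // Expires
        + "/mybucket/myfilename";  // CanonicalizedResource (no x-amz- headers in this case)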
It does include the Content-Type header, and (as you already discovered) this has been the missing piece in some documented cases, see e.g. the AWS team response to GetPreSignedURL with PUT request, yielding a working pre-signed URL once added.
This is easy to achieve with the AWS SDK for .NET indeed, which provides the convenience method GetPreSignedUrlRequest.WithContentType to do just that:
Sets the ContentType property for this request. This property defaults
to "binary/octet-stream", but if you require something else you can
set this property.
Accordingly, extending the respective sample Upload an Object Using Pre-Signed URL - AWS SDK for .NET as follows yields a working pre-signed URL with content type, that can be uploaded via curl as expected (i.e. exactly as you attempted to):
// ...
GetPreSignedUrlRequest request = new GetPreSignedUrlRequest();
// ...
request.WithContentType("image/jpg");
// ...
Now, one would like to extend the semantically identical sample Upload an Object Using Pre-Signed URL - AWS SDK for Java in a similar fashion, but (as you've discovered already as well), there is no dedicated method to achieve this. This might just be a lacking convenience method though and could be achievable via addRequestParameter() or setResponseHeaders() eventually, e.g.:
// ...
request.setExpiration( new Date( System.currentTimeMillis() + (120 * 60 * 1000) ));
request.addRequestParameter("content-type", "image/jpg");
return client.generatePresignedUrl( request ).toString();
// ...
However, both methods' documentation suggests other purposes, and indeed it doesn't work: they always yield the identical signature, no matter which content type is set that way (if any).
Debugging further into the SDKs reveals, that both provide a semantically similar core method to calculate the query string authentication according to the pseudo-grammar referenced above, see buildSigningString() for .NET and makeS3CanonicalString() for Java.
But the respective code in the Java version to "Add all interesting headers to a list, then sort them", where "interesting" is defined as Content-MD5, Content-Type, Date, and x-amz-, is in fact never executed, because there is no way to provide these headers: they are only available on class DefaultRequest and not on class GeneratePresignedUrlRequest, which is used to initialize the former, which in turn serves as input for calculating the signature; see the protected method createRequest().
Interestingly/Notably, the two methods to calculate the query string authentication in .NET vs. Java compose their input from an almost inverse combination of header vs. parameter sources on the call stack, which could hint on the cause of the Java bug, but obviously that might as well be just difficult to decipher, i.e. the internal architecture could differ significantly of course.
Preliminary Conclusion
There are two angles to this:
The AWS SDK for Java is definitely lacking the convenience method for setting the content type, which might be a comparatively rare, but nonetheless obvious use case accounted for in other AWS SDKs accordingly - this is surprising, given its widespread use in AWS related backend services.
Regardless, there seems to be something fishy with the way the Query String Request Authentication is implemented in comparison to the .NET version, for example - again this is surprising, given it is a core functionality; however, this is still within the S3 model/namespace and thus might only be required by the respective use cases above.
In conclusion, the only reasonable way to resolve this would be an updated SDK, so a bug report is in order - obviously one could as well duplicate/extend the SDK functionality to account for this special case separately (ideally in a way allowing to submit a pull request for the aws-sdk-for-java project), but getting this right in a compatible and maintainable way seems to be a bit tricky, thus is likely best done by the SDK maintainers themselves.
Ran into this problem as well. We're already tracking when the file is uploaded on the backend, so our workaround was to set the content type after the client uploads the file, using the Rails app with a call to copy_from.

Generate SAML response/assertion in java

I am working on IdP-initiated authentication. I want to generate a SAML response to be sent to Salesforce. I have the necessary values to set in the response from metadata. Please tell me which OpenSAML classes to use in order to generate the response/assertion.
Following are the main classes that will be used (a short sketch wiring a few of them together follows the list).
Assertion
Signature
SubjectConfirmationData
NameID
Subject
Conditions
Audience
AuthnContextClassRef
AuthnStatement
See this link for the OpenSAML libraries.
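For orientation, a minimal OpenSAML 2 sketch wiring a few of those classes together (the subject value is a placeholder, DefaultBootstrap.bootstrap() is the one-time library initialisation, and Conditions, Audience, AuthnStatement and Signature are built the same way):
DefaultBootstrap.bootstrap(); // initialise OpenSAML once
XMLObjectBuilderFactory builders = Configuration.getBuilderFactory();

Response samlResponse = (Response) builders.getBuilder(Response.DEFAULT_ELEMENT_NAME)
        .buildObject(Response.DEFAULT_ELEMENT_NAME);
Assertion assertion = (Assertion) builders.getBuilder(Assertion.DEFAULT_ELEMENT_NAME)
        .buildObject(Assertion.DEFAULT_ELEMENT_NAME);

NameID nameId = (NameID) builders.getBuilder(NameID.DEFAULT_ELEMENT_NAME)
        .buildObject(NameID.DEFAULT_ELEMENT_NAME);
nameId.setValue("user@example.com"); // placeholder subject

Subject subject = (Subject) builders.getBuilder(Subject.DEFAULT_ELEMENT_NAME)
        .buildObject(Subject.DEFAULT_ELEMENT_NAME);
subject.setNameID(nameId);
assertion.setSubject(subject);

samlResponse.getAssertions().add(assertion);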
If you are looking to create SAML Assertions and want some convenience methods that will help you deal with the OpenSAML library, you can take a look at WSS4J's SAML2ComponentBuilder. This is used extensively in Apache CXF and other Java service stacks.
Creating an assertion is as easy as:
//Create assertion
Assertion assertion = SAML2ComponentBuilder.createAssertion();
//create issuer
Issuer issuer = SAML2ComponentBuilder.createIssuer(issuerString);
assertion.setIssuer(issuer);
You can obviously set all the values described above and there is an 'AssertionWrapper' provided that assists in digitally signing the assertion:
assertionWrapper.signAssertion( alias, password, signatureCrypto, false, defaultCanonicalizationAlgorithm, defaultRSASignatureAlgorithm);
It is worth looking into if you are having difficulty dealing directly with the OpenSAML library.
Thanks,
Yogesh
