Not able to connect to AWS DocumentDB from AWS Lambda (using Java)

I want to connect to an AWS DocumentDB cluster from AWS Lambda (using Java). TLS is enabled for the cluster, so I need to import the certificates into a truststore. I can't find any documentation on how to proceed.

You need to import the https://s3.amazonaws.com/rds-downloads/rds-combined-ca-bundle.pem bundle into your truststore before connecting to DocumentDB; otherwise the connection will fail.
There are many ways to import certificates in code at runtime.
Ref:
How to import a .cer certificate into a java keystore?
After importing the cert, you can connect to DocumentDB; reference code can be found here:
https://docs.aws.amazon.com/documentdb/latest/developerguide/connect_programmatically.html
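For completeness, here is a minimal Java sketch of the runtime-import approach (the paths, alias prefix, and password handling are placeholder choices of mine, not from the AWS docs):

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.InputStream;
import java.security.KeyStore;
import java.security.cert.Certificate;
import java.security.cert.CertificateFactory;

public class TrustStoreImporter {
    // Builds a truststore from the downloaded PEM bundle and points the JVM at it.
    public static void importCaBundle(String pemPath, String storePath, char[] password) throws Exception {
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
        ks.load(null, password); // start from an empty store
        int i = 0;
        try (InputStream in = new FileInputStream(pemPath)) {
            for (Certificate cert : cf.generateCertificates(in)) { // the bundle contains several certs
                ks.setCertificateEntry("rds-ca-" + i++, cert);
            }
        }
        try (FileOutputStream out = new FileOutputStream(storePath)) {
            ks.store(out, password);
        }
        // Point the JVM's default TLS stack at the new truststore.
        System.setProperty("javax.net.ssl.trustStore", storePath);
        System.setProperty("javax.net.ssl.trustStorePassword", new String(password));
    }
}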

I encourage you to avoid packaging the cert as part of your Lambda code. Instead, fetch it dynamically from Amazon S3. This avoids problems later when the cert is rotated. A Python example follows:
import boto3
import botocore

# Despite the name, .Bucket() requires the resource API, not a low-level client.
clientS3 = boto3.resource('s3')

# Function to download the current DocumentDB CA certificate bundle
def getDocDbCertificate():
    try:
        print('Certificate')
        clientS3.Bucket('rds-downloads').download_file(
            'rds-combined-ca-bundle.pem', '/tmp/rds-combined-ca-bundle.pem')
    except botocore.exceptions.ClientError as e:
        if e.response['Error']['Code'] == "404":
            print("The object does not exist.")
        else:
            raise
For that to work, your Lambda's role needs permission to get the object from S3, and the function needs S3 access via the internet or a VPC endpoint.

Related

Configuring Grails to POST using certificate authentication

I am quite new to working with certificates and security, so pardon me if this is a no-brainer to others. I have followed this guide to set up my Grails application to run on HTTPS with self-signed certificates.
I am trying to establish 2-way SSL with another HTTPS network (a Nifi standalone instance) running on the same machine. I can get the Nifi instance to talk to Grails over HTTPS, but I am having issues with Grails talking to Nifi (specifically to a ListenHTTP processor).
I was hoping someone could advise how to use certificate authentication in Grails when posting over HTTPS.
NiFi uses certificate authentication; however, per the above guide, Grails only specifies a single keystore (for receiving requests?), so I'm a bit thrown off. I can successfully curl to NiFi's REST API by specifying the --cert and --key options, but since the final product will be a WAR on a client machine, I want to set this up the 'right way', and I believe leaving those files on the client machine is a really big no-no for security.
During early development RestBuilder was sufficient for 2-way comms over HTTP; however, I am unable to find any mention of using it with certificate authentication (only basic authentication is covered in the documentation?).
HTTPBuilder shows up a lot when I looked for alternatives; however, looking at the relevant documentation (line 139, 'certificate()'), it states that it takes a whole keystore JKS and password. I think this is close but not quite what I am looking for, considering I only have one keystore; I am open to correction here.
Please note that I will be unavailable to respond until at least the day after this question was posted.
When making an outgoing HTTPS connection, if the remote endpoint (in this case Apache NiFi) requires client certificate authentication, the originating endpoint (Grails) will attempt to provide a certificate. The certificate that Grails is using to identify itself as a service is fine to use in this scenario, provided:
The certificate either does not have the ExtendedKeyUsage extension set, or if it is set, both ServerAuth and ClientAuth values are present. If ClientAuth is missing, the system will not allow this certificate to be used for client authentication, which is the necessary role in this exchange.
The certificate has a valid SubjectAlternativeName value which matches the hostname it is running on. RFC 6125 prescribes that SAN values should be used for certificate identity rather than Distinguished Name (DN) and Common Name (CN). So if the Grails app is running on https://grails.example.com, the SAN must contain values for grails.example.com or *.example.com.
The certificate must be imported into NiFi's truststore in order to allow NiFi to authenticate a presenter of this certificate.
NiFi must have ACL permissions in place for this "user". This can be done through the UI or by modifying the conf/authorizers.xml file before starting NiFi for the first time. See NiFi Admin Guide - Authorizers Configuration for more information.
Your concern for leaving the cert.pem and key.key files on the client machine is understandable, but the sensitive information contained therein is the same data that's in your keystore. At some point, the private key must be accessible by the Grails app in order to perform HTTPS processes, so having it in the keystore is functionally equivalent (you don't mention having a password on the *.key file, but obviously you should have a password on the keystore).
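If you want to check the first two conditions (EKU and SAN) on your Grails certificate programmatically, here is a minimal Java sketch; the file path is a placeholder:

import java.io.FileInputStream;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;
import java.util.List;

public class CertChecker {
    public static void main(String[] args) throws Exception {
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        X509Certificate cert;
        try (FileInputStream in = new FileInputStream("grails-cert.pem")) { // placeholder path
            cert = (X509Certificate) cf.generateCertificate(in);
        }
        // getExtendedKeyUsage() returns null when the EKU extension is absent,
        // in which case the cert may be used for any purpose, including client auth.
        List<String> eku = cert.getExtendedKeyUsage();
        boolean clientAuth = eku == null || eku.contains("1.3.6.1.5.5.7.3.2"); // id-kp-clientAuth OID
        System.out.println("Usable for TLS client auth: " + clientAuth);
        System.out.println("SANs: " + cert.getSubjectAlternativeNames());
    }
}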

Mutual Authentication (2-way SSL) in AWS Lambda

I am building an AWS Lambda service for a small PoC. The flow in the PoC is:
take a (text) input via POST,
perform a small string manipulation,
store the manipulated value into DynamoDB, and then
send the same (manipulated) value to a particular URL via HTTP POST.
Seems like a simple Lambda tutorial example, but the tricky part for me was the authorization. The URL that I have to POST to only allows requests that are mutually authenticated via an SSL cert. How can I achieve this in Lambda?
I could not find enough answers to make this work. I looked at using the AWS API Gateway 2-way SSL cert option. However, for that to work, I need to install the receiving party's cert into a cert store. Is that even possible? Or is the only way to use a micro EC2 box?
In Lambda, I am okay with using Node.js, Java, or Python.
How to implement mutual TLS in AWS Lambda?
First, big applause to Hakky54 for this good tutorial on mutual TLS:
https://github.com/Hakky54/mutual-tls-ssl
I followed his tutorial to understand and implement mTLS for AWS Lambdas. You can also test your implementation locally before deploying to AWS by just running the Spring Boot app, which saves a lot of time.
Steps (all commands are documented at the link above):
Export the server cert and import it into the client truststore.
Load your client keystore and truststore; I saved both in an S3 bucket (see the sketch just below).
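A rough sketch of that loading step, assuming the AWS SDK for Java v1 and a PKCS12 store (the bucket and object names are placeholders, not from the tutorial):

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.S3Object;
import java.io.InputStream;
import java.security.KeyStore;

public class StoreLoader {
    // Downloads a PKCS12 store from S3 and loads it into a KeyStore instance.
    public static KeyStore loadFromS3(String bucket, String key, char[] password) throws Exception {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        KeyStore store = KeyStore.getInstance("PKCS12");
        try (S3Object obj = s3.getObject(bucket, key);
             InputStream in = obj.getObjectContent()) {
            store.load(in, password);
        }
        return store;
    }
}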
Create TLS Context
SSLContext sslContext = SSLContexts.custom()
        .loadKeyMaterial(keyStore, keyStorePassword.toCharArray())
        // the lambda below trusts any server certificate; replace it for production use
        .loadTrustMaterial(trustStore, (X509Certificate[] chain, String authType) -> true)
        .build();
Create a new Jersey client
Client client = ClientBuilder.newBuilder()
        .withConfig(new ClientConfig())
        // sslContext already carries the key and trust material; per the JAX-RS spec,
        // keyStore()/trustStore() and sslContext() are mutually exclusive, so only one is set
        .sslContext(sslContext)
        .build();
Make the call to the API
Response response = client.target(endpoint).request().get();
I am storing my keystore credentials in Parameter Store.

Connection issue with AWS DynamoDb from docker container

My client program gets records from a DynamoDB table. The binary works as expected on the host machine, but if I run the same binary in a Linux container, it returns this error:
Unable to connect to endpoint
Do I need to change anything in client code or container settings?
This might be a bit late, but in case someone else is trying to run the AWS SDK through Docker: by default it verifies SSL certificates when it connects, so you need to initialise the AWS client configuration with:
Aws::Client::ClientConfiguration config;
config.verifySSL = false; // WARNING: disables certificate verification entirely; acceptable for local testing only
It might be an SSL issue if you see exceptions and/or logs mentioning some sort of SSL certificate or connection error.
The short summary is that your Linux box needs to trust Amazon's root CA, which you can test by visiting https://dynamodb.eu-west-3.amazonaws.com.
Here is more detailed documentation to diagnose and resolve certificate related issues: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ats-certs.html
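A quick way to test this from inside the container (assuming curl is installed) is a plain request to the endpoint; a certificate verification error, rather than an HTTP response, points to the missing root CA:
curl -v https://dynamodb.eu-west-3.amazonaws.com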

Import Certificates from Firefox's trust store

In general, is it possible to import the list of certificates that already comes with Firefox's trust store (also called the Certificate Manager) using Java?
You still need to export certificates to PEM/PKCS format before processing them with keytool.
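For example, once a certificate has been exported from Firefox as a PEM file, a keytool import looks roughly like this (the alias, file, and store names are placeholders):
keytool -importcert -alias firefox-cert -file exported-cert.pem -keystore truststore.jks -storepass changeit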
Firefox uses its own internal certificate storage (not the system one, like Chrome does), so there is another 'theoretically' possible solution: native calls to Firefox libraries through JNA.
Please look at this question. It uses the JSS library, which is an interface to Firefox's NSS.

SSL problems with S3/AWS using the Java API: "hostname in certificate didn't match"

Amazon "upgraded" the SSL security in its AWS Java SDK in the 1.3.21 version. This broke access any S3 buckets that have periods in their name when using Amazon's AWS Java API. I'm using version 1.3.21.1 which is current up to Oct/5/2012. I've provided some solutions in my answer below but I'm looking for additional work arounds to this issue.
If you are getting this error, you will see something like the following message in your exceptions/logs. In this example, the bucket name is foo.example.com.
INFO: Unable to execute HTTP request: hostname in certificate didn't match:
<foo.example.com.s3.amazonaws.com> != <*.s3.amazonaws.com>
OR <*.s3.amazonaws.com> OR <s3.amazonaws.com>
at org.apache.http.conn.ssl.AbstractVerifier.verify(AbstractVerifier.java:220)
at org.apache.http.conn.ssl.StrictHostnameVerifier.verify(StrictHostnameVerifier.java:61)
at org.apache.http.conn.ssl.AbstractVerifier.verify(AbstractVerifier.java:149)
at org.apache.http.conn.ssl.AbstractVerifier.verify(AbstractVerifier.java:130)
at org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:390)
You can see documentation of this problem on the AWS S3 discussion forum:
https://forums.aws.amazon.com/thread.jspa?messageID=387508&#387508
Amazon's response to the problem is the following.
We should be able to fix this by using the older path style method of bucket addressing (instead of the newer virtual host style addressing) for buckets with this naming pattern. We'll get started on the fix and ensure that our internal integration tests have test cases for buckets names containing periods.
Any workarounds or other solutions? Thanks for any feedback.
Original: October 2012
Turns out that Amazon "upgraded" the SSL security on S3 in late September 2012. This broke access to any S3 buckets that have periods in their name when using Amazon's AWS Java API.
This is inaccurate. S3's SSL wildcard matching has been the same as when S3 launched back in 2006. What's more likely is that the AWS Java SDK team enabled stricter validation of SSL certificates (good), but ended up breaking bucket names that have been running afoul of S3's SSL cert (bad).
The right answer is that you need to use path-style addressing instead of DNS-style addressing. That is the only secure way of working around the issue with the wildcard matching on the SSL certificate. Disabling the verification opens you up to Man-In-The-Middle attacks.
What I don't presently know is if the Java SDK provides this as a configurable option. If so, that's your answer. Otherwise, it sounds like the Java SDK team said "we'll add this feature, and then add integration tests to make sure it all works."
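For what it's worth, later versions of the AWS SDK for Java do expose this as a builder option. A minimal sketch (v1 SDK; the region is a placeholder):

import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class PathStyleS3 {
    public static AmazonS3 build() {
        return AmazonS3ClientBuilder.standard()
                .withRegion(Regions.US_EAST_1) // placeholder region
                // path-style: https://s3.amazonaws.com/bucket/key instead of virtual-host style
                .withPathStyleAccessEnabled(true)
                .build();
    }
}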
Update: October 2020
AWS has announced that path-style addressing is deprecated and will be going away in the near future. AWS's advice is to use DNS-compatible bucket names, which means no periods (among a few other things). Certain newer features of S3 require DNS-compatible bucket names (e.g., accelerated transfer).
If you require a bucket name which contains periods (which will also be disallowed for new buckets in the near future), my best advice is to put a CloudFront distribution in front of it if you want to hit it over HTTPS.
Amazon released version 1.3.22 which resolves this issue. I've verified that our code now works. To quote from their release notes:
Buckets whose name contains periods can now be correctly addressed again over HTTPS.
There are a couple of solutions that I can see, aside from waiting till Amazon releases a new API.
Obviously you could roll back to the 1.3.20 version of the AWS Java SDK. Unfortunately, I needed some of the features in 1.3.21.
You can replace the org.apache.http.conn.ssl.StrictHostnameVerifier in the classpath. This is a hack, however, which I think will remove all SSL hostname checking for Apache HTTP connections. Here's the code that worked for me: http://pastebin.com/bvFELdJE
I ended up downloading and building my own package from the AWS source jar. I applied the following approximate patch to the HttpClientFactory source.
===================================================================
--- src/main/java/com/amazonaws/http/HttpClientFactory.java (thirdparty/aws) (revision 20105)
+++ src/main/java/com/amazonaws/http/HttpClientFactory.java (thirdparty/aws) (working copy)
@@ -93,7 +93,7 @@
SSLSocketFactory sf = new SSLSocketFactory(
SSLContext.getDefault(),
- SSLSocketFactory.STRICT_HOSTNAME_VERIFIER);
+ SSLSocketFactory.ALLOW_ALL_HOSTNAME_VERIFIER);
The right fix is to change from domain-name bucket handling to path-based handling.
Btw, the following seems like it might work but it does not. The AWS client specifically requests the STRICT verifier and does not use the default one:
SSLSocketFactory.getSystemSocketFactory().setHostnameVerifier(
SSLSocketFactory.ALLOW_ALL_HOSTNAME_VERIFIER);
