AWS Lambda request gzip encoding in Java

As far as I know, the input payload size limit for asynchronous AWS Lambda invocations is only 256 KB. Unfortunately, I hit that limit. I decided to use gzip compression, since according to the AWS documentation it's a supported compression algorithm.
This is my function:
public class TestHandler implements RequestHandler<TestPojoRequest, String> {
    public String handleRequest(TestPojoRequest request, Context context) {
        return String.valueOf(request.getPojos().size());
    }
}
And this is how I invoke it:
private static final Gson gson = new Gson();

public static void main(String[] args) throws IOException {
    TestPojoRequest testPojoRequest = new TestPojoRequest();
    TestPojo testPojo = new TestPojo();
    testPojo.setName("name");
    testPojo.setUrl("url");
    List<TestPojo> testPojoList = new ArrayList<>();
    for (int i = 0; i < 10; i++) {
        testPojoList.add(testPojo);
    }
    testPojoRequest.setPojos(testPojoList);
    String payload = gson.toJson(testPojoRequest);
    invokeLambdaFunction("TestFunction", payload, "us-west-2", "my access id", "my secret");
}
private static void invokeLambdaFunction(String functionName, String payload, String region, String accessKeyId, String secretAccessKey) throws IOException {
    LambdaClient client = LambdaClient.builder()
            .region(Region.of(region))
            .credentialsProvider(
                    StaticCredentialsProvider.create(AwsBasicCredentials.create(accessKeyId, secretAccessKey))
            )
            .build();
    InvokeRequest.Builder builder = InvokeRequest.builder()
            .functionName(functionName)
            .invocationType(InvocationType.REQUEST_RESPONSE)
            .overrideConfiguration(it -> it.putHeader("Content-Encoding", "gzip"))
            .payload(SdkBytes.fromByteArray(compress(payload)));
    System.out.println(builder.overrideConfiguration().headers());
    InvokeRequest request = builder.build();
    System.out.println(request);
    InvokeResponse result = client.invoke(request);
    System.out.println(new String(result.payload().asByteArray()));
}
public static byte[] compress(final String str) throws IOException {
    ByteArrayOutputStream obj = new ByteArrayOutputStream();
    GZIPOutputStream gzip = new GZIPOutputStream(obj);
    gzip.write(str.getBytes(StandardCharsets.UTF_8));
    gzip.flush();
    gzip.close();
    return obj.toByteArray();
}
As you can see, I set Content-Encoding as a header.
Unfortunately, it doesn't work. This is the response I get:
Exception in thread "main" software.amazon.awssdk.services.lambda.model.InvalidRequestContentException: Could not parse request body into json: Illegal character ((CTRL-CHAR, code 31)): only regular white space (\r, \n, \t) is allowed between tokens
at [Source: (byte[])"�V*���/V���V�K�MU��P:J�E9#����%��'"; line: 1, column: 2] (Service: Lambda, Status Code: 400, Request ID: cf21cb46-fb1c-4472-a20a-5d35010d5aff)
Looks like there was no decompression on the AWS side. What is wrong? I have no idea. I tried sending the payload as plain text and it worked, so I conclude that either AWS ignores my header or the library I'm using doesn't send it.

AWS Lambda does not natively accept compressed invocation payloads, so this approach won't work: the service tries to parse the raw request bytes as JSON. Currently, the only way I know of to send a compressed request to a Lambda function is through API Gateway, which can decompress the payload when content encoding is enabled on the API:
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-gzip-compression-decompression.html
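For illustration, a minimal sketch of posting the gzipped payload through such an API Gateway endpoint instead; the URL is a hypothetical placeholder, and the API must have payload compression enabled so the body is decompressed before it reaches the function:
import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

public class GzipViaApiGateway {
    public static void main(String[] args) throws IOException {
        String payload = "{\"pojos\":[{\"name\":\"name\",\"url\":\"url\"}]}";
        // Hypothetical API Gateway endpoint fronting the Lambda function
        URL url = new URL("https://example-api-id.execute-api.us-west-2.amazonaws.com/prod/test");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/json");
        // With compression enabled on the API, API Gateway decompresses
        // the body before invoking the integration
        conn.setRequestProperty("Content-Encoding", "gzip");
        try (OutputStream out = conn.getOutputStream();
             GZIPOutputStream gzip = new GZIPOutputStream(out)) {
            gzip.write(payload.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("HTTP " + conn.getResponseCode());
    }
}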
Thanks,

Related

AWS Elasticsearch/OpenSearch not connecting from Java

I'm using a simple example from the Amazon AWS site to connect to an OpenSearch index.
This is the example source: https://docs.aws.amazon.com/opensearch-service/latest/developerguide/request-signing.html#request-signing-java.
The health status of my node is yellow and it is open:
yellow open my-index
The error message:
Exception in thread "main" java.net.ConnectException
at org.elasticsearch.client.RestClient$SyncResponseListener.get(RestClient.java:943)
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:227)
at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:1256)
at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:1231)
at org.elasticsearch.client.RestHighLevelClient.index(RestHighLevelClient.java:587)
at com.amazonaws.lambda.demo.AWSElasticsearchServiceClient.main(AWSElasticsearchServiceClient.java:41)
Caused by: java.net.ConnectException
at org.apache.http.nio.pool.RouteSpecificPool.timeout(RouteSpecificPool.java:168)
at org.apache.http.nio.pool.AbstractNIOConnPool.requestTimeout(AbstractNIOConnPool.java:561)
at org.apache.http.nio.pool.AbstractNIOConnPool$InternalSessionRequestCallback.timeout(AbstractNIOConnPool.java:822)
at org.apache.http.impl.nio.reactor.SessionRequestImpl.timeout(SessionRequestImpl.java:183)
at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processTimeouts(DefaultConnectingIOReactor.java:210)
at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processEvents(DefaultConnectingIOReactor.java:155)
at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor.execute(AbstractMultiworkerIOReactor.java:348)
at org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager.execute(PoolingNHttpClientConnectionManager.java:192)
at org.apache.http.impl.nio.client.CloseableHttpAsyncClientBase$1.run(CloseableHttpAsyncClientBase.java:64)
at java.lang.Thread.run(Unknown Source)
private static String serviceName = "es"; // as in the linked AWS signing example
private static String region = "us-west-1";
private static String domainEndpoint = "<my-index...amazon.com>"; // e.g. https://search-mydomain.us-west-1.es.amazonaws.com
private static String index = "my-index";
private static String type = "_doc";
private static String id = "1";

static final AWSCredentialsProvider credentialsProvider = new DefaultAWSCredentialsProviderChain();

public static void main(String[] args) throws IOException {
    RestHighLevelClient searchClient = searchClient(serviceName, region);

    // Create the document as a hash map
    Map<String, Object> document = new HashMap<>();
    document.put("title", "Walk the Line");
    document.put("director", "James Mangold");
    document.put("year", "2005");

    // Form the indexing request, send it, and print the response
    IndexRequest request = new IndexRequest(index, type, id).source(document);
    IndexResponse response = searchClient.index(request, RequestOptions.DEFAULT);
    System.out.println(response.toString());
}
// Adds the interceptor to the OpenSearch REST client
public static RestHighLevelClient searchClient(String serviceName, String region) {
    AWS4Signer signer = new AWS4Signer();
    signer.setServiceName(serviceName);
    signer.setRegionName(region);
    HttpRequestInterceptor interceptor = new AWSRequestSigningApacheInterceptor(serviceName, signer, credentialsProvider);
    return new RestHighLevelClient(RestClient.builder(HttpHost.create(domainEndpoint))
            .setHttpClientConfigCallback(hacb -> hacb.addInterceptorLast(interceptor)));
}
Try this example; I tried the same and it worked well for me. I did not do anything special regarding certificates, as I had followed the AWS demo examples to create the domain.
Hopefully this is what you are looking for.

OkHttp client not getting custom headers from response

I have an application A that has an endpoint which signs the response and puts the signature in a header. The headers look like:
X-Algorithm: SHA256withRSA
X-Signature: Zca8Myv4......PkH1E25hA=
When I call the application directly, I see the headers.
I built application B, which calls A via a proxy P.
Application B has an OkHttp client which sends the request and reads the response. I have a custom interceptor:
@Slf4j
public class SignatureValidatorInterceptor implements Interceptor {

    private static final String ALGORITHM_HEADER = "x-algorithm";
    private static final String SIGNATURE_HEADER = "x-signature";
    private static final String SIGNATURE_ALGORITHM = "SHA256withRSA";

    private final Signature signer;

    public SignatureValidatorInterceptor(final PublicKey publicKey) {
        this.signer = getSigner(publicKey);
    }

    /**
     * This interceptor verifies the signature of the response
     */
    @Override
    public Response intercept(final Interceptor.Chain chain) throws IOException {
        final Response response = chain.proceed(chain.request());
        final Headers headers = response.headers();
        log.info("Received response for url={} with headers \n{}", response.request().url(), headers);
        final byte[] responseBodyBytes = response.peekBody(1000).bytes();
        final String algorithmHeader = headers.get(ALGORITHM_HEADER);
        final String signatureHeader = headers.get(SIGNATURE_HEADER);
        if (StringUtils.isBlank(signatureHeader) || StringUtils.isBlank(algorithmHeader)) {
            throw new IOException("No signature or algorithm header on response");
        }
        this.verifySignature(responseBodyBytes, algorithmHeader, signatureHeader);
        return response;
    }

    private void verifySignature(final byte[] responseBodyBytes, final String algorithmHeader, final String signatureHeader) {
        // code to validate signature
    }

    private Signature getSigner(final PublicKey publicKey) {
        // code to create signer
    }
}
The debug line logs the headers it receives. I see multiple standard HTTP headers in my logs, but my custom headers are missing!
I have no clue why. I first thought it was a network issue, but a curl from B's machine to application A shows the headers are there. I also had the proxy log the headers, and I can see they are passed along.
All applications are standard Spring Boot Java applications running on Linux VMs.
What am I missing?
Thanks,
Rick

Issues using Retrofit2 to call GitHub REST API to update existing file

I'm attempting to use Retrofit to call the GitHub API to update the contents of an existing file, but am getting 404s in my responses. For this question, I'm interested in updating this file. Here is the main code I wrote to try and achieve this:
GitHubUpdateFileRequest
public class GitHubUpdateFileRequest {
    public String message = "Some commit message";
    public String content = "Hello World!!";
    public String sha = "shaRetrievedFromSuccessfulGETOperation";
    public final Committer committer = new Committer();

    private class Committer {
        Author author = new Author();

        private class Author {
            final String name = "blakewilliams1";
            final String email = "blake@blakewilliams.org";
        }
    }
}
GitHubUpdateFileResponse
public class GitHubUpdateFileResponse {
    public GitHubUpdateFileResponse() {}
}
GitHubClient
public interface GitHubClient {
    // Docs: https://docs.github.com/en/rest/reference/repos#get-repository-content
    // WORKS FINE
    @GET("/repos/blakewilliams1/blakewilliams1.github.io/contents/qr_config.json")
    Call<GitHubFile> getConfigFile();

    // https://docs.github.com/en/rest/reference/repos#create-or-update-file-contents
    // DOES NOT WORK
    @PUT("/repos/blakewilliams1/blakewilliams1.github.io/contents/qr_config.json")
    Call<GitHubUpdateFileResponse> updateConfigFile(@Body GitHubUpdateFileRequest request);
}
Main Logic
// Set up the Retrofit client and add an authorization interceptor
UserAuthInterceptor interceptor =
        new UserAuthInterceptor("blake@blakewilliams.org", "myActualGitHubPassword");
OkHttpClient.Builder httpClient =
        new OkHttpClient.Builder().addInterceptor(interceptor);
Retrofit.Builder builder =
        new Retrofit.Builder()
                .baseUrl("https://api.github.com/")
                .addConverterFactory(GsonConverterFactory.create());
Retrofit retrofit = builder.client(httpClient.build()).build();
client = retrofit.create(GitHubClient.class);

// Now make the request and process the response
GitHubUpdateFileRequest request = new GitHubUpdateFileRequest();
client.updateConfigFile(request).enqueue(new Callback<GitHubUpdateFileResponse>() {
    @Override
    public void onResponse(Call<GitHubUpdateFileResponse> call, Response<GitHubUpdateFileResponse> response) {
        int responseCode = response.code();
        // More code on successful update
    }

    @Override
    public void onFailure(Call<GitHubUpdateFileResponse> call, Throwable t) {
        Log.e("MainActivity", "Unable to update file" + t.getLocalizedMessage());
    }
});
What currently happens:
Currently, the success callback is triggered, but with a response code of 404 like so:
Response{protocol=http/1.1, code=404, message=Not Found, url=https://api.github.com/repos/blakewilliams1/blakewilliams1.github.io/contents/qr_config.json}
Has anyone else encountered this? I first thought it was a problem with including '/contents/' in the URL, but I do the same thing in the request that reads the file contents and it works fine (it also uses the same URL, just a GET instead of a PUT).
For anyone interested in doing this in the future, I figured out the solution.
I needed to revise the request object structure.
Rather than using an authentication interceptor, I instead added an access token to the header. Here is where you can create access tokens for GitHub; you only need to grant the token the 'repo' permissions for this use case to work.
This is what my updated request object looks like:
public class GitHubUpdateFileRequest {
    public String message;
    public String content;
    public String sha;
    public final Committer committer = new Committer();

    public GitHubUpdateFileRequest(String unencodedContent, String message, String sha) {
        this.message = message;
        this.content = Base64.getEncoder().encodeToString(unencodedContent.getBytes());
        this.sha = sha;
    }

    private static class Committer {
        final String name = "yourGithubUsername";
        final String email = "email@yourEmailAddressForTheUsername.com";
    }
}
Then from my code, I would just say:
GitHubUpdateFileRequest updateRequest = new GitHubUpdateFileRequest("Hello World File Contents", "This is the title of the commit", shaOfExistingFile);
To use this request, I updated the Retrofit client interface like so:
// https://docs.github.com/en/rest/reference/repos#create-or-update-file-contents
@Headers({"Content-Type: application/vnd.github.v3+json"})
@PUT("/repos/yourUserName/yourRepository/subfolder/path/to/specific/file/theFile.txt")
Call<GitHubUpdateFileResponse> updateConfigFile(
        @Header("Authorization") String authorization, @Body GitHubUpdateFileRequest request);
And I call that interface like this:
githubClient.updateConfigFile("token yourGeneratedGithubToken", request);
And yes, you do need the "token " prefix. You could hardcode that header into the interface, but I pass it in so that I can store the token outside of my version control's reach for security reasons.
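Putting it together, a rough sketch of the full read-then-update flow; getSha() is an assumed accessor name on the GitHubFile class from the question (exposing the GET response's sha field), and updateCallback stands in for the PUT response handling shown earlier:
githubClient.getConfigFile().enqueue(new Callback<GitHubFile>() {
    @Override
    public void onResponse(Call<GitHubFile> call, Response<GitHubFile> response) {
        // getSha() is an assumed accessor for the "sha" field of the GET response
        String sha = response.body().getSha();
        GitHubUpdateFileRequest update = new GitHubUpdateFileRequest(
                "Hello World File Contents", "This is the title of the commit", sha);
        githubClient.updateConfigFile("token yourGeneratedGithubToken", update)
                .enqueue(updateCallback); // reuse the PUT callback from above
    }

    @Override
    public void onFailure(Call<GitHubFile> call, Throwable t) {
        Log.e("MainActivity", "Unable to read file: " + t.getLocalizedMessage());
    }
});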

Request is always invalid when using MessageBird RequestSigner

I'm not sure what I've done wrong, but requestSigner.isMatch always reports the request as invalid.
I used https://github.com/messagebird/java-rest-api/blob/master/examples/src/main/java/ExampleRequestSignatureValidation.java as my reference, but the result is still the same :(
public boolean isValidRequest(String signingKey, String timestamp, InputStream requestBody) throws IOException {
    RequestSigner requestSigner = new RequestSigner(messageBirdSigningKey.getBytes());
    byte[] bodyBytes = readAllBytes(requestBody);
    Request request = new Request(timestamp, "", bodyBytes);
    return requestSigner.isMatch(signingKey, request);
}
I pass an empty string for the query params, since the incoming message has null query params.
The messageBirdSigningKey is the signing key provided by MessageBird.
Any leads would be a great help!
Thank you!

Spring Integration not forwarding a custom header?

I am adding some headers in my-transformer:
public Message<?> transform(final Message<?> message) {
    List<Item> items = doStuff(message);
    final MessageBuilder<?> messageBuilder = MessageBuilder
            .withPayload(message.getPayload())
            .copyHeadersIfAbsent(message.getHeaders());
    for (final Item item : items) {
        messageBuilder.setHeader(item.getHeaderName(), item.getValue());
    }
    return messageBuilder.build();
}
And I wrote an integration test to confirm that my header is present on the output channel:
public static class HeaderTest extends TransformerTest {

    @Test
    public void test() throws Exception {
        channels.input().send(new GenericMessage<>(TransformerTest.EXAMPLE_PAYLOAD));
        final Message<?> out = this.collector.forChannel(this.channels.output()).poll(10, TimeUnit.SECONDS);
        assertThat(out, HeaderMatcher.hasHeader("header-test", notNullValue()));
    }
}
But when I created a stream like:
http --port=1234 | my-transformer | log --expression=toString()
and sent the same EXAMPLE_PAYLOAD, I received the following message in the log: GenericMessage [payload=..., headers={kafka_offset=0, id=f0a0727c-9351-274c-58b3-edee9ccbf6ce, kafka_receivedPartitionId=0, contentType=text/plain;charset=UTF-8, kafka_receivedTopic=myTopic.my-transformer, timestamp=1485171448947}].
Why isn't header-test in the message headers?
-- EDIT --
So if I understood correctly I am supposed to do something like:
public class MyTransformer implements Transformer {

    private final EmbeddedHeadersMessageConverter converter = new EmbeddedHeadersMessageConverter();

    @Override
    public Message<?> transform(final Message<?> message) {
        List<Item> items = doStuff(message);
        final MessageBuilder<byte[]> messageBuilder = MessageBuilder
                .withPayload(((String) message.getPayload()).getBytes())
                .copyHeadersIfAbsent(message.getHeaders());
        final int itemsSize = items.size();
        final String[] headerNames = new String[itemsSize];
        for (int i = 0; i < itemsSize; i++) {
            final Item item = items.get(i);
            messageBuilder.setHeader(item.getHeaderName(), item.getValue());
            headerNames[i] = item.getHeaderName();
        }
        final Message<byte[]> msg = messageBuilder.build();
        final byte[] rawMessageWithEmbeddedHeaders;
        try {
            rawMessageWithEmbeddedHeaders = converter.embedHeaders(new MessageValues(msg), headerNames);
        } catch (final Exception e) {
            throw new HeaderEmbeddingException(String.format("Cannot embed headers from '%s' into message: %s", items, msg), e);
        }
        return new GenericMessage<>(rawMessageWithEmbeddedHeaders);
    }
}
with spring.cloud.stream.bindings.output.producer.headerMode=raw set in application.properties, and then convert the message payload on the receiving side? Or can I somehow make the receiving side convert the message payload automatically?
You don't say whether you are using Spring XD or Spring Cloud Data Flow, but the solution is similar in each case.
Since Kafka has no native support for message headers, we have to embed them in the message payload. And since we don't want to transport unnecessary headers, you have to opt in to the headers you want transported by listing their names in servers.yml for Spring XD, or in application.yml (or .properties) for a Spring Cloud Stream app.
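For example, for a Spring Cloud Stream app with the Kafka binder, the opt-in would look like this in application.properties (the header name is taken from your test; double-check the property name against your binder version):
# transport the custom header over the Kafka binder
spring.cloud.stream.kafka.binder.headers=header-test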
EDIT
Unfortunately, there is no support for patterns. One option would be to use the EmbeddedHeadersMessageConverter yourself and set the Kafka mode to raw (on your transformer's output destination). Raw mode means the binder won't embed headers.
That way, the next app (without raw mode) should be able to decode the headers as if they had been embedded by the binder in your transformer. Javadocs here.
You are limited to 255 headers.
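For illustration, a sketch of manual decoding on the receiving side, assuming the converter's extractHeaders method matches the 1.x Javadocs (the exact signature and MessageValues accessors may differ across versions):
// Sketch only: decode a raw-mode payload whose headers were embedded upstream.
EmbeddedHeadersMessageConverter converter = new EmbeddedHeadersMessageConverter();
// rawMessage is the Message<byte[]> received from the raw-mode destination;
// extractHeaders(Message<byte[]>, boolean) is assumed from the 1.x Javadocs
MessageValues values = converter.extractHeaders(rawMessage, true);
Object headerTest = values.get("header-test"); // map-style access to the restored headers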
