Pass a file to AWS Lambda via API Gateway - java

I am trying to pass a .csv file (less than 5MB) to a Lambda to perform some processing, and I am not sure what the best way of doing this would be. Currently, I pass it as a JSON string property and it works, but I'm not sure if there are any pitfalls to this approach.
Current API:
{
  "fileMetadata1": "some metadata, like a name for the request",
  "date": "date the file was last modified, for example",
  ...
  "fileData": "the actual data of the file in String format"
}
Ideally I'd like "fileData" to be of type InputStream in my input object model, but that didn't work for some reason.

A very common pattern is to upload your file to S3 and then have your Lambda read it from S3. You can either trigger the Lambda from an S3 event, or pass the location of the file in your invocation.
This is generally a better idea, as you won't have to worry about input size limits anymore.
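As for why the InputStream field didn't work: the Lambda Java runtime uses Jackson to deserialize the JSON event, and Jackson cannot bind a JSON string to an InputStream. A sketch of a workaround, keeping fileData as a String and wrapping it on demand (the class and method names here are hypothetical, not part of any AWS API):

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

// Hypothetical input model: API Gateway / Lambda deserializes the JSON body
// into this POJO. fileData must stay a String, since Jackson cannot bind a
// JSON string to an InputStream field.
class FileRequest {
    public String fileMetadata1;
    public String date;
    public String fileData;

    // Wrap the String payload in an InputStream for downstream CSV processing
    public InputStream fileDataStream() {
        return new ByteArrayInputStream(fileData.getBytes(StandardCharsets.UTF_8));
    }
}
```

In the handler you would then pass `request.fileDataStream()` to whatever CSV parser you use, instead of expecting the stream to be bound directly.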

Related

I want to convert JSON format data to a Java object in a Java transformation

The requirement is:
I am using a Java transformation in the Informatica Developer client. My Java code returns data in JSON format, and that result is stored in a "Result" parameter.
Sample data: Result={ "result": [ { "ID": "101", "Name": "XYZ" } ] }
Now I want to store this data in a relational table, say Employee, with two columns, ID and Name.
So in the Java transformation I am using two output ports: Id (datatype Integer) and Name (datatype String).
I want to write the code so that the ID value of the JSON data goes to the Id output port and the Name value of the JSON data goes to the Name output port.
If you want to use a Java transformation for this, you will need to make the Jackson JAR file available on the INFA server, so your code can see Jackson on its classpath. (And you will have to do the same for any additional JAR file dependencies Jackson may also need.)
Normally, this is something the INFA admins would have to do for you, as a one-time task. This is because the classpath location is a directory on the INFA server (as chosen by you and/or the admins) - and you probably do not have access to it.
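Once Jackson is on the classpath, the parsing itself is straightforward. A minimal sketch, assuming Jackson's databind JAR is available (the `ResultParser` class and `parse` method are illustrative; in the actual Java transformation you would assign the extracted values to the Id and Name output ports instead of returning them):

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

class ResultParser {
    // Parse Result = {"result":[{"ID":"101","Name":"XYZ"}]} into [ID, Name]
    // pairs. In the transformation, each row would be written to the Id and
    // Name output ports rather than collected in an array.
    static String[][] parse(String json) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        JsonNode rows = mapper.readTree(json).get("result");
        String[][] out = new String[rows.size()][2];
        for (int i = 0; i < rows.size(); i++) {
            out[i][0] = rows.get(i).get("ID").asText();
            out[i][1] = rows.get(i).get("Name").asText();
        }
        return out;
    }
}
```

Note that ID arrives as a JSON string ("101"), so you would convert it with `Integer.parseInt` before writing it to an Integer port.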

Apache Camel: How to look inside body to determine file format

We receive .csv files (both via FTP and email), each of which can be one of a few different formats (determined by looking at the top line of the file). I am fairly new to Apache Camel but want to implement a content-based router and unmarshal each to the relevant class.
My current solution is to break down the files to a lists of strings, manually use the first line to determine the type of file, and then use the rest of the strings to create relevant entity instances.
Is there a cleaner and better way?
You could use a POJO to implement the type check in whatever way works best for your files.
public String checkFileType(@Body File file) {
    return determineFileType(file);
}

private String determineFileType(File file) {...}
This way you can keep your route clean by separating the file type check from the rest of the processing, since the file type check is just metadata enrichment.
For example, you could set the return value as a message header by calling the bean:
.setHeader("fileType", method(fileTypeChecker))
Then you can route the files according to type easily by using the message header.
.choice()
.when(header("fileType").isEqualTo("foo"))
...
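The `determineFileType` body could look something like the sketch below, which peeks at the header line. The header prefixes ("orderId," and "customerId,") are placeholders, not from the question; substitute whatever actually distinguishes your CSV layouts:

```java
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.IOException;

class FileTypeChecker {
    // Hypothetical type check: read only the first line of the file and map
    // it to a format name the route can branch on.
    public String checkFileType(File file) throws IOException {
        try (BufferedReader reader = new BufferedReader(new FileReader(file))) {
            String firstLine = reader.readLine();
            if (firstLine == null) return "empty";
            if (firstLine.startsWith("orderId,")) return "orders";
            if (firstLine.startsWith("customerId,")) return "customers";
            return "unknown";
        }
    }
}
```

Each branch of the `.choice()` can then `.unmarshal()` with the Bindy or CSV data format class that matches the detected type.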

How to post a big string/json using AJAX on Play Framework 1.4.x

I have a JSON that looks more or less like this:
{"id":"id","date":"date","csvdata":"csvdata".....}
where csvdata property is a big amount of data in JSON format too.
I was trying to POST this JSON using AJAX in Play! Framework 1.4.x, so I sent it just like that, but when I receive the data on the server side, csvdata looks like [object Object], and that is what gets stored in my db.
My first thought to solve this was to send the csvdata JSON as a string and store it as a longtext, but when I try this, my request fails with the following error:
413 (Request Entity Too Large)
And Play's console show me this message:
Number of request parameters 3623 is higher than maximum of 1000, aborting. Can be configured using 'http.maxParams'
I also tried adding http.maxParams=5000 to application.conf, but then Play's console says nothing and the field is stored as null in my database.
Can anyone help me, or maybe suggest another solution to my problem?
Thank you so much in advance.
Is it possible that you sent "csvdata" as an array, not a string? Each element in the array would be a separate parameter. I have sent 100KB strings using AJAX and not run into the http.maxParams limit. You can check the contents of the request body using your browser's developer tools.
If your csvdata originates as a file on the client's machine, then the easiest way to send it is as a File. Your controller action would look like:
public static void upload(String id, Date date, File csv) {
...
}
When Play! binds a parameter to the File type, it writes the contents of the parameter to a temporary file which you can read in. (This avoids running out of memory if a large file is uploaded.) The File parameter type was designed for a normal form submit, but I have used it in AJAX when the browser supported some HTML5 features (File API and Form Data).
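Inside the action, reading that temporary file back into a String for a longtext column could look like this sketch (the `CsvUpload` helper is illustrative, not part of Play's API; for very large files you would stream instead of reading everything at once):

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

class CsvUpload {
    // Hypothetical helper: read the temp file Play created for the bound
    // File parameter, so its contents can be stored as a longtext value.
    static String readCsv(File csv) throws IOException {
        return new String(Files.readAllBytes(csv.toPath()), "UTF-8");
    }
}
```

On the client side, the matching AJAX request would send the csvdata as a file field via FormData rather than as a JSON property, which is what keeps it from being exploded into thousands of request parameters.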

JMeter response to a file (append it to only one file)

In my JMeter project I make a request and get a response like {id="fklajdlfja"}, and then I get one JSON file for each response.
My question is: is there an elegant way to merge all the ids into one file?
My options are:
Write a JavaScript after the JMeter run to put it all together.
Use a JSON post-processor to get the id and then append it to a file.
Any nicer solution?
Extract the id from the response; you can use either the Regular Expression Extractor or the JSON post-processor.
Then use a Beanshell PostProcessor to append these ids to a file. That should be the easiest way.
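The append step can be sketched as plain Java (Beanshell accepts Java-like syntax, though it predates try-with-resources, so the actual Beanshell script would open and close the writer explicitly). In the PostProcessor you would obtain the id with `vars.get("id")`, assuming your extractor stored it under the variable name `id`:

```java
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;

class IdCollector {
    // Append one id per line; the 'true' flag opens the file in append mode,
    // so every sampler's id ends up in the same file.
    static void appendId(String path, String id) throws IOException {
        try (PrintWriter out = new PrintWriter(new FileWriter(path, true))) {
            out.println(id);
        }
    }
}
```

With this in place there is no post-run merge step at all: the file accumulates ids as the test runs.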

Get Google Cloud Storage File from ObjectName

I'm migrating my GAE app from the deprecated File API to Google Cloud Storage Client Library.
I used to persist the blobKey, but since there is only partial support for it (as specified here), from now on I'll have to persist the object name.
Unfortunately, the object name that comes from GCS looks more or less like this:
/gs/bucketname/819892hjd81dh19gf872g8211
As you can see, it also contains the bucket name.
Here's the issue: every time I need to get the file for further processing (or to serve it from a servlet), I need to create an instance of GcsFileName(bucketName, objectName), which gives me something like
/bucketName/gs/bucketName/akahsdjahslagfasgfjkasd
which (of course) doesn't work.
So my question is: how can I generate a GcsFileName from the objectName?
UPDATE
I tried using the objectName as BlobKey. But it just doesn't work :(
InputStream is = new BlobstoreInputStream(blobstoreService.createGsBlobKey("/gs/bucketName/akahsdjahslagfasgfjkasd"));
I got the usual answer
BlobstoreInputStream received an invalid blob key
How do I get the file using the objectName?
If you have persisted and retrieved e.g. the string String objname with the value "/gs/bucketname/819892hjd81dh19gf872g8211", you could split it on "/" (String[] pieces = objname.split("/")) and use the pieces appropriately in the call to GcsFileName.
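Concretely, the split might be sketched like this. Note the leading "/gs/" prefix means `split` yields an empty first element, so the bucket sits at index 2; the `GcsNames` helper is illustrative, and the extracted pieces would then be fed to the GCS client library's filename constructor:

```java
class GcsNames {
    // Split "/gs/bucketname/819892..." into {bucket, object}. The limit of 4
    // keeps any further "/" characters inside the object name intact.
    static String[] bucketAndObject(String objname) {
        String[] pieces = objname.split("/", 4);
        // pieces = ["", "gs", "bucketname", "819892..."]
        return new String[] { pieces[2], pieces[3] };
    }
}
```

You would then build the filename from the two pieces, e.g. `new GcsFilename(pieces[0], pieces[1])`, so the bucket name is no longer duplicated inside the object name.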
