I am trying to develop an AWS Lambda function which is triggered when a file shows up in a specific S3 bucket. I am trying to follow the examples from the AWS Lambda documentation, using aws-java-sdk-lambda 1.11.192 and aws-java-sdk-s3 1.11.192. Unfortunately, these examples use RequestHandler, which is deprecated in the latest version of the jar.
My code is similar to this example:
package example;
import java.net.URLDecoder;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.S3Event;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.event.S3EventNotification.S3EventNotificationRecord;
public class S3GetTextBody implements RequestHandler<S3Event, String> {
public String handleRequest(S3Event s3event, Context context) {
try {
S3EventNotificationRecord record = s3event.getRecords().get(0);
// Retrieve the bucket & key for the uploaded S3 object that
// caused this Lambda function to be triggered
String bkt = record.getS3().getBucket().getName();
String key = record.getS3().getObject().getKey().replace('+', ' ');
key = URLDecoder.decode(key, "UTF-8");
// Read the source file as text
AmazonS3 s3Client = new AmazonS3Client();
String body = s3Client.getObjectAsString(bkt, key);
System.out.println("Body: " + body);
return "ok";
} catch (Exception e) {
System.err.println("Exception: " + e);
return "error";
}
}
}
The current version of the AWS SDK for Lambda doesn't contain:
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.S3Event;
What are my alternatives? How can I achieve similar functionality using the newer versions of their SDK?
You aren't required to implement the RequestHandler interface provided in their helper library. Any method will work provided the input and output parameters can be serialized properly.
See this article for more detail.
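For example, a handler can be a plain class whose method takes the deserialized event. The following is only a rough sketch (the class, method, and handler-string names are illustrative, not from the AWS docs); you would point Lambda at it with the handler string example.S3NotificationHandler::onEvent:
package example;
import java.util.List;
import java.util.Map;
// A plain class with an ordinary method; the Lambda runtime deserializes the
// S3 notification JSON into the parameter type for you.
public class S3NotificationHandler {
    @SuppressWarnings("unchecked")
    public String onEvent(Map<String, Object> event) {
        // The S3 notification arrives as JSON with a top-level "Records" array.
        List<Map<String, Object>> records = (List<Map<String, Object>>) event.get("Records");
        for (Map<String, Object> record : records) {
            Map<String, Object> s3 = (Map<String, Object>) record.get("s3");
            Map<String, Object> bucket = (Map<String, Object>) s3.get("bucket");
            Map<String, Object> object = (Map<String, Object>) s3.get("object");
            System.out.println("Bucket: " + bucket.get("name") + ", key: " + object.get("key"));
        }
        return "ok";
    }
}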
If you want to use their helper library, use the following dependency coordinates:
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-lambda-java-core</artifactId>
    <version>1.1.0</version>
</dependency>
And for the S3 event helper:
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-lambda-java-events</artifactId>
    <version>1.3.0</version>
</dependency>
It's not located within their main aws-java-sdk but instead has its own repository.
I am trying to use Amazon Polly to convert text to speech using the Java API. As described by Amazon, there are several US English voices which support the neural engine: https://docs.aws.amazon.com/polly/latest/dg/voicelist.html
The code I am running in my Java application is the following:
package com.amazonaws.demos.polly;
import java.io.IOException;
import java.io.InputStream;
import com.amazonaws.ClientConfiguration;
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.regions.Region;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.polly.AmazonPollyClient;
import com.amazonaws.services.polly.model.DescribeVoicesRequest;
import com.amazonaws.services.polly.model.DescribeVoicesResult;
import com.amazonaws.services.polly.model.OutputFormat;
import com.amazonaws.services.polly.model.SynthesizeSpeechRequest;
import com.amazonaws.services.polly.model.SynthesizeSpeechResult;
import com.amazonaws.services.polly.model.Voice;
import javazoom.jl.player.advanced.AdvancedPlayer;
import javazoom.jl.player.advanced.PlaybackEvent;
import javazoom.jl.player.advanced.PlaybackListener;
public class PollyDemo {
private final AmazonPollyClient polly;
private final Voice voice;
private static final String JOANNA="Joanna";
private static final String KENDRA="Kendra";
private static final String MATTHEW="Matthew";
private static final String SAMPLE = "Congratulations. You have successfully built this working demo of Amazon Polly in Java. Have fun building voice enabled apps with Amazon Polly (that's me!), and always look at the AWS website for tips and tricks on using Amazon Polly and other great services from AWS";
public PollyDemo(Region region) {
// create an Amazon Polly client in a specific region
polly = new AmazonPollyClient(new DefaultAWSCredentialsProviderChain(),
new ClientConfiguration());
polly.setRegion(region);
// Create describe voices request.
DescribeVoicesRequest describeVoicesRequest = new DescribeVoicesRequest();
// Synchronously ask Amazon Polly to describe available TTS voices.
DescribeVoicesResult describeVoicesResult = polly.describeVoices(describeVoicesRequest);
//voice = describeVoicesResult.getVoices().get(0);
voice = describeVoicesResult.getVoices().stream().filter(p -> p.getName().equals(MATTHEW)).findFirst().get();
}
public InputStream synthesize(String text, OutputFormat format) throws IOException {
SynthesizeSpeechRequest synthReq =
new SynthesizeSpeechRequest().withText(text).withVoiceId(voice.getId())
.withOutputFormat(format);
SynthesizeSpeechResult synthRes = polly.synthesizeSpeech(synthReq);
return synthRes.getAudioStream();
}
public static void main(String args[]) throws Exception {
//create the test class
PollyDemo helloWorld = new PollyDemo(Region.getRegion(Regions.US_WEST_1));
//get the audio stream
InputStream speechStream = helloWorld.synthesize(SAMPLE, OutputFormat.Mp3);
//create an MP3 player
AdvancedPlayer player = new AdvancedPlayer(speechStream,
javazoom.jl.player.FactoryRegistry.systemRegistry().createAudioDevice());
player.setPlayBackListener(new PlaybackListener() {
@Override
public void playbackStarted(PlaybackEvent evt) {
System.out.println("Playback started");
System.out.println(SAMPLE);
}
@Override
public void playbackFinished(PlaybackEvent evt) {
System.out.println("Playback finished");
}
});
// play it!
player.play();
}
}
By default it is using the standard engine for the Matthew voice. Please suggest what needs to be changed to make the speech neural for the Matthew voice.
Thanks
Thanks @ASR for your feedback.
I was able to find the engine parameter as you suggested.
The way I had to solve this is:
Update the aws-java-sdk-polly version from 1.11.77 (the version used in their documentation) to the latest 1.11.762 in the pom.xml and build the Maven project. This brings in the latest definition of the SynthesizeSpeechRequest class; with 1.11.77 I was unable to see the withEngine method in its definition.
<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>aws-java-sdk-polly</artifactId>
<version>1.11.762</version>
</dependency>
Then update the request with withEngine("neural") as below:
SynthesizeSpeechRequest synthReq =
new SynthesizeSpeechRequest().withText(text).withVoiceId(voice.getId())
.withOutputFormat(format).withEngine("neural");
As described at https://docs.aws.amazon.com/polly/latest/dg/NTTS-main.html, neural voices are only available in specific regions, so I had to choose one of them:
PollyDemo helloWorld = new PollyDemo(Region.getRegion(Regions.US_WEST_2));
After this, the neural voice worked perfectly.
I am assuming you are using AWS Java SDK 1.11.x.
The AWS documentation here states that you need to set the engine parameter of the speech synthesis request to neural, and the AWS Java SDK documentation here describes the withEngine method used to set it.
PS: the documentation page doesn't seem to provide direct links to the methods, so you will have to search for withEngine.
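For reference, here is a minimal, self-contained sketch of such a request (the region, voice, and sample text are only assumptions, and it requires an aws-java-sdk-polly version recent enough to include withEngine):
import java.io.InputStream;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.polly.AmazonPolly;
import com.amazonaws.services.polly.AmazonPollyClientBuilder;
import com.amazonaws.services.polly.model.OutputFormat;
import com.amazonaws.services.polly.model.SynthesizeSpeechRequest;
import com.amazonaws.services.polly.model.SynthesizeSpeechResult;
public class NeuralSynthesisSketch {
    public static void main(String[] args) throws Exception {
        // Build a Polly client in a region that offers neural voices (us-west-2 here).
        AmazonPolly polly = AmazonPollyClientBuilder.standard()
                .withRegion(Regions.US_WEST_2)
                .build();
        // Request the neural engine explicitly.
        SynthesizeSpeechRequest request = new SynthesizeSpeechRequest()
                .withText("Hello from the neural engine")
                .withVoiceId("Matthew")
                .withOutputFormat(OutputFormat.Mp3)
                .withEngine("neural");
        SynthesizeSpeechResult result = polly.synthesizeSpeech(request);
        try (InputStream audio = result.getAudioStream()) {
            System.out.println("Received " + audio.available() + " bytes of MP3 audio");
        }
    }
}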
I am looking for a Java example of the Amazon Forecast API so I can integrate it into my Java application.
I searched and didn't find any solution. I even raised a support ticket with the AWS team, and they were also unable to provide one (their reply is attached as a screenshot).
Documentation is available for Python, Node.js, and other languages, but not for Java.
I have already struggled a lot integrating the AWS Forecast Java SDK.
UPDATE
Finally, I got something working, which I am posting in my answer below, but I am still looking for a better option.
After spending a few days searching for documentation or a working example, I arrived at the following solution. I am able to get the predictions using this code, but I am still looking for a better approach (if possible).
package com.mayur.awsforecastexample;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Map.Entry;
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.forecastquery.AmazonForecastQueryClientBuilder;
import com.amazonaws.services.forecastquery.model.DataPoint;
import com.amazonaws.services.forecastquery.model.Forecast;
import com.amazonaws.services.forecastquery.model.QueryForecastRequest;
import com.amazonaws.services.forecastquery.model.QueryForecastResult;
public class ForecastTest {
AmazonForecastQueryClientBuilder client = AmazonForecastQueryClientBuilder.standard();
public QueryForecastResult queryForecast(QueryForecastRequest request) {
client.setCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY")));
client.setRegion("REGION");
return client.build().queryForecast(request);
}
public static void main(String ar[]) {
Map<String, String> filters = new HashMap<String, String>();
filters.put("item_id", "YOUR_ITEM_ID");
QueryForecastRequest request = new QueryForecastRequest();
request.setForecastArn("FORECAST_ARN");
request.setFilters(filters);
request.setStartDate(null);
request.setEndDate(null);
ForecastTest forecastTest = new ForecastTest();
QueryForecastResult res = forecastTest.queryForecast(request);
Forecast f = res.getForecast();
Map<String, List<DataPoint>> predictions = f.getPredictions();
for (Entry<String, List<DataPoint>> entry : predictions.entrySet())
System.out.println("Key = " + entry.getKey() + ", Value = " + entry.getValue());
}
}
Please check this working example.
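For what it's worth, the same query can also be written with the builder's fluent with* methods and the Regions enum. The following is only a sketch of that alternative style; the credentials, region, forecast ARN, and item id are placeholders:
import java.util.Collections;
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.forecastquery.AmazonForecastQuery;
import com.amazonaws.services.forecastquery.AmazonForecastQueryClientBuilder;
import com.amazonaws.services.forecastquery.model.QueryForecastRequest;
import com.amazonaws.services.forecastquery.model.QueryForecastResult;
public class ForecastQuerySketch {
    public static void main(String[] args) {
        // Build the query client once; the with* methods keep the configuration in one expression.
        AmazonForecastQuery queryClient = AmazonForecastQueryClientBuilder.standard()
                .withCredentials(new AWSStaticCredentialsProvider(
                        new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY")))
                .withRegion(Regions.US_EAST_1)
                .build();
        // Query a single item's forecast; the ARN and item id are placeholders.
        QueryForecastResult result = queryClient.queryForecast(new QueryForecastRequest()
                .withForecastArn("FORECAST_ARN")
                .withFilters(Collections.singletonMap("item_id", "YOUR_ITEM_ID")));
        // Predictions are keyed by quantile (e.g. p10, p50, p90).
        result.getForecast().getPredictions()
                .forEach((quantile, points) -> System.out.println(quantile + " = " + points));
    }
}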
I am following the Quickstart Guide: Integrating Search into your Application from the Terrier Information Retrieval platform's website (Terrier IR platform homepage), using the code below, which is available on their webpage. The code uses org.terrier.realtime.memory.MemoryIndex, but that class is not available in the Terrier jar files I have included in my project using Maven.
I have checked both Terrier 5.1 and 5.0 but was unable to locate the MemoryIndex class and its constructor.
import java.io.File;
import java.io.StringReader;
import java.util.HashMap;
import java.util.Iterator;
import org.terrier.indexing.Document;
import org.terrier.indexing.TaggedDocument;
import org.terrier.indexing.tokenisation.Tokeniser;
import org.terrier.querying.LocalManager;
import org.terrier.querying.Manager;
import org.terrier.querying.ManagerFactory;
import org.terrier.querying.ScoredDoc;
import org.terrier.querying.ScoredDocList;
import org.terrier.querying.SearchRequest;
import org.terrier.realtime.memory.MemoryIndex;
import org.terrier.utility.ApplicationSetup;
import org.terrier.utility.Files;
public class IndexingAndRetrievalExample {
public static void main(String[] args) throws Exception {
// Directory containing files to index
String aDirectoryToIndex = "/my/directory/containing/files/";
// Configure Terrier
ApplicationSetup.setProperty("indexer.meta.forward.keys", "docno");
ApplicationSetup.setProperty("indexer.meta.forward.keylens", "30");
// Create a new Index
MemoryIndex memIndex = new MemoryIndex();
// For each file
for (String filename : new File(aDirectoryToIndex).list() ) {
String fullPath = aDirectoryToIndex+filename;
// Convert it to a Terrier Document
Document document = new TaggedDocument(Files.openFileReader(fullPath), new HashMap(), Tokeniser.getTokeniser());
// Add a meaningful identifier
document.getAllProperties().put("docno", filename);
// index it
memIndex.indexDocument(document);
}
// Set up the querying process
ApplicationSetup.setProperty("querying.processes", "terrierql:TerrierQLParser,"
+ "parsecontrols:TerrierQLToControls,"
+ "parseql:TerrierQLToMatchingQueryTerms,"
+ "matchopql:MatchingOpQLParser,"
+ "applypipeline:ApplyTermPipeline,"
+ "localmatching:LocalManager$ApplyLocalMatching,"
+ "filters:LocalManager$PostFilterProcess");
// Enable the decorate enhancement
ApplicationSetup.setProperty("querying.postfilters", "org.terrier.querying.SimpleDecorate");
// Create a new manager run queries
Manager queryingManager = ManagerFactory.from(memIndex.getIndexRef());
// Create a search request
SearchRequest srq = queryingManager.newSearchRequestFromQuery("search for document");
// Specify the model to use when searching
srq.setControl(SearchRequest.CONTROL_WMODEL, "BM25");
// Enable querying processes
srq.setControl("terrierql", "on");
srq.setControl("parsecontrols", "on");
srq.setControl("parseql", "on");
srq.setControl("applypipeline", "on");
srq.setControl("localmatching", "on");
srq.setControl("filters", "on");
// Enable post filters
srq.setControl("decorate", "on");
// Run the search
queryingManager.runSearchRequest(srq);
// Get the result set
ScoredDocList results = srq.getResults();
// Print the results
System.out.println("The top "+results.size()+" of documents were returned");
System.out.println("Document Ranking");
int i = 0;
for (ScoredDoc doc : results) {
    int docid = doc.getDocid();
    double score = doc.getScore();
    String docno = doc.getMetadata("docno");
    System.out.println(" Rank " + (i++) + ": " + docid + " " + docno + " " + score);
}
}
}
I identified the problem: it was the Maven dependencies. It can be solved by adding the following dependencies when building the Maven project:
<dependencies>
<dependency>
<groupId>org.terrier</groupId>
<artifactId>terrier-core</artifactId>
<version>5.1</version>
</dependency>
<dependency>
<groupId>org.terrier</groupId>
<artifactId>terrier-realtime</artifactId>
<version>5.1</version>
</dependency>
</dependencies>
It looks like the class MemoryIndex.java is a part of terrier-core version 4.4. More info: https://jar-download.com/artifacts/org.terrier/terrier-core/4.4/source-code/org/terrier/realtime/memory/MemoryIndex.java
And their documentation seems to be out of date.
I am creating a simple application where I want to upload a file to my AWS S3 bucket. Here is my code:
import java.io.File;
import java.io.IOException;
import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.fasterxml.jackson.*;
public class UploadFileInBucket {
public static void main(String[] args) throws IOException {
String clientRegion = "<myRegion>";
String bucketName = "<myBucketName>";
String stringObjKeyName = "testobject";
String fileObjKeyName = "testfileobject";
String fileName = "D:\\Attachments\\LICENSE";
try {
BasicAWSCredentials awsCreds = new BasicAWSCredentials("<myAccessKey>", "<mySecretKey>");
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withRegion(clientRegion)
.withCredentials(new AWSStaticCredentialsProvider(awsCreds))
.build();
// Upload a text string as a new object.
s3Client.putObject(bucketName, stringObjKeyName, "Uploaded String Object");
// Upload a file as a new object with ContentType and title specified.
PutObjectRequest request = new PutObjectRequest(bucketName, fileObjKeyName, new File(fileName));
ObjectMetadata metadata = new ObjectMetadata();
metadata.setContentType("text/plain");
metadata.addUserMetadata("x-amz-meta-title", "someTitle");
request.setMetadata(metadata);
s3Client.putObject(request);
}
catch(AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
}
catch(SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
}
}
}
I am unable to upload the file and am getting the following error:
Exception in thread "main" java.lang.NoSuchFieldError:
ALLOW_FINAL_FIELDS_AS_MUTATORS
at com.amazonaws.partitions.PartitionsLoader.<clinit>(PartitionsLoader.java:52)
at com.amazonaws.regions.RegionMetadataFactory.create(RegionMetadataFactory.java:30)
at com.amazonaws.regions.RegionUtils.initialize(RegionUtils.java:64)
at com.amazonaws.regions.RegionUtils.getRegionMetadata(RegionUtils.java:52)
at com.amazonaws.regions.RegionUtils.getRegion(RegionUtils.java:105)
at com.amazonaws.client.builder.AwsClientBuilder.getRegionObject(AwsClientBuilder.java:249)
at com.amazonaws.client.builder.AwsClientBuilder.withRegion(AwsClientBuilder.java:238)
at UploadFileInBucket.main(UploadFileInBucket.java:28)
I have added the required AWS bucket credentials, permissions, and dependencies to execute this code.
What changes should I make in the code to get my file uploaded to the desired bucket?
It looks as though you either have the wrong version of the Jackson libraries or are somehow linking with multiple versions of them.
The AWS SDK for Java distribution contains a third-party/lib directory which holds the correct versions of all the libraries that version of the SDK should be built with. Depending on which features of the SDK you are using you may not need all of them, but those are the specific third-party libraries you should be using.
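If it helps, a tiny diagnostic like the following (a sketch, not part of the AWS SDK) prints which jackson-databind jar is actually on the runtime classpath and the version it reports, which usually exposes an outdated or duplicated jar immediately:
public class JacksonVersionCheck {
    public static void main(String[] args) {
        // Where ObjectMapper was loaded from, and the version jackson-databind reports.
        Class<?> mapper = com.fasterxml.jackson.databind.ObjectMapper.class;
        System.out.println("Loaded from: "
                + mapper.getProtectionDomain().getCodeSource().getLocation());
        System.out.println("Version: "
                + com.fasterxml.jackson.databind.cfg.PackageVersion.VERSION);
    }
}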
You need to add Jackson to your classpath. Its classes are missing.
I don't know which version you need, but you can download it from their GitHub page: https://github.com/FasterXML/jackson/
I've created and deployed a simple GET API in API Gateway; here is the ARN. There is no authentication whatsoever on this function, and I can simply call it from my browser.
arn:aws:lambda:ap-southeast-1:XXXXXXXXXXXXXX:function:La
and the public URL that can be opened in a browser is:
https://xxxxxxxxx.execute-api.ap-southeast-1.amazonaws.com/v1/lambda/geta
I'm using a Spring Boot project and the code below to invoke the API (following this doc).
The interface for the Lambda service:
package com.xxxxxxx.services.interfaces;
import com.amazonaws.services.lambda.invoke.LambdaFunction;
public interface ILambdaGetBalance {
@LambdaFunction(functionName="La")
String getA();
}
The service using that interface to call the Lambda function:
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Service;
import com.xxxxxxxx.services.interfaces.ILambdaGetBalance;
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.lambda.AWSLambda;
import com.amazonaws.services.lambda.AWSLambdaClientBuilder;
import com.amazonaws.services.lambda.invoke.LambdaInvokerFactory;
@Service
public class LambdaService {
@Value("${aws.access-key}")
private String accessKey;
@Value("${aws.secret-key}")
private String secretKey;
@Value("${aws.lambda.region-name}") // this is ap-southeast-1
private String regionName;
public void test() {
AWSCredentials credentials = new BasicAWSCredentials(accessKey,
secretKey);
AWSLambda client = AWSLambdaClientBuilder.standard()
.withCredentials(new AWSStaticCredentialsProvider(credentials))
.withRegion(regionName)
.build();
final ILambdaGetBalance getBalance = LambdaInvokerFactory.builder()
.lambdaClient(client)
.build(ILambdaGetBalance.class);
getBalance.getA();
}
}
After calling the getA function, the system throws the following exception:
java.lang.NoSuchMethodError: com.amazonaws.services.lambda.AWSLambdaClient.beforeClientExecution(Lcom/amazonaws/AmazonWebServiceRequest;)Lcom/amazonaws/AmazonWebServiceRequest;
Any idea why is this happening? What am I missing?
Looks like your aws-java-sdk-lambda and aws-java-sdk-core modules may have incompatible versions. How are you resolving the dependencies for your project? The beforeClientExecution method was added to the AmazonWebServiceClient base class in version 1.11.106 of aws-java-sdk-core - see here: https://github.com/aws/aws-sdk-java/blame/master/aws-java-sdk-core/src/main/java/com/amazonaws/AmazonWebServiceClient.java#L590
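One quick way to confirm which aws-java-sdk-core your build actually resolves (a diagnostic sketch, independent of the Lambda invoker) is to look the method up reflectively; a NoSuchMethodException here means the core jar on the classpath predates 1.11.106:
import com.amazonaws.AmazonWebServiceClient;
import com.amazonaws.AmazonWebServiceRequest;
public class SdkCoreVersionCheck {
    public static void main(String[] args) throws Exception {
        // Fails with NoSuchMethodException if aws-java-sdk-core is older than 1.11.106.
        AmazonWebServiceClient.class.getDeclaredMethod(
                "beforeClientExecution", AmazonWebServiceRequest.class);
        System.out.println("aws-java-sdk-core loaded from: "
                + AmazonWebServiceClient.class.getProtectionDomain().getCodeSource().getLocation());
    }
}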