I have uploaded small files to an Amazon S3 bucket in Java without any trouble, but when I upload a large file of around 50 MB it takes a very long time; I don't even get an exception, the file simply never uploads. My code is simple:
s3client.putObject(new PutObjectRequest("dev.rivet.media.web", "all.wav", new File(filePath)));
Can anyone please suggest how to overcome this problem?
Alternatively, you can take a look at https://github.com/minio/minio-java
The Minio Java library provides simpler APIs for accessing S3-compatible storage providers.
In this library, putObject manages the file upload automatically by doing a multipart upload internally, and it can also resume from where it left off.
Here is an example program.
import io.minio.MinioClient;
import io.minio.errors.ClientException;
import org.xmlpull.v1.XmlPullParserException;

import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

public class PutObject {
    public static void main(String[] args) throws IOException, XmlPullParserException, ClientException {
        System.out.println("PutObject app");

        // Set the S3 endpoint; the region is worked out automatically
        MinioClient s3Client = new MinioClient("https://s3.amazonaws.com", "YOUR-ACCESSKEYID", "YOUR-SECRETACCESSKEY");

        // Open the file to upload
        File file = new File("C:/java/hello");
        InputStream stream = new FileInputStream(file);

        // Create the object; large files are uploaded in multiple parts internally
        s3Client.putObject("bucketName", "objectName", "application/octet-stream", file.length(), stream);
    }
}
Hope this helps.
I am currently following this example from the Vision API docs (found here):
import com.google.cloud.vision.v1.*;
import com.google.cloud.vision.v1.Feature.Type;
import java.io.File;
import java.io.IOException;
import java.io.PrintStream;
import java.util.ArrayList;
import java.util.List;
public class VisionApiTest {
public static void main(String... args) throws Exception {
PrintStream stream = new PrintStream(new File("src/test.txt"));
detectTextGcs("https://www.w3.org/TR/SVGTiny12/examples/textArea01.png", stream);
}
public static void detectTextGcs(String gcsPath, PrintStream out) throws Exception, IOException {
List<AnnotateImageRequest> requests = new ArrayList<>();
ImageSource imgSource = ImageSource.newBuilder().setGcsImageUri(gcsPath).build();
Image img = Image.newBuilder().setSource(imgSource).build();
Feature feat = Feature.newBuilder().setType(Type.TEXT_DETECTION).build();
AnnotateImageRequest request =
AnnotateImageRequest.newBuilder().addFeatures(feat).setImage(img).build();
requests.add(request);
try (ImageAnnotatorClient client = ImageAnnotatorClient.create()) {
BatchAnnotateImagesResponse response = client.batchAnnotateImages(requests);
List<AnnotateImageResponse> responses = response.getResponsesList();
for (AnnotateImageResponse res : responses) {
if (res.hasError()) {
out.printf("Error: %s\n", res.getError().getMessage());
return;
}
// For full list of available annotations, see http://g.co/cloud/vision/docs
for (EntityAnnotation annotation : res.getTextAnnotationsList()) {
out.printf("Text: %s\n", annotation.getDescription());
out.printf("Position : %s\n", annotation.getBoundingPoly());
}
}
}
}
}
After passing the gcsPath String into the detectTextGcs method in the example, I get the error: "Error: Invalid GCS path specified: https://www.w3.org/TR/SVGTiny12/examples/textArea01.png"
I expect the PrintStream object to write the text contained in the picture to the file, which would be "Tomorrow, and\ntomorrow, and\ntomorrow; blah blah blah...". After trying the API on the Vision API doc page mentioned above, it works fine there, but not within IntelliJ.
Any help is greatly appreciated. Thank you. (Also forgive me if this isn't a well worded question, it's my first time posting)
I actually figured out the problem. It lies in line 3 of the detectTextGcs() method:
ImageSource imgSource = ImageSource.newBuilder().setGcsImageUri(gcsPath).build();
If you would like to use a regular HTTP URI, you must use setImageUri(path) instead of setGcsImageUri(gcsPath).
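Concretely, only the ImageSource line needs to change; the rest of the request building in the example stays the same. A minimal sketch of the corrected lines, using the Vision classes already imported above:

// setImageUri accepts regular HTTP/HTTPS URLs; setGcsImageUri only accepts gs:// paths
ImageSource imgSource = ImageSource.newBuilder()
        .setImageUri("https://www.w3.org/TR/SVGTiny12/examples/textArea01.png")
        .build();
Image img = Image.newBuilder().setSource(imgSource).build();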
Thank you for everyone's help!
Google Cloud Storage (GCS) is a storage system where you can persistently save data as blob storage. In GCS we have the concept of buckets, which are named containers of data, and objects, which are named instances of data. To identify a blob, Google has defined a GCS URL of the form:
gs://[BUCKET_NAME]/[OBJECT_NAME]
In your code, you have specified an HTTP URL where a GCS URL was expected. You must not specify an HTTP URL where a GCS URL is required.
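So the other way to make the original setGcsImageUri call work is to copy the image into a bucket first and reference it by its gs:// URL. A minimal sketch, with a purely hypothetical bucket name:

// Assumes the image was uploaded beforehand, for example:
//   gsutil cp textArea01.png gs://my-example-bucket/
// "my-example-bucket" is a made-up name for illustration.
ImageSource imgSource = ImageSource.newBuilder()
        .setGcsImageUri("gs://my-example-bucket/textArea01.png")
        .build();
Image img = Image.newBuilder().setSource(imgSource).build();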
I'm using the Google Cloud Speech-to-Text API in Java.
I'm getting 0 results when I call speechClient.recognize
pom.xml:
<dependency>
<groupId>com.google.cloud</groupId>
<artifactId>google-cloud-speech</artifactId>
<version>0.80.0-beta</version>
</dependency>
Java code:
import java.io.FileInputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import com.google.api.gax.core.FixedCredentialsProvider;
import com.google.auth.oauth2.GoogleCredentials;
import com.google.cloud.speech.v1.RecognitionAudio;
import com.google.cloud.speech.v1.RecognitionConfig;
import com.google.cloud.speech.v1.RecognitionConfig.AudioEncoding;
import com.google.cloud.speech.v1.RecognizeResponse;
import com.google.cloud.speech.v1.SpeechClient;
import com.google.cloud.speech.v1.SpeechRecognitionAlternative;
import com.google.cloud.speech.v1.SpeechRecognitionResult;
import com.google.cloud.speech.v1.SpeechSettings;
import com.google.protobuf.ByteString;
public class SpeechToText {
public static void main(String[] args) {
// Instantiates a client
try {
String jsonFilePath = System.getProperty("user.dir") + "/serviceaccount.json";
FileInputStream credentialsStream = new FileInputStream(jsonFilePath);
GoogleCredentials credentials = GoogleCredentials.fromStream(credentialsStream);
FixedCredentialsProvider credentialsProvider = FixedCredentialsProvider.create(credentials);
SpeechSettings speechSettings =
SpeechSettings.newBuilder()
.setCredentialsProvider(credentialsProvider)
.build();
SpeechClient speechClient = SpeechClient.create(speechSettings);
//SpeechClient speechClient = SpeechClient.create();
// The path to the audio file to transcribe
String fileName = System.getProperty("user.dir") + "/call-recording-790.opus";
// Reads the audio file into memory
Path path = Paths.get(fileName);
byte[] data = Files.readAllBytes(path);
ByteString audioBytes = ByteString.copyFrom(data);
System.out.println(path.toAbsolutePath());
// Builds the sync recognize request
RecognitionConfig config = RecognitionConfig.newBuilder().setEncoding(AudioEncoding.LINEAR16)
.setSampleRateHertz(8000).setLanguageCode("en-US").build();
RecognitionAudio audio = RecognitionAudio.newBuilder().setContent(audioBytes).build();
System.out.println("recognize builder");
// Performs speech recognition on the audio file
RecognizeResponse response = speechClient.recognize(config, audio);
List<SpeechRecognitionResult> results = response.getResultsList();
System.out.println(results.size()); // ***** HERE 0
for (SpeechRecognitionResult result : results) {
// There can be several alternative transcripts for a given chunk of speech.
// Just use the
// first (most likely) one here.
SpeechRecognitionAlternative alternative = result.getAlternativesList().get(0);
System.out.printf("Transcription: %s%n", alternative.getTranscript());
}
} catch (Exception e) {
System.out.println(e);
}
}
}
In the code above, results.size() is 0. When I upload the same opus file to the demo at https://cloud.google.com/speech-to-text/, it returns the text correctly.
So why is the recognize call giving zero results?
There could be 3 reasons for Speech-to-Text to return an empty response:
Audio is not clear.
Audio is not intelligible.
Audio is not using the proper encoding.
From what I can see, reason 3 is the most likely cause of your issue. To resolve this, check this page to learn how to verify the encoding of your audio file, which must match the parameters you set in RecognitionConfig.
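In this particular case the request declares LINEAR16 at 8000 Hz while the file is an .opus recording, so the decoder finds nothing it can transcribe. A minimal sketch of a config that matches an Ogg/Opus file follows; the 16000 Hz value is only an assumption about how the call was recorded, so set it to the actual sample rate of your file:

// Hypothetical config for an Ogg/Opus file; encoding and sample rate must match the recording
RecognitionConfig config = RecognitionConfig.newBuilder()
        .setEncoding(AudioEncoding.OGG_OPUS)  // instead of LINEAR16
        .setSampleRateHertz(16000)            // assumption: use your file's real rate
        .setLanguageCode("en-US")
        .build();
RecognizeResponse response = speechClient.recognize(config, audio);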
I am creating a simple application where I want to upload a file to my AWS S3 bucket. Here is my code:
import java.io.File;
import java.io.IOException;
import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.fasterxml.jackson.*;
public class UploadFileInBucket {
public static void main(String[] args) throws IOException {
String clientRegion = "<myRegion>";
String bucketName = "<myBucketName>";
String stringObjKeyName = "testobject";
String fileObjKeyName = "testfileobject";
String fileName = "D:\\Attachments\\LICENSE";
try {
BasicAWSCredentials awsCreds = new BasicAWSCredentials("<myAccessKey>", "<mySecretKey>");
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withRegion(clientRegion)
.withCredentials(new AWSStaticCredentialsProvider(awsCreds))
.build();
// Upload a text string as a new object.
s3Client.putObject(bucketName, stringObjKeyName, "Uploaded String Object");
// Upload a file as a new object with ContentType and title specified.
PutObjectRequest request = new PutObjectRequest(bucketName, fileObjKeyName, new File(fileName));
ObjectMetadata metadata = new ObjectMetadata();
metadata.setContentType("plain/text");
metadata.addUserMetadata("x-amz-meta-title", "someTitle");
request.setMetadata(metadata);
s3Client.putObject(request);
}
catch(AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
}
catch(SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
}
}
}
I am unable to upload the file and get the following error:
Exception in thread "main" java.lang.NoSuchFieldError:
ALLOW_FINAL_FIELDS_AS_MUTATORS
at com.amazonaws.partitions.PartitionsLoader.<clinit>(PartitionsLoader.java:52)
at com.amazonaws.regions.RegionMetadataFactory.create(RegionMetadataFactory.java:30)
at com.amazonaws.regions.RegionUtils.initialize(RegionUtils.java:64)
at com.amazonaws.regions.RegionUtils.getRegionMetadata(RegionUtils.java:52)
at com.amazonaws.regions.RegionUtils.getRegion(RegionUtils.java:105)
at com.amazonaws.client.builder.AwsClientBuilder.getRegionObject(AwsClientBuilder.java:249)
at com.amazonaws.client.builder.AwsClientBuilder.withRegion(AwsClientBuilder.java:238)
at UploadFileInBucket.main(UploadFileInBucket.java:28)
I have added the required AWS credentials, bucket permissions, and dependencies to execute this code.
What changes should I make to the code to get my file uploaded to the desired bucket?
It looks as though you either have the wrong version of the Jackson libraries or are somehow linking against multiple versions of them.
The AWS SDK for Java distribution contains a third-party/lib directory holding the correct versions of all the libraries that version of the SDK should be built with. Depending on which features of the SDK you use, you may not need all of them, but those are the specific third-party libraries you should be using.
You need to add Jackson to your classpath; its classes are missing.
I don't know which version you need, but you can download it from the project's GitHub page: https://github.com/FasterXML/jackson/
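If you are pulling the AWS SDK in through Maven instead (as the Speech-to-Text example earlier in this page does), the usual fix is to declare jackson-databind explicitly. This is only a sketch and the version shown is a placeholder, so align it with whatever Jackson version your AWS SDK release was built against:

<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-databind</artifactId>
<!-- placeholder version: match the Jackson version your AWS SDK expects -->
<version>2.9.8</version>
</dependency>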
Trying to run the RecognizeUsingWebSocketsExample provided with the IBM Watson Speech to Text Java SDK, but it fails to create a valid RecognizeOptions object for the sample .wav file provided with the distribution:
Exception in thread "main" java.lang.IllegalArgumentException: When using PCM the audio rate should be specified.
at com.ibm.watson.developer_cloud.util.Validator.isTrue(Validator.java:38)
at com.ibm.watson.developer_cloud.speech_to_text.v1.RecognizeOptions$Builder.contentType(RecognizeOptions.java:95)
at com.ibm.watson.developer_cloud.speech_to_text.v1.RecognizeUsingWebSocketsExample.main(RecognizeUsingWebSocketsExample.java:30)
It appears that the contentType(HttpMediaType.AUDIO_WAV) is being misinterpreted as RAW. Here's the actual (unmodified from distro) code:
package com.ibm.watson.developer_cloud.speech_to_text.v1;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import com.ibm.watson.developer_cloud.http.HttpMediaType;
import com.ibm.watson.developer_cloud.speech_to_text.v1.model.SpeechResults;
import com.ibm.watson.developer_cloud.speech_to_text.v1.websocket.BaseRecognizeCallback;
/**
* Recognize using WebSockets a sample wav file and print the transcript into the console output.
*/
public class RecognizeUsingWebSocketsExample {
private static CountDownLatch lock = new CountDownLatch(1);
public static void main(String[] args) throws FileNotFoundException, InterruptedException {
SpeechToText service = new SpeechToText();
service.setUsernameAndPassword("<username>", "<password>");
FileInputStream audio = new FileInputStream("src/test/resources/speech_to_text/sample1.wav");
RecognizeOptions options = new RecognizeOptions.Builder()
.continuous(true)
.interimResults(true)
.contentType(HttpMediaType.AUDIO_WAV)
.build();
service.recognizeUsingWebSocket(audio, options, new BaseRecognizeCallback() {
@Override
public void onTranscription(SpeechResults speechResults) {
System.out.println(speechResults);
if (speechResults.isFinal())
lock.countDown();
}
});
lock.await(1, TimeUnit.MINUTES);
}
}
I'm using 3.0.0-RC2 snapshot. No problems running examples which do not use RecognizeOptions, like SpeechToTextExample. Thx.
-rg
Sorry, false alarm. I recreated the example project from scratch and it compiled and ran without a hitch. Must have been some weirdness with my Eclipse setup.
I am trying to read a Microsoft Word file through Java. I have included all the .jar files from Apache poi-3.8-beta1 in my classpath. However, when I run this, I get the following exception:
org.apache.poi.poifs.filesystem.OfficeXmlFileException: The supplied data appears to be in the Office 2007+ XML. You are calling the part of POI that deals with OLE2 Office Documents. You need to call a different part of POI to process this data (eg XSSF instead of HSSF)
at org.apache.poi.poifs.storage.HeaderBlock.<init>(HeaderBlock.java:131)
at org.apache.poi.poifs.storage.HeaderBlock.<init>(HeaderBlock.java:104)
at org.apache.poi.poifs.filesystem.POIFSFileSystem.<init>(POIFSFileSystem.java:138)
at readingmsword07.Main.main(Main.java:27)
Following is my code:
import org.apache.poi.xwpf.extractor.XWPFWordExtractor;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.*;
import org.apache.poi.poifs.filesystem.POIFSFileSystem;
import org.apache.poi.xwpf.usermodel.XWPFDocument;
public class Main {
public static void main(String[] args) {
try {
FileInputStream fis = new FileInputStream("C:\\TrialDoc.docx");
POIFSFileSystem fileSystem = new POIFSFileSystem(fis);
org.apache.poi.xwpf.extractor.XWPFWordExtractor oleTextExtractor =
new XWPFWordExtractor(new XWPFDocument(fis));
System.out.print(oleTextExtractor.getText());
} catch (Exception e) {
e.printStackTrace();
}
}
}
I am using XWPFWordExtractor since I am trying to read a Word 2007 document, but for some reason I am unable to figure out which part of POI deals with this.
Any help is much appreciated. Thanks in advance!
~ Woods
Remove this line:
POIFSFileSystem fileSystem = new POIFSFileSystem(fis);
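POIFSFileSystem is the entry point for the old OLE2 (.doc) format, whereas a .docx file is OOXML and is opened directly with XWPFDocument; constructing the POIFSFileSystem also consumes the stream before XWPFDocument gets a chance to read it. A minimal sketch of the remaining code, keeping the file path from the question:

import java.io.FileInputStream;

import org.apache.poi.xwpf.extractor.XWPFWordExtractor;
import org.apache.poi.xwpf.usermodel.XWPFDocument;

public class Main {
    public static void main(String[] args) {
        try {
            // .docx is OOXML, so hand the stream straight to XWPFDocument
            FileInputStream fis = new FileInputStream("C:\\TrialDoc.docx");
            XWPFWordExtractor extractor = new XWPFWordExtractor(new XWPFDocument(fis));
            System.out.print(extractor.getText());
            fis.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}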