History for context:
I am trying to run a WebJob from an HTTP client. The file is a ZIP file and contains a Java class plus a .bat file to run that Java class. This runs okay when I do it from Postman, but when I use an HTTP client I always get the following error: "'---i-NPsGbTVUpaP0CeJxMQVrHoDHvaxo3' is not recognized as an internal or external command" - Please help – Jagaran yesterday
@Jagaran if it only happens from some clients, it is likely unrelated. Please ask a new question – David Ebbo 21 hours ago
No, whichever HTTP client I use in Java, it is the same. It works in cURL or when uploading from the web console. My sample code is below – Jagaran 2 hours ago
Do you have any sample Java-based HTTP client with which I can publish an Azure WebJob? I have tried all the Java REST clients I could find.
Maybe I am doing something wrong. The error I get in the Azure console is: '---i-NPsGbTVUpaP0CeJxMQVrHoDHvaxo3' is not recognized as an internal or external command, [08/25/2017 09:30:22 > e7f683: ERR ] operable program or batch file.
I suspect the Content-Type: application/zip header is not being set correctly when the request is sent from Java. Please help us.
Sample Code:
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.MalformedURLException;
import java.net.URI;
import java.net.URL;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;
import org.apache.http.entity.ContentType;
import com.mashape.unirest.http.HttpResponse;
import com.mashape.unirest.http.Unirest;
/**
* @author jagaran.das
*
*/
public class AIPHTTPClient {
/**
* @param args
* @throws IOException
*/
@SuppressWarnings({ "unused", "rawtypes" })
public static void main(String[] args) throws IOException {
try {
URI uri = new AIPHTTPClient().getURI();
HttpResponse<InputStream> jsonResponse = Unirest.put("https://<URL>/api/triggeredwebjobs/TestJOb")
.basicAuth("$AzureWebJobTestBRMS", "XXXXX")
.header("content-disposition","attachement; filename=acvbgth.bat")
.field("file", new FileInputStream(new File(uri))
,ContentType.create("content-type: application/zip"),"AzureWebJob.zip").asBinary();
System.out.println(jsonResponse.getStatusText());
} catch (Exception e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
public InputStream readZip() {
ZipFile zipFile = null;
InputStream stream = null;
try {
// Open the zip first, then look up the entry for the batch file
zipFile = new ZipFile("/Users/jagaran.das/Documents/work/AIP/AzureWebJob.zip");
ZipEntry zipEntry = zipFile.getEntry("run.bat");
stream = zipFile.getInputStream(zipEntry);
} catch (IOException e) {
e.printStackTrace();
}
return stream;
}
public URI getURI() throws MalformedURLException {
File file = new File("/Users/jagaran.das/Documents/work/AIP/azure-poc/AzureWebJob.zip");
URI fileUri = file.toURI();
System.out.println("URI:" + fileUri);
URL fileUrl = file.toURI().toURL();
System.out.println("URL:" + fileUrl);
URL fileUrlWithoutSpecialCharacterHandling = file.toURL();
System.out.println("URL (no special character handling):" + fileUrlWithoutSpecialCharacterHandling);
return fileUri;
}
}
I've been a little too harsh in my answer before really trying stuff out. Apologies. I've now tried out your snippet, and it looks like you're hitting an issue with Unirest - probably this one.
My advice would be to just move to Apache's HTTP library.
Here's a working sample:
import org.apache.http.HttpEntity;
import org.apache.http.HttpResponse;
import org.apache.http.auth.AuthScope;
import org.apache.http.auth.UsernamePasswordCredentials;
import org.apache.http.client.CredentialsProvider;
import org.apache.http.client.HttpClient;
import org.apache.http.client.entity.EntityBuilder;
import org.apache.http.client.methods.HttpPut;
import org.apache.http.impl.client.BasicCredentialsProvider;
import org.apache.http.impl.client.HttpClientBuilder;
import org.apache.http.util.EntityUtils;
import java.io.File;
public class App
{
public static void main( String[] args )
{
File sourceZipFile = new File("webjob.zip");
String kuduApiUrl = "https://yoursitename.scm.azurewebsites.net/api/zip/site/wwwroot/app_data/jobs/triggered/job988/";
HttpEntity httpEntity = EntityBuilder.create()
.setFile(sourceZipFile)
.build();
CredentialsProvider provider = new BasicCredentialsProvider();
UsernamePasswordCredentials credentials = new UsernamePasswordCredentials(
"$yoursitename", "SiteLevelPasSw0rD"
);
provider.setCredentials(AuthScope.ANY, credentials);
HttpClient client = HttpClientBuilder.create()
.setDefaultCredentialsProvider(provider)
.build();
HttpPut putRequest = new HttpPut(kuduApiUrl);
putRequest.setEntity(httpEntity);
// Kudu's Zip API expects application/zip
putRequest.setHeader("Content-type", "application/zip");
try {
HttpResponse response = client.execute(putRequest);
int statusCode = response.getStatusLine().getStatusCode();
HttpEntity entity = response.getEntity();
String resBody = EntityUtils.toString(entity, "UTF-8");
System.out.println(statusCode);
System.out.println(resBody);
}
catch (Exception e) {
e.printStackTrace();
}
}
}
That's sending Content-Type: application/zip and the raw zip contents in the body (no multipart horse manure). I've probably over-engineered the sample, but it is what it is.
The upload is successful and the WebJob is published.
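For reference, the same raw-zip PUT can be done with nothing but the JDK, in case you'd rather avoid a dependency. A minimal sketch, assuming the same hypothetical Kudu URL and site-level credentials as the sample above:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Base64;

public class RawZipPut {
    public static void main(String[] args) throws Exception {
        // Same hypothetical site name, job path and credentials as the sample above
        byte[] zipBytes = Files.readAllBytes(Paths.get("webjob.zip"));
        URL url = new URL("https://yoursitename.scm.azurewebsites.net/api/zip/site/wwwroot/app_data/jobs/triggered/job988/");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("PUT");
        conn.setDoOutput(true);
        String auth = Base64.getEncoder()
                .encodeToString("$yoursitename:SiteLevelPasSw0rD".getBytes(StandardCharsets.UTF_8));
        conn.setRequestProperty("Authorization", "Basic " + auth);
        conn.setRequestProperty("Content-Type", "application/zip"); // raw zip body, no multipart
        try (OutputStream out = conn.getOutputStream()) {
            out.write(zipBytes);
        }
        System.out.println(conn.getResponseCode());
    }
}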
Glad that you have solved the issue; I will provide a workaround for your reference.
To deploy a WebJob to Azure, in addition to using the REST API, you can also use FTP. Of course, the premise is that you know the directory the WebJob is uploaded to, which you can find via Kudu.
I offer you the snippet of code below using the FTP4J library:
import java.io.File;
import it.sauronsoftware.ftp4j.FTPClient;
public class UploadFileByFTP {
private static String hostName = <your host name>;
private static String userName = <user name>;
private static String password = <password>;
public static void main(String[] args) {
try {
// create client
FTPClient client = new FTPClient();
// connect host
client.connect(hostName);
// log in
client.login(userName, password);
// print address
System.out.println(client);
// change directory
client.changeDirectory("/site/wwwroot/App_Data/jobs/continuous");
// current directory
String dir = client.currentDirectory();
System.out.println(dir);
File file = new File("D:/test.zip");
client.upload(file);
} catch (Exception e) {
e.printStackTrace();
}
}
}
You can follow this tutorial to configure your parameters.
Related
I am able to call AWS Textract to read an image from my local path. How can I integrate this Textract code with the S3 bucket code below, so that it reads an image uploaded to a newly created S3 bucket?
Working Textract code that extracts text from an image on a local path:
package aws.cloud.work;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileWriter;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.io.InputStream;
import org.json.simple.JSONArray;
import org.json.simple.JSONObject;
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.textract.AmazonTextract;
import com.amazonaws.services.textract.AmazonTextractClientBuilder;
import com.amazonaws.services.textract.model.DetectDocumentTextRequest;
import com.amazonaws.services.textract.model.DetectDocumentTextResult;
import com.amazonaws.services.textract.model.Document;
import com.amazonaws.util.IOUtils;
public class TextractDemo {
static AmazonTextractClientBuilder clientBuilder = AmazonTextractClientBuilder.standard()
.withRegion(Regions.US_EAST_1);
private static FileWriter file;
public static void main(String[] args) throws IOException {
//AWS Credentials to access AWS Textract services
clientBuilder.setCredentials(new AWSStaticCredentialsProvider(
new BasicAWSCredentials("Access Key", "Secret key")));
//Set the path of the image to be processed by Textract. Can be configured to read from S3 instead
String document="C:\\Users\\image-local-path\\sampleTT.jpg";
ByteBuffer imageBytes;
//Code to use AWS Textract services
try (InputStream inputStream = new FileInputStream(new File(document))) {
imageBytes = ByteBuffer.wrap(IOUtils.toByteArray(inputStream));
}
AmazonTextract client = clientBuilder.build();
DetectDocumentTextRequest request = new DetectDocumentTextRequest()
.withDocument(new Document().withBytes(imageBytes));
DetectDocumentTextResult result = client.detectDocumentText(request);
System.out.println(result);
JSONObject obj = new JSONObject();
result.getBlocks().forEach(block -> {
if (block.getBlockType().equals("LINE"))
System.out.println("text is " + block.getText() + " confidence is " + block.getConfidence());
JSONArray fields = new JSONArray();
fields.add(block.getText() + " , " + block.getConfidence());
obj.put(block.getText(), fields);
});
//Write the results as JSON to the output file (sample.txt) and print them to the console
try {
file = new FileWriter("/Users/output-path/sample.txt");
file.write(obj.toJSONString());
} catch (IOException e) {
e.printStackTrace();
} finally {
try {
file.flush();
file.close();
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}
}
This is an example of the console output, where the "text" and corresponding "confidence scores" are returned.
This is the S3 bucket integration code I managed to find in the docs:
String document = "sampleTT.jpg";
String bucket = "textract-images";
AmazonS3 s3client = AmazonS3ClientBuilder.standard()
.withEndpointConfiguration(
new EndpointConfiguration("https://s3.amazonaws.com","us-east-1"))
.build();
// Get the document from S3
com.amazonaws.services.s3.model.S3Object s3object = s3client.getObject(bucket, document);
S3ObjectInputStream inputStream = s3object.getObjectContent();
BufferedImage image = ImageIO.read(inputStream);
(Edited) - Thanks @smac2020. I currently have working Rekognition code that reads from the S3 bucket in my AWS console and runs the Rekognition services I am referring to. However, I am unable to modify it and merge it with the Textract source code:
package com.amazonaws.samples;
import com.amazonaws.auth.AWSCredentialsProvider;
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.rekognition.AmazonRekognition;
import com.amazonaws.services.rekognition.AmazonRekognitionClientBuilder;
import com.amazonaws.services.rekognition.model.AmazonRekognitionException;
import com.amazonaws.services.rekognition.model.DetectLabelsRequest;
import com.amazonaws.services.rekognition.model.DetectLabelsResult;
import com.amazonaws.services.rekognition.model.Image;
import com.amazonaws.services.rekognition.model.Label;
import com.amazonaws.services.rekognition.model.S3Object;
import java.util.List;
public class DetectLabels {
public static void main(String[] args) throws Exception {
String photo = "sampleTT.jpg";
String bucket = "Textract-bucket";
// AmazonRekognition rekognitionClient = AmazonRekognitionClientBuilder.standard().withRegion("ap-southeast-1").build();
AWSCredentialsProvider credentialsProvider = new AWSStaticCredentialsProvider (new BasicAWSCredentials("Access Key", "Secret Key"));
AmazonRekognition rekognitionClient = AmazonRekognitionClientBuilder.standard().withCredentials(credentialsProvider).withRegion("ap-southeast-1").build();
DetectLabelsRequest request = new DetectLabelsRequest()
.withImage(new Image()
.withS3Object(new S3Object()
.withName(photo).withBucket(bucket)))
.withMaxLabels(10)
.withMinConfidence(75F);
try {
DetectLabelsResult result = rekognitionClient.detectLabels(request);
List <Label> labels = result.getLabels();
System.out.println("Detected labels for " + photo);
for (Label label: labels) {
System.out.println(label.getName() + ": " + label.getConfidence().toString());
}
} catch(AmazonRekognitionException e) {
e.printStackTrace();
}
}
}
Looks like you are trying to read an Amazon S3 object from a Spring Boot app and then pass that byte array to DetectDocumentTextRequest.
There is a tutorial that shows a very similar use case, where a Spring Boot app reads the bytes from an Amazon S3 object and passes them to the Amazon Rekognition service (instead of Textract).
The Java code is:
// Get the byte[] from this AWS S3 object.
public byte[] getObjectBytes (String bucketName, String keyName) {
s3 = getClient();
try {
GetObjectRequest objectRequest = GetObjectRequest
.builder()
.key(keyName)
.bucket(bucketName)
.build();
ResponseBytes<GetObjectResponse> objectBytes = s3.getObjectAsBytes(objectRequest);
byte[] data = objectBytes.asByteArray();
return data;
} catch (S3Exception e) {
System.err.println(e.awsErrorDetails().errorMessage());
System.exit(1);
}
return null;
}
See this AWS development article to see how to build a Spring BOOT app that has this functionality.
Creating an example AWS photo analyzer application using the AWS SDK for Java
This example uses the AWS SDK For Java V2. If you are not familiar with working with the latest SDK version, I recommend that you start here:
Get started with the AWS SDK for Java 2.x
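If you stay on the v1 SDK that your working Textract code already uses, note that Textract can also read the document straight from S3, so you do not have to download the bytes yourself. A minimal sketch, reusing the bucket and file names from your snippets (credentials handling omitted, as in your code):

import com.amazonaws.regions.Regions;
import com.amazonaws.services.textract.AmazonTextract;
import com.amazonaws.services.textract.AmazonTextractClientBuilder;
import com.amazonaws.services.textract.model.DetectDocumentTextRequest;
import com.amazonaws.services.textract.model.DetectDocumentTextResult;
import com.amazonaws.services.textract.model.Document;
import com.amazonaws.services.textract.model.S3Object;

public class TextractFromS3 {
    public static void main(String[] args) {
        AmazonTextract client = AmazonTextractClientBuilder.standard()
                .withRegion(Regions.US_EAST_1)
                .build();
        // Point the Document at the S3 object instead of passing raw bytes
        DetectDocumentTextRequest request = new DetectDocumentTextRequest()
                .withDocument(new Document().withS3Object(
                        new S3Object().withBucket("textract-images").withName("sampleTT.jpg")));
        DetectDocumentTextResult result = client.detectDocumentText(request);
        result.getBlocks().forEach(block -> {
            if (block.getBlockType().equals("LINE"))
                System.out.println("text is " + block.getText()
                        + " confidence is " + block.getConfidence());
        });
    }
}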
I am trying to consume Twitter streams with the help of a Java Kafka application.
I have created a Twitter developer account and a Twitter application, and generated all 4 required keys.
Please find the code below:
package com.github.simpleanand.kafkabeginner.tutorial2;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import org.apache.http.HttpHost;
import org.apache.http.conn.params.ConnRoutePNames;
import org.apache.http.impl.client.DefaultHttpClient;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.google.common.collect.Lists;
import com.twitter.hbc.ClientBuilder;
import com.twitter.hbc.core.Client;
import com.twitter.hbc.core.Constants;
import com.twitter.hbc.core.Hosts;
import com.twitter.hbc.core.HttpHosts;
import com.twitter.hbc.core.endpoint.StatusesFilterEndpoint;
import com.twitter.hbc.core.processor.StringDelimitedProcessor;
import com.twitter.hbc.httpclient.auth.Authentication;
import com.twitter.hbc.httpclient.auth.OAuth1;
public class TwitterProducer {
private Logger logger = LoggerFactory.getLogger(TwitterProducer.class);
private String consumerKey = "<consumer-key>";
private String consumerSecret = "<consumerSecret>";
private String token = "<token value>";
private String secret = "<secret>";
public TwitterProducer() {
}
public static void main(String[] args) {
new TwitterProducer().run();
}
public void run() {
logger.info("inside run........");
// create a twitter client
/**
* Set up your blocking queues: Be sure to size these properly based on
* expected TPS of your stream
*/
BlockingQueue<String> msgQueue = new LinkedBlockingQueue<String>(100000);
Client client = createTwitterClient(msgQueue);
client.connect();
// create a kafka producer
// loop to send tweets to kafka
// on a different thread, or multiple different threads....
while (!client.isDone()) {
String msg = null;
try {
msg = msgQueue.poll(5, TimeUnit.SECONDS);
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
client.stop();
}
if (null != msg) {
logger.info("Msg -> " + msg);
}
}
logger.info("end of application........");
}
public Client createTwitterClient(BlockingQueue<String> msgQueue) {
HttpHost proxy = new HttpHost("<compayny proxy value>", 8080);
DefaultHttpClient httpClient = new DefaultHttpClient();
httpClient.getParams().setParameter(ConnRoutePNames.DEFAULT_PROXY, proxy);
/**
* Declare the host you want to connect to, the endpoint, and
* authentication (basic auth or oauth)
*/
Hosts hosebirdHosts = new HttpHosts(Constants.STREAM_HOST);
StatusesFilterEndpoint hosebirdEndpoint = new StatusesFilterEndpoint();
// Optional: set up some followings and track terms
// List<Long> followings = Lists.newArrayList(1234L, 566788L);
List<String> terms = Lists.newArrayList("bitcoin");
// hosebirdEndpoint.followings(followings);
hosebirdEndpoint.trackTerms(terms);
// These secrets should be read from a config file
Authentication hosebirdAuth = new OAuth1(consumerKey, consumerSecret, token, secret);
hosebirdAuth.setupConnection(httpClient);
// Creating a client:
ClientBuilder builder = new ClientBuilder()
.name("Hosebird-Client-01") // optional: mainly for the logs
.hosts(hosebirdHosts)
.authentication(hosebirdAuth)
.endpoint(hosebirdEndpoint)
.processor(new StringDelimitedProcessor(msgQueue));
Client hosebirdClient = builder.build();
// Attempts to establish a connection.
return hosebirdClient;
}
}
I'm getting the following error:
[hosebird-client-io-thread-0] INFO com.twitter.hbc.httpclient.ClientBase - Hosebird-Client-01 Establishing a connection
[hosebird-client-io-thread-0] WARN com.twitter.hbc.httpclient.ClientBase - Hosebird-Client-01 Unknown host - stream.twitter.com
[hosebird-client-io-thread-0] WARN com.twitter.hbc.httpclient.ClientBase - Hosebird-Client-01 failed to establish connection properly
[hosebird-client-io-thread-0] INFO com.twitter.hbc.httpclient.ClientBase - Hosebird-Client-01 Done processing, preparing to close connection
Please advise on how to solve this error.
It seems there is an issue with how the proxy object is being passed. Is that correct?
I have this sample code written in Java that explains how to call a method of a REST Api:
import java.io.File;
import java.io.IOException;
import java.io.InputStream;
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.Entity;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import org.apache.commons.io.FileUtils;
import org.glassfish.jersey.media.multipart.FormDataMultiPart;
import org.glassfish.jersey.media.multipart.MultiPartFeature;
import org.glassfish.jersey.media.multipart.file.FileDataBodyPart;
public class Test {
public static void main(String[] args) {
String alias = "ABCD";
String pin = "012345";
String originFileName = "C:\\file.pdf";
String destinationFilename = "C:\\file2.pdf";
String urlService = "https://serviceUrl";
Client client = ClientBuilder.newBuilder().register(MultiPartFeature.class).build();
FormDataMultiPart form = new FormDataMultiPart();
form.field("pin", pin);
form.bodyPart(new FileDataBodyPart("content", new File(originFileName)));
Response response = client.target(urlService).path("/auto/action/name/" + alias).request(MediaType.MULTIPART_FORM_DATA).post(Entity.entity(form, form.getMediaType()));
if (response.getStatus() == 200) {
InputStream file = response.readEntity(InputStream.class);
File targetFile = new File(destinationFilename);
try {
FileUtils.copyInputStreamToFile(file, targetFile);
System.out.print("Success");
} catch (IOException e) {
e.printStackTrace();
System.out.print("Error");
}
} else {
System.out.print("Error:" + response.readEntity(String.class));
}
}
}
In my application I've converted it to something like this:
Dim userAlias As String = "ABCD"
Dim pin As HttpContent = New StringContent("012345")
Dim content As HttpContent = New StreamContent(File.OpenRead(originFileName))
Using client = New HttpClient()
client.BaseAddress = New Uri(urlService)
Using formData = New MultipartFormDataContent()
formData.Add(pin, "pin", "pin")
formData.Add(content, "test", "test")
Dim response = client.PostAsync("/auto/sign/pades/" + userAlias, formData).Result
If response.StatusCode = 200 Then
Return response.Content.ReadAsStreamAsync().Result
Else
MessageBox.Show(response.ReasonPhrase)
Return Nothing
End If
End Using
End Using
As a result I get a 500 Internal Server Error.
I checked the service url and it's correct, so I guess I'm doing something wrong in the MultipartFormDataContent creation.
I found out that the error was caused by a typo in the name of the parameter I was uploading.
The way I created the MultipartFormDataContent was actually correct.
Thanks to @Chillzy for the suggestion that made me rewrite my code and find the typo.
I have written the following code using the Utgard OPC library.
I need to read data from an OPC server once every 15 seconds. However, I'm not sure if this is the most optimal way to implement it. In my scenario I need to read upwards of 300 tags from the server.
Any suggestions?
package opcClientSalem;
import java.util.concurrent.Executors;
import org.jinterop.dcom.common.JIException;
//import org.jinterop.dcom.core.JIVariant;
import org.openscada.opc.lib.common.ConnectionInformation;
import org.openscada.opc.lib.common.NotConnectedException;
import org.openscada.opc.lib.da.AccessBase;
import org.openscada.opc.lib.da.AddFailedException;
import org.openscada.opc.lib.da.AutoReconnectController;
import org.openscada.opc.lib.da.DataCallback;
import org.openscada.opc.lib.da.DuplicateGroupException;
import org.openscada.opc.lib.da.Item;
import org.openscada.opc.lib.da.ItemState;
import org.openscada.opc.lib.da.Server;
import org.openscada.opc.lib.da.SyncAccess;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.UnsupportedEncodingException;
import org.apache.http.HttpResponse;
import org.apache.http.client.ClientProtocolException;
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.StringEntity;
import org.apache.http.impl.client.DefaultHttpClient;
public class opcClientSalem {
public static void main(String[] args) throws Exception {
// create connection information
System.out.println("**********Initializing OPC Client**********");
java.util.logging.Logger.getLogger("org.jinterop").setLevel(java.util.logging.Level.OFF);
final ConnectionInformation ci = new ConnectionInformation("myusername","mypassword");
ci.setHost("myhost");
ci.setDomain("");
ci.setProgId("Matrikon.OPC.Simulation.1");
ci.setClsid("F8582CF2-88FB-11D0-B850-00C0F0104305");
String itemIdArr[] = {"Random.Real8","Random.Int2"}; // This is where I would have an array of all items
// create a new server
final Server server = new Server(ci, Executors.newSingleThreadScheduledExecutor());
AutoReconnectController controller = new AutoReconnectController(server);
try {
// connect to server
System.out.println("**********Attempting to connect to OPC**********");
controller.connect();
System.out.println("**********Successfully connected to OPC**********");
// add sync access, poll every 15000 ms
final AccessBase access = new SyncAccess(server, 15000);
while(true){
for(final String str : itemIdArr){
access.addItem(str, new DataCallback() {
@Override
public void changed(Item item, ItemState state) {
// Building a JSON string with the value received
String value = String.valueOf(state.getValue()); // assumes the item state's variant renders usably as a string
String record = "[ {" +"\""+"name"+"\" :\""+str + "\",\""+"timestamp"+"\" :"+ state.getTimestamp().getTime().getTime()+ ",\""+"value"+"\" : "+value.replace("[", "").replace("]", "") +",\"tags\":{\"test\":\"test1\"}} ]";
try {
// Post JSON string to my API which ingests this data
new opcClientSalem().restpost(record);
} catch (ClientProtocolException e) {
// TODO Auto-generated catch block
e.printStackTrace();
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
});
}
// start reading
access.bind();
Thread.sleep(5000);
}
// wait a little bit
// stop reading
//access.unbind();
} catch (final JIException e) {
//System.out.println(String.format("%08X: %s", e.getErrorCode(), server.getErrorMessage(e.getErrorCode())));
}
}
private void restpost(String record) throws ClientProtocolException, IOException{
HttpClient client = new DefaultHttpClient();
HttpPost post = new HttpPost("http://localhost/myapi/datapoints");
StringEntity input = new StringEntity(record);
post.setEntity(input);
HttpResponse response = client.execute(post);
System.out.println("Post success::"+record);
}
}
I'm not sure you need to add the items over and over again in your while loop.
In other libraries (.NET or native C++) you usually need to add the items only once, and the callback is called whenever the value of the item changes.
In .NET or C++ we get a global callback per group, which seems more effective than individual callbacks per item. Maybe SyncAccess has some global callback; look for it.
So the possible optimizations are:
remove the while part, add the items only once, and let the thread sleep indefinitely (see the sketch below);
look for a global callback for all items.
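Here is a minimal sketch of the first suggestion, built from the code in the question (the item IDs, the 15-second poll period, and the callback shape are taken from there):

// Add each item once, bind, then keep the main thread alive.
final AccessBase access = new SyncAccess(server, 15000);
for (final String str : itemIdArr) {
    access.addItem(str, new DataCallback() {
        @Override
        public void changed(Item item, ItemState state) {
            // Handle the changed value here, e.g. post it to your REST API
            System.out.println(str + " -> " + state.getValue());
        }
    });
}
access.bind(); // start reading
Thread.sleep(Long.MAX_VALUE); // keep the thread alive; call access.unbind() on shutdown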
You should create a subscription in this case.
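If your Utgard build ships an asynchronous access class, subscription-style reads look roughly like the sketch below. Note this is an assumption to verify against your library version: Async20Access (async I/O 2.0) appears in common Utgard distributions, but check that yours includes it.

// Sketch: subscription-style access instead of synchronous polling.
// Async20Access(server, updateRate, initialRefresh) is assumed to be available.
final AccessBase access = new Async20Access(server, 15000, false);
access.addItem("Random.Real8", new DataCallback() {
    @Override
    public void changed(Item item, ItemState state) {
        // Invoked when the server reports a change; no client-side polling loop
        System.out.println("Random.Real8 -> " + state.getValue());
    }
});
access.bind();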
I have been trying for a long time to upload a file to Google Cloud Storage using Java. By googling I have found this code, but I can't understand it exactly. Can anyone please customize this to upload a file to GCS?
// Given
InputStream inputStream; // object data, e.g., FileInputStream
long byteCount; // size of input stream
InputStreamContent mediaContent = new InputStreamContent("application/octet-stream", inputStream);
// Knowing the stream length allows server-side optimization, and client-side progress
// reporting with a MediaHttpUploaderProgressListener.
mediaContent.setLength(byteCount);
StorageObject objectMetadata = null;
if (useCustomMetadata) {
// If you have custom settings for metadata on the object you want to set
// then you can allocate a StorageObject and set the values here. You can
// leave out setBucket(), since the bucket is in the insert command's
// parameters.
objectMetadata = new StorageObject()
.setName("myobject")
.setMetadata(ImmutableMap.of("key1", "value1", "key2", "value2"))
.setAcl(ImmutableList.of(
new ObjectAccessControl().setEntity("domain-example.com").setRole("READER"),
new ObjectAccessControl().setEntity("user-administrator#example.com").setRole("OWNER")
))
.setContentDisposition("attachment");
}
Storage.Objects.Insert insertObject = storage.objects().insert("mybucket", objectMetadata,
mediaContent);
if (!useCustomMetadata) {
// If you don't provide metadata, you will have to specify the object
// name by parameter. You will probably also want to ensure that your
// default object ACLs (a bucket property) are set appropriately:
// https://developers.google.com/storage/docs/json_api/v1/buckets#defaultObjectAcl
insertObject.setName("myobject");
}
// For small files, you may wish to call setDirectUploadEnabled(true), to
// reduce the number of HTTP requests made to the server.
if (mediaContent.getLength() > 0 && mediaContent.getLength() <= 2 * 1000 * 1000 /* 2MB */) {
insertObject.getMediaHttpUploader().setDirectUploadEnabled(true);
}
insertObject.execute();
The recommended way to use Google Cloud Storage from Java is to use the Cloud Storage Client Libraries.
The GitHub page for this client gives several examples and resources to learn how to use it properly.
It also gives this code sample as an example of how to upload objects to Google Cloud Storage using the client library:
import static java.nio.charset.StandardCharsets.UTF_8;
import com.google.cloud.storage.Blob;
import com.google.cloud.storage.BlobId;
import com.google.cloud.storage.BlobInfo;
import com.google.cloud.storage.Bucket;
import com.google.cloud.storage.BucketInfo;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;
// Create your service object
Storage storage = StorageOptions.getDefaultInstance().getService();
// Create a bucket
String bucketName = "my_unique_bucket"; // Change this to something unique
Bucket bucket = storage.create(BucketInfo.of(bucketName));
// Upload a blob to the newly created bucket
BlobId blobId = BlobId.of(bucketName, "my_blob_name");
BlobInfo blobInfo = BlobInfo.newBuilder(blobId).setContentType("text/plain").build();
Blob blob = storage.create(blobInfo, "a simple blob".getBytes(UTF_8));
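To upload an actual file from disk rather than an in-memory string, the same client library works. A minimal sketch, assuming a hypothetical local path and the bucket created above:

import java.nio.file.Files;
import java.nio.file.Paths;
import com.google.cloud.storage.BlobId;
import com.google.cloud.storage.BlobInfo;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;

public class UploadLocalFile {
    public static void main(String[] args) throws Exception {
        Storage storage = StorageOptions.getDefaultInstance().getService();
        // Hypothetical local path; the bucket name is the one created above
        byte[] bytes = Files.readAllBytes(Paths.get("/path/to/local-file.pdf"));
        BlobId blobId = BlobId.of("my_unique_bucket", "local-file.pdf");
        BlobInfo blobInfo = BlobInfo.newBuilder(blobId)
                .setContentType("application/pdf")
                .build();
        storage.create(blobInfo, bytes); // uploads the file contents as a blob
        System.out.println("Uploaded to gs://my_unique_bucket/local-file.pdf");
    }
}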
I also tried using the GCS client, but it did not work for me. Finally, I did it using the ServletFileUpload class. Below is the code I wrote to create a Google Cloud Storage bucket and upload the file selected by the user to that bucket:
package com1.KT1;
import java.io.BufferedInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;
import javax.servlet.ServletException;
import javax.servlet.ServletOutputStream;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.apache.commons.fileupload.FileItem;
import org.apache.commons.fileupload.FileUploadException;
import org.apache.commons.fileupload.disk.DiskFileItemFactory;
import org.apache.commons.fileupload.servlet.ServletFileUpload;
import org.apache.commons.io.IOUtils;
import com.google.cloud.storage.Bucket;
import com.google.cloud.storage.BucketInfo;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;
import com.google.cloud.storage.Blob;
import com.google.cloud.storage.BlobId;
public class TestBucket extends HttpServlet {
private static final long serialVersionUID = 1L;
protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
// TODO Auto-generated method stub
response.getWriter().append("Served at: ").append(request.getContextPath());
}
protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
// TODO Auto-generated method stub
// Instantiates a client
String targetFileStr ="";
List<FileItem> fileName = null;
Storage storage = StorageOptions.getDefaultInstance().getService();
// The name for the new bucket
String bucketName = "vendor-bucket13"; // "my-new-bucket";
// Creates the new bucket
Bucket bucket = storage.create(BucketInfo.of(bucketName));
//Object requestedFile = request.getParameter("filename");
ServletFileUpload sfu = new ServletFileUpload(new DiskFileItemFactory());
try {
fileName = sfu.parseRequest(request);
for(FileItem f:fileName)
{
try {
f.write (new File("/Users/tkmajdt/Documents/workspace/File1POC1/" + f.getName()));
} catch (Exception e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
//targetFileStr = readFile("/Users/tkmajdt/Documents/workspace/File1POC1/" + f.getName(),Charset.defaultCharset());
targetFileStr = new String(Files.readAllBytes(Paths.get("/Users/tkmajdt/Documents/workspace/File1POC1/" + f.getName())));
}
}
//response.getWriter().print("File Uploaded Successfully");
//String content = readFile("test.txt", Charset.defaultCharset());
catch (FileUploadException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
/*if(requestedFile==null)
{
response.getWriter().print("File Not Found");
}*/
/*else
{
//String fileName = (String)requestedFile;
FileInputStream fisTargetFile = new FileInputStream(fileName);
targetFileStr = IOUtils.toString(fisTargetFile, "UTF-8");
}*/
BlobId blobId = BlobId.of(bucketName, "my_blob_name");
//Blob blob = bucket.create("my_blob_name", "a simple blob".getBytes("UTF-8"), "text/plain");
Blob blob = bucket.create("my_blob_name", targetFileStr.getBytes("UTF-8"), "text/plain");
//storage.delete("vendor-bucket3");
}
}
I uploaded the whole source code to GitHub