Java gRPC client predict call to half_plus_two example model

I'm trying to make a call from a Java client to TensorFlow Serving. The running model is the half_plus_two example model. I can make a REST call successfully, but I cannot make the equivalent gRPC call.
I have tried passing a string as model input and also an array of floats to the TensorProto builder. The TensorProto seems to contain the correct data when I print it out:
[1.0, 2.0, 5.0]
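For reference, the REST call that works corresponds to something like this (a minimal sketch using java.net.http; it assumes TensorFlow Serving's default REST port 8501):
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
// POST the documented half_plus_two payload to the REST endpoint.
HttpClient client = HttpClient.newHttpClient();
HttpRequest restRequest = HttpRequest.newBuilder()
        .uri(URI.create("http://localhost:8501/v1/models/half_plus_two:predict"))
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString("{\"instances\": [1.0, 2.0, 5.0]}"))
        .build();
// send() throws IOException/InterruptedException, so call this from a method that declares them.
System.out.println(client.send(restRequest, HttpResponse.BodyHandlers.ofString()).body());
Here is my gRPC client code: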
String host = "localhost";
int port = 8500;
// the model's name.
String modelName = "half_plus_two";
// model's version
long modelVersion = 123;
// assume this model takes input of free text, and make some sentiment prediction.
// String modelInput = "some text input to make prediction with";
String modelInput = "{\"instances\": [1.0, 2.0, 5.0]}";
// create a channel
ManagedChannel channel = ManagedChannelBuilder.forAddress(host, port).usePlaintext().build();
tensorflow.serving.PredictionServiceGrpc.PredictionServiceBlockingStub stub = tensorflow.serving.PredictionServiceGrpc.newBlockingStub(channel);
// create a modelspec
tensorflow.serving.Model.ModelSpec.Builder modelSpecBuilder = tensorflow.serving.Model.ModelSpec.newBuilder();
modelSpecBuilder.setName(modelName);
modelSpecBuilder.setVersion(Int64Value.of(modelVersion));
modelSpecBuilder.setSignatureName("serving_default");
Predict.PredictRequest.Builder builder = Predict.PredictRequest.newBuilder();
builder.setModelSpec(modelSpecBuilder);
// create the TensorProto and request
float[] floatData = new float[3];
floatData[0] = 1.0f;
floatData[1] = 2.0f;
floatData[2] = 5.0f;
org.tensorflow.framework.TensorProto.Builder tensorProtoBuilder = org.tensorflow.framework.TensorProto.newBuilder();
tensorProtoBuilder.setDtype(DataType.DT_FLOAT);
org.tensorflow.framework.TensorShapeProto.Builder tensorShapeBuilder = org.tensorflow.framework.TensorShapeProto.newBuilder();
tensorShapeBuilder.addDim(org.tensorflow.framework.TensorShapeProto.Dim.newBuilder().setSize(3));
tensorProtoBuilder.setTensorShape(tensorShapeBuilder.build());
// Set the float_val field.
for (int i = 0; i < floatData.length; i++) {
    tensorProtoBuilder.addFloatVal(floatData[i]);
}
org.tensorflow.framework.TensorProto tp = tensorProtoBuilder.build();
System.out.println(tp.getFloatValList());
builder.putInputs("inputs", tp);
Predict.PredictRequest request = builder.build();
Predict.PredictResponse response = stub.predict(request);
When I print the request, it looks like this:
model_spec {
  name: "half_plus_two"
  version {
    value: 123
  }
  signature_name: "serving_default"
}
inputs {
  key: "inputs"
  value {
    dtype: DT_FLOAT
    tensor_shape {
      dim {
        size: -1
      }
      dim {
        size: 1
      }
    }
    float_val: 1.0
    float_val: 2.0
    float_val: 5.0
  }
}
I get this exception:
Exception in thread "main" io.grpc.StatusRuntimeException: INVALID_ARGUMENT: input tensor alias not found in signature: inputs. Inputs expected to be in the set {x}.
    at io.grpc.stub.ClientCalls.toStatusRuntimeException(ClientCalls.java:233)
    at io.grpc.stub.ClientCalls.getUnchecked(ClientCalls.java:214)
    at io.grpc.stub.ClientCalls.blockingUnaryCall(ClientCalls.java:139)
    at tensorflow.serving.PredictionServiceGrpc$PredictionServiceBlockingStub.predict(PredictionServiceGrpc.java:446)
    at com.avaya.ccml.grpc.GrpcClient.main(GrpcClient.java:72)
Edit:
Still working on this. It looks like the TensorProto I'm supplying is not correct. Inspecting the model with saved_model_cli shows the expected shape:
The given SavedModel SignatureDef contains the following input(s):
  inputs['x'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 1)
      name: x:0
The given SavedModel SignatureDef contains the following output(s):
  outputs['y'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 1)
      name: y:0
Method name is: tensorflow/serving/predict
So the next step is to figure out how to create a TensorProto with this structure.
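For what it's worth, a TensorShapeProto matching the signature's (-1, 1) can be built like this (a minimal sketch: a batch of 3 rows, 1 float each):
// Assumed: a [3, 1] shape for a batch of three single-float rows.
org.tensorflow.framework.TensorShapeProto shape =
        org.tensorflow.framework.TensorShapeProto.newBuilder()
                .addDim(org.tensorflow.framework.TensorShapeProto.Dim.newBuilder().setSize(3))
                .addDim(org.tensorflow.framework.TensorShapeProto.Dim.newBuilder().setSize(1))
                .build();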

I figured this out.
The answer was staring me in the face the whole time: the exception states that the input name must be 'x':
Exception in thread "main" io.grpc.StatusRuntimeException: INVALID_ARGUMENT: input tensor alias not found in signature: inputs. Inputs expected to be in the set {x}.
And the saved_model_cli output also shows 'x' as the input name:
The given SavedModel SignatureDef contains the following input(s):
  inputs['x'] tensor_info:
So I changed the line
requestBuilder.putInputs("inputs", proto);
to
requestBuilder.putInputs("x", proto);
Full working code:
import com.google.protobuf.Int64Value;
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import org.tensorflow.framework.DataType;
import tensorflow.serving.Predict;
public class GrpcClient {
    public static void main(String[] args) {
        String host = "localhost";
        int port = 8500;
        // the model's name
        String modelName = "half_plus_two";
        // the model's version
        long modelVersion = 123;
        // create a channel
        ManagedChannel channel = ManagedChannelBuilder.forAddress(host, port).usePlaintext().build();
        tensorflow.serving.PredictionServiceGrpc.PredictionServiceBlockingStub stub =
                tensorflow.serving.PredictionServiceGrpc.newBlockingStub(channel);
        // create PredictRequest
        Predict.PredictRequest.Builder requestBuilder = Predict.PredictRequest.newBuilder();
        // create ModelSpec
        tensorflow.serving.Model.ModelSpec.Builder modelSpecBuilder = tensorflow.serving.Model.ModelSpec.newBuilder();
        modelSpecBuilder.setName(modelName);
        modelSpecBuilder.setVersion(Int64Value.of(modelVersion));
        modelSpecBuilder.setSignatureName("serving_default");
        // set model for request
        requestBuilder.setModelSpec(modelSpecBuilder);
        // create TensorProto with 3 floats
        org.tensorflow.framework.TensorProto.Builder tensorProtoBuilder = org.tensorflow.framework.TensorProto.newBuilder();
        tensorProtoBuilder.setDtype(DataType.DT_FLOAT);
        tensorProtoBuilder.addFloatVal(1.0f);
        tensorProtoBuilder.addFloatVal(2.0f);
        tensorProtoBuilder.addFloatVal(5.0f);
        // create TensorShapeProto
        org.tensorflow.framework.TensorShapeProto.Builder tensorShapeBuilder = org.tensorflow.framework.TensorShapeProto.newBuilder();
        tensorShapeBuilder.addDim(org.tensorflow.framework.TensorShapeProto.Dim.newBuilder().setSize(3));
        // set shape for proto
        tensorProtoBuilder.setTensorShape(tensorShapeBuilder.build());
        // build proto
        org.tensorflow.framework.TensorProto proto = tensorProtoBuilder.build();
        // set proto for request, keyed by the signature's input name "x"
        requestBuilder.putInputs("x", proto);
        // build request
        Predict.PredictRequest request = requestBuilder.build();
        System.out.println("Printing request \n" + request.toString());
        // run predict
        Predict.PredictResponse response = stub.predict(request);
        System.out.println(response.toString());
    }
}
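For half_plus_two (documented as y = 0.5 * x + 2) the response printed at the end should contain the values 2.5, 3.0 and 4.5 for the inputs above.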

In the example for half_plus_two here they use the instances label for the input values: https://www.tensorflow.org/tfx/serving/docker#serving_example
Could you try setting it to instances, like this?
builder.putInputs("instances", tp);
I also believe the dtype can be problematic. Instead of DT_STRING, I think you should use DT_FLOAT, as the inspection result shows:
tensorProtoBuilder.setDtype(DataType.DT_FLOAT);
Edit:
I work with Python and couldn't spot the mistake in yours, but this is how we send a predict request (with a PredictRequest proto). Maybe you can try the Predict proto, or there is something I am missing and you may spot the difference yourself:
import grpc
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc
from tensorflow.core.framework import types_pb2

request = predict_pb2.PredictRequest()
request.model_spec.name = model_name
request.model_spec.signature_name = signature_name
request.inputs['x'].dtype = types_pb2.DT_FLOAT
request.inputs['x'].float_val.append(2.0)
channel = grpc.insecure_channel(model_server_address)
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)
result = stub.Predict(request, RPC_TIMEOUT)

Related

How to pass the "values" attribute as an array of strings in the payload object for Google's AutoML Tables?

We are using the Google Cloud service AutoML Tables for online prediction.
We have created, trained, and deployed the model. The model is giving predictions via the Google console. We are trying to integrate this model into our Java code.
We are not able to pass the "values" attribute as an array of strings in the payload object in our Java code. We haven't found anything about this in the documentation.
Here is the link we are using for this:
https://cloud.google.com/automl-tables/docs/samples/automl-tables-predict
Please see the JSON object in the screenshot.
How can we pass the "values" attribute as an array of strings in the payload object?
Thanks.
Based on the reference you are following, to populate "values" you need to define it in main(). You can refer to the Value.Builder class if you need to set numbers, nulls, etc.
List<Value> values = new ArrayList<>();
values.add(Value.newBuilder().setStringValue("This is test data.").build());
// add more elements in values as needed
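Numeric and empty cells use the same builder; for instance (a small sketch, with NullValue from com.google.protobuf):
values.add(Value.newBuilder().setNumberValue(42.0).build()); // a numeric cell
values.add(Value.newBuilder().setNullValue(NullValue.NULL_VALUE).build()); // an empty cell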
This values list is used in a Row, which accepts an iterable of protobuf values. See Row.newBuilder().addAllValues().
Row row = Row.newBuilder().addAllValues(values).build();
Using these, the payload is complete and a prediction request can be built:
ExamplePayload payload = ExamplePayload.newBuilder().setRow(row).build();
PredictRequest request =
    PredictRequest.newBuilder()
        .setName(name.toString())
        .setPayload(payload)
        .putParams("feature_importance", "true")
        .build();
PredictResponse response = client.predict(request);
Your full prediction code should look like this:
import com.google.cloud.automl.v1beta1.AnnotationPayload;
import com.google.cloud.automl.v1beta1.ExamplePayload;
import com.google.cloud.automl.v1beta1.ModelName;
import com.google.cloud.automl.v1beta1.PredictRequest;
import com.google.cloud.automl.v1beta1.PredictResponse;
import com.google.cloud.automl.v1beta1.PredictionServiceClient;
import com.google.cloud.automl.v1beta1.Row;
import com.google.cloud.automl.v1beta1.TablesAnnotation;
import com.google.protobuf.Value;
import com.google.protobuf.NullValue;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
class TablesPredict {
    public static void main(String[] args) throws IOException {
        // TODO(developer): Replace these variables before running the sample.
        String projectId = "your-project-id";
        String modelId = "TBL9999999999";
        // Values should match the input expected by your model.
        List<Value> values = new ArrayList<>();
        values.add(Value.newBuilder().setNullValue(NullValue.NULL_VALUE).build());
        values.add(Value.newBuilder().setStringValue("blue-colar").build());
        values.add(Value.newBuilder().setStringValue("married").build());
        values.add(Value.newBuilder().setStringValue("primary").build());
        values.add(Value.newBuilder().setStringValue("no").build());
        values.add(Value.newBuilder().setNullValue(NullValue.NULL_VALUE).build());
        values.add(Value.newBuilder().setStringValue("yes").build());
        values.add(Value.newBuilder().setStringValue("yes").build());
        values.add(Value.newBuilder().setStringValue("cellular").build());
        values.add(Value.newBuilder().setNullValue(NullValue.NULL_VALUE).build());
        values.add(Value.newBuilder().setNullValue(NullValue.NULL_VALUE).build());
        values.add(Value.newBuilder().setNullValue(NullValue.NULL_VALUE).build());
        values.add(Value.newBuilder().setNullValue(NullValue.NULL_VALUE).build());
        values.add(Value.newBuilder().setNullValue(NullValue.NULL_VALUE).build());
        values.add(Value.newBuilder().setNullValue(NullValue.NULL_VALUE).build());
        values.add(Value.newBuilder().setStringValue("unknown").build());
        predict(projectId, modelId, values);
    }

    static void predict(String projectId, String modelId, List<Value> values) throws IOException {
        // Initialize client that will be used to send requests. This client only needs to be created
        // once, and can be reused for multiple requests. After completing all of your requests, call
        // the "close" method on the client to safely clean up any remaining background resources.
        try (PredictionServiceClient client = PredictionServiceClient.create()) {
            // Get the full path of the model.
            ModelName name = ModelName.of(projectId, "us-central1", modelId);
            Row row = Row.newBuilder().addAllValues(values).build();
            ExamplePayload payload = ExamplePayload.newBuilder().setRow(row).build();
            // Feature importance gives you visibility into how the features in a specific prediction
            // request informed the resulting prediction. For more info, see:
            // https://cloud.google.com/automl-tables/docs/features#local
            PredictRequest request =
                PredictRequest.newBuilder()
                    .setName(name.toString())
                    .setPayload(payload)
                    .putParams("feature_importance", "true")
                    .build();
            PredictResponse response = client.predict(request);
            System.out.println("Prediction results:");
            for (AnnotationPayload annotationPayload : response.getPayloadList()) {
                TablesAnnotation tablesAnnotation = annotationPayload.getTables();
                System.out.format(
                    "Classification label: %s%n", tablesAnnotation.getValue().getStringValue());
                System.out.format("Classification score: %.3f%n", tablesAnnotation.getScore());
                // Get features of top importance
                tablesAnnotation
                    .getTablesModelColumnInfoList()
                    .forEach(
                        info ->
                            System.out.format(
                                "\tColumn: %s - Importance: %.2f%n",
                                info.getColumnDisplayName(), info.getFeatureImportance()));
            }
        }
    }
}
For testing purposes I used Google's test dataset (gs://cloud-ml-tables-data/bank-marketing.csv) and used the code above to send a test prediction.

Getting decoded output from a smart contract transaction

I am executing functions of a smart contract through web3j using the following code:
Credentials creds = getCredentialsFromPrivateKey("private-key");
RawTransactionManager manager = new RawTransactionManager(web3j, creds);
String contractAddress = "0x1278f8c858d799fe1010cfc0d1eeb56508243a4d";
BigInteger sum = new BigInteger("10000000000"); // amount you want to send
String data = encodeTransferData(sum);
BigInteger gasPrice = web3j.ethGasPrice().send().getGasPrice();
BigInteger gasLimit = BigInteger.valueOf(120000); // set gas limit here
EthSendTransaction transaction = manager.sendTransaction(gasPrice, gasLimit, contractAddress, data, null);
System.out.println(transaction.getTransactionHash());
It executes fine and the function runs; however, I don't know how to read the output given by the contract. How can I read that output?
This will return the hex value of the function call:
private static List<Type> executeCall(Function function) throws IOException {
    String encodedFunction = FunctionEncoder.encode(function);
    org.web3j.protocol.core.methods.response.EthCall ethCall = web3j.ethCall(
            Transaction.createEthCallTransaction(
                    "0x753ebAf6F6D5C2e3E6D469DEc5694Cd3Aa1A0c21",
                    "0x47480bac30de77cd030b8a8dad2d6a2ecdb7f27a",
                    encodedFunction),
            DefaultBlockParameterName.LATEST)
            .send();
    String value = ethCall.getValue();
    System.out.println(value);
    System.out.println(FunctionReturnDecoder.decode(value, function.getOutputParameters()));
    return FunctionReturnDecoder.decode(value, function.getOutputParameters());
}
An Ethereum transaction hash is the unique ID of the transaction. With this transaction hash, you can query the transaction status from the network.
The underlying JSON-RPC call is eth_getTransactionReceipt; see the Web3.js documentation.
If your smart contract emits events, you can also read those.
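In web3j that roughly translates to the following sketch (hedged: web3j and transaction are the objects from the question, and the receipt is absent until the transaction is mined):
import java.util.Optional;
import org.web3j.protocol.core.methods.response.TransactionReceipt;

Optional<TransactionReceipt> maybeReceipt = web3j
        .ethGetTransactionReceipt(transaction.getTransactionHash())
        .send()
        .getTransactionReceipt();
maybeReceipt.ifPresent(receipt -> {
    System.out.println("Status: " + receipt.getStatus()); // "0x1" on success
    // Each log entry corresponds to an event emitted during the transaction.
    receipt.getLogs().forEach(log -> System.out.println(log.getData()));
});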

How to parse protobuf data of Envelope.Payload.Data?

I'm using the Hyperledger Fabric Java SDK to get a transaction by txId. The returned object includes the transaction information.
TransactionInfo txInfo = channel.queryTransactionByID(txId);
Common.Envelope envelope = txInfo.getEnvelope();
Common.Payload payload = Common.Payload.parseFrom(envelope.getPayload());
The Payload message includes a header and data. I can parse the header using Common.ChannelHeader and Common.SignatureHeader.
Common.ChannelHeader channelHeader = Common.ChannelHeader.parseFrom(payload.getHeader().getChannelHeader());
Common.SignatureHeader signatureHeader = Common.SignatureHeader.parseFrom(payload.getHeader().getSignatureHeader());
The problem is, I cannot see any message type for getting the data from the Payload.
My expectation would be something like:
SomeMessage someMsg = SomeMessage.parseFrom(payload.getData());
What is the ideal approach to get the data object?
Thanks to Driden Myung's tip, I finally found a way to parse QSCC responses into a TxReadWriteSet or even a KVRWSet!
Here is an example:
TransactionInfo txInfo = channel.queryTransactionByID(txId);
Common.Envelope envelope = txInfo.getEnvelope();
Common.Payload payload = Common.Payload.parseFrom(envelope.getPayload());
FabricTransaction.Transaction transaction = FabricTransaction.Transaction.parseFrom(payload.getData());
FabricTransaction.TransactionAction action = transaction.getActionsList().get(0); // 0 is an index
FabricTransaction.ChaincodeActionPayload chaincodeActionPayload = FabricTransaction.ChaincodeActionPayload.parseFrom(action.getPayload());
FabricProposalResponse.ProposalResponsePayload prp = FabricProposalResponse.ProposalResponsePayload.parseFrom(chaincodeActionPayload.getAction().getProposalResponsePayload());
FabricProposal.ChaincodeAction ca = FabricProposal.ChaincodeAction.parseFrom(prp.getExtension());
Rwset.TxReadWriteSet txrws = Rwset.TxReadWriteSet.parseFrom(ca.getResults());
TxReadWriteSetInfo txrwsInfo = new TxReadWriteSetInfo(txrws);
KvRwset.KVRWSet kvrwSet = txrwsInfo.getNsRwsetInfo(0).getRwset();
KvRwset.KVWrite kvWrite = kvrwSet.getWrites(0);
String writeVal = kvWrite.getValue().toStringUtf8();
I found the answer.
FabricTransaction.Transaction transaction = FabricTransaction.Transaction.parseFrom(payload.getData());
After that,
FabricTransaction.TransactionAction action = transaction.getActionsList().get(index);
FabricTransaction.ChaincodeActionPayload chaincodeActionPayload = FabricTransaction.ChaincodeActionPayload.parseFrom(action.getPayload());
chaincodeActionPayload.getAction().getEndorsementsList().forEach(endorsement -> {
    // This is my current point
    ???? endorser = ????.parseFrom(endorsement.getEndorser());
});
I will add more if I find anything. Comments are welcome.
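For what it's worth (an assumption on my part, not confirmed in this thread): in the Fabric protos the Endorsement.endorser bytes are a marshaled SerializedIdentity from msp/identities.proto, so the missing type is likely:
import org.hyperledger.fabric.protos.msp.Identities;

// Assumption: the endorser bytes hold a marshaled msp SerializedIdentity.
Identities.SerializedIdentity endorser =
        Identities.SerializedIdentity.parseFrom(endorsement.getEndorser());
System.out.println(endorser.getMspid()); // the endorsing org's MSP ID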
We faced a similar problem getting the request data from a transaction.
The following code will help to get the transaction request data (Fabric SDK version: 2.1.4):
// get transaction from transaction ID
TransactionInfo txInfo = channel.queryTransactionByID(txId);
// transaction is stored inside the envelope containing the payload and signature
Common.Envelope envelope = txInfo.getEnvelope();
// parse payload from the envelope
Common.Payload payload = Common.Payload.parseFrom(envelope.getPayload());
// payload contains Header and Data. We are parsing data to get the transaction
TransactionPackage.Transaction transaction = TransactionPackage.Transaction.parseFrom(payload.getData());
// get first action from the transaction action list. it contains input and other details
TransactionPackage.TransactionAction action = transaction.getActionsList().get(0); // 0 is an index
// chaincode action payload contains input parameters. So we are taking the action payload
TransactionPackage.ChaincodeActionPayload chaincodeActionPayload = TransactionPackage.ChaincodeActionPayload.parseFrom(action.getPayload());
// chaincode ProposalPayload contains Input and TransientMap. We are parsing actionPayload to proposalPayload
ProposalPackage.ChaincodeProposalPayload prp = ProposalPackage.ChaincodeProposalPayload.parseFrom(chaincodeActionPayload.getChaincodeProposalPayload());
// parse the input to chaincodeInvocationSpec so that we can unmarshal the input
Chaincode.ChaincodeInvocationSpec chaincodeInvocationSpec = Chaincode.ChaincodeInvocationSpec.parseFrom(prp.getInput());
// get the input and parse the arg list and get input arguments
chaincodeInvocationSpec.getChaincodeSpec().getInput().getArgsList().get(ChaincodeInput.ARGS_FIELD_NUMBER).toStringUtf8();
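Note that get(ChaincodeInput.ARGS_FIELD_NUMBER) above picks a single argument by a fixed index; to dump every argument you could iterate the list instead (a small sketch):
// Print all chaincode invocation arguments as UTF-8 strings.
chaincodeInvocationSpec.getChaincodeSpec().getInput().getArgsList()
        .forEach(arg -> System.out.println(arg.toStringUtf8()));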

How to export repeat grid layout data to Excel using pzRDExportWrapper in Pega 7.1.8?

I am trying to export repeat grid data to Excel. To do this, I have provided a button that runs the "MyCustomActivity" activity on click. The button is placed above the grid in the same layout. It is also worth pointing out that I am using an article as a configuration guide. According to the guide, my "MyCustomActivity" activity contains two steps:
Method: Property-Set, Method Parameters: Param.exportmode = "excel"
Method: Call pzRDExportWrapper, passing the current parameters (there is only one, from the 1st step).
After I ran into an issue, I changed the 2nd step to Call Rule-Obj-Report-Definition.pzRDExportWrapper.
But as you have already understood, the solution doesn't work. I checked the log files and found an interesting error:
2017-04-11 21:08:27,992 [ WebContainer : 4] [OpenPortal] [ ] [ MyFW:01.01.02] (ctionWrapper._baseclass.Action) ERROR as1|172.22.254.110 bar - Activity 'MyCustomActivity' failed to execute; Failed to find a 'RULE-OBJ-ACTIVITY' with the name 'PZRESOLVECOPYFILTERS' that applies to 'COM-FW-MyFW-Work'. There were 3 rules with this name in the rulebase, but none matched this request. The 3 rules named 'PZRESOLVECOPYFILTERS' defined in the rulebase are:
2017-04-11 21:08:42,807 [ WebContainer : 4] [TABTHREAD1] [ ] [ MyFW:01.01.02] (fileSetup.Code_Security.Action) ERROR as1|172.22.254.110 bar - External authentication failed:
If anyone has any suggestions, I would appreciate them. Thank you.
I wanted to provide functionality for exporting retrieved work items to a CSV file. The functionality should allow choosing which fields to retrieve, should produce all results in Ukrainian, and should work with any SearchFilter pages and Report Definition rules.
In a User Portal I have two sections: the first contains text fields and a Search button, and the second contains a Repeat Grid to display results. The text fields are used to filter results, and they use a page Org-Div-Work-SearchFilter.
I made a custom CSV parser: I created two activities and wrote some Java code. I should mention that I took some code from pzRDExportWrapper.
The activities are:
ExportToCSV - takes parameters from the user, gets the data, and invokes ConvertResultsToCSV;
ConvertResultsToCSV - converts the retrieved data to a .CSV file.
Configurations of the ExportToCSV activity:
The Pages And Classes tab:
ReportDefinition is an object of a certain Report Definition.
SearchFilter is a page with the values entered by the user.
ReportDefinitionResults is a list of retrieved work items to export.
ReportDefinitionResults.pxResults denotes the type of a certain work item.
The Parameters tab:
FileName is the name of the generated file.
ColumnsNames are the names of the columns, separated by commas. If this parameter is empty, CSVProperties is exported.
CSVProperties are the properties to display in the spreadsheet, separated by commas.
SearchPageName is the name of the page used to filter results.
ReportDefinitionName is the name of the RD used to retrieve results.
ReportDefinitionClass is the class of the utilized Report Definition.
The Step tab:
Let's look through the steps:
1. Get a SearchFilter page, with the name taken from a parameter, with populated fields.
2. If SearchFilter is not empty, call a Data Transform to convert the SearchFilter's properties to parameter properties.
A fragment of the Data Transform:
3. Get an object of the Report Definition.
4. Set parameters for the Report Definition.
5. Invoke the Report Definition and save the results to ReportDefinitionResults.
6. Invoke the ConvertResultsToCSV activity.
7. Delete the result page.
An overview of the ConvertResultsToCSV activity.
The Parameters tab of the ConvertResultsToCSV activity:
CSVProperties are the properties to retrieve and export.
ColumnsNames are the names of the columns to display.
PageListProperty is the name of the property to be read on the primary page.
FileName is the name of the generated file; it can be empty.
AppendTimeStampToFileName - if true, the time of file generation is appended to the file name.
CSVString is the string of generated CSV to be saved to a file.
listSeperator is always a semicolon, used to separate fields.
Let's skim through the steps in the activity:
Get the localization from user settings (commented out). In theory it could support localization in many languages.
Always set the "uk" (Ukrainian) localization.
Get the separator according to the localization. It is always a semicolon in Ukrainian, English, and Russian; other languages would need to be checked.
The next step contains Java code which forms the CSV string:
StringBuffer csvContent = new StringBuffer(); // the CSV content buffer
String pageListProp = tools.getParamValue("PageListProperty");
ClipboardProperty resultsProp = myStepPage.getProperty(pageListProp);
// fill the list of property names whose values will appear in the CSV
java.util.List<String> propertiesNames = new java.util.LinkedList<String>();
String csvProps = tools.getParamValue("CSVProperties");
propertiesNames = java.util.Arrays.asList(csvProps.split(","));
// get the user's column names
java.util.List<String> columnsNames = new java.util.LinkedList<String>();
String CSVDisplayProps = tools.getParamValue("ColumnsNames");
if (!CSVDisplayProps.isEmpty()) {
    columnsNames = java.util.Arrays.asList(CSVDisplayProps.split(","));
} else {
    columnsNames.addAll(propertiesNames);
}
// add the column headers to the CSV
Iterator columnsIter = columnsNames.iterator();
while (columnsIter.hasNext()) {
    csvContent.append(columnsIter.next().toString());
    if (columnsIter.hasNext()) {
        csvContent.append(listSeperator); // listSeperator - local variable
    }
}
csvContent.append("\r");
for (int i = 1; i <= resultsProp.size(); i++) {
    ClipboardPage propPage = resultsProp.getPageValue(i);
    Iterator iterator = propertiesNames.iterator();
    int propTypeIndex = 0;
    while (iterator.hasNext()) {
        ClipboardProperty clipProp = propPage.getIfPresent((iterator.next()).toString());
        String propValue = "";
        if (clipProp != null && !clipProp.isEmpty()) {
            char propType = clipProp.getType();
            propValue = clipProp.getStringValue();
            if (propType == ImmutablePropertyInfo.TYPE_DATE) {
                DateTimeUtils dtu = ThreadContainer.get().getDateTimeUtils();
                long mills = dtu.parseDateString(propValue);
                java.util.Date date = new Date(mills);
                String sdate = dtu.formatDateTimeStamp(date);
                propValue = dtu.formatDateTime(sdate, "dd.MM.yyyy", "", "");
            }
            else if (propType == ImmutablePropertyInfo.TYPE_DATETIME) {
                DateTimeUtils dtu = ThreadContainer.get().getDateTimeUtils();
                propValue = dtu.formatDateTime(propValue, "dd.MM.yyyy HH:mm", "", "");
            }
            else if (propType == ImmutablePropertyInfo.TYPE_DECIMAL) {
                propValue = PRNumberFormat.format(localeCode, PRNumberFormat.DEFAULT_DECIMAL, false, null, new BigDecimal(propValue));
            }
            else if (propType == ImmutablePropertyInfo.TYPE_DOUBLE) {
                propValue = PRNumberFormat.format(localeCode, PRNumberFormat.DEFAULT_DECIMAL, false, null, Double.parseDouble(propValue));
            }
            else if (propType == ImmutablePropertyInfo.TYPE_TEXT) {
                propValue = clipProp.getLocalizedText();
            }
            else if (propType == ImmutablePropertyInfo.TYPE_INTEGER) {
                Integer intPropValue = Integer.parseInt(propValue);
                if (intPropValue < 0) {
                    propValue = new String();
                }
            }
        }
        if (propValue.contains(listSeperator)) {
            csvContent.append("\"" + propValue + "\"");
        } else {
            csvContent.append(propValue);
        }
        if (iterator.hasNext()) {
            csvContent.append(listSeperator);
        }
        propTypeIndex++;
    }
    csvContent.append("\r");
}
CSVString = csvContent.toString();
5. This step forms and saves the file in the server's directory tree:
char sep = PRFile.separatorChar;
String exportPath = tools.getProperty("pxProcess.pxServiceExportPath").getStringValue();
DateTimeUtils dtu = ThreadContainer.get().getDateTimeUtils();
String fileNameParam = tools.getParamValue("FileName");
if (fileNameParam.equals("")) {
    fileNameParam = "RecordsToCSV";
}
// append a time stamp
Boolean appendTimeStamp = tools.getParamAsBoolean(ImmutablePropertyInfo.TYPE_TRUEFALSE, "AppendTimeStampToFileName");
FileName += fileNameParam;
if (appendTimeStamp) {
    FileName += "_";
    String currentDateTime = dtu.getCurrentTimeStamp();
    currentDateTime = dtu.formatDateTime(currentDateTime, "HH-mm-ss_dd.MM.yyyy", "", "");
    FileName += currentDateTime;
}
// append the file extension
FileName += ".csv";
String strSQLfullPath = exportPath + sep + FileName;
PRFile f = new PRFile(strSQLfullPath);
PROutputStream stream = null;
PRWriter out = null;
try {
    // create the file
    stream = new PROutputStream(f);
    out = new PRWriter(stream, "UTF-8");
    // Bug with Excel reading a file starting with 'ID' as a SYLK file. If the CSV starts with ID, prepend a space.
    if (CSVString.startsWith("ID")) {
        CSVString = " " + CSVString;
    }
    out.write(CSVString);
} catch (Exception e) {
    oLog.error("Error writing csv file: " + e.getMessage());
} finally {
    try {
        // close the output stream
        out.close();
    } catch (Exception e) {
        oLog.error("Error of closing a file stream: " + e.getMessage());
    }
}
The last step calls #baseclass.DownloadFile to download the file.
Finally, we can put a button on a section (or somewhere else) and set up its Actions tab accordingly. It also works fine inside a "Refresh Section" action.
Thanks for reading.

Listing objects in an AWS bucket

I was trying to print all the objects in a bucket, but I am getting an error.
Exception in thread "main" com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 301, AWS Service: Amazon S3, AWS Request ID: 758A7CBF1A29FD74, AWS Error Code: PermanentRedirect, AWS Error Message: The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint., S3
At the moment I only have the following code:
public class S3Download {

    /**
     * @param args
     */
    public static void main(String[] args) {
        AmazonS3 s3 = new AmazonS3Client(new ClasspathPropertiesFileCredentialsProvider());
        Region usWest2 = Region.getRegion(Regions.US_WEST_2);
        s3.setRegion(usWest2);

        String bucketName = "apireleasecandidate1";

        ListObjectsRequest listObjectRequest = new ListObjectsRequest().withBucketName(bucketName);
        ObjectListing objectListing;

        do {
            objectListing = s3.listObjects(listObjectRequest);
            for (S3ObjectSummary objectSummary : objectListing.getObjectSummaries()) {
                System.out.println(" - " + objectSummary.getKey() + " " +
                        "(size = " + objectSummary.getSize() + ")");
            }
            listObjectRequest.setMarker(objectListing.getNextMarker());
        } while (objectListing.isTruncated());
    }
}
I found this solution on Amazon's website.
Does anyone know what I am missing?
For Scala developers, here is a recursive function to execute a full scan and map of the contents of an AmazonS3 bucket using the official AWS SDK for Java:
import com.amazonaws.services.s3.AmazonS3Client
import com.amazonaws.services.s3.model.{S3ObjectSummary, ObjectListing, GetObjectRequest}
import scala.collection.JavaConversions.{collectionAsScalaIterable => asScala}

def map[T](s3: AmazonS3Client, bucket: String, prefix: String)(f: (S3ObjectSummary) => T) = {

  def scan(acc: List[T], listing: ObjectListing): List[T] = {
    val summaries = asScala[S3ObjectSummary](listing.getObjectSummaries())
    val mapped = (for (summary <- summaries) yield f(summary)).toList
    // accumulate across pages; the final page must include the accumulator too
    if (!listing.isTruncated) acc ::: mapped
    else scan(acc ::: mapped, s3.listNextBatchOfObjects(listing))
  }

  scan(List(), s3.listObjects(bucket, prefix))
}
To invoke the above curried map() function, pass the already constructed (and properly initialized) AmazonS3Client object (refer to the official AWS SDK for Java API Reference), the bucket name, and the prefix in the first parameter list. Also pass the function f() you want to apply to each object summary in the second parameter list.
For example
map(s3, bucket, prefix)(s => println(s))
will print all the files
val tuple = map(s3, bucket, prefix)(s => (s.getKey, s.getOwner, s.getSize))
will return the full list of (key, owner, size) tuples in that bucket/prefix
val totalSize = map(s3, "bucket", "prefix")(s => s.getSize).sum
will return the total size of its content (note the additional sum() folding function applied at the end of the expression ;-)
You can combine map() with many other functions, just as you would normally do with monads in functional programming.
It appears that your bucket "apireleasecandidate1" is not in the us-west-2 region that your code sets; I think it is in the US Standard ("us-classic") region. You should modify your code to remove the setRegion() call.
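If you want to confirm where the bucket actually lives before changing the code, the SDK can tell you (a small sketch using the bucket from the question):
// Returns the bucket's region, e.g. "US" for US Standard ("us-classic") buckets.
String location = s3.getBucketLocation("apireleasecandidate1");
System.out.println("Bucket region: " + location);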
