azure storage metrics at container level - java

I am referring to documentation provided by azure at
https://learn.microsoft.com/en-us/azure/storage/common/storage-metrics-in-azure-monitor#read-metric-values-with-the-net-sdk
I made changes to get the code working in Java using the azure-mgmt-monitor dependency. Here is the code:
public void listStorageMetricDefinition() {
    String resourceId = "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Storage/storageAccounts/{storageAccountName}";
    String subscriptionId = "*****************************";
    String tenantId = "*****************************";
    String applicationId = "*****************************";
    String accessKey = "*****************************";

    ApplicationTokenCredentials credentials = (ApplicationTokenCredentials) new ApplicationTokenCredentials(
            applicationId, tenantId, accessKey, AzureEnvironment.AZURE).withDefaultSubscriptionId(subscriptionId);
    MonitorManagementClientImpl clientImpl = new MonitorManagementClientImpl(credentials);

    Date startTime = DateTime.now().minusMinutes(30).toDate();
    Date endTime = DateTime.now().toDate();

    // The timespan must be in the ISO 8601 format below: start/end
    SimpleDateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss");
    dateFormat.setTimeZone(TimeZone.getTimeZone("UTC"));
    String startInterval = dateFormat.format(startTime);
    String endInterval = dateFormat.format(endTime);
    String timespan = startInterval + "/" + endInterval;

    Period interval = Period.minutes(1);
    String metricNames = "Egress";
    String aggregation = "Total";
    Integer top = null;
    String orderby = null;
    String filter = null;
    String metricNamespace = null;

    ResponseInner response = clientImpl.metrics().list(resourceId, timespan, interval, metricNames, aggregation,
            top, orderby, filter, null, metricNamespace);

    List<MetricInner> value = response.value();
    for (MetricInner metric : value) {
        System.out.println("id " + metric.id());
        System.out.println("name " + metric.name().value());
        System.out.println("type " + metric.type());
        System.out.println("unit " + metric.unit());
        List<TimeSeriesElement> timeseries = metric.timeseries();
        timeseries.forEach(ts -> {
            ts.data().forEach(dt -> {
                System.out.println(dt.timeStamp() + "--" + dt.total());
            });
        });
    }
}
Using the above I am able to read metric values at the storage account level, but how can I find metrics at the container level? For example, if I have 3 containers inside my storage account, I need to find the metrics for each container rather than for the whole storage account.
Please suggest if there are other ways to find metrics at the container level.

There is no direct way of doing this, but one can achieve it by configuring monitoring for the storage account. Follow the link below to configure monitoring:
https://learn.microsoft.com/en-us/azure/storage/common/storage-monitor-storage-account
Once the storage account is configured for monitoring, it creates a new container named $logs in your storage account. This container is not visible in the Azure portal, but you can view and explore it using the Azure Storage Explorer tool. The link to download the tool is given below.
https://azure.microsoft.com/en-us/features/storage-explorer/
The logs inside the $logs container are segregated into separate folders by date and time:
/blob/yyyy/MM/dd/HHmm/000000.log
/blob/yyyy/MM/dd/HHmm/000001.log
where mm is always going to be 00.
The schema for the logs can be found in the Azure documentation at:
https://learn.microsoft.com/en-us/rest/api/storageservices/storage-analytics-log-format
One can read the log files using that schema and build useful per-container metrics out of them, as sketched below.
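For example, here is a minimal sketch (not production code) of how one might aggregate per-container request counts from a downloaded $logs file in Java. The semicolon-delimited field positions follow the version 1.0 schema linked above, where field 3 is the operation type and field 13 is the requested object key of the form /account/container/blob; the local file name and the focus on GetBlob operations are illustrative assumptions:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.Map;

public class LogsPerContainer {
    public static void main(String[] args) throws IOException {
        Map<String, Long> requestsPerContainer = new HashMap<>();

        // Assumed local copy of a $logs blob, e.g. downloaded with Storage Explorer
        Files.lines(Paths.get("000000.log")).forEach(line -> {
            // Naive split; quoted fields may contain semicolons in real logs
            String[] fields = line.split(";");
            if (fields.length < 13 || !"GetBlob".equals(fields[2])) {
                return; // fields[2] is operation-type; count only blob reads here
            }
            // fields[12] is requested-object-key, e.g. "/account/container/blob"
            String[] keyParts = fields[12].replace("\"", "").split("/");
            if (keyParts.length > 2) {
                requestsPerContainer.merge(keyParts[2], 1L, Long::sum);
            }
        });

        requestsPerContainer.forEach((container, count) ->
                System.out.println(container + " -> " + count + " GetBlob requests"));
    }
}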

BigQueryIO: Query configured via options, but "Value only available at runtime"

Apache Beam 2.9.0
I have set up a pipeline that pulls data from BigQuery and does a series of transforms on it. The options have a start date attached to them using a ValueProvider:
ValueProvider<String> getStartTime();
void setStartTime(ValueProvider<String> startTime);
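For context, a minimal sketch of what such an options interface could look like (the interface name and description text are illustrative; only the two methods above come from the question):

import org.apache.beam.sdk.options.Description;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.ValueProvider;

public interface StartTimeOptions extends PipelineOptions {
    @Description("Start time as an ISO-8601 instant, e.g. 2018-12-01T00:00:00Z")
    ValueProvider<String> getStartTime();
    void setStartTime(ValueProvider<String> startTime);
}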
I then go to pull the data with BigQueryIO (changing things around a bit for the sake of making it explicit what is going on):
BigQueryIO.read(
        (SerializableFunction<SchemaAndRecord, AggregatedRowRecord>)
                input -> new BigQueryParser().apply(input.getRecord()))
    .withoutValidation()
    .withTemplateCompatibility()
    .fromQuery(
        ValueProvider.NestedValueProvider.of(
            opts.getStartTime(),
            (SerializableFunction<String, String>)
                input -> {
                    Instant instant = Instant.parse(input);
                    return String.format(
                        <large SQL statement with a %s in it>,
                        String.format(
                            "%d_%d_%d",
                            instant.get(ChronoField.YEAR),
                            instant.get(ChronoField.MONTH_OF_YEAR),
                            instant.get(ChronoField.DAY_OF_MONTH)));
                }))
    .withCoder(<coder for AggregatedRowRecords>)
    .usingStandardSql()
This is then added to a pipeline normally (p.apply(<above>)).
Now I run it:
--project=<project> \
--tempLocation=<directory> \
--stagingLocation=<directory> \
--network=dataflow \
--subnetwork=<subnetwork> \
--defaultWorkerLogLevel=DEBUG \
--appName=<name> \
--runner=DirectRunner
This causes the following error:
org.apache.beam.sdk.Pipeline$PipelineExecutionException: java.lang.IllegalStateException: Value only available at runtime, but accessed from a non-runtime context: RuntimeValueProvider{propertyName=startTime, default=null}
at org.apache.beam.runners.direct.DirectRunner$DirectPipelineResult.waitUntilFinish(DirectRunner.java:332)
at org.apache.beam.runners.direct.DirectRunner$DirectPipelineResult.waitUntilFinish(DirectRunner.java:302)
at org.apache.beam.runners.direct.DirectRunner.run(DirectRunner.java:197)
at org.apache.beam.runners.direct.DirectRunner.run(DirectRunner.java:64)
at org.apache.beam.sdk.Pipeline.run(Pipeline.java:313)
at org.apache.beam.sdk.Pipeline.run(Pipeline.java:299)
at <class>.main(<class>.java:<>)
Caused by: java.lang.IllegalStateException: Value only available at runtime, but accessed from a non-runtime context: RuntimeValueProvider{propertyName=startTime, default=null}
at org.apache.beam.sdk.options.ValueProvider$RuntimeValueProvider.get(ValueProvider.java:228)
at org.apache.beam.sdk.options.ValueProvider$NestedValueProvider.get(ValueProvider.java:131)
at org.apache.beam.sdk.io.gcp.bigquery.BigQueryQuerySource.createBasicQueryConfig(BigQueryQuerySource.java:230)
at org.apache.beam.sdk.io.gcp.bigquery.BigQueryQuerySource.dryRunQueryIfNeeded(BigQueryQuerySource.java:175)
at org.apache.beam.sdk.io.gcp.bigquery.BigQueryQuerySource.getTableToExtract(BigQueryQuerySource.java:115)
at org.apache.beam.sdk.io.gcp.bigquery.BigQuerySourceBase.extractFiles(BigQuerySourceBase.java:102)
at org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO$TypedRead$2.processElement(BigQueryIO.java:783)
The use of NestedValueProvider comes from this example on setting up templates:
The user provides a substring for a BigQuery query, such as a specific date. The transform uses the substring to create the full query. Calling .get() returns the full query.
Removing the value provider logic doesn't seem to help, however. Removing the ValueProvider entirely from the withQuery section works fine, but defeats the purpose of being able to set it via options.
The exception explains the issue: Apache Beam first builds the pipeline and its classes, and only then starts running data through it. At pipeline-construction time you can't access the runtime option values; at that stage they are just metadata for building the pipeline.
The way to overcome this is to create a ParDo function / PTransform that receives the options it needs as constructor parameters; it can then access them in its logic.
See the example below (from my own use case; I faced the same issue in the last few days).
The pipeline:
HistoryProcessingOptions options = PipelineOptionsFactory.fromArgs(args).withValidation()
        .as(HistoryProcessingOptions.class);
Pipeline pipeline = Pipeline.create(options);

pipeline.apply(SourceRead.of(options.getSourceBigQueryTable().get(),
        options.getSourceBigQueryDataset().get(),
        options.getSourceBigQueryProject().get(),
        options.getFromDate().get(),
        options.getToDate().get()
));
The transformer itself:
public class SourceRead extends PTransform<PBegin, PCollection<TableRow>> {

    private String sourceBigQueryTable;
    private String sourceBigQueryDataset;
    private String sourceBigQueryProject;
    private String fromDate;
    private String toDate;

    private static Logger logger = LoggerFactory.getLogger(SourceRead.class);

    public SourceRead(String sourceBigQueryTable, String sourceBigQueryDataset, String sourceBigQueryProject, String fromDate, String toDate) {
        this.sourceBigQueryTable = sourceBigQueryTable;
        this.sourceBigQueryDataset = sourceBigQueryDataset;
        this.sourceBigQueryProject = sourceBigQueryProject;
        this.fromDate = fromDate;
        this.toDate = toDate;
    }

    public static SourceRead of(String sourceBigQueryTable, String sourceBigQueryDataset, String sourceBigQueryProject, String fromDate, String toDate) {
        return new SourceRead(sourceBigQueryTable, sourceBigQueryDataset, sourceBigQueryProject, fromDate, toDate);
    }

    @Override
    public PCollection<TableRow> expand(PBegin input) {
        String query = "SELECT * FROM TABLE_DATE_RANGE([" + sourceBigQueryProject + ":" + sourceBigQueryDataset + "." + sourceBigQueryTable + "],"
                + "TIMESTAMP('" + fromDate + "'),"
                + "TIMESTAMP('" + toDate + "'))";
        logger.info("query is " + query);
        return input.apply(BigQueryIO.readTableRows()
                .fromQuery(query));
    }
}

How to get the access token to a Facebook page from the link with Spring Social?

I need to get a page management token from the link provided by the user. So far I have collected the page's username as a substring of the provided link, and I request permission from the user as follows:
public String createFacebookAuthorizationURL(String pagename, Long churchId) {
    FacebookConnectionFactory connectionFactory = new FacebookConnectionFactory(facebookAppId, facebookSecret);
    OAuth2Operations oauthOperations = connectionFactory.getOAuthOperations();
    OAuth2Parameters params = new OAuth2Parameters();
    params.setRedirectUri("https://" + this.hostName + "/api/facebook/" + churchId + "/response");
    params.setScope("public_profile,pages_show_list,publish_pages,manage_pages,user_events");
    return oauthOperations.buildAuthorizeUrl(params);
}

public String createFacebookAccessToken(String code, Long churchId) {
    FacebookConnectionFactory connectionFactory = new FacebookConnectionFactory(facebookAppId, facebookSecret);
    AccessGrant accessGrant = connectionFactory.getOAuthOperations().exchangeForAccess(code, "https://" + this.hostName + "/api/facebook/" + churchId + "/response", null);
    return accessGrant.getAccessToken();
}
I have already tested the above and it works. The problem is that this method requests a token with access to all of the user's pages; I want access to only one page, but I do not know how to express that in the request and the token.
I want to use these permissions to post text, links, and images to Facebook pages via my system's API.
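One possible approach (a sketch under assumptions, not a verified answer): the Graph API lists the pages a user manages at /me/accounts, each entry carrying its own page access token, and Spring Social Facebook exposes this through PageOperations.getAccounts(). Matching on the page name parsed from the link is an assumption about your data model:

import org.springframework.social.facebook.api.Account;
import org.springframework.social.facebook.api.Facebook;
import org.springframework.social.facebook.api.PagedList;
import org.springframework.social.facebook.api.impl.FacebookTemplate;

public String findPageAccessToken(String userAccessToken, String pagename) {
    Facebook facebook = new FacebookTemplate(userAccessToken);
    // Pages the user manages; requires the manage_pages scope requested above
    PagedList<Account> accounts = facebook.pageOperations().getAccounts();
    for (Account account : accounts) {
        if (pagename.equalsIgnoreCase(account.getName())) {
            return account.getAccessToken(); // token scoped to this page only
        }
    }
    return null; // the user does not manage a page with that name
}

The returned page token can then be used in place of the user token when posting to that page.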

Adding an attachment on Azure CosmosDB

I am looking for some help on how to add an attachment in CosmosDB. Here is a little background:
Our application is currently on IBM Bluemix, and we use CloudantDB to store attachments (PDF files). We are now moving to Azure PaaS App Service and planning to use CosmosDB. I am looking for help on how to create an attachment in CosmosDB using a Java API. Which API do I need to use? I want to do a small POC.
Thanks,
Personally, I feel that if you put files into DocumentDB in Azure, you will pay a high query cost. Instead, the normal practice is to store the file in Azure Blob storage and save the link in a field, then return the URL if it's public, or the binary data if it needs to be secured. A Java sketch of that approach follows.
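To illustrate that blob-plus-link practice in Java (to match the rest of this question), here is a minimal sketch using the classic azure-storage SDK; the connection string, container name, and file path are placeholders:

import com.microsoft.azure.storage.CloudStorageAccount;
import com.microsoft.azure.storage.blob.CloudBlobClient;
import com.microsoft.azure.storage.blob.CloudBlobContainer;
import com.microsoft.azure.storage.blob.CloudBlockBlob;

public class BlobUpload {
    public static void main(String[] args) throws Exception {
        // Placeholder connection string from the storage account's access keys
        CloudStorageAccount account = CloudStorageAccount.parse("<storage-connection-string>");
        CloudBlobClient blobClient = account.createCloudBlobClient();
        CloudBlobContainer container = blobClient.getContainerReference("attachments");
        container.createIfNotExists();

        CloudBlockBlob blob = container.getBlockBlobReference("File.pdf");
        blob.uploadFromFile("c:/Path/To/File.pdf");

        // Save this URL in a field of the Cosmos DB document instead of the file itself
        System.out.println("Stored at: " + blob.getUri());
    }
}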
However, you could store it in Cosmos DB directly, as in this C# example:
var myDoc = new { id = "42", Name = "Max", City = "Aberdeen" }; // this is the document you are trying to save
var attachmentStream = File.OpenRead("c:/Path/To/File.pdf"); // this is the document stream you are attaching
var client = await GetClientAsync();
var createUrl = UriFactory.CreateDocumentCollectionUri(DatabaseName, CollectionName);

Document document = await client.CreateDocumentAsync(createUrl, myDoc);
await client.CreateAttachmentAsync(document.SelfLink, attachmentStream, new MediaOptions()
{
    ContentType = "application/pdf", // your content type
    Slug = "78", // this is actually the attachment ID
});
Working with attachments
I have answered a similar question here.
What client API can I use?
You could follow the Cosmos DB Java SDK to CRUD attachments:
import com.microsoft.azure.documentdb.*;

import java.util.UUID;

public class CreateAttachment {
    // Replace with your DocumentDB endpoint and master key.
    private static final String END_POINT = "***";
    private static final String MASTER_KEY = "***";

    public static void main(String[] args) throws Exception, DocumentClientException {
        DocumentClient documentClient = new DocumentClient(END_POINT,
                MASTER_KEY, ConnectionPolicy.GetDefault(),
                ConsistencyLevel.Session);

        String uuid = UUID.randomUUID().toString();
        Attachment attachment = getAttachmentDefinition(uuid, "application/text");

        RequestOptions options = new RequestOptions();
        // getDocumentLink() should return the target document's self link,
        // e.g. "dbs/{db}/colls/{coll}/docs/{doc}"
        ResourceResponse<Attachment> attachmentResourceResponse = documentClient.createAttachment(getDocumentLink(), attachment, options);
    }

    private static Attachment getAttachmentDefinition(String uuid, String type) {
        return new Attachment(String.format(
                "{" +
                "  'id': '%s'," +
                "  'media': 'http://xstore.'," +
                "  'MediaType': 'Book'," +
                "  'Author': 'My Book Author'," +
                "  'Title': 'My Book Title'," +
                "  'contentType': '%s'" +
                "}", uuid, type));
    }
}
In the documentation it says the total file size we can store is 2 GB:
"Azure Cosmos DB allows you to store binary blobs/media either with Azure Cosmos DB (maximum of 2 GB per account)"
Is that the max we can store?
Yes. The size of attachments is limited in DocumentDB. However, there are two methods for creating an Azure Cosmos DB document attachment:
1. Store the file as an attachment to a document
The raw attachment is included as the body of the POST.
Two headers must be set:
Slug – the name of the attachment.
contentType – set to the MIME type of the attachment.
2. Store the URL for the file in an attachment to a document
The body of the POST includes the following:
id – the unique name that identifies the attachment, i.e. no two attachments will share the same id. The id must not exceed 255 characters.
Media – the URL link or file path where the attachment resides.
The following is an example:
{
    "id": "device\A234",
    "contentType": "application/x-zip-compressed",
    "media": "www.bing.com/A234.zip"
}
If your files are over the limit, you could try to store them the second way. For more details, please refer to the blog.
In addition, note that Cosmos DB attachments support a garbage-collection mechanism: the media is garbage collected when all of the outstanding references are dropped.
Hope it helps you.

Android / Blackberry10 call information not appearing

I have made an Android app that I am trying to port to a BlackBerry 10 device. Currently, all of the functions of the app work except one, where I try to get information about recent calls from the phone. This works fine on Android, but does not seem to work on the BlackBerry 10 simulator I am using. Here is my code for the section:
final TextView time = (TextView) findViewById(R.id.AddNewEditTextTime);
final TextView date = (TextView) findViewById(R.id.AddNewEditTextDate);
final TextView number = (TextView) findViewById(R.id.AddNewEditTextNumber);

// fields to select.
String[] strFields = { android.provider.CallLog.Calls.NUMBER,
        android.provider.CallLog.Calls.TYPE,
        android.provider.CallLog.Calls.CACHED_NAME,
        android.provider.CallLog.Calls.CACHED_NUMBER_TYPE,
        android.provider.CallLog.Calls.DATE };

// only incoming.
String strSelection = android.provider.CallLog.Calls.TYPE + " = "
        + android.provider.CallLog.Calls.INCOMING_TYPE;

// most recent first
String strOrder = android.provider.CallLog.Calls.DATE + " DESC";

// get a cursor.
Cursor mCallCursor = getContentResolver().query(
        android.provider.CallLog.Calls.CONTENT_URI, // content provider URI
        strFields,    // projection (fields to get)
        strSelection, // selection
        null,         // selection args
        strOrder      // sort order
);

if (mCallCursor.moveToFirst()) {
    String a = mCallCursor.getString(mCallCursor.getColumnIndex("date"));
    String b = mCallCursor.getString(mCallCursor.getColumnIndex("number"));
    mCallCursor.close();

    SimpleDateFormat dateF = new SimpleDateFormat("dd-MMM-yyyy");
    SimpleDateFormat timeF = new SimpleDateFormat("HH:mm");
    String dateString = dateF.format(new Date(Long.parseLong(a)));
    String timeString = timeF.format(new Date(Long.parseLong(a)));

    time.setText(timeString);
    date.setText(dateString);
    number.setText(b);
}
The if (mCallCursor.moveToFirst()) statement is never entered on the BlackBerry 10 device, but works fine on Android. Is there something I'm missing or doing wrong, or is there no way to use the android.provider functions like this on a BlackBerry 10 device?
Apparently accessing the call log is not yet supported:
"This is not supported; the Android API is not hooked up to retrieve this data.
Edit: Usually when there's an equivalent native API, the corresponding API in Android will be supported. The Android API almost always uses the native equivalent for its implementation. AFAIK there isn't a native call logs API."
By bbenninger, at the BlackBerry support forums.

Is it possible to get a list of workflows the document is attached to in Alfresco

I'm trying to get a list of workflows the document is attached to in an Alfresco webscript, but I am kind of stuck.
My original problem is that I have a list of files, and the current user may have workflows assigned to them involving these documents. So now I want to create a webscript that will look in a folder, take all the documents there, and assemble a list of documents together with task references, if there are any for the current user.
I know about the "workflow" object that gives me the list of workflows for the current user, but this is not a solution for my problem.
So, can I get a list of workflows a specific document is attached to?
Well, for future reference, I've found a way to get all the active workflows on a document from JavaScript:
var nodeR = search.findNode('workspace://SpacesStore/' + doc.nodeRef);
for each (wf in nodeR.activeWorkflows)
{
    // Do whatever here.
}
I used the packageContains association to find workflows for a document. Below I posted code in Alfresco JavaScript for active workflows (as zladuric answered) and also for all workflows:
/*global search, logger, workflow*/

var getWorkflowsForDocument, getActiveWorkflowsForDocument;

getWorkflowsForDocument = function () {
    "use strict";
    var doc, parentAssocs, packages, packagesLen, i, pack, props, workflowId, instance, isActive;
    //
    doc = search.findNode("workspace://SpacesStore/8847ea95-108d-4e08-90ab-34114e7b3977");
    parentAssocs = doc.getParentAssocs();
    packages = parentAssocs["{http://www.alfresco.org/model/bpm/1.0}packageContains"];
    //
    if (packages) {
        packagesLen = packages.length;
        //
        for (i = 0; i < packagesLen; i += 1) {
            pack = packages[i];
            props = pack.getProperties();
            workflowId = props["{http://www.alfresco.org/model/bpm/1.0}workflowInstanceId"];
            instance = workflow.getInstance(workflowId);
            /* instance is org.alfresco.repo.workflow.jscript.JscriptWorkflowInstance */
            isActive = instance.isActive();
            logger.log(" + instance: " + workflowId + " (active: " + isActive + ")");
        }
    }
};

getActiveWorkflowsForDocument = function () {
    "use strict";
    var doc, activeWorkflows, activeWorkflowsLen, i, instance;
    //
    doc = search.findNode("workspace://SpacesStore/8847ea95-108d-4e08-90ab-34114e7b3977");
    activeWorkflows = doc.activeWorkflows;
    activeWorkflowsLen = activeWorkflows.length;
    for (i = 0; i < activeWorkflowsLen; i += 1) {
        instance = activeWorkflows[i];
        /* instance is org.alfresco.repo.workflow.jscript.JscriptWorkflowInstance */
        logger.log(" - instance: " + instance.getId() + " (active: " + instance.isActive() + ")");
    }
};

getWorkflowsForDocument();
getActiveWorkflowsForDocument();
Unfortunately the JavaScript API doesn't expose all the workflow functions. It looks like getting the list of workflow instances attached to a document only works in Java (or Java-backed webscripts):
List<WorkflowInstance> workflows = workflowService.getWorkflowsForContent(node.getNodeRef(), true);
A usage of this can be found in the workflow list in the document details: http://svn.alfresco.com/repos/alfresco-open-mirror/alfresco/HEAD/root/projects/web-client/source/java/org/alfresco/web/ui/repo/component/UINodeWorkflowInfo.java
To get to the users who have tasks assigned, you would then need to use the getWorkflowPaths and getTasksForWorkflowPath methods of the WorkflowService, as in the sketch below.
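For illustration, here is a minimal sketch of a Java-backed helper combining those calls; the surrounding class, the injected service, and the owner-matching logic are assumptions of mine, while the three WorkflowService methods are from the public API:

import org.alfresco.model.ContentModel;
import org.alfresco.service.cmr.repository.NodeRef;
import org.alfresco.service.cmr.workflow.WorkflowInstance;
import org.alfresco.service.cmr.workflow.WorkflowPath;
import org.alfresco.service.cmr.workflow.WorkflowService;
import org.alfresco.service.cmr.workflow.WorkflowTask;

import java.util.ArrayList;
import java.util.List;

public class DocumentTaskLookup {

    private WorkflowService workflowService; // injected, e.g. via Spring

    /* Collect the tasks on a document's active workflows that are assigned
       to the given user. */
    public List<WorkflowTask> getTasksForDocument(NodeRef node, String userName) {
        List<WorkflowTask> result = new ArrayList<WorkflowTask>();
        // true = active workflows only
        List<WorkflowInstance> workflows = workflowService.getWorkflowsForContent(node, true);
        for (WorkflowInstance workflow : workflows) {
            for (WorkflowPath path : workflowService.getWorkflowPaths(workflow.getId())) {
                for (WorkflowTask task : workflowService.getTasksForWorkflowPath(path.getId())) {
                    // Using cm:owner as the assignee is an assumption; adjust to your task model
                    Object owner = task.getProperties().get(ContentModel.PROP_OWNER);
                    if (userName.equals(owner)) {
                        result.add(task);
                    }
                }
            }
        }
        return result;
    }

    public void setWorkflowService(WorkflowService workflowService) {
        this.workflowService = workflowService;
    }
}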
