Getting the latest data from a custom AWS CloudWatch metric in Java

I have a custom metric in AWS CloudWatch and I am putting data into it through the AWS Java API.
for (int i = 0; i < collection.size(); i++) {
    String[] cell = collection.get(i).split("\\|\\|");
    List<Dimension> dimensions = new ArrayList<>();
    dimensions.add(new Dimension().withName(dimension[0]).withValue(cell[0]));
    dimensions.add(new Dimension().withName(dimension[1]).withValue(cell[1]));
    MetricDatum datum = new MetricDatum().withMetricName(metricName)
            .withUnit(StandardUnit.None)
            .withValue(Double.valueOf(cell[2]))
            .withDimensions(dimensions);
    PutMetricDataRequest request = new PutMetricDataRequest()
            .withNamespace(namespace + "_" + cell[3])
            .withMetricData(datum);
    String response = String.valueOf(cw.putMetricData(request));
    GetMetricDataRequest res = new GetMetricDataRequest().withMetricDataQueries();
    //cw.getMetricData();
    com.amazonaws.services.cloudwatch.model.Metric m = new com.amazonaws.services.cloudwatch.model.Metric();
    m.setMetricName(metricName);
    m.setDimensions(dimensions);
    m.setNamespace(namespace);
    MetricStat ms = new MetricStat().withMetric(m);
    MetricDataQuery metricDataQuery = new MetricDataQuery();
    metricDataQuery.withMetricStat(ms);
    metricDataQuery.withId("m1");
    List<MetricDataQuery> mqList = new ArrayList<MetricDataQuery>();
    mqList.add(metricDataQuery);
    res.withMetricDataQueries(mqList);
    GetMetricDataResult result1 = cw.getMetricData(res);
}
Now I want to be able to fetch the latest data entered for a particular namespace, metric name and dimension combination through the Java API. I am not able to find appropriate documentation from AWS regarding this. Can anyone please help me?

I got the results from CloudWatch with the code below.
GetMetricDataRequest getMetricDataRequest = new GetMetricDataRequest().withMetricDataQueries();
Integer integer = new Integer(300);
Iterator<Map.Entry<String, String>> entries = dimensions.entrySet().iterator();
List<Dimension> dList = new ArrayList<Dimension>();
while (entries.hasNext()) {
    Map.Entry<String, String> entry = entries.next();
    dList.add(new Dimension().withName(entry.getKey()).withValue(entry.getValue()));
}
com.amazonaws.services.cloudwatch.model.Metric metric = new com.amazonaws.services.cloudwatch.model.Metric();
metric.setNamespace(namespace);
metric.setMetricName(metricName);
metric.setDimensions(dList);
MetricStat ms = new MetricStat().withMetric(metric)
        .withPeriod(integer)
        .withUnit(StandardUnit.None)
        .withStat("Average");
MetricDataQuery metricDataQuery = new MetricDataQuery().withMetricStat(ms)
        .withId("m1");
List<MetricDataQuery> mqList = new ArrayList<>();
mqList.add(metricDataQuery);
getMetricDataRequest.withMetricDataQueries(mqList);
long timestamp = 1536962700000L;
long timestampEnd = 1536963000000L;
Date d = new Date(timestamp);
Date dEnd = new Date(timestampEnd);
getMetricDataRequest.withStartTime(d);
getMetricDataRequest.withEndTime(dEnd);
GetMetricDataResult result1 = cw.getMetricData(getMetricDataRequest);
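To pull the most recent datapoint out of that result, one option (a sketch, not part of the original answer) is to ask CloudWatch to return the newest values first via ScanBy and then read the first timestamp/value pair of each returned series:
// Sketch, assuming the request built above: scan newest-first and take index 0 of each series.
getMetricDataRequest.withScanBy(ScanBy.TimestampDescending);
GetMetricDataResult latest = cw.getMetricData(getMetricDataRequest);
for (MetricDataResult mdr : latest.getMetricDataResults()) {
    if (!mdr.getValues().isEmpty()) {
        // getTimestamps() and getValues() are parallel lists; with TimestampDescending
        // the newest datapoint is at index 0.
        System.out.println(mdr.getLabel() + " latest at " + mdr.getTimestamps().get(0)
                + " = " + mdr.getValues().get(0));
    }
}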

Related

Spark - createDataFrame returns NPE

I'm trying to run these lines:
dsFinalSegRfm.show(20, false);
Long compilationTime = System.currentTimeMillis() / 1000;
JavaRDD<CustomerKnowledgeEntity> customerKnowledgeList = dsFinalSegRfm.javaRDD().map(
        (Function<Row, CustomerKnowledgeEntity>) rowRfm -> {
            CustomerKnowledgeEntity customerKnowledge = new CustomerKnowledgeEntity();
            customerKnowledge.setCustomerId(new Long(getString(rowRfm.getAs("CLI_ID"))));
            customerKnowledge.setKnowledgeType("rfm-segmentation");
            customerKnowledge.setKnowledgeTypeId("default");
            InformationsEntity infos = new InformationsEntity();
            infos.setCreationDate(new Date());
            infos.setModificationDate(new Date());
            infos.setUserModification("addKnowledge");
            customerKnowledge.setInformations(infos);
            List<KnowledgeEntity> knowledgeEntityList = new ArrayList<>();
            List<WrappedArray<String>> segList = rowRfm.getList(rowRfm.fieldIndex("SEGS"));
            for (WrappedArray<String> seg : segList) {
                KnowledgeEntity knowledge = new KnowledgeEntity();
                Map<String, Object> attr = new HashMap<>();
                attr.put("segment", seg.apply(1));
                attr.put("segmentSemester", seg.apply(2));
                knowledge.setKnowledgeId(seg.apply(0));
                knowledge.setAttributes(attr);
                knowledge.setPriority(0);
                knowledge.setCount(1);
                knowledge.setDeleted(false);
                knowledgeEntityList.add(knowledge);
            }
            customerKnowledge.setKnowledgeCollections(knowledgeEntityList);
            return customerKnowledge;
        });
Long dataConstructionTime = System.currentTimeMillis() / 1000;
Dataset<Row> dataset = sparkSession
        .createDataFrame(customerKnowledgeList, CustomerKnowledgeEntity.class)
        .repartition(16)
        .cache();
The dsFinalSegRfm.show(20, false) call returns what I expect, but I'm getting a NullPointerException from the createDataFrame method.
I'm learning Spark, but I find it very opaque for debugging...
Any help is appreciated!
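No answer is recorded here, but as a debugging hint (not a confirmed diagnosis): createDataFrame(rdd, beanClass) infers the schema by JavaBean introspection, so field types the encoder cannot map (for example Map<String, Object> attributes or Scala WrappedArray values), a missing public no-arg constructor, or getters returning null are worth checking first. A minimal sketch of a bean shape the encoder can handle (class and field names here are illustrative, not from the original code):
import java.io.Serializable;

// Hypothetical bean: public no-arg constructor plus a getter/setter per field, using only
// types Spark can map (primitives/boxed types, String, java.sql.Timestamp, nested beans, Lists of those).
public class CustomerKnowledgeRow implements Serializable {
    private Long customerId;
    private String knowledgeType;

    public CustomerKnowledgeRow() { }

    public Long getCustomerId() { return customerId; }
    public void setCustomerId(Long customerId) { this.customerId = customerId; }

    public String getKnowledgeType() { return knowledgeType; }
    public void setKnowledgeType(String knowledgeType) { this.knowledgeType = knowledgeType; }
}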

Custom DataProvider in NatTable

I create a NatTable the following way, but I can access the cells only through the getters and setters in my Student class. How else can I access cells? Should I create my own BodyDataProvider or use IDataProvider? If so, could someone give some examples of implementing such providers?
final ColumnGroupModel columnGroupModel = new ColumnGroupModel();
ColumnHeaderLayer columnHeaderLayer;
String[] propertyNames = { "name", "groupNumber", "examName", "examMark" };
Map<String, String> propertyToLabelMap = new HashMap<String, String>();
propertyToLabelMap.put("name", "Full Name");
propertyToLabelMap.put("groupNumber", "Group");
propertyToLabelMap.put("examName", "Name");
propertyToLabelMap.put("examMark", "Mark");
DefaultBodyDataProvider<Student> bodyDataProvider =
        new DefaultBodyDataProvider<Student>(students, propertyNames);
ColumnGroupBodyLayerStack bodyLayer =
        new ColumnGroupBodyLayerStack(new DataLayer(bodyDataProvider), columnGroupModel);
DefaultColumnHeaderDataProvider defaultColumnHeaderDataProvider =
        new DefaultColumnHeaderDataProvider(propertyNames, propertyToLabelMap);
DefaultColumnHeaderDataLayer columnHeaderDataLayer =
        new DefaultColumnHeaderDataLayer(defaultColumnHeaderDataProvider);
columnHeaderLayer = new ColumnHeaderLayer(columnHeaderDataLayer, bodyLayer, bodyLayer.getSelectionLayer());
ColumnGroupHeaderLayer columnGroupHeaderLayer =
        new ColumnGroupHeaderLayer(columnHeaderLayer, bodyLayer.getSelectionLayer(), columnGroupModel);
columnGroupHeaderLayer.addColumnsIndexesToGroup("Exams", 2, 3);
columnGroupHeaderLayer.setGroupUnbreakable(2);
final DefaultRowHeaderDataProvider rowHeaderDataProvider = new DefaultRowHeaderDataProvider(bodyDataProvider);
DefaultRowHeaderDataLayer rowHeaderDataLayer = new DefaultRowHeaderDataLayer(rowHeaderDataProvider);
ILayer rowHeaderLayer = new RowHeaderLayer(rowHeaderDataLayer, bodyLayer, bodyLayer.getSelectionLayer());
final DefaultCornerDataProvider cornerDataProvider =
        new DefaultCornerDataProvider(defaultColumnHeaderDataProvider, rowHeaderDataProvider);
DataLayer cornerDataLayer = new DataLayer(cornerDataProvider);
ILayer cornerLayer = new CornerLayer(cornerDataLayer, rowHeaderLayer, columnGroupHeaderLayer);
GridLayer gridLayer = new GridLayer(bodyLayer, columnGroupHeaderLayer, rowHeaderLayer, cornerLayer);
NatTable table = new NatTable(shell, gridLayer, true);
As answered in your previous question How do I fix NullPointerException and putting data into NatTable, this is explained in the NatTable Getting Started Tutorial.
If you need some sample code, try the NatTable Examples Application.
And as noted in your previous question, your data structure does not work in a table, as you have nested objects where the child objects are stored in an array. So this is more a tree than a table.
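Since the question asks for an example of such a provider, here is a minimal sketch of a custom IDataProvider backed by the students list; the Student accessors used in the switch are assumptions about that class and should be adapted to its real fields:
// Sketch of a hand-written body data provider; it would replace DefaultBodyDataProvider in the setup above.
// The getName()/getGroupNumber()/getExamName()/getExamMark() accessors are assumed, not confirmed.
IDataProvider customBodyDataProvider = new IDataProvider() {

    @Override
    public Object getDataValue(int columnIndex, int rowIndex) {
        Student student = students.get(rowIndex);
        switch (columnIndex) {
            case 0: return student.getName();
            case 1: return student.getGroupNumber();
            case 2: return student.getExamName();
            case 3: return student.getExamMark();
            default: return null;
        }
    }

    @Override
    public void setDataValue(int columnIndex, int rowIndex, Object newValue) {
        // call the matching setter per column if cells should be editable
    }

    @Override
    public int getColumnCount() {
        return 4;
    }

    @Override
    public int getRowCount() {
        return students.size();
    }
};
The more common alternative is a ListDataProvider combined with an IColumnPropertyAccessor (for example a ReflectiveColumnPropertyAccessor over the propertyNames array), which avoids writing the switch by hand.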

RandomForest with Weka in Java

I am working on a project and I need some examples of how to implement RandomForest in Java with Weka. I did it with IBk() and it worked. If I do it with RandomForest in the same way, it does not work.
Does anyone have a simple example of how to implement RandomForest and how to get the probability for each class? (I did it with IBk using the classifier.distributionForInstance(instance) function and it returned the probabilities for each class.) How can I do it for RandomForest? Will I need to get the probability of every tree and combine them?
//example
ConverterUtils.DataSource source = new ConverterUtils.DataSource("..../edit.arff");
Instances dataset = source.getDataSet();
dataset.setClassIndex(dataset.numAttributes() - 1);
IBk classifier = new IBk(5);
classifier.buildClassifier(dataset);
Instance instance = new SparseInstance(2);
instance.setValue(0, 65);  //example data
instance.setValue(1, 120); //example data
double[] prediction = classifier.distributionForInstance(instance);
//now I get the probability for the first class
System.out.println("Prediction for the first class is: " + prediction[0]);
You can calculate the infogain while building the model in the RandomForest. It is much slower and requires a lot of memory while building the model. I am not so sure about the documentation. You can add options or set values while building the model.
//numFolds is the number of cross-validation folds, usually between 2 and 10
//br is your BufferedReader
Instances trainData = new Instances(br);
trainData.setClassIndex(trainData.numAttributes() - 1);
RandomForest rf = new RandomForest();
rf.setNumTrees(50);
//You can set the options here
String[] options = new String[1];
options[0] = "-R";
rf.setOptions(options);
rf.buildClassifier(trainData);
weka.filters.supervised.attribute.AttributeSelection as = new weka.filters.supervised.attribute.AttributeSelection();
Ranker ranker = new Ranker();
InfoGainAttributeEval infoGainAttrEval = new InfoGainAttributeEval();
as.setEvaluator(infoGainAttrEval);
as.setSearch(ranker);
as.setInputFormat(trainData);
trainData = Filter.useFilter(trainData, as);
Evaluation evaluation = new Evaluation(trainData);
evaluation.crossValidateModel(rf, trainData, numFolds, new Random(1));
// Using a HashMap to store the infogain values of the attributes
int count = 0;
Map<String, Double> infogainscores = new HashMap<String, Double>();
for (int i = 0; i < trainData.numAttributes(); i++) {
    String t_attr = trainData.attribute(i).name();
    //System.out.println(i + trainData.attribute(i).name());
    double infogain = infoGainAttrEval.evaluateAttribute(i);
    if (infogain != 0) {
        //System.out.println(t_attr + "= " + infogain);
        infogainscores.put(t_attr, infogain);
        count = count + 1;
    }
}
//iterating over the hashmap
Iterator<Map.Entry<String, Double>> it = infogainscores.entrySet().iterator();
while (it.hasNext()) {
    Map.Entry<String, Double> pair = it.next();
    System.out.println(pair.getKey() + " = " + pair.getValue());
    it.remove(); // avoids a ConcurrentModificationException
}
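On the original question about per-class probabilities: RandomForest exposes the same distributionForInstance(Instance) method as IBk and already averages the votes of its trees, so there is no need to combine per-tree probabilities by hand. A minimal sketch along the lines of the IBk example above (the ARFF path is a placeholder):
// Sketch: train a RandomForest and read the averaged per-class probabilities.
ConverterUtils.DataSource source = new ConverterUtils.DataSource("edit.arff"); // placeholder path
Instances dataset = source.getDataSet();
dataset.setClassIndex(dataset.numAttributes() - 1);
RandomForest rf = new RandomForest();
rf.buildClassifier(dataset);
// distributionForInstance returns one probability per class value,
// already combined over all trees in the forest.
Instance instance = dataset.firstInstance();
double[] distribution = rf.distributionForInstance(instance);
for (int c = 0; c < distribution.length; c++) {
    System.out.println("P(" + dataset.classAttribute().value(c) + ") = " + distribution[c]);
}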

crosstab and crossDataSet

I'm trying to create a crosstab with three rows and n columns. I used a CrosstabDataset and a JRBeanCollectionDataSource to show my data. My problem is that I can only show the last object in my collection data source; I don't have access to the data in my crosstab dataset.
NB:
I used JRDesignCrosstab (Java code) to create the crosstab.
public static JRDesignCrosstab CrosstabPanel(String parameterName , JasperDesign jasperDesign, JRDesignDataset subDataset) throws JRException {
// parameter
JRDesignParameter parameter = new JRDesignParameter();
parameter.setName(parameterName);
parameter.setValueClass(java.lang.Object.class);
jasperDesign.addParameter(parameter);
subDataset.addParameter(parameter);
//Crosstab
JRDesignCrosstab crosstab = new JRDesignCrosstab();
crosstab.setX(-90);
crosstab.setY(-4);
crosstab.setWidth(600);
crosstab.setHeight(400);
//Expression :
JRDesignExpression expression = new JRDesignExpression("$P{"+parameterName+"}");
//CrosstabDataset
JRDesignCrosstabDataset dataSet = new JRDesignCrosstabDataset();
//datasetrun
JRDesignDatasetRun dsr = new JRDesignDatasetRun();
dsr.setDatasetName(subDataset.getName());
dsr.setDataSourceExpression(expression);
//datasetrun into CrosstabDataset
dataSet.setResetType(ResetTypeEnum.NONE);
dataSet.setDatasetRun(dsr);
crosstab.setDataset(dataSet);
//Bucket Row
JRDesignCrosstabBucket bucket = new JRDesignCrosstabBucket();
JRDesignExpression expressionField = new JRDesignExpression();
expressionField.setText("$F{commissionSimPaye}");
bucket.setValueClassName("net.sf.jasperreports.engine.DataSource");
bucket.setExpression(expressionField);
//Row Group;
JRDesignCrosstabRowGroup rowGroup = new JRDesignCrosstabRowGroup();
rowGroup.setName("rowGroup");
rowGroup.setBucket(bucket);
rowGroup.setWidth(68*2+1);
rowGroup.setTotalPosition(CrosstabTotalPositionEnum.END);
crosstab.addRowGroup(rowGroup);
//Bucket Second Row
bucket = new JRDesignCrosstabBucket();
expressionField = new JRDesignExpression();
expressionField.setText("$F{commissionSimPaye}");
bucket.setValueClassName("net.sf.jasperreports.engine.ReportContext");
bucket.setExpression(expressionField);
//Row Group;
rowGroup = new JRDesignCrosstabRowGroup();
rowGroup.setName("secondRowGroup");
rowGroup.setBucket(bucket);
rowGroup.setWidth(68*2+1);
rowGroup.setTotalPosition(CrosstabTotalPositionEnum.END);
crosstab.addRowGroup(rowGroup);
//Bucket Column
bucket = new JRDesignCrosstabBucket();
expressionField = new JRDesignExpression();
expressionField.setText("$F{commissionSimCalcule}");
bucket.setValueClassName("java.lang.Object");
bucket.setExpression(expressionField);
//ColumnGroup
JRDesignCrosstabColumnGroup ColumnGroup = new JRDesignCrosstabColumnGroup();
ColumnGroup.setName("columnGroup");
ColumnGroup.setBucket(bucket);
ColumnGroup.setHeight(60);
ColumnGroup.setTotalPosition(CrosstabTotalPositionEnum.END);
crosstab.addColumnGroup(ColumnGroup);
JRDesignExpression expressionMesaure = new JRDesignExpression();
expressionMesaure.setText("$F{commissionSimCalcule}");
JRDesignCrosstabMeasure measure = new JRDesignCrosstabMeasure();
measure.setName("ColumContent"+0);
measure.setValueExpression(expressionMesaure);
measure.setValueClassName("java.lang.Object");
crosstab.addMeasure(measure);
expressionMesaure = new JRDesignExpression();
expressionMesaure.setText("$F{commissionSimPaye}");
measure = new JRDesignCrosstabMeasure();
measure.setName("ColumContent"+1);
measure.setValueExpression(expressionMesaure);
measure.setValueClassName("java.lang.Object");
crosstab.addMeasure(measure);
expressionMesaure = new JRDesignExpression();
expressionMesaure.setText("$F{commissionSimAPaye}");
measure = new JRDesignCrosstabMeasure();
measure.setName("ColumContent"+2);
measure.setValueExpression(expressionMesaure);
measure.setValueClassName("java.lang.Object");
crosstab.addMeasure(measure);
//cell contents
JRDesignTextField textField = new JRDesignTextField();
JRDesignCrosstabCell cell = new JRDesignCrosstabCell();
JRDesignExpression expressionTextField = new JRDesignExpression();
JRDesignCellContents cellContents = new JRDesignCellContents();
textField.setX(0);
textField.setY(0);
textField.setWidth(68);
textField.setHeight(20);
textField.setHorizontalAlignment(HorizontalAlignEnum.RIGHT);
textField.getLineBox().getLeftPen().setLineWidth(1);
textField.getLineBox().getTopPen().setLineWidth(1);
textField.getLineBox().getRightPen().setLineWidth(1);
textField.getLineBox().getBottomPen().setLineWidth(1);
cell.setHeight(20);
cell.setWidth(68);
expressionTextField.setText("$V{ColumContent"+0+"}");
textField.setExpression(expressionTextField);
cellContents.addElement(textField);
cell.setContents(cellContents);
crosstab.addCell(cell);
return crosstab;
}
Problem solved: I reworked the structure of the JRBeanCollectionDataSource.
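The fixed data source is not shown, but since the crosstab expressions reference $F{commissionSimPaye}, $F{commissionSimCalcule} and $F{commissionSimAPaye}, the collection handed to the dataset run has to contain one bean per detail record, with a getter for each of those fields, and the fields must also be declared on the subDataset. A hedged sketch of that bean shape (the class name and constructor are illustrative):
// Hypothetical bean for the JRBeanCollectionDataSource: one instance per record,
// one getter per $F{...} field used by the crosstab buckets and measures.
public class CommissionRow {
    private final Double commissionSimCalcule;
    private final Double commissionSimPaye;
    private final Double commissionSimAPaye;

    public CommissionRow(Double calcule, Double paye, Double aPaye) {
        this.commissionSimCalcule = calcule;
        this.commissionSimPaye = paye;
        this.commissionSimAPaye = aPaye;
    }

    public Double getCommissionSimCalcule() { return commissionSimCalcule; }
    public Double getCommissionSimPaye() { return commissionSimPaye; }
    public Double getCommissionSimAPaye() { return commissionSimAPaye; }
}

// Filled and passed as the crosstab parameter, e.g.:
// parameters.put(parameterName, new JRBeanCollectionDataSource(rows));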

Retrieve multiple accounts from Microsoft CRM 2011 Online

With Microsoft CRM 2011 Online and using web services, I am using the method below in my Main.java with the OrganizationServiceStub class created by the web services call. The output for the number of records retrieved is -1; can someone help me see where I am going wrong? I want to retrieve the accounts where the name begins with "Tel" without giving the accountid. I can see the data exists in CRM.
Thanks
public static void getAccountDetails(OrganizationServiceStub service, ArrayOfstring fields)
{
    try {
        ArrayOfanyType aa = new ArrayOfanyType();
        aa.setAnyType(new String[] {"Tel"});
        ConditionExpression condition1 = new ConditionExpression();
        condition1.setAttributeName("name");
        condition1.setOperator(ConditionOperator.BeginsWith);
        condition1.setValues(aa);
        ArrayOfConditionExpression ss = new ArrayOfConditionExpression();
        ss.setConditionExpression(new ConditionExpression[] {condition1});
        FilterExpression filter1 = new FilterExpression();
        filter1.setConditions(ss);
        QueryExpression query = new QueryExpression();
        query.setEntityName("account");
        ColumnSet cols = new ColumnSet();
        cols.setColumns(fields);
        query.setColumnSet(cols);
        query.setCriteria(filter1);
        RetrieveMultiple ll = new RetrieveMultiple();
        ll.setQuery(query);
        RetrieveMultipleResponse result1 = service.retrieveMultiple(ll);
        EntityCollection accounts = result1.getRetrieveMultipleResult();
        System.out.println(accounts.getTotalRecordCount());
    }
    catch (IOrganizationService_RetrieveMultiple_OrganizationServiceFaultFault_FaultMessage e) {
        logger.error(e.getMessage());
        e.printStackTrace();
    }
    catch (RemoteException e) {
        logger.error(e.getMessage());
        e.printStackTrace();
    }
}
For Java, the following code snippet works for the above issue:
ArrayOfanyType aa = new ArrayOfanyType();
aa.setAnyType(new String[] {"555"});
ConditionExpression condition1 = new ConditionExpression();
condition1.setAttributeName("telephone1");
condition1.setOperator(ConditionOperator.BeginsWith);
condition1.setValues(aa);
ArrayOfConditionExpression ss = new ArrayOfConditionExpression();
ss.setConditionExpression(new ConditionExpression[] {condition1});
FilterExpression filter1 = new FilterExpression();
filter1.setConditions(ss);
QueryExpression query = new QueryExpression();
query.setEntityName("account");
PagingInfo pagingInfo = new PagingInfo();
pagingInfo.setReturnTotalRecordCount(true);
query.setPageInfo(pagingInfo);
OrganizationServiceStub.ColumnSet colSet = new OrganizationServiceStub.ColumnSet();
OrganizationServiceStub.ArrayOfstring cols = new OrganizationServiceStub.ArrayOfstring();
cols.setString(new String[]{"name", "telephone1", "address1_city"});
colSet.setColumns(cols);
query.setColumnSet(colSet);
query.setCriteria(filter1);
RetrieveMultiple ll = new RetrieveMultiple();
ll.setQuery(query);
OrganizationServiceStub.RetrieveMultipleResponse response = serviceStub.retrieveMultiple(ll);
EntityCollection result = response.getRetrieveMultipleResult();
ArrayOfEntity attributes = result.getEntities();
Entity[] keyValuePairs = attributes.getEntity();
for (int i = 0; i < keyValuePairs.length; i++) {
    OrganizationServiceStub.KeyValuePairOfstringanyType[] keyValuePairss = keyValuePairs[i].getAttributes().getKeyValuePairOfstringanyType();
    for (int j = 0; j < keyValuePairss.length; j++) {
        System.out.print(keyValuePairss[j].getKey() + ": ");
        System.out.println(keyValuePairss[j].getValue());
    }
}
Not sure how similar your EntityCollection object is to the .NET version in the SDK; however, you need to specify ReturnTotalRecordCount in the query's PagingInfo in .NET for the TotalRecordCount property to have a value. Could you not instead check accounts.Entities.Count?
Note: I'm not a Java guy either...
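In the Java stub used here, the equivalent of checking Entities.Count would be something like the following, reusing the accessors already shown in the working snippet above (a sketch, not tested against CRM):
// Count the returned accounts directly instead of TotalRecordCount,
// which stays -1 unless ReturnTotalRecordCount is set on the PagingInfo.
EntityCollection accounts = result1.getRetrieveMultipleResult();
Entity[] returned = accounts.getEntities().getEntity();
int retrieved = (returned == null) ? 0 : returned.length;
System.out.println("Accounts retrieved: " + retrieved);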
