Is there any way to get a public hostname in Google Cloud, as on other cloud platforms?
Currently the machine name is:
computername.c.googleprojectid.internal
but I want something like Amazon or Azure provide:
computername.cloudapp.net
You can use the Google Cloud DNS service to update the DNS record for your host on startup. (You could also use a service like dyn-dns, but I'm assuming that you want to use the Google tools where possible.) It looks like you'd want the "create change" API, using a service account associated with your VM. The request would look something like:
POST https://www.googleapis.com/dns/v1beta1/projects/*myProject*/managedZones/*myZone.com*/changes
{
  "additions": [
    {
      "name": "computername.myZone.com.",
      "type": "A",
      "ttl": 600,
      "rrdatas": [
        "200.201.202.203"
      ]
    }
  ],
  "deletions": []
}
Note that 200.201.202.203 needs to be the external IP address of your VM.
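On the VM you can discover that external IP from the metadata server and assemble the request body before posting it with a service-account token. A minimal sketch in plain Java (the hostname, zone, and IP below are placeholders, not real values):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class DnsChange {
    // Build the JSON body for the Cloud DNS "create change" call.
    // hostname must be fully qualified and end with a trailing dot.
    static String changeBody(String hostname, String externalIp, int ttl) {
        return "{\n"
            + "  \"additions\": [\n"
            + "    {\n"
            + "      \"name\": \"" + hostname + "\",\n"
            + "      \"type\": \"A\",\n"
            + "      \"ttl\": " + ttl + ",\n"
            + "      \"rrdatas\": [\"" + externalIp + "\"]\n"
            + "    }\n"
            + "  ],\n"
            + "  \"deletions\": []\n"
            + "}";
    }

    // Ask the GCE metadata server for the VM's external IP
    // (only works when run on the VM itself; not executed in main below).
    static String fetchExternalIp() throws Exception {
        URL url = new URL("http://metadata.google.internal/computeMetadata/v1/"
            + "instance/network-interfaces/0/access-configs/0/external-ip");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("Metadata-Flavor", "Google"); // required header
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            return in.readLine();
        }
    }

    public static void main(String[] args) {
        System.out.println(changeBody("computername.myZone.com.", "200.201.202.203", 600));
    }
}
```

You would then POST that body to the changes endpoint shown above, authenticating with the service account's OAuth token.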
I am trying to run SimulatePrincipalPolicy through the Java SDK and getting incorrect results.
I have a policy like this attached to the role 'Myrole':
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "id",
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeTags"
      ],
      "Resource": "*"
    }
  ]
}
Java code:
SimulatePrincipalPolicyRequest simulatePrincipalPolicyRequest = new SimulatePrincipalPolicyRequest();
simulatePrincipalPolicyRequest.setPolicySourceArn("arn:aws:iam::123456789012:role/Myrole");
simulatePrincipalPolicyRequest.withActionNames("ec2:DescribeTags");
Result:
{
EvalActionName: ec2:DescribeTags
EvalResourceName: *
EvalDecision: implicitDeny
MatchedStatements: []
MissingContextValues: []
OrganizationsDecisionDetail: {AllowedByOrganizations: false}
EvalDecisionDetails: {}
ResourceSpecificResults: []
}
The response is incorrect, because when I try to perform that action I am able to do so.
I've run into a similar situation when calling simulate-principal-policy directly:
AllowedByOrganizations: false indicates that an organization-wide service control policy (SCP) is applied, which the simulator interprets as denying access.
The issue in my case was that some Org SCPs deny all access in certain regions but not in others.
The simulator seems unable to distinguish one region from another, or at least I did not find a way to work around this.
I am sending a POST request to the Google Analytics Measurement Protocol
at
https://www.google-analytics.com/collect?v=1&t=event&tid=UA-151666808-2&cid=123&el=cus&ea=CLIENT_REGISTRATION_SUCCESS3&ec=Server
However, it is not being tracked on my site. I am sending it from Java; I have tried RestTemplate, FeignClient, gama-client-core, and the google-analytics-java library. The result is always the same: the event is not tracked. If I change my tid to another one, the event is displayed in that other property. And if I call this link through Postman, the result is also successful.
The debug call for your request looks fine.
{
"hitParsingResult": [ {
"valid": true,
"parserMessage": [ ],
"hit": "/debug/collect?v=1\u0026t=event\u0026tid=UA-151666808-2\u0026cid=123\u0026el=cus\u0026ea=CLIENT_REGISTRATION_SUCCESS3\u0026ec=Server"
} ],
"parserMessage": [ {
"messageType": "INFO",
"description": "Found 1 hit in the request."
} ]
}
Data processing time
Check the Realtime reports to ensure that the hits are being recorded; if they are, you are all set. Then wait 24-48 hours for the data to finish processing, and you should see it in the standard reports.
Bot filtering
Make sure you have disabled bot filtering on the view.
New Google Analytics account
It takes up to 72 hours for a newly created Google Analytics account to start showing data.
Quoted from the Google Measurement Protocol reference:
Note that Google has libraries to identify real user agents. Hand crafting your own agent could break at any time.
You should change the request's User-Agent to a browser-like one so Google doesn't think you are a bot.
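As a sketch of that approach in plain JDK Java (the tid/cid/event values are the ones from the question; the User-Agent string is just an example of a browser-like value):

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class GaHit {
    // Build the URL-encoded Measurement Protocol payload.
    static String payload() {
        String[][] params = {
            {"v", "1"}, {"t", "event"}, {"tid", "UA-151666808-2"},
            {"cid", "123"}, {"ec", "Server"},
            {"ea", "CLIENT_REGISTRATION_SUCCESS3"}, {"el", "cus"}
        };
        StringBuilder sb = new StringBuilder();
        for (String[] p : params) {
            if (sb.length() > 0) sb.append('&');
            sb.append(p[0]).append('=')
              .append(URLEncoder.encode(p[1], StandardCharsets.UTF_8));
        }
        return sb.toString();
    }

    // POST the hit with a browser-like User-Agent (not executed in main).
    static void send(String body) throws Exception {
        HttpURLConnection conn = (HttpURLConnection)
            new URL("https://www.google-analytics.com/collect").openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("User-Agent",
            "Mozilla/5.0 (Windows NT 10.0; Win64; x64)");
        conn.setDoOutput(true);
        try (OutputStream os = conn.getOutputStream()) {
            os.write(body.getBytes(StandardCharsets.UTF_8));
        }
        conn.getResponseCode(); // GA returns 200 even for invalid hits
    }

    public static void main(String[] args) {
        System.out.println(payload());
    }
}
```

Note that the /collect endpoint returns 200 regardless of validity, so the /debug/collect endpoint remains the only way to check the hit itself.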
I downloaded the XS_JSCRIPT14_10-70001363 package from the Service Marketplace.
Please suggest how to run this App Router login form on localhost.
I am trying with the npm start command, but getting a UAA service exception. How can I handle this from localhost?
When you download the approuter, either via npm or the Service Marketplace, you have to provide two additional files for a basic setup inside the approuter directory (besides package.json, xs-app.json, etc.).
The default-services.json file holds the variables that tell the approuter where to find the correct authentication server (e.g., XSUAA). You have to provide at least the clientid, clientsecret, and URL of the authorization server in this file, like this:
{
"uaa": {
"url" : "http://my.uaa.server/",
"clientid" : "client-id",
"clientsecret" : "client-secret",
"xsappname" : "my-business-application"
}
}
You can get these parameters, for example, after binding your application on SAP Cloud Platform, Cloud Foundry to an (empty) instance of XSUAA, where you can retrieve the values via cf env <appname> from the VCAP_SERVICES/xsuaa properties (they have exactly the same property names).
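For illustration, the relevant part of the cf env output typically looks like this (all values are placeholders); the credentials block carries exactly the fields you copy into default-services.json:

```json
{
  "VCAP_SERVICES": {
    "xsuaa": [
      {
        "credentials": {
          "clientid": "client-id",
          "clientsecret": "client-secret",
          "url": "http://my.uaa.server/",
          "xsappname": "my-business-application"
        }
      }
    ]
  }
}
```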
In addition, you require the default-env.json file, which holds at least the destination variable telling the approuter which backend microservice to forward the received JSON Web Token to. It may look like this:
{
"destinations": [ {
"name": "my-destination", "url": "http://localhost:1234", "forwardAuthToken": true
}]
}
Afterwards, inside the approuter directory, you can simply run npm start, which by default serves the approuter at http://localhost:5000. It also writes helpful console output you can use to debug the parameters above.
EDIT: Turns out I was incorrect; it is apparently possible to run the approuter locally.
First of all, here is the documentation for the approuter: https://help.sap.com/viewer/65de2977205c403bbc107264b8eccf4b/Cloud/en-US/01c5f9ba7d6847aaaf069d153b981b51.html
As far as I understood, you need to provide two files to the approuter for it to run locally, default-services.json and default-env.json (put them in the same directory as your package.json).
The default-services.json has a format like this:
{
"uaa": {
"url" : "http://my.uaa.server/",
"clientid" : "client-id",
"clientsecret" : "client-secret",
"xsappname" : "my-business-application"
}
}
The default-env.json is simply a JSON file holding the environment variables that the approuter needs to access, like so:
{
"VCAP_SERVICES": <env>,
...
}
Unfortunately, the documentation does not state which variables are required, therefore I cannot provide you with a working example.
Hope this helps you! Should you manage to get this running, I'm sure others would appreciate if you share your knowledge here.
Is there a way in CF to update the VCAP env port for a service from my application code? Let's say I want to change the port to 12345,
e.g.
{
"VCAP_SERVICES": {
"mongodb": [
{
"credentials": {
"dbname": "ztmvvvmtrz",
"hostname": "13.15.241.29",
"password": "abzArl7AsssseKpi",
"port": "22241",
While trying cf set-env, it only updates the user-provided env and wasn't able to help.
An example in Java / Node.js would be great.
I'm not sure exactly which information you're looking to change here, but values in environment variables like VCAP_SERVICES, VCAP_APPLICATION, PORT and anything starting with CF_ like CF_INSTANCE_PORT, CF_INSTANCE_PORTS and/or CF_INSTANCE_IP are all provided for you by the platform. They are effectively static. Changing them will not do anything.
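To illustrate the read-only direction, here is a sketch that pulls the port out of a VCAP_SERVICES-style document using only JDK regex (the sample string mirrors the question's JSON; in a real app you would read System.getenv("VCAP_SERVICES"), and a JSON library would be more robust than regex):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class VcapPort {
    // Extract the first "port" credential from a VCAP_SERVICES document.
    static String extractPort(String vcapServices) {
        Matcher m = Pattern.compile("\"port\"\\s*:\\s*\"(\\d+)\"").matcher(vcapServices);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        // Hard-coded here; normally System.getenv("VCAP_SERVICES").
        String vcap = "{ \"mongodb\": [ { \"credentials\": {"
            + " \"hostname\": \"13.15.241.29\", \"port\": \"22241\" } } ] }";
        System.out.println(extractPort(vcap)); // prints 22241
    }
}
```

The point is that your code consumes these values; changing them in the process environment would not move the actual service.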
I am writing a Java app which sends email via the Amazon SES service, and that works fine. But now I need to retrieve email sending statistics as granular as a per-email-ID basis.
So I use CloudWatch and pass the notifications to SNS. Yet I cannot figure out how to get the statistics via an explicit request to the web service. The SNS endpoints can dispatch the data on an as-needed basis, but I want to make an explicit request for stats from my app.
The S3 service is for storage. Do I need to store stats on it somehow, so that I can query them later?
Any solutions and details are appreciated.
For your requirement, as I understand it, AWS DynamoDB is the best fit. DynamoDB is a NoSQL database. After sending an email you can store the result (email ID, time if you want, etc.) in DynamoDB via SNS or a Lambda function, and then query DynamoDB to get the statistics.
If you want to go the S3 bucket way, you have to maintain one JSON file and overwrite it each time.
Well, this is the way I got the sending stats.
When it comes to Amazon SES, it gives you very limited sending statistics, and none that point to a particular sent email.
Amazon CloudWatch gives you statistics very similar to SES's, except that the timestamps are precise to the minute. So if you know when you sent an email via SES (by storing the time in a DB), you can estimate which stat belongs to which email.
Then you can use Amazon Kinesis Firehose combined with Amazon S3, which is where I landed. Firehose is a stream that pushes statistics to S3 storage; SES provides the configuration set that lets you attach it.
S3 stores anything you like, including email sending statistics. You can get up to 5 event types:
Send
Delivered
Bounce
Complaint
Reject
The stats are stored in files which you can access and read using Amazon's SDK for Java.
What you get then is a JSON file of email sending statistics, e.g.:
{
"eventType":"Bounce",
"bounce":{
"bounceType":"Permanent",
"bounceSubType":"General",
"bouncedRecipients":[
{
"emailAddress":"recipient@example.com",
"action":"failed",
"status":"5.1.1",
"diagnosticCode":"smtp; 550 5.1.1 user unknown"
}
],
"timestamp":"2016-10-14T05:02:52.574Z",
"feedbackId":"EXAMPLE7c1923f27-ab0c24cb-5d9f-4e77-99b8-85e4cb3a33bb-000000",
"reportingMTA":"dsn; ses-example.com"
},
"mail":{
"timestamp":"2016-10-14T05:02:16.645Z",
"source":"sender@example.com",
"sourceArn":"arn:aws:ses:us-east-1:123456789012:identity/sender@example.com",
"sendingAccountId":"123456789012",
"messageId":"EXAMPLE7c191be45-e9aedb9a-02f9-4d12-a87d-dd0099a07f8a-000000",
"destination":[
"recipient@example.com"
],
"headersTruncated":false,
"headers":[
{
"name":"From",
"value":"sender@example.com"
},
{
"name":"To",
"value":"recipient@example.com"
},
{
"name":"Subject",
"value":"Email Subject"
},
{
"name":"MIME-Version",
"value":"1.0"
},
{
"name":"Content-Type",
"value":"multipart/mixed; boundary=\"----=_Part_0_716996660.1476421336341\""
},
{
"name":"X-SES-MESSAGE-TAGS",
"value":"myCustomTag1=myCustomTagValue1, myCustomTag2=myCustomTagValue2"
}
],
"commonHeaders":{
"from":[
"sender@example.com"
],
"to":[
"recipient@example.com"
],
"messageId":"EXAMPLE7c191be45-e9aedb9a-02f9-4d12-a87d-dd0099a07f8a-000000",
"subject":"Email Subject"
},
"tags":{
"ses:configuration-set":[
"my-configuration-set"
],
"ses:source-ip":[
"192.0.2.0"
],
"ses:from-domain":[
"example.com"
],
"ses:caller-identity":[
"ses_user"
],
"myCustomTag1":[
"myCustomTagValue1"
],
"myCustomTag2":[
"myCustomTagValue2"
]
}
}
}
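Reading those files back in Java, you can pull out the fields you care about. A minimal sketch using only JDK regex (a real app would more likely use a JSON library such as Jackson; the sample string is a cut-down version of the event above):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SesEventParser {
    // Extract a string field like "eventType" from one SES event record.
    static String field(String json, String name) {
        Matcher m = Pattern.compile("\"" + name + "\"\\s*:\\s*\"([^\"]+)\"").matcher(json);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        String event = "{ \"eventType\": \"Bounce\","
            + " \"bounce\": { \"bounceType\": \"Permanent\","
            + " \"bounceSubType\": \"General\" } }";
        System.out.println(field(event, "eventType"));  // prints Bounce
        System.out.println(field(event, "bounceType")); // prints Permanent
    }
}
```

The messageId and the X-SES-MESSAGE-TAGS headers can be extracted the same way, which is what lets you tie a stat back to a particular sent email.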
That is about it.