Google Ads API - Java / How to get all existing Keyword Plans?

I've figured out how to create and delete keyword plans, but I can't figure out how to get a list of all my existing keyword plans (resource names / plan IDs).
final long customerId = Long.valueOf("XXXXXXXXXX");
GoogleAdsClient googleAdsClient = new ...
KeywordPlanServiceClient client = googleAdsClient.getVersion8().createKeywordPlanServiceClient();
String[] allExistingKeywordPlans = client. ???
<dependency>
  <groupId>com.google.api-ads</groupId>
  <artifactId>google-ads</artifactId>
  <version>16.0.0</version>
</dependency>
Further resources:
https://developers.google.com/google-ads/api/docs/samples/add-keyword-plan
Any hints on how this can be solved are highly appreciated! Many thanks in advance!

You can try to fetch the keyword_plan resources from your account. This is how I've done it to create remove operations for all the existing keyword plans; note that the search goes through a GoogleAdsServiceClient rather than the KeywordPlanServiceClient:
GoogleAdsServiceClient.SearchPagedResponse response = client.search(SearchGoogleAdsRequest.newBuilder()
    .setQuery("SELECT keyword_plan.resource_name FROM keyword_plan")
    .setCustomerId(Objects.requireNonNull(googleAdsClient.getLoginCustomerId()).toString())
    .build());
List<KeywordPlanOperation> keywordPlanOperations = response.getPage().getResponse().getResultsList().stream()
    .map(x -> KeywordPlanOperation.newBuilder()
        .setRemove(x.getKeywordPlan().getResourceName())
        .build())
    .collect(Collectors.toList());
Of course, the same query can also be applied to your use case; see the sketch below.
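To just list the resource names instead (the original question), here is a minimal sketch along the same lines, assuming the googleAdsClient and customerId variables from the question:
try (GoogleAdsServiceClient googleAdsServiceClient =
        googleAdsClient.getVersion8().createGoogleAdsServiceClient()) {
    GoogleAdsServiceClient.SearchPagedResponse response = googleAdsServiceClient.search(
        SearchGoogleAdsRequest.newBuilder()
            .setCustomerId(String.valueOf(customerId))
            .setQuery("SELECT keyword_plan.id, keyword_plan.resource_name FROM keyword_plan")
            .build());
    // SearchPagedResponse pages through results transparently when iterated.
    for (GoogleAdsRow row : response.iterateAll()) {
        System.out.println(row.getKeywordPlan().getResourceName());
    }
}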

This is for PHP, if you'd like to remove all of the existing keyword plans:
$googleAdsServiceClient = $this->googleAdsClient->getGoogleAdsServiceClient();
/** @var GoogleAdsServerStreamDecorator $stream */
$stream = $googleAdsServiceClient->searchStream(
    $this->linkedCustomerId,
    'SELECT keyword_plan.resource_name FROM keyword_plan'
);
$keywordPlanServiceClient = $this->googleAdsClient->getKeywordPlanServiceClient();
/** @var GoogleAdsRow $googleAdsRow */
foreach ($stream->iterateAllElements() as $googleAdsRow) {
    $keywordPlanOperation = new KeywordPlanOperation();
    $keywordPlanOperation->setRemove($googleAdsRow->getKeywordPlan()->getResourceName());
    $keywordPlanServiceClient->mutateKeywordPlans($this->linkedCustomerId, [$keywordPlanOperation]);
}

For Python:
import argparse
import sys

from google.ads.googleads.client import GoogleAdsClient
from google.ads.googleads.errors import GoogleAdsException


def main(client, customer_id):
    ga_service = client.get_service("GoogleAdsService")
    query = """
        SELECT keyword_plan.name, keyword_plan.id,
          keyword_plan.forecast_period, keyword_plan.resource_name
        FROM keyword_plan"""
    # Issues a search request using streaming.
    search_request = client.get_type("SearchGoogleAdsStreamRequest")
    search_request.customer_id = customer_id
    search_request.query = query
    stream = ga_service.search_stream(search_request)
    for batch in stream:
        for row in batch.results:
            resource_name = row.keyword_plan.resource_name
            forecast_period = row.keyword_plan.forecast_period
            plan_id = row.keyword_plan.id
            name = row.keyword_plan.name
            print(
                f'Plan resource name "{resource_name}" with '
                f'forecast period "{forecast_period.date_interval}", '
                f'ID {plan_id} and name "{name}".'
            )


if __name__ == "__main__":
    # GoogleAdsClient will read the google-ads.yaml configuration file in the
    # home directory if none is specified.
    googleads_client = GoogleAdsClient.load_from_storage(
        path="your-google-ads.yml-file-path", version="v10"
    )
    parser = argparse.ArgumentParser(
        description="Retrieves all keyword plans for a customer."
    )
    # The following argument(s) should be provided to run the example.
    parser.add_argument(
        "-c",
        "--customer_id",
        type=str,
        required=True,
        help="The Google Ads customer ID.",
    )
    args = parser.parse_args()
    try:
        main(googleads_client, args.customer_id)
    except GoogleAdsException as ex:
        print(
            f'Request with ID "{ex.request_id}" failed with status '
            f'"{ex.error.code().name}" and includes the following errors:'
        )
        for error in ex.failure.errors:
            print(f'\tError with message "{error.message}".')
            if error.location:
                for field_path_element in error.location.field_path_elements:
                    print(f"\t\tOn field: {field_path_element.field_name}")
        sys.exit(1)

Related

How to write FILE_FORMAT in Snowflake to Java code?

I am trying to execute a COPY INTO statement in Java code like this:
copy into s3://snowflake
from "TEST"."PUBLIC"."USER_TABLE_TEMP"
storage_integration = s3_int
file_format = CSV_TEST;
And it works fine.
Is there any way to add this file_format in Java code, so there is no need to set it up in Snowflake?
For example, SQL code of file_format that I have set in Snowflake is
ALTER FILE FORMAT "TEST"."PUBLIC".CSV_TEST SET COMPRESSION = 'NONE' FIELD_DELIMITER =
',' RECORD_DELIMITER = '\n' SKIP_HEADER = 0 FIELD_OPTIONALLY_ENCLOSED_BY = 'NONE'
TRIM_SPACE = TRUE ERROR_ON_COLUMN_COUNT_MISMATCH = FALSE ESCAPE = 'NONE'
ESCAPE_UNENCLOSED_FIELD = '\134' DATE_FORMAT = 'AUTO' TIMESTAMP_FORMAT = 'AUTO' NULL_IF = ('\\N');
Is there any way to write this as Java code?
UPDATE
Here is the code where I am using copy into statement:
String q = "COPY INTO s3://snowflake/"+ userId +" from \"EPICEROS\".\"PUBLIC\".\"USER_TABLE_TEMP\" storage_integration = s3_int file_format = CSV_TEST OVERWRITE=TRUE;";
jdbcTemplatePerBrand.get(brand).query(q, s -> {});
So how can I apply a file_format that is created at query execution time?
What you want is an external stage, which you would create like:
CREATE STAGE awesome_stage_name
  URL = 's3://snowflake'
  FILE_FORMAT = test.public.csv_test;
and then you can copy to it:
COPY INTO @awesome_stage_name
FROM test.public.user_table_temp;
This means that if the user doing the copy has permission to use the stage, they can copy, without needing access to the security tokens required to work with that secure location.
Is there any way to write this as Java code?
In Snowflake, creating and altering file formats is done through SQL. You can simply execute a SQL statement through a JDBC connection in Java.
Just change your alter to a create if the file format is not already created:
CREATE FILE FORMAT "TEST"."PUBLIC".CSV_TEST COMPRESSION = 'NONE' FIELD_DELIMITER =
',' RECORD_DELIMITER = '\n' SKIP_HEADER = 0 FIELD_OPTIONALLY_ENCLOSED_BY = 'NONE'
TRIM_SPACE = TRUE ERROR_ON_COLUMN_COUNT_MISMATCH = FALSE ESCAPE = 'NONE'
ESCAPE_UNENCLOSED_FIELD = '\134' DATE_FORMAT = 'AUTO' TIMESTAMP_FORMAT = 'AUTO' NULL_IF = ('\\N');
Assign that to a String variable like sql and just run it like any other statement using JDBC:
ResultSet rs = stmt.executeQuery(sql);
You can then call rs.next(); and read the first ordinal column (or the column named status, in lowercase) to get the success/failure message.
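For illustration, a minimal JDBC sketch of that pattern; it assumes an already-open java.sql.Connection named conn to your Snowflake account, and uses the CREATE FILE FORMAT statement from above:
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

// ...

String sql = "CREATE FILE FORMAT \"TEST\".\"PUBLIC\".CSV_TEST "
    + "COMPRESSION = 'NONE' FIELD_DELIMITER = ',' RECORD_DELIMITER = '\\n' "
    + "SKIP_HEADER = 0 FIELD_OPTIONALLY_ENCLOSED_BY = 'NONE' TRIM_SPACE = TRUE "
    + "ERROR_ON_COLUMN_COUNT_MISMATCH = FALSE ESCAPE = 'NONE' "
    + "ESCAPE_UNENCLOSED_FIELD = '\\134' DATE_FORMAT = 'AUTO' "
    + "TIMESTAMP_FORMAT = 'AUTO' NULL_IF = ('\\\\N')";
try (Statement stmt = conn.createStatement();
     ResultSet rs = stmt.executeQuery(sql)) {
    if (rs.next()) {
        // Snowflake returns a one-row result whose "status" column
        // holds the success/failure message.
        System.out.println(rs.getString("status"));
    }
}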
This is the solution that I found for my question. To be able to write the file_format in code rather than creating one in Snowflake, I did this:
copy into s3://snowflake
from "TEST"."PUBLIC"."USER_TABLE_TEMP"
storage_integration = s3_int
OVERWRITE = TRUE
file_format = (type = csv compression = 'none' file_extension = 'csv'
    FIELD_OPTIONALLY_ENCLOSED_BY = '"'
    NULL_IF = ())
single = true
max_file_size = 4900000000;
I also added OVERWRITE = TRUE, which means that if my file already exists in S3, it is overwritten with the new one.
single = true and max_file_size = 4900000000 mean that I am allowing export of files up to about 5 GB. If I hadn't added these two options, my one big file would have been split into several smaller .csv files, which I did not want.
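Plugged back into the Java code from the question's update, that could look roughly like this; it is a sketch reusing the asker's jdbcTemplatePerBrand, brand, and userId variables, with the inline file_format from above:
String q = "COPY INTO s3://snowflake/" + userId
    + " FROM \"TEST\".\"PUBLIC\".\"USER_TABLE_TEMP\""
    + " storage_integration = s3_int"
    + " OVERWRITE = TRUE"
    + " file_format = (type = csv compression = 'none' file_extension = 'csv'"
    + " FIELD_OPTIONALLY_ENCLOSED_BY = '\"' NULL_IF = ())"
    + " single = true"
    + " max_file_size = 4900000000";
// Same execution style as in the question; the COPY result row is ignored here.
jdbcTemplatePerBrand.get(brand).query(q, s -> {});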

UDF to extract only the file name from path in Spark SQL

There is an input_file_name function in Apache Spark which I use to add a new column to a Dataset with the name of the file currently being processed.
The problem is that I'd like to somehow customize this function to return only the file name, omitting its full path on S3.
For now, I am replacing the path in a second step using a map function:
val initialDs = spark.sqlContext.read
  .option("dateFormat", conf.dateFormat)
  .schema(conf.schema)
  .csv(conf.path).withColumn("input_file_name", input_file_name)
...
...
def fromFile(fileName: String): String = {
  val baseName: String = FilenameUtils.getBaseName(fileName)
  val tmpFileName: String = baseName.substring(0, baseName.length - 8) // here is magic conversion ;)
  this.valueOf(tmpFileName)
}
But I'd like to use something like
val initialDs = spark.sqlContext.read
  .option("dateFormat", conf.dateFormat)
  .schema(conf.schema)
  .csv(conf.path).withColumn("input_file_name", **customized_input_file_name_function**)
In Scala:
import org.apache.spark.sql.functions.input_file_name

// register the udf; keep the returned handle so it can be used in the DataFrame API
val get_only_file_name = spark.udf
  .register("get_only_file_name", (fullPath: String) => fullPath.split("/").last)

// use the udf to get the last token (the file name) in the full path
val initialDs = spark.read
  .option("dateFormat", conf.dateFormat)
  .schema(conf.schema)
  .csv(conf.path)
  .withColumn("input_file_name", get_only_file_name(input_file_name))
Edit: in Java, as per the comment:
import static org.apache.spark.sql.functions.callUDF;
import static org.apache.spark.sql.functions.input_file_name;

import org.apache.spark.sql.api.java.UDF1;
import org.apache.spark.sql.types.DataTypes;

// register the udf
spark.udf()
    .register("get_only_file_name", (UDF1<String, String>) fullPath -> {
        int lastIndex = fullPath.lastIndexOf("/");
        return fullPath.substring(lastIndex + 1); // keep everything after the last '/'
    }, DataTypes.StringType);

// use the udf to get the last token (the file name) in the full path
Dataset<Row> initialDs = spark.read()
    .option("dateFormat", conf.dateFormat)
    .schema(conf.schema)
    .csv(conf.path)
    .withColumn("input_file_name", callUDF("get_only_file_name", input_file_name()));
Borrowing from a related question here, the following method is more portable and does not require a custom UDF.
Spark SQL Code Snippet: reverse(split(path, '/'))[0]
Spark SQL Sample:
WITH sample_data AS (
    SELECT 'path/to/my/filename.txt' AS full_path
)
SELECT
    full_path
    , reverse(split(full_path, '/'))[0] AS basename
FROM sample_data
Explanation:
The split() function breaks the path into its chunks, and reverse() puts the final item (the file name) at the front of the array, so that [0] extracts just the file name.
Full code example:
spark.sql(
"""
|WITH sample_data as (
| SELECT 'path/to/my/filename.txt' AS full_path
| )
| SELECT
| full_path
| , reverse(split(full_path, '/'))[0] as basename
| FROM sample_data
|""".stripMargin).show(false)
Result:
+-----------------------+------------+
|full_path |basename |
+-----------------------+------------+
|path/to/my/filename.txt|filename.txt|
+-----------------------+------------+
Commons IO is the natural/easiest choice in Spark, since it is already on the classpath (no need to add an additional dependency):
import org.apache.commons.io.FilenameUtils
import org.apache.spark.sql.functions.udf

getBaseName(String fileName)
Gets the base name, minus the full path and extension, from a full fileName.

val baseNameOfFile = udf((longFilePath: String) => FilenameUtils.getBaseName(longFilePath))
Usage looks like this:
yourdataframe.withColumn("shortpath", baseNameOfFile(yourdataframe("input_file_name")))
  .show(1000, false)
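If you are on the Java API rather than Scala, the same Commons IO call can back a UDF. A minimal sketch under that assumption (note that getBaseName strips the file extension, while getName would keep it):
import static org.apache.spark.sql.functions.callUDF;
import static org.apache.spark.sql.functions.input_file_name;

import org.apache.commons.io.FilenameUtils;
import org.apache.spark.sql.api.java.UDF1;
import org.apache.spark.sql.types.DataTypes;

// register a UDF that strips the directory part of the path
spark.udf().register("base_name_of_file",
    (UDF1<String, String>) FilenameUtils::getBaseName, DataTypes.StringType);

yourDataframe
    .withColumn("shortpath", callUDF("base_name_of_file", input_file_name()))
    .show(1000, false);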

Replacing a string in Google Protocol Data Buffers

I've successfully compiled the following .proto file:
package JervisStorage;

option java_package = "TextBase";
option java_outer_classname = "JervisStorage";

message Owner {
  optional string name = 1;
  optional string sex = 2;
  optional string profession = 3;
  optional string email = 4;
}
Now, I managed to build the Owner:
private static Owner owner;
private static FileOutputStream serialOutput;

Owner pusheen = Owner.newBuilder()
    .setName("Siema")
    .setSex(" ")
    .setProfession(" ")
    .setEmail(" ")
    .build();
I wrote the object to the file and successfully read the object from the file:
serialOutput = new FileOutputStream("JervisStorage.ser");
pusheen.writeTo(serialOutput);
serialOutput.close();
owner = Owner.parseFrom(new FileInputStream("JervisStorage.ser"));
System.out.println(owner.getName());
The problem is that I am unable to replace a single record, write it back to the file, and read the whole updated object. I have been trying to do this:
owner.toBuilder().setName("newName").build();
System.out.println(owner.getName());
serialOutput = new FileOutputStream("JervisStorage.ser");
owner.writeTo(serialOutput);
serialOutput.close();
owner = Owner.parseFrom(new FileInputStream("JervisStorage.ser"));
System.out.println(owner.getName());
Unfortunately, this approach does not seem to work... Can anyone help?
Protobuf messages are immutable: toBuilder() returns a new builder and build() returns a new message, so you must assign the result instead of discarding it. Why not do:
Owner editedOwner = Owner.newBuilder()
    .mergeFrom(new FileInputStream("JervisStorage.ser"))
    .setName("new name")
    .build();
Alternatively, you could do:
Owner editedOwner = owner.toBuilder()
    .setName("new name")
    .build();
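Putting it together, a minimal read-modify-write round trip using the second form (same file name as in the question):
// Read the stored Owner, change one field, and write the result back.
Owner owner = Owner.parseFrom(new FileInputStream("JervisStorage.ser"));
Owner editedOwner = owner.toBuilder()
    .setName("newName")
    .build();
try (FileOutputStream out = new FileOutputStream("JervisStorage.ser")) {
    editedOwner.writeTo(out);
}
// Verify: re-read the file and print the updated name.
Owner reloaded = Owner.parseFrom(new FileInputStream("JervisStorage.ser"));
System.out.println(reloaded.getName()); // prints "newName"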
OK, I managed to find a solution to this problem. I hate extending the computation time unnecessarily, but this is the best I can think of so far. I've got two Owner objects, owner and editedOwner:
Owner owner;        // for reading the record from the file
Owner editedOwner;  // for writing the updated record back to the file
owner = Owner.parseFrom(new FileInputStream("JervisStorage.ser"));
editedOwner = Owner.newBuilder()
    .setName("new name")
    .setSex(owner.getSex())
    .setProfession(owner.getProfession())
    .setEmail(owner.getEmail())
    .build();
serialOutput = new FileOutputStream("JervisStorage.ser");
editedOwner.writeTo(serialOutput);
serialOutput.close();
Fortunately, this slightly messy approach solves my problem for now. Thank you.

createUserDefinedFunction : if already exists?

I'm using the azure-documentdb Java SDK to create and use user defined functions (UDFs).
From the official documentation, I finally found the way (with a Java client) to create a UDF:
String regexUdfJson = "{"
    + "id:\"REGEX_MATCH\","
    + "body:\"function (input, pattern) { return input.match(pattern) !== null; }\","
    + "}";
UserDefinedFunction udfREGEX = new UserDefinedFunction(regexUdfJson);
getDC().createUserDefinedFunction(
    myCollection.getSelfLink(),
    udfREGEX,
    new RequestOptions());
And here is a sample query :
SELECT * FROM root r WHERE udf.REGEX_MATCH(r.name, "mytest_.*")
I only had to create the UDF once, because I get an exception if I try to recreate an existing UDF:
DocumentClientException: Message: {"Errors":["The input name presented is already taken. Ensure to provide a unique name property for this resource type."]}
How can I tell whether the UDF already exists?
I tried to use the readUserDefinedFunctions function without success. Any example / other ideas?
Maybe, for the long term, we should suggest a createOrReplaceUserDefinedFunction(...) on Azure feedback.
You can check for existing UDFs by running a query with queryUserDefinedFunctions.
Example:
List<UserDefinedFunction> udfs = client.queryUserDefinedFunctions(
    myCollection.getSelfLink(),
    new SqlQuerySpec("SELECT * FROM root r WHERE r.id=@id",
        new SqlParameterCollection(new SqlParameter("@id", myUdfId))),
    null).getQueryIterable().toList();
if (udfs.size() > 0) {
    // Found UDF.
}
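Building on that, here is a hedged sketch of the createOrReplaceUserDefinedFunction(...) the question wished for. It assumes the same DocumentClient and collection as above, and that the SDK's replaceUserDefinedFunction and the UDF's getBody/setBody accessors behave as their names suggest:
// Creates the UDF if it is absent, otherwise replaces the body of the existing one.
static void createOrReplaceUserDefinedFunction(DocumentClient client,
        DocumentCollection collection, UserDefinedFunction udf)
        throws DocumentClientException {
    List<UserDefinedFunction> existing = client.queryUserDefinedFunctions(
        collection.getSelfLink(),
        new SqlQuerySpec("SELECT * FROM root r WHERE r.id=@id",
            new SqlParameterCollection(new SqlParameter("@id", udf.getId()))),
        null).getQueryIterable().toList();
    if (existing.isEmpty()) {
        client.createUserDefinedFunction(collection.getSelfLink(), udf, new RequestOptions());
    } else {
        UserDefinedFunction current = existing.get(0);
        current.setBody(udf.getBody());
        client.replaceUserDefinedFunction(current, new RequestOptions());
    }
}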
An answer for .NET users.
var collectionAltLink = documentCollections["myCollection"].AltLink; // target collection's AltLink
var udfLink = $"{collectionAltLink}/udfs/{sampleUdfId}"; // sampleUdfId is your UDF id
var result = await _client.ReadUserDefinedFunctionAsync(udfLink);
var resource = result.Resource;
if (resource != null)
{
    // The UDF with udfId exists
}
Here, _client is Azure's DocumentClient and documentCollections is a dictionary of your DocumentDB collections.
If there is no such UDF in the given collection, the _client throws a NotFound exception.

How to get an App category from play store by its package name in Android?

I want to fetch an app's category from the Play Store through its unique identifier, i.e. its package name. I am using the following code, but it does not return any data. I also tried AppsRequest.newBuilder().setAppId(query), still with no luck.
Thanks.
String AndroidId = "dead000beef";
MarketSession session = new MarketSession();
session.login("email", "passwd");
session.getContext().setAndroidId(AndroidId);

String query = "package:com.king.candycrushsaga";
AppsRequest appsRequest = AppsRequest.newBuilder()
    .setQuery(query)
    .setStartIndex(0)
    .setEntriesCount(10)
    .setWithExtendedInfo(true)
    .build();

session.append(appsRequest, new Callback<AppsResponse>() {
    @Override
    public void onResult(ResponseContext context, AppsResponse response) {
        String response1 = response.toString();
        Log.e("response", response1);
    }
});
session.flush();
Use this script:
######## Fetch app names and genres of apps from their Play Store URLs, using package names #############
"""
Requirements for running this script (Python 2):
1. requests
   Note: run this command to avoid an InsecurePlatform warning:
   pip install --upgrade ndg-httpsclient
2. bs4
pip install requests
pip install bs4
"""
import requests
import csv
from bs4 import BeautifulSoup

# url to be used for package
APP_LINK = "https://play.google.com/store/apps/details?id="
output_list = []; input_list = []

# get input file path
print "Need input CSV file (absolute) path \nEnsure csv is of format: <package_name>, <id>\n\nEnter Path:"
input_file_path = str(raw_input())

# store package names and ids in list of tuples
with open(input_file_path, 'rb') as csvfile:
    for line in csvfile.readlines():
        (p, i) = line.strip().split(',')
        input_list.append((p, i))

print "\n\nSit back and relax, this might take a while!\n\n"
for package in input_list:
    # generate url, get html
    url = APP_LINK + package[0]
    r = requests.get(url)
    if not (r.status_code == 404):
        data = r.text
        soup = BeautifulSoup(data, 'html.parser')
        # parse result
        x = ""; y = ""
        try:
            x = soup.find('div', {'class': 'id-app-title'})
            x = x.text
        except:
            print "App name not found for: %s" % package[0]
        try:
            y = soup.find('span', {'itemprop': 'genre'})
            y = y.text
        except:
            print "Genre not found for: %s" % package[0]
        output_list.append([x, y])
    else:
        print "App not found: %s" % package[0]

# write to csv file
with open('results.csv', 'w') as fp:
    a = csv.writer(fp, delimiter=",")
    a.writerows(output_list)
This is what I did; the best and easiest solution:
https://androidquery.appspot.com/api/market?app=your.unique.package.name
Otherwise, you can fetch the page's source HTML and pull the string out of it:
https://play.google.com/store/apps/details?id=your.unique.package.name
Extract this string from it, using split or substring methods:
<span itemprop="genre">Sports</span>
In this case, Sports is your category; a sketch of this approach follows below.
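For illustration, a rough Java sketch of that substring idea; it is hedged because the Play Store markup changes over time, so this exact span may no longer be present:
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Fetches the Play Store page and pulls the genre out of the raw HTML.
public static String fetchCategory(String packageName) throws Exception {
    URL url = new URL("https://play.google.com/store/apps/details?id=" + packageName);
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    StringBuilder html = new StringBuilder();
    try (BufferedReader in = new BufferedReader(
            new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
        String line;
        while ((line = in.readLine()) != null) {
            html.append(line);
        }
    }
    // Substring extraction, as described above; brittle if the markup changes.
    String marker = "<span itemprop=\"genre\">";
    int start = html.indexOf(marker);
    if (start < 0) {
        return null; // markup changed or app not found
    }
    start += marker.length();
    int end = html.indexOf("</span>", start);
    return html.substring(start, end);
}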
Alternatively, use android-market-api; it gives you all the information about an application.
