Java application cannot find credentials to access GCP Pub/Sub

I'm attempting to use the GCP Java SDK to send messages to a Pub/Sub topic using the following code (replaced the actual project ID and topic name with placeholders in this snippet):
Publisher publisher = null;
ProjectTopicName topic = ProjectTopicName.newBuilder()
        .setProject("MY_PROJECT_ID")
        .setTopic("MY_TOPIC")
        .build();
try {
    publisher = Publisher.newBuilder(topic).build();
    for (final String message : data) {
        ByteString messageBytes = ByteString.copyFromUtf8(message);
        PubsubMessage pubsubMessage = PubsubMessage.newBuilder().setData(messageBytes).build();
        ApiFuture<String> future = publisher.publish(pubsubMessage);
    }
} catch (IOException ex) {
    ex.printStackTrace();
} finally {
    if (publisher != null) {
        publisher.shutdown();
    }
}
This results in the following exception:
Exception in thread "main" java.lang.AbstractMethodError: com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.needsCredentials()Z
at com.google.api.gax.rpc.ClientContext.create(ClientContext.java:157)
at com.google.cloud.pubsub.v1.stub.GrpcPublisherStub.create(GrpcPublisherStub.java:164)
at com.google.cloud.pubsub.v1.Publisher.<init>(Publisher.java:171)
at com.google.cloud.pubsub.v1.Publisher.<init>(Publisher.java:85)
at com.google.cloud.pubsub.v1.Publisher$Builder.build(Publisher.java:718)
at com.westonsankey.pubsub.MessageWriter.sendMessagesToPubSub(MessageWriter.java:35)
at com.westonsankey.pubsub.MessageWriter.main(MessageWriter.java:24)
I've set the GOOGLE_APPLICATION_CREDENTIALS environment variable to point to the JSON private key file, and have confirmed that I can access other GCP resources in this application using that key. The service account has the project owner role, and I've verified via the Pub/Sub console that it has the appropriate permissions.
Are there any extra steps required to authenticate with Pub/Sub?

The problem isn't accessing the credentials. The AbstractMethodError points to a version conflict on the gax-java library: the needsCredentials method was added in v1.46 in June 2019. Either you are explicitly depending on an older version, or another dependency is pulling in an older version that leaks onto your classpath. If it's the former, update to version 1.46 or later; if it's the latter, you may need to shade the dependency.
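To see which jar the conflicting class is actually loaded from at runtime, you can print its code source. A diagnostic sketch (it uses its own class so it runs standalone; substitute the gax class as noted in the comment):

```java
import java.security.CodeSource;

public class WhichJar {
    public static void main(String[] args) {
        // Substitute com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.class
        // here to see which classpath entry the conflicting class comes from.
        Class<?> cls = WhichJar.class;
        CodeSource src = cls.getProtectionDomain().getCodeSource();
        System.out.println(cls.getName() + " loaded from: "
                + (src != null ? src.getLocation() : "<bootstrap class loader>"));
    }
}
```

Alternatively, `mvn dependency:tree -Dincludes=com.google.api:gax-grpc` (or the Gradle equivalent) shows which dependency is dragging in the stale version.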

Related

How to set parent id and operation id of the telemetry in Azure Application Insights for Azure function using Java

I have a function app: func1 (HttpTrigger) -> blob storage -> func2 (BlobTrigger). In Application Insights, two separate request telemetry items are generated with different operation ids, and each has its own end-to-end transaction trace.
To get an end-to-end trace for the whole app, I would like to correlate the two functions by setting the parent id and operation id of func2 to the request id and operation id of func1, so both show up in Application Insights as one end-to-end trace.
I have tried the following code, but it didn't have any effect, and there is a lack of documentation on using the Application Insights Java SDK to customize telemetry in general.
@FunctionName("Create-Thumbnail")
@StorageAccount(Config.STORAGE_ACCOUNT_NAME)
@BlobOutput(name = "$return", path = "output/{name}")
public byte[] generateThumbnail(
        @BlobTrigger(name = "blob", path = "input/{name}")
        byte[] content,
        final ExecutionContext context
) {
    try {
        TelemetryConfiguration configuration = TelemetryConfiguration.getActive();
        TelemetryClient client = new TelemetryClient(configuration);
        client.getContext().getOperation().setParentId("MY_CUSTOM_PARENT_ID");
        client.flush();
        return Converter.createThumbnail(content);
    } catch (Exception e) {
        e.printStackTrace();
        return content;
    }
}
Can anyone with knowledge in this area provide some tips?
I'm afraid it can't be achieved, as the official doc says:
In C# and JavaScript, you can use an Application Insights SDK to write
custom telemetry data.
If you need to set custom telemetry, you would need to add an Application Insights Java SDK to your function, but I haven't found any such SDK... If there's any progress, I'll update here.

The Application Default Credentials are not available even though the environment variable is set on Mac - Google Cloud Storage

The Application Default Credentials are not available.
They are available if running in Google Compute Engine.
Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials
I keep getting the above error, even though I have set the environment variable on my local machine with the command below:
export GOOGLE_APPLICATION_CREDENTIALS="/Users/macbook/Downloads/fetebird-2b6fa8261292.json"
Checking the variable in the terminal does show the path:
echo $GOOGLE_APPLICATION_CREDENTIALS
In my Micronaut application I am trying to create a storage bucket during startup:
@Singleton
public class StartUp implements ApplicationEventListener<StartupEvent> {
    private final GoogleCloudStorageService googleCloudStorageService;

    public StartUp(GoogleCloudStorageService googleCloudStorageService) {
        this.googleCloudStorageService = googleCloudStorageService;
    }

    @Override
    public void onApplicationEvent(StartupEvent event) {
        try {
            this.googleCloudStorageService.createBucketWithStorageClassAndLocation().subscribe();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
The service:
@Singleton
public record GoogleCloudStorageService(GoogleCloudStorageConfiguration googleUploadObjectConfiguration, GoogleCredentialsConfiguration googleCredentialsConfiguration) {
    private static final Logger LOG = LoggerFactory.getLogger(GoogleCloudStorageService.class);

    public Observable<Void> createBucketWithStorageClassAndLocation() throws IOException {
        GoogleCredentials credentials = GoogleCredentials.getApplicationDefault(); // fromStream(new FileInputStream(googleCredentialsConfiguration.getLocation()));
        Storage storage = StorageOptions.newBuilder().setCredentials(credentials).setProjectId(googleUploadObjectConfiguration.projectId()).build().getService();
        StorageClass storageClass = StorageClass.COLDLINE;
        try {
            Bucket bucket =
                    storage.create(
                            BucketInfo.newBuilder(googleUploadObjectConfiguration.bucketName())
                                    .setStorageClass(storageClass)
                                    .setLocation(googleUploadObjectConfiguration.locationName())
                                    .build());
            LOG.info(String.format("Created bucket %s in %s with storage class %s", bucket.getName(), bucket.getLocation(), bucket.getStorageClass()));
        } catch (Exception ex) {
            LOG.error(ex.getMessage());
        }
        return Observable.empty();
    }
}
The environment variable is null while the application is running:
System.out.println(System.getenv("GOOGLE_APPLICATION_CREDENTIALS"));
The call to GoogleCredentials.getApplicationDefault() throws:
java.io.IOException: The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
at com.google.auth.oauth2.DefaultCredentialsProvider.getDefaultCredentials(DefaultCredentialsProvider.java:134)
at com.google.auth.oauth2.GoogleCredentials.getApplicationDefault(GoogleCredentials.java:120)
at com.google.auth.oauth2.GoogleCredentials.getApplicationDefault(GoogleCredentials.java:92)
at fete.bird.service.gcp.GoogleCloudStorageService.createBucketWithStorageClassAndLocation(GoogleCloudStorageService.java:24)
at fete.bird.core.StartUp.onApplicationEvent(StartUp.java:24)
at fete.bird.core.StartUp.onApplicationEvent(StartUp.java:11)
at io.micronaut.context.DefaultBeanContext.notifyEventListeners(DefaultBeanContext.java:1307)
at io.micronaut.context.DefaultBeanContext.publishEvent(DefaultBeanContext.java:1292)
at io.micronaut.context.DefaultBeanContext.start(DefaultBeanContext.java:248)
at io.micronaut.context.DefaultApplicationContext.start(DefaultApplicationContext.java:166)
at io.micronaut.runtime.Micronaut.start(Micronaut.java:71)
at io.micronaut.runtime.Micronaut.run(Micronaut.java:311)
at io.micronaut.runtime.Micronaut.run(Micronaut.java:297)
at fete.bird.ServiceApplication.main(ServiceApplication.java:8)
Is it that Micronaut cannot access the environment variable during a StartupEvent?
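A quick way to confirm whether the running JVM actually sees the variable is a check like the sketch below; an `export` done in one terminal session is not visible to processes launched elsewhere (IDE run configurations, launchd services, and so on):

```java
import java.io.File;

public class CredCheck {
    // Returns a human-readable status for the credentials path.
    static String status(String path) {
        if (path == null) {
            return "GOOGLE_APPLICATION_CREDENTIALS is not set in this process";
        }
        if (!new File(path).canRead()) {
            return "Variable is set but the file is missing or unreadable: " + path;
        }
        return "Credentials file found: " + path;
    }

    public static void main(String[] args) {
        // Check from inside the JVM rather than from the shell.
        System.out.println(status(System.getenv("GOOGLE_APPLICATION_CREDENTIALS")));
    }
}
```

If it reports the variable as unset, either launch the app from the same terminal where you exported it, put the export in your shell profile, or set it in the IDE's run configuration.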
Well, I was missing the instruction below:
Local development/testing
If running locally for development/testing, you can use the Google Cloud SDK. Create Application Default Credentials with gcloud auth application-default login, and then google-cloud will automatically detect such credentials.
https://github.com/googleapis/google-cloud-java
However, this solution is not perfect, since it uses end-user OAuth authentication and produces the warning: "Your application has authenticated using end user credentials from Google Cloud SDK. We recommend that most server applications use service accounts instead. If your application continues to use end user credentials from Cloud SDK, you might receive a 'quota exceeded' or 'API not enabled' error."

How to enable and configure flow log for network security group

I am unable to enable and configure a flow log for a network security group, using a storage account in either the NetworkWatcherRG or another existing resource group. I am wondering what I am doing wrong in the SDK, as I can do this from the Azure GUI easily.
To Reproduce
retrieve network watchers
for network watcher in correct region
retrieve flow settings for existing network security group in the region
update flow settings to enable logging and set storage to existing storage account
final PagedList<NetworkWatcher> nws = adapter.getItsAzure().networkWatchers().list();
NetworkWatcher retval = null;
for (final NetworkWatcher nw : nws) {
    if (nw.region().equals(Region.GOV_US_VIRGINIA)) {
        retval = nw;
    }
}
final ResourceGroup rg = adapter.getItsAzure().resourceGroups().getByName(retval.resourceGroupName());
final StorageAccount sa = adapter.getItsAzure().storageAccounts().define(ResourceNameType.STORAGE_ACCOUNT.randomName("networkwatchersa"))
        .withRegion(Region.GOV_US_VIRGINIA)
        .withExistingResourceGroup(rg)
        .withAccessFromAllNetworks()
        .create();
final String rgName = "resource-group-38f6628eccb84ec9aa1cd9b3c8f5f815";
final NetworkSecurityGroup nsg = adapter.getItsAzure().networkSecurityGroups().getByResourceGroup(rgName, "add-network1-nat-securitygroup");
final FlowLogSettings fls = retval.getFlowLogSettings(nsg.id());
LOGGER.info("Found fls with enabled {} and storage id {}", fls.enabled(), fls.storageId());
fls.update()
        .withLogging()
        .withStorageAccount(sa.id())
        .apply();
The client has permission to perform action 'Microsoft.OperationalInsights/workspaces/sharedKeys/action' on scope '/subscriptions/{subscription_id}/resourceGroups/NetworkWatcherRG/providers/Microsoft.Network/networkWatchers/NetworkWatcher_usgovvirginia', however the linked subscription 'resourcegroups' was not found
Note: subscription id was present in the above error, it has just been redacted for posting
Expected behavior
Expected to be able to enable flow logs for the NSG in a storage account, or at least a more detailed error message; I currently cannot determine what the issue is.
Setup:
OS: macOS
IDE : Eclipse Version: 2019-06 (4.12.0)
Version of the Library used: 1.23
Additional context
The call has been attempted with the service principal as both contributor and owner on the subscription. I am trying to understand the error message, as the SDK call seems straightforward. I suspect it is a permissions or ownership issue.
I have reproduced this issue on my side and finally worked it out. Ignore the misleading error message.
You need to provide TrafficAnalyticsConfigurationProperties to the FlowLogSettings, even if you do not want to turn traffic analytics on.
So, you need to create a Log Analytics workspace first.
You can refer to the following code to enable and configure the flow log for the NSG:
NetworkWatcher nw = azure.networkWatchers().listByResourceGroup("NetworkWatcherRG").get(1);
NetworkSecurityGroup nsg = azure.networkSecurityGroups().getByResourceGroup("", "");
StorageAccount sa = azure.storageAccounts().getByResourceGroup("", "");
FlowLogSettings settings = nw.getFlowLogSettings(nsg.id());
TrafficAnalyticsConfigurationProperties networkWatcherFlowAnalyticsConfiguration = new TrafficAnalyticsConfigurationProperties();
networkWatcherFlowAnalyticsConfiguration.withWorkspaceId("").withWorkspaceRegion(Region.ASIA_SOUTHEAST.toString()).withWorkspaceResourceId("").withEnabled(false);
settings.inner().flowAnalyticsConfiguration()
        .withNetworkWatcherFlowAnalyticsConfiguration(networkWatcherFlowAnalyticsConfiguration);
settings.update().withLogging().withRetentionPolicyEnabled().withRetentionPolicyDays(30).withStorageAccount(sa.id()).apply();

AmazonClientException: Unable To Load Credentials from any Provider in the Chain

My Mule application writes JSON records to a Kinesis stream using the KPL (Kinesis Producer Library). When run locally, it picks up AWS credentials from ~/.aws/credentials and writes records to Kinesis successfully.
However, when I deploy the application to CloudHub, it throws AmazonClientException, evidently because it doesn't have access to any of the locations that the DefaultAWSCredentialsProviderChain class supports (http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/DefaultAWSCredentialsProviderChain.html).
This is how I attach credentials, and it works locally with ~/.aws/credentials:
config.setCredentialsProvider(new DefaultAWSCredentialsProviderChain());
I couldn't figure out a way to provide credentials explicitly using a my-app.properties file.
I then tried to create a separate configuration class: store the access key and secret key as private fields, and implement a getter:
public AWSCredentialsProvider getCredentials() {
    if (accessKey == null || secretKey == null) {
        return new DefaultAWSCredentialsProviderChain();
    }
    return new StaticCredentialsProvider(new BasicAWSCredentials(getAccessKey(), getSecretKey()));
}
This was intended to be used instead of the DefaultAWSCredentialsProviderChain class, like this:
config.setCredentialsProvider(new AWSConfig().getCredentials());
Still throws the same error when deployed.
The following repo states that it is possible to provide explicit credentials; I need help figuring out how, because I can't find proper documentation or examples.
https://github.com/awslabs/amazon-kinesis-producer/blob/master/java/amazon-kinesis-producer-sample/src/com/amazonaws/services/kinesis/producer/sample/SampleProducer.java
I faced the same issue and found this solution; I hope it works for you too.
@Value("${s3_accessKey}")
private String s3_accessKey;

@Value("${s3_secretKey}")
private String s3_secretKey;

// the values above are read from the application.properties file

BasicAWSCredentials creds = new BasicAWSCredentials(s3_accessKey, s3_secretKey);
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        .withCredentials(new AWSStaticCredentialsProvider(creds))
        .withRegion(Regions.US_EAST_2)
        .build();
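The same approach carries over to the KPL: read the keys from a properties file with plain java.util.Properties and hand them to KinesisProducerConfiguration. A sketch, assuming property names aws.accessKey and aws.secretKey (the AWS SDK lines are left as comments so the snippet runs without the SDK on the classpath):

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.util.Properties;

public class KplCredConfig {
    // Reads the two keys from any properties source; the property names are assumptions.
    static String[] readKeys(InputStream in) throws Exception {
        Properties props = new Properties();
        props.load(in);
        return new String[] {
            props.getProperty("aws.accessKey"),
            props.getProperty("aws.secretKey")
        };
    }

    public static void main(String[] args) throws Exception {
        // Demo with an in-memory source; on CloudHub this would be my-app.properties.
        String demo = "aws.accessKey=AKIAEXAMPLE\naws.secretKey=demoSecret\n";
        String[] keys = readKeys(new ByteArrayInputStream(demo.getBytes()));
        // With the AWS SDK on the classpath, wire the keys into the KPL:
        // config.setCredentialsProvider(new AWSStaticCredentialsProvider(
        //         new BasicAWSCredentials(keys[0], keys[1])));
        System.out.println("accessKey=" + keys[0]);
    }
}
```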

Netty logs now show "low-level API for accessing direct buffers reliably"

Why, after upgrading from Netty 4.0.15 to 4.0.19, am I now seeing "Your platform does not provide complete low-level API for accessing direct buffers reliably." in the logs?
I'm not on the Android platform, my JRE hasn't changed, and I'm using OSGi.
Did something change in how this is detected?
PlatformDependent0 appears to have changed how it detects if sun.nio.ch.DirectBuffer is available.
boolean directBufferFreeable = false;
try {
    Class<?> cls = Class.forName("sun.nio.ch.DirectBuffer", false, getClassLoader(PlatformDependent0.class));
    Method method = cls.getMethod("cleaner");
    if ("sun.misc.Cleaner".equals(method.getReturnType().getName())) {
        directBufferFreeable = true;
    }
} catch (Throwable t) {
    // We don't have sun.nio.ch.DirectBuffer.cleaner().
}
logger.debug("sun.nio.ch.DirectBuffer.cleaner(): {}", directBufferFreeable ? "available" : "unavailable");
Since this is an OSGi application, I needed to make "sun.nio.ch" visible to the bundle class loader by exporting it from the system bundle. I'm using Felix in this case, and added the following property to the config:
config.put(FRAMEWORK_SYSTEMPACKAGES_EXTRA, "sun.misc,sun.nio.ch");
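To verify the export took effect, you can run roughly the same probe Netty uses from inside a bundle. A minimal sketch (it only checks that the method is reachable, not its return type, so it is slightly more permissive than Netty's check):

```java
public class DirectBufferCheck {
    // Mirrors Netty's probe: is sun.nio.ch.DirectBuffer.cleaner()
    // reachable from this class's class loader?
    static boolean cleanerVisible() {
        try {
            Class<?> cls = Class.forName("sun.nio.ch.DirectBuffer", false,
                    DirectBufferCheck.class.getClassLoader());
            cls.getMethod("cleaner");
            return true;
        } catch (Throwable t) {
            // Class or method not visible from this class loader.
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("sun.nio.ch.DirectBuffer.cleaner(): "
                + (cleanerVisible() ? "available" : "unavailable"));
    }
}
```

Note that Netty additionally requires the return type to be sun.misc.Cleaner, so on newer JDKs this probe may report "available" where Netty's stricter check does not.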
