I have this problem when I try to run a function with a BlobTrigger.
Microsoft.Azure.WebJobs.Host: Error indexing method 'Functions.myFunction'. Microsoft.Azure.WebJobs.Extensions.Storage:
Storage account connection string for 'AzureWebJobsAzureCosmosDBConnection' is invalid.
The variables are:
AzureWebJobsAzureCosmosDBConnection = AccountEndpoint=https://example.com/;AccountKey=YYYYYYYYYYY;
AzureWebJobsStorage = UseDevelopmentStorage=true
AzureCosmosDBConnection = AccountEndpoint=https://example.com/;AccountKey=YYYYYYYYYYY;
I don't know why this function throws an exception.
It is not clear whether your local.settings.json actually uses the key = value format or whether that is just how you presented it in the question.
The local.settings.json configuration for any Azure Function must use the "key": "value" format:
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "DefaultEndpointsProtocol=https;AccountName=pravustorageac88;AccountKey=<alpha-numeric-symbolic_access_key>;EndpointSuffix=core.windows.net",
    "FUNCTIONS_WORKER_RUNTIME": "java",
    "MAIN_CLASS": "com.example.DemoApplication",
    "AzureWebJobsDashboard": "DefaultEndpointsProtocol=https;AccountName=pravustorageac88;AccountKey=<alpha-numeric-symbolic_access_key>;EndpointSuffix=core.windows.net",
    "AzureCosmosDBConnStr": "Cosmos_db_conn_str"
  }
}
If you are using a Cosmos DB connection string, then you have to configure it in this way:
public HttpResponseMessage execute(
        @HttpTrigger(name = "request", methods = {HttpMethod.GET, HttpMethod.POST}, authLevel = AuthorizationLevel.ANONYMOUS) HttpRequestMessage<Optional<User>> request,
        @CosmosDBOutput(name = "database", databaseName = "db_name", collectionName = "collection_name", connectionStringSetting = "AzureCosmosDBConnection") OutputBinding<User> output,
        ExecutionContext context)
Make sure the Cosmos DB connection string present in the local.settings.json file is also published to the Azure Function App under Configuration > Application settings.
For that, either uncomment local.settings.json in the .gitignore file or add the configuration settings manually in the Azure Function App Configuration:
I have uncommented local.settings.json in the .gitignore file and then published to the Azure Function App, so the Cosmos DB connection string was also updated in the configuration:
Note:
If you have a proxy in the system, then you have to add the proxy settings in the func.exe configuration file, as given here by #p31415926.
You can configure the Cosmos DB connection in the Azure Functions Java stack in two ways: bindings (as in the code above) and using the SDK, as given in this MS Doc.
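For context on the error message in the question: when the Functions runtime resolves a connection setting name, it also probes the name with an "AzureWebJobs" prefix, which is why the error mentions 'AzureWebJobsAzureCosmosDBConnection' even though the binding only says 'AzureCosmosDBConnection'. A rough sketch of that lookup (the method name and lookup order are illustrative assumptions, not the actual WebJobs implementation):

```java
import java.util.Map;

public class ConnectionLookup {
    // Hypothetical sketch: try the exact setting name first, then fall back
    // to the "AzureWebJobs"-prefixed name, as the runtime's error suggests.
    static String resolveConnection(String name, Map<String, String> settings) {
        String value = settings.get(name);
        if (value != null) {
            return value;
        }
        return settings.get("AzureWebJobs" + name);
    }

    public static void main(String[] args) {
        Map<String, String> settings = Map.of(
                "AzureWebJobsAzureCosmosDBConnection",
                "AccountEndpoint=https://example.com/;AccountKey=YYYYYYYYYYY;");
        // Resolves via the prefixed variant.
        System.out.println(resolveConnection("AzureCosmosDBConnection", settings));
    }
}
```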
I am trying to run integration tests with Testcontainers.
I launch the container with the following properties:
MySQLContainer<?> database = new MySQLContainer<>("mysql:8.0.27")
        .withUsername("test")
        .withPassword("test")
        .withEnv("MYSQL_ROOT_PASSWORD", "test")
        .withReuse(true);
database.withInitScript("init.sql");
database.start();
but it fails when running the script: Access denied for user 'test'@'%' to database 'test_scheme'
The init.sql script contains commands for creating several schemas:
CREATE SCHEMA IF NOT EXISTS test_scheme;
CREATE SCHEMA IF NOT EXISTS onothere_scheme;
Judging by the error message, you are connecting as user 'test', not 'root'.
In that case you will have to add something like this to your init script:
GRANT ALL PRIVILEGES ON test_scheme.* TO 'test'@'%';
Since you are only using this for testing purposes, granting all privileges should be fine. Of course you could reduce this if you want to enable only read access, for instance.
You could also avoid this completely by connecting as the root user, who has access to all schemas by default.
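Putting the suggestion together, the init script could then look like this (a sketch using the schema names from the question; 'test'@'%' is the user Testcontainers creates, and the connected user must itself be allowed to grant privileges for these statements to succeed):

```sql
CREATE SCHEMA IF NOT EXISTS test_scheme;
CREATE SCHEMA IF NOT EXISTS onothere_scheme;
GRANT ALL PRIVILEGES ON test_scheme.* TO 'test'@'%';
GRANT ALL PRIVILEGES ON onothere_scheme.* TO 'test'@'%';
```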
The below works:
@Test
public void testExplicitInitScript() throws Exception {
try (MySQLContainer<?> container = new MySQLContainer<>(DockerImageName.parse("mysql:8.0.24"))
.withUsername("root").withPassword("")
.withInitScript("init_mysql.sql")
.withLogConsumer(new Slf4jLogConsumer(log))) {
container.start();
}
}
src/test/resources/init_mysql.sql
CREATE SCHEMA IF NOT EXISTS test_scheme;
CREATE SCHEMA IF NOT EXISTS onothere_scheme;
Has anyone managed to connect a java program to AWS DocumentDB where the java program is running outside of AWS and DocumentDB has tls enabled? Any examples or guidance provided would be greatly appreciated.
This is what I've done so far:
I've been following AWS's developer guide, and I understand that to do this I need an SSH tunnel set up through a jump box (EC2 instance) to the DB cluster. I have done this and connected from my laptop.
I have then created the required .jks file from AWS's rds-combined-ca-bundle.pem file and referenced it in a basic java main class. From the java main class I have referenced the cluster as localhost:27017 as this is where I've set up the SSH tunnel from.
My test code follows the AWS example for Java, and I get the following error when I run the program:
Caused by: javax.net.ssl.SSLHandshakeException: No subject alternative DNS name matching localhost found.
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoCursor;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;

public class CertsTestMain {
public static void main(String[] args) {
String template = "mongodb://%s:%s@%s/test?ssl=true&replicaSet=rs0&readpreference=%s";
String username = "dummy";
String password = "dummy";
String clusterEndpoint = "localhost:27017";
String readPreference = "secondaryPreferred";
String connectionString = String.format(template, username, password, clusterEndpoint, readPreference);
String truststore = "C:/Users/eclipse-workspace/certs/certs/rds-truststore.jks";
String truststorePassword = "test!";
System.setProperty("javax.net.ssl.trustStore", truststore);
System.setProperty("javax.net.ssl.trustStorePassword", truststorePassword);
MongoClient mongoClient = MongoClients.create(connectionString);
MongoDatabase testDB = mongoClient.getDatabase("test");
MongoCollection<Document> bookingCollection = testDB.getCollection("booking");
MongoCursor<Document> cursor = bookingCollection.find().iterator();
try {
while (cursor.hasNext()) {
System.out.println(cursor.next().toJson());
}
} finally {
cursor.close();
}
}
}
So, for me, to make this work I only had to alter the template to:
String template = "mongodb://%s:%s@%s/test?ssl=true&tlsAllowInvalidHostnames&readpreference=%s";
As long as you have created your .jks file correctly
(I did this simply by using a Linux environment and running the script AWS provides for Java in Point 2 of https://docs.aws.amazon.com/documentdb/latest/developerguide/connect_programmatically.html)
and you have a fully working SSH tunnel as described in https://docs.aws.amazon.com/documentdb/latest/developerguide/connect-from-outside-a-vpc.html,
then the above code will work.
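To make the change concrete, here is a minimal sketch that just assembles the corrected connection string with the dummy values from the question. Note that tlsAllowInvalidHostnames disables hostname verification, which is tolerable here only because the SSH tunnel terminates at the real cluster:

```java
public class UriSketch {
    public static void main(String[] args) {
        // '@' separates the credentials from the host, and
        // tlsAllowInvalidHostnames works around the "localhost" SAN mismatch.
        String template = "mongodb://%s:%s@%s/test?ssl=true&tlsAllowInvalidHostnames&readpreference=%s";
        String uri = String.format(template, "dummy", "dummy", "localhost:27017", "secondaryPreferred");
        System.out.println(uri);
    }
}
```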
I have an Azure Java Function App (Java 11, gradle, azure-functions-java-library 1.4.0) that is tied to an event hub trigger. There are parameters that I can inject into the annotation by surrounding with % as per https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-expressions-patterns. The connection isn't using the % since it's a special param that is always taken from the app properties.
When I run my function locally using ./gradlew azureFunctionsRun, it runs as expected. But once it's deployed to an Azure Function App, it complains that it can't resolve the params.
The error in Azure:
2021-05-27T18:25:37.522 [Error] Microsoft.Azure.WebJobs.Host: Error indexing method 'Functions.EventHubTrigger'. Microsoft.Azure.WebJobs.Host: '%app.eventhub.name%' does not resolve to a value.
The Trigger annotation looks like:
@FunctionName("EventHubTrigger")
public void run(
@EventHubTrigger(name = "event",
connection = "app.eventhub.connectionString",
eventHubName = "%app.eventhub.name%",
consumerGroup = "%app.eventhub.consumerGroup%",
cardinality = Cardinality.MANY)
List<Event> event,
final ExecutionContext context
) {
// logic
}
Locally in local.settings.json I have values for:
"app.eventhub.connectionString": "Endpoint=XXXX",
"app.eventhub.name": "hubName",
"app.eventhub.consumerGroup": "consumerName"
And in Azure for the function app, I have Configuration (under Settings) for each of the above.
Any ideas what I'm missing?
After some further investigation, I managed to get things working in Azure Function Apps by changing my naming convention, from using . as separators to _.
This ended up working both locally and when deployed:
@FunctionName("EventHubTrigger")
public void run(
@EventHubTrigger(name = "event",
connection = "app_eventhub_connectionString",
eventHubName = "%app_eventhub_name%",
consumerGroup = "%app_eventhub_consumerGroup%",
cardinality = Cardinality.MANY)
List<Event> event,
final ExecutionContext context
) {
// logic
}
With configuration settings in local.settings.json as:
"app_eventhub_connectionString": "Endpoint=XXXX",
"app_eventhub_name": "hubName",
"app_eventhub_consumerGroup": "consumerName"
And corresponding updates made to the App configuration.
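A plausible explanation (an assumption on my part, not confirmed above) is that app settings are surfaced to the worker as environment variables, where '.' is not a safe character, so the renaming amounts to this trivial transformation:

```java
public class SettingNames {
    // Hypothetical helper illustrating the convention change that fixed the
    // issue: replace '.' separators with '_' so setting names are also valid
    // environment variable names.
    static String toSafeName(String name) {
        return name.replace('.', '_');
    }

    public static void main(String[] args) {
        System.out.println(toSafeName("app.eventhub.name"));
    }
}
```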
My Mule application writes JSON records to a Kinesis stream. I use the KPL producer library. When run locally, it picks up AWS credentials from .aws/credentials and writes records to Kinesis successfully.
However, when I deploy my application to CloudHub, it throws AmazonClientException, obviously because it does not have access to any of the locations that the DefaultAWSCredentialsProviderChain class supports (http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/DefaultAWSCredentialsProviderChain.html).
This is how I attach the credentials, which locally are picked up from .aws/credentials:
config.setCredentialsProvider(new DefaultAWSCredentialsProviderChain());
I couldn't figure out a way to provide credentials explicitly using a my-app.properties file.
Then I tried to create a separate configuration class with getters/setters, set the access key and secret key as private fields, and implement a getter:
public AWSCredentialsProvider getCredentials() {
if(accessKey == null || secretKey == null) {
return new DefaultAWSCredentialsProviderChain();
}
return new StaticCredentialsProvider(new BasicAWSCredentials(getAccessKey(), getSecretKey()));
}
}
This was intended to be used instead of the DefaultAWSCredentialsProviderChain class, this way:
config.setCredentialsProvider(new AWSConfig().getCredentials());
Still throws the same error when deployed.
The following repo states that it is possible to provide explicit credentials, but I need help figuring out how, because I can't find proper documentation or an example:
https://github.com/awslabs/amazon-kinesis-producer/blob/master/java/amazon-kinesis-producer-sample/src/com/amazonaws/services/kinesis/producer/sample/SampleProducer.java
I have faced the same issue, and this solution worked for me; I hope it works for you as well.
@Value("${s3_accessKey}")
private String s3_accessKey;
@Value("${s3_secretKey}")
private String s3_secretKey;
// The two values above are taken from the application.properties file.
BasicAWSCredentials creds = new BasicAWSCredentials(s3_accessKey, s3_secretKey);
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        .withCredentials(new AWSStaticCredentialsProvider(creds))
        .withRegion(Regions.US_EAST_2)
        .build();
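Since the question is about the KPL specifically, the same AWSStaticCredentialsProvider pattern should carry over to config.setCredentialsProvider(...) on KinesisProducerConfiguration. Reading the keys from a properties file bundled with the CloudHub deployment can be done with plain java.util.Properties; the file name and property keys below are assumptions for illustration:

```java
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.util.Properties;

public class AwsCredsFromProperties {
    // Load the AWS key pair from any properties source; the property names
    // "aws.accessKey" / "aws.secretKey" are hypothetical.
    static String[] loadKeys(Reader source) throws IOException {
        Properties props = new Properties();
        props.load(source);
        return new String[] {
                props.getProperty("aws.accessKey"),
                props.getProperty("aws.secretKey")
        };
    }

    public static void main(String[] args) throws IOException {
        // In the real application this Reader would wrap my-app.properties.
        String[] keys = loadKeys(new StringReader("aws.accessKey=AKIA123\naws.secretKey=secret"));
        // The keys can then feed the KPL configuration (requires the AWS SDK):
        // config.setCredentialsProvider(
        //     new AWSStaticCredentialsProvider(new BasicAWSCredentials(keys[0], keys[1])));
        System.out.println("access key loaded: " + (keys[0] != null));
    }
}
```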
I have this program in Java to connect to a SQL Server:
String server = "ZF-SQL-MTRAZDB.NIS.LOCAL";
String dbName = "MRAZ";
String nameBaseDatos = "CD_LO";
String table = "dbo.CD_LO_DATA";
String user = "user";
String password = "Pass";
String url = "jdbc:sqlserver://" + server + "\\" + dbName + ";databaseName=" + nameBaseDatos;
String driver = "com.microsoft.sqlserver.jdbc.SQLServerDriver";
Now I have to do the same with Visual C# 2010 on Windows XP.
How can I write this program? In Java I use JDBC; should I also use JDBC in C#?
Thanks for all!
The ConnectionString is similar to an OLE DB connection string, but is not identical. Unlike OLE DB or ADO, the connection string that is returned is the same as the user-set ConnectionString, minus security information if the Persist Security Info value is set to false (default). The .NET Framework Data Provider for SQL Server does not persist or return the password in a connection string unless you set Persist Security Info to true.
You can use the ConnectionString property to connect to a database. The following example illustrates a typical connection string.
"Persist Security Info=False;Integrated Security=true;Initial Catalog=Northwind;server=(local)"
Use the new SqlConnectionStringBuilder to construct valid connection strings at run time.
private static void OpenSqlConnection()
{
    string connectionString = GetConnectionString();
    using (SqlConnection connection = new SqlConnection())
    {
        connection.ConnectionString = connectionString;
        connection.Open();
        Console.WriteLine("State: {0}", connection.State);
        Console.WriteLine("ConnectionString: {0}",
            connection.ConnectionString);
    }
}

static private string GetConnectionString()
{
    // To avoid storing the connection string in your code,
    // you can retrieve it from a configuration file.
    return "Data Source=MSSQL1;Initial Catalog=AdventureWorks;"
        + "Integrated Security=true;";
}
Data Source (or Server, Address, Addr, Network Address): the name or network address of the instance of SQL Server to which to connect. The port number can be specified after the server name: server=tcp:servername,portnumber
Initial Catalog or Database: the name of the database. The database name can be 128 characters or less.
Integrated Security or Trusted_Connection: when false, User ID and Password are specified in the connection. When true, the current Windows account credentials are used for authentication. Recognized values are true, false, yes, no, and sspi (strongly recommended), which is equivalent to true. If User ID and Password are specified and Integrated Security is set to true, the User ID and Password will be ignored and Integrated Security will be used.
and other items
I hope this helps you :).