Azure Functions Binding Expression in Java not resolving when deployed

I have an Azure Java Function App (Java 11, Gradle, azure-functions-java-library 1.4.0) that is tied to an Event Hub trigger. There are parameters that I can inject into the annotation by surrounding them with %, as per https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-expressions-patterns. The connection isn't using the % since it's a special param that is always taken from the app properties.
When I run my function locally using ./gradlew azureFunctionsRun, it runs as expected. But once it's deployed to an Azure Function App, it complains that it can't resolve the params.
The error in Azure:
2021-05-27T18:25:37.522 [Error] Microsoft.Azure.WebJobs.Host: Error indexing method 'Functions.EventHubTrigger'. Microsoft.Azure.WebJobs.Host: '%app.eventhub.name%' does not resolve to a value.
The Trigger annotation looks like:
@FunctionName("EventHubTrigger")
public void run(
        @EventHubTrigger(name = "event",
                connection = "app.eventhub.connectionString",
                eventHubName = "%app.eventhub.name%",
                consumerGroup = "%app.eventhub.consumerGroup%",
                cardinality = Cardinality.MANY)
        List<Event> event,
        final ExecutionContext context
) {
    // logic
}
Locally in local.settings.json I have values for:
"app.eventhub.connectionString": "Endpoint=XXXX",
"app.eventhub.name": "hubName",
"app.eventhub.consumerGroup": "consumerName"
And in Azure for the function app, I have Configuration (under Settings) for each of the above.
Any ideas what I'm missing?

After some further investigation, I managed to get things working in Azure Function Apps by changing my naming convention from . to _ as the separator.
This ended up working both locally and when deployed:
@FunctionName("EventHubTrigger")
public void run(
        @EventHubTrigger(name = "event",
                connection = "app_eventhub_connectionString",
                eventHubName = "%app_eventhub_name%",
                consumerGroup = "%app_eventhub_consumerGroup%",
                cardinality = Cardinality.MANY)
        List<Event> event,
        final ExecutionContext context
) {
    // logic
}
With configuration settings in local.settings.json as:
"app_eventhub_connectionString": "Endpoint=XXXX",
"app_eventhub_name": "hubName",
"app_eventhub_consumerGroup": "consumerName"
And corresponding updates made to the App configuration.
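For completeness, a full local.settings.json with the renamed settings might look like the sketch below (the AzureWebJobsStorage and FUNCTIONS_WORKER_RUNTIME entries are assumed boilerplate, not taken from the question):
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "java",
    "app_eventhub_connectionString": "Endpoint=XXXX",
    "app_eventhub_name": "hubName",
    "app_eventhub_consumerGroup": "consumerName"
  }
}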

Related

Storage account connection string for 'AzureWebJobsAzureCosmosDBConnection' is invalid

I have this problem when I try to run a function with a BlobTrigger.
Microsoft.Azure.WebJobs.Host: Error indexing method 'Functions.myFunction'. Microsoft.Azure.WebJobs.Extensions.Storage:
Storage account connection string for 'AzureWebJobsAzureCosmosDBConnection' is invalid.
The variables are:
AzureWebJobsAzureCosmosDBConnection = AccountEndpoint=https://example.com/;AccountKey=YYYYYYYYYYY;
AzureWebJobsStorage = UseDevelopmentStorage=true
AzureCosmosDBConnection = AccountEndpoint=https://example.com/;AccountKey=YYYYYYYYYYY;
I don't know why this function throws an exception...
I'm not sure whether your local.settings.json is actually written in key = value format, or whether that's just how it appears in the question.
The format of the local.settings.json configuration for any Azure Function is "key": "value":
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "DefaultEndpointsProtocol=https;AccountName=pravustorageac88;AccountKey=<alpha-numeric-symbolic_access_key>;EndpointSuffix=core.windows.net",
    "FUNCTIONS_WORKER_RUNTIME": "java",
    "MAIN_CLASS": "com.example.DemoApplication",
    "AzureWebJobsDashboard": "DefaultEndpointsProtocol=https;AccountName=pravustorageac88;AccountKey=<alpha-numeric-symbolic_access_key>;EndpointSuffix=core.windows.net",
    "AzureCosmosdBConnStr": "Cosmos_db_conn_str"
  }
}
If you are using a Cosmos DB connection string, then you have to configure it in this way:
public HttpResponseMessage execute(
        @HttpTrigger(name = "request", methods = {HttpMethod.GET, HttpMethod.POST}, authLevel = AuthorizationLevel.ANONYMOUS)
        HttpRequestMessage<Optional<User>> request,
        @CosmosDBOutput(name = "database", databaseName = "db_name", collectionName = "collection_name", connectionStringSetting = "AzureCosmosDBConnection")
        OutputBinding<User> database,
        final ExecutionContext context) {
    // logic
}
Make sure the Cosmos DB connection string present in the local.settings.json file is also published to the Azure Function App under Configuration > Application settings.
For that, either comment out (or remove) the local.settings.json entry in the .gitignore file, or add the configuration settings manually in the Azure Function App Configuration.
I removed local.settings.json from the .gitignore file and then published to the Azure Function App, so the Cosmos DB connection string was also updated in the configuration.
Note:
If you have a proxy in the system, then you have to add the proxy settings in the func.exe configuration file, as described here by @p31415926.
You can configure the Cosmos DB connection in the Azure Functions Java stack in two ways: bindings (as in the code above) and the SDK described in this MS Doc.
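For the SDK route, a minimal sketch using the azure-cosmos Java SDK might look like this (the COSMOS_ENDPOINT/COSMOS_KEY environment variables and the User POJO are illustrative assumptions; db_name and collection_name match the binding above):
import com.azure.cosmos.CosmosClient;
import com.azure.cosmos.CosmosClientBuilder;
import com.azure.cosmos.CosmosContainer;

public class CosmosWriter {
    // Hypothetical POJO; Cosmos DB requires an "id" property on items
    public static class User {
        public String id;
        public String name;
        public User(String id, String name) { this.id = id; this.name = name; }
    }

    public static void main(String[] args) {
        // Build a synchronous client from an endpoint and key (assumed env vars)
        CosmosClient client = new CosmosClientBuilder()
                .endpoint(System.getenv("COSMOS_ENDPOINT"))
                .key(System.getenv("COSMOS_KEY"))
                .buildClient();
        try {
            // Write an item to the same database/collection the binding uses
            CosmosContainer container = client.getDatabase("db_name").getContainer("collection_name");
            container.createItem(new User("1", "Alice"));
        } finally {
            client.close();
        }
    }
}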

How to set parent id and operation id of the telemetry in Azure Application Insights for Azure function using Java

I have a function app: func1 (HttpTrigger) -> blob storage -> func2 (BlobTrigger). In Application Insights, two separate request telemetry items are generated with different operation ids, and each has its own end-to-end transaction trace.
In order to get an end-to-end trace for the whole app, I would like to correlate the two functions by setting the parent id and operation id of func2 to the request id and operation id of func1, so that both show up in Application Insights as one end-to-end trace.
I have tried the following code, but it didn't have any effect, and there is a general lack of documentation about how to use the Application Insights Java SDK for customizing telemetry.
#FunctionName("Create-Thumbnail")
#StorageAccount(Config.STORAGE_ACCOUNT_NAME)
#BlobOutput(name = "$return", path = "output/{name}")
public byte[] generateThumbnail(
#BlobTrigger(name = "blob", path = "input/{name}")
byte[] content,
final ExecutionContext context
) {
try {
TelemetryConfiguration configuration = TelemetryConfiguration.getActive();
TelemetryClient client = new TelemetryClient(configuration);
client.getContext().getOperation().setParentId("MY_CUSTOM_PARENT_ID");
client.flush();
return Converter.createThumbnail(content);
} catch (Exception e) {
e.printStackTrace();
return content;
}
}
Can anyone with knowledge in this area provide some tips?
I'm afraid it can't be achieved, as the official doc says:
In C# and JavaScript, you can use an Application Insights SDK to write
custom telemetry data.
If you need to set custom telemetry, you would have to add an Application Insights Java SDK to your function, but I haven't found any such SDK... If there's any progress, I'll update here.

Displaying deployed app status with Java Wildfly 9 native management API

I am trying to use the Wildfly 9 native management API to show me the status of my deployed apps. The jboss-cli execution and result are below:
jboss-cli.sh --connect --controller=myserver.com:9990 --commands="/deployment=my-deployment.war :read-attribute(name=status)"
{
"outcome" => "success",
"result" => "OK"
}
Using the code below, I am able to determine if the apps are enabled, but not if they are up and running:
ModelNode op = new ModelNode();
op.get("operation").set("read-children-names");
op.get("child-type").set(ClientConstants.DEPLOYMENT);
Can anyone assist in translating my jboss-cli commands into Java? I've tried hooking into the deployment-scanner subsystem as well, but that doesn't seem to get me anywhere useful.
This answer from a similar thread worked for us (with some minor tweaking): https://stackoverflow.com/a/41261864/3486967
You can use the read-children-resources operation to get the deployment resources.
Something like the following should work.
try (final ModelControllerClient client = ModelControllerClient.Factory.create(InetAddress.getLocalHost(), 9990)) {
    ServerHelper.waitForStandalone(client, 20L);
    final ModelNode op = Operations.createOperation("read-children-resources");
    op.get(ClientConstants.CHILD_TYPE).set(ClientConstants.DEPLOYMENT);
    final ModelNode result = client.execute(op);
    if (Operations.isSuccessfulOutcome(result)) {
        final List<Property> deployments = Operations.readResult(result).asPropertyList();
        for (Property deployment : deployments) {
            System.out.printf("Deployment %-20s enabled? %s%n", deployment.getName(), deployment.getValue().get("enabled"));
        }
    } else {
        throw new RuntimeException(Operations.getFailureDescription(result).asString());
    }
}
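If you specifically want the status attribute from the question's jboss-cli command, a sketch along the same lines (reusing the client and helper classes from the block above) might be:
// Read the "status" attribute of a single deployment, mirroring
// /deployment=my-deployment.war :read-attribute(name=status)
final ModelNode address = Operations.createAddress(ClientConstants.DEPLOYMENT, "my-deployment.war");
final ModelNode readStatus = Operations.createReadAttributeOperation(address, "status");
final ModelNode statusResult = client.execute(readStatus);
if (Operations.isSuccessfulOutcome(statusResult)) {
    // Prints e.g. "OK" when the deployment is up and running
    System.out.println(Operations.readResult(statusResult).asString());
}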

Vertx HttpClient getNow not working

I have a problem with the Vert.x HttpClient.
Here's code that tests a GET request using both Vert.x and plain Java:
Vertx vertx = Vertx.vertx();
HttpClientOptions options = new HttpClientOptions()
        .setTrustAll(true)
        .setSsl(false)
        .setDefaultPort(80)
        .setProtocolVersion(HttpVersion.HTTP_1_1)
        .setLogActivity(true);
HttpClient client = vertx.createHttpClient(options);
client.getNow("google.com", "/", response -> {
    System.out.println("Received response with status code " + response.statusCode());
});
System.out.println(getHTML("http://google.com"));
Where getHTML() is from here: How do I do a HTTP GET in Java?
This is my output:
<!doctype html><html... etc. <- correct output from plain Java
Feb 08, 2017 11:31:21 AM io.vertx.core.http.impl.HttpClientRequestImpl
SEVERE: java.net.UnknownHostException: failed to resolve 'google.com'. Exceeded max queries per resolve 3
But Vert.x can't connect. What's wrong here? I'm not using any proxy.
For reference: a solution, as described in this question and in tsegismont's comment here, is to set the flag vertx.disableDnsResolver to true:
-Dvertx.disableDnsResolver=true
in order to fall back to the JVM DNS resolver as explained here:
sometimes it can be desirable to use the JVM built-in resolver, the JVM system property -Dvertx.disableDnsResolver=true activates this behavior
I observed this DNS resolution issue with a redis client in a kubernetes environment.
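A minimal sketch of setting the flag programmatically (it must run before the first Vertx instance is created; equivalent to passing the system property on the command line):
// Fall back to the JVM built-in DNS resolver instead of Netty's
System.setProperty("vertx.disableDnsResolver", "true");
Vertx vertx = Vertx.vertx();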
I had this issue; what caused it for me was stale DNS servers being picked up by the Java runtime, i.e. servers registered for a network the machine was no longer connected to. The issue originates in the Sun JNDI implementation; it also exists in Netty, which uses JNDI to bootstrap its list of name servers on most platforms, and it finally shows up in Vert.x.
I think a good place to fix this would be in the Netty layer, where the set of default DNS servers is bootstrapped. I have raised a ticket with the Netty project, so we'll see if they agree with me! Here is the Netty ticket.
In the meantime, a fairly basic workaround is to filter the default DNS servers detected by Netty based on whether they are reachable or not. Here is a code sample in Kotlin to apply before constructing the main Vert.x instance.
// The default set of name servers provided by JNDI can contain stale entries.
// This default set is picked up by Netty and in turn by VertX.
// To work around this, we filter for only reachable name servers on startup.
val nameServers = DefaultDnsServerAddressStreamProvider.defaultAddressList()
val reachableNameServers = nameServers.stream()
        .filter { ns -> ns.address.isReachable(NS_REACHABLE_TIMEOUT) }
        .map { ns -> ns.address.hostAddress }
        .collect(Collectors.toList())
if (reachableNameServers.size == 0)
    throw StartupException("There are no reachable name servers available")

val opts = VertxOptions()
opts.addressResolverOptions.servers = reachableNameServers

// The primary Vertx instance
val vertx = Vertx.vertx(opts)
A little more detail in case it is helpful. I have a company machine which at some point was connected to the company network by a physical cable. Details of the company's internal name servers were set up by DHCP on the physical interface. Using the wireless interface at home, DNS for the wireless interface gets set to my home DNS, while the config for the physical interface is not updated. This is fine since that device is not active, and ipconfig /all does not show the internal company DNS servers. However, looking in the registry, they are still there:
Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces
They get picked up by the JNDI mechanism, which feeds Netty and in turn Vert.x. Since they are not reachable from my home location, DNS resolution fails. I can imagine this home/office situation is not unique to me! I don't know whether something similar could occur with multiple virtual interfaces on containers or VMs; it could be worth looking into if you are having problems.
Here is sample code that works for me.
public class TemplVerticle extends HttpVerticle {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        // Create the web client and enable SSL/TLS with a trust store
        WebClient client = WebClient.create(vertx,
                new WebClientOptions()
                        .setSsl(true)
                        .setTrustAll(true)
                        .setDefaultPort(443)
                        .setKeepAlive(true)
                        .setDefaultHost("www.w3schools.com")
        );
        client.get("/") // request URI only; the host comes from setDefaultHost above
                .as(BodyCodec.string())
                .send(ar -> {
                    if (ar.succeeded()) {
                        HttpResponse<String> response = ar.result();
                        System.out.println("Got HTTP response body");
                        System.out.println(response.body());
                    } else {
                        ar.cause().printStackTrace();
                    }
                });
    }
}
Try using WebClient instead of HttpClient; here is an example (with Rx):
private val client: WebClient = WebClient.create(vertx, WebClientOptions()
        .setSsl(true)
        .setTrustAll(true)
        .setDefaultPort(443)
        .setKeepAlive(true)
)

open fun <T> get(uri: String, marshaller: Class<T>): Single<T> {
    return client.getAbs(host + uri).rxSend()
            .map { extractJson(it, uri, marshaller) }
}
Another option is to use getAbs.
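A minimal sketch of the getAbs variant (the URL is illustrative; getAbs takes an absolute URI, so no default host or port is required):
WebClient client = WebClient.create(vertx);
// getAbs takes the full URL, including scheme and host
client.getAbs("https://www.w3schools.com/")
        .send(ar -> {
            if (ar.succeeded()) {
                System.out.println("Status: " + ar.result().statusCode());
            } else {
                ar.cause().printStackTrace();
            }
        });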

worklight multi thread/servletcontext calling adapter

I've been having some problems with Worklight and multithreading.
We have a batch importer that needs to run once every day.
What we have done so far:
The importer is a ServletContextListener
Use Quartz to run the importer as a cron job
Everything in the code works fine except for calling the HTTP adapters. Every time an adapter is called, the error message "BaseProjectLocal is null" is returned.
The code does work fine if it is started by another Worklight adapter.
It seems the error is there because it does not know how to access the adapters (I assume).
java.lang.RuntimeException: BaseProjectLocal is null
at com.worklight.common.util.BaseProjectLocal.get(BaseProjectLocal.java:41)
at com.worklight.server.util.ProjectLocal.get(ProjectLocal.java:55)
at com.worklight.server.util.ProjectLocal.getWorklightBundlesS(ProjectLocal.java:113)
at com.worklight.server.bundle.api.WorklightBundles.getInstance(WorklightBundles.java:28)
at com.ibm.nl.wwdw.server.util.AdapterCaller.doCall(AdapterCaller.java:25)
at com.ibm.nl.wwdw.server.connections.CommunityCollector.getMembersFromCommunity(CommunityCollector.java:50)
at com.ibm.nl.wwdw.server.importer.ConnectionsImporter.StartImport(ConnectionsImporter.java:53)
at com.ibm.nl.wwdw.server.importer.MyJob.execute(MyJob.java:17)
at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573)
2-jun-2014 17:38:56 com.ibm.nl.wwdw.server.importer.ConnectionsImporter StartImport
SEVERE: BaseProjectLocal is null
The Java code calling the adapter:
public static JSONObject doCall(String adapter, String path, String paramArray) {
    Logging.logger.warning(adapter + "/" + path + "?" + paramArray);
    // This line gives the error report
    DataAccessService service = WorklightBundles.getInstance().getDataAccessService();
    ProcedureQName procedureQName = new ProcedureQName(adapter, path);
    InvocationResult result = service.invokeProcedure(procedureQName, paramArray);
    Logging.logger.warning(result.toJSON().toString());
    return result.toJSON();
}
The problem may be that the thread has no authentication context. While it is possible to create an authentication context manually, it is tricky to accomplish, as it requires using internal APIs (meaning it is a path that is not supported...).
Something like this:
authService = (AuthenticationServiceBean) getBeanFactory().getBean(AuthenticationService.BEAN_ID);
authContext = authService.createAuthenticationContext(realm, username, password);
AuthenticationContext.setThreadContext(authContext);
It is suggested to run the importer outside of Worklight and invoke the adapter remotely (via HTTP); see the sketch below.
Note, however, that the adapter should not be protected by other realms.
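As a rough illustration of the remote approach, classic Worklight exposes adapter procedures over HTTP via its /invoke service; the sketch below assumes that endpoint, and the host, port, and context root are placeholders to adjust for your installation:
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class RemoteAdapterCaller {
    public static String doCall(String adapter, String procedure, String paramArray) throws Exception {
        // Assumed Worklight server and context root; parameters is a JSON array string
        URL url = new URL("http://myserver:9080/worklight/invoke?adapter=" + adapter
                + "&procedure=" + procedure
                + "&parameters=" + URLEncoder.encode(paramArray, "UTF-8"));
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        StringBuilder body = new StringBuilder();
        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                body.append(line);
            }
        }
        return body.toString(); // JSON response from the adapter procedure
    }
}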
