Worklight multi-thread/ServletContext calling adapter - Java

I've been having some problems with Worklight and multi-threading.
We have a batch importer that needs to run once every day.
What we have done so far:
The importer is a ServletContextListener
Quartz runs the importer as a cron job (a minimal sketch of this wiring follows)
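A minimal sketch of that setup, assuming Quartz 2.x (the job/trigger identities and the cron expression are illustrative; MyJob is the job class that appears in the stack trace below):
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import org.quartz.CronScheduleBuilder;
import org.quartz.JobBuilder;
import org.quartz.JobDetail;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.Trigger;
import org.quartz.TriggerBuilder;
import org.quartz.impl.StdSchedulerFactory;

public class ImporterContextListener implements ServletContextListener {
    private Scheduler scheduler;

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        try {
            scheduler = StdSchedulerFactory.getDefaultScheduler();
            JobDetail job = JobBuilder.newJob(MyJob.class).withIdentity("importerJob").build();
            Trigger trigger = TriggerBuilder.newTrigger()
                    .withIdentity("importerTrigger")
                    .withSchedule(CronScheduleBuilder.cronSchedule("0 0 3 * * ?")) // daily at 03:00
                    .build();
            scheduler.scheduleJob(job, trigger);
            scheduler.start();
        } catch (SchedulerException e) {
            throw new RuntimeException("Failed to start importer scheduler", e);
        }
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        try {
            if (scheduler != null) {
                scheduler.shutdown();
            }
        } catch (SchedulerException ignored) {
            // best-effort shutdown on context destruction
        }
    }
}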
Everything in the code works fine except for calling the HTTP adapters. Every time an adapter is called, the error message "BaseProjectLocal is null" is returned.
The code does work fine if it is started by another Worklight adapter.
It seems the error occurs because the thread does not know how to access the adapters (I assume).
java.lang.RuntimeException: BaseProjectLocal is null
at com.worklight.common.util.BaseProjectLocal.get(BaseProjectLocal.java:41)
at com.worklight.server.util.ProjectLocal.get(ProjectLocal.java:55)
at com.worklight.server.util.ProjectLocal.getWorklightBundlesS(ProjectLocal.java:113)
at com.worklight.server.bundle.api.WorklightBundles.getInstance(WorklightBundles.java:28)
at com.ibm.nl.wwdw.server.util.AdapterCaller.doCall(AdapterCaller.java:25)
at com.ibm.nl.wwdw.server.connections.CommunityCollector.getMembersFromCommunity(CommunityCollector.java:50)
at com.ibm.nl.wwdw.server.importer.ConnectionsImporter.StartImport(ConnectionsImporter.java:53)
at com.ibm.nl.wwdw.server.importer.MyJob.execute(MyJob.java:17)
at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573)
2-jun-2014 17:38:56 com.ibm.nl.wwdw.server.importer.ConnectionsImporter StartImport
SEVERE: BaseProjectLocal is null
Java code calling the adapter:
public static JSONObject doCall(String adapter, String path, String paramArray) {
    Logging.logger.warning(adapter + "/" + path + "?" + paramArray);
    // The next line throws "BaseProjectLocal is null"
    DataAccessService service = WorklightBundles.getInstance().getDataAccessService();
    ProcedureQName procedureQName = new ProcedureQName(adapter, path);
    InvocationResult result = service.invokeProcedure(procedureQName, paramArray);
    Logging.logger.warning(result.toJSON().toString());
    return result.toJSON();
}

The problem may be that the thread has no authentication context. While it is possible to create an authentication context manually, it is tricky to accomplish because it requires internal APIs (meaning it is a path that is not supported, ...).
Something like this:
AuthenticationServiceBean authService =
        (AuthenticationServiceBean) getBeanFactory().getBean(AuthenticationService.BEAN_ID);
AuthenticationContext authContext =
        authService.createAuthenticationContext(realm, username, password);
AuthenticationContext.setThreadContext(authContext);
It is suggested to run the importer outside of Worklight and invoke the adapter remotely (via HTTP).
Note, however, that the adapter should not be protected by other realms in that case.
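A sketch of such a remote call from the Quartz job, assuming the classic Worklight adapter invocation URL pattern (/<context>/invoke?adapter=...&procedure=...&parameters=[...]); the host, port, and context root below are placeholders:
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class RemoteAdapterCaller {

    // Invokes an adapter procedure over HTTP instead of through WorklightBundles,
    // so it also works from threads that have no Worklight project context.
    public static String doCall(String adapter, String procedure, String paramArray) throws Exception {
        String url = "http://localhost:10080/worklight/invoke"
                + "?adapter=" + URLEncoder.encode(adapter, "UTF-8")
                + "&procedure=" + URLEncoder.encode(procedure, "UTF-8")
                + "&parameters=" + URLEncoder.encode(paramArray, "UTF-8");
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setRequestMethod("GET");
        StringBuilder body = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) {
                body.append(line);
            }
        } finally {
            conn.disconnect();
        }
        return body.toString(); // the procedure's JSON response
    }
}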

Related

Error "io.questdb.cairo.CairoException: [2] could not open read-write [file=<dir>/_tab_index.d]"

Currently, I am testing QuestDB in an Apache Camel / Spring Boot scenario for our project. I set up a custom Camel component and a configuration bean holding the connection properties. As far as I can see, my custom Camel component properly connects to the server where a test instance of QuestDB is running. But when sending data over the Camel route, I get error messages:
io.questdb.cairo.CairoException: [2] could not open read-write [file=<dir>/_tab_index.d]
The exception is thrown when creating the CairoEngine, like this (taken from the QuestDB API documentation):
try (CairoEngine engine = new CairoEngine(this.configuration)) {
    // ... other code ...
} catch (Exception e) {
    e.printStackTrace();
    // ...
}
where this.configuration is of type CairoConfiguration and contains the "data_dir" and is instantiated like this:
configuration = new DefaultCairoConfiguration(<quest db directory (String)>);
Currently, I am passing the fully qualified path to my database directory: /srv/questdb/db. I confirmed that the file _tab_index.d is available at this location.
What am I doing wrong? Maybe I should mention that I set the access rights on the questdb directory to 777; the owner was set with chown root:questdb ...
Indeed, the embedded API is not suitable for what I want to do. I need to use one of the other APIs. I tested my scenario with the InfluxDB line protocol (see the line protocol documentation) and the data gets written to the server without problems.
The doInsert method in my custom component looks like this (just for testing); it is called when building a route with the custom QuestDB "to" endpoint:
public class QuestDbProducer extends DefaultProducer {
    // ... other code ...

    private void doInsert(Exchange exchange, String tableName) throws InvalidPayloadException {
        try (Sender sender = Sender.builder().address("lxyrpc01.gsi.de:9009").build()) {
            sender.table("inventors")
                    .symbol("born", "Austrian Empire")
                    .longColumn("id", 0)
                    .stringColumn("name", "Nicola Tesla")
                    .atNow();
            sender.table("inventors")
                    .symbol("born", "USA")
                    .longColumn("id", 1)
                    .stringColumn("name", "Thomas Alva Edison")
                    .atNow();
        }
    }
}
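For context, a hypothetical Camel route using such a custom component could look like the following (the questdb URI scheme and the table name in the endpoint are assumptions about the custom component, not part of Camel or QuestDB):
import org.apache.camel.builder.RouteBuilder;

public class QuestDbRoute extends RouteBuilder {
    @Override
    public void configure() {
        // "questdb" is the (assumed) scheme registered by the custom component above
        from("timer:ingest?period=1000")
                .to("questdb://inventors");
    }
}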

Azure Functions Binding Expression in Java not resolving when deployed

I have an Azure Java Function App (Java 11, Gradle, azure-functions-java-library 1.4.0) that is tied to an Event Hub trigger. There are parameters that I can inject into the annotation by surrounding them with %, as per https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-expressions-patterns. The connection isn't using the % since it's a special param that is always taken from the app properties.
When I run my function locally, using ./gradlew azureFunctionsRun, it runs as expected. But once it's deployed to an Azure Function App, it complains that it can't resolve the params.
The error in Azure:
2021-05-27T18:25:37.522 [Error] Microsoft.Azure.WebJobs.Host: Error indexing method 'Functions.EventHubTrigger'. Microsoft.Azure.WebJobs.Host: '%app.eventhub.name%' does not resolve to a value.
The Trigger annotation looks like:
@FunctionName("EventHubTrigger")
public void run(
        @EventHubTrigger(name = "event",
                connection = "app.eventhub.connectionString",
                eventHubName = "%app.eventhub.name%",
                consumerGroup = "%app.eventhub.consumerGroup%",
                cardinality = Cardinality.MANY)
        List<Event> event,
        final ExecutionContext context
) {
    // logic
}
Locally in local.settings.json I have values for:
"app.eventhub.connectionString": "Endpoint=XXXX",
"app.eventhub.name": "hubName",
"app.eventhub.consumerGroup": "consumerName"
And in Azure for the function app, I have Configuration (under Settings) for each of the above.
Any ideas what I'm missing?
After some further investigation, I managed to get things working in Azure Function Apps by changing my naming convention from using . as separators to _.
This ended up working both locally and when deployed:
@FunctionName("EventHubTrigger")
public void run(
        @EventHubTrigger(name = "event",
                connection = "app_eventhub_connectionString",
                eventHubName = "%app_eventhub_name%",
                consumerGroup = "%app_eventhub_consumerGroup%",
                cardinality = Cardinality.MANY)
        List<Event> event,
        final ExecutionContext context
) {
    // logic
}
With configuration settings in local.settings.json as:
"app_eventhub_connectionString": "Endpoint=XXXX",
"app_eventhub_name": "hubName",
"app_eventhub_consumerGroup": "consumerName"
And corresponding updates made to the App configuration.
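For reference, a complete local.settings.json following the underscore convention might look like this (a sketch; the AzureWebJobsStorage and FUNCTIONS_WORKER_RUNTIME entries are the usual local-development values and are not from the original post):
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "java",
    "app_eventhub_connectionString": "Endpoint=XXXX",
    "app_eventhub_name": "hubName",
    "app_eventhub_consumerGroup": "consumerName"
  }
}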

TokenResponseException: missing scope when using the RemoteAPI of AppEngine in Java with OAuth 2.0 on stand-alone application

My goal is for my stand-alone application to access the datastore of a Google App Engine application so that I can query it. My application used to work with ClientLogin, but I have been asked to use OAuth 2.0 for authentication (and ClientLogin doesn't work anymore).
I follow the instructions on this page: https://cloud.google.com/appengine/docs/java/tools/remoteapi
I use the provided code, made a service account, downloaded the JSON key, and created an environment variable pointing to this key. The result is that I get the following exception:
Exception in thread "main" java.lang.ExceptionInInitializerError
at myApplication.myClass4.moveResultsOfFeature(myClass4.java:51)
at myApplication.myClass2.migrate(MyClass3.java:32)
at myApplication.myClass1.main(Starter.java:11)
Caused by: java.lang.RuntimeException: Failed to acquire Google Application Default credential.
at com.google.appengine.tools.remoteapi.RemoteApiOptions.useApplicationDefaultCredential(RemoteApiOptions.java:163)
at commonMigration.RemoteOptions.<clinit>(RemoteOptions.java:18)
... 3 more
Caused by: com.google.appengine.repackaged.com.google.api.client.auth.oauth2.TokenResponseException: 400 Bad Request
{
"error" : "invalid_scope",
"error_description" : "Empty or missing scope not allowed."
}
at com.google.appengine.repackaged.com.google.api.client.auth.oauth2.TokenResponseException.from(TokenResponseException.java:105)
at com.google.appengine.repackaged.com.google.api.client.auth.oauth2.TokenRequest.executeUnparsed(TokenRequest.java:287)
at com.google.appengine.repackaged.com.google.api.client.auth.oauth2.TokenRequest.execute(TokenRequest.java:307)
at com.google.appengine.repackaged.com.google.api.client.googleapis.auth.oauth2.GoogleCredential.executeRefreshToken(GoogleCredential.java:384)
at com.google.appengine.repackaged.com.google.api.client.auth.oauth2.Credential.refreshToken(Credential.java:489)
at com.google.appengine.tools.remoteapi.RemoteApiOptions.useApplicationDefaultCredential(RemoteApiOptions.java:160)
... 4 more
which seems to point to a missing scope argument, a concern that isn't mentioned in the explanation on the web page. Is there an easy way to fix this issue?
Per request, my code (simplified):
public class StackOverflow {
    private static RemoteApiOptions REMOTE_OPTIONS = new RemoteApiOptions()
            .server("<application-id>.appspot.com", 443)
            .useApplicationDefaultCredential();

    public static void main(String[] args0) throws IOException {
        // MAKING THE CONNECTION
        RemoteApiInstaller installer = new RemoteApiInstaller();
        // LOAD FROM Local
        installer.install(REMOTE_OPTIONS);
        try {
            // MY OPERATIONS
        } finally {
            installer.uninstall();
        }
    }
}
This is a current limitation of the Remote API. See the note here:
Note: The Remote API call to useApplicationDefaultCredential() can only use credentials provided by the gcloud command.
(It's possible you followed the instructions before the note was added, since it is a recently discovered limitation.) The limitation will be fixed in a future release. For now, you should either run:
gcloud auth login
and use your user account to authenticate using useApplicationDefaultCredential(), or use a service account with .useServiceAccountCredential(), which accepts the service account email and a path to a .p12 file instead of the JSON file.
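A minimal sketch of the service-account alternative (the account email and key path are placeholders; the two-argument useServiceAccountCredential overload taking an email and a .p12 path is assumed here):
RemoteApiOptions options = new RemoteApiOptions()
        .server("<application-id>.appspot.com", 443)
        // Placeholder service-account email and key path, not values from the post
        .useServiceAccountCredential(
                "my-service-account@<application-id>.iam.gserviceaccount.com",
                "/path/to/service-account-key.p12");

RemoteApiInstaller installer = new RemoteApiInstaller();
installer.install(options);
try {
    // datastore operations
} finally {
    installer.uninstall();
}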

Unable to retrieve slot information for clusters with admission failover level control policy enabled

I'm experiencing some problems when trying to retrieve the slot information for VMware clusters with the admission control failover level policy enabled. I use the VI Java API.
When calling the following method:
clusterComputeResource.retrieveDasAdvancedRuntimeInfo()
I either get the following Exception:
java.rmi.RemoteException: VI SDK invoke exception:java.rmi.RemoteException: Exception in
WSClient.invoke:; nested exception is:
java.lang.NoSuchFieldException: slotInfo
at com.vmware.vim25.ws.WSClient.invoke(WSClient.java:122)
at com.vmware.vim25.ws.VimStub.retrieveDasAdvancedRuntimeInfo(VimStub.java:269)
or I get a result of type ClusterDasAdvancedRuntimeInfo,
but I need the subclass ClusterDasFailoverLevelAdvancedRuntimeInfo in order to get the SlotInfo field (casting to the required subclass doesn't work either).
I tried to access the web service of a vCenter directly via SoapUI, and it worked without any problems, but with the vijava API it just doesn't.
Thanks in advance for any help!
After a lot of debugging to see what the VI Java API does internally, I found out that if the web service client (wsc) is invoked with the name of the subclass instead of the name of the superclass (as the last parameter), the response is converted correctly. This way the slot information can be retrieved without any problems. Here is the solution for those experiencing the same problem:
ClusterDasFailoverLevelAdvancedRuntimeInfo clusterDasFailoverLevelAdvancedRuntimeInfo = null;
try {
    final Argument[] paras = new Argument[1];
    paras[0] = new Argument("_this", "ManagedObjectReference", clusterComputeResource.getMOR());
    clusterDasFailoverLevelAdvancedRuntimeInfo =
            (ClusterDasFailoverLevelAdvancedRuntimeInfo) serviceInstance
                    .getServerConnection()
                    .getVimService()
                    .getWsc()
                    .invoke("RetrieveDasAdvancedRuntimeInfo", paras,
                            "ClusterDasFailoverLevelAdvancedRuntimeInfo");
} catch (final Exception e) {
    // error handling
}
(Note that this only works if the admission control failover level policy is enabled!)
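As a follow-up, the slot size can then be read from the converted result, for example (getter names follow the vSphere API slot-info properties numVcpus, cpuMHz, and memoryMB; verify them against the vijava version in use):
if (clusterDasFailoverLevelAdvancedRuntimeInfo != null) {
    ClusterDasFailoverLevelAdvancedRuntimeInfoSlotInfo slotInfo =
            clusterDasFailoverLevelAdvancedRuntimeInfo.getSlotInfo();
    System.out.println("Slot size: " + slotInfo.getNumVcpus() + " vCPUs, "
            + slotInfo.getCpuMHz() + " MHz, " + slotInfo.getMemoryMB() + " MB");
}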

Google Cloud Endpoints EOFException

I have the method below in App Engine:
@ApiMethod(name = "authed", path = "greeting/authed")
public HelloGreeting authedGreeting(User user) {
    ...
}
My doInBackground method in Android AsyncTask:
HelloGreeting hg = null;
try {
    hg = service.authed().execute();
} catch (IOException e) {
    Log.d("error", e.getMessage(), e);
}
return hg;
I encountered the following error:
/_ah/api/.../v1/greeting/authed: java.io.EOFException
In logcat:
Problem accessing /_ah/api/.../v1/greeting/authed. Reason: INTERNAL_SERVER_ERROR
at java.util.zip.GZIPInputStream.readUByte
at java.util.zip.GZIPInputStream.readUShort
at java.util.zip.GZIPInputStream.readUShort
It only works when calling a non-auth method. How can I fix it?
I'm using the local server.
I was running into a similar problem when making a call to insert values. Mine differs slightly because I am not using authentication; however, I was getting the same exception. I am using appengine-java-sdk-1.8.8. I was able to make other endpoint calls without hitting this error. I looked at the generated code, and the difference I saw between the working calls and the non-working calls was the HttpMethod: the failing call was defined as "POST". I was able to change this using the annotation attribute httpMethod = ApiMethod.HttpMethod.GET in the @ApiMethod annotation.
@ApiMethod(httpMethod = ApiMethod.HttpMethod.GET, name = "insertUserArtist", path = "insertUserArtist")
I then regenerated the client code and was able to make the call without getting the dreaded EOFException. I am not sure why POST doesn't work properly, but changing it to GET worked. This possibly raises some questions about how much data can be sent across, and it should be addressed (possibly a library issue). I am going to look into creating a demonstration app to submit to Google.
If you pass an "entity" object, then the POST will work. If you are passing primitive data types, you'll be stuck with using HttpMethod.GET.
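To illustrate that distinction, wrapping the primitives in a request bean keeps the call a POST (UserArtist here is a hypothetical bean, not code from the original post):
public class UserArtist {
    private String userId;
    private String artistId;

    public String getUserId() { return userId; }
    public void setUserId(String userId) { this.userId = userId; }
    public String getArtistId() { return artistId; }
    public void setArtistId(String artistId) { this.artistId = artistId; }
}

@ApiMethod(httpMethod = ApiMethod.HttpMethod.POST, name = "insertUserArtist", path = "insertUserArtist")
public void insertUserArtist(UserArtist userArtist) {
    // persist the entity
}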
If you are running on the local dev server, add the following snippet to your MyApi.Builder (after setting the root URL) to set it up properly:
.setGoogleClientRequestInitializer(new GoogleClientRequestInitializer() {
    @Override
    public void initialize(AbstractGoogleClientRequest<?> abstractGoogleClientRequest) throws IOException {
        abstractGoogleClientRequest.setDisableGZipContent(true);
    }
})
Source: https://github.com/GoogleCloudPlatform/gradle-appengine-templates/tree/master/HelloEndpoints
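In context, the builder setup from that template looks roughly like this (MyApi is the endpoint client class generated for your API; 10.0.2.2 is the Android emulator's address for the host machine's local dev server):
MyApi.Builder builder = new MyApi.Builder(
        AndroidHttp.newCompatibleTransport(),
        new AndroidJsonFactory(),
        null)
        .setRootUrl("http://10.0.2.2:8080/_ah/api/")
        .setGoogleClientRequestInitializer(new GoogleClientRequestInitializer() {
            @Override
            public void initialize(AbstractGoogleClientRequest<?> request) throws IOException {
                // Disable gzip so the local dev server does not return truncated
                // (EOFException-producing) responses
                request.setDisableGZipContent(true);
            }
        });
MyApi myApiService = builder.build();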
