Getting HystrixRuntimeException: Function timed-out and fallback failed - java

I am using Hystrix 1.5.3 and running this piece of code on my local machine.
@HystrixCommand(groupKey = "BookService", commandKey = "BookService.BookDetail", commandProperties = {
        @HystrixProperty(name = EXECUTION_ISOLATION_STRATEGY, value = "THREAD"),
        @HystrixProperty(name = CIRCUIT_BREAKER_ENABLED, value = "true"),
        @HystrixProperty(name = EXECUTION_TIMEOUT_ENABLED, value = "true"),
        @HystrixProperty(name = EXECUTION_ISOLATION_THREAD_TIMEOUT_IN_MILLISECONDS, value = "1500")}, threadPoolProperties = {
        @HystrixProperty(name = CORE_SIZE, value = "60"),
        @HystrixProperty(name = MAX_QUEUE_SIZE, value = "60"),
        @HystrixProperty(name = QUEUE_SIZE_REJECTION_THRESHOLD, value = "60"),
        @HystrixProperty(name = KEEP_ALIVE_TIME_MINUTES, value = "1")
})
public String getBookDetail(String bookId)
{
    log.info("Getting details");
    ...
}
On our servers it works fine; however, I get this runtime exception on my local machine. My local server just waits for the timeout duration and then throws this HystrixRuntimeException. I don't have a fallback defined, but one should not be needed in my case; the expectation is that it works normally, as it does on our production servers.
com.netflix.hystrix.exception.HystrixRuntimeException: BookService.BookDetail timed-out and fallback failed.
at com.netflix.hystrix.AbstractCommand$21.call(AbstractCommand.java:793) ~[hystrix-core-1.5.3.jar:1.5.3]
at com.netflix.hystrix.AbstractCommand$21.call(AbstractCommand.java:768) ~[hystrix-core-1.5.3.jar:1.5.3]
at rx.internal.operators.OperatorOnErrorResumeNextViaFunction$1.onError(OperatorOnErrorResumeNextViaFunction.java:77) ~[rxjava-1.0.12.jar:1.0.12]
at rx.internal.operators.OperatorDoOnEach$1.onError(OperatorDoOnEach.java:70) ~[rxjava-1.0.12.jar:1.0.12]
at rx.internal.operators.OperatorDoOnEach$1.onError(OperatorDoOnEach.java:70) ~[rxjava-1.0.12.jar:1.0.12]
at com.netflix.hystrix.AbstractCommand$DeprecatedOnFallbackHookApplication$1.onError(AbstractCommand.java:1448) ~[hystrix-core-1.5.3.jar:1.5.3]
at com.netflix.hystrix.AbstractCommand$FallbackHookApplication$1.onError(AbstractCommand.java:1373) ~[hystrix-core-1.5.3.jar:1.5.3]
at rx.internal.operators.OperatorDoOnEach$1.onError(OperatorDoOnEach.java:70) ~[rxjava-1.0.12.jar:1.0.12]
I checked that the function is not being executed, since the log line is never printed.
When I change the timeout via EXECUTION_ISOLATION_THREAD_TIMEOUT_IN_MILLISECONDS, the same behaviour occurs just after the new timeout duration.
When I remove the @HystrixCommand annotation it works fine, so the issue is with Hystrix only. The Hystrix properties in effect are the ones defined in the annotation, which look fine. Could this be because Hystrix is not configured properly? Any help would be appreciated.

Your function is not completing within 1500 ms, so Hystrix times it out; since no fallback is defined, the timeout surfaces as a HystrixRuntimeException.
Change EXECUTION_TIMEOUT_ENABLED to false and Hystrix will not time out your method (or raise the timeout to a value your local environment can actually meet).
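For example, the timeout could be disabled directly in the annotation from the question; a sketch, reusing the property names already shown above:

```java
@HystrixCommand(groupKey = "BookService", commandKey = "BookService.BookDetail",
        commandProperties = {
                // Disable the Hystrix timeout entirely for this command
                @HystrixProperty(name = EXECUTION_TIMEOUT_ENABLED, value = "false")
        })
public String getBookDetail(String bookId) { ... }
```

Disabling the timeout hides the symptom; it is still worth finding out why the call is slower locally than in production.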

Related

SpringBoot Controller mapping to incorrect method

I have the below two GET mappings in my controller:
1. @GetMapping("/department/{deptId}/employee/{employeeId}")
public String func1(@PathVariable(value = "deptId", required = true) String deptId,
                    @PathVariable(value = "employeeId", required = true) String employeeId) { ... }
2. @GetMapping("/department/{deptId}/employee/{employeeId}/workLogs")
public String func2(@PathVariable(value = "deptId", required = true) String deptId,
                    @PathVariable(value = "employeeId", required = true) String employeeId) { ... }
When I fire the API as:
GET http://localhost:8080/department/102/employee//workLogs --> keeping employeeId blank, the call gets mapped to the first mapping (func1) and employeeId is bound as employeeId = "workLogs".
Hence, no exception is thrown for the missing path variable that was marked as required, and the call completes with 200 OK.
How do I resolve this, so that the request maps correctly to func2, and an exception is thrown for the missing required path variable?
When you make a request to
http://localhost:8080/department/102/employee/workLogs
workLogs is interpreted as the employeeId.
There are a couple of ways to solve the problem:
In func1, throw an exception if employeeId.equals("workLogs").
Declare employeeId as an Integer or Long, so that an exception is thrown by default when "workLogs" fails to parse as an employeeId.
But actually, calling http://localhost:8080/department/102/employee//workLogs with the double slash (//) should result in a 404 error. Try using version 5.3.15 of Spring if this isn't the case.
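The second option relies on the fact that a non-numeric path segment fails type conversion. A minimal plain-Java sketch of that idea (parseEmployeeId is a hypothetical helper, not Spring API; Spring itself does this conversion when the parameter is declared as Long):

```java
public class PathVarCheck {

    // Hypothetical helper mirroring Spring's behaviour when the path
    // variable is typed as Long: non-numeric segments fail to convert.
    static Long parseEmployeeId(String segment) {
        try {
            return Long.parseLong(segment);
        } catch (NumberFormatException e) {
            // Spring would raise MethodArgumentTypeMismatchException here,
            // which by default becomes a 400 response instead of a silent 200
            return null;
        }
    }

    public static void main(String[] args) {
        System.out.println(parseEmployeeId("102"));      // prints 102
        System.out.println(parseEmployeeId("workLogs")); // prints null
    }
}
```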

How to use UpdateEventSourceMappingRequest in Java?

I'm trying to use something like this:
UpdateEventSourceMappingRequest request = new UpdateEventSourceMappingRequest()
        .withFunctionName("arn:aws:lambda:us-east-1:9999999999:function:"+functionName)
        .withEnabled(false);
But I received an error because I have to use .withUUID(uuid):
UpdateEventSourceMappingRequest request = new UpdateEventSourceMappingRequest()
        .withUUID(uuid)
        .withFunctionName("arn:aws:lambda:us-east-1:9999999999:function:"+functionName)
        .withEnabled(false);
I don't know how to get the value of uuid (the UUID of the AWS Lambda event source mapping).
Can you help me with a solution to my problem?
You need to provide the UUID identifier of the event source mapping in order to update it; the field is mandatory. An update request is not intended to create the mapping.
When you create an event source mapping (here), AWS returns a response with a UUID identifier, which you can then use in the update request.
That's the solution that I found:
String strUUID = "";
ListEventSourceMappingsRequest requestList = new ListEventSourceMappingsRequest()
        .withEventSourceArn("arn:aws:sqs:us-east-1:9999999999:test");
ListEventSourceMappingsResult result = awsLambda.listEventSourceMappings(requestList);
List<EventSourceMappingConfiguration> eventSourceMappings = result.getEventSourceMappings();
for (EventSourceMappingConfiguration eventLambda : eventSourceMappings) {
    strUUID = eventLambda.getUUID();
}
System.out.println("Output UUID " + strUUID);
We have to use the ARN of the SQS queue that is the trigger of the AWS Lambda. The retrieved strUUID can then be passed to .withUUID() in the update request from the question.

Multiple SecurityScheme for swagger/open-api

I am facing a problem with Swagger UI 3 and the generated schema.yml file.
Currently, I have a configuration similar to this one:
#SecurityScheme(name = "security-oauth", type = SecuritySchemeType.OAUTH2,
flows = #OAuthFlows(
authorizationCode = #OAuthFlow(
authorizationUrl = "${authUrl}",
tokenUrl = "${tokenUrl}",
scopes = {}
)
)
)
This works as intended. Now I want to add another @SecurityScheme so that I can also pass a cookie to the FE (an Angular app) and get it back. However, the Swagger UI and the generation of the schema.yml file fail.
@SecuritySchemes({
        @SecurityScheme(name = "security-oauth", type = SecuritySchemeType.OAUTH2,
                flows = @OAuthFlows(
                        authorizationCode = @OAuthFlow(
                                authorizationUrl = "${authUrl}",
                                tokenUrl = "${tokenUrl}",
                                scopes = {}
                        )
                )
        )
})
With the annotation like this, the schema.yml is missing the securitySchemes part:
securitySchemes:
  security-oauth:
    type: oauth2
    flows:
      authorizationCode:
Am I missing something, or is there a bug in the generation where it does not properly handle the @SecuritySchemes annotation?
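For reference, a cookie-based scheme is usually modeled as an API key read from a cookie. A hedged sketch of what the second scheme might look like (the scheme name cookieAuth and the cookie name JSESSIONID are illustrative, not from the question):

```java
@SecurityScheme(name = "cookieAuth", type = SecuritySchemeType.APIKEY,
        in = SecuritySchemeIn.COOKIE, paramName = "JSESSIONID")
```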

How to configure the @CosmosDBTrigger using Java?

I'm setting up @CosmosDBTrigger and need help with the below code; also, what needs to go in the name field?
I'm using the below tech stack:
JDK 1.8.0-211
Apache Maven 3.5.3
Azure CLI 2.0.71
.NET Core 2.2.401
Java:
public class Function {
    @FunctionName("CosmosTrigger")
    public void membershipProfileTrigger(
            @CosmosDBTrigger(name = "?", databaseName = "*database_name*",
                    collectionName = "*collection_name*",
                    leaseCollectionName = "leases",
                    createLeaseCollectionIfNotExists = true,
                    connectionStringSetting = "DBConnection") String[] items,
            final ExecutionContext context) {
        context.getLogger().info("item(s) changed");
    }
}
What do we need to provide in the name field?
local.settings.json
{
  "IsEncrypted": false,
  "Values": {
    "DBConnection": "AccountEndpoint=*Account_Endpoint*"
  }
}
Expected: function starts
Result:
"Microsoft.Azure.WebJobs.Host: Error indexing method 'Functions.Cosmostrigger'. Microsoft.Azure.WebJobs.Extensions.CosmosDB: Cannot create Collection Information for collection_name in database database_name with lease leases in database database_name : Unexpected character encountered while parsing value: <. Path '', line 0, position 0. Newtonsoft.Json: Unexpected character encountered while parsing value: <. Path '', line 0, position 0."
Follow this: https://github.com/microsoft/inventory-hub-java-on-azure/blob/master/function-apps/Notify-Inventory/src/main/java/org/inventory/hub/NotifyInventoryUpdate.java
@CosmosDBTrigger(name = "document", databaseName = "db1", collectionName = "col1", connectionStringSetting = "dbstr", leaseCollectionName = "lease1", createLeaseCollectionIfNotExists = true) String document,
Now, when you publish, put the value for dbstr as your connection string in the Application Settings of the Azure portal; after setting the properties, just restart.
See the official samples here: https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-cosmosdb-v2#trigger---java-example
name is just an identifier for your function. The error you are getting is because you are telling the trigger that the collection you want to monitor for changes is called "collection_name" and that it is inside a database called "database_name".
Please use the real, correct values for them; they should point to an existing collection. Your connection string DBConnection also needs to be in the correct format: AccountEndpoint=https://<your-account-name>.documents.azure.com:443/;AccountKey=<your-account-key>; (you can get it from the Azure Portal).
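Putting that together, a local.settings.json with the full connection-string format might look like this (account name and key are placeholders):

```json
{
  "IsEncrypted": false,
  "Values": {
    "DBConnection": "AccountEndpoint=https://<your-account-name>.documents.azure.com:443/;AccountKey=<your-account-key>;"
  }
}
```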

NotSerializableException using Publish Over SSH in Jenkinsfile

I'm trying to use the Publish Over SSH plugin inside a Jenkinsfile. However, I'm getting a java.io.NotSerializableException in the createClient method. This is my code:
def publish_ssh = Jenkins.getInstance().getDescriptor("jenkins.plugins.publish_over_ssh.BapSshPublisherPlugin")
def hostConfiguration = publish_ssh.getConfiguration("${env.DSV_DEPLOY_SERVER}");
if( hostConfiguration == null )
{
currentBuild.rawBuild.result = Result.ABORTED
throw new hudson.AbortException("Configuration for ${env.DSV_DEPLOY_SERVER} not found.")
}
def buildInfo = hostConfiguration.createDummyBuildInfo();
def sshClient = hostConfiguration.createClient( buildInfo, new BapSshTransfer(
env.SOURCE_FILE,
null,
env.DSV_DEPLOY_REMOTE_DIR,
env.REMOVE_PREFIX,
false,
false,
env.DSV_DEPLOY_COMMAND,
env.DSV_DEPLOY_TIMEOUT as Integer,
false,
false,
false,
null
));
How can I get rid of the exception?
It is because some variables are not serializable.
From the docs:
Since pipelines must survive Jenkins restarts, the state of the running program is periodically saved to disk so it can be resumed later (saves occur after every step or in the middle of steps such as sh).
You may use the @NonCPS annotation to do the creation:
@NonCPS
def createSSHClient() {
    // your code here.
}
