How does Karate run tests in parallel? - question - java

I have a question :) How does Karate work when it comes to parallel execution?
We use Karate in a specific way where, under the hood, we have a bunch of Java code. I'm curious whether variables declared as ThreadLocal in our Java code are really thread-safe.
Is each test run in a separate thread, or is there another way of running tests simultaneously?
The problem we faced is that it looks like at least two of the running tests have access to a ThreadLocal variable that should be isolated from each other.
Could you explain?

If you use a ThreadLocal you are on your own :) The short answer is that Karate will create a thread-pool for Scenario execution, and each Scenario can go onto any of these threads. Maybe you need to read this section on how you can force some Scenario-s to run in sequence - but you may still have problems if the ones that ran first do not "clean up".
EDIT: I also think it should NOT be possible for 2 Scenarios to be on the same thread at the same time, and if you are seeing this - it is a bug, so please help us replicate it (see the last line of my answer).
Karate's multi-threading is battle-tested, and we would not be able to claim the Gatling integration works if this were not the case. Maybe you should just trust Karate to do all the heavy lifting you need? For example, look at "hooks": https://github.com/intuit/karate#hooks
And of course, if you really do think there's an issue, follow this process please: https://github.com/intuit/karate/wiki/How-to-Submit-an-Issue
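
For context, a parallel Karate runner typically looks something like the sketch below (the feature path and thread count are placeholders, and the exact Runner builder API may differ between Karate versions); the key point is that each Scenario is dispatched onto whichever pool thread is free:

```java
import com.intuit.karate.Results;
import com.intuit.karate.Runner;
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;

class ParallelRunnerTest {

    @Test
    void testParallel() {
        // Karate creates a pool of 5 threads and dispatches each Scenario
        // onto whichever thread is free - so any ThreadLocal state your
        // Java code keeps must be set up and torn down per Scenario.
        Results results = Runner.path("classpath:features").parallel(5);
        assertEquals(0, results.getFailCount(), results.getErrorMessages());
    }
}
```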

Related

Can we change the order of execution of cucumber scenarios?

I want to execute test cases in the order below.
Example: execution starts with the 3rd scenario of the 2nd feature, then the 4th scenario of the 6th feature, and so on.
Can we do this customization using TestNG, Cucumber options, Java, or any other tool?
Is it possible using hooks or the CLI option --order?
You could probably achieve this with the use of tags above the line that has the Scenario keyword. Then execute multiple test runs, using the tag options as parameters passed to the build as mentioned here.
However, this approach does seem a little odd. If the reason for doing this is to force an order because tests leave cached data in the running application and depend on state set up by an earlier test, then this is not good practice. Each Scenario should be completely runnable in isolation, without relying on a prior one to set up state.
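
For illustration, a minimal JUnit runner sketch of the tag approach (the tag name @batch1, the features path, and the class name are made up, and the annotation signatures vary between cucumber-jvm versions):

```java
import io.cucumber.junit.Cucumber;
import io.cucumber.junit.CucumberOptions;
import org.junit.runner.RunWith;

// Runs only the scenarios tagged @batch1; a second runner (or a
// -Dcucumber.filter.tags override on the build) would run @batch2,
// and so on, which gives a coarse-grained execution order.
@RunWith(Cucumber.class)
@CucumberOptions(
        features = "src/test/resources/features",
        tags = "@batch1")
public class Batch1Runner {
}
```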

How to run a scheduled task on a single OpenShift pod only?

Story: in my Java code I have a few ScheduledFuture tasks that I need to run every day at a specific time (15:00, for example). The only resources available to me are a database, my current application, and OpenShift with multiple pods. I can't move this code out of my application and must run it from there.
Problem: the ScheduledFuture runs on every pod, but I need it to run only once a day. I have a few ideas, but I don't know how to implement them.
Idea #1:
Set an environment variable on a specific pod; then I will be able to check whether this variable exists (and its value), read it, and run the scheduled task if required. I know I run a risk if that pod goes away, but it's better not to run the scheduled task at all than to run it multiple times. (See the sketch after this question.)
Idea #2:
Determine a leader pod somehow; this seems to be a bad idea in my case, since it always has the "split-brain" problem.
Idea #3 (a bit off-topic):
Create my own synchronization algorithm through the database. To be fair, this is the simplest way for me, since I'm a programmer and not an SRE. I understand it's not the best one, though.
Idea #4 (a bit off-topic):
Just use the Quartz scheduler library. I personally don't really like that option and would prefer one of the first two ideas (if I'm able to implement them), but at the moment it seems like my only valid choice.
UPD: Maybe you have some other suggestions, or a warning that I shouldn't ever do this?
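
For what it's worth, a minimal sketch of idea #1 (the RUN_SCHEDULED_TASKS variable name is made up; the deployment config would set it on exactly one pod):

```java
import java.time.Duration;
import java.time.LocalDateTime;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ConditionalScheduler {

    public static void main(String[] args) {
        // Only the pod whose deployment sets this (made-up) variable
        // schedules the daily task; all other pods skip scheduling.
        if (!"true".equals(System.getenv("RUN_SCHEDULED_TASKS"))) {
            return;
        }
        LocalDateTime now = LocalDateTime.now();
        LocalDateTime next = now.toLocalDate().atTime(15, 0);
        if (!next.isAfter(now)) {
            next = next.plusDays(1); // 15:00 already passed today
        }
        long initialDelay = Duration.between(now, next).getSeconds();

        ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
        ses.scheduleAtFixedRate(
                () -> System.out.println("running daily task"),
                initialDelay,
                TimeUnit.DAYS.toSeconds(1), // repeat every 24 hours
                TimeUnit.SECONDS);
    }
}
```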
I would suggest using a ready-made solution. Getting these things right, especially covering all possible corner cases with respect to reliability, is hard. If you do not want to use Quartz, I would at least suggest a database-backed solution. Postgres, for example, has SELECT ... FOR UPDATE SKIP LOCKED (scroll down to the section "The Locking Clause"), which may be used to implement run-once scheduling.
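
As a rough JDBC sketch of the SKIP LOCKED idea (the scheduled_job table, its columns, and the job name are assumptions; any schema with one lockable row per job works):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.time.LocalDate;

public class DailyJobClaimer {

    // Assumes a table like:
    //   CREATE TABLE scheduled_job (name text PRIMARY KEY, last_run date);
    // Every pod calls this at 15:00. Only the pod that wins the row lock
    // runs the task; pods that find the row locked (or already updated
    // for today) return false and do nothing.
    public static boolean tryRunDailyJob(Connection conn, Runnable task) throws SQLException {
        conn.setAutoCommit(false);
        try (PreparedStatement select = conn.prepareStatement(
                "SELECT last_run FROM scheduled_job "
                        + "WHERE name = 'daily-task' AND last_run < ? "
                        + "FOR UPDATE SKIP LOCKED")) {
            select.setObject(1, LocalDate.now());
            try (ResultSet rs = select.executeQuery()) {
                if (!rs.next()) { // locked by another pod, or already run today
                    conn.rollback();
                    return false;
                }
            }
            task.run(); // we hold the lock: runs exactly once per day
            try (PreparedStatement update = conn.prepareStatement(
                    "UPDATE scheduled_job SET last_run = ? WHERE name = 'daily-task'")) {
                update.setObject(1, LocalDate.now());
                update.executeUpdate();
            }
            conn.commit();
            return true;
        } catch (SQLException | RuntimeException e) {
            conn.rollback();
            throw e;
        }
    }
}
```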
You can create a cron job using OpenShift:
https://docs.openshift.com/container-platform/4.7/nodes/jobs/nodes-nodes-jobs.html
and have this job trigger an endpoint in your application that invokes your logic.
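
A minimal sketch of the application side using the JDK's built-in HttpServer (the port and the /internal/run-daily-task path are placeholders); the OpenShift CronJob would then call this endpoint through the service once a day, and since the service routes each request to a single pod, the task runs once:

```java
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;

public class DailyTaskEndpoint {

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        // The CronJob hits this path via the service; the service routes
        // the request to exactly one pod, so the task runs exactly once.
        server.createContext("/internal/run-daily-task", exchange -> {
            runDailyTask();
            byte[] body = "OK".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();
    }

    static void runDailyTask() {
        // the logic previously driven by ScheduledFuture goes here
    }
}
```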

Running two parallel custom receiver streams with Spark Streaming in a local test setup

I am currently learning how to use Spark Streaming and would like to experiment with joining parallel streams. For this purpose I would like to set up two parallel streams with custom Receiver classes that just generate random numbers. So far I have everything set up.
However, there seems to be a problem with running two receiver instances of the same custom Receiver class on one Spark context. When I run only one, everything works perfectly. As soon as I wire in the second one, there seems to be some infinite-loop error: the symptom is that I no longer get any output. For better understanding I put a simple example showing the problem on GitHub.
If you clone the project, everything should work fine. Just uncomment line 18 in Application.java and you should see that the output of the print call is gone. This is either a bug in Spark Streaming, or my understanding of how the library works is not good enough to use it properly. Either way, I hope there are some experts here who are able to help me with this issue.
*headdesk*
Luckily there is a "Related" feature on Stack Overflow. I just found my solution by looking through the "Related" threads: the accepted answer to the following thread also solves the problem described here.
So the solution is that in a local setup, if you use the master URL "local[2]", you only get 2 worker threads, and both are occupied by the custom receivers in this setup. To get a third thread for processing, you need to use the master URL "local[3]".
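
A minimal sketch of the fixed setup (the receiver internals are simplified to a random-number generator, as in the question; class names are made up): with two receivers each permanently pinning one worker thread, "local[3]" leaves a third thread free for processing.

```java
import org.apache.spark.SparkConf;
import org.apache.spark.storage.StorageLevel;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.receiver.Receiver;

import java.util.Random;

public class TwoReceiversApp {

    // Minimal custom receiver that emits random ints on its own thread.
    static class RandomReceiver extends Receiver<Integer> {
        RandomReceiver() {
            super(StorageLevel.MEMORY_ONLY());
        }

        @Override
        public void onStart() {
            new Thread(() -> {
                Random rnd = new Random();
                while (!isStopped()) {
                    store(rnd.nextInt(100));
                    try {
                        Thread.sleep(100);
                    } catch (InterruptedException e) {
                        return;
                    }
                }
            }).start();
        }

        @Override
        public void onStop() { }
    }

    public static void main(String[] args) throws InterruptedException {
        // Each receiver occupies one worker thread for its whole lifetime,
        // so with two receivers "local[2]" leaves no thread for processing.
        SparkConf conf = new SparkConf().setMaster("local[3]").setAppName("two-receivers");
        JavaStreamingContext ssc = new JavaStreamingContext(conf, Durations.seconds(1));

        JavaDStream<Integer> s1 = ssc.receiverStream(new RandomReceiver());
        JavaDStream<Integer> s2 = ssc.receiverStream(new RandomReceiver());

        s1.union(s2).print(); // silently produces nothing under local[2]

        ssc.start();
        ssc.awaitTermination();
    }
}
```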

Best way to setup fixture data for JBehave tests

Let's say we have multiple tests like this:
Scenario: trader is not alerted below threshold
Given a stock of symbol STK1 and a threshold of 10.0
When the stock is traded at 5.0
Then the alert status should be OFF
but the twist is that all the "Given" setup has to be done before any tests are run. What would be the best way to do this?
Check http://jbehave.org/reference/stable/story-syntax.html for Lifecycle: steps in story files. But watch out: those are executed before/after EACH scenario.
There are @BeforeStory and @AfterStory annotations as well; they are quite obvious to use, but you may want to check the documentation: http://jbehave.org/reference/stable/annotations.html
However, you may feel that it would be better to have everything in your story file... As far as I know, you cannot define a step in the story file that runs once before executing the story.
I also ran into the "lack" of this feature before, but I think the reason it was not implemented is that it would not fit the BDD approach: scenarios should be independently executable and understandable. These steps are often related to some kind of environment preparation (global to all the scenarios), which is not important for human readers (stakeholders) and thus should not be part of the "user story description" that the .story file is supposed to be.
If there is no major performance issue, I have found that, from a readability point of view, it is better to run this setup/teardown before each and every scenario. If you use the Lifecycle: steps, you won't end up with duplication, and it will be clear to any human reader what steps are needed to execute the test just by reading the story file. But that's just my opinion.
So I think you have these options:
run your steps before/after each scenario by using Lifecycle: steps
use the @BeforeStory and @AfterStory annotations (see the sketch after this list)
define a dummy scenario with your setup/teardown steps as the first/last scenario of your story file, and name it so that it is clear to everybody that it is just a "technical" scenario. By default the order of the scenarios is fixed, so this might do the trick. (hackish...)
write steps in the Lifecycle: part and hack their implementation so that they are executed only once per story (even more hackish)
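
A minimal sketch of the annotation option (class and method names are made up; the Given pattern matches the scenario quoted above):

```java
import org.jbehave.core.annotations.AfterStory;
import org.jbehave.core.annotations.BeforeStory;
import org.jbehave.core.annotations.Given;

public class TradingSteps {

    @BeforeStory
    public void setUpFixtures() {
        // runs ONCE before each story (not before each scenario):
        // load reference data, start the service under test, etc.
    }

    @Given("a stock of symbol $symbol and a threshold of $threshold")
    public void givenAStock(String symbol, double threshold) {
        // per-scenario setup stays in ordinary Given steps
    }

    @AfterStory
    public void tearDownFixtures() {
        // clean up whatever setUpFixtures created
    }
}
```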

Does Maven Surefire execute test cases sequentially by default?

This is a follow-up to this question, which I realized when I dug deeper into my research:
Is it reasonable to suppose that the Maven Surefire plugin executes test cases sequentially by default, i.e. a test case ends before the next one starts (I'm not interested in order)? I found that you can configure Surefire to run in parallel; does that mean sequential execution is the default behaviour, and will it likely remain so in the future?
NB: In case you were wondering why I would want to force tests to run sequentially (I know, good tests should be able to run in parallel), it is because I'm working on a solution to a specific problem involving coverage of a web application. You can read about it here.
Thank you
The answer to your question involves speculating about the future, which is usually difficult. Having said that, I'd guess that yes, it is going to remain the default behaviour, because parallel execution of tests makes sense only for perfectly isolated tests, with all external dependencies mocked or otherwise taken care of. That is sometimes hard to achieve, especially when writing tests for old code. In such cases the decision must be left to the programmer, who alone knows whether it makes sense to employ parallelism.
