feat(nimbus): Improve desktop targeting tests #14081
yashikakhurana wants to merge 10 commits into main
Conversation
```js
if (arguments.length === 1) {
  // Mode 1: Initialization - run once at test setup
  const [callback] = arguments;
  initialize().then(callback).catch(() => callback(false));
} else {
```
We should keep it simple and just have one mode of operation.
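A minimal sketch of what a single-mode entry point could look like (the names `main` and `entryPoint` are hypothetical, and `main`'s body is stubbed):

```javascript
// Hypothetical single-mode entry point: no branching on arguments.length.
// main() is a stub standing in for the real initialize-and-evaluate logic.
async function main(testsJson) {
  const tests = JSON.parse(testsJson);
  return tests.length; // stub result
}

function entryPoint(testsJson, callback) {
  // One calling convention only: payload plus completion callback.
  main(testsJson).then(callback).catch(() => callback(false));
}
```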
```python
return path.absolute()
```

```python
@pytest.fixture(scope="function", autouse=True)
```
We want `scope="module"` so this code runs once.
Actually, I just realised that the `selenium` fixture is function-scoped, so we can't use module scope for `setup_browser`.
```python
# Create experiments for all targeting configs
for idx, slug in enumerate(targeting_slugs):
    test_slug = f"{experiment_slug}_{idx}_{slug}"
    default_data_api["targetingConfigSlug"] = slug
    helpers.create_experiment(
        test_slug,
        BaseExperimentApplications.FIREFOX_DESKTOP.value,
        default_data_api,
        targeting=slug,
    )
    logging.info(f"Created experiment: {test_slug}")
    # Delay to prevent race conditions when creating many experiments
    time.sleep(1)

    experiment_data = helpers.load_experiment_data(test_slug)
    targeting = experiment_data["data"]["experimentBySlug"]["jexlTargetingExpression"]
    recipe = experiment_data["data"]["experimentBySlug"]["recipeJson"]

    targeting_tests.append(
        {
            "slug": slug,
            "targeting": targeting,
            "recipe": json.dumps({"experiment": recipe}),
        }
    )
```
We should not create any experiments and instead use a mapping of targeting slug to targeting expression. We may be able to just import `nimbus.targeting.constants` directly here.
This is the real bottleneck of this test.
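The suggestion could look roughly like this sketch (the mapping contents and the helper name are made up for illustration; the real expressions would come from `nimbus.targeting.constants`):

```javascript
// Hypothetical slug -> JEXL expression mapping standing in for the real
// targeting constants; no experiments are created at all.
const TARGETING_CONFIGS = {
  no_targeting: "true",
  first_run: "isFirstStartup",
};

// Build the same test-case shape as before, straight from the mapping.
function buildTargetingTests(configs) {
  return Object.entries(configs).map(([slug, targeting]) => ({ slug, targeting }));
}
```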
```js
const { ASRouterTargeting } = ChromeUtils.importESModule("resource:///modules/asrouter/ASRouterTargeting.sys.mjs");
const { ExperimentAPI } = ChromeUtils.importESModule("resource://nimbus/ExperimentAPI.sys.mjs");
const { TargetingContext } = ChromeUtils.importESModule("resource://messaging-system/targeting/Targeting.sys.mjs");
// Initialize modules once
```
Remove this comment. These modules are not being initialized, they are being imported (à la a regular `import` in JS).
```js
await TelemetryEnvironment.onInitialized();
await ExperimentAPI.ready();

const [testsJson, callback] = arguments;
```
You have to set this outside the main function because `arguments` is a special binding that refers to the arguments passed to the current function, i.e. inside this function it will be `[]`.
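The pitfall described here can be seen in a standalone snippet: every (non-arrow) function gets its own `arguments` object, so reading it inside an inner function never sees the outer call's arguments.

```javascript
function outer() {
  const outerCount = arguments.length; // arguments passed to outer()
  function inner() {
    // This is inner()'s own arguments object, not outer()'s.
    return arguments.length;
  }
  return { outerCount, innerCount: inner() };
}

const seen = outer("targeting", "callback");
// seen.outerCount is 2; seen.innerCount is 0
```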
```js
callback(result);
})
.catch(err => {
main().catch(() => {
```

Suggested change:

```diff
-main().catch(() => {
+const [targeting, callback] = arguments;
+main(targeting, callback).catch(() => callback(null));
```
```python
@pytest.fixture(scope="function", autouse=True)
def setup_browser(selenium):
    """Open about:blank once per test function."""
    selenium.get("about:blank")
    yield
```
I'm actually curious if this test works without this function.
| """Test that audience fields generate valid targeting expressions.""" | ||
| # These tests verify the targeting expression generation logic | ||
| # without needing to create actual experiments | ||
| recipe = { |
This test actually doesn't work like this. The old test verified that Experimenter was generating valid targeting expressions based on recipe configurations. This updated test does not do that, since it does not actually pass any targeting.
We should revert most of this test to the old method so we're actually testing what comes out of Experimenter. O(1) experiments should be fine -- the real bottleneck was the O(n) experiments created because of n advanced targetings.
There is an issue: one test needs the actual creation of experiments, while for the other we can skip it. I'm going to test this more and will open it up again. @freshstrangemusic I will address the feedback you shared.
Because
The desktop targeting tests were slow due to:
This commit:
- adds a `targeting` field to the existing `nimbusConfig.targetingConfigs` query
- uses `Promise.all` to evaluate all targeting expressions concurrently in a single browser script execution
- uses the `@functools.cache` decorator to avoid repeated disk reads
- uses a `scope="module"` autouse fixture instead of a per-test one

Fixes #14073
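The `Promise.all` change described in the summary can be sketched roughly like this (`evaluateTargeting` is a hypothetical stand-in for the real per-expression evaluator):

```javascript
// Evaluate every targeting expression concurrently in one pass,
// rather than one browser round-trip per expression.
async function evaluateAll(tests, evaluateTargeting) {
  return Promise.all(
    tests.map(async ({ slug, targeting }) => ({
      slug,
      result: await evaluateTargeting(targeting),
    }))
  );
}
```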