
feat(nimbus): Improve desktop targeting tests #14081

Closed

yashikakhurana wants to merge 10 commits into main from 14073

Conversation

@yashikakhurana (Contributor) commented Nov 27, 2025

Because

The desktop targeting tests were slow due to:

  • Creating experiments via GraphQL for each targeting configuration
  • Reading filter_expression.js from disk for each test
  • Opening about:blank for each parameterized test iteration
  • Awaiting TelemetryEnvironment and ExperimentAPI initialization for each evaluation
  • Running targeting evaluations sequentially

This commit

  • Eliminates experiment creation - Tests targeting expressions directly without creating experiments via GraphQL
  • Exposes targeting expressions via GraphQL - Added targeting field to the existing nimbusConfig.targetingConfigs query
  • Parallelizes targeting evaluation - Uses Promise.all to evaluate all targeting expressions concurrently in a single browser script execution
  • Caches filter_expression.js path - Uses @functools.cache decorator to avoid repeated disk reads
  • Opens about:blank once per module - Changed to scope="module" autouse fixture instead of per-test
  • Initializes modules once - TelemetryEnvironment, ExperimentAPI, and other Firefox modules are imported and initialized only once at the script level
  • Consolidates test runs - Changed from parameterized tests (running sequentially) to batch evaluation (running in parallel)

Fixes #14073

@yashikakhurana yashikakhurana marked this pull request as ready for review November 27, 2025 17:39
Comment on lines 46 to 50
if (arguments.length === 1) {
// Mode 1: Initialization - run once at test setup
const [callback] = arguments;
initialize().then(callback).catch(() => callback(false));
} else {
Member:
We should keep it simple and just have one mode of operation.

return path.absolute()


@pytest.fixture(scope="function", autouse=True)
Member:
we want scope="module" to run this code once.

Contributor Author:
actually just realised that the selenium fixture is function-scoped, so we can't use module scope for setup_browser

Comment on lines 51 to 75
# Create experiments for all targeting configs
for idx, slug in enumerate(targeting_slugs):
test_slug = f"{experiment_slug}_{idx}_{slug}"
default_data_api["targetingConfigSlug"] = slug
helpers.create_experiment(
test_slug,
BaseExperimentApplications.FIREFOX_DESKTOP.value,
default_data_api,
targeting=slug,
)
logging.info(f"Created experiment: {test_slug}")
# Delay to prevent race conditions when creating many experiments
time.sleep(1)

experiment_data = helpers.load_experiment_data(test_slug)
targeting = experiment_data["data"]["experimentBySlug"]["jexlTargetingExpression"]
recipe = experiment_data["data"]["experimentBySlug"]["recipeJson"]

targeting_tests.append(
{
"slug": slug,
"targeting": targeting,
"recipe": json.dumps({"experiment": recipe}),
}
)
Member:
We should not create any experiments and instead use a mapping of targeting slug to targeting expression. We may be able to just import nimbus.targeting.constants directly here

This is the real bottleneck of this test.
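The reviewer's suggestion might look roughly like this. The mapping below is a stand-in for whatever nimbus.targeting.constants actually exports; both the dict contents and the helper name are invented for illustration.

```python
# Hedged sketch: map targeting slugs straight to their JEXL expressions
# instead of creating an experiment per slug over GraphQL. The dict is a
# placeholder for importing nimbus.targeting.constants directly.
TARGETING_CONFIGS = {
    "no_targeting": "true",
    "first_run": "isFirstStartup",  # illustrative expressions only
}


def build_targeting_tests(configs):
    """Turn the slug -> expression mapping into the batch payload that the
    browser script evaluates in one pass."""
    return [{"slug": slug, "targeting": expr} for slug, expr in configs.items()]
```

With this shape, the per-slug experiment creation loop (and its one-second sleeps) disappears entirely, which is the O(n) cost the reviewer identifies.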

const { ASRouterTargeting } = ChromeUtils.importESModule("resource:///modules/asrouter/ASRouterTargeting.sys.mjs");
const { ExperimentAPI } = ChromeUtils.importESModule("resource://nimbus/ExperimentAPI.sys.mjs");
const { TargetingContext } = ChromeUtils.importESModule("resource://messaging-system/targeting/Targeting.sys.mjs");
// Initialize modules once
Member:
Remove this comment. These modules are not being initialized, they are being imported. (a la regular import in js)

await TelemetryEnvironment.onInitialized();
await ExperimentAPI.ready();

const [testsJson, callback] = arguments;
Member:
You have to set this outside the main function, because arguments is a special variable that refers to the arguments passed to the current function, so inside this function it will be [].

callback(result);
})
.catch(err => {
main().catch(() => {
Member:
Suggested change
-    main().catch(() => {
+    const [targeting, callback] = arguments;
+    main(targeting, callback).catch(() => callback(null));
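On the Python/Selenium side, the point of the suggestion is that the wrapper script must destructure Selenium's script arguments at the top level, not inside main(). A hedged sketch of that wrapping; build_targeting_script is an invented helper, not code from this PR:

```python
def build_targeting_script(main_body):
    """Wrap a JS body so Selenium's script arguments are read at top level.

    Inside main(), `arguments` would refer to main's own (empty) argument
    list, so the destructuring has to happen outside it, as the reviewer's
    suggested change shows.
    """
    return (
        "const [targeting, callback] = arguments;\n"
        + main_body
        + "\nmain(targeting, callback).catch(() => callback(null));"
    )


# With Selenium this would be invoked roughly as (not runnable here):
# selenium.execute_async_script(build_targeting_script(body), targeting_json)
```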

Comment on lines +27 to +31
@pytest.fixture(scope="function", autouse=True)
def setup_browser(selenium):
"""Open about:blank once per test function."""
selenium.get("about:blank")
yield
Member:
I'm actually curious if this test works without this function.

"""Test that audience fields generate valid targeting expressions."""
# These tests verify the targeting expression generation logic
# without needing to create actual experiments
recipe = {
Member:
This test actually doesn't work like this. The old test was testing that Experimenter was generating valid targeting expressions based on recipe configurations. This updated test does not do that since it does not actually pass any targeting.

We should revert most of this test to the old method so we're actually testing what comes out of experimenter. O(1) experiments should be fine -- the real bottleneck was the O(n) experiments made because of n advanced targetings.

@yashikakhurana (Contributor Author):
One of the tests requires actually creating experiments, while the other can skip it. I'm going to test this more and will reopen the PR. @freshstrangemusic, I will address the feedback you shared.


Development

Successfully merging this pull request may close these issues.

Improve Desktop targeting test runtime
