[ 🇬🇧 English | 🇷🇺 Русский ]
⚠️ **Disclaimer**

This project is a portfolio / demo test automation framework created for educational and showcase purposes. It is not an official Todoist project and has no affiliation with the Todoist team. The test coverage is intentionally limited and focuses on demonstrating architecture, tooling, and automation approaches rather than full product coverage.
Multi-layer test automation framework for the Todoist app, covering API, Web UI, and Android layers.
The repository is designed as a multi-module Gradle project where each module represents an isolated test layer, while sharing common architectural principles, configuration patterns, and reporting approach.
+ Owner library
+ Datafaker library
+ Allure Report – HTTP templates · maintained by me
+ Allure Notifications
- **Multi-layer test architecture.** The same product is covered consistently across API, Web UI, and Android layers, with each layer implemented as an isolated Gradle module and evolving independently.
- **Strict layer isolation.** API, Web, and Mobile test layers do not depend on each other at the code level. Shared concepts are aligned by convention, not by shared implementation, which prevents cross-layer coupling.
- **Infrastructure-first design.** Each module is built around a dedicated infrastructure layer responsible for initialization, configuration resolution, lifecycle handling, and diagnostics. Test classes focus on scenarios rather than setup logic.
- **Unified project structure across layers.** Despite platform differences, all modules follow the same internal structure (configs, helpers, base setup, domain-oriented tests), reducing cognitive overhead when switching between layers.
- **Environment-agnostic test logic.** All platform- and environment-specific behavior (local vs remote, emulator vs real device, credentials) is resolved via configuration. Test scenarios contain no branching logic based on the execution environment.
- **Typed configuration management.** Configuration is handled via typed interfaces instead of raw property access, making misconfiguration visible early and minimizing runtime surprises (see the sketch after this list).
- **Consistent reporting and diagnostics.** All modules follow the same Allure reporting strategy and lifecycle, producing predictable diagnostics (screenshots, logs, artifacts) regardless of platform.
- **Unified Allure aggregation across modules.** All test layers write raw Allure results into a shared root directory, enabling a single aggregated report for API, Web, and Mobile runs without post-processing or result merging.
- **Automated report publishing and notifications.** Aggregated Allure reports are generated as static artifacts and can be automatically published or sent via Telegram notifications, making test results accessible outside the CI environment.
- **Explicit execution model.** Each module exposes a single, explicit execution entry point, allowing local runs and CI pipelines to compose test executions without special-case handling.
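Typed configuration is built on the Owner library. A minimal sketch (written here in Kotlin) of what such an interface could look like; the interface name, keys, and the `${platform}.properties` layout are assumptions for illustration, not the project's actual files:

```kotlin
import org.aeonbits.owner.Config
import org.aeonbits.owner.ConfigFactory

// Illustrative only: interface name, keys and the ${platform}.properties layout
// are assumptions, not copied from this repository.
@Config.Sources("classpath:config/\${platform}.properties")
interface PlatformConfig : Config {

    @Config.Key("baseUrl")
    fun baseUrl(): String

    @Config.Key("device.name")
    @Config.DefaultValue("Pixel 7")
    fun deviceName(): String
}

// Resolved once in the infrastructure layer; tests only consume typed values.
val platformConfig: PlatformConfig =
    ConfigFactory.create(PlatformConfig::class.java, System.getProperties())
```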
```
.
├── api/               # Backend API test module
├── web/               # Web UI test module
├── mobile-android/    # Android UI test module
│
├── notifications/     # Allure report notification configs (Telegram)
│
├── build.gradle.kts   # Root Gradle orchestrator:
│                      #   - unified Allure aggregation
│                      #   - report / serve tasks
│                      #   - notification integration
└── settings.gradle.kts
```
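The unified Allure aggregation can be driven entirely from the root build.gradle.kts. A minimal Kotlin DSL sketch of one possible wiring; the directory name and exact mechanism are assumptions, not the repository's actual build logic:

```kotlin
// Root build.gradle.kts (illustrative sketch).
// Point every module's Allure writer at one shared root-level results directory,
// so a single aggregated report can be generated without merging results.
subprojects {
    tasks.withType<Test>().configureEach {
        systemProperty(
            "allure.results.directory",
            rootProject.layout.buildDirectory.dir("allure-results").get().asFile.absolutePath
        )
    }
}
```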
All modules:
- use the same build conventions
- expose a single execution entry point
- encapsulate platform-specific logic inside the module boundary
The root project acts as an orchestration layer and does not contain tests.
Each test layer can be executed independently or composed into a combined run using Gradle tasks.
```
./gradlew clean test -Dplatform=emul-and   # for emulator
./gradlew clean test -Dplatform=real-and   # for real device
```

Runs all test modules and produces a single aggregated Allure results set.
API tests
./gradlew clean :api:test
Web UI tests
./gradlew clean :web:test
Android UI tests
Emulator
./gradlew clean :mobile-android:test -Dplatform=emul-and
Real connected device
./gradlew clean :mobile-android:test -Dplatform=real-and
BrowserStack
The project is BrowserStack-ready.
To run tests on BrowserStack Android:
- A properly signed `.apk` must be uploaded to the service before execution.
- The application ID must be specified in `bs-and.properties`: `browserstack.app=bs://<app-id>`

```
./gradlew :mobile-android:test -Dplatform=bs-and -DBROWSERSTACK_USER=<your_userName> -DBROWSERSTACK_KEY=<your_accessKey>
```
Each module resolves its own configuration and execution environment independently.
./gradlew allureReport
Generates a static aggregated Allure report from shared raw results.
Report output directory: `build/reports/allure-report/allureReport`
This report can be archived, published, or used for external notifications.
./gradlew allureServe
Starts a local HTTP server and opens the aggregated Allure report in the browser.
./gradlew sendAllureTelegram
Sends a summary notification with key metrics to a configured Telegram channel.
When the following command is executed, the tests run remotely on Selenoid.

```
clean :api:${TASK_API} :web:${TASK_WEB}
  -DremoteUrl=<selenoid_url>
  -Dbrowser=${BROWSER}
  -DbrowserVersion=${BROWSER_VERSION}
  -DbrowserSize=${BROWSER_SIZE}
  allureReport
```
Build parameters:
- `SELENOID_URL` – Selenoid endpoint URL. Default: `https://(username):(password)@selenoid.autotests.cloud/wd/hub`
- `BROWSER` – browser used for test execution. Default: `chrome`
- `BROWSER_VERSION` – browser version used for test execution. Default: `128.0`
- `BROWSER_SIZE` – browser window size. Default: `1920x1080`
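As an illustration of how these parameters might be consumed on the Web layer, here is a hedged Kotlin sketch assuming Selenide is used as the WebDriver wrapper (the README does not state this, and the function name is made up):

```kotlin
import com.codeborne.selenide.Configuration
import org.openqa.selenium.remote.DesiredCapabilities

// Hypothetical wiring of the remote-run parameters into Selenide configuration.
fun configureRemoteWebRun() {
    Configuration.browser = System.getProperty("browser", "chrome")
    Configuration.browserVersion = System.getProperty("browserVersion", "128.0")
    Configuration.browserSize = System.getProperty("browserSize", "1920x1080")

    // remoteUrl is passed only for Selenoid runs; local runs simply omit it.
    System.getProperty("remoteUrl")?.let { Configuration.remote = it }

    // Ask Selenoid to provide VNC and video recording for each session.
    Configuration.browserCapabilities = DesiredCapabilities().apply {
        setCapability("selenoid:options", mapOf("enableVNC" to true, "enableVideo" to true))
    }
}
```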
The project is executed via a Jenkins job, which serves as the primary CI entry point for running automated tests.
The job is responsible for triggering test execution, passing runtime parameters, and orchestrating reporting and post-processing steps.
A unified Allure report is generated after test execution and aggregates results from all test layers.
The report provides a consolidated view of API, Web, and Mobile test executions, including detailed steps, attachments, and diagnostics.
The overview section displays overall execution statistics and test distribution across layers.
Mobile test details contain step-level execution information and attachments (including video).
API test details provide request/response visibility and execution context for backend validation based on custom HTTP templates.
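Assuming the API layer uses REST Assured (not stated explicitly here), custom HTTP templates are typically plugged in through the Allure filter. A minimal sketch with placeholder template names:

```kotlin
import io.qameta.allure.restassured.AllureRestAssured
import io.restassured.RestAssured

// Illustrative setup: attach the Allure REST Assured filter with custom
// FreeMarker templates so request/response details land in the report.
// Template file names are placeholders.
fun enableAllureHttpLogging() {
    RestAssured.filters(
        AllureRestAssured()
            .setRequestTemplate("request.ftl")
            .setResponseTemplate("response.ftl")
    )
}
```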
The Jenkins build pipeline is integrated with Allure TestOps for centralized test execution analysis and reporting.
Automated test execution results are published to Allure TestOps, where they are aggregated and visualized. The Allure TestOps dashboard provides execution statistics, historical trends, and overall test run visibility.
The dashboard presents a high-level view of test execution status, trends, and quality metrics across multiple runs.
Run details provide granular information about a specific execution, including individual test results and related metadata.
Allure TestOps is integrated with Jira to establish traceability between automated tests and tracked work items.
Test cases and execution results are linked to Jira issues, enabling correlation between test coverage, execution status, and requirements.
After test execution is completed, the generated Allure report is processed by the Allure Notifications tool and sent to a configured Telegram channel.
The notification contains a summary of the test run and provides access to the generated Allure report, making execution results available outside the CI environment.
Each UI test execution is accompanied by a video recording captured by Selenoid.
The video is attached to the corresponding test case in the report and can be used for failure analysis and execution review.
Each Mobile UI test execution includes a video recording captured during the Appium session.
The recording is attached to the test report and can be used to analyze application behavior and UI interactions during execution.
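One possible shape of that flow, assuming the standard Appium screen-recording API and the Allure Java API; the helper names below are invented for illustration:

```kotlin
import io.appium.java_client.screenrecording.CanRecordScreen
import io.qameta.allure.Allure
import java.io.ByteArrayInputStream
import java.util.Base64

// Hypothetical helpers: start recording when the session opens and attach the
// resulting video to the Allure report when the test finishes.
fun startVideo(driver: CanRecordScreen) {
    driver.startRecordingScreen()
}

fun attachVideo(driver: CanRecordScreen) {
    val base64Video = driver.stopRecordingScreen()
    Allure.addAttachment(
        "Video",
        "video/mp4",
        ByteArrayInputStream(Base64.getDecoder().decode(base64Video)),
        "mp4"
    )
}
```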