This repository contains the new space and resource reservation platform for City of Helsinki, colloquially known as "Varaamo". In Varaamo, citizens of Helsinki can make reservations for spaces and resources owned by the City of Helsinki.
This project replaces the old Varaamo platform. For more detailed information, please refer to the Tilavarauspalvelu page in Confluence (accessible to the City of Helsinki organization only).
These instructions will set you up for local development, with the backend running in Docker and the frontend running locally. If you want to run the backend without Docker, see the backend section.
- Docker
- GNU parallel (for running codegen without watch mode)
  - Ubuntu: `sudo apt-get install parallel`
  - Mac: `brew install parallel`
- Node version manager
  - Ubuntu: https://github.com/nvm-sh/nvm
  - Mac: `brew install nvm`
  - Windows: https://github.com/coreybutler/nvm-windows
- Make
  - Ubuntu: `sudo apt-get install make`
  - Mac: `brew install make`
  - Windows: `choco install make` (using Chocolatey)
- Copy `backend/.env.example` to `backend/.env`.

  ```shell
  cp backend/.env.example backend/.env
  ```

- Build and run the backend with Docker.

  ```shell
  make run
  ```

You should now be able to open the Django admin panel at localhost:8000/admin/. The GraphQL endpoint is at localhost:8000/graphql/.
To generate test data, follow the steps below.

- Connect to the running container.

  ```shell
  make bash
  ```

- Generate test data.

  ```shell
  make generate
  ```

- Install the correct Node version.

  ```shell
  nvm use
  ```

- Install pnpm if not installed through an OS package manager.

  ```shell
  npm install -g pnpm
  ```

- Install dependencies.

  ```shell
  cd frontend
  pnpm i
  ```

- Add pre-commit hooks.

  ```shell
  # IMPORTANT: run in the repo root
  pnpm husky frontend/.husky
  ```

- Copy `.env.example` to `.env.local` for both apps.

  ```shell
  cp frontend/apps/customer/.env.example frontend/apps/customer/.env.local
  cp frontend/apps/staff/.env.example frontend/apps/staff/.env.local
  ```

- Run codegen.

  For non-Windows users using GNU parallel:

  ```shell
  cd frontend
  pnpm codegen
  ```

  Without GNU parallel (e.g. Windows users):

  ```shell
  make codegen
  ```

- Start the frontend.

  ```shell
  cd frontend
  pnpm dev
  ```

You should now be able to open the customer frontend at localhost:3000 and the staff frontend at localhost:3001/kasittely.
- PostgreSQL (with the PostGIS extension) for database needs
- Redis for in-memory caching
- Celery for scheduling and background task handling
- Poetry for dependency management
- Django as the web framework
- Graphene as the GraphQL framework
- Authentication with Helsinki Tunnistus
- Profile data from Helsinki Profile
- Opening hours from Aukiolosovellus
- Unit information from Toimipisterekisteri
- Payments are handled by Helsinki Web Shop
- Access codes handled by Pindora
- Emails are sent using relay.hel.fi
These instructions will set up the backend for local development without Docker. This is mainly for backend developers, as it requires more dependencies and setup.
Windows users:
Some of the dependencies used by the project are not available for Windows. We recommend using WSL2 running Ubuntu for local development.
Requirements:
- CPython (check `pyproject.toml` for the version)
- Poetry (latest version)
- PostgreSQL (with the PostGIS extension) (version 13 or newer)
- Redis (version 7 or newer)
- GDAL (version compatible with Django; check their documentation for more info)
  - Ubuntu: `sudo apt-get install gdal-bin`
  - Mac: `brew install gdal`
- gettext
  - Ubuntu: `sudo apt-get install gettext`
  - Mac: `brew install gettext`
Installation instructions for dependencies will vary based on your OS and can change over time, so please refer to the official documentation for each dependency on how to set them up correctly.
Now, follow the steps below in the backend directory.
- Copy `.env.example` to `.env`.

  ```shell
  cp .env.example .env
  ```

  This file contains environment variables used by the project. You can modify these to suit your local development environment.

- Copy `local_settings_example.py` to `local_settings.py`.

  ```shell
  cp local_settings_example.py local_settings.py
  ```

  These can be used to modify settings for local development without changing the main settings file.

- Create a virtual environment & install dependencies.

  ```shell
  poetry install
  ```

- Add pre-commit hooks.

  ```shell
  poetry run pre-commit install
  ```

- Run migrations.

  ```shell
  poetry run python manage.py migrate
  ```

- Generate test data.

  ```shell
  poetry run python manage.py create_test_data
  ```

- Start the server.

  ```shell
  poetry run python manage.py runserver localhost:8000
  ```

The backend should now be running at localhost:8000.
Since the backend is not in the root of the project, the source path is not correct for linting out of the box. Fixing this is specific to your IDE, but here are some setups for popular ones:
- PyCharm: Right-click the `backend` folder, select "Mark Directory as", and then select "Sources Root".
- VSCode: In your workspace settings (`.vscode/settings.json`), add the following:

  ```json
  {
    "python.analysis.extraPaths": ["${workspaceFolder}/backend"]
  }
  ```
It's recommended to set up Ruff linting and formatting support in your editor.
Tests are run with pytest.
Some flags that can save time when running tests:
- To skip slow-running tests: `pytest --skip-slow`
- To retain the test database between runs: `pytest --reuse-db`
- To skip migration checks at the start of tests: `pytest --no-migrations`
- To run tests in parallel: `pytest -n 8 --dist=loadscope` (= 8 cores; use `-n auto` to use all available cores)
You can use a pytest.ini file to set up flags for local development.
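For example, a local `pytest.ini` combining the flags above might look like this (an illustrative sketch; `--skip-slow` is a flag defined by this project, while `--reuse-db` and `--no-migrations` come from pytest-django):

```ini
; pytest.ini -- hypothetical local development defaults
[pytest]
addopts = --skip-slow --reuse-db --no-migrations
```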
Dependencies are managed by Poetry. Normally, they are automatically updated by dependabot without any manual intervention (given updates don't fail any automated tests).
However, if you want to update them manually, you should first check all outdated dependencies by running:
```shell
poetry show -o
```

Then pin the exact new versions of all outdated dependencies in the `pyproject.toml` file. Next, create a new lock file:

```shell
poetry lock
```

And finally, update to the new versions:

```shell
poetry update
```

Scheduled & background tasks are run with Celery.
When developing locally, you can run these tasks in a Celery worker with `make celery`. This uses the filesystem as the message broker. You'll need to create the queue and processed folders according to the `CELERY_QUEUE_FOLDER_OUT`, `CELERY_QUEUE_FOLDER_IN`, and `CELERY_PROCESSED_FOLDER` environment variables (see `.env.example`).
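As a sketch, creating the folders might look like this. The `./broker/` paths below are hypothetical; replace them with the actual values of `CELERY_QUEUE_FOLDER_OUT`, `CELERY_QUEUE_FOLDER_IN`, and `CELERY_PROCESSED_FOLDER` from your `.env` file:

```shell
# Hypothetical paths -- substitute the values from your .env file.
mkdir -p ./broker/queue ./broker/processed
```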
If you want to run background tasks synchronously without Celery, set the environment variable `CELERY_TASK_ALWAYS_EAGER` to `True`. Scheduled tasks still need the worker in order to run.
Authentication is handled by Helsinki Tunnistus Keycloak using the django-helusers library. You'll need to get the `TUNNISTAMO_ADMIN_SECRET` from the Azure Pipelines library or from a colleague and set it in your `.env` file.

For development, you can also use local user accounts. Generated test data includes a superuser named `tvp` with password `tvp`. You can log in with these credentials at the admin panel. Authentication in the application is managed using session cookies.
Static files are served by the Whitenoise package. These are all files that are not uploaded by the users in Django Admin pages.
Media files are served by the uWSGI static files implementation, offloaded to threads. These are all files uploaded by users in Django Admin pages.
In production, a Varnish cache is used for reservation unit and purpose images. When a new image is uploaded, existing images are removed from the cache.
In settings there are four configurations:

- `IMAGE_CACHE_ENABLED` = toggle caching on/off
- `IMAGE_CACHE_VARNISH_HOST` = Varnish hostname
- `IMAGE_CACHE_PURGE_KEY` = secret key for making purge requests
- `IMAGE_CACHE_HOST_HEADER` = `Host` header value in the purge request
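For local testing, the corresponding `.env` entries might look like this (all values below are placeholders, not real hosts or keys):

```
IMAGE_CACHE_ENABLED=True
IMAGE_CACHE_VARNISH_HOST=varnish.example.com
IMAGE_CACHE_PURGE_KEY=some-secret-key
IMAGE_CACHE_HOST_HEADER=varaamo.example.com
```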
Translations are handled by Django's built-in translation system. GitHub Actions CI will check that all translations are up-to-date during PRs.
To update translations, run `make translations`. This will add any missing translations to, and remove any obsolete translations from, the `.po` files located in the `locale` directory. After filling in the translations, run `make translate` to compile the `.po` files to `.mo` files. The `.mo` files will be used by Django to display translations. This compilation step is part of the Dockerfile build process, so you don't need to commit the `.mo` files.
For model field translations, we use django-modeltranslation. The package has integrations in all the relevant parts of the project (serializers, admin, etc.). See code for more details.
The `local_settings.py` file added during setup can be used to change settings locally that are not configurable using environment variables. See the documentation for django-environment-config for more details.
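As a sketch, overrides in `local_settings.py` might look like this. The settings shown are generic Django settings picked for illustration, not project-specific recommendations:

```python
# local_settings.py -- hypothetical example overrides for local development.
# Anything defined here is applied on top of the main settings module.
DEBUG = True

# Print outgoing emails to the console instead of using a real SMTP server.
EMAIL_BACKEND = "django.core.mail.backends.console.EmailBackend"
```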
- React as a frontend framework
- Next.js for SSR and routing
- Helsinki Design System for common React components
- Styled Components for styling (CSS) components
- React Hook Form for forms
- Zod for form schema validation
- Apollo GraphQL client for communicating with the backend GraphQL API
- Codegen GraphQL for generating typed queries
The frontend is divided into two different applications: one for customers and one for staff users. These live in the `apps/` directory, and code shared between them is in `packages/`. The frontend commands and dependencies are managed as a pnpm monorepo.
Codegen is used to generate TypeScript types from GraphQL queries. It needs to be run if either the backend schema or any frontend GQL query changes. Codegen uses the schema file `tilavaraus.graphql` in the repo root, crawls the frontend code for `gql` tagged strings, and then generates TypeScript types for them.
Update the GraphQL schema and types. This uses GNU parallel to update all apps:

```shell
cd frontend
pnpm codegen
```

Without GNU parallel (e.g. on Windows):

```shell
make codegen
```

Run in watch mode for all apps:

```shell
cd frontend
pnpm codegen:watch
```

Watch mode has some issues with changes in `packages/ui` not being propagated to the apps. Also, when switching branches, it might hit an unrecoverable error. In those cases, running `pnpm codegen` first fixes the issue.

Codegen can also be run for individual apps:

```shell
cd frontend
# customer ui
pnpm codegen:customer
# staff ui
pnpm codegen:staff
# common ui package
pnpm codegen:ui
```

Linting is done with four different tools:
- tsc for typechecking using the typescript compiler
- oxlint as a general linter
- prettier for formatting
- stylelint for CSS linting
All lints are run on CI, and if you enable pre-commit hooks, they are also run locally on modified files.
Typecheck all packages:

```shell
cd frontend
pnpm tsc:check
# if you need to remove caches
pnpm tsc:clean
```

Run oxlint:

```shell
cd frontend
pnpm lint
# automatic fixing
pnpm lint:fix
```

Run prettier on all files:

```shell
cd frontend
pnpm format
```

Run stylelint:

```shell
cd frontend
pnpm lint:css
```

Tests are run using vitest.
Locally, tests run in watch mode by default. Watch mode doesn't work properly with the monorepo, so it has to be run per package/app.
```shell
cd frontend

# Subshells keep each cd from affecting the next command.
# customer
(cd apps/customer && pnpm test)

# staff
(cd apps/staff && pnpm test)

# common
(cd packages/ui && pnpm test)
```

Disabling watch mode allows running all tests:

```shell
cd frontend
CI=true pnpm test
```

Frontend dependencies are managed using a pnpm monorepo. Dependabot will normally open pull requests for them.
To update packages manually, you can check outdated packages with:

```shell
cd frontend
pnpm outdated -r
```

To update a specific package:

```shell
cd frontend
# minor version
pnpm up -r {package_name}
# major version
pnpm up -r {package_name}@latest
```

Translations are done using the i18next package, which uses one `.json` file per translation namespace.
These are stored in `public/locales/` for both apps. Common translations need to be duplicated between the apps (i.e. components that are in `packages/ui/` require duplicated translations for both apps). Translations are loaded using next-i18next, which automatically picks the correct `.json` files to load per page.
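As an illustration, a namespace file such as `public/locales/fi/common.json` might look like this (the file name and keys here are hypothetical, not actual keys from the apps):

```json
{
  "navigation": {
    "frontPage": "Etusivu",
    "reservations": "Varaukseni"
  },
  "common": {
    "save": "Tallenna",
    "cancel": "Peruuta"
  }
}
```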
All available top-level scripts are listed in the root `package.json`. Most of them use turborepo to run the same command in all packages. They can be run with `pnpm {command}`.

Top-level commands are run in parallel for all packages (that contain that command) and output to standard output. Normally this is what you want, but if you have 100 lint errors in both apps, all the errors are going to be mixed together.
In these cases you can target commands to specific packages using the `--filter` flag. `{package_name}` is the subpath, e.g. `staff` for `apps/staff`.

```shell
cd frontend
# only that package
pnpm {command} --filter {package_name}
# only that package and its dependencies
pnpm {command} --filter {package_name}...
```

Turborepo uses aggressive caching for all commands. This can cause issues in situations where some files are read from cache. Typical cases are either during a rebase or stash popping.
Force a run without reading from the local cache:

```shell
cd frontend
pnpm {cmd} --force
```

Pluck GraphQL queries from the frontend code as `.graphql` files per app and store them in `/gql-pluck-output/`:

```shell
cd frontend
pnpm gql-pluck
```

Interactive tool to remove package caches and `node_modules`. Useful if other commands are not working (broken dependencies):

```shell
cd frontend
pnpm clean
```

Build and start the app in production mode. Useful for testing the production build locally:

```shell
cd frontend
pnpm build
pnpm start
```

Production builds (`pnpm build`) break local caches. If restarting the development server doesn't work, then:

```shell
cd frontend
rm -rf apps/customer/.next apps/customer/.turbo apps/staff/.next apps/staff/.turbo
```

If the command should be run inside a package:
- Add the command to the `package.json` of each individual package that needs it.
- Add the master command to `turbo.json`.
- Add `turbo $cmd` to `/package.json`.
- Run the command: `pnpm $cmd`

If the command is only needed in the root package:

- Add the `$cmd` directly to `/package.json`.
- Run the command: `pnpm $cmd`
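As a sketch of the per-package case, assume a hypothetical `typecheck` command: each package that supports it defines a `typecheck` script in its own `package.json`, and `turbo.json` then declares the task. Note that in Turborepo 2.x the top-level key is `tasks`, while older 1.x configs use `pipeline`:

```json
{
  "tasks": {
    "typecheck": {
      "dependsOn": ["^typecheck"],
      "outputs": []
    }
  }
}
```

With `"typecheck": "turbo typecheck"` added to the root `/package.json` scripts, `pnpm typecheck` then runs it across all packages.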
If the query is done on the server side (i.e. in `getServerSideProps`), you won't find it in the network tab.

This is probably an SSR error. These are not visible in the browser. Check the console logs in the terminal where `pnpm dev` is running.
Adding a new relation or a fragment to a GraphQL query often requires modifying the backend's allowed complexity for that endpoint. Find the `max_complexity` for that specific endpoint in the backend code and increase it by one until the query no longer errors. Remember to run the backend in watch mode, or run `make run` after each change. Max complexity is a security measure, but the default of 10 is low compared to the complexity of many of the frontend queries.