This repository contains the main backend services of the RPL 3.0 system, composed of the following components:
- RPL Users API
- RPL Activities API
- RabbitMQ message broker
To contribute:

- Fork the repo
- Create a branch with one of the following prefixes:
  - `feature/*`: if you are developing a new feature.
  - `bug/*`: if you are fixing a bug.
  - `chore/*`: if you are working on another kind of task.
- When the code is ready and has been properly tested, create a descriptive Pull Request, assign a reviewer, and wait for feedback.
The repo has Continuous Deployment, so everything merged to main is deployed to production. After merging to main, follow the GitHub Action run to make sure the deployment was successful.
If you want to test a branch in the production environment, you can create a test/* branch; this will trigger a deployment to production. Be careful while using test branches, because that code will go to prod 😄.
Both APIs and their tests can be run inside a VSCode devcontainer for ease of use and reproducibility (requires Docker, VSCode, and the Dev Containers extension).
Once inside the devcontainer:
First make sure that the MySQL container is running (the instance from `.devcontainer/metaservices.dev.yml` should start automatically). Additionally, start the RabbitMQ container (from `metaservices.local.yml`) if you want to use any submission-related endpoint of the Activities API.

```shell
# Run all tests
python -m pytest
```
```shell
# Run the Users API (from the VSCode terminal)
fastapi run rpl_users/src/main.py --reload --port 9000

# Run the Activities API (from the VSCode terminal)
fastapi run rpl_activities/src/main.py --reload --port 9001
```

These ports are exposed so that you can access the APIs at `localhost:9000` and `localhost:9001`.
For integration testing, and for ease of use while trying patches on the whole system (compared to the PROD environment via Minikube), you can run the RPL 3.0 backend services locally using Docker Compose. This is the most straightforward setup for local development and testing.
- Ask a maintainer for a basic schema dump of the MySQL database and documentation on how to set it up for the metaservices image.
- The `metaservices.local.yml` compose file contains multiple MySQL services, since we needed them for migration purposes. You should only need the latest one. If you have any questions regarding the compose services, feel free to ask a maintainer.
- Both APIs require `.env` files, which should be placed within the `rpl_users` and `rpl_activities` directories, following the format of the example files. Ask a maintainer for the variables' values.
- You can modify the `Dockerfile` files for the APIs, replacing the `CMD` statement to get automatic reloads whenever you change the code (see the Dockerfiles for details).
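As an illustrative sketch of that `CMD` swap (the exact module paths and ports live in the actual Dockerfiles — the lines below are an assumption, not a copy):

```dockerfile
# Production-style command (sketch): no auto-reload.
# CMD ["fastapi", "run", "rpl_users/src/main.py", "--port", "9000"]

# Development variant (sketch): add --reload so code changes restart the server.
CMD ["fastapi", "run", "rpl_users/src/main.py", "--reload", "--port", "9000"]
```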
- Run the metaservices compose (MySQL with the tables previously loaded from the basic schema dump, and RabbitMQ):

```shell
docker compose -f metaservices.local.yml up -d --build
```

- Run the compose for the RPL Users API and RPL Activities API:

```shell
docker compose -f docker-compose.local.yml up -d --build
```

- Run the compose for the RPL-Runner (see the RPL Runner repository).
- Run the local setup for the frontend via `nvm` to enable automatic reload (see the RPL Frontend repository).
You can access the APIs via:

- `http://localhost:8000` for the RPL Users API
- `http://localhost:8001` for the RPL Activities API
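As a quick smoke test that the containers are actually listening on those ports (a minimal stdlib-only sketch; it only probes TCP connectivity, not the APIs themselves):

```python
# Probe the locally exposed API ports (ports taken from this README).
# Assumes the compose services are already up; prints which ports respond.
import socket


def is_listening(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


for name, port in [("RPL Users API", 8000), ("RPL Activities API", 8001)]:
    state = "up" if is_listening("localhost", port) else "not reachable"
    print(f"{name} on port {port}: {state}")
```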
To stop the services, you can run:

```shell
docker compose -f metaservices.local.yml down
docker compose -f docker-compose.local.yml down
```

For a stricter, production-like environment, you can use Minikube to run the entire system. This setup is more complex and requires quite a lot of configuration and resources (also, reloading system components becomes much more tedious), but it closely resembles the production environment.
- Follow prerequisites 1 and 2 from the previous section (Docker Compose).
- All environment variables are set in the Kubernetes files. For secrets, you must set their values individually. You can ask a maintainer for examples.
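For illustration only — the Secret name and keys below are hypothetical, so ask a maintainer for the real ones — an individually set Kubernetes Secret might look like:

```yaml
# Hypothetical Secret manifest; name and keys are examples, not the real ones.
apiVersion: v1
kind: Secret
metadata:
  name: rpl-api-secrets
type: Opaque
stringData:
  DB_PASSWORD: "<ask-a-maintainer>"
  JWT_SECRET: "<ask-a-maintainer>"
```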
- Start Minikube (and its dashboard if you want to monitor the cluster from the browser):

```shell
minikube start
minikube dashboard
```

- Start the `metaservices` compose (ONLY for the latest MySQL; comment out the queue service, since it is used directly from inside the cluster):

```shell
docker compose -f metaservices.local.yml up -d --build
```

- Start the Kubernetes service and deployment for the queue:

```shell
kubectl create -f kubernetes/deployments/queue.yaml
kubectl create -f kubernetes/services/queue.yaml
```

- Build the Docker images for the APIs and load them into Minikube:

```shell
docker build -t rpl-users-api:local . --file rpl_users/Dockerfile
docker build -t rpl-activities-api:local . --file rpl_activities/Dockerfile
minikube image load rpl-users-api:local
minikube image load rpl-activities-api:local
```

- Start the Kubernetes services and deployments for the APIs:

```shell
kubectl create -f kubernetes/deployments/rpl_users_api.yaml
kubectl create -f kubernetes/services/rpl_users_api.yaml
kubectl create -f kubernetes/deployments/rpl_activities_api.yaml
kubectl create -f kubernetes/services/rpl_activities_api.yaml
```

- You can follow the logs from the dashboard or by using:

```shell
kubectl get pods
kubectl logs <pod_name> --follow
```

Now you can proceed with the instructions in both the Runner and the Frontend repositories for this particular setup.
To stop the services, you can run `kubectl delete -f <path_to_kubernetes_file>` for each of the deployments and services you created, or remove all of them at once (WARNING: this will stop ALL deployments/services within the namespace):

```shell
kubectl delete --all deployments --namespace=default
kubectl delete --all services --namespace=default
```