2.0 Development

2.1 Development Mode

It is recommended to let the operator know when you are running it for testing purposes; among other things, this skips AWS support case creation. To do so, set the FORCE_DEV_MODE environment variable to local in the operator's environment. The make deploy-* targets described below handle this for you.
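If you launch the operator binary by hand rather than through the make targets, you can set the variable yourself; a minimal sketch (the manual invocation itself is up to you):

```shell
# What the make deploy-* targets set for you; only needed when starting
# the operator process manually.
export FORCE_DEV_MODE=local
echo "dev mode: $FORCE_DEV_MODE"
```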

2.2 Operator Install

The operator can be installed into various cluster and pseudo-cluster environments. Depending on which you choose, you can run in local mode or in cluster mode. Local mode is known to work in a Minishift or CodeReady Containers (CRC) cluster and in a private OpenShift cluster; cluster mode is known to work in a real OpenShift Dedicated cluster.

Note: You can try to mix and match; it might work.

Both local and cluster modes share predeployment steps. These can be done via make predeploy, which requires your AWS credentials.

First update your AWS credentials using temporary STS tokens:

./hack/scripts/update_aws_credentials.sh

Then deploy the operator prerequisites. You must be logged into the cluster as an administrator, or otherwise have permissions to create namespaces and deploy CRDs. For Minishift or CRC:

oc login -u kubeadmin
make predeploy

This does the following:

  • Ensures existence of the namespace in which the operator will run.
  • Installs the AWS credentials secret from your ~/.aws/credentials file (osd-staging-2 profile).
  • Installs the operator's Custom Resource Definitions.
  • Creates an initially zero-size AccountPool CR.

Important: Temporary credentials from rh-aws-saml-login expire after a few hours. If you encounter authentication errors:

  1. Re-run ./hack/scripts/update_aws_credentials.sh to refresh credentials
  2. Re-deploy the credentials secret: make deploy-aws-account-operator-credentials

Predeployment only needs to be done once per cluster, unless you are modifying the above artifacts or your credentials have expired.

2.2.1 Local Mode

"Local" mode differs from production in the following ways:

  • AWS support case management is skipped. Your Accounts will get an artificial case number.
  • Metrics are served from your local system at http://localhost:8080/metrics
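The local metrics endpoint can be spot-checked from the same machine; a small sketch that records whether it is reachable (the URL is from the list above; the /tmp path is arbitrary):

```shell
# Probe the locally served metrics endpoint without failing the script.
if curl -sf --max-time 2 http://localhost:8080/metrics -o /tmp/aao-metrics.txt; then
  METRICS_STATUS=up
  head -n 3 /tmp/aao-metrics.txt   # show a few sample metric lines
else
  METRICS_STATUS=down              # operator not running locally (or curl missing)
fi
echo "metrics endpoint: $METRICS_STATUS"
```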

On a local cluster, after predeploying, running

make deploy-local

invokes the operator-sdk executable in local mode with the FORCE_DEV_MODE=local environment variable set.

2.2.2 Cluster Mode

In "cluster" development mode, as in local mode, AWS support case management is skipped. However, metrics are served from within the cluster just as they are in a production deployment.

Once logged into the cluster, after predeploying, running

make deploy-cluster

will do the following:

  • Create the necessary service accounts, cluster roles, and cluster role bindings.
  • Create the operator Deployment, including FORCE_DEV_MODE=cluster in the environment of the operator's container.

Note: make deploy-cluster will deploy the development image created by the make build target. As you iterate, you will need to make build and make push each time before you make deploy-cluster.
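The build-push-deploy iteration above can be scripted; a hedged sketch (the deploy_iteration helper and the DRY_RUN switch are hypothetical conveniences; the make targets are the ones named above):

```shell
# Run the build/push/deploy-cluster loop; with DRY_RUN=1, just print each step.
deploy_iteration() {
  for target in build push deploy-cluster; do
    if [ "${DRY_RUN:-0}" = "1" ]; then
      echo "would run: make $target"
    else
      make "$target" || return 1   # stop the loop on the first failure
    fi
  done
}

DRY_RUN=1 deploy_iteration
```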

As with local mode, you must be logged into the cluster as an administrator, or otherwise have permissions to create namespaces and deploy CRDs.

2.3 Testing

To run the test suite defined within the Makefile against your cluster, run:

make test-all

make test-all combines a number of test suites, making it easier to validate the state of the cluster. If preferred, individual test suites can be run, such as:

make test-account-creation 
make test-ccs 
make test-reuse
etc.

2.4 Using the integration-test bootstrap script to run tests

The integration-test bootstrap script serves as an entrypoint for running integration tests for different flow profiles. For more information, see the integration-test documentation.

Integration Test Prerequisites

  • Check for core prerequisites.
  • Update AWS credentials using rh-aws-saml-login:
    ./hack/scripts/update_aws_credentials.sh
  • Environment Variables: Setup .envrc file in root folder as per documentation. Required variables:
    export AWS_PAGER=
    export FORCE_DEV_MODE=local
    
    # Your personal account
    export OSD_STAGING_2_AWS_ACCOUNT_ID=<your-assigned-account-id>
    
    # Shared team constants (get from team lead/documentation)
    export OSD_STAGING_1_OU_ROOT_ID=<shared-ou-root-id>
    export OSD_STAGING_1_OU_BASE_ID=<shared-ou-base-id>
    
    # STS roles
    export STS_ROLE_ARN=arn:aws:iam::<YOUR_ACCOUNT_ID>:role/AccessRole
    export STS_JUMP_ARN=arn:aws:iam::<SHARED_ACCOUNT_ID>:role/JumpRole
    export STS_JUMP_ROLE=arn:aws:iam::<SHARED_ACCOUNT_ID>:role/JumpRole
    export SUPPORT_JUMP_ROLE=arn:aws:iam::<SHARED_ACCOUNT_ID>:role/JumpRole
    
    # Optional: For integration tests
    export OSD_STAGING_1_AWS_ACCOUNT_ID=<SHARED_ACCOUNT_ID>
    Note: The OU IDs refer to the shared osd-staging-1 account's organization structure and are the same for all developers. OSD_STAGING_1_AWS_ACCOUNT_ID is optional; set it to the shared account ID for integration tests. OPERATOR_ACCESS_KEY_ID and OPERATOR_SECRET_ACCESS_KEY are no longer required as environment variables; credentials are managed via the ~/.aws/credentials file.
  • Ensure the command-line dependencies jq, awscli, and python3 are installed.
  • Ensure rh-aws-saml-login is installed and configured.
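A quick pre-flight check for the .envrc variables can help before running the entrypoints; a hypothetical helper using the variable names from the list above (AWS_PAGER is excluded because it is intentionally empty):

```shell
# Report which required .envrc variables are not exported.
missing=0
for v in FORCE_DEV_MODE OSD_STAGING_2_AWS_ACCOUNT_ID OSD_STAGING_1_OU_ROOT_ID \
         OSD_STAGING_1_OU_BASE_ID STS_ROLE_ARN STS_JUMP_ARN STS_JUMP_ROLE \
         SUPPORT_JUMP_ROLE; do
  if [ -z "$(printenv "$v")" ]; then
    echo "unset: $v"
    missing=$((missing + 1))
  fi
done
echo "unset required variables: $missing"
```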

2.4.1 Testing on a local crc cluster

  • Log in to the crc cluster as kubeadmin.
  • From the root folder of the AAO repository, run make local-ci-entrypoint

2.4.2 Testing on a local osd stage cluster

  • Log in to the osd cluster via backplane.
  • From the root folder of the AAO repository, run make stage-ci-entrypoint