Install Armory Enterprise for Spinnaker in Kubernetes
Armory Enterprise requires a license. For more information, contact Armory.
Overview of installing Armory Enterprise for Spinnaker in Kubernetes
This guide describes the initial installation of Armory Enterprise in Kubernetes. You can choose between two different installation methods: Armory Operator or Halyard. By the end of this guide, you will have an instance of Armory Enterprise up and running on your Kubernetes cluster. The document does not fully cover the following:
- TLS Encryption
- Authentication/Authorization
- Add K8s accounts to deploy to
- Add cloud accounts to deploy to
See Next Steps for information related to these topics.
This document focuses on Armory Enterprise but can be adapted to install open source Spinnaker™ by using the open source Operator or a different Halyard container, along with a corresponding Spinnaker version.
Choosing an installation method
There are two recommended ways of installing Armory Enterprise: using the Armory Operator or using Halyard.
The Armory Operator is the newest installation and configuration method for Armory. Using the Operator, you can manage Armory entirely with Kubernetes manifest files. You treat Armory like any other Kubernetes application, running standard tools like kubectl, helm, and kustomize. You can even use an Armory pipeline to roll out configuration changes to Armory itself. The Operator runs a few “hot” validations before accepting a manifest into the cluster, preventing some configuration problems from affecting a running Armory installation.
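Because the Operator's input is just a manifest, you can layer configuration changes with standard tooling. For example, a minimal kustomize layout might look like the following (a sketch; the file names are illustrative, not part of this guide):

# kustomization.yaml (hypothetical layout)
resources:
- spinnakerservice.yml # the SpinnakerService manifest described later in this guide
patchesStrategicMerge:
- version-patch.yml # e.g. a small patch that only bumps spec.spinnakerConfig.config.version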
Prerequisites
- Your Kubernetes API Server is running version 1.13 or later.
- You have admin rights to install the Custom Resource Definition (CRD) for Operator.
- You can assign a ClusterRole to Operator. This means that Operator has access to all namespaces in the cluster. Operator can still run in an isolated namespace in Basic mode (not covered in this installation guide), but it will not be able to run admission validations.
General workflow
- Install Armory Operator CRDs cluster wide.
- Create a Kubernetes namespace for the Operator.
- Install the Operator in that namespace, using a ServiceAccount with a ClusterRole to access other namespaces.
- Create an S3 bucket for Armory to store persistent configuration.
- Create an IAM user that Armory will use to access the S3 bucket (or, alternatively, grant access to the bucket via IAM roles).
- Create a Kubernetes namespace for Armory.
- Install Armory in that namespace.
Halyard is the original installation method for Armory. It has been around the longest and is typically the first to support new Armory features. Operator uses a customized version of Halyard that is constantly updated to incorporate changes from base Halyard.
Visit the Armory-extended Halyard Release Notes page for a list of available Armory-extended Halyard versions and their release notes. You can view version differences as well as which versions are compatible with which Armory releases.
Prerequisites
- Your Kubernetes cluster has storage set up so that PersistentVolumeClaims properly allocate PersistentVolumes.
General workflow
- Create a Kubernetes namespace where you will run both Halyard and Armory.
- In the namespace, grant the default Kubernetes ServiceAccount the cluster-admin ClusterRole. This gives it full permissions within the namespace, but not on other namespaces; for details, see the Kubernetes RBAC documentation.
- In the namespace, create a PersistentVolumeClaim (PVC), which you use for persistent Armory cluster configuration (the “halyard configuration” or “halconfig”).
- In the namespace, create a StatefulSet to run Halyard. The PVC will be mounted to this StatefulSet.
- Halyard will use the default Kubernetes ServiceAccount to create and modify resources running in the cluster (the Kubernetes Secrets, Deployments, and Services that make up Armory).
- The Armory microservice called “Clouddriver”, which interacts with various clouds including Kubernetes, also uses the default ServiceAccount to interact with Kubernetes.
- Run Halyard as a Kubernetes Pod in the namespace (using a StatefulSet).
- Create an S3 bucket for Armory to store persistent configuration.
- Create an IAM user that Armory will use to access the S3 bucket (or, alternatively, grant access to the bucket via IAM roles).
- Run the hal client interactively in the Kubernetes Pod to:
  - Build out the hal config YAML file (.hal/config)
  - Configure Armory with the IAM credentials and bucket information
  - Turn on other recommended settings (artifacts and http artifact provider)
  - Install Armory
  - Expose Armory
Prerequisites for installing Armory
- Your Kubernetes cluster is up and running with at least 4 CPUs and 12 GB of memory. This is the bare minimum to install and run Armory; depending on your Armory workload, you may need more resources.
- You have kubectl installed and are able to access and create Kubernetes resources.
- You have access to an existing object storage bucket or the ability to create an object storage bucket (Amazon S3, Google GCS, Azure Storage, or Minio). For the initial version of this document, only Amazon S3 is used.
- You have access to an IAM role or user with access to the S3 bucket. If neither exists, you need to create one.
- Your cluster has either an existing Kubernetes Ingress controller or the permissions to install the NGINX Ingress Controller.
These instructions set up the Armory microservice called “Front50”, which stores Armory Application and Pipeline configuration to an object store, with the following permission:
- Front50 has full access to an S3 bucket through either an IAM user (with an AWS access key and secret access key) or an IAM role (attached to your Kubernetes cluster).
At the end of this guide, you have an Armory deployment that is:
- Accessible from your browser
- Able to deploy other Kubernetes resources to the namespace where it runs, but not to any other namespace
Configure application and pipeline configuration storage
The Armory microservice Front50 requires a backing store to store Armory Application and Pipeline definitions. There are a number of options for this:
- Amazon S3 Bucket
- Google Cloud Storage (GCS) Bucket
- Azure Storage Bucket
- Minio
- MySQL
You must set up a backing store for Armory to use for persistent application and pipeline configuration.
Using S3 for Front50
Armory (the Front50 service, specifically) needs access to an S3 bucket. There are a number of ways to achieve this.
This section describes how to do the following:
- Create an S3 bucket
- Configure access to the bucket:
- (Option 1) Add an IAM Policy to an IAM Role, granting access to the S3 bucket
- (Option 2) Create an IAM User with access to the S3 bucket
Creating an S3 bucket
If you do not already have an S3 bucket, create one.
By default, Armory stores all Armory information in a folder called front50 in your bucket. Optionally, you can specify a different directory; you might want to do this if you are using an existing or shared S3 bucket.
Perform the following steps:
- Log into the AWS Console (web UI).
- Navigate to the S3 Console. Click on Services > Storage > S3.
- Click on Create Bucket.
- Specify a globally unique name for this bucket in your AWS region of choice. If your organization has a standard naming convention, follow it. For its examples, this guide uses spinnaker-abcxyz.
- Click Next.
- Select the following two checkboxes:
- Keep all versions of an object in the same bucket
- Automatically encrypt objects when they are stored in S3
- Click Next.
- Do not add any additional permissions unless required by your organization. Click Next.
- Click Create bucket.
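If you prefer the AWS CLI over the console, the following sketch creates a roughly equivalent bucket, assuming the aws CLI is configured and using the example name and region from this guide:

# Create the bucket
aws s3api create-bucket \
--bucket spinnaker-abcxyz \
--region us-west-2 \
--create-bucket-configuration LocationConstraint=us-west-2

# Keep all versions of an object in the same bucket
aws s3api put-bucket-versioning \
--bucket spinnaker-abcxyz \
--versioning-configuration Status=Enabled

# Automatically encrypt objects when they are stored in S3
aws s3api put-bucket-encryption \
--bucket spinnaker-abcxyz \
--server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'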
(Option 1) S3 using the IAM Policy/Role
First, identify the IAM role attached to your Kubernetes nodes and attach an inline IAM policy to it. This grants access to your S3 bucket.
- Log into the AWS Console (Web UI).
- Navigate to EC2. Click on Services > Compute > EC2.
- Click on one of your Kubernetes nodes.
- In the bottom section, look for IAM role and click on the role.
- Click on Add inline policy.
- On the JSON tab, add the policy snippet shown after these steps.
- Click on Review Policy.
- Give your inline policy a name, such as s3-spinnaker-abcxyz.
- Click Create Policy.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::spinnaker-abcxyz",
        "arn:aws:s3:::spinnaker-abcxyz/*"
      ]
    }
  ]
}
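Alternatively, if you save the policy above to a file named policy.json, you can attach it from the AWS CLI (a sketch; <node-role-name> is the role you identified in the steps above):

aws iam put-role-policy \
--role-name <node-role-name> \
--policy-name s3-spinnaker-abcxyz \
--policy-document file://policy.json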
(Option 2) S3 using an IAM User
First, create the IAM user and grant it permissions on your bucket:
- Log into the AWS Console (Web UI).
- Navigate to the IAM Console. Click on Services > Security, Identity, & Compliance > IAM.
- Click on Users on the left.
- Click on Add user.
- Give your user a distinct name based on your organization's naming conventions. This guide uses spinnaker-abcxyz.
- Click on Programmatic access.
- For this guide, do not add a distinct policy to this user. Click on Next: Tags. You may receive a warning about how there are no policies attached to this user. You can ignore this warning.
- Optionally, add tags, then click on Next: Review.
- Click Create user.
- Save the Access Key ID and Secret Access Key. You need this information later during Halyard configuration.
- Click Close.
Then, add an inline policy to your IAM user:
- Click on your newly created IAM user.
- Click on Add inline policy (on the right).
- On the JSON tab, add the policy snippet shown after these steps.
- Click on Review Policy.
- Give your inline policy a name, for example s3-spinnaker-abcxyz.
- Click Create Policy.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::spinnaker-abcxyz",
        "arn:aws:s3:::spinnaker-abcxyz/*"
      ]
    }
  ]
}
In both policy snippets, replace spinnaker-abcxyz in the Resource ARNs with the name of your bucket.
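These console steps also have a CLI equivalent. A sketch, assuming the policy above is saved as policy.json:

# Create the user and an access key; save the returned key pair for later configuration
aws iam create-user --user-name spinnaker-abcxyz
aws iam create-access-key --user-name spinnaker-abcxyz

# Attach the inline bucket policy
aws iam put-user-policy \
--user-name spinnaker-abcxyz \
--policy-name s3-spinnaker-abcxyz \
--policy-document file://policy.json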
Connect to the Kubernetes cluster
You must be able to connect to the Kubernetes cluster with kubectl. Depending on the type of your Kubernetes cluster, there are a number of ways of achieving this.
Connecting to an AWS EKS cluster
If you use an AWS EKS cluster, you must be able to deploy resources to it. Before you start this section, make sure you have configured the aws CLI with credentials and a default region / availability zone. For more information, see the aws installation directions and configuration directions. Armory recommends using the V2 version of the AWS CLI.
If you have access to the role that created the EKS cluster, update your kubeconfig with access to the Kubernetes cluster using this command:
aws eks update-kubeconfig --name <EKS_CLUSTER_NAME>
From here, validate access to the cluster with this command:
kubectl get namespaces
The command returns the namespaces in the EKS cluster.
Connecting to other Kubernetes clusters
If you created a Kubernetes cluster on AWS with kops or another Kubernetes tool, ensure that you can communicate with the cluster using kubectl:
kubectl get namespaces
The command returns the namespaces in the cluster.
Install Armory using the Operator
Install CRDs and Operator
First, download the CRDs and manifests from the latest stable release.
$ bash -c 'curl -L https://github.com/armory-io/spinnaker-operator/releases/latest/download/manifests.tgz | tar -xz'
Install Armory CRDs:
$ kubectl apply -f deploy/crds/
Create a namespace for Operator. In this guide you use spinnaker-operator, but the namespace can have any name, provided that you update the namespace name in the role_binding.yaml file.
kubectl create namespace spinnaker-operator
Install Operator manifests:
$ kubectl apply -n spinnaker-operator -f deploy/operator/cluster
After installation, you can verify that Operator is running with the following command:
$ kubectl -n spinnaker-operator get pods
The command returns output similar to the following if the pod for Operator is running:
NAME                                  READY   STATUS    RESTARTS   AGE
spinnaker-operator-7cd659654b-4vktl   2/2     Running   0          6s
Install Armory
First, create the namespace where Armory will be installed. In this guide you use spinnaker, but it can have any name:
kubectl create namespace spinnaker
You define and configure Armory in a YAML file and use kubectl to create the service. Copy the contents below to a configuration file called spinnakerservice.yml. The code creates a Kubernetes ServiceAccount with permissions only to the namespace where Armory is installed. Applying this file creates a base Armory installation with one Kubernetes target account, which enables Armory to deploy to the same namespace where it is installed.
Note the values that you need to modify:
- Armory version: Use the version of Armory that you want to deploy, which can be found here.
- S3 bucket: Use the name of the S3 bucket created above.
- S3 region: Region where the S3 bucket is located.
- S3 accessKeyId: Optional; set when using IAM user credentials to authenticate to the S3 bucket.
- S3 secretAccessKey: Optional; set when using IAM user credentials to authenticate to the S3 bucket.
- metadata name: Change if you're installing Armory to a namespace other than spinnaker.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  creationTimestamp: null
  name: spin-role
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - services
  - endpoints
  - persistentvolumeclaims
  - events
  - configmaps
  - secrets
  - namespaces
  verbs:
  - '*'
- apiGroups:
  - batch
  - extensions
  resources:
  - jobs
  verbs:
  - '*'
- apiGroups:
  - apps
  - extensions
  resources:
  - deployments
  - daemonsets
  - replicasets
  - statefulsets
  verbs:
  - '*'
- apiGroups:
  - monitoring.coreos.com
  resources:
  - servicemonitors
  verbs:
  - get
  - create
- apiGroups:
  - apps
  resourceNames:
  - spinnaker-operator
  resources:
  - deployments/finalizers
  verbs:
  - update
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  verbs:
  - '*'
- apiGroups:
  - spinnaker.io
  resources:
  - '*'
  - spinnakeraccounts
  verbs:
  - '*'
- apiGroups:
  - spinnaker.armory.io
  resources:
  - '*'
  - spinnakerservices
  verbs:
  - '*'
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: spin-sa
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: spin-role-binding
subjects:
- kind: ServiceAccount
  name: spin-sa
roleRef:
  kind: Role
  name: spin-role
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: spinnaker.armory.io/v1alpha2
kind: SpinnakerService
metadata:
  name: spinnaker
spec:
  spinnakerConfig:
    config:
      version: 2.17.1 # Replace with desired version of Armory to deploy
      persistentStorage:
        persistentStoreType: s3
        s3:
          bucket: spinnaker-abcxyz # Replace with the name of the S3 bucket created previously
          region: us-west-2 # Replace with correct bucket's region
          accessKeyId: XYZ # (Optional, set only when using an IAM user to authenticate to the bucket instead of an IAM role)
          secretAccessKey: XYZ # (Optional, set only when using an IAM user to authenticate to the bucket instead of an IAM role)
          rootFolder: front50
      features:
        artifacts: true
      providers:
        kubernetes:
          accounts:
          - name: spinnaker
            cacheThreads: 1
            cachingPolicies: []
            configureImagePullSecrets: true
            customResources: []
            dockerRegistries: []
            kinds: []
            namespaces:
            - spinnaker # Name of the namespace where Armory is installed
            oAuthScopes: []
            omitKinds: []
            omitNamespaces: []
            onlySpinnakerManaged: false
            permissions: {}
            providerVersion: V2
            requiredGroupMembership: []
            serviceAccount: true
          enabled: true
          primaryAccount: spinnaker
    service-settings:
      clouddriver:
        kubernetes:
          serviceAccountName: spin-sa
Deploy the manifest with the following command:
kubectl -n spinnaker apply -f spinnakerservice.yml
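You can then watch the installation come up. A sketch (the spinsvc short name is assumed to be registered by the CRDs installed earlier):

# Watch the Armory service pods start
kubectl -n spinnaker get pods -w

# Check the status the Operator reports for the SpinnakerService
kubectl -n spinnaker get spinsvc spinnaker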
Install Armory using Halyard
Start the Halyard StatefulSet
Halyard is a Docker image used to install Armory. It generates Kubernetes manifests for each of the Armory services. This guide explains how to run it in a Kubernetes cluster as a StatefulSet with one (1) Pod.
First, create a namespace for Armory to run in (this can be any namespace):
kubectl create ns spinnaker
Create a file called halyard.yml that contains the following:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: spinnaker
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: default
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hal-pvc
  labels:
    app: halyard
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: halyard
spec:
  replicas: 1
  serviceName: halyard
  selector:
    matchLabels:
      app: halyard
  template:
    metadata:
      labels:
        app: halyard
    spec:
      containers:
      - name: halyard
        image: armory/halyard-armory:1.11.0
        volumeMounts:
        - name: hal
          mountPath: /home/spinnaker
        env:
        - name: HOME
          value: "/home/spinnaker"
      securityContext:
        runAsUser: 1000
        runAsGroup: 65535
        fsGroup: 65535
      volumes:
      - name: hal
        persistentVolumeClaim:
          claimName: hal-pvc
This Kubernetes manifest has three resources in it:
- A Kubernetes RoleBinding that grants the default ServiceAccount access to the full namespace. This is used by both Halyard and Clouddriver.
- A PersistentVolumeClaim used for persistent Halyard configuration.
- A StatefulSet using the PVC, which runs the Halyard Pod.
Apply (create) the resources in your namespace:
kubectl -n spinnaker apply -f halyard.yml
If you are using a different namespace, change spinnaker to that namespace.
Check the status of the pod and wait until all the pods are running:
kubectl -n spinnaker get pods
If you run into issues, such as the pods getting evicted, see the Kubernetes documentation for troubleshooting tips.
Access the Halyard container
The majority of tasks you need to perform are done from inside the Halyard container. Use the following command to access the container:
kubectl -n spinnaker exec -it halyard-0 bash
Once inside the container, customize the environment with a minimal .bashrc like this:
tee -a /home/spinnaker/.bashrc <<-EOF
export PS1="\h:\w \$ "
alias ll='ls -alh'
cd /home/spinnaker
EOF
source /home/spinnaker/.bashrc
Configure Armory to install in Kubernetes
Inside the container, use the Halyard hal command line tool to enable the Kubernetes cloud provider:
hal config provider kubernetes enable
Next, configure a Kubernetes account called spinnaker:
hal config provider kubernetes account add spinnaker \
--provider-version v2 \
--only-spinnaker-managed true \
--service-account true \
--namespaces spinnaker
# Update the 'namespaces' field with your namespace if using a different namespace
This command uses the ServiceAccount associated with Halyard and Clouddriver, the default service account in this case.
Once you create an account (with account add), you can edit it by running the command with edit instead of add. Use the same flags.
For example, if you need to support multiple namespaces, you can run the following:
hal config provider kubernetes account edit spinnaker \
--namespaces spinnaker,dev,stage,prod
# Make sure to include all namespaces you need to support
Important: These commands and parameters limit Armory to deploying to the spinnaker namespace. If you want to deploy to other namespaces, either add a second cloud provider target or grant the default service account in your namespace permissions on additional namespaces and change the --namespaces flag; see the sketch after this paragraph.
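For example, to grant the default ServiceAccount from the spinnaker namespace rights in an additional namespace such as dev, you can create a RoleBinding in that namespace (a sketch; the binding name and target namespace are illustrative):

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: spinnaker-dev # hypothetical name
  namespace: dev # the additional namespace to deploy into
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin # full rights within the dev namespace only
subjects:
- kind: ServiceAccount
  name: default
  namespace: spinnaker # the ServiceAccount Halyard and Clouddriver run as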
Use the Halyard hal command line tool to configure Halyard to install Armory in your Kubernetes cluster:
hal config deploy edit \
--type distributed \
--account-name spinnaker \
--location spinnaker
# Update the 'location' parameter with your namespace, if relevant
Enable and configure the ‘Artifact’ feature
Within Armory, ‘artifacts’ are consumable references to items that live outside of Armory, such as a file in a git repository or a file in an S3 bucket. The Artifacts feature must be explicitly turned on.
The following commands enable the “Artifacts” feature, the new Artifact UI, and the “http” artifact provider:
hal config features edit --artifacts true
hal config features edit --artifacts-rewrite true
hal config artifact http enable
Although enabling the new Artifacts UI is optional, Armory recommends using it for better user experience.
In order to add specific types of artifacts, additional configuration must be completed. For now, though, it is sufficient to turn on the Artifacts feature with the http artifact provider. This allows Armory to retrieve files over unauthenticated HTTP.
Configure Armory to access S3 with the IAM Role or User
Armory needs information about which bucket to access. Additionally, if you are using an IAM User to access the bucket, Armory needs credentials for the IAM User.
# Update these snippets with the information for your bucket
export BUCKET_NAME=spinnaker-abcxyz
export REGION=us-west-2

hal config storage s3 edit \
--bucket ${BUCKET_NAME} \
--region ${REGION} \
--no-validate

hal config storage edit --type s3
If you are using an IAM User, then provide Armory with the S3 credentials for your IAM User:
# Update this with the AWS Access Key ID
export ACCESS_KEY_ID=AKIAWWWWXXXXYYYYZZZZ
# This will prompt for the secret key
hal config storage s3 edit \
--access-key-id ${ACCESS_KEY_ID} \
--secret-access-key
By default, Halyard configures Armory to use the folder front50 in your S3 bucket. You can configure it to use a different folder with this command:
# Replace with the root folder within your bucket to use
ROOT_FOLDER=spinnaker_apps
hal config storage s3 edit \
--root-folder ${ROOT_FOLDER}
Set up Gate to listen on the /api/v1 path
The Armory microservice “Gate” serves as the API gateway for Armory. Configure it to listen on a specific path rather than requiring different hosts or ports to differentiate it from the UI of Armory.
Create these two files (you may have to create several directories):
Create the file /home/spinnaker/.hal/default/profiles/gate-local.yml:
server:
  servlet:
    context-path: /api/v1
Create the file /home/spinnaker/.hal/default/service-settings/gate.yml:
healthEndpoint: /api/v1/health
You can copy/paste this snippet to automatically create these two files:
mkdir -p /home/spinnaker/.hal/default/{profiles,service-settings}
tee /home/spinnaker/.hal/default/profiles/gate-local.yml <<-'EOF'
server:
  servlet:
    context-path: /api/v1
EOF
tee /home/spinnaker/.hal/default/service-settings/gate.yml <<-'EOF'
healthEndpoint: /api/v1/health
EOF
Select the Armory version to install
Before you use Armory-extended Halyard to install Armory, specify the version of Armory you want to use. Make sure the version of Armory you want to install is compatible with the version of Armory-extended Halyard you are using.
You can get a list of available versions of Armory with this command:
hal version list
- If you are installing Armory using Armory-extended Halyard, the command returns versions that start with 2.x.x.
- If you are installing open source Spinnaker and using open source Halyard (installed from gcr.io/spinnaker-marketplace/halyard:stable), the command returns versions that start with 1.x.x.
Select the version with the following:
# Replace with version of choice:
export VERSION=$(hal version latest -q)
echo ${VERSION}
hal config version edit --version ${VERSION}
Install Armory
Now that your halconfig is configured, you can install Armory:
hal deploy apply --wait-for-completion
Once this is complete, congratulations! Armory is installed. Keep going to learn how to access Armory.
Connect to Armory using kubectl port-forward
If you have kubectl on a local machine with access to your Kubernetes cluster, you can test the status of your Armory instance by doing a port-forward.
First, tell Armory about its local endpoint, localhost:8084/api/v1:
hal config security api edit --override-base-url http://localhost:8084/api/v1
hal deploy apply --wait-for-completion
Wait for the pods to reach a running state. Then, set up two port forwards, one for Gate (the API gateway) and one for Deck (the Armory UI):
NAMESPACE=spinnaker
kubectl -n ${NAMESPACE} port-forward svc/spin-deck 9000 &
kubectl -n ${NAMESPACE} port-forward svc/spin-gate 8084 &
Then, you can access Armory at http://localhost:9000.
If you are doing this on a remote machine, this does not work because your browser attempts to access localhost on your local workstation rather than on the remote machine where the port is forwarded.
Note: Even if the hal deploy apply command returns successfully, the installation may not be complete yet. This is especially the case with distributed Kubernetes installs. If you see errors such as Connection refused, the containers may not be available yet. Either wait or check the status of all of the containers using the command for your cloud provider (such as kubectl get pods --namespace spinnaker).
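Instead of polling, you can block until all pods report ready (a convenience sketch using kubectl wait):

kubectl -n spinnaker wait --for=condition=Ready pods --all --timeout=300s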
Ingress
There are several ways to expose Armory, but there are some basic requirements.
Given a domain name (or IP address) such as spinnaker.domain.com or 55.55.55.55, you should be able to:
- Reach the spin-deck service at the root of the domain (http://spinnaker.domain.com or http://55.55.55.55)
- Reach the spin-gate service at the /api/v1 path of the domain (http://spinnaker.domain.com/api/v1 or http://55.55.55.55/api/v1)
You can use either http or https, as long as you use the same for both. Additionally, you have to configure Armory to be aware of its endpoints.
The Install the NGINX ingress controller section details how to do that with the NGINX ingress controller.
Install the NGINX ingress controller
In order to expose Armory to end users, perform the following actions:
- Expose the spin-deck (UI) Kubernetes service on a URL endpoint
- Expose the spin-gate (API) Kubernetes service on a URL endpoint
- Update Armory to be aware of the new endpoints
If you already have an ingress controller, use that ingress controller instead. You can check for the existence of the NGINX Ingress Controller by running kubectl get ns and looking for a namespace called ingress-nginx. If the namespace exists, you likely already have an NGINX Ingress Controller running in your cluster.
The following instructions walk you through how to install the NGINX ingress controller on AWS. This uses the Layer 4 ELB, as indicated in the NGINX ingress controller documentation. You can use other NGINX ingress controller configurations, such as the Layer 7 load balancer, based on your organization’s ingress policy.
Either configuration works with Armory; the NGINX ingress controller is generally the more configurable option.
From the workstation machine where kubectl is installed:
If you are using Kubernetes version 1.14 or later, install the NGINX ingress controller components:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.35.0/deploy/static/provider/cloud/deploy.yaml
If you are using a Kubernetes version earlier than 1.14, perform the following steps:
- Download this NGINX file: mandatory.yaml
- Change kubernetes.io/os to beta.kubernetes.io/os on line 217 of mandatory.yaml.
- Run kubectl apply -f using the file you modified: kubectl apply -f <modified-mandatory-file>.yaml
Then, install the NGINX ingress controller AWS-specific service:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/aws/service-l4.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/aws/patch-configmap-l4.yaml
Set up an Ingress for spin-deck and spin-gate
Get the external IP for the NGINX ingress controller:
kubectl get svc -n ingress-nginx
The command returns a DNS name or IP address in the EXTERNAL-IP field.
If you stood up a new NGINX ingress controller, you can likely use this value (IP address or DNS name) for your ingress.
For example, if the command returns abcd1234abcd1234abcd1234abcd1234-123456789.us-west-2.elb.amazonaws.com, then you can use abcd1234abcd1234abcd1234abcd1234-123456789.us-west-2.elb.amazonaws.com for the SPINNAKER_ENDPOINT in the following steps. If the command returns 55.55.55.55, then use 55.55.55.55 for the SPINNAKER_ENDPOINT.
If you use an existing NGINX ingress controller, or other services are likely to be using the same NGINX ingress controller, create a DNS entry that points at the NGINX ingress controller endpoint you are using for Armory. You can create either a CNAME Record that points at the DNS name or an A Record that points at the IP address.
For the example abcd1234abcd1234abcd1234abcd1234-123456789.us-west-2.elb.amazonaws.com DNS name, do the following:
- Create a CNAME pointing spinnaker.domain.com at abcd1234abcd1234abcd1234abcd1234-123456789.us-west-2.elb.amazonaws.com
- Put spinnaker.domain.com in the host field in the below manifest and uncomment it
- Use spinnaker.domain.com for the SPINNAKER_ENDPOINT in the below steps
- (Alternately, for testing, create an /etc/hosts entry pointing spinnaker.domain.com at the IP address that abcd1234abcd1234abcd1234abcd1234-123456789.us-west-2.elb.amazonaws.com resolves to)
For the 55.55.55.55 IP address example, do the following:
- Create an A Record pointing spinnaker.domain.com at 55.55.55.55
- Put spinnaker.domain.com in the host field in the below manifest and uncomment it
- Use spinnaker.domain.com for the SPINNAKER_ENDPOINT in the below steps
- (Alternately, for testing, create an /etc/hosts entry pointing spinnaker.domain.com at 55.55.55.55)
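For reference, the /etc/hosts entry mentioned above is a single line mapping the address to the hostname (local testing only, using the example values):

55.55.55.55 spinnaker.domain.com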
Create a Kubernetes Ingress manifest to expose spin-deck and spin-gate.
Create a file called spin-ingress.yml with the following content. If you are on Kubernetes 1.14 or later, you can replace extensions/v1beta1 with networking.k8s.io/v1beta1; a networking.k8s.io/v1 variant for newer clusters appears after the apply step below.
(Make sure the hosts and namespace match your actual host and namespace.)
---
apiVersion: extensions/v1beta1
# apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: spin-ingress
  labels:
    app: spin
    cluster: spin-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - # host: spinnaker.some-url.com
    # ^ If you have other things running in your cluster, uncomment this line and specify a valid DNS name
    http:
      paths:
      - backend:
          serviceName: spin-deck
          servicePort: 9000
        path: /
      - backend:
          serviceName: spin-gate
          servicePort: 8084
        path: /api/v1
Apply the ingress file you just created:
kubectl -n spinnaker apply -f spin-ingress.yml
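On clusters where the extensions/v1beta1 Ingress API is no longer served (Kubernetes 1.22 and later), the same ingress can be expressed with the networking.k8s.io/v1 schema, which restructures the backend fields and requires pathType. A sketch equivalent to the manifest above:

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: spin-ingress
  labels:
    app: spin
    cluster: spin-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - # host: spinnaker.some-url.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: spin-deck
            port:
              number: 9000
      - path: /api/v1
        pathType: Prefix
        backend:
          service:
            name: spin-gate
            port:
              number: 8084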
Configure Armory to be aware of its endpoints
Armory must be aware of its endpoints to work properly. Configuration updates vary depending upon whether you installed Armory using Operator or Halyard.
Operator
Update spinnakerservice.yml, adding the security section:
spec:
  spinnakerConfig:
    config:
      security:
        apiSecurity:
          overrideBaseUrl: http://spinnaker.domain.com/api/v1 # Replace this with the IP address or DNS name that points to your NGINX ingress instance
        uiSecurity:
          overrideBaseUrl: http://spinnaker.domain.com # Replace this with the IP address or DNS name that points to your NGINX ingress instance
Apply the changes:
kubectl -n spinnaker apply -f spinnakerservice.yml
Halyard
Run this command to get into the Halyard container:
kubectl -n spinnaker exec -it halyard-0 bash
Then, run the following command from inside the container:
SPINNAKER_ENDPOINT=http://spinnaker.domain.com
# ^ Replace this with the IP address or DNS name that points to your NGINX ingress instance
hal config security ui edit --override-base-url ${SPINNAKER_ENDPOINT}
hal config security api edit --override-base-url ${SPINNAKER_ENDPOINT}/api/v1
hal deploy apply
Configuring TLS certificates
Configuring TLS certificates for ingresses is often very environment-specific. In general, you want to do the following (a sketch follows the list):
- Add certificate(s) so that your ingress controller can use them
- Configure the ingress(es) so that NGINX (or the load balancer in front of NGINX, or your alternative ingress controller) terminates TLS using the certificate(s)
- Update Armory to be aware of the new TLS endpoints by replacing http with https in the base URL overrides from the previous section
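One common pattern with the NGINX ingress controller (a sketch, assuming you already have a PEM certificate and key for spinnaker.domain.com) is to store the certificate in a Kubernetes TLS Secret:

# Create the TLS Secret from your certificate and key files
kubectl -n spinnaker create secret tls spin-ingress-tls \
--cert=tls.crt --key=tls.key

and then reference it from a tls section in the Ingress spec:

spec:
  tls:
  - hosts:
    - spinnaker.domain.com
    secretName: spin-ingress-tls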
Next steps
Now that Armory is running, here are potential next steps:
- Configuration of certificates to secure your cluster (see the Configuring TLS certificates section above)
- Configuration of Authentication/Authorization (see the Open Source Spinnaker documentation)
- Add Kubernetes accounts to deploy applications to (see Creating and Adding a Kubernetes Account to Armory as a Deployment Target)
- Add GCP accounts to deploy applications to (see the Open Source Spinnaker documentation)
- Add AWS accounts to deploy applications to (see the Open Source Spinnaker documentation)