# Kubernetes

Sablier assumes that it is deployed within the Kubernetes cluster to use the Kubernetes API internally.

## Use the Kubernetes provider

To use the Kubernetes provider, configure the [provider.name](TODO) property.

<!-- tabs:start -->
#### **File (YAML)**
```yaml
provider:
  name: kubernetes
```
#### **CLI**
```bash
sablier start --provider.name=kubernetes
```
#### **Environment Variable**
```bash
PROVIDER_NAME=kubernetes
```
<!-- tabs:end -->
!> **Ensure that Sablier has the necessary roles!**
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: sablier
rules:
  - apiGroups:
      - apps
      - ""
    resources:
      - deployments
      - statefulsets
    verbs:
      - get   # Retrieve info about specific dep
      - list  # Events
      - watch # Events
  - apiGroups:
      - apps
      - ""
    resources:
      - deployments/scale
      - statefulsets/scale
    verbs:
      - patch  # Scale up and down
      - update # Scale up and down
      - get    # Retrieve info about specific dep
      - list   # Events
      - watch  # Events
```
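
A ClusterRole by itself grants nothing; Sablier's ServiceAccount must also be bound to it. A minimal sketch, assuming Sablier runs under a ServiceAccount named `sablier` in the `sablier` namespace (both names are illustrative — adjust them to your deployment):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: sablier
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: sablier
subjects:
  - kind: ServiceAccount
    name: sablier      # assumed ServiceAccount name
    namespace: sablier # assumed namespace
```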
## Register Deployments
For Sablier to work, it needs to know which deployments to scale up and down.

You have to register your deployments by opting in with labels.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami
  labels:
    app: whoami
    sablier.enable: "true"
    sablier.group: mygroup
spec:
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
        - name: whoami
          image: acouvreur/whoami:v1.10.2
```

## How does Sablier know when a deployment is ready?


Sablier checks the deployment's replicas. As soon as the current replica count matches the desired replica count, the deployment is considered `ready`.


?> Kubernetes uses the Pod healthcheck to determine whether the Pod is up and running, so the provider has native healthcheck support.
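
The comparison above can be sketched as a tiny shell function (purely illustrative — Sablier performs this check internally through the Kubernetes API, not via a script):

```shell
#!/bin/sh
# Sketch of the readiness rule: a deployment is "ready" once the
# current replica count equals the desired replica count.
is_ready() {
  current="$1"; desired="$2"
  if [ "$current" -eq "$desired" ]; then
    echo "ready"
  else
    echo "not ready"
  fi
}

is_ready 0 1   # still scaling up -> prints "not ready"
is_ready 1 1   # prints "ready"
```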