formatting, mostly

This commit is contained in:
vmorganp
2024-02-06 02:22:48 -07:00
parent 20e5713dd0
commit bc88753264
10 changed files with 85 additions and 67 deletions

View File

@@ -7,11 +7,11 @@ name: Docker Build and Publish
on:
  push:
-    branches: [ master ]
+    branches: [master]
    # Publish semver tags as releases.
-    tags: [ 'v*.*.*' ]
+    tags: ["v*.*.*"]
  pull_request:
-    branches: [ master ]
+    branches: [master]

env:
  # Use docker.io for Docker Hub if empty
@@ -19,7 +19,6 @@ env:
  # github.repository as <account>/<repo>
  IMAGE_NAME: ${{ github.repository }}

jobs:
  build:
    runs-on: ubuntu-latest

View File

@@ -1,43 +1,50 @@
# Lazytainer - Lazy Load Containers
Putting your containers to sleep
[![Docker](https://github.com/vmorganp/Lazytainer/actions/workflows/docker-publish.yml/badge.svg)](https://github.com/vmorganp/Lazytainer/actions/workflows/docker-publish.yml)
---
https://github.com/vmorganp/Lazytainer/assets/31448722/91af5528-6fee-4837-b4d8-11c03e792e94
## Quick Explanation
Monitors network traffic to containers. If there is traffic, the container runs; otherwise the container is stopped/paused. For more details, check out the [Configuration](#configuration) section.
## Want to test it?
1. Clone the project
```
git clone https://github.com/vmorganp/Lazytainer
cd Lazytainer
```
2. Start the stack
```sh
# if "docker compose" doesn't work, try "docker-compose"
docker compose up
```
This will create 2 containers that you can reach through a third "lazytainer" container.
3. View the running container by navigating to its web UI at `http://localhost:81`. You should see some information about the container
4. Close the tab and wait until the logs say "stopped container"
5. Navigate again to `http://localhost:81`; it should be a dead page
6. Navigate to `http://localhost:81` several times, enough to generate some network traffic, and it should start again
7. To clean up, run
```sh
docker-compose down
```
## Configuration
### Note:
Lazytainer does not "automatically" start and stop all of your containers. You must apply a label to them and proxy their traffic through the Lazytainer container.
### Examples
For examples of lazytainer in action, check out the [Examples](./examples/) directory.
### Groups
Lazytainer starts and stops other containers in "groups" of one or more containers. To assign a container to a lazytainer group, add a label to it. The label will look like this:
```yaml
@@ -48,19 +55,20 @@ yourContainerThatWillSleep:
```
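The block above is truncated here; based on the `lazytainer.group=group1` label used in the zerotier example further down, a sketch of what it presumably contains:
```yaml
yourContainerThatWillSleep:
  # ... configuration omitted for brevity
  labels:
    - "lazytainer.group=group1" # replace group1 with your group name
```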
To configure a group, add labels to the lazytainer container like this. Note that each group is required to have one or more ports specified; these ports must also be forwarded on the lazytainer container.
```yaml
lazytainer:
  # ... configuration omitted for brevity
  ports:
    - 81:81 # used by group1 and group2
    - 82:82 # used by group2
  labels:
    # Configuration items are formatted like this
    - "lazytainer.group.<yourGroupName>.<property>=value"
    # configuration for group 1
    - "lazytainer.group.group1.ports=81"
    # configuration for group 2
    - "lazytainer.group.group2.ports=81,82"
```
Group properties that can be changed include:
@@ -75,20 +83,25 @@ Group properties that can be changed include:
| netInterface | Network interface to listen on | No | `eth0` |
### Additional Configuration
#### Verbose Logging
If you would like more verbose logging, you can apply the environment variable `VERBOSE=true` to lazytainer like so
```yaml
lazytainer:
  # ... configuration omitted for brevity
  environment:
    - VERBOSE=true
```
#### Volumes
If using lazytainer, you MUST mount the following volume on the lazytainer container:
```yaml
lazytainer:
  # ... configuration omitted for brevity
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock:ro
```
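Putting the port, label, and volume pieces together, here is a minimal sketch of a complete stack. The `whoami` service and its image are placeholders, the lazytainer image reference is assumed, and the `network_mode` line is an assumption based on the note above that traffic must be proxied through the lazytainer container; see the [Examples](./examples/) for working configurations.
```yaml
version: "3"
services:
  lazytainer:
    image: ghcr.io/vmorganp/lazytainer:master # image reference assumed; check the repo for the published image
    ports:
      - 81:81 # forward every port the group listens on
    labels:
      - "lazytainer.group.group1.ports=81"
      - "lazytainer.group.group1.inactiveTimeout=30"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro # required, as noted above

  whoami:
    image: traefik/whoami # placeholder for the container you want to put to sleep
    network_mode: service:lazytainer # assumed way to route traffic through lazytainer
    depends_on:
      - lazytainer
    labels:
      - "lazytainer.group=group1" # assign this container to group1
```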

View File

@@ -17,7 +17,7 @@ services:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    labels:
      # this configuration will cause traffic to whoami1 to start whoami2, but traffic to only whoami2 will not wake whoami1
      # if there's no incoming traffic on port 81, pause whoami1
      - "lazytainer.group.group1.pollRate=1"
      - "lazytainer.group.group1.inactiveTimeout=10"
      - "lazytainer.group.group1.ports=81"

View File

@@ -1,17 +1,20 @@
# Lazy Load Docker Minecraft Server
## Startup
```
git clone https://github.com/vmorganp/Lazytainer
cd Lazytainer/examples/minecraft
docker-compose up
```
## Watch the magic happen
After a configurable period of no activity, the server should stop.
If you generate some traffic by trying to connect to the instance, or by running a command like `telnet localhost 25565` a few times, you should see the minecraft container automatically restart itself.
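The behavior described above comes from labels on the lazytainer service in this example's docker-compose.yml; a sketch of the relevant ones (the group name and timeout value are illustrative; the port is Minecraft's default, matching the telnet test above):
```yaml
lazytainer:
  # ... configuration omitted for brevity
  labels:
    - "lazytainer.group.minecraft.ports=25565" # illustrative group name; Minecraft's default port
    - "lazytainer.group.minecraft.inactiveTimeout=120" # illustrative "period of no activity"
```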

View File

@@ -27,4 +27,4 @@ services:
    depends_on:
      - lazytainer
    volumes:
      - /tmp/lazytainerExample/minecraft:/data

View File

@@ -1,21 +1,25 @@
# Lazy Load Docker Satisfactory Server
## Startup
```
git clone https://github.com/vmorganp/Lazytainer
cd Lazytainer/examples/satisfactory
docker compose up
```
or
#### Deploy with Portainer, etc.
Copy the contents of docker-compose.yml into a stack; it should deploy automatically.
## Notes
- "lazytainer.group.satisfactory.inactiveTimeout=120"
- "lazytainer.group.satisfactory.inactiveTimeout=120"
This may need to be adjusted based on your physical hardware. If you have slower hardware, the server client may not start with enough time to accept clients and create additional traffic.
In my experience, players can expect a 45 second delay after opening the Satisfactory client and navigating to the server manager before the server actually accepts clients. From the time clients are accepted, this gives players about a minute and a half to login before the container will try to shutdown again.
This could very well change based on hardware specifications, you may need to adjust.
This could very well change based on hardware specifications, you may need to adjust.
Don't forget to portforward
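If that login window is too tight on your hardware, only the timeout label needs to change; a sketch (the 240 value is illustrative, not a recommendation):
```yaml
lazytainer:
  # ... configuration omitted for brevity
  labels:
    # illustrative value: double the two-minute default discussed above
    - "lazytainer.group.satisfactory.inactiveTimeout=240"
```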

View File

@@ -2,15 +2,15 @@ version: "3"
services:
  satisfactory-server:
-    container_name: 'satisfactory'
-    image: 'wolveix/satisfactory-server:latest'
+    container_name: "satisfactory"
+    image: "wolveix/satisfactory-server:latest"
    volumes:
      - /your/game_files/satisfactory:/config # Path must be changed to your satisfactory save mount
    environment:
      - MAXPLAYERS=8
      - PGID=1000
      - PUID=1000
      - STEAMBETA=false
    restart: unless-stopped
    deploy:
      resources:
@@ -30,14 +30,14 @@ services:
    environment:
      - VERBOSE=true
    ports:
-      - '15777:15777/udp' # This is the query port, supplies packets when the server browser is used.
-      - '7777:7777/udp' # This port handles the actual game traffic
-      - '15000:15000/udp' # This port handles outbound check-ins "beacon", etc.
+      - "15777:15777/udp" # This is the query port, supplies packets when the server browser is used.
+      - "7777:7777/udp" # This port handles the actual game traffic
+      - "15000:15000/udp" # This port handles outbound check-ins "beacon", etc.
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    labels:
      - "lazytainer.group.satisfactory.ports=15777,7777,15000"
      - "lazytainer.group.satisfactory.inactiveTimeout=120" # A value of two minutes is safe on my hardware but this may need to be raised based on satisfactory container startup time on slower hardware
      - "lazytainer.group.satisfactory.minPacketThreshold=500"
      - "lazytainer.group.satisfactory.pollRate=1"
      - "lazytainer.group.satisfactory.sleepMethod=stop" # This is the default but important to reclaim memory with such high usage. Changing this to pause may speed up reload times?

View File

@@ -11,13 +11,13 @@ services:
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    labels:
      # if there's no incoming traffic on port 81, stop whoami1
      - "lazytainer.group.group1.pollRate=1"
      - "lazytainer.group.group1.inactiveTimeout=30"
      - "lazytainer.group.group1.ports=81"
      - "lazytainer.group.group1.MINPACKETTHRESH=10"
      - "lazytainer.group.group1.sleepMethod=stop" # can be either "stop" or "pause", or left blank for stop
      - "lazytainer.group.group1.netInterface=ztukuxxqii"

  zerotier:
    image: zyclonite/zerotier
@@ -26,7 +26,7 @@ services:
    devices:
      - /dev/net/tun
    volumes:
-      - './zt:/var/lib/zerotier-one'
+      - "./zt:/var/lib/zerotier-one"
    cap_add:
      - NET_ADMIN
      - SYS_ADMIN
@@ -40,4 +40,3 @@ services:
      - lazytainer
    labels:
      - "lazytainer.group=group1"

View File

@@ -11,7 +11,7 @@ import (
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/filters"
"github.com/google/gopacket"
_ "github.com/google/gopacket/layers"
"github.com/google/gopacket/pcap"

View File

@@ -51,7 +51,7 @@ func configureFromLabels() map[string]LazyGroup {
	// negotiate the API version to prevent "client version is too new" errors
	dockerClient.NegotiateAPIVersion(context.Background())
	// list all containers (running or not) whose ID matches this container's ID
	filter := filters.NewArgs(filters.Arg("id", container_id))
	containers, err := dockerClient.ContainerList(context.Background(), types.ContainerListOptions{All: true, Filters: filter})
	check(err)