Mirror of https://github.com/sysadminsmedia/homebox.git (synced 2025-12-28 07:56:35 +01:00)

Compare commits: katos/test...mk/multi-g — 1 commit (dd873a95da)

.github/scripts/upgrade-test/README.md (vendored) — 259 lines removed
@@ -1,259 +0,0 @@
# HomeBox Upgrade Testing Workflow

This document describes the automated upgrade testing workflow for HomeBox.

## Overview

The upgrade test workflow is designed to ensure data integrity and functionality when upgrading HomeBox from one version to another. It automatically:

1. Deploys a stable version of HomeBox
2. Creates test data (users, items, locations, labels, notifiers, attachments)
3. Upgrades to the latest version from the main branch
4. Verifies all data and functionality remain intact

## Workflow File

**Location**: `.github/workflows/upgrade-test.yaml`

## Trigger Conditions

The workflow runs:
- **Daily**: Automatically at 2 AM UTC (via cron schedule)
- **Manual**: Can be triggered manually via GitHub Actions UI
- **On Push**: When changes are made to the workflow files or test scripts

## Test Scenarios

### 1. Environment Setup
- Pulls the latest stable HomeBox Docker image from GHCR
- Starts the application with test configuration
- Ensures the service is healthy and ready

### 2. Data Creation

The workflow creates comprehensive test data using the `create-test-data.sh` script:

#### Users and Groups
- **Group 1**: 5 users (user1@homebox.test through user5@homebox.test)
- **Group 2**: 2 users (user6@homebox.test and user7@homebox.test)
- All users have password: `TestPassword123!`

#### Locations
- **Group 1**: Living Room, Garage
- **Group 2**: Home Office

#### Labels
- **Group 1**: Electronics, Important
- **Group 2**: Work Equipment

#### Items
- **Group 1**: 5 items (Laptop Computer, Power Drill, TV Remote, Tool Box, Coffee Maker)
- **Group 2**: 2 items (Monitor, Keyboard)

#### Attachments
- Multiple attachments added to various items (receipts, manuals, warranties)

#### Notifiers
- **Group 1**: Test notifier named "TESTING"
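All of the data above is created through HomeBox's public REST API. As a point of reference, registering the first Group 1 user comes down to a single call like the following — a minimal sketch of what `create-test-data.sh` (shown further down) loops over for all seven users; the password is the one listed above, and the response fields the script later extracts are the auth `token` and the group's `inviteToken`:

```bash
# Register the first user of Group 1; subsequent members of the same group
# pass the returned invite token as "groupToken" in the same payload.
curl -s -X POST "http://localhost:7745/api/v1/users/register" \
  -H "Content-Type: application/json" \
  -d '{"email":"user1@homebox.test","name":"User One","password":"TestPassword123!"}' | jq
```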
### 3. Upgrade Process

1. Stops the stable version container
2. Builds a fresh image from the current main branch
3. Copies the database to a new location
4. Starts the new version with the existing data

### 4. Verification Tests

The Playwright test suite (`upgrade-verification.spec.ts`) verifies:

- ✅ **User Authentication**: All 7 users can log in with their credentials
- ✅ **Data Persistence**: All items, locations, and labels are present
- ✅ **Attachments**: File attachments are correctly associated with items
- ✅ **Notifiers**: The "TESTING" notifier is still configured
- ✅ **UI Functionality**: Version display and theme switching work correctly
- ✅ **Data Isolation**: Groups can only see their own data

## Test Data File

The setup script generates a JSON file at `/tmp/test-users.json` containing:

```json
{
  "users": [
    {
      "email": "user1@homebox.test",
      "password": "TestPassword123!",
      "token": "...",
      "group": "1"
    },
    ...
  ],
  "locations": {
    "group1": ["location-id-1", "location-id-2"],
    "group2": ["location-id-3"]
  },
  "labels": {...},
  "items": {...},
  "notifiers": {...}
}
```

This file is used by the Playwright tests to verify data integrity.
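The same file is handy for quick manual checks outside Playwright. The snippet below is a minimal sketch: it only relies on the file format shown above plus the bearer-token scheme used by the setup script; the `GET /api/v1/locations` listing call is an assumption for illustration, not something this README documents.

```bash
TEST_DATA_FILE="${TEST_DATA_FILE:-/tmp/test-users.json}"
HOMEBOX_URL="${HOMEBOX_URL:-http://localhost:7745}"

# Pull the first Group 1 token out of the test data file
# (tostring keeps this working whether "group" is stored as 1 or "1").
token=$(jq -r '[.users[] | select((.group|tostring) == "1")][0].token' "$TEST_DATA_FILE")

# Make an authenticated request with that token.
curl -s -H "Authorization: Bearer $token" \
  "$HOMEBOX_URL/api/v1/locations" | jq
```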
## Scripts

### create-test-data.sh

**Location**: `.github/scripts/upgrade-test/create-test-data.sh`

**Purpose**: Creates all test data via the HomeBox REST API

**Environment Variables**:
- `HOMEBOX_URL`: Base URL of the HomeBox instance (default: http://localhost:7745)
- `TEST_DATA_FILE`: Path to output JSON file (default: /tmp/test-users.json)

**Requirements**:
- `curl`: For API calls
- `jq`: For JSON processing

**Usage**:
```bash
export HOMEBOX_URL=http://localhost:7745
./.github/scripts/upgrade-test/create-test-data.sh
```

## Running Tests Locally

To run the upgrade tests locally:

### Prerequisites
```bash
# Install dependencies
sudo apt-get install -y jq curl docker.io

# Install pnpm and Playwright
cd frontend
pnpm install
pnpm exec playwright install --with-deps chromium
```

### Run the test
```bash
# Start stable version
docker run -d \
  --name homebox-test \
  -p 7745:7745 \
  -e HBOX_OPTIONS_ALLOW_REGISTRATION=true \
  -v /tmp/homebox-data:/data \
  ghcr.io/sysadminsmedia/homebox:latest

# Wait for startup
sleep 10

# Create test data
export HOMEBOX_URL=http://localhost:7745
./.github/scripts/upgrade-test/create-test-data.sh

# Stop container
docker stop homebox-test
docker rm homebox-test

# Build new version
docker build -t homebox:test .

# Start new version with existing data
docker run -d \
  --name homebox-test \
  -p 7745:7745 \
  -e HBOX_OPTIONS_ALLOW_REGISTRATION=true \
  -v /tmp/homebox-data:/data \
  homebox:test

# Wait for startup
sleep 10

# Run verification tests
cd frontend
TEST_DATA_FILE=/tmp/test-users.json \
E2E_BASE_URL=http://localhost:7745 \
pnpm exec playwright test \
  --project=chromium \
  test/upgrade/upgrade-verification.spec.ts

# Cleanup
docker stop homebox-test
docker rm homebox-test
```

## Artifacts

The workflow produces several artifacts:

1. **playwright-report-upgrade-test**: HTML report of test results
2. **playwright-traces**: Detailed traces for debugging failures
3. **Docker logs**: Collected on failure for troubleshooting

## Failure Scenarios

The workflow will fail if:
- The stable version fails to start
- Test data creation fails
- The new version fails to start with existing data
- Any verification test fails
- Database migrations fail

## Troubleshooting

### Test Data Creation Fails

Check the Docker logs:
```bash
docker logs homebox-old
```

Verify the API is accessible:
```bash
curl http://localhost:7745/api/v1/status
```

### Verification Tests Fail

1. Download the Playwright report from GitHub Actions artifacts
2. Review the HTML report for detailed failure information
3. Check traces for visual debugging
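Once the artifacts are downloaded, Playwright's own viewers are usually the fastest way through them. A short sketch — the local paths assume the `playwright-report-upgrade-test` and `playwright-traces` artifacts were unzipped next to the repository, and `<test-dir>` stands in for whichever failing test's folder you are inspecting:

```bash
# Serve a downloaded HTML report with Playwright's built-in viewer
cd frontend
pnpm exec playwright show-report ../playwright-report-upgrade-test

# Open an individual trace from the traces artifact for step-by-step debugging
pnpm exec playwright show-trace ../playwright-traces/<test-dir>/trace.zip
```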
### Database Issues

If migrations fail:
```bash
# Check database file
ls -lh /tmp/homebox-data-new/homebox.db

# Check Docker logs for migration errors
docker logs homebox-new
```

## Future Enhancements

Potential improvements:
- [ ] Test multiple upgrade paths (e.g., v0.10 → v0.11 → v0.12)
- [ ] Test with PostgreSQL backend in addition to SQLite
- [ ] Add performance benchmarks
- [ ] Test with larger datasets
- [ ] Add API-level verification in addition to UI tests
- [ ] Test backup and restore functionality

## Related Files

- `.github/workflows/upgrade-test.yaml` - Main workflow definition
- `.github/scripts/upgrade-test/create-test-data.sh` - Data generation script
- `frontend/test/upgrade/upgrade-verification.spec.ts` - Playwright verification tests
- `.github/workflows/e2e-partial.yaml` - Standard E2E test workflow (for reference)

## Support

For issues or questions about this workflow:
1. Check the GitHub Actions run logs
2. Review this documentation
3. Open an issue in the repository
.github/scripts/upgrade-test/create-test-data.sh (vendored) — 153 lines removed
@@ -1,153 +0,0 @@
#!/bin/bash

# Script to create test data in HomeBox for upgrade testing

set -e

HOMEBOX_URL="${HOMEBOX_URL:-http://localhost:7745}"
API_URL="${HOMEBOX_URL}/api/v1"
TEST_DATA_FILE="${TEST_DATA_FILE:-/tmp/test-users.json}"

echo "Creating test data in HomeBox at $HOMEBOX_URL"

# Function to make API calls with error handling
api_call() {
    local method=$1
    local endpoint=$2
    local data=$3
    local token=$4

    local response
    if [ -n "$token" ]; then
        response=$(curl -s -X "$method" \
            -H "Authorization: Bearer $token" \
            -H "Content-Type: application/json" \
            -d "$data" \
            "$API_URL$endpoint")
    else
        response=$(curl -s -X "$method" \
            -H "Content-Type: application/json" \
            -d "$data" \
            "$API_URL$endpoint")
    fi

    # Validate response is proper JSON
    if ! echo "$response" | jq '.' > /dev/null 2>&1; then
        echo "Invalid API response for $endpoint: $response" >&2
        exit 1
    fi

    echo "$response"
}

# Function to initialize the test data JSON file
initialize_test_data() {
    echo "Initializing test data JSON file: $TEST_DATA_FILE"
    if [ -f "$TEST_DATA_FILE" ]; then
        echo "Removing existing test data file..."
        rm -f "$TEST_DATA_FILE"
    fi
    echo "{\"users\":[],\"locations\":[],\"labels\":[],\"items\":[],\"attachments\":[],\"notifiers\":[]}" > "$TEST_DATA_FILE"
}

# Function to add content to JSON data file
add_to_test_data() {
    local key=$1
    local value=$2

    jq --argjson data "$value" ".${key} += [\$data]" "$TEST_DATA_FILE" > "${TEST_DATA_FILE}.tmp" && mv "${TEST_DATA_FILE}.tmp" "$TEST_DATA_FILE"
}

# Register a user and get their auth token
register_user() {
    local email=$1
    local name=$2
    local password=$3
    local group_token=$4

    echo "Registering user: $email"
    local payload="{\"email\":\"$email\",\"name\":\"$name\",\"password\":\"$password\""
    if [ -n "$group_token" ]; then
        payload="$payload,\"groupToken\":\"$group_token\""
    fi
    payload="$payload}"

    api_call "POST" "/users/register" "$payload"
}

# Main logic for creating test data
initialize_test_data

# Group 1: Create 5 users
echo "=== Creating Group 1 Users ==="
group1_user1_response=$(register_user "user1@homebox.test" "User One" "password123")
group1_user1_token=$(echo "$group1_user1_response" | jq -r '.token // empty')
group1_invite_token=$(echo "$group1_user1_response" | jq -r '.group.inviteToken // empty')

if [ -z "$group1_user1_token" ]; then
    echo "Failed to register the first group user" >&2
    exit 1
fi
add_to_test_data "users" "{\"email\": \"user1@homebox.test\", \"token\": \"$group1_user1_token\", \"group\": 1}"

# Add 4 more users to the same group
for user in 2 3 4 5; do
    response=$(register_user "user$user@homebox.test" "User $user" "password123" "$group1_invite_token")
    token=$(echo "$response" | jq -r '.token // empty')
    add_to_test_data "users" "{\"email\": \"user$user@homebox.test\", \"token\": \"$token\", \"group\": 1}"
done

# Group 2: Create 2 users
echo "=== Creating Group 2 Users ==="
group2_user1_response=$(register_user "user6@homebox.test" "User Six" "password123")
group2_user1_token=$(echo "$group2_user1_response" | jq -r '.token // empty')
group2_invite_token=$(echo "$group2_user1_response" | jq -r '.group.inviteToken // empty')
add_to_test_data "users" "{\"email\": \"user6@homebox.test\", \"token\": \"$group2_user1_token\", \"group\": 2}"

response=$(register_user "user7@homebox.test" "User Seven" "password123" "$group2_invite_token")
group2_user2_token=$(echo "$response" | jq -r '.token // empty')
add_to_test_data "users" "{\"email\": \"user7@homebox.test\", \"token\": \"$group2_user2_token\", \"group\": 2}"

# Create Locations
echo "=== Creating Locations ==="
group1_locations=()
group1_locations+=("$(api_call "POST" "/locations" "{ \"name\": \"Living Room\", \"description\": \"Family area\" }" "$group1_user1_token")")
group1_locations+=("$(api_call "POST" "/locations" "{ \"name\": \"Garage\", \"description\": \"Storage area\" }" "$group1_user1_token")")
group2_locations=()
group2_locations+=("$(api_call "POST" "/locations" "{ \"name\": \"Office\", \"description\": \"Workspace\" }" "$group2_user1_token")")

# Add Locations to Test Data
for loc in "${group1_locations[@]}"; do
    loc_id=$(echo "$loc" | jq -r '.id // empty')
    add_to_test_data "locations" "{\"id\": \"$loc_id\", \"group\": 1}"
done

for loc in "${group2_locations[@]}"; do
    loc_id=$(echo "$loc" | jq -r '.id // empty')
    add_to_test_data "locations" "{\"id\": \"$loc_id\", \"group\": 2}"
done

# Create Labels
echo "=== Creating Labels ==="
label1=$(api_call "POST" "/labels" "{ \"name\": \"Electronics\", \"description\": \"Devices\" }" "$group1_user1_token")
add_to_test_data "labels" "$label1"

label2=$(api_call "POST" "/labels" "{ \"name\": \"Important\", \"description\": \"High Priority\" }" "$group1_user1_token")
add_to_test_data "labels" "$label2"

# Create Items and Attachments
echo "=== Creating Items and Attachments ==="
item1=$(api_call "POST" "/items" "{ \"name\": \"Laptop\", \"description\": \"Work laptop\", \"locationId\": \"$(echo ${group1_locations[0]} | jq -r '.id // empty')\" }" "$group1_user1_token")
item1_id=$(echo "$item1" | jq -r '.id // empty')
add_to_test_data "items" "{\"id\": \"$item1_id\", \"group\": 1}"

attachment1=$(api_call "POST" "/items/$item1_id/attachments" "" "$group1_user1_token")
add_to_test_data "attachments" "{\"id\": \"$(echo $attachment1 | jq -r '.id // empty')\", \"itemId\": \"$item1_id\"}"

# Create Test Notifier
echo "=== Creating Notifiers ==="
notifier=$(api_call "POST" "/notifiers" "{ \"name\": \"TESTING\", \"url\": \"https://example.com/webhook\", \"isActive\": true }" "$group1_user1_token")
add_to_test_data "notifiers" "$notifier"

echo "=== Test Data Creation Complete ==="
cat "$TEST_DATA_FILE" | jq
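A quick sanity check of the generated file is worth doing before moving on to the upgrade step. The sketch below uses nothing beyond `jq` and the output format the script itself writes (users, locations, tokens):

```bash
TEST_DATA_FILE="${TEST_DATA_FILE:-/tmp/test-users.json}"

# Seven users in total: five in group 1, two in group 2
jq '.users | length' "$TEST_DATA_FILE"
jq '[.users[] | select((.group|tostring) == "1")] | length' "$TEST_DATA_FILE"

# Three locations: two for group 1, one for group 2
jq '.locations | length' "$TEST_DATA_FILE"

# Spot-check that every user actually received a non-empty token
jq -r '.users[] | "\(.email)  group=\(.group)  token_length=\(.token | length)"' "$TEST_DATA_FILE"
```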
.github/workflows/upgrade-test.yaml (vendored) — 184 lines removed
@@ -1,184 +0,0 @@
name: HomeBox Upgrade Test

on:
  schedule:
    # Run daily at 2 AM UTC
    - cron: '0 2 * * *'
  workflow_dispatch: # Allow manual trigger
  push:
    branches:
      - main
    paths:
      - '.github/workflows/upgrade-test.yaml'
      - '.github/scripts/upgrade-test/**'

jobs:
  upgrade-test:
    name: Test Upgrade Path
    runs-on: ubuntu-latest
    timeout-minutes: 60
    permissions:
      contents: read
      packages: read

    steps:
      # Step 1: Checkout repository
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 0

      # Step 2: Setup dependencies (Node.js, Docker, pnpm, and Playwright)
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: lts/*

      - name: Install pnpm
        uses: pnpm/action-setup@v3.0.0
        with:
          version: 9.12.2

      - name: Install Playwright
        run: |
          cd frontend
          pnpm install
          pnpm exec playwright install --with-deps chromium

      # Step 3: Prepare environment and /tmp directories
      - name: Create test data directories
        run: |
          mkdir -p /tmp/homebox-data-old
          mkdir -p /tmp/homebox-data-new
          chmod -R 777 /tmp/homebox-data-old
          chmod -R 777 /tmp/homebox-data-new
          echo "Directories created:"
          ls -la /tmp/

      # Step 4: Pull and start the stable HomeBox image
      - name: Pull latest stable HomeBox image
        run: |
          docker pull ghcr.io/sysadminsmedia/homebox:latest

      - name: Start HomeBox (stable version)
        run: |
          docker run -d \
            --name homebox-old \
            --restart unless-stopped \
            -p 7745:7745 \
            -e HBOX_LOG_LEVEL=debug \
            -e HBOX_OPTIONS_ALLOW_REGISTRATION=true \
            -e TZ=UTC \
            -v /tmp/homebox-data-old:/data \
            ghcr.io/sysadminsmedia/homebox:latest

          # Wait for the service to be ready
          timeout 60 bash -c 'until curl -f http://localhost:7745/api/v1/status; do sleep 2; done'
          echo "HomeBox stable version is ready"

      # Step 5: Run test data script
      - name: Create test data (users, items, locations, labels)
        run: |
          chmod +x .github/scripts/upgrade-test/create-test-data.sh
          .github/scripts/upgrade-test/create-test-data.sh
        env:
          HOMEBOX_URL: http://localhost:7745

      - name: Validate test data creation
        run: |
          echo "Verifying test data was created..."
          # Check test-users.json content
          cat /tmp/test-users.json | jq || echo "Test-users file is empty or malformed!"
          # Check the database file exists
          if [ ! -f /tmp/homebox-data-old/homebox.db ]; then
            echo "No database found in the old instance directory!" && exit 1
          fi
          echo "Test data creation validated successfully!"

      # Step 6: Stop the HomeBox stable instance
      - name: Stop old HomeBox instance
        run: |
          docker stop homebox-old
          docker rm homebox-old

      # Step 7: Build HomeBox from the main branch
      - name: Build HomeBox from main branch
        run: |
          docker build \
            --build-arg VERSION=main \
            --build-arg COMMIT=${{ github.sha }} \
            --build-arg BUILD_TIME="$(date -u +"%Y-%m-%dT%H:%M:%SZ")" \
            -t homebox:test \
            -f Dockerfile \
            .

      # Step 8: Start the new HomeBox version with migrated data
      - name: Copy data to new location
        run: |
          cp -r /tmp/homebox-data-old/* /tmp/homebox-data-new/
          chmod -R 777 /tmp/homebox-data-new

      - name: Start HomeBox (new version)
        run: |
          docker run -d \
            --name homebox-new \
            --restart unless-stopped \
            -p 7745:7745 \
            -e HBOX_LOG_LEVEL=debug \
            -e HBOX_OPTIONS_ALLOW_REGISTRATION=true \
            -e TZ=UTC \
            -v /tmp/homebox-data-new:/data \
            homebox:test

          # Wait for the updated service to be ready
          timeout 60 bash -c 'until curl -f http://localhost:7745/api/v1/status; do sleep 2; done'
          echo "HomeBox new version is ready"

      # Step 9: Execute Playwright verification tests
      - name: Run Playwright verification tests
        run: |
          cd frontend
          TEST_DATA_FILE=/tmp/test-users.json \
          E2E_BASE_URL=http://localhost:7745 \
          pnpm exec playwright test \
            -c ./test/playwright.config.ts \
            test/upgrade/upgrade-verification.spec.ts
        env:
          HOMEBOX_URL: http://localhost:7745

      # Step 10: Upload reports for review
      - name: Upload Playwright report
        uses: actions/upload-artifact@v4
        if: always()
        with:
          name: playwright-report-upgrade-test
          path: frontend/playwright-report/
          retention-days: 30

      - name: Upload test traces
        uses: actions/upload-artifact@v4
        if: always()
        with:
          name: playwright-traces
          path: frontend/test-results/
          retention-days: 7

      # Step 11: Collect logs for failed instances
      - name: Collect logs on failure
        if: failure()
        run: |
          echo "=== Docker logs for new version ==="
          docker logs homebox-new || true
          echo "=== Database content ==="
          ls -la /tmp/homebox-data-new || true

      # Step 12: Cleanup resources
      - name: Cleanup
        if: always()
        run: |
          docker stop homebox-new || true
          docker rm homebox-new || true
          docker rmi homebox:test || true
@@ -55,7 +55,7 @@ func (a *app) SetupDemo() error {
 		return errors.New("failed to setup demo")
 	}
 
-	_, err = a.services.Items.CsvImport(ctx, self.GroupID, strings.NewReader(csvText))
+	_, err = a.services.Items.CsvImport(ctx, self.DefaultGroupID, strings.NewReader(csvText))
 	if err != nil {
 		log.Err(err).Msg("Failed to import CSV")
 		return errors.New("failed to setup demo")
@@ -339,7 +339,7 @@ func (ctrl *V1Controller) HandleItemsImport() errchain.HandlerFunc {
 
 	user := services.UseUserCtx(r.Context())
 
-	_, err = ctrl.svc.Items.CsvImport(r.Context(), user.GroupID, file)
+	_, err = ctrl.svc.Items.CsvImport(r.Context(), user.DefaultGroupID, file)
 	if err != nil {
 		log.Err(err).Msg("failed to import items")
 		return validate.NewRequestError(err, http.StatusInternalServerError)
@@ -1,9 +1,10 @@
 package v1
 
 import (
 	"net/http"
 
 	"github.com/hay-kot/httpkit/errchain"
 	"github.com/sysadminsmedia/homebox/backend/internal/core/services"
+	"net/http"
 )
 
 // HandleBillOfMaterialsExport godoc
@@ -18,7 +19,7 @@ func (ctrl *V1Controller) HandleBillOfMaterialsExport() errchain.HandlerFunc {
 	return func(w http.ResponseWriter, r *http.Request) error {
 		actor := services.UseUserCtx(r.Context())
 
-		csv, err := ctrl.svc.Items.ExportBillOfMaterialsCSV(r.Context(), actor.GroupID)
+		csv, err := ctrl.svc.Items.ExportBillOfMaterialsCSV(r.Context(), actor.DefaultGroupID)
 		if err != nil {
 			return err
 		}
@@ -108,7 +108,7 @@ func run(cfg *config.Config) error {
 		return err
 	}
 
-	if strings.ToLower(cfg.Database.Driver) == config.DriverPostgres {
+	if strings.ToLower(cfg.Database.Driver) == "postgres" {
 		if !validatePostgresSSLMode(cfg.Database.SslMode) {
 			log.Error().Str("sslmode", cfg.Database.SslMode).Msg("invalid sslmode")
 			return fmt.Errorf("invalid sslmode: %s", cfg.Database.SslMode)
@@ -7,6 +7,7 @@ import (
 	"net/url"
 	"strings"
 
+	"github.com/google/uuid"
 	"github.com/hay-kot/httpkit/errchain"
 	v1 "github.com/sysadminsmedia/homebox/backend/app/api/handlers/v1"
 	"github.com/sysadminsmedia/homebox/backend/internal/core/services"
@@ -152,3 +153,48 @@ func (a *app) mwAuthToken(next errchain.Handler) errchain.Handler {
 		return next.ServeHTTP(w, r)
 	})
 }
+
+// mwTenant is a middleware that will parse the X-Tenant header and validate the user has access
+// to the requested tenant. If no header is provided, the user's default group is used.
+//
+// WARNING: This middleware _MUST_ be called after mwAuthToken
+func (a *app) mwTenant(next errchain.Handler) errchain.Handler {
+	return errchain.HandlerFunc(func(w http.ResponseWriter, r *http.Request) error {
+		ctx := r.Context()
+
+		// Get the user from context (set by mwAuthToken)
+		user := services.UseUserCtx(ctx)
+		if user == nil {
+			return validate.NewRequestError(errors.New("user context not found"), http.StatusInternalServerError)
+		}
+
+		tenantID := user.DefaultGroupID
+
+		// Check for X-Tenant header
+		if tenantHeader := r.Header.Get("X-Tenant"); tenantHeader != "" {
+			parsedTenantID, err := uuid.Parse(tenantHeader)
+			if err != nil {
+				return validate.NewRequestError(errors.New("invalid X-Tenant header format"), http.StatusBadRequest)
+			}
+
+			// Validate user has access to the requested tenant
+			hasAccess := false
+			for _, gid := range user.GroupIDs {
+				if gid == parsedTenantID {
+					hasAccess = true
+					break
+				}
+			}
+
+			if !hasAccess {
+				return validate.NewRequestError(errors.New("user does not have access to the requested tenant"), http.StatusForbidden)
+			}
+
+			tenantID = parsedTenantID
+		}
+
+		// Set the tenant in context
+		r = r.WithContext(services.SetTenantCtx(ctx, tenantID))
+		return next.ServeHTTP(w, r)
+	})
+}
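The header contract this middleware introduces is easy to exercise by hand. A hedged sketch follows — only the `X-Tenant` header name, the fallback to the user's default group, and the 400/403 error cases come from the code above; the `/api/v1/items` listing endpoint, the token variable, and the example UUID are assumptions for illustration:

```bash
# No X-Tenant header: the middleware falls back to the user's default group.
curl -s -H "Authorization: Bearer $TOKEN" \
  http://localhost:7745/api/v1/items | jq

# Explicit tenant: pass the UUID of a group the user belongs to.
curl -s -H "Authorization: Bearer $TOKEN" \
  -H "X-Tenant: 2f0c4c1e-0000-0000-0000-000000000000" \
  http://localhost:7745/api/v1/items | jq

# Per the validation above, a group the user is not a member of returns 403,
# and a value that is not a valid UUID returns 400.
```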
@@ -82,6 +82,7 @@ func (a *app) mountRoutes(r *chi.Mux, chain *errchain.ErrChain, repos *repo.AllR
 
 	userMW := []errchain.Middleware{
 		a.mwAuthToken,
+		a.mwTenant,
 		a.mwRoles(RoleModeOr, authroles.RoleUser.String()),
 	}
 
@@ -41,7 +41,7 @@ func setupStorageDir(cfg *config.Config) error {
 func setupDatabaseURL(cfg *config.Config) (string, error) {
 	databaseURL := ""
 	switch strings.ToLower(cfg.Database.Driver) {
-	case config.DriverSqlite3:
+	case "sqlite3":
 		databaseURL = cfg.Database.SqlitePath
 		dbFilePath := strings.Split(cfg.Database.SqlitePath, "?")[0]
 		dbDir := filepath.Dir(dbFilePath)
@@ -49,7 +49,7 @@ func setupDatabaseURL(cfg *config.Config) (string, error) {
 			log.Error().Err(err).Str("path", dbDir).Msg("failed to create SQLite database directory")
 			return "", fmt.Errorf("failed to create SQLite database directory: %w", err)
 		}
-	case config.DriverPostgres:
+	case "postgres":
 		databaseURL = fmt.Sprintf("host=%s port=%s dbname=%s sslmode=%s", cfg.Database.Host, cfg.Database.Port, cfg.Database.Database, cfg.Database.SslMode)
 		if cfg.Database.Username != "" {
 			databaseURL += fmt.Sprintf(" user=%s", cfg.Database.Username)
@@ -10,22 +10,31 @@ cloud.google.com/go/auth/oauth2adapt v0.2.8 h1:keo8NaayQZ6wimpNSmW5OPc283g65QNIi
|
||||
cloud.google.com/go/auth/oauth2adapt v0.2.8/go.mod h1:XQ9y31RkqZCcwJWNSx2Xvric3RrU88hAYYbjDWYDL+c=
|
||||
cloud.google.com/go/compute/metadata v0.9.0 h1:pDUj4QMoPejqq20dK0Pg2N4yG9zIkYGdBtwLoEkH9Zs=
|
||||
cloud.google.com/go/compute/metadata v0.9.0/go.mod h1:E0bWwX5wTnLPedCKqk3pJmVgCBSM6qQI1yTBdEb3C10=
|
||||
cloud.google.com/go/iam v1.5.2 h1:qgFRAGEmd8z6dJ/qyEchAuL9jpswyODjA2lS+w234g8=
|
||||
cloud.google.com/go/iam v1.5.2/go.mod h1:SE1vg0N81zQqLzQEwxL2WI6yhetBdbNQuTvIKCSkUHE=
|
||||
cloud.google.com/go/iam v1.5.3 h1:+vMINPiDF2ognBJ97ABAYYwRgsaqxPbQDlMnbHMjolc=
|
||||
cloud.google.com/go/iam v1.5.3/go.mod h1:MR3v9oLkZCTlaqljW6Eb2d3HGDGK5/bDv93jhfISFvU=
|
||||
cloud.google.com/go/logging v1.13.0 h1:7j0HgAp0B94o1YRDqiqm26w4q1rDMH7XNRU34lJXHYc=
|
||||
cloud.google.com/go/logging v1.13.0/go.mod h1:36CoKh6KA/M0PbhPKMq6/qety2DCAErbhXT62TuXALA=
|
||||
cloud.google.com/go/logging v1.13.1 h1:O7LvmO0kGLaHY/gq8cV7T0dyp6zJhYAOtZPX4TF3QtY=
|
||||
cloud.google.com/go/logging v1.13.1/go.mod h1:XAQkfkMBxQRjQek96WLPNze7vsOmay9H5PqfsNYDqvw=
|
||||
cloud.google.com/go/longrunning v0.6.7 h1:IGtfDWHhQCgCjwQjV9iiLnUta9LBCo8R9QmAFsS/PrE=
|
||||
cloud.google.com/go/longrunning v0.6.7/go.mod h1:EAFV3IZAKmM56TyiE6VAP3VoTzhZzySwI/YI1s/nRsY=
|
||||
cloud.google.com/go/longrunning v0.7.0 h1:FV0+SYF1RIj59gyoWDRi45GiYUMM3K1qO51qoboQT1E=
|
||||
cloud.google.com/go/longrunning v0.7.0/go.mod h1:ySn2yXmjbK9Ba0zsQqunhDkYi0+9rlXIwnoAf+h+TPY=
|
||||
cloud.google.com/go/monitoring v1.24.2 h1:5OTsoJ1dXYIiMiuL+sYscLc9BumrL3CarVLL7dd7lHM=
|
||||
cloud.google.com/go/monitoring v1.24.2/go.mod h1:x7yzPWcgDRnPEv3sI+jJGBkwl5qINf+6qY4eq0I9B4U=
|
||||
cloud.google.com/go/monitoring v1.24.3 h1:dde+gMNc0UhPZD1Azu6at2e79bfdztVDS5lvhOdsgaE=
|
||||
cloud.google.com/go/monitoring v1.24.3/go.mod h1:nYP6W0tm3N9H/bOw8am7t62YTzZY+zUeQ+Bi6+2eonI=
|
||||
cloud.google.com/go/pubsub v1.50.0 h1:hnYpOIxVlgVD1Z8LN7est4DQZK3K6tvZNurZjIVjUe0=
|
||||
cloud.google.com/go/pubsub v1.50.0/go.mod h1:Di2Y+nqXBpIS+dXUEJPQzLh8PbIQZMLE9IVUFhf2zmM=
|
||||
cloud.google.com/go/pubsub v1.50.1 h1:fzbXpPyJnSGvWXF1jabhQeXyxdbCIkXTpjXHy7xviBM=
|
||||
cloud.google.com/go/pubsub v1.50.1/go.mod h1:6YVJv3MzWJUVdvQXG081sFvS0dWQOdnV+oTo++q/xFk=
|
||||
cloud.google.com/go/pubsub/v2 v2.2.1 h1:3brZcshL3fIiD1qOxAE2QW9wxsfjioy014x4yC9XuYI=
|
||||
cloud.google.com/go/pubsub/v2 v2.2.1/go.mod h1:O5f0KHG9zDheZAd3z5rlCRhxt2JQtB+t/IYLKK3Bpvw=
|
||||
cloud.google.com/go/storage v1.56.0 h1:iixmq2Fse2tqxMbWhLWC9HfBj1qdxqAmiK8/eqtsLxI=
|
||||
cloud.google.com/go/storage v1.56.0/go.mod h1:Tpuj6t4NweCLzlNbw9Z9iwxEkrSem20AetIeH/shgVU=
|
||||
cloud.google.com/go/trace v1.11.6 h1:2O2zjPzqPYAHrn3OKl029qlqG6W8ZdYaOWRyr8NgMT4=
|
||||
cloud.google.com/go/trace v1.11.6/go.mod h1:GA855OeDEBiBMzcckLPE2kDunIpC72N+Pq8WFieFjnI=
|
||||
cloud.google.com/go/trace v1.11.7 h1:kDNDX8JkaAG3R2nq1lIdkb7FCSi1rCmsEtKVsty7p+U=
|
||||
cloud.google.com/go/trace v1.11.7/go.mod h1:TNn9d5V3fQVf6s4SCveVMIBS2LJUqo73GACmq/Tky0s=
|
||||
entgo.io/ent v0.14.5 h1:Rj2WOYJtCkWyFo6a+5wB3EfBRP0rnx1fMk6gGA0UUe4=
|
||||
entgo.io/ent v0.14.5/go.mod h1:zTzLmWtPvGpmSwtkaayM2cm5m819NdM7z7tYPq3vN0U=
|
||||
github.com/Azure/azure-amqp-common-go/v3 v3.2.3 h1:uDF62mbd9bypXWi19V1bN5NZEO84JqgmI5G73ibAmrk=
|
||||
@@ -79,6 +88,8 @@ github.com/agext/levenshtein v1.2.3 h1:YB2fHEn0UJagG8T1rrWknE3ZQzWM06O8AMAatNn7l
|
||||
github.com/agext/levenshtein v1.2.3/go.mod h1:JEDfjyjHDjOF/1e4FlBE/PkbqA9OfWu2ki2W0IB5558=
|
||||
github.com/apparentlymart/go-textseg/v15 v15.0.0 h1:uYvfpb3DyLSCGWnctWKGj857c6ew1u1fNQOlOtuGxQY=
|
||||
github.com/apparentlymart/go-textseg/v15 v15.0.0/go.mod h1:K8XmNZdhEBkdlyDdvbmmsvpAG721bKi0joRfFdHIWJ4=
|
||||
github.com/ardanlabs/conf/v3 v3.9.0 h1:aRBYHeD39/OkuaEXYIEoi4wvF3OnS7jUAPxXyLfEu20=
|
||||
github.com/ardanlabs/conf/v3 v3.9.0/go.mod h1:XlL9P0quWP4m1weOVFmlezabinbZLI05niDof/+Ochk=
|
||||
github.com/ardanlabs/conf/v3 v3.10.0 h1:qIrJ/WBmH/hFQ/IX4xH9LX9LzwK44T9aEOy78M+4S+0=
|
||||
github.com/ardanlabs/conf/v3 v3.10.0/go.mod h1:XlL9P0quWP4m1weOVFmlezabinbZLI05niDof/+Ochk=
|
||||
github.com/aws/aws-sdk-go-v2 v1.39.6 h1:2JrPCVgWJm7bm83BDwY5z8ietmeJUbh3O2ACnn+Xsqk=
|
||||
@@ -172,10 +183,14 @@ github.com/fogleman/gg v1.3.0/go.mod h1:R/bRT+9gY/C5z7JzPU0zXsXHKM4/ayA+zqcVNZzP
|
||||
github.com/form3tech-oss/jwt-go v3.2.2+incompatible/go.mod h1:pbq4aXjuKjdthFRnoDwaVPLA+WlJuPGy+QneDUgJi2k=
|
||||
github.com/fortytw2/leaktest v1.3.0 h1:u8491cBMTQ8ft8aeV+adlcytMZylmA5nnwwkRZjI8vw=
|
||||
github.com/fortytw2/leaktest v1.3.0/go.mod h1:jDsjWgpAGjm2CA7WthBh/CdZYEPF31XHquHwclZch5g=
|
||||
github.com/gabriel-vasile/mimetype v1.4.11 h1:AQvxbp830wPhHTqc1u7nzoLT+ZFxGY7emj5DR5DYFik=
|
||||
github.com/gabriel-vasile/mimetype v1.4.11/go.mod h1:d+9Oxyo1wTzWdyVUPMmXFvp4F9tea18J8ufA774AB3s=
|
||||
github.com/gabriel-vasile/mimetype v1.4.12 h1:e9hWvmLYvtp846tLHam2o++qitpguFiYCKbn0w9jyqw=
|
||||
github.com/gabriel-vasile/mimetype v1.4.12/go.mod h1:d+9Oxyo1wTzWdyVUPMmXFvp4F9tea18J8ufA774AB3s=
|
||||
github.com/gen2brain/avif v0.4.4 h1:Ga/ss7qcWWQm2bxFpnjYjhJsNfZrWs5RsyklgFjKRSE=
|
||||
github.com/gen2brain/avif v0.4.4/go.mod h1:/XCaJcjZraQwKVhpu9aEd9aLOssYOawLvhMBtmHVGqk=
|
||||
github.com/gen2brain/heic v0.4.6 h1:sNh3mfaEZLmDJnFc5WoLxCzh/wj5GwfJScPfvF5CNJE=
|
||||
github.com/gen2brain/heic v0.4.6/go.mod h1:ECnpqbqLu0qSje4KSNWUUDK47UPXPzl80T27GWGEL5I=
|
||||
github.com/gen2brain/heic v0.4.7 h1:xw/e9R3HdIvb+uEhRDMRJdviYnB3ODe/VwL8SYLaMGc=
|
||||
github.com/gen2brain/heic v0.4.7/go.mod h1:ECnpqbqLu0qSje4KSNWUUDK47UPXPzl80T27GWGEL5I=
|
||||
github.com/gen2brain/jpegxl v0.4.5 h1:TWpVEn5xkIfsswzkjHBArd0Cc9AE0tbjBSoa0jDsrbo=
|
||||
@@ -195,10 +210,16 @@ github.com/go-ole/go-ole v1.2.6 h1:/Fpf6oFPoeFik9ty7siob0G6Ke8QvQEuVcuChpwXzpY=
|
||||
github.com/go-ole/go-ole v1.2.6/go.mod h1:pprOEPIfldk/42T2oK7lQ4v4JSDwmV0As9GaiUsvbm0=
|
||||
github.com/go-openapi/inflect v0.19.0 h1:9jCH9scKIbHeV9m12SmPilScz6krDxKRasNNSNPXu/4=
|
||||
github.com/go-openapi/inflect v0.19.0/go.mod h1:lHpZVlpIQqLyKwJ4N+YSc9hchQy/i12fJykb83CRBH4=
|
||||
github.com/go-openapi/jsonpointer v0.22.3 h1:dKMwfV4fmt6Ah90zloTbUKWMD+0he+12XYAsPotrkn8=
|
||||
github.com/go-openapi/jsonpointer v0.22.3/go.mod h1:0lBbqeRsQ5lIanv3LHZBrmRGHLHcQoOXQnf88fHlGWo=
|
||||
github.com/go-openapi/jsonpointer v0.22.4 h1:dZtK82WlNpVLDW2jlA1YCiVJFVqkED1MegOUy9kR5T4=
|
||||
github.com/go-openapi/jsonpointer v0.22.4/go.mod h1:elX9+UgznpFhgBuaMQ7iu4lvvX1nvNsesQ3oxmYTw80=
|
||||
github.com/go-openapi/jsonreference v0.21.3 h1:96Dn+MRPa0nYAR8DR1E03SblB5FJvh7W6krPI0Z7qMc=
|
||||
github.com/go-openapi/jsonreference v0.21.3/go.mod h1:RqkUP0MrLf37HqxZxrIAtTWW4ZJIK1VzduhXYBEeGc4=
|
||||
github.com/go-openapi/jsonreference v0.21.4 h1:24qaE2y9bx/q3uRK/qN+TDwbok1NhbSmGjjySRCHtC8=
|
||||
github.com/go-openapi/jsonreference v0.21.4/go.mod h1:rIENPTjDbLpzQmQWCj5kKj3ZlmEh+EFVbz3RTUh30/4=
|
||||
github.com/go-openapi/spec v0.22.1 h1:beZMa5AVQzRspNjvhe5aG1/XyBSMeX1eEOs7dMoXh/k=
|
||||
github.com/go-openapi/spec v0.22.1/go.mod h1:c7aeIQT175dVowfp7FeCvXXnjN/MrpaONStibD2WtDA=
|
||||
github.com/go-openapi/spec v0.22.3 h1:qRSmj6Smz2rEBxMnLRBMeBWxbbOvuOoElvSvObIgwQc=
|
||||
github.com/go-openapi/spec v0.22.3/go.mod h1:iIImLODL2loCh3Vnox8TY2YWYJZjMAKYyLH2Mu8lOZs=
|
||||
github.com/go-openapi/swag v0.19.15 h1:D2NRCBzS9/pEY3gP9Nl8aDqGUcPFrwG2p+CNFrLyrCM=
|
||||
@@ -228,6 +249,8 @@ github.com/go-playground/locales v0.14.1 h1:EWaQ/wswjilfKLTECiXz7Rh+3BjFhfDFKv/o
|
||||
github.com/go-playground/locales v0.14.1/go.mod h1:hxrqLVvrK65+Rwrd5Fc6F2O76J/NuW9t0sjnWqG1slY=
|
||||
github.com/go-playground/universal-translator v0.18.1 h1:Bcnm0ZwsGyWbCzImXv+pAJnYK9S473LQFuzCbDbfSFY=
|
||||
github.com/go-playground/universal-translator v0.18.1/go.mod h1:xekY+UJKNuX9WP91TpwSH2VMlDf28Uj24BCp08ZFTUY=
|
||||
github.com/go-playground/validator/v10 v10.28.0 h1:Q7ibns33JjyW48gHkuFT91qX48KG0ktULL6FgHdG688=
|
||||
github.com/go-playground/validator/v10 v10.28.0/go.mod h1:GoI6I1SjPBh9p7ykNE/yj3fFYbyDOpwMn5KXd+m2hUU=
|
||||
github.com/go-playground/validator/v10 v10.30.1 h1:f3zDSN/zOma+w6+1Wswgd9fLkdwy06ntQJp0BBvFG0w=
|
||||
github.com/go-playground/validator/v10 v10.30.1/go.mod h1:oSuBIQzuJxL//3MelwSLD5hc2Tu889bF0Idm9Dg26cM=
|
||||
github.com/go-task/slim-sprig v0.0.0-20230315185526-52ccab3ef572 h1:tfuBGBXKqDEevZMzYi5KSi8KkcZtzBcTgAUUtapy0OI=
|
||||
@@ -267,6 +290,8 @@ github.com/google/wire v0.7.0 h1:JxUKI6+CVBgCO2WToKy/nQk0sS+amI9z9EjVmdaocj4=
|
||||
github.com/google/wire v0.7.0/go.mod h1:n6YbUQD9cPKTnHXEBN2DXlOp/mVADhVErcMFb0v3J18=
|
||||
github.com/googleapis/enterprise-certificate-proxy v0.3.7 h1:zrn2Ee/nWmHulBx5sAVrGgAa0f2/R35S4DJwfFaUPFQ=
|
||||
github.com/googleapis/enterprise-certificate-proxy v0.3.7/go.mod h1:MkHOF77EYAE7qfSuSS9PU6g4Nt4e11cnsDUowfwewLA=
|
||||
github.com/googleapis/gax-go/v2 v2.15.0 h1:SyjDc1mGgZU5LncH8gimWo9lW1DtIfPibOG81vgd/bo=
|
||||
github.com/googleapis/gax-go/v2 v2.15.0/go.mod h1:zVVkkxAQHa1RQpg9z2AUCMnKhi0Qld9rcmyfL1OZhoc=
|
||||
github.com/googleapis/gax-go/v2 v2.16.0 h1:iHbQmKLLZrexmb0OSsNGTeSTS0HO4YvFOG8g5E4Zd0Y=
|
||||
github.com/googleapis/gax-go/v2 v2.16.0/go.mod h1:o1vfQjjNZn4+dPnRdl/4ZD7S9414Y4xA+a/6Icj6l14=
|
||||
github.com/gorilla/schema v1.4.1 h1:jUg5hUjCSDZpNGLuXQOgIWGdlgrIdYvgQ0wZtdK1M3E=
|
||||
@@ -325,6 +350,8 @@ github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/
|
||||
github.com/mattn/go-isatty v0.0.19/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
|
||||
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
|
||||
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
|
||||
github.com/mattn/go-runewidth v0.0.9 h1:Lm995f3rfxdpd6TSmuVCHVb/QhupuXlYr8sCI/QdE+0=
|
||||
github.com/mattn/go-runewidth v0.0.9/go.mod h1:H031xJmbD/WCDINGzjvQ9THkh0rPKHF+m2gUSrubnMI=
|
||||
github.com/mattn/go-sqlite3 v1.14.32 h1:JD12Ag3oLy1zQA+BNn74xRgaBbdhbNIDYvQUEuuErjs=
|
||||
github.com/mattn/go-sqlite3 v1.14.32/go.mod h1:Uh1q+B4BYcTPb+yiD3kU8Ct7aC0hY9fxUwlHK0RXw+Y=
|
||||
github.com/mfridman/interpolate v0.0.2 h1:pnuTK7MQIxxFz1Gr+rjSIx9u7qVjf5VOoM/u6BbAxPY=
|
||||
@@ -337,6 +364,8 @@ github.com/nats-io/jwt/v2 v2.5.0 h1:WQQ40AAlqqfx+f6ku+i0pOVm+ASirD4fUh+oQsiE9Ak=
|
||||
github.com/nats-io/jwt/v2 v2.5.0/go.mod h1:24BeQtRwxRV8ruvC4CojXlx/WQ/VjuwlYiH+vu/+ibI=
|
||||
github.com/nats-io/nats-server/v2 v2.9.23 h1:6Wj6H6QpP9FMlpCyWUaNu2yeZ/qGj+mdRkZ1wbikExU=
|
||||
github.com/nats-io/nats-server/v2 v2.9.23/go.mod h1:wEjrEy9vnqIGE4Pqz4/c75v9Pmaq7My2IgFmnykc4C0=
|
||||
github.com/nats-io/nats.go v1.47.0 h1:YQdADw6J/UfGUd2Oy6tn4Hq6YHxCaJrVKayxxFqYrgM=
|
||||
github.com/nats-io/nats.go v1.47.0/go.mod h1:iRWIPokVIFbVijxuMQq4y9ttaBTMe0SFdlZfMDd+33g=
|
||||
github.com/nats-io/nats.go v1.48.0 h1:pSFyXApG+yWU/TgbKCjmm5K4wrHu86231/w84qRVR+U=
|
||||
github.com/nats-io/nats.go v1.48.0/go.mod h1:iRWIPokVIFbVijxuMQq4y9ttaBTMe0SFdlZfMDd+33g=
|
||||
github.com/nats-io/nkeys v0.4.12 h1:nssm7JKOG9/x4J8II47VWCL1Ds29avyiQDRn0ckMvDc=
|
||||
@@ -347,12 +376,16 @@ github.com/ncruces/go-strftime v1.0.0 h1:HMFp8mLCTPp341M/ZnA4qaf7ZlsbTc+miZjCLOF
|
||||
github.com/ncruces/go-strftime v1.0.0/go.mod h1:Fwc5htZGVVkseilnfgOVb9mKy6w1naJmn9CehxcKcls=
|
||||
github.com/olahol/melody v1.4.0 h1:Pa5SdeZL/zXPi1tJuMAPDbl4n3gQOThSL6G1p4qZ4SI=
|
||||
github.com/olahol/melody v1.4.0/go.mod h1:GgkTl6Y7yWj/HtfD48Q5vLKPVoZOH+Qqgfa7CvJgJM4=
|
||||
github.com/olekukonko/tablewriter v0.0.5 h1:P2Ga83D34wi1o9J6Wh1mRuqd4mF/x/lgBS7N7AbDhec=
|
||||
github.com/olekukonko/tablewriter v0.0.5/go.mod h1:hPp6KlRPjbx+hW8ykQs1w3UBbZlj6HuIJcUGPhkA7kY=
|
||||
github.com/onsi/ginkgo/v2 v2.9.2 h1:BA2GMJOtfGAfagzYtrAlufIP0lq6QERkFmHLMLPwFSU=
|
||||
github.com/onsi/ginkgo/v2 v2.9.2/go.mod h1:WHcJJG2dIlcCqVfBAwUCrJxSPFb6v4azBwgxeMeDuts=
|
||||
github.com/onsi/gomega v1.27.6 h1:ENqfyGeS5AX/rlXDd/ETokDz93u0YufY1Pgxuy/PvWE=
|
||||
github.com/onsi/gomega v1.27.6/go.mod h1:PIQNjfQwkP3aQAH7lf7j87O/5FiNr+ZR8+ipb+qQlhg=
|
||||
github.com/philhofer/fwd v1.2.0 h1:e6DnBTl7vGY+Gz322/ASL4Gyp1FspeMvx1RNDoToZuM=
|
||||
github.com/philhofer/fwd v1.2.0/go.mod h1:RqIHx9QI14HlwKwm98g9Re5prTQ6LdeRQn+gXJFxsJM=
|
||||
github.com/pierrec/lz4/v4 v4.1.22 h1:cKFw6uJDK+/gfw5BcDL0JL5aBsAFdsIT18eRtLj7VIU=
|
||||
github.com/pierrec/lz4/v4 v4.1.22/go.mod h1:gZWDp/Ze/IJXGXf23ltt2EXimqmTUXEy0GFuRQyBid4=
|
||||
github.com/pierrec/lz4/v4 v4.1.23 h1:oJE7T90aYBGtFNrI8+KbETnPymobAhzRrR8Mu8n1yfU=
|
||||
github.com/pierrec/lz4/v4 v4.1.23/go.mod h1:EoQMVJgeeEOMsCqCzqFm2O0cJvljX2nGZjcRIPL34O4=
|
||||
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c h1:+mdjkGKdHQG3305AYmdv1U2eRNDiU2ErMBj1gwrq8eQ=
|
||||
@@ -389,6 +422,10 @@ github.com/shirou/gopsutil/v4 v4.25.11 h1:X53gB7muL9Gnwwo2evPSE+SfOrltMoR6V3xJAX
|
||||
github.com/shirou/gopsutil/v4 v4.25.11/go.mod h1:EivAfP5x2EhLp2ovdpKSozecVXn1TmuG7SMzs/Wh4PU=
|
||||
github.com/skip2/go-qrcode v0.0.0-20200617195104-da1b6568686e h1:MRM5ITcdelLK2j1vwZ3Je0FKVCfqOLp5zO6trqMLYs0=
|
||||
github.com/skip2/go-qrcode v0.0.0-20200617195104-da1b6568686e/go.mod h1:XV66xRDqSt+GTGFMVlhk3ULuV0y9ZmzeVGR4mloJI3M=
|
||||
github.com/spf13/cobra v1.7.0 h1:hyqWnYt1ZQShIddO5kBpj3vu05/++x6tJ6dg8EC572I=
|
||||
github.com/spf13/cobra v1.7.0/go.mod h1:uLxZILRyS/50WlhOIKD7W6V5bgeIt+4sICxh6uRMrb0=
|
||||
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
|
||||
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
|
||||
github.com/spiffe/go-spiffe/v2 v2.6.0 h1:l+DolpxNWYgruGQVV0xsfeya3CsC7m8iBzDnMpsbLuo=
|
||||
github.com/spiffe/go-spiffe/v2 v2.6.0/go.mod h1:gm2SeUoMZEtpnzPNs2Csc0D/gX33k1xIx7lEzqblHEs=
|
||||
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
|
||||
@@ -408,6 +445,8 @@ github.com/swaggo/http-swagger/v2 v2.0.2 h1:FKCdLsl+sFCx60KFsyM0rDarwiUSZ8DqbfSy
|
||||
github.com/swaggo/http-swagger/v2 v2.0.2/go.mod h1:r7/GBkAWIfK6E/OLnE8fXnviHiDeAHmgIyooa4xm3AQ=
|
||||
github.com/swaggo/swag v1.16.6 h1:qBNcx53ZaX+M5dxVyTrgQ0PJ/ACK+NzhwcbieTt+9yI=
|
||||
github.com/swaggo/swag v1.16.6/go.mod h1:ngP2etMK5a0P3QBizic5MEwpRmluJZPHjXcMoj4Xesg=
|
||||
github.com/tetratelabs/wazero v1.10.1 h1:2DugeJf6VVk58KTPszlNfeeN8AhhpwcZqkJj2wwFuH8=
|
||||
github.com/tetratelabs/wazero v1.10.1/go.mod h1:DRm5twOQ5Gr1AoEdSi0CLjDQF1J9ZAuyqFIjl1KKfQU=
|
||||
github.com/tetratelabs/wazero v1.11.0 h1:+gKemEuKCTevU4d7ZTzlsvgd1uaToIDtlQlmNbwqYhA=
|
||||
github.com/tetratelabs/wazero v1.11.0/go.mod h1:eV28rsN8Q+xwjogd7f4/Pp4xFxO7uOGbLcD/LzB1wiU=
|
||||
github.com/tinylib/msgp v1.6.1 h1:ESRv8eL3u+DNHUoSAAQRE50Hm162zqAnBoGv9PzScPY=
|
||||
@@ -445,16 +484,26 @@ go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.6
|
||||
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.62.0/go.mod h1:ru6KHrNtNHxM4nD/vd6QrLVWgKhxPYgblq4VAtNawTQ=
|
||||
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.62.0 h1:Hf9xI/XLML9ElpiHVDNwvqI0hIFlzV8dgIr35kV1kRU=
|
||||
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.62.0/go.mod h1:NfchwuyNoMcZ5MLHwPrODwUF1HWCXWrL31s8gSAdIKY=
|
||||
go.opentelemetry.io/otel v1.38.0 h1:RkfdswUDRimDg0m2Az18RKOsnI8UDzppJAtj01/Ymk8=
|
||||
go.opentelemetry.io/otel v1.38.0/go.mod h1:zcmtmQ1+YmQM9wrNsTGV/q/uyusom3P8RxwExxkZhjM=
|
||||
go.opentelemetry.io/otel v1.39.0 h1:8yPrr/S0ND9QEfTfdP9V+SiwT4E0G7Y5MO7p85nis48=
|
||||
go.opentelemetry.io/otel v1.39.0/go.mod h1:kLlFTywNWrFyEdH0oj2xK0bFYZtHRYUdv1NklR/tgc8=
|
||||
go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.37.0 h1:6VjV6Et+1Hd2iLZEPtdV7vie80Yyqf7oikJLjQ/myi0=
|
||||
go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.37.0/go.mod h1:u8hcp8ji5gaM/RfcOo8z9NMnf1pVLfVY7lBY2VOGuUU=
|
||||
go.opentelemetry.io/otel/metric v1.38.0 h1:Kl6lzIYGAh5M159u9NgiRkmoMKjvbsKtYRwgfrA6WpA=
|
||||
go.opentelemetry.io/otel/metric v1.38.0/go.mod h1:kB5n/QoRM8YwmUahxvI3bO34eVtQf2i4utNVLr9gEmI=
|
||||
go.opentelemetry.io/otel/metric v1.39.0 h1:d1UzonvEZriVfpNKEVmHXbdf909uGTOQjA0HF0Ls5Q0=
|
||||
go.opentelemetry.io/otel/metric v1.39.0/go.mod h1:jrZSWL33sD7bBxg1xjrqyDjnuzTUB0x1nBERXd7Ftcs=
|
||||
go.opentelemetry.io/otel/sdk v1.38.0 h1:l48sr5YbNf2hpCUj/FoGhW9yDkl+Ma+LrVl8qaM5b+E=
|
||||
go.opentelemetry.io/otel/sdk v1.38.0/go.mod h1:ghmNdGlVemJI3+ZB5iDEuk4bWA3GkTpW+DOoZMYBVVg=
|
||||
go.opentelemetry.io/otel/sdk v1.39.0 h1:nMLYcjVsvdui1B/4FRkwjzoRVsMK8uL/cj0OyhKzt18=
|
||||
go.opentelemetry.io/otel/sdk v1.39.0/go.mod h1:vDojkC4/jsTJsE+kh+LXYQlbL8CgrEcwmt1ENZszdJE=
|
||||
go.opentelemetry.io/otel/sdk/metric v1.38.0 h1:aSH66iL0aZqo//xXzQLYozmWrXxyFkBJ6qT5wthqPoM=
|
||||
go.opentelemetry.io/otel/sdk/metric v1.38.0/go.mod h1:dg9PBnW9XdQ1Hd6ZnRz689CbtrUp0wMMs9iPcgT9EZA=
|
||||
go.opentelemetry.io/otel/sdk/metric v1.39.0 h1:cXMVVFVgsIf2YL6QkRF4Urbr/aMInf+2WKg+sEJTtB8=
|
||||
go.opentelemetry.io/otel/sdk/metric v1.39.0/go.mod h1:xq9HEVH7qeX69/JnwEfp6fVq5wosJsY1mt4lLfYdVew=
|
||||
go.opentelemetry.io/otel/trace v1.38.0 h1:Fxk5bKrDZJUH+AMyyIXGcFAPah0oRcT+LuNtJrmcNLE=
|
||||
go.opentelemetry.io/otel/trace v1.38.0/go.mod h1:j1P9ivuFsTceSWe1oY+EeW3sc+Pp42sO++GHkg4wwhs=
|
||||
go.opentelemetry.io/otel/trace v1.39.0 h1:2d2vfpEDmCJ5zVYz7ijaJdOF59xLomrvj7bjt6/qCJI=
|
||||
go.opentelemetry.io/otel/trace v1.39.0/go.mod h1:88w4/PnZSazkGzz/w84VHpQafiU4EtqqlVdxWy+rNOA=
|
||||
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
|
||||
@@ -475,13 +524,21 @@ golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACk
|
||||
golang.org/x/crypto v0.0.0-20201002170205-7f63de1d35b0/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
|
||||
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
|
||||
golang.org/x/crypto v0.6.0/go.mod h1:OFC/31mSvZgRz0V1QTNCzfAI1aIRzbiufJtkMIlEp58=
|
||||
golang.org/x/crypto v0.45.0 h1:jMBrvKuj23MTlT0bQEOBcAE0mjg8mK9RXFhRH6nyF3Q=
|
||||
golang.org/x/crypto v0.45.0/go.mod h1:XTGrrkGJve7CYK7J8PEww4aY7gM3qMCElcJQ8n8JdX4=
|
||||
golang.org/x/crypto v0.46.0 h1:cKRW/pmt1pKAfetfu+RCEvjvZkA9RimPbh7bhFjGVBU=
|
||||
golang.org/x/crypto v0.46.0/go.mod h1:Evb/oLKmMraqjZ2iQTwDwvCtJkczlDuTmdJXoZVzqU0=
|
||||
golang.org/x/exp v0.0.0-20251125195548-87e1e737ad39 h1:DHNhtq3sNNzrvduZZIiFyXWOL9IWaDPHqTnLJp+rCBY=
|
||||
golang.org/x/exp v0.0.0-20251125195548-87e1e737ad39/go.mod h1:46edojNIoXTNOhySWIWdix628clX9ODXwPsQuG6hsK0=
|
||||
golang.org/x/exp v0.0.0-20251219203646-944ab1f22d93 h1:fQsdNF2N+/YewlRZiricy4P1iimyPKZ/xwniHj8Q2a0=
|
||||
golang.org/x/exp v0.0.0-20251219203646-944ab1f22d93/go.mod h1:EPRbTFwzwjXj9NpYyyrvenVh9Y+GFeEvMNh7Xuz7xgU=
|
||||
golang.org/x/image v0.33.0 h1:LXRZRnv1+zGd5XBUVRFmYEphyyKJjQjCRiOuAP3sZfQ=
|
||||
golang.org/x/image v0.33.0/go.mod h1:DD3OsTYT9chzuzTQt+zMcOlBHgfoKQb1gry8p76Y1sc=
|
||||
golang.org/x/image v0.34.0 h1:33gCkyw9hmwbZJeZkct8XyR11yH889EQt/QH4VmXMn8=
|
||||
golang.org/x/image v0.34.0/go.mod h1:2RNFBZRB+vnwwFil8GkMdRvrJOFd1AzdZI6vOY+eJVU=
|
||||
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
|
||||
golang.org/x/mod v0.30.0 h1:fDEXFVZ/fmCKProc/yAXXUijritrDzahmwwefnjoPFk=
|
||||
golang.org/x/mod v0.30.0/go.mod h1:lAsf5O2EvJeSFMiBxXDki7sCgAxEUcZHXoXMKT4GJKc=
|
||||
golang.org/x/mod v0.31.0 h1:HaW9xtz0+kOcWKwli0ZXy79Ix+UW/vOfmWI5QVd2tgI=
|
||||
golang.org/x/mod v0.31.0/go.mod h1:43JraMp9cGx1Rx3AqioxrbrhNsLl2l/iNAvuBkrezpg=
|
||||
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
|
||||
@@ -491,12 +548,18 @@ golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v
|
||||
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
|
||||
golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
|
||||
golang.org/x/net v0.7.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
|
||||
golang.org/x/net v0.47.0 h1:Mx+4dIFzqraBXUugkia1OOvlD6LemFo1ALMHjrXDOhY=
|
||||
golang.org/x/net v0.47.0/go.mod h1:/jNxtkgq5yWUGYkaZGqo27cfGZ1c5Nen03aYrrKpVRU=
|
||||
golang.org/x/net v0.48.0 h1:zyQRTTrjc33Lhh0fBgT/H3oZq9WuvRR5gPC70xpDiQU=
|
||||
golang.org/x/net v0.48.0/go.mod h1:+ndRgGjkh8FGtu1w1FGbEC31if4VrNVMuKTgcAAnQRY=
|
||||
golang.org/x/oauth2 v0.33.0 h1:4Q+qn+E5z8gPRJfmRy7C2gGG3T4jIprK6aSYgTXGRpo=
|
||||
golang.org/x/oauth2 v0.33.0/go.mod h1:lzm5WQJQwKZ3nwavOZ3IS5Aulzxi68dUSgRHujetwEA=
|
||||
golang.org/x/oauth2 v0.34.0 h1:hqK/t4AKgbqWkdkcAeI8XLmbK+4m4G5YeQRrmiotGlw=
|
||||
golang.org/x/oauth2 v0.34.0/go.mod h1:lzm5WQJQwKZ3nwavOZ3IS5Aulzxi68dUSgRHujetwEA=
|
||||
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.18.0 h1:kr88TuHDroi+UVf+0hZnirlk8o8T+4MrK6mr60WkH/I=
|
||||
golang.org/x/sync v0.18.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
|
||||
golang.org/x/sync v0.19.0 h1:vV+1eWNmZ5geRlYjzm2adRgW2/mcpevXNg50YZtPCE4=
|
||||
golang.org/x/sync v0.19.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
|
||||
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
@@ -512,6 +575,8 @@ golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.38.0 h1:3yZWxaJjBmCWXqhN1qh02AkOnCQ1poK6oF+a7xWL6Gc=
|
||||
golang.org/x/sys v0.38.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
|
||||
golang.org/x/sys v0.39.0 h1:CvCKL8MeisomCi6qNZ+wbb0DN9E5AATixKsvNtMoMFk=
|
||||
golang.org/x/sys v0.39.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
|
||||
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
|
||||
@@ -521,6 +586,8 @@ golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
|
||||
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
|
||||
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
|
||||
golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
|
||||
golang.org/x/text v0.31.0 h1:aC8ghyu4JhP8VojJ2lEHBnochRno1sgL6nEi9WGFGMM=
|
||||
golang.org/x/text v0.31.0/go.mod h1:tKRAlv61yKIjGGHX/4tP1LTbc13YSec1pxVEWXzfoeM=
|
||||
golang.org/x/text v0.32.0 h1:ZD01bjUt1FQ9WJ0ClOL5vxgxOI/sVCNgX1YtKwcY0mU=
|
||||
golang.org/x/text v0.32.0/go.mod h1:o/rUWzghvpD5TXrTIBuJU77MTaN0ljMWE47kxGJQ7jY=
|
||||
golang.org/x/time v0.14.0 h1:MRx4UaLrDotUKUdCIqzPC48t1Y9hANFKIRpNx+Te8PI=
|
||||
@@ -528,6 +595,8 @@ golang.org/x/time v0.14.0/go.mod h1:eL/Oa2bBBK0TkX57Fyni+NgnyQQN4LitPmob2Hjnqw4=
|
||||
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
||||
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
|
||||
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
|
||||
golang.org/x/tools v0.39.0 h1:ik4ho21kwuQln40uelmciQPp9SipgNDdrafrYA4TmQQ=
|
||||
golang.org/x/tools v0.39.0/go.mod h1:JnefbkDPyD8UU2kI5fuf8ZX4/yUeh9W877ZeBONxUqQ=
|
||||
golang.org/x/tools v0.40.0 h1:yLkxfA+Qnul4cs9QA3KnlFu0lVmd8JJfoq+E41uSutA=
|
||||
golang.org/x/tools v0.40.0/go.mod h1:Ik/tzLRlbscWpqqMRjyWYDisX8bG13FrdXp3o4Sr9lc=
|
||||
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
@@ -536,16 +605,28 @@ golang.org/x/xerrors v0.0.0-20240903120638-7835f813f4da h1:noIWHXmPHxILtqtCOPIhS
|
||||
golang.org/x/xerrors v0.0.0-20240903120638-7835f813f4da/go.mod h1:NDW/Ps6MPRej6fsCIbMTohpP40sJ/P/vI1MoTEGwX90=
|
||||
gonum.org/v1/gonum v0.16.0 h1:5+ul4Swaf3ESvrOnidPp4GZbzf0mxVQpDCYUQE7OJfk=
|
||||
gonum.org/v1/gonum v0.16.0/go.mod h1:fef3am4MQ93R2HHpKnLk4/Tbh/s0+wqD5nfa6Pnwy4E=
|
||||
google.golang.org/api v0.257.0 h1:8Y0lzvHlZps53PEaw+G29SsQIkuKrumGWs9puiexNAA=
|
||||
google.golang.org/api v0.257.0/go.mod h1:4eJrr+vbVaZSqs7vovFd1Jb/A6ml6iw2e6FBYf3GAO4=
|
||||
google.golang.org/api v0.258.0 h1:IKo1j5FBlN74fe5isA2PVozN3Y5pwNKriEgAXPOkDAc=
|
||||
google.golang.org/api v0.258.0/go.mod h1:qhOMTQEZ6lUps63ZNq9jhODswwjkjYYguA7fA3TBFww=
|
||||
google.golang.org/genproto v0.0.0-20250715232539-7130f93afb79 h1:Nt6z9UHqSlIdIGJdz6KhTIs2VRx/iOsA5iE8bmQNcxs=
|
||||
google.golang.org/genproto v0.0.0-20250715232539-7130f93afb79/go.mod h1:kTmlBHMPqR5uCZPBvwa2B18mvubkjyY3CRLI0c6fj0s=
|
||||
google.golang.org/genproto v0.0.0-20251202230838-ff82c1b0f217 h1:GvESR9BIyHUahIb0NcTum6itIWtdoglGX+rnGxm2934=
|
||||
google.golang.org/genproto v0.0.0-20251202230838-ff82c1b0f217/go.mod h1:yJ2HH4EHEDTd3JiLmhds6NkJ17ITVYOdV3m3VKOnws0=
|
||||
google.golang.org/genproto/googleapis/api v0.0.0-20251022142026-3a174f9686a8 h1:mepRgnBZa07I4TRuomDE4sTIYieg/osKmzIf4USdWS4=
|
||||
google.golang.org/genproto/googleapis/api v0.0.0-20251022142026-3a174f9686a8/go.mod h1:fDMmzKV90WSg1NbozdqrE64fkuTv6mlq2zxo9ad+3yo=
|
||||
google.golang.org/genproto/googleapis/api v0.0.0-20251202230838-ff82c1b0f217 h1:fCvbg86sFXwdrl5LgVcTEvNC+2txB5mgROGmRL5mrls=
|
||||
google.golang.org/genproto/googleapis/api v0.0.0-20251202230838-ff82c1b0f217/go.mod h1:+rXWjjaukWZun3mLfjmVnQi18E1AsFbDN9QdJ5YXLto=
|
||||
google.golang.org/genproto/googleapis/rpc v0.0.0-20251202230838-ff82c1b0f217 h1:gRkg/vSppuSQoDjxyiGfN4Upv/h/DQmIR10ZU8dh4Ww=
|
||||
google.golang.org/genproto/googleapis/rpc v0.0.0-20251202230838-ff82c1b0f217/go.mod h1:7i2o+ce6H/6BluujYR+kqX3GKH+dChPTQU19wjRPiGk=
|
||||
google.golang.org/genproto/googleapis/rpc v0.0.0-20251222181119-0a764e51fe1b h1:Mv8VFug0MP9e5vUxfBcE3vUkV6CImK3cMNMIDFjmzxU=
|
||||
google.golang.org/genproto/googleapis/rpc v0.0.0-20251222181119-0a764e51fe1b/go.mod h1:j9x/tPzZkyxcgEFkiKEEGxfvyumM01BEtsW8xzOahRQ=
|
||||
google.golang.org/grpc v1.77.0 h1:wVVY6/8cGA6vvffn+wWK5ToddbgdU3d8MNENr4evgXM=
|
||||
google.golang.org/grpc v1.77.0/go.mod h1:z0BY1iVj0q8E1uSQCjL9cppRj+gnZjzDnzV0dHhrNig=
|
||||
google.golang.org/grpc v1.78.0 h1:K1XZG/yGDJnzMdd/uZHAkVqJE+xIDOcmdSFZkBUicNc=
|
||||
google.golang.org/grpc v1.78.0/go.mod h1:I47qjTo4OKbMkjA/aOOwxDIiPSBofUtQUI5EfpWvW7U=
|
||||
google.golang.org/protobuf v1.36.10 h1:AYd7cD/uASjIL6Q9LiTjz8JLcrh/88q5UObnmY3aOOE=
|
||||
google.golang.org/protobuf v1.36.10/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j23XfzDpco=
|
||||
google.golang.org/protobuf v1.36.11 h1:fV6ZwhNocDyBLK0dj+fg8ektcVegBBuEolpbTQyBNVE=
|
||||
google.golang.org/protobuf v1.36.11/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j23XfzDpco=
|
||||
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
||||
@@ -567,6 +648,8 @@ modernc.org/gc/v3 v3.1.1 h1:k8T3gkXWY9sEiytKhcgyiZ2L0DTyCQ/nvX+LoCljoRE=
|
||||
modernc.org/gc/v3 v3.1.1/go.mod h1:HFK/6AGESC7Ex+EZJhJ2Gni6cTaYpSMmU/cT9RmlfYY=
|
||||
modernc.org/goabi0 v0.2.0 h1:HvEowk7LxcPd0eq6mVOAEMai46V+i7Jrj13t4AzuNks=
|
||||
modernc.org/goabi0 v0.2.0/go.mod h1:CEFRnnJhKvWT1c1JTI3Avm+tgOWbkOu5oPA8eH8LnMI=
|
||||
modernc.org/libc v1.67.1 h1:bFaqOaa5/zbWYJo8aW0tXPX21hXsngG2M7mckCnFSVk=
|
||||
modernc.org/libc v1.67.1/go.mod h1:QvvnnJ5P7aitu0ReNpVIEyesuhmDLQ8kaEoyMjIFZJA=
|
||||
modernc.org/libc v1.67.2 h1:ZbNmly1rcbjhot5jlOZG0q4p5VwFfjwWqZ5rY2xxOXo=
|
||||
modernc.org/libc v1.67.2/go.mod h1:QvvnnJ5P7aitu0ReNpVIEyesuhmDLQ8kaEoyMjIFZJA=
|
||||
modernc.org/mathutil v1.7.1 h1:GCZVGXdaN8gTqB1Mf/usp1Y/hSqgI2vAGGP4jZMCxOU=
|
||||
@@ -577,6 +660,8 @@ modernc.org/opt v0.1.4 h1:2kNGMRiUjrp4LcaPuLY2PzUfqM/w9N23quVwhKt5Qm8=
|
||||
modernc.org/opt v0.1.4/go.mod h1:03fq9lsNfvkYSfxrfUhZCWPk1lm4cq4N+Bh//bEtgns=
|
||||
modernc.org/sortutil v1.2.1 h1:+xyoGf15mM3NMlPDnFqrteY07klSFxLElE2PVuWIJ7w=
|
||||
modernc.org/sortutil v1.2.1/go.mod h1:7ZI3a3REbai7gzCLcotuw9AC4VZVpYMjDzETGsSMqJE=
|
||||
modernc.org/sqlite v1.40.1 h1:VfuXcxcUWWKRBuP8+BR9L7VnmusMgBNNnBYGEe9w/iY=
|
||||
modernc.org/sqlite v1.40.1/go.mod h1:9fjQZ0mB1LLP0GYrp39oOJXx/I2sxEnZtzCmEQIKvGE=
|
||||
modernc.org/sqlite v1.41.0 h1:bJXddp4ZpsqMsNN1vS0jWo4IJTZzb8nWpcgvyCFG9Ck=
|
||||
modernc.org/sqlite v1.41.0/go.mod h1:9fjQZ0mB1LLP0GYrp39oOJXx/I2sxEnZtzCmEQIKvGE=
|
||||
modernc.org/strutil v1.2.1 h1:UneZBkQA+DX2Rp35KcM69cSsNES9ly8mQWD71HKlOA0=
|
||||
|
||||
@@ -14,6 +14,7 @@ type contextKeys struct {
 var (
 	ContextUser      = &contextKeys{name: "User"}
 	ContextUserToken = &contextKeys{name: "UserToken"}
+	ContextTenant    = &contextKeys{name: "Tenant"}
 )
 
 type Context struct {
@@ -33,10 +34,14 @@ type Context struct {
 // This extracts the users from the context and embeds it into the ServiceContext struct
 func NewContext(ctx context.Context) Context {
 	user := UseUserCtx(ctx)
+	gid := UseTenantCtx(ctx)
+	if gid == uuid.Nil && user != nil {
+		gid = user.DefaultGroupID
+	}
 	return Context{
 		Context: ctx,
 		UID:     user.ID,
-		GID:     user.GroupID,
+		GID:     gid,
 		User:    user,
 	}
 }
@@ -64,3 +69,17 @@ func UseTokenCtx(ctx context.Context) string {
 	}
 	return ""
 }
+
+// UseTenantCtx is a helper function that returns the tenant group ID from the context.
+// Returns uuid.Nil if not set.
+func UseTenantCtx(ctx context.Context) uuid.UUID {
+	if val := ctx.Value(ContextTenant); val != nil {
+		return val.(uuid.UUID)
+	}
+	return uuid.Nil
+}
+
+// SetTenantCtx is a helper function that sets the ContextTenant in the context.
+func SetTenantCtx(ctx context.Context, tenantID uuid.UUID) context.Context {
+	return context.WithValue(ctx, ContextTenant, tenantID)
+}
@@ -14,7 +14,7 @@ type GroupService struct {
 
 func (svc *GroupService) UpdateGroup(ctx Context, data repo.GroupUpdate) (repo.Group, error) {
 	if data.Name == "" {
-		data.Name = ctx.User.GroupName
+		return repo.Group{}, errors.New("group name cannot be empty")
 	}
 
 	if data.Currency == "" {
@@ -81,12 +81,12 @@ func (svc *UserService) RegisterUser(ctx context.Context, data UserRegistration)

hashed, _ := hasher.HashPassword(data.Password)
usrCreate := repo.UserCreate{
Name:        data.Name,
Email:       data.Email,
Password:    &hashed,
IsSuperuser: false,
GroupID:     group.ID,
IsOwner:     creatingGroup,
Name:           data.Name,
Email:          data.Email,
Password:       &hashed,
IsSuperuser:    false,
DefaultGroupID: group.ID,
IsOwner:        creatingGroup,
}

usr, err := svc.repos.Users.Create(ctx, usrCreate)
@@ -99,7 +99,7 @@ func (svc *UserService) RegisterUser(ctx context.Context, data UserRegistration)
if creatingGroup {
log.Debug().Msg("creating default labels")
for _, label := range defaultLabels() {
_, err := svc.repos.Labels.Create(ctx, usr.GroupID, label)
_, err := svc.repos.Labels.Create(ctx, usr.DefaultGroupID, label)
if err != nil {
return repo.UserOut{}, err
}
@@ -107,7 +107,7 @@ func (svc *UserService) RegisterUser(ctx context.Context, data UserRegistration)

log.Debug().Msg("creating default locations")
for _, location := range defaultLocations() {
_, err := svc.repos.Locations.Create(ctx, usr.GroupID, location)
_, err := svc.repos.Locations.Create(ctx, usr.DefaultGroupID, location)
if err != nil {
return repo.UserOut{}, err
}
@@ -287,12 +287,12 @@ func (svc *UserService) registerOIDCUser(ctx context.Context, issuer, subject, e
}

usrCreate := repo.UserCreate{
Name:        name,
Email:       email,
Password:    nil,
IsSuperuser: false,
GroupID:     group.ID,
IsOwner:     true,
Name:           name,
Email:          email,
Password:       nil,
IsSuperuser:    false,
DefaultGroupID: group.ID,
IsOwner:        true,
}

entUser, err := svc.repos.Users.CreateWithOIDC(ctx, usrCreate, issuer, subject)

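The same field rename applies to any other caller that builds a repo.UserCreate. The sketch below is not part of the diff: the helper, its parameters, and the return type are hypothetical (the return type is inferred from the error paths above); only the struct fields and the repository call mirror the hunks. It would live next to RegisterUser in the same services package.

```go
// createMemberInGroup is a hypothetical caller showing the renamed field:
// new users now carry a DefaultGroupID instead of a single GroupID.
func (svc *UserService) createMemberInGroup(ctx context.Context, name, email, password string, groupID uuid.UUID) (repo.UserOut, error) {
	hashed, _ := hasher.HashPassword(password)
	return svc.repos.Users.Create(ctx, repo.UserCreate{
		Name:           name,
		Email:          email,
		Password:       &hashed,
		IsSuperuser:    false,
		DefaultGroupID: groupID, // replaces the old GroupID field
		IsOwner:        false,
	})
}
```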
backend/internal/data/ent/client.go (generated, 6 changes)
@@ -2711,15 +2711,15 @@ func (c *UserClient) GetX(ctx context.Context, id uuid.UUID) *User {
return obj
}

// QueryGroup queries the group edge of a User.
func (c *UserClient) QueryGroup(_m *User) *GroupQuery {
// QueryGroups queries the groups edge of a User.
func (c *UserClient) QueryGroups(_m *User) *GroupQuery {
query := (&GroupClient{config: c.config}).Query()
query.path = func(context.Context) (fromV *sql.Selector, _ error) {
id := _m.ID
step := sqlgraph.NewStep(
sqlgraph.From(user.Table, user.FieldID, id),
sqlgraph.To(group.Table, group.FieldID),
sqlgraph.Edge(sqlgraph.M2O, true, user.GroupTable, user.GroupColumn),
sqlgraph.Edge(sqlgraph.O2M, false, user.GroupsTable, user.GroupsColumn),
)
fromV = sqlgraph.Neighbors(_m.driver.Dialect(), step)
return fromV, nil

backend/internal/data/ent/group.go (generated, 10 changes)
@@ -29,6 +29,7 @@ type Group struct {
|
||||
// Edges holds the relations/edges for other nodes in the graph.
|
||||
// The values are being populated by the GroupQuery when eager-loading is set.
|
||||
Edges GroupEdges `json:"edges"`
|
||||
user_groups *uuid.UUID
|
||||
selectValues sql.SelectValues
|
||||
}
|
||||
|
||||
@@ -127,6 +128,8 @@ func (*Group) scanValues(columns []string) ([]any, error) {
|
||||
values[i] = new(sql.NullTime)
|
||||
case group.FieldID:
|
||||
values[i] = new(uuid.UUID)
|
||||
case group.ForeignKeys[0]: // user_groups
|
||||
values[i] = &sql.NullScanner{S: new(uuid.UUID)}
|
||||
default:
|
||||
values[i] = new(sql.UnknownType)
|
||||
}
|
||||
@@ -172,6 +175,13 @@ func (_m *Group) assignValues(columns []string, values []any) error {
|
||||
} else if value.Valid {
|
||||
_m.Currency = value.String
|
||||
}
|
||||
case group.ForeignKeys[0]:
|
||||
if value, ok := values[i].(*sql.NullScanner); !ok {
|
||||
return fmt.Errorf("unexpected type %T for field user_groups", values[i])
|
||||
} else if value.Valid {
|
||||
_m.user_groups = new(uuid.UUID)
|
||||
*_m.user_groups = *value.S.(*uuid.UUID)
|
||||
}
|
||||
default:
|
||||
_m.selectValues.Set(columns[i], values[i])
|
||||
}
|
||||
|
||||
backend/internal/data/ent/group/group.go (generated, 11 changes)
@@ -99,6 +99,12 @@ var Columns = []string{
|
||||
FieldCurrency,
|
||||
}
|
||||
|
||||
// ForeignKeys holds the SQL foreign-keys that are owned by the "groups"
|
||||
// table and are not defined as standalone fields in the schema.
|
||||
var ForeignKeys = []string{
|
||||
"user_groups",
|
||||
}
|
||||
|
||||
// ValidColumn reports if the column name is valid (part of the table columns).
|
||||
func ValidColumn(column string) bool {
|
||||
for i := range Columns {
|
||||
@@ -106,6 +112,11 @@ func ValidColumn(column string) bool {
|
||||
return true
|
||||
}
|
||||
}
|
||||
for i := range ForeignKeys {
|
||||
if column == ForeignKeys[i] {
|
||||
return true
|
||||
}
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
|
||||
backend/internal/data/ent/group_query.go (generated, 5 changes)
@@ -38,6 +38,7 @@ type GroupQuery struct {
|
||||
withInvitationTokens *GroupInvitationTokenQuery
|
||||
withNotifiers *NotifierQuery
|
||||
withItemTemplates *ItemTemplateQuery
|
||||
withFKs bool
|
||||
// intermediate query (i.e. traversal path).
|
||||
sql *sql.Selector
|
||||
path func(context.Context) (*sql.Selector, error)
|
||||
@@ -587,6 +588,7 @@ func (_q *GroupQuery) prepareQuery(ctx context.Context) error {
|
||||
func (_q *GroupQuery) sqlAll(ctx context.Context, hooks ...queryHook) ([]*Group, error) {
|
||||
var (
|
||||
nodes = []*Group{}
|
||||
withFKs = _q.withFKs
|
||||
_spec = _q.querySpec()
|
||||
loadedTypes = [7]bool{
|
||||
_q.withUsers != nil,
|
||||
@@ -598,6 +600,9 @@ func (_q *GroupQuery) sqlAll(ctx context.Context, hooks ...queryHook) ([]*Group,
|
||||
_q.withItemTemplates != nil,
|
||||
}
|
||||
)
|
||||
if withFKs {
|
||||
_spec.Node.Columns = append(_spec.Node.Columns, group.ForeignKeys...)
|
||||
}
|
||||
_spec.ScanValues = func(columns []string) ([]any, error) {
|
||||
return (*Group).scanValues(nil, columns)
|
||||
}
|
||||
|
||||
backend/internal/data/ent/item_predicates.go (generated, 5 changes)
@@ -4,7 +4,6 @@ import (
"entgo.io/ent/dialect/sql"
"github.com/sysadminsmedia/homebox/backend/internal/data/ent/item"
"github.com/sysadminsmedia/homebox/backend/internal/data/ent/predicate"
conf "github.com/sysadminsmedia/homebox/backend/internal/sys/config"
"github.com/sysadminsmedia/homebox/backend/pkgs/textutils"
)

@@ -25,7 +24,7 @@ func AccentInsensitiveContains(field string, searchValue string) predicate.Item
dialect := s.Dialect()

switch dialect {
case conf.DriverSqlite3:
case "sqlite3":
// For SQLite, we'll create a custom normalization function using REPLACE
// to handle common accented characters
normalizeFunc := buildSQLiteNormalizeExpression(s.C(field))
@@ -33,7 +32,7 @@ func AccentInsensitiveContains(field string, searchValue string) predicate.Item
"LOWER("+normalizeFunc+") LIKE ?",
"%"+normalizedSearch+"%",
))
case conf.DriverPostgres:
case "postgres":
// For PostgreSQL, use REPLACE-based normalization to avoid unaccent dependency
normalizeFunc := buildGenericNormalizeExpression(s.C(field))
// Use sql.P() for proper PostgreSQL parameter binding ($1, $2, etc.)

backend/internal/data/ent/migrate/schema.go (generated, 17 changes)
@@ -98,12 +98,21 @@ var (
{Name: "updated_at", Type: field.TypeTime},
{Name: "name", Type: field.TypeString, Size: 255},
{Name: "currency", Type: field.TypeString, Default: "usd"},
{Name: "user_groups", Type: field.TypeUUID, Nullable: true},
}
// GroupsTable holds the schema information for the "groups" table.
GroupsTable = &schema.Table{
Name: "groups",
Columns: GroupsColumns,
PrimaryKey: []*schema.Column{GroupsColumns[0]},
ForeignKeys: []*schema.ForeignKey{
{
Symbol: "groups_users_groups",
Columns: []*schema.Column{GroupsColumns[5]},
RefColumns: []*schema.Column{UsersColumns[0]},
OnDelete: schema.SetNull,
},
},
}
// GroupInvitationTokensColumns holds the columns for the "group_invitation_tokens" table.
GroupInvitationTokensColumns = []*schema.Column{
@@ -468,7 +477,8 @@ var (
{Name: "activated_on", Type: field.TypeTime, Nullable: true},
{Name: "oidc_issuer", Type: field.TypeString, Nullable: true},
{Name: "oidc_subject", Type: field.TypeString, Nullable: true},
{Name: "group_users", Type: field.TypeUUID},
{Name: "default_group_id", Type: field.TypeUUID, Nullable: true},
{Name: "group_users", Type: field.TypeUUID, Nullable: true},
}
// UsersTable holds the schema information for the "users" table.
UsersTable = &schema.Table{
@@ -478,9 +488,9 @@ var (
ForeignKeys: []*schema.ForeignKey{
{
Symbol: "users_groups_users",
Columns: []*schema.Column{UsersColumns[12]},
Columns: []*schema.Column{UsersColumns[13]},
RefColumns: []*schema.Column{GroupsColumns[0]},
OnDelete: schema.Cascade,
OnDelete: schema.SetNull,
},
},
Indexes: []*schema.Index{
@@ -541,6 +551,7 @@ func init() {
AttachmentsTable.ForeignKeys[1].RefTable = ItemsTable
AuthRolesTable.ForeignKeys[0].RefTable = AuthTokensTable
AuthTokensTable.ForeignKeys[0].RefTable = UsersTable
GroupsTable.ForeignKeys[0].RefTable = UsersTable
GroupInvitationTokensTable.ForeignKeys[0].RefTable = GroupsTable
ItemsTable.ForeignKeys[0].RefTable = GroupsTable
ItemsTable.ForeignKeys[1].RefTable = ItemsTable

backend/internal/data/ent/mutation.go (generated, 177 changes)
@@ -12583,9 +12583,11 @@ type UserMutation struct {
|
||||
activated_on *time.Time
|
||||
oidc_issuer *string
|
||||
oidc_subject *string
|
||||
default_group_id *uuid.UUID
|
||||
clearedFields map[string]struct{}
|
||||
group *uuid.UUID
|
||||
clearedgroup bool
|
||||
groups map[uuid.UUID]struct{}
|
||||
removedgroups map[uuid.UUID]struct{}
|
||||
clearedgroups bool
|
||||
auth_tokens map[uuid.UUID]struct{}
|
||||
removedauth_tokens map[uuid.UUID]struct{}
|
||||
clearedauth_tokens bool
|
||||
@@ -13149,43 +13151,107 @@ func (m *UserMutation) ResetOidcSubject() {
|
||||
delete(m.clearedFields, user.FieldOidcSubject)
|
||||
}
|
||||
|
||||
// SetGroupID sets the "group" edge to the Group entity by id.
|
||||
func (m *UserMutation) SetGroupID(id uuid.UUID) {
|
||||
m.group = &id
|
||||
// SetDefaultGroupID sets the "default_group_id" field.
|
||||
func (m *UserMutation) SetDefaultGroupID(u uuid.UUID) {
|
||||
m.default_group_id = &u
|
||||
}
|
||||
|
||||
// ClearGroup clears the "group" edge to the Group entity.
|
||||
func (m *UserMutation) ClearGroup() {
|
||||
m.clearedgroup = true
|
||||
// DefaultGroupID returns the value of the "default_group_id" field in the mutation.
|
||||
func (m *UserMutation) DefaultGroupID() (r uuid.UUID, exists bool) {
|
||||
v := m.default_group_id
|
||||
if v == nil {
|
||||
return
|
||||
}
|
||||
return *v, true
|
||||
}
|
||||
|
||||
// GroupCleared reports if the "group" edge to the Group entity was cleared.
|
||||
func (m *UserMutation) GroupCleared() bool {
|
||||
return m.clearedgroup
|
||||
// OldDefaultGroupID returns the old "default_group_id" field's value of the User entity.
|
||||
// If the User object wasn't provided to the builder, the object is fetched from the database.
|
||||
// An error is returned if the mutation operation is not UpdateOne, or the database query fails.
|
||||
func (m *UserMutation) OldDefaultGroupID(ctx context.Context) (v *uuid.UUID, err error) {
|
||||
if !m.op.Is(OpUpdateOne) {
|
||||
return v, errors.New("OldDefaultGroupID is only allowed on UpdateOne operations")
|
||||
}
|
||||
if m.id == nil || m.oldValue == nil {
|
||||
return v, errors.New("OldDefaultGroupID requires an ID field in the mutation")
|
||||
}
|
||||
oldValue, err := m.oldValue(ctx)
|
||||
if err != nil {
|
||||
return v, fmt.Errorf("querying old value for OldDefaultGroupID: %w", err)
|
||||
}
|
||||
return oldValue.DefaultGroupID, nil
|
||||
}
|
||||
|
||||
// GroupID returns the "group" edge ID in the mutation.
|
||||
func (m *UserMutation) GroupID() (id uuid.UUID, exists bool) {
|
||||
if m.group != nil {
|
||||
return *m.group, true
|
||||
// ClearDefaultGroupID clears the value of the "default_group_id" field.
|
||||
func (m *UserMutation) ClearDefaultGroupID() {
|
||||
m.default_group_id = nil
|
||||
m.clearedFields[user.FieldDefaultGroupID] = struct{}{}
|
||||
}
|
||||
|
||||
// DefaultGroupIDCleared returns if the "default_group_id" field was cleared in this mutation.
|
||||
func (m *UserMutation) DefaultGroupIDCleared() bool {
|
||||
_, ok := m.clearedFields[user.FieldDefaultGroupID]
|
||||
return ok
|
||||
}
|
||||
|
||||
// ResetDefaultGroupID resets all changes to the "default_group_id" field.
|
||||
func (m *UserMutation) ResetDefaultGroupID() {
|
||||
m.default_group_id = nil
|
||||
delete(m.clearedFields, user.FieldDefaultGroupID)
|
||||
}
|
||||
|
||||
// AddGroupIDs adds the "groups" edge to the Group entity by ids.
|
||||
func (m *UserMutation) AddGroupIDs(ids ...uuid.UUID) {
|
||||
if m.groups == nil {
|
||||
m.groups = make(map[uuid.UUID]struct{})
|
||||
}
|
||||
for i := range ids {
|
||||
m.groups[ids[i]] = struct{}{}
|
||||
}
|
||||
}
|
||||
|
||||
// ClearGroups clears the "groups" edge to the Group entity.
|
||||
func (m *UserMutation) ClearGroups() {
|
||||
m.clearedgroups = true
|
||||
}
|
||||
|
||||
// GroupsCleared reports if the "groups" edge to the Group entity was cleared.
|
||||
func (m *UserMutation) GroupsCleared() bool {
|
||||
return m.clearedgroups
|
||||
}
|
||||
|
||||
// RemoveGroupIDs removes the "groups" edge to the Group entity by IDs.
|
||||
func (m *UserMutation) RemoveGroupIDs(ids ...uuid.UUID) {
|
||||
if m.removedgroups == nil {
|
||||
m.removedgroups = make(map[uuid.UUID]struct{})
|
||||
}
|
||||
for i := range ids {
|
||||
delete(m.groups, ids[i])
|
||||
m.removedgroups[ids[i]] = struct{}{}
|
||||
}
|
||||
}
|
||||
|
||||
// RemovedGroups returns the removed IDs of the "groups" edge to the Group entity.
|
||||
func (m *UserMutation) RemovedGroupsIDs() (ids []uuid.UUID) {
|
||||
for id := range m.removedgroups {
|
||||
ids = append(ids, id)
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
// GroupIDs returns the "group" edge IDs in the mutation.
|
||||
// Note that IDs always returns len(IDs) <= 1 for unique edges, and you should use
|
||||
// GroupID instead. It exists only for internal usage by the builders.
|
||||
func (m *UserMutation) GroupIDs() (ids []uuid.UUID) {
|
||||
if id := m.group; id != nil {
|
||||
ids = append(ids, *id)
|
||||
// GroupsIDs returns the "groups" edge IDs in the mutation.
|
||||
func (m *UserMutation) GroupsIDs() (ids []uuid.UUID) {
|
||||
for id := range m.groups {
|
||||
ids = append(ids, id)
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
// ResetGroup resets all changes to the "group" edge.
|
||||
func (m *UserMutation) ResetGroup() {
|
||||
m.group = nil
|
||||
m.clearedgroup = false
|
||||
// ResetGroups resets all changes to the "groups" edge.
|
||||
func (m *UserMutation) ResetGroups() {
|
||||
m.groups = nil
|
||||
m.clearedgroups = false
|
||||
m.removedgroups = nil
|
||||
}
|
||||
|
||||
// AddAuthTokenIDs adds the "auth_tokens" edge to the AuthTokens entity by ids.
|
||||
@@ -13330,7 +13396,7 @@ func (m *UserMutation) Type() string {
|
||||
// order to get all numeric fields that were incremented/decremented, call
|
||||
// AddedFields().
|
||||
func (m *UserMutation) Fields() []string {
|
||||
fields := make([]string, 0, 11)
|
||||
fields := make([]string, 0, 12)
|
||||
if m.created_at != nil {
|
||||
fields = append(fields, user.FieldCreatedAt)
|
||||
}
|
||||
@@ -13364,6 +13430,9 @@ func (m *UserMutation) Fields() []string {
|
||||
if m.oidc_subject != nil {
|
||||
fields = append(fields, user.FieldOidcSubject)
|
||||
}
|
||||
if m.default_group_id != nil {
|
||||
fields = append(fields, user.FieldDefaultGroupID)
|
||||
}
|
||||
return fields
|
||||
}
|
||||
|
||||
@@ -13394,6 +13463,8 @@ func (m *UserMutation) Field(name string) (ent.Value, bool) {
|
||||
return m.OidcIssuer()
|
||||
case user.FieldOidcSubject:
|
||||
return m.OidcSubject()
|
||||
case user.FieldDefaultGroupID:
|
||||
return m.DefaultGroupID()
|
||||
}
|
||||
return nil, false
|
||||
}
|
||||
@@ -13425,6 +13496,8 @@ func (m *UserMutation) OldField(ctx context.Context, name string) (ent.Value, er
|
||||
return m.OldOidcIssuer(ctx)
|
||||
case user.FieldOidcSubject:
|
||||
return m.OldOidcSubject(ctx)
|
||||
case user.FieldDefaultGroupID:
|
||||
return m.OldDefaultGroupID(ctx)
|
||||
}
|
||||
return nil, fmt.Errorf("unknown User field %s", name)
|
||||
}
|
||||
@@ -13511,6 +13584,13 @@ func (m *UserMutation) SetField(name string, value ent.Value) error {
|
||||
}
|
||||
m.SetOidcSubject(v)
|
||||
return nil
|
||||
case user.FieldDefaultGroupID:
|
||||
v, ok := value.(uuid.UUID)
|
||||
if !ok {
|
||||
return fmt.Errorf("unexpected type %T for field %s", value, name)
|
||||
}
|
||||
m.SetDefaultGroupID(v)
|
||||
return nil
|
||||
}
|
||||
return fmt.Errorf("unknown User field %s", name)
|
||||
}
|
||||
@@ -13553,6 +13633,9 @@ func (m *UserMutation) ClearedFields() []string {
|
||||
if m.FieldCleared(user.FieldOidcSubject) {
|
||||
fields = append(fields, user.FieldOidcSubject)
|
||||
}
|
||||
if m.FieldCleared(user.FieldDefaultGroupID) {
|
||||
fields = append(fields, user.FieldDefaultGroupID)
|
||||
}
|
||||
return fields
|
||||
}
|
||||
|
||||
@@ -13579,6 +13662,9 @@ func (m *UserMutation) ClearField(name string) error {
|
||||
case user.FieldOidcSubject:
|
||||
m.ClearOidcSubject()
|
||||
return nil
|
||||
case user.FieldDefaultGroupID:
|
||||
m.ClearDefaultGroupID()
|
||||
return nil
|
||||
}
|
||||
return fmt.Errorf("unknown User nullable field %s", name)
|
||||
}
|
||||
@@ -13620,6 +13706,9 @@ func (m *UserMutation) ResetField(name string) error {
|
||||
case user.FieldOidcSubject:
|
||||
m.ResetOidcSubject()
|
||||
return nil
|
||||
case user.FieldDefaultGroupID:
|
||||
m.ResetDefaultGroupID()
|
||||
return nil
|
||||
}
|
||||
return fmt.Errorf("unknown User field %s", name)
|
||||
}
|
||||
@@ -13627,8 +13716,8 @@ func (m *UserMutation) ResetField(name string) error {
|
||||
// AddedEdges returns all edge names that were set/added in this mutation.
|
||||
func (m *UserMutation) AddedEdges() []string {
|
||||
edges := make([]string, 0, 3)
|
||||
if m.group != nil {
|
||||
edges = append(edges, user.EdgeGroup)
|
||||
if m.groups != nil {
|
||||
edges = append(edges, user.EdgeGroups)
|
||||
}
|
||||
if m.auth_tokens != nil {
|
||||
edges = append(edges, user.EdgeAuthTokens)
|
||||
@@ -13643,10 +13732,12 @@ func (m *UserMutation) AddedEdges() []string {
|
||||
// name in this mutation.
|
||||
func (m *UserMutation) AddedIDs(name string) []ent.Value {
|
||||
switch name {
|
||||
case user.EdgeGroup:
|
||||
if id := m.group; id != nil {
|
||||
return []ent.Value{*id}
|
||||
case user.EdgeGroups:
|
||||
ids := make([]ent.Value, 0, len(m.groups))
|
||||
for id := range m.groups {
|
||||
ids = append(ids, id)
|
||||
}
|
||||
return ids
|
||||
case user.EdgeAuthTokens:
|
||||
ids := make([]ent.Value, 0, len(m.auth_tokens))
|
||||
for id := range m.auth_tokens {
|
||||
@@ -13666,6 +13757,9 @@ func (m *UserMutation) AddedIDs(name string) []ent.Value {
|
||||
// RemovedEdges returns all edge names that were removed in this mutation.
|
||||
func (m *UserMutation) RemovedEdges() []string {
|
||||
edges := make([]string, 0, 3)
|
||||
if m.removedgroups != nil {
|
||||
edges = append(edges, user.EdgeGroups)
|
||||
}
|
||||
if m.removedauth_tokens != nil {
|
||||
edges = append(edges, user.EdgeAuthTokens)
|
||||
}
|
||||
@@ -13679,6 +13773,12 @@ func (m *UserMutation) RemovedEdges() []string {
|
||||
// the given name in this mutation.
|
||||
func (m *UserMutation) RemovedIDs(name string) []ent.Value {
|
||||
switch name {
|
||||
case user.EdgeGroups:
|
||||
ids := make([]ent.Value, 0, len(m.removedgroups))
|
||||
for id := range m.removedgroups {
|
||||
ids = append(ids, id)
|
||||
}
|
||||
return ids
|
||||
case user.EdgeAuthTokens:
|
||||
ids := make([]ent.Value, 0, len(m.removedauth_tokens))
|
||||
for id := range m.removedauth_tokens {
|
||||
@@ -13698,8 +13798,8 @@ func (m *UserMutation) RemovedIDs(name string) []ent.Value {
|
||||
// ClearedEdges returns all edge names that were cleared in this mutation.
|
||||
func (m *UserMutation) ClearedEdges() []string {
|
||||
edges := make([]string, 0, 3)
|
||||
if m.clearedgroup {
|
||||
edges = append(edges, user.EdgeGroup)
|
||||
if m.clearedgroups {
|
||||
edges = append(edges, user.EdgeGroups)
|
||||
}
|
||||
if m.clearedauth_tokens {
|
||||
edges = append(edges, user.EdgeAuthTokens)
|
||||
@@ -13714,8 +13814,8 @@ func (m *UserMutation) ClearedEdges() []string {
|
||||
// was cleared in this mutation.
|
||||
func (m *UserMutation) EdgeCleared(name string) bool {
|
||||
switch name {
|
||||
case user.EdgeGroup:
|
||||
return m.clearedgroup
|
||||
case user.EdgeGroups:
|
||||
return m.clearedgroups
|
||||
case user.EdgeAuthTokens:
|
||||
return m.clearedauth_tokens
|
||||
case user.EdgeNotifiers:
|
||||
@@ -13728,9 +13828,6 @@ func (m *UserMutation) EdgeCleared(name string) bool {
|
||||
// if that edge is not defined in the schema.
|
||||
func (m *UserMutation) ClearEdge(name string) error {
|
||||
switch name {
|
||||
case user.EdgeGroup:
|
||||
m.ClearGroup()
|
||||
return nil
|
||||
}
|
||||
return fmt.Errorf("unknown User unique edge %s", name)
|
||||
}
|
||||
@@ -13739,8 +13836,8 @@ func (m *UserMutation) ClearEdge(name string) error {
|
||||
// It returns an error if the edge is not defined in the schema.
|
||||
func (m *UserMutation) ResetEdge(name string) error {
|
||||
switch name {
|
||||
case user.EdgeGroup:
|
||||
m.ResetGroup()
|
||||
case user.EdgeGroups:
|
||||
m.ResetGroups()
|
||||
return nil
|
||||
case user.EdgeAuthTokens:
|
||||
m.ResetAuthTokens()
|
||||
|
||||
@@ -42,7 +42,7 @@ func (Group) Edges() []ent.Edge {
}

return []ent.Edge{
owned("users", User.Type),
edge.To("users", User.Type),
owned("locations", Location.Type),
owned("items", Item.Type),
owned("labels", Label.Type),
@@ -72,14 +72,14 @@ func (g GroupMixin) Fields() []ent.Field {
}

func (g GroupMixin) Edges() []ent.Edge {
edge := edge.From("group", Group.Type).
e := edge.From("group", Group.Type).
Ref(g.ref).
Unique().
Required()

if g.field != "" {
edge = edge.Field(g.field)
e = e.Field(g.field)
}

return []ent.Edge{edge}
return []ent.Edge{e}
}

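For context, GroupMixin is what gives every tenant-scoped entity its owning group edge. Below is a hedged sketch (not part of the diff) of how a schema consumes it: the Widget entity, the field option value, and the mixins import path are illustrative; only the GroupMixin{ref: ...} shape comes from this file.

```go
package schema

import (
	"entgo.io/ent"

	// Assumed import path for the shared mixins used elsewhere in this schema package.
	"github.com/sysadminsmedia/homebox/backend/internal/data/ent/schema/mixins"
)

// Widget is a hypothetical tenant-scoped entity. GroupMixin contributes the
// owning "group" edge (and an explicit FK field when `field` is set); `ref`
// must match an edge declared on Group for this entity type.
type Widget struct {
	ent.Schema
}

func (Widget) Mixin() []ent.Mixin {
	return []ent.Mixin{
		mixins.BaseMixin{},
		GroupMixin{ref: "widgets", field: "group_id"},
	}
}
```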
@@ -19,7 +19,6 @@ type User struct {
func (User) Mixin() []ent.Mixin {
return []ent.Mixin{
mixins.BaseMixin{},
GroupMixin{ref: "users"},
}
}

@@ -54,6 +53,10 @@ func (User) Fields() []ent.Field {
field.String("oidc_subject").
Optional().
Nillable(),
// default_group_id is the user's primary tenant/group
field.UUID("default_group_id", uuid.UUID{}).
Optional().
Nillable(),
}
}

@@ -66,6 +69,7 @@ func (User) Indexes() []ent.Index {
// Edges of the User.
func (User) Edges() []ent.Edge {
return []ent.Edge{
edge.To("groups", Group.Type),
edge.To("auth_tokens", AuthTokens.Type).
Annotations(entsql.Annotation{
OnDelete: entsql.Cascade,

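A hedged sketch (not part of the diff) of the generated API that results from this schema change. The client wiring, IDs, and the helper function are illustrative; WithGroups, AddGroupIDs, DefaultGroupIDNotNil, and Edges.Groups all appear in the generated hunks elsewhere in this compare.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/google/uuid"

	"github.com/sysadminsmedia/homebox/backend/internal/data/ent"
	"github.com/sysadminsmedia/homebox/backend/internal/data/ent/user"
)

// listUserGroups eager-loads the new "groups" edge, filters on the new
// default_group_id field, and then attaches the user to one more group
// through the generated update builder.
func listUserGroups(ctx context.Context, client *ent.Client, uid, extraGroup uuid.UUID) error {
	u, err := client.User.Query().
		Where(user.ID(uid), user.DefaultGroupIDNotNil()).
		WithGroups().
		Only(ctx)
	if err != nil {
		return err
	}
	fmt.Println("member of", len(u.Edges.Groups), "groups")

	_, err = client.User.UpdateOne(u).AddGroupIDs(extraGroup).Save(ctx)
	return err
}

func main() { log.Println("sketch only; needs a configured *ent.Client") }
```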
backend/internal/data/ent/user.go (generated, 41 changes)
@@ -10,7 +10,6 @@ import (
|
||||
"entgo.io/ent"
|
||||
"entgo.io/ent/dialect/sql"
|
||||
"github.com/google/uuid"
|
||||
"github.com/sysadminsmedia/homebox/backend/internal/data/ent/group"
|
||||
"github.com/sysadminsmedia/homebox/backend/internal/data/ent/user"
|
||||
)
|
||||
|
||||
@@ -41,6 +40,8 @@ type User struct {
|
||||
OidcIssuer *string `json:"oidc_issuer,omitempty"`
|
||||
// OidcSubject holds the value of the "oidc_subject" field.
|
||||
OidcSubject *string `json:"oidc_subject,omitempty"`
|
||||
// DefaultGroupID holds the value of the "default_group_id" field.
|
||||
DefaultGroupID *uuid.UUID `json:"default_group_id,omitempty"`
|
||||
// Edges holds the relations/edges for other nodes in the graph.
|
||||
// The values are being populated by the UserQuery when eager-loading is set.
|
||||
Edges UserEdges `json:"edges"`
|
||||
@@ -50,8 +51,8 @@ type User struct {
|
||||
|
||||
// UserEdges holds the relations/edges for other nodes in the graph.
|
||||
type UserEdges struct {
|
||||
// Group holds the value of the group edge.
|
||||
Group *Group `json:"group,omitempty"`
|
||||
// Groups holds the value of the groups edge.
|
||||
Groups []*Group `json:"groups,omitempty"`
|
||||
// AuthTokens holds the value of the auth_tokens edge.
|
||||
AuthTokens []*AuthTokens `json:"auth_tokens,omitempty"`
|
||||
// Notifiers holds the value of the notifiers edge.
|
||||
@@ -61,15 +62,13 @@ type UserEdges struct {
|
||||
loadedTypes [3]bool
|
||||
}
|
||||
|
||||
// GroupOrErr returns the Group value or an error if the edge
|
||||
// was not loaded in eager-loading, or loaded but was not found.
|
||||
func (e UserEdges) GroupOrErr() (*Group, error) {
|
||||
if e.Group != nil {
|
||||
return e.Group, nil
|
||||
} else if e.loadedTypes[0] {
|
||||
return nil, &NotFoundError{label: group.Label}
|
||||
// GroupsOrErr returns the Groups value or an error if the edge
|
||||
// was not loaded in eager-loading.
|
||||
func (e UserEdges) GroupsOrErr() ([]*Group, error) {
|
||||
if e.loadedTypes[0] {
|
||||
return e.Groups, nil
|
||||
}
|
||||
return nil, &NotLoadedError{edge: "group"}
|
||||
return nil, &NotLoadedError{edge: "groups"}
|
||||
}
|
||||
|
||||
// AuthTokensOrErr returns the AuthTokens value or an error if the edge
|
||||
@@ -95,6 +94,8 @@ func (*User) scanValues(columns []string) ([]any, error) {
|
||||
values := make([]any, len(columns))
|
||||
for i := range columns {
|
||||
switch columns[i] {
|
||||
case user.FieldDefaultGroupID:
|
||||
values[i] = &sql.NullScanner{S: new(uuid.UUID)}
|
||||
case user.FieldIsSuperuser, user.FieldSuperuser:
|
||||
values[i] = new(sql.NullBool)
|
||||
case user.FieldName, user.FieldEmail, user.FieldPassword, user.FieldRole, user.FieldOidcIssuer, user.FieldOidcSubject:
|
||||
@@ -195,6 +196,13 @@ func (_m *User) assignValues(columns []string, values []any) error {
|
||||
_m.OidcSubject = new(string)
|
||||
*_m.OidcSubject = value.String
|
||||
}
|
||||
case user.FieldDefaultGroupID:
|
||||
if value, ok := values[i].(*sql.NullScanner); !ok {
|
||||
return fmt.Errorf("unexpected type %T for field default_group_id", values[i])
|
||||
} else if value.Valid {
|
||||
_m.DefaultGroupID = new(uuid.UUID)
|
||||
*_m.DefaultGroupID = *value.S.(*uuid.UUID)
|
||||
}
|
||||
case user.ForeignKeys[0]:
|
||||
if value, ok := values[i].(*sql.NullScanner); !ok {
|
||||
return fmt.Errorf("unexpected type %T for field group_users", values[i])
|
||||
@@ -215,9 +223,9 @@ func (_m *User) Value(name string) (ent.Value, error) {
|
||||
return _m.selectValues.Get(name)
|
||||
}
|
||||
|
||||
// QueryGroup queries the "group" edge of the User entity.
|
||||
func (_m *User) QueryGroup() *GroupQuery {
|
||||
return NewUserClient(_m.config).QueryGroup(_m)
|
||||
// QueryGroups queries the "groups" edge of the User entity.
|
||||
func (_m *User) QueryGroups() *GroupQuery {
|
||||
return NewUserClient(_m.config).QueryGroups(_m)
|
||||
}
|
||||
|
||||
// QueryAuthTokens queries the "auth_tokens" edge of the User entity.
|
||||
@@ -288,6 +296,11 @@ func (_m *User) String() string {
|
||||
builder.WriteString("oidc_subject=")
|
||||
builder.WriteString(*v)
|
||||
}
|
||||
builder.WriteString(", ")
|
||||
if v := _m.DefaultGroupID; v != nil {
|
||||
builder.WriteString("default_group_id=")
|
||||
builder.WriteString(fmt.Sprintf("%v", *v))
|
||||
}
|
||||
builder.WriteByte(')')
|
||||
return builder.String()
|
||||
}
|
||||
|
||||
backend/internal/data/ent/user/user.go (generated, 43 changes)
@@ -38,21 +38,23 @@ const (
|
||||
FieldOidcIssuer = "oidc_issuer"
|
||||
// FieldOidcSubject holds the string denoting the oidc_subject field in the database.
|
||||
FieldOidcSubject = "oidc_subject"
|
||||
// EdgeGroup holds the string denoting the group edge name in mutations.
|
||||
EdgeGroup = "group"
|
||||
// FieldDefaultGroupID holds the string denoting the default_group_id field in the database.
|
||||
FieldDefaultGroupID = "default_group_id"
|
||||
// EdgeGroups holds the string denoting the groups edge name in mutations.
|
||||
EdgeGroups = "groups"
|
||||
// EdgeAuthTokens holds the string denoting the auth_tokens edge name in mutations.
|
||||
EdgeAuthTokens = "auth_tokens"
|
||||
// EdgeNotifiers holds the string denoting the notifiers edge name in mutations.
|
||||
EdgeNotifiers = "notifiers"
|
||||
// Table holds the table name of the user in the database.
|
||||
Table = "users"
|
||||
// GroupTable is the table that holds the group relation/edge.
|
||||
GroupTable = "users"
|
||||
// GroupInverseTable is the table name for the Group entity.
|
||||
// GroupsTable is the table that holds the groups relation/edge.
|
||||
GroupsTable = "groups"
|
||||
// GroupsInverseTable is the table name for the Group entity.
|
||||
// It exists in this package in order to avoid circular dependency with the "group" package.
|
||||
GroupInverseTable = "groups"
|
||||
// GroupColumn is the table column denoting the group relation/edge.
|
||||
GroupColumn = "group_users"
|
||||
GroupsInverseTable = "groups"
|
||||
// GroupsColumn is the table column denoting the groups relation/edge.
|
||||
GroupsColumn = "user_groups"
|
||||
// AuthTokensTable is the table that holds the auth_tokens relation/edge.
|
||||
AuthTokensTable = "auth_tokens"
|
||||
// AuthTokensInverseTable is the table name for the AuthTokens entity.
|
||||
@@ -83,6 +85,7 @@ var Columns = []string{
|
||||
FieldActivatedOn,
|
||||
FieldOidcIssuer,
|
||||
FieldOidcSubject,
|
||||
FieldDefaultGroupID,
|
||||
}
|
||||
|
||||
// ForeignKeys holds the SQL foreign-keys that are owned by the "users"
|
||||
@@ -216,10 +219,22 @@ func ByOidcSubject(opts ...sql.OrderTermOption) OrderOption {
|
||||
return sql.OrderByField(FieldOidcSubject, opts...).ToFunc()
|
||||
}
|
||||
|
||||
// ByGroupField orders the results by group field.
|
||||
func ByGroupField(field string, opts ...sql.OrderTermOption) OrderOption {
|
||||
// ByDefaultGroupID orders the results by the default_group_id field.
|
||||
func ByDefaultGroupID(opts ...sql.OrderTermOption) OrderOption {
|
||||
return sql.OrderByField(FieldDefaultGroupID, opts...).ToFunc()
|
||||
}
|
||||
|
||||
// ByGroupsCount orders the results by groups count.
|
||||
func ByGroupsCount(opts ...sql.OrderTermOption) OrderOption {
|
||||
return func(s *sql.Selector) {
|
||||
sqlgraph.OrderByNeighborTerms(s, newGroupStep(), sql.OrderByField(field, opts...))
|
||||
sqlgraph.OrderByNeighborsCount(s, newGroupsStep(), opts...)
|
||||
}
|
||||
}
|
||||
|
||||
// ByGroups orders the results by groups terms.
|
||||
func ByGroups(term sql.OrderTerm, terms ...sql.OrderTerm) OrderOption {
|
||||
return func(s *sql.Selector) {
|
||||
sqlgraph.OrderByNeighborTerms(s, newGroupsStep(), append([]sql.OrderTerm{term}, terms...)...)
|
||||
}
|
||||
}
|
||||
|
||||
@@ -250,11 +265,11 @@ func ByNotifiers(term sql.OrderTerm, terms ...sql.OrderTerm) OrderOption {
|
||||
sqlgraph.OrderByNeighborTerms(s, newNotifiersStep(), append([]sql.OrderTerm{term}, terms...)...)
|
||||
}
|
||||
}
|
||||
func newGroupStep() *sqlgraph.Step {
|
||||
func newGroupsStep() *sqlgraph.Step {
|
||||
return sqlgraph.NewStep(
|
||||
sqlgraph.From(Table, FieldID),
|
||||
sqlgraph.To(GroupInverseTable, FieldID),
|
||||
sqlgraph.Edge(sqlgraph.M2O, true, GroupTable, GroupColumn),
|
||||
sqlgraph.To(GroupsInverseTable, FieldID),
|
||||
sqlgraph.Edge(sqlgraph.O2M, false, GroupsTable, GroupsColumn),
|
||||
)
|
||||
}
|
||||
func newAuthTokensStep() *sqlgraph.Step {
|
||||
|
||||
backend/internal/data/ent/user/where.go (generated, 67 changes)
@@ -106,6 +106,11 @@ func OidcSubject(v string) predicate.User {
|
||||
return predicate.User(sql.FieldEQ(FieldOidcSubject, v))
|
||||
}
|
||||
|
||||
// DefaultGroupID applies equality check predicate on the "default_group_id" field. It's identical to DefaultGroupIDEQ.
|
||||
func DefaultGroupID(v uuid.UUID) predicate.User {
|
||||
return predicate.User(sql.FieldEQ(FieldDefaultGroupID, v))
|
||||
}
|
||||
|
||||
// CreatedAtEQ applies the EQ predicate on the "created_at" field.
|
||||
func CreatedAtEQ(v time.Time) predicate.User {
|
||||
return predicate.User(sql.FieldEQ(FieldCreatedAt, v))
|
||||
@@ -631,21 +636,71 @@ func OidcSubjectContainsFold(v string) predicate.User {
|
||||
return predicate.User(sql.FieldContainsFold(FieldOidcSubject, v))
|
||||
}
|
||||
|
||||
// HasGroup applies the HasEdge predicate on the "group" edge.
|
||||
func HasGroup() predicate.User {
|
||||
// DefaultGroupIDEQ applies the EQ predicate on the "default_group_id" field.
|
||||
func DefaultGroupIDEQ(v uuid.UUID) predicate.User {
|
||||
return predicate.User(sql.FieldEQ(FieldDefaultGroupID, v))
|
||||
}
|
||||
|
||||
// DefaultGroupIDNEQ applies the NEQ predicate on the "default_group_id" field.
|
||||
func DefaultGroupIDNEQ(v uuid.UUID) predicate.User {
|
||||
return predicate.User(sql.FieldNEQ(FieldDefaultGroupID, v))
|
||||
}
|
||||
|
||||
// DefaultGroupIDIn applies the In predicate on the "default_group_id" field.
|
||||
func DefaultGroupIDIn(vs ...uuid.UUID) predicate.User {
|
||||
return predicate.User(sql.FieldIn(FieldDefaultGroupID, vs...))
|
||||
}
|
||||
|
||||
// DefaultGroupIDNotIn applies the NotIn predicate on the "default_group_id" field.
|
||||
func DefaultGroupIDNotIn(vs ...uuid.UUID) predicate.User {
|
||||
return predicate.User(sql.FieldNotIn(FieldDefaultGroupID, vs...))
|
||||
}
|
||||
|
||||
// DefaultGroupIDGT applies the GT predicate on the "default_group_id" field.
|
||||
func DefaultGroupIDGT(v uuid.UUID) predicate.User {
|
||||
return predicate.User(sql.FieldGT(FieldDefaultGroupID, v))
|
||||
}
|
||||
|
||||
// DefaultGroupIDGTE applies the GTE predicate on the "default_group_id" field.
|
||||
func DefaultGroupIDGTE(v uuid.UUID) predicate.User {
|
||||
return predicate.User(sql.FieldGTE(FieldDefaultGroupID, v))
|
||||
}
|
||||
|
||||
// DefaultGroupIDLT applies the LT predicate on the "default_group_id" field.
|
||||
func DefaultGroupIDLT(v uuid.UUID) predicate.User {
|
||||
return predicate.User(sql.FieldLT(FieldDefaultGroupID, v))
|
||||
}
|
||||
|
||||
// DefaultGroupIDLTE applies the LTE predicate on the "default_group_id" field.
|
||||
func DefaultGroupIDLTE(v uuid.UUID) predicate.User {
|
||||
return predicate.User(sql.FieldLTE(FieldDefaultGroupID, v))
|
||||
}
|
||||
|
||||
// DefaultGroupIDIsNil applies the IsNil predicate on the "default_group_id" field.
|
||||
func DefaultGroupIDIsNil() predicate.User {
|
||||
return predicate.User(sql.FieldIsNull(FieldDefaultGroupID))
|
||||
}
|
||||
|
||||
// DefaultGroupIDNotNil applies the NotNil predicate on the "default_group_id" field.
|
||||
func DefaultGroupIDNotNil() predicate.User {
|
||||
return predicate.User(sql.FieldNotNull(FieldDefaultGroupID))
|
||||
}
|
||||
|
||||
// HasGroups applies the HasEdge predicate on the "groups" edge.
|
||||
func HasGroups() predicate.User {
|
||||
return predicate.User(func(s *sql.Selector) {
|
||||
step := sqlgraph.NewStep(
|
||||
sqlgraph.From(Table, FieldID),
|
||||
sqlgraph.Edge(sqlgraph.M2O, true, GroupTable, GroupColumn),
|
||||
sqlgraph.Edge(sqlgraph.O2M, false, GroupsTable, GroupsColumn),
|
||||
)
|
||||
sqlgraph.HasNeighbors(s, step)
|
||||
})
|
||||
}
|
||||
|
||||
// HasGroupWith applies the HasEdge predicate on the "group" edge with a given conditions (other predicates).
|
||||
func HasGroupWith(preds ...predicate.Group) predicate.User {
|
||||
// HasGroupsWith applies the HasEdge predicate on the "groups" edge with a given conditions (other predicates).
|
||||
func HasGroupsWith(preds ...predicate.Group) predicate.User {
|
||||
return predicate.User(func(s *sql.Selector) {
|
||||
step := newGroupStep()
|
||||
step := newGroupsStep()
|
||||
sqlgraph.HasNeighborsWith(s, step, func(s *sql.Selector) {
|
||||
for _, p := range preds {
|
||||
p(s)
|
||||
|
||||
backend/internal/data/ent/user_create.go (generated, 48 changes)
@@ -162,6 +162,20 @@ func (_c *UserCreate) SetNillableOidcSubject(v *string) *UserCreate {
|
||||
return _c
|
||||
}
|
||||
|
||||
// SetDefaultGroupID sets the "default_group_id" field.
|
||||
func (_c *UserCreate) SetDefaultGroupID(v uuid.UUID) *UserCreate {
|
||||
_c.mutation.SetDefaultGroupID(v)
|
||||
return _c
|
||||
}
|
||||
|
||||
// SetNillableDefaultGroupID sets the "default_group_id" field if the given value is not nil.
|
||||
func (_c *UserCreate) SetNillableDefaultGroupID(v *uuid.UUID) *UserCreate {
|
||||
if v != nil {
|
||||
_c.SetDefaultGroupID(*v)
|
||||
}
|
||||
return _c
|
||||
}
|
||||
|
||||
// SetID sets the "id" field.
|
||||
func (_c *UserCreate) SetID(v uuid.UUID) *UserCreate {
|
||||
_c.mutation.SetID(v)
|
||||
@@ -176,15 +190,19 @@ func (_c *UserCreate) SetNillableID(v *uuid.UUID) *UserCreate {
|
||||
return _c
|
||||
}
|
||||
|
||||
// SetGroupID sets the "group" edge to the Group entity by ID.
|
||||
func (_c *UserCreate) SetGroupID(id uuid.UUID) *UserCreate {
|
||||
_c.mutation.SetGroupID(id)
|
||||
// AddGroupIDs adds the "groups" edge to the Group entity by IDs.
|
||||
func (_c *UserCreate) AddGroupIDs(ids ...uuid.UUID) *UserCreate {
|
||||
_c.mutation.AddGroupIDs(ids...)
|
||||
return _c
|
||||
}
|
||||
|
||||
// SetGroup sets the "group" edge to the Group entity.
|
||||
func (_c *UserCreate) SetGroup(v *Group) *UserCreate {
|
||||
return _c.SetGroupID(v.ID)
|
||||
// AddGroups adds the "groups" edges to the Group entity.
|
||||
func (_c *UserCreate) AddGroups(v ...*Group) *UserCreate {
|
||||
ids := make([]uuid.UUID, len(v))
|
||||
for i := range v {
|
||||
ids[i] = v[i].ID
|
||||
}
|
||||
return _c.AddGroupIDs(ids...)
|
||||
}
|
||||
|
||||
// AddAuthTokenIDs adds the "auth_tokens" edge to the AuthTokens entity by IDs.
|
||||
@@ -321,9 +339,6 @@ func (_c *UserCreate) check() error {
|
||||
return &ValidationError{Name: "role", err: fmt.Errorf(`ent: validator failed for field "User.role": %w`, err)}
|
||||
}
|
||||
}
|
||||
if len(_c.mutation.GroupIDs()) == 0 {
|
||||
return &ValidationError{Name: "group", err: errors.New(`ent: missing required edge "User.group"`)}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
@@ -403,12 +418,16 @@ func (_c *UserCreate) createSpec() (*User, *sqlgraph.CreateSpec) {
|
||||
_spec.SetField(user.FieldOidcSubject, field.TypeString, value)
|
||||
_node.OidcSubject = &value
|
||||
}
|
||||
if nodes := _c.mutation.GroupIDs(); len(nodes) > 0 {
|
||||
if value, ok := _c.mutation.DefaultGroupID(); ok {
|
||||
_spec.SetField(user.FieldDefaultGroupID, field.TypeUUID, value)
|
||||
_node.DefaultGroupID = &value
|
||||
}
|
||||
if nodes := _c.mutation.GroupsIDs(); len(nodes) > 0 {
|
||||
edge := &sqlgraph.EdgeSpec{
|
||||
Rel: sqlgraph.M2O,
|
||||
Inverse: true,
|
||||
Table: user.GroupTable,
|
||||
Columns: []string{user.GroupColumn},
|
||||
Rel: sqlgraph.O2M,
|
||||
Inverse: false,
|
||||
Table: user.GroupsTable,
|
||||
Columns: []string{user.GroupsColumn},
|
||||
Bidi: false,
|
||||
Target: &sqlgraph.EdgeTarget{
|
||||
IDSpec: sqlgraph.NewFieldSpec(group.FieldID, field.TypeUUID),
|
||||
@@ -417,7 +436,6 @@ func (_c *UserCreate) createSpec() (*User, *sqlgraph.CreateSpec) {
|
||||
for _, k := range nodes {
|
||||
edge.Target.Nodes = append(edge.Target.Nodes, k)
|
||||
}
|
||||
_node.group_users = &nodes[0]
|
||||
_spec.Edges = append(_spec.Edges, edge)
|
||||
}
|
||||
if nodes := _c.mutation.AuthTokensIDs(); len(nodes) > 0 {
|
||||
|
||||
backend/internal/data/ent/user_query.go (generated, 67 changes)
@@ -27,7 +27,7 @@ type UserQuery struct {
|
||||
order []user.OrderOption
|
||||
inters []Interceptor
|
||||
predicates []predicate.User
|
||||
withGroup *GroupQuery
|
||||
withGroups *GroupQuery
|
||||
withAuthTokens *AuthTokensQuery
|
||||
withNotifiers *NotifierQuery
|
||||
withFKs bool
|
||||
@@ -67,8 +67,8 @@ func (_q *UserQuery) Order(o ...user.OrderOption) *UserQuery {
|
||||
return _q
|
||||
}
|
||||
|
||||
// QueryGroup chains the current query on the "group" edge.
|
||||
func (_q *UserQuery) QueryGroup() *GroupQuery {
|
||||
// QueryGroups chains the current query on the "groups" edge.
|
||||
func (_q *UserQuery) QueryGroups() *GroupQuery {
|
||||
query := (&GroupClient{config: _q.config}).Query()
|
||||
query.path = func(ctx context.Context) (fromU *sql.Selector, err error) {
|
||||
if err := _q.prepareQuery(ctx); err != nil {
|
||||
@@ -81,7 +81,7 @@ func (_q *UserQuery) QueryGroup() *GroupQuery {
|
||||
step := sqlgraph.NewStep(
|
||||
sqlgraph.From(user.Table, user.FieldID, selector),
|
||||
sqlgraph.To(group.Table, group.FieldID),
|
||||
sqlgraph.Edge(sqlgraph.M2O, true, user.GroupTable, user.GroupColumn),
|
||||
sqlgraph.Edge(sqlgraph.O2M, false, user.GroupsTable, user.GroupsColumn),
|
||||
)
|
||||
fromU = sqlgraph.SetNeighbors(_q.driver.Dialect(), step)
|
||||
return fromU, nil
|
||||
@@ -325,7 +325,7 @@ func (_q *UserQuery) Clone() *UserQuery {
|
||||
order: append([]user.OrderOption{}, _q.order...),
|
||||
inters: append([]Interceptor{}, _q.inters...),
|
||||
predicates: append([]predicate.User{}, _q.predicates...),
|
||||
withGroup: _q.withGroup.Clone(),
|
||||
withGroups: _q.withGroups.Clone(),
|
||||
withAuthTokens: _q.withAuthTokens.Clone(),
|
||||
withNotifiers: _q.withNotifiers.Clone(),
|
||||
// clone intermediate query.
|
||||
@@ -334,14 +334,14 @@ func (_q *UserQuery) Clone() *UserQuery {
|
||||
}
|
||||
}
|
||||
|
||||
// WithGroup tells the query-builder to eager-load the nodes that are connected to
|
||||
// the "group" edge. The optional arguments are used to configure the query builder of the edge.
|
||||
func (_q *UserQuery) WithGroup(opts ...func(*GroupQuery)) *UserQuery {
|
||||
// WithGroups tells the query-builder to eager-load the nodes that are connected to
|
||||
// the "groups" edge. The optional arguments are used to configure the query builder of the edge.
|
||||
func (_q *UserQuery) WithGroups(opts ...func(*GroupQuery)) *UserQuery {
|
||||
query := (&GroupClient{config: _q.config}).Query()
|
||||
for _, opt := range opts {
|
||||
opt(query)
|
||||
}
|
||||
_q.withGroup = query
|
||||
_q.withGroups = query
|
||||
return _q
|
||||
}
|
||||
|
||||
@@ -447,14 +447,11 @@ func (_q *UserQuery) sqlAll(ctx context.Context, hooks ...queryHook) ([]*User, e
|
||||
withFKs = _q.withFKs
|
||||
_spec = _q.querySpec()
|
||||
loadedTypes = [3]bool{
|
||||
_q.withGroup != nil,
|
||||
_q.withGroups != nil,
|
||||
_q.withAuthTokens != nil,
|
||||
_q.withNotifiers != nil,
|
||||
}
|
||||
)
|
||||
if _q.withGroup != nil {
|
||||
withFKs = true
|
||||
}
|
||||
if withFKs {
|
||||
_spec.Node.Columns = append(_spec.Node.Columns, user.ForeignKeys...)
|
||||
}
|
||||
@@ -476,9 +473,10 @@ func (_q *UserQuery) sqlAll(ctx context.Context, hooks ...queryHook) ([]*User, e
|
||||
if len(nodes) == 0 {
|
||||
return nodes, nil
|
||||
}
|
||||
if query := _q.withGroup; query != nil {
|
||||
if err := _q.loadGroup(ctx, query, nodes, nil,
|
||||
func(n *User, e *Group) { n.Edges.Group = e }); err != nil {
|
||||
if query := _q.withGroups; query != nil {
|
||||
if err := _q.loadGroups(ctx, query, nodes,
|
||||
func(n *User) { n.Edges.Groups = []*Group{} },
|
||||
func(n *User, e *Group) { n.Edges.Groups = append(n.Edges.Groups, e) }); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
@@ -499,35 +497,34 @@ func (_q *UserQuery) sqlAll(ctx context.Context, hooks ...queryHook) ([]*User, e
|
||||
return nodes, nil
|
||||
}
|
||||
|
||||
func (_q *UserQuery) loadGroup(ctx context.Context, query *GroupQuery, nodes []*User, init func(*User), assign func(*User, *Group)) error {
|
||||
ids := make([]uuid.UUID, 0, len(nodes))
|
||||
nodeids := make(map[uuid.UUID][]*User)
|
||||
func (_q *UserQuery) loadGroups(ctx context.Context, query *GroupQuery, nodes []*User, init func(*User), assign func(*User, *Group)) error {
|
||||
fks := make([]driver.Value, 0, len(nodes))
|
||||
nodeids := make(map[uuid.UUID]*User)
|
||||
for i := range nodes {
|
||||
if nodes[i].group_users == nil {
|
||||
continue
|
||||
fks = append(fks, nodes[i].ID)
|
||||
nodeids[nodes[i].ID] = nodes[i]
|
||||
if init != nil {
|
||||
init(nodes[i])
|
||||
}
|
||||
fk := *nodes[i].group_users
|
||||
if _, ok := nodeids[fk]; !ok {
|
||||
ids = append(ids, fk)
|
||||
}
|
||||
nodeids[fk] = append(nodeids[fk], nodes[i])
|
||||
}
|
||||
if len(ids) == 0 {
|
||||
return nil
|
||||
}
|
||||
query.Where(group.IDIn(ids...))
|
||||
query.withFKs = true
|
||||
query.Where(predicate.Group(func(s *sql.Selector) {
|
||||
s.Where(sql.InValues(s.C(user.GroupsColumn), fks...))
|
||||
}))
|
||||
neighbors, err := query.All(ctx)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
for _, n := range neighbors {
|
||||
nodes, ok := nodeids[n.ID]
|
||||
fk := n.user_groups
|
||||
if fk == nil {
|
||||
return fmt.Errorf(`foreign-key "user_groups" is nil for node %v`, n.ID)
|
||||
}
|
||||
node, ok := nodeids[*fk]
|
||||
if !ok {
|
||||
return fmt.Errorf(`unexpected foreign-key "group_users" returned %v`, n.ID)
|
||||
}
|
||||
for i := range nodes {
|
||||
assign(nodes[i], n)
|
||||
return fmt.Errorf(`unexpected referenced foreign-key "user_groups" returned %v for node %v`, *fk, n.ID)
|
||||
}
|
||||
assign(node, n)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
backend/internal/data/ent/user_update.go (generated, 204 changes)
@@ -188,15 +188,39 @@ func (_u *UserUpdate) ClearOidcSubject() *UserUpdate {
|
||||
return _u
|
||||
}
|
||||
|
||||
// SetGroupID sets the "group" edge to the Group entity by ID.
|
||||
func (_u *UserUpdate) SetGroupID(id uuid.UUID) *UserUpdate {
|
||||
_u.mutation.SetGroupID(id)
|
||||
// SetDefaultGroupID sets the "default_group_id" field.
|
||||
func (_u *UserUpdate) SetDefaultGroupID(v uuid.UUID) *UserUpdate {
|
||||
_u.mutation.SetDefaultGroupID(v)
|
||||
return _u
|
||||
}
|
||||
|
||||
// SetGroup sets the "group" edge to the Group entity.
|
||||
func (_u *UserUpdate) SetGroup(v *Group) *UserUpdate {
|
||||
return _u.SetGroupID(v.ID)
|
||||
// SetNillableDefaultGroupID sets the "default_group_id" field if the given value is not nil.
|
||||
func (_u *UserUpdate) SetNillableDefaultGroupID(v *uuid.UUID) *UserUpdate {
|
||||
if v != nil {
|
||||
_u.SetDefaultGroupID(*v)
|
||||
}
|
||||
return _u
|
||||
}
|
||||
|
||||
// ClearDefaultGroupID clears the value of the "default_group_id" field.
|
||||
func (_u *UserUpdate) ClearDefaultGroupID() *UserUpdate {
|
||||
_u.mutation.ClearDefaultGroupID()
|
||||
return _u
|
||||
}
|
||||
|
||||
// AddGroupIDs adds the "groups" edge to the Group entity by IDs.
|
||||
func (_u *UserUpdate) AddGroupIDs(ids ...uuid.UUID) *UserUpdate {
|
||||
_u.mutation.AddGroupIDs(ids...)
|
||||
return _u
|
||||
}
|
||||
|
||||
// AddGroups adds the "groups" edges to the Group entity.
|
||||
func (_u *UserUpdate) AddGroups(v ...*Group) *UserUpdate {
|
||||
ids := make([]uuid.UUID, len(v))
|
||||
for i := range v {
|
||||
ids[i] = v[i].ID
|
||||
}
|
||||
return _u.AddGroupIDs(ids...)
|
||||
}
|
||||
|
||||
// AddAuthTokenIDs adds the "auth_tokens" edge to the AuthTokens entity by IDs.
|
||||
@@ -234,12 +258,27 @@ func (_u *UserUpdate) Mutation() *UserMutation {
|
||||
return _u.mutation
|
||||
}
|
||||
|
||||
// ClearGroup clears the "group" edge to the Group entity.
|
||||
func (_u *UserUpdate) ClearGroup() *UserUpdate {
|
||||
_u.mutation.ClearGroup()
|
||||
// ClearGroups clears all "groups" edges to the Group entity.
|
||||
func (_u *UserUpdate) ClearGroups() *UserUpdate {
|
||||
_u.mutation.ClearGroups()
|
||||
return _u
|
||||
}
|
||||
|
||||
// RemoveGroupIDs removes the "groups" edge to Group entities by IDs.
|
||||
func (_u *UserUpdate) RemoveGroupIDs(ids ...uuid.UUID) *UserUpdate {
|
||||
_u.mutation.RemoveGroupIDs(ids...)
|
||||
return _u
|
||||
}
|
||||
|
||||
// RemoveGroups removes "groups" edges to Group entities.
|
||||
func (_u *UserUpdate) RemoveGroups(v ...*Group) *UserUpdate {
|
||||
ids := make([]uuid.UUID, len(v))
|
||||
for i := range v {
|
||||
ids[i] = v[i].ID
|
||||
}
|
||||
return _u.RemoveGroupIDs(ids...)
|
||||
}
|
||||
|
||||
// ClearAuthTokens clears all "auth_tokens" edges to the AuthTokens entity.
|
||||
func (_u *UserUpdate) ClearAuthTokens() *UserUpdate {
|
||||
_u.mutation.ClearAuthTokens()
|
||||
@@ -340,9 +379,6 @@ func (_u *UserUpdate) check() error {
|
||||
return &ValidationError{Name: "role", err: fmt.Errorf(`ent: validator failed for field "User.role": %w`, err)}
|
||||
}
|
||||
}
|
||||
if _u.mutation.GroupCleared() && len(_u.mutation.GroupIDs()) > 0 {
|
||||
return errors.New(`ent: clearing a required unique edge "User.group"`)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
@@ -400,12 +436,18 @@ func (_u *UserUpdate) sqlSave(ctx context.Context) (_node int, err error) {
|
||||
if _u.mutation.OidcSubjectCleared() {
|
||||
_spec.ClearField(user.FieldOidcSubject, field.TypeString)
|
||||
}
|
||||
if _u.mutation.GroupCleared() {
|
||||
if value, ok := _u.mutation.DefaultGroupID(); ok {
|
||||
_spec.SetField(user.FieldDefaultGroupID, field.TypeUUID, value)
|
||||
}
|
||||
if _u.mutation.DefaultGroupIDCleared() {
|
||||
_spec.ClearField(user.FieldDefaultGroupID, field.TypeUUID)
|
||||
}
|
||||
if _u.mutation.GroupsCleared() {
|
||||
edge := &sqlgraph.EdgeSpec{
|
||||
Rel: sqlgraph.M2O,
|
||||
Inverse: true,
|
||||
Table: user.GroupTable,
|
||||
Columns: []string{user.GroupColumn},
|
||||
Rel: sqlgraph.O2M,
|
||||
Inverse: false,
|
||||
Table: user.GroupsTable,
|
||||
Columns: []string{user.GroupsColumn},
|
||||
Bidi: false,
|
||||
Target: &sqlgraph.EdgeTarget{
|
||||
IDSpec: sqlgraph.NewFieldSpec(group.FieldID, field.TypeUUID),
|
||||
@@ -413,12 +455,28 @@ func (_u *UserUpdate) sqlSave(ctx context.Context) (_node int, err error) {
|
||||
}
|
||||
_spec.Edges.Clear = append(_spec.Edges.Clear, edge)
|
||||
}
|
||||
if nodes := _u.mutation.GroupIDs(); len(nodes) > 0 {
|
||||
if nodes := _u.mutation.RemovedGroupsIDs(); len(nodes) > 0 && !_u.mutation.GroupsCleared() {
|
||||
edge := &sqlgraph.EdgeSpec{
|
||||
Rel: sqlgraph.M2O,
|
||||
Inverse: true,
|
||||
Table: user.GroupTable,
|
||||
Columns: []string{user.GroupColumn},
|
||||
Rel: sqlgraph.O2M,
|
||||
Inverse: false,
|
||||
Table: user.GroupsTable,
|
||||
Columns: []string{user.GroupsColumn},
|
||||
Bidi: false,
|
||||
Target: &sqlgraph.EdgeTarget{
|
||||
IDSpec: sqlgraph.NewFieldSpec(group.FieldID, field.TypeUUID),
|
||||
},
|
||||
}
|
||||
for _, k := range nodes {
|
||||
edge.Target.Nodes = append(edge.Target.Nodes, k)
|
||||
}
|
||||
_spec.Edges.Clear = append(_spec.Edges.Clear, edge)
|
||||
}
|
||||
if nodes := _u.mutation.GroupsIDs(); len(nodes) > 0 {
|
||||
edge := &sqlgraph.EdgeSpec{
|
||||
Rel: sqlgraph.O2M,
|
||||
Inverse: false,
|
||||
Table: user.GroupsTable,
|
||||
Columns: []string{user.GroupsColumn},
|
||||
Bidi: false,
|
||||
Target: &sqlgraph.EdgeTarget{
|
||||
IDSpec: sqlgraph.NewFieldSpec(group.FieldID, field.TypeUUID),
|
||||
@@ -695,15 +753,39 @@ func (_u *UserUpdateOne) ClearOidcSubject() *UserUpdateOne {
|
||||
return _u
|
||||
}
|
||||
|
||||
// SetGroupID sets the "group" edge to the Group entity by ID.
|
||||
func (_u *UserUpdateOne) SetGroupID(id uuid.UUID) *UserUpdateOne {
|
||||
_u.mutation.SetGroupID(id)
|
||||
// SetDefaultGroupID sets the "default_group_id" field.
|
||||
func (_u *UserUpdateOne) SetDefaultGroupID(v uuid.UUID) *UserUpdateOne {
|
||||
_u.mutation.SetDefaultGroupID(v)
|
||||
return _u
|
||||
}
|
||||
|
||||
// SetGroup sets the "group" edge to the Group entity.
|
||||
func (_u *UserUpdateOne) SetGroup(v *Group) *UserUpdateOne {
|
||||
return _u.SetGroupID(v.ID)
|
||||
// SetNillableDefaultGroupID sets the "default_group_id" field if the given value is not nil.
|
||||
func (_u *UserUpdateOne) SetNillableDefaultGroupID(v *uuid.UUID) *UserUpdateOne {
|
||||
if v != nil {
|
||||
		_u.SetDefaultGroupID(*v)
	}
	return _u
}

// ClearDefaultGroupID clears the value of the "default_group_id" field.
func (_u *UserUpdateOne) ClearDefaultGroupID() *UserUpdateOne {
	_u.mutation.ClearDefaultGroupID()
	return _u
}

// AddGroupIDs adds the "groups" edge to the Group entity by IDs.
func (_u *UserUpdateOne) AddGroupIDs(ids ...uuid.UUID) *UserUpdateOne {
	_u.mutation.AddGroupIDs(ids...)
	return _u
}

// AddGroups adds the "groups" edges to the Group entity.
func (_u *UserUpdateOne) AddGroups(v ...*Group) *UserUpdateOne {
	ids := make([]uuid.UUID, len(v))
	for i := range v {
		ids[i] = v[i].ID
	}
	return _u.AddGroupIDs(ids...)
}

// AddAuthTokenIDs adds the "auth_tokens" edge to the AuthTokens entity by IDs.

@@ -741,12 +823,27 @@ func (_u *UserUpdateOne) Mutation() *UserMutation {
	return _u.mutation
}

// ClearGroup clears the "group" edge to the Group entity.
func (_u *UserUpdateOne) ClearGroup() *UserUpdateOne {
	_u.mutation.ClearGroup()
// ClearGroups clears all "groups" edges to the Group entity.
func (_u *UserUpdateOne) ClearGroups() *UserUpdateOne {
	_u.mutation.ClearGroups()
	return _u
}

// RemoveGroupIDs removes the "groups" edge to Group entities by IDs.
func (_u *UserUpdateOne) RemoveGroupIDs(ids ...uuid.UUID) *UserUpdateOne {
	_u.mutation.RemoveGroupIDs(ids...)
	return _u
}

// RemoveGroups removes "groups" edges to Group entities.
func (_u *UserUpdateOne) RemoveGroups(v ...*Group) *UserUpdateOne {
	ids := make([]uuid.UUID, len(v))
	for i := range v {
		ids[i] = v[i].ID
	}
	return _u.RemoveGroupIDs(ids...)
}

// ClearAuthTokens clears all "auth_tokens" edges to the AuthTokens entity.
func (_u *UserUpdateOne) ClearAuthTokens() *UserUpdateOne {
	_u.mutation.ClearAuthTokens()

@@ -860,9 +957,6 @@ func (_u *UserUpdateOne) check() error {
			return &ValidationError{Name: "role", err: fmt.Errorf(`ent: validator failed for field "User.role": %w`, err)}
		}
	}
	if _u.mutation.GroupCleared() && len(_u.mutation.GroupIDs()) > 0 {
		return errors.New(`ent: clearing a required unique edge "User.group"`)
	}
	return nil
}

@@ -937,12 +1031,18 @@ func (_u *UserUpdateOne) sqlSave(ctx context.Context) (_node *User, err error) {
	if _u.mutation.OidcSubjectCleared() {
		_spec.ClearField(user.FieldOidcSubject, field.TypeString)
	}
	if _u.mutation.GroupCleared() {
	if value, ok := _u.mutation.DefaultGroupID(); ok {
		_spec.SetField(user.FieldDefaultGroupID, field.TypeUUID, value)
	}
	if _u.mutation.DefaultGroupIDCleared() {
		_spec.ClearField(user.FieldDefaultGroupID, field.TypeUUID)
	}
	if _u.mutation.GroupsCleared() {
		edge := &sqlgraph.EdgeSpec{
			Rel:     sqlgraph.M2O,
			Inverse: true,
			Table:   user.GroupTable,
			Columns: []string{user.GroupColumn},
			Rel:     sqlgraph.O2M,
			Inverse: false,
			Table:   user.GroupsTable,
			Columns: []string{user.GroupsColumn},
			Bidi:    false,
			Target: &sqlgraph.EdgeTarget{
				IDSpec: sqlgraph.NewFieldSpec(group.FieldID, field.TypeUUID),

@@ -950,12 +1050,28 @@ func (_u *UserUpdateOne) sqlSave(ctx context.Context) (_node *User, err error) {
		}
		_spec.Edges.Clear = append(_spec.Edges.Clear, edge)
	}
	if nodes := _u.mutation.GroupIDs(); len(nodes) > 0 {
	if nodes := _u.mutation.RemovedGroupsIDs(); len(nodes) > 0 && !_u.mutation.GroupsCleared() {
		edge := &sqlgraph.EdgeSpec{
			Rel:     sqlgraph.M2O,
			Inverse: true,
			Table:   user.GroupTable,
			Columns: []string{user.GroupColumn},
			Rel:     sqlgraph.O2M,
			Inverse: false,
			Table:   user.GroupsTable,
			Columns: []string{user.GroupsColumn},
			Bidi:    false,
			Target: &sqlgraph.EdgeTarget{
				IDSpec: sqlgraph.NewFieldSpec(group.FieldID, field.TypeUUID),
			},
		}
		for _, k := range nodes {
			edge.Target.Nodes = append(edge.Target.Nodes, k)
		}
		_spec.Edges.Clear = append(_spec.Edges.Clear, edge)
	}
	if nodes := _u.mutation.GroupsIDs(); len(nodes) > 0 {
		edge := &sqlgraph.EdgeSpec{
			Rel:     sqlgraph.O2M,
			Inverse: false,
			Table:   user.GroupsTable,
			Columns: []string{user.GroupsColumn},
			Bidi:    false,
			Target: &sqlgraph.EdgeTarget{
				IDSpec: sqlgraph.NewFieldSpec(group.FieldID, field.TypeUUID),

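For orientation, here is a minimal sketch of how the newly generated multi-group builder methods might be called from an ent client. The `ent` import path and the `addMembership` helper are assumptions for illustration, not code from this change.

```go
package example

import (
	"context"

	"github.com/google/uuid"

	"github.com/sysadminsmedia/homebox/backend/internal/data/ent"
)

// addMembership attaches an extra group to a user through the new "groups"
// M2M edge and marks that group as the user's default.
func addMembership(ctx context.Context, client *ent.Client, userID, groupID uuid.UUID) (*ent.User, error) {
	return client.User.
		UpdateOneID(userID).
		AddGroupIDs(groupID).       // generated by the "groups" edge shown above
		SetDefaultGroupID(groupID). // generated for the new default_group_id field
		Save(ctx)
}
```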
@@ -6,7 +6,6 @@ import (
	"fmt"

	"github.com/rs/zerolog/log"
	"github.com/sysadminsmedia/homebox/backend/internal/sys/config"
)

//go:embed all:postgres
@@ -22,9 +21,9 @@ var sqliteFiles embed.FS
// embedded file system containing the migration files for the specified dialect.
func Migrations(dialect string) (embed.FS, error) {
	switch dialect {
	case config.DriverPostgres:
	case "postgres":
		return postgresFiles, nil
	case config.DriverSqlite3:
	case "sqlite3":
		return sqliteFiles, nil
	default:
		log.Error().Str("dialect", dialect).Msg("unknown sql dialect")

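Since the migration files below carry goose annotations, a plausible way to wire `Migrations` into a migration run looks like this sketch. It assumes pressly/goose v3, a caller-supplied `*sql.DB`, and that the migrations package path and the dialect-named sub-directory match the `//go:embed` directives above; none of that is taken from this change.

```go
package example

import (
	"database/sql"

	"github.com/pressly/goose/v3"

	"github.com/sysadminsmedia/homebox/backend/internal/data/migrations"
)

// applyMigrations selects the embedded migration set for the dialect and runs it.
func applyMigrations(db *sql.DB, dialect string) error {
	fsys, err := migrations.Migrations(dialect)
	if err != nil {
		return err
	}
	goose.SetBaseFS(fsys)
	if err := goose.SetDialect(dialect); err != nil {
		return err
	}
	// The embedded files live under a directory named after the dialect
	// (per the //go:embed directives), so that directory is the goose root.
	return goose.Up(db, dialect)
}
```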
@@ -0,0 +1,47 @@
-- +goose Up
-- Create user_groups junction table for M:M relationship
CREATE TABLE IF NOT EXISTS "user_groups" (
    "user_id" uuid NOT NULL,
    "group_id" uuid NOT NULL,
    PRIMARY KEY ("user_id", "group_id"),
    CONSTRAINT "user_groups_user_id" FOREIGN KEY ("user_id") REFERENCES "users" ("id") ON UPDATE NO ACTION ON DELETE CASCADE,
    CONSTRAINT "user_groups_group_id" FOREIGN KEY ("group_id") REFERENCES "groups" ("id") ON UPDATE NO ACTION ON DELETE CASCADE
);

-- Migrate existing user->group relationships to the junction table
INSERT INTO "user_groups" ("user_id", "group_id")
SELECT "id", "group_users" FROM "users" WHERE "group_users" IS NOT NULL;

-- Add default_group_id column to users table
ALTER TABLE "users" ADD COLUMN "default_group_id" uuid;

-- Set default_group_id to the user's current group
UPDATE "users" SET "default_group_id" = "group_users" WHERE "group_users" IS NOT NULL;

-- Drop the old group_users foreign key constraint and column
ALTER TABLE "users" DROP CONSTRAINT "users_groups_users";
ALTER TABLE "users" DROP COLUMN "group_users";

-- Add foreign key constraint for default_group_id
ALTER TABLE "users" ADD CONSTRAINT "users_groups_users_default" FOREIGN KEY ("default_group_id") REFERENCES "groups" ("id") ON UPDATE NO ACTION ON DELETE SET NULL;

-- +goose Down
-- Recreate group_users column with foreign key
ALTER TABLE "users" ADD COLUMN "group_users" uuid;

-- Restore the group_users values from user_groups (using the default_group_id or first entry)
UPDATE "users"
SET "group_users" = COALESCE("default_group_id", (
    SELECT "group_id" FROM "user_groups" WHERE "user_id" = "users"."id" LIMIT 1
));

-- Drop the default_group_id foreign key and column
ALTER TABLE "users" DROP CONSTRAINT "users_groups_users_default";
ALTER TABLE "users" DROP COLUMN "default_group_id";

-- Add back the original foreign key constraint
ALTER TABLE "users" ADD CONSTRAINT "users_groups_users" FOREIGN KEY ("group_users") REFERENCES "groups" ("id") ON UPDATE NO ACTION ON DELETE CASCADE;

-- Drop the junction table
DROP TABLE IF EXISTS "user_groups";

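After the Up migration runs, a quick consistency check is that every non-null default_group_id is mirrored by a junction row. A minimal sketch, assuming the pgx stdlib driver and a caller-supplied DSN (both assumptions, not part of this change):

```go
package example

import (
	"database/sql"

	_ "github.com/jackc/pgx/v5/stdlib" // registers the "pgx" driver (assumed choice)
)

// countUnbackedDefaults returns how many users have a default_group_id with
// no matching row in user_groups; the backfill above should leave this at zero.
func countUnbackedDefaults(dsn string) (int, error) {
	db, err := sql.Open("pgx", dsn)
	if err != nil {
		return 0, err
	}
	defer db.Close()

	var n int
	err = db.QueryRow(`
		SELECT count(*)
		FROM users u
		WHERE u.default_group_id IS NOT NULL
		  AND NOT EXISTS (
		    SELECT 1 FROM user_groups ug
		    WHERE ug.user_id = u.id AND ug.group_id = u.default_group_id
		  )`).Scan(&n)
	return n, err
}
```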
@@ -0,0 +1,105 @@
-- +goose Up
-- Create user_groups junction table for M:M relationship
CREATE TABLE IF NOT EXISTS user_groups (
    user_id UUID NOT NULL,
    group_id UUID NOT NULL,
    PRIMARY KEY (user_id, group_id),
    CONSTRAINT user_groups_user_id FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE,
    CONSTRAINT user_groups_group_id FOREIGN KEY (group_id) REFERENCES groups(id) ON DELETE CASCADE
);

-- Migrate existing user->group relationships to the junction table
INSERT INTO user_groups (user_id, group_id)
SELECT id, group_users FROM users WHERE group_users IS NOT NULL;

-- Add default_group_id column to users table
ALTER TABLE users ADD COLUMN default_group_id UUID;

-- Set default_group_id to the user's current group
UPDATE users SET default_group_id = group_users WHERE group_users IS NOT NULL;

-- SQLite cannot add a foreign key constraint via ALTER TABLE, so rebuild the
-- users table with the default_group_id constraint and without group_users
CREATE TABLE users_new (
    id UUID NOT NULL,
    created_at DATETIME NOT NULL,
    updated_at DATETIME NOT NULL,
    name TEXT NOT NULL,
    email TEXT NOT NULL UNIQUE,
    password TEXT,
    is_superuser BOOLEAN NOT NULL DEFAULT false,
    superuser BOOLEAN NOT NULL DEFAULT false,
    role TEXT NOT NULL DEFAULT 'user',
    activated_on DATETIME,
    oidc_issuer TEXT,
    oidc_subject TEXT,
    default_group_id UUID,
    PRIMARY KEY (id),
    CONSTRAINT users_groups_users_default FOREIGN KEY (default_group_id) REFERENCES groups(id) ON DELETE SET NULL,
    UNIQUE (oidc_issuer, oidc_subject)
);

-- Copy data from old table to new table
INSERT INTO users_new (
    id, created_at, updated_at, name, email, password, is_superuser, superuser, role,
    activated_on, oidc_issuer, oidc_subject, default_group_id
)
SELECT
    id, created_at, updated_at, name, email, password, is_superuser, superuser, role,
    activated_on, oidc_issuer, oidc_subject, default_group_id
FROM users;

-- Drop old indexes
DROP INDEX IF EXISTS users_email_key;
DROP INDEX IF EXISTS users_oidc_issuer_subject_key;

-- Drop old table
DROP TABLE users;

-- Rename new table to users
ALTER TABLE users_new RENAME TO users;

-- Recreate indexes
CREATE UNIQUE INDEX IF NOT EXISTS users_email_key ON users(email);
CREATE UNIQUE INDEX IF NOT EXISTS users_oidc_issuer_subject_key ON users(oidc_issuer, oidc_subject);

-- +goose Down
-- Recreate the old schema
CREATE TABLE users_old (
    id UUID NOT NULL,
    created_at DATETIME NOT NULL,
    updated_at DATETIME NOT NULL,
    name TEXT NOT NULL,
    email TEXT NOT NULL UNIQUE,
    password TEXT,
    is_superuser BOOLEAN NOT NULL DEFAULT false,
    superuser BOOLEAN NOT NULL DEFAULT false,
    role TEXT NOT NULL DEFAULT 'user',
    activated_on DATETIME,
    oidc_issuer TEXT,
    oidc_subject TEXT,
    group_users UUID NOT NULL,
    PRIMARY KEY (id),
    CONSTRAINT users_groups_users FOREIGN KEY (group_users) REFERENCES groups(id) ON DELETE CASCADE,
    UNIQUE (oidc_issuer, oidc_subject)
);

-- Copy data back, using the default_group_id or the first group from user_groups
INSERT INTO users_old (
    id, created_at, updated_at, name, email, password, is_superuser, superuser, role,
    activated_on, oidc_issuer, oidc_subject, group_users
)
SELECT
    u.id, u.created_at, u.updated_at, u.name, u.email, u.password, u.is_superuser, u.superuser, u.role,
    u.activated_on, u.oidc_issuer, u.oidc_subject, COALESCE(u.default_group_id, (SELECT group_id FROM user_groups WHERE user_id = u.id LIMIT 1))
FROM users u;

DROP INDEX IF EXISTS users_email_key;
DROP INDEX IF EXISTS users_oidc_issuer_subject_key;
DROP TABLE users;
ALTER TABLE users_old RENAME TO users;

CREATE UNIQUE INDEX IF NOT EXISTS users_email_key ON users(email);
CREATE UNIQUE INDEX IF NOT EXISTS users_oidc_issuer_subject_key ON users(oidc_issuer, oidc_subject);

DROP TABLE IF EXISTS user_groups;

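Because the SQLite path rebuilds the users table rather than altering it, a post-migration sanity check can run the integrity pragma and confirm no user was left without a membership row. A minimal sketch, assuming the mattn/go-sqlite3 driver and a local database path (both illustrative assumptions):

```go
package example

import (
	"database/sql"

	_ "github.com/mattn/go-sqlite3" // assumed driver; any SQLite driver works
)

// checkRebuild reports SQLite's integrity_check result ("ok" when healthy) and
// the number of users that ended up with no row in user_groups after the rebuild.
func checkRebuild(path string) (string, int, error) {
	db, err := sql.Open("sqlite3", path+"?_foreign_keys=on")
	if err != nil {
		return "", 0, err
	}
	defer db.Close()

	var integrity string
	if err := db.QueryRow(`PRAGMA integrity_check`).Scan(&integrity); err != nil {
		return "", 0, err
	}

	var orphans int
	err = db.QueryRow(`
		SELECT count(*) FROM users u
		LEFT JOIN user_groups ug ON ug.user_id = u.id
		WHERE ug.user_id IS NULL`).Scan(&orphans)
	return integrity, orphans, err
}
```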
@@ -313,7 +313,7 @@ func TestItemRepository_GetAllCustomFields(t *testing.T) {

	// Test getting all values from field
	{
		results, err := tRepos.Items.GetAllCustomFieldValues(context.Background(), tUser.GroupID, names[0])
		results, err := tRepos.Items.GetAllCustomFieldValues(context.Background(), tUser.DefaultGroupID, names[0])

		require.NoError(t, err)
		assert.ElementsMatch(t, values[:1], results)
@@ -397,5 +397,3 @@ func TestItemsRepository_DeleteByGroupWithAttachments(t *testing.T) {
	_, err = tRepos.Attachments.Get(context.Background(), tGroup.ID, attachment.ID)
	require.Error(t, err)
}

@@ -40,7 +40,7 @@ func (r *TokenRepository) GetUserFromToken(ctx context.Context, token []byte) (U
		Where(authtokens.ExpiresAtGTE(time.Now())).
		WithUser().
		QueryUser().
		WithGroup().
		WithGroups().
		Only(ctx)
	if err != nil {
		return UserOut{}, err

@@ -17,12 +17,12 @@ type (
	// in the database. It should not be used to create users from an API unless the user has
	// rights to create SuperUsers. For regular user data use the UserIn struct.
	UserCreate struct {
		Name        string    `json:"name"`
		Email       string    `json:"email"`
		Password    *string   `json:"password"`
		IsSuperuser bool      `json:"isSuperuser"`
		GroupID     uuid.UUID `json:"groupID"`
		IsOwner     bool      `json:"isOwner"`
		Name           string    `json:"name"`
		Email          string    `json:"email"`
		Password       *string   `json:"password"`
		IsSuperuser    bool      `json:"isSuperuser"`
		DefaultGroupID uuid.UUID `json:"defaultGroupID"`
		IsOwner        bool      `json:"isOwner"`
	}

	UserUpdate struct {
@@ -31,16 +31,16 @@ type (
	}

	UserOut struct {
		ID           uuid.UUID `json:"id"`
		Name         string    `json:"name"`
		Email        string    `json:"email"`
		IsSuperuser  bool      `json:"isSuperuser"`
		GroupID      uuid.UUID `json:"groupId"`
		GroupName    string    `json:"groupName"`
		PasswordHash string    `json:"-"`
		IsOwner      bool      `json:"isOwner"`
		OidcIssuer   *string   `json:"oidcIssuer"`
		OidcSubject  *string   `json:"oidcSubject"`
		ID             uuid.UUID   `json:"id"`
		Name           string      `json:"name"`
		Email          string      `json:"email"`
		IsSuperuser    bool        `json:"isSuperuser"`
		DefaultGroupID uuid.UUID   `json:"defaultGroupId"`
		GroupIDs       []uuid.UUID `json:"groupIds"`
		PasswordHash   string      `json:"-"`
		IsOwner        bool        `json:"isOwner"`
		OidcIssuer     *string     `json:"oidcIssuer"`
		OidcSubject    *string     `json:"oidcSubject"`
	}
)

@@ -55,37 +55,48 @@ func mapUserOut(user *ent.User) UserOut {
		passwordHash = *user.Password
	}

	groupIDs := make([]uuid.UUID, len(user.Edges.Groups))
	for i, g := range user.Edges.Groups {
		groupIDs[i] = g.ID
	}

	// Get the default group ID, handling the optional pointer
	defaultGroupID := uuid.Nil
	if user.DefaultGroupID != nil {
		defaultGroupID = *user.DefaultGroupID
	}

	return UserOut{
		ID:           user.ID,
		Name:         user.Name,
		Email:        user.Email,
		IsSuperuser:  user.IsSuperuser,
		GroupID:      user.Edges.Group.ID,
		GroupName:    user.Edges.Group.Name,
		PasswordHash: passwordHash,
		IsOwner:      user.Role == "owner",
		OidcIssuer:   user.OidcIssuer,
		OidcSubject:  user.OidcSubject,
		ID:             user.ID,
		Name:           user.Name,
		Email:          user.Email,
		IsSuperuser:    user.IsSuperuser,
		DefaultGroupID: defaultGroupID,
		GroupIDs:       groupIDs,
		PasswordHash:   passwordHash,
		IsOwner:        user.Role == "owner",
		OidcIssuer:     user.OidcIssuer,
		OidcSubject:    user.OidcSubject,
	}
}

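For API consumers, the practical effect of the UserOut change is that the old groupId/groupName pair is replaced by defaultGroupId plus a groupIds array. The sketch below marshals a small local mirror of the struct (an illustrative stand-in, not the repository type) just to show the resulting JSON shape:

```go
package main

import (
	"encoding/json"
	"fmt"

	"github.com/google/uuid"
)

// userOut mirrors only the group-related fields of the new UserOut for illustration.
type userOut struct {
	ID             uuid.UUID   `json:"id"`
	Email          string      `json:"email"`
	DefaultGroupID uuid.UUID   `json:"defaultGroupId"`
	GroupIDs       []uuid.UUID `json:"groupIds"`
}

func main() {
	g1, g2 := uuid.New(), uuid.New()
	out, _ := json.MarshalIndent(userOut{
		ID:             uuid.New(),
		Email:          "user1@homebox.test",
		DefaultGroupID: g1,
		GroupIDs:       []uuid.UUID{g1, g2}, // every membership, default included
	}, "", "  ")
	fmt.Println(string(out))
}
```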
func (r *UserRepository) GetOneID(ctx context.Context, id uuid.UUID) (UserOut, error) {
	return mapUserOutErr(r.db.User.Query().
		Where(user.ID(id)).
		WithGroup().
		WithGroups().
		Only(ctx))
}

func (r *UserRepository) GetOneEmail(ctx context.Context, email string) (UserOut, error) {
	return mapUserOutErr(r.db.User.Query().
		Where(user.EmailEqualFold(email)).
		WithGroup().
		WithGroups().
		Only(ctx),
	)
}

func (r *UserRepository) GetAll(ctx context.Context) ([]UserOut, error) {
	return mapUsersOutErr(r.db.User.Query().WithGroup().All(ctx))
	return mapUsersOutErr(r.db.User.Query().WithGroups().All(ctx))
}

func (r *UserRepository) Create(ctx context.Context, usr UserCreate) (UserOut, error) {
@@ -99,8 +110,9 @@ func (r *UserRepository) Create(ctx context.Context, usr UserCreate) (UserOut, e
		SetName(usr.Name).
		SetEmail(usr.Email).
		SetIsSuperuser(usr.IsSuperuser).
		SetGroupID(usr.GroupID).
		SetRole(role)
		SetDefaultGroupID(usr.DefaultGroupID).
		SetRole(role).
		AddGroupIDs(usr.DefaultGroupID)

	// Only set password if provided (non-nil)
	if usr.Password != nil {
@@ -126,10 +138,11 @@ func (r *UserRepository) CreateWithOIDC(ctx context.Context, usr UserCreate, iss
		SetName(usr.Name).
		SetEmail(usr.Email).
		SetIsSuperuser(usr.IsSuperuser).
		SetGroupID(usr.GroupID).
		SetDefaultGroupID(usr.DefaultGroupID).
		SetRole(role).
		SetOidcIssuer(issuer).
		SetOidcSubject(subject)
		SetOidcSubject(subject).
		AddGroupIDs(usr.DefaultGroupID)

	if usr.Password != nil {
		createQuery = createQuery.SetPassword(*usr.Password)
@@ -183,6 +196,6 @@ func (r *UserRepository) SetOIDCIdentity(ctx context.Context, uid uuid.UUID, iss
func (r *UserRepository) GetOneOIDC(ctx context.Context, issuer, subject string) (UserOut, error) {
	return mapUserOutErr(r.db.User.Query().
		Where(user.OidcIssuerEQ(issuer), user.OidcSubjectEQ(subject)).
		WithGroup().
		WithGroups().
		Only(ctx))
}

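Putting the repository changes together: creating a user now takes a DefaultGroupID, and Create both sets that field and adds the membership edge. A minimal usage sketch, assuming an initialized *UserRepository, an existing group ID, and an assumed import path for the repo package (the names and values are illustrative):

```go
package example

import (
	"context"

	"github.com/google/uuid"

	"github.com/sysadminsmedia/homebox/backend/internal/data/repo"
)

// createMember registers a user whose default group is groupID; per Create above,
// the same ID is also attached through the groups edge, so UserOut.GroupIDs
// will contain it.
func createMember(ctx context.Context, r *repo.UserRepository, groupID uuid.UUID) (repo.UserOut, error) {
	password := "change-me"
	return r.Create(ctx, repo.UserCreate{
		Name:           "Example User",
		Email:          "example@homebox.test",
		Password:       &password,
		IsSuperuser:    false,
		DefaultGroupID: groupID,
		IsOwner:        false,
	})
}
```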
@@ -4,6 +4,7 @@ import (
	"context"
	"testing"

	"github.com/google/uuid"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)
@@ -11,11 +12,11 @@ import (
func userFactory() UserCreate {
	password := fk.Str(10)
	return UserCreate{
		Name:        fk.Str(10),
		Email:       fk.Email(),
		Password:    &password,
		IsSuperuser: fk.Bool(),
		GroupID:     tGroup.ID,
		Name:           fk.Str(10),
		Email:          fk.Email(),
		Password:       &password,
		IsSuperuser:    fk.Bool(),
		DefaultGroupID: tGroup.ID,
	}
}

@@ -87,7 +88,8 @@ func TestUserRepo_GetAll(t *testing.T) {
			assert.Equal(t, usr.Email, usr2.Email)

			// Check groups are loaded
			assert.NotNil(t, usr2.GroupID)
			assert.NotEqual(t, uuid.Nil, usr2.DefaultGroupID)
			assert.Greater(t, len(usr2.GroupIDs), 0)
		}
	}
}

@@ -1,8 +1,7 @@
package config

const (
	DriverSqlite3  = "sqlite3"
	DriverPostgres = "postgres"
	DriverSqlite3 = "sqlite3"
)

type Storage struct {

@@ -43,7 +43,6 @@ export default defineConfig({
    nav: [
      { text: 'API Docs', link: '/en/api' },
      { text: 'Demo', link: 'https://demo.homebox.software' },
      { text: 'Blog', link: 'https://sysadminsjournal.com/tag/homebox/' }
    ],

    sidebar: {

@@ -1,418 +0,0 @@
/**
 * HomeBox Upgrade Verification Tests
 *
 * NOTE: These tests are ONLY meant to run in the upgrade-test workflow.
 * They require test data to be pre-created by the create-test-data.sh script.
 * These tests are stored in test/upgrade/ (not test/e2e/) to prevent them
 * from running during normal E2E test runs.
 */

import { expect, test } from "@playwright/test";
import * as fs from "fs";

// Load test data created by the setup script
const testDataPath = process.env.TEST_DATA_FILE || "/tmp/test-users.json";

interface TestUser {
  email: string;
  password: string;
  token: string;
  group: string;
}

interface TestData {
  users?: TestUser[];
  locations?: Record<string, string[]>;
  labels?: Record<string, string[]>;
  items?: Record<string, string[]>;
  notifiers?: Record<string, string[]>;
}

let testData: TestData = {};

test.beforeAll(() => {
  if (fs.existsSync(testDataPath)) {
    const rawData = fs.readFileSync(testDataPath, "utf-8");
    testData = JSON.parse(rawData);
    console.log("Loaded test data:", JSON.stringify(testData, null, 2));
  } else {
    console.error(`Test data file not found at ${testDataPath}`);
    throw new Error("Test data file not found");
  }
});

test.describe("HomeBox Upgrade Verification", () => {
  test("verify all users can log in", async ({ page }) => {
    // Test each user from the test data
    for (const user of testData.users || []) {
      await page.goto("/");
      await expect(page).toHaveURL("/");

      // Wait for login form to be ready
      await page.waitForSelector("input[type='text']", { state: "visible" });

      // Fill in login form
      await page.fill("input[type='text']", user.email);
      await page.fill("input[type='password']", user.password);
      await page.click("button[type='submit']");

      // Wait for navigation to home page
      await expect(page).toHaveURL("/home", { timeout: 10000 });

      console.log(`✓ User ${user.email} logged in successfully`);

      // Navigate back to login for next user
      await page.goto("/");
      await page.waitForSelector("input[type='text']", { state: "visible" });
    }
  });

  test("verify application version is displayed", async ({ page }) => {
    // Login as first user
    const firstUser = testData.users?.[0];
    if (!firstUser) {
      throw new Error("No users found in test data");
    }

    await page.goto("/");
    await page.fill("input[type='text']", firstUser.email);
    await page.fill("input[type='password']", firstUser.password);
    await page.click("button[type='submit']");
    await expect(page).toHaveURL("/home", { timeout: 10000 });

    // Look for version in footer or about section
    // The version might be in the footer or a settings page
    // Check if footer exists and contains version info
    const footer = page.locator("footer");
    if ((await footer.count()) > 0) {
      const footerText = await footer.textContent();
      console.log("Footer text:", footerText);

      // Version should be present in some form
      // This is a basic check - the version format may vary
      expect(footerText).toBeTruthy();
    }

    console.log("✓ Application version check complete");
  });

  test("verify locations are present", async ({ page }) => {
    const firstUser = testData.users?.[0];
    if (!firstUser) {
      throw new Error("No users found in test data");
    }

    await page.goto("/");
    await page.fill("input[type='text']", firstUser.email);
    await page.fill("input[type='password']", firstUser.password);
    await page.click("button[type='submit']");
    await expect(page).toHaveURL("/home", { timeout: 10000 });

    // Wait for page to load
    await page.waitForSelector("body", { state: "visible" });

    // Try to find locations link in navigation
    const locationsLink = page.locator("a[href*='location'], button:has-text('Locations')").first();

    if ((await locationsLink.count()) > 0) {
      await locationsLink.click();
      await page.waitForLoadState("networkidle");

      // Check if locations are displayed
      // The exact structure depends on the UI, but we should see location names
      const pageContent = await page.textContent("body");

      // Verify some of our test locations exist
      expect(pageContent).toContain("Living Room");
      console.log("✓ Locations verified");
    } else {
      console.log("! Could not find locations navigation - skipping detailed check");
    }
  });

  test("verify labels are present", async ({ page }) => {
    const firstUser = testData.users?.[0];
    if (!firstUser) {
      throw new Error("No users found in test data");
    }

    await page.goto("/");
    await page.fill("input[type='text']", firstUser.email);
    await page.fill("input[type='password']", firstUser.password);
    await page.click("button[type='submit']");
    await expect(page).toHaveURL("/home", { timeout: 10000 });

    await page.waitForSelector("body", { state: "visible" });

    // Try to find labels link in navigation
    const labelsLink = page.locator("a[href*='label'], button:has-text('Labels')").first();

    if ((await labelsLink.count()) > 0) {
      await labelsLink.click();
      await page.waitForLoadState("networkidle");

      const pageContent = await page.textContent("body");

      // Verify some of our test labels exist
      expect(pageContent).toContain("Electronics");
      console.log("✓ Labels verified");
    } else {
      console.log("! Could not find labels navigation - skipping detailed check");
    }
  });

  test("verify items are present", async ({ page }) => {
    const firstUser = testData.users?.[0];
    if (!firstUser) {
      throw new Error("No users found in test data");
    }

    await page.goto("/");
    await page.fill("input[type='text']", firstUser.email);
    await page.fill("input[type='password']", firstUser.password);
    await page.click("button[type='submit']");
    await expect(page).toHaveURL("/home", { timeout: 10000 });

    await page.waitForSelector("body", { state: "visible" });

    // Navigate to items list
    // This might be the home page or a separate items page
    const itemsLink = page.locator("a[href*='item'], button:has-text('Items')").first();

    if ((await itemsLink.count()) > 0) {
      await itemsLink.click();
      await page.waitForLoadState("networkidle");
    }

    const pageContent = await page.textContent("body");

    // Verify some of our test items exist
    expect(pageContent).toContain("Laptop Computer");
    console.log("✓ Items verified");
  });

  test("verify notifier is present", async ({ page }) => {
    const firstUser = testData.users?.[0];
    if (!firstUser) {
      throw new Error("No users found in test data");
    }

    await page.goto("/");
    await page.fill("input[type='text']", firstUser.email);
    await page.fill("input[type='password']", firstUser.password);
    await page.click("button[type='submit']");
    await expect(page).toHaveURL("/home", { timeout: 10000 });

    await page.waitForSelector("body", { state: "visible" });

    // Navigate to settings or profile
    // Notifiers are typically in settings
    const settingsLink = page.locator("a[href*='setting'], a[href*='profile'], button:has-text('Settings')").first();

    if ((await settingsLink.count()) > 0) {
      await settingsLink.click();
      await page.waitForLoadState("networkidle");

      // Look for notifiers section
      const notifiersLink = page.locator("a:has-text('Notif'), button:has-text('Notif')").first();

      if ((await notifiersLink.count()) > 0) {
        await notifiersLink.click();
        await page.waitForLoadState("networkidle");

        const pageContent = await page.textContent("body");

        // Verify our test notifier exists
        expect(pageContent).toContain("TESTING");
        console.log("✓ Notifier verified");
      } else {
        console.log("! Could not find notifiers section - skipping detailed check");
      }
    } else {
      console.log("! Could not find settings navigation - skipping notifier check");
    }
  });

  test("verify attachments are present for items", async ({ page }) => {
    const firstUser = testData.users?.[0];
    if (!firstUser) {
      throw new Error("No users found in test data");
    }

    await page.goto("/");
    await page.fill("input[type='text']", firstUser.email);
    await page.fill("input[type='password']", firstUser.password);
    await page.click("button[type='submit']");
    await expect(page).toHaveURL("/home", { timeout: 10000 });

    await page.waitForSelector("body", { state: "visible" });

    // Search for "Laptop Computer" which should have attachments
    const searchInput = page.locator("input[type='search'], input[placeholder*='Search']").first();

    if ((await searchInput.count()) > 0) {
      await searchInput.fill("Laptop Computer");
      await page.waitForLoadState("networkidle");

      // Click on the laptop item
      const laptopItem = page.locator("text=Laptop Computer").first();
      await laptopItem.click();
      await page.waitForLoadState("networkidle");

      // Look for attachments section
      const pageContent = await page.textContent("body");

      // Check for attachment indicators (could be files, documents, attachments, etc.)
      const hasAttachments =
        pageContent?.includes("laptop-receipt") ||
        pageContent?.includes("laptop-warranty") ||
        pageContent?.includes("attachment") ||
        pageContent?.includes("Attachment") ||
        pageContent?.includes("document");

      expect(hasAttachments).toBeTruthy();
      console.log("✓ Attachments verified");
    } else {
      console.log("! Could not find search - trying direct navigation");

      // Try alternative: look for items link and browse
      const itemsLink = page.locator("a[href*='item'], button:has-text('Items')").first();
      if ((await itemsLink.count()) > 0) {
        await itemsLink.click();
        await page.waitForLoadState("networkidle");

        const laptopLink = page.locator("text=Laptop Computer").first();
        if ((await laptopLink.count()) > 0) {
          await laptopLink.click();
          await page.waitForLoadState("networkidle");

          const pageContent = await page.textContent("body");
          const hasAttachments =
            pageContent?.includes("laptop-receipt") ||
            pageContent?.includes("laptop-warranty") ||
            pageContent?.includes("attachment");

          expect(hasAttachments).toBeTruthy();
          console.log("✓ Attachments verified via direct navigation");
        }
      }
    }
  });

  test("verify theme can be adjusted", async ({ page }) => {
    const firstUser = testData.users?.[0];
    if (!firstUser) {
      throw new Error("No users found in test data");
    }

    await page.goto("/");
    await page.fill("input[type='text']", firstUser.email);
    await page.fill("input[type='password']", firstUser.password);
    await page.click("button[type='submit']");
    await expect(page).toHaveURL("/home", { timeout: 10000 });

    await page.waitForSelector("body", { state: "visible" });

    // Look for theme toggle (usually a sun/moon icon or settings)
    // Common selectors for theme toggles
    const themeToggle = page
      .locator(
        "button[aria-label*='theme'], button[aria-label*='Theme'], " +
          "button:has-text('Dark'), button:has-text('Light'), " +
          "[data-theme-toggle], .theme-toggle"
      )
      .first();

    if ((await themeToggle.count()) > 0) {
      // Get initial theme state (could be from class, attribute, or computed style)
      const bodyBefore = page.locator("body");
      const classNameBefore = (await bodyBefore.getAttribute("class")) || "";

      // Click theme toggle
      await themeToggle.click();
      // Wait for theme change to complete
      await page.waitForTimeout(500);

      // Get theme state after toggle
      const classNameAfter = (await bodyBefore.getAttribute("class")) || "";

      // Verify that something changed
      expect(classNameBefore).not.toBe(classNameAfter);

      console.log(`✓ Theme toggle working (${classNameBefore} -> ${classNameAfter})`);
    } else {
      // Try to find theme in settings
      const settingsLink = page.locator("a[href*='setting'], a[href*='profile']").first();

      if ((await settingsLink.count()) > 0) {
        await settingsLink.click();
        await page.waitForLoadState("networkidle");

        const themeOption = page.locator("select[name*='theme'], button:has-text('Theme')").first();

        if ((await themeOption.count()) > 0) {
          console.log("✓ Theme settings found");
        } else {
          console.log("! Could not find theme toggle - feature may not be easily accessible");
        }
      } else {
        console.log("! Could not find theme controls");
      }
    }
  });

  test("verify data counts match expectations", async ({ page }) => {
    const firstUser = testData.users?.[0];
    if (!firstUser) {
      throw new Error("No users found in test data");
    }

    await page.goto("/");
    await page.fill("input[type='text']", firstUser.email);
    await page.fill("input[type='password']", firstUser.password);
    await page.click("button[type='submit']");
    await expect(page).toHaveURL("/home", { timeout: 10000 });

    await page.waitForSelector("body", { state: "visible" });

    // Check that we have the expected number of items for group 1 (5 items)
    const pageContent = await page.textContent("body");

    // Look for item count indicators
    // This is dependent on the UI showing counts
    console.log("✓ Logged in and able to view dashboard");

    // Verify at least that the page loaded and shows some content
    expect(pageContent).toBeTruthy();
    if (pageContent) {
      expect(pageContent.length).toBeGreaterThan(100);
    }
  });

  test("verify second group users and data isolation", async ({ page }) => {
    // Login as user from group 2
    const group2User = testData.users?.find(u => u.group === "2");
    if (!group2User) {
      console.log("! No group 2 users found - skipping isolation test");
      return;
    }

    await page.goto("/");
    await page.fill("input[type='text']", group2User.email);
    await page.fill("input[type='password']", group2User.password);
    await page.click("button[type='submit']");
    await expect(page).toHaveURL("/home", { timeout: 10000 });

    await page.waitForSelector("body", { state: "visible" });

    const pageContent = await page.textContent("body");

    // Verify group 2 can see their items
    expect(pageContent).toContain("Monitor");

    // Verify group 2 cannot see group 1 items
    expect(pageContent).not.toContain("Laptop Computer");

    console.log("✓ Data isolation verified between groups");
  });
});