Compare commits

..

393 Commits

Author SHA1 Message Date
copilot-swe-agent[bot]
afd7a10003 Fix TypeScript null check in upgrade-verification test
Add null check for pageContent before accessing length property

Co-authored-by: katosdev <7927609+katosdev@users.noreply.github.com>
2025-12-27 14:34:59 +00:00
copilot-swe-agent[bot]
8eedd1e39d Fix ESLint errors in upgrade-verification.spec.ts
- Remove unused 'path' import
- Replace 'any' types with proper TypeScript interfaces
- Fix all Prettier formatting issues

Co-authored-by: katosdev <7927609+katosdev@users.noreply.github.com>
2025-12-27 14:32:57 +00:00
copilot-swe-agent[bot]
fedeb1a7e5 Add proper GITHUB_TOKEN permissions to workflow
Set minimal required permissions (contents:read, packages:read) to follow security best practices

Co-authored-by: katosdev <7927609+katosdev@users.noreply.github.com>
2025-12-27 14:14:52 +00:00
copilot-swe-agent[bot]
69b31a3be5 Improve test reliability and fix security issues
- Replace waitForTimeout with waitForSelector and waitForLoadState
- Remove eval security risk in bash script
- Use proper wait mechanisms for better test reliability

Co-authored-by: katosdev <7927609+katosdev@users.noreply.github.com>
2025-12-27 14:12:22 +00:00
copilot-swe-agent[bot]
31d306ca05 Add comprehensive documentation for upgrade test workflow
Co-authored-by: katosdev <7927609+katosdev@users.noreply.github.com>
2025-12-27 14:09:37 +00:00
copilot-swe-agent[bot]
1bfb716cea Add upgrade test workflow with data generation and verification
Co-authored-by: katosdev <7927609+katosdev@users.noreply.github.com>
2025-12-27 14:08:13 +00:00
copilot-swe-agent[bot]
13b1524c56 Initial plan for upgrade test workflow
Co-authored-by: katosdev <7927609+katosdev@users.noreply.github.com>
2025-12-27 14:06:05 +00:00
copilot-swe-agent[bot]
b18599b6f4 Initial plan 2025-12-27 14:02:31 +00:00
Matthew Kilgore
473027c1ae Fix notifiers getting wiped 2025-12-26 17:27:18 -05:00
Matthew Kilgore
3a77440996 Fix flip flopped columns 2025-12-26 15:50:04 -05:00
Matthew Kilgore
731765c36c Make sure the right columns get migrated into the correct columns 2025-12-26 15:09:02 -05:00
Matthew Kilgore
a86b1bd17b Update dependencies 2025-12-26 09:51:33 -05:00
Matthew Kilgore
064b298968 Merge branch 'main' into main-weblate
# Conflicts:
#	frontend/locales/en.json
2025-12-26 09:28:10 -05:00
Tonya
2638f218f3 fix: templates that don't have a location set (#1160) 2025-12-24 17:30:46 +00:00
Dan
0f4f398b5a Added documentation for the external label service feature. (#1018)
* Added documentation for the external label service feature. Re-ordered the columns in the config page to make it easier to read.

* Update docs/en/configure/index.md

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

---------

Co-authored-by: Matt <tankerkiller125@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
2025-12-23 21:36:19 +00:00
Copilot
545993a8aa Fix Windows attachment path encoding in blob storage operations (#1144)
* Initial plan

* Initial plan for fixing Windows attachment path issue

Co-authored-by: tankerkiller125 <3457368+tankerkiller125@users.noreply.github.com>

* Fix Windows attachment path encoding issue by normalizing to forward slashes

Co-authored-by: tankerkiller125 <3457368+tankerkiller125@users.noreply.github.com>

* Refactor path normalization into helper function per code review

Co-authored-by: tankerkiller125 <3457368+tankerkiller125@users.noreply.github.com>

* Update progress - all checks complete

Co-authored-by: tankerkiller125 <3457368+tankerkiller125@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: tankerkiller125 <3457368+tankerkiller125@users.noreply.github.com>
2025-12-23 10:27:42 -05:00
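
The Windows attachment-path fix above normalizes stored blob paths to forward slashes. A minimal Go sketch of that idea, with a hypothetical helper name rather than the project's actual function:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// normalizeBlobPath is a hypothetical helper illustrating the fix: paths built
// on Windows contain backslashes, so convert them to forward slashes before
// storing or using them as blob-storage keys.
func normalizeBlobPath(p string) string {
	return filepath.ToSlash(p)
}

func main() {
	// filepath.ToSlash replaces the running OS's path separator with '/', so this
	// prints "attachments/1234/receipt.pdf" on Windows and is a no-op on Linux.
	fmt.Println(normalizeBlobPath(filepath.Join("attachments", "1234", "receipt.pdf")))
}
```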
tonyaellie
a1947dd09e feat: autosave after image upload 2025-12-22 23:46:29 +00:00
tonyaellie
018f1f5977 fix: use logical sorting for locations 2025-12-22 23:34:29 +00:00
tonyaellie
9a9e3d462e feat: add a clear button for selectors and stop create modal overflow 2025-12-22 23:24:01 +00:00
Weblate
fc8b6f0dcf Translated using Weblate (German)
Currently translated at 100.0% (609 of 609 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 95.4% (581 of 609 strings)

Translated using Weblate (Russian)

Currently translated at 100.0% (609 of 609 strings)

Translated using Weblate (Russian)

Currently translated at 100.0% (609 of 609 strings)

Translated using Weblate (Russian)

Currently translated at 100.0% (609 of 609 strings)

Translated using Weblate (Russian)

Currently translated at 100.0% (609 of 609 strings)

Translated using Weblate (Russian)

Currently translated at 100.0% (609 of 609 strings)

Translated using Weblate (Russian)

Currently translated at 100.0% (609 of 609 strings)

Translated using Weblate (Russian)

Currently translated at 96.2% (586 of 609 strings)

Translated using Weblate (Spanish)

Currently translated at 100.0% (609 of 609 strings)

Translated using Weblate (Spanish)

Currently translated at 100.0% (609 of 609 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (609 of 609 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (609 of 609 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (609 of 609 strings)

Translated using Weblate (Czech)

Currently translated at 100.0% (609 of 609 strings)

Translated using Weblate (Czech)

Currently translated at 99.8% (608 of 609 strings)

Translated using Weblate (German)

Currently translated at 99.3% (605 of 609 strings)

Translated using Weblate (German)

Currently translated at 99.1% (604 of 609 strings)

Translated using Weblate (German)

Currently translated at 99.1% (604 of 609 strings)

Translated using Weblate (German)

Currently translated at 90.3% (550 of 609 strings)

Translated using Weblate (German)

Currently translated at 90.3% (550 of 609 strings)

Translated using Weblate (German)

Currently translated at 90.1% (549 of 609 strings)

Translated using Weblate (German)

Currently translated at 89.9% (548 of 609 strings)

Translated using Weblate (Indonesian)

Currently translated at 60.0% (366 of 609 strings)

Translated using Weblate (Thai)

Currently translated at 22.0% (134 of 609 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 80.4% (490 of 609 strings)

Translated using Weblate (Slovak)

Currently translated at 84.8% (517 of 609 strings)

Translated using Weblate (Finnish)

Currently translated at 53.3% (325 of 609 strings)

Translated using Weblate (Ukrainian)

Currently translated at 59.7% (364 of 609 strings)

Translated using Weblate (English)

Currently translated at 100.0% (609 of 609 strings)

Translated using Weblate (Greek)

Currently translated at 0.3% (2 of 551 strings)

Added translation using Weblate (Greek)

Translated using Weblate (Italian)

Currently translated at 99.8% (550 of 551 strings)

Translated using Weblate (Russian)

Currently translated at 99.8% (550 of 551 strings)

Translated using Weblate (Telugu)

Currently translated at 0.9% (5 of 551 strings)

Translated using Weblate (Telugu)

Currently translated at 0.9% (5 of 551 strings)

Added translation using Weblate (Telugu)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 100.0% (551 of 551 strings)

Translated using Weblate (Bosnian)

Currently translated at 21.9% (121 of 551 strings)

Translated using Weblate (Danish)

Currently translated at 100.0% (551 of 551 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 100.0% (551 of 551 strings)

Translated using Weblate (Turkish)

Currently translated at 91.6% (505 of 551 strings)

Translated using Weblate (Turkish)

Currently translated at 89.2% (492 of 551 strings)

Translated using Weblate (Turkish)

Currently translated at 89.2% (492 of 551 strings)

Translated using Weblate (Russian)

Currently translated at 99.2% (547 of 551 strings)

Translated using Weblate (Ukrainian)

Currently translated at 65.8% (363 of 551 strings)

Translated using Weblate (Ukrainian)

Currently translated at 64.4% (355 of 551 strings)

Translated using Weblate (Ukrainian)

Currently translated at 62.7% (346 of 551 strings)

Translated using Weblate (Ukrainian)

Currently translated at 62.4% (344 of 551 strings)

Translated using Weblate (Ukrainian)

Currently translated at 61.1% (337 of 551 strings)

Translated using Weblate (Ukrainian)

Currently translated at 60.4% (333 of 551 strings)

Translated using Weblate (Ukrainian)

Currently translated at 59.3% (327 of 551 strings)

Translated using Weblate (Thai)

Currently translated at 24.1% (133 of 551 strings)

Translated using Weblate (Thai)

Currently translated at 24.1% (133 of 551 strings)

Translated using Weblate (Portuguese (Portugal))

Currently translated at 100.0% (551 of 551 strings)

Translated using Weblate (French)

Currently translated at 94.1% (519 of 551 strings)

Translated using Weblate (Polish)

Currently translated at 99.8% (550 of 551 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (551 of 551 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (551 of 551 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (551 of 551 strings)

Translated using Weblate (German)

Currently translated at 99.8% (550 of 551 strings)

Translated using Weblate (German)

Currently translated at 99.8% (550 of 551 strings)

Translated using Weblate (German)

Currently translated at 99.8% (550 of 551 strings)

Translated using Weblate (Czech)

Currently translated at 100.0% (551 of 551 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 98.9% (545 of 551 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 98.9% (545 of 551 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 98.9% (545 of 551 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 98.9% (545 of 551 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 98.9% (545 of 551 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 98.9% (545 of 551 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 98.9% (545 of 551 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 98.9% (545 of 551 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 98.9% (545 of 551 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (551 of 551 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (551 of 551 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (551 of 551 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (551 of 551 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (551 of 551 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (551 of 551 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (551 of 551 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (551 of 551 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (551 of 551 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (551 of 551 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (551 of 551 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (551 of 551 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (551 of 551 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (551 of 551 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (551 of 551 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (551 of 551 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (551 of 551 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (551 of 551 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (551 of 551 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (551 of 551 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (551 of 551 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (551 of 551 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (551 of 551 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (551 of 551 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (551 of 551 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (551 of 551 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (551 of 551 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (551 of 551 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (551 of 551 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (551 of 551 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (551 of 551 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (551 of 551 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (551 of 551 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (551 of 551 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (551 of 551 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (551 of 551 strings)

Translated using Weblate (Spanish)

Currently translated at 100.0% (551 of 551 strings)

Translated using Weblate (Portuguese (Portugal))

Currently translated at 100.0% (550 of 550 strings)

Translated using Weblate (Polish)

Currently translated at 98.7% (543 of 550 strings)

Translated using Weblate (Polish)

Currently translated at 98.7% (543 of 550 strings)

Translated using Weblate (Polish)

Currently translated at 98.7% (543 of 550 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 98.5% (542 of 550 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 98.5% (542 of 550 strings)

Translated using Weblate (Portuguese (Portugal))

Currently translated at 98.3% (541 of 550 strings)

Translated using Weblate (Portuguese (Portugal))

Currently translated at 98.3% (541 of 550 strings)

Translated using Weblate (Indonesian)

Currently translated at 66.3% (365 of 550 strings)

Translated using Weblate (Indonesian)

Currently translated at 66.3% (365 of 550 strings)

Translated using Weblate (Indonesian)

Currently translated at 66.3% (365 of 550 strings)

Translated using Weblate (Indonesian)

Currently translated at 66.3% (365 of 550 strings)

Translated using Weblate (Indonesian)

Currently translated at 66.3% (365 of 550 strings)

Translated using Weblate (Indonesian)

Currently translated at 66.3% (365 of 550 strings)

Translated using Weblate (Indonesian)

Currently translated at 66.3% (365 of 550 strings)

Translated using Weblate (Indonesian)

Currently translated at 66.3% (365 of 550 strings)

Translated using Weblate (Portuguese (Portugal))

Currently translated at 93.4% (514 of 550 strings)

Translated using Weblate (Swedish)

Currently translated at 68.3% (376 of 550 strings)

Translated using Weblate (Swedish)

Currently translated at 68.3% (376 of 550 strings)

Translated using Weblate (Swedish)

Currently translated at 68.3% (376 of 550 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 98.1% (540 of 550 strings)

Translated using Weblate (Spanish)

Currently translated at 100.0% (550 of 550 strings)

Translated using Weblate (Spanish)

Currently translated at 100.0% (550 of 550 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (550 of 550 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (550 of 550 strings)

Translated using Weblate (Czech)

Currently translated at 100.0% (550 of 550 strings)

Translated using Weblate (Czech)

Currently translated at 100.0% (550 of 550 strings)

Translated using Weblate (Czech)

Currently translated at 100.0% (550 of 550 strings)

Translated using Weblate (German)

Currently translated at 96.0% (528 of 550 strings)

Translated using Weblate (German)

Currently translated at 96.0% (528 of 550 strings)

Translated using Weblate (Turkish)

Currently translated at 87.7% (482 of 549 strings)

Translated using Weblate (Italian)

Currently translated at 100.0% (518 of 518 strings)

Translated using Weblate (Spanish)

Currently translated at 100.0% (518 of 518 strings)

Translated using Weblate (Spanish)

Currently translated at 100.0% (518 of 518 strings)

Translated using Weblate (Spanish)

Currently translated at 100.0% (518 of 518 strings)

Translated using Weblate (Spanish)

Currently translated at 100.0% (518 of 518 strings)

Translated using Weblate (Polish)

Currently translated at 100.0% (518 of 518 strings)

Translated using Weblate (Arabic)

Currently translated at 0.7% (4 of 518 strings)

Translated using Weblate (Arabic)

Currently translated at 0.5% (3 of 518 strings)

Added translation using Weblate (Arabic)

Translated using Weblate (Thai)

Currently translated at 22.9% (119 of 518 strings)

Translated using Weblate (Czech)

Currently translated at 100.0% (518 of 518 strings)

Translated using Weblate (Swedish)

Currently translated at 71.2% (369 of 518 strings)

Translated using Weblate (Swedish)

Currently translated at 71.2% (369 of 518 strings)

Co-authored-by: Adam Havránek <adamhavra@seznam.cz>
Co-authored-by: Aniruddh Kotte <aniruddhkotte@gmail.com>
Co-authored-by: BoneGear <bonegear@hotmail.com>
Co-authored-by: DevHrytsan <3axapHrytsan@gmail.com>
Co-authored-by: Eisa Al Shamsi <awwase@gmail.com>
Co-authored-by: Hannes Salen <hannes.salen@gmail.com>
Co-authored-by: Heine Olsen <olsen10051988@gmail.com>
Co-authored-by: Henrique dos Santos Wisniewski <henriqueswisniewski@gmail.com>
Co-authored-by: Jackxwb <xwb9606@163.com>
Co-authored-by: Jan Fader <jan.fader@web.de>
Co-authored-by: JorgeS15 <jorgea15santos@gmail.com>
Co-authored-by: Loffa <jesperfalk94@gmail.com>
Co-authored-by: Marcelo Sandrini <sandrini.marcelo@gmail.com>
Co-authored-by: Matthew Kilgore <matthew@kilgore.dev>
Co-authored-by: Matvey <mrspanky@yandex.ru>
Co-authored-by: Mikolaj Wolicki <MIKOLAJW1997@gmail.com>
Co-authored-by: Mirad Maglic <mirad.maglic@gmail.com>
Co-authored-by: Muhammad Ikhsan <pararang@gmail.com>
Co-authored-by: Mutagenic <mkardas@gmail.com>
Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: Ricardo González <notorius28@gmail.com>
Co-authored-by: Robert Eggl <robert@eggl.dev>
Co-authored-by: Sara Wattanasombat <saraten2@gmail.com>
Co-authored-by: Simone Girardi <s.girardi92@gmail.com>
Co-authored-by: Slydite4 <39199098+Slydite4@users.noreply.github.com>
Co-authored-by: Stratos Palaiologos <stpa03@betssongroup.com>
Co-authored-by: Supphakorn <supphakorn5343@gmail.com>
Co-authored-by: Weblate <noreply-mt-weblate@weblate.org>
Co-authored-by: Weblate <noreply@weblate.org>
Co-authored-by: WilliamStark <yujinghao007@163.com>
Co-authored-by: Yao Yimeng <yym900902@gmail.com>
Co-authored-by: akrstlv <zmilex@gmail.com>
Co-authored-by: arsenius88 <arsenovich_andr@ukr.net>
Co-authored-by: buzz <buzz.eclair@gmail.com>
Co-authored-by: dARK raVEr <Dark.Raver@gmx.net>
Co-authored-by: efe <vastly-fax-brim@duck.com>
Co-authored-by: fjrefluxx <julianzobel@gmail.com>
Co-authored-by: jesper rezler lang <jesper.rezler.lang@gmail.com>
Co-authored-by: jjxxzz <jaro689@gmail.com>
Co-authored-by: noxmyn <vladcraft93@gmail.com>
Co-authored-by: sg4r3z <giovannigln@gmail.com>
Co-authored-by: swedishpete <nyhetsutskick@outlook.com>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ar/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/bs/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/cs/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/da/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/de/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/el/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/en/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/es/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/fi/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/fr/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/id/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/it/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/nb_NO/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/nl/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/pl/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/pt_BR/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/pt_PT/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ru/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/sk/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/sv/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/te/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/th/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/tr/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/uk/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/zh_Hans/
Translation: Homebox/Frontend
2025-12-22 11:19:55 +00:00
tonyaellie
37890c2a22 docs: update OIDC configuration details 2025-12-22 11:19:49 +00:00
Tonya
096b682f0a Improve oidc docs and fix attachment issue (#1153)
* fix: sort auth issues for oidc

* feat: improve oidc docs
2025-12-21 22:11:38 +00:00
Tonya
e4d8bb2ada chore: use example.com for example
better safe than sorry
2025-12-20 21:50:44 +00:00
Katos
3becf046e6 Merge pull request #1147 from sysadminsmedia/katos/docs-variable
Update max file upload environment variable
2025-12-20 16:01:04 +00:00
Katos
a21b3257d4 Update max file upload environment variable 2025-12-20 15:57:14 +00:00
Robert Eggl
5f9ab577bb fix: request camera permission in ScannerModal (#1113)
* feat: request camera permission in ScannerModal

* chore: simplify source code
2025-12-19 21:47:37 +00:00
Robert Eggl
0a969bb64d fix(sidebar): prevent dropdown menu layout shift on hover (#1116) 2025-12-19 21:38:06 +00:00
Sarun Nuntaviriyakul
2d1d3d927b Update log level options in configuration documentation (#1127) 2025-12-12 13:33:12 -05:00
Matthew Kilgore
540028a22e fix: broken docker.io attestation 2025-12-11 22:24:11 -05:00
Nelson Cabete
14b0d51894 Update docs to reference disable_https instead of disableSsl on Storage Configuration page (#1124)
Co-authored-by: Nelson Cabete <me@ncabete.com>
2025-12-09 20:56:05 -05:00
Matt
4334f926c0 Fix postgres nullable password migration to be at end 2025-12-09 14:44:53 -05:00
Robert Eggl
1088972ff0 docs: add missing barcode spider env var (#1114) 2025-12-08 20:17:45 -05:00
Matthew Kilgore
55e247ac71 Fix missing postgres OIDC migration 2025-12-08 20:10:36 -05:00
Matthew Kilgore
05a2700718 Merge remote-tracking branch 'origin/main' 2025-12-06 18:14:12 -05:00
Matthew Kilgore
06c11cdcd5 Ensure options are up to date in docs 2025-12-06 18:14:06 -05:00
Logan Miller
cc66330a74 feat: Add item templates feature (#435) (#1099)
* feat: Add item templates feature (#435)

   Add ability to create and manage item templates for quick item creation.
   Templates store default values and custom fields that can be applied
   when creating new items.

   Backend changes:
   - New ItemTemplate and TemplateField Ent schemas
   - Template CRUD API endpoints
   - Create item from template endpoint

   Frontend changes:
   - Templates management page with create/edit/delete
   - Template selector in item creation modal
   - 'Use as Template' action on item detail page
   - Templates link in navigation menu

* refactor: Improve template item creation with a single query

- Add `CreateFromTemplate` method to ItemsRepository that creates items with all template data (including custom fields) in a single atomic transaction, replacing the previous two-phase create-then-update pattern
- Fix `GetOne` to require group ID parameter so templates can only be accessed by users in the owning group (security fix)
- Simplify `HandleItemTemplatesCreateItem` handler using the new transactional method

* Refactor item template types and formatting

Updated type annotations in CreateModal.vue to use specific ItemTemplate types instead of 'any'. Improved code formatting for template fields and manufacturer display. Also refactored warranty field logic in item details page for better readability. This also resolves the linter issues that the bot on GitHub keeps flagging.

* Add 'id' property to template fields

Introduces an 'id' property to each field object in CreateModal.vue and item details page to support unique identification of fields. This change prepares the codebase for future enhancements that may require field-level identification.

* Removed redundant SQL migrations.

Removed redundant SQL migrations per @tankerkiller125's findings.

* Updates to PR #1099.

Regarding PR #1099: fixed an issue causing a conflict with GUIDs and old rows in the migration files.

* Add new fields and location edge to ItemTemplate

Addresses recommendations from @tonyaellie.

* Relocated add template button
* Added more default fields to the template
* Added translation of all strings (think so?)
* Make oval buttons round
* Added duplicate button to the template (this required a rewrite of the migration files, I made sure only 1 exists per DB type)
* Added a Save as template button to an item detail view (this creates a template with all the current data of that item)
* Changed all occurrences of space to gap and flex where applicable.
* Made template selection persistent after item created.
* Collapsible template info on creation view.

* Updates to translation and fix for labels/locations

I also added a test in here because I keep missing small function tests. That should prevent that from happening again.

* Linted

* Bring up to date with main, fix some lint/type check issues

* In theory fix playwright tests

* Fix defaults being unable to be nullable/empty (and thus limiting flexibility)

* Last few fixes I think

* Forgot to fix the golang tests

---------

Co-authored-by: Matthew Kilgore <matthew@kilgore.dev>
2025-12-06 16:21:43 -05:00
Matthew Kilgore
3671ba2ba1 Fix merge digest for other docker images 2025-12-06 16:00:31 -05:00
Matthew Kilgore
8898dd03f7 Try to fix merge digest 2025-12-06 15:33:16 -05:00
Matthew Kilgore
bd8708ce38 Try max provenance? 2025-12-06 15:02:04 -05:00
Matthew Kilgore
a0589b7629 Use our own buildkit and binfmt clones 2025-12-06 14:49:26 -05:00
Matthew Kilgore
0f4a686041 Forgot syft needs 2025-12-06 14:28:20 -05:00
Matthew Kilgore
848b444aef Fix postgres migration, and attempt new provenance publishing 2025-12-06 14:22:46 -05:00
Matthew Kilgore
e6e6056897 Update dependencies 2025-12-06 10:23:23 -05:00
Jeff Rescignano
f36756d98e Add support for SSO / OpenID Connect (OIDC) (#996)
* ent re-generation

* add oidc integration

* document oidc integration

* go fmt

* address backend linter findings

* run prettier on index.vue

* State cookie domain can mismatch when Hostname override is used (breaks CSRF check). Add SameSite.

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* Delete state cookie with matching domain and MaxAge; add SameSite.

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* Fix endpoint path in comments and error to include /api/v1.

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* Also use request context when verifying the ID token.

* Do not return raw auth errors to clients (user-enumeration risk).

* consistently set cookie the same way across function

* remove baseURL after declaration

* only enable OIDC routes if OIDC is enabled

* swagger doc for failure

* Only block when provider=local; move the check after parsing provider

* fix extended session comment

* reduce pii logging

* further reduce pii logging

* remove unused DiscoveryDocument

* remove unused offline_access from default oidc scopes

* remove offline access from AuthCodeURL

* support host from X-Forwarded-Host

* set sane default claim names if unset

* error strings should not be capitalized

* Revert "run prettier on index.vue"

This reverts commit aa22330a23.

* Add timeout to provider discovery

* Split scopes robustly

* refactor hostname calculation

* address frontend prettier findings

* add property oidc on type APISummary

* LoginOIDC: Normalize inputs, only create if not found

* add oidc email verification

* oidc handleCallback: clear state cookie before each return

* add support for oidc nonce parameter

* Harden first-login race: handle concurrent creates gracefully and fix log key.

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* support email verified claim as bool or string

* fail fast on empty email

* PKCE verifier

* fix: add timing delay to attachment test to resolve CI race condition

The attachment test was failing intermittently in CI due to a race condition
between attachment creation and retrieval. Adding a small 100ms delay after
attachment creation ensures the file system and database operations complete
before the test attempts to verify the attachment exists.

* Revert "fix: add timing delay to attachment test to resolve CI race condition"

This reverts commit 4aa8b2a0d829753e8d2dd1ba76f4b1e04e28c45e.

* oidc error state, use ref

* rename oidc.force to oidc.authRedirect

* remove hardcoded oidc error timeout

* feat: sub/iss based identity matching and userinfo endpoint collection

---------

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: Matthew Kilgore <matthew@kilgore.dev>
2025-12-06 10:16:05 -05:00
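
Several bullets in the OIDC commit above concern the state cookie (adding SameSite, bounding its lifetime, and deleting it with matching attributes in the callback). A minimal Go sketch of that pattern, with a hypothetical cookie name; this is not the project's actual handler code:

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

// setStateCookie stores the OIDC state with SameSite and a short MaxAge so the
// CSRF check has a bounded window. The cookie name is an assumption.
func setStateCookie(w http.ResponseWriter, state string) {
	http.SetCookie(w, &http.Cookie{
		Name:     "hb_oidc_state",
		Value:    state,
		Path:     "/",
		MaxAge:   300,
		HttpOnly: true,
		Secure:   true,
		SameSite: http.SameSiteLaxMode,
	})
}

// clearStateCookie deletes the cookie with the same Path/SameSite attributes
// and a negative MaxAge, so the browser actually removes it in the callback.
func clearStateCookie(w http.ResponseWriter) {
	http.SetCookie(w, &http.Cookie{
		Name:     "hb_oidc_state",
		Value:    "",
		Path:     "/",
		MaxAge:   -1,
		HttpOnly: true,
		Secure:   true,
		SameSite: http.SameSiteLaxMode,
	})
}

func main() {
	rec := httptest.NewRecorder()
	setStateCookie(rec, "opaque-random-state")
	clearStateCookie(rec)
	fmt.Println(rec.Header()["Set-Cookie"])
}
```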
Matthew Kilgore
bfc5ffa76b Add gitattributes to maybe cut down on terrible Github review pages 2025-11-29 23:16:17 -05:00
Matthew Kilgore
1625354a70 Add gitlab CI/CD runner file 2025-11-29 17:02:59 -05:00
Matthew Kilgore
d1016845a9 Add gitlab CI/CD runner file 2025-11-29 17:02:02 -05:00
Matthew Kilgore
54ce340ac4 Add gitlab CI/CD runner file 2025-11-29 16:58:53 -05:00
dependabot[bot]
8c04ad7fe8 Bump the npm_and_yarn group across 2 directories with 1 update (#1097)
Bumps the npm_and_yarn group with 1 update in the / directory: [node-forge](https://github.com/digitalbazaar/forge).
Bumps the npm_and_yarn group with 1 update in the /frontend directory: [node-forge](https://github.com/digitalbazaar/forge).


Updates `node-forge` from 1.3.1 to 1.3.2
- [Changelog](https://github.com/digitalbazaar/forge/blob/main/CHANGELOG.md)
- [Commits](https://github.com/digitalbazaar/forge/compare/v1.3.1...v1.3.2)

Updates `node-forge` from 1.3.1 to 1.3.2
- [Changelog](https://github.com/digitalbazaar/forge/blob/main/CHANGELOG.md)
- [Commits](https://github.com/digitalbazaar/forge/compare/v1.3.1...v1.3.2)

---
updated-dependencies:
- dependency-name: node-forge
  dependency-version: 1.3.2
  dependency-type: indirect
  dependency-group: npm_and_yarn
- dependency-name: node-forge
  dependency-version: 1.3.2
  dependency-type: indirect
  dependency-group: npm_and_yarn
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-27 11:18:10 -05:00
Tonya
78d05bb155 disable sort (via table) on item page (#1087)
* fix: disable sort on item page

* fix: type issue
2025-11-24 01:34:37 +00:00
dependabot[bot]
3a648aa279 Bump golang.org/x/crypto (#1088)
Bumps the go_modules group with 1 update in the /backend directory: [golang.org/x/crypto](https://github.com/golang/crypto).


Updates `golang.org/x/crypto` from 0.44.0 to 0.45.0
- [Commits](https://github.com/golang/crypto/compare/v0.44.0...v0.45.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-version: 0.45.0
  dependency-type: direct:production
  dependency-group: go_modules
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-20 07:09:27 -05:00
Alan Mooiman
35a83c29af Fix auto-zoom on iOS devices (#1029)
* Remove text-sm from inputs

* Update frontend/components/ui/command/CommandInput.vue

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* Update frontend/components/ui/tags-input/TagsInputInput.vue

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* Update frontend/components/ui/select/SelectTrigger.vue

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* Respond to coderabbitai

* Another coderabbit comment

* More coderabbit responses

* Fix formatting

* Apply suggestion from @coderabbitai[bot]

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* Update frontend/components/ui/input/Input.vue

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* Correct CodeRabbit's messy suggestion that I was too trigger-happy on

* Accessibility changes: only use accessible font sizing on mobile

---------

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
2025-11-18 22:40:21 +00:00
dependabot[bot]
6697738342 Bump glob in the npm_and_yarn group across 1 directory (#1085)
Bumps the npm_and_yarn group with 1 update in the / directory: [glob](https://github.com/isaacs/node-glob).


Updates `glob` from 10.4.5 to 10.5.0
- [Changelog](https://github.com/isaacs/node-glob/blob/main/changelog.md)
- [Commits](https://github.com/isaacs/node-glob/compare/v10.4.5...v10.5.0)

---
updated-dependencies:
- dependency-name: glob
  dependency-version: 10.5.0
  dependency-type: indirect
  dependency-group: npm_and_yarn
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-18 09:36:30 -05:00
Matthew Kilgore
a379e7c1ab Fix goreleaser 2025-11-16 17:11:10 -05:00
Matthew Kilgore
e501d769da Upgrade frontend and doc dependencies 2025-11-16 16:51:16 -05:00
Matthew Kilgore
7d0e05dc5d Update go dependencies 2025-11-16 16:48:38 -05:00
Matt
81233e2999 Attempt to revert NodeJS so ARM 32bit builds work again (#1081)
* Attempt to revert NodeJS so ARM 32bit builds work again

* Rollback even further
2025-11-16 16:05:15 -05:00
tonyaellie
415c3238ae fix: android image capture for item create 2025-11-01 15:25:42 +00:00
Tonya
b3153cc971 Revert "Set single connection pool for sqlite3 (#1039)" (#1061)
This reverts commit 8a90b9c133.
2025-10-22 17:32:00 +00:00
Tonya
0801df9961 fix: use tx for duplicate (#1059) 2025-10-21 20:58:06 +01:00
Benjamin Wolff
2bdd085289 Item search query parameter modernisation (#1040)
* await labels and locations properly

* update query params with every search

* don't persist default settings in query params

* conceptualize optional parameters

* add run script for development

* lint

* consider typescript

* remove run.sh

* capitalize QueryParamValue

* make query parameter updates predictable

This reverts commit 5c0c48cea5.

* capitalize typename again

---------

Co-authored-by: Benji <benji@DG-SM-7059.local>
Co-authored-by: Benji <benji@mac.home.internal>
Co-authored-by: Benji <benji@dg-sm-7059.home.internal>
2025-10-21 19:40:46 +01:00
zebrapurring
c30cac4489 chore: update icon for button to duplicate items (#1050)
Co-authored-by: zebrapurring <>
2025-10-21 17:20:35 +00:00
Copilot
397a1c6f3e Fix: Return error to UI when attachment upload fails due to storage misconfiguration (#1045)
* Initial plan

* Fix attachment upload error handling to return errors to UI

Co-authored-by: tankerkiller125 <3457368+tankerkiller125@users.noreply.github.com>

* Final verification: All tests pass and code builds successfully

Co-authored-by: tankerkiller125 <3457368+tankerkiller125@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: tankerkiller125 <3457368+tankerkiller125@users.noreply.github.com>
2025-10-11 08:55:15 -04:00
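
The commit above is about surfacing storage failures to the caller instead of reporting success. A generic Go sketch of that error-handling shape (handler and storage names are hypothetical, not Homebox's API):

```go
package main

import (
	"errors"
	"log"
	"net/http"
)

// saveAttachment stands in for the blob-storage write that fails when storage
// is misconfigured.
func saveAttachment(data []byte) error {
	return errors.New("storage: no bucket configured")
}

// handleUpload returns the failure to the client as an HTTP error rather than
// only logging it and answering with a success status.
func handleUpload(w http.ResponseWriter, r *http.Request) {
	if err := saveAttachment(nil); err != nil {
		log.Printf("attachment upload failed: %v", err)
		http.Error(w, "failed to store attachment", http.StatusInternalServerError)
		return
	}
	w.WriteHeader(http.StatusCreated)
}

func main() {
	http.HandleFunc("/upload", handleUpload)
	log.Fatal(http.ListenAndServe("127.0.0.1:8080", nil))
}
```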
Copilot
05a392346f Fix item deletion to properly clean up attachment files from storage (#1046)
* Initial plan

* Fix item deletion to properly clean up attachment files

Co-authored-by: tankerkiller125 <3457368+tankerkiller125@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: tankerkiller125 <3457368+tankerkiller125@users.noreply.github.com>
2025-10-11 08:55:02 -04:00
Tonya
28c3e102a2 feat: add a markdown preview for description and notes (#1043)
* feat: add a markdown preview for description and notes

* feat: add char count for md
2025-10-10 12:37:57 +00:00
zebrapurring
116e39531b Fix failing tests (#1009)
* chore: ignore all .data directories

* fix: date locale for unit tests

* test: disable parallelism to prevent database locks

* chore: fix lint errors

* chore: remove unused function

---------

Co-authored-by: zebrapurring <>
Co-authored-by: Tonya <tonya@tokia.dev>
2025-10-09 11:51:51 +00:00
rienkim
8a90b9c133 Set single connection pool for sqlite3 (#1039) 2025-10-08 14:58:29 -04:00
rienkim
ef52009f57 Feat/Added label maker custom font (#1038)
* Add label maker font config

* Add document for label maker font config

* Add test for custom font

* Fix custom font setup documentation

- Fallback font is gofont, which doesn't support CJK characters

* Fix golangci-lint error

* Update custom-font-setup.md

* Fix typo
2025-10-08 14:49:22 -04:00
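
The label-maker font commit above adds a configurable font because the gofont fallback lacks CJK glyphs. A sketch of loading a user-supplied TTF/OTF with golang.org/x/image; the path, size, and helper name are assumptions, not the project's config keys:

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/image/font"
	"golang.org/x/image/font/opentype"
)

// loadLabelFont reads and parses a custom font file and returns a face at the
// requested point size; callers would fall back to the bundled gofont on error.
func loadLabelFont(path string, points float64) (font.Face, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, fmt.Errorf("read font: %w", err)
	}
	f, err := opentype.Parse(data)
	if err != nil {
		return nil, fmt.Errorf("parse font: %w", err)
	}
	return opentype.NewFace(f, &opentype.FaceOptions{Size: points, DPI: 72, Hinting: font.HintingFull})
}

func main() {
	if _, err := loadLabelFont("/data/fonts/custom.ttf", 12); err != nil {
		fmt.Println("falling back to default font:", err)
	}
}
```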
dependabot[bot]
76154263e0 Bump the npm_and_yarn group across 2 directories with 1 update (#1032)
Bumps the npm_and_yarn group with 1 update in the / directory: [nuxt](https://github.com/nuxt/nuxt/tree/HEAD/packages/nuxt).
Bumps the npm_and_yarn group with 1 update in the /frontend directory: [nuxt](https://github.com/nuxt/nuxt/tree/HEAD/packages/nuxt).


Updates `nuxt` from 4.0.3 to 4.1.0
- [Release notes](https://github.com/nuxt/nuxt/releases)
- [Commits](https://github.com/nuxt/nuxt/commits/v4.1.0/packages/nuxt)

Updates `nuxt` from 4.0.3 to 4.1.0
- [Release notes](https://github.com/nuxt/nuxt/releases)
- [Commits](https://github.com/nuxt/nuxt/commits/v4.1.0/packages/nuxt)

---
updated-dependencies:
- dependency-name: nuxt
  dependency-version: 4.1.0
  dependency-type: direct:production
  dependency-group: npm_and_yarn
- dependency-name: nuxt
  dependency-version: 4.1.0
  dependency-type: direct:development
  dependency-group: npm_and_yarn
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-30 20:20:18 -04:00
Matthew Kilgore
108194e7fd Merge remote-tracking branch 'origin/main' 2025-09-29 19:21:19 -04:00
Matthew Kilgore
bf845ae0f7 Update bounty page 2025-09-29 19:21:05 -04:00
dependabot[bot]
9be6a8c888 Bump the npm_and_yarn group across 2 directories with 1 update (#1001)
Bumps the npm_and_yarn group with 1 update in the / directory: [vite](https://github.com/vitejs/vite/tree/HEAD/packages/vite).
Bumps the npm_and_yarn group with 1 update in the /frontend directory: [vite](https://github.com/vitejs/vite/tree/HEAD/packages/vite).


Updates `vite` from 5.4.18 to 5.4.20
- [Release notes](https://github.com/vitejs/vite/releases)
- [Changelog](https://github.com/vitejs/vite/blob/v5.4.20/packages/vite/CHANGELOG.md)
- [Commits](https://github.com/vitejs/vite/commits/v5.4.20/packages/vite)

Updates `vite` from 7.1.3 to 7.1.5
- [Release notes](https://github.com/vitejs/vite/releases)
- [Changelog](https://github.com/vitejs/vite/blob/v5.4.20/packages/vite/CHANGELOG.md)
- [Commits](https://github.com/vitejs/vite/commits/v5.4.20/packages/vite)

---
updated-dependencies:
- dependency-name: vite
  dependency-version: 5.4.20
  dependency-type: indirect
  dependency-group: npm_and_yarn
- dependency-name: vite
  dependency-version: 7.1.5
  dependency-type: direct:production
  dependency-group: npm_and_yarn
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-29 12:07:33 -04:00
Alan Mooiman
38a987676e Fix frontend CI (#1028) 2025-09-29 08:18:43 -04:00
Matthew Kilgore
1f746efe27 Hide try it again (other issues) 2025-09-26 21:58:17 -04:00
Matthew Kilgore
d57bf8834b Fix CSP header 2025-09-26 21:45:57 -04:00
Matthew Kilgore
cb2c58c3f4 Merge remote-tracking branch 'origin/main' 2025-09-26 21:34:45 -04:00
Matthew Kilgore
7b3cf0453e Support TryIt Function in API Docs 2025-09-26 21:34:34 -04:00
Matt
825e72bceb Add funding information for contributors 2025-09-24 20:03:58 -04:00
Matt
8547fb9bb3 Add database type selection to bug report template
Added a dropdown for selecting the database type in the bug report template.
2025-09-24 11:28:11 -04:00
Tonya
f66624774e Change Item Card to use object contain by default for images (#1020)
* feat: add legacy image fit preference and adjustable image display in card component

* feat: add blurred bg image when object contain

* fix: add alt text for image and improve objectContain
2025-09-24 16:09:15 +01:00
Guy Taggar
33ec0c4aff Fix typo (#1019)
* Fix typo

* Change to plural
2025-09-24 09:48:44 -04:00
Tonya
6cd9e2779f Use Tanstack table for Selectable Table, quick actions (#998)
* feat: implement example of data table

* feat: load item data into table

* chore: begin switching dialogs

* feat: implement old dialog for controlling headers and page size

* feat: get table into relatively usable state

* feat: enhance dropdown actions for multi-selection and CSV download

* feat: enhance table cell and dropdown button styles for better usability

* feat: json download for table

* feat: add expanded row component for item details in data table

* chore: add translation support

* feat: restore table on home page

* fix: oops need ids

* feat: move card view to use tanstack to allow for pagination

* feat: switch the items search to use ItemViewSelectable

* fix: update pagination handling and improve button click logic

* feat: improve selectable table

* feat: add indeterminate to checkbox

* feat: overhaul maintenance dialog to use new system and add maintenance options to table

* feat: add label ids and location id to item patch api

* feat: change location and labels in table view

* feat: add quick actions preference and enable toggle in table settings

* fix: lint

* fix: remove sized 1 pages

* fix: attempt to fix type error

* fix: various issues

* fix: remove

* fix: refactor item fetching logic to use useAsyncData for improved reactivity and improve use confirm

* fix: sort backend issues

* fix: enhance CSV export functionality by escaping fields to prevent formula injection

* fix: put aria sort on th not button

* chore: update api types
2025-09-24 02:37:38 +01:00
Matthew Kilgore
a5d63ac4e1 In theory SLSA provenance for binary builds 2025-09-23 21:05:22 -04:00
Matt
ba45203ea3 beautify the readme a bit (#1014)
* beautify the readme a bit

* Revert CSS updates, Github filters them out

* Enhance README with Lemmy badge and description update
2025-09-23 13:20:50 -04:00
Matt
609b7a606b Generate OpenAPI 3 schemas from the swagger 2.0 generation (#1017)
* Generate OpenAPI 3 schemas from the swagger 2.0 generation

* Update API description URL in index.md
2025-09-23 12:14:37 -04:00
Katos
b56505452f Merge pull request #1006 from sysadminsmedia/update-currencies
Update currencies.json
2025-09-14 15:54:48 +01:00
katosdev
118bce4441 chore: update currencies.json 2025-09-14 14:52:41 +00:00
Katos
b535cdeb96 Refactor update-currencies workflow with enhancements
Update currencies workflow to fix errors and introduce improvements
2025-09-14 15:52:17 +01:00
Katos
3b0e986f01 Refactor update-currencies workflow file 2025-09-13 18:57:10 +01:00
Choong Jun Jin
8f8dbf4a3a Feat: add decimal support to currency system with ISO 4217 data integration (#976)
* feat: add decimal support to currency system with ISO 4217 data integration

* Harden currency formatting: add decimal bounds, input validation, and robust error handling

* Fixed issues raised by coderabbitai

* Fixed linting issue
2025-09-13 11:51:54 -04:00
Weblate
3183b38114 Translated using Weblate (French)
Currently translated at 100.0% (518 of 518 strings)

Translated using Weblate (French)

Currently translated at 100.0% (518 of 518 strings)

Translated using Weblate (French)

Currently translated at 100.0% (518 of 518 strings)

Translated using Weblate (French)

Currently translated at 100.0% (518 of 518 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (518 of 518 strings)

Translated using Weblate (Romanian)

Currently translated at 60.1% (311 of 517 strings)

Translated using Weblate (Romanian)

Currently translated at 60.1% (311 of 517 strings)

Translated using Weblate (Romanian)

Currently translated at 60.1% (311 of 517 strings)

Translated using Weblate (Romanian)

Currently translated at 60.1% (311 of 517 strings)

Translated using Weblate (Polish)

Currently translated at 97.8% (506 of 517 strings)

Translated using Weblate (Italian)

Currently translated at 82.5% (427 of 517 strings)

Translated using Weblate (Italian)

Currently translated at 82.5% (427 of 517 strings)

Translated using Weblate (German)

Currently translated at 99.8% (516 of 517 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (517 of 517 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (517 of 517 strings)

Translated using Weblate (German)

Currently translated at 98.4% (509 of 517 strings)

Translated using Weblate (Slovak)

Currently translated at 100.0% (517 of 517 strings)

Translated using Weblate (Czech)

Currently translated at 100.0% (517 of 517 strings)

Translated using Weblate (Czech)

Currently translated at 100.0% (517 of 517 strings)

Translated using Weblate (Czech)

Currently translated at 100.0% (506 of 506 strings)

Translated using Weblate (Czech)

Currently translated at 100.0% (506 of 506 strings)

Co-authored-by: Adam Havránek <adamhavra@seznam.cz>
Co-authored-by: Erwin van Londen <translate.sysadminsm.treachery437@passmail.net>
Co-authored-by: Hannes Salen <hannes.salen@gmail.com>
Co-authored-by: Jose Riha <jose1711@gmail.com>
Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: Philipp Walter <philipp.walter@scodex.de>
Co-authored-by: Saverio Salatino <saverio.salatino@gmail.com>
Co-authored-by: Supertriton <tristan.marie@laposte.net>
Co-authored-by: The Frog <frog@blackbox.net>
Co-authored-by: Weblate <noreply-mt-weblate@weblate.org>
Co-authored-by: Weblate <noreply@weblate.org>
Co-authored-by: Weblate Translation Memory <noreply-mt-weblate-translation-memory@weblate.org>
Co-authored-by: vizu <bogdan.vizureanu@gmail.com>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/cs/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/de/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/fr/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/it/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/nl/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/pl/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ro/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/sk/
Translation: Homebox/Frontend
2025-09-08 09:42:46 +00:00
confiks
0408b1c03b Use CatmullRom instead of ApproxBiLinear for thumbnail generation (#964) 2025-09-05 11:19:46 -04:00
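
For the thumbnail change above, the difference is just which scaler from golang.org/x/image/draw is used. A small sketch (function name assumed):

```go
package main

import (
	"image"

	"golang.org/x/image/draw"
)

// thumbnail scales src into a new RGBA image using CatmullRom, which is slower
// than ApproxBiLinear but produces noticeably sharper results.
func thumbnail(src image.Image, width, height int) *image.RGBA {
	dst := image.NewRGBA(image.Rect(0, 0, width, height))
	draw.CatmullRom.Scale(dst, dst.Bounds(), src, src.Bounds(), draw.Over, nil)
	return dst
}

func main() {
	src := image.NewRGBA(image.Rect(0, 0, 800, 600))
	_ = thumbnail(src, 200, 150)
}
```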
Copilot
a2e108eac4 Make attachment storage paths relative in database with cross-platform support (#967) 2025-09-05 11:12:51 -04:00
James Droste
227b81c6af Set default postgres sql_mode to require (#986)
Fixes #985. libpq does not support the current default (prefer), so this
sets the default sql_mode to match libpq's default, which is require.
2025-09-05 08:39:11 -04:00
Choong Jun Jin
3ef25d6463 Fix: add focus-triggered preloading to ItemSelector (#980)
* fix: add focus-triggered preloading to ItemSelector with proper error handling and complete localization

* Removed machine translated files

---------

Co-authored-by: Tonya <tonya@tokia.dev>
2025-09-04 16:29:34 +01:00
Tonya
d4e28e6f3b Upgrade frontend deps, including nuxt (#982)
* feat: begin upgrading deps, still very buggy

* feat: progress

* feat: sort all type issues

* fix: sort type issues

* fix: import sonner styles

* fix: nuxt is the enemy

* fix: try sorting issue with workflows

* fix: update vitest config for dynamic import of path and defineConfig

* fix: add missing import

* fix: add time out to try and fix issues

* fix: add ui:ci:preview task for frontend build in CI mode

* fix: i was silly

* feat: add go:ci:with-frontend task for CI mode and remove ui:ci:preview from e2e workflow

* fix: update baseURL in Playwright config for local testing to use port 7745

* fix: update E2E_BASE_URL and remove wait for timeout in login test for smoother execution
2025-09-04 09:00:25 +01:00
Matthieu Evrin
790352da34 fix(item): remove line break in Items label in location view (#975)
fix: prevent the Items label from wrapping in Firefox

Signed-off-by: lekaf974 <matthieu.evrin@gmail.com>
2025-09-01 22:52:14 +01:00
tonyaellie
52a6a31098 fix: import close dialog 2025-08-27 19:28:28 +00:00
Katos
1d02285b0d Merge pull request #962 from sysadminsmedia/katos/screenshots
Migrate Screenshots from Imgur to Github
2025-08-24 17:19:22 +01:00
Katos
19563d8b38 Update readme to point to new screenshots folder 2025-08-24 17:16:37 +01:00
Katos
282977e82c Upload example screenshots
Upload screenshots to Github repository
2025-08-24 17:15:46 +01:00
Katos
769d5c5b95 Create screenshots folder and readme 2025-08-24 17:15:07 +01:00
rapidcow
b8f7ce7eb2 doc fix: match configure option names with help message (#959)
* doc fix: match configure option names with help message (1/2)

This is a first commit in an attempt to reconcile the differences
between the /en/configure/index doc page and the automatically
generated help message.  This addresses typos including, though not
limited to, Discussion #954, titled "[doc] apparent typo in the
documentation of GitHub release check option".

This commit fixes the CLI help command, preserving the original
order, while manually matching the option names with the help
message generated by the backend api executable.

Options are only checked for spelling correctness and existence.
In particular, the following are removed because I could not
find them in the help message.

   * --swagger-host/$HBOX_SWAGGER_HOST <string> (default: localhost:7745)
   * --swagger-scheme/$HBOX_SWAGGER_SCHEME <string> (default: http)

The following default values have also been updated:

   * --storage-conn-string/$HBOX_STORAGE_CONN_STRING
      (a slash is added to the URI path)
   * --database-sqlite-path/$HBOX_DATABASE_SQLITE_PATH
      (a query param '&_time_format=sqlite' is added)
   * --database-ssl-mode/$HBOX_DATABASE_SSL_MODE
      (default 'prefer' added)

* doc fix: match configure option names with help message (2/2)

This is a second commit in an attempt to reconcile the differences
between the /en/configure/index doc page and the automatically
generated help message.  See the previous commit for details.

This commit fixes the Markdown table.

Options are only checked for spelling correctness and existence.
The following rows are deleted in particular:

   * HBOX_SWAGGER_HOST
   * HBOX_SWAGGER_SCHEME

The following default values are updated:

   * HBOX_STORAGE_CONN_STRING
      (a slash is added to the URI path)
   * HBOX_DATABASE_SQLITE_PATH
      (a query param '&_time_format=sqlite' is added)
   * HBOX_DATABASE_SSL_MODE
      (default 'prefer' added)
2025-08-23 21:17:26 -04:00
Matthew Kilgore
62ed3fabc2 Fix broken test version of binary build 2025-08-23 17:29:21 -04:00
Matthew Kilgore
304fc7f11f Fix YAML maybe 2025-08-23 17:24:10 -04:00
Matthew Kilgore
1b7a7a1999 Fix YAML maybe 2025-08-23 17:22:29 -04:00
Matthew Kilgore
a63f08ad87 Fix YAML maybe 2025-08-23 17:21:21 -04:00
Matthew Kilgore
9cb1a3f83c Fix YAML maybe 2025-08-23 17:21:01 -04:00
Matthew Kilgore
f86d38412b Fix YAML maybe 2025-08-23 17:20:16 -04:00
Matthew Kilgore
cbbe056d01 Let us test binary builds without publishing new tags 2025-08-23 17:17:10 -04:00
Katos
5f6b1a0805 Update binaries-publish.yaml
Add COSIGN_PWD and COSIGN_YES to workflow to rectify issues with binaries building on GitHub Actions
2025-08-23 20:07:12 +01:00
Tonya
27e9eb2277 improve dialogs, option to open image dialog in edit then delete (#951)
* fix: change Content-Disposition to inline for proper document display in attachments

* feat: overhaul how dialog system works, add delete to image dialog and add button to open image dialog on edit page

* chore: remove unneeded console log

* fix: ensure cleanup of dialog callbacks on unmount in BarcodeModal, CreateModal, and ImageDialog components
2025-08-23 18:22:33 +00:00
tonyaellie
6fcd10d796 feat: move theme picker to its own component and improve contrast on login screen 2025-08-23 18:05:00 +00:00
Michael Manganiello
377c6c6e0d fix: Remove log.Fatal in favor of returning errors (#953)
* fix: Remove log.Fatal in favor of returning errors

This change is useful for error tracking, which needs the application
not to terminate immediately, instead giving the tracer time to capture
and flush errors.

* Fix CodeRabbit issues

---------

Co-authored-by: Matthew Kilgore <matthew@kilgore.dev>
2025-08-23 13:09:40 -04:00
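
The commit above moves termination decisions to the entrypoint so an error tracer can flush before exit. A generic sketch of that pattern (run and loadConfig are hypothetical stand-ins for the startup path):

```go
package main

import (
	"fmt"
	"log"
)

// loadConfig stands in for setup work that used to call log.Fatal directly.
func loadConfig() (string, error) {
	return "", fmt.Errorf("missing HBOX_STORAGE_CONN_STRING")
}

// run wraps startup and returns errors instead of exiting, so callers (or an
// error tracker) can observe the failure first.
func run() error {
	cfg, err := loadConfig()
	if err != nil {
		return fmt.Errorf("load config: %w", err)
	}
	_ = cfg
	return nil
}

func main() {
	if err := run(); err != nil {
		// Only the entrypoint terminates the process.
		log.Fatalf("startup failed: %v", err)
	}
}
```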
Matt
7980e8e90a Create hardened docker image (#955)
* Create hardened docker image

* Remove healthcheck that can't work

* Pin action dependencies

* Further cleanup and hardening

* Fix broken hardened build

* Enhance Dockerfile with healthcheck and optimizations

Added a healthcheck helper using a small Go file and improved the Dockerfile structure for readability.

---------

Co-authored-by: Katos <7927609+katosdev@users.noreply.github.com>
2025-08-23 12:57:51 -04:00
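
One bullet in the hardened-image commit mentions a healthcheck helper written as a small Go file. A sketch of what such a helper could look like, assuming the hardened image ships no curl/wget and that the API listens on port 7745; the status endpoint path is an assumption:

```go
package main

import (
	"net/http"
	"os"
	"time"
)

// A tiny binary the container HEALTHCHECK can execute: exit 0 if the API
// answers 200, non-zero otherwise.
func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://127.0.0.1:7745/api/v1/status") // assumed endpoint
	if err != nil {
		os.Exit(1)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		os.Exit(1)
	}
}
```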
Tonya
788d0b1c7e feat: improved duplicate (#927)
* feat: improved duplicate

* feat: enhance item duplication process with transaction handling and error logging for attachments and fields

* feat: add error logging during transaction rollback in item duplication process for better debugging

* feat: don't try to roll back if the commit succeeded

* feat: add customizable duplication options for items, including prefix and field copying settings in API and UI

* fix: simplify duplication checks for custom fields, attachments, and maintenance entries in ItemsRepository duplication method

* refactor: import DuplicateSettings type from composables and sort import issues
2025-08-23 16:17:15 +01:00
Weblate
8b711eda99 Translated using Weblate (Norwegian Bokmål)
Currently translated at 96.6% (489 of 506 strings)

Translated using Weblate (Slovak)

Currently translated at 97.2% (492 of 506 strings)

Translated using Weblate (Ukrainian)

Currently translated at 64.0% (324 of 506 strings)

Translated using Weblate (Hungarian)

Currently translated at 99.4% (503 of 506 strings)

Translated using Weblate (Polish)

Currently translated at 99.8% (505 of 506 strings)

Translated using Weblate (Catalan)

Currently translated at 54.5% (276 of 506 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 99.4% (503 of 506 strings)

Translated using Weblate (Spanish)

Currently translated at 99.4% (503 of 506 strings)

Translated using Weblate (Turkish)

Currently translated at 86.1% (436 of 506 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (506 of 506 strings)

Co-authored-by: Matthew Kilgore <matthew@kilgore.dev>
Co-authored-by: Michael Manganiello <mike@fmanganiello.com.ar>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ca/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/es/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/hu/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/nb_NO/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/nl/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/pl/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/sk/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/tr/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/uk/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/zh_Hans/
Translation: Homebox/Frontend
2025-08-22 04:53:59 +00:00
Weblate
bba0d26480 Merge branch 'origin/main' into Weblate. 2025-08-21 23:23:32 +00:00
Matthew Kilgore
789e27e67b Merge remote-tracking branch 'origin/main' 2025-08-21 19:22:49 -04:00
Weblate
1828eae2c3 Translated using Weblate (French)
Currently translated at 96.8% (490 of 506 strings)

Translated using Weblate (English)

Currently translated at 100.0% (506 of 506 strings)

Co-authored-by: Adam Kleizer <adamkleizer@gmail.com>
Co-authored-by: Weblate <noreply@weblate.org>
Co-authored-by: buzz <buzz.eclair@gmail.com>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/en/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/fr/
Translation: Homebox/Frontend
2025-08-21 19:20:28 -04:00
Natalí Paura
8c87cda9ab Fix label name length (#822)
* Fix label name length

Label names were shortened to a maximum length of 20 characters, not taking advantage of the extra space available, and it was difficult to distinguish between labels with the same prefix.

* run task ui:fix

* fix label selector when creating an item

* feat: sort styles for line wrapping

---------

Co-authored-by: Tonya <tonya@tokia.dev>
2025-08-21 18:52:10 +00:00
Tonya
900604661b fix: change Content-Disposition to inline for proper document display in attachments (#950) 2025-08-21 14:59:13 +00:00
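
For illustration, a minimal Go handler that serves an attachment with Content-Disposition set to inline, as this fix describes; the route, content type, and filename are placeholders, not the actual Homebox handler.

```go
// Illustrative handler: "inline" lets the browser display the document in
// place, whereas "attachment" would force a download dialog.
package main

import (
	"fmt"
	"net/http"
)

func serveAttachment(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/pdf")
	w.Header().Set("Content-Disposition", `inline; filename="manual.pdf"`)
	fmt.Fprint(w, "%PDF-1.4 ...") // placeholder body
}

func main() {
	http.HandleFunc("/api/v1/items/attachments/example", serveAttachment)
	http.ListenAndServe(":8080", nil)
}
```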
Michael Manganiello
8af1e8fcba fix: Allow up to 1000 characters for label description (#948)
The database schema already supports 1,000 characters for label
description, so this seems just like an oversight.
2025-08-20 15:29:49 -04:00
Weblate
ed7c3dd3f5 Translated using Weblate (French)
Currently translated at 96.8% (490 of 506 strings)

Translated using Weblate (English)

Currently translated at 100.0% (506 of 506 strings)

Co-authored-by: Adam Kleizer <adamkleizer@gmail.com>
Co-authored-by: Weblate <noreply@weblate.org>
Co-authored-by: buzz <buzz.eclair@gmail.com>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/en/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/fr/
Translation: Homebox/Frontend
2025-08-19 21:58:40 +00:00
Matthew Kilgore
e810571bf1 Merge Bugged Translation Commits 2025-08-19 10:44:22 -04:00
Weblate
1bce1905b6 Translated using Weblate (Japanese)
Currently translated at 97.6% (494 of 506 strings)

Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 07:18:08 +00:00
Weblate
607507ad20 Translated using Weblate (Japanese)
Currently translated at 97.6% (494 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 97.6% (494 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 06:58:53 +00:00
Weblate
ed1b1a2765 Translated using Weblate (Japanese)
Currently translated at 95.4% (483 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 95.4% (483 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 06:41:30 +00:00
Weblate
5f140b34e6 Translated using Weblate (Japanese)
Currently translated at 95.4% (483 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 95.4% (483 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 06:39:32 +00:00
Weblate
3fbf154589 Translated using Weblate (Japanese)
Currently translated at 95.4% (483 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 95.4% (483 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 06:37:59 +00:00
Weblate
2bfd612971 Translated using Weblate (Japanese)
Currently translated at 95.4% (483 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 06:36:34 +00:00
Weblate
fe37c5acc7 Translated using Weblate (Japanese)
Currently translated at 95.4% (483 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 06:34:33 +00:00
Weblate
6be9c18f68 Translated using Weblate (Japanese)
Currently translated at 95.4% (483 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 95.4% (483 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 06:33:36 +00:00
Weblate
7d5d4e7dc7 Translated using Weblate (Japanese)
Currently translated at 95.0% (481 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 06:30:41 +00:00
Weblate
ec7051672f Translated using Weblate (Japanese)
Currently translated at 95.0% (481 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 95.0% (481 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 06:30:14 +00:00
Weblate
008725b300 Translated using Weblate (Japanese)
Currently translated at 94.0% (476 of 506 strings)

Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 06:18:16 +00:00
Weblate
3fb828ee1a Translated using Weblate (Japanese)
Currently translated at 93.4% (473 of 506 strings)

Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 06:03:16 +00:00
Weblate
0adebeaf8d Translated using Weblate (Japanese)
Currently translated at 93.2% (472 of 506 strings)

Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 06:00:52 +00:00
Weblate
c1a944411c Translated using Weblate (Japanese)
Currently translated at 91.6% (464 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 05:00:23 +00:00
Weblate
1aaab56045 Translated using Weblate (Japanese)
Currently translated at 91.6% (464 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 04:58:45 +00:00
Weblate
87ecb217fb Translated using Weblate (Japanese)
Currently translated at 91.6% (464 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 91.6% (464 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 04:58:33 +00:00
Weblate
91e4df652d Translated using Weblate (Japanese)
Currently translated at 91.5% (463 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 04:58:27 +00:00
Weblate
40ee154508 Translated using Weblate (Japanese)
Currently translated at 91.3% (462 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 04:56:40 +00:00
Weblate
1925167407 Translated using Weblate (Japanese)
Currently translated at 90.7% (459 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 04:56:18 +00:00
Weblate
b8bdf23d05 Translated using Weblate (Japanese)
Currently translated at 90.7% (459 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 90.7% (459 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 04:56:10 +00:00
Weblate
ca49a4cd82 Translated using Weblate (Japanese)
Currently translated at 90.5% (458 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 90.5% (458 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 04:55:42 +00:00
Weblate
c8c1a4f573 Translated using Weblate (Japanese)
Currently translated at 90.1% (456 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 90.1% (456 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 04:54:59 +00:00
Weblate
9f5fb82c47 Translated using Weblate (Japanese)
Currently translated at 89.9% (455 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 89.9% (455 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 04:54:45 +00:00
Weblate
d87c46a464 Translated using Weblate (Japanese)
Currently translated at 89.7% (454 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 89.7% (454 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 04:54:26 +00:00
Weblate
7e5567bd2f Translated using Weblate (Japanese)
Currently translated at 89.5% (453 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 89.5% (453 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 04:54:15 +00:00
Weblate
5589301c9d Translated using Weblate (Japanese)
Currently translated at 89.3% (452 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 89.3% (452 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 04:54:06 +00:00
Weblate
b489593e62 Translated using Weblate (Japanese)
Currently translated at 89.1% (451 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 89.1% (451 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 04:53:51 +00:00
Weblate
38413ddef4 Translated using Weblate (Japanese)
Currently translated at 88.9% (450 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 88.9% (450 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 04:53:38 +00:00
Weblate
273520fd96 Translated using Weblate (Japanese)
Currently translated at 88.7% (449 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 88.7% (449 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 04:52:35 +00:00
Weblate
4704b42b6d Translated using Weblate (Japanese)
Currently translated at 88.5% (448 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 88.5% (448 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 04:52:21 +00:00
Weblate
29c84e3071 Translated using Weblate (Japanese)
Currently translated at 88.3% (447 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 88.3% (447 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 04:52:06 +00:00
Weblate
6d3967383e Translated using Weblate (Japanese)
Currently translated at 88.1% (446 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 88.1% (446 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 04:51:48 +00:00
Weblate
c7af7720ea Translated using Weblate (Japanese)
Currently translated at 87.9% (445 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 87.9% (445 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 04:51:25 +00:00
Weblate
44ea3aef1b Translated using Weblate (Japanese)
Currently translated at 87.7% (444 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 04:48:40 +00:00
Weblate
414599503f Translated using Weblate (Japanese)
Currently translated at 87.7% (444 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 04:47:50 +00:00
Weblate
5eda237014 Translated using Weblate (Japanese)
Currently translated at 87.5% (443 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 87.5% (443 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 04:47:41 +00:00
Weblate
6e2b0f2d32 Translated using Weblate (Japanese)
Currently translated at 87.1% (441 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 87.1% (441 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 04:44:58 +00:00
Weblate
2fc9d40419 Translated using Weblate (Japanese)
Currently translated at 86.9% (440 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 86.9% (440 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 04:40:27 +00:00
Weblate
5ed5d69d34 Translated using Weblate (Japanese)
Currently translated at 86.7% (439 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 86.7% (439 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 04:40:16 +00:00
Weblate
19605bc242 Translated using Weblate (Japanese)
Currently translated at 86.5% (438 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 86.5% (438 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 04:40:04 +00:00
Weblate
523c3af677 Translated using Weblate (Japanese)
Currently translated at 86.3% (437 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 86.3% (437 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 04:39:55 +00:00
Weblate
c2d64388b2 Translated using Weblate (Japanese)
Currently translated at 86.1% (436 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 86.1% (436 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 04:39:47 +00:00
Weblate
2c8bc77aaa Translated using Weblate (Japanese)
Currently translated at 85.9% (435 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 85.9% (435 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 04:39:37 +00:00
Weblate
284e38c92c Translated using Weblate (Japanese)
Currently translated at 85.7% (434 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 85.7% (434 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 04:36:53 +00:00
Weblate
85fc35a382 Translated using Weblate (Japanese)
Currently translated at 85.1% (431 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 85.1% (431 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 04:35:55 +00:00
Weblate
9ffe8ec399 Translated using Weblate (Japanese)
Currently translated at 84.7% (429 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 84.7% (429 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 04:34:37 +00:00
Weblate
1e4902d8ae Translated using Weblate (Japanese)
Currently translated at 84.5% (428 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 84.5% (428 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 04:33:43 +00:00
Weblate
6585a271f6 Translated using Weblate (Japanese)
Currently translated at 83.3% (422 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 83.3% (422 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 04:32:55 +00:00
Weblate
faa9e09efe Translated using Weblate (Japanese)
Currently translated at 83.2% (421 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 83.2% (421 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 04:32:13 +00:00
Weblate
55b73418b8 Translated using Weblate (Japanese)
Currently translated at 83.0% (420 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 83.0% (420 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 04:31:35 +00:00
Weblate
8be61d9e36 Translated using Weblate (Japanese)
Currently translated at 82.8% (419 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 82.8% (419 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 04:30:50 +00:00
Weblate
174286b701 Translated using Weblate (Japanese)
Currently translated at 82.2% (416 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 82.2% (416 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 04:30:33 +00:00
Weblate
385baf1068 Translated using Weblate (Japanese)
Currently translated at 82.0% (415 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 82.0% (415 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 04:30:17 +00:00
Weblate
25104465ca Translated using Weblate (Japanese)
Currently translated at 81.8% (414 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 81.8% (414 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 04:29:58 +00:00
Weblate
dbdc9f6531 Translated using Weblate (Japanese)
Currently translated at 81.4% (412 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 04:29:36 +00:00
Weblate
2fe3cd9041 Translated using Weblate (Japanese)
Currently translated at 81.2% (411 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 81.2% (411 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 04:29:25 +00:00
Weblate
9c8a9d32b6 Translated using Weblate (Japanese)
Currently translated at 81.0% (410 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 81.0% (410 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 04:29:09 +00:00
Weblate
4b68162b1d Translated using Weblate (Japanese)
Currently translated at 80.8% (409 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 80.8% (409 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 04:29:02 +00:00
Weblate
3fa0ff5214 Translated using Weblate (Japanese)
Currently translated at 80.4% (407 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 80.4% (407 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 04:28:50 +00:00
Weblate
59c2074343 Translated using Weblate (Japanese)
Currently translated at 80.2% (406 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 80.2% (406 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 04:28:36 +00:00
Weblate
2c7d7b9d53 Translated using Weblate (Japanese)
Currently translated at 80.0% (405 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 80.0% (405 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 04:28:07 +00:00
Weblate
741baeb7fb Translated using Weblate (Japanese)
Currently translated at 79.8% (404 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 79.8% (404 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 04:27:43 +00:00
Weblate
65c1d20f17 Translated using Weblate (Japanese)
Currently translated at 79.6% (403 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 79.6% (403 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 04:27:34 +00:00
Weblate
23eec20e97 Translated using Weblate (Japanese)
Currently translated at 79.4% (402 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 79.4% (402 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 04:27:14 +00:00
Weblate
e9e0ccca99 Translated using Weblate (Japanese)
Currently translated at 79.2% (401 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 79.2% (401 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 04:26:56 +00:00
Weblate
00a1efce1d Translated using Weblate (Japanese)
Currently translated at 78.6% (398 of 506 strings)

Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 04:17:46 +00:00
Weblate
de7345f326 Translated using Weblate (Japanese)
Currently translated at 78.6% (398 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 78.6% (398 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 04:17:20 +00:00
Weblate
10564bfc9f Translated using Weblate (Japanese)
Currently translated at 78.4% (397 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 78.4% (397 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 04:17:00 +00:00
Weblate
508c5ee116 Translated using Weblate (Japanese)
Currently translated at 78.2% (396 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 78.2% (396 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 04:16:41 +00:00
Weblate
0dfc634d1b Translated using Weblate (Japanese)
Currently translated at 78.0% (395 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 78.0% (395 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 04:16:20 +00:00
Weblate
e92eb80aec Translated using Weblate (Japanese)
Currently translated at 77.8% (394 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 77.8% (394 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 04:16:09 +00:00
Weblate
5d84cc2899 Translated using Weblate (Japanese)
Currently translated at 77.6% (393 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 77.6% (393 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 04:15:51 +00:00
Weblate
19db9f5623 Translated using Weblate (Japanese)
Currently translated at 77.4% (392 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 77.4% (392 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 04:15:39 +00:00
Weblate
0f163e48e2 Translated using Weblate (Japanese)
Currently translated at 77.2% (391 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 77.2% (391 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 04:12:03 +00:00
Weblate
fb6df194d5 Translated using Weblate (Japanese)
Currently translated at 76.8% (389 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 76.8% (389 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 03:53:32 +00:00
Weblate
762a309e4b Translated using Weblate (Japanese)
Currently translated at 76.8% (389 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 76.8% (389 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 03:51:53 +00:00
Weblate
cf7f703f69 Translated using Weblate (Japanese)
Currently translated at 76.6% (388 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 76.6% (388 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 03:51:12 +00:00
Weblate
0e71f59086 Translated using Weblate (Japanese)
Currently translated at 76.2% (386 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 76.2% (386 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 03:49:33 +00:00
Weblate
b0829b7f4d Translated using Weblate (Japanese)
Currently translated at 75.8% (384 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 75.8% (384 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 03:48:12 +00:00
Weblate
305207fcd7 Translated using Weblate (Japanese)
Currently translated at 75.6% (383 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 75.6% (383 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 03:47:24 +00:00
Weblate
6deda72650 Translated using Weblate (Japanese)
Currently translated at 75.2% (381 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 75.2% (381 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 03:31:28 +00:00
Weblate
e8e6d6e81b Translated using Weblate (Japanese)
Currently translated at 75.0% (380 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 75.0% (380 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 03:30:29 +00:00
Weblate
1e06a6e4e0 Translated using Weblate (Japanese)
Currently translated at 74.9% (379 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 74.9% (379 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 03:30:19 +00:00
Weblate
064c945d9c Translated using Weblate (Japanese)
Currently translated at 74.7% (378 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 74.7% (378 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 03:30:01 +00:00
Weblate
8814d63655 Translated using Weblate (Japanese)
Currently translated at 74.3% (376 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 03:29:40 +00:00
Weblate
4954b79cbd Translated using Weblate (Japanese)
Currently translated at 74.1% (375 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 74.1% (375 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 03:29:16 +00:00
Weblate
6fa331307a Translated using Weblate (Japanese)
Currently translated at 73.3% (371 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 73.3% (371 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 03:28:28 +00:00
Weblate
1a95ff4854 Translated using Weblate (Japanese)
Currently translated at 72.7% (368 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 03:27:40 +00:00
Weblate
c77f2eb119 Translated using Weblate (Japanese)
Currently translated at 72.3% (366 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 72.3% (366 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 72.3% (366 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: Weblate Translation Memory <noreply-mt-weblate-translation-memory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 03:27:21 +00:00
Weblate
79b04203b9 Translated using Weblate (Japanese)
Currently translated at 70.5% (357 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 70.5% (357 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 03:18:16 +00:00
Weblate
32258535a5 Translated using Weblate (Japanese)
Currently translated at 70.1% (355 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 70.1% (355 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 03:15:04 +00:00
Weblate
4fb61bc4a5 Translated using Weblate (Japanese)
Currently translated at 69.5% (352 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 69.5% (352 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 03:13:58 +00:00
Weblate
55fed18582 Translated using Weblate (Japanese)
Currently translated at 69.3% (351 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 69.3% (351 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 03:13:48 +00:00
Weblate
408391d31f Translated using Weblate (Japanese)
Currently translated at 69.1% (350 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 69.1% (350 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 03:13:39 +00:00
Weblate
0087d810ae Translated using Weblate (Japanese)
Currently translated at 68.5% (347 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 03:12:16 +00:00
Weblate
be907f72ff Translated using Weblate (Japanese)
Currently translated at 68.3% (346 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 68.3% (346 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 03:12:00 +00:00
Weblate
669543989a Translated using Weblate (Japanese)
Currently translated at 68.1% (345 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 68.1% (345 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 03:11:40 +00:00
Weblate
484744c0f9 Translated using Weblate (Japanese)
Currently translated at 67.9% (344 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 67.9% (344 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 03:11:16 +00:00
Weblate
912a11f27d Translated using Weblate (Japanese)
Currently translated at 67.7% (343 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 67.7% (343 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 03:10:46 +00:00
Weblate
a49e6e4f92 Translated using Weblate (Japanese)
Currently translated at 67.5% (342 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 67.5% (342 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 03:10:33 +00:00
Weblate
f94167cb34 Translated using Weblate (Japanese)
Currently translated at 67.1% (340 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 67.1% (340 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 03:10:03 +00:00
Weblate
4aa6f12df4 Translated using Weblate (Japanese)
Currently translated at 66.9% (339 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 66.9% (339 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 03:08:54 +00:00
Weblate
2ac5c08f76 Translated using Weblate (Japanese)
Currently translated at 66.7% (338 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 66.7% (338 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 03:08:09 +00:00
Weblate
49f891f577 Translated using Weblate (Japanese)
Currently translated at 66.4% (336 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 03:05:58 +00:00
Weblate
25cf4ecc51 Translated using Weblate (Japanese)
Currently translated at 66.4% (336 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 03:05:48 +00:00
Weblate
e77f1dd68c Translated using Weblate (Japanese)
Currently translated at 66.2% (335 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 03:05:31 +00:00
Weblate
4cfece1bf5 Translated using Weblate (Japanese)
Currently translated at 66.2% (335 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 66.2% (335 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 03:05:19 +00:00
Weblate
6e5b348d82 Translated using Weblate (Japanese)
Currently translated at 65.8% (333 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 65.8% (333 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 03:01:40 +00:00
Weblate
d53c643de0 Translated using Weblate (Japanese)
Currently translated at 65.4% (331 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 65.4% (331 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 03:00:57 +00:00
Weblate
8c53d76819 Translated using Weblate (Japanese)
Currently translated at 64.4% (326 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 64.4% (326 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 02:57:50 +00:00
Weblate
5364833afb Translated using Weblate (Japanese)
Currently translated at 64.0% (324 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 64.0% (324 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 02:57:06 +00:00
Weblate
541585c0bb Translated using Weblate (Japanese)
Currently translated at 63.6% (322 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 63.6% (322 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 02:56:29 +00:00
Weblate
350a35f7f4 Translated using Weblate (Japanese)
Currently translated at 63.2% (320 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 63.2% (320 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 02:55:03 +00:00
Weblate
856f2584b9 Translated using Weblate (Japanese)
Currently translated at 62.6% (317 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 62.6% (317 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 62.6% (317 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: Weblate Translation Memory <noreply-mt-weblate-translation-memory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 02:49:52 +00:00
Weblate
c997f274cc Translated using Weblate (Japanese)
Currently translated at 62.0% (314 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 62.0% (314 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 02:49:21 +00:00
Weblate
e9689b6b52 Translated using Weblate (Japanese)
Currently translated at 61.4% (311 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 61.4% (311 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 02:48:57 +00:00
Weblate
3713816576 Translated using Weblate (Japanese)
Currently translated at 61.2% (310 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 61.2% (310 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 02:48:46 +00:00
Weblate
3529a95ebe Translated using Weblate (Japanese)
Currently translated at 60.6% (307 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 60.6% (307 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 02:46:56 +00:00
Weblate
fa066bc962 Translated using Weblate (Japanese)
Currently translated at 60.2% (305 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 60.2% (305 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 02:45:43 +00:00
Weblate
ba358790ea Translated using Weblate (Japanese)
Currently translated at 59.4% (301 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 59.4% (301 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 02:37:47 +00:00
Weblate
3aff39cdaf Translated using Weblate (Japanese)
Currently translated at 58.8% (298 of 506 strings)

Translated using Weblate (Japanese)

Currently translated at 58.8% (298 of 506 strings)

Translated using Weblate (English)

Currently translated at 100.0% (506 of 506 strings)

Co-authored-by: Matthew Kilgore <matthew@kilgore.dev>
Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: ななしぃ <weblate@nanasi-rasi.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/en/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ja/
Translation: Homebox/Frontend
2025-08-19 02:30:52 +00:00
Weblate
877bb2ddbf Translated using Weblate (German)
Currently translated at 100.0% (506 of 506 strings)

Translated using Weblate (Italian)

Currently translated at 82.4% (417 of 506 strings)

Co-authored-by: Matteo Lombardi <matteolomba@protonmail.com>
Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/de/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/it/
Translation: Homebox/Frontend
2025-08-18 16:58:57 +00:00
Weblate
c8a48e4400 Translated using Weblate (Polish)
Currently translated at 100.0% (506 of 506 strings)

Translated using Weblate (German)

Currently translated at 99.8% (505 of 506 strings)

Translated using Weblate (German)

Currently translated at 99.8% (505 of 506 strings)

Translated using Weblate (Italian)

Currently translated at 82.4% (417 of 506 strings)

Translated using Weblate (Italian)

Currently translated at 82.4% (417 of 506 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (506 of 506 strings)

Co-authored-by: Krzysztof G. <mordret@o2.pl>
Co-authored-by: Mats <sysadminsmedia@mats-bueser.de>
Co-authored-by: Matteo Lombardi <matteolomba@protonmail.com>
Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: verhese <sean.verheyen1@telenet.be>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/de/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/it/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/nl/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/pl/
Translation: Homebox/Frontend
2025-08-18 11:34:12 +00:00
Weblate
1211105eb4 Translated using Weblate (Polish)
Currently translated at 100.0% (506 of 506 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/pl/
Translation: Homebox/Frontend
2025-08-17 17:43:08 +00:00
Matthew Kilgore
28ce0d29a4 Default postgres ssl_mode to fix #943 2025-08-17 08:58:57 -04:00
Matthew Kilgore
dbf8322ec6 Update dependencies 2025-08-16 21:20:19 -04:00
Matthew Kilgore
9f34f80a60 Update dependencies 2025-08-16 17:43:02 -04:00
Matthew Kilgore
175b93a62e Make sure all languages are part of core translations. 2025-08-16 17:40:16 -04:00
Matt
d41f313cff Fix Windows Paths (#917)
* In theory this should fix the issue with Windows paths

* Fix Windows path handling in file storage connections for non-default
2025-08-16 17:08:24 -04:00
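
Assuming the fix works along these lines, here is a small Go sketch of normalizing storage paths to forward slashes so keys built on Windows (with backslashes) resolve the same way everywhere; the helper name is hypothetical.

```go
// Hypothetical helper: convert OS-specific separators to the forward
// slashes expected by the file storage layer.
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

func normalizeStoragePath(p string) string {
	return strings.ReplaceAll(filepath.Clean(p), "\\", "/")
}

func main() {
	fmt.Println(normalizeStoragePath(`attachments\abc123\receipt.pdf`))
	// Prints attachments/abc123/receipt.pdf regardless of OS.
}
```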
Weblate
1439e20d93 Translated using Weblate (Danish)
Currently translated at 99.4% (501 of 504 strings)

Translated using Weblate (Danish)

Currently translated at 99.4% (501 of 504 strings)

Co-authored-by: LovelessCodes <hello@loveless.codes>
Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/da/
Translation: Homebox/Frontend
2025-08-11 23:58:41 +00:00
Weblate
17e3a6d0cf Translated using Weblate (Turkish)
Currently translated at 86.7% (437 of 504 strings)

Co-authored-by: Can Dikyol <candikyol@gmail.com>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/tr/
Translation: Homebox/Frontend
2025-08-10 21:58:40 +00:00
Weblate
1ed7734b2e Translated using Weblate (German)
Currently translated at 100.0% (504 of 504 strings)

Co-authored-by: Katos <katos@creatorswave.com>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/de/
Translation: Homebox/Frontend
2025-08-10 14:47:35 +00:00
Matias Godoy
362c0bb3e6 Fix accent-insensitive search for Postgres databases (#932) 2025-08-04 20:35:22 -04:00
Weblate
0d3151ae5c Translated using Weblate (Turkish)
Currently translated at 85.9% (433 of 504 strings)

Co-authored-by: Can Dikyol <candikyol@gmail.com>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/tr/
Translation: Homebox/Frontend
2025-08-04 16:17:42 +00:00
Weblate
b4e679e321 Translated using Weblate (Turkish)
Currently translated at 67.6% (341 of 504 strings)

Co-authored-by: Can Dikyol <candikyol@gmail.com>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/tr/
Translation: Homebox/Frontend
2025-08-04 12:43:43 +00:00
Weblate
de3b63639b Translated using Weblate (Portuguese (Portugal))
Currently translated at 96.0% (484 of 504 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/pt_PT/
Translation: Homebox/Frontend
2025-08-04 03:54:18 +00:00
Weblate
23ba40892a Translated using Weblate (Korean)
Currently translated at 6.9% (35 of 504 strings)

Co-authored-by: HAN, Sang-uk <nouveau.monde.1987@gmail.com>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ko/
Translation: Homebox/Frontend
2025-08-03 19:49:16 +00:00
Ahmed Al Hafoudh
624c1763ac Add external label service support to label maker (#913)
* Add external label service support to label maker

* Make the external label service fetch include a user agent, limit response size, and allow any image type

* Fix linting errors

* Fix "response body closed" closing the Body to soon
2025-08-01 12:02:40 -04:00
Weblate
75c2423fd5 Translated using Weblate (Italian)
Currently translated at 81.5% (411 of 504 strings)

Translated using Weblate (Italian)

Currently translated at 81.5% (411 of 504 strings)

Co-authored-by: Matteo Lombardi <matteolomba@protonmail.com>
Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: Weblate <noreply@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/it/
Translation: Homebox/Frontend
2025-07-30 18:57:54 +00:00
Weblate
d4f2b52b6c Translated using Weblate (Vietnamese)
Currently translated at 19.4% (98 of 504 strings)

Translated using Weblate (Russian)

Currently translated at 100.0% (504 of 504 strings)

Co-authored-by: Ngô Tạ Đình Phong <thichcarot@outlook.com>
Co-authored-by: askolock <askolock@gmail.com>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ru/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/vi/
Translation: Homebox/Frontend
2025-07-28 15:00:41 +00:00
Weblate
028b1382ad Translated using Weblate (Russian)
Currently translated at 100.0% (504 of 504 strings)

Co-authored-by: akrstlv <zmilex@gmail.com>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ru/
Translation: Homebox/Frontend
2025-07-25 20:53:56 +00:00
Weblate
d8781950fa Translated using Weblate (Dutch)
Currently translated at 100.0% (504 of 504 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (504 of 504 strings)

Co-authored-by: Hannes Salen <hannes.salen@gmail.com>
Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/nl/
Translation: Homebox/Frontend
2025-07-24 15:00:41 +00:00
Weblate
8646360b8c Translated using Weblate (Spanish)
Currently translated at 100.0% (504 of 504 strings)

Translated using Weblate (Spanish)

Currently translated at 100.0% (504 of 504 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: Ricardo González <notorius28@gmail.com>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/es/
Translation: Homebox/Frontend
2025-07-23 07:00:43 +00:00
Weblate
6ce83ea04c Translated using Weblate (German)
Currently translated at 100.0% (504 of 504 strings)

Translated using Weblate (German)

Currently translated at 100.0% (504 of 504 strings)

Co-authored-by: Christoph Auer <Christoph.Auer@pilsheim.de>
Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/de/
Translation: Homebox/Frontend
2025-07-21 12:11:39 +00:00
Weblate
ad356acc73 Translated using Weblate (German)
Currently translated at 98.8% (498 of 504 strings)

Translated using Weblate (German)

Currently translated at 98.8% (498 of 504 strings)

Co-authored-by: Christoph Auer <Christoph.Auer@pilsheim.de>
Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/de/
Translation: Homebox/Frontend
2025-07-21 07:02:54 +00:00
Weblate
863b84355d Translated using Weblate (German)
Currently translated at 98.4% (496 of 504 strings)

Translated using Weblate (German)

Currently translated at 98.4% (496 of 504 strings)

Co-authored-by: Christoph Auer <Christoph.Auer@pilsheim.de>
Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/de/
Translation: Homebox/Frontend
2025-07-21 07:02:19 +00:00
Weblate
959d9961f1 Translated using Weblate (German)
Currently translated at 97.6% (492 of 504 strings)

Translated using Weblate (German)

Currently translated at 97.6% (492 of 504 strings)

Co-authored-by: Christoph Auer <Christoph.Auer@pilsheim.de>
Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/de/
Translation: Homebox/Frontend
2025-07-21 07:01:42 +00:00
Weblate
c5b783bef7 Translated using Weblate (Hungarian)
Currently translated at 100.0% (504 of 504 strings)

Translated using Weblate (German)

Currently translated at 97.4% (491 of 504 strings)

Translated using Weblate (German)

Currently translated at 97.4% (491 of 504 strings)

Co-authored-by: Adam Kleizer <adamkleizer@gmail.com>
Co-authored-by: Christoph Auer <Christoph.Auer@pilsheim.de>
Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/de/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/hu/
Translation: Homebox/Frontend
2025-07-21 07:01:26 +00:00
Weblate
1d78b953dd Translated using Weblate (Hungarian)
Currently translated at 100.0% (504 of 504 strings)

Translated using Weblate (Hungarian)

Currently translated at 100.0% (504 of 504 strings)

Co-authored-by: Adam Kleizer <adamkleizer@gmail.com>
Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/hu/
Translation: Homebox/Frontend
2025-07-21 05:16:36 +00:00
Weblate
44f5aaec57 Translated using Weblate (Hungarian)
Currently translated at 99.4% (501 of 504 strings)

Translated using Weblate (Hungarian)

Currently translated at 99.4% (501 of 504 strings)

Co-authored-by: Adam Kleizer <adamkleizer@gmail.com>
Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/hu/
Translation: Homebox/Frontend
2025-07-21 05:15:55 +00:00
Weblate
4933446202 Translated using Weblate (Hungarian)
Currently translated at 99.2% (500 of 504 strings)

Translated using Weblate (Hungarian)

Currently translated at 99.2% (500 of 504 strings)

Co-authored-by: Adam Kleizer <adamkleizer@gmail.com>
Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/hu/
Translation: Homebox/Frontend
2025-07-21 05:15:30 +00:00
Weblate
e1fbb99203 Translated using Weblate (Hungarian)
Currently translated at 98.8% (498 of 504 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (504 of 504 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (504 of 504 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: WilliamStark <yujinghao007@163.com>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/hu/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/zh_Hans/
Translation: Homebox/Frontend
2025-07-21 05:14:55 +00:00
Weblate
4a9557fcb7 Translated using Weblate (Chinese (Simplified Han script))
Currently translated at 99.4% (501 of 504 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 99.4% (501 of 504 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: WilliamStark <yujinghao007@163.com>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/zh_Hans/
Translation: Homebox/Frontend
2025-07-21 02:07:44 +00:00
Weblate
5766277c16 Translated using Weblate (Chinese (Simplified Han script))
Currently translated at 99.2% (500 of 504 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 99.2% (500 of 504 strings)

Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: WilliamStark <yujinghao007@163.com>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/zh_Hans/
Translation: Homebox/Frontend
2025-07-21 02:07:10 +00:00
Weblate
5374f31d69 Translated using Weblate (Vietnamese)
Currently translated at 14.8% (75 of 504 strings)

Translated using Weblate (Czech)

Currently translated at 100.0% (504 of 504 strings)

Translated using Weblate (Czech)

Currently translated at 100.0% (504 of 504 strings)

Translated using Weblate (Polish)

Currently translated at 100.0% (504 of 504 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 98.0% (494 of 504 strings)

Co-authored-by: Adam Havránek <adamhavra@seznam.cz>
Co-authored-by: Lucas Wilson <lucasws2020@gmail.com>
Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/cs/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/pl/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/vi/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/zh_Hans/
Translation: Homebox/Frontend
2025-07-21 02:06:03 +00:00
Balki
e82f5084d4 Fix Windows build and re-apply unix socket support (#906)
* Reapply "Support listening on unix sockets and systemd sockets (#878)"

This reverts commit 2f51ba419b.

* Fix windows build

Upgrade anyhttp to v0.5.2
2025-07-20 09:51:31 -04:00
Katos
bbd773fb3a Merge pull request #818 from crumbowl/feat/barcode
Add product fetching using barcodes
2025-07-20 10:59:44 +01:00
Crumb Owl
7129650efa ProductBarcode: properly check array boundaries 2025-07-19 23:06:44 +02:00
Crumb Owl
a57b83c52d ProductBarcode: various fix requested by Tonya
- fix many missing translations
- properly reset QR scanner when reopening
- add error message on BarcodeModal when no item is found
- fix icon size in item CreateModal
- remove useless closeDialog
2025-07-19 23:06:44 +02:00
Crumb Owl
bb5e36f0c4 ProductBarcode: final linting 2025-07-19 23:06:44 +02:00
Crumb Owl
bd44b36666 ProductBarcode: BarcodeModal: improve erroring 2025-07-19 23:06:43 +02:00
Crumb Owl
895063fa36 ProductBarcode: improve readability on CreateModal 2025-07-19 23:06:43 +02:00
Crumb Owl
aa7658b0d4 ProductBarcode: fix barcode value not updated + fix search button not reset properly 2025-07-19 23:06:43 +02:00
Crumb Owl
68f97f24c7 ProductBarcode: fix various remarks from Tonya 2025-07-19 23:06:43 +02:00
Crumb Owl
6555c9277a ProductBarcode: use json encoder from the project 2025-07-19 23:06:43 +02:00
Crumb Owl
b5d13380fe ProductBarcode: BarcodeModal: launch search on "Return" key 2025-07-19 23:06:43 +02:00
Crumb Owl
9271cdae4b ProductBarcode: architecture: move to strongly typed DialogID and parameters 2025-07-19 23:06:43 +02:00
Crumb Owl
18149a5c9a ProductBarcode: apply linting and fixes on frontend 2025-07-19 23:06:43 +02:00
Crumb Owl
68b6d58ab4 ProductBarcode: BarcodeModal: many fixes caught by the linter 2025-07-19 23:06:43 +02:00
Crumb Owl
6d516f6de6 ProductBarcode: backend: properly define max length of a barcode 2025-07-19 23:06:43 +02:00
Crumb Owl
36d5ae1466 ProductBarcode: backend: improve verbosity for user 2025-07-19 23:06:43 +02:00
Crumb Owl
f37f609dff ProductBarcode: backend: prevent DoS with image download 2025-07-19 23:06:43 +02:00
Crumb Owl
a980d9f243 ProductBarcode: backend: remove API response verbosity 2025-07-19 23:06:43 +02:00
Crumb Owl
aac82c9236 ProductBarcode: backend: add timeout to external API calls 2025-07-19 23:06:43 +02:00
Crumb Owl
8dedfcca43 ProductBarcode: backend: fix error handling with http requests 2025-07-19 23:06:43 +02:00
Crumb Owl
f72fcb0800 ProductBarcode: backend: fix resource leak with defer 2025-07-19 23:06:43 +02:00
Crumb Owl
94e81809d3 ProductBarcode: backend: properly check barcodespider API response 2025-07-19 23:06:43 +02:00
Crumb Owl
e80e5744f7 ProductBarcode: backend: improve security of image fetching 2025-07-19 23:06:43 +02:00
Crumb Owl
402b8c429e ProductBarcode: improve error handling in BarcodeModal 2025-07-19 23:06:43 +02:00
Crumb Owl
d2919de8e8 ProductBarcode: add barcode shortcuts in item/Createmodal.vue 2025-07-19 23:06:43 +02:00
Crumb Owl
8a60729153 ProductBarcode: clean code, add error handling 2025-07-19 23:06:43 +02:00
Crumb Owl
4a4bf9a175 ProductBarcode: rename API call from getproductfromean to products/search-from-barcode 2025-07-19 23:06:43 +02:00
Crumb Owl
24923f2a83 ProductBarcode: refactoring Go method 2025-07-19 23:06:43 +02:00
Crumb Owl
66c2de22ed ProductBarcode: Go Linter fixing 2025-07-19 23:06:43 +02:00
Crumb Owl
c93fddae7f ProductBarcode: move backend code in dedicated source file 2025-07-19 23:06:43 +02:00
Crumb Owl
fb17b56f09 ProductBarcode: create a dedicated dialog for product selection 2025-07-19 23:06:43 +02:00
Crumb Owl
a3c13a8a74 ProductBarcode: return an array of BarcodeProduct instead of one 2025-07-19 23:06:38 +02:00
Crumb Owl
09f29d82f4 ProductBarcode: proper use of the language system in frontend/Scanner.vue 2025-07-19 22:51:48 +02:00
Crumb Owl
dd94fd43ee ProductBarcode: improve UI of Barcode message in frontend/Scanner.vue 2025-07-19 22:51:48 +02:00
Crumb Owl
a85bdfef88 ProductBarcode: display barcode type in frontend/Scanner.vue 2025-07-19 22:51:48 +02:00
Crumb Owl
79baf6b5ef ProductBarcode: define Barcodespider API key using env variables 2025-07-19 22:51:48 +02:00
Crumb Owl
d691e908a4 ProductBarcode: add image downloading from remote product database
- Backend downloads images from the database
- Frontend retrieves the image as base64, no architecture change needed
2025-07-19 22:51:48 +02:00
Crumb Owl
ec8320bc42 ProductBarcode: update UPCItemDB parsing
- JSON response seems to have changed
2025-07-19 22:51:48 +02:00
Crumb Owl
6dbb243ba5 ProductBarcode: return more fields from DB (brand, model...)
- backend: change data structure returned to frontend
2025-07-19 22:51:48 +02:00
Crumb Owl
7c56bfb4ab ProductBarcode: fix error on pages/Scanner.vue when using a barcode 2025-07-19 22:51:48 +02:00
Crumb Owl
c3af4ac4ac ProductBarcode: add barcode processing in frontend 2025-07-19 22:51:48 +02:00
Crumb Owl
fc88df0ff0 ProductBarcode: allow passing parameters to Dialog 2025-07-19 22:51:48 +02:00
Crumb Owl
0e1e5ae3f0 ProductBarcode: add frontend API call utils 2025-07-19 22:51:48 +02:00
Crumb Owl
0ed69b75a1 ProductBarcode: add first backend API implementation 2025-07-19 22:51:48 +02:00
Crumb Owl
c666a8a8c1 ProductBarcode: add barcode detection to ScannerModal.vue 2025-07-19 22:51:48 +02:00
Weblate
6ef7045f62 Translated using Weblate (Polish)
Currently translated at 100.0% (492 of 492 strings)

Translated using Weblate (Polish)

Currently translated at 100.0% (492 of 492 strings)

Translated using Weblate (Polish)

Currently translated at 100.0% (492 of 492 strings)

Translated using Weblate (Albanian)

Currently translated at 19.1% (94 of 492 strings)

Translated using Weblate (French)

Currently translated at 99.3% (489 of 492 strings)

Translated using Weblate (Swedish)

Currently translated at 68.2% (336 of 492 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 100.0% (492 of 492 strings)

Translated using Weblate (Portuguese (Portugal))

Currently translated at 97.3% (479 of 492 strings)

Translated using Weblate (Catalan)

Currently translated at 56.0% (276 of 492 strings)

Co-authored-by: Krzysztof G. <mordret@o2.pl>
Co-authored-by: Matthew Kilgore <matthew@kilgore.dev>
Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: Thomas J. Mazon de Oliveira <thomas.mazon@gmail.com>
Co-authored-by: Weblate Translation Memory <noreply-mt-weblate-translation-memory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/ca/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/fr/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/pl/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/pt_BR/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/pt_PT/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/sq/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/sv/
Translation: Homebox/Frontend
2025-07-19 09:00:42 +00:00
Weblate
98ce90636d Translated using Weblate (Danish)
Currently translated at 99.7% (491 of 492 strings)

Translated using Weblate (Chinese (Simplified) (zh_MO))

Currently translated at 37.1% (183 of 492 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 72.5% (357 of 492 strings)

Translated using Weblate (German)

Currently translated at 99.3% (489 of 492 strings)

Translated using Weblate (Italian)

Currently translated at 81.9% (403 of 492 strings)

Co-authored-by: Matthew Kilgore <matthew@kilgore.dev>
Co-authored-by: Thomas J. Mazon de Oliveira <thomas.mazon@gmail.com>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/da/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/de/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/it/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/pt_BR/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/zh_MO/
Translation: Homebox/Frontend
2025-07-17 19:00:41 +00:00
Weblate
86721c9b9a Translated using Weblate (Hungarian)
Currently translated at 100.0% (492 of 492 strings)

Translated using Weblate (Hungarian)

Currently translated at 100.0% (492 of 492 strings)

Co-authored-by: Adam Kleizer <adamkleizer@gmail.com>
Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/hu/
Translation: Homebox/Frontend
2025-07-16 07:00:42 +00:00
Michael Manganiello
62f6121260 feat: Add plugin to set image sizes in Markdown (#901)
* feat: Add plugin to set image sizes in Markdown

Install the `@mdit/plugin-img-size` plugin [1] to allow setting image sizes
in Markdown content. This improves the image rendering capabilities for
Markdown blocks.

Before (no resizing possible):

```markdown
![logo](https://raw.githubusercontent.com/sysadminsmedia/homebox/refs/tags/v0.20.2/docs/public/lilbox.svg)
```

After (size specified):

```markdown
![logo =100x](https://raw.githubusercontent.com/sysadminsmedia/homebox/refs/tags/v0.20.2/docs/public/lilbox.svg)
```

[1] https://mdit-plugins.github.io/img-size.html

* Update @types/markdown-it to match markdown-it version
2025-07-16 05:58:24 +00:00
Matt
90bb6ed1fe Daily Analytics (#896)
* Send analytics daily

* Clean up error handling, add uptime to analytics

* Better analytics scheduling

* Even better logic for scheduling the analytics (hopefully)

* Some cleanup

* Switch to minutes for uptime, remove duplicate event on startup
2025-07-15 04:24:19 -04:00
Weblate
bd79ee3227 Translated using Weblate (Hungarian)
Currently translated at 99.5% (490 of 492 strings)

Translated using Weblate (Hungarian)

Currently translated at 99.5% (490 of 492 strings)

Co-authored-by: Adam Kleizer <adamkleizer@gmail.com>
Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/hu/
Translation: Homebox/Frontend
2025-07-15 06:45:22 +00:00
Weblate
c0e79cdb9e Translated using Weblate (Hungarian)
Currently translated at 99.3% (489 of 492 strings)

Translated using Weblate (Hungarian)

Currently translated at 99.3% (489 of 492 strings)

Co-authored-by: Adam Kleizer <adamkleizer@gmail.com>
Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/hu/
Translation: Homebox/Frontend
2025-07-15 06:45:01 +00:00
Weblate
5156792319 Translated using Weblate (Hungarian)
Currently translated at 99.1% (488 of 492 strings)

Translated using Weblate (Hungarian)

Currently translated at 99.1% (488 of 492 strings)

Co-authored-by: Adam Kleizer <adamkleizer@gmail.com>
Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/hu/
Translation: Homebox/Frontend
2025-07-15 06:44:30 +00:00
Weblate
8bbc39e416 Translated using Weblate (Hungarian)
Currently translated at 98.5% (485 of 492 strings)

Translated using Weblate (Hungarian)

Currently translated at 98.5% (485 of 492 strings)

Co-authored-by: Adam Kleizer <adamkleizer@gmail.com>
Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/hu/
Translation: Homebox/Frontend
2025-07-15 06:43:15 +00:00
Weblate
0beb430704 Translated using Weblate (Hungarian)
Currently translated at 98.3% (484 of 492 strings)

Translated using Weblate (Hungarian)

Currently translated at 98.3% (484 of 492 strings)

Co-authored-by: Adam Kleizer <adamkleizer@gmail.com>
Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/hu/
Translation: Homebox/Frontend
2025-07-15 06:41:46 +00:00
Weblate
0f7107f86d Translated using Weblate (Hungarian)
Currently translated at 97.7% (481 of 492 strings)

Translated using Weblate (Hungarian)

Currently translated at 97.7% (481 of 492 strings)

Co-authored-by: Adam Kleizer <adamkleizer@gmail.com>
Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/hu/
Translation: Homebox/Frontend
2025-07-15 06:36:35 +00:00
Weblate
115cda5c37 Translated using Weblate (Hungarian)
Currently translated at 97.1% (478 of 492 strings)

Translated using Weblate (Hungarian)

Currently translated at 97.1% (478 of 492 strings)

Translated using Weblate (Hungarian)

Currently translated at 97.1% (478 of 492 strings)

Co-authored-by: Adam Kleizer <adamkleizer@gmail.com>
Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Co-authored-by: Weblate Translation Memory <noreply-mt-weblate-translation-memory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/hu/
Translation: Homebox/Frontend
2025-07-15 06:36:07 +00:00
Weblate
a6c1c8c652 Translated using Weblate (Hungarian)
Currently translated at 96.3% (474 of 492 strings)

Translated using Weblate (Hungarian)

Currently translated at 96.3% (474 of 492 strings)

Co-authored-by: Adam Kleizer <adamkleizer@gmail.com>
Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/hu/
Translation: Homebox/Frontend
2025-07-15 06:35:42 +00:00
Weblate
c69c6a1518 Translated using Weblate (Hungarian)
Currently translated at 96.1% (473 of 492 strings)

Translated using Weblate (Hungarian)

Currently translated at 96.1% (473 of 492 strings)

Co-authored-by: Adam Kleizer <adamkleizer@gmail.com>
Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/hu/
Translation: Homebox/Frontend
2025-07-15 06:34:55 +00:00
Weblate
adaffa5ca8 Translated using Weblate (Hungarian)
Currently translated at 95.9% (472 of 492 strings)

Translated using Weblate (Hungarian)

Currently translated at 95.9% (472 of 492 strings)

Co-authored-by: Adam Kleizer <adamkleizer@gmail.com>
Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/hu/
Translation: Homebox/Frontend
2025-07-15 06:32:21 +00:00
Weblate
b410642dc6 Translated using Weblate (Hungarian)
Currently translated at 95.7% (471 of 492 strings)

Translated using Weblate (Hungarian)

Currently translated at 95.7% (471 of 492 strings)

Co-authored-by: Adam Kleizer <adamkleizer@gmail.com>
Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/hu/
Translation: Homebox/Frontend
2025-07-15 06:29:52 +00:00
Weblate
4bed1a3158 Translated using Weblate (Hungarian)
Currently translated at 93.6% (461 of 492 strings)

Translated using Weblate (Hungarian)

Currently translated at 93.6% (461 of 492 strings)

Co-authored-by: Adam Kleizer <adamkleizer@gmail.com>
Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/hu/
Translation: Homebox/Frontend
2025-07-15 06:27:05 +00:00
Weblate
9ff39bb402 Translated using Weblate (Hungarian)
Currently translated at 93.4% (460 of 492 strings)

Translated using Weblate (Hungarian)

Currently translated at 93.4% (460 of 492 strings)

Co-authored-by: Adam Kleizer <adamkleizer@gmail.com>
Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/hu/
Translation: Homebox/Frontend
2025-07-15 06:26:15 +00:00
Weblate
3ab250a045 Translated using Weblate (Hungarian)
Currently translated at 93.2% (459 of 492 strings)

Translated using Weblate (Hungarian)

Currently translated at 93.2% (459 of 492 strings)

Co-authored-by: Adam Kleizer <adamkleizer@gmail.com>
Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/hu/
Translation: Homebox/Frontend
2025-07-15 06:24:16 +00:00
Weblate
4147cff1db Translated using Weblate (Hungarian)
Currently translated at 92.6% (456 of 492 strings)

Translated using Weblate (Hungarian)

Currently translated at 92.6% (456 of 492 strings)

Co-authored-by: Adam Kleizer <adamkleizer@gmail.com>
Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/hu/
Translation: Homebox/Frontend
2025-07-15 06:21:24 +00:00
Weblate
dada2f0266 Translated using Weblate (Hungarian)
Currently translated at 92.2% (454 of 492 strings)

Translated using Weblate (Hungarian)

Currently translated at 92.2% (454 of 492 strings)

Co-authored-by: Adam Kleizer <adamkleizer@gmail.com>
Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/hu/
Translation: Homebox/Frontend
2025-07-15 06:21:01 +00:00
Weblate
e9e852c8a3 Translated using Weblate (Hungarian)
Currently translated at 92.0% (453 of 492 strings)

Translated using Weblate (Hungarian)

Currently translated at 92.0% (453 of 492 strings)

Co-authored-by: Adam Kleizer <adamkleizer@gmail.com>
Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/hu/
Translation: Homebox/Frontend
2025-07-15 06:20:50 +00:00
Weblate
7dda0f473a Translated using Weblate (Hungarian)
Currently translated at 89.6% (441 of 492 strings)

Translated using Weblate (Hungarian)

Currently translated at 89.6% (441 of 492 strings)

Co-authored-by: Adam Kleizer <adamkleizer@gmail.com>
Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/hu/
Translation: Homebox/Frontend
2025-07-15 06:17:06 +00:00
Weblate
2006b8056a Translated using Weblate (Hungarian)
Currently translated at 89.4% (440 of 492 strings)

Translated using Weblate (Hungarian)

Currently translated at 89.4% (440 of 492 strings)

Co-authored-by: Adam Kleizer <adamkleizer@gmail.com>
Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/hu/
Translation: Homebox/Frontend
2025-07-15 06:16:31 +00:00
Weblate
41f63456eb Translated using Weblate (Hungarian)
Currently translated at 89.2% (439 of 492 strings)

Translated using Weblate (Hungarian)

Currently translated at 89.2% (439 of 492 strings)

Co-authored-by: Adam Kleizer <adamkleizer@gmail.com>
Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/hu/
Translation: Homebox/Frontend
2025-07-15 06:15:54 +00:00
Weblate
fe177deff4 Translated using Weblate (Hungarian)
Currently translated at 89.0% (438 of 492 strings)

Translated using Weblate (Hungarian)

Currently translated at 89.0% (438 of 492 strings)

Co-authored-by: Adam Kleizer <adamkleizer@gmail.com>
Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/hu/
Translation: Homebox/Frontend
2025-07-15 06:15:38 +00:00
Weblate
d729a74b34 Translated using Weblate (Hungarian)
Currently translated at 88.6% (436 of 492 strings)

Translated using Weblate (Hungarian)

Currently translated at 88.6% (436 of 492 strings)

Co-authored-by: Adam Kleizer <adamkleizer@gmail.com>
Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/hu/
Translation: Homebox/Frontend
2025-07-15 06:15:04 +00:00
Weblate
6ab51e4767 Translated using Weblate (Hungarian)
Currently translated at 88.2% (434 of 492 strings)

Translated using Weblate (Hungarian)

Currently translated at 88.2% (434 of 492 strings)

Co-authored-by: Adam Kleizer <adamkleizer@gmail.com>
Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/hu/
Translation: Homebox/Frontend
2025-07-15 06:13:09 +00:00
Weblate
e080817e1a Translated using Weblate (Hungarian)
Currently translated at 87.3% (430 of 492 strings)

Translated using Weblate (Hungarian)

Currently translated at 87.3% (430 of 492 strings)

Co-authored-by: Adam Kleizer <adamkleizer@gmail.com>
Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/hu/
Translation: Homebox/Frontend
2025-07-15 06:12:49 +00:00
Weblate
31e6f0264d Translated using Weblate (Hungarian)
Currently translated at 85.7% (422 of 492 strings)

Translated using Weblate (Hungarian)

Currently translated at 85.7% (422 of 492 strings)

Co-authored-by: Adam Kleizer <adamkleizer@gmail.com>
Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/hu/
Translation: Homebox/Frontend
2025-07-15 06:09:38 +00:00
Weblate
8e98ded03f Translated using Weblate (Hungarian)
Currently translated at 85.5% (421 of 492 strings)

Translated using Weblate (Hungarian)

Currently translated at 85.5% (421 of 492 strings)

Co-authored-by: Adam Kleizer <adamkleizer@gmail.com>
Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/hu/
Translation: Homebox/Frontend
2025-07-15 06:09:26 +00:00
Weblate
8da030d415 Translated using Weblate (Hungarian)
Currently translated at 84.7% (417 of 492 strings)

Translated using Weblate (Hungarian)

Currently translated at 84.7% (417 of 492 strings)

Co-authored-by: Adam Kleizer <adamkleizer@gmail.com>
Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/hu/
Translation: Homebox/Frontend
2025-07-15 06:09:06 +00:00
Weblate
393342bc32 Translated using Weblate (Hungarian)
Currently translated at 84.5% (416 of 492 strings)

Translated using Weblate (Hungarian)

Currently translated at 84.5% (416 of 492 strings)

Co-authored-by: Adam Kleizer <adamkleizer@gmail.com>
Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/hu/
Translation: Homebox/Frontend
2025-07-15 06:08:45 +00:00
Weblate
9f331b87df Translated using Weblate (Hungarian)
Currently translated at 84.1% (414 of 492 strings)

Translated using Weblate (Hungarian)

Currently translated at 84.1% (414 of 492 strings)

Co-authored-by: Adam Kleizer <adamkleizer@gmail.com>
Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/hu/
Translation: Homebox/Frontend
2025-07-15 06:08:21 +00:00
Weblate
27efa00ee2 Translated using Weblate (Hungarian)
Currently translated at 83.5% (411 of 492 strings)

Translated using Weblate (Hungarian)

Currently translated at 83.5% (411 of 492 strings)

Co-authored-by: Adam Kleizer <adamkleizer@gmail.com>
Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/hu/
Translation: Homebox/Frontend
2025-07-15 06:07:57 +00:00
Weblate
1224a6e516 Translated using Weblate (Hungarian)
Currently translated at 81.9% (403 of 492 strings)

Translated using Weblate (Hungarian)

Currently translated at 81.9% (403 of 492 strings)

Co-authored-by: Adam Kleizer <adamkleizer@gmail.com>
Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/hu/
Translation: Homebox/Frontend
2025-07-15 06:06:58 +00:00
Weblate
988f9eee8c Translated using Weblate (Hungarian)
Currently translated at 80.8% (398 of 492 strings)

Co-authored-by: Adam Kleizer <adamkleizer@gmail.com>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/hu/
Translation: Homebox/Frontend
2025-07-14 22:56:25 +00:00
Weblate
832b4a6484 Translated using Weblate (Hungarian)
Currently translated at 80.8% (398 of 492 strings)

Translated using Weblate (Hungarian)

Currently translated at 80.8% (398 of 492 strings)

Co-authored-by: Adam Kleizer <adamkleizer@gmail.com>
Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/hu/
Translation: Homebox/Frontend
2025-07-14 14:06:37 +00:00
Weblate
64298511ee Translated using Weblate (Hungarian)
Currently translated at 79.4% (391 of 492 strings)

Translated using Weblate (Hungarian)

Currently translated at 79.4% (391 of 492 strings)

Co-authored-by: Adam Kleizer <adamkleizer@gmail.com>
Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/hu/
Translation: Homebox/Frontend
2025-07-14 14:06:01 +00:00
Weblate
f4ed929e4a Translated using Weblate (Hungarian)
Currently translated at 78.4% (386 of 492 strings)

Translated using Weblate (Hungarian)

Currently translated at 78.4% (386 of 492 strings)

Co-authored-by: Adam Kleizer <adamkleizer@gmail.com>
Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/hu/
Translation: Homebox/Frontend
2025-07-14 14:04:13 +00:00
Weblate
b272c97694 Translated using Weblate (Hungarian)
Currently translated at 73.5% (362 of 492 strings)

Translated using Weblate (Hungarian)

Currently translated at 73.5% (362 of 492 strings)

Co-authored-by: Adam Kleizer <adamkleizer@gmail.com>
Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/hu/
Translation: Homebox/Frontend
2025-07-14 14:01:43 +00:00
Weblate
3004d376ab Translated using Weblate (Slovak)
Currently translated at 100.0% (492 of 492 strings)

Translated using Weblate (Hungarian)

Currently translated at 70.7% (348 of 492 strings)

Translated using Weblate (Hungarian)

Currently translated at 70.7% (348 of 492 strings)

Co-authored-by: Adam Kleizer <adamkleizer@gmail.com>
Co-authored-by: Jose Riha <jose1711@gmail.com>
Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/hu/
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/sk/
Translation: Homebox/Frontend
2025-07-14 13:59:12 +00:00
Matthew Kilgore
8f440e2a64 Fix setup directory for Windows binary 2025-07-12 22:21:38 -04:00
Matthew Kilgore
017b05452a Merge remote-tracking branch 'origin/main' 2025-07-12 16:37:09 -04:00
Matthew Kilgore
6a1f2549df Cleanup main file after revert, add freebsd build 2025-07-12 16:37:01 -04:00
Matthew Kilgore
2f51ba419b Revert "Support listening on unix sockets and systemd sockets (#878)"
This reverts commit 850ed476
2025-07-12 16:33:29 -04:00
Matias Godoy
bcd77ee796 Make search accent-insensitive (#887)
* Make search accent-insensitive

* Efficiency improvements and small fixes

* Fix tests to improve coverage

* Fix SQL compatibility issues
2025-07-12 16:16:55 -04:00
Matt
23cecfb2a5 Refactor main file, add support for postgres certificate authentication (#897)
* Refactor main file, add support for postgres certificate authentication

* Fix potential issues.

* Remove legacy linting ignore comment

* Minor cleanup, documentation update
2025-07-12 16:11:50 -04:00
Matthew Kilgore
f4c8dd5450 Prep docs for Cloudflare worker migration (Pages is apparently deprecated/no longer recommended) 2025-07-12 14:56:05 -04:00
Copilot
72033341b4 Fix photo display issue when adding additional attachments to items (#895)
* Initial plan

* Fix attachment display issue - prevent photo primary status loss when updating non-photo attachments

Co-authored-by: tankerkiller125 <3457368+tankerkiller125@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: tankerkiller125 <3457368+tankerkiller125@users.noreply.github.com>
2025-07-12 13:36:21 -04:00
Copilot
c2cfa10336 Fix nil pointer dereference panic in thumbnail subscription during shutdown (#892)
* Initial plan

* Fix nil pointer dereference in thumbnail subscription handling

Add nil check for msg after subscription.Receive() returns error to prevent
panic when accessing msg.Metadata. When an error occurs or msg is nil,
continue to next iteration instead of trying to process the message.

Co-authored-by: tankerkiller125 <3457368+tankerkiller125@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: tankerkiller125 <3457368+tankerkiller125@users.noreply.github.com>
2025-07-12 11:40:50 -04:00
Balki
850ed476d4 Support listening on unix sockets and systemd sockets (#878) 2025-07-12 09:58:16 -04:00
Ahmosys
adea83d421 fix(frontend/location): preserve parent location when using "Create and Add another" (#879)
* fix(frontend/location): preserve parent in "Create and Add another" modal flow

* fix: normalize line endings

* fix: preserve parent location state when modal closed
2025-07-12 00:08:41 +00:00
Ahmosys
d678c35c57 fix(frontend/scanner): close scanner modal after successful QR code scan (#889)
* fix(frontend/scanner): close scanner modal after successful QR code scan

* fix: linting errors
2025-07-10 17:00:08 -04:00
Matt
d3073b472d Fix rootless 2025-07-10 16:58:23 -04:00
Matt
b274f81dbb Fix broken docker actions 2025-07-10 16:56:50 -04:00
Matt
721e407600 Update docker-publish.yaml 2025-07-10 14:32:12 -04:00
Copilot
ca4aed7bd3 Fix GitHub Actions Docker workflow syntax errors for secrets access (#882)
* Initial plan

* Fix GitHub Actions Docker workflow syntax errors

Co-authored-by: tankerkiller125 <3457368+tankerkiller125@users.noreply.github.com>

* Fix GitHub Actions expression syntax for if conditions

Co-authored-by: tankerkiller125 <3457368+tankerkiller125@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: tankerkiller125 <3457368+tankerkiller125@users.noreply.github.com>
2025-07-10 14:29:30 -04:00
Weblate
746bd50f24 Translated using Weblate (Slovenian)
Currently translated at 100.0% (492 of 492 strings)

Co-authored-by: Murk <saso@workrum.net>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/sl/
Translation: Homebox/Frontend
2025-07-10 13:52:06 +00:00
Weblate
945a768691 Translated using Weblate (Danish)
Currently translated at 99.7% (491 of 492 strings)

Co-authored-by: Heine Olsen <olsen10051988@gmail.com>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/da/
Translation: Homebox/Frontend
2025-07-10 02:06:57 +00:00
Weblate
27237ae6d3 Translated using Weblate (Danish)
Currently translated at 95.9% (472 of 492 strings)

Co-authored-by: Heine Olsen <olsen10051988@gmail.com>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/da/
Translation: Homebox/Frontend
2025-07-10 00:18:26 +00:00
Ahmed Al Hafoudh
4463867cf0 Pass label param to print command template (#886) 2025-07-09 12:11:16 -04:00
Weblate
95e2fb6a15 Translated using Weblate (Norwegian Bokmål)
Currently translated at 99.3% (489 of 492 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 99.3% (489 of 492 strings)

Co-authored-by: Anders Øyvind Urke-Sætre <andersoyvind@gmail.com>
Co-authored-by: MyMemory <noreply-mt-mymemory@weblate.org>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/nb_NO/
Translation: Homebox/Frontend
2025-07-09 12:35:25 +00:00
Copilot
e32dd0aaa5 Fix frontend duplicate tag creation in Label Selector (#861)
* Initial plan

* Fix frontend duplicate tag creation issue

Co-authored-by: tankerkiller125 <3457368+tankerkiller125@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: tankerkiller125 <3457368+tankerkiller125@users.noreply.github.com>
2025-07-09 03:48:46 +00:00
Weblate
ee5c43dc29 Translated using Weblate (French)
Currently translated at 99.3% (489 of 492 strings)

Co-authored-by: buzz <buzz.eclair@gmail.com>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/fr/
Translation: Homebox/Frontend
2025-07-08 20:00:40 +00:00
Matt
17c9685391 Better Copilot tooling 2025-07-07 11:46:41 -04:00
Copilot
fd41065250 Fix warranty section visibility when lifetime warranty is enabled (#875)
* Initial plan

* Fix warranty section visibility when lifetime warranty is enabled

Co-authored-by: tankerkiller125 <3457368+tankerkiller125@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: tankerkiller125 <3457368+tankerkiller125@users.noreply.github.com>
2025-07-07 11:24:26 -04:00
Weblate
f9b1327507 Translated using Weblate (Slovak)
Currently translated at 100.0% (492 of 492 strings)

Co-authored-by: Jose Riha <jose1711@gmail.com>
Translate-URL: https://translate.sysadminsmedia.com/projects/homebox/frontend/sk/
Translation: Homebox/Frontend
2025-07-07 14:00:40 +00:00
392 changed files with 81203 additions and 17768 deletions

View File

@@ -29,6 +29,6 @@
// Comment out to connect as root instead. More info: https://aka.ms/vscode-remote/containers/non-root.
"remoteUser": "node",
"features": {
"ghcr.io/devcontainers/features/go:1": "1.21"
"ghcr.io/devcontainers/features/go:1": "1.24"
}
}

3
.gitattributes vendored Normal file
View File

@@ -0,0 +1,3 @@
backend/internal/data/ent/** linguist-generated=true
backend/internal/data/ent/schema/** linguist-generated=false
frontend/lib/api/types/** linguist-generated=true

40
.github/AGENTS.md vendored Normal file
View File

@@ -0,0 +1,40 @@
This is a Go based repository with a VueJS client for the frontend built with Vite and Nuxt, with ShadCN.
To make life easier, a Taskfile is included that covers the majority of development commands.
Please follow these guidelines when contributing:
## Required Before Each Commit
- Generate Swagger Files: `task swag --force`
- Generate JS API Client: `task typescript-types --force`
- Lint Golang: `task go:lint`
- Lint frontend: `task ui:fix`
## Repository Structure
### Backend
- `backend/`: Contains the backend folders
- `backend/app`: Contains main app code including API endpoints
- `backend/internal/core`: Contains basic services such as currencies
- `backend/data`: Contains all information related to data, including `ent` schemas, repos, migrations, etc.
- `backend/data/migrations`: Contains migration data, the `sqlite3` sub-folder contains sqlite migrations, `postgres` sub-folder the postgres migrations, BOTH are REQUIRED.
- `backend/data/ent/schema`: Contains the actual `ent` data models.
- `backend/data/repo`: Contains the data repositories
- `backend/pkgs`: Contains general helper functions and services
### Frontend
- `frontend/`: Contains initial frontend files
- `frontend/components`: Contains the ShadCN components
- `frontend/locales`: Contains the i18n JSON for languages
- `frontend/pages`: Contains VueJS pages
- `frontend/test`: Contains Playwright setup
- `frontend/test/e2e`: Contains actual Playwright test files
### Docs
- `docs/`: Contains VitePress based documentation
## Key Guidelines
1. Follow best practices for the various programming languages
2. Maintain existing code structure and organization when possible
3. Use dependency injection when reasonable
4. Write tests for new functionality and after fixing bugs to validate they're fixed
5. Document changes to the `docs/` folder when appropriate

1
.github/FUNDING.yml vendored
View File

@@ -1 +1,2 @@
open_collective: homebox
github: [tankerkiller125,katosdev,tonyaellie]

View File

@@ -58,6 +58,17 @@ body:
- Other
validations:
required: true
- type: dropdown
id: database
attributes:
label: Database Type
description: What database backend are you using?
multiple: false
options:
- SQLite
- PostgreSQL
validations:
required: true
- type: dropdown
id: arch
attributes:

BIN  .github/screenshots/1.png vendored Normal file (binary file not shown; 53 KiB)
BIN  .github/screenshots/10.png vendored Normal file (binary file not shown; 29 KiB)
BIN  .github/screenshots/2.png vendored Normal file (binary file not shown; 81 KiB)
BIN  .github/screenshots/3.png vendored Normal file (binary file not shown; 87 KiB)
BIN  .github/screenshots/4.png vendored Normal file (binary file not shown; 83 KiB)
BIN  .github/screenshots/5.png vendored Normal file (binary file not shown; 95 KiB)
BIN  .github/screenshots/6.png vendored Normal file (binary file not shown; 61 KiB)
BIN  .github/screenshots/7.png vendored Normal file (binary file not shown; 108 KiB)
BIN  .github/screenshots/8.png vendored Normal file (binary file not shown; 85 KiB)
BIN  .github/screenshots/9.png vendored Normal file (binary file not shown; 86 KiB)

8
.github/screenshots/readme.md vendored Normal file
View File

@@ -0,0 +1,8 @@
# Screenshots
These screenshots are taken from our public [Demo](https://demo.homebox.software) instance.
Note that while we make every effort to keep these maintained and up to date, they may be outdated or missing functionality; we always advise reviewing our demo instances:
- [Demo](https://demo.homebox.software)
- [Nightly](https://nightly.homebox.software)
- [VNext](https://vnext.homebox.software/)

View File

@@ -1,16 +1,33 @@
#!/usr/bin/env python3
import csv
import io
import json
import logging
import os
import sys
from pathlib import Path
import requests
-from requests.adapters import HTTPAdapter, Retry
+from requests.adapters import HTTPAdapter
+from urllib3.util.retry import Retry
API_URL = 'https://restcountries.com/v3.1/all?fields=name,common,currencies'
# Default to a pinned commit for supply-chain security
DEFAULT_ISO_4217_URL = 'https://raw.githubusercontent.com/datasets/currency-codes/052b3088938ba32028a14e75040c286c5e142145/data/codes-all.csv'
ISO_4217_URL = os.environ.get('ISO_4217_URL', DEFAULT_ISO_4217_URL)
SAVE_PATH = Path('backend/internal/core/currencies/currencies.json')
TIMEOUT = 10 # seconds
# Known currency decimal overrides
CURRENCY_DECIMAL_OVERRIDES = {
    "BTC": 8,  # Bitcoin uses 8 decimal places
    "JPY": 0,  # Japanese Yen has no decimal places
    "BHD": 3,  # Bahraini Dinar uses 3 decimal places
}
DEFAULT_DECIMALS = 2
MIN_DECIMALS = 0
MAX_DECIMALS = 6
def setup_logging():
logging.basicConfig(
@@ -19,7 +36,93 @@ def setup_logging():
)
def get_currency_decimals(code, iso_data):
    """
    Get the decimal places for a currency code.
    Checks overrides first, then ISO data, then uses default.
    Clamps result to safe range [MIN_DECIMALS, MAX_DECIMALS].
    """
    # Normalize the input code
    normalized_code = (code or "").strip().upper()
    # First check overrides
    if normalized_code in CURRENCY_DECIMAL_OVERRIDES:
        decimals = CURRENCY_DECIMAL_OVERRIDES[normalized_code]
    # Then check ISO data
    elif normalized_code in iso_data:
        decimals = iso_data[normalized_code]
    # Finally use default
    else:
        decimals = DEFAULT_DECIMALS
    # Ensure it's an integer and clamp to safe range
    try:
        decimals = int(decimals)
    except (ValueError, TypeError):
        decimals = DEFAULT_DECIMALS
    return max(MIN_DECIMALS, min(MAX_DECIMALS, decimals))
def fetch_iso_4217_data():
    """
    Fetch ISO 4217 currency data to get minor units (decimal places).
    Returns a dict mapping currency code to minor units.
    """
    # Log the resolved URL for transparency
    logging.info("Fetching ISO 4217 data from: %s", ISO_4217_URL)
    if not ISO_4217_URL.lower().startswith("https://"):
        logging.error("Refusing non-HTTPS ISO_4217_URL: %s", ISO_4217_URL)
        return {}
    session = requests.Session()
    retries = Retry(
        total=3,
        backoff_factor=1,
        status_forcelist=[429, 500, 502, 503, 504],
        allowed_methods=frozenset(['GET'])
    )
    session.mount('https://', HTTPAdapter(max_retries=retries))
    try:
        # Add Accept header for CSV content
        headers = {'Accept': 'text/csv'}
        resp = session.get(ISO_4217_URL, timeout=TIMEOUT, headers=headers)
        resp.raise_for_status()
    except requests.exceptions.RequestException as e:
        logging.error("Failed to fetch ISO 4217 data: %s", e)
        return {}
    # Parse CSV data
    iso_data = {}
    try:
        # Decode with utf-8-sig to strip BOM if present
        csv_content = resp.content.decode('utf-8-sig')
        csv_reader = csv.DictReader(io.StringIO(csv_content))
        for row in csv_reader:
            code = row.get('AlphabeticCode', '').strip()
            minor_unit = row.get('MinorUnit', '').strip()
            if code and minor_unit != 'N.A.':
                try:
                    # Convert minor unit to int (decimal places)
                    iso_data[code] = int(minor_unit) if minor_unit.isdigit() else 2
                except (ValueError, TypeError):
                    iso_data[code] = 2  # Default to 2 if parsing fails
        logging.info("Successfully loaded decimal data for %d currencies from ISO 4217", len(iso_data))
        return iso_data
    except Exception as e:
        logging.error("Failed to parse ISO 4217 CSV data: %s", e)
        return {}
def fetch_currencies():
    # First, fetch ISO 4217 data for decimal places
    iso_data = fetch_iso_4217_data()
    session = requests.Session()
    retries = Retry(
        total=3,
@@ -46,11 +149,15 @@ def fetch_currencies():
    for country in countries:
        country_name = country.get('name', {}).get('common') or "Unknown"
        for code, info in country.get('currencies', {}).items():
            # Get decimal places using the helper function
            decimals = get_currency_decimals(code, iso_data)
            results.append({
-               'code': code,
-               'local': country_name,
-               'symbol': info.get('symbol', ''),
-               'name': info.get('name', '')
+               'code': code,
+               'local': country_name,
+               'symbol': info.get('symbol', ''),
+               'name': info.get('name', ''),
+               'decimals': decimals
            })
    # sort by country name for consistency

259
.github/scripts/upgrade-test/README.md vendored Normal file
View File

@@ -0,0 +1,259 @@
# HomeBox Upgrade Testing Workflow
This document describes the automated upgrade testing workflow for HomeBox.
## Overview
The upgrade test workflow is designed to ensure data integrity and functionality when upgrading HomeBox from one version to another. It automatically:
1. Deploys a stable version of HomeBox
2. Creates test data (users, items, locations, labels, notifiers, attachments)
3. Upgrades to the latest version from the main branch
4. Verifies all data and functionality remain intact
## Workflow File
**Location**: `.github/workflows/upgrade-test.yaml`
## Trigger Conditions
The workflow runs:
- **Daily**: Automatically at 2 AM UTC (via cron schedule)
- **Manual**: Can be triggered manually via GitHub Actions UI
- **On Push**: When changes are made to the workflow files or test scripts
## Test Scenarios
### 1. Environment Setup
- Pulls the latest stable HomeBox Docker image from GHCR
- Starts the application with test configuration
- Ensures the service is healthy and ready (see the readiness-check sketch below)
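The readiness step above can be reproduced locally with a simple polling loop. The sketch below is only an illustration: it assumes the instance is reachable at `HOMEBOX_URL` and reuses the `/api/v1/status` endpoint referenced in the Troubleshooting section; the workflow itself may use a different mechanism.

```bash
#!/bin/bash
# Minimal readiness check: poll the status endpoint until HomeBox responds.
HOMEBOX_URL="${HOMEBOX_URL:-http://localhost:7745}"

for i in $(seq 1 30); do
  if curl -sf "$HOMEBOX_URL/api/v1/status" > /dev/null; then
    echo "HomeBox is ready"
    exit 0
  fi
  echo "Waiting for HomeBox... ($i/30)"
  sleep 2
done

echo "HomeBox did not become ready in time" >&2
exit 1
```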
### 2. Data Creation
The workflow creates comprehensive test data using the `create-test-data.sh` script (a minimal API sketch follows the lists below):
#### Users and Groups
- **Group 1**: 5 users (user1@homebox.test through user5@homebox.test)
- **Group 2**: 2 users (user6@homebox.test and user7@homebox.test)
- All users have password: `TestPassword123!`
#### Locations
- **Group 1**: Living Room, Garage
- **Group 2**: Home Office
#### Labels
- **Group 1**: Electronics, Important
- **Group 2**: Work Equipment
#### Items
- **Group 1**: 5 items (Laptop Computer, Power Drill, TV Remote, Tool Box, Coffee Maker)
- **Group 2**: 2 items (Monitor, Keyboard)
#### Attachments
- Multiple attachments added to various items (receipts, manuals, warranties)
#### Notifiers
- **Group 1**: Test notifier named "TESTING"
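As a rough illustration of what `create-test-data.sh` does for each account listed above, the sketch below registers and logs in a single user via the REST API. The endpoint paths, payload fields, and token handling here are assumptions for illustration only; the script itself is the authoritative reference.

```bash
#!/bin/bash
# Hypothetical single-user version of the data-creation flow.
# Endpoints and payload fields are assumptions; see create-test-data.sh for the real calls.
HOMEBOX_URL="${HOMEBOX_URL:-http://localhost:7745}"
API_URL="$HOMEBOX_URL/api/v1"
EMAIL="user1@homebox.test"
PASSWORD="TestPassword123!"

# Register the user (requires HBOX_OPTIONS_ALLOW_REGISTRATION=true)
curl -sf -X POST "$API_URL/users/register" \
  -H "Content-Type: application/json" \
  -d "{\"name\": \"User One\", \"email\": \"$EMAIL\", \"password\": \"$PASSWORD\"}"

# Log in and capture a token for subsequent API calls
TOKEN=$(curl -sf -X POST "$API_URL/users/login" \
  -H "Content-Type: application/json" \
  -d "{\"username\": \"$EMAIL\", \"password\": \"$PASSWORD\"}" | jq -r '.token')

echo "Logged in, token starts with: ${TOKEN:0:12}"
```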
### 3. Upgrade Process
1. Stops the stable version container
2. Builds a fresh image from the current main branch
3. Copies the database to a new location (a minimal copy sketch follows the steps below)
4. Starts the new version with the existing data
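For a SQLite-backed instance, the copy step can be as simple as copying the database file between data directories. The paths below follow the conventions used elsewhere in this document and are shown only as a sketch, not the exact commands the workflow runs:

```bash
# Illustrative copy of the SQLite database from the old data directory to the new one
mkdir -p /tmp/homebox-data-new
cp /tmp/homebox-data/homebox.db /tmp/homebox-data-new/homebox.db
ls -lh /tmp/homebox-data-new/homebox.db
```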
### 4. Verification Tests
The Playwright test suite (`upgrade-verification.spec.ts`) verifies the following (an API-level spot check is sketched after this list):
- **User Authentication**: All 7 users can log in with their credentials
- **Data Persistence**: All items, locations, and labels are present
- **Attachments**: File attachments are correctly associated with items
- **Notifiers**: The "TESTING" notifier is still configured
- **UI Functionality**: Version display and theme switching work correctly
- **Data Isolation**: Groups can only see their own data
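The same assertions can be spot-checked at the API level. The sketch below logs in as a Group 2 user and counts that group's items; the endpoint paths, response shape, and token format are assumptions for illustration, and the Playwright suite remains the source of truth.

```bash
#!/bin/bash
# Illustrative API-level spot check of data isolation after the upgrade.
# Endpoints and response fields are assumptions; verify against the real API.
HOMEBOX_URL="${HOMEBOX_URL:-http://localhost:7745}"
API_URL="$HOMEBOX_URL/api/v1"

# Log in as a Group 2 user (credentials from the test data described above)
TOKEN=$(curl -sf -X POST "$API_URL/users/login" \
  -H "Content-Type: application/json" \
  -d '{"username": "user6@homebox.test", "password": "TestPassword123!"}' | jq -r '.token')

# Group 2 created 2 items (Monitor, Keyboard); any other count suggests broken isolation
ITEM_COUNT=$(curl -sf "$API_URL/items" -H "Authorization: Bearer $TOKEN" | jq '.items | length')
echo "Group 2 sees $ITEM_COUNT items (expected 2)"
```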
## Test Data File
The setup script generates a JSON file at `/tmp/test-users.json` containing:
```json
{
"users": [
{
"email": "user1@homebox.test",
"password": "TestPassword123!",
"token": "...",
"group": "1"
},
...
],
"locations": {
"group1": ["location-id-1", "location-id-2"],
"group2": ["location-id-3"]
},
"labels": {...},
"items": {...},
"notifiers": {...}
}
```
This file is used by the Playwright tests to verify data integrity.
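Because it is plain JSON, the file is also convenient for ad-hoc debugging with `jq`, using the field names shown above:

```bash
# Print the first user's credentials and the Group 1 location IDs from the test data file
jq -r '.users[0] | "\(.email) / \(.password)"' /tmp/test-users.json
jq -r '.locations.group1[]' /tmp/test-users.json
```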
## Scripts
### create-test-data.sh
**Location**: `.github/scripts/upgrade-test/create-test-data.sh`
**Purpose**: Creates all test data via the HomeBox REST API
**Environment Variables**:
- `HOMEBOX_URL`: Base URL of the HomeBox instance (default: http://localhost:7745)
- `TEST_DATA_FILE`: Path to output JSON file (default: /tmp/test-users.json)
**Requirements**:
- `curl`: For API calls
- `jq`: For JSON processing
**Usage**:
```bash
export HOMEBOX_URL=http://localhost:7745
./.github/scripts/upgrade-test/create-test-data.sh
```
## Running Tests Locally
To run the upgrade tests locally:
### Prerequisites
```bash
# Install dependencies
sudo apt-get install -y jq curl docker.io
# Install pnpm and Playwright
cd frontend
pnpm install
pnpm exec playwright install --with-deps chromium
```
### Run the test
```bash
# Start stable version
docker run -d \
--name homebox-test \
-p 7745:7745 \
-e HBOX_OPTIONS_ALLOW_REGISTRATION=true \
-v /tmp/homebox-data:/data \
ghcr.io/sysadminsmedia/homebox:latest
# Wait for the API to become ready
timeout 60 bash -c 'until curl -f http://localhost:7745/api/v1/status; do sleep 2; done'
# Create test data
export HOMEBOX_URL=http://localhost:7745
./.github/scripts/upgrade-test/create-test-data.sh
# Stop container
docker stop homebox-test
docker rm homebox-test
# Build new version
docker build -t homebox:test .
# Start new version with existing data
docker run -d \
--name homebox-test \
-p 7745:7745 \
-e HBOX_OPTIONS_ALLOW_REGISTRATION=true \
-v /tmp/homebox-data:/data \
homebox:test
# Wait for the API to become ready
timeout 60 bash -c 'until curl -f http://localhost:7745/api/v1/status; do sleep 2; done'
# Run verification tests
cd frontend
TEST_DATA_FILE=/tmp/test-users.json \
E2E_BASE_URL=http://localhost:7745 \
pnpm exec playwright test \
--project=chromium \
test/e2e/upgrade-verification.spec.ts
# Cleanup
docker stop homebox-test
docker rm homebox-test
```
## Artifacts
The workflow produces several artifacts:
1. **playwright-report-upgrade-test**: HTML report of test results
2. **playwright-traces**: Detailed traces for debugging failures
3. **Docker logs**: Collected on failure for troubleshooting
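To inspect these locally, the report and trace artifacts can be pulled down with the GitHub CLI (assuming `gh` is installed and authenticated; the run ID below is a placeholder taken from the Actions UI or `gh run list`):
```bash
# Download the HTML report and traces from a specific workflow run
gh run download 1234567890 -n playwright-report-upgrade-test -D ./upgrade-report
gh run download 1234567890 -n playwright-traces -D ./upgrade-traces
```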
## Failure Scenarios
The workflow will fail if:
- The stable version fails to start
- Test data creation fails
- The new version fails to start with existing data
- Any verification test fails
- Database migrations fail
## Troubleshooting
### Test Data Creation Fails
Check the Docker logs (the container is named `homebox-old` in the CI workflow; substitute `homebox-test` if you followed the local steps above):
```bash
docker logs homebox-old
```
Verify the API is accessible:
```bash
curl http://localhost:7745/api/v1/status
```
### Verification Tests Fail
1. Download the Playwright report from GitHub Actions artifacts
2. Review the HTML report for detailed failure information
3. Check traces for visual debugging
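Once downloaded, the report and traces can be opened with Playwright's own tooling (a sketch, assuming the artifacts were extracted to `./upgrade-report` and `./upgrade-traces` as above):
```bash
cd frontend
# Open the HTML report in a browser
pnpm exec playwright show-report ../upgrade-report
# Inspect an individual trace archive from the traces artifact
pnpm exec playwright show-trace ../upgrade-traces/<test-dir>/trace.zip
```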
### Database Issues
If migrations fail:
```bash
# Check database file
ls -lh /tmp/homebox-data-new/homebox.db
# Check Docker logs for migration errors
docker logs homebox-new
```
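If the database file exists but the new version still refuses to start, a quick integrity check on the SQLite file can rule out corruption (a sketch, assuming the `sqlite3` CLI is installed and the default SQLite backend is in use):
```bash
# Should print "ok" for a healthy database
sqlite3 /tmp/homebox-data-new/homebox.db "PRAGMA integrity_check;"
# List tables to confirm the schema is present after migration
sqlite3 /tmp/homebox-data-new/homebox.db ".tables"
```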
## Future Enhancements
Potential improvements:
- [ ] Test multiple upgrade paths (e.g., v0.10 → v0.11 → v0.12)
- [ ] Test with PostgreSQL backend in addition to SQLite
- [ ] Add performance benchmarks
- [ ] Test with larger datasets
- [ ] Add API-level verification in addition to UI tests
- [ ] Test backup and restore functionality
## Related Files
- `.github/workflows/upgrade-test.yaml` - Main workflow definition
- `.github/scripts/upgrade-test/create-test-data.sh` - Data generation script
- `frontend/test/e2e/upgrade-verification.spec.ts` - Playwright verification tests
- `.github/workflows/e2e-partial.yaml` - Standard E2E test workflow (for reference)
## Support
For issues or questions about this workflow:
1. Check the GitHub Actions run logs
2. Review this documentation
3. Open an issue in the repository

View File

@@ -0,0 +1,413 @@
#!/bin/bash
# Script to create test data in HomeBox for upgrade testing
# This script creates users, items, attachments, notifiers, locations, and labels
set -e
HOMEBOX_URL="${HOMEBOX_URL:-http://localhost:7745}"
API_URL="${HOMEBOX_URL}/api/v1"
TEST_DATA_FILE="${TEST_DATA_FILE:-/tmp/test-users.json}"
echo "Creating test data in HomeBox at $HOMEBOX_URL"
# Function to make API calls with error handling
api_call() {
local method=$1
local endpoint=$2
local data=$3
local token=$4
if [ -n "$token" ]; then
if [ -n "$data" ]; then
curl -s -X "$method" \
-H "Authorization: Bearer $token" \
-H "Content-Type: application/json" \
-d "$data" \
"$API_URL$endpoint"
else
curl -s -X "$method" \
-H "Authorization: Bearer $token" \
-H "Content-Type: application/json" \
"$API_URL$endpoint"
fi
else
if [ -n "$data" ]; then
curl -s -X "$method" \
-H "Content-Type: application/json" \
-d "$data" \
"$API_URL$endpoint"
else
curl -s -X "$method" \
-H "Content-Type: application/json" \
"$API_URL$endpoint"
fi
fi
}
# Function to register a user and get token
register_user() {
local email=$1
local name=$2
local password=$3
local group_token=$4
echo "Registering user: $email"
local payload="{\"email\":\"$email\",\"name\":\"$name\",\"password\":\"$password\""
if [ -n "$group_token" ]; then
payload="$payload,\"groupToken\":\"$group_token\""
fi
payload="$payload}"
local response=$(curl -s -X POST \
-H "Content-Type: application/json" \
-d "$payload" \
"$API_URL/users/register")
echo "$response"
}
# Function to login and get token
login_user() {
local email=$1
local password=$2
echo "Logging in user: $email" >&2
local response=$(curl -s -X POST \
-H "Content-Type: application/json" \
-d "{\"username\":\"$email\",\"password\":\"$password\"}" \
"$API_URL/users/login")
echo "$response" | jq -r '.token // empty'
}
# Function to create an item
create_item() {
local token=$1
local name=$2
local description=$3
local location_id=$4
echo "Creating item: $name" >&2
local payload="{\"name\":\"$name\",\"description\":\"$description\""
if [ -n "$location_id" ]; then
payload="$payload,\"locationId\":\"$location_id\""
fi
payload="$payload}"
local response=$(curl -s -X POST \
-H "Authorization: Bearer $token" \
-H "Content-Type: application/json" \
-d "$payload" \
"$API_URL/items")
echo "$response"
}
# Function to create a location
create_location() {
local token=$1
local name=$2
local description=$3
echo "Creating location: $name" >&2
local response=$(curl -s -X POST \
-H "Authorization: Bearer $token" \
-H "Content-Type: application/json" \
-d "{\"name\":\"$name\",\"description\":\"$description\"}" \
"$API_URL/locations")
echo "$response"
}
# Function to create a label
create_label() {
local token=$1
local name=$2
local description=$3
echo "Creating label: $name" >&2
local response=$(curl -s -X POST \
-H "Authorization: Bearer $token" \
-H "Content-Type: application/json" \
-d "{\"name\":\"$name\",\"description\":\"$description\"}" \
"$API_URL/labels")
echo "$response"
}
# Function to create a notifier
create_notifier() {
local token=$1
local name=$2
local url=$3
echo "Creating notifier: $name" >&2
local response=$(curl -s -X POST \
-H "Authorization: Bearer $token" \
-H "Content-Type: application/json" \
-d "{\"name\":\"$name\",\"url\":\"$url\",\"isActive\":true}" \
"$API_URL/groups/notifiers")
echo "$response"
}
# Function to attach a file to an item (creates a dummy attachment)
attach_file_to_item() {
local token=$1
local item_id=$2
local filename=$3
echo "Creating attachment for item: $item_id" >&2
# Create a temporary file with some content
local temp_file=$(mktemp)
echo "This is a test attachment for $filename" > "$temp_file"
local response=$(curl -s -X POST \
-H "Authorization: Bearer $token" \
-F "file=@$temp_file" \
-F "type=attachment" \
-F "name=$filename" \
"$API_URL/items/$item_id/attachments")
rm -f "$temp_file"
echo "$response"
}
# Initialize test data storage
echo "{\"users\":[]}" > "$TEST_DATA_FILE"
echo "=== Step 1: Create first group with 5 users ==="
# Register first user (creates a new group)
user1_response=$(register_user "user1@homebox.test" "User One" "TestPassword123!")
user1_token=$(echo "$user1_response" | jq -r '.token // empty')
group_token=$(echo "$user1_response" | jq -r '.group.inviteToken // empty')
if [ -z "$user1_token" ]; then
echo "Failed to register first user"
echo "Response: $user1_response"
exit 1
fi
echo "First user registered with token. Group token: $group_token"
# Store user1 data
jq --arg email "user1@homebox.test" \
--arg password "TestPassword123!" \
--arg token "$user1_token" \
--arg group "1" \
'.users += [{"email":$email,"password":$password,"token":$token,"group":$group}]' \
"$TEST_DATA_FILE" > "$TEST_DATA_FILE.tmp" && mv "$TEST_DATA_FILE.tmp" "$TEST_DATA_FILE"
# Register 4 more users in the same group
for i in {2..5}; do
echo "Registering user$i in group 1..."
user_response=$(register_user "user${i}@homebox.test" "User $i" "TestPassword123!" "$group_token")
user_token=$(echo "$user_response" | jq -r '.token // empty')
if [ -z "$user_token" ]; then
echo "Failed to register user$i"
echo "Response: $user_response"
else
echo "user$i registered successfully"
# Store user data
jq --arg email "user${i}@homebox.test" \
--arg password "TestPassword123!" \
--arg token "$user_token" \
--arg group "1" \
'.users += [{"email":$email,"password":$password,"token":$token,"group":$group}]' \
"$TEST_DATA_FILE" > "$TEST_DATA_FILE.tmp" && mv "$TEST_DATA_FILE.tmp" "$TEST_DATA_FILE"
fi
done
echo "=== Step 2: Create second group with 2 users ==="
# Register first user of second group
user6_response=$(register_user "user6@homebox.test" "User Six" "TestPassword123!")
user6_token=$(echo "$user6_response" | jq -r '.token // empty')
group2_token=$(echo "$user6_response" | jq -r '.group.inviteToken // empty')
if [ -z "$user6_token" ]; then
echo "Failed to register user6"
echo "Response: $user6_response"
exit 1
fi
echo "user6 registered with token. Group 2 token: $group2_token"
# Store user6 data
jq --arg email "user6@homebox.test" \
--arg password "TestPassword123!" \
--arg token "$user6_token" \
--arg group "2" \
'.users += [{"email":$email,"password":$password,"token":$token,"group":$group}]' \
"$TEST_DATA_FILE" > "$TEST_DATA_FILE.tmp" && mv "$TEST_DATA_FILE.tmp" "$TEST_DATA_FILE"
# Register second user in group 2
user7_response=$(register_user "user7@homebox.test" "User Seven" "TestPassword123!" "$group2_token")
user7_token=$(echo "$user7_response" | jq -r '.token // empty')
if [ -z "$user7_token" ]; then
echo "Failed to register user7"
echo "Response: $user7_response"
else
echo "user7 registered successfully"
# Store user7 data
jq --arg email "user7@homebox.test" \
--arg password "TestPassword123!" \
--arg token "$user7_token" \
--arg group "2" \
'.users += [{"email":$email,"password":$password,"token":$token,"group":$group}]' \
"$TEST_DATA_FILE" > "$TEST_DATA_FILE.tmp" && mv "$TEST_DATA_FILE.tmp" "$TEST_DATA_FILE"
fi
echo "=== Step 3: Create locations for each group ==="
# Create locations for group 1 (using user1's token)
location1=$(create_location "$user1_token" "Living Room" "Main living area")
location1_id=$(echo "$location1" | jq -r '.id // empty')
echo "Created location: Living Room (ID: $location1_id)"
location2=$(create_location "$user1_token" "Garage" "Storage and tools")
location2_id=$(echo "$location2" | jq -r '.id // empty')
echo "Created location: Garage (ID: $location2_id)"
# Create location for group 2 (using user6's token)
location3=$(create_location "$user6_token" "Home Office" "Work from home space")
location3_id=$(echo "$location3" | jq -r '.id // empty')
echo "Created location: Home Office (ID: $location3_id)"
# Store locations
jq --arg loc1 "$location1_id" \
--arg loc2 "$location2_id" \
--arg loc3 "$location3_id" \
'.locations = {"group1":[$loc1,$loc2],"group2":[$loc3]}' \
"$TEST_DATA_FILE" > "$TEST_DATA_FILE.tmp" && mv "$TEST_DATA_FILE.tmp" "$TEST_DATA_FILE"
echo "=== Step 4: Create labels for each group ==="
# Create labels for group 1
label1=$(create_label "$user1_token" "Electronics" "Electronic devices")
label1_id=$(echo "$label1" | jq -r '.id // empty')
echo "Created label: Electronics (ID: $label1_id)"
label2=$(create_label "$user1_token" "Important" "High priority items")
label2_id=$(echo "$label2" | jq -r '.id // empty')
echo "Created label: Important (ID: $label2_id)"
# Create label for group 2
label3=$(create_label "$user6_token" "Work Equipment" "Items for work")
label3_id=$(echo "$label3" | jq -r '.id // empty')
echo "Created label: Work Equipment (ID: $label3_id)"
# Store labels
jq --arg lab1 "$label1_id" \
--arg lab2 "$label2_id" \
--arg lab3 "$label3_id" \
'.labels = {"group1":[$lab1,$lab2],"group2":[$lab3]}' \
"$TEST_DATA_FILE" > "$TEST_DATA_FILE.tmp" && mv "$TEST_DATA_FILE.tmp" "$TEST_DATA_FILE"
echo "=== Step 5: Create test notifier ==="
# Create notifier for group 1
notifier1=$(create_notifier "$user1_token" "TESTING" "https://example.com/webhook")
notifier1_id=$(echo "$notifier1" | jq -r '.id // empty')
echo "Created notifier: TESTING (ID: $notifier1_id)"
# Store notifier
jq --arg not1 "$notifier1_id" \
'.notifiers = {"group1":[$not1]}' \
"$TEST_DATA_FILE" > "$TEST_DATA_FILE.tmp" && mv "$TEST_DATA_FILE.tmp" "$TEST_DATA_FILE"
echo "=== Step 6: Create items for all users ==="
# Create items for users in group 1
declare -A user_tokens
user_tokens[1]=$user1_token
user_tokens[2]=$user1_token # Users in the same group share data, so user1's token is reused
user_tokens[3]=$user1_token
user_tokens[4]=$user1_token
user_tokens[5]=$user1_token
# Items for group 1 users
echo "Creating items for group 1..."
item1=$(create_item "$user1_token" "Laptop Computer" "Dell XPS 15 for work" "$location1_id")
item1_id=$(echo "$item1" | jq -r '.id // empty')
echo "Created item: Laptop Computer (ID: $item1_id)"
item2=$(create_item "$user1_token" "Power Drill" "DeWalt 20V cordless drill" "$location2_id")
item2_id=$(echo "$item2" | jq -r '.id // empty')
echo "Created item: Power Drill (ID: $item2_id)"
item3=$(create_item "$user1_token" "TV Remote" "Samsung TV remote control" "$location1_id")
item3_id=$(echo "$item3" | jq -r '.id // empty')
echo "Created item: TV Remote (ID: $item3_id)"
item4=$(create_item "$user1_token" "Tool Box" "Red metal tool box with tools" "$location2_id")
item4_id=$(echo "$item4" | jq -r '.id // empty')
echo "Created item: Tool Box (ID: $item4_id)"
item5=$(create_item "$user1_token" "Coffee Maker" "Breville espresso machine" "$location1_id")
item5_id=$(echo "$item5" | jq -r '.id // empty')
echo "Created item: Coffee Maker (ID: $item5_id)"
# Items for group 2 users
echo "Creating items for group 2..."
item6=$(create_item "$user6_token" "Monitor" "27 inch 4K monitor" "$location3_id")
item6_id=$(echo "$item6" | jq -r '.id // empty')
echo "Created item: Monitor (ID: $item6_id)"
item7=$(create_item "$user6_token" "Keyboard" "Mechanical keyboard" "$location3_id")
item7_id=$(echo "$item7" | jq -r '.id // empty')
echo "Created item: Keyboard (ID: $item7_id)"
# Store items
jq --argjson group1_items "[\"$item1_id\",\"$item2_id\",\"$item3_id\",\"$item4_id\",\"$item5_id\"]" \
--argjson group2_items "[\"$item6_id\",\"$item7_id\"]" \
'.items = {"group1":$group1_items,"group2":$group2_items}' \
"$TEST_DATA_FILE" > "$TEST_DATA_FILE.tmp" && mv "$TEST_DATA_FILE.tmp" "$TEST_DATA_FILE"
echo "=== Step 7: Add attachments to items ==="
# Add attachments for group 1 items
echo "Adding attachments to group 1 items..."
attach_file_to_item "$user1_token" "$item1_id" "laptop-receipt.pdf"
attach_file_to_item "$user1_token" "$item1_id" "laptop-warranty.pdf"
attach_file_to_item "$user1_token" "$item2_id" "drill-manual.pdf"
attach_file_to_item "$user1_token" "$item3_id" "remote-guide.pdf"
attach_file_to_item "$user1_token" "$item4_id" "toolbox-inventory.txt"
# Add attachments for group 2 items
echo "Adding attachments to group 2 items..."
attach_file_to_item "$user6_token" "$item6_id" "monitor-receipt.pdf"
attach_file_to_item "$user6_token" "$item7_id" "keyboard-manual.pdf"
echo "=== Test Data Creation Complete ==="
echo "Test data file saved to: $TEST_DATA_FILE"
echo "Summary:"
echo " - Users created: 7 (5 in group 1, 2 in group 2)"
echo " - Locations created: 3"
echo " - Labels created: 3"
echo " - Notifiers created: 1"
echo " - Items created: 7"
echo " - Attachments created: 7"
# Display the test data file for verification
echo ""
echo "Test data:"
cat "$TEST_DATA_FILE" | jq '.'
exit 0

View File

@@ -1,6 +1,7 @@
name: Publish Release Binaries
on:
workflow_dispatch:
push:
tags: [ 'v*.*.*' ]
@@ -8,6 +9,12 @@ jobs:
goreleaser:
name: goreleaser
runs-on: ubuntu-latest
outputs:
hashes: ${{ steps.binary.outputs.hashes }}
permissions:
contents: write
packages: write
id-token: write
steps:
- name: Checkout
uses: actions/checkout@v4
@@ -36,7 +43,14 @@ jobs:
run: |
go install github.com/sigstore/cosign/cmd/cosign@latest
- name: Install Syft
working-directory: backend
run: |
go install github.com/anchore/syft/cmd/syft@latest
- name: Run GoReleaser
id: releaser
if: startsWith(github.ref, 'refs/tags/')
uses: goreleaser/goreleaser-action@v5
with:
workdir: "backend"
@@ -45,3 +59,75 @@ jobs:
args: release --clean
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
COSIGN_PWD: ${{ secrets.COSIGN_PWD }}
COSIGN_YES: "true"
- name: Generate binary hashes
if: startsWith(github.ref, 'refs/tags/')
id: binary
env:
ARTIFACTS: "${{ steps.releaser.outputs.artifacts }}"
run: |
set -euo pipefail
checksum_file=$(echo "$ARTIFACTS" | jq -r '.[] | select (.type=="Checksum") | .path')
echo "hashes=$(cat $checksum_file | base64 -w0)" >> "$GITHUB_OUTPUT"
- name: Run GoReleaser No Release
if: ${{ !startsWith(github.ref, 'refs/tags/') }}
uses: goreleaser/goreleaser-action@v5
with:
workdir: "backend"
distribution: goreleaser
version: "~> v2"
args: release --clean --snapshot --skip=publish
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
COSIGN_PWD: ${{ secrets.COSIGN_PWD }}
COSIGN_YES: "true"
binary-provenance:
if: startsWith(github.ref, 'refs/tags/')
needs: [ goreleaser ]
permissions:
actions: read # To read the workflow path.
id-token: write # To sign the provenance.
contents: write # To add assets to a release.
uses: slsa-framework/slsa-github-generator/.github/workflows/generator_generic_slsa3.yml@v1.9.0
with:
base64-subjects: "${{ needs.goreleaser.outputs.hashes }}"
upload-assets: true # upload to a new release
verification-with-slsa-verifier:
if: startsWith(github.ref, 'refs/tags/')
needs: [goreleaser, binary-provenance]
runs-on: ubuntu-latest
permissions: read-all
steps:
- name: Install the verifier
uses: slsa-framework/slsa-verifier/actions/installer@v2.4.0
- name: Download assets
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
PROVENANCE: "${{ needs.binary-provenance.outputs.provenance-name }}"
run: |
set -euo pipefail
gh -R "$GITHUB_REPOSITORY" release download "$GITHUB_REF_NAME" -p "*.tar.gz"
gh -R "$GITHUB_REPOSITORY" release download "$GITHUB_REF_NAME" -p "*.zip"
gh -R "$GITHUB_REPOSITORY" release download "$GITHUB_REF_NAME" -p "$PROVENANCE"
- name: Verify assets
env:
CHECKSUMS: ${{ needs.goreleaser.outputs.hashes }}
PROVENANCE: "${{ needs.binary-provenance.outputs.provenance-name }}"
run: |
set -euo pipefail
checksums=$(echo "$CHECKSUMS" | base64 -d)
while read -r line; do
fn=$(echo $line | cut -d ' ' -f2)
echo "Verifying $fn"
slsa-verifier verify-artifact --provenance-path "$PROVENANCE" \
--source-uri "github.com/$GITHUB_REPOSITORY" \
--source-tag "$GITHUB_REF_NAME" \
"$fn"
done <<<"$checksums"

View File

@@ -0,0 +1,52 @@
name: "Copilot Setup Steps"
# Automatically run the setup steps when they are changed to allow for easy validation, and
# allow manual testing through the repository's "Actions" tab
on:
workflow_dispatch:
push:
paths:
- .github/workflows/copilot-setup-steps.yml
pull_request:
paths:
- .github/workflows/copilot-setup-steps.yml
jobs:
# The job MUST be called `copilot-setup-steps` or it will not be picked up by Copilot.
copilot-setup-steps:
runs-on: ubuntu-latest
# Set the permissions to the lowest permissions possible needed for your steps.
# Copilot will be given its own token for its operations.
permissions:
# If you want to clone the repository as part of your setup steps, for example to install dependencies, you'll need the `contents: read` permission. If you don't clone the repository in your setup steps, Copilot will do this for you automatically after the steps complete.
contents: read
# You can define any steps you want, and they will run before the agent starts.
# If you do not check out your code, Copilot will do this for you.
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: "22"
- uses: pnpm/action-setup@v3.0.0
with:
version: 9.12.2
- name: Set up Go
uses: actions/setup-go@v5
with:
go-version: "1.24"
cache-dependency-path: backend/go.mod
- name: Install Task
uses: arduino/setup-task@v1
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
- name: Perform setup
run: task setup

View File

@@ -0,0 +1,248 @@
name: Docker publish hardened
on:
schedule:
- cron: '00 0 * * *'
push:
branches: [ "main" ]
paths:
- 'backend/**'
- 'frontend/**'
- 'Dockerfile.hardened'
- '.dockerignore'
- '.github/workflows/docker-publish-hardened.yaml'
tags: [ 'v*.*.*' ]
pull_request:
branches: [ "main" ]
paths:
- 'backend/**'
- 'frontend/**'
- 'Dockerfile.hardened'
- '.dockerignore'
- '.github/workflows/docker-publish-hardened.yaml'
permissions:
contents: read # Access to repository contents
packages: write # Write access for pushing to GHCR
id-token: write # Required for OIDC authentication (if used)
attestations: write # Required for signing and attestation (if needed)
env:
DOCKERHUB_REPO: sysadminsmedia/homebox
GHCR_REPO: ghcr.io/${{ github.repository }}
jobs:
build:
runs-on: ubuntu-latest
permissions:
contents: read
packages: write
id-token: write
attestations: write
strategy:
fail-fast: false
matrix:
platform:
- linux/amd64
- linux/arm64
- linux/arm/v7
steps:
- name: Enable Debug Logs
run: echo "##[debug]Enabling debug logging"
env:
ACTIONS_RUNNER_DEBUG: true
ACTIONS_STEP_DEBUG: true
- name: Checkout repository
uses: actions/checkout@v4
- name: Prepare
run: |
echo "BUILD_TIME=$(date -u +%Y-%m-%dT%H:%M:%SZ)" >> $GITHUB_ENV
platform=${{ matrix.platform }}
echo "PLATFORM_PAIR=${platform//\//-}" >> $GITHUB_ENV
branch=${{ github.event.pull_request.number || github.ref_name }}
echo "BRANCH=${branch//\//-}" >> $GITHUB_ENV
echo "DOCKERNAMES=${{ env.DOCKERHUB_REPO }},${{ env.GHCR_REPO }}" >> $GITHUB_ENV
if [[ "${{ github.event_name }}" != "schedule" ]] || [[ "${{ github.ref }}" != refs/tags/* ]]; then
echo "DOCKERNAMES=${{ env.GHCR_REPO }}" >> $GITHUB_ENV
fi
- name: Docker meta
id: meta
uses: docker/metadata-action@c1e51972afc2121e065aed6d45c65596fe445f3f
with:
images: |
name=${{ env.DOCKERHUB_REPO }},enable=${{ github.event_name == 'schedule' || startsWith(github.ref, 'refs/tags/') }}
name=${{ env.GHCR_REPO }}
- name: Login to Docker Hub
uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1
if: (github.event_name == 'schedule' || startsWith(github.ref, 'refs/tags/'))
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Login to GHCR
uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Set up QEMU
uses: docker/setup-qemu-action@29109295f81e9208d7d86ff1c6c12d2833863392
with:
image: ghcr.io/sysadminsmedia/binfmt:latest
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435
with:
driver-opts: |
image=ghcr.io/sysadminsmedia/buildkit:master
- name: Build and push by digest
id: build
uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83
with:
context: . # Explicitly specify the build context
file: ./Dockerfile.hardened # Explicitly specify the Dockerfile
platforms: ${{ matrix.platform }}
labels: ${{ steps.meta.outputs.labels }}
outputs: type=image,"name=${{ env.DOCKERNAMES }}",push-by-digest=true,name-canonical=true,push=${{ github.event_name != 'pull_request' }}
cache-from: type=registry,ref=ghcr.io/sysadminsmedia/devcache:${{ env.PLATFORM_PAIR }}-${{ env.BRANCH }}-hardened
cache-to: type=registry,ref=ghcr.io/sysadminsmedia/devcache:${{ env.PLATFORM_PAIR }}-${{ env.BRANCH }}-hardened,mode=max,ignore-error=true
build-args: |
VERSION=${{ github.ref_name }}
COMMIT=${{ github.sha }}
BUILD_TIME=${{ env.BUILD_TIME }}
provenance: mode=max
sbom: true
annotations: ${{ steps.meta.outputs.annotations }}
- name: Attest platform-specific images
uses: actions/attest-build-provenance@v1
if: github.event_name != 'pull_request'
with:
subject-name: ${{ env.GHCR_REPO }}
subject-digest: ${{ steps.build.outputs.digest }}
push-to-registry: true
- name: Export digest
run: |
mkdir -p /tmp/digests
digest="${{ steps.build.outputs.digest }}"
touch "/tmp/digests/${digest#sha256:}"
- name: Upload digest
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02
with:
name: digests-${{ env.PLATFORM_PAIR }}
path: /tmp/digests/*
if-no-files-found: error
retention-days: 1
merge:
if: github.event_name != 'pull_request'
runs-on: ubuntu-latest
permissions:
contents: read
packages: write
id-token: write
attestations: write
needs:
- build
steps:
- name: Download digests
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093
with:
path: /tmp/digests
pattern: digests-*
merge-multiple: true
- name: Login to Docker Hub
uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1
if: (github.event_name == 'schedule' || startsWith(github.ref, 'refs/tags/'))
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Login to GHCR
uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435
with:
driver-opts: |
image=ghcr.io/sysadminsmedia/buildkit:master
- name: Docker meta
id: meta
uses: docker/metadata-action@c1e51972afc2121e065aed6d45c65596fe445f3f
with:
images: |
name=${{ env.DOCKERHUB_REPO }},enable=${{ github.event_name == 'schedule' || startsWith(github.ref, 'refs/tags/') }}
name=${{ env.GHCR_REPO }}
tags: |
type=ref,event=branch
type=ref,event=pr
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=semver,pattern={{major}}
type=schedule,pattern=nightly
flavor: |
suffix=-hardened,onlatest=true
- name: Create manifest list and push GHCR
id: push-ghcr
working-directory: /tmp/digests
run: |
set -euo pipefail
docker buildx imagetools create $(jq -cr '.tags | map("-t " + .) | join(" ")' <<< "$DOCKER_METADATA_OUTPUT_JSON") \
$(printf '${{ env.GHCR_REPO }}@sha256:%s ' *) 2>&1 | tee /tmp/push-ghcr.out
digest=$(grep -oE 'sha256:[a-f0-9]{64}' /tmp/push-ghcr.out | head -n1 || true)
if [ -z "$digest" ]; then
echo "No digest found in imagetools output:"
cat /tmp/push-ghcr.out
exit 1
fi
echo "digest=$digest" >> $GITHUB_OUTPUT
- name: Attest GHCR images
uses: actions/attest-build-provenance@v1
if: github.event_name != 'pull_request'
with:
subject-name: ${{ env.GHCR_REPO }}
subject-digest: ${{ steps.push-ghcr.outputs.digest }}
push-to-registry: true
- name: Create manifest list and push Dockerhub
id: push-dockerhub
working-directory: /tmp/digests
if: (github.event_name == 'schedule' || startsWith(github.ref, 'refs/tags/'))
run: |
set -euo pipefail
docker buildx imagetools create $(jq -cr '.tags | map("-t " + .) | join(" ")' <<< "$DOCKER_METADATA_OUTPUT_JSON") \
$(printf '${{ env.DOCKERHUB_REPO }}@sha256:%s ' *) 2>&1 | tee /tmp/push-dockerhub.out
digest=$(grep -oE 'sha256:[a-f0-9]{64}' /tmp/push-dockerhub.out | head -n1 || true)
if [ -z "$digest" ]; then
echo "No digest found in imagetools output:"
cat /tmp/push-dockerhub.out
exit 1
fi
echo "digest=$digest" >> $GITHUB_OUTPUT
- name: Attest Dockerhub images
uses: actions/attest-build-provenance@v1
if: (github.event_name == 'schedule' || startsWith(github.ref, 'refs/tags/'))
with:
subject-name: docker.io/${{ env.DOCKERHUB_REPO }}
subject-digest: ${{ steps.push-dockerhub.outputs.digest }}
push-to-registry: true

View File

@@ -8,7 +8,7 @@ on:
paths:
- 'backend/**'
- 'frontend/**'
- 'Dockerfile'
- 'Dockerfile.rootless'
- '.dockerignore'
- '.github/workflows/docker-publish-rootless.yaml'
ignore:
@@ -19,7 +19,7 @@ on:
paths:
- 'backend/**'
- 'frontend/**'
- 'Dockerfile'
- 'Dockerfile.rootless'
- '.dockerignore'
- '.github/workflows/docker-publish-rootless.yaml'
ignore:
@@ -83,7 +83,7 @@ jobs:
- name: Login to Docker Hub
uses: docker/login-action@v3
if: (github.event_name == 'schedule' || startsWith(github.ref, 'refs/tags/')) && secrets.DOCKER_USERNAME != ''
if: (github.event_name == 'schedule' || startsWith(github.ref, 'refs/tags/'))
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
@@ -98,13 +98,13 @@ jobs:
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
with:
image: ghcr.io/amitie10g/binfmt:latest
image: ghcr.io/sysadminsmedia/binfmt:latest
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
driver-opts: |
image=ghcr.io/amitie10g/buildkit:master
image=ghcr.io/sysadminsmedia/buildkit:master
- name: Build and push by digest
id: build
@@ -120,10 +120,18 @@ jobs:
build-args: |
VERSION=${{ github.ref_name }}
COMMIT=${{ github.sha }}
provenance: true
provenance: mode=max
sbom: true
annotations: ${{ steps.meta.outputs.annotations }}
- name: Attest platform-specific images
uses: actions/attest-build-provenance@v1
if: github.event_name != 'pull_request'
with:
subject-name: ${{ env.GHCR_REPO }}
subject-digest: ${{ steps.build.outputs.digest }}
push-to-registry: true
- name: Export digest
run: |
mkdir -p /tmp/digests
@@ -159,7 +167,7 @@ jobs:
- name: Login to Docker Hub
uses: docker/login-action@v3
if: (github.event_name == 'schedule' || startsWith(github.ref, 'refs/tags/')) && secrets.DOCKER_USERNAME != ''
if: (github.event_name == 'schedule' || startsWith(github.ref, 'refs/tags/'))
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
@@ -175,7 +183,7 @@ jobs:
uses: docker/setup-buildx-action@v3
with:
driver-opts: |
image=ghcr.io/amitie10g/buildkit:master
image=ghcr.io/sysadminsmedia/buildkit:master
- name: Docker meta
id: meta
@@ -198,13 +206,45 @@ jobs:
id: push-ghcr
working-directory: /tmp/digests
run: |
set -euo pipefail
docker buildx imagetools create $(jq -cr '.tags | map("-t " + .) | join(" ")' <<< "$DOCKER_METADATA_OUTPUT_JSON") \
$(printf '${{ env.GHCR_REPO }}@sha256:%s ' *)
$(printf '${{ env.GHCR_REPO }}@sha256:%s ' *) 2>&1 | tee /tmp/push-ghcr.out
digest=$(grep -oE 'sha256:[a-f0-9]{64}' /tmp/push-ghcr.out | head -n1 || true)
if [ -z "$digest" ]; then
echo "No digest found in imagetools output:"
cat /tmp/push-ghcr.out
exit 1
fi
echo "digest=$digest" >> $GITHUB_OUTPUT
- name: Attest GHCR images
uses: actions/attest-build-provenance@v1
if: github.event_name != 'pull_request'
with:
subject-name: ${{ env.GHCR_REPO }}
subject-digest: ${{ steps.push-ghcr.outputs.digest }}
push-to-registry: true
- name: Create manifest list and push Dockerhub
id: push-dockerhub
working-directory: /tmp/digests
if: (github.event_name == 'schedule' || startsWith(github.ref, 'refs/tags/')) && secrets.DOCKER_USERNAME != ''
if: (github.event_name == 'schedule' || startsWith(github.ref, 'refs/tags/'))
run: |
set -euo pipefail
docker buildx imagetools create $(jq -cr '.tags | map("-t " + .) | join(" ")' <<< "$DOCKER_METADATA_OUTPUT_JSON") \
$(printf '${{ env.DOCKERHUB_REPO }}@sha256:%s ' *)
$(printf '${{ env.DOCKERHUB_REPO }}@sha256:%s ' *) 2>&1 | tee /tmp/push-dockerhub.out
digest=$(grep -oE 'sha256:[a-f0-9]{64}' /tmp/push-dockerhub.out | head -n1 || true)
if [ -z "$digest" ]; then
echo "No digest found in imagetools output:"
cat /tmp/push-dockerhub.out
exit 1
fi
echo "digest=$digest" >> $GITHUB_OUTPUT
- name: Attest Dockerhub images
uses: actions/attest-build-provenance@v1
if: (github.event_name == 'schedule' || startsWith(github.ref, 'refs/tags/'))
with:
subject-name: docker.io/${{ env.DOCKERHUB_REPO }}
subject-digest: ${{ steps.push-dockerhub.outputs.digest }}
push-to-registry: true

View File

@@ -78,7 +78,7 @@ jobs:
- name: Login to Docker Hub
uses: docker/login-action@v3
if: (github.event_name == 'schedule' || startsWith(github.ref, 'refs/tags/')) && secrets.DOCKER_USERNAME != ''
if: (github.event_name == 'schedule' || startsWith(github.ref, 'refs/tags/'))
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
@@ -93,13 +93,13 @@ jobs:
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
with:
image: ghcr.io/amitie10g/binfmt:latest
image: ghcr.io/sysadminsmedia/binfmt:latest
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
driver-opts: |
image=ghcr.io/amitie10g/buildkit:master
image=ghcr.io/sysadminsmedia/buildkit:latest
- name: Build and push by digest
id: build
@@ -113,10 +113,18 @@ jobs:
build-args: |
VERSION=${{ github.ref_name }}
COMMIT=${{ github.sha }}
provenance: true
provenance: mode=max
sbom: true
annotations: ${{ steps.meta.outputs.annotations }}
- name: Attest platform-specific images
uses: actions/attest-build-provenance@v1
if: github.event_name != 'pull_request'
with:
subject-name: ${{ env.GHCR_REPO }}
subject-digest: ${{ steps.build.outputs.digest }}
push-to-registry: true
- name: Export digest
run: |
mkdir -p /tmp/digests
@@ -152,7 +160,7 @@ jobs:
- name: Login to Docker Hub
uses: docker/login-action@v3
if: secrets.DOCKER_USERNAME != ''
if: (github.event_name == 'schedule' || startsWith(github.ref, 'refs/tags/'))
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
@@ -168,7 +176,7 @@ jobs:
uses: docker/setup-buildx-action@v3
with:
driver-opts: |
image=ghcr.io/amitie10g/buildkit:master
image=ghcr.io/sysadminsmedia/buildkit:master
- name: Docker meta
id: meta
@@ -189,13 +197,45 @@ jobs:
id: push-ghcr
working-directory: /tmp/digests
run: |
set -euo pipefail
docker buildx imagetools create $(jq -cr '.tags | map("-t " + .) | join(" ")' <<< "$DOCKER_METADATA_OUTPUT_JSON") \
$(printf '${{ env.GHCR_REPO }}@sha256:%s ' *)
$(printf '${{ env.GHCR_REPO }}@sha256:%s ' *) 2>&1 | tee /tmp/push-ghcr.out
digest=$(grep -oE 'sha256:[a-f0-9]{64}' /tmp/push-ghcr.out | head -n1 || true)
if [ -z "$digest" ]; then
echo "No digest found in imagetools output:"
cat /tmp/push-ghcr.out
exit 1
fi
echo "digest=$digest" >> $GITHUB_OUTPUT
- name: Attest GHCR images
uses: actions/attest-build-provenance@v1
if: github.event_name != 'pull_request'
with:
subject-name: ${{ env.GHCR_REPO }}
subject-digest: ${{ steps.push-ghcr.outputs.digest }}
push-to-registry: true
- name: Create manifest list and push Dockerhub
id: push-dockerhub
working-directory: /tmp/digests
if: (github.event_name == 'schedule' || startsWith(github.ref, 'refs/tags/')) && secrets.DOCKER_USERNAME != ''
if: (github.event_name == 'schedule' || startsWith(github.ref, 'refs/tags/'))
run: |
set -euo pipefail
docker buildx imagetools create $(jq -cr '.tags | map("-t " + .) | join(" ")' <<< "$DOCKER_METADATA_OUTPUT_JSON") \
$(printf '${{ env.DOCKERHUB_REPO }}@sha256:%s ' *)
$(printf '${{ env.DOCKERHUB_REPO }}@sha256:%s ' *) 2>&1 | tee /tmp/push-dockerhub.out
digest=$(grep -oE 'sha256:[a-f0-9]{64}' /tmp/push-dockerhub.out | head -n1 || true)
if [ -z "$digest" ]; then
echo "No digest found in imagetools output:"
cat /tmp/push-dockerhub.out
exit 1
fi
echo "digest=$digest" >> $GITHUB_OUTPUT
- name: Attest Dockerhub images
uses: actions/attest-build-provenance@v1
if: (github.event_name == 'schedule' || startsWith(github.ref, 'refs/tags/'))
with:
subject-name: docker.io/${{ env.DOCKERHUB_REPO }}
subject-digest: ${{ steps.push-dockerhub.outputs.digest }}
push-to-registry: true

View File

@@ -9,7 +9,10 @@ on:
paths:
- 'backend/**'
- 'frontend/**'
- '.github/workflows/**'
- '.github/workflows/partial-backend.yaml'
- '.github/workflows/partial-frontend.yaml'
- '.github/workflows/e2e-partial.yaml'
- '.github/workflows/pull-requests.yaml'
jobs:
backend-tests:

View File

@@ -5,18 +5,22 @@ on:
branches: [ main ]
workflow_dispatch:
permissions:
contents: write
pull-requests: write
jobs:
update-currencies:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Set up Python
uses: actions/setup-python@v4
uses: actions/setup-python@v5
with:
python-version: '3.8'
cache: 'pip'
@@ -25,15 +29,14 @@ jobs:
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install requests
pip install -r .github/workflows/update-currencies/requirements.txt
- name: Run currency update script
run: python .github/scripts/update_currencies.py
- name: Check for file changes
id: changes
- name: Check for currencies.json changes
run: |
if git diff --quiet; then
if git diff --quiet -- backend/internal/core/currencies/currencies.json; then
echo "changed=false" >> $GITHUB_ENV
else
echo "changed=true" >> $GITHUB_ENV
@@ -41,14 +44,16 @@ jobs:
- name: Create Pull Request
if: env.changed == 'true'
uses: peter-evans/create-pull-request@v7.0.8
uses: peter-evans/create-pull-request@v7
with:
token: ${{ secrets.GITHUB_TOKEN }}
branch: update-currencies
base: main
title: "Update currencies.json"
commit-message: "chore: update currencies.json"
path: backend/internal/core/currencies/currencies.json
path: .
add-paths: |
backend/internal/core/currencies/currencies.json
- name: No updates needed
if: env.changed == 'false'

177
.github/workflows/upgrade-test.yaml vendored Normal file
View File

@@ -0,0 +1,177 @@
name: HomeBox Upgrade Test
on:
schedule:
# Run daily at 2 AM UTC
- cron: '0 2 * * *'
workflow_dispatch: # Allow manual trigger
push:
branches:
- main
paths:
- '.github/workflows/upgrade-test.yaml'
- '.github/scripts/upgrade-test/**'
jobs:
upgrade-test:
name: Test Upgrade Path
runs-on: ubuntu-latest
timeout-minutes: 60
permissions:
contents: read # Read repository contents
packages: read # Pull Docker images from GHCR
steps:
- name: Checkout repository
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: lts/*
- name: Install pnpm
uses: pnpm/action-setup@v3.0.0
with:
version: 9.12.2
- name: Install Playwright
run: |
cd frontend
pnpm install
pnpm exec playwright install --with-deps chromium
- name: Create test data directory
run: |
mkdir -p /tmp/homebox-data-old
mkdir -p /tmp/homebox-data-new
chmod -R 777 /tmp/homebox-data-old
chmod -R 777 /tmp/homebox-data-new
# Step 1: Pull and deploy latest stable version
- name: Pull latest stable HomeBox image
run: |
docker pull ghcr.io/sysadminsmedia/homebox:latest
- name: Start HomeBox (stable version)
run: |
docker run -d \
--name homebox-old \
--restart unless-stopped \
-p 7745:7745 \
-e HBOX_LOG_LEVEL=debug \
-e HBOX_OPTIONS_ALLOW_REGISTRATION=true \
-e TZ=UTC \
-v /tmp/homebox-data-old:/data \
ghcr.io/sysadminsmedia/homebox:latest
# Wait for the service to be ready
timeout 60 bash -c 'until curl -f http://localhost:7745/api/v1/status; do sleep 2; done'
echo "HomeBox stable version is ready"
# Step 2: Create test data
- name: Create test data
run: |
chmod +x .github/scripts/upgrade-test/create-test-data.sh
.github/scripts/upgrade-test/create-test-data.sh
env:
HOMEBOX_URL: http://localhost:7745
- name: Verify initial data creation
run: |
echo "Verifying test data was created..."
# Check if database file exists and has content
if [ -f /tmp/homebox-data-old/homebox.db ]; then
ls -lh /tmp/homebox-data-old/homebox.db
echo "Database file exists"
else
echo "Database file not found!"
exit 1
fi
- name: Stop old HomeBox instance
run: |
docker stop homebox-old
docker rm homebox-old
# Step 3: Build latest version from main branch
- name: Build HomeBox from main branch
run: |
docker build \
--build-arg VERSION=main \
--build-arg COMMIT=${{ github.sha }} \
--build-arg BUILD_TIME=$(date -u +"%Y-%m-%dT%H:%M:%SZ") \
-t homebox:test \
-f Dockerfile \
.
# Step 4: Copy data and start new version
- name: Copy data to new location
run: |
cp -r /tmp/homebox-data-old/* /tmp/homebox-data-new/
chmod -R 777 /tmp/homebox-data-new
- name: Start HomeBox (new version)
run: |
docker run -d \
--name homebox-new \
--restart unless-stopped \
-p 7745:7745 \
-e HBOX_LOG_LEVEL=debug \
-e HBOX_OPTIONS_ALLOW_REGISTRATION=true \
-e TZ=UTC \
-v /tmp/homebox-data-new:/data \
homebox:test
# Wait for the service to be ready
timeout 60 bash -c 'until curl -f http://localhost:7745/api/v1/status; do sleep 2; done'
echo "HomeBox new version is ready"
# Step 5: Run verification tests with Playwright
- name: Run verification tests
run: |
cd frontend
TEST_DATA_FILE=/tmp/test-users.json \
E2E_BASE_URL=http://localhost:7745 \
pnpm exec playwright test \
-c ./test/playwright.config.ts \
--project=chromium \
test/e2e/upgrade-verification.spec.ts
env:
HOMEBOX_URL: http://localhost:7745
- name: Upload Playwright report
uses: actions/upload-artifact@v4
if: always()
with:
name: playwright-report-upgrade-test
path: frontend/playwright-report/
retention-days: 30
- name: Upload test traces
uses: actions/upload-artifact@v4
if: always()
with:
name: playwright-traces
path: frontend/test-results/
retention-days: 7
- name: Collect logs on failure
if: failure()
run: |
echo "=== Docker logs for new version ==="
docker logs homebox-new || true
echo "=== Database content ==="
ls -la /tmp/homebox-data-new/ || true
- name: Cleanup
if: always()
run: |
docker stop homebox-new || true
docker rm homebox-new || true
docker rmi homebox:test || true

2
.gitignore vendored
View File

@@ -60,7 +60,7 @@ backend/app/api/static/public/*
backend/api
docs/.vitepress/cache/
/.data/
.data/
# Playwright
frontend/test-results/

603
.gitlab-ci.yml Normal file
View File

@@ -0,0 +1,603 @@
include:
- template: Jobs/SAST.gitlab-ci.yml
- component: $CI_SERVER_FQDN/components/code-quality-oss/codequality-os-scanners-integration/codequality-oss@1.1.4
- component: $CI_SERVER_FQDN/components/code-intelligence/golang-code-intel@v0.3.1
- component: $CI_SERVER_FQDN/components/code-intelligence/typescript-code-intel@v0.3.1
inputs:
node_version: 24
- component: $CI_SERVER_FQDN/components/secret-detection/secret-detection@2.1.0
variables:
GITLAB_ADVANCED_SAST_ENABLED: 'true'
ADVANCED_SAST_PARTIAL_SCAN: 'differential'
DOCKER_DRIVER: overlay2
DOCKER_TLS_CERTDIR: "/certs"
# Registry configuration - adjust as needed
CI_REGISTRY_IMAGE: $CI_REGISTRY/$CI_PROJECT_PATH
stages:
- test
- build-binaries
- build-docker
- release
# ==========================================
# Test Jobs
# ==========================================
# Backend Tests (Go)
test:backend:
stage: test
image: golang:1.24
cache:
key:
files:
- backend/go.sum
paths:
- backend/.go-pkg-cache/
policy: pull-push
before_script:
- export GOMODCACHE=$(pwd)/backend/.go-pkg-cache
# Install Task
- sh -c "$(curl --location https://taskfile.dev/install.sh)" -- -d -b /usr/local/bin
script:
- cd backend
- task go:lint
- task go:build
- task go:coverage
coverage: '/coverage: \d+.\d+% of statements/'
artifacts:
reports:
coverage_report:
coverage_format: cobertura
path: backend/coverage.out
paths:
- backend/coverage.out
expire_in: 7 days
rules:
- if: $CI_PIPELINE_SOURCE == "merge_request_event"
- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
# Frontend Lint and Typecheck
test:frontend:lint:
stage: test
image: node:22-alpine
cache:
key:
files:
- frontend/pnpm-lock.yaml
paths:
- frontend/node_modules/
- .pnpm-store/
policy: pull-push
before_script:
- npm install -g pnpm@9.15.3
- pnpm config set store-dir $(pwd)/.pnpm-store
script:
- cd frontend
- pnpm install --frozen-lockfile
- pnpm run lint:ci
- pnpm run typecheck
rules:
- if: $CI_PIPELINE_SOURCE == "merge_request_event"
- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
# Frontend Integration Tests (SQLite)
test:frontend:integration:
stage: test
image: node:22
cache:
- key:
files:
- frontend/pnpm-lock.yaml
paths:
- frontend/node_modules/
- .pnpm-store/
policy: pull-push
- key:
files:
- backend/go.sum
paths:
- backend/.go-pkg-cache/
policy: pull-push
before_script:
- npm install -g pnpm@9.15.3
- pnpm config set store-dir $(pwd)/.pnpm-store
# Install Task
- sh -c "$(curl --location https://taskfile.dev/install.sh)" -- -d -b /usr/local/bin
# Install Go
- wget -q https://go.dev/dl/go1.24.0.linux-amd64.tar.gz
- tar -C /usr/local -xzf go1.24.0.linux-amd64.tar.gz
- export PATH=$PATH:/usr/local/go/bin
- export GOMODCACHE=$(pwd)/backend/.go-pkg-cache
script:
- cd frontend
- pnpm install --frozen-lockfile
- cd ..
- task test:ci
rules:
- if: $CI_PIPELINE_SOURCE == "merge_request_event"
- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
# Frontend Integration Tests (PostgreSQL Matrix)
test:frontend:integration:postgresql:
stage: test
image: node:22
services:
- name: postgres:${POSTGRES_VERSION}
alias: postgres
variables:
POSTGRES_USER: homebox
POSTGRES_PASSWORD: homebox
POSTGRES_DB: homebox
POSTGRES_HOST_AUTH_METHOD: trust
parallel:
matrix:
- POSTGRES_VERSION: ["17", "16", "15"]
cache:
- key:
files:
- frontend/pnpm-lock.yaml
paths:
- frontend/node_modules/
- .pnpm-store/
policy: pull-push
- key:
files:
- backend/go.sum
paths:
- backend/.go-pkg-cache/
policy: pull-push
before_script:
- npm install -g pnpm@9.15.3
- pnpm config set store-dir $(pwd)/.pnpm-store
# Install Task
- sh -c "$(curl --location https://taskfile.dev/install.sh)" -- -d -b /usr/local/bin
# Install Go
- wget -q https://go.dev/dl/go1.24.0.linux-amd64.tar.gz
- tar -C /usr/local -xzf go1.24.0.linux-amd64.tar.gz
- export PATH=$PATH:/usr/local/go/bin
- export GOMODCACHE=$(pwd)/backend/.go-pkg-cache
script:
- cd frontend
- pnpm install --frozen-lockfile
- cd ..
- task test:ci:postgresql
rules:
- if: $CI_PIPELINE_SOURCE == "merge_request_event"
- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
# E2E Tests (Playwright) - Sharded
test:e2e:playwright:
stage: test
image: mcr.microsoft.com/playwright:v1.48.2-jammy
timeout: 1h
parallel:
matrix:
- SHARD_INDEX: ["1", "2", "3", "4"]
SHARD_TOTAL: "4"
cache:
- key:
files:
- frontend/pnpm-lock.yaml
paths:
- frontend/node_modules/
- .pnpm-store/
policy: pull-push
- key:
files:
- backend/go.sum
paths:
- backend/.go-pkg-cache/
policy: pull-push
before_script:
- npm install -g pnpm@9.15.3
- pnpm config set store-dir $(pwd)/.pnpm-store
# Install Task
- sh -c "$(curl --location https://taskfile.dev/install.sh)" -- -d -b /usr/local/bin
# Install Go
- wget -q https://go.dev/dl/go1.24.0.linux-amd64.tar.gz
- tar -C /usr/local -xzf go1.24.0.linux-amd64.tar.gz
- export PATH=$PATH:/usr/local/go/bin
- export GOMODCACHE=$(pwd)/backend/.go-pkg-cache
script:
- cd frontend
- pnpm install --frozen-lockfile
- cd ..
- cd backend && go mod download
- task test:e2e -- --shard=$SHARD_INDEX/$SHARD_TOTAL
artifacts:
when: always
paths:
- frontend/blob-report/
expire_in: 2 days
rules:
- if: $CI_PIPELINE_SOURCE == "merge_request_event"
- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
# E2E Reports Merge
test:e2e:merge-reports:
stage: test
image: mcr.microsoft.com/playwright:v1.48.2-jammy
needs:
- test:e2e:playwright
cache:
key:
files:
- frontend/pnpm-lock.yaml
paths:
- frontend/node_modules/
- .pnpm-store/
policy: pull
before_script:
- npm install -g pnpm@9.15.3
- pnpm config set store-dir $(pwd)/.pnpm-store
script:
- cd frontend
- pnpm install --frozen-lockfile
# Download all blob reports
- mkdir -p all-blob-reports
# GitLab automatically downloads artifacts from dependencies
- cp -r ../frontend/blob-report/* all-blob-reports/ || true
- pnpm exec playwright merge-reports --reporter html,github ./all-blob-reports || true
artifacts:
when: always
paths:
- frontend/playwright-report/
expire_in: 30 days
rules:
- if: $CI_PIPELINE_SOURCE == "merge_request_event"
when: always
- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
when: always
# Update Currencies (Scheduled Job)
update:currencies:
stage: test
image: python:3.11
cache:
key: python-currencies
paths:
- .pip-cache/
before_script:
- pip install --cache-dir .pip-cache -r .github/workflows/update-currencies/requirements.txt
script:
- python .github/scripts/update_currencies.py
- |
if git diff --quiet -- backend/internal/core/currencies/currencies.json; then
echo "✅ currencies.json is already up-to-date"
exit 0
else
echo "Changes detected in currencies.json"
git config user.name "GitLab CI"
git config user.email "ci@gitlab.com"
git checkout -b update-currencies-$CI_COMMIT_SHORT_SHA
git add backend/internal/core/currencies/currencies.json
git commit -m "chore: update currencies.json"
git push -o merge_request.create -o merge_request.target=$CI_DEFAULT_BRANCH -o merge_request.title="Update currencies.json" origin update-currencies-$CI_COMMIT_SHORT_SHA
fi
rules:
- if: $CI_PIPELINE_SOURCE == "schedule" && $UPDATE_CURRENCIES == "true"
- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH && $UPDATE_CURRENCIES == "true"
# ==========================================
# Binary Build with GoReleaser
# ==========================================
build:binaries:
stage: build-binaries
image: golang:1.24
cache:
- key:
files:
- frontend/pnpm-lock.yaml
paths:
- frontend/node_modules/
- .pnpm-store/
policy: pull-push
- key:
files:
- backend/go.sum
paths:
- backend/.go-pkg-cache/
policy: pull-push
before_script:
# Install Node.js and pnpm for frontend build
- curl -fsSL https://deb.nodesource.com/setup_22.x | bash -
- apt-get install -y nodejs
- npm install -g pnpm@9.15.3
# Configure pnpm store
- pnpm config set store-dir $(pwd)/.pnpm-store
# Install GoReleaser
- curl -sfL https://goreleaser.com/static/run | bash -s -- check
- curl -sfL https://goreleaser.com/static/run | bash -s -- --version
# Configure Go cache
- export GOMODCACHE=$(pwd)/backend/.go-pkg-cache
script:
# Build frontend
- cd frontend
- pnpm install --frozen-lockfile
- pnpm run build
- cp -r ./.output/public ../backend/app/api/static/
- cd ..
# Run GoReleaser
- cd backend
- |
if [ -n "$CI_COMMIT_TAG" ]; then
echo "Building release for tag: $CI_COMMIT_TAG"
curl -sfL https://goreleaser.com/static/run | bash -s -- release --clean --skip=publish
else
echo "Building snapshot"
curl -sfL https://goreleaser.com/static/run | bash -s -- release --clean --snapshot --skip=publish
fi
artifacts:
name: "homebox-binaries-$CI_COMMIT_REF_SLUG"
paths:
- backend/dist/
expire_in: 30 days
rules:
- if: $CI_COMMIT_TAG
- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
- if: $CI_PIPELINE_SOURCE == "merge_request_event"
# ==========================================
# Docker Build Jobs - Regular
# ==========================================
.docker_build_template:
stage: build-docker
image: docker:latest
services:
- docker:dind
before_script:
- docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
variables:
DOCKER_BUILDKIT: 1
DOCKERFILE: Dockerfile
IMAGE_SUFFIX: ""
script:
- export VERSION=${CI_COMMIT_TAG:-$CI_COMMIT_REF_NAME}
- export COMMIT=$CI_COMMIT_SHA
- export BUILD_TIME=$(date -u +%Y-%m-%dT%H:%M:%SZ)
- export CACHE_TAG=${IMAGE_SUFFIX:-regular}
# Build and push for the specific platform with layer caching
- |
docker buildx create --use --name builder-${CI_JOB_ID} || true
docker buildx build \
--platform $PLATFORM \
--build-arg VERSION=$VERSION \
--build-arg COMMIT=$COMMIT \
--build-arg BUILD_TIME=$BUILD_TIME \
--cache-from type=registry,ref=$CI_REGISTRY_IMAGE/cache:${PLATFORM_PAIR}-${CACHE_TAG}-$CI_COMMIT_REF_SLUG \
--cache-from type=registry,ref=$CI_REGISTRY_IMAGE/cache:${PLATFORM_PAIR}-${CACHE_TAG}-$CI_DEFAULT_BRANCH \
--cache-to type=registry,ref=$CI_REGISTRY_IMAGE/cache:${PLATFORM_PAIR}-${CACHE_TAG}-$CI_COMMIT_REF_SLUG,mode=max \
--file ./$DOCKERFILE \
--tag $CI_REGISTRY_IMAGE${IMAGE_SUFFIX}:${CI_COMMIT_REF_SLUG}-${PLATFORM_PAIR} \
--push \
.
docker buildx rm builder-${CI_JOB_ID} || true
rules:
- if: $CI_COMMIT_TAG
- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
- if: $CI_PIPELINE_SOURCE == "merge_request_event"
docker:build:amd64:
extends: .docker_build_template
variables:
PLATFORM: linux/amd64
PLATFORM_PAIR: linux-amd64
docker:build:arm64:
extends: .docker_build_template
variables:
PLATFORM: linux/arm64
PLATFORM_PAIR: linux-arm64
docker:build:armv7:
extends: .docker_build_template
variables:
PLATFORM: linux/arm/v7
PLATFORM_PAIR: linux-arm-v7
# ==========================================
# Docker Build Jobs - Rootless
# ==========================================
docker:build:rootless:amd64:
extends: .docker_build_template
variables:
PLATFORM: linux/amd64
PLATFORM_PAIR: linux-amd64
DOCKERFILE: Dockerfile.rootless
IMAGE_SUFFIX: -rootless
docker:build:rootless:arm64:
extends: .docker_build_template
variables:
PLATFORM: linux/arm64
PLATFORM_PAIR: linux-arm64
DOCKERFILE: Dockerfile.rootless
IMAGE_SUFFIX: -rootless
docker:build:rootless:armv7:
extends: .docker_build_template
variables:
PLATFORM: linux/arm/v7
PLATFORM_PAIR: linux-arm-v7
DOCKERFILE: Dockerfile.rootless
IMAGE_SUFFIX: -rootless
# ==========================================
# Docker Build Jobs - Hardened
# ==========================================
docker:build:hardened:amd64:
extends: .docker_build_template
variables:
PLATFORM: linux/amd64
PLATFORM_PAIR: linux-amd64
DOCKERFILE: Dockerfile.hardened
IMAGE_SUFFIX: -hardened
docker:build:hardened:arm64:
extends: .docker_build_template
variables:
PLATFORM: linux/arm64
PLATFORM_PAIR: linux-arm64
DOCKERFILE: Dockerfile.hardened
IMAGE_SUFFIX: -hardened
docker:build:hardened:armv7:
extends: .docker_build_template
variables:
PLATFORM: linux/arm/v7
PLATFORM_PAIR: linux-arm-v7
DOCKERFILE: Dockerfile.hardened
IMAGE_SUFFIX: -hardened
# ==========================================
# Docker Manifest Creation - Regular
# ==========================================
docker:manifest:
stage: release
image: docker:latest
services:
- docker:dind
before_script:
- docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
script:
- export VERSION=${CI_COMMIT_TAG:-$CI_COMMIT_REF_NAME}
# Create manifest for regular image
- |
docker manifest create $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG \
$CI_REGISTRY_IMAGE:${CI_COMMIT_REF_SLUG}-linux-amd64 \
$CI_REGISTRY_IMAGE:${CI_COMMIT_REF_SLUG}-linux-arm64 \
$CI_REGISTRY_IMAGE:${CI_COMMIT_REF_SLUG}-linux-arm-v7
docker manifest push $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
# Tag as latest on main branch or create version tags
- |
if [ "$CI_COMMIT_BRANCH" = "$CI_DEFAULT_BRANCH" ]; then
docker manifest create $CI_REGISTRY_IMAGE:latest \
$CI_REGISTRY_IMAGE:${CI_COMMIT_REF_SLUG}-linux-amd64 \
$CI_REGISTRY_IMAGE:${CI_COMMIT_REF_SLUG}-linux-arm64 \
$CI_REGISTRY_IMAGE:${CI_COMMIT_REF_SLUG}-linux-arm-v7
docker manifest push $CI_REGISTRY_IMAGE:latest
fi
- |
if [ -n "$CI_COMMIT_TAG" ]; then
# Create version tag
docker manifest create $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG \
$CI_REGISTRY_IMAGE:${CI_COMMIT_REF_SLUG}-linux-amd64 \
$CI_REGISTRY_IMAGE:${CI_COMMIT_REF_SLUG}-linux-arm64 \
$CI_REGISTRY_IMAGE:${CI_COMMIT_REF_SLUG}-linux-arm-v7
docker manifest push $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG
# Create major.minor tag if semantic version
MAJOR_MINOR=$(echo $CI_COMMIT_TAG | sed -E 's/^v?([0-9]+\.[0-9]+)\..*/\1/')
if [ -n "$MAJOR_MINOR" ]; then
docker manifest create $CI_REGISTRY_IMAGE:$MAJOR_MINOR \
$CI_REGISTRY_IMAGE:${CI_COMMIT_REF_SLUG}-linux-amd64 \
$CI_REGISTRY_IMAGE:${CI_COMMIT_REF_SLUG}-linux-arm64 \
$CI_REGISTRY_IMAGE:${CI_COMMIT_REF_SLUG}-linux-arm-v7
docker manifest push $CI_REGISTRY_IMAGE:$MAJOR_MINOR
fi
fi
needs:
- docker:build:amd64
- docker:build:arm64
- docker:build:armv7
rules:
- if: $CI_COMMIT_TAG
- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
# ==========================================
# Docker Manifest Creation - Rootless
# ==========================================
docker:manifest:rootless:
stage: release
image: docker:latest
services:
- docker:dind
before_script:
- docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
script:
- export VERSION=${CI_COMMIT_TAG:-$CI_COMMIT_REF_NAME}
# Create manifest for rootless image
- |
docker manifest create $CI_REGISTRY_IMAGE-rootless:$CI_COMMIT_REF_SLUG \
$CI_REGISTRY_IMAGE-rootless:${CI_COMMIT_REF_SLUG}-linux-amd64 \
$CI_REGISTRY_IMAGE-rootless:${CI_COMMIT_REF_SLUG}-linux-arm64
docker manifest push $CI_REGISTRY_IMAGE-rootless:$CI_COMMIT_REF_SLUG
# Tag as latest on main branch or create version tags
- |
if [ "$CI_COMMIT_BRANCH" = "$CI_DEFAULT_BRANCH" ]; then
docker manifest create $CI_REGISTRY_IMAGE-rootless:latest \
$CI_REGISTRY_IMAGE-rootless:${CI_COMMIT_REF_SLUG}-linux-amd64 \
$CI_REGISTRY_IMAGE-rootless:${CI_COMMIT_REF_SLUG}-linux-arm64
docker manifest push $CI_REGISTRY_IMAGE-rootless:latest
fi
- |
if [ -n "$CI_COMMIT_TAG" ]; then
docker manifest create $CI_REGISTRY_IMAGE-rootless:$CI_COMMIT_TAG \
$CI_REGISTRY_IMAGE-rootless:${CI_COMMIT_REF_SLUG}-linux-amd64 \
$CI_REGISTRY_IMAGE-rootless:${CI_COMMIT_REF_SLUG}-linux-arm64
docker manifest push $CI_REGISTRY_IMAGE-rootless:$CI_COMMIT_TAG
MAJOR_MINOR=$(echo $CI_COMMIT_TAG | sed -E 's/^v?([0-9]+\.[0-9]+)\..*/\1/')
if [ -n "$MAJOR_MINOR" ]; then
docker manifest create $CI_REGISTRY_IMAGE-rootless:$MAJOR_MINOR \
$CI_REGISTRY_IMAGE-rootless:${CI_COMMIT_REF_SLUG}-linux-amd64 \
$CI_REGISTRY_IMAGE-rootless:${CI_COMMIT_REF_SLUG}-linux-arm64
docker manifest push $CI_REGISTRY_IMAGE-rootless:$MAJOR_MINOR
fi
fi
needs:
- docker:build:rootless:amd64
- docker:build:rootless:arm64
rules:
- if: $CI_COMMIT_TAG
- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
# ==========================================
# Docker Manifest Creation - Hardened
# ==========================================
docker:manifest:hardened:
stage: release
image: docker:latest
services:
- docker:dind
before_script:
- docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
script:
- export VERSION=${CI_COMMIT_TAG:-$CI_COMMIT_REF_NAME}
# Create manifest for hardened image
- |
docker manifest create $CI_REGISTRY_IMAGE-hardened:$CI_COMMIT_REF_SLUG \
$CI_REGISTRY_IMAGE-hardened:${CI_COMMIT_REF_SLUG}-linux-amd64 \
$CI_REGISTRY_IMAGE-hardened:${CI_COMMIT_REF_SLUG}-linux-arm64
docker manifest push $CI_REGISTRY_IMAGE-hardened:$CI_COMMIT_REF_SLUG
# Tag as latest on main branch or create version tags
- |
if [ "$CI_COMMIT_BRANCH" = "$CI_DEFAULT_BRANCH" ]; then
docker manifest create $CI_REGISTRY_IMAGE-hardened:latest \
$CI_REGISTRY_IMAGE-hardened:${CI_COMMIT_REF_SLUG}-linux-amd64 \
$CI_REGISTRY_IMAGE-hardened:${CI_COMMIT_REF_SLUG}-linux-arm64
docker manifest push $CI_REGISTRY_IMAGE-hardened:latest
fi
- |
if [ -n "$CI_COMMIT_TAG" ]; then
docker manifest create $CI_REGISTRY_IMAGE-hardened:$CI_COMMIT_TAG \
$CI_REGISTRY_IMAGE-hardened:${CI_COMMIT_REF_SLUG}-linux-amd64 \
$CI_REGISTRY_IMAGE-hardened:${CI_COMMIT_REF_SLUG}-linux-arm64
docker manifest push $CI_REGISTRY_IMAGE-hardened:$CI_COMMIT_TAG
MAJOR_MINOR=$(echo $CI_COMMIT_TAG | sed -E 's/^v?([0-9]+\.[0-9]+)\..*/\1/')
if [ -n "$MAJOR_MINOR" ]; then
docker manifest create $CI_REGISTRY_IMAGE-hardened:$MAJOR_MINOR \
$CI_REGISTRY_IMAGE-hardened:${CI_COMMIT_REF_SLUG}-linux-amd64 \
$CI_REGISTRY_IMAGE-hardened:${CI_COMMIT_REF_SLUG}-linux-arm64
docker manifest push $CI_REGISTRY_IMAGE-hardened:$MAJOR_MINOR
fi
fi
needs:
- docker:build:hardened:amd64
- docker:build:hardened:arm64
rules:
- if: $CI_COMMIT_TAG
- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
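The three manifest jobs above share the same tagging logic, so a couple of local sanity checks are worth noting. A minimal sketch, assuming Docker CLI access to the registry; the `$CI_*` values are GitLab-provided placeholders. Note also that `sed` echoes the tag unchanged when it does not match the `major.minor.patch` pattern, so the `-n "$MAJOR_MINOR"` check only guards against an empty result, not a malformed tag.

```bash
# The major.minor extraction used in the tag branches above:
echo "v0.21.3" | sed -E 's/^v?([0-9]+\.[0-9]+)\..*/\1/'   # prints: 0.21

# After a pipeline run, confirm the pushed manifest lists every expected architecture:
docker manifest inspect "$CI_REGISTRY_IMAGE:latest" | grep -E '"architecture"|"variant"'
```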

14
.scaffold/go.sum Normal file
View File

@@ -0,0 +1,14 @@
entgo.io/ent v0.14.5 h1:Rj2WOYJtCkWyFo6a+5wB3EfBRP0rnx1fMk6gGA0UUe4=
entgo.io/ent v0.14.5/go.mod h1:zTzLmWtPvGpmSwtkaayM2cm5m819NdM7z7tYPq3vN0U=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/sysadminsmedia/homebox/backend v0.0.0-20251212183312-2d1d3d927bfd h1:QULUJSgHc4rSlTjb2qYT6FIgwDWFCqEpnYqc/ltsrkk=
github.com/sysadminsmedia/homebox/backend v0.0.0-20251212183312-2d1d3d927bfd/go.mod h1:jB+tPmHtPDM1VnAjah0gvcRfP/s7c+rtQwpA8cvZD/U=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=

View File

@@ -4,7 +4,7 @@
},
"explorer.fileNesting.enabled": true,
"explorer.fileNesting.patterns": {
"package.json": "package-lock.json, yarn.lock, .eslintrc.js, tsconfig.json, .prettierrc, .editorconfig, pnpm-lock.yaml, postcss.config.js, tailwind.config.js",
"package.json": "package-lock.json, yarn.lock, eslint.config.mjs, tsconfig.json, .prettierrc, .editorconfig, pnpm-lock.yaml, postcss.config.js, tailwind.config.js",
"docker-compose.yml": "Dockerfile, .dockerignore, docker-compose.dev.yml, docker-compose.yml",
"README.md": "LICENSE, SECURITY.md"
},
@@ -22,6 +22,8 @@
"editor.defaultFormatter": "dbaeumer.vscode-eslint"
},
"eslint.format.enable": true,
"eslint.validate": ["javascript", "typescript", "vue"],
"eslint.useFlatConfig": true,
"css.validate": false,
"tailwindCSS.includeLanguages": {
"vue": "html",

View File

@@ -1,5 +1,5 @@
# Node dependencies stage
FROM public.ecr.aws/docker/library/node:lts-alpine AS frontend-dependencies
FROM public.ecr.aws/docker/library/node:22-alpine AS frontend-dependencies
WORKDIR /app
# Install pnpm globally (caching layer)
@@ -10,7 +10,7 @@ COPY frontend/package.json frontend/pnpm-lock.yaml ./
RUN pnpm install --frozen-lockfile
# Build Nuxt (frontend) stage
FROM public.ecr.aws/docker/library/node:lts-alpine AS frontend-builder
FROM public.ecr.aws/docker/library/node:22-alpine AS frontend-builder
WORKDIR /app
# Install pnpm globally again (it can reuse the cache if not changed)

136
Dockerfile.hardened Normal file
View File

@@ -0,0 +1,136 @@
# ---------------------------------------
# Node dependencies stage
# ---------------------------------------
FROM public.ecr.aws/docker/library/node:22-alpine AS frontend-dependencies
WORKDIR /app
# Install pnpm globally (caching layer)
RUN npm install -g pnpm
# Copy package.json and lockfile to leverage caching
COPY frontend/package.json frontend/pnpm-lock.yaml ./
RUN pnpm install --frozen-lockfile
# ---------------------------------------
# Build Nuxt (frontend) stage
# ---------------------------------------
FROM public.ecr.aws/docker/library/node:22-alpine AS frontend-builder
WORKDIR /app
# Install pnpm globally again (it can reuse the cache if not changed)
RUN npm install -g pnpm
# Copy over source files and node_modules from dependencies stage
COPY frontend .
COPY --from=frontend-dependencies /app/node_modules ./node_modules
RUN pnpm build
# ---------------------------------------
# Go dependencies stage
# ---------------------------------------
FROM public.ecr.aws/docker/library/golang:alpine AS builder-dependencies
WORKDIR /go/src/app
# Copy go.mod and go.sum for better caching
COPY ./backend/go.mod ./backend/go.sum ./
RUN go mod download
# ---------------------------------------
# Build API + healthcheck stage
# ---------------------------------------
FROM public.ecr.aws/docker/library/golang:alpine AS builder
ARG TARGETOS
ARG TARGETARCH
ARG BUILD_TIME
ARG COMMIT
ARG VERSION
# Install necessary build tools
RUN apk update && \
apk upgrade && \
apk add --no-cache git build-base gcc g++
WORKDIR /go/src/app
# Copy Go modules (from dependencies stage) and source code
COPY --from=builder-dependencies /go/pkg/mod /go/pkg/mod
COPY ./backend .
# Clear old public files and copy new ones from frontend build
RUN rm -rf ./app/api/public
COPY --from=frontend-builder /app/.output/public ./app/api/static/public
# Use cache for Go build artifacts to build Homebox API
RUN --mount=type=cache,target=/root/.cache/go-build \
CGO_ENABLED=0 GOOS=$TARGETOS GOARCH=$TARGETARCH go build \
-ldflags "-s -w -X main.commit=$COMMIT -X main.buildTime=$BUILD_TIME -X main.version=$VERSION" \
-tags nodynamic -o /go/bin/api -v ./app/api/*.go
RUN chmod +x /go/bin/api
RUN mkdir /app
RUN mkdir /data
# ---------- Build static healthcheck helper ----------
# A small Go program that GETs the status URL and exits 0 on 2xx.
RUN cat > /tmp/healthcheck.go <<'EOF'
package main
import (
"fmt"
"net/http"
"os"
"time"
)
func main() {
url := "http://127.0.0.1:7745/api/v1/status"
if len(os.Args) > 1 { url = os.Args[1] }
c := &http.Client{ Timeout: 3 * time.Second }
resp, err := c.Get(url)
if err != nil { fmt.Fprintln(os.Stderr, err); os.Exit(1) }
resp.Body.Close()
if resp.StatusCode/100 != 2 {
fmt.Fprintln(os.Stderr, "unexpected status:", resp.StatusCode)
os.Exit(1)
}
}
EOF
RUN --mount=type=cache,target=/root/.cache/go-build \
CGO_ENABLED=0 GOOS=$TARGETOS GOARCH=$TARGETARCH \
go build -ldflags "-s -w" -o /go/bin/hc /tmp/healthcheck.go
# ---------------------------------------
# Production stage
# ---------------------------------------
FROM gcr.io/distroless/static:nonroot
ENV HBOX_MODE=production
ENV HBOX_STORAGE_CONN_STRING=file:///?no_tmp_dir=true
ENV HBOX_STORAGE_PREFIX_PATH=data
ENV HBOX_DATABASE_SQLITE_PATH=/data/homebox.db?_pragma=busy_timeout=2000&_pragma=journal_mode=WAL&_fk=1&_time_format=sqlite
# Create application directory and copy over built Go binary and assets
COPY --from=builder --chown=65532:65532 /app /app
COPY --from=builder --chown=65532:65532 --chmod=755 /go/bin/api /app
COPY --from=builder --chown=65532:65532 /data /data
# Copy the healthcheck helper
COPY --from=builder --chown=65532:65532 --chmod=755 /go/bin/hc /app/healthcheck
# Labels and configuration for the final image
LABEL Name=homebox Version=0.0.1
LABEL org.opencontainers.image.source="https://github.com/sysadminsmedia/homebox"
# Expose necessary ports for Homebox
EXPOSE 7745
WORKDIR /app
# Persist volume for data
VOLUME [ "/data" ]
# Entrypoint and CMD
USER 65532
ENTRYPOINT [ "/app/api" ]
CMD [ "/data/config.yml" ]
# JSON exec-form healthcheck (no shell, no wget)
HEALTHCHECK --interval=30s --timeout=5s --start-period=5s --retries=3 \
CMD ["/app/healthcheck", "http://127.0.0.1:7745/api/v1/status"]

View File

@@ -1,5 +1,5 @@
# Node dependencies stage
FROM public.ecr.aws/docker/library/node:lts-alpine AS frontend-dependencies
FROM public.ecr.aws/docker/library/node:22-alpine AS frontend-dependencies
WORKDIR /app
# Install pnpm globally (caching layer)
@@ -10,7 +10,7 @@ COPY frontend/package.json frontend/pnpm-lock.yaml ./
RUN pnpm install --frozen-lockfile
# Build Nuxt (frontend) stage
FROM public.ecr.aws/docker/library/node:lts-alpine AS frontend-builder
FROM public.ecr.aws/docker/library/node:22-alpine AS frontend-builder
WORKDIR /app
# Install pnpm globally again (it can reuse the cache if not changed)

View File

@@ -2,36 +2,59 @@
<img src="/docs/public/lilbox.svg" height="200"/>
</div>
<h1 align="center" style="margin-top: -10px"> HomeBox </h1>
<p align="center" style="width: 100;">
<h1 align="center" style="margin-top: -10px;"> HomeBox </h1>
<p align="center" style="width: 100%;">
<a href="https://homebox.software/en/">Docs</a>
|
<a href="https://demo.homebox.software">Demo</a>
|
<a href="https://discord.gg/aY4DCkpNA9">Discord</a>
</p>
<p align="center" style="width: 100%;">
<img src="https://img.shields.io/github/check-runs/sysadminsmedia/homebox/main" alt="Github Checks"/>
<img src="https://img.shields.io/github/license/sysadminsmedia/homebox"/>
<img src="https://img.shields.io/github/v/release/sysadminsmedia/homebox?sort=semver&display_name=release"/>
<img src="https://img.shields.io/weblate/progress/homebox?server=https%3A%2F%2Ftranslate.sysadminsmedia.com"/>
</p>
<p align="center" style="width: 100%;">
<img src="https://img.shields.io/reddit/subreddit-subscribers/homebox"/>
<img src="https://img.shields.io/mastodon/follow/110749314839831923?domain=infosec.exchange"/>
<img src="https://img.shields.io/lemmy/homebox%40lemmy.world?label=lemmy"/>
</p>
## What is HomeBox
HomeBox is the inventory and organization system built for the Home User! With a focus on simplicity and ease of use, Homebox is the perfect solution for your home inventory, organization, and management needs. While developing this project, I've tried to keep the following principles in mind:
HomeBox is the inventory and organization system built for the Home User! With a focus on simplicity and ease of use, Homebox is the perfect solution for your home inventory, organization, and management needs. While developing this project, we've tried to keep the following principles in mind:
- _Simple_ - Homebox is designed to be simple and easy to use. No complicated setup or configuration required. Use either a single docker container, or deploy yourself by compiling the binary for your platform of choice.
- _Blazingly Fast_ - Homebox is written in Go, which makes it extremely fast and requires minimal resources to deploy. In general, idle memory usage is less than 50MB for the whole container.
- _Portable_ - Homebox is designed to be portable and run on anywhere. We use SQLite and an embedded Web UI to make it easy to deploy, use, and backup.
- 🧘 _Simple but Expandable_ - Homebox is designed to be simple and easy to use. No complicated setup or configuration required. But expandable to whatever level of infrastructure you want to put into it.
- 🚀 _Blazingly Fast_ - Homebox is written in Go, which makes it extremely fast and requires minimal resources to deploy. In general, idle memory usage is less than 50MB for the whole container.
- 📦 _Portable_ - Homebox is designed to be portable and run anywhere. We use SQLite and an embedded Web UI to make it easy to deploy, use, and back up.
### Key Features
- 📇 Rich Organization - Organize your items into categories, locations, and tags. You can also create custom fields to store additional information about your items.
- 🔍 Powerful Search - Quickly find items in your inventory using the powerful search feature.
- 📸 Image Upload - Upload images of your items to make it easy to identify them.
- 📄 Document and Warranty Tracking - Keep track of important documents and warranties for your items.
- 💰 Purchase & Maintenance Tracking - Track purchase dates, prices, and maintenance schedules for your items.
- 📱 Responsive Design - Homebox is designed to work on any device, including desktops, tablets, and smartphones.
## Screenshots
![Login Screen](.github/screenshots/1.png)
![Dashboard](.github/screenshots/2.png)
![Item View](.github/screenshots/3.png)
![Create Item](.github/screenshots/9.png)
![Search](.github/screenshots/8.png)
# Screenshots
Check out screenshots of the project [here](https://imgur.com/a/5gLWt2j).
You can also try the demo instances of Homebox:
- [Demo](https://demo.homebox.software)
- [Nightly](https://nightly.homebox.software)
- [VNext](https://vnext.homebox.software/)
## Quick Start
[Configuration & Docker Compose](https://homebox.software/en/quick-start.html)
```bash
# If using the rootless image, ensure data
# If using the rootless or hardened image, ensure data
# folder has correct permissions
mkdir -p /path/to/data/folder
chown 65532:65532 -R /path/to/data/folder
@@ -43,6 +66,7 @@ docker run -d \
--volume /path/to/data/folder/:/data \
ghcr.io/sysadminsmedia/homebox:latest
# ghcr.io/sysadminsmedia/homebox:latest-rootless
# ghcr.io/sysadminsmedia/homebox:latest-hardened
```
<!-- CONTRIBUTING -->
@@ -51,14 +75,20 @@ docker run -d \
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are **greatly appreciated**.
If you are not a coder, you can still contribute financially. Financial contributions help me prioritize working on this project over others and helps me know that there is a real demand for project development.
To get started with code based contributions, please see our [contributing guide](https://homebox.software/en/contribute/get-started.html).
If you are not a coder and can't help translate, you can still contribute financially. Financial contributions help us maintain the project and keep demos running.
## Help us Translate
We want to make sure that Homebox is available in as many languages as possible. If you are interested in helping us translate Homebox, please help us via our [Weblate instance](https://translate.sysadminsmedia.com/projects/homebox/).
[![Translation status](http://translate.sysadminsmedia.com/widget/homebox/multi-auto.svg)](http://translate.sysadminsmedia.com/engage/homebox/)
[![Translation status](https://translate.sysadminsmedia.com/widget/homebox/multi-auto.svg)](https://translate.sysadminsmedia.com/engage/homebox/)
## Credits
- Original project by [@hay-kot](https://github.com/hay-kot)
- Logo by [@lakotelman](https://github.com/lakotelman)
### Contributors
<a href="https://github.com/sysadminsmedia/homebox/graphs/contributors">
<img src="https://contrib.rocks/image?repo=sysadminsmedia/homebox" />
</a>

View File

@@ -23,10 +23,13 @@ tasks:
INTERNAL: "../../../internal"
PKGS: "../../../pkgs"
cmds:
- swag fmt --dir={{ .API }}
- swag init --dir={{ .API }},{{ .INTERNAL }}/core/services,{{ .INTERNAL }}/data/repo --parseDependency
- cp -r ./docs/swagger.json ../../../../docs/en/api/openapi-2.0.json
- cp -r ./docs/swagger.yaml ../../../../docs/en/api/openapi-2.0.yaml
- npx -y -p swagger2openapi swagger2openapi --outfile ./docs/openapi-3.json ./docs/swagger.json
- npx -y -p swagger2openapi swagger2openapi --yaml --outfile ./docs/openapi-3.yaml ./docs/swagger.json
- cp -r ./docs/swagger.json ../../../../docs/en/api/swagger-2.0.json
- cp -r ./docs/swagger.yaml ../../../../docs/en/api/swagger-2.0.yaml
- cp -r ./docs/openapi-3.json ../../../../docs/en/api/openapi-3.0.json
- cp -r ./docs/openapi-3.yaml ../../../../docs/en/api/openapi-3.0.yaml
sources:
- "./backend/app/api/**/*"
- "./backend/internal/data/**"
@@ -93,6 +96,16 @@ tasks:
- go run ./app/api/ {{ .CLI_ARGS }} &
silent: true
go:ci:with-frontend:
desc: Run backend with frontend in CI mode
dir: frontend
cmds:
- pnpm install
- pnpm run build
- cp -r ./.output/public ../backend/app/api/static/
- task: go:ci
silent: true
go:test:
desc: Runs all go tests using gotestsum - supports passing gotestsum args
dir: backend
@@ -201,12 +214,11 @@ tasks:
desc: Runs end-to-end test on a live server
dir: frontend
cmds:
- task: go:ci
- task: ui:ci
- task: go:ci:with-frontend
- pnpm exec playwright install-deps
- pnpm exec playwright install
- sleep 30
- TEST_SHUTDOWN_API_SERVER=true pnpm exec playwright test -c ./test/playwright.config.ts {{ .CLI_ARGS }}
- TEST_SHUTDOWN_API_SERVER=true E2E_BASE_URL=http://localhost:7745 pnpm exec playwright test -c ./test/playwright.config.ts {{ .CLI_ARGS }}
pr:
desc: Runs all tasks required for a PR
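A short usage note for the new task, assuming the standard go-task runner is installed and the command is run from the repository root:

```bash
task go:ci:with-frontend   # builds the Nuxt frontend, copies it into the backend, then starts the API in CI mode
```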

View File

@@ -14,35 +14,47 @@ builds:
- linux
- windows
- darwin
- freebsd
goarch:
- amd64
- "386"
- arm
- arm64
- riscv64
flags:
- -trimpath
ldflags:
- -s -w
- -X main.version={{.Version}}
- -X main.commit={{.Commit}}
- -X main.date={{.Date}}
ignore:
- goos: windows
goarch: arm
- goos: windows
goarch: "386"
- goos: freebsd
goarch: arm
- goos: freebsd
goarch: "386"
tags:
- >-
{{- if eq .Arch "riscv64" }}nodynamic
{{- else if eq .Arch "arm" }}nodynamic
{{- else if eq .Arch "386" }}nodynamic
{{- else if eq .Os "freebsd" }}nodynamic
{{ end }}
signs:
- cmd: cosign
stdin: "{{ .Env.COSIGN_PWD }}"
signature: "${artifact}.sigstore.json"
args:
- "sign-blob"
- "--key=cosign.key"
- "--output-signature=${signature}"
- sign-blob
- "--bundle=${signature}"
- "${artifact}"
- "--yes" # needed on cosign 2.0.0+
artifacts: all
- "--yes"
artifacts: checksum
output: true
archives:
- formats: [ 'tar.gz' ]
# this name template makes the OS and Arch compatible with the results of uname.
@@ -57,7 +69,8 @@ archives:
format_overrides:
- goos: windows
formats: [ 'zip' ]
sboms:
- artifacts: archive
release:
extra_files:
- glob: dist/*.sig
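With `--bundle`, cosign writes a self-contained `.sigstore.json` bundle next to the signed checksum artifact, which can be verified offline. A rough sketch, assuming the matching public key has been exported as `cosign.pub`; the artifact file names below are placeholders for whatever GoReleaser emits.

```bash
cosign verify-blob \
  --key cosign.pub \
  --bundle homebox_checksums.txt.sigstore.json \
  homebox_checksums.txt
```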

View File

@@ -10,6 +10,7 @@ import (
"github.com/hay-kot/httpkit/errchain"
"github.com/hay-kot/httpkit/server"
"github.com/rs/zerolog/log"
"github.com/sysadminsmedia/homebox/backend/app/api/providers"
"github.com/sysadminsmedia/homebox/backend/internal/core/services"
"github.com/sysadminsmedia/homebox/backend/internal/core/services/reporting/eventbus"
"github.com/sysadminsmedia/homebox/backend/internal/data/repo"
@@ -74,6 +75,7 @@ type V1Controller struct {
bus *eventbus.EventBus
url string
config *config.Config
oidcProvider *providers.OIDCProvider
}
type (
@@ -95,6 +97,14 @@ type (
Demo bool `json:"demo"`
AllowRegistration bool `json:"allowRegistration"`
LabelPrinting bool `json:"labelPrinting"`
OIDC OIDCStatus `json:"oidc"`
}
OIDCStatus struct {
Enabled bool `json:"enabled"`
ButtonText string `json:"buttonText,omitempty"`
AutoRedirect bool `json:"autoRedirect,omitempty"`
AllowLocal bool `json:"allowLocal"`
}
)
@@ -111,9 +121,23 @@ func NewControllerV1(svc *services.AllServices, repos *repo.AllRepos, bus *event
opt(ctrl)
}
ctrl.initOIDCProvider()
return ctrl
}
func (ctrl *V1Controller) initOIDCProvider() {
if ctrl.config.OIDC.Enabled {
oidcProvider, err := providers.NewOIDCProvider(ctrl.svc.User, &ctrl.config.OIDC, &ctrl.config.Options, ctrl.cookieSecure)
if err != nil {
log.Err(err).Msg("failed to initialize OIDC provider at startup")
} else {
ctrl.oidcProvider = oidcProvider
log.Info().Msg("OIDC provider initialized successfully at startup")
}
}
}
// HandleBase godoc
//
// @Summary Application Info
@@ -132,6 +156,12 @@ func (ctrl *V1Controller) HandleBase(ready ReadyFunc, build Build) errchain.Hand
Demo: ctrl.isDemo,
AllowRegistration: ctrl.allowRegistration,
LabelPrinting: ctrl.config.LabelMaker.PrintCommand != nil,
OIDC: OIDCStatus{
Enabled: ctrl.config.OIDC.Enabled,
ButtonText: ctrl.config.OIDC.ButtonText,
AutoRedirect: ctrl.config.OIDC.AutoRedirect,
AllowLocal: ctrl.config.Options.AllowLocalLogin,
},
})
}
}
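The practical effect is that the unauthenticated status endpoint now advertises the OIDC configuration to the frontend. A hedged illustration of the response shape, with placeholder values:

```bash
curl -s http://localhost:7745/api/v1/status
# => { ..., "allowRegistration": true, "labelPrinting": false,
#      "oidc": { "enabled": true, "buttonText": "Sign in with SSO",
#                "autoRedirect": false, "allowLocal": true } }
```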

View File

@@ -2,6 +2,7 @@ package v1
import (
"errors"
"fmt"
"net/http"
"strconv"
"strings"
@@ -106,6 +107,11 @@ func (ctrl *V1Controller) HandleAuthLogin(ps ...AuthProvider) errchain.HandlerFu
provider = "local"
}
// Block local only when disabled
if provider == "local" && !ctrl.config.Options.AllowLocalLogin {
return validate.NewRequestError(fmt.Errorf("local login is not enabled"), http.StatusForbidden)
}
// Get the provider
p, ok := providers[provider]
if !ok {
@@ -114,11 +120,11 @@ func (ctrl *V1Controller) HandleAuthLogin(ps ...AuthProvider) errchain.HandlerFu
newToken, err := p.Authenticate(w, r)
if err != nil {
log.Err(err).Msg("failed to authenticate")
return server.JSON(w, http.StatusInternalServerError, err.Error())
log.Warn().Err(err).Msg("authentication failed")
return validate.NewUnauthorizedError()
}
ctrl.setCookies(w, noPort(r.Host), newToken.Raw, newToken.ExpiresAt, true)
ctrl.setCookies(w, noPort(r.Host), newToken.Raw, newToken.ExpiresAt, true, newToken.AttachmentToken)
return server.JSON(w, http.StatusOK, TokenResponse{
Token: "Bearer " + newToken.Raw,
ExpiresAt: newToken.ExpiresAt,
@@ -172,7 +178,7 @@ func (ctrl *V1Controller) HandleAuthRefresh() errchain.HandlerFunc {
return validate.NewUnauthorizedError()
}
ctrl.setCookies(w, noPort(r.Host), newToken.Raw, newToken.ExpiresAt, false)
ctrl.setCookies(w, noPort(r.Host), newToken.Raw, newToken.ExpiresAt, false, newToken.AttachmentToken)
return server.JSON(w, http.StatusOK, newToken)
}
}
@@ -181,7 +187,7 @@ func noPort(host string) string {
return strings.Split(host, ":")[0]
}
func (ctrl *V1Controller) setCookies(w http.ResponseWriter, domain, token string, expires time.Time, remember bool) {
func (ctrl *V1Controller) setCookies(w http.ResponseWriter, domain, token string, expires time.Time, remember bool, attachmentToken string) {
http.SetCookie(w, &http.Cookie{
Name: cookieNameRemember,
Value: strconv.FormatBool(remember),
@@ -213,6 +219,19 @@ func (ctrl *V1Controller) setCookies(w http.ResponseWriter, domain, token string
HttpOnly: false,
Path: "/",
})
// Set attachment token cookie (accessible to frontend, not HttpOnly)
if attachmentToken != "" {
http.SetCookie(w, &http.Cookie{
Name: "hb.auth.attachment_token",
Value: attachmentToken,
Expires: expires,
Domain: domain,
Secure: ctrl.cookieSecure,
HttpOnly: false,
Path: "/",
})
}
}
func (ctrl *V1Controller) unsetCookies(w http.ResponseWriter, domain string) {
@@ -246,4 +265,77 @@ func (ctrl *V1Controller) unsetCookies(w http.ResponseWriter, domain string) {
HttpOnly: false,
Path: "/",
})
// Unset attachment token cookie
http.SetCookie(w, &http.Cookie{
Name: "hb.auth.attachment_token",
Value: "",
Expires: time.Unix(0, 0),
Domain: domain,
Secure: ctrl.cookieSecure,
HttpOnly: false,
Path: "/",
})
}
// HandleOIDCLogin godoc
//
// @Summary OIDC Login Initiation
// @Tags Authentication
// @Produce json
// @Success 302
// @Router /v1/users/login/oidc [GET]
func (ctrl *V1Controller) HandleOIDCLogin() errchain.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) error {
// Forbidden if OIDC is not enabled
if !ctrl.config.OIDC.Enabled {
return validate.NewRequestError(fmt.Errorf("OIDC is not enabled"), http.StatusForbidden)
}
// Check if OIDC provider is available
if ctrl.oidcProvider == nil {
log.Error().Msg("OIDC provider not initialized")
return validate.NewRequestError(errors.New("OIDC provider not available"), http.StatusInternalServerError)
}
// Initiate OIDC flow
_, err := ctrl.oidcProvider.InitiateOIDCFlow(w, r)
return err
}
}
// HandleOIDCCallback godoc
//
// @Summary OIDC Callback Handler
// @Tags Authentication
// @Param code query string true "Authorization code"
// @Param state query string true "State parameter"
// @Success 302
// @Router /v1/users/login/oidc/callback [GET]
func (ctrl *V1Controller) HandleOIDCCallback() errchain.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) error {
// Forbidden if OIDC is not enabled
if !ctrl.config.OIDC.Enabled {
return validate.NewRequestError(fmt.Errorf("OIDC is not enabled"), http.StatusForbidden)
}
// Check if OIDC provider is available
if ctrl.oidcProvider == nil {
log.Error().Msg("OIDC provider not initialized")
return validate.NewRequestError(errors.New("OIDC provider not available"), http.StatusInternalServerError)
}
// Handle callback
newToken, err := ctrl.oidcProvider.HandleCallback(w, r)
if err != nil {
log.Err(err).Msg("OIDC callback failed")
http.Redirect(w, r, "/?oidc_error=oidc_auth_failed", http.StatusFound)
return nil
}
// Set cookies and redirect to home
ctrl.setCookies(w, noPort(r.Host), newToken.Raw, newToken.ExpiresAt, true, newToken.AttachmentToken)
http.Redirect(w, r, "/home", http.StatusFound)
return nil
}
}
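For context, a minimal sketch of wiring the new flow together. The `HBOX_OIDC_CLIENT_ID`, `HBOX_OIDC_CLIENT_SECRET`, and `HBOX_OIDC_ISSUER_URL` names come from the provider's own error messages; `HBOX_OIDC_ENABLED` follows the same prefix convention but is an assumption here, and all values are placeholders.

```bash
export HBOX_OIDC_ENABLED=true                     # assumed variable name, mirrors config.OIDC.Enabled
export HBOX_OIDC_CLIENT_ID=homebox
export HBOX_OIDC_CLIENT_SECRET='********'
export HBOX_OIDC_ISSUER_URL=https://id.example.com/realms/home

# Browser entry points for the flow (per the @Router annotations above):
#   GET /api/v1/users/login/oidc            -> redirects to the identity provider
#   GET /api/v1/users/login/oidc/callback   -> verifies state/nonce, exchanges the code, sets session cookies
```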

View File

@@ -0,0 +1,164 @@
package v1
import (
"net/http"
"github.com/google/uuid"
"github.com/hay-kot/httpkit/errchain"
"github.com/sysadminsmedia/homebox/backend/internal/core/services"
"github.com/sysadminsmedia/homebox/backend/internal/data/repo"
"github.com/sysadminsmedia/homebox/backend/internal/web/adapters"
)
// HandleItemTemplatesGetAll godoc
//
// @Summary Get All Item Templates
// @Tags Item Templates
// @Produce json
// @Success 200 {object} []repo.ItemTemplateSummary
// @Router /v1/templates [GET]
// @Security Bearer
func (ctrl *V1Controller) HandleItemTemplatesGetAll() errchain.HandlerFunc {
fn := func(r *http.Request) ([]repo.ItemTemplateSummary, error) {
auth := services.NewContext(r.Context())
return ctrl.repo.ItemTemplates.GetAll(r.Context(), auth.GID)
}
return adapters.Command(fn, http.StatusOK)
}
// HandleItemTemplatesGet godoc
//
// @Summary Get Item Template
// @Tags Item Templates
// @Produce json
// @Param id path string true "Template ID"
// @Success 200 {object} repo.ItemTemplateOut
// @Router /v1/templates/{id} [GET]
// @Security Bearer
func (ctrl *V1Controller) HandleItemTemplatesGet() errchain.HandlerFunc {
fn := func(r *http.Request, ID uuid.UUID) (repo.ItemTemplateOut, error) {
auth := services.NewContext(r.Context())
return ctrl.repo.ItemTemplates.GetOne(r.Context(), auth.GID, ID)
}
return adapters.CommandID("id", fn, http.StatusOK)
}
// HandleItemTemplatesCreate godoc
//
// @Summary Create Item Template
// @Tags Item Templates
// @Produce json
// @Param payload body repo.ItemTemplateCreate true "Template Data"
// @Success 201 {object} repo.ItemTemplateOut
// @Router /v1/templates [POST]
// @Security Bearer
func (ctrl *V1Controller) HandleItemTemplatesCreate() errchain.HandlerFunc {
fn := func(r *http.Request, body repo.ItemTemplateCreate) (repo.ItemTemplateOut, error) {
auth := services.NewContext(r.Context())
return ctrl.repo.ItemTemplates.Create(r.Context(), auth.GID, body)
}
return adapters.Action(fn, http.StatusCreated)
}
// HandleItemTemplatesUpdate godoc
//
// @Summary Update Item Template
// @Tags Item Templates
// @Produce json
// @Param id path string true "Template ID"
// @Param payload body repo.ItemTemplateUpdate true "Template Data"
// @Success 200 {object} repo.ItemTemplateOut
// @Router /v1/templates/{id} [PUT]
// @Security Bearer
func (ctrl *V1Controller) HandleItemTemplatesUpdate() errchain.HandlerFunc {
fn := func(r *http.Request, ID uuid.UUID, body repo.ItemTemplateUpdate) (repo.ItemTemplateOut, error) {
auth := services.NewContext(r.Context())
body.ID = ID
return ctrl.repo.ItemTemplates.Update(r.Context(), auth.GID, body)
}
return adapters.ActionID("id", fn, http.StatusOK)
}
// HandleItemTemplatesDelete godoc
//
// @Summary Delete Item Template
// @Tags Item Templates
// @Produce json
// @Param id path string true "Template ID"
// @Success 204
// @Router /v1/templates/{id} [DELETE]
// @Security Bearer
func (ctrl *V1Controller) HandleItemTemplatesDelete() errchain.HandlerFunc {
fn := func(r *http.Request, ID uuid.UUID) (any, error) {
auth := services.NewContext(r.Context())
err := ctrl.repo.ItemTemplates.Delete(r.Context(), auth.GID, ID)
return nil, err
}
return adapters.CommandID("id", fn, http.StatusNoContent)
}
type ItemTemplateCreateItemRequest struct {
Name string `json:"name" validate:"required,min=1,max=255"`
Description string `json:"description" validate:"max=1000"`
LocationID uuid.UUID `json:"locationId" validate:"required"`
LabelIDs []uuid.UUID `json:"labelIds"`
Quantity *int `json:"quantity"`
}
// HandleItemTemplatesCreateItem godoc
//
// @Summary Create Item from Template
// @Tags Item Templates
// @Produce json
// @Param id path string true "Template ID"
// @Param payload body ItemTemplateCreateItemRequest true "Item Data"
// @Success 201 {object} repo.ItemOut
// @Router /v1/templates/{id}/create-item [POST]
// @Security Bearer
func (ctrl *V1Controller) HandleItemTemplatesCreateItem() errchain.HandlerFunc {
fn := func(r *http.Request, templateID uuid.UUID, body ItemTemplateCreateItemRequest) (repo.ItemOut, error) {
auth := services.NewContext(r.Context())
template, err := ctrl.repo.ItemTemplates.GetOne(r.Context(), auth.GID, templateID)
if err != nil {
return repo.ItemOut{}, err
}
quantity := template.DefaultQuantity
if body.Quantity != nil {
quantity = *body.Quantity
}
// Build custom fields from template
fields := make([]repo.ItemField, len(template.Fields))
for i, f := range template.Fields {
fields[i] = repo.ItemField{
Type: f.Type,
Name: f.Name,
TextValue: f.TextValue,
}
}
// Create item with all template data in a single transaction
return ctrl.repo.Items.CreateFromTemplate(r.Context(), auth.GID, repo.ItemCreateFromTemplate{
Name: body.Name,
Description: body.Description,
Quantity: quantity,
LocationID: body.LocationID,
LabelIDs: body.LabelIDs,
Insured: template.DefaultInsured,
Manufacturer: template.DefaultManufacturer,
ModelNumber: template.DefaultModelNumber,
LifetimeWarranty: template.DefaultLifetimeWarranty,
WarrantyDetails: template.DefaultWarrantyDetails,
Fields: fields,
})
}
return adapters.ActionID("id", fn, http.StatusCreated)
}
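A hypothetical request against the create-from-template endpoint, matching the `ItemTemplateCreateItemRequest` fields; the host, IDs, token, and `/api` prefix are assumptions for illustration.

```bash
curl -s -X POST "http://localhost:7745/api/v1/templates/<template-id>/create-item" \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/json" \
  -d '{"name": "Cordless Drill", "description": "From the power-tool template",
       "locationId": "<location-uuid>", "labelIds": [], "quantity": 1}'
```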

View File

@@ -254,6 +254,25 @@ func (ctrl *V1Controller) HandleItemPatch() errchain.HandlerFunc {
return adapters.ActionID("id", fn, http.StatusOK)
}
// HandleItemDuplicate godocs
//
// @Summary Duplicate Item
// @Tags Items
// @Produce json
// @Param id path string true "Item ID"
// @Param payload body repo.DuplicateOptions true "Duplicate Options"
// @Success 201 {object} repo.ItemOut
// @Router /v1/items/{id}/duplicate [POST]
// @Security Bearer
func (ctrl *V1Controller) HandleItemDuplicate() errchain.HandlerFunc {
fn := func(r *http.Request, ID uuid.UUID, options repo.DuplicateOptions) (repo.ItemOut, error) {
ctx := services.NewContext(r.Context())
return ctrl.svc.Items.Duplicate(ctx, ctx.GID, ID, options)
}
return adapters.ActionID("id", fn, http.StatusCreated)
}
// HandleGetAllCustomFieldNames godocs
//
// @Summary Get All Custom Field Names

View File

@@ -186,7 +186,7 @@ func (ctrl *V1Controller) handleItemAttachmentsHandler(w http.ResponseWriter, r
log.Err(err).Msg("failed to open bucket")
return validate.NewRequestError(err, http.StatusInternalServerError)
}
file, err := bucket.NewReader(ctx, doc.Path, nil)
file, err := bucket.NewReader(ctx, ctrl.repo.Attachments.GetFullPath(doc.Path), nil)
if err != nil {
log.Err(err).Msg("failed to open file")
return validate.NewRequestError(err, http.StatusInternalServerError)
@@ -205,7 +205,7 @@ func (ctrl *V1Controller) handleItemAttachmentsHandler(w http.ResponseWriter, r
}(bucket)
// Set the Content-Disposition header for RFC6266 compliance
disposition := "attachment; filename*=UTF-8''" + url.QueryEscape(doc.Title)
disposition := "inline; filename*=UTF-8''" + url.QueryEscape(doc.Title)
w.Header().Set("Content-Disposition", disposition)
http.ServeContent(w, r, doc.Title, doc.CreatedAt, file)
return nil

View File

@@ -29,7 +29,7 @@ func generateOrPrint(ctrl *V1Controller, w http.ResponseWriter, r *http.Request,
_, err = w.Write([]byte("Printed!"))
return err
} else {
return labelmaker.GenerateLabel(w, &params)
return labelmaker.GenerateLabel(w, &params, ctrl.config)
}
}

View File

@@ -0,0 +1,332 @@
package v1
import (
"encoding/base64"
"encoding/json"
"errors"
"fmt"
"io"
"net/http"
"net/url"
"strings"
"time"
"github.com/hay-kot/httpkit/errchain"
"github.com/hay-kot/httpkit/server"
"github.com/rs/zerolog/log"
"github.com/sysadminsmedia/homebox/backend/internal/data/repo"
"github.com/sysadminsmedia/homebox/backend/internal/sys/config"
"github.com/sysadminsmedia/homebox/backend/internal/web/adapters"
)
type UPCITEMDBResponse struct {
Code string `json:"code"`
Total int `json:"total"`
Offset int `json:"offset"`
Items []struct {
Ean string `json:"ean"`
Title string `json:"title"`
Description string `json:"description"`
Upc string `json:"upc"`
Brand string `json:"brand"`
Model string `json:"model"`
Color string `json:"color"`
Size string `json:"size"`
Dimension string `json:"dimension"`
Weight string `json:"weight"`
Category string `json:"category"`
LowestRecordedPrice float64 `json:"lowest_recorded_price"`
HighestRecordedPrice float64 `json:"highest_recorded_price"`
Images []string `json:"images"`
Offers []struct {
Merchant string `json:"merchant"`
Domain string `json:"domain"`
Title string `json:"title"`
Currency string `json:"currency"`
ListPrice string `json:"list_price"`
Price float64 `json:"price"`
Shipping string `json:"shipping"`
Condition string `json:"condition"`
Availability string `json:"availability"`
Link string `json:"link"`
UpdatedT int `json:"updated_t"`
} `json:"offers"`
Asin string `json:"asin"`
Elid string `json:"elid"`
} `json:"items"`
}
type BARCODESPIDER_COMResponse struct {
ItemResponse struct {
Code int `json:"code"`
Status string `json:"status"`
Message string `json:"message"`
} `json:"item_response"`
ItemAttributes struct {
Title string `json:"title"`
Upc string `json:"upc"`
Ean string `json:"ean"`
ParentCategory string `json:"parent_category"`
Category string `json:"category"`
Brand string `json:"brand"`
Model string `json:"model"`
Mpn string `json:"mpn"`
Manufacturer string `json:"manufacturer"`
Publisher string `json:"publisher"`
Asin string `json:"asin"`
Color string `json:"color"`
Size string `json:"size"`
Weight string `json:"weight"`
Image string `json:"image"`
IsAdult string `json:"is_adult"`
Description string `json:"description"`
} `json:"item_attributes"`
Stores []struct {
StoreName string `json:"store_name"`
Title string `json:"title"`
Image string `json:"image"`
Price string `json:"price"`
Currency string `json:"currency"`
Link string `json:"link"`
Updated string `json:"updated"`
} `json:"Stores"`
}
// HandleProductSearchFromBarcode godoc
//
// @Summary Search EAN from Barcode
// @Tags Items
// @Produce json
// @Param data query string false "barcode to be searched"
// @Success 200 {object} []repo.BarcodeProduct
// @Router /v1/products/search-from-barcode [GET]
// @Security Bearer
func (ctrl *V1Controller) HandleProductSearchFromBarcode(conf config.BarcodeAPIConf) errchain.HandlerFunc {
type query struct {
// 80 characters is the longest non-2D barcode length (GS1-128)
EAN string `schema:"productEAN" validate:"required,max=80"`
}
return func(w http.ResponseWriter, r *http.Request) error {
q, err := adapters.DecodeQuery[query](r)
if err != nil {
return err
}
const TIMEOUT_SEC = 10
log.Info().Msg("Processing barcode lookup request on: " + q.EAN)
// Search on UPCITEMDB
var products []repo.BarcodeProduct
// www.ean-search.org/: not free
// Example code: dewalt 5035048748428
upcitemdb := func(iEan string) ([]repo.BarcodeProduct, error) {
client := &http.Client{Timeout: TIMEOUT_SEC * time.Second}
resp, err := client.Get("https://api.upcitemdb.com/prod/trial/lookup?upc=" + iEan)
if err != nil {
return nil, err
}
defer func() {
err = errors.Join(err, resp.Body.Close())
}()
if resp.StatusCode != http.StatusOK {
return nil, fmt.Errorf("API returned status code: %d", resp.StatusCode)
}
// Read the response body.
body, err := io.ReadAll(resp.Body)
if err != nil {
return nil, err
}
// Uncomment the following lines for debugging:
// sb := string(body)
// log.Debug().Msg("Response: " + sb)
var result UPCITEMDBResponse
if err := json.Unmarshal(body, &result); err != nil { // Parse []byte to go struct pointer
log.Error().Msg("Can not unmarshal JSON")
}
var res []repo.BarcodeProduct
for _, it := range result.Items {
var p repo.BarcodeProduct
p.SearchEngineName = "upcitemdb.com"
p.Barcode = iEan
p.Item.Description = it.Description
p.Item.Name = it.Title
p.Manufacturer = it.Brand
p.ModelNumber = it.Model
if len(it.Images) != 0 {
p.ImageURL = it.Images[0]
}
res = append(res, p)
}
return res, nil
}
ps, err := upcitemdb(q.EAN)
if err != nil {
log.Error().Msg("Can not retrieve product from upcitemdb.com" + err.Error())
}
// Barcode spider implementation
barcodespider := func(tokenAPI string, iEan string) ([]repo.BarcodeProduct, error) {
if len(tokenAPI) == 0 {
return nil, errors.New("no api token configured for barcodespider. " +
"Please define the api token in environment variable HBOX_BARCODE_TOKEN_BARCODESPIDER")
}
req, err := http.NewRequest(
"GET", "https://api.barcodespider.com/v1/lookup?upc="+iEan, nil)
if err != nil {
return nil, err
}
req.Header.Add("token", tokenAPI)
client := &http.Client{Timeout: TIMEOUT_SEC * time.Second}
resp, err := client.Do(req)
if err != nil {
return nil, err
}
// Defer the call to Body.Close(); its error is joined with any existing error
// so that it is not silently overridden.
defer func() {
err = errors.Join(err, resp.Body.Close())
}()
if resp.StatusCode != http.StatusOK {
return nil, fmt.Errorf("barcodespider API returned status code: %d", resp.StatusCode)
}
// Read the response body.
body, err := io.ReadAll(resp.Body)
if err != nil {
return nil, err
}
// Uncomment the following lines for debugging:
// sb := string(body)
// log.Debug().Msg("Response: " + sb)
var result BARCODESPIDER_COMResponse
if err := json.Unmarshal(body, &result); err != nil { // Parse []byte to go struct pointer
log.Error().Msg("Can not unmarshal JSON")
}
// TODO: check 200 code on HTTP response.
var p repo.BarcodeProduct
p.Barcode = iEan
p.SearchEngineName = "barcodespider.com"
p.Item.Name = result.ItemAttributes.Title
p.Item.Description = result.ItemAttributes.Description
p.Manufacturer = result.ItemAttributes.Brand
p.ModelNumber = result.ItemAttributes.Model
p.ImageURL = result.ItemAttributes.Image
var res []repo.BarcodeProduct
res = append(res, p)
return res, nil
}
ps2, err := barcodespider(conf.TokenBarcodespider, q.EAN)
if err != nil {
log.Error().Msg("Can not retrieve product from barcodespider.com: " + err.Error())
}
// Merge everything.
products = append(products, ps...)
products = append(products, ps2...)
// Retrieve images if possible
for i := range products {
p := &products[i]
if len(p.ImageURL) == 0 {
continue
}
// Validate URL is HTTPS
u, err := url.Parse(p.ImageURL)
if err != nil || u.Scheme != "https" {
log.Warn().Msg("Skipping non-HTTPS image URL: " + p.ImageURL)
continue
}
client := &http.Client{Timeout: TIMEOUT_SEC * time.Second}
res, err := client.Get(p.ImageURL)
if err != nil {
log.Warn().Msg("Cannot fetch image for URL: " + p.ImageURL + ": " + err.Error())
// Skip this product's image: res is nil on error and must not be dereferenced below.
continue
}
defer func() {
err = errors.Join(err, res.Body.Close())
}()
// Validate response
if res.StatusCode != http.StatusOK {
continue
}
// Check content type
contentType := res.Header.Get("Content-Type")
if !strings.HasPrefix(contentType, "image/") {
continue
}
// Limit image size to 8MB
limitedReader := io.LimitReader(res.Body, 8*1024*1024)
// Read data of image
bytes, err := io.ReadAll(limitedReader)
if err != nil {
log.Warn().Msg(err.Error())
continue
}
// Convert to Base64
var base64Encoding string
// Determine the content type of the image file
mimeType := http.DetectContentType(bytes)
// Prepend the appropriate URI scheme header depending
// on the MIME type
switch mimeType {
case "image/jpeg":
base64Encoding += "data:image/jpeg;base64,"
case "image/png":
base64Encoding += "data:image/png;base64,"
default:
continue
}
// Append the base64 encoded output
base64Encoding += base64.StdEncoding.EncodeToString(bytes)
p.ImageBase64 = base64Encoding
}
if len(products) != 0 {
return server.JSON(w, http.StatusOK, products)
}
return server.JSON(w, http.StatusNoContent, nil)
}
}
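An illustrative lookup against the new endpoint, using the example EAN from the comment above; the host, token, and `/api` prefix are placeholders.

```bash
curl -s -H "Authorization: Bearer <token>" \
  "http://localhost:7745/api/v1/products/search-from-barcode?productEAN=5035048748428"
```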

View File

@@ -20,9 +20,15 @@ import (
// @Produce json
// @Param payload body services.UserRegistration true "User Data"
// @Success 204
// @Failure 403 {string} string "Local login is not enabled"
// @Router /v1/users/register [Post]
func (ctrl *V1Controller) HandleUserRegistration() errchain.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) error {
// Forbidden if local login is not enabled
if !ctrl.config.Options.AllowLocalLogin {
return validate.NewRequestError(fmt.Errorf("local login is not enabled"), http.StatusForbidden)
}
regData := services.UserRegistration{}
if err := server.Decode(r, &regData); err != nil {

View File

@@ -1,21 +1,17 @@
package main
import (
"bytes"
"context"
"errors"
"fmt"
"github.com/google/uuid"
"github.com/sysadminsmedia/homebox/backend/pkgs/utils"
"net/http"
"os"
"path/filepath"
"strings"
"time"
"github.com/pressly/goose/v3"
"github.com/go-chi/chi/v5"
"github.com/go-chi/chi/v5/middleware"
"github.com/pressly/goose/v3"
"github.com/sysadminsmedia/homebox/backend/internal/sys/analytics"
"github.com/hay-kot/httpkit/errchain"
"github.com/hay-kot/httpkit/graceful"
@@ -28,16 +24,15 @@ import (
"github.com/sysadminsmedia/homebox/backend/internal/data/ent"
"github.com/sysadminsmedia/homebox/backend/internal/data/migrations"
"github.com/sysadminsmedia/homebox/backend/internal/data/repo"
"github.com/sysadminsmedia/homebox/backend/internal/sys/analytics"
"github.com/sysadminsmedia/homebox/backend/internal/sys/config"
"github.com/sysadminsmedia/homebox/backend/internal/web/mid"
"go.balki.me/anyhttp"
_ "github.com/lib/pq"
_ "github.com/sysadminsmedia/homebox/backend/internal/data/migrations/postgres"
_ "github.com/sysadminsmedia/homebox/backend/internal/data/migrations/sqlite3"
_ "github.com/sysadminsmedia/homebox/backend/pkgs/cgofreesqlite"
"gocloud.dev/pubsub"
_ "gocloud.dev/pubsub/awssnssqs"
_ "gocloud.dev/pubsub/azuresb"
_ "gocloud.dev/pubsub/gcppubsub"
@@ -102,81 +97,56 @@ func main() {
}
}
//nolint:gocyclo
func run(cfg *config.Config) error {
app := new(cfg)
app.setupLogger()
if cfg.Options.AllowAnalytics {
analytics.Send(version, build())
}
// =========================================================================
// Initialize Database & Repos
if strings.HasPrefix(cfg.Storage.ConnString, "file:///./") {
raw := strings.TrimPrefix(cfg.Storage.ConnString, "file:///./")
clean := filepath.Clean(raw)
absBase, err := filepath.Abs(clean)
if err != nil {
log.Fatal().Err(err).Msg("failed to get absolute path for storage connection string")
}
// Construct and validate the full storage path
storageDir := filepath.Join(absBase, cfg.Storage.PrefixPath)
// Set windows paths to use forward slashes required by go-cloud
storageDir = strings.ReplaceAll(storageDir, "\\", "/")
if !strings.HasPrefix(storageDir, absBase+"/") && storageDir != absBase {
log.Fatal().
Str("path", storageDir).
Msg("invalid storage path: you tried to use a prefix that is not a subdirectory of the base path")
}
// Create with more restrictive permissions
if err := os.MkdirAll(storageDir, 0o750); err != nil {
log.Fatal().
Err(err).
Msg("failed to create data directory")
}
err := setupStorageDir(cfg)
if err != nil {
return err
}
if strings.ToLower(cfg.Database.Driver) == "postgres" {
if !validatePostgresSSLMode(cfg.Database.SslMode) {
log.Fatal().Str("sslmode", cfg.Database.SslMode).Msg("invalid sslmode")
log.Error().Str("sslmode", cfg.Database.SslMode).Msg("invalid sslmode")
return fmt.Errorf("invalid sslmode: %s", cfg.Database.SslMode)
}
}
// Set up the database URL based on the driver because for some reason a common URL format is not used
databaseURL := ""
switch strings.ToLower(cfg.Database.Driver) {
case "sqlite3":
databaseURL = cfg.Database.SqlitePath
// Create directory for SQLite database if it doesn't exist
dbFilePath := strings.Split(cfg.Database.SqlitePath, "?")[0] // Remove query parameters
dbDir := filepath.Dir(dbFilePath)
if err := os.MkdirAll(dbDir, 0o755); err != nil {
log.Fatal().Err(err).Str("path", dbDir).Msg("failed to create SQLite database directory")
}
case "postgres":
databaseURL = fmt.Sprintf("host=%s port=%s user=%s password=%s dbname=%s sslmode=%s", cfg.Database.Host, cfg.Database.Port, cfg.Database.Username, cfg.Database.Password, cfg.Database.Database, cfg.Database.SslMode)
default:
log.Fatal().Str("driver", cfg.Database.Driver).Msg("unsupported database driver")
databaseURL, err := setupDatabaseURL(cfg)
if err != nil {
return err
}
c, err := ent.Open(strings.ToLower(cfg.Database.Driver), databaseURL)
if err != nil {
log.Fatal().
log.Error().
Err(err).
Str("driver", strings.ToLower(cfg.Database.Driver)).
Str("host", cfg.Database.Host).
Str("port", cfg.Database.Port).
Str("database", cfg.Database.Database).
Msg("failed opening connection to {driver} database at {host}:{port}/{database}")
return fmt.Errorf("failed opening connection to %s database at %s:%s/%s: %w",
strings.ToLower(cfg.Database.Driver),
cfg.Database.Host,
cfg.Database.Port,
cfg.Database.Database,
err,
)
}
goose.SetBaseFS(migrations.Migrations(strings.ToLower(cfg.Database.Driver)))
migrationsFs, err := migrations.Migrations(strings.ToLower(cfg.Database.Driver))
if err != nil {
return fmt.Errorf("failed to get migrations for %s: %w", strings.ToLower(cfg.Database.Driver), err)
}
goose.SetBaseFS(migrationsFs)
err = goose.SetDialect(strings.ToLower(cfg.Database.Driver))
if err != nil {
log.Fatal().Str("driver", cfg.Database.Driver).Msg("unsupported database driver")
log.Error().Str("driver", cfg.Database.Driver).Msg("unsupported database driver")
return fmt.Errorf("unsupported database driver: %s", cfg.Database.Driver)
}
@@ -186,25 +156,9 @@ func run(cfg *config.Config) error {
return err
}
collectFuncs := []currencies.CollectorFunc{
currencies.CollectDefaults(),
}
if cfg.Options.CurrencyConfig != "" {
log.Info().
Str("path", cfg.Options.CurrencyConfig).
Msg("loading currency config file")
content, err := os.ReadFile(cfg.Options.CurrencyConfig)
if err != nil {
log.Error().
Err(err).
Str("path", cfg.Options.CurrencyConfig).
Msg("failed to read currency config file")
return err
}
collectFuncs = append(collectFuncs, currencies.CollectJSON(bytes.NewReader(content)))
collectFuncs, err := loadCurrencies(cfg)
if err != nil {
return err
}
currencies, err := currencies.CollectionCurrencies(collectFuncs...)
@@ -258,154 +212,52 @@ func run(cfg *config.Config) error {
_ = httpserver.Shutdown(context.Background())
}()
listener, addrType, addrCfg, err := anyhttp.GetListener(cfg.Web.Host)
if err == nil {
switch addrType {
case anyhttp.SystemdFD:
sysdCfg := addrCfg.(*anyhttp.SysdConfig)
if sysdCfg.IdleTimeout != nil {
log.Error().Msg("idle timeout not yet supported. Please remove and try again")
return errors.New("idle timeout not yet supported. Please remove and try again")
}
fallthrough
case anyhttp.UnixSocket:
log.Info().Msgf("Server is running on %s", cfg.Web.Host)
return httpserver.Serve(listener)
}
} else {
log.Debug().Msgf("anyhttp error: %v", err)
}
log.Info().Msgf("Server is running on %s:%s", cfg.Web.Host, cfg.Web.Port)
return httpserver.ListenAndServe()
})
// =========================================================================
// Start Reoccurring Tasks
registerRecurringTasks(app, cfg, runner)
runner.AddFunc("eventbus", app.bus.Run)
runner.AddFunc("seed_database", func(ctx context.Context) error {
// TODO: Remove through external API that does setup
if cfg.Demo {
log.Info().Msg("Running in demo mode, creating demo data")
err := app.SetupDemo()
if err != nil {
log.Fatal().Msg(err.Error())
}
}
return nil
})
runner.AddPlugin(NewTask("purge-tokens", time.Duration(24)*time.Hour, func(ctx context.Context) {
_, err := app.repos.AuthTokens.PurgeExpiredTokens(ctx)
if err != nil {
log.Error().
Err(err).
Msg("failed to purge expired tokens")
}
}))
runner.AddPlugin(NewTask("purge-invitations", time.Duration(24)*time.Hour, func(ctx context.Context) {
_, err := app.repos.Groups.InvitationPurge(ctx)
if err != nil {
log.Error().
Err(err).
Msg("failed to purge expired invitations")
}
}))
runner.AddPlugin(NewTask("send-notifications", time.Duration(1)*time.Hour, func(ctx context.Context) {
now := time.Now()
if now.Hour() == 8 {
fmt.Println("run notifiers")
err := app.services.BackgroundService.SendNotifiersToday(context.Background())
if err != nil {
log.Error().
Err(err).
Msg("failed to send notifiers")
}
}
}))
go runner.AddFunc("create-thumbnails-subscription", func(ctx context.Context) error {
pubsubString, err := utils.GenerateSubPubConn(cfg.Database.PubSubConnString, "thumbnails")
if err != nil {
log.Error().Err(err).Msg("failed to generate pubsub connection string")
return err
}
topic, err := pubsub.OpenTopic(ctx, pubsubString)
if err != nil {
return err
}
defer func(topic *pubsub.Topic, ctx context.Context) {
err := topic.Shutdown(ctx)
if err != nil {
log.Err(err).Msg("fail to shutdown pubsub topic")
}
}(topic, ctx)
subscription, err := pubsub.OpenSubscription(ctx, pubsubString)
if err != nil {
log.Err(err).Msg("failed to open pubsub topic")
return err
}
defer func(topic *pubsub.Subscription, ctx context.Context) {
err := topic.Shutdown(ctx)
if err != nil {
log.Err(err).Msg("fail to shutdown pubsub topic")
}
}(subscription, ctx)
for {
select {
case <-ctx.Done():
return ctx.Err()
default:
msg, err := subscription.Receive(ctx)
log.Debug().Msg("received thumbnail generation request from pubsub topic")
if err != nil {
log.Err(err).Msg("failed to receive message from pubsub topic")
// Send analytics if enabled at around midnight UTC
if cfg.Options.AllowAnalytics {
analyticsTime := time.Second
runner.AddPlugin(NewTask("send-analytics", analyticsTime, func(ctx context.Context) {
for {
now := time.Now().UTC()
nextMidnight := time.Date(now.Year(), now.Month(), now.Day()+1, 0, 0, 0, 0, time.UTC)
dur := time.Until(nextMidnight)
analyticsTime = dur
select {
case <-ctx.Done():
return
case <-time.After(dur):
log.Debug().Msg("running send analytics")
err := analytics.Send(version, build())
if err != nil {
log.Error().Err(err).Msg("failed to send analytics")
}
}
groupId, err := uuid.Parse(msg.Metadata["group_id"])
if err != nil {
log.Error().
Err(err).
Str("group_id", msg.Metadata["group_id"]).
Msg("failed to parse group ID from message metadata")
}
attachmentId, err := uuid.Parse(msg.Metadata["attachment_id"])
if err != nil {
log.Error().
Err(err).
Str("attachment_id", msg.Metadata["attachment_id"]).
Msg("failed to parse attachment ID from message metadata")
}
err = app.repos.Attachments.CreateThumbnail(ctx, groupId, attachmentId, msg.Metadata["title"], msg.Metadata["path"])
if err != nil {
log.Err(err).Msg("failed to create thumbnail")
}
msg.Ack()
}
}
})
if cfg.Options.GithubReleaseCheck {
runner.AddPlugin(NewTask("get-latest-github-release", time.Hour, func(ctx context.Context) {
log.Debug().Msg("running get latest github release")
err := app.services.BackgroundService.GetLatestGithubRelease(context.Background())
if err != nil {
log.Error().
Err(err).
Msg("failed to get latest github release")
}
}))
}
if cfg.Debug.Enabled {
runner.AddFunc("debug", func(ctx context.Context) error {
debugserver := http.Server{
Addr: fmt.Sprintf("%s:%s", cfg.Web.Host, cfg.Debug.Port),
Handler: app.debugRouter(),
ReadTimeout: cfg.Web.ReadTimeout,
WriteTimeout: cfg.Web.WriteTimeout,
IdleTimeout: cfg.Web.IdleTimeout,
}
go func() {
<-ctx.Done()
_ = debugserver.Shutdown(context.Background())
}()
log.Info().Msgf("Debug server is running on %s:%s", cfg.Web.Host, cfg.Debug.Port)
return debugserver.ListenAndServe()
})
// Print the configuration to the console
cfg.Print()
}
return runner.Start(context.Background())
}

View File

@@ -0,0 +1,626 @@
package providers
import (
"context"
"crypto/rand"
"crypto/sha256"
"encoding/base64"
"fmt"
"net/http"
"net/url"
"strconv"
"strings"
"time"
"github.com/coreos/go-oidc/v3/oidc"
"github.com/rs/zerolog/log"
"github.com/sysadminsmedia/homebox/backend/internal/core/services"
"github.com/sysadminsmedia/homebox/backend/internal/sys/config"
"golang.org/x/oauth2"
)
type OIDCProvider struct {
service *services.UserService
config *config.OIDCConf
options *config.Options
cookieSecure bool
provider *oidc.Provider
verifier *oidc.IDTokenVerifier
endpoint oauth2.Endpoint
}
type OIDCClaims struct {
Email string
Groups []string
Name string
Subject string
Issuer string
EmailVerified *bool
}
func NewOIDCProvider(service *services.UserService, config *config.OIDCConf, options *config.Options, cookieSecure bool) (*OIDCProvider, error) {
if !config.Enabled {
return nil, fmt.Errorf("OIDC is not enabled")
}
// Validate required configuration
if config.ClientID == "" {
return nil, fmt.Errorf("OIDC client ID is required when OIDC is enabled (set HBOX_OIDC_CLIENT_ID)")
}
if config.ClientSecret == "" {
return nil, fmt.Errorf("OIDC client secret is required when OIDC is enabled (set HBOX_OIDC_CLIENT_SECRET)")
}
if config.IssuerURL == "" {
return nil, fmt.Errorf("OIDC issuer URL is required when OIDC is enabled (set HBOX_OIDC_ISSUER_URL)")
}
ctx, cancel := context.WithTimeout(context.Background(), config.RequestTimeout)
defer cancel()
provider, err := oidc.NewProvider(ctx, config.IssuerURL)
if err != nil {
return nil, fmt.Errorf("failed to create OIDC provider from issuer URL: %w", err)
}
// Create ID token verifier
verifier := provider.Verifier(&oidc.Config{
ClientID: config.ClientID,
})
log.Info().
Str("issuer", config.IssuerURL).
Str("client_id", config.ClientID).
Str("scope", config.Scope).
Msg("OIDC provider initialized successfully with discovery")
return &OIDCProvider{
service: service,
config: config,
options: options,
cookieSecure: cookieSecure,
provider: provider,
verifier: verifier,
endpoint: provider.Endpoint(),
}, nil
}
func (p *OIDCProvider) Name() string {
return "oidc"
}
// Authenticate implements the AuthProvider interface but is not used for OIDC
// OIDC uses dedicated endpoints: GET /api/v1/users/login/oidc and GET /api/v1/users/login/oidc/callback
func (p *OIDCProvider) Authenticate(w http.ResponseWriter, r *http.Request) (services.UserAuthTokenDetail, error) {
_ = w
_ = r
return services.UserAuthTokenDetail{}, fmt.Errorf("OIDC authentication uses dedicated endpoints: /api/v1/users/login/oidc")
}
// AuthenticateWithBaseURL is the main authentication method that requires baseURL
// Called from handleCallback after state, nonce, and PKCE verification
func (p *OIDCProvider) AuthenticateWithBaseURL(baseURL, expectedNonce, pkceVerifier string, _ http.ResponseWriter, r *http.Request) (services.UserAuthTokenDetail, error) {
code := r.URL.Query().Get("code")
if code == "" {
return services.UserAuthTokenDetail{}, fmt.Errorf("missing authorization code")
}
// Get OAuth2 config for this request
oauth2Config := p.getOAuth2Config(baseURL)
// Exchange code for token with timeout and PKCE verifier
ctx, cancel := context.WithTimeout(r.Context(), p.config.RequestTimeout)
defer cancel()
token, err := oauth2Config.Exchange(ctx, code, oauth2.SetAuthURLParam("code_verifier", pkceVerifier))
if err != nil {
log.Err(err).Msg("failed to exchange OIDC code for token")
return services.UserAuthTokenDetail{}, fmt.Errorf("failed to exchange code for token")
}
// Extract ID token
idToken, ok := token.Extra("id_token").(string)
if !ok {
return services.UserAuthTokenDetail{}, fmt.Errorf("no id_token in response")
}
// Parse and validate the ID token using the library's verifier with timeout
verifyCtx, verifyCancel := context.WithTimeout(r.Context(), p.config.RequestTimeout)
defer verifyCancel()
idTokenStruct, err := p.verifier.Verify(verifyCtx, idToken)
if err != nil {
log.Err(err).Msg("failed to verify ID token")
return services.UserAuthTokenDetail{}, fmt.Errorf("failed to verify ID token")
}
// Extract claims from the verified token using dynamic parsing
var rawClaims map[string]interface{}
if err := idTokenStruct.Claims(&rawClaims); err != nil {
log.Err(err).Msg("failed to extract claims from ID token")
return services.UserAuthTokenDetail{}, fmt.Errorf("failed to extract claims from ID token")
}
// Attempt to retrieve UserInfo claims; use them as primary, fallback to ID token claims.
finalClaims := rawClaims
userInfoCtx, uiCancel := context.WithTimeout(r.Context(), p.config.RequestTimeout)
defer uiCancel()
userInfo, uiErr := p.provider.UserInfo(userInfoCtx, oauth2.StaticTokenSource(token))
if uiErr != nil {
log.Debug().Err(uiErr).Msg("OIDC UserInfo fetch failed; falling back to ID token claims")
} else {
var uiClaims map[string]interface{}
if err := userInfo.Claims(&uiClaims); err != nil {
log.Debug().Err(err).Msg("failed to decode UserInfo claims; falling back to ID token claims")
} else {
finalClaims = mergeOIDCClaims(uiClaims, rawClaims) // UserInfo first, then fill gaps from ID token
log.Debug().Int("userinfo_claims", len(uiClaims)).Int("id_token_claims", len(rawClaims)).Int("merged_claims", len(finalClaims)).Msg("merged UserInfo and ID token claims")
}
}
// Parse claims using configurable claim names (after merge)
claims, err := p.parseOIDCClaims(finalClaims)
if err != nil {
log.Err(err).Msg("failed to parse OIDC claims")
return services.UserAuthTokenDetail{}, fmt.Errorf("failed to parse OIDC claims: %w", err)
}
// Verify nonce claim matches expected value (nonce only from ID token; ensure preserved in merged map)
tokenNonce, exists := finalClaims["nonce"]
if !exists {
log.Warn().Msg("nonce claim missing from ID token - possible replay attack")
return services.UserAuthTokenDetail{}, fmt.Errorf("nonce claim missing from token")
}
tokenNonceStr, ok := tokenNonce.(string)
if !ok {
log.Warn().Msg("nonce claim is not a string in ID token")
return services.UserAuthTokenDetail{}, fmt.Errorf("invalid nonce claim format")
}
if tokenNonceStr != expectedNonce {
log.Warn().Str("received", tokenNonceStr).Str("expected", expectedNonce).Msg("OIDC nonce mismatch - possible replay attack")
return services.UserAuthTokenDetail{}, fmt.Errorf("nonce parameter mismatch")
}
// Check if email is verified
if p.config.VerifyEmail {
if claims.EmailVerified == nil {
return services.UserAuthTokenDetail{}, fmt.Errorf("email verification status not found in token claims")
}
if !*claims.EmailVerified {
return services.UserAuthTokenDetail{}, fmt.Errorf("email not verified")
}
}
// Check group authorization if configured
if p.config.AllowedGroups != "" {
allowedGroups := strings.Split(p.config.AllowedGroups, ",")
if !p.hasAllowedGroup(claims.Groups, allowedGroups) {
log.Warn().
Strs("user_groups", claims.Groups).
Strs("allowed_groups", allowedGroups).
Str("user", claims.Email).
Msg("user not in allowed groups")
return services.UserAuthTokenDetail{}, fmt.Errorf("user not in allowed groups")
}
}
// Determine user identity (email, subject, issuer) from claims
email := claims.Email
if email == "" {
return services.UserAuthTokenDetail{}, fmt.Errorf("no email found in token claims")
}
if claims.Subject == "" {
return services.UserAuthTokenDetail{}, fmt.Errorf("no subject (sub) claim present")
}
if claims.Issuer == "" {
claims.Issuer = p.config.IssuerURL // fallback to configured issuer, though spec requires 'iss'
}
// Use the dedicated OIDC login method (issuer + subject identity)
sessionToken, err := p.service.LoginOIDC(r.Context(), claims.Issuer, claims.Subject, email, claims.Name)
if err != nil {
log.Err(err).Str("email", email).Str("issuer", claims.Issuer).Str("subject", claims.Subject).Msg("OIDC login failed")
return services.UserAuthTokenDetail{}, fmt.Errorf("OIDC login failed: %w", err)
}
return sessionToken, nil
}
func (p *OIDCProvider) parseOIDCClaims(rawClaims map[string]interface{}) (OIDCClaims, error) {
var claims OIDCClaims
// Parse email claim
key := p.config.EmailClaim
if key == "" {
key = "email"
}
if emailValue, exists := rawClaims[key]; exists {
if email, ok := emailValue.(string); ok {
claims.Email = email
}
}
// Parse email_verified claim
if p.config.VerifyEmail {
key = p.config.EmailVerifiedClaim
if key == "" {
key = "email_verified"
}
if emailVerifiedValue, exists := rawClaims[key]; exists {
switch v := emailVerifiedValue.(type) {
case bool:
claims.EmailVerified = &v
case string:
if b, err := strconv.ParseBool(v); err == nil {
claims.EmailVerified = &b
}
}
}
}
// Parse name claim
key = p.config.NameClaim
if key == "" {
key = "name"
}
if nameValue, exists := rawClaims[key]; exists {
if name, ok := nameValue.(string); ok {
claims.Name = name
}
}
// Parse groups claim
key = p.config.GroupClaim
if key == "" {
key = "groups"
}
if groupsValue, exists := rawClaims[key]; exists {
switch groups := groupsValue.(type) {
case []interface{}:
for _, group := range groups {
if groupStr, ok := group.(string); ok {
claims.Groups = append(claims.Groups, groupStr)
}
}
case []string:
claims.Groups = groups
case string:
// Single group as string
claims.Groups = []string{groups}
}
}
// Parse subject claim (always "sub")
if subValue, exists := rawClaims["sub"]; exists {
if subject, ok := subValue.(string); ok {
claims.Subject = subject
}
}
// Parse issuer claim ("iss")
if issValue, exists := rawClaims["iss"]; exists {
if iss, ok := issValue.(string); ok {
claims.Issuer = iss
}
}
return claims, nil
}
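// Hedged sketch (not used by the provider): with the default claim names above, a
// raw claim map like this one would yield OIDCClaims with Email, Name, Groups,
// Subject, and Issuer populated; every value is illustrative only.
var exampleRawClaims = map[string]interface{}{
"iss": "https://id.example.com",
"sub": "248289761001",
"email": "jane@example.com",
"email_verified": true,
"name": "Jane Doe",
"groups": []interface{}{"homebox-users", "admins"},
"nonce": "q1w2e3r4",
}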
func (p *OIDCProvider) hasAllowedGroup(userGroups, allowedGroups []string) bool {
if len(allowedGroups) == 0 {
return true
}
allowedGroupsMap := make(map[string]bool)
for _, group := range allowedGroups {
allowedGroupsMap[strings.TrimSpace(group)] = true
}
for _, userGroup := range userGroups {
if allowedGroupsMap[userGroup] {
return true
}
}
return false
}
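// exampleGroupCheck is an illustrative sketch (not called anywhere): allowed-group
// entries are trimmed before comparison, so a configured value of " admins" still
// matches a token group of "admins"; the group names are examples only.
func exampleGroupCheck(p *OIDCProvider) bool {
return p.hasAllowedGroup([]string{"admins"}, []string{" admins", "homebox-users"})
}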
func (p *OIDCProvider) GetAuthURL(baseURL, state, nonce, pkceVerifier string) string {
oauth2Config := p.getOAuth2Config(baseURL)
pkceChallenge := generatePKCEChallenge(pkceVerifier)
return oauth2Config.AuthCodeURL(state,
oidc.Nonce(nonce),
oauth2.SetAuthURLParam("code_challenge", pkceChallenge),
oauth2.SetAuthURLParam("code_challenge_method", "S256"))
}
func (p *OIDCProvider) getOAuth2Config(baseURL string) oauth2.Config {
// Construct full redirect URL with dedicated callback endpoint
redirectURL, err := url.JoinPath(baseURL, "/api/v1/users/login/oidc/callback")
if err != nil {
log.Err(err).Msg("failed to construct redirect URL")
return oauth2.Config{}
}
return oauth2.Config{
ClientID: p.config.ClientID,
ClientSecret: p.config.ClientSecret,
RedirectURL: redirectURL,
Endpoint: p.endpoint,
Scopes: strings.Fields(p.config.Scope),
}
}
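// exampleRedirectURL is an illustrative sketch (not called anywhere): for a base URL
// of "https://inventory.example.com", url.JoinPath above produces
// "https://inventory.example.com/api/v1/users/login/oidc/callback", which is the
// redirect URI that must be registered with the identity provider. The host name is
// an example, not a default.
func exampleRedirectURL(p *OIDCProvider) string {
return p.getOAuth2Config("https://inventory.example.com").RedirectURL
}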
// initiateOIDCFlow handles the initial OIDC authentication request by redirecting to the provider
func (p *OIDCProvider) initiateOIDCFlow(w http.ResponseWriter, r *http.Request) (services.UserAuthTokenDetail, error) {
// Generate state parameter for CSRF protection
state, err := generateSecureToken()
if err != nil {
log.Err(err).Msg("failed to generate OIDC state parameter")
return services.UserAuthTokenDetail{}, fmt.Errorf("internal server error")
}
// Generate nonce parameter for replay attack protection
nonce, err := generateSecureToken()
if err != nil {
log.Err(err).Msg("failed to generate OIDC nonce parameter")
return services.UserAuthTokenDetail{}, fmt.Errorf("internal server error")
}
// Generate PKCE verifier for code interception protection
pkceVerifier, err := generatePKCEVerifier()
if err != nil {
log.Err(err).Msg("failed to generate OIDC PKCE verifier")
return services.UserAuthTokenDetail{}, fmt.Errorf("internal server error")
}
// Get base URL from request
baseURL := p.getBaseURL(r)
u, _ := url.Parse(baseURL)
domain := u.Hostname()
if domain == "" {
domain = noPort(r.Host)
}
// Store state in session cookie for validation
http.SetCookie(w, &http.Cookie{
Name: "oidc_state",
Value: state,
Expires: time.Now().Add(p.config.StateExpiry),
Domain: domain,
Secure: p.isSecure(r),
HttpOnly: true,
Path: "/",
SameSite: http.SameSiteLaxMode,
})
// Store nonce in session cookie for validation
http.SetCookie(w, &http.Cookie{
Name: "oidc_nonce",
Value: nonce,
Expires: time.Now().Add(p.config.StateExpiry),
Domain: domain,
Secure: p.isSecure(r),
HttpOnly: true,
Path: "/",
SameSite: http.SameSiteLaxMode,
})
// Store PKCE verifier in session cookie for token exchange
http.SetCookie(w, &http.Cookie{
Name: "oidc_pkce_verifier",
Value: pkceVerifier,
Expires: time.Now().Add(p.config.StateExpiry),
Domain: domain,
Secure: p.isSecure(r),
HttpOnly: true,
Path: "/",
SameSite: http.SameSiteLaxMode,
})
// Generate auth URL and redirect
authURL := p.GetAuthURL(baseURL, state, nonce, pkceVerifier)
http.Redirect(w, r, authURL, http.StatusFound)
// Return empty token since this is a redirect response
return services.UserAuthTokenDetail{}, nil
}
// handleCallback processes the OAuth2 callback from the OIDC provider
func (p *OIDCProvider) handleCallback(w http.ResponseWriter, r *http.Request) (services.UserAuthTokenDetail, error) {
// Helper to clear the OIDC state, nonce, and PKCE cookies using the computed domain
baseURL := p.getBaseURL(r)
u, _ := url.Parse(baseURL)
domain := u.Hostname()
if domain == "" {
domain = noPort(r.Host)
}
clearCookies := func() {
http.SetCookie(w, &http.Cookie{
Name: "oidc_state",
Value: "",
Expires: time.Unix(0, 0),
Domain: domain,
MaxAge: -1,
Secure: p.isSecure(r),
HttpOnly: true,
Path: "/",
SameSite: http.SameSiteLaxMode,
})
http.SetCookie(w, &http.Cookie{
Name: "oidc_nonce",
Value: "",
Expires: time.Unix(0, 0),
Domain: domain,
MaxAge: -1,
Secure: p.isSecure(r),
HttpOnly: true,
Path: "/",
SameSite: http.SameSiteLaxMode,
})
http.SetCookie(w, &http.Cookie{
Name: "oidc_pkce_verifier",
Value: "",
Expires: time.Unix(0, 0),
Domain: domain,
MaxAge: -1,
Secure: p.isSecure(r),
HttpOnly: true,
Path: "/",
SameSite: http.SameSiteLaxMode,
})
}
// Check for OAuth error responses first
if errCode := r.URL.Query().Get("error"); errCode != "" {
errDesc := r.URL.Query().Get("error_description")
log.Warn().Str("error", errCode).Str("description", errDesc).Msg("OIDC provider returned error")
clearCookies()
return services.UserAuthTokenDetail{}, fmt.Errorf("OIDC provider error: %s - %s", errCode, errDesc)
}
// Verify state parameter
stateCookie, err := r.Cookie("oidc_state")
if err != nil {
log.Warn().Err(err).Msg("OIDC state cookie not found - possible CSRF attack or expired session")
clearCookies()
return services.UserAuthTokenDetail{}, fmt.Errorf("state cookie not found")
}
stateParam := r.URL.Query().Get("state")
if stateParam == "" {
log.Warn().Msg("OIDC state parameter missing from callback")
clearCookies()
return services.UserAuthTokenDetail{}, fmt.Errorf("state parameter missing")
}
if stateParam != stateCookie.Value {
log.Warn().Str("received", stateParam).Str("expected", stateCookie.Value).Msg("OIDC state mismatch - possible CSRF attack")
clearCookies()
return services.UserAuthTokenDetail{}, fmt.Errorf("state parameter mismatch")
}
// Verify nonce parameter
nonceCookie, err := r.Cookie("oidc_nonce")
if err != nil {
log.Warn().Err(err).Msg("OIDC nonce cookie not found - possible replay attack or expired session")
clearCookies()
return services.UserAuthTokenDetail{}, fmt.Errorf("nonce cookie not found")
}
// Verify PKCE verifier parameter
pkceCookie, err := r.Cookie("oidc_pkce_verifier")
if err != nil {
log.Warn().Err(err).Msg("OIDC PKCE verifier cookie not found - possible code interception attack or expired session")
clearCookies()
return services.UserAuthTokenDetail{}, fmt.Errorf("PKCE verifier cookie not found")
}
// Clear cookies before proceeding to token verification
clearCookies()
// Use the existing callback logic but return the token instead of redirecting
return p.AuthenticateWithBaseURL(baseURL, nonceCookie.Value, pkceCookie.Value, w, r)
}
// Helper functions
func generateSecureToken() (string, error) {
// Generate 32 bytes of cryptographically secure random data
bytes := make([]byte, 32)
_, err := rand.Read(bytes)
if err != nil {
return "", fmt.Errorf("failed to generate secure random token: %w", err)
}
// Use URL-safe base64 encoding without padding for clean URLs
return base64.RawURLEncoding.EncodeToString(bytes), nil
}
// generatePKCEVerifier generates a cryptographically secure code verifier for PKCE
func generatePKCEVerifier() (string, error) {
// PKCE verifiers must be 43-128 characters; we use 43 for efficiency.
// 32 random bytes encode to 43 characters in unpadded base64url.
bytes := make([]byte, 32)
_, err := rand.Read(bytes)
if err != nil {
return "", fmt.Errorf("failed to generate PKCE verifier: %w", err)
}
return base64.RawURLEncoding.EncodeToString(bytes), nil
}
// generatePKCEChallenge generates a code challenge from a verifier using S256 method
func generatePKCEChallenge(verifier string) string {
hash := sha256.Sum256([]byte(verifier))
return base64.RawURLEncoding.EncodeToString(hash[:])
}
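// examplePKCEPair is a hedged sketch of the pairing used above: the verifier is kept
// in the oidc_pkce_verifier cookie and only its S256 challenge is sent with the
// authorization request; the provider later checks that SHA-256(verifier) matches
// the challenge during the code exchange.
func examplePKCEPair() (verifier, challenge string, err error) {
verifier, err = generatePKCEVerifier()
if err != nil {
return "", "", err
}
return verifier, generatePKCEChallenge(verifier), nil
}
// noPort strips a trailing ":port" from a host; bracketed IPv6 literals are not handled here.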
func noPort(host string) string {
return strings.Split(host, ":")[0]
}
func (p *OIDCProvider) getBaseURL(r *http.Request) string {
scheme := "http"
if r.TLS != nil {
scheme = "https"
} else if p.options.TrustProxy && r.Header.Get("X-Forwarded-Proto") == "https" {
scheme = "https"
}
host := r.Host
if p.options.Hostname != "" {
host = p.options.Hostname
} else if p.options.TrustProxy {
if xfHost := r.Header.Get("X-Forwarded-Host"); xfHost != "" {
host = xfHost
}
}
return scheme + "://" + host
}
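// exampleBaseURLBehindProxy is an illustrative sketch (not called anywhere): with
// TrustProxy enabled and a reverse proxy sending X-Forwarded-Proto: https and
// X-Forwarded-Host: inventory.example.com, getBaseURL returns
// "https://inventory.example.com"; the host name is an example, not a default.
func exampleBaseURLBehindProxy(p *OIDCProvider, r *http.Request) string {
return p.getBaseURL(r)
}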
func (p *OIDCProvider) isSecure(r *http.Request) bool {
_ = r
return p.cookieSecure
}
// InitiateOIDCFlow starts the OIDC authentication flow by redirecting to the provider
func (p *OIDCProvider) InitiateOIDCFlow(w http.ResponseWriter, r *http.Request) (services.UserAuthTokenDetail, error) {
return p.initiateOIDCFlow(w, r)
}
// HandleCallback processes the OIDC callback and returns the authenticated user token
func (p *OIDCProvider) HandleCallback(w http.ResponseWriter, r *http.Request) (services.UserAuthTokenDetail, error) {
return p.handleCallback(w, r)
}
func mergeOIDCClaims(primary, secondary map[string]interface{}) map[string]interface{} {
// primary has precedence; fill missing/empty values from secondary.
merged := make(map[string]interface{}, len(primary)+len(secondary))
for k, v := range primary {
merged[k] = v
}
for k, v := range secondary {
if existing, ok := merged[k]; !ok || isEmptyClaim(existing) {
merged[k] = v
}
}
return merged
}
func isEmptyClaim(v interface{}) bool {
if v == nil {
return true
}
switch val := v.(type) {
case string:
return val == ""
case []interface{}:
return len(val) == 0
case []string:
return len(val) == 0
default:
return false
}
}
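// exampleMergeClaims is a hedged sketch of the precedence used above: UserInfo
// claims win, and ID token claims only fill keys that are missing or empty in the
// UserInfo response; all values are illustrative.
func exampleMergeClaims() map[string]interface{} {
userInfo := map[string]interface{}{"email": "jane@example.com", "groups": []interface{}{}}
idToken := map[string]interface{}{"email": "old@example.com", "groups": []interface{}{"admins"}, "nonce": "q1w2e3r4"}
// Result: email stays from UserInfo; groups and nonce are filled from the ID token
// because the UserInfo groups value is empty and nonce is absent.
return mergeOIDCClaims(userInfo, idToken)
}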

View File

@@ -0,0 +1,151 @@
package main
import (
"context"
"fmt"
"net/http"
"time"
"github.com/google/uuid"
"github.com/hay-kot/httpkit/graceful"
"github.com/rs/zerolog/log"
"github.com/sysadminsmedia/homebox/backend/internal/sys/config"
"github.com/sysadminsmedia/homebox/backend/pkgs/utils"
"gocloud.dev/pubsub"
)
func registerRecurringTasks(app *app, cfg *config.Config, runner *graceful.Runner) {
runner.AddFunc("eventbus", app.bus.Run)
runner.AddFunc("seed_database", func(ctx context.Context) error {
if cfg.Demo {
log.Info().Msg("Running in demo mode, creating demo data")
err := app.SetupDemo()
if err != nil {
log.Error().Err(err).Msg("failed to setup demo data")
return fmt.Errorf("failed to setup demo data: %w", err)
}
}
return nil
})
runner.AddPlugin(NewTask("purge-tokens", 24*time.Hour, func(ctx context.Context) {
_, err := app.repos.AuthTokens.PurgeExpiredTokens(ctx)
if err != nil {
log.Error().Err(err).Msg("failed to purge expired tokens")
}
}))
runner.AddPlugin(NewTask("purge-invitations", 24*time.Hour, func(ctx context.Context) {
_, err := app.repos.Groups.InvitationPurge(ctx)
if err != nil {
log.Error().Err(err).Msg("failed to purge expired invitations")
}
}))
runner.AddPlugin(NewTask("send-notifications", time.Hour, func(ctx context.Context) {
now := time.Now()
if now.Hour() == 8 {
log.Debug().Msg("running notifiers")
err := app.services.BackgroundService.SendNotifiersToday(context.Background())
if err != nil {
log.Error().Err(err).Msg("failed to send notifiers")
}
}
}))
if cfg.Thumbnail.Enabled {
runner.AddFunc("create-thumbnails-subscription", func(ctx context.Context) error {
pubsubString, err := utils.GenerateSubPubConn(cfg.Database.PubSubConnString, "thumbnails")
if err != nil {
log.Error().Err(err).Msg("failed to generate pubsub connection string")
return err
}
topic, err := pubsub.OpenTopic(ctx, pubsubString)
if err != nil {
return err
}
defer func(topic *pubsub.Topic, ctx context.Context) {
err := topic.Shutdown(ctx)
if err != nil {
log.Err(err).Msg("fail to shutdown pubsub topic")
}
}(topic, ctx)
subscription, err := pubsub.OpenSubscription(ctx, pubsubString)
if err != nil {
log.Err(err).Msg("failed to open pubsub topic")
return err
}
defer func(sub *pubsub.Subscription, ctx context.Context) {
err := sub.Shutdown(ctx)
if err != nil {
log.Err(err).Msg("failed to shut down pubsub subscription")
}
}(subscription, ctx)
for {
select {
case <-ctx.Done():
return ctx.Err()
default:
msg, err := subscription.Receive(ctx)
log.Debug().Msg("received thumbnail generation request from pubsub topic")
if err != nil {
log.Err(err).Msg("failed to receive message from pubsub topic")
continue
}
if msg == nil {
log.Warn().Msg("received nil message from pubsub topic")
continue
}
groupId, err := uuid.Parse(msg.Metadata["group_id"])
if err != nil {
log.Error().Err(err).Str("group_id", msg.Metadata["group_id"]).Msg("failed to parse group ID from message metadata")
}
attachmentId, err := uuid.Parse(msg.Metadata["attachment_id"])
if err != nil {
log.Error().Err(err).Str("attachment_id", msg.Metadata["attachment_id"]).Msg("failed to parse attachment ID from message metadata")
}
err = app.repos.Attachments.CreateThumbnail(ctx, groupId, attachmentId, msg.Metadata["title"], msg.Metadata["path"])
if err != nil {
log.Err(err).Msg("failed to create thumbnail")
}
msg.Ack()
}
}
})
}
if cfg.Options.GithubReleaseCheck {
runner.AddPlugin(NewTask("get-latest-github-release", time.Hour, func(ctx context.Context) {
log.Debug().Msg("running get latest github release")
err := app.services.BackgroundService.GetLatestGithubRelease(context.Background())
if err != nil {
log.Error().Err(err).Msg("failed to get latest github release")
}
}))
}
if cfg.Debug.Enabled {
runner.AddFunc("debug", func(ctx context.Context) error {
debugserver := http.Server{
Addr: fmt.Sprintf("%s:%s", cfg.Web.Host, cfg.Debug.Port),
Handler: app.debugRouter(),
ReadTimeout: cfg.Web.ReadTimeout,
WriteTimeout: cfg.Web.WriteTimeout,
IdleTimeout: cfg.Web.IdleTimeout,
}
go func() {
<-ctx.Done()
_ = debugserver.Shutdown(context.Background())
}()
log.Info().Msgf("Debug server is running on %s:%s", cfg.Web.Host, cfg.Debug.Port)
return debugserver.ListenAndServe()
})
// Print the configuration to the console
cfg.Print()
}
}
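// runPeriodicallySketch is a hypothetical helper (NewTask is defined elsewhere in
// this package and may differ): it shows the kind of loop the periodic tasks above
// imply, running fn every interval until the context is cancelled.
func runPeriodicallySketch(ctx context.Context, interval time.Duration, fn func(context.Context)) {
ticker := time.NewTicker(interval)
defer ticker.Stop()
for {
select {
case <-ctx.Done():
return
case <-ticker.C:
fn(ctx)
}
}
}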

View File

@@ -75,6 +75,11 @@ func (a *app) mountRoutes(r *chi.Mux, chain *errchain.ErrChain, repos *repo.AllR
r.Post("/users/register", chain.ToHandlerFunc(v1Ctrl.HandleUserRegistration()))
r.Post("/users/login", chain.ToHandlerFunc(v1Ctrl.HandleAuthLogin(providers...)))
if a.conf.OIDC.Enabled {
r.Get("/users/login/oidc", chain.ToHandlerFunc(v1Ctrl.HandleOIDCLogin()))
r.Get("/users/login/oidc/callback", chain.ToHandlerFunc(v1Ctrl.HandleOIDCCallback()))
}
userMW := []errchain.Middleware{
a.mwAuthToken,
a.mwRoles(RoleModeOr, authroles.RoleUser.String()),
@@ -129,6 +134,7 @@ func (a *app) mountRoutes(r *chi.Mux, chain *errchain.ErrChain, repos *repo.AllR
r.Put("/items/{id}", chain.ToHandlerFunc(v1Ctrl.HandleItemUpdate(), userMW...))
r.Patch("/items/{id}", chain.ToHandlerFunc(v1Ctrl.HandleItemPatch(), userMW...))
r.Delete("/items/{id}", chain.ToHandlerFunc(v1Ctrl.HandleItemDelete(), userMW...))
r.Post("/items/{id}/duplicate", chain.ToHandlerFunc(v1Ctrl.HandleItemDuplicate(), userMW...))
r.Post("/items/{id}/attachments", chain.ToHandlerFunc(v1Ctrl.HandleItemAttachmentCreate(), userMW...))
r.Put("/items/{id}/attachments/{attachment_id}", chain.ToHandlerFunc(v1Ctrl.HandleItemAttachmentUpdate(), userMW...))
@@ -139,6 +145,14 @@ func (a *app) mountRoutes(r *chi.Mux, chain *errchain.ErrChain, repos *repo.AllR
r.Get("/assets/{id}", chain.ToHandlerFunc(v1Ctrl.HandleAssetGet(), userMW...))
// Item Templates
r.Get("/templates", chain.ToHandlerFunc(v1Ctrl.HandleItemTemplatesGetAll(), userMW...))
r.Post("/templates", chain.ToHandlerFunc(v1Ctrl.HandleItemTemplatesCreate(), userMW...))
r.Get("/templates/{id}", chain.ToHandlerFunc(v1Ctrl.HandleItemTemplatesGet(), userMW...))
r.Put("/templates/{id}", chain.ToHandlerFunc(v1Ctrl.HandleItemTemplatesUpdate(), userMW...))
r.Delete("/templates/{id}", chain.ToHandlerFunc(v1Ctrl.HandleItemTemplatesDelete(), userMW...))
r.Post("/templates/{id}/create-item", chain.ToHandlerFunc(v1Ctrl.HandleItemTemplatesCreateItem(), userMW...))
// Maintenance
r.Get("/maintenance", chain.ToHandlerFunc(v1Ctrl.HandleMaintenanceGetAll(), userMW...))
r.Put("/maintenance/{id}", chain.ToHandlerFunc(v1Ctrl.HandleMaintenanceEntryUpdate(), userMW...))
@@ -157,6 +171,8 @@ func (a *app) mountRoutes(r *chi.Mux, chain *errchain.ErrChain, repos *repo.AllR
a.mwRoles(RoleModeOr, authroles.RoleUser.String(), authroles.RoleAttachments.String()),
}
r.Get("/products/search-from-barcode", chain.ToHandlerFunc(v1Ctrl.HandleProductSearchFromBarcode(a.conf.Barcode), userMW...))
r.Get("/qrcode", chain.ToHandlerFunc(v1Ctrl.HandleGenerateQRCode(), assetMW...))
r.Get(
"/items/{id}/attachments/{attachment_id}",

103
backend/app/api/setup.go Normal file
View File

@@ -0,0 +1,103 @@
package main
import (
"bytes"
"fmt"
"os"
"path/filepath"
"strings"
"github.com/rs/zerolog/log"
"github.com/sysadminsmedia/homebox/backend/internal/core/currencies"
"github.com/sysadminsmedia/homebox/backend/internal/sys/config"
)
// setupStorageDir handles the creation and validation of the storage directory.
func setupStorageDir(cfg *config.Config) error {
if strings.HasPrefix(cfg.Storage.ConnString, "file:///./") {
raw := strings.TrimPrefix(cfg.Storage.ConnString, "file:///./")
clean := filepath.Clean(raw)
absBase, err := filepath.Abs(clean)
if err != nil {
log.Error().Err(err).Msg("failed to get absolute path for storage connection string")
return fmt.Errorf("failed to get absolute path for storage connection string: %w", err)
}
absBase = strings.ReplaceAll(absBase, "\\", "/")
storageDir := filepath.Join(absBase, cfg.Storage.PrefixPath)
storageDir = strings.ReplaceAll(storageDir, "\\", "/")
if !strings.HasPrefix(storageDir, absBase+"/") && storageDir != absBase {
log.Error().Str("path", storageDir).Msg("invalid storage path: you tried to use a prefix that is not a subdirectory of the base path")
return fmt.Errorf("invalid storage path: you tried to use a prefix that is not a subdirectory of the base path")
}
if err := os.MkdirAll(storageDir, 0o750); err != nil {
log.Error().Err(err).Msg("failed to create data directory")
return fmt.Errorf("failed to create data directory: %w", err)
}
}
return nil
}
// setupDatabaseURL returns the database URL and ensures any required directories exist.
func setupDatabaseURL(cfg *config.Config) (string, error) {
databaseURL := ""
switch strings.ToLower(cfg.Database.Driver) {
case "sqlite3":
databaseURL = cfg.Database.SqlitePath
dbFilePath := strings.Split(cfg.Database.SqlitePath, "?")[0]
dbDir := filepath.Dir(dbFilePath)
if err := os.MkdirAll(dbDir, 0o755); err != nil {
log.Error().Err(err).Str("path", dbDir).Msg("failed to create SQLite database directory")
return "", fmt.Errorf("failed to create SQLite database directory: %w", err)
}
case "postgres":
databaseURL = fmt.Sprintf("host=%s port=%s dbname=%s sslmode=%s", cfg.Database.Host, cfg.Database.Port, cfg.Database.Database, cfg.Database.SslMode)
if cfg.Database.Username != "" {
databaseURL += fmt.Sprintf(" user=%s", cfg.Database.Username)
}
if cfg.Database.Password != "" {
databaseURL += fmt.Sprintf(" password=%s", cfg.Database.Password)
}
if cfg.Database.SslRootCert != "" {
if _, err := os.Stat(cfg.Database.SslRootCert); err != nil {
log.Error().Err(err).Str("path", cfg.Database.SslRootCert).Msg("SSL root certificate file is not accessible")
return "", fmt.Errorf("SSL root certificate file is not accessible: %w", err)
}
databaseURL += fmt.Sprintf(" sslrootcert=%s", cfg.Database.SslRootCert)
}
if cfg.Database.SslCert != "" {
if _, err := os.Stat(cfg.Database.SslCert); err != nil {
log.Error().Err(err).Str("path", cfg.Database.SslCert).Msg("SSL certificate file is not accessible")
return "", fmt.Errorf("SSL certificate file is not accessible: %w", err)
}
databaseURL += fmt.Sprintf(" sslcert=%s", cfg.Database.SslCert)
}
if cfg.Database.SslKey != "" {
if _, err := os.Stat(cfg.Database.SslKey); err != nil {
log.Error().Err(err).Str("path", cfg.Database.SslKey).Msg("SSL key file is not accessible")
return "", fmt.Errorf("SSL key file is not accessible: %w", err)
}
databaseURL += fmt.Sprintf(" sslkey=%s", cfg.Database.SslKey)
}
default:
log.Error().Str("driver", cfg.Database.Driver).Msg("unsupported database driver")
return "", fmt.Errorf("unsupported database driver: %s", cfg.Database.Driver)
}
return databaseURL, nil
}
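// exampleDSN is an illustrative sketch (placeholder values, not defaults) of the
// Postgres connection string assembled above for host "db", port "5432", database
// "homebox", sslmode "disable", user "homebox", and password "secret".
const exampleDSN = "host=db port=5432 dbname=homebox sslmode=disable user=homebox password=secret"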
// loadCurrencies loads currency data from config if provided.
func loadCurrencies(cfg *config.Config) ([]currencies.CollectorFunc, error) {
collectFuncs := []currencies.CollectorFunc{
currencies.CollectDefaults(),
}
if cfg.Options.CurrencyConfig != "" {
log.Info().Str("path", cfg.Options.CurrencyConfig).Msg("loading currency config file")
content, err := os.ReadFile(cfg.Options.CurrencyConfig)
if err != nil {
log.Error().Err(err).Str("path", cfg.Options.CurrencyConfig).Msg("failed to read currency config file")
return nil, err
}
collectFuncs = append(collectFuncs, currencies.CollectJSON(bytes.NewReader(content)))
}
return collectFuncs, nil
}

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -34,6 +34,8 @@ definitions:
properties:
code:
type: string
decimals:
type: integer
local:
type: string
name:
@@ -177,6 +179,11 @@ definitions:
items:
$ref: '#/definitions/ent.GroupInvitationToken'
type: array
item_templates:
description: ItemTemplates holds the value of the item_templates edge.
items:
$ref: '#/definitions/ent.ItemTemplate'
type: array
items:
description: Items holds the value of the items edge.
items:
@@ -412,6 +419,92 @@ definitions:
- $ref: '#/definitions/ent.Item'
description: Item holds the value of the item edge.
type: object
ent.ItemTemplate:
properties:
created_at:
description: CreatedAt holds the value of the "created_at" field.
type: string
default_description:
description: Default description for items created from this template
type: string
default_insured:
description: DefaultInsured holds the value of the "default_insured" field.
type: boolean
default_label_ids:
description: Default label IDs for items created from this template
items:
type: string
type: array
default_lifetime_warranty:
description: DefaultLifetimeWarranty holds the value of the "default_lifetime_warranty"
field.
type: boolean
default_manufacturer:
description: DefaultManufacturer holds the value of the "default_manufacturer"
field.
type: string
default_model_number:
description: Default model number for items created from this template
type: string
default_name:
description: Default name template for items (can use placeholders)
type: string
default_quantity:
description: DefaultQuantity holds the value of the "default_quantity" field.
type: integer
default_warranty_details:
description: DefaultWarrantyDetails holds the value of the "default_warranty_details"
field.
type: string
description:
description: Description holds the value of the "description" field.
type: string
edges:
allOf:
- $ref: '#/definitions/ent.ItemTemplateEdges'
description: |-
Edges holds the relations/edges for other nodes in the graph.
The values are being populated by the ItemTemplateQuery when eager-loading is set.
id:
description: ID of the ent.
type: string
include_purchase_fields:
description: Whether to include purchase fields in items created from this
template
type: boolean
include_sold_fields:
description: Whether to include sold fields in items created from this template
type: boolean
include_warranty_fields:
description: Whether to include warranty fields in items created from this
template
type: boolean
name:
description: Name holds the value of the "name" field.
type: string
notes:
description: Notes holds the value of the "notes" field.
type: string
updated_at:
description: UpdatedAt holds the value of the "updated_at" field.
type: string
type: object
ent.ItemTemplateEdges:
properties:
fields:
description: Fields holds the value of the fields edge.
items:
$ref: '#/definitions/ent.TemplateField'
type: array
group:
allOf:
- $ref: '#/definitions/ent.Group'
description: Group holds the value of the group edge.
location:
allOf:
- $ref: '#/definitions/ent.Location'
description: Location holds the value of the location edge.
type: object
ent.Label:
properties:
color:
@@ -580,6 +673,44 @@ definitions:
- $ref: '#/definitions/ent.User'
description: User holds the value of the user edge.
type: object
ent.TemplateField:
properties:
created_at:
description: CreatedAt holds the value of the "created_at" field.
type: string
description:
description: Description holds the value of the "description" field.
type: string
edges:
allOf:
- $ref: '#/definitions/ent.TemplateFieldEdges'
description: |-
Edges holds the relations/edges for other nodes in the graph.
The values are being populated by the TemplateFieldQuery when eager-loading is set.
id:
description: ID of the ent.
type: string
name:
description: Name holds the value of the "name" field.
type: string
text_value:
description: TextValue holds the value of the "text_value" field.
type: string
type:
allOf:
- $ref: '#/definitions/templatefield.Type'
description: Type holds the value of the "type" field.
updated_at:
description: UpdatedAt holds the value of the "updated_at" field.
type: string
type: object
ent.TemplateFieldEdges:
properties:
item_template:
allOf:
- $ref: '#/definitions/ent.ItemTemplate'
description: ItemTemplate holds the value of the item_template edge.
type: object
ent.User:
properties:
activated_on:
@@ -606,6 +737,12 @@ definitions:
name:
description: Name holds the value of the "name" field.
type: string
oidc_issuer:
description: OidcIssuer holds the value of the "oidc_issuer" field.
type: string
oidc_subject:
description: OidcSubject holds the value of the "oidc_subject" field.
type: string
role:
allOf:
- $ref: '#/definitions/user.Role'
@@ -646,6 +783,38 @@ definitions:
- TypeNumber
- TypeBoolean
- TypeTime
repo.BarcodeProduct:
properties:
barcode:
type: string
imageBase64:
type: string
imageURL:
type: string
item:
$ref: '#/definitions/repo.ItemCreate'
manufacturer:
type: string
modelNumber:
description: Identifications
type: string
notes:
description: Extras
type: string
search_engine_name:
type: string
type: object
repo.DuplicateOptions:
properties:
copyAttachments:
type: boolean
copyCustomFields:
type: boolean
copyMaintenance:
type: boolean
copyPrefix:
type: string
type: object
repo.Group:
properties:
createdAt:
@@ -841,6 +1010,16 @@ definitions:
properties:
id:
type: string
labelIds:
items:
type: string
type: array
x-nullable: true
x-omitempty: true
locationId:
type: string
x-nullable: true
x-omitempty: true
quantity:
type: integer
x-nullable: true
@@ -900,6 +1079,201 @@ definitions:
updatedAt:
type: string
type: object
repo.ItemTemplateCreate:
properties:
defaultDescription:
maxLength: 1000
type: string
x-nullable: true
defaultInsured:
type: boolean
defaultLabelIds:
items:
type: string
type: array
x-nullable: true
defaultLifetimeWarranty:
type: boolean
defaultLocationId:
description: Default location and labels
type: string
x-nullable: true
defaultManufacturer:
maxLength: 255
type: string
x-nullable: true
defaultModelNumber:
maxLength: 255
type: string
x-nullable: true
defaultName:
maxLength: 255
type: string
x-nullable: true
defaultQuantity:
description: Default values for items
type: integer
x-nullable: true
defaultWarrantyDetails:
maxLength: 1000
type: string
x-nullable: true
description:
maxLength: 1000
type: string
fields:
description: Custom fields
items:
$ref: '#/definitions/repo.TemplateField'
type: array
includePurchaseFields:
type: boolean
includeSoldFields:
type: boolean
includeWarrantyFields:
description: Metadata flags
type: boolean
name:
maxLength: 255
minLength: 1
type: string
notes:
maxLength: 1000
type: string
required:
- name
type: object
repo.ItemTemplateOut:
properties:
createdAt:
type: string
defaultDescription:
type: string
defaultInsured:
type: boolean
defaultLabels:
items:
$ref: '#/definitions/repo.TemplateLabelSummary'
type: array
defaultLifetimeWarranty:
type: boolean
defaultLocation:
allOf:
- $ref: '#/definitions/repo.TemplateLocationSummary'
description: Default location and labels
defaultManufacturer:
type: string
defaultModelNumber:
type: string
defaultName:
type: string
defaultQuantity:
description: Default values for items
type: integer
defaultWarrantyDetails:
type: string
description:
type: string
fields:
description: Custom fields
items:
$ref: '#/definitions/repo.TemplateField'
type: array
id:
type: string
includePurchaseFields:
type: boolean
includeSoldFields:
type: boolean
includeWarrantyFields:
description: Metadata flags
type: boolean
name:
type: string
notes:
type: string
updatedAt:
type: string
type: object
repo.ItemTemplateSummary:
properties:
createdAt:
type: string
description:
type: string
id:
type: string
name:
type: string
updatedAt:
type: string
type: object
repo.ItemTemplateUpdate:
properties:
defaultDescription:
maxLength: 1000
type: string
x-nullable: true
defaultInsured:
type: boolean
defaultLabelIds:
items:
type: string
type: array
x-nullable: true
defaultLifetimeWarranty:
type: boolean
defaultLocationId:
description: Default location and labels
type: string
x-nullable: true
defaultManufacturer:
maxLength: 255
type: string
x-nullable: true
defaultModelNumber:
maxLength: 255
type: string
x-nullable: true
defaultName:
maxLength: 255
type: string
x-nullable: true
defaultQuantity:
description: Default values for items
type: integer
x-nullable: true
defaultWarrantyDetails:
maxLength: 1000
type: string
x-nullable: true
description:
maxLength: 1000
type: string
fields:
description: Custom fields
items:
$ref: '#/definitions/repo.TemplateField'
type: array
id:
type: string
includePurchaseFields:
type: boolean
includeSoldFields:
type: boolean
includeWarrantyFields:
description: Metadata flags
type: boolean
name:
maxLength: 255
minLength: 1
type: string
notes:
maxLength: 1000
type: string
required:
- name
type: object
repo.ItemType:
enum:
- location
@@ -991,7 +1365,7 @@ definitions:
color:
type: string
description:
maxLength: 255
maxLength: 1000
type: string
name:
maxLength: 255
@@ -1237,6 +1611,31 @@ definitions:
total:
type: integer
type: object
repo.TemplateField:
properties:
id:
type: string
name:
type: string
textValue:
type: string
type:
type: string
type: object
repo.TemplateLabelSummary:
properties:
id:
type: string
name:
type: string
type: object
repo.TemplateLocationSummary:
properties:
id:
type: string
name:
type: string
type: object
repo.TotalsByOrganizer:
properties:
id:
@@ -1275,6 +1674,10 @@ definitions:
type: boolean
name:
type: string
oidcIssuer:
type: string
oidcSubject:
type: string
type: object
repo.UserUpdate:
properties:
@@ -1325,6 +1728,12 @@ definitions:
token:
type: string
type: object
templatefield.Type:
enum:
- text
type: string
x-enum-varnames:
- TypeText
user.Role:
enum:
- user
@@ -1351,6 +1760,8 @@ definitions:
$ref: '#/definitions/services.Latest'
message:
type: string
oidc:
$ref: '#/definitions/v1.OIDCStatus'
title:
type: string
versions:
@@ -1404,6 +1815,27 @@ definitions:
token:
type: string
type: object
v1.ItemTemplateCreateItemRequest:
properties:
description:
maxLength: 1000
type: string
labelIds:
items:
type: string
type: array
locationId:
type: string
name:
maxLength: 255
minLength: 1
type: string
quantity:
type: integer
required:
- locationId
- name
type: object
v1.LoginForm:
properties:
password:
@@ -1415,6 +1847,17 @@ definitions:
example: admin@admin.com
type: string
type: object
v1.OIDCStatus:
properties:
allowLocal:
type: boolean
autoRedirect:
type: boolean
buttonText:
type: string
enabled:
type: boolean
type: object
v1.TokenResponse:
properties:
attachmentToken:
@@ -1947,6 +2390,32 @@ paths:
summary: Update Item Attachment
tags:
- Items Attachments
/v1/items/{id}/duplicate:
post:
parameters:
- description: Item ID
in: path
name: id
required: true
type: string
- description: Duplicate Options
in: body
name: payload
required: true
schema:
$ref: '#/definitions/repo.DuplicateOptions'
produces:
- application/json
responses:
"201":
description: Created
schema:
$ref: '#/definitions/repo.ItemOut'
security:
- Bearer: []
summary: Duplicate Item
tags:
- Items
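# Illustrative request body for this endpoint (field values are examples only),
# matching repo.DuplicateOptions:
#   {"copyAttachments": true, "copyCustomFields": true, "copyMaintenance": false, "copyPrefix": "Copy of "}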
/v1/items/{id}/maintenance:
get:
parameters:
@@ -2543,6 +3012,27 @@ paths:
summary: Test Notifier
tags:
- Notifiers
/v1/products/search-from-barcode:
get:
parameters:
- description: barcode to be searched
in: query
name: data
type: string
produces:
- application/json
responses:
"200":
description: OK
schema:
items:
$ref: '#/definitions/repo.BarcodeProduct'
type: array
security:
- Bearer: []
summary: Search EAN from Barcode
tags:
- Items
/v1/qrcode:
get:
parameters:
@@ -2588,6 +3078,130 @@ paths:
summary: Application Info
tags:
- Base
/v1/templates:
get:
produces:
- application/json
responses:
"200":
description: OK
schema:
items:
$ref: '#/definitions/repo.ItemTemplateSummary'
type: array
security:
- Bearer: []
summary: Get All Item Templates
tags:
- Item Templates
post:
parameters:
- description: Template Data
in: body
name: payload
required: true
schema:
$ref: '#/definitions/repo.ItemTemplateCreate'
produces:
- application/json
responses:
"201":
description: Created
schema:
$ref: '#/definitions/repo.ItemTemplateOut'
security:
- Bearer: []
summary: Create Item Template
tags:
- Item Templates
/v1/templates/{id}:
delete:
parameters:
- description: Template ID
in: path
name: id
required: true
type: string
produces:
- application/json
responses:
"204":
description: No Content
security:
- Bearer: []
summary: Delete Item Template
tags:
- Item Templates
get:
parameters:
- description: Template ID
in: path
name: id
required: true
type: string
produces:
- application/json
responses:
"200":
description: OK
schema:
$ref: '#/definitions/repo.ItemTemplateOut'
security:
- Bearer: []
summary: Get Item Template
tags:
- Item Templates
put:
parameters:
- description: Template ID
in: path
name: id
required: true
type: string
- description: Template Data
in: body
name: payload
required: true
schema:
$ref: '#/definitions/repo.ItemTemplateUpdate'
produces:
- application/json
responses:
"200":
description: OK
schema:
$ref: '#/definitions/repo.ItemTemplateOut'
security:
- Bearer: []
summary: Update Item Template
tags:
- Item Templates
/v1/templates/{id}/create-item:
post:
parameters:
- description: Template ID
in: path
name: id
required: true
type: string
- description: Item Data
in: body
name: payload
required: true
schema:
$ref: '#/definitions/v1.ItemTemplateCreateItemRequest'
produces:
- application/json
responses:
"201":
description: Created
schema:
$ref: '#/definitions/repo.ItemOut'
security:
- Bearer: []
summary: Create Item from Template
tags:
- Item Templates
/v1/users/change-password:
put:
parameters:
@@ -2631,6 +3245,35 @@ paths:
summary: User Login
tags:
- Authentication
/v1/users/login/oidc:
get:
produces:
- application/json
responses:
"302":
description: Found
summary: OIDC Login Initiation
tags:
- Authentication
/v1/users/login/oidc/callback:
get:
parameters:
- description: Authorization code
in: query
name: code
required: true
type: string
- description: State parameter
in: query
name: state
required: true
type: string
responses:
"302":
description: Found
summary: OIDC Callback Handler
tags:
- Authentication
/v1/users/logout:
post:
responses:
@@ -2668,6 +3311,10 @@ paths:
responses:
"204":
description: No Content
"403":
description: Local login is not enabled
schema:
type: string
summary: Register New User
tags:
- User

View File

@@ -1,207 +1,209 @@
module github.com/sysadminsmedia/homebox/backend
go 1.24
go 1.24.0
toolchain go1.24.3
require (
entgo.io/ent v0.14.4
github.com/ardanlabs/conf/v3 v3.8.0
entgo.io/ent v0.14.5
github.com/ardanlabs/conf/v3 v3.10.0
github.com/containrrr/shoutrrr v0.8.0
github.com/coreos/go-oidc/v3 v3.17.0
github.com/evanoberholster/imagemeta v0.3.1
github.com/gen2brain/avif v0.4.4
github.com/gen2brain/heic v0.4.5
github.com/gen2brain/heic v0.4.7
github.com/gen2brain/jpegxl v0.4.5
github.com/gen2brain/webp v0.5.5
github.com/go-chi/chi/v5 v5.2.2
github.com/go-playground/validator/v10 v10.26.0
github.com/go-chi/chi/v5 v5.2.3
github.com/go-playground/validator/v10 v10.30.1
github.com/gocarina/gocsv v0.0.0-20240520201108-78e41c74b4b1
github.com/golang/freetype v0.0.0-20170609003504-e2365dfdc4a0
github.com/google/uuid v1.6.0
github.com/gorilla/schema v1.4.1
github.com/hay-kot/httpkit v0.0.11
github.com/lib/pq v1.10.9
github.com/mattn/go-sqlite3 v1.14.28
github.com/olahol/melody v1.2.1
github.com/mattn/go-sqlite3 v1.14.32
github.com/olahol/melody v1.4.0
github.com/pkg/errors v0.9.1
github.com/pressly/goose/v3 v3.24.3
github.com/pressly/goose/v3 v3.26.0
github.com/rs/zerolog v1.34.0
github.com/shirou/gopsutil/v4 v4.25.5
github.com/shirou/gopsutil/v4 v4.25.11
github.com/skip2/go-qrcode v0.0.0-20200617195104-da1b6568686e
github.com/stretchr/testify v1.10.0
github.com/stretchr/testify v1.11.1
github.com/swaggo/http-swagger/v2 v2.0.2
github.com/swaggo/swag v1.16.4
github.com/swaggo/swag v1.16.6
github.com/yeqown/go-qrcode/v2 v2.2.5
github.com/yeqown/go-qrcode/writer/standard v1.3.0
github.com/zeebo/blake3 v0.2.4
gocloud.dev v0.41.0
gocloud.dev/pubsub/kafkapubsub v0.41.0
gocloud.dev/pubsub/natspubsub v0.41.0
gocloud.dev/pubsub/rabbitpubsub v0.41.0
golang.org/x/crypto v0.39.0
golang.org/x/image v0.28.0
modernc.org/sqlite v1.37.1
go.balki.me/anyhttp v0.5.2
gocloud.dev v0.44.0
gocloud.dev/pubsub/kafkapubsub v0.44.0
gocloud.dev/pubsub/natspubsub v0.44.0
gocloud.dev/pubsub/rabbitpubsub v0.44.0
golang.org/x/crypto v0.46.0
golang.org/x/image v0.34.0
golang.org/x/oauth2 v0.34.0
golang.org/x/text v0.32.0
modernc.org/sqlite v1.41.0
)
require (
ariga.io/atlas v0.31.1-0.20250212144724-069be8033e83 // indirect
cel.dev/expr v0.22.1 // indirect
cloud.google.com/go v0.120.0 // indirect
cloud.google.com/go/auth v0.15.0 // indirect
ariga.io/atlas v0.32.1-0.20250325101103-175b25e1c1b9 // indirect
cel.dev/expr v0.24.0 // indirect
cloud.google.com/go v0.121.6 // indirect
cloud.google.com/go/auth v0.17.0 // indirect
cloud.google.com/go/auth/oauth2adapt v0.2.8 // indirect
cloud.google.com/go/compute/metadata v0.6.0 // indirect
cloud.google.com/go/iam v1.4.2 // indirect
cloud.google.com/go/monitoring v1.24.1 // indirect
cloud.google.com/go/pubsub v1.48.0 // indirect
cloud.google.com/go/storage v1.51.0 // indirect
cloud.google.com/go/compute/metadata v0.9.0 // indirect
cloud.google.com/go/iam v1.5.3 // indirect
cloud.google.com/go/monitoring v1.24.3 // indirect
cloud.google.com/go/pubsub v1.50.1 // indirect
cloud.google.com/go/pubsub/v2 v2.2.1 // indirect
cloud.google.com/go/storage v1.56.0 // indirect
github.com/Azure/azure-amqp-common-go/v3 v3.2.3 // indirect
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.17.1 // indirect
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.8.2 // indirect
github.com/Azure/azure-sdk-for-go/sdk/internal v1.10.0 // indirect
github.com/Azure/azure-sdk-for-go/sdk/messaging/azservicebus v1.8.0 // indirect
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.0 // indirect
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.18.1 // indirect
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.10.1 // indirect
github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.1 // indirect
github.com/Azure/azure-sdk-for-go/sdk/messaging/azservicebus v1.9.1 // indirect
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.1 // indirect
github.com/Azure/go-amqp v1.4.0 // indirect
github.com/Azure/go-autorest v14.2.0+incompatible // indirect
github.com/Azure/go-autorest/autorest/to v0.4.1 // indirect
github.com/AzureAD/microsoft-authentication-library-for-go v1.4.2 // indirect
github.com/GoogleCloudPlatform/opentelemetry-operations-go/detectors/gcp v1.27.0 // indirect
github.com/GoogleCloudPlatform/opentelemetry-operations-go/exporter/metric v0.51.0 // indirect
github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/resourcemapping v0.51.0 // indirect
github.com/IBM/sarama v1.45.1 // indirect
github.com/GoogleCloudPlatform/opentelemetry-operations-go/detectors/gcp v1.30.0 // indirect
github.com/GoogleCloudPlatform/opentelemetry-operations-go/exporter/metric v0.53.0 // indirect
github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/resourcemapping v0.53.0 // indirect
github.com/IBM/sarama v1.46.3 // indirect
github.com/KyleBanks/depth v1.2.1 // indirect
github.com/agext/levenshtein v1.2.1 // indirect
github.com/apparentlymart/go-textseg/v13 v13.0.0 // indirect
github.com/agext/levenshtein v1.2.3 // indirect
github.com/apparentlymart/go-textseg/v15 v15.0.0 // indirect
github.com/aws/aws-sdk-go v1.55.6 // indirect
github.com/aws/aws-sdk-go-v2 v1.36.3 // indirect
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.10 // indirect
github.com/aws/aws-sdk-go-v2/config v1.29.12 // indirect
github.com/aws/aws-sdk-go-v2/credentials v1.17.65 // indirect
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.30 // indirect
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.17.69 // indirect
github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.34 // indirect
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.34 // indirect
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.3 // indirect
github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.34 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.12.3 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.7.0 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.15 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.15 // indirect
github.com/aws/aws-sdk-go-v2/service/s3 v1.78.2 // indirect
github.com/aws/aws-sdk-go-v2/service/sns v1.34.2 // indirect
github.com/aws/aws-sdk-go-v2/service/sqs v1.38.3 // indirect
github.com/aws/aws-sdk-go-v2/service/sso v1.25.2 // indirect
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.30.0 // indirect
github.com/aws/aws-sdk-go-v2/service/sts v1.33.17 // indirect
github.com/aws/smithy-go v1.22.3 // indirect
github.com/aws/aws-sdk-go-v2 v1.39.6 // indirect
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.7.3 // indirect
github.com/aws/aws-sdk-go-v2/config v1.31.17 // indirect
github.com/aws/aws-sdk-go-v2/credentials v1.18.21 // indirect
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.13 // indirect
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.20.3 // indirect
github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.13 // indirect
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.13 // indirect
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.4 // indirect
github.com/aws/aws-sdk-go-v2/internal/v4a v1.4.13 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.3 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.9.4 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.13 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.19.13 // indirect
github.com/aws/aws-sdk-go-v2/service/s3 v1.89.2 // indirect
github.com/aws/aws-sdk-go-v2/service/sns v1.34.7 // indirect
github.com/aws/aws-sdk-go-v2/service/sqs v1.38.8 // indirect
github.com/aws/aws-sdk-go-v2/service/sso v1.30.1 // indirect
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.5 // indirect
github.com/aws/aws-sdk-go-v2/service/sts v1.39.1 // indirect
github.com/aws/smithy-go v1.23.2 // indirect
github.com/bmatcuk/doublestar v1.3.4 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/cncf/xds/go v0.0.0-20250326154945-ae57f3c0d45f // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/cncf/xds/go v0.0.0-20251022180443-0feb69152e9f // indirect
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
github.com/dustin/go-humanize v1.0.1 // indirect
github.com/eapache/go-resiliency v1.7.0 // indirect
github.com/eapache/go-xerial-snappy v0.0.0-20230731223053-c322873962e3 // indirect
github.com/eapache/queue v1.1.0 // indirect
github.com/ebitengine/purego v0.8.4 // indirect
github.com/envoyproxy/go-control-plane/envoy v1.32.4 // indirect
github.com/ebitengine/purego v0.9.1 // indirect
github.com/envoyproxy/go-control-plane/envoy v1.35.0 // indirect
github.com/envoyproxy/protoc-gen-validate v1.2.1 // indirect
github.com/fatih/color v1.15.0 // indirect
github.com/fatih/color v1.18.0 // indirect
github.com/felixge/httpsnoop v1.0.4 // indirect
github.com/fogleman/gg v1.3.0 // indirect
github.com/gabriel-vasile/mimetype v1.4.8 // indirect
github.com/go-logr/logr v1.4.2 // indirect
github.com/gabriel-vasile/mimetype v1.4.12 // indirect
github.com/go-jose/go-jose/v4 v4.1.3 // indirect
github.com/go-logr/logr v1.4.3 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/go-ole/go-ole v1.2.6 // indirect
github.com/go-openapi/inflect v0.19.0 // indirect
github.com/go-openapi/jsonpointer v0.19.5 // indirect
github.com/go-openapi/jsonreference v0.20.0 // indirect
github.com/go-openapi/spec v0.20.6 // indirect
github.com/go-openapi/swag v0.19.15 // indirect
github.com/go-openapi/jsonpointer v0.22.4 // indirect
github.com/go-openapi/jsonreference v0.21.4 // indirect
github.com/go-openapi/spec v0.22.3 // indirect
github.com/go-openapi/swag/conv v0.25.4 // indirect
github.com/go-openapi/swag/jsonname v0.25.4 // indirect
github.com/go-openapi/swag/jsonutils v0.25.4 // indirect
github.com/go-openapi/swag/loading v0.25.4 // indirect
github.com/go-openapi/swag/stringutils v0.25.4 // indirect
github.com/go-openapi/swag/typeutils v0.25.4 // indirect
github.com/go-openapi/swag/yamlutils v0.25.4 // indirect
github.com/go-playground/locales v0.14.1 // indirect
github.com/go-playground/universal-translator v0.18.1 // indirect
github.com/golang-jwt/jwt/v5 v5.2.2 // indirect
github.com/golang/groupcache v0.0.0-20241129210726-2c02b8208cf8 // indirect
github.com/golang-jwt/jwt/v5 v5.2.3 // indirect
github.com/golang/snappy v1.0.0 // indirect
github.com/google/go-cmp v0.7.0 // indirect
github.com/google/s2a-go v0.1.9 // indirect
github.com/google/wire v0.6.0 // indirect
github.com/googleapis/enterprise-certificate-proxy v0.3.6 // indirect
github.com/googleapis/gax-go/v2 v2.14.1 // indirect
github.com/gorilla/websocket v1.5.0 // indirect
github.com/hashicorp/errwrap v1.1.0 // indirect
github.com/hashicorp/go-multierror v1.1.1 // indirect
github.com/google/wire v0.7.0 // indirect
github.com/googleapis/enterprise-certificate-proxy v0.3.7 // indirect
github.com/googleapis/gax-go/v2 v2.16.0 // indirect
github.com/gorilla/websocket v1.5.3 // indirect
github.com/hashicorp/go-uuid v1.0.3 // indirect
github.com/hashicorp/hcl/v2 v2.13.0 // indirect
github.com/hashicorp/hcl/v2 v2.18.1 // indirect
github.com/jcmturner/aescts/v2 v2.0.0 // indirect
github.com/jcmturner/dnsutils/v2 v2.0.0 // indirect
github.com/jcmturner/gofork v1.7.6 // indirect
github.com/jcmturner/gokrb5/v8 v8.4.4 // indirect
github.com/jcmturner/rpc/v2 v2.0.3 // indirect
github.com/jmespath/go-jmespath v0.4.0 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/klauspost/compress v1.18.0 // indirect
github.com/klauspost/cpuid/v2 v2.2.4 // indirect
github.com/klauspost/compress v1.18.2 // indirect
github.com/klauspost/cpuid/v2 v2.3.0 // indirect
github.com/kylelemons/godebug v1.1.0 // indirect
github.com/leodido/go-urn v1.4.0 // indirect
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 // indirect
github.com/mailru/easyjson v0.7.6 // indirect
github.com/mattn/go-colorable v0.1.13 // indirect
github.com/mattn/go-colorable v0.1.14 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/mfridman/interpolate v0.0.2 // indirect
github.com/mitchellh/go-wordwrap v0.0.0-20150314170334-ad45545899c7 // indirect
github.com/nats-io/nats.go v1.40.1 // indirect
github.com/nats-io/nkeys v0.4.10 // indirect
github.com/mitchellh/go-wordwrap v1.0.1 // indirect
github.com/nats-io/nats.go v1.48.0 // indirect
github.com/nats-io/nkeys v0.4.12 // indirect
github.com/nats-io/nuid v1.0.1 // indirect
github.com/ncruces/go-strftime v0.1.9 // indirect
github.com/philhofer/fwd v1.1.2 // indirect
github.com/pierrec/lz4/v4 v4.1.22 // indirect
github.com/ncruces/go-strftime v1.0.0 // indirect
github.com/philhofer/fwd v1.2.0 // indirect
github.com/pierrec/lz4/v4 v4.1.23 // indirect
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c // indirect
github.com/planetscale/vtprotobuf v0.6.1-0.20240319094008-0393e58bdf10 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c // indirect
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
github.com/power-devops/perfstat v0.0.0-20240221224432-82ca36839d55 // indirect
github.com/rabbitmq/amqp091-go v1.10.0 // indirect
github.com/rcrowley/go-metrics v0.0.0-20201227073835-cf1acfcdf475 // indirect
github.com/rcrowley/go-metrics v0.0.0-20250401214520-65e299d6c5c9 // indirect
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec // indirect
github.com/sethvargo/go-retry v0.3.0 // indirect
github.com/swaggo/files/v2 v2.0.0 // indirect
github.com/tetratelabs/wazero v1.9.0 // indirect
github.com/tinylib/msgp v1.1.8 // indirect
github.com/tklauser/go-sysconf v0.3.12 // indirect
github.com/tklauser/numcpus v0.6.1 // indirect
github.com/spiffe/go-spiffe/v2 v2.6.0 // indirect
github.com/swaggo/files/v2 v2.0.2 // indirect
github.com/tetratelabs/wazero v1.11.0 // indirect
github.com/tinylib/msgp v1.6.1 // indirect
github.com/tklauser/go-sysconf v0.3.16 // indirect
github.com/tklauser/numcpus v0.11.0 // indirect
github.com/yeqown/reedsolomon v1.0.0 // indirect
github.com/yusufpapurcu/wmi v1.2.4 // indirect
github.com/zclconf/go-cty v1.14.4 // indirect
github.com/zclconf/go-cty-yaml v1.1.0 // indirect
go.opencensus.io v0.24.0 // indirect
go.opentelemetry.io/auto/sdk v1.1.0 // indirect
go.opentelemetry.io/contrib/detectors/gcp v1.35.0 // indirect
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.60.0 // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.60.0 // indirect
go.opentelemetry.io/otel v1.35.0 // indirect
go.opentelemetry.io/otel/metric v1.35.0 // indirect
go.opentelemetry.io/otel/sdk v1.35.0 // indirect
go.opentelemetry.io/otel/sdk/metric v1.35.0 // indirect
go.opentelemetry.io/otel/trace v1.35.0 // indirect
go.opentelemetry.io/auto/sdk v1.2.1 // indirect
go.opentelemetry.io/contrib/detectors/gcp v1.38.0 // indirect
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.62.0 // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.62.0 // indirect
go.opentelemetry.io/otel v1.39.0 // indirect
go.opentelemetry.io/otel/metric v1.39.0 // indirect
go.opentelemetry.io/otel/sdk v1.39.0 // indirect
go.opentelemetry.io/otel/sdk/metric v1.39.0 // indirect
go.opentelemetry.io/otel/trace v1.39.0 // indirect
go.uber.org/multierr v1.11.0 // indirect
golang.org/x/exp v0.0.0-20250506013437-ce4c2cf36ca6 // indirect
golang.org/x/mod v0.25.0 // indirect
golang.org/x/net v0.40.0 // indirect
golang.org/x/oauth2 v0.28.0 // indirect
golang.org/x/sync v0.15.0 // indirect
golang.org/x/sys v0.33.0 // indirect
golang.org/x/text v0.26.0 // indirect
golang.org/x/time v0.11.0 // indirect
golang.org/x/tools v0.33.0 // indirect
go.yaml.in/yaml/v3 v3.0.4 // indirect
golang.org/x/exp v0.0.0-20251219203646-944ab1f22d93 // indirect
golang.org/x/mod v0.31.0 // indirect
golang.org/x/net v0.48.0 // indirect
golang.org/x/sync v0.19.0 // indirect
golang.org/x/sys v0.39.0 // indirect
golang.org/x/time v0.14.0 // indirect
golang.org/x/tools v0.40.0 // indirect
golang.org/x/xerrors v0.0.0-20240903120638-7835f813f4da // indirect
google.golang.org/api v0.228.0 // indirect
google.golang.org/genproto v0.0.0-20250324211829-b45e905df463 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20250324211829-b45e905df463 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20250324211829-b45e905df463 // indirect
google.golang.org/grpc v1.71.0 // indirect
google.golang.org/protobuf v1.36.6 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
google.golang.org/api v0.258.0 // indirect
google.golang.org/genproto v0.0.0-20251202230838-ff82c1b0f217 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20251202230838-ff82c1b0f217 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20251222181119-0a764e51fe1b // indirect
google.golang.org/grpc v1.78.0 // indirect
google.golang.org/protobuf v1.36.11 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
modernc.org/libc v1.65.7 // indirect
modernc.org/libc v1.67.2 // indirect
modernc.org/mathutil v1.7.1 // indirect
modernc.org/memory v1.11.0 // indirect
)

View File

@@ -1,48 +1,49 @@
ariga.io/atlas v0.31.1-0.20250212144724-069be8033e83 h1:nX4HXncwIdvQ8/8sIUIf1nyCkK8qdBaHQ7EtzPpuiGE=
ariga.io/atlas v0.31.1-0.20250212144724-069be8033e83/go.mod h1:Oe1xWPuu5q9LzyrWfbZmEZxFYeu4BHTyzfjeW2aZp/w=
cel.dev/expr v0.22.1 h1:xoFEsNh972Yzey8N9TCPx2nDvMN7TMhQEzxLuj/iRrI=
cel.dev/expr v0.22.1/go.mod h1:MrpN08Q+lEBs+bGYdLxxHkZoUSsCp0nSKTs0nTymJgw=
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.120.0 h1:wc6bgG9DHyKqF5/vQvX1CiZrtHnxJjBlKUyF9nP6meA=
cloud.google.com/go v0.120.0/go.mod h1:/beW32s8/pGRuj4IILWQNd4uuebeT4dkOhKmkfit64Q=
cloud.google.com/go/auth v0.15.0 h1:Ly0u4aA5vG/fsSsxu98qCQBemXtAtJf+95z9HK+cxps=
cloud.google.com/go/auth v0.15.0/go.mod h1:WJDGqZ1o9E9wKIL+IwStfyn/+s59zl4Bi+1KQNVXLZ8=
ariga.io/atlas v0.32.1-0.20250325101103-175b25e1c1b9 h1:E0wvcUXTkgyN4wy4LGtNzMNGMytJN8afmIWXJVMi4cc=
ariga.io/atlas v0.32.1-0.20250325101103-175b25e1c1b9/go.mod h1:Oe1xWPuu5q9LzyrWfbZmEZxFYeu4BHTyzfjeW2aZp/w=
cel.dev/expr v0.24.0 h1:56OvJKSH3hDGL0ml5uSxZmz3/3Pq4tJ+fb1unVLAFcY=
cel.dev/expr v0.24.0/go.mod h1:hLPLo1W4QUmuYdA72RBX06QTs6MXw941piREPl3Yfiw=
cloud.google.com/go v0.121.6 h1:waZiuajrI28iAf40cWgycWNgaXPO06dupuS+sgibK6c=
cloud.google.com/go v0.121.6/go.mod h1:coChdst4Ea5vUpiALcYKXEpR1S9ZgXbhEzzMcMR66vI=
cloud.google.com/go/auth v0.17.0 h1:74yCm7hCj2rUyyAocqnFzsAYXgJhrG26XCFimrc/Kz4=
cloud.google.com/go/auth v0.17.0/go.mod h1:6wv/t5/6rOPAX4fJiRjKkJCvswLwdet7G8+UGXt7nCQ=
cloud.google.com/go/auth/oauth2adapt v0.2.8 h1:keo8NaayQZ6wimpNSmW5OPc283g65QNIiLpZnkHRbnc=
cloud.google.com/go/auth/oauth2adapt v0.2.8/go.mod h1:XQ9y31RkqZCcwJWNSx2Xvric3RrU88hAYYbjDWYDL+c=
cloud.google.com/go/compute/metadata v0.6.0 h1:A6hENjEsCDtC1k8byVsgwvVcioamEHvZ4j01OwKxG9I=
cloud.google.com/go/compute/metadata v0.6.0/go.mod h1:FjyFAW1MW0C203CEOMDTu3Dk1FlqW3Rga40jzHL4hfg=
cloud.google.com/go/iam v1.4.2 h1:4AckGYAYsowXeHzsn/LCKWIwSWLkdb0eGjH8wWkd27Q=
cloud.google.com/go/iam v1.4.2/go.mod h1:REGlrt8vSlh4dfCJfSEcNjLGq75wW75c5aU3FLOYq34=
cloud.google.com/go/logging v1.13.0 h1:7j0HgAp0B94o1YRDqiqm26w4q1rDMH7XNRU34lJXHYc=
cloud.google.com/go/logging v1.13.0/go.mod h1:36CoKh6KA/M0PbhPKMq6/qety2DCAErbhXT62TuXALA=
cloud.google.com/go/longrunning v0.6.6 h1:XJNDo5MUfMM05xK3ewpbSdmt7R2Zw+aQEMbdQR65Rbw=
cloud.google.com/go/longrunning v0.6.6/go.mod h1:hyeGJUrPHcx0u2Uu1UFSoYZLn4lkMrccJig0t4FI7yw=
cloud.google.com/go/monitoring v1.24.1 h1:vKiypZVFD/5a3BbQMvI4gZdl8445ITzXFh257XBgrS0=
cloud.google.com/go/monitoring v1.24.1/go.mod h1:Z05d1/vn9NaujqY2voG6pVQXoJGbp+r3laV+LySt9K0=
cloud.google.com/go/pubsub v1.48.0 h1:ntFpQVrr10Wj/GXSOpxGmexGynldv/bFp25H0jy8aOs=
cloud.google.com/go/pubsub v1.48.0/go.mod h1:AAtyjyIT/+zaY1ERKFJbefOvkUxRDNp3nD6TdfdqUZk=
cloud.google.com/go/storage v1.51.0 h1:ZVZ11zCiD7b3k+cH5lQs/qcNaoSz3U9I0jgwVzqDlCw=
cloud.google.com/go/storage v1.51.0/go.mod h1:YEJfu/Ki3i5oHC/7jyTgsGZwdQ8P9hqMqvpi5kRKGgc=
cloud.google.com/go/trace v1.11.5 h1:CALS1loyxJMnRiCwZSpdf8ac7iCsjreMxFD2WGxzzHU=
cloud.google.com/go/trace v1.11.5/go.mod h1:TwblCcqNInriu5/qzaeYEIH7wzUcchSdeY2l5wL3Eec=
entgo.io/ent v0.14.4 h1:/DhDraSLXIkBhyiVoJeSshr4ZYi7femzhj6/TckzZuI=
entgo.io/ent v0.14.4/go.mod h1:aDPE/OziPEu8+OWbzy4UlvWmD2/kbRuWfK2A40hcxJM=
cloud.google.com/go/compute/metadata v0.9.0 h1:pDUj4QMoPejqq20dK0Pg2N4yG9zIkYGdBtwLoEkH9Zs=
cloud.google.com/go/compute/metadata v0.9.0/go.mod h1:E0bWwX5wTnLPedCKqk3pJmVgCBSM6qQI1yTBdEb3C10=
cloud.google.com/go/iam v1.5.3 h1:+vMINPiDF2ognBJ97ABAYYwRgsaqxPbQDlMnbHMjolc=
cloud.google.com/go/iam v1.5.3/go.mod h1:MR3v9oLkZCTlaqljW6Eb2d3HGDGK5/bDv93jhfISFvU=
cloud.google.com/go/logging v1.13.1 h1:O7LvmO0kGLaHY/gq8cV7T0dyp6zJhYAOtZPX4TF3QtY=
cloud.google.com/go/logging v1.13.1/go.mod h1:XAQkfkMBxQRjQek96WLPNze7vsOmay9H5PqfsNYDqvw=
cloud.google.com/go/longrunning v0.7.0 h1:FV0+SYF1RIj59gyoWDRi45GiYUMM3K1qO51qoboQT1E=
cloud.google.com/go/longrunning v0.7.0/go.mod h1:ySn2yXmjbK9Ba0zsQqunhDkYi0+9rlXIwnoAf+h+TPY=
cloud.google.com/go/monitoring v1.24.3 h1:dde+gMNc0UhPZD1Azu6at2e79bfdztVDS5lvhOdsgaE=
cloud.google.com/go/monitoring v1.24.3/go.mod h1:nYP6W0tm3N9H/bOw8am7t62YTzZY+zUeQ+Bi6+2eonI=
cloud.google.com/go/pubsub v1.50.1 h1:fzbXpPyJnSGvWXF1jabhQeXyxdbCIkXTpjXHy7xviBM=
cloud.google.com/go/pubsub v1.50.1/go.mod h1:6YVJv3MzWJUVdvQXG081sFvS0dWQOdnV+oTo++q/xFk=
cloud.google.com/go/pubsub/v2 v2.2.1 h1:3brZcshL3fIiD1qOxAE2QW9wxsfjioy014x4yC9XuYI=
cloud.google.com/go/pubsub/v2 v2.2.1/go.mod h1:O5f0KHG9zDheZAd3z5rlCRhxt2JQtB+t/IYLKK3Bpvw=
cloud.google.com/go/storage v1.56.0 h1:iixmq2Fse2tqxMbWhLWC9HfBj1qdxqAmiK8/eqtsLxI=
cloud.google.com/go/storage v1.56.0/go.mod h1:Tpuj6t4NweCLzlNbw9Z9iwxEkrSem20AetIeH/shgVU=
cloud.google.com/go/trace v1.11.7 h1:kDNDX8JkaAG3R2nq1lIdkb7FCSi1rCmsEtKVsty7p+U=
cloud.google.com/go/trace v1.11.7/go.mod h1:TNn9d5V3fQVf6s4SCveVMIBS2LJUqo73GACmq/Tky0s=
entgo.io/ent v0.14.5 h1:Rj2WOYJtCkWyFo6a+5wB3EfBRP0rnx1fMk6gGA0UUe4=
entgo.io/ent v0.14.5/go.mod h1:zTzLmWtPvGpmSwtkaayM2cm5m819NdM7z7tYPq3vN0U=
github.com/Azure/azure-amqp-common-go/v3 v3.2.3 h1:uDF62mbd9bypXWi19V1bN5NZEO84JqgmI5G73ibAmrk=
github.com/Azure/azure-amqp-common-go/v3 v3.2.3/go.mod h1:7rPmbSfszeovxGfc5fSAXE4ehlXQZHpMja2OtxC2Tas=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.17.1 h1:DSDNVxqkoXJiko6x8a90zidoYqnYYa6c1MTzDKzKkTo=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.17.1/go.mod h1:zGqV2R4Cr/k8Uye5w+dgQ06WJtEcbQG/8J7BB6hnCr4=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.8.2 h1:F0gBpfdPLGsw+nsgk6aqqkZS1jiixa5WwFe3fk/T3Ys=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.8.2/go.mod h1:SqINnQ9lVVdRlyC8cd1lCI0SdX4n2paeABd2K8ggfnE=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.18.1 h1:Wc1ml6QlJs2BHQ/9Bqu1jiyggbsSjramq2oUmp5WeIo=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.18.1/go.mod h1:Ot/6aikWnKWi4l9QB7qVSwa8iMphQNqkWALMoNT3rzM=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.10.1 h1:B+blDbyVIG3WaikNxPnhPiJ1MThR03b3vKGtER95TP4=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.10.1/go.mod h1:JdM5psgjfBf5fo2uWOZhflPWyDBZ/O/CNAH9CtsuZE4=
github.com/Azure/azure-sdk-for-go/sdk/azidentity/cache v0.3.2 h1:yz1bePFlP5Vws5+8ez6T3HWXPmwOK7Yvq8QxDBD3SKY=
github.com/Azure/azure-sdk-for-go/sdk/azidentity/cache v0.3.2/go.mod h1:Pa9ZNPuoNu/GztvBSKk9J1cDJW6vk/n0zLtV4mgd8N8=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.10.0 h1:ywEEhmNahHBihViHepv3xPBn1663uRv2t2q/ESv9seY=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.10.0/go.mod h1:iZDifYGJTIgIIkYRNWPENUnqx6bJ2xnSDFI2tjwZNuY=
github.com/Azure/azure-sdk-for-go/sdk/messaging/azservicebus v1.8.0 h1:JNgM3Tz592fUHU2vgwgvOgKxo5s9Ki0y2wicBeckn70=
github.com/Azure/azure-sdk-for-go/sdk/messaging/azservicebus v1.8.0/go.mod h1:6vUKmzY17h6dpn9ZLAhM4R/rcrltBeq52qZIkUR7Oro=
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/storage/armstorage v1.6.0 h1:PiSrjRPpkQNjrM8H0WwKMnZUdu1RGMtd/LdGKUrOo+c=
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/storage/armstorage v1.6.0/go.mod h1:oDrbWx4ewMylP7xHivfgixbfGBT6APAwsSoHRKotnIc=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.0 h1:UXT0o77lXQrikd1kgwIPQOUect7EoR/+sbP4wQKdzxM=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.0/go.mod h1:cTvi54pg19DoT07ekoeMgE/taAwNtCShVeZqA+Iv2xI=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.1 h1:FPKJS1T+clwv+OLGt13a8UjqeRuh0O4SJ3lUriThc+4=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.1/go.mod h1:j2chePtV91HrC22tGoRX3sGY42uF13WzmmV80/OdVAA=
github.com/Azure/azure-sdk-for-go/sdk/messaging/azservicebus v1.9.1 h1:CRZwf68N55u7ZZo3Xx2ynuqEA6k5GZfwsEUkU8qsAPk=
github.com/Azure/azure-sdk-for-go/sdk/messaging/azservicebus v1.9.1/go.mod h1:NydgUaroiShkgOcb+X6OUdS3RalWBrvDNtOyFHJtsZY=
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/storage/armstorage v1.8.0 h1:LR0kAX9ykz8G4YgLCaRDVJ3+n43R8MneB5dTy2konZo=
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/storage/armstorage v1.8.0/go.mod h1:DWAciXemNf++PQJLeXUB4HHH5OpsAh12HZnu2wXE1jA=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.1 h1:lhZdRq7TIx0GJQvSyX2Si406vrYsov2FXGp/RnSEtcs=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.1/go.mod h1:8cl44BDmi+effbARHMQjgOKA2AYvcohNm7KEt42mSV8=
github.com/Azure/go-amqp v0.17.0/go.mod h1:9YJ3RhxRT1gquYnzpZO1vcYMMpAdJT+QEg6fwmw9Zlg=
github.com/Azure/go-amqp v1.4.0 h1:Xj3caqi4comOF/L1Uc5iuBxR/pB6KumejC01YQOqOR4=
github.com/Azure/go-amqp v1.4.0/go.mod h1:vZAogwdrkbyK3Mla8m/CxSc/aKdnTZ4IbPxl51Y5WZE=
@@ -60,91 +61,85 @@ github.com/AzureAD/microsoft-authentication-extensions-for-go/cache v0.1.1 h1:WJ
github.com/AzureAD/microsoft-authentication-extensions-for-go/cache v0.1.1/go.mod h1:tCcJZ0uHAmvjsVYzEFivsRTN00oz5BEsRgQHu5JZ9WE=
github.com/AzureAD/microsoft-authentication-library-for-go v1.4.2 h1:oygO0locgZJe7PpYPXT5A29ZkwJaPqcva7BVeemZOZs=
github.com/AzureAD/microsoft-authentication-library-for-go v1.4.2/go.mod h1:wP83P5OoQ5p6ip3ScPr0BAq0BvuPAvacpEuSzyouqAI=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/DATA-DOG/go-sqlmock v1.5.0 h1:Shsta01QNfFxHCfpW6YH2STWB0MudeXXEWMr20OEh60=
github.com/DATA-DOG/go-sqlmock v1.5.0/go.mod h1:f/Ixk793poVmq4qj/V1dPUg2JEAKC73Q5eFN3EC/SaM=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/detectors/gcp v1.27.0 h1:ErKg/3iS1AKcTkf3yixlZ54f9U1rljCkQyEXWUnIUxc=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/detectors/gcp v1.27.0/go.mod h1:yAZHSGnqScoU556rBOVkwLze6WP5N+U11RHuWaGVxwY=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/exporter/metric v0.51.0 h1:fYE9p3esPxA/C0rQ0AHhP0drtPXDRhaWiwg1DPqO7IU=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/exporter/metric v0.51.0/go.mod h1:BnBReJLvVYx2CS/UHOgVz2BXKXD9wsQPxZug20nZhd0=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/cloudmock v0.51.0 h1:OqVGm6Ei3x5+yZmSJG1Mh2NwHvpVmZ08CB5qJhT9Nuk=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/cloudmock v0.51.0/go.mod h1:SZiPHWGOOk3bl8tkevxkoiwPgsIl6CwrWcbwjfHZpdM=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/resourcemapping v0.51.0 h1:6/0iUd0xrnX7qt+mLNRwg5c0PGv8wpE8K90ryANQwMI=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/resourcemapping v0.51.0/go.mod h1:otE2jQekW/PqXk1Awf5lmfokJx4uwuqcj1ab5SpGeW0=
github.com/IBM/sarama v1.45.1 h1:nY30XqYpqyXOXSNoe2XCgjj9jklGM1Ye94ierUb1jQ0=
github.com/IBM/sarama v1.45.1/go.mod h1:qifDhA3VWSrQ1TjSMyxDl3nYL3oX2C83u+G6L79sq4w=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/detectors/gcp v1.30.0 h1:sBEjpZlNHzK1voKq9695PJSX2o5NEXl7/OL3coiIY0c=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/detectors/gcp v1.30.0/go.mod h1:P4WPRUkOhJC13W//jWpyfJNDAIpvRbAUIYLX/4jtlE0=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/exporter/metric v0.53.0 h1:owcC2UnmsZycprQ5RfRgjydWhuoxg71LUfyiQdijZuM=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/exporter/metric v0.53.0/go.mod h1:ZPpqegjbE99EPKsu3iUWV22A04wzGPcAY/ziSIQEEgs=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/cloudmock v0.53.0 h1:4LP6hvB4I5ouTbGgWtixJhgED6xdf67twf9PoY96Tbg=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/cloudmock v0.53.0/go.mod h1:jUZ5LYlw40WMd07qxcQJD5M40aUxrfwqQX1g7zxYnrQ=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/resourcemapping v0.53.0 h1:Ron4zCA/yk6U7WOBXhTJcDpsUBG9npumK6xw2auFltQ=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/resourcemapping v0.53.0/go.mod h1:cSgYe11MCNYunTnRXrKiR/tHc0eoKjICUuWpNZoVCOo=
github.com/IBM/sarama v1.46.3 h1:njRsX6jNlnR+ClJ8XmkO+CM4unbrNr/2vB5KK6UA+IE=
github.com/IBM/sarama v1.46.3/go.mod h1:GTUYiF9DMOZVe3FwyGT+dtSPceGFIgA+sPc5u6CBwko=
github.com/KyleBanks/depth v1.2.1 h1:5h8fQADFrWtarTdtDudMmGsC7GPbOAu6RVB3ffsVFHc=
github.com/KyleBanks/depth v1.2.1/go.mod h1:jzSb9d0L43HxTQfT+oSA1EEp2q+ne2uh6XgeJcm8brE=
github.com/agext/levenshtein v1.2.1 h1:QmvMAjj2aEICytGiWzmxoE0x2KZvE0fvmqMOfy2tjT8=
github.com/agext/levenshtein v1.2.1/go.mod h1:JEDfjyjHDjOF/1e4FlBE/PkbqA9OfWu2ki2W0IB5558=
github.com/apparentlymart/go-textseg/v13 v13.0.0 h1:Y+KvPE1NYz0xl601PVImeQfFyEy6iT90AvPUL1NNfNw=
github.com/apparentlymart/go-textseg/v13 v13.0.0/go.mod h1:ZK2fH7c4NqDTLtiYLvIkEghdlcqw7yxLeM89kiTRPUo=
github.com/agext/levenshtein v1.2.3 h1:YB2fHEn0UJagG8T1rrWknE3ZQzWM06O8AMAatNn7lmo=
github.com/agext/levenshtein v1.2.3/go.mod h1:JEDfjyjHDjOF/1e4FlBE/PkbqA9OfWu2ki2W0IB5558=
github.com/apparentlymart/go-textseg/v15 v15.0.0 h1:uYvfpb3DyLSCGWnctWKGj857c6ew1u1fNQOlOtuGxQY=
github.com/apparentlymart/go-textseg/v15 v15.0.0/go.mod h1:K8XmNZdhEBkdlyDdvbmmsvpAG721bKi0joRfFdHIWJ4=
github.com/ardanlabs/conf/v3 v3.8.0 h1:Mvv2wZJz8tIl705m5BU3ZRCP1V6TKY6qebA8i4sykrY=
github.com/ardanlabs/conf/v3 v3.8.0/go.mod h1:XlL9P0quWP4m1weOVFmlezabinbZLI05niDof/+Ochk=
github.com/aws/aws-sdk-go v1.55.6 h1:cSg4pvZ3m8dgYcgqB97MrcdjUmZ1BeMYKUxMMB89IPk=
github.com/aws/aws-sdk-go v1.55.6/go.mod h1:eRwEWoyTWFMVYVQzKMNHWP5/RV4xIUGMQfXQHfHkpNU=
github.com/aws/aws-sdk-go-v2 v1.36.3 h1:mJoei2CxPutQVxaATCzDUjcZEjVRdpsiiXi2o38yqWM=
github.com/aws/aws-sdk-go-v2 v1.36.3/go.mod h1:LLXuLpgzEbD766Z5ECcRmi8AzSwfZItDtmABVkRLGzg=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.10 h1:zAybnyUQXIZ5mok5Jqwlf58/TFE7uvd3IAsa1aF9cXs=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.10/go.mod h1:qqvMj6gHLR/EXWZw4ZbqlPbQUyenf4h82UQUlKc+l14=
github.com/aws/aws-sdk-go-v2/config v1.29.12 h1:Y/2a+jLPrPbHpFkpAAYkVEtJmxORlXoo5k2g1fa2sUo=
github.com/aws/aws-sdk-go-v2/config v1.29.12/go.mod h1:xse1YTjmORlb/6fhkWi8qJh3cvZi4JoVNhc+NbJt4kI=
github.com/aws/aws-sdk-go-v2/credentials v1.17.65 h1:q+nV2yYegofO/SUXruT+pn4KxkxmaQ++1B/QedcKBFM=
github.com/aws/aws-sdk-go-v2/credentials v1.17.65/go.mod h1:4zyjAuGOdikpNYiSGpsGz8hLGmUzlY8pc8r9QQ/RXYQ=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.30 h1:x793wxmUWVDhshP8WW2mlnXuFrO4cOd3HLBroh1paFw=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.30/go.mod h1:Jpne2tDnYiFascUEs2AWHJL9Yp7A5ZVy3TNyxaAjD6M=
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.17.69 h1:6VFPH/Zi9xYFMJKPQOX5URYkQoXRWeJ7V/7Y6ZDYoms=
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.17.69/go.mod h1:GJj8mmO6YT6EqgduWocwhMoxTLFitkhIrK+owzrYL2I=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.34 h1:ZK5jHhnrioRkUNOc+hOgQKlUL5JeC3S6JgLxtQ+Rm0Q=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.34/go.mod h1:p4VfIceZokChbA9FzMbRGz5OV+lekcVtHlPKEO0gSZY=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.34 h1:SZwFm17ZUNNg5Np0ioo/gq8Mn6u9w19Mri8DnJ15Jf0=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.34/go.mod h1:dFZsC0BLo346mvKQLWmoJxT+Sjp+qcVR1tRVHQGOH9Q=
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.3 h1:bIqFDwgGXXN1Kpp99pDOdKMTTb5d2KyU5X/BZxjOkRo=
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.3/go.mod h1:H5O/EsxDWyU+LP/V8i5sm8cxoZgc2fdNR9bxlOFrQTo=
github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.34 h1:ZNTqv4nIdE/DiBfUUfXcLZ/Spcuz+RjeziUtNJackkM=
github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.34/go.mod h1:zf7Vcd1ViW7cPqYWEHLHJkS50X0JS2IKz9Cgaj6ugrs=
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.12.3 h1:eAh2A4b5IzM/lum78bZ590jy36+d/aFLgKF/4Vd1xPE=
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.12.3/go.mod h1:0yKJC/kb8sAnmlYa6Zs3QVYqaC8ug2AbnNChv5Ox3uA=
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.7.0 h1:lguz0bmOoGzozP9XfRJR1QIayEYo+2vP/No3OfLF0pU=
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.7.0/go.mod h1:iu6FSzgt+M2/x3Dk8zhycdIcHjEFb36IS8HVUVFoMg0=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.15 h1:dM9/92u2F1JbDaGooxTq18wmmFzbJRfXfVfy96/1CXM=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.15/go.mod h1:SwFBy2vjtA0vZbjjaFtfN045boopadnoVPhu4Fv66vY=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.15 h1:moLQUoVq91LiqT1nbvzDukyqAlCv89ZmwaHw/ZFlFZg=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.15/go.mod h1:ZH34PJUc8ApjBIfgQCFvkWcUDBtl/WTD+uiYHjd8igA=
github.com/aws/aws-sdk-go-v2/service/s3 v1.78.2 h1:jIiopHEV22b4yQP2q36Y0OmwLbsxNWdWwfZRR5QRRO4=
github.com/aws/aws-sdk-go-v2/service/s3 v1.78.2/go.mod h1:U5SNqwhXB3Xe6F47kXvWihPl/ilGaEDe8HD/50Z9wxc=
github.com/aws/aws-sdk-go-v2/service/sns v1.34.2 h1:PajtbJ/5bEo6iUAIGMYnK8ljqg2F1h4mMCGh1acjN30=
github.com/aws/aws-sdk-go-v2/service/sns v1.34.2/go.mod h1:PJtxxMdj747j8DeZENRTTYAz/lx/pADn/U0k7YNNiUY=
github.com/aws/aws-sdk-go-v2/service/sqs v1.38.3 h1:j5BchjfDoS7K26vPdyJlyxBIIBGDflq3qjjJKBDlbcI=
github.com/aws/aws-sdk-go-v2/service/sqs v1.38.3/go.mod h1:Bar4MrRxeqdn6XIh8JGfiXuFRmyrrsZNTJotxEJmWW0=
github.com/aws/aws-sdk-go-v2/service/sso v1.25.2 h1:pdgODsAhGo4dvzC3JAG5Ce0PX8kWXrTZGx+jxADD+5E=
github.com/aws/aws-sdk-go-v2/service/sso v1.25.2/go.mod h1:qs4a9T5EMLl/Cajiw2TcbNt2UNo/Hqlyp+GiuG4CFDI=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.30.0 h1:90uX0veLKcdHVfvxhkWUQSCi5VabtwMLFutYiRke4oo=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.30.0/go.mod h1:MlYRNmYu/fGPoxBQVvBYr9nyr948aY/WLUvwBMBJubs=
github.com/aws/aws-sdk-go-v2/service/sts v1.33.17 h1:PZV5W8yk4OtH1JAuhV2PXwwO9v5G5Aoj+eMCn4T+1Kc=
github.com/aws/aws-sdk-go-v2/service/sts v1.33.17/go.mod h1:cQnB8CUnxbMU82JvlqjKR2HBOm3fe9pWorWBza6MBJ4=
github.com/aws/smithy-go v1.22.3 h1:Z//5NuZCSW6R4PhQ93hShNbyBbn8BWCmCVCt+Q8Io5k=
github.com/aws/smithy-go v1.22.3/go.mod h1:t1ufH5HMublsJYulve2RKmHDC15xu1f26kHCp/HgceI=
github.com/ardanlabs/conf/v3 v3.10.0 h1:qIrJ/WBmH/hFQ/IX4xH9LX9LzwK44T9aEOy78M+4S+0=
github.com/ardanlabs/conf/v3 v3.10.0/go.mod h1:XlL9P0quWP4m1weOVFmlezabinbZLI05niDof/+Ochk=
github.com/aws/aws-sdk-go-v2 v1.39.6 h1:2JrPCVgWJm7bm83BDwY5z8ietmeJUbh3O2ACnn+Xsqk=
github.com/aws/aws-sdk-go-v2 v1.39.6/go.mod h1:c9pm7VwuW0UPxAEYGyTmyurVcNrbF6Rt/wixFqDhcjE=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.7.3 h1:DHctwEM8P8iTXFxC/QK0MRjwEpWQeM9yzidCRjldUz0=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.7.3/go.mod h1:xdCzcZEtnSTKVDOmUZs4l/j3pSV6rpo1WXl5ugNsL8Y=
github.com/aws/aws-sdk-go-v2/config v1.31.17 h1:QFl8lL6RgakNK86vusim14P2k8BFSxjvUkcWLDjgz9Y=
github.com/aws/aws-sdk-go-v2/config v1.31.17/go.mod h1:V8P7ILjp/Uef/aX8TjGk6OHZN6IKPM5YW6S78QnRD5c=
github.com/aws/aws-sdk-go-v2/credentials v1.18.21 h1:56HGpsgnmD+2/KpG0ikvvR8+3v3COCwaF4r+oWwOeNA=
github.com/aws/aws-sdk-go-v2/credentials v1.18.21/go.mod h1:3YELwedmQbw7cXNaII2Wywd+YY58AmLPwX4LzARgmmA=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.13 h1:T1brd5dR3/fzNFAQch/iBKeX07/ffu/cLu+q+RuzEWk=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.13/go.mod h1:Peg/GBAQ6JDt+RoBf4meB1wylmAipb7Kg2ZFakZTlwk=
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.20.3 h1:4GNV1lhyELGjMz5ILMRxDvxvOaeo3Ux9Z69S1EgVMMQ=
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.20.3/go.mod h1:br7KA6edAAqDGUYJ+zVVPAyMrPhnN+zdt17yTUT6FPw=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.13 h1:a+8/MLcWlIxo1lF9xaGt3J/u3yOZx+CdSveSNwjhD40=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.13/go.mod h1:oGnKwIYZ4XttyU2JWxFrwvhF6YKiK/9/wmE3v3Iu9K8=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.13 h1:HBSI2kDkMdWz4ZM7FjwE7e/pWDEZ+nR95x8Ztet1ooY=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.13/go.mod h1:YE94ZoDArI7awZqJzBAZ3PDD2zSfuP7w6P2knOzIn8M=
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.4 h1:WKuaxf++XKWlHWu9ECbMlha8WOEGm0OUEZqm4K/Gcfk=
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.4/go.mod h1:ZWy7j6v1vWGmPReu0iSGvRiise4YI5SkR3OHKTZ6Wuc=
github.com/aws/aws-sdk-go-v2/internal/v4a v1.4.13 h1:eg/WYAa12vqTphzIdWMzqYRVKKnCboVPRlvaybNCqPA=
github.com/aws/aws-sdk-go-v2/internal/v4a v1.4.13/go.mod h1:/FDdxWhz1486obGrKKC1HONd7krpk38LBt+dutLcN9k=
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.3 h1:x2Ibm/Af8Fi+BH+Hsn9TXGdT+hKbDd5XOTZxTMxDk7o=
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.3/go.mod h1:IW1jwyrQgMdhisceG8fQLmQIydcT/jWY21rFhzgaKwo=
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.9.4 h1:NvMjwvv8hpGUILarKw7Z4Q0w1H9anXKsesMxtw++MA4=
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.9.4/go.mod h1:455WPHSwaGj2waRSpQp7TsnpOnBfw8iDfPfbwl7KPJE=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.13 h1:kDqdFvMY4AtKoACfzIGD8A0+hbT41KTKF//gq7jITfM=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.13/go.mod h1:lmKuogqSU3HzQCwZ9ZtcqOc5XGMqtDK7OIc2+DxiUEg=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.19.13 h1:zhBJXdhWIFZ1acfDYIhu4+LCzdUS2Vbcum7D01dXlHQ=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.19.13/go.mod h1:JaaOeCE368qn2Hzi3sEzY6FgAZVCIYcC2nwbro2QCh8=
github.com/aws/aws-sdk-go-v2/service/s3 v1.89.2 h1:xgBWsgaeUESl8A8k80p6yBdexMWDVeiDmJ/pkjohJ7c=
github.com/aws/aws-sdk-go-v2/service/s3 v1.89.2/go.mod h1:+wArOOrcHUevqdto9k1tKOF5++YTe9JEcPSc9Tx2ZSw=
github.com/aws/aws-sdk-go-v2/service/sns v1.34.7 h1:OBuZE9Wt8h2imuRktu+WfjiTGrnYdCIJg8IX92aalHE=
github.com/aws/aws-sdk-go-v2/service/sns v1.34.7/go.mod h1:4WYoZAhHt+dWYpoOQUgkUKfuQbE6Gg/hW4oXE0pKS9U=
github.com/aws/aws-sdk-go-v2/service/sqs v1.38.8 h1:80dpSqWMwx2dAm30Ib7J6ucz1ZHfiv5OCRwN/EnCOXQ=
github.com/aws/aws-sdk-go-v2/service/sqs v1.38.8/go.mod h1:IzNt/udsXlETCdvBOL0nmyMe2t9cGmXmZgsdoZGYYhI=
github.com/aws/aws-sdk-go-v2/service/sso v1.30.1 h1:0JPwLz1J+5lEOfy/g0SURC9cxhbQ1lIMHMa+AHZSzz0=
github.com/aws/aws-sdk-go-v2/service/sso v1.30.1/go.mod h1:fKvyjJcz63iL/ftA6RaM8sRCtN4r4zl4tjL3qw5ec7k=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.5 h1:OWs0/j2UYR5LOGi88sD5/lhN6TDLG6SfA7CqsQO9zF0=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.5/go.mod h1:klO+ejMvYsB4QATfEOIXk8WAEwN4N0aBfJpvC+5SZBo=
github.com/aws/aws-sdk-go-v2/service/sts v1.39.1 h1:mLlUgHn02ue8whiR4BmxxGJLR2gwU6s6ZzJ5wDamBUs=
github.com/aws/aws-sdk-go-v2/service/sts v1.39.1/go.mod h1:E19xDjpzPZC7LS2knI9E6BaRFDK43Eul7vd6rSq2HWk=
github.com/aws/smithy-go v1.23.2 h1:Crv0eatJUQhaManss33hS5r40CG3ZFH+21XSkqMrIUM=
github.com/aws/smithy-go v1.23.2/go.mod h1:LEj2LM3rBRQJxPZTB4KuzZkaZYnZPnvgIhb4pu07mx0=
github.com/bmatcuk/doublestar v1.3.4 h1:gPypJ5xD31uhX6Tf54sDPUOBXTqKH4c9aPY66CyQrS0=
github.com/bmatcuk/doublestar v1.3.4/go.mod h1:wiQtGV+rzVYxB7WIlirSN++5HPtPlXEo9MEoZQC/PmE=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc=
github.com/cncf/xds/go v0.0.0-20250326154945-ae57f3c0d45f h1:C5bqEmzEPLsHm9Mv73lSE9e9bKV23aB1vxOsmZrkl3k=
github.com/cncf/xds/go v0.0.0-20250326154945-ae57f3c0d45f/go.mod h1:W+zGtBO5Y1IgJhy4+A9GOqVhqLpfZi+vwmdNXUehLA8=
github.com/cncf/xds/go v0.0.0-20251022180443-0feb69152e9f h1:Y8xYupdHxryycyPlc9Y+bSQAYZnetRJ70VMVKm5CKI0=
github.com/cncf/xds/go v0.0.0-20251022180443-0feb69152e9f/go.mod h1:HlzOvOjVBOfTGSRXRyY0OiCS/3J1akRGQQpRO/7zyF4=
github.com/coder/websocket v1.8.13 h1:f3QZdXy7uGVz+4uCJy2nTZyM0yTBj8yANEHhqlXZ9FE=
github.com/coder/websocket v1.8.13/go.mod h1:LNVeNrXQZfe5qhS9ALED3uA+l5pPqvwXg3CKoDBB2gs=
github.com/containrrr/shoutrrr v0.8.0 h1:mfG2ATzIS7NR2Ec6XL+xyoHzN97H8WPjir8aYzJUSec=
github.com/containrrr/shoutrrr v0.8.0/go.mod h1:ioyQAyu1LJY6sILuNyKaQaw+9Ttik5QePU8atnAdO2o=
github.com/coreos/go-oidc/v3 v3.17.0 h1:hWBGaQfbi0iVviX4ibC7bk8OKT5qNr4klBaCHVNvehc=
github.com/coreos/go-oidc/v3 v3.17.0/go.mod h1:wqPbKFrVnE90vty060SB40FCJ8fTHTxSwyXJqZH+sI8=
github.com/coreos/go-systemd/v22 v22.5.0/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/devigned/tab v0.1.1/go.mod h1:XG9mPq0dFghrYvoBF3xdRrJzSTX1b7IQrvaL9mzjeJY=
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f h1:lO4WD4F/rVNCu3HqELle0jiPLLBs70cWOduZpkS1E78=
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f/go.mod h1:cuUVRXasLTGF7a8hSLbxyZXjz+1KgoB3wDUb6vlszIc=
@@ -156,24 +151,20 @@ github.com/eapache/go-xerial-snappy v0.0.0-20230731223053-c322873962e3 h1:Oy0F4A
github.com/eapache/go-xerial-snappy v0.0.0-20230731223053-c322873962e3/go.mod h1:YvSRo5mw33fLEx1+DlK6L2VV43tJt5Eyel9n9XBcR+0=
github.com/eapache/queue v1.1.0 h1:YOEu7KNc61ntiQlcEeUIoDTJ2o8mQznoNvUhiigpIqc=
github.com/eapache/queue v1.1.0/go.mod h1:6eCeP0CKFpHLu8blIFXhExK/dRa7WDZfr6jVFPTqq+I=
github.com/ebitengine/purego v0.8.4 h1:CF7LEKg5FFOsASUj0+QwaXf8Ht6TlFxg09+S9wz0omw=
github.com/ebitengine/purego v0.8.4/go.mod h1:iIjxzd6CiRiOG0UyXP+V1+jWqUXVjPKLAI0mRfJZTmQ=
github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98=
github.com/envoyproxy/go-control-plane v0.13.4 h1:zEqyPVyku6IvWCFwux4x9RxkLOMUL+1vC9xUFv5l2/M=
github.com/envoyproxy/go-control-plane v0.13.4/go.mod h1:kDfuBlDVsSj2MjrLEtRWtHlsWIFcGyB2RMO44Dc5GZA=
github.com/envoyproxy/go-control-plane/envoy v1.32.4 h1:jb83lalDRZSpPWW2Z7Mck/8kXZ5CQAFYVjQcdVIr83A=
github.com/envoyproxy/go-control-plane/envoy v1.32.4/go.mod h1:Gzjc5k8JcJswLjAx1Zm+wSYE20UrLtt7JZMWiWQXQEw=
github.com/ebitengine/purego v0.9.1 h1:a/k2f2HQU3Pi399RPW1MOaZyhKJL9w/xFpKAg4q1s0A=
github.com/ebitengine/purego v0.9.1/go.mod h1:iIjxzd6CiRiOG0UyXP+V1+jWqUXVjPKLAI0mRfJZTmQ=
github.com/envoyproxy/go-control-plane v0.13.5-0.20251024222203-75eaa193e329 h1:K+fnvUM0VZ7ZFJf0n4L/BRlnsb9pL/GuDG6FqaH+PwM=
github.com/envoyproxy/go-control-plane v0.13.5-0.20251024222203-75eaa193e329/go.mod h1:Alz8LEClvR7xKsrq3qzoc4N0guvVNSS8KmSChGYr9hs=
github.com/envoyproxy/go-control-plane/envoy v1.35.0 h1:ixjkELDE+ru6idPxcHLj8LBVc2bFP7iBytj353BoHUo=
github.com/envoyproxy/go-control-plane/envoy v1.35.0/go.mod h1:09qwbGVuSWWAyN5t/b3iyVfz5+z8QWGrzkoqm/8SbEs=
github.com/envoyproxy/go-control-plane/ratelimit v0.1.0 h1:/G9QYbddjL25KvtKTv3an9lx6VBE2cnb8wp1vEGNYGI=
github.com/envoyproxy/go-control-plane/ratelimit v0.1.0/go.mod h1:Wk+tMFAFbCXaJPzVVHnPgRKdUdwW/KdbRt94AzgRee4=
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
github.com/envoyproxy/protoc-gen-validate v1.2.1 h1:DEo3O99U8j4hBFwbJfrz9VtgcDfUKS7KJ7spH3d86P8=
github.com/envoyproxy/protoc-gen-validate v1.2.1/go.mod h1:d/C80l/jxXLdfEIhX1W2TmLfsJ31lvEjwamM4DxlWXU=
github.com/evanoberholster/imagemeta v0.3.1 h1:E4GUjXcvlVMjP9joN25+bBNf3Al3MTTfMqCrDOCW+LE=
github.com/evanoberholster/imagemeta v0.3.1/go.mod h1:V0vtDJmjTqvwAYO8r+u33NRVIMXQb0qSqEfImoKEiXM=
github.com/fatih/color v1.15.0 h1:kOqh6YHBtK8aywxGerMG2Eq3H6Qgoqeo13Bk2Mv/nBs=
github.com/fatih/color v1.15.0/go.mod h1:0h5ZqXfHYED7Bhv2ZJamyIOUej9KtShiJESRwBDUSsw=
github.com/fatih/color v1.18.0 h1:S8gINlzdQ840/4pfAwic/ZE0djQEH3wM94VfqLTZcOM=
github.com/fatih/color v1.18.0/go.mod h1:4FelSpRwEGDpQ12mAdzqdOukCy4u8WUtOY6lkT/6HfU=
github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg=
github.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
github.com/fogleman/gg v1.3.0 h1:/7zJX8F6AaYQc57WQCyN9cAIz+4bCJGO9B+dyW29am8=
@@ -181,45 +172,64 @@ github.com/fogleman/gg v1.3.0/go.mod h1:R/bRT+9gY/C5z7JzPU0zXsXHKM4/ayA+zqcVNZzP
github.com/form3tech-oss/jwt-go v3.2.2+incompatible/go.mod h1:pbq4aXjuKjdthFRnoDwaVPLA+WlJuPGy+QneDUgJi2k=
github.com/fortytw2/leaktest v1.3.0 h1:u8491cBMTQ8ft8aeV+adlcytMZylmA5nnwwkRZjI8vw=
github.com/fortytw2/leaktest v1.3.0/go.mod h1:jDsjWgpAGjm2CA7WthBh/CdZYEPF31XHquHwclZch5g=
github.com/gabriel-vasile/mimetype v1.4.8 h1:FfZ3gj38NjllZIeJAmMhr+qKL8Wu+nOoI3GqacKw1NM=
github.com/gabriel-vasile/mimetype v1.4.8/go.mod h1:ByKUIKGjh1ODkGM1asKUbQZOLGrPjydw3hYPU2YU9t8=
github.com/gabriel-vasile/mimetype v1.4.12 h1:e9hWvmLYvtp846tLHam2o++qitpguFiYCKbn0w9jyqw=
github.com/gabriel-vasile/mimetype v1.4.12/go.mod h1:d+9Oxyo1wTzWdyVUPMmXFvp4F9tea18J8ufA774AB3s=
github.com/gen2brain/avif v0.4.4 h1:Ga/ss7qcWWQm2bxFpnjYjhJsNfZrWs5RsyklgFjKRSE=
github.com/gen2brain/avif v0.4.4/go.mod h1:/XCaJcjZraQwKVhpu9aEd9aLOssYOawLvhMBtmHVGqk=
github.com/gen2brain/heic v0.4.5 h1:Cq3hPu6wwlTJNv2t48ro3oWje54h82Q5pALeCBNgaSk=
github.com/gen2brain/heic v0.4.5/go.mod h1:ECnpqbqLu0qSje4KSNWUUDK47UPXPzl80T27GWGEL5I=
github.com/gen2brain/heic v0.4.7 h1:xw/e9R3HdIvb+uEhRDMRJdviYnB3ODe/VwL8SYLaMGc=
github.com/gen2brain/heic v0.4.7/go.mod h1:ECnpqbqLu0qSje4KSNWUUDK47UPXPzl80T27GWGEL5I=
github.com/gen2brain/jpegxl v0.4.5 h1:TWpVEn5xkIfsswzkjHBArd0Cc9AE0tbjBSoa0jDsrbo=
github.com/gen2brain/jpegxl v0.4.5/go.mod h1:4kWYJ18xCEuO2vzocYdGpeqNJ990/Gjy3uLMg5TBN6I=
github.com/gen2brain/webp v0.5.5 h1:MvQR75yIPU/9nSqYT5h13k4URaJK3gf9tgz/ksRbyEg=
github.com/gen2brain/webp v0.5.5/go.mod h1:xOSMzp4aROt2KFW++9qcK/RBTOVC2S9tJG66ip/9Oc0=
github.com/go-chi/chi/v5 v5.2.2 h1:CMwsvRVTbXVytCk1Wd72Zy1LAsAh9GxMmSNWLHCG618=
github.com/go-chi/chi/v5 v5.2.2/go.mod h1:L2yAIGWB3H+phAw1NxKwWM+7eUH/lU8pOMm5hHcoops=
github.com/go-chi/chi/v5 v5.2.3 h1:WQIt9uxdsAbgIYgid+BpYc+liqQZGMHRaUwp0JUcvdE=
github.com/go-chi/chi/v5 v5.2.3/go.mod h1:L2yAIGWB3H+phAw1NxKwWM+7eUH/lU8pOMm5hHcoops=
github.com/go-jose/go-jose/v4 v4.1.3 h1:CVLmWDhDVRa6Mi/IgCgaopNosCaHz7zrMeF9MlZRkrs=
github.com/go-jose/go-jose/v4 v4.1.3/go.mod h1:x4oUasVrzR7071A4TnHLGSPpNOm2a21K9Kf04k1rs08=
github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/logr v1.4.2 h1:6pFjapn8bFcIbiKo3XT4j/BhANplGihG6tvd+8rYgrY=
github.com/go-logr/logr v1.4.2/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI=
github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
github.com/go-ole/go-ole v1.2.6 h1:/Fpf6oFPoeFik9ty7siob0G6Ke8QvQEuVcuChpwXzpY=
github.com/go-ole/go-ole v1.2.6/go.mod h1:pprOEPIfldk/42T2oK7lQ4v4JSDwmV0As9GaiUsvbm0=
github.com/go-openapi/inflect v0.19.0 h1:9jCH9scKIbHeV9m12SmPilScz6krDxKRasNNSNPXu/4=
github.com/go-openapi/inflect v0.19.0/go.mod h1:lHpZVlpIQqLyKwJ4N+YSc9hchQy/i12fJykb83CRBH4=
github.com/go-openapi/jsonpointer v0.19.3/go.mod h1:Pl9vOtqEWErmShwVjC8pYs9cog34VGT37dQOVbmoatg=
github.com/go-openapi/jsonpointer v0.19.5 h1:gZr+CIYByUqjcgeLXnQu2gHYQC9o73G2XUeOFYEICuY=
github.com/go-openapi/jsonpointer v0.19.5/go.mod h1:Pl9vOtqEWErmShwVjC8pYs9cog34VGT37dQOVbmoatg=
github.com/go-openapi/jsonreference v0.20.0 h1:MYlu0sBgChmCfJxxUKZ8g1cPWFOB37YSZqewK7OKeyA=
github.com/go-openapi/jsonreference v0.20.0/go.mod h1:Ag74Ico3lPc+zR+qjn4XBUmXymS4zJbYVCZmcgkasdo=
github.com/go-openapi/spec v0.20.6 h1:ich1RQ3WDbfoeTqTAb+5EIxNmpKVJZWBNah9RAT0jIQ=
github.com/go-openapi/spec v0.20.6/go.mod h1:2OpW+JddWPrpXSCIX8eOx7lZ5iyuWj3RYR6VaaBKcWA=
github.com/go-openapi/swag v0.19.5/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk=
github.com/go-openapi/jsonpointer v0.22.4 h1:dZtK82WlNpVLDW2jlA1YCiVJFVqkED1MegOUy9kR5T4=
github.com/go-openapi/jsonpointer v0.22.4/go.mod h1:elX9+UgznpFhgBuaMQ7iu4lvvX1nvNsesQ3oxmYTw80=
github.com/go-openapi/jsonreference v0.21.4 h1:24qaE2y9bx/q3uRK/qN+TDwbok1NhbSmGjjySRCHtC8=
github.com/go-openapi/jsonreference v0.21.4/go.mod h1:rIENPTjDbLpzQmQWCj5kKj3ZlmEh+EFVbz3RTUh30/4=
github.com/go-openapi/spec v0.22.3 h1:qRSmj6Smz2rEBxMnLRBMeBWxbbOvuOoElvSvObIgwQc=
github.com/go-openapi/spec v0.22.3/go.mod h1:iIImLODL2loCh3Vnox8TY2YWYJZjMAKYyLH2Mu8lOZs=
github.com/go-openapi/swag v0.19.15 h1:D2NRCBzS9/pEY3gP9Nl8aDqGUcPFrwG2p+CNFrLyrCM=
github.com/go-openapi/swag v0.19.15/go.mod h1:QYRuS/SOXUCsnplDa677K7+DxSOj6IPNl/eQntq43wQ=
github.com/go-openapi/swag/conv v0.25.4 h1:/Dd7p0LZXczgUcC/Ikm1+YqVzkEeCc9LnOWjfkpkfe4=
github.com/go-openapi/swag/conv v0.25.4/go.mod h1:3LXfie/lwoAv0NHoEuY1hjoFAYkvlqI/Bn5EQDD3PPU=
github.com/go-openapi/swag/jsonname v0.25.4 h1:bZH0+MsS03MbnwBXYhuTttMOqk+5KcQ9869Vye1bNHI=
github.com/go-openapi/swag/jsonname v0.25.4/go.mod h1:GPVEk9CWVhNvWhZgrnvRA6utbAltopbKwDu8mXNUMag=
github.com/go-openapi/swag/jsonutils v0.25.4 h1:VSchfbGhD4UTf4vCdR2F4TLBdLwHyUDTd1/q4i+jGZA=
github.com/go-openapi/swag/jsonutils v0.25.4/go.mod h1:7OYGXpvVFPn4PpaSdPHJBtF0iGnbEaTk8AvBkoWnaAY=
github.com/go-openapi/swag/jsonutils/fixtures_test v0.25.4 h1:IACsSvBhiNJwlDix7wq39SS2Fh7lUOCJRmx/4SN4sVo=
github.com/go-openapi/swag/jsonutils/fixtures_test v0.25.4/go.mod h1:Mt0Ost9l3cUzVv4OEZG+WSeoHwjWLnarzMePNDAOBiM=
github.com/go-openapi/swag/loading v0.25.4 h1:jN4MvLj0X6yhCDduRsxDDw1aHe+ZWoLjW+9ZQWIKn2s=
github.com/go-openapi/swag/loading v0.25.4/go.mod h1:rpUM1ZiyEP9+mNLIQUdMiD7dCETXvkkC30z53i+ftTE=
github.com/go-openapi/swag/stringutils v0.25.4 h1:O6dU1Rd8bej4HPA3/CLPciNBBDwZj9HiEpdVsb8B5A8=
github.com/go-openapi/swag/stringutils v0.25.4/go.mod h1:GTsRvhJW5xM5gkgiFe0fV3PUlFm0dr8vki6/VSRaZK0=
github.com/go-openapi/swag/typeutils v0.25.4 h1:1/fbZOUN472NTc39zpa+YGHn3jzHWhv42wAJSN91wRw=
github.com/go-openapi/swag/typeutils v0.25.4/go.mod h1:Ou7g//Wx8tTLS9vG0UmzfCsjZjKhpjxayRKTHXf2pTE=
github.com/go-openapi/swag/yamlutils v0.25.4 h1:6jdaeSItEUb7ioS9lFoCZ65Cne1/RZtPBZ9A56h92Sw=
github.com/go-openapi/swag/yamlutils v0.25.4/go.mod h1:MNzq1ulQu+yd8Kl7wPOut/YHAAU/H6hL91fF+E2RFwc=
github.com/go-openapi/testify/enable/yaml/v2 v2.0.2 h1:0+Y41Pz1NkbTHz8NngxTuAXxEodtNSI1WG1c/m5Akw4=
github.com/go-openapi/testify/enable/yaml/v2 v2.0.2/go.mod h1:kme83333GCtJQHXQ8UKX3IBZu6z8T5Dvy5+CW3NLUUg=
github.com/go-openapi/testify/v2 v2.0.2 h1:X999g3jeLcoY8qctY/c/Z8iBHTbwLz7R2WXd6Ub6wls=
github.com/go-openapi/testify/v2 v2.0.2/go.mod h1:HCPmvFFnheKK2BuwSA0TbbdxJ3I16pjwMkYkP4Ywn54=
github.com/go-playground/assert/v2 v2.2.0 h1:JvknZsQTYeFEAhQwI4qEt9cyV5ONwRHC+lYKSsYSR8s=
github.com/go-playground/assert/v2 v2.2.0/go.mod h1:VDjEfimB/XKnb+ZQfWdccd7VUvScMdVu0Titje2rxJ4=
github.com/go-playground/locales v0.14.1 h1:EWaQ/wswjilfKLTECiXz7Rh+3BjFhfDFKv/oXslEjJA=
github.com/go-playground/locales v0.14.1/go.mod h1:hxrqLVvrK65+Rwrd5Fc6F2O76J/NuW9t0sjnWqG1slY=
github.com/go-playground/universal-translator v0.18.1 h1:Bcnm0ZwsGyWbCzImXv+pAJnYK9S473LQFuzCbDbfSFY=
github.com/go-playground/universal-translator v0.18.1/go.mod h1:xekY+UJKNuX9WP91TpwSH2VMlDf28Uj24BCp08ZFTUY=
github.com/go-playground/validator/v10 v10.26.0 h1:SP05Nqhjcvz81uJaRfEV0YBSSSGMc/iMaVtFbr3Sw2k=
github.com/go-playground/validator/v10 v10.26.0/go.mod h1:I5QpIEbmr8On7W0TktmJAumgzX4CA1XNl4ZmDuVHKKo=
github.com/go-playground/validator/v10 v10.30.1 h1:f3zDSN/zOma+w6+1Wswgd9fLkdwy06ntQJp0BBvFG0w=
github.com/go-playground/validator/v10 v10.30.1/go.mod h1:oSuBIQzuJxL//3MelwSLD5hc2Tu889bF0Idm9Dg26cM=
github.com/go-task/slim-sprig v0.0.0-20230315185526-52ccab3ef572 h1:tfuBGBXKqDEevZMzYi5KSi8KkcZtzBcTgAUUtapy0OI=
github.com/go-task/slim-sprig v0.0.0-20230315185526-52ccab3ef572/go.mod h1:9Pwr4B2jHnOSGXyyzV8ROjYa2ojvAY6HCGYYfMoC3Ls=
github.com/go-test/deep v1.0.3 h1:ZrJSEWsXzPOxaZnFteGEfooLba+ju3FYIbOrS+rQd68=
@@ -227,37 +237,17 @@ github.com/go-test/deep v1.0.3/go.mod h1:wGDj63lr65AM2AQyKZd/NYHGb0R+1RLqB8NKt3a
github.com/gocarina/gocsv v0.0.0-20240520201108-78e41c74b4b1 h1:FWNFq4fM1wPfcK40yHE5UO3RUdSNPaBC+j3PokzA6OQ=
github.com/gocarina/gocsv v0.0.0-20240520201108-78e41c74b4b1/go.mod h1:5YoVOkjYAQumqlV356Hj3xeYh4BdZuLE0/nRkf2NKkI=
github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/golang-jwt/jwt/v5 v5.2.2 h1:Rl4B7itRWVtYIHFrSNd7vhTiz9UpLdi6gZhZ3wEeDy8=
github.com/golang-jwt/jwt/v5 v5.2.2/go.mod h1:pqrtFR0X4osieyHYxtmOUWsAWrfe1Q5UVIyoH402zdk=
github.com/golang-jwt/jwt/v5 v5.2.3 h1:kkGXqQOBSDDWRhWNXTFpqGSCMyh/PLnqUvMGJPDJDs0=
github.com/golang-jwt/jwt/v5 v5.2.3/go.mod h1:pqrtFR0X4osieyHYxtmOUWsAWrfe1Q5UVIyoH402zdk=
github.com/golang/freetype v0.0.0-20170609003504-e2365dfdc4a0 h1:DACJavvAHhabrF08vX0COfcOBJRhZ8lUbR+ZWIs0Y5g=
github.com/golang/freetype v0.0.0-20170609003504-e2365dfdc4a0/go.mod h1:E/TSTwGwJL78qG/PmXZO1EjYhfJinVAhrmmHX6Z8B9k=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20241129210726-2c02b8208cf8 h1:f+oWsMOmNPc8JmEHVZIycC7hBoQxHH9pNKQORJNozsQ=
github.com/golang/groupcache v0.0.0-20241129210726-2c02b8208cf8/go.mod h1:wcDNUvekVysuuOpQKo3191zZyTpiI6se1N1ULghS0sw=
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/mock v1.6.0 h1:ErTB+efbowRARo13NNdxyJji2egdxLGQhRaY+DUumQc=
github.com/golang/mock v1.6.0/go.mod h1:p6yTPP+5HYm5mzsMV8JkE6ZKdX+/wYM6Hr+LicevLPs=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8=
github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA=
github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs=
github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w=
github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0=
github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QDs8UjoX8=
github.com/golang/protobuf v1.4.3/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=
github.com/golang/snappy v1.0.0 h1:Oy607GVXHs7RtbggtPBnr2RmDArIsAefDwvrdWvRhGs=
github.com/golang/snappy v1.0.0/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.1/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.3/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
@@ -271,32 +261,27 @@ github.com/google/pprof v0.0.0-20250317173921-a4b03ec1a45e h1:ijClszYn+mADRFY17k
github.com/google/pprof v0.0.0-20250317173921-a4b03ec1a45e/go.mod h1:boTsfXsheKC2y+lKOCMpSfarhxDeIzfZG1jqGcPl3cA=
github.com/google/s2a-go v0.1.9 h1:LGD7gtMgezd8a/Xak7mEWL0PjoTQFvpRudN895yqKW0=
github.com/google/s2a-go v0.1.9/go.mod h1:YA0Ei2ZQL3acow2O62kdp9UlnvMmU7kA6Eutn0dXayM=
github.com/google/subcommands v1.2.0/go.mod h1:ZjhPrFU+Olkh9WazFPsl27BQ4UPiG37m3yTrtFlrHVk=
github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/wire v0.6.0 h1:HBkoIh4BdSxoyo9PveV8giw7ZsaBOvzWKfcg/6MrVwI=
github.com/google/wire v0.6.0/go.mod h1:F4QhpQ9EDIdJ1Mbop/NZBRB+5yrR6qg3BnctaoUk6NA=
github.com/googleapis/enterprise-certificate-proxy v0.3.6 h1:GW/XbdyBFQ8Qe+YAmFU9uHLo7OnF5tL52HFAgMmyrf4=
github.com/googleapis/enterprise-certificate-proxy v0.3.6/go.mod h1:MkHOF77EYAE7qfSuSS9PU6g4Nt4e11cnsDUowfwewLA=
github.com/googleapis/gax-go/v2 v2.14.1 h1:hb0FFeiPaQskmvakKu5EbCbpntQn48jyHuvrkurSS/Q=
github.com/googleapis/gax-go/v2 v2.14.1/go.mod h1:Hb/NubMaVM88SrNkvl8X/o8XWwDJEPqouaLeN2IUxoA=
github.com/google/wire v0.7.0 h1:JxUKI6+CVBgCO2WToKy/nQk0sS+amI9z9EjVmdaocj4=
github.com/google/wire v0.7.0/go.mod h1:n6YbUQD9cPKTnHXEBN2DXlOp/mVADhVErcMFb0v3J18=
github.com/googleapis/enterprise-certificate-proxy v0.3.7 h1:zrn2Ee/nWmHulBx5sAVrGgAa0f2/R35S4DJwfFaUPFQ=
github.com/googleapis/enterprise-certificate-proxy v0.3.7/go.mod h1:MkHOF77EYAE7qfSuSS9PU6g4Nt4e11cnsDUowfwewLA=
github.com/googleapis/gax-go/v2 v2.16.0 h1:iHbQmKLLZrexmb0OSsNGTeSTS0HO4YvFOG8g5E4Zd0Y=
github.com/googleapis/gax-go/v2 v2.16.0/go.mod h1:o1vfQjjNZn4+dPnRdl/4ZD7S9414Y4xA+a/6Icj6l14=
github.com/gorilla/schema v1.4.1 h1:jUg5hUjCSDZpNGLuXQOgIWGdlgrIdYvgQ0wZtdK1M3E=
github.com/gorilla/schema v1.4.1/go.mod h1:Dg5SSm5PV60mhF2NFaTV1xuYYj8tV8NOPRo4FggUMnM=
github.com/gorilla/securecookie v1.1.1/go.mod h1:ra0sb63/xPlUeL+yeDciTfxMRAA+MP+HVt/4epWDjd4=
github.com/gorilla/sessions v1.2.1/go.mod h1:dk2InVEVJ0sfLlnXv9EAgkf6ecYs/i80K/zI+bUmuGM=
github.com/gorilla/websocket v1.5.0 h1:PPwGk2jz7EePpoHN/+ClbZu8SPxiqlu12wZP/3sWmnc=
github.com/gorilla/websocket v1.5.0/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/errwrap v1.1.0 h1:OxrOeh75EUXMY8TBjag2fzXGZ40LB6IKw45YeGUDY2I=
github.com/hashicorp/errwrap v1.1.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/go-multierror v1.1.1 h1:H5DkEtf6CXdFp0N0Em5UCwQpXMWke8IA0+lD48awMYo=
github.com/hashicorp/go-multierror v1.1.1/go.mod h1:iw975J/qwKPdAO1clOe2L8331t/9/fmwbPZ6JB6eMoM=
github.com/gorilla/websocket v1.5.3 h1:saDtZ6Pbx/0u+bgYQ3q96pZgCzfhKXGPqt7kZ72aNNg=
github.com/gorilla/websocket v1.5.3/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/hashicorp/go-uuid v1.0.2/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
github.com/hashicorp/go-uuid v1.0.3 h1:2gKiV6YVmrJ1i2CKKa9obLvRieoRGviZFL26PcT/Co8=
github.com/hashicorp/go-uuid v1.0.3/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
github.com/hashicorp/hcl/v2 v2.13.0 h1:0Apadu1w6M11dyGFxWnmhhcMjkbAiKCv7G1r/2QgCNc=
github.com/hashicorp/hcl/v2 v2.13.0/go.mod h1:e4z5nxYlWNPdDSNYX+ph14EvWYMFm3eP0zIUqPc2jr0=
github.com/hashicorp/golang-lru/v2 v2.0.7 h1:a+bsQ5rvGLjzHuww6tVxozPZFVghXaHOwFs4luLUK2k=
github.com/hashicorp/golang-lru/v2 v2.0.7/go.mod h1:QeFd9opnmA6QUJc5vARoKUSoFhyfM2/ZepoAG6RGpeM=
github.com/hashicorp/hcl/v2 v2.18.1 h1:6nxnOJFku1EuSawSD81fuviYUV8DxFr3fp2dUi3ZYSo=
github.com/hashicorp/hcl/v2 v2.18.1/go.mod h1:ThLC89FV4p9MPW804KVbe/cEXoQ8NZEh+JtMeeGErHE=
github.com/hay-kot/httpkit v0.0.11 h1:ZdB2uqsFBSDpfUoClGK5c5orjBjQkEVSXh7fZX5FKEk=
github.com/hay-kot/httpkit v0.0.11/go.mod h1:0kZdk5/swzdfqfg2c6pBWimcgeJ9PTyO97EbHnYl2Sw=
github.com/jarcoal/httpmock v1.3.0 h1:2RJ8GP0IIaWwcC9Fp2BmVi8Kog3v2Hn7VXM3fTd+nuc=
@@ -313,25 +298,16 @@ github.com/jcmturner/gokrb5/v8 v8.4.4 h1:x1Sv4HaTpepFkXbt2IkL29DXRf8sOfZXo8eRKh6
github.com/jcmturner/gokrb5/v8 v8.4.4/go.mod h1:1btQEpgT6k+unzCwX1KdWMEwPPkkgBtP+F6aCACiMrs=
github.com/jcmturner/rpc/v2 v2.0.3 h1:7FXXj8Ti1IaVFpSAziCZWNzbNuZmnvw/i6CqLNdWfZY=
github.com/jcmturner/rpc/v2 v2.0.3/go.mod h1:VUJYCIDm3PVOEHw8sgt091/20OJjskO/YJki3ELg/Hc=
github.com/jmespath/go-jmespath v0.4.0 h1:BEgLn5cpjn8UN1mAw4NjwDrS35OdebyEtFe+9YPoQUg=
github.com/jmespath/go-jmespath v0.4.0/go.mod h1:T8mJZnbsbmF+m6zOOFylbeCJqk5+pHWvzYPziyZiYoo=
github.com/jmespath/go-jmespath/internal/testify v1.5.1 h1:shLQSRRSCCPj3f2gpwzGwWFoC7ycTf1rcQZHOlsJ6N8=
github.com/jmespath/go-jmespath/internal/testify v1.5.1/go.mod h1:L3OGu8Wl2/fWfCI6z80xFu9LTZmf1ZRjMHUOPmWr69U=
github.com/joho/godotenv v1.5.1 h1:7eLL/+HRGLY0ldzfGMeQkb7vMd0as4CfYvUVzLqw0N0=
github.com/joho/godotenv v1.5.1/go.mod h1:f4LDr5Voq0i2e/R5DDNOoa2zzDfwtkZa6DnEwAbqwq4=
github.com/josharian/intern v1.0.0 h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8HmY=
github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y=
github.com/keybase/go-keychain v0.0.0-20231219164618-57a3676c3af6 h1:IsMZxCuZqKuao2vNdfD82fjjgPLfyHLpR41Z88viRWs=
github.com/keybase/go-keychain v0.0.0-20231219164618-57a3676c3af6/go.mod h1:3VeWNIJaW+O5xpRQbPp0Ybqu1vJd/pm7s2F473HRrkw=
github.com/klauspost/compress v1.18.0 h1:c/Cqfb0r+Yi+JtIEq73FWXVkRonBlf0CRNYc8Zttxdo=
github.com/klauspost/compress v1.18.0/go.mod h1:2Pp+KzxcywXVXMr50+X0Q/Lsb43OQHYWRCY2AiWywWQ=
github.com/klauspost/cpuid/v2 v2.2.4 h1:acbojRNwl3o09bUq+yDCtZFc1aiwaAAxtcn8YkZXnvk=
github.com/klauspost/cpuid/v2 v2.2.4/go.mod h1:RVVoqg1df56z8g3pUjL/3lE5UfnlrJX8tyFgg4nqhuY=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/keybase/go-keychain v0.0.1 h1:way+bWYa6lDppZoZcgMbYsvC7GxljxrskdNInRtuthU=
github.com/keybase/go-keychain v0.0.1/go.mod h1:PdEILRW3i9D8JcdM+FmY6RwkHGnhHxXwkPPMeUgOK1k=
github.com/klauspost/compress v1.18.2 h1:iiPHWW0YrcFgpBYhsA6D1+fqHssJscY/Tm/y2Uqnapk=
github.com/klauspost/compress v1.18.2/go.mod h1:R0h/fSBs8DE4ENlcrlib3PsXS61voFxhIs2DeRhCvJ4=
github.com/klauspost/cpuid/v2 v2.3.0 h1:S4CRMLnYUhGeDFDqkGriYKdfoFlDnMtqTiI/sFzhA9Y=
github.com/klauspost/cpuid/v2 v2.3.0/go.mod h1:hqwkgyIinND0mEev00jJYCxPNVRVXFQeu1XKlok6oO0=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
@@ -342,66 +318,62 @@ github.com/lib/pq v1.10.9 h1:YXG7RB+JIjhP29X+OtkiDnYaXQwpS4JEWq7dtCCRUEw=
github.com/lib/pq v1.10.9/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 h1:6E+4a0GO5zZEnZ81pIr0yLvtUWk2if982qA3F3QD6H4=
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0/go.mod h1:zJYVVT2jmtg6P3p1VtQj7WsuWi/y4VnjVBn7F8KPB3I=
github.com/mailru/easyjson v0.0.0-20190614124828-94de47d64c63/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
github.com/mailru/easyjson v0.0.0-20190626092158-b2ccc519800e/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
github.com/mailru/easyjson v0.7.6 h1:8yTIVnZgCoiM1TgqoeTl+LfU5Jg6/xL3QhGQnimLYnA=
github.com/mailru/easyjson v0.7.6/go.mod h1:xzfreul335JAWq5oZzymOObrkdz5UnU4kGfJJLY9Nlc=
github.com/mattn/go-colorable v0.1.13 h1:fFA4WZxdEF4tXPZVKMLwD8oUnCTTo08duU7wxecdEvA=
github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovkB8vQcUbaXHg=
github.com/mattn/go-colorable v0.1.14 h1:9A9LHSqF/7dyVVX6g0U9cwm9pG3kP9gSzcuIPHPsaIE=
github.com/mattn/go-colorable v0.1.14/go.mod h1:6LmQG8QLFO4G5z1gPvYEzlUgJ2wF+stgPZH1UqBm1s8=
github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
github.com/mattn/go-isatty v0.0.19/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mattn/go-sqlite3 v1.14.28 h1:ThEiQrnbtumT+QMknw63Befp/ce/nUPgBPMlRFEum7A=
github.com/mattn/go-sqlite3 v1.14.28/go.mod h1:Uh1q+B4BYcTPb+yiD3kU8Ct7aC0hY9fxUwlHK0RXw+Y=
github.com/mattn/go-sqlite3 v1.14.32 h1:JD12Ag3oLy1zQA+BNn74xRgaBbdhbNIDYvQUEuuErjs=
github.com/mattn/go-sqlite3 v1.14.32/go.mod h1:Uh1q+B4BYcTPb+yiD3kU8Ct7aC0hY9fxUwlHK0RXw+Y=
github.com/mfridman/interpolate v0.0.2 h1:pnuTK7MQIxxFz1Gr+rjSIx9u7qVjf5VOoM/u6BbAxPY=
github.com/mfridman/interpolate v0.0.2/go.mod h1:p+7uk6oE07mpE/Ik1b8EckO0O4ZXiGAfshKBWLUM9Xg=
github.com/minio/highwayhash v1.0.2 h1:Aak5U0nElisjDCfPSG79Tgzkn2gl66NxOMspRrKnA/g=
github.com/minio/highwayhash v1.0.2/go.mod h1:BQskDq+xkJ12lmlUUi7U0M5Swg3EWR+dLTk+kldvVxY=
github.com/mitchellh/go-wordwrap v0.0.0-20150314170334-ad45545899c7 h1:DpOJ2HYzCv8LZP15IdmG+YdwD2luVPHITV96TkirNBM=
github.com/mitchellh/go-wordwrap v0.0.0-20150314170334-ad45545899c7/go.mod h1:ZXFpozHsX6DPmq2I0TCekCxypsnAUbP2oI0UX1GXzOo=
github.com/mitchellh/go-wordwrap v1.0.1 h1:TLuKupo69TCn6TQSyGxwI1EblZZEsQ0vMlAFQflz0v0=
github.com/mitchellh/go-wordwrap v1.0.1/go.mod h1:R62XHJLzvMFRBbcrT7m7WgmE1eOyTSsCt+hzestvNj0=
github.com/nats-io/jwt/v2 v2.5.0 h1:WQQ40AAlqqfx+f6ku+i0pOVm+ASirD4fUh+oQsiE9Ak=
github.com/nats-io/jwt/v2 v2.5.0/go.mod h1:24BeQtRwxRV8ruvC4CojXlx/WQ/VjuwlYiH+vu/+ibI=
github.com/nats-io/nats-server/v2 v2.9.23 h1:6Wj6H6QpP9FMlpCyWUaNu2yeZ/qGj+mdRkZ1wbikExU=
github.com/nats-io/nats-server/v2 v2.9.23/go.mod h1:wEjrEy9vnqIGE4Pqz4/c75v9Pmaq7My2IgFmnykc4C0=
github.com/nats-io/nats.go v1.40.1 h1:MLjDkdsbGUeCMKFyCFoLnNn/HDTqcgVa3EQm+pMNDPk=
github.com/nats-io/nats.go v1.40.1/go.mod h1:wV73x0FSI/orHPSYoyMeJB+KajMDoWyXmFaRrrYaaTo=
github.com/nats-io/nkeys v0.4.10 h1:glmRrpCmYLHByYcePvnTBEAwawwapjCPMjy2huw20wc=
github.com/nats-io/nkeys v0.4.10/go.mod h1:OjRrnIKnWBFl+s4YK5ChQfvHP2fxqZexrKJoVVyWB3U=
github.com/nats-io/nats.go v1.48.0 h1:pSFyXApG+yWU/TgbKCjmm5K4wrHu86231/w84qRVR+U=
github.com/nats-io/nats.go v1.48.0/go.mod h1:iRWIPokVIFbVijxuMQq4y9ttaBTMe0SFdlZfMDd+33g=
github.com/nats-io/nkeys v0.4.12 h1:nssm7JKOG9/x4J8II47VWCL1Ds29avyiQDRn0ckMvDc=
github.com/nats-io/nkeys v0.4.12/go.mod h1:MT59A1HYcjIcyQDJStTfaOY6vhy9XTUjOFo+SVsvpBg=
github.com/nats-io/nuid v1.0.1 h1:5iA8DT8V7q8WK2EScv2padNa/rTESc1KdnPw4TC2paw=
github.com/nats-io/nuid v1.0.1/go.mod h1:19wcPz3Ph3q0Jbyiqsd0kePYG7A95tJPxeL+1OSON2c=
github.com/ncruces/go-strftime v0.1.9 h1:bY0MQC28UADQmHmaF5dgpLmImcShSi2kHU9XLdhx/f4=
github.com/ncruces/go-strftime v0.1.9/go.mod h1:Fwc5htZGVVkseilnfgOVb9mKy6w1naJmn9CehxcKcls=
github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e/go.mod h1:zD1mROLANZcx1PVRCS0qkT7pwLkGfwJo4zjcN/Tysno=
github.com/olahol/melody v1.2.1 h1:xdwRkzHxf+B0w4TKbGpUSSkV516ZucQZJIWLztOWICQ=
github.com/olahol/melody v1.2.1/go.mod h1:GgkTl6Y7yWj/HtfD48Q5vLKPVoZOH+Qqgfa7CvJgJM4=
github.com/ncruces/go-strftime v1.0.0 h1:HMFp8mLCTPp341M/ZnA4qaf7ZlsbTc+miZjCLOFAw7w=
github.com/ncruces/go-strftime v1.0.0/go.mod h1:Fwc5htZGVVkseilnfgOVb9mKy6w1naJmn9CehxcKcls=
github.com/olahol/melody v1.4.0 h1:Pa5SdeZL/zXPi1tJuMAPDbl4n3gQOThSL6G1p4qZ4SI=
github.com/olahol/melody v1.4.0/go.mod h1:GgkTl6Y7yWj/HtfD48Q5vLKPVoZOH+Qqgfa7CvJgJM4=
github.com/onsi/ginkgo/v2 v2.9.2 h1:BA2GMJOtfGAfagzYtrAlufIP0lq6QERkFmHLMLPwFSU=
github.com/onsi/ginkgo/v2 v2.9.2/go.mod h1:WHcJJG2dIlcCqVfBAwUCrJxSPFb6v4azBwgxeMeDuts=
github.com/onsi/gomega v1.27.6 h1:ENqfyGeS5AX/rlXDd/ETokDz93u0YufY1Pgxuy/PvWE=
github.com/onsi/gomega v1.27.6/go.mod h1:PIQNjfQwkP3aQAH7lf7j87O/5FiNr+ZR8+ipb+qQlhg=
github.com/philhofer/fwd v1.1.2 h1:bnDivRJ1EWPjUIRXV5KfORO897HTbpFAQddBdE8t7Gw=
github.com/philhofer/fwd v1.1.2/go.mod h1:qkPdfjR2SIEbspLqpe1tO4n5yICnr2DY7mqEx2tUTP0=
github.com/pierrec/lz4/v4 v4.1.22 h1:cKFw6uJDK+/gfw5BcDL0JL5aBsAFdsIT18eRtLj7VIU=
github.com/pierrec/lz4/v4 v4.1.22/go.mod h1:gZWDp/Ze/IJXGXf23ltt2EXimqmTUXEy0GFuRQyBid4=
github.com/philhofer/fwd v1.2.0 h1:e6DnBTl7vGY+Gz322/ASL4Gyp1FspeMvx1RNDoToZuM=
github.com/philhofer/fwd v1.2.0/go.mod h1:RqIHx9QI14HlwKwm98g9Re5prTQ6LdeRQn+gXJFxsJM=
github.com/pierrec/lz4/v4 v4.1.23 h1:oJE7T90aYBGtFNrI8+KbETnPymobAhzRrR8Mu8n1yfU=
github.com/pierrec/lz4/v4 v4.1.23/go.mod h1:EoQMVJgeeEOMsCqCzqFm2O0cJvljX2nGZjcRIPL34O4=
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c h1:+mdjkGKdHQG3305AYmdv1U2eRNDiU2ErMBj1gwrq8eQ=
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c/go.mod h1:7rwL4CYBLnjLxUqIJNnCWiEdr3bn6IUYi15bNlnbCCU=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/planetscale/vtprotobuf v0.6.1-0.20240319094008-0393e58bdf10 h1:GFCKgmp0tecUJ0sJuv4pzYCqS9+RGSn52M3FUwPs+uo=
github.com/planetscale/vtprotobuf v0.6.1-0.20240319094008-0393e58bdf10/go.mod h1:t/avpk3KcrXxUnYOhZhMXJlSEyie6gQbtLq5NM3loB8=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c h1:ncq/mPwQF4JjgDlrVEn3C11VoGHZN7m8qihwgMEtzYw=
github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c/go.mod h1:OmDBASR4679mdNQnz2pUhc2G8CO2JrUAVFDRBDP/hJE=
github.com/pressly/goose/v3 v3.24.3 h1:DSWWNwwggVUsYZ0X2VitiAa9sKuqtBfe+Jr9zFGwWlM=
github.com/pressly/goose/v3 v3.24.3/go.mod h1:v9zYL4xdViLHCUUJh/mhjnm6JrK7Eul8AS93IxiZM4E=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/power-devops/perfstat v0.0.0-20240221224432-82ca36839d55 h1:o4JXh1EVt9k/+g42oCprj/FisM4qX9L3sZB3upGN2ZU=
github.com/power-devops/perfstat v0.0.0-20240221224432-82ca36839d55/go.mod h1:OmDBASR4679mdNQnz2pUhc2G8CO2JrUAVFDRBDP/hJE=
github.com/pressly/goose/v3 v3.26.0 h1:KJakav68jdH0WDvoAcj8+n61WqOIaPGgH0bJWS6jpmM=
github.com/pressly/goose/v3 v3.26.0/go.mod h1:4hC1KrritdCxtuFsqgs1R4AU5bWtTAf+cnWvfhf2DNY=
github.com/rabbitmq/amqp091-go v1.10.0 h1:STpn5XsHlHGcecLmMFCtg7mqq0RnD+zFr4uzukfVhBw=
github.com/rabbitmq/amqp091-go v1.10.0/go.mod h1:Hy4jKW5kQART1u+JkDTF9YYOQUHXqMuhrgxOEeS7G4o=
github.com/rcrowley/go-metrics v0.0.0-20201227073835-cf1acfcdf475 h1:N/ElC8H3+5XpJzTSTfLsJV/mx9Q9g7kxmchpfZyxgzM=
github.com/rcrowley/go-metrics v0.0.0-20201227073835-cf1acfcdf475/go.mod h1:bCqnVzQkZxMG4s8nGwiZ5l3QUCyqpo9Y+/ZMZ9VjZe4=
github.com/redis/go-redis/v9 v9.7.0 h1:HhLSs+B6O021gwzl+locl0zEDnyNkxMtf/Z3NNBMa9E=
github.com/redis/go-redis/v9 v9.7.0/go.mod h1:f6zhXITC7JUJIlPEiBOTXxJgPLdZcA93GewI7inzyWw=
github.com/rcrowley/go-metrics v0.0.0-20250401214520-65e299d6c5c9 h1:bsUq1dX0N8AOIL7EB/X911+m4EHsnWEHeJ0c+3TTBrg=
github.com/rcrowley/go-metrics v0.0.0-20250401214520-65e299d6c5c9/go.mod h1:bCqnVzQkZxMG4s8nGwiZ5l3QUCyqpo9Y+/ZMZ9VjZe4=
github.com/redis/go-redis/v9 v9.8.0 h1:q3nRvjrlge/6UD7eTu/DSg2uYiU2mCL0G/uzBWqhicI=
github.com/redis/go-redis/v9 v9.8.0/go.mod h1:huWgSWd8mW6+m0VPhJjSSQ+d6Nh1VICQ6Q5lHuCH/Iw=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec h1:W09IVJc94icq4NjY3clb7Lk8O1qJ8BdBEF8z0ibU0rE=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec/go.mod h1:qqbHyh8v60DhA7CoWK5oRCqLrMHRGoxYCSS9EjAz6Eo=
github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ=
@@ -409,14 +381,16 @@ github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7
github.com/rs/xid v1.6.0/go.mod h1:7XoLgs4eV+QndskICGsho+ADou8ySMSjJKDIan90Nz0=
github.com/rs/zerolog v1.34.0 h1:k43nTLIwcTVQAncfCw4KZ2VY6ukYoZaBPNOE8txlOeY=
github.com/rs/zerolog v1.34.0/go.mod h1:bJsvje4Z08ROH4Nhs5iH600c3IkWhwp44iRc54W6wYQ=
github.com/sergi/go-diff v1.0.0 h1:Kpca3qRNrduNnOQeazBd0ysaKrUJiIuISHxogkT9RPQ=
github.com/sergi/go-diff v1.0.0/go.mod h1:0CfEIISq7TuYL3j771MWULgwwjU+GofnZX9QAmXWZgo=
github.com/sergi/go-diff v1.3.1 h1:xkr+Oxo4BOQKmkn/B9eMK0g5Kg/983T9DqqPHwYqD+8=
github.com/sergi/go-diff v1.3.1/go.mod h1:aMJSSKb2lpPvRNec0+w3fl7LP9IOFzdc9Pa4NFbPK1I=
github.com/sethvargo/go-retry v0.3.0 h1:EEt31A35QhrcRZtrYFDTBg91cqZVnFL2navjDrah2SE=
github.com/sethvargo/go-retry v0.3.0/go.mod h1:mNX17F0C/HguQMyMyJxcnU471gOZGxCLyYaFyAZraas=
github.com/shirou/gopsutil/v4 v4.25.5 h1:rtd9piuSMGeU8g1RMXjZs9y9luK5BwtnG7dZaQUJAsc=
github.com/shirou/gopsutil/v4 v4.25.5/go.mod h1:PfybzyydfZcN+JMMjkF6Zb8Mq1A/VcogFFg7hj50W9c=
github.com/shirou/gopsutil/v4 v4.25.11 h1:X53gB7muL9Gnwwo2evPSE+SfOrltMoR6V3xJAXZILTY=
github.com/shirou/gopsutil/v4 v4.25.11/go.mod h1:EivAfP5x2EhLp2ovdpKSozecVXn1TmuG7SMzs/Wh4PU=
github.com/skip2/go-qrcode v0.0.0-20200617195104-da1b6568686e h1:MRM5ITcdelLK2j1vwZ3Je0FKVCfqOLp5zO6trqMLYs0=
github.com/skip2/go-qrcode v0.0.0-20200617195104-da1b6568686e/go.mod h1:XV66xRDqSt+GTGFMVlhk3ULuV0y9ZmzeVGR4mloJI3M=
github.com/spiffe/go-spiffe/v2 v2.6.0 h1:l+DolpxNWYgruGQVV0xsfeya3CsC7m8iBzDnMpsbLuo=
github.com/spiffe/go-spiffe/v2 v2.6.0/go.mod h1:gm2SeUoMZEtpnzPNs2Csc0D/gX33k1xIx7lEzqblHEs=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
@@ -426,22 +400,22 @@ github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/swaggo/files/v2 v2.0.0 h1:hmAt8Dkynw7Ssz46F6pn8ok6YmGZqHSVLZ+HQM7i0kw=
github.com/swaggo/files/v2 v2.0.0/go.mod h1:24kk2Y9NYEJ5lHuCra6iVwkMjIekMCaFq/0JQj66kyM=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/swaggo/files/v2 v2.0.2 h1:Bq4tgS/yxLB/3nwOMcul5oLEUKa877Ykgz3CJMVbQKU=
github.com/swaggo/files/v2 v2.0.2/go.mod h1:TVqetIzZsO9OhHX1Am9sRf9LdrFZqoK49N37KON/jr0=
github.com/swaggo/http-swagger/v2 v2.0.2 h1:FKCdLsl+sFCx60KFsyM0rDarwiUSZ8DqbfSyIKC9OBg=
github.com/swaggo/http-swagger/v2 v2.0.2/go.mod h1:r7/GBkAWIfK6E/OLnE8fXnviHiDeAHmgIyooa4xm3AQ=
github.com/swaggo/swag v1.16.4 h1:clWJtd9LStiG3VeijiCfOVODP6VpHtKdQy9ELFG3s1A=
github.com/swaggo/swag v1.16.4/go.mod h1:VBsHJRsDvfYvqoiMKnsdwhNV9LEMHgEDZcyVYX0sxPg=
github.com/tetratelabs/wazero v1.9.0 h1:IcZ56OuxrtaEz8UYNRHBrUa9bYeX9oVY93KspZZBf/I=
github.com/tetratelabs/wazero v1.9.0/go.mod h1:TSbcXCfFP0L2FGkRPxHphadXPjo1T6W+CseNNY7EkjM=
github.com/tinylib/msgp v1.1.8 h1:FCXC1xanKO4I8plpHGH2P7koL/RzZs12l/+r7vakfm0=
github.com/tinylib/msgp v1.1.8/go.mod h1:qkpG+2ldGg4xRFmx+jfTvZPxfGFhi64BcnL9vkCm/Tw=
github.com/tklauser/go-sysconf v0.3.12 h1:0QaGUFOdQaIVdPgfITYzaTegZvdCjmYO52cSFAEVmqU=
github.com/tklauser/go-sysconf v0.3.12/go.mod h1:Ho14jnntGE1fpdOqQEEaiKRpvIavV0hSfmBq8nJbHYI=
github.com/tklauser/numcpus v0.6.1 h1:ng9scYS7az0Bk4OZLvrNXNSAO2Pxr1XXRAPyjhIx+Fk=
github.com/tklauser/numcpus v0.6.1/go.mod h1:1XfjsgE2zo8GVw7POkMbHENHzVg3GzmoZ9fESEdAacY=
github.com/swaggo/swag v1.16.6 h1:qBNcx53ZaX+M5dxVyTrgQ0PJ/ACK+NzhwcbieTt+9yI=
github.com/swaggo/swag v1.16.6/go.mod h1:ngP2etMK5a0P3QBizic5MEwpRmluJZPHjXcMoj4Xesg=
github.com/tetratelabs/wazero v1.11.0 h1:+gKemEuKCTevU4d7ZTzlsvgd1uaToIDtlQlmNbwqYhA=
github.com/tetratelabs/wazero v1.11.0/go.mod h1:eV28rsN8Q+xwjogd7f4/Pp4xFxO7uOGbLcD/LzB1wiU=
github.com/tinylib/msgp v1.6.1 h1:ESRv8eL3u+DNHUoSAAQRE50Hm162zqAnBoGv9PzScPY=
github.com/tinylib/msgp v1.6.1/go.mod h1:RSp0LW9oSxFut3KzESt5Voq4GVWyS+PSulT77roAqEA=
github.com/tklauser/go-sysconf v0.3.16 h1:frioLaCQSsF5Cy1jgRBrzr6t502KIIwQ0MArYICU0nA=
github.com/tklauser/go-sysconf v0.3.16/go.mod h1:/qNL9xxDhc7tx3HSRsLWNnuzbVfh3e7gh/BmM179nYI=
github.com/tklauser/numcpus v0.11.0 h1:nSTwhKH5e1dMNsCdVBukSZrURJRoHbSEQjdEbY+9RXw=
github.com/tklauser/numcpus v0.11.0/go.mod h1:z+LwcLq54uWZTX0u/bGobaV34u6V7KNlTZejzM6/3MQ=
github.com/yeqown/go-qrcode/v2 v2.2.5 h1:HCOe2bSjkhZyYoyyNaXNzh4DJZll6inVJQQw+8228Zk=
github.com/yeqown/go-qrcode/v2 v2.2.5/go.mod h1:uHpt9CM0V1HeXLz+Wg5MN50/sI/fQhfkZlOM+cOTHxw=
github.com/yeqown/go-qrcode/writer/standard v1.3.0 h1:chdyhEfRtUPgQtuPeaWVGQ/TQx4rE1PqeoW3U+53t34=
@@ -461,208 +435,140 @@ github.com/zeebo/blake3 v0.2.4 h1:KYQPkhpRtcqh0ssGYcKLG1JYvddkEA8QwCM/yBqhaZI=
github.com/zeebo/blake3 v0.2.4/go.mod h1:7eeQ6d2iXWRGF6npfaxl2CU+xy2Fjo2gxeyZGCRUjcE=
github.com/zeebo/pcg v1.0.1 h1:lyqfGeWiv4ahac6ttHs+I5hwtH/+1mrhlCtVNQM2kHo=
github.com/zeebo/pcg v1.0.1/go.mod h1:09F0S9iiKrwn9rlI5yjLkmrug154/YRW6KnnXVDM/l4=
go.opencensus.io v0.24.0 h1:y73uSU6J157QMP2kn2r30vwW1A2W2WFwSCGnAVxeaD0=
go.opencensus.io v0.24.0/go.mod h1:vNK8G9p7aAivkbmorf4v+7Hgx+Zs0yY+0fOtgBfjQKo=
go.opentelemetry.io/auto/sdk v1.1.0 h1:cH53jehLUN6UFLY71z+NDOiNJqDdPRaXzTel0sJySYA=
go.opentelemetry.io/auto/sdk v1.1.0/go.mod h1:3wSPjt5PWp2RhlCcmmOial7AvC4DQqZb7a7wCow3W8A=
go.opentelemetry.io/contrib/detectors/gcp v1.35.0 h1:bGvFt68+KTiAKFlacHW6AhA56GF2rS0bdD3aJYEnmzA=
go.opentelemetry.io/contrib/detectors/gcp v1.35.0/go.mod h1:qGWP8/+ILwMRIUf9uIVLloR1uo5ZYAslM4O6OqUi1DA=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.60.0 h1:x7wzEgXfnzJcHDwStJT+mxOz4etr2EcexjqhBvmoakw=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.60.0/go.mod h1:rg+RlpR5dKwaS95IyyZqj5Wd4E13lk/msnTS0Xl9lJM=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.60.0 h1:sbiXRNDSWJOTobXh5HyQKjq6wUC5tNybqjIqDpAY4CU=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.60.0/go.mod h1:69uWxva0WgAA/4bu2Yy70SLDBwZXuQ6PbBpbsa5iZrQ=
go.opentelemetry.io/otel v1.35.0 h1:xKWKPxrxB6OtMCbmMY021CqC45J+3Onta9MqjhnusiQ=
go.opentelemetry.io/otel v1.35.0/go.mod h1:UEqy8Zp11hpkUrL73gSlELM0DupHoiq72dR+Zqel/+Y=
go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.29.0 h1:WDdP9acbMYjbKIyJUhTvtzj601sVJOqgWdUxSdR/Ysc=
go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.29.0/go.mod h1:BLbf7zbNIONBLPwvFnwNHGj4zge8uTCM/UPIVW1Mq2I=
go.opentelemetry.io/otel/metric v1.35.0 h1:0znxYu2SNyuMSQT4Y9WDWej0VpcsxkuklLa4/siN90M=
go.opentelemetry.io/otel/metric v1.35.0/go.mod h1:nKVFgxBZ2fReX6IlyW28MgZojkoAkJGaE8CpgeAU3oE=
go.opentelemetry.io/otel/sdk v1.35.0 h1:iPctf8iprVySXSKJffSS79eOjl9pvxV9ZqOWT0QejKY=
go.opentelemetry.io/otel/sdk v1.35.0/go.mod h1:+ga1bZliga3DxJ3CQGg3updiaAJoNECOgJREo9KHGQg=
go.opentelemetry.io/otel/sdk/metric v1.35.0 h1:1RriWBmCKgkeHEhM7a2uMjMUfP7MsOF5JpUCaEqEI9o=
go.opentelemetry.io/otel/sdk/metric v1.35.0/go.mod h1:is6XYCUMpcKi+ZsOvfluY5YstFnhW0BidkR+gL+qN+w=
go.opentelemetry.io/otel/trace v1.35.0 h1:dPpEfJu1sDIqruz7BHFG3c7528f6ddfSWfFDVt/xgMs=
go.opentelemetry.io/otel/trace v1.35.0/go.mod h1:WUk7DtFp1Aw2MkvqGdwiXYDZZNvA/1J8o6xRXLrIkyc=
go.balki.me/anyhttp v0.5.2 h1:et4tCDXLeXpWfMNvRKG7ojfrnlr3du7cEaG966MLSpA=
go.balki.me/anyhttp v0.5.2/go.mod h1:JhfekOIjgVODoVqUCficjpIgmB3wwlB7jhN0eN2EZ/s=
go.opentelemetry.io/auto/sdk v1.2.1 h1:jXsnJ4Lmnqd11kwkBV2LgLoFMZKizbCi5fNZ/ipaZ64=
go.opentelemetry.io/auto/sdk v1.2.1/go.mod h1:KRTj+aOaElaLi+wW1kO/DZRXwkF4C5xPbEe3ZiIhN7Y=
go.opentelemetry.io/contrib/detectors/gcp v1.38.0 h1:ZoYbqX7OaA/TAikspPl3ozPI6iY6LiIY9I8cUfm+pJs=
go.opentelemetry.io/contrib/detectors/gcp v1.38.0/go.mod h1:SU+iU7nu5ud4oCb3LQOhIZ3nRLj6FNVrKgtflbaf2ts=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.62.0 h1:rbRJ8BBoVMsQShESYZ0FkvcITu8X8QNwJogcLUmDNNw=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.62.0/go.mod h1:ru6KHrNtNHxM4nD/vd6QrLVWgKhxPYgblq4VAtNawTQ=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.62.0 h1:Hf9xI/XLML9ElpiHVDNwvqI0hIFlzV8dgIr35kV1kRU=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.62.0/go.mod h1:NfchwuyNoMcZ5MLHwPrODwUF1HWCXWrL31s8gSAdIKY=
go.opentelemetry.io/otel v1.39.0 h1:8yPrr/S0ND9QEfTfdP9V+SiwT4E0G7Y5MO7p85nis48=
go.opentelemetry.io/otel v1.39.0/go.mod h1:kLlFTywNWrFyEdH0oj2xK0bFYZtHRYUdv1NklR/tgc8=
go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.37.0 h1:6VjV6Et+1Hd2iLZEPtdV7vie80Yyqf7oikJLjQ/myi0=
go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.37.0/go.mod h1:u8hcp8ji5gaM/RfcOo8z9NMnf1pVLfVY7lBY2VOGuUU=
go.opentelemetry.io/otel/metric v1.39.0 h1:d1UzonvEZriVfpNKEVmHXbdf909uGTOQjA0HF0Ls5Q0=
go.opentelemetry.io/otel/metric v1.39.0/go.mod h1:jrZSWL33sD7bBxg1xjrqyDjnuzTUB0x1nBERXd7Ftcs=
go.opentelemetry.io/otel/sdk v1.39.0 h1:nMLYcjVsvdui1B/4FRkwjzoRVsMK8uL/cj0OyhKzt18=
go.opentelemetry.io/otel/sdk v1.39.0/go.mod h1:vDojkC4/jsTJsE+kh+LXYQlbL8CgrEcwmt1ENZszdJE=
go.opentelemetry.io/otel/sdk/metric v1.39.0 h1:cXMVVFVgsIf2YL6QkRF4Urbr/aMInf+2WKg+sEJTtB8=
go.opentelemetry.io/otel/sdk/metric v1.39.0/go.mod h1:xq9HEVH7qeX69/JnwEfp6fVq5wosJsY1mt4lLfYdVew=
go.opentelemetry.io/otel/trace v1.39.0 h1:2d2vfpEDmCJ5zVYz7ijaJdOF59xLomrvj7bjt6/qCJI=
go.opentelemetry.io/otel/trace v1.39.0/go.mod h1:88w4/PnZSazkGzz/w84VHpQafiU4EtqqlVdxWy+rNOA=
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0=
go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y=
gocloud.dev v0.41.0 h1:qBKd9jZkBKEghYbP/uThpomhedK5s2Gy6Lz7h/zYYrM=
gocloud.dev v0.41.0/go.mod h1:IetpBcWLUwroOOxKr90lhsZ8vWxeSkuszBnW62sbcf0=
gocloud.dev/pubsub/kafkapubsub v0.41.0 h1:Ft6YB77ejqk++VjW51UP39RH/WDAMtv6ed3+PHMxBzg=
gocloud.dev/pubsub/kafkapubsub v0.41.0/go.mod h1:kJf4c6b+4yJk6nXmv33yXKblbrgWmrYCzI5QEsr27G0=
gocloud.dev/pubsub/natspubsub v0.41.0 h1:UxNb0DiAzdnyHut6jcCG7u6lsB/hzxTyZ/RHWeCUJ4Q=
gocloud.dev/pubsub/natspubsub v0.41.0/go.mod h1:uCBKjwvIcuNuf3+ft4wUI9hPHHKQvroxq9ZPB/410ac=
gocloud.dev/pubsub/rabbitpubsub v0.41.0 h1:RutvHbacZxlFr0t3wlr+kz63j53UOfHY3PJR8NKN1EI=
gocloud.dev/pubsub/rabbitpubsub v0.41.0/go.mod h1:s7oQXOlQ2FOj8XmYMv5Ocgs1t+8hIXfsKaWGgECM9SQ=
go.yaml.in/yaml/v3 v3.0.4 h1:tfq32ie2Jv2UxXFdLJdh3jXuOzWiL1fo0bu/FbuKpbc=
go.yaml.in/yaml/v3 v3.0.4/go.mod h1:DhzuOOF2ATzADvBadXxruRBLzYTpT36CKvDb3+aBEFg=
gocloud.dev v0.44.0 h1:iVyMAqFl2r6xUy7M4mfqwlN+21UpJoEtgHEcfiLMUXs=
gocloud.dev v0.44.0/go.mod h1:ZmjROXGdC/eKZLF1N+RujDlFRx3D+4Av2thREKDMVxY=
gocloud.dev/pubsub/kafkapubsub v0.44.0 h1:nQvzfnEN6lCh4j2p+1t0OLS4nmC2U/Ji5aWHVwgkifg=
gocloud.dev/pubsub/kafkapubsub v0.44.0/go.mod h1:/gcNz6OG4HgcY+w2LXwwY4qaRMgtq+SXoPSQU2jOlcw=
gocloud.dev/pubsub/natspubsub v0.44.0 h1:1Us76ckkdgtiE1p1rJZ+38b9TQP051bmjAiQlFQzYrM=
gocloud.dev/pubsub/natspubsub v0.44.0/go.mod h1:PvVAGIhL14PWGwWIXX/zAK42ixr2/PKP4Q4yMiAUraQ=
gocloud.dev/pubsub/rabbitpubsub v0.44.0 h1:MpRIO6XJ/JTqrlUWt3CxwDe1LvaiXUVu4sS5cv4f/AM=
gocloud.dev/pubsub/rabbitpubsub v0.44.0/go.mod h1:BB9+qT3r6g4M5+4asiXaEeqw4QAOzsWusO5krYaqkdA=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20201002170205-7f63de1d35b0/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.6.0/go.mod h1:OFC/31mSvZgRz0V1QTNCzfAI1aIRzbiufJtkMIlEp58=
golang.org/x/crypto v0.13.0/go.mod h1:y6Z2r+Rw4iayiXXAIxJIDAJ1zMW4yaTpebo8fPOliYc=
golang.org/x/crypto v0.18.0/go.mod h1:R0j02AL6hcrfOiy9T4ZYp/rcWeMxM3L6QYxlOuEG1mg=
golang.org/x/crypto v0.39.0 h1:SHs+kF4LP+f+p14esP5jAoDpHU8Gu/v9lFRK6IT5imM=
golang.org/x/crypto v0.39.0/go.mod h1:L+Xg3Wf6HoL4Bn4238Z6ft6KfEpN0tJGo53AAPC632U=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20250506013437-ce4c2cf36ca6 h1:y5zboxd6LQAqYIhHnB48p0ByQ/GnQx2BE33L8BOHQkI=
golang.org/x/exp v0.0.0-20250506013437-ce4c2cf36ca6/go.mod h1:U6Lno4MTRCDY+Ba7aCcauB9T60gsv5s4ralQzP72ZoQ=
golang.org/x/image v0.28.0 h1:gdem5JW1OLS4FbkWgLO+7ZeFzYtL3xClb97GaUzYMFE=
golang.org/x/image v0.28.0/go.mod h1:GUJYXtnGKEUgggyzh+Vxt+AviiCcyiwpsl8iQ8MvwGY=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/crypto v0.46.0 h1:cKRW/pmt1pKAfetfu+RCEvjvZkA9RimPbh7bhFjGVBU=
golang.org/x/crypto v0.46.0/go.mod h1:Evb/oLKmMraqjZ2iQTwDwvCtJkczlDuTmdJXoZVzqU0=
golang.org/x/exp v0.0.0-20251219203646-944ab1f22d93 h1:fQsdNF2N+/YewlRZiricy4P1iimyPKZ/xwniHj8Q2a0=
golang.org/x/exp v0.0.0-20251219203646-944ab1f22d93/go.mod h1:EPRbTFwzwjXj9NpYyyrvenVh9Y+GFeEvMNh7Xuz7xgU=
golang.org/x/image v0.34.0 h1:33gCkyw9hmwbZJeZkct8XyR11yH889EQt/QH4VmXMn8=
golang.org/x/image v0.34.0/go.mod h1:2RNFBZRB+vnwwFil8GkMdRvrJOFd1AzdZI6vOY+eJVU=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/mod v0.7.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/mod v0.12.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/mod v0.14.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
golang.org/x/mod v0.25.0 h1:n7a+ZbQKQA/Ysbyb0/6IbB1H/X41mKgbhfv7AfG/44w=
golang.org/x/mod v0.25.0/go.mod h1:IXM97Txy2VM4PJ3gI61r1YEk/gAj6zAHN3AdZt6S9Ww=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/mod v0.31.0 h1:HaW9xtz0+kOcWKwli0ZXy79Ix+UW/vOfmWI5QVd2tgI=
golang.org/x/mod v0.31.0/go.mod h1:43JraMp9cGx1Rx3AqioxrbrhNsLl2l/iNAvuBkrezpg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200114155413-6afb5195e5aa/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.3.0/go.mod h1:MBQ8lrhLObU/6UmLb4fmbmk5OcyYmqtbGd/9yIeKjEE=
golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/net v0.7.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg=
golang.org/x/net v0.15.0/go.mod h1:idbUs1IY1+zTqbi8yxTbhexhEEk5ur9LInksu6HrEpk=
golang.org/x/net v0.20.0/go.mod h1:z8BVo6PvndSri0LbOE3hAn0apkU+1YvI6E70E9jsnvY=
golang.org/x/net v0.40.0 h1:79Xs7wF06Gbdcg4kdCCIQArK11Z1hr5POQ6+fIYHNuY=
golang.org/x/net v0.40.0/go.mod h1:y0hY0exeL2Pku80/zKK7tpntoX23cqL3Oa6njdgRtds=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.28.0 h1:CrgCKl8PPAVtLnU3c+EDw6x11699EWlsDeWNWKdIOkc=
golang.org/x/oauth2 v0.28.0/go.mod h1:onh5ek6nERTohokkhCD/y2cV4Do3fxFHFuAejCkRWT8=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/net v0.48.0 h1:zyQRTTrjc33Lhh0fBgT/H3oZq9WuvRR5gPC70xpDiQU=
golang.org/x/net v0.48.0/go.mod h1:+ndRgGjkh8FGtu1w1FGbEC31if4VrNVMuKTgcAAnQRY=
golang.org/x/oauth2 v0.34.0 h1:hqK/t4AKgbqWkdkcAeI8XLmbK+4m4G5YeQRrmiotGlw=
golang.org/x/oauth2 v0.34.0/go.mod h1:lzm5WQJQwKZ3nwavOZ3IS5Aulzxi68dUSgRHujetwEA=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.3.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y=
golang.org/x/sync v0.6.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.15.0 h1:KWH3jNZsfyT6xfAfKiz6MRNmd46ByHDYaZ7KSkCtdW8=
golang.org/x/sync v0.15.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sync v0.19.0 h1:vV+1eWNmZ5geRlYjzm2adRgW2/mcpevXNg50YZtPCE4=
golang.org/x/sync v0.19.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201204225414-ed752295db88/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220704084225-05e143d24a9e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.3.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.11.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.16.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.33.0 h1:q3i8TbbEz+JRD9ywIRlyRAQbM0qF7hu24q3teo2hbuw=
golang.org/x/sys v0.33.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/sys v0.39.0 h1:CvCKL8MeisomCi6qNZ+wbb0DN9E5AATixKsvNtMoMFk=
golang.org/x/sys v0.39.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.3.0/go.mod h1:q750SLmJuPmVoN1blW3UFBPREJfb1KmY3vwxfr+nFDA=
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo=
golang.org/x/term v0.12.0/go.mod h1:owVbMEjm3cBLCHdkQu9b1opXd4ETQWc3BhuQGKgXgvU=
golang.org/x/term v0.16.0/go.mod h1:yn7UURbUtPyrVJPGPq404EukNFxcm/foM+bV/bfcDsY=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.5.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/text v0.26.0 h1:P42AVeLghgTYr4+xUnTRKDMqpar+PtX7KWuNQL21L8M=
golang.org/x/text v0.26.0/go.mod h1:QK15LZJUUQVJxhz7wXgxSy/CJaTFjd0G+YLonydOVQA=
golang.org/x/time v0.11.0 h1:/bpjEDfN9tkoN/ryeYHnv5hcMlc8ncjMcM4XBk5NWV0=
golang.org/x/time v0.11.0/go.mod h1:CDIdPxbZBQxdj6cxyCIdrNogrJKMJ7pr37NYpMcMDSg=
golang.org/x/text v0.32.0 h1:ZD01bjUt1FQ9WJ0ClOL5vxgxOI/sVCNgX1YtKwcY0mU=
golang.org/x/text v0.32.0/go.mod h1:o/rUWzghvpD5TXrTIBuJU77MTaN0ljMWE47kxGJQ7jY=
golang.org/x/time v0.14.0 h1:MRx4UaLrDotUKUdCIqzPC48t1Y9hANFKIRpNx+Te8PI=
golang.org/x/time v0.14.0/go.mod h1:eL/Oa2bBBK0TkX57Fyni+NgnyQQN4LitPmob2Hjnqw4=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
golang.org/x/tools v0.4.0/go.mod h1:UE5sM2OK9E/d67R0ANs2xJizIymRP5gJU295PvKXxjQ=
golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU=
golang.org/x/tools v0.13.0/go.mod h1:HvlwmtVNQAhOuCjW7xxvovg8wbNq7LwfXh/k7wXUl58=
golang.org/x/tools v0.17.0/go.mod h1:xsh6VxdV005rRVaS6SSAf9oiAqljS7UZUacMZ8Bnsps=
golang.org/x/tools v0.33.0 h1:4qz2S3zmRxbGIhDIAgjxvFutSvH5EfnsYrRBj0UI0bc=
golang.org/x/tools v0.33.0/go.mod h1:CIJMaWEY88juyUfo7UbgPqbC8rU2OqfAV1h2Qp0oMYI=
golang.org/x/tools v0.40.0 h1:yLkxfA+Qnul4cs9QA3KnlFu0lVmd8JJfoq+E41uSutA=
golang.org/x/tools v0.40.0/go.mod h1:Ik/tzLRlbscWpqqMRjyWYDisX8bG13FrdXp3o4Sr9lc=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20240903120638-7835f813f4da h1:noIWHXmPHxILtqtCOPIhSt0ABwskkZKjD3bXGnZGpNY=
golang.org/x/xerrors v0.0.0-20240903120638-7835f813f4da/go.mod h1:NDW/Ps6MPRej6fsCIbMTohpP40sJ/P/vI1MoTEGwX90=
google.golang.org/api v0.228.0 h1:X2DJ/uoWGnY5obVjewbp8icSL5U4FzuCfy9OjbLSnLs=
google.golang.org/api v0.228.0/go.mod h1:wNvRS1Pbe8r4+IfBIniV8fwCpGwTrYa+kMUDiC5z5a4=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo=
google.golang.org/genproto v0.0.0-20250324211829-b45e905df463 h1:qEFnJI6AnfZk0NNe8YTyXQh5i//Zxi4gBHwRgp76qpw=
google.golang.org/genproto v0.0.0-20250324211829-b45e905df463/go.mod h1:SqIx1NV9hcvqdLHo7uNZDS5lrUJybQ3evo3+z/WBfA0=
google.golang.org/genproto/googleapis/api v0.0.0-20250324211829-b45e905df463 h1:hE3bRWtU6uceqlh4fhrSnUyjKHMKB9KrTLLG+bc0ddM=
google.golang.org/genproto/googleapis/api v0.0.0-20250324211829-b45e905df463/go.mod h1:U90ffi8eUL9MwPcrJylN5+Mk2v3vuPDptd5yyNUiRR8=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250324211829-b45e905df463 h1:e0AIkUUhxyBKh6ssZNrAMeqhA7RKUj42346d1y02i2g=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250324211829-b45e905df463/go.mod h1:qQ0YXyHHx3XkvlzUtpXDkS29lDSafHMZBAZDc03LQ3A=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY=
google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/grpc v1.33.2/go.mod h1:JMHMWHQWaTccqQQlmk3MJZS+GWXOdAesneDmEnv2fbc=
google.golang.org/grpc v1.71.0 h1:kF77BGdPTQ4/JZWMlb9VpJ5pa25aqvVqogsxNHHdeBg=
google.golang.org/grpc v1.71.0/go.mod h1:H0GRtasmQOh9LkFoCPDu3ZrwUtD1YGE+b2vYBYd/8Ec=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE=
google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo=
google.golang.org/protobuf v1.22.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c=
google.golang.org/protobuf v1.36.6 h1:z1NpPI8ku2WgiWnf+t9wTPsn6eP1L7ksHUlkfLvd9xY=
google.golang.org/protobuf v1.36.6/go.mod h1:jduwjTPXsFjZGTmRluh+L6NjiWu7pchiJ2/5YcXBHnY=
gonum.org/v1/gonum v0.16.0 h1:5+ul4Swaf3ESvrOnidPp4GZbzf0mxVQpDCYUQE7OJfk=
gonum.org/v1/gonum v0.16.0/go.mod h1:fef3am4MQ93R2HHpKnLk4/Tbh/s0+wqD5nfa6Pnwy4E=
google.golang.org/api v0.258.0 h1:IKo1j5FBlN74fe5isA2PVozN3Y5pwNKriEgAXPOkDAc=
google.golang.org/api v0.258.0/go.mod h1:qhOMTQEZ6lUps63ZNq9jhODswwjkjYYguA7fA3TBFww=
google.golang.org/genproto v0.0.0-20251202230838-ff82c1b0f217 h1:GvESR9BIyHUahIb0NcTum6itIWtdoglGX+rnGxm2934=
google.golang.org/genproto v0.0.0-20251202230838-ff82c1b0f217/go.mod h1:yJ2HH4EHEDTd3JiLmhds6NkJ17ITVYOdV3m3VKOnws0=
google.golang.org/genproto/googleapis/api v0.0.0-20251202230838-ff82c1b0f217 h1:fCvbg86sFXwdrl5LgVcTEvNC+2txB5mgROGmRL5mrls=
google.golang.org/genproto/googleapis/api v0.0.0-20251202230838-ff82c1b0f217/go.mod h1:+rXWjjaukWZun3mLfjmVnQi18E1AsFbDN9QdJ5YXLto=
google.golang.org/genproto/googleapis/rpc v0.0.0-20251222181119-0a764e51fe1b h1:Mv8VFug0MP9e5vUxfBcE3vUkV6CImK3cMNMIDFjmzxU=
google.golang.org/genproto/googleapis/rpc v0.0.0-20251222181119-0a764e51fe1b/go.mod h1:j9x/tPzZkyxcgEFkiKEEGxfvyumM01BEtsW8xzOahRQ=
google.golang.org/grpc v1.78.0 h1:K1XZG/yGDJnzMdd/uZHAkVqJE+xIDOcmdSFZkBUicNc=
google.golang.org/grpc v1.78.0/go.mod h1:I47qjTo4OKbMkjA/aOOwxDIiPSBofUtQUI5EfpWvW7U=
google.golang.org/protobuf v1.36.11 h1:fV6ZwhNocDyBLK0dj+fg8ektcVegBBuEolpbTQyBNVE=
google.golang.org/protobuf v1.36.11/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j23XfzDpco=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20200227125254-8fa46927fb4f/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.0-20200615113413-eeeca48fe776/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
modernc.org/cc/v4 v4.26.1 h1:+X5NtzVBn0KgsBCBe+xkDC7twLb/jNVj9FPgiwSQO3s=
modernc.org/cc/v4 v4.26.1/go.mod h1:uVtb5OGqUKpoLWhqwNQo/8LwvoiEBLvZXIQ/SmO6mL0=
modernc.org/ccgo/v4 v4.28.0 h1:rjznn6WWehKq7dG4JtLRKxb52Ecv8OUGah8+Z/SfpNU=
modernc.org/ccgo/v4 v4.28.0/go.mod h1:JygV3+9AV6SmPhDasu4JgquwU81XAKLd3OKTUDNOiKE=
modernc.org/fileutil v1.3.1 h1:8vq5fe7jdtEvoCf3Zf9Nm0Q05sH6kGx0Op2CPx1wTC8=
modernc.org/fileutil v1.3.1/go.mod h1:HxmghZSZVAz/LXcMNwZPA/DRrQZEVP9VX0V4LQGQFOc=
modernc.org/cc/v4 v4.27.1 h1:9W30zRlYrefrDV2JE2O8VDtJ1yPGownxciz5rrbQZis=
modernc.org/cc/v4 v4.27.1/go.mod h1:uVtb5OGqUKpoLWhqwNQo/8LwvoiEBLvZXIQ/SmO6mL0=
modernc.org/ccgo/v4 v4.30.1 h1:4r4U1J6Fhj98NKfSjnPUN7Ze2c6MnAdL0hWw6+LrJpc=
modernc.org/ccgo/v4 v4.30.1/go.mod h1:bIOeI1JL54Utlxn+LwrFyjCx2n2RDiYEaJVSrgdrRfM=
modernc.org/fileutil v1.3.40 h1:ZGMswMNc9JOCrcrakF1HrvmergNLAmxOPjizirpfqBA=
modernc.org/fileutil v1.3.40/go.mod h1:HxmghZSZVAz/LXcMNwZPA/DRrQZEVP9VX0V4LQGQFOc=
modernc.org/gc/v2 v2.6.5 h1:nyqdV8q46KvTpZlsw66kWqwXRHdjIlJOhG6kxiV/9xI=
modernc.org/gc/v2 v2.6.5/go.mod h1:YgIahr1ypgfe7chRuJi2gD7DBQiKSLMPgBQe9oIiito=
modernc.org/libc v1.65.7 h1:Ia9Z4yzZtWNtUIuiPuQ7Qf7kxYrxP1/jeHZzG8bFu00=
modernc.org/libc v1.65.7/go.mod h1:011EQibzzio/VX3ygj1qGFt5kMjP0lHb0qCW5/D/pQU=
modernc.org/gc/v3 v3.1.1 h1:k8T3gkXWY9sEiytKhcgyiZ2L0DTyCQ/nvX+LoCljoRE=
modernc.org/gc/v3 v3.1.1/go.mod h1:HFK/6AGESC7Ex+EZJhJ2Gni6cTaYpSMmU/cT9RmlfYY=
modernc.org/goabi0 v0.2.0 h1:HvEowk7LxcPd0eq6mVOAEMai46V+i7Jrj13t4AzuNks=
modernc.org/goabi0 v0.2.0/go.mod h1:CEFRnnJhKvWT1c1JTI3Avm+tgOWbkOu5oPA8eH8LnMI=
modernc.org/libc v1.67.2 h1:ZbNmly1rcbjhot5jlOZG0q4p5VwFfjwWqZ5rY2xxOXo=
modernc.org/libc v1.67.2/go.mod h1:QvvnnJ5P7aitu0ReNpVIEyesuhmDLQ8kaEoyMjIFZJA=
modernc.org/mathutil v1.7.1 h1:GCZVGXdaN8gTqB1Mf/usp1Y/hSqgI2vAGGP4jZMCxOU=
modernc.org/mathutil v1.7.1/go.mod h1:4p5IwJITfppl0G4sUEDtCr4DthTaT47/N3aT6MhfgJg=
modernc.org/memory v1.11.0 h1:o4QC8aMQzmcwCK3t3Ux/ZHmwFPzE6hf2Y5LbkRs+hbI=
@@ -671,8 +577,8 @@ modernc.org/opt v0.1.4 h1:2kNGMRiUjrp4LcaPuLY2PzUfqM/w9N23quVwhKt5Qm8=
modernc.org/opt v0.1.4/go.mod h1:03fq9lsNfvkYSfxrfUhZCWPk1lm4cq4N+Bh//bEtgns=
modernc.org/sortutil v1.2.1 h1:+xyoGf15mM3NMlPDnFqrteY07klSFxLElE2PVuWIJ7w=
modernc.org/sortutil v1.2.1/go.mod h1:7ZI3a3REbai7gzCLcotuw9AC4VZVpYMjDzETGsSMqJE=
modernc.org/sqlite v1.37.1 h1:EgHJK/FPoqC+q2YBXg7fUmES37pCHFc97sI7zSayBEs=
modernc.org/sqlite v1.37.1/go.mod h1:XwdRtsE1MpiBcL54+MbKcaDvcuej+IYSMfLN6gSKV8g=
modernc.org/sqlite v1.41.0 h1:bJXddp4ZpsqMsNN1vS0jWo4IJTZzb8nWpcgvyCFG9Ck=
modernc.org/sqlite v1.41.0/go.mod h1:9fjQZ0mB1LLP0GYrp39oOJXx/I2sxEnZtzCmEQIKvGE=
modernc.org/strutil v1.2.1 h1:UneZBkQA+DX2Rp35KcM69cSsNES9ly8mQWD71HKlOA0=
modernc.org/strutil v1.2.1/go.mod h1:EHkiggD70koQxjVdSBM3JKM7k6L0FbGE5eymy9i3B9A=
modernc.org/token v1.1.0 h1:Xl7Ap9dKaEs5kLoOQeQmPWevfnk/DM5qcLcYlA8ys6Y=


@@ -7,6 +7,7 @@ import (
_ "embed"
"encoding/json"
"io"
"log"
"slices"
"strings"
"sync"
@@ -15,6 +16,24 @@ import (
//go:embed currencies.json
var defaults []byte
const (
MinDecimals = 0
MaxDecimals = 18
)
// clampDecimals ensures the decimals value is within a safe range [0, 18]
func clampDecimals(decimals int, code string) int {
original := decimals
if decimals < MinDecimals {
decimals = MinDecimals
log.Printf("WARNING: Currency %s had negative decimals (%d), normalized to %d", code, original, decimals)
} else if decimals > MaxDecimals {
decimals = MaxDecimals
log.Printf("WARNING: Currency %s had excessive decimals (%d), normalized to %d", code, original, decimals)
}
return decimals
}
type CollectorFunc func() ([]Currency, error)
func CollectJSON(reader io.Reader) CollectorFunc {
@@ -25,6 +44,11 @@ func CollectJSON(reader io.Reader) CollectorFunc {
return nil, err
}
// Clamp decimals during collection to ensure early normalization
for i := range currencies {
currencies[i].Decimals = clampDecimals(currencies[i].Decimals, currencies[i].Code)
}
return currencies, nil
}
}
@@ -48,10 +72,11 @@ func CollectionCurrencies(collectors ...CollectorFunc) ([]Currency, error) {
}
type Currency struct {
Name string `json:"name"`
Code string `json:"code"`
Local string `json:"local"`
Symbol string `json:"symbol"`
Name string `json:"name"`
Code string `json:"code"`
Local string `json:"local"`
Symbol string `json:"symbol"`
Decimals int `json:"decimals"`
}
type CurrencyRegistry struct {
@@ -62,7 +87,10 @@ type CurrencyRegistry struct {
func NewCurrencyService(currencies []Currency) *CurrencyRegistry {
registry := make(map[string]Currency, len(currencies))
for i := range currencies {
registry[currencies[i].Code] = currencies[i]
// Clamp decimals to safe range before adding to registry
currency := currencies[i]
currency.Decimals = clampDecimals(currency.Decimals, currency.Code)
registry[currency.Code] = currency
}
return &CurrencyRegistry{

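The clamp above is pure and order-independent, so it can be exercised in isolation. Below is a minimal, self-contained sketch that copies the clamp logic from the hunk above to show how out-of-range values are normalized; the package main wrapper and the sample currency codes are illustrative assumptions, not part of the repository.

package main

import (
	"fmt"
	"log"
)

// Local copy of the clamp logic shown in the diff above, for illustration only.
const (
	MinDecimals = 0
	MaxDecimals = 18
)

func clampDecimals(decimals int, code string) int {
	original := decimals
	if decimals < MinDecimals {
		decimals = MinDecimals
		log.Printf("WARNING: Currency %s had negative decimals (%d), normalized to %d", code, original, decimals)
	} else if decimals > MaxDecimals {
		decimals = MaxDecimals
		log.Printf("WARNING: Currency %s had excessive decimals (%d), normalized to %d", code, original, decimals)
	}
	return decimals
}

func main() {
	fmt.Println(clampDecimals(-2, "XTS")) // negative input is raised to 0, with a warning logged
	fmt.Println(clampDecimals(30, "XTS")) // oversized input is capped at 18, with a warning logged
	fmt.Println(clampDecimals(2, "USD"))  // in-range input passes through unchanged
}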
File diff suppressed because it is too large


@@ -38,10 +38,11 @@ func bootstrap() {
log.Fatal(err)
}
password := fk.Str(10)
tUser, err = tRepos.Users.Create(ctx, repo.UserCreate{
Name: fk.Str(10),
Email: fk.Email(),
Password: fk.Str(10),
Password: &password,
IsSuperuser: fk.Bool(),
GroupID: tGroup.ID,
})

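The bootstrap change above follows repo.UserCreate.Password switching from a string to a *string, so that OIDC-provisioned accounts can be created without a local password while password-based users pass a pointer to their hash. Below is a minimal sketch of that idea; the UserCreate struct is a simplified local stand-in for illustration, not the real repo type.

package main

import "fmt"

// Simplified stand-in for repo.UserCreate, defined here only for illustration.
type UserCreate struct {
	Name     string
	Email    string
	Password *string // nil means no local password, e.g. an OIDC-managed account
}

func describe(u UserCreate) string {
	if u.Password == nil {
		return u.Email + ": OIDC account, password login disabled"
	}
	return u.Email + ": local password set"
}

func main() {
	hashed := "argon2id$..." // placeholder, stands in for a real password hash
	fmt.Println(describe(UserCreate{Name: "Ada", Email: "ada@example.com", Password: &hashed}))
	fmt.Println(describe(UserCreate{Name: "Bob", Email: "bob@example.com", Password: nil}))
}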

@@ -38,6 +38,10 @@ func (svc *ItemService) Create(ctx Context, item repo.ItemCreate) (repo.ItemOut,
return svc.repo.Items.Create(ctx, ctx.GID, item)
}
func (svc *ItemService) Duplicate(ctx Context, gid, id uuid.UUID, options repo.DuplicateOptions) (repo.ItemOut, error) {
return svc.repo.Items.Duplicate(ctx, gid, id, options)
}
func (svc *ItemService) EnsureAssetID(ctx context.Context, gid uuid.UUID) (int, error) {
items, err := svc.repo.Items.GetAllZeroAssetID(ctx, gid)
if err != nil {


@@ -50,6 +50,7 @@ func (svc *ItemService) AttachmentAdd(ctx Context, itemID uuid.UUID, filename st
_, err = svc.repo.Attachments.Create(ctx, itemID, repo.ItemCreateAttachment{Title: filename, Content: file}, attachmentType, primary)
if err != nil {
log.Err(err).Msg("failed to create attachment")
return repo.ItemOut{}, err
}
return svc.repo.Items.GetOneByGroup(ctx, ctx.GID, itemID)


@@ -10,6 +10,7 @@ import (
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/sysadminsmedia/homebox/backend/internal/data/repo"
"github.com/sysadminsmedia/homebox/backend/internal/sys/config"
)
func TestItemService_AddAttachment(t *testing.T) {
@@ -52,11 +53,61 @@ func TestItemService_AddAttachment(t *testing.T) {
// Check that the file exists
storedPath := afterAttachment.Attachments[0].Path
// {root}/{group}/{item}/{attachment}
assert.Equal(t, path.Join("/", tGroup.ID.String(), "documents"), path.Dir(storedPath))
// path should now be relative: {group}/documents
assert.Equal(t, path.Join(tGroup.ID.String(), "documents"), path.Dir(storedPath))
// Check that the file contents are correct
bts, err := os.ReadFile(path.Join(os.TempDir(), storedPath))
require.NoError(t, err)
assert.Equal(t, contents, string(bts))
}
func TestItemService_AddAttachment_InvalidStorage(t *testing.T) {
// Create a service with an invalid storage path to simulate the issue
svc := &ItemService{
repo: tRepos,
filepath: "/nonexistent/path/that/should/not/exist",
}
// Create a temporary repo with invalid storage config
invalidRepos := repo.New(tClient, tbus, config.Storage{
PrefixPath: "/",
ConnString: "file:///nonexistent/directory/that/does/not/exist",
}, "mem://{{ .Topic }}", config.Thumbnail{
Enabled: false,
Width: 0,
Height: 0,
})
svc.repo = invalidRepos
loc, err := invalidRepos.Locations.Create(context.Background(), tGroup.ID, repo.LocationCreate{
Description: "test",
Name: "test-invalid",
})
require.NoError(t, err)
assert.NotNil(t, loc)
itmC := repo.ItemCreate{
Name: fk.Str(10),
Description: fk.Str(10),
LocationID: loc.ID,
}
itm, err := invalidRepos.Items.Create(context.Background(), tGroup.ID, itmC)
require.NoError(t, err)
assert.NotNil(t, itm)
t.Cleanup(func() {
err := invalidRepos.Items.Delete(context.Background(), itm.ID)
require.NoError(t, err)
})
contents := fk.Str(1000)
reader := strings.NewReader(contents)
// Attempt to add attachment with invalid storage - should return an error
_, err = svc.AttachmentAdd(tCtx, itm.ID, "testfile.txt", "attachment", false, reader)
// This should return an error now (after the fix)
assert.Error(t, err, "AttachmentAdd should return an error when storage is invalid")
}


@@ -3,20 +3,21 @@ package services
import (
"context"
"errors"
"strings"
"time"
"github.com/google/uuid"
"github.com/rs/zerolog/log"
"github.com/sysadminsmedia/homebox/backend/internal/data/ent"
"github.com/sysadminsmedia/homebox/backend/internal/data/ent/authroles"
"github.com/sysadminsmedia/homebox/backend/internal/data/repo"
"github.com/sysadminsmedia/homebox/backend/pkgs/hasher"
)
var (
oneWeek = time.Hour * 24 * 7
ErrorInvalidLogin = errors.New("invalid username or password")
ErrorInvalidToken = errors.New("invalid token")
ErrorTokenIDMismatch = errors.New("token id mismatch")
oneWeek = time.Hour * 24 * 7
ErrorInvalidLogin = errors.New("invalid username or password")
ErrorInvalidToken = errors.New("invalid token")
)
type UserService struct {
@@ -82,7 +83,7 @@ func (svc *UserService) RegisterUser(ctx context.Context, data UserRegistration)
usrCreate := repo.UserCreate{
Name: data.Name,
Email: data.Email,
Password: hashed,
Password: &hashed,
IsSuperuser: false,
GroupID: group.ID,
IsOwner: creatingGroup,
@@ -190,6 +191,14 @@ func (svc *UserService) Login(ctx context.Context, username, password string, ex
return UserAuthTokenDetail{}, ErrorInvalidLogin
}
// SECURITY: Deny login for users with null or empty password (OIDC users)
if usr.PasswordHash == "" {
log.Warn().Str("email", username).Msg("Login attempt blocked for user with null password (likely OIDC user)")
// SECURITY: Perform a dummy hash check so response times stay constant
hasher.CheckPasswordHash("not-a-real-password", "not-a-real-password")
return UserAuthTokenDetail{}, ErrorInvalidLogin
}
check, rehash := hasher.CheckPasswordHash(password, usr.PasswordHash)
if !check {
@@ -210,6 +219,106 @@ func (svc *UserService) Login(ctx context.Context, username, password string, ex
return svc.createSessionToken(ctx, usr.ID, extendedSession)
}
// LoginOIDC creates a session token for a user authenticated via OIDC.
// It now uses issuer + subject for identity association (OIDC spec compliance).
// If the user doesn't exist, it will create one.
func (svc *UserService) LoginOIDC(ctx context.Context, issuer, subject, email, name string) (UserAuthTokenDetail, error) {
issuer = strings.TrimSpace(issuer)
subject = strings.TrimSpace(subject)
email = strings.ToLower(strings.TrimSpace(email))
name = strings.TrimSpace(name)
if issuer == "" || subject == "" {
log.Warn().Str("issuer", issuer).Str("subject", subject).Msg("OIDC login missing issuer or subject")
return UserAuthTokenDetail{}, ErrorInvalidLogin
}
// Try to get existing user by OIDC identity
usr, err := svc.repos.Users.GetOneOIDC(ctx, issuer, subject)
if err != nil {
if !ent.IsNotFound(err) {
log.Err(err).Str("issuer", issuer).Str("subject", subject).Msg("failed to lookup user by OIDC identity")
return UserAuthTokenDetail{}, err
}
// Not found: attempt migration path by email (legacy) if email provided
if email != "" {
legacyUsr, lerr := svc.repos.Users.GetOneEmail(ctx, email)
if lerr == nil {
log.Info().Str("email", email).Str("issuer", issuer).Str("subject", subject).Msg("migrating legacy email-based OIDC user to issuer+subject")
// Update user with OIDC identity fields
if uerr := svc.repos.Users.SetOIDCIdentity(ctx, legacyUsr.ID, issuer, subject); uerr == nil {
usr = legacyUsr
} else {
log.Err(uerr).Str("email", email).Msg("failed to set OIDC identity on legacy user")
}
}
}
}
// Create user if still not resolved
if usr.ID == uuid.Nil {
log.Debug().Str("issuer", issuer).Str("subject", subject).Msg("OIDC user not found, creating new user")
usr, err = svc.registerOIDCUser(ctx, issuer, subject, email, name)
if err != nil {
if ent.IsConstraintError(err) {
if usr2, gerr := svc.repos.Users.GetOneOIDC(ctx, issuer, subject); gerr == nil {
log.Info().Str("issuer", issuer).Str("subject", subject).Msg("OIDC user created concurrently; proceeding")
usr = usr2
} else {
log.Err(gerr).Str("issuer", issuer).Str("subject", subject).Msg("failed to fetch user after constraint error")
return UserAuthTokenDetail{}, gerr
}
} else {
log.Err(err).Str("issuer", issuer).Str("subject", subject).Msg("failed to create OIDC user")
return UserAuthTokenDetail{}, err
}
}
}
return svc.createSessionToken(ctx, usr.ID, true)
}
// registerOIDCUser creates a new user for OIDC authentication with issuer+subject identity.
func (svc *UserService) registerOIDCUser(ctx context.Context, issuer, subject, email, name string) (repo.UserOut, error) {
group, err := svc.repos.Groups.GroupCreate(ctx, "Home")
if err != nil {
log.Err(err).Msg("Failed to create group for OIDC user")
return repo.UserOut{}, err
}
usrCreate := repo.UserCreate{
Name: name,
Email: email,
Password: nil,
IsSuperuser: false,
GroupID: group.ID,
IsOwner: true,
}
entUser, err := svc.repos.Users.CreateWithOIDC(ctx, usrCreate, issuer, subject)
if err != nil {
return repo.UserOut{}, err
}
log.Debug().Str("issuer", issuer).Str("subject", subject).Msg("creating default labels for OIDC user")
for _, label := range defaultLabels() {
_, err := svc.repos.Labels.Create(ctx, group.ID, label)
if err != nil {
log.Err(err).Msg("Failed to create default label")
}
}
log.Debug().Str("issuer", issuer).Str("subject", subject).Msg("creating default locations for OIDC user")
for _, location := range defaultLocations() {
_, err := svc.repos.Locations.Create(ctx, group.ID, location)
if err != nil {
log.Err(err).Msg("Failed to create default location")
}
}
return entUser, nil
}
func (svc *UserService) Logout(ctx context.Context, token string) error {
hash := hasher.HashToken(token)
err := svc.repos.AuthTokens.DeleteToken(ctx, hash)

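LoginOIDC keys identity on issuer + subject and treats email only as a legacy migration hint, so any caller needs to pass all four claim values through. Below is a minimal sketch of how a callback might wire this up; the oidcLoginer interface, idClaims struct, and stub service are hypothetical, and the return value is simplified to a token string where the real method returns a UserAuthTokenDetail.

package main

import (
	"context"
	"fmt"
	"strings"
)

// oidcLoginer is a hypothetical local interface mirroring the shape of the new
// LoginOIDC call; the real method lives on *services.UserService.
type oidcLoginer interface {
	LoginOIDC(ctx context.Context, issuer, subject, email, name string) (string, error)
}

// idClaims is a hypothetical container; in a real handler these values would be
// read from an already-verified OIDC ID token.
type idClaims struct {
	Issuer  string
	Subject string
	Email   string
	Name    string
}

func completeOIDCLogin(ctx context.Context, svc oidcLoginer, c idClaims) (string, error) {
	// Issuer and subject form the stable identity pair; email is only a migration hint.
	if strings.TrimSpace(c.Issuer) == "" || strings.TrimSpace(c.Subject) == "" {
		return "", fmt.Errorf("missing issuer or subject in ID token")
	}
	return svc.LoginOIDC(ctx, c.Issuer, c.Subject, c.Email, c.Name)
}

// stubService lets the sketch compile and run without the real service.
type stubService struct{}

func (stubService) LoginOIDC(_ context.Context, issuer, subject, email, _ string) (string, error) {
	return "session for " + issuer + "/" + subject + " (" + email + ")", nil
}

func main() {
	tok, err := completeOIDCLogin(context.Background(), stubService{}, idClaims{
		Issuer:  "https://id.example.com",
		Subject: "abc123",
		Email:   "ada@example.com",
		Name:    "Ada",
	})
	fmt.Println(tok, err)
}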

@@ -100,7 +100,7 @@ func (*Attachment) scanValues(columns []string) ([]any, error) {
// assignValues assigns the values that were returned from sql.Rows (after scanning)
// to the Attachment fields.
func (a *Attachment) assignValues(columns []string, values []any) error {
func (_m *Attachment) assignValues(columns []string, values []any) error {
if m, n := len(values), len(columns); m < n {
return fmt.Errorf("mismatch number of scan values: %d != %d", m, n)
}
@@ -110,66 +110,66 @@ func (a *Attachment) assignValues(columns []string, values []any) error {
if value, ok := values[i].(*uuid.UUID); !ok {
return fmt.Errorf("unexpected type %T for field id", values[i])
} else if value != nil {
a.ID = *value
_m.ID = *value
}
case attachment.FieldCreatedAt:
if value, ok := values[i].(*sql.NullTime); !ok {
return fmt.Errorf("unexpected type %T for field created_at", values[i])
} else if value.Valid {
a.CreatedAt = value.Time
_m.CreatedAt = value.Time
}
case attachment.FieldUpdatedAt:
if value, ok := values[i].(*sql.NullTime); !ok {
return fmt.Errorf("unexpected type %T for field updated_at", values[i])
} else if value.Valid {
a.UpdatedAt = value.Time
_m.UpdatedAt = value.Time
}
case attachment.FieldType:
if value, ok := values[i].(*sql.NullString); !ok {
return fmt.Errorf("unexpected type %T for field type", values[i])
} else if value.Valid {
a.Type = attachment.Type(value.String)
_m.Type = attachment.Type(value.String)
}
case attachment.FieldPrimary:
if value, ok := values[i].(*sql.NullBool); !ok {
return fmt.Errorf("unexpected type %T for field primary", values[i])
} else if value.Valid {
a.Primary = value.Bool
_m.Primary = value.Bool
}
case attachment.FieldTitle:
if value, ok := values[i].(*sql.NullString); !ok {
return fmt.Errorf("unexpected type %T for field title", values[i])
} else if value.Valid {
a.Title = value.String
_m.Title = value.String
}
case attachment.FieldPath:
if value, ok := values[i].(*sql.NullString); !ok {
return fmt.Errorf("unexpected type %T for field path", values[i])
} else if value.Valid {
a.Path = value.String
_m.Path = value.String
}
case attachment.FieldMimeType:
if value, ok := values[i].(*sql.NullString); !ok {
return fmt.Errorf("unexpected type %T for field mime_type", values[i])
} else if value.Valid {
a.MimeType = value.String
_m.MimeType = value.String
}
case attachment.ForeignKeys[0]:
if value, ok := values[i].(*sql.NullScanner); !ok {
return fmt.Errorf("unexpected type %T for field attachment_thumbnail", values[i])
} else if value.Valid {
a.attachment_thumbnail = new(uuid.UUID)
*a.attachment_thumbnail = *value.S.(*uuid.UUID)
_m.attachment_thumbnail = new(uuid.UUID)
*_m.attachment_thumbnail = *value.S.(*uuid.UUID)
}
case attachment.ForeignKeys[1]:
if value, ok := values[i].(*sql.NullScanner); !ok {
return fmt.Errorf("unexpected type %T for field item_attachments", values[i])
} else if value.Valid {
a.item_attachments = new(uuid.UUID)
*a.item_attachments = *value.S.(*uuid.UUID)
_m.item_attachments = new(uuid.UUID)
*_m.item_attachments = *value.S.(*uuid.UUID)
}
default:
a.selectValues.Set(columns[i], values[i])
_m.selectValues.Set(columns[i], values[i])
}
}
return nil
@@ -177,63 +177,63 @@ func (a *Attachment) assignValues(columns []string, values []any) error {
// Value returns the ent.Value that was dynamically selected and assigned to the Attachment.
// This includes values selected through modifiers, order, etc.
func (a *Attachment) Value(name string) (ent.Value, error) {
return a.selectValues.Get(name)
func (_m *Attachment) Value(name string) (ent.Value, error) {
return _m.selectValues.Get(name)
}
// QueryItem queries the "item" edge of the Attachment entity.
func (a *Attachment) QueryItem() *ItemQuery {
return NewAttachmentClient(a.config).QueryItem(a)
func (_m *Attachment) QueryItem() *ItemQuery {
return NewAttachmentClient(_m.config).QueryItem(_m)
}
// QueryThumbnail queries the "thumbnail" edge of the Attachment entity.
func (a *Attachment) QueryThumbnail() *AttachmentQuery {
return NewAttachmentClient(a.config).QueryThumbnail(a)
func (_m *Attachment) QueryThumbnail() *AttachmentQuery {
return NewAttachmentClient(_m.config).QueryThumbnail(_m)
}
// Update returns a builder for updating this Attachment.
// Note that you need to call Attachment.Unwrap() before calling this method if this Attachment
// was returned from a transaction, and the transaction was committed or rolled back.
func (a *Attachment) Update() *AttachmentUpdateOne {
return NewAttachmentClient(a.config).UpdateOne(a)
func (_m *Attachment) Update() *AttachmentUpdateOne {
return NewAttachmentClient(_m.config).UpdateOne(_m)
}
// Unwrap unwraps the Attachment entity that was returned from a transaction after it was closed,
// so that all future queries will be executed through the driver which created the transaction.
func (a *Attachment) Unwrap() *Attachment {
_tx, ok := a.config.driver.(*txDriver)
func (_m *Attachment) Unwrap() *Attachment {
_tx, ok := _m.config.driver.(*txDriver)
if !ok {
panic("ent: Attachment is not a transactional entity")
}
a.config.driver = _tx.drv
return a
_m.config.driver = _tx.drv
return _m
}
// String implements the fmt.Stringer.
func (a *Attachment) String() string {
func (_m *Attachment) String() string {
var builder strings.Builder
builder.WriteString("Attachment(")
builder.WriteString(fmt.Sprintf("id=%v, ", a.ID))
builder.WriteString(fmt.Sprintf("id=%v, ", _m.ID))
builder.WriteString("created_at=")
builder.WriteString(a.CreatedAt.Format(time.ANSIC))
builder.WriteString(_m.CreatedAt.Format(time.ANSIC))
builder.WriteString(", ")
builder.WriteString("updated_at=")
builder.WriteString(a.UpdatedAt.Format(time.ANSIC))
builder.WriteString(_m.UpdatedAt.Format(time.ANSIC))
builder.WriteString(", ")
builder.WriteString("type=")
builder.WriteString(fmt.Sprintf("%v", a.Type))
builder.WriteString(fmt.Sprintf("%v", _m.Type))
builder.WriteString(", ")
builder.WriteString("primary=")
builder.WriteString(fmt.Sprintf("%v", a.Primary))
builder.WriteString(fmt.Sprintf("%v", _m.Primary))
builder.WriteString(", ")
builder.WriteString("title=")
builder.WriteString(a.Title)
builder.WriteString(_m.Title)
builder.WriteString(", ")
builder.WriteString("path=")
builder.WriteString(a.Path)
builder.WriteString(_m.Path)
builder.WriteString(", ")
builder.WriteString("mime_type=")
builder.WriteString(a.MimeType)
builder.WriteString(_m.MimeType)
builder.WriteByte(')')
return builder.String()
}


@@ -23,169 +23,169 @@ type AttachmentCreate struct {
}
// SetCreatedAt sets the "created_at" field.
func (ac *AttachmentCreate) SetCreatedAt(t time.Time) *AttachmentCreate {
ac.mutation.SetCreatedAt(t)
return ac
func (_c *AttachmentCreate) SetCreatedAt(v time.Time) *AttachmentCreate {
_c.mutation.SetCreatedAt(v)
return _c
}
// SetNillableCreatedAt sets the "created_at" field if the given value is not nil.
func (ac *AttachmentCreate) SetNillableCreatedAt(t *time.Time) *AttachmentCreate {
if t != nil {
ac.SetCreatedAt(*t)
func (_c *AttachmentCreate) SetNillableCreatedAt(v *time.Time) *AttachmentCreate {
if v != nil {
_c.SetCreatedAt(*v)
}
return ac
return _c
}
// SetUpdatedAt sets the "updated_at" field.
func (ac *AttachmentCreate) SetUpdatedAt(t time.Time) *AttachmentCreate {
ac.mutation.SetUpdatedAt(t)
return ac
func (_c *AttachmentCreate) SetUpdatedAt(v time.Time) *AttachmentCreate {
_c.mutation.SetUpdatedAt(v)
return _c
}
// SetNillableUpdatedAt sets the "updated_at" field if the given value is not nil.
func (ac *AttachmentCreate) SetNillableUpdatedAt(t *time.Time) *AttachmentCreate {
if t != nil {
ac.SetUpdatedAt(*t)
func (_c *AttachmentCreate) SetNillableUpdatedAt(v *time.Time) *AttachmentCreate {
if v != nil {
_c.SetUpdatedAt(*v)
}
return ac
return _c
}
// SetType sets the "type" field.
func (ac *AttachmentCreate) SetType(a attachment.Type) *AttachmentCreate {
ac.mutation.SetType(a)
return ac
func (_c *AttachmentCreate) SetType(v attachment.Type) *AttachmentCreate {
_c.mutation.SetType(v)
return _c
}
// SetNillableType sets the "type" field if the given value is not nil.
func (ac *AttachmentCreate) SetNillableType(a *attachment.Type) *AttachmentCreate {
if a != nil {
ac.SetType(*a)
func (_c *AttachmentCreate) SetNillableType(v *attachment.Type) *AttachmentCreate {
if v != nil {
_c.SetType(*v)
}
return ac
return _c
}
// SetPrimary sets the "primary" field.
func (ac *AttachmentCreate) SetPrimary(b bool) *AttachmentCreate {
ac.mutation.SetPrimary(b)
return ac
func (_c *AttachmentCreate) SetPrimary(v bool) *AttachmentCreate {
_c.mutation.SetPrimary(v)
return _c
}
// SetNillablePrimary sets the "primary" field if the given value is not nil.
func (ac *AttachmentCreate) SetNillablePrimary(b *bool) *AttachmentCreate {
if b != nil {
ac.SetPrimary(*b)
func (_c *AttachmentCreate) SetNillablePrimary(v *bool) *AttachmentCreate {
if v != nil {
_c.SetPrimary(*v)
}
return ac
return _c
}
// SetTitle sets the "title" field.
func (ac *AttachmentCreate) SetTitle(s string) *AttachmentCreate {
ac.mutation.SetTitle(s)
return ac
func (_c *AttachmentCreate) SetTitle(v string) *AttachmentCreate {
_c.mutation.SetTitle(v)
return _c
}
// SetNillableTitle sets the "title" field if the given value is not nil.
func (ac *AttachmentCreate) SetNillableTitle(s *string) *AttachmentCreate {
if s != nil {
ac.SetTitle(*s)
func (_c *AttachmentCreate) SetNillableTitle(v *string) *AttachmentCreate {
if v != nil {
_c.SetTitle(*v)
}
return ac
return _c
}
// SetPath sets the "path" field.
func (ac *AttachmentCreate) SetPath(s string) *AttachmentCreate {
ac.mutation.SetPath(s)
return ac
func (_c *AttachmentCreate) SetPath(v string) *AttachmentCreate {
_c.mutation.SetPath(v)
return _c
}
// SetNillablePath sets the "path" field if the given value is not nil.
func (ac *AttachmentCreate) SetNillablePath(s *string) *AttachmentCreate {
if s != nil {
ac.SetPath(*s)
func (_c *AttachmentCreate) SetNillablePath(v *string) *AttachmentCreate {
if v != nil {
_c.SetPath(*v)
}
return ac
return _c
}
// SetMimeType sets the "mime_type" field.
func (ac *AttachmentCreate) SetMimeType(s string) *AttachmentCreate {
ac.mutation.SetMimeType(s)
return ac
func (_c *AttachmentCreate) SetMimeType(v string) *AttachmentCreate {
_c.mutation.SetMimeType(v)
return _c
}
// SetNillableMimeType sets the "mime_type" field if the given value is not nil.
func (ac *AttachmentCreate) SetNillableMimeType(s *string) *AttachmentCreate {
if s != nil {
ac.SetMimeType(*s)
func (_c *AttachmentCreate) SetNillableMimeType(v *string) *AttachmentCreate {
if v != nil {
_c.SetMimeType(*v)
}
return ac
return _c
}
// SetID sets the "id" field.
func (ac *AttachmentCreate) SetID(u uuid.UUID) *AttachmentCreate {
ac.mutation.SetID(u)
return ac
func (_c *AttachmentCreate) SetID(v uuid.UUID) *AttachmentCreate {
_c.mutation.SetID(v)
return _c
}
// SetNillableID sets the "id" field if the given value is not nil.
func (ac *AttachmentCreate) SetNillableID(u *uuid.UUID) *AttachmentCreate {
if u != nil {
ac.SetID(*u)
func (_c *AttachmentCreate) SetNillableID(v *uuid.UUID) *AttachmentCreate {
if v != nil {
_c.SetID(*v)
}
return ac
return _c
}
// SetItemID sets the "item" edge to the Item entity by ID.
func (ac *AttachmentCreate) SetItemID(id uuid.UUID) *AttachmentCreate {
ac.mutation.SetItemID(id)
return ac
func (_c *AttachmentCreate) SetItemID(id uuid.UUID) *AttachmentCreate {
_c.mutation.SetItemID(id)
return _c
}
// SetNillableItemID sets the "item" edge to the Item entity by ID if the given value is not nil.
func (ac *AttachmentCreate) SetNillableItemID(id *uuid.UUID) *AttachmentCreate {
func (_c *AttachmentCreate) SetNillableItemID(id *uuid.UUID) *AttachmentCreate {
if id != nil {
ac = ac.SetItemID(*id)
_c = _c.SetItemID(*id)
}
return ac
return _c
}
// SetItem sets the "item" edge to the Item entity.
func (ac *AttachmentCreate) SetItem(i *Item) *AttachmentCreate {
return ac.SetItemID(i.ID)
func (_c *AttachmentCreate) SetItem(v *Item) *AttachmentCreate {
return _c.SetItemID(v.ID)
}
// SetThumbnailID sets the "thumbnail" edge to the Attachment entity by ID.
func (ac *AttachmentCreate) SetThumbnailID(id uuid.UUID) *AttachmentCreate {
ac.mutation.SetThumbnailID(id)
return ac
func (_c *AttachmentCreate) SetThumbnailID(id uuid.UUID) *AttachmentCreate {
_c.mutation.SetThumbnailID(id)
return _c
}
// SetNillableThumbnailID sets the "thumbnail" edge to the Attachment entity by ID if the given value is not nil.
func (ac *AttachmentCreate) SetNillableThumbnailID(id *uuid.UUID) *AttachmentCreate {
func (_c *AttachmentCreate) SetNillableThumbnailID(id *uuid.UUID) *AttachmentCreate {
if id != nil {
ac = ac.SetThumbnailID(*id)
_c = _c.SetThumbnailID(*id)
}
return ac
return _c
}
// SetThumbnail sets the "thumbnail" edge to the Attachment entity.
func (ac *AttachmentCreate) SetThumbnail(a *Attachment) *AttachmentCreate {
return ac.SetThumbnailID(a.ID)
func (_c *AttachmentCreate) SetThumbnail(v *Attachment) *AttachmentCreate {
return _c.SetThumbnailID(v.ID)
}
// Mutation returns the AttachmentMutation object of the builder.
func (ac *AttachmentCreate) Mutation() *AttachmentMutation {
return ac.mutation
func (_c *AttachmentCreate) Mutation() *AttachmentMutation {
return _c.mutation
}
// Save creates the Attachment in the database.
func (ac *AttachmentCreate) Save(ctx context.Context) (*Attachment, error) {
ac.defaults()
return withHooks(ctx, ac.sqlSave, ac.mutation, ac.hooks)
func (_c *AttachmentCreate) Save(ctx context.Context) (*Attachment, error) {
_c.defaults()
return withHooks(ctx, _c.sqlSave, _c.mutation, _c.hooks)
}
// SaveX calls Save and panics if Save returns an error.
func (ac *AttachmentCreate) SaveX(ctx context.Context) *Attachment {
v, err := ac.Save(ctx)
func (_c *AttachmentCreate) SaveX(ctx context.Context) *Attachment {
v, err := _c.Save(ctx)
if err != nil {
panic(err)
}
@@ -193,91 +193,91 @@ func (ac *AttachmentCreate) SaveX(ctx context.Context) *Attachment {
}
// Exec executes the query.
func (ac *AttachmentCreate) Exec(ctx context.Context) error {
_, err := ac.Save(ctx)
func (_c *AttachmentCreate) Exec(ctx context.Context) error {
_, err := _c.Save(ctx)
return err
}
// ExecX is like Exec, but panics if an error occurs.
func (ac *AttachmentCreate) ExecX(ctx context.Context) {
if err := ac.Exec(ctx); err != nil {
func (_c *AttachmentCreate) ExecX(ctx context.Context) {
if err := _c.Exec(ctx); err != nil {
panic(err)
}
}
// defaults sets the default values of the builder before save.
func (ac *AttachmentCreate) defaults() {
if _, ok := ac.mutation.CreatedAt(); !ok {
func (_c *AttachmentCreate) defaults() {
if _, ok := _c.mutation.CreatedAt(); !ok {
v := attachment.DefaultCreatedAt()
ac.mutation.SetCreatedAt(v)
_c.mutation.SetCreatedAt(v)
}
if _, ok := ac.mutation.UpdatedAt(); !ok {
if _, ok := _c.mutation.UpdatedAt(); !ok {
v := attachment.DefaultUpdatedAt()
ac.mutation.SetUpdatedAt(v)
_c.mutation.SetUpdatedAt(v)
}
if _, ok := ac.mutation.GetType(); !ok {
if _, ok := _c.mutation.GetType(); !ok {
v := attachment.DefaultType
ac.mutation.SetType(v)
_c.mutation.SetType(v)
}
if _, ok := ac.mutation.Primary(); !ok {
if _, ok := _c.mutation.Primary(); !ok {
v := attachment.DefaultPrimary
ac.mutation.SetPrimary(v)
_c.mutation.SetPrimary(v)
}
if _, ok := ac.mutation.Title(); !ok {
if _, ok := _c.mutation.Title(); !ok {
v := attachment.DefaultTitle
ac.mutation.SetTitle(v)
_c.mutation.SetTitle(v)
}
if _, ok := ac.mutation.Path(); !ok {
if _, ok := _c.mutation.Path(); !ok {
v := attachment.DefaultPath
ac.mutation.SetPath(v)
_c.mutation.SetPath(v)
}
if _, ok := ac.mutation.MimeType(); !ok {
if _, ok := _c.mutation.MimeType(); !ok {
v := attachment.DefaultMimeType
ac.mutation.SetMimeType(v)
_c.mutation.SetMimeType(v)
}
if _, ok := ac.mutation.ID(); !ok {
if _, ok := _c.mutation.ID(); !ok {
v := attachment.DefaultID()
ac.mutation.SetID(v)
_c.mutation.SetID(v)
}
}
// check runs all checks and user-defined validators on the builder.
func (ac *AttachmentCreate) check() error {
if _, ok := ac.mutation.CreatedAt(); !ok {
func (_c *AttachmentCreate) check() error {
if _, ok := _c.mutation.CreatedAt(); !ok {
return &ValidationError{Name: "created_at", err: errors.New(`ent: missing required field "Attachment.created_at"`)}
}
if _, ok := ac.mutation.UpdatedAt(); !ok {
if _, ok := _c.mutation.UpdatedAt(); !ok {
return &ValidationError{Name: "updated_at", err: errors.New(`ent: missing required field "Attachment.updated_at"`)}
}
if _, ok := ac.mutation.GetType(); !ok {
if _, ok := _c.mutation.GetType(); !ok {
return &ValidationError{Name: "type", err: errors.New(`ent: missing required field "Attachment.type"`)}
}
if v, ok := ac.mutation.GetType(); ok {
if v, ok := _c.mutation.GetType(); ok {
if err := attachment.TypeValidator(v); err != nil {
return &ValidationError{Name: "type", err: fmt.Errorf(`ent: validator failed for field "Attachment.type": %w`, err)}
}
}
if _, ok := ac.mutation.Primary(); !ok {
if _, ok := _c.mutation.Primary(); !ok {
return &ValidationError{Name: "primary", err: errors.New(`ent: missing required field "Attachment.primary"`)}
}
if _, ok := ac.mutation.Title(); !ok {
if _, ok := _c.mutation.Title(); !ok {
return &ValidationError{Name: "title", err: errors.New(`ent: missing required field "Attachment.title"`)}
}
if _, ok := ac.mutation.Path(); !ok {
if _, ok := _c.mutation.Path(); !ok {
return &ValidationError{Name: "path", err: errors.New(`ent: missing required field "Attachment.path"`)}
}
if _, ok := ac.mutation.MimeType(); !ok {
if _, ok := _c.mutation.MimeType(); !ok {
return &ValidationError{Name: "mime_type", err: errors.New(`ent: missing required field "Attachment.mime_type"`)}
}
return nil
}
func (ac *AttachmentCreate) sqlSave(ctx context.Context) (*Attachment, error) {
if err := ac.check(); err != nil {
func (_c *AttachmentCreate) sqlSave(ctx context.Context) (*Attachment, error) {
if err := _c.check(); err != nil {
return nil, err
}
_node, _spec := ac.createSpec()
if err := sqlgraph.CreateNode(ctx, ac.driver, _spec); err != nil {
_node, _spec := _c.createSpec()
if err := sqlgraph.CreateNode(ctx, _c.driver, _spec); err != nil {
if sqlgraph.IsConstraintError(err) {
err = &ConstraintError{msg: err.Error(), wrap: err}
}
@@ -290,49 +290,49 @@ func (ac *AttachmentCreate) sqlSave(ctx context.Context) (*Attachment, error) {
return nil, err
}
}
ac.mutation.id = &_node.ID
ac.mutation.done = true
_c.mutation.id = &_node.ID
_c.mutation.done = true
return _node, nil
}
func (ac *AttachmentCreate) createSpec() (*Attachment, *sqlgraph.CreateSpec) {
func (_c *AttachmentCreate) createSpec() (*Attachment, *sqlgraph.CreateSpec) {
var (
_node = &Attachment{config: ac.config}
_node = &Attachment{config: _c.config}
_spec = sqlgraph.NewCreateSpec(attachment.Table, sqlgraph.NewFieldSpec(attachment.FieldID, field.TypeUUID))
)
if id, ok := ac.mutation.ID(); ok {
if id, ok := _c.mutation.ID(); ok {
_node.ID = id
_spec.ID.Value = &id
}
if value, ok := ac.mutation.CreatedAt(); ok {
if value, ok := _c.mutation.CreatedAt(); ok {
_spec.SetField(attachment.FieldCreatedAt, field.TypeTime, value)
_node.CreatedAt = value
}
if value, ok := ac.mutation.UpdatedAt(); ok {
if value, ok := _c.mutation.UpdatedAt(); ok {
_spec.SetField(attachment.FieldUpdatedAt, field.TypeTime, value)
_node.UpdatedAt = value
}
if value, ok := ac.mutation.GetType(); ok {
if value, ok := _c.mutation.GetType(); ok {
_spec.SetField(attachment.FieldType, field.TypeEnum, value)
_node.Type = value
}
if value, ok := ac.mutation.Primary(); ok {
if value, ok := _c.mutation.Primary(); ok {
_spec.SetField(attachment.FieldPrimary, field.TypeBool, value)
_node.Primary = value
}
if value, ok := ac.mutation.Title(); ok {
if value, ok := _c.mutation.Title(); ok {
_spec.SetField(attachment.FieldTitle, field.TypeString, value)
_node.Title = value
}
if value, ok := ac.mutation.Path(); ok {
if value, ok := _c.mutation.Path(); ok {
_spec.SetField(attachment.FieldPath, field.TypeString, value)
_node.Path = value
}
if value, ok := ac.mutation.MimeType(); ok {
if value, ok := _c.mutation.MimeType(); ok {
_spec.SetField(attachment.FieldMimeType, field.TypeString, value)
_node.MimeType = value
}
if nodes := ac.mutation.ItemIDs(); len(nodes) > 0 {
if nodes := _c.mutation.ItemIDs(); len(nodes) > 0 {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.M2O,
Inverse: true,
@@ -349,7 +349,7 @@ func (ac *AttachmentCreate) createSpec() (*Attachment, *sqlgraph.CreateSpec) {
_node.item_attachments = &nodes[0]
_spec.Edges = append(_spec.Edges, edge)
}
if nodes := ac.mutation.ThumbnailIDs(); len(nodes) > 0 {
if nodes := _c.mutation.ThumbnailIDs(); len(nodes) > 0 {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.O2O,
Inverse: false,
@@ -377,16 +377,16 @@ type AttachmentCreateBulk struct {
}
// Save creates the Attachment entities in the database.
func (acb *AttachmentCreateBulk) Save(ctx context.Context) ([]*Attachment, error) {
if acb.err != nil {
return nil, acb.err
func (_c *AttachmentCreateBulk) Save(ctx context.Context) ([]*Attachment, error) {
if _c.err != nil {
return nil, _c.err
}
specs := make([]*sqlgraph.CreateSpec, len(acb.builders))
nodes := make([]*Attachment, len(acb.builders))
mutators := make([]Mutator, len(acb.builders))
for i := range acb.builders {
specs := make([]*sqlgraph.CreateSpec, len(_c.builders))
nodes := make([]*Attachment, len(_c.builders))
mutators := make([]Mutator, len(_c.builders))
for i := range _c.builders {
func(i int, root context.Context) {
builder := acb.builders[i]
builder := _c.builders[i]
builder.defaults()
var mut Mutator = MutateFunc(func(ctx context.Context, m Mutation) (Value, error) {
mutation, ok := m.(*AttachmentMutation)
@@ -400,11 +400,11 @@ func (acb *AttachmentCreateBulk) Save(ctx context.Context) ([]*Attachment, error
var err error
nodes[i], specs[i] = builder.createSpec()
if i < len(mutators)-1 {
_, err = mutators[i+1].Mutate(root, acb.builders[i+1].mutation)
_, err = mutators[i+1].Mutate(root, _c.builders[i+1].mutation)
} else {
spec := &sqlgraph.BatchCreateSpec{Nodes: specs}
// Invoke the actual operation on the latest mutation in the chain.
if err = sqlgraph.BatchCreate(ctx, acb.driver, spec); err != nil {
if err = sqlgraph.BatchCreate(ctx, _c.driver, spec); err != nil {
if sqlgraph.IsConstraintError(err) {
err = &ConstraintError{msg: err.Error(), wrap: err}
}
@@ -424,7 +424,7 @@ func (acb *AttachmentCreateBulk) Save(ctx context.Context) ([]*Attachment, error
}(i, ctx)
}
if len(mutators) > 0 {
if _, err := mutators[0].Mutate(ctx, acb.builders[0].mutation); err != nil {
if _, err := mutators[0].Mutate(ctx, _c.builders[0].mutation); err != nil {
return nil, err
}
}
@@ -432,8 +432,8 @@ func (acb *AttachmentCreateBulk) Save(ctx context.Context) ([]*Attachment, error
}
// SaveX is like Save, but panics if an error occurs.
func (acb *AttachmentCreateBulk) SaveX(ctx context.Context) []*Attachment {
v, err := acb.Save(ctx)
func (_c *AttachmentCreateBulk) SaveX(ctx context.Context) []*Attachment {
v, err := _c.Save(ctx)
if err != nil {
panic(err)
}
@@ -441,14 +441,14 @@ func (acb *AttachmentCreateBulk) SaveX(ctx context.Context) []*Attachment {
}
// Exec executes the query.
func (acb *AttachmentCreateBulk) Exec(ctx context.Context) error {
_, err := acb.Save(ctx)
func (_c *AttachmentCreateBulk) Exec(ctx context.Context) error {
_, err := _c.Save(ctx)
return err
}
// ExecX is like Exec, but panics if an error occurs.
func (acb *AttachmentCreateBulk) ExecX(ctx context.Context) {
if err := acb.Exec(ctx); err != nil {
func (_c *AttachmentCreateBulk) ExecX(ctx context.Context) {
if err := _c.Exec(ctx); err != nil {
panic(err)
}
}
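The hunks above only rename the generated create-builder receivers (ac/acb become _c); exported method signatures are untouched, so calling code is unaffected. As a minimal sketch of that caller-facing API, assuming a hypothetical initialized ent client named client and a context ctx (neither appears in this diff):

// Usage sketch, not part of the diff: client, ctx, and the field values are illustrative.
att, err := client.Attachment.Create().
	SetTitle("receipt.pdf").
	SetPath("documents/receipt.pdf").
	SetMimeType("application/pdf").
	Save(ctx)
if err != nil {
	// handle the error; SaveX/ExecX panic instead of returning it
}
_ = att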



@@ -20,56 +20,56 @@ type AttachmentDelete struct {
}
// Where appends a list predicates to the AttachmentDelete builder.
func (ad *AttachmentDelete) Where(ps ...predicate.Attachment) *AttachmentDelete {
ad.mutation.Where(ps...)
return ad
func (_d *AttachmentDelete) Where(ps ...predicate.Attachment) *AttachmentDelete {
_d.mutation.Where(ps...)
return _d
}
// Exec executes the deletion query and returns how many vertices were deleted.
func (ad *AttachmentDelete) Exec(ctx context.Context) (int, error) {
return withHooks(ctx, ad.sqlExec, ad.mutation, ad.hooks)
func (_d *AttachmentDelete) Exec(ctx context.Context) (int, error) {
return withHooks(ctx, _d.sqlExec, _d.mutation, _d.hooks)
}
// ExecX is like Exec, but panics if an error occurs.
func (ad *AttachmentDelete) ExecX(ctx context.Context) int {
n, err := ad.Exec(ctx)
func (_d *AttachmentDelete) ExecX(ctx context.Context) int {
n, err := _d.Exec(ctx)
if err != nil {
panic(err)
}
return n
}
func (ad *AttachmentDelete) sqlExec(ctx context.Context) (int, error) {
func (_d *AttachmentDelete) sqlExec(ctx context.Context) (int, error) {
_spec := sqlgraph.NewDeleteSpec(attachment.Table, sqlgraph.NewFieldSpec(attachment.FieldID, field.TypeUUID))
if ps := ad.mutation.predicates; len(ps) > 0 {
if ps := _d.mutation.predicates; len(ps) > 0 {
_spec.Predicate = func(selector *sql.Selector) {
for i := range ps {
ps[i](selector)
}
}
}
affected, err := sqlgraph.DeleteNodes(ctx, ad.driver, _spec)
affected, err := sqlgraph.DeleteNodes(ctx, _d.driver, _spec)
if err != nil && sqlgraph.IsConstraintError(err) {
err = &ConstraintError{msg: err.Error(), wrap: err}
}
ad.mutation.done = true
_d.mutation.done = true
return affected, err
}
// AttachmentDeleteOne is the builder for deleting a single Attachment entity.
type AttachmentDeleteOne struct {
ad *AttachmentDelete
_d *AttachmentDelete
}
// Where appends a list predicates to the AttachmentDelete builder.
func (ado *AttachmentDeleteOne) Where(ps ...predicate.Attachment) *AttachmentDeleteOne {
ado.ad.mutation.Where(ps...)
return ado
func (_d *AttachmentDeleteOne) Where(ps ...predicate.Attachment) *AttachmentDeleteOne {
_d._d.mutation.Where(ps...)
return _d
}
// Exec executes the deletion query.
func (ado *AttachmentDeleteOne) Exec(ctx context.Context) error {
n, err := ado.ad.Exec(ctx)
func (_d *AttachmentDeleteOne) Exec(ctx context.Context) error {
n, err := _d._d.Exec(ctx)
switch {
case err != nil:
return err
@@ -81,8 +81,8 @@ func (ado *AttachmentDeleteOne) Exec(ctx context.Context) error {
}
// ExecX is like Exec, but panics if an error occurs.
func (ado *AttachmentDeleteOne) ExecX(ctx context.Context) {
if err := ado.Exec(ctx); err != nil {
func (_d *AttachmentDeleteOne) ExecX(ctx context.Context) {
if err := _d.Exec(ctx); err != nil {
panic(err)
}
}
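The delete builders follow the same pattern (ad/ado become _d) with no change to their public API. A hedged usage sketch, assuming the same hypothetical client/ctx and the generated attachment predicate package:

// Usage sketch, not part of the diff: the PrimaryEQ predicate is assumed from the schema's bool "primary" field.
n, err := client.Attachment.Delete().
	Where(attachment.PrimaryEQ(false)).
	Exec(ctx)
// n reports how many rows were deleted; DeleteOne targets a single entity
// and returns an error when it is missing instead of a count.
_, _ = n, err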


@@ -33,44 +33,44 @@ type AttachmentQuery struct {
}
// Where adds a new predicate for the AttachmentQuery builder.
func (aq *AttachmentQuery) Where(ps ...predicate.Attachment) *AttachmentQuery {
aq.predicates = append(aq.predicates, ps...)
return aq
func (_q *AttachmentQuery) Where(ps ...predicate.Attachment) *AttachmentQuery {
_q.predicates = append(_q.predicates, ps...)
return _q
}
// Limit the number of records to be returned by this query.
func (aq *AttachmentQuery) Limit(limit int) *AttachmentQuery {
aq.ctx.Limit = &limit
return aq
func (_q *AttachmentQuery) Limit(limit int) *AttachmentQuery {
_q.ctx.Limit = &limit
return _q
}
// Offset to start from.
func (aq *AttachmentQuery) Offset(offset int) *AttachmentQuery {
aq.ctx.Offset = &offset
return aq
func (_q *AttachmentQuery) Offset(offset int) *AttachmentQuery {
_q.ctx.Offset = &offset
return _q
}
// Unique configures the query builder to filter duplicate records on query.
// By default, unique is set to true, and can be disabled using this method.
func (aq *AttachmentQuery) Unique(unique bool) *AttachmentQuery {
aq.ctx.Unique = &unique
return aq
func (_q *AttachmentQuery) Unique(unique bool) *AttachmentQuery {
_q.ctx.Unique = &unique
return _q
}
// Order specifies how the records should be ordered.
func (aq *AttachmentQuery) Order(o ...attachment.OrderOption) *AttachmentQuery {
aq.order = append(aq.order, o...)
return aq
func (_q *AttachmentQuery) Order(o ...attachment.OrderOption) *AttachmentQuery {
_q.order = append(_q.order, o...)
return _q
}
// QueryItem chains the current query on the "item" edge.
func (aq *AttachmentQuery) QueryItem() *ItemQuery {
query := (&ItemClient{config: aq.config}).Query()
func (_q *AttachmentQuery) QueryItem() *ItemQuery {
query := (&ItemClient{config: _q.config}).Query()
query.path = func(ctx context.Context) (fromU *sql.Selector, err error) {
if err := aq.prepareQuery(ctx); err != nil {
if err := _q.prepareQuery(ctx); err != nil {
return nil, err
}
selector := aq.sqlQuery(ctx)
selector := _q.sqlQuery(ctx)
if err := selector.Err(); err != nil {
return nil, err
}
@@ -79,20 +79,20 @@ func (aq *AttachmentQuery) QueryItem() *ItemQuery {
sqlgraph.To(item.Table, item.FieldID),
sqlgraph.Edge(sqlgraph.M2O, true, attachment.ItemTable, attachment.ItemColumn),
)
fromU = sqlgraph.SetNeighbors(aq.driver.Dialect(), step)
fromU = sqlgraph.SetNeighbors(_q.driver.Dialect(), step)
return fromU, nil
}
return query
}
// QueryThumbnail chains the current query on the "thumbnail" edge.
func (aq *AttachmentQuery) QueryThumbnail() *AttachmentQuery {
query := (&AttachmentClient{config: aq.config}).Query()
func (_q *AttachmentQuery) QueryThumbnail() *AttachmentQuery {
query := (&AttachmentClient{config: _q.config}).Query()
query.path = func(ctx context.Context) (fromU *sql.Selector, err error) {
if err := aq.prepareQuery(ctx); err != nil {
if err := _q.prepareQuery(ctx); err != nil {
return nil, err
}
selector := aq.sqlQuery(ctx)
selector := _q.sqlQuery(ctx)
if err := selector.Err(); err != nil {
return nil, err
}
@@ -101,7 +101,7 @@ func (aq *AttachmentQuery) QueryThumbnail() *AttachmentQuery {
sqlgraph.To(attachment.Table, attachment.FieldID),
sqlgraph.Edge(sqlgraph.O2O, false, attachment.ThumbnailTable, attachment.ThumbnailColumn),
)
fromU = sqlgraph.SetNeighbors(aq.driver.Dialect(), step)
fromU = sqlgraph.SetNeighbors(_q.driver.Dialect(), step)
return fromU, nil
}
return query
@@ -109,8 +109,8 @@ func (aq *AttachmentQuery) QueryThumbnail() *AttachmentQuery {
// First returns the first Attachment entity from the query.
// Returns a *NotFoundError when no Attachment was found.
func (aq *AttachmentQuery) First(ctx context.Context) (*Attachment, error) {
nodes, err := aq.Limit(1).All(setContextOp(ctx, aq.ctx, ent.OpQueryFirst))
func (_q *AttachmentQuery) First(ctx context.Context) (*Attachment, error) {
nodes, err := _q.Limit(1).All(setContextOp(ctx, _q.ctx, ent.OpQueryFirst))
if err != nil {
return nil, err
}
@@ -121,8 +121,8 @@ func (aq *AttachmentQuery) First(ctx context.Context) (*Attachment, error) {
}
// FirstX is like First, but panics if an error occurs.
func (aq *AttachmentQuery) FirstX(ctx context.Context) *Attachment {
node, err := aq.First(ctx)
func (_q *AttachmentQuery) FirstX(ctx context.Context) *Attachment {
node, err := _q.First(ctx)
if err != nil && !IsNotFound(err) {
panic(err)
}
@@ -131,9 +131,9 @@ func (aq *AttachmentQuery) FirstX(ctx context.Context) *Attachment {
// FirstID returns the first Attachment ID from the query.
// Returns a *NotFoundError when no Attachment ID was found.
func (aq *AttachmentQuery) FirstID(ctx context.Context) (id uuid.UUID, err error) {
func (_q *AttachmentQuery) FirstID(ctx context.Context) (id uuid.UUID, err error) {
var ids []uuid.UUID
if ids, err = aq.Limit(1).IDs(setContextOp(ctx, aq.ctx, ent.OpQueryFirstID)); err != nil {
if ids, err = _q.Limit(1).IDs(setContextOp(ctx, _q.ctx, ent.OpQueryFirstID)); err != nil {
return
}
if len(ids) == 0 {
@@ -144,8 +144,8 @@ func (aq *AttachmentQuery) FirstID(ctx context.Context) (id uuid.UUID, err error
}
// FirstIDX is like FirstID, but panics if an error occurs.
func (aq *AttachmentQuery) FirstIDX(ctx context.Context) uuid.UUID {
id, err := aq.FirstID(ctx)
func (_q *AttachmentQuery) FirstIDX(ctx context.Context) uuid.UUID {
id, err := _q.FirstID(ctx)
if err != nil && !IsNotFound(err) {
panic(err)
}
@@ -155,8 +155,8 @@ func (aq *AttachmentQuery) FirstIDX(ctx context.Context) uuid.UUID {
// Only returns a single Attachment entity found by the query, ensuring it only returns one.
// Returns a *NotSingularError when more than one Attachment entity is found.
// Returns a *NotFoundError when no Attachment entities are found.
func (aq *AttachmentQuery) Only(ctx context.Context) (*Attachment, error) {
nodes, err := aq.Limit(2).All(setContextOp(ctx, aq.ctx, ent.OpQueryOnly))
func (_q *AttachmentQuery) Only(ctx context.Context) (*Attachment, error) {
nodes, err := _q.Limit(2).All(setContextOp(ctx, _q.ctx, ent.OpQueryOnly))
if err != nil {
return nil, err
}
@@ -171,8 +171,8 @@ func (aq *AttachmentQuery) Only(ctx context.Context) (*Attachment, error) {
}
// OnlyX is like Only, but panics if an error occurs.
func (aq *AttachmentQuery) OnlyX(ctx context.Context) *Attachment {
node, err := aq.Only(ctx)
func (_q *AttachmentQuery) OnlyX(ctx context.Context) *Attachment {
node, err := _q.Only(ctx)
if err != nil {
panic(err)
}
@@ -182,9 +182,9 @@ func (aq *AttachmentQuery) OnlyX(ctx context.Context) *Attachment {
// OnlyID is like Only, but returns the only Attachment ID in the query.
// Returns a *NotSingularError when more than one Attachment ID is found.
// Returns a *NotFoundError when no entities are found.
func (aq *AttachmentQuery) OnlyID(ctx context.Context) (id uuid.UUID, err error) {
func (_q *AttachmentQuery) OnlyID(ctx context.Context) (id uuid.UUID, err error) {
var ids []uuid.UUID
if ids, err = aq.Limit(2).IDs(setContextOp(ctx, aq.ctx, ent.OpQueryOnlyID)); err != nil {
if ids, err = _q.Limit(2).IDs(setContextOp(ctx, _q.ctx, ent.OpQueryOnlyID)); err != nil {
return
}
switch len(ids) {
@@ -199,8 +199,8 @@ func (aq *AttachmentQuery) OnlyID(ctx context.Context) (id uuid.UUID, err error)
}
// OnlyIDX is like OnlyID, but panics if an error occurs.
func (aq *AttachmentQuery) OnlyIDX(ctx context.Context) uuid.UUID {
id, err := aq.OnlyID(ctx)
func (_q *AttachmentQuery) OnlyIDX(ctx context.Context) uuid.UUID {
id, err := _q.OnlyID(ctx)
if err != nil {
panic(err)
}
@@ -208,18 +208,18 @@ func (aq *AttachmentQuery) OnlyIDX(ctx context.Context) uuid.UUID {
}
// All executes the query and returns a list of Attachments.
func (aq *AttachmentQuery) All(ctx context.Context) ([]*Attachment, error) {
ctx = setContextOp(ctx, aq.ctx, ent.OpQueryAll)
if err := aq.prepareQuery(ctx); err != nil {
func (_q *AttachmentQuery) All(ctx context.Context) ([]*Attachment, error) {
ctx = setContextOp(ctx, _q.ctx, ent.OpQueryAll)
if err := _q.prepareQuery(ctx); err != nil {
return nil, err
}
qr := querierAll[[]*Attachment, *AttachmentQuery]()
return withInterceptors[[]*Attachment](ctx, aq, qr, aq.inters)
return withInterceptors[[]*Attachment](ctx, _q, qr, _q.inters)
}
// AllX is like All, but panics if an error occurs.
func (aq *AttachmentQuery) AllX(ctx context.Context) []*Attachment {
nodes, err := aq.All(ctx)
func (_q *AttachmentQuery) AllX(ctx context.Context) []*Attachment {
nodes, err := _q.All(ctx)
if err != nil {
panic(err)
}
@@ -227,20 +227,20 @@ func (aq *AttachmentQuery) AllX(ctx context.Context) []*Attachment {
}
// IDs executes the query and returns a list of Attachment IDs.
func (aq *AttachmentQuery) IDs(ctx context.Context) (ids []uuid.UUID, err error) {
if aq.ctx.Unique == nil && aq.path != nil {
aq.Unique(true)
func (_q *AttachmentQuery) IDs(ctx context.Context) (ids []uuid.UUID, err error) {
if _q.ctx.Unique == nil && _q.path != nil {
_q.Unique(true)
}
ctx = setContextOp(ctx, aq.ctx, ent.OpQueryIDs)
if err = aq.Select(attachment.FieldID).Scan(ctx, &ids); err != nil {
ctx = setContextOp(ctx, _q.ctx, ent.OpQueryIDs)
if err = _q.Select(attachment.FieldID).Scan(ctx, &ids); err != nil {
return nil, err
}
return ids, nil
}
// IDsX is like IDs, but panics if an error occurs.
func (aq *AttachmentQuery) IDsX(ctx context.Context) []uuid.UUID {
ids, err := aq.IDs(ctx)
func (_q *AttachmentQuery) IDsX(ctx context.Context) []uuid.UUID {
ids, err := _q.IDs(ctx)
if err != nil {
panic(err)
}
@@ -248,17 +248,17 @@ func (aq *AttachmentQuery) IDsX(ctx context.Context) []uuid.UUID {
}
// Count returns the count of the given query.
func (aq *AttachmentQuery) Count(ctx context.Context) (int, error) {
ctx = setContextOp(ctx, aq.ctx, ent.OpQueryCount)
if err := aq.prepareQuery(ctx); err != nil {
func (_q *AttachmentQuery) Count(ctx context.Context) (int, error) {
ctx = setContextOp(ctx, _q.ctx, ent.OpQueryCount)
if err := _q.prepareQuery(ctx); err != nil {
return 0, err
}
return withInterceptors[int](ctx, aq, querierCount[*AttachmentQuery](), aq.inters)
return withInterceptors[int](ctx, _q, querierCount[*AttachmentQuery](), _q.inters)
}
// CountX is like Count, but panics if an error occurs.
func (aq *AttachmentQuery) CountX(ctx context.Context) int {
count, err := aq.Count(ctx)
func (_q *AttachmentQuery) CountX(ctx context.Context) int {
count, err := _q.Count(ctx)
if err != nil {
panic(err)
}
@@ -266,9 +266,9 @@ func (aq *AttachmentQuery) CountX(ctx context.Context) int {
}
// Exist returns true if the query has elements in the graph.
func (aq *AttachmentQuery) Exist(ctx context.Context) (bool, error) {
ctx = setContextOp(ctx, aq.ctx, ent.OpQueryExist)
switch _, err := aq.FirstID(ctx); {
func (_q *AttachmentQuery) Exist(ctx context.Context) (bool, error) {
ctx = setContextOp(ctx, _q.ctx, ent.OpQueryExist)
switch _, err := _q.FirstID(ctx); {
case IsNotFound(err):
return false, nil
case err != nil:
@@ -279,8 +279,8 @@ func (aq *AttachmentQuery) Exist(ctx context.Context) (bool, error) {
}
// ExistX is like Exist, but panics if an error occurs.
func (aq *AttachmentQuery) ExistX(ctx context.Context) bool {
exist, err := aq.Exist(ctx)
func (_q *AttachmentQuery) ExistX(ctx context.Context) bool {
exist, err := _q.Exist(ctx)
if err != nil {
panic(err)
}
@@ -289,44 +289,44 @@ func (aq *AttachmentQuery) ExistX(ctx context.Context) bool {
// Clone returns a duplicate of the AttachmentQuery builder, including all associated steps. It can be
// used to prepare common query builders and use them differently after the clone is made.
func (aq *AttachmentQuery) Clone() *AttachmentQuery {
if aq == nil {
func (_q *AttachmentQuery) Clone() *AttachmentQuery {
if _q == nil {
return nil
}
return &AttachmentQuery{
config: aq.config,
ctx: aq.ctx.Clone(),
order: append([]attachment.OrderOption{}, aq.order...),
inters: append([]Interceptor{}, aq.inters...),
predicates: append([]predicate.Attachment{}, aq.predicates...),
withItem: aq.withItem.Clone(),
withThumbnail: aq.withThumbnail.Clone(),
config: _q.config,
ctx: _q.ctx.Clone(),
order: append([]attachment.OrderOption{}, _q.order...),
inters: append([]Interceptor{}, _q.inters...),
predicates: append([]predicate.Attachment{}, _q.predicates...),
withItem: _q.withItem.Clone(),
withThumbnail: _q.withThumbnail.Clone(),
// clone intermediate query.
sql: aq.sql.Clone(),
path: aq.path,
sql: _q.sql.Clone(),
path: _q.path,
}
}
// WithItem tells the query-builder to eager-load the nodes that are connected to
// the "item" edge. The optional arguments are used to configure the query builder of the edge.
func (aq *AttachmentQuery) WithItem(opts ...func(*ItemQuery)) *AttachmentQuery {
query := (&ItemClient{config: aq.config}).Query()
func (_q *AttachmentQuery) WithItem(opts ...func(*ItemQuery)) *AttachmentQuery {
query := (&ItemClient{config: _q.config}).Query()
for _, opt := range opts {
opt(query)
}
aq.withItem = query
return aq
_q.withItem = query
return _q
}
// WithThumbnail tells the query-builder to eager-load the nodes that are connected to
// the "thumbnail" edge. The optional arguments are used to configure the query builder of the edge.
func (aq *AttachmentQuery) WithThumbnail(opts ...func(*AttachmentQuery)) *AttachmentQuery {
query := (&AttachmentClient{config: aq.config}).Query()
func (_q *AttachmentQuery) WithThumbnail(opts ...func(*AttachmentQuery)) *AttachmentQuery {
query := (&AttachmentClient{config: _q.config}).Query()
for _, opt := range opts {
opt(query)
}
aq.withThumbnail = query
return aq
_q.withThumbnail = query
return _q
}
// GroupBy is used to group vertices by one or more fields/columns.
@@ -343,10 +343,10 @@ func (aq *AttachmentQuery) WithThumbnail(opts ...func(*AttachmentQuery)) *Attach
// GroupBy(attachment.FieldCreatedAt).
// Aggregate(ent.Count()).
// Scan(ctx, &v)
func (aq *AttachmentQuery) GroupBy(field string, fields ...string) *AttachmentGroupBy {
aq.ctx.Fields = append([]string{field}, fields...)
grbuild := &AttachmentGroupBy{build: aq}
grbuild.flds = &aq.ctx.Fields
func (_q *AttachmentQuery) GroupBy(field string, fields ...string) *AttachmentGroupBy {
_q.ctx.Fields = append([]string{field}, fields...)
grbuild := &AttachmentGroupBy{build: _q}
grbuild.flds = &_q.ctx.Fields
grbuild.label = attachment.Label
grbuild.scan = grbuild.Scan
return grbuild
@@ -364,56 +364,56 @@ func (aq *AttachmentQuery) GroupBy(field string, fields ...string) *AttachmentGr
// client.Attachment.Query().
// Select(attachment.FieldCreatedAt).
// Scan(ctx, &v)
func (aq *AttachmentQuery) Select(fields ...string) *AttachmentSelect {
aq.ctx.Fields = append(aq.ctx.Fields, fields...)
sbuild := &AttachmentSelect{AttachmentQuery: aq}
func (_q *AttachmentQuery) Select(fields ...string) *AttachmentSelect {
_q.ctx.Fields = append(_q.ctx.Fields, fields...)
sbuild := &AttachmentSelect{AttachmentQuery: _q}
sbuild.label = attachment.Label
sbuild.flds, sbuild.scan = &aq.ctx.Fields, sbuild.Scan
sbuild.flds, sbuild.scan = &_q.ctx.Fields, sbuild.Scan
return sbuild
}
// Aggregate returns a AttachmentSelect configured with the given aggregations.
func (aq *AttachmentQuery) Aggregate(fns ...AggregateFunc) *AttachmentSelect {
return aq.Select().Aggregate(fns...)
func (_q *AttachmentQuery) Aggregate(fns ...AggregateFunc) *AttachmentSelect {
return _q.Select().Aggregate(fns...)
}
func (aq *AttachmentQuery) prepareQuery(ctx context.Context) error {
for _, inter := range aq.inters {
func (_q *AttachmentQuery) prepareQuery(ctx context.Context) error {
for _, inter := range _q.inters {
if inter == nil {
return fmt.Errorf("ent: uninitialized interceptor (forgotten import ent/runtime?)")
}
if trv, ok := inter.(Traverser); ok {
if err := trv.Traverse(ctx, aq); err != nil {
if err := trv.Traverse(ctx, _q); err != nil {
return err
}
}
}
for _, f := range aq.ctx.Fields {
for _, f := range _q.ctx.Fields {
if !attachment.ValidColumn(f) {
return &ValidationError{Name: f, err: fmt.Errorf("ent: invalid field %q for query", f)}
}
}
if aq.path != nil {
prev, err := aq.path(ctx)
if _q.path != nil {
prev, err := _q.path(ctx)
if err != nil {
return err
}
aq.sql = prev
_q.sql = prev
}
return nil
}
func (aq *AttachmentQuery) sqlAll(ctx context.Context, hooks ...queryHook) ([]*Attachment, error) {
func (_q *AttachmentQuery) sqlAll(ctx context.Context, hooks ...queryHook) ([]*Attachment, error) {
var (
nodes = []*Attachment{}
withFKs = aq.withFKs
_spec = aq.querySpec()
withFKs = _q.withFKs
_spec = _q.querySpec()
loadedTypes = [2]bool{
aq.withItem != nil,
aq.withThumbnail != nil,
_q.withItem != nil,
_q.withThumbnail != nil,
}
)
if aq.withItem != nil || aq.withThumbnail != nil {
if _q.withItem != nil || _q.withThumbnail != nil {
withFKs = true
}
if withFKs {
@@ -423,7 +423,7 @@ func (aq *AttachmentQuery) sqlAll(ctx context.Context, hooks ...queryHook) ([]*A
return (*Attachment).scanValues(nil, columns)
}
_spec.Assign = func(columns []string, values []any) error {
node := &Attachment{config: aq.config}
node := &Attachment{config: _q.config}
nodes = append(nodes, node)
node.Edges.loadedTypes = loadedTypes
return node.assignValues(columns, values)
@@ -431,20 +431,20 @@ func (aq *AttachmentQuery) sqlAll(ctx context.Context, hooks ...queryHook) ([]*A
for i := range hooks {
hooks[i](ctx, _spec)
}
if err := sqlgraph.QueryNodes(ctx, aq.driver, _spec); err != nil {
if err := sqlgraph.QueryNodes(ctx, _q.driver, _spec); err != nil {
return nil, err
}
if len(nodes) == 0 {
return nodes, nil
}
if query := aq.withItem; query != nil {
if err := aq.loadItem(ctx, query, nodes, nil,
if query := _q.withItem; query != nil {
if err := _q.loadItem(ctx, query, nodes, nil,
func(n *Attachment, e *Item) { n.Edges.Item = e }); err != nil {
return nil, err
}
}
if query := aq.withThumbnail; query != nil {
if err := aq.loadThumbnail(ctx, query, nodes, nil,
if query := _q.withThumbnail; query != nil {
if err := _q.loadThumbnail(ctx, query, nodes, nil,
func(n *Attachment, e *Attachment) { n.Edges.Thumbnail = e }); err != nil {
return nil, err
}
@@ -452,7 +452,7 @@ func (aq *AttachmentQuery) sqlAll(ctx context.Context, hooks ...queryHook) ([]*A
return nodes, nil
}
func (aq *AttachmentQuery) loadItem(ctx context.Context, query *ItemQuery, nodes []*Attachment, init func(*Attachment), assign func(*Attachment, *Item)) error {
func (_q *AttachmentQuery) loadItem(ctx context.Context, query *ItemQuery, nodes []*Attachment, init func(*Attachment), assign func(*Attachment, *Item)) error {
ids := make([]uuid.UUID, 0, len(nodes))
nodeids := make(map[uuid.UUID][]*Attachment)
for i := range nodes {
@@ -484,7 +484,7 @@ func (aq *AttachmentQuery) loadItem(ctx context.Context, query *ItemQuery, nodes
}
return nil
}
func (aq *AttachmentQuery) loadThumbnail(ctx context.Context, query *AttachmentQuery, nodes []*Attachment, init func(*Attachment), assign func(*Attachment, *Attachment)) error {
func (_q *AttachmentQuery) loadThumbnail(ctx context.Context, query *AttachmentQuery, nodes []*Attachment, init func(*Attachment), assign func(*Attachment, *Attachment)) error {
ids := make([]uuid.UUID, 0, len(nodes))
nodeids := make(map[uuid.UUID][]*Attachment)
for i := range nodes {
@@ -517,24 +517,24 @@ func (aq *AttachmentQuery) loadThumbnail(ctx context.Context, query *AttachmentQ
return nil
}
func (aq *AttachmentQuery) sqlCount(ctx context.Context) (int, error) {
_spec := aq.querySpec()
_spec.Node.Columns = aq.ctx.Fields
if len(aq.ctx.Fields) > 0 {
_spec.Unique = aq.ctx.Unique != nil && *aq.ctx.Unique
func (_q *AttachmentQuery) sqlCount(ctx context.Context) (int, error) {
_spec := _q.querySpec()
_spec.Node.Columns = _q.ctx.Fields
if len(_q.ctx.Fields) > 0 {
_spec.Unique = _q.ctx.Unique != nil && *_q.ctx.Unique
}
return sqlgraph.CountNodes(ctx, aq.driver, _spec)
return sqlgraph.CountNodes(ctx, _q.driver, _spec)
}
func (aq *AttachmentQuery) querySpec() *sqlgraph.QuerySpec {
func (_q *AttachmentQuery) querySpec() *sqlgraph.QuerySpec {
_spec := sqlgraph.NewQuerySpec(attachment.Table, attachment.Columns, sqlgraph.NewFieldSpec(attachment.FieldID, field.TypeUUID))
_spec.From = aq.sql
if unique := aq.ctx.Unique; unique != nil {
_spec.From = _q.sql
if unique := _q.ctx.Unique; unique != nil {
_spec.Unique = *unique
} else if aq.path != nil {
} else if _q.path != nil {
_spec.Unique = true
}
if fields := aq.ctx.Fields; len(fields) > 0 {
if fields := _q.ctx.Fields; len(fields) > 0 {
_spec.Node.Columns = make([]string, 0, len(fields))
_spec.Node.Columns = append(_spec.Node.Columns, attachment.FieldID)
for i := range fields {
@@ -543,20 +543,20 @@ func (aq *AttachmentQuery) querySpec() *sqlgraph.QuerySpec {
}
}
}
if ps := aq.predicates; len(ps) > 0 {
if ps := _q.predicates; len(ps) > 0 {
_spec.Predicate = func(selector *sql.Selector) {
for i := range ps {
ps[i](selector)
}
}
}
if limit := aq.ctx.Limit; limit != nil {
if limit := _q.ctx.Limit; limit != nil {
_spec.Limit = *limit
}
if offset := aq.ctx.Offset; offset != nil {
if offset := _q.ctx.Offset; offset != nil {
_spec.Offset = *offset
}
if ps := aq.order; len(ps) > 0 {
if ps := _q.order; len(ps) > 0 {
_spec.Order = func(selector *sql.Selector) {
for i := range ps {
ps[i](selector)
@@ -566,33 +566,33 @@ func (aq *AttachmentQuery) querySpec() *sqlgraph.QuerySpec {
return _spec
}
func (aq *AttachmentQuery) sqlQuery(ctx context.Context) *sql.Selector {
builder := sql.Dialect(aq.driver.Dialect())
func (_q *AttachmentQuery) sqlQuery(ctx context.Context) *sql.Selector {
builder := sql.Dialect(_q.driver.Dialect())
t1 := builder.Table(attachment.Table)
columns := aq.ctx.Fields
columns := _q.ctx.Fields
if len(columns) == 0 {
columns = attachment.Columns
}
selector := builder.Select(t1.Columns(columns...)...).From(t1)
if aq.sql != nil {
selector = aq.sql
if _q.sql != nil {
selector = _q.sql
selector.Select(selector.Columns(columns...)...)
}
if aq.ctx.Unique != nil && *aq.ctx.Unique {
if _q.ctx.Unique != nil && *_q.ctx.Unique {
selector.Distinct()
}
for _, p := range aq.predicates {
for _, p := range _q.predicates {
p(selector)
}
for _, p := range aq.order {
for _, p := range _q.order {
p(selector)
}
if offset := aq.ctx.Offset; offset != nil {
if offset := _q.ctx.Offset; offset != nil {
// limit is mandatory for offset clause. We start
// with default value, and override it below if needed.
selector.Offset(*offset).Limit(math.MaxInt32)
}
if limit := aq.ctx.Limit; limit != nil {
if limit := _q.ctx.Limit; limit != nil {
selector.Limit(*limit)
}
return selector
@@ -605,41 +605,41 @@ type AttachmentGroupBy struct {
}
// Aggregate adds the given aggregation functions to the group-by query.
func (agb *AttachmentGroupBy) Aggregate(fns ...AggregateFunc) *AttachmentGroupBy {
agb.fns = append(agb.fns, fns...)
return agb
func (_g *AttachmentGroupBy) Aggregate(fns ...AggregateFunc) *AttachmentGroupBy {
_g.fns = append(_g.fns, fns...)
return _g
}
// Scan applies the selector query and scans the result into the given value.
func (agb *AttachmentGroupBy) Scan(ctx context.Context, v any) error {
ctx = setContextOp(ctx, agb.build.ctx, ent.OpQueryGroupBy)
if err := agb.build.prepareQuery(ctx); err != nil {
func (_g *AttachmentGroupBy) Scan(ctx context.Context, v any) error {
ctx = setContextOp(ctx, _g.build.ctx, ent.OpQueryGroupBy)
if err := _g.build.prepareQuery(ctx); err != nil {
return err
}
return scanWithInterceptors[*AttachmentQuery, *AttachmentGroupBy](ctx, agb.build, agb, agb.build.inters, v)
return scanWithInterceptors[*AttachmentQuery, *AttachmentGroupBy](ctx, _g.build, _g, _g.build.inters, v)
}
func (agb *AttachmentGroupBy) sqlScan(ctx context.Context, root *AttachmentQuery, v any) error {
func (_g *AttachmentGroupBy) sqlScan(ctx context.Context, root *AttachmentQuery, v any) error {
selector := root.sqlQuery(ctx).Select()
aggregation := make([]string, 0, len(agb.fns))
for _, fn := range agb.fns {
aggregation := make([]string, 0, len(_g.fns))
for _, fn := range _g.fns {
aggregation = append(aggregation, fn(selector))
}
if len(selector.SelectedColumns()) == 0 {
columns := make([]string, 0, len(*agb.flds)+len(agb.fns))
for _, f := range *agb.flds {
columns := make([]string, 0, len(*_g.flds)+len(_g.fns))
for _, f := range *_g.flds {
columns = append(columns, selector.C(f))
}
columns = append(columns, aggregation...)
selector.Select(columns...)
}
selector.GroupBy(selector.Columns(*agb.flds...)...)
selector.GroupBy(selector.Columns(*_g.flds...)...)
if err := selector.Err(); err != nil {
return err
}
rows := &sql.Rows{}
query, args := selector.Query()
if err := agb.build.driver.Query(ctx, query, args, rows); err != nil {
if err := _g.build.driver.Query(ctx, query, args, rows); err != nil {
return err
}
defer rows.Close()
@@ -653,27 +653,27 @@ type AttachmentSelect struct {
}
// Aggregate adds the given aggregation functions to the selector query.
func (as *AttachmentSelect) Aggregate(fns ...AggregateFunc) *AttachmentSelect {
as.fns = append(as.fns, fns...)
return as
func (_s *AttachmentSelect) Aggregate(fns ...AggregateFunc) *AttachmentSelect {
_s.fns = append(_s.fns, fns...)
return _s
}
// Scan applies the selector query and scans the result into the given value.
func (as *AttachmentSelect) Scan(ctx context.Context, v any) error {
ctx = setContextOp(ctx, as.ctx, ent.OpQuerySelect)
if err := as.prepareQuery(ctx); err != nil {
func (_s *AttachmentSelect) Scan(ctx context.Context, v any) error {
ctx = setContextOp(ctx, _s.ctx, ent.OpQuerySelect)
if err := _s.prepareQuery(ctx); err != nil {
return err
}
return scanWithInterceptors[*AttachmentQuery, *AttachmentSelect](ctx, as.AttachmentQuery, as, as.inters, v)
return scanWithInterceptors[*AttachmentQuery, *AttachmentSelect](ctx, _s.AttachmentQuery, _s, _s.inters, v)
}
func (as *AttachmentSelect) sqlScan(ctx context.Context, root *AttachmentQuery, v any) error {
func (_s *AttachmentSelect) sqlScan(ctx context.Context, root *AttachmentQuery, v any) error {
selector := root.sqlQuery(ctx)
aggregation := make([]string, 0, len(as.fns))
for _, fn := range as.fns {
aggregation := make([]string, 0, len(_s.fns))
for _, fn := range _s.fns {
aggregation = append(aggregation, fn(selector))
}
switch n := len(*as.selector.flds); {
switch n := len(*_s.selector.flds); {
case n == 0 && len(aggregation) > 0:
selector.Select(aggregation...)
case n != 0 && len(aggregation) > 0:
@@ -681,7 +681,7 @@ func (as *AttachmentSelect) sqlScan(ctx context.Context, root *AttachmentQuery,
}
rows := &sql.Rows{}
query, args := selector.Query()
if err := as.driver.Query(ctx, query, args, rows); err != nil {
if err := _s.driver.Query(ctx, query, args, rows); err != nil {
return err
}
defer rows.Close()
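The query, group-by, and select builders are renamed the same way (aq/agb/as become _q/_g/_s). As a sketch expanding the GroupBy example quoted in the generated doc comment above, again assuming a hypothetical client and ctx:

// Usage sketch, not part of the diff: counts attachments per created_at value.
var v []struct {
	CreatedAt time.Time `json:"created_at,omitempty"`
	Count     int       `json:"count,omitempty"`
}
err := client.Attachment.Query().
	GroupBy(attachment.FieldCreatedAt).
	Aggregate(ent.Count()).
	Scan(ctx, &v)
_ = err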


@@ -25,151 +25,151 @@ type AttachmentUpdate struct {
}
// Where appends a list predicates to the AttachmentUpdate builder.
func (au *AttachmentUpdate) Where(ps ...predicate.Attachment) *AttachmentUpdate {
au.mutation.Where(ps...)
return au
func (_u *AttachmentUpdate) Where(ps ...predicate.Attachment) *AttachmentUpdate {
_u.mutation.Where(ps...)
return _u
}
// SetUpdatedAt sets the "updated_at" field.
func (au *AttachmentUpdate) SetUpdatedAt(t time.Time) *AttachmentUpdate {
au.mutation.SetUpdatedAt(t)
return au
func (_u *AttachmentUpdate) SetUpdatedAt(v time.Time) *AttachmentUpdate {
_u.mutation.SetUpdatedAt(v)
return _u
}
// SetType sets the "type" field.
func (au *AttachmentUpdate) SetType(a attachment.Type) *AttachmentUpdate {
au.mutation.SetType(a)
return au
func (_u *AttachmentUpdate) SetType(v attachment.Type) *AttachmentUpdate {
_u.mutation.SetType(v)
return _u
}
// SetNillableType sets the "type" field if the given value is not nil.
func (au *AttachmentUpdate) SetNillableType(a *attachment.Type) *AttachmentUpdate {
if a != nil {
au.SetType(*a)
func (_u *AttachmentUpdate) SetNillableType(v *attachment.Type) *AttachmentUpdate {
if v != nil {
_u.SetType(*v)
}
return au
return _u
}
// SetPrimary sets the "primary" field.
func (au *AttachmentUpdate) SetPrimary(b bool) *AttachmentUpdate {
au.mutation.SetPrimary(b)
return au
func (_u *AttachmentUpdate) SetPrimary(v bool) *AttachmentUpdate {
_u.mutation.SetPrimary(v)
return _u
}
// SetNillablePrimary sets the "primary" field if the given value is not nil.
func (au *AttachmentUpdate) SetNillablePrimary(b *bool) *AttachmentUpdate {
if b != nil {
au.SetPrimary(*b)
func (_u *AttachmentUpdate) SetNillablePrimary(v *bool) *AttachmentUpdate {
if v != nil {
_u.SetPrimary(*v)
}
return au
return _u
}
// SetTitle sets the "title" field.
func (au *AttachmentUpdate) SetTitle(s string) *AttachmentUpdate {
au.mutation.SetTitle(s)
return au
func (_u *AttachmentUpdate) SetTitle(v string) *AttachmentUpdate {
_u.mutation.SetTitle(v)
return _u
}
// SetNillableTitle sets the "title" field if the given value is not nil.
func (au *AttachmentUpdate) SetNillableTitle(s *string) *AttachmentUpdate {
if s != nil {
au.SetTitle(*s)
func (_u *AttachmentUpdate) SetNillableTitle(v *string) *AttachmentUpdate {
if v != nil {
_u.SetTitle(*v)
}
return au
return _u
}
// SetPath sets the "path" field.
func (au *AttachmentUpdate) SetPath(s string) *AttachmentUpdate {
au.mutation.SetPath(s)
return au
func (_u *AttachmentUpdate) SetPath(v string) *AttachmentUpdate {
_u.mutation.SetPath(v)
return _u
}
// SetNillablePath sets the "path" field if the given value is not nil.
func (au *AttachmentUpdate) SetNillablePath(s *string) *AttachmentUpdate {
if s != nil {
au.SetPath(*s)
func (_u *AttachmentUpdate) SetNillablePath(v *string) *AttachmentUpdate {
if v != nil {
_u.SetPath(*v)
}
return au
return _u
}
// SetMimeType sets the "mime_type" field.
func (au *AttachmentUpdate) SetMimeType(s string) *AttachmentUpdate {
au.mutation.SetMimeType(s)
return au
func (_u *AttachmentUpdate) SetMimeType(v string) *AttachmentUpdate {
_u.mutation.SetMimeType(v)
return _u
}
// SetNillableMimeType sets the "mime_type" field if the given value is not nil.
func (au *AttachmentUpdate) SetNillableMimeType(s *string) *AttachmentUpdate {
if s != nil {
au.SetMimeType(*s)
func (_u *AttachmentUpdate) SetNillableMimeType(v *string) *AttachmentUpdate {
if v != nil {
_u.SetMimeType(*v)
}
return au
return _u
}
// SetItemID sets the "item" edge to the Item entity by ID.
func (au *AttachmentUpdate) SetItemID(id uuid.UUID) *AttachmentUpdate {
au.mutation.SetItemID(id)
return au
func (_u *AttachmentUpdate) SetItemID(id uuid.UUID) *AttachmentUpdate {
_u.mutation.SetItemID(id)
return _u
}
// SetNillableItemID sets the "item" edge to the Item entity by ID if the given value is not nil.
func (au *AttachmentUpdate) SetNillableItemID(id *uuid.UUID) *AttachmentUpdate {
func (_u *AttachmentUpdate) SetNillableItemID(id *uuid.UUID) *AttachmentUpdate {
if id != nil {
au = au.SetItemID(*id)
_u = _u.SetItemID(*id)
}
return au
return _u
}
// SetItem sets the "item" edge to the Item entity.
func (au *AttachmentUpdate) SetItem(i *Item) *AttachmentUpdate {
return au.SetItemID(i.ID)
func (_u *AttachmentUpdate) SetItem(v *Item) *AttachmentUpdate {
return _u.SetItemID(v.ID)
}
// SetThumbnailID sets the "thumbnail" edge to the Attachment entity by ID.
func (au *AttachmentUpdate) SetThumbnailID(id uuid.UUID) *AttachmentUpdate {
au.mutation.SetThumbnailID(id)
return au
func (_u *AttachmentUpdate) SetThumbnailID(id uuid.UUID) *AttachmentUpdate {
_u.mutation.SetThumbnailID(id)
return _u
}
// SetNillableThumbnailID sets the "thumbnail" edge to the Attachment entity by ID if the given value is not nil.
func (au *AttachmentUpdate) SetNillableThumbnailID(id *uuid.UUID) *AttachmentUpdate {
func (_u *AttachmentUpdate) SetNillableThumbnailID(id *uuid.UUID) *AttachmentUpdate {
if id != nil {
au = au.SetThumbnailID(*id)
_u = _u.SetThumbnailID(*id)
}
return au
return _u
}
// SetThumbnail sets the "thumbnail" edge to the Attachment entity.
func (au *AttachmentUpdate) SetThumbnail(a *Attachment) *AttachmentUpdate {
return au.SetThumbnailID(a.ID)
func (_u *AttachmentUpdate) SetThumbnail(v *Attachment) *AttachmentUpdate {
return _u.SetThumbnailID(v.ID)
}
// Mutation returns the AttachmentMutation object of the builder.
func (au *AttachmentUpdate) Mutation() *AttachmentMutation {
return au.mutation
func (_u *AttachmentUpdate) Mutation() *AttachmentMutation {
return _u.mutation
}
// ClearItem clears the "item" edge to the Item entity.
func (au *AttachmentUpdate) ClearItem() *AttachmentUpdate {
au.mutation.ClearItem()
return au
func (_u *AttachmentUpdate) ClearItem() *AttachmentUpdate {
_u.mutation.ClearItem()
return _u
}
// ClearThumbnail clears the "thumbnail" edge to the Attachment entity.
func (au *AttachmentUpdate) ClearThumbnail() *AttachmentUpdate {
au.mutation.ClearThumbnail()
return au
func (_u *AttachmentUpdate) ClearThumbnail() *AttachmentUpdate {
_u.mutation.ClearThumbnail()
return _u
}
// Save executes the query and returns the number of nodes affected by the update operation.
func (au *AttachmentUpdate) Save(ctx context.Context) (int, error) {
au.defaults()
return withHooks(ctx, au.sqlSave, au.mutation, au.hooks)
func (_u *AttachmentUpdate) Save(ctx context.Context) (int, error) {
_u.defaults()
return withHooks(ctx, _u.sqlSave, _u.mutation, _u.hooks)
}
// SaveX is like Save, but panics if an error occurs.
func (au *AttachmentUpdate) SaveX(ctx context.Context) int {
affected, err := au.Save(ctx)
func (_u *AttachmentUpdate) SaveX(ctx context.Context) int {
affected, err := _u.Save(ctx)
if err != nil {
panic(err)
}
@@ -177,29 +177,29 @@ func (au *AttachmentUpdate) SaveX(ctx context.Context) int {
}
// Exec executes the query.
func (au *AttachmentUpdate) Exec(ctx context.Context) error {
_, err := au.Save(ctx)
func (_u *AttachmentUpdate) Exec(ctx context.Context) error {
_, err := _u.Save(ctx)
return err
}
// ExecX is like Exec, but panics if an error occurs.
func (au *AttachmentUpdate) ExecX(ctx context.Context) {
if err := au.Exec(ctx); err != nil {
func (_u *AttachmentUpdate) ExecX(ctx context.Context) {
if err := _u.Exec(ctx); err != nil {
panic(err)
}
}
// defaults sets the default values of the builder before save.
func (au *AttachmentUpdate) defaults() {
if _, ok := au.mutation.UpdatedAt(); !ok {
func (_u *AttachmentUpdate) defaults() {
if _, ok := _u.mutation.UpdatedAt(); !ok {
v := attachment.UpdateDefaultUpdatedAt()
au.mutation.SetUpdatedAt(v)
_u.mutation.SetUpdatedAt(v)
}
}
// check runs all checks and user-defined validators on the builder.
func (au *AttachmentUpdate) check() error {
if v, ok := au.mutation.GetType(); ok {
func (_u *AttachmentUpdate) check() error {
if v, ok := _u.mutation.GetType(); ok {
if err := attachment.TypeValidator(v); err != nil {
return &ValidationError{Name: "type", err: fmt.Errorf(`ent: validator failed for field "Attachment.type": %w`, err)}
}
@@ -207,37 +207,37 @@ func (au *AttachmentUpdate) check() error {
return nil
}
func (au *AttachmentUpdate) sqlSave(ctx context.Context) (n int, err error) {
if err := au.check(); err != nil {
return n, err
func (_u *AttachmentUpdate) sqlSave(ctx context.Context) (_node int, err error) {
if err := _u.check(); err != nil {
return _node, err
}
_spec := sqlgraph.NewUpdateSpec(attachment.Table, attachment.Columns, sqlgraph.NewFieldSpec(attachment.FieldID, field.TypeUUID))
if ps := au.mutation.predicates; len(ps) > 0 {
if ps := _u.mutation.predicates; len(ps) > 0 {
_spec.Predicate = func(selector *sql.Selector) {
for i := range ps {
ps[i](selector)
}
}
}
if value, ok := au.mutation.UpdatedAt(); ok {
if value, ok := _u.mutation.UpdatedAt(); ok {
_spec.SetField(attachment.FieldUpdatedAt, field.TypeTime, value)
}
if value, ok := au.mutation.GetType(); ok {
if value, ok := _u.mutation.GetType(); ok {
_spec.SetField(attachment.FieldType, field.TypeEnum, value)
}
if value, ok := au.mutation.Primary(); ok {
if value, ok := _u.mutation.Primary(); ok {
_spec.SetField(attachment.FieldPrimary, field.TypeBool, value)
}
if value, ok := au.mutation.Title(); ok {
if value, ok := _u.mutation.Title(); ok {
_spec.SetField(attachment.FieldTitle, field.TypeString, value)
}
if value, ok := au.mutation.Path(); ok {
if value, ok := _u.mutation.Path(); ok {
_spec.SetField(attachment.FieldPath, field.TypeString, value)
}
if value, ok := au.mutation.MimeType(); ok {
if value, ok := _u.mutation.MimeType(); ok {
_spec.SetField(attachment.FieldMimeType, field.TypeString, value)
}
if au.mutation.ItemCleared() {
if _u.mutation.ItemCleared() {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.M2O,
Inverse: true,
@@ -250,7 +250,7 @@ func (au *AttachmentUpdate) sqlSave(ctx context.Context) (n int, err error) {
}
_spec.Edges.Clear = append(_spec.Edges.Clear, edge)
}
if nodes := au.mutation.ItemIDs(); len(nodes) > 0 {
if nodes := _u.mutation.ItemIDs(); len(nodes) > 0 {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.M2O,
Inverse: true,
@@ -266,7 +266,7 @@ func (au *AttachmentUpdate) sqlSave(ctx context.Context) (n int, err error) {
}
_spec.Edges.Add = append(_spec.Edges.Add, edge)
}
if au.mutation.ThumbnailCleared() {
if _u.mutation.ThumbnailCleared() {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.O2O,
Inverse: false,
@@ -279,7 +279,7 @@ func (au *AttachmentUpdate) sqlSave(ctx context.Context) (n int, err error) {
}
_spec.Edges.Clear = append(_spec.Edges.Clear, edge)
}
if nodes := au.mutation.ThumbnailIDs(); len(nodes) > 0 {
if nodes := _u.mutation.ThumbnailIDs(); len(nodes) > 0 {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.O2O,
Inverse: false,
@@ -295,7 +295,7 @@ func (au *AttachmentUpdate) sqlSave(ctx context.Context) (n int, err error) {
}
_spec.Edges.Add = append(_spec.Edges.Add, edge)
}
if n, err = sqlgraph.UpdateNodes(ctx, au.driver, _spec); err != nil {
if _node, err = sqlgraph.UpdateNodes(ctx, _u.driver, _spec); err != nil {
if _, ok := err.(*sqlgraph.NotFoundError); ok {
err = &NotFoundError{attachment.Label}
} else if sqlgraph.IsConstraintError(err) {
@@ -303,8 +303,8 @@ func (au *AttachmentUpdate) sqlSave(ctx context.Context) (n int, err error) {
}
return 0, err
}
au.mutation.done = true
return n, nil
_u.mutation.done = true
return _node, nil
}
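The update builders (au/auo become _u) likewise keep their exported surface. A hedged sketch of a predicate-based update, assuming the hypothetical client, ctx, and a known attachment ID:

// Usage sketch, not part of the diff: id is an existing uuid.UUID.
n, err := client.Attachment.Update().
	Where(attachment.IDEQ(id)).
	SetTitle("renamed.pdf").
	Save(ctx)
// n is the affected-row count; UpdateOne(a).Save(ctx) returns the updated *Attachment instead.
_, _ = n, err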
// AttachmentUpdateOne is the builder for updating a single Attachment entity.
@@ -316,158 +316,158 @@ type AttachmentUpdateOne struct {
}
// SetUpdatedAt sets the "updated_at" field.
func (auo *AttachmentUpdateOne) SetUpdatedAt(t time.Time) *AttachmentUpdateOne {
auo.mutation.SetUpdatedAt(t)
return auo
func (_u *AttachmentUpdateOne) SetUpdatedAt(v time.Time) *AttachmentUpdateOne {
_u.mutation.SetUpdatedAt(v)
return _u
}
// SetType sets the "type" field.
func (auo *AttachmentUpdateOne) SetType(a attachment.Type) *AttachmentUpdateOne {
auo.mutation.SetType(a)
return auo
func (_u *AttachmentUpdateOne) SetType(v attachment.Type) *AttachmentUpdateOne {
_u.mutation.SetType(v)
return _u
}
// SetNillableType sets the "type" field if the given value is not nil.
func (auo *AttachmentUpdateOne) SetNillableType(a *attachment.Type) *AttachmentUpdateOne {
if a != nil {
auo.SetType(*a)
func (_u *AttachmentUpdateOne) SetNillableType(v *attachment.Type) *AttachmentUpdateOne {
if v != nil {
_u.SetType(*v)
}
return auo
return _u
}
// SetPrimary sets the "primary" field.
func (auo *AttachmentUpdateOne) SetPrimary(b bool) *AttachmentUpdateOne {
auo.mutation.SetPrimary(b)
return auo
func (_u *AttachmentUpdateOne) SetPrimary(v bool) *AttachmentUpdateOne {
_u.mutation.SetPrimary(v)
return _u
}
// SetNillablePrimary sets the "primary" field if the given value is not nil.
func (auo *AttachmentUpdateOne) SetNillablePrimary(b *bool) *AttachmentUpdateOne {
if b != nil {
auo.SetPrimary(*b)
func (_u *AttachmentUpdateOne) SetNillablePrimary(v *bool) *AttachmentUpdateOne {
if v != nil {
_u.SetPrimary(*v)
}
return auo
return _u
}
// SetTitle sets the "title" field.
func (auo *AttachmentUpdateOne) SetTitle(s string) *AttachmentUpdateOne {
auo.mutation.SetTitle(s)
return auo
func (_u *AttachmentUpdateOne) SetTitle(v string) *AttachmentUpdateOne {
_u.mutation.SetTitle(v)
return _u
}
// SetNillableTitle sets the "title" field if the given value is not nil.
func (auo *AttachmentUpdateOne) SetNillableTitle(s *string) *AttachmentUpdateOne {
if s != nil {
auo.SetTitle(*s)
func (_u *AttachmentUpdateOne) SetNillableTitle(v *string) *AttachmentUpdateOne {
if v != nil {
_u.SetTitle(*v)
}
return auo
return _u
}
// SetPath sets the "path" field.
func (auo *AttachmentUpdateOne) SetPath(s string) *AttachmentUpdateOne {
auo.mutation.SetPath(s)
return auo
func (_u *AttachmentUpdateOne) SetPath(v string) *AttachmentUpdateOne {
_u.mutation.SetPath(v)
return _u
}
// SetNillablePath sets the "path" field if the given value is not nil.
func (auo *AttachmentUpdateOne) SetNillablePath(s *string) *AttachmentUpdateOne {
if s != nil {
auo.SetPath(*s)
func (_u *AttachmentUpdateOne) SetNillablePath(v *string) *AttachmentUpdateOne {
if v != nil {
_u.SetPath(*v)
}
return auo
return _u
}
// SetMimeType sets the "mime_type" field.
func (auo *AttachmentUpdateOne) SetMimeType(s string) *AttachmentUpdateOne {
auo.mutation.SetMimeType(s)
return auo
func (_u *AttachmentUpdateOne) SetMimeType(v string) *AttachmentUpdateOne {
_u.mutation.SetMimeType(v)
return _u
}
// SetNillableMimeType sets the "mime_type" field if the given value is not nil.
func (auo *AttachmentUpdateOne) SetNillableMimeType(s *string) *AttachmentUpdateOne {
if s != nil {
auo.SetMimeType(*s)
func (_u *AttachmentUpdateOne) SetNillableMimeType(v *string) *AttachmentUpdateOne {
if v != nil {
_u.SetMimeType(*v)
}
return auo
return _u
}
// SetItemID sets the "item" edge to the Item entity by ID.
func (auo *AttachmentUpdateOne) SetItemID(id uuid.UUID) *AttachmentUpdateOne {
auo.mutation.SetItemID(id)
return auo
func (_u *AttachmentUpdateOne) SetItemID(id uuid.UUID) *AttachmentUpdateOne {
_u.mutation.SetItemID(id)
return _u
}
// SetNillableItemID sets the "item" edge to the Item entity by ID if the given value is not nil.
func (auo *AttachmentUpdateOne) SetNillableItemID(id *uuid.UUID) *AttachmentUpdateOne {
func (_u *AttachmentUpdateOne) SetNillableItemID(id *uuid.UUID) *AttachmentUpdateOne {
if id != nil {
auo = auo.SetItemID(*id)
_u = _u.SetItemID(*id)
}
return auo
return _u
}
// SetItem sets the "item" edge to the Item entity.
func (auo *AttachmentUpdateOne) SetItem(i *Item) *AttachmentUpdateOne {
return auo.SetItemID(i.ID)
func (_u *AttachmentUpdateOne) SetItem(v *Item) *AttachmentUpdateOne {
return _u.SetItemID(v.ID)
}
// SetThumbnailID sets the "thumbnail" edge to the Attachment entity by ID.
func (auo *AttachmentUpdateOne) SetThumbnailID(id uuid.UUID) *AttachmentUpdateOne {
auo.mutation.SetThumbnailID(id)
return auo
func (_u *AttachmentUpdateOne) SetThumbnailID(id uuid.UUID) *AttachmentUpdateOne {
_u.mutation.SetThumbnailID(id)
return _u
}
// SetNillableThumbnailID sets the "thumbnail" edge to the Attachment entity by ID if the given value is not nil.
func (auo *AttachmentUpdateOne) SetNillableThumbnailID(id *uuid.UUID) *AttachmentUpdateOne {
func (_u *AttachmentUpdateOne) SetNillableThumbnailID(id *uuid.UUID) *AttachmentUpdateOne {
if id != nil {
auo = auo.SetThumbnailID(*id)
_u = _u.SetThumbnailID(*id)
}
return auo
return _u
}
// SetThumbnail sets the "thumbnail" edge to the Attachment entity.
func (auo *AttachmentUpdateOne) SetThumbnail(a *Attachment) *AttachmentUpdateOne {
return auo.SetThumbnailID(a.ID)
func (_u *AttachmentUpdateOne) SetThumbnail(v *Attachment) *AttachmentUpdateOne {
return _u.SetThumbnailID(v.ID)
}
// Mutation returns the AttachmentMutation object of the builder.
func (auo *AttachmentUpdateOne) Mutation() *AttachmentMutation {
return auo.mutation
func (_u *AttachmentUpdateOne) Mutation() *AttachmentMutation {
return _u.mutation
}
// ClearItem clears the "item" edge to the Item entity.
func (auo *AttachmentUpdateOne) ClearItem() *AttachmentUpdateOne {
auo.mutation.ClearItem()
return auo
func (_u *AttachmentUpdateOne) ClearItem() *AttachmentUpdateOne {
_u.mutation.ClearItem()
return _u
}
// ClearThumbnail clears the "thumbnail" edge to the Attachment entity.
func (auo *AttachmentUpdateOne) ClearThumbnail() *AttachmentUpdateOne {
auo.mutation.ClearThumbnail()
return auo
func (_u *AttachmentUpdateOne) ClearThumbnail() *AttachmentUpdateOne {
_u.mutation.ClearThumbnail()
return _u
}
// Where appends a list predicates to the AttachmentUpdate builder.
func (auo *AttachmentUpdateOne) Where(ps ...predicate.Attachment) *AttachmentUpdateOne {
auo.mutation.Where(ps...)
return auo
func (_u *AttachmentUpdateOne) Where(ps ...predicate.Attachment) *AttachmentUpdateOne {
_u.mutation.Where(ps...)
return _u
}
// Select allows selecting one or more fields (columns) of the returned entity.
// The default is selecting all fields defined in the entity schema.
func (auo *AttachmentUpdateOne) Select(field string, fields ...string) *AttachmentUpdateOne {
auo.fields = append([]string{field}, fields...)
return auo
func (_u *AttachmentUpdateOne) Select(field string, fields ...string) *AttachmentUpdateOne {
_u.fields = append([]string{field}, fields...)
return _u
}
// Save executes the query and returns the updated Attachment entity.
func (auo *AttachmentUpdateOne) Save(ctx context.Context) (*Attachment, error) {
auo.defaults()
return withHooks(ctx, auo.sqlSave, auo.mutation, auo.hooks)
func (_u *AttachmentUpdateOne) Save(ctx context.Context) (*Attachment, error) {
_u.defaults()
return withHooks(ctx, _u.sqlSave, _u.mutation, _u.hooks)
}
// SaveX is like Save, but panics if an error occurs.
func (auo *AttachmentUpdateOne) SaveX(ctx context.Context) *Attachment {
node, err := auo.Save(ctx)
func (_u *AttachmentUpdateOne) SaveX(ctx context.Context) *Attachment {
node, err := _u.Save(ctx)
if err != nil {
panic(err)
}
@@ -475,29 +475,29 @@ func (auo *AttachmentUpdateOne) SaveX(ctx context.Context) *Attachment {
}
// Exec executes the query on the entity.
func (auo *AttachmentUpdateOne) Exec(ctx context.Context) error {
_, err := auo.Save(ctx)
func (_u *AttachmentUpdateOne) Exec(ctx context.Context) error {
_, err := _u.Save(ctx)
return err
}
// ExecX is like Exec, but panics if an error occurs.
func (auo *AttachmentUpdateOne) ExecX(ctx context.Context) {
if err := auo.Exec(ctx); err != nil {
func (_u *AttachmentUpdateOne) ExecX(ctx context.Context) {
if err := _u.Exec(ctx); err != nil {
panic(err)
}
}
// defaults sets the default values of the builder before save.
func (auo *AttachmentUpdateOne) defaults() {
if _, ok := auo.mutation.UpdatedAt(); !ok {
func (_u *AttachmentUpdateOne) defaults() {
if _, ok := _u.mutation.UpdatedAt(); !ok {
v := attachment.UpdateDefaultUpdatedAt()
auo.mutation.SetUpdatedAt(v)
_u.mutation.SetUpdatedAt(v)
}
}
// check runs all checks and user-defined validators on the builder.
func (auo *AttachmentUpdateOne) check() error {
if v, ok := auo.mutation.GetType(); ok {
func (_u *AttachmentUpdateOne) check() error {
if v, ok := _u.mutation.GetType(); ok {
if err := attachment.TypeValidator(v); err != nil {
return &ValidationError{Name: "type", err: fmt.Errorf(`ent: validator failed for field "Attachment.type": %w`, err)}
}
@@ -505,17 +505,17 @@ func (auo *AttachmentUpdateOne) check() error {
return nil
}
func (auo *AttachmentUpdateOne) sqlSave(ctx context.Context) (_node *Attachment, err error) {
if err := auo.check(); err != nil {
func (_u *AttachmentUpdateOne) sqlSave(ctx context.Context) (_node *Attachment, err error) {
if err := _u.check(); err != nil {
return _node, err
}
_spec := sqlgraph.NewUpdateSpec(attachment.Table, attachment.Columns, sqlgraph.NewFieldSpec(attachment.FieldID, field.TypeUUID))
id, ok := auo.mutation.ID()
id, ok := _u.mutation.ID()
if !ok {
return nil, &ValidationError{Name: "id", err: errors.New(`ent: missing "Attachment.id" for update`)}
}
_spec.Node.ID.Value = id
if fields := auo.fields; len(fields) > 0 {
if fields := _u.fields; len(fields) > 0 {
_spec.Node.Columns = make([]string, 0, len(fields))
_spec.Node.Columns = append(_spec.Node.Columns, attachment.FieldID)
for _, f := range fields {
@@ -527,32 +527,32 @@ func (auo *AttachmentUpdateOne) sqlSave(ctx context.Context) (_node *Attachment,
}
}
}
if ps := auo.mutation.predicates; len(ps) > 0 {
if ps := _u.mutation.predicates; len(ps) > 0 {
_spec.Predicate = func(selector *sql.Selector) {
for i := range ps {
ps[i](selector)
}
}
}
if value, ok := auo.mutation.UpdatedAt(); ok {
if value, ok := _u.mutation.UpdatedAt(); ok {
_spec.SetField(attachment.FieldUpdatedAt, field.TypeTime, value)
}
if value, ok := auo.mutation.GetType(); ok {
if value, ok := _u.mutation.GetType(); ok {
_spec.SetField(attachment.FieldType, field.TypeEnum, value)
}
if value, ok := auo.mutation.Primary(); ok {
if value, ok := _u.mutation.Primary(); ok {
_spec.SetField(attachment.FieldPrimary, field.TypeBool, value)
}
if value, ok := auo.mutation.Title(); ok {
if value, ok := _u.mutation.Title(); ok {
_spec.SetField(attachment.FieldTitle, field.TypeString, value)
}
if value, ok := auo.mutation.Path(); ok {
if value, ok := _u.mutation.Path(); ok {
_spec.SetField(attachment.FieldPath, field.TypeString, value)
}
if value, ok := auo.mutation.MimeType(); ok {
if value, ok := _u.mutation.MimeType(); ok {
_spec.SetField(attachment.FieldMimeType, field.TypeString, value)
}
if auo.mutation.ItemCleared() {
if _u.mutation.ItemCleared() {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.M2O,
Inverse: true,
@@ -565,7 +565,7 @@ func (auo *AttachmentUpdateOne) sqlSave(ctx context.Context) (_node *Attachment,
}
_spec.Edges.Clear = append(_spec.Edges.Clear, edge)
}
if nodes := auo.mutation.ItemIDs(); len(nodes) > 0 {
if nodes := _u.mutation.ItemIDs(); len(nodes) > 0 {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.M2O,
Inverse: true,
@@ -581,7 +581,7 @@ func (auo *AttachmentUpdateOne) sqlSave(ctx context.Context) (_node *Attachment,
}
_spec.Edges.Add = append(_spec.Edges.Add, edge)
}
if auo.mutation.ThumbnailCleared() {
if _u.mutation.ThumbnailCleared() {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.O2O,
Inverse: false,
@@ -594,7 +594,7 @@ func (auo *AttachmentUpdateOne) sqlSave(ctx context.Context) (_node *Attachment,
}
_spec.Edges.Clear = append(_spec.Edges.Clear, edge)
}
if nodes := auo.mutation.ThumbnailIDs(); len(nodes) > 0 {
if nodes := _u.mutation.ThumbnailIDs(); len(nodes) > 0 {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.O2O,
Inverse: false,
@@ -610,10 +610,10 @@ func (auo *AttachmentUpdateOne) sqlSave(ctx context.Context) (_node *Attachment,
}
_spec.Edges.Add = append(_spec.Edges.Add, edge)
}
_node = &Attachment{config: auo.config}
_node = &Attachment{config: _u.config}
_spec.Assign = _node.assignValues
_spec.ScanValues = _node.scanValues
if err = sqlgraph.UpdateNode(ctx, auo.driver, _spec); err != nil {
if err = sqlgraph.UpdateNode(ctx, _u.driver, _spec); err != nil {
if _, ok := err.(*sqlgraph.NotFoundError); ok {
err = &NotFoundError{attachment.Label}
} else if sqlgraph.IsConstraintError(err) {
@@ -621,6 +621,6 @@ func (auo *AttachmentUpdateOne) sqlSave(ctx context.Context) (_node *Attachment,
}
return nil, err
}
auo.mutation.done = true
_u.mutation.done = true
return _node, nil
}
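For reference, the renamed AttachmentUpdateOne builder above is normally driven through the generated client. The snippet below is a minimal usage sketch, not part of this diff; the client and ctx values, the UpdateOneID accessor, and the surrounding ent package import are assumptions.

// Hypothetical helper showing how the AttachmentUpdateOne surface above is used.
// SetPrimary corresponds to the FieldPrimary column handled in sqlSave;
// Save executes the update and returns the persisted entity.
func markPrimary(ctx context.Context, client *ent.Client, id uuid.UUID) (*ent.Attachment, error) {
	return client.Attachment.
		UpdateOneID(id). // assumed client accessor, not shown in this hunk
		SetPrimary(true).
		Save(ctx)
}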



@@ -67,7 +67,7 @@ func (*AuthRoles) scanValues(columns []string) ([]any, error) {
// assignValues assigns the values that were returned from sql.Rows (after scanning)
// to the AuthRoles fields.
func (ar *AuthRoles) assignValues(columns []string, values []any) error {
func (_m *AuthRoles) assignValues(columns []string, values []any) error {
if m, n := len(values), len(columns); m < n {
return fmt.Errorf("mismatch number of scan values: %d != %d", m, n)
}
@@ -78,22 +78,22 @@ func (ar *AuthRoles) assignValues(columns []string, values []any) error {
if !ok {
return fmt.Errorf("unexpected type %T for field id", value)
}
ar.ID = int(value.Int64)
_m.ID = int(value.Int64)
case authroles.FieldRole:
if value, ok := values[i].(*sql.NullString); !ok {
return fmt.Errorf("unexpected type %T for field role", values[i])
} else if value.Valid {
ar.Role = authroles.Role(value.String)
_m.Role = authroles.Role(value.String)
}
case authroles.ForeignKeys[0]:
if value, ok := values[i].(*sql.NullScanner); !ok {
return fmt.Errorf("unexpected type %T for field auth_tokens_roles", values[i])
} else if value.Valid {
ar.auth_tokens_roles = new(uuid.UUID)
*ar.auth_tokens_roles = *value.S.(*uuid.UUID)
_m.auth_tokens_roles = new(uuid.UUID)
*_m.auth_tokens_roles = *value.S.(*uuid.UUID)
}
default:
ar.selectValues.Set(columns[i], values[i])
_m.selectValues.Set(columns[i], values[i])
}
}
return nil
@@ -101,40 +101,40 @@ func (ar *AuthRoles) assignValues(columns []string, values []any) error {
// Value returns the ent.Value that was dynamically selected and assigned to the AuthRoles.
// This includes values selected through modifiers, order, etc.
func (ar *AuthRoles) Value(name string) (ent.Value, error) {
return ar.selectValues.Get(name)
func (_m *AuthRoles) Value(name string) (ent.Value, error) {
return _m.selectValues.Get(name)
}
// QueryToken queries the "token" edge of the AuthRoles entity.
func (ar *AuthRoles) QueryToken() *AuthTokensQuery {
return NewAuthRolesClient(ar.config).QueryToken(ar)
func (_m *AuthRoles) QueryToken() *AuthTokensQuery {
return NewAuthRolesClient(_m.config).QueryToken(_m)
}
// Update returns a builder for updating this AuthRoles.
// Note that you need to call AuthRoles.Unwrap() before calling this method if this AuthRoles
// was returned from a transaction, and the transaction was committed or rolled back.
func (ar *AuthRoles) Update() *AuthRolesUpdateOne {
return NewAuthRolesClient(ar.config).UpdateOne(ar)
func (_m *AuthRoles) Update() *AuthRolesUpdateOne {
return NewAuthRolesClient(_m.config).UpdateOne(_m)
}
// Unwrap unwraps the AuthRoles entity that was returned from a transaction after it was closed,
// so that all future queries will be executed through the driver which created the transaction.
func (ar *AuthRoles) Unwrap() *AuthRoles {
_tx, ok := ar.config.driver.(*txDriver)
func (_m *AuthRoles) Unwrap() *AuthRoles {
_tx, ok := _m.config.driver.(*txDriver)
if !ok {
panic("ent: AuthRoles is not a transactional entity")
}
ar.config.driver = _tx.drv
return ar
_m.config.driver = _tx.drv
return _m
}
// String implements the fmt.Stringer.
func (ar *AuthRoles) String() string {
func (_m *AuthRoles) String() string {
var builder strings.Builder
builder.WriteString("AuthRoles(")
builder.WriteString(fmt.Sprintf("id=%v, ", ar.ID))
builder.WriteString(fmt.Sprintf("id=%v, ", _m.ID))
builder.WriteString("role=")
builder.WriteString(fmt.Sprintf("%v", ar.Role))
builder.WriteString(fmt.Sprintf("%v", _m.Role))
builder.WriteByte(')')
return builder.String()
}
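The Unwrap comment above has a practical consequence for transactional code. A minimal sketch, assuming an AuthRoles value that was loaded inside a now-committed transaction and the usual generated enum values:

// Hypothetical: "r" was returned from a transaction that has since been
// committed, so it must be unwrapped before Update() can run on the base driver.
func promoteRole(ctx context.Context, r *ent.AuthRoles) (*ent.AuthRoles, error) {
	r = r.Unwrap() // rebind the entity to the driver that created the transaction
	return r.Update().
		SetRole(authroles.RoleAdmin). // assumed enum value; only the Role type appears in this diff
		Save(ctx)
}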


@@ -22,52 +22,52 @@ type AuthRolesCreate struct {
}
// SetRole sets the "role" field.
func (arc *AuthRolesCreate) SetRole(a authroles.Role) *AuthRolesCreate {
arc.mutation.SetRole(a)
return arc
func (_c *AuthRolesCreate) SetRole(v authroles.Role) *AuthRolesCreate {
_c.mutation.SetRole(v)
return _c
}
// SetNillableRole sets the "role" field if the given value is not nil.
func (arc *AuthRolesCreate) SetNillableRole(a *authroles.Role) *AuthRolesCreate {
if a != nil {
arc.SetRole(*a)
func (_c *AuthRolesCreate) SetNillableRole(v *authroles.Role) *AuthRolesCreate {
if v != nil {
_c.SetRole(*v)
}
return arc
return _c
}
// SetTokenID sets the "token" edge to the AuthTokens entity by ID.
func (arc *AuthRolesCreate) SetTokenID(id uuid.UUID) *AuthRolesCreate {
arc.mutation.SetTokenID(id)
return arc
func (_c *AuthRolesCreate) SetTokenID(id uuid.UUID) *AuthRolesCreate {
_c.mutation.SetTokenID(id)
return _c
}
// SetNillableTokenID sets the "token" edge to the AuthTokens entity by ID if the given value is not nil.
func (arc *AuthRolesCreate) SetNillableTokenID(id *uuid.UUID) *AuthRolesCreate {
func (_c *AuthRolesCreate) SetNillableTokenID(id *uuid.UUID) *AuthRolesCreate {
if id != nil {
arc = arc.SetTokenID(*id)
_c = _c.SetTokenID(*id)
}
return arc
return _c
}
// SetToken sets the "token" edge to the AuthTokens entity.
func (arc *AuthRolesCreate) SetToken(a *AuthTokens) *AuthRolesCreate {
return arc.SetTokenID(a.ID)
func (_c *AuthRolesCreate) SetToken(v *AuthTokens) *AuthRolesCreate {
return _c.SetTokenID(v.ID)
}
// Mutation returns the AuthRolesMutation object of the builder.
func (arc *AuthRolesCreate) Mutation() *AuthRolesMutation {
return arc.mutation
func (_c *AuthRolesCreate) Mutation() *AuthRolesMutation {
return _c.mutation
}
// Save creates the AuthRoles in the database.
func (arc *AuthRolesCreate) Save(ctx context.Context) (*AuthRoles, error) {
arc.defaults()
return withHooks(ctx, arc.sqlSave, arc.mutation, arc.hooks)
func (_c *AuthRolesCreate) Save(ctx context.Context) (*AuthRoles, error) {
_c.defaults()
return withHooks(ctx, _c.sqlSave, _c.mutation, _c.hooks)
}
// SaveX calls Save and panics if Save returns an error.
func (arc *AuthRolesCreate) SaveX(ctx context.Context) *AuthRoles {
v, err := arc.Save(ctx)
func (_c *AuthRolesCreate) SaveX(ctx context.Context) *AuthRoles {
v, err := _c.Save(ctx)
if err != nil {
panic(err)
}
@@ -75,32 +75,32 @@ func (arc *AuthRolesCreate) SaveX(ctx context.Context) *AuthRoles {
}
// Exec executes the query.
func (arc *AuthRolesCreate) Exec(ctx context.Context) error {
_, err := arc.Save(ctx)
func (_c *AuthRolesCreate) Exec(ctx context.Context) error {
_, err := _c.Save(ctx)
return err
}
// ExecX is like Exec, but panics if an error occurs.
func (arc *AuthRolesCreate) ExecX(ctx context.Context) {
if err := arc.Exec(ctx); err != nil {
func (_c *AuthRolesCreate) ExecX(ctx context.Context) {
if err := _c.Exec(ctx); err != nil {
panic(err)
}
}
// defaults sets the default values of the builder before save.
func (arc *AuthRolesCreate) defaults() {
if _, ok := arc.mutation.Role(); !ok {
func (_c *AuthRolesCreate) defaults() {
if _, ok := _c.mutation.Role(); !ok {
v := authroles.DefaultRole
arc.mutation.SetRole(v)
_c.mutation.SetRole(v)
}
}
// check runs all checks and user-defined validators on the builder.
func (arc *AuthRolesCreate) check() error {
if _, ok := arc.mutation.Role(); !ok {
func (_c *AuthRolesCreate) check() error {
if _, ok := _c.mutation.Role(); !ok {
return &ValidationError{Name: "role", err: errors.New(`ent: missing required field "AuthRoles.role"`)}
}
if v, ok := arc.mutation.Role(); ok {
if v, ok := _c.mutation.Role(); ok {
if err := authroles.RoleValidator(v); err != nil {
return &ValidationError{Name: "role", err: fmt.Errorf(`ent: validator failed for field "AuthRoles.role": %w`, err)}
}
@@ -108,12 +108,12 @@ func (arc *AuthRolesCreate) check() error {
return nil
}
func (arc *AuthRolesCreate) sqlSave(ctx context.Context) (*AuthRoles, error) {
if err := arc.check(); err != nil {
func (_c *AuthRolesCreate) sqlSave(ctx context.Context) (*AuthRoles, error) {
if err := _c.check(); err != nil {
return nil, err
}
_node, _spec := arc.createSpec()
if err := sqlgraph.CreateNode(ctx, arc.driver, _spec); err != nil {
_node, _spec := _c.createSpec()
if err := sqlgraph.CreateNode(ctx, _c.driver, _spec); err != nil {
if sqlgraph.IsConstraintError(err) {
err = &ConstraintError{msg: err.Error(), wrap: err}
}
@@ -121,21 +121,21 @@ func (arc *AuthRolesCreate) sqlSave(ctx context.Context) (*AuthRoles, error) {
}
id := _spec.ID.Value.(int64)
_node.ID = int(id)
arc.mutation.id = &_node.ID
arc.mutation.done = true
_c.mutation.id = &_node.ID
_c.mutation.done = true
return _node, nil
}
func (arc *AuthRolesCreate) createSpec() (*AuthRoles, *sqlgraph.CreateSpec) {
func (_c *AuthRolesCreate) createSpec() (*AuthRoles, *sqlgraph.CreateSpec) {
var (
_node = &AuthRoles{config: arc.config}
_node = &AuthRoles{config: _c.config}
_spec = sqlgraph.NewCreateSpec(authroles.Table, sqlgraph.NewFieldSpec(authroles.FieldID, field.TypeInt))
)
if value, ok := arc.mutation.Role(); ok {
if value, ok := _c.mutation.Role(); ok {
_spec.SetField(authroles.FieldRole, field.TypeEnum, value)
_node.Role = value
}
if nodes := arc.mutation.TokenIDs(); len(nodes) > 0 {
if nodes := _c.mutation.TokenIDs(); len(nodes) > 0 {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.O2O,
Inverse: true,
@@ -163,16 +163,16 @@ type AuthRolesCreateBulk struct {
}
// Save creates the AuthRoles entities in the database.
func (arcb *AuthRolesCreateBulk) Save(ctx context.Context) ([]*AuthRoles, error) {
if arcb.err != nil {
return nil, arcb.err
func (_c *AuthRolesCreateBulk) Save(ctx context.Context) ([]*AuthRoles, error) {
if _c.err != nil {
return nil, _c.err
}
specs := make([]*sqlgraph.CreateSpec, len(arcb.builders))
nodes := make([]*AuthRoles, len(arcb.builders))
mutators := make([]Mutator, len(arcb.builders))
for i := range arcb.builders {
specs := make([]*sqlgraph.CreateSpec, len(_c.builders))
nodes := make([]*AuthRoles, len(_c.builders))
mutators := make([]Mutator, len(_c.builders))
for i := range _c.builders {
func(i int, root context.Context) {
builder := arcb.builders[i]
builder := _c.builders[i]
builder.defaults()
var mut Mutator = MutateFunc(func(ctx context.Context, m Mutation) (Value, error) {
mutation, ok := m.(*AuthRolesMutation)
@@ -186,11 +186,11 @@ func (arcb *AuthRolesCreateBulk) Save(ctx context.Context) ([]*AuthRoles, error)
var err error
nodes[i], specs[i] = builder.createSpec()
if i < len(mutators)-1 {
_, err = mutators[i+1].Mutate(root, arcb.builders[i+1].mutation)
_, err = mutators[i+1].Mutate(root, _c.builders[i+1].mutation)
} else {
spec := &sqlgraph.BatchCreateSpec{Nodes: specs}
// Invoke the actual operation on the latest mutation in the chain.
if err = sqlgraph.BatchCreate(ctx, arcb.driver, spec); err != nil {
if err = sqlgraph.BatchCreate(ctx, _c.driver, spec); err != nil {
if sqlgraph.IsConstraintError(err) {
err = &ConstraintError{msg: err.Error(), wrap: err}
}
@@ -214,7 +214,7 @@ func (arcb *AuthRolesCreateBulk) Save(ctx context.Context) ([]*AuthRoles, error)
}(i, ctx)
}
if len(mutators) > 0 {
if _, err := mutators[0].Mutate(ctx, arcb.builders[0].mutation); err != nil {
if _, err := mutators[0].Mutate(ctx, _c.builders[0].mutation); err != nil {
return nil, err
}
}
@@ -222,8 +222,8 @@ func (arcb *AuthRolesCreateBulk) Save(ctx context.Context) ([]*AuthRoles, error)
}
// SaveX is like Save, but panics if an error occurs.
func (arcb *AuthRolesCreateBulk) SaveX(ctx context.Context) []*AuthRoles {
v, err := arcb.Save(ctx)
func (_c *AuthRolesCreateBulk) SaveX(ctx context.Context) []*AuthRoles {
v, err := _c.Save(ctx)
if err != nil {
panic(err)
}
@@ -231,14 +231,14 @@ func (arcb *AuthRolesCreateBulk) SaveX(ctx context.Context) []*AuthRoles {
}
// Exec executes the query.
func (arcb *AuthRolesCreateBulk) Exec(ctx context.Context) error {
_, err := arcb.Save(ctx)
func (_c *AuthRolesCreateBulk) Exec(ctx context.Context) error {
_, err := _c.Save(ctx)
return err
}
// ExecX is like Exec, but panics if an error occurs.
func (arcb *AuthRolesCreateBulk) ExecX(ctx context.Context) {
if err := arcb.Exec(ctx); err != nil {
func (_c *AuthRolesCreateBulk) ExecX(ctx context.Context) {
if err := _c.Exec(ctx); err != nil {
panic(err)
}
}
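As a usage sketch for the create builder above (the client, ctx, token value, and concrete Role enum value are assumptions, not part of this change):

// Hypothetical create call. Save returns the persisted entity; SaveX and ExecX
// are the panicking variants documented above.
func seedRole(ctx context.Context, client *ent.Client, token *ent.AuthTokens) (*ent.AuthRoles, error) {
	return client.AuthRoles.Create().
		SetRole(authroles.RoleUser). // assumed enum value
		SetToken(token).             // sets the "token" edge by entity, per SetToken above
		Save(ctx)
}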


@@ -20,56 +20,56 @@ type AuthRolesDelete struct {
}
// Where appends a list predicates to the AuthRolesDelete builder.
func (ard *AuthRolesDelete) Where(ps ...predicate.AuthRoles) *AuthRolesDelete {
ard.mutation.Where(ps...)
return ard
func (_d *AuthRolesDelete) Where(ps ...predicate.AuthRoles) *AuthRolesDelete {
_d.mutation.Where(ps...)
return _d
}
// Exec executes the deletion query and returns how many vertices were deleted.
func (ard *AuthRolesDelete) Exec(ctx context.Context) (int, error) {
return withHooks(ctx, ard.sqlExec, ard.mutation, ard.hooks)
func (_d *AuthRolesDelete) Exec(ctx context.Context) (int, error) {
return withHooks(ctx, _d.sqlExec, _d.mutation, _d.hooks)
}
// ExecX is like Exec, but panics if an error occurs.
func (ard *AuthRolesDelete) ExecX(ctx context.Context) int {
n, err := ard.Exec(ctx)
func (_d *AuthRolesDelete) ExecX(ctx context.Context) int {
n, err := _d.Exec(ctx)
if err != nil {
panic(err)
}
return n
}
func (ard *AuthRolesDelete) sqlExec(ctx context.Context) (int, error) {
func (_d *AuthRolesDelete) sqlExec(ctx context.Context) (int, error) {
_spec := sqlgraph.NewDeleteSpec(authroles.Table, sqlgraph.NewFieldSpec(authroles.FieldID, field.TypeInt))
if ps := ard.mutation.predicates; len(ps) > 0 {
if ps := _d.mutation.predicates; len(ps) > 0 {
_spec.Predicate = func(selector *sql.Selector) {
for i := range ps {
ps[i](selector)
}
}
}
affected, err := sqlgraph.DeleteNodes(ctx, ard.driver, _spec)
affected, err := sqlgraph.DeleteNodes(ctx, _d.driver, _spec)
if err != nil && sqlgraph.IsConstraintError(err) {
err = &ConstraintError{msg: err.Error(), wrap: err}
}
ard.mutation.done = true
_d.mutation.done = true
return affected, err
}
// AuthRolesDeleteOne is the builder for deleting a single AuthRoles entity.
type AuthRolesDeleteOne struct {
ard *AuthRolesDelete
_d *AuthRolesDelete
}
// Where appends a list predicates to the AuthRolesDelete builder.
func (ardo *AuthRolesDeleteOne) Where(ps ...predicate.AuthRoles) *AuthRolesDeleteOne {
ardo.ard.mutation.Where(ps...)
return ardo
func (_d *AuthRolesDeleteOne) Where(ps ...predicate.AuthRoles) *AuthRolesDeleteOne {
_d._d.mutation.Where(ps...)
return _d
}
// Exec executes the deletion query.
func (ardo *AuthRolesDeleteOne) Exec(ctx context.Context) error {
n, err := ardo.ard.Exec(ctx)
func (_d *AuthRolesDeleteOne) Exec(ctx context.Context) error {
n, err := _d._d.Exec(ctx)
switch {
case err != nil:
return err
@@ -81,8 +81,8 @@ func (ardo *AuthRolesDeleteOne) Exec(ctx context.Context) error {
}
// ExecX is like Exec, but panics if an error occurs.
func (ardo *AuthRolesDeleteOne) ExecX(ctx context.Context) {
if err := ardo.Exec(ctx); err != nil {
func (_d *AuthRolesDeleteOne) ExecX(ctx context.Context) {
if err := _d.Exec(ctx); err != nil {
panic(err)
}
}
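A minimal sketch of the delete builder above. The predicate helper follows ent's usual generated form and is an assumption here; only predicate.AuthRoles itself appears in this diff.

// Hypothetical bulk delete. Exec returns how many rows were deleted,
// matching the Exec documentation above.
func pruneRoles(ctx context.Context, client *ent.Client) (int, error) {
	return client.AuthRoles.Delete().
		Where(authroles.RoleEQ(authroles.RoleUser)). // assumed predicate helper and enum value
		Exec(ctx)
}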


@@ -32,44 +32,44 @@ type AuthRolesQuery struct {
}
// Where adds a new predicate for the AuthRolesQuery builder.
func (arq *AuthRolesQuery) Where(ps ...predicate.AuthRoles) *AuthRolesQuery {
arq.predicates = append(arq.predicates, ps...)
return arq
func (_q *AuthRolesQuery) Where(ps ...predicate.AuthRoles) *AuthRolesQuery {
_q.predicates = append(_q.predicates, ps...)
return _q
}
// Limit the number of records to be returned by this query.
func (arq *AuthRolesQuery) Limit(limit int) *AuthRolesQuery {
arq.ctx.Limit = &limit
return arq
func (_q *AuthRolesQuery) Limit(limit int) *AuthRolesQuery {
_q.ctx.Limit = &limit
return _q
}
// Offset to start from.
func (arq *AuthRolesQuery) Offset(offset int) *AuthRolesQuery {
arq.ctx.Offset = &offset
return arq
func (_q *AuthRolesQuery) Offset(offset int) *AuthRolesQuery {
_q.ctx.Offset = &offset
return _q
}
// Unique configures the query builder to filter duplicate records on query.
// By default, unique is set to true, and can be disabled using this method.
func (arq *AuthRolesQuery) Unique(unique bool) *AuthRolesQuery {
arq.ctx.Unique = &unique
return arq
func (_q *AuthRolesQuery) Unique(unique bool) *AuthRolesQuery {
_q.ctx.Unique = &unique
return _q
}
// Order specifies how the records should be ordered.
func (arq *AuthRolesQuery) Order(o ...authroles.OrderOption) *AuthRolesQuery {
arq.order = append(arq.order, o...)
return arq
func (_q *AuthRolesQuery) Order(o ...authroles.OrderOption) *AuthRolesQuery {
_q.order = append(_q.order, o...)
return _q
}
// QueryToken chains the current query on the "token" edge.
func (arq *AuthRolesQuery) QueryToken() *AuthTokensQuery {
query := (&AuthTokensClient{config: arq.config}).Query()
func (_q *AuthRolesQuery) QueryToken() *AuthTokensQuery {
query := (&AuthTokensClient{config: _q.config}).Query()
query.path = func(ctx context.Context) (fromU *sql.Selector, err error) {
if err := arq.prepareQuery(ctx); err != nil {
if err := _q.prepareQuery(ctx); err != nil {
return nil, err
}
selector := arq.sqlQuery(ctx)
selector := _q.sqlQuery(ctx)
if err := selector.Err(); err != nil {
return nil, err
}
@@ -78,7 +78,7 @@ func (arq *AuthRolesQuery) QueryToken() *AuthTokensQuery {
sqlgraph.To(authtokens.Table, authtokens.FieldID),
sqlgraph.Edge(sqlgraph.O2O, true, authroles.TokenTable, authroles.TokenColumn),
)
fromU = sqlgraph.SetNeighbors(arq.driver.Dialect(), step)
fromU = sqlgraph.SetNeighbors(_q.driver.Dialect(), step)
return fromU, nil
}
return query
@@ -86,8 +86,8 @@ func (arq *AuthRolesQuery) QueryToken() *AuthTokensQuery {
// First returns the first AuthRoles entity from the query.
// Returns a *NotFoundError when no AuthRoles was found.
func (arq *AuthRolesQuery) First(ctx context.Context) (*AuthRoles, error) {
nodes, err := arq.Limit(1).All(setContextOp(ctx, arq.ctx, ent.OpQueryFirst))
func (_q *AuthRolesQuery) First(ctx context.Context) (*AuthRoles, error) {
nodes, err := _q.Limit(1).All(setContextOp(ctx, _q.ctx, ent.OpQueryFirst))
if err != nil {
return nil, err
}
@@ -98,8 +98,8 @@ func (arq *AuthRolesQuery) First(ctx context.Context) (*AuthRoles, error) {
}
// FirstX is like First, but panics if an error occurs.
func (arq *AuthRolesQuery) FirstX(ctx context.Context) *AuthRoles {
node, err := arq.First(ctx)
func (_q *AuthRolesQuery) FirstX(ctx context.Context) *AuthRoles {
node, err := _q.First(ctx)
if err != nil && !IsNotFound(err) {
panic(err)
}
@@ -108,9 +108,9 @@ func (arq *AuthRolesQuery) FirstX(ctx context.Context) *AuthRoles {
// FirstID returns the first AuthRoles ID from the query.
// Returns a *NotFoundError when no AuthRoles ID was found.
func (arq *AuthRolesQuery) FirstID(ctx context.Context) (id int, err error) {
func (_q *AuthRolesQuery) FirstID(ctx context.Context) (id int, err error) {
var ids []int
if ids, err = arq.Limit(1).IDs(setContextOp(ctx, arq.ctx, ent.OpQueryFirstID)); err != nil {
if ids, err = _q.Limit(1).IDs(setContextOp(ctx, _q.ctx, ent.OpQueryFirstID)); err != nil {
return
}
if len(ids) == 0 {
@@ -121,8 +121,8 @@ func (arq *AuthRolesQuery) FirstID(ctx context.Context) (id int, err error) {
}
// FirstIDX is like FirstID, but panics if an error occurs.
func (arq *AuthRolesQuery) FirstIDX(ctx context.Context) int {
id, err := arq.FirstID(ctx)
func (_q *AuthRolesQuery) FirstIDX(ctx context.Context) int {
id, err := _q.FirstID(ctx)
if err != nil && !IsNotFound(err) {
panic(err)
}
@@ -132,8 +132,8 @@ func (arq *AuthRolesQuery) FirstIDX(ctx context.Context) int {
// Only returns a single AuthRoles entity found by the query, ensuring it only returns one.
// Returns a *NotSingularError when more than one AuthRoles entity is found.
// Returns a *NotFoundError when no AuthRoles entities are found.
func (arq *AuthRolesQuery) Only(ctx context.Context) (*AuthRoles, error) {
nodes, err := arq.Limit(2).All(setContextOp(ctx, arq.ctx, ent.OpQueryOnly))
func (_q *AuthRolesQuery) Only(ctx context.Context) (*AuthRoles, error) {
nodes, err := _q.Limit(2).All(setContextOp(ctx, _q.ctx, ent.OpQueryOnly))
if err != nil {
return nil, err
}
@@ -148,8 +148,8 @@ func (arq *AuthRolesQuery) Only(ctx context.Context) (*AuthRoles, error) {
}
// OnlyX is like Only, but panics if an error occurs.
func (arq *AuthRolesQuery) OnlyX(ctx context.Context) *AuthRoles {
node, err := arq.Only(ctx)
func (_q *AuthRolesQuery) OnlyX(ctx context.Context) *AuthRoles {
node, err := _q.Only(ctx)
if err != nil {
panic(err)
}
@@ -159,9 +159,9 @@ func (arq *AuthRolesQuery) OnlyX(ctx context.Context) *AuthRoles {
// OnlyID is like Only, but returns the only AuthRoles ID in the query.
// Returns a *NotSingularError when more than one AuthRoles ID is found.
// Returns a *NotFoundError when no entities are found.
func (arq *AuthRolesQuery) OnlyID(ctx context.Context) (id int, err error) {
func (_q *AuthRolesQuery) OnlyID(ctx context.Context) (id int, err error) {
var ids []int
if ids, err = arq.Limit(2).IDs(setContextOp(ctx, arq.ctx, ent.OpQueryOnlyID)); err != nil {
if ids, err = _q.Limit(2).IDs(setContextOp(ctx, _q.ctx, ent.OpQueryOnlyID)); err != nil {
return
}
switch len(ids) {
@@ -176,8 +176,8 @@ func (arq *AuthRolesQuery) OnlyID(ctx context.Context) (id int, err error) {
}
// OnlyIDX is like OnlyID, but panics if an error occurs.
func (arq *AuthRolesQuery) OnlyIDX(ctx context.Context) int {
id, err := arq.OnlyID(ctx)
func (_q *AuthRolesQuery) OnlyIDX(ctx context.Context) int {
id, err := _q.OnlyID(ctx)
if err != nil {
panic(err)
}
@@ -185,18 +185,18 @@ func (arq *AuthRolesQuery) OnlyIDX(ctx context.Context) int {
}
// All executes the query and returns a list of AuthRolesSlice.
func (arq *AuthRolesQuery) All(ctx context.Context) ([]*AuthRoles, error) {
ctx = setContextOp(ctx, arq.ctx, ent.OpQueryAll)
if err := arq.prepareQuery(ctx); err != nil {
func (_q *AuthRolesQuery) All(ctx context.Context) ([]*AuthRoles, error) {
ctx = setContextOp(ctx, _q.ctx, ent.OpQueryAll)
if err := _q.prepareQuery(ctx); err != nil {
return nil, err
}
qr := querierAll[[]*AuthRoles, *AuthRolesQuery]()
return withInterceptors[[]*AuthRoles](ctx, arq, qr, arq.inters)
return withInterceptors[[]*AuthRoles](ctx, _q, qr, _q.inters)
}
// AllX is like All, but panics if an error occurs.
func (arq *AuthRolesQuery) AllX(ctx context.Context) []*AuthRoles {
nodes, err := arq.All(ctx)
func (_q *AuthRolesQuery) AllX(ctx context.Context) []*AuthRoles {
nodes, err := _q.All(ctx)
if err != nil {
panic(err)
}
@@ -204,20 +204,20 @@ func (arq *AuthRolesQuery) AllX(ctx context.Context) []*AuthRoles {
}
// IDs executes the query and returns a list of AuthRoles IDs.
func (arq *AuthRolesQuery) IDs(ctx context.Context) (ids []int, err error) {
if arq.ctx.Unique == nil && arq.path != nil {
arq.Unique(true)
func (_q *AuthRolesQuery) IDs(ctx context.Context) (ids []int, err error) {
if _q.ctx.Unique == nil && _q.path != nil {
_q.Unique(true)
}
ctx = setContextOp(ctx, arq.ctx, ent.OpQueryIDs)
if err = arq.Select(authroles.FieldID).Scan(ctx, &ids); err != nil {
ctx = setContextOp(ctx, _q.ctx, ent.OpQueryIDs)
if err = _q.Select(authroles.FieldID).Scan(ctx, &ids); err != nil {
return nil, err
}
return ids, nil
}
// IDsX is like IDs, but panics if an error occurs.
func (arq *AuthRolesQuery) IDsX(ctx context.Context) []int {
ids, err := arq.IDs(ctx)
func (_q *AuthRolesQuery) IDsX(ctx context.Context) []int {
ids, err := _q.IDs(ctx)
if err != nil {
panic(err)
}
@@ -225,17 +225,17 @@ func (arq *AuthRolesQuery) IDsX(ctx context.Context) []int {
}
// Count returns the count of the given query.
func (arq *AuthRolesQuery) Count(ctx context.Context) (int, error) {
ctx = setContextOp(ctx, arq.ctx, ent.OpQueryCount)
if err := arq.prepareQuery(ctx); err != nil {
func (_q *AuthRolesQuery) Count(ctx context.Context) (int, error) {
ctx = setContextOp(ctx, _q.ctx, ent.OpQueryCount)
if err := _q.prepareQuery(ctx); err != nil {
return 0, err
}
return withInterceptors[int](ctx, arq, querierCount[*AuthRolesQuery](), arq.inters)
return withInterceptors[int](ctx, _q, querierCount[*AuthRolesQuery](), _q.inters)
}
// CountX is like Count, but panics if an error occurs.
func (arq *AuthRolesQuery) CountX(ctx context.Context) int {
count, err := arq.Count(ctx)
func (_q *AuthRolesQuery) CountX(ctx context.Context) int {
count, err := _q.Count(ctx)
if err != nil {
panic(err)
}
@@ -243,9 +243,9 @@ func (arq *AuthRolesQuery) CountX(ctx context.Context) int {
}
// Exist returns true if the query has elements in the graph.
func (arq *AuthRolesQuery) Exist(ctx context.Context) (bool, error) {
ctx = setContextOp(ctx, arq.ctx, ent.OpQueryExist)
switch _, err := arq.FirstID(ctx); {
func (_q *AuthRolesQuery) Exist(ctx context.Context) (bool, error) {
ctx = setContextOp(ctx, _q.ctx, ent.OpQueryExist)
switch _, err := _q.FirstID(ctx); {
case IsNotFound(err):
return false, nil
case err != nil:
@@ -256,8 +256,8 @@ func (arq *AuthRolesQuery) Exist(ctx context.Context) (bool, error) {
}
// ExistX is like Exist, but panics if an error occurs.
func (arq *AuthRolesQuery) ExistX(ctx context.Context) bool {
exist, err := arq.Exist(ctx)
func (_q *AuthRolesQuery) ExistX(ctx context.Context) bool {
exist, err := _q.Exist(ctx)
if err != nil {
panic(err)
}
@@ -266,32 +266,32 @@ func (arq *AuthRolesQuery) ExistX(ctx context.Context) bool {
// Clone returns a duplicate of the AuthRolesQuery builder, including all associated steps. It can be
// used to prepare common query builders and use them differently after the clone is made.
func (arq *AuthRolesQuery) Clone() *AuthRolesQuery {
if arq == nil {
func (_q *AuthRolesQuery) Clone() *AuthRolesQuery {
if _q == nil {
return nil
}
return &AuthRolesQuery{
config: arq.config,
ctx: arq.ctx.Clone(),
order: append([]authroles.OrderOption{}, arq.order...),
inters: append([]Interceptor{}, arq.inters...),
predicates: append([]predicate.AuthRoles{}, arq.predicates...),
withToken: arq.withToken.Clone(),
config: _q.config,
ctx: _q.ctx.Clone(),
order: append([]authroles.OrderOption{}, _q.order...),
inters: append([]Interceptor{}, _q.inters...),
predicates: append([]predicate.AuthRoles{}, _q.predicates...),
withToken: _q.withToken.Clone(),
// clone intermediate query.
sql: arq.sql.Clone(),
path: arq.path,
sql: _q.sql.Clone(),
path: _q.path,
}
}
// WithToken tells the query-builder to eager-load the nodes that are connected to
// the "token" edge. The optional arguments are used to configure the query builder of the edge.
func (arq *AuthRolesQuery) WithToken(opts ...func(*AuthTokensQuery)) *AuthRolesQuery {
query := (&AuthTokensClient{config: arq.config}).Query()
func (_q *AuthRolesQuery) WithToken(opts ...func(*AuthTokensQuery)) *AuthRolesQuery {
query := (&AuthTokensClient{config: _q.config}).Query()
for _, opt := range opts {
opt(query)
}
arq.withToken = query
return arq
_q.withToken = query
return _q
}
// GroupBy is used to group vertices by one or more fields/columns.
@@ -308,10 +308,10 @@ func (arq *AuthRolesQuery) WithToken(opts ...func(*AuthTokensQuery)) *AuthRolesQ
// GroupBy(authroles.FieldRole).
// Aggregate(ent.Count()).
// Scan(ctx, &v)
func (arq *AuthRolesQuery) GroupBy(field string, fields ...string) *AuthRolesGroupBy {
arq.ctx.Fields = append([]string{field}, fields...)
grbuild := &AuthRolesGroupBy{build: arq}
grbuild.flds = &arq.ctx.Fields
func (_q *AuthRolesQuery) GroupBy(field string, fields ...string) *AuthRolesGroupBy {
_q.ctx.Fields = append([]string{field}, fields...)
grbuild := &AuthRolesGroupBy{build: _q}
grbuild.flds = &_q.ctx.Fields
grbuild.label = authroles.Label
grbuild.scan = grbuild.Scan
return grbuild
@@ -329,55 +329,55 @@ func (arq *AuthRolesQuery) GroupBy(field string, fields ...string) *AuthRolesGro
// client.AuthRoles.Query().
// Select(authroles.FieldRole).
// Scan(ctx, &v)
func (arq *AuthRolesQuery) Select(fields ...string) *AuthRolesSelect {
arq.ctx.Fields = append(arq.ctx.Fields, fields...)
sbuild := &AuthRolesSelect{AuthRolesQuery: arq}
func (_q *AuthRolesQuery) Select(fields ...string) *AuthRolesSelect {
_q.ctx.Fields = append(_q.ctx.Fields, fields...)
sbuild := &AuthRolesSelect{AuthRolesQuery: _q}
sbuild.label = authroles.Label
sbuild.flds, sbuild.scan = &arq.ctx.Fields, sbuild.Scan
sbuild.flds, sbuild.scan = &_q.ctx.Fields, sbuild.Scan
return sbuild
}
// Aggregate returns a AuthRolesSelect configured with the given aggregations.
func (arq *AuthRolesQuery) Aggregate(fns ...AggregateFunc) *AuthRolesSelect {
return arq.Select().Aggregate(fns...)
func (_q *AuthRolesQuery) Aggregate(fns ...AggregateFunc) *AuthRolesSelect {
return _q.Select().Aggregate(fns...)
}
func (arq *AuthRolesQuery) prepareQuery(ctx context.Context) error {
for _, inter := range arq.inters {
func (_q *AuthRolesQuery) prepareQuery(ctx context.Context) error {
for _, inter := range _q.inters {
if inter == nil {
return fmt.Errorf("ent: uninitialized interceptor (forgotten import ent/runtime?)")
}
if trv, ok := inter.(Traverser); ok {
if err := trv.Traverse(ctx, arq); err != nil {
if err := trv.Traverse(ctx, _q); err != nil {
return err
}
}
}
for _, f := range arq.ctx.Fields {
for _, f := range _q.ctx.Fields {
if !authroles.ValidColumn(f) {
return &ValidationError{Name: f, err: fmt.Errorf("ent: invalid field %q for query", f)}
}
}
if arq.path != nil {
prev, err := arq.path(ctx)
if _q.path != nil {
prev, err := _q.path(ctx)
if err != nil {
return err
}
arq.sql = prev
_q.sql = prev
}
return nil
}
func (arq *AuthRolesQuery) sqlAll(ctx context.Context, hooks ...queryHook) ([]*AuthRoles, error) {
func (_q *AuthRolesQuery) sqlAll(ctx context.Context, hooks ...queryHook) ([]*AuthRoles, error) {
var (
nodes = []*AuthRoles{}
withFKs = arq.withFKs
_spec = arq.querySpec()
withFKs = _q.withFKs
_spec = _q.querySpec()
loadedTypes = [1]bool{
arq.withToken != nil,
_q.withToken != nil,
}
)
if arq.withToken != nil {
if _q.withToken != nil {
withFKs = true
}
if withFKs {
@@ -387,7 +387,7 @@ func (arq *AuthRolesQuery) sqlAll(ctx context.Context, hooks ...queryHook) ([]*A
return (*AuthRoles).scanValues(nil, columns)
}
_spec.Assign = func(columns []string, values []any) error {
node := &AuthRoles{config: arq.config}
node := &AuthRoles{config: _q.config}
nodes = append(nodes, node)
node.Edges.loadedTypes = loadedTypes
return node.assignValues(columns, values)
@@ -395,14 +395,14 @@ func (arq *AuthRolesQuery) sqlAll(ctx context.Context, hooks ...queryHook) ([]*A
for i := range hooks {
hooks[i](ctx, _spec)
}
if err := sqlgraph.QueryNodes(ctx, arq.driver, _spec); err != nil {
if err := sqlgraph.QueryNodes(ctx, _q.driver, _spec); err != nil {
return nil, err
}
if len(nodes) == 0 {
return nodes, nil
}
if query := arq.withToken; query != nil {
if err := arq.loadToken(ctx, query, nodes, nil,
if query := _q.withToken; query != nil {
if err := _q.loadToken(ctx, query, nodes, nil,
func(n *AuthRoles, e *AuthTokens) { n.Edges.Token = e }); err != nil {
return nil, err
}
@@ -410,7 +410,7 @@ func (arq *AuthRolesQuery) sqlAll(ctx context.Context, hooks ...queryHook) ([]*A
return nodes, nil
}
func (arq *AuthRolesQuery) loadToken(ctx context.Context, query *AuthTokensQuery, nodes []*AuthRoles, init func(*AuthRoles), assign func(*AuthRoles, *AuthTokens)) error {
func (_q *AuthRolesQuery) loadToken(ctx context.Context, query *AuthTokensQuery, nodes []*AuthRoles, init func(*AuthRoles), assign func(*AuthRoles, *AuthTokens)) error {
ids := make([]uuid.UUID, 0, len(nodes))
nodeids := make(map[uuid.UUID][]*AuthRoles)
for i := range nodes {
@@ -443,24 +443,24 @@ func (arq *AuthRolesQuery) loadToken(ctx context.Context, query *AuthTokensQuery
return nil
}
func (arq *AuthRolesQuery) sqlCount(ctx context.Context) (int, error) {
_spec := arq.querySpec()
_spec.Node.Columns = arq.ctx.Fields
if len(arq.ctx.Fields) > 0 {
_spec.Unique = arq.ctx.Unique != nil && *arq.ctx.Unique
func (_q *AuthRolesQuery) sqlCount(ctx context.Context) (int, error) {
_spec := _q.querySpec()
_spec.Node.Columns = _q.ctx.Fields
if len(_q.ctx.Fields) > 0 {
_spec.Unique = _q.ctx.Unique != nil && *_q.ctx.Unique
}
return sqlgraph.CountNodes(ctx, arq.driver, _spec)
return sqlgraph.CountNodes(ctx, _q.driver, _spec)
}
func (arq *AuthRolesQuery) querySpec() *sqlgraph.QuerySpec {
func (_q *AuthRolesQuery) querySpec() *sqlgraph.QuerySpec {
_spec := sqlgraph.NewQuerySpec(authroles.Table, authroles.Columns, sqlgraph.NewFieldSpec(authroles.FieldID, field.TypeInt))
_spec.From = arq.sql
if unique := arq.ctx.Unique; unique != nil {
_spec.From = _q.sql
if unique := _q.ctx.Unique; unique != nil {
_spec.Unique = *unique
} else if arq.path != nil {
} else if _q.path != nil {
_spec.Unique = true
}
if fields := arq.ctx.Fields; len(fields) > 0 {
if fields := _q.ctx.Fields; len(fields) > 0 {
_spec.Node.Columns = make([]string, 0, len(fields))
_spec.Node.Columns = append(_spec.Node.Columns, authroles.FieldID)
for i := range fields {
@@ -469,20 +469,20 @@ func (arq *AuthRolesQuery) querySpec() *sqlgraph.QuerySpec {
}
}
}
if ps := arq.predicates; len(ps) > 0 {
if ps := _q.predicates; len(ps) > 0 {
_spec.Predicate = func(selector *sql.Selector) {
for i := range ps {
ps[i](selector)
}
}
}
if limit := arq.ctx.Limit; limit != nil {
if limit := _q.ctx.Limit; limit != nil {
_spec.Limit = *limit
}
if offset := arq.ctx.Offset; offset != nil {
if offset := _q.ctx.Offset; offset != nil {
_spec.Offset = *offset
}
if ps := arq.order; len(ps) > 0 {
if ps := _q.order; len(ps) > 0 {
_spec.Order = func(selector *sql.Selector) {
for i := range ps {
ps[i](selector)
@@ -492,33 +492,33 @@ func (arq *AuthRolesQuery) querySpec() *sqlgraph.QuerySpec {
return _spec
}
func (arq *AuthRolesQuery) sqlQuery(ctx context.Context) *sql.Selector {
builder := sql.Dialect(arq.driver.Dialect())
func (_q *AuthRolesQuery) sqlQuery(ctx context.Context) *sql.Selector {
builder := sql.Dialect(_q.driver.Dialect())
t1 := builder.Table(authroles.Table)
columns := arq.ctx.Fields
columns := _q.ctx.Fields
if len(columns) == 0 {
columns = authroles.Columns
}
selector := builder.Select(t1.Columns(columns...)...).From(t1)
if arq.sql != nil {
selector = arq.sql
if _q.sql != nil {
selector = _q.sql
selector.Select(selector.Columns(columns...)...)
}
if arq.ctx.Unique != nil && *arq.ctx.Unique {
if _q.ctx.Unique != nil && *_q.ctx.Unique {
selector.Distinct()
}
for _, p := range arq.predicates {
for _, p := range _q.predicates {
p(selector)
}
for _, p := range arq.order {
for _, p := range _q.order {
p(selector)
}
if offset := arq.ctx.Offset; offset != nil {
if offset := _q.ctx.Offset; offset != nil {
// limit is mandatory for offset clause. We start
// with default value, and override it below if needed.
selector.Offset(*offset).Limit(math.MaxInt32)
}
if limit := arq.ctx.Limit; limit != nil {
if limit := _q.ctx.Limit; limit != nil {
selector.Limit(*limit)
}
return selector
@@ -531,41 +531,41 @@ type AuthRolesGroupBy struct {
}
// Aggregate adds the given aggregation functions to the group-by query.
func (argb *AuthRolesGroupBy) Aggregate(fns ...AggregateFunc) *AuthRolesGroupBy {
argb.fns = append(argb.fns, fns...)
return argb
func (_g *AuthRolesGroupBy) Aggregate(fns ...AggregateFunc) *AuthRolesGroupBy {
_g.fns = append(_g.fns, fns...)
return _g
}
// Scan applies the selector query and scans the result into the given value.
func (argb *AuthRolesGroupBy) Scan(ctx context.Context, v any) error {
ctx = setContextOp(ctx, argb.build.ctx, ent.OpQueryGroupBy)
if err := argb.build.prepareQuery(ctx); err != nil {
func (_g *AuthRolesGroupBy) Scan(ctx context.Context, v any) error {
ctx = setContextOp(ctx, _g.build.ctx, ent.OpQueryGroupBy)
if err := _g.build.prepareQuery(ctx); err != nil {
return err
}
return scanWithInterceptors[*AuthRolesQuery, *AuthRolesGroupBy](ctx, argb.build, argb, argb.build.inters, v)
return scanWithInterceptors[*AuthRolesQuery, *AuthRolesGroupBy](ctx, _g.build, _g, _g.build.inters, v)
}
func (argb *AuthRolesGroupBy) sqlScan(ctx context.Context, root *AuthRolesQuery, v any) error {
func (_g *AuthRolesGroupBy) sqlScan(ctx context.Context, root *AuthRolesQuery, v any) error {
selector := root.sqlQuery(ctx).Select()
aggregation := make([]string, 0, len(argb.fns))
for _, fn := range argb.fns {
aggregation := make([]string, 0, len(_g.fns))
for _, fn := range _g.fns {
aggregation = append(aggregation, fn(selector))
}
if len(selector.SelectedColumns()) == 0 {
columns := make([]string, 0, len(*argb.flds)+len(argb.fns))
for _, f := range *argb.flds {
columns := make([]string, 0, len(*_g.flds)+len(_g.fns))
for _, f := range *_g.flds {
columns = append(columns, selector.C(f))
}
columns = append(columns, aggregation...)
selector.Select(columns...)
}
selector.GroupBy(selector.Columns(*argb.flds...)...)
selector.GroupBy(selector.Columns(*_g.flds...)...)
if err := selector.Err(); err != nil {
return err
}
rows := &sql.Rows{}
query, args := selector.Query()
if err := argb.build.driver.Query(ctx, query, args, rows); err != nil {
if err := _g.build.driver.Query(ctx, query, args, rows); err != nil {
return err
}
defer rows.Close()
@@ -579,27 +579,27 @@ type AuthRolesSelect struct {
}
// Aggregate adds the given aggregation functions to the selector query.
func (ars *AuthRolesSelect) Aggregate(fns ...AggregateFunc) *AuthRolesSelect {
ars.fns = append(ars.fns, fns...)
return ars
func (_s *AuthRolesSelect) Aggregate(fns ...AggregateFunc) *AuthRolesSelect {
_s.fns = append(_s.fns, fns...)
return _s
}
// Scan applies the selector query and scans the result into the given value.
func (ars *AuthRolesSelect) Scan(ctx context.Context, v any) error {
ctx = setContextOp(ctx, ars.ctx, ent.OpQuerySelect)
if err := ars.prepareQuery(ctx); err != nil {
func (_s *AuthRolesSelect) Scan(ctx context.Context, v any) error {
ctx = setContextOp(ctx, _s.ctx, ent.OpQuerySelect)
if err := _s.prepareQuery(ctx); err != nil {
return err
}
return scanWithInterceptors[*AuthRolesQuery, *AuthRolesSelect](ctx, ars.AuthRolesQuery, ars, ars.inters, v)
return scanWithInterceptors[*AuthRolesQuery, *AuthRolesSelect](ctx, _s.AuthRolesQuery, _s, _s.inters, v)
}
func (ars *AuthRolesSelect) sqlScan(ctx context.Context, root *AuthRolesQuery, v any) error {
func (_s *AuthRolesSelect) sqlScan(ctx context.Context, root *AuthRolesQuery, v any) error {
selector := root.sqlQuery(ctx)
aggregation := make([]string, 0, len(ars.fns))
for _, fn := range ars.fns {
aggregation := make([]string, 0, len(_s.fns))
for _, fn := range _s.fns {
aggregation = append(aggregation, fn(selector))
}
switch n := len(*ars.selector.flds); {
switch n := len(*_s.selector.flds); {
case n == 0 && len(aggregation) > 0:
selector.Select(aggregation...)
case n != 0 && len(aggregation) > 0:
@@ -607,7 +607,7 @@ func (ars *AuthRolesSelect) sqlScan(ctx context.Context, root *AuthRolesQuery, v
}
rows := &sql.Rows{}
query, args := selector.Query()
if err := ars.driver.Query(ctx, query, args, rows); err != nil {
if err := _s.driver.Query(ctx, query, args, rows); err != nil {
return err
}
defer rows.Close()

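A short usage sketch for the query builder in this file (client and ctx assumed); WithToken, Limit, and All are the methods shown above.

// Hypothetical query that eager-loads each role's token edge. Ordering is
// omitted because the generated authroles.By* helpers are not part of this hunk.
func listRoles(ctx context.Context, client *ent.Client) ([]*ent.AuthRoles, error) {
	return client.AuthRoles.Query().
		WithToken(). // populate Edges.Token on each returned AuthRoles
		Limit(10).
		All(ctx)
}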

@@ -24,63 +24,63 @@ type AuthRolesUpdate struct {
}
// Where appends a list predicates to the AuthRolesUpdate builder.
func (aru *AuthRolesUpdate) Where(ps ...predicate.AuthRoles) *AuthRolesUpdate {
aru.mutation.Where(ps...)
return aru
func (_u *AuthRolesUpdate) Where(ps ...predicate.AuthRoles) *AuthRolesUpdate {
_u.mutation.Where(ps...)
return _u
}
// SetRole sets the "role" field.
func (aru *AuthRolesUpdate) SetRole(a authroles.Role) *AuthRolesUpdate {
aru.mutation.SetRole(a)
return aru
func (_u *AuthRolesUpdate) SetRole(v authroles.Role) *AuthRolesUpdate {
_u.mutation.SetRole(v)
return _u
}
// SetNillableRole sets the "role" field if the given value is not nil.
func (aru *AuthRolesUpdate) SetNillableRole(a *authroles.Role) *AuthRolesUpdate {
if a != nil {
aru.SetRole(*a)
func (_u *AuthRolesUpdate) SetNillableRole(v *authroles.Role) *AuthRolesUpdate {
if v != nil {
_u.SetRole(*v)
}
return aru
return _u
}
// SetTokenID sets the "token" edge to the AuthTokens entity by ID.
func (aru *AuthRolesUpdate) SetTokenID(id uuid.UUID) *AuthRolesUpdate {
aru.mutation.SetTokenID(id)
return aru
func (_u *AuthRolesUpdate) SetTokenID(id uuid.UUID) *AuthRolesUpdate {
_u.mutation.SetTokenID(id)
return _u
}
// SetNillableTokenID sets the "token" edge to the AuthTokens entity by ID if the given value is not nil.
func (aru *AuthRolesUpdate) SetNillableTokenID(id *uuid.UUID) *AuthRolesUpdate {
func (_u *AuthRolesUpdate) SetNillableTokenID(id *uuid.UUID) *AuthRolesUpdate {
if id != nil {
aru = aru.SetTokenID(*id)
_u = _u.SetTokenID(*id)
}
return aru
return _u
}
// SetToken sets the "token" edge to the AuthTokens entity.
func (aru *AuthRolesUpdate) SetToken(a *AuthTokens) *AuthRolesUpdate {
return aru.SetTokenID(a.ID)
func (_u *AuthRolesUpdate) SetToken(v *AuthTokens) *AuthRolesUpdate {
return _u.SetTokenID(v.ID)
}
// Mutation returns the AuthRolesMutation object of the builder.
func (aru *AuthRolesUpdate) Mutation() *AuthRolesMutation {
return aru.mutation
func (_u *AuthRolesUpdate) Mutation() *AuthRolesMutation {
return _u.mutation
}
// ClearToken clears the "token" edge to the AuthTokens entity.
func (aru *AuthRolesUpdate) ClearToken() *AuthRolesUpdate {
aru.mutation.ClearToken()
return aru
func (_u *AuthRolesUpdate) ClearToken() *AuthRolesUpdate {
_u.mutation.ClearToken()
return _u
}
// Save executes the query and returns the number of nodes affected by the update operation.
func (aru *AuthRolesUpdate) Save(ctx context.Context) (int, error) {
return withHooks(ctx, aru.sqlSave, aru.mutation, aru.hooks)
func (_u *AuthRolesUpdate) Save(ctx context.Context) (int, error) {
return withHooks(ctx, _u.sqlSave, _u.mutation, _u.hooks)
}
// SaveX is like Save, but panics if an error occurs.
func (aru *AuthRolesUpdate) SaveX(ctx context.Context) int {
affected, err := aru.Save(ctx)
func (_u *AuthRolesUpdate) SaveX(ctx context.Context) int {
affected, err := _u.Save(ctx)
if err != nil {
panic(err)
}
@@ -88,21 +88,21 @@ func (aru *AuthRolesUpdate) SaveX(ctx context.Context) int {
}
// Exec executes the query.
func (aru *AuthRolesUpdate) Exec(ctx context.Context) error {
_, err := aru.Save(ctx)
func (_u *AuthRolesUpdate) Exec(ctx context.Context) error {
_, err := _u.Save(ctx)
return err
}
// ExecX is like Exec, but panics if an error occurs.
func (aru *AuthRolesUpdate) ExecX(ctx context.Context) {
if err := aru.Exec(ctx); err != nil {
func (_u *AuthRolesUpdate) ExecX(ctx context.Context) {
if err := _u.Exec(ctx); err != nil {
panic(err)
}
}
// check runs all checks and user-defined validators on the builder.
func (aru *AuthRolesUpdate) check() error {
if v, ok := aru.mutation.Role(); ok {
func (_u *AuthRolesUpdate) check() error {
if v, ok := _u.mutation.Role(); ok {
if err := authroles.RoleValidator(v); err != nil {
return &ValidationError{Name: "role", err: fmt.Errorf(`ent: validator failed for field "AuthRoles.role": %w`, err)}
}
@@ -110,22 +110,22 @@ func (aru *AuthRolesUpdate) check() error {
return nil
}
func (aru *AuthRolesUpdate) sqlSave(ctx context.Context) (n int, err error) {
if err := aru.check(); err != nil {
return n, err
func (_u *AuthRolesUpdate) sqlSave(ctx context.Context) (_node int, err error) {
if err := _u.check(); err != nil {
return _node, err
}
_spec := sqlgraph.NewUpdateSpec(authroles.Table, authroles.Columns, sqlgraph.NewFieldSpec(authroles.FieldID, field.TypeInt))
if ps := aru.mutation.predicates; len(ps) > 0 {
if ps := _u.mutation.predicates; len(ps) > 0 {
_spec.Predicate = func(selector *sql.Selector) {
for i := range ps {
ps[i](selector)
}
}
}
if value, ok := aru.mutation.Role(); ok {
if value, ok := _u.mutation.Role(); ok {
_spec.SetField(authroles.FieldRole, field.TypeEnum, value)
}
if aru.mutation.TokenCleared() {
if _u.mutation.TokenCleared() {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.O2O,
Inverse: true,
@@ -138,7 +138,7 @@ func (aru *AuthRolesUpdate) sqlSave(ctx context.Context) (n int, err error) {
}
_spec.Edges.Clear = append(_spec.Edges.Clear, edge)
}
if nodes := aru.mutation.TokenIDs(); len(nodes) > 0 {
if nodes := _u.mutation.TokenIDs(); len(nodes) > 0 {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.O2O,
Inverse: true,
@@ -154,7 +154,7 @@ func (aru *AuthRolesUpdate) sqlSave(ctx context.Context) (n int, err error) {
}
_spec.Edges.Add = append(_spec.Edges.Add, edge)
}
if n, err = sqlgraph.UpdateNodes(ctx, aru.driver, _spec); err != nil {
if _node, err = sqlgraph.UpdateNodes(ctx, _u.driver, _spec); err != nil {
if _, ok := err.(*sqlgraph.NotFoundError); ok {
err = &NotFoundError{authroles.Label}
} else if sqlgraph.IsConstraintError(err) {
@@ -162,8 +162,8 @@ func (aru *AuthRolesUpdate) sqlSave(ctx context.Context) (n int, err error) {
}
return 0, err
}
aru.mutation.done = true
return n, nil
_u.mutation.done = true
return _node, nil
}
// AuthRolesUpdateOne is the builder for updating a single AuthRoles entity.
@@ -175,70 +175,70 @@ type AuthRolesUpdateOne struct {
}
// SetRole sets the "role" field.
func (aruo *AuthRolesUpdateOne) SetRole(a authroles.Role) *AuthRolesUpdateOne {
aruo.mutation.SetRole(a)
return aruo
func (_u *AuthRolesUpdateOne) SetRole(v authroles.Role) *AuthRolesUpdateOne {
_u.mutation.SetRole(v)
return _u
}
// SetNillableRole sets the "role" field if the given value is not nil.
func (aruo *AuthRolesUpdateOne) SetNillableRole(a *authroles.Role) *AuthRolesUpdateOne {
if a != nil {
aruo.SetRole(*a)
func (_u *AuthRolesUpdateOne) SetNillableRole(v *authroles.Role) *AuthRolesUpdateOne {
if v != nil {
_u.SetRole(*v)
}
return aruo
return _u
}
// SetTokenID sets the "token" edge to the AuthTokens entity by ID.
func (aruo *AuthRolesUpdateOne) SetTokenID(id uuid.UUID) *AuthRolesUpdateOne {
aruo.mutation.SetTokenID(id)
return aruo
func (_u *AuthRolesUpdateOne) SetTokenID(id uuid.UUID) *AuthRolesUpdateOne {
_u.mutation.SetTokenID(id)
return _u
}
// SetNillableTokenID sets the "token" edge to the AuthTokens entity by ID if the given value is not nil.
func (aruo *AuthRolesUpdateOne) SetNillableTokenID(id *uuid.UUID) *AuthRolesUpdateOne {
func (_u *AuthRolesUpdateOne) SetNillableTokenID(id *uuid.UUID) *AuthRolesUpdateOne {
if id != nil {
aruo = aruo.SetTokenID(*id)
_u = _u.SetTokenID(*id)
}
return aruo
return _u
}
// SetToken sets the "token" edge to the AuthTokens entity.
func (aruo *AuthRolesUpdateOne) SetToken(a *AuthTokens) *AuthRolesUpdateOne {
return aruo.SetTokenID(a.ID)
func (_u *AuthRolesUpdateOne) SetToken(v *AuthTokens) *AuthRolesUpdateOne {
return _u.SetTokenID(v.ID)
}
// Mutation returns the AuthRolesMutation object of the builder.
func (aruo *AuthRolesUpdateOne) Mutation() *AuthRolesMutation {
return aruo.mutation
func (_u *AuthRolesUpdateOne) Mutation() *AuthRolesMutation {
return _u.mutation
}
// ClearToken clears the "token" edge to the AuthTokens entity.
func (aruo *AuthRolesUpdateOne) ClearToken() *AuthRolesUpdateOne {
aruo.mutation.ClearToken()
return aruo
func (_u *AuthRolesUpdateOne) ClearToken() *AuthRolesUpdateOne {
_u.mutation.ClearToken()
return _u
}
// Where appends a list predicates to the AuthRolesUpdate builder.
func (aruo *AuthRolesUpdateOne) Where(ps ...predicate.AuthRoles) *AuthRolesUpdateOne {
aruo.mutation.Where(ps...)
return aruo
func (_u *AuthRolesUpdateOne) Where(ps ...predicate.AuthRoles) *AuthRolesUpdateOne {
_u.mutation.Where(ps...)
return _u
}
// Select allows selecting one or more fields (columns) of the returned entity.
// The default is selecting all fields defined in the entity schema.
func (aruo *AuthRolesUpdateOne) Select(field string, fields ...string) *AuthRolesUpdateOne {
aruo.fields = append([]string{field}, fields...)
return aruo
func (_u *AuthRolesUpdateOne) Select(field string, fields ...string) *AuthRolesUpdateOne {
_u.fields = append([]string{field}, fields...)
return _u
}
// Save executes the query and returns the updated AuthRoles entity.
func (aruo *AuthRolesUpdateOne) Save(ctx context.Context) (*AuthRoles, error) {
return withHooks(ctx, aruo.sqlSave, aruo.mutation, aruo.hooks)
func (_u *AuthRolesUpdateOne) Save(ctx context.Context) (*AuthRoles, error) {
return withHooks(ctx, _u.sqlSave, _u.mutation, _u.hooks)
}
// SaveX is like Save, but panics if an error occurs.
func (aruo *AuthRolesUpdateOne) SaveX(ctx context.Context) *AuthRoles {
node, err := aruo.Save(ctx)
func (_u *AuthRolesUpdateOne) SaveX(ctx context.Context) *AuthRoles {
node, err := _u.Save(ctx)
if err != nil {
panic(err)
}
@@ -246,21 +246,21 @@ func (aruo *AuthRolesUpdateOne) SaveX(ctx context.Context) *AuthRoles {
}
// Exec executes the query on the entity.
func (aruo *AuthRolesUpdateOne) Exec(ctx context.Context) error {
_, err := aruo.Save(ctx)
func (_u *AuthRolesUpdateOne) Exec(ctx context.Context) error {
_, err := _u.Save(ctx)
return err
}
// ExecX is like Exec, but panics if an error occurs.
func (aruo *AuthRolesUpdateOne) ExecX(ctx context.Context) {
if err := aruo.Exec(ctx); err != nil {
func (_u *AuthRolesUpdateOne) ExecX(ctx context.Context) {
if err := _u.Exec(ctx); err != nil {
panic(err)
}
}
// check runs all checks and user-defined validators on the builder.
func (aruo *AuthRolesUpdateOne) check() error {
if v, ok := aruo.mutation.Role(); ok {
func (_u *AuthRolesUpdateOne) check() error {
if v, ok := _u.mutation.Role(); ok {
if err := authroles.RoleValidator(v); err != nil {
return &ValidationError{Name: "role", err: fmt.Errorf(`ent: validator failed for field "AuthRoles.role": %w`, err)}
}
@@ -268,17 +268,17 @@ func (aruo *AuthRolesUpdateOne) check() error {
return nil
}
func (aruo *AuthRolesUpdateOne) sqlSave(ctx context.Context) (_node *AuthRoles, err error) {
if err := aruo.check(); err != nil {
func (_u *AuthRolesUpdateOne) sqlSave(ctx context.Context) (_node *AuthRoles, err error) {
if err := _u.check(); err != nil {
return _node, err
}
_spec := sqlgraph.NewUpdateSpec(authroles.Table, authroles.Columns, sqlgraph.NewFieldSpec(authroles.FieldID, field.TypeInt))
id, ok := aruo.mutation.ID()
id, ok := _u.mutation.ID()
if !ok {
return nil, &ValidationError{Name: "id", err: errors.New(`ent: missing "AuthRoles.id" for update`)}
}
_spec.Node.ID.Value = id
if fields := aruo.fields; len(fields) > 0 {
if fields := _u.fields; len(fields) > 0 {
_spec.Node.Columns = make([]string, 0, len(fields))
_spec.Node.Columns = append(_spec.Node.Columns, authroles.FieldID)
for _, f := range fields {
@@ -290,17 +290,17 @@ func (aruo *AuthRolesUpdateOne) sqlSave(ctx context.Context) (_node *AuthRoles,
}
}
}
if ps := aruo.mutation.predicates; len(ps) > 0 {
if ps := _u.mutation.predicates; len(ps) > 0 {
_spec.Predicate = func(selector *sql.Selector) {
for i := range ps {
ps[i](selector)
}
}
}
if value, ok := aruo.mutation.Role(); ok {
if value, ok := _u.mutation.Role(); ok {
_spec.SetField(authroles.FieldRole, field.TypeEnum, value)
}
if aruo.mutation.TokenCleared() {
if _u.mutation.TokenCleared() {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.O2O,
Inverse: true,
@@ -313,7 +313,7 @@ func (aruo *AuthRolesUpdateOne) sqlSave(ctx context.Context) (_node *AuthRoles,
}
_spec.Edges.Clear = append(_spec.Edges.Clear, edge)
}
if nodes := aruo.mutation.TokenIDs(); len(nodes) > 0 {
if nodes := _u.mutation.TokenIDs(); len(nodes) > 0 {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.O2O,
Inverse: true,
@@ -329,10 +329,10 @@ func (aruo *AuthRolesUpdateOne) sqlSave(ctx context.Context) (_node *AuthRoles,
}
_spec.Edges.Add = append(_spec.Edges.Add, edge)
}
_node = &AuthRoles{config: aruo.config}
_node = &AuthRoles{config: _u.config}
_spec.Assign = _node.assignValues
_spec.ScanValues = _node.scanValues
if err = sqlgraph.UpdateNode(ctx, aruo.driver, _spec); err != nil {
if err = sqlgraph.UpdateNode(ctx, _u.driver, _spec); err != nil {
if _, ok := err.(*sqlgraph.NotFoundError); ok {
err = &NotFoundError{authroles.Label}
} else if sqlgraph.IsConstraintError(err) {
@@ -340,6 +340,6 @@ func (aruo *AuthRolesUpdateOne) sqlSave(ctx context.Context) (_node *AuthRoles,
}
return nil, err
}
aruo.mutation.done = true
_u.mutation.done = true
return _node, nil
}
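A sketch of the bulk update builder above; for AuthRolesUpdate, Save returns the number of affected rows, which the renamed sqlSave now threads through _node. The client and ctx values are assumptions.

// Hypothetical bulk update that detaches every role from its token.
// Adding Where(...) would scope the update, as in the Where method above.
func detachAllRoles(ctx context.Context, client *ent.Client) (int, error) {
	return client.AuthRoles.Update().
		ClearToken(). // clears the "token" edge, per ClearToken above
		Save(ctx)     // returns the number of nodes affected
}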


@@ -90,7 +90,7 @@ func (*AuthTokens) scanValues(columns []string) ([]any, error) {
// assignValues assigns the values that were returned from sql.Rows (after scanning)
// to the AuthTokens fields.
func (at *AuthTokens) assignValues(columns []string, values []any) error {
func (_m *AuthTokens) assignValues(columns []string, values []any) error {
if m, n := len(values), len(columns); m < n {
return fmt.Errorf("mismatch number of scan values: %d != %d", m, n)
}
@@ -100,41 +100,41 @@ func (at *AuthTokens) assignValues(columns []string, values []any) error {
if value, ok := values[i].(*uuid.UUID); !ok {
return fmt.Errorf("unexpected type %T for field id", values[i])
} else if value != nil {
at.ID = *value
_m.ID = *value
}
case authtokens.FieldCreatedAt:
if value, ok := values[i].(*sql.NullTime); !ok {
return fmt.Errorf("unexpected type %T for field created_at", values[i])
} else if value.Valid {
at.CreatedAt = value.Time
_m.CreatedAt = value.Time
}
case authtokens.FieldUpdatedAt:
if value, ok := values[i].(*sql.NullTime); !ok {
return fmt.Errorf("unexpected type %T for field updated_at", values[i])
} else if value.Valid {
at.UpdatedAt = value.Time
_m.UpdatedAt = value.Time
}
case authtokens.FieldToken:
if value, ok := values[i].(*[]byte); !ok {
return fmt.Errorf("unexpected type %T for field token", values[i])
} else if value != nil {
at.Token = *value
_m.Token = *value
}
case authtokens.FieldExpiresAt:
if value, ok := values[i].(*sql.NullTime); !ok {
return fmt.Errorf("unexpected type %T for field expires_at", values[i])
} else if value.Valid {
at.ExpiresAt = value.Time
_m.ExpiresAt = value.Time
}
case authtokens.ForeignKeys[0]:
if value, ok := values[i].(*sql.NullScanner); !ok {
return fmt.Errorf("unexpected type %T for field user_auth_tokens", values[i])
} else if value.Valid {
at.user_auth_tokens = new(uuid.UUID)
*at.user_auth_tokens = *value.S.(*uuid.UUID)
_m.user_auth_tokens = new(uuid.UUID)
*_m.user_auth_tokens = *value.S.(*uuid.UUID)
}
default:
at.selectValues.Set(columns[i], values[i])
_m.selectValues.Set(columns[i], values[i])
}
}
return nil
@@ -142,54 +142,54 @@ func (at *AuthTokens) assignValues(columns []string, values []any) error {
// Value returns the ent.Value that was dynamically selected and assigned to the AuthTokens.
// This includes values selected through modifiers, order, etc.
func (at *AuthTokens) Value(name string) (ent.Value, error) {
return at.selectValues.Get(name)
func (_m *AuthTokens) Value(name string) (ent.Value, error) {
return _m.selectValues.Get(name)
}
// QueryUser queries the "user" edge of the AuthTokens entity.
func (at *AuthTokens) QueryUser() *UserQuery {
return NewAuthTokensClient(at.config).QueryUser(at)
func (_m *AuthTokens) QueryUser() *UserQuery {
return NewAuthTokensClient(_m.config).QueryUser(_m)
}
// QueryRoles queries the "roles" edge of the AuthTokens entity.
func (at *AuthTokens) QueryRoles() *AuthRolesQuery {
return NewAuthTokensClient(at.config).QueryRoles(at)
func (_m *AuthTokens) QueryRoles() *AuthRolesQuery {
return NewAuthTokensClient(_m.config).QueryRoles(_m)
}
// Update returns a builder for updating this AuthTokens.
// Note that you need to call AuthTokens.Unwrap() before calling this method if this AuthTokens
// was returned from a transaction, and the transaction was committed or rolled back.
func (at *AuthTokens) Update() *AuthTokensUpdateOne {
return NewAuthTokensClient(at.config).UpdateOne(at)
func (_m *AuthTokens) Update() *AuthTokensUpdateOne {
return NewAuthTokensClient(_m.config).UpdateOne(_m)
}
// Unwrap unwraps the AuthTokens entity that was returned from a transaction after it was closed,
// so that all future queries will be executed through the driver which created the transaction.
func (at *AuthTokens) Unwrap() *AuthTokens {
_tx, ok := at.config.driver.(*txDriver)
func (_m *AuthTokens) Unwrap() *AuthTokens {
_tx, ok := _m.config.driver.(*txDriver)
if !ok {
panic("ent: AuthTokens is not a transactional entity")
}
at.config.driver = _tx.drv
return at
_m.config.driver = _tx.drv
return _m
}
// String implements the fmt.Stringer.
func (at *AuthTokens) String() string {
func (_m *AuthTokens) String() string {
var builder strings.Builder
builder.WriteString("AuthTokens(")
builder.WriteString(fmt.Sprintf("id=%v, ", at.ID))
builder.WriteString(fmt.Sprintf("id=%v, ", _m.ID))
builder.WriteString("created_at=")
builder.WriteString(at.CreatedAt.Format(time.ANSIC))
builder.WriteString(_m.CreatedAt.Format(time.ANSIC))
builder.WriteString(", ")
builder.WriteString("updated_at=")
builder.WriteString(at.UpdatedAt.Format(time.ANSIC))
builder.WriteString(_m.UpdatedAt.Format(time.ANSIC))
builder.WriteString(", ")
builder.WriteString("token=")
builder.WriteString(fmt.Sprintf("%v", at.Token))
builder.WriteString(fmt.Sprintf("%v", _m.Token))
builder.WriteString(", ")
builder.WriteString("expires_at=")
builder.WriteString(at.ExpiresAt.Format(time.ANSIC))
builder.WriteString(_m.ExpiresAt.Format(time.ANSIC))
builder.WriteByte(')')
return builder.String()
}

View File
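
The next file is the generated AuthTokensCreate builder, with its receiver renamed from atc to _c and setter parameters renamed to v. A minimal create sketch under the same assumptions (placeholder import path, illustrative caller); note that token is the only field without a default, as check() further down enforces:

// Sketch only: placeholder import path for the generated ent package.
package sketch

import (
	"context"
	"time"

	"example.com/app/ent" // placeholder path
)

func createToken(ctx context.Context, client *ent.Client, owner *ent.User, raw []byte) (*ent.AuthTokens, error) {
	// created_at, updated_at, expires_at and id are filled in by defaults();
	// SetExpiresAt simply overrides the default expiry here.
	return client.AuthTokens.
		Create().
		SetToken(raw).
		SetExpiresAt(time.Now().Add(7 * 24 * time.Hour)).
		SetUser(owner).
		Save(ctx)
}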

@@ -24,119 +24,119 @@ type AuthTokensCreate struct {
}
// SetCreatedAt sets the "created_at" field.
func (atc *AuthTokensCreate) SetCreatedAt(t time.Time) *AuthTokensCreate {
atc.mutation.SetCreatedAt(t)
return atc
func (_c *AuthTokensCreate) SetCreatedAt(v time.Time) *AuthTokensCreate {
_c.mutation.SetCreatedAt(v)
return _c
}
// SetNillableCreatedAt sets the "created_at" field if the given value is not nil.
func (atc *AuthTokensCreate) SetNillableCreatedAt(t *time.Time) *AuthTokensCreate {
if t != nil {
atc.SetCreatedAt(*t)
func (_c *AuthTokensCreate) SetNillableCreatedAt(v *time.Time) *AuthTokensCreate {
if v != nil {
_c.SetCreatedAt(*v)
}
return atc
return _c
}
// SetUpdatedAt sets the "updated_at" field.
func (atc *AuthTokensCreate) SetUpdatedAt(t time.Time) *AuthTokensCreate {
atc.mutation.SetUpdatedAt(t)
return atc
func (_c *AuthTokensCreate) SetUpdatedAt(v time.Time) *AuthTokensCreate {
_c.mutation.SetUpdatedAt(v)
return _c
}
// SetNillableUpdatedAt sets the "updated_at" field if the given value is not nil.
func (atc *AuthTokensCreate) SetNillableUpdatedAt(t *time.Time) *AuthTokensCreate {
if t != nil {
atc.SetUpdatedAt(*t)
func (_c *AuthTokensCreate) SetNillableUpdatedAt(v *time.Time) *AuthTokensCreate {
if v != nil {
_c.SetUpdatedAt(*v)
}
return atc
return _c
}
// SetToken sets the "token" field.
func (atc *AuthTokensCreate) SetToken(b []byte) *AuthTokensCreate {
atc.mutation.SetToken(b)
return atc
func (_c *AuthTokensCreate) SetToken(v []byte) *AuthTokensCreate {
_c.mutation.SetToken(v)
return _c
}
// SetExpiresAt sets the "expires_at" field.
func (atc *AuthTokensCreate) SetExpiresAt(t time.Time) *AuthTokensCreate {
atc.mutation.SetExpiresAt(t)
return atc
func (_c *AuthTokensCreate) SetExpiresAt(v time.Time) *AuthTokensCreate {
_c.mutation.SetExpiresAt(v)
return _c
}
// SetNillableExpiresAt sets the "expires_at" field if the given value is not nil.
func (atc *AuthTokensCreate) SetNillableExpiresAt(t *time.Time) *AuthTokensCreate {
if t != nil {
atc.SetExpiresAt(*t)
func (_c *AuthTokensCreate) SetNillableExpiresAt(v *time.Time) *AuthTokensCreate {
if v != nil {
_c.SetExpiresAt(*v)
}
return atc
return _c
}
// SetID sets the "id" field.
func (atc *AuthTokensCreate) SetID(u uuid.UUID) *AuthTokensCreate {
atc.mutation.SetID(u)
return atc
func (_c *AuthTokensCreate) SetID(v uuid.UUID) *AuthTokensCreate {
_c.mutation.SetID(v)
return _c
}
// SetNillableID sets the "id" field if the given value is not nil.
func (atc *AuthTokensCreate) SetNillableID(u *uuid.UUID) *AuthTokensCreate {
if u != nil {
atc.SetID(*u)
func (_c *AuthTokensCreate) SetNillableID(v *uuid.UUID) *AuthTokensCreate {
if v != nil {
_c.SetID(*v)
}
return atc
return _c
}
// SetUserID sets the "user" edge to the User entity by ID.
func (atc *AuthTokensCreate) SetUserID(id uuid.UUID) *AuthTokensCreate {
atc.mutation.SetUserID(id)
return atc
func (_c *AuthTokensCreate) SetUserID(id uuid.UUID) *AuthTokensCreate {
_c.mutation.SetUserID(id)
return _c
}
// SetNillableUserID sets the "user" edge to the User entity by ID if the given value is not nil.
func (atc *AuthTokensCreate) SetNillableUserID(id *uuid.UUID) *AuthTokensCreate {
func (_c *AuthTokensCreate) SetNillableUserID(id *uuid.UUID) *AuthTokensCreate {
if id != nil {
atc = atc.SetUserID(*id)
_c = _c.SetUserID(*id)
}
return atc
return _c
}
// SetUser sets the "user" edge to the User entity.
func (atc *AuthTokensCreate) SetUser(u *User) *AuthTokensCreate {
return atc.SetUserID(u.ID)
func (_c *AuthTokensCreate) SetUser(v *User) *AuthTokensCreate {
return _c.SetUserID(v.ID)
}
// SetRolesID sets the "roles" edge to the AuthRoles entity by ID.
func (atc *AuthTokensCreate) SetRolesID(id int) *AuthTokensCreate {
atc.mutation.SetRolesID(id)
return atc
func (_c *AuthTokensCreate) SetRolesID(id int) *AuthTokensCreate {
_c.mutation.SetRolesID(id)
return _c
}
// SetNillableRolesID sets the "roles" edge to the AuthRoles entity by ID if the given value is not nil.
func (atc *AuthTokensCreate) SetNillableRolesID(id *int) *AuthTokensCreate {
func (_c *AuthTokensCreate) SetNillableRolesID(id *int) *AuthTokensCreate {
if id != nil {
atc = atc.SetRolesID(*id)
_c = _c.SetRolesID(*id)
}
return atc
return _c
}
// SetRoles sets the "roles" edge to the AuthRoles entity.
func (atc *AuthTokensCreate) SetRoles(a *AuthRoles) *AuthTokensCreate {
return atc.SetRolesID(a.ID)
func (_c *AuthTokensCreate) SetRoles(v *AuthRoles) *AuthTokensCreate {
return _c.SetRolesID(v.ID)
}
// Mutation returns the AuthTokensMutation object of the builder.
func (atc *AuthTokensCreate) Mutation() *AuthTokensMutation {
return atc.mutation
func (_c *AuthTokensCreate) Mutation() *AuthTokensMutation {
return _c.mutation
}
// Save creates the AuthTokens in the database.
func (atc *AuthTokensCreate) Save(ctx context.Context) (*AuthTokens, error) {
atc.defaults()
return withHooks(ctx, atc.sqlSave, atc.mutation, atc.hooks)
func (_c *AuthTokensCreate) Save(ctx context.Context) (*AuthTokens, error) {
_c.defaults()
return withHooks(ctx, _c.sqlSave, _c.mutation, _c.hooks)
}
// SaveX calls Save and panics if Save returns an error.
func (atc *AuthTokensCreate) SaveX(ctx context.Context) *AuthTokens {
v, err := atc.Save(ctx)
func (_c *AuthTokensCreate) SaveX(ctx context.Context) *AuthTokens {
v, err := _c.Save(ctx)
if err != nil {
panic(err)
}
@@ -144,61 +144,61 @@ func (atc *AuthTokensCreate) SaveX(ctx context.Context) *AuthTokens {
}
// Exec executes the query.
func (atc *AuthTokensCreate) Exec(ctx context.Context) error {
_, err := atc.Save(ctx)
func (_c *AuthTokensCreate) Exec(ctx context.Context) error {
_, err := _c.Save(ctx)
return err
}
// ExecX is like Exec, but panics if an error occurs.
func (atc *AuthTokensCreate) ExecX(ctx context.Context) {
if err := atc.Exec(ctx); err != nil {
func (_c *AuthTokensCreate) ExecX(ctx context.Context) {
if err := _c.Exec(ctx); err != nil {
panic(err)
}
}
// defaults sets the default values of the builder before save.
func (atc *AuthTokensCreate) defaults() {
if _, ok := atc.mutation.CreatedAt(); !ok {
func (_c *AuthTokensCreate) defaults() {
if _, ok := _c.mutation.CreatedAt(); !ok {
v := authtokens.DefaultCreatedAt()
atc.mutation.SetCreatedAt(v)
_c.mutation.SetCreatedAt(v)
}
if _, ok := atc.mutation.UpdatedAt(); !ok {
if _, ok := _c.mutation.UpdatedAt(); !ok {
v := authtokens.DefaultUpdatedAt()
atc.mutation.SetUpdatedAt(v)
_c.mutation.SetUpdatedAt(v)
}
if _, ok := atc.mutation.ExpiresAt(); !ok {
if _, ok := _c.mutation.ExpiresAt(); !ok {
v := authtokens.DefaultExpiresAt()
atc.mutation.SetExpiresAt(v)
_c.mutation.SetExpiresAt(v)
}
if _, ok := atc.mutation.ID(); !ok {
if _, ok := _c.mutation.ID(); !ok {
v := authtokens.DefaultID()
atc.mutation.SetID(v)
_c.mutation.SetID(v)
}
}
// check runs all checks and user-defined validators on the builder.
func (atc *AuthTokensCreate) check() error {
if _, ok := atc.mutation.CreatedAt(); !ok {
func (_c *AuthTokensCreate) check() error {
if _, ok := _c.mutation.CreatedAt(); !ok {
return &ValidationError{Name: "created_at", err: errors.New(`ent: missing required field "AuthTokens.created_at"`)}
}
if _, ok := atc.mutation.UpdatedAt(); !ok {
if _, ok := _c.mutation.UpdatedAt(); !ok {
return &ValidationError{Name: "updated_at", err: errors.New(`ent: missing required field "AuthTokens.updated_at"`)}
}
if _, ok := atc.mutation.Token(); !ok {
if _, ok := _c.mutation.Token(); !ok {
return &ValidationError{Name: "token", err: errors.New(`ent: missing required field "AuthTokens.token"`)}
}
if _, ok := atc.mutation.ExpiresAt(); !ok {
if _, ok := _c.mutation.ExpiresAt(); !ok {
return &ValidationError{Name: "expires_at", err: errors.New(`ent: missing required field "AuthTokens.expires_at"`)}
}
return nil
}
func (atc *AuthTokensCreate) sqlSave(ctx context.Context) (*AuthTokens, error) {
if err := atc.check(); err != nil {
func (_c *AuthTokensCreate) sqlSave(ctx context.Context) (*AuthTokens, error) {
if err := _c.check(); err != nil {
return nil, err
}
_node, _spec := atc.createSpec()
if err := sqlgraph.CreateNode(ctx, atc.driver, _spec); err != nil {
_node, _spec := _c.createSpec()
if err := sqlgraph.CreateNode(ctx, _c.driver, _spec); err != nil {
if sqlgraph.IsConstraintError(err) {
err = &ConstraintError{msg: err.Error(), wrap: err}
}
@@ -211,37 +211,37 @@ func (atc *AuthTokensCreate) sqlSave(ctx context.Context) (*AuthTokens, error) {
return nil, err
}
}
atc.mutation.id = &_node.ID
atc.mutation.done = true
_c.mutation.id = &_node.ID
_c.mutation.done = true
return _node, nil
}
func (atc *AuthTokensCreate) createSpec() (*AuthTokens, *sqlgraph.CreateSpec) {
func (_c *AuthTokensCreate) createSpec() (*AuthTokens, *sqlgraph.CreateSpec) {
var (
_node = &AuthTokens{config: atc.config}
_node = &AuthTokens{config: _c.config}
_spec = sqlgraph.NewCreateSpec(authtokens.Table, sqlgraph.NewFieldSpec(authtokens.FieldID, field.TypeUUID))
)
if id, ok := atc.mutation.ID(); ok {
if id, ok := _c.mutation.ID(); ok {
_node.ID = id
_spec.ID.Value = &id
}
if value, ok := atc.mutation.CreatedAt(); ok {
if value, ok := _c.mutation.CreatedAt(); ok {
_spec.SetField(authtokens.FieldCreatedAt, field.TypeTime, value)
_node.CreatedAt = value
}
if value, ok := atc.mutation.UpdatedAt(); ok {
if value, ok := _c.mutation.UpdatedAt(); ok {
_spec.SetField(authtokens.FieldUpdatedAt, field.TypeTime, value)
_node.UpdatedAt = value
}
if value, ok := atc.mutation.Token(); ok {
if value, ok := _c.mutation.Token(); ok {
_spec.SetField(authtokens.FieldToken, field.TypeBytes, value)
_node.Token = value
}
if value, ok := atc.mutation.ExpiresAt(); ok {
if value, ok := _c.mutation.ExpiresAt(); ok {
_spec.SetField(authtokens.FieldExpiresAt, field.TypeTime, value)
_node.ExpiresAt = value
}
if nodes := atc.mutation.UserIDs(); len(nodes) > 0 {
if nodes := _c.mutation.UserIDs(); len(nodes) > 0 {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.M2O,
Inverse: true,
@@ -258,7 +258,7 @@ func (atc *AuthTokensCreate) createSpec() (*AuthTokens, *sqlgraph.CreateSpec) {
_node.user_auth_tokens = &nodes[0]
_spec.Edges = append(_spec.Edges, edge)
}
if nodes := atc.mutation.RolesIDs(); len(nodes) > 0 {
if nodes := _c.mutation.RolesIDs(); len(nodes) > 0 {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.O2O,
Inverse: false,
@@ -285,16 +285,16 @@ type AuthTokensCreateBulk struct {
}
// Save creates the AuthTokens entities in the database.
func (atcb *AuthTokensCreateBulk) Save(ctx context.Context) ([]*AuthTokens, error) {
if atcb.err != nil {
return nil, atcb.err
func (_c *AuthTokensCreateBulk) Save(ctx context.Context) ([]*AuthTokens, error) {
if _c.err != nil {
return nil, _c.err
}
specs := make([]*sqlgraph.CreateSpec, len(atcb.builders))
nodes := make([]*AuthTokens, len(atcb.builders))
mutators := make([]Mutator, len(atcb.builders))
for i := range atcb.builders {
specs := make([]*sqlgraph.CreateSpec, len(_c.builders))
nodes := make([]*AuthTokens, len(_c.builders))
mutators := make([]Mutator, len(_c.builders))
for i := range _c.builders {
func(i int, root context.Context) {
builder := atcb.builders[i]
builder := _c.builders[i]
builder.defaults()
var mut Mutator = MutateFunc(func(ctx context.Context, m Mutation) (Value, error) {
mutation, ok := m.(*AuthTokensMutation)
@@ -308,11 +308,11 @@ func (atcb *AuthTokensCreateBulk) Save(ctx context.Context) ([]*AuthTokens, erro
var err error
nodes[i], specs[i] = builder.createSpec()
if i < len(mutators)-1 {
_, err = mutators[i+1].Mutate(root, atcb.builders[i+1].mutation)
_, err = mutators[i+1].Mutate(root, _c.builders[i+1].mutation)
} else {
spec := &sqlgraph.BatchCreateSpec{Nodes: specs}
// Invoke the actual operation on the latest mutation in the chain.
if err = sqlgraph.BatchCreate(ctx, atcb.driver, spec); err != nil {
if err = sqlgraph.BatchCreate(ctx, _c.driver, spec); err != nil {
if sqlgraph.IsConstraintError(err) {
err = &ConstraintError{msg: err.Error(), wrap: err}
}
@@ -332,7 +332,7 @@ func (atcb *AuthTokensCreateBulk) Save(ctx context.Context) ([]*AuthTokens, erro
}(i, ctx)
}
if len(mutators) > 0 {
if _, err := mutators[0].Mutate(ctx, atcb.builders[0].mutation); err != nil {
if _, err := mutators[0].Mutate(ctx, _c.builders[0].mutation); err != nil {
return nil, err
}
}
@@ -340,8 +340,8 @@ func (atcb *AuthTokensCreateBulk) Save(ctx context.Context) ([]*AuthTokens, erro
}
// SaveX is like Save, but panics if an error occurs.
func (atcb *AuthTokensCreateBulk) SaveX(ctx context.Context) []*AuthTokens {
v, err := atcb.Save(ctx)
func (_c *AuthTokensCreateBulk) SaveX(ctx context.Context) []*AuthTokens {
v, err := _c.Save(ctx)
if err != nil {
panic(err)
}
@@ -349,14 +349,14 @@ func (atcb *AuthTokensCreateBulk) SaveX(ctx context.Context) []*AuthTokens {
}
// Exec executes the query.
func (atcb *AuthTokensCreateBulk) Exec(ctx context.Context) error {
_, err := atcb.Save(ctx)
func (_c *AuthTokensCreateBulk) Exec(ctx context.Context) error {
_, err := _c.Save(ctx)
return err
}
// ExecX is like Exec, but panics if an error occurs.
func (atcb *AuthTokensCreateBulk) ExecX(ctx context.Context) {
if err := atcb.Exec(ctx); err != nil {
func (_c *AuthTokensCreateBulk) ExecX(ctx context.Context) {
if err := _c.Exec(ctx); err != nil {
panic(err)
}
}

View File
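
Next is the generated AuthTokensDelete builder (receivers atd and atdo become _d). A short sketch of the delete path, assuming the standard generated field predicates in the authtokens package and the same placeholder import paths:

// Sketch only: placeholder import paths for the generated packages.
package sketch

import (
	"context"
	"time"

	"example.com/app/ent"            // placeholder path
	"example.com/app/ent/authtokens" // placeholder path
)

func purgeExpiredTokens(ctx context.Context, client *ent.Client) (int, error) {
	// Exec reports how many rows were deleted, mirroring sqlExec below.
	return client.AuthTokens.
		Delete().
		Where(authtokens.ExpiresAtLT(time.Now())).
		Exec(ctx)
}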

@@ -20,56 +20,56 @@ type AuthTokensDelete struct {
}
// Where appends a list predicates to the AuthTokensDelete builder.
func (atd *AuthTokensDelete) Where(ps ...predicate.AuthTokens) *AuthTokensDelete {
atd.mutation.Where(ps...)
return atd
func (_d *AuthTokensDelete) Where(ps ...predicate.AuthTokens) *AuthTokensDelete {
_d.mutation.Where(ps...)
return _d
}
// Exec executes the deletion query and returns how many vertices were deleted.
func (atd *AuthTokensDelete) Exec(ctx context.Context) (int, error) {
return withHooks(ctx, atd.sqlExec, atd.mutation, atd.hooks)
func (_d *AuthTokensDelete) Exec(ctx context.Context) (int, error) {
return withHooks(ctx, _d.sqlExec, _d.mutation, _d.hooks)
}
// ExecX is like Exec, but panics if an error occurs.
func (atd *AuthTokensDelete) ExecX(ctx context.Context) int {
n, err := atd.Exec(ctx)
func (_d *AuthTokensDelete) ExecX(ctx context.Context) int {
n, err := _d.Exec(ctx)
if err != nil {
panic(err)
}
return n
}
func (atd *AuthTokensDelete) sqlExec(ctx context.Context) (int, error) {
func (_d *AuthTokensDelete) sqlExec(ctx context.Context) (int, error) {
_spec := sqlgraph.NewDeleteSpec(authtokens.Table, sqlgraph.NewFieldSpec(authtokens.FieldID, field.TypeUUID))
if ps := atd.mutation.predicates; len(ps) > 0 {
if ps := _d.mutation.predicates; len(ps) > 0 {
_spec.Predicate = func(selector *sql.Selector) {
for i := range ps {
ps[i](selector)
}
}
}
affected, err := sqlgraph.DeleteNodes(ctx, atd.driver, _spec)
affected, err := sqlgraph.DeleteNodes(ctx, _d.driver, _spec)
if err != nil && sqlgraph.IsConstraintError(err) {
err = &ConstraintError{msg: err.Error(), wrap: err}
}
atd.mutation.done = true
_d.mutation.done = true
return affected, err
}
// AuthTokensDeleteOne is the builder for deleting a single AuthTokens entity.
type AuthTokensDeleteOne struct {
atd *AuthTokensDelete
_d *AuthTokensDelete
}
// Where appends a list predicates to the AuthTokensDelete builder.
func (atdo *AuthTokensDeleteOne) Where(ps ...predicate.AuthTokens) *AuthTokensDeleteOne {
atdo.atd.mutation.Where(ps...)
return atdo
func (_d *AuthTokensDeleteOne) Where(ps ...predicate.AuthTokens) *AuthTokensDeleteOne {
_d._d.mutation.Where(ps...)
return _d
}
// Exec executes the deletion query.
func (atdo *AuthTokensDeleteOne) Exec(ctx context.Context) error {
n, err := atdo.atd.Exec(ctx)
func (_d *AuthTokensDeleteOne) Exec(ctx context.Context) error {
n, err := _d._d.Exec(ctx)
switch {
case err != nil:
return err
@@ -81,8 +81,8 @@ func (atdo *AuthTokensDeleteOne) Exec(ctx context.Context) error {
}
// ExecX is like Exec, but panics if an error occurs.
func (atdo *AuthTokensDeleteOne) ExecX(ctx context.Context) {
if err := atdo.Exec(ctx); err != nil {
func (_d *AuthTokensDeleteOne) ExecX(ctx context.Context) {
if err := _d.Exec(ctx); err != nil {
panic(err)
}
}

View File
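
The largest file in this group is the generated AuthTokensQuery builder (receiver atq becomes _q). A small query sketch tying the renamed pieces together: Where feeds the predicates consumed by querySpec, WithUser and WithRoles drive the loadUser/loadRoles eager-loading below, and Only returns a single row or a NotFound/NotSingular error. The import paths and the token predicate are the usual generated helpers and are assumptions here:

// Sketch only: placeholder import paths for the generated packages.
package sketch

import (
	"context"

	"example.com/app/ent"            // placeholder path
	"example.com/app/ent/authtokens" // placeholder path
)

func lookupToken(ctx context.Context, client *ent.Client, raw []byte) (*ent.AuthTokens, error) {
	// Eager-load both edges so tok.Edges.User and tok.Edges.Roles are populated.
	return client.AuthTokens.
		Query().
		Where(authtokens.Token(raw)).
		WithUser().
		WithRoles().
		Only(ctx)
}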

@@ -35,44 +35,44 @@ type AuthTokensQuery struct {
}
// Where adds a new predicate for the AuthTokensQuery builder.
func (atq *AuthTokensQuery) Where(ps ...predicate.AuthTokens) *AuthTokensQuery {
atq.predicates = append(atq.predicates, ps...)
return atq
func (_q *AuthTokensQuery) Where(ps ...predicate.AuthTokens) *AuthTokensQuery {
_q.predicates = append(_q.predicates, ps...)
return _q
}
// Limit the number of records to be returned by this query.
func (atq *AuthTokensQuery) Limit(limit int) *AuthTokensQuery {
atq.ctx.Limit = &limit
return atq
func (_q *AuthTokensQuery) Limit(limit int) *AuthTokensQuery {
_q.ctx.Limit = &limit
return _q
}
// Offset to start from.
func (atq *AuthTokensQuery) Offset(offset int) *AuthTokensQuery {
atq.ctx.Offset = &offset
return atq
func (_q *AuthTokensQuery) Offset(offset int) *AuthTokensQuery {
_q.ctx.Offset = &offset
return _q
}
// Unique configures the query builder to filter duplicate records on query.
// By default, unique is set to true, and can be disabled using this method.
func (atq *AuthTokensQuery) Unique(unique bool) *AuthTokensQuery {
atq.ctx.Unique = &unique
return atq
func (_q *AuthTokensQuery) Unique(unique bool) *AuthTokensQuery {
_q.ctx.Unique = &unique
return _q
}
// Order specifies how the records should be ordered.
func (atq *AuthTokensQuery) Order(o ...authtokens.OrderOption) *AuthTokensQuery {
atq.order = append(atq.order, o...)
return atq
func (_q *AuthTokensQuery) Order(o ...authtokens.OrderOption) *AuthTokensQuery {
_q.order = append(_q.order, o...)
return _q
}
// QueryUser chains the current query on the "user" edge.
func (atq *AuthTokensQuery) QueryUser() *UserQuery {
query := (&UserClient{config: atq.config}).Query()
func (_q *AuthTokensQuery) QueryUser() *UserQuery {
query := (&UserClient{config: _q.config}).Query()
query.path = func(ctx context.Context) (fromU *sql.Selector, err error) {
if err := atq.prepareQuery(ctx); err != nil {
if err := _q.prepareQuery(ctx); err != nil {
return nil, err
}
selector := atq.sqlQuery(ctx)
selector := _q.sqlQuery(ctx)
if err := selector.Err(); err != nil {
return nil, err
}
@@ -81,20 +81,20 @@ func (atq *AuthTokensQuery) QueryUser() *UserQuery {
sqlgraph.To(user.Table, user.FieldID),
sqlgraph.Edge(sqlgraph.M2O, true, authtokens.UserTable, authtokens.UserColumn),
)
fromU = sqlgraph.SetNeighbors(atq.driver.Dialect(), step)
fromU = sqlgraph.SetNeighbors(_q.driver.Dialect(), step)
return fromU, nil
}
return query
}
// QueryRoles chains the current query on the "roles" edge.
func (atq *AuthTokensQuery) QueryRoles() *AuthRolesQuery {
query := (&AuthRolesClient{config: atq.config}).Query()
func (_q *AuthTokensQuery) QueryRoles() *AuthRolesQuery {
query := (&AuthRolesClient{config: _q.config}).Query()
query.path = func(ctx context.Context) (fromU *sql.Selector, err error) {
if err := atq.prepareQuery(ctx); err != nil {
if err := _q.prepareQuery(ctx); err != nil {
return nil, err
}
selector := atq.sqlQuery(ctx)
selector := _q.sqlQuery(ctx)
if err := selector.Err(); err != nil {
return nil, err
}
@@ -103,7 +103,7 @@ func (atq *AuthTokensQuery) QueryRoles() *AuthRolesQuery {
sqlgraph.To(authroles.Table, authroles.FieldID),
sqlgraph.Edge(sqlgraph.O2O, false, authtokens.RolesTable, authtokens.RolesColumn),
)
fromU = sqlgraph.SetNeighbors(atq.driver.Dialect(), step)
fromU = sqlgraph.SetNeighbors(_q.driver.Dialect(), step)
return fromU, nil
}
return query
@@ -111,8 +111,8 @@ func (atq *AuthTokensQuery) QueryRoles() *AuthRolesQuery {
// First returns the first AuthTokens entity from the query.
// Returns a *NotFoundError when no AuthTokens was found.
func (atq *AuthTokensQuery) First(ctx context.Context) (*AuthTokens, error) {
nodes, err := atq.Limit(1).All(setContextOp(ctx, atq.ctx, ent.OpQueryFirst))
func (_q *AuthTokensQuery) First(ctx context.Context) (*AuthTokens, error) {
nodes, err := _q.Limit(1).All(setContextOp(ctx, _q.ctx, ent.OpQueryFirst))
if err != nil {
return nil, err
}
@@ -123,8 +123,8 @@ func (atq *AuthTokensQuery) First(ctx context.Context) (*AuthTokens, error) {
}
// FirstX is like First, but panics if an error occurs.
func (atq *AuthTokensQuery) FirstX(ctx context.Context) *AuthTokens {
node, err := atq.First(ctx)
func (_q *AuthTokensQuery) FirstX(ctx context.Context) *AuthTokens {
node, err := _q.First(ctx)
if err != nil && !IsNotFound(err) {
panic(err)
}
@@ -133,9 +133,9 @@ func (atq *AuthTokensQuery) FirstX(ctx context.Context) *AuthTokens {
// FirstID returns the first AuthTokens ID from the query.
// Returns a *NotFoundError when no AuthTokens ID was found.
func (atq *AuthTokensQuery) FirstID(ctx context.Context) (id uuid.UUID, err error) {
func (_q *AuthTokensQuery) FirstID(ctx context.Context) (id uuid.UUID, err error) {
var ids []uuid.UUID
if ids, err = atq.Limit(1).IDs(setContextOp(ctx, atq.ctx, ent.OpQueryFirstID)); err != nil {
if ids, err = _q.Limit(1).IDs(setContextOp(ctx, _q.ctx, ent.OpQueryFirstID)); err != nil {
return
}
if len(ids) == 0 {
@@ -146,8 +146,8 @@ func (atq *AuthTokensQuery) FirstID(ctx context.Context) (id uuid.UUID, err erro
}
// FirstIDX is like FirstID, but panics if an error occurs.
func (atq *AuthTokensQuery) FirstIDX(ctx context.Context) uuid.UUID {
id, err := atq.FirstID(ctx)
func (_q *AuthTokensQuery) FirstIDX(ctx context.Context) uuid.UUID {
id, err := _q.FirstID(ctx)
if err != nil && !IsNotFound(err) {
panic(err)
}
@@ -157,8 +157,8 @@ func (atq *AuthTokensQuery) FirstIDX(ctx context.Context) uuid.UUID {
// Only returns a single AuthTokens entity found by the query, ensuring it only returns one.
// Returns a *NotSingularError when more than one AuthTokens entity is found.
// Returns a *NotFoundError when no AuthTokens entities are found.
func (atq *AuthTokensQuery) Only(ctx context.Context) (*AuthTokens, error) {
nodes, err := atq.Limit(2).All(setContextOp(ctx, atq.ctx, ent.OpQueryOnly))
func (_q *AuthTokensQuery) Only(ctx context.Context) (*AuthTokens, error) {
nodes, err := _q.Limit(2).All(setContextOp(ctx, _q.ctx, ent.OpQueryOnly))
if err != nil {
return nil, err
}
@@ -173,8 +173,8 @@ func (atq *AuthTokensQuery) Only(ctx context.Context) (*AuthTokens, error) {
}
// OnlyX is like Only, but panics if an error occurs.
func (atq *AuthTokensQuery) OnlyX(ctx context.Context) *AuthTokens {
node, err := atq.Only(ctx)
func (_q *AuthTokensQuery) OnlyX(ctx context.Context) *AuthTokens {
node, err := _q.Only(ctx)
if err != nil {
panic(err)
}
@@ -184,9 +184,9 @@ func (atq *AuthTokensQuery) OnlyX(ctx context.Context) *AuthTokens {
// OnlyID is like Only, but returns the only AuthTokens ID in the query.
// Returns a *NotSingularError when more than one AuthTokens ID is found.
// Returns a *NotFoundError when no entities are found.
func (atq *AuthTokensQuery) OnlyID(ctx context.Context) (id uuid.UUID, err error) {
func (_q *AuthTokensQuery) OnlyID(ctx context.Context) (id uuid.UUID, err error) {
var ids []uuid.UUID
if ids, err = atq.Limit(2).IDs(setContextOp(ctx, atq.ctx, ent.OpQueryOnlyID)); err != nil {
if ids, err = _q.Limit(2).IDs(setContextOp(ctx, _q.ctx, ent.OpQueryOnlyID)); err != nil {
return
}
switch len(ids) {
@@ -201,8 +201,8 @@ func (atq *AuthTokensQuery) OnlyID(ctx context.Context) (id uuid.UUID, err error
}
// OnlyIDX is like OnlyID, but panics if an error occurs.
func (atq *AuthTokensQuery) OnlyIDX(ctx context.Context) uuid.UUID {
id, err := atq.OnlyID(ctx)
func (_q *AuthTokensQuery) OnlyIDX(ctx context.Context) uuid.UUID {
id, err := _q.OnlyID(ctx)
if err != nil {
panic(err)
}
@@ -210,18 +210,18 @@ func (atq *AuthTokensQuery) OnlyIDX(ctx context.Context) uuid.UUID {
}
// All executes the query and returns a list of AuthTokensSlice.
func (atq *AuthTokensQuery) All(ctx context.Context) ([]*AuthTokens, error) {
ctx = setContextOp(ctx, atq.ctx, ent.OpQueryAll)
if err := atq.prepareQuery(ctx); err != nil {
func (_q *AuthTokensQuery) All(ctx context.Context) ([]*AuthTokens, error) {
ctx = setContextOp(ctx, _q.ctx, ent.OpQueryAll)
if err := _q.prepareQuery(ctx); err != nil {
return nil, err
}
qr := querierAll[[]*AuthTokens, *AuthTokensQuery]()
return withInterceptors[[]*AuthTokens](ctx, atq, qr, atq.inters)
return withInterceptors[[]*AuthTokens](ctx, _q, qr, _q.inters)
}
// AllX is like All, but panics if an error occurs.
func (atq *AuthTokensQuery) AllX(ctx context.Context) []*AuthTokens {
nodes, err := atq.All(ctx)
func (_q *AuthTokensQuery) AllX(ctx context.Context) []*AuthTokens {
nodes, err := _q.All(ctx)
if err != nil {
panic(err)
}
@@ -229,20 +229,20 @@ func (atq *AuthTokensQuery) AllX(ctx context.Context) []*AuthTokens {
}
// IDs executes the query and returns a list of AuthTokens IDs.
func (atq *AuthTokensQuery) IDs(ctx context.Context) (ids []uuid.UUID, err error) {
if atq.ctx.Unique == nil && atq.path != nil {
atq.Unique(true)
func (_q *AuthTokensQuery) IDs(ctx context.Context) (ids []uuid.UUID, err error) {
if _q.ctx.Unique == nil && _q.path != nil {
_q.Unique(true)
}
ctx = setContextOp(ctx, atq.ctx, ent.OpQueryIDs)
if err = atq.Select(authtokens.FieldID).Scan(ctx, &ids); err != nil {
ctx = setContextOp(ctx, _q.ctx, ent.OpQueryIDs)
if err = _q.Select(authtokens.FieldID).Scan(ctx, &ids); err != nil {
return nil, err
}
return ids, nil
}
// IDsX is like IDs, but panics if an error occurs.
func (atq *AuthTokensQuery) IDsX(ctx context.Context) []uuid.UUID {
ids, err := atq.IDs(ctx)
func (_q *AuthTokensQuery) IDsX(ctx context.Context) []uuid.UUID {
ids, err := _q.IDs(ctx)
if err != nil {
panic(err)
}
@@ -250,17 +250,17 @@ func (atq *AuthTokensQuery) IDsX(ctx context.Context) []uuid.UUID {
}
// Count returns the count of the given query.
func (atq *AuthTokensQuery) Count(ctx context.Context) (int, error) {
ctx = setContextOp(ctx, atq.ctx, ent.OpQueryCount)
if err := atq.prepareQuery(ctx); err != nil {
func (_q *AuthTokensQuery) Count(ctx context.Context) (int, error) {
ctx = setContextOp(ctx, _q.ctx, ent.OpQueryCount)
if err := _q.prepareQuery(ctx); err != nil {
return 0, err
}
return withInterceptors[int](ctx, atq, querierCount[*AuthTokensQuery](), atq.inters)
return withInterceptors[int](ctx, _q, querierCount[*AuthTokensQuery](), _q.inters)
}
// CountX is like Count, but panics if an error occurs.
func (atq *AuthTokensQuery) CountX(ctx context.Context) int {
count, err := atq.Count(ctx)
func (_q *AuthTokensQuery) CountX(ctx context.Context) int {
count, err := _q.Count(ctx)
if err != nil {
panic(err)
}
@@ -268,9 +268,9 @@ func (atq *AuthTokensQuery) CountX(ctx context.Context) int {
}
// Exist returns true if the query has elements in the graph.
func (atq *AuthTokensQuery) Exist(ctx context.Context) (bool, error) {
ctx = setContextOp(ctx, atq.ctx, ent.OpQueryExist)
switch _, err := atq.FirstID(ctx); {
func (_q *AuthTokensQuery) Exist(ctx context.Context) (bool, error) {
ctx = setContextOp(ctx, _q.ctx, ent.OpQueryExist)
switch _, err := _q.FirstID(ctx); {
case IsNotFound(err):
return false, nil
case err != nil:
@@ -281,8 +281,8 @@ func (atq *AuthTokensQuery) Exist(ctx context.Context) (bool, error) {
}
// ExistX is like Exist, but panics if an error occurs.
func (atq *AuthTokensQuery) ExistX(ctx context.Context) bool {
exist, err := atq.Exist(ctx)
func (_q *AuthTokensQuery) ExistX(ctx context.Context) bool {
exist, err := _q.Exist(ctx)
if err != nil {
panic(err)
}
@@ -291,44 +291,44 @@ func (atq *AuthTokensQuery) ExistX(ctx context.Context) bool {
// Clone returns a duplicate of the AuthTokensQuery builder, including all associated steps. It can be
// used to prepare common query builders and use them differently after the clone is made.
func (atq *AuthTokensQuery) Clone() *AuthTokensQuery {
if atq == nil {
func (_q *AuthTokensQuery) Clone() *AuthTokensQuery {
if _q == nil {
return nil
}
return &AuthTokensQuery{
config: atq.config,
ctx: atq.ctx.Clone(),
order: append([]authtokens.OrderOption{}, atq.order...),
inters: append([]Interceptor{}, atq.inters...),
predicates: append([]predicate.AuthTokens{}, atq.predicates...),
withUser: atq.withUser.Clone(),
withRoles: atq.withRoles.Clone(),
config: _q.config,
ctx: _q.ctx.Clone(),
order: append([]authtokens.OrderOption{}, _q.order...),
inters: append([]Interceptor{}, _q.inters...),
predicates: append([]predicate.AuthTokens{}, _q.predicates...),
withUser: _q.withUser.Clone(),
withRoles: _q.withRoles.Clone(),
// clone intermediate query.
sql: atq.sql.Clone(),
path: atq.path,
sql: _q.sql.Clone(),
path: _q.path,
}
}
// WithUser tells the query-builder to eager-load the nodes that are connected to
// the "user" edge. The optional arguments are used to configure the query builder of the edge.
func (atq *AuthTokensQuery) WithUser(opts ...func(*UserQuery)) *AuthTokensQuery {
query := (&UserClient{config: atq.config}).Query()
func (_q *AuthTokensQuery) WithUser(opts ...func(*UserQuery)) *AuthTokensQuery {
query := (&UserClient{config: _q.config}).Query()
for _, opt := range opts {
opt(query)
}
atq.withUser = query
return atq
_q.withUser = query
return _q
}
// WithRoles tells the query-builder to eager-load the nodes that are connected to
// the "roles" edge. The optional arguments are used to configure the query builder of the edge.
func (atq *AuthTokensQuery) WithRoles(opts ...func(*AuthRolesQuery)) *AuthTokensQuery {
query := (&AuthRolesClient{config: atq.config}).Query()
func (_q *AuthTokensQuery) WithRoles(opts ...func(*AuthRolesQuery)) *AuthTokensQuery {
query := (&AuthRolesClient{config: _q.config}).Query()
for _, opt := range opts {
opt(query)
}
atq.withRoles = query
return atq
_q.withRoles = query
return _q
}
// GroupBy is used to group vertices by one or more fields/columns.
@@ -345,10 +345,10 @@ func (atq *AuthTokensQuery) WithRoles(opts ...func(*AuthRolesQuery)) *AuthTokens
// GroupBy(authtokens.FieldCreatedAt).
// Aggregate(ent.Count()).
// Scan(ctx, &v)
func (atq *AuthTokensQuery) GroupBy(field string, fields ...string) *AuthTokensGroupBy {
atq.ctx.Fields = append([]string{field}, fields...)
grbuild := &AuthTokensGroupBy{build: atq}
grbuild.flds = &atq.ctx.Fields
func (_q *AuthTokensQuery) GroupBy(field string, fields ...string) *AuthTokensGroupBy {
_q.ctx.Fields = append([]string{field}, fields...)
grbuild := &AuthTokensGroupBy{build: _q}
grbuild.flds = &_q.ctx.Fields
grbuild.label = authtokens.Label
grbuild.scan = grbuild.Scan
return grbuild
@@ -366,56 +366,56 @@ func (atq *AuthTokensQuery) GroupBy(field string, fields ...string) *AuthTokensG
// client.AuthTokens.Query().
// Select(authtokens.FieldCreatedAt).
// Scan(ctx, &v)
func (atq *AuthTokensQuery) Select(fields ...string) *AuthTokensSelect {
atq.ctx.Fields = append(atq.ctx.Fields, fields...)
sbuild := &AuthTokensSelect{AuthTokensQuery: atq}
func (_q *AuthTokensQuery) Select(fields ...string) *AuthTokensSelect {
_q.ctx.Fields = append(_q.ctx.Fields, fields...)
sbuild := &AuthTokensSelect{AuthTokensQuery: _q}
sbuild.label = authtokens.Label
sbuild.flds, sbuild.scan = &atq.ctx.Fields, sbuild.Scan
sbuild.flds, sbuild.scan = &_q.ctx.Fields, sbuild.Scan
return sbuild
}
// Aggregate returns a AuthTokensSelect configured with the given aggregations.
func (atq *AuthTokensQuery) Aggregate(fns ...AggregateFunc) *AuthTokensSelect {
return atq.Select().Aggregate(fns...)
func (_q *AuthTokensQuery) Aggregate(fns ...AggregateFunc) *AuthTokensSelect {
return _q.Select().Aggregate(fns...)
}
func (atq *AuthTokensQuery) prepareQuery(ctx context.Context) error {
for _, inter := range atq.inters {
func (_q *AuthTokensQuery) prepareQuery(ctx context.Context) error {
for _, inter := range _q.inters {
if inter == nil {
return fmt.Errorf("ent: uninitialized interceptor (forgotten import ent/runtime?)")
}
if trv, ok := inter.(Traverser); ok {
if err := trv.Traverse(ctx, atq); err != nil {
if err := trv.Traverse(ctx, _q); err != nil {
return err
}
}
}
for _, f := range atq.ctx.Fields {
for _, f := range _q.ctx.Fields {
if !authtokens.ValidColumn(f) {
return &ValidationError{Name: f, err: fmt.Errorf("ent: invalid field %q for query", f)}
}
}
if atq.path != nil {
prev, err := atq.path(ctx)
if _q.path != nil {
prev, err := _q.path(ctx)
if err != nil {
return err
}
atq.sql = prev
_q.sql = prev
}
return nil
}
func (atq *AuthTokensQuery) sqlAll(ctx context.Context, hooks ...queryHook) ([]*AuthTokens, error) {
func (_q *AuthTokensQuery) sqlAll(ctx context.Context, hooks ...queryHook) ([]*AuthTokens, error) {
var (
nodes = []*AuthTokens{}
withFKs = atq.withFKs
_spec = atq.querySpec()
withFKs = _q.withFKs
_spec = _q.querySpec()
loadedTypes = [2]bool{
atq.withUser != nil,
atq.withRoles != nil,
_q.withUser != nil,
_q.withRoles != nil,
}
)
if atq.withUser != nil {
if _q.withUser != nil {
withFKs = true
}
if withFKs {
@@ -425,7 +425,7 @@ func (atq *AuthTokensQuery) sqlAll(ctx context.Context, hooks ...queryHook) ([]*
return (*AuthTokens).scanValues(nil, columns)
}
_spec.Assign = func(columns []string, values []any) error {
node := &AuthTokens{config: atq.config}
node := &AuthTokens{config: _q.config}
nodes = append(nodes, node)
node.Edges.loadedTypes = loadedTypes
return node.assignValues(columns, values)
@@ -433,20 +433,20 @@ func (atq *AuthTokensQuery) sqlAll(ctx context.Context, hooks ...queryHook) ([]*
for i := range hooks {
hooks[i](ctx, _spec)
}
if err := sqlgraph.QueryNodes(ctx, atq.driver, _spec); err != nil {
if err := sqlgraph.QueryNodes(ctx, _q.driver, _spec); err != nil {
return nil, err
}
if len(nodes) == 0 {
return nodes, nil
}
if query := atq.withUser; query != nil {
if err := atq.loadUser(ctx, query, nodes, nil,
if query := _q.withUser; query != nil {
if err := _q.loadUser(ctx, query, nodes, nil,
func(n *AuthTokens, e *User) { n.Edges.User = e }); err != nil {
return nil, err
}
}
if query := atq.withRoles; query != nil {
if err := atq.loadRoles(ctx, query, nodes, nil,
if query := _q.withRoles; query != nil {
if err := _q.loadRoles(ctx, query, nodes, nil,
func(n *AuthTokens, e *AuthRoles) { n.Edges.Roles = e }); err != nil {
return nil, err
}
@@ -454,7 +454,7 @@ func (atq *AuthTokensQuery) sqlAll(ctx context.Context, hooks ...queryHook) ([]*
return nodes, nil
}
func (atq *AuthTokensQuery) loadUser(ctx context.Context, query *UserQuery, nodes []*AuthTokens, init func(*AuthTokens), assign func(*AuthTokens, *User)) error {
func (_q *AuthTokensQuery) loadUser(ctx context.Context, query *UserQuery, nodes []*AuthTokens, init func(*AuthTokens), assign func(*AuthTokens, *User)) error {
ids := make([]uuid.UUID, 0, len(nodes))
nodeids := make(map[uuid.UUID][]*AuthTokens)
for i := range nodes {
@@ -486,7 +486,7 @@ func (atq *AuthTokensQuery) loadUser(ctx context.Context, query *UserQuery, node
}
return nil
}
func (atq *AuthTokensQuery) loadRoles(ctx context.Context, query *AuthRolesQuery, nodes []*AuthTokens, init func(*AuthTokens), assign func(*AuthTokens, *AuthRoles)) error {
func (_q *AuthTokensQuery) loadRoles(ctx context.Context, query *AuthRolesQuery, nodes []*AuthTokens, init func(*AuthTokens), assign func(*AuthTokens, *AuthRoles)) error {
fks := make([]driver.Value, 0, len(nodes))
nodeids := make(map[uuid.UUID]*AuthTokens)
for i := range nodes {
@@ -515,24 +515,24 @@ func (atq *AuthTokensQuery) loadRoles(ctx context.Context, query *AuthRolesQuery
return nil
}
func (atq *AuthTokensQuery) sqlCount(ctx context.Context) (int, error) {
_spec := atq.querySpec()
_spec.Node.Columns = atq.ctx.Fields
if len(atq.ctx.Fields) > 0 {
_spec.Unique = atq.ctx.Unique != nil && *atq.ctx.Unique
func (_q *AuthTokensQuery) sqlCount(ctx context.Context) (int, error) {
_spec := _q.querySpec()
_spec.Node.Columns = _q.ctx.Fields
if len(_q.ctx.Fields) > 0 {
_spec.Unique = _q.ctx.Unique != nil && *_q.ctx.Unique
}
return sqlgraph.CountNodes(ctx, atq.driver, _spec)
return sqlgraph.CountNodes(ctx, _q.driver, _spec)
}
func (atq *AuthTokensQuery) querySpec() *sqlgraph.QuerySpec {
func (_q *AuthTokensQuery) querySpec() *sqlgraph.QuerySpec {
_spec := sqlgraph.NewQuerySpec(authtokens.Table, authtokens.Columns, sqlgraph.NewFieldSpec(authtokens.FieldID, field.TypeUUID))
_spec.From = atq.sql
if unique := atq.ctx.Unique; unique != nil {
_spec.From = _q.sql
if unique := _q.ctx.Unique; unique != nil {
_spec.Unique = *unique
} else if atq.path != nil {
} else if _q.path != nil {
_spec.Unique = true
}
if fields := atq.ctx.Fields; len(fields) > 0 {
if fields := _q.ctx.Fields; len(fields) > 0 {
_spec.Node.Columns = make([]string, 0, len(fields))
_spec.Node.Columns = append(_spec.Node.Columns, authtokens.FieldID)
for i := range fields {
@@ -541,20 +541,20 @@ func (atq *AuthTokensQuery) querySpec() *sqlgraph.QuerySpec {
}
}
}
if ps := atq.predicates; len(ps) > 0 {
if ps := _q.predicates; len(ps) > 0 {
_spec.Predicate = func(selector *sql.Selector) {
for i := range ps {
ps[i](selector)
}
}
}
if limit := atq.ctx.Limit; limit != nil {
if limit := _q.ctx.Limit; limit != nil {
_spec.Limit = *limit
}
if offset := atq.ctx.Offset; offset != nil {
if offset := _q.ctx.Offset; offset != nil {
_spec.Offset = *offset
}
if ps := atq.order; len(ps) > 0 {
if ps := _q.order; len(ps) > 0 {
_spec.Order = func(selector *sql.Selector) {
for i := range ps {
ps[i](selector)
@@ -564,33 +564,33 @@ func (atq *AuthTokensQuery) querySpec() *sqlgraph.QuerySpec {
return _spec
}
func (atq *AuthTokensQuery) sqlQuery(ctx context.Context) *sql.Selector {
builder := sql.Dialect(atq.driver.Dialect())
func (_q *AuthTokensQuery) sqlQuery(ctx context.Context) *sql.Selector {
builder := sql.Dialect(_q.driver.Dialect())
t1 := builder.Table(authtokens.Table)
columns := atq.ctx.Fields
columns := _q.ctx.Fields
if len(columns) == 0 {
columns = authtokens.Columns
}
selector := builder.Select(t1.Columns(columns...)...).From(t1)
if atq.sql != nil {
selector = atq.sql
if _q.sql != nil {
selector = _q.sql
selector.Select(selector.Columns(columns...)...)
}
if atq.ctx.Unique != nil && *atq.ctx.Unique {
if _q.ctx.Unique != nil && *_q.ctx.Unique {
selector.Distinct()
}
for _, p := range atq.predicates {
for _, p := range _q.predicates {
p(selector)
}
for _, p := range atq.order {
for _, p := range _q.order {
p(selector)
}
if offset := atq.ctx.Offset; offset != nil {
if offset := _q.ctx.Offset; offset != nil {
// limit is mandatory for offset clause. We start
// with default value, and override it below if needed.
selector.Offset(*offset).Limit(math.MaxInt32)
}
if limit := atq.ctx.Limit; limit != nil {
if limit := _q.ctx.Limit; limit != nil {
selector.Limit(*limit)
}
return selector
@@ -603,41 +603,41 @@ type AuthTokensGroupBy struct {
}
// Aggregate adds the given aggregation functions to the group-by query.
func (atgb *AuthTokensGroupBy) Aggregate(fns ...AggregateFunc) *AuthTokensGroupBy {
atgb.fns = append(atgb.fns, fns...)
return atgb
func (_g *AuthTokensGroupBy) Aggregate(fns ...AggregateFunc) *AuthTokensGroupBy {
_g.fns = append(_g.fns, fns...)
return _g
}
// Scan applies the selector query and scans the result into the given value.
func (atgb *AuthTokensGroupBy) Scan(ctx context.Context, v any) error {
ctx = setContextOp(ctx, atgb.build.ctx, ent.OpQueryGroupBy)
if err := atgb.build.prepareQuery(ctx); err != nil {
func (_g *AuthTokensGroupBy) Scan(ctx context.Context, v any) error {
ctx = setContextOp(ctx, _g.build.ctx, ent.OpQueryGroupBy)
if err := _g.build.prepareQuery(ctx); err != nil {
return err
}
return scanWithInterceptors[*AuthTokensQuery, *AuthTokensGroupBy](ctx, atgb.build, atgb, atgb.build.inters, v)
return scanWithInterceptors[*AuthTokensQuery, *AuthTokensGroupBy](ctx, _g.build, _g, _g.build.inters, v)
}
func (atgb *AuthTokensGroupBy) sqlScan(ctx context.Context, root *AuthTokensQuery, v any) error {
func (_g *AuthTokensGroupBy) sqlScan(ctx context.Context, root *AuthTokensQuery, v any) error {
selector := root.sqlQuery(ctx).Select()
aggregation := make([]string, 0, len(atgb.fns))
for _, fn := range atgb.fns {
aggregation := make([]string, 0, len(_g.fns))
for _, fn := range _g.fns {
aggregation = append(aggregation, fn(selector))
}
if len(selector.SelectedColumns()) == 0 {
columns := make([]string, 0, len(*atgb.flds)+len(atgb.fns))
for _, f := range *atgb.flds {
columns := make([]string, 0, len(*_g.flds)+len(_g.fns))
for _, f := range *_g.flds {
columns = append(columns, selector.C(f))
}
columns = append(columns, aggregation...)
selector.Select(columns...)
}
selector.GroupBy(selector.Columns(*atgb.flds...)...)
selector.GroupBy(selector.Columns(*_g.flds...)...)
if err := selector.Err(); err != nil {
return err
}
rows := &sql.Rows{}
query, args := selector.Query()
if err := atgb.build.driver.Query(ctx, query, args, rows); err != nil {
if err := _g.build.driver.Query(ctx, query, args, rows); err != nil {
return err
}
defer rows.Close()
@@ -651,27 +651,27 @@ type AuthTokensSelect struct {
}
// Aggregate adds the given aggregation functions to the selector query.
func (ats *AuthTokensSelect) Aggregate(fns ...AggregateFunc) *AuthTokensSelect {
ats.fns = append(ats.fns, fns...)
return ats
func (_s *AuthTokensSelect) Aggregate(fns ...AggregateFunc) *AuthTokensSelect {
_s.fns = append(_s.fns, fns...)
return _s
}
// Scan applies the selector query and scans the result into the given value.
func (ats *AuthTokensSelect) Scan(ctx context.Context, v any) error {
ctx = setContextOp(ctx, ats.ctx, ent.OpQuerySelect)
if err := ats.prepareQuery(ctx); err != nil {
func (_s *AuthTokensSelect) Scan(ctx context.Context, v any) error {
ctx = setContextOp(ctx, _s.ctx, ent.OpQuerySelect)
if err := _s.prepareQuery(ctx); err != nil {
return err
}
return scanWithInterceptors[*AuthTokensQuery, *AuthTokensSelect](ctx, ats.AuthTokensQuery, ats, ats.inters, v)
return scanWithInterceptors[*AuthTokensQuery, *AuthTokensSelect](ctx, _s.AuthTokensQuery, _s, _s.inters, v)
}
func (ats *AuthTokensSelect) sqlScan(ctx context.Context, root *AuthTokensQuery, v any) error {
func (_s *AuthTokensSelect) sqlScan(ctx context.Context, root *AuthTokensQuery, v any) error {
selector := root.sqlQuery(ctx)
aggregation := make([]string, 0, len(ats.fns))
for _, fn := range ats.fns {
aggregation := make([]string, 0, len(_s.fns))
for _, fn := range _s.fns {
aggregation = append(aggregation, fn(selector))
}
switch n := len(*ats.selector.flds); {
switch n := len(*_s.selector.flds); {
case n == 0 && len(aggregation) > 0:
selector.Select(aggregation...)
case n != 0 && len(aggregation) > 0:
@@ -679,7 +679,7 @@ func (ats *AuthTokensSelect) sqlScan(ctx context.Context, root *AuthTokensQuery,
}
rows := &sql.Rows{}
query, args := selector.Query()
if err := ats.driver.Query(ctx, query, args, rows); err != nil {
if err := _s.driver.Query(ctx, query, args, rows); err != nil {
return err
}
defer rows.Close()

View File
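
The final file here is the generated AuthTokensUpdate/AuthTokensUpdateOne pair (receivers atu and atuo become _u). A brief sketch of the bulk-update path, whose Save returns the affected-row count produced by the renamed sqlSave; the predicates and import paths are again the standard generated helpers, assumed for illustration:

// Sketch only: placeholder import paths for the generated packages.
package sketch

import (
	"context"
	"time"

	"example.com/app/ent"            // placeholder path
	"example.com/app/ent/authtokens" // placeholder path
)

func extendActiveTokens(ctx context.Context, client *ent.Client) (int, error) {
	// defaults() stamps updated_at before save; Save reports the affected row count.
	return client.AuthTokens.
		Update().
		Where(authtokens.ExpiresAtGT(time.Now())).
		SetExpiresAt(time.Now().Add(30 * 24 * time.Hour)).
		Save(ctx)
}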

@@ -26,101 +26,101 @@ type AuthTokensUpdate struct {
}
// Where appends a list predicates to the AuthTokensUpdate builder.
func (atu *AuthTokensUpdate) Where(ps ...predicate.AuthTokens) *AuthTokensUpdate {
atu.mutation.Where(ps...)
return atu
func (_u *AuthTokensUpdate) Where(ps ...predicate.AuthTokens) *AuthTokensUpdate {
_u.mutation.Where(ps...)
return _u
}
// SetUpdatedAt sets the "updated_at" field.
func (atu *AuthTokensUpdate) SetUpdatedAt(t time.Time) *AuthTokensUpdate {
atu.mutation.SetUpdatedAt(t)
return atu
func (_u *AuthTokensUpdate) SetUpdatedAt(v time.Time) *AuthTokensUpdate {
_u.mutation.SetUpdatedAt(v)
return _u
}
// SetToken sets the "token" field.
func (atu *AuthTokensUpdate) SetToken(b []byte) *AuthTokensUpdate {
atu.mutation.SetToken(b)
return atu
func (_u *AuthTokensUpdate) SetToken(v []byte) *AuthTokensUpdate {
_u.mutation.SetToken(v)
return _u
}
// SetExpiresAt sets the "expires_at" field.
func (atu *AuthTokensUpdate) SetExpiresAt(t time.Time) *AuthTokensUpdate {
atu.mutation.SetExpiresAt(t)
return atu
func (_u *AuthTokensUpdate) SetExpiresAt(v time.Time) *AuthTokensUpdate {
_u.mutation.SetExpiresAt(v)
return _u
}
// SetNillableExpiresAt sets the "expires_at" field if the given value is not nil.
func (atu *AuthTokensUpdate) SetNillableExpiresAt(t *time.Time) *AuthTokensUpdate {
if t != nil {
atu.SetExpiresAt(*t)
func (_u *AuthTokensUpdate) SetNillableExpiresAt(v *time.Time) *AuthTokensUpdate {
if v != nil {
_u.SetExpiresAt(*v)
}
return atu
return _u
}
// SetUserID sets the "user" edge to the User entity by ID.
func (atu *AuthTokensUpdate) SetUserID(id uuid.UUID) *AuthTokensUpdate {
atu.mutation.SetUserID(id)
return atu
func (_u *AuthTokensUpdate) SetUserID(id uuid.UUID) *AuthTokensUpdate {
_u.mutation.SetUserID(id)
return _u
}
// SetNillableUserID sets the "user" edge to the User entity by ID if the given value is not nil.
func (atu *AuthTokensUpdate) SetNillableUserID(id *uuid.UUID) *AuthTokensUpdate {
func (_u *AuthTokensUpdate) SetNillableUserID(id *uuid.UUID) *AuthTokensUpdate {
if id != nil {
atu = atu.SetUserID(*id)
_u = _u.SetUserID(*id)
}
return atu
return _u
}
// SetUser sets the "user" edge to the User entity.
func (atu *AuthTokensUpdate) SetUser(u *User) *AuthTokensUpdate {
return atu.SetUserID(u.ID)
func (_u *AuthTokensUpdate) SetUser(v *User) *AuthTokensUpdate {
return _u.SetUserID(v.ID)
}
// SetRolesID sets the "roles" edge to the AuthRoles entity by ID.
func (atu *AuthTokensUpdate) SetRolesID(id int) *AuthTokensUpdate {
atu.mutation.SetRolesID(id)
return atu
func (_u *AuthTokensUpdate) SetRolesID(id int) *AuthTokensUpdate {
_u.mutation.SetRolesID(id)
return _u
}
// SetNillableRolesID sets the "roles" edge to the AuthRoles entity by ID if the given value is not nil.
func (atu *AuthTokensUpdate) SetNillableRolesID(id *int) *AuthTokensUpdate {
func (_u *AuthTokensUpdate) SetNillableRolesID(id *int) *AuthTokensUpdate {
if id != nil {
atu = atu.SetRolesID(*id)
_u = _u.SetRolesID(*id)
}
return atu
return _u
}
// SetRoles sets the "roles" edge to the AuthRoles entity.
func (atu *AuthTokensUpdate) SetRoles(a *AuthRoles) *AuthTokensUpdate {
return atu.SetRolesID(a.ID)
func (_u *AuthTokensUpdate) SetRoles(v *AuthRoles) *AuthTokensUpdate {
return _u.SetRolesID(v.ID)
}
// Mutation returns the AuthTokensMutation object of the builder.
func (atu *AuthTokensUpdate) Mutation() *AuthTokensMutation {
return atu.mutation
func (_u *AuthTokensUpdate) Mutation() *AuthTokensMutation {
return _u.mutation
}
// ClearUser clears the "user" edge to the User entity.
func (atu *AuthTokensUpdate) ClearUser() *AuthTokensUpdate {
atu.mutation.ClearUser()
return atu
func (_u *AuthTokensUpdate) ClearUser() *AuthTokensUpdate {
_u.mutation.ClearUser()
return _u
}
// ClearRoles clears the "roles" edge to the AuthRoles entity.
func (atu *AuthTokensUpdate) ClearRoles() *AuthTokensUpdate {
atu.mutation.ClearRoles()
return atu
func (_u *AuthTokensUpdate) ClearRoles() *AuthTokensUpdate {
_u.mutation.ClearRoles()
return _u
}
// Save executes the query and returns the number of nodes affected by the update operation.
func (atu *AuthTokensUpdate) Save(ctx context.Context) (int, error) {
atu.defaults()
return withHooks(ctx, atu.sqlSave, atu.mutation, atu.hooks)
func (_u *AuthTokensUpdate) Save(ctx context.Context) (int, error) {
_u.defaults()
return withHooks(ctx, _u.sqlSave, _u.mutation, _u.hooks)
}
// SaveX is like Save, but panics if an error occurs.
func (atu *AuthTokensUpdate) SaveX(ctx context.Context) int {
affected, err := atu.Save(ctx)
func (_u *AuthTokensUpdate) SaveX(ctx context.Context) int {
affected, err := _u.Save(ctx)
if err != nil {
panic(err)
}
@@ -128,45 +128,45 @@ func (atu *AuthTokensUpdate) SaveX(ctx context.Context) int {
}
// Exec executes the query.
func (atu *AuthTokensUpdate) Exec(ctx context.Context) error {
_, err := atu.Save(ctx)
func (_u *AuthTokensUpdate) Exec(ctx context.Context) error {
_, err := _u.Save(ctx)
return err
}
// ExecX is like Exec, but panics if an error occurs.
func (atu *AuthTokensUpdate) ExecX(ctx context.Context) {
if err := atu.Exec(ctx); err != nil {
func (_u *AuthTokensUpdate) ExecX(ctx context.Context) {
if err := _u.Exec(ctx); err != nil {
panic(err)
}
}
// defaults sets the default values of the builder before save.
func (atu *AuthTokensUpdate) defaults() {
if _, ok := atu.mutation.UpdatedAt(); !ok {
func (_u *AuthTokensUpdate) defaults() {
if _, ok := _u.mutation.UpdatedAt(); !ok {
v := authtokens.UpdateDefaultUpdatedAt()
atu.mutation.SetUpdatedAt(v)
_u.mutation.SetUpdatedAt(v)
}
}
func (atu *AuthTokensUpdate) sqlSave(ctx context.Context) (n int, err error) {
func (_u *AuthTokensUpdate) sqlSave(ctx context.Context) (_node int, err error) {
_spec := sqlgraph.NewUpdateSpec(authtokens.Table, authtokens.Columns, sqlgraph.NewFieldSpec(authtokens.FieldID, field.TypeUUID))
if ps := atu.mutation.predicates; len(ps) > 0 {
if ps := _u.mutation.predicates; len(ps) > 0 {
_spec.Predicate = func(selector *sql.Selector) {
for i := range ps {
ps[i](selector)
}
}
}
if value, ok := atu.mutation.UpdatedAt(); ok {
if value, ok := _u.mutation.UpdatedAt(); ok {
_spec.SetField(authtokens.FieldUpdatedAt, field.TypeTime, value)
}
if value, ok := atu.mutation.Token(); ok {
if value, ok := _u.mutation.Token(); ok {
_spec.SetField(authtokens.FieldToken, field.TypeBytes, value)
}
if value, ok := atu.mutation.ExpiresAt(); ok {
if value, ok := _u.mutation.ExpiresAt(); ok {
_spec.SetField(authtokens.FieldExpiresAt, field.TypeTime, value)
}
if atu.mutation.UserCleared() {
if _u.mutation.UserCleared() {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.M2O,
Inverse: true,
@@ -179,7 +179,7 @@ func (atu *AuthTokensUpdate) sqlSave(ctx context.Context) (n int, err error) {
}
_spec.Edges.Clear = append(_spec.Edges.Clear, edge)
}
if nodes := atu.mutation.UserIDs(); len(nodes) > 0 {
if nodes := _u.mutation.UserIDs(); len(nodes) > 0 {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.M2O,
Inverse: true,
@@ -195,7 +195,7 @@ func (atu *AuthTokensUpdate) sqlSave(ctx context.Context) (n int, err error) {
}
_spec.Edges.Add = append(_spec.Edges.Add, edge)
}
if atu.mutation.RolesCleared() {
if _u.mutation.RolesCleared() {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.O2O,
Inverse: false,
@@ -208,7 +208,7 @@ func (atu *AuthTokensUpdate) sqlSave(ctx context.Context) (n int, err error) {
}
_spec.Edges.Clear = append(_spec.Edges.Clear, edge)
}
if nodes := atu.mutation.RolesIDs(); len(nodes) > 0 {
if nodes := _u.mutation.RolesIDs(); len(nodes) > 0 {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.O2O,
Inverse: false,
@@ -224,7 +224,7 @@ func (atu *AuthTokensUpdate) sqlSave(ctx context.Context) (n int, err error) {
}
_spec.Edges.Add = append(_spec.Edges.Add, edge)
}
if n, err = sqlgraph.UpdateNodes(ctx, atu.driver, _spec); err != nil {
if _node, err = sqlgraph.UpdateNodes(ctx, _u.driver, _spec); err != nil {
if _, ok := err.(*sqlgraph.NotFoundError); ok {
err = &NotFoundError{authtokens.Label}
} else if sqlgraph.IsConstraintError(err) {
@@ -232,8 +232,8 @@ func (atu *AuthTokensUpdate) sqlSave(ctx context.Context) (n int, err error) {
}
return 0, err
}
atu.mutation.done = true
return n, nil
_u.mutation.done = true
return _node, nil
}
// AuthTokensUpdateOne is the builder for updating a single AuthTokens entity.
@@ -245,108 +245,108 @@ type AuthTokensUpdateOne struct {
}
// SetUpdatedAt sets the "updated_at" field.
func (atuo *AuthTokensUpdateOne) SetUpdatedAt(t time.Time) *AuthTokensUpdateOne {
atuo.mutation.SetUpdatedAt(t)
return atuo
func (_u *AuthTokensUpdateOne) SetUpdatedAt(v time.Time) *AuthTokensUpdateOne {
_u.mutation.SetUpdatedAt(v)
return _u
}
// SetToken sets the "token" field.
func (atuo *AuthTokensUpdateOne) SetToken(b []byte) *AuthTokensUpdateOne {
atuo.mutation.SetToken(b)
return atuo
func (_u *AuthTokensUpdateOne) SetToken(v []byte) *AuthTokensUpdateOne {
_u.mutation.SetToken(v)
return _u
}
// SetExpiresAt sets the "expires_at" field.
func (atuo *AuthTokensUpdateOne) SetExpiresAt(t time.Time) *AuthTokensUpdateOne {
atuo.mutation.SetExpiresAt(t)
return atuo
func (_u *AuthTokensUpdateOne) SetExpiresAt(v time.Time) *AuthTokensUpdateOne {
_u.mutation.SetExpiresAt(v)
return _u
}
// SetNillableExpiresAt sets the "expires_at" field if the given value is not nil.
func (atuo *AuthTokensUpdateOne) SetNillableExpiresAt(t *time.Time) *AuthTokensUpdateOne {
if t != nil {
atuo.SetExpiresAt(*t)
func (_u *AuthTokensUpdateOne) SetNillableExpiresAt(v *time.Time) *AuthTokensUpdateOne {
if v != nil {
_u.SetExpiresAt(*v)
}
return atuo
return _u
}
// SetUserID sets the "user" edge to the User entity by ID.
func (atuo *AuthTokensUpdateOne) SetUserID(id uuid.UUID) *AuthTokensUpdateOne {
atuo.mutation.SetUserID(id)
return atuo
func (_u *AuthTokensUpdateOne) SetUserID(id uuid.UUID) *AuthTokensUpdateOne {
_u.mutation.SetUserID(id)
return _u
}
// SetNillableUserID sets the "user" edge to the User entity by ID if the given value is not nil.
func (atuo *AuthTokensUpdateOne) SetNillableUserID(id *uuid.UUID) *AuthTokensUpdateOne {
func (_u *AuthTokensUpdateOne) SetNillableUserID(id *uuid.UUID) *AuthTokensUpdateOne {
if id != nil {
atuo = atuo.SetUserID(*id)
_u = _u.SetUserID(*id)
}
return atuo
return _u
}
// SetUser sets the "user" edge to the User entity.
func (atuo *AuthTokensUpdateOne) SetUser(u *User) *AuthTokensUpdateOne {
return atuo.SetUserID(u.ID)
func (_u *AuthTokensUpdateOne) SetUser(v *User) *AuthTokensUpdateOne {
return _u.SetUserID(v.ID)
}
// SetRolesID sets the "roles" edge to the AuthRoles entity by ID.
func (atuo *AuthTokensUpdateOne) SetRolesID(id int) *AuthTokensUpdateOne {
atuo.mutation.SetRolesID(id)
return atuo
func (_u *AuthTokensUpdateOne) SetRolesID(id int) *AuthTokensUpdateOne {
_u.mutation.SetRolesID(id)
return _u
}
// SetNillableRolesID sets the "roles" edge to the AuthRoles entity by ID if the given value is not nil.
func (atuo *AuthTokensUpdateOne) SetNillableRolesID(id *int) *AuthTokensUpdateOne {
func (_u *AuthTokensUpdateOne) SetNillableRolesID(id *int) *AuthTokensUpdateOne {
if id != nil {
atuo = atuo.SetRolesID(*id)
_u = _u.SetRolesID(*id)
}
return atuo
return _u
}
// SetRoles sets the "roles" edge to the AuthRoles entity.
func (atuo *AuthTokensUpdateOne) SetRoles(a *AuthRoles) *AuthTokensUpdateOne {
return atuo.SetRolesID(a.ID)
func (_u *AuthTokensUpdateOne) SetRoles(v *AuthRoles) *AuthTokensUpdateOne {
return _u.SetRolesID(v.ID)
}
// Mutation returns the AuthTokensMutation object of the builder.
func (atuo *AuthTokensUpdateOne) Mutation() *AuthTokensMutation {
return atuo.mutation
func (_u *AuthTokensUpdateOne) Mutation() *AuthTokensMutation {
return _u.mutation
}
// ClearUser clears the "user" edge to the User entity.
func (atuo *AuthTokensUpdateOne) ClearUser() *AuthTokensUpdateOne {
atuo.mutation.ClearUser()
return atuo
func (_u *AuthTokensUpdateOne) ClearUser() *AuthTokensUpdateOne {
_u.mutation.ClearUser()
return _u
}
// ClearRoles clears the "roles" edge to the AuthRoles entity.
func (atuo *AuthTokensUpdateOne) ClearRoles() *AuthTokensUpdateOne {
atuo.mutation.ClearRoles()
return atuo
func (_u *AuthTokensUpdateOne) ClearRoles() *AuthTokensUpdateOne {
_u.mutation.ClearRoles()
return _u
}
// Where appends a list predicates to the AuthTokensUpdate builder.
func (atuo *AuthTokensUpdateOne) Where(ps ...predicate.AuthTokens) *AuthTokensUpdateOne {
atuo.mutation.Where(ps...)
return atuo
func (_u *AuthTokensUpdateOne) Where(ps ...predicate.AuthTokens) *AuthTokensUpdateOne {
_u.mutation.Where(ps...)
return _u
}
// Select allows selecting one or more fields (columns) of the returned entity.
// The default is selecting all fields defined in the entity schema.
func (atuo *AuthTokensUpdateOne) Select(field string, fields ...string) *AuthTokensUpdateOne {
atuo.fields = append([]string{field}, fields...)
return atuo
func (_u *AuthTokensUpdateOne) Select(field string, fields ...string) *AuthTokensUpdateOne {
_u.fields = append([]string{field}, fields...)
return _u
}
// Save executes the query and returns the updated AuthTokens entity.
func (atuo *AuthTokensUpdateOne) Save(ctx context.Context) (*AuthTokens, error) {
atuo.defaults()
return withHooks(ctx, atuo.sqlSave, atuo.mutation, atuo.hooks)
func (_u *AuthTokensUpdateOne) Save(ctx context.Context) (*AuthTokens, error) {
_u.defaults()
return withHooks(ctx, _u.sqlSave, _u.mutation, _u.hooks)
}
// SaveX is like Save, but panics if an error occurs.
func (atuo *AuthTokensUpdateOne) SaveX(ctx context.Context) *AuthTokens {
node, err := atuo.Save(ctx)
func (_u *AuthTokensUpdateOne) SaveX(ctx context.Context) *AuthTokens {
node, err := _u.Save(ctx)
if err != nil {
panic(err)
}
@@ -354,34 +354,34 @@ func (atuo *AuthTokensUpdateOne) SaveX(ctx context.Context) *AuthTokens {
}
// Exec executes the query on the entity.
func (atuo *AuthTokensUpdateOne) Exec(ctx context.Context) error {
_, err := atuo.Save(ctx)
func (_u *AuthTokensUpdateOne) Exec(ctx context.Context) error {
_, err := _u.Save(ctx)
return err
}
// ExecX is like Exec, but panics if an error occurs.
func (atuo *AuthTokensUpdateOne) ExecX(ctx context.Context) {
if err := atuo.Exec(ctx); err != nil {
func (_u *AuthTokensUpdateOne) ExecX(ctx context.Context) {
if err := _u.Exec(ctx); err != nil {
panic(err)
}
}
// defaults sets the default values of the builder before save.
func (atuo *AuthTokensUpdateOne) defaults() {
if _, ok := atuo.mutation.UpdatedAt(); !ok {
func (_u *AuthTokensUpdateOne) defaults() {
if _, ok := _u.mutation.UpdatedAt(); !ok {
v := authtokens.UpdateDefaultUpdatedAt()
atuo.mutation.SetUpdatedAt(v)
_u.mutation.SetUpdatedAt(v)
}
}
func (atuo *AuthTokensUpdateOne) sqlSave(ctx context.Context) (_node *AuthTokens, err error) {
func (_u *AuthTokensUpdateOne) sqlSave(ctx context.Context) (_node *AuthTokens, err error) {
_spec := sqlgraph.NewUpdateSpec(authtokens.Table, authtokens.Columns, sqlgraph.NewFieldSpec(authtokens.FieldID, field.TypeUUID))
id, ok := atuo.mutation.ID()
id, ok := _u.mutation.ID()
if !ok {
return nil, &ValidationError{Name: "id", err: errors.New(`ent: missing "AuthTokens.id" for update`)}
}
_spec.Node.ID.Value = id
if fields := atuo.fields; len(fields) > 0 {
if fields := _u.fields; len(fields) > 0 {
_spec.Node.Columns = make([]string, 0, len(fields))
_spec.Node.Columns = append(_spec.Node.Columns, authtokens.FieldID)
for _, f := range fields {
@@ -393,23 +393,23 @@ func (atuo *AuthTokensUpdateOne) sqlSave(ctx context.Context) (_node *AuthTokens
}
}
}
if ps := atuo.mutation.predicates; len(ps) > 0 {
if ps := _u.mutation.predicates; len(ps) > 0 {
_spec.Predicate = func(selector *sql.Selector) {
for i := range ps {
ps[i](selector)
}
}
}
if value, ok := atuo.mutation.UpdatedAt(); ok {
if value, ok := _u.mutation.UpdatedAt(); ok {
_spec.SetField(authtokens.FieldUpdatedAt, field.TypeTime, value)
}
if value, ok := atuo.mutation.Token(); ok {
if value, ok := _u.mutation.Token(); ok {
_spec.SetField(authtokens.FieldToken, field.TypeBytes, value)
}
if value, ok := atuo.mutation.ExpiresAt(); ok {
if value, ok := _u.mutation.ExpiresAt(); ok {
_spec.SetField(authtokens.FieldExpiresAt, field.TypeTime, value)
}
if atuo.mutation.UserCleared() {
if _u.mutation.UserCleared() {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.M2O,
Inverse: true,
@@ -422,7 +422,7 @@ func (atuo *AuthTokensUpdateOne) sqlSave(ctx context.Context) (_node *AuthTokens
}
_spec.Edges.Clear = append(_spec.Edges.Clear, edge)
}
if nodes := atuo.mutation.UserIDs(); len(nodes) > 0 {
if nodes := _u.mutation.UserIDs(); len(nodes) > 0 {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.M2O,
Inverse: true,
@@ -438,7 +438,7 @@ func (atuo *AuthTokensUpdateOne) sqlSave(ctx context.Context) (_node *AuthTokens
}
_spec.Edges.Add = append(_spec.Edges.Add, edge)
}
if atuo.mutation.RolesCleared() {
if _u.mutation.RolesCleared() {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.O2O,
Inverse: false,
@@ -451,7 +451,7 @@ func (atuo *AuthTokensUpdateOne) sqlSave(ctx context.Context) (_node *AuthTokens
}
_spec.Edges.Clear = append(_spec.Edges.Clear, edge)
}
if nodes := atuo.mutation.RolesIDs(); len(nodes) > 0 {
if nodes := _u.mutation.RolesIDs(); len(nodes) > 0 {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.O2O,
Inverse: false,
@@ -467,10 +467,10 @@ func (atuo *AuthTokensUpdateOne) sqlSave(ctx context.Context) (_node *AuthTokens
}
_spec.Edges.Add = append(_spec.Edges.Add, edge)
}
_node = &AuthTokens{config: atuo.config}
_node = &AuthTokens{config: _u.config}
_spec.Assign = _node.assignValues
_spec.ScanValues = _node.scanValues
if err = sqlgraph.UpdateNode(ctx, atuo.driver, _spec); err != nil {
if err = sqlgraph.UpdateNode(ctx, _u.driver, _spec); err != nil {
if _, ok := err.(*sqlgraph.NotFoundError); ok {
err = &NotFoundError{authtokens.Label}
} else if sqlgraph.IsConstraintError(err) {
@@ -478,6 +478,6 @@ func (atuo *AuthTokensUpdateOne) sqlSave(ctx context.Context) (_node *AuthTokens
}
return nil, err
}
atuo.mutation.done = true
_u.mutation.done = true
return _node, nil
}
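
The regenerated update builders above keep the same chainable surface; only the receiver names inside the generated file change (atuo becomes _u). A minimal caller sketch, assuming a generated *ent.Client named client, the project's google/uuid package for the uuid.UUID fields, and an existing token ID (all hypothetical here), exercises the renamed AuthTokensUpdateOne path exactly as before:

import (
	"context"
	"time"

	"github.com/google/uuid"

	"github.com/sysadminsmedia/homebox/backend/internal/data/ent"
)

// refreshToken pushes an auth token's expiry forward by a day; the builder
// chain is unchanged by this diff because only internal receiver names moved.
func refreshToken(ctx context.Context, client *ent.Client, id uuid.UUID) (*ent.AuthTokens, error) {
	return client.AuthTokens.
		UpdateOneID(id).
		SetExpiresAt(time.Now().Add(24 * time.Hour)).
		Save(ctx)
}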

File diff suppressed because it is too large

View File

@@ -19,10 +19,12 @@ import (
"github.com/sysadminsmedia/homebox/backend/internal/data/ent/groupinvitationtoken"
"github.com/sysadminsmedia/homebox/backend/internal/data/ent/item"
"github.com/sysadminsmedia/homebox/backend/internal/data/ent/itemfield"
"github.com/sysadminsmedia/homebox/backend/internal/data/ent/itemtemplate"
"github.com/sysadminsmedia/homebox/backend/internal/data/ent/label"
"github.com/sysadminsmedia/homebox/backend/internal/data/ent/location"
"github.com/sysadminsmedia/homebox/backend/internal/data/ent/maintenanceentry"
"github.com/sysadminsmedia/homebox/backend/internal/data/ent/notifier"
"github.com/sysadminsmedia/homebox/backend/internal/data/ent/templatefield"
"github.com/sysadminsmedia/homebox/backend/internal/data/ent/user"
)
@@ -81,7 +83,7 @@ var (
)
// checkColumn checks if the column exists in the given table.
func checkColumn(table, column string) error {
func checkColumn(t, c string) error {
initCheck.Do(func() {
columnCheck = sql.NewColumnCheck(map[string]func(string) bool{
attachment.Table: attachment.ValidColumn,
@@ -91,14 +93,16 @@ func checkColumn(table, column string) error {
groupinvitationtoken.Table: groupinvitationtoken.ValidColumn,
item.Table: item.ValidColumn,
itemfield.Table: itemfield.ValidColumn,
itemtemplate.Table: itemtemplate.ValidColumn,
label.Table: label.ValidColumn,
location.Table: location.ValidColumn,
maintenanceentry.Table: maintenanceentry.ValidColumn,
notifier.Table: notifier.ValidColumn,
templatefield.Table: templatefield.ValidColumn,
user.Table: user.ValidColumn,
})
})
return columnCheck(table, column)
return columnCheck(t, c)
}
// Asc applies the given fields in ASC order.

View File

@@ -46,9 +46,11 @@ type GroupEdges struct {
InvitationTokens []*GroupInvitationToken `json:"invitation_tokens,omitempty"`
// Notifiers holds the value of the notifiers edge.
Notifiers []*Notifier `json:"notifiers,omitempty"`
// ItemTemplates holds the value of the item_templates edge.
ItemTemplates []*ItemTemplate `json:"item_templates,omitempty"`
// loadedTypes holds the information for reporting if a
// type was loaded (or requested) in eager-loading or not.
loadedTypes [6]bool
loadedTypes [7]bool
}
// UsersOrErr returns the Users value or an error if the edge
@@ -105,6 +107,15 @@ func (e GroupEdges) NotifiersOrErr() ([]*Notifier, error) {
return nil, &NotLoadedError{edge: "notifiers"}
}
// ItemTemplatesOrErr returns the ItemTemplates value or an error if the edge
// was not loaded in eager-loading.
func (e GroupEdges) ItemTemplatesOrErr() ([]*ItemTemplate, error) {
if e.loadedTypes[6] {
return e.ItemTemplates, nil
}
return nil, &NotLoadedError{edge: "item_templates"}
}
// scanValues returns the types for scanning values from sql.Rows.
func (*Group) scanValues(columns []string) ([]any, error) {
values := make([]any, len(columns))
@@ -125,7 +136,7 @@ func (*Group) scanValues(columns []string) ([]any, error) {
// assignValues assigns the values that were returned from sql.Rows (after scanning)
// to the Group fields.
func (gr *Group) assignValues(columns []string, values []any) error {
func (_m *Group) assignValues(columns []string, values []any) error {
if m, n := len(values), len(columns); m < n {
return fmt.Errorf("mismatch number of scan values: %d != %d", m, n)
}
@@ -135,34 +146,34 @@ func (gr *Group) assignValues(columns []string, values []any) error {
if value, ok := values[i].(*uuid.UUID); !ok {
return fmt.Errorf("unexpected type %T for field id", values[i])
} else if value != nil {
gr.ID = *value
_m.ID = *value
}
case group.FieldCreatedAt:
if value, ok := values[i].(*sql.NullTime); !ok {
return fmt.Errorf("unexpected type %T for field created_at", values[i])
} else if value.Valid {
gr.CreatedAt = value.Time
_m.CreatedAt = value.Time
}
case group.FieldUpdatedAt:
if value, ok := values[i].(*sql.NullTime); !ok {
return fmt.Errorf("unexpected type %T for field updated_at", values[i])
} else if value.Valid {
gr.UpdatedAt = value.Time
_m.UpdatedAt = value.Time
}
case group.FieldName:
if value, ok := values[i].(*sql.NullString); !ok {
return fmt.Errorf("unexpected type %T for field name", values[i])
} else if value.Valid {
gr.Name = value.String
_m.Name = value.String
}
case group.FieldCurrency:
if value, ok := values[i].(*sql.NullString); !ok {
return fmt.Errorf("unexpected type %T for field currency", values[i])
} else if value.Valid {
gr.Currency = value.String
_m.Currency = value.String
}
default:
gr.selectValues.Set(columns[i], values[i])
_m.selectValues.Set(columns[i], values[i])
}
}
return nil
@@ -170,74 +181,79 @@ func (gr *Group) assignValues(columns []string, values []any) error {
// Value returns the ent.Value that was dynamically selected and assigned to the Group.
// This includes values selected through modifiers, order, etc.
func (gr *Group) Value(name string) (ent.Value, error) {
return gr.selectValues.Get(name)
func (_m *Group) Value(name string) (ent.Value, error) {
return _m.selectValues.Get(name)
}
// QueryUsers queries the "users" edge of the Group entity.
func (gr *Group) QueryUsers() *UserQuery {
return NewGroupClient(gr.config).QueryUsers(gr)
func (_m *Group) QueryUsers() *UserQuery {
return NewGroupClient(_m.config).QueryUsers(_m)
}
// QueryLocations queries the "locations" edge of the Group entity.
func (gr *Group) QueryLocations() *LocationQuery {
return NewGroupClient(gr.config).QueryLocations(gr)
func (_m *Group) QueryLocations() *LocationQuery {
return NewGroupClient(_m.config).QueryLocations(_m)
}
// QueryItems queries the "items" edge of the Group entity.
func (gr *Group) QueryItems() *ItemQuery {
return NewGroupClient(gr.config).QueryItems(gr)
func (_m *Group) QueryItems() *ItemQuery {
return NewGroupClient(_m.config).QueryItems(_m)
}
// QueryLabels queries the "labels" edge of the Group entity.
func (gr *Group) QueryLabels() *LabelQuery {
return NewGroupClient(gr.config).QueryLabels(gr)
func (_m *Group) QueryLabels() *LabelQuery {
return NewGroupClient(_m.config).QueryLabels(_m)
}
// QueryInvitationTokens queries the "invitation_tokens" edge of the Group entity.
func (gr *Group) QueryInvitationTokens() *GroupInvitationTokenQuery {
return NewGroupClient(gr.config).QueryInvitationTokens(gr)
func (_m *Group) QueryInvitationTokens() *GroupInvitationTokenQuery {
return NewGroupClient(_m.config).QueryInvitationTokens(_m)
}
// QueryNotifiers queries the "notifiers" edge of the Group entity.
func (gr *Group) QueryNotifiers() *NotifierQuery {
return NewGroupClient(gr.config).QueryNotifiers(gr)
func (_m *Group) QueryNotifiers() *NotifierQuery {
return NewGroupClient(_m.config).QueryNotifiers(_m)
}
// QueryItemTemplates queries the "item_templates" edge of the Group entity.
func (_m *Group) QueryItemTemplates() *ItemTemplateQuery {
return NewGroupClient(_m.config).QueryItemTemplates(_m)
}
// Update returns a builder for updating this Group.
// Note that you need to call Group.Unwrap() before calling this method if this Group
// was returned from a transaction, and the transaction was committed or rolled back.
func (gr *Group) Update() *GroupUpdateOne {
return NewGroupClient(gr.config).UpdateOne(gr)
func (_m *Group) Update() *GroupUpdateOne {
return NewGroupClient(_m.config).UpdateOne(_m)
}
// Unwrap unwraps the Group entity that was returned from a transaction after it was closed,
// so that all future queries will be executed through the driver which created the transaction.
func (gr *Group) Unwrap() *Group {
_tx, ok := gr.config.driver.(*txDriver)
func (_m *Group) Unwrap() *Group {
_tx, ok := _m.config.driver.(*txDriver)
if !ok {
panic("ent: Group is not a transactional entity")
}
gr.config.driver = _tx.drv
return gr
_m.config.driver = _tx.drv
return _m
}
// String implements the fmt.Stringer.
func (gr *Group) String() string {
func (_m *Group) String() string {
var builder strings.Builder
builder.WriteString("Group(")
builder.WriteString(fmt.Sprintf("id=%v, ", gr.ID))
builder.WriteString(fmt.Sprintf("id=%v, ", _m.ID))
builder.WriteString("created_at=")
builder.WriteString(gr.CreatedAt.Format(time.ANSIC))
builder.WriteString(_m.CreatedAt.Format(time.ANSIC))
builder.WriteString(", ")
builder.WriteString("updated_at=")
builder.WriteString(gr.UpdatedAt.Format(time.ANSIC))
builder.WriteString(_m.UpdatedAt.Format(time.ANSIC))
builder.WriteString(", ")
builder.WriteString("name=")
builder.WriteString(gr.Name)
builder.WriteString(_m.Name)
builder.WriteString(", ")
builder.WriteString("currency=")
builder.WriteString(gr.Currency)
builder.WriteString(_m.Currency)
builder.WriteByte(')')
return builder.String()
}
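
The new item_templates edge is reachable from a loaded Group through the QueryItemTemplates helper generated above. A minimal sketch, assuming only the "context" package and the generated ent package "github.com/sysadminsmedia/homebox/backend/internal/data/ent" are imported (the function name is illustrative, not part of the diff):

// listGroupTemplates returns every ItemTemplate owned by the group, using the
// newly generated edge query; All is the standard ent query terminator.
func listGroupTemplates(ctx context.Context, gr *ent.Group) ([]*ent.ItemTemplate, error) {
	return gr.QueryItemTemplates().All(ctx)
}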

View File

@@ -35,6 +35,8 @@ const (
EdgeInvitationTokens = "invitation_tokens"
// EdgeNotifiers holds the string denoting the notifiers edge name in mutations.
EdgeNotifiers = "notifiers"
// EdgeItemTemplates holds the string denoting the item_templates edge name in mutations.
EdgeItemTemplates = "item_templates"
// Table holds the table name of the group in the database.
Table = "groups"
// UsersTable is the table that holds the users relation/edge.
@@ -79,6 +81,13 @@ const (
NotifiersInverseTable = "notifiers"
// NotifiersColumn is the table column denoting the notifiers relation/edge.
NotifiersColumn = "group_id"
// ItemTemplatesTable is the table that holds the item_templates relation/edge.
ItemTemplatesTable = "item_templates"
// ItemTemplatesInverseTable is the table name for the ItemTemplate entity.
// It exists in this package in order to avoid circular dependency with the "itemtemplate" package.
ItemTemplatesInverseTable = "item_templates"
// ItemTemplatesColumn is the table column denoting the item_templates relation/edge.
ItemTemplatesColumn = "group_item_templates"
)
// Columns holds all SQL columns for group fields.
@@ -226,6 +235,20 @@ func ByNotifiers(term sql.OrderTerm, terms ...sql.OrderTerm) OrderOption {
sqlgraph.OrderByNeighborTerms(s, newNotifiersStep(), append([]sql.OrderTerm{term}, terms...)...)
}
}
// ByItemTemplatesCount orders the results by item_templates count.
func ByItemTemplatesCount(opts ...sql.OrderTermOption) OrderOption {
return func(s *sql.Selector) {
sqlgraph.OrderByNeighborsCount(s, newItemTemplatesStep(), opts...)
}
}
// ByItemTemplates orders the results by item_templates terms.
func ByItemTemplates(term sql.OrderTerm, terms ...sql.OrderTerm) OrderOption {
return func(s *sql.Selector) {
sqlgraph.OrderByNeighborTerms(s, newItemTemplatesStep(), append([]sql.OrderTerm{term}, terms...)...)
}
}
func newUsersStep() *sqlgraph.Step {
return sqlgraph.NewStep(
sqlgraph.From(Table, FieldID),
@@ -268,3 +291,10 @@ func newNotifiersStep() *sqlgraph.Step {
sqlgraph.Edge(sqlgraph.O2M, false, NotifiersTable, NotifiersColumn),
)
}
func newItemTemplatesStep() *sqlgraph.Step {
return sqlgraph.NewStep(
sqlgraph.From(Table, FieldID),
sqlgraph.To(ItemTemplatesInverseTable, FieldID),
sqlgraph.Edge(sqlgraph.O2M, false, ItemTemplatesTable, ItemTemplatesColumn),
)
}
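
The ByItemTemplatesCount and ByItemTemplates order options generated above plug into the usual Query().Order(...) chain. A sketch, assuming a generated *ent.Client plus the entgo.io/ent/dialect/sql and group packages imported as in the rest of this diff:

// busiestGroups lists groups with the most item templates first;
// sql.OrderDesc() is the stock entsql order-term option.
func busiestGroups(ctx context.Context, client *ent.Client) ([]*ent.Group, error) {
	return client.Group.Query().
		Order(group.ByItemTemplatesCount(sql.OrderDesc())).
		All(ctx)
}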

View File

@@ -424,6 +424,29 @@ func HasNotifiersWith(preds ...predicate.Notifier) predicate.Group {
})
}
// HasItemTemplates applies the HasEdge predicate on the "item_templates" edge.
func HasItemTemplates() predicate.Group {
return predicate.Group(func(s *sql.Selector) {
step := sqlgraph.NewStep(
sqlgraph.From(Table, FieldID),
sqlgraph.Edge(sqlgraph.O2M, false, ItemTemplatesTable, ItemTemplatesColumn),
)
sqlgraph.HasNeighbors(s, step)
})
}
// HasItemTemplatesWith applies the HasEdge predicate on the "item_templates" edge with a given conditions (other predicates).
func HasItemTemplatesWith(preds ...predicate.ItemTemplate) predicate.Group {
return predicate.Group(func(s *sql.Selector) {
step := newItemTemplatesStep()
sqlgraph.HasNeighborsWith(s, step, func(s *sql.Selector) {
for _, p := range preds {
p(s)
}
})
})
}
// And groups predicates with the AND operator between them.
func And(predicates ...predicate.Group) predicate.Group {
return predicate.Group(sql.AndPredicates(predicates...))
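
The new HasItemTemplates predicate composes with the other generated group predicates. A minimal sketch, assuming a generated *ent.Client and the group predicate package:

// groupsWithTemplates selects only groups that own at least one item template,
// using the HasItemTemplates edge predicate added above.
func groupsWithTemplates(ctx context.Context, client *ent.Client) ([]*ent.Group, error) {
	return client.Group.Query().
		Where(group.HasItemTemplates()).
		All(ctx)
}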

View File

@@ -14,6 +14,7 @@ import (
"github.com/sysadminsmedia/homebox/backend/internal/data/ent/group"
"github.com/sysadminsmedia/homebox/backend/internal/data/ent/groupinvitationtoken"
"github.com/sysadminsmedia/homebox/backend/internal/data/ent/item"
"github.com/sysadminsmedia/homebox/backend/internal/data/ent/itemtemplate"
"github.com/sysadminsmedia/homebox/backend/internal/data/ent/label"
"github.com/sysadminsmedia/homebox/backend/internal/data/ent/location"
"github.com/sysadminsmedia/homebox/backend/internal/data/ent/notifier"
@@ -28,171 +29,186 @@ type GroupCreate struct {
}
// SetCreatedAt sets the "created_at" field.
func (gc *GroupCreate) SetCreatedAt(t time.Time) *GroupCreate {
gc.mutation.SetCreatedAt(t)
return gc
func (_c *GroupCreate) SetCreatedAt(v time.Time) *GroupCreate {
_c.mutation.SetCreatedAt(v)
return _c
}
// SetNillableCreatedAt sets the "created_at" field if the given value is not nil.
func (gc *GroupCreate) SetNillableCreatedAt(t *time.Time) *GroupCreate {
if t != nil {
gc.SetCreatedAt(*t)
func (_c *GroupCreate) SetNillableCreatedAt(v *time.Time) *GroupCreate {
if v != nil {
_c.SetCreatedAt(*v)
}
return gc
return _c
}
// SetUpdatedAt sets the "updated_at" field.
func (gc *GroupCreate) SetUpdatedAt(t time.Time) *GroupCreate {
gc.mutation.SetUpdatedAt(t)
return gc
func (_c *GroupCreate) SetUpdatedAt(v time.Time) *GroupCreate {
_c.mutation.SetUpdatedAt(v)
return _c
}
// SetNillableUpdatedAt sets the "updated_at" field if the given value is not nil.
func (gc *GroupCreate) SetNillableUpdatedAt(t *time.Time) *GroupCreate {
if t != nil {
gc.SetUpdatedAt(*t)
func (_c *GroupCreate) SetNillableUpdatedAt(v *time.Time) *GroupCreate {
if v != nil {
_c.SetUpdatedAt(*v)
}
return gc
return _c
}
// SetName sets the "name" field.
func (gc *GroupCreate) SetName(s string) *GroupCreate {
gc.mutation.SetName(s)
return gc
func (_c *GroupCreate) SetName(v string) *GroupCreate {
_c.mutation.SetName(v)
return _c
}
// SetCurrency sets the "currency" field.
func (gc *GroupCreate) SetCurrency(s string) *GroupCreate {
gc.mutation.SetCurrency(s)
return gc
func (_c *GroupCreate) SetCurrency(v string) *GroupCreate {
_c.mutation.SetCurrency(v)
return _c
}
// SetNillableCurrency sets the "currency" field if the given value is not nil.
func (gc *GroupCreate) SetNillableCurrency(s *string) *GroupCreate {
if s != nil {
gc.SetCurrency(*s)
func (_c *GroupCreate) SetNillableCurrency(v *string) *GroupCreate {
if v != nil {
_c.SetCurrency(*v)
}
return gc
return _c
}
// SetID sets the "id" field.
func (gc *GroupCreate) SetID(u uuid.UUID) *GroupCreate {
gc.mutation.SetID(u)
return gc
func (_c *GroupCreate) SetID(v uuid.UUID) *GroupCreate {
_c.mutation.SetID(v)
return _c
}
// SetNillableID sets the "id" field if the given value is not nil.
func (gc *GroupCreate) SetNillableID(u *uuid.UUID) *GroupCreate {
if u != nil {
gc.SetID(*u)
func (_c *GroupCreate) SetNillableID(v *uuid.UUID) *GroupCreate {
if v != nil {
_c.SetID(*v)
}
return gc
return _c
}
// AddUserIDs adds the "users" edge to the User entity by IDs.
func (gc *GroupCreate) AddUserIDs(ids ...uuid.UUID) *GroupCreate {
gc.mutation.AddUserIDs(ids...)
return gc
func (_c *GroupCreate) AddUserIDs(ids ...uuid.UUID) *GroupCreate {
_c.mutation.AddUserIDs(ids...)
return _c
}
// AddUsers adds the "users" edges to the User entity.
func (gc *GroupCreate) AddUsers(u ...*User) *GroupCreate {
ids := make([]uuid.UUID, len(u))
for i := range u {
ids[i] = u[i].ID
func (_c *GroupCreate) AddUsers(v ...*User) *GroupCreate {
ids := make([]uuid.UUID, len(v))
for i := range v {
ids[i] = v[i].ID
}
return gc.AddUserIDs(ids...)
return _c.AddUserIDs(ids...)
}
// AddLocationIDs adds the "locations" edge to the Location entity by IDs.
func (gc *GroupCreate) AddLocationIDs(ids ...uuid.UUID) *GroupCreate {
gc.mutation.AddLocationIDs(ids...)
return gc
func (_c *GroupCreate) AddLocationIDs(ids ...uuid.UUID) *GroupCreate {
_c.mutation.AddLocationIDs(ids...)
return _c
}
// AddLocations adds the "locations" edges to the Location entity.
func (gc *GroupCreate) AddLocations(l ...*Location) *GroupCreate {
ids := make([]uuid.UUID, len(l))
for i := range l {
ids[i] = l[i].ID
func (_c *GroupCreate) AddLocations(v ...*Location) *GroupCreate {
ids := make([]uuid.UUID, len(v))
for i := range v {
ids[i] = v[i].ID
}
return gc.AddLocationIDs(ids...)
return _c.AddLocationIDs(ids...)
}
// AddItemIDs adds the "items" edge to the Item entity by IDs.
func (gc *GroupCreate) AddItemIDs(ids ...uuid.UUID) *GroupCreate {
gc.mutation.AddItemIDs(ids...)
return gc
func (_c *GroupCreate) AddItemIDs(ids ...uuid.UUID) *GroupCreate {
_c.mutation.AddItemIDs(ids...)
return _c
}
// AddItems adds the "items" edges to the Item entity.
func (gc *GroupCreate) AddItems(i ...*Item) *GroupCreate {
ids := make([]uuid.UUID, len(i))
for j := range i {
ids[j] = i[j].ID
func (_c *GroupCreate) AddItems(v ...*Item) *GroupCreate {
ids := make([]uuid.UUID, len(v))
for i := range v {
ids[i] = v[i].ID
}
return gc.AddItemIDs(ids...)
return _c.AddItemIDs(ids...)
}
// AddLabelIDs adds the "labels" edge to the Label entity by IDs.
func (gc *GroupCreate) AddLabelIDs(ids ...uuid.UUID) *GroupCreate {
gc.mutation.AddLabelIDs(ids...)
return gc
func (_c *GroupCreate) AddLabelIDs(ids ...uuid.UUID) *GroupCreate {
_c.mutation.AddLabelIDs(ids...)
return _c
}
// AddLabels adds the "labels" edges to the Label entity.
func (gc *GroupCreate) AddLabels(l ...*Label) *GroupCreate {
ids := make([]uuid.UUID, len(l))
for i := range l {
ids[i] = l[i].ID
func (_c *GroupCreate) AddLabels(v ...*Label) *GroupCreate {
ids := make([]uuid.UUID, len(v))
for i := range v {
ids[i] = v[i].ID
}
return gc.AddLabelIDs(ids...)
return _c.AddLabelIDs(ids...)
}
// AddInvitationTokenIDs adds the "invitation_tokens" edge to the GroupInvitationToken entity by IDs.
func (gc *GroupCreate) AddInvitationTokenIDs(ids ...uuid.UUID) *GroupCreate {
gc.mutation.AddInvitationTokenIDs(ids...)
return gc
func (_c *GroupCreate) AddInvitationTokenIDs(ids ...uuid.UUID) *GroupCreate {
_c.mutation.AddInvitationTokenIDs(ids...)
return _c
}
// AddInvitationTokens adds the "invitation_tokens" edges to the GroupInvitationToken entity.
func (gc *GroupCreate) AddInvitationTokens(g ...*GroupInvitationToken) *GroupCreate {
ids := make([]uuid.UUID, len(g))
for i := range g {
ids[i] = g[i].ID
func (_c *GroupCreate) AddInvitationTokens(v ...*GroupInvitationToken) *GroupCreate {
ids := make([]uuid.UUID, len(v))
for i := range v {
ids[i] = v[i].ID
}
return gc.AddInvitationTokenIDs(ids...)
return _c.AddInvitationTokenIDs(ids...)
}
// AddNotifierIDs adds the "notifiers" edge to the Notifier entity by IDs.
func (gc *GroupCreate) AddNotifierIDs(ids ...uuid.UUID) *GroupCreate {
gc.mutation.AddNotifierIDs(ids...)
return gc
func (_c *GroupCreate) AddNotifierIDs(ids ...uuid.UUID) *GroupCreate {
_c.mutation.AddNotifierIDs(ids...)
return _c
}
// AddNotifiers adds the "notifiers" edges to the Notifier entity.
func (gc *GroupCreate) AddNotifiers(n ...*Notifier) *GroupCreate {
ids := make([]uuid.UUID, len(n))
for i := range n {
ids[i] = n[i].ID
func (_c *GroupCreate) AddNotifiers(v ...*Notifier) *GroupCreate {
ids := make([]uuid.UUID, len(v))
for i := range v {
ids[i] = v[i].ID
}
return gc.AddNotifierIDs(ids...)
return _c.AddNotifierIDs(ids...)
}
// AddItemTemplateIDs adds the "item_templates" edge to the ItemTemplate entity by IDs.
func (_c *GroupCreate) AddItemTemplateIDs(ids ...uuid.UUID) *GroupCreate {
_c.mutation.AddItemTemplateIDs(ids...)
return _c
}
// AddItemTemplates adds the "item_templates" edges to the ItemTemplate entity.
func (_c *GroupCreate) AddItemTemplates(v ...*ItemTemplate) *GroupCreate {
ids := make([]uuid.UUID, len(v))
for i := range v {
ids[i] = v[i].ID
}
return _c.AddItemTemplateIDs(ids...)
}
// Mutation returns the GroupMutation object of the builder.
func (gc *GroupCreate) Mutation() *GroupMutation {
return gc.mutation
func (_c *GroupCreate) Mutation() *GroupMutation {
return _c.mutation
}
// Save creates the Group in the database.
func (gc *GroupCreate) Save(ctx context.Context) (*Group, error) {
gc.defaults()
return withHooks(ctx, gc.sqlSave, gc.mutation, gc.hooks)
func (_c *GroupCreate) Save(ctx context.Context) (*Group, error) {
_c.defaults()
return withHooks(ctx, _c.sqlSave, _c.mutation, _c.hooks)
}
// SaveX calls Save and panics if Save returns an error.
func (gc *GroupCreate) SaveX(ctx context.Context) *Group {
v, err := gc.Save(ctx)
func (_c *GroupCreate) SaveX(ctx context.Context) *Group {
v, err := _c.Save(ctx)
if err != nil {
panic(err)
}
@@ -200,66 +216,66 @@ func (gc *GroupCreate) SaveX(ctx context.Context) *Group {
}
// Exec executes the query.
func (gc *GroupCreate) Exec(ctx context.Context) error {
_, err := gc.Save(ctx)
func (_c *GroupCreate) Exec(ctx context.Context) error {
_, err := _c.Save(ctx)
return err
}
// ExecX is like Exec, but panics if an error occurs.
func (gc *GroupCreate) ExecX(ctx context.Context) {
if err := gc.Exec(ctx); err != nil {
func (_c *GroupCreate) ExecX(ctx context.Context) {
if err := _c.Exec(ctx); err != nil {
panic(err)
}
}
// defaults sets the default values of the builder before save.
func (gc *GroupCreate) defaults() {
if _, ok := gc.mutation.CreatedAt(); !ok {
func (_c *GroupCreate) defaults() {
if _, ok := _c.mutation.CreatedAt(); !ok {
v := group.DefaultCreatedAt()
gc.mutation.SetCreatedAt(v)
_c.mutation.SetCreatedAt(v)
}
if _, ok := gc.mutation.UpdatedAt(); !ok {
if _, ok := _c.mutation.UpdatedAt(); !ok {
v := group.DefaultUpdatedAt()
gc.mutation.SetUpdatedAt(v)
_c.mutation.SetUpdatedAt(v)
}
if _, ok := gc.mutation.Currency(); !ok {
if _, ok := _c.mutation.Currency(); !ok {
v := group.DefaultCurrency
gc.mutation.SetCurrency(v)
_c.mutation.SetCurrency(v)
}
if _, ok := gc.mutation.ID(); !ok {
if _, ok := _c.mutation.ID(); !ok {
v := group.DefaultID()
gc.mutation.SetID(v)
_c.mutation.SetID(v)
}
}
// check runs all checks and user-defined validators on the builder.
func (gc *GroupCreate) check() error {
if _, ok := gc.mutation.CreatedAt(); !ok {
func (_c *GroupCreate) check() error {
if _, ok := _c.mutation.CreatedAt(); !ok {
return &ValidationError{Name: "created_at", err: errors.New(`ent: missing required field "Group.created_at"`)}
}
if _, ok := gc.mutation.UpdatedAt(); !ok {
if _, ok := _c.mutation.UpdatedAt(); !ok {
return &ValidationError{Name: "updated_at", err: errors.New(`ent: missing required field "Group.updated_at"`)}
}
if _, ok := gc.mutation.Name(); !ok {
if _, ok := _c.mutation.Name(); !ok {
return &ValidationError{Name: "name", err: errors.New(`ent: missing required field "Group.name"`)}
}
if v, ok := gc.mutation.Name(); ok {
if v, ok := _c.mutation.Name(); ok {
if err := group.NameValidator(v); err != nil {
return &ValidationError{Name: "name", err: fmt.Errorf(`ent: validator failed for field "Group.name": %w`, err)}
}
}
if _, ok := gc.mutation.Currency(); !ok {
if _, ok := _c.mutation.Currency(); !ok {
return &ValidationError{Name: "currency", err: errors.New(`ent: missing required field "Group.currency"`)}
}
return nil
}
func (gc *GroupCreate) sqlSave(ctx context.Context) (*Group, error) {
if err := gc.check(); err != nil {
func (_c *GroupCreate) sqlSave(ctx context.Context) (*Group, error) {
if err := _c.check(); err != nil {
return nil, err
}
_node, _spec := gc.createSpec()
if err := sqlgraph.CreateNode(ctx, gc.driver, _spec); err != nil {
_node, _spec := _c.createSpec()
if err := sqlgraph.CreateNode(ctx, _c.driver, _spec); err != nil {
if sqlgraph.IsConstraintError(err) {
err = &ConstraintError{msg: err.Error(), wrap: err}
}
@@ -272,37 +288,37 @@ func (gc *GroupCreate) sqlSave(ctx context.Context) (*Group, error) {
return nil, err
}
}
gc.mutation.id = &_node.ID
gc.mutation.done = true
_c.mutation.id = &_node.ID
_c.mutation.done = true
return _node, nil
}
func (gc *GroupCreate) createSpec() (*Group, *sqlgraph.CreateSpec) {
func (_c *GroupCreate) createSpec() (*Group, *sqlgraph.CreateSpec) {
var (
_node = &Group{config: gc.config}
_node = &Group{config: _c.config}
_spec = sqlgraph.NewCreateSpec(group.Table, sqlgraph.NewFieldSpec(group.FieldID, field.TypeUUID))
)
if id, ok := gc.mutation.ID(); ok {
if id, ok := _c.mutation.ID(); ok {
_node.ID = id
_spec.ID.Value = &id
}
if value, ok := gc.mutation.CreatedAt(); ok {
if value, ok := _c.mutation.CreatedAt(); ok {
_spec.SetField(group.FieldCreatedAt, field.TypeTime, value)
_node.CreatedAt = value
}
if value, ok := gc.mutation.UpdatedAt(); ok {
if value, ok := _c.mutation.UpdatedAt(); ok {
_spec.SetField(group.FieldUpdatedAt, field.TypeTime, value)
_node.UpdatedAt = value
}
if value, ok := gc.mutation.Name(); ok {
if value, ok := _c.mutation.Name(); ok {
_spec.SetField(group.FieldName, field.TypeString, value)
_node.Name = value
}
if value, ok := gc.mutation.Currency(); ok {
if value, ok := _c.mutation.Currency(); ok {
_spec.SetField(group.FieldCurrency, field.TypeString, value)
_node.Currency = value
}
if nodes := gc.mutation.UsersIDs(); len(nodes) > 0 {
if nodes := _c.mutation.UsersIDs(); len(nodes) > 0 {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.O2M,
Inverse: false,
@@ -318,7 +334,7 @@ func (gc *GroupCreate) createSpec() (*Group, *sqlgraph.CreateSpec) {
}
_spec.Edges = append(_spec.Edges, edge)
}
if nodes := gc.mutation.LocationsIDs(); len(nodes) > 0 {
if nodes := _c.mutation.LocationsIDs(); len(nodes) > 0 {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.O2M,
Inverse: false,
@@ -334,7 +350,7 @@ func (gc *GroupCreate) createSpec() (*Group, *sqlgraph.CreateSpec) {
}
_spec.Edges = append(_spec.Edges, edge)
}
if nodes := gc.mutation.ItemsIDs(); len(nodes) > 0 {
if nodes := _c.mutation.ItemsIDs(); len(nodes) > 0 {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.O2M,
Inverse: false,
@@ -350,7 +366,7 @@ func (gc *GroupCreate) createSpec() (*Group, *sqlgraph.CreateSpec) {
}
_spec.Edges = append(_spec.Edges, edge)
}
if nodes := gc.mutation.LabelsIDs(); len(nodes) > 0 {
if nodes := _c.mutation.LabelsIDs(); len(nodes) > 0 {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.O2M,
Inverse: false,
@@ -366,7 +382,7 @@ func (gc *GroupCreate) createSpec() (*Group, *sqlgraph.CreateSpec) {
}
_spec.Edges = append(_spec.Edges, edge)
}
if nodes := gc.mutation.InvitationTokensIDs(); len(nodes) > 0 {
if nodes := _c.mutation.InvitationTokensIDs(); len(nodes) > 0 {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.O2M,
Inverse: false,
@@ -382,7 +398,7 @@ func (gc *GroupCreate) createSpec() (*Group, *sqlgraph.CreateSpec) {
}
_spec.Edges = append(_spec.Edges, edge)
}
if nodes := gc.mutation.NotifiersIDs(); len(nodes) > 0 {
if nodes := _c.mutation.NotifiersIDs(); len(nodes) > 0 {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.O2M,
Inverse: false,
@@ -398,6 +414,22 @@ func (gc *GroupCreate) createSpec() (*Group, *sqlgraph.CreateSpec) {
}
_spec.Edges = append(_spec.Edges, edge)
}
if nodes := _c.mutation.ItemTemplatesIDs(); len(nodes) > 0 {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.O2M,
Inverse: false,
Table: group.ItemTemplatesTable,
Columns: []string{group.ItemTemplatesColumn},
Bidi: false,
Target: &sqlgraph.EdgeTarget{
IDSpec: sqlgraph.NewFieldSpec(itemtemplate.FieldID, field.TypeUUID),
},
}
for _, k := range nodes {
edge.Target.Nodes = append(edge.Target.Nodes, k)
}
_spec.Edges = append(_spec.Edges, edge)
}
return _node, _spec
}
@@ -409,16 +441,16 @@ type GroupCreateBulk struct {
}
// Save creates the Group entities in the database.
func (gcb *GroupCreateBulk) Save(ctx context.Context) ([]*Group, error) {
if gcb.err != nil {
return nil, gcb.err
func (_c *GroupCreateBulk) Save(ctx context.Context) ([]*Group, error) {
if _c.err != nil {
return nil, _c.err
}
specs := make([]*sqlgraph.CreateSpec, len(gcb.builders))
nodes := make([]*Group, len(gcb.builders))
mutators := make([]Mutator, len(gcb.builders))
for i := range gcb.builders {
specs := make([]*sqlgraph.CreateSpec, len(_c.builders))
nodes := make([]*Group, len(_c.builders))
mutators := make([]Mutator, len(_c.builders))
for i := range _c.builders {
func(i int, root context.Context) {
builder := gcb.builders[i]
builder := _c.builders[i]
builder.defaults()
var mut Mutator = MutateFunc(func(ctx context.Context, m Mutation) (Value, error) {
mutation, ok := m.(*GroupMutation)
@@ -432,11 +464,11 @@ func (gcb *GroupCreateBulk) Save(ctx context.Context) ([]*Group, error) {
var err error
nodes[i], specs[i] = builder.createSpec()
if i < len(mutators)-1 {
_, err = mutators[i+1].Mutate(root, gcb.builders[i+1].mutation)
_, err = mutators[i+1].Mutate(root, _c.builders[i+1].mutation)
} else {
spec := &sqlgraph.BatchCreateSpec{Nodes: specs}
// Invoke the actual operation on the latest mutation in the chain.
if err = sqlgraph.BatchCreate(ctx, gcb.driver, spec); err != nil {
if err = sqlgraph.BatchCreate(ctx, _c.driver, spec); err != nil {
if sqlgraph.IsConstraintError(err) {
err = &ConstraintError{msg: err.Error(), wrap: err}
}
@@ -456,7 +488,7 @@ func (gcb *GroupCreateBulk) Save(ctx context.Context) ([]*Group, error) {
}(i, ctx)
}
if len(mutators) > 0 {
if _, err := mutators[0].Mutate(ctx, gcb.builders[0].mutation); err != nil {
if _, err := mutators[0].Mutate(ctx, _c.builders[0].mutation); err != nil {
return nil, err
}
}
@@ -464,8 +496,8 @@ func (gcb *GroupCreateBulk) Save(ctx context.Context) ([]*Group, error) {
}
// SaveX is like Save, but panics if an error occurs.
func (gcb *GroupCreateBulk) SaveX(ctx context.Context) []*Group {
v, err := gcb.Save(ctx)
func (_c *GroupCreateBulk) SaveX(ctx context.Context) []*Group {
v, err := _c.Save(ctx)
if err != nil {
panic(err)
}
@@ -473,14 +505,14 @@ func (gcb *GroupCreateBulk) SaveX(ctx context.Context) []*Group {
}
// Exec executes the query.
func (gcb *GroupCreateBulk) Exec(ctx context.Context) error {
_, err := gcb.Save(ctx)
func (_c *GroupCreateBulk) Exec(ctx context.Context) error {
_, err := _c.Save(ctx)
return err
}
// ExecX is like Exec, but panics if an error occurs.
func (gcb *GroupCreateBulk) ExecX(ctx context.Context) {
if err := gcb.Exec(ctx); err != nil {
func (_c *GroupCreateBulk) ExecX(ctx context.Context) {
if err := _c.Exec(ctx); err != nil {
panic(err)
}
}
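
GroupCreate now accepts item templates at creation time through AddItemTemplateIDs and AddItemTemplates. A minimal sketch, assuming previously created *ent.ItemTemplate values; name is the only required field here, since currency and the timestamps fall back to the generated defaults shown in defaults() and check() above, and the group name used is purely illustrative:

// createGroupWithTemplates builds a group and wires the given templates to it
// in a single create; the edge rows are written by the createSpec shown above.
func createGroupWithTemplates(ctx context.Context, client *ent.Client, tmpls ...*ent.ItemTemplate) (*ent.Group, error) {
	return client.Group.Create().
		SetName("Household").
		AddItemTemplates(tmpls...).
		Save(ctx)
}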

View File

@@ -20,56 +20,56 @@ type GroupDelete struct {
}
// Where appends a list predicates to the GroupDelete builder.
func (gd *GroupDelete) Where(ps ...predicate.Group) *GroupDelete {
gd.mutation.Where(ps...)
return gd
func (_d *GroupDelete) Where(ps ...predicate.Group) *GroupDelete {
_d.mutation.Where(ps...)
return _d
}
// Exec executes the deletion query and returns how many vertices were deleted.
func (gd *GroupDelete) Exec(ctx context.Context) (int, error) {
return withHooks(ctx, gd.sqlExec, gd.mutation, gd.hooks)
func (_d *GroupDelete) Exec(ctx context.Context) (int, error) {
return withHooks(ctx, _d.sqlExec, _d.mutation, _d.hooks)
}
// ExecX is like Exec, but panics if an error occurs.
func (gd *GroupDelete) ExecX(ctx context.Context) int {
n, err := gd.Exec(ctx)
func (_d *GroupDelete) ExecX(ctx context.Context) int {
n, err := _d.Exec(ctx)
if err != nil {
panic(err)
}
return n
}
func (gd *GroupDelete) sqlExec(ctx context.Context) (int, error) {
func (_d *GroupDelete) sqlExec(ctx context.Context) (int, error) {
_spec := sqlgraph.NewDeleteSpec(group.Table, sqlgraph.NewFieldSpec(group.FieldID, field.TypeUUID))
if ps := gd.mutation.predicates; len(ps) > 0 {
if ps := _d.mutation.predicates; len(ps) > 0 {
_spec.Predicate = func(selector *sql.Selector) {
for i := range ps {
ps[i](selector)
}
}
}
affected, err := sqlgraph.DeleteNodes(ctx, gd.driver, _spec)
affected, err := sqlgraph.DeleteNodes(ctx, _d.driver, _spec)
if err != nil && sqlgraph.IsConstraintError(err) {
err = &ConstraintError{msg: err.Error(), wrap: err}
}
gd.mutation.done = true
_d.mutation.done = true
return affected, err
}
// GroupDeleteOne is the builder for deleting a single Group entity.
type GroupDeleteOne struct {
gd *GroupDelete
_d *GroupDelete
}
// Where appends a list predicates to the GroupDelete builder.
func (gdo *GroupDeleteOne) Where(ps ...predicate.Group) *GroupDeleteOne {
gdo.gd.mutation.Where(ps...)
return gdo
func (_d *GroupDeleteOne) Where(ps ...predicate.Group) *GroupDeleteOne {
_d._d.mutation.Where(ps...)
return _d
}
// Exec executes the deletion query.
func (gdo *GroupDeleteOne) Exec(ctx context.Context) error {
n, err := gdo.gd.Exec(ctx)
func (_d *GroupDeleteOne) Exec(ctx context.Context) error {
n, err := _d._d.Exec(ctx)
switch {
case err != nil:
return err
@@ -81,8 +81,8 @@ func (gdo *GroupDeleteOne) Exec(ctx context.Context) error {
}
// ExecX is like Exec, but panics if an error occurs.
func (gdo *GroupDeleteOne) ExecX(ctx context.Context) {
if err := gdo.Exec(ctx); err != nil {
func (_d *GroupDeleteOne) ExecX(ctx context.Context) {
if err := _d.Exec(ctx); err != nil {
panic(err)
}
}
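
The delete builder keeps the same Where/Exec surface after the rename. A sketch, assuming the generated group predicate package; NameEQ is the string-field predicate ent normally generates for the name column and is an assumption here, since it does not appear in this diff:

// purgeGroupsByName removes every group whose name matches exactly and
// reports how many rows were deleted.
func purgeGroupsByName(ctx context.Context, client *ent.Client, name string) (int, error) {
	return client.Group.Delete().
		Where(group.NameEQ(name)).
		Exec(ctx)
}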

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -80,7 +80,7 @@ func (*GroupInvitationToken) scanValues(columns []string) ([]any, error) {
// assignValues assigns the values that were returned from sql.Rows (after scanning)
// to the GroupInvitationToken fields.
func (git *GroupInvitationToken) assignValues(columns []string, values []any) error {
func (_m *GroupInvitationToken) assignValues(columns []string, values []any) error {
if m, n := len(values), len(columns); m < n {
return fmt.Errorf("mismatch number of scan values: %d != %d", m, n)
}
@@ -90,47 +90,47 @@ func (git *GroupInvitationToken) assignValues(columns []string, values []any) er
if value, ok := values[i].(*uuid.UUID); !ok {
return fmt.Errorf("unexpected type %T for field id", values[i])
} else if value != nil {
git.ID = *value
_m.ID = *value
}
case groupinvitationtoken.FieldCreatedAt:
if value, ok := values[i].(*sql.NullTime); !ok {
return fmt.Errorf("unexpected type %T for field created_at", values[i])
} else if value.Valid {
git.CreatedAt = value.Time
_m.CreatedAt = value.Time
}
case groupinvitationtoken.FieldUpdatedAt:
if value, ok := values[i].(*sql.NullTime); !ok {
return fmt.Errorf("unexpected type %T for field updated_at", values[i])
} else if value.Valid {
git.UpdatedAt = value.Time
_m.UpdatedAt = value.Time
}
case groupinvitationtoken.FieldToken:
if value, ok := values[i].(*[]byte); !ok {
return fmt.Errorf("unexpected type %T for field token", values[i])
} else if value != nil {
git.Token = *value
_m.Token = *value
}
case groupinvitationtoken.FieldExpiresAt:
if value, ok := values[i].(*sql.NullTime); !ok {
return fmt.Errorf("unexpected type %T for field expires_at", values[i])
} else if value.Valid {
git.ExpiresAt = value.Time
_m.ExpiresAt = value.Time
}
case groupinvitationtoken.FieldUses:
if value, ok := values[i].(*sql.NullInt64); !ok {
return fmt.Errorf("unexpected type %T for field uses", values[i])
} else if value.Valid {
git.Uses = int(value.Int64)
_m.Uses = int(value.Int64)
}
case groupinvitationtoken.ForeignKeys[0]:
if value, ok := values[i].(*sql.NullScanner); !ok {
return fmt.Errorf("unexpected type %T for field group_invitation_tokens", values[i])
} else if value.Valid {
git.group_invitation_tokens = new(uuid.UUID)
*git.group_invitation_tokens = *value.S.(*uuid.UUID)
_m.group_invitation_tokens = new(uuid.UUID)
*_m.group_invitation_tokens = *value.S.(*uuid.UUID)
}
default:
git.selectValues.Set(columns[i], values[i])
_m.selectValues.Set(columns[i], values[i])
}
}
return nil
@@ -138,52 +138,52 @@ func (git *GroupInvitationToken) assignValues(columns []string, values []any) er
// Value returns the ent.Value that was dynamically selected and assigned to the GroupInvitationToken.
// This includes values selected through modifiers, order, etc.
func (git *GroupInvitationToken) Value(name string) (ent.Value, error) {
return git.selectValues.Get(name)
func (_m *GroupInvitationToken) Value(name string) (ent.Value, error) {
return _m.selectValues.Get(name)
}
// QueryGroup queries the "group" edge of the GroupInvitationToken entity.
func (git *GroupInvitationToken) QueryGroup() *GroupQuery {
return NewGroupInvitationTokenClient(git.config).QueryGroup(git)
func (_m *GroupInvitationToken) QueryGroup() *GroupQuery {
return NewGroupInvitationTokenClient(_m.config).QueryGroup(_m)
}
// Update returns a builder for updating this GroupInvitationToken.
// Note that you need to call GroupInvitationToken.Unwrap() before calling this method if this GroupInvitationToken
// was returned from a transaction, and the transaction was committed or rolled back.
func (git *GroupInvitationToken) Update() *GroupInvitationTokenUpdateOne {
return NewGroupInvitationTokenClient(git.config).UpdateOne(git)
func (_m *GroupInvitationToken) Update() *GroupInvitationTokenUpdateOne {
return NewGroupInvitationTokenClient(_m.config).UpdateOne(_m)
}
// Unwrap unwraps the GroupInvitationToken entity that was returned from a transaction after it was closed,
// so that all future queries will be executed through the driver which created the transaction.
func (git *GroupInvitationToken) Unwrap() *GroupInvitationToken {
_tx, ok := git.config.driver.(*txDriver)
func (_m *GroupInvitationToken) Unwrap() *GroupInvitationToken {
_tx, ok := _m.config.driver.(*txDriver)
if !ok {
panic("ent: GroupInvitationToken is not a transactional entity")
}
git.config.driver = _tx.drv
return git
_m.config.driver = _tx.drv
return _m
}
// String implements the fmt.Stringer.
func (git *GroupInvitationToken) String() string {
func (_m *GroupInvitationToken) String() string {
var builder strings.Builder
builder.WriteString("GroupInvitationToken(")
builder.WriteString(fmt.Sprintf("id=%v, ", git.ID))
builder.WriteString(fmt.Sprintf("id=%v, ", _m.ID))
builder.WriteString("created_at=")
builder.WriteString(git.CreatedAt.Format(time.ANSIC))
builder.WriteString(_m.CreatedAt.Format(time.ANSIC))
builder.WriteString(", ")
builder.WriteString("updated_at=")
builder.WriteString(git.UpdatedAt.Format(time.ANSIC))
builder.WriteString(_m.UpdatedAt.Format(time.ANSIC))
builder.WriteString(", ")
builder.WriteString("token=")
builder.WriteString(fmt.Sprintf("%v", git.Token))
builder.WriteString(fmt.Sprintf("%v", _m.Token))
builder.WriteString(", ")
builder.WriteString("expires_at=")
builder.WriteString(git.ExpiresAt.Format(time.ANSIC))
builder.WriteString(_m.ExpiresAt.Format(time.ANSIC))
builder.WriteString(", ")
builder.WriteString("uses=")
builder.WriteString(fmt.Sprintf("%v", git.Uses))
builder.WriteString(fmt.Sprintf("%v", _m.Uses))
builder.WriteByte(')')
return builder.String()
}

View File

@@ -23,114 +23,114 @@ type GroupInvitationTokenCreate struct {
}
// SetCreatedAt sets the "created_at" field.
func (gitc *GroupInvitationTokenCreate) SetCreatedAt(t time.Time) *GroupInvitationTokenCreate {
gitc.mutation.SetCreatedAt(t)
return gitc
func (_c *GroupInvitationTokenCreate) SetCreatedAt(v time.Time) *GroupInvitationTokenCreate {
_c.mutation.SetCreatedAt(v)
return _c
}
// SetNillableCreatedAt sets the "created_at" field if the given value is not nil.
func (gitc *GroupInvitationTokenCreate) SetNillableCreatedAt(t *time.Time) *GroupInvitationTokenCreate {
if t != nil {
gitc.SetCreatedAt(*t)
func (_c *GroupInvitationTokenCreate) SetNillableCreatedAt(v *time.Time) *GroupInvitationTokenCreate {
if v != nil {
_c.SetCreatedAt(*v)
}
return gitc
return _c
}
// SetUpdatedAt sets the "updated_at" field.
func (gitc *GroupInvitationTokenCreate) SetUpdatedAt(t time.Time) *GroupInvitationTokenCreate {
gitc.mutation.SetUpdatedAt(t)
return gitc
func (_c *GroupInvitationTokenCreate) SetUpdatedAt(v time.Time) *GroupInvitationTokenCreate {
_c.mutation.SetUpdatedAt(v)
return _c
}
// SetNillableUpdatedAt sets the "updated_at" field if the given value is not nil.
func (gitc *GroupInvitationTokenCreate) SetNillableUpdatedAt(t *time.Time) *GroupInvitationTokenCreate {
if t != nil {
gitc.SetUpdatedAt(*t)
func (_c *GroupInvitationTokenCreate) SetNillableUpdatedAt(v *time.Time) *GroupInvitationTokenCreate {
if v != nil {
_c.SetUpdatedAt(*v)
}
return gitc
return _c
}
// SetToken sets the "token" field.
func (gitc *GroupInvitationTokenCreate) SetToken(b []byte) *GroupInvitationTokenCreate {
gitc.mutation.SetToken(b)
return gitc
func (_c *GroupInvitationTokenCreate) SetToken(v []byte) *GroupInvitationTokenCreate {
_c.mutation.SetToken(v)
return _c
}
// SetExpiresAt sets the "expires_at" field.
func (gitc *GroupInvitationTokenCreate) SetExpiresAt(t time.Time) *GroupInvitationTokenCreate {
gitc.mutation.SetExpiresAt(t)
return gitc
func (_c *GroupInvitationTokenCreate) SetExpiresAt(v time.Time) *GroupInvitationTokenCreate {
_c.mutation.SetExpiresAt(v)
return _c
}
// SetNillableExpiresAt sets the "expires_at" field if the given value is not nil.
func (gitc *GroupInvitationTokenCreate) SetNillableExpiresAt(t *time.Time) *GroupInvitationTokenCreate {
if t != nil {
gitc.SetExpiresAt(*t)
func (_c *GroupInvitationTokenCreate) SetNillableExpiresAt(v *time.Time) *GroupInvitationTokenCreate {
if v != nil {
_c.SetExpiresAt(*v)
}
return gitc
return _c
}
// SetUses sets the "uses" field.
func (gitc *GroupInvitationTokenCreate) SetUses(i int) *GroupInvitationTokenCreate {
gitc.mutation.SetUses(i)
return gitc
func (_c *GroupInvitationTokenCreate) SetUses(v int) *GroupInvitationTokenCreate {
_c.mutation.SetUses(v)
return _c
}
// SetNillableUses sets the "uses" field if the given value is not nil.
func (gitc *GroupInvitationTokenCreate) SetNillableUses(i *int) *GroupInvitationTokenCreate {
if i != nil {
gitc.SetUses(*i)
func (_c *GroupInvitationTokenCreate) SetNillableUses(v *int) *GroupInvitationTokenCreate {
if v != nil {
_c.SetUses(*v)
}
return gitc
return _c
}
// SetID sets the "id" field.
func (gitc *GroupInvitationTokenCreate) SetID(u uuid.UUID) *GroupInvitationTokenCreate {
gitc.mutation.SetID(u)
return gitc
func (_c *GroupInvitationTokenCreate) SetID(v uuid.UUID) *GroupInvitationTokenCreate {
_c.mutation.SetID(v)
return _c
}
// SetNillableID sets the "id" field if the given value is not nil.
func (gitc *GroupInvitationTokenCreate) SetNillableID(u *uuid.UUID) *GroupInvitationTokenCreate {
if u != nil {
gitc.SetID(*u)
func (_c *GroupInvitationTokenCreate) SetNillableID(v *uuid.UUID) *GroupInvitationTokenCreate {
if v != nil {
_c.SetID(*v)
}
return gitc
return _c
}
// SetGroupID sets the "group" edge to the Group entity by ID.
func (gitc *GroupInvitationTokenCreate) SetGroupID(id uuid.UUID) *GroupInvitationTokenCreate {
gitc.mutation.SetGroupID(id)
return gitc
func (_c *GroupInvitationTokenCreate) SetGroupID(id uuid.UUID) *GroupInvitationTokenCreate {
_c.mutation.SetGroupID(id)
return _c
}
// SetNillableGroupID sets the "group" edge to the Group entity by ID if the given value is not nil.
func (gitc *GroupInvitationTokenCreate) SetNillableGroupID(id *uuid.UUID) *GroupInvitationTokenCreate {
func (_c *GroupInvitationTokenCreate) SetNillableGroupID(id *uuid.UUID) *GroupInvitationTokenCreate {
if id != nil {
gitc = gitc.SetGroupID(*id)
_c = _c.SetGroupID(*id)
}
return gitc
return _c
}
// SetGroup sets the "group" edge to the Group entity.
func (gitc *GroupInvitationTokenCreate) SetGroup(g *Group) *GroupInvitationTokenCreate {
return gitc.SetGroupID(g.ID)
func (_c *GroupInvitationTokenCreate) SetGroup(v *Group) *GroupInvitationTokenCreate {
return _c.SetGroupID(v.ID)
}
// Mutation returns the GroupInvitationTokenMutation object of the builder.
func (gitc *GroupInvitationTokenCreate) Mutation() *GroupInvitationTokenMutation {
return gitc.mutation
func (_c *GroupInvitationTokenCreate) Mutation() *GroupInvitationTokenMutation {
return _c.mutation
}
// Save creates the GroupInvitationToken in the database.
func (gitc *GroupInvitationTokenCreate) Save(ctx context.Context) (*GroupInvitationToken, error) {
gitc.defaults()
return withHooks(ctx, gitc.sqlSave, gitc.mutation, gitc.hooks)
func (_c *GroupInvitationTokenCreate) Save(ctx context.Context) (*GroupInvitationToken, error) {
_c.defaults()
return withHooks(ctx, _c.sqlSave, _c.mutation, _c.hooks)
}
// SaveX calls Save and panics if Save returns an error.
func (gitc *GroupInvitationTokenCreate) SaveX(ctx context.Context) *GroupInvitationToken {
v, err := gitc.Save(ctx)
func (_c *GroupInvitationTokenCreate) SaveX(ctx context.Context) *GroupInvitationToken {
v, err := _c.Save(ctx)
if err != nil {
panic(err)
}
@@ -138,68 +138,68 @@ func (gitc *GroupInvitationTokenCreate) SaveX(ctx context.Context) *GroupInvitat
}
// Exec executes the query.
func (gitc *GroupInvitationTokenCreate) Exec(ctx context.Context) error {
_, err := gitc.Save(ctx)
func (_c *GroupInvitationTokenCreate) Exec(ctx context.Context) error {
_, err := _c.Save(ctx)
return err
}
// ExecX is like Exec, but panics if an error occurs.
func (gitc *GroupInvitationTokenCreate) ExecX(ctx context.Context) {
if err := gitc.Exec(ctx); err != nil {
func (_c *GroupInvitationTokenCreate) ExecX(ctx context.Context) {
if err := _c.Exec(ctx); err != nil {
panic(err)
}
}
// defaults sets the default values of the builder before save.
func (gitc *GroupInvitationTokenCreate) defaults() {
if _, ok := gitc.mutation.CreatedAt(); !ok {
func (_c *GroupInvitationTokenCreate) defaults() {
if _, ok := _c.mutation.CreatedAt(); !ok {
v := groupinvitationtoken.DefaultCreatedAt()
gitc.mutation.SetCreatedAt(v)
_c.mutation.SetCreatedAt(v)
}
if _, ok := gitc.mutation.UpdatedAt(); !ok {
if _, ok := _c.mutation.UpdatedAt(); !ok {
v := groupinvitationtoken.DefaultUpdatedAt()
gitc.mutation.SetUpdatedAt(v)
_c.mutation.SetUpdatedAt(v)
}
if _, ok := gitc.mutation.ExpiresAt(); !ok {
if _, ok := _c.mutation.ExpiresAt(); !ok {
v := groupinvitationtoken.DefaultExpiresAt()
gitc.mutation.SetExpiresAt(v)
_c.mutation.SetExpiresAt(v)
}
if _, ok := gitc.mutation.Uses(); !ok {
if _, ok := _c.mutation.Uses(); !ok {
v := groupinvitationtoken.DefaultUses
gitc.mutation.SetUses(v)
_c.mutation.SetUses(v)
}
if _, ok := gitc.mutation.ID(); !ok {
if _, ok := _c.mutation.ID(); !ok {
v := groupinvitationtoken.DefaultID()
gitc.mutation.SetID(v)
_c.mutation.SetID(v)
}
}
// check runs all checks and user-defined validators on the builder.
func (gitc *GroupInvitationTokenCreate) check() error {
if _, ok := gitc.mutation.CreatedAt(); !ok {
func (_c *GroupInvitationTokenCreate) check() error {
if _, ok := _c.mutation.CreatedAt(); !ok {
return &ValidationError{Name: "created_at", err: errors.New(`ent: missing required field "GroupInvitationToken.created_at"`)}
}
if _, ok := gitc.mutation.UpdatedAt(); !ok {
if _, ok := _c.mutation.UpdatedAt(); !ok {
return &ValidationError{Name: "updated_at", err: errors.New(`ent: missing required field "GroupInvitationToken.updated_at"`)}
}
if _, ok := gitc.mutation.Token(); !ok {
if _, ok := _c.mutation.Token(); !ok {
return &ValidationError{Name: "token", err: errors.New(`ent: missing required field "GroupInvitationToken.token"`)}
}
if _, ok := gitc.mutation.ExpiresAt(); !ok {
if _, ok := _c.mutation.ExpiresAt(); !ok {
return &ValidationError{Name: "expires_at", err: errors.New(`ent: missing required field "GroupInvitationToken.expires_at"`)}
}
if _, ok := gitc.mutation.Uses(); !ok {
if _, ok := _c.mutation.Uses(); !ok {
return &ValidationError{Name: "uses", err: errors.New(`ent: missing required field "GroupInvitationToken.uses"`)}
}
return nil
}
func (gitc *GroupInvitationTokenCreate) sqlSave(ctx context.Context) (*GroupInvitationToken, error) {
if err := gitc.check(); err != nil {
func (_c *GroupInvitationTokenCreate) sqlSave(ctx context.Context) (*GroupInvitationToken, error) {
if err := _c.check(); err != nil {
return nil, err
}
_node, _spec := gitc.createSpec()
if err := sqlgraph.CreateNode(ctx, gitc.driver, _spec); err != nil {
_node, _spec := _c.createSpec()
if err := sqlgraph.CreateNode(ctx, _c.driver, _spec); err != nil {
if sqlgraph.IsConstraintError(err) {
err = &ConstraintError{msg: err.Error(), wrap: err}
}
@@ -212,41 +212,41 @@ func (gitc *GroupInvitationTokenCreate) sqlSave(ctx context.Context) (*GroupInvi
return nil, err
}
}
gitc.mutation.id = &_node.ID
gitc.mutation.done = true
_c.mutation.id = &_node.ID
_c.mutation.done = true
return _node, nil
}
func (gitc *GroupInvitationTokenCreate) createSpec() (*GroupInvitationToken, *sqlgraph.CreateSpec) {
func (_c *GroupInvitationTokenCreate) createSpec() (*GroupInvitationToken, *sqlgraph.CreateSpec) {
var (
_node = &GroupInvitationToken{config: gitc.config}
_node = &GroupInvitationToken{config: _c.config}
_spec = sqlgraph.NewCreateSpec(groupinvitationtoken.Table, sqlgraph.NewFieldSpec(groupinvitationtoken.FieldID, field.TypeUUID))
)
if id, ok := gitc.mutation.ID(); ok {
if id, ok := _c.mutation.ID(); ok {
_node.ID = id
_spec.ID.Value = &id
}
if value, ok := gitc.mutation.CreatedAt(); ok {
if value, ok := _c.mutation.CreatedAt(); ok {
_spec.SetField(groupinvitationtoken.FieldCreatedAt, field.TypeTime, value)
_node.CreatedAt = value
}
if value, ok := gitc.mutation.UpdatedAt(); ok {
if value, ok := _c.mutation.UpdatedAt(); ok {
_spec.SetField(groupinvitationtoken.FieldUpdatedAt, field.TypeTime, value)
_node.UpdatedAt = value
}
if value, ok := gitc.mutation.Token(); ok {
if value, ok := _c.mutation.Token(); ok {
_spec.SetField(groupinvitationtoken.FieldToken, field.TypeBytes, value)
_node.Token = value
}
if value, ok := gitc.mutation.ExpiresAt(); ok {
if value, ok := _c.mutation.ExpiresAt(); ok {
_spec.SetField(groupinvitationtoken.FieldExpiresAt, field.TypeTime, value)
_node.ExpiresAt = value
}
if value, ok := gitc.mutation.Uses(); ok {
if value, ok := _c.mutation.Uses(); ok {
_spec.SetField(groupinvitationtoken.FieldUses, field.TypeInt, value)
_node.Uses = value
}
if nodes := gitc.mutation.GroupIDs(); len(nodes) > 0 {
if nodes := _c.mutation.GroupIDs(); len(nodes) > 0 {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.M2O,
Inverse: true,
@@ -274,16 +274,16 @@ type GroupInvitationTokenCreateBulk struct {
}
// Save creates the GroupInvitationToken entities in the database.
func (gitcb *GroupInvitationTokenCreateBulk) Save(ctx context.Context) ([]*GroupInvitationToken, error) {
if gitcb.err != nil {
return nil, gitcb.err
func (_c *GroupInvitationTokenCreateBulk) Save(ctx context.Context) ([]*GroupInvitationToken, error) {
if _c.err != nil {
return nil, _c.err
}
specs := make([]*sqlgraph.CreateSpec, len(gitcb.builders))
nodes := make([]*GroupInvitationToken, len(gitcb.builders))
mutators := make([]Mutator, len(gitcb.builders))
for i := range gitcb.builders {
specs := make([]*sqlgraph.CreateSpec, len(_c.builders))
nodes := make([]*GroupInvitationToken, len(_c.builders))
mutators := make([]Mutator, len(_c.builders))
for i := range _c.builders {
func(i int, root context.Context) {
builder := gitcb.builders[i]
builder := _c.builders[i]
builder.defaults()
var mut Mutator = MutateFunc(func(ctx context.Context, m Mutation) (Value, error) {
mutation, ok := m.(*GroupInvitationTokenMutation)
@@ -297,11 +297,11 @@ func (gitcb *GroupInvitationTokenCreateBulk) Save(ctx context.Context) ([]*Group
var err error
nodes[i], specs[i] = builder.createSpec()
if i < len(mutators)-1 {
_, err = mutators[i+1].Mutate(root, gitcb.builders[i+1].mutation)
_, err = mutators[i+1].Mutate(root, _c.builders[i+1].mutation)
} else {
spec := &sqlgraph.BatchCreateSpec{Nodes: specs}
// Invoke the actual operation on the latest mutation in the chain.
if err = sqlgraph.BatchCreate(ctx, gitcb.driver, spec); err != nil {
if err = sqlgraph.BatchCreate(ctx, _c.driver, spec); err != nil {
if sqlgraph.IsConstraintError(err) {
err = &ConstraintError{msg: err.Error(), wrap: err}
}
@@ -321,7 +321,7 @@ func (gitcb *GroupInvitationTokenCreateBulk) Save(ctx context.Context) ([]*Group
}(i, ctx)
}
if len(mutators) > 0 {
if _, err := mutators[0].Mutate(ctx, gitcb.builders[0].mutation); err != nil {
if _, err := mutators[0].Mutate(ctx, _c.builders[0].mutation); err != nil {
return nil, err
}
}
@@ -329,8 +329,8 @@ func (gitcb *GroupInvitationTokenCreateBulk) Save(ctx context.Context) ([]*Group
}
// SaveX is like Save, but panics if an error occurs.
func (gitcb *GroupInvitationTokenCreateBulk) SaveX(ctx context.Context) []*GroupInvitationToken {
v, err := gitcb.Save(ctx)
func (_c *GroupInvitationTokenCreateBulk) SaveX(ctx context.Context) []*GroupInvitationToken {
v, err := _c.Save(ctx)
if err != nil {
panic(err)
}
@@ -338,14 +338,14 @@ func (gitcb *GroupInvitationTokenCreateBulk) SaveX(ctx context.Context) []*Group
}
// Exec executes the query.
func (gitcb *GroupInvitationTokenCreateBulk) Exec(ctx context.Context) error {
_, err := gitcb.Save(ctx)
func (_c *GroupInvitationTokenCreateBulk) Exec(ctx context.Context) error {
_, err := _c.Save(ctx)
return err
}
// ExecX is like Exec, but panics if an error occurs.
func (gitcb *GroupInvitationTokenCreateBulk) ExecX(ctx context.Context) {
if err := gitcb.Exec(ctx); err != nil {
func (_c *GroupInvitationTokenCreateBulk) ExecX(ctx context.Context) {
if err := _c.Exec(ctx); err != nil {
panic(err)
}
}
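For orientation, a minimal usage sketch of the create builders changed above. It assumes the project's generated ent and groupinvitationtoken packages plus context, time, and uuid are in scope; client, ctx, groupID, and the token values are illustrative names, not identifiers taken from this diff.

// Sketch only: single and bulk creation through the builders whose receivers
// were renamed above (SetToken, SetExpiresAt, SetUses, Save, CreateBulk).
func createInvitationToken(ctx context.Context, client *ent.Client, groupID uuid.UUID) (*ent.GroupInvitationToken, error) {
    return client.GroupInvitationToken.Create().
        SetToken([]byte("example-token")).
        SetExpiresAt(time.Now().Add(24 * time.Hour)).
        SetUses(5).
        SetGroupID(groupID).
        Save(ctx)
}

// Bulk creation goes through GroupInvitationTokenCreateBulk.Save, shown above.
func createInvitationTokens(ctx context.Context, client *ent.Client, groupID uuid.UUID, tokens [][]byte) ([]*ent.GroupInvitationToken, error) {
    builders := make([]*ent.GroupInvitationTokenCreate, len(tokens))
    for i, t := range tokens {
        builders[i] = client.GroupInvitationToken.Create().
            SetToken(t).
            SetExpiresAt(time.Now().Add(24 * time.Hour)).
            SetUses(1).
            SetGroupID(groupID)
    }
    return client.GroupInvitationToken.CreateBulk(builders...).Save(ctx)
}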


@@ -20,56 +20,56 @@ type GroupInvitationTokenDelete struct {
}
// Where appends a list predicates to the GroupInvitationTokenDelete builder.
func (gitd *GroupInvitationTokenDelete) Where(ps ...predicate.GroupInvitationToken) *GroupInvitationTokenDelete {
gitd.mutation.Where(ps...)
return gitd
func (_d *GroupInvitationTokenDelete) Where(ps ...predicate.GroupInvitationToken) *GroupInvitationTokenDelete {
_d.mutation.Where(ps...)
return _d
}
// Exec executes the deletion query and returns how many vertices were deleted.
func (gitd *GroupInvitationTokenDelete) Exec(ctx context.Context) (int, error) {
return withHooks(ctx, gitd.sqlExec, gitd.mutation, gitd.hooks)
func (_d *GroupInvitationTokenDelete) Exec(ctx context.Context) (int, error) {
return withHooks(ctx, _d.sqlExec, _d.mutation, _d.hooks)
}
// ExecX is like Exec, but panics if an error occurs.
func (gitd *GroupInvitationTokenDelete) ExecX(ctx context.Context) int {
n, err := gitd.Exec(ctx)
func (_d *GroupInvitationTokenDelete) ExecX(ctx context.Context) int {
n, err := _d.Exec(ctx)
if err != nil {
panic(err)
}
return n
}
func (gitd *GroupInvitationTokenDelete) sqlExec(ctx context.Context) (int, error) {
func (_d *GroupInvitationTokenDelete) sqlExec(ctx context.Context) (int, error) {
_spec := sqlgraph.NewDeleteSpec(groupinvitationtoken.Table, sqlgraph.NewFieldSpec(groupinvitationtoken.FieldID, field.TypeUUID))
if ps := gitd.mutation.predicates; len(ps) > 0 {
if ps := _d.mutation.predicates; len(ps) > 0 {
_spec.Predicate = func(selector *sql.Selector) {
for i := range ps {
ps[i](selector)
}
}
}
affected, err := sqlgraph.DeleteNodes(ctx, gitd.driver, _spec)
affected, err := sqlgraph.DeleteNodes(ctx, _d.driver, _spec)
if err != nil && sqlgraph.IsConstraintError(err) {
err = &ConstraintError{msg: err.Error(), wrap: err}
}
gitd.mutation.done = true
_d.mutation.done = true
return affected, err
}
// GroupInvitationTokenDeleteOne is the builder for deleting a single GroupInvitationToken entity.
type GroupInvitationTokenDeleteOne struct {
gitd *GroupInvitationTokenDelete
_d *GroupInvitationTokenDelete
}
// Where appends a list predicates to the GroupInvitationTokenDelete builder.
func (gitdo *GroupInvitationTokenDeleteOne) Where(ps ...predicate.GroupInvitationToken) *GroupInvitationTokenDeleteOne {
gitdo.gitd.mutation.Where(ps...)
return gitdo
func (_d *GroupInvitationTokenDeleteOne) Where(ps ...predicate.GroupInvitationToken) *GroupInvitationTokenDeleteOne {
_d._d.mutation.Where(ps...)
return _d
}
// Exec executes the deletion query.
func (gitdo *GroupInvitationTokenDeleteOne) Exec(ctx context.Context) error {
n, err := gitdo.gitd.Exec(ctx)
func (_d *GroupInvitationTokenDeleteOne) Exec(ctx context.Context) error {
n, err := _d._d.Exec(ctx)
switch {
case err != nil:
return err
@@ -81,8 +81,8 @@ func (gitdo *GroupInvitationTokenDeleteOne) Exec(ctx context.Context) error {
}
// ExecX is like Exec, but panics if an error occurs.
func (gitdo *GroupInvitationTokenDeleteOne) ExecX(ctx context.Context) {
if err := gitdo.Exec(ctx); err != nil {
func (_d *GroupInvitationTokenDeleteOne) ExecX(ctx context.Context) {
if err := _d.Exec(ctx); err != nil {
panic(err)
}
}
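A hedged sketch of the delete builder above. The groupinvitationtoken.ExpiresAtLT predicate is assumed from ent's standard generated predicate package and does not appear in this hunk; client and ctx are placeholders.

// Sketch only: delete expired invitation tokens. Exec returns the number of
// deleted rows, matching the sqlExec path shown in the diff.
func pruneExpiredTokens(ctx context.Context, client *ent.Client) (int, error) {
    return client.GroupInvitationToken.
        Delete().
        Where(groupinvitationtoken.ExpiresAtLT(time.Now())).
        Exec(ctx)
}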


@@ -32,44 +32,44 @@ type GroupInvitationTokenQuery struct {
}
// Where adds a new predicate for the GroupInvitationTokenQuery builder.
func (gitq *GroupInvitationTokenQuery) Where(ps ...predicate.GroupInvitationToken) *GroupInvitationTokenQuery {
gitq.predicates = append(gitq.predicates, ps...)
return gitq
func (_q *GroupInvitationTokenQuery) Where(ps ...predicate.GroupInvitationToken) *GroupInvitationTokenQuery {
_q.predicates = append(_q.predicates, ps...)
return _q
}
// Limit the number of records to be returned by this query.
func (gitq *GroupInvitationTokenQuery) Limit(limit int) *GroupInvitationTokenQuery {
gitq.ctx.Limit = &limit
return gitq
func (_q *GroupInvitationTokenQuery) Limit(limit int) *GroupInvitationTokenQuery {
_q.ctx.Limit = &limit
return _q
}
// Offset to start from.
func (gitq *GroupInvitationTokenQuery) Offset(offset int) *GroupInvitationTokenQuery {
gitq.ctx.Offset = &offset
return gitq
func (_q *GroupInvitationTokenQuery) Offset(offset int) *GroupInvitationTokenQuery {
_q.ctx.Offset = &offset
return _q
}
// Unique configures the query builder to filter duplicate records on query.
// By default, unique is set to true, and can be disabled using this method.
func (gitq *GroupInvitationTokenQuery) Unique(unique bool) *GroupInvitationTokenQuery {
gitq.ctx.Unique = &unique
return gitq
func (_q *GroupInvitationTokenQuery) Unique(unique bool) *GroupInvitationTokenQuery {
_q.ctx.Unique = &unique
return _q
}
// Order specifies how the records should be ordered.
func (gitq *GroupInvitationTokenQuery) Order(o ...groupinvitationtoken.OrderOption) *GroupInvitationTokenQuery {
gitq.order = append(gitq.order, o...)
return gitq
func (_q *GroupInvitationTokenQuery) Order(o ...groupinvitationtoken.OrderOption) *GroupInvitationTokenQuery {
_q.order = append(_q.order, o...)
return _q
}
// QueryGroup chains the current query on the "group" edge.
func (gitq *GroupInvitationTokenQuery) QueryGroup() *GroupQuery {
query := (&GroupClient{config: gitq.config}).Query()
func (_q *GroupInvitationTokenQuery) QueryGroup() *GroupQuery {
query := (&GroupClient{config: _q.config}).Query()
query.path = func(ctx context.Context) (fromU *sql.Selector, err error) {
if err := gitq.prepareQuery(ctx); err != nil {
if err := _q.prepareQuery(ctx); err != nil {
return nil, err
}
selector := gitq.sqlQuery(ctx)
selector := _q.sqlQuery(ctx)
if err := selector.Err(); err != nil {
return nil, err
}
@@ -78,7 +78,7 @@ func (gitq *GroupInvitationTokenQuery) QueryGroup() *GroupQuery {
sqlgraph.To(group.Table, group.FieldID),
sqlgraph.Edge(sqlgraph.M2O, true, groupinvitationtoken.GroupTable, groupinvitationtoken.GroupColumn),
)
fromU = sqlgraph.SetNeighbors(gitq.driver.Dialect(), step)
fromU = sqlgraph.SetNeighbors(_q.driver.Dialect(), step)
return fromU, nil
}
return query
@@ -86,8 +86,8 @@ func (gitq *GroupInvitationTokenQuery) QueryGroup() *GroupQuery {
// First returns the first GroupInvitationToken entity from the query.
// Returns a *NotFoundError when no GroupInvitationToken was found.
func (gitq *GroupInvitationTokenQuery) First(ctx context.Context) (*GroupInvitationToken, error) {
nodes, err := gitq.Limit(1).All(setContextOp(ctx, gitq.ctx, ent.OpQueryFirst))
func (_q *GroupInvitationTokenQuery) First(ctx context.Context) (*GroupInvitationToken, error) {
nodes, err := _q.Limit(1).All(setContextOp(ctx, _q.ctx, ent.OpQueryFirst))
if err != nil {
return nil, err
}
@@ -98,8 +98,8 @@ func (gitq *GroupInvitationTokenQuery) First(ctx context.Context) (*GroupInvitat
}
// FirstX is like First, but panics if an error occurs.
func (gitq *GroupInvitationTokenQuery) FirstX(ctx context.Context) *GroupInvitationToken {
node, err := gitq.First(ctx)
func (_q *GroupInvitationTokenQuery) FirstX(ctx context.Context) *GroupInvitationToken {
node, err := _q.First(ctx)
if err != nil && !IsNotFound(err) {
panic(err)
}
@@ -108,9 +108,9 @@ func (gitq *GroupInvitationTokenQuery) FirstX(ctx context.Context) *GroupInvitat
// FirstID returns the first GroupInvitationToken ID from the query.
// Returns a *NotFoundError when no GroupInvitationToken ID was found.
func (gitq *GroupInvitationTokenQuery) FirstID(ctx context.Context) (id uuid.UUID, err error) {
func (_q *GroupInvitationTokenQuery) FirstID(ctx context.Context) (id uuid.UUID, err error) {
var ids []uuid.UUID
if ids, err = gitq.Limit(1).IDs(setContextOp(ctx, gitq.ctx, ent.OpQueryFirstID)); err != nil {
if ids, err = _q.Limit(1).IDs(setContextOp(ctx, _q.ctx, ent.OpQueryFirstID)); err != nil {
return
}
if len(ids) == 0 {
@@ -121,8 +121,8 @@ func (gitq *GroupInvitationTokenQuery) FirstID(ctx context.Context) (id uuid.UUI
}
// FirstIDX is like FirstID, but panics if an error occurs.
func (gitq *GroupInvitationTokenQuery) FirstIDX(ctx context.Context) uuid.UUID {
id, err := gitq.FirstID(ctx)
func (_q *GroupInvitationTokenQuery) FirstIDX(ctx context.Context) uuid.UUID {
id, err := _q.FirstID(ctx)
if err != nil && !IsNotFound(err) {
panic(err)
}
@@ -132,8 +132,8 @@ func (gitq *GroupInvitationTokenQuery) FirstIDX(ctx context.Context) uuid.UUID {
// Only returns a single GroupInvitationToken entity found by the query, ensuring it only returns one.
// Returns a *NotSingularError when more than one GroupInvitationToken entity is found.
// Returns a *NotFoundError when no GroupInvitationToken entities are found.
func (gitq *GroupInvitationTokenQuery) Only(ctx context.Context) (*GroupInvitationToken, error) {
nodes, err := gitq.Limit(2).All(setContextOp(ctx, gitq.ctx, ent.OpQueryOnly))
func (_q *GroupInvitationTokenQuery) Only(ctx context.Context) (*GroupInvitationToken, error) {
nodes, err := _q.Limit(2).All(setContextOp(ctx, _q.ctx, ent.OpQueryOnly))
if err != nil {
return nil, err
}
@@ -148,8 +148,8 @@ func (gitq *GroupInvitationTokenQuery) Only(ctx context.Context) (*GroupInvitati
}
// OnlyX is like Only, but panics if an error occurs.
func (gitq *GroupInvitationTokenQuery) OnlyX(ctx context.Context) *GroupInvitationToken {
node, err := gitq.Only(ctx)
func (_q *GroupInvitationTokenQuery) OnlyX(ctx context.Context) *GroupInvitationToken {
node, err := _q.Only(ctx)
if err != nil {
panic(err)
}
@@ -159,9 +159,9 @@ func (gitq *GroupInvitationTokenQuery) OnlyX(ctx context.Context) *GroupInvitati
// OnlyID is like Only, but returns the only GroupInvitationToken ID in the query.
// Returns a *NotSingularError when more than one GroupInvitationToken ID is found.
// Returns a *NotFoundError when no entities are found.
func (gitq *GroupInvitationTokenQuery) OnlyID(ctx context.Context) (id uuid.UUID, err error) {
func (_q *GroupInvitationTokenQuery) OnlyID(ctx context.Context) (id uuid.UUID, err error) {
var ids []uuid.UUID
if ids, err = gitq.Limit(2).IDs(setContextOp(ctx, gitq.ctx, ent.OpQueryOnlyID)); err != nil {
if ids, err = _q.Limit(2).IDs(setContextOp(ctx, _q.ctx, ent.OpQueryOnlyID)); err != nil {
return
}
switch len(ids) {
@@ -176,8 +176,8 @@ func (gitq *GroupInvitationTokenQuery) OnlyID(ctx context.Context) (id uuid.UUID
}
// OnlyIDX is like OnlyID, but panics if an error occurs.
func (gitq *GroupInvitationTokenQuery) OnlyIDX(ctx context.Context) uuid.UUID {
id, err := gitq.OnlyID(ctx)
func (_q *GroupInvitationTokenQuery) OnlyIDX(ctx context.Context) uuid.UUID {
id, err := _q.OnlyID(ctx)
if err != nil {
panic(err)
}
@@ -185,18 +185,18 @@ func (gitq *GroupInvitationTokenQuery) OnlyIDX(ctx context.Context) uuid.UUID {
}
// All executes the query and returns a list of GroupInvitationTokens.
func (gitq *GroupInvitationTokenQuery) All(ctx context.Context) ([]*GroupInvitationToken, error) {
ctx = setContextOp(ctx, gitq.ctx, ent.OpQueryAll)
if err := gitq.prepareQuery(ctx); err != nil {
func (_q *GroupInvitationTokenQuery) All(ctx context.Context) ([]*GroupInvitationToken, error) {
ctx = setContextOp(ctx, _q.ctx, ent.OpQueryAll)
if err := _q.prepareQuery(ctx); err != nil {
return nil, err
}
qr := querierAll[[]*GroupInvitationToken, *GroupInvitationTokenQuery]()
return withInterceptors[[]*GroupInvitationToken](ctx, gitq, qr, gitq.inters)
return withInterceptors[[]*GroupInvitationToken](ctx, _q, qr, _q.inters)
}
// AllX is like All, but panics if an error occurs.
func (gitq *GroupInvitationTokenQuery) AllX(ctx context.Context) []*GroupInvitationToken {
nodes, err := gitq.All(ctx)
func (_q *GroupInvitationTokenQuery) AllX(ctx context.Context) []*GroupInvitationToken {
nodes, err := _q.All(ctx)
if err != nil {
panic(err)
}
@@ -204,20 +204,20 @@ func (gitq *GroupInvitationTokenQuery) AllX(ctx context.Context) []*GroupInvitat
}
// IDs executes the query and returns a list of GroupInvitationToken IDs.
func (gitq *GroupInvitationTokenQuery) IDs(ctx context.Context) (ids []uuid.UUID, err error) {
if gitq.ctx.Unique == nil && gitq.path != nil {
gitq.Unique(true)
func (_q *GroupInvitationTokenQuery) IDs(ctx context.Context) (ids []uuid.UUID, err error) {
if _q.ctx.Unique == nil && _q.path != nil {
_q.Unique(true)
}
ctx = setContextOp(ctx, gitq.ctx, ent.OpQueryIDs)
if err = gitq.Select(groupinvitationtoken.FieldID).Scan(ctx, &ids); err != nil {
ctx = setContextOp(ctx, _q.ctx, ent.OpQueryIDs)
if err = _q.Select(groupinvitationtoken.FieldID).Scan(ctx, &ids); err != nil {
return nil, err
}
return ids, nil
}
// IDsX is like IDs, but panics if an error occurs.
func (gitq *GroupInvitationTokenQuery) IDsX(ctx context.Context) []uuid.UUID {
ids, err := gitq.IDs(ctx)
func (_q *GroupInvitationTokenQuery) IDsX(ctx context.Context) []uuid.UUID {
ids, err := _q.IDs(ctx)
if err != nil {
panic(err)
}
@@ -225,17 +225,17 @@ func (gitq *GroupInvitationTokenQuery) IDsX(ctx context.Context) []uuid.UUID {
}
// Count returns the count of the given query.
func (gitq *GroupInvitationTokenQuery) Count(ctx context.Context) (int, error) {
ctx = setContextOp(ctx, gitq.ctx, ent.OpQueryCount)
if err := gitq.prepareQuery(ctx); err != nil {
func (_q *GroupInvitationTokenQuery) Count(ctx context.Context) (int, error) {
ctx = setContextOp(ctx, _q.ctx, ent.OpQueryCount)
if err := _q.prepareQuery(ctx); err != nil {
return 0, err
}
return withInterceptors[int](ctx, gitq, querierCount[*GroupInvitationTokenQuery](), gitq.inters)
return withInterceptors[int](ctx, _q, querierCount[*GroupInvitationTokenQuery](), _q.inters)
}
// CountX is like Count, but panics if an error occurs.
func (gitq *GroupInvitationTokenQuery) CountX(ctx context.Context) int {
count, err := gitq.Count(ctx)
func (_q *GroupInvitationTokenQuery) CountX(ctx context.Context) int {
count, err := _q.Count(ctx)
if err != nil {
panic(err)
}
@@ -243,9 +243,9 @@ func (gitq *GroupInvitationTokenQuery) CountX(ctx context.Context) int {
}
// Exist returns true if the query has elements in the graph.
func (gitq *GroupInvitationTokenQuery) Exist(ctx context.Context) (bool, error) {
ctx = setContextOp(ctx, gitq.ctx, ent.OpQueryExist)
switch _, err := gitq.FirstID(ctx); {
func (_q *GroupInvitationTokenQuery) Exist(ctx context.Context) (bool, error) {
ctx = setContextOp(ctx, _q.ctx, ent.OpQueryExist)
switch _, err := _q.FirstID(ctx); {
case IsNotFound(err):
return false, nil
case err != nil:
@@ -256,8 +256,8 @@ func (gitq *GroupInvitationTokenQuery) Exist(ctx context.Context) (bool, error)
}
// ExistX is like Exist, but panics if an error occurs.
func (gitq *GroupInvitationTokenQuery) ExistX(ctx context.Context) bool {
exist, err := gitq.Exist(ctx)
func (_q *GroupInvitationTokenQuery) ExistX(ctx context.Context) bool {
exist, err := _q.Exist(ctx)
if err != nil {
panic(err)
}
@@ -266,32 +266,32 @@ func (gitq *GroupInvitationTokenQuery) ExistX(ctx context.Context) bool {
// Clone returns a duplicate of the GroupInvitationTokenQuery builder, including all associated steps. It can be
// used to prepare common query builders and use them differently after the clone is made.
func (gitq *GroupInvitationTokenQuery) Clone() *GroupInvitationTokenQuery {
if gitq == nil {
func (_q *GroupInvitationTokenQuery) Clone() *GroupInvitationTokenQuery {
if _q == nil {
return nil
}
return &GroupInvitationTokenQuery{
config: gitq.config,
ctx: gitq.ctx.Clone(),
order: append([]groupinvitationtoken.OrderOption{}, gitq.order...),
inters: append([]Interceptor{}, gitq.inters...),
predicates: append([]predicate.GroupInvitationToken{}, gitq.predicates...),
withGroup: gitq.withGroup.Clone(),
config: _q.config,
ctx: _q.ctx.Clone(),
order: append([]groupinvitationtoken.OrderOption{}, _q.order...),
inters: append([]Interceptor{}, _q.inters...),
predicates: append([]predicate.GroupInvitationToken{}, _q.predicates...),
withGroup: _q.withGroup.Clone(),
// clone intermediate query.
sql: gitq.sql.Clone(),
path: gitq.path,
sql: _q.sql.Clone(),
path: _q.path,
}
}
// WithGroup tells the query-builder to eager-load the nodes that are connected to
// the "group" edge. The optional arguments are used to configure the query builder of the edge.
func (gitq *GroupInvitationTokenQuery) WithGroup(opts ...func(*GroupQuery)) *GroupInvitationTokenQuery {
query := (&GroupClient{config: gitq.config}).Query()
func (_q *GroupInvitationTokenQuery) WithGroup(opts ...func(*GroupQuery)) *GroupInvitationTokenQuery {
query := (&GroupClient{config: _q.config}).Query()
for _, opt := range opts {
opt(query)
}
gitq.withGroup = query
return gitq
_q.withGroup = query
return _q
}
// GroupBy is used to group vertices by one or more fields/columns.
@@ -308,10 +308,10 @@ func (gitq *GroupInvitationTokenQuery) WithGroup(opts ...func(*GroupQuery)) *Gro
// GroupBy(groupinvitationtoken.FieldCreatedAt).
// Aggregate(ent.Count()).
// Scan(ctx, &v)
func (gitq *GroupInvitationTokenQuery) GroupBy(field string, fields ...string) *GroupInvitationTokenGroupBy {
gitq.ctx.Fields = append([]string{field}, fields...)
grbuild := &GroupInvitationTokenGroupBy{build: gitq}
grbuild.flds = &gitq.ctx.Fields
func (_q *GroupInvitationTokenQuery) GroupBy(field string, fields ...string) *GroupInvitationTokenGroupBy {
_q.ctx.Fields = append([]string{field}, fields...)
grbuild := &GroupInvitationTokenGroupBy{build: _q}
grbuild.flds = &_q.ctx.Fields
grbuild.label = groupinvitationtoken.Label
grbuild.scan = grbuild.Scan
return grbuild
@@ -329,55 +329,55 @@ func (gitq *GroupInvitationTokenQuery) GroupBy(field string, fields ...string) *
// client.GroupInvitationToken.Query().
// Select(groupinvitationtoken.FieldCreatedAt).
// Scan(ctx, &v)
func (gitq *GroupInvitationTokenQuery) Select(fields ...string) *GroupInvitationTokenSelect {
gitq.ctx.Fields = append(gitq.ctx.Fields, fields...)
sbuild := &GroupInvitationTokenSelect{GroupInvitationTokenQuery: gitq}
func (_q *GroupInvitationTokenQuery) Select(fields ...string) *GroupInvitationTokenSelect {
_q.ctx.Fields = append(_q.ctx.Fields, fields...)
sbuild := &GroupInvitationTokenSelect{GroupInvitationTokenQuery: _q}
sbuild.label = groupinvitationtoken.Label
sbuild.flds, sbuild.scan = &gitq.ctx.Fields, sbuild.Scan
sbuild.flds, sbuild.scan = &_q.ctx.Fields, sbuild.Scan
return sbuild
}
// Aggregate returns a GroupInvitationTokenSelect configured with the given aggregations.
func (gitq *GroupInvitationTokenQuery) Aggregate(fns ...AggregateFunc) *GroupInvitationTokenSelect {
return gitq.Select().Aggregate(fns...)
func (_q *GroupInvitationTokenQuery) Aggregate(fns ...AggregateFunc) *GroupInvitationTokenSelect {
return _q.Select().Aggregate(fns...)
}
func (gitq *GroupInvitationTokenQuery) prepareQuery(ctx context.Context) error {
for _, inter := range gitq.inters {
func (_q *GroupInvitationTokenQuery) prepareQuery(ctx context.Context) error {
for _, inter := range _q.inters {
if inter == nil {
return fmt.Errorf("ent: uninitialized interceptor (forgotten import ent/runtime?)")
}
if trv, ok := inter.(Traverser); ok {
if err := trv.Traverse(ctx, gitq); err != nil {
if err := trv.Traverse(ctx, _q); err != nil {
return err
}
}
}
for _, f := range gitq.ctx.Fields {
for _, f := range _q.ctx.Fields {
if !groupinvitationtoken.ValidColumn(f) {
return &ValidationError{Name: f, err: fmt.Errorf("ent: invalid field %q for query", f)}
}
}
if gitq.path != nil {
prev, err := gitq.path(ctx)
if _q.path != nil {
prev, err := _q.path(ctx)
if err != nil {
return err
}
gitq.sql = prev
_q.sql = prev
}
return nil
}
func (gitq *GroupInvitationTokenQuery) sqlAll(ctx context.Context, hooks ...queryHook) ([]*GroupInvitationToken, error) {
func (_q *GroupInvitationTokenQuery) sqlAll(ctx context.Context, hooks ...queryHook) ([]*GroupInvitationToken, error) {
var (
nodes = []*GroupInvitationToken{}
withFKs = gitq.withFKs
_spec = gitq.querySpec()
withFKs = _q.withFKs
_spec = _q.querySpec()
loadedTypes = [1]bool{
gitq.withGroup != nil,
_q.withGroup != nil,
}
)
if gitq.withGroup != nil {
if _q.withGroup != nil {
withFKs = true
}
if withFKs {
@@ -387,7 +387,7 @@ func (gitq *GroupInvitationTokenQuery) sqlAll(ctx context.Context, hooks ...quer
return (*GroupInvitationToken).scanValues(nil, columns)
}
_spec.Assign = func(columns []string, values []any) error {
node := &GroupInvitationToken{config: gitq.config}
node := &GroupInvitationToken{config: _q.config}
nodes = append(nodes, node)
node.Edges.loadedTypes = loadedTypes
return node.assignValues(columns, values)
@@ -395,14 +395,14 @@ func (gitq *GroupInvitationTokenQuery) sqlAll(ctx context.Context, hooks ...quer
for i := range hooks {
hooks[i](ctx, _spec)
}
if err := sqlgraph.QueryNodes(ctx, gitq.driver, _spec); err != nil {
if err := sqlgraph.QueryNodes(ctx, _q.driver, _spec); err != nil {
return nil, err
}
if len(nodes) == 0 {
return nodes, nil
}
if query := gitq.withGroup; query != nil {
if err := gitq.loadGroup(ctx, query, nodes, nil,
if query := _q.withGroup; query != nil {
if err := _q.loadGroup(ctx, query, nodes, nil,
func(n *GroupInvitationToken, e *Group) { n.Edges.Group = e }); err != nil {
return nil, err
}
@@ -410,7 +410,7 @@ func (gitq *GroupInvitationTokenQuery) sqlAll(ctx context.Context, hooks ...quer
return nodes, nil
}
func (gitq *GroupInvitationTokenQuery) loadGroup(ctx context.Context, query *GroupQuery, nodes []*GroupInvitationToken, init func(*GroupInvitationToken), assign func(*GroupInvitationToken, *Group)) error {
func (_q *GroupInvitationTokenQuery) loadGroup(ctx context.Context, query *GroupQuery, nodes []*GroupInvitationToken, init func(*GroupInvitationToken), assign func(*GroupInvitationToken, *Group)) error {
ids := make([]uuid.UUID, 0, len(nodes))
nodeids := make(map[uuid.UUID][]*GroupInvitationToken)
for i := range nodes {
@@ -443,24 +443,24 @@ func (gitq *GroupInvitationTokenQuery) loadGroup(ctx context.Context, query *Gro
return nil
}
func (gitq *GroupInvitationTokenQuery) sqlCount(ctx context.Context) (int, error) {
_spec := gitq.querySpec()
_spec.Node.Columns = gitq.ctx.Fields
if len(gitq.ctx.Fields) > 0 {
_spec.Unique = gitq.ctx.Unique != nil && *gitq.ctx.Unique
func (_q *GroupInvitationTokenQuery) sqlCount(ctx context.Context) (int, error) {
_spec := _q.querySpec()
_spec.Node.Columns = _q.ctx.Fields
if len(_q.ctx.Fields) > 0 {
_spec.Unique = _q.ctx.Unique != nil && *_q.ctx.Unique
}
return sqlgraph.CountNodes(ctx, gitq.driver, _spec)
return sqlgraph.CountNodes(ctx, _q.driver, _spec)
}
func (gitq *GroupInvitationTokenQuery) querySpec() *sqlgraph.QuerySpec {
func (_q *GroupInvitationTokenQuery) querySpec() *sqlgraph.QuerySpec {
_spec := sqlgraph.NewQuerySpec(groupinvitationtoken.Table, groupinvitationtoken.Columns, sqlgraph.NewFieldSpec(groupinvitationtoken.FieldID, field.TypeUUID))
_spec.From = gitq.sql
if unique := gitq.ctx.Unique; unique != nil {
_spec.From = _q.sql
if unique := _q.ctx.Unique; unique != nil {
_spec.Unique = *unique
} else if gitq.path != nil {
} else if _q.path != nil {
_spec.Unique = true
}
if fields := gitq.ctx.Fields; len(fields) > 0 {
if fields := _q.ctx.Fields; len(fields) > 0 {
_spec.Node.Columns = make([]string, 0, len(fields))
_spec.Node.Columns = append(_spec.Node.Columns, groupinvitationtoken.FieldID)
for i := range fields {
@@ -469,20 +469,20 @@ func (gitq *GroupInvitationTokenQuery) querySpec() *sqlgraph.QuerySpec {
}
}
}
if ps := gitq.predicates; len(ps) > 0 {
if ps := _q.predicates; len(ps) > 0 {
_spec.Predicate = func(selector *sql.Selector) {
for i := range ps {
ps[i](selector)
}
}
}
if limit := gitq.ctx.Limit; limit != nil {
if limit := _q.ctx.Limit; limit != nil {
_spec.Limit = *limit
}
if offset := gitq.ctx.Offset; offset != nil {
if offset := _q.ctx.Offset; offset != nil {
_spec.Offset = *offset
}
if ps := gitq.order; len(ps) > 0 {
if ps := _q.order; len(ps) > 0 {
_spec.Order = func(selector *sql.Selector) {
for i := range ps {
ps[i](selector)
@@ -492,33 +492,33 @@ func (gitq *GroupInvitationTokenQuery) querySpec() *sqlgraph.QuerySpec {
return _spec
}
func (gitq *GroupInvitationTokenQuery) sqlQuery(ctx context.Context) *sql.Selector {
builder := sql.Dialect(gitq.driver.Dialect())
func (_q *GroupInvitationTokenQuery) sqlQuery(ctx context.Context) *sql.Selector {
builder := sql.Dialect(_q.driver.Dialect())
t1 := builder.Table(groupinvitationtoken.Table)
columns := gitq.ctx.Fields
columns := _q.ctx.Fields
if len(columns) == 0 {
columns = groupinvitationtoken.Columns
}
selector := builder.Select(t1.Columns(columns...)...).From(t1)
if gitq.sql != nil {
selector = gitq.sql
if _q.sql != nil {
selector = _q.sql
selector.Select(selector.Columns(columns...)...)
}
if gitq.ctx.Unique != nil && *gitq.ctx.Unique {
if _q.ctx.Unique != nil && *_q.ctx.Unique {
selector.Distinct()
}
for _, p := range gitq.predicates {
for _, p := range _q.predicates {
p(selector)
}
for _, p := range gitq.order {
for _, p := range _q.order {
p(selector)
}
if offset := gitq.ctx.Offset; offset != nil {
if offset := _q.ctx.Offset; offset != nil {
// limit is mandatory for offset clause. We start
// with default value, and override it below if needed.
selector.Offset(*offset).Limit(math.MaxInt32)
}
if limit := gitq.ctx.Limit; limit != nil {
if limit := _q.ctx.Limit; limit != nil {
selector.Limit(*limit)
}
return selector
@@ -531,41 +531,41 @@ type GroupInvitationTokenGroupBy struct {
}
// Aggregate adds the given aggregation functions to the group-by query.
func (gitgb *GroupInvitationTokenGroupBy) Aggregate(fns ...AggregateFunc) *GroupInvitationTokenGroupBy {
gitgb.fns = append(gitgb.fns, fns...)
return gitgb
func (_g *GroupInvitationTokenGroupBy) Aggregate(fns ...AggregateFunc) *GroupInvitationTokenGroupBy {
_g.fns = append(_g.fns, fns...)
return _g
}
// Scan applies the selector query and scans the result into the given value.
func (gitgb *GroupInvitationTokenGroupBy) Scan(ctx context.Context, v any) error {
ctx = setContextOp(ctx, gitgb.build.ctx, ent.OpQueryGroupBy)
if err := gitgb.build.prepareQuery(ctx); err != nil {
func (_g *GroupInvitationTokenGroupBy) Scan(ctx context.Context, v any) error {
ctx = setContextOp(ctx, _g.build.ctx, ent.OpQueryGroupBy)
if err := _g.build.prepareQuery(ctx); err != nil {
return err
}
return scanWithInterceptors[*GroupInvitationTokenQuery, *GroupInvitationTokenGroupBy](ctx, gitgb.build, gitgb, gitgb.build.inters, v)
return scanWithInterceptors[*GroupInvitationTokenQuery, *GroupInvitationTokenGroupBy](ctx, _g.build, _g, _g.build.inters, v)
}
func (gitgb *GroupInvitationTokenGroupBy) sqlScan(ctx context.Context, root *GroupInvitationTokenQuery, v any) error {
func (_g *GroupInvitationTokenGroupBy) sqlScan(ctx context.Context, root *GroupInvitationTokenQuery, v any) error {
selector := root.sqlQuery(ctx).Select()
aggregation := make([]string, 0, len(gitgb.fns))
for _, fn := range gitgb.fns {
aggregation := make([]string, 0, len(_g.fns))
for _, fn := range _g.fns {
aggregation = append(aggregation, fn(selector))
}
if len(selector.SelectedColumns()) == 0 {
columns := make([]string, 0, len(*gitgb.flds)+len(gitgb.fns))
for _, f := range *gitgb.flds {
columns := make([]string, 0, len(*_g.flds)+len(_g.fns))
for _, f := range *_g.flds {
columns = append(columns, selector.C(f))
}
columns = append(columns, aggregation...)
selector.Select(columns...)
}
selector.GroupBy(selector.Columns(*gitgb.flds...)...)
selector.GroupBy(selector.Columns(*_g.flds...)...)
if err := selector.Err(); err != nil {
return err
}
rows := &sql.Rows{}
query, args := selector.Query()
if err := gitgb.build.driver.Query(ctx, query, args, rows); err != nil {
if err := _g.build.driver.Query(ctx, query, args, rows); err != nil {
return err
}
defer rows.Close()
@@ -579,27 +579,27 @@ type GroupInvitationTokenSelect struct {
}
// Aggregate adds the given aggregation functions to the selector query.
func (gits *GroupInvitationTokenSelect) Aggregate(fns ...AggregateFunc) *GroupInvitationTokenSelect {
gits.fns = append(gits.fns, fns...)
return gits
func (_s *GroupInvitationTokenSelect) Aggregate(fns ...AggregateFunc) *GroupInvitationTokenSelect {
_s.fns = append(_s.fns, fns...)
return _s
}
// Scan applies the selector query and scans the result into the given value.
func (gits *GroupInvitationTokenSelect) Scan(ctx context.Context, v any) error {
ctx = setContextOp(ctx, gits.ctx, ent.OpQuerySelect)
if err := gits.prepareQuery(ctx); err != nil {
func (_s *GroupInvitationTokenSelect) Scan(ctx context.Context, v any) error {
ctx = setContextOp(ctx, _s.ctx, ent.OpQuerySelect)
if err := _s.prepareQuery(ctx); err != nil {
return err
}
return scanWithInterceptors[*GroupInvitationTokenQuery, *GroupInvitationTokenSelect](ctx, gits.GroupInvitationTokenQuery, gits, gits.inters, v)
return scanWithInterceptors[*GroupInvitationTokenQuery, *GroupInvitationTokenSelect](ctx, _s.GroupInvitationTokenQuery, _s, _s.inters, v)
}
func (gits *GroupInvitationTokenSelect) sqlScan(ctx context.Context, root *GroupInvitationTokenQuery, v any) error {
func (_s *GroupInvitationTokenSelect) sqlScan(ctx context.Context, root *GroupInvitationTokenQuery, v any) error {
selector := root.sqlQuery(ctx)
aggregation := make([]string, 0, len(gits.fns))
for _, fn := range gits.fns {
aggregation := make([]string, 0, len(_s.fns))
for _, fn := range _s.fns {
aggregation = append(aggregation, fn(selector))
}
switch n := len(*gits.selector.flds); {
switch n := len(*_s.selector.flds); {
case n == 0 && len(aggregation) > 0:
selector.Select(aggregation...)
case n != 0 && len(aggregation) > 0:
@@ -607,7 +607,7 @@ func (gits *GroupInvitationTokenSelect) sqlScan(ctx context.Context, root *Group
}
rows := &sql.Rows{}
query, args := selector.Query()
if err := gits.driver.Query(ctx, query, args, rows); err != nil {
if err := _s.driver.Query(ctx, query, args, rows); err != nil {
return err
}
defer rows.Close()
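A short, illustrative query against the renamed query builder. Where, WithGroup, and Only are visible in the diff above; the groupinvitationtoken.Token equality predicate is assumed from ent's generated predicates, and client, ctx, and raw are placeholder names.

// Sketch only: resolve a token by its value and eager-load its group edge.
// Only returns exactly one match or a *NotFoundError / *NotSingularError,
// as documented in the diff above.
func lookupToken(ctx context.Context, client *ent.Client, raw []byte) (*ent.GroupInvitationToken, error) {
    return client.GroupInvitationToken.Query().
        Where(groupinvitationtoken.Token(raw)).
        WithGroup().
        Only(ctx)
}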


@@ -25,97 +25,97 @@ type GroupInvitationTokenUpdate struct {
}
// Where appends a list predicates to the GroupInvitationTokenUpdate builder.
func (gitu *GroupInvitationTokenUpdate) Where(ps ...predicate.GroupInvitationToken) *GroupInvitationTokenUpdate {
gitu.mutation.Where(ps...)
return gitu
func (_u *GroupInvitationTokenUpdate) Where(ps ...predicate.GroupInvitationToken) *GroupInvitationTokenUpdate {
_u.mutation.Where(ps...)
return _u
}
// SetUpdatedAt sets the "updated_at" field.
func (gitu *GroupInvitationTokenUpdate) SetUpdatedAt(t time.Time) *GroupInvitationTokenUpdate {
gitu.mutation.SetUpdatedAt(t)
return gitu
func (_u *GroupInvitationTokenUpdate) SetUpdatedAt(v time.Time) *GroupInvitationTokenUpdate {
_u.mutation.SetUpdatedAt(v)
return _u
}
// SetToken sets the "token" field.
func (gitu *GroupInvitationTokenUpdate) SetToken(b []byte) *GroupInvitationTokenUpdate {
gitu.mutation.SetToken(b)
return gitu
func (_u *GroupInvitationTokenUpdate) SetToken(v []byte) *GroupInvitationTokenUpdate {
_u.mutation.SetToken(v)
return _u
}
// SetExpiresAt sets the "expires_at" field.
func (gitu *GroupInvitationTokenUpdate) SetExpiresAt(t time.Time) *GroupInvitationTokenUpdate {
gitu.mutation.SetExpiresAt(t)
return gitu
func (_u *GroupInvitationTokenUpdate) SetExpiresAt(v time.Time) *GroupInvitationTokenUpdate {
_u.mutation.SetExpiresAt(v)
return _u
}
// SetNillableExpiresAt sets the "expires_at" field if the given value is not nil.
func (gitu *GroupInvitationTokenUpdate) SetNillableExpiresAt(t *time.Time) *GroupInvitationTokenUpdate {
if t != nil {
gitu.SetExpiresAt(*t)
func (_u *GroupInvitationTokenUpdate) SetNillableExpiresAt(v *time.Time) *GroupInvitationTokenUpdate {
if v != nil {
_u.SetExpiresAt(*v)
}
return gitu
return _u
}
// SetUses sets the "uses" field.
func (gitu *GroupInvitationTokenUpdate) SetUses(i int) *GroupInvitationTokenUpdate {
gitu.mutation.ResetUses()
gitu.mutation.SetUses(i)
return gitu
func (_u *GroupInvitationTokenUpdate) SetUses(v int) *GroupInvitationTokenUpdate {
_u.mutation.ResetUses()
_u.mutation.SetUses(v)
return _u
}
// SetNillableUses sets the "uses" field if the given value is not nil.
func (gitu *GroupInvitationTokenUpdate) SetNillableUses(i *int) *GroupInvitationTokenUpdate {
if i != nil {
gitu.SetUses(*i)
func (_u *GroupInvitationTokenUpdate) SetNillableUses(v *int) *GroupInvitationTokenUpdate {
if v != nil {
_u.SetUses(*v)
}
return gitu
return _u
}
// AddUses adds i to the "uses" field.
func (gitu *GroupInvitationTokenUpdate) AddUses(i int) *GroupInvitationTokenUpdate {
gitu.mutation.AddUses(i)
return gitu
// AddUses adds value to the "uses" field.
func (_u *GroupInvitationTokenUpdate) AddUses(v int) *GroupInvitationTokenUpdate {
_u.mutation.AddUses(v)
return _u
}
// SetGroupID sets the "group" edge to the Group entity by ID.
func (gitu *GroupInvitationTokenUpdate) SetGroupID(id uuid.UUID) *GroupInvitationTokenUpdate {
gitu.mutation.SetGroupID(id)
return gitu
func (_u *GroupInvitationTokenUpdate) SetGroupID(id uuid.UUID) *GroupInvitationTokenUpdate {
_u.mutation.SetGroupID(id)
return _u
}
// SetNillableGroupID sets the "group" edge to the Group entity by ID if the given value is not nil.
func (gitu *GroupInvitationTokenUpdate) SetNillableGroupID(id *uuid.UUID) *GroupInvitationTokenUpdate {
func (_u *GroupInvitationTokenUpdate) SetNillableGroupID(id *uuid.UUID) *GroupInvitationTokenUpdate {
if id != nil {
gitu = gitu.SetGroupID(*id)
_u = _u.SetGroupID(*id)
}
return gitu
return _u
}
// SetGroup sets the "group" edge to the Group entity.
func (gitu *GroupInvitationTokenUpdate) SetGroup(g *Group) *GroupInvitationTokenUpdate {
return gitu.SetGroupID(g.ID)
func (_u *GroupInvitationTokenUpdate) SetGroup(v *Group) *GroupInvitationTokenUpdate {
return _u.SetGroupID(v.ID)
}
// Mutation returns the GroupInvitationTokenMutation object of the builder.
func (gitu *GroupInvitationTokenUpdate) Mutation() *GroupInvitationTokenMutation {
return gitu.mutation
func (_u *GroupInvitationTokenUpdate) Mutation() *GroupInvitationTokenMutation {
return _u.mutation
}
// ClearGroup clears the "group" edge to the Group entity.
func (gitu *GroupInvitationTokenUpdate) ClearGroup() *GroupInvitationTokenUpdate {
gitu.mutation.ClearGroup()
return gitu
func (_u *GroupInvitationTokenUpdate) ClearGroup() *GroupInvitationTokenUpdate {
_u.mutation.ClearGroup()
return _u
}
// Save executes the query and returns the number of nodes affected by the update operation.
func (gitu *GroupInvitationTokenUpdate) Save(ctx context.Context) (int, error) {
gitu.defaults()
return withHooks(ctx, gitu.sqlSave, gitu.mutation, gitu.hooks)
func (_u *GroupInvitationTokenUpdate) Save(ctx context.Context) (int, error) {
_u.defaults()
return withHooks(ctx, _u.sqlSave, _u.mutation, _u.hooks)
}
// SaveX is like Save, but panics if an error occurs.
func (gitu *GroupInvitationTokenUpdate) SaveX(ctx context.Context) int {
affected, err := gitu.Save(ctx)
func (_u *GroupInvitationTokenUpdate) SaveX(ctx context.Context) int {
affected, err := _u.Save(ctx)
if err != nil {
panic(err)
}
@@ -123,51 +123,51 @@ func (gitu *GroupInvitationTokenUpdate) SaveX(ctx context.Context) int {
}
// Exec executes the query.
func (gitu *GroupInvitationTokenUpdate) Exec(ctx context.Context) error {
_, err := gitu.Save(ctx)
func (_u *GroupInvitationTokenUpdate) Exec(ctx context.Context) error {
_, err := _u.Save(ctx)
return err
}
// ExecX is like Exec, but panics if an error occurs.
func (gitu *GroupInvitationTokenUpdate) ExecX(ctx context.Context) {
if err := gitu.Exec(ctx); err != nil {
func (_u *GroupInvitationTokenUpdate) ExecX(ctx context.Context) {
if err := _u.Exec(ctx); err != nil {
panic(err)
}
}
// defaults sets the default values of the builder before save.
func (gitu *GroupInvitationTokenUpdate) defaults() {
if _, ok := gitu.mutation.UpdatedAt(); !ok {
func (_u *GroupInvitationTokenUpdate) defaults() {
if _, ok := _u.mutation.UpdatedAt(); !ok {
v := groupinvitationtoken.UpdateDefaultUpdatedAt()
gitu.mutation.SetUpdatedAt(v)
_u.mutation.SetUpdatedAt(v)
}
}
func (gitu *GroupInvitationTokenUpdate) sqlSave(ctx context.Context) (n int, err error) {
func (_u *GroupInvitationTokenUpdate) sqlSave(ctx context.Context) (_node int, err error) {
_spec := sqlgraph.NewUpdateSpec(groupinvitationtoken.Table, groupinvitationtoken.Columns, sqlgraph.NewFieldSpec(groupinvitationtoken.FieldID, field.TypeUUID))
if ps := gitu.mutation.predicates; len(ps) > 0 {
if ps := _u.mutation.predicates; len(ps) > 0 {
_spec.Predicate = func(selector *sql.Selector) {
for i := range ps {
ps[i](selector)
}
}
}
if value, ok := gitu.mutation.UpdatedAt(); ok {
if value, ok := _u.mutation.UpdatedAt(); ok {
_spec.SetField(groupinvitationtoken.FieldUpdatedAt, field.TypeTime, value)
}
if value, ok := gitu.mutation.Token(); ok {
if value, ok := _u.mutation.Token(); ok {
_spec.SetField(groupinvitationtoken.FieldToken, field.TypeBytes, value)
}
if value, ok := gitu.mutation.ExpiresAt(); ok {
if value, ok := _u.mutation.ExpiresAt(); ok {
_spec.SetField(groupinvitationtoken.FieldExpiresAt, field.TypeTime, value)
}
if value, ok := gitu.mutation.Uses(); ok {
if value, ok := _u.mutation.Uses(); ok {
_spec.SetField(groupinvitationtoken.FieldUses, field.TypeInt, value)
}
if value, ok := gitu.mutation.AddedUses(); ok {
if value, ok := _u.mutation.AddedUses(); ok {
_spec.AddField(groupinvitationtoken.FieldUses, field.TypeInt, value)
}
if gitu.mutation.GroupCleared() {
if _u.mutation.GroupCleared() {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.M2O,
Inverse: true,
@@ -180,7 +180,7 @@ func (gitu *GroupInvitationTokenUpdate) sqlSave(ctx context.Context) (n int, err
}
_spec.Edges.Clear = append(_spec.Edges.Clear, edge)
}
if nodes := gitu.mutation.GroupIDs(); len(nodes) > 0 {
if nodes := _u.mutation.GroupIDs(); len(nodes) > 0 {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.M2O,
Inverse: true,
@@ -196,7 +196,7 @@ func (gitu *GroupInvitationTokenUpdate) sqlSave(ctx context.Context) (n int, err
}
_spec.Edges.Add = append(_spec.Edges.Add, edge)
}
if n, err = sqlgraph.UpdateNodes(ctx, gitu.driver, _spec); err != nil {
if _node, err = sqlgraph.UpdateNodes(ctx, _u.driver, _spec); err != nil {
if _, ok := err.(*sqlgraph.NotFoundError); ok {
err = &NotFoundError{groupinvitationtoken.Label}
} else if sqlgraph.IsConstraintError(err) {
@@ -204,8 +204,8 @@ func (gitu *GroupInvitationTokenUpdate) sqlSave(ctx context.Context) (n int, err
}
return 0, err
}
gitu.mutation.done = true
return n, nil
_u.mutation.done = true
return _node, nil
}
// GroupInvitationTokenUpdateOne is the builder for updating a single GroupInvitationToken entity.
@@ -217,104 +217,104 @@ type GroupInvitationTokenUpdateOne struct {
}
// SetUpdatedAt sets the "updated_at" field.
func (gituo *GroupInvitationTokenUpdateOne) SetUpdatedAt(t time.Time) *GroupInvitationTokenUpdateOne {
gituo.mutation.SetUpdatedAt(t)
return gituo
func (_u *GroupInvitationTokenUpdateOne) SetUpdatedAt(v time.Time) *GroupInvitationTokenUpdateOne {
_u.mutation.SetUpdatedAt(v)
return _u
}
// SetToken sets the "token" field.
func (gituo *GroupInvitationTokenUpdateOne) SetToken(b []byte) *GroupInvitationTokenUpdateOne {
gituo.mutation.SetToken(b)
return gituo
func (_u *GroupInvitationTokenUpdateOne) SetToken(v []byte) *GroupInvitationTokenUpdateOne {
_u.mutation.SetToken(v)
return _u
}
// SetExpiresAt sets the "expires_at" field.
func (gituo *GroupInvitationTokenUpdateOne) SetExpiresAt(t time.Time) *GroupInvitationTokenUpdateOne {
gituo.mutation.SetExpiresAt(t)
return gituo
func (_u *GroupInvitationTokenUpdateOne) SetExpiresAt(v time.Time) *GroupInvitationTokenUpdateOne {
_u.mutation.SetExpiresAt(v)
return _u
}
// SetNillableExpiresAt sets the "expires_at" field if the given value is not nil.
func (gituo *GroupInvitationTokenUpdateOne) SetNillableExpiresAt(t *time.Time) *GroupInvitationTokenUpdateOne {
if t != nil {
gituo.SetExpiresAt(*t)
func (_u *GroupInvitationTokenUpdateOne) SetNillableExpiresAt(v *time.Time) *GroupInvitationTokenUpdateOne {
if v != nil {
_u.SetExpiresAt(*v)
}
return gituo
return _u
}
// SetUses sets the "uses" field.
func (gituo *GroupInvitationTokenUpdateOne) SetUses(i int) *GroupInvitationTokenUpdateOne {
gituo.mutation.ResetUses()
gituo.mutation.SetUses(i)
return gituo
func (_u *GroupInvitationTokenUpdateOne) SetUses(v int) *GroupInvitationTokenUpdateOne {
_u.mutation.ResetUses()
_u.mutation.SetUses(v)
return _u
}
// SetNillableUses sets the "uses" field if the given value is not nil.
func (gituo *GroupInvitationTokenUpdateOne) SetNillableUses(i *int) *GroupInvitationTokenUpdateOne {
if i != nil {
gituo.SetUses(*i)
func (_u *GroupInvitationTokenUpdateOne) SetNillableUses(v *int) *GroupInvitationTokenUpdateOne {
if v != nil {
_u.SetUses(*v)
}
return gituo
return _u
}
// AddUses adds i to the "uses" field.
func (gituo *GroupInvitationTokenUpdateOne) AddUses(i int) *GroupInvitationTokenUpdateOne {
gituo.mutation.AddUses(i)
return gituo
// AddUses adds value to the "uses" field.
func (_u *GroupInvitationTokenUpdateOne) AddUses(v int) *GroupInvitationTokenUpdateOne {
_u.mutation.AddUses(v)
return _u
}
// SetGroupID sets the "group" edge to the Group entity by ID.
func (gituo *GroupInvitationTokenUpdateOne) SetGroupID(id uuid.UUID) *GroupInvitationTokenUpdateOne {
gituo.mutation.SetGroupID(id)
return gituo
func (_u *GroupInvitationTokenUpdateOne) SetGroupID(id uuid.UUID) *GroupInvitationTokenUpdateOne {
_u.mutation.SetGroupID(id)
return _u
}
// SetNillableGroupID sets the "group" edge to the Group entity by ID if the given value is not nil.
func (gituo *GroupInvitationTokenUpdateOne) SetNillableGroupID(id *uuid.UUID) *GroupInvitationTokenUpdateOne {
func (_u *GroupInvitationTokenUpdateOne) SetNillableGroupID(id *uuid.UUID) *GroupInvitationTokenUpdateOne {
if id != nil {
gituo = gituo.SetGroupID(*id)
_u = _u.SetGroupID(*id)
}
return gituo
return _u
}
// SetGroup sets the "group" edge to the Group entity.
func (gituo *GroupInvitationTokenUpdateOne) SetGroup(g *Group) *GroupInvitationTokenUpdateOne {
return gituo.SetGroupID(g.ID)
func (_u *GroupInvitationTokenUpdateOne) SetGroup(v *Group) *GroupInvitationTokenUpdateOne {
return _u.SetGroupID(v.ID)
}
// Mutation returns the GroupInvitationTokenMutation object of the builder.
func (gituo *GroupInvitationTokenUpdateOne) Mutation() *GroupInvitationTokenMutation {
return gituo.mutation
func (_u *GroupInvitationTokenUpdateOne) Mutation() *GroupInvitationTokenMutation {
return _u.mutation
}
// ClearGroup clears the "group" edge to the Group entity.
func (gituo *GroupInvitationTokenUpdateOne) ClearGroup() *GroupInvitationTokenUpdateOne {
gituo.mutation.ClearGroup()
return gituo
func (_u *GroupInvitationTokenUpdateOne) ClearGroup() *GroupInvitationTokenUpdateOne {
_u.mutation.ClearGroup()
return _u
}
// Where appends a list predicates to the GroupInvitationTokenUpdate builder.
func (gituo *GroupInvitationTokenUpdateOne) Where(ps ...predicate.GroupInvitationToken) *GroupInvitationTokenUpdateOne {
gituo.mutation.Where(ps...)
return gituo
func (_u *GroupInvitationTokenUpdateOne) Where(ps ...predicate.GroupInvitationToken) *GroupInvitationTokenUpdateOne {
_u.mutation.Where(ps...)
return _u
}
// Select allows selecting one or more fields (columns) of the returned entity.
// The default is selecting all fields defined in the entity schema.
func (gituo *GroupInvitationTokenUpdateOne) Select(field string, fields ...string) *GroupInvitationTokenUpdateOne {
gituo.fields = append([]string{field}, fields...)
return gituo
func (_u *GroupInvitationTokenUpdateOne) Select(field string, fields ...string) *GroupInvitationTokenUpdateOne {
_u.fields = append([]string{field}, fields...)
return _u
}
// Save executes the query and returns the updated GroupInvitationToken entity.
func (gituo *GroupInvitationTokenUpdateOne) Save(ctx context.Context) (*GroupInvitationToken, error) {
gituo.defaults()
return withHooks(ctx, gituo.sqlSave, gituo.mutation, gituo.hooks)
func (_u *GroupInvitationTokenUpdateOne) Save(ctx context.Context) (*GroupInvitationToken, error) {
_u.defaults()
return withHooks(ctx, _u.sqlSave, _u.mutation, _u.hooks)
}
// SaveX is like Save, but panics if an error occurs.
func (gituo *GroupInvitationTokenUpdateOne) SaveX(ctx context.Context) *GroupInvitationToken {
node, err := gituo.Save(ctx)
func (_u *GroupInvitationTokenUpdateOne) SaveX(ctx context.Context) *GroupInvitationToken {
node, err := _u.Save(ctx)
if err != nil {
panic(err)
}
@@ -322,34 +322,34 @@ func (gituo *GroupInvitationTokenUpdateOne) SaveX(ctx context.Context) *GroupInv
}
// Exec executes the query on the entity.
func (gituo *GroupInvitationTokenUpdateOne) Exec(ctx context.Context) error {
_, err := gituo.Save(ctx)
func (_u *GroupInvitationTokenUpdateOne) Exec(ctx context.Context) error {
_, err := _u.Save(ctx)
return err
}
// ExecX is like Exec, but panics if an error occurs.
func (gituo *GroupInvitationTokenUpdateOne) ExecX(ctx context.Context) {
if err := gituo.Exec(ctx); err != nil {
func (_u *GroupInvitationTokenUpdateOne) ExecX(ctx context.Context) {
if err := _u.Exec(ctx); err != nil {
panic(err)
}
}
// defaults sets the default values of the builder before save.
func (gituo *GroupInvitationTokenUpdateOne) defaults() {
if _, ok := gituo.mutation.UpdatedAt(); !ok {
func (_u *GroupInvitationTokenUpdateOne) defaults() {
if _, ok := _u.mutation.UpdatedAt(); !ok {
v := groupinvitationtoken.UpdateDefaultUpdatedAt()
gituo.mutation.SetUpdatedAt(v)
_u.mutation.SetUpdatedAt(v)
}
}
func (gituo *GroupInvitationTokenUpdateOne) sqlSave(ctx context.Context) (_node *GroupInvitationToken, err error) {
func (_u *GroupInvitationTokenUpdateOne) sqlSave(ctx context.Context) (_node *GroupInvitationToken, err error) {
_spec := sqlgraph.NewUpdateSpec(groupinvitationtoken.Table, groupinvitationtoken.Columns, sqlgraph.NewFieldSpec(groupinvitationtoken.FieldID, field.TypeUUID))
id, ok := gituo.mutation.ID()
id, ok := _u.mutation.ID()
if !ok {
return nil, &ValidationError{Name: "id", err: errors.New(`ent: missing "GroupInvitationToken.id" for update`)}
}
_spec.Node.ID.Value = id
if fields := gituo.fields; len(fields) > 0 {
if fields := _u.fields; len(fields) > 0 {
_spec.Node.Columns = make([]string, 0, len(fields))
_spec.Node.Columns = append(_spec.Node.Columns, groupinvitationtoken.FieldID)
for _, f := range fields {
@@ -361,29 +361,29 @@ func (gituo *GroupInvitationTokenUpdateOne) sqlSave(ctx context.Context) (_node
}
}
}
if ps := gituo.mutation.predicates; len(ps) > 0 {
if ps := _u.mutation.predicates; len(ps) > 0 {
_spec.Predicate = func(selector *sql.Selector) {
for i := range ps {
ps[i](selector)
}
}
}
if value, ok := gituo.mutation.UpdatedAt(); ok {
if value, ok := _u.mutation.UpdatedAt(); ok {
_spec.SetField(groupinvitationtoken.FieldUpdatedAt, field.TypeTime, value)
}
if value, ok := gituo.mutation.Token(); ok {
if value, ok := _u.mutation.Token(); ok {
_spec.SetField(groupinvitationtoken.FieldToken, field.TypeBytes, value)
}
if value, ok := gituo.mutation.ExpiresAt(); ok {
if value, ok := _u.mutation.ExpiresAt(); ok {
_spec.SetField(groupinvitationtoken.FieldExpiresAt, field.TypeTime, value)
}
if value, ok := gituo.mutation.Uses(); ok {
if value, ok := _u.mutation.Uses(); ok {
_spec.SetField(groupinvitationtoken.FieldUses, field.TypeInt, value)
}
if value, ok := gituo.mutation.AddedUses(); ok {
if value, ok := _u.mutation.AddedUses(); ok {
_spec.AddField(groupinvitationtoken.FieldUses, field.TypeInt, value)
}
if gituo.mutation.GroupCleared() {
if _u.mutation.GroupCleared() {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.M2O,
Inverse: true,
@@ -396,7 +396,7 @@ func (gituo *GroupInvitationTokenUpdateOne) sqlSave(ctx context.Context) (_node
}
_spec.Edges.Clear = append(_spec.Edges.Clear, edge)
}
if nodes := gituo.mutation.GroupIDs(); len(nodes) > 0 {
if nodes := _u.mutation.GroupIDs(); len(nodes) > 0 {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.M2O,
Inverse: true,
@@ -412,10 +412,10 @@ func (gituo *GroupInvitationTokenUpdateOne) sqlSave(ctx context.Context) (_node
}
_spec.Edges.Add = append(_spec.Edges.Add, edge)
}
_node = &GroupInvitationToken{config: gituo.config}
_node = &GroupInvitationToken{config: _u.config}
_spec.Assign = _node.assignValues
_spec.ScanValues = _node.scanValues
if err = sqlgraph.UpdateNode(ctx, gituo.driver, _spec); err != nil {
if err = sqlgraph.UpdateNode(ctx, _u.driver, _spec); err != nil {
if _, ok := err.(*sqlgraph.NotFoundError); ok {
err = &NotFoundError{groupinvitationtoken.Label}
} else if sqlgraph.IsConstraintError(err) {
@@ -423,6 +423,6 @@ func (gituo *GroupInvitationTokenUpdateOne) sqlSave(ctx context.Context) (_node
}
return nil, err
}
gituo.mutation.done = true
_u.mutation.done = true
return _node, nil
}
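A hedged sketch of the update builder above. AddUses, Where, and Save are the methods renamed to the _u receiver in this diff; the groupinvitationtoken.ID predicate is assumed from ent's generated predicates, and client, ctx, and id are illustrative names.

// Sketch only: decrement the remaining uses on a specific token. Save returns
// the number of rows affected, matching the sqlSave path shown above.
func consumeUse(ctx context.Context, client *ent.Client, id uuid.UUID) (int, error) {
    return client.GroupInvitationToken.
        Update().
        Where(groupinvitationtoken.ID(id)).
        AddUses(-1).
        Save(ctx)
}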


@@ -4,50 +4,58 @@ package ent
import "github.com/google/uuid"
func (a *Attachment) GetID() uuid.UUID {
return a.ID
func (_m *Attachment) GetID() uuid.UUID {
return _m.ID
}
func (ar *AuthRoles) GetID() int {
return ar.ID
func (_m *AuthRoles) GetID() int {
return _m.ID
}
func (at *AuthTokens) GetID() uuid.UUID {
return at.ID
func (_m *AuthTokens) GetID() uuid.UUID {
return _m.ID
}
func (gr *Group) GetID() uuid.UUID {
return gr.ID
func (_m *Group) GetID() uuid.UUID {
return _m.ID
}
func (git *GroupInvitationToken) GetID() uuid.UUID {
return git.ID
func (_m *GroupInvitationToken) GetID() uuid.UUID {
return _m.ID
}
func (i *Item) GetID() uuid.UUID {
return i.ID
func (_m *Item) GetID() uuid.UUID {
return _m.ID
}
func (_if *ItemField) GetID() uuid.UUID {
return _if.ID
func (_m *ItemField) GetID() uuid.UUID {
return _m.ID
}
func (l *Label) GetID() uuid.UUID {
return l.ID
func (_m *ItemTemplate) GetID() uuid.UUID {
return _m.ID
}
func (l *Location) GetID() uuid.UUID {
return l.ID
func (_m *Label) GetID() uuid.UUID {
return _m.ID
}
func (me *MaintenanceEntry) GetID() uuid.UUID {
return me.ID
func (_m *Location) GetID() uuid.UUID {
return _m.ID
}
func (n *Notifier) GetID() uuid.UUID {
return n.ID
func (_m *MaintenanceEntry) GetID() uuid.UUID {
return _m.ID
}
func (u *User) GetID() uuid.UUID {
return u.ID
func (_m *Notifier) GetID() uuid.UUID {
return _m.ID
}
func (_m *TemplateField) GetID() uuid.UUID {
return _m.ID
}
func (_m *User) GetID() uuid.UUID {
return _m.ID
}
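The file above gives every entity a uniform GetID accessor. A small illustrative sketch of why that is useful follows; the interface and helper are not part of the repository, and note that AuthRoles.GetID returns an int and would not satisfy a UUID-based interface.

// Sketch only: handle different UUID-keyed entities through one interface.
type uuidEntity interface {
    GetID() uuid.UUID
}

func collectIDs(entities ...uuidEntity) []uuid.UUID {
    ids := make([]uuid.UUID, 0, len(entities))
    for _, e := range entities {
        ids = append(ids, e.GetID())
    }
    return ids
}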


@@ -93,6 +93,18 @@ func (f ItemFieldFunc) Mutate(ctx context.Context, m ent.Mutation) (ent.Value, e
return nil, fmt.Errorf("unexpected mutation type %T. expect *ent.ItemFieldMutation", m)
}
// The ItemTemplateFunc type is an adapter to allow the use of ordinary
// function as ItemTemplate mutator.
type ItemTemplateFunc func(context.Context, *ent.ItemTemplateMutation) (ent.Value, error)
// Mutate calls f(ctx, m).
func (f ItemTemplateFunc) Mutate(ctx context.Context, m ent.Mutation) (ent.Value, error) {
if mv, ok := m.(*ent.ItemTemplateMutation); ok {
return f(ctx, mv)
}
return nil, fmt.Errorf("unexpected mutation type %T. expect *ent.ItemTemplateMutation", m)
}
// The LabelFunc type is an adapter to allow the use of ordinary
// function as Label mutator.
type LabelFunc func(context.Context, *ent.LabelMutation) (ent.Value, error)
@@ -141,6 +153,18 @@ func (f NotifierFunc) Mutate(ctx context.Context, m ent.Mutation) (ent.Value, er
return nil, fmt.Errorf("unexpected mutation type %T. expect *ent.NotifierMutation", m)
}
// The TemplateFieldFunc type is an adapter to allow the use of ordinary
// function as TemplateField mutator.
type TemplateFieldFunc func(context.Context, *ent.TemplateFieldMutation) (ent.Value, error)
// Mutate calls f(ctx, m).
func (f TemplateFieldFunc) Mutate(ctx context.Context, m ent.Mutation) (ent.Value, error) {
if mv, ok := m.(*ent.TemplateFieldMutation); ok {
return f(ctx, mv)
}
return nil, fmt.Errorf("unexpected mutation type %T. expect *ent.TemplateFieldMutation", m)
}
// The UserFunc type is an adapter to allow the use of ordinary
// function as User mutator.
type UserFunc func(context.Context, *ent.UserMutation) (ent.Value, error)
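The new ItemTemplateFunc and TemplateFieldFunc adapters follow ent's usual hook pattern. A hedged sketch of registering one as a runtime hook follows; client.ItemTemplate.Use and the ent/hook package layout are ent's standard codegen conventions and are assumptions here, not part of this diff.

// Sketch only: wire an ItemTemplateFunc into the mutation pipeline.
client.ItemTemplate.Use(func(next ent.Mutator) ent.Mutator {
    return hook.ItemTemplateFunc(func(ctx context.Context, m *ent.ItemTemplateMutation) (ent.Value, error) {
        // Inspect or adjust the template mutation here, then continue the chain.
        return next.Mutate(ctx, m)
    })
})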


@@ -210,185 +210,185 @@ func (*Item) scanValues(columns []string) ([]any, error) {
// assignValues assigns the values that were returned from sql.Rows (after scanning)
// to the Item fields.
func (i *Item) assignValues(columns []string, values []any) error {
func (_m *Item) assignValues(columns []string, values []any) error {
if m, n := len(values), len(columns); m < n {
return fmt.Errorf("mismatch number of scan values: %d != %d", m, n)
}
-for j := range columns {
-switch columns[j] {
+for i := range columns {
+switch columns[i] {
case item.FieldID:
-if value, ok := values[j].(*uuid.UUID); !ok {
-return fmt.Errorf("unexpected type %T for field id", values[j])
+if value, ok := values[i].(*uuid.UUID); !ok {
+return fmt.Errorf("unexpected type %T for field id", values[i])
} else if value != nil {
-i.ID = *value
+_m.ID = *value
}
case item.FieldCreatedAt:
-if value, ok := values[j].(*sql.NullTime); !ok {
-return fmt.Errorf("unexpected type %T for field created_at", values[j])
+if value, ok := values[i].(*sql.NullTime); !ok {
+return fmt.Errorf("unexpected type %T for field created_at", values[i])
} else if value.Valid {
-i.CreatedAt = value.Time
+_m.CreatedAt = value.Time
}
case item.FieldUpdatedAt:
-if value, ok := values[j].(*sql.NullTime); !ok {
-return fmt.Errorf("unexpected type %T for field updated_at", values[j])
+if value, ok := values[i].(*sql.NullTime); !ok {
+return fmt.Errorf("unexpected type %T for field updated_at", values[i])
} else if value.Valid {
-i.UpdatedAt = value.Time
+_m.UpdatedAt = value.Time
}
case item.FieldName:
-if value, ok := values[j].(*sql.NullString); !ok {
-return fmt.Errorf("unexpected type %T for field name", values[j])
+if value, ok := values[i].(*sql.NullString); !ok {
+return fmt.Errorf("unexpected type %T for field name", values[i])
} else if value.Valid {
-i.Name = value.String
+_m.Name = value.String
}
case item.FieldDescription:
-if value, ok := values[j].(*sql.NullString); !ok {
-return fmt.Errorf("unexpected type %T for field description", values[j])
+if value, ok := values[i].(*sql.NullString); !ok {
+return fmt.Errorf("unexpected type %T for field description", values[i])
} else if value.Valid {
-i.Description = value.String
+_m.Description = value.String
}
case item.FieldImportRef:
-if value, ok := values[j].(*sql.NullString); !ok {
-return fmt.Errorf("unexpected type %T for field import_ref", values[j])
+if value, ok := values[i].(*sql.NullString); !ok {
+return fmt.Errorf("unexpected type %T for field import_ref", values[i])
} else if value.Valid {
-i.ImportRef = value.String
+_m.ImportRef = value.String
}
case item.FieldNotes:
-if value, ok := values[j].(*sql.NullString); !ok {
-return fmt.Errorf("unexpected type %T for field notes", values[j])
+if value, ok := values[i].(*sql.NullString); !ok {
+return fmt.Errorf("unexpected type %T for field notes", values[i])
} else if value.Valid {
-i.Notes = value.String
+_m.Notes = value.String
}
case item.FieldQuantity:
-if value, ok := values[j].(*sql.NullInt64); !ok {
-return fmt.Errorf("unexpected type %T for field quantity", values[j])
+if value, ok := values[i].(*sql.NullInt64); !ok {
+return fmt.Errorf("unexpected type %T for field quantity", values[i])
} else if value.Valid {
-i.Quantity = int(value.Int64)
+_m.Quantity = int(value.Int64)
}
case item.FieldInsured:
-if value, ok := values[j].(*sql.NullBool); !ok {
-return fmt.Errorf("unexpected type %T for field insured", values[j])
+if value, ok := values[i].(*sql.NullBool); !ok {
+return fmt.Errorf("unexpected type %T for field insured", values[i])
} else if value.Valid {
-i.Insured = value.Bool
+_m.Insured = value.Bool
}
case item.FieldArchived:
-if value, ok := values[j].(*sql.NullBool); !ok {
-return fmt.Errorf("unexpected type %T for field archived", values[j])
+if value, ok := values[i].(*sql.NullBool); !ok {
+return fmt.Errorf("unexpected type %T for field archived", values[i])
} else if value.Valid {
-i.Archived = value.Bool
+_m.Archived = value.Bool
}
case item.FieldAssetID:
-if value, ok := values[j].(*sql.NullInt64); !ok {
-return fmt.Errorf("unexpected type %T for field asset_id", values[j])
+if value, ok := values[i].(*sql.NullInt64); !ok {
+return fmt.Errorf("unexpected type %T for field asset_id", values[i])
} else if value.Valid {
-i.AssetID = int(value.Int64)
+_m.AssetID = int(value.Int64)
}
case item.FieldSyncChildItemsLocations:
-if value, ok := values[j].(*sql.NullBool); !ok {
-return fmt.Errorf("unexpected type %T for field sync_child_items_locations", values[j])
+if value, ok := values[i].(*sql.NullBool); !ok {
+return fmt.Errorf("unexpected type %T for field sync_child_items_locations", values[i])
} else if value.Valid {
-i.SyncChildItemsLocations = value.Bool
+_m.SyncChildItemsLocations = value.Bool
}
case item.FieldSerialNumber:
-if value, ok := values[j].(*sql.NullString); !ok {
-return fmt.Errorf("unexpected type %T for field serial_number", values[j])
+if value, ok := values[i].(*sql.NullString); !ok {
+return fmt.Errorf("unexpected type %T for field serial_number", values[i])
} else if value.Valid {
-i.SerialNumber = value.String
+_m.SerialNumber = value.String
}
case item.FieldModelNumber:
-if value, ok := values[j].(*sql.NullString); !ok {
-return fmt.Errorf("unexpected type %T for field model_number", values[j])
+if value, ok := values[i].(*sql.NullString); !ok {
+return fmt.Errorf("unexpected type %T for field model_number", values[i])
} else if value.Valid {
-i.ModelNumber = value.String
+_m.ModelNumber = value.String
}
case item.FieldManufacturer:
-if value, ok := values[j].(*sql.NullString); !ok {
-return fmt.Errorf("unexpected type %T for field manufacturer", values[j])
+if value, ok := values[i].(*sql.NullString); !ok {
+return fmt.Errorf("unexpected type %T for field manufacturer", values[i])
} else if value.Valid {
-i.Manufacturer = value.String
+_m.Manufacturer = value.String
}
case item.FieldLifetimeWarranty:
-if value, ok := values[j].(*sql.NullBool); !ok {
-return fmt.Errorf("unexpected type %T for field lifetime_warranty", values[j])
+if value, ok := values[i].(*sql.NullBool); !ok {
+return fmt.Errorf("unexpected type %T for field lifetime_warranty", values[i])
} else if value.Valid {
-i.LifetimeWarranty = value.Bool
+_m.LifetimeWarranty = value.Bool
}
case item.FieldWarrantyExpires:
-if value, ok := values[j].(*sql.NullTime); !ok {
-return fmt.Errorf("unexpected type %T for field warranty_expires", values[j])
+if value, ok := values[i].(*sql.NullTime); !ok {
+return fmt.Errorf("unexpected type %T for field warranty_expires", values[i])
} else if value.Valid {
-i.WarrantyExpires = value.Time
+_m.WarrantyExpires = value.Time
}
case item.FieldWarrantyDetails:
-if value, ok := values[j].(*sql.NullString); !ok {
-return fmt.Errorf("unexpected type %T for field warranty_details", values[j])
+if value, ok := values[i].(*sql.NullString); !ok {
+return fmt.Errorf("unexpected type %T for field warranty_details", values[i])
} else if value.Valid {
-i.WarrantyDetails = value.String
+_m.WarrantyDetails = value.String
}
case item.FieldPurchaseTime:
-if value, ok := values[j].(*sql.NullTime); !ok {
-return fmt.Errorf("unexpected type %T for field purchase_time", values[j])
+if value, ok := values[i].(*sql.NullTime); !ok {
+return fmt.Errorf("unexpected type %T for field purchase_time", values[i])
} else if value.Valid {
-i.PurchaseTime = value.Time
+_m.PurchaseTime = value.Time
}
case item.FieldPurchaseFrom:
-if value, ok := values[j].(*sql.NullString); !ok {
-return fmt.Errorf("unexpected type %T for field purchase_from", values[j])
+if value, ok := values[i].(*sql.NullString); !ok {
+return fmt.Errorf("unexpected type %T for field purchase_from", values[i])
} else if value.Valid {
-i.PurchaseFrom = value.String
+_m.PurchaseFrom = value.String
}
case item.FieldPurchasePrice:
-if value, ok := values[j].(*sql.NullFloat64); !ok {
-return fmt.Errorf("unexpected type %T for field purchase_price", values[j])
+if value, ok := values[i].(*sql.NullFloat64); !ok {
+return fmt.Errorf("unexpected type %T for field purchase_price", values[i])
} else if value.Valid {
-i.PurchasePrice = value.Float64
+_m.PurchasePrice = value.Float64
}
case item.FieldSoldTime:
-if value, ok := values[j].(*sql.NullTime); !ok {
-return fmt.Errorf("unexpected type %T for field sold_time", values[j])
+if value, ok := values[i].(*sql.NullTime); !ok {
+return fmt.Errorf("unexpected type %T for field sold_time", values[i])
} else if value.Valid {
-i.SoldTime = value.Time
+_m.SoldTime = value.Time
}
case item.FieldSoldTo:
-if value, ok := values[j].(*sql.NullString); !ok {
-return fmt.Errorf("unexpected type %T for field sold_to", values[j])
+if value, ok := values[i].(*sql.NullString); !ok {
+return fmt.Errorf("unexpected type %T for field sold_to", values[i])
} else if value.Valid {
-i.SoldTo = value.String
+_m.SoldTo = value.String
}
case item.FieldSoldPrice:
-if value, ok := values[j].(*sql.NullFloat64); !ok {
-return fmt.Errorf("unexpected type %T for field sold_price", values[j])
+if value, ok := values[i].(*sql.NullFloat64); !ok {
+return fmt.Errorf("unexpected type %T for field sold_price", values[i])
} else if value.Valid {
-i.SoldPrice = value.Float64
+_m.SoldPrice = value.Float64
}
case item.FieldSoldNotes:
-if value, ok := values[j].(*sql.NullString); !ok {
-return fmt.Errorf("unexpected type %T for field sold_notes", values[j])
+if value, ok := values[i].(*sql.NullString); !ok {
+return fmt.Errorf("unexpected type %T for field sold_notes", values[i])
} else if value.Valid {
-i.SoldNotes = value.String
+_m.SoldNotes = value.String
}
case item.ForeignKeys[0]:
-if value, ok := values[j].(*sql.NullScanner); !ok {
-return fmt.Errorf("unexpected type %T for field group_items", values[j])
+if value, ok := values[i].(*sql.NullScanner); !ok {
+return fmt.Errorf("unexpected type %T for field group_items", values[i])
} else if value.Valid {
-i.group_items = new(uuid.UUID)
-*i.group_items = *value.S.(*uuid.UUID)
+_m.group_items = new(uuid.UUID)
+*_m.group_items = *value.S.(*uuid.UUID)
}
case item.ForeignKeys[1]:
-if value, ok := values[j].(*sql.NullScanner); !ok {
-return fmt.Errorf("unexpected type %T for field item_children", values[j])
+if value, ok := values[i].(*sql.NullScanner); !ok {
+return fmt.Errorf("unexpected type %T for field item_children", values[i])
} else if value.Valid {
-i.item_children = new(uuid.UUID)
-*i.item_children = *value.S.(*uuid.UUID)
+_m.item_children = new(uuid.UUID)
+*_m.item_children = *value.S.(*uuid.UUID)
}
case item.ForeignKeys[2]:
-if value, ok := values[j].(*sql.NullScanner); !ok {
-return fmt.Errorf("unexpected type %T for field location_items", values[j])
+if value, ok := values[i].(*sql.NullScanner); !ok {
+return fmt.Errorf("unexpected type %T for field location_items", values[i])
} else if value.Valid {
-i.location_items = new(uuid.UUID)
-*i.location_items = *value.S.(*uuid.UUID)
+_m.location_items = new(uuid.UUID)
+*_m.location_items = *value.S.(*uuid.UUID)
}
default:
-i.selectValues.Set(columns[j], values[j])
+_m.selectValues.Set(columns[i], values[i])
}
}
return nil
@@ -396,144 +396,144 @@ func (i *Item) assignValues(columns []string, values []any) error {
// Value returns the ent.Value that was dynamically selected and assigned to the Item.
// This includes values selected through modifiers, order, etc.
-func (i *Item) Value(name string) (ent.Value, error) {
-return i.selectValues.Get(name)
+func (_m *Item) Value(name string) (ent.Value, error) {
+return _m.selectValues.Get(name)
}
// QueryGroup queries the "group" edge of the Item entity.
-func (i *Item) QueryGroup() *GroupQuery {
-return NewItemClient(i.config).QueryGroup(i)
+func (_m *Item) QueryGroup() *GroupQuery {
+return NewItemClient(_m.config).QueryGroup(_m)
}
// QueryParent queries the "parent" edge of the Item entity.
-func (i *Item) QueryParent() *ItemQuery {
-return NewItemClient(i.config).QueryParent(i)
+func (_m *Item) QueryParent() *ItemQuery {
+return NewItemClient(_m.config).QueryParent(_m)
}
// QueryChildren queries the "children" edge of the Item entity.
-func (i *Item) QueryChildren() *ItemQuery {
-return NewItemClient(i.config).QueryChildren(i)
+func (_m *Item) QueryChildren() *ItemQuery {
+return NewItemClient(_m.config).QueryChildren(_m)
}
// QueryLabel queries the "label" edge of the Item entity.
-func (i *Item) QueryLabel() *LabelQuery {
-return NewItemClient(i.config).QueryLabel(i)
+func (_m *Item) QueryLabel() *LabelQuery {
+return NewItemClient(_m.config).QueryLabel(_m)
}
// QueryLocation queries the "location" edge of the Item entity.
-func (i *Item) QueryLocation() *LocationQuery {
-return NewItemClient(i.config).QueryLocation(i)
+func (_m *Item) QueryLocation() *LocationQuery {
+return NewItemClient(_m.config).QueryLocation(_m)
}
// QueryFields queries the "fields" edge of the Item entity.
-func (i *Item) QueryFields() *ItemFieldQuery {
-return NewItemClient(i.config).QueryFields(i)
+func (_m *Item) QueryFields() *ItemFieldQuery {
+return NewItemClient(_m.config).QueryFields(_m)
}
// QueryMaintenanceEntries queries the "maintenance_entries" edge of the Item entity.
-func (i *Item) QueryMaintenanceEntries() *MaintenanceEntryQuery {
-return NewItemClient(i.config).QueryMaintenanceEntries(i)
+func (_m *Item) QueryMaintenanceEntries() *MaintenanceEntryQuery {
+return NewItemClient(_m.config).QueryMaintenanceEntries(_m)
}
// QueryAttachments queries the "attachments" edge of the Item entity.
-func (i *Item) QueryAttachments() *AttachmentQuery {
-return NewItemClient(i.config).QueryAttachments(i)
+func (_m *Item) QueryAttachments() *AttachmentQuery {
+return NewItemClient(_m.config).QueryAttachments(_m)
}
// Update returns a builder for updating this Item.
// Note that you need to call Item.Unwrap() before calling this method if this Item
// was returned from a transaction, and the transaction was committed or rolled back.
-func (i *Item) Update() *ItemUpdateOne {
-return NewItemClient(i.config).UpdateOne(i)
+func (_m *Item) Update() *ItemUpdateOne {
+return NewItemClient(_m.config).UpdateOne(_m)
}
// Unwrap unwraps the Item entity that was returned from a transaction after it was closed,
// so that all future queries will be executed through the driver which created the transaction.
-func (i *Item) Unwrap() *Item {
-_tx, ok := i.config.driver.(*txDriver)
+func (_m *Item) Unwrap() *Item {
+_tx, ok := _m.config.driver.(*txDriver)
if !ok {
panic("ent: Item is not a transactional entity")
}
-i.config.driver = _tx.drv
-return i
+_m.config.driver = _tx.drv
+return _m
}
// String implements the fmt.Stringer.
-func (i *Item) String() string {
+func (_m *Item) String() string {
var builder strings.Builder
builder.WriteString("Item(")
-builder.WriteString(fmt.Sprintf("id=%v, ", i.ID))
+builder.WriteString(fmt.Sprintf("id=%v, ", _m.ID))
builder.WriteString("created_at=")
-builder.WriteString(i.CreatedAt.Format(time.ANSIC))
+builder.WriteString(_m.CreatedAt.Format(time.ANSIC))
builder.WriteString(", ")
builder.WriteString("updated_at=")
-builder.WriteString(i.UpdatedAt.Format(time.ANSIC))
+builder.WriteString(_m.UpdatedAt.Format(time.ANSIC))
builder.WriteString(", ")
builder.WriteString("name=")
-builder.WriteString(i.Name)
+builder.WriteString(_m.Name)
builder.WriteString(", ")
builder.WriteString("description=")
-builder.WriteString(i.Description)
+builder.WriteString(_m.Description)
builder.WriteString(", ")
builder.WriteString("import_ref=")
-builder.WriteString(i.ImportRef)
+builder.WriteString(_m.ImportRef)
builder.WriteString(", ")
builder.WriteString("notes=")
-builder.WriteString(i.Notes)
+builder.WriteString(_m.Notes)
builder.WriteString(", ")
builder.WriteString("quantity=")
-builder.WriteString(fmt.Sprintf("%v", i.Quantity))
+builder.WriteString(fmt.Sprintf("%v", _m.Quantity))
builder.WriteString(", ")
builder.WriteString("insured=")
-builder.WriteString(fmt.Sprintf("%v", i.Insured))
+builder.WriteString(fmt.Sprintf("%v", _m.Insured))
builder.WriteString(", ")
builder.WriteString("archived=")
-builder.WriteString(fmt.Sprintf("%v", i.Archived))
+builder.WriteString(fmt.Sprintf("%v", _m.Archived))
builder.WriteString(", ")
builder.WriteString("asset_id=")
-builder.WriteString(fmt.Sprintf("%v", i.AssetID))
+builder.WriteString(fmt.Sprintf("%v", _m.AssetID))
builder.WriteString(", ")
builder.WriteString("sync_child_items_locations=")
-builder.WriteString(fmt.Sprintf("%v", i.SyncChildItemsLocations))
+builder.WriteString(fmt.Sprintf("%v", _m.SyncChildItemsLocations))
builder.WriteString(", ")
builder.WriteString("serial_number=")
-builder.WriteString(i.SerialNumber)
+builder.WriteString(_m.SerialNumber)
builder.WriteString(", ")
builder.WriteString("model_number=")
-builder.WriteString(i.ModelNumber)
+builder.WriteString(_m.ModelNumber)
builder.WriteString(", ")
builder.WriteString("manufacturer=")
-builder.WriteString(i.Manufacturer)
+builder.WriteString(_m.Manufacturer)
builder.WriteString(", ")
builder.WriteString("lifetime_warranty=")
-builder.WriteString(fmt.Sprintf("%v", i.LifetimeWarranty))
+builder.WriteString(fmt.Sprintf("%v", _m.LifetimeWarranty))
builder.WriteString(", ")
builder.WriteString("warranty_expires=")
-builder.WriteString(i.WarrantyExpires.Format(time.ANSIC))
+builder.WriteString(_m.WarrantyExpires.Format(time.ANSIC))
builder.WriteString(", ")
builder.WriteString("warranty_details=")
-builder.WriteString(i.WarrantyDetails)
+builder.WriteString(_m.WarrantyDetails)
builder.WriteString(", ")
builder.WriteString("purchase_time=")
-builder.WriteString(i.PurchaseTime.Format(time.ANSIC))
+builder.WriteString(_m.PurchaseTime.Format(time.ANSIC))
builder.WriteString(", ")
builder.WriteString("purchase_from=")
-builder.WriteString(i.PurchaseFrom)
+builder.WriteString(_m.PurchaseFrom)
builder.WriteString(", ")
builder.WriteString("purchase_price=")
-builder.WriteString(fmt.Sprintf("%v", i.PurchasePrice))
+builder.WriteString(fmt.Sprintf("%v", _m.PurchasePrice))
builder.WriteString(", ")
builder.WriteString("sold_time=")
-builder.WriteString(i.SoldTime.Format(time.ANSIC))
+builder.WriteString(_m.SoldTime.Format(time.ANSIC))
builder.WriteString(", ")
builder.WriteString("sold_to=")
-builder.WriteString(i.SoldTo)
+builder.WriteString(_m.SoldTo)
builder.WriteString(", ")
builder.WriteString("sold_price=")
-builder.WriteString(fmt.Sprintf("%v", i.SoldPrice))
+builder.WriteString(fmt.Sprintf("%v", _m.SoldPrice))
builder.WriteString(", ")
builder.WriteString("sold_notes=")
-builder.WriteString(i.SoldNotes)
+builder.WriteString(_m.SoldNotes)
builder.WriteByte(')')
return builder.String()
}
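In the hunks above the change is purely mechanical: ent regenerated the Item entity with the receiver renamed from i to _m (and the scan loop index from j to i); the scanning logic itself is untouched. That logic reads each column into a nullable wrapper from database/sql and copies the value into the struct only when it is valid. The following is a minimal, self-contained sketch of that same pattern, not Homebox's generated code; the struct and column names are illustrative only.

package main

import (
	"database/sql"
	"fmt"
)

// item mirrors the shape of an ent-style entity with optional columns.
type item struct {
	Name     string
	Quantity int
}

// assignValues copies scanned nullable values into the struct, skipping NULLs,
// in the same style as the generated assignValues above.
func assignValues(dst *item, columns []string, values []any) error {
	if m, n := len(values), len(columns); m < n {
		return fmt.Errorf("mismatch number of scan values: %d != %d", m, n)
	}
	for i, col := range columns {
		switch col {
		case "name":
			v, ok := values[i].(*sql.NullString)
			if !ok {
				return fmt.Errorf("unexpected type %T for field name", values[i])
			}
			if v.Valid {
				dst.Name = v.String
			}
		case "quantity":
			v, ok := values[i].(*sql.NullInt64)
			if !ok {
				return fmt.Errorf("unexpected type %T for field quantity", values[i])
			}
			if v.Valid {
				dst.Quantity = int(v.Int64)
			}
		}
	}
	return nil
}

func main() {
	var it item
	cols := []string{"name", "quantity"}
	vals := []any{&sql.NullString{String: "Drill", Valid: true}, &sql.NullInt64{Int64: 2, Valid: true}}
	if err := assignValues(&it, cols, vals); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", it) // {Name:Drill Quantity:2}
}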

File diff suppressed because it is too large.

View File

@@ -20,56 +20,56 @@ type ItemDelete struct {
}
// Where appends a list predicates to the ItemDelete builder.
-func (id *ItemDelete) Where(ps ...predicate.Item) *ItemDelete {
-id.mutation.Where(ps...)
-return id
+func (_d *ItemDelete) Where(ps ...predicate.Item) *ItemDelete {
+_d.mutation.Where(ps...)
+return _d
}
// Exec executes the deletion query and returns how many vertices were deleted.
-func (id *ItemDelete) Exec(ctx context.Context) (int, error) {
-return withHooks(ctx, id.sqlExec, id.mutation, id.hooks)
+func (_d *ItemDelete) Exec(ctx context.Context) (int, error) {
+return withHooks(ctx, _d.sqlExec, _d.mutation, _d.hooks)
}
// ExecX is like Exec, but panics if an error occurs.
-func (id *ItemDelete) ExecX(ctx context.Context) int {
-n, err := id.Exec(ctx)
+func (_d *ItemDelete) ExecX(ctx context.Context) int {
+n, err := _d.Exec(ctx)
if err != nil {
panic(err)
}
return n
}
-func (id *ItemDelete) sqlExec(ctx context.Context) (int, error) {
+func (_d *ItemDelete) sqlExec(ctx context.Context) (int, error) {
_spec := sqlgraph.NewDeleteSpec(item.Table, sqlgraph.NewFieldSpec(item.FieldID, field.TypeUUID))
-if ps := id.mutation.predicates; len(ps) > 0 {
+if ps := _d.mutation.predicates; len(ps) > 0 {
_spec.Predicate = func(selector *sql.Selector) {
for i := range ps {
ps[i](selector)
}
}
}
-affected, err := sqlgraph.DeleteNodes(ctx, id.driver, _spec)
+affected, err := sqlgraph.DeleteNodes(ctx, _d.driver, _spec)
if err != nil && sqlgraph.IsConstraintError(err) {
err = &ConstraintError{msg: err.Error(), wrap: err}
}
-id.mutation.done = true
+_d.mutation.done = true
return affected, err
}
// ItemDeleteOne is the builder for deleting a single Item entity.
type ItemDeleteOne struct {
-id *ItemDelete
+_d *ItemDelete
}
// Where appends a list predicates to the ItemDelete builder.
-func (ido *ItemDeleteOne) Where(ps ...predicate.Item) *ItemDeleteOne {
-ido.id.mutation.Where(ps...)
-return ido
+func (_d *ItemDeleteOne) Where(ps ...predicate.Item) *ItemDeleteOne {
+_d._d.mutation.Where(ps...)
+return _d
}
// Exec executes the deletion query.
-func (ido *ItemDeleteOne) Exec(ctx context.Context) error {
-n, err := ido.id.Exec(ctx)
+func (_d *ItemDeleteOne) Exec(ctx context.Context) error {
+n, err := _d._d.Exec(ctx)
switch {
case err != nil:
return err
@@ -81,8 +81,8 @@ func (ido *ItemDeleteOne) Exec(ctx context.Context) error {
}
// ExecX is like Exec, but panics if an error occurs.
-func (ido *ItemDeleteOne) ExecX(ctx context.Context) {
-if err := ido.Exec(ctx); err != nil {
+func (_d *ItemDeleteOne) ExecX(ctx context.Context) {
+if err := _d.Exec(ctx); err != nil {
panic(err)
}
}
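As with the Item entity, the ItemDelete hunk only renames receivers (id/ido to _d). The single-entity builder delegates to the embedded bulk builder and treats anything other than a successful one-row delete as an error, with ExecX as the panic-on-error wrapper. A stripped-down sketch of that delegation pattern, with illustrative names and a stubbed delete in place of the real SQL execution:

package main

import (
	"context"
	"errors"
	"fmt"
)

// itemDelete stands in for the bulk delete builder; affected simulates rows removed.
type itemDelete struct{ affected int }

func (d *itemDelete) Exec(ctx context.Context) (int, error) { return d.affected, nil }

// itemDeleteOne wraps the bulk builder and expects exactly one row to be deleted.
type itemDeleteOne struct{ d *itemDelete }

func (o *itemDeleteOne) Exec(ctx context.Context) error {
	n, err := o.d.Exec(ctx)
	switch {
	case err != nil:
		return err
	case n == 0:
		return errors.New("item not found")
	default:
		return nil
	}
}

// ExecX is like Exec, but panics if an error occurs.
func (o *itemDeleteOne) ExecX(ctx context.Context) {
	if err := o.Exec(ctx); err != nil {
		panic(err)
	}
}

func main() {
	one := &itemDeleteOne{d: &itemDelete{affected: 1}}
	one.ExecX(context.Background())
	fmt.Println("deleted")
}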

View File

@@ -0,0 +1,120 @@
package ent
import (
"entgo.io/ent/dialect/sql"
"github.com/sysadminsmedia/homebox/backend/internal/data/ent/item"
"github.com/sysadminsmedia/homebox/backend/internal/data/ent/predicate"
"github.com/sysadminsmedia/homebox/backend/pkgs/textutils"
)
// AccentInsensitiveContains creates a predicate that performs accent-insensitive text search.
// It normalizes both the database field value and the search value for comparison.
func AccentInsensitiveContains(field string, searchValue string) predicate.Item {
if searchValue == "" {
return predicate.Item(func(s *sql.Selector) {
// Return a predicate that never matches if search is empty
s.Where(sql.False())
})
}
// Normalize the search value
normalizedSearch := textutils.NormalizeSearchQuery(searchValue)
return predicate.Item(func(s *sql.Selector) {
dialect := s.Dialect()
switch dialect {
case "sqlite3":
// For SQLite, we'll create a custom normalization function using REPLACE
// to handle common accented characters
normalizeFunc := buildSQLiteNormalizeExpression(s.C(field))
s.Where(sql.ExprP(
"LOWER("+normalizeFunc+") LIKE ?",
"%"+normalizedSearch+"%",
))
case "postgres":
// For PostgreSQL, use REPLACE-based normalization to avoid unaccent dependency
normalizeFunc := buildGenericNormalizeExpression(s.C(field))
// Use sql.P() for proper PostgreSQL parameter binding ($1, $2, etc.)
s.Where(sql.P(func(b *sql.Builder) {
b.WriteString("LOWER(")
b.WriteString(normalizeFunc)
b.WriteString(") LIKE ")
b.Arg("%" + normalizedSearch + "%")
}))
default:
// Default fallback using REPLACE for common accented characters
normalizeFunc := buildGenericNormalizeExpression(s.C(field))
s.Where(sql.ExprP(
"LOWER("+normalizeFunc+") LIKE ?",
"%"+normalizedSearch+"%",
))
}
})
}
// buildSQLiteNormalizeExpression creates a SQLite expression to normalize accented characters
func buildSQLiteNormalizeExpression(fieldExpr string) string {
return buildGenericNormalizeExpression(fieldExpr)
}
// buildGenericNormalizeExpression creates a database-agnostic expression to normalize common accented characters
func buildGenericNormalizeExpression(fieldExpr string) string {
// Chain REPLACE functions to handle the most common accented characters
// Focused on the most frequently used accents in Spanish, French, and Portuguese
// Ordered by frequency of use for better performance
normalized := fieldExpr
// Most common accented characters ordered by frequency
commonAccents := []struct {
from, to string
}{
// Spanish - most common
{"á", "a"}, {"é", "e"}, {"í", "i"}, {"ó", "o"}, {"ú", "u"}, {"ñ", "n"},
{"Á", "A"}, {"É", "E"}, {"Í", "I"}, {"Ó", "O"}, {"Ú", "U"}, {"Ñ", "N"},
// French - most common
{"è", "e"}, {"ê", "e"}, {"à", "a"}, {"ç", "c"},
{"È", "E"}, {"Ê", "E"}, {"À", "A"}, {"Ç", "C"},
// German umlauts and Portuguese - common
{"ä", "a"}, {"ö", "o"}, {"ü", "u"}, {"ã", "a"}, {"õ", "o"},
{"Ä", "A"}, {"Ö", "O"}, {"Ü", "U"}, {"Ã", "A"}, {"Õ", "O"},
}
for _, accent := range commonAccents {
normalized = "REPLACE(" + normalized + ", '" + accent.from + "', '" + accent.to + "')"
}
return normalized
}
// ItemNameAccentInsensitiveContains creates an accent-insensitive search predicate for the item name field.
func ItemNameAccentInsensitiveContains(value string) predicate.Item {
return AccentInsensitiveContains(item.FieldName, value)
}
// ItemDescriptionAccentInsensitiveContains creates an accent-insensitive search predicate for the item description field.
func ItemDescriptionAccentInsensitiveContains(value string) predicate.Item {
return AccentInsensitiveContains(item.FieldDescription, value)
}
// ItemSerialNumberAccentInsensitiveContains creates an accent-insensitive search predicate for the item serial number field.
func ItemSerialNumberAccentInsensitiveContains(value string) predicate.Item {
return AccentInsensitiveContains(item.FieldSerialNumber, value)
}
// ItemModelNumberAccentInsensitiveContains creates an accent-insensitive search predicate for the item model number field.
func ItemModelNumberAccentInsensitiveContains(value string) predicate.Item {
return AccentInsensitiveContains(item.FieldModelNumber, value)
}
// ItemManufacturerAccentInsensitiveContains creates an accent-insensitive search predicate for the item manufacturer field.
func ItemManufacturerAccentInsensitiveContains(value string) predicate.Item {
return AccentInsensitiveContains(item.FieldManufacturer, value)
}
// ItemNotesAccentInsensitiveContains creates an accent-insensitive search predicate for the item notes field.
func ItemNotesAccentInsensitiveContains(value string) predicate.Item {
return AccentInsensitiveContains(item.FieldNotes, value)
}
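The exported helpers above return ordinary ent predicates, so they can be dropped into any Item query. A sketch of how a caller might combine them, assuming the generated Homebox ent client; the function and package below are illustrative and not part of this diff, and the client wiring (driver, migrations) is not shown:

package repo // illustrative location only

import (
	"context"

	"github.com/sysadminsmedia/homebox/backend/internal/data/ent"
	"github.com/sysadminsmedia/homebox/backend/internal/data/ent/item"
)

// searchItems runs an accent-insensitive search across a few text fields.
func searchItems(ctx context.Context, client *ent.Client, q string) ([]*ent.Item, error) {
	return client.Item.Query().
		Where(item.Or(
			ent.ItemNameAccentInsensitiveContains(q),
			ent.ItemDescriptionAccentInsensitiveContains(q),
			ent.ItemNotesAccentInsensitiveContains(q),
		)).
		All(ctx)
}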

View File

@@ -0,0 +1,147 @@
package ent
import (
"testing"
"github.com/stretchr/testify/assert"
)
func TestBuildGenericNormalizeExpression(t *testing.T) {
tests := []struct {
name string
field string
expected string
}{
{
name: "Simple field name",
field: "name",
expected: "name", // Should be wrapped in many REPLACE functions
},
{
name: "Complex field name",
field: "description",
expected: "description",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := buildGenericNormalizeExpression(tt.field)
// Should contain the original field
assert.Contains(t, result, tt.field)
// Should contain REPLACE functions for accent normalization
assert.Contains(t, result, "REPLACE(")
// Should handle common accented characters
assert.Contains(t, result, "'á'", "Should handle Spanish á")
assert.Contains(t, result, "'é'", "Should handle Spanish é")
assert.Contains(t, result, "'ñ'", "Should handle Spanish ñ")
assert.Contains(t, result, "'ü'", "Should handle German ü")
// Should handle uppercase accents too
assert.Contains(t, result, "'Á'", "Should handle uppercase Spanish Á")
assert.Contains(t, result, "'É'", "Should handle uppercase Spanish É")
})
}
}
func TestSQLiteNormalizeExpression(t *testing.T) {
result := buildSQLiteNormalizeExpression("test_field")
// Should contain the field name and REPLACE functions
assert.Contains(t, result, "test_field")
assert.Contains(t, result, "REPLACE(")
// Check for some specific accent replacements (order doesn't matter)
assert.Contains(t, result, "'á'", "Should handle Spanish á")
assert.Contains(t, result, "'ó'", "Should handle Spanish ó")
}
func TestAccentInsensitivePredicateCreation(t *testing.T) {
tests := []struct {
name string
field string
searchValue string
description string
}{
{
name: "Normal search value",
field: "name",
searchValue: "electronica",
description: "Should create predicate for normal search",
},
{
name: "Accented search value",
field: "description",
searchValue: "electrónica",
description: "Should create predicate for accented search",
},
{
name: "Empty search value",
field: "name",
searchValue: "",
description: "Should handle empty search gracefully",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
predicate := AccentInsensitiveContains(tt.field, tt.searchValue)
assert.NotNil(t, predicate, tt.description)
})
}
}
func TestSpecificItemPredicates(t *testing.T) {
tests := []struct {
name string
predicateFunc func(string) interface{}
searchValue string
description string
}{
{
name: "ItemNameAccentInsensitiveContains",
predicateFunc: func(val string) interface{} { return ItemNameAccentInsensitiveContains(val) },
searchValue: "electronica",
description: "Should create accent-insensitive name search predicate",
},
{
name: "ItemDescriptionAccentInsensitiveContains",
predicateFunc: func(val string) interface{} { return ItemDescriptionAccentInsensitiveContains(val) },
searchValue: "descripcion",
description: "Should create accent-insensitive description search predicate",
},
{
name: "ItemManufacturerAccentInsensitiveContains",
predicateFunc: func(val string) interface{} { return ItemManufacturerAccentInsensitiveContains(val) },
searchValue: "compañia",
description: "Should create accent-insensitive manufacturer search predicate",
},
{
name: "ItemSerialNumberAccentInsensitiveContains",
predicateFunc: func(val string) interface{} { return ItemSerialNumberAccentInsensitiveContains(val) },
searchValue: "sn123",
description: "Should create accent-insensitive serial number search predicate",
},
{
name: "ItemModelNumberAccentInsensitiveContains",
predicateFunc: func(val string) interface{} { return ItemModelNumberAccentInsensitiveContains(val) },
searchValue: "model456",
description: "Should create accent-insensitive model number search predicate",
},
{
name: "ItemNotesAccentInsensitiveContains",
predicateFunc: func(val string) interface{} { return ItemNotesAccentInsensitiveContains(val) },
searchValue: "notas importantes",
description: "Should create accent-insensitive notes search predicate",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
predicate := tt.predicateFunc(tt.searchValue)
assert.NotNil(t, predicate, tt.description)
})
}
}
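For reference, buildGenericNormalizeExpression emits a nested REPLACE chain: one REPLACE call per accent pair wrapped around the column expression, which the predicate then folds into LOWER(...) LIKE '%query%'. A tiny standalone sketch (reduced to three pairs; the real helper in the diff covers roughly thirty) that prints the same shape:

package main

import "fmt"

// normalizeExpr mirrors the REPLACE-chaining approach with a reduced accent table.
func normalizeExpr(field string) string {
	pairs := []struct{ from, to string }{
		{"á", "a"}, {"é", "e"}, {"ñ", "n"},
	}
	expr := field
	for _, p := range pairs {
		expr = "REPLACE(" + expr + ", '" + p.from + "', '" + p.to + "')"
	}
	return expr
}

func main() {
	// Prints: REPLACE(REPLACE(REPLACE(name, 'á', 'a'), 'é', 'e'), 'ñ', 'n')
	fmt.Println(normalizeExpr("name"))
	// The predicate wraps this expression in LOWER(...) LIKE '%<normalized query>%'.
}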

Some files were not shown because too many files have changed in this diff.