
71 Commits

Author SHA1 Message Date
0dfa41da8e build(ci): migrate from Drone to Woodpecker
Some checks failed
ci/woodpecker/tag/woodpecker Pipeline failed
Replace .drone.yml with .woodpecker.yaml and update
scripts/run-goreleaser to use CI_COMMIT_TAG instead
of DRONE_TAG.
2026-03-07 16:18:53 -08:00
e4f6d8cafb fix(chdb): rename geodns references to dns
All checks were successful
continuous-integration/drone/push Build is passing
ClickHouse DNS tables moved from geodns/geodns3 to
a single dns database.
2026-03-07 16:07:20 -08:00
1b1413a632 build: Use go 1.26
2026-03-07 16:05:28 -08:00
85d86bc837 build: update go and dependencies
2025-09-27 08:17:04 -07:00
196f90a2b9 fix(db): use int for netspeed_active to prevent overflow
GetZoneStatsData and GetZoneStatsV2's netspeed_active values can
exceed 2 billion, causing 32-bit integer overflow. Changed from
int32/uint32 to int (64-bit on modern systems) to handle large
network speed totals.

- Update sqlc column overrides to use int type
- Fix type compatibility in dnsanswers.go zoneTotals map
- Regenerate database code with new types

Fixes https://community.ntppool.org/t/error-message-displayed-on-the-monitoring-score-page/4063
2025-09-21 00:08:21 -07:00
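The wraparound this commit fixes is easy to demonstrate in a few lines of Go (the values below are illustrative, not taken from production data):

```go
package main

import "fmt"

func main() {
	// Hypothetical netspeed totals; real zone sums can exceed math.MaxInt32.
	var a, b int32 = 1_500_000_000, 1_500_000_000

	sum32 := a + b         // wraps: 3_000_000_000 does not fit in int32
	sum := int(a) + int(b) // int is 64-bit on modern platforms

	fmt.Println(sum32) // negative: the overflow the fix addresses
	fmt.Println(sum)   // 3000000000
}
```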
02a6f587bb Update schema 2025-09-20 10:29:53 -07:00
2dfc355f7c style: format Go code with gofumpt
Apply consistent formatting to Go source files using gofumpt
as required by pre-commit guidelines.
2025-08-03 16:06:59 -07:00
3e6a0f9e63 fix(api): include deleted monitors in name-based lookups
Remove status filter from GetMonitorByNameAndIPVersion query to allow
historical score data for deleted monitors to be accessible when
querying by monitor name/TLS name, making behavior consistent with
ID-based queries.
2025-08-03 14:53:21 -07:00
9c6b8d1867 fix(api): handle score monitors in name-based lookups
Score monitors have type='score' and ip_version=NULL, but the
GetMonitorByNameAndIPVersion query required ip_version to match.
This broke monitor lookups by name for score monitors.

Modified query to match either:
- Regular monitors with specified ip_version
- Score monitors with NULL ip_version

Fixes issue reported by Ben Harris at:
https://community.ntppool.org/t/monitor-recentmedian-no-longer-works/4002
2025-08-04 20:43:53 -07:00
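The matching rule described above can be sketched as a WHERE clause (illustrative only; the actual sqlc query is not shown on this page):

```go
package chdb

// monitorByNameAndIPVersion sketches the described matching: regular
// monitors must match the requested ip_version, while score monitors
// (type = 'score') carry a NULL ip_version and match by name alone.
const monitorByNameAndIPVersion = `
SELECT id, tls_name, type, ip_version
FROM monitors
WHERE tls_name = ?
  AND (ip_version = ?
       OR (type = 'score' AND ip_version IS NULL))
`
```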
393d532ce2 feat(api): add relative time support to v2 scores endpoint
- Add parseRelativeTime function supporting "-3d", "-2h", "-30m" format
- Update parseTimeRangeParams to handle Unix timestamps and relative times
- Add unit tests with comprehensive coverage for all time formats
- Document v2 API in API.md with examples and migration guide

Enables intuitive time queries like from=-3d&to=-1h instead of
Unix timestamps, improving developer experience for the enhanced
v2 endpoint that supports 50k records vs legacy 10k limit.
2025-08-03 12:12:22 -07:00
267c279f3d Update dependencies
2025-08-02 19:48:42 -07:00
eb5459abf3 fix(api): protocol-aware monitor filtering for multi-protocol monitors
Servers with monitor filtering returned incorrect results when monitors
have same names but different protocols (v4/v6). Monitor lookup now
considers both name and IP version to match the correct protocol.

- Add GetMonitorByNameAndIPVersion SQL query with protocol matching
- Update history parameter parsing to use server IP version context
- Fix both /scores/{ip}/log and Grafana endpoints
- Remove unused GetMonitorByName query

Fixes abh/ntppool#264
Reported-by: Anssi Johansson <https://github.com/avijc>
2025-07-27 00:37:49 -07:00
8262b1442f feat(api): add Grafana time range endpoint for scores
- Add /api/v2/server/scores/{server}/{mode} endpoint
- Support time range queries with from/to parameters
- Return data in Grafana table format for visualization
- Fix routing pattern to handle IP addresses correctly
- Add comprehensive parameter validation and error handling
2025-07-27 02:18:32 -07:00
d4bf8d9e16 feat(api): add Grafana test endpoint for table format
Add `/api/v2/test/grafana-table` endpoint to validate Grafana
table format compatibility before implementing the full time
range API.

- Create server/grafana.go with table format structures
- Add structured logging and OpenTelemetry tracing
- Include realistic NTP Pool sample data with null handling
- Set proper CORS and cache headers for testing
- Update implementation plan with Phase 0 completion status

Ready for Grafana JSON API data source integration testing.
2025-07-26 09:03:46 -07:00
6c5b762a57 Update dependencies
2025-07-05 12:29:51 -07:00
fd6e87cf2d fix(history): sanitize NULL bytes in CSV error output 2025-07-05 12:27:35 -07:00
a22d5ebc7e feat(api): add RTT data to history endpoints
- Add RTT column to CSV output (before leap column)
- Add RTT field to JSON ScoresEntry
- Add avg_rtt field to JSON MonitorEntry
- Convert RTT from microseconds to milliseconds
- Calculate average RTT per monitor from history data
2025-07-04 09:41:17 -07:00
42ce22e83e adjust cache-control for history api
it seems like there's a bug in the data calculations so many
servers get the too long maximum cache time; make it shorter
while we debug
2025-06-27 17:48:40 +08:00
087d253d90 Update schema for monitors v4; use go tool
2025-06-21 03:49:02 -07:00
ae7acb4111 Update schema dump from development
2025-04-08 00:30:52 -07:00
bd4e52a73b Update Go to 1.24 + dependencies
2025-03-08 10:21:44 -08:00
118e596098 build: update goreleaser to 2.7.0
2025-02-23 09:49:39 -08:00
e6f39f201c dns queries: set cache-control header 2025-02-23 09:48:56 -08:00
962839ed89 add dns query count endpoint 2025-02-23 09:28:30 -08:00
f8662fbda5 Support DSN config and auth for clickhouse connection
2025-02-22 22:41:06 -08:00
a5b1f9ef08 Upgrade Go & dependencies
2025-01-17 21:03:31 -08:00
e316aeee99 db: less logging when opening a database connection
2025-01-06 19:33:45 +01:00
3a9879b793 ch: add healthcheck for logs database
2025-01-06 19:21:17 +01:00
9fb3edacef data-api: fix health check shutdown; adjust db idle reset feature
2025-01-03 14:28:36 +01:00
d206f9d20e history: fix the more nuanced cache-control max-age logic
2024-12-30 17:33:29 -08:00
dc8adc1aea health: fix noisy logs
2024-12-28 00:52:11 -08:00
35ea262b99 go lint tweaks; update common
2024-12-27 18:45:28 -08:00
058531c362 Update Go and dependencies
2024-12-27 17:38:58 -08:00
801d685940 health: ping databases for health checks 2024-12-27 17:37:16 -08:00
904f4b1df5 remove slightly less debugging 2024-11-03 07:15:25 +00:00
78432679ec Update Go & dependencies
2024-10-05 01:31:02 -07:00
397dd13c38 Fix 500 errors when requesting with an invalid or unknown monitor parameter
2024-07-21 04:31:42 -07:00
79eea1d0f8 scores: improve error handling for invalid monitor parameters
2024-07-20 00:29:36 -07:00
8dfd7c8a4e Update Go and dependencies
2024-07-20 00:24:17 -07:00
574c7cfbf0 Fix lint warnings
2024-03-09 09:03:41 -08:00
3cbef93607 Update schema, Go 1.22.1, and dependencies
2024-03-09 09:59:29 -08:00
675e993353 scorer: fix parsing leap column
2024-01-20 23:43:22 -07:00
e1398e7472 scores: full_history option for internal clients
(somewhat inefficient, but for now rarely used ...)
2024-01-20 23:21:21 -07:00
b786ed6986 metrics: add echo metrics
2024-01-20 21:35:49 -07:00
2f2a407409 scorer: configurable default source 2024-01-20 21:35:18 -07:00
6df51fc19f scores: clickhouse support
2024-01-20 19:41:02 -07:00
5682c86837 Set environment in tracing, minor dependency updates 2024-01-13 21:49:10 -08:00
9428c1a227 Minor logging tweak 2024-01-12 22:39:19 -08:00
6b84bbe5e1 Update Go and dependencies
2024-01-12 22:14:15 -08:00
47b96cd598 zones: per zone server counts API migrated
2023-12-23 01:54:21 -08:00
19c02063e9 Make text/csv inline in the browser
... by using text/plain
https://bugs.chromium.org/p/chromium/issues/detail?id=152911
2023-12-22 09:33:14 -08:00
84523661e2 scores: Allow 10000 rows 2023-12-22 09:22:26 -08:00
6553b4711b scores: allow specifying the monitor by name
2023-12-22 08:59:13 -08:00
9280668d28 scores: support requesting logs by monitor name 2023-12-22 08:15:13 -08:00
ccc2fd401f scores: better error when monitor parameter is invalid 2023-12-22 08:11:43 -08:00
f2e4530023 Update dependencies 2023-12-22 08:10:42 -08:00
8f333354d2 Register build_info metric 2023-12-15 00:10:39 -08:00
41e7585637 scores: redirect POST requests
2023-12-14 23:57:36 -08:00
36f695c146 trace tweaks
2023-12-14 23:17:44 -08:00
404c64b910 Return not found for more recently deleted servers, too 2023-12-14 23:17:44 -08:00
2cd4d8a35a Fix url.path in trace 2023-12-14 23:14:00 -08:00
bae726dba6 scores: handle IPs with no current history
2023-12-14 20:00:09 -08:00
f6b0f96a34 scores: json handler
Some checks failed
continuous-integration/drone/push Build was killed
2023-12-10 21:42:15 -08:00
61245cc77c scores: csv handler 2023-12-10 21:02:04 -08:00
adab600e26 Function to get server from IP or ID parameter
2023-12-10 20:55:44 -08:00
9ef534eafa server: add url.path to traces
2023-12-10 20:49:44 -08:00
9c6ea595f1 dnsanswers: tracing adjustments 2023-12-10 15:11:26 -08:00
e824274998 Add png graph handler
2023-12-09 13:54:13 -08:00
69cc4b4e80 Update dependencies and Go 2023-12-08 22:02:49 -08:00
37d66b073e Fix 'unsupported value: +Inf' error when the zone doesn't have active servers
2023-11-21 15:10:42 -08:00
954d97f71d Log version on startup
2023-11-21 12:24:02 -08:00
38 changed files with 5035 additions and 721 deletions

.drone.yml (deleted)

@@ -1,86 +0,0 @@
---
kind: pipeline
type: kubernetes
name: default

environment:
  GOCACHE: /cache/pkg/cache
  GOMODCACHE: /cache/pkg/mod

steps:
  - name: fetch-tags
    image: alpine/git
    commands:
      - git fetch --tags
    resources:
      requests:
        cpu: 250
        memory: 50MiB
      limits:
        cpu: 250
        memory: 100MiB

  - name: test
    image: golang:1.21.4
    volumes:
      - name: go
        path: /go
      - name: gopkg
        path: /cache
    commands:
      - go test -v ./...
      - go build ./...

  - name: goreleaser
    image: golang:1.21.4
    resources:
      requests:
        cpu: 6000
        memory: 1024MiB
      limits:
        cpu: 10000
        memory: 4096MiB
    volumes:
      - name: go
        path: /go
      - name: gopkg
        path: /cache
    environment:
      # GITHUB_TOKEN:
      #   from_secret: GITHUB_TOKEN
    commands:
      - ./scripts/run-goreleaser
    depends_on: [test]

  - name: docker
    image: harbor.ntppool.org/ntppool/drone-kaniko:main
    pull: always
    volumes:
      - name: go
        path: /go
      - name: gopkg
        path: /cache
    settings:
      repo: ntppool/data-api
      registry: harbor.ntppool.org
      auto_tag: true
      tags: SHA7,${DRONE_SOURCE_BRANCH}
      cache: true
      username:
        from_secret: harbor_username
      password:
        from_secret: harbor_password
    depends_on: [goreleaser]

volumes:
  - name: go
    temp: {}
  - name: gopkg
    claim:
      name: go-pkg

---
kind: signature
hmac: decaee945fdc81d38afbc85c69e3192f9c798e599af1c7a9d0a2f2c97c806f63
...


@@ -1,3 +1,5 @@
+version: 2
+
 before:
   # we don't want this in the CI environment
   #hooks:

.woodpecker.yaml Normal file

@@ -0,0 +1,69 @@
when:
  - event: [push, pull_request, tag, manual]

clone:
  git:
    image: woodpeckerci/plugin-git
    settings:
      tags: true

variables:
  - &go_env
    GOMODCACHE: /go/pkg/mod
    GOCACHE: /go/pkg/cache
  - &go_volumes
    - go-pkg:/go/pkg

steps:
  - name: test
    image: golang:1.26
    pull: true
    environment: *go_env
    volumes: *go_volumes
    commands:
      - go test -v ./...
      - go build ./...

  - name: goreleaser
    image: golang:1.26
    pull: true
    environment: *go_env
    volumes: *go_volumes
    commands:
      - ./scripts/run-goreleaser
    backend_options:
      kubernetes:
        resources:
          requests:
            cpu: 6000
            memory: 1024Mi
          limits:
            cpu: 10000
            memory: 4096Mi
    depends_on: [test]

  - name: generate-tags
    image: ghcr.io/abh/woodpecker-docker-tags-plugin:sha-8a3bd7c
    settings:
      tags: |
        branch
        sha
        semver --auto
        edge -v latest
    when:
      - event: [push, tag, manual]
    depends_on: [goreleaser]

  - name: docker
    image: woodpeckerci/plugin-kaniko
    settings:
      registry: harbor.ntppool.org
      repo: ntppool/data-api
      cache: true
      username:
        from_secret: harbor_username
      password:
        from_secret: harbor_password
    when:
      - event: [push, tag, manual]
    depends_on: [goreleaser, generate-tags]

API.md Normal file

@@ -0,0 +1,481 @@
# NTP Pool Data API Documentation
This document describes the REST API endpoints provided by the NTP Pool data API server.
## Base URL
The API server runs on port 8030. All endpoints are accessible at:
- Production: `https://www.ntppool.org/api/...`
- Local development: `http://localhost:8030/api/...`
## Common Response Headers
All API responses include:
- `Server`: Version information (e.g., `data-api/1.2.3+abc123`)
- `Cache-Control`: Caching directives
- `Access-Control-Allow-Origin`: CORS configuration
## Endpoints
### 1. User Country Data
**GET** `/api/usercc`
Returns DNS query statistics by user country code and NTP pool zone statistics.
#### Response Format
```json
{
  "UserCountry": [
    {
      "CC": "us",
      "IPv4": 42.5,
      "IPv6": 12.3
    }
  ],
  "ZoneStats": {
    "zones": [
      {
        "zone_name": "us",
        "netspeed_active": 1000,
        "server_count": 450
      }
    ]
  }
}
```
#### Response Fields
- `UserCountry`: Array of country statistics
- `CC`: Two-letter country code
- `IPv4`: IPv4 query percentage
- `IPv6`: IPv6 query percentage
- `ZoneStats`: NTP pool zone information
#### Cache Control
- `Cache-Control`: Varies based on data freshness
---
### 2. DNS Query Counts
**GET** `/api/dns/counts`
Returns aggregated DNS query counts from ClickHouse analytics.
#### Response Format
```json
{
  "total_queries": 1234567,
  "by_country": {
    "us": 456789,
    "de": 234567
  },
  "by_query_type": {
    "A": 987654,
    "AAAA": 345678
  }
}
```
#### Cache Control
- `Cache-Control`: `s-maxage=30,max-age=60`
---
### 3. Server DNS Answers
**GET** `/api/server/dns/answers/{server}`
Returns DNS answer statistics for a specific NTP server, including geographic distribution and scoring metrics.
#### Path Parameters
- `server`: Server IP address (IPv4 or IPv6)
#### Response Format
```json
{
  "Server": [
    {
      "CC": "us",
      "Count": 12345,
      "Points": 1234.5,
      "Netspeed": 567.8
    }
  ],
  "PointSymbol": "‱"
}
```
#### Response Fields
- `Server`: Array of country-specific statistics
- `CC`: Country code where DNS queries originated
- `Count`: Number of DNS answers served
- `Points`: Calculated scoring points (basis: 10,000)
- `Netspeed`: Network speed score relative to zone capacity
- `PointSymbol`: Symbol used for point calculations ("‱" = per 10,000)
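As a rough illustration of the per-10,000 convention (the server's actual scoring formula is not documented here, so this only shows the "‱" unit):

```go
package main

import "fmt"

// points expresses a server's share of its zone's DNS answers per
// 10,000 ("‱"). Illustrative sketch only; not the production formula.
func points(serverAnswers, zoneTotal uint64) float64 {
	if zoneTotal == 0 {
		return 0
	}
	return float64(serverAnswers) / float64(zoneTotal) * 10000
}

func main() {
	// A server answering 250 of 1,000,000 zone queries: 2.5‱.
	fmt.Println(points(250, 1_000_000))
}
```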
#### Error Responses
- `400 Bad Request`: Invalid server IP format
- `404 Not Found`: Server not found
- `500 Internal Server Error`: Database error
#### Cache Control
- Success: `public,max-age=1800`
- Errors: `public,max-age=300`
#### URL Canonicalization
Redirects to canonical IP format with `308 Permanent Redirect` if:
- IP format is not canonical
- Query parameters are present
---
### 4. Server Score History (Legacy)
**GET** `/api/server/scores/{server}/{mode}`
**⚠️ Legacy API** - Returns historical scoring data for an NTP server in JSON or CSV format. For enhanced features and higher limits, use the [v2 API](#7-server-score-history-v2---enhanced-time-range-api) instead.
#### Path Parameters
- `server`: Server IP address or ID
- `mode`: Response format (`json` or `log`)
#### Query Parameters
- `limit`: Maximum number of records (default: 100, max: 10000)
- `monitor`: Monitor ID or name prefix (default: "recentmedian.scores.ntp.dev")
- Use `*` for all monitors
- Use monitor ID number
- Use monitor name prefix (e.g., "recentmedian")
- `since`: Unix timestamp for start time
- `source`: Data source (`m` for MySQL, `c` for ClickHouse)
- `full_history`: Include full history (private IPs only)
#### JSON Response Format (`mode=json`)
```json
{
  "history": [
    {
      "ts": 1640995200,
      "offset": 0.001234,
      "step": 0.5,
      "score": 20.0,
      "monitor_id": 123,
      "rtt": 45.6
    }
  ],
  "monitors": [
    {
      "id": 123,
      "name": "recentmedian.scores.ntp.dev",
      "type": "ntp",
      "ts": "2022-01-01T12:00:00Z",
      "score": 19.5,
      "status": "active",
      "avg_rtt": 45.2
    }
  ],
  "server": {
    "ip": "192.0.2.1"
  }
}
```
#### CSV Response Format (`mode=log`)
Returns CSV data with headers:
```
ts_epoch,ts,offset,step,score,monitor_id,monitor_name,rtt,leap,error
1640995200,2022-01-01 12:00:00,0.001234,0.5,20.0,123,recentmedian.scores.ntp.dev,45.6,,
```
#### CSV Fields
- `ts_epoch`: Unix timestamp
- `ts`: Human-readable timestamp
- `offset`: Time offset in seconds
- `step`: NTP step value
- `score`: Computed score
- `monitor_id`: Monitor identifier
- `monitor_name`: Monitor display name
- `rtt`: Round-trip time in milliseconds
- `leap`: Leap second indicator
- `error`: Error message (sanitized for CSV)
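The commit log mentions sanitizing NUL bytes in the CSV error output; a hypothetical helper (not the server's actual function) showing that kind of cleanup:

```go
package main

import (
	"fmt"
	"strings"
)

// sanitizeCSVField strips NUL bytes and flattens newlines so a raw
// monitor error string cannot break the CSV framing. Hypothetical
// sketch; the real sanitizer is not shown in this document.
func sanitizeCSVField(s string) string {
	return strings.NewReplacer("\x00", "", "\r", " ", "\n", " ").Replace(s)
}

func main() {
	fmt.Printf("%q\n", sanitizeCSVField("i/o timeout\x00\nretrying"))
}
```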
#### Error Responses
- `404 Not Found`: Invalid mode, server not found, or monitor not found
- `500 Internal Server Error`: Database error
#### Cache Control
Dynamic based on data freshness:
- Recent data: `s-maxage=90,max-age=120`
- Older data: `s-maxage=260,max-age=360`
---
### 5. Zone Counts
**GET** `/api/zone/counts/{zone_name}`
Returns historical server count and network capacity data for an NTP pool zone.
#### Path Parameters
- `zone_name`: Zone name (e.g., "us", "europe", "@" for global)
#### Query Parameters
- `limit`: Maximum number of date entries to return
#### Response Format
```json
{
  "history": [
    {
      "d": "2022-01-01",
      "ts": 1640995200,
      "rc": 450,
      "ac": 380,
      "w": 12500,
      "iv": "v4"
    }
  ]
}
```
#### Response Fields
- `history`: Array of historical data points
- `d`: Date in YYYY-MM-DD format
- `ts`: Unix timestamp
- `rc`: Registered server count
- `ac`: Active server count
- `w`: Network capacity (netspeed active)
- `iv`: IP version ("v4" or "v6")
#### Data Sampling
When `limit` is specified, the API intelligently samples data points to provide representative historical coverage while staying within the limit.
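One simple way to spread a limited number of points across a series, keeping the first and last entries, looks like this (a sketch of the idea; the server's actual sampling strategy is not documented here):

```go
package main

import "fmt"

// sampleEvenly keeps at most limit points spread across the series,
// always retaining the first and last entries.
func sampleEvenly[T any](points []T, limit int) []T {
	if limit <= 0 || len(points) <= limit {
		return points
	}
	if limit == 1 {
		return points[:1]
	}
	out := make([]T, 0, limit)
	step := float64(len(points)-1) / float64(limit-1)
	for i := 0; i < limit; i++ {
		out = append(out, points[int(float64(i)*step+0.5)])
	}
	return out
}

func main() {
	days := []int{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
	fmt.Println(sampleEvenly(days, 4)) // [1 4 7 10]
}
```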
#### Error Responses
- `404 Not Found`: Zone not found
- `500 Internal Server Error`: Database error
#### Cache Control
- `s-maxage=28800, max-age=7200`
---
### 6. Graph Images
**GET** `/graph/{server}/{type}`
Returns generated graph images for server visualization.
#### Path Parameters
- `server`: Server IP address
- `type`: Graph type (currently only "offset.png" supported)
#### Response
- **Content-Type**: `image/png` or upstream service content type
- **Body**: Binary image data
#### Features
- Canonical URL enforcement (redirects if server IP format is non-canonical)
- Query parameter removal (redirects to clean URLs)
- Upstream service integration via HTTP proxy
#### Error Responses
- `404 Not Found`: Invalid image type or server not found
- `500 Internal Server Error`: Upstream service error
#### Cache Control
- Success: `public,max-age=1800,s-maxage=1350`
- Errors: `public,max-age=240`
---
### 7. Server Score History (v2) - Enhanced Time Range API
**GET** `/api/v2/server/scores/{server}/{mode}`
**🆕 Recommended API** - Returns historical scoring data for an NTP server in Grafana-compatible table format with enhanced time range support and relative time expressions.
#### Path Parameters
- `server`: Server IP address or ID
- `mode`: Response format (`json` only)
#### Query Parameters
- `from`: Start time (required) - Unix timestamp or relative time (e.g., "-3d", "-2h", "-30m")
- `to`: End time (required) - Unix timestamp or relative time (e.g., "-1d", "-1h", "0s")
- `maxDataPoints`: Maximum data points to return (default: 50000, max: 50000)
- `monitor`: Monitor filter (ID, name prefix, or "*" for all monitors)
- `interval`: Future downsampling interval (not implemented)
#### Time Format Support
The v2 API supports both Unix timestamps and relative time expressions:
**Unix Timestamps:**
- `from=1753500964&to=1753587364` - Standard Unix seconds
**Relative Time Expressions:**
- `from=-3d&to=-1d` - From 3 days ago to 1 day ago
- `from=-2h&to=-30m` - From 2 hours ago to 30 minutes ago
- `from=-1d&to=0s` - From 1 day ago to now
**Supported Units:**
- `s` - seconds
- `m` - minutes
- `h` - hours
- `d` - days
**Format:** `[-]<number><unit>` (negative sign for past, no sign for future)
#### Response Format
Grafana table format optimized for visualization:
```json
[
  {
    "target": "monitor{name=zakim1-yfhw4a}",
    "tags": {
      "monitor_id": "126",
      "monitor_name": "zakim1-yfhw4a",
      "type": "monitor",
      "status": "active"
    },
    "columns": [
      {"text": "time", "type": "time"},
      {"text": "score", "type": "number"},
      {"text": "rtt", "type": "number", "unit": "ms"},
      {"text": "offset", "type": "number", "unit": "s"}
    ],
    "values": [
      [1753431667000, 20.0, 18.865, -0.000267],
      [1753431419000, 20.0, 18.96, -0.000390],
      [1753431151000, 20.0, 18.073, -0.000768]
    ]
  }
]
```
#### Response Structure
- **One series per monitor**: Efficient grouping by monitor ID
- **Table format**: All metrics (time, score, rtt, offset) in columns
- **Timestamps**: Converted to milliseconds for Grafana compatibility
- **Null handling**: Null RTT/offset values preserved as `null`
#### Limits and Constraints
- **Data points**: Maximum 50,000 records per request
- **Time range**: Maximum 90 days per request
- **Minimum range**: 1 second
- **Data source**: ClickHouse only (for better time range performance)
#### Example Requests
**Recent data with relative times:**
```
GET /api/v2/server/scores/192.0.2.1/json?from=-3d&to=-1h&monitor=*
```
**Specific time range:**
```
GET /api/v2/server/scores/192.0.2.1/json?from=1753500000&to=1753586400&monitor=recentmedian
```
**All monitors, last 24 hours:**
```
GET /api/v2/server/scores/192.0.2.1/json?from=-1d&to=0s&monitor=*&maxDataPoints=10000
```
#### Error Responses
- `400 Bad Request`: Invalid time format, range too large/small, or invalid parameters
- `404 Not Found`: Server not found, invalid mode, or monitor not found
- `500 Internal Server Error`: Database or internal error
#### Cache Control
Dynamic caching based on data characteristics:
- Recent data: `s-maxage=90,max-age=120`
- Older data: `s-maxage=260,max-age=360`
- Empty results: `s-maxage=260,max-age=360`
#### Comparison with Legacy API
The v2 API offers significant improvements over `/api/server/scores/{server}/{mode}`:
| Feature | Legacy API | v2 API |
|---------|------------|--------|
| **Record limit** | 10,000 | 50,000 |
| **Time format** | Unix timestamps only | Unix timestamps + relative time |
| **Response format** | Legacy JSON/CSV | Grafana table format |
| **Time range** | Limited by `since` parameter | Full `from`/`to` range support |
| **Maximum range** | No explicit limit | 90 days |
| **Performance** | MySQL + ClickHouse | ClickHouse optimized |
#### Migration Guide
To migrate from legacy API to v2:
**Legacy:**
```
/api/server/scores/192.0.2.1/json?limit=10000&since=1753500000&monitor=*
```
**V2 equivalent:**
```
/api/v2/server/scores/192.0.2.1/json?from=1753500000&to=0s&monitor=*&maxDataPoints=10000
```
**V2 with relative time:**
```
/api/v2/server/scores/192.0.2.1/json?from=-3d&to=-1h&monitor=*
```
---
## Health Check Endpoints
### Health Check
**GET** `:9019/health`
Returns server health status by testing database connections.
#### Query Parameters
- `reset`: Boolean to reset database connection pool
#### Response
- `200 OK`: "ok" - All systems healthy
- `503 Service Unavailable`: "db ping err" - Database connectivity issues
### Metrics
**GET** `:9020/metrics`
Prometheus metrics endpoint for monitoring and observability.
---
## Error Handling
### Standard HTTP Status Codes
- `200 OK`: Successful request
- `308 Permanent Redirect`: URL canonicalization
- `400 Bad Request`: Invalid request parameters
- `404 Not Found`: Resource not found
- `500 Internal Server Error`: Server-side error
- `503 Service Unavailable`: Service temporarily unavailable
### Error Response Format
Most endpoints return plain text error messages for non-2xx responses. Some endpoints may return JSON error objects.
---
## Data Sources
The API integrates multiple data sources:
- **MySQL**: Operational data (servers, zones, accounts, current scores)
- **ClickHouse**: Analytics data (DNS query logs, historical scoring data)
Different endpoints may use different data sources, and some endpoints allow source selection via query parameters.
---
## Rate Limiting and Caching
The API implements extensive caching at multiple levels:
- **Response-level caching**: Each endpoint sets appropriate `Cache-Control` headers
- **Database query optimization**: Efficient queries with proper indexing
- **CDN integration**: Headers configured for CDN caching
Cache durations vary by endpoint and data freshness, ranging from 30 seconds for real-time data to 8 hours for historical data.


@@ -1,4 +1,4 @@
-FROM alpine:3.18.0
+FROM alpine:3.21

 RUN apk --no-cache upgrade
 RUN apk --no-cache add ca-certificates tzdata zsh jq tmux curl


@@ -2,12 +2,9 @@ generate: sqlc
 	go generate ./...

 sqlc:
-	@which gowrap >& /dev/null || (echo "Run 'go install github.com/hexdigest/gowrap/cmd/gowrap@v1.3.2'" && exit 1)
-	@which mockery >& /dev/null || (echo "Run 'go install github.com/vektra/mockery/v2@v2.35.4'" && exit 1)
-	sqlc compile
-	sqlc generate
-	gowrap gen -t opentelemetry -i QuerierTx -p ./ntpdb -o ./ntpdb/otel.go
-	mockery --dir ntpdb --name QuerierTx --config /dev/null
+	go tool sqlc compile
+	go tool sqlc generate
+	go tool gowrap gen -g -t opentelemetry -i QuerierTx -p ./ntpdb -o ./ntpdb/otel.go

 sign:
 	drone sign --save ntppool/data-api


@@ -24,19 +24,20 @@ type ServerTotals map[string]uint64

 func (s ServerQueries) Len() int {
 	return len(s)
 }

 func (s ServerQueries) Swap(i, j int) {
 	s[i], s[j] = s[j], s[i]
 }

 func (s ServerQueries) Less(i, j int) bool {
 	return s[i].Count > s[j].Count
 }

 func (d *ClickHouse) ServerAnswerCounts(ctx context.Context, serverIP string, days int) (ServerQueries, error) {
 	ctx, span := tracing.Tracer().Start(ctx, "ServerAnswerCounts")
 	defer span.End()

-	conn := d.conn
+	conn := d.Logs

 	log := logger.Setup().With("server", serverIP)
@@ -95,11 +96,14 @@ func (d *ClickHouse) ServerAnswerCounts(ctx context.Context, serverIP string, da
 }

 func (d *ClickHouse) AnswerTotals(ctx context.Context, qtype string, days int) (ServerTotals, error) {
 	log := logger.Setup()
+	ctx, span := tracing.Tracer().Start(ctx, "AnswerTotals")
+	defer span.End()

 	// queries by UserCC / Qtype for the ServerIP
-	rows, err := d.conn.Query(ctx, `
+	rows, err := d.Logs.Query(clickhouse.Context(ctx,
+		clickhouse.WithSpan(span.SpanContext()),
+	), `
 select UserCC,Qtype,sum(queries) as queries
 from by_server_ip_1d
 where


@@ -2,53 +2,97 @@ package chdb

 import (
 	"context"
+	"os"
+	"strings"
 	"time"

+	"dario.cat/mergo"
 	"github.com/ClickHouse/clickhouse-go/v2"
-	"github.com/ClickHouse/clickhouse-go/v2/lib/driver"
+	"gopkg.in/yaml.v3"

 	"go.ntppool.org/common/logger"
 	"go.ntppool.org/common/version"
 )

+type Config struct {
+	ClickHouse struct {
+		Scores DBConfig `yaml:"scores"`
+		Logs   DBConfig `yaml:"logs"`
+	} `yaml:"clickhouse"`
+}
+
+type DBConfig struct {
+	DSN      string
+	Host     string
+	Database string
+	User     string
+	Password string
+}
+
 type ClickHouse struct {
-	conn clickhouse.Conn
+	Logs   clickhouse.Conn
+	Scores clickhouse.Conn
 }

 func New(ctx context.Context, dbConfigPath string) (*ClickHouse, error) {
-	conn, err := setupClickhouse(ctx)
+	ch, err := setupClickhouse(ctx, dbConfigPath)
 	if err != nil {
 		return nil, err
 	}
-	return &ClickHouse{conn: conn}, nil
+	return ch, nil
 }

-func setupClickhouse(ctx context.Context) (driver.Conn, error) {
+func setupClickhouse(ctx context.Context, configFile string) (*ClickHouse, error) {
+	log := logger.FromContext(ctx)
+	log.DebugContext(ctx, "opening ch config", "file", configFile)
+	dbFile, err := os.Open(configFile)
+	if err != nil {
+		return nil, err
+	}
+	dec := yaml.NewDecoder(dbFile)
+	cfg := Config{}
+	err = dec.Decode(&cfg)
+	if err != nil {
+		return nil, err
+	}
+
+	ch := &ClickHouse{}
+	ch.Logs, err = open(ctx, cfg.ClickHouse.Logs)
+	if err != nil {
+		return nil, err
+	}
+	ch.Scores, err = open(ctx, cfg.ClickHouse.Scores)
+	if err != nil {
+		return nil, err
+	}
+	return ch, nil
+}
+
+func open(ctx context.Context, cfg DBConfig) (clickhouse.Conn, error) {
 	log := logger.Setup()
-	conn, err := clickhouse.Open(&clickhouse.Options{
-		Addr: []string{"10.43.207.123:9000"},
-		Auth: clickhouse.Auth{
-			Database: "geodns3",
-			Username: "default",
-			Password: "",
-		},
-		// Debug: true,
-		// Debugf: func(format string, v ...interface{}) {
-		// 	slog.Info("debug format", "format", format)
-		// 	fmt.Printf(format+"\n", v)
-		// },
+	options := &clickhouse.Options{
+		Protocol: clickhouse.Native,
 		Settings: clickhouse.Settings{
 			"max_execution_time": 60,
 		},
 		Compression: &clickhouse.Compression{
 			Method: clickhouse.CompressionLZ4,
 		},
 		DialTimeout:      time.Second * 5,
-		MaxOpenConns:     5,
-		MaxIdleConns:     5,
-		ConnMaxLifetime:  time.Duration(10) * time.Minute,
+		MaxOpenConns:     8,
+		MaxIdleConns:     3,
+		ConnMaxLifetime:  5 * time.Minute,
 		ConnOpenStrategy: clickhouse.ConnOpenInOrder,
 		BlockBufferSize:  10,
 		MaxCompressionBuffer: 10240,
@@ -60,7 +104,49 @@ func setupClickhouse(ctx context.Context) (driver.Conn, error)
 			{Name: "data-api", Version: version.Version()},
 		},
 	},
-	})
+	// Debug: true,
+	// Debugf: func(format string, v ...interface{}) {
+	// 	slog.Info("debug format", "format", format)
+	// 	fmt.Printf(format+"\n", v)
+	// },
+	}
+
+	if cfg.DSN != "" {
+		dsnOptions, err := clickhouse.ParseDSN(cfg.DSN)
+		if err != nil {
+			return nil, err
+		}
+		err = mergo.Merge(options, dsnOptions)
+		if err != nil {
+			return nil, err
+		}
+	}
+
+	if cfg.Host != "" {
+		options.Addr = []string{cfg.Host}
+	}
+	if len(options.Addr) > 0 {
+		// todo: support literal ipv6; or just require port to be configured explicitly
+		if !strings.Contains(options.Addr[0], ":") {
+			options.Addr[0] += ":9000"
+		}
+	}
+	if cfg.Database != "" {
+		options.Auth.Database = cfg.Database
+	}
+	if cfg.User != "" {
+		options.Auth.Username = cfg.User
+	}
+	if cfg.Password != "" {
+		options.Auth.Password = cfg.Password
+	}
+
+	conn, err := clickhouse.Open(options)
 	if err != nil {
 		return nil, err
 	}
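The new `Config`/`DBConfig` structs imply a `database.yaml` along these lines. This is a sketch, not a file from the repository: only the `clickhouse`, `scores`, and `logs` keys appear in the diff; the lower-case field keys rely on yaml.v3's default field naming, and the host and credential values are made up:

```yaml
clickhouse:
  logs:
    host: clickhouse-logs.example.internal:9000   # port defaults to :9000 if omitted
    database: dns
    user: data-api
    password: "changeme"
  scores:
    # alternatively a DSN can be given; it is parsed with
    # clickhouse.ParseDSN and merged over the defaults via mergo
    dsn: clickhouse://data-api:changeme@clickhouse-scores.example.internal:9000/scores
```

Discrete fields (`host`, `database`, `user`, `password`) are applied after the DSN merge, so they override anything the DSN set.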


@@ -1,6 +1,6 @@
 package chdb

-// queries to the GeoDNS database
+// queries to the DNS database

 import (
 	"context"
@@ -24,9 +24,11 @@ type UserCountry []flatAPI
 func (s UserCountry) Len() int {
 	return len(s)
 }
+
 func (s UserCountry) Swap(i, j int) {
 	s[i], s[j] = s[j], s[i]
 }
+
 func (s UserCountry) Less(i, j int) bool {
 	return s[i].IPv4 > s[j].IPv4
 }
@@ -36,7 +38,7 @@ func (d *ClickHouse) UserCountryData(ctx context.Context) (*UserCountry, error)
 	ctx, span := tracing.Tracer().Start(ctx, "UserCountryData")
 	defer span.End()

-	rows, err := d.conn.Query(clickhouse.Context(ctx, clickhouse.WithSpan(span.SpanContext())),
+	rows, err := d.Logs.Query(clickhouse.Context(ctx, clickhouse.WithSpan(span.SpanContext())),
 		"select max(dt) as d,UserCC,Qtype,sum(queries) as queries from by_usercc_1d where dt > now() - INTERVAL 4 DAY group by rollup(Qtype,UserCC) order by UserCC,Qtype;")
 	if err != nil {
 		log.ErrorContext(ctx, "query error", "err", err)
@@ -183,3 +185,55 @@ func (d *ClickHouse) UserCountryData(ctx context.Context) (*UserCountry, error)
 	return nil, nil
 }
+
+type DNSQueryCounts struct {
+	T   uint32  `json:"t"`
+	Avg float64 `json:"avg"`
+	Max uint64  `json:"max"`
+}
+
+func (d *ClickHouse) DNSQueries(ctx context.Context) ([]DNSQueryCounts, error) {
+	log := logger.Setup()
+	ctx, span := tracing.Tracer().Start(ctx, "DNSQueries")
+	defer span.End()
+
+	startUnix := time.Now().Add(2 * time.Hour * -1).Unix()
+	startUnix -= startUnix % (60 * 5)
+	log.InfoContext(ctx, "start time", "start", startUnix)
+
+	rows, err := d.Logs.Query(clickhouse.Context(ctx, clickhouse.WithSpan(span.SpanContext())),
+		`
+		select toUnixTimestamp(toStartOfFiveMinute(t)) as t,
+			sum(q)/300 as avg, max(q) as max
+		from (
+			select window as t, sumSimpleState(queries) as q
+			from dns.by_origin_1s
+			where
+				window > FROM_UNIXTIME(?)
+				and Origin IN ('pool.ntp.org', 'g.ntpns.org')
+			group by t order by t
+		)
+		group by t order by t
+		`, startUnix)
+	if err != nil {
+		log.ErrorContext(ctx, "query error", "err", err)
+		return nil, fmt.Errorf("database error")
+	}
+
+	var t uint32
+	var avg float64
+	var max uint64
+
+	r := []DNSQueryCounts{}
+
+	for rows.Next() {
+		if err := rows.Scan(&t, &avg, &max); err != nil {
+			return nil, err
+		}
+		log.InfoContext(ctx, "data", "t", t, "avg", avg, "max", max)
+		r = append(r, DNSQueryCounts{t, avg, max})
+	}
+
+	return r, nil
+}
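`DNSQueries` floors its start time to a five-minute boundary (`startUnix -= startUnix % (60 * 5)`) so that the first bucket returned by `toStartOfFiveMinute` is complete rather than partial. A standalone sketch of just that arithmetic (the timestamps are arbitrary examples, not data from the service):

```go
package main

import (
	"fmt"
	"time"
)

// alignToFiveMinutes floors a Unix timestamp to the start of its
// five-minute bucket, mirroring the startUnix adjustment in DNSQueries.
func alignToFiveMinutes(ts int64) int64 {
	return ts - ts%(60*5)
}

func main() {
	// 2024-01-02 15:04:05 UTC falls inside the 15:00:00-15:05:00 bucket.
	t := time.Date(2024, 1, 2, 15, 4, 5, 0, time.UTC).Unix()
	aligned := alignToFiveMinutes(t)
	fmt.Println(time.Unix(aligned, 0).UTC().Format(time.RFC3339))
	// prints 2024-01-02T15:00:00Z
}
```

Because the Unix epoch itself is aligned to midnight, the modulo against 300 seconds always lands on a wall-clock `:00`/`:05`/`:10`… boundary, matching ClickHouse's `toStartOfFiveMinute` buckets.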

chdb/logscores.go Normal file

@@ -0,0 +1,234 @@
package chdb
import (
"context"
"fmt"
"strings"
"time"
"github.com/ClickHouse/clickhouse-go/v2"
"go.ntppool.org/common/logger"
"go.ntppool.org/common/tracing"
"go.ntppool.org/data-api/ntpdb"
)
func (d *ClickHouse) Logscores(ctx context.Context, serverID, monitorID int, since time.Time, limit int, fullHistory bool) ([]ntpdb.LogScore, error) {
log := logger.Setup()
ctx, span := tracing.Tracer().Start(ctx, "CH Logscores")
defer span.End()
recentFirst := true
if since.IsZero() && !fullHistory {
since = time.Now().Add(4 * -24 * time.Hour)
} else {
recentFirst = false
}
args := []interface{}{serverID}
query := `select id,monitor_id,server_id,ts,
toFloat64(score),toFloat64(step),offset,
rtt,leap,warning,error
from log_scores
where
server_id = ?`
if monitorID > 0 {
query = `select id,monitor_id,server_id,ts,
toFloat64(score),toFloat64(step),offset,
rtt,leap,warning,error
from log_scores
where
server_id = ?
and monitor_id = ?`
args = []interface{}{serverID, monitorID}
}
if fullHistory {
query += " order by ts"
if recentFirst {
query += " desc"
}
} else {
query += " and ts > ? order by ts "
if recentFirst {
query += "desc "
}
query += "limit ?"
args = append(args, since, limit)
}
log.DebugContext(ctx, "clickhouse query", "query", query, "args", args)
rows, err := d.Scores.Query(
clickhouse.Context(
ctx, clickhouse.WithSpan(span.SpanContext()),
),
query, args...,
)
if err != nil {
log.ErrorContext(ctx, "query error", "err", err)
return nil, fmt.Errorf("database error")
}
rv := []ntpdb.LogScore{}
for rows.Next() {
row := ntpdb.LogScore{}
var leap uint8
if err := rows.Scan(
&row.ID,
&row.MonitorID,
&row.ServerID,
&row.Ts,
&row.Score,
&row.Step,
&row.Offset,
&row.Rtt,
&leap,
&row.Attributes.Warning,
&row.Attributes.Error,
); err != nil {
log.Error("could not parse row", "err", err)
continue
}
row.Attributes.Leap = int8(leap)
rv = append(rv, row)
}
// log.InfoContext(ctx, "returning data", "rv", rv)
return rv, nil
}
// LogscoresTimeRange queries log scores within a specific time range for Grafana integration
func (d *ClickHouse) LogscoresTimeRange(ctx context.Context, serverID, monitorID int, from, to time.Time, limit int) ([]ntpdb.LogScore, error) {
log := logger.Setup()
ctx, span := tracing.Tracer().Start(ctx, "CH LogscoresTimeRange")
defer span.End()
args := []interface{}{serverID, from, to}
query := `select id,monitor_id,server_id,ts,
toFloat64(score),toFloat64(step),offset,
rtt,leap,warning,error
from log_scores
where
server_id = ?
and ts >= ?
and ts <= ?`
if monitorID > 0 {
query += " and monitor_id = ?"
args = append(args, monitorID)
}
// Always order by timestamp ASC for Grafana convention
query += " order by ts ASC"
// Apply limit to prevent memory issues
if limit > 0 {
query += " limit ?"
args = append(args, limit)
}
log.DebugContext(ctx, "clickhouse time range query",
"query", query,
"args", args,
"server_id", serverID,
"monitor_id", monitorID,
"from", from.Format(time.RFC3339),
"to", to.Format(time.RFC3339),
"limit", limit,
"full_sql_with_params", func() string {
// Build a readable SQL query with parameters substituted for debugging
sqlDebug := query
paramIndex := 0
for strings.Contains(sqlDebug, "?") && paramIndex < len(args) {
var replacement string
switch v := args[paramIndex].(type) {
case int:
replacement = fmt.Sprintf("%d", v)
case time.Time:
replacement = fmt.Sprintf("'%s'", v.Format("2006-01-02 15:04:05"))
default:
replacement = fmt.Sprintf("'%v'", v)
}
sqlDebug = strings.Replace(sqlDebug, "?", replacement, 1)
paramIndex++
}
return sqlDebug
}(),
)
rows, err := d.Scores.Query(
clickhouse.Context(
ctx, clickhouse.WithSpan(span.SpanContext()),
),
query, args...,
)
if err != nil {
log.ErrorContext(ctx, "time range query error", "err", err)
return nil, fmt.Errorf("database error")
}
rv := []ntpdb.LogScore{}
for rows.Next() {
row := ntpdb.LogScore{}
var leap uint8
if err := rows.Scan(
&row.ID,
&row.MonitorID,
&row.ServerID,
&row.Ts,
&row.Score,
&row.Step,
&row.Offset,
&row.Rtt,
&leap,
&row.Attributes.Warning,
&row.Attributes.Error,
); err != nil {
log.Error("could not parse row", "err", err)
continue
}
row.Attributes.Leap = int8(leap)
rv = append(rv, row)
}
log.InfoContext(ctx, "time range query results",
"rows_returned", len(rv),
"server_id", serverID,
"monitor_id", monitorID,
"time_range", fmt.Sprintf("%s to %s", from.Format(time.RFC3339), to.Format(time.RFC3339)),
"limit", limit,
"sample_rows", func() []map[string]interface{} {
samples := make([]map[string]interface{}, 0, 3)
for i, row := range rv {
if i >= 3 {
break
}
samples = append(samples, map[string]interface{}{
"id": row.ID,
"monitor_id": row.MonitorID,
"ts": row.Ts.Format(time.RFC3339),
"score": row.Score,
"rtt_valid": row.Rtt.Valid,
"offset_valid": row.Offset.Valid,
})
}
return samples
}(),
)
return rv, nil
}
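The `full_sql_with_params` debug helper in `LogscoresTimeRange` substitutes arguments into `?` placeholders purely for logging. Extracted as a self-contained sketch (for log output only; never execute SQL built this way, since it does no escaping):

```go
package main

import (
	"fmt"
	"strings"
	"time"
)

// debugSQL substitutes query parameters into "?" placeholders for
// logging, following the same type switch as the inline helper in
// LogscoresTimeRange. The result is for humans, not for execution.
func debugSQL(query string, args []interface{}) string {
	out := query
	i := 0
	for strings.Contains(out, "?") && i < len(args) {
		var repl string
		switch v := args[i].(type) {
		case int:
			repl = fmt.Sprintf("%d", v)
		case time.Time:
			repl = fmt.Sprintf("'%s'", v.Format("2006-01-02 15:04:05"))
		default:
			repl = fmt.Sprintf("'%v'", v)
		}
		// Replace only the first remaining placeholder each pass.
		out = strings.Replace(out, "?", repl, 1)
		i++
	}
	return out
}

func main() {
	from := time.Date(2024, 1, 1, 0, 0, 0, 0, time.UTC)
	fmt.Println(debugSQL(
		"select * from log_scores where server_id = ? and ts >= ?",
		[]interface{}{42, from}))
	// prints: select * from log_scores where server_id = 42 and ts >= '2024-01-01 00:00:00'
}
```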


@@ -30,7 +30,7 @@ func NewCLI() *CLI {

 // RootCmd represents the base command when called without any subcommands
 func (cli *CLI) rootCmd() *cobra.Command {
-	var cmd = &cobra.Command{
+	cmd := &cobra.Command{
 		Use:   "data-api",
 		Short: "A brief description of your application",
 		// Uncomment the following line if your bare application
@@ -47,7 +47,6 @@ func (cli *CLI) rootCmd() *cobra.Command {
 // Execute adds all child commands to the root command and sets flags appropriately.
 // This is called by main.main(). It only needs to happen once to the rootCmd.
 func Execute() {
-
 	cli := NewCLI()
 	if err := cli.root.Execute(); err != nil {
@@ -57,7 +56,6 @@ func Execute() {
 }

 func (cli *CLI) init(cmd *cobra.Command) {
 	logger.Setup()
-
 	cmd.PersistentFlags().StringVar(&cfgFile, "database-config", "database.yaml", "config file (default is $HOME/.data-api.yaml)")


@@ -12,13 +12,13 @@ import (
 	"github.com/spf13/cobra"

 	"go.ntppool.org/common/logger"
+	"go.ntppool.org/common/version"
 	"go.ntppool.org/data-api/server"
 	"golang.org/x/sync/errgroup"
 )

 func (cli *CLI) serverCmd() *cobra.Command {
-	var serverCmd = &cobra.Command{
+	serverCmd := &cobra.Command{
 		Use:   "server",
 		Short: "server starts the API server",
 		Long:  `starts the API server on (default) port 8000`,
@@ -39,6 +39,8 @@ func (cli *CLI) serverCLI(cmd *cobra.Command, args []string) error {
 	g, ctx := errgroup.WithContext(ctx)

+	log.Info("starting", "version", version.Version())
+
 	srv, err := server.NewServer(ctx, cfgFile)
 	if err != nil {
 		return fmt.Errorf("srv setup: %s", err)

go.mod

@@ -1,75 +1,151 @@
 module go.ntppool.org/data-api

-go 1.21.3
+go 1.25.0

 // replace github.com/samber/slog-echo => github.com/abh/slog-echo v0.0.0-20231024051244-af740639893e

+replace go.opentelemetry.io/otel/exporters/prometheus v0.59.1 => go.opentelemetry.io/otel/exporters/prometheus v0.59.0
+
+tool (
+	github.com/hexdigest/gowrap/cmd/gowrap
+	github.com/sqlc-dev/sqlc/cmd/sqlc
+	// github.com/vektra/mockery/v3
+)
+
 require (
-	github.com/ClickHouse/clickhouse-go/v2 v2.15.0
-	github.com/go-sql-driver/mysql v1.7.1
-	github.com/labstack/echo/v4 v4.11.3
-	github.com/samber/slog-echo v1.8.0
-	github.com/spf13/cobra v1.8.0
-	github.com/stretchr/testify v1.8.4
-	go.ntppool.org/common v0.2.5-0.20231112235121-2bff6d8ef307
-	go.opentelemetry.io/contrib/instrumentation/github.com/labstack/echo/otelecho v0.46.1
-	go.opentelemetry.io/otel v1.21.0
-	go.opentelemetry.io/otel/trace v1.21.0
-	golang.org/x/sync v0.5.0
+	dario.cat/mergo v1.0.2
+	github.com/ClickHouse/clickhouse-go/v2 v2.40.3
+	github.com/go-sql-driver/mysql v1.9.3
+	github.com/hashicorp/go-retryablehttp v0.7.8
+	github.com/labstack/echo-contrib v0.17.4
+	github.com/labstack/echo/v4 v4.13.4
+	github.com/samber/slog-echo v1.17.2
+	github.com/spf13/cobra v1.10.1
+	go.ntppool.org/api v0.3.4
+	go.ntppool.org/common v0.5.2
+	go.opentelemetry.io/contrib/instrumentation/github.com/labstack/echo/otelecho v0.63.0
+	go.opentelemetry.io/contrib/instrumentation/net/http/httptrace/otelhttptrace v0.63.0
+	go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.63.0
+	go.opentelemetry.io/otel v1.38.0
+	go.opentelemetry.io/otel/trace v1.38.0
+	golang.org/x/sync v0.17.0
 	gopkg.in/yaml.v3 v3.0.1
 )

 require (
-	github.com/ClickHouse/ch-go v0.58.2 // indirect
-	github.com/andybalholm/brotli v1.0.6 // indirect
+	cel.dev/expr v0.24.0 // indirect
+	filippo.io/edwards25519 v1.1.0 // indirect
+	github.com/ClickHouse/ch-go v0.68.0 // indirect
+	github.com/Masterminds/goutils v1.1.1 // indirect
+	github.com/Masterminds/semver/v3 v3.1.1 // indirect
+	github.com/Masterminds/sprig/v3 v3.2.2 // indirect
+	github.com/andybalholm/brotli v1.2.0 // indirect
+	github.com/antlr4-go/antlr/v4 v4.13.1 // indirect
 	github.com/beorn7/perks v1.0.1 // indirect
-	github.com/cenkalti/backoff/v4 v4.2.1 // indirect
-	github.com/cespare/xxhash/v2 v2.2.0 // indirect
-	github.com/davecgh/go-spew v1.1.1 // indirect
+	github.com/cenkalti/backoff/v5 v5.0.3 // indirect
+	github.com/cespare/xxhash/v2 v2.3.0 // indirect
+	github.com/cubicdaiya/gonp v1.0.4 // indirect
+	github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
+	github.com/dustin/go-humanize v1.0.1 // indirect
+	github.com/fatih/structtag v1.2.0 // indirect
+	github.com/felixge/httpsnoop v1.0.4 // indirect
 	github.com/go-faster/city v1.0.1 // indirect
-	github.com/go-faster/errors v0.7.0 // indirect
-	github.com/go-logr/logr v1.3.0 // indirect
+	github.com/go-faster/errors v0.7.1 // indirect
+	github.com/go-logr/logr v1.4.3 // indirect
 	github.com/go-logr/stdr v1.2.2 // indirect
-	github.com/golang-jwt/jwt v3.2.2+incompatible // indirect
-	github.com/golang/protobuf v1.5.3 // indirect
-	github.com/google/uuid v1.4.0 // indirect
-	github.com/grpc-ecosystem/grpc-gateway/v2 v2.18.1 // indirect
+	github.com/google/cel-go v0.24.1 // indirect
+	github.com/google/uuid v1.6.0 // indirect
+	github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.2 // indirect
+	github.com/hashicorp/go-cleanhttp v0.5.2 // indirect
+	github.com/hexdigest/gowrap v1.4.2 // indirect
+	github.com/huandu/xstrings v1.5.0 // indirect
+	github.com/imdario/mergo v0.3.12 // indirect
 	github.com/inconshreveable/mousetrap v1.1.0 // indirect
-	github.com/klauspost/compress v1.17.3 // indirect
-	github.com/labstack/gommon v0.4.1 // indirect
-	github.com/mattn/go-colorable v0.1.13 // indirect
+	github.com/jackc/pgpassfile v1.0.0 // indirect
+	github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761 // indirect
+	github.com/jackc/pgx/v5 v5.7.4 // indirect
+	github.com/jackc/puddle/v2 v2.2.2 // indirect
+	github.com/jinzhu/inflection v1.0.0 // indirect
+	github.com/klauspost/compress v1.18.0 // indirect
+	github.com/labstack/gommon v0.4.2 // indirect
+	github.com/mattn/go-colorable v0.1.14 // indirect
 	github.com/mattn/go-isatty v0.0.20 // indirect
-	github.com/matttproud/golang_protobuf_extensions/v2 v2.0.0 // indirect
-	github.com/paulmach/orb v0.10.0 // indirect
-	github.com/pierrec/lz4/v4 v4.1.18 // indirect
+	github.com/mitchellh/copystructure v1.2.0 // indirect
+	github.com/mitchellh/reflectwalk v1.0.2 // indirect
+	github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
+	github.com/ncruces/go-strftime v0.1.9 // indirect
+	github.com/paulmach/orb v0.12.0 // indirect
+	github.com/pganalyze/pg_query_go/v6 v6.1.0 // indirect
+	github.com/pierrec/lz4/v4 v4.1.22 // indirect
+	github.com/pingcap/errors v0.11.5-0.20240311024730-e056997136bb // indirect
+	github.com/pingcap/failpoint v0.0.0-20240528011301-b51a646c7c86 // indirect
+	github.com/pingcap/log v1.1.0 // indirect
+	github.com/pingcap/tidb/pkg/parser v0.0.0-20250324122243-d51e00e5bbf0 // indirect
 	github.com/pkg/errors v0.9.1 // indirect
-	github.com/pmezard/go-difflib v1.0.0 // indirect
-	github.com/prometheus/client_golang v1.17.0 // indirect
-	github.com/prometheus/client_model v0.5.0 // indirect
-	github.com/prometheus/common v0.45.0 // indirect
-	github.com/prometheus/procfs v0.12.0 // indirect
-	github.com/remychantenay/slog-otel v1.2.2 // indirect
-	github.com/samber/lo v1.38.1 // indirect
-	github.com/segmentio/asm v1.2.0 // indirect
-	github.com/shopspring/decimal v1.3.1 // indirect
-	github.com/spf13/pflag v1.0.5 // indirect
-	github.com/stretchr/objx v0.5.1 // indirect
+	github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
+	github.com/prometheus/client_golang v1.23.2 // indirect
+	github.com/prometheus/client_model v0.6.2 // indirect
+	github.com/prometheus/common v0.66.1 // indirect
+	github.com/prometheus/otlptranslator v1.0.0 // indirect
+	github.com/prometheus/procfs v0.17.0 // indirect
+	github.com/remychantenay/slog-otel v1.3.4 // indirect
+	github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec // indirect
+	github.com/riza-io/grpc-go v0.2.0 // indirect
+	github.com/samber/lo v1.51.0 // indirect
+	github.com/samber/slog-common v0.19.0 // indirect
+	github.com/samber/slog-multi v1.5.0 // indirect
+	github.com/segmentio/asm v1.2.1 // indirect
+	github.com/shopspring/decimal v1.4.0 // indirect
+	github.com/spf13/cast v1.4.1 // indirect
+	github.com/spf13/pflag v1.0.10 // indirect
+	github.com/sqlc-dev/sqlc v1.29.0 // indirect
+	github.com/stoewer/go-strcase v1.2.0 // indirect
+	github.com/tetratelabs/wazero v1.9.0 // indirect
 	github.com/valyala/bytebufferpool v1.0.0 // indirect
 	github.com/valyala/fasttemplate v1.2.2 // indirect
-	go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.21.0 // indirect
-	go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.21.0 // indirect
-	go.opentelemetry.io/otel/metric v1.21.0 // indirect
-	go.opentelemetry.io/otel/sdk v1.21.0 // indirect
-	go.opentelemetry.io/proto/otlp v1.0.0 // indirect
-	golang.org/x/crypto v0.15.0 // indirect
-	golang.org/x/exp v0.0.0-20231110203233-9a3e6036ecaa // indirect
-	golang.org/x/mod v0.14.0 // indirect
-	golang.org/x/net v0.18.0 // indirect
-	golang.org/x/sys v0.14.0 // indirect
-	golang.org/x/text v0.14.0 // indirect
-	golang.org/x/time v0.4.0 // indirect
-	google.golang.org/genproto/googleapis/api v0.0.0-20231120223509-83a465c0220f // indirect
-	google.golang.org/genproto/googleapis/rpc v0.0.0-20231120223509-83a465c0220f // indirect
-	google.golang.org/grpc v1.59.0 // indirect
-	google.golang.org/protobuf v1.31.0 // indirect
+	github.com/wasilibs/go-pgquery v0.0.0-20250409022910-10ac41983c07 // indirect
+	github.com/wasilibs/wazero-helpers v0.0.0-20240620070341-3dff1577cd52 // indirect
+	go.opentelemetry.io/auto/sdk v1.2.1 // indirect
+	go.opentelemetry.io/contrib/bridges/otelslog v0.13.0 // indirect
+	go.opentelemetry.io/contrib/bridges/prometheus v0.63.0 // indirect
+	go.opentelemetry.io/contrib/exporters/autoexport v0.63.0 // indirect
+	go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploggrpc v0.14.0 // indirect
+	go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploghttp v0.14.0 // indirect
+	go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v1.38.0 // indirect
+	go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.38.0 // indirect
+	go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.38.0 // indirect
+	go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.38.0 // indirect
+	go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.38.0 // indirect
+	go.opentelemetry.io/otel/exporters/prometheus v0.60.0 // indirect
+	go.opentelemetry.io/otel/exporters/stdout/stdoutlog v0.14.0 // indirect
+	go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.38.0 // indirect
+	go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.38.0 // indirect
+	go.opentelemetry.io/otel/log v0.14.0 // indirect
+	go.opentelemetry.io/otel/metric v1.38.0 // indirect
+	go.opentelemetry.io/otel/sdk v1.38.0 // indirect
+	go.opentelemetry.io/otel/sdk/log v0.14.0 // indirect
+	go.opentelemetry.io/otel/sdk/metric v1.38.0 // indirect
+	go.opentelemetry.io/proto/otlp v1.8.0 // indirect
+	go.uber.org/atomic v1.11.0 // indirect
+	go.uber.org/multierr v1.11.0 // indirect
+	go.uber.org/zap v1.27.0 // indirect
+	go.yaml.in/yaml/v2 v2.4.3 // indirect
+	go.yaml.in/yaml/v3 v3.0.4 // indirect
+	golang.org/x/crypto v0.42.0 // indirect
+	golang.org/x/exp v0.0.0-20250305212735-054e65f0b394 // indirect
+	golang.org/x/mod v0.28.0 // indirect
+	golang.org/x/net v0.44.0 // indirect
+	golang.org/x/sys v0.36.0 // indirect
+	golang.org/x/text v0.29.0 // indirect
+	golang.org/x/time v0.13.0 // indirect
+	golang.org/x/tools v0.36.0 // indirect
+	google.golang.org/genproto/googleapis/api v0.0.0-20250922171735-9219d122eba9 // indirect
+	google.golang.org/genproto/googleapis/rpc v0.0.0-20250922171735-9219d122eba9 // indirect
+	google.golang.org/grpc v1.75.1 // indirect
+	google.golang.org/protobuf v1.36.9 // indirect
+	gopkg.in/natefinch/lumberjack.v2 v2.2.1 // indirect
+	modernc.org/libc v1.62.1 // indirect
+	modernc.org/mathutil v1.7.1 // indirect
+	modernc.org/memory v1.9.1 // indirect
+	modernc.org/sqlite v1.37.0 // indirect
 )

go.sum

@@ -1,70 +1,108 @@
github.com/ClickHouse/ch-go v0.58.2 h1:jSm2szHbT9MCAB1rJ3WuCJqmGLi5UTjlNu+f530UTS0= cel.dev/expr v0.24.0 h1:56OvJKSH3hDGL0ml5uSxZmz3/3Pq4tJ+fb1unVLAFcY=
github.com/ClickHouse/ch-go v0.58.2/go.mod h1:Ap/0bEmiLa14gYjCiRkYGbXvbe8vwdrfTYWhsuQ99aw= cel.dev/expr v0.24.0/go.mod h1:hLPLo1W4QUmuYdA72RBX06QTs6MXw941piREPl3Yfiw=
github.com/ClickHouse/clickhouse-go v1.5.4 h1:cKjXeYLNWVJIx2J1K6H2CqyRmfwVJVY1OV1coaaFcI0= dario.cat/mergo v1.0.2 h1:85+piFYR1tMbRrLcDwR18y4UKJ3aH1Tbzi24VRW1TK8=
github.com/ClickHouse/clickhouse-go/v2 v2.14.3 h1:s9SuU3PfJrfJ4SDbVRo6XM2ZWlr7efvW9Z/ppUpE1vo= dario.cat/mergo v1.0.2/go.mod h1:E/hbnu0NxMFBjpMIE34DRGLWqDy0g5FuKDhCb31ngxA=
github.com/ClickHouse/clickhouse-go/v2 v2.14.3/go.mod h1:qdw8IMGH4Y+PedKlf9QEhFO1ATTSFhh4exQRVIa3y2A= filippo.io/edwards25519 v1.1.0 h1:FNf4tywRC1HmFuKW5xopWpigGjJKiJSV0Cqo0cJWDaA=
github.com/ClickHouse/clickhouse-go/v2 v2.15.0 h1:G0hTKyO8fXXR1bGnZ0DY3vTG01xYfOGW76zgjg5tmC4= filippo.io/edwards25519 v1.1.0/go.mod h1:BxyFTGdWcka3PhytdK4V28tE5sGfRvvvRV7EaN4VDT4=
github.com/ClickHouse/clickhouse-go/v2 v2.15.0/go.mod h1:kXt1SRq0PIRa6aKZD7TnFnY9PQKmc2b13sHtOYcK6cQ= github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/abh/slog-echo v0.0.0-20231024051244-af740639893e h1:RkyCTh9poEVRWZi9RWdSkcveX8TM5YVORZtdIeogUlI= github.com/ClickHouse/ch-go v0.68.0 h1:zd2VD8l2aVYnXFRyhTyKCrxvhSz1AaY4wBUXu/f0GiU=
github.com/abh/slog-echo v0.0.0-20231024051244-af740639893e/go.mod h1:iLkF/wVZhBWabIw4dB+bfbj1TjCd/OXnag0AE8IDFRg= github.com/ClickHouse/ch-go v0.68.0/go.mod h1:C89Fsm7oyck9hr6rRo5gqqiVtaIY6AjdD0WFMyNRQ5s=
github.com/andybalholm/brotli v1.0.6 h1:Yf9fFpf49Zrxb9NlQaluyE92/+X7UVHlhMNJN2sxfOI= github.com/ClickHouse/clickhouse-go/v2 v2.40.3 h1:46jB4kKwVDUOnECpStKMVXxvR0Cg9zeV9vdbPjtn6po=
github.com/andybalholm/brotli v1.0.6/go.mod h1:fO7iG3H7G2nSZ7m0zPUDn85XEX2GTukHGRSepvi9Eig= github.com/ClickHouse/clickhouse-go/v2 v2.40.3/go.mod h1:qO0HwvjCnTB4BPL/k6EE3l4d9f/uF+aoimAhJX70eKA=
github.com/Masterminds/goutils v1.1.1 h1:5nUrii3FMTL5diU80unEVvNevw1nH4+ZV4DSLVJLSYI=
github.com/Masterminds/goutils v1.1.1/go.mod h1:8cTjp+g8YejhMuvIA5y2vz3BpJxksy863GQaJW2MFNU=
github.com/Masterminds/semver/v3 v3.1.1 h1:hLg3sBzpNErnxhQtUy/mmLR2I9foDujNK030IGemrRc=
github.com/Masterminds/semver/v3 v3.1.1/go.mod h1:VPu/7SZ7ePZ3QOrcuXROw5FAcLl4a0cBrbBpGY/8hQs=
github.com/Masterminds/sprig/v3 v3.2.2 h1:17jRggJu518dr3QaafizSXOjKYp94wKfABxUmyxvxX8=
github.com/Masterminds/sprig/v3 v3.2.2/go.mod h1:UoaO7Yp8KlPnJIYWTFkMaqPUYKTfGFPhxNuwnnxkKlk=
github.com/andybalholm/brotli v1.2.0 h1:ukwgCxwYrmACq68yiUqwIWnGY0cTPox/M94sVwToPjQ=
github.com/andybalholm/brotli v1.2.0/go.mod h1:rzTDkvFWvIrjDXZHkuS16NPggd91W3kUSvPlQ1pLaKY=
github.com/antlr4-go/antlr/v4 v4.13.1 h1:SqQKkuVZ+zWkMMNkjy5FZe5mr5WURWnlpmOuzYWrPrQ=
github.com/antlr4-go/antlr/v4 v4.13.1/go.mod h1:GKmUxMtwp6ZgGwZSva4eWPC5mS6vUAmOABFgjdkM7Nw=
github.com/benbjohnson/clock v1.1.0/go.mod h1:J11/hYXuz8f4ySSvYwY0FKfm+ezbsZBKZxNJlLklBHA=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM= github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw= github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/cenkalti/backoff/v4 v4.2.1 h1:y4OZtCnogmCPw98Zjyt5a6+QwPLGkiQsYW5oUqylYbM= github.com/cenkalti/backoff/v5 v5.0.3 h1:ZN+IMa753KfX5hd8vVaMixjnqRZ3y8CuJKRKj1xcsSM=
github.com/cenkalti/backoff/v4 v4.2.1/go.mod h1:Y3VNntkOUPxTVeUxJ/G5vcM//AlwfmyYozVcomhLiZE= github.com/cenkalti/backoff/v5 v5.0.3/go.mod h1:rkhZdG3JZukswDf7f0cwqPNk4K0sa+F97BxZthm/crw=
github.com/cespare/xxhash/v2 v2.2.0 h1:DC2CZ1Ep5Y4k3ZQ899DldepgrayRUGE6BBZ/cd9Cj44= github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
github.com/cespare/xxhash/v2 v2.2.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs= github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/cpuguy83/go-md2man/v2 v2.0.2/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o= github.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g=
github.com/cpuguy83/go-md2man/v2 v2.0.3/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o= github.com/cubicdaiya/gonp v1.0.4 h1:ky2uIAJh81WiLcGKBVD5R7KsM/36W6IqqTy6Bo6rGws=
github.com/cubicdaiya/gonp v1.0.4/go.mod h1:iWGuP/7+JVTn02OWhRemVbMmG1DOUnmrGTYYACpOI0I=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY=
github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto=
github.com/fatih/color v1.16.0 h1:zmkK9Ngbjj+K0yRhTVONQh1p/HknKYSlNT+vZCzyokM=
github.com/fatih/color v1.16.0/go.mod h1:fL2Sau1YI5c0pdGEVCbKQbLXB6edEj1ZgiY4NijnWvE=
github.com/fatih/structtag v1.2.0 h1:/OdNE99OxoI/PqaW/SuSK9uxxT3f/tcSZgon/ssNSx4=
github.com/fatih/structtag v1.2.0/go.mod h1:mBJUNpUnHmRKrKlQQlmCrh5PuhftFbNv8Ys4/aAZl94=
github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg=
github.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
github.com/go-faster/city v1.0.1 h1:4WAxSZ3V2Ws4QRDrscLEDcibJY8uf41H6AhXDrNDcGw= github.com/go-faster/city v1.0.1 h1:4WAxSZ3V2Ws4QRDrscLEDcibJY8uf41H6AhXDrNDcGw=
github.com/go-faster/city v1.0.1/go.mod h1:jKcUJId49qdW3L1qKHH/3wPeUstCVpVSXTM6vO3VcTw= github.com/go-faster/city v1.0.1/go.mod h1:jKcUJId49qdW3L1qKHH/3wPeUstCVpVSXTM6vO3VcTw=
github.com/go-faster/errors v0.6.1 h1:nNIPOBkprlKzkThvS/0YaX8Zs9KewLCOSFQS5BU06FI= github.com/go-faster/errors v0.7.1 h1:MkJTnDoEdi9pDabt1dpWf7AA8/BaSYZqibYyhZ20AYg=
github.com/go-faster/errors v0.6.1/go.mod h1:5MGV2/2T9yvlrbhe9pD9LO5Z/2zCSq2T8j+Jpi2LAyY= github.com/go-faster/errors v0.7.1/go.mod h1:5ySTjWFiphBs07IKuiL69nxdfd5+fzh1u7FPGZP2quo=
github.com/go-faster/errors v0.7.0 h1:UnD/xusnfUgtEYkgRZohqL2AfmPTwv13NAJwwFFaNYc=
github.com/go-faster/errors v0.7.0/go.mod h1:5ySTjWFiphBs07IKuiL69nxdfd5+fzh1u7FPGZP2quo=
github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A= github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/logr v1.2.4 h1:g01GSCwiDw2xSZfjJ2/T9M+S6pFdcNtFYsp+Y43HYDQ= github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI=
github.com/go-logr/logr v1.2.4/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A= github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-logr/logr v1.3.0 h1:2y3SDp0ZXuc6/cjLSZ+Q3ir+QB9T/iG5yYRXqsagWSY=
github.com/go-logr/logr v1.3.0/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag= github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE= github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
github.com/go-sql-driver/mysql v1.7.1 h1:lUIinVbN1DY0xBg0eMOzmmtGoHwWBbvnWubQUrtU8EI= github.com/go-sql-driver/mysql v1.9.3 h1:U/N249h2WzJ3Ukj8SowVFjdtZKfu9vlLZxjPXV1aweo=
github.com/go-sql-driver/mysql v1.7.1/go.mod h1:OXbVy3sEdcQ2Doequ6Z5BW6fXNQTmx+9S1MCJN5yJMI= github.com/go-sql-driver/mysql v1.9.3/go.mod h1:qn46aNg1333BRMNU69Lq93t8du/dwxI64Gl8i5p1WMU=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q= github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
-github.com/golang-jwt/jwt v3.2.2+incompatible h1:IfV12K8xAKAnZqdXVzCZ+TOjboZ2keLg81eXfW3O+oY=
-github.com/golang-jwt/jwt v3.2.2+incompatible/go.mod h1:8pz2t5EyA70fFQQSrl6XZXzqecmYZeUEB8OUGHkxJ+I=
-github.com/golang/glog v1.1.2 h1:DVjP2PbBOzHyzA+dn3WhHIq4NdVu3Q+pvivFICf/7fo=
-github.com/golang/glog v1.1.2/go.mod h1:zR+okUeTbrL6EL3xHUDxZuEtGv04p5shwip1+mL/rLQ=
+github.com/gojuno/minimock/v3 v3.0.10 h1:0UbfgdLHaNRPHWF/RFYPkwxV2KI+SE4tR0dDSFMD7+A=
+github.com/gojuno/minimock/v3 v3.0.10/go.mod h1:CFXcUJYnBe+1QuNzm+WmdPYtvi/+7zQcPcyQGsbcIXg=
 github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
-github.com/golang/protobuf v1.5.3 h1:KhyjKVUg7Usr/dYsdSqoFveMYd5ko72D+zANwlG1mmg=
-github.com/golang/protobuf v1.5.3/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
+github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
+github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=
 github.com/golang/snappy v0.0.1/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
+github.com/google/cel-go v0.24.1 h1:jsBCtxG8mM5wiUJDSGUqU0K7Mtr3w7Eyv00rw4DiZxI=
+github.com/google/cel-go v0.24.1/go.mod h1:Hdf9TqOaTNSFQA1ybQaRqATVoK7m/zcf7IMhGXP5zI8=
 github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
 github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
-github.com/google/go-cmp v0.5.9 h1:O2Tfq5qg4qc4AmwVlvv0oLiVAGB7enBSJ2x2DqQFi38=
-github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
-github.com/google/uuid v1.3.1 h1:KjJaJ9iWZ3jOFZIf1Lqf4laDRCasjl0BCmnEGxkdLb4=
-github.com/google/uuid v1.3.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
-github.com/google/uuid v1.4.0 h1:MtMxsa51/r9yyhkyLsVeVt0B+BGQZzpQiTQ4eHZ8bc4=
-github.com/google/uuid v1.4.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
-github.com/grpc-ecosystem/grpc-gateway/v2 v2.18.0 h1:RtRsiaGvWxcwd8y3BiRZxsylPT8hLWZ5SPcfI+3IDNk=
-github.com/grpc-ecosystem/grpc-gateway/v2 v2.18.0/go.mod h1:TzP6duP4Py2pHLVPPQp42aoYI92+PCrVotyR5e8Vqlk=
-github.com/grpc-ecosystem/grpc-gateway/v2 v2.18.1 h1:6UKoz5ujsI55KNpsJH3UwCq3T8kKbZwNZBNPuTTje8U=
-github.com/grpc-ecosystem/grpc-gateway/v2 v2.18.1/go.mod h1:YvJ2f6MplWDhfxiUC3KpyTy76kYUZA4W3pTv/wdKQ9Y=
+github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
+github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
+github.com/google/pprof v0.0.0-20250317173921-a4b03ec1a45e h1:ijClszYn+mADRFY17kjQEVQ1XRhq2/JR1M3sGqeJoxs=
+github.com/google/pprof v0.0.0-20250317173921-a4b03ec1a45e/go.mod h1:boTsfXsheKC2y+lKOCMpSfarhxDeIzfZG1jqGcPl3cA=
+github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
+github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
+github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
+github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.2 h1:8Tjv8EJ+pM1xP8mK6egEbD1OgnVTyacbefKhmbLhIhU=
+github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.2/go.mod h1:pkJQ2tZHJ0aFOVEEot6oZmaVEZcRme73eIFmhiVuRWs=
+github.com/hashicorp/go-cleanhttp v0.5.2 h1:035FKYIWjmULyFRBKPs8TBQoi0x6d9G4xc9neXJWAZQ=
+github.com/hashicorp/go-cleanhttp v0.5.2/go.mod h1:kO/YDlP8L1346E6Sodw+PrpBSV4/SoxCXGY6BqNFT48=
+github.com/hashicorp/go-hclog v1.6.3 h1:Qr2kF+eVWjTiYmU7Y31tYlP1h0q/X3Nl3tPGdaB11/k=
+github.com/hashicorp/go-hclog v1.6.3/go.mod h1:W4Qnvbt70Wk/zYJryRzDRU/4r0kIg0PVHBcfoyhpF5M=
+github.com/hashicorp/go-retryablehttp v0.7.8 h1:ylXZWnqa7Lhqpk0L1P1LzDtGcCR0rPVUrx/c8Unxc48=
+github.com/hashicorp/go-retryablehttp v0.7.8/go.mod h1:rjiScheydd+CxvumBsIrFKlx3iS0jrZ7LvzFGFmuKbw=
+github.com/hexdigest/gowrap v1.4.2 h1:crtk5lGwHCROa77mKcP/iQ50eh7z6mBjXsg4U492gfc=
+github.com/hexdigest/gowrap v1.4.2/go.mod h1:s+1hE6qakgdaaLqgdwPAj5qKYVBCSbPJhEbx+I1ef/Q=
+github.com/huandu/xstrings v1.3.1/go.mod h1:y5/lhBue+AyNmUVz9RLU9xbLR0o4KIIExikq4ovT0aE=
+github.com/huandu/xstrings v1.5.0 h1:2ag3IFq9ZDANvthTwTiqSSZLjDc+BedvHPAp5tJy2TI=
+github.com/huandu/xstrings v1.5.0/go.mod h1:y5/lhBue+AyNmUVz9RLU9xbLR0o4KIIExikq4ovT0aE=
+github.com/imdario/mergo v0.3.11/go.mod h1:jmQim1M+e3UYxmgPu/WyfjB3N3VflVyUjjjwH0dnCYA=
+github.com/imdario/mergo v0.3.12 h1:b6R2BslTbIEToALKP7LxUvijTsNI9TAe80pLWN2g/HU=
+github.com/imdario/mergo v0.3.12/go.mod h1:jmQim1M+e3UYxmgPu/WyfjB3N3VflVyUjjjwH0dnCYA=
 github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
 github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
+github.com/jackc/pgpassfile v1.0.0 h1:/6Hmqy13Ss2zCq62VdNG8tM1wchn8zjSGOBJ6icpsIM=
+github.com/jackc/pgpassfile v1.0.0/go.mod h1:CEx0iS5ambNFdcRtxPj5JhEz+xB6uRky5eyVu/W2HEg=
+github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761 h1:iCEnooe7UlwOQYpKFhBabPMi4aNAfoODPEFNiAnClxo=
+github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761/go.mod h1:5TJZWKEWniPve33vlWYSoGYefn3gLQRzjfDlhSJ9ZKM=
+github.com/jackc/pgx/v5 v5.7.4 h1:9wKznZrhWa2QiHL+NjTSPP6yjl3451BX3imWDnokYlg=
+github.com/jackc/pgx/v5 v5.7.4/go.mod h1:ncY89UGWxg82EykZUwSpUKEfccBGGYq1xjrOpsbsfGQ=
+github.com/jackc/puddle/v2 v2.2.2 h1:PR8nw+E/1w0GLuRFSmiioY6UooMp6KJv0/61nB7icHo=
+github.com/jackc/puddle/v2 v2.2.2/go.mod h1:vriiEXHvEE654aYKXXjOvZM39qJ0q+azkZFrfEOc3H4=
+github.com/jinzhu/inflection v1.0.0 h1:K317FqzuhWc8YvSVlFMCCUb36O/S9MCKRDI7QkRKD/E=
+github.com/jinzhu/inflection v1.0.0/go.mod h1:h+uFLlag+Qp1Va5pdKtLDYj+kHp5pxUVkryuEj+Srlc=
 github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
 github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
 github.com/klauspost/compress v1.13.6/go.mod h1:/3/Vjq9QcHkK5uEr5lBEmyoZ1iFhe47etQ6QUkpK6sk=
-github.com/klauspost/compress v1.17.2 h1:RlWWUY/Dr4fL8qk9YG7DTZ7PDgME2V4csBXA8L/ixi4=
-github.com/klauspost/compress v1.17.2/go.mod h1:ntbaceVETuRiXiv4DpjP66DpAtAGkEQskQzEyD//IeE=
-github.com/klauspost/compress v1.17.3 h1:qkRjuerhUU1EmXLYGkSH6EZL+vPSxIrYjLNAK4slzwA=
-github.com/klauspost/compress v1.17.3/go.mod h1:/dCuZOvVtNoHsyb+cuJD3itjs3NbnF6KH9zAO4BDxPM=
+github.com/klauspost/compress v1.18.0 h1:c/Cqfb0r+Yi+JtIEq73FWXVkRonBlf0CRNYc8Zttxdo=
+github.com/klauspost/compress v1.18.0/go.mod h1:2Pp+KzxcywXVXMr50+X0Q/Lsb43OQHYWRCY2AiWywWQ=
 github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
 github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
 github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
@@ -72,236 +110,312 @@ github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
 github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
 github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
 github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
-github.com/labstack/echo/v4 v4.11.2 h1:T+cTLQxWCDfqDEoydYm5kCobjmHwOwcv4OJAPHilmdE=
-github.com/labstack/echo/v4 v4.11.2/go.mod h1:UcGuQ8V6ZNRmSweBIJkPvGfwCMIlFmiqrPqiEBfPYws=
-github.com/labstack/echo/v4 v4.11.3 h1:Upyu3olaqSHkCjs1EJJwQ3WId8b8b1hxbogyommKktM=
-github.com/labstack/echo/v4 v4.11.3/go.mod h1:UcGuQ8V6ZNRmSweBIJkPvGfwCMIlFmiqrPqiEBfPYws=
-github.com/labstack/gommon v0.4.0 h1:y7cvthEAEbU0yHOf4axH8ZG2NH8knB9iNSoTO8dyIk8=
-github.com/labstack/gommon v0.4.0/go.mod h1:uW6kP17uPlLJsD3ijUYn3/M5bAxtlZhMI6m3MFxTMTM=
-github.com/labstack/gommon v0.4.1 h1:gqEff0p/hTENGMABzezPoPSRtIh1Cvw0ueMOe0/dfOk=
-github.com/labstack/gommon v0.4.1/go.mod h1:TyTrpPqxR5KMk8LKVtLmfMjeQ5FEkBYdxLYPw/WfrOM=
-github.com/mattn/go-colorable v0.1.11/go.mod h1:u5H1YNBxpqRaxsYJYSkiCWKzEfiAb1Gb520KVy5xxl4=
-github.com/mattn/go-colorable v0.1.13 h1:fFA4WZxdEF4tXPZVKMLwD8oUnCTTo08duU7wxecdEvA=
-github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovkB8vQcUbaXHg=
-github.com/mattn/go-isatty v0.0.14/go.mod h1:7GGIvUiUoEMVVmxf/4nioHXj79iQHKdU27kJ6hsGG94=
-github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
+github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
+github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
+github.com/labstack/echo-contrib v0.17.4 h1:g5mfsrJfJTKv+F5uNKCyrjLK7js+ZW6HTjg4FnDxxgk=
+github.com/labstack/echo-contrib v0.17.4/go.mod h1:9O7ZPAHUeMGTOAfg80YqQduHzt0CzLak36PZRldYrZ0=
+github.com/labstack/echo/v4 v4.13.4 h1:oTZZW+T3s9gAu5L8vmzihV7/lkXGZuITzTQkTEhcXEA=
+github.com/labstack/echo/v4 v4.13.4/go.mod h1:g63b33BZ5vZzcIUF8AtRH40DrTlXnx4UMC8rBdndmjQ=
+github.com/labstack/gommon v0.4.2 h1:F8qTUNXgG1+6WQmqoUWnz8WiEU60mXVVw0P4ht1WRA0=
+github.com/labstack/gommon v0.4.2/go.mod h1:QlUFxVM+SNXhDL/Z7YhocGIBYOiwB0mXm1+1bAPHPyU=
+github.com/mattn/go-colorable v0.1.14 h1:9A9LHSqF/7dyVVX6g0U9cwm9pG3kP9gSzcuIPHPsaIE=
+github.com/mattn/go-colorable v0.1.14/go.mod h1:6LmQG8QLFO4G5z1gPvYEzlUgJ2wF+stgPZH1UqBm1s8=
 github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
 github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
-github.com/matttproud/golang_protobuf_extensions v1.0.4 h1:mmDVorXM7PCGKw94cs5zkfA9PSy5pEvNWRP0ET0TIVo=
-github.com/matttproud/golang_protobuf_extensions/v2 v2.0.0 h1:jWpvCLoY8Z/e3VKvlsiIGKtc+UG6U5vzxaoagmhXfyg=
-github.com/matttproud/golang_protobuf_extensions/v2 v2.0.0/go.mod h1:QUyp042oQthUoa9bqDv0ER0wrtXnBruoNd7aNjkbP+k=
+github.com/mitchellh/copystructure v1.0.0/go.mod h1:SNtv71yrdKgLRyLFxmLdkAbkKEFWgYaq1OVrnRcwhnw=
+github.com/mitchellh/copystructure v1.2.0 h1:vpKXTN4ewci03Vljg/q9QvCGUDttBOGBIa15WveJJGw=
+github.com/mitchellh/copystructure v1.2.0/go.mod h1:qLl+cE2AmVv+CoeAwDPye/v+N2HKCj9FbZEVFJRxO9s=
+github.com/mitchellh/reflectwalk v1.0.0/go.mod h1:mSTlrgnPZtwu0c4WaC2kGObEpuNDbx0jmZXqmk4esnw=
+github.com/mitchellh/reflectwalk v1.0.2 h1:G2LzWKi524PWgd3mLHV8Y5k7s6XUvT0Gef6zxSIeXaQ=
+github.com/mitchellh/reflectwalk v1.0.2/go.mod h1:mSTlrgnPZtwu0c4WaC2kGObEpuNDbx0jmZXqmk4esnw=
 github.com/montanaflynn/stats v0.0.0-20171201202039-1bf9dbcd8cbe/go.mod h1:wL8QJuTMNUDYhXwkmfOly8iTdp5TEcJFWZD2D7SIkUc=
-github.com/paulmach/orb v0.10.0 h1:guVYVqzxHE/CQ1KpfGO077TR0ATHSNjp4s6XGLn3W9s=
-github.com/paulmach/orb v0.10.0/go.mod h1:5mULz1xQfs3bmQm63QEJA6lNGujuRafwA5S/EnuLaLU=
+github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
+github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
+github.com/ncruces/go-strftime v0.1.9 h1:bY0MQC28UADQmHmaF5dgpLmImcShSi2kHU9XLdhx/f4=
+github.com/ncruces/go-strftime v0.1.9/go.mod h1:Fwc5htZGVVkseilnfgOVb9mKy6w1naJmn9CehxcKcls=
+github.com/paulmach/orb v0.12.0 h1:z+zOwjmG3MyEEqzv92UN49Lg1JFYx0L9GpGKNVDKk1s=
+github.com/paulmach/orb v0.12.0/go.mod h1:5mULz1xQfs3bmQm63QEJA6lNGujuRafwA5S/EnuLaLU=
 github.com/paulmach/protoscan v0.2.1/go.mod h1:SpcSwydNLrxUGSDvXvO0P7g7AuhJ7lcKfDlhJCDw2gY=
-github.com/pierrec/lz4/v4 v4.1.18 h1:xaKrnTkyoqfh1YItXl56+6KJNVYWlEEPuAQW9xsplYQ=
-github.com/pierrec/lz4/v4 v4.1.18/go.mod h1:gZWDp/Ze/IJXGXf23ltt2EXimqmTUXEy0GFuRQyBid4=
+github.com/pganalyze/pg_query_go/v6 v6.1.0 h1:jG5ZLhcVgL1FAw4C/0VNQaVmX1SUJx71wBGdtTtBvls=
+github.com/pganalyze/pg_query_go/v6 v6.1.0/go.mod h1:nvTHIuoud6e1SfrUaFwHqT0i4b5Nr+1rPWVds3B5+50=
+github.com/pierrec/lz4/v4 v4.1.22 h1:cKFw6uJDK+/gfw5BcDL0JL5aBsAFdsIT18eRtLj7VIU=
+github.com/pierrec/lz4/v4 v4.1.22/go.mod h1:gZWDp/Ze/IJXGXf23ltt2EXimqmTUXEy0GFuRQyBid4=
+github.com/pingcap/errors v0.11.0/go.mod h1:Oi8TUi2kEtXXLMJk9l1cGmz20kV3TaQ0usTwv5KuLY8=
+github.com/pingcap/errors v0.11.5-0.20240311024730-e056997136bb h1:3pSi4EDG6hg0orE1ndHkXvX6Qdq2cZn8gAPir8ymKZk=
+github.com/pingcap/errors v0.11.5-0.20240311024730-e056997136bb/go.mod h1:X2r9ueLEUZgtx2cIogM0v4Zj5uvvzhuuiu7Pn8HzMPg=
+github.com/pingcap/failpoint v0.0.0-20240528011301-b51a646c7c86 h1:tdMsjOqUR7YXHoBitzdebTvOjs/swniBTOLy5XiMtuE=
+github.com/pingcap/failpoint v0.0.0-20240528011301-b51a646c7c86/go.mod h1:exzhVYca3WRtd6gclGNErRWb1qEgff3LYta0LvRmON4=
+github.com/pingcap/log v1.1.0 h1:ELiPxACz7vdo1qAvvaWJg1NrYFoY6gqAh/+Uo6aXdD8=
+github.com/pingcap/log v1.1.0/go.mod h1:DWQW5jICDR7UJh4HtxXSM20Churx4CQL0fwL/SoOSA4=
+github.com/pingcap/tidb/pkg/parser v0.0.0-20250324122243-d51e00e5bbf0 h1:W3rpAI3bubR6VWOcwxDIG0Gz9G5rl5b3SL116T0vBt0=
+github.com/pingcap/tidb/pkg/parser v0.0.0-20250324122243-d51e00e5bbf0/go.mod h1:+8feuexTKcXHZF/dkDfvCwEyBAmgb4paFc3/WeYV2eE=
+github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
 github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
 github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
-github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
 github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
-github.com/prometheus/client_golang v1.17.0 h1:rl2sfwZMtSthVU752MqfjQozy7blglC+1SOtjMAMh+Q=
-github.com/prometheus/client_golang v1.17.0/go.mod h1:VeL+gMmOAxkS2IqfCq0ZmHSL+LjWfWDUmp1mBz9JgUY=
-github.com/prometheus/client_model v0.5.0 h1:VQw1hfvPvk3Uv6Qf29VrPF32JB6rtbgI6cYPYQjL0Qw=
-github.com/prometheus/client_model v0.5.0/go.mod h1:dTiFglRmd66nLR9Pv9f0mZi7B7fk5Pm3gvsjB5tr+kI=
-github.com/prometheus/common v0.45.0 h1:2BGz0eBc2hdMDLnO/8n0jeB3oPrt2D08CekT0lneoxM=
-github.com/prometheus/common v0.45.0/go.mod h1:YJmSTw9BoKxJplESWWxlbyttQR4uaEcGyv9MZjVOJsY=
-github.com/prometheus/procfs v0.12.0 h1:jluTpSng7V9hY0O2R9DzzJHYb2xULk9VTR1V1R/k6Bo=
-github.com/prometheus/procfs v0.12.0/go.mod h1:pcuDEFsWDnvcgNzo4EEweacyhjeA9Zk3cnaOZAZEfOo=
-github.com/remychantenay/slog-otel v1.2.1 h1:CTsgUd2h3zWDf8/KQSCme+VMw7bdf/BycyGT0rjonSg=
-github.com/remychantenay/slog-otel v1.2.1/go.mod h1:YV+vYh8c5i5U2U/QeBxRYelOLdYJ+AYBHy5IIC/XSeo=
-github.com/remychantenay/slog-otel v1.2.2 h1:EnFH7oq2i83TBstmqHqMEjjRNVWmXMsiybdZxad4Nus=
-github.com/remychantenay/slog-otel v1.2.2/go.mod h1:YV+vYh8c5i5U2U/QeBxRYelOLdYJ+AYBHy5IIC/XSeo=
-github.com/rogpeppe/go-internal v1.10.0 h1:TMyTOH3F/DB16zRVcYyreMH6GnZZrwQVAoYjRBZyWFQ=
-github.com/rogpeppe/go-internal v1.10.0/go.mod h1:UQnix2H7Ngw/k4C5ijL5+65zddjncjaFoBhdsK/akog=
+github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
+github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
+github.com/prometheus/client_golang v1.23.2 h1:Je96obch5RDVy3FDMndoUsjAhG5Edi49h0RJWRi/o0o=
+github.com/prometheus/client_golang v1.23.2/go.mod h1:Tb1a6LWHB3/SPIzCoaDXI4I8UHKeFTEQ1YCr+0Gyqmg=
+github.com/prometheus/client_model v0.6.2 h1:oBsgwpGs7iVziMvrGhE53c/GrLUsZdHnqNwqPLxwZyk=
+github.com/prometheus/client_model v0.6.2/go.mod h1:y3m2F6Gdpfy6Ut/GBsUqTWZqCUvMVzSfMLjcu6wAwpE=
+github.com/prometheus/common v0.66.1 h1:h5E0h5/Y8niHc5DlaLlWLArTQI7tMrsfQjHV+d9ZoGs=
+github.com/prometheus/common v0.66.1/go.mod h1:gcaUsgf3KfRSwHY4dIMXLPV0K/Wg1oZ8+SbZk/HH/dA=
+github.com/prometheus/otlptranslator v1.0.0 h1:s0LJW/iN9dkIH+EnhiD3BlkkP5QVIUVEoIwkU+A6qos=
+github.com/prometheus/otlptranslator v1.0.0/go.mod h1:vRYWnXvI6aWGpsdY/mOT/cbeVRBlPWtBNDb7kGR3uKM=
+github.com/prometheus/procfs v0.17.0 h1:FuLQ+05u4ZI+SS/w9+BWEM2TXiHKsUQ9TADiRH7DuK0=
+github.com/prometheus/procfs v0.17.0/go.mod h1:oPQLaDAMRbA+u8H5Pbfq+dl3VDAvHxMUOVhe0wYB2zw=
+github.com/remychantenay/slog-otel v1.3.4 h1:xoM41ayLff2U8zlK5PH31XwD7Lk3W9wKfl4+RcmKom4=
+github.com/remychantenay/slog-otel v1.3.4/go.mod h1:ZkazuFMICKGDrO0r1njxKRdjTt/YcXKn6v2+0q/b0+U=
+github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec h1:W09IVJc94icq4NjY3clb7Lk8O1qJ8BdBEF8z0ibU0rE=
+github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec/go.mod h1:qqbHyh8v60DhA7CoWK5oRCqLrMHRGoxYCSS9EjAz6Eo=
+github.com/riza-io/grpc-go v0.2.0 h1:2HxQKFVE7VuYstcJ8zqpN84VnAoJ4dCL6YFhJewNcHQ=
+github.com/riza-io/grpc-go v0.2.0/go.mod h1:2bDvR9KkKC3KhtlSHfR3dAXjUMT86kg4UfWFyVGWqi8=
+github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ=
+github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc=
 github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
-github.com/samber/lo v1.38.1 h1:j2XEAqXKb09Am4ebOg31SpvzUTTs6EN3VfgeLUhPdXM=
-github.com/samber/lo v1.38.1/go.mod h1:+m/ZKRl6ClXCE2Lgf3MsQlWfh4bn1bz6CXEOxnEXnEA=
-github.com/samber/slog-echo v1.8.0 h1:DQQRtAliSvQw+ScEdu5gv3jbHu9cCTzvHuTD8GDv7zI=
-github.com/samber/slog-echo v1.8.0/go.mod h1:0ab2AwcciQXNAXEcjkHwD9okOh9vEHEYn8xP97ocuhM=
-github.com/segmentio/asm v1.2.0 h1:9BQrFxC+YOHJlTlHGkTrFWf59nbL3XnCoFLTwDCI7ys=
-github.com/segmentio/asm v1.2.0/go.mod h1:BqMnlJP91P8d+4ibuonYZw9mfnzI9HfxselHZr5aAcs=
-github.com/shopspring/decimal v1.3.1 h1:2Usl1nmF/WZucqkFZhnfFYxxxu8LG21F6nPQBE5gKV8=
-github.com/shopspring/decimal v1.3.1/go.mod h1:DKyhrW/HYNuLGql+MJL6WCR6knT2jwCFRcu2hWCYk4o=
-github.com/spf13/cobra v1.7.0 h1:hyqWnYt1ZQShIddO5kBpj3vu05/++x6tJ6dg8EC572I=
-github.com/spf13/cobra v1.7.0/go.mod h1:uLxZILRyS/50WlhOIKD7W6V5bgeIt+4sICxh6uRMrb0=
-github.com/spf13/cobra v1.8.0 h1:7aJaZx1B85qltLMc546zn58BxxfZdR/W22ej9CFoEf0=
-github.com/spf13/cobra v1.8.0/go.mod h1:WXLWApfZ71AjXPya3WOlMsY9yMs7YeiHhFVlvLyhcho=
-github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
-github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
+github.com/samber/lo v1.51.0 h1:kysRYLbHy/MB7kQZf5DSN50JHmMsNEdeY24VzJFu7wI=
+github.com/samber/lo v1.51.0/go.mod h1:4+MXEGsJzbKGaUEQFKBq2xtfuznW9oz/WrgyzMzRoM0=
+github.com/samber/slog-common v0.19.0 h1:fNcZb8B2uOLooeYwFpAlKjkQTUafdjfqKcwcC89G9YI=
+github.com/samber/slog-common v0.19.0/go.mod h1:dTz+YOU76aH007YUU0DffsXNsGFQRQllPQh9XyNoA3M=
+github.com/samber/slog-echo v1.17.2 h1:/d1D2ZiJsaqaeyz3Yk9olCeFFpi4EIJZtnoMp5zt9fs=
+github.com/samber/slog-echo v1.17.2/go.mod h1:4diugqPTk6iQdL7gZFJIyf6zGMLVMaGnCmNm+DBSMRU=
+github.com/samber/slog-multi v1.5.0 h1:UDRJdsdb0R5vFQFy3l26rpX3rL3FEPJTJ2yKVjoiT1I=
+github.com/samber/slog-multi v1.5.0/go.mod h1:im2Zi3mH/ivSY5XDj6LFcKToRIWPw1OcjSVSdXt+2d0=
+github.com/segmentio/asm v1.2.1 h1:DTNbBqs57ioxAD4PrArqftgypG4/qNpXoJx8TVXxPR0=
+github.com/segmentio/asm v1.2.1/go.mod h1:BqMnlJP91P8d+4ibuonYZw9mfnzI9HfxselHZr5aAcs=
+github.com/shopspring/decimal v1.2.0/go.mod h1:DKyhrW/HYNuLGql+MJL6WCR6knT2jwCFRcu2hWCYk4o=
+github.com/shopspring/decimal v1.4.0 h1:bxl37RwXBklmTi0C79JfXCEBD1cqqHt0bbgBAGFp81k=
+github.com/shopspring/decimal v1.4.0/go.mod h1:gawqmDU56v4yIKSwfBSFip1HdCCXN8/+DMd9qYNcwME=
+github.com/spf13/cast v1.3.1/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
+github.com/spf13/cast v1.4.1 h1:s0hze+J0196ZfEMTs80N7UlFt0BDuQ7Q+JDnHiMWKdA=
+github.com/spf13/cast v1.4.1/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
+github.com/spf13/cobra v1.10.1 h1:lJeBwCfmrnXthfAupyUTzJ/J4Nc1RsHC/mSRU2dll/s=
+github.com/spf13/cobra v1.10.1/go.mod h1:7SmJGaTHFVBY0jW4NXGluQoLvhqFQM+6XSKD+P4XaB0=
+github.com/spf13/pflag v1.0.9/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
+github.com/spf13/pflag v1.0.10 h1:4EBh2KAYBwaONj6b2Ye1GiHfwjqyROoF4RwYO+vPwFk=
+github.com/spf13/pflag v1.0.10/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
+github.com/sqlc-dev/sqlc v1.29.0 h1:HQctoD7y/i29Bao53qXO7CZ/BV9NcvpGpsJWvz9nKWs=
+github.com/sqlc-dev/sqlc v1.29.0/go.mod h1:BavmYw11px5AdPOjAVHmb9fctP5A8GTziC38wBF9tp0=
+github.com/stoewer/go-strcase v1.2.0 h1:Z2iHWqGXH00XYgqDmNgQbIBxf3wrNq0F3feEy0ainaU=
+github.com/stoewer/go-strcase v1.2.0/go.mod h1:IBiWB2sKIp3wVVQ3Y035++gc+knqhUQag1KpM8ahLw8=
 github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
-github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
-github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
-github.com/stretchr/objx v0.5.1 h1:4VhoImhV/Bm0ToFkXFi8hXNXwpDRZ/ynw3amt82mzq0=
-github.com/stretchr/objx v0.5.1/go.mod h1:/iHQpkQwBD6DLUmQ4pE+s1TXdob1mORJ4/UFdrifcy0=
+github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
+github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
+github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
+github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA=
 github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
 github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
-github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
-github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
-github.com/stretchr/testify v1.8.2/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
-github.com/stretchr/testify v1.8.4 h1:CcVxjf3Q8PM0mHUKJCdn+eZZtm5yQwehR5yeSVQQcUk=
-github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
+github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
+github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
+github.com/tetratelabs/wazero v1.9.0 h1:IcZ56OuxrtaEz8UYNRHBrUa9bYeX9oVY93KspZZBf/I=
+github.com/tetratelabs/wazero v1.9.0/go.mod h1:TSbcXCfFP0L2FGkRPxHphadXPjo1T6W+CseNNY7EkjM=
 github.com/tidwall/pretty v1.0.0/go.mod h1:XNkn88O1ChpSDQmQeStsy+sBenx6DDtFZJxhVysOjyk=
 github.com/valyala/bytebufferpool v1.0.0 h1:GqA5TC/0021Y/b9FG4Oi9Mr3q7XYx6KllzawFIhcdPw=
 github.com/valyala/bytebufferpool v1.0.0/go.mod h1:6bBcMArwyJ5K/AmCkWv1jt77kVWyCJ6HpOuEn7z0Csc=
-github.com/valyala/fasttemplate v1.2.1/go.mod h1:KHLXt3tVN2HBp8eijSv/kGJopbvo7S+qRAEEKiv+SiQ=
 github.com/valyala/fasttemplate v1.2.2 h1:lxLXG0uE3Qnshl9QyaK6XJxMXlQZELvChBOCmQD0Loo=
 github.com/valyala/fasttemplate v1.2.2/go.mod h1:KHLXt3tVN2HBp8eijSv/kGJopbvo7S+qRAEEKiv+SiQ=
+github.com/wasilibs/go-pgquery v0.0.0-20250409022910-10ac41983c07 h1:mJdDDPblDfPe7z7go8Dvv1AJQDI3eQ/5xith3q2mFlo=
+github.com/wasilibs/go-pgquery v0.0.0-20250409022910-10ac41983c07/go.mod h1:Ak17IJ037caFp4jpCw/iQQ7/W74Sqpb1YuKJU6HTKfM=
+github.com/wasilibs/wazero-helpers v0.0.0-20240620070341-3dff1577cd52 h1:OvLBa8SqJnZ6P+mjlzc2K7PM22rRUPE1x32G9DTPrC4=
+github.com/wasilibs/wazero-helpers v0.0.0-20240620070341-3dff1577cd52/go.mod h1:jMeV4Vpbi8osrE/pKUxRZkVaA0EX7NZN0A9/oRzgpgY=
 github.com/xdg-go/pbkdf2 v1.0.0/go.mod h1:jrpuAogTd400dnrH08LKmI/xc1MbPOebTwRqcT5RDeI=
 github.com/xdg-go/scram v1.1.1/go.mod h1:RaEWvsqvNKKvBPvcKeFjrG2cJqOkHTiyTpzz23ni57g=
 github.com/xdg-go/stringprep v1.0.3/go.mod h1:W3f5j4i+9rC0kuIEJL0ky1VpHXQU3ocBgklLGvcBnW8=
+github.com/xyproto/randomstring v1.0.5 h1:YtlWPoRdgMu3NZtP45drfy1GKoojuR7hmRcnhZqKjWU=
+github.com/xyproto/randomstring v1.0.5/go.mod h1:rgmS5DeNXLivK7YprL0pY+lTuhNQW3iGxZ18UQApw/E=
 github.com/youmark/pkcs8 v0.0.0-20181117223130-1be2e3e5546d/go.mod h1:rHwXgn7JulP+udvsHwJoVG1YGAP6VLg4y9I5dyZdqmA=
 github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
 github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
 go.mongodb.org/mongo-driver v1.11.4/go.mod h1:PTSz5yu21bkT/wXpkS7WR5f0ddqw5quethTUn9WM+2g=
-go.ntppool.org/common v0.2.4 h1:OqKR1OHYayv6AsERAR8RYKdOEigJqXBpqkGWlaGF3+Q=
-go.ntppool.org/common v0.2.4/go.mod h1:kYshXIaeI13tj6CSW56KHkcwp0lJbM8bFCe3tm3BZEQ=
-go.ntppool.org/common v0.2.5-0.20231112235121-2bff6d8ef307 h1:bJPpvb3aP3sIdO/ptxH9Jqhksk0+c5qQBSa/xHLhscc=
-go.ntppool.org/common v0.2.5-0.20231112235121-2bff6d8ef307/go.mod h1:kYshXIaeI13tj6CSW56KHkcwp0lJbM8bFCe3tm3BZEQ=
-go.opentelemetry.io/contrib/instrumentation/github.com/labstack/echo/otelecho v0.45.0 h1:JJCIHAxGCB5HM3NxeIwFjHc087Xwk96TG9kaZU6TAec=
-go.opentelemetry.io/contrib/instrumentation/github.com/labstack/echo/otelecho v0.45.0/go.mod h1:Px9kH7SJ+NhsgWRtD/eMcs15Tyt4uL3rM7X54qv6pfA=
-go.opentelemetry.io/contrib/instrumentation/github.com/labstack/echo/otelecho v0.46.0 h1:gYRavGiL75s1rM1MYzWbq6ptn02tnUt4t3HtDEaeVhE=
-go.opentelemetry.io/contrib/instrumentation/github.com/labstack/echo/otelecho v0.46.0/go.mod h1:XnbQgDkVMrAxz8+hbgLtSiYVO6HTo2D1z9SDlY1gU/Y=
-go.opentelemetry.io/contrib/instrumentation/github.com/labstack/echo/otelecho v0.46.1 h1:yJWyqeE+8jdOJpt+ZFn7sX05EJAK/9C4jjNZyb61xZg=
-go.opentelemetry.io/contrib/instrumentation/github.com/labstack/echo/otelecho v0.46.1/go.mod h1:tlgpIvi6LCv4QIZQyBc8Gkr6HDxbJLTh9eQPNZAaljE=
-go.opentelemetry.io/contrib/propagators/b3 v1.20.0 h1:Yty9Vs4F3D6/liF1o6FNt0PvN85h/BJJ6DQKJ3nrcM0=
-go.opentelemetry.io/contrib/propagators/b3 v1.20.0/go.mod h1:On4VgbkqYL18kbJlWsa18+cMNe6rYpBnPi1ARI/BrsU=
-go.opentelemetry.io/contrib/propagators/b3 v1.21.0 h1:uGdgDPNzwQWRwCXJgw/7h29JaRqcq9B87Iv4hJDKAZw=
-go.opentelemetry.io/otel v1.19.0 h1:MuS/TNf4/j4IXsZuJegVzI1cwut7Qc00344rgH7p8bs=
-go.opentelemetry.io/otel v1.19.0/go.mod h1:i0QyjOq3UPoTzff0PJB2N66fb4S0+rSbSB15/oyH9fY=
-go.opentelemetry.io/otel v1.20.0 h1:vsb/ggIY+hUjD/zCAQHpzTmndPqv/ml2ArbsbfBYTAc=
-go.opentelemetry.io/otel v1.20.0/go.mod h1:oUIGj3D77RwJdM6PPZImDpSZGDvkD9fhesHny69JFrs=
-go.opentelemetry.io/otel v1.21.0 h1:hzLeKBZEL7Okw2mGzZ0cc4k/A7Fta0uoPgaJCr8fsFc=
-go.opentelemetry.io/otel v1.21.0/go.mod h1:QZzNPQPm1zLX4gZK4cMi+71eaorMSGT3A4znnUvNNEo=
-go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.19.0 h1:Mne5On7VWdx7omSrSSZvM4Kw7cS7NQkOOmLcgscI51U=
-go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.19.0/go.mod h1:IPtUMKL4O3tH5y+iXVyAXqpAwMuzC1IrxVS81rummfE=
-go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.20.0 h1:DeFD0VgTZ+Cj6hxravYYZE2W4GlneVH81iAOPjZkzk8=
-go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.20.0/go.mod h1:GijYcYmNpX1KazD5JmWGsi4P7dDTTTnfv1UbGn84MnU=
-go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.21.0 h1:cl5P5/GIfFh4t6xyruOgJP5QiA1pw4fYYdv6nc6CBWw=
-go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.21.0/go.mod h1:zgBdWWAu7oEEMC06MMKc5NLbA/1YDXV1sMpSqEeLQLg=
-go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.19.0 h1:IeMeyr1aBvBiPVYihXIaeIZba6b8E1bYp7lbdxK8CQg=
-go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.19.0/go.mod h1:oVdCUtjq9MK9BlS7TtucsQwUcXcymNiEDjgDD2jMtZU=
-go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.20.0 h1:CsBiKCiQPdSjS+MlRiqeTI9JDDpSuk0Hb6QTRfwer8k=
-go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.20.0/go.mod h1:CMJYNAfooOwSZSAmAeMUV1M+TXld3BiK++z9fqIm2xk=
-go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.21.0 h1:digkEZCJWobwBqMwC0cwCq8/wkkRy/OowZg5OArWZrM=
-go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.21.0/go.mod h1:/OpE/y70qVkndM0TrxT4KBoN3RsFZP0QaofcfYrj76I=
-go.opentelemetry.io/otel/metric v1.19.0 h1:aTzpGtV0ar9wlV4Sna9sdJyII5jTVJEvKETPiOKwvpE=
-go.opentelemetry.io/otel/metric v1.19.0/go.mod h1:L5rUsV9kM1IxCj1MmSdS+JQAcVm319EUrDVLrt7jqt8=
-go.opentelemetry.io/otel/metric v1.20.0 h1:ZlrO8Hu9+GAhnepmRGhSU7/VkpjrNowxRN9GyKR4wzA=
-go.opentelemetry.io/otel/metric v1.20.0/go.mod h1:90DRw3nfK4D7Sm/75yQ00gTJxtkBxX+wu6YaNymbpVM=
-go.opentelemetry.io/otel/metric v1.21.0 h1:tlYWfeo+Bocx5kLEloTjbcDwBuELRrIFxwdQ36PlJu4=
-go.opentelemetry.io/otel/metric v1.21.0/go.mod h1:o1p3CA8nNHW8j5yuQLdc1eeqEaPfzug24uvsyIEJRWM=
-go.opentelemetry.io/otel/sdk v1.19.0 h1:6USY6zH+L8uMH8L3t1enZPR3WFEmSTADlqldyHtJi3o=
+go.ntppool.org/api v0.3.4 h1:KeRyFhIRkjJwZif7hkpqEDEBmukyYGiOi2Fd6j3UzQ0=
+go.ntppool.org/api v0.3.4/go.mod h1:LFLAwnrc/JyjzKnjgf8tCOJhps6oFIjuledS3PCx7xc=
+go.ntppool.org/common v0.5.2 h1:Ijlezhiqqs7TJYZTWwEwultLFxhNaXsh6DkaO53m/F4=
+go.ntppool.org/common v0.5.2/go.mod h1:e5ohROK9LdZZTI1neNiSlmgmWC23F779qzLvSi4JzyI=
+go.opentelemetry.io/auto/sdk v1.2.1 h1:jXsnJ4Lmnqd11kwkBV2LgLoFMZKizbCi5fNZ/ipaZ64=
+go.opentelemetry.io/auto/sdk v1.2.1/go.mod h1:KRTj+aOaElaLi+wW1kO/DZRXwkF4C5xPbEe3ZiIhN7Y=
+go.opentelemetry.io/contrib/bridges/otelslog v0.13.0 h1:bwnLpizECbPr1RrQ27waeY2SPIPeccCx/xLuoYADZ9s=
+go.opentelemetry.io/contrib/bridges/otelslog v0.13.0/go.mod h1:3nWlOiiqA9UtUnrcNk82mYasNxD8ehOspL0gOfEo6Y4=
+go.opentelemetry.io/contrib/bridges/prometheus v0.63.0 h1:/Rij/t18Y7rUayNg7Id6rPrEnHgorxYabm2E6wUdPP4=
+go.opentelemetry.io/contrib/bridges/prometheus v0.63.0/go.mod h1:AdyDPn6pkbkt2w01n3BubRVk7xAsCRq1Yg1mpfyA/0E=
+go.opentelemetry.io/contrib/exporters/autoexport v0.63.0 h1:NLnZybb9KkfMXPwZhd5diBYJoVxiO9Qa06dacEA7ySY=
+go.opentelemetry.io/contrib/exporters/autoexport v0.63.0/go.mod h1:OvRg7gm5WRSCtxzGSsrFHbDLToYlStHNZQ+iPNIyD6g=
+go.opentelemetry.io/contrib/instrumentation/github.com/labstack/echo/otelecho v0.63.0 h1:6YeICKmGrvgJ5th4+OMNpcuoB6q/Xs8gt0YCO7MUv1k=
+go.opentelemetry.io/contrib/instrumentation/github.com/labstack/echo/otelecho v0.63.0/go.mod h1:ZEA7j2B35siNV0T00aapacNzjz4tvOlNoHp0ncCfwNQ=
+go.opentelemetry.io/contrib/instrumentation/net/http/httptrace/otelhttptrace v0.63.0 h1:2pn7OzMewmYRiNtv1doZnLo3gONcnMHlFnmOR8Vgt+8=
+go.opentelemetry.io/contrib/instrumentation/net/http/httptrace/otelhttptrace v0.63.0/go.mod h1:rjbQTDEPQymPE0YnRQp9/NuPwwtL0sesz/fnqRW/v84=
+go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.63.0 h1:RbKq8BG0FI8OiXhBfcRtqqHcZcka+gU3cskNuf05R18=
+go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.63.0/go.mod h1:h06DGIukJOevXaj/xrNjhi/2098RZzcLTbc0jDAUbsg=
+go.opentelemetry.io/contrib/propagators/b3 v1.38.0 h1:uHsCCOSKl0kLrV2dLkFK+8Ywk9iKa/fptkytc6aFFEo=
+go.opentelemetry.io/contrib/propagators/b3 v1.38.0/go.mod h1:wMRSZJZcY8ya9mApLLhwIMjqmApy2o/Ml+62lhvxyHU=
+go.opentelemetry.io/otel v1.38.0 h1:RkfdswUDRimDg0m2Az18RKOsnI8UDzppJAtj01/Ymk8=
+go.opentelemetry.io/otel v1.38.0/go.mod h1:zcmtmQ1+YmQM9wrNsTGV/q/uyusom3P8RxwExxkZhjM=
+go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploggrpc v0.14.0 h1:OMqPldHt79PqWKOMYIAQs3CxAi7RLgPxwfFSwr4ZxtM=
+go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploggrpc v0.14.0/go.mod h1:1biG4qiqTxKiUCtoWDPpL3fB3KxVwCiGw81j3nKMuHE=
+go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploghttp v0.14.0 h1:QQqYw3lkrzwVsoEX0w//EhH/TCnpRdEenKBOOEIMjWc=
+go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploghttp v0.14.0/go.mod h1:gSVQcr17jk2ig4jqJ2DX30IdWH251JcNAecvrqTxH1s=
+go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v1.38.0 h1:vl9obrcoWVKp/lwl8tRE33853I8Xru9HFbw/skNeLs8=
+go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v1.38.0/go.mod h1:GAXRxmLJcVM3u22IjTg74zWBrRCKq8BnOqUVLodpcpw=
+go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.38.0 h1:Oe2z/BCg5q7k4iXC3cqJxKYg0ieRiOqF0cecFYdPTwk=
+go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.38.0/go.mod h1:ZQM5lAJpOsKnYagGg/zV2krVqTtaVdYdDkhMoX6Oalg=
+go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.38.0 h1:GqRJVj7UmLjCVyVJ3ZFLdPRmhDUp2zFmQe3RHIOsw24=
+go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.38.0/go.mod h1:ri3aaHSmCTVYu2AWv44YMauwAQc0aqI9gHKIcSbI1pU=
+go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.38.0 h1:lwI4Dc5leUqENgGuQImwLo4WnuXFPetmPpkLi2IrX54=
+go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.38.0/go.mod h1:Kz/oCE7z5wuyhPxsXDuaPteSWqjSBD5YaSdbxZYGbGk=
+go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.38.0 h1:aTL7F04bJHUlztTsNGJ2l+6he8c+y/b//eR0jjjemT4=
+go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.38.0/go.mod h1:kldtb7jDTeol0l3ewcmd8SDvx3EmIE7lyvqbasU3QC4=
+go.opentelemetry.io/otel/exporters/prometheus v0.60.0 h1:cGtQxGvZbnrWdC2GyjZi0PDKVSLWP/Jocix3QWfXtbo=
+go.opentelemetry.io/otel/exporters/prometheus v0.60.0/go.mod h1:hkd1EekxNo69PTV4OWFGZcKQiIqg0RfuWExcPKFvepk=
go.opentelemetry.io/otel/sdk v1.19.0/go.mod h1:NedEbbS4w3C6zElbLdPJKOpJQOrGUJ+GfzpjUvI0v1A= go.opentelemetry.io/otel/exporters/stdout/stdoutlog v0.14.0 h1:B/g+qde6Mkzxbry5ZZag0l7QrQBCtVm7lVjaLgmpje8=
go.opentelemetry.io/otel/sdk v1.20.0 h1:5Jf6imeFZlZtKv9Qbo6qt2ZkmWtdWx/wzcCbNUlAWGM= go.opentelemetry.io/otel/exporters/stdout/stdoutlog v0.14.0/go.mod h1:mOJK8eMmgW6ocDJn6Bn11CcZ05gi3P8GylBXEkZtbgA=
go.opentelemetry.io/otel/sdk v1.20.0/go.mod h1:rmkSx1cZCm/tn16iWDn1GQbLtsW/LvsdEEFzCSRM6V0= go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.38.0 h1:wm/Q0GAAykXv83wzcKzGGqAnnfLFyFe7RslekZuv+VI=
go.opentelemetry.io/otel/sdk v1.21.0 h1:FTt8qirL1EysG6sTQRZ5TokkU8d0ugCj8htOgThZXQ8= go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.38.0/go.mod h1:ra3Pa40+oKjvYh+ZD3EdxFZZB0xdMfuileHAm4nNN7w=
go.opentelemetry.io/otel/sdk v1.21.0/go.mod h1:Nna6Yv7PWTdgJHVRD9hIYywQBRx7pbox6nwBnZIxl/E= go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.38.0 h1:kJxSDN4SgWWTjG/hPp3O7LCGLcHXFlvS2/FFOrwL+SE=
go.opentelemetry.io/otel/trace v1.19.0 h1:DFVQmlVbfVeOuBRrwdtaehRrWiL1JoVs9CPIQ1Dzxpg= go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.38.0/go.mod h1:mgIOzS7iZeKJdeB8/NYHrJ48fdGc71Llo5bJ1J4DWUE=
go.opentelemetry.io/otel/trace v1.19.0/go.mod h1:mfaSyvGyEJEI0nyV2I4qhNQnbBOUUmYZpYojqMnX2vo= go.opentelemetry.io/otel/log v0.14.0 h1:2rzJ+pOAZ8qmZ3DDHg73NEKzSZkhkGIua9gXtxNGgrM=
go.opentelemetry.io/otel/trace v1.20.0 h1:+yxVAPZPbQhbC3OfAkeIVTky6iTFpcr4SiY9om7mXSQ= go.opentelemetry.io/otel/log v0.14.0/go.mod h1:5jRG92fEAgx0SU/vFPxmJvhIuDU9E1SUnEQrMlJpOno=
go.opentelemetry.io/otel/trace v1.20.0/go.mod h1:HJSK7F/hA5RlzpZ0zKDCHCDHm556LCDtKaAo6JmBFUU= go.opentelemetry.io/otel/metric v1.38.0 h1:Kl6lzIYGAh5M159u9NgiRkmoMKjvbsKtYRwgfrA6WpA=
go.opentelemetry.io/otel/trace v1.21.0 h1:WD9i5gzvoUPuXIXH24ZNBudiarZDKuekPqi/E8fpfLc= go.opentelemetry.io/otel/metric v1.38.0/go.mod h1:kB5n/QoRM8YwmUahxvI3bO34eVtQf2i4utNVLr9gEmI=
go.opentelemetry.io/otel/trace v1.21.0/go.mod h1:LGbsEB0f9LGjN+OZaQQ26sohbOmiMR+BaslueVtS/qQ= go.opentelemetry.io/otel/sdk v1.38.0 h1:l48sr5YbNf2hpCUj/FoGhW9yDkl+Ma+LrVl8qaM5b+E=
go.opentelemetry.io/proto/otlp v1.0.0 h1:T0TX0tmXU8a3CbNXzEKGeU5mIVOdf0oykP+u2lIVU/I= go.opentelemetry.io/otel/sdk v1.38.0/go.mod h1:ghmNdGlVemJI3+ZB5iDEuk4bWA3GkTpW+DOoZMYBVVg=
go.opentelemetry.io/proto/otlp v1.0.0/go.mod h1:Sy6pihPLfYHkr3NkUbEhGHFhINUSI/v80hjKIs5JXpM= go.opentelemetry.io/otel/sdk/log v0.14.0 h1:JU/U3O7N6fsAXj0+CXz21Czg532dW2V4gG1HE/e8Zrg=
go.opentelemetry.io/otel/sdk/log v0.14.0/go.mod h1:imQvII+0ZylXfKU7/wtOND8Hn4OpT3YUoIgqJVksUkM=
go.opentelemetry.io/otel/sdk/log/logtest v0.14.0 h1:Ijbtz+JKXl8T2MngiwqBlPaHqc4YCaP/i13Qrow6gAM=
go.opentelemetry.io/otel/sdk/log/logtest v0.14.0/go.mod h1:dCU8aEL6q+L9cYTqcVOk8rM9Tp8WdnHOPLiBgp0SGOA=
go.opentelemetry.io/otel/sdk/metric v1.38.0 h1:aSH66iL0aZqo//xXzQLYozmWrXxyFkBJ6qT5wthqPoM=
go.opentelemetry.io/otel/sdk/metric v1.38.0/go.mod h1:dg9PBnW9XdQ1Hd6ZnRz689CbtrUp0wMMs9iPcgT9EZA=
go.opentelemetry.io/otel/trace v1.38.0 h1:Fxk5bKrDZJUH+AMyyIXGcFAPah0oRcT+LuNtJrmcNLE=
go.opentelemetry.io/otel/trace v1.38.0/go.mod h1:j1P9ivuFsTceSWe1oY+EeW3sc+Pp42sO++GHkg4wwhs=
go.opentelemetry.io/proto/otlp v1.8.0 h1:fRAZQDcAFHySxpJ1TwlA1cJ4tvcrw7nXl9xWWC8N5CE=
go.opentelemetry.io/proto/otlp v1.8.0/go.mod h1:tIeYOeNBU4cvmPqpaji1P+KbB4Oloai8wN4rWzRrFF0=
go.uber.org/atomic v1.6.0/go.mod h1:sABNBOSYdrvTF6hTgEIbc7YasKWGhgEQZyfxyTvoXHQ=
go.uber.org/atomic v1.7.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc=
go.uber.org/atomic v1.9.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc=
go.uber.org/atomic v1.11.0 h1:ZvwS0R+56ePWxUNi+Atn9dWONBPp/AUETXlHW0DxSjE=
go.uber.org/atomic v1.11.0/go.mod h1:LUxbIzbOniOlMKjJjyPfpl4v+PKK2cNJn91OQbhoJI0=
go.uber.org/goleak v1.1.10/go.mod h1:8a7PlsEVH3e/a/GLqe5IIrQx6GzcnRmZEufDUTk4A7A=
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
go.uber.org/multierr v1.6.0/go.mod h1:cdWPpRnG4AhwMwsgIHip0KRBQjJy5kYEpYjJxpXp9iU=
go.uber.org/multierr v1.7.0/go.mod h1:7EAYxJLBy9rStEaz58O2t4Uvip6FSURkq8/ppBp95ak=
go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0=
go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y=
go.uber.org/zap v1.19.0/go.mod h1:xg/QME4nWcxGxrpdeYfq7UvYrLh66cuVKdrbD1XF/NI=
go.uber.org/zap v1.27.0 h1:aJMhYGrd5QSmlpLMr2MftRKl7t8J8PTZPA732ud/XR8=
go.uber.org/zap v1.27.0/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E=
go.yaml.in/yaml/v2 v2.4.3 h1:6gvOSjQoTB3vt1l+CU+tSyi/HOjfOjRLJ4YwYZGwRO0=
go.yaml.in/yaml/v2 v2.4.3/go.mod h1:zSxWcmIDjOzPXpjlTTbAsKokqkDNAVtZO0WOMiT90s8=
go.yaml.in/yaml/v3 v3.0.4 h1:tfq32ie2Jv2UxXFdLJdh3jXuOzWiL1fo0bu/FbuKpbc=
go.yaml.in/yaml/v3 v3.0.4/go.mod h1:DhzuOOF2ATzADvBadXxruRBLzYTpT36CKvDb3+aBEFg=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200414173820-0848c9571904/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20220622213112-05595931fe9d/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/crypto v0.14.0 h1:wBqGXzWJW6m1XrIKlAH0Hs1JJ7+9KBwnIO8v66Q9cHc=
golang.org/x/crypto v0.14.0/go.mod h1:MVFd36DqK4CsrnJYDkBA3VC4m2GkXAM0PvzMCn4JQf4=
golang.org/x/crypto v0.15.0 h1:frVn1TEaCEaZcn3Tmd7Y2b5KKPaZ+I32Q2OA3kYp5TA=
golang.org/x/crypto v0.15.0/go.mod h1:4ChreQoLWfG3xLDer1WdlH5NdlQ3+mwnQq1YTKY+72g=
golang.org/x/crypto v0.42.0 h1:chiH31gIWm57EkTXpwnqf8qeuMUi0yekh6mT2AvFlqI=
golang.org/x/crypto v0.42.0/go.mod h1:4+rDnOTJhQCx2q7/j6rAN5XDw8kPjeaXEUR2eL94ix8=
golang.org/x/exp v0.0.0-20231006140011-7918f672742d h1:jtJma62tbqLibJ5sFQz8bKtEM8rJBtfilJ2qTU199MI=
golang.org/x/exp v0.0.0-20231006140011-7918f672742d/go.mod h1:ldy0pHrwJyGW56pPQzzkH36rKxoZW1tw7ZJpeKx+hdo=
golang.org/x/exp v0.0.0-20231110203233-9a3e6036ecaa h1:FRnLl4eNAQl8hwxVVC17teOw8kdjVDVAiFMtgUdTSRQ=
golang.org/x/exp v0.0.0-20231110203233-9a3e6036ecaa/go.mod h1:zk2irFbV9DP96SEBUUAy67IdHUaZuSnrz1n472HUCLE=
golang.org/x/exp v0.0.0-20250305212735-054e65f0b394 h1:nDVHiLt8aIbd/VzvPWN6kSOPE7+F/fNFDSXLVYkE/Iw=
golang.org/x/exp v0.0.0-20250305212735-054e65f0b394/go.mod h1:sIifuuw/Yco/y6yb6+bDNfyeQ/MdPUy/hKEMYQV17cM=
golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.13.0 h1:I/DsJXRlw/8l/0c24sM9yb0T4z9liZTduXvdAWYiysY=
golang.org/x/mod v0.13.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
golang.org/x/mod v0.14.0 h1:dGoOF9QVLYng8IHTm7BAyWqCqSheQ5pYWGhzW00YJr0=
golang.org/x/mod v0.14.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
golang.org/x/mod v0.28.0 h1:gQBtGhjxykdjY9YhZpSlZIsbnaE2+PgjfLWUQTnoZ1U=
golang.org/x/mod v0.28.0/go.mod h1:yfB/L0NOf/kmEbXjzCPOx1iK1fRutOydrCMsqRhEBxI=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.17.0 h1:pVaXccu2ozPjCXewfr1S7xza/zcXTity9cCdXQYSjIM=
golang.org/x/net v0.17.0/go.mod h1:NxSsAGuq816PNPmqtQdLE42eU2Fs7NoRIZrHJAlaCOE=
golang.org/x/net v0.18.0 h1:mIYleuAkSbHh0tCv7RvjL3F6ZVbLjq4+R7zbOn3Kokg=
golang.org/x/net v0.18.0/go.mod h1:/czyP5RqHAH4odGYxBJ1qz0+CE5WZ+2j1YgoEo8F2jQ=
golang.org/x/net v0.44.0 h1:evd8IRDyfNBMBTTY5XRF1vaZlD+EmWx6x8PkhR04H/I=
golang.org/x/net v0.44.0/go.mod h1:ECOoLqd5U3Lhyeyo/QDCEVQ4sNgYsqvCZ722XogGieY=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.4.0 h1:zxkM55ReGkDlKSM+Fu41A+zmbZuaPVbGMzvvdUPznYQ=
golang.org/x/sync v0.4.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y=
golang.org/x/sync v0.5.0 h1:60k92dhOjHxJkrqnwsfl8KuaHbn/5dl0lUPUklKo3qE=
golang.org/x/sync v0.5.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.17.0 h1:l60nONMj9l5drqw6jlhIELNv9I0A4OFgRsG9k2oT9Ug=
golang.org/x/sync v0.17.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210927094055-39ccf1dd6fa6/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211103235746-7861aae1554b/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.13.0 h1:Af8nKPmuFypiUBjVoU9V20FiaFXOcuZI21p0ycVYYGE=
golang.org/x/sys v0.13.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.14.0 h1:Vz7Qs629MkJkGyHxUlRHizWJRG2j8fbQKjELVSNhy7Q=
golang.org/x/sys v0.14.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.36.0 h1:KVRy2GtZBrk1cBYA7MKu5bEZFxQk4NIDV6RLVcC8o0k=
golang.org/x/sys v0.36.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.13.0 h1:ablQoSUd0tRdKxZewP80B+BaqeKJuVhuRxj/dkrun3k=
golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
golang.org/x/text v0.14.0 h1:ScX5w1eTa3QqT8oi6+ziP7dTV1S2+ALU0bI+0zXKWiQ=
golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/text v0.29.0 h1:1neNs90w9YzJ9BocxfsQNHKuAT4pkghyXc4nhZ6sJvk=
golang.org/x/text v0.29.0/go.mod h1:7MhJOA9CD2qZyOKYazxdYMF85OwPdEr9jTtBpO7ydH4=
golang.org/x/time v0.3.0 h1:rg5rLMjNzMS1RkNLzCG38eapWhnYLFYXDXj2gOlr8j4=
golang.org/x/time v0.3.0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.4.0 h1:Z81tqI5ddIoXDPvVQ7/7CC9TnLM7ubaFG2qXYd5BbYY=
golang.org/x/time v0.4.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
golang.org/x/time v0.13.0 h1:eUlYslOIt32DgYD6utsuUeHs4d7AsEYLuIAdg7FlYgI=
golang.org/x/time v0.13.0/go.mod h1:eL/Oa2bBBK0TkX57Fyni+NgnyQQN4LitPmob2Hjnqw4=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20191029041327-9cc4af7d6b2c/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191108193012-7d206e10da11/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.36.0 h1:kWS0uv/zsvHEle1LbV5LE8QujrxB3wfQyxHfhOk0Qkg=
golang.org/x/tools v0.36.0/go.mod h1:WBDiHKJK8YgLHlcQPYQzNCkUxUypCaa5ZegCVutKm+s=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
gonum.org/v1/gonum v0.16.0 h1:5+ul4Swaf3ESvrOnidPp4GZbzf0mxVQpDCYUQE7OJfk=
gonum.org/v1/gonum v0.16.0/go.mod h1:fef3am4MQ93R2HHpKnLk4/Tbh/s0+wqD5nfa6Pnwy4E=
google.golang.org/genproto v0.0.0-20231012201019-e917dd12ba7a h1:fwgW9j3vHirt4ObdHoYNwuO24BEZjSzbh+zPaNWoiY8=
google.golang.org/genproto v0.0.0-20231012201019-e917dd12ba7a/go.mod h1:EMfReVxb80Dq1hhioy0sOsY9jCE46YDgHlJ7fWVUWRE=
google.golang.org/genproto/googleapis/api v0.0.0-20231016165738-49dd2c1f3d0b h1:CIC2YMXmIhYw6evmhPxBKJ4fmLbOFtXQN/GV3XOZR8k=
google.golang.org/genproto/googleapis/api v0.0.0-20231016165738-49dd2c1f3d0b/go.mod h1:IBQ646DjkDkvUIsVq/cc03FUFQ9wbZu7yE396YcL870=
google.golang.org/genproto/googleapis/api v0.0.0-20231106174013-bbf56f31fb17 h1:JpwMPBpFN3uKhdaekDpiNlImDdkUAyiJ6ez/uxGaUSo=
google.golang.org/genproto/googleapis/api v0.0.0-20231106174013-bbf56f31fb17/go.mod h1:0xJLfVdJqpAPl8tDg1ujOCGzx6LFLttXT5NhllGOXY4=
google.golang.org/genproto/googleapis/api v0.0.0-20231120223509-83a465c0220f h1:2yNACc1O40tTnrsbk9Cv6oxiW8pxI/pXj0wRtdlYmgY=
google.golang.org/genproto/googleapis/api v0.0.0-20231120223509-83a465c0220f/go.mod h1:Uy9bTZJqmfrw2rIBxgGLnamc78euZULUBrLZ9XTITKI=
google.golang.org/genproto/googleapis/api v0.0.0-20250922171735-9219d122eba9 h1:jm6v6kMRpTYKxBRrDkYAitNJegUeO1Mf3Kt80obv0gg=
google.golang.org/genproto/googleapis/api v0.0.0-20250922171735-9219d122eba9/go.mod h1:LmwNphe5Afor5V3R5BppOULHOnt2mCIf+NxMd4XiygE=
google.golang.org/genproto/googleapis/rpc v0.0.0-20231016165738-49dd2c1f3d0b h1:ZlWIi1wSK56/8hn4QcBp/j9M7Gt3U/3hZw3mC7vDICo=
google.golang.org/genproto/googleapis/rpc v0.0.0-20231016165738-49dd2c1f3d0b/go.mod h1:swOH3j0KzcDDgGUWr+SNpyTen5YrXjS3eyPzFYKc6lc=
google.golang.org/genproto/googleapis/rpc v0.0.0-20231106174013-bbf56f31fb17 h1:Jyp0Hsi0bmHXG6k9eATXoYtjd6e2UzZ1SCn/wIupY14=
google.golang.org/genproto/googleapis/rpc v0.0.0-20231106174013-bbf56f31fb17/go.mod h1:oQ5rr10WTTMvP4A36n8JpR1OrO1BEiV4f78CneXZxkA=
google.golang.org/genproto/googleapis/rpc v0.0.0-20231120223509-83a465c0220f h1:ultW7fxlIvee4HYrtnaRPon9HpEgFk5zYpmfMgtKB5I=
google.golang.org/genproto/googleapis/rpc v0.0.0-20231120223509-83a465c0220f/go.mod h1:L9KNLi232K1/xB6f7AlSX692koaRnKaWSR0stBki0Yc=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250922171735-9219d122eba9 h1:V1jCN2HBa8sySkR5vLcCSqJSTMv093Rw9EJefhQGP7M=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250922171735-9219d122eba9/go.mod h1:HSkG/KdJWusxU1F6CNrwNDjBMgisKxGnc5dAZfT0mjQ=
google.golang.org/grpc v1.59.0 h1:Z5Iec2pjwb+LEOqzpB2MR12/eKFhDPhuqW91O+4bwUk=
google.golang.org/grpc v1.59.0/go.mod h1:aUPDwccQo6OTjy7Hct4AfBPD1GptF4fyUjIkQ9YtF98=
google.golang.org/grpc v1.75.1 h1:/ODCNEuf9VghjgO3rqLcfg8fiOP0nSluljWFlDxELLI=
google.golang.org/grpc v1.75.1/go.mod h1:JtPAzKiq4v1xcAB2hydNlWI2RnF85XXcV0mhKXr2ecQ=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.27.1/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.31.0 h1:g0LDEJHgrBl9N9r17Ru3sqWhkIx2NB67okBHPwC7hs8=
google.golang.org/protobuf v1.31.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
google.golang.org/protobuf v1.36.9 h1:w2gp2mA27hUeUzj9Ex9FBjsBm40zfaDtEWow293U7Iw=
google.golang.org/protobuf v1.36.9/go.mod h1:fuxRtAxBytpl4zzqUh6/eyUujkJdNiuEkXntxiD/uRU=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
gopkg.in/natefinch/lumberjack.v2 v2.0.0/go.mod h1:l0ndWWf7gzL7RNwBG7wST/UCcT4T24xpD6X8LsfU/+k=
gopkg.in/natefinch/lumberjack.v2 v2.2.1 h1:bBRl1b0OH9s/DuPhuXpNl+VtCaJXFZ5/uEFST95x9zc=
gopkg.in/natefinch/lumberjack.v2 v2.2.1/go.mod h1:YD8tP3GAjkrDg1eZH7EGmyESg/lsYskCTPBJVb9jqSc=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
modernc.org/cc/v4 v4.25.2 h1:T2oH7sZdGvTaie0BRNFbIYsabzCxUQg8nLqCdQ2i0ic=
modernc.org/cc/v4 v4.25.2/go.mod h1:uVtb5OGqUKpoLWhqwNQo/8LwvoiEBLvZXIQ/SmO6mL0=
modernc.org/ccgo/v4 v4.25.1 h1:TFSzPrAGmDsdnhT9X2UrcPMI3N/mJ9/X9ykKXwLhDsU=
modernc.org/ccgo/v4 v4.25.1/go.mod h1:njjuAYiPflywOOrm3B7kCB444ONP5pAVr8PIEoE0uDw=
modernc.org/fileutil v1.3.0 h1:gQ5SIzK3H9kdfai/5x41oQiKValumqNTDXMvKo62HvE=
modernc.org/fileutil v1.3.0/go.mod h1:XatxS8fZi3pS8/hKG2GH/ArUogfxjpEKs3Ku3aK4JyQ=
modernc.org/gc/v2 v2.6.5 h1:nyqdV8q46KvTpZlsw66kWqwXRHdjIlJOhG6kxiV/9xI=
modernc.org/gc/v2 v2.6.5/go.mod h1:YgIahr1ypgfe7chRuJi2gD7DBQiKSLMPgBQe9oIiito=
modernc.org/libc v1.62.1 h1:s0+fv5E3FymN8eJVmnk0llBe6rOxCu/DEU+XygRbS8s=
modernc.org/libc v1.62.1/go.mod h1:iXhATfJQLjG3NWy56a6WVU73lWOcdYVxsvwCgoPljuo=
modernc.org/mathutil v1.7.1 h1:GCZVGXdaN8gTqB1Mf/usp1Y/hSqgI2vAGGP4jZMCxOU=
modernc.org/mathutil v1.7.1/go.mod h1:4p5IwJITfppl0G4sUEDtCr4DthTaT47/N3aT6MhfgJg=
modernc.org/memory v1.9.1 h1:V/Z1solwAVmMW1yttq3nDdZPJqV1rM05Ccq6KMSZ34g=
modernc.org/memory v1.9.1/go.mod h1:/JP4VbVC+K5sU2wZi9bHoq2MAkCnrt2r98UGeSK7Mjw=
modernc.org/opt v0.1.4 h1:2kNGMRiUjrp4LcaPuLY2PzUfqM/w9N23quVwhKt5Qm8=
modernc.org/opt v0.1.4/go.mod h1:03fq9lsNfvkYSfxrfUhZCWPk1lm4cq4N+Bh//bEtgns=
modernc.org/sortutil v1.2.1 h1:+xyoGf15mM3NMlPDnFqrteY07klSFxLElE2PVuWIJ7w=
modernc.org/sortutil v1.2.1/go.mod h1:7ZI3a3REbai7gzCLcotuw9AC4VZVpYMjDzETGsSMqJE=
modernc.org/sqlite v1.37.0 h1:s1TMe7T3Q3ovQiK2Ouz4Jwh7dw4ZDqbebSDTlSJdfjI=
modernc.org/sqlite v1.37.0/go.mod h1:5YiWv+YviqGMuGw4V+PNplcyaJ5v+vQd7TQOgkACoJM=
modernc.org/strutil v1.2.1 h1:UneZBkQA+DX2Rp35KcM69cSsNES9ly8mQWD71HKlOA0=
modernc.org/strutil v1.2.1/go.mod h1:EHkiggD70koQxjVdSBM3JKM7k6L0FbGE5eymy9i3B9A=
modernc.org/token v1.1.0 h1:Xl7Ap9dKaEs5kLoOQeQmPWevfnk/DM5qcLcYlA8ys6Y=
modernc.org/token v1.1.0/go.mod h1:UGzOrNV1mAFSEB63lOFHIpNRUVMvYTc6yu1SMY/XTDM=

logscores/history.go Normal file

@@ -0,0 +1,120 @@
package logscores
import (
"context"
"database/sql"
"time"
"go.ntppool.org/common/logger"
"go.ntppool.org/common/tracing"
"go.ntppool.org/data-api/chdb"
"go.ntppool.org/data-api/ntpdb"
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/trace"
)
type LogScoreHistory struct {
LogScores []ntpdb.LogScore
Monitors map[int]string
// MonitorIDs []uint32
}
func GetHistoryClickHouse(ctx context.Context, ch *chdb.ClickHouse, db *sql.DB, serverID, monitorID uint32, since time.Time, count int, fullHistory bool) (*LogScoreHistory, error) {
log := logger.FromContext(ctx)
ctx, span := tracing.Tracer().Start(ctx, "logscores.GetHistoryClickHouse",
trace.WithAttributes(
attribute.Int("server", int(serverID)),
attribute.Int("monitor", int(monitorID)),
attribute.Bool("full_history", fullHistory),
),
)
defer span.End()
log.DebugContext(ctx, "GetHistoryCH", "server", serverID, "monitor", monitorID, "since", since, "count", count, "full_history", fullHistory)
ls, err := ch.Logscores(ctx, int(serverID), int(monitorID), since, count, fullHistory)
if err != nil {
log.ErrorContext(ctx, "clickhouse logscores", "err", err)
return nil, err
}
q := ntpdb.NewWrappedQuerier(ntpdb.New(db))
monitors, err := getMonitorNames(ctx, ls, q)
if err != nil {
return nil, err
}
return &LogScoreHistory{
LogScores: ls,
Monitors: monitors,
}, nil
}
func GetHistoryMySQL(ctx context.Context, db *sql.DB, serverID, monitorID uint32, since time.Time, count int) (*LogScoreHistory, error) {
log := logger.FromContext(ctx)
ctx, span := tracing.Tracer().Start(ctx, "logscores.GetHistoryMySQL")
defer span.End()
span.SetAttributes(
attribute.Int("server", int(serverID)),
attribute.Int("monitor", int(monitorID)),
)
log.Debug("GetHistoryMySQL", "server", serverID, "monitor", monitorID, "since", since, "count", count)
q := ntpdb.NewWrappedQuerier(ntpdb.New(db))
var ls []ntpdb.LogScore
var err error
if monitorID > 0 {
ls, err = q.GetServerLogScoresByMonitorID(ctx, ntpdb.GetServerLogScoresByMonitorIDParams{
ServerID: serverID,
MonitorID: sql.NullInt32{Int32: int32(monitorID), Valid: true},
Limit: int32(count),
})
} else {
ls, err = q.GetServerLogScores(ctx, ntpdb.GetServerLogScoresParams{
ServerID: serverID,
Limit: int32(count),
})
}
if err != nil {
return nil, err
}
monitors, err := getMonitorNames(ctx, ls, q)
if err != nil {
return nil, err
}
return &LogScoreHistory{
LogScores: ls,
Monitors: monitors,
// MonitorIDs: monitorIDs,
}, nil
}
func getMonitorNames(ctx context.Context, ls []ntpdb.LogScore, q ntpdb.QuerierTx) (map[int]string, error) {
monitors := map[int]string{}
monitorIDs := []uint32{}
for _, l := range ls {
if !l.MonitorID.Valid {
continue
}
mID := uint32(l.MonitorID.Int32)
if _, ok := monitors[int(mID)]; !ok {
monitors[int(mID)] = ""
monitorIDs = append(monitorIDs, mID)
}
}
dbmons, err := q.GetMonitorsByID(ctx, monitorIDs)
if err != nil {
return nil, err
}
for _, m := range dbmons {
monitors[int(m.ID)] = m.DisplayName()
}
return monitors, nil
}
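The getMonitorNames helper above follows a common two-pass pattern: collect the distinct monitor IDs referenced by the log scores (first-seen order, with a placeholder entry), then resolve them all to display names in a single lookup. A standalone sketch of the same shape, with the GetMonitorsByID database query replaced by a stubbed lookup function (the stub and the monitor names are illustrative, not from the repository):

```go
package main

import "fmt"

// resolveNames mirrors getMonitorNames: it dedupes IDs in first-seen
// order, then fills in display names from a single batched lookup
// (stubbed here; the real code queries the database via GetMonitorsByID).
func resolveNames(ids []uint32, lookup func([]uint32) map[uint32]string) map[int]string {
	names := map[int]string{}
	order := []uint32{}
	for _, id := range ids {
		if _, ok := names[int(id)]; !ok {
			names[int(id)] = "" // placeholder until the lookup fills it
			order = append(order, id)
		}
	}
	for id, name := range lookup(order) {
		names[int(id)] = name
	}
	return names
}

func main() {
	lookup := func(ids []uint32) map[uint32]string {
		// stand-in for the database query
		all := map[uint32]string{3: "mon-a", 7: "mon-b"}
		out := map[uint32]string{}
		for _, id := range ids {
			out[id] = all[id]
		}
		return out
	}
	fmt.Println(resolveNames([]uint32{3, 7, 3}, lookup))
}
```

Batching the lookup this way keeps it to one query per history request instead of one per log score row.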


@@ -1,105 +0,0 @@
// Code generated by mockery v2.35.4. DO NOT EDIT.
package mocks
import (
context "context"
mock "github.com/stretchr/testify/mock"
ntpdb "go.ntppool.org/data-api/ntpdb"
)
// Querier is an autogenerated mock type for the Querier type
type Querier struct {
mock.Mock
}
// GetServerNetspeed provides a mock function with given fields: ctx, ip
func (_m *Querier) GetServerNetspeed(ctx context.Context, ip string) (uint32, error) {
ret := _m.Called(ctx, ip)
var r0 uint32
var r1 error
if rf, ok := ret.Get(0).(func(context.Context, string) (uint32, error)); ok {
return rf(ctx, ip)
}
if rf, ok := ret.Get(0).(func(context.Context, string) uint32); ok {
r0 = rf(ctx, ip)
} else {
r0 = ret.Get(0).(uint32)
}
if rf, ok := ret.Get(1).(func(context.Context, string) error); ok {
r1 = rf(ctx, ip)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// GetZoneStatsData provides a mock function with given fields: ctx
func (_m *Querier) GetZoneStatsData(ctx context.Context) ([]ntpdb.GetZoneStatsDataRow, error) {
ret := _m.Called(ctx)
var r0 []ntpdb.GetZoneStatsDataRow
var r1 error
if rf, ok := ret.Get(0).(func(context.Context) ([]ntpdb.GetZoneStatsDataRow, error)); ok {
return rf(ctx)
}
if rf, ok := ret.Get(0).(func(context.Context) []ntpdb.GetZoneStatsDataRow); ok {
r0 = rf(ctx)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).([]ntpdb.GetZoneStatsDataRow)
}
}
if rf, ok := ret.Get(1).(func(context.Context) error); ok {
r1 = rf(ctx)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// GetZoneStatsV2 provides a mock function with given fields: ctx, ip
func (_m *Querier) GetZoneStatsV2(ctx context.Context, ip string) ([]ntpdb.GetZoneStatsV2Row, error) {
ret := _m.Called(ctx, ip)
var r0 []ntpdb.GetZoneStatsV2Row
var r1 error
if rf, ok := ret.Get(0).(func(context.Context, string) ([]ntpdb.GetZoneStatsV2Row, error)); ok {
return rf(ctx, ip)
}
if rf, ok := ret.Get(0).(func(context.Context, string) []ntpdb.GetZoneStatsV2Row); ok {
r0 = rf(ctx, ip)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).([]ntpdb.GetZoneStatsV2Row)
}
}
if rf, ok := ret.Get(1).(func(context.Context, string) error); ok {
r1 = rf(ctx, ip)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// NewQuerier creates a new instance of Querier. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
// The first argument is typically a *testing.T value.
func NewQuerier(t interface {
mock.TestingT
Cleanup(func())
}) *Querier {
mock := &Querier{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}
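The (now removed) mock resolves each configured return value with a type assertion: if the stored value is a function matching the method signature, it is invoked with the call's arguments; otherwise it is returned as-is. A minimal, self-contained sketch of that dispatch, for a method shaped like `GetServerNetspeed` (the `resolve` helper and its inputs are illustrative, not part of mockery):

```go
package main

import "fmt"

// resolve mimics the mock's return-value dispatch: each stored entry is
// either a plain canned value or a function recomputing it per call.
func resolve(ret []interface{}, ip string) (uint32, error) {
	var r0 uint32
	if rf, ok := ret[0].(func(string) uint32); ok {
		r0 = rf(ip) // computed from the call's arguments
	} else {
		r0 = ret[0].(uint32) // fixed canned value
	}
	var r1 error
	if rf, ok := ret[1].(func(string) error); ok {
		r1 = rf(ip)
	} else if ret[1] != nil {
		r1 = ret[1].(error)
	}
	return r0, r1
}

func main() {
	n, err := resolve([]interface{}{uint32(384000), nil}, "192.0.2.10")
	fmt.Println(n, err)
}
```

This is why test code can pass either `Return(uint32(384000), nil)` or a function to compute the result dynamically.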

View File

@@ -1,6 +1,6 @@
// Code generated by sqlc. DO NOT EDIT.
// versions:
// sqlc v1.29.0
package ntpdb

View File

@@ -1,14 +1,15 @@
package ntpdb
import (
"context"
"database/sql"
"database/sql/driver"
"fmt"
"os"
"time"
"github.com/go-sql-driver/mysql"
"go.ntppool.org/common/logger"
"gopkg.in/yaml.v3"
)
@@ -22,27 +23,28 @@ type DBConfig struct {
Pass string `default:"" flag:"pass"`
}
func OpenDB(ctx context.Context, configFile string) (*sql.DB, error) {
log := logger.FromContext(ctx)
dbconn := sql.OpenDB(Driver{CreateConnectorFunc: createConnector(ctx, configFile)})
dbconn.SetConnMaxLifetime(time.Minute * 3)
dbconn.SetMaxOpenConns(8)
dbconn.SetMaxIdleConns(3)
err := dbconn.Ping()
if err != nil {
log.DebugContext(ctx, "could not connect to database", "err", err)
return nil, err
}
return dbconn, nil
}
func createConnector(ctx context.Context, configFile string) CreateConnectorFunc {
log := logger.FromContext(ctx)
return func() (driver.Connector, error) {
log.DebugContext(ctx, "opening db config file", "filename", configFile)
dbFile, err := os.Open(configFile)
if err != nil {
@@ -70,11 +72,11 @@ func createConnector(ctx context.Context, configFile string) CreateConnectorFunc
return nil, err
}
if user := cfg.MySQL.User; len(user) > 0 {
dbcfg.User = user
}
if pass := cfg.MySQL.Pass; len(pass) > 0 {
dbcfg.Passwd = pass
}

View File

@@ -21,7 +21,6 @@ func (d Driver) Driver() driver.Driver {
func (d Driver) Connect(ctx context.Context) (driver.Conn, error) {
connector, err := d.CreateConnectorFunc()
if err != nil {
return nil, fmt.Errorf("error creating connector from function: %w", err)
}

View File

@@ -1,14 +1,233 @@
// Code generated by sqlc. DO NOT EDIT.
// versions:
// sqlc v1.29.0
package ntpdb
import (
"database/sql"
"database/sql/driver"
"fmt"
"time"
"go.ntppool.org/common/types"
)
type MonitorsIpVersion string
const (
MonitorsIpVersionV4 MonitorsIpVersion = "v4"
MonitorsIpVersionV6 MonitorsIpVersion = "v6"
)
func (e *MonitorsIpVersion) Scan(src interface{}) error {
switch s := src.(type) {
case []byte:
*e = MonitorsIpVersion(s)
case string:
*e = MonitorsIpVersion(s)
default:
return fmt.Errorf("unsupported scan type for MonitorsIpVersion: %T", src)
}
return nil
}
type NullMonitorsIpVersion struct {
MonitorsIpVersion MonitorsIpVersion `json:"monitors_ip_version"`
Valid bool `json:"valid"` // Valid is true if MonitorsIpVersion is not NULL
}
// Scan implements the Scanner interface.
func (ns *NullMonitorsIpVersion) Scan(value interface{}) error {
if value == nil {
ns.MonitorsIpVersion, ns.Valid = "", false
return nil
}
ns.Valid = true
return ns.MonitorsIpVersion.Scan(value)
}
// Value implements the driver Valuer interface.
func (ns NullMonitorsIpVersion) Value() (driver.Value, error) {
if !ns.Valid {
return nil, nil
}
return string(ns.MonitorsIpVersion), nil
}
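Each generated enum scanner accepts both `[]byte` and `string`, since MySQL drivers can hand an ENUM column back either way depending on whether a prepared statement was used. The same pattern in a self-contained sketch (the `IpVersion` type here is illustrative, standing in for the generated types):

```go
package main

import "fmt"

// IpVersion stands in for generated enum types like MonitorsIpVersion.
type IpVersion string

// Scan converts whatever the driver returns into the enum type.
func (e *IpVersion) Scan(src interface{}) error {
	switch s := src.(type) {
	case []byte: // raw bytes, typical for prepared statements
		*e = IpVersion(s)
	case string: // already-decoded text
		*e = IpVersion(s)
	default:
		return fmt.Errorf("unsupported scan type for IpVersion: %T", src)
	}
	return nil
}

func main() {
	var v IpVersion
	if err := v.Scan([]byte("v6")); err == nil {
		fmt.Println(v) // v6
	}
}
```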
type MonitorsStatus string
const (
MonitorsStatusPending MonitorsStatus = "pending"
MonitorsStatusTesting MonitorsStatus = "testing"
MonitorsStatusActive MonitorsStatus = "active"
MonitorsStatusPaused MonitorsStatus = "paused"
MonitorsStatusDeleted MonitorsStatus = "deleted"
)
func (e *MonitorsStatus) Scan(src interface{}) error {
switch s := src.(type) {
case []byte:
*e = MonitorsStatus(s)
case string:
*e = MonitorsStatus(s)
default:
return fmt.Errorf("unsupported scan type for MonitorsStatus: %T", src)
}
return nil
}
type NullMonitorsStatus struct {
MonitorsStatus MonitorsStatus `json:"monitors_status"`
Valid bool `json:"valid"` // Valid is true if MonitorsStatus is not NULL
}
// Scan implements the Scanner interface.
func (ns *NullMonitorsStatus) Scan(value interface{}) error {
if value == nil {
ns.MonitorsStatus, ns.Valid = "", false
return nil
}
ns.Valid = true
return ns.MonitorsStatus.Scan(value)
}
// Value implements the driver Valuer interface.
func (ns NullMonitorsStatus) Value() (driver.Value, error) {
if !ns.Valid {
return nil, nil
}
return string(ns.MonitorsStatus), nil
}
type MonitorsType string
const (
MonitorsTypeMonitor MonitorsType = "monitor"
MonitorsTypeScore MonitorsType = "score"
)
func (e *MonitorsType) Scan(src interface{}) error {
switch s := src.(type) {
case []byte:
*e = MonitorsType(s)
case string:
*e = MonitorsType(s)
default:
return fmt.Errorf("unsupported scan type for MonitorsType: %T", src)
}
return nil
}
type NullMonitorsType struct {
MonitorsType MonitorsType `json:"monitors_type"`
Valid bool `json:"valid"` // Valid is true if MonitorsType is not NULL
}
// Scan implements the Scanner interface.
func (ns *NullMonitorsType) Scan(value interface{}) error {
if value == nil {
ns.MonitorsType, ns.Valid = "", false
return nil
}
ns.Valid = true
return ns.MonitorsType.Scan(value)
}
// Value implements the driver Valuer interface.
func (ns NullMonitorsType) Value() (driver.Value, error) {
if !ns.Valid {
return nil, nil
}
return string(ns.MonitorsType), nil
}
type ServerScoresStatus string
const (
ServerScoresStatusCandidate ServerScoresStatus = "candidate"
ServerScoresStatusTesting ServerScoresStatus = "testing"
ServerScoresStatusActive ServerScoresStatus = "active"
ServerScoresStatusPaused ServerScoresStatus = "paused"
)
func (e *ServerScoresStatus) Scan(src interface{}) error {
switch s := src.(type) {
case []byte:
*e = ServerScoresStatus(s)
case string:
*e = ServerScoresStatus(s)
default:
return fmt.Errorf("unsupported scan type for ServerScoresStatus: %T", src)
}
return nil
}
type NullServerScoresStatus struct {
ServerScoresStatus ServerScoresStatus `json:"server_scores_status"`
Valid bool `json:"valid"` // Valid is true if ServerScoresStatus is not NULL
}
// Scan implements the Scanner interface.
func (ns *NullServerScoresStatus) Scan(value interface{}) error {
if value == nil {
ns.ServerScoresStatus, ns.Valid = "", false
return nil
}
ns.Valid = true
return ns.ServerScoresStatus.Scan(value)
}
// Value implements the driver Valuer interface.
func (ns NullServerScoresStatus) Value() (driver.Value, error) {
if !ns.Valid {
return nil, nil
}
return string(ns.ServerScoresStatus), nil
}
type ServersIpVersion string
const (
ServersIpVersionV4 ServersIpVersion = "v4"
ServersIpVersionV6 ServersIpVersion = "v6"
)
func (e *ServersIpVersion) Scan(src interface{}) error {
switch s := src.(type) {
case []byte:
*e = ServersIpVersion(s)
case string:
*e = ServersIpVersion(s)
default:
return fmt.Errorf("unsupported scan type for ServersIpVersion: %T", src)
}
return nil
}
type NullServersIpVersion struct {
ServersIpVersion ServersIpVersion `json:"servers_ip_version"`
Valid bool `json:"valid"` // Valid is true if ServersIpVersion is not NULL
}
// Scan implements the Scanner interface.
func (ns *NullServersIpVersion) Scan(value interface{}) error {
if value == nil {
ns.ServersIpVersion, ns.Valid = "", false
return nil
}
ns.Valid = true
return ns.ServersIpVersion.Scan(value)
}
// Value implements the driver Valuer interface.
func (ns NullServersIpVersion) Value() (driver.Value, error) {
if !ns.Valid {
return nil, nil
}
return string(ns.ServersIpVersion), nil
}
type ZoneServerCountsIpVersion string
const (
@@ -50,3 +269,75 @@ func (ns NullZoneServerCountsIpVersion) Value() (driver.Value, error) {
}
return string(ns.ZoneServerCountsIpVersion), nil
}
type LogScore struct {
ID uint64 `db:"id" json:"id"`
MonitorID sql.NullInt32 `db:"monitor_id" json:"monitor_id"`
ServerID uint32 `db:"server_id" json:"server_id"`
Ts time.Time `db:"ts" json:"ts"`
Score float64 `db:"score" json:"score"`
Step float64 `db:"step" json:"step"`
Offset sql.NullFloat64 `db:"offset" json:"offset"`
Rtt sql.NullInt32 `db:"rtt" json:"rtt"`
Attributes types.LogScoreAttributes `db:"attributes" json:"attributes"`
}
type Monitor struct {
ID uint32 `db:"id" json:"id"`
IDToken sql.NullString `db:"id_token" json:"id_token"`
Type MonitorsType `db:"type" json:"type"`
UserID sql.NullInt32 `db:"user_id" json:"user_id"`
AccountID sql.NullInt32 `db:"account_id" json:"account_id"`
Hostname string `db:"hostname" json:"hostname"`
Location string `db:"location" json:"location"`
Ip sql.NullString `db:"ip" json:"ip"`
IpVersion NullMonitorsIpVersion `db:"ip_version" json:"ip_version"`
TlsName sql.NullString `db:"tls_name" json:"tls_name"`
ApiKey sql.NullString `db:"api_key" json:"api_key"`
Status MonitorsStatus `db:"status" json:"status"`
Config string `db:"config" json:"config"`
ClientVersion string `db:"client_version" json:"client_version"`
LastSeen sql.NullTime `db:"last_seen" json:"last_seen"`
LastSubmit sql.NullTime `db:"last_submit" json:"last_submit"`
CreatedOn time.Time `db:"created_on" json:"created_on"`
DeletedOn sql.NullTime `db:"deleted_on" json:"deleted_on"`
IsCurrent sql.NullBool `db:"is_current" json:"is_current"`
}
type Server struct {
ID uint32 `db:"id" json:"id"`
Ip string `db:"ip" json:"ip"`
IpVersion ServersIpVersion `db:"ip_version" json:"ip_version"`
UserID sql.NullInt32 `db:"user_id" json:"user_id"`
AccountID sql.NullInt32 `db:"account_id" json:"account_id"`
Hostname sql.NullString `db:"hostname" json:"hostname"`
Stratum sql.NullInt16 `db:"stratum" json:"stratum"`
InPool uint8 `db:"in_pool" json:"in_pool"`
InServerList uint8 `db:"in_server_list" json:"in_server_list"`
Netspeed uint32 `db:"netspeed" json:"netspeed"`
NetspeedTarget uint32 `db:"netspeed_target" json:"netspeed_target"`
CreatedOn time.Time `db:"created_on" json:"created_on"`
UpdatedOn time.Time `db:"updated_on" json:"updated_on"`
ScoreTs sql.NullTime `db:"score_ts" json:"score_ts"`
ScoreRaw float64 `db:"score_raw" json:"score_raw"`
DeletionOn sql.NullTime `db:"deletion_on" json:"deletion_on"`
Flags string `db:"flags" json:"flags"`
}
type Zone struct {
ID uint32 `db:"id" json:"id"`
Name string `db:"name" json:"name"`
Description sql.NullString `db:"description" json:"description"`
ParentID sql.NullInt32 `db:"parent_id" json:"parent_id"`
Dns bool `db:"dns" json:"dns"`
}
type ZoneServerCount struct {
ID uint32 `db:"id" json:"id"`
ZoneID uint32 `db:"zone_id" json:"zone_id"`
IpVersion ZoneServerCountsIpVersion `db:"ip_version" json:"ip_version"`
Date time.Time `db:"date" json:"date"`
CountActive uint32 `db:"count_active" json:"count_active"`
CountRegistered uint32 `db:"count_registered" json:"count_registered"`
NetspeedActive int `db:"netspeed_active" json:"netspeed_active"`
}
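`NetspeedActive` is a plain `int` rather than `int32`/`uint32` because zone-wide netspeed totals can exceed 2^31-1 (the overflow fixed for `GetZoneStatsData`/`GetZoneStatsV2`). A quick demonstration of the wraparound the wider type avoids:

```go
package main

import "fmt"

func main() {
	// Summing per-server netspeeds: three 1e9-scale values already
	// exceed int32's 2,147,483,647 maximum and wrap negative, while
	// a plain int (64-bit on modern platforms) holds the true total.
	var narrow int32
	var wide int
	for i := 0; i < 3; i++ {
		narrow += 1_000_000_000
		wide += 1_000_000_000
	}
	fmt.Println(narrow, wide) // -1294967296 3000000000
}
```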

ntpdb/monitor.go Normal file
View File

@@ -0,0 +1,23 @@
package ntpdb
import (
"strconv"
"strings"
)
func (m *Monitor) DisplayName() string {
switch {
// case len(m.Hostname) > 0:
// return m.Hostname
case m.TlsName.Valid && len(m.TlsName.String) > 0:
name := m.TlsName.String
if idx := strings.Index(name, "."); idx > 0 {
name = name[0:idx]
}
return name
case len(m.Location) > 0:
return m.Location + " (" + strconv.Itoa(int(m.ID)) + ")" // todo: IDToken instead of ID
default:
return strconv.Itoa(int(m.ID)) // todo: IDToken
}
}
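`DisplayName` shortens a monitor's TLS name to its first DNS label. The truncation branch can be sketched standalone (the example hostname is hypothetical):

```go
package main

import (
	"fmt"
	"strings"
)

// shortName mirrors DisplayName's tls_name branch: keep everything
// before the first dot, falling back to the full name if there is none.
func shortName(tlsName string) string {
	if idx := strings.Index(tlsName, "."); idx > 0 {
		return tlsName[:idx]
	}
	return tlsName
}

func main() {
	fmt.Println(shortName("defra1.mon.example.net")) // defra1
}
```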

View File

@@ -1,20 +1,19 @@
// Code generated by gowrap. DO NOT EDIT.
// template: https://raw.githubusercontent.com/hexdigest/gowrap/6bd1bc023b4d2a619f30020924f258b8ff665a7a/templates/opentelemetry
// gowrap: http://github.com/hexdigest/gowrap
package ntpdb
//go:generate gowrap gen -p go.ntppool.org/data-api/ntpdb -i QuerierTx -t https://raw.githubusercontent.com/hexdigest/gowrap/6c8f05695fec23df85903a8da0af66ac414e2a63/templates/opentelemetry -o otel.go -l ""
import (
"context"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/attribute"
_codes "go.opentelemetry.io/otel/codes"
"go.opentelemetry.io/otel/trace"
)
// QuerierTxWithTracing implements QuerierTx interface instrumented with open telemetry spans
type QuerierTxWithTracing struct {
QuerierTx
_instance string
@@ -46,6 +45,7 @@ func (_d QuerierTxWithTracing) Begin(ctx context.Context) (q1 QuerierTx, err err
"err": err}) "err": err})
} else if err != nil { } else if err != nil {
_span.RecordError(err) _span.RecordError(err)
_span.SetStatus(_codes.Error, err.Error())
_span.SetAttributes( _span.SetAttributes(
attribute.String("event", "error"), attribute.String("event", "error"),
attribute.String("message", err.Error()), attribute.String("message", err.Error()),
@@ -67,6 +67,7 @@ func (_d QuerierTxWithTracing) Commit(ctx context.Context) (err error) {
"err": err}) "err": err})
} else if err != nil { } else if err != nil {
_span.RecordError(err) _span.RecordError(err)
_span.SetStatus(_codes.Error, err.Error())
_span.SetAttributes( _span.SetAttributes(
attribute.String("event", "error"), attribute.String("event", "error"),
attribute.String("message", err.Error()), attribute.String("message", err.Error()),
@@ -78,6 +79,150 @@ func (_d QuerierTxWithTracing) Commit(ctx context.Context) (err error) {
return _d.QuerierTx.Commit(ctx)
}
// GetMonitorByNameAndIPVersion implements QuerierTx
func (_d QuerierTxWithTracing) GetMonitorByNameAndIPVersion(ctx context.Context, arg GetMonitorByNameAndIPVersionParams) (m1 Monitor, err error) {
ctx, _span := otel.Tracer(_d._instance).Start(ctx, "QuerierTx.GetMonitorByNameAndIPVersion")
defer func() {
if _d._spanDecorator != nil {
_d._spanDecorator(_span, map[string]interface{}{
"ctx": ctx,
"arg": arg}, map[string]interface{}{
"m1": m1,
"err": err})
} else if err != nil {
_span.RecordError(err)
_span.SetStatus(_codes.Error, err.Error())
_span.SetAttributes(
attribute.String("event", "error"),
attribute.String("message", err.Error()),
)
}
_span.End()
}()
return _d.QuerierTx.GetMonitorByNameAndIPVersion(ctx, arg)
}
// GetMonitorsByID implements QuerierTx
func (_d QuerierTxWithTracing) GetMonitorsByID(ctx context.Context, monitorids []uint32) (ma1 []Monitor, err error) {
ctx, _span := otel.Tracer(_d._instance).Start(ctx, "QuerierTx.GetMonitorsByID")
defer func() {
if _d._spanDecorator != nil {
_d._spanDecorator(_span, map[string]interface{}{
"ctx": ctx,
"monitorids": monitorids}, map[string]interface{}{
"ma1": ma1,
"err": err})
} else if err != nil {
_span.RecordError(err)
_span.SetStatus(_codes.Error, err.Error())
_span.SetAttributes(
attribute.String("event", "error"),
attribute.String("message", err.Error()),
)
}
_span.End()
}()
return _d.QuerierTx.GetMonitorsByID(ctx, monitorids)
}
// GetServerByID implements QuerierTx
func (_d QuerierTxWithTracing) GetServerByID(ctx context.Context, id uint32) (s1 Server, err error) {
ctx, _span := otel.Tracer(_d._instance).Start(ctx, "QuerierTx.GetServerByID")
defer func() {
if _d._spanDecorator != nil {
_d._spanDecorator(_span, map[string]interface{}{
"ctx": ctx,
"id": id}, map[string]interface{}{
"s1": s1,
"err": err})
} else if err != nil {
_span.RecordError(err)
_span.SetStatus(_codes.Error, err.Error())
_span.SetAttributes(
attribute.String("event", "error"),
attribute.String("message", err.Error()),
)
}
_span.End()
}()
return _d.QuerierTx.GetServerByID(ctx, id)
}
// GetServerByIP implements QuerierTx
func (_d QuerierTxWithTracing) GetServerByIP(ctx context.Context, ip string) (s1 Server, err error) {
ctx, _span := otel.Tracer(_d._instance).Start(ctx, "QuerierTx.GetServerByIP")
defer func() {
if _d._spanDecorator != nil {
_d._spanDecorator(_span, map[string]interface{}{
"ctx": ctx,
"ip": ip}, map[string]interface{}{
"s1": s1,
"err": err})
} else if err != nil {
_span.RecordError(err)
_span.SetStatus(_codes.Error, err.Error())
_span.SetAttributes(
attribute.String("event", "error"),
attribute.String("message", err.Error()),
)
}
_span.End()
}()
return _d.QuerierTx.GetServerByIP(ctx, ip)
}
// GetServerLogScores implements QuerierTx
func (_d QuerierTxWithTracing) GetServerLogScores(ctx context.Context, arg GetServerLogScoresParams) (la1 []LogScore, err error) {
ctx, _span := otel.Tracer(_d._instance).Start(ctx, "QuerierTx.GetServerLogScores")
defer func() {
if _d._spanDecorator != nil {
_d._spanDecorator(_span, map[string]interface{}{
"ctx": ctx,
"arg": arg}, map[string]interface{}{
"la1": la1,
"err": err})
} else if err != nil {
_span.RecordError(err)
_span.SetStatus(_codes.Error, err.Error())
_span.SetAttributes(
attribute.String("event", "error"),
attribute.String("message", err.Error()),
)
}
_span.End()
}()
return _d.QuerierTx.GetServerLogScores(ctx, arg)
}
// GetServerLogScoresByMonitorID implements QuerierTx
func (_d QuerierTxWithTracing) GetServerLogScoresByMonitorID(ctx context.Context, arg GetServerLogScoresByMonitorIDParams) (la1 []LogScore, err error) {
ctx, _span := otel.Tracer(_d._instance).Start(ctx, "QuerierTx.GetServerLogScoresByMonitorID")
defer func() {
if _d._spanDecorator != nil {
_d._spanDecorator(_span, map[string]interface{}{
"ctx": ctx,
"arg": arg}, map[string]interface{}{
"la1": la1,
"err": err})
} else if err != nil {
_span.RecordError(err)
_span.SetStatus(_codes.Error, err.Error())
_span.SetAttributes(
attribute.String("event", "error"),
attribute.String("message", err.Error()),
)
}
_span.End()
}()
return _d.QuerierTx.GetServerLogScoresByMonitorID(ctx, arg)
}
// GetServerNetspeed implements QuerierTx
func (_d QuerierTxWithTracing) GetServerNetspeed(ctx context.Context, ip string) (u1 uint32, err error) {
ctx, _span := otel.Tracer(_d._instance).Start(ctx, "QuerierTx.GetServerNetspeed")
@@ -90,6 +235,7 @@ func (_d QuerierTxWithTracing) GetServerNetspeed(ctx context.Context, ip string)
"err": err}) "err": err})
} else if err != nil { } else if err != nil {
_span.RecordError(err) _span.RecordError(err)
_span.SetStatus(_codes.Error, err.Error())
_span.SetAttributes( _span.SetAttributes(
attribute.String("event", "error"), attribute.String("event", "error"),
attribute.String("message", err.Error()), attribute.String("message", err.Error()),
@@ -101,6 +247,78 @@ func (_d QuerierTxWithTracing) GetServerNetspeed(ctx context.Context, ip string)
return _d.QuerierTx.GetServerNetspeed(ctx, ip)
}
// GetServerScores implements QuerierTx
func (_d QuerierTxWithTracing) GetServerScores(ctx context.Context, arg GetServerScoresParams) (ga1 []GetServerScoresRow, err error) {
ctx, _span := otel.Tracer(_d._instance).Start(ctx, "QuerierTx.GetServerScores")
defer func() {
if _d._spanDecorator != nil {
_d._spanDecorator(_span, map[string]interface{}{
"ctx": ctx,
"arg": arg}, map[string]interface{}{
"ga1": ga1,
"err": err})
} else if err != nil {
_span.RecordError(err)
_span.SetStatus(_codes.Error, err.Error())
_span.SetAttributes(
attribute.String("event", "error"),
attribute.String("message", err.Error()),
)
}
_span.End()
}()
return _d.QuerierTx.GetServerScores(ctx, arg)
}
// GetZoneByName implements QuerierTx
func (_d QuerierTxWithTracing) GetZoneByName(ctx context.Context, name string) (z1 Zone, err error) {
ctx, _span := otel.Tracer(_d._instance).Start(ctx, "QuerierTx.GetZoneByName")
defer func() {
if _d._spanDecorator != nil {
_d._spanDecorator(_span, map[string]interface{}{
"ctx": ctx,
"name": name}, map[string]interface{}{
"z1": z1,
"err": err})
} else if err != nil {
_span.RecordError(err)
_span.SetStatus(_codes.Error, err.Error())
_span.SetAttributes(
attribute.String("event", "error"),
attribute.String("message", err.Error()),
)
}
_span.End()
}()
return _d.QuerierTx.GetZoneByName(ctx, name)
}
// GetZoneCounts implements QuerierTx
func (_d QuerierTxWithTracing) GetZoneCounts(ctx context.Context, zoneID uint32) (za1 []ZoneServerCount, err error) {
ctx, _span := otel.Tracer(_d._instance).Start(ctx, "QuerierTx.GetZoneCounts")
defer func() {
if _d._spanDecorator != nil {
_d._spanDecorator(_span, map[string]interface{}{
"ctx": ctx,
"zoneID": zoneID}, map[string]interface{}{
"za1": za1,
"err": err})
} else if err != nil {
_span.RecordError(err)
_span.SetStatus(_codes.Error, err.Error())
_span.SetAttributes(
attribute.String("event", "error"),
attribute.String("message", err.Error()),
)
}
_span.End()
}()
return _d.QuerierTx.GetZoneCounts(ctx, zoneID)
}
// GetZoneStatsData implements QuerierTx
func (_d QuerierTxWithTracing) GetZoneStatsData(ctx context.Context) (ga1 []GetZoneStatsDataRow, err error) {
ctx, _span := otel.Tracer(_d._instance).Start(ctx, "QuerierTx.GetZoneStatsData")
@@ -112,6 +330,7 @@ func (_d QuerierTxWithTracing) GetZoneStatsData(ctx context.Context) (ga1 []GetZ
"err": err}) "err": err})
} else if err != nil { } else if err != nil {
_span.RecordError(err) _span.RecordError(err)
_span.SetStatus(_codes.Error, err.Error())
_span.SetAttributes( _span.SetAttributes(
attribute.String("event", "error"), attribute.String("event", "error"),
attribute.String("message", err.Error()), attribute.String("message", err.Error()),
@@ -135,6 +354,7 @@ func (_d QuerierTxWithTracing) GetZoneStatsV2(ctx context.Context, ip string) (g
"err": err}) "err": err})
} else if err != nil { } else if err != nil {
_span.RecordError(err) _span.RecordError(err)
_span.SetStatus(_codes.Error, err.Error())
_span.SetAttributes( _span.SetAttributes(
attribute.String("event", "error"), attribute.String("event", "error"),
attribute.String("message", err.Error()), attribute.String("message", err.Error()),
@@ -156,6 +376,7 @@ func (_d QuerierTxWithTracing) Rollback(ctx context.Context) (err error) {
"err": err}) "err": err})
} else if err != nil { } else if err != nil {
_span.RecordError(err) _span.RecordError(err)
_span.SetStatus(_codes.Error, err.Error())
_span.SetAttributes( _span.SetAttributes(
attribute.String("event", "error"), attribute.String("event", "error"),
attribute.String("message", err.Error()), attribute.String("message", err.Error()),

View File

@@ -1,6 +1,6 @@
// Code generated by sqlc. DO NOT EDIT.
// versions:
// sqlc v1.29.0
package ntpdb
@@ -9,7 +9,16 @@ import (
)
type Querier interface {
GetMonitorByNameAndIPVersion(ctx context.Context, arg GetMonitorByNameAndIPVersionParams) (Monitor, error)
GetMonitorsByID(ctx context.Context, monitorids []uint32) ([]Monitor, error)
GetServerByID(ctx context.Context, id uint32) (Server, error)
GetServerByIP(ctx context.Context, ip string) (Server, error)
GetServerLogScores(ctx context.Context, arg GetServerLogScoresParams) ([]LogScore, error)
GetServerLogScoresByMonitorID(ctx context.Context, arg GetServerLogScoresByMonitorIDParams) ([]LogScore, error)
GetServerNetspeed(ctx context.Context, ip string) (uint32, error)
GetServerScores(ctx context.Context, arg GetServerScoresParams) ([]GetServerScoresRow, error)
GetZoneByName(ctx context.Context, name string) (Zone, error)
GetZoneCounts(ctx context.Context, zoneID uint32) ([]ZoneServerCount, error)
GetZoneStatsData(ctx context.Context) ([]GetZoneStatsDataRow, error)
GetZoneStatsV2(ctx context.Context, ip string) ([]GetZoneStatsV2Row, error)
}

View File

@@ -1,15 +1,273 @@
// Code generated by sqlc. DO NOT EDIT.
// versions:
// sqlc v1.29.0
// source: query.sql
package ntpdb
import (
"context"
"database/sql"
"strings"
"time"
)
const getMonitorByNameAndIPVersion = `-- name: GetMonitorByNameAndIPVersion :one
select id, id_token, type, user_id, account_id, hostname, location, ip, ip_version, tls_name, api_key, status, config, client_version, last_seen, last_submit, created_on, deleted_on, is_current from monitors
where
tls_name like ? AND
(ip_version = ? OR (type = 'score' AND ip_version IS NULL)) AND
is_current = 1
order by id
limit 1
`
type GetMonitorByNameAndIPVersionParams struct {
TlsName sql.NullString `db:"tls_name" json:"tls_name"`
IpVersion NullMonitorsIpVersion `db:"ip_version" json:"ip_version"`
}
func (q *Queries) GetMonitorByNameAndIPVersion(ctx context.Context, arg GetMonitorByNameAndIPVersionParams) (Monitor, error) {
row := q.db.QueryRowContext(ctx, getMonitorByNameAndIPVersion, arg.TlsName, arg.IpVersion)
var i Monitor
err := row.Scan(
&i.ID,
&i.IDToken,
&i.Type,
&i.UserID,
&i.AccountID,
&i.Hostname,
&i.Location,
&i.Ip,
&i.IpVersion,
&i.TlsName,
&i.ApiKey,
&i.Status,
&i.Config,
&i.ClientVersion,
&i.LastSeen,
&i.LastSubmit,
&i.CreatedOn,
&i.DeletedOn,
&i.IsCurrent,
)
return i, err
}
const getMonitorsByID = `-- name: GetMonitorsByID :many
select id, id_token, type, user_id, account_id, hostname, location, ip, ip_version, tls_name, api_key, status, config, client_version, last_seen, last_submit, created_on, deleted_on, is_current from monitors
where id in (/*SLICE:MonitorIDs*/?)
`
func (q *Queries) GetMonitorsByID(ctx context.Context, monitorids []uint32) ([]Monitor, error) {
query := getMonitorsByID
var queryParams []interface{}
if len(monitorids) > 0 {
for _, v := range monitorids {
queryParams = append(queryParams, v)
}
query = strings.Replace(query, "/*SLICE:MonitorIDs*/?", strings.Repeat(",?", len(monitorids))[1:], 1)
} else {
query = strings.Replace(query, "/*SLICE:MonitorIDs*/?", "NULL", 1)
}
rows, err := q.db.QueryContext(ctx, query, queryParams...)
if err != nil {
return nil, err
}
defer rows.Close()
var items []Monitor
for rows.Next() {
var i Monitor
if err := rows.Scan(
&i.ID,
&i.IDToken,
&i.Type,
&i.UserID,
&i.AccountID,
&i.Hostname,
&i.Location,
&i.Ip,
&i.IpVersion,
&i.TlsName,
&i.ApiKey,
&i.Status,
&i.Config,
&i.ClientVersion,
&i.LastSeen,
&i.LastSubmit,
&i.CreatedOn,
&i.DeletedOn,
&i.IsCurrent,
); err != nil {
return nil, err
}
items = append(items, i)
}
if err := rows.Close(); err != nil {
return nil, err
}
if err := rows.Err(); err != nil {
return nil, err
}
return items, nil
}
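sqlc's `/*SLICE:...*/?` marker is expanded at query time: one `?` per slice element, or the literal `NULL` for an empty slice, so `id in (NULL)` matches no rows instead of producing invalid SQL. The expansion step in isolation (`expandSlice` is an illustrative helper, not sqlc API):

```go
package main

import (
	"fmt"
	"strings"
)

// expandSlice replaces a sqlc slice marker with n placeholders.
func expandSlice(query, marker string, n int) string {
	if n == 0 {
		return strings.Replace(query, marker, "NULL", 1)
	}
	// strings.Repeat(",?", n) yields ",?,?,…"; [1:] drops the leading comma.
	return strings.Replace(query, marker, strings.Repeat(",?", n)[1:], 1)
}

func main() {
	q := "select id from monitors where id in (/*SLICE:MonitorIDs*/?)"
	fmt.Println(expandSlice(q, "/*SLICE:MonitorIDs*/?", 3))
	// select id from monitors where id in (?,?,?)
}
```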
const getServerByID = `-- name: GetServerByID :one
select id, ip, ip_version, user_id, account_id, hostname, stratum, in_pool, in_server_list, netspeed, netspeed_target, created_on, updated_on, score_ts, score_raw, deletion_on, flags from servers
where
id = ?
`
func (q *Queries) GetServerByID(ctx context.Context, id uint32) (Server, error) {
row := q.db.QueryRowContext(ctx, getServerByID, id)
var i Server
err := row.Scan(
&i.ID,
&i.Ip,
&i.IpVersion,
&i.UserID,
&i.AccountID,
&i.Hostname,
&i.Stratum,
&i.InPool,
&i.InServerList,
&i.Netspeed,
&i.NetspeedTarget,
&i.CreatedOn,
&i.UpdatedOn,
&i.ScoreTs,
&i.ScoreRaw,
&i.DeletionOn,
&i.Flags,
)
return i, err
}
const getServerByIP = `-- name: GetServerByIP :one
select id, ip, ip_version, user_id, account_id, hostname, stratum, in_pool, in_server_list, netspeed, netspeed_target, created_on, updated_on, score_ts, score_raw, deletion_on, flags from servers
where
ip = ?
`
func (q *Queries) GetServerByIP(ctx context.Context, ip string) (Server, error) {
row := q.db.QueryRowContext(ctx, getServerByIP, ip)
var i Server
err := row.Scan(
&i.ID,
&i.Ip,
&i.IpVersion,
&i.UserID,
&i.AccountID,
&i.Hostname,
&i.Stratum,
&i.InPool,
&i.InServerList,
&i.Netspeed,
&i.NetspeedTarget,
&i.CreatedOn,
&i.UpdatedOn,
&i.ScoreTs,
&i.ScoreRaw,
&i.DeletionOn,
&i.Flags,
)
return i, err
}
const getServerLogScores = `-- name: GetServerLogScores :many
select id, monitor_id, server_id, ts, score, step, offset, rtt, attributes from log_scores
where
server_id = ?
order by ts desc
limit ?
`
type GetServerLogScoresParams struct {
ServerID uint32 `db:"server_id" json:"server_id"`
Limit int32 `db:"limit" json:"limit"`
}
func (q *Queries) GetServerLogScores(ctx context.Context, arg GetServerLogScoresParams) ([]LogScore, error) {
rows, err := q.db.QueryContext(ctx, getServerLogScores, arg.ServerID, arg.Limit)
if err != nil {
return nil, err
}
defer rows.Close()
var items []LogScore
for rows.Next() {
var i LogScore
if err := rows.Scan(
&i.ID,
&i.MonitorID,
&i.ServerID,
&i.Ts,
&i.Score,
&i.Step,
&i.Offset,
&i.Rtt,
&i.Attributes,
); err != nil {
return nil, err
}
items = append(items, i)
}
if err := rows.Close(); err != nil {
return nil, err
}
if err := rows.Err(); err != nil {
return nil, err
}
return items, nil
}
const getServerLogScoresByMonitorID = `-- name: GetServerLogScoresByMonitorID :many
select id, monitor_id, server_id, ts, score, step, offset, rtt, attributes from log_scores
where
server_id = ? AND
monitor_id = ?
order by ts desc
limit ?
`
type GetServerLogScoresByMonitorIDParams struct {
ServerID uint32 `db:"server_id" json:"server_id"`
MonitorID sql.NullInt32 `db:"monitor_id" json:"monitor_id"`
Limit int32 `db:"limit" json:"limit"`
}
func (q *Queries) GetServerLogScoresByMonitorID(ctx context.Context, arg GetServerLogScoresByMonitorIDParams) ([]LogScore, error) {
rows, err := q.db.QueryContext(ctx, getServerLogScoresByMonitorID, arg.ServerID, arg.MonitorID, arg.Limit)
if err != nil {
return nil, err
}
defer rows.Close()
var items []LogScore
for rows.Next() {
var i LogScore
if err := rows.Scan(
&i.ID,
&i.MonitorID,
&i.ServerID,
&i.Ts,
&i.Score,
&i.Step,
&i.Offset,
&i.Rtt,
&i.Attributes,
); err != nil {
return nil, err
}
items = append(items, i)
}
if err := rows.Close(); err != nil {
return nil, err
}
if err := rows.Err(); err != nil {
return nil, err
}
return items, nil
}
const getServerNetspeed = `-- name: GetServerNetspeed :one
select netspeed from servers where ip = ?
`
@@ -21,6 +279,133 @@ func (q *Queries) GetServerNetspeed(ctx context.Context, ip string) (uint32, err
return netspeed, err
}
const getServerScores = `-- name: GetServerScores :many
select
m.id, m.hostname, m.tls_name, m.location, m.type,
ss.score_raw, ss.score_ts, ss.status
from server_scores ss
inner join monitors m
on (m.id=ss.monitor_id)
where
server_id = ? AND
monitor_id in (/*SLICE:MonitorIDs*/?)
`
type GetServerScoresParams struct {
ServerID uint32 `db:"server_id" json:"server_id"`
MonitorIDs []uint32 `db:"MonitorIDs" json:"MonitorIDs"`
}
type GetServerScoresRow struct {
ID uint32 `db:"id" json:"id"`
Hostname string `db:"hostname" json:"hostname"`
TlsName sql.NullString `db:"tls_name" json:"tls_name"`
Location string `db:"location" json:"location"`
Type MonitorsType `db:"type" json:"type"`
ScoreRaw float64 `db:"score_raw" json:"score_raw"`
ScoreTs sql.NullTime `db:"score_ts" json:"score_ts"`
Status ServerScoresStatus `db:"status" json:"status"`
}
func (q *Queries) GetServerScores(ctx context.Context, arg GetServerScoresParams) ([]GetServerScoresRow, error) {
query := getServerScores
var queryParams []interface{}
queryParams = append(queryParams, arg.ServerID)
if len(arg.MonitorIDs) > 0 {
for _, v := range arg.MonitorIDs {
queryParams = append(queryParams, v)
}
query = strings.Replace(query, "/*SLICE:MonitorIDs*/?", strings.Repeat(",?", len(arg.MonitorIDs))[1:], 1)
} else {
query = strings.Replace(query, "/*SLICE:MonitorIDs*/?", "NULL", 1)
}
rows, err := q.db.QueryContext(ctx, query, queryParams...)
if err != nil {
return nil, err
}
defer rows.Close()
var items []GetServerScoresRow
for rows.Next() {
var i GetServerScoresRow
if err := rows.Scan(
&i.ID,
&i.Hostname,
&i.TlsName,
&i.Location,
&i.Type,
&i.ScoreRaw,
&i.ScoreTs,
&i.Status,
); err != nil {
return nil, err
}
items = append(items, i)
}
if err := rows.Close(); err != nil {
return nil, err
}
if err := rows.Err(); err != nil {
return nil, err
}
return items, nil
}
const getZoneByName = `-- name: GetZoneByName :one
select id, name, description, parent_id, dns from zones
where
name = ?
`
func (q *Queries) GetZoneByName(ctx context.Context, name string) (Zone, error) {
row := q.db.QueryRowContext(ctx, getZoneByName, name)
var i Zone
err := row.Scan(
&i.ID,
&i.Name,
&i.Description,
&i.ParentID,
&i.Dns,
)
return i, err
}
const getZoneCounts = `-- name: GetZoneCounts :many
select id, zone_id, ip_version, date, count_active, count_registered, netspeed_active from zone_server_counts
where zone_id = ?
order by date
`
func (q *Queries) GetZoneCounts(ctx context.Context, zoneID uint32) ([]ZoneServerCount, error) {
rows, err := q.db.QueryContext(ctx, getZoneCounts, zoneID)
if err != nil {
return nil, err
}
defer rows.Close()
var items []ZoneServerCount
for rows.Next() {
var i ZoneServerCount
if err := rows.Scan(
&i.ID,
&i.ZoneID,
&i.IpVersion,
&i.Date,
&i.CountActive,
&i.CountRegistered,
&i.NetspeedActive,
); err != nil {
return nil, err
}
items = append(items, i)
}
if err := rows.Close(); err != nil {
return nil, err
}
if err := rows.Err(); err != nil {
return nil, err
}
return items, nil
}
const getZoneStatsData = `-- name: GetZoneStatsData :many
SELECT zc.date, z.name, zc.ip_version, count_active, count_registered, netspeed_active
FROM zone_server_counts zc USE INDEX (date_idx)
@@ -36,7 +421,7 @@ type GetZoneStatsDataRow struct {
	IpVersion       ZoneServerCountsIpVersion `db:"ip_version" json:"ip_version"`
	CountActive     uint32                    `db:"count_active" json:"count_active"`
	CountRegistered uint32                    `db:"count_registered" json:"count_registered"`
-	NetspeedActive  uint32                    `db:"netspeed_active" json:"netspeed_active"`
+	NetspeedActive  int                       `db:"netspeed_active" json:"netspeed_active"`
}

func (q *Queries) GetZoneStatsData(ctx context.Context) ([]GetZoneStatsDataRow, error) {
@@ -99,7 +484,7 @@ AS server_netspeed
type GetZoneStatsV2Row struct {
	ZoneName       string `db:"zone_name" json:"zone_name"`
-	NetspeedActive int32  `db:"netspeed_active" json:"netspeed_active"`
+	NetspeedActive int    `db:"netspeed_active" json:"netspeed_active"`
}

func (q *Queries) GetZoneStatsV2(ctx context.Context, ip string) ([]GetZoneStatsV2Row, error) {

ntpdb/server.go Normal file
@@ -0,0 +1,16 @@
package ntpdb
import "time"
// DeletionAge reports whether the server's scheduled deletion time is
// at least dur in the past; dur is normalized to a negative offset so
// callers may pass a positive duration.
func (s *Server) DeletionAge(dur time.Duration) bool {
	if !s.DeletionOn.Valid {
		return false
	}
	if dur > 0 {
		dur = -dur
	}
	return s.DeletionOn.Time.Before(time.Now().Add(dur))
}

@@ -0,0 +1,389 @@
# DETAILED IMPLEMENTATION PLAN: Grafana Time Range API with Future Downsampling Support
## Overview
Implement a new Grafana-compatible API endpoint `/api/v2/server/scores/{server}/{mode}` that returns time series data in Grafana format with time range support and future downsampling capabilities.
## API Specification
### Endpoint
- **URL**: `/api/v2/server/scores/{server}/{mode}`
- **Method**: GET
- **Path Parameters**:
- `server`: Server IP address or ID (same validation as existing API)
- `mode`: Only `json` supported initially
### Query Parameters (following Grafana conventions)
- `from`: Unix timestamp in seconds (required)
- `to`: Unix timestamp in seconds (required)
- `maxDataPoints`: Integer, default 50000, max 50000 (for future downsampling)
- `monitor`: Monitor ID, name prefix, or "*" for all (optional, same as existing)
- `interval`: Future downsampling interval like "1m", "5m", "1h" (optional, not implemented initially)
### Response Format
Grafana table format JSON array (more efficient than separate series):
```json
[
{
"target": "monitor{name=zakim1-yfhw4a}",
"tags": {
"monitor_id": "126",
"monitor_name": "zakim1-yfhw4a",
"type": "monitor",
"status": "active"
},
"columns": [
{"text": "time", "type": "time"},
{"text": "score", "type": "number"},
{"text": "rtt", "type": "number", "unit": "ms"},
{"text": "offset", "type": "number", "unit": "s"}
],
"values": [
[1753431667000, 20.0, 18.865, -0.000267],
[1753431419000, 20.0, 18.96, -0.000390],
[1753431151000, 20.0, 18.073, -0.000768],
[1753430063000, 20.0, 18.209, null]
]
}
]
```
## Implementation Details
### 1. Server Routing (`server/server.go`)
Add new route after existing scores routes:
```go
e.GET("/api/v2/server/scores/:server/:mode", srv.scoresTimeRange)
```
**Note**: Initially attempted `:server.:mode` pattern, but Echo router cannot properly parse IP addresses with dots using this pattern. Changed to `:server/:mode` to match existing API pattern and ensure compatibility with IP addresses like `23.155.40.38`.
## Key Implementation Clarifications
### Monitor Filtering Behavior
- **monitor=\***: Return ALL monitors (no monitor count limit)
- **50k datapoint limit**: Applied in database query (LIMIT clause)
- Return whatever data we get from database to user (no post-processing truncation)
### Null Value Handling Strategy
- **Score**: Always include (should never be null)
- **RTT**: Skip datapoints where RTT is null
- **Offset**: Skip datapoints where offset is null
### Time Range Validation Rules
- **Zero duration**: Return 400 Bad Request
- **Future timestamps**: Allow for now
- **Minimum range**: 1 second
- **Maximum range**: 90 days
### 2. New Handler Function (`server/grafana.go`)
#### Function Signature
```go
func (srv *Server) scoresTimeRange(c echo.Context) error
```
#### Parameter Parsing & Validation
```go
// Extend existing historyParameters struct for time range support
type timeRangeParams struct {
historyParameters // embed existing struct
from time.Time
to time.Time
maxDataPoints int
interval string // for future downsampling
}
func (srv *Server) parseTimeRangeParams(ctx context.Context, c echo.Context) (timeRangeParams, error) {
// Start with existing parameter parsing logic
baseParams, err := srv.getHistoryParameters(ctx, c)
if err != nil {
return timeRangeParams{}, err
}
// Parse and validate from/to second timestamps
// Validate time range (max 90 days, min 1 second)
// Parse maxDataPoints (default 50000, max 50000)
// Return extended parameters
}
```
#### Response Structure
```go
type ColumnDef struct {
Text string `json:"text"`
Type string `json:"type"`
Unit string `json:"unit,omitempty"`
}
type GrafanaTableSeries struct {
Target string `json:"target"`
Tags map[string]string `json:"tags"`
Columns []ColumnDef `json:"columns"`
Values [][]interface{} `json:"values"`
}
type GrafanaTimeSeriesResponse []GrafanaTableSeries
```
#### Cache Control
```go
// Reuse existing setHistoryCacheControl function for consistency
// Logic based on data recency and entry count:
// - Empty or >8h old data: "s-maxage=260,max-age=360"
// - Single entry: "s-maxage=60,max-age=35"
// - Multiple entries: "s-maxage=90,max-age=120"
setHistoryCacheControl(c, history)
```
### 3. ClickHouse Data Access (`chdb/logscores.go`)
#### New Method
```go
func (d *ClickHouse) LogscoresTimeRange(ctx context.Context, serverID, monitorID int, from, to time.Time, limit int) ([]ntpdb.LogScore, error) {
// Build query with time range WHERE clause
// Always order by ts ASC (Grafana convention)
// Apply limit to prevent memory issues
// Use same row scanning logic as existing Logscores method
}
```
#### Query Structure
```sql
SELECT id, monitor_id, server_id, ts,
toFloat64(score), toFloat64(step), offset,
rtt, leap, warning, error
FROM log_scores
WHERE server_id = ?
AND ts >= ?
AND ts <= ?
[AND monitor_id = ?] -- if specific monitor requested
ORDER BY ts ASC
LIMIT ?
```
### 4. Data Transformation Logic (`server/grafana.go`)
#### Core Transformation Function
```go
func transformToGrafanaTableFormat(history *logscores.LogScoreHistory, monitors []ntpdb.Monitor) GrafanaTimeSeriesResponse {
// Group data by monitor_id (one series per monitor)
// Create table format with columns: time, score, rtt, offset
// Convert timestamps to milliseconds
// Build proper target names and tags
// Handle null values appropriately in table values
}
```
#### Grouping Strategy
1. **Group by Monitor**: One table series per monitor
2. **Table Columns**: time, score, rtt, offset (all metrics in one table)
3. **Target Naming**: `monitor{name={sanitized_monitor_name}}`
4. **Tag Structure**: Include monitor metadata (no metric type needed)
5. **Monitor Status**: Query real monitor data using `q.GetServerScores()` like existing API
6. **Series Ordering**: No guaranteed order (standard Grafana behavior)
7. **Efficiency**: More efficient than separate series - less JSON overhead

#### Timestamp Conversion
```go
timestampMs := logScore.Ts.Unix() * 1000
```
### 5. Error Handling
#### Validation Errors (400 Bad Request)
- Invalid timestamp format
- from >= to (including zero duration)
- Time range too large (> 90 days)
- Time range too small (< 1 second minimum)
- maxDataPoints > 50000
- Invalid mode (not "json")
#### Not Found Errors (404)
- Server not found
- Monitor not found
- Server deleted
#### Server Errors (500)
- ClickHouse connection issues
- Database query errors
### 6. Future Downsampling Design
#### API Extension Points
- `interval` parameter parsing ready
- `maxDataPoints` limit already enforced
- Response format supports downsampled data seamlessly
#### Downsampling Algorithm (Future Implementation)
```go
// When datapoints > maxDataPoints:
// 1. Calculate downsample interval: (to - from) / maxDataPoints
// 2. Group data into time buckets
// 3. Aggregate per bucket: avg for score/rtt, last for offset
// 4. Return aggregated datapoints
```
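The four steps above can be made concrete as a runnable sketch (score-only, average aggregation; a toy model of the future implementation, not production code):

```go
package main

import "fmt"

type point struct {
	tsMs  int64
	score float64
}

// downsample groups points into fixed-width time buckets derived from
// (to - from) / maxPoints and averages the score per bucket.
func downsample(points []point, fromMs, toMs int64, maxPoints int) []point {
	if maxPoints <= 0 || len(points) <= maxPoints {
		return points // under the limit: return data unchanged
	}
	interval := (toMs - fromMs) / int64(maxPoints)
	if interval <= 0 {
		return points
	}
	type agg struct {
		sum float64
		n   int
	}
	buckets := map[int64]*agg{}
	for _, p := range points {
		b := (p.tsMs - fromMs) / interval
		if buckets[b] == nil {
			buckets[b] = &agg{}
		}
		buckets[b].sum += p.score
		buckets[b].n++
	}
	var out []point
	for b := int64(0); b*interval+fromMs <= toMs; b++ {
		if a := buckets[b]; a != nil {
			out = append(out, point{fromMs + b*interval, a.sum / float64(a.n)})
		}
	}
	return out
}

func main() {
	pts := []point{{0, 10}, {500, 20}, {1000, 30}, {1500, 40}}
	fmt.Println(len(downsample(pts, 0, 2000, 2))) // 2
}
```

The real version would also need per-column rules (avg for score/rtt, last for offset) as noted above.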
## Testing Strategy
### Unit Tests
- Parameter parsing and validation
- Data transformation logic
- Error handling scenarios
- Timestamp conversion accuracy
### Integration Tests
- End-to-end API requests
- ClickHouse query execution
- Multiple monitor scenarios
- Large time range handling
### Manual Testing
- Grafana integration testing
- Performance with various time ranges
- Cache behavior validation
## Performance Considerations
### Current Implementation
- 50k datapoint limit applied in the database query's LIMIT clause (covers roughly a few weeks of data)
- ClickHouse-only for better range query performance
- Proper indexing on (server_id, ts) assumed
- Table format more efficient than separate time series (less JSON overhead)
### Future Optimizations (Critical for Production)
- **Downsampling for large ranges**: Essential for 90-day queries with reasonable performance
- Query optimization based on range size
- Potential parallel monitor queries
- Adaptive sampling rates based on time range duration
## Documentation Updates
### API.md Addition
```markdown
### 7. Server Scores Time Range (v2)
**GET** `/api/v2/server/scores/{server}/{mode}`
Grafana-compatible time series endpoint for NTP server scoring data.
#### Path Parameters
- `server`: Server IP address or ID
- `mode`: Response format (`json` only)
#### Query Parameters
- `from`: Start time as Unix timestamp in seconds (required)
- `to`: End time as Unix timestamp in seconds (required)
- `maxDataPoints`: Maximum data points to return (default: 50000, max: 50000)
- `monitor`: Monitor filter (ID, name prefix, or "*" for all)
#### Response Format
Grafana table format array with one series per monitor containing all metrics as columns.
```
## Key Research Findings
### Grafana Error Format Requirements
- **HTTP Status Codes**: Standard 400/404/500 work fine
- **Response Body**: JSON preferred with `Content-Type: application/json`
- **Structure**: Simple `{"error": "message", "status": code}` is sufficient
- **Compatibility**: Existing Echo error patterns are Grafana-compatible
### Data Volume Considerations
- **50k Datapoint Limit**: Covers only roughly a few weeks of data, not sufficient for 90-day ranges
- **Downsampling Critical**: Required for production use with 90-day time ranges
- **Current Approach**: Acceptable for MVP, downsampling essential for full utility
## Implementation Checklist
### Phase 0: Grafana Table Format Validation ✅ **COMPLETED**
- [x] Add test endpoint `/api/v2/test/grafana-table` returning sample table format
- [x] Implement Grafana table format response structures in `server/grafana.go`
- [x] Add structured logging and OpenTelemetry tracing to test endpoint
- [x] Verify endpoint compiles and serves correct JSON format
- [x] Test endpoint response format and headers (CORS, Content-Type, Cache-Control)
- [ ] Test with actual Grafana instance to validate table format compatibility
- [ ] Confirm time series panels render table format correctly
- [ ] Validate column types and units display properly
#### Phase 0 Implementation Details
**Files Created/Modified:**
- `server/grafana.go`: New file containing Grafana table format structures and test endpoint
- `server/server.go`: Added route `e.GET("/api/v2/test/grafana-table", srv.testGrafanaTable)`
**Test Endpoint Features:**
- **URL**: `http://localhost:8030/api/v2/test/grafana-table`
- **Response Format**: Grafana table format with realistic NTP Pool data
- **Sample Data**: Two monitor series (zakim1-yfhw4a, nj2-mon01) with time-based values
- **Columns**: time, score, rtt (ms), offset (s) with proper units
- **Null Handling**: Demonstrates null offset values
- **Headers**: CORS, JSON content-type, cache control
- **Observability**: Structured logging with context, OpenTelemetry tracing
**Recommended Grafana Data Source**: JSON API plugin (`marcusolsson-json-datasource`) - ideal for REST APIs returning table format JSON
### Phase 1: Core Implementation ✅ **COMPLETED**
- [x] Add route in server.go (fixed routing pattern from `:server.:mode` to `:server/:mode`)
- [x] Implement parseTimeRangeParams function for parameter validation
- [x] Add LogscoresTimeRange method to ClickHouse with time range filtering
- [x] Implement transformToGrafanaTableFormat function with monitor grouping
- [x] Add scoresTimeRange handler with full error handling
- [x] Error handling and validation (reuse existing Echo patterns)
- [x] Cache control headers (reuse setHistoryCacheControl)
#### Phase 1 Implementation Details
**Key Components Built:**
- **Route Pattern**: `/api/v2/server/scores/:server/:mode` (matches existing API consistency)
- **Parameter Validation**: Full validation of `from`/`to` timestamps, `maxDataPoints`, time ranges
- **ClickHouse Integration**: `LogscoresTimeRange()` with time-based WHERE clauses and ASC ordering
- **Data Transformation**: Grafana table format with monitor grouping and null value handling
- **Complete Handler**: `scoresTimeRange()` with server validation, error handling, caching, and CORS
**Routing Fix**: Changed from `:server.:mode` to `:server/:mode` to resolve Echo router issue with IP addresses containing dots (e.g., `23.155.40.38`).
**Files Created/Modified in Phase 1:**
- `server/grafana.go`: Complete implementation with all structures and functions
- `timeRangeParams` struct and `parseTimeRangeParams()` function
- `transformToGrafanaTableFormat()` function with monitor grouping
- `scoresTimeRange()` handler with full error handling
- `sanitizeMonitorName()` utility function
- `server/server.go`: Added route `e.GET("/api/v2/server/scores/:server/:mode", srv.scoresTimeRange)`
- `chdb/logscores.go`: Added `LogscoresTimeRange()` method for time-based queries
**Production Testing Results** (July 25, 2025):
- **Real Data Verification**: Successfully tested with server `102.64.112.164` over 12-hour time range
- **Multiple Monitor Support**: Returns data for multiple monitors (`defra1-210hw9t`, `recentmedian`)
- **Data Quality Validation**:
  - RTT conversion (microseconds → milliseconds): ✅ Working
  - Timestamp conversion (seconds → milliseconds): ✅ Working
  - Null value handling: ✅ Working (recentmedian has null RTT/offset as expected)
  - Monitor grouping: ✅ Working (one series per monitor)
- **API Parameter Changes**: Successfully changed from milliseconds to seconds for user-friendliness
- **Volume Testing**: Handles 100+ data points per monitor efficiently
- **Error Handling**: All validation working (400 for invalid params, 404 for missing servers)
- **Performance**: Sub-second response times for 12-hour ranges
**Sample Working Request:**
```bash
curl 'http://localhost:8030/api/v2/server/scores/102.64.112.164/json?from=1753457764&to=1753500964&monitor=*'
```
### Phase 2: Testing & Polish
- [ ] Unit tests for all functions
- [ ] Integration tests
- [ ] Manual Grafana testing with real data
- [ ] Performance testing with large ranges (up to 50k points)
- [ ] API documentation updates
### Phase 3: Future Enhancement Ready
- [ ] Interval parameter parsing (no-op initially)
- [ ] Downsampling framework hooks (critical for 90-day ranges)
- [ ] Monitoring and metrics for new endpoint
This design provides a solid foundation for immediate Grafana integration while being fully prepared for future downsampling capabilities without breaking changes.
## Critical Notes for Production
- **Downsampling Required**: 50k datapoint limit means 90-day ranges will hit limits quickly
- **Table Format Validation**: Phase 0 testing ensures Grafana compatibility before full implementation
- **Error Handling**: Existing Echo patterns are sufficient for Grafana requirements
- **Scalability**: Current design handles weeks of data well, downsampling needed for months

@@ -35,4 +35,63 @@ WHERE
  AND in_pool = 1
  AND netspeed > 0
GROUP BY z.name)
-AS server_netspeed
+AS server_netspeed;
AS server_netspeed AS server_netspeed;
-- name: GetServerByID :one
select * from servers
where
id = ?;
-- name: GetServerByIP :one
select * from servers
where
ip = sqlc.arg(ip);
-- name: GetMonitorByNameAndIPVersion :one
select * from monitors
where
tls_name like sqlc.arg('tls_name') AND
(ip_version = sqlc.arg('ip_version') OR (type = 'score' AND ip_version IS NULL)) AND
is_current = 1
order by id
limit 1;
-- name: GetMonitorsByID :many
select * from monitors
where id in (sqlc.slice('MonitorIDs'));
-- name: GetServerScores :many
select
m.id, m.hostname, m.tls_name, m.location, m.type,
ss.score_raw, ss.score_ts, ss.status
from server_scores ss
inner join monitors m
on (m.id=ss.monitor_id)
where
server_id = ? AND
monitor_id in (sqlc.slice('MonitorIDs'));
-- name: GetServerLogScores :many
select * from log_scores
where
server_id = ?
order by ts desc
limit ?;
-- name: GetServerLogScoresByMonitorID :many
select * from log_scores
where
server_id = ? AND
monitor_id = ?
order by ts desc
limit ?;
-- name: GetZoneByName :one
select * from zones
where
name = sqlc.arg(name);
-- name: GetZoneCounts :many
select * from zone_server_counts
where zone_id = ?
order by date;

@@ -1,8 +1,9 @@
-- MariaDB dump 10.19 Distrib 10.6.12-MariaDB, for Linux (x86_64) /*M!999999\- enable the sandbox mode */
-- MariaDB dump 10.19-11.4.5-MariaDB, for Linux (x86_64)
-- --
-- Host: ntp-db-mysql-master.ntpdb.svc.cluster.local Database: askntp -- Host: ntpdb-haproxy.ntpdb.svc.cluster.local Database: askntp
-- ------------------------------------------------------ -- ------------------------------------------------------
-- Server version 5.7.35-38-log -- Server version 8.0.42-33
/*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */; /*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
/*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */; /*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
@@ -13,7 +14,7 @@
/*!40014 SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0 */; /*!40014 SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0 */;
/*!40014 SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0 */; /*!40014 SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0 */;
/*!40101 SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='NO_AUTO_VALUE_ON_ZERO' */; /*!40101 SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='NO_AUTO_VALUE_ON_ZERO' */;
/*!40111 SET @OLD_SQL_NOTES=@@SQL_NOTES, SQL_NOTES=0 */; /*M!100616 SET @OLD_NOTE_VERBOSITY=@@NOTE_VERBOSITY, NOTE_VERBOSITY=0 */;
-- --
-- Table structure for table `account_invites` -- Table structure for table `account_invites`
@@ -21,14 +22,14 @@
DROP TABLE IF EXISTS `account_invites`; DROP TABLE IF EXISTS `account_invites`;
/*!40101 SET @saved_cs_client = @@character_set_client */; /*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */; /*!40101 SET character_set_client = utf8mb4 */;
CREATE TABLE `account_invites` ( CREATE TABLE `account_invites` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT, `id` int unsigned NOT NULL AUTO_INCREMENT,
`account_id` int(10) unsigned NOT NULL, `account_id` int unsigned NOT NULL,
`email` varchar(255) NOT NULL, `email` varchar(255) NOT NULL,
`status` enum('pending','accepted','expired') DEFAULT NULL, `status` enum('pending','accepted','expired') DEFAULT NULL,
`user_id` int(10) unsigned DEFAULT NULL, `user_id` int unsigned DEFAULT NULL,
`sent_by_id` int(10) unsigned NOT NULL, `sent_by_id` int unsigned NOT NULL,
`code` varchar(25) NOT NULL, `code` varchar(25) NOT NULL,
`expires_on` datetime NOT NULL, `expires_on` datetime NOT NULL,
`created_on` datetime NOT NULL, `created_on` datetime NOT NULL,
@@ -41,7 +42,7 @@ CREATE TABLE `account_invites` (
CONSTRAINT `account_invites_account_fk` FOREIGN KEY (`account_id`) REFERENCES `accounts` (`id`), CONSTRAINT `account_invites_account_fk` FOREIGN KEY (`account_id`) REFERENCES `accounts` (`id`),
CONSTRAINT `account_invites_sent_by_fk` FOREIGN KEY (`sent_by_id`) REFERENCES `users` (`id`), CONSTRAINT `account_invites_sent_by_fk` FOREIGN KEY (`sent_by_id`) REFERENCES `users` (`id`),
CONSTRAINT `account_invites_user_fk` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`) CONSTRAINT `account_invites_user_fk` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8; ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb3;
/*!40101 SET character_set_client = @saved_cs_client */; /*!40101 SET character_set_client = @saved_cs_client */;
-- --
@@ -50,15 +51,15 @@ CREATE TABLE `account_invites` (
DROP TABLE IF EXISTS `account_subscriptions`; DROP TABLE IF EXISTS `account_subscriptions`;
/*!40101 SET @saved_cs_client = @@character_set_client */; /*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */; /*!40101 SET character_set_client = utf8mb4 */;
CREATE TABLE `account_subscriptions` ( CREATE TABLE `account_subscriptions` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT, `id` int unsigned NOT NULL AUTO_INCREMENT,
`account_id` int(10) unsigned NOT NULL, `account_id` int unsigned NOT NULL,
`stripe_subscription_id` varchar(255) CHARACTER SET utf8 COLLATE utf8_bin DEFAULT NULL, `stripe_subscription_id` varchar(255) CHARACTER SET utf8mb3 COLLATE utf8mb3_bin DEFAULT NULL,
`status` enum('incomplete','incomplete_expired','trialing','active','past_due','canceled','unpaid','ended') DEFAULT NULL, `status` enum('incomplete','incomplete_expired','trialing','active','past_due','canceled','unpaid','ended') DEFAULT NULL,
`name` varchar(255) NOT NULL, `name` varchar(255) NOT NULL,
`max_zones` int(10) unsigned NOT NULL, `max_zones` int unsigned NOT NULL,
`max_devices` int(10) unsigned NOT NULL, `max_devices` int unsigned NOT NULL,
`created_on` datetime NOT NULL, `created_on` datetime NOT NULL,
`ended_on` datetime DEFAULT NULL, `ended_on` datetime DEFAULT NULL,
`modified_on` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP, `modified_on` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
@@ -66,7 +67,7 @@ CREATE TABLE `account_subscriptions` (
UNIQUE KEY `stripe_subscription_id` (`stripe_subscription_id`), UNIQUE KEY `stripe_subscription_id` (`stripe_subscription_id`),
KEY `account_subscriptions_account_fk` (`account_id`), KEY `account_subscriptions_account_fk` (`account_id`),
CONSTRAINT `account_subscriptions_account_fk` FOREIGN KEY (`account_id`) REFERENCES `accounts` (`id`) CONSTRAINT `account_subscriptions_account_fk` FOREIGN KEY (`account_id`) REFERENCES `accounts` (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8; ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb3;
/*!40101 SET character_set_client = @saved_cs_client */; /*!40101 SET character_set_client = @saved_cs_client */;
-- --
@@ -75,15 +76,15 @@ CREATE TABLE `account_subscriptions` (
DROP TABLE IF EXISTS `account_users`; DROP TABLE IF EXISTS `account_users`;
/*!40101 SET @saved_cs_client = @@character_set_client */; /*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */; /*!40101 SET character_set_client = utf8mb4 */;
CREATE TABLE `account_users` ( CREATE TABLE `account_users` (
`account_id` int(10) unsigned NOT NULL, `account_id` int unsigned NOT NULL,
`user_id` int(10) unsigned NOT NULL, `user_id` int unsigned NOT NULL,
PRIMARY KEY (`account_id`,`user_id`), PRIMARY KEY (`account_id`,`user_id`),
KEY `account_users_user_fk` (`user_id`), KEY `account_users_user_fk` (`user_id`),
CONSTRAINT `account_users_account_fk` FOREIGN KEY (`account_id`) REFERENCES `accounts` (`id`), CONSTRAINT `account_users_account_fk` FOREIGN KEY (`account_id`) REFERENCES `accounts` (`id`),
CONSTRAINT `account_users_user_fk` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`) CONSTRAINT `account_users_user_fk` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8; ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb3;
/*!40101 SET character_set_client = @saved_cs_client */; /*!40101 SET character_set_client = @saved_cs_client */;
-- --
@@ -92,21 +93,24 @@ CREATE TABLE `account_users` (
DROP TABLE IF EXISTS `accounts`; DROP TABLE IF EXISTS `accounts`;
/*!40101 SET @saved_cs_client = @@character_set_client */; /*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */; /*!40101 SET character_set_client = utf8mb4 */;
CREATE TABLE `accounts` ( CREATE TABLE `accounts` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT, `id` int unsigned NOT NULL AUTO_INCREMENT,
`id_token` varchar(36) DEFAULT NULL,
`name` varchar(255) DEFAULT NULL, `name` varchar(255) DEFAULT NULL,
`organization_name` varchar(150) DEFAULT NULL, `organization_name` varchar(150) DEFAULT NULL,
`organization_url` varchar(150) DEFAULT NULL, `organization_url` varchar(150) DEFAULT NULL,
`public_profile` tinyint(1) NOT NULL DEFAULT '0', `public_profile` tinyint(1) NOT NULL DEFAULT '0',
`url_slug` varchar(150) DEFAULT NULL, `url_slug` varchar(150) DEFAULT NULL,
`flags` json DEFAULT NULL,
`created_on` datetime NOT NULL, `created_on` datetime NOT NULL,
`modified_on` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP, `modified_on` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`stripe_customer_id` varchar(255) CHARACTER SET utf8 COLLATE utf8_bin DEFAULT NULL, `stripe_customer_id` varchar(255) CHARACTER SET utf8mb3 COLLATE utf8mb3_bin DEFAULT NULL,
PRIMARY KEY (`id`), PRIMARY KEY (`id`),
UNIQUE KEY `url_slug_idx` (`url_slug`), UNIQUE KEY `url_slug_idx` (`url_slug`),
UNIQUE KEY `stripe_customer_id` (`stripe_customer_id`) UNIQUE KEY `stripe_customer_id` (`stripe_customer_id`),
) ENGINE=InnoDB DEFAULT CHARSET=utf8; UNIQUE KEY `id_token` (`id_token`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb3;
/*!40101 SET character_set_client = @saved_cs_client */; /*!40101 SET character_set_client = @saved_cs_client */;
-- --
@@ -115,16 +119,43 @@ CREATE TABLE `accounts` (
DROP TABLE IF EXISTS `api_keys`; DROP TABLE IF EXISTS `api_keys`;
/*!40101 SET @saved_cs_client = @@character_set_client */; /*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */; /*!40101 SET character_set_client = utf8mb4 */;
CREATE TABLE `api_keys` ( CREATE TABLE `api_keys` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT, `id` int unsigned NOT NULL AUTO_INCREMENT,
`account_id` int unsigned DEFAULT NULL,
`user_id` int unsigned DEFAULT NULL,
`api_key` varchar(255) DEFAULT NULL, `api_key` varchar(255) DEFAULT NULL,
`grants` text, `grants` text,
`audience` text NOT NULL,
`token_lookup` varchar(16) NOT NULL,
`token_hashed` varchar(256) NOT NULL,
`last_seen` datetime DEFAULT NULL,
`created_on` datetime NOT NULL, `created_on` datetime NOT NULL,
`modified_on` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP, `modified_on` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
PRIMARY KEY (`id`), PRIMARY KEY (`id`),
UNIQUE KEY `api_key` (`api_key`) UNIQUE KEY `api_key` (`api_key`),
) ENGINE=InnoDB DEFAULT CHARSET=utf8; KEY `api_keys_account_fk` (`account_id`),
KEY `api_keys_user_fk` (`user_id`),
CONSTRAINT `api_keys_account_fk` FOREIGN KEY (`account_id`) REFERENCES `accounts` (`id`),
CONSTRAINT `api_keys_user_fk` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb3;
/*!40101 SET character_set_client = @saved_cs_client */;
--
-- Table structure for table `api_keys_monitors`
--
DROP TABLE IF EXISTS `api_keys_monitors`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8mb4 */;
CREATE TABLE `api_keys_monitors` (
`api_key_id` int unsigned NOT NULL,
`monitor_id` int unsigned NOT NULL,
PRIMARY KEY (`api_key_id`,`monitor_id`),
KEY `api_keys_monitors_monitors_fk` (`monitor_id`),
CONSTRAINT `api_keys_monitors_api_keys_fk` FOREIGN KEY (`api_key_id`) REFERENCES `api_keys` (`id`) ON DELETE CASCADE,
CONSTRAINT `api_keys_monitors_monitors_fk` FOREIGN KEY (`monitor_id`) REFERENCES `monitors` (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;
/*!40101 SET character_set_client = @saved_cs_client */; /*!40101 SET character_set_client = @saved_cs_client */;
-- --
@@ -133,7 +164,7 @@ CREATE TABLE `api_keys` (
DROP TABLE IF EXISTS `combust_cache`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8mb4 */;
CREATE TABLE `combust_cache` (
`id` varchar(64) NOT NULL,
`type` varchar(20) NOT NULL DEFAULT '',
@@ -146,7 +177,7 @@ CREATE TABLE `combust_cache` (
PRIMARY KEY (`id`,`type`),
KEY `expire_idx` (`expire`),
KEY `purge_idx` (`purge_key`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb3;
/*!40101 SET character_set_client = @saved_cs_client */;
--
-- Table structure for table `combust_secrets`
--
DROP TABLE IF EXISTS `combust_secrets`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8mb4 */;
CREATE TABLE `combust_secrets` (
`secret_ts` int unsigned NOT NULL,
`expires_ts` int unsigned NOT NULL,
`type` varchar(32) NOT NULL,
`secret` char(32) DEFAULT NULL,
PRIMARY KEY (`type`,`secret_ts`),
@@ -172,16 +203,16 @@ CREATE TABLE `combust_secrets` (
DROP TABLE IF EXISTS `dns_roots`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8mb4 */;
CREATE TABLE `dns_roots` (
`id` int unsigned NOT NULL AUTO_INCREMENT,
`origin` varchar(255) NOT NULL,
`vendor_available` tinyint NOT NULL DEFAULT '0',
`general_use` tinyint NOT NULL DEFAULT '0',
`ns_list` varchar(255) NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `origin` (`origin`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb3;
/*!40101 SET character_set_client = @saved_cs_client */;
--
-- Table structure for table `log_scores`
--
DROP TABLE IF EXISTS `log_scores`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8mb4 */;
CREATE TABLE `log_scores` (
`id` bigint unsigned NOT NULL AUTO_INCREMENT,
`monitor_id` int unsigned DEFAULT NULL,
`server_id` int unsigned NOT NULL,
`ts` datetime NOT NULL,
`score` double NOT NULL DEFAULT '0',
`step` double NOT NULL DEFAULT '0',
`offset` double DEFAULT NULL,
`rtt` mediumint DEFAULT NULL,
`attributes` text,
PRIMARY KEY (`id`),
KEY `log_scores_server_ts_idx` (`server_id`,`ts`),
@@ -207,7 +238,7 @@ CREATE TABLE `log_scores` (
KEY `log_score_monitor_id_fk` (`monitor_id`),
CONSTRAINT `log_score_monitor_id_fk` FOREIGN KEY (`monitor_id`) REFERENCES `monitors` (`id`),
CONSTRAINT `log_scores_server` FOREIGN KEY (`server_id`) REFERENCES `servers` (`id`) ON DELETE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb3;
/*!40101 SET character_set_client = @saved_cs_client */;
--
-- Table structure for table `log_scores_archive_status`
--
DROP TABLE IF EXISTS `log_scores_archive_status`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8mb4 */;
CREATE TABLE `log_scores_archive_status` (
`id` int unsigned NOT NULL AUTO_INCREMENT,
`archiver` varchar(255) NOT NULL,
`log_score_id` bigint unsigned DEFAULT NULL,
`modified_on` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
PRIMARY KEY (`id`),
UNIQUE KEY `archiver` (`archiver`),
KEY `log_score_id` (`log_score_id`),
CONSTRAINT `log_score_id` FOREIGN KEY (`log_score_id`) REFERENCES `log_scores` (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb3;
/*!40101 SET character_set_client = @saved_cs_client */;
--
-- Table structure for table `logs`
--
DROP TABLE IF EXISTS `logs`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8mb4 */;
CREATE TABLE `logs` (
`id` int unsigned NOT NULL AUTO_INCREMENT,
`account_id` int unsigned DEFAULT NULL,
`server_id` int unsigned DEFAULT NULL,
`user_id` int unsigned DEFAULT NULL,
`vendor_zone_id` int unsigned DEFAULT NULL,
`type` varchar(50) DEFAULT NULL,
`message` text,
`changes` text,
@@ -273,7 +286,39 @@ CREATE TABLE `logs` (
CONSTRAINT `logs_vendor_zone_id` FOREIGN KEY (`vendor_zone_id`) REFERENCES `vendor_zones` (`id`) ON DELETE CASCADE,
CONSTRAINT `server_logs_server_id` FOREIGN KEY (`server_id`) REFERENCES `servers` (`id`) ON DELETE CASCADE,
CONSTRAINT `server_logs_user_id` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb3;
/*!40101 SET character_set_client = @saved_cs_client */;
--
-- Table structure for table `monitor_registrations`
--
DROP TABLE IF EXISTS `monitor_registrations`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8mb4 */;
CREATE TABLE `monitor_registrations` (
`id` int unsigned NOT NULL AUTO_INCREMENT,
`monitor_id` int unsigned DEFAULT NULL,
`request_token` varchar(128) NOT NULL,
`verification_token` varchar(32) NOT NULL,
`ip4` varchar(15) NOT NULL DEFAULT '',
`ip6` varchar(39) NOT NULL DEFAULT '',
`tls_name` varchar(255) DEFAULT '',
`hostname` varchar(256) NOT NULL DEFAULT '',
`location_code` varchar(5) NOT NULL DEFAULT '',
`account_id` int unsigned DEFAULT NULL,
`client` varchar(256) NOT NULL DEFAULT '',
`status` enum('pending','accepted','completed','rejected','cancelled') NOT NULL,
`last_seen` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
`created_on` datetime NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `request_token` (`request_token`),
UNIQUE KEY `verification_token` (`verification_token`),
KEY `monitor_registrations_monitor_id_fk` (`monitor_id`),
KEY `monitor_registrations_account_fk` (`account_id`),
CONSTRAINT `monitor_registrations_account_fk` FOREIGN KEY (`account_id`) REFERENCES `accounts` (`id`),
CONSTRAINT `monitor_registrations_monitor_id_fk` FOREIGN KEY (`monitor_id`) REFERENCES `monitors` (`id`) ON DELETE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;
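--
-- Illustrative example (not part of the dump): a pending registration is
-- located by its unique request_token before being promoted. The token value
-- below is hypothetical.
--
-- SELECT id, status, verification_token
--   FROM monitor_registrations
--  WHERE request_token = 'example-request-token'
--    AND status = 'pending';
--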
/*!40101 SET character_set_client = @saved_cs_client */;
--
-- Table structure for table `monitors`
--
DROP TABLE IF EXISTS `monitors`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8mb4 */;
CREATE TABLE `monitors` (
`id` int unsigned NOT NULL AUTO_INCREMENT,
`id_token` varchar(36) DEFAULT NULL,
`type` enum('monitor','score') NOT NULL DEFAULT 'monitor',
`user_id` int unsigned DEFAULT NULL,
`account_id` int unsigned DEFAULT NULL,
`hostname` varchar(255) NOT NULL DEFAULT '',
`location` varchar(255) NOT NULL DEFAULT '',
`ip` varchar(40) DEFAULT NULL,
`ip_version` enum('v4','v6') DEFAULT NULL,
@@ -300,15 +346,20 @@ CREATE TABLE `monitors` (
`last_seen` datetime(6) DEFAULT NULL,
`last_submit` datetime(6) DEFAULT NULL,
`created_on` datetime NOT NULL,
`deleted_on` datetime DEFAULT NULL,
`is_current` tinyint(1) DEFAULT '1',
PRIMARY KEY (`id`),
UNIQUE KEY `api_key` (`api_key`),
UNIQUE KEY `monitors_tls_name` (`tls_name`,`ip_version`),
UNIQUE KEY `token_id` (`id_token`),
UNIQUE KEY `id_token` (`id_token`),
UNIQUE KEY `ip` (`ip`,`is_current`),
KEY `monitors_user_id` (`user_id`),
KEY `monitors_account_fk` (`account_id`),
KEY `type_status` (`type`,`status`),
CONSTRAINT `monitors_account_fk` FOREIGN KEY (`account_id`) REFERENCES `accounts` (`id`),
CONSTRAINT `monitors_user_id` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb3;
/*!40101 SET character_set_client = @saved_cs_client */;
--
-- Temporary view structure for view `monitors_data`
--
DROP TABLE IF EXISTS `monitors_data`;
/*!50001 DROP VIEW IF EXISTS `monitors_data`*/;
SET @saved_cs_client = @@character_set_client;
SET character_set_client = utf8mb4;
/*!50001 CREATE VIEW `monitors_data` AS SELECT
1 AS `id`,
1 AS `account_id`,
@@ -332,18 +383,40 @@ SET character_set_client = utf8;
1 AS `last_submit` */;
SET character_set_client = @saved_cs_client;
--
-- Table structure for table `oidc_public_keys`
--
DROP TABLE IF EXISTS `oidc_public_keys`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8mb4 */;
CREATE TABLE `oidc_public_keys` (
`id` bigint NOT NULL AUTO_INCREMENT,
`kid` varchar(255) NOT NULL,
`public_key` text NOT NULL,
`algorithm` varchar(20) NOT NULL,
`created_at` timestamp NOT NULL,
`expires_at` timestamp NULL DEFAULT NULL,
`active` tinyint(1) NOT NULL DEFAULT '1',
PRIMARY KEY (`id`),
UNIQUE KEY `kid` (`kid`),
KEY `idx_kid` (`kid`),
KEY `idx_active_expires` (`active`,`expires_at`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;
/*!40101 SET character_set_client = @saved_cs_client */;
--
-- Table structure for table `schema_revision`
--
DROP TABLE IF EXISTS `schema_revision`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8mb4 */;
CREATE TABLE `schema_revision` (
`revision` smallint unsigned NOT NULL DEFAULT '0',
`schema_name` varchar(30) NOT NULL,
PRIMARY KEY (`schema_name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb3;
/*!40101 SET character_set_client = @saved_cs_client */;
--
-- Table structure for table `scorer_status`
--
DROP TABLE IF EXISTS `scorer_status`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8mb4 */;
CREATE TABLE `scorer_status` (
`id` int unsigned NOT NULL AUTO_INCREMENT,
`scorer_id` int unsigned NOT NULL,
`log_score_id` bigint unsigned NOT NULL,
`modified_on` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
PRIMARY KEY (`id`),
KEY `scorer_log_score_id` (`log_score_id`),
KEY `scores_status_monitor_id_fk` (`scorer_id`),
CONSTRAINT `scorer_log_score_id` FOREIGN KEY (`log_score_id`) REFERENCES `log_scores` (`id`),
CONSTRAINT `scores_status_monitor_id_fk` FOREIGN KEY (`scorer_id`) REFERENCES `monitors` (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb3;
/*!40101 SET character_set_client = @saved_cs_client */;
--
-- Table structure for table `server_alerts`
--
DROP TABLE IF EXISTS `server_alerts`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8mb4 */;
CREATE TABLE `server_alerts` (
`server_id` int unsigned NOT NULL,
`last_score` double NOT NULL,
`first_email_time` datetime NOT NULL,
`last_email_time` datetime DEFAULT NULL,
PRIMARY KEY (`server_id`),
CONSTRAINT `server_alerts_server` FOREIGN KEY (`server_id`) REFERENCES `servers` (`id`) ON DELETE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb3;
/*!40101 SET character_set_client = @saved_cs_client */;
--
-- Table structure for table `server_notes`
--
DROP TABLE IF EXISTS `server_notes`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8mb4 */;
CREATE TABLE `server_notes` (
`id` int unsigned NOT NULL AUTO_INCREMENT,
`server_id` int unsigned NOT NULL,
`name` varchar(255) NOT NULL DEFAULT '',
`note` text NOT NULL,
`created_on` datetime NOT NULL,
@@ -401,7 +474,7 @@ CREATE TABLE `server_notes` (
UNIQUE KEY `server` (`server_id`,`name`),
KEY `name` (`name`),
CONSTRAINT `server_notes_ibfk_1` FOREIGN KEY (`server_id`) REFERENCES `servers` (`id`) ON DELETE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb3;
/*!40101 SET character_set_client = @saved_cs_client */;
--
-- Table structure for table `server_scores`
--
DROP TABLE IF EXISTS `server_scores`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8mb4 */;
CREATE TABLE `server_scores` (
`id` bigint unsigned NOT NULL AUTO_INCREMENT,
`monitor_id` int unsigned NOT NULL,
`server_id` int unsigned NOT NULL,
`score_ts` datetime DEFAULT NULL,
`score_raw` double NOT NULL DEFAULT '0',
`stratum` tinyint unsigned DEFAULT NULL,
`status` enum('candidate','testing','active','paused') NOT NULL DEFAULT 'candidate',
`queue_ts` datetime DEFAULT NULL,
`created_on` datetime NOT NULL,
`modified_on` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`constraint_violation_type` varchar(50) DEFAULT NULL,
`constraint_violation_since` datetime DEFAULT NULL,
`last_constraint_check` datetime DEFAULT NULL,
`pause_reason` varchar(20) DEFAULT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `server_id` (`server_id`,`monitor_id`),
KEY `monitor_id` (`monitor_id`,`server_id`),
KEY `monitor_id_2` (`monitor_id`,`score_ts`),
KEY `idx_constraint_violation` (`constraint_violation_type`,`constraint_violation_since`),
KEY `idx_paused_monitors` (`status`,`last_constraint_check`,`pause_reason`),
CONSTRAINT `server_score_monitor_fk` FOREIGN KEY (`monitor_id`) REFERENCES `monitors` (`id`),
CONSTRAINT `server_score_server_id` FOREIGN KEY (`server_id`) REFERENCES `servers` (`id`) ON DELETE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb3;
/*!40101 SET character_set_client = @saved_cs_client */;
--
-- Table structure for table `server_urls`
--
DROP TABLE IF EXISTS `server_urls`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8mb4 */;
CREATE TABLE `server_urls` (
`id` int unsigned NOT NULL AUTO_INCREMENT,
`server_id` int unsigned NOT NULL,
`url` varchar(255) NOT NULL,
PRIMARY KEY (`id`),
KEY `server` (`server_id`),
CONSTRAINT `server_urls_server` FOREIGN KEY (`server_id`) REFERENCES `servers` (`id`) ON DELETE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb3;
/*!40101 SET character_set_client = @saved_cs_client */;
--
-- Table structure for table `server_verifications`
--
DROP TABLE IF EXISTS `server_verifications`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8mb4 */;
CREATE TABLE `server_verifications` (
`id` int unsigned NOT NULL AUTO_INCREMENT,
`server_id` int unsigned NOT NULL,
`user_id` int unsigned DEFAULT NULL,
`user_ip` varchar(45) NOT NULL DEFAULT '',
`indirect_ip` varchar(45) NOT NULL DEFAULT '',
`verified_on` datetime DEFAULT NULL,
`token` varchar(36) DEFAULT NULL,
`created_on` datetime NOT NULL,
`modified_on` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
PRIMARY KEY (`id`),
UNIQUE KEY `server` (`server_id`),
UNIQUE KEY `token` (`token`),
KEY `server_verifications_ibfk_2` (`user_id`),
CONSTRAINT `server_verifications_ibfk_1` FOREIGN KEY (`server_id`) REFERENCES `servers` (`id`) ON DELETE CASCADE,
CONSTRAINT `server_verifications_ibfk_2` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`) ON DELETE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb3;
/*!40101 SET character_set_client = @saved_cs_client */;
--
-- Table structure for table `server_verifications_history`
--
DROP TABLE IF EXISTS `server_verifications_history`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8mb4 */;
CREATE TABLE `server_verifications_history` (
`id` int unsigned NOT NULL AUTO_INCREMENT,
`server_id` int unsigned NOT NULL,
`user_id` int unsigned DEFAULT NULL,
`user_ip` varchar(45) NOT NULL DEFAULT '',
`indirect_ip` varchar(45) NOT NULL DEFAULT '',
`verified_on` datetime DEFAULT NULL,
`created_on` datetime NOT NULL,
`modified_on` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
PRIMARY KEY (`id`),
KEY `server_verifications_history_ibfk_1` (`server_id`),
KEY `server_verifications_history_ibfk_2` (`user_id`),
CONSTRAINT `server_verifications_history_ibfk_1` FOREIGN KEY (`server_id`) REFERENCES `servers` (`id`) ON DELETE CASCADE,
CONSTRAINT `server_verifications_history_ibfk_2` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`) ON DELETE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb3;
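--
-- Illustrative example (not part of the dump): UNIQUE KEY `server` on
-- server_verifications keeps one current row per server, while
-- server_verifications_history retains every attempt. The server id below is
-- hypothetical.
--
-- SELECT v.server_id, v.verified_on,
--        (SELECT COUNT(*) FROM server_verifications_history h
--          WHERE h.server_id = v.server_id) AS attempts
--   FROM server_verifications v
--  WHERE v.server_id = 1;
--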
/*!40101 SET character_set_client = @saved_cs_client */;
--
-- Table structure for table `server_zones`
--
DROP TABLE IF EXISTS `server_zones`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8mb4 */;
CREATE TABLE `server_zones` (
`server_id` int unsigned NOT NULL,
`zone_id` int unsigned NOT NULL,
PRIMARY KEY (`server_id`,`zone_id`),
KEY `locations_zone` (`zone_id`),
CONSTRAINT `locations_server` FOREIGN KEY (`server_id`) REFERENCES `servers` (`id`) ON DELETE CASCADE,
CONSTRAINT `locations_zone` FOREIGN KEY (`zone_id`) REFERENCES `zones` (`id`) ON DELETE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb3;
/*!40101 SET character_set_client = @saved_cs_client */;
--
-- Table structure for table `servers`
--
DROP TABLE IF EXISTS `servers`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8mb4 */;
CREATE TABLE `servers` (
`id` int unsigned NOT NULL AUTO_INCREMENT,
`ip` varchar(40) NOT NULL,
`ip_version` enum('v4','v6') NOT NULL DEFAULT 'v4',
`user_id` int unsigned DEFAULT NULL,
`account_id` int unsigned DEFAULT NULL,
`hostname` varchar(255) DEFAULT NULL,
`stratum` tinyint unsigned DEFAULT NULL,
`in_pool` tinyint unsigned NOT NULL DEFAULT '0',
`in_server_list` tinyint unsigned NOT NULL DEFAULT '0',
`netspeed` int unsigned NOT NULL DEFAULT '10000',
`netspeed_target` int unsigned NOT NULL DEFAULT '10000',
`created_on` datetime NOT NULL,
`updated_on` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`score_ts` datetime DEFAULT NULL,
`score_raw` double NOT NULL DEFAULT '0',
`deletion_on` date DEFAULT NULL,
`flags` varchar(4096) NOT NULL DEFAULT '{}',
PRIMARY KEY (`id`),
UNIQUE KEY `ip` (`ip`),
KEY `admin` (`user_id`),
@@ -495,7 +627,7 @@ CREATE TABLE `servers` (
KEY `server_account_fk` (`account_id`),
CONSTRAINT `server_account_fk` FOREIGN KEY (`account_id`) REFERENCES `accounts` (`id`),
CONSTRAINT `servers_user_ibfk` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb3;
/*!40101 SET character_set_client = @saved_cs_client */;
--
-- Table structure for table `servers_monitor_review`
--
DROP TABLE IF EXISTS `servers_monitor_review`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8mb4 */;
CREATE TABLE `servers_monitor_review` (
`server_id` int unsigned NOT NULL,
`last_review` datetime DEFAULT NULL,
`next_review` datetime DEFAULT NULL,
`last_change` datetime DEFAULT NULL,
@@ -514,7 +646,7 @@ CREATE TABLE `servers_monitor_review` (
PRIMARY KEY (`server_id`),
KEY `next_review` (`next_review`),
CONSTRAINT `server_monitor_review_server_id_fk` FOREIGN KEY (`server_id`) REFERENCES `servers` (`id`) ON DELETE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb3;
/*!40101 SET character_set_client = @saved_cs_client */;
--
-- Table structure for table `system_settings`
--
DROP TABLE IF EXISTS `system_settings`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8mb4 */;
CREATE TABLE `system_settings` (
`id` int unsigned NOT NULL AUTO_INCREMENT,
`key` varchar(255) NOT NULL,
`value` text NOT NULL,
`created_on` datetime NOT NULL,
`modified_on` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
PRIMARY KEY (`id`),
UNIQUE KEY `key` (`key`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb3;
/*!40101 SET character_set_client = @saved_cs_client */;
--
-- Table structure for table `user_equipment_applications`
--
DROP TABLE IF EXISTS `user_equipment_applications`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8mb4 */;
CREATE TABLE `user_equipment_applications` (
`id` int unsigned NOT NULL AUTO_INCREMENT,
`user_id` int unsigned NOT NULL,
`application` text,
`contact_information` text,
`status` enum('New','Pending','Maybe','No','Approved') NOT NULL DEFAULT 'New',
PRIMARY KEY (`id`),
KEY `user_equipment_applications_user_id` (`user_id`),
CONSTRAINT `user_equipment_applications_user_id` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb3;
/*!40101 SET character_set_client = @saved_cs_client */;
--
@@ -560,19 +692,21 @@ CREATE TABLE `user_equipment_applications` (
DROP TABLE IF EXISTS `user_identities`; DROP TABLE IF EXISTS `user_identities`;
/*!40101 SET @saved_cs_client = @@character_set_client */; /*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */; /*!40101 SET character_set_client = utf8mb4 */;
CREATE TABLE `user_identities` ( CREATE TABLE `user_identities` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT, `id` int unsigned NOT NULL AUTO_INCREMENT,
`profile_id` varchar(255) NOT NULL, `profile_id` varchar(255) NOT NULL,
`user_id` int(10) unsigned NOT NULL, `user_id` int unsigned NOT NULL,
`provider` varchar(255) NOT NULL, `provider` varchar(255) NOT NULL,
`data` text, `data` text,
`email` varchar(255) DEFAULT NULL, `email` varchar(255) DEFAULT NULL,
`created_on` datetime NOT NULL DEFAULT '2003-01-27 00:00:00',
`modified_on` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
PRIMARY KEY (`id`), PRIMARY KEY (`id`),
UNIQUE KEY `profile_id` (`profile_id`), UNIQUE KEY `profile_id` (`profile_id`),
KEY `user_identities_user_id` (`user_id`), KEY `user_identities_user_id` (`user_id`),
CONSTRAINT `user_identities_user_id` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`) ON DELETE CASCADE CONSTRAINT `user_identities_user_id` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`) ON DELETE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8; ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb3;
/*!40101 SET character_set_client = @saved_cs_client */; /*!40101 SET character_set_client = @saved_cs_client */;
-- --
@@ -581,16 +715,60 @@ CREATE TABLE `user_identities` (
DROP TABLE IF EXISTS `user_privileges`; DROP TABLE IF EXISTS `user_privileges`;
/*!40101 SET @saved_cs_client = @@character_set_client */; /*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */; /*!40101 SET character_set_client = utf8mb4 */;
CREATE TABLE `user_privileges` ( CREATE TABLE `user_privileges` (
`user_id` int(10) unsigned NOT NULL, `user_id` int unsigned NOT NULL,
`see_all_servers` tinyint(1) NOT NULL DEFAULT '0', `see_all_servers` tinyint(1) NOT NULL DEFAULT '0',
`vendor_admin` tinyint(4) NOT NULL DEFAULT '0', `vendor_admin` tinyint NOT NULL DEFAULT '0',
`equipment_admin` tinyint(4) NOT NULL DEFAULT '0', `equipment_admin` tinyint NOT NULL DEFAULT '0',
`support_staff` tinyint(4) NOT NULL DEFAULT '0', `support_staff` tinyint NOT NULL DEFAULT '0',
`monitor_admin` tinyint NOT NULL DEFAULT '0',
PRIMARY KEY (`user_id`), PRIMARY KEY (`user_id`),
CONSTRAINT `user_privileges_user` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`) ON DELETE CASCADE CONSTRAINT `user_privileges_user` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`) ON DELETE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8; ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb3;
/*!40101 SET character_set_client = @saved_cs_client */;
--
-- Table structure for table `user_sessions`
--
DROP TABLE IF EXISTS `user_sessions`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8mb4 */;
CREATE TABLE `user_sessions` (
`id` int unsigned NOT NULL AUTO_INCREMENT,
`user_id` int unsigned NOT NULL,
`token_lookup` varchar(16) NOT NULL,
`token_hashed` varchar(256) NOT NULL,
`last_seen` datetime DEFAULT NULL,
`created_on` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (`id`),
KEY `user_sessions_user_fk` (`user_id`),
KEY `token_lookup` (`token_lookup`),
CONSTRAINT `user_sessions_user_fk` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;
/*!40101 SET character_set_client = @saved_cs_client */;
--
-- Table structure for table `user_tasks`
--
DROP TABLE IF EXISTS `user_tasks`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8mb4 */;
CREATE TABLE `user_tasks` (
`id` int unsigned NOT NULL AUTO_INCREMENT,
`user_id` int unsigned DEFAULT NULL,
`task` enum('download','delete') NOT NULL,
`status` text NOT NULL,
`traceid` varchar(32) NOT NULL DEFAULT '',
`execute_on` datetime DEFAULT NULL,
`created_on` datetime NOT NULL,
`modified_on` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
PRIMARY KEY (`id`),
KEY `user_tasks_user_fk` (`user_id`),
CONSTRAINT `user_tasks_user_fk` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb3;
/*!40101 SET character_set_client = @saved_cs_client */; /*!40101 SET character_set_client = @saved_cs_client */;
-- --
@@ -599,17 +777,20 @@ CREATE TABLE `user_privileges` (
DROP TABLE IF EXISTS `users`; DROP TABLE IF EXISTS `users`;
/*!40101 SET @saved_cs_client = @@character_set_client */; /*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */; /*!40101 SET character_set_client = utf8mb4 */;
CREATE TABLE `users` ( CREATE TABLE `users` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT, `id` int unsigned NOT NULL AUTO_INCREMENT,
`id_token` varchar(36) DEFAULT NULL,
`email` varchar(255) NOT NULL, `email` varchar(255) NOT NULL,
`name` varchar(255) DEFAULT NULL, `name` varchar(255) DEFAULT NULL,
`username` varchar(40) DEFAULT NULL, `username` varchar(40) DEFAULT NULL,
`public_profile` tinyint(1) NOT NULL DEFAULT '0', `public_profile` tinyint(1) NOT NULL DEFAULT '0',
`deletion_on` datetime DEFAULT NULL,
PRIMARY KEY (`id`), PRIMARY KEY (`id`),
UNIQUE KEY `email` (`email`), UNIQUE KEY `email` (`email`),
UNIQUE KEY `username` (`username`) UNIQUE KEY `username` (`username`),
) ENGINE=InnoDB DEFAULT CHARSET=utf8; UNIQUE KEY `id_token` (`id_token`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb3;
/*!40101 SET character_set_client = @saved_cs_client */; /*!40101 SET character_set_client = @saved_cs_client */;
-- --
@@ -618,34 +799,37 @@ CREATE TABLE `users` (
DROP TABLE IF EXISTS `vendor_zones`; DROP TABLE IF EXISTS `vendor_zones`;
/*!40101 SET @saved_cs_client = @@character_set_client */; /*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */; /*!40101 SET character_set_client = utf8mb4 */;
CREATE TABLE `vendor_zones` ( CREATE TABLE `vendor_zones` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT, `id` int unsigned NOT NULL AUTO_INCREMENT,
`id_token` varchar(36) DEFAULT NULL,
`zone_name` varchar(90) NOT NULL, `zone_name` varchar(90) NOT NULL,
`status` enum('New','Pending','Approved','Rejected') NOT NULL DEFAULT 'New', `status` enum('New','Pending','Approved','Rejected') NOT NULL DEFAULT 'New',
`user_id` int(10) unsigned DEFAULT NULL, `user_id` int unsigned DEFAULT NULL,
`organization_name` varchar(255) DEFAULT NULL, `organization_name` varchar(255) DEFAULT NULL,
`client_type` enum('ntp','sntp','legacy') NOT NULL DEFAULT 'sntp', `client_type` enum('ntp','sntp','legacy') NOT NULL DEFAULT 'sntp',
`contact_information` text, `contact_information` text,
`request_information` text, `request_information` text,
`device_count` int(10) unsigned DEFAULT NULL, `device_information` text,
`device_count` int unsigned DEFAULT NULL,
`opensource` tinyint(1) NOT NULL DEFAULT '0', `opensource` tinyint(1) NOT NULL DEFAULT '0',
`opensource_info` text, `opensource_info` text,
`rt_ticket` smallint(5) unsigned DEFAULT NULL, `rt_ticket` smallint unsigned DEFAULT NULL,
`approved_on` datetime DEFAULT NULL, `approved_on` datetime DEFAULT NULL,
`created_on` datetime NOT NULL, `created_on` datetime NOT NULL,
`modified_on` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP, `modified_on` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`dns_root_id` int(10) unsigned NOT NULL, `dns_root_id` int unsigned NOT NULL,
`account_id` int(10) unsigned DEFAULT NULL, `account_id` int unsigned DEFAULT NULL,
PRIMARY KEY (`id`), PRIMARY KEY (`id`),
UNIQUE KEY `zone_name` (`zone_name`,`dns_root_id`), UNIQUE KEY `zone_name` (`zone_name`,`dns_root_id`),
UNIQUE KEY `id_token` (`id_token`),
KEY `vendor_zones_user_id` (`user_id`), KEY `vendor_zones_user_id` (`user_id`),
KEY `dns_root_fk` (`dns_root_id`), KEY `dns_root_fk` (`dns_root_id`),
KEY `vendor_zone_account_fk` (`account_id`), KEY `vendor_zone_account_fk` (`account_id`),
CONSTRAINT `dns_root_fk` FOREIGN KEY (`dns_root_id`) REFERENCES `dns_roots` (`id`), CONSTRAINT `dns_root_fk` FOREIGN KEY (`dns_root_id`) REFERENCES `dns_roots` (`id`),
CONSTRAINT `vendor_zone_account_fk` FOREIGN KEY (`account_id`) REFERENCES `accounts` (`id`), CONSTRAINT `vendor_zone_account_fk` FOREIGN KEY (`account_id`) REFERENCES `accounts` (`id`),
CONSTRAINT `vendor_zones_user_id` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`) CONSTRAINT `vendor_zones_user_id` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8; ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb3;
/*!40101 SET character_set_client = @saved_cs_client */; /*!40101 SET character_set_client = @saved_cs_client */;
-- --
@@ -654,19 +838,20 @@ CREATE TABLE `vendor_zones` (
DROP TABLE IF EXISTS `zone_server_counts`; DROP TABLE IF EXISTS `zone_server_counts`;
/*!40101 SET @saved_cs_client = @@character_set_client */; /*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */; /*!40101 SET character_set_client = utf8mb4 */;
CREATE TABLE `zone_server_counts` ( CREATE TABLE `zone_server_counts` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT, `id` int unsigned NOT NULL AUTO_INCREMENT,
`zone_id` int(10) unsigned NOT NULL, `zone_id` int unsigned NOT NULL,
`ip_version` enum('v4','v6') NOT NULL, `ip_version` enum('v4','v6') NOT NULL,
`date` date NOT NULL, `date` date NOT NULL,
`count_active` mediumint(8) unsigned NOT NULL, `count_active` mediumint unsigned NOT NULL,
`count_registered` mediumint(8) unsigned NOT NULL, `count_registered` mediumint unsigned NOT NULL,
`netspeed_active` int(10) unsigned NOT NULL, `netspeed_active` int unsigned NOT NULL,
PRIMARY KEY (`id`), PRIMARY KEY (`id`),
UNIQUE KEY `zone_date` (`zone_id`,`date`,`ip_version`), UNIQUE KEY `zone_date` (`zone_id`,`date`,`ip_version`),
KEY `date_idx` (`date`,`zone_id`),
CONSTRAINT `zone_server_counts` FOREIGN KEY (`zone_id`) REFERENCES `zones` (`id`) ON DELETE CASCADE CONSTRAINT `zone_server_counts` FOREIGN KEY (`zone_id`) REFERENCES `zones` (`id`) ON DELETE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8; ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb3;
/*!40101 SET character_set_client = @saved_cs_client */; /*!40101 SET character_set_client = @saved_cs_client */;
-- --
@@ -675,18 +860,18 @@ CREATE TABLE `zone_server_counts` (
DROP TABLE IF EXISTS `zones`; DROP TABLE IF EXISTS `zones`;
/*!40101 SET @saved_cs_client = @@character_set_client */; /*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */; /*!40101 SET character_set_client = utf8mb4 */;
CREATE TABLE `zones` ( CREATE TABLE `zones` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT, `id` int unsigned NOT NULL AUTO_INCREMENT,
`name` varchar(255) NOT NULL, `name` varchar(255) NOT NULL,
`description` varchar(255) DEFAULT NULL, `description` varchar(255) DEFAULT NULL,
`parent_id` int(10) unsigned DEFAULT NULL, `parent_id` int unsigned DEFAULT NULL,
`dns` tinyint(1) NOT NULL DEFAULT '1', `dns` tinyint(1) NOT NULL DEFAULT '1',
PRIMARY KEY (`id`), PRIMARY KEY (`id`),
UNIQUE KEY `name` (`name`), UNIQUE KEY `name` (`name`),
KEY `parent` (`parent_id`), KEY `parent` (`parent_id`),
CONSTRAINT `zones_parent` FOREIGN KEY (`parent_id`) REFERENCES `zones` (`id`) CONSTRAINT `zones_parent` FOREIGN KEY (`parent_id`) REFERENCES `zones` (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8; ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb3;
/*!40101 SET character_set_client = @saved_cs_client */; /*!40101 SET character_set_client = @saved_cs_client */;
-- --
@@ -701,8 +886,8 @@ CREATE TABLE `zones` (
/*!50001 SET character_set_results = utf8mb4 */; /*!50001 SET character_set_results = utf8mb4 */;
/*!50001 SET collation_connection = utf8mb4_general_ci */; /*!50001 SET collation_connection = utf8mb4_general_ci */;
/*!50001 CREATE ALGORITHM=UNDEFINED */ /*!50001 CREATE ALGORITHM=UNDEFINED */
/*!50013 DEFINER=`askntp`@`10.%` SQL SECURITY DEFINER */
/*!50001 VIEW `monitors_data` AS select `monitors`.`id` AS `id`,`monitors`.`account_id` AS `account_id`,`monitors`.`type` AS `type`,if((`monitors`.`type` = 'score'),`monitors`.`name`,substring_index(`monitors`.`tls_name`,'.',1)) AS `name`,`monitors`.`ip` AS `ip`,`monitors`.`ip_version` AS `ip_version`,`monitors`.`status` AS `status`,`monitors`.`client_version` AS `client_version`,`monitors`.`last_seen` AS `last_seen`,`monitors`.`last_submit` AS `last_submit` from `monitors` where (not((`monitors`.`tls_name` like '%.system'))) */; /*!50001 VIEW `monitors_data` AS select `monitors`.`id` AS `id`,`monitors`.`account_id` AS `account_id`,`monitors`.`type` AS `type`,if((`monitors`.`type` = 'score'),`monitors`.`hostname`,substring_index(`monitors`.`tls_name`,'.',1)) AS `name`,`monitors`.`ip` AS `ip`,`monitors`.`ip_version` AS `ip_version`,`monitors`.`status` AS `status`,`monitors`.`client_version` AS `client_version`,`monitors`.`last_seen` AS `last_seen`,`monitors`.`last_submit` AS `last_submit` from `monitors` where (not((`monitors`.`tls_name` like '%.system'))) */;
/*!50001 SET character_set_client = @saved_cs_client */; /*!50001 SET character_set_client = @saved_cs_client */;
/*!50001 SET character_set_results = @saved_cs_results */; /*!50001 SET character_set_results = @saved_cs_results */;
/*!50001 SET collation_connection = @saved_col_connection */; /*!50001 SET collation_connection = @saved_col_connection */;
@@ -714,6 +899,6 @@ CREATE TABLE `zones` (
/*!40101 SET CHARACTER_SET_CLIENT=@OLD_CHARACTER_SET_CLIENT */; /*!40101 SET CHARACTER_SET_CLIENT=@OLD_CHARACTER_SET_CLIENT */;
/*!40101 SET CHARACTER_SET_RESULTS=@OLD_CHARACTER_SET_RESULTS */; /*!40101 SET CHARACTER_SET_RESULTS=@OLD_CHARACTER_SET_RESULTS */;
/*!40101 SET COLLATION_CONNECTION=@OLD_COLLATION_CONNECTION */; /*!40101 SET COLLATION_CONNECTION=@OLD_COLLATION_CONNECTION */;
/*!40111 SET SQL_NOTES=@OLD_SQL_NOTES */; /*M!100616 SET NOTE_VERBOSITY=@OLD_NOTE_VERBOSITY */;
-- Dump completed on 2023-05-03 5:59:38 -- Dump completed on 2025-08-06 4:26:05


@@ -2,13 +2,22 @@
set -euo pipefail set -euo pipefail
go install github.com/goreleaser/goreleaser@v1.22.1 go install github.com/goreleaser/goreleaser/v2@v2.13.3
DRONE_TAG=${DRONE_TAG-""} if [ ! -z "${harbor_username:-}" ]; then
DOCKER_FILE=~/.docker/config.json
if [ ! -e $DOCKER_FILE ]; then
mkdir -p ~/.docker/
export harbor_auth=`cat /dev/null | jq -s -r '[ env.harbor_username, env.harbor_password ] | join(":") | @base64'`
echo '{"auths":{"harbor.ntppool.org":{"auth":""}}}' | jq '.auths["harbor.ntppool.org"].auth=env.harbor_auth' > $DOCKER_FILE
fi
fi
CI_TAG=${CI_COMMIT_TAG:-${DRONE_TAG:-""}}
is_snapshot="" is_snapshot=""
if [ -z "$DRONE_TAG" ]; then if [ -z "$CI_TAG" ]; then
is_snapshot="--snapshot" is_snapshot="--snapshot"
fi fi


@@ -16,17 +16,17 @@ import (
"go.ntppool.org/data-api/ntpdb" "go.ntppool.org/data-api/ntpdb"
) )
const pointBasis float64 = 10000 const (
const pointSymbol = "‱" pointBasis float64 = 10000
pointSymbol = "‱"
)
// const pointBasis = 1000 // const pointBasis = 1000
// const pointSymbol = "‰" // const pointSymbol = "‰"
func (srv *Server) dnsAnswers(c echo.Context) error { func (srv *Server) dnsAnswers(c echo.Context) error {
log := logger.Setup() log := logger.Setup()
ctx := c.Request().Context() ctx, span := tracing.Tracer().Start(c.Request().Context(), "dnsanswers")
ctx, span := tracing.Tracer().Start(ctx, "dnsanswers")
defer span.End() defer span.End()
// for errors and 404s, a shorter cache time // for errors and 404s, a shorter cache time
@@ -89,7 +89,7 @@ func (srv *Server) dnsAnswers(c echo.Context) error {
queryGroup.Go(func() error { queryGroup.Go(func() error {
var err error var err error
serverData, err = srv.ch.ServerAnswerCounts(c.Request().Context(), ip.String(), days) serverData, err = srv.ch.ServerAnswerCounts(ctx, ip.String(), days)
if err != nil { if err != nil {
log.Error("ServerUserCCData", "err", err) log.Error("ServerUserCCData", "err", err)
return err return err
@@ -107,7 +107,7 @@ func (srv *Server) dnsAnswers(c echo.Context) error {
qtype = "AAAA" qtype = "AAAA"
} }
totalData, err = srv.ch.AnswerTotals(c.Request().Context(), qtype, days) totalData, err = srv.ch.AnswerTotals(ctx, qtype, days)
if err != nil { if err != nil {
log.Error("AnswerTotals", "err", err) log.Error("AnswerTotals", "err", err)
} }
@@ -123,7 +123,7 @@ func (srv *Server) dnsAnswers(c echo.Context) error {
return c.String(http.StatusInternalServerError, err.Error()) return c.String(http.StatusInternalServerError, err.Error())
} }
zoneTotals := map[string]int32{} zoneTotals := map[string]int{}
for _, z := range zoneStats { for _, z := range zoneStats {
zn := z.ZoneName zn := z.ZoneName
@@ -141,9 +141,15 @@ func (srv *Server) dnsAnswers(c echo.Context) error {
totalName = "uk" totalName = "uk"
} }
if zt, ok := zoneTotals[totalName]; ok { if zt, ok := zoneTotals[totalName]; ok {
// log.InfoContext(ctx, "netspeed data", "pointBasis", pointBasis, "zt", zt, "server netspeed", serverNetspeed)
if zt == 0 {
// if the recorded netspeed for the zone was zero, assume it's at least
// this server's worth instead. Otherwise the Netspeed becomes 'infinite'.
zt = int(serverNetspeed)
}
cc.Netspeed = (pointBasis / float64(zt)) * float64(serverNetspeed) cc.Netspeed = (pointBasis / float64(zt)) * float64(serverNetspeed)
} }
// log.Info("points", "cc", cc.CC, "points", cc.Points) // log.DebugContext(ctx, "points", "cc", cc.CC, "points", cc.Points)
} }
r := struct { r := struct {
@@ -159,5 +165,4 @@ func (srv *Server) dnsAnswers(c echo.Context) error {
c.Response().Header().Set("Cache-Control", "public,max-age=1800") c.Response().Header().Set("Cache-Control", "public,max-age=1800")
return c.JSONPretty(http.StatusOK, r, "") return c.JSONPretty(http.StatusOK, r, "")
} }
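The zero-total guard added in dnsAnswers above can be illustrated in isolation. This is a minimal, self-contained sketch (the function name `netspeedPoints` is illustrative, not part of the package): a server's share of a zone's netspeed is expressed in basis points, and a zone total of zero falls back to the server's own netspeed so the share never divides by zero.

```go
package main

import "fmt"

const pointBasis float64 = 10000 // basis-point scale, as in dnsanswers.go

// netspeedPoints computes a server's share of a zone's netspeed in
// basis points (‱). If the recorded zone total is zero, assume it is
// at least this server's netspeed, mirroring the guard in dnsAnswers.
func netspeedPoints(zoneTotal, serverNetspeed int) float64 {
	if zoneTotal == 0 {
		zoneTotal = serverNetspeed
	}
	return (pointBasis / float64(zoneTotal)) * float64(serverNetspeed)
}

func main() {
	fmt.Println(netspeedPoints(1000000, 1000)) // 10 basis points of the zone
	fmt.Println(netspeedPoints(0, 1000))       // zero total: treated as the whole zone
}
```

With the guard, a zone whose stats row recorded `netspeed_active = 0` yields the full `pointBasis` for the server rather than +Inf.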

server/functions.go Normal file

@@ -0,0 +1,46 @@
package server
import (
"context"
"database/sql"
"errors"
"net/netip"
"strconv"
"time"
"go.ntppool.org/common/logger"
"go.ntppool.org/common/tracing"
"go.ntppool.org/data-api/ntpdb"
)
func (srv *Server) FindServer(ctx context.Context, serverID string) (ntpdb.Server, error) {
log := logger.Setup()
ctx, span := tracing.Tracer().Start(ctx, "FindServer")
defer span.End()
q := ntpdb.NewWrappedQuerier(ntpdb.New(srv.db))
var serverData ntpdb.Server
var dberr error
if id, err := strconv.Atoi(serverID); id > 0 && err == nil {
serverData, dberr = q.GetServerByID(ctx, uint32(id))
} else {
ip, err := netip.ParseAddr(serverID)
if err != nil || !ip.IsValid() {
return ntpdb.Server{}, nil // 404 error
}
serverData, dberr = q.GetServerByIP(ctx, ip.String())
}
if dberr != nil {
if !errors.Is(dberr, sql.ErrNoRows) {
log.Error("could not query server id", "err", dberr)
return serverData, dberr
}
}
if serverData.ID == 0 || (serverData.DeletionOn.Valid && serverData.DeletionOn.Time.Before(time.Now().Add(-1*time.Hour*24*30*24))) {
// no data and no error to produce 404 errors
return ntpdb.Server{}, nil
}
return serverData, nil
}
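FindServer accepts either a numeric server ID or an IP address and falls through to "not found" for anything else. A standalone sketch of just that dispatch, without the database lookup (the helper name `classifyServerParam` is illustrative): try the parameter as a positive integer first, then as an IP via `netip.ParseAddr`.

```go
package main

import (
	"fmt"
	"net/netip"
	"strconv"
)

// classifyServerParam mirrors the dispatch in FindServer: a positive
// integer is treated as a server ID, anything else must parse as an
// IP address; invalid input maps to "none" (a 404 upstream).
func classifyServerParam(s string) (kind, key string) {
	if id, err := strconv.Atoi(s); err == nil && id > 0 {
		return "id", strconv.Itoa(id)
	}
	if ip, err := netip.ParseAddr(s); err == nil && ip.IsValid() {
		return "ip", ip.String()
	}
	return "none", ""
}

func main() {
	fmt.Println(classifyServerParam("42"))          // id 42
	fmt.Println(classifyServerParam("2001:db8::1")) // ip 2001:db8::1
	fmt.Println(classifyServerParam("not-a-server"))
}
```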

server/grafana.go Normal file

@@ -0,0 +1,589 @@
package server
import (
"context"
"fmt"
"net/http"
"regexp"
"strconv"
"strings"
"time"
"github.com/labstack/echo/v4"
"go.ntppool.org/common/logger"
"go.ntppool.org/common/tracing"
"go.ntppool.org/data-api/logscores"
"go.ntppool.org/data-api/ntpdb"
)
// ColumnDef represents a Grafana table column definition
type ColumnDef struct {
Text string `json:"text"`
Type string `json:"type"`
Unit string `json:"unit,omitempty"`
}
// GrafanaTableSeries represents a single table series in Grafana format
type GrafanaTableSeries struct {
Target string `json:"target"`
Tags map[string]string `json:"tags"`
Columns []ColumnDef `json:"columns"`
Values [][]interface{} `json:"values"`
}
// GrafanaTimeSeriesResponse represents the complete Grafana table response
type GrafanaTimeSeriesResponse []GrafanaTableSeries
// timeRangeParams extends historyParameters with time range support
type timeRangeParams struct {
historyParameters // embed existing struct
from time.Time
to time.Time
maxDataPoints int
interval string // for future downsampling
}
// parseTimeRangeParams parses and validates time range parameters
// parseRelativeTime parses relative time expressions like "-3d", "-2h", "-30m"
// Returns the absolute time relative to the provided base time (usually time.Now())
func parseRelativeTime(relativeTimeStr string, baseTime time.Time) (time.Time, error) {
if relativeTimeStr == "" {
return time.Time{}, fmt.Errorf("empty time string")
}
// Check if it's a regular Unix timestamp first
if unixTime, err := strconv.ParseInt(relativeTimeStr, 10, 64); err == nil {
return time.Unix(unixTime, 0), nil
}
// Parse relative time format like "-3d", "-2h", "-30m", "-5s"
re := regexp.MustCompile(`^(-?)(\d+)([dhms])$`)
matches := re.FindStringSubmatch(relativeTimeStr)
if len(matches) != 4 {
return time.Time{}, fmt.Errorf("invalid time format, expected Unix timestamp or relative format like '-3d', '-2h', '-30m', '-5s'")
}
sign := matches[1]
valueStr := matches[2]
unit := matches[3]
value, err := strconv.Atoi(valueStr)
if err != nil {
return time.Time{}, fmt.Errorf("invalid numeric value: %s", valueStr)
}
var duration time.Duration
switch unit {
case "s":
duration = time.Duration(value) * time.Second
case "m":
duration = time.Duration(value) * time.Minute
case "h":
duration = time.Duration(value) * time.Hour
case "d":
duration = time.Duration(value) * 24 * time.Hour
default:
return time.Time{}, fmt.Errorf("invalid time unit: %s", unit)
}
// Apply sign (negative means go back in time)
if sign == "-" {
return baseTime.Add(-duration), nil
}
return baseTime.Add(duration), nil
}
func (srv *Server) parseTimeRangeParams(ctx context.Context, c echo.Context, server ntpdb.Server) (timeRangeParams, error) {
log := logger.FromContext(ctx)
// Start with existing parameter parsing logic
baseParams, err := srv.getHistoryParameters(ctx, c, server)
if err != nil {
return timeRangeParams{}, err
}
trParams := timeRangeParams{
historyParameters: baseParams,
maxDataPoints: 50000, // default
}
// Parse from timestamp (required) - supports Unix timestamps and relative time like "-3d"
fromParam := c.QueryParam("from")
if fromParam == "" {
return timeRangeParams{}, echo.NewHTTPError(http.StatusBadRequest, "from parameter is required")
}
now := time.Now()
trParams.from, err = parseRelativeTime(fromParam, now)
if err != nil {
return timeRangeParams{}, echo.NewHTTPError(http.StatusBadRequest, fmt.Sprintf("invalid from parameter: %v", err))
}
// Parse to timestamp (required) - supports Unix timestamps and relative time like "-1d"
toParam := c.QueryParam("to")
if toParam == "" {
return timeRangeParams{}, echo.NewHTTPError(http.StatusBadRequest, "to parameter is required")
}
trParams.to, err = parseRelativeTime(toParam, now)
if err != nil {
return timeRangeParams{}, echo.NewHTTPError(http.StatusBadRequest, fmt.Sprintf("invalid to parameter: %v", err))
}
// Validate time range
if trParams.from.Equal(trParams.to) || trParams.from.After(trParams.to) {
return timeRangeParams{}, echo.NewHTTPError(http.StatusBadRequest, "from must be before to")
}
// Check minimum range (1 second)
if trParams.to.Sub(trParams.from) < time.Second {
return timeRangeParams{}, echo.NewHTTPError(http.StatusBadRequest, "time range must be at least 1 second")
}
// Check maximum range (90 days)
if trParams.to.Sub(trParams.from) > 90*24*time.Hour {
return timeRangeParams{}, echo.NewHTTPError(http.StatusBadRequest, "time range cannot exceed 90 days")
}
// Parse maxDataPoints (optional)
if maxDataPointsParam := c.QueryParam("maxDataPoints"); maxDataPointsParam != "" {
maxDP, err := strconv.Atoi(maxDataPointsParam)
if err != nil {
return timeRangeParams{}, echo.NewHTTPError(http.StatusBadRequest, "invalid maxDataPoints format")
}
if maxDP > 50000 {
return timeRangeParams{}, echo.NewHTTPError(http.StatusBadRequest, "maxDataPoints cannot exceed 50000")
}
if maxDP > 0 {
trParams.maxDataPoints = maxDP
}
}
// Parse interval (optional, for future downsampling)
trParams.interval = c.QueryParam("interval")
log.DebugContext(ctx, "parsed time range params",
"from", trParams.from,
"to", trParams.to,
"maxDataPoints", trParams.maxDataPoints,
"interval", trParams.interval,
"monitor", trParams.monitorID,
)
return trParams, nil
}
// sanitizeMonitorName sanitizes monitor names for Grafana target format
func sanitizeMonitorName(name string) string {
// Replace problematic characters for Grafana target format
result := strings.ReplaceAll(name, " ", "_")
result = strings.ReplaceAll(result, ".", "-")
result = strings.ReplaceAll(result, "/", "-")
return result
}
// transformToGrafanaTableFormat converts LogScoreHistory to Grafana table format
func transformToGrafanaTableFormat(history *logscores.LogScoreHistory, monitors []ntpdb.Monitor) GrafanaTimeSeriesResponse {
// Group data by monitor_id (one series per monitor)
monitorData := make(map[int][]ntpdb.LogScore)
monitorInfo := make(map[int]ntpdb.Monitor)
// Group log scores by monitor ID
skippedInvalidMonitors := 0
for _, ls := range history.LogScores {
if !ls.MonitorID.Valid {
skippedInvalidMonitors++
continue
}
monitorID := int(ls.MonitorID.Int32)
monitorData[monitorID] = append(monitorData[monitorID], ls)
}
// Debug logging for transformation
logger.Setup().Info("transformation grouping debug",
"total_log_scores", len(history.LogScores),
"skipped_invalid_monitors", skippedInvalidMonitors,
"grouped_monitor_ids", func() []int {
keys := make([]int, 0, len(monitorData))
for k := range monitorData {
keys = append(keys, k)
}
return keys
}(),
"monitor_data_counts", func() map[int]int {
counts := make(map[int]int)
for k, v := range monitorData {
counts[k] = len(v)
}
return counts
}(),
)
// Index monitors by ID for quick lookup
for _, monitor := range monitors {
monitorInfo[int(monitor.ID)] = monitor
}
var response GrafanaTimeSeriesResponse
// Create one table series per monitor
logger.Setup().Info("creating grafana series",
"monitor_data_entries", len(monitorData),
)
for monitorID, logScores := range monitorData {
if len(logScores) == 0 {
logger.Setup().Info("skipping monitor with no data", "monitor_id", monitorID)
continue
}
logger.Setup().Info("processing monitor series",
"monitor_id", monitorID,
"log_scores_count", len(logScores),
)
// Get monitor name from history.Monitors map or from monitor info
monitorName := "unknown"
if name, exists := history.Monitors[monitorID]; exists && name != "" {
monitorName = name
} else if monitor, exists := monitorInfo[monitorID]; exists {
monitorName = monitor.DisplayName()
}
// Build target name and tags
sanitizedName := sanitizeMonitorName(monitorName)
target := "monitor{name=" + sanitizedName + "}"
tags := map[string]string{
"monitor_id": strconv.Itoa(monitorID),
"monitor_name": monitorName,
"type": "monitor",
}
// Add status (we'll use active as default since we have data for this monitor)
tags["status"] = "active"
// Define table columns
columns := []ColumnDef{
{Text: "time", Type: "time"},
{Text: "score", Type: "number"},
{Text: "rtt", Type: "number", Unit: "ms"},
{Text: "offset", Type: "number", Unit: "s"},
}
// Build values array
var values [][]interface{}
for _, ls := range logScores {
// Convert timestamp to milliseconds
timestampMs := ls.Ts.Unix() * 1000
// Create row: [time, score, rtt, offset]
row := []interface{}{
timestampMs,
ls.Score,
}
// Add RTT (convert from microseconds to milliseconds, handle null)
if ls.Rtt.Valid {
rttMs := float64(ls.Rtt.Int32) / 1000.0
row = append(row, rttMs)
} else {
row = append(row, nil)
}
// Add offset (handle null)
if ls.Offset.Valid {
row = append(row, ls.Offset.Float64)
} else {
row = append(row, nil)
}
values = append(values, row)
}
// Create table series
series := GrafanaTableSeries{
Target: target,
Tags: tags,
Columns: columns,
Values: values,
}
response = append(response, series)
logger.Setup().Info("created series for monitor",
"monitor_id", monitorID,
"target", series.Target,
"values_count", len(series.Values),
)
}
logger.Setup().Info("transformation complete",
"final_response_count", len(response),
"response_is_nil", response == nil,
)
return response
}
// scoresTimeRange handles Grafana time range requests for NTP server scores
func (srv *Server) scoresTimeRange(c echo.Context) error {
log := logger.Setup()
ctx, span := tracing.Tracer().Start(c.Request().Context(), "scoresTimeRange")
defer span.End()
// Set reasonable default cache time; adjusted later based on data
c.Response().Header().Set("Cache-Control", "public,max-age=240")
// Validate mode parameter
mode := c.Param("mode")
if mode != "json" {
return echo.NewHTTPError(http.StatusNotFound, "invalid mode - only json supported")
}
// Find and validate server first
server, err := srv.FindServer(ctx, c.Param("server"))
if err != nil {
log.ErrorContext(ctx, "find server", "err", err)
if he, ok := err.(*echo.HTTPError); ok {
return he
}
span.RecordError(err)
return echo.NewHTTPError(http.StatusInternalServerError, "internal error")
}
if server.DeletionAge(30 * 24 * time.Hour) {
span.AddEvent("server deleted")
return echo.NewHTTPError(http.StatusNotFound, "server not found")
}
if server.ID == 0 {
span.AddEvent("server not found")
return echo.NewHTTPError(http.StatusNotFound, "server not found")
}
// Parse and validate time range parameters
params, err := srv.parseTimeRangeParams(ctx, c, server)
if err != nil {
if he, ok := err.(*echo.HTTPError); ok {
return he
}
log.ErrorContext(ctx, "parse time range parameters", "err", err)
span.RecordError(err)
return echo.NewHTTPError(http.StatusInternalServerError, "internal error")
}
// Query ClickHouse for time range data
log.InfoContext(ctx, "executing clickhouse time range query",
"server_id", server.ID,
"server_ip", server.Ip,
"monitor_id", params.monitorID,
"from", params.from,
"to", params.to,
"max_data_points", params.maxDataPoints,
"time_range_duration", params.to.Sub(params.from).String(),
)
logScores, err := srv.ch.LogscoresTimeRange(ctx, int(server.ID), params.monitorID, params.from, params.to, params.maxDataPoints)
if err != nil {
log.ErrorContext(ctx, "clickhouse time range query", "err", err,
"server_id", server.ID,
"monitor_id", params.monitorID,
"from", params.from,
"to", params.to,
)
span.RecordError(err)
return echo.NewHTTPError(http.StatusInternalServerError, "internal error")
}
log.InfoContext(ctx, "clickhouse query results",
"server_id", server.ID,
"rows_returned", len(logScores),
"first_few_ids", func() []uint64 {
ids := make([]uint64, 0, 3)
for i, ls := range logScores {
if i >= 3 {
break
}
ids = append(ids, ls.ID)
}
return ids
}(),
)
// Build LogScoreHistory structure for compatibility with existing functions
history := &logscores.LogScoreHistory{
LogScores: logScores,
Monitors: make(map[int]string),
}
// Get monitor names for the returned data
monitorIDs := []uint32{}
for _, ls := range logScores {
if ls.MonitorID.Valid {
monitorID := uint32(ls.MonitorID.Int32)
if _, exists := history.Monitors[int(monitorID)]; !exists {
history.Monitors[int(monitorID)] = ""
monitorIDs = append(monitorIDs, monitorID)
}
}
}
log.InfoContext(ctx, "monitor processing",
"unique_monitor_ids", monitorIDs,
"monitor_count", len(monitorIDs),
)
// Get monitor details from database for status and display names
var monitors []ntpdb.Monitor
if len(monitorIDs) > 0 {
q := ntpdb.NewWrappedQuerier(ntpdb.New(srv.db))
logScoreMonitors, err := q.GetServerScores(ctx, ntpdb.GetServerScoresParams{
MonitorIDs: monitorIDs,
ServerID: server.ID,
})
if err != nil {
log.ErrorContext(ctx, "get monitor details", "err", err)
// Don't fail the request, just use basic info
} else {
for _, lsm := range logScoreMonitors {
// Create monitor entry for transformation (we mainly need the display name)
tempMon := ntpdb.Monitor{
TlsName: lsm.TlsName,
Location: lsm.Location,
ID: lsm.ID,
}
monitors = append(monitors, tempMon)
// Update monitor name in history
history.Monitors[int(lsm.ID)] = tempMon.DisplayName()
}
}
}
// Transform to Grafana table format
log.InfoContext(ctx, "starting grafana transformation",
"log_scores_count", len(logScores),
"monitors_count", len(monitors),
"history_monitors", history.Monitors,
)
grafanaResponse := transformToGrafanaTableFormat(history, monitors)
log.InfoContext(ctx, "grafana transformation complete",
"response_series_count", len(grafanaResponse),
"response_preview", func() interface{} {
if len(grafanaResponse) == 0 {
return "empty_response"
}
first := grafanaResponse[0]
return map[string]interface{}{
"target": first.Target,
"tags": first.Tags,
"columns_count": len(first.Columns),
"values_count": len(first.Values),
"first_few_values": func() [][]interface{} {
if len(first.Values) == 0 {
return [][]interface{}{}
}
count := 2
if len(first.Values) < count {
count = len(first.Values)
}
return first.Values[:count]
}(),
}
}(),
)
// Set cache control headers based on data characteristics
setHistoryCacheControl(c, history)
// Set CORS headers
c.Response().Header().Set("Access-Control-Allow-Origin", "*")
c.Response().Header().Set("Content-Type", "application/json")
log.InfoContext(ctx, "time range response final",
"server_id", server.ID,
"server_ip", server.Ip,
"monitor_id", params.monitorID,
"time_range", params.to.Sub(params.from).String(),
"raw_data_points", len(logScores),
"grafana_series_count", len(grafanaResponse),
"max_data_points", params.maxDataPoints,
"response_is_null", grafanaResponse == nil,
"response_is_empty", len(grafanaResponse) == 0,
)
return c.JSON(http.StatusOK, grafanaResponse)
}
// testGrafanaTable returns sample data in Grafana table format for validation
func (srv *Server) testGrafanaTable(c echo.Context) error {
log := logger.Setup()
ctx, span := tracing.Tracer().Start(c.Request().Context(), "testGrafanaTable")
defer span.End()
log.InfoContext(ctx, "serving test Grafana table format",
"remote_ip", c.RealIP(),
"user_agent", c.Request().UserAgent(),
)
// Generate sample data with realistic NTP Pool values
now := time.Now()
sampleData := GrafanaTimeSeriesResponse{
{
Target: "monitor{name=zakim1-yfhw4a}",
Tags: map[string]string{
"monitor_id": "126",
"monitor_name": "zakim1-yfhw4a",
"type": "monitor",
"status": "active",
},
Columns: []ColumnDef{
{Text: "time", Type: "time"},
{Text: "score", Type: "number"},
{Text: "rtt", Type: "number", Unit: "ms"},
{Text: "offset", Type: "number", Unit: "s"},
},
Values: [][]interface{}{
{now.Add(-10*time.Minute).Unix() * 1000, 20.0, 18.865, -0.000267},
{now.Add(-20*time.Minute).Unix() * 1000, 20.0, 18.96, -0.000390},
{now.Add(-30*time.Minute).Unix() * 1000, 20.0, 18.073, -0.000768},
{now.Add(-40*time.Minute).Unix() * 1000, 20.0, 18.209, nil}, // null offset example
},
},
{
Target: "monitor{name=nj2-mon01}",
Tags: map[string]string{
"monitor_id": "84",
"monitor_name": "nj2-mon01",
"type": "monitor",
"status": "active",
},
Columns: []ColumnDef{
{Text: "time", Type: "time"},
{Text: "score", Type: "number"},
{Text: "rtt", Type: "number", Unit: "ms"},
{Text: "offset", Type: "number", Unit: "s"},
},
Values: [][]interface{}{
{now.Add(-10*time.Minute).Unix() * 1000, 19.5, 22.145, 0.000123},
{now.Add(-20*time.Minute).Unix() * 1000, 19.8, 21.892, 0.000089},
{now.Add(-30*time.Minute).Unix() * 1000, 20.0, 22.034, 0.000156},
},
},
}
// Add CORS header for browser testing
c.Response().Header().Set("Access-Control-Allow-Origin", "*")
c.Response().Header().Set("Content-Type", "application/json")
// Set cache control similar to other endpoints
c.Response().Header().Set("Cache-Control", "public,max-age=60")
log.InfoContext(ctx, "test Grafana table response sent",
"series_count", len(sampleData),
"response_size_approx", "~1KB",
)
return c.JSON(http.StatusOK, sampleData)
}

server/grafana_test.go

@@ -0,0 +1,119 @@
package server
import (
"testing"
"time"
)
func TestParseRelativeTime(t *testing.T) {
// Use a fixed base time for consistent testing
baseTime := time.Date(2025, 8, 4, 12, 0, 0, 0, time.UTC)
tests := []struct {
name string
input string
expected time.Time
shouldError bool
}{
{
name: "Unix timestamp",
input: "1753500964",
expected: time.Unix(1753500964, 0),
},
{
name: "3 days ago",
input: "-3d",
expected: baseTime.Add(-3 * 24 * time.Hour),
},
{
name: "2 hours ago",
input: "-2h",
expected: baseTime.Add(-2 * time.Hour),
},
{
name: "30 minutes ago",
input: "-30m",
expected: baseTime.Add(-30 * time.Minute),
},
{
name: "5 seconds ago",
input: "-5s",
expected: baseTime.Add(-5 * time.Second),
},
{
name: "3 days in future",
input: "3d",
expected: baseTime.Add(3 * 24 * time.Hour),
},
{
name: "1 hour in future",
input: "1h",
expected: baseTime.Add(1 * time.Hour),
},
{
name: "empty string",
input: "",
shouldError: true,
},
{
name: "invalid format",
input: "invalid",
shouldError: true,
},
{
name: "invalid unit",
input: "3x",
shouldError: true,
},
{
name: "no number",
input: "-d",
shouldError: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result, err := parseRelativeTime(tt.input, baseTime)
if tt.shouldError {
if err == nil {
t.Errorf("parseRelativeTime(%q) expected error, got nil", tt.input)
}
return
}
if err != nil {
t.Errorf("parseRelativeTime(%q) unexpected error: %v", tt.input, err)
return
}
if !result.Equal(tt.expected) {
t.Errorf("parseRelativeTime(%q) = %v, expected %v", tt.input, result, tt.expected)
}
})
}
}
func TestParseRelativeTimeEdgeCases(t *testing.T) {
baseTime := time.Date(2025, 8, 4, 12, 0, 0, 0, time.UTC)
// Test large values
result, err := parseRelativeTime("365d", baseTime)
if err != nil {
t.Errorf("parseRelativeTime('365d') unexpected error: %v", err)
}
expected := baseTime.Add(365 * 24 * time.Hour)
if !result.Equal(expected) {
t.Errorf("parseRelativeTime('365d') = %v, expected %v", result, expected)
}
// Test zero values
result, err = parseRelativeTime("0s", baseTime)
if err != nil {
t.Errorf("parseRelativeTime('0s') unexpected error: %v", err)
}
if !result.Equal(baseTime) {
t.Errorf("parseRelativeTime('0s') = %v, expected %v", result, baseTime)
}
}

server/graph_image.go

@@ -0,0 +1,150 @@
package server
import (
"context"
"fmt"
"io"
"net/http"
"net/http/httptrace"
"net/url"
"os"
"time"
"github.com/hashicorp/go-retryablehttp"
"github.com/labstack/echo/v4"
"go.opentelemetry.io/contrib/instrumentation/net/http/httptrace/otelhttptrace"
"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/trace"
"go.ntppool.org/common/logger"
"go.ntppool.org/common/tracing"
)
func (srv *Server) graphImage(c echo.Context) error {
log := logger.Setup()
ctx, span := tracing.Tracer().Start(c.Request().Context(), "graphImage")
defer span.End()
// cache errors briefly
c.Response().Header().Set("Cache-Control", "public,max-age=240")
serverID := c.Param("server")
imageType := c.Param("type")
log = log.With("serverID", serverID).With("type", imageType)
log.InfoContext(ctx, "graph parameters")
span.SetAttributes(attribute.String("url.server_parameter", serverID))
if imageType != "offset.png" {
return c.String(http.StatusNotFound, "invalid image name")
}
if len(c.QueryString()) > 0 {
// people breaking the varnish cache by adding query parameters
redirectURL := c.Request().URL
redirectURL.RawQuery = ""
log.InfoContext(ctx, "redirecting", "url", redirectURL.String())
return c.Redirect(308, redirectURL.String())
}
serverData, err := srv.FindServer(ctx, serverID)
if err != nil {
span.RecordError(err)
return c.String(http.StatusInternalServerError, "server error")
}
if serverData.ID == 0 {
return c.String(http.StatusNotFound, "not found")
}
if serverData.DeletionAge(7 * 24 * time.Hour) {
return c.String(http.StatusNotFound, "not found")
}
if serverData.Ip != serverID {
return c.Redirect(308, fmt.Sprintf("/graph/%s/offset.png", serverData.Ip))
}
contentType, data, err := srv.fetchGraph(ctx, serverData.Ip)
if err != nil {
span.RecordError(err)
return c.String(http.StatusInternalServerError, "server error")
}
if len(data) == 0 {
span.RecordError(fmt.Errorf("no data"))
return c.String(http.StatusInternalServerError, "server error")
}
ttl := 1800
c.Response().Header().Set("Cache-Control",
fmt.Sprintf("public,max-age=%d,s-maxage=%.0f",
ttl, float64(ttl)*0.75,
),
)
return c.Blob(http.StatusOK, contentType, data)
}
func (srv *Server) fetchGraph(ctx context.Context, serverIP string) (string, []byte, error) {
log := logger.Setup()
ctx, span := tracing.Tracer().Start(ctx, "fetchGraph")
defer span.End()
// q := url.Values{}
// q.Set("graph_only", "1")
// pagePath := srv.config.WebURL("/scores/" + serverIP, q)
serviceHost := os.Getenv("screensnap_service")
if len(serviceHost) == 0 {
serviceHost = "screensnap"
}
reqURL := url.URL{
Scheme: "http",
Host: serviceHost,
Path: fmt.Sprintf("/image/offset/%s", serverIP),
}
client := retryablehttp.NewClient()
client.Logger = log
client.HTTPClient.Transport = otelhttp.NewTransport(
client.HTTPClient.Transport,
otelhttp.WithClientTrace(func(ctx context.Context) *httptrace.ClientTrace {
return otelhttptrace.NewClientTrace(ctx)
}),
)
req, err := retryablehttp.NewRequestWithContext(ctx, "GET", reqURL.String(), nil)
if err != nil {
return "", nil, err
}
resp, err := client.Do(req)
if err != nil {
return "", nil, err
}
defer resp.Body.Close()
if resp.StatusCode != 200 {
span.AddEvent("unexpected status code", trace.WithAttributes(attribute.Int64("http.status", int64(resp.StatusCode))))
return "text/plain", nil, fmt.Errorf("upstream error %d", resp.StatusCode)
}
b, err := io.ReadAll(resp.Body)
if err != nil {
return "", nil, err
}
return resp.Header.Get("Content-Type"), b, nil
}
// # my $data = JSON::encode_json(
// # { url => $url->as_string(),
// # timeout => 10,
// # viewport => "501x233",
// # height => 233,
// # resource_timeout => 5,
// # wait => 0.5,
// # scale_method => "vector",
// # }
// # );

server/history.go

@@ -0,0 +1,476 @@
package server
import (
"bytes"
"context"
"database/sql"
"encoding/csv"
"errors"
"fmt"
"math"
"net/http"
"net/netip"
"os"
"strconv"
"strings"
"time"
"github.com/labstack/echo/v4"
"go.ntppool.org/common/logger"
"go.ntppool.org/common/tracing"
"go.ntppool.org/data-api/logscores"
"go.ntppool.org/data-api/ntpdb"
)
// sanitizeForCSV removes or replaces problematic characters for CSV output
func sanitizeForCSV(s string) string {
// Replace NULL bytes and other control characters with a placeholder
var result strings.Builder
for _, r := range s {
switch {
case r == 0: // NULL byte
result.WriteString("<NULL>")
case r < 32 && r != '\t' && r != '\n' && r != '\r': // Other control chars except tab/newline/carriage return
result.WriteString(fmt.Sprintf("<0x%02X>", r))
default:
result.WriteRune(r)
}
}
return result.String()
}
type historyMode uint8
const (
historyModeUnknown historyMode = iota
historyModeLog
historyModeJSON
historyModeMonitor
)
func paramHistoryMode(s string) historyMode {
switch s {
case "log":
return historyModeLog
case "json":
return historyModeJSON
case "monitor":
return historyModeMonitor
default:
return historyModeUnknown
}
}
type historyParameters struct {
limit int
monitorID int
server ntpdb.Server
since time.Time
fullHistory bool
}
func (srv *Server) getHistoryParameters(ctx context.Context, c echo.Context, server ntpdb.Server) (historyParameters, error) {
log := logger.FromContext(ctx)
p := historyParameters{}
limit := 0
if limitParam, err := strconv.Atoi(c.QueryParam("limit")); err == nil {
limit = limitParam
} else {
limit = 100
}
if limit > 10000 {
limit = 10000
}
p.limit = limit
q := ntpdb.NewWrappedQuerier(ntpdb.New(srv.db))
monitorParam := c.QueryParam("monitor")
var monitorID uint32
switch monitorParam {
case "":
name := "recentmedian.scores.ntp.dev"
var ipVersion ntpdb.NullMonitorsIpVersion
if server.IpVersion == ntpdb.ServersIpVersionV4 {
ipVersion = ntpdb.NullMonitorsIpVersion{MonitorsIpVersion: ntpdb.MonitorsIpVersionV4, Valid: true}
} else {
ipVersion = ntpdb.NullMonitorsIpVersion{MonitorsIpVersion: ntpdb.MonitorsIpVersionV6, Valid: true}
}
monitor, err := q.GetMonitorByNameAndIPVersion(ctx, ntpdb.GetMonitorByNameAndIPVersionParams{
TlsName: sql.NullString{Valid: true, String: name},
IpVersion: ipVersion,
})
if err != nil {
log.WarnContext(ctx, "could not find monitor", "name", name, "ip_version", server.IpVersion, "err", err)
}
monitorID = monitor.ID
case "*":
monitorID = 0 // don't filter on monitor ID
default:
mID, err := strconv.ParseUint(monitorParam, 10, 32)
if err == nil {
monitorID = uint32(mID)
} else {
// only accept the name prefix; no wildcards; trust the database
// to filter out anything else unexpected
if strings.ContainsAny(monitorParam, "_%. \t\n") {
return p, echo.NewHTTPError(http.StatusNotFound, "monitor not found")
}
monitorParam = monitorParam + ".%"
var ipVersion ntpdb.NullMonitorsIpVersion
if server.IpVersion == ntpdb.ServersIpVersionV4 {
ipVersion = ntpdb.NullMonitorsIpVersion{MonitorsIpVersion: ntpdb.MonitorsIpVersionV4, Valid: true}
} else {
ipVersion = ntpdb.NullMonitorsIpVersion{MonitorsIpVersion: ntpdb.MonitorsIpVersionV6, Valid: true}
}
monitor, err := q.GetMonitorByNameAndIPVersion(ctx, ntpdb.GetMonitorByNameAndIPVersionParams{
TlsName: sql.NullString{Valid: true, String: monitorParam},
IpVersion: ipVersion,
})
if err != nil {
if err == sql.ErrNoRows {
return p, echo.NewHTTPError(http.StatusNotFound, "monitor not found").WithInternal(err)
}
log.WarnContext(ctx, "could not find monitor", "name", monitorParam, "ip_version", server.IpVersion, "err", err)
return p, echo.NewHTTPError(http.StatusNotFound, "monitor not found (sql)")
}
monitorID = monitor.ID
}
}
p.monitorID = int(monitorID)
log.DebugContext(ctx, "monitor param", "monitor", monitorID, "ip_version", server.IpVersion)
since, _ := strconv.ParseInt(c.QueryParam("since"), 10, 64) // defaults to 0 so don't care if it parses
if since > 0 {
p.since = time.Unix(since, 0)
}
clientIP, err := netip.ParseAddr(c.RealIP())
if err != nil {
return p, err
}
// log.DebugContext(ctx, "client ip", "client_ip", clientIP.String())
if clientIP.IsPrivate() || clientIP.IsLoopback() { // don't allow this through the ingress or CDN
if fullParam := c.QueryParam("full_history"); len(fullParam) > 0 {
if t, _ := strconv.ParseBool(fullParam); t {
p.fullHistory = true
}
}
}
return p, nil
}
func (srv *Server) getHistoryMySQL(ctx context.Context, _ echo.Context, p historyParameters) (*logscores.LogScoreHistory, error) {
ls, err := logscores.GetHistoryMySQL(ctx, srv.db, p.server.ID, uint32(p.monitorID), p.since, p.limit)
return ls, err
}
func (srv *Server) history(c echo.Context) error {
log := logger.Setup()
ctx, span := tracing.Tracer().Start(c.Request().Context(), "history")
defer span.End()
// set a reasonable default cache time; adjusted later for
// happy path common responses
c.Response().Header().Set("Cache-Control", "public,max-age=240")
mode := paramHistoryMode(c.Param("mode"))
if mode == historyModeUnknown {
return echo.NewHTTPError(http.StatusNotFound, "invalid mode")
}
server, err := srv.FindServer(ctx, c.Param("server"))
if err != nil {
log.ErrorContext(ctx, "find server", "err", err)
if he, ok := err.(*echo.HTTPError); ok {
return he
}
span.RecordError(err)
return echo.NewHTTPError(http.StatusInternalServerError, "internal error")
}
if server.DeletionAge(30 * 24 * time.Hour) {
span.AddEvent("server deleted")
return echo.NewHTTPError(http.StatusNotFound, "server not found")
}
if server.ID == 0 {
span.AddEvent("server not found")
return echo.NewHTTPError(http.StatusNotFound, "server not found")
}
p, err := srv.getHistoryParameters(ctx, c, server)
if err != nil {
if he, ok := err.(*echo.HTTPError); ok {
return he
}
log.ErrorContext(ctx, "get history parameters", "err", err)
span.RecordError(err)
return echo.NewHTTPError(http.StatusInternalServerError, "internal error")
}
p.server = server
var history *logscores.LogScoreHistory
sourceParam := c.QueryParam("source")
switch sourceParam {
case "m":
case "c":
default:
sourceParam = os.Getenv("default_source")
}
if sourceParam == "m" {
history, err = srv.getHistoryMySQL(ctx, c, p)
} else {
history, err = logscores.GetHistoryClickHouse(ctx, srv.ch, srv.db, p.server.ID, uint32(p.monitorID), p.since, p.limit, p.fullHistory)
}
if err != nil {
var httpError *echo.HTTPError
if errors.As(err, &httpError) {
if httpError.Code >= 500 {
log.Error("get history", "err", err)
span.RecordError(err)
}
return httpError
} else {
log.Error("get history", "err", err)
span.RecordError(err)
return c.String(http.StatusInternalServerError, "internal error")
}
}
c.Response().Header().Set("Access-Control-Allow-Origin", "*")
switch mode {
case historyModeLog:
return srv.historyCSV(ctx, c, history)
case historyModeJSON:
return srv.historyJSON(ctx, c, server, history)
default:
return c.String(http.StatusNotFound, "not implemented")
}
}
func (srv *Server) historyJSON(ctx context.Context, c echo.Context, server ntpdb.Server, history *logscores.LogScoreHistory) error {
log := logger.Setup()
ctx, span := tracing.Tracer().Start(ctx, "history.json")
defer span.End()
type ScoresEntry struct {
TS int64 `json:"ts"`
Offset *float64 `json:"offset,omitempty"`
Step float64 `json:"step"`
Score float64 `json:"score"`
MonitorID int `json:"monitor_id"`
Rtt *float64 `json:"rtt,omitempty"`
}
type MonitorEntry struct {
ID uint32 `json:"id"`
Name string `json:"name"`
Type string `json:"type"`
Ts string `json:"ts"`
Score float64 `json:"score"`
Status string `json:"status"`
AvgRtt *float64 `json:"avg_rtt,omitempty"`
}
res := struct {
History []ScoresEntry `json:"history"`
Monitors []MonitorEntry `json:"monitors"`
Server struct {
IP string `json:"ip"`
} `json:"server"`
}{
History: make([]ScoresEntry, len(history.LogScores)),
}
res.Server.IP = server.Ip
// log.InfoContext(ctx, "monitor id list", "ids", history.MonitorIDs)
monitorIDs := []uint32{}
for k := range history.Monitors {
monitorIDs = append(monitorIDs, uint32(k))
}
q := ntpdb.NewWrappedQuerier(ntpdb.New(srv.db))
logScoreMonitors, err := q.GetServerScores(ctx,
ntpdb.GetServerScoresParams{
MonitorIDs: monitorIDs,
ServerID: server.ID,
},
)
if err != nil {
span.RecordError(err)
log.ErrorContext(ctx, "GetServerScores", "err", err)
return c.String(http.StatusInternalServerError, "err")
}
// log.InfoContext(ctx, "got logScoreMonitors", "count", len(logScoreMonitors))
// Calculate average RTT per monitor
monitorRttSums := make(map[uint32]float64)
monitorRttCounts := make(map[uint32]int)
for _, ls := range history.LogScores {
if ls.MonitorID.Valid && ls.Rtt.Valid {
monitorID := uint32(ls.MonitorID.Int32)
monitorRttSums[monitorID] += float64(ls.Rtt.Int32) / 1000.0
monitorRttCounts[monitorID]++
}
}
for _, lsm := range logScoreMonitors {
score := math.Round(lsm.ScoreRaw*10) / 10 // round to one decimal
tempMon := ntpdb.Monitor{
// Hostname: lsm.Hostname,
TlsName: lsm.TlsName,
Location: lsm.Location,
ID: lsm.ID,
}
name := tempMon.DisplayName()
me := MonitorEntry{
ID: lsm.ID,
Name: name,
Type: string(lsm.Type),
Ts: lsm.ScoreTs.Time.Format(time.RFC3339),
Score: score,
Status: string(lsm.Status),
}
// Add average RTT if available
if count, exists := monitorRttCounts[lsm.ID]; exists && count > 0 {
avgRtt := monitorRttSums[lsm.ID] / float64(count)
me.AvgRtt = &avgRtt
}
res.Monitors = append(res.Monitors, me)
}
for i, ls := range history.LogScores {
x := float64(1000000000000)
score := math.Round(ls.Score*x) / x
res.History[i] = ScoresEntry{
TS: ls.Ts.Unix(),
MonitorID: int(ls.MonitorID.Int32),
Step: ls.Step,
Score: score,
}
if ls.Offset.Valid {
offset := ls.Offset.Float64
res.History[i].Offset = &offset
}
if ls.Rtt.Valid {
rtt := float64(ls.Rtt.Int32) / 1000.0
res.History[i].Rtt = &rtt
}
}
setHistoryCacheControl(c, history)
return c.JSON(http.StatusOK, res)
}
func (srv *Server) historyCSV(ctx context.Context, c echo.Context, history *logscores.LogScoreHistory) error {
log := logger.Setup()
ctx, span := tracing.Tracer().Start(ctx, "history.csv")
defer span.End()
b := bytes.NewBuffer([]byte{})
w := csv.NewWriter(b)
ff := func(f float64) string {
s := fmt.Sprintf("%.9f", f)
s = strings.TrimRight(s, "0")
s = strings.TrimRight(s, ".")
return s
}
err := w.Write([]string{"ts_epoch", "ts", "offset", "step", "score", "monitor_id", "monitor_name", "rtt", "leap", "error"})
if err != nil {
log.ErrorContext(ctx, "could not write csv header", "err", err)
return err
}
for _, l := range history.LogScores {
// log.Debug("csv line", "id", l.ID, "n", i)
var offset string
if l.Offset.Valid {
offset = ff(l.Offset.Float64)
}
step := ff(l.Step)
score := ff(l.Score)
var monName string
if l.MonitorID.Valid {
monName = history.Monitors[int(l.MonitorID.Int32)]
}
var leap string
if l.Attributes.Leap != 0 {
leap = fmt.Sprintf("%d", l.Attributes.Leap)
}
var rtt string
if l.Rtt.Valid {
rtt = ff(float64(l.Rtt.Int32) / 1000.0)
}
err := w.Write([]string{
strconv.Itoa(int(l.Ts.Unix())),
// l.Ts.Format(time.RFC3339),
l.Ts.Format("2006-01-02 15:04:05"),
offset,
step,
score,
fmt.Sprintf("%d", l.MonitorID.Int32),
monName,
rtt,
leap,
sanitizeForCSV(l.Attributes.Error),
})
if err != nil {
log.Warn("csv encoding error", "ls_id", l.ID, "err", err)
}
}
w.Flush()
if err := w.Error(); err != nil {
log.ErrorContext(ctx, "could not flush csv", "err", err)
return c.String(http.StatusInternalServerError, "csv error")
}
// log.Info("entries", "count", len(history.LogScores), "out_bytes", b.Len())
setHistoryCacheControl(c, history)
c.Response().Header().Set("Content-Disposition", "inline")
// Chrome and Firefox force-download text/csv files, so use text/plain
// https://bugs.chromium.org/p/chromium/issues/detail?id=152911
return c.Blob(http.StatusOK, "text/plain", b.Bytes())
}
func setHistoryCacheControl(c echo.Context, history *logscores.LogScoreHistory) {
hdr := c.Response().Header()
if len(history.LogScores) == 0 ||
// cache for longer if data hasn't updated for a while; or we didn't
// find any.
(time.Now().Add(-8 * time.Hour).After(history.LogScores[len(history.LogScores)-1].Ts)) {
hdr.Set("Cache-Control", "s-maxage=260,max-age=360")
} else {
if len(history.LogScores) == 1 {
hdr.Set("Cache-Control", "s-maxage=60,max-age=35")
} else {
hdr.Set("Cache-Control", "s-maxage=90,max-age=120")
}
}
}


@@ -5,11 +5,15 @@ import (
"database/sql" "database/sql"
"errors" "errors"
"fmt" "fmt"
"log/slog"
"net/http" "net/http"
"os" "os"
"strconv"
"time"
"golang.org/x/sync/errgroup" "golang.org/x/sync/errgroup"
"github.com/labstack/echo-contrib/echoprometheus"
"github.com/labstack/echo/v4" "github.com/labstack/echo/v4"
"github.com/labstack/echo/v4/middleware" "github.com/labstack/echo/v4/middleware"
slogecho "github.com/samber/slog-echo" slogecho "github.com/samber/slog-echo"
@@ -25,13 +29,16 @@ import (
"go.ntppool.org/common/version" "go.ntppool.org/common/version"
"go.ntppool.org/common/xff/fastlyxff" "go.ntppool.org/common/xff/fastlyxff"
"go.ntppool.org/api/config"
chdb "go.ntppool.org/data-api/chdb" chdb "go.ntppool.org/data-api/chdb"
"go.ntppool.org/data-api/ntpdb" "go.ntppool.org/data-api/ntpdb"
) )
type Server struct { type Server struct {
db *sql.DB db *sql.DB
ch *chdb.ClickHouse ch *chdb.ClickHouse
config *config.Config
ctx context.Context ctx context.Context
@@ -40,54 +47,69 @@ type Server struct {
} }
func NewServer(ctx context.Context, configFile string) (*Server, error) { func NewServer(ctx context.Context, configFile string) (*Server, error) {
log := logger.Setup()
ch, err := chdb.New(ctx, configFile) ch, err := chdb.New(ctx, configFile)
if err != nil { if err != nil {
return nil, fmt.Errorf("clickhouse open: %w", err) return nil, fmt.Errorf("clickhouse open: %w", err)
} }
db, err := ntpdb.OpenDB(configFile) db, err := ntpdb.OpenDB(ctx, configFile)
if err != nil { if err != nil {
return nil, fmt.Errorf("mysql open: %w", err) return nil, fmt.Errorf("mysql open: %w", err)
} }
conf := config.New()
if !conf.Valid() {
log.Error("invalid ntppool config")
}
srv := &Server{ srv := &Server{
ch: ch, ch: ch,
db: db, db: db,
ctx: ctx, ctx: ctx,
config: conf,
metrics: metricsserver.New(), metrics: metricsserver.New(),
} }
tpShutdown, err := tracing.InitTracer(ctx, &tracing.TracerConfig{ tpShutdown, err := tracing.InitTracer(ctx, &tracing.TracerConfig{
ServiceName: "data-api", ServiceName: "data-api",
Environment: "", Environment: conf.DeploymentMode(),
}) })
if err != nil { if err != nil {
return nil, err return nil, fmt.Errorf("tracing init: %w", err)
} }
srv.tpShutdown = append(srv.tpShutdown, tpShutdown) srv.tpShutdown = append(srv.tpShutdown, tpShutdown)
// srv.tracer = tracing.Tracer()
return srv, nil return srv, nil
} }
func (srv *Server) Run() error { func (srv *Server) Run() error {
log := logger.Setup() log := logger.Setup()
ntpconf := config.New()
ctx, cancel := context.WithCancel(srv.ctx) ctx, cancel := context.WithCancel(srv.ctx)
defer cancel() defer cancel()
g, _ := errgroup.WithContext(ctx) g, _ := errgroup.WithContext(ctx)
g.Go(func() error { g.Go(func() error {
version.RegisterMetric("dataapi", srv.metrics.Registry())
return srv.metrics.ListenAndServe(ctx, 9020) return srv.metrics.ListenAndServe(ctx, 9020)
}) })
g.Go(func() error { g.Go(func() error {
return health.HealthCheckListener(ctx, 9019, log.WithGroup("health")) hclog := log.WithGroup("health")
hc := health.NewServer(healthHandler(srv, hclog))
hc.SetLogger(hclog)
return hc.Listen(ctx, 9019)
}) })
e := echo.New() e := echo.New()
srv.tpShutdown = append(srv.tpShutdown, e.Shutdown) srv.tpShutdown = append(srv.tpShutdown, e.Shutdown)
e.Debug = false
trustOptions := []echo.TrustOption{ trustOptions := []echo.TrustOption{
echo.TrustLoopback(true), echo.TrustLoopback(true),
echo.TrustLinkLocal(false), echo.TrustLinkLocal(false),
@@ -110,11 +132,17 @@ func (srv *Server) Run() error {
e.IPExtractor = echo.ExtractIPFromXFFHeader(trustOptions...) e.IPExtractor = echo.ExtractIPFromXFFHeader(trustOptions...)
e.Use(echoprometheus.NewMiddlewareWithConfig(echoprometheus.MiddlewareConfig{
Registerer: srv.metrics.Registry(),
}))
e.Use(otelecho.Middleware("data-api")) e.Use(otelecho.Middleware("data-api"))
e.Use(slogecho.NewWithConfig(log, e.Use(slogecho.NewWithConfig(log,
slogecho.Config{ slogecho.Config{
WithTraceID: true, WithTraceID: false, // done by logger already
// WithSpanID: true, DefaultLevel: slog.LevelInfo,
ClientErrorLevel: slog.LevelWarn,
ServerErrorLevel: slog.LevelError,
// WithRequestHeader: true, // WithRequestHeader: true,
}, },
)) ))
@@ -127,6 +155,12 @@ func (srv *Server) Run() error {
span := trace.SpanFromContext(request.Context()) span := trace.SpanFromContext(request.Context())
span.SetAttributes(attribute.String("http.real_ip", c.RealIP())) span.SetAttributes(attribute.String("http.real_ip", c.RealIP()))
// since the Go library (temporarily?) isn't including this
span.SetAttributes(attribute.String("url.path", c.Request().RequestURI))
if q := c.QueryString(); len(q) > 0 {
span.SetAttributes(attribute.String("url.query", q))
}
c.Response().Header().Set("Traceparent", span.SpanContext().TraceID().String()) c.Response().Header().Set("Traceparent", span.SpanContext().TraceID().String())
return next(c) return next(c)
@@ -138,7 +172,6 @@ func (srv *Server) Run() error {
vinfo := version.VersionInfo() vinfo := version.VersionInfo()
v := "data-api/" + vinfo.Version + "+" + vinfo.GitRevShort v := "data-api/" + vinfo.Version + "+" + vinfo.GitRevShort
return func(c echo.Context) error { return func(c echo.Context) error {
c.Response().Header().Set(echo.HeaderServer, v) c.Response().Header().Set(echo.HeaderServer, v)
return next(c) return next(c)
} }
@@ -146,7 +179,7 @@ func (srv *Server) Run() error {
e.Use(middleware.CORSWithConfig(middleware.CORSConfig{ e.Use(middleware.CORSWithConfig(middleware.CORSConfig{
AllowOrigins: []string{ AllowOrigins: []string{
"http://localhost", "http://localhost:5173", "http://localhost:8080", "http://localhost", "http://localhost:5173", "http://localhost:5174", "http://localhost:8080",
"https://www.ntppool.org", "https://*.ntppool.org", "https://www.ntppool.org", "https://*.ntppool.org",
"https://web.beta.grundclock.com", "https://manage.beta.grundclock.com", "https://web.beta.grundclock.com", "https://manage.beta.grundclock.com",
"https:/*.askdev.grundclock.com", "https:/*.askdev.grundclock.com",
@@ -157,6 +190,7 @@ func (srv *Server) Run() error {
e.Use(middleware.RecoverWithConfig(middleware.RecoverConfig{ e.Use(middleware.RecoverWithConfig(middleware.RecoverConfig{
LogErrorFunc: func(c echo.Context, err error, stack []byte) error { LogErrorFunc: func(c echo.Context, err error, stack []byte) error {
log.ErrorContext(c.Request().Context(), err.Error(), "stack", string(stack)) log.ErrorContext(c.Request().Context(), err.Error(), "stack", string(stack))
fmt.Println(string(stack))
return err return err
}, },
})) }))
@@ -172,7 +206,29 @@ func (srv *Server) Run() error {
e.GET("/api/usercc", srv.userCountryData) e.GET("/api/usercc", srv.userCountryData)
e.GET("/api/server/dns/answers/:server", srv.dnsAnswers) e.GET("/api/server/dns/answers/:server", srv.dnsAnswers)
// e.GET("/api/server/scores/:server/:type", srv.logScores) e.GET("/api/server/scores/:server/:mode", srv.history)
e.GET("/api/dns/counts", srv.dnsQueryCounts)
e.GET("/api/v2/test/grafana-table", srv.testGrafanaTable)
e.GET("/api/v2/server/scores/:server/:mode", srv.scoresTimeRange)
if len(ntpconf.WebHostname()) > 0 {
e.POST("/api/server/scores/:server/:mode", func(c echo.Context) error {
// POST requests used to work, so make them not error out
mode := c.Param("mode")
server := c.Param("server")
query := c.Request().URL.Query()
return c.Redirect(
http.StatusSeeOther,
ntpconf.WebURL(
fmt.Sprintf("/scores/%s/%s", server, mode),
&query,
),
)
})
}
e.GET("/graph/:server/:type", srv.graphImage)
e.GET("/api/zone/counts/:zone_name", srv.zoneCounts)
g.Go(func() error { g.Go(func() error {
return e.Start(":8030") return e.Start(":8030")
@@ -208,7 +264,7 @@ func (srv *Server) userCountryData(c echo.Context) error {
log.InfoContext(ctx, "didn't get zoneStats") log.InfoContext(ctx, "didn't get zoneStats")
} }
data, err := srv.ch.UserCountryData(c.Request().Context()) data, err := srv.ch.UserCountryData(ctx)
if err != nil { if err != nil {
log.ErrorContext(ctx, "UserCountryData", "err", err) log.ErrorContext(ctx, "UserCountryData", "err", err)
return c.String(http.StatusInternalServerError, err.Error()) return c.String(http.StatusInternalServerError, err.Error())
@@ -221,5 +277,90 @@ func (srv *Server) userCountryData(c echo.Context) error {
UserCountry: data, UserCountry: data,
ZoneStats: zoneStats, ZoneStats: zoneStats,
}) })
}
func (srv *Server) dnsQueryCounts(c echo.Context) error {
log := logger.Setup()
ctx, span := tracing.Tracer().Start(c.Request().Context(), "dnsQueryCounts")
defer span.End()
data, err := srv.ch.DNSQueries(ctx)
if err != nil {
log.ErrorContext(ctx, "dnsQueryCounts", "err", err)
return c.String(http.StatusInternalServerError, err.Error())
}
hdr := c.Response().Header()
hdr.Set("Cache-Control", "s-maxage=30,max-age=60")
return c.JSON(http.StatusOK, data)
}
func healthHandler(srv *Server, log *slog.Logger) func(w http.ResponseWriter, req *http.Request) {
return func(w http.ResponseWriter, req *http.Request) {
ctx := req.Context()
ctx, cancel := context.WithTimeout(ctx, 5*time.Second)
defer cancel()
g, ctx := errgroup.WithContext(ctx)
stats := srv.db.Stats()
if stats.OpenConnections > 3 {
log.InfoContext(ctx, "health requests", "url", req.URL.String(), "stats", stats)
}
if resetParam := req.URL.Query().Get("reset"); resetParam != "" {
reset, err := strconv.ParseBool(resetParam)
log.InfoContext(ctx, "db reset request", "err", err, "reset", reset)
if err == nil && reset {
// this feature was to debug some specific problem
log.InfoContext(ctx, "setting idle db conns to zero")
srv.db.SetConnMaxLifetime(30 * time.Second)
srv.db.SetMaxIdleConns(0)
srv.db.SetMaxIdleConns(4)
}
}
g.Go(func() error {
err := srv.ch.Scores.Ping(ctx)
if err != nil {
log.WarnContext(ctx, "ch scores ping", "err", err)
return err
}
return nil
})
g.Go(func() error {
err := srv.ch.Logs.Ping(ctx)
if err != nil {
log.WarnContext(ctx, "ch logs ping", "err", err)
return err
}
return nil
})
g.Go(func() error {
err := srv.db.PingContext(ctx)
if err != nil {
log.WarnContext(ctx, "db ping", "err", err)
return err
}
return nil
})
err := g.Wait()
if err != nil {
w.WriteHeader(http.StatusServiceUnavailable)
_, err = w.Write([]byte("db ping err"))
if err != nil {
log.ErrorContext(ctx, "could not write response", "err", err)
}
return
}
w.WriteHeader(http.StatusOK)
_, err = w.Write([]byte("ok"))
if err != nil {
log.ErrorContext(ctx, "could not write response", "err", err)
}
}
}

server/zones.go (new file, 146 lines)

@@ -0,0 +1,146 @@
package server
import (
"database/sql"
"errors"
"net/http"
"strconv"
"time"
"github.com/labstack/echo/v4"
"go.ntppool.org/common/logger"
"go.ntppool.org/common/tracing"
"go.ntppool.org/data-api/ntpdb"
)
func (srv *Server) zoneCounts(c echo.Context) error {
log := logger.Setup()
ctx, span := tracing.Tracer().Start(c.Request().Context(), "zoneCounts")
defer span.End()
// just cache for a short time by default
c.Response().Header().Set("Cache-Control", "public,max-age=240")
c.Response().Header().Set("Access-Control-Allow-Origin", "*")
c.Response().Header().Del("Vary")
q := ntpdb.NewWrappedQuerier(ntpdb.New(srv.db))
zone, err := q.GetZoneByName(ctx, c.Param("zone_name"))
if err != nil || zone.ID == 0 {
if errors.Is(err, sql.ErrNoRows) {
return c.String(http.StatusNotFound, "Not found")
}
log.ErrorContext(ctx, "could not query for zone", "err", err)
span.RecordError(err)
return echo.NewHTTPError(http.StatusInternalServerError, "internal error")
}
counts, err := q.GetZoneCounts(ctx, zone.ID)
if err != nil {
if !errors.Is(err, sql.ErrNoRows) {
log.ErrorContext(ctx, "get counts", "err", err)
span.RecordError(err)
return c.String(http.StatusInternalServerError, "internal error")
}
}
type historyEntry struct {
D string `json:"d"` // date
Ts int `json:"ts"` // epoch timestamp
Rc int `json:"rc"` // count registered
Ac int `json:"ac"` // count active
W int `json:"w"` // netspeed active
Iv string `json:"iv"` // ip version
}
rv := struct {
History []historyEntry `json:"history"`
}{}
skipCount := 0.0
limit := 0
if limitParam := c.QueryParam("limit"); len(limitParam) > 0 {
if limitInt, err := strconv.Atoi(limitParam); err == nil && limitInt > 0 {
limit = limitInt
}
}
var mostRecentDate int64 = -1
if limit > 0 {
count := 0
dates := map[int64]bool{}
for _, c := range counts {
ep := c.Date.Unix()
if _, ok := dates[ep]; !ok {
count++
dates[ep] = true
mostRecentDate = ep
}
}
if limit < count {
if limit > 1 {
skipCount = float64(count) / float64(limit-1)
} else {
// skip everything; the mostRecentDate check in the loop below still includes the newest entry
skipCount = float64(count) + 1
}
}
log.DebugContext(ctx, "mod", "count", count, "limit", limit, "mod", count%limit, "skipCount", skipCount)
// log.Info("limit plan", "date count", count, "limit", limit, "skipCount", skipCount)
}
toSkip := 0.0
if limit == 1 {
toSkip = skipCount // we just want to look for the last entry
}
lastDate := int64(0)
lastSkip := int64(0)
skipThreshold := 0.5
for _, c := range counts {
cDate := c.Date.Unix()
if (toSkip <= skipThreshold && cDate != lastSkip) ||
lastDate == cDate ||
mostRecentDate == cDate {
// log.Info("adding date", "date", c.Date.Format(time.DateOnly))
rv.History = append(rv.History, historyEntry{
D: c.Date.Format(time.DateOnly),
Ts: int(cDate),
Ac: int(c.CountActive),
Rc: int(c.CountRegistered),
W: int(c.NetspeedActive),
Iv: string(c.IpVersion),
})
lastDate = cDate
} else {
// log.Info("skipping date", "date", c.Date.Format(time.DateOnly))
if lastSkip == cDate {
continue
}
toSkip--
lastSkip = cDate
continue
}
if toSkip <= skipThreshold && skipCount > 0 {
toSkip += skipCount
}
}
if limit > 0 {
count := 0
dates := map[int]bool{}
for _, c := range rv.History {
ep := c.Ts
if _, ok := dates[ep]; !ok {
count++
dates[ep] = true
}
}
log.DebugContext(ctx, "result counts", "skipCount", skipCount, "limit", limit, "got", count)
}
c.Response().Header().Set("Cache-Control", "s-maxage=28800, max-age=7200")
return c.JSON(http.StatusOK, rv)
}
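The `limit`/`skipCount` accounting in `zoneCounts` thins a long daily history down to roughly `limit` evenly spaced dates while always keeping the most recent one. A stdlib-only sketch of the same idea (the `downsample` helper is hypothetical, not part of the data-api codebase, and simplifies the per-IP-version grouping away):

```go
package main

import "fmt"

// downsample keeps roughly limit evenly spaced entries from a
// sorted slice, always including the last (most recent) element.
// Hypothetical helper mirroring the skipCount/toSkip accounting
// in zoneCounts above.
func downsample(dates []int64, limit int) []int64 {
	if limit <= 0 || limit >= len(dates) {
		return dates
	}
	out := make([]int64, 0, limit)
	step := float64(len(dates)) / float64(limit-1) // analogous to skipCount
	toSkip := 0.0
	for i, d := range dates {
		last := i == len(dates)-1
		if toSkip <= 0.5 || last { // 0.5 matches skipThreshold
			out = append(out, d)
			if !last {
				toSkip += step
			}
		}
		toSkip--
	}
	return out
}

func main() {
	days := make([]int64, 10)
	for i := range days {
		days[i] = int64(i)
	}
	// Ten daily entries reduced to about four, newest always kept.
	fmt.Println(downsample(days, 4))
}
```

The real handler additionally keeps every row sharing an already-selected date, so IPv4 and IPv6 counts for the same day stay together.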


@@ -12,6 +12,14 @@ sql:
    omit_unused_structs: true
    emit_interface: true
    # emit_all_enum_values: true
    rename:
      servers.Ip: IP
    overrides:
      - column: log_scores.attributes
        go_type: go.ntppool.org/common/types.LogScoreAttributes
      - column: "server_netspeed.netspeed_active"
        go_type: "int"
      - column: "zone_server_counts.netspeed_active"
        go_type: "int"
      - db_type: "bigint"
        go_type: "int"
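The `netspeed_active` override moves from an unsigned 32-bit-unsafe type to `int` because zone-wide netspeed totals can pass 2³¹, which wraps a 32-bit accumulator. A small stdlib-only illustration of the wrap (the per-server values are made up for the example):

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	// Summing per-server netspeed values into a 32-bit counter wraps
	// once the total passes math.MaxInt32 (~2.1 billion); Go's int is
	// 64-bit on modern platforms and holds the real total.
	var total32 int32
	var total int
	for i := 0; i < 3; i++ {
		total32 += 1_000_000_000
		total += 1_000_000_000
	}
	fmt.Println(math.MaxInt32) // 2147483647
	fmt.Println(total32)       // wrapped: -1294967296
	fmt.Println(total)         // 3000000000
}
```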