12 Commits

Author SHA1 Message Date
c9481d12c6 feat(db): migrate from MySQL to PostgreSQL
All checks were successful
continuous-integration/drone/push Build is passing
Replace MySQL driver with pgx/v5 and pgxpool:
- Update sqlc to use postgresql engine
- Convert query.sql to PostgreSQL syntax ($1 params, CASE WHEN,
  ANY() arrays)
- Replace sql.DB with pgxpool.Pool throughout
- Change nullable types from sql.Null* to pgtype.*
- Update ID types from uint32 to int64 for PostgreSQL compatibility
- Delete MySQL-specific dynamic_connect.go
- Add opentelemetry.gowrap template for tracing
2025-11-29 10:59:15 -08:00
85d86bc837 build: update go and dependencies
2025-09-27 08:17:04 -07:00
196f90a2b9 fix(db): use int for netspeed_active to prevent overflow
GetZoneStatsData and GetZoneStatsV2's netspeed_active values can
exceed 2 billion, causing 32-bit integer overflow. Changed from
int32/uint32 to int (64-bit on modern systems) to handle large
network speed totals.

- Update sqlc column overrides to use int type
- Fix type compatibility in dnsanswers.go zoneTotals map
- Regenerate database code with new types

Fixes https://community.ntppool.org/t/error-message-displayed-on-the-monitoring-score-page/4063
2025-09-21 00:08:21 -07:00
02a6f587bb Update schema 2025-09-20 10:29:53 -07:00
2dfc355f7c style: format Go code with gofumpt
Apply consistent formatting to Go source files using gofumpt
as required by pre-commit guidelines.
2025-08-03 16:06:59 -07:00
3e6a0f9e63 fix(api): include deleted monitors in name-based lookups
Remove status filter from GetMonitorByNameAndIPVersion query to allow
historical score data for deleted monitors to be accessible when
querying by monitor name/TLS name, making behavior consistent with
ID-based queries.
2025-08-03 14:53:21 -07:00
9c6b8d1867 fix(api): handle score monitors in name-based lookups
Score monitors have type='score' and ip_version=NULL, but the
GetMonitorByNameAndIPVersion query required ip_version to match.
This broke monitor lookups by name for score monitors.

Modified query to match either:
- Regular monitors with specified ip_version
- Score monitors with NULL ip_version

Fixes issue reported by Ben Harris at:
https://community.ntppool.org/t/monitor-recentmedian-no-longer-works/4002
2025-08-04 20:43:53 -07:00
393d532ce2 feat(api): add relative time support to v2 scores endpoint
- Add parseRelativeTime function supporting "-3d", "-2h", "-30m" format
- Update parseTimeRangeParams to handle Unix timestamps and relative times
- Add unit tests with comprehensive coverage for all time formats
- Document v2 API in API.md with examples and migration guide

Enables intuitive time queries like from=-3d&to=-1h instead of
Unix timestamps, improving developer experience for the enhanced
v2 endpoint that supports 50k records vs legacy 10k limit.
2025-08-03 12:12:22 -07:00
267c279f3d Update dependencies
2025-08-02 19:48:42 -07:00
eb5459abf3 fix(api): protocol-aware monitor filtering for multi-protocol monitors
Servers with monitor filtering returned incorrect results when monitors
share the same name but use different protocols (v4/v6). Monitor lookup
now considers both name and IP version to match the correct protocol.

- Add GetMonitorByNameAndIPVersion SQL query with protocol matching
- Update history parameter parsing to use server IP version context
- Fix both /scores/{ip}/log and Grafana endpoints
- Remove unused GetMonitorByName query

Fixes abh/ntppool#264
Reported-by: Anssi Johansson <https://github.com/avijc>
2025-07-27 00:37:49 -07:00
8262b1442f feat(api): add Grafana time range endpoint for scores
- Add /api/v2/server/scores/{server}/{mode} endpoint
- Support time range queries with from/to parameters
- Return data in Grafana table format for visualization
- Fix routing pattern to handle IP addresses correctly
- Add comprehensive parameter validation and error handling
2025-07-27 02:18:32 -07:00
d4bf8d9e16 feat(api): add Grafana test endpoint for table format
Add `/api/v2/test/grafana-table` endpoint to validate Grafana
table format compatibility before implementing the full time
range API.

- Create server/grafana.go with table format structures
- Add structured logging and OpenTelemetry tracing
- Include realistic NTP Pool sample data with null handling
- Set proper CORS and cache headers for testing
- Update implementation plan with Phase 0 completion status

Ready for Grafana JSON API data source integration testing.
2025-07-26 09:03:46 -07:00
31 changed files with 5236 additions and 1485 deletions

@@ -21,7 +21,7 @@ steps:
 memory: 100MiB
 - name: test
-image: golang:1.24
+image: golang:1.25
 pull: always
 volumes:
 - name: go
@@ -33,7 +33,7 @@ steps:
 - go build ./...
 - name: goreleaser
-image: golang:1.24
+image: golang:1.25
 pull: always
 resources:
 requests:
@@ -83,6 +83,6 @@ volumes:
 ---
 kind: signature
-hmac: 616f5b902e42082a427162929ba5ac45d9331a8ade25c923f185ebb71dd8aef4
+hmac: 7f4f57140394a1c3a34e4d23188edda3cd95359dacf6d0abfa45bda3afff692f
 ...

API.md (new file, 481 lines)

@@ -0,0 +1,481 @@
# NTP Pool Data API Documentation
This document describes the REST API endpoints provided by the NTP Pool data API server.
## Base URL
The API server runs on port 8030. All endpoints are accessible at:
- Production: `https://www.ntppool.org/api/...`
- Local development: `http://localhost:8030/api/...`
## Common Response Headers
All API responses include:
- `Server`: Version information (e.g., `data-api/1.2.3+abc123`)
- `Cache-Control`: Caching directives
- `Access-Control-Allow-Origin`: CORS configuration
## Endpoints
### 1. User Country Data
**GET** `/api/usercc`
Returns DNS query statistics by user country code and NTP pool zone statistics.
#### Response Format
```json
{
  "UserCountry": [
    {
      "CC": "us",
      "IPv4": 42.5,
      "IPv6": 12.3
    }
  ],
  "ZoneStats": {
    "zones": [
      {
        "zone_name": "us",
        "netspeed_active": 1000,
        "server_count": 450
      }
    ]
  }
}
```
#### Response Fields
- `UserCountry`: Array of country statistics
- `CC`: Two-letter country code
- `IPv4`: IPv4 query percentage
- `IPv6`: IPv6 query percentage
- `ZoneStats`: NTP pool zone information
#### Cache Control
- `Cache-Control`: Varies based on data freshness
---
### 2. DNS Query Counts
**GET** `/api/dns/counts`
Returns aggregated DNS query counts from ClickHouse analytics.
#### Response Format
```json
{
  "total_queries": 1234567,
  "by_country": {
    "us": 456789,
    "de": 234567
  },
  "by_query_type": {
    "A": 987654,
    "AAAA": 345678
  }
}
```
#### Cache Control
- `Cache-Control`: `s-maxage=30,max-age=60`
---
### 3. Server DNS Answers
**GET** `/api/server/dns/answers/{server}`
Returns DNS answer statistics for a specific NTP server, including geographic distribution and scoring metrics.
#### Path Parameters
- `server`: Server IP address (IPv4 or IPv6)
#### Response Format
```json
{
  "Server": [
    {
      "CC": "us",
      "Count": 12345,
      "Points": 1234.5,
      "Netspeed": 567.8
    }
  ],
  "PointSymbol": "‱"
}
```
#### Response Fields
- `Server`: Array of country-specific statistics
- `CC`: Country code where DNS queries originated
- `Count`: Number of DNS answers served
- `Points`: Calculated scoring points (basis: 10,000)
- `Netspeed`: Network speed score relative to zone capacity
- `PointSymbol`: Symbol used for point calculations ("‱" = per 10,000)
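As a unit refresher, a per-10,000 value ("‱") is computed like a percentage but scaled by 10,000. The helper below is purely illustrative of the unit and is not the API's actual `Points` formula:

```go
package main

import "fmt"

// perTenThousand expresses part/total in basis-per-10,000 units (‱).
// Illustrative only; the data-api's real Points calculation is not
// documented here.
func perTenThousand(part, total float64) float64 {
	if total == 0 {
		return 0
	}
	return part / total * 10000
}

func main() {
	// e.g. 12,345 answers out of 100,000,000 total ≈ 1.2345‱
	fmt.Println(perTenThousand(12345, 100000000))
}
```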
#### Error Responses
- `400 Bad Request`: Invalid server IP format
- `404 Not Found`: Server not found
- `500 Internal Server Error`: Database error
#### Cache Control
- Success: `public,max-age=1800`
- Errors: `public,max-age=300`
#### URL Canonicalization
Redirects to canonical IP format with `308 Permanent Redirect` if:
- IP format is not canonical
- Query parameters are present
---
### 4. Server Score History (Legacy)
**GET** `/api/server/scores/{server}/{mode}`
**⚠️ Legacy API** - Returns historical scoring data for an NTP server in JSON or CSV format. For enhanced features and higher limits, use the [v2 API](#7-server-score-history-v2---enhanced-time-range-api) instead.
#### Path Parameters
- `server`: Server IP address or ID
- `mode`: Response format (`json` or `log`)
#### Query Parameters
- `limit`: Maximum number of records (default: 100, max: 10000)
- `monitor`: Monitor ID or name prefix (default: "recentmedian.scores.ntp.dev")
- Use `*` for all monitors
- Use monitor ID number
- Use monitor name prefix (e.g., "recentmedian")
- `since`: Unix timestamp for start time
- `source`: Data source (`m` for MySQL, `c` for ClickHouse)
- `full_history`: Include full history (private IPs only)
#### JSON Response Format (`mode=json`)
```json
{
  "history": [
    {
      "ts": 1640995200,
      "offset": 0.001234,
      "step": 0.5,
      "score": 20.0,
      "monitor_id": 123,
      "rtt": 45.6
    }
  ],
  "monitors": [
    {
      "id": 123,
      "name": "recentmedian.scores.ntp.dev",
      "type": "ntp",
      "ts": "2022-01-01T12:00:00Z",
      "score": 19.5,
      "status": "active",
      "avg_rtt": 45.2
    }
  ],
  "server": {
    "ip": "192.0.2.1"
  }
}
```
#### CSV Response Format (`mode=log`)
Returns CSV data with headers:
```
ts_epoch,ts,offset,step,score,monitor_id,monitor_name,rtt,leap,error
1640995200,2022-01-01 12:00:00,0.001234,0.5,20.0,123,recentmedian.scores.ntp.dev,45.6,,
```
#### CSV Fields
- `ts_epoch`: Unix timestamp
- `ts`: Human-readable timestamp
- `offset`: Time offset in seconds
- `step`: NTP step value
- `score`: Computed score
- `monitor_id`: Monitor identifier
- `monitor_name`: Monitor display name
- `rtt`: Round-trip time in milliseconds
- `leap`: Leap second indicator
- `error`: Error message (sanitized for CSV)
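The CSV output above can be consumed with Go's standard library. A minimal reader sketch (the `parseScoreLog` helper is illustrative, not part of data-api):

```go
package main

import (
	"encoding/csv"
	"fmt"
	"strings"
)

// Example mode=log output, copied from the response format above.
const csvData = `ts_epoch,ts,offset,step,score,monitor_id,monitor_name,rtt,leap,error
1640995200,2022-01-01 12:00:00,0.001234,0.5,20.0,123,recentmedian.scores.ntp.dev,45.6,,
`

// parseScoreLog reads the CSV into one map per row, keyed by header name.
func parseScoreLog(data string) ([]map[string]string, error) {
	r := csv.NewReader(strings.NewReader(data))
	rows, err := r.ReadAll()
	if err != nil {
		return nil, err
	}
	header := rows[0]
	out := make([]map[string]string, 0, len(rows)-1)
	for _, row := range rows[1:] {
		m := map[string]string{}
		for i, h := range header {
			m[h] = row[i]
		}
		out = append(out, m)
	}
	return out, nil
}

func main() {
	recs, err := parseScoreLog(csvData)
	if err != nil {
		panic(err)
	}
	fmt.Println(recs[0]["monitor_name"], recs[0]["score"])
}
```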
#### Error Responses
- `404 Not Found`: Invalid mode, server not found, or monitor not found
- `500 Internal Server Error`: Database error
#### Cache Control
Dynamic based on data freshness:
- Recent data: `s-maxage=90,max-age=120`
- Older data: `s-maxage=260,max-age=360`
---
### 5. Zone Counts
**GET** `/api/zone/counts/{zone_name}`
Returns historical server count and network capacity data for an NTP pool zone.
#### Path Parameters
- `zone_name`: Zone name (e.g., "us", "europe", "@" for global)
#### Query Parameters
- `limit`: Maximum number of date entries to return
#### Response Format
```json
{
  "history": [
    {
      "d": "2022-01-01",
      "ts": 1640995200,
      "rc": 450,
      "ac": 380,
      "w": 12500,
      "iv": "v4"
    }
  ]
}
```
#### Response Fields
- `history`: Array of historical data points
- `d`: Date in YYYY-MM-DD format
- `ts`: Unix timestamp
- `rc`: Registered server count
- `ac`: Active server count
- `w`: Network capacity (netspeed active)
- `iv`: IP version ("v4" or "v6")
#### Data Sampling
When `limit` is specified, the API samples data points across the available history to provide representative coverage while staying within the limit.
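The sampling strategy itself isn't specified in this document; as a rough illustration of the idea, evenly spaced selection under a limit could look like:

```go
package main

import "fmt"

// sampleEvenly keeps at most limit points, spread evenly across the series.
// A sketch of representative sampling, not the data-api's actual algorithm.
func sampleEvenly(points []int, limit int) []int {
	if limit <= 0 || len(points) <= limit {
		return points
	}
	if limit == 1 {
		return points[:1]
	}
	out := make([]int, 0, limit)
	step := float64(len(points)-1) / float64(limit-1)
	for i := 0; i < limit; i++ {
		// Round to the nearest source index; first and last are always kept.
		out = append(out, points[int(float64(i)*step+0.5)])
	}
	return out
}

func main() {
	days := []int{0, 1, 2, 3, 4, 5, 6, 7, 8, 9}
	fmt.Println(sampleEvenly(days, 5))
}
```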
#### Error Responses
- `404 Not Found`: Zone not found
- `500 Internal Server Error`: Database error
#### Cache Control
- `s-maxage=28800, max-age=7200`
---
### 6. Graph Images
**GET** `/graph/{server}/{type}`
Returns generated graph images for server visualization.
#### Path Parameters
- `server`: Server IP address
- `type`: Graph type (currently only "offset.png" supported)
#### Response
- **Content-Type**: `image/png` or upstream service content type
- **Body**: Binary image data
#### Features
- Canonical URL enforcement (redirects if server IP format is non-canonical)
- Query parameter removal (redirects to clean URLs)
- Upstream service integration via HTTP proxy
#### Error Responses
- `404 Not Found`: Invalid image type or server not found
- `500 Internal Server Error`: Upstream service error
#### Cache Control
- Success: `public,max-age=1800,s-maxage=1350`
- Errors: `public,max-age=240`
---
### 7. Server Score History (v2) - Enhanced Time Range API
**GET** `/api/v2/server/scores/{server}/{mode}`
**🆕 Recommended API** - Returns historical scoring data for an NTP server in Grafana-compatible table format with enhanced time range support and relative time expressions.
#### Path Parameters
- `server`: Server IP address or ID
- `mode`: Response format (`json` only)
#### Query Parameters
- `from`: Start time (required) - Unix timestamp or relative time (e.g., "-3d", "-2h", "-30m")
- `to`: End time (required) - Unix timestamp or relative time (e.g., "-1d", "-1h", "0s")
- `maxDataPoints`: Maximum data points to return (default: 50000, max: 50000)
- `monitor`: Monitor filter (ID, name prefix, or "*" for all monitors)
- `interval`: Reserved for a future downsampling interval (not yet implemented)
#### Time Format Support
The v2 API supports both Unix timestamps and relative time expressions:
**Unix Timestamps:**
- `from=1753500964&to=1753587364` - Standard Unix seconds
**Relative Time Expressions:**
- `from=-3d&to=-1d` - From 3 days ago to 1 day ago
- `from=-2h&to=-30m` - From 2 hours ago to 30 minutes ago
- `from=-1d&to=0s` - From 1 day ago to now
**Supported Units:**
- `s` - seconds
- `m` - minutes
- `h` - hours
- `d` - days
**Format:** `[-]<number><unit>` (negative sign for past, no sign for future)
#### Response Format
Grafana table format optimized for visualization:
```json
[
  {
    "target": "monitor{name=zakim1-yfhw4a}",
    "tags": {
      "monitor_id": "126",
      "monitor_name": "zakim1-yfhw4a",
      "type": "monitor",
      "status": "active"
    },
    "columns": [
      {"text": "time", "type": "time"},
      {"text": "score", "type": "number"},
      {"text": "rtt", "type": "number", "unit": "ms"},
      {"text": "offset", "type": "number", "unit": "s"}
    ],
    "values": [
      [1753431667000, 20.0, 18.865, -0.000267],
      [1753431419000, 20.0, 18.96, -0.000390],
      [1753431151000, 20.0, 18.073, -0.000768]
    ]
  }
]
```
#### Response Structure
- **One series per monitor**: Efficient grouping by monitor ID
- **Table format**: All metrics (time, score, rtt, offset) in columns
- **Timestamps**: Converted to milliseconds for Grafana compatibility
- **Null handling**: Null RTT/offset values preserved as `null`
#### Limits and Constraints
- **Data points**: Maximum 50,000 records per request
- **Time range**: Maximum 90 days per request
- **Minimum range**: 1 second
- **Data source**: ClickHouse only (for better time range performance)
#### Example Requests
**Recent data with relative times:**
```
GET /api/v2/server/scores/192.0.2.1/json?from=-3d&to=-1h&monitor=*
```
**Specific time range:**
```
GET /api/v2/server/scores/192.0.2.1/json?from=1753500000&to=1753586400&monitor=recentmedian
```
**All monitors, last 24 hours:**
```
GET /api/v2/server/scores/192.0.2.1/json?from=-1d&to=0s&monitor=*&maxDataPoints=10000
```
#### Error Responses
- `400 Bad Request`: Invalid time format, range too large/small, or invalid parameters
- `404 Not Found`: Server not found, invalid mode, or monitor not found
- `500 Internal Server Error`: Database or internal error
#### Cache Control
Dynamic caching based on data characteristics:
- Recent data: `s-maxage=90,max-age=120`
- Older data: `s-maxage=260,max-age=360`
- Empty results: `s-maxage=260,max-age=360`
#### Comparison with Legacy API
The v2 API offers significant improvements over `/api/server/scores/{server}/{mode}`:
| Feature | Legacy API | v2 API |
|---------|------------|--------|
| **Record limit** | 10,000 | 50,000 |
| **Time format** | Unix timestamps only | Unix timestamps + relative time |
| **Response format** | Legacy JSON/CSV | Grafana table format |
| **Time range** | Limited by `since` parameter | Full `from`/`to` range support |
| **Maximum range** | No explicit limit | 90 days |
| **Performance** | MySQL + ClickHouse | ClickHouse optimized |
#### Migration Guide
To migrate from legacy API to v2:
**Legacy:**
```
/api/server/scores/192.0.2.1/json?limit=10000&since=1753500000&monitor=*
```
**V2 equivalent:**
```
/api/v2/server/scores/192.0.2.1/json?from=1753500000&to=0s&monitor=*&maxDataPoints=10000
```
**V2 with relative time:**
```
/api/v2/server/scores/192.0.2.1/json?from=-3d&to=-1h&monitor=*
```
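When constructing v2 URLs programmatically, `net/url` keeps parameter escaping correct (the scheme and host below follow the documented production base URL; the helper itself is just a sketch):

```go
package main

import (
	"fmt"
	"net/url"
)

// v2ScoresURL builds a v2 scores request URL; note that "*" must be
// percent-encoded in query strings, which url.Values handles for us.
func v2ScoresURL(server, from, to, monitor string, maxDataPoints int) string {
	q := url.Values{}
	q.Set("from", from)
	q.Set("to", to)
	q.Set("monitor", monitor)
	if maxDataPoints > 0 {
		q.Set("maxDataPoints", fmt.Sprint(maxDataPoints))
	}
	u := url.URL{
		Scheme:   "https",
		Host:     "www.ntppool.org",
		Path:     "/api/v2/server/scores/" + server + "/json",
		RawQuery: q.Encode(),
	}
	return u.String()
}

func main() {
	fmt.Println(v2ScoresURL("192.0.2.1", "-3d", "-1h", "*", 0))
}
```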
---
## Health Check Endpoints
### Health Check
**GET** `:9019/health`
Returns server health status by testing database connections.
#### Query Parameters
- `reset`: Boolean to reset database connection pool
#### Response
- `200 OK`: "ok" - All systems healthy
- `503 Service Unavailable`: "db ping err" - Database connectivity issues
### Metrics
**GET** `:9020/metrics`
Prometheus metrics endpoint for monitoring and observability.
---
## Error Handling
### Standard HTTP Status Codes
- `200 OK`: Successful request
- `308 Permanent Redirect`: URL canonicalization
- `400 Bad Request`: Invalid request parameters
- `404 Not Found`: Resource not found
- `500 Internal Server Error`: Server-side error
- `503 Service Unavailable`: Service temporarily unavailable
### Error Response Format
Most endpoints return plain text error messages for non-2xx responses. Some endpoints may return JSON error objects.
---
## Data Sources
The API integrates multiple data sources:
- **MySQL**: Operational data (servers, zones, accounts, current scores)
- **ClickHouse**: Analytics data (DNS query logs, historical scoring data)
Different endpoints may use different data sources, and some endpoints allow source selection via query parameters.
---
## Rate Limiting and Caching
The API implements extensive caching at multiple levels:
- **Response-level caching**: Each endpoint sets appropriate `Cache-Control` headers
- **Database query optimization**: Efficient queries with proper indexing
- **CDN integration**: Headers configured for CDN caching
Cache durations vary by endpoint and data freshness, ranging from 30 seconds for real-time data to 8 hours for historical data.

@@ -4,8 +4,7 @@ generate: sqlc
 sqlc:
	go tool sqlc compile
	go tool sqlc generate
-	go tool gowrap gen -t opentelemetry -i QuerierTx -p ./ntpdb -o ./ntpdb/otel.go
-	#go tool mockery --dir ntpdb --name QuerierTx --config /dev/null
+	go tool gowrap gen -g -t opentelemetry -i QuerierTx -p ./ntpdb -o ./ntpdb/otel.go
 sign:
	drone sign --save ntppool/data-api

@@ -24,15 +24,16 @@ type ServerTotals map[string]uint64
 func (s ServerQueries) Len() int {
 	return len(s)
 }

 func (s ServerQueries) Swap(i, j int) {
 	s[i], s[j] = s[j], s[i]
 }

 func (s ServerQueries) Less(i, j int) bool {
 	return s[i].Count > s[j].Count
 }

 func (d *ClickHouse) ServerAnswerCounts(ctx context.Context, serverIP string, days int) (ServerQueries, error) {
 	ctx, span := tracing.Tracer().Start(ctx, "ServerAnswerCounts")
 	defer span.End()

@@ -3,6 +3,7 @@ package chdb
 import (
 	"context"
 	"fmt"
+	"strings"
 	"time"

 	"github.com/ClickHouse/clickhouse-go/v2"
@@ -105,3 +106,129 @@ func (d *ClickHouse) Logscores(ctx context.Context, serverID, monitorID int, sin
 	return rv, nil
 }
+
+// LogscoresTimeRange queries log scores within a specific time range for Grafana integration
+func (d *ClickHouse) LogscoresTimeRange(ctx context.Context, serverID, monitorID int, from, to time.Time, limit int) ([]ntpdb.LogScore, error) {
+	log := logger.Setup()
+	ctx, span := tracing.Tracer().Start(ctx, "CH LogscoresTimeRange")
+	defer span.End()
+
+	args := []interface{}{serverID, from, to}
+	query := `select id,monitor_id,server_id,ts,
+		toFloat64(score),toFloat64(step),offset,
+		rtt,leap,warning,error
+	from log_scores
+	where
+		server_id = ?
+		and ts >= ?
+		and ts <= ?`
+
+	if monitorID > 0 {
+		query += " and monitor_id = ?"
+		args = append(args, monitorID)
+	}
+
+	// Always order by timestamp ASC for Grafana convention
+	query += " order by ts ASC"
+
+	// Apply limit to prevent memory issues
+	if limit > 0 {
+		query += " limit ?"
+		args = append(args, limit)
+	}
+
+	log.DebugContext(ctx, "clickhouse time range query",
+		"query", query,
+		"args", args,
+		"server_id", serverID,
+		"monitor_id", monitorID,
+		"from", from.Format(time.RFC3339),
+		"to", to.Format(time.RFC3339),
+		"limit", limit,
+		"full_sql_with_params", func() string {
+			// Build a readable SQL query with parameters substituted for debugging
+			sqlDebug := query
+			paramIndex := 0
+			for strings.Contains(sqlDebug, "?") && paramIndex < len(args) {
+				var replacement string
+				switch v := args[paramIndex].(type) {
+				case int:
+					replacement = fmt.Sprintf("%d", v)
+				case time.Time:
+					replacement = fmt.Sprintf("'%s'", v.Format("2006-01-02 15:04:05"))
+				default:
+					replacement = fmt.Sprintf("'%v'", v)
+				}
+				sqlDebug = strings.Replace(sqlDebug, "?", replacement, 1)
+				paramIndex++
+			}
+			return sqlDebug
+		}(),
+	)
+
+	rows, err := d.Scores.Query(
+		clickhouse.Context(
+			ctx, clickhouse.WithSpan(span.SpanContext()),
+		),
+		query, args...,
+	)
+	if err != nil {
+		log.ErrorContext(ctx, "time range query error", "err", err)
+		return nil, fmt.Errorf("database error")
+	}
+
+	rv := []ntpdb.LogScore{}
+
+	for rows.Next() {
+		row := ntpdb.LogScore{}
+		var leap uint8
+		if err := rows.Scan(
+			&row.ID,
+			&row.MonitorID,
+			&row.ServerID,
+			&row.Ts,
+			&row.Score,
+			&row.Step,
+			&row.Offset,
+			&row.Rtt,
+			&leap,
+			&row.Attributes.Warning,
+			&row.Attributes.Error,
+		); err != nil {
+			log.Error("could not parse row", "err", err)
+			continue
+		}
+		row.Attributes.Leap = int8(leap)
+		rv = append(rv, row)
+	}
+
+	log.InfoContext(ctx, "time range query results",
+		"rows_returned", len(rv),
+		"server_id", serverID,
+		"monitor_id", monitorID,
+		"time_range", fmt.Sprintf("%s to %s", from.Format(time.RFC3339), to.Format(time.RFC3339)),
+		"limit", limit,
+		"sample_rows", func() []map[string]interface{} {
+			samples := make([]map[string]interface{}, 0, 3)
+			for i, row := range rv {
+				if i >= 3 {
+					break
+				}
+				samples = append(samples, map[string]interface{}{
+					"id":           row.ID,
+					"monitor_id":   row.MonitorID,
+					"ts":           row.Ts.Time.Format(time.RFC3339),
+					"score":        row.Score,
+					"rtt_valid":    row.Rtt.Valid,
+					"offset_valid": row.Offset.Valid,
+				})
+			}
+			return samples
+		}(),
+	)
+
+	return rv, nil
+}

@@ -30,7 +30,7 @@ func NewCLI() *CLI {
 // RootCmd represents the base command when called without any subcommands
 func (cli *CLI) rootCmd() *cobra.Command {
-	var cmd = &cobra.Command{
+	cmd := &cobra.Command{
 		Use:   "data-api",
 		Short: "A brief description of your application",
 		// Uncomment the following line if your bare application
@@ -47,7 +47,6 @@ func (cli *CLI) rootCmd() *cobra.Command {
 // Execute adds all child commands to the root command and sets flags appropriately.
 // This is called by main.main(). It only needs to happen once to the rootCmd.
 func Execute() {
 	cli := NewCLI()
 	if err := cli.root.Execute(); err != nil {
@@ -57,7 +56,6 @@ func Execute() {
 }

 func (cli *CLI) init(cmd *cobra.Command) {
 	logger.Setup()
 	cmd.PersistentFlags().StringVar(&cfgFile, "database-config", "database.yaml", "config file (default is $HOME/.data-api.yaml)")
@@ -18,8 +18,7 @@ import (
 )

 func (cli *CLI) serverCmd() *cobra.Command {
-	var serverCmd = &cobra.Command{
+	serverCmd := &cobra.Command{
 		Use:   "server",
 		Short: "server starts the API server",
 		Long:  `starts the API server on (default) port 8000`,
go.mod

@@ -1,9 +1,11 @@
 module go.ntppool.org/data-api

-go 1.24
+go 1.25.0

 // replace github.com/samber/slog-echo => github.com/abh/slog-echo v0.0.0-20231024051244-af740639893e

+replace go.opentelemetry.io/otel/exporters/prometheus v0.59.1 => go.opentelemetry.io/otel/exporters/prometheus v0.59.0
+
 tool (
 	github.com/hexdigest/gowrap/cmd/gowrap
 	github.com/sqlc-dev/sqlc/cmd/sqlc
@@ -12,35 +14,35 @@ tool (
 require (
 	dario.cat/mergo v1.0.2
-	github.com/ClickHouse/clickhouse-go/v2 v2.37.2
-	github.com/go-sql-driver/mysql v1.9.3
+	github.com/ClickHouse/clickhouse-go/v2 v2.40.3
 	github.com/hashicorp/go-retryablehttp v0.7.8
+	github.com/jackc/pgx/v5 v5.7.6
 	github.com/labstack/echo-contrib v0.17.4
 	github.com/labstack/echo/v4 v4.13.4
-	github.com/samber/slog-echo v1.16.1
-	github.com/spf13/cobra v1.9.1
+	github.com/samber/slog-echo v1.17.2
+	github.com/spf13/cobra v1.10.1
 	go.ntppool.org/api v0.3.4
-	go.ntppool.org/common v0.4.3
-	go.opentelemetry.io/contrib/instrumentation/github.com/labstack/echo/otelecho v0.62.0
-	go.opentelemetry.io/contrib/instrumentation/net/http/httptrace/otelhttptrace v0.62.0
-	go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.62.0
-	go.opentelemetry.io/otel v1.37.0
-	go.opentelemetry.io/otel/trace v1.37.0
-	golang.org/x/sync v0.15.0
+	go.ntppool.org/common v0.6.3-0.20251129195245-283d3936f6d0
+	go.opentelemetry.io/contrib/instrumentation/github.com/labstack/echo/otelecho v0.63.0
+	go.opentelemetry.io/contrib/instrumentation/net/http/httptrace/otelhttptrace v0.63.0
+	go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.63.0
+	go.opentelemetry.io/otel v1.38.0
+	go.opentelemetry.io/otel/trace v1.38.0
+	golang.org/x/sync v0.17.0
 	gopkg.in/yaml.v3 v3.0.1
 )

 require (
-	cel.dev/expr v0.23.0 // indirect
+	cel.dev/expr v0.24.0 // indirect
 	filippo.io/edwards25519 v1.1.0 // indirect
-	github.com/ClickHouse/ch-go v0.66.1 // indirect
+	github.com/ClickHouse/ch-go v0.68.0 // indirect
 	github.com/Masterminds/goutils v1.1.1 // indirect
 	github.com/Masterminds/semver/v3 v3.1.1 // indirect
 	github.com/Masterminds/sprig/v3 v3.2.2 // indirect
 	github.com/andybalholm/brotli v1.2.0 // indirect
 	github.com/antlr4-go/antlr/v4 v4.13.1 // indirect
 	github.com/beorn7/perks v1.0.1 // indirect
-	github.com/cenkalti/backoff/v5 v5.0.2 // indirect
+	github.com/cenkalti/backoff/v5 v5.0.3 // indirect
 	github.com/cespare/xxhash/v2 v2.3.0 // indirect
 	github.com/cubicdaiya/gonp v1.0.4 // indirect
 	github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
@@ -51,9 +53,10 @@ require (
 	github.com/go-faster/errors v0.7.1 // indirect
 	github.com/go-logr/logr v1.4.3 // indirect
 	github.com/go-logr/stdr v1.2.2 // indirect
+	github.com/go-sql-driver/mysql v1.9.3 // indirect
 	github.com/google/cel-go v0.24.1 // indirect
 	github.com/google/uuid v1.6.0 // indirect
-	github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.1 // indirect
+	github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.2 // indirect
 	github.com/hashicorp/go-cleanhttp v0.5.2 // indirect
 	github.com/hexdigest/gowrap v1.4.2 // indirect
 	github.com/huandu/xstrings v1.5.0 // indirect
@@ -61,7 +64,6 @@ require (
 	github.com/inconshreveable/mousetrap v1.1.0 // indirect
 	github.com/jackc/pgpassfile v1.0.0 // indirect
 	github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761 // indirect
-	github.com/jackc/pgx/v5 v5.7.4 // indirect
 	github.com/jackc/puddle/v2 v2.2.2 // indirect
 	github.com/jinzhu/inflection v1.0.0 // indirect
 	github.com/klauspost/compress v1.18.0 // indirect
@@ -72,7 +74,7 @@ require (
 	github.com/mitchellh/reflectwalk v1.0.2 // indirect
 	github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
 	github.com/ncruces/go-strftime v0.1.9 // indirect
-	github.com/paulmach/orb v0.11.1 // indirect
+	github.com/paulmach/orb v0.12.0 // indirect
 	github.com/pganalyze/pg_query_go/v6 v6.1.0 // indirect
 	github.com/pierrec/lz4/v4 v4.1.22 // indirect
 	github.com/pingcap/errors v0.11.5-0.20240311024730-e056997136bb // indirect
@@ -81,20 +83,21 @@ require (
 	github.com/pingcap/tidb/pkg/parser v0.0.0-20250324122243-d51e00e5bbf0 // indirect
 	github.com/pkg/errors v0.9.1 // indirect
 	github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
-	github.com/prometheus/client_golang v1.22.0 // indirect
+	github.com/prometheus/client_golang v1.23.2 // indirect
 	github.com/prometheus/client_model v0.6.2 // indirect
-	github.com/prometheus/common v0.65.0 // indirect
+	github.com/prometheus/common v0.66.1 // indirect
+	github.com/prometheus/otlptranslator v1.0.0 // indirect
 	github.com/prometheus/procfs v0.17.0 // indirect
 	github.com/remychantenay/slog-otel v1.3.4 // indirect
 	github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec // indirect
 	github.com/riza-io/grpc-go v0.2.0 // indirect
 	github.com/samber/lo v1.51.0 // indirect
 	github.com/samber/slog-common v0.19.0 // indirect
-	github.com/samber/slog-multi v1.4.1 // indirect
-	github.com/segmentio/asm v1.2.0 // indirect
+	github.com/samber/slog-multi v1.5.0 // indirect
+	github.com/segmentio/asm v1.2.1 // indirect
 	github.com/shopspring/decimal v1.4.0 // indirect
 	github.com/spf13/cast v1.4.1 // indirect
-	github.com/spf13/pflag v1.0.6 // indirect
+	github.com/spf13/pflag v1.0.10 // indirect
 	github.com/sqlc-dev/sqlc v1.29.0 // indirect
 	github.com/stoewer/go-strcase v1.2.0 // indirect
 	github.com/tetratelabs/wazero v1.9.0 // indirect
@@ -102,42 +105,44 @@ require (
 	github.com/valyala/fasttemplate v1.2.2 // indirect
 	github.com/wasilibs/go-pgquery v0.0.0-20250409022910-10ac41983c07 // indirect
 	github.com/wasilibs/wazero-helpers v0.0.0-20240620070341-3dff1577cd52 // indirect
-	go.opentelemetry.io/auto/sdk v1.1.0 // indirect
-	go.opentelemetry.io/contrib/bridges/otelslog v0.12.0 // indirect
-	go.opentelemetry.io/contrib/bridges/prometheus v0.62.0 // indirect
-	go.opentelemetry.io/contrib/exporters/autoexport v0.62.0 // indirect
-	go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploggrpc v0.13.0 // indirect
-	go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploghttp v0.13.0 // indirect
-	go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v1.37.0 // indirect
-	go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.37.0 // indirect
-	go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.37.0 // indirect
+	go.opentelemetry.io/auto/sdk v1.2.1 // indirect
+	go.opentelemetry.io/contrib/bridges/otelslog v0.13.0 // indirect
+	go.opentelemetry.io/contrib/bridges/prometheus v0.63.0 // indirect
+	go.opentelemetry.io/contrib/exporters/autoexport v0.63.0 // indirect
+	go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploggrpc v0.14.0 // indirect
+	go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploghttp v0.14.0 // indirect
+	go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v1.38.0 // indirect
+	go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.38.0 // indirect
+	go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.38.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.37.0 // indirect go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.38.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.37.0 // indirect go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.38.0 // indirect
go.opentelemetry.io/otel/exporters/prometheus v0.59.0 // indirect go.opentelemetry.io/otel/exporters/prometheus v0.60.0 // indirect
go.opentelemetry.io/otel/exporters/stdout/stdoutlog v0.13.0 // indirect go.opentelemetry.io/otel/exporters/stdout/stdoutlog v0.14.0 // indirect
go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.37.0 // indirect go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.38.0 // indirect
go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.37.0 // indirect go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.38.0 // indirect
go.opentelemetry.io/otel/log v0.13.0 // indirect go.opentelemetry.io/otel/log v0.14.0 // indirect
go.opentelemetry.io/otel/metric v1.37.0 // indirect go.opentelemetry.io/otel/metric v1.38.0 // indirect
go.opentelemetry.io/otel/sdk v1.37.0 // indirect go.opentelemetry.io/otel/sdk v1.38.0 // indirect
go.opentelemetry.io/otel/sdk/log v0.13.0 // indirect go.opentelemetry.io/otel/sdk/log v0.14.0 // indirect
go.opentelemetry.io/otel/sdk/metric v1.37.0 // indirect go.opentelemetry.io/otel/sdk/metric v1.38.0 // indirect
go.opentelemetry.io/proto/otlp v1.7.0 // indirect go.opentelemetry.io/proto/otlp v1.8.0 // indirect
go.uber.org/atomic v1.11.0 // indirect go.uber.org/atomic v1.11.0 // indirect
go.uber.org/multierr v1.11.0 // indirect go.uber.org/multierr v1.11.0 // indirect
go.uber.org/zap v1.27.0 // indirect go.uber.org/zap v1.27.0 // indirect
golang.org/x/crypto v0.39.0 // indirect go.yaml.in/yaml/v2 v2.4.3 // indirect
go.yaml.in/yaml/v3 v3.0.4 // indirect
golang.org/x/crypto v0.42.0 // indirect
golang.org/x/exp v0.0.0-20250305212735-054e65f0b394 // indirect golang.org/x/exp v0.0.0-20250305212735-054e65f0b394 // indirect
golang.org/x/mod v0.25.0 // indirect golang.org/x/mod v0.28.0 // indirect
golang.org/x/net v0.41.0 // indirect golang.org/x/net v0.44.0 // indirect
golang.org/x/sys v0.33.0 // indirect golang.org/x/sys v0.36.0 // indirect
golang.org/x/text v0.26.0 // indirect golang.org/x/text v0.29.0 // indirect
golang.org/x/time v0.12.0 // indirect golang.org/x/time v0.13.0 // indirect
golang.org/x/tools v0.33.0 // indirect golang.org/x/tools v0.37.0 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20250603155806-513f23925822 // indirect google.golang.org/genproto/googleapis/api v0.0.0-20250922171735-9219d122eba9 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20250603155806-513f23925822 // indirect google.golang.org/genproto/googleapis/rpc v0.0.0-20250922171735-9219d122eba9 // indirect
google.golang.org/grpc v1.73.0 // indirect google.golang.org/grpc v1.75.1 // indirect
google.golang.org/protobuf v1.36.6 // indirect google.golang.org/protobuf v1.36.9 // indirect
gopkg.in/natefinch/lumberjack.v2 v2.2.1 // indirect gopkg.in/natefinch/lumberjack.v2 v2.2.1 // indirect
modernc.org/libc v1.62.1 // indirect modernc.org/libc v1.62.1 // indirect
modernc.org/mathutil v1.7.1 // indirect modernc.org/mathutil v1.7.1 // indirect

go.sum

@@ -1,14 +1,14 @@
-cel.dev/expr v0.23.0 h1:wUb94w6OYQS4uXraxo9U+wUAs9jT47Xvl4iPgAwM2ss=
+cel.dev/expr v0.24.0 h1:56OvJKSH3hDGL0ml5uSxZmz3/3Pq4tJ+fb1unVLAFcY=
-cel.dev/expr v0.23.0/go.mod h1:hLPLo1W4QUmuYdA72RBX06QTs6MXw941piREPl3Yfiw=
+cel.dev/expr v0.24.0/go.mod h1:hLPLo1W4QUmuYdA72RBX06QTs6MXw941piREPl3Yfiw=
 dario.cat/mergo v1.0.2 h1:85+piFYR1tMbRrLcDwR18y4UKJ3aH1Tbzi24VRW1TK8=
 dario.cat/mergo v1.0.2/go.mod h1:E/hbnu0NxMFBjpMIE34DRGLWqDy0g5FuKDhCb31ngxA=
 filippo.io/edwards25519 v1.1.0 h1:FNf4tywRC1HmFuKW5xopWpigGjJKiJSV0Cqo0cJWDaA=
 filippo.io/edwards25519 v1.1.0/go.mod h1:BxyFTGdWcka3PhytdK4V28tE5sGfRvvvRV7EaN4VDT4=
 github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
-github.com/ClickHouse/ch-go v0.66.1 h1:LQHFslfVYZsISOY0dnOYOXGkOUvpv376CCm8g7W74A4=
+github.com/ClickHouse/ch-go v0.68.0 h1:zd2VD8l2aVYnXFRyhTyKCrxvhSz1AaY4wBUXu/f0GiU=
-github.com/ClickHouse/ch-go v0.66.1/go.mod h1:NEYcg3aOFv2EmTJfo4m2WF7sHB/YFbLUuIWv9iq76xY=
+github.com/ClickHouse/ch-go v0.68.0/go.mod h1:C89Fsm7oyck9hr6rRo5gqqiVtaIY6AjdD0WFMyNRQ5s=
-github.com/ClickHouse/clickhouse-go/v2 v2.37.2 h1:wRLNKoynvHQEN4znnVHNLaYnrqVc9sGJmGYg+GGCfto=
+github.com/ClickHouse/clickhouse-go/v2 v2.40.3 h1:46jB4kKwVDUOnECpStKMVXxvR0Cg9zeV9vdbPjtn6po=
-github.com/ClickHouse/clickhouse-go/v2 v2.37.2/go.mod h1:pH2zrBGp5Y438DMwAxXMm1neSXPPjSI7tD4MURVULw8=
+github.com/ClickHouse/clickhouse-go/v2 v2.40.3/go.mod h1:qO0HwvjCnTB4BPL/k6EE3l4d9f/uF+aoimAhJX70eKA=
 github.com/Masterminds/goutils v1.1.1 h1:5nUrii3FMTL5diU80unEVvNevw1nH4+ZV4DSLVJLSYI=
 github.com/Masterminds/goutils v1.1.1/go.mod h1:8cTjp+g8YejhMuvIA5y2vz3BpJxksy863GQaJW2MFNU=
 github.com/Masterminds/semver/v3 v3.1.1 h1:hLg3sBzpNErnxhQtUy/mmLR2I9foDujNK030IGemrRc=
@@ -22,8 +22,8 @@ github.com/antlr4-go/antlr/v4 v4.13.1/go.mod h1:GKmUxMtwp6ZgGwZSva4eWPC5mS6vUAmO
 github.com/benbjohnson/clock v1.1.0/go.mod h1:J11/hYXuz8f4ySSvYwY0FKfm+ezbsZBKZxNJlLklBHA=
 github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
 github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
-github.com/cenkalti/backoff/v5 v5.0.2 h1:rIfFVxEf1QsI7E1ZHfp/B4DF/6QBAUhmgkxc0H7Zss8=
+github.com/cenkalti/backoff/v5 v5.0.3 h1:ZN+IMa753KfX5hd8vVaMixjnqRZ3y8CuJKRKj1xcsSM=
-github.com/cenkalti/backoff/v5 v5.0.2/go.mod h1:rkhZdG3JZukswDf7f0cwqPNk4K0sa+F97BxZthm/crw=
+github.com/cenkalti/backoff/v5 v5.0.3/go.mod h1:rkhZdG3JZukswDf7f0cwqPNk4K0sa+F97BxZthm/crw=
 github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
 github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
 github.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g=
@@ -70,8 +70,8 @@ github.com/google/pprof v0.0.0-20250317173921-a4b03ec1a45e/go.mod h1:boTsfXsheKC
 github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
 github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
 github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
-github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.1 h1:X5VWvz21y3gzm9Nw/kaUeku/1+uBhcekkmy4IkffJww=
+github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.2 h1:8Tjv8EJ+pM1xP8mK6egEbD1OgnVTyacbefKhmbLhIhU=
-github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.1/go.mod h1:Zanoh4+gvIgluNqcfMVTJueD4wSS5hT7zTt4Mrutd90=
+github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.2/go.mod h1:pkJQ2tZHJ0aFOVEEot6oZmaVEZcRme73eIFmhiVuRWs=
 github.com/hashicorp/go-cleanhttp v0.5.2 h1:035FKYIWjmULyFRBKPs8TBQoi0x6d9G4xc9neXJWAZQ=
 github.com/hashicorp/go-cleanhttp v0.5.2/go.mod h1:kO/YDlP8L1346E6Sodw+PrpBSV4/SoxCXGY6BqNFT48=
 github.com/hashicorp/go-hclog v1.6.3 h1:Qr2kF+eVWjTiYmU7Y31tYlP1h0q/X3Nl3tPGdaB11/k=
@@ -92,8 +92,8 @@ github.com/jackc/pgpassfile v1.0.0 h1:/6Hmqy13Ss2zCq62VdNG8tM1wchn8zjSGOBJ6icpsI
 github.com/jackc/pgpassfile v1.0.0/go.mod h1:CEx0iS5ambNFdcRtxPj5JhEz+xB6uRky5eyVu/W2HEg=
 github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761 h1:iCEnooe7UlwOQYpKFhBabPMi4aNAfoODPEFNiAnClxo=
 github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761/go.mod h1:5TJZWKEWniPve33vlWYSoGYefn3gLQRzjfDlhSJ9ZKM=
-github.com/jackc/pgx/v5 v5.7.4 h1:9wKznZrhWa2QiHL+NjTSPP6yjl3451BX3imWDnokYlg=
+github.com/jackc/pgx/v5 v5.7.6 h1:rWQc5FwZSPX58r1OQmkuaNicxdmExaEz5A2DO2hUuTk=
-github.com/jackc/pgx/v5 v5.7.4/go.mod h1:ncY89UGWxg82EykZUwSpUKEfccBGGYq1xjrOpsbsfGQ=
+github.com/jackc/pgx/v5 v5.7.6/go.mod h1:aruU7o91Tc2q2cFp5h4uP3f6ztExVpyVv88Xl/8Vl8M=
 github.com/jackc/puddle/v2 v2.2.2 h1:PR8nw+E/1w0GLuRFSmiioY6UooMp6KJv0/61nB7icHo=
 github.com/jackc/puddle/v2 v2.2.2/go.mod h1:vriiEXHvEE654aYKXXjOvZM39qJ0q+azkZFrfEOc3H4=
 github.com/jinzhu/inflection v1.0.0 h1:K317FqzuhWc8YvSVlFMCCUb36O/S9MCKRDI7QkRKD/E=
@@ -133,8 +133,8 @@ github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq
 github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
 github.com/ncruces/go-strftime v0.1.9 h1:bY0MQC28UADQmHmaF5dgpLmImcShSi2kHU9XLdhx/f4=
 github.com/ncruces/go-strftime v0.1.9/go.mod h1:Fwc5htZGVVkseilnfgOVb9mKy6w1naJmn9CehxcKcls=
-github.com/paulmach/orb v0.11.1 h1:3koVegMC4X/WeiXYz9iswopaTwMem53NzTJuTF20JzU=
+github.com/paulmach/orb v0.12.0 h1:z+zOwjmG3MyEEqzv92UN49Lg1JFYx0L9GpGKNVDKk1s=
-github.com/paulmach/orb v0.11.1/go.mod h1:5mULz1xQfs3bmQm63QEJA6lNGujuRafwA5S/EnuLaLU=
+github.com/paulmach/orb v0.12.0/go.mod h1:5mULz1xQfs3bmQm63QEJA6lNGujuRafwA5S/EnuLaLU=
 github.com/paulmach/protoscan v0.2.1/go.mod h1:SpcSwydNLrxUGSDvXvO0P7g7AuhJ7lcKfDlhJCDw2gY=
 github.com/pganalyze/pg_query_go/v6 v6.1.0 h1:jG5ZLhcVgL1FAw4C/0VNQaVmX1SUJx71wBGdtTtBvls=
 github.com/pganalyze/pg_query_go/v6 v6.1.0/go.mod h1:nvTHIuoud6e1SfrUaFwHqT0i4b5Nr+1rPWVds3B5+50=
@@ -155,12 +155,14 @@ github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINE
 github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
 github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
 github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
-github.com/prometheus/client_golang v1.22.0 h1:rb93p9lokFEsctTys46VnV1kLCDpVZ0a/Y92Vm0Zc6Q=
+github.com/prometheus/client_golang v1.23.2 h1:Je96obch5RDVy3FDMndoUsjAhG5Edi49h0RJWRi/o0o=
-github.com/prometheus/client_golang v1.22.0/go.mod h1:R7ljNsLXhuQXYZYtw6GAE9AZg8Y7vEW5scdCXrWRXC0=
+github.com/prometheus/client_golang v1.23.2/go.mod h1:Tb1a6LWHB3/SPIzCoaDXI4I8UHKeFTEQ1YCr+0Gyqmg=
 github.com/prometheus/client_model v0.6.2 h1:oBsgwpGs7iVziMvrGhE53c/GrLUsZdHnqNwqPLxwZyk=
 github.com/prometheus/client_model v0.6.2/go.mod h1:y3m2F6Gdpfy6Ut/GBsUqTWZqCUvMVzSfMLjcu6wAwpE=
-github.com/prometheus/common v0.65.0 h1:QDwzd+G1twt//Kwj/Ww6E9FQq1iVMmODnILtW1t2VzE=
+github.com/prometheus/common v0.66.1 h1:h5E0h5/Y8niHc5DlaLlWLArTQI7tMrsfQjHV+d9ZoGs=
-github.com/prometheus/common v0.65.0/go.mod h1:0gZns+BLRQ3V6NdaerOhMbwwRbNh9hkGINtQAsP5GS8=
+github.com/prometheus/common v0.66.1/go.mod h1:gcaUsgf3KfRSwHY4dIMXLPV0K/Wg1oZ8+SbZk/HH/dA=
+github.com/prometheus/otlptranslator v1.0.0 h1:s0LJW/iN9dkIH+EnhiD3BlkkP5QVIUVEoIwkU+A6qos=
+github.com/prometheus/otlptranslator v1.0.0/go.mod h1:vRYWnXvI6aWGpsdY/mOT/cbeVRBlPWtBNDb7kGR3uKM=
 github.com/prometheus/procfs v0.17.0 h1:FuLQ+05u4ZI+SS/w9+BWEM2TXiHKsUQ9TADiRH7DuK0=
 github.com/prometheus/procfs v0.17.0/go.mod h1:oPQLaDAMRbA+u8H5Pbfq+dl3VDAvHxMUOVhe0wYB2zw=
 github.com/remychantenay/slog-otel v1.3.4 h1:xoM41ayLff2U8zlK5PH31XwD7Lk3W9wKfl4+RcmKom4=
@@ -169,29 +171,30 @@ github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec h1:W09IVJc94
 github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec/go.mod h1:qqbHyh8v60DhA7CoWK5oRCqLrMHRGoxYCSS9EjAz6Eo=
 github.com/riza-io/grpc-go v0.2.0 h1:2HxQKFVE7VuYstcJ8zqpN84VnAoJ4dCL6YFhJewNcHQ=
 github.com/riza-io/grpc-go v0.2.0/go.mod h1:2bDvR9KkKC3KhtlSHfR3dAXjUMT86kg4UfWFyVGWqi8=
-github.com/rogpeppe/go-internal v1.13.1 h1:KvO1DLK/DRN07sQ1LQKScxyZJuNnedQ5/wKSR38lUII=
+github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ=
-github.com/rogpeppe/go-internal v1.13.1/go.mod h1:uMEvuHeurkdAXX61udpOXGD/AzZDWNMNyH2VO9fmH0o=
+github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc=
 github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
 github.com/samber/lo v1.51.0 h1:kysRYLbHy/MB7kQZf5DSN50JHmMsNEdeY24VzJFu7wI=
 github.com/samber/lo v1.51.0/go.mod h1:4+MXEGsJzbKGaUEQFKBq2xtfuznW9oz/WrgyzMzRoM0=
 github.com/samber/slog-common v0.19.0 h1:fNcZb8B2uOLooeYwFpAlKjkQTUafdjfqKcwcC89G9YI=
 github.com/samber/slog-common v0.19.0/go.mod h1:dTz+YOU76aH007YUU0DffsXNsGFQRQllPQh9XyNoA3M=
-github.com/samber/slog-echo v1.16.1 h1:5Q5IUROkFqKcu/qJM/13AP1d3gd1RS+Q/4EvKQU1fuo=
+github.com/samber/slog-echo v1.17.2 h1:/d1D2ZiJsaqaeyz3Yk9olCeFFpi4EIJZtnoMp5zt9fs=
-github.com/samber/slog-echo v1.16.1/go.mod h1:f+B3WR06saRXcaGRZ/I/UPCECDPqTUqadRIf7TmyRhI=
+github.com/samber/slog-echo v1.17.2/go.mod h1:4diugqPTk6iQdL7gZFJIyf6zGMLVMaGnCmNm+DBSMRU=
-github.com/samber/slog-multi v1.4.1 h1:OVBxOKcorBcGQVKjwlraA41JKWwHQyB/3KfzL3IJAYg=
+github.com/samber/slog-multi v1.5.0 h1:UDRJdsdb0R5vFQFy3l26rpX3rL3FEPJTJ2yKVjoiT1I=
-github.com/samber/slog-multi v1.4.1/go.mod h1:im2Zi3mH/ivSY5XDj6LFcKToRIWPw1OcjSVSdXt+2d0=
+github.com/samber/slog-multi v1.5.0/go.mod h1:im2Zi3mH/ivSY5XDj6LFcKToRIWPw1OcjSVSdXt+2d0=
-github.com/segmentio/asm v1.2.0 h1:9BQrFxC+YOHJlTlHGkTrFWf59nbL3XnCoFLTwDCI7ys=
+github.com/segmentio/asm v1.2.1 h1:DTNbBqs57ioxAD4PrArqftgypG4/qNpXoJx8TVXxPR0=
-github.com/segmentio/asm v1.2.0/go.mod h1:BqMnlJP91P8d+4ibuonYZw9mfnzI9HfxselHZr5aAcs=
+github.com/segmentio/asm v1.2.1/go.mod h1:BqMnlJP91P8d+4ibuonYZw9mfnzI9HfxselHZr5aAcs=
 github.com/shopspring/decimal v1.2.0/go.mod h1:DKyhrW/HYNuLGql+MJL6WCR6knT2jwCFRcu2hWCYk4o=
 github.com/shopspring/decimal v1.4.0 h1:bxl37RwXBklmTi0C79JfXCEBD1cqqHt0bbgBAGFp81k=
 github.com/shopspring/decimal v1.4.0/go.mod h1:gawqmDU56v4yIKSwfBSFip1HdCCXN8/+DMd9qYNcwME=
 github.com/spf13/cast v1.3.1/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
 github.com/spf13/cast v1.4.1 h1:s0hze+J0196ZfEMTs80N7UlFt0BDuQ7Q+JDnHiMWKdA=
 github.com/spf13/cast v1.4.1/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
-github.com/spf13/cobra v1.9.1 h1:CXSaggrXdbHK9CF+8ywj8Amf7PBRmPCOJugH954Nnlo=
+github.com/spf13/cobra v1.10.1 h1:lJeBwCfmrnXthfAupyUTzJ/J4Nc1RsHC/mSRU2dll/s=
-github.com/spf13/cobra v1.9.1/go.mod h1:nDyEzZ8ogv936Cinf6g1RU9MRY64Ir93oCnqb9wxYW0=
+github.com/spf13/cobra v1.10.1/go.mod h1:7SmJGaTHFVBY0jW4NXGluQoLvhqFQM+6XSKD+P4XaB0=
-github.com/spf13/pflag v1.0.6 h1:jFzHGLGAlb3ruxLB8MhbI6A8+AQX/2eW4qeyNZXNp2o=
-github.com/spf13/pflag v1.0.6/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
+github.com/spf13/pflag v1.0.9/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
+github.com/spf13/pflag v1.0.10 h1:4EBh2KAYBwaONj6b2Ye1GiHfwjqyROoF4RwYO+vPwFk=
+github.com/spf13/pflag v1.0.10/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
 github.com/sqlc-dev/sqlc v1.29.0 h1:HQctoD7y/i29Bao53qXO7CZ/BV9NcvpGpsJWvz9nKWs=
 github.com/sqlc-dev/sqlc v1.29.0/go.mod h1:BavmYw11px5AdPOjAVHmb9fctP5A8GTziC38wBF9tp0=
 github.com/stoewer/go-strcase v1.2.0 h1:Z2iHWqGXH00XYgqDmNgQbIBxf3wrNq0F3feEy0ainaU=
@@ -203,8 +206,8 @@ github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81P
 github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA=
 github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
 github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
-github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
+github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
-github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
+github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
 github.com/tetratelabs/wazero v1.9.0 h1:IcZ56OuxrtaEz8UYNRHBrUa9bYeX9oVY93KspZZBf/I=
 github.com/tetratelabs/wazero v1.9.0/go.mod h1:TSbcXCfFP0L2FGkRPxHphadXPjo1T6W+CseNNY7EkjM=
 github.com/tidwall/pretty v1.0.0/go.mod h1:XNkn88O1ChpSDQmQeStsy+sBenx6DDtFZJxhVysOjyk=
@@ -227,64 +230,66 @@ github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9dec
 go.mongodb.org/mongo-driver v1.11.4/go.mod h1:PTSz5yu21bkT/wXpkS7WR5f0ddqw5quethTUn9WM+2g=
 go.ntppool.org/api v0.3.4 h1:KeRyFhIRkjJwZif7hkpqEDEBmukyYGiOi2Fd6j3UzQ0=
 go.ntppool.org/api v0.3.4/go.mod h1:LFLAwnrc/JyjzKnjgf8tCOJhps6oFIjuledS3PCx7xc=
-go.ntppool.org/common v0.4.3 h1:IByoorl2RMNf6EBTORl3MOZB5mTSnjYBQxn44U3v4HA=
-go.ntppool.org/common v0.4.3/go.mod h1:8ILmR3KxpUSNofcw9EBG42HNf81Z9iu9Fg1Cj0f/WP0=
+go.ntppool.org/common v0.6.2 h1:TvxrpaBQpSYuvuRT24M/I1ZqFjh4woHJTqayCOxe+o8=
+go.ntppool.org/common v0.6.2/go.mod h1:Dkc2P5+aaCseC/cs0uD9elh4yTllqvyeZ1NNT/G/414=
+go.ntppool.org/common v0.6.3-0.20251129195245-283d3936f6d0 h1:Vbs/RgrwfdA9ZzGAkhFRaU7ZSEl8D28pk95iYhjzvyA=
+go.ntppool.org/common v0.6.3-0.20251129195245-283d3936f6d0/go.mod h1:Dkc2P5+aaCseC/cs0uD9elh4yTllqvyeZ1NNT/G/414=
-go.opentelemetry.io/auto/sdk v1.1.0 h1:cH53jehLUN6UFLY71z+NDOiNJqDdPRaXzTel0sJySYA=
-go.opentelemetry.io/auto/sdk v1.1.0/go.mod h1:3wSPjt5PWp2RhlCcmmOial7AvC4DQqZb7a7wCow3W8A=
+go.opentelemetry.io/auto/sdk v1.2.1 h1:jXsnJ4Lmnqd11kwkBV2LgLoFMZKizbCi5fNZ/ipaZ64=
+go.opentelemetry.io/auto/sdk v1.2.1/go.mod h1:KRTj+aOaElaLi+wW1kO/DZRXwkF4C5xPbEe3ZiIhN7Y=
-go.opentelemetry.io/contrib/bridges/otelslog v0.12.0 h1:lFM7SZo8Ce01RzRfnUFQZEYeWRf/MtOA3A5MobOqk2g=
-go.opentelemetry.io/contrib/bridges/otelslog v0.12.0/go.mod h1:Dw05mhFtrKAYu72Tkb3YBYeQpRUJ4quDgo2DQw3No5A=
+go.opentelemetry.io/contrib/bridges/otelslog v0.13.0 h1:bwnLpizECbPr1RrQ27waeY2SPIPeccCx/xLuoYADZ9s=
+go.opentelemetry.io/contrib/bridges/otelslog v0.13.0/go.mod h1:3nWlOiiqA9UtUnrcNk82mYasNxD8ehOspL0gOfEo6Y4=
-go.opentelemetry.io/contrib/bridges/prometheus v0.62.0 h1:0mfk3D3068LMGpIhxwc0BqRlBOBHVgTP9CygmnJM/TI=
-go.opentelemetry.io/contrib/bridges/prometheus v0.62.0/go.mod h1:hStk98NJy1wvlrXIqWsli+uELxRRseBMld+gfm2xPR4=
+go.opentelemetry.io/contrib/bridges/prometheus v0.63.0 h1:/Rij/t18Y7rUayNg7Id6rPrEnHgorxYabm2E6wUdPP4=
+go.opentelemetry.io/contrib/bridges/prometheus v0.63.0/go.mod h1:AdyDPn6pkbkt2w01n3BubRVk7xAsCRq1Yg1mpfyA/0E=
-go.opentelemetry.io/contrib/exporters/autoexport v0.62.0 h1:aCpZ6vvmOj5GHg1eUygjS/05mlQaEBsQDdTw5yT8EsE=
-go.opentelemetry.io/contrib/exporters/autoexport v0.62.0/go.mod h1:1xHkmmL3bQm8m86HVoZTdgK/LIY5JpxdAWjog6cdtUs=
+go.opentelemetry.io/contrib/exporters/autoexport v0.63.0 h1:NLnZybb9KkfMXPwZhd5diBYJoVxiO9Qa06dacEA7ySY=
+go.opentelemetry.io/contrib/exporters/autoexport v0.63.0/go.mod h1:OvRg7gm5WRSCtxzGSsrFHbDLToYlStHNZQ+iPNIyD6g=
-go.opentelemetry.io/contrib/instrumentation/github.com/labstack/echo/otelecho v0.62.0 h1:b3/7WwVpLaIBTXHz6vp04idQOu02K0MFrkhF2ls7DbQ=
-go.opentelemetry.io/contrib/instrumentation/github.com/labstack/echo/otelecho v0.62.0/go.mod h1:aHqs9aFRWZBvil6ClpaKd/+bZ+o30+Q7xjcgMaSvuRw=
+go.opentelemetry.io/contrib/instrumentation/github.com/labstack/echo/otelecho v0.63.0 h1:6YeICKmGrvgJ5th4+OMNpcuoB6q/Xs8gt0YCO7MUv1k=
+go.opentelemetry.io/contrib/instrumentation/github.com/labstack/echo/otelecho v0.63.0/go.mod h1:ZEA7j2B35siNV0T00aapacNzjz4tvOlNoHp0ncCfwNQ=
-go.opentelemetry.io/contrib/instrumentation/net/http/httptrace/otelhttptrace v0.62.0 h1:wCeciVlAfb5DC8MQl/DlmAv/FVPNpQgFvI/71+hatuc=
-go.opentelemetry.io/contrib/instrumentation/net/http/httptrace/otelhttptrace v0.62.0/go.mod h1:WfEApdZDMlLUAev/0QQpr8EJ/z0VWDKYZ5tF5RH5T1U=
+go.opentelemetry.io/contrib/instrumentation/net/http/httptrace/otelhttptrace v0.63.0 h1:2pn7OzMewmYRiNtv1doZnLo3gONcnMHlFnmOR8Vgt+8=
+go.opentelemetry.io/contrib/instrumentation/net/http/httptrace/otelhttptrace v0.63.0/go.mod h1:rjbQTDEPQymPE0YnRQp9/NuPwwtL0sesz/fnqRW/v84=
-go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.62.0 h1:Hf9xI/XLML9ElpiHVDNwvqI0hIFlzV8dgIr35kV1kRU=
-go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.62.0/go.mod h1:NfchwuyNoMcZ5MLHwPrODwUF1HWCXWrL31s8gSAdIKY=
+go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.63.0 h1:RbKq8BG0FI8OiXhBfcRtqqHcZcka+gU3cskNuf05R18=
+go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.63.0/go.mod h1:h06DGIukJOevXaj/xrNjhi/2098RZzcLTbc0jDAUbsg=
-go.opentelemetry.io/contrib/propagators/b3 v1.37.0 h1:0aGKdIuVhy5l4GClAjl72ntkZJhijf2wg1S7b5oLoYA=
-go.opentelemetry.io/contrib/propagators/b3 v1.37.0/go.mod h1:nhyrxEJEOQdwR15zXrCKI6+cJK60PXAkJ/jRyfhr2mg=
+go.opentelemetry.io/contrib/propagators/b3 v1.38.0 h1:uHsCCOSKl0kLrV2dLkFK+8Ywk9iKa/fptkytc6aFFEo=
+go.opentelemetry.io/contrib/propagators/b3 v1.38.0/go.mod h1:wMRSZJZcY8ya9mApLLhwIMjqmApy2o/Ml+62lhvxyHU=
-go.opentelemetry.io/otel v1.37.0 h1:9zhNfelUvx0KBfu/gb+ZgeAfAgtWrfHJZcAqFC228wQ=
-go.opentelemetry.io/otel v1.37.0/go.mod h1:ehE/umFRLnuLa/vSccNq9oS1ErUlkkK71gMcN34UG8I=
+go.opentelemetry.io/otel v1.38.0 h1:RkfdswUDRimDg0m2Az18RKOsnI8UDzppJAtj01/Ymk8=
+go.opentelemetry.io/otel v1.38.0/go.mod h1:zcmtmQ1+YmQM9wrNsTGV/q/uyusom3P8RxwExxkZhjM=
-go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploggrpc v0.13.0 h1:z6lNIajgEBVtQZHjfw2hAccPEBDs+nx58VemmXWa2ec=
-go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploggrpc v0.13.0/go.mod h1:+kyc3bRx/Qkq05P6OCu3mTEIOxYRYzoIg+JsUp5X+PM=
+go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploggrpc v0.14.0 h1:OMqPldHt79PqWKOMYIAQs3CxAi7RLgPxwfFSwr4ZxtM=
+go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploggrpc v0.14.0/go.mod h1:1biG4qiqTxKiUCtoWDPpL3fB3KxVwCiGw81j3nKMuHE=
-go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploghttp v0.13.0 h1:zUfYw8cscHHLwaY8Xz3fiJu+R59xBnkgq2Zr1lwmK/0=
-go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploghttp v0.13.0/go.mod h1:514JLMCcFLQFS8cnTepOk6I09cKWJ5nGHBxHrMJ8Yfg=
+go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploghttp v0.14.0 h1:QQqYw3lkrzwVsoEX0w//EhH/TCnpRdEenKBOOEIMjWc=
+go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploghttp v0.14.0/go.mod h1:gSVQcr17jk2ig4jqJ2DX30IdWH251JcNAecvrqTxH1s=
-go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v1.37.0 h1:zG8GlgXCJQd5BU98C0hZnBbElszTmUgCNCfYneaDL0A=
-go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v1.37.0/go.mod h1:hOfBCz8kv/wuq73Mx2H2QnWokh/kHZxkh6SNF2bdKtw=
+go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v1.38.0 h1:vl9obrcoWVKp/lwl8tRE33853I8Xru9HFbw/skNeLs8=
+go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v1.38.0/go.mod h1:GAXRxmLJcVM3u22IjTg74zWBrRCKq8BnOqUVLodpcpw=
-go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.37.0 h1:9PgnL3QNlj10uGxExowIDIZu66aVBwWhXmbOp1pa6RA=
-go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.37.0/go.mod h1:0ineDcLELf6JmKfuo0wvvhAVMuxWFYvkTin2iV4ydPQ=
+go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.38.0 h1:Oe2z/BCg5q7k4iXC3cqJxKYg0ieRiOqF0cecFYdPTwk=
+go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.38.0/go.mod h1:ZQM5lAJpOsKnYagGg/zV2krVqTtaVdYdDkhMoX6Oalg=
-go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.37.0 h1:Ahq7pZmv87yiyn3jeFz/LekZmPLLdKejuO3NcK9MssM=
-go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.37.0/go.mod h1:MJTqhM0im3mRLw1i8uGHnCvUEeS7VwRyxlLC78PA18M=
+go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.38.0 h1:GqRJVj7UmLjCVyVJ3ZFLdPRmhDUp2zFmQe3RHIOsw24=
+go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.38.0/go.mod h1:ri3aaHSmCTVYu2AWv44YMauwAQc0aqI9gHKIcSbI1pU=
-go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.37.0 h1:EtFWSnwW9hGObjkIdmlnWSydO+Qs8OwzfzXLUPg4xOc=
-go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.37.0/go.mod h1:QjUEoiGCPkvFZ/MjK6ZZfNOS6mfVEVKYE99dFhuN2LI=
+go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.38.0 h1:lwI4Dc5leUqENgGuQImwLo4WnuXFPetmPpkLi2IrX54=
+go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.38.0/go.mod h1:Kz/oCE7z5wuyhPxsXDuaPteSWqjSBD5YaSdbxZYGbGk=
-go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.37.0 h1:bDMKF3RUSxshZ5OjOTi8rsHGaPKsAt76FaqgvIUySLc=
-go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.37.0/go.mod h1:dDT67G/IkA46Mr2l9Uj7HsQVwsjASyV9SjGofsiUZDA=
+go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.38.0 h1:aTL7F04bJHUlztTsNGJ2l+6he8c+y/b//eR0jjjemT4=
+go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.38.0/go.mod h1:kldtb7jDTeol0l3ewcmd8SDvx3EmIE7lyvqbasU3QC4=
-go.opentelemetry.io/otel/exporters/prometheus v0.59.0 h1:HHf+wKS6o5++XZhS98wvILrLVgHxjA/AMjqHKes+uzo=
-go.opentelemetry.io/otel/exporters/prometheus v0.59.0/go.mod h1:R8GpRXTZrqvXHDEGVH5bF6+JqAZcK8PjJcZ5nGhEWiE=
+go.opentelemetry.io/otel/exporters/prometheus v0.60.0 h1:cGtQxGvZbnrWdC2GyjZi0PDKVSLWP/Jocix3QWfXtbo=
+go.opentelemetry.io/otel/exporters/prometheus v0.60.0/go.mod h1:hkd1EekxNo69PTV4OWFGZcKQiIqg0RfuWExcPKFvepk=
-go.opentelemetry.io/otel/exporters/stdout/stdoutlog v0.13.0 h1:yEX3aC9KDgvYPhuKECHbOlr5GLwH6KTjLJ1sBSkkxkc=
-go.opentelemetry.io/otel/exporters/stdout/stdoutlog v0.13.0/go.mod h1:/GXR0tBmmkxDaCUGahvksvp66mx4yh5+cFXgSlhg0vQ=
+go.opentelemetry.io/otel/exporters/stdout/stdoutlog v0.14.0 h1:B/g+qde6Mkzxbry5ZZag0l7QrQBCtVm7lVjaLgmpje8=
+go.opentelemetry.io/otel/exporters/stdout/stdoutlog v0.14.0/go.mod h1:mOJK8eMmgW6ocDJn6Bn11CcZ05gi3P8GylBXEkZtbgA=
-go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.37.0 h1:6VjV6Et+1Hd2iLZEPtdV7vie80Yyqf7oikJLjQ/myi0=
-go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.37.0/go.mod h1:u8hcp8ji5gaM/RfcOo8z9NMnf1pVLfVY7lBY2VOGuUU=
+go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.38.0 h1:wm/Q0GAAykXv83wzcKzGGqAnnfLFyFe7RslekZuv+VI=
+go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.38.0/go.mod h1:ra3Pa40+oKjvYh+ZD3EdxFZZB0xdMfuileHAm4nNN7w=
-go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.37.0 h1:SNhVp/9q4Go/XHBkQ1/d5u9P/U+L1yaGPoi0x+mStaI=
-go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.37.0/go.mod h1:tx8OOlGH6R4kLV67YaYO44GFXloEjGPZuMjEkaaqIp4=
+go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.38.0 h1:kJxSDN4SgWWTjG/hPp3O7LCGLcHXFlvS2/FFOrwL+SE=
+go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.38.0/go.mod h1:mgIOzS7iZeKJdeB8/NYHrJ48fdGc71Llo5bJ1J4DWUE=
-go.opentelemetry.io/otel/log v0.13.0 h1:yoxRoIZcohB6Xf0lNv9QIyCzQvrtGZklVbdCoyb7dls=
-go.opentelemetry.io/otel/log v0.13.0/go.mod h1:INKfG4k1O9CL25BaM1qLe0zIedOpvlS5Z7XgSbmN83E=
+go.opentelemetry.io/otel/log v0.14.0 h1:2rzJ+pOAZ8qmZ3DDHg73NEKzSZkhkGIua9gXtxNGgrM=
+go.opentelemetry.io/otel/log v0.14.0/go.mod h1:5jRG92fEAgx0SU/vFPxmJvhIuDU9E1SUnEQrMlJpOno=
-go.opentelemetry.io/otel/metric v1.37.0 h1:mvwbQS5m0tbmqML4NqK+e3aDiO02vsf/WgbsdpcPoZE=
-go.opentelemetry.io/otel/metric v1.37.0/go.mod h1:04wGrZurHYKOc+RKeye86GwKiTb9FKm1WHtO+4EVr2E=
go.opentelemetry.io/otel/sdk v1.37.0 h1:ItB0QUqnjesGRvNcmAcU0LyvkVyGJ2xftD29bWdDvKI= go.opentelemetry.io/otel/metric v1.38.0 h1:Kl6lzIYGAh5M159u9NgiRkmoMKjvbsKtYRwgfrA6WpA=
go.opentelemetry.io/otel/sdk v1.37.0/go.mod h1:VredYzxUvuo2q3WRcDnKDjbdvmO0sCzOvVAiY+yUkAg= go.opentelemetry.io/otel/metric v1.38.0/go.mod h1:kB5n/QoRM8YwmUahxvI3bO34eVtQf2i4utNVLr9gEmI=
go.opentelemetry.io/otel/sdk/log v0.13.0 h1:I3CGUszjM926OphK8ZdzF+kLqFvfRY/IIoFq/TjwfaQ= go.opentelemetry.io/otel/sdk v1.38.0 h1:l48sr5YbNf2hpCUj/FoGhW9yDkl+Ma+LrVl8qaM5b+E=
go.opentelemetry.io/otel/sdk/log v0.13.0/go.mod h1:lOrQyCCXmpZdN7NchXb6DOZZa1N5G1R2tm5GMMTpDBw= go.opentelemetry.io/otel/sdk v1.38.0/go.mod h1:ghmNdGlVemJI3+ZB5iDEuk4bWA3GkTpW+DOoZMYBVVg=
go.opentelemetry.io/otel/sdk/log/logtest v0.13.0 h1:9yio6AFZ3QD9j9oqshV1Ibm9gPLlHNxurno5BreMtIA= go.opentelemetry.io/otel/sdk/log v0.14.0 h1:JU/U3O7N6fsAXj0+CXz21Czg532dW2V4gG1HE/e8Zrg=
go.opentelemetry.io/otel/sdk/log/logtest v0.13.0/go.mod h1:QOGiAJHl+fob8Nu85ifXfuQYmJTFAvcrxL6w5/tu168= go.opentelemetry.io/otel/sdk/log v0.14.0/go.mod h1:imQvII+0ZylXfKU7/wtOND8Hn4OpT3YUoIgqJVksUkM=
go.opentelemetry.io/otel/sdk/metric v1.37.0 h1:90lI228XrB9jCMuSdA0673aubgRobVZFhbjxHHspCPc= go.opentelemetry.io/otel/sdk/log/logtest v0.14.0 h1:Ijbtz+JKXl8T2MngiwqBlPaHqc4YCaP/i13Qrow6gAM=
go.opentelemetry.io/otel/sdk/metric v1.37.0/go.mod h1:cNen4ZWfiD37l5NhS+Keb5RXVWZWpRE+9WyVCpbo5ps= go.opentelemetry.io/otel/sdk/log/logtest v0.14.0/go.mod h1:dCU8aEL6q+L9cYTqcVOk8rM9Tp8WdnHOPLiBgp0SGOA=
go.opentelemetry.io/otel/trace v1.37.0 h1:HLdcFNbRQBE2imdSEgm/kwqmQj1Or1l/7bW6mxVK7z4= go.opentelemetry.io/otel/sdk/metric v1.38.0 h1:aSH66iL0aZqo//xXzQLYozmWrXxyFkBJ6qT5wthqPoM=
go.opentelemetry.io/otel/trace v1.37.0/go.mod h1:TlgrlQ+PtQO5XFerSPUYG0JSgGyryXewPGyayAWSBS0= go.opentelemetry.io/otel/sdk/metric v1.38.0/go.mod h1:dg9PBnW9XdQ1Hd6ZnRz689CbtrUp0wMMs9iPcgT9EZA=
go.opentelemetry.io/proto/otlp v1.7.0 h1:jX1VolD6nHuFzOYso2E73H85i92Mv8JQYk0K9vz09os= go.opentelemetry.io/otel/trace v1.38.0 h1:Fxk5bKrDZJUH+AMyyIXGcFAPah0oRcT+LuNtJrmcNLE=
go.opentelemetry.io/proto/otlp v1.7.0/go.mod h1:fSKjH6YJ7HDlwzltzyMj036AJ3ejJLCgCSHGj4efDDo= go.opentelemetry.io/otel/trace v1.38.0/go.mod h1:j1P9ivuFsTceSWe1oY+EeW3sc+Pp42sO++GHkg4wwhs=
go.opentelemetry.io/proto/otlp v1.8.0 h1:fRAZQDcAFHySxpJ1TwlA1cJ4tvcrw7nXl9xWWC8N5CE=
go.opentelemetry.io/proto/otlp v1.8.0/go.mod h1:tIeYOeNBU4cvmPqpaji1P+KbB4Oloai8wN4rWzRrFF0=
go.uber.org/atomic v1.6.0/go.mod h1:sABNBOSYdrvTF6hTgEIbc7YasKWGhgEQZyfxyTvoXHQ= go.uber.org/atomic v1.6.0/go.mod h1:sABNBOSYdrvTF6hTgEIbc7YasKWGhgEQZyfxyTvoXHQ=
go.uber.org/atomic v1.7.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc= go.uber.org/atomic v1.7.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc=
go.uber.org/atomic v1.9.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc= go.uber.org/atomic v1.9.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc=
@@ -300,34 +305,38 @@ go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN8
go.uber.org/zap v1.19.0/go.mod h1:xg/QME4nWcxGxrpdeYfq7UvYrLh66cuVKdrbD1XF/NI= go.uber.org/zap v1.19.0/go.mod h1:xg/QME4nWcxGxrpdeYfq7UvYrLh66cuVKdrbD1XF/NI=
go.uber.org/zap v1.27.0 h1:aJMhYGrd5QSmlpLMr2MftRKl7t8J8PTZPA732ud/XR8= go.uber.org/zap v1.27.0 h1:aJMhYGrd5QSmlpLMr2MftRKl7t8J8PTZPA732ud/XR8=
go.uber.org/zap v1.27.0/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E= go.uber.org/zap v1.27.0/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E=
go.yaml.in/yaml/v2 v2.4.3 h1:6gvOSjQoTB3vt1l+CU+tSyi/HOjfOjRLJ4YwYZGwRO0=
go.yaml.in/yaml/v2 v2.4.3/go.mod h1:zSxWcmIDjOzPXpjlTTbAsKokqkDNAVtZO0WOMiT90s8=
go.yaml.in/yaml/v3 v3.0.4 h1:tfq32ie2Jv2UxXFdLJdh3jXuOzWiL1fo0bu/FbuKpbc=
go.yaml.in/yaml/v3 v3.0.4/go.mod h1:DhzuOOF2ATzADvBadXxruRBLzYTpT36CKvDb3+aBEFg=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200414173820-0848c9571904/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= golang.org/x/crypto v0.0.0-20200414173820-0848c9571904/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20220622213112-05595931fe9d/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4= golang.org/x/crypto v0.0.0-20220622213112-05595931fe9d/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/crypto v0.39.0 h1:SHs+kF4LP+f+p14esP5jAoDpHU8Gu/v9lFRK6IT5imM= golang.org/x/crypto v0.42.0 h1:chiH31gIWm57EkTXpwnqf8qeuMUi0yekh6mT2AvFlqI=
golang.org/x/crypto v0.39.0/go.mod h1:L+Xg3Wf6HoL4Bn4238Z6ft6KfEpN0tJGo53AAPC632U= golang.org/x/crypto v0.42.0/go.mod h1:4+rDnOTJhQCx2q7/j6rAN5XDw8kPjeaXEUR2eL94ix8=
golang.org/x/exp v0.0.0-20250305212735-054e65f0b394 h1:nDVHiLt8aIbd/VzvPWN6kSOPE7+F/fNFDSXLVYkE/Iw= golang.org/x/exp v0.0.0-20250305212735-054e65f0b394 h1:nDVHiLt8aIbd/VzvPWN6kSOPE7+F/fNFDSXLVYkE/Iw=
golang.org/x/exp v0.0.0-20250305212735-054e65f0b394/go.mod h1:sIifuuw/Yco/y6yb6+bDNfyeQ/MdPUy/hKEMYQV17cM= golang.org/x/exp v0.0.0-20250305212735-054e65f0b394/go.mod h1:sIifuuw/Yco/y6yb6+bDNfyeQ/MdPUy/hKEMYQV17cM=
golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc= golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.25.0 h1:n7a+ZbQKQA/Ysbyb0/6IbB1H/X41mKgbhfv7AfG/44w= golang.org/x/mod v0.28.0 h1:gQBtGhjxykdjY9YhZpSlZIsbnaE2+PgjfLWUQTnoZ1U=
golang.org/x/mod v0.25.0/go.mod h1:IXM97Txy2VM4PJ3gI61r1YEk/gAj6zAHN3AdZt6S9Ww= golang.org/x/mod v0.28.0/go.mod h1:yfB/L0NOf/kmEbXjzCPOx1iK1fRutOydrCMsqRhEBxI=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU= golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y= golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.41.0 h1:vBTly1HeNPEn3wtREYfy4GZ/NECgw2Cnl+nK6Nz3uvw= golang.org/x/net v0.44.0 h1:evd8IRDyfNBMBTTY5XRF1vaZlD+EmWx6x8PkhR04H/I=
golang.org/x/net v0.41.0/go.mod h1:B/K4NNqkfmg07DQYrbwvSluqCJOOXwUjeb/5lOisjbA= golang.org/x/net v0.44.0/go.mod h1:ECOoLqd5U3Lhyeyo/QDCEVQ4sNgYsqvCZ722XogGieY=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.15.0 h1:KWH3jNZsfyT6xfAfKiz6MRNmd46ByHDYaZ7KSkCtdW8= golang.org/x/sync v0.17.0 h1:l60nONMj9l5drqw6jlhIELNv9I0A4OFgRsG9k2oT9Ug=
golang.org/x/sync v0.15.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA= golang.org/x/sync v0.17.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -335,17 +344,17 @@ golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.33.0 h1:q3i8TbbEz+JRD9ywIRlyRAQbM0qF7hu24q3teo2hbuw= golang.org/x/sys v0.36.0 h1:KVRy2GtZBrk1cBYA7MKu5bEZFxQk4NIDV6RLVcC8o0k=
golang.org/x/sys v0.33.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k= golang.org/x/sys v0.36.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ= golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.26.0 h1:P42AVeLghgTYr4+xUnTRKDMqpar+PtX7KWuNQL21L8M= golang.org/x/text v0.29.0 h1:1neNs90w9YzJ9BocxfsQNHKuAT4pkghyXc4nhZ6sJvk=
golang.org/x/text v0.26.0/go.mod h1:QK15LZJUUQVJxhz7wXgxSy/CJaTFjd0G+YLonydOVQA= golang.org/x/text v0.29.0/go.mod h1:7MhJOA9CD2qZyOKYazxdYMF85OwPdEr9jTtBpO7ydH4=
golang.org/x/time v0.12.0 h1:ScB/8o8olJvc+CQPWrK3fPZNfh7qgwCrY0zJmoEQLSE= golang.org/x/time v0.13.0 h1:eUlYslOIt32DgYD6utsuUeHs4d7AsEYLuIAdg7FlYgI=
golang.org/x/time v0.12.0/go.mod h1:CDIdPxbZBQxdj6cxyCIdrNogrJKMJ7pr37NYpMcMDSg= golang.org/x/time v0.13.0/go.mod h1:eL/Oa2bBBK0TkX57Fyni+NgnyQQN4LitPmob2Hjnqw4=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs= golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20191029041327-9cc4af7d6b2c/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= golang.org/x/tools v0.0.0-20191029041327-9cc4af7d6b2c/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
@@ -353,23 +362,25 @@ golang.org/x/tools v0.0.0-20191108193012-7d206e10da11/go.mod h1:b+2E5dAYhXwXZwtn
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE= golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.33.0 h1:4qz2S3zmRxbGIhDIAgjxvFutSvH5EfnsYrRBj0UI0bc= golang.org/x/tools v0.37.0 h1:DVSRzp7FwePZW356yEAChSdNcQo6Nsp+fex1SUW09lE=
golang.org/x/tools v0.33.0/go.mod h1:CIJMaWEY88juyUfo7UbgPqbC8rU2OqfAV1h2Qp0oMYI= golang.org/x/tools v0.37.0/go.mod h1:MBN5QPQtLMHVdvsbtarmTNukZDdgwdwlO5qGacAzF0w=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/genproto/googleapis/api v0.0.0-20250603155806-513f23925822 h1:oWVWY3NzT7KJppx2UKhKmzPq4SRe0LdCijVRwvGeikY= gonum.org/v1/gonum v0.16.0 h1:5+ul4Swaf3ESvrOnidPp4GZbzf0mxVQpDCYUQE7OJfk=
google.golang.org/genproto/googleapis/api v0.0.0-20250603155806-513f23925822/go.mod h1:h3c4v36UTKzUiuaOKQ6gr3S+0hovBtUrXzTG/i3+XEc= gonum.org/v1/gonum v0.16.0/go.mod h1:fef3am4MQ93R2HHpKnLk4/Tbh/s0+wqD5nfa6Pnwy4E=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250603155806-513f23925822 h1:fc6jSaCT0vBduLYZHYrBBNY4dsWuvgyff9noRNDdBeE= google.golang.org/genproto/googleapis/api v0.0.0-20250922171735-9219d122eba9 h1:jm6v6kMRpTYKxBRrDkYAitNJegUeO1Mf3Kt80obv0gg=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250603155806-513f23925822/go.mod h1:qQ0YXyHHx3XkvlzUtpXDkS29lDSafHMZBAZDc03LQ3A= google.golang.org/genproto/googleapis/api v0.0.0-20250922171735-9219d122eba9/go.mod h1:LmwNphe5Afor5V3R5BppOULHOnt2mCIf+NxMd4XiygE=
google.golang.org/grpc v1.73.0 h1:VIWSmpI2MegBtTuFt5/JWy2oXxtjJ/e89Z70ImfD2ok= google.golang.org/genproto/googleapis/rpc v0.0.0-20250922171735-9219d122eba9 h1:V1jCN2HBa8sySkR5vLcCSqJSTMv093Rw9EJefhQGP7M=
google.golang.org/grpc v1.73.0/go.mod h1:50sbHOUqWoCQGI8V2HQLJM0B+LMlIUjNSZmow7EVBQc= google.golang.org/genproto/googleapis/rpc v0.0.0-20250922171735-9219d122eba9/go.mod h1:HSkG/KdJWusxU1F6CNrwNDjBMgisKxGnc5dAZfT0mjQ=
google.golang.org/grpc v1.75.1 h1:/ODCNEuf9VghjgO3rqLcfg8fiOP0nSluljWFlDxELLI=
google.golang.org/grpc v1.75.1/go.mod h1:JtPAzKiq4v1xcAB2hydNlWI2RnF85XXcV0mhKXr2ecQ=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw= google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.27.1/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc= google.golang.org/protobuf v1.27.1/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.31.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I= google.golang.org/protobuf v1.31.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
google.golang.org/protobuf v1.36.6 h1:z1NpPI8ku2WgiWnf+t9wTPsn6eP1L7ksHUlkfLvd9xY= google.golang.org/protobuf v1.36.9 h1:w2gp2mA27hUeUzj9Ex9FBjsBm40zfaDtEWow293U7Iw=
google.golang.org/protobuf v1.36.6/go.mod h1:jduwjTPXsFjZGTmRluh+L6NjiWu7pchiJ2/5YcXBHnY= google.golang.org/protobuf v1.36.9/go.mod h1:fuxRtAxBytpl4zzqUh6/eyUujkJdNiuEkXntxiD/uRU=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk= gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=


@@ -2,9 +2,10 @@ package logscores
 import (
 	"context"
-	"database/sql"
 	"time"
+	"github.com/jackc/pgx/v5/pgtype"
+	"github.com/jackc/pgx/v5/pgxpool"
 	"go.ntppool.org/common/logger"
 	"go.ntppool.org/common/tracing"
 	"go.ntppool.org/data-api/chdb"
@@ -19,12 +20,12 @@ type LogScoreHistory struct {
 	// MonitorIDs []uint32
 }
-func GetHistoryClickHouse(ctx context.Context, ch *chdb.ClickHouse, db *sql.DB, serverID, monitorID uint32, since time.Time, count int, fullHistory bool) (*LogScoreHistory, error) {
+func GetHistoryClickHouse(ctx context.Context, ch *chdb.ClickHouse, db *pgxpool.Pool, serverID, monitorID int64, since time.Time, count int, fullHistory bool) (*LogScoreHistory, error) {
 	log := logger.FromContext(ctx)
 	ctx, span := tracing.Tracer().Start(ctx, "logscores.GetHistoryClickHouse",
 		trace.WithAttributes(
-			attribute.Int("server", int(serverID)),
-			attribute.Int("monitor", int(monitorID)),
+			attribute.Int64("server", serverID),
+			attribute.Int64("monitor", monitorID),
 			attribute.Bool("full_history", fullHistory),
 		),
 	)
@@ -33,7 +34,6 @@ func GetHistoryClickHouse(ctx context.Context, ch *chdb.ClickHouse, db *sql.DB,
 	log.DebugContext(ctx, "GetHistoryCH", "server", serverID, "monitor", monitorID, "since", since, "count", count, "full_history", fullHistory)
 	ls, err := ch.Logscores(ctx, int(serverID), int(monitorID), since, count, fullHistory)
 	if err != nil {
 		log.ErrorContext(ctx, "clickhouse logscores", "err", err)
 		return nil, err
@@ -52,17 +52,17 @@ func GetHistoryClickHouse(ctx context.Context, ch *chdb.ClickHouse, db *sql.DB,
 	}, nil
 }
-func GetHistoryMySQL(ctx context.Context, db *sql.DB, serverID, monitorID uint32, since time.Time, count int) (*LogScoreHistory, error) {
+func GetHistoryPostgres(ctx context.Context, db *pgxpool.Pool, serverID, monitorID int64, since time.Time, count int) (*LogScoreHistory, error) {
 	log := logger.FromContext(ctx)
-	ctx, span := tracing.Tracer().Start(ctx, "logscores.GetHistoryMySQL")
+	ctx, span := tracing.Tracer().Start(ctx, "logscores.GetHistoryPostgres")
 	defer span.End()
 	span.SetAttributes(
-		attribute.Int("server", int(serverID)),
-		attribute.Int("monitor", int(monitorID)),
+		attribute.Int64("server", serverID),
+		attribute.Int64("monitor", monitorID),
 	)
-	log.Debug("GetHistoryMySQL", "server", serverID, "monitor", monitorID, "since", since, "count", count)
+	log.Debug("GetHistoryPostgres", "server", serverID, "monitor", monitorID, "since", since, "count", count)
 	q := ntpdb.NewWrappedQuerier(ntpdb.New(db))
@@ -70,13 +70,13 @@ func GetHistoryMySQL(ctx context.Context, db *sql.DB, serverID, monitorID uint32
 	var err error
 	if monitorID > 0 {
 		ls, err = q.GetServerLogScoresByMonitorID(ctx, ntpdb.GetServerLogScoresByMonitorIDParams{
-			ServerID:  serverID,
-			MonitorID: sql.NullInt32{Int32: int32(monitorID), Valid: true},
+			ServerID:  int64(serverID),
+			MonitorID: pgtype.Int8{Int64: int64(monitorID), Valid: true},
 			Limit:     int32(count),
 		})
 	} else {
 		ls, err = q.GetServerLogScores(ctx, ntpdb.GetServerLogScoresParams{
-			ServerID: serverID,
+			ServerID: int64(serverID),
 			Limit:    int32(count),
 		})
 	}
@@ -98,12 +98,12 @@ func GetHistoryMySQL(ctx context.Context, db *sql.DB, serverID, monitorID uint32
 func getMonitorNames(ctx context.Context, ls []ntpdb.LogScore, q ntpdb.QuerierTx) (map[int]string, error) {
 	monitors := map[int]string{}
-	monitorIDs := []uint32{}
+	monitorIDs := []int64{}
 	for _, l := range ls {
 		if !l.MonitorID.Valid {
 			continue
 		}
-		mID := uint32(l.MonitorID.Int32)
+		mID := l.MonitorID.Int64
 		if _, ok := monitors[int(mID)]; !ok {
 			monitors[int(mID)] = ""
 			monitorIDs = append(monitorIDs, mID)
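The `getMonitorNames` hunk above swaps `sql.NullInt32` for `pgtype.Int8`, whose `Valid` flag carries the same NULL semantics. A stdlib-only sketch of that dedup loop, using a local `Int8` stand-in for `pgtype.Int8` (a simplified illustration, not the pgx type):

```go
package main

import "fmt"

// Int8 mirrors the shape of pgtype.Int8: a 64-bit value plus a Valid
// flag distinguishing NULL from zero. Local stand-in for illustration.
type Int8 struct {
	Int64 int64
	Valid bool
}

type logScore struct {
	MonitorID Int8
}

// uniqueMonitorIDs collects each valid monitor ID once, the same shape
// as the getMonitorNames loop in the diff.
func uniqueMonitorIDs(ls []logScore) []int64 {
	seen := map[int64]bool{}
	ids := []int64{}
	for _, l := range ls {
		if !l.MonitorID.Valid { // rows with NULL monitor_id are skipped
			continue
		}
		if !seen[l.MonitorID.Int64] {
			seen[l.MonitorID.Int64] = true
			ids = append(ids, l.MonitorID.Int64)
		}
	}
	return ids
}

func main() {
	ls := []logScore{
		{MonitorID: Int8{Int64: 7, Valid: true}},
		{MonitorID: Int8{}}, // NULL
		{MonitorID: Int8{Int64: 7, Valid: true}},
		{MonitorID: Int8{Int64: 9, Valid: true}},
	}
	fmt.Println(uniqueMonitorIDs(ls)) // [7 9]
}
```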


@@ -6,14 +6,15 @@ package ntpdb
 import (
 	"context"
-	"database/sql"
+	"github.com/jackc/pgx/v5"
+	"github.com/jackc/pgx/v5/pgconn"
 )
 type DBTX interface {
-	ExecContext(context.Context, string, ...interface{}) (sql.Result, error)
-	PrepareContext(context.Context, string) (*sql.Stmt, error)
-	QueryContext(context.Context, string, ...interface{}) (*sql.Rows, error)
-	QueryRowContext(context.Context, string, ...interface{}) *sql.Row
+	Exec(context.Context, string, ...interface{}) (pgconn.CommandTag, error)
+	Query(context.Context, string, ...interface{}) (pgx.Rows, error)
+	QueryRow(context.Context, string, ...interface{}) pgx.Row
 }
 func New(db DBTX) *Queries {
@@ -24,7 +25,7 @@ type Queries struct {
 	db DBTX
 }
-func (q *Queries) WithTx(tx *sql.Tx) *Queries {
+func (q *Queries) WithTx(tx pgx.Tx) *Queries {
 	return &Queries{
 		db: tx,
 	}


@@ -1,85 +1,15 @@
 package ntpdb
+//go:generate go tool github.com/hexdigest/gowrap/cmd/gowrap gen -t ./opentelemetry.gowrap -g -i QuerierTx -p . -o otel.go
 import (
 	"context"
-	"database/sql"
-	"database/sql/driver"
-	"fmt"
-	"os"
-	"time"
-	"github.com/go-sql-driver/mysql"
-	"go.ntppool.org/common/logger"
-	"gopkg.in/yaml.v3"
+	"github.com/jackc/pgx/v5/pgxpool"
+	"go.ntppool.org/common/database/pgdb"
 )
-type Config struct {
-	MySQL DBConfig `yaml:"mysql"`
-}
-type DBConfig struct {
-	DSN  string `default:"" flag:"dsn" usage:"Database DSN"`
-	User string `default:"" flag:"user"`
-	Pass string `default:"" flag:"pass"`
-}
-func OpenDB(ctx context.Context, configFile string) (*sql.DB, error) {
-	log := logger.FromContext(ctx)
-	dbconn := sql.OpenDB(Driver{CreateConnectorFunc: createConnector(ctx, configFile)})
-	dbconn.SetConnMaxLifetime(time.Minute * 3)
-	dbconn.SetMaxOpenConns(8)
-	dbconn.SetMaxIdleConns(3)
-	err := dbconn.Ping()
-	if err != nil {
-		log.DebugContext(ctx, "could not connect to database: %s", "err", err)
-		return nil, err
-	}
-	return dbconn, nil
-}
-func createConnector(ctx context.Context, configFile string) CreateConnectorFunc {
-	log := logger.FromContext(ctx)
-	return func() (driver.Connector, error) {
-		log.DebugContext(ctx, "opening db config file", "filename", configFile)
-		dbFile, err := os.Open(configFile)
-		if err != nil {
-			return nil, err
-		}
-		dec := yaml.NewDecoder(dbFile)
-		cfg := Config{}
-		err = dec.Decode(&cfg)
-		if err != nil {
-			return nil, err
-		}
-		// log.Printf("db cfg: %+v", cfg)
-		dsn := cfg.MySQL.DSN
-		if len(dsn) == 0 {
-			return nil, fmt.Errorf("--database.dsn flag or DATABASE_DSN environment variable required")
-		}
-		dbcfg, err := mysql.ParseDSN(dsn)
-		if err != nil {
-			return nil, err
-		}
-		if user := cfg.MySQL.User; len(user) > 0 {
-			dbcfg.User = user
-		}
-		if pass := cfg.MySQL.Pass; len(pass) > 0 {
-			dbcfg.Passwd = pass
-		}
-		return mysql.NewConnector(dbcfg)
-	}
-}
+// OpenDB opens a PostgreSQL connection pool using the specified config file
+func OpenDB(ctx context.Context, configFile string) (*pgxpool.Pool, error) {
+	return pgdb.OpenPoolWithConfigFile(ctx, configFile)
+}
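The removed `createConnector` read a YAML config file and let the `User`/`Pass` fields override the credentials embedded in the DSN; `pgdb.OpenPoolWithConfigFile` now owns that concern. A rough stdlib-only sketch of the override rule the deleted code implemented (`applyOverrides` is a hypothetical helper, and the `postgres://` URL form here is illustrative, not the MySQL DSN the old code parsed):

```go
package main

import (
	"fmt"
	"net/url"
)

// applyOverrides mirrors the removed logic: when the config file
// supplies a user (and optionally a password), those values replace
// the credentials embedded in the DSN.
func applyOverrides(dsn, user, pass string) (string, error) {
	u, err := url.Parse(dsn)
	if err != nil {
		return "", err
	}
	if user != "" {
		if pass != "" {
			u.User = url.UserPassword(user, pass)
		} else {
			u.User = url.User(user)
		}
	}
	return u.String(), nil
}

func main() {
	dsn, err := applyOverrides("postgres://old:secret@db.example/ntppool", "dataapi", "hunter2")
	if err != nil {
		panic(err)
	}
	fmt.Println(dsn) // postgres://dataapi:hunter2@db.example/ntppool
}
```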


@@ -1,34 +0,0 @@
package ntpdb
import (
"context"
"database/sql/driver"
"errors"
"fmt"
)
// from https://github.com/Boostport/dynamic-database-config
type CreateConnectorFunc func() (driver.Connector, error)
type Driver struct {
CreateConnectorFunc CreateConnectorFunc
}
func (d Driver) Driver() driver.Driver {
return d
}
func (d Driver) Connect(ctx context.Context) (driver.Conn, error) {
connector, err := d.CreateConnectorFunc()
if err != nil {
return nil, fmt.Errorf("error creating connector from function: %w", err)
}
return connector.Connect(ctx)
}
func (d Driver) Open(name string) (driver.Conn, error) {
return nil, errors.New("open is not supported")
}


@@ -5,11 +5,10 @@
 package ntpdb
 import (
-	"database/sql"
 	"database/sql/driver"
 	"fmt"
-	"time"
+	"github.com/jackc/pgx/v5/pgtype"
 	"go.ntppool.org/common/types"
 )
@@ -145,10 +144,10 @@ func (ns NullMonitorsType) Value() (driver.Value, error) {
 type ServerScoresStatus string
 const (
-	ServerScoresStatusNew       ServerScoresStatus = "new"
 	ServerScoresStatusCandidate ServerScoresStatus = "candidate"
 	ServerScoresStatusTesting   ServerScoresStatus = "testing"
 	ServerScoresStatusActive    ServerScoresStatus = "active"
-	ServerScoresStatusPaused    ServerScoresStatus = "paused"
 )
 func (e *ServerScoresStatus) Scan(src interface{}) error {
@@ -271,73 +270,73 @@ func (ns NullZoneServerCountsIpVersion) Value() (driver.Value, error) {
 }
 type LogScore struct {
-	ID        uint64        `db:"id" json:"id"`
-	MonitorID sql.NullInt32 `db:"monitor_id" json:"monitor_id"`
-	ServerID  uint32        `db:"server_id" json:"server_id"`
-	Ts        time.Time     `db:"ts" json:"ts"`
+	ID        int64              `db:"id" json:"id"`
+	MonitorID pgtype.Int8        `db:"monitor_id" json:"monitor_id"`
+	ServerID  int64              `db:"server_id" json:"server_id"`
+	Ts        pgtype.Timestamptz `db:"ts" json:"ts"`
 	Score     float64       `db:"score" json:"score"`
 	Step      float64       `db:"step" json:"step"`
-	Offset    sql.NullFloat64 `db:"offset" json:"offset"`
-	Rtt       sql.NullInt32   `db:"rtt" json:"rtt"`
+	Offset    pgtype.Float8 `db:"offset" json:"offset"`
+	Rtt       pgtype.Int4   `db:"rtt" json:"rtt"`
 	Attributes types.LogScoreAttributes `db:"attributes" json:"attributes"`
 }
 type Monitor struct {
-	ID      uint32         `db:"id" json:"id"`
-	IDToken sql.NullString `db:"id_token" json:"id_token"`
+	ID      int64       `db:"id" json:"id"`
+	IDToken pgtype.Text `db:"id_token" json:"id_token"`
 	Type      MonitorsType `db:"type" json:"type"`
-	UserID    sql.NullInt32 `db:"user_id" json:"user_id"`
-	AccountID sql.NullInt32 `db:"account_id" json:"account_id"`
+	UserID    pgtype.Int8   `db:"user_id" json:"user_id"`
+	AccountID pgtype.Int8   `db:"account_id" json:"account_id"`
 	Hostname  string       `db:"hostname" json:"hostname"`
 	Location  string       `db:"location" json:"location"`
-	Ip        sql.NullString `db:"ip" json:"ip"`
+	Ip        pgtype.Text    `db:"ip" json:"ip"`
 	IpVersion NullMonitorsIpVersion `db:"ip_version" json:"ip_version"`
-	TlsName   sql.NullString `db:"tls_name" json:"tls_name"`
-	ApiKey    sql.NullString `db:"api_key" json:"api_key"`
+	TlsName   pgtype.Text    `db:"tls_name" json:"tls_name"`
+	ApiKey    pgtype.Text    `db:"api_key" json:"api_key"`
 	Status        MonitorsStatus `db:"status" json:"status"`
 	Config        string         `db:"config" json:"config"`
 	ClientVersion string         `db:"client_version" json:"client_version"`
-	LastSeen   sql.NullTime `db:"last_seen" json:"last_seen"`
-	LastSubmit sql.NullTime `db:"last_submit" json:"last_submit"`
-	CreatedOn  time.Time    `db:"created_on" json:"created_on"`
-	DeletedOn  sql.NullTime `db:"deleted_on" json:"deleted_on"`
-	IsCurrent  sql.NullBool `db:"is_current" json:"is_current"`
+	LastSeen   pgtype.Timestamptz `db:"last_seen" json:"last_seen"`
+	LastSubmit pgtype.Timestamptz `db:"last_submit" json:"last_submit"`
+	CreatedOn  pgtype.Timestamptz `db:"created_on" json:"created_on"`
+	DeletedOn  pgtype.Timestamptz `db:"deleted_on" json:"deleted_on"`
+	IsCurrent  pgtype.Bool        `db:"is_current" json:"is_current"`
 }
 type Server struct {
-	ID uint32 `db:"id" json:"id"`
+	ID int64 `db:"id" json:"id"`
 	Ip        string           `db:"ip" json:"ip"`
 	IpVersion ServersIpVersion `db:"ip_version" json:"ip_version"`
-	UserID       sql.NullInt32  `db:"user_id" json:"user_id"`
-	AccountID    sql.NullInt32  `db:"account_id" json:"account_id"`
-	Hostname     sql.NullString `db:"hostname" json:"hostname"`
-	Stratum      sql.NullInt16  `db:"stratum" json:"stratum"`
-	InPool       uint8          `db:"in_pool" json:"in_pool"`
-	InServerList uint8          `db:"in_server_list" json:"in_server_list"`
-	Netspeed       uint32 `db:"netspeed" json:"netspeed"`
-	NetspeedTarget uint32 `db:"netspeed_target" json:"netspeed_target"`
-	CreatedOn time.Time `db:"created_on" json:"created_on"`
-	UpdatedOn time.Time `db:"updated_on" json:"updated_on"`
+	UserID       pgtype.Int8 `db:"user_id" json:"user_id"`
+	AccountID    pgtype.Int8 `db:"account_id" json:"account_id"`
+	Hostname     pgtype.Text `db:"hostname" json:"hostname"`
+	Stratum      pgtype.Int2 `db:"stratum" json:"stratum"`
+	InPool       int16       `db:"in_pool" json:"in_pool"`
+	InServerList int16       `db:"in_server_list" json:"in_server_list"`
+	Netspeed       int64 `db:"netspeed" json:"netspeed"`
+	NetspeedTarget int64 `db:"netspeed_target" json:"netspeed_target"`
+	CreatedOn pgtype.Timestamptz `db:"created_on" json:"created_on"`
+	UpdatedOn pgtype.Timestamptz `db:"updated_on" json:"updated_on"`
ScoreTs sql.NullTime `db:"score_ts" json:"score_ts"` ScoreTs pgtype.Timestamptz `db:"score_ts" json:"score_ts"`
ScoreRaw float64 `db:"score_raw" json:"score_raw"` ScoreRaw float64 `db:"score_raw" json:"score_raw"`
DeletionOn sql.NullTime `db:"deletion_on" json:"deletion_on"` DeletionOn pgtype.Date `db:"deletion_on" json:"deletion_on"`
Flags string `db:"flags" json:"flags"` Flags string `db:"flags" json:"flags"`
} }
type Zone struct { type Zone struct {
ID uint32 `db:"id" json:"id"` ID int64 `db:"id" json:"id"`
Name string `db:"name" json:"name"` Name string `db:"name" json:"name"`
Description sql.NullString `db:"description" json:"description"` Description pgtype.Text `db:"description" json:"description"`
ParentID sql.NullInt32 `db:"parent_id" json:"parent_id"` ParentID pgtype.Int8 `db:"parent_id" json:"parent_id"`
Dns bool `db:"dns" json:"dns"` Dns bool `db:"dns" json:"dns"`
} }
type ZoneServerCount struct { type ZoneServerCount struct {
ID uint32 `db:"id" json:"id"` ID int64 `db:"id" json:"id"`
ZoneID uint32 `db:"zone_id" json:"zone_id"` ZoneID int64 `db:"zone_id" json:"zone_id"`
IpVersion ZoneServerCountsIpVersion `db:"ip_version" json:"ip_version"` IpVersion ZoneServerCountsIpVersion `db:"ip_version" json:"ip_version"`
Date time.Time `db:"date" json:"date"` Date pgtype.Date `db:"date" json:"date"`
CountActive uint32 `db:"count_active" json:"count_active"` CountActive int32 `db:"count_active" json:"count_active"`
CountRegistered uint32 `db:"count_registered" json:"count_registered"` CountRegistered int32 `db:"count_registered" json:"count_registered"`
NetspeedActive uint32 `db:"netspeed_active" json:"netspeed_active"` NetspeedActive int `db:"netspeed_active" json:"netspeed_active"`
} }


@@ -0,0 +1,55 @@
import (
"context"
_codes "go.opentelemetry.io/otel/codes"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/attribute"
)
{{ $decorator := (or .Vars.DecoratorName (printf "%sWithTracing" .Interface.Name)) }}
{{ $spanNameType := (or .Vars.SpanNamePrefix .Interface.Name) }}
// {{$decorator}} implements {{.Interface.Name}} interface instrumented with open telemetry spans
type {{$decorator}} struct {
{{.Interface.Type}}
_instance string
_spanDecorator func(span trace.Span, params, results map[string]interface{})
}
// New{{$decorator}} returns {{$decorator}}
func New{{$decorator}} (base {{.Interface.Type}}, instance string, spanDecorator ...func(span trace.Span, params, results map[string]interface{})) {{$decorator}} {
d := {{$decorator}} {
{{.Interface.Name}}: base,
_instance: instance,
}
if len(spanDecorator) > 0 && spanDecorator[0] != nil {
d._spanDecorator = spanDecorator[0]
}
return d
}
{{range $method := .Interface.Methods}}
{{if $method.AcceptsContext}}
// {{$method.Name}} implements {{$.Interface.Name}}
func (_d {{$decorator}}) {{$method.Declaration}} {
ctx, _span := otel.Tracer(_d._instance).Start(ctx, "{{$spanNameType}}.{{$method.Name}}")
defer func() {
if _d._spanDecorator != nil {
_d._spanDecorator(_span, {{$method.ParamsMap}}, {{$method.ResultsMap}})
}{{- if $method.ReturnsError}} else if err != nil {
_span.RecordError(err)
_span.SetStatus(_codes.Error, err.Error())
_span.SetAttributes(
attribute.String("event", "error"),
attribute.String("message", err.Error()),
)
}
{{end}}
_span.End()
}()
{{$method.Pass (printf "_d.%s." $.Interface.Name) }}
}
{{end}}
{{end}}


@@ -1,19 +1,17 @@
// Code generated by gowrap. DO NOT EDIT. // Code generated by gowrap. DO NOT EDIT.
// template: https://raw.githubusercontent.com/hexdigest/gowrap/6bd1bc023b4d2a619f30020924f258b8ff665a7a/templates/opentelemetry // template: opentelemetry.gowrap
// gowrap: http://github.com/hexdigest/gowrap // gowrap: http://github.com/hexdigest/gowrap
package ntpdb package ntpdb
//go:generate gowrap gen -p go.ntppool.org/data-api/ntpdb -i QuerierTx -t https://raw.githubusercontent.com/hexdigest/gowrap/6bd1bc023b4d2a619f30020924f258b8ff665a7a/templates/opentelemetry -o otel.go -l ""
import ( import (
"context" "context"
"database/sql"
"go.opentelemetry.io/otel/trace"
"go.opentelemetry.io/otel" "go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/attribute" "go.opentelemetry.io/otel/attribute"
_codes "go.opentelemetry.io/otel/codes" _codes "go.opentelemetry.io/otel/codes"
"go.opentelemetry.io/otel/trace"
) )
// QuerierTxWithTracing implements QuerierTx interface instrumented with open telemetry spans // QuerierTxWithTracing implements QuerierTx interface instrumented with open telemetry spans
@@ -82,14 +80,14 @@ func (_d QuerierTxWithTracing) Commit(ctx context.Context) (err error) {
return _d.QuerierTx.Commit(ctx) return _d.QuerierTx.Commit(ctx)
} }
// GetMonitorByName implements QuerierTx // GetMonitorByNameAndIPVersion implements QuerierTx
func (_d QuerierTxWithTracing) GetMonitorByName(ctx context.Context, tlsName sql.NullString) (m1 Monitor, err error) { func (_d QuerierTxWithTracing) GetMonitorByNameAndIPVersion(ctx context.Context, arg GetMonitorByNameAndIPVersionParams) (m1 Monitor, err error) {
ctx, _span := otel.Tracer(_d._instance).Start(ctx, "QuerierTx.GetMonitorByName") ctx, _span := otel.Tracer(_d._instance).Start(ctx, "QuerierTx.GetMonitorByNameAndIPVersion")
defer func() { defer func() {
if _d._spanDecorator != nil { if _d._spanDecorator != nil {
_d._spanDecorator(_span, map[string]interface{}{ _d._spanDecorator(_span, map[string]interface{}{
"ctx": ctx, "ctx": ctx,
"tlsName": tlsName}, map[string]interface{}{ "arg": arg}, map[string]interface{}{
"m1": m1, "m1": m1,
"err": err}) "err": err})
} else if err != nil { } else if err != nil {
@@ -103,11 +101,11 @@ func (_d QuerierTxWithTracing) GetMonitorByName(ctx context.Context, tlsName sql
_span.End() _span.End()
}() }()
return _d.QuerierTx.GetMonitorByName(ctx, tlsName) return _d.QuerierTx.GetMonitorByNameAndIPVersion(ctx, arg)
} }
// GetMonitorsByID implements QuerierTx // GetMonitorsByID implements QuerierTx
func (_d QuerierTxWithTracing) GetMonitorsByID(ctx context.Context, monitorids []uint32) (ma1 []Monitor, err error) { func (_d QuerierTxWithTracing) GetMonitorsByID(ctx context.Context, monitorids []int64) (ma1 []Monitor, err error) {
ctx, _span := otel.Tracer(_d._instance).Start(ctx, "QuerierTx.GetMonitorsByID") ctx, _span := otel.Tracer(_d._instance).Start(ctx, "QuerierTx.GetMonitorsByID")
defer func() { defer func() {
if _d._spanDecorator != nil { if _d._spanDecorator != nil {
@@ -131,7 +129,7 @@ func (_d QuerierTxWithTracing) GetMonitorsByID(ctx context.Context, monitorids [
} }
// GetServerByID implements QuerierTx // GetServerByID implements QuerierTx
func (_d QuerierTxWithTracing) GetServerByID(ctx context.Context, id uint32) (s1 Server, err error) { func (_d QuerierTxWithTracing) GetServerByID(ctx context.Context, id int64) (s1 Server, err error) {
ctx, _span := otel.Tracer(_d._instance).Start(ctx, "QuerierTx.GetServerByID") ctx, _span := otel.Tracer(_d._instance).Start(ctx, "QuerierTx.GetServerByID")
defer func() { defer func() {
if _d._spanDecorator != nil { if _d._spanDecorator != nil {
@@ -227,14 +225,14 @@ func (_d QuerierTxWithTracing) GetServerLogScoresByMonitorID(ctx context.Context
} }
// GetServerNetspeed implements QuerierTx // GetServerNetspeed implements QuerierTx
func (_d QuerierTxWithTracing) GetServerNetspeed(ctx context.Context, ip string) (u1 uint32, err error) { func (_d QuerierTxWithTracing) GetServerNetspeed(ctx context.Context, ip string) (i1 int64, err error) {
ctx, _span := otel.Tracer(_d._instance).Start(ctx, "QuerierTx.GetServerNetspeed") ctx, _span := otel.Tracer(_d._instance).Start(ctx, "QuerierTx.GetServerNetspeed")
defer func() { defer func() {
if _d._spanDecorator != nil { if _d._spanDecorator != nil {
_d._spanDecorator(_span, map[string]interface{}{ _d._spanDecorator(_span, map[string]interface{}{
"ctx": ctx, "ctx": ctx,
"ip": ip}, map[string]interface{}{ "ip": ip}, map[string]interface{}{
"u1": u1, "i1": i1,
"err": err}) "err": err})
} else if err != nil { } else if err != nil {
_span.RecordError(err) _span.RecordError(err)
@@ -299,7 +297,7 @@ func (_d QuerierTxWithTracing) GetZoneByName(ctx context.Context, name string) (
} }
// GetZoneCounts implements QuerierTx // GetZoneCounts implements QuerierTx
func (_d QuerierTxWithTracing) GetZoneCounts(ctx context.Context, zoneID uint32) (za1 []ZoneServerCount, err error) { func (_d QuerierTxWithTracing) GetZoneCounts(ctx context.Context, zoneID int64) (za1 []ZoneServerCount, err error) {
ctx, _span := otel.Tracer(_d._instance).Start(ctx, "QuerierTx.GetZoneCounts") ctx, _span := otel.Tracer(_d._instance).Start(ctx, "QuerierTx.GetZoneCounts")
defer func() { defer func() {
if _d._spanDecorator != nil { if _d._spanDecorator != nil {


@@ -6,20 +6,19 @@ package ntpdb
import ( import (
"context" "context"
"database/sql"
) )
type Querier interface { type Querier interface {
GetMonitorByName(ctx context.Context, tlsName sql.NullString) (Monitor, error) GetMonitorByNameAndIPVersion(ctx context.Context, arg GetMonitorByNameAndIPVersionParams) (Monitor, error)
GetMonitorsByID(ctx context.Context, monitorids []uint32) ([]Monitor, error) GetMonitorsByID(ctx context.Context, monitorids []int64) ([]Monitor, error)
GetServerByID(ctx context.Context, id uint32) (Server, error) GetServerByID(ctx context.Context, id int64) (Server, error)
GetServerByIP(ctx context.Context, ip string) (Server, error) GetServerByIP(ctx context.Context, ip string) (Server, error)
GetServerLogScores(ctx context.Context, arg GetServerLogScoresParams) ([]LogScore, error) GetServerLogScores(ctx context.Context, arg GetServerLogScoresParams) ([]LogScore, error)
GetServerLogScoresByMonitorID(ctx context.Context, arg GetServerLogScoresByMonitorIDParams) ([]LogScore, error) GetServerLogScoresByMonitorID(ctx context.Context, arg GetServerLogScoresByMonitorIDParams) ([]LogScore, error)
GetServerNetspeed(ctx context.Context, ip string) (uint32, error) GetServerNetspeed(ctx context.Context, ip string) (int64, error)
GetServerScores(ctx context.Context, arg GetServerScoresParams) ([]GetServerScoresRow, error) GetServerScores(ctx context.Context, arg GetServerScoresParams) ([]GetServerScoresRow, error)
GetZoneByName(ctx context.Context, name string) (Zone, error) GetZoneByName(ctx context.Context, name string) (Zone, error)
GetZoneCounts(ctx context.Context, zoneID uint32) ([]ZoneServerCount, error) GetZoneCounts(ctx context.Context, zoneID int64) ([]ZoneServerCount, error)
GetZoneStatsData(ctx context.Context) ([]GetZoneStatsDataRow, error) GetZoneStatsData(ctx context.Context) ([]GetZoneStatsDataRow, error)
GetZoneStatsV2(ctx context.Context, ip string) ([]GetZoneStatsV2Row, error) GetZoneStatsV2(ctx context.Context, ip string) ([]GetZoneStatsV2Row, error)
} }


@@ -7,21 +7,27 @@ package ntpdb
import ( import (
"context" "context"
"database/sql"
"strings" "github.com/jackc/pgx/v5/pgtype"
"time"
) )
const getMonitorByName = `-- name: GetMonitorByName :one const getMonitorByNameAndIPVersion = `-- name: GetMonitorByNameAndIPVersion :one
select id, id_token, type, user_id, account_id, hostname, location, ip, ip_version, tls_name, api_key, status, config, client_version, last_seen, last_submit, created_on, deleted_on, is_current from monitors select id, id_token, type, user_id, account_id, hostname, location, ip, ip_version, tls_name, api_key, status, config, client_version, last_seen, last_submit, created_on, deleted_on, is_current from monitors
where where
tls_name like ? tls_name like $1 AND
(ip_version = $2 OR (type = 'score' AND ip_version IS NULL)) AND
is_current = true
order by id order by id
limit 1 limit 1
` `
func (q *Queries) GetMonitorByName(ctx context.Context, tlsName sql.NullString) (Monitor, error) { type GetMonitorByNameAndIPVersionParams struct {
row := q.db.QueryRowContext(ctx, getMonitorByName, tlsName) TlsName pgtype.Text `db:"tls_name" json:"tls_name"`
IpVersion NullMonitorsIpVersion `db:"ip_version" json:"ip_version"`
}
func (q *Queries) GetMonitorByNameAndIPVersion(ctx context.Context, arg GetMonitorByNameAndIPVersionParams) (Monitor, error) {
row := q.db.QueryRow(ctx, getMonitorByNameAndIPVersion, arg.TlsName, arg.IpVersion)
var i Monitor var i Monitor
err := row.Scan( err := row.Scan(
&i.ID, &i.ID,
@@ -49,21 +55,11 @@ func (q *Queries) GetMonitorByName(ctx context.Context, tlsName sql.NullString)
const getMonitorsByID = `-- name: GetMonitorsByID :many const getMonitorsByID = `-- name: GetMonitorsByID :many
select id, id_token, type, user_id, account_id, hostname, location, ip, ip_version, tls_name, api_key, status, config, client_version, last_seen, last_submit, created_on, deleted_on, is_current from monitors select id, id_token, type, user_id, account_id, hostname, location, ip, ip_version, tls_name, api_key, status, config, client_version, last_seen, last_submit, created_on, deleted_on, is_current from monitors
where id in (/*SLICE:MonitorIDs*/?) where id = ANY($1::bigint[])
` `
func (q *Queries) GetMonitorsByID(ctx context.Context, monitorids []uint32) ([]Monitor, error) { func (q *Queries) GetMonitorsByID(ctx context.Context, monitorids []int64) ([]Monitor, error) {
query := getMonitorsByID rows, err := q.db.Query(ctx, getMonitorsByID, monitorids)
var queryParams []interface{}
if len(monitorids) > 0 {
for _, v := range monitorids {
queryParams = append(queryParams, v)
}
query = strings.Replace(query, "/*SLICE:MonitorIDs*/?", strings.Repeat(",?", len(monitorids))[1:], 1)
} else {
query = strings.Replace(query, "/*SLICE:MonitorIDs*/?", "NULL", 1)
}
rows, err := q.db.QueryContext(ctx, query, queryParams...)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@@ -96,9 +92,6 @@ func (q *Queries) GetMonitorsByID(ctx context.Context, monitorids []uint32) ([]M
} }
items = append(items, i) items = append(items, i)
} }
if err := rows.Close(); err != nil {
return nil, err
}
if err := rows.Err(); err != nil { if err := rows.Err(); err != nil {
return nil, err return nil, err
} }
@@ -108,11 +101,11 @@ func (q *Queries) GetMonitorsByID(ctx context.Context, monitorids []uint32) ([]M
const getServerByID = `-- name: GetServerByID :one const getServerByID = `-- name: GetServerByID :one
select id, ip, ip_version, user_id, account_id, hostname, stratum, in_pool, in_server_list, netspeed, netspeed_target, created_on, updated_on, score_ts, score_raw, deletion_on, flags from servers select id, ip, ip_version, user_id, account_id, hostname, stratum, in_pool, in_server_list, netspeed, netspeed_target, created_on, updated_on, score_ts, score_raw, deletion_on, flags from servers
where where
id = ? id = $1
` `
func (q *Queries) GetServerByID(ctx context.Context, id uint32) (Server, error) { func (q *Queries) GetServerByID(ctx context.Context, id int64) (Server, error) {
row := q.db.QueryRowContext(ctx, getServerByID, id) row := q.db.QueryRow(ctx, getServerByID, id)
var i Server var i Server
err := row.Scan( err := row.Scan(
&i.ID, &i.ID,
@@ -139,11 +132,11 @@ func (q *Queries) GetServerByID(ctx context.Context, id uint32) (Server, error)
const getServerByIP = `-- name: GetServerByIP :one const getServerByIP = `-- name: GetServerByIP :one
select id, ip, ip_version, user_id, account_id, hostname, stratum, in_pool, in_server_list, netspeed, netspeed_target, created_on, updated_on, score_ts, score_raw, deletion_on, flags from servers select id, ip, ip_version, user_id, account_id, hostname, stratum, in_pool, in_server_list, netspeed, netspeed_target, created_on, updated_on, score_ts, score_raw, deletion_on, flags from servers
where where
ip = ? ip = $1
` `
func (q *Queries) GetServerByIP(ctx context.Context, ip string) (Server, error) { func (q *Queries) GetServerByIP(ctx context.Context, ip string) (Server, error) {
row := q.db.QueryRowContext(ctx, getServerByIP, ip) row := q.db.QueryRow(ctx, getServerByIP, ip)
var i Server var i Server
err := row.Scan( err := row.Scan(
&i.ID, &i.ID,
@@ -168,20 +161,20 @@ func (q *Queries) GetServerByIP(ctx context.Context, ip string) (Server, error)
} }
const getServerLogScores = `-- name: GetServerLogScores :many const getServerLogScores = `-- name: GetServerLogScores :many
select id, monitor_id, server_id, ts, score, step, offset, rtt, attributes from log_scores select id, monitor_id, server_id, ts, score, step, "offset", rtt, attributes from log_scores
where where
server_id = ? server_id = $1
order by ts desc order by ts desc
limit ? limit $2
` `
type GetServerLogScoresParams struct { type GetServerLogScoresParams struct {
ServerID uint32 `db:"server_id" json:"server_id"` ServerID int64 `db:"server_id" json:"server_id"`
Limit int32 `db:"limit" json:"limit"` Limit int32 `db:"limit" json:"limit"`
} }
func (q *Queries) GetServerLogScores(ctx context.Context, arg GetServerLogScoresParams) ([]LogScore, error) { func (q *Queries) GetServerLogScores(ctx context.Context, arg GetServerLogScoresParams) ([]LogScore, error) {
rows, err := q.db.QueryContext(ctx, getServerLogScores, arg.ServerID, arg.Limit) rows, err := q.db.Query(ctx, getServerLogScores, arg.ServerID, arg.Limit)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@@ -204,9 +197,6 @@ func (q *Queries) GetServerLogScores(ctx context.Context, arg GetServerLogScores
} }
items = append(items, i) items = append(items, i)
} }
if err := rows.Close(); err != nil {
return nil, err
}
if err := rows.Err(); err != nil { if err := rows.Err(); err != nil {
return nil, err return nil, err
} }
@@ -214,22 +204,22 @@ func (q *Queries) GetServerLogScores(ctx context.Context, arg GetServerLogScores
} }
const getServerLogScoresByMonitorID = `-- name: GetServerLogScoresByMonitorID :many const getServerLogScoresByMonitorID = `-- name: GetServerLogScoresByMonitorID :many
select id, monitor_id, server_id, ts, score, step, offset, rtt, attributes from log_scores select id, monitor_id, server_id, ts, score, step, "offset", rtt, attributes from log_scores
where where
server_id = ? AND server_id = $1 AND
monitor_id = ? monitor_id = $2
order by ts desc order by ts desc
limit ? limit $3
` `
type GetServerLogScoresByMonitorIDParams struct { type GetServerLogScoresByMonitorIDParams struct {
ServerID uint32 `db:"server_id" json:"server_id"` ServerID int64 `db:"server_id" json:"server_id"`
MonitorID sql.NullInt32 `db:"monitor_id" json:"monitor_id"` MonitorID pgtype.Int8 `db:"monitor_id" json:"monitor_id"`
Limit int32 `db:"limit" json:"limit"` Limit int32 `db:"limit" json:"limit"`
} }
func (q *Queries) GetServerLogScoresByMonitorID(ctx context.Context, arg GetServerLogScoresByMonitorIDParams) ([]LogScore, error) { func (q *Queries) GetServerLogScoresByMonitorID(ctx context.Context, arg GetServerLogScoresByMonitorIDParams) ([]LogScore, error) {
rows, err := q.db.QueryContext(ctx, getServerLogScoresByMonitorID, arg.ServerID, arg.MonitorID, arg.Limit) rows, err := q.db.Query(ctx, getServerLogScoresByMonitorID, arg.ServerID, arg.MonitorID, arg.Limit)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@@ -252,9 +242,6 @@ func (q *Queries) GetServerLogScoresByMonitorID(ctx context.Context, arg GetServ
} }
items = append(items, i) items = append(items, i)
} }
if err := rows.Close(); err != nil {
return nil, err
}
if err := rows.Err(); err != nil { if err := rows.Err(); err != nil {
return nil, err return nil, err
} }
@@ -262,12 +249,12 @@ func (q *Queries) GetServerLogScoresByMonitorID(ctx context.Context, arg GetServ
} }
const getServerNetspeed = `-- name: GetServerNetspeed :one const getServerNetspeed = `-- name: GetServerNetspeed :one
select netspeed from servers where ip = ? select netspeed from servers where ip = $1
` `
func (q *Queries) GetServerNetspeed(ctx context.Context, ip string) (uint32, error) { func (q *Queries) GetServerNetspeed(ctx context.Context, ip string) (int64, error) {
row := q.db.QueryRowContext(ctx, getServerNetspeed, ip) row := q.db.QueryRow(ctx, getServerNetspeed, ip)
var netspeed uint32 var netspeed int64
err := row.Scan(&netspeed) err := row.Scan(&netspeed)
return netspeed, err return netspeed, err
} }
@@ -280,39 +267,28 @@ select
inner join monitors m inner join monitors m
on (m.id=ss.monitor_id) on (m.id=ss.monitor_id)
where where
server_id = ? AND server_id = $1 AND
monitor_id in (/*SLICE:MonitorIDs*/?) monitor_id = ANY($2::bigint[])
` `
type GetServerScoresParams struct { type GetServerScoresParams struct {
ServerID uint32 `db:"server_id" json:"server_id"` ServerID int64 `db:"server_id" json:"server_id"`
MonitorIDs []uint32 `db:"MonitorIDs" json:"MonitorIDs"` MonitorIDs []int64 `db:"MonitorIDs" json:"MonitorIDs"`
} }
type GetServerScoresRow struct { type GetServerScoresRow struct {
ID uint32 `db:"id" json:"id"` ID int64 `db:"id" json:"id"`
Hostname string `db:"hostname" json:"hostname"` Hostname string `db:"hostname" json:"hostname"`
TlsName sql.NullString `db:"tls_name" json:"tls_name"` TlsName pgtype.Text `db:"tls_name" json:"tls_name"`
Location string `db:"location" json:"location"` Location string `db:"location" json:"location"`
Type MonitorsType `db:"type" json:"type"` Type MonitorsType `db:"type" json:"type"`
ScoreRaw float64 `db:"score_raw" json:"score_raw"` ScoreRaw float64 `db:"score_raw" json:"score_raw"`
ScoreTs sql.NullTime `db:"score_ts" json:"score_ts"` ScoreTs pgtype.Timestamptz `db:"score_ts" json:"score_ts"`
Status ServerScoresStatus `db:"status" json:"status"` Status ServerScoresStatus `db:"status" json:"status"`
} }
func (q *Queries) GetServerScores(ctx context.Context, arg GetServerScoresParams) ([]GetServerScoresRow, error) { func (q *Queries) GetServerScores(ctx context.Context, arg GetServerScoresParams) ([]GetServerScoresRow, error) {
query := getServerScores rows, err := q.db.Query(ctx, getServerScores, arg.ServerID, arg.MonitorIDs)
var queryParams []interface{}
queryParams = append(queryParams, arg.ServerID)
if len(arg.MonitorIDs) > 0 {
for _, v := range arg.MonitorIDs {
queryParams = append(queryParams, v)
}
query = strings.Replace(query, "/*SLICE:MonitorIDs*/?", strings.Repeat(",?", len(arg.MonitorIDs))[1:], 1)
} else {
query = strings.Replace(query, "/*SLICE:MonitorIDs*/?", "NULL", 1)
}
rows, err := q.db.QueryContext(ctx, query, queryParams...)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@@ -334,9 +310,6 @@ func (q *Queries) GetServerScores(ctx context.Context, arg GetServerScoresParams
} }
items = append(items, i) items = append(items, i)
} }
if err := rows.Close(); err != nil {
return nil, err
}
if err := rows.Err(); err != nil { if err := rows.Err(); err != nil {
return nil, err return nil, err
} }
@@ -346,11 +319,11 @@ func (q *Queries) GetServerScores(ctx context.Context, arg GetServerScoresParams
const getZoneByName = `-- name: GetZoneByName :one const getZoneByName = `-- name: GetZoneByName :one
select id, name, description, parent_id, dns from zones select id, name, description, parent_id, dns from zones
where where
name = ? name = $1
` `
func (q *Queries) GetZoneByName(ctx context.Context, name string) (Zone, error) { func (q *Queries) GetZoneByName(ctx context.Context, name string) (Zone, error) {
row := q.db.QueryRowContext(ctx, getZoneByName, name) row := q.db.QueryRow(ctx, getZoneByName, name)
var i Zone var i Zone
err := row.Scan( err := row.Scan(
&i.ID, &i.ID,
@@ -364,12 +337,12 @@ func (q *Queries) GetZoneByName(ctx context.Context, name string) (Zone, error)
const getZoneCounts = `-- name: GetZoneCounts :many const getZoneCounts = `-- name: GetZoneCounts :many
select id, zone_id, ip_version, date, count_active, count_registered, netspeed_active from zone_server_counts select id, zone_id, ip_version, date, count_active, count_registered, netspeed_active from zone_server_counts
where zone_id = ? where zone_id = $1
order by date order by date
` `
func (q *Queries) GetZoneCounts(ctx context.Context, zoneID uint32) ([]ZoneServerCount, error) { func (q *Queries) GetZoneCounts(ctx context.Context, zoneID int64) ([]ZoneServerCount, error) {
rows, err := q.db.QueryContext(ctx, getZoneCounts, zoneID) rows, err := q.db.Query(ctx, getZoneCounts, zoneID)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@@ -390,9 +363,6 @@ func (q *Queries) GetZoneCounts(ctx context.Context, zoneID uint32) ([]ZoneServe
} }
items = append(items, i) items = append(items, i)
} }
if err := rows.Close(); err != nil {
return nil, err
}
if err := rows.Err(); err != nil { if err := rows.Err(); err != nil {
return nil, err return nil, err
} }
@@ -401,7 +371,7 @@ func (q *Queries) GetZoneCounts(ctx context.Context, zoneID uint32) ([]ZoneServe
const getZoneStatsData = `-- name: GetZoneStatsData :many const getZoneStatsData = `-- name: GetZoneStatsData :many
SELECT zc.date, z.name, zc.ip_version, count_active, count_registered, netspeed_active SELECT zc.date, z.name, zc.ip_version, count_active, count_registered, netspeed_active
FROM zone_server_counts zc USE INDEX (date_idx) FROM zone_server_counts zc
INNER JOIN zones z INNER JOIN zones z
ON(zc.zone_id=z.id) ON(zc.zone_id=z.id)
WHERE date IN (SELECT max(date) from zone_server_counts) WHERE date IN (SELECT max(date) from zone_server_counts)
@@ -409,16 +379,16 @@ ORDER BY name
` `
type GetZoneStatsDataRow struct { type GetZoneStatsDataRow struct {
Date time.Time `db:"date" json:"date"` Date pgtype.Date `db:"date" json:"date"`
Name string `db:"name" json:"name"` Name string `db:"name" json:"name"`
IpVersion ZoneServerCountsIpVersion `db:"ip_version" json:"ip_version"` IpVersion ZoneServerCountsIpVersion `db:"ip_version" json:"ip_version"`
CountActive uint32 `db:"count_active" json:"count_active"` CountActive int32 `db:"count_active" json:"count_active"`
CountRegistered uint32 `db:"count_registered" json:"count_registered"` CountRegistered int32 `db:"count_registered" json:"count_registered"`
NetspeedActive uint32 `db:"netspeed_active" json:"netspeed_active"` NetspeedActive int `db:"netspeed_active" json:"netspeed_active"`
} }
func (q *Queries) GetZoneStatsData(ctx context.Context) ([]GetZoneStatsDataRow, error) { func (q *Queries) GetZoneStatsData(ctx context.Context) ([]GetZoneStatsDataRow, error) {
rows, err := q.db.QueryContext(ctx, getZoneStatsData) rows, err := q.db.Query(ctx, getZoneStatsData)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@@ -438,9 +408,6 @@ func (q *Queries) GetZoneStatsData(ctx context.Context) ([]GetZoneStatsDataRow,
} }
items = append(items, i) items = append(items, i)
} }
if err := rows.Close(); err != nil {
return nil, err
}
if err := rows.Err(); err != nil { if err := rows.Err(); err != nil {
return nil, err return nil, err
} }
@@ -448,15 +415,14 @@ func (q *Queries) GetZoneStatsData(ctx context.Context) ([]GetZoneStatsDataRow,
} }
const getZoneStatsV2 = `-- name: GetZoneStatsV2 :many const getZoneStatsV2 = `-- name: GetZoneStatsV2 :many
select zone_name, netspeed_active+0 as netspeed_active FROM (
SELECT SELECT
z.name as zone_name, z.name as zone_name,
SUM( CAST(SUM(
IF (deletion_on IS NULL AND score_raw > 10, CASE WHEN deletion_on IS NULL AND score_raw > 10
netspeed, THEN netspeed
0 ELSE 0
) END
) AS netspeed_active ) AS int) AS netspeed_active
FROM FROM
servers s servers s
INNER JOIN server_zones sz ON (sz.server_id = s.id) INNER JOIN server_zones sz ON (sz.server_id = s.id)
@@ -465,14 +431,13 @@ FROM
select zone_id, s.ip_version select zone_id, s.ip_version
from server_zones sz from server_zones sz
inner join servers s on (s.id=sz.server_id) inner join servers s on (s.id=sz.server_id)
where s.ip=? where s.ip=$1
) as srvz on (srvz.zone_id=z.id AND srvz.ip_version=s.ip_version) ) as srvz on (srvz.zone_id=z.id AND srvz.ip_version=s.ip_version)
WHERE WHERE
(deletion_on IS NULL OR deletion_on > NOW()) (deletion_on IS NULL OR deletion_on > NOW())
AND in_pool = 1 AND in_pool = 1
AND netspeed > 0 AND netspeed > 0
GROUP BY z.name) GROUP BY z.name
AS server_netspeed
` `
type GetZoneStatsV2Row struct { type GetZoneStatsV2Row struct {
@@ -481,7 +446,7 @@ type GetZoneStatsV2Row struct {
} }
func (q *Queries) GetZoneStatsV2(ctx context.Context, ip string) ([]GetZoneStatsV2Row, error) { func (q *Queries) GetZoneStatsV2(ctx context.Context, ip string) ([]GetZoneStatsV2Row, error) {
rows, err := q.db.QueryContext(ctx, getZoneStatsV2, ip) rows, err := q.db.Query(ctx, getZoneStatsV2, ip)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@@ -494,9 +459,6 @@ func (q *Queries) GetZoneStatsV2(ctx context.Context, ip string) ([]GetZoneStats
} }
items = append(items, i) items = append(items, i)
} }
if err := rows.Close(); err != nil {
return nil, err
}
if err := rows.Err(); err != nil { if err := rows.Err(); err != nil {
return nil, err return nil, err
} }


@@ -2,7 +2,11 @@ package ntpdb
 import (
 	"context"
-	"database/sql"
+	"errors"
+	"github.com/jackc/pgx/v5"
+	"go.ntppool.org/common/logger"
+	"go.opentelemetry.io/otel/trace"
 )
 type QuerierTx interface {
@@ -11,14 +15,17 @@ type QuerierTx interface {
 	Begin(ctx context.Context) (QuerierTx, error)
 	Commit(ctx context.Context) error
 	Rollback(ctx context.Context) error
+
+	// Conn returns the connection used by this transaction
+	Conn() *pgx.Conn
 }
 type Beginner interface {
-	Begin(context.Context) (sql.Tx, error)
+	Begin(context.Context) (pgx.Tx, error)
 }
 type Tx interface {
-	Begin(context.Context) (sql.Tx, error)
+	Begin(context.Context) (pgx.Tx, error)
 	Commit(ctx context.Context) error
 	Rollback(ctx context.Context) error
 }
@@ -28,21 +35,33 @@ func (q *Queries) Begin(ctx context.Context) (QuerierTx, error) {
 	if err != nil {
 		return nil, err
 	}
-	return &Queries{db: &tx}, nil
+	return &Queries{db: tx}, nil
 }
 func (q *Queries) Commit(ctx context.Context) error {
 	tx, ok := q.db.(Tx)
 	if !ok {
-		return sql.ErrTxDone
+		// Commit called on Queries with dbpool, so treat as transaction already committed
+		return pgx.ErrTxClosed
 	}
 	return tx.Commit(ctx)
 }
+
+func (q *Queries) Conn() *pgx.Conn {
+	// pgx.Tx is an interface that has Conn() method
+	tx, ok := q.db.(pgx.Tx)
+	if !ok {
+		logger.Setup().Error("could not get connection from QuerierTx")
+		return nil
+	}
+	return tx.Conn()
+}
+
 func (q *Queries) Rollback(ctx context.Context) error {
 	tx, ok := q.db.(Tx)
 	if !ok {
-		return sql.ErrTxDone
+		// Rollback called on Queries with dbpool, so treat as transaction already committed
+		return pgx.ErrTxClosed
 	}
 	return tx.Rollback(ctx)
 }
@@ -62,3 +81,41 @@ func (wq *WrappedQuerier) Begin(ctx context.Context) (QuerierTx, error) {
 	}
 	return NewWrappedQuerier(q), nil
 }
+
+func (wq *WrappedQuerier) Conn() *pgx.Conn {
+	return wq.QuerierTxWithTracing.Conn()
+}
+
+// LogRollback logs and performs a rollback if the transaction is still active
+func LogRollback(ctx context.Context, tx QuerierTx) {
+	if !isInTransaction(tx) {
+		return
+	}
+	log := logger.FromContext(ctx)
+	log.WarnContext(ctx, "transaction rollback called on an active transaction")
+	// if caller ctx is done we still need rollback to happen
+	// so Rollback gets a fresh context with span copied over
+	rbCtx := context.Background()
+	if span := trace.SpanFromContext(ctx); span != nil {
+		rbCtx = trace.ContextWithSpan(rbCtx, span)
+	}
+	if err := tx.Rollback(rbCtx); err != nil && !errors.Is(err, pgx.ErrTxClosed) {
+		log.ErrorContext(ctx, "rollback failed", "err", err)
+	}
+}
+
+func isInTransaction(tx QuerierTx) bool {
+	if tx == nil {
+		return false
+	}
+	conn := tx.Conn()
+	if conn == nil {
+		return false
+	}
+	// 'I' means idle, so if it's not idle, we're in a transaction
+	return conn.PgConn().TxStatus() != 'I'
+}


@@ -0,0 +1,389 @@
# DETAILED IMPLEMENTATION PLAN: Grafana Time Range API with Future Downsampling Support
## Overview
Implement a new Grafana-compatible API endpoint `/api/v2/server/scores/{server}/{mode}` that returns time series data in Grafana format with time range support and future downsampling capabilities.
## API Specification
### Endpoint
- **URL**: `/api/v2/server/scores/{server}/{mode}`
- **Method**: GET
- **Path Parameters**:
- `server`: Server IP address or ID (same validation as existing API)
- `mode`: Only `json` supported initially
### Query Parameters (following Grafana conventions)
- `from`: Unix timestamp in seconds (required)
- `to`: Unix timestamp in seconds (required)
- `maxDataPoints`: Integer, default 50000, max 50000 (for future downsampling)
- `monitor`: Monitor ID, name prefix, or "*" for all (optional, same as existing)
- `interval`: Future downsampling interval like "1m", "5m", "1h" (optional, not implemented initially)
### Response Format
Grafana table format JSON array (more efficient than separate series):
```json
[
{
"target": "monitor{name=zakim1-yfhw4a}",
"tags": {
"monitor_id": "126",
"monitor_name": "zakim1-yfhw4a",
"type": "monitor",
"status": "active"
},
"columns": [
{"text": "time", "type": "time"},
{"text": "score", "type": "number"},
{"text": "rtt", "type": "number", "unit": "ms"},
{"text": "offset", "type": "number", "unit": "s"}
],
"values": [
[1753431667000, 20.0, 18.865, -0.000267],
[1753431419000, 20.0, 18.96, -0.000390],
[1753431151000, 20.0, 18.073, -0.000768],
[1753430063000, 20.0, 18.209, null]
]
}
]
```
## Implementation Details
### 1. Server Routing (`server/server.go`)
Add new route after existing scores routes:
```go
e.GET("/api/v2/server/scores/:server/:mode", srv.scoresTimeRange)
```
**Note**: Initially attempted `:server.:mode` pattern, but Echo router cannot properly parse IP addresses with dots using this pattern. Changed to `:server/:mode` to match existing API pattern and ensure compatibility with IP addresses like `23.155.40.38`.
## Key Implementation Clarifications
### Monitor Filtering Behavior
- **monitor=\***: Return ALL monitors (no monitor count limit)
- **50k datapoint limit**: Applied in database query (LIMIT clause)
- Return whatever data we get from database to user (no post-processing truncation)
### Null Value Handling Strategy
- **Score**: Always include (should never be null)
- **RTT**: Skip datapoints where RTT is null
- **Offset**: Skip datapoints where offset is null
### Time Range Validation Rules
- **Zero duration**: Return 400 Bad Request
- **Future timestamps**: Allow for now
- **Minimum range**: 1 second
- **Maximum range**: 90 days
### 2. New Handler Function (`server/grafana.go`)
#### Function Signature
```go
func (srv *Server) scoresTimeRange(c echo.Context) error
```
#### Parameter Parsing & Validation
```go
// Extend existing historyParameters struct for time range support
type timeRangeParams struct {
historyParameters // embed existing struct
from time.Time
to time.Time
maxDataPoints int
interval string // for future downsampling
}
func (srv *Server) parseTimeRangeParams(ctx context.Context, c echo.Context) (timeRangeParams, error) {
// Start with existing parameter parsing logic
baseParams, err := srv.getHistoryParameters(ctx, c)
if err != nil {
return timeRangeParams{}, err
}
// Parse and validate from/to second timestamps
// Validate time range (max 90 days, min 1 second)
// Parse maxDataPoints (default 50000, max 50000)
// Return extended parameters
}
```
#### Response Structure
```go
type ColumnDef struct {
Text string `json:"text"`
Type string `json:"type"`
Unit string `json:"unit,omitempty"`
}
type GrafanaTableSeries struct {
Target string `json:"target"`
Tags map[string]string `json:"tags"`
Columns []ColumnDef `json:"columns"`
Values [][]interface{} `json:"values"`
}
type GrafanaTimeSeriesResponse []GrafanaTableSeries
```
#### Cache Control
```go
// Reuse existing setHistoryCacheControl function for consistency
// Logic based on data recency and entry count:
// - Empty or >8h old data: "s-maxage=260,max-age=360"
// - Single entry: "s-maxage=60,max-age=35"
// - Multiple entries: "s-maxage=90,max-age=120"
setHistoryCacheControl(c, history)
```
### 3. ClickHouse Data Access (`chdb/logscores.go`)
#### New Method
```go
func (d *ClickHouse) LogscoresTimeRange(ctx context.Context, serverID, monitorID int, from, to time.Time, limit int) ([]ntpdb.LogScore, error) {
// Build query with time range WHERE clause
// Always order by ts ASC (Grafana convention)
// Apply limit to prevent memory issues
// Use same row scanning logic as existing Logscores method
}
```
#### Query Structure
```sql
SELECT id, monitor_id, server_id, ts,
toFloat64(score), toFloat64(step), offset,
rtt, leap, warning, error
FROM log_scores
WHERE server_id = ?
AND ts >= ?
AND ts <= ?
[AND monitor_id = ?] -- if specific monitor requested
ORDER BY ts ASC
LIMIT ?
```
### 4. Data Transformation Logic (`server/grafana.go`)
#### Core Transformation Function
```go
func transformToGrafanaTableFormat(history *logscores.LogScoreHistory, monitors []ntpdb.Monitor) GrafanaTimeSeriesResponse {
// Group data by monitor_id (one series per monitor)
// Create table format with columns: time, score, rtt, offset
// Convert timestamps to milliseconds
// Build proper target names and tags
// Handle null values appropriately in table values
}
```
#### Grouping Strategy
1. **Group by Monitor**: One table series per monitor
2. **Table Columns**: time, score, rtt, offset (all metrics in one table)
3. **Target Naming**: `monitor{name={sanitized_monitor_name}}`
4. **Tag Structure**: Include monitor metadata (no metric type needed)
5. **Monitor Status**: Query real monitor data using `q.GetServerScores()` like existing API
6. **Series Ordering**: No guaranteed order (standard Grafana behavior)
7. **Efficiency**: More efficient than separate series - less JSON overhead
#### Timestamp Conversion
```go
timestampMs := logScore.Ts.Unix() * 1000
```
### 5. Error Handling
#### Validation Errors (400 Bad Request)
- Invalid timestamp format
- from >= to (including zero duration)
- Time range too large (> 90 days)
- Time range too small (< 1 second minimum)
- maxDataPoints > 50000
- Invalid mode (not "json")
#### Not Found Errors (404)
- Server not found
- Monitor not found
- Server deleted
#### Server Errors (500)
- ClickHouse connection issues
- Database query errors
### 6. Future Downsampling Design
#### API Extension Points
- `interval` parameter parsing ready
- `maxDataPoints` limit already enforced
- Response format supports downsampled data seamlessly
#### Downsampling Algorithm (Future Implementation)
```go
// When datapoints > maxDataPoints:
// 1. Calculate downsample interval: (to - from) / maxDataPoints
// 2. Group data into time buckets
// 3. Aggregate per bucket: avg for score/rtt, last for offset
// 4. Return aggregated datapoints
```
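The bucketing steps above could look roughly like the following. This is a sketch only: `point` and `downsample` are hypothetical names, and it assumes the input is already sorted by timestamp (as the ClickHouse query guarantees with `ORDER BY ts ASC`):

```go
package main

import "fmt"

// point is a simplified datapoint; the real code works with ntpdb.LogScore.
type point struct {
	tsMs   int64
	score  float64
	rtt    float64
	offset float64
}

// downsample groups time-sorted points into fixed-width buckets and
// aggregates each bucket: average for score and rtt, last value for offset.
func downsample(points []point, fromMs, toMs int64, maxDataPoints int) []point {
	if maxDataPoints <= 0 || len(points) <= maxDataPoints {
		return points
	}
	bucketMs := (toMs - fromMs) / int64(maxDataPoints)
	if bucketMs <= 0 {
		bucketMs = 1
	}
	var out []point
	i := 0
	for start := fromMs; start < toMs && i < len(points); start += bucketMs {
		end := start + bucketMs
		var sumScore, sumRtt float64
		var last point
		n := 0
		for ; i < len(points) && points[i].tsMs < end; i++ {
			sumScore += points[i].score
			sumRtt += points[i].rtt
			last = points[i]
			n++
		}
		if n == 0 {
			continue // empty bucket, emit nothing
		}
		out = append(out, point{
			tsMs:   start,
			score:  sumScore / float64(n),
			rtt:    sumRtt / float64(n),
			offset: last.offset, // "last" aggregation for offset
		})
	}
	return out
}

func main() {
	pts := []point{
		{1000, 20, 10, 0.1}, {1500, 10, 20, 0.2},
		{2000, 20, 30, 0.3}, {2500, 10, 40, 0.4},
	}
	out := downsample(pts, 1000, 3000, 2)
	fmt.Println(len(out)) // four points collapse into two buckets
}
```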
## Testing Strategy
### Unit Tests
- Parameter parsing and validation
- Data transformation logic
- Error handling scenarios
- Timestamp conversion accuracy
### Integration Tests
- End-to-end API requests
- ClickHouse query execution
- Multiple monitor scenarios
- Large time range handling
### Manual Testing
- Grafana integration testing
- Performance with various time ranges
- Cache behavior validation
## Performance Considerations
### Current Implementation
- 50k datapoint limit applied in the database query (LIMIT clause); covers roughly a few weeks of data
- ClickHouse-only for better range query performance
- Proper indexing on (server_id, ts) assumed
- Table format more efficient than separate time series (less JSON overhead)
### Future Optimizations (Critical for Production)
- **Downsampling for large ranges**: Essential for 90-day queries with reasonable performance
- Query optimization based on range size
- Potential parallel monitor queries
- Adaptive sampling rates based on time range duration
## Documentation Updates
### API.md Addition
```markdown
### 7. Server Scores Time Range (v2)
**GET** `/api/v2/server/scores/{server}/{mode}`
Grafana-compatible time series endpoint for NTP server scoring data.
#### Path Parameters
- `server`: Server IP address or ID
- `mode`: Response format (`json` only)
#### Query Parameters
- `from`: Start time as Unix timestamp in seconds (required)
- `to`: End time as Unix timestamp in seconds (required)
- `maxDataPoints`: Maximum data points to return (default: 50000, max: 50000)
- `monitor`: Monitor filter (ID, name prefix, or "*" for all)
#### Response Format
Grafana table format array with one series per monitor containing all metrics as columns.
```
## Key Research Findings
### Grafana Error Format Requirements
- **HTTP Status Codes**: Standard 400/404/500 work fine
- **Response Body**: JSON preferred with `Content-Type: application/json`
- **Structure**: Simple `{"error": "message", "status": code}` is sufficient
- **Compatibility**: Existing Echo error patterns are Grafana-compatible
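Under that structure, a rejected request might return a body like this (illustrative example, not captured from the implementation):

```json
{
  "error": "time range cannot exceed 90 days",
  "status": 400
}
```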
### Data Volume Considerations
- **50k Datapoint Limit**: Only covers roughly a few weeks of data, not sufficient for 90-day ranges
- **Downsampling Critical**: Required for production use with 90-day time ranges
- **Current Approach**: Acceptable for MVP, downsampling essential for full utility
## Implementation Checklist
### Phase 0: Grafana Table Format Validation ✅ **COMPLETED**
- [x] Add test endpoint `/api/v2/test/grafana-table` returning sample table format
- [x] Implement Grafana table format response structures in `server/grafana.go`
- [x] Add structured logging and OpenTelemetry tracing to test endpoint
- [x] Verify endpoint compiles and serves correct JSON format
- [x] Test endpoint response format and headers (CORS, Content-Type, Cache-Control)
- [ ] Test with actual Grafana instance to validate table format compatibility
- [ ] Confirm time series panels render table format correctly
- [ ] Validate column types and units display properly
#### Phase 0 Implementation Details
**Files Created/Modified:**
- `server/grafana.go`: New file containing Grafana table format structures and test endpoint
- `server/server.go`: Added route `e.GET("/api/v2/test/grafana-table", srv.testGrafanaTable)`
**Test Endpoint Features:**
- **URL**: `http://localhost:8030/api/v2/test/grafana-table`
- **Response Format**: Grafana table format with realistic NTP Pool data
- **Sample Data**: Two monitor series (zakim1-yfhw4a, nj2-mon01) with time-based values
- **Columns**: time, score, rtt (ms), offset (s) with proper units
- **Null Handling**: Demonstrates null offset values
- **Headers**: CORS, JSON content-type, cache control
- **Observability**: Structured logging with context, OpenTelemetry tracing
**Recommended Grafana Data Source**: JSON API plugin (`marcusolsson-json-datasource`) - ideal for REST APIs returning table format JSON
### Phase 1: Core Implementation ✅ **COMPLETED**
- [x] Add route in server.go (fixed routing pattern from `:server.:mode` to `:server/:mode`)
- [x] Implement parseTimeRangeParams function for parameter validation
- [x] Add LogscoresTimeRange method to ClickHouse with time range filtering
- [x] Implement transformToGrafanaTableFormat function with monitor grouping
- [x] Add scoresTimeRange handler with full error handling
- [x] Error handling and validation (reuse existing Echo patterns)
- [x] Cache control headers (reuse setHistoryCacheControl)
#### Phase 1 Implementation Details
**Key Components Built:**
- **Route Pattern**: `/api/v2/server/scores/:server/:mode` (matches existing API consistency)
- **Parameter Validation**: Full validation of `from`/`to` timestamps, `maxDataPoints`, time ranges
- **ClickHouse Integration**: `LogscoresTimeRange()` with time-based WHERE clauses and ASC ordering
- **Data Transformation**: Grafana table format with monitor grouping and null value handling
- **Complete Handler**: `scoresTimeRange()` with server validation, error handling, caching, and CORS
**Routing Fix**: Changed from `:server.:mode` to `:server/:mode` to resolve Echo router issue with IP addresses containing dots (e.g., `23.155.40.38`).
**Files Created/Modified in Phase 1:**
- `server/grafana.go`: Complete implementation with all structures and functions
- `timeRangeParams` struct and `parseTimeRangeParams()` function
- `transformToGrafanaTableFormat()` function with monitor grouping
- `scoresTimeRange()` handler with full error handling
- `sanitizeMonitorName()` utility function
- `server/server.go`: Added route `e.GET("/api/v2/server/scores/:server/:mode", srv.scoresTimeRange)`
- `chdb/logscores.go`: Added `LogscoresTimeRange()` method for time-based queries
**Production Testing Results** (July 25, 2025):
- **Real Data Verification**: Successfully tested with server `102.64.112.164` over a 12-hour time range
- **Multiple Monitor Support**: Returns data for multiple monitors (`defra1-210hw9t`, `recentmedian`)
- **Data Quality Validation**:
  - RTT conversion (microseconds → milliseconds): ✅ Working
  - Timestamp conversion (seconds → milliseconds): ✅ Working
  - Null value handling: ✅ Working (recentmedian has null RTT/offset as expected)
  - Monitor grouping: ✅ Working (one series per monitor)
- **API Parameter Changes**: Successfully changed from milliseconds to seconds for user-friendliness
- **Volume Testing**: Handles 100+ data points per monitor efficiently
- **Error Handling**: All validation working (400 for invalid params, 404 for missing servers)
- **Performance**: Sub-second response times for 12-hour ranges
**Sample Working Request:**
```bash
curl 'http://localhost:8030/api/v2/server/scores/102.64.112.164/json?from=1753457764&to=1753500964&monitor=*'
```
### Phase 2: Testing & Polish
- [ ] Unit tests for all functions
- [ ] Integration tests
- [ ] Manual Grafana testing with real data
- [ ] Performance testing with large ranges (up to 50k points)
- [ ] API documentation updates
### Phase 3: Future Enhancement Ready
- [ ] Interval parameter parsing (no-op initially)
- [ ] Downsampling framework hooks (critical for 90-day ranges)
- [ ] Monitoring and metrics for new endpoint
This design provides a solid foundation for immediate Grafana integration while being fully prepared for future downsampling capabilities without breaking changes.
## Critical Notes for Production
- **Downsampling Required**: 50k datapoint limit means 90-day ranges will hit limits quickly
- **Table Format Validation**: Phase 0 testing ensures Grafana compatibility before full implementation
- **Error Handling**: Existing Echo patterns are sufficient for Grafana requirements
- **Scalability**: Current design handles weeks of data well, downsampling needed for months


@@ -1,6 +1,6 @@
 -- name: GetZoneStatsData :many
 SELECT zc.date, z.name, zc.ip_version, count_active, count_registered, netspeed_active
-FROM zone_server_counts zc USE INDEX (date_idx)
+FROM zone_server_counts zc
 INNER JOIN zones z
   ON(zc.zone_id=z.id)
 WHERE date IN (SELECT max(date) from zone_server_counts)
@@ -8,18 +8,17 @@ ORDER BY name;
 -- name: GetServerNetspeed :one
-select netspeed from servers where ip = ?;
+select netspeed from servers where ip = $1;
 -- name: GetZoneStatsV2 :many
-select zone_name, netspeed_active+0 as netspeed_active FROM (
 SELECT
   z.name as zone_name,
-  SUM(
-    IF (deletion_on IS NULL AND score_raw > 10,
-      netspeed,
-      0
-    )
-  ) AS netspeed_active
+  CAST(SUM(
+    CASE WHEN deletion_on IS NULL AND score_raw > 10
+      THEN netspeed
+      ELSE 0
+    END
+  ) AS int) AS netspeed_active
 FROM
   servers s
   INNER JOIN server_zones sz ON (sz.server_id = s.id)
@@ -28,35 +27,36 @@ FROM
     select zone_id, s.ip_version
     from server_zones sz
     inner join servers s on (s.id=sz.server_id)
-    where s.ip=?
+    where s.ip=$1
   ) as srvz on (srvz.zone_id=z.id AND srvz.ip_version=s.ip_version)
 WHERE
   (deletion_on IS NULL OR deletion_on > NOW())
   AND in_pool = 1
   AND netspeed > 0
-GROUP BY z.name)
-AS server_netspeed;
+GROUP BY z.name;
 -- name: GetServerByID :one
 select * from servers
 where
-  id = ?;
+  id = $1;
 -- name: GetServerByIP :one
 select * from servers
 where
   ip = sqlc.arg(ip);
--- name: GetMonitorByName :one
+-- name: GetMonitorByNameAndIPVersion :one
 select * from monitors
 where
-  tls_name like sqlc.arg('tls_name')
+  tls_name like sqlc.arg('tls_name') AND
+  (ip_version = sqlc.arg('ip_version') OR (type = 'score' AND ip_version IS NULL)) AND
+  is_current = true
 order by id
 limit 1;
 -- name: GetMonitorsByID :many
 select * from monitors
-where id in (sqlc.slice('MonitorIDs'));
+where id = ANY(sqlc.arg('MonitorIDs')::bigint[]);
 -- name: GetServerScores :many
 select
@@ -66,23 +66,23 @@ select
 inner join monitors m
   on (m.id=ss.monitor_id)
 where
-  server_id = ? AND
-  monitor_id in (sqlc.slice('MonitorIDs'));
+  server_id = $1 AND
+  monitor_id = ANY(sqlc.arg('MonitorIDs')::bigint[]);
 -- name: GetServerLogScores :many
 select * from log_scores
 where
-  server_id = ?
+  server_id = $1
 order by ts desc
-limit ?;
+limit $2;
 -- name: GetServerLogScoresByMonitorID :many
 select * from log_scores
 where
-  server_id = ? AND
-  monitor_id = ?
+  server_id = $1 AND
+  monitor_id = $2
 order by ts desc
-limit ?;
+limit $3;
 -- name: GetZoneByName :one
 select * from zones
@@ -91,5 +91,5 @@ where
 -- name: GetZoneCounts :many
 select * from zone_server_counts
-where zone_id = ?
+where zone_id = $1
 order by date;

schema.sql (3813 lines): diff suppressed because it is too large.


@@ -2,7 +2,7 @@
 set -euo pipefail
-go install github.com/goreleaser/goreleaser/v2@v2.10.2
+go install github.com/goreleaser/goreleaser/v2@v2.12.3
 if [ ! -z "${harbor_username:-}" ]; then
 	DOCKER_FILE=~/.docker/config.json


@@ -1,11 +1,11 @@ package server
 import (
-	"database/sql"
 	"errors"
 	"net/http"
 	"net/netip"
+	"github.com/jackc/pgx/v5"
 	"github.com/labstack/echo/v4"
 	"go.opentelemetry.io/otel/attribute"
 	"golang.org/x/sync/errgroup"
@@ -16,8 +16,10 @@ import (
 	"go.ntppool.org/data-api/ntpdb"
 )
-const pointBasis float64 = 10000
-const pointSymbol = "‱"
+const (
+	pointBasis float64 = 10000
+	pointSymbol        = "‱"
+)
 // const pointBasis = 1000
 // const pointSymbol = "‰"
@@ -54,7 +56,7 @@ func (srv *Server) dnsAnswers(c echo.Context) error {
 	queryGroup, ctx := errgroup.WithContext(ctx)
 	var zoneStats []ntpdb.GetZoneStatsV2Row
-	var serverNetspeed uint32
+	var serverNetspeed int64
 	queryGroup.Go(func() error {
 		var err error
@@ -62,7 +64,7 @@ func (srv *Server) dnsAnswers(c echo.Context) error {
 		serverNetspeed, err = q.GetServerNetspeed(ctx, ip.String())
 		if err != nil {
-			if !errors.Is(err, sql.ErrNoRows) {
+			if !errors.Is(err, pgx.ErrNoRows) {
 				log.Error("GetServerNetspeed", "err", err)
 			}
 			return err // this will return if the server doesn't exist
@@ -114,21 +116,21 @@ func (srv *Server) dnsAnswers(c echo.Context) error {
 	err = queryGroup.Wait()
 	if err != nil {
-		if errors.Is(err, sql.ErrNoRows) {
+		if errors.Is(err, pgx.ErrNoRows) {
 			return c.String(http.StatusNotFound, "Not found")
 		}
 		log.Error("query error", "err", err)
 		return c.String(http.StatusInternalServerError, err.Error())
 	}
-	zoneTotals := map[string]int32{}
+	zoneTotals := map[string]int{}
 	for _, z := range zoneStats {
 		zn := z.ZoneName
 		if zn == "@" {
 			zn = ""
 		}
-		zoneTotals[zn] = z.NetspeedActive // binary.BigEndian.Uint64(...)
+		zoneTotals[zn] = int(z.NetspeedActive) // binary.BigEndian.Uint64(...)
 		// log.Info("zone netspeed", "cc", z.ZoneName, "speed", z.NetspeedActive)
 	}
@@ -143,7 +145,7 @@ func (srv *Server) dnsAnswers(c echo.Context) error {
 		if zt == 0 {
 			// if the recorded netspeed for the zone was zero, assume it's at least
 			// this servers worth instead. Otherwise the Netspeed gets to be 'infinite'.
-			zt = int32(serverNetspeed)
+			zt = int(serverNetspeed)
 		}
 		cc.Netspeed = (pointBasis / float64(zt)) * float64(serverNetspeed)
 	}
@@ -163,5 +165,4 @@ func (srv *Server) dnsAnswers(c echo.Context) error {
 	c.Response().Header().Set("Cache-Control", "public,max-age=1800")
 	return c.JSONPretty(http.StatusOK, r, "")
 }


@@ -2,12 +2,12 @@ package server
 import (
 	"context"
-	"database/sql"
 	"errors"
 	"net/netip"
 	"strconv"
 	"time"
+	"github.com/jackc/pgx/v5"
 	"go.ntppool.org/common/logger"
 	"go.ntppool.org/common/tracing"
 	"go.ntppool.org/data-api/ntpdb"
@@ -22,7 +22,7 @@ func (srv *Server) FindServer(ctx context.Context, serverID string) (ntpdb.Serve
 	var serverData ntpdb.Server
 	var dberr error
 	if id, err := strconv.Atoi(serverID); id > 0 && err == nil {
-		serverData, dberr = q.GetServerByID(ctx, uint32(id))
+		serverData, dberr = q.GetServerByID(ctx, int64(id))
 	} else {
 		ip, err := netip.ParseAddr(serverID)
 		if err != nil || !ip.IsValid() {
@@ -31,7 +31,7 @@ func (srv *Server) FindServer(ctx context.Context, serverID string) (ntpdb.Serve
 		serverData, dberr = q.GetServerByIP(ctx, ip.String())
 	}
 	if dberr != nil {
-		if !errors.Is(dberr, sql.ErrNoRows) {
+		if !errors.Is(dberr, pgx.ErrNoRows) {
 			log.Error("could not query server id", "err", dberr)
 			return serverData, dberr
 		}

server/grafana.go (new file, 589 lines)

@@ -0,0 +1,589 @@
package server
import (
"context"
"fmt"
"net/http"
"regexp"
"strconv"
"strings"
"time"
"github.com/labstack/echo/v4"
"go.ntppool.org/common/logger"
"go.ntppool.org/common/tracing"
"go.ntppool.org/data-api/logscores"
"go.ntppool.org/data-api/ntpdb"
)
// ColumnDef represents a Grafana table column definition
type ColumnDef struct {
Text string `json:"text"`
Type string `json:"type"`
Unit string `json:"unit,omitempty"`
}
// GrafanaTableSeries represents a single table series in Grafana format
type GrafanaTableSeries struct {
Target string `json:"target"`
Tags map[string]string `json:"tags"`
Columns []ColumnDef `json:"columns"`
Values [][]interface{} `json:"values"`
}
// GrafanaTimeSeriesResponse represents the complete Grafana table response
type GrafanaTimeSeriesResponse []GrafanaTableSeries
// timeRangeParams extends historyParameters with time range support
type timeRangeParams struct {
historyParameters // embed existing struct
from time.Time
to time.Time
maxDataPoints int
interval string // for future downsampling
}
// parseTimeRangeParams parses and validates time range parameters
// parseRelativeTime parses relative time expressions like "-3d", "-2h", "-30m"
// Returns the absolute time relative to the provided base time (usually time.Now())
func parseRelativeTime(relativeTimeStr string, baseTime time.Time) (time.Time, error) {
if relativeTimeStr == "" {
return time.Time{}, fmt.Errorf("empty time string")
}
// Check if it's a regular Unix timestamp first
if unixTime, err := strconv.ParseInt(relativeTimeStr, 10, 64); err == nil {
return time.Unix(unixTime, 0), nil
}
// Parse relative time format like "-3d", "-2h", "-30m", "-5s"
re := regexp.MustCompile(`^(-?)(\d+)([dhms])$`)
matches := re.FindStringSubmatch(relativeTimeStr)
if len(matches) != 4 {
return time.Time{}, fmt.Errorf("invalid time format, expected Unix timestamp or relative format like '-3d', '-2h', '-30m', '-5s'")
}
sign := matches[1]
valueStr := matches[2]
unit := matches[3]
value, err := strconv.Atoi(valueStr)
if err != nil {
return time.Time{}, fmt.Errorf("invalid numeric value: %s", valueStr)
}
var duration time.Duration
switch unit {
case "s":
duration = time.Duration(value) * time.Second
case "m":
duration = time.Duration(value) * time.Minute
case "h":
duration = time.Duration(value) * time.Hour
case "d":
duration = time.Duration(value) * 24 * time.Hour
default:
return time.Time{}, fmt.Errorf("invalid time unit: %s", unit)
}
// Apply sign (negative means go back in time)
if sign == "-" {
return baseTime.Add(-duration), nil
}
return baseTime.Add(duration), nil
}
func (srv *Server) parseTimeRangeParams(ctx context.Context, c echo.Context, server ntpdb.Server) (timeRangeParams, error) {
log := logger.FromContext(ctx)
// Start with existing parameter parsing logic
baseParams, err := srv.getHistoryParameters(ctx, c, server)
if err != nil {
return timeRangeParams{}, err
}
trParams := timeRangeParams{
historyParameters: baseParams,
maxDataPoints: 50000, // default
}
// Parse from timestamp (required) - supports Unix timestamps and relative time like "-3d"
fromParam := c.QueryParam("from")
if fromParam == "" {
return timeRangeParams{}, echo.NewHTTPError(http.StatusBadRequest, "from parameter is required")
}
now := time.Now()
trParams.from, err = parseRelativeTime(fromParam, now)
if err != nil {
return timeRangeParams{}, echo.NewHTTPError(http.StatusBadRequest, fmt.Sprintf("invalid from parameter: %v", err))
}
// Parse to timestamp (required) - supports Unix timestamps and relative time like "-1d"
toParam := c.QueryParam("to")
if toParam == "" {
return timeRangeParams{}, echo.NewHTTPError(http.StatusBadRequest, "to parameter is required")
}
trParams.to, err = parseRelativeTime(toParam, now)
if err != nil {
return timeRangeParams{}, echo.NewHTTPError(http.StatusBadRequest, fmt.Sprintf("invalid to parameter: %v", err))
}
// Validate time range
if trParams.from.Equal(trParams.to) || trParams.from.After(trParams.to) {
return timeRangeParams{}, echo.NewHTTPError(http.StatusBadRequest, "from must be before to")
}
// Check minimum range (1 second)
if trParams.to.Sub(trParams.from) < time.Second {
return timeRangeParams{}, echo.NewHTTPError(http.StatusBadRequest, "time range must be at least 1 second")
}
// Check maximum range (90 days)
if trParams.to.Sub(trParams.from) > 90*24*time.Hour {
return timeRangeParams{}, echo.NewHTTPError(http.StatusBadRequest, "time range cannot exceed 90 days")
}
// Parse maxDataPoints (optional)
if maxDataPointsParam := c.QueryParam("maxDataPoints"); maxDataPointsParam != "" {
maxDP, err := strconv.Atoi(maxDataPointsParam)
if err != nil {
return timeRangeParams{}, echo.NewHTTPError(http.StatusBadRequest, "invalid maxDataPoints format")
}
if maxDP > 50000 {
return timeRangeParams{}, echo.NewHTTPError(http.StatusBadRequest, "maxDataPoints cannot exceed 50000")
}
if maxDP > 0 {
trParams.maxDataPoints = maxDP
}
}
// Parse interval (optional, for future downsampling)
trParams.interval = c.QueryParam("interval")
log.DebugContext(ctx, "parsed time range params",
"from", trParams.from,
"to", trParams.to,
"maxDataPoints", trParams.maxDataPoints,
"interval", trParams.interval,
"monitor", trParams.monitorID,
)
return trParams, nil
}
// sanitizeMonitorName sanitizes monitor names for Grafana target format
func sanitizeMonitorName(name string) string {
// Replace problematic characters for Grafana target format
result := strings.ReplaceAll(name, " ", "_")
result = strings.ReplaceAll(result, ".", "-")
result = strings.ReplaceAll(result, "/", "-")
return result
}
// transformToGrafanaTableFormat converts LogScoreHistory to Grafana table format
func transformToGrafanaTableFormat(history *logscores.LogScoreHistory, monitors []ntpdb.Monitor) GrafanaTimeSeriesResponse {
// Group data by monitor_id (one series per monitor)
monitorData := make(map[int][]ntpdb.LogScore)
monitorInfo := make(map[int]ntpdb.Monitor)
// Group log scores by monitor ID
skippedInvalidMonitors := 0
for _, ls := range history.LogScores {
if !ls.MonitorID.Valid {
skippedInvalidMonitors++
continue
}
monitorID := int(ls.MonitorID.Int64)
monitorData[monitorID] = append(monitorData[monitorID], ls)
}
// Debug logging for transformation
logger.Setup().Info("transformation grouping debug",
"total_log_scores", len(history.LogScores),
"skipped_invalid_monitors", skippedInvalidMonitors,
"grouped_monitor_ids", func() []int {
keys := make([]int, 0, len(monitorData))
for k := range monitorData {
keys = append(keys, k)
}
return keys
}(),
"monitor_data_counts", func() map[int]int {
counts := make(map[int]int)
for k, v := range monitorData {
counts[k] = len(v)
}
return counts
}(),
)
// Index monitors by ID for quick lookup
for _, monitor := range monitors {
monitorInfo[int(monitor.ID)] = monitor
}
var response GrafanaTimeSeriesResponse
// Create one table series per monitor
logger.Setup().Info("creating grafana series",
"monitor_data_entries", len(monitorData),
)
for monitorID, logScores := range monitorData {
if len(logScores) == 0 {
logger.Setup().Info("skipping monitor with no data", "monitor_id", monitorID)
continue
}
logger.Setup().Info("processing monitor series",
"monitor_id", monitorID,
"log_scores_count", len(logScores),
)
// Get monitor name from history.Monitors map or from monitor info
monitorName := "unknown"
if name, exists := history.Monitors[monitorID]; exists && name != "" {
monitorName = name
} else if monitor, exists := monitorInfo[monitorID]; exists {
monitorName = monitor.DisplayName()
}
// Build target name and tags
sanitizedName := sanitizeMonitorName(monitorName)
target := "monitor{name=" + sanitizedName + "}"
tags := map[string]string{
"monitor_id": strconv.Itoa(monitorID),
"monitor_name": monitorName,
"type": "monitor",
}
// Add status (we'll use active as default since we have data for this monitor)
tags["status"] = "active"
// Define table columns
columns := []ColumnDef{
{Text: "time", Type: "time"},
{Text: "score", Type: "number"},
{Text: "rtt", Type: "number", Unit: "ms"},
{Text: "offset", Type: "number", Unit: "s"},
}
// Build values array
var values [][]interface{}
for _, ls := range logScores {
// Convert timestamp to milliseconds
timestampMs := ls.Ts.Time.Unix() * 1000
// Create row: [time, score, rtt, offset]
row := []interface{}{
timestampMs,
ls.Score,
}
// Add RTT (convert from microseconds to milliseconds, handle null)
if ls.Rtt.Valid {
rttMs := float64(ls.Rtt.Int32) / 1000.0
row = append(row, rttMs)
} else {
row = append(row, nil)
}
// Add offset (handle null)
if ls.Offset.Valid {
row = append(row, ls.Offset.Float64)
} else {
row = append(row, nil)
}
values = append(values, row)
}
// Create table series
series := GrafanaTableSeries{
Target: target,
Tags: tags,
Columns: columns,
Values: values,
}
response = append(response, series)
logger.Setup().Info("created series for monitor",
"monitor_id", monitorID,
"target", series.Target,
"values_count", len(series.Values),
)
}
logger.Setup().Info("transformation complete",
"final_response_count", len(response),
"response_is_nil", response == nil,
)
return response
}
// scoresTimeRange handles Grafana time range requests for NTP server scores
func (srv *Server) scoresTimeRange(c echo.Context) error {
log := logger.Setup()
ctx, span := tracing.Tracer().Start(c.Request().Context(), "scoresTimeRange")
defer span.End()
// Set reasonable default cache time; adjusted later based on data
c.Response().Header().Set("Cache-Control", "public,max-age=240")
// Validate mode parameter
mode := c.Param("mode")
if mode != "json" {
return echo.NewHTTPError(http.StatusNotFound, "invalid mode - only json supported")
}
// Find and validate server first
server, err := srv.FindServer(ctx, c.Param("server"))
if err != nil {
log.ErrorContext(ctx, "find server", "err", err)
if he, ok := err.(*echo.HTTPError); ok {
return he
}
span.RecordError(err)
return echo.NewHTTPError(http.StatusInternalServerError, "internal error")
}
if server.DeletionAge(30 * 24 * time.Hour) {
span.AddEvent("server deleted")
return echo.NewHTTPError(http.StatusNotFound, "server not found")
}
if server.ID == 0 {
span.AddEvent("server not found")
return echo.NewHTTPError(http.StatusNotFound, "server not found")
}
// Parse and validate time range parameters
params, err := srv.parseTimeRangeParams(ctx, c, server)
if err != nil {
if he, ok := err.(*echo.HTTPError); ok {
return he
}
log.ErrorContext(ctx, "parse time range parameters", "err", err)
span.RecordError(err)
return echo.NewHTTPError(http.StatusInternalServerError, "internal error")
}
// Query ClickHouse for time range data
log.InfoContext(ctx, "executing clickhouse time range query",
"server_id", server.ID,
"server_ip", server.Ip,
"monitor_id", params.monitorID,
"from", params.from,
"to", params.to,
"max_data_points", params.maxDataPoints,
"time_range_duration", params.to.Sub(params.from).String(),
)
logScores, err := srv.ch.LogscoresTimeRange(ctx, int(server.ID), int(params.monitorID), params.from, params.to, params.maxDataPoints)
if err != nil {
log.ErrorContext(ctx, "clickhouse time range query", "err", err,
"server_id", server.ID,
"monitor_id", params.monitorID,
"from", params.from,
"to", params.to,
)
span.RecordError(err)
return echo.NewHTTPError(http.StatusInternalServerError, "internal error")
}
log.InfoContext(ctx, "clickhouse query results",
"server_id", server.ID,
"rows_returned", len(logScores),
"first_few_ids", func() []int64 {
ids := make([]int64, 0, 3)
for i, ls := range logScores {
if i >= 3 {
break
}
ids = append(ids, ls.ID)
}
return ids
}(),
)
// Build LogScoreHistory structure for compatibility with existing functions
history := &logscores.LogScoreHistory{
LogScores: logScores,
Monitors: make(map[int]string),
}
// Get monitor names for the returned data
monitorIDs := []int64{}
for _, ls := range logScores {
if ls.MonitorID.Valid {
monitorID := ls.MonitorID.Int64
if _, exists := history.Monitors[int(monitorID)]; !exists {
history.Monitors[int(monitorID)] = ""
monitorIDs = append(monitorIDs, monitorID)
}
}
}
log.InfoContext(ctx, "monitor processing",
"unique_monitor_ids", monitorIDs,
"monitor_count", len(monitorIDs),
)
// Get monitor details from database for status and display names
var monitors []ntpdb.Monitor
if len(monitorIDs) > 0 {
q := ntpdb.NewWrappedQuerier(ntpdb.New(srv.db))
logScoreMonitors, err := q.GetServerScores(ctx, ntpdb.GetServerScoresParams{
MonitorIDs: monitorIDs,
ServerID: server.ID,
})
if err != nil {
log.ErrorContext(ctx, "get monitor details", "err", err)
// Don't fail the request, just use basic info
} else {
for _, lsm := range logScoreMonitors {
// Create monitor entry for transformation (we mainly need the display name)
tempMon := ntpdb.Monitor{
TlsName: lsm.TlsName,
Location: lsm.Location,
ID: lsm.ID,
}
monitors = append(monitors, tempMon)
// Update monitor name in history
history.Monitors[int(lsm.ID)] = tempMon.DisplayName()
}
}
}
// Transform to Grafana table format
log.InfoContext(ctx, "starting grafana transformation",
"log_scores_count", len(logScores),
"monitors_count", len(monitors),
"history_monitors", history.Monitors,
)
grafanaResponse := transformToGrafanaTableFormat(history, monitors)
log.InfoContext(ctx, "grafana transformation complete",
"response_series_count", len(grafanaResponse),
"response_preview", func() interface{} {
if len(grafanaResponse) == 0 {
return "empty_response"
}
first := grafanaResponse[0]
return map[string]interface{}{
"target": first.Target,
"tags": first.Tags,
"columns_count": len(first.Columns),
"values_count": len(first.Values),
"first_few_values": func() [][]interface{} {
if len(first.Values) == 0 {
return [][]interface{}{}
}
count := 2
if len(first.Values) < count {
count = len(first.Values)
}
return first.Values[:count]
}(),
}
}(),
)
// Set cache control headers based on data characteristics
setHistoryCacheControl(c, history)
// Set CORS headers
c.Response().Header().Set("Access-Control-Allow-Origin", "*")
c.Response().Header().Set("Content-Type", "application/json")
log.InfoContext(ctx, "time range response final",
"server_id", server.ID,
"server_ip", server.Ip,
"monitor_id", params.monitorID,
"time_range", params.to.Sub(params.from).String(),
"raw_data_points", len(logScores),
"grafana_series_count", len(grafanaResponse),
"max_data_points", params.maxDataPoints,
"response_is_null", grafanaResponse == nil,
"response_is_empty", len(grafanaResponse) == 0,
)
return c.JSON(http.StatusOK, grafanaResponse)
}
// testGrafanaTable returns sample data in Grafana table format for validation
func (srv *Server) testGrafanaTable(c echo.Context) error {
log := logger.Setup()
ctx, span := tracing.Tracer().Start(c.Request().Context(), "testGrafanaTable")
defer span.End()
log.InfoContext(ctx, "serving test Grafana table format",
"remote_ip", c.RealIP(),
"user_agent", c.Request().UserAgent(),
)
// Generate sample data with realistic NTP Pool values
now := time.Now()
sampleData := GrafanaTimeSeriesResponse{
{
Target: "monitor{name=zakim1-yfhw4a}",
Tags: map[string]string{
"monitor_id": "126",
"monitor_name": "zakim1-yfhw4a",
"type": "monitor",
"status": "active",
},
Columns: []ColumnDef{
{Text: "time", Type: "time"},
{Text: "score", Type: "number"},
{Text: "rtt", Type: "number", Unit: "ms"},
{Text: "offset", Type: "number", Unit: "s"},
},
Values: [][]interface{}{
{now.Add(-10*time.Minute).Unix() * 1000, 20.0, 18.865, -0.000267},
{now.Add(-20*time.Minute).Unix() * 1000, 20.0, 18.96, -0.000390},
{now.Add(-30*time.Minute).Unix() * 1000, 20.0, 18.073, -0.000768},
{now.Add(-40*time.Minute).Unix() * 1000, 20.0, 18.209, nil}, // null offset example
},
},
{
Target: "monitor{name=nj2-mon01}",
Tags: map[string]string{
"monitor_id": "84",
"monitor_name": "nj2-mon01",
"type": "monitor",
"status": "active",
},
Columns: []ColumnDef{
{Text: "time", Type: "time"},
{Text: "score", Type: "number"},
{Text: "rtt", Type: "number", Unit: "ms"},
{Text: "offset", Type: "number", Unit: "s"},
},
Values: [][]interface{}{
{now.Add(-10*time.Minute).Unix() * 1000, 19.5, 22.145, 0.000123},
{now.Add(-20*time.Minute).Unix() * 1000, 19.8, 21.892, 0.000089},
{now.Add(-30*time.Minute).Unix() * 1000, 20.0, 22.034, 0.000156},
},
},
}
// Add CORS header for browser testing
c.Response().Header().Set("Access-Control-Allow-Origin", "*")
c.Response().Header().Set("Content-Type", "application/json")
// Set cache control similar to other endpoints
c.Response().Header().Set("Cache-Control", "public,max-age=60")
log.InfoContext(ctx, "test Grafana table response sent",
"series_count", len(sampleData),
"response_size_approx", "~1KB",
)
return c.JSON(http.StatusOK, sampleData)
}

server/grafana_test.go

@@ -0,0 +1,119 @@
package server
import (
"testing"
"time"
)
func TestParseRelativeTime(t *testing.T) {
// Use a fixed base time for consistent testing
baseTime := time.Date(2025, 8, 4, 12, 0, 0, 0, time.UTC)
tests := []struct {
name string
input string
expected time.Time
shouldError bool
}{
{
name: "Unix timestamp",
input: "1753500964",
expected: time.Unix(1753500964, 0),
},
{
name: "3 days ago",
input: "-3d",
expected: baseTime.Add(-3 * 24 * time.Hour),
},
{
name: "2 hours ago",
input: "-2h",
expected: baseTime.Add(-2 * time.Hour),
},
{
name: "30 minutes ago",
input: "-30m",
expected: baseTime.Add(-30 * time.Minute),
},
{
name: "5 seconds ago",
input: "-5s",
expected: baseTime.Add(-5 * time.Second),
},
{
name: "3 days in future",
input: "3d",
expected: baseTime.Add(3 * 24 * time.Hour),
},
{
name: "1 hour in future",
input: "1h",
expected: baseTime.Add(1 * time.Hour),
},
{
name: "empty string",
input: "",
shouldError: true,
},
{
name: "invalid format",
input: "invalid",
shouldError: true,
},
{
name: "invalid unit",
input: "3x",
shouldError: true,
},
{
name: "no number",
input: "-d",
shouldError: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result, err := parseRelativeTime(tt.input, baseTime)
if tt.shouldError {
if err == nil {
t.Errorf("parseRelativeTime(%q) expected error, got nil", tt.input)
}
return
}
if err != nil {
t.Errorf("parseRelativeTime(%q) unexpected error: %v", tt.input, err)
return
}
if !result.Equal(tt.expected) {
t.Errorf("parseRelativeTime(%q) = %v, expected %v", tt.input, result, tt.expected)
}
})
}
}
func TestParseRelativeTimeEdgeCases(t *testing.T) {
baseTime := time.Date(2025, 8, 4, 12, 0, 0, 0, time.UTC)
// Test large values
result, err := parseRelativeTime("365d", baseTime)
if err != nil {
t.Errorf("parseRelativeTime('365d') unexpected error: %v", err)
}
expected := baseTime.Add(365 * 24 * time.Hour)
if !result.Equal(expected) {
t.Errorf("parseRelativeTime('365d') = %v, expected %v", result, expected)
}
// Test zero values
result, err = parseRelativeTime("0s", baseTime)
if err != nil {
t.Errorf("parseRelativeTime('0s') unexpected error: %v", err)
}
if !result.Equal(baseTime) {
t.Errorf("parseRelativeTime('0s') = %v, expected %v", result, baseTime)
}
}


@@ -3,7 +3,6 @@ package server
 import (
 	"bytes"
 	"context"
-	"database/sql"
 	"encoding/csv"
 	"errors"
 	"fmt"
@@ -15,6 +14,8 @@ import (
 	"strings"
 	"time"

+	"github.com/jackc/pgx/v5"
+	"github.com/jackc/pgx/v5/pgtype"
 	"github.com/labstack/echo/v4"
 	"go.ntppool.org/common/logger"
 	"go.ntppool.org/common/tracing"
@@ -63,13 +64,13 @@ func paramHistoryMode(s string) historyMode {
 type historyParameters struct {
 	limit       int
-	monitorID   int
+	monitorID   int64
 	server      ntpdb.Server
 	since       time.Time
 	fullHistory bool
 }

-func (srv *Server) getHistoryParameters(ctx context.Context, c echo.Context) (historyParameters, error) {
+func (srv *Server) getHistoryParameters(ctx context.Context, c echo.Context, server ntpdb.Server) (historyParameters, error) {
 	log := logger.FromContext(ctx)
 	p := historyParameters{}
@@ -90,21 +91,30 @@ func (srv *Server) getHistoryParameters(ctx context.Context, c echo.Context) (hi
 	monitorParam := c.QueryParam("monitor")
-	var monitorID uint32
+	var monitorID int64
 	switch monitorParam {
 	case "":
 		name := "recentmedian.scores.ntp.dev"
-		monitor, err := q.GetMonitorByName(ctx, sql.NullString{Valid: true, String: name})
+		var ipVersion ntpdb.NullMonitorsIpVersion
+		if server.IpVersion == ntpdb.ServersIpVersionV4 {
+			ipVersion = ntpdb.NullMonitorsIpVersion{MonitorsIpVersion: ntpdb.MonitorsIpVersionV4, Valid: true}
+		} else {
+			ipVersion = ntpdb.NullMonitorsIpVersion{MonitorsIpVersion: ntpdb.MonitorsIpVersionV6, Valid: true}
+		}
+		monitor, err := q.GetMonitorByNameAndIPVersion(ctx, ntpdb.GetMonitorByNameAndIPVersionParams{
+			TlsName:   pgtype.Text{Valid: true, String: name},
+			IpVersion: ipVersion,
+		})
 		if err != nil {
-			log.Warn("could not find monitor", "name", name, "err", err)
+			log.Warn("could not find monitor", "name", name, "ip_version", server.IpVersion, "err", err)
 		}
 		monitorID = monitor.ID
 	case "*":
 		monitorID = 0 // don't filter on monitor ID
 	default:
-		mID, err := strconv.ParseUint(monitorParam, 10, 32)
+		mID, err := strconv.ParseInt(monitorParam, 10, 64)
 		if err == nil {
-			monitorID = uint32(mID)
+			monitorID = mID
 		} else {
 			// only accept the name prefix; no wildcards; trust the database
 			// to filter out any other crazy
@@ -113,12 +123,21 @@ func (srv *Server) getHistoryParameters(ctx context.Context, c echo.Context) (hi
 			}
 			monitorParam = monitorParam + ".%"
-			monitor, err := q.GetMonitorByName(ctx, sql.NullString{Valid: true, String: monitorParam})
+			var ipVersion ntpdb.NullMonitorsIpVersion
+			if server.IpVersion == ntpdb.ServersIpVersionV4 {
+				ipVersion = ntpdb.NullMonitorsIpVersion{MonitorsIpVersion: ntpdb.MonitorsIpVersionV4, Valid: true}
+			} else {
+				ipVersion = ntpdb.NullMonitorsIpVersion{MonitorsIpVersion: ntpdb.MonitorsIpVersionV6, Valid: true}
+			}
+			monitor, err := q.GetMonitorByNameAndIPVersion(ctx, ntpdb.GetMonitorByNameAndIPVersionParams{
+				TlsName:   pgtype.Text{Valid: true, String: monitorParam},
+				IpVersion: ipVersion,
+			})
 			if err != nil {
-				if err == sql.ErrNoRows {
+				if errors.Is(err, pgx.ErrNoRows) {
 					return p, echo.NewHTTPError(http.StatusNotFound, "monitor not found").WithInternal(err)
 				}
-				log.WarnContext(ctx, "could not find monitor", "name", monitorParam, "err", err)
+				log.WarnContext(ctx, "could not find monitor", "name", monitorParam, "ip_version", server.IpVersion, "err", err)
 				return p, echo.NewHTTPError(http.StatusNotFound, "monitor not found (sql)")
 			}
 			monitorID = monitor.ID
@@ -126,8 +145,8 @@ func (srv *Server) getHistoryParameters(ctx context.Context, c echo.Context) (hi
 		}
 	}
-	p.monitorID = int(monitorID)
-	log.DebugContext(ctx, "monitor param", "monitor", monitorID)
+	p.monitorID = monitorID
+	log.DebugContext(ctx, "monitor param", "monitor", monitorID, "ip_version", server.IpVersion)
 	since, _ := strconv.ParseInt(c.QueryParam("since"), 10, 64) // defaults to 0 so don't care if it parses
 	if since > 0 {
@@ -152,8 +171,8 @@ func (srv *Server) getHistoryParameters(ctx context.Context, c echo.Context) (hi
 	return p, nil
 }

-func (srv *Server) getHistoryMySQL(ctx context.Context, _ echo.Context, p historyParameters) (*logscores.LogScoreHistory, error) {
-	ls, err := logscores.GetHistoryMySQL(ctx, srv.db, p.server.ID, uint32(p.monitorID), p.since, p.limit)
+func (srv *Server) getHistoryPostgres(ctx context.Context, _ echo.Context, p historyParameters) (*logscores.LogScoreHistory, error) {
+	ls, err := logscores.GetHistoryPostgres(ctx, srv.db, p.server.ID, p.monitorID, p.since, p.limit)
 	return ls, err
 }
@@ -171,16 +190,6 @@ func (srv *Server) history(c echo.Context) error {
 		return echo.NewHTTPError(http.StatusNotFound, "invalid mode")
 	}

-	p, err := srv.getHistoryParameters(ctx, c)
-	if err != nil {
-		if he, ok := err.(*echo.HTTPError); ok {
-			return he
-		}
-		log.ErrorContext(ctx, "get history parameters", "err", err)
-		span.RecordError(err)
-		return echo.NewHTTPError(http.StatusInternalServerError, "internal error")
-	}
-
 	server, err := srv.FindServer(ctx, c.Param("server"))
 	if err != nil {
 		log.ErrorContext(ctx, "find server", "err", err)
@@ -199,6 +208,16 @@ func (srv *Server) history(c echo.Context) error {
 		return echo.NewHTTPError(http.StatusNotFound, "server not found")
 	}

+	p, err := srv.getHistoryParameters(ctx, c, server)
+	if err != nil {
+		if he, ok := err.(*echo.HTTPError); ok {
+			return he
+		}
+		log.ErrorContext(ctx, "get history parameters", "err", err)
+		span.RecordError(err)
+		return echo.NewHTTPError(http.StatusInternalServerError, "internal error")
+	}
+
 	p.server = server

 	var history *logscores.LogScoreHistory
@@ -212,9 +231,9 @@ func (srv *Server) history(c echo.Context) error {
 	}

 	if sourceParam == "m" {
-		history, err = srv.getHistoryMySQL(ctx, c, p)
+		history, err = srv.getHistoryPostgres(ctx, c, p)
 	} else {
-		history, err = logscores.GetHistoryClickHouse(ctx, srv.ch, srv.db, p.server.ID, uint32(p.monitorID), p.since, p.limit, p.fullHistory)
+		history, err = logscores.GetHistoryClickHouse(ctx, srv.ch, srv.db, p.server.ID, p.monitorID, p.since, p.limit, p.fullHistory)
 	}
 	if err != nil {
 		var httpError *echo.HTTPError
@@ -258,7 +277,7 @@ func (srv *Server) historyJSON(ctx context.Context, c echo.Context, server ntpdb
 	}

 	type MonitorEntry struct {
-		ID   uint32 `json:"id"`
+		ID   int64  `json:"id"`
 		Name string `json:"name"`
 		Type string `json:"type"`
 		Ts   string `json:"ts"`
@@ -279,9 +298,9 @@ func (srv *Server) historyJSON(ctx context.Context, c echo.Context, server ntpdb
 	// log.InfoContext(ctx, "monitor id list", "ids", history.MonitorIDs)

-	monitorIDs := []uint32{}
+	monitorIDs := []int64{}
 	for k := range history.Monitors {
-		monitorIDs = append(monitorIDs, uint32(k))
+		monitorIDs = append(monitorIDs, int64(k))
 	}

 	q := ntpdb.NewWrappedQuerier(ntpdb.New(srv.db))
@@ -300,12 +319,12 @@ func (srv *Server) historyJSON(ctx context.Context, c echo.Context, server ntpdb
 	// log.InfoContext(ctx, "got logScoreMonitors", "count", len(logScoreMonitors))

 	// Calculate average RTT per monitor
-	monitorRttSums := make(map[uint32]float64)
-	monitorRttCounts := make(map[uint32]int)
+	monitorRttSums := make(map[int64]float64)
+	monitorRttCounts := make(map[int64]int)
 	for _, ls := range history.LogScores {
 		if ls.MonitorID.Valid && ls.Rtt.Valid {
-			monitorID := uint32(ls.MonitorID.Int32)
+			monitorID := ls.MonitorID.Int64
 			monitorRttSums[monitorID] += float64(ls.Rtt.Int32) / 1000.0
 			monitorRttCounts[monitorID]++
 		}
@@ -344,8 +363,8 @@ func (srv *Server) historyJSON(ctx context.Context, c echo.Context, server ntpdb
 		x := float64(1000000000000)
 		score := math.Round(ls.Score*x) / x
 		res.History[i] = ScoresEntry{
-			TS:        ls.Ts.Unix(),
-			MonitorID: int(ls.MonitorID.Int32),
+			TS:        ls.Ts.Time.Unix(),
+			MonitorID: int(ls.MonitorID.Int64),
 			Step:      ls.Step,
 			Score:     score,
 		}
@@ -396,7 +415,7 @@ func (srv *Server) historyCSV(ctx context.Context, c echo.Context, history *logs
 		score := ff(l.Score)
 		var monName string
 		if l.MonitorID.Valid {
-			monName = history.Monitors[int(l.MonitorID.Int32)]
+			monName = history.Monitors[int(l.MonitorID.Int64)]
 		}
 		var leap string
 		if l.Attributes.Leap != 0 {
@@ -409,13 +428,13 @@ func (srv *Server) historyCSV(ctx context.Context, c echo.Context, history *logs
 		}

 		err := w.Write([]string{
-			strconv.Itoa(int(l.Ts.Unix())),
+			strconv.Itoa(int(l.Ts.Time.Unix())),
 			// l.Ts.Format(time.RFC3339),
-			l.Ts.Format("2006-01-02 15:04:05"),
+			l.Ts.Time.Format("2006-01-02 15:04:05"),
 			offset,
 			step,
 			score,
-			fmt.Sprintf("%d", l.MonitorID.Int32),
+			fmt.Sprintf("%d", l.MonitorID.Int64),
 			monName,
 			rtt,
 			leap,
@@ -446,7 +465,7 @@ func setHistoryCacheControl(c echo.Context, history *logscores.LogScoreHistory)
 	if len(history.LogScores) == 0 ||
 		// cache for longer if data hasn't updated for a while; or we didn't
 		// find any.
-		(time.Now().Add(-8 * time.Hour).After(history.LogScores[len(history.LogScores)-1].Ts)) {
+		(time.Now().Add(-8 * time.Hour).After(history.LogScores[len(history.LogScores)-1].Ts.Time)) {
 		hdr.Set("Cache-Control", "s-maxage=260,max-age=360")
 	} else {
 		if len(history.LogScores) == 1 {


@@ -2,17 +2,16 @@ package server
 import (
 	"context"
-	"database/sql"
 	"errors"
 	"fmt"
 	"log/slog"
 	"net/http"
 	"os"
-	"strconv"
 	"time"

 	"golang.org/x/sync/errgroup"

+	"github.com/jackc/pgx/v5/pgxpool"
 	"github.com/labstack/echo-contrib/echoprometheus"
 	"github.com/labstack/echo/v4"
 	"github.com/labstack/echo/v4/middleware"
@@ -36,7 +35,7 @@ import (
 )

 type Server struct {
-	db     *sql.DB
+	db     *pgxpool.Pool
 	ch     *chdb.ClickHouse
 	config *config.Config
@@ -55,7 +54,7 @@ func NewServer(ctx context.Context, configFile string) (*Server, error) {
 	}
 	db, err := ntpdb.OpenDB(ctx, configFile)
 	if err != nil {
-		return nil, fmt.Errorf("mysql open: %w", err)
+		return nil, fmt.Errorf("postgres open: %w", err)
 	}
 	conf := config.New()
@@ -179,7 +178,7 @@ func (srv *Server) Run() error {
 	e.Use(middleware.CORSWithConfig(middleware.CORSConfig{
 		AllowOrigins: []string{
-			"http://localhost", "http://localhost:5173", "http://localhost:8080",
+			"http://localhost", "http://localhost:5173", "http://localhost:5174", "http://localhost:8080",
 			"https://www.ntppool.org", "https://*.ntppool.org",
 			"https://web.beta.grundclock.com", "https://manage.beta.grundclock.com",
 			"https:/*.askdev.grundclock.com",
@@ -208,6 +207,8 @@ func (srv *Server) Run() error {
 	e.GET("/api/server/dns/answers/:server", srv.dnsAnswers)
 	e.GET("/api/server/scores/:server/:mode", srv.history)
 	e.GET("/api/dns/counts", srv.dnsQueryCounts)
+	e.GET("/api/v2/test/grafana-table", srv.testGrafanaTable)
+	e.GET("/api/v2/server/scores/:server/:mode", srv.scoresTimeRange)

 	if len(ntpconf.WebHostname()) > 0 {
 		e.POST("/api/server/scores/:server/:mode", func(c echo.Context) error {
@@ -301,22 +302,9 @@ func healthHandler(srv *Server, log *slog.Logger) func(w http.ResponseWriter, re
 		defer cancel()

 		g, ctx := errgroup.WithContext(ctx)

-		stats := srv.db.Stats()
-		if stats.OpenConnections > 3 {
-			log.InfoContext(ctx, "health requests", "url", req.URL.String(), "stats", stats)
-		}
-
-		if resetParam := req.URL.Query().Get("reset"); resetParam != "" {
-			reset, err := strconv.ParseBool(resetParam)
-			log.InfoContext(ctx, "db reset request", "err", err, "reset", reset)
-			if err == nil && reset {
-				// this feature was to debug some specific problem
-				log.InfoContext(ctx, "setting idle db conns to zero")
-				srv.db.SetConnMaxLifetime(30 * time.Second)
-				srv.db.SetMaxIdleConns(0)
-				srv.db.SetMaxIdleConns(4)
-			}
+		stats := srv.db.Stat()
+		if stats.TotalConns() > 3 {
+			log.InfoContext(ctx, "health requests", "url", req.URL.String(), "total_conns", stats.TotalConns())
 		}

 		g.Go(func() error {
@@ -338,7 +326,7 @@ func healthHandler(srv *Server, log *slog.Logger) func(w http.ResponseWriter, re
 		})

 		g.Go(func() error {
-			err := srv.db.PingContext(ctx)
+			err := srv.db.Ping(ctx)
 			if err != nil {
 				log.WarnContext(ctx, "db ping", "err", err)
 				return err


@@ -1,12 +1,12 @@
 package server

 import (
-	"database/sql"
 	"errors"
 	"net/http"
 	"strconv"
 	"time"

+	"github.com/jackc/pgx/v5"
 	"github.com/labstack/echo/v4"
 	"go.ntppool.org/common/logger"
 	"go.ntppool.org/common/tracing"
@@ -27,7 +27,7 @@ func (srv *Server) zoneCounts(c echo.Context) error {
 	zone, err := q.GetZoneByName(ctx, c.Param("zone_name"))
 	if err != nil || zone.ID == 0 {
-		if errors.Is(err, sql.ErrNoRows) {
+		if errors.Is(err, pgx.ErrNoRows) {
 			return c.String(http.StatusNotFound, "Not found")
 		}
 		log.ErrorContext(ctx, "could not query for zone", "err", err)
@@ -37,7 +37,7 @@ func (srv *Server) zoneCounts(c echo.Context) error {
 	counts, err := q.GetZoneCounts(ctx, zone.ID)
 	if err != nil {
-		if !errors.Is(err, sql.ErrNoRows) {
+		if !errors.Is(err, pgx.ErrNoRows) {
 			log.ErrorContext(ctx, "get counts", "err", err)
 			span.RecordError(err)
 			return c.String(http.StatusInternalServerError, "internal error")
@@ -71,7 +71,7 @@ func (srv *Server) zoneCounts(c echo.Context) error {
 	count := 0
 	dates := map[int64]bool{}
 	for _, c := range counts {
-		ep := c.Date.Unix()
+		ep := c.Date.Time.Unix()
 		if _, ok := dates[ep]; !ok {
 			count++
 			dates[ep] = true
@@ -84,7 +84,6 @@ func (srv *Server) zoneCounts(c echo.Context) error {
 	} else {
 		// skip everything and use the special logic that we always include the most recent date
 		skipCount = float64(count) + 1
-
 	}
@@ -100,13 +99,13 @@ func (srv *Server) zoneCounts(c echo.Context) error {
 	lastSkip := int64(0)
 	skipThreshold := 0.5
 	for _, c := range counts {
-		cDate := c.Date.Unix()
+		cDate := c.Date.Time.Unix()
 		if (toSkip <= skipThreshold && cDate != lastSkip) ||
 			lastDate == cDate ||
 			mostRecentDate == cDate {
-			// log.Info("adding date", "date", c.Date.Format(time.DateOnly))
+			// log.Info("adding date", "date", c.Date.Time.Format(time.DateOnly))
 			rv.History = append(rv.History, historyEntry{
-				D:  c.Date.Format(time.DateOnly),
+				D:  c.Date.Time.Format(time.DateOnly),
 				Ts: int(cDate),
 				Ac: int(c.CountActive),
 				Rc: int(c.CountRegistered),
@@ -144,5 +143,4 @@ func (srv *Server) zoneCounts(c echo.Context) error {
 	c.Response().Header().Set("Cache-Control", "s-maxage=28800, max-age=7200")
 	return c.JSON(http.StatusOK, rv)
 }
-


@@ -2,20 +2,25 @@ version: "2"
 sql:
   - schema: "schema.sql"
     queries: "query.sql"
-    engine: "mysql"
+    engine: "postgresql"
+    strict_order_by: false
     gen:
       go:
         package: "ntpdb"
         out: "ntpdb"
+        sql_package: "pgx/v5"
         emit_json_tags: true
         emit_db_tags: true
         omit_unused_structs: true
         emit_interface: true
+        # emit_all_enum_values: true
         rename:
           servers.Ip: IP
         overrides:
           - column: log_scores.attributes
             go_type: go.ntppool.org/common/types.LogScoreAttributes
           - column: "server_netspeed.netspeed_active"
-            go_type: "uint64"
+            go_type: "int"
+          - column: "zone_server_counts.netspeed_active"
+            go_type: "int"
+          - db_type: "bigint"
+            go_type: "int"