Compare commits

30 commits:

c9481d12c6, 85d86bc837, 196f90a2b9, 02a6f587bb, 2dfc355f7c, 3e6a0f9e63, 9c6b8d1867, 393d532ce2, 267c279f3d, eb5459abf3, 8262b1442f, d4bf8d9e16, 6c5b762a57, fd6e87cf2d, a22d5ebc7e, 42ce22e83e, 087d253d90, ae7acb4111, bd4e52a73b, 118e596098, e6f39f201c, 962839ed89, f8662fbda5, a5b1f9ef08, e316aeee99, 3a9879b793, 9fb3edacef, d206f9d20e, dc8adc1aea, 35ea262b99
```diff
@@ -21,7 +21,8 @@ steps:
       memory: 100MiB

 - name: test
-  image: golang:1.23.4
+  image: golang:1.25
+  pull: always
   volumes:
     - name: go
       path: /go
@@ -32,7 +33,8 @@ steps:
     - go build ./...

 - name: goreleaser
-  image: golang:1.23.4
+  image: golang:1.25
+  pull: always
   resources:
     requests:
       cpu: 6000
@@ -81,6 +83,6 @@ volumes:

 ---
 kind: signature
-hmac: f9c2145e25810c18afed02f1092a1910894c6924873f9d1d7fdc492ebe6e8555
+hmac: 7f4f57140394a1c3a34e4d23188edda3cd95359dacf6d0abfa45bda3afff692f

 ...
```
API.md (new file, 481 lines)
# NTP Pool Data API Documentation

This document describes the REST API endpoints provided by the NTP Pool data API server.

## Base URL

The API server runs on port 8030. All endpoints are accessible at:
- Production: `https://www.ntppool.org/api/...`
- Local development: `http://localhost:8030/api/...`

## Common Response Headers

All API responses include:
- `Server`: Version information (e.g., `data-api/1.2.3+abc123`)
- `Cache-Control`: Caching directives
- `Access-Control-Allow-Origin`: CORS configuration

## Endpoints
### 1. User Country Data

**GET** `/api/usercc`

Returns DNS query statistics by user country code and NTP pool zone statistics.

#### Response Format
```json
{
  "UserCountry": [
    {
      "CC": "us",
      "IPv4": 42.5,
      "IPv6": 12.3
    }
  ],
  "ZoneStats": {
    "zones": [
      {
        "zone_name": "us",
        "netspeed_active": 1000,
        "server_count": 450
      }
    ]
  }
}
```

#### Response Fields
- `UserCountry`: Array of country statistics
  - `CC`: Two-letter country code
  - `IPv4`: IPv4 query percentage
  - `IPv6`: IPv6 query percentage
- `ZoneStats`: NTP pool zone information

#### Cache Control
- `Cache-Control`: Varies based on data freshness

---
### 2. DNS Query Counts

**GET** `/api/dns/counts`

Returns aggregated DNS query counts from ClickHouse analytics.

#### Response Format
```json
{
  "total_queries": 1234567,
  "by_country": {
    "us": 456789,
    "de": 234567
  },
  "by_query_type": {
    "A": 987654,
    "AAAA": 345678
  }
}
```

#### Cache Control
- `Cache-Control`: `s-maxage=30,max-age=60`

---
### 3. Server DNS Answers

**GET** `/api/server/dns/answers/{server}`

Returns DNS answer statistics for a specific NTP server, including geographic distribution and scoring metrics.

#### Path Parameters
- `server`: Server IP address (IPv4 or IPv6)

#### Response Format
```json
{
  "Server": [
    {
      "CC": "us",
      "Count": 12345,
      "Points": 1234.5,
      "Netspeed": 567.8
    }
  ],
  "PointSymbol": "‱"
}
```

#### Response Fields
- `Server`: Array of country-specific statistics
  - `CC`: Country code where DNS queries originated
  - `Count`: Number of DNS answers served
  - `Points`: Calculated scoring points (basis: 10,000)
  - `Netspeed`: Network speed score relative to zone capacity
- `PointSymbol`: Symbol used for point calculations ("‱" = per 10,000)

#### Error Responses
- `400 Bad Request`: Invalid server IP format
- `404 Not Found`: Server not found
- `500 Internal Server Error`: Database error

#### Cache Control
- Success: `public,max-age=1800`
- Errors: `public,max-age=300`

#### URL Canonicalization
Redirects to the canonical IP format with `308 Permanent Redirect` if:
- the IP format is not canonical
- query parameters are present
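A canonical textual form for the redirect target can be computed with Go's `net/netip` package. The sketch below illustrates the idea and is not necessarily the server's implementation:

```go
package main

import (
	"fmt"
	"net/netip"
)

// canonicalIP parses an IP string and returns its canonical text form,
// e.g. compressing IPv6 addresses; malformed input returns an error.
func canonicalIP(s string) (string, error) {
	addr, err := netip.ParseAddr(s)
	if err != nil {
		return "", err
	}
	return addr.String(), nil
}

func main() {
	// The long IPv6 form canonicalizes to "2001:db8::1"; a request using
	// the long form would be answered with a 308 redirect to the short one.
	c, err := canonicalIP("2001:0db8:0000:0000:0000:0000:0000:0001")
	if err != nil {
		panic(err)
	}
	fmt.Println(c) // prints 2001:db8::1
}
```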
---
### 4. Server Score History (Legacy)

**GET** `/api/server/scores/{server}/{mode}`

**⚠️ Legacy API** - Returns historical scoring data for an NTP server in JSON or CSV format. For enhanced features and higher limits, use the [v2 API](#7-server-score-history-v2---enhanced-time-range-api) instead.

#### Path Parameters
- `server`: Server IP address or ID
- `mode`: Response format (`json` or `log`)

#### Query Parameters
- `limit`: Maximum number of records (default: 100, max: 10000)
- `monitor`: Monitor ID or name prefix (default: "recentmedian.scores.ntp.dev")
  - Use `*` for all monitors
  - Use a monitor ID number
  - Use a monitor name prefix (e.g., "recentmedian")
- `since`: Unix timestamp for start time
- `source`: Data source (`m` for MySQL, `c` for ClickHouse)
- `full_history`: Include full history (private IPs only)

#### JSON Response Format (`mode=json`)
```json
{
  "history": [
    {
      "ts": 1640995200,
      "offset": 0.001234,
      "step": 0.5,
      "score": 20.0,
      "monitor_id": 123,
      "rtt": 45.6
    }
  ],
  "monitors": [
    {
      "id": 123,
      "name": "recentmedian.scores.ntp.dev",
      "type": "ntp",
      "ts": "2022-01-01T12:00:00Z",
      "score": 19.5,
      "status": "active",
      "avg_rtt": 45.2
    }
  ],
  "server": {
    "ip": "192.0.2.1"
  }
}
```

#### CSV Response Format (`mode=log`)
Returns CSV data with headers:
```
ts_epoch,ts,offset,step,score,monitor_id,monitor_name,rtt,leap,error
1640995200,2022-01-01 12:00:00,0.001234,0.5,20.0,123,recentmedian.scores.ntp.dev,45.6,,
```

#### CSV Fields
- `ts_epoch`: Unix timestamp
- `ts`: Human-readable timestamp
- `offset`: Time offset in seconds
- `step`: NTP step value
- `score`: Computed score
- `monitor_id`: Monitor identifier
- `monitor_name`: Monitor display name
- `rtt`: Round-trip time in milliseconds
- `leap`: Leap second indicator
- `error`: Error message (sanitized for CSV)
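A client can consume the `mode=log` output with Go's standard `encoding/csv` package. This is a minimal sketch; the `parseScoresCSV` helper is hypothetical, written against the documented header row:

```go
package main

import (
	"encoding/csv"
	"fmt"
	"strings"
)

// parseScoresCSV reads a mode=log response body and returns one map per
// data row, keyed by the header column names, so field order changes in
// the response do not break the client.
func parseScoresCSV(body string) ([]map[string]string, error) {
	rows, err := csv.NewReader(strings.NewReader(body)).ReadAll()
	if err != nil {
		return nil, err
	}
	if len(rows) == 0 {
		return nil, fmt.Errorf("empty response")
	}
	header := rows[0]
	out := make([]map[string]string, 0, len(rows)-1)
	for _, row := range rows[1:] {
		m := map[string]string{}
		for i, name := range header {
			m[name] = row[i]
		}
		out = append(out, m)
	}
	return out, nil
}

func main() {
	// Sample body in the documented format (leap and error are empty).
	body := "ts_epoch,ts,offset,step,score,monitor_id,monitor_name,rtt,leap,error\n" +
		"1640995200,2022-01-01 12:00:00,0.001234,0.5,20.0,123,recentmedian.scores.ntp.dev,45.6,,\n"
	recs, err := parseScoresCSV(body)
	if err != nil {
		panic(err)
	}
	fmt.Println(recs[0]["score"], recs[0]["rtt"]) // prints 20.0 45.6
}
```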
#### Error Responses
- `404 Not Found`: Invalid mode, server not found, or monitor not found
- `500 Internal Server Error`: Database error

#### Cache Control
Dynamic based on data freshness:
- Recent data: `s-maxage=90,max-age=120`
- Older data: `s-maxage=260,max-age=360`

---
### 5. Zone Counts

**GET** `/api/zone/counts/{zone_name}`

Returns historical server count and network capacity data for an NTP pool zone.

#### Path Parameters
- `zone_name`: Zone name (e.g., "us", "europe", "@" for global)

#### Query Parameters
- `limit`: Maximum number of date entries to return

#### Response Format
```json
{
  "history": [
    {
      "d": "2022-01-01",
      "ts": 1640995200,
      "rc": 450,
      "ac": 380,
      "w": 12500,
      "iv": "v4"
    }
  ]
}
```

#### Response Fields
- `history`: Array of historical data points
  - `d`: Date in YYYY-MM-DD format
  - `ts`: Unix timestamp
  - `rc`: Registered server count
  - `ac`: Active server count
  - `w`: Network capacity (netspeed active)
  - `iv`: IP version ("v4" or "v6")

#### Data Sampling
When `limit` is specified, the API samples data points to provide representative historical coverage while staying within the limit.
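As an illustration of the idea (not necessarily the server's exact algorithm), even sampling can map the `limit` output slots onto the full range of stored points:

```go
package main

import "fmt"

// sampleEvenly picks up to limit indices spread evenly across n points,
// always including the first and last point when limit >= 2.
func sampleEvenly(n, limit int) []int {
	if limit <= 0 || n <= limit {
		// Nothing to thin out: return every index.
		out := make([]int, n)
		for i := range out {
			out[i] = i
		}
		return out
	}
	out := make([]int, limit)
	for i := 0; i < limit; i++ {
		// Map slot i in [0, limit-1] onto index range [0, n-1].
		out[i] = i * (n - 1) / (limit - 1)
	}
	return out
}

func main() {
	// 10 stored points, limit=4: keep the endpoints and two interior points.
	fmt.Println(sampleEvenly(10, 4)) // prints [0 3 6 9]
}
```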
#### Error Responses
- `404 Not Found`: Zone not found
- `500 Internal Server Error`: Database error

#### Cache Control
- `s-maxage=28800, max-age=7200`

---
### 6. Graph Images

**GET** `/graph/{server}/{type}`

Returns generated graph images for server visualization.

#### Path Parameters
- `server`: Server IP address
- `type`: Graph type (currently only "offset.png" is supported)

#### Response
- **Content-Type**: `image/png` or upstream service content type
- **Body**: Binary image data

#### Features
- Canonical URL enforcement (redirects if the server IP format is non-canonical)
- Query parameter removal (redirects to clean URLs)
- Upstream service integration via HTTP proxy

#### Error Responses
- `404 Not Found`: Invalid image type or server not found
- `500 Internal Server Error`: Upstream service error

#### Cache Control
- Success: `public,max-age=1800,s-maxage=1350`
- Errors: `public,max-age=240`

---
### 7. Server Score History (v2) - Enhanced Time Range API

**GET** `/api/v2/server/scores/{server}/{mode}`

**🆕 Recommended API** - Returns historical scoring data for an NTP server in a Grafana-compatible table format with enhanced time range support and relative time expressions.

#### Path Parameters
- `server`: Server IP address or ID
- `mode`: Response format (`json` only)

#### Query Parameters
- `from`: Start time (required) - Unix timestamp or relative time (e.g., "-3d", "-2h", "-30m")
- `to`: End time (required) - Unix timestamp or relative time (e.g., "-1d", "-1h", "0s")
- `maxDataPoints`: Maximum data points to return (default: 50000, max: 50000)
- `monitor`: Monitor filter (ID, name prefix, or "*" for all monitors)
- `interval`: Future downsampling interval (not implemented)

#### Time Format Support
The v2 API supports both Unix timestamps and relative time expressions:

**Unix Timestamps:**
- `from=1753500964&to=1753587364` - Standard Unix seconds

**Relative Time Expressions:**
- `from=-3d&to=-1d` - From 3 days ago to 1 day ago
- `from=-2h&to=-30m` - From 2 hours ago to 30 minutes ago
- `from=-1d&to=0s` - From 1 day ago to now

**Supported Units:**
- `s` - seconds
- `m` - minutes
- `h` - hours
- `d` - days

**Format:** `[-]<number><unit>` (negative sign for past, no sign for future)
#### Response Format
Grafana table format optimized for visualization:

```json
[
  {
    "target": "monitor{name=zakim1-yfhw4a}",
    "tags": {
      "monitor_id": "126",
      "monitor_name": "zakim1-yfhw4a",
      "type": "monitor",
      "status": "active"
    },
    "columns": [
      {"text": "time", "type": "time"},
      {"text": "score", "type": "number"},
      {"text": "rtt", "type": "number", "unit": "ms"},
      {"text": "offset", "type": "number", "unit": "s"}
    ],
    "values": [
      [1753431667000, 20.0, 18.865, -0.000267],
      [1753431419000, 20.0, 18.96, -0.000390],
      [1753431151000, 20.0, 18.073, -0.000768]
    ]
  }
]
```

#### Response Structure
- **One series per monitor**: Efficient grouping by monitor ID
- **Table format**: All metrics (time, score, rtt, offset) in columns
- **Timestamps**: Converted to milliseconds for Grafana compatibility
- **Null handling**: Null RTT/offset values preserved as `null`

#### Limits and Constraints
- **Data points**: Maximum 50,000 records per request
- **Time range**: Maximum 90 days per request
- **Minimum range**: 1 second
- **Data source**: ClickHouse only (for better time range performance)

#### Example Requests

**Recent data with relative times:**
```
GET /api/v2/server/scores/192.0.2.1/json?from=-3d&to=-1h&monitor=*
```

**Specific time range:**
```
GET /api/v2/server/scores/192.0.2.1/json?from=1753500000&to=1753586400&monitor=recentmedian
```

**All monitors, last 24 hours:**
```
GET /api/v2/server/scores/192.0.2.1/json?from=-1d&to=0s&monitor=*&maxDataPoints=10000
```

#### Error Responses
- `400 Bad Request`: Invalid time format, range too large/small, or invalid parameters
- `404 Not Found`: Server not found, invalid mode, or monitor not found
- `500 Internal Server Error`: Database or internal error

#### Cache Control
Dynamic caching based on data characteristics:
- Recent data: `s-maxage=90,max-age=120`
- Older data: `s-maxage=260,max-age=360`
- Empty results: `s-maxage=260,max-age=360`

#### Comparison with Legacy API
The v2 API offers significant improvements over `/api/server/scores/{server}/{mode}`:

| Feature | Legacy API | v2 API |
|---------|------------|--------|
| **Record limit** | 10,000 | 50,000 |
| **Time format** | Unix timestamps only | Unix timestamps + relative time |
| **Response format** | Legacy JSON/CSV | Grafana table format |
| **Time range** | Limited by `since` parameter | Full `from`/`to` range support |
| **Maximum range** | No explicit limit | 90 days |
| **Performance** | MySQL + ClickHouse | ClickHouse optimized |

#### Migration Guide
To migrate from the legacy API to v2:

**Legacy:**
```
/api/server/scores/192.0.2.1/json?limit=10000&since=1753500000&monitor=*
```

**V2 equivalent:**
```
/api/v2/server/scores/192.0.2.1/json?from=1753500000&to=0s&monitor=*&maxDataPoints=10000
```

**V2 with relative time:**
```
/api/v2/server/scores/192.0.2.1/json?from=-3d&to=-1h&monitor=*
```

---
## Health Check Endpoints

### Health Check
**GET** `:9019/health`

Returns server health status by testing database connections.

#### Query Parameters
- `reset`: Boolean to reset the database connection pool

#### Response
- `200 OK`: "ok" - All systems healthy
- `503 Service Unavailable`: "db ping err" - Database connectivity issues

### Metrics
**GET** `:9020/metrics`

Prometheus metrics endpoint for monitoring and observability.

---
## Error Handling

### Standard HTTP Status Codes
- `200 OK`: Successful request
- `308 Permanent Redirect`: URL canonicalization
- `400 Bad Request`: Invalid request parameters
- `404 Not Found`: Resource not found
- `500 Internal Server Error`: Server-side error
- `503 Service Unavailable`: Service temporarily unavailable

### Error Response Format
Most endpoints return plain text error messages for non-2xx responses. Some endpoints may return JSON error objects.

---

## Data Sources

The API integrates multiple data sources:
- **MySQL**: Operational data (servers, zones, accounts, current scores)
- **ClickHouse**: Analytics data (DNS query logs, historical scoring data)

Different endpoints may use different data sources, and some endpoints allow source selection via query parameters.

---

## Rate Limiting and Caching

The API implements extensive caching at multiple levels:
- **Response-level caching**: Each endpoint sets appropriate `Cache-Control` headers
- **Database query optimization**: Efficient queries with proper indexing
- **CDN integration**: Headers configured for CDN caching

Cache durations vary by endpoint and data freshness, ranging from 30 seconds for real-time data to 8 hours for historical data.
```diff
@@ -1,4 +1,4 @@
-FROM alpine:3.20.3
+FROM alpine:3.21

 RUN apk --no-cache upgrade
 RUN apk --no-cache add ca-certificates tzdata zsh jq tmux curl
```
Makefile

```diff
@@ -2,12 +2,9 @@ generate: sqlc
 	go generate ./...

 sqlc:
-	@which gowrap >& /dev/null || (echo "Run 'go install github.com/hexdigest/gowrap/cmd/gowrap@v1.3.2'" && exit 1)
-	@which mockery >& /dev/null || (echo "Run 'go install github.com/vektra/mockery/v2@v2.35.4'" && exit 1)
-	sqlc compile
-	sqlc generate
-	gowrap gen -t opentelemetry -i QuerierTx -p ./ntpdb -o ./ntpdb/otel.go
-	mockery --dir ntpdb --name QuerierTx --config /dev/null
+	go tool sqlc compile
+	go tool sqlc generate
+	go tool gowrap gen -g -t opentelemetry -i QuerierTx -p ./ntpdb -o ./ntpdb/otel.go

 sign:
 	drone sign --save ntppool/data-api
```
```diff
@@ -24,15 +24,16 @@ type ServerTotals map[string]uint64
 func (s ServerQueries) Len() int {
 	return len(s)
 }

 func (s ServerQueries) Swap(i, j int) {
 	s[i], s[j] = s[j], s[i]
 }

 func (s ServerQueries) Less(i, j int) bool {
 	return s[i].Count > s[j].Count
 }

 func (d *ClickHouse) ServerAnswerCounts(ctx context.Context, serverIP string, days int) (ServerQueries, error) {
 	ctx, span := tracing.Tracer().Start(ctx, "ServerAnswerCounts")
 	defer span.End()
```
chdb/db.go

```diff
@@ -3,8 +3,10 @@ package chdb
 import (
 	"context"
 	"os"
+	"strings"
 	"time"

+	"dario.cat/mergo"
 	"github.com/ClickHouse/clickhouse-go/v2"
 	"gopkg.in/yaml.v3"
@@ -20,8 +22,13 @@ type Config struct {
 }

 type DBConfig struct {
+	DSN string
+
 	Host     string
 	Database string
+
 	User     string
 	Password string
 }

 type ClickHouse struct {
@@ -38,10 +45,9 @@ func New(ctx context.Context, dbConfigPath string) (*ClickHouse, error) {
 }

 func setupClickhouse(ctx context.Context, configFile string) (*ClickHouse, error) {
 	log := logger.FromContext(ctx)

-	log.InfoContext(ctx, "opening config", "file", configFile)
+	log.DebugContext(ctx, "opening ch config", "file", configFile)

 	dbFile, err := os.Open(configFile)
 	if err != nil {
@@ -74,28 +80,19 @@ func setupClickhouse(ctx context.Context, configFile string) (*ClickHouse, error
 func open(ctx context.Context, cfg DBConfig) (clickhouse.Conn, error) {
 	log := logger.Setup()

-	conn, err := clickhouse.Open(&clickhouse.Options{
-		Addr: []string{cfg.Host + ":9000"},
-		Auth: clickhouse.Auth{
-			Database: cfg.Database,
-			Username: "default",
-			Password: "",
-		},
-		// Debug: true,
-		// Debugf: func(format string, v ...interface{}) {
-		// 	slog.Info("debug format", "format", format)
-		// 	fmt.Printf(format+"\n", v)
-		// },
+	options := &clickhouse.Options{
+		Protocol: clickhouse.Native,
 		Settings: clickhouse.Settings{
 			"max_execution_time": 60,
 		},
 		Compression: &clickhouse.Compression{
 			Method: clickhouse.CompressionLZ4,
 		},
 		DialTimeout: time.Second * 5,
-		MaxOpenConns:    5,
-		MaxIdleConns:    5,
-		ConnMaxLifetime: time.Duration(10) * time.Minute,
+		MaxOpenConns:    8,
+		MaxIdleConns:    3,
+		ConnMaxLifetime: 5 * time.Minute,
 		ConnOpenStrategy: clickhouse.ConnOpenInOrder,
 		BlockBufferSize:  10,
 		MaxCompressionBuffer: 10240,
@@ -107,7 +104,49 @@ func open(ctx context.Context, cfg DBConfig) (clickhouse.Conn, error) {
 			{Name: "data-api", Version: version.Version()},
 		},
 	},
-	})
+		// Debug: true,
+		// Debugf: func(format string, v ...interface{}) {
+		// 	slog.Info("debug format", "format", format)
+		// 	fmt.Printf(format+"\n", v)
+		// },
+	}
+
+	if cfg.DSN != "" {
+		dsnOptions, err := clickhouse.ParseDSN(cfg.DSN)
+		if err != nil {
+			return nil, err
+		}
+		err = mergo.Merge(options, dsnOptions)
+		if err != nil {
+			return nil, err
+		}
+	}
+
+	if cfg.Host != "" {
+		options.Addr = []string{cfg.Host}
+	}
+
+	if len(options.Addr) > 0 {
+		// todo: support literal ipv6; or just require port to be configured explicitly
+		if !strings.Contains(options.Addr[0], ":") {
+			options.Addr[0] += ":9000"
+		}
+	}
+
+	if cfg.Database != "" {
+		options.Auth.Database = cfg.Database
+	}
+
+	if cfg.User != "" {
+		options.Auth.Username = cfg.User
+	}
+
+	if cfg.Password != "" {
+		options.Auth.Password = cfg.Password
+	}
+
+	conn, err := clickhouse.Open(options)
 	if err != nil {
 		return nil, err
 	}
```
```diff
@@ -24,9 +24,11 @@ type UserCountry []flatAPI
 func (s UserCountry) Len() int {
 	return len(s)
 }

 func (s UserCountry) Swap(i, j int) {
 	s[i], s[j] = s[j], s[i]
 }

 func (s UserCountry) Less(i, j int) bool {
 	return s[i].IPv4 > s[j].IPv4
 }
@@ -183,3 +185,55 @@ func (d *ClickHouse) UserCountryData(ctx context.Context) (*UserCountry, error)
 	return nil, nil
 }
+
+type DNSQueryCounts struct {
+	T   uint32  `json:"t"`
+	Avg float64 `json:"avg"`
+	Max uint64  `json:"max"`
+}
+
+func (d *ClickHouse) DNSQueries(ctx context.Context) ([]DNSQueryCounts, error) {
+	log := logger.Setup()
+	ctx, span := tracing.Tracer().Start(ctx, "DNSQueries")
+	defer span.End()
+
+	startUnix := time.Now().Add(2 * time.Hour * -1).Unix()
+	startUnix -= startUnix % (60 * 5)
+
+	log.InfoContext(ctx, "start time", "start", startUnix)
+
+	rows, err := d.Logs.Query(clickhouse.Context(ctx, clickhouse.WithSpan(span.SpanContext())),
+		`
+		select toUnixTimestamp(toStartOfFiveMinute(t)) as t,
+		  sum(q)/300 as avg, max(q) as max
+		from (
+		  select window as t, sumSimpleState(queries) as q
+		  from geodns.by_origin_1s
+		  where
+		    window > FROM_UNIXTIME(?)
+		    and Origin IN ('pool.ntp.org', 'g.ntpns.org')
+		  group by t order by t
+		)
+		group by t order by t
+		`, startUnix)
+	if err != nil {
+		log.ErrorContext(ctx, "query error", "err", err)
+		return nil, fmt.Errorf("database error")
+	}
+
+	var t uint32
+	var avg float64
+	var max uint64
+
+	r := []DNSQueryCounts{}
+
+	for rows.Next() {
+		if err := rows.Scan(&t, &avg, &max); err != nil {
+			return nil, err
+		}
+		log.InfoContext(ctx, "data", "t", t, "avg", avg, "max", max)
+		r = append(r, DNSQueryCounts{t, avg, max})
+	}
+
+	return r, nil
+}
```
```diff
@@ -3,6 +3,7 @@ package chdb
 import (
 	"context"
 	"fmt"
+	"strings"
 	"time"

 	"github.com/ClickHouse/clickhouse-go/v2"
@@ -105,3 +106,129 @@ func (d *ClickHouse) Logscores(ctx context.Context, serverID, monitorID int, sin
 	return rv, nil
 }
+
+// LogscoresTimeRange queries log scores within a specific time range for Grafana integration
+func (d *ClickHouse) LogscoresTimeRange(ctx context.Context, serverID, monitorID int, from, to time.Time, limit int) ([]ntpdb.LogScore, error) {
+	log := logger.Setup()
+	ctx, span := tracing.Tracer().Start(ctx, "CH LogscoresTimeRange")
+	defer span.End()
+
+	args := []interface{}{serverID, from, to}
+
+	query := `select id,monitor_id,server_id,ts,
+		toFloat64(score),toFloat64(step),offset,
+		rtt,leap,warning,error
+		from log_scores
+		where
+		server_id = ?
+		and ts >= ?
+		and ts <= ?`
+
+	if monitorID > 0 {
+		query += " and monitor_id = ?"
+		args = append(args, monitorID)
+	}
+
+	// Always order by timestamp ASC for Grafana convention
+	query += " order by ts ASC"
+
+	// Apply limit to prevent memory issues
+	if limit > 0 {
+		query += " limit ?"
+		args = append(args, limit)
+	}
+
+	log.DebugContext(ctx, "clickhouse time range query",
+		"query", query,
+		"args", args,
+		"server_id", serverID,
+		"monitor_id", monitorID,
+		"from", from.Format(time.RFC3339),
+		"to", to.Format(time.RFC3339),
+		"limit", limit,
+		"full_sql_with_params", func() string {
+			// Build a readable SQL query with parameters substituted for debugging
+			sqlDebug := query
+			paramIndex := 0
+			for strings.Contains(sqlDebug, "?") && paramIndex < len(args) {
+				var replacement string
+				switch v := args[paramIndex].(type) {
+				case int:
+					replacement = fmt.Sprintf("%d", v)
+				case time.Time:
+					replacement = fmt.Sprintf("'%s'", v.Format("2006-01-02 15:04:05"))
+				default:
+					replacement = fmt.Sprintf("'%v'", v)
+				}
+				sqlDebug = strings.Replace(sqlDebug, "?", replacement, 1)
+				paramIndex++
+			}
+			return sqlDebug
+		}(),
+	)
+
+	rows, err := d.Scores.Query(
+		clickhouse.Context(
+			ctx, clickhouse.WithSpan(span.SpanContext()),
+		),
+		query, args...,
+	)
+	if err != nil {
+		log.ErrorContext(ctx, "time range query error", "err", err)
+		return nil, fmt.Errorf("database error")
+	}
+
+	rv := []ntpdb.LogScore{}
+
+	for rows.Next() {
+		row := ntpdb.LogScore{}
+		var leap uint8
+
+		if err := rows.Scan(
+			&row.ID,
+			&row.MonitorID,
+			&row.ServerID,
+			&row.Ts,
+			&row.Score,
+			&row.Step,
+			&row.Offset,
+			&row.Rtt,
+			&leap,
+			&row.Attributes.Warning,
+			&row.Attributes.Error,
+		); err != nil {
+			log.Error("could not parse row", "err", err)
+			continue
+		}
+
+		row.Attributes.Leap = int8(leap)
+		rv = append(rv, row)
+	}
+
+	log.InfoContext(ctx, "time range query results",
+		"rows_returned", len(rv),
+		"server_id", serverID,
+		"monitor_id", monitorID,
+		"time_range", fmt.Sprintf("%s to %s", from.Format(time.RFC3339), to.Format(time.RFC3339)),
+		"limit", limit,
+		"sample_rows", func() []map[string]interface{} {
+			samples := make([]map[string]interface{}, 0, 3)
+			for i, row := range rv {
+				if i >= 3 {
+					break
+				}
+				samples = append(samples, map[string]interface{}{
+					"id":           row.ID,
+					"monitor_id":   row.MonitorID,
+					"ts":           row.Ts.Time.Format(time.RFC3339),
+					"score":        row.Score,
+					"rtt_valid":    row.Rtt.Valid,
+					"offset_valid": row.Offset.Valid,
+				})
+			}
+			return samples
+		}(),
+	)
+
+	return rv, nil
+}
```
```diff
@@ -30,7 +30,7 @@ func NewCLI() *CLI {

 // RootCmd represents the base command when called without any subcommands
 func (cli *CLI) rootCmd() *cobra.Command {
-	var cmd = &cobra.Command{
+	cmd := &cobra.Command{
 		Use:   "data-api",
 		Short: "A brief description of your application",
 		// Uncomment the following line if your bare application
@@ -47,7 +47,6 @@ func (cli *CLI) rootCmd() *cobra.Command {
 // Execute adds all child commands to the root command and sets flags appropriately.
 // This is called by main.main(). It only needs to happen once to the rootCmd.
 func Execute() {
-
 	cli := NewCLI()

 	if err := cli.root.Execute(); err != nil {
@@ -57,7 +56,6 @@ func Execute() {
 }

 func (cli *CLI) init(cmd *cobra.Command) {
-
 	logger.Setup()

 	cmd.PersistentFlags().StringVar(&cfgFile, "database-config", "database.yaml", "config file (default is $HOME/.data-api.yaml)")
@@ -18,8 +18,7 @@ import (
 )

 func (cli *CLI) serverCmd() *cobra.Command {
-
-	var serverCmd = &cobra.Command{
+	serverCmd := &cobra.Command{
 		Use:   "server",
 		Short: "server starts the API server",
 		Long:  `starts the API server on (default) port 8000`,
```
190
go.mod
190
go.mod
@@ -1,99 +1,151 @@
|
||||
module go.ntppool.org/data-api
|
||||
|
||||
go 1.23
|
||||
|
||||
toolchain go1.23.4
|
||||
go 1.25.0
|
||||
|
||||
// replace github.com/samber/slog-echo => github.com/abh/slog-echo v0.0.0-20231024051244-af740639893e
|
||||
|
||||
replace go.opentelemetry.io/otel/exporters/prometheus v0.59.1 => go.opentelemetry.io/otel/exporters/prometheus v0.59.0
|
||||
|
||||
tool (
|
||||
github.com/hexdigest/gowrap/cmd/gowrap
|
||||
github.com/sqlc-dev/sqlc/cmd/sqlc
|
||||
// github.com/vektra/mockery/v3
|
||||
)
|
||||
|
||||
require (
|
||||
github.com/ClickHouse/clickhouse-go/v2 v2.30.0
|
||||
github.com/go-sql-driver/mysql v1.8.1
|
||||
github.com/hashicorp/go-retryablehttp v0.7.7
|
||||
github.com/labstack/echo-contrib v0.17.2
|
||||
github.com/labstack/echo/v4 v4.13.3
|
||||
github.com/samber/slog-echo v1.14.8
|
||||
github.com/spf13/cobra v1.8.1
|
||||
github.com/stretchr/testify v1.10.0
|
||||
dario.cat/mergo v1.0.2
|
||||
github.com/ClickHouse/clickhouse-go/v2 v2.40.3
|
||||
github.com/hashicorp/go-retryablehttp v0.7.8
|
||||
github.com/jackc/pgx/v5 v5.7.6
|
||||
github.com/labstack/echo-contrib v0.17.4
|
||||
github.com/labstack/echo/v4 v4.13.4
|
||||
github.com/samber/slog-echo v1.17.2
|
||||
github.com/spf13/cobra v1.10.1
|
||||
go.ntppool.org/api v0.3.4
|
||||
go.ntppool.org/common v0.3.0
|
||||
go.opentelemetry.io/contrib/instrumentation/github.com/labstack/echo/otelecho v0.58.0
|
||||
go.opentelemetry.io/contrib/instrumentation/net/http/httptrace/otelhttptrace v0.58.0
|
||||
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.58.0
|
||||
go.opentelemetry.io/otel v1.33.0
|
||||
go.opentelemetry.io/otel/trace v1.33.0
|
||||
golang.org/x/sync v0.10.0
|
||||
go.ntppool.org/common v0.6.3-0.20251129195245-283d3936f6d0
|
||||
go.opentelemetry.io/contrib/instrumentation/github.com/labstack/echo/otelecho v0.63.0
|
||||
go.opentelemetry.io/contrib/instrumentation/net/http/httptrace/otelhttptrace v0.63.0
|
||||
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.63.0
|
||||
go.opentelemetry.io/otel v1.38.0
|
||||
go.opentelemetry.io/otel/trace v1.38.0
|
||||
golang.org/x/sync v0.17.0
|
||||
gopkg.in/yaml.v3 v3.0.1
|
||||
)
|
||||
|
||||
require (
	cel.dev/expr v0.24.0 // indirect
	filippo.io/edwards25519 v1.1.0 // indirect
	github.com/ClickHouse/ch-go v0.63.1 // indirect
	github.com/andybalholm/brotli v1.1.1 // indirect
	github.com/ClickHouse/ch-go v0.68.0 // indirect
	github.com/Masterminds/goutils v1.1.1 // indirect
	github.com/Masterminds/semver/v3 v3.1.1 // indirect
	github.com/Masterminds/sprig/v3 v3.2.2 // indirect
	github.com/andybalholm/brotli v1.2.0 // indirect
	github.com/antlr4-go/antlr/v4 v4.13.1 // indirect
	github.com/beorn7/perks v1.0.1 // indirect
	github.com/cenkalti/backoff/v4 v4.3.0 // indirect
	github.com/cenkalti/backoff/v5 v5.0.3 // indirect
	github.com/cespare/xxhash/v2 v2.3.0 // indirect
	github.com/davecgh/go-spew v1.1.1 // indirect
	github.com/cubicdaiya/gonp v1.0.4 // indirect
	github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
	github.com/dustin/go-humanize v1.0.1 // indirect
	github.com/fatih/structtag v1.2.0 // indirect
	github.com/felixge/httpsnoop v1.0.4 // indirect
	github.com/go-faster/city v1.0.1 // indirect
	github.com/go-faster/errors v0.7.1 // indirect
	github.com/go-logr/logr v1.4.2 // indirect
	github.com/go-logr/logr v1.4.3 // indirect
	github.com/go-logr/stdr v1.2.2 // indirect
	github.com/go-sql-driver/mysql v1.9.3 // indirect
	github.com/google/cel-go v0.24.1 // indirect
	github.com/google/uuid v1.6.0 // indirect
	github.com/grpc-ecosystem/grpc-gateway/v2 v2.25.1 // indirect
	github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.2 // indirect
	github.com/hashicorp/go-cleanhttp v0.5.2 // indirect
	github.com/hexdigest/gowrap v1.4.2 // indirect
	github.com/huandu/xstrings v1.5.0 // indirect
	github.com/imdario/mergo v0.3.12 // indirect
	github.com/inconshreveable/mousetrap v1.1.0 // indirect
	github.com/klauspost/compress v1.17.11 // indirect
	github.com/jackc/pgpassfile v1.0.0 // indirect
	github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761 // indirect
	github.com/jackc/puddle/v2 v2.2.2 // indirect
	github.com/jinzhu/inflection v1.0.0 // indirect
	github.com/klauspost/compress v1.18.0 // indirect
	github.com/labstack/gommon v0.4.2 // indirect
	github.com/mattn/go-colorable v0.1.13 // indirect
	github.com/mattn/go-colorable v0.1.14 // indirect
	github.com/mattn/go-isatty v0.0.20 // indirect
	github.com/mitchellh/copystructure v1.2.0 // indirect
	github.com/mitchellh/reflectwalk v1.0.2 // indirect
	github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
	github.com/paulmach/orb v0.11.1 // indirect
	github.com/ncruces/go-strftime v0.1.9 // indirect
	github.com/paulmach/orb v0.12.0 // indirect
	github.com/pganalyze/pg_query_go/v6 v6.1.0 // indirect
	github.com/pierrec/lz4/v4 v4.1.22 // indirect
	github.com/pingcap/errors v0.11.5-0.20240311024730-e056997136bb // indirect
	github.com/pingcap/failpoint v0.0.0-20240528011301-b51a646c7c86 // indirect
	github.com/pingcap/log v1.1.0 // indirect
	github.com/pingcap/tidb/pkg/parser v0.0.0-20250324122243-d51e00e5bbf0 // indirect
	github.com/pkg/errors v0.9.1 // indirect
	github.com/pmezard/go-difflib v1.0.0 // indirect
	github.com/prometheus/client_golang v1.20.5 // indirect
	github.com/prometheus/client_model v0.6.1 // indirect
	github.com/prometheus/common v0.61.0 // indirect
	github.com/prometheus/procfs v0.15.1 // indirect
	github.com/remychantenay/slog-otel v1.3.2 // indirect
	github.com/samber/lo v1.47.0 // indirect
	github.com/samber/slog-multi v1.2.4 // indirect
	github.com/segmentio/asm v1.2.0 // indirect
	github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
	github.com/prometheus/client_golang v1.23.2 // indirect
	github.com/prometheus/client_model v0.6.2 // indirect
	github.com/prometheus/common v0.66.1 // indirect
	github.com/prometheus/otlptranslator v1.0.0 // indirect
	github.com/prometheus/procfs v0.17.0 // indirect
	github.com/remychantenay/slog-otel v1.3.4 // indirect
	github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec // indirect
	github.com/riza-io/grpc-go v0.2.0 // indirect
	github.com/samber/lo v1.51.0 // indirect
	github.com/samber/slog-common v0.19.0 // indirect
	github.com/samber/slog-multi v1.5.0 // indirect
	github.com/segmentio/asm v1.2.1 // indirect
	github.com/shopspring/decimal v1.4.0 // indirect
	github.com/spf13/pflag v1.0.5 // indirect
	github.com/stretchr/objx v0.5.2 // indirect
	github.com/spf13/cast v1.4.1 // indirect
	github.com/spf13/pflag v1.0.10 // indirect
	github.com/sqlc-dev/sqlc v1.29.0 // indirect
	github.com/stoewer/go-strcase v1.2.0 // indirect
	github.com/tetratelabs/wazero v1.9.0 // indirect
	github.com/valyala/bytebufferpool v1.0.0 // indirect
	github.com/valyala/fasttemplate v1.2.2 // indirect
	go.opentelemetry.io/auto/sdk v1.1.0 // indirect
	go.opentelemetry.io/contrib/bridges/otelslog v0.8.0 // indirect
	go.opentelemetry.io/contrib/bridges/prometheus v0.58.0 // indirect
	go.opentelemetry.io/contrib/exporters/autoexport v0.58.0 // indirect
	go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploggrpc v0.9.0 // indirect
	go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploghttp v0.9.0 // indirect
	go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v1.33.0 // indirect
	go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.33.0 // indirect
	go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.33.0 // indirect
	go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.33.0 // indirect
	go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.33.0 // indirect
	go.opentelemetry.io/otel/exporters/prometheus v0.55.0 // indirect
	go.opentelemetry.io/otel/exporters/stdout/stdoutlog v0.9.0 // indirect
	go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.33.0 // indirect
	go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.33.0 // indirect
	go.opentelemetry.io/otel/log v0.9.0 // indirect
	go.opentelemetry.io/otel/metric v1.33.0 // indirect
	go.opentelemetry.io/otel/sdk v1.33.0 // indirect
	go.opentelemetry.io/otel/sdk/log v0.9.0 // indirect
	go.opentelemetry.io/otel/sdk/metric v1.33.0 // indirect
	go.opentelemetry.io/proto/otlp v1.4.0 // indirect
	golang.org/x/crypto v0.31.0 // indirect
	golang.org/x/mod v0.22.0 // indirect
	golang.org/x/net v0.33.0 // indirect
	golang.org/x/sys v0.28.0 // indirect
	golang.org/x/text v0.21.0 // indirect
	golang.org/x/time v0.8.0 // indirect
	google.golang.org/genproto/googleapis/api v0.0.0-20241223144023-3abc09e42ca8 // indirect
	google.golang.org/genproto/googleapis/rpc v0.0.0-20241223144023-3abc09e42ca8 // indirect
	google.golang.org/grpc v1.69.2 // indirect
	google.golang.org/protobuf v1.36.1 // indirect
	github.com/wasilibs/go-pgquery v0.0.0-20250409022910-10ac41983c07 // indirect
	github.com/wasilibs/wazero-helpers v0.0.0-20240620070341-3dff1577cd52 // indirect
	go.opentelemetry.io/auto/sdk v1.2.1 // indirect
	go.opentelemetry.io/contrib/bridges/otelslog v0.13.0 // indirect
	go.opentelemetry.io/contrib/bridges/prometheus v0.63.0 // indirect
	go.opentelemetry.io/contrib/exporters/autoexport v0.63.0 // indirect
	go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploggrpc v0.14.0 // indirect
	go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploghttp v0.14.0 // indirect
	go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v1.38.0 // indirect
	go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.38.0 // indirect
	go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.38.0 // indirect
	go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.38.0 // indirect
	go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.38.0 // indirect
	go.opentelemetry.io/otel/exporters/prometheus v0.60.0 // indirect
	go.opentelemetry.io/otel/exporters/stdout/stdoutlog v0.14.0 // indirect
	go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.38.0 // indirect
	go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.38.0 // indirect
	go.opentelemetry.io/otel/log v0.14.0 // indirect
	go.opentelemetry.io/otel/metric v1.38.0 // indirect
	go.opentelemetry.io/otel/sdk v1.38.0 // indirect
	go.opentelemetry.io/otel/sdk/log v0.14.0 // indirect
	go.opentelemetry.io/otel/sdk/metric v1.38.0 // indirect
	go.opentelemetry.io/proto/otlp v1.8.0 // indirect
	go.uber.org/atomic v1.11.0 // indirect
	go.uber.org/multierr v1.11.0 // indirect
	go.uber.org/zap v1.27.0 // indirect
	go.yaml.in/yaml/v2 v2.4.3 // indirect
	go.yaml.in/yaml/v3 v3.0.4 // indirect
	golang.org/x/crypto v0.42.0 // indirect
	golang.org/x/exp v0.0.0-20250305212735-054e65f0b394 // indirect
	golang.org/x/mod v0.28.0 // indirect
	golang.org/x/net v0.44.0 // indirect
	golang.org/x/sys v0.36.0 // indirect
	golang.org/x/text v0.29.0 // indirect
	golang.org/x/time v0.13.0 // indirect
	golang.org/x/tools v0.37.0 // indirect
	google.golang.org/genproto/googleapis/api v0.0.0-20250922171735-9219d122eba9 // indirect
	google.golang.org/genproto/googleapis/rpc v0.0.0-20250922171735-9219d122eba9 // indirect
	google.golang.org/grpc v1.75.1 // indirect
	google.golang.org/protobuf v1.36.9 // indirect
	gopkg.in/natefinch/lumberjack.v2 v2.2.1 // indirect
	modernc.org/libc v1.62.1 // indirect
	modernc.org/mathutil v1.7.1 // indirect
	modernc.org/memory v1.9.1 // indirect
	modernc.org/sqlite v1.37.0 // indirect
)

436 go.sum
@@ -1,23 +1,44 @@
cel.dev/expr v0.24.0 h1:56OvJKSH3hDGL0ml5uSxZmz3/3Pq4tJ+fb1unVLAFcY=
cel.dev/expr v0.24.0/go.mod h1:hLPLo1W4QUmuYdA72RBX06QTs6MXw941piREPl3Yfiw=
dario.cat/mergo v1.0.2 h1:85+piFYR1tMbRrLcDwR18y4UKJ3aH1Tbzi24VRW1TK8=
dario.cat/mergo v1.0.2/go.mod h1:E/hbnu0NxMFBjpMIE34DRGLWqDy0g5FuKDhCb31ngxA=
filippo.io/edwards25519 v1.1.0 h1:FNf4tywRC1HmFuKW5xopWpigGjJKiJSV0Cqo0cJWDaA=
filippo.io/edwards25519 v1.1.0/go.mod h1:BxyFTGdWcka3PhytdK4V28tE5sGfRvvvRV7EaN4VDT4=
github.com/ClickHouse/ch-go v0.63.1 h1:s2JyZvWLTCSAGdtjMBBmAgQQHMco6pawLJMOXi0FODM=
github.com/ClickHouse/ch-go v0.63.1/go.mod h1:I1kJJCL3WJcBMGe1m+HVK0+nREaG+JOYYBWjrDrF3R0=
github.com/ClickHouse/clickhouse-go/v2 v2.30.0 h1:AG4D/hW39qa58+JHQIFOSnxyL46H6h2lrmGGk17dhFo=
github.com/ClickHouse/clickhouse-go/v2 v2.30.0/go.mod h1:i9ZQAojcayW3RsdCb3YR+n+wC2h65eJsZCscZ1Z1wyo=
github.com/andybalholm/brotli v1.1.1 h1:PR2pgnyFznKEugtsUo0xLdDop5SKXd5Qf5ysW+7XdTA=
github.com/andybalholm/brotli v1.1.1/go.mod h1:05ib4cKhjx3OQYUY22hTVd34Bc8upXjOLL2rKwwZBoA=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/ClickHouse/ch-go v0.68.0 h1:zd2VD8l2aVYnXFRyhTyKCrxvhSz1AaY4wBUXu/f0GiU=
github.com/ClickHouse/ch-go v0.68.0/go.mod h1:C89Fsm7oyck9hr6rRo5gqqiVtaIY6AjdD0WFMyNRQ5s=
github.com/ClickHouse/clickhouse-go/v2 v2.40.3 h1:46jB4kKwVDUOnECpStKMVXxvR0Cg9zeV9vdbPjtn6po=
github.com/ClickHouse/clickhouse-go/v2 v2.40.3/go.mod h1:qO0HwvjCnTB4BPL/k6EE3l4d9f/uF+aoimAhJX70eKA=
github.com/Masterminds/goutils v1.1.1 h1:5nUrii3FMTL5diU80unEVvNevw1nH4+ZV4DSLVJLSYI=
github.com/Masterminds/goutils v1.1.1/go.mod h1:8cTjp+g8YejhMuvIA5y2vz3BpJxksy863GQaJW2MFNU=
github.com/Masterminds/semver/v3 v3.1.1 h1:hLg3sBzpNErnxhQtUy/mmLR2I9foDujNK030IGemrRc=
github.com/Masterminds/semver/v3 v3.1.1/go.mod h1:VPu/7SZ7ePZ3QOrcuXROw5FAcLl4a0cBrbBpGY/8hQs=
github.com/Masterminds/sprig/v3 v3.2.2 h1:17jRggJu518dr3QaafizSXOjKYp94wKfABxUmyxvxX8=
github.com/Masterminds/sprig/v3 v3.2.2/go.mod h1:UoaO7Yp8KlPnJIYWTFkMaqPUYKTfGFPhxNuwnnxkKlk=
github.com/andybalholm/brotli v1.2.0 h1:ukwgCxwYrmACq68yiUqwIWnGY0cTPox/M94sVwToPjQ=
github.com/andybalholm/brotli v1.2.0/go.mod h1:rzTDkvFWvIrjDXZHkuS16NPggd91W3kUSvPlQ1pLaKY=
github.com/antlr4-go/antlr/v4 v4.13.1 h1:SqQKkuVZ+zWkMMNkjy5FZe5mr5WURWnlpmOuzYWrPrQ=
github.com/antlr4-go/antlr/v4 v4.13.1/go.mod h1:GKmUxMtwp6ZgGwZSva4eWPC5mS6vUAmOABFgjdkM7Nw=
github.com/benbjohnson/clock v1.1.0/go.mod h1:J11/hYXuz8f4ySSvYwY0FKfm+ezbsZBKZxNJlLklBHA=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/cenkalti/backoff/v4 v4.3.0 h1:MyRJ/UdXutAwSAT+s3wNd7MfTIcy71VQueUuFK343L8=
github.com/cenkalti/backoff/v4 v4.3.0/go.mod h1:Y3VNntkOUPxTVeUxJ/G5vcM//AlwfmyYozVcomhLiZE=
github.com/cenkalti/backoff/v5 v5.0.3 h1:ZN+IMa753KfX5hd8vVaMixjnqRZ3y8CuJKRKj1xcsSM=
github.com/cenkalti/backoff/v5 v5.0.3/go.mod h1:rkhZdG3JZukswDf7f0cwqPNk4K0sa+F97BxZthm/crw=
github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/cpuguy83/go-md2man/v2 v2.0.4/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g=
github.com/cubicdaiya/gonp v1.0.4 h1:ky2uIAJh81WiLcGKBVD5R7KsM/36W6IqqTy6Bo6rGws=
github.com/cubicdaiya/gonp v1.0.4/go.mod h1:iWGuP/7+JVTn02OWhRemVbMmG1DOUnmrGTYYACpOI0I=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY=
github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto=
github.com/fatih/color v1.16.0 h1:zmkK9Ngbjj+K0yRhTVONQh1p/HknKYSlNT+vZCzyokM=
github.com/fatih/color v1.16.0/go.mod h1:fL2Sau1YI5c0pdGEVCbKQbLXB6edEj1ZgiY4NijnWvE=
github.com/fatih/structtag v1.2.0 h1:/OdNE99OxoI/PqaW/SuSK9uxxT3f/tcSZgon/ssNSx4=
github.com/fatih/structtag v1.2.0/go.mod h1:mBJUNpUnHmRKrKlQQlmCrh5PuhftFbNv8Ys4/aAZl94=
github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg=
github.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
github.com/go-faster/city v1.0.1 h1:4WAxSZ3V2Ws4QRDrscLEDcibJY8uf41H6AhXDrNDcGw=
@@ -25,38 +46,63 @@ github.com/go-faster/city v1.0.1/go.mod h1:jKcUJId49qdW3L1qKHH/3wPeUstCVpVSXTM6v
github.com/go-faster/errors v0.7.1 h1:MkJTnDoEdi9pDabt1dpWf7AA8/BaSYZqibYyhZ20AYg=
github.com/go-faster/errors v0.7.1/go.mod h1:5ySTjWFiphBs07IKuiL69nxdfd5+fzh1u7FPGZP2quo=
github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/logr v1.4.2 h1:6pFjapn8bFcIbiKo3XT4j/BhANplGihG6tvd+8rYgrY=
github.com/go-logr/logr v1.4.2/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI=
github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
github.com/go-sql-driver/mysql v1.8.1 h1:LedoTUt/eveggdHS9qUFC1EFSa8bU2+1pZjSRpvNJ1Y=
github.com/go-sql-driver/mysql v1.8.1/go.mod h1:wEBSXgmK//2ZFJyE+qWnIsVGmvmEKlqwuVSjsCm7DZg=
github.com/go-sql-driver/mysql v1.9.3 h1:U/N249h2WzJ3Ukj8SowVFjdtZKfu9vlLZxjPXV1aweo=
github.com/go-sql-driver/mysql v1.9.3/go.mod h1:qn46aNg1333BRMNU69Lq93t8du/dwxI64Gl8i5p1WMU=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/gojuno/minimock/v3 v3.0.10 h1:0UbfgdLHaNRPHWF/RFYPkwxV2KI+SE4tR0dDSFMD7+A=
github.com/gojuno/minimock/v3 v3.0.10/go.mod h1:CFXcUJYnBe+1QuNzm+WmdPYtvi/+7zQcPcyQGsbcIXg=
github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=
github.com/golang/snappy v0.0.1/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/google/cel-go v0.24.1 h1:jsBCtxG8mM5wiUJDSGUqU0K7Mtr3w7Eyv00rw4DiZxI=
github.com/google/cel-go v0.24.1/go.mod h1:Hdf9TqOaTNSFQA1ybQaRqATVoK7m/zcf7IMhGXP5zI8=
github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
github.com/google/pprof v0.0.0-20250317173921-a4b03ec1a45e h1:ijClszYn+mADRFY17kjQEVQ1XRhq2/JR1M3sGqeJoxs=
github.com/google/pprof v0.0.0-20250317173921-a4b03ec1a45e/go.mod h1:boTsfXsheKC2y+lKOCMpSfarhxDeIzfZG1jqGcPl3cA=
github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.25.1 h1:VNqngBF40hVlDloBruUehVYC3ArSgIyScOAyMRqBxRg=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.25.1/go.mod h1:RBRO7fro65R6tjKzYgLAFo0t1QEXY1Dp+i/bvpRiqiQ=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.2 h1:8Tjv8EJ+pM1xP8mK6egEbD1OgnVTyacbefKhmbLhIhU=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.2/go.mod h1:pkJQ2tZHJ0aFOVEEot6oZmaVEZcRme73eIFmhiVuRWs=
github.com/hashicorp/go-cleanhttp v0.5.2 h1:035FKYIWjmULyFRBKPs8TBQoi0x6d9G4xc9neXJWAZQ=
github.com/hashicorp/go-cleanhttp v0.5.2/go.mod h1:kO/YDlP8L1346E6Sodw+PrpBSV4/SoxCXGY6BqNFT48=
github.com/hashicorp/go-hclog v1.6.3 h1:Qr2kF+eVWjTiYmU7Y31tYlP1h0q/X3Nl3tPGdaB11/k=
github.com/hashicorp/go-hclog v1.6.3/go.mod h1:W4Qnvbt70Wk/zYJryRzDRU/4r0kIg0PVHBcfoyhpF5M=
github.com/hashicorp/go-retryablehttp v0.7.7 h1:C8hUCYzor8PIfXHa4UrZkU4VvK8o9ISHxT2Q8+VepXU=
github.com/hashicorp/go-retryablehttp v0.7.7/go.mod h1:pkQpWZeYWskR+D1tR2O5OcBFOxfA7DoAO6xtkuQnHTk=
github.com/hashicorp/go-retryablehttp v0.7.8 h1:ylXZWnqa7Lhqpk0L1P1LzDtGcCR0rPVUrx/c8Unxc48=
github.com/hashicorp/go-retryablehttp v0.7.8/go.mod h1:rjiScheydd+CxvumBsIrFKlx3iS0jrZ7LvzFGFmuKbw=
github.com/hexdigest/gowrap v1.4.2 h1:crtk5lGwHCROa77mKcP/iQ50eh7z6mBjXsg4U492gfc=
github.com/hexdigest/gowrap v1.4.2/go.mod h1:s+1hE6qakgdaaLqgdwPAj5qKYVBCSbPJhEbx+I1ef/Q=
github.com/huandu/xstrings v1.3.1/go.mod h1:y5/lhBue+AyNmUVz9RLU9xbLR0o4KIIExikq4ovT0aE=
github.com/huandu/xstrings v1.5.0 h1:2ag3IFq9ZDANvthTwTiqSSZLjDc+BedvHPAp5tJy2TI=
github.com/huandu/xstrings v1.5.0/go.mod h1:y5/lhBue+AyNmUVz9RLU9xbLR0o4KIIExikq4ovT0aE=
github.com/imdario/mergo v0.3.11/go.mod h1:jmQim1M+e3UYxmgPu/WyfjB3N3VflVyUjjjwH0dnCYA=
github.com/imdario/mergo v0.3.12 h1:b6R2BslTbIEToALKP7LxUvijTsNI9TAe80pLWN2g/HU=
github.com/imdario/mergo v0.3.12/go.mod h1:jmQim1M+e3UYxmgPu/WyfjB3N3VflVyUjjjwH0dnCYA=
github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
github.com/jackc/pgpassfile v1.0.0 h1:/6Hmqy13Ss2zCq62VdNG8tM1wchn8zjSGOBJ6icpsIM=
github.com/jackc/pgpassfile v1.0.0/go.mod h1:CEx0iS5ambNFdcRtxPj5JhEz+xB6uRky5eyVu/W2HEg=
github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761 h1:iCEnooe7UlwOQYpKFhBabPMi4aNAfoODPEFNiAnClxo=
github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761/go.mod h1:5TJZWKEWniPve33vlWYSoGYefn3gLQRzjfDlhSJ9ZKM=
github.com/jackc/pgx/v5 v5.7.6 h1:rWQc5FwZSPX58r1OQmkuaNicxdmExaEz5A2DO2hUuTk=
github.com/jackc/pgx/v5 v5.7.6/go.mod h1:aruU7o91Tc2q2cFp5h4uP3f6ztExVpyVv88Xl/8Vl8M=
github.com/jackc/puddle/v2 v2.2.2 h1:PR8nw+E/1w0GLuRFSmiioY6UooMp6KJv0/61nB7icHo=
github.com/jackc/puddle/v2 v2.2.2/go.mod h1:vriiEXHvEE654aYKXXjOvZM39qJ0q+azkZFrfEOc3H4=
github.com/jinzhu/inflection v1.0.0 h1:K317FqzuhWc8YvSVlFMCCUb36O/S9MCKRDI7QkRKD/E=
github.com/jinzhu/inflection v1.0.0/go.mod h1:h+uFLlag+Qp1Va5pdKtLDYj+kHp5pxUVkryuEj+Srlc=
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/klauspost/compress v1.13.6/go.mod h1:/3/Vjq9QcHkK5uEr5lBEmyoZ1iFhe47etQ6QUkpK6sk=
github.com/klauspost/compress v1.17.11 h1:In6xLpyWOi1+C7tXUUWv2ot1QvBjxevKAaI6IXrJmUc=
github.com/klauspost/compress v1.17.11/go.mod h1:pMDklpSncoRMuLFrf1W9Ss9KT+0rH90U12bZKk7uwG0=
github.com/klauspost/compress v1.18.0 h1:c/Cqfb0r+Yi+JtIEq73FWXVkRonBlf0CRNYc8Zttxdo=
github.com/klauspost/compress v1.18.0/go.mod h1:2Pp+KzxcywXVXMr50+X0Q/Lsb43OQHYWRCY2AiWywWQ=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
@@ -66,67 +112,113 @@ github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
github.com/labstack/echo-contrib v0.17.2 h1:K1zivqmtcC70X9VdBFdLomjPDEVHlrcAObqmuFj1c6w=
github.com/labstack/echo-contrib v0.17.2/go.mod h1:NeDh3PX7j/u+jR4iuDt1zHmWZSCz9c/p9mxXcDpyS8E=
github.com/labstack/echo/v4 v4.13.3 h1:pwhpCPrTl5qry5HRdM5FwdXnhXSLSY+WE+YQSeCaafY=
github.com/labstack/echo/v4 v4.13.3/go.mod h1:o90YNEeQWjDozo584l7AwhJMHN0bOC4tAfg+Xox9q5g=
github.com/labstack/echo-contrib v0.17.4 h1:g5mfsrJfJTKv+F5uNKCyrjLK7js+ZW6HTjg4FnDxxgk=
github.com/labstack/echo-contrib v0.17.4/go.mod h1:9O7ZPAHUeMGTOAfg80YqQduHzt0CzLak36PZRldYrZ0=
github.com/labstack/echo/v4 v4.13.4 h1:oTZZW+T3s9gAu5L8vmzihV7/lkXGZuITzTQkTEhcXEA=
github.com/labstack/echo/v4 v4.13.4/go.mod h1:g63b33BZ5vZzcIUF8AtRH40DrTlXnx4UMC8rBdndmjQ=
github.com/labstack/gommon v0.4.2 h1:F8qTUNXgG1+6WQmqoUWnz8WiEU60mXVVw0P4ht1WRA0=
github.com/labstack/gommon v0.4.2/go.mod h1:QlUFxVM+SNXhDL/Z7YhocGIBYOiwB0mXm1+1bAPHPyU=
github.com/mattn/go-colorable v0.1.13 h1:fFA4WZxdEF4tXPZVKMLwD8oUnCTTo08duU7wxecdEvA=
github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovkB8vQcUbaXHg=
github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
github.com/mattn/go-colorable v0.1.14 h1:9A9LHSqF/7dyVVX6g0U9cwm9pG3kP9gSzcuIPHPsaIE=
github.com/mattn/go-colorable v0.1.14/go.mod h1:6LmQG8QLFO4G5z1gPvYEzlUgJ2wF+stgPZH1UqBm1s8=
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mitchellh/copystructure v1.0.0/go.mod h1:SNtv71yrdKgLRyLFxmLdkAbkKEFWgYaq1OVrnRcwhnw=
github.com/mitchellh/copystructure v1.2.0 h1:vpKXTN4ewci03Vljg/q9QvCGUDttBOGBIa15WveJJGw=
github.com/mitchellh/copystructure v1.2.0/go.mod h1:qLl+cE2AmVv+CoeAwDPye/v+N2HKCj9FbZEVFJRxO9s=
github.com/mitchellh/reflectwalk v1.0.0/go.mod h1:mSTlrgnPZtwu0c4WaC2kGObEpuNDbx0jmZXqmk4esnw=
github.com/mitchellh/reflectwalk v1.0.2 h1:G2LzWKi524PWgd3mLHV8Y5k7s6XUvT0Gef6zxSIeXaQ=
github.com/mitchellh/reflectwalk v1.0.2/go.mod h1:mSTlrgnPZtwu0c4WaC2kGObEpuNDbx0jmZXqmk4esnw=
github.com/montanaflynn/stats v0.0.0-20171201202039-1bf9dbcd8cbe/go.mod h1:wL8QJuTMNUDYhXwkmfOly8iTdp5TEcJFWZD2D7SIkUc=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/paulmach/orb v0.11.1 h1:3koVegMC4X/WeiXYz9iswopaTwMem53NzTJuTF20JzU=
github.com/paulmach/orb v0.11.1/go.mod h1:5mULz1xQfs3bmQm63QEJA6lNGujuRafwA5S/EnuLaLU=
github.com/ncruces/go-strftime v0.1.9 h1:bY0MQC28UADQmHmaF5dgpLmImcShSi2kHU9XLdhx/f4=
github.com/ncruces/go-strftime v0.1.9/go.mod h1:Fwc5htZGVVkseilnfgOVb9mKy6w1naJmn9CehxcKcls=
github.com/paulmach/orb v0.12.0 h1:z+zOwjmG3MyEEqzv92UN49Lg1JFYx0L9GpGKNVDKk1s=
github.com/paulmach/orb v0.12.0/go.mod h1:5mULz1xQfs3bmQm63QEJA6lNGujuRafwA5S/EnuLaLU=
github.com/paulmach/protoscan v0.2.1/go.mod h1:SpcSwydNLrxUGSDvXvO0P7g7AuhJ7lcKfDlhJCDw2gY=
github.com/pganalyze/pg_query_go/v6 v6.1.0 h1:jG5ZLhcVgL1FAw4C/0VNQaVmX1SUJx71wBGdtTtBvls=
github.com/pganalyze/pg_query_go/v6 v6.1.0/go.mod h1:nvTHIuoud6e1SfrUaFwHqT0i4b5Nr+1rPWVds3B5+50=
github.com/pierrec/lz4/v4 v4.1.22 h1:cKFw6uJDK+/gfw5BcDL0JL5aBsAFdsIT18eRtLj7VIU=
github.com/pierrec/lz4/v4 v4.1.22/go.mod h1:gZWDp/Ze/IJXGXf23ltt2EXimqmTUXEy0GFuRQyBid4=
github.com/pingcap/errors v0.11.0/go.mod h1:Oi8TUi2kEtXXLMJk9l1cGmz20kV3TaQ0usTwv5KuLY8=
github.com/pingcap/errors v0.11.5-0.20240311024730-e056997136bb h1:3pSi4EDG6hg0orE1ndHkXvX6Qdq2cZn8gAPir8ymKZk=
github.com/pingcap/errors v0.11.5-0.20240311024730-e056997136bb/go.mod h1:X2r9ueLEUZgtx2cIogM0v4Zj5uvvzhuuiu7Pn8HzMPg=
github.com/pingcap/failpoint v0.0.0-20240528011301-b51a646c7c86 h1:tdMsjOqUR7YXHoBitzdebTvOjs/swniBTOLy5XiMtuE=
github.com/pingcap/failpoint v0.0.0-20240528011301-b51a646c7c86/go.mod h1:exzhVYca3WRtd6gclGNErRWb1qEgff3LYta0LvRmON4=
github.com/pingcap/log v1.1.0 h1:ELiPxACz7vdo1qAvvaWJg1NrYFoY6gqAh/+Uo6aXdD8=
github.com/pingcap/log v1.1.0/go.mod h1:DWQW5jICDR7UJh4HtxXSM20Churx4CQL0fwL/SoOSA4=
github.com/pingcap/tidb/pkg/parser v0.0.0-20250324122243-d51e00e5bbf0 h1:W3rpAI3bubR6VWOcwxDIG0Gz9G5rl5b3SL116T0vBt0=
github.com/pingcap/tidb/pkg/parser v0.0.0-20250324122243-d51e00e5bbf0/go.mod h1:+8feuexTKcXHZF/dkDfvCwEyBAmgb4paFc3/WeYV2eE=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_golang v1.20.5 h1:cxppBPuYhUnsO6yo/aoRol4L7q7UFfdm+bR9r+8l63Y=
github.com/prometheus/client_golang v1.20.5/go.mod h1:PIEt8X02hGcP8JWbeHyeZ53Y/jReSnHgO035n//V5WE=
github.com/prometheus/client_model v0.6.1 h1:ZKSh/rekM+n3CeS952MLRAdFwIKqeY8b62p8ais2e9E=
github.com/prometheus/client_model v0.6.1/go.mod h1:OrxVMOVHjw3lKMa8+x6HeMGkHMQyHDk9E3jmP2AmGiY=
github.com/prometheus/common v0.61.0 h1:3gv/GThfX0cV2lpO7gkTUwZru38mxevy90Bj8YFSRQQ=
github.com/prometheus/common v0.61.0/go.mod h1:zr29OCN/2BsJRaFwG8QOBr41D6kkchKbpeNH7pAjb/s=
github.com/prometheus/procfs v0.15.1 h1:YagwOFzUgYfKKHX6Dr+sHT7km/hxC76UB0learggepc=
github.com/prometheus/procfs v0.15.1/go.mod h1:fB45yRUv8NstnjriLhBQLuOUt+WW4BsoGhij/e3PBqk=
github.com/remychantenay/slog-otel v1.3.2 h1:ZBx8qnwfLJ6e18Vba4e9Xp9B7khTmpIwFsU1sAmActw=
github.com/remychantenay/slog-otel v1.3.2/go.mod h1:gKW4tQ8cGOKoA+bi7wtYba/tcJ6Tc9XyQ/EW8gHA/2E=
github.com/rogpeppe/go-internal v1.13.1 h1:KvO1DLK/DRN07sQ1LQKScxyZJuNnedQ5/wKSR38lUII=
github.com/rogpeppe/go-internal v1.13.1/go.mod h1:uMEvuHeurkdAXX61udpOXGD/AzZDWNMNyH2VO9fmH0o=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_golang v1.23.2 h1:Je96obch5RDVy3FDMndoUsjAhG5Edi49h0RJWRi/o0o=
github.com/prometheus/client_golang v1.23.2/go.mod h1:Tb1a6LWHB3/SPIzCoaDXI4I8UHKeFTEQ1YCr+0Gyqmg=
github.com/prometheus/client_model v0.6.2 h1:oBsgwpGs7iVziMvrGhE53c/GrLUsZdHnqNwqPLxwZyk=
github.com/prometheus/client_model v0.6.2/go.mod h1:y3m2F6Gdpfy6Ut/GBsUqTWZqCUvMVzSfMLjcu6wAwpE=
github.com/prometheus/common v0.66.1 h1:h5E0h5/Y8niHc5DlaLlWLArTQI7tMrsfQjHV+d9ZoGs=
github.com/prometheus/common v0.66.1/go.mod h1:gcaUsgf3KfRSwHY4dIMXLPV0K/Wg1oZ8+SbZk/HH/dA=
github.com/prometheus/otlptranslator v1.0.0 h1:s0LJW/iN9dkIH+EnhiD3BlkkP5QVIUVEoIwkU+A6qos=
github.com/prometheus/otlptranslator v1.0.0/go.mod h1:vRYWnXvI6aWGpsdY/mOT/cbeVRBlPWtBNDb7kGR3uKM=
github.com/prometheus/procfs v0.17.0 h1:FuLQ+05u4ZI+SS/w9+BWEM2TXiHKsUQ9TADiRH7DuK0=
github.com/prometheus/procfs v0.17.0/go.mod h1:oPQLaDAMRbA+u8H5Pbfq+dl3VDAvHxMUOVhe0wYB2zw=
github.com/remychantenay/slog-otel v1.3.4 h1:xoM41ayLff2U8zlK5PH31XwD7Lk3W9wKfl4+RcmKom4=
github.com/remychantenay/slog-otel v1.3.4/go.mod h1:ZkazuFMICKGDrO0r1njxKRdjTt/YcXKn6v2+0q/b0+U=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec h1:W09IVJc94icq4NjY3clb7Lk8O1qJ8BdBEF8z0ibU0rE=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec/go.mod h1:qqbHyh8v60DhA7CoWK5oRCqLrMHRGoxYCSS9EjAz6Eo=
github.com/riza-io/grpc-go v0.2.0 h1:2HxQKFVE7VuYstcJ8zqpN84VnAoJ4dCL6YFhJewNcHQ=
github.com/riza-io/grpc-go v0.2.0/go.mod h1:2bDvR9KkKC3KhtlSHfR3dAXjUMT86kg4UfWFyVGWqi8=
github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ=
github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/samber/lo v1.47.0 h1:z7RynLwP5nbyRscyvcD043DWYoOcYRv3mV8lBeqOCLc=
github.com/samber/lo v1.47.0/go.mod h1:RmDH9Ct32Qy3gduHQuKJ3gW1fMHAnE/fAzQuf6He5cU=
github.com/samber/slog-echo v1.14.8 h1:R7RF2LWEepsKtC7i6A6o9peS3Rz5HO8+H8OD+8mPD1I=
github.com/samber/slog-echo v1.14.8/go.mod h1:K21nbusPmai/MYm8PFactmZoFctkMmkeaTdXXyvhY1c=
github.com/samber/slog-multi v1.2.4 h1:k9x3JAWKJFPKffx+oXZ8TasaNuorIW4tG+TXxkt6Ry4=
github.com/samber/slog-multi v1.2.4/go.mod h1:ACuZ5B6heK57TfMVkVknN2UZHoFfjCwRxR0Q2OXKHlo=
github.com/segmentio/asm v1.2.0 h1:9BQrFxC+YOHJlTlHGkTrFWf59nbL3XnCoFLTwDCI7ys=
github.com/segmentio/asm v1.2.0/go.mod h1:BqMnlJP91P8d+4ibuonYZw9mfnzI9HfxselHZr5aAcs=
github.com/samber/lo v1.51.0 h1:kysRYLbHy/MB7kQZf5DSN50JHmMsNEdeY24VzJFu7wI=
github.com/samber/lo v1.51.0/go.mod h1:4+MXEGsJzbKGaUEQFKBq2xtfuznW9oz/WrgyzMzRoM0=
github.com/samber/slog-common v0.19.0 h1:fNcZb8B2uOLooeYwFpAlKjkQTUafdjfqKcwcC89G9YI=
github.com/samber/slog-common v0.19.0/go.mod h1:dTz+YOU76aH007YUU0DffsXNsGFQRQllPQh9XyNoA3M=
github.com/samber/slog-echo v1.17.2 h1:/d1D2ZiJsaqaeyz3Yk9olCeFFpi4EIJZtnoMp5zt9fs=
github.com/samber/slog-echo v1.17.2/go.mod h1:4diugqPTk6iQdL7gZFJIyf6zGMLVMaGnCmNm+DBSMRU=
github.com/samber/slog-multi v1.5.0 h1:UDRJdsdb0R5vFQFy3l26rpX3rL3FEPJTJ2yKVjoiT1I=
github.com/samber/slog-multi v1.5.0/go.mod h1:im2Zi3mH/ivSY5XDj6LFcKToRIWPw1OcjSVSdXt+2d0=
github.com/segmentio/asm v1.2.1 h1:DTNbBqs57ioxAD4PrArqftgypG4/qNpXoJx8TVXxPR0=
github.com/segmentio/asm v1.2.1/go.mod h1:BqMnlJP91P8d+4ibuonYZw9mfnzI9HfxselHZr5aAcs=
github.com/shopspring/decimal v1.2.0/go.mod h1:DKyhrW/HYNuLGql+MJL6WCR6knT2jwCFRcu2hWCYk4o=
github.com/shopspring/decimal v1.4.0 h1:bxl37RwXBklmTi0C79JfXCEBD1cqqHt0bbgBAGFp81k=
github.com/shopspring/decimal v1.4.0/go.mod h1:gawqmDU56v4yIKSwfBSFip1HdCCXN8/+DMd9qYNcwME=
github.com/spf13/cobra v1.8.1 h1:e5/vxKd/rZsfSJMUX1agtjeTDf+qv1/JdBF8gg5k9ZM=
github.com/spf13/cobra v1.8.1/go.mod h1:wHxEcudfqmLYa8iTfL+OuZPbBZkmvliBWKIezN3kD9Y=
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/spf13/cast v1.3.1/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
github.com/spf13/cast v1.4.1 h1:s0hze+J0196ZfEMTs80N7UlFt0BDuQ7Q+JDnHiMWKdA=
github.com/spf13/cast v1.4.1/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
github.com/spf13/cobra v1.10.1 h1:lJeBwCfmrnXthfAupyUTzJ/J4Nc1RsHC/mSRU2dll/s=
github.com/spf13/cobra v1.10.1/go.mod h1:7SmJGaTHFVBY0jW4NXGluQoLvhqFQM+6XSKD+P4XaB0=
|
||||
github.com/spf13/pflag v1.0.9/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
|
||||
github.com/spf13/pflag v1.0.10 h1:4EBh2KAYBwaONj6b2Ye1GiHfwjqyROoF4RwYO+vPwFk=
|
||||
github.com/spf13/pflag v1.0.10/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
|
||||
github.com/sqlc-dev/sqlc v1.29.0 h1:HQctoD7y/i29Bao53qXO7CZ/BV9NcvpGpsJWvz9nKWs=
|
||||
github.com/sqlc-dev/sqlc v1.29.0/go.mod h1:BavmYw11px5AdPOjAVHmb9fctP5A8GTziC38wBF9tp0=
|
||||
github.com/stoewer/go-strcase v1.2.0 h1:Z2iHWqGXH00XYgqDmNgQbIBxf3wrNq0F3feEy0ainaU=
|
||||
github.com/stoewer/go-strcase v1.2.0/go.mod h1:IBiWB2sKIp3wVVQ3Y035++gc+knqhUQag1KpM8ahLw8=
|
||||
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
|
||||
github.com/stretchr/objx v0.5.2 h1:xuMeJ0Sdp5ZMRXx/aWO6RZxdr3beISkG5/G/aIRr3pY=
|
||||
github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA=
|
||||
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
|
||||
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
|
||||
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
|
||||
github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA=
|
||||
github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
|
||||
github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
|
||||
github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
|
||||
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
|
||||
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
|
||||
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
|
||||
github.com/tetratelabs/wazero v1.9.0 h1:IcZ56OuxrtaEz8UYNRHBrUa9bYeX9oVY93KspZZBf/I=
|
||||
github.com/tetratelabs/wazero v1.9.0/go.mod h1:TSbcXCfFP0L2FGkRPxHphadXPjo1T6W+CseNNY7EkjM=
|
||||
github.com/tidwall/pretty v1.0.0/go.mod h1:XNkn88O1ChpSDQmQeStsy+sBenx6DDtFZJxhVysOjyk=
|
||||
github.com/valyala/bytebufferpool v1.0.0 h1:GqA5TC/0021Y/b9FG4Oi9Mr3q7XYx6KllzawFIhcdPw=
|
||||
github.com/valyala/bytebufferpool v1.0.0/go.mod h1:6bBcMArwyJ5K/AmCkWv1jt77kVWyCJ6HpOuEn7z0Csc=
|
||||
github.com/valyala/fasttemplate v1.2.2 h1:lxLXG0uE3Qnshl9QyaK6XJxMXlQZELvChBOCmQD0Loo=
|
||||
github.com/valyala/fasttemplate v1.2.2/go.mod h1:KHLXt3tVN2HBp8eijSv/kGJopbvo7S+qRAEEKiv+SiQ=
|
||||
github.com/wasilibs/go-pgquery v0.0.0-20250409022910-10ac41983c07 h1:mJdDDPblDfPe7z7go8Dvv1AJQDI3eQ/5xith3q2mFlo=
|
||||
github.com/wasilibs/go-pgquery v0.0.0-20250409022910-10ac41983c07/go.mod h1:Ak17IJ037caFp4jpCw/iQQ7/W74Sqpb1YuKJU6HTKfM=
|
||||
github.com/wasilibs/wazero-helpers v0.0.0-20240620070341-3dff1577cd52 h1:OvLBa8SqJnZ6P+mjlzc2K7PM22rRUPE1x32G9DTPrC4=
|
||||
github.com/wasilibs/wazero-helpers v0.0.0-20240620070341-3dff1577cd52/go.mod h1:jMeV4Vpbi8osrE/pKUxRZkVaA0EX7NZN0A9/oRzgpgY=
|
||||
github.com/xdg-go/pbkdf2 v1.0.0/go.mod h1:jrpuAogTd400dnrH08LKmI/xc1MbPOebTwRqcT5RDeI=
|
||||
github.com/xdg-go/scram v1.1.1/go.mod h1:RaEWvsqvNKKvBPvcKeFjrG2cJqOkHTiyTpzz23ni57g=
|
||||
github.com/xdg-go/stringprep v1.0.3/go.mod h1:W3f5j4i+9rC0kuIEJL0ky1VpHXQU3ocBgklLGvcBnW8=
|
||||
@@ -138,128 +230,194 @@ github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9dec
go.mongodb.org/mongo-driver v1.11.4/go.mod h1:PTSz5yu21bkT/wXpkS7WR5f0ddqw5quethTUn9WM+2g=
go.ntppool.org/api v0.3.4 h1:KeRyFhIRkjJwZif7hkpqEDEBmukyYGiOi2Fd6j3UzQ0=
go.ntppool.org/api v0.3.4/go.mod h1:LFLAwnrc/JyjzKnjgf8tCOJhps6oFIjuledS3PCx7xc=
go.ntppool.org/common v0.3.0 h1:IuSmyjEhI1F3tr5kc5MqlR4cy5y0o5f3EKvC7Koc6rs=
go.ntppool.org/common v0.3.0/go.mod h1:25pUt3YUusF1MY0nsljjskcMMeTvKZszVvNsubvWhSM=
go.opentelemetry.io/auto/sdk v1.1.0 h1:cH53jehLUN6UFLY71z+NDOiNJqDdPRaXzTel0sJySYA=
go.opentelemetry.io/auto/sdk v1.1.0/go.mod h1:3wSPjt5PWp2RhlCcmmOial7AvC4DQqZb7a7wCow3W8A=
go.opentelemetry.io/contrib/bridges/otelslog v0.8.0 h1:G3sKsNueSdxuACINFxKrQeimAIst0A5ytA2YJH+3e1c=
go.opentelemetry.io/contrib/bridges/otelslog v0.8.0/go.mod h1:ptJm3wizguEPurZgarDAwOeX7O0iMR7l+QvIVenhYdE=
go.opentelemetry.io/contrib/bridges/prometheus v0.58.0 h1:gQFwWiqm4JUvOjpdmyU0di+2pVQ8QNpk1Ak/54Y6NcY=
go.opentelemetry.io/contrib/bridges/prometheus v0.58.0/go.mod h1:CNyFi9PuvHtEJNmMFHaXZMuA4XmgRXIqpFcHdqzLvVU=
go.opentelemetry.io/contrib/exporters/autoexport v0.58.0 h1:qVsDVgZd/bC6ZKDOHSjILpm0T/BWvASC9cQU3GYga78=
go.opentelemetry.io/contrib/exporters/autoexport v0.58.0/go.mod h1:bAv7mY+5qTsFPFaRpr75vDOocX09I36QH4Rg0slEG/U=
go.opentelemetry.io/contrib/instrumentation/github.com/labstack/echo/otelecho v0.58.0 h1:DBk8Zh+Yn3WtWCdGSx1pbEV9/naLtjG16c1zwQA2MBI=
go.opentelemetry.io/contrib/instrumentation/github.com/labstack/echo/otelecho v0.58.0/go.mod h1:DFx32LPclW1MNdSKIMrjjetsk0tJtYhAvuGjDIG2SKE=
go.opentelemetry.io/contrib/instrumentation/net/http/httptrace/otelhttptrace v0.58.0 h1:xwH3QJv6zL4u+gkPUu59NeT1Gyw9nScWT8FQpKLUJJI=
go.opentelemetry.io/contrib/instrumentation/net/http/httptrace/otelhttptrace v0.58.0/go.mod h1:uosvgpqTcTXtcPQORTbEkZNDQTCDOgTz1fe6aLSyqrQ=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.58.0 h1:yd02MEjBdJkG3uabWP9apV+OuWRIXGDuJEUJbOHmCFU=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.58.0/go.mod h1:umTcuxiv1n/s/S6/c2AT/g2CQ7u5C59sHDNmfSwgz7Q=
go.opentelemetry.io/contrib/propagators/b3 v1.33.0 h1:ig/IsHyyoQ1F1d6FUDIIW5oYpsuTVtN16AyGOgdjAHQ=
go.opentelemetry.io/contrib/propagators/b3 v1.33.0/go.mod h1:EsVYoNy+Eol5znb6wwN3XQTILyjl040gUpEnUSNZfsk=
go.opentelemetry.io/otel v1.33.0 h1:/FerN9bax5LoK51X/sI0SVYrjSE0/yUL7DpxW4K3FWw=
go.opentelemetry.io/otel v1.33.0/go.mod h1:SUUkR6csvUQl+yjReHu5uM3EtVV7MBm5FHKRlNx4I8I=
go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploggrpc v0.9.0 h1:gA2gh+3B3NDvRFP30Ufh7CC3TtJRbUSf2TTD0LbCagw=
go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploggrpc v0.9.0/go.mod h1:smRTR+02OtrVGjvWE1sQxhuazozKc/BXvvqqnmOxy+s=
go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploghttp v0.9.0 h1:Za0Z/j9Gf3Z9DKQ1choU9xI2noCxlkcyFFP2Ob3miEQ=
go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploghttp v0.9.0/go.mod h1:jMRB8N75meTNjDFQyJBA/2Z9en21CsxwMctn08NHY6c=
go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v1.33.0 h1:7F29RDmnlqk6B5d+sUqemt8TBfDqxryYW5gX6L74RFA=
go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v1.33.0/go.mod h1:ZiGDq7xwDMKmWDrN1XsXAj0iC7hns+2DhxBFSncNHSE=
go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.33.0 h1:bSjzTvsXZbLSWU8hnZXcKmEVaJjjnandxD0PxThhVU8=
go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.33.0/go.mod h1:aj2rilHL8WjXY1I5V+ra+z8FELtk681deydgYT8ikxU=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.33.0 h1:Vh5HayB/0HHfOQA7Ctx69E/Y/DcQSMPpKANYVMQ7fBA=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.33.0/go.mod h1:cpgtDBaqD/6ok/UG0jT15/uKjAY8mRA53diogHBg3UI=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.33.0 h1:5pojmb1U1AogINhN3SurB+zm/nIcusopeBNp42f45QM=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.33.0/go.mod h1:57gTHJSE5S1tqg+EKsLPlTWhpHMsWlVmer+LA926XiA=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.33.0 h1:wpMfgF8E1rkrT1Z6meFh1NDtownE9Ii3n3X2GJYjsaU=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.33.0/go.mod h1:wAy0T/dUbs468uOlkT31xjvqQgEVXv58BRFWEgn5v/0=
go.opentelemetry.io/otel/exporters/prometheus v0.55.0 h1:sSPw658Lk2NWAv74lkD3B/RSDb+xRFx46GjkrL3VUZo=
go.opentelemetry.io/otel/exporters/prometheus v0.55.0/go.mod h1:nC00vyCmQixoeaxF6KNyP42II/RHa9UdruK02qBmHvI=
go.opentelemetry.io/otel/exporters/stdout/stdoutlog v0.9.0 h1:iI15wfQb5ZtAVTdS5WROxpYmw6Kjez3hT9SuzXhrgGQ=
go.opentelemetry.io/otel/exporters/stdout/stdoutlog v0.9.0/go.mod h1:yepwlNzVVxHWR5ugHIrll+euPQPq4pvysHTDr/daV9o=
go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.33.0 h1:FiOTYABOX4tdzi8A0+mtzcsTmi6WBOxk66u0f1Mj9Gs=
go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.33.0/go.mod h1:xyo5rS8DgzV0Jtsht+LCEMwyiDbjpsxBpWETwFRF0/4=
go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.33.0 h1:W5AWUn/IVe8RFb5pZx1Uh9Laf/4+Qmm4kJL5zPuvR+0=
go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.33.0/go.mod h1:mzKxJywMNBdEX8TSJais3NnsVZUaJ+bAy6UxPTng2vk=
go.opentelemetry.io/otel/log v0.9.0 h1:0OiWRefqJ2QszpCiqwGO0u9ajMPe17q6IscQvvp3czY=
go.opentelemetry.io/otel/log v0.9.0/go.mod h1:WPP4OJ+RBkQ416jrFCQFuFKtXKD6mOoYCQm6ykK8VaU=
go.opentelemetry.io/otel/metric v1.33.0 h1:r+JOocAyeRVXD8lZpjdQjzMadVZp2M4WmQ+5WtEnklQ=
go.opentelemetry.io/otel/metric v1.33.0/go.mod h1:L9+Fyctbp6HFTddIxClbQkjtubW6O9QS3Ann/M82u6M=
go.opentelemetry.io/otel/sdk v1.33.0 h1:iax7M131HuAm9QkZotNHEfstof92xM+N8sr3uHXc2IM=
go.opentelemetry.io/otel/sdk v1.33.0/go.mod h1:A1Q5oi7/9XaMlIWzPSxLRWOI8nG3FnzHJNbiENQuihM=
go.opentelemetry.io/otel/sdk/log v0.9.0 h1:YPCi6W1Eg0vwT/XJWsv2/PaQ2nyAJYuF7UUjQSBe3bc=
go.opentelemetry.io/otel/sdk/log v0.9.0/go.mod h1:y0HdrOz7OkXQBuc2yjiqnEHc+CRKeVhRE3hx4RwTmV4=
go.opentelemetry.io/otel/sdk/metric v1.33.0 h1:Gs5VK9/WUJhNXZgn8MR6ITatvAmKeIuCtNbsP3JkNqU=
go.opentelemetry.io/otel/sdk/metric v1.33.0/go.mod h1:dL5ykHZmm1B1nVRk9dDjChwDmt81MjVp3gLkQRwKf/Q=
go.opentelemetry.io/otel/trace v1.33.0 h1:cCJuF7LRjUFso9LPnEAHJDB2pqzp+hbO8eu1qqW2d/s=
go.opentelemetry.io/otel/trace v1.33.0/go.mod h1:uIcdVUZMpTAmz0tI1z04GoVSezK37CbGV4fr1f2nBck=
go.opentelemetry.io/proto/otlp v1.4.0 h1:TA9WRvW6zMwP+Ssb6fLoUIuirti1gGbP28GcKG1jgeg=
go.opentelemetry.io/proto/otlp v1.4.0/go.mod h1:PPBWZIP98o2ElSqI35IHfu7hIhSwvc5N38Jw8pXuGFY=
go.ntppool.org/common v0.6.2 h1:TvxrpaBQpSYuvuRT24M/I1ZqFjh4woHJTqayCOxe+o8=
go.ntppool.org/common v0.6.2/go.mod h1:Dkc2P5+aaCseC/cs0uD9elh4yTllqvyeZ1NNT/G/414=
go.ntppool.org/common v0.6.3-0.20251129195245-283d3936f6d0 h1:Vbs/RgrwfdA9ZzGAkhFRaU7ZSEl8D28pk95iYhjzvyA=
go.ntppool.org/common v0.6.3-0.20251129195245-283d3936f6d0/go.mod h1:Dkc2P5+aaCseC/cs0uD9elh4yTllqvyeZ1NNT/G/414=
go.opentelemetry.io/auto/sdk v1.2.1 h1:jXsnJ4Lmnqd11kwkBV2LgLoFMZKizbCi5fNZ/ipaZ64=
go.opentelemetry.io/auto/sdk v1.2.1/go.mod h1:KRTj+aOaElaLi+wW1kO/DZRXwkF4C5xPbEe3ZiIhN7Y=
go.opentelemetry.io/contrib/bridges/otelslog v0.13.0 h1:bwnLpizECbPr1RrQ27waeY2SPIPeccCx/xLuoYADZ9s=
go.opentelemetry.io/contrib/bridges/otelslog v0.13.0/go.mod h1:3nWlOiiqA9UtUnrcNk82mYasNxD8ehOspL0gOfEo6Y4=
go.opentelemetry.io/contrib/bridges/prometheus v0.63.0 h1:/Rij/t18Y7rUayNg7Id6rPrEnHgorxYabm2E6wUdPP4=
go.opentelemetry.io/contrib/bridges/prometheus v0.63.0/go.mod h1:AdyDPn6pkbkt2w01n3BubRVk7xAsCRq1Yg1mpfyA/0E=
go.opentelemetry.io/contrib/exporters/autoexport v0.63.0 h1:NLnZybb9KkfMXPwZhd5diBYJoVxiO9Qa06dacEA7ySY=
go.opentelemetry.io/contrib/exporters/autoexport v0.63.0/go.mod h1:OvRg7gm5WRSCtxzGSsrFHbDLToYlStHNZQ+iPNIyD6g=
go.opentelemetry.io/contrib/instrumentation/github.com/labstack/echo/otelecho v0.63.0 h1:6YeICKmGrvgJ5th4+OMNpcuoB6q/Xs8gt0YCO7MUv1k=
go.opentelemetry.io/contrib/instrumentation/github.com/labstack/echo/otelecho v0.63.0/go.mod h1:ZEA7j2B35siNV0T00aapacNzjz4tvOlNoHp0ncCfwNQ=
go.opentelemetry.io/contrib/instrumentation/net/http/httptrace/otelhttptrace v0.63.0 h1:2pn7OzMewmYRiNtv1doZnLo3gONcnMHlFnmOR8Vgt+8=
go.opentelemetry.io/contrib/instrumentation/net/http/httptrace/otelhttptrace v0.63.0/go.mod h1:rjbQTDEPQymPE0YnRQp9/NuPwwtL0sesz/fnqRW/v84=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.63.0 h1:RbKq8BG0FI8OiXhBfcRtqqHcZcka+gU3cskNuf05R18=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.63.0/go.mod h1:h06DGIukJOevXaj/xrNjhi/2098RZzcLTbc0jDAUbsg=
go.opentelemetry.io/contrib/propagators/b3 v1.38.0 h1:uHsCCOSKl0kLrV2dLkFK+8Ywk9iKa/fptkytc6aFFEo=
go.opentelemetry.io/contrib/propagators/b3 v1.38.0/go.mod h1:wMRSZJZcY8ya9mApLLhwIMjqmApy2o/Ml+62lhvxyHU=
go.opentelemetry.io/otel v1.38.0 h1:RkfdswUDRimDg0m2Az18RKOsnI8UDzppJAtj01/Ymk8=
go.opentelemetry.io/otel v1.38.0/go.mod h1:zcmtmQ1+YmQM9wrNsTGV/q/uyusom3P8RxwExxkZhjM=
go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploggrpc v0.14.0 h1:OMqPldHt79PqWKOMYIAQs3CxAi7RLgPxwfFSwr4ZxtM=
go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploggrpc v0.14.0/go.mod h1:1biG4qiqTxKiUCtoWDPpL3fB3KxVwCiGw81j3nKMuHE=
go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploghttp v0.14.0 h1:QQqYw3lkrzwVsoEX0w//EhH/TCnpRdEenKBOOEIMjWc=
go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploghttp v0.14.0/go.mod h1:gSVQcr17jk2ig4jqJ2DX30IdWH251JcNAecvrqTxH1s=
go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v1.38.0 h1:vl9obrcoWVKp/lwl8tRE33853I8Xru9HFbw/skNeLs8=
go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v1.38.0/go.mod h1:GAXRxmLJcVM3u22IjTg74zWBrRCKq8BnOqUVLodpcpw=
go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.38.0 h1:Oe2z/BCg5q7k4iXC3cqJxKYg0ieRiOqF0cecFYdPTwk=
go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.38.0/go.mod h1:ZQM5lAJpOsKnYagGg/zV2krVqTtaVdYdDkhMoX6Oalg=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.38.0 h1:GqRJVj7UmLjCVyVJ3ZFLdPRmhDUp2zFmQe3RHIOsw24=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.38.0/go.mod h1:ri3aaHSmCTVYu2AWv44YMauwAQc0aqI9gHKIcSbI1pU=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.38.0 h1:lwI4Dc5leUqENgGuQImwLo4WnuXFPetmPpkLi2IrX54=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.38.0/go.mod h1:Kz/oCE7z5wuyhPxsXDuaPteSWqjSBD5YaSdbxZYGbGk=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.38.0 h1:aTL7F04bJHUlztTsNGJ2l+6he8c+y/b//eR0jjjemT4=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.38.0/go.mod h1:kldtb7jDTeol0l3ewcmd8SDvx3EmIE7lyvqbasU3QC4=
go.opentelemetry.io/otel/exporters/prometheus v0.60.0 h1:cGtQxGvZbnrWdC2GyjZi0PDKVSLWP/Jocix3QWfXtbo=
go.opentelemetry.io/otel/exporters/prometheus v0.60.0/go.mod h1:hkd1EekxNo69PTV4OWFGZcKQiIqg0RfuWExcPKFvepk=
go.opentelemetry.io/otel/exporters/stdout/stdoutlog v0.14.0 h1:B/g+qde6Mkzxbry5ZZag0l7QrQBCtVm7lVjaLgmpje8=
go.opentelemetry.io/otel/exporters/stdout/stdoutlog v0.14.0/go.mod h1:mOJK8eMmgW6ocDJn6Bn11CcZ05gi3P8GylBXEkZtbgA=
go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.38.0 h1:wm/Q0GAAykXv83wzcKzGGqAnnfLFyFe7RslekZuv+VI=
go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.38.0/go.mod h1:ra3Pa40+oKjvYh+ZD3EdxFZZB0xdMfuileHAm4nNN7w=
go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.38.0 h1:kJxSDN4SgWWTjG/hPp3O7LCGLcHXFlvS2/FFOrwL+SE=
go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.38.0/go.mod h1:mgIOzS7iZeKJdeB8/NYHrJ48fdGc71Llo5bJ1J4DWUE=
go.opentelemetry.io/otel/log v0.14.0 h1:2rzJ+pOAZ8qmZ3DDHg73NEKzSZkhkGIua9gXtxNGgrM=
go.opentelemetry.io/otel/log v0.14.0/go.mod h1:5jRG92fEAgx0SU/vFPxmJvhIuDU9E1SUnEQrMlJpOno=
go.opentelemetry.io/otel/metric v1.38.0 h1:Kl6lzIYGAh5M159u9NgiRkmoMKjvbsKtYRwgfrA6WpA=
go.opentelemetry.io/otel/metric v1.38.0/go.mod h1:kB5n/QoRM8YwmUahxvI3bO34eVtQf2i4utNVLr9gEmI=
go.opentelemetry.io/otel/sdk v1.38.0 h1:l48sr5YbNf2hpCUj/FoGhW9yDkl+Ma+LrVl8qaM5b+E=
go.opentelemetry.io/otel/sdk v1.38.0/go.mod h1:ghmNdGlVemJI3+ZB5iDEuk4bWA3GkTpW+DOoZMYBVVg=
go.opentelemetry.io/otel/sdk/log v0.14.0 h1:JU/U3O7N6fsAXj0+CXz21Czg532dW2V4gG1HE/e8Zrg=
go.opentelemetry.io/otel/sdk/log v0.14.0/go.mod h1:imQvII+0ZylXfKU7/wtOND8Hn4OpT3YUoIgqJVksUkM=
go.opentelemetry.io/otel/sdk/log/logtest v0.14.0 h1:Ijbtz+JKXl8T2MngiwqBlPaHqc4YCaP/i13Qrow6gAM=
go.opentelemetry.io/otel/sdk/log/logtest v0.14.0/go.mod h1:dCU8aEL6q+L9cYTqcVOk8rM9Tp8WdnHOPLiBgp0SGOA=
go.opentelemetry.io/otel/sdk/metric v1.38.0 h1:aSH66iL0aZqo//xXzQLYozmWrXxyFkBJ6qT5wthqPoM=
go.opentelemetry.io/otel/sdk/metric v1.38.0/go.mod h1:dg9PBnW9XdQ1Hd6ZnRz689CbtrUp0wMMs9iPcgT9EZA=
go.opentelemetry.io/otel/trace v1.38.0 h1:Fxk5bKrDZJUH+AMyyIXGcFAPah0oRcT+LuNtJrmcNLE=
go.opentelemetry.io/otel/trace v1.38.0/go.mod h1:j1P9ivuFsTceSWe1oY+EeW3sc+Pp42sO++GHkg4wwhs=
go.opentelemetry.io/proto/otlp v1.8.0 h1:fRAZQDcAFHySxpJ1TwlA1cJ4tvcrw7nXl9xWWC8N5CE=
go.opentelemetry.io/proto/otlp v1.8.0/go.mod h1:tIeYOeNBU4cvmPqpaji1P+KbB4Oloai8wN4rWzRrFF0=
go.uber.org/atomic v1.6.0/go.mod h1:sABNBOSYdrvTF6hTgEIbc7YasKWGhgEQZyfxyTvoXHQ=
go.uber.org/atomic v1.7.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc=
go.uber.org/atomic v1.9.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc=
go.uber.org/atomic v1.11.0 h1:ZvwS0R+56ePWxUNi+Atn9dWONBPp/AUETXlHW0DxSjE=
go.uber.org/atomic v1.11.0/go.mod h1:LUxbIzbOniOlMKjJjyPfpl4v+PKK2cNJn91OQbhoJI0=
go.uber.org/goleak v1.1.10/go.mod h1:8a7PlsEVH3e/a/GLqe5IIrQx6GzcnRmZEufDUTk4A7A=
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
go.uber.org/multierr v1.6.0/go.mod h1:cdWPpRnG4AhwMwsgIHip0KRBQjJy5kYEpYjJxpXp9iU=
go.uber.org/multierr v1.7.0/go.mod h1:7EAYxJLBy9rStEaz58O2t4Uvip6FSURkq8/ppBp95ak=
go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0=
go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y=
go.uber.org/zap v1.19.0/go.mod h1:xg/QME4nWcxGxrpdeYfq7UvYrLh66cuVKdrbD1XF/NI=
go.uber.org/zap v1.27.0 h1:aJMhYGrd5QSmlpLMr2MftRKl7t8J8PTZPA732ud/XR8=
go.uber.org/zap v1.27.0/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E=
go.yaml.in/yaml/v2 v2.4.3 h1:6gvOSjQoTB3vt1l+CU+tSyi/HOjfOjRLJ4YwYZGwRO0=
go.yaml.in/yaml/v2 v2.4.3/go.mod h1:zSxWcmIDjOzPXpjlTTbAsKokqkDNAVtZO0WOMiT90s8=
go.yaml.in/yaml/v3 v3.0.4 h1:tfq32ie2Jv2UxXFdLJdh3jXuOzWiL1fo0bu/FbuKpbc=
go.yaml.in/yaml/v3 v3.0.4/go.mod h1:DhzuOOF2ATzADvBadXxruRBLzYTpT36CKvDb3+aBEFg=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200414173820-0848c9571904/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20220622213112-05595931fe9d/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/crypto v0.31.0 h1:ihbySMvVjLAeSH1IbfcRTkD/iNscyz8rGzjF/E5hV6U=
golang.org/x/crypto v0.31.0/go.mod h1:kDsLvtWBEx7MV9tJOj9bnXsPbxwJQ6csT/x4KIN4Ssk=
golang.org/x/crypto v0.42.0 h1:chiH31gIWm57EkTXpwnqf8qeuMUi0yekh6mT2AvFlqI=
golang.org/x/crypto v0.42.0/go.mod h1:4+rDnOTJhQCx2q7/j6rAN5XDw8kPjeaXEUR2eL94ix8=
golang.org/x/exp v0.0.0-20250305212735-054e65f0b394 h1:nDVHiLt8aIbd/VzvPWN6kSOPE7+F/fNFDSXLVYkE/Iw=
golang.org/x/exp v0.0.0-20250305212735-054e65f0b394/go.mod h1:sIifuuw/Yco/y6yb6+bDNfyeQ/MdPUy/hKEMYQV17cM=
golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.22.0 h1:D4nJWe9zXqHOmWqj4VMOJhvzj7bEZg4wEYa759z1pH4=
golang.org/x/mod v0.22.0/go.mod h1:6SkKJ3Xj0I0BrPOZoBy3bdMptDDU9oJrpohJ3eWZ1fY=
golang.org/x/mod v0.28.0 h1:gQBtGhjxykdjY9YhZpSlZIsbnaE2+PgjfLWUQTnoZ1U=
golang.org/x/mod v0.28.0/go.mod h1:yfB/L0NOf/kmEbXjzCPOx1iK1fRutOydrCMsqRhEBxI=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.33.0 h1:74SYHlV8BIgHIFC/LrYkOGIwL19eTYXQ5wc6TBuO36I=
golang.org/x/net v0.33.0/go.mod h1:HXLR5J+9DxmrqMwG9qjGCxZ+zKXxBru04zlTvWlWuN4=
golang.org/x/net v0.44.0 h1:evd8IRDyfNBMBTTY5XRF1vaZlD+EmWx6x8PkhR04H/I=
golang.org/x/net v0.44.0/go.mod h1:ECOoLqd5U3Lhyeyo/QDCEVQ4sNgYsqvCZ722XogGieY=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.10.0 h1:3NQrjDixjgGwUOCaF8w2+VYHv0Ve/vGYSbdkTa98gmQ=
golang.org/x/sync v0.10.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.17.0 h1:l60nONMj9l5drqw6jlhIELNv9I0A4OFgRsG9k2oT9Ug=
golang.org/x/sync v0.17.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.28.0 h1:Fksou7UEQUWlKvIdsqzJmUmCX3cZuD2+P3XyyzwMhlA=
golang.org/x/sys v0.28.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.36.0 h1:KVRy2GtZBrk1cBYA7MKu5bEZFxQk4NIDV6RLVcC8o0k=
golang.org/x/sys v0.36.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.21.0 h1:zyQAAkrwaneQ066sspRyJaG9VNi/YJ1NfzcGB3hZ/qo=
golang.org/x/text v0.21.0/go.mod h1:4IBbMaMmOPCJ8SecivzSH54+73PCFmPWxNTLm+vZkEQ=
golang.org/x/time v0.8.0 h1:9i3RxcPv3PZnitoVGMPDKZSq1xW1gK1Xy3ArNOGZfEg=
golang.org/x/time v0.8.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
golang.org/x/text v0.29.0 h1:1neNs90w9YzJ9BocxfsQNHKuAT4pkghyXc4nhZ6sJvk=
golang.org/x/text v0.29.0/go.mod h1:7MhJOA9CD2qZyOKYazxdYMF85OwPdEr9jTtBpO7ydH4=
golang.org/x/time v0.13.0 h1:eUlYslOIt32DgYD6utsuUeHs4d7AsEYLuIAdg7FlYgI=
golang.org/x/time v0.13.0/go.mod h1:eL/Oa2bBBK0TkX57Fyni+NgnyQQN4LitPmob2Hjnqw4=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20191029041327-9cc4af7d6b2c/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191108193012-7d206e10da11/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.37.0 h1:DVSRzp7FwePZW356yEAChSdNcQo6Nsp+fex1SUW09lE=
golang.org/x/tools v0.37.0/go.mod h1:MBN5QPQtLMHVdvsbtarmTNukZDdgwdwlO5qGacAzF0w=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/genproto/googleapis/api v0.0.0-20241223144023-3abc09e42ca8 h1:st3LcW/BPi75W4q1jJTEor/QWwbNlPlDG0JTn6XhZu0=
google.golang.org/genproto/googleapis/api v0.0.0-20241223144023-3abc09e42ca8/go.mod h1:klhJGKFyG8Tn50enBn7gizg4nXGXJ+jqEREdCWaPcV4=
google.golang.org/genproto/googleapis/rpc v0.0.0-20241223144023-3abc09e42ca8 h1:TqExAhdPaB60Ux47Cn0oLV07rGnxZzIsaRhQaqS666A=
google.golang.org/genproto/googleapis/rpc v0.0.0-20241223144023-3abc09e42ca8/go.mod h1:lcTa1sDdWEIHMWlITnIczmw5w60CF9ffkb8Z+DVmmjA=
google.golang.org/grpc v1.69.2 h1:U3S9QEtbXC0bYNvRtcoklF3xGtLViumSYxWykJS+7AU=
google.golang.org/grpc v1.69.2/go.mod h1:vyjdE6jLBI76dgpDojsFGNaHlxdjXN9ghpnd2o7JGZ4=
gonum.org/v1/gonum v0.16.0 h1:5+ul4Swaf3ESvrOnidPp4GZbzf0mxVQpDCYUQE7OJfk=
gonum.org/v1/gonum v0.16.0/go.mod h1:fef3am4MQ93R2HHpKnLk4/Tbh/s0+wqD5nfa6Pnwy4E=
google.golang.org/genproto/googleapis/api v0.0.0-20250922171735-9219d122eba9 h1:jm6v6kMRpTYKxBRrDkYAitNJegUeO1Mf3Kt80obv0gg=
google.golang.org/genproto/googleapis/api v0.0.0-20250922171735-9219d122eba9/go.mod h1:LmwNphe5Afor5V3R5BppOULHOnt2mCIf+NxMd4XiygE=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250922171735-9219d122eba9 h1:V1jCN2HBa8sySkR5vLcCSqJSTMv093Rw9EJefhQGP7M=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250922171735-9219d122eba9/go.mod h1:HSkG/KdJWusxU1F6CNrwNDjBMgisKxGnc5dAZfT0mjQ=
google.golang.org/grpc v1.75.1 h1:/ODCNEuf9VghjgO3rqLcfg8fiOP0nSluljWFlDxELLI=
google.golang.org/grpc v1.75.1/go.mod h1:JtPAzKiq4v1xcAB2hydNlWI2RnF85XXcV0mhKXr2ecQ=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.27.1/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.36.1 h1:yBPeRvTftaleIgM3PZ/WBIZ7XM/eEYAaEyCwvyjq/gk=
google.golang.org/protobuf v1.36.1/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE=
google.golang.org/protobuf v1.31.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
google.golang.org/protobuf v1.36.9 h1:w2gp2mA27hUeUzj9Ex9FBjsBm40zfaDtEWow293U7Iw=
google.golang.org/protobuf v1.36.9/go.mod h1:fuxRtAxBytpl4zzqUh6/eyUujkJdNiuEkXntxiD/uRU=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
gopkg.in/natefinch/lumberjack.v2 v2.0.0/go.mod h1:l0ndWWf7gzL7RNwBG7wST/UCcT4T24xpD6X8LsfU/+k=
gopkg.in/natefinch/lumberjack.v2 v2.2.1 h1:bBRl1b0OH9s/DuPhuXpNl+VtCaJXFZ5/uEFST95x9zc=
gopkg.in/natefinch/lumberjack.v2 v2.2.1/go.mod h1:YD8tP3GAjkrDg1eZH7EGmyESg/lsYskCTPBJVb9jqSc=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
modernc.org/cc/v4 v4.25.2 h1:T2oH7sZdGvTaie0BRNFbIYsabzCxUQg8nLqCdQ2i0ic=
modernc.org/cc/v4 v4.25.2/go.mod h1:uVtb5OGqUKpoLWhqwNQo/8LwvoiEBLvZXIQ/SmO6mL0=
modernc.org/ccgo/v4 v4.25.1 h1:TFSzPrAGmDsdnhT9X2UrcPMI3N/mJ9/X9ykKXwLhDsU=
modernc.org/ccgo/v4 v4.25.1/go.mod h1:njjuAYiPflywOOrm3B7kCB444ONP5pAVr8PIEoE0uDw=
modernc.org/fileutil v1.3.0 h1:gQ5SIzK3H9kdfai/5x41oQiKValumqNTDXMvKo62HvE=
modernc.org/fileutil v1.3.0/go.mod h1:XatxS8fZi3pS8/hKG2GH/ArUogfxjpEKs3Ku3aK4JyQ=
modernc.org/gc/v2 v2.6.5 h1:nyqdV8q46KvTpZlsw66kWqwXRHdjIlJOhG6kxiV/9xI=
modernc.org/gc/v2 v2.6.5/go.mod h1:YgIahr1ypgfe7chRuJi2gD7DBQiKSLMPgBQe9oIiito=
modernc.org/libc v1.62.1 h1:s0+fv5E3FymN8eJVmnk0llBe6rOxCu/DEU+XygRbS8s=
modernc.org/libc v1.62.1/go.mod h1:iXhATfJQLjG3NWy56a6WVU73lWOcdYVxsvwCgoPljuo=
modernc.org/mathutil v1.7.1 h1:GCZVGXdaN8gTqB1Mf/usp1Y/hSqgI2vAGGP4jZMCxOU=
modernc.org/mathutil v1.7.1/go.mod h1:4p5IwJITfppl0G4sUEDtCr4DthTaT47/N3aT6MhfgJg=
modernc.org/memory v1.9.1 h1:V/Z1solwAVmMW1yttq3nDdZPJqV1rM05Ccq6KMSZ34g=
modernc.org/memory v1.9.1/go.mod h1:/JP4VbVC+K5sU2wZi9bHoq2MAkCnrt2r98UGeSK7Mjw=
modernc.org/opt v0.1.4 h1:2kNGMRiUjrp4LcaPuLY2PzUfqM/w9N23quVwhKt5Qm8=
modernc.org/opt v0.1.4/go.mod h1:03fq9lsNfvkYSfxrfUhZCWPk1lm4cq4N+Bh//bEtgns=
modernc.org/sortutil v1.2.1 h1:+xyoGf15mM3NMlPDnFqrteY07klSFxLElE2PVuWIJ7w=
modernc.org/sortutil v1.2.1/go.mod h1:7ZI3a3REbai7gzCLcotuw9AC4VZVpYMjDzETGsSMqJE=
modernc.org/sqlite v1.37.0 h1:s1TMe7T3Q3ovQiK2Ouz4Jwh7dw4ZDqbebSDTlSJdfjI=
modernc.org/sqlite v1.37.0/go.mod h1:5YiWv+YviqGMuGw4V+PNplcyaJ5v+vQd7TQOgkACoJM=
modernc.org/strutil v1.2.1 h1:UneZBkQA+DX2Rp35KcM69cSsNES9ly8mQWD71HKlOA0=
modernc.org/strutil v1.2.1/go.mod h1:EHkiggD70koQxjVdSBM3JKM7k6L0FbGE5eymy9i3B9A=
modernc.org/token v1.1.0 h1:Xl7Ap9dKaEs5kLoOQeQmPWevfnk/DM5qcLcYlA8ys6Y=
modernc.org/token v1.1.0/go.mod h1:UGzOrNV1mAFSEB63lOFHIpNRUVMvYTc6yu1SMY/XTDM=
@@ -2,9 +2,10 @@ package logscores

import (
    "context"
    "database/sql"
    "time"

    "github.com/jackc/pgx/v5/pgtype"
    "github.com/jackc/pgx/v5/pgxpool"
    "go.ntppool.org/common/logger"
    "go.ntppool.org/common/tracing"
    "go.ntppool.org/data-api/chdb"
@@ -19,12 +20,12 @@ type LogScoreHistory struct {
    // MonitorIDs []uint32
}

func GetHistoryClickHouse(ctx context.Context, ch *chdb.ClickHouse, db *sql.DB, serverID, monitorID uint32, since time.Time, count int, fullHistory bool) (*LogScoreHistory, error) {
func GetHistoryClickHouse(ctx context.Context, ch *chdb.ClickHouse, db *pgxpool.Pool, serverID, monitorID int64, since time.Time, count int, fullHistory bool) (*LogScoreHistory, error) {
    log := logger.FromContext(ctx)
    ctx, span := tracing.Tracer().Start(ctx, "logscores.GetHistoryClickHouse",
        trace.WithAttributes(
            attribute.Int("server", int(serverID)),
            attribute.Int("monitor", int(monitorID)),
            attribute.Int64("server", serverID),
            attribute.Int64("monitor", monitorID),
            attribute.Bool("full_history", fullHistory),
        ),
    )
@@ -33,7 +34,6 @@ func GetHistoryClickHouse(ctx context.Context, ch *chdb.ClickHouse, db *sql.DB,
    log.DebugContext(ctx, "GetHistoryCH", "server", serverID, "monitor", monitorID, "since", since, "count", count, "full_history", fullHistory)

    ls, err := ch.Logscores(ctx, int(serverID), int(monitorID), since, count, fullHistory)

    if err != nil {
        log.ErrorContext(ctx, "clickhouse logscores", "err", err)
        return nil, err
@@ -52,17 +52,17 @@ func GetHistoryClickHouse(ctx context.Context, ch *chdb.ClickHouse, db *sql.DB,
    }, nil
}

func GetHistoryMySQL(ctx context.Context, db *sql.DB, serverID, monitorID uint32, since time.Time, count int) (*LogScoreHistory, error) {
func GetHistoryPostgres(ctx context.Context, db *pgxpool.Pool, serverID, monitorID int64, since time.Time, count int) (*LogScoreHistory, error) {
    log := logger.FromContext(ctx)
    ctx, span := tracing.Tracer().Start(ctx, "logscores.GetHistoryMySQL")
    ctx, span := tracing.Tracer().Start(ctx, "logscores.GetHistoryPostgres")
    defer span.End()

    span.SetAttributes(
        attribute.Int("server", int(serverID)),
        attribute.Int("monitor", int(monitorID)),
        attribute.Int64("server", serverID),
        attribute.Int64("monitor", monitorID),
    )

    log.Debug("GetHistoryMySQL", "server", serverID, "monitor", monitorID, "since", since, "count", count)
    log.Debug("GetHistoryPostgres", "server", serverID, "monitor", monitorID, "since", since, "count", count)

    q := ntpdb.NewWrappedQuerier(ntpdb.New(db))

@@ -70,13 +70,13 @@ func GetHistoryMySQL(ctx context.Context, db *sql.DB, serverID, monitorID uint32
    var err error
    if monitorID > 0 {
        ls, err = q.GetServerLogScoresByMonitorID(ctx, ntpdb.GetServerLogScoresByMonitorIDParams{
            ServerID: serverID,
            MonitorID: sql.NullInt32{Int32: int32(monitorID), Valid: true},
            ServerID: int64(serverID),
            MonitorID: pgtype.Int8{Int64: int64(monitorID), Valid: true},
            Limit: int32(count),
        })
    } else {
        ls, err = q.GetServerLogScores(ctx, ntpdb.GetServerLogScoresParams{
            ServerID: serverID,
            ServerID: int64(serverID),
            Limit: int32(count),
        })
    }
@@ -98,12 +98,12 @@ func GetHistoryMySQL(ctx context.Context, db *sql.DB, serverID, monitorID uint32

func getMonitorNames(ctx context.Context, ls []ntpdb.LogScore, q ntpdb.QuerierTx) (map[int]string, error) {
    monitors := map[int]string{}
    monitorIDs := []uint32{}
    monitorIDs := []int64{}
    for _, l := range ls {
        if !l.MonitorID.Valid {
            continue
        }
        mID := uint32(l.MonitorID.Int32)
        mID := l.MonitorID.Int64
        if _, ok := monitors[int(mID)]; !ok {
            monitors[int(mID)] = ""
            monitorIDs = append(monitorIDs, mID)
105
mocks/Querier.go
@@ -1,105 +0,0 @@
// Code generated by mockery v2.35.4. DO NOT EDIT.

package mocks

import (
    context "context"

    mock "github.com/stretchr/testify/mock"
    ntpdb "go.ntppool.org/data-api/ntpdb"
)

// Querier is an autogenerated mock type for the Querier type
type Querier struct {
    mock.Mock
}

// GetServerNetspeed provides a mock function with given fields: ctx, ip
func (_m *Querier) GetServerNetspeed(ctx context.Context, ip string) (uint32, error) {
    ret := _m.Called(ctx, ip)

    var r0 uint32
    var r1 error
    if rf, ok := ret.Get(0).(func(context.Context, string) (uint32, error)); ok {
        return rf(ctx, ip)
    }
    if rf, ok := ret.Get(0).(func(context.Context, string) uint32); ok {
        r0 = rf(ctx, ip)
    } else {
        r0 = ret.Get(0).(uint32)
    }

    if rf, ok := ret.Get(1).(func(context.Context, string) error); ok {
        r1 = rf(ctx, ip)
    } else {
        r1 = ret.Error(1)
    }

    return r0, r1
}

// GetZoneStatsData provides a mock function with given fields: ctx
func (_m *Querier) GetZoneStatsData(ctx context.Context) ([]ntpdb.GetZoneStatsDataRow, error) {
    ret := _m.Called(ctx)

    var r0 []ntpdb.GetZoneStatsDataRow
    var r1 error
    if rf, ok := ret.Get(0).(func(context.Context) ([]ntpdb.GetZoneStatsDataRow, error)); ok {
        return rf(ctx)
    }
    if rf, ok := ret.Get(0).(func(context.Context) []ntpdb.GetZoneStatsDataRow); ok {
        r0 = rf(ctx)
    } else {
        if ret.Get(0) != nil {
            r0 = ret.Get(0).([]ntpdb.GetZoneStatsDataRow)
        }
    }

    if rf, ok := ret.Get(1).(func(context.Context) error); ok {
        r1 = rf(ctx)
    } else {
        r1 = ret.Error(1)
    }

    return r0, r1
}

// GetZoneStatsV2 provides a mock function with given fields: ctx, ip
func (_m *Querier) GetZoneStatsV2(ctx context.Context, ip string) ([]ntpdb.GetZoneStatsV2Row, error) {
    ret := _m.Called(ctx, ip)

    var r0 []ntpdb.GetZoneStatsV2Row
    var r1 error
    if rf, ok := ret.Get(0).(func(context.Context, string) ([]ntpdb.GetZoneStatsV2Row, error)); ok {
        return rf(ctx, ip)
    }
    if rf, ok := ret.Get(0).(func(context.Context, string) []ntpdb.GetZoneStatsV2Row); ok {
        r0 = rf(ctx, ip)
    } else {
        if ret.Get(0) != nil {
            r0 = ret.Get(0).([]ntpdb.GetZoneStatsV2Row)
        }
    }

    if rf, ok := ret.Get(1).(func(context.Context, string) error); ok {
        r1 = rf(ctx, ip)
    } else {
        r1 = ret.Error(1)
    }

    return r0, r1
}

// NewQuerier creates a new instance of Querier. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
// The first argument is typically a *testing.T value.
func NewQuerier(t interface {
    mock.TestingT
    Cleanup(func())
}) *Querier {
    mock := &Querier{}
    mock.Mock.Test(t)

    t.Cleanup(func() { mock.AssertExpectations(t) })

    return mock
}
15
ntpdb/db.go
@@ -1,19 +1,20 @@
// Code generated by sqlc. DO NOT EDIT.
// versions:
// sqlc v1.26.0
// sqlc v1.29.0

package ntpdb

import (
    "context"
    "database/sql"

    "github.com/jackc/pgx/v5"
    "github.com/jackc/pgx/v5/pgconn"
)

type DBTX interface {
    ExecContext(context.Context, string, ...interface{}) (sql.Result, error)
    PrepareContext(context.Context, string) (*sql.Stmt, error)
    QueryContext(context.Context, string, ...interface{}) (*sql.Rows, error)
    QueryRowContext(context.Context, string, ...interface{}) *sql.Row
    Exec(context.Context, string, ...interface{}) (pgconn.CommandTag, error)
    Query(context.Context, string, ...interface{}) (pgx.Rows, error)
    QueryRow(context.Context, string, ...interface{}) pgx.Row
}

func New(db DBTX) *Queries {
@@ -24,7 +25,7 @@ type Queries struct {
    db DBTX
}

func (q *Queries) WithTx(tx *sql.Tx) *Queries {
func (q *Queries) WithTx(tx pgx.Tx) *Queries {
    return &Queries{
        db: tx,
    }
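The regenerated `DBTX` interface above swaps the `database/sql` method set for pgx-style `Exec`/`Query`/`QueryRow`, and `WithTx` now takes a `pgx.Tx`. The design point is that both a connection pool and a transaction satisfy the same interface, so generated queries run unchanged inside or outside a transaction. A toy sketch of that pattern (simplified signatures, not the real pgx ones):

```go
package main

import "fmt"

// DBTX-style interface: both a pool and a transaction implement it.
// Exec here stands in for the real Exec/Query/QueryRow trio.
type DBTX interface {
	Exec(query string) string
}

type Pool struct{}

func (Pool) Exec(q string) string { return "pool: " + q }

type Tx struct{}

func (Tx) Exec(q string) string { return "tx: " + q }

// Queries holds whichever DBTX it was built with, like the sqlc output.
type Queries struct{ db DBTX }

func New(db DBTX) *Queries { return &Queries{db: db} }

// WithTx swaps the pool for a transaction without touching query code.
func (q *Queries) WithTx(tx Tx) *Queries { return &Queries{db: tx} }

func main() {
	q := New(Pool{})
	fmt.Println(q.db.Exec("select 1"))
	fmt.Println(q.WithTx(Tx{}).db.Exec("select 1"))
}
```

Running queries against `q` or `q.WithTx(tx)` is identical from the caller's side; only the underlying `DBTX` changes.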
@@ -1,83 +1,15 @@
package ntpdb

import (
    "database/sql"
    "database/sql/driver"
    "fmt"
    "log"
    "os"
    "time"
//go:generate go tool github.com/hexdigest/gowrap/cmd/gowrap gen -t ./opentelemetry.gowrap -g -i QuerierTx -p . -o otel.go

    "github.com/go-sql-driver/mysql"
    "gopkg.in/yaml.v3"
import (
    "context"

    "github.com/jackc/pgx/v5/pgxpool"
    "go.ntppool.org/common/database/pgdb"
)

type Config struct {
    MySQL DBConfig `yaml:"mysql"`
}

type DBConfig struct {
    DSN string `default:"" flag:"dsn" usage:"Database DSN"`
    User string `default:"" flag:"user"`
    Pass string `default:"" flag:"pass"`
}

func OpenDB(configFile string) (*sql.DB, error) {

    dbconn := sql.OpenDB(Driver{CreateConnectorFunc: createConnector(configFile)})

    dbconn.SetConnMaxLifetime(time.Minute * 3)
    dbconn.SetMaxOpenConns(8)
    dbconn.SetMaxIdleConns(3)

    err := dbconn.Ping()
    if err != nil {
        log.Printf("Could not connect to database: %s", err)
        return nil, err
    }

    return dbconn, nil
}

func createConnector(configFile string) CreateConnectorFunc {
    return func() (driver.Connector, error) {

        log.Printf("opening config file %s", configFile)

        dbFile, err := os.Open(configFile)
        if err != nil {
            return nil, err
        }

        dec := yaml.NewDecoder(dbFile)

        cfg := Config{}

        err = dec.Decode(&cfg)
        if err != nil {
            return nil, err
        }

        // log.Printf("db cfg: %+v", cfg)

        dsn := cfg.MySQL.DSN
        if len(dsn) == 0 {
            return nil, fmt.Errorf("--database.dsn flag or DATABASE_DSN environment variable required")
        }

        dbcfg, err := mysql.ParseDSN(dsn)
        if err != nil {
            return nil, err
        }

        if user := cfg.MySQL.User; len(user) > 0 {
            dbcfg.User = user
        }

        if pass := cfg.MySQL.Pass; len(pass) > 0 {
            dbcfg.Passwd = pass
        }

        return mysql.NewConnector(dbcfg)
    }
}
// OpenDB opens a PostgreSQL connection pool using the specified config file
func OpenDB(ctx context.Context, configFile string) (*pgxpool.Pool, error) {
    return pgdb.OpenPoolWithConfigFile(ctx, configFile)
}
@@ -1,34 +0,0 @@
package ntpdb

import (
    "context"
    "database/sql/driver"
    "errors"
    "fmt"
)

// from https://github.com/Boostport/dynamic-database-config

type CreateConnectorFunc func() (driver.Connector, error)

type Driver struct {
    CreateConnectorFunc CreateConnectorFunc
}

func (d Driver) Driver() driver.Driver {
    return d
}

func (d Driver) Connect(ctx context.Context) (driver.Conn, error) {
    connector, err := d.CreateConnectorFunc()

    if err != nil {
        return nil, fmt.Errorf("error creating connector from function: %w", err)
    }

    return connector.Connect(ctx)
}

func (d Driver) Open(name string) (driver.Conn, error) {
    return nil, errors.New("open is not supported")
}
103
ntpdb/models.go
@@ -1,15 +1,14 @@
// Code generated by sqlc. DO NOT EDIT.
// versions:
// sqlc v1.26.0
// sqlc v1.29.0

package ntpdb

import (
    "database/sql"
    "database/sql/driver"
    "fmt"
    "time"

    "github.com/jackc/pgx/v5/pgtype"
    "go.ntppool.org/common/types"
)

@@ -145,9 +144,10 @@ func (ns NullMonitorsType) Value() (driver.Value, error) {
type ServerScoresStatus string

const (
    ServerScoresStatusNew ServerScoresStatus = "new"
    ServerScoresStatusTesting ServerScoresStatus = "testing"
    ServerScoresStatusActive ServerScoresStatus = "active"
    ServerScoresStatusCandidate ServerScoresStatus = "candidate"
    ServerScoresStatusTesting ServerScoresStatus = "testing"
    ServerScoresStatusActive ServerScoresStatus = "active"
    ServerScoresStatusPaused ServerScoresStatus = "paused"
)

func (e *ServerScoresStatus) Scan(src interface{}) error {
@@ -270,70 +270,73 @@ func (ns NullZoneServerCountsIpVersion) Value() (driver.Value, error) {
}

type LogScore struct {
    ID uint64 `db:"id" json:"id"`
    MonitorID sql.NullInt32 `db:"monitor_id" json:"monitor_id"`
    ServerID uint32 `db:"server_id" json:"server_id"`
    Ts time.Time `db:"ts" json:"ts"`
    ID int64 `db:"id" json:"id"`
    MonitorID pgtype.Int8 `db:"monitor_id" json:"monitor_id"`
    ServerID int64 `db:"server_id" json:"server_id"`
    Ts pgtype.Timestamptz `db:"ts" json:"ts"`
    Score float64 `db:"score" json:"score"`
    Step float64 `db:"step" json:"step"`
    Offset sql.NullFloat64 `db:"offset" json:"offset"`
    Rtt sql.NullInt32 `db:"rtt" json:"rtt"`
    Offset pgtype.Float8 `db:"offset" json:"offset"`
    Rtt pgtype.Int4 `db:"rtt" json:"rtt"`
    Attributes types.LogScoreAttributes `db:"attributes" json:"attributes"`
}

type Monitor struct {
    ID uint32 `db:"id" json:"id"`
    ID int64 `db:"id" json:"id"`
    IDToken pgtype.Text `db:"id_token" json:"id_token"`
    Type MonitorsType `db:"type" json:"type"`
    UserID sql.NullInt32 `db:"user_id" json:"user_id"`
    AccountID sql.NullInt32 `db:"account_id" json:"account_id"`
    Name string `db:"name" json:"name"`
    UserID pgtype.Int8 `db:"user_id" json:"user_id"`
    AccountID pgtype.Int8 `db:"account_id" json:"account_id"`
    Hostname string `db:"hostname" json:"hostname"`
    Location string `db:"location" json:"location"`
    Ip sql.NullString `db:"ip" json:"ip"`
    Ip pgtype.Text `db:"ip" json:"ip"`
    IpVersion NullMonitorsIpVersion `db:"ip_version" json:"ip_version"`
    TlsName sql.NullString `db:"tls_name" json:"tls_name"`
    ApiKey sql.NullString `db:"api_key" json:"api_key"`
    TlsName pgtype.Text `db:"tls_name" json:"tls_name"`
    ApiKey pgtype.Text `db:"api_key" json:"api_key"`
    Status MonitorsStatus `db:"status" json:"status"`
    Config string `db:"config" json:"config"`
    ClientVersion string `db:"client_version" json:"client_version"`
    LastSeen sql.NullTime `db:"last_seen" json:"last_seen"`
    LastSubmit sql.NullTime `db:"last_submit" json:"last_submit"`
    CreatedOn time.Time `db:"created_on" json:"created_on"`
    LastSeen pgtype.Timestamptz `db:"last_seen" json:"last_seen"`
    LastSubmit pgtype.Timestamptz `db:"last_submit" json:"last_submit"`
    CreatedOn pgtype.Timestamptz `db:"created_on" json:"created_on"`
    DeletedOn pgtype.Timestamptz `db:"deleted_on" json:"deleted_on"`
    IsCurrent pgtype.Bool `db:"is_current" json:"is_current"`
}

type Server struct {
    ID uint32 `db:"id" json:"id"`
    Ip string `db:"ip" json:"ip"`
    IpVersion ServersIpVersion `db:"ip_version" json:"ip_version"`
    UserID sql.NullInt32 `db:"user_id" json:"user_id"`
    AccountID sql.NullInt32 `db:"account_id" json:"account_id"`
    Hostname sql.NullString `db:"hostname" json:"hostname"`
    Stratum sql.NullInt16 `db:"stratum" json:"stratum"`
    InPool uint8 `db:"in_pool" json:"in_pool"`
    InServerList uint8 `db:"in_server_list" json:"in_server_list"`
    Netspeed uint32 `db:"netspeed" json:"netspeed"`
    NetspeedTarget uint32 `db:"netspeed_target" json:"netspeed_target"`
    CreatedOn time.Time `db:"created_on" json:"created_on"`
    UpdatedOn time.Time `db:"updated_on" json:"updated_on"`
    ScoreTs sql.NullTime `db:"score_ts" json:"score_ts"`
    ScoreRaw float64 `db:"score_raw" json:"score_raw"`
    DeletionOn sql.NullTime `db:"deletion_on" json:"deletion_on"`
    Flags string `db:"flags" json:"flags"`
    ID int64 `db:"id" json:"id"`
    Ip string `db:"ip" json:"ip"`
    IpVersion ServersIpVersion `db:"ip_version" json:"ip_version"`
    UserID pgtype.Int8 `db:"user_id" json:"user_id"`
    AccountID pgtype.Int8 `db:"account_id" json:"account_id"`
    Hostname pgtype.Text `db:"hostname" json:"hostname"`
    Stratum pgtype.Int2 `db:"stratum" json:"stratum"`
    InPool int16 `db:"in_pool" json:"in_pool"`
    InServerList int16 `db:"in_server_list" json:"in_server_list"`
    Netspeed int64 `db:"netspeed" json:"netspeed"`
    NetspeedTarget int64 `db:"netspeed_target" json:"netspeed_target"`
    CreatedOn pgtype.Timestamptz `db:"created_on" json:"created_on"`
    UpdatedOn pgtype.Timestamptz `db:"updated_on" json:"updated_on"`
    ScoreTs pgtype.Timestamptz `db:"score_ts" json:"score_ts"`
    ScoreRaw float64 `db:"score_raw" json:"score_raw"`
    DeletionOn pgtype.Date `db:"deletion_on" json:"deletion_on"`
    Flags string `db:"flags" json:"flags"`
}

type Zone struct {
    ID uint32 `db:"id" json:"id"`
    Name string `db:"name" json:"name"`
    Description sql.NullString `db:"description" json:"description"`
    ParentID sql.NullInt32 `db:"parent_id" json:"parent_id"`
    Dns bool `db:"dns" json:"dns"`
    ID int64 `db:"id" json:"id"`
    Name string `db:"name" json:"name"`
    Description pgtype.Text `db:"description" json:"description"`
    ParentID pgtype.Int8 `db:"parent_id" json:"parent_id"`
    Dns bool `db:"dns" json:"dns"`
}

type ZoneServerCount struct {
    ID uint32 `db:"id" json:"id"`
    ZoneID uint32 `db:"zone_id" json:"zone_id"`
    ID int64 `db:"id" json:"id"`
    ZoneID int64 `db:"zone_id" json:"zone_id"`
    IpVersion ZoneServerCountsIpVersion `db:"ip_version" json:"ip_version"`
    Date time.Time `db:"date" json:"date"`
    CountActive uint32 `db:"count_active" json:"count_active"`
    CountRegistered uint32 `db:"count_registered" json:"count_registered"`
    NetspeedActive uint32 `db:"netspeed_active" json:"netspeed_active"`
    Date pgtype.Date `db:"date" json:"date"`
    CountActive int32 `db:"count_active" json:"count_active"`
    CountRegistered int32 `db:"count_registered" json:"count_registered"`
    NetspeedActive int `db:"netspeed_active" json:"netspeed_active"`
}
@@ -7,8 +7,8 @@ import (

func (m *Monitor) DisplayName() string {
    switch {
    case len(m.Name) > 0:
        return m.Name
    // case len(m.Hostname) > 0:
    // 	return m.Hostname
    case m.TlsName.Valid && len(m.TlsName.String) > 0:
        name := m.TlsName.String
        if idx := strings.Index(name, "."); idx > 0 {
55
ntpdb/opentelemetry.gowrap
Normal file
@@ -0,0 +1,55 @@
import (
    "context"

    _codes "go.opentelemetry.io/otel/codes"
    "go.opentelemetry.io/otel"
    "go.opentelemetry.io/otel/attribute"
)

{{ $decorator := (or .Vars.DecoratorName (printf "%sWithTracing" .Interface.Name)) }}
{{ $spanNameType := (or .Vars.SpanNamePrefix .Interface.Name) }}

// {{$decorator}} implements {{.Interface.Name}} interface instrumented with open telemetry spans
type {{$decorator}} struct {
    {{.Interface.Type}}
    _instance string
    _spanDecorator func(span trace.Span, params, results map[string]interface{})
}

// New{{$decorator}} returns {{$decorator}}
func New{{$decorator}} (base {{.Interface.Type}}, instance string, spanDecorator ...func(span trace.Span, params, results map[string]interface{})) {{$decorator}} {
    d := {{$decorator}} {
        {{.Interface.Name}}: base,
        _instance: instance,
    }

    if len(spanDecorator) > 0 && spanDecorator[0] != nil {
        d._spanDecorator = spanDecorator[0]
    }

    return d
}

{{range $method := .Interface.Methods}}
{{if $method.AcceptsContext}}
// {{$method.Name}} implements {{$.Interface.Name}}
func (_d {{$decorator}}) {{$method.Declaration}} {
    ctx, _span := otel.Tracer(_d._instance).Start(ctx, "{{$spanNameType}}.{{$method.Name}}")
    defer func() {
        if _d._spanDecorator != nil {
            _d._spanDecorator(_span, {{$method.ParamsMap}}, {{$method.ResultsMap}})
        }{{- if $method.ReturnsError}} else if err != nil {
            _span.RecordError(err)
            _span.SetStatus(_codes.Error, err.Error())
            _span.SetAttributes(
                attribute.String("event", "error"),
                attribute.String("message", err.Error()),
            )
        }
        {{end}}
        _span.End()
    }()
    {{$method.Pass (printf "_d.%s." $.Interface.Name) }}
}
{{end}}
{{end}}
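The vendored template above drives the regenerated `otel.go` that follows: gowrap emits one decorator method per interface method, each starting a span, delegating to the wrapped implementation, and recording any error on the span. A stripped-down sketch of that decorator shape, with tracing stubbed out as prints (the names here are illustrative, not the generated ones):

```go
package main

import (
	"errors"
	"fmt"
)

// Querier is a one-method stand-in for the real QuerierTx interface.
type Querier interface {
	GetServerByID(id int64) (string, error)
}

type base struct{}

func (base) GetServerByID(id int64) (string, error) {
	if id == 0 {
		return "", errors.New("not found")
	}
	return fmt.Sprintf("server-%d", id), nil
}

// QuerierWithTracing embeds the interface, like the gowrap output:
// unwrapped methods fall through, wrapped ones add span handling.
type QuerierWithTracing struct {
	Querier
	instance string
}

func (d QuerierWithTracing) GetServerByID(id int64) (s string, err error) {
	fmt.Printf("span start: %s.GetServerByID\n", d.instance)
	defer func() {
		if err != nil {
			fmt.Println("span error:", err) // real code calls _span.RecordError
		}
		fmt.Println("span end") // real code calls _span.End
	}()
	return d.Querier.GetServerByID(id)
}

func main() {
	q := QuerierWithTracing{Querier: base{}, instance: "data-api"}
	s, _ := q.GetServerByID(3)
	fmt.Println(s)
}
```

Because the decorator embeds the interface, adding a method to `QuerierTx` and re-running `go generate` is all that is needed to trace it.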
@@ -1,21 +1,20 @@
// Code generated by gowrap. DO NOT EDIT.
// template: https://raw.githubusercontent.com/hexdigest/gowrap/6c8f05695fec23df85903a8da0af66ac414e2a63/templates/opentelemetry
// template: opentelemetry.gowrap
// gowrap: http://github.com/hexdigest/gowrap

package ntpdb

//go:generate gowrap gen -p go.ntppool.org/data-api/ntpdb -i QuerierTx -t https://raw.githubusercontent.com/hexdigest/gowrap/6c8f05695fec23df85903a8da0af66ac414e2a63/templates/opentelemetry -o otel.go -l ""

import (
    "context"
    "database/sql"

    "go.opentelemetry.io/otel/trace"

    "go.opentelemetry.io/otel"
    "go.opentelemetry.io/otel/attribute"
    "go.opentelemetry.io/otel/trace"
    _codes "go.opentelemetry.io/otel/codes"
)

// QuerierTxWithTracing implements QuerierTx interface instrumented with opentracing spans
// QuerierTxWithTracing implements QuerierTx interface instrumented with open telemetry spans
type QuerierTxWithTracing struct {
    QuerierTx
    _instance string
@@ -47,6 +46,7 @@ func (_d QuerierTxWithTracing) Begin(ctx context.Context) (q1 QuerierTx, err err
            "err": err})
    } else if err != nil {
        _span.RecordError(err)
        _span.SetStatus(_codes.Error, err.Error())
        _span.SetAttributes(
            attribute.String("event", "error"),
            attribute.String("message", err.Error()),
@@ -68,6 +68,7 @@ func (_d QuerierTxWithTracing) Commit(ctx context.Context) (err error) {
            "err": err})
    } else if err != nil {
        _span.RecordError(err)
        _span.SetStatus(_codes.Error, err.Error())
        _span.SetAttributes(
            attribute.String("event", "error"),
            attribute.String("message", err.Error()),
@@ -79,18 +80,19 @@ func (_d QuerierTxWithTracing) Commit(ctx context.Context) (err error) {
    return _d.QuerierTx.Commit(ctx)
}

// GetMonitorByName implements QuerierTx
func (_d QuerierTxWithTracing) GetMonitorByName(ctx context.Context, tlsName sql.NullString) (m1 Monitor, err error) {
    ctx, _span := otel.Tracer(_d._instance).Start(ctx, "QuerierTx.GetMonitorByName")
// GetMonitorByNameAndIPVersion implements QuerierTx
func (_d QuerierTxWithTracing) GetMonitorByNameAndIPVersion(ctx context.Context, arg GetMonitorByNameAndIPVersionParams) (m1 Monitor, err error) {
    ctx, _span := otel.Tracer(_d._instance).Start(ctx, "QuerierTx.GetMonitorByNameAndIPVersion")
    defer func() {
        if _d._spanDecorator != nil {
            _d._spanDecorator(_span, map[string]interface{}{
                "ctx": ctx,
                "tlsName": tlsName}, map[string]interface{}{
                "ctx": ctx,
                "arg": arg}, map[string]interface{}{
                "m1": m1,
                "err": err})
        } else if err != nil {
            _span.RecordError(err)
            _span.SetStatus(_codes.Error, err.Error())
            _span.SetAttributes(
                attribute.String("event", "error"),
                attribute.String("message", err.Error()),
@@ -99,11 +101,11 @@ func (_d QuerierTxWithTracing) GetMonitorByName(ctx context.Context, tlsName sql

        _span.End()
    }()
    return _d.QuerierTx.GetMonitorByName(ctx, tlsName)
    return _d.QuerierTx.GetMonitorByNameAndIPVersion(ctx, arg)
}

// GetMonitorsByID implements QuerierTx
func (_d QuerierTxWithTracing) GetMonitorsByID(ctx context.Context, monitorids []uint32) (ma1 []Monitor, err error) {
func (_d QuerierTxWithTracing) GetMonitorsByID(ctx context.Context, monitorids []int64) (ma1 []Monitor, err error) {
    ctx, _span := otel.Tracer(_d._instance).Start(ctx, "QuerierTx.GetMonitorsByID")
    defer func() {
        if _d._spanDecorator != nil {
@@ -114,6 +116,7 @@ func (_d QuerierTxWithTracing) GetMonitorsByID(ctx context.Context, monitorids [
            "err": err})
    } else if err != nil {
        _span.RecordError(err)
        _span.SetStatus(_codes.Error, err.Error())
        _span.SetAttributes(
            attribute.String("event", "error"),
            attribute.String("message", err.Error()),
@@ -126,7 +129,7 @@ func (_d QuerierTxWithTracing) GetMonitorsByID(ctx context.Context, monitorids [
}

// GetServerByID implements QuerierTx
func (_d QuerierTxWithTracing) GetServerByID(ctx context.Context, id uint32) (s1 Server, err error) {
func (_d QuerierTxWithTracing) GetServerByID(ctx context.Context, id int64) (s1 Server, err error) {
    ctx, _span := otel.Tracer(_d._instance).Start(ctx, "QuerierTx.GetServerByID")
    defer func() {
        if _d._spanDecorator != nil {
@@ -137,6 +140,7 @@ func (_d QuerierTxWithTracing) GetServerByID(ctx context.Context, id uint32) (s1
            "err": err})
    } else if err != nil {
        _span.RecordError(err)
        _span.SetStatus(_codes.Error, err.Error())
        _span.SetAttributes(
            attribute.String("event", "error"),
            attribute.String("message", err.Error()),
@@ -160,6 +164,7 @@ func (_d QuerierTxWithTracing) GetServerByIP(ctx context.Context, ip string) (s1
            "err": err})
    } else if err != nil {
        _span.RecordError(err)
        _span.SetStatus(_codes.Error, err.Error())
        _span.SetAttributes(
            attribute.String("event", "error"),
            attribute.String("message", err.Error()),
@@ -183,6 +188,7 @@ func (_d QuerierTxWithTracing) GetServerLogScores(ctx context.Context, arg GetSe
            "err": err})
    } else if err != nil {
        _span.RecordError(err)
        _span.SetStatus(_codes.Error, err.Error())
        _span.SetAttributes(
            attribute.String("event", "error"),
            attribute.String("message", err.Error()),
@@ -206,6 +212,7 @@ func (_d QuerierTxWithTracing) GetServerLogScoresByMonitorID(ctx context.Context
            "err": err})
    } else if err != nil {
        _span.RecordError(err)
        _span.SetStatus(_codes.Error, err.Error())
        _span.SetAttributes(
            attribute.String("event", "error"),
            attribute.String("message", err.Error()),
@@ -218,17 +225,18 @@ func (_d QuerierTxWithTracing) GetServerLogScoresByMonitorID(ctx context.Context
}

// GetServerNetspeed implements QuerierTx
func (_d QuerierTxWithTracing) GetServerNetspeed(ctx context.Context, ip string) (u1 uint32, err error) {
func (_d QuerierTxWithTracing) GetServerNetspeed(ctx context.Context, ip string) (i1 int64, err error) {
    ctx, _span := otel.Tracer(_d._instance).Start(ctx, "QuerierTx.GetServerNetspeed")
    defer func() {
        if _d._spanDecorator != nil {
            _d._spanDecorator(_span, map[string]interface{}{
                "ctx": ctx,
                "ip": ip}, map[string]interface{}{
                "u1": u1,
                "i1": i1,
                "err": err})
        } else if err != nil {
            _span.RecordError(err)
            _span.SetStatus(_codes.Error, err.Error())
            _span.SetAttributes(
                attribute.String("event", "error"),
                attribute.String("message", err.Error()),
@@ -252,6 +260,7 @@ func (_d QuerierTxWithTracing) GetServerScores(ctx context.Context, arg GetServe
            "err": err})
    } else if err != nil {
        _span.RecordError(err)
        _span.SetStatus(_codes.Error, err.Error())
        _span.SetAttributes(
            attribute.String("event", "error"),
            attribute.String("message", err.Error()),
@@ -275,6 +284,7 @@ func (_d QuerierTxWithTracing) GetZoneByName(ctx context.Context, name string) (
            "err": err})
    } else if err != nil {
        _span.RecordError(err)
        _span.SetStatus(_codes.Error, err.Error())
        _span.SetAttributes(
            attribute.String("event", "error"),
            attribute.String("message", err.Error()),
@@ -287,7 +297,7 @@ func (_d QuerierTxWithTracing) GetZoneByName(ctx context.Context, name string) (
}

// GetZoneCounts implements QuerierTx
func (_d QuerierTxWithTracing) GetZoneCounts(ctx context.Context, zoneID uint32) (za1 []ZoneServerCount, err error) {
func (_d QuerierTxWithTracing) GetZoneCounts(ctx context.Context, zoneID int64) (za1 []ZoneServerCount, err error) {
    ctx, _span := otel.Tracer(_d._instance).Start(ctx, "QuerierTx.GetZoneCounts")
    defer func() {
        if _d._spanDecorator != nil {
@@ -298,6 +308,7 @@ func (_d QuerierTxWithTracing) GetZoneCounts(ctx context.Context, zoneID uint32)
            "err": err})
    } else if err != nil {
        _span.RecordError(err)
        _span.SetStatus(_codes.Error, err.Error())
        _span.SetAttributes(
            attribute.String("event", "error"),
            attribute.String("message", err.Error()),
@@ -320,6 +331,7 @@ func (_d QuerierTxWithTracing) GetZoneStatsData(ctx context.Context) (ga1 []GetZ
            "err": err})
    } else if err != nil {
        _span.RecordError(err)
        _span.SetStatus(_codes.Error, err.Error())
        _span.SetAttributes(
            attribute.String("event", "error"),
            attribute.String("message", err.Error()),
@@ -343,6 +355,7 @@ func (_d QuerierTxWithTracing) GetZoneStatsV2(ctx context.Context, ip string) (g
            "err": err})
    } else if err != nil {
        _span.RecordError(err)
        _span.SetStatus(_codes.Error, err.Error())
        _span.SetAttributes(
            attribute.String("event", "error"),
            attribute.String("message", err.Error()),
@@ -364,6 +377,7 @@ func (_d QuerierTxWithTracing) Rollback(ctx context.Context) (err error) {
            "err": err})
    } else if err != nil {
        _span.RecordError(err)
        _span.SetStatus(_codes.Error, err.Error())
        _span.SetAttributes(
            attribute.String("event", "error"),
            attribute.String("message", err.Error()),
@@ -1,25 +1,24 @@
// Code generated by sqlc. DO NOT EDIT.
// versions:
//   sqlc v1.26.0
//   sqlc v1.29.0

package ntpdb

import (
	"context"
	"database/sql"
)

type Querier interface {
	GetMonitorByName(ctx context.Context, tlsName sql.NullString) (Monitor, error)
	GetMonitorsByID(ctx context.Context, monitorids []uint32) ([]Monitor, error)
	GetServerByID(ctx context.Context, id uint32) (Server, error)
	GetMonitorByNameAndIPVersion(ctx context.Context, arg GetMonitorByNameAndIPVersionParams) (Monitor, error)
	GetMonitorsByID(ctx context.Context, monitorids []int64) ([]Monitor, error)
	GetServerByID(ctx context.Context, id int64) (Server, error)
	GetServerByIP(ctx context.Context, ip string) (Server, error)
	GetServerLogScores(ctx context.Context, arg GetServerLogScoresParams) ([]LogScore, error)
	GetServerLogScoresByMonitorID(ctx context.Context, arg GetServerLogScoresByMonitorIDParams) ([]LogScore, error)
	GetServerNetspeed(ctx context.Context, ip string) (uint32, error)
	GetServerNetspeed(ctx context.Context, ip string) (int64, error)
	GetServerScores(ctx context.Context, arg GetServerScoresParams) ([]GetServerScoresRow, error)
	GetZoneByName(ctx context.Context, name string) (Zone, error)
	GetZoneCounts(ctx context.Context, zoneID uint32) ([]ZoneServerCount, error)
	GetZoneCounts(ctx context.Context, zoneID int64) ([]ZoneServerCount, error)
	GetZoneStatsData(ctx context.Context) ([]GetZoneStatsDataRow, error)
	GetZoneStatsV2(ctx context.Context, ip string) ([]GetZoneStatsV2Row, error)
}
@@ -1,34 +1,41 @@
// Code generated by sqlc. DO NOT EDIT.
// versions:
//   sqlc v1.26.0
//   sqlc v1.29.0
// source: query.sql

package ntpdb

import (
	"context"
	"database/sql"
	"strings"
	"time"

	"github.com/jackc/pgx/v5/pgtype"
)

const getMonitorByName = `-- name: GetMonitorByName :one
select id, type, user_id, account_id, name, location, ip, ip_version, tls_name, api_key, status, config, client_version, last_seen, last_submit, created_on from monitors
const getMonitorByNameAndIPVersion = `-- name: GetMonitorByNameAndIPVersion :one
select id, id_token, type, user_id, account_id, hostname, location, ip, ip_version, tls_name, api_key, status, config, client_version, last_seen, last_submit, created_on, deleted_on, is_current from monitors
where
	tls_name like ?
	tls_name like $1 AND
	(ip_version = $2 OR (type = 'score' AND ip_version IS NULL)) AND
	is_current = true
order by id
limit 1
`

func (q *Queries) GetMonitorByName(ctx context.Context, tlsName sql.NullString) (Monitor, error) {
	row := q.db.QueryRowContext(ctx, getMonitorByName, tlsName)
type GetMonitorByNameAndIPVersionParams struct {
	TlsName   pgtype.Text           `db:"tls_name" json:"tls_name"`
	IpVersion NullMonitorsIpVersion `db:"ip_version" json:"ip_version"`
}

func (q *Queries) GetMonitorByNameAndIPVersion(ctx context.Context, arg GetMonitorByNameAndIPVersionParams) (Monitor, error) {
	row := q.db.QueryRow(ctx, getMonitorByNameAndIPVersion, arg.TlsName, arg.IpVersion)
	var i Monitor
	err := row.Scan(
		&i.ID,
		&i.IDToken,
		&i.Type,
		&i.UserID,
		&i.AccountID,
		&i.Name,
		&i.Hostname,
		&i.Location,
		&i.Ip,
		&i.IpVersion,
@@ -40,27 +47,19 @@ func (q *Queries) GetMonitorByName(ctx context.Context, tlsName sql.NullString)
		&i.LastSeen,
		&i.LastSubmit,
		&i.CreatedOn,
		&i.DeletedOn,
		&i.IsCurrent,
	)
	return i, err
}

const getMonitorsByID = `-- name: GetMonitorsByID :many
select id, type, user_id, account_id, name, location, ip, ip_version, tls_name, api_key, status, config, client_version, last_seen, last_submit, created_on from monitors
where id in (/*SLICE:MonitorIDs*/?)
select id, id_token, type, user_id, account_id, hostname, location, ip, ip_version, tls_name, api_key, status, config, client_version, last_seen, last_submit, created_on, deleted_on, is_current from monitors
where id = ANY($1::bigint[])
`

func (q *Queries) GetMonitorsByID(ctx context.Context, monitorids []uint32) ([]Monitor, error) {
	query := getMonitorsByID
	var queryParams []interface{}
	if len(monitorids) > 0 {
		for _, v := range monitorids {
			queryParams = append(queryParams, v)
		}
		query = strings.Replace(query, "/*SLICE:MonitorIDs*/?", strings.Repeat(",?", len(monitorids))[1:], 1)
	} else {
		query = strings.Replace(query, "/*SLICE:MonitorIDs*/?", "NULL", 1)
	}
	rows, err := q.db.QueryContext(ctx, query, queryParams...)
func (q *Queries) GetMonitorsByID(ctx context.Context, monitorids []int64) ([]Monitor, error) {
	rows, err := q.db.Query(ctx, getMonitorsByID, monitorids)
	if err != nil {
		return nil, err
	}
@@ -70,10 +69,11 @@ func (q *Queries) GetMonitorsByID(ctx context.Context, monitorids []uint32) ([]M
		var i Monitor
		if err := rows.Scan(
			&i.ID,
			&i.IDToken,
			&i.Type,
			&i.UserID,
			&i.AccountID,
			&i.Name,
			&i.Hostname,
			&i.Location,
			&i.Ip,
			&i.IpVersion,
@@ -85,14 +85,13 @@ func (q *Queries) GetMonitorsByID(ctx context.Context, monitorids []uint32) ([]M
			&i.LastSeen,
			&i.LastSubmit,
			&i.CreatedOn,
			&i.DeletedOn,
			&i.IsCurrent,
		); err != nil {
			return nil, err
		}
		items = append(items, i)
	}
	if err := rows.Close(); err != nil {
		return nil, err
	}
	if err := rows.Err(); err != nil {
		return nil, err
	}
@@ -102,11 +101,11 @@ func (q *Queries) GetMonitorsByID(ctx context.Context, monitorids []uint32) ([]M
const getServerByID = `-- name: GetServerByID :one
select id, ip, ip_version, user_id, account_id, hostname, stratum, in_pool, in_server_list, netspeed, netspeed_target, created_on, updated_on, score_ts, score_raw, deletion_on, flags from servers
where
	id = ?
	id = $1
`

func (q *Queries) GetServerByID(ctx context.Context, id uint32) (Server, error) {
	row := q.db.QueryRowContext(ctx, getServerByID, id)
func (q *Queries) GetServerByID(ctx context.Context, id int64) (Server, error) {
	row := q.db.QueryRow(ctx, getServerByID, id)
	var i Server
	err := row.Scan(
		&i.ID,
@@ -133,11 +132,11 @@ func (q *Queries) GetServerByID(ctx context.Context, id uint32) (Server, error)
const getServerByIP = `-- name: GetServerByIP :one
select id, ip, ip_version, user_id, account_id, hostname, stratum, in_pool, in_server_list, netspeed, netspeed_target, created_on, updated_on, score_ts, score_raw, deletion_on, flags from servers
where
	ip = ?
	ip = $1
`

func (q *Queries) GetServerByIP(ctx context.Context, ip string) (Server, error) {
	row := q.db.QueryRowContext(ctx, getServerByIP, ip)
	row := q.db.QueryRow(ctx, getServerByIP, ip)
	var i Server
	err := row.Scan(
		&i.ID,
@@ -162,20 +161,20 @@ func (q *Queries) GetServerByIP(ctx context.Context, ip string) (Server, error)
}

const getServerLogScores = `-- name: GetServerLogScores :many
select id, monitor_id, server_id, ts, score, step, offset, rtt, attributes from log_scores
select id, monitor_id, server_id, ts, score, step, "offset", rtt, attributes from log_scores
where
	server_id = ?
	server_id = $1
order by ts desc
limit ?
limit $2
`

type GetServerLogScoresParams struct {
	ServerID uint32 `db:"server_id" json:"server_id"`
	Limit    int32  `db:"limit" json:"limit"`
	ServerID int64 `db:"server_id" json:"server_id"`
	Limit    int32 `db:"limit" json:"limit"`
}

func (q *Queries) GetServerLogScores(ctx context.Context, arg GetServerLogScoresParams) ([]LogScore, error) {
	rows, err := q.db.QueryContext(ctx, getServerLogScores, arg.ServerID, arg.Limit)
	rows, err := q.db.Query(ctx, getServerLogScores, arg.ServerID, arg.Limit)
	if err != nil {
		return nil, err
	}
@@ -198,9 +197,6 @@ func (q *Queries) GetServerLogScores(ctx context.Context, arg GetServerLogScores
		}
		items = append(items, i)
	}
	if err := rows.Close(); err != nil {
		return nil, err
	}
	if err := rows.Err(); err != nil {
		return nil, err
	}
@@ -208,22 +204,22 @@ func (q *Queries) GetServerLogScores(ctx context.Context, arg GetServerLogScores
}

const getServerLogScoresByMonitorID = `-- name: GetServerLogScoresByMonitorID :many
select id, monitor_id, server_id, ts, score, step, offset, rtt, attributes from log_scores
select id, monitor_id, server_id, ts, score, step, "offset", rtt, attributes from log_scores
where
	server_id = ? AND
	monitor_id = ?
	server_id = $1 AND
	monitor_id = $2
order by ts desc
limit ?
limit $3
`

type GetServerLogScoresByMonitorIDParams struct {
	ServerID  uint32        `db:"server_id" json:"server_id"`
	MonitorID sql.NullInt32 `db:"monitor_id" json:"monitor_id"`
	Limit     int32         `db:"limit" json:"limit"`
	ServerID  int64       `db:"server_id" json:"server_id"`
	MonitorID pgtype.Int8 `db:"monitor_id" json:"monitor_id"`
	Limit     int32       `db:"limit" json:"limit"`
}

func (q *Queries) GetServerLogScoresByMonitorID(ctx context.Context, arg GetServerLogScoresByMonitorIDParams) ([]LogScore, error) {
	rows, err := q.db.QueryContext(ctx, getServerLogScoresByMonitorID, arg.ServerID, arg.MonitorID, arg.Limit)
	rows, err := q.db.Query(ctx, getServerLogScoresByMonitorID, arg.ServerID, arg.MonitorID, arg.Limit)
	if err != nil {
		return nil, err
	}
@@ -246,9 +242,6 @@ func (q *Queries) GetServerLogScoresByMonitorID(ctx context.Context, arg GetServ
		}
		items = append(items, i)
	}
	if err := rows.Close(); err != nil {
		return nil, err
	}
	if err := rows.Err(); err != nil {
		return nil, err
	}
@@ -256,57 +249,46 @@ func (q *Queries) GetServerLogScoresByMonitorID(ctx context.Context, arg GetServ
}

const getServerNetspeed = `-- name: GetServerNetspeed :one
select netspeed from servers where ip = ?
select netspeed from servers where ip = $1
`

func (q *Queries) GetServerNetspeed(ctx context.Context, ip string) (uint32, error) {
	row := q.db.QueryRowContext(ctx, getServerNetspeed, ip)
	var netspeed uint32
func (q *Queries) GetServerNetspeed(ctx context.Context, ip string) (int64, error) {
	row := q.db.QueryRow(ctx, getServerNetspeed, ip)
	var netspeed int64
	err := row.Scan(&netspeed)
	return netspeed, err
}

const getServerScores = `-- name: GetServerScores :many
select
	m.id, m.name, m.tls_name, m.location, m.type,
	m.id, m.hostname, m.tls_name, m.location, m.type,
	ss.score_raw, ss.score_ts, ss.status
from server_scores ss
	inner join monitors m
	on (m.id=ss.monitor_id)
where
	server_id = ? AND
	monitor_id in (/*SLICE:MonitorIDs*/?)
	server_id = $1 AND
	monitor_id = ANY($2::bigint[])
`

type GetServerScoresParams struct {
	ServerID   uint32   `db:"server_id" json:"server_id"`
	MonitorIDs []uint32 `db:"MonitorIDs" json:"MonitorIDs"`
	ServerID   int64   `db:"server_id" json:"server_id"`
	MonitorIDs []int64 `db:"MonitorIDs" json:"MonitorIDs"`
}

type GetServerScoresRow struct {
	ID       uint32         `db:"id" json:"id"`
	Name     string         `db:"name" json:"name"`
	TlsName  sql.NullString `db:"tls_name" json:"tls_name"`
	ID       int64       `db:"id" json:"id"`
	Hostname string      `db:"hostname" json:"hostname"`
	TlsName  pgtype.Text `db:"tls_name" json:"tls_name"`
	Location string             `db:"location" json:"location"`
	Type     MonitorsType       `db:"type" json:"type"`
	ScoreRaw float64            `db:"score_raw" json:"score_raw"`
	ScoreTs  sql.NullTime       `db:"score_ts" json:"score_ts"`
	ScoreTs  pgtype.Timestamptz `db:"score_ts" json:"score_ts"`
	Status   ServerScoresStatus `db:"status" json:"status"`
}

func (q *Queries) GetServerScores(ctx context.Context, arg GetServerScoresParams) ([]GetServerScoresRow, error) {
	query := getServerScores
	var queryParams []interface{}
	queryParams = append(queryParams, arg.ServerID)
	if len(arg.MonitorIDs) > 0 {
		for _, v := range arg.MonitorIDs {
			queryParams = append(queryParams, v)
		}
		query = strings.Replace(query, "/*SLICE:MonitorIDs*/?", strings.Repeat(",?", len(arg.MonitorIDs))[1:], 1)
	} else {
		query = strings.Replace(query, "/*SLICE:MonitorIDs*/?", "NULL", 1)
	}
	rows, err := q.db.QueryContext(ctx, query, queryParams...)
	rows, err := q.db.Query(ctx, getServerScores, arg.ServerID, arg.MonitorIDs)
	if err != nil {
		return nil, err
	}
@@ -316,7 +298,7 @@ func (q *Queries) GetServerScores(ctx context.Context, arg GetServerScoresParams
		var i GetServerScoresRow
		if err := rows.Scan(
			&i.ID,
			&i.Name,
			&i.Hostname,
			&i.TlsName,
			&i.Location,
			&i.Type,
@@ -328,9 +310,6 @@ func (q *Queries) GetServerScores(ctx context.Context, arg GetServerScoresParams
		}
		items = append(items, i)
	}
	if err := rows.Close(); err != nil {
		return nil, err
	}
	if err := rows.Err(); err != nil {
		return nil, err
	}
@@ -340,11 +319,11 @@ func (q *Queries) GetServerScores(ctx context.Context, arg GetServerScoresParams
const getZoneByName = `-- name: GetZoneByName :one
select id, name, description, parent_id, dns from zones
where
	name = ?
	name = $1
`

func (q *Queries) GetZoneByName(ctx context.Context, name string) (Zone, error) {
	row := q.db.QueryRowContext(ctx, getZoneByName, name)
	row := q.db.QueryRow(ctx, getZoneByName, name)
	var i Zone
	err := row.Scan(
		&i.ID,
@@ -358,12 +337,12 @@ func (q *Queries) GetZoneByName(ctx context.Context, name string) (Zone, error)

const getZoneCounts = `-- name: GetZoneCounts :many
select id, zone_id, ip_version, date, count_active, count_registered, netspeed_active from zone_server_counts
where zone_id = ?
where zone_id = $1
order by date
`

func (q *Queries) GetZoneCounts(ctx context.Context, zoneID uint32) ([]ZoneServerCount, error) {
	rows, err := q.db.QueryContext(ctx, getZoneCounts, zoneID)
func (q *Queries) GetZoneCounts(ctx context.Context, zoneID int64) ([]ZoneServerCount, error) {
	rows, err := q.db.Query(ctx, getZoneCounts, zoneID)
	if err != nil {
		return nil, err
	}
@@ -384,9 +363,6 @@ func (q *Queries) GetZoneCounts(ctx context.Context, zoneID uint32) ([]ZoneServe
		}
		items = append(items, i)
	}
	if err := rows.Close(); err != nil {
		return nil, err
	}
	if err := rows.Err(); err != nil {
		return nil, err
	}
@@ -395,7 +371,7 @@ func (q *Queries) GetZoneCounts(ctx context.Context, zoneID uint32) ([]ZoneServe

const getZoneStatsData = `-- name: GetZoneStatsData :many
SELECT zc.date, z.name, zc.ip_version, count_active, count_registered, netspeed_active
FROM zone_server_counts zc USE INDEX (date_idx)
FROM zone_server_counts zc
	INNER JOIN zones z
	ON(zc.zone_id=z.id)
WHERE date IN (SELECT max(date) from zone_server_counts)
@@ -403,16 +379,16 @@ ORDER BY name
`

type GetZoneStatsDataRow struct {
	Date            time.Time   `db:"date" json:"date"`
	Date            pgtype.Date `db:"date" json:"date"`
	Name            string                    `db:"name" json:"name"`
	IpVersion       ZoneServerCountsIpVersion `db:"ip_version" json:"ip_version"`
	CountActive     uint32 `db:"count_active" json:"count_active"`
	CountRegistered uint32 `db:"count_registered" json:"count_registered"`
	NetspeedActive  uint32 `db:"netspeed_active" json:"netspeed_active"`
	CountActive     int32 `db:"count_active" json:"count_active"`
	CountRegistered int32 `db:"count_registered" json:"count_registered"`
	NetspeedActive  int   `db:"netspeed_active" json:"netspeed_active"`
}

func (q *Queries) GetZoneStatsData(ctx context.Context) ([]GetZoneStatsDataRow, error) {
	rows, err := q.db.QueryContext(ctx, getZoneStatsData)
	rows, err := q.db.Query(ctx, getZoneStatsData)
	if err != nil {
		return nil, err
	}
@@ -432,9 +408,6 @@ func (q *Queries) GetZoneStatsData(ctx context.Context) ([]GetZoneStatsDataRow,
		}
		items = append(items, i)
	}
	if err := rows.Close(); err != nil {
		return nil, err
	}
	if err := rows.Err(); err != nil {
		return nil, err
	}
@@ -442,15 +415,14 @@ func (q *Queries) GetZoneStatsData(ctx context.Context) ([]GetZoneStatsDataRow,
}

const getZoneStatsV2 = `-- name: GetZoneStatsV2 :many
select zone_name, netspeed_active+0 as netspeed_active FROM (
	SELECT
	z.name as zone_name,
	SUM(
		IF (deletion_on IS NULL AND score_raw > 10,
			netspeed,
			0
		)
	) AS netspeed_active
	CAST(SUM(
		CASE WHEN deletion_on IS NULL AND score_raw > 10
			THEN netspeed
			ELSE 0
		END
	) AS int) AS netspeed_active
FROM
	servers s
	INNER JOIN server_zones sz ON (sz.server_id = s.id)
@@ -459,14 +431,13 @@ FROM
	select zone_id, s.ip_version
	from server_zones sz
	inner join servers s on (s.id=sz.server_id)
	where s.ip=?
	where s.ip=$1
	) as srvz on (srvz.zone_id=z.id AND srvz.ip_version=s.ip_version)
WHERE
	(deletion_on IS NULL OR deletion_on > NOW())
	AND in_pool = 1
	AND netspeed > 0
	GROUP BY z.name)
AS server_netspeed
GROUP BY z.name
`

type GetZoneStatsV2Row struct {
@@ -475,7 +446,7 @@ type GetZoneStatsV2Row struct {
}

func (q *Queries) GetZoneStatsV2(ctx context.Context, ip string) ([]GetZoneStatsV2Row, error) {
	rows, err := q.db.QueryContext(ctx, getZoneStatsV2, ip)
	rows, err := q.db.Query(ctx, getZoneStatsV2, ip)
	if err != nil {
		return nil, err
	}
@@ -488,9 +459,6 @@ func (q *Queries) GetZoneStatsV2(ctx context.Context) ([]GetZoneStats
	}
	items = append(items, i)
	}
	if err := rows.Close(); err != nil {
		return nil, err
	}
	if err := rows.Err(); err != nil {
		return nil, err
	}
69 ntpdb/tx.go
@@ -2,7 +2,11 @@ package ntpdb

import (
	"context"
	"database/sql"
	"errors"

	"github.com/jackc/pgx/v5"
	"go.ntppool.org/common/logger"
	"go.opentelemetry.io/otel/trace"
)

type QuerierTx interface {
@@ -11,14 +15,17 @@ type QuerierTx interface {
	Begin(ctx context.Context) (QuerierTx, error)
	Commit(ctx context.Context) error
	Rollback(ctx context.Context) error

	// Conn returns the connection used by this transaction
	Conn() *pgx.Conn
}

type Beginner interface {
	Begin(context.Context) (sql.Tx, error)
	Begin(context.Context) (pgx.Tx, error)
}

type Tx interface {
	Begin(context.Context) (sql.Tx, error)
	Begin(context.Context) (pgx.Tx, error)
	Commit(ctx context.Context) error
	Rollback(ctx context.Context) error
}
@@ -28,21 +35,33 @@ func (q *Queries) Begin(ctx context.Context) (QuerierTx, error) {
	if err != nil {
		return nil, err
	}
	return &Queries{db: &tx}, nil
	return &Queries{db: tx}, nil
}

func (q *Queries) Commit(ctx context.Context) error {
	tx, ok := q.db.(Tx)
	if !ok {
		return sql.ErrTxDone
		// Commit called on Queries with dbpool, so treat as transaction already committed
		return pgx.ErrTxClosed
	}
	return tx.Commit(ctx)
}

func (q *Queries) Conn() *pgx.Conn {
	// pgx.Tx is an interface that has Conn() method
	tx, ok := q.db.(pgx.Tx)
	if !ok {
		logger.Setup().Error("could not get connection from QuerierTx")
		return nil
	}
	return tx.Conn()
}

func (q *Queries) Rollback(ctx context.Context) error {
	tx, ok := q.db.(Tx)
	if !ok {
		return sql.ErrTxDone
		// Rollback called on Queries with dbpool, so treat as transaction already committed
		return pgx.ErrTxClosed
	}
	return tx.Rollback(ctx)
}
@@ -62,3 +81,41 @@ func (wq *WrappedQuerier) Begin(ctx context.Context) (QuerierTx, error) {
	}
	return NewWrappedQuerier(q), nil
}

func (wq *WrappedQuerier) Conn() *pgx.Conn {
	return wq.QuerierTxWithTracing.Conn()
}

// LogRollback logs and performs a rollback if the transaction is still active
func LogRollback(ctx context.Context, tx QuerierTx) {
	if !isInTransaction(tx) {
		return
	}

	log := logger.FromContext(ctx)
	log.WarnContext(ctx, "transaction rollback called on an active transaction")

	// if caller ctx is done we still need rollback to happen
	// so Rollback gets a fresh context with span copied over
	rbCtx := context.Background()
	if span := trace.SpanFromContext(ctx); span != nil {
		rbCtx = trace.ContextWithSpan(rbCtx, span)
	}
	if err := tx.Rollback(rbCtx); err != nil && !errors.Is(err, pgx.ErrTxClosed) {
		log.ErrorContext(ctx, "rollback failed", "err", err)
	}
}

func isInTransaction(tx QuerierTx) bool {
	if tx == nil {
		return false
	}

	conn := tx.Conn()
	if conn == nil {
		return false
	}

	// 'I' means idle, so if it's not idle, we're in a transaction
	return conn.PgConn().TxStatus() != 'I'
}
389 plans/grafana-time-range-api.md Normal file
@@ -0,0 +1,389 @@
# DETAILED IMPLEMENTATION PLAN: Grafana Time Range API with Future Downsampling Support

## Overview
Implement a new Grafana-compatible API endpoint `/api/v2/server/scores/{server}/{mode}` that returns time series data in Grafana format with time range support and future downsampling capabilities.

## API Specification

### Endpoint
- **URL**: `/api/v2/server/scores/{server}/{mode}`
- **Method**: GET
- **Path Parameters**:
  - `server`: Server IP address or ID (same validation as existing API)
  - `mode`: Only `json` supported initially

### Query Parameters (following Grafana conventions)
- `from`: Unix timestamp in seconds (required)
- `to`: Unix timestamp in seconds (required)
- `maxDataPoints`: Integer, default 50000, max 50000 (for future downsampling)
- `monitor`: Monitor ID, name prefix, or "*" for all (optional, same as existing)
- `interval`: Future downsampling interval like "1m", "5m", "1h" (optional, not implemented initially)

### Response Format
Grafana table format JSON array (more efficient than separate series):
```json
[
  {
    "target": "monitor{name=zakim1-yfhw4a}",
    "tags": {
      "monitor_id": "126",
      "monitor_name": "zakim1-yfhw4a",
      "type": "monitor",
      "status": "active"
    },
    "columns": [
      {"text": "time", "type": "time"},
      {"text": "score", "type": "number"},
      {"text": "rtt", "type": "number", "unit": "ms"},
      {"text": "offset", "type": "number", "unit": "s"}
    ],
    "values": [
      [1753431667000, 20.0, 18.865, -0.000267],
      [1753431419000, 20.0, 18.96, -0.000390],
      [1753431151000, 20.0, 18.073, -0.000768],
      [1753430063000, 20.0, 18.209, null]
    ]
  }
]
```

## Implementation Details

### 1. Server Routing (`server/server.go`)
Add new route after existing scores routes:
```go
e.GET("/api/v2/server/scores/:server/:mode", srv.scoresTimeRange)
```

**Note**: Initially attempted the `:server.:mode` pattern, but the Echo router cannot properly parse IP addresses with dots using this pattern. Changed to `:server/:mode` to match the existing API pattern and ensure compatibility with IP addresses like `23.155.40.38`.

## Key Implementation Clarifications

### Monitor Filtering Behavior
- **monitor=\***: Return ALL monitors (no monitor count limit)
- **50k datapoint limit**: Applied in the database query (LIMIT clause)
- Return whatever data we get from the database to the user (no post-processing truncation)

### Null Value Handling Strategy
- **Score**: Always include (should never be null)
- **RTT**: Skip datapoints where RTT is null
- **Offset**: Skip datapoints where offset is null

### Time Range Validation Rules
- **Zero duration**: Return 400 Bad Request
- **Future timestamps**: Allow for now
- **Minimum range**: 1 second
- **Maximum range**: 90 days
### 2. New Handler Function (`server/grafana.go`)

#### Function Signature
```go
func (srv *Server) scoresTimeRange(c echo.Context) error
```

#### Parameter Parsing & Validation
```go
// Extend existing historyParameters struct for time range support
type timeRangeParams struct {
	historyParameters // embed existing struct
	from          time.Time
	to            time.Time
	maxDataPoints int
	interval      string // for future downsampling
}

func (srv *Server) parseTimeRangeParams(ctx context.Context, c echo.Context) (timeRangeParams, error) {
	// Start with existing parameter parsing logic
	baseParams, err := srv.getHistoryParameters(ctx, c)
	if err != nil {
		return timeRangeParams{}, err
	}

	// Parse and validate from/to second timestamps
	// Validate time range (max 90 days, min 1 second)
	// Parse maxDataPoints (default 50000, max 50000)
	// Return extended parameters
}
```

#### Response Structure
```go
type ColumnDef struct {
	Text string `json:"text"`
	Type string `json:"type"`
	Unit string `json:"unit,omitempty"`
}

type GrafanaTableSeries struct {
	Target  string            `json:"target"`
	Tags    map[string]string `json:"tags"`
	Columns []ColumnDef       `json:"columns"`
	Values  [][]interface{}   `json:"values"`
}

type GrafanaTimeSeriesResponse []GrafanaTableSeries
```

#### Cache Control
```go
// Reuse existing setHistoryCacheControl function for consistency
// Logic based on data recency and entry count:
// - Empty or >8h old data: "s-maxage=260,max-age=360"
// - Single entry: "s-maxage=60,max-age=35"
// - Multiple entries: "s-maxage=90,max-age=120"
setHistoryCacheControl(c, history)
```

### 3. ClickHouse Data Access (`chdb/logscores.go`)

#### New Method
```go
func (d *ClickHouse) LogscoresTimeRange(ctx context.Context, serverID, monitorID int, from, to time.Time, limit int) ([]ntpdb.LogScore, error) {
	// Build query with time range WHERE clause
	// Always order by ts ASC (Grafana convention)
	// Apply limit to prevent memory issues
	// Use same row scanning logic as existing Logscores method
}
```

#### Query Structure
```sql
SELECT id, monitor_id, server_id, ts,
       toFloat64(score), toFloat64(step), offset,
       rtt, leap, warning, error
FROM log_scores
WHERE server_id = ?
  AND ts >= ?
  AND ts <= ?
  [AND monitor_id = ?] -- if specific monitor requested
ORDER BY ts ASC
LIMIT ?
```

### 4. Data Transformation Logic (`server/grafana.go`)

#### Core Transformation Function
```go
func transformToGrafanaTableFormat(history *logscores.LogScoreHistory, monitors []ntpdb.Monitor) GrafanaTimeSeriesResponse {
	// Group data by monitor_id (one series per monitor)
	// Create table format with columns: time, score, rtt, offset
	// Convert timestamps to milliseconds
	// Build proper target names and tags
	// Handle null values appropriately in table values
}
```

#### Grouping Strategy
1. **Group by Monitor**: One table series per monitor
2. **Table Columns**: time, score, rtt, offset (all metrics in one table)
3. **Target Naming**: `monitor{name={sanitized_monitor_name}}`
4. **Tag Structure**: Include monitor metadata (no metric type needed)
5. **Monitor Status**: Query real monitor data using `q.GetServerScores()` like existing API
6. **Series Ordering**: No guaranteed order (standard Grafana behavior)
7. **Efficiency**: More efficient than separate series - less JSON overhead

#### Timestamp Conversion
```go
timestampMs := logScore.Ts.Unix() * 1000
```
### 5. Error Handling
|
||||
|
||||
#### Validation Errors (400 Bad Request)
|
||||
- Invalid timestamp format
|
||||
- from >= to (including zero duration)
|
||||
- Time range too large (> 90 days)
|
||||
- Time range too small (< 1 second minimum)
|
||||
- maxDataPoints > 50000
|
||||
- Invalid mode (not "json")
|
||||
|
||||
#### Not Found Errors (404)
|
||||
- Server not found
|
||||
- Monitor not found
|
||||
- Server deleted
|
||||
|
||||
#### Server Errors (500)
|
||||
- ClickHouse connection issues
|
||||
- Database query errors
|
||||

### 6. Future Downsampling Design

#### API Extension Points
- `interval` parameter parsing ready
- `maxDataPoints` limit already enforced
- Response format supports downsampled data seamlessly

#### Downsampling Algorithm (Future Implementation)

```go
// When datapoints > maxDataPoints:
// 1. Calculate downsample interval: (to - from) / maxDataPoints
// 2. Group data into time buckets
// 3. Aggregate per bucket: avg for score/rtt, last for offset
// 4. Return aggregated datapoints
```

## Testing Strategy

### Unit Tests
- Parameter parsing and validation
- Data transformation logic
- Error handling scenarios
- Timestamp conversion accuracy

### Integration Tests
- End-to-end API requests
- ClickHouse query execution
- Multiple monitor scenarios
- Large time range handling

### Manual Testing
- Grafana integration testing
- Performance with various time ranges
- Cache behavior validation

## Performance Considerations

### Current Implementation
- 50k datapoint limit applied in the database query (`LIMIT` clause); covers roughly a few weeks of data
- ClickHouse-only for better range query performance
- Proper indexing on (server_id, ts) assumed
- Table format more efficient than separate time series (less JSON overhead)

### Future Optimizations (Critical for Production)
- **Downsampling for large ranges**: Essential for 90-day queries with reasonable performance
- Query optimization based on range size
- Potential parallel monitor queries
- Adaptive sampling rates based on time range duration
## Documentation Updates

### API.md Addition

```markdown
### 7. Server Scores Time Range (v2)

**GET** `/api/v2/server/scores/{server}/{mode}`

Grafana-compatible time series endpoint for NTP server scoring data.

#### Path Parameters
- `server`: Server IP address or ID
- `mode`: Response format (`json` only)

#### Query Parameters
- `from`: Start time as Unix timestamp in seconds, or relative time such as `-3d` (required)
- `to`: End time as Unix timestamp in seconds, or relative time such as `-1d` (required)
- `maxDataPoints`: Maximum data points to return (default: 50000, max: 50000)
- `monitor`: Monitor filter (ID, name prefix, or "*" for all)

#### Response Format
Grafana table format array with one series per monitor containing all metrics as columns.
```
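For illustration, a single-monitor response in this table format might look like the sketch below. The shape follows the `GrafanaTableSeries` structure described in this document; the monitor name is taken from the production test data elsewhere in this document, while the `monitor_id` value, second timestamp, and metric values are made up:

```json
[
  {
    "target": "monitor{name=defra1-210hw9t}",
    "tags": {
      "monitor_id": "42",
      "monitor_name": "defra1-210hw9t",
      "type": "monitor",
      "status": "active"
    },
    "columns": [
      { "text": "time", "type": "time" },
      { "text": "score", "type": "number" },
      { "text": "rtt", "type": "number", "unit": "ms" },
      { "text": "offset", "type": "number", "unit": "s" }
    ],
    "values": [
      [1753457764000, 20, 12.3, 0.001],
      [1753458064000, 20, 11.9, null]
    ]
  }
]
```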

## Key Research Findings

### Grafana Error Format Requirements
- **HTTP Status Codes**: Standard 400/404/500 work fine
- **Response Body**: JSON preferred with `Content-Type: application/json`
- **Structure**: Simple `{"error": "message", "status": code}` is sufficient
- **Compatibility**: Existing Echo error patterns are Grafana-compatible

### Data Volume Considerations
- **50k Datapoint Limit**: Only covers roughly a few weeks of data, not sufficient for 90-day ranges
- **Downsampling Critical**: Required for production use with 90-day time ranges
- **Current Approach**: Acceptable for the MVP; downsampling is essential for full utility
## Implementation Checklist

### Phase 0: Grafana Table Format Validation ✅ **COMPLETED**
- [x] Add test endpoint `/api/v2/test/grafana-table` returning sample table format
- [x] Implement Grafana table format response structures in `server/grafana.go`
- [x] Add structured logging and OpenTelemetry tracing to test endpoint
- [x] Verify endpoint compiles and serves correct JSON format
- [x] Test endpoint response format and headers (CORS, Content-Type, Cache-Control)
- [ ] Test with actual Grafana instance to validate table format compatibility
- [ ] Confirm time series panels render table format correctly
- [ ] Validate column types and units display properly

#### Phase 0 Implementation Details

**Files Created/Modified:**
- `server/grafana.go`: New file containing Grafana table format structures and test endpoint
- `server/server.go`: Added route `e.GET("/api/v2/test/grafana-table", srv.testGrafanaTable)`

**Test Endpoint Features:**
- **URL**: `http://localhost:8030/api/v2/test/grafana-table`
- **Response Format**: Grafana table format with realistic NTP Pool data
- **Sample Data**: Two monitor series (zakim1-yfhw4a, nj2-mon01) with time-based values
- **Columns**: time, score, rtt (ms), offset (s) with proper units
- **Null Handling**: Demonstrates null offset values
- **Headers**: CORS, JSON content-type, cache control
- **Observability**: Structured logging with context, OpenTelemetry tracing

**Recommended Grafana Data Source**: JSON API plugin (`marcusolsson-json-datasource`), ideal for REST APIs returning table format JSON
### Phase 1: Core Implementation ✅ **COMPLETED**
- [x] Add route in server.go (fixed routing pattern from `:server.:mode` to `:server/:mode`)
- [x] Implement parseTimeRangeParams function for parameter validation
- [x] Add LogscoresTimeRange method to ClickHouse with time range filtering
- [x] Implement transformToGrafanaTableFormat function with monitor grouping
- [x] Add scoresTimeRange handler with full error handling
- [x] Error handling and validation (reuse existing Echo patterns)
- [x] Cache control headers (reuse setHistoryCacheControl)

#### Phase 1 Implementation Details

**Key Components Built:**
- **Route Pattern**: `/api/v2/server/scores/:server/:mode` (consistent with the existing API)
- **Parameter Validation**: Full validation of `from`/`to` timestamps, `maxDataPoints`, time ranges
- **ClickHouse Integration**: `LogscoresTimeRange()` with time-based WHERE clauses and ASC ordering
- **Data Transformation**: Grafana table format with monitor grouping and null value handling
- **Complete Handler**: `scoresTimeRange()` with server validation, error handling, caching, and CORS

**Routing Fix**: Changed from `:server.:mode` to `:server/:mode` to resolve an Echo router issue with IP addresses containing dots (e.g., `23.155.40.38`).
**Files Created/Modified in Phase 1:**
- `server/grafana.go`: Complete implementation with all structures and functions
  - `timeRangeParams` struct and `parseTimeRangeParams()` function
  - `transformToGrafanaTableFormat()` function with monitor grouping
  - `scoresTimeRange()` handler with full error handling
  - `sanitizeMonitorName()` utility function
- `server/server.go`: Added route `e.GET("/api/v2/server/scores/:server/:mode", srv.scoresTimeRange)`
- `chdb/logscores.go`: Added `LogscoresTimeRange()` method for time-based queries

**Production Testing Results** (July 25, 2025):
- ✅ **Real Data Verification**: Successfully tested with server `102.64.112.164` over a 12-hour time range
- ✅ **Multiple Monitor Support**: Returns data for multiple monitors (`defra1-210hw9t`, `recentmedian`)
- ✅ **Data Quality Validation**:
  - RTT conversion (microseconds → milliseconds): ✅ Working
  - Timestamp conversion (seconds → milliseconds): ✅ Working
  - Null value handling: ✅ Working (recentmedian has null RTT/offset as expected)
  - Monitor grouping: ✅ Working (one series per monitor)
- ✅ **API Parameter Changes**: Successfully changed from milliseconds to seconds for user-friendliness
- ✅ **Volume Testing**: Handles 100+ data points per monitor efficiently
- ✅ **Error Handling**: All validation working (400 for invalid params, 404 for missing servers)
- ✅ **Performance**: Sub-second response times for 12-hour ranges

**Sample Working Request:**

```bash
curl 'http://localhost:8030/api/v2/server/scores/102.64.112.164/json?from=1753457764&to=1753500964&monitor=*'
```
### Phase 2: Testing & Polish
- [ ] Unit tests for all functions
- [ ] Integration tests
- [ ] Manual Grafana testing with real data
- [ ] Performance testing with large ranges (up to 50k points)
- [ ] API documentation updates

### Phase 3: Future Enhancement Ready
- [ ] Interval parameter parsing (no-op initially)
- [ ] Downsampling framework hooks (critical for 90-day ranges)
- [ ] Monitoring and metrics for the new endpoint

This design provides a solid foundation for immediate Grafana integration while being fully prepared for future downsampling capabilities without breaking changes.

## Critical Notes for Production

- **Downsampling Required**: The 50k datapoint limit means 90-day ranges will hit limits quickly
- **Table Format Validation**: Phase 0 testing ensures Grafana compatibility before full implementation
- **Error Handling**: Existing Echo patterns are sufficient for Grafana requirements
- **Scalability**: The current design handles weeks of data well; downsampling is needed for months
query.sql
@@ -1,6 +1,6 @@
 -- name: GetZoneStatsData :many
 SELECT zc.date, z.name, zc.ip_version, count_active, count_registered, netspeed_active
-FROM zone_server_counts zc USE INDEX (date_idx)
+FROM zone_server_counts zc
 INNER JOIN zones z
 ON(zc.zone_id=z.id)
 WHERE date IN (SELECT max(date) from zone_server_counts)
@@ -8,18 +8,17 @@ ORDER BY name;


 -- name: GetServerNetspeed :one
-select netspeed from servers where ip = ?;
+select netspeed from servers where ip = $1;

 -- name: GetZoneStatsV2 :many
-select zone_name, netspeed_active+0 as netspeed_active FROM (
 SELECT
     z.name as zone_name,
-    SUM(
-        IF (deletion_on IS NULL AND score_raw > 10,
-            netspeed,
-            0
-        )
-    ) AS netspeed_active
+    CAST(SUM(
+        CASE WHEN deletion_on IS NULL AND score_raw > 10
+            THEN netspeed
+            ELSE 0
+        END
+    ) AS int) AS netspeed_active
 FROM
     servers s
     INNER JOIN server_zones sz ON (sz.server_id = s.id)
@@ -28,61 +27,62 @@ FROM
     select zone_id, s.ip_version
     from server_zones sz
     inner join servers s on (s.id=sz.server_id)
-    where s.ip=?
+    where s.ip=$1
 ) as srvz on (srvz.zone_id=z.id AND srvz.ip_version=s.ip_version)
 WHERE
     (deletion_on IS NULL OR deletion_on > NOW())
     AND in_pool = 1
     AND netspeed > 0
-GROUP BY z.name)
-AS server_netspeed;
+GROUP BY z.name;

 -- name: GetServerByID :one
 select * from servers
 where
-    id = ?;
+    id = $1;

 -- name: GetServerByIP :one
 select * from servers
 where
     ip = sqlc.arg(ip);

--- name: GetMonitorByName :one
+-- name: GetMonitorByNameAndIPVersion :one
 select * from monitors
 where
-    tls_name like sqlc.arg('tls_name')
+    tls_name like sqlc.arg('tls_name') AND
+    (ip_version = sqlc.arg('ip_version') OR (type = 'score' AND ip_version IS NULL)) AND
     is_current = true
 order by id
 limit 1;

 -- name: GetMonitorsByID :many
 select * from monitors
-where id in (sqlc.slice('MonitorIDs'));
+where id = ANY(sqlc.arg('MonitorIDs')::bigint[]);

 -- name: GetServerScores :many
 select
-    m.id, m.name, m.tls_name, m.location, m.type,
+    m.id, m.hostname, m.tls_name, m.location, m.type,
     ss.score_raw, ss.score_ts, ss.status
 from server_scores ss
 inner join monitors m
     on (m.id=ss.monitor_id)
 where
-    server_id = ? AND
-    monitor_id in (sqlc.slice('MonitorIDs'));
+    server_id = $1 AND
+    monitor_id = ANY(sqlc.arg('MonitorIDs')::bigint[]);

 -- name: GetServerLogScores :many
 select * from log_scores
 where
-    server_id = ?
+    server_id = $1
 order by ts desc
-limit ?;
+limit $2;

 -- name: GetServerLogScoresByMonitorID :many
 select * from log_scores
 where
-    server_id = ? AND
-    monitor_id = ?
+    server_id = $1 AND
+    monitor_id = $2
 order by ts desc
-limit ?;
+limit $3;

 -- name: GetZoneByName :one
 select * from zones
@@ -91,5 +91,5 @@ where

 -- name: GetZoneCounts :many
 select * from zone_server_counts
-where zone_id = ?
+where zone_id = $1
 order by date;
schema.sql (3717 lines): diff suppressed because it is too large.
@@ -2,7 +2,7 @@

 set -euo pipefail

-go install github.com/goreleaser/goreleaser/v2@v2.5.0
+go install github.com/goreleaser/goreleaser/v2@v2.12.3

 if [ ! -z "${harbor_username:-}" ]; then
 	DOCKER_FILE=~/.docker/config.json
@@ -1,11 +1,11 @@
 package server

 import (
-	"database/sql"
 	"errors"
 	"net/http"
 	"net/netip"

+	"github.com/jackc/pgx/v5"
 	"github.com/labstack/echo/v4"
 	"go.opentelemetry.io/otel/attribute"
 	"golang.org/x/sync/errgroup"
@@ -16,8 +16,10 @@ import (
 	"go.ntppool.org/data-api/ntpdb"
 )

-const pointBasis float64 = 10000
-const pointSymbol = "‱"
+const (
+	pointBasis float64 = 10000
+	pointSymbol = "‱"
+)

 // const pointBasis = 1000
 // const pointSymbol = "‰"
@@ -54,7 +56,7 @@ func (srv *Server) dnsAnswers(c echo.Context) error {
 	queryGroup, ctx := errgroup.WithContext(ctx)

 	var zoneStats []ntpdb.GetZoneStatsV2Row
-	var serverNetspeed uint32
+	var serverNetspeed int64

 	queryGroup.Go(func() error {
 		var err error
@@ -62,7 +64,7 @@ func (srv *Server) dnsAnswers(c echo.Context) error {

 		serverNetspeed, err = q.GetServerNetspeed(ctx, ip.String())
 		if err != nil {
-			if !errors.Is(err, sql.ErrNoRows) {
+			if !errors.Is(err, pgx.ErrNoRows) {
 				log.Error("GetServerNetspeed", "err", err)
 			}
 			return err // this will return if the server doesn't exist
@@ -114,21 +116,21 @@ func (srv *Server) dnsAnswers(c echo.Context) error {

 	err = queryGroup.Wait()
 	if err != nil {
-		if errors.Is(err, sql.ErrNoRows) {
+		if errors.Is(err, pgx.ErrNoRows) {
 			return c.String(http.StatusNotFound, "Not found")
 		}
 		log.Error("query error", "err", err)
 		return c.String(http.StatusInternalServerError, err.Error())
 	}

-	zoneTotals := map[string]int32{}
+	zoneTotals := map[string]int{}

 	for _, z := range zoneStats {
 		zn := z.ZoneName
 		if zn == "@" {
 			zn = ""
 		}
-		zoneTotals[zn] = z.NetspeedActive // binary.BigEndian.Uint64(...)
+		zoneTotals[zn] = int(z.NetspeedActive) // binary.BigEndian.Uint64(...)
 		// log.Info("zone netspeed", "cc", z.ZoneName, "speed", z.NetspeedActive)
 	}

@@ -143,7 +145,7 @@ func (srv *Server) dnsAnswers(c echo.Context) error {
 		if zt == 0 {
 			// if the recorded netspeed for the zone was zero, assume it's at least
 			// this servers worth instead. Otherwise the Netspeed gets to be 'infinite'.
-			zt = int32(serverNetspeed)
+			zt = int(serverNetspeed)
 		}
 		cc.Netspeed = (pointBasis / float64(zt)) * float64(serverNetspeed)
 	}
@@ -163,5 +165,4 @@ func (srv *Server) dnsAnswers(c echo.Context) error {
 	c.Response().Header().Set("Cache-Control", "public,max-age=1800")

 	return c.JSONPretty(http.StatusOK, r, "")
-
 }
@@ -2,12 +2,12 @@ package server

 import (
 	"context"
-	"database/sql"
 	"errors"
 	"net/netip"
 	"strconv"
 	"time"

+	"github.com/jackc/pgx/v5"
 	"go.ntppool.org/common/logger"
 	"go.ntppool.org/common/tracing"
 	"go.ntppool.org/data-api/ntpdb"
@@ -22,7 +22,7 @@ func (srv *Server) FindServer(ctx context.Context, serverID string) (ntpdb.Serve
 	var serverData ntpdb.Server
 	var dberr error
 	if id, err := strconv.Atoi(serverID); id > 0 && err == nil {
-		serverData, dberr = q.GetServerByID(ctx, uint32(id))
+		serverData, dberr = q.GetServerByID(ctx, int64(id))
 	} else {
 		ip, err := netip.ParseAddr(serverID)
 		if err != nil || !ip.IsValid() {
@@ -31,7 +31,7 @@ func (srv *Server) FindServer(ctx context.Context, serverID string) (ntpdb.Serve
 		serverData, dberr = q.GetServerByIP(ctx, ip.String())
 	}
 	if dberr != nil {
-		if !errors.Is(dberr, sql.ErrNoRows) {
+		if !errors.Is(dberr, pgx.ErrNoRows) {
 			log.Error("could not query server id", "err", dberr)
 			return serverData, dberr
 		}
server/grafana.go (new file, 589 lines)
@@ -0,0 +1,589 @@
package server

import (
	"context"
	"fmt"
	"net/http"
	"regexp"
	"strconv"
	"strings"
	"time"

	"github.com/labstack/echo/v4"
	"go.ntppool.org/common/logger"
	"go.ntppool.org/common/tracing"
	"go.ntppool.org/data-api/logscores"
	"go.ntppool.org/data-api/ntpdb"
)

// ColumnDef represents a Grafana table column definition
type ColumnDef struct {
	Text string `json:"text"`
	Type string `json:"type"`
	Unit string `json:"unit,omitempty"`
}

// GrafanaTableSeries represents a single table series in Grafana format
type GrafanaTableSeries struct {
	Target  string            `json:"target"`
	Tags    map[string]string `json:"tags"`
	Columns []ColumnDef       `json:"columns"`
	Values  [][]interface{}   `json:"values"`
}

// GrafanaTimeSeriesResponse represents the complete Grafana table response
type GrafanaTimeSeriesResponse []GrafanaTableSeries

// timeRangeParams extends historyParameters with time range support
type timeRangeParams struct {
	historyParameters // embed existing struct
	from          time.Time
	to            time.Time
	maxDataPoints int
	interval      string // for future downsampling
}

// parseRelativeTime parses relative time expressions like "-3d", "-2h", "-30m".
// Returns the absolute time relative to the provided base time (usually time.Now())
func parseRelativeTime(relativeTimeStr string, baseTime time.Time) (time.Time, error) {
	if relativeTimeStr == "" {
		return time.Time{}, fmt.Errorf("empty time string")
	}

	// Check if it's a regular Unix timestamp first
	if unixTime, err := strconv.ParseInt(relativeTimeStr, 10, 64); err == nil {
		return time.Unix(unixTime, 0), nil
	}

	// Parse relative time format like "-3d", "-2h", "-30m", "-5s"
	re := regexp.MustCompile(`^(-?)(\d+)([dhms])$`)
	matches := re.FindStringSubmatch(relativeTimeStr)
	if len(matches) != 4 {
		return time.Time{}, fmt.Errorf("invalid time format, expected Unix timestamp or relative format like '-3d', '-2h', '-30m', '-5s'")
	}

	sign := matches[1]
	valueStr := matches[2]
	unit := matches[3]

	value, err := strconv.Atoi(valueStr)
	if err != nil {
		return time.Time{}, fmt.Errorf("invalid numeric value: %s", valueStr)
	}

	var duration time.Duration
	switch unit {
	case "s":
		duration = time.Duration(value) * time.Second
	case "m":
		duration = time.Duration(value) * time.Minute
	case "h":
		duration = time.Duration(value) * time.Hour
	case "d":
		duration = time.Duration(value) * 24 * time.Hour
	default:
		return time.Time{}, fmt.Errorf("invalid time unit: %s", unit)
	}

	// Apply sign (negative means go back in time)
	if sign == "-" {
		return baseTime.Add(-duration), nil
	}
	return baseTime.Add(duration), nil
}

// parseTimeRangeParams parses and validates time range parameters
func (srv *Server) parseTimeRangeParams(ctx context.Context, c echo.Context, server ntpdb.Server) (timeRangeParams, error) {
	log := logger.FromContext(ctx)

	// Start with existing parameter parsing logic
	baseParams, err := srv.getHistoryParameters(ctx, c, server)
	if err != nil {
		return timeRangeParams{}, err
	}

	trParams := timeRangeParams{
		historyParameters: baseParams,
		maxDataPoints:     50000, // default
	}

	// Parse from timestamp (required) - supports Unix timestamps and relative time like "-3d"
	fromParam := c.QueryParam("from")
	if fromParam == "" {
		return timeRangeParams{}, echo.NewHTTPError(http.StatusBadRequest, "from parameter is required")
	}

	now := time.Now()
	trParams.from, err = parseRelativeTime(fromParam, now)
	if err != nil {
		return timeRangeParams{}, echo.NewHTTPError(http.StatusBadRequest, fmt.Sprintf("invalid from parameter: %v", err))
	}

	// Parse to timestamp (required) - supports Unix timestamps and relative time like "-1d"
	toParam := c.QueryParam("to")
	if toParam == "" {
		return timeRangeParams{}, echo.NewHTTPError(http.StatusBadRequest, "to parameter is required")
	}

	trParams.to, err = parseRelativeTime(toParam, now)
	if err != nil {
		return timeRangeParams{}, echo.NewHTTPError(http.StatusBadRequest, fmt.Sprintf("invalid to parameter: %v", err))
	}

	// Validate time range
	if trParams.from.Equal(trParams.to) || trParams.from.After(trParams.to) {
		return timeRangeParams{}, echo.NewHTTPError(http.StatusBadRequest, "from must be before to")
	}

	// Check minimum range (1 second)
	if trParams.to.Sub(trParams.from) < time.Second {
		return timeRangeParams{}, echo.NewHTTPError(http.StatusBadRequest, "time range must be at least 1 second")
	}

	// Check maximum range (90 days)
	if trParams.to.Sub(trParams.from) > 90*24*time.Hour {
		return timeRangeParams{}, echo.NewHTTPError(http.StatusBadRequest, "time range cannot exceed 90 days")
	}

	// Parse maxDataPoints (optional)
	if maxDataPointsParam := c.QueryParam("maxDataPoints"); maxDataPointsParam != "" {
		maxDP, err := strconv.Atoi(maxDataPointsParam)
		if err != nil {
			return timeRangeParams{}, echo.NewHTTPError(http.StatusBadRequest, "invalid maxDataPoints format")
		}
		if maxDP > 50000 {
			return timeRangeParams{}, echo.NewHTTPError(http.StatusBadRequest, "maxDataPoints cannot exceed 50000")
		}
		if maxDP > 0 {
			trParams.maxDataPoints = maxDP
		}
	}

	// Parse interval (optional, for future downsampling)
	trParams.interval = c.QueryParam("interval")

	log.DebugContext(ctx, "parsed time range params",
		"from", trParams.from,
		"to", trParams.to,
		"maxDataPoints", trParams.maxDataPoints,
		"interval", trParams.interval,
		"monitor", trParams.monitorID,
	)

	return trParams, nil
}

// sanitizeMonitorName sanitizes monitor names for Grafana target format
func sanitizeMonitorName(name string) string {
	// Replace problematic characters for Grafana target format
	result := strings.ReplaceAll(name, " ", "_")
	result = strings.ReplaceAll(result, ".", "-")
	result = strings.ReplaceAll(result, "/", "-")
	return result
}

// transformToGrafanaTableFormat converts LogScoreHistory to Grafana table format
func transformToGrafanaTableFormat(history *logscores.LogScoreHistory, monitors []ntpdb.Monitor) GrafanaTimeSeriesResponse {
	// Group data by monitor_id (one series per monitor)
	monitorData := make(map[int][]ntpdb.LogScore)
	monitorInfo := make(map[int]ntpdb.Monitor)

	// Group log scores by monitor ID
	skippedInvalidMonitors := 0
	for _, ls := range history.LogScores {
		if !ls.MonitorID.Valid {
			skippedInvalidMonitors++
			continue
		}
		monitorID := int(ls.MonitorID.Int64)
		monitorData[monitorID] = append(monitorData[monitorID], ls)
	}

	// Debug logging for transformation
	logger.Setup().Info("transformation grouping debug",
		"total_log_scores", len(history.LogScores),
		"skipped_invalid_monitors", skippedInvalidMonitors,
		"grouped_monitor_ids", func() []int {
			keys := make([]int, 0, len(monitorData))
			for k := range monitorData {
				keys = append(keys, k)
			}
			return keys
		}(),
		"monitor_data_counts", func() map[int]int {
			counts := make(map[int]int)
			for k, v := range monitorData {
				counts[k] = len(v)
			}
			return counts
		}(),
	)

	// Index monitors by ID for quick lookup
	for _, monitor := range monitors {
		monitorInfo[int(monitor.ID)] = monitor
	}

	var response GrafanaTimeSeriesResponse

	// Create one table series per monitor
	logger.Setup().Info("creating grafana series",
		"monitor_data_entries", len(monitorData),
	)

	for monitorID, logScores := range monitorData {
		if len(logScores) == 0 {
			logger.Setup().Info("skipping monitor with no data", "monitor_id", monitorID)
			continue
		}

		logger.Setup().Info("processing monitor series",
			"monitor_id", monitorID,
			"log_scores_count", len(logScores),
		)

		// Get monitor name from history.Monitors map or from monitor info
		monitorName := "unknown"
		if name, exists := history.Monitors[monitorID]; exists && name != "" {
			monitorName = name
		} else if monitor, exists := monitorInfo[monitorID]; exists {
			monitorName = monitor.DisplayName()
		}

		// Build target name and tags
		sanitizedName := sanitizeMonitorName(monitorName)
		target := "monitor{name=" + sanitizedName + "}"

		tags := map[string]string{
			"monitor_id":   strconv.Itoa(monitorID),
			"monitor_name": monitorName,
			"type":         "monitor",
		}

		// Add status (we'll use active as default since we have data for this monitor)
		tags["status"] = "active"

		// Define table columns
		columns := []ColumnDef{
			{Text: "time", Type: "time"},
			{Text: "score", Type: "number"},
			{Text: "rtt", Type: "number", Unit: "ms"},
			{Text: "offset", Type: "number", Unit: "s"},
		}

		// Build values array
		var values [][]interface{}
		for _, ls := range logScores {
			// Convert timestamp to milliseconds
			timestampMs := ls.Ts.Time.Unix() * 1000

			// Create row: [time, score, rtt, offset]
			row := []interface{}{
				timestampMs,
				ls.Score,
			}

			// Add RTT (convert from microseconds to milliseconds, handle null)
			if ls.Rtt.Valid {
				rttMs := float64(ls.Rtt.Int32) / 1000.0
				row = append(row, rttMs)
			} else {
				row = append(row, nil)
			}

			// Add offset (handle null)
			if ls.Offset.Valid {
				row = append(row, ls.Offset.Float64)
			} else {
				row = append(row, nil)
			}

			values = append(values, row)
		}

		// Create table series
		series := GrafanaTableSeries{
			Target:  target,
			Tags:    tags,
			Columns: columns,
			Values:  values,
		}

		response = append(response, series)

		logger.Setup().Info("created series for monitor",
			"monitor_id", monitorID,
			"target", series.Target,
			"values_count", len(series.Values),
		)
	}

	logger.Setup().Info("transformation complete",
		"final_response_count", len(response),
		"response_is_nil", response == nil,
	)

	return response
}

// scoresTimeRange handles Grafana time range requests for NTP server scores
func (srv *Server) scoresTimeRange(c echo.Context) error {
	log := logger.Setup()
	ctx, span := tracing.Tracer().Start(c.Request().Context(), "scoresTimeRange")
	defer span.End()

	// Set reasonable default cache time; adjusted later based on data
	c.Response().Header().Set("Cache-Control", "public,max-age=240")

	// Validate mode parameter
	mode := c.Param("mode")
	if mode != "json" {
		return echo.NewHTTPError(http.StatusNotFound, "invalid mode - only json supported")
	}

	// Find and validate server first
	server, err := srv.FindServer(ctx, c.Param("server"))
	if err != nil {
		log.ErrorContext(ctx, "find server", "err", err)
		if he, ok := err.(*echo.HTTPError); ok {
			return he
		}
		span.RecordError(err)
		return echo.NewHTTPError(http.StatusInternalServerError, "internal error")
	}
	if server.DeletionAge(30 * 24 * time.Hour) {
		span.AddEvent("server deleted")
		return echo.NewHTTPError(http.StatusNotFound, "server not found")
	}
	if server.ID == 0 {
		span.AddEvent("server not found")
		return echo.NewHTTPError(http.StatusNotFound, "server not found")
	}

	// Parse and validate time range parameters
	params, err := srv.parseTimeRangeParams(ctx, c, server)
	if err != nil {
		if he, ok := err.(*echo.HTTPError); ok {
			return he
		}
		log.ErrorContext(ctx, "parse time range parameters", "err", err)
		span.RecordError(err)
		return echo.NewHTTPError(http.StatusInternalServerError, "internal error")
	}

	// Query ClickHouse for time range data
	log.InfoContext(ctx, "executing clickhouse time range query",
		"server_id", server.ID,
		"server_ip", server.Ip,
		"monitor_id", params.monitorID,
		"from", params.from,
		"to", params.to,
		"max_data_points", params.maxDataPoints,
		"time_range_duration", params.to.Sub(params.from).String(),
	)

	logScores, err := srv.ch.LogscoresTimeRange(ctx, int(server.ID), int(params.monitorID), params.from, params.to, params.maxDataPoints)
	if err != nil {
		log.ErrorContext(ctx, "clickhouse time range query", "err", err,
			"server_id", server.ID,
			"monitor_id", params.monitorID,
			"from", params.from,
			"to", params.to,
		)
		span.RecordError(err)
		return echo.NewHTTPError(http.StatusInternalServerError, "internal error")
	}

	log.InfoContext(ctx, "clickhouse query results",
		"server_id", server.ID,
		"rows_returned", len(logScores),
		"first_few_ids", func() []int64 {
			ids := make([]int64, 0, 3)
			for i, ls := range logScores {
				if i >= 3 {
					break
				}
				ids = append(ids, ls.ID)
			}
			return ids
		}(),
	)

	// Build LogScoreHistory structure for compatibility with existing functions
	history := &logscores.LogScoreHistory{
		LogScores: logScores,
		Monitors:  make(map[int]string),
	}

	// Get monitor names for the returned data
	monitorIDs := []int64{}
	for _, ls := range logScores {
		if ls.MonitorID.Valid {
			monitorID := ls.MonitorID.Int64
			if _, exists := history.Monitors[int(monitorID)]; !exists {
				history.Monitors[int(monitorID)] = ""
				monitorIDs = append(monitorIDs, monitorID)
			}
		}
	}

	log.InfoContext(ctx, "monitor processing",
		"unique_monitor_ids", monitorIDs,
		"monitor_count", len(monitorIDs),
	)

	// Get monitor details from database for status and display names
	var monitors []ntpdb.Monitor
	if len(monitorIDs) > 0 {
		q := ntpdb.NewWrappedQuerier(ntpdb.New(srv.db))
		logScoreMonitors, err := q.GetServerScores(ctx, ntpdb.GetServerScoresParams{
			MonitorIDs: monitorIDs,
			ServerID:   server.ID,
		})
		if err != nil {
			log.ErrorContext(ctx, "get monitor details", "err", err)
			// Don't fail the request, just use basic info
		} else {
			for _, lsm := range logScoreMonitors {
				// Create monitor entry for transformation (we mainly need the display name)
				tempMon := ntpdb.Monitor{
					TlsName:  lsm.TlsName,
					Location: lsm.Location,
					ID:       lsm.ID,
				}
				monitors = append(monitors, tempMon)

				// Update monitor name in history
				history.Monitors[int(lsm.ID)] = tempMon.DisplayName()
			}
		}
	}

	// Transform to Grafana table format
	log.InfoContext(ctx, "starting grafana transformation",
|
||||
"log_scores_count", len(logScores),
|
||||
"monitors_count", len(monitors),
|
||||
"history_monitors", history.Monitors,
|
||||
)
|
||||
|
||||
grafanaResponse := transformToGrafanaTableFormat(history, monitors)
|
||||
|
||||
log.InfoContext(ctx, "grafana transformation complete",
|
||||
"response_series_count", len(grafanaResponse),
|
||||
"response_preview", func() interface{} {
|
||||
if len(grafanaResponse) == 0 {
|
||||
return "empty_response"
|
||||
}
|
||||
first := grafanaResponse[0]
|
||||
return map[string]interface{}{
|
||||
"target": first.Target,
|
||||
"tags": first.Tags,
|
||||
"columns_count": len(first.Columns),
|
||||
"values_count": len(first.Values),
|
||||
"first_few_values": func() [][]interface{} {
|
||||
if len(first.Values) == 0 {
|
||||
return [][]interface{}{}
|
||||
}
|
||||
count := 2
|
||||
if len(first.Values) < count {
|
||||
count = len(first.Values)
|
||||
}
|
||||
return first.Values[:count]
|
||||
}(),
|
||||
}
|
||||
}(),
|
||||
)
|
||||
|
||||
// Set cache control headers based on data characteristics
|
||||
setHistoryCacheControl(c, history)
|
||||
|
||||
// Set CORS headers
|
||||
c.Response().Header().Set("Access-Control-Allow-Origin", "*")
|
||||
c.Response().Header().Set("Content-Type", "application/json")
|
||||
|
||||
log.InfoContext(ctx, "time range response final",
|
||||
"server_id", server.ID,
|
||||
"server_ip", server.Ip,
|
||||
"monitor_id", params.monitorID,
|
||||
"time_range", params.to.Sub(params.from).String(),
|
||||
"raw_data_points", len(logScores),
|
||||
"grafana_series_count", len(grafanaResponse),
|
||||
"max_data_points", params.maxDataPoints,
|
||||
"response_is_null", grafanaResponse == nil,
|
||||
"response_is_empty", len(grafanaResponse) == 0,
|
||||
)
|
||||
|
||||
return c.JSON(http.StatusOK, grafanaResponse)
|
||||
}
|
||||
|
||||
// testGrafanaTable returns sample data in Grafana table format for validation
func (srv *Server) testGrafanaTable(c echo.Context) error {
	log := logger.Setup()
	ctx, span := tracing.Tracer().Start(c.Request().Context(), "testGrafanaTable")
	defer span.End()

	log.InfoContext(ctx, "serving test Grafana table format",
		"remote_ip", c.RealIP(),
		"user_agent", c.Request().UserAgent(),
	)

	// Generate sample data with realistic NTP Pool values
	now := time.Now()
	sampleData := GrafanaTimeSeriesResponse{
		{
			Target: "monitor{name=zakim1-yfhw4a}",
			Tags: map[string]string{
				"monitor_id":   "126",
				"monitor_name": "zakim1-yfhw4a",
				"type":         "monitor",
				"status":       "active",
			},
			Columns: []ColumnDef{
				{Text: "time", Type: "time"},
				{Text: "score", Type: "number"},
				{Text: "rtt", Type: "number", Unit: "ms"},
				{Text: "offset", Type: "number", Unit: "s"},
			},
			Values: [][]interface{}{
				{now.Add(-10*time.Minute).Unix() * 1000, 20.0, 18.865, -0.000267},
				{now.Add(-20*time.Minute).Unix() * 1000, 20.0, 18.96, -0.000390},
				{now.Add(-30*time.Minute).Unix() * 1000, 20.0, 18.073, -0.000768},
				{now.Add(-40*time.Minute).Unix() * 1000, 20.0, 18.209, nil}, // null offset example
			},
		},
		{
			Target: "monitor{name=nj2-mon01}",
			Tags: map[string]string{
				"monitor_id":   "84",
				"monitor_name": "nj2-mon01",
				"type":         "monitor",
				"status":       "active",
			},
			Columns: []ColumnDef{
				{Text: "time", Type: "time"},
				{Text: "score", Type: "number"},
				{Text: "rtt", Type: "number", Unit: "ms"},
				{Text: "offset", Type: "number", Unit: "s"},
			},
			Values: [][]interface{}{
				{now.Add(-10*time.Minute).Unix() * 1000, 19.5, 22.145, 0.000123},
				{now.Add(-20*time.Minute).Unix() * 1000, 19.8, 21.892, 0.000089},
				{now.Add(-30*time.Minute).Unix() * 1000, 20.0, 22.034, 0.000156},
			},
		},
	}

	// Add CORS header for browser testing
	c.Response().Header().Set("Access-Control-Allow-Origin", "*")
	c.Response().Header().Set("Content-Type", "application/json")

	// Set cache control similar to other endpoints
	c.Response().Header().Set("Cache-Control", "public,max-age=60")

	log.InfoContext(ctx, "test Grafana table response sent",
		"series_count", len(sampleData),
		"response_size_approx", "~1KB",
	)

	return c.JSON(http.StatusOK, sampleData)
}
119
server/grafana_test.go
Normal file
@@ -0,0 +1,119 @@
package server

import (
	"testing"
	"time"
)

func TestParseRelativeTime(t *testing.T) {
	// Use a fixed base time for consistent testing
	baseTime := time.Date(2025, 8, 4, 12, 0, 0, 0, time.UTC)

	tests := []struct {
		name        string
		input       string
		expected    time.Time
		shouldError bool
	}{
		{
			name:     "Unix timestamp",
			input:    "1753500964",
			expected: time.Unix(1753500964, 0),
		},
		{
			name:     "3 days ago",
			input:    "-3d",
			expected: baseTime.Add(-3 * 24 * time.Hour),
		},
		{
			name:     "2 hours ago",
			input:    "-2h",
			expected: baseTime.Add(-2 * time.Hour),
		},
		{
			name:     "30 minutes ago",
			input:    "-30m",
			expected: baseTime.Add(-30 * time.Minute),
		},
		{
			name:     "5 seconds ago",
			input:    "-5s",
			expected: baseTime.Add(-5 * time.Second),
		},
		{
			name:     "3 days in future",
			input:    "3d",
			expected: baseTime.Add(3 * 24 * time.Hour),
		},
		{
			name:     "1 hour in future",
			input:    "1h",
			expected: baseTime.Add(1 * time.Hour),
		},
		{
			name:        "empty string",
			input:       "",
			shouldError: true,
		},
		{
			name:        "invalid format",
			input:       "invalid",
			shouldError: true,
		},
		{
			name:        "invalid unit",
			input:       "3x",
			shouldError: true,
		},
		{
			name:        "no number",
			input:       "-d",
			shouldError: true,
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			result, err := parseRelativeTime(tt.input, baseTime)

			if tt.shouldError {
				if err == nil {
					t.Errorf("parseRelativeTime(%q) expected error, got nil", tt.input)
				}
				return
			}

			if err != nil {
				t.Errorf("parseRelativeTime(%q) unexpected error: %v", tt.input, err)
				return
			}

			if !result.Equal(tt.expected) {
				t.Errorf("parseRelativeTime(%q) = %v, expected %v", tt.input, result, tt.expected)
			}
		})
	}
}

func TestParseRelativeTimeEdgeCases(t *testing.T) {
	baseTime := time.Date(2025, 8, 4, 12, 0, 0, 0, time.UTC)

	// Test large values
	result, err := parseRelativeTime("365d", baseTime)
	if err != nil {
		t.Errorf("parseRelativeTime('365d') unexpected error: %v", err)
	}
	expected := baseTime.Add(365 * 24 * time.Hour)
	if !result.Equal(expected) {
		t.Errorf("parseRelativeTime('365d') = %v, expected %v", result, expected)
	}

	// Test zero values
	result, err = parseRelativeTime("0s", baseTime)
	if err != nil {
		t.Errorf("parseRelativeTime('0s') unexpected error: %v", err)
	}
	if !result.Equal(baseTime) {
		t.Errorf("parseRelativeTime('0s') = %v, expected %v", result, baseTime)
	}
}
@@ -107,6 +107,7 @@ func (srv *Server) fetchGraph(ctx context.Context, serverIP string) (string, []b

	client := retryablehttp.NewClient()
	client.Logger = log

	client.HTTPClient.Transport = otelhttp.NewTransport(
		client.HTTPClient.Transport,
		otelhttp.WithClientTrace(func(ctx context.Context) *httptrace.ClientTrace {
@@ -3,7 +3,6 @@ package server
import (
	"bytes"
	"context"
	"database/sql"
	"encoding/csv"
	"errors"
	"fmt"
@@ -15,6 +14,8 @@ import (
	"strings"
	"time"

	"github.com/jackc/pgx/v5"
	"github.com/jackc/pgx/v5/pgtype"
	"github.com/labstack/echo/v4"
	"go.ntppool.org/common/logger"
	"go.ntppool.org/common/tracing"
@@ -22,6 +23,23 @@ import (
	"go.ntppool.org/data-api/ntpdb"
)

// sanitizeForCSV removes or replaces problematic characters for CSV output
func sanitizeForCSV(s string) string {
	// Replace NULL bytes and other control characters with a placeholder
	var result strings.Builder
	for _, r := range s {
		switch {
		case r == 0: // NULL byte
			result.WriteString("<NULL>")
		case r < 32 && r != '\t' && r != '\n' && r != '\r': // Other control chars except tab/newline/carriage return
			result.WriteString(fmt.Sprintf("<0x%02X>", r))
		default:
			result.WriteRune(r)
		}
	}
	return result.String()
}
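For example, an error string containing a NUL byte or an escape character is rewritten to printable placeholders before being handed to the CSV writer. A standalone demonstration (the function body is copied verbatim from the diff above so the example runs on its own; the sample input string is made up):

```go
package main

import (
	"fmt"
	"strings"
)

// sanitizeForCSV: copied from the hunk above so this example is self-contained.
func sanitizeForCSV(s string) string {
	var result strings.Builder
	for _, r := range s {
		switch {
		case r == 0: // NULL byte
			result.WriteString("<NULL>")
		case r < 32 && r != '\t' && r != '\n' && r != '\r': // other control chars
			result.WriteString(fmt.Sprintf("<0x%02X>", r))
		default:
			result.WriteRune(r)
		}
	}
	return result.String()
}

func main() {
	// NUL and ESC bytes become visible placeholders; tabs and newlines pass through.
	fmt.Println(sanitizeForCSV("timeout\x00after\x1bretry")) // timeout<NULL>after<0x1B>retry
}
```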

type historyMode uint8

const (
@@ -46,13 +64,13 @@ func paramHistoryMode(s string) historyMode {

type historyParameters struct {
	limit       int
	monitorID   int
	monitorID   int64
	server      ntpdb.Server
	since       time.Time
	fullHistory bool
}

func (srv *Server) getHistoryParameters(ctx context.Context, c echo.Context) (historyParameters, error) {
func (srv *Server) getHistoryParameters(ctx context.Context, c echo.Context, server ntpdb.Server) (historyParameters, error) {
	log := logger.FromContext(ctx)

	p := historyParameters{}
@@ -73,21 +91,30 @@ func (srv *Server) getHistoryParameters(ctx context.Context, c echo.Context) (hi

	monitorParam := c.QueryParam("monitor")

	var monitorID uint32 = 0
	var monitorID int64
	switch monitorParam {
	case "":
		name := "recentmedian.scores.ntp.dev"
		monitor, err := q.GetMonitorByName(ctx, sql.NullString{Valid: true, String: name})
		var ipVersion ntpdb.NullMonitorsIpVersion
		if server.IpVersion == ntpdb.ServersIpVersionV4 {
			ipVersion = ntpdb.NullMonitorsIpVersion{MonitorsIpVersion: ntpdb.MonitorsIpVersionV4, Valid: true}
		} else {
			ipVersion = ntpdb.NullMonitorsIpVersion{MonitorsIpVersion: ntpdb.MonitorsIpVersionV6, Valid: true}
		}
		monitor, err := q.GetMonitorByNameAndIPVersion(ctx, ntpdb.GetMonitorByNameAndIPVersionParams{
			TlsName:   pgtype.Text{Valid: true, String: name},
			IpVersion: ipVersion,
		})
		if err != nil {
			log.Warn("could not find monitor", "name", name, "err", err)
			log.Warn("could not find monitor", "name", name, "ip_version", server.IpVersion, "err", err)
		}
		monitorID = monitor.ID
	case "*":
		monitorID = 0 // don't filter on monitor ID
	default:
		mID, err := strconv.ParseUint(monitorParam, 10, 32)
		mID, err := strconv.ParseInt(monitorParam, 10, 64)
		if err == nil {
			monitorID = uint32(mID)
			monitorID = mID
		} else {
			// only accept the name prefix; no wildcards; trust the database
			// to filter out any other crazy
@@ -96,12 +123,21 @@ func (srv *Server) getHistoryParameters(ctx context.Context, c echo.Context) (hi
			}

			monitorParam = monitorParam + ".%"
			monitor, err := q.GetMonitorByName(ctx, sql.NullString{Valid: true, String: monitorParam})
			var ipVersion ntpdb.NullMonitorsIpVersion
			if server.IpVersion == ntpdb.ServersIpVersionV4 {
				ipVersion = ntpdb.NullMonitorsIpVersion{MonitorsIpVersion: ntpdb.MonitorsIpVersionV4, Valid: true}
			} else {
				ipVersion = ntpdb.NullMonitorsIpVersion{MonitorsIpVersion: ntpdb.MonitorsIpVersionV6, Valid: true}
			}
			monitor, err := q.GetMonitorByNameAndIPVersion(ctx, ntpdb.GetMonitorByNameAndIPVersionParams{
				TlsName:   pgtype.Text{Valid: true, String: monitorParam},
				IpVersion: ipVersion,
			})
			if err != nil {
				if err == sql.ErrNoRows {
				if errors.Is(err, pgx.ErrNoRows) {
					return p, echo.NewHTTPError(http.StatusNotFound, "monitor not found").WithInternal(err)
				}
				log.WarnContext(ctx, "could not find monitor", "name", monitorParam, "err", err)
				log.WarnContext(ctx, "could not find monitor", "name", monitorParam, "ip_version", server.IpVersion, "err", err)
				return p, echo.NewHTTPError(http.StatusNotFound, "monitor not found (sql)")
			}
			monitorID = monitor.ID
@@ -109,8 +145,8 @@ func (srv *Server) getHistoryParameters(ctx context.Context, c echo.Context) (hi
		}
	}

	p.monitorID = int(monitorID)
	log.DebugContext(ctx, "monitor param", "monitor", monitorID)
	p.monitorID = monitorID
	log.DebugContext(ctx, "monitor param", "monitor", monitorID, "ip_version", server.IpVersion)

	since, _ := strconv.ParseInt(c.QueryParam("since"), 10, 64) // defaults to 0 so don't care if it parses
	if since > 0 {
@@ -135,8 +171,8 @@ func (srv *Server) getHistoryParameters(ctx context.Context, c echo.Context) (hi
	return p, nil
}

func (srv *Server) getHistoryMySQL(ctx context.Context, _ echo.Context, p historyParameters) (*logscores.LogScoreHistory, error) {
	ls, err := logscores.GetHistoryMySQL(ctx, srv.db, p.server.ID, uint32(p.monitorID), p.since, p.limit)
func (srv *Server) getHistoryPostgres(ctx context.Context, _ echo.Context, p historyParameters) (*logscores.LogScoreHistory, error) {
	ls, err := logscores.GetHistoryPostgres(ctx, srv.db, p.server.ID, p.monitorID, p.since, p.limit)
	return ls, err
}

@@ -145,7 +181,8 @@ func (srv *Server) history(c echo.Context) error {
	ctx, span := tracing.Tracer().Start(c.Request().Context(), "history")
	defer span.End()

	// just cache for a short time by default
	// set a reasonable default cache time; adjusted later for
	// happy path common responses
	c.Response().Header().Set("Cache-Control", "public,max-age=240")

	mode := paramHistoryMode(c.Param("mode"))
@@ -153,16 +190,6 @@ func (srv *Server) history(c echo.Context) error {
		return echo.NewHTTPError(http.StatusNotFound, "invalid mode")
	}

	p, err := srv.getHistoryParameters(ctx, c)
	if err != nil {
		if he, ok := err.(*echo.HTTPError); ok {
			return he
		}
		log.ErrorContext(ctx, "get history parameters", "err", err)
		span.RecordError(err)
		return echo.NewHTTPError(http.StatusInternalServerError, "internal error")
	}

	server, err := srv.FindServer(ctx, c.Param("server"))
	if err != nil {
		log.ErrorContext(ctx, "find server", "err", err)
@@ -181,6 +208,16 @@ func (srv *Server) history(c echo.Context) error {
		return echo.NewHTTPError(http.StatusNotFound, "server not found")
	}

	p, err := srv.getHistoryParameters(ctx, c, server)
	if err != nil {
		if he, ok := err.(*echo.HTTPError); ok {
			return he
		}
		log.ErrorContext(ctx, "get history parameters", "err", err)
		span.RecordError(err)
		return echo.NewHTTPError(http.StatusInternalServerError, "internal error")
	}

	p.server = server

	var history *logscores.LogScoreHistory
@@ -194,9 +231,9 @@ func (srv *Server) history(c echo.Context) error {
	}

	if sourceParam == "m" {
		history, err = srv.getHistoryMySQL(ctx, c, p)
		history, err = srv.getHistoryPostgres(ctx, c, p)
	} else {
		history, err = logscores.GetHistoryClickHouse(ctx, srv.ch, srv.db, p.server.ID, uint32(p.monitorID), p.since, p.limit, p.fullHistory)
		history, err = logscores.GetHistoryClickHouse(ctx, srv.ch, srv.db, p.server.ID, p.monitorID, p.since, p.limit, p.fullHistory)
	}
	if err != nil {
		var httpError *echo.HTTPError
@@ -223,7 +260,6 @@ func (srv *Server) history(c echo.Context) error {
	default:
		return c.String(http.StatusNotFound, "not implemented")
	}

}

func (srv *Server) historyJSON(ctx context.Context, c echo.Context, server ntpdb.Server, history *logscores.LogScoreHistory) error {
@@ -237,15 +273,17 @@ func (srv *Server) historyJSON(ctx context.Context, c echo.Context, server ntpdb
		Step      float64  `json:"step"`
		Score     float64  `json:"score"`
		MonitorID int      `json:"monitor_id"`
		Rtt       *float64 `json:"rtt,omitempty"`
	}

	type MonitorEntry struct {
		ID     uint32  `json:"id"`
		Name   string  `json:"name"`
		Type   string  `json:"type"`
		Ts     string  `json:"ts"`
		Score  float64 `json:"score"`
		Status string  `json:"status"`
		ID     int64    `json:"id"`
		Name   string   `json:"name"`
		Type   string   `json:"type"`
		Ts     string   `json:"ts"`
		Score  float64  `json:"score"`
		Status string   `json:"status"`
		AvgRtt *float64 `json:"avg_rtt,omitempty"`
	}
	res := struct {
		History []ScoresEntry `json:"history"`
@@ -260,9 +298,9 @@ func (srv *Server) historyJSON(ctx context.Context, c echo.Context, server ntpdb

	// log.InfoContext(ctx, "monitor id list", "ids", history.MonitorIDs)

	monitorIDs := []uint32{}
	monitorIDs := []int64{}
	for k := range history.Monitors {
		monitorIDs = append(monitorIDs, uint32(k))
		monitorIDs = append(monitorIDs, int64(k))
	}

	q := ntpdb.NewWrappedQuerier(ntpdb.New(srv.db))
@@ -280,11 +318,23 @@ func (srv *Server) historyJSON(ctx context.Context, c echo.Context, server ntpdb

	// log.InfoContext(ctx, "got logScoreMonitors", "count", len(logScoreMonitors))

	// Calculate average RTT per monitor
	monitorRttSums := make(map[int64]float64)
	monitorRttCounts := make(map[int64]int)

	for _, ls := range history.LogScores {
		if ls.MonitorID.Valid && ls.Rtt.Valid {
			monitorID := ls.MonitorID.Int64
			monitorRttSums[monitorID] += float64(ls.Rtt.Int32) / 1000.0
			monitorRttCounts[monitorID]++
		}
	}

	for _, lsm := range logScoreMonitors {
		score := math.Round(lsm.ScoreRaw*10) / 10 // round to one decimal

		tempMon := ntpdb.Monitor{
			Name: lsm.Name,
			// Hostname: lsm.Hostname,
			TlsName:  lsm.TlsName,
			Location: lsm.Location,
			ID:       lsm.ID,
@@ -299,6 +349,13 @@ func (srv *Server) historyJSON(ctx context.Context, c echo.Context, server ntpdb
			Score:  score,
			Status: string(lsm.Status),
		}

		// Add average RTT if available
		if count, exists := monitorRttCounts[lsm.ID]; exists && count > 0 {
			avgRtt := monitorRttSums[lsm.ID] / float64(count)
			me.AvgRtt = &avgRtt
		}

		res.Monitors = append(res.Monitors, me)
	}

@@ -306,8 +363,8 @@ func (srv *Server) historyJSON(ctx context.Context, c echo.Context, server ntpdb
		x := float64(1000000000000)
		score := math.Round(ls.Score*x) / x
		res.History[i] = ScoresEntry{
			TS:        ls.Ts.Unix(),
			MonitorID: int(ls.MonitorID.Int32),
			TS:        ls.Ts.Time.Unix(),
			MonitorID: int(ls.MonitorID.Int64),
			Step:      ls.Step,
			Score:     score,
		}
@@ -315,23 +372,22 @@ func (srv *Server) historyJSON(ctx context.Context, c echo.Context, server ntpdb
			offset := ls.Offset.Float64
			res.History[i].Offset = &offset
		}
		if ls.Rtt.Valid {
			rtt := float64(ls.Rtt.Int32) / 1000.0
			res.History[i].Rtt = &rtt
		}
	}

	if len(history.LogScores) == 0 ||
		history.LogScores[len(history.LogScores)-1].Ts.After(time.Now().Add(-8*time.Hour)) {
		// cache for longer if data hasn't updated for a while
		c.Request().Header.Set("Cache-Control", "s-maxage=3600,max-age=1800")
	} else {
		c.Request().Header.Set("Cache-Control", "s-maxage=300,max-age=240")
	}
	setHistoryCacheControl(c, history)

	return c.JSON(http.StatusOK, res)

}

func (srv *Server) historyCSV(ctx context.Context, c echo.Context, history *logscores.LogScoreHistory) error {
	log := logger.Setup()
	ctx, span := tracing.Tracer().Start(ctx, "history.csv")
	defer span.End()

	b := bytes.NewBuffer([]byte{})
	w := csv.NewWriter(b)

@@ -342,7 +398,11 @@ func (srv *Server) historyCSV(ctx context.Context, c echo.Context, history *logs
		return s
	}

	w.Write([]string{"ts_epoch", "ts", "offset", "step", "score", "monitor_id", "monitor_name", "leap", "error"})
	err := w.Write([]string{"ts_epoch", "ts", "offset", "step", "score", "monitor_id", "monitor_name", "rtt", "leap", "error"})
	if err != nil {
		log.ErrorContext(ctx, "could not write csv header", "err", err)
		return err
	}
	for _, l := range history.LogScores {
		// log.Debug("csv line", "id", l.ID, "n", i)

@@ -355,24 +415,30 @@ func (srv *Server) historyCSV(ctx context.Context, c echo.Context, history *logs
		score := ff(l.Score)
		var monName string
		if l.MonitorID.Valid {
			monName = history.Monitors[int(l.MonitorID.Int32)]
			monName = history.Monitors[int(l.MonitorID.Int64)]
		}
		var leap string
		if l.Attributes.Leap != 0 {
			leap = fmt.Sprintf("%d", l.Attributes.Leap)
		}

		var rtt string
		if l.Rtt.Valid {
			rtt = ff(float64(l.Rtt.Int32) / 1000.0)
		}

		err := w.Write([]string{
			strconv.Itoa(int(l.Ts.Unix())),
			strconv.Itoa(int(l.Ts.Time.Unix())),
			// l.Ts.Format(time.RFC3339),
			l.Ts.Format("2006-01-02 15:04:05"),
			l.Ts.Time.Format("2006-01-02 15:04:05"),
			offset,
			step,
			score,
			fmt.Sprintf("%d", l.MonitorID.Int32),
			fmt.Sprintf("%d", l.MonitorID.Int64),
			monName,
			rtt,
			leap,
			l.Attributes.Error,
			sanitizeForCSV(l.Attributes.Error),
		})
		if err != nil {
			log.Warn("csv encoding error", "ls_id", l.ID, "err", err)
@@ -381,16 +447,31 @@ func (srv *Server) historyCSV(ctx context.Context, c echo.Context, history *logs
	w.Flush()
	if err := w.Error(); err != nil {
		log.ErrorContext(ctx, "could not flush csv", "err", err)
		span.End()
		return c.String(http.StatusInternalServerError, "csv error")
	}

	// log.Info("entries", "count", len(history.LogScores), "out_bytes", b.Len())

	c.Response().Header().Set("Cache-Control", "s-maxage=150,max-age=120")
	setHistoryCacheControl(c, history)

	c.Response().Header().Set("Content-Disposition", "inline")
	// Chrome and Firefox force-download text/csv files, so use text/plain
	// https://bugs.chromium.org/p/chromium/issues/detail?id=152911
	return c.Blob(http.StatusOK, "text/plain", b.Bytes())

}

func setHistoryCacheControl(c echo.Context, history *logscores.LogScoreHistory) {
	hdr := c.Response().Header()
	if len(history.LogScores) == 0 ||
		// cache for longer if data hasn't updated for a while; or we didn't
		// find any.
		(time.Now().Add(-8 * time.Hour).After(history.LogScores[len(history.LogScores)-1].Ts.Time)) {
		hdr.Set("Cache-Control", "s-maxage=260,max-age=360")
	} else {
		if len(history.LogScores) == 1 {
			hdr.Set("Cache-Control", "s-maxage=60,max-age=35")
		} else {
			hdr.Set("Cache-Control", "s-maxage=90,max-age=120")
		}
	}
}

@@ -2,17 +2,16 @@ package server

import (
	"context"
	"database/sql"
	"errors"
	"fmt"
	"log/slog"
	"net/http"
	"os"
	"strconv"
	"time"

	"golang.org/x/sync/errgroup"

	"github.com/jackc/pgx/v5/pgxpool"
	"github.com/labstack/echo-contrib/echoprometheus"
	"github.com/labstack/echo/v4"
	"github.com/labstack/echo/v4/middleware"
@@ -36,7 +35,7 @@ import (
)

type Server struct {
	db     *sql.DB
	db     *pgxpool.Pool
	ch     *chdb.ClickHouse
	config *config.Config

@@ -53,9 +52,9 @@ func NewServer(ctx context.Context, configFile string) (*Server, error) {
	if err != nil {
		return nil, fmt.Errorf("clickhouse open: %w", err)
	}
	db, err := ntpdb.OpenDB(configFile)
	db, err := ntpdb.OpenDB(ctx, configFile)
	if err != nil {
		return nil, fmt.Errorf("mysql open: %w", err)
		return nil, fmt.Errorf("postgres open: %w", err)
	}

	conf := config.New()
@@ -76,7 +75,7 @@ func NewServer(ctx context.Context, configFile string) (*Server, error) {
		Environment: conf.DeploymentMode(),
	})
	if err != nil {
		return nil, err
		return nil, fmt.Errorf("tracing init: %w", err)
	}

	srv.tpShutdown = append(srv.tpShutdown, tpShutdown)
@@ -179,7 +178,7 @@ func (srv *Server) Run() error {

	e.Use(middleware.CORSWithConfig(middleware.CORSConfig{
		AllowOrigins: []string{
			"http://localhost", "http://localhost:5173", "http://localhost:8080",
			"http://localhost", "http://localhost:5173", "http://localhost:5174", "http://localhost:8080",
			"https://www.ntppool.org", "https://*.ntppool.org",
			"https://web.beta.grundclock.com", "https://manage.beta.grundclock.com",
			"https:/*.askdev.grundclock.com",
@@ -207,6 +206,9 @@ func (srv *Server) Run() error {
	e.GET("/api/usercc", srv.userCountryData)
	e.GET("/api/server/dns/answers/:server", srv.dnsAnswers)
	e.GET("/api/server/scores/:server/:mode", srv.history)
	e.GET("/api/dns/counts", srv.dnsQueryCounts)
	e.GET("/api/v2/test/grafana-table", srv.testGrafanaTable)
	e.GET("/api/v2/server/scores/:server/:mode", srv.scoresTimeRange)

	if len(ntpconf.WebHostname()) > 0 {
		e.POST("/api/server/scores/:server/:mode", func(c echo.Context) error {
@@ -261,7 +263,7 @@ func (srv *Server) userCountryData(c echo.Context) error {
		log.InfoContext(ctx, "didn't get zoneStats")
	}

	data, err := srv.ch.UserCountryData(c.Request().Context())
	data, err := srv.ch.UserCountryData(ctx)
	if err != nil {
		log.ErrorContext(ctx, "UserCountryData", "err", err)
		return c.String(http.StatusInternalServerError, err.Error())
@@ -274,41 +276,57 @@ func (srv *Server) userCountryData(c echo.Context) error {
		UserCountry: data,
		ZoneStats:   zoneStats,
	})
}

func (srv *Server) dnsQueryCounts(c echo.Context) error {
	log := logger.Setup()
	ctx, span := tracing.Tracer().Start(c.Request().Context(), "dnsQueryCounts")
	defer span.End()

	data, err := srv.ch.DNSQueries(ctx)
	if err != nil {
		log.ErrorContext(ctx, "dnsQueryCounts", "err", err)
		return c.String(http.StatusInternalServerError, err.Error())
	}

	hdr := c.Response().Header()
	hdr.Set("Cache-Control", "s-maxage=30,max-age=60")

	return c.JSON(http.StatusOK, data)
}

func healthHandler(srv *Server, log *slog.Logger) func(w http.ResponseWriter, req *http.Request) {

	return func(w http.ResponseWriter, req *http.Request) {

		ctx := req.Context()
		ctx, cancel := context.WithTimeout(ctx, 5*time.Second)
		defer cancel()
		g, ctx := errgroup.WithContext(ctx)

		stats := srv.db.Stats()
		log.InfoContext(ctx, "health requests", "url", req.URL.String(), "stats", stats)

		reset, err := strconv.ParseBool(req.URL.Query().Get("reset"))
		log.InfoContext(ctx, "db reset request", "err", err, "reset", reset)

		if err == nil && reset {
			log.InfoContext(ctx, "setting idle db conns to zero")
			srv.db.SetMaxIdleConns(0)
			srv.db.SetConnMaxLifetime(5 * time.Second)
		stats := srv.db.Stat()
		if stats.TotalConns() > 3 {
			log.InfoContext(ctx, "health requests", "url", req.URL.String(), "total_conns", stats.TotalConns())
		}

		g.Go(func() error {
			err := srv.ch.Scores.Ping(ctx)
			if err != nil {
				log.WarnContext(ctx, "ch ping", "err", err)
				log.WarnContext(ctx, "ch scores ping", "err", err)
				return err
			}
			return nil
		})

		g.Go(func() error {
			err := srv.db.PingContext(ctx)
			err := srv.ch.Logs.Ping(ctx)
			if err != nil {
				log.WarnContext(ctx, "ch logs ping", "err", err)
				return err
			}
			return nil
		})

		g.Go(func() error {
			err := srv.db.Ping(ctx)
			if err != nil {
				log.WarnContext(ctx, "db ping", "err", err)
				return err
@@ -316,13 +334,19 @@ func healthHandler(srv *Server, log *slog.Logger) func(w http.ResponseWriter, re
|
||||
return nil
|
||||
})
|
||||
|
||||
err = g.Wait()
|
||||
err := g.Wait()
|
||||
if err != nil {
|
||||
w.WriteHeader(http.StatusServiceUnavailable)
|
||||
w.Write([]byte("db ping err"))
|
||||
_, err = w.Write([]byte("db ping err"))
|
||||
if err != nil {
|
||||
log.ErrorContext(ctx, "could not write response", "err", err)
|
||||
}
|
||||
return
|
||||
}
|
||||
w.WriteHeader(http.StatusOK)
|
||||
w.Write([]byte("ok"))
|
||||
_, err = w.Write([]byte("ok"))
|
||||
if err != nil {
|
||||
log.ErrorContext(ctx, "could not write response", "err", err)
|
||||
}
|
||||
}
|
||||
}
|
@@ -1,12 +1,12 @@
package server

import (
-	"database/sql"
	"errors"
	"net/http"
	"strconv"
	"time"

+	"github.com/jackc/pgx/v5"
	"github.com/labstack/echo/v4"
	"go.ntppool.org/common/logger"
	"go.ntppool.org/common/tracing"
@@ -27,7 +27,7 @@ func (srv *Server) zoneCounts(c echo.Context) error {

	zone, err := q.GetZoneByName(ctx, c.Param("zone_name"))
	if err != nil || zone.ID == 0 {
-		if errors.Is(err, sql.ErrNoRows) {
+		if errors.Is(err, pgx.ErrNoRows) {
			return c.String(http.StatusNotFound, "Not found")
		}
		log.ErrorContext(ctx, "could not query for zone", "err", err)
@@ -37,7 +37,7 @@ func (srv *Server) zoneCounts(c echo.Context) error {

	counts, err := q.GetZoneCounts(ctx, zone.ID)
	if err != nil {
-		if !errors.Is(err, sql.ErrNoRows) {
+		if !errors.Is(err, pgx.ErrNoRows) {
			log.ErrorContext(ctx, "get counts", "err", err)
			span.RecordError(err)
			return c.String(http.StatusInternalServerError, "internal error")
@@ -71,7 +71,7 @@ func (srv *Server) zoneCounts(c echo.Context) error {
	count := 0
	dates := map[int64]bool{}
	for _, c := range counts {
-		ep := c.Date.Unix()
+		ep := c.Date.Time.Unix()
		if _, ok := dates[ep]; !ok {
			count++
			dates[ep] = true
@@ -84,7 +84,6 @@ func (srv *Server) zoneCounts(c echo.Context) error {
	} else {
		// skip everything and use the special logic that we always include the most recent date
		skipCount = float64(count) + 1
-
	}
}
@@ -100,13 +99,13 @@ func (srv *Server) zoneCounts(c echo.Context) error {
	lastSkip := int64(0)
	skipThreshold := 0.5
	for _, c := range counts {
-		cDate := c.Date.Unix()
+		cDate := c.Date.Time.Unix()
		if (toSkip <= skipThreshold && cDate != lastSkip) ||
			lastDate == cDate ||
			mostRecentDate == cDate {
-			// log.Info("adding date", "date", c.Date.Format(time.DateOnly))
+			// log.Info("adding date", "date", c.Date.Time.Format(time.DateOnly))
			rv.History = append(rv.History, historyEntry{
-				D:  c.Date.Format(time.DateOnly),
+				D:  c.Date.Time.Format(time.DateOnly),
				Ts: int(cDate),
				Ac: int(c.CountActive),
				Rc: int(c.CountRegistered),
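The skip logic above downsamples the per-day counts so the JSON history stays bounded while the most recent date is always included. A rough sketch of that intent (the function and accumulator here are invented for illustration; the real loop tracks `toSkip`/`lastSkip` against a fractional `skipCount` and differs in detail):

```go
package main

import "fmt"

// thin keeps roughly budget evenly spaced entries from dates and
// always keeps the last (most recent) entry, mirroring the intent
// of the zoneCounts history-thinning loop.
func thin(dates []string, budget int) []string {
	if len(dates) <= budget {
		return dates
	}
	step := float64(len(dates)) / float64(budget)
	out := make([]string, 0, budget+1)
	next := 0.0
	for i, d := range dates {
		if float64(i) >= next {
			out = append(out, d)
			next += step
		} else if i == len(dates)-1 {
			// special case: the most recent date is always included
			out = append(out, d)
		}
	}
	return out
}

func main() {
	dates := []string{"d1", "d2", "d3", "d4", "d5", "d6", "d7", "d8"}
	fmt.Println(thin(dates, 4)) // [d1 d3 d5 d7 d8]
}
```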
@@ -144,5 +143,4 @@ func (srv *Server) zoneCounts(c echo.Context) error {

	c.Response().Header().Set("Cache-Control", "s-maxage=28800, max-age=7200")
	return c.JSON(http.StatusOK, rv)
-
}
sqlc.yaml
@@ -2,20 +2,25 @@ version: "2"
sql:
  - schema: "schema.sql"
    queries: "query.sql"
-   engine: "mysql"
+   engine: "postgresql"
    strict_order_by: false
    gen:
      go:
        package: "ntpdb"
        out: "ntpdb"
+       sql_package: "pgx/v5"
        emit_json_tags: true
        emit_db_tags: true
        omit_unused_structs: true
        emit_interface: true
        # emit_all_enum_values: true
        rename:
          servers.Ip: IP
        overrides:
          - column: log_scores.attributes
            go_type: go.ntppool.org/common/types.LogScoreAttributes
          - column: "server_netspeed.netspeed_active"
-           go_type: "uint64"
+           go_type: "int"
+         - column: "zone_server_counts.netspeed_active"
+           go_type: "int"
+         - db_type: "bigint"
+           go_type: "int"