35 Commits

Author SHA1 Message Date
da13a371b4 feat(database): add shared transaction helpers
Add transaction base utilities with Begin, Commit, and Rollback
functions supporting both sql.DB and sql.Tx interfaces.
2025-07-12 23:52:48 -07:00
a1a5a6b8be database: create shared database package
Extract common database functionality from api/ntpdb and monitor/ntpdb
into shared common/database package:

- Dynamic connector pattern with configuration loading
- Configurable connection pool management (API: 25/10, Monitor: 10/5)
- Optional Prometheus metrics integration
- Generic transaction helpers with proper error handling
- Unified interfaces compatible with SQLC-generated code

Foundation for migration to eliminate ~200 lines of duplicate code.
2025-07-12 17:59:28 -07:00
96afb77844 database: create shared database package with configurable patterns
Extract ~200 lines of duplicate database connection code from api/ntpdb/
and monitor/ntpdb/ into common/database/ package. Creates foundation for
database consolidation while maintaining zero breaking changes.

Files added:
- config.go: Unified configuration with package-specific defaults
- connector.go: Dynamic connector pattern from Boostport
- pool.go: Configurable connection pool management
- metrics.go: Optional Prometheus metrics integration
- interfaces.go: Shared database interfaces for consistent patterns

Key features:
- Configuration-driven approach (API: 25/10 connections + metrics,
  Monitor: 10/5 connections, no metrics)
- Optional Prometheus metrics when registerer provided
- Backward compatibility via convenience functions
- Flexible config file loading (explicit paths + search-based)

Dependencies: Added mysql driver and yaml parsing for database configuration.
2025-07-12 16:54:24 -07:00
c372d79d1d build: goreleaser 2.11.0 and download script tweaks 2025-07-12 16:51:10 -07:00
b5141d6a70 Add database transaction helpers 2025-07-12 13:57:27 -07:00
694f8ba1d3 Add comprehensive godoc documentation to all packages
- Add package-level documentation with usage examples and architecture details
- Document all public types, functions, and methods following godoc conventions
- Remove unused logger.Error type and NewError function
- Apply consistent documentation style across all packages

Packages updated:
- apitls: TLS certificate management with automatic renewal
- config: Environment-based configuration system
- config/depenv: Deployment environment handling
- ekko: Enhanced Echo web framework wrapper
- kafka: Kafka client wrapper with TLS support
- logger: Structured logging with OpenTelemetry integration
- tracing: OpenTelemetry distributed tracing setup
- types: Shared data structures for NTP Pool project
- xff/fastlyxff: Fastly CDN IP range management

All tests pass after documentation changes.
2025-06-19 23:52:03 -07:00
09b52f92d7 version: add documentation and tests 2025-06-06 20:19:08 -07:00
785abdec8d ulid: simplify, add function without a timestamp 2025-06-06 20:02:23 -07:00
ce203a4618 Add README 2025-06-06 19:56:43 -07:00
3c994a7343 Add copilot/claude instructions 2025-06-06 19:50:30 -07:00
f69c3e9c3c ulid: add documentation and more tests 2025-06-06 19:31:28 -07:00
fac5b1f275 metrics: add tests and documentation 2025-06-06 19:24:30 -07:00
a37559b93e health: add documentation 2025-06-06 19:16:14 -07:00
faac09ac0c timeutil: Add documentation 2025-06-06 19:08:16 -07:00
62a7605869 config: add depenv.MonitorDomain() and config.ManageURL() methods 2025-04-19 23:07:08 -07:00
0996167865 modernize + gofumpt 2025-04-19 22:19:02 -07:00
87344dd601 version: KongVersionCmd type 2025-04-12 00:24:19 -07:00
39e6611602 build: update goreleaser 2025-04-12 00:23:33 -07:00
355d246010 depenv: implement UnmarshalText 2025-04-12 00:22:57 -07:00
e5836a8b97 depenv: ntppool configuration for deployment environments 2025-01-26 11:08:44 -08:00
f6d160a7f8 health: fix shutdown of health check server 2025-01-03 14:01:52 +01:00
9e2d6fb74e Update dependencies 2024-12-27 18:39:48 -08:00
0df1154bb5 Update goreleaser to 2.5.0 2024-12-21 08:55:17 -08:00
b926a85737 ekko: gzip config option 2024-12-01 16:45:49 -08:00
68bd4d8904 ekko: configurable read write and readheader timeouts 2024-11-26 01:04:34 -08:00
152be9d956 logger: otlp support 2024-11-09 10:59:11 +00:00
ab94adb925 tracing: setup log provider 2024-11-09 10:19:16 +00:00
ddb56b3566 ekko: Add WithLogFilters option 2024-10-12 11:39:16 -07:00
4367ef9c29 Add Fatalf to standard logger-ish 2024-10-12 11:11:50 -07:00
d6a77f4003 ekko: add gzip, move recover middleware to run early 2024-09-21 00:53:10 -07:00
3f3fb29bc9 ekko: helper to setup labstack echo with logging, tracing, etc 2024-09-20 21:47:10 -07:00
8e898d9c59 tracing: refactor code, support more exporters with default environment configuration 2024-09-14 00:47:07 -07:00
1ecd5684e6 version: Add CheckVersion() function 2024-08-18 18:11:17 -07:00
59580b50ba scripts: update goreleaser 2024-07-07 13:05:06 -07:00
9a86b2aaf5 tracing: semconv v1.26.0 2024-07-06 13:04:48 -07:00
46 changed files with 3611 additions and 416 deletions

1
.github/copilot-instructions.md vendored Symbolic link
View File

@@ -0,0 +1 @@
../CLAUDE.md

163
CLAUDE.md Normal file
View File

@@ -0,0 +1,163 @@
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Commands
### Testing
- Run all tests: `go test ./...`
- Run tests with verbose output: `go test -v ./...`
- Run tests for specific package: `go test ./config`
- Run specific test: `go test -run TestConfigBool ./config`
### Building
- Build all packages: `go build ./...`
- Check module dependencies: `go mod tidy`
- Verify dependencies: `go mod verify`
### Code Quality
- Format code: `go fmt ./...`
- Vet code: `go vet ./...`
- Run static analysis: `staticcheck ./...` (if available)
## Architecture
This is a common library (`go.ntppool.org/common`) providing shared infrastructure for the NTP Pool project. The codebase emphasizes observability, security, and modern Go practices.
### Core Components
**Web Service Foundation:**
- `ekko/` - Enhanced Echo web framework with pre-configured middleware (OpenTelemetry, Prometheus, logging, security headers)
- `health/` - Standalone health check HTTP server with `/__health` endpoint
- `metricsserver/` - Prometheus metrics exposure via `/metrics` endpoint
**Observability Stack:**
- `logger/` - Structured logging with OpenTelemetry trace integration and multiple output formats
- `tracing/` - OpenTelemetry distributed tracing with OTLP export support
- `metricsserver/` - Prometheus metrics with custom registry
**Configuration & Environment:**
- `config/` - Environment-based configuration with code-generated accessors (`config_accessor.go`)
- `version/` - Build metadata and version information with Cobra CLI integration
**Security & Communication:**
- `apitls/` - TLS certificate management with automatic renewal via certman
- `kafka/` - Kafka client wrapper with TLS support for log streaming
- `xff/fastlyxff/` - Fastly CDN IP range management for trusted proxy handling
**Utilities:**
- `ulid/` - Thread-safe ULID generation with monotonic ordering
- `timeutil/` - JSON-serializable duration types
- `types/` - Shared data structures (LogScoreAttributes for NTP server scoring)
### Key Patterns
**Functional Options:** Used extensively in `ekko/` for flexible service configuration
**Interface-Based Design:** `CertificateProvider` in `apitls/` for pluggable certificate management
**Context Propagation:** Throughout the codebase for cancellation and tracing
**Graceful Shutdown:** Implemented in web servers and background services
### Dependencies
The codebase heavily uses:
- Echo web framework with custom middleware stack
- OpenTelemetry for observability (traces, metrics, logs)
- Prometheus for metrics collection
- Kafka for message streaming
- Cobra for CLI applications
### Code Generation
`config/config_accessor.go` is generated - modify `config.go` and regenerate accessors when adding new configuration options.
## Package Overview
### `apitls/`
TLS certificate management with automatic renewal support via certman. Provides a CA pool for trusted certificates and interfaces for pluggable certificate providers. Used for secure inter-service communication.
### `config/`
Environment-based configuration system with code-generated accessor methods. Handles deployment mode, hostname configuration, and TLS settings. Provides URL building utilities for web and management interfaces.
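A minimal sketch of the URL helpers (hostnames and TLS settings come from the environment; the path and query values here are only examples):

```go
package main

import (
	"fmt"
	"net/url"

	"go.ntppool.org/common/config"
)

func main() {
	cfg := config.New() // reads deployment_mode, web_hostname, web_tls, ...

	// Primary web hostname, scheme chosen from the web_tls setting.
	fmt.Println(cfg.WebURL("/scores", nil))

	// Management interface URL with query parameters.
	q := url.Values{}
	q.Set("server", "203.0.113.1")
	fmt.Println(cfg.ManageURL("/manage/servers", &q))
}
```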
### `ekko/`
Enhanced Echo web framework wrapper with pre-configured middleware stack including OpenTelemetry tracing, Prometheus metrics, structured logging, gzip compression, and security headers. Supports HTTP/2 with graceful shutdown.
### `health/`
Standalone HTTP health check server that runs independently from the main application. Exposes `/__health` endpoint with configurable health handlers, timeouts, and graceful shutdown capabilities.
### `kafka/`
Kafka client wrapper with TLS support for secure log streaming. Provides connection management, broker discovery, and reader/writer factories with compression and batching optimizations.
### `logger/`
Structured logging system with OpenTelemetry trace integration. Supports multiple output formats (text, OTLP) with configurable log levels, systemd compatibility, and context-aware logging.
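A small sketch of typical use (the key/value pairs are illustrative):

```go
package main

import (
	"context"

	"go.ntppool.org/common/logger"
)

func main() {
	log := logger.Setup() // output format and level come from the environment

	log.Info("server starting", "port", 8080)

	// Context-aware logging; when tracing is configured the trace ID is attached.
	ctx := context.Background()
	logger.FromContext(ctx).ErrorContext(ctx, "request failed", "err", "example error")
}
```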
### `metricsserver/`
Dedicated Prometheus metrics HTTP server with custom registry isolation. Exposes `/metrics` endpoint with OpenMetrics support and graceful shutdown handling.
### `timeutil/`
JSON-serializable duration types that support both string parsing ("30s", "5m") and numeric nanosecond values. Compatible with configuration files and REST APIs.
### `tracing/`
OpenTelemetry distributed tracing setup with support for OTLP export via gRPC or HTTP. Handles resource detection, propagation, and automatic instrumentation with configurable TLS.
### `types/`
Shared data structures for the NTP Pool project. Currently contains `LogScoreAttributes` for NTP server scoring with JSON and SQL database compatibility.
### `ulid/`
Thread-safe ULID (Universally Unique Lexicographically Sortable Identifier) generation using cryptographically secure randomness. Optimized for simplicity and performance in high-concurrency environments.
### `version/`
Build metadata and version information system with Git integration. Provides CLI commands for Cobra and Kong frameworks, Prometheus build info metrics, and semantic version validation.
### `xff/fastlyxff/`
Fastly CDN IP range management for trusted proxy handling. Parses Fastly's IP ranges JSON file and generates Echo framework trust options for proper client IP extraction.
## Go Development Best Practices
### Code Style
- Follow standard Go formatting (`go fmt ./...`)
- Use `go vet ./...` for static analysis
- Run `staticcheck ./...` when available
- Prefer short, descriptive variable names
- Use interfaces for testability and flexibility
### Error Handling
- Always handle errors explicitly
- Use `errors.Join()` for combining multiple errors
- Wrap errors with context using `fmt.Errorf("context: %w", err)` (see the sketch after this list)
- Return early on errors to reduce nesting
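A minimal sketch of these patterns (the function names are only illustrative):

```go
package example

import (
	"errors"
	"fmt"
	"io"
	"os"
)

// loadConfig wraps the underlying error with context and returns early.
func loadConfig(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return fmt.Errorf("loading config %q: %w", path, err)
	}
	defer f.Close()
	return nil
}

// closeAll combines multiple errors with errors.Join; returns nil when none occurred.
func closeAll(closers ...io.Closer) error {
	var errs []error
	for _, c := range closers {
		if err := c.Close(); err != nil {
			errs = append(errs, err)
		}
	}
	return errors.Join(errs...)
}
```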
### Testing
- Write table-driven tests when testing multiple scenarios
- Use `t.Helper()` in test helper functions
- Test error conditions, not just happy paths
- Use `testing.Short()` for integration tests that can be skipped
### Concurrency
- Use contexts for cancellation and timeouts
- Prefer channels for communication over shared memory
- Use `sync.Once` for one-time initialization
- Always call `defer cancel()` after `context.WithCancel()`
### Performance
- Use `sync.Pool` for frequently allocated objects
- Prefer slices over arrays for better performance
- Use `strings.Builder` for string concatenation in loops
- Profile before optimizing with `go tool pprof`
### Observability
- Use structured logging with key-value pairs
- Add OpenTelemetry spans for external calls
- Include trace IDs in error messages
- Use metrics for monitoring application health
### Dependencies
- Keep dependencies minimal and well-maintained
- Use `go mod tidy` to clean up unused dependencies
- Pin major versions to avoid breaking changes
- Prefer standard library when possible
### Security
- Never log sensitive information (passwords, tokens)
- Use `crypto/rand` for cryptographic randomness
- Validate all inputs at API boundaries
- Use TLS for all network communication

20
README.md Normal file
View File

@@ -0,0 +1,20 @@
Common library for the NTP Pool project with shared infrastructure components.
## Packages
- **apitls** - TLS setup for NTP Pool internal services with embedded CA
- **config** - NTP Pool project configuration with environment variables
- **ekko** - Enhanced Echo web framework with observability middleware
- **health** - Standalone health check HTTP server
- **kafka** - Kafka client wrapper with TLS support
- **logger** - Structured logging with OpenTelemetry integration
- **metricsserver** - Prometheus metrics HTTP server
- **timeutil** - JSON-serializable duration types
- **tracing** - OpenTelemetry distributed tracing setup
- **types** - Shared data structures for NTP Pool
- **ulid** - Thread-safe ULID generation
- **version** - Build metadata and version information
- **xff/fastlyxff** - Fastly CDN IP range management
[![Go Reference](https://pkg.go.dev/badge/go.ntppool.org/common.svg)](https://pkg.go.dev/go.ntppool.org/common)

View File

@@ -1,3 +1,14 @@
// Package apitls provides TLS certificate management with automatic renewal support.
//
// This package handles TLS certificate provisioning and management for secure
// inter-service communication within the NTP Pool project infrastructure.
// It provides both server and client certificate management through the
// CertificateProvider interface and includes a trusted CA certificate pool
// for validating certificates.
//
// The package integrates with certman for automatic certificate renewal
// and includes embedded CA certificates for establishing trust relationships
// between services.
package apitls
import (
@@ -13,11 +24,32 @@ import (
//go:embed ca.pem
var caBytes []byte
// CertificateProvider defines the interface for providing TLS certificates
// for both server and client connections. Implementations should handle
// certificate retrieval, caching, and renewal as needed.
//
// This interface supports both server-side certificate provisioning
// (via GetCertificate) and client-side certificate authentication
// (via GetClientCertificate).
type CertificateProvider interface {
// GetCertificate retrieves a server certificate based on the client hello information.
// This method is typically used in tls.Config.GetCertificate for server-side TLS.
GetCertificate(hello *tls.ClientHelloInfo) (*tls.Certificate, error)
// GetClientCertificate retrieves a client certificate for mutual TLS authentication.
// This method is used in tls.Config.GetClientCertificate for client-side TLS.
GetClientCertificate(certRequestInfo *tls.CertificateRequestInfo) (*tls.Certificate, error)
}
// CAPool returns a certificate pool containing trusted CA certificates
// for validating TLS connections within the NTP Pool infrastructure.
//
// The CA certificates are embedded in the binary and include the trusted
// certificate authorities used for inter-service communication.
// This pool should be used in tls.Config.RootCAs for client connections
// or tls.Config.ClientCAs for server connections requiring client certificates.
//
// Returns an error if the embedded CA certificates cannot be parsed or loaded.
func CAPool() (*x509.CertPool, error) {
capool := x509.NewCertPool()
if !capool.AppendCertsFromPEM(caBytes) {
@@ -30,7 +62,6 @@ func CAPool() (*x509.CertPool, error) {
// GetCertman sets up certman for the specified cert / key pair. It is
// used in the monitor-api and (for now) in the client
func GetCertman(certFile, keyFile string) (*certman.CertMan, error) {
cm, err := certman.New(certFile, keyFile)
if err != nil {
return nil, err

View File

@@ -1,5 +1,18 @@
// Package config provides environment-based configuration management for NTP Pool services.
//
// This package handles configuration loading from environment variables and provides
// utilities for constructing URLs for web and management interfaces. It supports
// deployment-specific settings including hostname configuration, TLS settings,
// and deployment modes.
//
// Configuration is loaded automatically from environment variables:
// - deployment_mode: The deployment environment (devel, production, etc.)
// - manage_hostname: Hostname for management interface
// - web_hostname: Comma-separated list of web hostnames (first is primary)
// - manage_tls: Enable TLS for management interface (yes, no, true, false)
// - web_tls: Enable TLS for web interface (yes, no, true, false)
//
// The package includes code generation for accessor methods using the accessory tool.
package config
import (
@@ -11,8 +24,11 @@ import (
"go.ntppool.org/common/logger"
)
//go:generate go tool github.com/masaushi/accessory -type Config
// Config holds environment-based configuration for NTP Pool services.
// It manages hostnames, TLS settings, and deployment modes loaded from
// environment variables. The struct includes code-generated accessor methods.
type Config struct {
deploymentMode string `accessor:"getter"`
@@ -26,6 +42,16 @@ type Config struct {
valid bool `accessor:"getter"`
}
// New creates a new Config instance by loading configuration from environment variables.
// It automatically parses hostnames, TLS settings, and deployment mode from the environment.
// The configuration is considered valid if at least one web hostname is provided.
//
// Environment variables used:
// - deployment_mode: Deployment environment identifier
// - manage_hostname: Management interface hostname
// - web_hostname: Comma-separated web hostnames (first becomes primary)
// - manage_tls: Management interface TLS setting
// - web_tls: Web interface TLS setting
func New() *Config {
c := Config{}
c.deploymentMode = os.Getenv("deployment_mode")
@@ -46,10 +72,30 @@ func New() *Config {
return &c
}
// WebURL constructs a complete URL for the web interface using the primary web hostname.
// It automatically selects HTTP or HTTPS based on the web_tls configuration setting.
//
// Parameters:
// - path: URL path component (should start with "/")
// - query: Optional URL query parameters (can be nil)
//
// Returns a complete URL string suitable for web interface requests.
func (c *Config) WebURL(path string, query *url.Values) string {
return baseURL(c.webHostname, c.webTLS, path, query)
}
// ManageURL constructs a complete URL for the management interface using the management hostname.
// It automatically selects HTTP or HTTPS based on the manage_tls configuration setting.
//
// Parameters:
// - path: URL path component (should start with "/")
// - query: Optional URL query parameters (can be nil)
//
// Returns a complete URL string suitable for management interface requests.
func (c *Config) ManageURL(path string, query *url.Values) string {
return baseURL(c.manageHostname, c.webTLS, path, query)
}
func baseURL(host string, tls bool, path string, query *url.Values) string {
uri := url.URL{}
uri.Host = host

View File

@@ -7,7 +7,6 @@ import (
)
func TestBaseURL(t *testing.T) {
os.Setenv("web_hostname", "www.ntp.dev, web.ntppool.dev")
os.Setenv("web_tls", "yes")
@@ -22,5 +21,4 @@ func TestBaseURL(t *testing.T) {
if u != "https://www.ntp.dev/foo?foo=bar" {
t.Fatalf("unexpected WebURL: %s", u)
}
}

18
config/depenv/context.go Normal file
View File

@@ -0,0 +1,18 @@
package depenv
import "context"
type contextKey struct{}
// NewContext adds the deployment environment to the context
func NewContext(ctx context.Context, d DeploymentEnvironment) context.Context {
return context.WithValue(ctx, contextKey{}, d)
}
// FromContext retrieves the deployment environment from the context
func FromContext(ctx context.Context) DeploymentEnvironment {
if d, ok := ctx.Value(contextKey{}).(DeploymentEnvironment); ok {
return d
}
return DeployUndefined
}
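A short usage sketch of the two helpers above (import path follows the module layout):

```go
package main

import (
	"context"
	"fmt"

	"go.ntppool.org/common/config/depenv"
)

func main() {
	// Store the deployment environment once, near the top of the program.
	ctx := depenv.NewContext(context.Background(), depenv.DeployDevel)

	// Code that only receives the context can recover it later;
	// DeployUndefined is returned when nothing was stored.
	fmt.Println(depenv.FromContext(ctx)) // devel
}
```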

133
config/depenv/depenv.go Normal file
View File

@@ -0,0 +1,133 @@
// Package depenv provides deployment environment management for NTP Pool services.
//
// This package handles different deployment environments (development, test, production)
// and provides environment-specific configuration including API endpoints, management URLs,
// and monitoring domains. It supports string-based environment identification and
// automatic URL construction for various service endpoints.
//
// The package defines three main deployment environments:
// - DeployDevel: Development environment with dev-specific endpoints
// - DeployTest: Test/beta environment for staging
// - DeployProd: Production environment with live endpoints
//
// Environment detection supports both short and long forms:
// - "dev" or "devel" → DeployDevel
// - "test" or "beta" → DeployTest
// - "prod" → DeployProd
package depenv
import (
"fmt"
"os"
)
var manageServers = map[DeploymentEnvironment]string{
DeployDevel: "https://manage.askdev.grundclock.com",
DeployTest: "https://manage.beta.grundclock.com",
DeployProd: "https://manage.ntppool.org",
}
var apiServers = map[DeploymentEnvironment]string{
DeployDevel: "https://dev-api.ntppool.dev",
DeployTest: "https://beta-api.ntppool.dev",
DeployProd: "https://api.ntppool.dev",
}
// var validationServers = map[DeploymentEnvironment]string{
// DeployDevel: "https://v.ntp.dev/d/",
// DeployTest: "https://v.ntp.dev/b/",
// DeployProd: "https://v.ntp.dev/p/",
// }
const (
// DeployUndefined represents an unrecognized or unset deployment environment.
DeployUndefined DeploymentEnvironment = iota
// DeployDevel represents the development environment.
DeployDevel
// DeployTest represents the test/beta environment.
DeployTest
// DeployProd represents the production environment.
DeployProd
)
// DeploymentEnvironment represents a deployment environment type.
// It provides methods for environment-specific URL construction and
// supports text marshaling/unmarshaling for configuration files.
type DeploymentEnvironment uint8
// DeploymentEnvironmentFromString parses a string into a DeploymentEnvironment.
// It supports both short and long forms of environment names:
// - "dev" or "devel" → DeployDevel
// - "test" or "beta" → DeployTest
// - "prod" → DeployProd
// - any other value → DeployUndefined
func DeploymentEnvironmentFromString(s string) DeploymentEnvironment {
switch s {
case "devel", "dev":
return DeployDevel
case "test", "beta":
return DeployTest
case "prod":
return DeployProd
default:
return DeployUndefined
}
}
// String returns the canonical string representation of the deployment environment.
// Returns "prod", "test", "devel", or panics for invalid environments.
func (d DeploymentEnvironment) String() string {
switch d {
case DeployProd:
return "prod"
case DeployTest:
return "test"
case DeployDevel:
return "devel"
default:
panic("invalid DeploymentEnvironment")
}
}
// APIHost returns the API server URL for this deployment environment.
// It first checks the API_HOST environment variable for overrides,
// then falls back to the environment-specific default API endpoint.
func (d DeploymentEnvironment) APIHost() string {
if apiHost := os.Getenv("API_HOST"); apiHost != "" {
return apiHost
}
return apiServers[d]
}
// ManageURL constructs a management interface URL for this deployment environment.
// It combines the environment-specific management server base URL with the provided path.
//
// The path parameter should start with "/" for proper URL construction.
func (d DeploymentEnvironment) ManageURL(path string) string {
return manageServers[d] + path
}
// MonitorDomain returns the monitoring domain for this deployment environment.
// The domain follows the pattern: {environment}.mon.ntppool.dev
// For example: "devel.mon.ntppool.dev" for the development environment.
func (d DeploymentEnvironment) MonitorDomain() string {
return d.String() + ".mon.ntppool.dev"
}
// UnmarshalText implements the encoding.TextUnmarshaler interface.
// It allows DeploymentEnvironment to be unmarshaled from configuration files
// and other text-based formats. Empty strings are treated as valid (no-op).
//
// Returns an error if the text represents an invalid deployment environment.
func (d *DeploymentEnvironment) UnmarshalText(text []byte) error {
s := string(text)
if s == "" {
return nil
}
env := DeploymentEnvironmentFromString(s)
if env == DeployUndefined {
return fmt.Errorf("invalid deployment environment: %s", s)
}
*d = env
return nil
}
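A usage sketch for the environment helpers defined above (the URLs follow the maps at the top of the file):

```go
package main

import (
	"fmt"

	"go.ntppool.org/common/config/depenv"
)

func main() {
	env := depenv.DeploymentEnvironmentFromString("beta") // "test"/"beta" -> DeployTest
	if env == depenv.DeployUndefined {
		fmt.Println("unknown environment")
		return
	}

	fmt.Println(env.APIHost())                    // https://beta-api.ntppool.dev (unless API_HOST overrides)
	fmt.Println(env.ManageURL("/manage/servers")) // https://manage.beta.grundclock.com/manage/servers
	fmt.Println(env.MonitorDomain())              // test.mon.ntppool.dev
}
```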

View File

@@ -0,0 +1,40 @@
package depenv
import (
"fmt"
"strings"
)
var monitorApiServers = map[DeploymentEnvironment]string{
DeployDevel: "https://api.devel.mon.ntppool.dev",
DeployTest: "https://api.test.mon.ntppool.dev",
DeployProd: "https://api.mon.ntppool.dev",
}
func (d DeploymentEnvironment) MonitorAPIHost() string {
return monitorApiServers[d]
}
func GetDeploymentEnvironmentFromName(clientName string) (DeploymentEnvironment, error) {
clientName = strings.ToLower(clientName)
if !strings.HasSuffix(clientName, ".mon.ntppool.dev") {
return DeployUndefined, fmt.Errorf("invalid client name %s", clientName)
}
if clientName == "api.mon.ntppool.dev" {
return DeployProd, nil
}
prefix := clientName[:strings.Index(clientName, ".mon.ntppool.dev")]
parts := strings.Split(prefix, ".")
if len(parts) != 2 {
return DeployUndefined, fmt.Errorf("invalid client name %s", clientName)
}
if d := DeploymentEnvironmentFromString(parts[1]); d != DeployUndefined {
return d, nil
}
return DeployUndefined, fmt.Errorf("invalid client name %s (unknown environment %s)", clientName, parts[1])
}
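For example, resolving a monitoring client name to its environment with the function above:

```go
package main

import (
	"fmt"
	"log"

	"go.ntppool.org/common/config/depenv"
)

func main() {
	env, err := depenv.GetDeploymentEnvironmentFromName("api.test.mon.ntppool.dev")
	if err != nil {
		// names outside *.mon.ntppool.dev, or with an unknown environment, are rejected
		log.Fatal(err)
	}
	fmt.Println(env)                  // test
	fmt.Println(env.MonitorAPIHost()) // https://api.test.mon.ntppool.dev
}
```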

61
database/config.go Normal file
View File

@@ -0,0 +1,61 @@
package database
import (
"time"
"github.com/prometheus/client_golang/prometheus"
)
// Config represents the database configuration structure
type Config struct {
MySQL DBConfig `yaml:"mysql"`
}
// DBConfig represents the MySQL database configuration
type DBConfig struct {
DSN string `default:"" flag:"dsn" usage:"Database DSN"`
User string `default:"" flag:"user"`
Pass string `default:"" flag:"pass"`
DBName string // Optional database name override
}
// ConfigOptions allows customization of database opening behavior
type ConfigOptions struct {
// ConfigFiles is a list of config file paths to search for database configuration
ConfigFiles []string
// EnablePoolMonitoring enables connection pool metrics collection
EnablePoolMonitoring bool
// PrometheusRegisterer for metrics collection. If nil, no metrics are collected.
PrometheusRegisterer prometheus.Registerer
// Connection pool settings
MaxOpenConns int
MaxIdleConns int
ConnMaxLifetime time.Duration
}
// DefaultConfigOptions returns the standard configuration options used by API package
func DefaultConfigOptions() ConfigOptions {
return ConfigOptions{
ConfigFiles: []string{"database.yaml", "/vault/secrets/database.yaml"},
EnablePoolMonitoring: true,
PrometheusRegisterer: prometheus.DefaultRegisterer,
MaxOpenConns: 25,
MaxIdleConns: 10,
ConnMaxLifetime: 3 * time.Minute,
}
}
// MonitorConfigOptions returns configuration options optimized for Monitor package
func MonitorConfigOptions() ConfigOptions {
return ConfigOptions{
ConfigFiles: []string{"database.yaml", "/vault/secrets/database.yaml"},
EnablePoolMonitoring: false, // Monitor doesn't need metrics
PrometheusRegisterer: nil, // No Prometheus dependency
MaxOpenConns: 10,
MaxIdleConns: 5,
ConnMaxLifetime: 3 * time.Minute,
}
}
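The two presets cover the existing API and Monitor call sites; a service with different needs can build its own options from the same types. A sketch (the values and registerer handling are illustrative, not project defaults):

```go
package myservice

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"

	"go.ntppool.org/common/database"
)

// configOptions returns pool settings for a hypothetical low-traffic service,
// enabling metrics only when a registerer is supplied.
func configOptions(reg prometheus.Registerer) database.ConfigOptions {
	return database.ConfigOptions{
		ConfigFiles:          []string{"database.yaml", "/vault/secrets/database.yaml"},
		EnablePoolMonitoring: reg != nil,
		PrometheusRegisterer: reg,
		MaxOpenConns:         15,
		MaxIdleConns:         5,
		ConnMaxLifetime:      3 * time.Minute,
	}
}
```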

81
database/config_test.go Normal file
View File

@@ -0,0 +1,81 @@
package database
import (
"testing"
"time"
"github.com/prometheus/client_golang/prometheus"
)
func TestDefaultConfigOptions(t *testing.T) {
opts := DefaultConfigOptions()
// Verify expected defaults for API package
if opts.MaxOpenConns != 25 {
t.Errorf("Expected MaxOpenConns=25, got %d", opts.MaxOpenConns)
}
if opts.MaxIdleConns != 10 {
t.Errorf("Expected MaxIdleConns=10, got %d", opts.MaxIdleConns)
}
if opts.ConnMaxLifetime != 3*time.Minute {
t.Errorf("Expected ConnMaxLifetime=3m, got %v", opts.ConnMaxLifetime)
}
if !opts.EnablePoolMonitoring {
t.Error("Expected EnablePoolMonitoring=true")
}
if opts.PrometheusRegisterer != prometheus.DefaultRegisterer {
t.Error("Expected PrometheusRegisterer to be DefaultRegisterer")
}
if len(opts.ConfigFiles) == 0 {
t.Error("Expected ConfigFiles to be non-empty")
}
}
func TestMonitorConfigOptions(t *testing.T) {
opts := MonitorConfigOptions()
// Verify expected defaults for Monitor package
if opts.MaxOpenConns != 10 {
t.Errorf("Expected MaxOpenConns=10, got %d", opts.MaxOpenConns)
}
if opts.MaxIdleConns != 5 {
t.Errorf("Expected MaxIdleConns=5, got %d", opts.MaxIdleConns)
}
if opts.ConnMaxLifetime != 3*time.Minute {
t.Errorf("Expected ConnMaxLifetime=3m, got %v", opts.ConnMaxLifetime)
}
if opts.EnablePoolMonitoring {
t.Error("Expected EnablePoolMonitoring=false")
}
if opts.PrometheusRegisterer != nil {
t.Error("Expected PrometheusRegisterer to be nil")
}
if len(opts.ConfigFiles) == 0 {
t.Error("Expected ConfigFiles to be non-empty")
}
}
func TestConfigStructures(t *testing.T) {
// Test that configuration structures can be created and populated
config := Config{
MySQL: DBConfig{
DSN: "user:pass@tcp(localhost:3306)/dbname",
User: "testuser",
Pass: "testpass",
DBName: "testdb",
},
}
if config.MySQL.DSN == "" {
t.Error("Expected DSN to be set")
}
if config.MySQL.User != "testuser" {
t.Errorf("Expected User='testuser', got '%s'", config.MySQL.User)
}
if config.MySQL.Pass != "testpass" {
t.Errorf("Expected Pass='testpass', got '%s'", config.MySQL.Pass)
}
if config.MySQL.DBName != "testdb" {
t.Errorf("Expected DBName='testdb', got '%s'", config.MySQL.DBName)
}
}

88
database/connector.go Normal file
View File

@@ -0,0 +1,88 @@
package database
import (
"context"
"database/sql/driver"
"errors"
"fmt"
"os"
"github.com/go-sql-driver/mysql"
"gopkg.in/yaml.v3"
)
// from https://github.com/Boostport/dynamic-database-config
// CreateConnectorFunc is a function that creates a database connector
type CreateConnectorFunc func() (driver.Connector, error)
// Driver implements the sql/driver interface with dynamic configuration
type Driver struct {
CreateConnectorFunc CreateConnectorFunc
}
// Driver returns the driver instance
func (d Driver) Driver() driver.Driver {
return d
}
// Connect creates a new database connection using the dynamic connector
func (d Driver) Connect(ctx context.Context) (driver.Conn, error) {
connector, err := d.CreateConnectorFunc()
if err != nil {
return nil, fmt.Errorf("error creating connector from function: %w", err)
}
return connector.Connect(ctx)
}
// Open is not supported for dynamic configuration
func (d Driver) Open(name string) (driver.Conn, error) {
return nil, errors.New("open is not supported")
}
// createConnector creates a connector function that reads configuration from a file
func createConnector(configFile string) CreateConnectorFunc {
return func() (driver.Connector, error) {
dbFile, err := os.Open(configFile)
if err != nil {
return nil, err
}
defer dbFile.Close()
dec := yaml.NewDecoder(dbFile)
cfg := Config{}
err = dec.Decode(&cfg)
if err != nil {
return nil, err
}
dsn := cfg.MySQL.DSN
if len(dsn) == 0 {
dsn = os.Getenv("DATABASE_DSN")
if len(dsn) == 0 {
return nil, fmt.Errorf("dsn config in database.yaml or DATABASE_DSN environment variable required")
}
}
dbcfg, err := mysql.ParseDSN(dsn)
if err != nil {
return nil, err
}
if user := cfg.MySQL.User; len(user) > 0 {
dbcfg.User = user
}
if pass := cfg.MySQL.Pass; len(pass) > 0 {
dbcfg.Passwd = pass
}
if name := cfg.MySQL.DBName; len(name) > 0 {
dbcfg.DBName = name
}
return mysql.NewConnector(dbcfg)
}
}

View File

@@ -0,0 +1,117 @@
package database
import (
"context"
"database/sql"
"testing"
)
// Mock types for testing SQLC integration patterns
type mockQueries struct {
db DBTX
}
type mockQueriesTx struct {
*mockQueries
tx *sql.Tx
}
// Mock the Begin method pattern that SQLC generates
func (q *mockQueries) Begin(ctx context.Context) (*mockQueriesTx, error) {
// This would normally be: tx, err := q.db.(*sql.DB).BeginTx(ctx, nil)
// For our test, we return a mock
return &mockQueriesTx{mockQueries: q, tx: nil}, nil
}
func (qtx *mockQueriesTx) Commit(ctx context.Context) error {
return nil // Mock implementation
}
func (qtx *mockQueriesTx) Rollback(ctx context.Context) error {
return nil // Mock implementation
}
// This test verifies that our common database interfaces are compatible with SQLC-generated code
func TestSQLCIntegration(t *testing.T) {
// Test that SQLC's DBTX interface matches our DBTX interface
t.Run("DBTX Interface Compatibility", func(t *testing.T) {
// Test interface compatibility by assignment without execution
var ourDBTX DBTX
// Test with sql.DB (should implement DBTX)
var db *sql.DB
ourDBTX = db // This will compile only if interfaces are compatible
_ = ourDBTX // Use the variable to avoid "unused" warning
// Test with sql.Tx (should implement DBTX)
var tx *sql.Tx
ourDBTX = tx // This will compile only if interfaces are compatible
_ = ourDBTX // Use the variable to avoid "unused" warning
// If we reach here, interfaces are compatible
t.Log("DBTX interface is compatible with sql.DB and sql.Tx")
})
t.Run("Transaction Interface Compatibility", func(t *testing.T) {
// This test verifies our transaction interfaces work with SQLC patterns
// We can't define methods inside a function, so we test interface compatibility
// Verify our DB interface is compatible with what SQLC expects
var dbInterface DB[*mockQueriesTx]
var mockDB *mockQueries = &mockQueries{}
dbInterface = mockDB
// Test that our transaction helper can work with this pattern
err := WithTransaction(context.Background(), dbInterface, func(ctx context.Context, qtx *mockQueriesTx) error {
// This would be where you'd call SQLC-generated query methods
return nil
})
if err != nil {
t.Errorf("Transaction helper failed: %v", err)
}
})
}
// Test that demonstrates how the common package would be used with real SQLC patterns
func TestRealWorldUsagePattern(t *testing.T) {
// This test shows how a package would typically use our common database code
t.Run("Database Opening Pattern", func(t *testing.T) {
// Test that our configuration options work as expected
opts := DefaultConfigOptions()
// Modify for test environment (no actual database connection)
opts.ConfigFiles = []string{} // No config files for unit test
opts.PrometheusRegisterer = nil // No metrics for unit test
// This would normally open a database: db, err := OpenDB(ctx, opts)
// For our unit test, we just verify the options are reasonable
if opts.MaxOpenConns <= 0 {
t.Error("MaxOpenConns should be positive")
}
if opts.MaxIdleConns <= 0 {
t.Error("MaxIdleConns should be positive")
}
if opts.ConnMaxLifetime <= 0 {
t.Error("ConnMaxLifetime should be positive")
}
})
t.Run("Monitor Package Configuration", func(t *testing.T) {
opts := MonitorConfigOptions()
// Verify monitor-specific settings
if opts.EnablePoolMonitoring {
t.Error("Monitor package should not enable pool monitoring")
}
if opts.PrometheusRegisterer != nil {
t.Error("Monitor package should not have Prometheus registerer")
}
if opts.MaxOpenConns != 10 {
t.Errorf("Expected MaxOpenConns=10 for monitor, got %d", opts.MaxOpenConns)
}
if opts.MaxIdleConns != 5 {
t.Errorf("Expected MaxIdleConns=5 for monitor, got %d", opts.MaxIdleConns)
}
})
}

34
database/interfaces.go Normal file
View File

@@ -0,0 +1,34 @@
package database
import (
"context"
"database/sql"
)
// DBTX matches the interface expected by SQLC-generated code
// This interface is implemented by both *sql.DB and *sql.Tx
type DBTX interface {
ExecContext(context.Context, string, ...interface{}) (sql.Result, error)
PrepareContext(context.Context, string) (*sql.Stmt, error)
QueryContext(context.Context, string, ...interface{}) (*sql.Rows, error)
QueryRowContext(context.Context, string, ...interface{}) *sql.Row
}
// BaseQuerier provides basic query functionality
// This interface should be implemented by package-specific Queries types
type BaseQuerier interface {
WithTx(tx *sql.Tx) BaseQuerier
}
// BaseQuerierTx provides transaction functionality
// This interface should be implemented by package-specific Queries types
type BaseQuerierTx interface {
BaseQuerier
Begin(ctx context.Context) (BaseQuerierTx, error)
Commit(ctx context.Context) error
Rollback(ctx context.Context) error
}
// TransactionFunc represents a function that operates within a database transaction
// This is used by the shared transaction helpers in transaction.go
type TransactionFunc[Q any] func(ctx context.Context, q Q) error

93
database/metrics.go Normal file
View File

@@ -0,0 +1,93 @@
package database
import (
"context"
"database/sql"
"fmt"
"time"
"github.com/prometheus/client_golang/prometheus"
)
// DatabaseMetrics holds the Prometheus metrics for database connection pool monitoring
type DatabaseMetrics struct {
ConnectionsOpen prometheus.Gauge
ConnectionsIdle prometheus.Gauge
ConnectionsInUse prometheus.Gauge
ConnectionsWaitCount prometheus.Counter
ConnectionsWaitDuration prometheus.Histogram
}
// NewDatabaseMetrics creates a new set of database metrics and registers them
func NewDatabaseMetrics(registerer prometheus.Registerer) *DatabaseMetrics {
metrics := &DatabaseMetrics{
ConnectionsOpen: prometheus.NewGauge(prometheus.GaugeOpts{
Name: "database_connections_open",
Help: "Number of open database connections",
}),
ConnectionsIdle: prometheus.NewGauge(prometheus.GaugeOpts{
Name: "database_connections_idle",
Help: "Number of idle database connections",
}),
ConnectionsInUse: prometheus.NewGauge(prometheus.GaugeOpts{
Name: "database_connections_in_use",
Help: "Number of database connections in use",
}),
ConnectionsWaitCount: prometheus.NewCounter(prometheus.CounterOpts{
Name: "database_connections_wait_count_total",
Help: "Total number of times a connection had to wait",
}),
ConnectionsWaitDuration: prometheus.NewHistogram(prometheus.HistogramOpts{
Name: "database_connections_wait_duration_seconds",
Help: "Time spent waiting for a database connection",
Buckets: prometheus.DefBuckets,
}),
}
if registerer != nil {
registerer.MustRegister(
metrics.ConnectionsOpen,
metrics.ConnectionsIdle,
metrics.ConnectionsInUse,
metrics.ConnectionsWaitCount,
metrics.ConnectionsWaitDuration,
)
}
return metrics
}
// monitorConnectionPool runs a background goroutine to collect connection pool metrics
func monitorConnectionPool(ctx context.Context, db *sql.DB, registerer prometheus.Registerer) {
if registerer == nil {
return // No metrics collection if no registerer provided
}
metrics := NewDatabaseMetrics(registerer)
ticker := time.NewTicker(30 * time.Second)
defer ticker.Stop()
for {
select {
case <-ctx.Done():
return
case <-ticker.C:
stats := db.Stats()
metrics.ConnectionsOpen.Set(float64(stats.OpenConnections))
metrics.ConnectionsIdle.Set(float64(stats.Idle))
metrics.ConnectionsInUse.Set(float64(stats.InUse))
metrics.ConnectionsWaitCount.Add(float64(stats.WaitCount))
if stats.WaitDuration > 0 {
metrics.ConnectionsWaitDuration.Observe(stats.WaitDuration.Seconds())
}
// Log connection pool stats for high usage or waiting
if stats.OpenConnections > 20 || stats.WaitCount > 0 {
fmt.Printf("Connection pool stats: open=%d idle=%d in_use=%d wait_count=%d wait_duration=%s\n",
stats.OpenConnections, stats.Idle, stats.InUse, stats.WaitCount, stats.WaitDuration)
}
}
}
}

78
database/pool.go Normal file
View File

@@ -0,0 +1,78 @@
package database
import (
"context"
"database/sql"
"fmt"
"os"
"go.ntppool.org/common/logger"
)
// OpenDB opens a database connection with the specified configuration options
func OpenDB(ctx context.Context, options ConfigOptions) (*sql.DB, error) {
log := logger.Setup()
configFile, err := findConfigFile(options.ConfigFiles)
if err != nil {
return nil, err
}
dbconn := sql.OpenDB(Driver{
CreateConnectorFunc: createConnector(configFile),
})
// Set connection pool parameters
dbconn.SetConnMaxLifetime(options.ConnMaxLifetime)
dbconn.SetMaxOpenConns(options.MaxOpenConns)
dbconn.SetMaxIdleConns(options.MaxIdleConns)
err = dbconn.Ping()
if err != nil {
log.Error("could not connect to database", "err", err)
return nil, err
}
// Start optional connection pool monitoring
if options.EnablePoolMonitoring && options.PrometheusRegisterer != nil {
go monitorConnectionPool(ctx, dbconn, options.PrometheusRegisterer)
}
return dbconn, nil
}
// OpenDBWithConfigFile opens a database connection using an explicit config file path
// This is a convenience function for API package compatibility
func OpenDBWithConfigFile(ctx context.Context, configFile string) (*sql.DB, error) {
options := DefaultConfigOptions()
options.ConfigFiles = []string{configFile}
return OpenDB(ctx, options)
}
// OpenDBMonitor opens a database connection with monitor-specific defaults
// This is a convenience function for Monitor package compatibility
func OpenDBMonitor() (*sql.DB, error) {
options := MonitorConfigOptions()
return OpenDB(context.Background(), options)
}
// findConfigFile searches for the first existing config file from the list
func findConfigFile(configFiles []string) (string, error) {
var firstErr error
for _, configFile := range configFiles {
if configFile == "" {
continue
}
if _, err := os.Stat(configFile); err == nil {
return configFile, nil
} else if firstErr == nil {
firstErr = err
}
}
if firstErr != nil {
return "", fmt.Errorf("no config file found: %w", firstErr)
}
return "", fmt.Errorf("no valid config files provided")
}
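A usage sketch for OpenDB with the API defaults (the convenience wrappers above cover the existing call sites):

```go
package main

import (
	"context"
	"log"

	"go.ntppool.org/common/database"
)

func main() {
	ctx := context.Background()

	// Searches database.yaml and /vault/secrets/database.yaml,
	// applies the 25/10 connection limits, and registers pool metrics.
	db, err := database.OpenDB(ctx, database.DefaultConfigOptions())
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Equivalent convenience calls:
	//   database.OpenDBWithConfigFile(ctx, "/vault/secrets/database.yaml")
	//   database.OpenDBMonitor()
}
```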

69
database/transaction.go Normal file
View File

@@ -0,0 +1,69 @@
package database
import (
"context"
"fmt"
"go.ntppool.org/common/logger"
)
// DB interface for database operations that can begin transactions
type DB[Q any] interface {
Begin(ctx context.Context) (Q, error)
}
// TX interface for transaction operations
type TX interface {
Commit(ctx context.Context) error
Rollback(ctx context.Context) error
}
// WithTransaction executes a function within a database transaction
// Handles proper rollback on error and commit on success
func WithTransaction[Q TX](ctx context.Context, db DB[Q], fn func(ctx context.Context, q Q) error) error {
tx, err := db.Begin(ctx)
if err != nil {
return fmt.Errorf("failed to begin transaction: %w", err)
}
var committed bool
defer func() {
if !committed {
if rbErr := tx.Rollback(ctx); rbErr != nil {
// Log rollback error but don't override original error
log := logger.FromContext(ctx)
log.ErrorContext(ctx, "failed to rollback transaction", "error", rbErr)
}
}
}()
if err := fn(ctx, tx); err != nil {
return err
}
err = tx.Commit(ctx)
committed = true // Mark as committed regardless of commit success/failure
if err != nil {
return fmt.Errorf("failed to commit transaction: %w", err)
}
return nil
}
// WithReadOnlyTransaction executes a read-only function within a transaction
// Always rolls back at the end (for consistent read isolation)
func WithReadOnlyTransaction[Q TX](ctx context.Context, db DB[Q], fn func(ctx context.Context, q Q) error) error {
tx, err := db.Begin(ctx)
if err != nil {
return fmt.Errorf("failed to begin read-only transaction: %w", err)
}
defer func() {
if rbErr := tx.Rollback(ctx); rbErr != nil {
log := logger.FromContext(ctx)
log.ErrorContext(ctx, "failed to rollback read-only transaction", "error", rbErr)
}
}()
return fn(ctx, tx)
}
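A sketch of how SQLC-style code would use WithTransaction; the Queries and QueriesTx types below are stand-ins for generated code, shaped to satisfy the DB[Q] and TX constraints above:

```go
package example

import (
	"context"
	"database/sql"

	"go.ntppool.org/common/database"
)

// Queries and QueriesTx mimic SQLC-generated types (illustrative only).
type Queries struct{ db *sql.DB }

type QueriesTx struct{ tx *sql.Tx }

func (q *Queries) Begin(ctx context.Context) (*QueriesTx, error) {
	tx, err := q.db.BeginTx(ctx, nil)
	if err != nil {
		return nil, err
	}
	return &QueriesTx{tx: tx}, nil
}

func (q *QueriesTx) Commit(ctx context.Context) error   { return q.tx.Commit() }
func (q *QueriesTx) Rollback(ctx context.Context) error { return q.tx.Rollback() }

// UpdateScores does its work inside a transaction; rollback on error and
// commit on success are handled by the shared helper.
func UpdateScores(ctx context.Context, q *Queries) error {
	return database.WithTransaction(ctx, q, func(ctx context.Context, qtx *QueriesTx) error {
		// call generated query methods on qtx here
		return nil
	})
}
```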

View File

@@ -0,0 +1,69 @@
package database
import (
"context"
"database/sql"
"fmt"
"go.ntppool.org/common/logger"
)
// Shared interface definitions that both packages use identically
type BaseBeginner interface {
Begin(context.Context) (sql.Tx, error)
}
type BaseTx interface {
BaseBeginner
Commit(ctx context.Context) error
Rollback(ctx context.Context) error
}
// BeginTransactionForQuerier contains the shared Begin() logic from both packages
func BeginTransactionForQuerier(ctx context.Context, db DBTX) (DBTX, error) {
if sqlDB, ok := db.(*sql.DB); ok {
tx, err := sqlDB.BeginTx(ctx, &sql.TxOptions{})
if err != nil {
return nil, err
}
return tx, nil
} else {
// Handle transaction case
if beginner, ok := db.(BaseBeginner); ok {
tx, err := beginner.Begin(ctx)
if err != nil {
return nil, err
}
return &tx, nil
}
return nil, fmt.Errorf("database connection does not support transactions")
}
}
// CommitTransactionForQuerier contains the shared Commit() logic from both packages
func CommitTransactionForQuerier(ctx context.Context, db DBTX) error {
if sqlTx, ok := db.(*sql.Tx); ok {
return sqlTx.Commit()
}
tx, ok := db.(BaseTx)
if !ok {
log := logger.FromContext(ctx)
log.ErrorContext(ctx, "could not get a Tx", "type", fmt.Sprintf("%T", db))
return sql.ErrTxDone
}
return tx.Commit(ctx)
}
// RollbackTransactionForQuerier contains the shared Rollback() logic from both packages
func RollbackTransactionForQuerier(ctx context.Context, db DBTX) error {
if sqlTx, ok := db.(*sql.Tx); ok {
return sqlTx.Rollback()
}
tx, ok := db.(BaseTx)
if !ok {
return sql.ErrTxDone
}
return tx.Rollback(ctx)
}
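A sketch of how a package-level Queries type might delegate to these helpers; the ntpdb naming mirrors the api and monitor packages and is illustrative:

```go
package ntpdb

import (
	"context"

	"go.ntppool.org/common/database"
)

// Queries stands in for an SQLC-generated type holding a database.DBTX.
type Queries struct{ db database.DBTX }

// Begin starts a transaction and returns a Queries bound to it.
func (q *Queries) Begin(ctx context.Context) (*Queries, error) {
	tx, err := database.BeginTransactionForQuerier(ctx, q.db)
	if err != nil {
		return nil, err
	}
	return &Queries{db: tx}, nil
}

func (q *Queries) Commit(ctx context.Context) error {
	return database.CommitTransactionForQuerier(ctx, q.db)
}

func (q *Queries) Rollback(ctx context.Context) error {
	return database.RollbackTransactionForQuerier(ctx, q.db)
}
```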

View File

@@ -0,0 +1,157 @@
package database
import (
"context"
"errors"
"testing"
)
// Mock implementations for testing
type mockDB struct {
beginError error
txMock *mockTX
}
func (m *mockDB) Begin(ctx context.Context) (*mockTX, error) {
if m.beginError != nil {
return nil, m.beginError
}
return m.txMock, nil
}
type mockTX struct {
commitError error
rollbackError error
commitCalled bool
rollbackCalled bool
}
func (m *mockTX) Commit(ctx context.Context) error {
m.commitCalled = true
return m.commitError
}
func (m *mockTX) Rollback(ctx context.Context) error {
m.rollbackCalled = true
return m.rollbackError
}
func TestWithTransaction_Success(t *testing.T) {
tx := &mockTX{}
db := &mockDB{txMock: tx}
var functionCalled bool
err := WithTransaction(context.Background(), db, func(ctx context.Context, q *mockTX) error {
functionCalled = true
if q != tx {
t.Error("Expected transaction to be passed to function")
}
return nil
})
if err != nil {
t.Errorf("Expected no error, got %v", err)
}
if !functionCalled {
t.Error("Expected function to be called")
}
if !tx.commitCalled {
t.Error("Expected commit to be called")
}
if tx.rollbackCalled {
t.Error("Expected rollback NOT to be called on success")
}
}
func TestWithTransaction_FunctionError(t *testing.T) {
tx := &mockTX{}
db := &mockDB{txMock: tx}
expectedError := errors.New("function error")
err := WithTransaction(context.Background(), db, func(ctx context.Context, q *mockTX) error {
return expectedError
})
if err != expectedError {
t.Errorf("Expected error %v, got %v", expectedError, err)
}
if tx.commitCalled {
t.Error("Expected commit NOT to be called on function error")
}
if !tx.rollbackCalled {
t.Error("Expected rollback to be called on function error")
}
}
func TestWithTransaction_BeginError(t *testing.T) {
expectedError := errors.New("begin error")
db := &mockDB{beginError: expectedError}
err := WithTransaction(context.Background(), db, func(ctx context.Context, q *mockTX) error {
t.Error("Function should not be called when Begin fails")
return nil
})
if err == nil || !errors.Is(err, expectedError) {
t.Errorf("Expected wrapped begin error, got %v", err)
}
}
func TestWithTransaction_CommitError(t *testing.T) {
commitError := errors.New("commit error")
tx := &mockTX{commitError: commitError}
db := &mockDB{txMock: tx}
err := WithTransaction(context.Background(), db, func(ctx context.Context, q *mockTX) error {
return nil
})
if err == nil || !errors.Is(err, commitError) {
t.Errorf("Expected wrapped commit error, got %v", err)
}
if !tx.commitCalled {
t.Error("Expected commit to be called")
}
if tx.rollbackCalled {
t.Error("Expected rollback NOT to be called when commit fails")
}
}
func TestWithReadOnlyTransaction_Success(t *testing.T) {
tx := &mockTX{}
db := &mockDB{txMock: tx}
var functionCalled bool
err := WithReadOnlyTransaction(context.Background(), db, func(ctx context.Context, q *mockTX) error {
functionCalled = true
return nil
})
if err != nil {
t.Errorf("Expected no error, got %v", err)
}
if !functionCalled {
t.Error("Expected function to be called")
}
if tx.commitCalled {
t.Error("Expected commit NOT to be called in read-only transaction")
}
if !tx.rollbackCalled {
t.Error("Expected rollback to be called in read-only transaction")
}
}
func TestWithReadOnlyTransaction_FunctionError(t *testing.T) {
tx := &mockTX{}
db := &mockDB{txMock: tx}
expectedError := errors.New("function error")
err := WithReadOnlyTransaction(context.Background(), db, func(ctx context.Context, q *mockTX) error {
return expectedError
})
if err != expectedError {
t.Errorf("Expected error %v, got %v", expectedError, err)
}
if !tx.rollbackCalled {
t.Error("Expected rollback to be called")
}
}

230
ekko/ekko.go Normal file
View File

@@ -0,0 +1,230 @@
// Package ekko provides an enhanced Echo web framework wrapper with pre-configured middleware.
//
// This package wraps the Echo web framework with a comprehensive middleware stack including:
// - OpenTelemetry distributed tracing with request context propagation
// - Prometheus metrics collection with per-service subsystems
// - Structured logging with trace ID correlation
// - Security headers (HSTS, content security policy)
// - Gzip compression for response optimization
// - Recovery middleware with detailed error logging
// - HTTP/2 support with H2C (HTTP/2 Cleartext) capability
//
// The package uses functional options pattern for flexible configuration
// and supports graceful shutdown with configurable timeouts. It's designed
// as the standard web service foundation for NTP Pool project services.
//
// Example usage:
//
// ekko, err := ekko.New("myservice",
// ekko.WithPort(8080),
// ekko.WithPrometheus(prometheus.DefaultRegisterer),
// ekko.WithEchoSetup(func(e *echo.Echo) error {
// e.GET("/health", healthHandler)
// return nil
// }),
// )
// if err != nil {
// log.Fatal(err)
// }
// err = ekko.Start(ctx)
package ekko
import (
"context"
"fmt"
"net"
"net/http"
"time"
"github.com/labstack/echo-contrib/echoprometheus"
"github.com/labstack/echo/v4"
"github.com/labstack/echo/v4/middleware"
slogecho "github.com/samber/slog-echo"
"go.ntppool.org/common/logger"
"go.ntppool.org/common/version"
"go.opentelemetry.io/contrib/instrumentation/github.com/labstack/echo/otelecho"
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/trace"
"golang.org/x/net/http2"
"golang.org/x/sync/errgroup"
)
// New creates a new Ekko instance with the specified service name and functional options.
// The name parameter is used for OpenTelemetry service identification, Prometheus metrics
// subsystem naming, and server identification headers.
//
// Default configuration includes:
// - 60 second write timeout
// - 30 second read header timeout
// - HTTP/2 support with H2C
// - Standard middleware stack (tracing, metrics, logging, security)
//
// Use functional options to customize behavior:
// - WithPort(): Set server port (required for Start())
// - WithPrometheus(): Enable Prometheus metrics
// - WithEchoSetup(): Configure routes and handlers
// - WithLogFilters(): Filter access logs
// - WithOtelMiddleware(): Custom OpenTelemetry middleware
// - WithWriteTimeout(): Custom write timeout
// - WithReadHeaderTimeout(): Custom read header timeout
// - WithGzipConfig(): Custom gzip compression settings
func New(name string, options ...func(*Ekko)) (*Ekko, error) {
ek := &Ekko{
writeTimeout: 60 * time.Second,
readHeaderTimeout: 30 * time.Second,
}
for _, o := range options {
o(ek)
}
return ek, nil
}
// SetupEcho creates and configures an Echo instance without starting the server.
// This method is primarily intended for testing scenarios where you need access
// to the configured Echo instance without starting the HTTP server.
//
// The returned Echo instance includes all configured middleware and routes
// but requires manual server lifecycle management.
func (ek *Ekko) SetupEcho(ctx context.Context) (*echo.Echo, error) {
return ek.setup(ctx)
}
// Start creates the Echo instance and starts the HTTP server with graceful shutdown support.
// The server runs until either an error occurs or the provided context is cancelled.
//
// The server supports HTTP/2 with H2C (HTTP/2 Cleartext) and includes a 5-second
// graceful shutdown timeout when the context is cancelled. Server configuration
// (port, timeouts, middleware) must be set via functional options during New().
//
// Returns an error if server startup fails or if shutdown doesn't complete within
// the timeout period. Returns nil for clean shutdown via context cancellation.
func (ek *Ekko) Start(ctx context.Context) error {
log := logger.Setup()
e, err := ek.setup(ctx)
if err != nil {
return err
}
g, ctx := errgroup.WithContext(ctx)
g.Go(func() error {
e.Server.Addr = fmt.Sprintf(":%d", ek.port)
log.Info("server starting", "port", ek.port)
// err := e.Server.ListenAndServe()
err := e.StartH2CServer(e.Server.Addr, &http2.Server{})
if err == http.ErrServerClosed {
return nil
}
return err
})
g.Go(func() error {
<-ctx.Done()
shutdownCtx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
return e.Shutdown(shutdownCtx)
})
return g.Wait()
}
func (ek *Ekko) setup(ctx context.Context) (*echo.Echo, error) {
log := logger.Setup()
e := echo.New()
e.Server.ReadHeaderTimeout = ek.readHeaderTimeout
e.Server.WriteTimeout = ek.writeTimeout
e.Server.BaseContext = func(_ net.Listener) context.Context {
return ctx
}
trustOptions := []echo.TrustOption{
echo.TrustLoopback(true),
echo.TrustLinkLocal(false),
echo.TrustPrivateNet(true),
}
e.IPExtractor = echo.ExtractIPFromXFFHeader(trustOptions...)
if ek.otelmiddleware == nil {
e.Use(otelecho.Middleware(ek.name))
} else {
e.Use(ek.otelmiddleware)
}
e.Use(middleware.RecoverWithConfig(middleware.RecoverConfig{
LogErrorFunc: func(c echo.Context, err error, stack []byte) error {
log.ErrorContext(c.Request().Context(), err.Error(), "stack", string(stack))
fmt.Println(string(stack))
return err
},
}))
e.Use(slogecho.NewWithConfig(log,
slogecho.Config{
WithTraceID: false, // done by logger already
Filters: ek.logFilters,
},
))
if ek.prom != nil {
e.Use(echoprometheus.NewMiddlewareWithConfig(echoprometheus.MiddlewareConfig{
Subsystem: ek.name,
Registerer: ek.prom,
}))
}
if ek.gzipConfig != nil {
e.Use(middleware.GzipWithConfig(*ek.gzipConfig))
} else {
e.Use(middleware.Gzip())
}
secureConfig := middleware.DefaultSecureConfig
// secureConfig.ContentSecurityPolicy = "default-src *"
secureConfig.ContentSecurityPolicy = ""
secureConfig.HSTSMaxAge = int(time.Hour * 168 * 30 / time.Second)
secureConfig.HSTSPreloadEnabled = true
e.Use(middleware.SecureWithConfig(secureConfig))
e.Use(
func(next echo.HandlerFunc) echo.HandlerFunc {
return func(c echo.Context) error {
request := c.Request()
span := trace.SpanFromContext(request.Context())
if span.IsRecording() {
span.SetAttributes(attribute.String("http.real_ip", c.RealIP()))
span.SetAttributes(attribute.String("url.path", c.Request().RequestURI))
if q := c.QueryString(); len(q) > 0 {
span.SetAttributes(attribute.String("url.query", q))
}
c.Response().Header().Set("Traceparent", span.SpanContext().TraceID().String())
}
return next(c)
}
},
)
e.Use(func(next echo.HandlerFunc) echo.HandlerFunc {
vinfo := version.VersionInfo()
v := ek.name + "/" + vinfo.Version + "+" + vinfo.GitRevShort
return func(c echo.Context) error {
c.Response().Header().Set(echo.HeaderServer, v)
return next(c)
}
})
if ek.routeFn != nil {
err := ek.routeFn(e)
if err != nil {
return nil, err
}
}
return e, nil
}

102
ekko/options.go Normal file
View File

@@ -0,0 +1,102 @@
package ekko
import (
"time"
"github.com/labstack/echo/v4"
"github.com/labstack/echo/v4/middleware"
"github.com/prometheus/client_golang/prometheus"
slogecho "github.com/samber/slog-echo"
)
// Ekko represents an enhanced Echo web server with pre-configured middleware stack.
// It encapsulates server configuration, middleware options, and lifecycle management
// for NTP Pool web services. Use New() with functional options to configure.
type Ekko struct {
name string
prom prometheus.Registerer
port int
routeFn func(e *echo.Echo) error
logFilters []slogecho.Filter
otelmiddleware echo.MiddlewareFunc
gzipConfig *middleware.GzipConfig
writeTimeout time.Duration
readHeaderTimeout time.Duration
}
// RouteFn defines a function type for configuring Echo routes and handlers.
// It receives a configured Echo instance and should register all application
// routes, middleware, and handlers. Return an error to abort server startup.
type RouteFn func(e *echo.Echo) error
// WithPort sets the HTTP server port. This option is required when using Start().
// The port should be available and the process should have permission to bind to it.
func WithPort(port int) func(*Ekko) {
return func(ek *Ekko) {
ek.port = port
}
}
// WithPrometheus enables Prometheus metrics collection using the provided registerer.
// Metrics include HTTP request duration, request count, and response size histograms.
// The service name is used as the metrics subsystem for namespacing.
func WithPrometheus(reg prometheus.Registerer) func(*Ekko) {
return func(ek *Ekko) {
ek.prom = reg
}
}
// WithEchoSetup configures application routes and handlers via a setup function.
// The provided function receives the configured Echo instance after all middleware
// is applied and should register routes, custom middleware, and handlers.
func WithEchoSetup(rfn RouteFn) func(*Ekko) {
return func(ek *Ekko) {
ek.routeFn = rfn
}
}
// WithLogFilters configures access log filtering to reduce log noise.
// Filters can exclude specific paths, methods, or status codes from access logs.
// Useful for excluding health checks, metrics endpoints, and other high-frequency requests.
func WithLogFilters(f []slogecho.Filter) func(*Ekko) {
return func(ek *Ekko) {
ek.logFilters = f
}
}
// WithOtelMiddleware replaces the default OpenTelemetry middleware with a custom implementation.
// The default middleware provides distributed tracing for all requests. Use this option
// when you need custom trace configuration or want to disable tracing entirely.
func WithOtelMiddleware(mw echo.MiddlewareFunc) func(*Ekko) {
return func(ek *Ekko) {
ek.otelmiddleware = mw
}
}
// WithWriteTimeout configures the HTTP server write timeout.
// This is the maximum duration before timing out writes of the response.
// Default is 60 seconds. Should be longer than expected response generation time.
func WithWriteTimeout(t time.Duration) func(*Ekko) {
return func(ek *Ekko) {
ek.writeTimeout = t
}
}
// WithReadHeaderTimeout configures the HTTP server read header timeout.
// This is the amount of time allowed to read request headers.
// Default is 30 seconds. Should be sufficient for slow clients and large headers.
func WithReadHeaderTimeout(t time.Duration) func(*Ekko) {
return func(ek *Ekko) {
ek.readHeaderTimeout = t
}
}
// WithGzipConfig provides custom gzip compression configuration.
// By default, gzip compression is enabled with standard settings.
// Use this option to customize compression level, skip patterns, or disable compression.
func WithGzipConfig(gzipConfig *middleware.GzipConfig) func(*Ekko) {
return func(ek *Ekko) {
ek.gzipConfig = gzipConfig
}
}
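Taken together, the options reduce caller code to a few lines. The sketch below is illustrative only: ekko.New is referenced by the doc comments but its exact signature is not part of this diff, so the (name, options...) form and the error return are assumptions, as are the port and route.

package example

import (
	"context"
	"net/http"
	"os"
	"os/signal"
	"syscall"

	"github.com/labstack/echo/v4"
	"github.com/prometheus/client_golang/prometheus"

	"go.ntppool.org/common/ekko"
)

func run() error {
	// Cancelling this context triggers the graceful shutdown described in Start.
	ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGTERM)
	defer stop()

	// New's signature (name plus functional options, returning *Ekko and error) is assumed here.
	srv, err := ekko.New("example-service",
		ekko.WithPort(8080),
		ekko.WithPrometheus(prometheus.DefaultRegisterer),
		ekko.WithEchoSetup(func(e *echo.Echo) error {
			e.GET("/hello", func(c echo.Context) error {
				return c.String(http.StatusOK, "hello")
			})
			return nil
		}),
	)
	if err != nil {
		return err
	}
	return srv.Start(ctx)
}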

84
go.mod

@@ -1,54 +1,80 @@
module go.ntppool.org/common module go.ntppool.org/common
go 1.22.2 go 1.23.5
require ( require (
github.com/abh/certman v0.4.0 github.com/abh/certman v0.4.0
github.com/labstack/echo/v4 v4.12.0 github.com/go-sql-driver/mysql v1.9.3
github.com/labstack/echo-contrib v0.17.2
github.com/labstack/echo/v4 v4.13.3
github.com/oklog/ulid/v2 v2.1.0 github.com/oklog/ulid/v2 v2.1.0
github.com/prometheus/client_golang v1.19.1 github.com/prometheus/client_golang v1.20.5
github.com/remychantenay/slog-otel v1.3.1 github.com/prometheus/client_model v0.6.1
github.com/remychantenay/slog-otel v1.3.2
github.com/samber/slog-echo v1.14.8
github.com/samber/slog-multi v1.2.4
github.com/segmentio/kafka-go v0.4.47 github.com/segmentio/kafka-go v0.4.47
github.com/spf13/cobra v1.8.0 github.com/spf13/cobra v1.8.1
go.opentelemetry.io/otel v1.27.0 go.opentelemetry.io/contrib/bridges/otelslog v0.8.0
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.27.0 go.opentelemetry.io/contrib/exporters/autoexport v0.58.0
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.27.0 go.opentelemetry.io/contrib/instrumentation/github.com/labstack/echo/otelecho v0.58.0
go.opentelemetry.io/otel/sdk v1.27.0 go.opentelemetry.io/otel v1.33.0
go.opentelemetry.io/otel/trace v1.27.0 go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.33.0
golang.org/x/mod v0.18.0 go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.33.0
golang.org/x/sync v0.7.0 go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.33.0
go.opentelemetry.io/otel/log v0.9.0
go.opentelemetry.io/otel/sdk v1.33.0
go.opentelemetry.io/otel/sdk/log v0.9.0
go.opentelemetry.io/otel/trace v1.33.0
golang.org/x/mod v0.22.0
golang.org/x/net v0.33.0
golang.org/x/sync v0.10.0
google.golang.org/grpc v1.69.2
gopkg.in/yaml.v3 v3.0.1
) )
require ( require (
filippo.io/edwards25519 v1.1.0 // indirect
github.com/beorn7/perks v1.0.1 // indirect github.com/beorn7/perks v1.0.1 // indirect
github.com/cenkalti/backoff/v4 v4.3.0 // indirect github.com/cenkalti/backoff/v4 v4.3.0 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/fsnotify/fsnotify v1.7.0 // indirect github.com/fsnotify/fsnotify v1.8.0 // indirect
github.com/go-logr/logr v1.4.2 // indirect github.com/go-logr/logr v1.4.2 // indirect
github.com/go-logr/stdr v1.2.2 // indirect github.com/go-logr/stdr v1.2.2 // indirect
github.com/golang/protobuf v1.5.4 // indirect github.com/google/uuid v1.6.0 // indirect
github.com/grpc-ecosystem/grpc-gateway/v2 v2.20.0 // indirect github.com/grpc-ecosystem/grpc-gateway/v2 v2.25.1 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/klauspost/compress v1.17.9 // indirect github.com/klauspost/compress v1.17.11 // indirect
github.com/labstack/gommon v0.4.2 // indirect github.com/labstack/gommon v0.4.2 // indirect
github.com/mattn/go-colorable v0.1.13 // indirect github.com/mattn/go-colorable v0.1.13 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect github.com/mattn/go-isatty v0.0.20 // indirect
github.com/pierrec/lz4/v4 v4.1.21 // indirect github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/pierrec/lz4/v4 v4.1.22 // indirect
github.com/pkg/errors v0.9.1 // indirect github.com/pkg/errors v0.9.1 // indirect
github.com/prometheus/client_model v0.6.1 // indirect github.com/prometheus/common v0.61.0 // indirect
github.com/prometheus/common v0.54.0 // indirect
github.com/prometheus/procfs v0.15.1 // indirect github.com/prometheus/procfs v0.15.1 // indirect
github.com/samber/lo v1.47.0 // indirect
github.com/spf13/pflag v1.0.5 // indirect github.com/spf13/pflag v1.0.5 // indirect
github.com/valyala/bytebufferpool v1.0.0 // indirect github.com/valyala/bytebufferpool v1.0.0 // indirect
github.com/valyala/fasttemplate v1.2.2 // indirect github.com/valyala/fasttemplate v1.2.2 // indirect
go.opentelemetry.io/otel/metric v1.27.0 // indirect go.opentelemetry.io/auto/sdk v1.1.0 // indirect
go.opentelemetry.io/proto/otlp v1.3.1 // indirect go.opentelemetry.io/contrib/bridges/prometheus v0.58.0 // indirect
golang.org/x/crypto v0.24.0 // indirect go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploggrpc v0.9.0 // indirect
golang.org/x/net v0.26.0 // indirect go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploghttp v0.9.0 // indirect
golang.org/x/sys v0.21.0 // indirect go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v1.33.0 // indirect
golang.org/x/text v0.16.0 // indirect go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.33.0 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20240610135401-a8a62080eff3 // indirect go.opentelemetry.io/otel/exporters/prometheus v0.55.0 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20240610135401-a8a62080eff3 // indirect go.opentelemetry.io/otel/exporters/stdout/stdoutlog v0.9.0 // indirect
google.golang.org/grpc v1.64.0 // indirect go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.33.0 // indirect
google.golang.org/protobuf v1.34.2 // indirect go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.33.0 // indirect
go.opentelemetry.io/otel/metric v1.33.0 // indirect
go.opentelemetry.io/otel/sdk/metric v1.33.0 // indirect
go.opentelemetry.io/proto/otlp v1.4.0 // indirect
golang.org/x/crypto v0.31.0 // indirect
golang.org/x/sys v0.28.0 // indirect
golang.org/x/text v0.21.0 // indirect
golang.org/x/time v0.8.0 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20241223144023-3abc09e42ca8 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20241223144023-3abc09e42ca8 // indirect
google.golang.org/protobuf v1.36.1 // indirect
) )

250
go.sum

@@ -1,47 +1,49 @@
filippo.io/edwards25519 v1.1.0 h1:FNf4tywRC1HmFuKW5xopWpigGjJKiJSV0Cqo0cJWDaA=
filippo.io/edwards25519 v1.1.0/go.mod h1:BxyFTGdWcka3PhytdK4V28tE5sGfRvvvRV7EaN4VDT4=
github.com/abh/certman v0.4.0 h1:XHoDtb0YyRQPclaHMrBDlKTVZpNjTK6vhB0S3Bd/Sbs= github.com/abh/certman v0.4.0 h1:XHoDtb0YyRQPclaHMrBDlKTVZpNjTK6vhB0S3Bd/Sbs=
github.com/abh/certman v0.4.0/go.mod h1:x8QhpKVZifmV1Hdiwdg9gLo2GMPAxezz1s3zrVnPs+I= github.com/abh/certman v0.4.0/go.mod h1:x8QhpKVZifmV1Hdiwdg9gLo2GMPAxezz1s3zrVnPs+I=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM= github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw= github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/cenkalti/backoff/v4 v4.2.1 h1:y4OZtCnogmCPw98Zjyt5a6+QwPLGkiQsYW5oUqylYbM=
github.com/cenkalti/backoff/v4 v4.2.1/go.mod h1:Y3VNntkOUPxTVeUxJ/G5vcM//AlwfmyYozVcomhLiZE=
github.com/cenkalti/backoff/v4 v4.3.0 h1:MyRJ/UdXutAwSAT+s3wNd7MfTIcy71VQueUuFK343L8= github.com/cenkalti/backoff/v4 v4.3.0 h1:MyRJ/UdXutAwSAT+s3wNd7MfTIcy71VQueUuFK343L8=
github.com/cenkalti/backoff/v4 v4.3.0/go.mod h1:Y3VNntkOUPxTVeUxJ/G5vcM//AlwfmyYozVcomhLiZE= github.com/cenkalti/backoff/v4 v4.3.0/go.mod h1:Y3VNntkOUPxTVeUxJ/G5vcM//AlwfmyYozVcomhLiZE=
github.com/cespare/xxhash/v2 v2.2.0 h1:DC2CZ1Ep5Y4k3ZQ899DldepgrayRUGE6BBZ/cd9Cj44=
github.com/cespare/xxhash/v2 v2.2.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs= github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs= github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/cpuguy83/go-md2man/v2 v2.0.3/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o= github.com/cpuguy83/go-md2man/v2 v2.0.4/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c= github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/fsnotify/fsnotify v1.7.0 h1:8JEhPFa5W2WU7YfeZzPNqzMP6Lwt7L2715Ggo0nosvA= github.com/fsnotify/fsnotify v1.8.0 h1:dAwr6QBTBZIkG8roQaJjGof0pp0EeF+tNV7YBP3F/8M=
github.com/fsnotify/fsnotify v1.7.0/go.mod h1:40Bi/Hjc2AVfZrqy+aj+yEI+/bRxZnMJyTJwOpGvigM= github.com/fsnotify/fsnotify v1.8.0/go.mod h1:8jBTzvmWwFyi3Pb8djgCCO5IBqzKJ/Jwo8TRcHyHii0=
github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A= github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/logr v1.4.1 h1:pKouT5E8xu9zeFC39JXRDukb6JFQPXM5p5I91188VAQ=
github.com/go-logr/logr v1.4.1/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-logr/logr v1.4.2 h1:6pFjapn8bFcIbiKo3XT4j/BhANplGihG6tvd+8rYgrY= github.com/go-logr/logr v1.4.2 h1:6pFjapn8bFcIbiKo3XT4j/BhANplGihG6tvd+8rYgrY=
github.com/go-logr/logr v1.4.2/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY= github.com/go-logr/logr v1.4.2/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag= github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE= github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
github.com/go-sql-driver/mysql v1.9.3 h1:U/N249h2WzJ3Ukj8SowVFjdtZKfu9vlLZxjPXV1aweo=
github.com/go-sql-driver/mysql v1.9.3/go.mod h1:qn46aNg1333BRMNU69Lq93t8du/dwxI64Gl8i5p1WMU=
github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek= github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps= github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI= github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY= github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.19.1 h1:/c3QmbOGMGTOumP2iT/rCwB7b0QDGLKzqOmktBjT+Is= github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.19.1/go.mod h1:5SN9VR2LTsRFsrEC6FHgRbTWrTHu6tqPeKxEQv15giM= github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.20.0 h1:bkypFPDjIYGfCYD5mRBvpqxfYX1YCS1PXdKYWi8FsN0= github.com/grpc-ecosystem/grpc-gateway/v2 v2.25.1 h1:VNqngBF40hVlDloBruUehVYC3ArSgIyScOAyMRqBxRg=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.20.0/go.mod h1:P+Lt/0by1T8bfcF3z737NnSbmxQAppXMRziHUxPOC8k= github.com/grpc-ecosystem/grpc-gateway/v2 v2.25.1/go.mod h1:RBRO7fro65R6tjKzYgLAFo0t1QEXY1Dp+i/bvpRiqiQ=
github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8= github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw= github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
github.com/klauspost/compress v1.15.9/go.mod h1:PhcZ0MbTNciWF3rruxRgKxI5NkcHHrHUDtV4Yw2GlzU= github.com/klauspost/compress v1.15.9/go.mod h1:PhcZ0MbTNciWF3rruxRgKxI5NkcHHrHUDtV4Yw2GlzU=
github.com/klauspost/compress v1.17.7 h1:ehO88t2UGzQK66LMdE8tibEd1ErmzZjNEqWkjLAKQQg= github.com/klauspost/compress v1.17.11 h1:In6xLpyWOi1+C7tXUUWv2ot1QvBjxevKAaI6IXrJmUc=
github.com/klauspost/compress v1.17.7/go.mod h1:Di0epgTjJY877eYKx5yC51cX2A2Vl2ibi7bDH9ttBbw= github.com/klauspost/compress v1.17.11/go.mod h1:pMDklpSncoRMuLFrf1W9Ss9KT+0rH90U12bZKk7uwG0=
github.com/klauspost/compress v1.17.9 h1:6KIumPrER1LHsvBVuDa0r5xaG0Es51mhhB9BQB2qeMA= github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/klauspost/compress v1.17.9/go.mod h1:Di0epgTjJY877eYKx5yC51cX2A2Vl2ibi7bDH9ttBbw= github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
github.com/labstack/echo/v4 v4.11.4 h1:vDZmA+qNeh1pd/cCkEicDMrjtrnMGQ1QFI9gWN1zGq8= github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/labstack/echo/v4 v4.11.4/go.mod h1:noh7EvLwqDsmh/X/HWKPUl1AjzJrhyptRyEbQJfxen8= github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/labstack/echo/v4 v4.12.0 h1:IKpw49IMryVB2p1a4dzwlhP1O2Tf2E0Ir/450lH+kI0= github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
github.com/labstack/echo/v4 v4.12.0/go.mod h1:UP9Cr2DJXbOK3Kr9ONYzNowSh7HP0aG0ShAyycHSJvM= github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
github.com/labstack/echo-contrib v0.17.2 h1:K1zivqmtcC70X9VdBFdLomjPDEVHlrcAObqmuFj1c6w=
github.com/labstack/echo-contrib v0.17.2/go.mod h1:NeDh3PX7j/u+jR4iuDt1zHmWZSCz9c/p9mxXcDpyS8E=
github.com/labstack/echo/v4 v4.13.3 h1:pwhpCPrTl5qry5HRdM5FwdXnhXSLSY+WE+YQSeCaafY=
github.com/labstack/echo/v4 v4.13.3/go.mod h1:o90YNEeQWjDozo584l7AwhJMHN0bOC4tAfg+Xox9q5g=
github.com/labstack/gommon v0.4.2 h1:F8qTUNXgG1+6WQmqoUWnz8WiEU60mXVVw0P4ht1WRA0= github.com/labstack/gommon v0.4.2 h1:F8qTUNXgG1+6WQmqoUWnz8WiEU60mXVVw0P4ht1WRA0=
github.com/labstack/gommon v0.4.2/go.mod h1:QlUFxVM+SNXhDL/Z7YhocGIBYOiwB0mXm1+1bAPHPyU= github.com/labstack/gommon v0.4.2/go.mod h1:QlUFxVM+SNXhDL/Z7YhocGIBYOiwB0mXm1+1bAPHPyU=
github.com/mattn/go-colorable v0.1.13 h1:fFA4WZxdEF4tXPZVKMLwD8oUnCTTo08duU7wxecdEvA= github.com/mattn/go-colorable v0.1.13 h1:fFA4WZxdEF4tXPZVKMLwD8oUnCTTo08duU7wxecdEvA=
@@ -49,53 +51,49 @@ github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovk
github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM= github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY= github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y= github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/oklog/ulid/v2 v2.1.0 h1:+9lhoxAP56we25tyYETBBY1YLA2SaoLvUFgrP2miPJU= github.com/oklog/ulid/v2 v2.1.0 h1:+9lhoxAP56we25tyYETBBY1YLA2SaoLvUFgrP2miPJU=
github.com/oklog/ulid/v2 v2.1.0/go.mod h1:rcEKHmBBKfef9DhnvX7y1HZBYxjXb0cP5ExxNsTT1QQ= github.com/oklog/ulid/v2 v2.1.0/go.mod h1:rcEKHmBBKfef9DhnvX7y1HZBYxjXb0cP5ExxNsTT1QQ=
github.com/pborman/getopt v0.0.0-20170112200414-7148bc3a4c30/go.mod h1:85jBQOZwpVEaDAr341tbn15RS4fCAsIst0qp7i8ex1o= github.com/pborman/getopt v0.0.0-20170112200414-7148bc3a4c30/go.mod h1:85jBQOZwpVEaDAr341tbn15RS4fCAsIst0qp7i8ex1o=
github.com/pierrec/lz4/v4 v4.1.15/go.mod h1:gZWDp/Ze/IJXGXf23ltt2EXimqmTUXEy0GFuRQyBid4= github.com/pierrec/lz4/v4 v4.1.15/go.mod h1:gZWDp/Ze/IJXGXf23ltt2EXimqmTUXEy0GFuRQyBid4=
github.com/pierrec/lz4/v4 v4.1.21 h1:yOVMLb6qSIDP67pl/5F7RepeKYu/VmTyEXvuMI5d9mQ= github.com/pierrec/lz4/v4 v4.1.22 h1:cKFw6uJDK+/gfw5BcDL0JL5aBsAFdsIT18eRtLj7VIU=
github.com/pierrec/lz4/v4 v4.1.21/go.mod h1:gZWDp/Ze/IJXGXf23ltt2EXimqmTUXEy0GFuRQyBid4= github.com/pierrec/lz4/v4 v4.1.22/go.mod h1:gZWDp/Ze/IJXGXf23ltt2EXimqmTUXEy0GFuRQyBid4=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4= github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_golang v1.19.0 h1:ygXvpU1AoN1MhdzckN+PyD9QJOSD4x7kmXYlnfbA6JU= github.com/prometheus/client_golang v1.20.5 h1:cxppBPuYhUnsO6yo/aoRol4L7q7UFfdm+bR9r+8l63Y=
github.com/prometheus/client_golang v1.19.0/go.mod h1:ZRM9uEAypZakd+q/x7+gmsvXdURP+DABIEIjnmDdp+k= github.com/prometheus/client_golang v1.20.5/go.mod h1:PIEt8X02hGcP8JWbeHyeZ53Y/jReSnHgO035n//V5WE=
github.com/prometheus/client_golang v1.19.1 h1:wZWJDwK+NameRJuPGDhlnFgx8e8HN3XHQeLaYJFJBOE=
github.com/prometheus/client_golang v1.19.1/go.mod h1:mP78NwGzrVks5S2H6ab8+ZZGJLZUq1hoULYBAYBw1Ho=
github.com/prometheus/client_model v0.6.0 h1:k1v3CzpSRUTrKMppY35TLwPvxHqBu0bYgxZzqGIgaos=
github.com/prometheus/client_model v0.6.0/go.mod h1:NTQHnmxFpouOD0DpvP4XujX3CdOAGQPoaGhyTchlyt8=
github.com/prometheus/client_model v0.6.1 h1:ZKSh/rekM+n3CeS952MLRAdFwIKqeY8b62p8ais2e9E= github.com/prometheus/client_model v0.6.1 h1:ZKSh/rekM+n3CeS952MLRAdFwIKqeY8b62p8ais2e9E=
github.com/prometheus/client_model v0.6.1/go.mod h1:OrxVMOVHjw3lKMa8+x6HeMGkHMQyHDk9E3jmP2AmGiY= github.com/prometheus/client_model v0.6.1/go.mod h1:OrxVMOVHjw3lKMa8+x6HeMGkHMQyHDk9E3jmP2AmGiY=
github.com/prometheus/common v0.50.0 h1:YSZE6aa9+luNa2da6/Tik0q0A5AbR+U003TItK57CPQ= github.com/prometheus/common v0.61.0 h1:3gv/GThfX0cV2lpO7gkTUwZru38mxevy90Bj8YFSRQQ=
github.com/prometheus/common v0.50.0/go.mod h1:wHFBCEVWVmHMUpg7pYcOm2QUR/ocQdYSJVQJKnHc3xQ= github.com/prometheus/common v0.61.0/go.mod h1:zr29OCN/2BsJRaFwG8QOBr41D6kkchKbpeNH7pAjb/s=
github.com/prometheus/common v0.52.2 h1:LW8Vk7BccEdONfrJBDffQGRtpSzi5CQaRZGtboOO2ck=
github.com/prometheus/common v0.52.2/go.mod h1:lrWtQx+iDfn2mbH5GUzlH9TSHyfZpHkSiG1W7y3sF2Q=
github.com/prometheus/common v0.54.0 h1:ZlZy0BgJhTwVZUn7dLOkwCZHUkrAqd3WYtcFCWnM1D8=
github.com/prometheus/common v0.54.0/go.mod h1:/TQgMJP5CuVYveyT7n/0Ix8yLNNXy9yRSkhnLTHPDIQ=
github.com/prometheus/procfs v0.13.0 h1:GqzLlQyfsPbaEHaQkO7tbDlriv/4o5Hudv6OXHGKX7o=
github.com/prometheus/procfs v0.13.0/go.mod h1:cd4PFCR54QLnGKPaKGA6l+cfuNXtht43ZKY6tow0Y1g=
github.com/prometheus/procfs v0.15.1 h1:YagwOFzUgYfKKHX6Dr+sHT7km/hxC76UB0learggepc= github.com/prometheus/procfs v0.15.1 h1:YagwOFzUgYfKKHX6Dr+sHT7km/hxC76UB0learggepc=
github.com/prometheus/procfs v0.15.1/go.mod h1:fB45yRUv8NstnjriLhBQLuOUt+WW4BsoGhij/e3PBqk= github.com/prometheus/procfs v0.15.1/go.mod h1:fB45yRUv8NstnjriLhBQLuOUt+WW4BsoGhij/e3PBqk=
github.com/remychantenay/slog-otel v1.2.4 h1:Z/IwIgFPzzGqLTI460KbTuJZPm5U830dgu0gPiEpufA= github.com/remychantenay/slog-otel v1.3.2 h1:ZBx8qnwfLJ6e18Vba4e9Xp9B7khTmpIwFsU1sAmActw=
github.com/remychantenay/slog-otel v1.2.4/go.mod h1:Ar2ZBcRfIPyoKV/3Xq4oHmNgKc69juGB0QMUzo1vJOc= github.com/remychantenay/slog-otel v1.3.2/go.mod h1:gKW4tQ8cGOKoA+bi7wtYba/tcJ6Tc9XyQ/EW8gHA/2E=
github.com/remychantenay/slog-otel v1.3.0 h1:mppL97agkmwR416lKzltRQ9QRhrPdxwVidt0AnI3Ts4= github.com/rogpeppe/go-internal v1.13.1 h1:KvO1DLK/DRN07sQ1LQKScxyZJuNnedQ5/wKSR38lUII=
github.com/remychantenay/slog-otel v1.3.0/go.mod h1:L2VAe6WOMAk/kRzzuv2B/rWe/IDXAhUNae0919b4kHU= github.com/rogpeppe/go-internal v1.13.1/go.mod h1:uMEvuHeurkdAXX61udpOXGD/AzZDWNMNyH2VO9fmH0o=
github.com/remychantenay/slog-otel v1.3.1 h1:A+VjqHaUka/sg3meWKVZw9NuMCgYUu7tPLI87pvBHxs=
github.com/remychantenay/slog-otel v1.3.1/go.mod h1:smosUkTPRlTot5TDJ88qcmGz6tnBq6MJ1bb2ndO66uE=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM= github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/samber/lo v1.47.0 h1:z7RynLwP5nbyRscyvcD043DWYoOcYRv3mV8lBeqOCLc=
github.com/samber/lo v1.47.0/go.mod h1:RmDH9Ct32Qy3gduHQuKJ3gW1fMHAnE/fAzQuf6He5cU=
github.com/samber/slog-echo v1.14.8 h1:R7RF2LWEepsKtC7i6A6o9peS3Rz5HO8+H8OD+8mPD1I=
github.com/samber/slog-echo v1.14.8/go.mod h1:K21nbusPmai/MYm8PFactmZoFctkMmkeaTdXXyvhY1c=
github.com/samber/slog-multi v1.2.4 h1:k9x3JAWKJFPKffx+oXZ8TasaNuorIW4tG+TXxkt6Ry4=
github.com/samber/slog-multi v1.2.4/go.mod h1:ACuZ5B6heK57TfMVkVknN2UZHoFfjCwRxR0Q2OXKHlo=
github.com/segmentio/kafka-go v0.4.47 h1:IqziR4pA3vrZq7YdRxaT3w1/5fvIH5qpCwstUanQQB0= github.com/segmentio/kafka-go v0.4.47 h1:IqziR4pA3vrZq7YdRxaT3w1/5fvIH5qpCwstUanQQB0=
github.com/segmentio/kafka-go v0.4.47/go.mod h1:HjF6XbOKh0Pjlkr5GVZxt6CsjjwnmhVOfURM5KMd8qg= github.com/segmentio/kafka-go v0.4.47/go.mod h1:HjF6XbOKh0Pjlkr5GVZxt6CsjjwnmhVOfURM5KMd8qg=
github.com/spf13/cobra v1.8.0 h1:7aJaZx1B85qltLMc546zn58BxxfZdR/W22ej9CFoEf0= github.com/spf13/cobra v1.8.1 h1:e5/vxKd/rZsfSJMUX1agtjeTDf+qv1/JdBF8gg5k9ZM=
github.com/spf13/cobra v1.8.0/go.mod h1:WXLWApfZ71AjXPya3WOlMsY9yMs7YeiHhFVlvLyhcho= github.com/spf13/cobra v1.8.1/go.mod h1:wHxEcudfqmLYa8iTfL+OuZPbBZkmvliBWKIezN3kD9Y=
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA= github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg= github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw= github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU= github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.4 h1:CcVxjf3Q8PM0mHUKJCdn+eZZtm5yQwehR5yeSVQQcUk= github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo= github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/valyala/bytebufferpool v1.0.0 h1:GqA5TC/0021Y/b9FG4Oi9Mr3q7XYx6KllzawFIhcdPw= github.com/valyala/bytebufferpool v1.0.0 h1:GqA5TC/0021Y/b9FG4Oi9Mr3q7XYx6KllzawFIhcdPw=
github.com/valyala/bytebufferpool v1.0.0/go.mod h1:6bBcMArwyJ5K/AmCkWv1jt77kVWyCJ6HpOuEn7z0Csc= github.com/valyala/bytebufferpool v1.0.0/go.mod h1:6bBcMArwyJ5K/AmCkWv1jt77kVWyCJ6HpOuEn7z0Csc=
github.com/valyala/fasttemplate v1.2.2 h1:lxLXG0uE3Qnshl9QyaK6XJxMXlQZELvChBOCmQD0Loo= github.com/valyala/fasttemplate v1.2.2 h1:lxLXG0uE3Qnshl9QyaK6XJxMXlQZELvChBOCmQD0Loo=
@@ -107,70 +105,80 @@ github.com/xdg-go/scram v1.1.2/go.mod h1:RT/sEzTbU5y00aCK8UOx6R7YryM0iF1N2MOmC3k
github.com/xdg-go/stringprep v1.0.4 h1:XLI/Ng3O1Atzq0oBs3TWm+5ZVgkq2aqdlvP9JtoZ6c8= github.com/xdg-go/stringprep v1.0.4 h1:XLI/Ng3O1Atzq0oBs3TWm+5ZVgkq2aqdlvP9JtoZ6c8=
github.com/xdg-go/stringprep v1.0.4/go.mod h1:mPGuuIYwz7CmR2bT9j4GbQqutWS1zV24gijq1dTyGkM= github.com/xdg-go/stringprep v1.0.4/go.mod h1:mPGuuIYwz7CmR2bT9j4GbQqutWS1zV24gijq1dTyGkM=
github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY= github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
go.opentelemetry.io/otel v1.24.0 h1:0LAOdjNmQeSTzGBzduGe/rU4tZhMwL5rWgtp9Ku5Jfo= go.opentelemetry.io/auto/sdk v1.1.0 h1:cH53jehLUN6UFLY71z+NDOiNJqDdPRaXzTel0sJySYA=
go.opentelemetry.io/otel v1.24.0/go.mod h1:W7b9Ozg4nkF5tWI5zsXkaKKDjdVjpD4oAt9Qi/MArHo= go.opentelemetry.io/auto/sdk v1.1.0/go.mod h1:3wSPjt5PWp2RhlCcmmOial7AvC4DQqZb7a7wCow3W8A=
go.opentelemetry.io/otel v1.27.0 h1:9BZoF3yMK/O1AafMiQTVu0YDj5Ea4hPhxCs7sGva+cg= go.opentelemetry.io/contrib/bridges/otelslog v0.8.0 h1:G3sKsNueSdxuACINFxKrQeimAIst0A5ytA2YJH+3e1c=
go.opentelemetry.io/otel v1.27.0/go.mod h1:DMpAK8fzYRzs+bi3rS5REupisuqTheUlSZJ1WnZaPAQ= go.opentelemetry.io/contrib/bridges/otelslog v0.8.0/go.mod h1:ptJm3wizguEPurZgarDAwOeX7O0iMR7l+QvIVenhYdE=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.24.0 h1:t6wl9SPayj+c7lEIFgm4ooDBZVb01IhLB4InpomhRw8= go.opentelemetry.io/contrib/bridges/prometheus v0.58.0 h1:gQFwWiqm4JUvOjpdmyU0di+2pVQ8QNpk1Ak/54Y6NcY=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.24.0/go.mod h1:iSDOcsnSA5INXzZtwaBPrKp/lWu/V14Dd+llD0oI2EA= go.opentelemetry.io/contrib/bridges/prometheus v0.58.0/go.mod h1:CNyFi9PuvHtEJNmMFHaXZMuA4XmgRXIqpFcHdqzLvVU=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.27.0 h1:R9DE4kQ4k+YtfLI2ULwX82VtNQ2J8yZmA7ZIF/D+7Mc= go.opentelemetry.io/contrib/exporters/autoexport v0.58.0 h1:qVsDVgZd/bC6ZKDOHSjILpm0T/BWvASC9cQU3GYga78=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.27.0/go.mod h1:OQFyQVrDlbe+R7xrEyDr/2Wr67Ol0hRUgsfA+V5A95s= go.opentelemetry.io/contrib/exporters/autoexport v0.58.0/go.mod h1:bAv7mY+5qTsFPFaRpr75vDOocX09I36QH4Rg0slEG/U=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.24.0 h1:Xw8U6u2f8DK2XAkGRFV7BBLENgnTGX9i4rQRxJf+/vs= go.opentelemetry.io/contrib/instrumentation/github.com/labstack/echo/otelecho v0.58.0 h1:DBk8Zh+Yn3WtWCdGSx1pbEV9/naLtjG16c1zwQA2MBI=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.24.0/go.mod h1:6KW1Fm6R/s6Z3PGXwSJN2K4eT6wQB3vXX6CVnYX9NmM= go.opentelemetry.io/contrib/instrumentation/github.com/labstack/echo/otelecho v0.58.0/go.mod h1:DFx32LPclW1MNdSKIMrjjetsk0tJtYhAvuGjDIG2SKE=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.27.0 h1:QY7/0NeRPKlzusf40ZE4t1VlMKbqSNT7cJRYzWuja0s= go.opentelemetry.io/contrib/propagators/b3 v1.33.0 h1:ig/IsHyyoQ1F1d6FUDIIW5oYpsuTVtN16AyGOgdjAHQ=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.27.0/go.mod h1:HVkSiDhTM9BoUJU8qE6j2eSWLLXvi1USXjyd2BXT8PY= go.opentelemetry.io/contrib/propagators/b3 v1.33.0/go.mod h1:EsVYoNy+Eol5znb6wwN3XQTILyjl040gUpEnUSNZfsk=
go.opentelemetry.io/otel/metric v1.24.0 h1:6EhoGWWK28x1fbpA4tYTOWBkPefTDQnb8WSGXlc88kI= go.opentelemetry.io/otel v1.33.0 h1:/FerN9bax5LoK51X/sI0SVYrjSE0/yUL7DpxW4K3FWw=
go.opentelemetry.io/otel/metric v1.24.0/go.mod h1:VYhLe1rFfxuTXLgj4CBiyz+9WYBA8pNGJgDcSFRKBco= go.opentelemetry.io/otel v1.33.0/go.mod h1:SUUkR6csvUQl+yjReHu5uM3EtVV7MBm5FHKRlNx4I8I=
go.opentelemetry.io/otel/metric v1.27.0 h1:hvj3vdEKyeCi4YaYfNjv2NUje8FqKqUY8IlF0FxV/ik= go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploggrpc v0.9.0 h1:gA2gh+3B3NDvRFP30Ufh7CC3TtJRbUSf2TTD0LbCagw=
go.opentelemetry.io/otel/metric v1.27.0/go.mod h1:mVFgmRlhljgBiuk/MP/oKylr4hs85GZAylncepAX/ak= go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploggrpc v0.9.0/go.mod h1:smRTR+02OtrVGjvWE1sQxhuazozKc/BXvvqqnmOxy+s=
go.opentelemetry.io/otel/sdk v1.24.0 h1:YMPPDNymmQN3ZgczicBY3B6sf9n62Dlj9pWD3ucgoDw= go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploghttp v0.9.0 h1:Za0Z/j9Gf3Z9DKQ1choU9xI2noCxlkcyFFP2Ob3miEQ=
go.opentelemetry.io/otel/sdk v1.24.0/go.mod h1:KVrIYw6tEubO9E96HQpcmpTKDVn9gdv35HoYiQWGDFg= go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploghttp v0.9.0/go.mod h1:jMRB8N75meTNjDFQyJBA/2Z9en21CsxwMctn08NHY6c=
go.opentelemetry.io/otel/sdk v1.27.0 h1:mlk+/Y1gLPLn84U4tI8d3GNJmGT/eXe3ZuOXN9kTWmI= go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v1.33.0 h1:7F29RDmnlqk6B5d+sUqemt8TBfDqxryYW5gX6L74RFA=
go.opentelemetry.io/otel/sdk v1.27.0/go.mod h1:Ha9vbLwJE6W86YstIywK2xFfPjbWlCuwPtMkKdz/Y4A= go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v1.33.0/go.mod h1:ZiGDq7xwDMKmWDrN1XsXAj0iC7hns+2DhxBFSncNHSE=
go.opentelemetry.io/otel/trace v1.24.0 h1:CsKnnL4dUAr/0llH9FKuc698G04IrpWV0MQA/Y1YELI= go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.33.0 h1:bSjzTvsXZbLSWU8hnZXcKmEVaJjjnandxD0PxThhVU8=
go.opentelemetry.io/otel/trace v1.24.0/go.mod h1:HPc3Xr/cOApsBI154IU0OI0HJexz+aw5uPdbs3UCjNU= go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.33.0/go.mod h1:aj2rilHL8WjXY1I5V+ra+z8FELtk681deydgYT8ikxU=
go.opentelemetry.io/otel/trace v1.27.0 h1:IqYb813p7cmbHk0a5y6pD5JPakbVfftRXABGt5/Rscw= go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.33.0 h1:Vh5HayB/0HHfOQA7Ctx69E/Y/DcQSMPpKANYVMQ7fBA=
go.opentelemetry.io/otel/trace v1.27.0/go.mod h1:6RiD1hkAprV4/q+yd2ln1HG9GoPx39SuvvstaLBl+l4= go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.33.0/go.mod h1:cpgtDBaqD/6ok/UG0jT15/uKjAY8mRA53diogHBg3UI=
go.opentelemetry.io/proto/otlp v1.1.0 h1:2Di21piLrCqJ3U3eXGCTPHE9R8Nh+0uglSnOyxikMeI= go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.33.0 h1:5pojmb1U1AogINhN3SurB+zm/nIcusopeBNp42f45QM=
go.opentelemetry.io/proto/otlp v1.1.0/go.mod h1:GpBHCBWiqvVLDqmHZsoMM3C5ySeKTC7ej/RNTae6MdY= go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.33.0/go.mod h1:57gTHJSE5S1tqg+EKsLPlTWhpHMsWlVmer+LA926XiA=
go.opentelemetry.io/proto/otlp v1.3.1 h1:TrMUixzpM0yuc/znrFTP9MMRh8trP93mkCiDVeXrui0= go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.33.0 h1:wpMfgF8E1rkrT1Z6meFh1NDtownE9Ii3n3X2GJYjsaU=
go.opentelemetry.io/proto/otlp v1.3.1/go.mod h1:0X1WI4de4ZsLrrJNLAQbFeLCm3T7yBkR0XqQ7niQU+8= go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.33.0/go.mod h1:wAy0T/dUbs468uOlkT31xjvqQgEVXv58BRFWEgn5v/0=
go.opentelemetry.io/otel/exporters/prometheus v0.55.0 h1:sSPw658Lk2NWAv74lkD3B/RSDb+xRFx46GjkrL3VUZo=
go.opentelemetry.io/otel/exporters/prometheus v0.55.0/go.mod h1:nC00vyCmQixoeaxF6KNyP42II/RHa9UdruK02qBmHvI=
go.opentelemetry.io/otel/exporters/stdout/stdoutlog v0.9.0 h1:iI15wfQb5ZtAVTdS5WROxpYmw6Kjez3hT9SuzXhrgGQ=
go.opentelemetry.io/otel/exporters/stdout/stdoutlog v0.9.0/go.mod h1:yepwlNzVVxHWR5ugHIrll+euPQPq4pvysHTDr/daV9o=
go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.33.0 h1:FiOTYABOX4tdzi8A0+mtzcsTmi6WBOxk66u0f1Mj9Gs=
go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.33.0/go.mod h1:xyo5rS8DgzV0Jtsht+LCEMwyiDbjpsxBpWETwFRF0/4=
go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.33.0 h1:W5AWUn/IVe8RFb5pZx1Uh9Laf/4+Qmm4kJL5zPuvR+0=
go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.33.0/go.mod h1:mzKxJywMNBdEX8TSJais3NnsVZUaJ+bAy6UxPTng2vk=
go.opentelemetry.io/otel/log v0.9.0 h1:0OiWRefqJ2QszpCiqwGO0u9ajMPe17q6IscQvvp3czY=
go.opentelemetry.io/otel/log v0.9.0/go.mod h1:WPP4OJ+RBkQ416jrFCQFuFKtXKD6mOoYCQm6ykK8VaU=
go.opentelemetry.io/otel/metric v1.33.0 h1:r+JOocAyeRVXD8lZpjdQjzMadVZp2M4WmQ+5WtEnklQ=
go.opentelemetry.io/otel/metric v1.33.0/go.mod h1:L9+Fyctbp6HFTddIxClbQkjtubW6O9QS3Ann/M82u6M=
go.opentelemetry.io/otel/sdk v1.33.0 h1:iax7M131HuAm9QkZotNHEfstof92xM+N8sr3uHXc2IM=
go.opentelemetry.io/otel/sdk v1.33.0/go.mod h1:A1Q5oi7/9XaMlIWzPSxLRWOI8nG3FnzHJNbiENQuihM=
go.opentelemetry.io/otel/sdk/log v0.9.0 h1:YPCi6W1Eg0vwT/XJWsv2/PaQ2nyAJYuF7UUjQSBe3bc=
go.opentelemetry.io/otel/sdk/log v0.9.0/go.mod h1:y0HdrOz7OkXQBuc2yjiqnEHc+CRKeVhRE3hx4RwTmV4=
go.opentelemetry.io/otel/sdk/metric v1.33.0 h1:Gs5VK9/WUJhNXZgn8MR6ITatvAmKeIuCtNbsP3JkNqU=
go.opentelemetry.io/otel/sdk/metric v1.33.0/go.mod h1:dL5ykHZmm1B1nVRk9dDjChwDmt81MjVp3gLkQRwKf/Q=
go.opentelemetry.io/otel/trace v1.33.0 h1:cCJuF7LRjUFso9LPnEAHJDB2pqzp+hbO8eu1qqW2d/s=
go.opentelemetry.io/otel/trace v1.33.0/go.mod h1:uIcdVUZMpTAmz0tI1z04GoVSezK37CbGV4fr1f2nBck=
go.opentelemetry.io/proto/otlp v1.4.0 h1:TA9WRvW6zMwP+Ssb6fLoUIuirti1gGbP28GcKG1jgeg=
go.opentelemetry.io/proto/otlp v1.4.0/go.mod h1:PPBWZIP98o2ElSqI35IHfu7hIhSwvc5N38Jw8pXuGFY=
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc= golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.14.0/go.mod h1:MVFd36DqK4CsrnJYDkBA3VC4m2GkXAM0PvzMCn4JQf4= golang.org/x/crypto v0.14.0/go.mod h1:MVFd36DqK4CsrnJYDkBA3VC4m2GkXAM0PvzMCn4JQf4=
golang.org/x/crypto v0.21.0 h1:X31++rzVUdKhX5sWmSOFZxx8UW/ldWx55cbf08iNAMA= golang.org/x/crypto v0.31.0 h1:ihbySMvVjLAeSH1IbfcRTkD/iNscyz8rGzjF/E5hV6U=
golang.org/x/crypto v0.21.0/go.mod h1:0BP7YvVV9gBbVKyeTG0Gyn+gZm94bibOW5BjDEYAOMs= golang.org/x/crypto v0.31.0/go.mod h1:kDsLvtWBEx7MV9tJOj9bnXsPbxwJQ6csT/x4KIN4Ssk=
golang.org/x/crypto v0.22.0 h1:g1v0xeRhjcugydODzvb3mEM9SQ0HGp9s/nh3COQ/C30=
golang.org/x/crypto v0.22.0/go.mod h1:vr6Su+7cTlO45qkww3VDJlzDn0ctJvRgYbC2NvXHt+M=
golang.org/x/crypto v0.24.0 h1:mnl8DM0o513X8fdIkmyFE/5hTYxbwYOjDS/+rK6qpRI=
golang.org/x/crypto v0.24.0/go.mod h1:Z1PMYSOR5nyMcyAVAIQSKCDwalqy85Aqn1x3Ws4L5DM=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4= golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/mod v0.16.0 h1:QX4fJ0Rr5cPQCF7O9lh9Se4pmwfwskqZfq5moyldzic= golang.org/x/mod v0.22.0 h1:D4nJWe9zXqHOmWqj4VMOJhvzj7bEZg4wEYa759z1pH4=
golang.org/x/mod v0.16.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c= golang.org/x/mod v0.22.0/go.mod h1:6SkKJ3Xj0I0BrPOZoBy3bdMptDDU9oJrpohJ3eWZ1fY=
golang.org/x/mod v0.17.0 h1:zY54UmvipHiNd+pm+m0x9KhZ9hl1/7QNMyxXbc6ICqA=
golang.org/x/mod v0.17.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
golang.org/x/mod v0.18.0 h1:5+9lSbEzPSdWkH32vYPBwEpX8KwDbM52Ud9xBUvNlb0=
golang.org/x/mod v0.18.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg= golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c= golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs= golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg= golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg=
golang.org/x/net v0.17.0/go.mod h1:NxSsAGuq816PNPmqtQdLE42eU2Fs7NoRIZrHJAlaCOE= golang.org/x/net v0.17.0/go.mod h1:NxSsAGuq816PNPmqtQdLE42eU2Fs7NoRIZrHJAlaCOE=
golang.org/x/net v0.22.0 h1:9sGLhx7iRIHEiX0oAJ3MRZMUCElJgy7Br1nO+AMN3Tc= golang.org/x/net v0.33.0 h1:74SYHlV8BIgHIFC/LrYkOGIwL19eTYXQ5wc6TBuO36I=
golang.org/x/net v0.22.0/go.mod h1:JKghWKKOSdJwpW2GEx0Ja7fmaKnMsbu+MWVZTokSYmg= golang.org/x/net v0.33.0/go.mod h1:HXLR5J+9DxmrqMwG9qjGCxZ+zKXxBru04zlTvWlWuN4=
golang.org/x/net v0.24.0 h1:1PcaxkF854Fu3+lvBIx5SYn9wRlBzzcnHZSiaFFAb0w=
golang.org/x/net v0.24.0/go.mod h1:2Q7sJY5mzlzWjKtYUEXSlBWCdyaioyXzRB2RtU8KVE8=
golang.org/x/net v0.26.0 h1:soB7SVo0PWrY4vPW/+ay0jKDNScG2X9wFeYlXIvJsOQ=
golang.org/x/net v0.26.0/go.mod h1:5YKkiSynbBIh3p6iOc/vibscux0x38BZDkn8sCUPxHE=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.6.0 h1:5BMeUDZ7vkXGfEr1x9B4bRcTH4lpkTkpdh0T/J+qjbQ= golang.org/x/sync v0.10.0 h1:3NQrjDixjgGwUOCaF8w2+VYHv0Ve/vGYSbdkTa98gmQ=
golang.org/x/sync v0.6.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk= golang.org/x/sync v0.10.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.7.0 h1:YsImfSBoP9QPYL0xyKJPq0gcaJdG3rInoqxTWbfQu9M=
golang.org/x/sync v0.7.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
@@ -181,12 +189,8 @@ golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.13.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.13.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.18.0 h1:DBdB3niSjOA/O0blCZBqDefyWNYveAYMNF1Wum0DYQ4= golang.org/x/sys v0.28.0 h1:Fksou7UEQUWlKvIdsqzJmUmCX3cZuD2+P3XyyzwMhlA=
golang.org/x/sys v0.18.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= golang.org/x/sys v0.28.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.19.0 h1:q5f1RH2jigJ1MoAWp2KTp3gm5zAGFUTarQZ5U386+4o=
golang.org/x/sys v0.19.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.21.0 h1:rF+pYz3DAGSQAxAu1CbC7catZg4ebC4UIeIhKxBZvws=
golang.org/x/sys v0.21.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k= golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
@@ -199,38 +203,26 @@ golang.org/x/text v0.3.8/go.mod h1:E6s5w1FMmriuDzIBO73fBruAKo1PCIq6d2Q6DHfQ8WQ=
golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8= golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE= golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
golang.org/x/text v0.14.0 h1:ScX5w1eTa3QqT8oi6+ziP7dTV1S2+ALU0bI+0zXKWiQ= golang.org/x/text v0.21.0 h1:zyQAAkrwaneQ066sspRyJaG9VNi/YJ1NfzcGB3hZ/qo=
golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU= golang.org/x/text v0.21.0/go.mod h1:4IBbMaMmOPCJ8SecivzSH54+73PCFmPWxNTLm+vZkEQ=
golang.org/x/text v0.16.0 h1:a94ExnEXNtEwYLGJSIUxnWoxoRz/ZcCsV63ROupILh4= golang.org/x/time v0.8.0 h1:9i3RxcPv3PZnitoVGMPDKZSq1xW1gK1Xy3ArNOGZfEg=
golang.org/x/text v0.16.0/go.mod h1:GhwF1Be+LQoKShO3cGOHzqOgRrGaYc9AvblQOmPVHnI= golang.org/x/time v0.8.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc= golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU= golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/genproto/googleapis/api v0.0.0-20240314234333-6e1732d8331c h1:kaI7oewGK5YnVwj+Y+EJBO/YN1ht8iTL9XkFHtVZLsc= google.golang.org/genproto/googleapis/api v0.0.0-20241223144023-3abc09e42ca8 h1:st3LcW/BPi75W4q1jJTEor/QWwbNlPlDG0JTn6XhZu0=
google.golang.org/genproto/googleapis/api v0.0.0-20240314234333-6e1732d8331c/go.mod h1:VQW3tUculP/D4B+xVCo+VgSq8As6wA9ZjHl//pmk+6s= google.golang.org/genproto/googleapis/api v0.0.0-20241223144023-3abc09e42ca8/go.mod h1:klhJGKFyG8Tn50enBn7gizg4nXGXJ+jqEREdCWaPcV4=
google.golang.org/genproto/googleapis/api v0.0.0-20240401170217-c3f982113cda h1:b6F6WIV4xHHD0FA4oIyzU6mHWg2WI2X1RBehwa5QN38= google.golang.org/genproto/googleapis/rpc v0.0.0-20241223144023-3abc09e42ca8 h1:TqExAhdPaB60Ux47Cn0oLV07rGnxZzIsaRhQaqS666A=
google.golang.org/genproto/googleapis/api v0.0.0-20240401170217-c3f982113cda/go.mod h1:AHcE/gZH76Bk/ROZhQphlRoWo5xKDEtz3eVEO1LfA8c= google.golang.org/genproto/googleapis/rpc v0.0.0-20241223144023-3abc09e42ca8/go.mod h1:lcTa1sDdWEIHMWlITnIczmw5w60CF9ffkb8Z+DVmmjA=
google.golang.org/genproto/googleapis/api v0.0.0-20240610135401-a8a62080eff3 h1:QW9+G6Fir4VcRXVH8x3LilNAb6cxBGLa6+GM4hRwexE= google.golang.org/grpc v1.69.2 h1:U3S9QEtbXC0bYNvRtcoklF3xGtLViumSYxWykJS+7AU=
google.golang.org/genproto/googleapis/api v0.0.0-20240610135401-a8a62080eff3/go.mod h1:kdrSS/OiLkPrNUpzD4aHgCq2rVuC/YRxok32HXZ4vRE= google.golang.org/grpc v1.69.2/go.mod h1:vyjdE6jLBI76dgpDojsFGNaHlxdjXN9ghpnd2o7JGZ4=
google.golang.org/genproto/googleapis/rpc v0.0.0-20240314234333-6e1732d8331c h1:lfpJ/2rWPa/kJgxyyXM8PrNnfCzcmxJ265mADgwmvLI= google.golang.org/protobuf v1.36.1 h1:yBPeRvTftaleIgM3PZ/WBIZ7XM/eEYAaEyCwvyjq/gk=
google.golang.org/genproto/googleapis/rpc v0.0.0-20240314234333-6e1732d8331c/go.mod h1:WtryC6hu0hhx87FDGxWCDptyssuo68sk10vYjF+T9fY= google.golang.org/protobuf v1.36.1/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE=
google.golang.org/genproto/googleapis/rpc v0.0.0-20240401170217-c3f982113cda h1:LI5DOvAxUPMv/50agcLLoo+AdWc1irS9Rzz4vPuD1V4=
google.golang.org/genproto/googleapis/rpc v0.0.0-20240401170217-c3f982113cda/go.mod h1:WtryC6hu0hhx87FDGxWCDptyssuo68sk10vYjF+T9fY=
google.golang.org/genproto/googleapis/rpc v0.0.0-20240610135401-a8a62080eff3 h1:9Xyg6I9IWQZhRVfCWjKK+l6kI0jHcPesVlMnT//aHNo=
google.golang.org/genproto/googleapis/rpc v0.0.0-20240610135401-a8a62080eff3/go.mod h1:EfXuqaE1J41VCDicxHzUDm+8rk+7ZdXzHV0IhO/I6s0=
google.golang.org/grpc v1.62.1 h1:B4n+nfKzOICUXMgyrNd19h/I9oH0L1pizfk1d4zSgTk=
google.golang.org/grpc v1.62.1/go.mod h1:IWTG0VlJLCh1SkC58F7np9ka9mx/WNkjl4PGJaiq+QE=
google.golang.org/grpc v1.63.0 h1:WjKe+dnvABXyPJMD7KDNLxtoGk5tgk+YFWN6cBWjZE8=
google.golang.org/grpc v1.63.0/go.mod h1:WAX/8DgncnokcFUldAxq7GeB5DXHDbMF+lLvDomNkRA=
google.golang.org/grpc v1.64.0 h1:KH3VH9y/MgNQg1dE7b3XfVK0GsPSIzJwdF617gUSbvY=
google.golang.org/grpc v1.64.0/go.mod h1:oxjF8E3FBnjp+/gVFYdWacaLDx9na1aqy9oovLpxQYg=
google.golang.org/protobuf v1.33.0 h1:uNO2rsAINq/JlFpSdYEKIZ0uKD/R9cpdv0T+yoGwGmI=
google.golang.org/protobuf v1.33.0/go.mod h1:c6P6GXX6sHbq/GpV6MGZEdwhWPcYBgnhAHhKbcUYpos=
google.golang.org/protobuf v1.34.2 h1:6xV6lTsCfpGD21XK49h7MhtcApnLqkfYgPcdHftf6hg=
google.golang.org/protobuf v1.34.2/go.mod h1:qYOHts0dSfpeUzUFpOMr/WGzszTmLH+DiWniOlNbLDw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA= gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=

View File

@@ -1,3 +1,9 @@
// Package health provides a standalone HTTP server for health checks.
//
// This package implements a simple health check server that can be used
// to expose health status endpoints for monitoring and load balancing.
// It supports custom health check handlers and provides structured logging
// with graceful shutdown capabilities.
package health package health
import ( import (
@@ -11,11 +17,19 @@ import (
"golang.org/x/sync/errgroup" "golang.org/x/sync/errgroup"
) )
// Server is a standalone HTTP server dedicated to health checks.
// It runs separately from the main application server to ensure health
// checks remain available even if the main server is experiencing issues.
//
// The server includes built-in timeouts, graceful shutdown, and structured
// logging for monitoring and debugging health check behavior.
type Server struct { type Server struct {
log *slog.Logger log *slog.Logger
healthFn http.HandlerFunc healthFn http.HandlerFunc
} }
// NewServer creates a new health check server with the specified health handler.
// If healthFn is nil, a default handler that returns HTTP 200 "ok" is used.
func NewServer(healthFn http.HandlerFunc) *Server { func NewServer(healthFn http.HandlerFunc) *Server {
if healthFn == nil { if healthFn == nil {
healthFn = basicHealth healthFn = basicHealth
@@ -27,12 +41,15 @@ func NewServer(healthFn http.HandlerFunc) *Server {
return srv return srv
} }
// SetLogger replaces the default logger with a custom one.
func (srv *Server) SetLogger(log *slog.Logger) { func (srv *Server) SetLogger(log *slog.Logger) {
srv.log = log srv.log = log
} }
// Listen starts the health server on the specified port and blocks until ctx is cancelled.
// The server exposes the health handler at "/__health" with graceful shutdown support.
func (srv *Server) Listen(ctx context.Context, port int) error { func (srv *Server) Listen(ctx context.Context, port int) error {
srv.log.Info("Starting health listener", "port", port) srv.log.Info("starting health listener", "port", port)
serveMux := http.NewServeMux() serveMux := http.NewServeMux()
@@ -59,11 +76,10 @@ func (srv *Server) Listen(ctx context.Context, port int) error {
<-ctx.Done() <-ctx.Done()
ctx, cancel := context.WithTimeout(ctx, 2*time.Second)
defer cancel()
g.Go(func() error { g.Go(func() error {
if err := hsrv.Shutdown(ctx); err != nil { shCtx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
defer cancel()
if err := hsrv.Shutdown(shCtx); err != nil {
srv.log.Error("health check server shutdown failed", "err", err) srv.log.Error("health check server shutdown failed", "err", err)
return err return err
} }
@@ -73,8 +89,7 @@ func (srv *Server) Listen(ctx context.Context, port int) error {
return g.Wait() return g.Wait()
} }
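A short usage sketch of the server documented above: a custom readiness check handed to NewServer, served at /__health until the context is cancelled. The import path and port are illustrative.

package example

import (
	"context"
	"net/http"
	"sync/atomic"

	"go.ntppool.org/common/health"
)

func runHealth(ctx context.Context, ready *atomic.Bool) error {
	srv := health.NewServer(func(w http.ResponseWriter, r *http.Request) {
		if !ready.Load() {
			w.WriteHeader(http.StatusServiceUnavailable)
			return
		}
		w.WriteHeader(http.StatusOK)
		_, _ = w.Write([]byte("ok"))
	})
	// Listens on :8091 and answers at /__health until ctx is cancelled.
	return srv.Listen(ctx, 8091)
}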
// HealthCheckListener runs simple http server on the specified port for // HealthCheckListener runs a simple HTTP server on the specified port for health check probes.
// health check probes
func HealthCheckListener(ctx context.Context, port int, log *slog.Logger) error { func HealthCheckListener(ctx context.Context, port int, log *slog.Logger) error {
srv := NewServer(nil) srv := NewServer(nil)
srv.SetLogger(log) srv.SetLogger(log)

View File

@@ -8,7 +8,6 @@ import (
) )
func TestHealthHandler(t *testing.T) { func TestHealthHandler(t *testing.T) {
req := httptest.NewRequest(http.MethodGet, "/__health", nil) req := httptest.NewRequest(http.MethodGet, "/__health", nil)
w := httptest.NewRecorder() w := httptest.NewRecorder()

View File

@@ -1,3 +1,32 @@
// Package kafconn provides a Kafka client wrapper with TLS support for secure log streaming.
//
// This package handles Kafka connections with mutual TLS authentication for the NTP Pool
// project's log streaming infrastructure. It provides factories for creating Kafka readers
// and writers with automatic broker discovery, TLS configuration, and connection management.
//
// The package is designed specifically for the NTP Pool pipeline infrastructure and includes
// hardcoded bootstrap servers and group configurations. It uses certman for automatic
// certificate renewal and provides compression and batching optimizations.
//
// Key features:
// - Mutual TLS authentication with automatic certificate renewal
// - Broker discovery and connection pooling
// - Reader and writer factory methods with optimized configurations
// - LZ4 compression for efficient data transfer
// - Configurable batch sizes and load balancing
//
// Example usage:
//
// tlsSetup := kafconn.TLSSetup{
// CA: "/path/to/ca.pem",
// Cert: "/path/to/client.pem",
// Key: "/path/to/client.key",
// }
// kafka, err := kafconn.NewKafka(ctx, tlsSetup)
// if err != nil {
// log.Fatal(err)
// }
// writer, err := kafka.NewWriter("logs")
package kafconn package kafconn
import ( import (
@@ -24,12 +53,17 @@ const (
// kafkaMinBatchSize = 1000 // kafkaMinBatchSize = 1000
) )
// TLSSetup contains file paths for TLS certificate configuration.
// All fields are required for establishing secure Kafka connections.
type TLSSetup struct { type TLSSetup struct {
CA string CA string // Path to CA certificate file for server verification
Key string Key string // Path to client private key file
Cert string Cert string // Path to client certificate file
} }
// Kafka represents a configured Kafka client with TLS support.
// It manages connections, brokers, and provides factory methods for readers and writers.
// The client handles broker discovery, connection pooling, and TLS configuration automatically.
type Kafka struct { type Kafka struct {
tls TLSSetup tls TLSSetup
@@ -42,11 +76,9 @@ type Kafka struct {
l *log.Logger l *log.Logger
// wr *kafka.Writer // wr *kafka.Writer
} }
func (k *Kafka) tlsConfig() (*tls.Config, error) { func (k *Kafka) tlsConfig() (*tls.Config, error) {
cm, err := certman.New(k.tls.Cert, k.tls.Key) cm, err := certman.New(k.tls.Cert, k.tls.Key)
if err != nil { if err != nil {
return nil, err return nil, err
@@ -118,6 +150,19 @@ func (k *Kafka) kafkaTransport(ctx context.Context) (*kafka.Transport, error) {
return transport, nil return transport, nil
} }
// NewKafka creates a new Kafka client with TLS configuration and establishes initial connections.
// It performs broker discovery, validates TLS certificates, and prepares the client for creating
// readers and writers.
//
// The function validates TLS configuration, establishes a connection to the bootstrap server,
// discovers all available brokers, and configures transport layers for optimal performance.
//
// Parameters:
// - ctx: Context for connection establishment and timeouts
// - tls: TLS configuration with paths to CA, certificate, and key files
//
// Returns a configured Kafka client ready for creating readers and writers, or an error
// if TLS setup fails, connection cannot be established, or broker discovery fails.
func NewKafka(ctx context.Context, tls TLSSetup) (*Kafka, error) {
l := log.New(os.Stdout, "kafka: ", log.Ldate|log.Ltime|log.LUTC|log.Lmsgprefix|log.Lmicroseconds)
@@ -173,6 +218,12 @@ func NewKafka(ctx context.Context, tls TLSSetup) (*Kafka, error) {
return k, nil
}
// NewReader creates a new Kafka reader with the client's broker list and TLS configuration.
// The provided config is enhanced with the discovered brokers and configured dialer.
// The reader supports automatic offset management, consumer group coordination, and reconnection.
//
// The caller should configure the reader's Topic, GroupID, and other consumer-specific settings
// in the provided config. The client automatically sets Brokers and Dialer fields.
func (k *Kafka) NewReader(config kafka.ReaderConfig) *kafka.Reader {
config.Brokers = k.brokerAddrs()
config.Dialer = k.dialer
@@ -188,8 +239,17 @@ func (k *Kafka) brokerAddrs() []string {
return addrs
}
// NewWriter creates a new Kafka writer for the specified topic with optimized configuration.
// The writer uses LZ4 compression, least-bytes load balancing, and batching for performance.
//
// Configuration includes:
// - Batch size: 2000 messages for efficient throughput
// - Compression: LZ4 for fast compression with good ratios
// - Balancer: LeastBytes for optimal partition distribution
// - Transport: TLS-configured transport with connection pooling
//
// The writer is ready for immediate use and handles connection management automatically.
func (k *Kafka) NewWriter(topic string) (*kafka.Writer, error) {
// https://pkg.go.dev/github.com/segmentio/kafka-go#Writer
w := &kafka.Writer{
Addr: kafka.TCP(k.brokerAddrs()...),
@@ -205,6 +265,12 @@ func (k *Kafka) NewWriter(topic string) (*kafka.Writer, error) {
return w, nil
}
// CheckPartitions verifies that the Kafka connection can read partition metadata.
// This method is useful for health checks and connection validation.
//
// Returns an error if partition metadata cannot be retrieved, which typically
// indicates connection problems, authentication failures, or broker unavailability.
// Logs a warning if no partitions are available but does not return an error.
func (k *Kafka) CheckPartitions() error {
partitions, err := k.conn.ReadPartitions()
if err != nil {

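For reference, a minimal consumer-side sketch against the API documented above. The TLS paths, topic, and group ID are illustrative, and kafkago stands in for the github.com/segmentio/kafka-go import this package wraps:

kc, err := kafconn.NewKafka(ctx, kafconn.TLSSetup{
CA: "/path/to/ca.pem",
Cert: "/path/to/client.pem",
Key: "/path/to/client.key",
})
if err != nil {
log.Fatal(err)
}
// Topic and GroupID are consumer-specific; Brokers and Dialer are filled in by the client.
reader := kc.NewReader(kafkago.ReaderConfig{
Topic: "logs",
GroupID: "example-group",
})
msg, err := reader.ReadMessage(ctx)
_ = msg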
logger/logfmt.go Normal file

@@ -0,0 +1,78 @@
package logger
import (
"bytes"
"context"
"log/slog"
"slices"
"strings"
"sync"
)
type logfmt struct {
buf *bytes.Buffer
txt slog.Handler
next slog.Handler
mu sync.Mutex
}
func newLogFmtHandler(next slog.Handler) slog.Handler {
buf := bytes.NewBuffer([]byte{})
h := &logfmt{
buf: buf,
next: next,
txt: slog.NewTextHandler(buf, &slog.HandlerOptions{
ReplaceAttr: func(groups []string, a slog.Attr) slog.Attr {
if a.Key == slog.TimeKey && len(groups) == 0 {
return slog.Attr{}
}
if a.Key == slog.LevelKey && len(groups) == 0 {
return slog.Attr{}
}
return a
},
}),
}
return h
}
func (h *logfmt) Enabled(ctx context.Context, lvl slog.Level) bool {
return h.next.Enabled(ctx, lvl)
}
func (h *logfmt) WithAttrs(attrs []slog.Attr) slog.Handler {
return &logfmt{
buf: bytes.NewBuffer([]byte{}),
next: h.next.WithAttrs(slices.Clone(attrs)),
txt: h.txt.WithAttrs(slices.Clone(attrs)),
}
}
func (h *logfmt) WithGroup(g string) slog.Handler {
if g == "" {
return h
}
return &logfmt{
buf: bytes.NewBuffer([]byte{}),
next: h.next.WithGroup(g),
txt: h.txt.WithGroup(g),
}
}
func (h *logfmt) Handle(ctx context.Context, r slog.Record) error {
h.mu.Lock()
defer h.mu.Unlock()
if h.buf.Len() > 0 {
panic("buffer wasn't empty")
}
h.txt.Handle(ctx, r)
r.Message = h.buf.String()
r.Message = strings.TrimSuffix(r.Message, "\n")
h.buf.Reset()
return h.next.Handle(ctx, r)
}

logger/logfmt_test.go Normal file

@@ -0,0 +1,41 @@
package logger
import (
"bytes"
"encoding/json"
"log/slog"
"strings"
"testing"
)
func TestLogFmt(t *testing.T) {
var buf bytes.Buffer
jsonh := slog.NewJSONHandler(&buf, nil)
h := newLogFmtHandler(jsonh)
log := slog.New(h)
log.Info("test message", "id", 1010)
t.Logf("buf: %s", buf.String())
msg := map[string]any{}
err := json.Unmarshal(buf.Bytes(), &msg)
if err != nil {
t.Logf("couldn't unmarshal json log: %s", err)
t.Fail()
}
if msgTxt, ok := msg["msg"].(string); ok {
if !strings.Contains(msgTxt, "id=1010") {
t.Log("didn't find id in msg value")
t.Fail()
}
if strings.Contains(msgTxt, "level=") {
t.Log("msg value contains level=")
t.Fail()
}
} else {
t.Log("didn't find message in output")
t.Fail()
}
}


@@ -1,3 +1,25 @@
// Package logger provides structured logging with OpenTelemetry trace integration.
//
// This package offers multiple logging configurations for different deployment scenarios:
// - Text logging to stderr with optional timestamp removal for systemd
// - OTLP (OpenTelemetry Protocol) logging for observability pipelines
// - Multi-logger setup that outputs to both text and OTLP simultaneously
// - Context-aware logging with trace ID correlation
//
// The package automatically detects systemd environments and adjusts timestamp handling
// accordingly. It supports debug level configuration via environment variables and
// provides compatibility bridges for legacy logging interfaces.
//
// Key features:
// - Automatic OpenTelemetry trace and span ID inclusion in log entries
// - Configurable log levels via DEBUG environment variable (with optional prefix)
// - Systemd-compatible output (no timestamps when INVOCATION_ID is present)
// - Thread-safe logger setup with sync.Once protection
// - Context propagation for request-scoped logging
//
// Environment variables:
// - DEBUG: Enable debug level logging (configurable prefix via ConfigPrefix)
// - INVOCATION_ID: Systemd detection for timestamp handling
package logger
import (
@@ -8,77 +30,183 @@ import (
"strconv"
"sync"
slogtraceid "github.com/remychantenay/slog-otel"
slogmulti "github.com/samber/slog-multi"
"go.opentelemetry.io/contrib/bridges/otelslog"
)
// ConfigPrefix allows customizing the environment variable prefix for configuration.
// When set, environment variables like DEBUG become {ConfigPrefix}_DEBUG.
// This enables multiple services to have independent logging configuration.
var ConfigPrefix = ""
var (
textLogger *slog.Logger
otlpLogger *slog.Logger
multiLogger *slog.Logger
)
var (
setupText sync.Once // this sets the default
setupOtlp sync.Once // this never sets the default
setupMulti sync.Once // this sets the default, and will always run after the others
mu sync.Mutex
)
func setupStdErrHandler() slog.Handler {
programLevel := new(slog.LevelVar) // Info by default
envVar := "DEBUG"
if len(ConfigPrefix) > 0 {
envVar = ConfigPrefix + "_" + envVar
}
if opt := os.Getenv(envVar); len(opt) > 0 {
if debug, _ := strconv.ParseBool(opt); debug {
programLevel.Set(slog.LevelDebug)
}
}
logOptions := &slog.HandlerOptions{Level: programLevel}
if len(os.Getenv("INVOCATION_ID")) > 0 {
// don't add timestamps when running under systemd
log.Default().SetFlags(0)
logOptions.ReplaceAttr = logRemoveTime
}
logHandler := slogtraceid.OtelHandler{
Next: slog.NewTextHandler(os.Stderr, logOptions),
}
return logHandler
}
func setupOtlpLogger() *slog.Logger {
setupOtlp.Do(func() {
otlpLogger = slog.New(
newLogFmtHandler(otelslog.NewHandler("common")),
)
})
return otlpLogger
}
// SetupMultiLogger creates a logger that outputs to both text (stderr) and OTLP simultaneously.
// This is useful for services that need both human-readable logs and structured observability data.
//
// The multi-logger combines:
// - Text handler: Stderr output with OpenTelemetry trace integration
// - OTLP handler: Structured logs sent via OpenTelemetry Protocol
//
// On first call, this logger becomes the default logger returned by Setup().
// The function is thread-safe and uses sync.Once to ensure single initialization.
func SetupMultiLogger() *slog.Logger {
setupMulti.Do(func() {
textHandler := Setup().Handler()
otlpHandler := setupOtlpLogger().Handler()
multiHandler := slogmulti.Fanout(
textHandler,
otlpHandler,
)
mu.Lock()
defer mu.Unlock()
multiLogger = slog.New(multiHandler)
slog.SetDefault(multiLogger)
})
return multiLogger
}
// SetupOLTP creates a logger that sends structured logs via OpenTelemetry Protocol.
// This logger is designed for observability pipelines and log aggregation systems.
//
// The OTLP logger formats log messages similarly to the text logger for better
// compatibility with Loki + Grafana, while still providing structured attributes.
// Log attributes are available both in the message format and as OTLP attributes.
//
// This logger does not become the default logger and must be used explicitly.
// It requires OpenTelemetry tracing configuration to be set up via the tracing package.
//
// See: https://github.com/grafana/loki/issues/14788 for formatting rationale.
func SetupOLTP() *slog.Logger {
return setupOtlpLogger()
}
// Setup creates and returns the standard text logger for the application.
// This is the primary logging function that most applications should use.
//
// Features:
// - Text formatting to stderr with human-readable output
// - Automatic OpenTelemetry trace_id and span_id inclusion when available
// - Systemd compatibility: omits timestamps when INVOCATION_ID environment variable is present
// - Debug level support via DEBUG environment variable (respects ConfigPrefix)
// - Thread-safe initialization with sync.Once
//
// On first call, this logger becomes the slog default logger. If SetupMultiLogger()
// has been called previously, Setup() returns the multi-logger instead of the text logger.
//
// The logger automatically detects execution context:
// - Systemd: Removes timestamps (systemd adds its own)
// - Debug mode: Enables debug level logging based on environment variables
// - OpenTelemetry: Includes trace correlation when tracing is active
func Setup() *slog.Logger {
setupText.Do(func() {
h := setupStdErrHandler()
textLogger = slog.New(h)
slog.SetDefault(textLogger)
})
mu.Lock()
defer mu.Unlock()
if multiLogger != nil {
return multiLogger
}
return textLogger
}
type loggerKey struct{}
// NewContext stores a logger in the context for request-scoped logging.
// This enables passing request-specific loggers (e.g., with request IDs,
// user context, or other correlation data) through the call stack.
//
// Use this to create context-aware logging where different parts of the
// application can access the same enriched logger instance.
//
// Example:
//
// logger := slog.With("request_id", requestID)
// ctx := logger.NewContext(ctx, logger)
// // Pass ctx to downstream functions
func NewContext(ctx context.Context, l *slog.Logger) context.Context {
return context.WithValue(ctx, loggerKey{}, l)
}
// FromContext retrieves a logger from the context.
// If no logger is stored in the context, it returns the default logger from Setup().
//
// This function provides a safe way to access context-scoped loggers without
// needing to check for nil values. It ensures that logging is always available,
// falling back to the application's standard logger configuration.
//
// Example:
//
// log := logger.FromContext(ctx)
// log.Info("processing request") // Uses context logger or default
func FromContext(ctx context.Context) *slog.Logger {
if l, ok := ctx.Value(loggerKey{}).(*slog.Logger); ok {
return l
}
return Setup()
}
func logRemoveTime(groups []string, a slog.Attr) slog.Attr {
// Remove time
if a.Key == slog.TimeKey && len(groups) == 0 {
return slog.Attr{}
}
return a
}
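A short usage sketch of the setup and context helpers above; the attribute names and values are illustrative:

log := logger.Setup() // text logger to stderr, becomes the slog default
log.Info("starting", "version", "v1.0.0")
// carry a request-scoped logger through the call stack
ctx = logger.NewContext(ctx, log.With("request_id", "abc123"))
logger.FromContext(ctx).Info("processing request")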


@@ -5,12 +5,24 @@ import (
"log/slog"
)
// stdLoggerish provides a bridge between legacy log interfaces and slog.
// It implements common logging methods (Println, Printf, Fatalf) that
// delegate to structured logging with a consistent key prefix.
type stdLoggerish struct {
key string // Prefix key for all log messages
log *slog.Logger // Underlying structured logger
f func(string, ...any) // Log function (Info or Debug level)
}
// NewStdLog creates a legacy-compatible logger that bridges to structured logging.
// This is useful for third-party libraries that expect a standard log.Logger interface.
//
// Parameters:
// - key: Prefix added to all log messages for identification
// - debug: If true, logs at debug level; otherwise logs at info level
// - log: Underlying slog.Logger (uses Setup() if nil)
//
// The returned logger implements Println, Printf, and Fatalf methods.
func NewStdLog(key string, debug bool, log *slog.Logger) *stdLoggerish {
if log == nil {
log = Setup()
@@ -27,10 +39,19 @@ func NewStdLog(key string, debug bool, log *slog.Logger) *stdLoggerish {
return sl
}
// Println logs the arguments using the configured log level with the instance key.
func (l stdLoggerish) Println(msg ...any) {
l.f(l.key, "msg", msg)
}
// Printf logs a formatted message using the configured log level with the instance key.
func (l stdLoggerish) Printf(msg string, args ...any) {
l.f(l.key, "msg", fmt.Sprintf(msg, args...))
}
// Fatalf logs a formatted error message and panics.
// Note: This implementation panics instead of calling os.Exit for testability.
func (l stdLoggerish) Fatalf(msg string, args ...any) {
l.log.Error(l.key, "msg", fmt.Sprintf(msg, args...))
panic("fatal error") // todo: does this make sense at all?
}
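A small sketch of the bridge in use, matching how the metricsserver package below passes it to promhttp; the message text is illustrative:

plog := logger.NewStdLog("prom http", false, nil)
plog.Printf("request failed: %s", "example error")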


@@ -1,17 +0,0 @@
package logger
type Error struct {
Msg string
Data []any
}
func NewError(msg string, data ...any) *Error {
return &Error{
Msg: msg,
Data: data,
}
}
func (e *Error) Error() string {
return "not implemented"
}


@@ -1,3 +1,8 @@
// Package metricsserver provides a standalone HTTP server for exposing Prometheus metrics.
//
// This package implements a dedicated metrics server that exposes application metrics
// via HTTP. It uses a custom Prometheus registry to avoid conflicts with other metric
// collectors and provides graceful shutdown capabilities.
package metricsserver
import (
@@ -13,10 +18,13 @@ import (
"go.ntppool.org/common/logger"
)
// Metrics provides a custom Prometheus registry and HTTP handlers for metrics exposure.
// It isolates application metrics from the default global registry.
type Metrics struct {
r *prometheus.Registry
}
// New creates a new Metrics instance with a custom Prometheus registry.
func New() *Metrics {
r := prometheus.NewRegistry()
@@ -27,12 +35,14 @@ func New() *Metrics {
return m
}
// Registry returns the custom Prometheus registry.
// Use this to register your application's metrics collectors.
func (m *Metrics) Registry() *prometheus.Registry {
return m.r
}
// Handler returns an HTTP handler for the /metrics endpoint with OpenMetrics support.
func (m *Metrics) Handler() http.Handler {
log := logger.NewStdLog("prom http", false, nil)
return promhttp.HandlerFor(m.r, promhttp.HandlerOpts{
@@ -42,11 +52,9 @@ func (m *Metrics) Handler() http.Handler {
})
}
// ListenAndServe starts a metrics server on the specified port and blocks until ctx is done.
// The server exposes the metrics handler and shuts down gracefully when the context is cancelled.
func (m *Metrics) ListenAndServe(ctx context.Context, port int) error {
log := logger.Setup()
srv := &http.Server{


@@ -0,0 +1,242 @@
package metricsserver
import (
"context"
"fmt"
"io"
"net/http"
"net/http/httptest"
"strings"
"testing"
"time"
"github.com/prometheus/client_golang/prometheus"
)
func TestNew(t *testing.T) {
metrics := New()
if metrics == nil {
t.Fatal("New returned nil")
}
if metrics.r == nil {
t.Error("metrics registry is nil")
}
}
func TestRegistry(t *testing.T) {
metrics := New()
registry := metrics.Registry()
if registry == nil {
t.Fatal("Registry() returned nil")
}
if registry != metrics.r {
t.Error("Registry() did not return the metrics registry")
}
// Test that we can register a metric
counter := prometheus.NewCounter(prometheus.CounterOpts{
Name: "test_counter",
Help: "A test counter",
})
err := registry.Register(counter)
if err != nil {
t.Errorf("failed to register metric: %v", err)
}
// Test that the metric is registered
metricFamilies, err := registry.Gather()
if err != nil {
t.Errorf("failed to gather metrics: %v", err)
}
found := false
for _, mf := range metricFamilies {
if mf.GetName() == "test_counter" {
found = true
break
}
}
if !found {
t.Error("registered metric not found in registry")
}
}
func TestHandler(t *testing.T) {
metrics := New()
// Register a test metric
counter := prometheus.NewCounterVec(
prometheus.CounterOpts{
Name: "test_requests_total",
Help: "Total number of test requests",
},
[]string{"method"},
)
metrics.Registry().MustRegister(counter)
counter.WithLabelValues("GET").Inc()
// Test the handler
handler := metrics.Handler()
if handler == nil {
t.Fatal("Handler() returned nil")
}
// Create a test request
req := httptest.NewRequest("GET", "/metrics", nil)
recorder := httptest.NewRecorder()
// Call the handler
handler.ServeHTTP(recorder, req)
// Check response
resp := recorder.Result()
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
t.Errorf("expected status 200, got %d", resp.StatusCode)
}
body, err := io.ReadAll(resp.Body)
if err != nil {
t.Fatalf("failed to read response body: %v", err)
}
bodyStr := string(body)
// Check for our test metric
if !strings.Contains(bodyStr, "test_requests_total") {
t.Error("test metric not found in metrics output")
}
// Check for OpenMetrics format indicators
if !strings.Contains(bodyStr, "# TYPE") {
t.Error("metrics output missing TYPE comments")
}
}
func TestListenAndServe(t *testing.T) {
metrics := New()
// Register a test metric
counter := prometheus.NewCounterVec(
prometheus.CounterOpts{
Name: "test_requests_total",
Help: "Total number of test requests",
},
[]string{"method"},
)
metrics.Registry().MustRegister(counter)
counter.WithLabelValues("GET").Inc()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
// Start server in a goroutine
errCh := make(chan error, 1)
go func() {
// Use a high port number to avoid conflicts
errCh <- metrics.ListenAndServe(ctx, 9999)
}()
// Give the server a moment to start
time.Sleep(100 * time.Millisecond)
// Test metrics endpoint
resp, err := http.Get("http://localhost:9999/metrics")
if err != nil {
t.Fatalf("failed to GET /metrics: %v", err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
t.Errorf("expected status 200, got %d", resp.StatusCode)
}
body, err := io.ReadAll(resp.Body)
if err != nil {
t.Fatalf("failed to read response body: %v", err)
}
bodyStr := string(body)
// Check for our test metric
if !strings.Contains(bodyStr, "test_requests_total") {
t.Error("test metric not found in metrics output")
}
// Cancel context to stop server
cancel()
// Wait for server to stop
select {
case err := <-errCh:
if err != nil {
t.Errorf("server returned error: %v", err)
}
case <-time.After(5 * time.Second):
t.Error("server did not stop within timeout")
}
}
func TestListenAndServeContextCancellation(t *testing.T) {
metrics := New()
ctx, cancel := context.WithCancel(context.Background())
// Start server
errCh := make(chan error, 1)
go func() {
errCh <- metrics.ListenAndServe(ctx, 9998)
}()
// Give server time to start
time.Sleep(100 * time.Millisecond)
// Cancel context
cancel()
// Server should stop gracefully
select {
case err := <-errCh:
if err != nil {
t.Errorf("server returned error on graceful shutdown: %v", err)
}
case <-time.After(5 * time.Second):
t.Error("server did not stop within timeout after context cancellation")
}
}
// Benchmark the metrics handler response time
func BenchmarkMetricsHandler(b *testing.B) {
metrics := New()
// Register some test metrics
for i := 0; i < 10; i++ {
counter := prometheus.NewCounter(prometheus.CounterOpts{
Name: fmt.Sprintf("bench_counter_%d", i),
Help: "A benchmark counter",
})
metrics.Registry().MustRegister(counter)
counter.Add(float64(i * 100))
}
handler := metrics.Handler()
b.ResetTimer()
for i := 0; i < b.N; i++ {
req := httptest.NewRequest("GET", "/metrics", nil)
recorder := httptest.NewRecorder()
handler.ServeHTTP(recorder, req)
if recorder.Code != http.StatusOK {
b.Fatalf("unexpected status code: %d", recorder.Code)
}
}
}


@@ -15,7 +15,7 @@ mkdir -p $DIR
BASE=https://geodns.bitnames.com/${BASE}/builds/${BUILD}
files=`curl -sSf ${BASE}/checksums.txt | sed 's/^[a-f0-9]*[[:space:]]*//'`
metafiles="checksums.txt metadata.json CHANGELOG.md artifacts.json"
for f in $metafiles; do


@@ -2,7 +2,7 @@
set -euo pipefail
go install github.com/goreleaser/goreleaser/v2@v2.11.0
if [ ! -z "${harbor_username:-}" ]; then
DOCKER_FILE=~/.docker/config.json


@@ -1,3 +1,4 @@
// Package timeutil provides JSON-serializable time utilities.
package timeutil
import (
@@ -6,16 +7,39 @@ import (
"time"
)
// Duration is a wrapper around time.Duration that supports JSON marshaling/unmarshaling.
//
// When marshaling to JSON, it outputs the duration as a string using time.Duration.String().
// When unmarshaling from JSON, it accepts both:
// - String values that can be parsed by time.ParseDuration (e.g., "30s", "5m", "1h30m")
// - Numeric values that represent nanoseconds as a float64
//
// This makes it compatible with configuration files and APIs that need to represent
// durations in a human-readable format.
//
// Example usage:
//
// type Config struct {
// Timeout timeutil.Duration `json:"timeout"`
// }
//
// // JSON: {"timeout": "30s"}
// // or: {"timeout": 30000000000}
type Duration struct {
time.Duration
}
// MarshalJSON implements json.Marshaler.
// It marshals the duration as a string using time.Duration.String().
func (d Duration) MarshalJSON() ([]byte, error) {
return json.Marshal(time.Duration(d.Duration).String())
}
// UnmarshalJSON implements json.Unmarshaler.
// It accepts both string values (parsed via time.ParseDuration) and
// numeric values (interpreted as nanoseconds).
func (d *Duration) UnmarshalJSON(b []byte) error {
var v any
if err := json.Unmarshal(b, &v); err != nil {
return err
}
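A brief sketch of the behavior described above; the Config type and values are illustrative:

type Config struct {
Timeout timeutil.Duration `json:"timeout"`
}
var cfg Config
_ = json.Unmarshal([]byte(`{"timeout": "30s"}`), &cfg) // string form
_ = json.Unmarshal([]byte(`{"timeout": 30000000000}`), &cfg) // nanoseconds form
fmt.Println(cfg.Timeout.Duration == 30*time.Second) // true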


@@ -18,5 +18,4 @@ func TestDuration(t *testing.T) {
if foo.Foo.Seconds() != 30 {
t.Fatalf("parsed time.Duration wasn't 30 seconds: %s", foo.Foo)
}
}


@@ -1,3 +1,36 @@
// Package tracing provides OpenTelemetry distributed tracing setup with OTLP export support.
//
// This package handles the complete OpenTelemetry SDK initialization including:
// - Trace provider configuration with batching and resource detection
// - Log provider setup for structured log export via OTLP
// - Automatic resource discovery (service name, version, host, container, process info)
// - Support for both gRPC and HTTP OTLP exporters with TLS configuration
// - Propagation context setup for distributed tracing across services
// - Graceful shutdown handling for all telemetry components
//
// The package supports various deployment scenarios:
// - Development: Local OTLP collectors or observability backends
// - Production: Secure OTLP export with mutual TLS authentication
// - Container environments: Automatic container and Kubernetes resource detection
//
// Configuration is primarily handled via standard OpenTelemetry environment variables:
// - OTEL_SERVICE_NAME: Service identification
// - OTEL_EXPORTER_OTLP_PROTOCOL: Protocol selection (grpc, http/protobuf)
// - OTEL_TRACES_EXPORTER: Exporter type (otlp, autoexport)
// - OTEL_RESOURCE_ATTRIBUTES: Additional resource attributes
//
// Example usage:
//
// cfg := &tracing.TracerConfig{
// ServiceName: "my-service",
// Environment: "production",
// Endpoint: "https://otlp.example.com:4317",
// }
// shutdown, err := tracing.InitTracer(ctx, cfg)
// if err != nil {
// log.Fatal(err)
// }
// defer shutdown(ctx)
package tracing
// todo, review:
@@ -7,121 +40,289 @@ import (
"context"
"crypto/tls"
"crypto/x509"
"errors"
"os"
"slices"
"time"
"go.ntppool.org/common/logger"
"go.ntppool.org/common/version"
"google.golang.org/grpc/credentials"
"go.opentelemetry.io/contrib/exporters/autoexport"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/exporters/otlp/otlptrace"
"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp"
logglobal "go.opentelemetry.io/otel/log/global"
"go.opentelemetry.io/otel/propagation"
sdklog "go.opentelemetry.io/otel/sdk/log"
"go.opentelemetry.io/otel/sdk/resource"
sdktrace "go.opentelemetry.io/otel/sdk/trace"
semconv "go.opentelemetry.io/otel/semconv/v1.26.0"
"go.opentelemetry.io/otel/trace"
)
const (
// svcNameKey is the environment variable name that Service Name information will be read from.
svcNameKey = "OTEL_SERVICE_NAME"
otelExporterOTLPProtoEnvKey = "OTEL_EXPORTER_OTLP_PROTOCOL"
otelExporterOTLPTracesProtoEnvKey = "OTEL_EXPORTER_OTLP_TRACES_PROTOCOL"
)
var errInvalidOTLPProtocol = errors.New("invalid OTLP protocol - should be one of ['grpc', 'http/protobuf']")
// https://github.com/open-telemetry/opentelemetry-go/blob/main/exporters/otlp/otlptrace/otlptracehttp/example_test.go
// TpShutdownFunc represents a function that gracefully shuts down telemetry providers.
// It should be called during application shutdown to ensure all telemetry data is flushed
// and exporters are properly closed. The context can be used to set shutdown timeouts.
type TpShutdownFunc func(ctx context.Context) error
// Tracer returns the configured OpenTelemetry tracer for the NTP Pool project.
// This tracer should be used for creating spans and distributed tracing throughout
// the application. It uses the global tracer provider set up by InitTracer/SetupSDK.
func Tracer() trace.Tracer {
traceProvider := otel.GetTracerProvider()
return traceProvider.Tracer("ntppool-tracer")
}
// Start creates a new span with the given name and options using the configured tracer.
// This is a convenience function that wraps the standard OpenTelemetry span creation.
// It returns a new context containing the span and the span itself for further configuration.
//
// The returned context should be used for downstream operations to maintain trace correlation.
func Start(ctx context.Context, spanName string, opts ...trace.SpanStartOption) (context.Context, trace.Span) {
return Tracer().Start(ctx, spanName, opts...)
}
// GetClientCertificate defines a function type for providing client certificates for mutual TLS.
// This is used when exporting telemetry data to secured OTLP endpoints that require
// client certificate authentication.
type GetClientCertificate func(*tls.CertificateRequestInfo) (*tls.Certificate, error)
// TracerConfig provides configuration options for OpenTelemetry tracing setup.
// It supplements standard OpenTelemetry environment variables with additional
// NTP Pool-specific configuration including TLS settings for secure OTLP export.
type TracerConfig struct {
ServiceName string // Service name for resource identification (overrides OTEL_SERVICE_NAME)
Environment string // Deployment environment (development, staging, production)
Endpoint string // OTLP endpoint hostname/port (e.g., "otlp.example.com:4317")
EndpointURL string // Complete OTLP endpoint URL (e.g., "https://otlp.example.com:4317/v1/traces")
CertificateProvider GetClientCertificate // Client certificate provider for mutual TLS
RootCAs *x509.CertPool // CA certificate pool for server verification
}
var emptyTpShutdownFunc = func(_ context.Context) error {
return nil
}
// InitTracer initializes the OpenTelemetry SDK with the provided configuration.
// This is the main entry point for setting up distributed tracing in applications.
//
// The function configures trace and log providers, sets up OTLP exporters,
// and returns a shutdown function that must be called during application termination.
//
// Returns a shutdown function and an error. The shutdown function should be called
// with a context that has an appropriate timeout for graceful shutdown.
func InitTracer(ctx context.Context, cfg *TracerConfig) (TpShutdownFunc, error) {
// todo: setup environment from cfg
return SetupSDK(ctx, cfg)
}
// SetupSDK performs the complete OpenTelemetry SDK initialization including resource
// discovery, exporter configuration, provider setup, and shutdown function creation.
//
// The function automatically discovers system resources (service info, host, container,
// process details) and configures both trace and log exporters. It supports multiple
// OTLP protocols (gRPC, HTTP) and handles TLS configuration for secure deployments.
//
// The returned shutdown function coordinates graceful shutdown of all telemetry
// components in the reverse order of their initialization.
func SetupSDK(ctx context.Context, cfg *TracerConfig) (shutdown TpShutdownFunc, err error) {
if cfg == nil {
cfg = &TracerConfig{}
}
log := logger.Setup()
if serviceName := os.Getenv(svcNameKey); len(serviceName) == 0 {
if len(cfg.ServiceName) > 0 {
os.Setenv(svcNameKey, cfg.ServiceName)
}
}
resources := []resource.Option{
resource.WithFromEnv(), // Discover and provide attributes from OTEL_RESOURCE_ATTRIBUTES and OTEL_SERVICE_NAME environment variables.
resource.WithTelemetrySDK(), // Discover and provide information about the OpenTelemetry SDK used.
resource.WithProcess(), // Discover and provide process information.
resource.WithOS(), // Discover and provide OS information.
resource.WithContainer(), // Discover and provide container information.
resource.WithHost(), // Discover and provide host information.
// set above via os.Setenv() for WithFromEnv to find
// resource.WithAttributes(semconv.ServiceNameKey.String(cfg.ServiceName)),
resource.WithAttributes(semconv.ServiceVersionKey.String(version.Version())),
}
if len(cfg.Environment) > 0 {
resources = append(resources,
resource.WithAttributes(attribute.String("environment", cfg.Environment)),
)
}
res, err := resource.New(
context.Background(),
resources...,
)
if errors.Is(err, resource.ErrPartialResource) || errors.Is(err, resource.ErrSchemaURLConflict) {
log.Warn("otel resource setup", "err", err) // Log non-fatal issues.
} else if err != nil {
log.Error("otel resource setup", "err", err)
return
}
var shutdownFuncs []func(context.Context) error
shutdown = func(ctx context.Context) error {
var err error
// need to shutdown the providers first,
// exporters after which is the opposite
// order they are setup.
slices.Reverse(shutdownFuncs)
for _, fn := range shutdownFuncs {
// log.Warn("shutting down", "fn", fn)
err = errors.Join(err, fn(ctx))
}
shutdownFuncs = nil
if err != nil {
log.Warn("shutdown returned errors", "err", err)
}
return err
}
// handleErr calls shutdown for cleanup and makes sure that all errors are returned.
handleErr := func(inErr error) {
err = errors.Join(inErr, shutdown(ctx))
}
prop := newPropagator()
otel.SetTextMapPropagator(prop)
var spanExporter sdktrace.SpanExporter
switch os.Getenv("OTEL_TRACES_EXPORTER") {
case "":
spanExporter, err = newOLTPExporter(ctx, cfg)
case "otlp":
spanExporter, err = newOLTPExporter(ctx, cfg)
default:
// log.Debug("OTEL_TRACES_EXPORTER", "fallback", os.Getenv("OTEL_TRACES_EXPORTER"))
spanExporter, err = autoexport.NewSpanExporter(ctx)
}
if err != nil {
handleErr(err)
return
}
shutdownFuncs = append(shutdownFuncs, spanExporter.Shutdown)
logExporter, err := autoexport.NewLogExporter(ctx)
if err != nil {
handleErr(err)
return
}
shutdownFuncs = append(shutdownFuncs, logExporter.Shutdown)
// Set up trace provider.
tracerProvider, err := newTraceProvider(spanExporter, res)
if err != nil {
handleErr(err)
return
}
shutdownFuncs = append(shutdownFuncs, tracerProvider.Shutdown)
otel.SetTracerProvider(tracerProvider)
logProvider := sdklog.NewLoggerProvider(sdklog.WithResource(res),
sdklog.WithProcessor(
sdklog.NewBatchProcessor(logExporter, sdklog.WithExportBufferSize(10)),
),
)
logglobal.SetLoggerProvider(logProvider)
shutdownFuncs = append(shutdownFuncs, func(ctx context.Context) error {
logProvider.ForceFlush(ctx)
return logProvider.Shutdown(ctx)
},
)
if err != nil {
handleErr(err)
return
}
return
}
func newOLTPExporter(ctx context.Context, cfg *TracerConfig) (sdktrace.SpanExporter, error) {
log := logger.Setup()
var tlsConfig *tls.Config
if cfg.CertificateProvider != nil {
tlsConfig = &tls.Config{
GetClientCertificate: cfg.CertificateProvider,
RootCAs: cfg.RootCAs,
}
}
proto := os.Getenv(otelExporterOTLPTracesProtoEnvKey)
if proto == "" {
proto = os.Getenv(otelExporterOTLPProtoEnvKey)
}
// Fallback to default, http/protobuf.
if proto == "" {
proto = "http/protobuf"
}
var client otlptrace.Client
switch proto {
case "grpc":
opts := []otlptracegrpc.Option{
otlptracegrpc.WithCompressor("gzip"),
}
if tlsConfig != nil {
opts = append(opts, otlptracegrpc.WithTLSCredentials(credentials.NewTLS(tlsConfig)))
}
if len(cfg.Endpoint) > 0 {
log.Info("adding option", "Endpoint", cfg.Endpoint)
opts = append(opts, otlptracegrpc.WithEndpoint(cfg.Endpoint))
}
if len(cfg.EndpointURL) > 0 {
log.Info("adding option", "EndpointURL", cfg.EndpointURL)
opts = append(opts, otlptracegrpc.WithEndpointURL(cfg.EndpointURL))
}
client = otlptracegrpc.NewClient(opts...)
case "http/protobuf", "http/json":
opts := []otlptracehttp.Option{
otlptracehttp.WithCompression(otlptracehttp.GzipCompression),
}
if tlsConfig != nil {
opts = append(opts, otlptracehttp.WithTLSClientConfig(tlsConfig))
}
if len(cfg.Endpoint) > 0 {
opts = append(opts, otlptracehttp.WithEndpoint(cfg.Endpoint))
}
if len(cfg.EndpointURL) > 0 {
opts = append(opts, otlptracehttp.WithEndpointURL(cfg.EndpointURL))
}
client = otlptracehttp.NewClient(opts...)
default:
return nil, errInvalidOTLPProtocol
}
exporter, err := otlptrace.New(ctx, client)
if err != nil {
log.ErrorContext(ctx, "creating OTLP trace exporter", "err", err)
@@ -129,44 +330,19 @@ func newOLTPExporter(ctx context.Context, cfg *TracerConfig) (otelsdktrace.SpanExporter, error) {
return exporter, err
}
func newTraceProvider(traceExporter sdktrace.SpanExporter, res *resource.Resource) (*sdktrace.TracerProvider, error) {
traceProvider := sdktrace.NewTracerProvider(
sdktrace.WithResource(res),
sdktrace.WithBatcher(traceExporter,
sdktrace.WithBatchTimeout(time.Second*3),
),
)
return traceProvider, nil
}
func newPropagator() propagation.TextMapPropagator {
return propagation.NewCompositeTextMapPropagator(
propagation.TraceContext{},
propagation.Baggage{},
)
}
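A minimal instrumentation sketch using the helpers above; the service, environment, and span names are illustrative:

shutdown, err := tracing.InitTracer(ctx, &tracing.TracerConfig{
ServiceName: "example-service",
Environment: "development",
})
if err != nil {
log.Fatal(err)
}
defer shutdown(ctx)
ctx, span := tracing.Start(ctx, "score-batch")
defer span.End()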


@@ -7,7 +7,6 @@ import (
)
func TestInit(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
@@ -18,5 +17,4 @@ func TestInit(t *testing.T) {
t.FailNow()
}
defer shutdownFn(ctx)
}


@@ -1,3 +1,17 @@
// Package types provides shared data structures for the NTP Pool project.
//
// This package contains common types used across different NTP Pool services
// for data exchange, logging, and database operations. The types are designed
// to support JSON serialization for API responses and SQL database storage
// with automatic marshaling/unmarshaling.
//
// Current types include:
// - LogScoreAttributes: NTP server scoring metadata for monitoring and analysis
//
// All types implement appropriate interfaces for:
// - JSON serialization (json.Marshaler/json.Unmarshaler)
// - SQL database storage (database/sql/driver.Valuer/sql.Scanner)
// - String representation for logging and debugging
package types
import (
@@ -6,17 +20,26 @@ import (
"errors"
)
// LogScoreAttributes contains metadata about NTP server scoring and monitoring results.
// This structure captures both NTP protocol-specific information (leap, stratum) and
// operational data (errors, warnings, response status) for analysis and alerting.
//
// The type supports JSON serialization for API responses and database storage
// via the database/sql/driver interfaces. Fields use omitempty tags to minimize
// JSON payload size when values are at their zero state.
type LogScoreAttributes struct {
Leap int8 `json:"leap,omitempty"` // NTP leap indicator (0=no warning, 1=+1s, 2=-1s, 3=unsynchronized)
Stratum int8 `json:"stratum,omitempty"` // NTP stratum level (1=primary, 2-15=secondary, 16=unsynchronized)
NoResponse bool `json:"no_response,omitempty"` // True if server failed to respond to NTP queries
Error string `json:"error,omitempty"` // Error message if scoring failed
Warning string `json:"warning,omitempty"` // Warning message for non-fatal issues
FromLSID int `json:"from_ls_id,omitempty"` // Source log server ID for traceability
FromSSID int `json:"from_ss_id,omitempty"` // Source scoring system ID for traceability
}
// String returns a JSON representation of the LogScoreAttributes for logging and debugging.
// Returns an empty string if JSON marshaling fails.
func (lsa *LogScoreAttributes) String() string {
b, err := json.Marshal(lsa)
if err != nil {
@@ -25,11 +48,18 @@ func (lsa *LogScoreAttributes) String() string {
return string(b)
}
// Value implements the database/sql/driver.Valuer interface for database storage.
// It serializes the LogScoreAttributes to JSON for storage in SQL databases.
// Returns the JSON bytes or an error if marshaling fails.
func (lsa *LogScoreAttributes) Value() (driver.Value, error) {
return json.Marshal(lsa)
}
// Scan implements the database/sql.Scanner interface for reading from SQL databases.
// It deserializes JSON data from the database back into LogScoreAttributes.
// Supports both []byte and string input types, with nil values treated as no-op.
// Returns an error if the input type is unsupported or JSON unmarshaling fails.
func (lsa *LogScoreAttributes) Scan(value any) error {
var source []byte
_t := LogScoreAttributes{}
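A round-trip sketch of the Valuer/Scanner pair documented above; the field values are illustrative:

attrs := types.LogScoreAttributes{Stratum: 2, Warning: "offset high"}
v, _ := attrs.Value() // JSON bytes stored in the database
var decoded types.LogScoreAttributes
_ = decoded.Scan(v) // restores the struct from the stored JSON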


@@ -1,48 +1,44 @@
// Package ulid provides thread-safe ULID (Universally Unique Lexicographically Sortable Identifier) generation.
//
// ULIDs are 128-bit identifiers that are lexicographically sortable and contain
// a timestamp component. This package uses cryptographically secure random
// generation optimized for simplicity and performance in concurrent environments.
package ulid
import (
cryptorand "crypto/rand"
"time"
oklid "github.com/oklog/ulid/v2"
)
// MakeULID generates a new ULID with the specified timestamp using cryptographically secure randomness.
// The function is thread-safe and optimized for high-concurrency environments.
//
// This implementation prioritizes simplicity and performance over strict monotonicity within
// the same millisecond. Each ULID is guaranteed to be unique and lexicographically sortable
// across different timestamps.
//
// Returns a pointer to the generated ULID or an error if generation fails.
// Generation should only fail under extreme circumstances (entropy exhaustion).
func MakeULID(t time.Time) (*oklid.ULID, error) {
id, err := oklid.New(oklid.Timestamp(t), cryptorand.Reader)
if err != nil {
return nil, err
}
return &id, nil
}
// Make generates a new ULID with the current timestamp using cryptographically secure randomness.
// This is a convenience function equivalent to MakeULID(time.Now()).
//
// The function is thread-safe and optimized for high-concurrency environments.
//
// Returns a pointer to the generated ULID or an error if generation fails.
// Generation should only fail under extreme circumstances (entropy exhaustion).
func Make() (*oklid.ULID, error) {
id, err := oklid.New(oklid.Now(), cryptorand.Reader)
if err != nil {
return nil, err
}
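A quick usage sketch of the two constructors above:

id, err := ulid.Make() // current timestamp
if err != nil {
log.Fatal(err)
}
fmt.Println(id.String()) // 26-character, lexicographically sortable identifier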


@@ -1,25 +1,336 @@
package ulid
import (
cryptorand "crypto/rand"
"sort"
"sync"
"testing"
"time"
oklid "github.com/oklog/ulid/v2"
)
func TestMakeULID(t *testing.T) {
tm := time.Now()
ul1, err := MakeULID(tm)
if err != nil {
t.Fatalf("MakeULID failed: %s", err)
}
ul2, err := MakeULID(tm)
if err != nil {
t.Fatalf("MakeULID failed: %s", err)
}
if ul1 == nil || ul2 == nil {
t.Fatal("MakeULID returned nil ULID")
}
if ul1.String() == ul2.String() {
t.Errorf("ul1 and ul2 should be different: %s", ul1.String())
}
// Verify they have the same timestamp
if ul1.Time() != ul2.Time() {
t.Errorf("ULIDs with same input time should have same timestamp: %d != %d", ul1.Time(), ul2.Time())
}
t.Logf("ulid string 1 and 2: %s | %s", ul1.String(), ul2.String())
}
func TestMake(t *testing.T) {
// Test Make() function (uses current time)
ul1, err := Make()
if err != nil {
t.Fatalf("Make failed: %s", err)
}
if ul1 == nil {
t.Fatal("Make returned nil ULID")
}
// Sleep a bit and generate another
time.Sleep(2 * time.Millisecond)
ul2, err := Make()
if err != nil {
t.Fatalf("Make failed: %s", err)
}
// Should be different ULIDs
if ul1.String() == ul2.String() {
t.Errorf("ULIDs from Make() should be different: %s", ul1.String())
}
// Second should be later (or at least not earlier)
if ul1.Time() > ul2.Time() {
t.Errorf("second ULID should not have earlier timestamp: %d > %d", ul1.Time(), ul2.Time())
}
t.Logf("Make() ULIDs: %s | %s", ul1.String(), ul2.String())
}
func TestMakeULIDUniqueness(t *testing.T) {
tm := time.Now()
seen := make(map[string]bool)
for i := 0; i < 1000; i++ {
ul, err := MakeULID(tm)
if err != nil {
t.Fatalf("MakeULID failed on iteration %d: %s", i, err)
}
str := ul.String()
if seen[str] {
t.Errorf("duplicate ULID generated: %s", str)
}
seen[str] = true
}
}
func TestMakeUniqueness(t *testing.T) {
seen := make(map[string]bool)
for i := 0; i < 1000; i++ {
ul, err := Make()
if err != nil {
t.Fatalf("Make failed on iteration %d: %s", i, err)
}
str := ul.String()
if seen[str] {
t.Errorf("duplicate ULID generated: %s", str)
}
seen[str] = true
}
}
func TestMakeULIDTimestampProgression(t *testing.T) {
t1 := time.Now()
ul1, err := MakeULID(t1)
if err != nil {
t.Fatalf("MakeULID failed: %s", err)
}
// Wait to ensure different timestamp
time.Sleep(2 * time.Millisecond)
t2 := time.Now()
ul2, err := MakeULID(t2)
if err != nil {
t.Fatalf("MakeULID failed: %s", err)
}
if ul1.Time() >= ul2.Time() {
t.Errorf("second ULID should have later timestamp: %d >= %d", ul1.Time(), ul2.Time())
}
if ul1.Compare(*ul2) >= 0 {
t.Errorf("second ULID should be greater: %s >= %s", ul1.String(), ul2.String())
}
}
func TestMakeULIDConcurrency(t *testing.T) {
const numGoroutines = 10
const numULIDsPerGoroutine = 100
var wg sync.WaitGroup
ulidChan := make(chan *oklid.ULID, numGoroutines*numULIDsPerGoroutine)
tm := time.Now()
// Start multiple goroutines generating ULIDs concurrently
for i := 0; i < numGoroutines; i++ {
wg.Add(1)
go func() {
defer wg.Done()
for j := 0; j < numULIDsPerGoroutine; j++ {
ul, err := MakeULID(tm)
if err != nil {
t.Errorf("MakeULID failed: %s", err)
return
}
ulidChan <- ul
}
}()
}
wg.Wait()
close(ulidChan)
// Collect all ULIDs and check uniqueness
seen := make(map[string]bool)
count := 0
for ul := range ulidChan {
str := ul.String()
if seen[str] {
t.Errorf("duplicate ULID generated in concurrent test: %s", str)
}
seen[str] = true
count++
}
if count != numGoroutines*numULIDsPerGoroutine {
t.Errorf("expected %d ULIDs, got %d", numGoroutines*numULIDsPerGoroutine, count)
}
}
func TestMakeConcurrency(t *testing.T) {
const numGoroutines = 10
const numULIDsPerGoroutine = 100
var wg sync.WaitGroup
ulidChan := make(chan *oklid.ULID, numGoroutines*numULIDsPerGoroutine)
// Start multiple goroutines generating ULIDs concurrently
for i := 0; i < numGoroutines; i++ {
wg.Add(1)
go func() {
defer wg.Done()
for j := 0; j < numULIDsPerGoroutine; j++ {
ul, err := Make()
if err != nil {
t.Errorf("Make failed: %s", err)
return
}
ulidChan <- ul
}
}()
}
wg.Wait()
close(ulidChan)
// Collect all ULIDs and check uniqueness
seen := make(map[string]bool)
count := 0
for ul := range ulidChan {
str := ul.String()
if seen[str] {
t.Errorf("duplicate ULID generated in concurrent test: %s", str)
}
seen[str] = true
count++
}
if count != numGoroutines*numULIDsPerGoroutine {
t.Errorf("expected %d ULIDs, got %d", numGoroutines*numULIDsPerGoroutine, count)
}
}
func TestMakeULIDErrorHandling(t *testing.T) {
// Test with various timestamps
timestamps := []time.Time{
time.Unix(0, 0), // Unix epoch
time.Now(), // Current time
time.Now().Add(time.Hour), // Future time
}
for i, tm := range timestamps {
ul, err := MakeULID(tm)
if err != nil {
t.Errorf("MakeULID failed with timestamp %d: %s", i, err)
}
if ul == nil {
t.Errorf("MakeULID returned nil ULID with timestamp %d", i)
}
}
}
func TestMakeULIDLexicographicOrdering(t *testing.T) {
var ulids []*oklid.ULID
var timestamps []time.Time
// Generate ULIDs with increasing timestamps
for i := 0; i < 10; i++ {
tm := time.Now().Add(time.Duration(i) * time.Millisecond)
timestamps = append(timestamps, tm)
ul, err := MakeULID(tm)
if err != nil {
t.Fatalf("MakeULID failed: %s", err)
}
ulids = append(ulids, ul)
// Small delay to ensure different timestamps
time.Sleep(time.Millisecond)
}
// Sort ULID strings lexicographically
ulidStrings := make([]string, len(ulids))
for i, ul := range ulids {
ulidStrings[i] = ul.String()
}
originalOrder := make([]string, len(ulidStrings))
copy(originalOrder, ulidStrings)
sort.Strings(ulidStrings)
// Verify lexicographic order matches chronological order
for i := 0; i < len(originalOrder); i++ {
if originalOrder[i] != ulidStrings[i] {
t.Errorf("lexicographic order doesn't match chronological order at index %d: %s != %s",
i, originalOrder[i], ulidStrings[i])
}
}
}
// Benchmark ULID generation performance
func BenchmarkMakeULID(b *testing.B) {
tm := time.Now()
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, err := MakeULID(tm)
if err != nil {
b.Fatalf("MakeULID failed: %s", err)
}
}
}
// Benchmark Make function
func BenchmarkMake(b *testing.B) {
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, err := Make()
if err != nil {
b.Fatalf("Make failed: %s", err)
}
}
}
// Benchmark concurrent ULID generation
func BenchmarkMakeULIDConcurrent(b *testing.B) {
tm := time.Now()
b.RunParallel(func(pb *testing.PB) {
for pb.Next() {
_, err := MakeULID(tm)
if err != nil {
b.Fatalf("MakeULID failed: %s", err)
}
}
})
}
// Benchmark concurrent Make function
func BenchmarkMakeConcurrent(b *testing.B) {
b.RunParallel(func(pb *testing.PB) {
for pb.Next() {
_, err := Make()
if err != nil {
b.Fatalf("Make failed: %s", err)
}
}
})
}
// Benchmark random number generation
func BenchmarkCryptoRand(b *testing.B) {
buf := make([]byte, 10) // ULID entropy size
b.ResetTimer()
for i := 0; i < b.N; i++ {
cryptorand.Read(buf)
}
}


@@ -1,34 +1,53 @@
// Package version provides build metadata and version information management.
//
// This package manages application version information including semantic version,
// Git revision, build time, and provides integration with CLI frameworks (Cobra, Kong)
// and Prometheus metrics for operational visibility.
//
// Version information can be injected at build time using ldflags:
//
// go build -ldflags "-X go.ntppool.org/common/version.VERSION=v1.0.0 \
// -X go.ntppool.org/common/version.buildTime=2023-01-01T00:00:00Z \
// -X go.ntppool.org/common/version.gitVersion=abc123"
//
// The package also automatically extracts build information from Go's debug.BuildInfo
// when available, providing fallback values for VCS time and revision.
package version
import (
"fmt"
"log/slog"
"runtime"
"runtime/debug"
"strings"
"github.com/prometheus/client_golang/prometheus"
"github.com/spf13/cobra"
"golang.org/x/mod/semver"
)
// VERSION contains the current software version (typically set during the build process via ldflags).
// If not set, defaults to "dev-snapshot". The version should follow semantic versioning.
var (
VERSION string // Semantic version (e.g., "1.0.0" or "v1.0.0")
buildTime string // Build timestamp (RFC3339 format)
gitVersion string // Git commit hash
gitModified bool // Whether the working tree was modified during build
)
// info holds the consolidated version information extracted from build variables and debug.BuildInfo.
var info Info var info Info
// Info represents structured version and build information.
// This struct is used for JSON serialization and programmatic access to build metadata.
type Info struct { type Info struct {
Version string `json:",omitempty"` Version string `json:",omitempty"` // Semantic version with "v" prefix
GitRev string `json:",omitempty"` GitRev string `json:",omitempty"` // Full Git commit hash
GitRevShort string `json:",omitempty"` GitRevShort string `json:",omitempty"` // Shortened Git commit hash (7 characters)
BuildTime string `json:",omitempty"` BuildTime string `json:",omitempty"` // Build timestamp
} }
func init() { func init() {
info.BuildTime = buildTime info.BuildTime = buildTime
info.GitRev = gitVersion info.GitRev = gitVersion
@@ -39,7 +58,7 @@ func init() {
VERSION = "v" + VERSION VERSION = "v" + VERSION
} }
if !semver.IsValid(VERSION) { if !semver.IsValid(VERSION) {
logger.Setup().Warn("invalid version number", "version", VERSION) slog.Default().Warn("invalid version number", "version", VERSION)
} }
} }
if bi, ok := debug.ReadBuildInfo(); ok { if bi, ok := debug.ReadBuildInfo(); ok {
@@ -78,10 +97,16 @@ func init() {
Version() Version()
} }
// VersionCmd creates a Cobra command for displaying version information.
// The name parameter is used as a prefix in the output (e.g., "myapp v1.0.0").
// Returns a configured cobra.Command that can be added to any CLI application.
func VersionCmd(name string) *cobra.Command { func VersionCmd(name string) *cobra.Command {
versionCmd := &cobra.Command{ versionCmd := &cobra.Command{
Use: "version", Use: "version",
Short: "Print version and build information", Short: "Print version and build information",
Long: `Print detailed version information including semantic version,
Git revision, build time, and Go version. Build information is automatically
extracted from Go's debug.BuildInfo when available.`,
Run: func(cmd *cobra.Command, args []string) { Run: func(cmd *cobra.Command, args []string) {
ver := Version() ver := Version()
fmt.Printf("%s %s\n", name, ver) fmt.Printf("%s %s\n", name, ver)
@@ -90,6 +115,23 @@ func VersionCmd(name string) *cobra.Command {
return versionCmd return versionCmd
} }
// KongVersionCmd provides a Kong CLI framework compatible version command.
// The Name field should be set to the application name for proper output formatting.
type KongVersionCmd struct {
Name string `kong:"-"` // Application name, excluded from Kong parsing
}
// Run executes the version command for Kong CLI framework.
// Prints the application name and version information to stdout.
func (cmd *KongVersionCmd) Run() error {
fmt.Printf("%s %s\n", cmd.Name, Version())
return nil
}
// RegisterMetric registers a Prometheus gauge metric with build information.
// If name is provided, it creates a metric named "{name}_build_info", otherwise "build_info".
// The metric includes labels for version, build time, Git time, and Git revision.
// This is useful for exposing build information in monitoring systems.
func RegisterMetric(name string, registry prometheus.Registerer) { func RegisterMetric(name string, registry prometheus.Registerer) {
if len(name) > 0 { if len(name) > 0 {
name = strings.ReplaceAll(name, "-", "_") name = strings.ReplaceAll(name, "-", "_")
@@ -100,13 +142,13 @@ func RegisterMetric(name string, registry prometheus.Registerer) {
buildInfo := prometheus.NewGaugeVec( buildInfo := prometheus.NewGaugeVec(
prometheus.GaugeOpts{ prometheus.GaugeOpts{
Name: name, Name: name,
Help: "Build information", Help: "Build information including version, build time, and git revision",
}, },
[]string{ []string{
"version", "version", // Combined version/git format (e.g., "v1.0.0/abc123")
"buildtime", "buildtime", // Build timestamp from ldflags
"gittime", "gittime", // Git commit timestamp from VCS info
"git", "git", // Full Git commit hash
}, },
) )
registry.MustRegister(buildInfo) registry.MustRegister(buildInfo)
@@ -121,12 +163,20 @@ func RegisterMetric(name string, registry prometheus.Registerer) {
).Set(1) ).Set(1)
} }
// v caches the formatted version string to avoid repeated computation.
var v string var v string
// VersionInfo returns the structured version information.
// This provides programmatic access to version details for JSON serialization
// or other structured uses.
func VersionInfo() Info { func VersionInfo() Info {
return info return info
} }
// Version returns a human-readable version string suitable for display.
// The format includes semantic version, Git revision, build time, and Go version.
// Example: "v1.0.0/abc123f-M (2023-01-01T00:00:00Z, go1.21.0)"
// The "-M" suffix indicates the working tree was modified during build.
func Version() string { func Version() string {
if len(v) > 0 { if len(v) > 0 {
return v return v
@@ -153,3 +203,27 @@ func Version() string {
v = fmt.Sprintf("%s (%s)", v, strings.Join(extra, ", ")) v = fmt.Sprintf("%s (%s)", v, strings.Join(extra, ", "))
return v return v
} }
// CheckVersion compares a version against a minimum required version.
// Returns true if the version meets or exceeds the minimum requirement.
//
// Special handling:
// - "dev-snapshot" is always considered valid (returns true)
// - Git hash suffixes (e.g., "v1.0.0/abc123") are stripped before comparison
// - Uses semantic version comparison rules
//
// Both version and minimumVersion should follow semantic versioning with "v" prefix.
func CheckVersion(version, minimumVersion string) bool {
if version == "dev-snapshot" {
return true
}
// Strip Git hash suffix if present (e.g., "v1.0.0/abc123" -> "v1.0.0")
if idx := strings.Index(version, "/"); idx >= 0 {
version = version[0:idx]
}
if semver.Compare(version, minimumVersion) < 0 {
// log.Debug("version too old", "v", cl.Version.Version)
return false
}
return true
}
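As a quick illustration of the documented CheckVersion semantics (Git hash suffixes stripped, "dev-snapshot" always accepted), a minimal sketch using the go.ntppool.org/common/version import path shown in the ldflags example above; the minimum version chosen here is illustrative.

package main

import (
	"fmt"

	"go.ntppool.org/common/version"
)

func main() {
	// Strings produced by Version() (e.g. "v3.9.0/def456") can be passed
	// through unchanged; the "/hash" suffix is stripped before comparison.
	for _, v := range []string{"v3.8.4/abc123", "v3.9.0/def456", "dev-snapshot"} {
		fmt.Printf("%s meets v3.8.5: %t\n", v, version.CheckVersion(v, "v3.8.5"))
	}
}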

version/version_test.go (new file, 311 lines)

@@ -0,0 +1,311 @@
package version
import (
"runtime"
"strings"
"testing"
"github.com/prometheus/client_golang/prometheus"
dto "github.com/prometheus/client_model/go"
)
func TestCheckVersion(t *testing.T) {
tests := []struct {
In string
Min string
Expected bool
}{
// Basic version comparisons
{"v3.8.4", "v3.8.5", false},
{"v3.9.3", "v3.8.5", true},
{"v3.8.5", "v3.8.5", true},
// Dev snapshot should always pass
{"dev-snapshot", "v3.8.5", true},
{"dev-snapshot", "v99.99.99", true},
// Versions with Git hashes should be stripped
{"v3.8.5/abc123", "v3.8.5", true},
{"v3.8.4/abc123", "v3.8.5", false},
{"v3.9.0/def456", "v3.8.5", true},
// Pre-release versions
{"v3.8.5-alpha", "v3.8.5", false},
{"v3.8.5", "v3.8.5-alpha", true},
{"v3.8.5-beta", "v3.8.5-alpha", true},
}
for _, d := range tests {
r := CheckVersion(d.In, d.Min)
if r != d.Expected {
t.Errorf("CheckVersion(%q, %q) = %t, expected %t", d.In, d.Min, r, d.Expected)
}
}
}
func TestVersionInfo(t *testing.T) {
info := VersionInfo()
// Check that we get a valid Info struct
if info.Version == "" {
t.Error("VersionInfo().Version should not be empty")
}
// Version should start with "v" or be "dev-snapshot"
if !strings.HasPrefix(info.Version, "v") && info.Version != "dev-snapshot" {
t.Errorf("Version should start with 'v' or be 'dev-snapshot', got: %s", info.Version)
}
// GitRevShort should be <= 7 characters if set
if info.GitRevShort != "" && len(info.GitRevShort) > 7 {
t.Errorf("GitRevShort should be <= 7 characters, got: %s", info.GitRevShort)
}
// GitRevShort should be prefix of GitRev if both are set
if info.GitRev != "" && info.GitRevShort != "" {
if !strings.HasPrefix(info.GitRev, info.GitRevShort) {
t.Errorf("GitRevShort should be prefix of GitRev: %s not prefix of %s",
info.GitRevShort, info.GitRev)
}
}
}
func TestVersion(t *testing.T) {
version := Version()
if version == "" {
t.Error("Version() should not return empty string")
}
// Should contain Go version
if !strings.Contains(version, runtime.Version()) {
t.Errorf("Version should contain Go version %s, got: %s", runtime.Version(), version)
}
// Should contain the VERSION variable (or dev-snapshot)
info := VersionInfo()
if !strings.Contains(version, info.Version) {
t.Errorf("Version should contain %s, got: %s", info.Version, version)
}
// Should be in expected format: "version (extras)"
if !strings.Contains(version, "(") || !strings.Contains(version, ")") {
t.Errorf("Version should be in format 'version (extras)', got: %s", version)
}
}
func TestVersionCmd(t *testing.T) {
appName := "testapp"
cmd := VersionCmd(appName)
// Test basic command properties
if cmd.Use != "version" {
t.Errorf("Expected command use to be 'version', got: %s", cmd.Use)
}
if cmd.Short == "" {
t.Error("Command should have a short description")
}
if cmd.Long == "" {
t.Error("Command should have a long description")
}
if cmd.Run == nil {
t.Error("Command should have a Run function")
}
// Test that the command can be executed without error
cmd.SetArgs([]string{})
err := cmd.Execute()
if err != nil {
t.Errorf("VersionCmd execution should not return error, got: %s", err)
}
}
func TestKongVersionCmd(t *testing.T) {
cmd := &KongVersionCmd{Name: "testapp"}
// Test that Run() doesn't return an error
err := cmd.Run()
if err != nil {
t.Errorf("KongVersionCmd.Run() should not return error, got: %s", err)
}
}
func TestRegisterMetric(t *testing.T) {
// Create a test registry
registry := prometheus.NewRegistry()
// Test registering metric without name
RegisterMetric("", registry)
// Gather metrics
metricFamilies, err := registry.Gather()
if err != nil {
t.Fatalf("Failed to gather metrics: %s", err)
}
// Find the build_info metric
var buildInfoFamily *dto.MetricFamily
for _, family := range metricFamilies {
if family.GetName() == "build_info" {
buildInfoFamily = family
break
}
}
if buildInfoFamily == nil {
t.Fatal("build_info metric not found")
}
if buildInfoFamily.GetHelp() == "" {
t.Error("build_info metric should have help text")
}
metrics := buildInfoFamily.GetMetric()
if len(metrics) == 0 {
t.Fatal("build_info metric should have at least one sample")
}
// Check that the metric has the expected labels
metric := metrics[0]
labels := metric.GetLabel()
expectedLabels := []string{"version", "buildtime", "gittime", "git"}
labelMap := make(map[string]string)
for _, label := range labels {
labelMap[label.GetName()] = label.GetValue()
}
for _, expectedLabel := range expectedLabels {
if _, exists := labelMap[expectedLabel]; !exists {
t.Errorf("Expected label %s not found in metric", expectedLabel)
}
}
// Check that the metric value is 1
if metric.GetGauge().GetValue() != 1 {
t.Errorf("Expected build_info metric value to be 1, got %f", metric.GetGauge().GetValue())
}
}
func TestRegisterMetricWithName(t *testing.T) {
// Create a test registry
registry := prometheus.NewRegistry()
// Test registering metric with custom name
appName := "my-test-app"
RegisterMetric(appName, registry)
// Gather metrics
metricFamilies, err := registry.Gather()
if err != nil {
t.Fatalf("Failed to gather metrics: %s", err)
}
// Find the my_test_app_build_info metric
expectedName := "my_test_app_build_info"
var buildInfoFamily *dto.MetricFamily
for _, family := range metricFamilies {
if family.GetName() == expectedName {
buildInfoFamily = family
break
}
}
if buildInfoFamily == nil {
t.Fatalf("%s metric not found", expectedName)
}
}
func TestVersionConsistency(t *testing.T) {
// Call Version() multiple times and ensure it returns the same result
v1 := Version()
v2 := Version()
if v1 != v2 {
t.Errorf("Version() should return consistent results: %s != %s", v1, v2)
}
}
func TestVersionInfoConsistency(t *testing.T) {
// Ensure VersionInfo() is consistent with Version()
info := VersionInfo()
version := Version()
// Version string should contain the semantic version
if !strings.Contains(version, info.Version) {
t.Errorf("Version() should contain VersionInfo().Version: %s not in %s",
info.Version, version)
}
// If GitRevShort is set, version should contain it
if info.GitRevShort != "" {
if !strings.Contains(version, info.GitRevShort) {
t.Errorf("Version() should contain GitRevShort: %s not in %s",
info.GitRevShort, version)
}
}
}
// Test edge cases
func TestCheckVersionEdgeCases(t *testing.T) {
// Test with empty strings
if CheckVersion("", "v1.0.0") {
t.Error("Empty version should not be >= v1.0.0")
}
// Test with malformed versions (should be handled gracefully)
// Note: semver.Compare might panic or return unexpected results for invalid versions
// but our function should handle the common cases
tests := []struct {
version string
minimum string
desc string
}{
{"v1.0.0/", "v1.0.0", "version with trailing slash"},
{"v1.0.0/abc/def", "v1.0.0", "version with multiple slashes"},
}
for _, test := range tests {
// This should not panic
result := CheckVersion(test.version, test.minimum)
t.Logf("%s: CheckVersion(%q, %q) = %t", test.desc, test.version, test.minimum, result)
}
}
// Benchmark version operations
func BenchmarkVersion(b *testing.B) {
// Reset the cached version to test actual computation
v = ""
b.ResetTimer()
for i := 0; i < b.N; i++ {
_ = Version()
}
}
func BenchmarkVersionInfo(b *testing.B) {
for i := 0; i < b.N; i++ {
_ = VersionInfo()
}
}
func BenchmarkCheckVersion(b *testing.B) {
version := "v1.2.3/abc123"
minimum := "v1.2.0"
b.ResetTimer()
for i := 0; i < b.N; i++ {
_ = CheckVersion(version, minimum)
}
}
func BenchmarkCheckVersionDevSnapshot(b *testing.B) {
version := "dev-snapshot"
minimum := "v1.2.0"
b.ResetTimer()
for i := 0; i < b.N; i++ {
_ = CheckVersion(version, minimum)
}
}
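The tests above exercise the Cobra, Kong, and Prometheus integrations in isolation; the sketch below shows how an application might wire VersionCmd and RegisterMetric together. The application name "exampled" is hypothetical.

package main

import (
	"github.com/prometheus/client_golang/prometheus"
	"github.com/spf13/cobra"

	"go.ntppool.org/common/version"
)

func main() {
	root := &cobra.Command{Use: "exampled"}

	// "exampled version" prints the combined version string.
	root.AddCommand(version.VersionCmd("exampled"))

	// Exposes exampled_build_info{version=...,git=...} = 1 for monitoring.
	version.RegisterMetric("exampled", prometheus.DefaultRegisterer)

	_ = root.Execute()
}

For Kong-based CLIs, embedding KongVersionCmd{Name: "exampled"} as a version subcommand produces the same output.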


@@ -1,3 +1,27 @@
// Package fastlyxff provides Fastly CDN IP range management for trusted proxy handling.
//
// This package parses Fastly's public IP ranges JSON file and generates Echo framework
// trust options for proper client IP extraction from X-Forwarded-For headers.
// It's designed specifically for services deployed behind Fastly's CDN that need
// to identify real client IPs for logging, rate limiting, and security purposes.
//
// Fastly publishes their edge server IP ranges in a JSON format that this package
// consumes to automatically configure trusted proxy ranges. This ensures that
// X-Forwarded-For headers are only trusted when they originate from legitimate
// Fastly edge servers.
//
// Key features:
// - Automatic parsing of Fastly's IP ranges JSON format
// - Support for both IPv4 and IPv6 address ranges
// - Echo framework integration via TrustOption generation
// - CIDR notation parsing and validation
//
// The JSON file typically contains IP ranges in this format:
//
// {
// "addresses": ["23.235.32.0/20", "43.249.72.0/22", ...],
// "ipv6_addresses": ["2a04:4e40::/32", "2a04:4e42::/32", ...]
// }
package fastlyxff

import (
@@ -9,15 +33,29 @@ import (
	"github.com/labstack/echo/v4"
)

// FastlyXFF represents Fastly's published IP ranges for their CDN edge servers.
// This structure matches the JSON format provided by Fastly for their public IP ranges.
// It contains separate lists for IPv4 and IPv6 CIDR ranges.
type FastlyXFF struct {
	IPv4 []string `json:"addresses"`      // IPv4 CIDR ranges (e.g., "23.235.32.0/20")
	IPv6 []string `json:"ipv6_addresses"` // IPv6 CIDR ranges (e.g., "2a04:4e40::/32")
}

// TrustedNets holds parsed network prefixes for efficient IP range checking.
// This type is currently unused but reserved for future optimizations
// where frequent IP range lookups might benefit from pre-parsed prefixes.
type TrustedNets struct {
	prefixes []netip.Prefix // Parsed network prefixes for efficient lookups
}

// New loads and parses Fastly IP ranges from a JSON file.
// The file should contain Fastly's published IP ranges in their standard JSON format.
//
// Parameters:
//   - fileName: Path to the Fastly IP ranges JSON file
//
// Returns the parsed FastlyXFF structure or an error if the file cannot be
// read or the JSON format is invalid.
func New(fileName string) (*FastlyXFF, error) {
	b, err := os.ReadFile(fileName)
	if err != nil {
@@ -34,6 +72,19 @@ func New(fileName string) (*FastlyXFF, error) {
	return &d, nil
}

// EchoTrustOption converts Fastly IP ranges into Echo framework trust options.
// This method generates trust configurations that tell Echo to accept X-Forwarded-For
// headers only from Fastly's edge servers, ensuring accurate client IP extraction.
//
// The generated trust options should be used with Echo's IP extractor:
//
//	options, err := fastlyRanges.EchoTrustOption()
//	if err != nil {
//		return err
//	}
//	e.IPExtractor = echo.ExtractIPFromXFFHeader(options...)
//
// Returns a slice of Echo trust options or an error if any CIDR range cannot be parsed.
func (xff *FastlyXFF) EchoTrustOption() ([]echo.TrustOption, error) {
	ranges := []echo.TrustOption{}
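Putting New and EchoTrustOption together, a minimal sketch of configuring an Echo server behind Fastly; the JSON file path and listen address are illustrative, and the import path assumes the xff/fastlyxff directory within the common module.

package main

import (
	"log"

	"github.com/labstack/echo/v4"

	"go.ntppool.org/common/xff/fastlyxff"
)

func main() {
	// Load Fastly's published IP ranges; the file location is deployment-specific.
	ranges, err := fastlyxff.New("fastly.json")
	if err != nil {
		log.Fatalf("could not load Fastly IP ranges: %s", err)
	}

	options, err := ranges.EchoTrustOption()
	if err != nil {
		log.Fatalf("could not build trust options: %s", err)
	}

	e := echo.New()
	// Only trust X-Forwarded-For headers that arrive via Fastly edge servers.
	e.IPExtractor = echo.ExtractIPFromXFFHeader(options...)

	log.Fatal(e.Start(":8080"))
}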


@@ -3,14 +3,12 @@ package fastlyxff
import "testing" import "testing"
func TestFastlyIPRanges(t *testing.T) { func TestFastlyIPRanges(t *testing.T) {
fastlyxff, err := New("fastly.json") fastlyxff, err := New("fastly.json")
if err != nil { if err != nil {
t.Fatalf("could not load test data: %s", err) t.Fatalf("could not load test data: %s", err)
} }
data, err := fastlyxff.EchoTrustOption() data, err := fastlyxff.EchoTrustOption()
if err != nil { if err != nil {
t.Fatalf("could not parse test data: %s", err) t.Fatalf("could not parse test data: %s", err)
} }
@@ -19,5 +17,4 @@ func TestFastlyIPRanges(t *testing.T) {
t.Logf("only got %d prefixes, expected more", len(data)) t.Logf("only got %d prefixes, expected more", len(data))
t.Fail() t.Fail()
} }
} }