Mirror of https://github.com/TwiN/gatus.git (synced 2026-02-17 05:09:11 +00:00)
feat(suite): Implement Suites (#1239)
* feat(suite): Implement Suites

  Fixes #1230

* Update docs
* Fix variable alignment
* Prevent always-run endpoint from running if a context placeholder fails to resolve in the URL
* Return errors when a context placeholder path fails to resolve
* Add a couple of unit tests
* Add a couple of unit tests
* fix(ui): Update group count properly

  Fixes #1233

* refactor: Pass down entire config instead of several sub-configs
* fix: Change default suite interval and timeout
* fix: Deprecate disable-monitoring-lock in favor of concurrency
* fix: Make sure there are no duplicate keys
* Refactor some code
* Update watchdog/watchdog.go
* Update web/app/src/components/StepDetailsModal.vue

  Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* chore: Remove useless log
* fix: Set default concurrency to 3 instead of 5

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
.github/codecov.yml (vendored, 5 lines changed)
```diff
@@ -1,6 +1,9 @@
 ignore:
-  - "watchdog/watchdog.go"
   - "storage/store/sql/specific_postgres.go" # Can't test for postgres
+  - "watchdog/endpoint.go"
+  - "watchdog/external_endpoint.go"
+  - "watchdog/suite.go"
+  - "watchdog/watchdog.go"
 comment: false
 coverage:
   status:
```
README.md (136 lines changed)
```diff
@@ -45,6 +45,7 @@ Have any feedback or questions? [Create a discussion](https://github.com/TwiN/ga
 - [Configuration](#configuration)
   - [Endpoints](#endpoints)
   - [External Endpoints](#external-endpoints)
+  - [Suites (ALPHA)](#suites-alpha)
   - [Conditions](#conditions)
     - [Placeholders](#placeholders)
     - [Functions](#functions)
```
```diff
@@ -122,7 +123,7 @@ Have any feedback or questions? [Create a discussion](https://github.com/TwiN/ga
   - [Monitoring an endpoint using STARTTLS](#monitoring-an-endpoint-using-starttls)
   - [Monitoring an endpoint using TLS](#monitoring-an-endpoint-using-tls)
   - [Monitoring domain expiration](#monitoring-domain-expiration)
-  - [disable-monitoring-lock](#disable-monitoring-lock)
+  - [Concurrency](#concurrency)
   - [Reloading configuration on the fly](#reloading-configuration-on-the-fly)
   - [Endpoint groups](#endpoint-groups)
     - [How do I sort by group by default?](#how-do-i-sort-by-group-by-default)
```
```diff
@@ -247,7 +248,8 @@ If you want to test it locally, see [Docker](#docker).
 | `endpoints`                  | [Endpoints configuration](#endpoints).                                                                                 | Required `[]` |
 | `external-endpoints`         | [External Endpoints configuration](#external-endpoints).                                                               | `[]`          |
 | `security`                   | [Security configuration](#security).                                                                                   | `{}`          |
-| `disable-monitoring-lock`    | Whether to [disable the monitoring lock](#disable-monitoring-lock).                                                    | `false`       |
+| `concurrency`                | Maximum number of endpoints/suites to monitor concurrently. Set to `0` for unlimited. See [Concurrency](#concurrency). | `3`           |
+| `disable-monitoring-lock`    | Whether to [disable the monitoring lock](#disable-monitoring-lock). **Deprecated**: Use `concurrency: 0` instead.      | `false`       |
 | `skip-invalid-config-update` | Whether to ignore invalid configuration update. <br />See [Reloading configuration on the fly](#reloading-configuration-on-the-fly). | `false` |
 | `web`                        | Web configuration.                                                                                                     | `{}`          |
 | `web.address`                | Address to listen on.                                                                                                  | `0.0.0.0`     |
```
```diff
@@ -309,6 +311,8 @@ You can then configure alerts to be triggered when an endpoint is unhealthy once
 | `endpoints[].ui.dont-resolve-failed-conditions` | Whether to resolve failed conditions for the UI.                                                     | `false`                    |
 | `endpoints[].ui.badge.response-time`            | List of response time thresholds. Each time a threshold is reached, the badge has a different color. | `[50, 200, 300, 500, 750]` |
 | `endpoints[].extra-labels`                      | Extra labels to add to the metrics. Useful for grouping endpoints together.                          | `{}`                       |
+| `endpoints[].always-run`                        | (SUITES ONLY) Whether to execute this endpoint even if previous endpoints in the suite failed.       | `false`                    |
+| `endpoints[].store`                             | (SUITES ONLY) Map of values to extract from the response and store in the suite context (stored even on failure). | `{}`          |
 
 You may use the following placeholders in the body (`endpoints[].body`):
 - `[ENDPOINT_NAME]` (resolved from `endpoints[].name`)
```
````diff
@@ -366,6 +370,99 @@ Where:
 You must also pass the token as a `Bearer` token in the `Authorization` header.
 
 
+### Suites (ALPHA)
+Suites are collections of endpoints that are executed sequentially with a shared context.
+This allows you to create complex monitoring scenarios where the result from one endpoint can be used in subsequent endpoints, enabling workflow-style monitoring.
+
+Here are a few cases in which suites could be useful:
+- Testing multi-step authentication flows (login -> access protected resource -> logout)
+- API workflows where you need to chain requests (create resource -> update -> verify -> delete)
+- Monitoring business processes that span multiple services
+- Validating data consistency across multiple endpoints
+
+| Parameter                         | Description                                                                                         | Default       |
+|:----------------------------------|:----------------------------------------------------------------------------------------------------|:--------------|
+| `suites`                          | List of suites to monitor.                                                                          | `[]`          |
+| `suites[].enabled`                | Whether to monitor the suite.                                                                       | `true`        |
+| `suites[].name`                   | Name of the suite. Must be unique.                                                                  | Required `""` |
+| `suites[].group`                  | Group name. Used to group multiple suites together on the dashboard.                                | `""`          |
+| `suites[].interval`               | Duration to wait between suite executions.                                                          | `10m`         |
+| `suites[].timeout`                | Maximum duration for the entire suite execution.                                                    | `5m`          |
+| `suites[].context`                | Initial context values that can be referenced by endpoints.                                         | `{}`          |
+| `suites[].endpoints`              | List of endpoints to execute sequentially.                                                          | Required `[]` |
+| `suites[].endpoints[].store`      | Map of values to extract from the response and store in the suite context (stored even on failure). | `{}`          |
+| `suites[].endpoints[].always-run` | Whether to execute this endpoint even if previous endpoints in the suite failed.                    | `false`       |
+
+**Note**: Suite-level alerts are not supported yet. Configure alerts on individual endpoints within the suite instead.
+
+#### Using Context in Endpoints
+Once values are stored in the context, they can be referenced in subsequent endpoints:
+- In the URL: `https://api.example.com/users/[CONTEXT].userId`
+- In headers: `Authorization: Bearer [CONTEXT].authToken`
+- In the body: `{"user_id": "[CONTEXT].userId"}`
+- In conditions: `[BODY].server_ip == [CONTEXT].serverIp`
+
+#### Example Suite Configuration
+```yaml
+suites:
+  - name: item-crud-workflow
+    group: api-tests
+    interval: 5m
+    context:
+      price: "19.99"  # Initial static value in context
+    endpoints:
+      # Step 1: Create an item and store the item ID
+      - name: create-item
+        url: https://api.example.com/items
+        method: POST
+        body: '{"name": "Test Item", "price": "[CONTEXT].price"}'
+        conditions:
+          - "[STATUS] == 201"
+          - "len([BODY].id) > 0"
+          - "[BODY].price == [CONTEXT].price"
+        store:
+          itemId: "[BODY].id"
+        alerts:
+          - type: slack
+            description: "Failed to create item"
+
+      # Step 2: Update the item using the stored item ID
+      - name: update-item
+        url: https://api.example.com/items/[CONTEXT].itemId
+        method: PUT
+        body: '{"price": "24.99"}'
+        conditions:
+          - "[STATUS] == 200"
+        alerts:
+          - type: slack
+            description: "Failed to update item"
+
+      # Step 3: Fetch the item and validate the price
+      - name: get-item
+        url: https://api.example.com/items/[CONTEXT].itemId
+        method: GET
+        conditions:
+          - "[STATUS] == 200"
+          - "[BODY].price == 24.99"
+        alerts:
+          - type: slack
+            description: "Item price did not update correctly"
+
+      # Step 4: Delete the item (always-run: true to ensure cleanup even if step 2 or 3 fails)
+      - name: delete-item
+        url: https://api.example.com/items/[CONTEXT].itemId
+        method: DELETE
+        always-run: true
+        conditions:
+          - "[STATUS] == 204"
+        alerts:
+          - type: slack
+            description: "Failed to delete item"
+```
+
+The suite will be considered successful only if all required endpoints pass their conditions.
+
+
 ### Conditions
 Here are some examples of conditions you can use:
 
````
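The multi-step authentication flow listed among the suite use cases above can be sketched with the same `store` / `[CONTEXT]` mechanics documented in this section. This is an illustrative sketch only, not from the repository: the URLs, the `token` field in the login response, and the `MONITOR_PASSWORD` environment variable are assumptions.

```yaml
suites:
  - name: auth-flow
    interval: 10m
    endpoints:
      # Step 1: Log in and capture the session token from the response body
      - name: login
        url: https://auth.example.com/login  # hypothetical URL
        method: POST
        body: '{"username": "monitor", "password": "${MONITOR_PASSWORD}"}'
        conditions:
          - "[STATUS] == 200"
        store:
          authToken: "[BODY].token"  # assumes the API responds with {"token": "..."}

      # Step 2: Use the stored token to access a protected resource
      - name: protected-resource
        url: https://api.example.com/me
        method: GET
        headers:
          Authorization: "Bearer [CONTEXT].authToken"
        conditions:
          - "[STATUS] == 200"
```

If the login step fails, the token is never stored and the second step's `[CONTEXT].authToken` placeholder cannot resolve, which is reported as an error rather than silently sending a bad request.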
````diff
@@ -2921,17 +3018,34 @@ endpoints:
 > using the `[DOMAIN_EXPIRATION]` placeholder on an endpoint with an interval of less than `5m`.
 
 
-### disable-monitoring-lock
-Setting `disable-monitoring-lock` to `true` means that multiple endpoints could be monitored at the same time (i.e. parallel execution).
-
-While this behavior wouldn't generally be harmful, conditions using the `[RESPONSE_TIME]` placeholder could be impacted
-by the evaluation of multiple endpoints at the same time, therefore, the default value for this parameter is `false`.
-
-There are three main reasons why you might want to disable the monitoring lock:
-- You're using Gatus for load testing (each endpoint is periodically evaluated on a different goroutine, so
-technically, if you create 100 endpoints with a 1 second interval, Gatus will send 100 requests per second)
-- You have a _lot_ of endpoints to monitor
-- You want to test multiple endpoints at very short intervals (< 5s)
+### Concurrency
+By default, Gatus allows up to 3 endpoints/suites to be monitored concurrently. This provides a balance between performance and resource usage while maintaining accurate response time measurements.
+
+You can configure the concurrency level using the `concurrency` parameter:
+
+```yaml
+# Allow 10 endpoints/suites to be monitored concurrently
+concurrency: 10
+
+# Allow unlimited concurrent monitoring
+concurrency: 0
+
+# Use default concurrency (3)
+# concurrency: 3
+```
+
+**Important considerations:**
+- Higher concurrency can improve monitoring performance when you have many endpoints
+- Conditions using the `[RESPONSE_TIME]` placeholder may be less accurate with very high concurrency due to system resource contention
+- Set to `0` for unlimited concurrency (equivalent to the deprecated `disable-monitoring-lock: true`)
+
+**Use cases for higher concurrency:**
+- You have a large number of endpoints to monitor
+- You want to monitor endpoints at very short intervals (< 5s)
+- You're using Gatus for load testing scenarios
+
+**Legacy configuration:**
+The `disable-monitoring-lock` parameter is deprecated but still supported for backward compatibility. It's equivalent to setting `concurrency: 0`.
 
 
 ### Reloading configuration on the fly
````
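For anyone migrating off the deprecated flag, the change is a single line; this sketch shows the equivalent before/after settings described above:

```yaml
# Before (deprecated): opt out of the monitoring lock entirely
disable-monitoring-lock: true

# After (equivalent): unlimited concurrent monitoring
concurrency: 0
```

If you previously relied on the default lock (one endpoint at a time), the new default of `concurrency: 3` is a behavior change worth noting when comparing response time measurements across versions.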
```diff
@@ -87,7 +87,8 @@ func (a *API) createRouter(cfg *config.Config) *fiber.App {
 	unprotectedAPIRouter.Post("/v1/endpoints/:key/external", CreateExternalEndpointResult(cfg))
 	// SPA
 	app.Get("/", SinglePageApplication(cfg.UI))
-	app.Get("/endpoints/:name", SinglePageApplication(cfg.UI))
+	app.Get("/endpoints/:key", SinglePageApplication(cfg.UI))
+	app.Get("/suites/:key", SinglePageApplication(cfg.UI))
 	// Health endpoint
 	healthHandler := health.Handler().WithJSON(true)
 	app.Get("/health", func(c *fiber.Ctx) error {
@@ -127,5 +128,7 @@ func (a *API) createRouter(cfg *config.Config) *fiber.App {
 	}
 	protectedAPIRouter.Get("/v1/endpoints/statuses", EndpointStatuses(cfg))
 	protectedAPIRouter.Get("/v1/endpoints/:key/statuses", EndpointStatus(cfg))
+	protectedAPIRouter.Get("/v1/suites/statuses", SuiteStatuses(cfg))
+	protectedAPIRouter.Get("/v1/suites/:key/statuses", SuiteStatus(cfg))
 	return app
 }
```
```diff
@@ -34,8 +34,8 @@ func TestBadge(t *testing.T) {
 	cfg.Endpoints[0].UIConfig = ui.GetDefaultConfig()
 	cfg.Endpoints[1].UIConfig = ui.GetDefaultConfig()
 
-	watchdog.UpdateEndpointStatuses(cfg.Endpoints[0], &endpoint.Result{Success: true, Connected: true, Duration: time.Millisecond, Timestamp: time.Now()})
-	watchdog.UpdateEndpointStatuses(cfg.Endpoints[1], &endpoint.Result{Success: false, Connected: false, Duration: time.Second, Timestamp: time.Now()})
+	watchdog.UpdateEndpointStatus(cfg.Endpoints[0], &endpoint.Result{Success: true, Connected: true, Duration: time.Millisecond, Timestamp: time.Now()})
+	watchdog.UpdateEndpointStatus(cfg.Endpoints[1], &endpoint.Result{Success: false, Connected: false, Duration: time.Second, Timestamp: time.Now()})
 	api := New(cfg)
 	router := api.Router()
 	type Scenario struct {
@@ -284,8 +284,8 @@ func TestGetBadgeColorFromResponseTime(t *testing.T) {
 		},
 	}
 
-	store.Get().Insert(&firstTestEndpoint, &testSuccessfulResult)
-	store.Get().Insert(&secondTestEndpoint, &testSuccessfulResult)
+	store.Get().InsertEndpointResult(&firstTestEndpoint, &testSuccessfulResult)
+	store.Get().InsertEndpointResult(&secondTestEndpoint, &testSuccessfulResult)
 
 	scenarios := []struct {
 		Key string
@@ -28,8 +28,8 @@ func TestResponseTimeChart(t *testing.T) {
 			},
 		},
 	}
-	watchdog.UpdateEndpointStatuses(cfg.Endpoints[0], &endpoint.Result{Success: true, Duration: time.Millisecond, Timestamp: time.Now()})
-	watchdog.UpdateEndpointStatuses(cfg.Endpoints[1], &endpoint.Result{Success: false, Duration: time.Second, Timestamp: time.Now()})
+	watchdog.UpdateEndpointStatus(cfg.Endpoints[0], &endpoint.Result{Success: true, Duration: time.Millisecond, Timestamp: time.Now()})
+	watchdog.UpdateEndpointStatus(cfg.Endpoints[1], &endpoint.Result{Success: false, Duration: time.Second, Timestamp: time.Now()})
 	api := New(cfg)
 	router := api.Router()
 	type Scenario struct {
@@ -101,8 +101,8 @@ func TestEndpointStatus(t *testing.T) {
 			MaximumNumberOfEvents: storage.DefaultMaximumNumberOfEvents,
 		},
 	}
-	watchdog.UpdateEndpointStatuses(cfg.Endpoints[0], &endpoint.Result{Success: true, Duration: time.Millisecond, Timestamp: time.Now()})
-	watchdog.UpdateEndpointStatuses(cfg.Endpoints[1], &endpoint.Result{Success: false, Duration: time.Second, Timestamp: time.Now()})
+	watchdog.UpdateEndpointStatus(cfg.Endpoints[0], &endpoint.Result{Success: true, Duration: time.Millisecond, Timestamp: time.Now()})
+	watchdog.UpdateEndpointStatus(cfg.Endpoints[1], &endpoint.Result{Success: false, Duration: time.Second, Timestamp: time.Now()})
 	api := New(cfg)
 	router := api.Router()
 	type Scenario struct {
@@ -156,8 +156,8 @@ func TestEndpointStatuses(t *testing.T) {
 	defer cache.Clear()
 	firstResult := &testSuccessfulResult
 	secondResult := &testUnsuccessfulResult
-	store.Get().Insert(&testEndpoint, firstResult)
-	store.Get().Insert(&testEndpoint, secondResult)
+	store.Get().InsertEndpointResult(&testEndpoint, firstResult)
+	store.Get().InsertEndpointResult(&testEndpoint, secondResult)
 	// Can't be bothered dealing with timezone issues on the worker that runs the automated tests
 	firstResult.Timestamp = time.Time{}
 	secondResult.Timestamp = time.Time{}
@@ -60,7 +60,7 @@ func CreateExternalEndpointResult(cfg *config.Config) fiber.Handler {
 			result.Errors = append(result.Errors, c.Query("error"))
 		}
 		convertedEndpoint := externalEndpoint.ToEndpoint()
-		if err := store.Get().Insert(convertedEndpoint, result); err != nil {
+		if err := store.Get().InsertEndpointResult(convertedEndpoint, result); err != nil {
 			if errors.Is(err, common.ErrEndpointNotFound) {
 				return c.Status(404).SendString(err.Error())
 			}
@@ -33,8 +33,8 @@ func TestRawDataEndpoint(t *testing.T) {
 	cfg.Endpoints[0].UIConfig = ui.GetDefaultConfig()
 	cfg.Endpoints[1].UIConfig = ui.GetDefaultConfig()
 
-	watchdog.UpdateEndpointStatuses(cfg.Endpoints[0], &endpoint.Result{Success: true, Connected: true, Duration: time.Millisecond, Timestamp: time.Now()})
-	watchdog.UpdateEndpointStatuses(cfg.Endpoints[1], &endpoint.Result{Success: false, Connected: false, Duration: time.Second, Timestamp: time.Now()})
+	watchdog.UpdateEndpointStatus(cfg.Endpoints[0], &endpoint.Result{Success: true, Connected: true, Duration: time.Millisecond, Timestamp: time.Now()})
+	watchdog.UpdateEndpointStatus(cfg.Endpoints[1], &endpoint.Result{Success: false, Connected: false, Duration: time.Second, Timestamp: time.Now()})
 	api := New(cfg)
 	router := api.Router()
 	type Scenario struct {
@@ -34,8 +34,8 @@ func TestSinglePageApplication(t *testing.T) {
 			Title: "example-title",
 		},
 	}
-	watchdog.UpdateEndpointStatuses(cfg.Endpoints[0], &endpoint.Result{Success: true, Duration: time.Millisecond, Timestamp: time.Now()})
-	watchdog.UpdateEndpointStatuses(cfg.Endpoints[1], &endpoint.Result{Success: false, Duration: time.Second, Timestamp: time.Now()})
+	watchdog.UpdateEndpointStatus(cfg.Endpoints[0], &endpoint.Result{Success: true, Duration: time.Millisecond, Timestamp: time.Now()})
+	watchdog.UpdateEndpointStatus(cfg.Endpoints[1], &endpoint.Result{Success: false, Duration: time.Second, Timestamp: time.Now()})
 	api := New(cfg)
 	router := api.Router()
 	type Scenario struct {
```
api/suite_status.go (new file, 59 lines)
```go
package api

import (
	"fmt"

	"github.com/TwiN/gatus/v5/config"
	"github.com/TwiN/gatus/v5/config/suite"
	"github.com/TwiN/gatus/v5/storage/store"
	"github.com/TwiN/gatus/v5/storage/store/common/paging"
	"github.com/gofiber/fiber/v2"
)

// SuiteStatuses handles requests to retrieve all suite statuses
func SuiteStatuses(cfg *config.Config) fiber.Handler {
	return func(c *fiber.Ctx) error {
		page, pageSize := extractPageAndPageSizeFromRequest(c, 100)
		params := paging.NewSuiteStatusParams().WithPagination(page, pageSize)
		suiteStatuses, err := store.Get().GetAllSuiteStatuses(params)
		if err != nil {
			return c.Status(fiber.StatusInternalServerError).JSON(fiber.Map{
				"error": fmt.Sprintf("Failed to retrieve suite statuses: %v", err),
			})
		}
		// If no statuses exist yet, create empty ones from config
		if len(suiteStatuses) == 0 {
			for _, s := range cfg.Suites {
				if s.IsEnabled() {
					suiteStatuses = append(suiteStatuses, suite.NewStatus(s))
				}
			}
		}
		return c.Status(fiber.StatusOK).JSON(suiteStatuses)
	}
}

// SuiteStatus handles requests to retrieve a single suite's status
func SuiteStatus(cfg *config.Config) fiber.Handler {
	return func(c *fiber.Ctx) error {
		page, pageSize := extractPageAndPageSizeFromRequest(c, 100)
		key := c.Params("key")
		params := paging.NewSuiteStatusParams().WithPagination(page, pageSize)
		status, err := store.Get().GetSuiteStatusByKey(key, params)
		if err != nil || status == nil {
			// Try to find the suite in config
			for _, s := range cfg.Suites {
				if s.Key() == key {
					status = suite.NewStatus(s)
					break
				}
			}
			if status == nil {
				return c.Status(404).JSON(fiber.Map{
					"error": fmt.Sprintf("Suite with key '%s' not found", key),
				})
			}
		}
		return c.Status(fiber.StatusOK).JSON(status)
	}
}
```
api/suite_status_test.go (new file, 519 lines)
@@ -0,0 +1,519 @@
|
||||
package api
|
||||
|
||||
import (
|
||||
"io"
|
||||
"net/http"
|
||||
"net/http/httptest"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/TwiN/gatus/v5/config"
|
||||
"github.com/TwiN/gatus/v5/config/endpoint"
|
||||
"github.com/TwiN/gatus/v5/config/suite"
|
||||
"github.com/TwiN/gatus/v5/storage"
|
||||
"github.com/TwiN/gatus/v5/storage/store"
|
||||
"github.com/TwiN/gatus/v5/watchdog"
|
||||
)
|
||||
|
||||
var (
|
||||
suiteTimestamp = time.Now()
|
||||
|
||||
testSuiteEndpoint1 = endpoint.Endpoint{
|
||||
Name: "endpoint1",
|
||||
Group: "suite-group",
|
||||
URL: "https://example.org/endpoint1",
|
||||
Method: "GET",
|
||||
Interval: 30 * time.Second,
|
||||
Conditions: []endpoint.Condition{endpoint.Condition("[STATUS] == 200"), endpoint.Condition("[RESPONSE_TIME] < 500")},
|
||||
NumberOfFailuresInARow: 0,
|
||||
NumberOfSuccessesInARow: 0,
|
||||
}
|
||||
testSuiteEndpoint2 = endpoint.Endpoint{
|
||||
Name: "endpoint2",
|
||||
Group: "suite-group",
|
||||
URL: "https://example.org/endpoint2",
|
||||
Method: "GET",
|
||||
Interval: 30 * time.Second,
|
||||
Conditions: []endpoint.Condition{endpoint.Condition("[STATUS] == 200"), endpoint.Condition("[RESPONSE_TIME] < 300")},
|
||||
NumberOfFailuresInARow: 0,
|
||||
NumberOfSuccessesInARow: 0,
|
||||
}
|
||||
testSuite = suite.Suite{
|
||||
Name: "test-suite",
|
||||
Group: "suite-group",
|
||||
Interval: 60 * time.Second,
|
||||
Endpoints: []*endpoint.Endpoint{
|
||||
&testSuiteEndpoint1,
|
||||
&testSuiteEndpoint2,
|
||||
},
|
||||
}
|
||||
testSuccessfulSuiteResult = suite.Result{
|
||||
Name: "test-suite",
|
||||
Group: "suite-group",
|
||||
Success: true,
|
||||
Timestamp: suiteTimestamp,
|
||||
Duration: 250 * time.Millisecond,
|
||||
EndpointResults: []*endpoint.Result{
|
||||
{
|
||||
Hostname: "example.org",
|
||||
IP: "127.0.0.1",
|
||||
HTTPStatus: 200,
|
||||
Success: true,
|
||||
Timestamp: suiteTimestamp,
|
||||
Duration: 100 * time.Millisecond,
|
||||
ConditionResults: []*endpoint.ConditionResult{
|
||||
{
|
||||
Condition: "[STATUS] == 200",
|
||||
Success: true,
|
||||
},
|
||||
{
|
||||
Condition: "[RESPONSE_TIME] < 500",
|
||||
Success: true,
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
Hostname: "example.org",
|
||||
IP: "127.0.0.1",
|
||||
HTTPStatus: 200,
|
||||
Success: true,
|
||||
Timestamp: suiteTimestamp,
|
||||
Duration: 150 * time.Millisecond,
|
||||
ConditionResults: []*endpoint.ConditionResult{
|
||||
{
|
||||
Condition: "[STATUS] == 200",
|
||||
Success: true,
|
||||
},
|
||||
{
|
||||
Condition: "[RESPONSE_TIME] < 300",
|
||||
Success: true,
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
testUnsuccessfulSuiteResult = suite.Result{
|
||||
Name: "test-suite",
|
||||
Group: "suite-group",
|
||||
Success: false,
|
||||
Timestamp: suiteTimestamp,
|
||||
Duration: 850 * time.Millisecond,
|
||||
Errors: []string{"suite-error-1", "suite-error-2"},
|
||||
EndpointResults: []*endpoint.Result{
|
||||
{
|
||||
Hostname: "example.org",
|
||||
IP: "127.0.0.1",
|
||||
HTTPStatus: 200,
|
||||
Success: true,
|
||||
Timestamp: suiteTimestamp,
|
||||
Duration: 100 * time.Millisecond,
|
||||
ConditionResults: []*endpoint.ConditionResult{
|
||||
{
|
||||
Condition: "[STATUS] == 200",
|
||||
Success: true,
|
||||
},
|
||||
{
|
||||
Condition: "[RESPONSE_TIME] < 500",
|
||||
Success: true,
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
Hostname: "example.org",
|
||||
IP: "127.0.0.1",
|
||||
HTTPStatus: 500,
|
||||
Errors: []string{"endpoint-error-1"},
|
||||
Success: false,
|
||||
Timestamp: suiteTimestamp,
|
||||
Duration: 750 * time.Millisecond,
|
||||
ConditionResults: []*endpoint.ConditionResult{
|
||||
{
|
||||
Condition: "[STATUS] == 200",
|
||||
Success: false,
|
||||
},
|
||||
{
|
||||
Condition: "[RESPONSE_TIME] < 300",
|
||||
Success: false,
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
)
|
||||
|
||||
func TestSuiteStatus(t *testing.T) {
|
||||
defer store.Get().Clear()
|
||||
defer cache.Clear()
|
||||
cfg := &config.Config{
|
||||
Metrics: true,
|
||||
Suites: []*suite.Suite{
|
||||
{
|
||||
Name: "frontend-suite",
|
||||
Group: "core",
|
||||
},
|
||||
{
|
||||
Name: "backend-suite",
|
||||
Group: "core",
|
||||
},
|
||||
},
|
||||
Storage: &storage.Config{
|
||||
MaximumNumberOfResults: storage.DefaultMaximumNumberOfResults,
|
||||
MaximumNumberOfEvents: storage.DefaultMaximumNumberOfEvents,
|
||||
},
|
||||
}
|
||||
watchdog.UpdateSuiteStatus(cfg.Suites[0], &suite.Result{Success: true, Duration: time.Millisecond, Timestamp: time.Now(), Name: cfg.Suites[0].Name, Group: cfg.Suites[0].Group})
|
||||
watchdog.UpdateSuiteStatus(cfg.Suites[1], &suite.Result{Success: false, Duration: time.Second, Timestamp: time.Now(), Name: cfg.Suites[1].Name, Group: cfg.Suites[1].Group})
|
||||
api := New(cfg)
|
||||
router := api.Router()
|
||||
type Scenario struct {
|
||||
Name string
|
||||
Path string
|
||||
ExpectedCode int
|
||||
Gzip bool
|
||||
}
|
||||
scenarios := []Scenario{
|
||||
{
|
||||
Name: "suite-status",
|
||||
Path: "/api/v1/suites/core_frontend-suite/statuses",
|
||||
ExpectedCode: http.StatusOK,
|
||||
},
|
||||
{
|
||||
Name: "suite-status-gzip",
|
||||
Path: "/api/v1/suites/core_frontend-suite/statuses",
|
||||
ExpectedCode: http.StatusOK,
|
||||
Gzip: true,
|
||||
},
|
||||
{
|
||||
Name: "suite-status-pagination",
|
||||
Path: "/api/v1/suites/core_frontend-suite/statuses?page=1&pageSize=20",
|
||||
ExpectedCode: http.StatusOK,
|
||||
},
|
||||
{
|
||||
Name: "suite-status-for-invalid-key",
|
||||
Path: "/api/v1/suites/invalid_key/statuses",
|
||||
ExpectedCode: http.StatusNotFound,
|
||||
},
|
||||
}
|
||||
for _, scenario := range scenarios {
|
||||
t.Run(scenario.Name, func(t *testing.T) {
|
||||
request := httptest.NewRequest("GET", scenario.Path, http.NoBody)
|
||||
if scenario.Gzip {
|
||||
request.Header.Set("Accept-Encoding", "gzip")
|
||||
}
|
||||
response, err := router.Test(request)
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
if response.StatusCode != scenario.ExpectedCode {
|
||||
t.Errorf("%s %s should have returned %d, but returned %d instead", request.Method, request.URL, scenario.ExpectedCode, response.StatusCode)
|
||||
}
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestSuiteStatus_SuiteNotInStoreButInConfig(t *testing.T) {
|
||||
defer store.Get().Clear()
|
||||
defer cache.Clear()
|
||||
tests := []struct {
|
||||
name string
|
||||
suiteKey string
|
||||
cfg *config.Config
|
||||
expectedCode int
|
||||
expectJSON bool
|
||||
expectError string
|
||||
}{
|
||||
{
|
||||
name: "suite-not-in-store-but-exists-in-config-enabled",
|
||||
suiteKey: "test-group_test-suite",
|
||||
cfg: &config.Config{
|
||||
Metrics: true,
|
||||
Suites: []*suite.Suite{
|
||||
{
|
||||
Name: "test-suite",
|
||||
Group: "test-group",
|
||||
Enabled: boolPtr(true),
|
||||
Endpoints: []*endpoint.Endpoint{
|
||||
{
|
||||
Name: "endpoint-1",
|
||||
Group: "test-group",
|
||||
URL: "https://example.com",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
Storage: &storage.Config{
|
||||
MaximumNumberOfResults: storage.DefaultMaximumNumberOfResults,
|
||||
MaximumNumberOfEvents: storage.DefaultMaximumNumberOfEvents,
|
||||
},
|
||||
},
|
||||
expectedCode: http.StatusOK,
|
||||
expectJSON: true,
|
||||
},
|
||||
{
|
||||
name: "suite-not-in-store-but-exists-in-config-disabled",
|
||||
suiteKey: "test-group_disabled-suite",
|
||||
cfg: &config.Config{
|
||||
Metrics: true,
|
||||
Suites: []*suite.Suite{
|
||||
{
|
||||
Name: "disabled-suite",
|
||||
Group: "test-group",
|
||||
Enabled: boolPtr(false),
|
||||
},
|
||||
},
|
||||
Storage: &storage.Config{
|
||||
MaximumNumberOfResults: storage.DefaultMaximumNumberOfResults,
|
||||
MaximumNumberOfEvents: storage.DefaultMaximumNumberOfEvents,
|
||||
},
|
||||
},
|
||||
expectedCode: http.StatusOK,
|
||||
expectJSON: true,
|
||||
},
|
||||
{
|
||||
name: "suite-not-in-store-and-not-in-config",
|
||||
suiteKey: "nonexistent_suite",
|
||||
cfg: &config.Config{
|
||||
Metrics: true,
|
||||
Suites: []*suite.Suite{
|
||||
{
|
||||
Name: "different-suite",
|
||||
Group: "different-group",
|
||||
},
|
||||
},
|
||||
Storage: &storage.Config{
|
||||
MaximumNumberOfResults: storage.DefaultMaximumNumberOfResults,
|
||||
MaximumNumberOfEvents: storage.DefaultMaximumNumberOfEvents,
|
||||
},
|
||||
},
|
||||
expectedCode: http.StatusNotFound,
|
||||
expectError: "Suite with key 'nonexistent_suite' not found",
|
||||
},
|
||||
{
|
||||
name: "suite-with-empty-group-in-config",
|
||||
suiteKey: "_empty-group-suite",
|
||||
cfg: &config.Config{
|
||||
Metrics: true,
|
||||
Suites: []*suite.Suite{
|
||||
{
|
||||
Name: "empty-group-suite",
|
||||
Group: "",
|
||||
},
|
||||
},
|
||||
Storage: &storage.Config{
|
||||
MaximumNumberOfResults: storage.DefaultMaximumNumberOfResults,
|
||||
MaximumNumberOfEvents: storage.DefaultMaximumNumberOfEvents,
|
||||
},
|
||||
},
|
||||
expectedCode: http.StatusOK,
|
||||
expectJSON: true,
|
||||
},
|
||||
{
|
||||
name: "suite-nil-enabled-defaults-to-true",
|
||||
suiteKey: "default_enabled-suite",
|
||||
cfg: &config.Config{
|
||||
Metrics: true,
|
||||
Suites: []*suite.Suite{
|
||||
{
|
||||
Name: "enabled-suite",
|
||||
Group: "default",
|
||||
Enabled: nil,
|
||||
},
|
||||
},
|
||||
Storage: &storage.Config{
|
||||
MaximumNumberOfResults: storage.DefaultMaximumNumberOfResults,
|
||||
MaximumNumberOfEvents: storage.DefaultMaximumNumberOfEvents,
|
||||
},
|
||||
},
|
||||
expectedCode: http.StatusOK,
|
||||
expectJSON: true,
|
||||
},
|
||||
}
|
||||
for _, tt := range tests {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
api := New(tt.cfg)
|
||||
router := api.Router()
|
||||
request := httptest.NewRequest("GET", "/api/v1/suites/"+tt.suiteKey+"/statuses", http.NoBody)
|
||||
response, err := router.Test(request)
|
||||
if err != nil {
|
||||
t.Fatalf("Router test failed: %v", err)
|
||||
}
|
||||
defer response.Body.Close()
|
||||
if response.StatusCode != tt.expectedCode {
|
||||
t.Errorf("Expected status code %d, got %d", tt.expectedCode, response.StatusCode)
|
||||
}
|
||||
body, err := io.ReadAll(response.Body)
|
||||
if err != nil {
|
||||
t.Fatalf("Failed to read response body: %v", err)
|
||||
}
|
||||
bodyStr := string(body)
|
||||
if tt.expectJSON {
|
||||
if response.Header.Get("Content-Type") != "application/json" {
|
||||
t.Errorf("Expected JSON content type, got %s", response.Header.Get("Content-Type"))
|
||||
}
|
||||
if len(bodyStr) == 0 || bodyStr[0] != '{' {
|
||||
t.Errorf("Expected JSON response, got: %s", bodyStr)
|
||||
}
|
||||
}
|
||||
if tt.expectError != "" {
|
||||
if !contains(bodyStr, tt.expectError) {
|
||||
t.Errorf("Expected error message '%s' in response, got: %s", tt.expectError, bodyStr)
|
||||
}
|
||||
}
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestSuiteStatuses(t *testing.T) {
	defer store.Get().Clear()
	defer cache.Clear()
	firstResult := &testSuccessfulSuiteResult
	secondResult := &testUnsuccessfulSuiteResult
	store.Get().InsertSuiteResult(&testSuite, firstResult)
	store.Get().InsertSuiteResult(&testSuite, secondResult)
	// Can't be bothered dealing with timezone issues on the worker that runs the automated tests
	firstResult.Timestamp = time.Time{}
	secondResult.Timestamp = time.Time{}
	for i := range firstResult.EndpointResults {
		firstResult.EndpointResults[i].Timestamp = time.Time{}
	}
	for i := range secondResult.EndpointResults {
		secondResult.EndpointResults[i].Timestamp = time.Time{}
	}
	api := New(&config.Config{
		Metrics: true,
		Storage: &storage.Config{
			MaximumNumberOfResults: storage.DefaultMaximumNumberOfResults,
			MaximumNumberOfEvents:  storage.DefaultMaximumNumberOfEvents,
		},
	})
	router := api.Router()
	type Scenario struct {
		Name         string
		Path         string
		ExpectedCode int
		ExpectedBody string
	}
	scenarios := []Scenario{
		{
			Name:         "no-pagination",
			Path:         "/api/v1/suites/statuses",
			ExpectedCode: http.StatusOK,
			ExpectedBody: `[{"name":"test-suite","group":"suite-group","key":"suite-group_test-suite","results":[{"name":"test-suite","group":"suite-group","success":true,"timestamp":"0001-01-01T00:00:00Z","duration":250000000,"endpointResults":[{"status":200,"hostname":"example.org","duration":100000000,"conditionResults":[{"condition":"[STATUS] == 200","success":true},{"condition":"[RESPONSE_TIME] \u003c 500","success":true}],"success":true,"timestamp":"0001-01-01T00:00:00Z"},{"status":200,"hostname":"example.org","duration":150000000,"conditionResults":[{"condition":"[STATUS] == 200","success":true},{"condition":"[RESPONSE_TIME] \u003c 300","success":true}],"success":true,"timestamp":"0001-01-01T00:00:00Z"}]},{"name":"test-suite","group":"suite-group","success":false,"timestamp":"0001-01-01T00:00:00Z","duration":850000000,"endpointResults":[{"status":200,"hostname":"example.org","duration":100000000,"conditionResults":[{"condition":"[STATUS] == 200","success":true},{"condition":"[RESPONSE_TIME] \u003c 500","success":true}],"success":true,"timestamp":"0001-01-01T00:00:00Z"},{"status":500,"hostname":"example.org","duration":750000000,"errors":["endpoint-error-1"],"conditionResults":[{"condition":"[STATUS] == 200","success":false},{"condition":"[RESPONSE_TIME] \u003c 300","success":false}],"success":false,"timestamp":"0001-01-01T00:00:00Z"}],"errors":["suite-error-1","suite-error-2"]}]}]`,
		},
		{
			Name:         "pagination-first-result",
			Path:         "/api/v1/suites/statuses?page=1&pageSize=1",
			ExpectedCode: http.StatusOK,
			ExpectedBody: `[{"name":"test-suite","group":"suite-group","key":"suite-group_test-suite","results":[{"name":"test-suite","group":"suite-group","success":false,"timestamp":"0001-01-01T00:00:00Z","duration":850000000,"endpointResults":[{"status":200,"hostname":"example.org","duration":100000000,"conditionResults":[{"condition":"[STATUS] == 200","success":true},{"condition":"[RESPONSE_TIME] \u003c 500","success":true}],"success":true,"timestamp":"0001-01-01T00:00:00Z"},{"status":500,"hostname":"example.org","duration":750000000,"errors":["endpoint-error-1"],"conditionResults":[{"condition":"[STATUS] == 200","success":false},{"condition":"[RESPONSE_TIME] \u003c 300","success":false}],"success":false,"timestamp":"0001-01-01T00:00:00Z"}],"errors":["suite-error-1","suite-error-2"]}]}]`,
		},
		{
			Name:         "pagination-second-result",
			Path:         "/api/v1/suites/statuses?page=2&pageSize=1",
			ExpectedCode: http.StatusOK,
			ExpectedBody: `[{"name":"test-suite","group":"suite-group","key":"suite-group_test-suite","results":[{"name":"test-suite","group":"suite-group","success":true,"timestamp":"0001-01-01T00:00:00Z","duration":250000000,"endpointResults":[{"status":200,"hostname":"example.org","duration":100000000,"conditionResults":[{"condition":"[STATUS] == 200","success":true},{"condition":"[RESPONSE_TIME] \u003c 500","success":true}],"success":true,"timestamp":"0001-01-01T00:00:00Z"},{"status":200,"hostname":"example.org","duration":150000000,"conditionResults":[{"condition":"[STATUS] == 200","success":true},{"condition":"[RESPONSE_TIME] \u003c 300","success":true}],"success":true,"timestamp":"0001-01-01T00:00:00Z"}]}]}]`,
		},
		{
			Name:         "pagination-no-results",
			Path:         "/api/v1/suites/statuses?page=5&pageSize=20",
			ExpectedCode: http.StatusOK,
			ExpectedBody: `[{"name":"test-suite","group":"suite-group","key":"suite-group_test-suite","results":[]}]`,
		},
		{
			Name:         "invalid-pagination-should-fall-back-to-default",
			Path:         "/api/v1/suites/statuses?page=INVALID&pageSize=INVALID",
			ExpectedCode: http.StatusOK,
			ExpectedBody: `[{"name":"test-suite","group":"suite-group","key":"suite-group_test-suite","results":[{"name":"test-suite","group":"suite-group","success":true,"timestamp":"0001-01-01T00:00:00Z","duration":250000000,"endpointResults":[{"status":200,"hostname":"example.org","duration":100000000,"conditionResults":[{"condition":"[STATUS] == 200","success":true},{"condition":"[RESPONSE_TIME] \u003c 500","success":true}],"success":true,"timestamp":"0001-01-01T00:00:00Z"},{"status":200,"hostname":"example.org","duration":150000000,"conditionResults":[{"condition":"[STATUS] == 200","success":true},{"condition":"[RESPONSE_TIME] \u003c 300","success":true}],"success":true,"timestamp":"0001-01-01T00:00:00Z"}]},{"name":"test-suite","group":"suite-group","success":false,"timestamp":"0001-01-01T00:00:00Z","duration":850000000,"endpointResults":[{"status":200,"hostname":"example.org","duration":100000000,"conditionResults":[{"condition":"[STATUS] == 200","success":true},{"condition":"[RESPONSE_TIME] \u003c 500","success":true}],"success":true,"timestamp":"0001-01-01T00:00:00Z"},{"status":500,"hostname":"example.org","duration":750000000,"errors":["endpoint-error-1"],"conditionResults":[{"condition":"[STATUS] == 200","success":false},{"condition":"[RESPONSE_TIME] \u003c 300","success":false}],"success":false,"timestamp":"0001-01-01T00:00:00Z"}],"errors":["suite-error-1","suite-error-2"]}]}]`,
		},
	}
	for _, scenario := range scenarios {
		t.Run(scenario.Name, func(t *testing.T) {
			request := httptest.NewRequest("GET", scenario.Path, http.NoBody)
			response, err := router.Test(request)
			if err != nil {
				t.Fatalf("Router test failed: %v", err)
			}
			defer response.Body.Close()
			if response.StatusCode != scenario.ExpectedCode {
				t.Errorf("%s %s should have returned %d, but returned %d instead", request.Method, request.URL, scenario.ExpectedCode, response.StatusCode)
			}
			body, err := io.ReadAll(response.Body)
			if err != nil {
				t.Error("expected err to be nil, but was", err)
			}
			if string(body) != scenario.ExpectedBody {
				t.Errorf("expected:\n %s\n\ngot:\n %s", scenario.ExpectedBody, string(body))
			}
		})
	}
}

func TestSuiteStatuses_NoSuitesInStoreButExistInConfig(t *testing.T) {
	defer store.Get().Clear()
	defer cache.Clear()
	cfg := &config.Config{
		Metrics: true,
		Suites: []*suite.Suite{
			{
				Name:    "config-only-suite-1",
				Group:   "test-group",
				Enabled: boolPtr(true),
			},
			{
				Name:    "config-only-suite-2",
				Group:   "test-group",
				Enabled: boolPtr(true),
			},
			{
				Name:    "disabled-suite",
				Group:   "test-group",
				Enabled: boolPtr(false),
			},
		},
		Storage: &storage.Config{
			MaximumNumberOfResults: storage.DefaultMaximumNumberOfResults,
			MaximumNumberOfEvents:  storage.DefaultMaximumNumberOfEvents,
		},
	}
	api := New(cfg)
	router := api.Router()
	request := httptest.NewRequest("GET", "/api/v1/suites/statuses", http.NoBody)
	response, err := router.Test(request)
	if err != nil {
		t.Fatalf("Router test failed: %v", err)
	}
	defer response.Body.Close()
	if response.StatusCode != http.StatusOK {
		t.Errorf("Expected status code %d, got %d", http.StatusOK, response.StatusCode)
	}
	body, err := io.ReadAll(response.Body)
	if err != nil {
		t.Fatalf("Failed to read response body: %v", err)
	}
	bodyStr := string(body)
	if !contains(bodyStr, "config-only-suite-1") {
		t.Error("Expected config-only-suite-1 in response")
	}
	if !contains(bodyStr, "config-only-suite-2") {
		t.Error("Expected config-only-suite-2 in response")
	}
	if contains(bodyStr, "disabled-suite") {
		t.Error("Should not include disabled-suite in response")
	}
}

func boolPtr(b bool) *bool {
	return &b
}

// contains reports whether substr is within s.
func contains(s, substr string) bool {
	for i := 0; i+len(substr) <= len(s); i++ {
		if s[i:i+len(substr)] == substr {
			return true
		}
	}
	return false
}

109
config/config.go
@@ -17,8 +17,10 @@ import (
	"github.com/TwiN/gatus/v5/config/announcement"
	"github.com/TwiN/gatus/v5/config/connectivity"
	"github.com/TwiN/gatus/v5/config/endpoint"
	"github.com/TwiN/gatus/v5/config/key"
	"github.com/TwiN/gatus/v5/config/maintenance"
	"github.com/TwiN/gatus/v5/config/remote"
	"github.com/TwiN/gatus/v5/config/suite"
	"github.com/TwiN/gatus/v5/config/ui"
	"github.com/TwiN/gatus/v5/config/web"
	"github.com/TwiN/gatus/v5/security"
@@ -35,6 +37,9 @@ const (
	// DefaultFallbackConfigurationFilePath is the default fallback path that will be used to search for the
	// configuration file if DefaultConfigurationFilePath didn't work
	DefaultFallbackConfigurationFilePath = "config/config.yml"

	// DefaultConcurrency is the default number of endpoints/suites that can be monitored concurrently
	DefaultConcurrency = 3
)

var (
@@ -67,8 +72,14 @@ type Config struct {
	// DisableMonitoringLock Whether to disable the monitoring lock
	// The monitoring lock is what prevents multiple endpoints from being processed at the same time.
	// Disabling this may lead to inaccurate response times
	//
	// Deprecated: Use Concurrency instead TODO: REMOVE THIS IN v6.0.0
	DisableMonitoringLock bool `yaml:"disable-monitoring-lock,omitempty"`

	// Concurrency is the maximum number of endpoints/suites that can be monitored concurrently
	// Defaults to DefaultConcurrency. Set to 0 for unlimited concurrency.
	Concurrency int `yaml:"concurrency,omitempty"`

	// Security is the configuration for securing access to Gatus
	Security *security.Config `yaml:"security,omitempty"`

@@ -81,6 +92,9 @@ type Config struct {
	// ExternalEndpoints is the list of all external endpoints
	ExternalEndpoints []*endpoint.ExternalEndpoint `yaml:"external-endpoints,omitempty"`

	// Suites is the list of suites to monitor
	Suites []*suite.Suite `yaml:"suites,omitempty"`

	// Storage is the configuration for how the data is stored
	Storage *storage.Config `yaml:"storage,omitempty"`

@@ -309,6 +323,13 @@ func parseAndValidateConfigBytes(yamlBytes []byte) (config *Config, err error) {
	if err := validateAnnouncementsConfig(config); err != nil {
		return nil, err
	}
	if err := validateSuitesConfig(config); err != nil {
		return nil, err
	}
	if err := validateUniqueKeys(config); err != nil {
		return nil, err
	}
	validateAndSetConcurrencyDefaults(config)
	// Cross-config changes
	config.UI.MaximumNumberOfResults = config.Storage.MaximumNumberOfResults
}
@@ -405,7 +426,7 @@ func validateEndpointsConfig(config *Config) error {
	logr.Infof("[config.validateEndpointsConfig] Validated %d endpoints", len(config.Endpoints))
	// Validate external endpoints
	for _, ee := range config.ExternalEndpoints {
		logr.Debugf("[config.validateEndpointsConfig] Validating external endpoint '%s'", ee.Name)
		logr.Debugf("[config.validateEndpointsConfig] Validating external endpoint '%s'", ee.Key())
		if endpointKey := ee.Key(); duplicateValidationMap[endpointKey] {
			return fmt.Errorf("invalid external endpoint %s: name and group combination must be unique", ee.Key())
		} else {
@@ -419,6 +440,78 @@ func validateEndpointsConfig(config *Config) error {
	return nil
}

func validateSuitesConfig(config *Config) error {
	if config.Suites == nil || len(config.Suites) == 0 {
		logr.Info("[config.validateSuitesConfig] No suites configured")
		return nil
	}
	suiteNames := make(map[string]bool)
	for _, suite := range config.Suites {
		// Check for duplicate suite names
		if suiteNames[suite.Name] {
			return fmt.Errorf("duplicate suite name: %s", suite.Key())
		}
		suiteNames[suite.Name] = true
		// Validate the suite configuration
		if err := suite.ValidateAndSetDefaults(); err != nil {
			return fmt.Errorf("invalid suite '%s': %w", suite.Key(), err)
		}
		// Check that endpoints referenced in Store mappings use valid placeholders
		for _, suiteEndpoint := range suite.Endpoints {
			if suiteEndpoint.Store != nil {
				for contextKey, placeholder := range suiteEndpoint.Store {
					// Basic validation that the context key is a valid identifier
					if len(contextKey) == 0 {
						return fmt.Errorf("suite '%s' endpoint '%s' has empty context key in store mapping", suite.Key(), suiteEndpoint.Key())
					}
					if len(placeholder) == 0 {
						return fmt.Errorf("suite '%s' endpoint '%s' has empty placeholder in store mapping for key '%s'", suite.Key(), suiteEndpoint.Key(), contextKey)
					}
				}
			}
		}
	}
	logr.Infof("[config.validateSuitesConfig] Validated %d suite(s)", len(config.Suites))
	return nil
}
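The store-mapping rules enforced above (every `store` entry needs a non-empty context key and a non-empty placeholder) correspond to configurations like the following minimal suite, a sketch assembled from the test fixtures in this PR; the URLs and names are illustrative:

```yaml
suites:
  - name: login-flow
    group: testing
    endpoints:
      - name: get-token
        url: https://example.com/auth
        conditions:
          - "[STATUS] == 200"
        store:
          auth_token: "[BODY].token"   # context key -> placeholder; both must be non-empty
      - name: use-token
        url: https://example.com/data
        headers:
          Authorization: "Bearer {auth_token}"   # consumes the stored context value
        conditions:
          - "[STATUS] == 200"
```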

func validateUniqueKeys(config *Config) error {
	keyMap := make(map[string]string) // key -> description for error messages
	// Check all endpoints
	for _, ep := range config.Endpoints {
		epKey := ep.Key()
		if existing, exists := keyMap[epKey]; exists {
			return fmt.Errorf("duplicate key '%s': endpoint '%s' conflicts with %s", epKey, ep.Key(), existing)
		}
		keyMap[epKey] = fmt.Sprintf("endpoint '%s'", ep.Key())
	}
	// Check all external endpoints
	for _, ee := range config.ExternalEndpoints {
		eeKey := ee.Key()
		if existing, exists := keyMap[eeKey]; exists {
			return fmt.Errorf("duplicate key '%s': external endpoint '%s' conflicts with %s", eeKey, ee.Key(), existing)
		}
		keyMap[eeKey] = fmt.Sprintf("external endpoint '%s'", ee.Key())
	}
	// Check all suites
	for _, suite := range config.Suites {
		suiteKey := suite.Key()
		if existing, exists := keyMap[suiteKey]; exists {
			return fmt.Errorf("duplicate key '%s': suite '%s' conflicts with %s", suiteKey, suite.Key(), existing)
		}
		keyMap[suiteKey] = fmt.Sprintf("suite '%s'", suite.Key())
		// Check endpoints within suites (they generate keys using suite group + endpoint name)
		for _, ep := range suite.Endpoints {
			epKey := key.ConvertGroupAndNameToKey(suite.Group, ep.Name)
			if existing, exists := keyMap[epKey]; exists {
				return fmt.Errorf("duplicate key '%s': endpoint '%s' in suite '%s' conflicts with %s", epKey, epKey, suite.Key(), existing)
			}
			keyMap[epKey] = fmt.Sprintf("endpoint '%s' in suite '%s'", epKey, suite.Key())
		}
	}
	return nil
}

func validateSecurityConfig(config *Config) error {
	if config.Security != nil {
		if config.Security.IsValid() {
@@ -531,3 +624,17 @@ func validateAlertingConfig(alertingConfig *alerting.Config, endpoints []*endpoi
	}
	logr.Infof("[config.validateAlertingConfig] configuredProviders=%s; ignoredProviders=%s", validProviders, invalidProviders)
}

func validateAndSetConcurrencyDefaults(config *Config) {
	if config.DisableMonitoringLock {
		config.Concurrency = 0
		logr.Warn("WARNING: The 'disable-monitoring-lock' configuration has been deprecated and will be removed in v6.0.0")
		logr.Warn("WARNING: Please set 'concurrency: 0' instead")
		logr.Debug("[config.validateAndSetConcurrencyDefaults] DisableMonitoringLock is true, setting unlimited (0) concurrency")
	} else if config.Concurrency <= 0 && !config.DisableMonitoringLock {
		config.Concurrency = DefaultConcurrency
		logr.Debugf("[config.validateAndSetConcurrencyDefaults] Setting default concurrency to %d", config.Concurrency)
	} else {
		logr.Debugf("[config.validateAndSetConcurrencyDefaults] Using configured concurrency of %d", config.Concurrency)
	}
}

@@ -2105,3 +2105,382 @@ func TestConfig_GetUniqueExtraMetricLabels(t *testing.T) {
		})
	}
}

func TestParseAndValidateConfigBytesWithDuplicateKeysAcrossEntityTypes(t *testing.T) {
	scenarios := []struct {
		name        string
		shouldError bool
		expectedErr string
		config      string
	}{
		{
			name:        "endpoint-suite-same-key",
			shouldError: true,
			expectedErr: "duplicate key 'backend_test-api': suite 'backend_test-api' conflicts with endpoint 'backend_test-api'",
			config: `
endpoints:
  - name: test-api
    group: backend
    url: https://example.com/api
    conditions:
      - "[STATUS] == 200"

suites:
  - name: test-api
    group: backend
    interval: 30s
    endpoints:
      - name: step1
        url: https://example.com/test
        conditions:
          - "[STATUS] == 200"`,
		},
		{
			name:        "endpoint-suite-different-keys",
			shouldError: false,
			config: `
endpoints:
  - name: api-service
    group: backend
    url: https://example.com/api
    conditions:
      - "[STATUS] == 200"

suites:
  - name: integration-tests
    group: testing
    interval: 30s
    endpoints:
      - name: step1
        url: https://example.com/test
        conditions:
          - "[STATUS] == 200"`,
		},
		{
			name:        "endpoint-external-endpoint-suite-unique-keys",
			shouldError: false,
			config: `
endpoints:
  - name: api-service
    group: backend
    url: https://example.com/api
    conditions:
      - "[STATUS] == 200"

external-endpoints:
  - name: monitoring-agent
    group: infrastructure
    token: "secret-token"
    heartbeat:
      interval: 5m

suites:
  - name: integration-tests
    group: testing
    interval: 30s
    endpoints:
      - name: step1
        url: https://example.com/test
        conditions:
          - "[STATUS] == 200"`,
		},
		{
			name:        "suite-with-same-key-as-external-endpoint",
			shouldError: true,
			expectedErr: "duplicate key 'monitoring_health-check': suite 'monitoring_health-check' conflicts with external endpoint 'monitoring_health-check'",
			config: `
endpoints:
  - name: dummy
    url: https://example.com/dummy
    conditions:
      - "[STATUS] == 200"

external-endpoints:
  - name: health-check
    group: monitoring
    token: "secret-token"
    heartbeat:
      interval: 5m

suites:
  - name: health-check
    group: monitoring
    interval: 30s
    endpoints:
      - name: step1
        url: https://example.com/test
        conditions:
          - "[STATUS] == 200"`,
		},
		{
			name:        "endpoint-with-same-name-as-suite-endpoint-different-groups",
			shouldError: false,
			config: `
endpoints:
  - name: api-health
    group: backend
    url: https://example.com/health
    conditions:
      - "[STATUS] == 200"

suites:
  - name: integration-suite
    group: testing
    interval: 30s
    endpoints:
      - name: api-health
        url: https://example.com/api/health
        conditions:
          - "[STATUS] == 200"`,
		},
		{
			name:        "endpoint-conflicting-with-suite-endpoint",
			shouldError: true,
			expectedErr: "duplicate key 'backend_api-health': endpoint 'backend_api-health' in suite 'backend_integration-suite' conflicts with endpoint 'backend_api-health'",
			config: `
endpoints:
  - name: api-health
    group: backend
    url: https://example.com/health
    conditions:
      - "[STATUS] == 200"

suites:
  - name: integration-suite
    group: backend
    interval: 30s
    endpoints:
      - name: api-health
        url: https://example.com/api/health
        conditions:
          - "[STATUS] == 200"`,
		},
	}

	for _, scenario := range scenarios {
		t.Run(scenario.name, func(t *testing.T) {
			_, err := parseAndValidateConfigBytes([]byte(scenario.config))
			if scenario.shouldError {
				if err == nil {
					t.Error("should've returned an error")
				} else if scenario.expectedErr != "" && err.Error() != scenario.expectedErr {
					t.Errorf("expected error message '%s', got '%s'", scenario.expectedErr, err.Error())
				}
			} else if err != nil {
				t.Errorf("shouldn't have returned an error, got: %v", err)
			}
		})
	}
}

func TestParseAndValidateConfigBytesWithSuites(t *testing.T) {
	scenarios := []struct {
		name        string
		shouldError bool
		expectedErr string
		config      string
	}{
		{
			name:        "suite-with-no-name",
			shouldError: true,
			expectedErr: "invalid suite 'testing_': suite must have a name",
			config: `
endpoints:
  - name: dummy
    url: https://example.com/dummy
    conditions:
      - "[STATUS] == 200"

suites:
  - group: testing
    interval: 30s
    endpoints:
      - name: step1
        url: https://example.com/test
        conditions:
          - "[STATUS] == 200"`,
		},
		{
			name:        "suite-with-no-endpoints",
			shouldError: true,
			expectedErr: "invalid suite 'testing_empty-suite': suite must have at least one endpoint",
			config: `
endpoints:
  - name: dummy
    url: https://example.com/dummy
    conditions:
      - "[STATUS] == 200"

suites:
  - name: empty-suite
    group: testing
    interval: 30s
    endpoints: []`,
		},
		{
			name:        "suite-with-duplicate-endpoint-names",
			shouldError: true,
			expectedErr: "invalid suite 'testing_duplicate-test': suite cannot have duplicate endpoint names: duplicate endpoint name 'step1'",
			config: `
endpoints:
  - name: dummy
    url: https://example.com/dummy
    conditions:
      - "[STATUS] == 200"

suites:
  - name: duplicate-test
    group: testing
    interval: 30s
    endpoints:
      - name: step1
        url: https://example.com/test1
        conditions:
          - "[STATUS] == 200"
      - name: step1
        url: https://example.com/test2
        conditions:
          - "[STATUS] == 200"`,
		},
		{
			name:        "suite-with-invalid-negative-timeout",
			shouldError: true,
			expectedErr: "invalid suite 'testing_negative-timeout-suite': suite timeout must be positive",
			config: `
endpoints:
  - name: dummy
    url: https://example.com/dummy
    conditions:
      - "[STATUS] == 200"

suites:
  - name: negative-timeout-suite
    group: testing
    interval: 30s
    timeout: -5m
    endpoints:
      - name: step1
        url: https://example.com/test
        conditions:
          - "[STATUS] == 200"`,
		},
		{
			name:        "valid-suite-with-defaults",
			shouldError: false,
			config: `
endpoints:
  - name: api-service
    group: backend
    url: https://example.com/api
    conditions:
      - "[STATUS] == 200"

suites:
  - name: integration-test
    group: testing
    endpoints:
      - name: step1
        url: https://example.com/test
        conditions:
          - "[STATUS] == 200"
      - name: step2
        url: https://example.com/validate
        conditions:
          - "[STATUS] == 200"`,
		},
		{
			name:        "valid-suite-with-all-fields",
			shouldError: false,
			config: `
endpoints:
  - name: api-service
    group: backend
    url: https://example.com/api
    conditions:
      - "[STATUS] == 200"

suites:
  - name: full-integration-test
    group: testing
    enabled: true
    interval: 15m
    timeout: 10m
    context:
      base_url: "https://example.com"
      user_id: 12345
    endpoints:
      - name: authentication
        url: https://example.com/auth
        conditions:
          - "[STATUS] == 200"
      - name: user-profile
        url: https://example.com/profile
        conditions:
          - "[STATUS] == 200"
          - "[BODY].user_id == 12345"`,
		},
		{
			name:        "valid-suite-with-endpoint-inheritance",
			shouldError: false,
			config: `
endpoints:
  - name: api-service
    group: backend
    url: https://example.com/api
    conditions:
      - "[STATUS] == 200"

suites:
  - name: inheritance-test
    group: parent-group
    endpoints:
      - name: child-endpoint
        url: https://example.com/test
        conditions:
          - "[STATUS] == 200"`,
		},
		{
			name:        "valid-suite-with-store-functionality",
			shouldError: false,
			config: `
endpoints:
  - name: api-service
    group: backend
    url: https://example.com/api
    conditions:
      - "[STATUS] == 200"

suites:
  - name: store-test
    group: testing
    endpoints:
      - name: get-token
        url: https://example.com/auth
        conditions:
          - "[STATUS] == 200"
        store:
          auth_token: "[BODY].token"
      - name: use-token
        url: https://example.com/data
        headers:
          Authorization: "Bearer {auth_token}"
        conditions:
          - "[STATUS] == 200"`,
		},
	}

	for _, scenario := range scenarios {
		t.Run(scenario.name, func(t *testing.T) {
			_, err := parseAndValidateConfigBytes([]byte(scenario.config))
			if scenario.shouldError {
				if err == nil {
					t.Error("should've returned an error")
				} else if scenario.expectedErr != "" && err.Error() != scenario.expectedErr {
					t.Errorf("expected error message '%s', got '%s'", scenario.expectedErr, err.Error())
				}
			} else if err != nil {
				t.Errorf("shouldn't have returned an error, got: %v", err)
			}
		})
	}
}

@@ -7,82 +7,11 @@ import (
	"strings"
	"time"

	"github.com/TwiN/gatus/v5/jsonpath"
	"github.com/TwiN/gatus/v5/config/gontext"
	"github.com/TwiN/gatus/v5/pattern"
)

// Placeholders
const (
	// StatusPlaceholder is a placeholder for a HTTP status.
	//
	// Values that could replace the placeholder: 200, 404, 500, ...
	StatusPlaceholder = "[STATUS]"

	// IPPlaceholder is a placeholder for an IP.
	//
	// Values that could replace the placeholder: 127.0.0.1, 10.0.0.1, ...
	IPPlaceholder = "[IP]"

	// DNSRCodePlaceholder is a placeholder for DNS_RCODE
	//
	// Values that could replace the placeholder: NOERROR, FORMERR, SERVFAIL, NXDOMAIN, NOTIMP, REFUSED
	DNSRCodePlaceholder = "[DNS_RCODE]"

	// ResponseTimePlaceholder is a placeholder for the request response time, in milliseconds.
	//
	// Values that could replace the placeholder: 1, 500, 1000, ...
	ResponseTimePlaceholder = "[RESPONSE_TIME]"

	// BodyPlaceholder is a placeholder for the Body of the response
	//
	// Values that could replace the placeholder: {}, {"data":{"name":"john"}}, ...
	BodyPlaceholder = "[BODY]"

	// ConnectedPlaceholder is a placeholder for whether a connection was successfully established.
	//
	// Values that could replace the placeholder: true, false
	ConnectedPlaceholder = "[CONNECTED]"

	// CertificateExpirationPlaceholder is a placeholder for the duration before certificate expiration, in milliseconds.
	//
	// Values that could replace the placeholder: 4461677039 (~52 days)
	CertificateExpirationPlaceholder = "[CERTIFICATE_EXPIRATION]"

	// DomainExpirationPlaceholder is a placeholder for the duration before the domain expires, in milliseconds.
	DomainExpirationPlaceholder = "[DOMAIN_EXPIRATION]"
)
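Taken together, the placeholders above are used on the left-hand side of endpoint conditions; a sketch of typical combinations, drawn from the doc comments above (values are illustrative):

```yaml
conditions:
  - "[STATUS] == 200"
  - "[RESPONSE_TIME] < 500"
  - "[CONNECTED] == true"
  - "[BODY].data.name == john"
```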

// Functions
const (
	// LengthFunctionPrefix is the prefix for the length function
	//
	// Usage: len([BODY].articles) == 10, len([BODY].name) > 5
	LengthFunctionPrefix = "len("

	// HasFunctionPrefix is the prefix for the has function
	//
	// Usage: has([BODY].errors) == true
	HasFunctionPrefix = "has("

	// PatternFunctionPrefix is the prefix for the pattern function
	//
	// Usage: [IP] == pat(192.168.*.*)
	PatternFunctionPrefix = "pat("

	// AnyFunctionPrefix is the prefix for the any function
	//
	// Usage: [IP] == any(1.1.1.1, 1.0.0.1)
	AnyFunctionPrefix = "any("

	// FunctionSuffix is the suffix for all functions
	FunctionSuffix = ")"
)
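The four function prefixes above compose into conditions as shown in their Usage comments; collected into a single illustrative list:

```yaml
conditions:
  - "len([BODY].articles) == 10"
  - "has([BODY].errors) == true"
  - "[IP] == pat(192.168.*.*)"
  - "[IP] == any(1.1.1.1, 1.0.0.1)"
```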
|
||||
|
||||
// Other constants
|
||||
const (
|
||||
// InvalidConditionElementSuffix is the suffix that will be appended to an invalid condition
|
||||
InvalidConditionElementSuffix = "(INVALID)"
|
||||
|
||||
// maximumLengthBeforeTruncatingWhenComparedWithPattern is the maximum length an element being compared to a
|
||||
// pattern can have.
|
||||
//
|
||||
@@ -97,50 +26,50 @@ type Condition string
|
||||
// Validate checks if the Condition is valid
|
||||
func (c Condition) Validate() error {
|
||||
r := &Result{}
|
||||
c.evaluate(r, false)
|
||||
c.evaluate(r, false, nil)
|
||||
if len(r.Errors) != 0 {
|
||||
return errors.New(r.Errors[0])
|
||||
}
|
||||
return nil
|
||||
}
|
||||

// evaluate the Condition with the Result of the health check
func (c Condition) evaluate(result *Result, dontResolveFailedConditions bool) bool {
// evaluate the Condition with the Result and an optional context
func (c Condition) evaluate(result *Result, dontResolveFailedConditions bool, context *gontext.Gontext) bool {
    condition := string(c)
    success := false
    conditionToDisplay := condition
    if strings.Contains(condition, " == ") {
        parameters, resolvedParameters := sanitizeAndResolve(strings.Split(condition, " == "), result)
        parameters, resolvedParameters := sanitizeAndResolveWithContext(strings.Split(condition, " == "), result, context)
        success = isEqual(resolvedParameters[0], resolvedParameters[1])
        if !success && !dontResolveFailedConditions {
            conditionToDisplay = prettify(parameters, resolvedParameters, "==")
        }
    } else if strings.Contains(condition, " != ") {
        parameters, resolvedParameters := sanitizeAndResolve(strings.Split(condition, " != "), result)
        parameters, resolvedParameters := sanitizeAndResolveWithContext(strings.Split(condition, " != "), result, context)
        success = !isEqual(resolvedParameters[0], resolvedParameters[1])
        if !success && !dontResolveFailedConditions {
            conditionToDisplay = prettify(parameters, resolvedParameters, "!=")
        }
    } else if strings.Contains(condition, " <= ") {
        parameters, resolvedParameters := sanitizeAndResolveNumerical(strings.Split(condition, " <= "), result)
        parameters, resolvedParameters := sanitizeAndResolveNumericalWithContext(strings.Split(condition, " <= "), result, context)
        success = resolvedParameters[0] <= resolvedParameters[1]
        if !success && !dontResolveFailedConditions {
            conditionToDisplay = prettifyNumericalParameters(parameters, resolvedParameters, "<=")
        }
    } else if strings.Contains(condition, " >= ") {
        parameters, resolvedParameters := sanitizeAndResolveNumerical(strings.Split(condition, " >= "), result)
        parameters, resolvedParameters := sanitizeAndResolveNumericalWithContext(strings.Split(condition, " >= "), result, context)
        success = resolvedParameters[0] >= resolvedParameters[1]
        if !success && !dontResolveFailedConditions {
            conditionToDisplay = prettifyNumericalParameters(parameters, resolvedParameters, ">=")
        }
    } else if strings.Contains(condition, " > ") {
        parameters, resolvedParameters := sanitizeAndResolveNumerical(strings.Split(condition, " > "), result)
        parameters, resolvedParameters := sanitizeAndResolveNumericalWithContext(strings.Split(condition, " > "), result, context)
        success = resolvedParameters[0] > resolvedParameters[1]
        if !success && !dontResolveFailedConditions {
            conditionToDisplay = prettifyNumericalParameters(parameters, resolvedParameters, ">")
        }
    } else if strings.Contains(condition, " < ") {
        parameters, resolvedParameters := sanitizeAndResolveNumerical(strings.Split(condition, " < "), result)
        parameters, resolvedParameters := sanitizeAndResolveNumericalWithContext(strings.Split(condition, " < "), result, context)
        success = resolvedParameters[0] < resolvedParameters[1]
        if !success && !dontResolveFailedConditions {
            conditionToDisplay = prettifyNumericalParameters(parameters, resolvedParameters, "<")
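The hunk above dispatches on the first operator found in the condition string, splitting the condition and comparing the resolved sides (numerically for `<=`, `>=`, `>`, `<`). A minimal, self-contained sketch of that dispatch, with illustrative names rather than the actual gatus API, and without placeholder resolution:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// evaluateCondition sketches the operator dispatch used by Condition.evaluate:
// split on the first matching operator, then compare both sides. Two-character
// operators must be checked before their one-character prefixes (" <= " before " < ").
func evaluateCondition(condition string) (bool, error) {
	for _, op := range []string{" == ", " != ", " <= ", " >= ", " > ", " < "} {
		if !strings.Contains(condition, op) {
			continue
		}
		parts := strings.SplitN(condition, op, 2)
		left, right := strings.TrimSpace(parts[0]), strings.TrimSpace(parts[1])
		switch strings.TrimSpace(op) {
		case "==":
			return left == right, nil
		case "!=":
			return left != right, nil
		default:
			// Numerical operators require both sides to parse as integers
			l, err1 := strconv.ParseInt(left, 10, 64)
			r, err2 := strconv.ParseInt(right, 10, 64)
			if err1 != nil || err2 != nil {
				return false, fmt.Errorf("non-numerical parameters for %q", strings.TrimSpace(op))
			}
			switch strings.TrimSpace(op) {
			case "<=":
				return l <= r, nil
			case ">=":
				return l >= r, nil
			case ">":
				return l > r, nil
			case "<":
				return l < r, nil
			}
		}
	}
	return false, fmt.Errorf("invalid condition: no supported operator in %q", condition)
}

func main() {
	ok, err := evaluateCondition("200 == 200")
	fmt.Println(ok, err)
}
```

A condition with an unsupported operator (like the `[STATUS] ? 201` case in the tests further down) falls through the loop and is reported as invalid.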
@@ -235,79 +164,29 @@ func isEqual(first, second string) bool {
    return first == second
}

// sanitizeAndResolve sanitizes and resolves a list of elements and returns the list of parameters as well as a list
// of resolved parameters
func sanitizeAndResolve(elements []string, result *Result) ([]string, []string) {
// sanitizeAndResolveWithContext sanitizes and resolves a list of elements with an optional context
func sanitizeAndResolveWithContext(elements []string, result *Result, context *gontext.Gontext) ([]string, []string) {
    parameters := make([]string, len(elements))
    resolvedParameters := make([]string, len(elements))
    body := strings.TrimSpace(string(result.Body))
    for i, element := range elements {
        element = strings.TrimSpace(element)
        parameters[i] = element
        switch strings.ToUpper(element) {
        case StatusPlaceholder:
            element = strconv.Itoa(result.HTTPStatus)
        case IPPlaceholder:
            element = result.IP
        case ResponseTimePlaceholder:
            element = strconv.Itoa(int(result.Duration.Milliseconds()))
        case BodyPlaceholder:
            element = body
        case DNSRCodePlaceholder:
            element = result.DNSRCode
        case ConnectedPlaceholder:
            element = strconv.FormatBool(result.Connected)
        case CertificateExpirationPlaceholder:
            element = strconv.FormatInt(result.CertificateExpiration.Milliseconds(), 10)
        case DomainExpirationPlaceholder:
            element = strconv.FormatInt(result.DomainExpiration.Milliseconds(), 10)
        default:
            // if contains the BodyPlaceholder, then evaluate json path
            if strings.Contains(element, BodyPlaceholder) {
                checkingForLength := false
                checkingForExistence := false
                if strings.HasPrefix(element, LengthFunctionPrefix) && strings.HasSuffix(element, FunctionSuffix) {
                    checkingForLength = true
                    element = strings.TrimSuffix(strings.TrimPrefix(element, LengthFunctionPrefix), FunctionSuffix)
                }
                if strings.HasPrefix(element, HasFunctionPrefix) && strings.HasSuffix(element, FunctionSuffix) {
                    checkingForExistence = true
                    element = strings.TrimSuffix(strings.TrimPrefix(element, HasFunctionPrefix), FunctionSuffix)
                }
                resolvedElement, resolvedElementLength, err := jsonpath.Eval(strings.TrimPrefix(strings.TrimPrefix(element, BodyPlaceholder), "."), result.Body)
                if checkingForExistence {
                    if err != nil {
                        element = "false"
                    } else {
                        element = "true"
                    }
                } else {
                    if err != nil {
                        if err.Error() != "unexpected end of JSON input" {
                            result.AddError(err.Error())
                        }
                        if checkingForLength {
                            element = LengthFunctionPrefix + element + FunctionSuffix + " " + InvalidConditionElementSuffix
                        } else {
                            element = element + " " + InvalidConditionElementSuffix
                        }
                    } else {
                        if checkingForLength {
                            element = strconv.Itoa(resolvedElementLength)
                        } else {
                            element = resolvedElement
                        }
                    }
                }
            }
        }

        // Use the unified ResolvePlaceholder function
        resolved, err := ResolvePlaceholder(element, result, context)
        if err != nil {
            // If there's an error, add it to the result
            result.AddError(err.Error())
            resolvedParameters[i] = element + " " + InvalidConditionElementSuffix
        } else {
            resolvedParameters[i] = resolved
        }
        resolvedParameters[i] = element
    }
    return parameters, resolvedParameters
}

func sanitizeAndResolveNumerical(list []string, result *Result) (parameters []string, resolvedNumericalParameters []int64) {
    parameters, resolvedParameters := sanitizeAndResolve(list, result)
func sanitizeAndResolveNumericalWithContext(list []string, result *Result, context *gontext.Gontext) (parameters []string, resolvedNumericalParameters []int64) {
    parameters, resolvedParameters := sanitizeAndResolveWithContext(list, result, context)
    for _, element := range resolvedParameters {
        if duration, err := time.ParseDuration(element); duration != 0 && err == nil {
            // If the string is a duration, convert it to milliseconds
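The removed `default:` branch above resolved `[BODY].path` elements by walking a JSON path through the response body. A simplified sketch of that resolution, using `encoding/json` instead of gatus's `jsonpath` package (which additionally supports `len()`, `has()`, and array indexing), with the function name chosen here for illustration:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// resolveBodyPath walks a dot-separated path through a JSON object and returns
// the value it lands on, formatted as a string. Only nested objects are
// supported in this sketch; the real jsonpath.Eval handles arrays and more.
func resolveBodyPath(body []byte, path string) (string, error) {
	var data interface{}
	if err := json.Unmarshal(body, &data); err != nil {
		return "", err
	}
	current := data
	for _, part := range strings.Split(path, ".") {
		object, ok := current.(map[string]interface{})
		if !ok {
			return "", fmt.Errorf("path '%s' not found", path)
		}
		if current, ok = object[part]; !ok {
			return "", fmt.Errorf("path '%s' not found", path)
		}
	}
	return fmt.Sprintf("%v", current), nil
}

func main() {
	value, err := resolveBodyPath([]byte(`{"user": {"name": "john.doe"}}`), "user.name")
	fmt.Println(value, err)
}
```

In the new code this logic is folded into a unified `ResolvePlaceholder` helper so `[BODY]`, `[STATUS]`, and `[CONTEXT]` placeholders all go through one code path.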
@@ -8,7 +8,7 @@ func BenchmarkCondition_evaluateWithBodyStringAny(b *testing.B) {
    condition := Condition("[BODY].name == any(john.doe, jane.doe)")
    for n := 0; n < b.N; n++ {
        result := &Result{Body: []byte("{\"name\": \"john.doe\"}")}
        condition.evaluate(result, false)
        condition.evaluate(result, false, nil)
    }
    b.ReportAllocs()
}
@@ -17,7 +17,7 @@ func BenchmarkCondition_evaluateWithBodyStringAnyFailure(b *testing.B) {
    condition := Condition("[BODY].name == any(john.doe, jane.doe)")
    for n := 0; n < b.N; n++ {
        result := &Result{Body: []byte("{\"name\": \"bob.doe\"}")}
        condition.evaluate(result, false)
        condition.evaluate(result, false, nil)
    }
    b.ReportAllocs()
}
@@ -26,7 +26,7 @@ func BenchmarkCondition_evaluateWithBodyString(b *testing.B) {
    condition := Condition("[BODY].name == john.doe")
    for n := 0; n < b.N; n++ {
        result := &Result{Body: []byte("{\"name\": \"john.doe\"}")}
        condition.evaluate(result, false)
        condition.evaluate(result, false, nil)
    }
    b.ReportAllocs()
}
@@ -35,7 +35,7 @@ func BenchmarkCondition_evaluateWithBodyStringFailure(b *testing.B) {
    condition := Condition("[BODY].name == john.doe")
    for n := 0; n < b.N; n++ {
        result := &Result{Body: []byte("{\"name\": \"bob.doe\"}")}
        condition.evaluate(result, false)
        condition.evaluate(result, false, nil)
    }
    b.ReportAllocs()
}
@@ -44,7 +44,7 @@ func BenchmarkCondition_evaluateWithBodyStringFailureInvalidPath(b *testing.B) {
    condition := Condition("[BODY].user.name == bob.doe")
    for n := 0; n < b.N; n++ {
        result := &Result{Body: []byte("{\"name\": \"bob.doe\"}")}
        condition.evaluate(result, false)
        condition.evaluate(result, false, nil)
    }
    b.ReportAllocs()
}
@@ -53,7 +53,7 @@ func BenchmarkCondition_evaluateWithBodyStringLen(b *testing.B) {
    condition := Condition("len([BODY].name) == 8")
    for n := 0; n < b.N; n++ {
        result := &Result{Body: []byte("{\"name\": \"john.doe\"}")}
        condition.evaluate(result, false)
        condition.evaluate(result, false, nil)
    }
    b.ReportAllocs()
}
@@ -62,7 +62,7 @@ func BenchmarkCondition_evaluateWithBodyStringLenFailure(b *testing.B) {
    condition := Condition("len([BODY].name) == 8")
    for n := 0; n < b.N; n++ {
        result := &Result{Body: []byte("{\"name\": \"bob.doe\"}")}
        condition.evaluate(result, false)
        condition.evaluate(result, false, nil)
    }
    b.ReportAllocs()
}
@@ -71,7 +71,7 @@ func BenchmarkCondition_evaluateWithStatus(b *testing.B) {
    condition := Condition("[STATUS] == 200")
    for n := 0; n < b.N; n++ {
        result := &Result{HTTPStatus: 200}
        condition.evaluate(result, false)
        condition.evaluate(result, false, nil)
    }
    b.ReportAllocs()
}
@@ -80,7 +80,7 @@ func BenchmarkCondition_evaluateWithStatusFailure(b *testing.B) {
    condition := Condition("[STATUS] == 200")
    for n := 0; n < b.N; n++ {
        result := &Result{HTTPStatus: 400}
        condition.evaluate(result, false)
        condition.evaluate(result, false, nil)
    }
    b.ReportAllocs()
}
@@ -755,7 +755,7 @@ func TestCondition_evaluate(t *testing.T) {
    }
    for _, scenario := range scenarios {
        t.Run(scenario.Name, func(t *testing.T) {
            scenario.Condition.evaluate(scenario.Result, scenario.DontResolveFailedConditions)
            scenario.Condition.evaluate(scenario.Result, scenario.DontResolveFailedConditions, nil)
            if scenario.Result.ConditionResults[0].Success != scenario.ExpectedSuccess {
                t.Errorf("Condition '%s' should have been success=%v", scenario.Condition, scenario.ExpectedSuccess)
            }
@@ -769,7 +769,7 @@ func TestCondition_evaluate(t *testing.T) {
func TestCondition_evaluateWithInvalidOperator(t *testing.T) {
    condition := Condition("[STATUS] ? 201")
    result := &Result{HTTPStatus: 201}
    condition.evaluate(result, false)
    condition.evaluate(result, false, nil)
    if result.Success {
        t.Error("condition was invalid, result should've been a failure")
    }
@@ -21,6 +21,8 @@ import (
    "github.com/TwiN/gatus/v5/config/endpoint/dns"
    sshconfig "github.com/TwiN/gatus/v5/config/endpoint/ssh"
    "github.com/TwiN/gatus/v5/config/endpoint/ui"
    "github.com/TwiN/gatus/v5/config/gontext"
    "github.com/TwiN/gatus/v5/config/key"
    "github.com/TwiN/gatus/v5/config/maintenance"
    "golang.org/x/crypto/ssh"
)
@@ -134,6 +136,18 @@ type Endpoint struct {

    // LastReminderSent is the time at which the last reminder was sent for this endpoint.
    LastReminderSent time.Time `yaml:"-"`

    ///////////////////////
    // SUITE-ONLY FIELDS //
    ///////////////////////

    // Store is a map of values to extract from the result and store in the suite context
    // This field is only used when the endpoint is part of a suite
    Store map[string]string `yaml:"store,omitempty"`

    // AlwaysRun defines whether to execute this endpoint even if previous endpoints in the suite failed
    // This field is only used when the endpoint is part of a suite
    AlwaysRun bool `yaml:"always-run,omitempty"`
}

// IsEnabled returns whether the endpoint is enabled or not
@@ -255,7 +269,7 @@ func (e *Endpoint) DisplayName() string {

// Key returns the unique key for the Endpoint
func (e *Endpoint) Key() string {
    return ConvertGroupAndEndpointNameToKey(e.Group, e.Name)
    return key.ConvertGroupAndNameToKey(e.Group, e.Name)
}

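The `Key()` change above moves key construction into a shared `config/key` package so endpoints, external endpoints, and suites all derive their unique storage key the same way. A minimal sketch of what such a helper could look like; the exact sanitization rules (which characters are replaced, the `_` separator) are an assumption for illustration, not a copy of the gatus implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// convertGroupAndNameToKey builds a lowercase, URL-safe key from a group and a
// name so the pair can be used as a stable storage/lookup identifier.
func convertGroupAndNameToKey(group, name string) string {
	sanitize := func(s string) string {
		// Assumed rules: lowercase, and replace separators that would be
		// ambiguous in a key or a URL with "-"
		return strings.ToLower(strings.NewReplacer("/", "-", "_", "-", ",", "-", ".", "-", " ", "-").Replace(s))
	}
	return sanitize(group) + "_" + sanitize(name)
}

func main() {
	fmt.Println(convertGroupAndNameToKey("Core Services", "front end"))
}
```

Centralizing this avoids the old situation where `Endpoint` and `ExternalEndpoint` each carried their own copy of the conversion logic.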
// Close HTTP connections between watchdog and endpoints to avoid dangling socket file descriptors
@@ -269,16 +283,26 @@ func (e *Endpoint) Close() {

// EvaluateHealth sends a request to the endpoint's URL and evaluates the conditions of the endpoint.
func (e *Endpoint) EvaluateHealth() *Result {
    return e.EvaluateHealthWithContext(nil)
}

// EvaluateHealthWithContext sends a request to the endpoint's URL with context support and evaluates the conditions
func (e *Endpoint) EvaluateHealthWithContext(context *gontext.Gontext) *Result {
    result := &Result{Success: true, Errors: []string{}}
    // Preprocess the endpoint with context if provided
    processedEndpoint := e
    if context != nil {
        processedEndpoint = e.preprocessWithContext(result, context)
    }
    // Parse or extract hostname from URL
    if e.DNSConfig != nil {
        result.Hostname = strings.TrimSuffix(e.URL, ":53")
    } else if e.Type() == TypeICMP {
    if processedEndpoint.DNSConfig != nil {
        result.Hostname = strings.TrimSuffix(processedEndpoint.URL, ":53")
    } else if processedEndpoint.Type() == TypeICMP {
        // To handle IPv6 addresses, we need to handle the hostname differently here. This is to avoid, for instance,
        // "1111:2222:3333::4444" being displayed as "1111:2222:3333:" because :4444 would be interpreted as a port.
        result.Hostname = strings.TrimPrefix(e.URL, "icmp://")
        result.Hostname = strings.TrimPrefix(processedEndpoint.URL, "icmp://")
    } else {
        urlObject, err := url.Parse(e.URL)
        urlObject, err := url.Parse(processedEndpoint.URL)
        if err != nil {
            result.AddError(err.Error())
        } else {
@@ -287,11 +311,11 @@ func (e *Endpoint) EvaluateHealth() *Result {
        }
    }
    // Retrieve IP if necessary
    if e.needsToRetrieveIP() {
        e.getIP(result)
    if processedEndpoint.needsToRetrieveIP() {
        processedEndpoint.getIP(result)
    }
    // Retrieve domain expiration if necessary
    if e.needsToRetrieveDomainExpiration() && len(result.Hostname) > 0 {
    if processedEndpoint.needsToRetrieveDomainExpiration() && len(result.Hostname) > 0 {
        var err error
        if result.DomainExpiration, err = client.GetDomainExpiration(result.Hostname); err != nil {
            result.AddError(err.Error())
@@ -299,42 +323,91 @@ func (e *Endpoint) EvaluateHealth() *Result {
    }
    // Call the endpoint (if there's no errors)
    if len(result.Errors) == 0 {
        e.call(result)
        processedEndpoint.call(result)
    } else {
        result.Success = false
    }
    // Evaluate the conditions
    for _, condition := range e.Conditions {
        success := condition.evaluate(result, e.UIConfig.DontResolveFailedConditions)
    for _, condition := range processedEndpoint.Conditions {
        success := condition.evaluate(result, processedEndpoint.UIConfig.DontResolveFailedConditions, context)
        if !success {
            result.Success = false
        }
    }
    result.Timestamp = time.Now()
    // Clean up parameters that we don't need to keep in the results
    if e.UIConfig.HideURL {
    if processedEndpoint.UIConfig.HideURL {
        for errIdx, errorString := range result.Errors {
            result.Errors[errIdx] = strings.ReplaceAll(errorString, e.URL, "<redacted>")
            result.Errors[errIdx] = strings.ReplaceAll(errorString, processedEndpoint.URL, "<redacted>")
        }
    }
    if e.UIConfig.HideHostname {
    if processedEndpoint.UIConfig.HideHostname {
        for errIdx, errorString := range result.Errors {
            result.Errors[errIdx] = strings.ReplaceAll(errorString, result.Hostname, "<redacted>")
        }
        result.Hostname = "" // remove it from the result so it doesn't get exposed
    }
    if e.UIConfig.HidePort && len(result.port) > 0 {
    if processedEndpoint.UIConfig.HidePort && len(result.port) > 0 {
        for errIdx, errorString := range result.Errors {
            result.Errors[errIdx] = strings.ReplaceAll(errorString, result.port, "<redacted>")
        }
        result.port = ""
    }
    if e.UIConfig.HideConditions {
    if processedEndpoint.UIConfig.HideConditions {
        result.ConditionResults = nil
    }
    return result
}

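`EvaluateHealthWithContext` is what a suite executor calls for each step: steps share one context, values a step stores become available to later steps, and once a step fails, subsequent steps are skipped unless they are marked `always-run` (useful for cleanup). A minimal sketch of that control flow, using simplified stand-in types rather than the actual gatus API:

```go
package main

import (
	"fmt"
	"strings"
)

// step is a stand-in for a suite step; Run stands in for
// EvaluateHealthWithContext and may read/write the shared context.
type step struct {
	Name      string
	AlwaysRun bool
	Run       func(ctx map[string]string) bool
}

// runSuite executes steps in order over a shared context, skipping the
// remaining steps after a failure unless a step is marked AlwaysRun.
func runSuite(steps []step) map[string]bool {
	ctx := make(map[string]string)
	results := make(map[string]bool)
	failed := false
	for _, s := range steps {
		if failed && !s.AlwaysRun {
			continue // skip: a previous step failed and this one isn't always-run
		}
		ok := s.Run(ctx)
		results[s.Name] = ok
		if !ok {
			failed = true
		}
	}
	return results
}

func main() {
	results := runSuite([]step{
		{Name: "login", Run: func(ctx map[string]string) bool { ctx["token"] = "abc"; return true }},
		{Name: "fetch", Run: func(ctx map[string]string) bool { return strings.HasPrefix(ctx["token"], "abc") }},
	})
	fmt.Println(results["login"], results["fetch"])
}
```

In gatus itself, writing into the context is driven by the endpoint's `store` map (see the suite-only fields added to `Endpoint` above) rather than by arbitrary closures.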
// preprocessWithContext creates a copy of the endpoint with context placeholders replaced
func (e *Endpoint) preprocessWithContext(result *Result, context *gontext.Gontext) *Endpoint {
    // Create a deep copy of the endpoint
    processed := &Endpoint{}
    *processed = *e
    var err error
    // Replace context placeholders in URL
    if processed.URL, err = replaceContextPlaceholders(e.URL, context); err != nil {
        result.AddError(err.Error())
    }
    // Replace context placeholders in Body
    if processed.Body, err = replaceContextPlaceholders(e.Body, context); err != nil {
        result.AddError(err.Error())
    }
    // Replace context placeholders in Headers
    if e.Headers != nil {
        processed.Headers = make(map[string]string)
        for k, v := range e.Headers {
            if processed.Headers[k], err = replaceContextPlaceholders(v, context); err != nil {
                result.AddError(err.Error())
            }
        }
    }
    return processed
}

// replaceContextPlaceholders replaces [CONTEXT].path placeholders with actual values
func replaceContextPlaceholders(input string, ctx *gontext.Gontext) (string, error) {
    if ctx == nil {
        return input, nil
    }
    var contextErrors []string
    contextRegex := regexp.MustCompile(`\[CONTEXT\]\.[\w\.]+`)
    result := contextRegex.ReplaceAllStringFunc(input, func(match string) string {
        // Extract the path after [CONTEXT].
        path := strings.TrimPrefix(match, "[CONTEXT].")
        value, err := ctx.Get(path)
        if err != nil {
            contextErrors = append(contextErrors, fmt.Sprintf("path '%s' not found", path))
            return match // Keep placeholder for error reporting
        }
        return fmt.Sprintf("%v", value)
    })
    if len(contextErrors) > 0 {
        return result, fmt.Errorf("context placeholder resolution failed: %s", strings.Join(contextErrors, ", "))
    }
    return result, nil
}

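The function above can be exercised in isolation. Here is a self-contained sketch of the same regex-based substitution, with the nested `gontext` lookup replaced by a flat map (an assumption for illustration), so the success and failure paths can be run directly:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// resolveContextPlaceholders substitutes [CONTEXT].path placeholders from a
// flat key/value map. On a missing path, the placeholder is kept verbatim so
// the failure is visible in the produced value, and an error is returned.
func resolveContextPlaceholders(input string, ctx map[string]string) (string, error) {
	var missing []string
	re := regexp.MustCompile(`\[CONTEXT\]\.[\w\.]+`)
	out := re.ReplaceAllStringFunc(input, func(match string) string {
		path := strings.TrimPrefix(match, "[CONTEXT].")
		if value, ok := ctx[path]; ok {
			return value
		}
		missing = append(missing, path)
		return match // keep the placeholder for error reporting
	})
	if len(missing) > 0 {
		return out, fmt.Errorf("context placeholder resolution failed: %s", strings.Join(missing, ", "))
	}
	return out, nil
}

func main() {
	url, err := resolveContextPlaceholders("https://api.example.com/users/[CONTEXT].userId", map[string]string{"userId": "12345"})
	fmt.Println(url, err)
}
```

Keeping the unresolved placeholder in the output (rather than substituting an empty string) is what makes the `expectedURL` values in the failure cases of `TestEndpoint_preprocessWithContext` below still contain `[CONTEXT].…`.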
func (e *Endpoint) getParsedBody() string {
    body := e.Body
    body = strings.ReplaceAll(body, "[ENDPOINT_NAME]", e.Name)

@@ -16,6 +16,7 @@ import (
    "github.com/TwiN/gatus/v5/config/endpoint/dns"
    "github.com/TwiN/gatus/v5/config/endpoint/ssh"
    "github.com/TwiN/gatus/v5/config/endpoint/ui"
    "github.com/TwiN/gatus/v5/config/gontext"
    "github.com/TwiN/gatus/v5/config/maintenance"
    "github.com/TwiN/gatus/v5/test"
)
@@ -932,3 +933,352 @@ func TestEndpoint_needsToRetrieveIP(t *testing.T) {
        t.Error("expected true, got false")
    }
}

func TestEndpoint_preprocessWithContext(t *testing.T) {
    // Import the gontext package for creating test contexts
    // This test thoroughly exercises the replaceContextPlaceholders function
    tests := []struct {
        name                  string
        endpoint              *Endpoint
        context               map[string]interface{}
        expectedURL           string
        expectedBody          string
        expectedHeaders       map[string]string
        expectedErrorCount    int
        expectedErrorContains []string
    }{
        {
            name: "successful_url_replacement",
            endpoint: &Endpoint{
                URL:  "https://api.example.com/users/[CONTEXT].userId",
                Body: "",
            },
            context: map[string]interface{}{
                "userId": "12345",
            },
            expectedURL:        "https://api.example.com/users/12345",
            expectedBody:       "",
            expectedErrorCount: 0,
        },
        {
            name: "successful_body_replacement",
            endpoint: &Endpoint{
                URL:  "https://api.example.com",
                Body: `{"userId": "[CONTEXT].userId", "action": "update"}`,
            },
            context: map[string]interface{}{
                "userId": "67890",
            },
            expectedURL:        "https://api.example.com",
            expectedBody:       `{"userId": "67890", "action": "update"}`,
            expectedErrorCount: 0,
        },
        {
            name: "successful_header_replacement",
            endpoint: &Endpoint{
                URL:  "https://api.example.com",
                Body: "",
                Headers: map[string]string{
                    "Authorization": "Bearer [CONTEXT].token",
                    "X-User-ID":     "[CONTEXT].userId",
                },
            },
            context: map[string]interface{}{
                "token":  "abc123token",
                "userId": "user123",
            },
            expectedURL:  "https://api.example.com",
            expectedBody: "",
            expectedHeaders: map[string]string{
                "Authorization": "Bearer abc123token",
                "X-User-ID":     "user123",
            },
            expectedErrorCount: 0,
        },
        {
            name: "multiple_placeholders_in_url",
            endpoint: &Endpoint{
                URL:  "https://[CONTEXT].host/api/v[CONTEXT].version/users/[CONTEXT].userId",
                Body: "",
            },
            context: map[string]interface{}{
                "host":    "api.example.com",
                "version": "2",
                "userId":  "12345",
            },
            expectedURL:        "https://api.example.com/api/v2/users/12345",
            expectedBody:       "",
            expectedErrorCount: 0,
        },
        {
            name: "nested_context_path",
            endpoint: &Endpoint{
                URL:  "https://api.example.com/users/[CONTEXT].user.id",
                Body: `{"name": "[CONTEXT].user.name"}`,
            },
            context: map[string]interface{}{
                "user": map[string]interface{}{
                    "id":   "nested123",
                    "name": "John Doe",
                },
            },
            expectedURL:        "https://api.example.com/users/nested123",
            expectedBody:       `{"name": "John Doe"}`,
            expectedErrorCount: 0,
        },
        {
            name: "url_context_not_found",
            endpoint: &Endpoint{
                URL:  "https://api.example.com/users/[CONTEXT].missingUserId",
                Body: "",
            },
            context: map[string]interface{}{
                "userId": "12345", // different key
            },
            expectedURL:           "https://api.example.com/users/[CONTEXT].missingUserId",
            expectedBody:          "",
            expectedErrorCount:    1,
            expectedErrorContains: []string{"path 'missingUserId' not found"},
        },
        {
            name: "body_context_not_found",
            endpoint: &Endpoint{
                URL:  "https://api.example.com",
                Body: `{"userId": "[CONTEXT].missingUserId"}`,
            },
            context: map[string]interface{}{
                "userId": "12345", // different key
            },
            expectedURL:           "https://api.example.com",
            expectedBody:          `{"userId": "[CONTEXT].missingUserId"}`,
            expectedErrorCount:    1,
            expectedErrorContains: []string{"path 'missingUserId' not found"},
        },
        {
            name: "header_context_not_found",
            endpoint: &Endpoint{
                URL:  "https://api.example.com",
                Body: "",
                Headers: map[string]string{
                    "Authorization": "Bearer [CONTEXT].missingToken",
                },
            },
            context: map[string]interface{}{
                "token": "validtoken", // different key
            },
            expectedURL:  "https://api.example.com",
            expectedBody: "",
            expectedHeaders: map[string]string{
                "Authorization": "Bearer [CONTEXT].missingToken",
            },
            expectedErrorCount:    1,
            expectedErrorContains: []string{"path 'missingToken' not found"},
        },
        {
            name: "multiple_missing_context_paths",
            endpoint: &Endpoint{
                URL:  "https://[CONTEXT].missingHost/users/[CONTEXT].missingUserId",
                Body: `{"token": "[CONTEXT].missingToken"}`,
            },
            context: map[string]interface{}{
                "validKey": "validValue",
            },
            expectedURL:        "https://[CONTEXT].missingHost/users/[CONTEXT].missingUserId",
            expectedBody:       `{"token": "[CONTEXT].missingToken"}`,
            expectedErrorCount: 2, // 1 for URL (both placeholders), 1 for Body
            expectedErrorContains: []string{
                "path 'missingHost' not found",
                "path 'missingUserId' not found",
                "path 'missingToken' not found",
            },
        },
        {
            name: "mixed_valid_and_invalid_placeholders",
            endpoint: &Endpoint{
                URL:  "https://api.example.com/users/[CONTEXT].userId/posts/[CONTEXT].missingPostId",
                Body: `{"userId": "[CONTEXT].userId", "action": "[CONTEXT].missingAction"}`,
            },
            context: map[string]interface{}{
                "userId": "12345",
            },
            expectedURL:        "https://api.example.com/users/12345/posts/[CONTEXT].missingPostId",
            expectedBody:       `{"userId": "12345", "action": "[CONTEXT].missingAction"}`,
            expectedErrorCount: 2,
            expectedErrorContains: []string{
                "path 'missingPostId' not found",
                "path 'missingAction' not found",
            },
        },
        {
            name: "nil_context",
            endpoint: &Endpoint{
                URL:  "https://api.example.com/users/[CONTEXT].userId",
                Body: "",
            },
            context:            nil,
            expectedURL:        "https://api.example.com/users/[CONTEXT].userId",
            expectedBody:       "",
            expectedErrorCount: 0,
        },
        {
            name: "empty_context",
            endpoint: &Endpoint{
                URL:  "https://api.example.com/users/[CONTEXT].userId",
                Body: "",
            },
            context:               map[string]interface{}{},
            expectedURL:           "https://api.example.com/users/[CONTEXT].userId",
            expectedBody:          "",
            expectedErrorCount:    1,
            expectedErrorContains: []string{"path 'userId' not found"},
        },
        {
            name: "special_characters_in_context_values",
            endpoint: &Endpoint{
                URL:  "https://api.example.com/search?q=[CONTEXT].query",
                Body: "",
            },
            context: map[string]interface{}{
                "query": "hello world & special chars!",
            },
            expectedURL:        "https://api.example.com/search?q=hello world & special chars!",
            expectedBody:       "",
            expectedErrorCount: 0,
        },
        {
            name: "numeric_context_values",
            endpoint: &Endpoint{
                URL:  "https://api.example.com/users/[CONTEXT].userId/limit/[CONTEXT].limit",
                Body: "",
            },
            context: map[string]interface{}{
                "userId": 12345,
                "limit":  100,
            },
            expectedURL:        "https://api.example.com/users/12345/limit/100",
            expectedBody:       "",
            expectedErrorCount: 0,
        },
        {
            name: "boolean_context_values",
            endpoint: &Endpoint{
                URL:  "https://api.example.com",
                Body: `{"enabled": [CONTEXT].enabled, "active": [CONTEXT].active}`,
            },
            context: map[string]interface{}{
                "enabled": true,
                "active":  false,
            },
            expectedURL:        "https://api.example.com",
            expectedBody:       `{"enabled": true, "active": false}`,
            expectedErrorCount: 0,
        },
        {
            name: "no_context_placeholders",
            endpoint: &Endpoint{
                URL:  "https://api.example.com/health",
                Body: `{"status": "check"}`,
                Headers: map[string]string{
                    "Content-Type": "application/json",
                },
            },
            context: map[string]interface{}{
                "userId": "12345",
            },
            expectedURL:  "https://api.example.com/health",
            expectedBody: `{"status": "check"}`,
            expectedHeaders: map[string]string{
                "Content-Type": "application/json",
            },
            expectedErrorCount: 0,
        },
        {
            name: "deeply_nested_context_path",
            endpoint: &Endpoint{
                URL:  "https://api.example.com/users/[CONTEXT].response.data.user.id",
                Body: "",
            },
            context: map[string]interface{}{
                "response": map[string]interface{}{
                    "data": map[string]interface{}{
                        "user": map[string]interface{}{
                            "id": "deep123",
                        },
                    },
                },
            },
            expectedURL:        "https://api.example.com/users/deep123",
            expectedBody:       "",
            expectedErrorCount: 0,
        },
        {
            name: "invalid_nested_context_path",
            endpoint: &Endpoint{
                URL:  "https://api.example.com/users/[CONTEXT].response.missing.path",
                Body: "",
            },
            context: map[string]interface{}{
                "response": map[string]interface{}{
                    "data": "value",
                },
            },
            expectedURL:           "https://api.example.com/users/[CONTEXT].response.missing.path",
            expectedBody:          "",
            expectedErrorCount:    1,
            expectedErrorContains: []string{"path 'response.missing.path' not found"},
        },
    }

    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            // Import gontext package for creating context
            var ctx *gontext.Gontext
            if tt.context != nil {
                ctx = gontext.New(tt.context)
            }
            // Create a new Result to capture errors
            result := &Result{}
            // Call preprocessWithContext
            processed := tt.endpoint.preprocessWithContext(result, ctx)
            // Verify URL
            if processed.URL != tt.expectedURL {
                t.Errorf("URL mismatch:\nexpected: %s\nactual: %s", tt.expectedURL, processed.URL)
            }
            // Verify Body
            if processed.Body != tt.expectedBody {
                t.Errorf("Body mismatch:\nexpected: %s\nactual: %s", tt.expectedBody, processed.Body)
            }
            // Verify Headers
            if tt.expectedHeaders != nil {
                if processed.Headers == nil {
                    t.Error("Expected headers but got nil")
                } else {
                    for key, expectedValue := range tt.expectedHeaders {
                        if actualValue, exists := processed.Headers[key]; !exists {
                            t.Errorf("Expected header %s not found", key)
                        } else if actualValue != expectedValue {
                            t.Errorf("Header %s mismatch:\nexpected: %s\nactual: %s", key, expectedValue, actualValue)
                        }
                    }
                }
            }
            // Verify error count
            if len(result.Errors) != tt.expectedErrorCount {
                t.Errorf("Error count mismatch:\nexpected: %d\nactual: %d\nerrors: %v", tt.expectedErrorCount, len(result.Errors), result.Errors)
            }
            // Verify error messages contain expected strings
            if tt.expectedErrorContains != nil {
                actualErrors := strings.Join(result.Errors, " ")
                for _, expectedError := range tt.expectedErrorContains {
                    if !strings.Contains(actualErrors, expectedError) {
                        t.Errorf("Expected error containing '%s' not found in: %v", expectedError, result.Errors)
                    }
                }
            }
            // Verify original endpoint is not modified
            if tt.endpoint.URL != ((&Endpoint{URL: tt.endpoint.URL, Body: tt.endpoint.Body, Headers: tt.endpoint.Headers}).URL) {
                t.Error("Original endpoint was modified")
            }
        })
    }
}

@@ -6,6 +6,7 @@ import (

    "github.com/TwiN/gatus/v5/alerting/alert"
    "github.com/TwiN/gatus/v5/config/endpoint/heartbeat"
    "github.com/TwiN/gatus/v5/config/key"
    "github.com/TwiN/gatus/v5/config/maintenance"
)

@@ -82,7 +83,7 @@ func (externalEndpoint *ExternalEndpoint) DisplayName() string {

// Key returns the unique key for the Endpoint
func (externalEndpoint *ExternalEndpoint) Key() string {
    return ConvertGroupAndEndpointNameToKey(externalEndpoint.Group, externalEndpoint.Name)
    return key.ConvertGroupAndNameToKey(externalEndpoint.Group, externalEndpoint.Name)
}

// ToEndpoint converts the ExternalEndpoint to an Endpoint

@@ -2,24 +2,379 @@ package endpoint

import (
	"testing"
	"time"

	"github.com/TwiN/gatus/v5/alerting/alert"
	"github.com/TwiN/gatus/v5/config/endpoint/heartbeat"
	"github.com/TwiN/gatus/v5/config/maintenance"
)

func TestExternalEndpoint_ToEndpoint(t *testing.T) {
	externalEndpoint := &ExternalEndpoint{
		Name:  "name",
		Group: "group",
func TestExternalEndpoint_ValidateAndSetDefaults(t *testing.T) {
	tests := []struct {
		name     string
		endpoint *ExternalEndpoint
		wantErr  error
	}{
		{
			name: "valid-external-endpoint",
			endpoint: &ExternalEndpoint{
				Name:  "test-endpoint",
				Group: "test-group",
				Token: "valid-token",
			},
			wantErr: nil,
		},
		{
			name: "valid-external-endpoint-with-heartbeat",
			endpoint: &ExternalEndpoint{
				Name:  "test-endpoint",
				Token: "valid-token",
				Heartbeat: heartbeat.Config{
					Interval: 30 * time.Second,
				},
			},
			wantErr: nil,
		},
		{
			name: "missing-token",
			endpoint: &ExternalEndpoint{
				Name:  "test-endpoint",
				Group: "test-group",
			},
			wantErr: ErrExternalEndpointWithNoToken,
		},
		{
			name: "empty-token",
			endpoint: &ExternalEndpoint{
				Name:  "test-endpoint",
				Token: "",
			},
			wantErr: ErrExternalEndpointWithNoToken,
		},
		{
			name: "heartbeat-interval-too-low",
			endpoint: &ExternalEndpoint{
				Name:  "test-endpoint",
				Token: "valid-token",
				Heartbeat: heartbeat.Config{
					Interval: 5 * time.Second, // Less than 10 seconds
				},
			},
			wantErr: ErrExternalEndpointHeartbeatIntervalTooLow,
		},
		{
			name: "heartbeat-interval-exactly-10-seconds",
			endpoint: &ExternalEndpoint{
				Name:  "test-endpoint",
				Token: "valid-token",
				Heartbeat: heartbeat.Config{
					Interval: 10 * time.Second,
				},
			},
			wantErr: nil,
		},
		{
			name: "heartbeat-interval-zero-is-allowed",
			endpoint: &ExternalEndpoint{
				Name:  "test-endpoint",
				Token: "valid-token",
				Heartbeat: heartbeat.Config{
					Interval: 0, // Zero means no heartbeat monitoring
				},
			},
			wantErr: nil,
		},
		{
			name: "missing-name",
			endpoint: &ExternalEndpoint{
				Group: "test-group",
				Token: "valid-token",
			},
			wantErr: ErrEndpointWithNoName,
		},
	}
	convertedEndpoint := externalEndpoint.ToEndpoint()
	if externalEndpoint.Name != convertedEndpoint.Name {
		t.Errorf("expected %s, got %s", externalEndpoint.Name, convertedEndpoint.Name)
	}
	if externalEndpoint.Group != convertedEndpoint.Group {
		t.Errorf("expected %s, got %s", externalEndpoint.Group, convertedEndpoint.Group)
	}
	if externalEndpoint.Key() != convertedEndpoint.Key() {
		t.Errorf("expected %s, got %s", externalEndpoint.Key(), convertedEndpoint.Key())
	}
	if externalEndpoint.DisplayName() != convertedEndpoint.DisplayName() {
		t.Errorf("expected %s, got %s", externalEndpoint.DisplayName(), convertedEndpoint.DisplayName())

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			err := tt.endpoint.ValidateAndSetDefaults()
			if tt.wantErr != nil {
				if err == nil {
					t.Errorf("Expected error %v, but got none", tt.wantErr)
					return
				}
				if err.Error() != tt.wantErr.Error() {
					t.Errorf("Expected error %v, got %v", tt.wantErr, err)
				}
			} else {
				if err != nil {
					t.Errorf("Expected no error, but got %v", err)
				}
			}
		})
	}
}

func TestExternalEndpoint_IsEnabled(t *testing.T) {
	tests := []struct {
		name     string
		enabled  *bool
		expected bool
	}{
		{
			name:     "nil-enabled-defaults-to-true",
			enabled:  nil,
			expected: true,
		},
		{
			name:     "explicitly-enabled",
			enabled:  boolPtr(true),
			expected: true,
		},
		{
			name:     "explicitly-disabled",
			enabled:  boolPtr(false),
			expected: false,
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			endpoint := &ExternalEndpoint{
				Name:    "test-endpoint",
				Token:   "test-token",
				Enabled: tt.enabled,
			}
			result := endpoint.IsEnabled()
			if result != tt.expected {
				t.Errorf("Expected %v, got %v", tt.expected, result)
			}
		})
	}
}

func TestExternalEndpoint_DisplayName(t *testing.T) {
	tests := []struct {
		name     string
		endpoint *ExternalEndpoint
		expected string
	}{
		{
			name: "with-group",
			endpoint: &ExternalEndpoint{
				Name:  "test-endpoint",
				Group: "test-group",
			},
			expected: "test-group/test-endpoint",
		},
		{
			name: "without-group",
			endpoint: &ExternalEndpoint{
				Name:  "test-endpoint",
				Group: "",
			},
			expected: "test-endpoint",
		},
		{
			name: "empty-group-string",
			endpoint: &ExternalEndpoint{
				Name:  "api-health",
				Group: "",
			},
			expected: "api-health",
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			result := tt.endpoint.DisplayName()
			if result != tt.expected {
				t.Errorf("Expected %q, got %q", tt.expected, result)
			}
		})
	}
}

func TestExternalEndpoint_Key(t *testing.T) {
	tests := []struct {
		name     string
		endpoint *ExternalEndpoint
		expected string
	}{
		{
			name: "with-group",
			endpoint: &ExternalEndpoint{
				Name:  "test-endpoint",
				Group: "test-group",
			},
			expected: "test-group_test-endpoint",
		},
		{
			name: "without-group",
			endpoint: &ExternalEndpoint{
				Name:  "test-endpoint",
				Group: "",
			},
			expected: "_test-endpoint",
		},
		{
			name: "special-characters-in-name",
			endpoint: &ExternalEndpoint{
				Name:  "test endpoint with spaces",
				Group: "test-group",
			},
			expected: "test-group_test-endpoint-with-spaces",
		},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			result := tt.endpoint.Key()
			if result != tt.expected {
				t.Errorf("Expected %q, got %q", tt.expected, result)
			}
		})
	}
}

func TestExternalEndpoint_ToEndpoint(t *testing.T) {
	tests := []struct {
		name             string
		externalEndpoint *ExternalEndpoint
	}{
		{
			name: "complete-external-endpoint",
			externalEndpoint: &ExternalEndpoint{
				Enabled: boolPtr(true),
				Name:    "test-endpoint",
				Group:   "test-group",
				Token:   "test-token",
				Alerts: []*alert.Alert{
					{
						Type: alert.TypeSlack,
					},
				},
				MaintenanceWindows: []*maintenance.Config{
					{
						Start:    "02:00",
						Duration: time.Hour,
					},
				},
				NumberOfFailuresInARow:  3,
				NumberOfSuccessesInARow: 5,
			},
		},
		{
			name: "minimal-external-endpoint",
			externalEndpoint: &ExternalEndpoint{
				Name:  "minimal-endpoint",
				Token: "minimal-token",
			},
		},
		{
			name: "disabled-external-endpoint",
			externalEndpoint: &ExternalEndpoint{
				Enabled: boolPtr(false),
				Name:    "disabled-endpoint",
				Token:   "disabled-token",
			},
		},
		{
			name: "original-test-case",
			externalEndpoint: &ExternalEndpoint{
				Name:  "name",
				Group: "group",
			},
		},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			result := tt.externalEndpoint.ToEndpoint()
			// Verify all fields are correctly copied
			if result.Enabled != tt.externalEndpoint.Enabled {
				t.Errorf("Expected Enabled=%v, got %v", tt.externalEndpoint.Enabled, result.Enabled)
			}
			if result.Name != tt.externalEndpoint.Name {
				t.Errorf("Expected Name=%q, got %q", tt.externalEndpoint.Name, result.Name)
			}
			if result.Group != tt.externalEndpoint.Group {
				t.Errorf("Expected Group=%q, got %q", tt.externalEndpoint.Group, result.Group)
			}
			if len(result.Alerts) != len(tt.externalEndpoint.Alerts) {
				t.Errorf("Expected %d alerts, got %d", len(tt.externalEndpoint.Alerts), len(result.Alerts))
			}
			if result.NumberOfFailuresInARow != tt.externalEndpoint.NumberOfFailuresInARow {
				t.Errorf("Expected NumberOfFailuresInARow=%d, got %d", tt.externalEndpoint.NumberOfFailuresInARow, result.NumberOfFailuresInARow)
			}
			if result.NumberOfSuccessesInARow != tt.externalEndpoint.NumberOfSuccessesInARow {
				t.Errorf("Expected NumberOfSuccessesInARow=%d, got %d", tt.externalEndpoint.NumberOfSuccessesInARow, result.NumberOfSuccessesInARow)
			}
			// Original test assertions
			if tt.externalEndpoint.Key() != result.Key() {
				t.Errorf("expected %s, got %s", tt.externalEndpoint.Key(), result.Key())
			}
			if tt.externalEndpoint.DisplayName() != result.DisplayName() {
				t.Errorf("expected %s, got %s", tt.externalEndpoint.DisplayName(), result.DisplayName())
			}
			// Verify it's a proper Endpoint type
			if result == nil {
				t.Error("ToEndpoint() returned nil")
			}
		})
	}
}

func TestExternalEndpoint_ValidationEdgeCases(t *testing.T) {
	tests := []struct {
		name     string
		endpoint *ExternalEndpoint
		wantErr  bool
	}{
		{
			name: "very-long-name",
			endpoint: &ExternalEndpoint{
				Name:  "this-is-a-very-long-endpoint-name-that-might-cause-issues-in-some-systems-but-should-be-handled-gracefully",
				Token: "valid-token",
			},
			wantErr: false,
		},
		{
			name: "special-characters-in-name",
			endpoint: &ExternalEndpoint{
				Name:  "test-endpoint@#$%^&*()",
				Token: "valid-token",
			},
			wantErr: false,
		},
		{
			name: "unicode-characters-in-name",
			endpoint: &ExternalEndpoint{
				Name:  "测试端点",
				Token: "valid-token",
			},
			wantErr: false,
		},
		{
			name: "very-long-token",
			endpoint: &ExternalEndpoint{
				Name:  "test-endpoint",
				Token: "very-long-token-that-should-still-be-valid-even-though-it-is-extremely-long-and-might-not-be-practical-in-real-world-scenarios",
			},
			wantErr: false,
		},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			err := tt.endpoint.ValidateAndSetDefaults()
			if tt.wantErr && err == nil {
				t.Error("Expected error but got none")
			}
			if !tt.wantErr && err != nil {
				t.Errorf("Expected no error but got: %v", err)
			}
		})
	}
}

// Helper function to create bool pointers
func boolPtr(b bool) *bool {
	return &b
}

@@ -1,11 +0,0 @@
package endpoint

import (
	"testing"
)

func BenchmarkConvertGroupAndEndpointNameToKey(b *testing.B) {
	for n := 0; n < b.N; n++ {
		ConvertGroupAndEndpointNameToKey("group", "name")
	}
}

273 config/endpoint/placeholder.go Normal file
@@ -0,0 +1,273 @@
package endpoint

import (
	"fmt"
	"strconv"
	"strings"

	"github.com/TwiN/gatus/v5/config/gontext"
	"github.com/TwiN/gatus/v5/jsonpath"
)

// Placeholders
const (
	// StatusPlaceholder is a placeholder for a HTTP status.
	//
	// Values that could replace the placeholder: 200, 404, 500, ...
	StatusPlaceholder = "[STATUS]"

	// IPPlaceholder is a placeholder for an IP.
	//
	// Values that could replace the placeholder: 127.0.0.1, 10.0.0.1, ...
	IPPlaceholder = "[IP]"

	// DNSRCodePlaceholder is a placeholder for DNS_RCODE
	//
	// Values that could replace the placeholder: NOERROR, FORMERR, SERVFAIL, NXDOMAIN, NOTIMP, REFUSED
	DNSRCodePlaceholder = "[DNS_RCODE]"

	// ResponseTimePlaceholder is a placeholder for the request response time, in milliseconds.
	//
	// Values that could replace the placeholder: 1, 500, 1000, ...
	ResponseTimePlaceholder = "[RESPONSE_TIME]"

	// BodyPlaceholder is a placeholder for the Body of the response
	//
	// Values that could replace the placeholder: {}, {"data":{"name":"john"}}, ...
	BodyPlaceholder = "[BODY]"

	// ConnectedPlaceholder is a placeholder for whether a connection was successfully established.
	//
	// Values that could replace the placeholder: true, false
	ConnectedPlaceholder = "[CONNECTED]"

	// CertificateExpirationPlaceholder is a placeholder for the duration before certificate expiration, in milliseconds.
	//
	// Values that could replace the placeholder: 4461677039 (~52 days)
	CertificateExpirationPlaceholder = "[CERTIFICATE_EXPIRATION]"

	// DomainExpirationPlaceholder is a placeholder for the duration before the domain expires, in milliseconds.
	DomainExpirationPlaceholder = "[DOMAIN_EXPIRATION]"

	// ContextPlaceholder is a placeholder for suite context values
	// Usage: [CONTEXT].path.to.value
	ContextPlaceholder = "[CONTEXT]"
)

// Functions
const (
	// LengthFunctionPrefix is the prefix for the length function
	//
	// Usage: len([BODY].articles) == 10, len([BODY].name) > 5
	LengthFunctionPrefix = "len("

	// HasFunctionPrefix is the prefix for the has function
	//
	// Usage: has([BODY].errors) == true
	HasFunctionPrefix = "has("

	// PatternFunctionPrefix is the prefix for the pattern function
	//
	// Usage: [IP] == pat(192.168.*.*)
	PatternFunctionPrefix = "pat("

	// AnyFunctionPrefix is the prefix for the any function
	//
	// Usage: [IP] == any(1.1.1.1, 1.0.0.1)
	AnyFunctionPrefix = "any("

	// FunctionSuffix is the suffix for all functions
	FunctionSuffix = ")"
)

// Other constants
const (
	// InvalidConditionElementSuffix is the suffix that will be appended to an invalid condition
	InvalidConditionElementSuffix = "(INVALID)"
)

// functionType represents the type of function wrapper
type functionType int

const (
	// Note that not all functions are handled here. Only len() and has() directly impact the handler
	// e.g. "len([BODY].name) > 0" vs pat() or any(), which would be used like "[BODY].name == pat(john*)"

	noFunction functionType = iota
	functionLen
	functionHas
)

// ResolvePlaceholder resolves all types of placeholders to their string values.
//
// Supported placeholders:
// - [STATUS]: HTTP status code (e.g., "200", "404")
// - [IP]: IP address from the response (e.g., "127.0.0.1")
// - [RESPONSE_TIME]: Response time in milliseconds (e.g., "250")
// - [DNS_RCODE]: DNS response code (e.g., "NOERROR", "NXDOMAIN")
// - [CONNECTED]: Connection status (e.g., "true", "false")
// - [CERTIFICATE_EXPIRATION]: Certificate expiration time in milliseconds
// - [DOMAIN_EXPIRATION]: Domain expiration time in milliseconds
// - [BODY]: Full response body
// - [BODY].path: JSONPath expression on response body (e.g., [BODY].status, [BODY].data[0].name)
// - [CONTEXT].path: Suite context values (e.g., [CONTEXT].user_id, [CONTEXT].session_token)
//
// Function wrappers:
// - len(placeholder): Returns the length of the resolved value
// - has(placeholder): Returns "true" if the placeholder exists and is non-empty, "false" otherwise
//
// Examples:
// - ResolvePlaceholder("[STATUS]", result, nil) → "200"
// - ResolvePlaceholder("len([BODY].items)", result, nil) → "5" (for JSON array with 5 items)
// - ResolvePlaceholder("has([CONTEXT].user_id)", result, ctx) → "true" (if context has user_id)
// - ResolvePlaceholder("[BODY].user.name", result, nil) → "john" (for {"user":{"name":"john"}})
//
// Case-insensitive: All placeholder names are handled case-insensitively, but paths preserve original case.
func ResolvePlaceholder(placeholder string, result *Result, ctx *gontext.Gontext) (string, error) {
	placeholder = strings.TrimSpace(placeholder)
	originalPlaceholder := placeholder

	// Extract function wrapper if present
	fn, innerPlaceholder := extractFunctionWrapper(placeholder)
	placeholder = innerPlaceholder

	// Handle CONTEXT placeholders
	uppercasePlaceholder := strings.ToUpper(placeholder)
	if strings.HasPrefix(uppercasePlaceholder, ContextPlaceholder) && ctx != nil {
		return resolveContextPlaceholder(placeholder, fn, originalPlaceholder, ctx)
	}

	// Handle basic placeholders (try uppercase first for backward compatibility)
	switch uppercasePlaceholder {
	case StatusPlaceholder:
		return formatWithFunction(strconv.Itoa(result.HTTPStatus), fn), nil
	case IPPlaceholder:
		return formatWithFunction(result.IP, fn), nil
	case ResponseTimePlaceholder:
		return formatWithFunction(strconv.FormatInt(result.Duration.Milliseconds(), 10), fn), nil
	case DNSRCodePlaceholder:
		return formatWithFunction(result.DNSRCode, fn), nil
	case ConnectedPlaceholder:
		return formatWithFunction(strconv.FormatBool(result.Connected), fn), nil
	case CertificateExpirationPlaceholder:
		return formatWithFunction(strconv.FormatInt(result.CertificateExpiration.Milliseconds(), 10), fn), nil
	case DomainExpirationPlaceholder:
		return formatWithFunction(strconv.FormatInt(result.DomainExpiration.Milliseconds(), 10), fn), nil
	case BodyPlaceholder:
		body := strings.TrimSpace(string(result.Body))
		if fn == functionHas {
			return strconv.FormatBool(len(body) > 0), nil
		}
		if fn == functionLen {
			// For len([BODY]), we need to check if it's JSON and get the actual length
			// Use jsonpath to evaluate the root element
			_, resolvedLength, err := jsonpath.Eval("", result.Body)
			if err == nil {
				return strconv.Itoa(resolvedLength), nil
			}
			// Fall back to string length if not valid JSON
			return strconv.Itoa(len(body)), nil
		}
		return body, nil
	}

	// Handle JSONPath expressions on BODY (including array indexing)
	if strings.HasPrefix(uppercasePlaceholder, BodyPlaceholder+".") || strings.HasPrefix(uppercasePlaceholder, BodyPlaceholder+"[") {
		return resolveJSONPathPlaceholder(placeholder, fn, originalPlaceholder, result)
	}

	// Not a recognized placeholder
	if fn != noFunction {
		if fn == functionHas {
			return "false", nil
		}
		// For len() with unrecognized placeholder, return with INVALID suffix
		return originalPlaceholder + " " + InvalidConditionElementSuffix, nil
	}

	// Return the original placeholder if we can't resolve it
	// This allows for literal string comparisons
	return originalPlaceholder, nil
}

// extractFunctionWrapper detects and extracts function wrappers (len, has)
func extractFunctionWrapper(placeholder string) (functionType, string) {
	if strings.HasPrefix(placeholder, LengthFunctionPrefix) && strings.HasSuffix(placeholder, FunctionSuffix) {
		inner := strings.TrimSuffix(strings.TrimPrefix(placeholder, LengthFunctionPrefix), FunctionSuffix)
		return functionLen, inner
	}
	if strings.HasPrefix(placeholder, HasFunctionPrefix) && strings.HasSuffix(placeholder, FunctionSuffix) {
		inner := strings.TrimSuffix(strings.TrimPrefix(placeholder, HasFunctionPrefix), FunctionSuffix)
		return functionHas, inner
	}
	return noFunction, placeholder
}

// resolveJSONPathPlaceholder handles [BODY].path and [BODY][index] placeholders
func resolveJSONPathPlaceholder(placeholder string, fn functionType, originalPlaceholder string, result *Result) (string, error) {
	// Extract the path after [BODY] (case insensitive)
	uppercasePlaceholder := strings.ToUpper(placeholder)
	path := ""
	if strings.HasPrefix(uppercasePlaceholder, BodyPlaceholder) {
		path = placeholder[len(BodyPlaceholder):]
	} else {
		path = strings.TrimPrefix(placeholder, BodyPlaceholder)
	}
	// Remove leading dot if present
	path = strings.TrimPrefix(path, ".")
	resolvedValue, resolvedLength, err := jsonpath.Eval(path, result.Body)
	if fn == functionHas {
		return strconv.FormatBool(err == nil), nil
	}
	if err != nil {
		return originalPlaceholder + " " + InvalidConditionElementSuffix, nil
	}
	if fn == functionLen {
		return strconv.Itoa(resolvedLength), nil
	}
	return resolvedValue, nil
}

// resolveContextPlaceholder handles [CONTEXT] placeholder resolution
func resolveContextPlaceholder(placeholder string, fn functionType, originalPlaceholder string, ctx *gontext.Gontext) (string, error) {
	contextPath := strings.TrimPrefix(placeholder, ContextPlaceholder)
	contextPath = strings.TrimPrefix(contextPath, ".")
	if contextPath == "" {
		if fn == functionHas {
			return "false", nil
		}
		return originalPlaceholder + " " + InvalidConditionElementSuffix, nil
	}
	value, err := ctx.Get(contextPath)
	if fn == functionHas {
		return strconv.FormatBool(err == nil), nil
	}
	if err != nil {
		return originalPlaceholder + " " + InvalidConditionElementSuffix, nil
	}
	if fn == functionLen {
		switch v := value.(type) {
		case string:
			return strconv.Itoa(len(v)), nil
		case []interface{}:
			return strconv.Itoa(len(v)), nil
		case map[string]interface{}:
			return strconv.Itoa(len(v)), nil
		default:
			return strconv.Itoa(len(fmt.Sprintf("%v", v))), nil
		}
	}
	return fmt.Sprintf("%v", value), nil
}

// formatWithFunction applies len/has functions to any value
func formatWithFunction(value string, fn functionType) string {
	switch fn {
	case functionHas:
		return strconv.FormatBool(value != "")
	case functionLen:
		return strconv.Itoa(len(value))
	default:
		return value
	}
}
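The wrapper-extraction step above is the core of how conditions like `len([BODY].items) == 3` are split into a function and an inner placeholder. The snippet below is a standalone sketch of that logic, independent of the gatus codebase; the name `extractWrapper` and its string-based return values are illustrative, not part of the real API (which uses the unexported `functionType` enum shown above).

```go
package main

import (
	"fmt"
	"strings"
)

// extractWrapper is a hypothetical, simplified version of extractFunctionWrapper:
// it peels a single len(...) or has(...) wrapper off a placeholder string and
// returns the function name plus the inner placeholder.
func extractWrapper(placeholder string) (fn, inner string) {
	for _, prefix := range []string{"len(", "has("} {
		if strings.HasPrefix(placeholder, prefix) && strings.HasSuffix(placeholder, ")") {
			return strings.TrimSuffix(prefix, "("),
				strings.TrimSuffix(strings.TrimPrefix(placeholder, prefix), ")")
		}
	}
	// No wrapper: the placeholder is returned unchanged.
	return "", placeholder
}

func main() {
	fn, inner := extractWrapper("len([BODY].items)")
	fmt.Println(fn, inner) // len [BODY].items
	fn, inner = extractWrapper("[STATUS]")
	fmt.Println(fn, inner) // [STATUS]
}
```

Note that, as in the real implementation, an unwrapped placeholder passes through untouched, which is what allows literal string comparisons to keep working.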
125 config/endpoint/placeholder_test.go Normal file
@@ -0,0 +1,125 @@
package endpoint

import (
	"testing"
	"time"

	"github.com/TwiN/gatus/v5/config/gontext"
)

func TestResolvePlaceholder(t *testing.T) {
	result := &Result{
		HTTPStatus:            200,
		IP:                    "127.0.0.1",
		Duration:              250 * time.Millisecond,
		DNSRCode:              "NOERROR",
		Connected:             true,
		CertificateExpiration: 30 * 24 * time.Hour,
		DomainExpiration:      365 * 24 * time.Hour,
		Body:                  []byte(`{"status":"success","items":[1,2,3],"user":{"name":"john","id":123}}`),
	}

	ctx := gontext.New(map[string]interface{}{
		"user_id":       "abc123",
		"session_token": "xyz789",
		"array_data":    []interface{}{"a", "b", "c"},
		"nested": map[string]interface{}{
			"value": "test",
		},
	})

	tests := []struct {
		name        string
		placeholder string
		expected    string
	}{
		// Basic placeholders
		{"status", "[STATUS]", "200"},
		{"ip", "[IP]", "127.0.0.1"},
		{"response-time", "[RESPONSE_TIME]", "250"},
		{"dns-rcode", "[DNS_RCODE]", "NOERROR"},
		{"connected", "[CONNECTED]", "true"},
		{"certificate-expiration", "[CERTIFICATE_EXPIRATION]", "2592000000"},
		{"domain-expiration", "[DOMAIN_EXPIRATION]", "31536000000"},
		{"body", "[BODY]", `{"status":"success","items":[1,2,3],"user":{"name":"john","id":123}}`},

		// Case insensitive placeholders
		{"status-lowercase", "[status]", "200"},
		{"ip-mixed-case", "[Ip]", "127.0.0.1"},

		// Function wrappers on basic placeholders
		{"len-status", "len([STATUS])", "3"},
		{"len-ip", "len([IP])", "9"},
		{"has-status", "has([STATUS])", "true"},
		{"has-empty", "has()", "false"},

		// JSONPath expressions
		{"body-status", "[BODY].status", "success"},
		{"body-user-name", "[BODY].user.name", "john"},
		{"body-user-id", "[BODY].user.id", "123"},
		{"len-body-items", "len([BODY].items)", "3"},
		{"body-array-index", "[BODY].items[0]", "1"},
		{"has-body-status", "has([BODY].status)", "true"},
		{"has-body-missing", "has([BODY].missing)", "false"},

		// Context placeholders
		{"context-user-id", "[CONTEXT].user_id", "abc123"},
		{"context-session-token", "[CONTEXT].session_token", "xyz789"},
		{"context-nested", "[CONTEXT].nested.value", "test"},
		{"len-context-array", "len([CONTEXT].array_data)", "3"},
		{"has-context-user-id", "has([CONTEXT].user_id)", "true"},
		{"has-context-missing", "has([CONTEXT].missing)", "false"},

		// Invalid placeholders
		{"unknown-placeholder", "[UNKNOWN]", "[UNKNOWN]"},
		{"len-unknown", "len([UNKNOWN])", "len([UNKNOWN]) (INVALID)"},
		{"has-unknown", "has([UNKNOWN])", "false"},
		{"invalid-jsonpath", "[BODY].invalid.path", "[BODY].invalid.path (INVALID)"},

		// Literal strings
		{"literal-string", "literal", "literal"},
		{"number-string", "123", "123"},
	}

	for _, test := range tests {
		t.Run(test.name, func(t *testing.T) {
			actual, err := ResolvePlaceholder(test.placeholder, result, ctx)
			if err != nil {
				t.Errorf("unexpected error: %v", err)
			}
			if actual != test.expected {
				t.Errorf("expected '%s', got '%s'", test.expected, actual)
			}
		})
	}
}

func TestResolvePlaceholderWithoutContext(t *testing.T) {
	result := &Result{
		HTTPStatus: 404,
		Body:       []byte(`{"error":"not found"}`),
	}

	tests := []struct {
		name        string
		placeholder string
		expected    string
	}{
		{"status-without-context", "[STATUS]", "404"},
		{"body-without-context", "[BODY].error", "not found"},
		{"context-without-context", "[CONTEXT].user_id", "[CONTEXT].user_id"},
		{"has-context-without-context", "has([CONTEXT].user_id)", "false"},
	}

	for _, test := range tests {
		t.Run(test.name, func(t *testing.T) {
			actual, err := ResolvePlaceholder(test.placeholder, result, nil)
			if err != nil {
				t.Errorf("unexpected error: %v", err)
			}
			if actual != test.expected {
				t.Errorf("expected '%s', got '%s'", test.expected, actual)
			}
		})
	}
}
@@ -4,7 +4,7 @@ import (
	"time"
)

// Result of the evaluation of a Endpoint
// Result of the evaluation of an Endpoint
type Result struct {
	// HTTPStatus is the HTTP response status code
	HTTPStatus int `json:"status,omitempty"`
@@ -54,6 +54,13 @@ type Result struct {
	// Below is used only for the UI and is not persisted in the storage //
	///////////////////////////////////////////////////////////////////////
	port string `yaml:"-"` // used for endpoints[].ui.hide-port

	///////////////////////////////////
	// BELOW IS ONLY USED FOR SUITES //
	///////////////////////////////////
	// Name of the endpoint (ONLY USED FOR SUITES)
	// Group is not needed because it's inherited from the suite
	Name string `json:"name,omitempty"`
}

// AddError adds an error to the result's list of errors.

@@ -1,6 +1,9 @@
package endpoint

import "github.com/TwiN/gatus/v5/config/key"

// Status contains the evaluation Results of an Endpoint
// This is essentially a DTO
type Status struct {
	// Name of the endpoint
	Name string `json:"name,omitempty"`
@@ -30,7 +33,7 @@ func NewStatus(group, name string) *Status {
	return &Status{
		Name:    name,
		Group:   group,
		Key:     ConvertGroupAndEndpointNameToKey(group, name),
		Key:     key.ConvertGroupAndNameToKey(group, name),
		Results: make([]*Result, 0),
		Events:  make([]*Event, 0),
		Uptime:  NewUptime(),

121 config/gontext/gontext.go Normal file
@@ -0,0 +1,121 @@
|
||||
package gontext
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"fmt"
|
||||
"strings"
|
||||
"sync"
|
||||
)
|
||||
|
||||
var (
|
||||
// ErrGontextPathNotFound is returned when a gontext path doesn't exist
|
||||
ErrGontextPathNotFound = errors.New("gontext path not found")
|
||||
)
|
||||
|
||||
// Gontext holds values that can be shared between endpoints in a suite
|
||||
type Gontext struct {
|
||||
mu sync.RWMutex
|
||||
values map[string]interface{}
|
||||
}
|
||||
|
||||
// New creates a new gontext with initial values
|
||||
func New(initial map[string]interface{}) *Gontext {
|
||||
if initial == nil {
|
||||
initial = make(map[string]interface{})
|
||||
}
|
||||
// Create a deep copy to avoid external modifications
|
||||
	values := make(map[string]interface{})
	for k, v := range initial {
		values[k] = deepCopyValue(v)
	}
	return &Gontext{
		values: values,
	}
}

// Get retrieves a value from the gontext using dot notation
func (g *Gontext) Get(path string) (interface{}, error) {
	g.mu.RLock()
	defer g.mu.RUnlock()
	parts := strings.Split(path, ".")
	current := interface{}(g.values)
	for _, part := range parts {
		switch v := current.(type) {
		case map[string]interface{}:
			val, exists := v[part]
			if !exists {
				return nil, fmt.Errorf("%w: %s", ErrGontextPathNotFound, path)
			}
			current = val
		default:
			return nil, fmt.Errorf("%w: %s", ErrGontextPathNotFound, path)
		}
	}
	return current, nil
}

// Set stores a value in the gontext using dot notation
func (g *Gontext) Set(path string, value interface{}) error {
	g.mu.Lock()
	defer g.mu.Unlock()
	parts := strings.Split(path, ".")
	if len(parts) == 0 {
		return errors.New("empty path")
	}
	// Navigate to the parent of the target
	current := g.values
	for i := 0; i < len(parts)-1; i++ {
		part := parts[i]
		if next, exists := current[part]; exists {
			if nextMap, ok := next.(map[string]interface{}); ok {
				current = nextMap
			} else {
				// Path exists but is not a map, create a new map
				newMap := make(map[string]interface{})
				current[part] = newMap
				current = newMap
			}
		} else {
			// Create intermediate maps
			newMap := make(map[string]interface{})
			current[part] = newMap
			current = newMap
		}
	}
	// Set the final value
	current[parts[len(parts)-1]] = value
	return nil
}

// GetAll returns a copy of all gontext values
func (g *Gontext) GetAll() map[string]interface{} {
	g.mu.RLock()
	defer g.mu.RUnlock()

	result := make(map[string]interface{})
	for k, v := range g.values {
		result[k] = deepCopyValue(v)
	}
	return result
}

// deepCopyValue creates a deep copy of a value
func deepCopyValue(v interface{}) interface{} {
	switch val := v.(type) {
	case map[string]interface{}:
		newMap := make(map[string]interface{})
		for k, v := range val {
			newMap[k] = deepCopyValue(v)
		}
		return newMap
	case []interface{}:
		newSlice := make([]interface{}, len(val))
		for i, v := range val {
			newSlice[i] = deepCopyValue(v)
		}
		return newSlice
	default:
		// For primitive types, return as-is (they're passed by value anyway)
		return val
	}
}
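For reference, the dot-notation traversal that `Get` implements can be sketched as a standalone program. The `getPath` helper below is hypothetical (it is not part of the package and omits the mutex and typed error), but it follows the same descent through nested `map[string]interface{}` values:

```go
package main

import (
	"fmt"
	"strings"
)

// getPath splits the path on "." and walks down nested maps, failing as soon
// as a segment is missing or the current value is not a map.
func getPath(values map[string]interface{}, path string) (interface{}, bool) {
	var current interface{} = values
	for _, part := range strings.Split(path, ".") {
		m, ok := current.(map[string]interface{})
		if !ok {
			return nil, false
		}
		current, ok = m[part]
		if !ok {
			return nil, false
		}
	}
	return current, true
}

func main() {
	values := map[string]interface{}{
		"user": map[string]interface{}{
			"profile": map[string]interface{}{"email": "john@example.com"},
		},
	}
	v, ok := getPath(values, "user.profile.email")
	fmt.Println(v, ok)
}
```

Note that, as in `Get`, descending into a non-map value (e.g. `"simple.invalid"` where `simple` is a string) is a lookup failure rather than a panic.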
448  config/gontext/gontext_test.go  Normal file
@@ -0,0 +1,448 @@
package gontext

import (
	"errors"
	"testing"
)

func TestNew(t *testing.T) {
	tests := []struct {
		name     string
		initial  map[string]interface{}
		expected map[string]interface{}
	}{
		{
			name:     "nil-input",
			initial:  nil,
			expected: make(map[string]interface{}),
		},
		{
			name:     "empty-input",
			initial:  make(map[string]interface{}),
			expected: make(map[string]interface{}),
		},
		{
			name: "simple-values",
			initial: map[string]interface{}{
				"key1": "value1",
				"key2": 42,
			},
			expected: map[string]interface{}{
				"key1": "value1",
				"key2": 42,
			},
		},
		{
			name: "nested-values",
			initial: map[string]interface{}{
				"user": map[string]interface{}{
					"id":   123,
					"name": "John Doe",
				},
			},
			expected: map[string]interface{}{
				"user": map[string]interface{}{
					"id":   123,
					"name": "John Doe",
				},
			},
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			ctx := New(tt.initial)
			if ctx == nil {
				t.Error("Expected non-nil gontext")
			}
			if ctx.values == nil {
				t.Error("Expected non-nil values map")
			}

			// Verify deep copy by modifying original
			if tt.initial != nil {
				tt.initial["modified"] = "should not appear"
				if _, exists := ctx.values["modified"]; exists {
					t.Error("Deep copy failed - original map modification affected gontext")
				}
			}
		})
	}
}

func TestGontext_Get(t *testing.T) {
	ctx := New(map[string]interface{}{
		"simple":  "value",
		"number":  42,
		"boolean": true,
		"nested": map[string]interface{}{
			"level1": map[string]interface{}{
				"level2": "deep_value",
			},
		},
		"user": map[string]interface{}{
			"id":   123,
			"name": "John",
			"profile": map[string]interface{}{
				"email": "john@example.com",
			},
		},
	})

	tests := []struct {
		name        string
		path        string
		expected    interface{}
		shouldError bool
		errorType   error
	}{
		{
			name:        "simple-value",
			path:        "simple",
			expected:    "value",
			shouldError: false,
		},
		{
			name:        "number-value",
			path:        "number",
			expected:    42,
			shouldError: false,
		},
		{
			name:        "boolean-value",
			path:        "boolean",
			expected:    true,
			shouldError: false,
		},
		{
			name:        "nested-value",
			path:        "nested.level1.level2",
			expected:    "deep_value",
			shouldError: false,
		},
		{
			name:        "user-id",
			path:        "user.id",
			expected:    123,
			shouldError: false,
		},
		{
			name:        "deep-nested-value",
			path:        "user.profile.email",
			expected:    "john@example.com",
			shouldError: false,
		},
		{
			name:        "non-existent-key",
			path:        "nonexistent",
			expected:    nil,
			shouldError: true,
			errorType:   ErrGontextPathNotFound,
		},
		{
			name:        "non-existent-nested-key",
			path:        "user.nonexistent",
			expected:    nil,
			shouldError: true,
			errorType:   ErrGontextPathNotFound,
		},
		{
			name:        "invalid-nested-path",
			path:        "simple.invalid",
			expected:    nil,
			shouldError: true,
			errorType:   ErrGontextPathNotFound,
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			result, err := ctx.Get(tt.path)

			if tt.shouldError {
				if err == nil {
					t.Errorf("Expected error but got none")
				}
				if tt.errorType != nil && !errors.Is(err, tt.errorType) {
					t.Errorf("Expected error type %v, got %v", tt.errorType, err)
				}
			} else {
				if err != nil {
					t.Errorf("Unexpected error: %v", err)
				}
				if result != tt.expected {
					t.Errorf("Expected %v, got %v", tt.expected, result)
				}
			}
		})
	}
}

func TestGontext_Set(t *testing.T) {
	tests := []struct {
		name    string
		path    string
		value   interface{}
		wantErr bool
	}{
		{
			name:    "simple-set",
			path:    "key",
			value:   "value",
			wantErr: false,
		},
		{
			name:    "nested-set",
			path:    "user.name",
			value:   "John Doe",
			wantErr: false,
		},
		{
			name:    "deep-nested-set",
			path:    "user.profile.email",
			value:   "john@example.com",
			wantErr: false,
		},
		{
			name:    "override-primitive-with-nested",
			path:    "existing.new",
			value:   "nested_value",
			wantErr: false,
		},
		{
			name:    "empty-path",
			path:    "",
			value:   "value",
			wantErr: false, // Splitting "" yields a single empty part [""], which is valid
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			ctx := New(map[string]interface{}{
				"existing": "primitive",
			})

			err := ctx.Set(tt.path, tt.value)

			if tt.wantErr {
				if err == nil {
					t.Error("Expected error but got none")
				}
				return
			}

			if err != nil {
				t.Errorf("Unexpected error: %v", err)
				return
			}

			// Verify the value was set correctly
			result, getErr := ctx.Get(tt.path)
			if getErr != nil {
				t.Errorf("Error retrieving set value: %v", getErr)
				return
			}

			if result != tt.value {
				t.Errorf("Expected %v, got %v", tt.value, result)
			}
		})
	}
}

func TestGontext_SetOverrideBehavior(t *testing.T) {
	ctx := New(map[string]interface{}{
		"primitive": "value",
		"nested": map[string]interface{}{
			"key": "existing",
		},
	})

	// Test overriding primitive with nested structure
	err := ctx.Set("primitive.new", "nested_value")
	if err != nil {
		t.Errorf("Unexpected error: %v", err)
	}

	// Verify the primitive was replaced with a nested structure
	result, err := ctx.Get("primitive.new")
	if err != nil {
		t.Errorf("Error getting nested value: %v", err)
	}
	if result != "nested_value" {
		t.Errorf("Expected 'nested_value', got %v", result)
	}

	// Test overriding existing nested value
	err = ctx.Set("nested.key", "modified")
	if err != nil {
		t.Errorf("Unexpected error: %v", err)
	}

	result, err = ctx.Get("nested.key")
	if err != nil {
		t.Errorf("Error getting modified value: %v", err)
	}
	if result != "modified" {
		t.Errorf("Expected 'modified', got %v", result)
	}
}

func TestGontext_GetAll(t *testing.T) {
	initial := map[string]interface{}{
		"key1": "value1",
		"key2": 42,
		"nested": map[string]interface{}{
			"inner": "value",
		},
	}

	ctx := New(initial)

	// Add another value after creation
	ctx.Set("key3", "value3")

	result := ctx.GetAll()

	// Verify all values are present
	if result["key1"] != "value1" {
		t.Errorf("Expected key1=value1, got %v", result["key1"])
	}
	if result["key2"] != 42 {
		t.Errorf("Expected key2=42, got %v", result["key2"])
	}
	if result["key3"] != "value3" {
		t.Errorf("Expected key3=value3, got %v", result["key3"])
	}

	// Verify nested values
	nested, ok := result["nested"].(map[string]interface{})
	if !ok {
		t.Error("Expected nested to be map[string]interface{}")
	} else if nested["inner"] != "value" {
		t.Errorf("Expected nested.inner=value, got %v", nested["inner"])
	}

	// Verify deep copy - modifying returned map shouldn't affect gontext
	result["key1"] = "modified"
	original, _ := ctx.Get("key1")
	if original != "value1" {
		t.Error("GetAll did not return a deep copy - modification affected original")
	}
}

func TestGontext_ConcurrentAccess(t *testing.T) {
	ctx := New(map[string]interface{}{
		"counter": 0,
	})

	done := make(chan bool, 10)

	// Start 5 goroutines that read values
	for i := 0; i < 5; i++ {
		go func(id int) {
			for j := 0; j < 100; j++ {
				_, err := ctx.Get("counter")
				if err != nil {
					t.Errorf("Reader %d error: %v", id, err)
				}
			}
			done <- true
		}(i)
	}

	// Start 5 goroutines that write values
	for i := 0; i < 5; i++ {
		go func(id int) {
			for j := 0; j < 100; j++ {
				err := ctx.Set("counter", id*1000+j)
				if err != nil {
					t.Errorf("Writer %d error: %v", id, err)
				}
			}
			done <- true
		}(i)
	}

	// Wait for all goroutines to complete
	for i := 0; i < 10; i++ {
		<-done
	}
}

func TestDeepCopyValue(t *testing.T) {
	tests := []struct {
		name  string
		input interface{}
	}{
		{
			name:  "primitive-string",
			input: "test",
		},
		{
			name:  "primitive-int",
			input: 42,
		},
		{
			name:  "primitive-bool",
			input: true,
		},
		{
			name: "simple-map",
			input: map[string]interface{}{
				"key": "value",
			},
		},
		{
			name: "nested-map",
			input: map[string]interface{}{
				"nested": map[string]interface{}{
					"deep": "value",
				},
			},
		},
		{
			name:  "simple-slice",
			input: []interface{}{"a", "b", "c"},
		},
		{
			name: "mixed-slice",
			input: []interface{}{
				"string",
				42,
				map[string]interface{}{"nested": "value"},
			},
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			result := deepCopyValue(tt.input)

			// For maps and slices, verify it's a different object
			switch v := tt.input.(type) {
			case map[string]interface{}:
				resultMap, ok := result.(map[string]interface{})
				if !ok {
					t.Error("Deep copy didn't preserve map type")
					return
				}
				// Modify original to ensure independence
				v["modified"] = "test"
				if _, exists := resultMap["modified"]; exists {
					t.Error("Deep copy failed - maps are not independent")
				}
			case []interface{}:
				resultSlice, ok := result.([]interface{})
				if !ok {
					t.Error("Deep copy didn't preserve slice type")
					return
				}
				if len(resultSlice) != len(v) {
					t.Error("Deep copy didn't preserve slice length")
				}
			}
		})
	}
}
@@ -1,10 +1,10 @@
-package endpoint
+package key

 import "strings"

-// ConvertGroupAndEndpointNameToKey converts a group and an endpoint to a key
-func ConvertGroupAndEndpointNameToKey(groupName, endpointName string) string {
-	return sanitize(groupName) + "_" + sanitize(endpointName)
+// ConvertGroupAndNameToKey converts a group and a name to a key
+func ConvertGroupAndNameToKey(groupName, name string) string {
+	return sanitize(groupName) + "_" + sanitize(name)
 }

 func sanitize(s string) string {
@@ -16,4 +16,4 @@ func sanitize(s string) string {
 	s = strings.ReplaceAll(s, " ", "-")
 	s = strings.ReplaceAll(s, "#", "-")
 	return s
 }
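The key format can be demonstrated standalone. The `sanitize` sketch below is a hypothetical reconstruction: the diff only shows the space and `#` replacements, while the lowercasing and `/` replacement are inferred from the test cases (`"Core"` + `"Front End"` → `"core_front-end"`, `"a/b test"` + `"a"` → `"a-b-test_a"`); the real implementation lives in `config/key` and may differ:

```go
package main

import (
	"fmt"
	"strings"
)

// sanitize lowercases and replaces unsafe characters with "-" so that
// group/name pairs form stable, URL-friendly keys.
func sanitize(s string) string {
	s = strings.ToLower(s)
	s = strings.ReplaceAll(s, "/", "-")
	s = strings.ReplaceAll(s, " ", "-")
	s = strings.ReplaceAll(s, "#", "-")
	return s
}

// convertGroupAndNameToKey mirrors ConvertGroupAndNameToKey: sanitized group,
// underscore, sanitized name.
func convertGroupAndNameToKey(groupName, name string) string {
	return sanitize(groupName) + "_" + sanitize(name)
}

func main() {
	fmt.Println(convertGroupAndNameToKey("Core", "Front End"))
	fmt.Println(convertGroupAndNameToKey("a/b test", "a"))
}
```

Note that an empty group still produces a leading underscore (`"_name"`), which the new test scenario below relies on.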
11  config/key/key_bench_test.go  Normal file
@@ -0,0 +1,11 @@
package key

import (
	"testing"
)

func BenchmarkConvertGroupAndNameToKey(b *testing.B) {
	for n := 0; n < b.N; n++ {
		ConvertGroupAndNameToKey("group", "name")
	}
}
@@ -1,33 +1,38 @@
-package endpoint
+package key

 import "testing"

-func TestConvertGroupAndEndpointNameToKey(t *testing.T) {
+func TestConvertGroupAndNameToKey(t *testing.T) {
 	type Scenario struct {
 		GroupName      string
-		EndpointName   string
+		Name           string
 		ExpectedOutput string
 	}
 	scenarios := []Scenario{
 		{
 			GroupName:      "Core",
-			EndpointName:   "Front End",
+			Name:           "Front End",
 			ExpectedOutput: "core_front-end",
 		},
 		{
 			GroupName:      "Load balancers",
-			EndpointName:   "us-west-2",
+			Name:           "us-west-2",
 			ExpectedOutput: "load-balancers_us-west-2",
 		},
 		{
 			GroupName:      "a/b test",
-			EndpointName:   "a",
+			Name:           "a",
 			ExpectedOutput: "a-b-test_a",
 		},
+		{
+			GroupName:      "",
+			Name:           "name",
+			ExpectedOutput: "_name",
+		},
 	}
 	for _, scenario := range scenarios {
 		t.Run(scenario.ExpectedOutput, func(t *testing.T) {
-			output := ConvertGroupAndEndpointNameToKey(scenario.GroupName, scenario.EndpointName)
+			output := ConvertGroupAndNameToKey(scenario.GroupName, scenario.Name)
 			if output != scenario.ExpectedOutput {
 				t.Errorf("expected '%s', got '%s'", scenario.ExpectedOutput, output)
 			}
55  config/suite/result.go  Normal file
@@ -0,0 +1,55 @@
package suite

import (
	"time"

	"github.com/TwiN/gatus/v5/config/endpoint"
)

// Result represents the result of a suite execution
type Result struct {
	// Name of the suite
	Name string `json:"name,omitempty"`

	// Group of the suite
	Group string `json:"group,omitempty"`

	// Success indicates whether all required endpoints succeeded
	Success bool `json:"success"`

	// Timestamp is when the suite execution started
	Timestamp time.Time `json:"timestamp"`

	// Duration is how long the entire suite execution took
	Duration time.Duration `json:"duration"`

	// EndpointResults contains the results of each endpoint execution
	EndpointResults []*endpoint.Result `json:"endpointResults"`

	// Context is the final state of the context after all endpoints executed
	Context map[string]interface{} `json:"-"`

	// Errors contains any suite-level errors
	Errors []string `json:"errors,omitempty"`
}

// AddError adds an error to the suite result
func (r *Result) AddError(err string) {
	r.Errors = append(r.Errors, err)
}

// CalculateSuccess determines if the suite execution was successful
func (r *Result) CalculateSuccess() {
	r.Success = true
	// Check if any endpoints failed (all endpoints are required)
	for _, epResult := range r.EndpointResults {
		if !epResult.Success {
			r.Success = false
			break
		}
	}
	// Also check for suite-level errors
	if len(r.Errors) > 0 {
		r.Success = false
	}
}
214  config/suite/suite.go  Normal file
@@ -0,0 +1,214 @@
package suite

import (
	"errors"
	"fmt"
	"strconv"
	"time"

	"github.com/TwiN/gatus/v5/config/endpoint"
	"github.com/TwiN/gatus/v5/config/gontext"
	"github.com/TwiN/gatus/v5/config/key"
)

var (
	// ErrSuiteWithNoName is the error returned when a suite has no name
	ErrSuiteWithNoName = errors.New("suite must have a name")

	// ErrSuiteWithNoEndpoints is the error returned when a suite has no endpoints
	ErrSuiteWithNoEndpoints = errors.New("suite must have at least one endpoint")

	// ErrSuiteWithDuplicateEndpointNames is the error returned when a suite has duplicate endpoint names
	ErrSuiteWithDuplicateEndpointNames = errors.New("suite cannot have duplicate endpoint names")

	// ErrSuiteWithInvalidTimeout is the error returned when a suite has an invalid timeout
	ErrSuiteWithInvalidTimeout = errors.New("suite timeout must be positive")

	// DefaultInterval is the default interval for suite execution
	DefaultInterval = 10 * time.Minute

	// DefaultTimeout is the default timeout for suite execution
	DefaultTimeout = 5 * time.Minute
)

// Suite is a collection of endpoints that are executed sequentially with shared context
type Suite struct {
	// Name of the suite. Must be unique.
	Name string `yaml:"name"`

	// Group the suite belongs to. Used for grouping multiple suites together.
	Group string `yaml:"group,omitempty"`

	// Enabled defines whether the suite is enabled
	Enabled *bool `yaml:"enabled,omitempty"`

	// Interval is the duration to wait between suite executions
	Interval time.Duration `yaml:"interval,omitempty"`

	// Timeout is the maximum duration for the entire suite execution
	Timeout time.Duration `yaml:"timeout,omitempty"`

	// InitialContext holds initial values that can be referenced by endpoints
	InitialContext map[string]interface{} `yaml:"context,omitempty"`

	// Endpoints in the suite (executed sequentially)
	Endpoints []*endpoint.Endpoint `yaml:"endpoints"`
}

// IsEnabled returns whether the suite is enabled
func (s *Suite) IsEnabled() bool {
	if s.Enabled == nil {
		return true
	}
	return *s.Enabled
}

// Key returns a unique key for the suite
func (s *Suite) Key() string {
	return key.ConvertGroupAndNameToKey(s.Group, s.Name)
}

// ValidateAndSetDefaults validates the suite configuration and sets default values
func (s *Suite) ValidateAndSetDefaults() error {
	// Validate name
	if len(s.Name) == 0 {
		return ErrSuiteWithNoName
	}
	// Validate endpoints
	if len(s.Endpoints) == 0 {
		return ErrSuiteWithNoEndpoints
	}
	// Check for duplicate endpoint names
	endpointNames := make(map[string]bool)
	for _, ep := range s.Endpoints {
		if endpointNames[ep.Name] {
			return fmt.Errorf("%w: duplicate endpoint name '%s'", ErrSuiteWithDuplicateEndpointNames, ep.Name)
		}
		endpointNames[ep.Name] = true
		// Suite endpoints inherit the group from the suite
		ep.Group = s.Group
		// Validate each endpoint
		if err := ep.ValidateAndSetDefaults(); err != nil {
			return fmt.Errorf("invalid endpoint '%s': %w", ep.Name, err)
		}
	}
	// Set default interval
	if s.Interval == 0 {
		s.Interval = DefaultInterval
	}
	// Set default timeout
	if s.Timeout == 0 {
		s.Timeout = DefaultTimeout
	}
	// Validate timeout
	if s.Timeout < 0 {
		return ErrSuiteWithInvalidTimeout
	}
	// Initialize context if nil
	if s.InitialContext == nil {
		s.InitialContext = make(map[string]interface{})
	}
	return nil
}

// Execute executes all endpoints in the suite sequentially with context sharing
func (s *Suite) Execute() *Result {
	start := time.Now()
	// Initialize context from suite configuration
	ctx := gontext.New(s.InitialContext)
	// Create suite result
	result := &Result{
		Name:            s.Name,
		Group:           s.Group,
		Success:         true,
		Timestamp:       start,
		EndpointResults: make([]*endpoint.Result, 0, len(s.Endpoints)),
	}
	// Set up timeout for the entire suite execution
	timeoutChan := time.After(s.Timeout)
	// Execute each endpoint sequentially
	suiteHasFailed := false
endpointLoop:
	for _, ep := range s.Endpoints {
		// Skip non-always-run endpoints if suite has already failed
		if suiteHasFailed && !ep.AlwaysRun {
			continue
		}
		// Check timeout
		select {
		case <-timeoutChan:
			result.AddError(fmt.Sprintf("suite execution timed out after %v", s.Timeout))
			result.Success = false
			break endpointLoop // a bare break here would only exit the select, not the loop
		default:
		}
		// Execute endpoint with context
		epStartTime := time.Now()
		epResult := ep.EvaluateHealthWithContext(ctx)
		epDuration := time.Since(epStartTime)
		// Set endpoint name, timestamp, and duration on the result
		epResult.Name = ep.Name
		epResult.Timestamp = epStartTime
		epResult.Duration = epDuration
		// Store values from the endpoint result if configured (always store, even on failure)
		if ep.Store != nil {
			_, err := StoreResultValues(ctx, ep.Store, epResult)
			if err != nil {
				epResult.AddError(fmt.Sprintf("failed to store values: %v", err))
			}
		}
		result.EndpointResults = append(result.EndpointResults, epResult)
		// Mark suite as failed on any endpoint failure
		if !epResult.Success {
			result.Success = false
			suiteHasFailed = true
		}
	}
	result.Context = ctx.GetAll()
	result.Duration = time.Since(start)
	result.CalculateSuccess()
	return result
}

// StoreResultValues extracts values from an endpoint result and stores them in the gontext
func StoreResultValues(ctx *gontext.Gontext, mappings map[string]string, result *endpoint.Result) (map[string]interface{}, error) {
	if len(mappings) == 0 { // a nil map also has length 0
		return nil, nil
	}
	storedValues := make(map[string]interface{})
	for contextKey, placeholder := range mappings {
		value, err := extractValueForStorage(placeholder, result)
		if err != nil {
			// Continue storing other values even if one fails
			storedValues[contextKey] = fmt.Sprintf("ERROR: %v", err)
			continue
		}
		if err := ctx.Set(contextKey, value); err != nil {
			return storedValues, fmt.Errorf("failed to store %s: %w", contextKey, err)
		}
		storedValues[contextKey] = value
	}
	return storedValues, nil
}

// extractValueForStorage extracts a value from an endpoint result for storage in context
func extractValueForStorage(placeholder string, result *endpoint.Result) (interface{}, error) {
	// Use the unified ResolvePlaceholder function (no context needed for extraction)
	resolved, err := endpoint.ResolvePlaceholder(placeholder, result, nil)
	if err != nil {
		return nil, err
	}
	// Try to parse as number or boolean to store as proper types
	// Try int first for whole numbers
	if num, err := strconv.ParseInt(resolved, 10, 64); err == nil {
		return num, nil
	}
	// Then try float for decimals
	if num, err := strconv.ParseFloat(resolved, 64); err == nil {
		return num, nil
	}
	// Then try boolean
	if boolVal, err := strconv.ParseBool(resolved); err == nil {
		return boolVal, nil
	}
	return resolved, nil
}
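The type coercion performed by `extractValueForStorage` can be exercised in isolation. The `coerce` helper below is a hypothetical extraction of just the parsing chain (int, then float, then bool, then raw string), without the placeholder resolution:

```go
package main

import (
	"fmt"
	"strconv"
)

// coerce mirrors the parsing order used when storing resolved placeholder
// values into the suite context: whole numbers become int64, decimals become
// float64, booleans become bool, and everything else stays a string.
func coerce(resolved string) interface{} {
	if num, err := strconv.ParseInt(resolved, 10, 64); err == nil {
		return num
	}
	if num, err := strconv.ParseFloat(resolved, 64); err == nil {
		return num
	}
	if boolVal, err := strconv.ParseBool(resolved); err == nil {
		return boolVal
	}
	return resolved
}

func main() {
	for _, s := range []string{"42", "3.14", "true", "OK"} {
		fmt.Printf("%q -> %T(%v)\n", s, coerce(s), coerce(s))
	}
}
```

The ordering matters: `"42"` must be tried as an integer before a float (otherwise whole numbers would be stored as `float64`), and `strconv.ParseBool` also accepts forms like `"1"`, which the earlier integer parse intercepts first.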
26  config/suite/suite_status.go  Normal file
@@ -0,0 +1,26 @@
package suite

// Status represents the status of a suite
type Status struct {
	// Name of the suite
	Name string `json:"name,omitempty"`

	// Group the suite is a part of. Used for grouping multiple suites together on the front end.
	Group string `json:"group,omitempty"`

	// Key of the Suite
	Key string `json:"key"`

	// Results is the list of suite execution results
	Results []*Result `json:"results"`
}

// NewStatus creates a new Status for a given Suite
func NewStatus(s *Suite) *Status {
	return &Status{
		Name:    s.Name,
		Group:   s.Group,
		Key:     s.Key(),
		Results: []*Result{},
	}
}
449  config/suite/suite_test.go  Normal file
@@ -0,0 +1,449 @@
|
||||
package suite
|
||||
|
||||
import (
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/TwiN/gatus/v5/config/endpoint"
|
||||
"github.com/TwiN/gatus/v5/config/gontext"
|
||||
)
|
||||
|
||||
func TestSuite_ValidateAndSetDefaults(t *testing.T) {
|
||||
tests := []struct {
|
||||
name string
|
||||
suite *Suite
|
||||
wantErr bool
|
||||
}{
|
||||
{
|
||||
name: "valid-suite",
|
||||
suite: &Suite{
|
||||
Name: "test-suite",
|
||||
Endpoints: []*endpoint.Endpoint{
|
||||
{
|
||||
Name: "endpoint1",
|
||||
URL: "https://example.org",
|
||||
Conditions: []endpoint.Condition{
|
||||
endpoint.Condition("[STATUS] == 200"),
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
wantErr: false,
|
||||
},
|
||||
{
|
||||
name: "suite-without-name",
|
||||
suite: &Suite{
|
||||
Endpoints: []*endpoint.Endpoint{
|
||||
{
|
||||
Name: "endpoint1",
|
||||
URL: "https://example.org",
|
||||
Conditions: []endpoint.Condition{
|
||||
endpoint.Condition("[STATUS] == 200"),
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
wantErr: true,
|
||||
},
|
||||
{
|
||||
name: "suite-without-endpoints",
|
||||
suite: &Suite{
|
||||
Name: "test-suite",
|
||||
Endpoints: []*endpoint.Endpoint{},
|
||||
},
|
||||
wantErr: true,
|
||||
},
|
||||
{
|
||||
name: "suite-with-duplicate-endpoint-names",
|
||||
suite: &Suite{
|
||||
Name: "test-suite",
|
||||
Endpoints: []*endpoint.Endpoint{
|
||||
{
|
||||
Name: "duplicate",
|
||||
URL: "https://example.org",
|
||||
Conditions: []endpoint.Condition{
|
||||
endpoint.Condition("[STATUS] == 200"),
|
||||
},
|
||||
},
|
||||
{
|
||||
Name: "duplicate",
|
||||
URL: "https://example.com",
|
||||
Conditions: []endpoint.Condition{
|
||||
endpoint.Condition("[STATUS] == 200"),
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
wantErr: true,
|
||||
},
|
||||
}
|
||||
for _, tt := range tests {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
err := tt.suite.ValidateAndSetDefaults()
|
||||
if (err != nil) != tt.wantErr {
|
||||
t.Errorf("Suite.ValidateAndSetDefaults() error = %v, wantErr %v", err, tt.wantErr)
|
||||
}
|
||||
// Check defaults were set
|
||||
if err == nil {
|
||||
if tt.suite.Interval == 0 {
|
||||
t.Errorf("Expected Interval to be set to default, got 0")
|
||||
}
|
||||
if tt.suite.Timeout == 0 {
|
||||
t.Errorf("Expected Timeout to be set to default, got 0")
|
||||
}
|
||||
}
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestSuite_IsEnabled(t *testing.T) {
|
||||
tests := []struct {
|
||||
name string
|
||||
enabled *bool
|
||||
want bool
|
||||
}{
|
||||
{
|
||||
name: "nil-defaults-to-true",
|
||||
enabled: nil,
|
||||
want: true,
|
||||
},
|
||||
{
|
||||
name: "explicitly-enabled",
|
||||
enabled: boolPtr(true),
|
||||
want: true,
|
||||
},
|
||||
{
|
||||
name: "explicitly-disabled",
|
||||
enabled: boolPtr(false),
|
||||
want: false,
|
||||
},
|
||||
}
|
||||
for _, tt := range tests {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
s := &Suite{Enabled: tt.enabled}
|
||||
if got := s.IsEnabled(); got != tt.want {
|
||||
t.Errorf("Suite.IsEnabled() = %v, want %v", got, tt.want)
|
||||
}
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestSuite_Key(t *testing.T) {
|
||||
tests := []struct {
|
||||
name string
|
||||
suite *Suite
|
||||
want string
|
||||
}{
|
||||
{
|
||||
name: "with-group",
|
||||
suite: &Suite{
|
||||
Name: "test-suite",
|
||||
Group: "test-group",
|
||||
},
|
||||
want: "test-group_test-suite",
|
||||
},
|
||||
{
|
||||
name: "without-group",
|
||||
suite: &Suite{
|
||||
Name: "test-suite",
|
||||
Group: "",
|
||||
},
|
||||
want: "_test-suite",
|
||||
},
|
||||
}
|
||||
for _, tt := range tests {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
if got := tt.suite.Key(); got != tt.want {
|
||||
t.Errorf("Suite.Key() = %v, want %v", got, tt.want)
|
||||
}
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestSuite_DefaultValues(t *testing.T) {
|
||||
s := &Suite{
|
||||
Name: "test",
|
||||
Endpoints: []*endpoint.Endpoint{
|
||||
{
|
||||
Name: "endpoint1",
|
||||
URL: "https://example.org",
|
||||
Conditions: []endpoint.Condition{
|
||||
endpoint.Condition("[STATUS] == 200"),
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
err := s.ValidateAndSetDefaults()
|
||||
if err != nil {
|
||||
t.Fatalf("unexpected error: %v", err)
|
||||
}
|
||||
if s.Interval != DefaultInterval {
|
||||
t.Errorf("Expected Interval to be %v, got %v", DefaultInterval, s.Interval)
|
||||
}
|
||||
if s.Timeout != DefaultTimeout {
|
||||
t.Errorf("Expected Timeout to be %v, got %v", DefaultTimeout, s.Timeout)
|
||||
}
|
||||
if s.InitialContext == nil {
|
||||
t.Error("Expected InitialContext to be initialized, got nil")
|
||||
}
|
||||
}
|
||||
|
||||
// Helper function to create bool pointers
|
||||
func boolPtr(b bool) *bool {
|
||||
return &b
|
||||
}
|
||||
|
||||
func TestStoreResultValues(t *testing.T) {
|
||||
ctx := gontext.New(nil)
|
||||
// Create a mock result
|
||||
result := &endpoint.Result{
|
||||
HTTPStatus: 200,
|
||||
IP: "192.168.1.1",
|
||||
Duration: 100 * time.Millisecond,
|
||||
Body: []byte(`{"status": "OK", "value": 42}`),
|
||||
Connected: true,
|
||||
}
|
||||
// Define store mappings
|
||||
mappings := map[string]string{
|
||||
"response_code": "[STATUS]",
|
||||
"server_ip": "[IP]",
|
||||
"response_time": "[RESPONSE_TIME]",
|
||||
"status": "[BODY].status",
|
||||
"value": "[BODY].value",
|
||||
"connected": "[CONNECTED]",
|
||||
}
|
||||
// Store values
|
||||
stored, err := StoreResultValues(ctx, mappings, result)
|
||||
if err != nil {
|
||||
t.Fatalf("Unexpected error storing values: %v", err)
|
||||
}
|
||||
// Verify stored values
|
||||
if stored["response_code"] != int64(200) {
|
||||
t.Errorf("Expected response_code=200, got %v", stored["response_code"])
|
||||
}
|
||||
if stored["server_ip"] != "192.168.1.1" {
|
||||
t.Errorf("Expected server_ip=192.168.1.1, got %v", stored["server_ip"])
|
||||
}
|
||||
if stored["status"] != "OK" {
|
||||
t.Errorf("Expected status=OK, got %v", stored["status"])
|
||||
}
|
||||
if stored["value"] != int64(42) { // Now parsed as int64 for whole numbers
|
||||
t.Errorf("Expected value=42, got %v", stored["value"])
|
||||
}
|
||||
if stored["connected"] != true {
|
||||
t.Errorf("Expected connected=true, got %v", stored["connected"])
|
||||
}
|
||||
// Verify values are in context
|
||||
val, err := ctx.Get("status")
|
||||
if err != nil || val != "OK" {
|
||||
t.Errorf("Expected status=OK in context, got %v, err=%v", val, err)
|
||||
}
|
||||
}
|
||||

func TestSuite_ExecuteWithAlwaysRunEndpoints(t *testing.T) {
	suite := &Suite{
		Name: "test-suite",
		Endpoints: []*endpoint.Endpoint{
			{
				Name: "create-resource",
				URL:  "https://example.org",
				Conditions: []endpoint.Condition{
					endpoint.Condition("[STATUS] == 200"),
				},
				Store: map[string]string{
					"created_id": "[BODY]",
				},
			},
			{
				Name: "failing-endpoint",
				URL:  "https://example.org",
				Conditions: []endpoint.Condition{
					endpoint.Condition("[STATUS] != 200"), // This will fail
				},
			},
			{
				Name: "cleanup-resource",
				URL:  "https://example.org",
				Conditions: []endpoint.Condition{
					endpoint.Condition("[STATUS] == 200"),
				},
				AlwaysRun: true,
			},
		},
	}
	if err := suite.ValidateAndSetDefaults(); err != nil {
		t.Fatalf("suite validation failed: %v", err)
	}
	result := suite.Execute()
	if result.Success {
		t.Error("expected suite to fail due to middle endpoint failure")
	}
	if len(result.EndpointResults) != 3 {
		t.Errorf("expected 3 endpoint results, got %d", len(result.EndpointResults))
	}
	if result.EndpointResults[0].Name != "create-resource" {
		t.Errorf("expected first endpoint to be 'create-resource', got '%s'", result.EndpointResults[0].Name)
	}
	if result.EndpointResults[1].Name != "failing-endpoint" {
		t.Errorf("expected second endpoint to be 'failing-endpoint', got '%s'", result.EndpointResults[1].Name)
	}
	if result.EndpointResults[1].Success {
		t.Error("expected failing-endpoint to fail")
	}
	if result.EndpointResults[2].Name != "cleanup-resource" {
		t.Errorf("expected third endpoint to be 'cleanup-resource', got '%s'", result.EndpointResults[2].Name)
	}
	if !result.EndpointResults[2].Success {
		t.Error("expected cleanup endpoint to succeed")
	}
}
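The control flow this test and the next one pin down — stop at the first failing endpoint, but still run steps flagged as always-run (typically cleanup) — can be sketched independently of Gatus. The `step` type and `execute` function below are illustrative stand-ins, not types from this commit:

```go
package main

import "fmt"

// step is a minimal stand-in for an endpoint within a suite; the real
// endpoint.Endpoint in this commit carries many more fields.
type step struct {
	name      string
	succeeds  bool
	alwaysRun bool
}

// execute mirrors the control flow the suite tests assert on: execution
// stops at the first failure, except that steps marked alwaysRun still
// execute afterwards, and the suite as a whole is marked unsuccessful.
func execute(steps []step) (ran []string, success bool) {
	success = true
	failed := false
	for _, s := range steps {
		if failed && !s.alwaysRun {
			continue // skipped, like "skipped-endpoint" in the second test
		}
		ran = append(ran, s.name)
		if !s.succeeds {
			failed = true
			success = false
		}
	}
	return ran, success
}

func main() {
	ran, ok := execute([]step{
		{name: "create-resource", succeeds: true},
		{name: "failing-endpoint", succeeds: false},
		{name: "cleanup-resource", succeeds: true, alwaysRun: true},
	})
	fmt.Println(ran, ok)
}
```

Without `alwaysRun` on the last step, it would be skipped and only two results would be produced, which is exactly what the second test below asserts.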

func TestSuite_ExecuteWithoutAlwaysRunEndpoints(t *testing.T) {
	suite := &Suite{
		Name: "test-suite",
		Endpoints: []*endpoint.Endpoint{
			{
				Name: "create-resource",
				URL:  "https://example.org",
				Conditions: []endpoint.Condition{
					endpoint.Condition("[STATUS] == 200"),
				},
			},
			{
				Name: "failing-endpoint",
				URL:  "https://example.org",
				Conditions: []endpoint.Condition{
					endpoint.Condition("[STATUS] != 200"), // This will fail
				},
			},
			{
				Name: "skipped-endpoint",
				URL:  "https://example.org",
				Conditions: []endpoint.Condition{
					endpoint.Condition("[STATUS] == 200"),
				},
			},
		},
	}
	if err := suite.ValidateAndSetDefaults(); err != nil {
		t.Fatalf("suite validation failed: %v", err)
	}
	result := suite.Execute()
	if result.Success {
		t.Error("expected suite to fail due to middle endpoint failure")
	}
	if len(result.EndpointResults) != 2 {
		t.Errorf("expected 2 endpoint results (execution should stop after failure), got %d", len(result.EndpointResults))
	}
	if result.EndpointResults[0].Name != "create-resource" {
		t.Errorf("expected first endpoint to be 'create-resource', got '%s'", result.EndpointResults[0].Name)
	}
	if result.EndpointResults[1].Name != "failing-endpoint" {
		t.Errorf("expected second endpoint to be 'failing-endpoint', got '%s'", result.EndpointResults[1].Name)
	}
}

func TestResult_AddError(t *testing.T) {
	result := &Result{
		Name:      "test-suite",
		Timestamp: time.Now(),
	}
	if len(result.Errors) != 0 {
		t.Errorf("Expected 0 errors initially, got %d", len(result.Errors))
	}
	result.AddError("first error")
	if len(result.Errors) != 1 {
		t.Errorf("Expected 1 error after AddError, got %d", len(result.Errors))
	}
	if result.Errors[0] != "first error" {
		t.Errorf("Expected 'first error', got '%s'", result.Errors[0])
	}
	result.AddError("second error")
	if len(result.Errors) != 2 {
		t.Errorf("Expected 2 errors after second AddError, got %d", len(result.Errors))
	}
	if result.Errors[1] != "second error" {
		t.Errorf("Expected 'second error', got '%s'", result.Errors[1])
	}
}

func TestResult_CalculateSuccess(t *testing.T) {
	tests := []struct {
		name            string
		endpointResults []*endpoint.Result
		errors          []string
		expectedSuccess bool
	}{
		{
			name:            "no-endpoints-no-errors",
			endpointResults: []*endpoint.Result{},
			errors:          []string{},
			expectedSuccess: true,
		},
		{
			name: "all-endpoints-successful-no-errors",
			endpointResults: []*endpoint.Result{
				{Success: true},
				{Success: true},
			},
			errors:          []string{},
			expectedSuccess: true,
		},
		{
			name: "second-endpoint-failed-no-errors",
			endpointResults: []*endpoint.Result{
				{Success: true},
				{Success: false},
			},
			errors:          []string{},
			expectedSuccess: false,
		},
		{
			name: "first-endpoint-failed-no-errors",
			endpointResults: []*endpoint.Result{
				{Success: false},
				{Success: true},
			},
			errors:          []string{},
			expectedSuccess: false,
		},
		{
			name: "all-endpoints-successful-with-errors",
			endpointResults: []*endpoint.Result{
				{Success: true},
				{Success: true},
			},
			errors:          []string{"suite level error"},
			expectedSuccess: false,
		},
		{
			name: "endpoint-failed-and-errors",
			endpointResults: []*endpoint.Result{
				{Success: true},
				{Success: false},
			},
			errors:          []string{"suite level error"},
			expectedSuccess: false,
		},
		{
			name:            "no-endpoints-with-errors",
			endpointResults: []*endpoint.Result{},
			errors:          []string{"configuration error"},
			expectedSuccess: false,
		},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			result := &Result{
				Name:            "test-suite",
				Timestamp:       time.Now(),
				EndpointResults: tt.endpointResults,
				Errors:          tt.errors,
			}
			result.CalculateSuccess()
			if result.Success != tt.expectedSuccess {
				t.Errorf("Expected success=%v, got %v", tt.expectedSuccess, result.Success)
			}
		})
	}
}
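The rule this table encodes can be stated compactly: a suite result succeeds only when every endpoint result succeeded and no suite-level errors were recorded. A minimal standalone sketch of that rule (the real logic lives on `suite.Result.CalculateSuccess` in this commit):

```go
package main

import "fmt"

// calculateSuccess reproduces the rule the table-driven test above encodes:
// a suite is successful only if every endpoint result succeeded AND no
// suite-level errors were recorded. An empty suite with no errors counts
// as successful. Sketch only; not the actual method from this commit.
func calculateSuccess(endpointSuccesses []bool, errors []string) bool {
	if len(errors) > 0 {
		return false // e.g. a configuration or placeholder-resolution error
	}
	for _, ok := range endpointSuccesses {
		if !ok {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(calculateSuccess(nil, nil))                          // no endpoints, no errors
	fmt.Println(calculateSuccess([]bool{true, true}, nil))           // all passing
	fmt.Println(calculateSuccess([]bool{true, false}, nil))          // one failure
	fmt.Println(calculateSuccess([]bool{true, true}, []string{"x"})) // suite-level error
}
```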
go.mod (2 changed lines)
@@ -25,6 +25,7 @@ require (
	golang.org/x/crypto v0.40.0
	golang.org/x/net v0.42.0
	golang.org/x/oauth2 v0.30.0
+	golang.org/x/sync v0.16.0
	google.golang.org/api v0.242.0
	gopkg.in/mail.v2 v2.3.1
	gopkg.in/yaml.v3 v3.0.1
@@ -74,7 +75,6 @@ require (
	golang.org/x/exp v0.0.0-20250620022241-b7579e27df2b // indirect
	golang.org/x/image v0.18.0 // indirect
	golang.org/x/mod v0.25.0 // indirect
-	golang.org/x/sync v0.16.0 // indirect
	golang.org/x/sys v0.34.0 // indirect
	golang.org/x/text v0.27.0 // indirect
	golang.org/x/tools v0.34.0 // indirect
main.go (16 changed lines)
@@ -103,6 +103,15 @@ func initializeStorage(cfg *config.Config) {
	if err != nil {
		panic(err)
	}
+	// Remove all SuiteStatuses that represent suites which no longer exist in the configuration
+	var suiteKeys []string
+	for _, suite := range cfg.Suites {
+		suiteKeys = append(suiteKeys, suite.Key())
+	}
+	numberOfSuiteStatusesDeleted := store.Get().DeleteAllSuiteStatusesNotInKeys(suiteKeys)
+	if numberOfSuiteStatusesDeleted > 0 {
+		logr.Infof("[main.initializeStorage] Deleted %d suite statuses because their matching suites no longer existed", numberOfSuiteStatusesDeleted)
+	}
	// Remove all EndpointStatus that represent endpoints which no longer exist in the configuration
	var keys []string
	for _, ep := range cfg.Endpoints {
@@ -111,6 +120,13 @@ func initializeStorage(cfg *config.Config) {
	for _, ee := range cfg.ExternalEndpoints {
		keys = append(keys, ee.Key())
	}
+	// Also add endpoints that are part of suites
+	for _, suite := range cfg.Suites {
+		for _, ep := range suite.Endpoints {
+			keys = append(keys, ep.Key())
+		}
+	}
+	logr.Infof("[main.initializeStorage] Total endpoint keys to preserve: %d", len(keys))
	numberOfEndpointStatusesDeleted := store.Get().DeleteAllEndpointStatusesNotInKeys(keys)
	if numberOfEndpointStatusesDeleted > 0 {
		logr.Infof("[main.initializeStorage] Deleted %d endpoint statuses because their matching endpoints no longer existed", numberOfEndpointStatusesDeleted)
@@ -5,6 +5,7 @@ import (

	"github.com/TwiN/gatus/v5/config"
	"github.com/TwiN/gatus/v5/config/endpoint"
+	"github.com/TwiN/gatus/v5/config/suite"
	"github.com/prometheus/client_golang/prometheus"
)

@@ -18,6 +19,11 @@ var (
	resultCertificateExpirationSeconds *prometheus.GaugeVec
	resultEndpointSuccess              *prometheus.GaugeVec

+	// Suite metrics
+	suiteResultTotal           *prometheus.CounterVec
+	suiteResultDurationSeconds *prometheus.GaugeVec
+	suiteResultSuccess         *prometheus.GaugeVec
+
	// Track if metrics have been initialized to prevent duplicate registration
	metricsInitialized bool
	currentRegisterer  prometheus.Registerer
@@ -49,6 +55,17 @@ func UnregisterPrometheusMetrics() {
		currentRegisterer.Unregister(resultEndpointSuccess)
	}

+	// Unregister suite metrics
+	if suiteResultTotal != nil {
+		currentRegisterer.Unregister(suiteResultTotal)
+	}
+	if suiteResultDurationSeconds != nil {
+		currentRegisterer.Unregister(suiteResultDurationSeconds)
+	}
+	if suiteResultSuccess != nil {
+		currentRegisterer.Unregister(suiteResultSuccess)
+	}
+
	metricsInitialized = false
	currentRegisterer = nil
}
@@ -109,6 +126,28 @@ func InitializePrometheusMetrics(cfg *config.Config, reg prometheus.Registerer)
	}, append([]string{"key", "group", "name", "type"}, extraLabels...))
	reg.MustRegister(resultEndpointSuccess)

+	// Suite metrics
+	suiteResultTotal = prometheus.NewCounterVec(prometheus.CounterOpts{
+		Namespace: namespace,
+		Name:      "suite_results_total",
+		Help:      "Total number of suite executions",
+	}, append([]string{"key", "group", "name", "success"}, extraLabels...))
+	reg.MustRegister(suiteResultTotal)
+
+	suiteResultDurationSeconds = prometheus.NewGaugeVec(prometheus.GaugeOpts{
+		Namespace: namespace,
+		Name:      "suite_results_duration_seconds",
+		Help:      "Duration of suite execution in seconds",
+	}, append([]string{"key", "group", "name"}, extraLabels...))
+	reg.MustRegister(suiteResultDurationSeconds)
+
+	suiteResultSuccess = prometheus.NewGaugeVec(prometheus.GaugeOpts{
+		Namespace: namespace,
+		Name:      "suite_results_success",
+		Help:      "Whether the suite execution was successful (1) or not (0)",
+	}, append([]string{"key", "group", "name"}, extraLabels...))
+	reg.MustRegister(suiteResultSuccess)
+
+	// Mark as initialized
+	metricsInitialized = true
}
@@ -116,7 +155,7 @@ func InitializePrometheusMetrics(cfg *config.Config, reg prometheus.Registerer)
// PublishMetricsForEndpoint publishes metrics for the given endpoint and its result.
// These metrics will be exposed at /metrics if the metrics are enabled
func PublishMetricsForEndpoint(ep *endpoint.Endpoint, result *endpoint.Result, extraLabels []string) {
-	labelValues := []string{}
+	var labelValues []string
	for _, label := range extraLabels {
		if value, ok := ep.ExtraLabels[label]; ok {
			labelValues = append(labelValues, value)
@@ -124,7 +163,6 @@ func PublishMetricsForEndpoint(ep *endpoint.Endpoint, result *endpoint.Result, e
			labelValues = append(labelValues, "")
		}
	}
	endpointType := ep.Type()
	resultTotal.WithLabelValues(append([]string{ep.Key(), ep.Group, ep.Name, string(endpointType), strconv.FormatBool(result.Success)}, labelValues...)...).Inc()
	resultDurationSeconds.WithLabelValues(append([]string{ep.Key(), ep.Group, ep.Name, string(endpointType)}, labelValues...)...).Set(result.Duration.Seconds())
@@ -146,3 +184,35 @@ func PublishMetricsForEndpoint(ep *endpoint.Endpoint, result *endpoint.Result, e
		resultEndpointSuccess.WithLabelValues(append([]string{ep.Key(), ep.Group, ep.Name, string(endpointType)}, labelValues...)...).Set(0)
	}
}

+// PublishMetricsForSuite publishes metrics for the given suite and its result.
+// These metrics will be exposed at /metrics if the metrics are enabled
+func PublishMetricsForSuite(s *suite.Suite, result *suite.Result, extraLabels []string) {
+	if !metricsInitialized {
+		return
+	}
+	var labelValues []string
+	// For now, suites don't have ExtraLabels, so we'll use empty values
+	// This maintains consistency with endpoint metrics structure
+	for range extraLabels {
+		labelValues = append(labelValues, "")
+	}
+	// Publish suite execution counter
+	suiteResultTotal.WithLabelValues(
+		append([]string{s.Key(), s.Group, s.Name, strconv.FormatBool(result.Success)}, labelValues...)...,
+	).Inc()
+	// Publish suite duration
+	suiteResultDurationSeconds.WithLabelValues(
+		append([]string{s.Key(), s.Group, s.Name}, labelValues...)...,
+	).Set(result.Duration.Seconds())
+	// Publish suite success status
+	if result.Success {
+		suiteResultSuccess.WithLabelValues(
+			append([]string{s.Key(), s.Group, s.Name}, labelValues...)...,
+		).Set(1)
+	} else {
+		suiteResultSuccess.WithLabelValues(
+			append([]string{s.Key(), s.Group, s.Name}, labelValues...)...,
+		).Set(0)
+	}
+}
@@ -8,6 +8,7 @@ import (
	"github.com/TwiN/gatus/v5/config"
	"github.com/TwiN/gatus/v5/config/endpoint"
	"github.com/TwiN/gatus/v5/config/endpoint/dns"
+	"github.com/TwiN/gatus/v5/config/suite"
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/testutil"
)
@@ -226,3 +227,93 @@ gatus_results_endpoint_success{group="http-ep-group",key="http-ep-group_http-ep-
		t.Errorf("Expected no errors but got: %v", err)
	}
}

func TestPublishMetricsForSuite(t *testing.T) {
	reg := prometheus.NewRegistry()
	InitializePrometheusMetrics(&config.Config{}, reg)

	testSuite := &suite.Suite{
		Name:  "test-suite",
		Group: "test-group",
	}
	// Test successful suite execution
	successResult := &suite.Result{
		Success:  true,
		Duration: 5 * time.Second,
		Name:     "test-suite",
		Group:    "test-group",
	}
	PublishMetricsForSuite(testSuite, successResult, []string{})

	err := testutil.GatherAndCompare(reg, bytes.NewBufferString(`
# HELP gatus_suite_results_duration_seconds Duration of suite execution in seconds
# TYPE gatus_suite_results_duration_seconds gauge
gatus_suite_results_duration_seconds{group="test-group",key="test-group_test-suite",name="test-suite"} 5
# HELP gatus_suite_results_success Whether the suite execution was successful (1) or not (0)
# TYPE gatus_suite_results_success gauge
gatus_suite_results_success{group="test-group",key="test-group_test-suite",name="test-suite"} 1
# HELP gatus_suite_results_total Total number of suite executions
# TYPE gatus_suite_results_total counter
gatus_suite_results_total{group="test-group",key="test-group_test-suite",name="test-suite",success="true"} 1
`), "gatus_suite_results_duration_seconds", "gatus_suite_results_success", "gatus_suite_results_total")
	if err != nil {
		t.Errorf("Expected no errors but got: %v", err)
	}

	// Test failed suite execution
	failureResult := &suite.Result{
		Success:  false,
		Duration: 10 * time.Second,
		Name:     "test-suite",
		Group:    "test-group",
	}
	PublishMetricsForSuite(testSuite, failureResult, []string{})

	err = testutil.GatherAndCompare(reg, bytes.NewBufferString(`
# HELP gatus_suite_results_duration_seconds Duration of suite execution in seconds
# TYPE gatus_suite_results_duration_seconds gauge
gatus_suite_results_duration_seconds{group="test-group",key="test-group_test-suite",name="test-suite"} 10
# HELP gatus_suite_results_success Whether the suite execution was successful (1) or not (0)
# TYPE gatus_suite_results_success gauge
gatus_suite_results_success{group="test-group",key="test-group_test-suite",name="test-suite"} 0
# HELP gatus_suite_results_total Total number of suite executions
# TYPE gatus_suite_results_total counter
gatus_suite_results_total{group="test-group",key="test-group_test-suite",name="test-suite",success="false"} 1
gatus_suite_results_total{group="test-group",key="test-group_test-suite",name="test-suite",success="true"} 1
`), "gatus_suite_results_duration_seconds", "gatus_suite_results_success", "gatus_suite_results_total")
	if err != nil {
		t.Errorf("Expected no errors but got: %v", err)
	}
}

func TestPublishMetricsForSuite_NoGroup(t *testing.T) {
	reg := prometheus.NewRegistry()
	InitializePrometheusMetrics(&config.Config{}, reg)

	testSuite := &suite.Suite{
		Name:  "no-group-suite",
		Group: "",
	}
	result := &suite.Result{
		Success:  true,
		Duration: 3 * time.Second,
		Name:     "no-group-suite",
		Group:    "",
	}
	PublishMetricsForSuite(testSuite, result, []string{})

	err := testutil.GatherAndCompare(reg, bytes.NewBufferString(`
# HELP gatus_suite_results_duration_seconds Duration of suite execution in seconds
# TYPE gatus_suite_results_duration_seconds gauge
gatus_suite_results_duration_seconds{group="",key="_no-group-suite",name="no-group-suite"} 3
# HELP gatus_suite_results_success Whether the suite execution was successful (1) or not (0)
# TYPE gatus_suite_results_success gauge
gatus_suite_results_success{group="",key="_no-group-suite",name="no-group-suite"} 1
# HELP gatus_suite_results_total Total number of suite executions
# TYPE gatus_suite_results_total counter
gatus_suite_results_total{group="",key="_no-group-suite",name="no-group-suite",success="true"} 1
`), "gatus_suite_results_duration_seconds", "gatus_suite_results_success", "gatus_suite_results_total")
	if err != nil {
		t.Errorf("Expected no errors but got: %v", err)
	}
}
@@ -4,5 +4,6 @@ import "errors"

var (
	ErrEndpointNotFound = errors.New("endpoint not found") // When an endpoint does not exist in the store
+	ErrSuiteNotFound    = errors.New("suite not found")    // When a suite does not exist in the store
	ErrInvalidTimeRange = errors.New("'from' cannot be older than 'to'") // When an invalid time range is provided
)
storage/store/common/paging/suite_status_params.go (new file, 22 lines)
@@ -0,0 +1,22 @@
package paging

// SuiteStatusParams represents the parameters for suite status queries
type SuiteStatusParams struct {
	Page     int // Page number
	PageSize int // Number of results per page
}

// NewSuiteStatusParams creates a new SuiteStatusParams
func NewSuiteStatusParams() *SuiteStatusParams {
	return &SuiteStatusParams{
		Page:     1,
		PageSize: 20,
	}
}

// WithPagination sets the page and page size
func (params *SuiteStatusParams) WithPagination(page, pageSize int) *SuiteStatusParams {
	params.Page = page
	params.PageSize = pageSize
	return params
}
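`SuiteStatusParams` only carries the two values; how a store applies them is up to the caller. A hypothetical consumer might slice a result history like this (`paginate` is illustrative and not part of this commit; note `Page` is 1-indexed, matching the default of 1 above):

```go
package main

import "fmt"

// paginate shows how a consumer of SuiteStatusParams might apply Page
// (1-indexed) and PageSize to a slice of results. Hypothetical helper;
// the params type in this commit only stores the two values.
func paginate(results []int, page, pageSize int) []int {
	if page < 1 || pageSize < 1 {
		return nil // out-of-range parameters yield no results
	}
	start := (page - 1) * pageSize
	if start >= len(results) {
		return nil
	}
	end := start + pageSize
	if end > len(results) {
		end = len(results) // last page may be partial
	}
	return results[start:end]
}

func main() {
	history := []int{1, 2, 3, 4, 5}
	fmt.Println(paginate(history, 1, 2)) // [1 2]
	fmt.Println(paginate(history, 3, 2)) // [5]
}
```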
storage/store/common/paging/suite_status_params_test.go (new file, 124 lines)
@@ -0,0 +1,124 @@
package paging

import (
	"testing"
)

func TestNewSuiteStatusParams(t *testing.T) {
	params := NewSuiteStatusParams()
	if params == nil {
		t.Fatal("NewSuiteStatusParams should not return nil")
	}
	if params.Page != 1 {
		t.Errorf("expected default Page to be 1, got %d", params.Page)
	}
	if params.PageSize != 20 {
		t.Errorf("expected default PageSize to be 20, got %d", params.PageSize)
	}
}

func TestSuiteStatusParams_WithPagination(t *testing.T) {
	tests := []struct {
		name         string
		page         int
		pageSize     int
		expectedPage int
		expectedSize int
	}{
		{
			name:         "valid pagination",
			page:         2,
			pageSize:     50,
			expectedPage: 2,
			expectedSize: 50,
		},
		{
			name:         "zero page",
			page:         0,
			pageSize:     10,
			expectedPage: 0,
			expectedSize: 10,
		},
		{
			name:         "negative page",
			page:         -1,
			pageSize:     20,
			expectedPage: -1,
			expectedSize: 20,
		},
		{
			name:         "zero page size",
			page:         1,
			pageSize:     0,
			expectedPage: 1,
			expectedSize: 0,
		},
		{
			name:         "negative page size",
			page:         1,
			pageSize:     -10,
			expectedPage: 1,
			expectedSize: -10,
		},
		{
			name:         "large values",
			page:         1000,
			pageSize:     10000,
			expectedPage: 1000,
			expectedSize: 10000,
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			params := NewSuiteStatusParams().WithPagination(tt.page, tt.pageSize)
			if params.Page != tt.expectedPage {
				t.Errorf("expected Page to be %d, got %d", tt.expectedPage, params.Page)
			}
			if params.PageSize != tt.expectedSize {
				t.Errorf("expected PageSize to be %d, got %d", tt.expectedSize, params.PageSize)
			}
		})
	}
}

func TestSuiteStatusParams_ChainedMethods(t *testing.T) {
	params := NewSuiteStatusParams().
		WithPagination(3, 100)

	if params.Page != 3 {
		t.Errorf("expected Page to be 3, got %d", params.Page)
	}
	if params.PageSize != 100 {
		t.Errorf("expected PageSize to be 100, got %d", params.PageSize)
	}
}

func TestSuiteStatusParams_OverwritePagination(t *testing.T) {
	params := NewSuiteStatusParams()

	// Set initial pagination
	params.WithPagination(2, 50)
	if params.Page != 2 || params.PageSize != 50 {
		t.Error("initial pagination not set correctly")
	}

	// Overwrite pagination
	params.WithPagination(5, 200)
	if params.Page != 5 {
		t.Errorf("expected Page to be overwritten to 5, got %d", params.Page)
	}
	if params.PageSize != 200 {
		t.Errorf("expected PageSize to be overwritten to 200, got %d", params.PageSize)
	}
}

func TestSuiteStatusParams_ReturnsSelf(t *testing.T) {
	params := NewSuiteStatusParams()

	// Verify WithPagination returns the same instance
	result := params.WithPagination(1, 20)
	if result != params {
		t.Error("WithPagination should return the same instance for method chaining")
	}
}
@@ -7,16 +7,20 @@ import (
|
||||
|
||||
"github.com/TwiN/gatus/v5/alerting/alert"
|
||||
"github.com/TwiN/gatus/v5/config/endpoint"
|
||||
"github.com/TwiN/gatus/v5/config/key"
|
||||
"github.com/TwiN/gatus/v5/config/suite"
|
||||
"github.com/TwiN/gatus/v5/storage/store/common"
|
||||
"github.com/TwiN/gatus/v5/storage/store/common/paging"
|
||||
"github.com/TwiN/gocache/v2"
|
||||
"github.com/TwiN/logr"
|
||||
)
|
||||
|
||||
// Store that leverages gocache
|
||||
type Store struct {
|
||||
sync.RWMutex
|
||||
|
||||
cache *gocache.Cache
|
||||
endpointCache *gocache.Cache // Cache for endpoint statuses
|
||||
suiteCache *gocache.Cache // Cache for suite statuses
|
||||
|
||||
maximumNumberOfResults int // maximum number of results that an endpoint can have
|
||||
maximumNumberOfEvents int // maximum number of events that an endpoint can have
|
||||
@@ -28,7 +32,8 @@ type Store struct {
|
||||
// supports eventual persistence.
|
||||
func NewStore(maximumNumberOfResults, maximumNumberOfEvents int) (*Store, error) {
|
||||
store := &Store{
|
||||
cache: gocache.NewCache().WithMaxSize(gocache.NoMaxSize),
|
||||
endpointCache: gocache.NewCache().WithMaxSize(gocache.NoMaxSize),
|
||||
suiteCache: gocache.NewCache().WithMaxSize(gocache.NoMaxSize),
|
||||
maximumNumberOfResults: maximumNumberOfResults,
|
||||
maximumNumberOfEvents: maximumNumberOfEvents,
|
||||
}
|
||||
@@ -38,10 +43,12 @@ func NewStore(maximumNumberOfResults, maximumNumberOfEvents int) (*Store, error)
|
||||
// GetAllEndpointStatuses returns all monitored endpoint.Status
|
||||
// with a subset of endpoint.Result defined by the page and pageSize parameters
|
||||
func (s *Store) GetAllEndpointStatuses(params *paging.EndpointStatusParams) ([]*endpoint.Status, error) {
|
||||
endpointStatuses := s.cache.GetAll()
|
||||
pagedEndpointStatuses := make([]*endpoint.Status, 0, len(endpointStatuses))
|
||||
for _, v := range endpointStatuses {
|
||||
pagedEndpointStatuses = append(pagedEndpointStatuses, ShallowCopyEndpointStatus(v.(*endpoint.Status), params))
|
||||
allStatuses := s.endpointCache.GetAll()
|
||||
pagedEndpointStatuses := make([]*endpoint.Status, 0, len(allStatuses))
|
||||
for _, v := range allStatuses {
|
||||
if status, ok := v.(*endpoint.Status); ok {
|
||||
pagedEndpointStatuses = append(pagedEndpointStatuses, ShallowCopyEndpointStatus(status, params))
|
||||
}
|
||||
}
|
||||
sort.Slice(pagedEndpointStatuses, func(i, j int) bool {
|
||||
return pagedEndpointStatuses[i].Key < pagedEndpointStatuses[j].Key
|
||||
@@ -49,26 +56,53 @@ func (s *Store) GetAllEndpointStatuses(params *paging.EndpointStatusParams) ([]*
|
||||
return pagedEndpointStatuses, nil
|
||||
}
|
||||
|
||||
// GetAllSuiteStatuses returns all monitored suite.Status
|
||||
func (s *Store) GetAllSuiteStatuses(params *paging.SuiteStatusParams) ([]*suite.Status, error) {
|
||||
s.RLock()
|
||||
defer s.RUnlock()
|
||||
suiteStatuses := make([]*suite.Status, 0)
|
||||
for _, v := range s.suiteCache.GetAll() {
|
||||
if status, ok := v.(*suite.Status); ok {
|
||||
suiteStatuses = append(suiteStatuses, ShallowCopySuiteStatus(status, params))
|
||||
}
|
||||
}
|
||||
sort.Slice(suiteStatuses, func(i, j int) bool {
|
||||
return suiteStatuses[i].Key < suiteStatuses[j].Key
|
||||
})
|
||||
return suiteStatuses, nil
|
||||
}
|
||||
|
||||
// GetEndpointStatus returns the endpoint status for a given endpoint name in the given group
|
||||
func (s *Store) GetEndpointStatus(groupName, endpointName string, params *paging.EndpointStatusParams) (*endpoint.Status, error) {
|
||||
return s.GetEndpointStatusByKey(endpoint.ConvertGroupAndEndpointNameToKey(groupName, endpointName), params)
|
||||
return s.GetEndpointStatusByKey(key.ConvertGroupAndNameToKey(groupName, endpointName), params)
|
||||
}
|
||||
|
||||
// GetEndpointStatusByKey returns the endpoint status for a given key
|
||||
func (s *Store) GetEndpointStatusByKey(key string, params *paging.EndpointStatusParams) (*endpoint.Status, error) {
|
||||
endpointStatus := s.cache.GetValue(key)
|
||||
endpointStatus := s.endpointCache.GetValue(key)
|
||||
if endpointStatus == nil {
|
||||
return nil, common.ErrEndpointNotFound
|
||||
}
|
||||
return ShallowCopyEndpointStatus(endpointStatus.(*endpoint.Status), params), nil
|
||||
}
|
||||
|
||||
// GetSuiteStatusByKey returns the suite status for a given key
|
||||
func (s *Store) GetSuiteStatusByKey(key string, params *paging.SuiteStatusParams) (*suite.Status, error) {
|
||||
s.RLock()
|
||||
defer s.RUnlock()
|
||||
suiteStatus := s.suiteCache.GetValue(key)
|
||||
if suiteStatus == nil {
|
||||
return nil, common.ErrSuiteNotFound
|
||||
}
|
||||
return ShallowCopySuiteStatus(suiteStatus.(*suite.Status), params), nil
|
||||
}
|
||||
|
||||
// GetUptimeByKey returns the uptime percentage during a time range
|
||||
func (s *Store) GetUptimeByKey(key string, from, to time.Time) (float64, error) {
|
||||
if from.After(to) {
|
||||
return 0, common.ErrInvalidTimeRange
|
||||
}
|
||||
endpointStatus := s.cache.GetValue(key)
|
||||
endpointStatus := s.endpointCache.GetValue(key)
|
||||
if endpointStatus == nil || endpointStatus.(*endpoint.Status).Uptime == nil {
|
||||
return 0, common.ErrEndpointNotFound
|
||||
}
|
||||
@@ -97,7 +131,7 @@ func (s *Store) GetAverageResponseTimeByKey(key string, from, to time.Time) (int
|
||||
if from.After(to) {
|
||||
return 0, common.ErrInvalidTimeRange
|
||||
}
|
||||
endpointStatus := s.cache.GetValue(key)
|
||||
endpointStatus := s.endpointCache.GetValue(key)
|
||||
if endpointStatus == nil || endpointStatus.(*endpoint.Status).Uptime == nil {
|
||||
return 0, common.ErrEndpointNotFound
|
||||
}
|
||||
@@ -125,7 +159,7 @@ func (s *Store) GetHourlyAverageResponseTimeByKey(key string, from, to time.Time
|
||||
if from.After(to) {
|
||||
return nil, common.ErrInvalidTimeRange
|
||||
}
|
||||
endpointStatus := s.cache.GetValue(key)
|
||||
endpointStatus := s.endpointCache.GetValue(key)
|
||||
    if endpointStatus == nil || endpointStatus.(*endpoint.Status).Uptime == nil {
        return nil, common.ErrEndpointNotFound
    }
@@ -144,11 +178,11 @@ func (s *Store) GetHourlyAverageResponseTimeByKey(key string, from, to time.Time
    return hourlyAverageResponseTimes, nil
}

// Insert adds the observed result for the specified endpoint into the store
func (s *Store) Insert(ep *endpoint.Endpoint, result *endpoint.Result) error {
    key := ep.Key()
// InsertEndpointResult adds the observed result for the specified endpoint into the store
func (s *Store) InsertEndpointResult(ep *endpoint.Endpoint, result *endpoint.Result) error {
    endpointKey := ep.Key()
    s.Lock()
    status, exists := s.cache.Get(key)
    status, exists := s.endpointCache.Get(endpointKey)
    if !exists {
        status = endpoint.NewStatus(ep.Group, ep.Name)
        status.(*endpoint.Status).Events = append(status.(*endpoint.Status).Events, &endpoint.Event{
@@ -157,18 +191,45 @@ func (s *Store) Insert(ep *endpoint.Endpoint, result *endpoint.Result) error {
        })
    }
    AddResult(status.(*endpoint.Status), result, s.maximumNumberOfResults, s.maximumNumberOfEvents)
    s.cache.Set(key, status)
    s.endpointCache.Set(endpointKey, status)
    s.Unlock()
    return nil
}

// InsertSuiteResult adds the observed result for the specified suite into the store
func (s *Store) InsertSuiteResult(su *suite.Suite, result *suite.Result) error {
    s.Lock()
    defer s.Unlock()
    suiteKey := su.Key()
    suiteStatus := s.suiteCache.GetValue(suiteKey)
    if suiteStatus == nil {
        suiteStatus = &suite.Status{
            Name:    su.Name,
            Group:   su.Group,
            Key:     su.Key(),
            Results: []*suite.Result{},
        }
        logr.Debugf("[memory.InsertSuiteResult] Created new suite status for suiteKey=%s", suiteKey)
    }
    status := suiteStatus.(*suite.Status)
    // Add the new result at the end (append like endpoint implementation)
    status.Results = append(status.Results, result)
    // Keep only the maximum number of results
    if len(status.Results) > s.maximumNumberOfResults {
        status.Results = status.Results[len(status.Results)-s.maximumNumberOfResults:]
    }
    s.suiteCache.Set(suiteKey, status)
    logr.Debugf("[memory.InsertSuiteResult] Stored suite result for suiteKey=%s, total results=%d", suiteKey, len(status.Results))
    return nil
}

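The trimming step in `InsertSuiteResult` above is a plain re-slice that drops the oldest entries once the cap is exceeded. A minimal standalone sketch of the same idiom (the function name and `int` element type here are illustrative, not part of the gatus codebase):

```go
package main

import "fmt"

// trimToNewest keeps only the last max elements of results,
// mirroring the re-slice used by InsertSuiteResult above.
func trimToNewest(results []int, max int) []int {
	if len(results) > max {
		results = results[len(results)-max:]
	}
	return results
}

func main() {
	results := []int{1, 2, 3, 4, 5, 6, 7}
	// Only the 5 newest entries remain; shorter slices are untouched.
	fmt.Println(trimToNewest(results, 5)) // [3 4 5 6 7]
}
```

Because results are appended in chronological order, slicing from the tail keeps the newest ones without any sorting.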
// DeleteAllEndpointStatusesNotInKeys removes all Status that are not within the keys provided
func (s *Store) DeleteAllEndpointStatusesNotInKeys(keys []string) int {
    var keysToDelete []string
    for _, existingKey := range s.cache.GetKeysByPattern("*", 0) {
    for _, existingKey := range s.endpointCache.GetKeysByPattern("*", 0) {
        shouldDelete := true
        for _, key := range keys {
            if existingKey == key {
        for _, k := range keys {
            if existingKey == k {
                shouldDelete = false
                break
            }
@@ -177,7 +238,24 @@ func (s *Store) DeleteAllEndpointStatusesNotInKeys(keys []string) int {
            keysToDelete = append(keysToDelete, existingKey)
        }
    }
    return s.cache.DeleteAll(keysToDelete)
    return s.endpointCache.DeleteAll(keysToDelete)
}

// DeleteAllSuiteStatusesNotInKeys removes all suite statuses that are not within the keys provided
func (s *Store) DeleteAllSuiteStatusesNotInKeys(keys []string) int {
    s.Lock()
    defer s.Unlock()
    keysToKeep := make(map[string]bool, len(keys))
    for _, k := range keys {
        keysToKeep[k] = true
    }
    var keysToDelete []string
    for existingKey := range s.suiteCache.GetAll() {
        if !keysToKeep[existingKey] {
            keysToDelete = append(keysToDelete, existingKey)
        }
    }
    return s.suiteCache.DeleteAll(keysToDelete)
}

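`DeleteAllSuiteStatusesNotInKeys` above first converts the keep-list into a `map[string]bool` so that each existing key can be checked in O(1), instead of the nested loop used by the endpoint variant. The same pattern in isolation (the function name and sample keys are illustrative):

```go
package main

import "fmt"

// keysNotIn returns the keys from existing that are absent from keep,
// building a set once for O(1) membership checks.
func keysNotIn(existing, keep []string) []string {
	keepSet := make(map[string]bool, len(keep))
	for _, k := range keep {
		keepSet[k] = true
	}
	var toDelete []string
	for _, k := range existing {
		if !keepSet[k] {
			toDelete = append(toDelete, k)
		}
	}
	return toDelete
}

func main() {
	existing := []string{"core_suite-a", "core_suite-b", "core_suite-c"}
	fmt.Println(keysNotIn(existing, []string{"core_suite-b"})) // [core_suite-a core_suite-c]
}
```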
// GetTriggeredEndpointAlert returns whether the triggered alert for the specified endpoint as well as the necessary information to resolve it
@@ -215,12 +293,16 @@ func (s *Store) DeleteAllTriggeredAlertsNotInChecksumsByEndpoint(ep *endpoint.En
func (s *Store) HasEndpointStatusNewerThan(key string, timestamp time.Time) (bool, error) {
    s.RLock()
    defer s.RUnlock()
    endpointStatus := s.cache.GetValue(key)
    endpointStatus := s.endpointCache.GetValue(key)
    if endpointStatus == nil {
        // If no endpoint exists, there's no newer status, so return false instead of an error
        return false, nil
    }
    for _, result := range endpointStatus.(*endpoint.Status).Results {
    status, ok := endpointStatus.(*endpoint.Status)
    if !ok {
        return false, nil
    }
    for _, result := range status.Results {
        if result.Timestamp.After(timestamp) {
            return true, nil
        }
@@ -230,7 +312,8 @@ func (s *Store) HasEndpointStatusNewerThan(key string, timestamp time.Time) (boo

// Clear deletes everything from the store
func (s *Store) Clear() {
    s.cache.Clear()
    s.endpointCache.Clear()
    s.suiteCache.Clear()
}

// Save persists the cache to the store file

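The `status, ok := endpointStatus.(*endpoint.Status)` change in `HasEndpointStatusNewerThan` above swaps a bare type assertion, which panics when the cached value has a different type, for the comma-ok form, which matters now that the cache can hold both endpoint and suite statuses. A minimal standalone illustration of the difference (the two status types here are stand-ins, not gatus types):

```go
package main

import "fmt"

type endpointStatus struct{ name string }
type suiteStatus struct{ name string }

// describe uses the comma-ok assertion so a cache holding mixed
// value types cannot cause a panic, only a graceful fallback.
func describe(v interface{}) string {
	status, ok := v.(*endpointStatus)
	if !ok {
		return "not an endpoint status"
	}
	return status.name
}

func main() {
	fmt.Println(describe(&endpointStatus{name: "api"}))     // api
	fmt.Println(describe(&suiteStatus{name: "checkout"})) // not an endpoint status
}
```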
@@ -1,10 +1,12 @@
package memory

import (
    "sync"
    "testing"
    "time"

    "github.com/TwiN/gatus/v5/config/endpoint"
    "github.com/TwiN/gatus/v5/config/suite"
    "github.com/TwiN/gatus/v5/storage"
    "github.com/TwiN/gatus/v5/storage/store/common/paging"
)
@@ -86,12 +88,12 @@ func TestStore_SanityCheck(t *testing.T) {
    store, _ := NewStore(storage.DefaultMaximumNumberOfResults, storage.DefaultMaximumNumberOfEvents)
    defer store.Clear()
    defer store.Close()
    store.Insert(&testEndpoint, &testSuccessfulResult)
    store.InsertEndpointResult(&testEndpoint, &testSuccessfulResult)
    endpointStatuses, _ := store.GetAllEndpointStatuses(paging.NewEndpointStatusParams())
    if numberOfEndpointStatuses := len(endpointStatuses); numberOfEndpointStatuses != 1 {
        t.Fatalf("expected 1 EndpointStatus, got %d", numberOfEndpointStatuses)
    }
    store.Insert(&testEndpoint, &testUnsuccessfulResult)
    store.InsertEndpointResult(&testEndpoint, &testUnsuccessfulResult)
    // Both results inserted are for the same endpoint, therefore, the count shouldn't have increased
    endpointStatuses, _ = store.GetAllEndpointStatuses(paging.NewEndpointStatusParams())
    if numberOfEndpointStatuses := len(endpointStatuses); numberOfEndpointStatuses != 1 {
@@ -140,8 +142,8 @@ func TestStore_HasEndpointStatusNewerThan(t *testing.T) {
    store, _ := NewStore(storage.DefaultMaximumNumberOfResults, storage.DefaultMaximumNumberOfEvents)
    defer store.Clear()
    defer store.Close()
    // Insert a result
    err := store.Insert(&testEndpoint, &testSuccessfulResult)
    // Insert a result
    err := store.InsertEndpointResult(&testEndpoint, &testSuccessfulResult)
    if err != nil {
        t.Fatalf("expected no error while inserting result, got %v", err)
    }
@@ -162,3 +164,931 @@ func TestStore_HasEndpointStatusNewerThan(t *testing.T) {
        t.Fatal("expected not to have a newer status, but did")
    }
}

// TestStore_MixedEndpointsAndSuites tests that having both endpoints and suites in the cache
// doesn't cause issues with core operations
func TestStore_MixedEndpointsAndSuites(t *testing.T) {
    // Helper function to create and populate a store with test data
    setupStore := func(t *testing.T) (*Store, *endpoint.Endpoint, *endpoint.Endpoint, *endpoint.Endpoint, *endpoint.Endpoint, *suite.Suite) {
        store, err := NewStore(100, 50)
        if err != nil {
            t.Fatal("expected no error, got", err)
        }

        // Create regular endpoints
        endpoint1 := &endpoint.Endpoint{
            Name:  "endpoint1",
            Group: "group1",
            URL:   "https://example.com/1",
        }
        endpoint2 := &endpoint.Endpoint{
            Name:  "endpoint2",
            Group: "group2",
            URL:   "https://example.com/2",
        }

        // Create suite endpoints (these would be part of a suite)
        suiteEndpoint1 := &endpoint.Endpoint{
            Name:  "suite-endpoint1",
            Group: "suite-group",
            URL:   "https://example.com/suite1",
        }
        suiteEndpoint2 := &endpoint.Endpoint{
            Name:  "suite-endpoint2",
            Group: "suite-group",
            URL:   "https://example.com/suite2",
        }

        // Create a suite
        testSuite := &suite.Suite{
            Name:  "test-suite",
            Group: "suite-group",
            Endpoints: []*endpoint.Endpoint{
                suiteEndpoint1,
                suiteEndpoint2,
            },
        }

        return store, endpoint1, endpoint2, suiteEndpoint1, suiteEndpoint2, testSuite
    }

    // Test 1: Insert endpoint results
    t.Run("InsertEndpointResults", func(t *testing.T) {
        store, endpoint1, endpoint2, suiteEndpoint1, suiteEndpoint2, _ := setupStore(t)
        // Insert regular endpoint results
        result1 := &endpoint.Result{
            Success:   true,
            Timestamp: time.Now(),
            Duration:  100 * time.Millisecond,
        }
        if err := store.InsertEndpointResult(endpoint1, result1); err != nil {
            t.Fatalf("failed to insert endpoint1 result: %v", err)
        }

        result2 := &endpoint.Result{
            Success:   false,
            Timestamp: time.Now(),
            Duration:  200 * time.Millisecond,
            Errors:    []string{"error"},
        }
        if err := store.InsertEndpointResult(endpoint2, result2); err != nil {
            t.Fatalf("failed to insert endpoint2 result: %v", err)
        }

        // Insert suite endpoint results
        suiteResult1 := &endpoint.Result{
            Success:   true,
            Timestamp: time.Now(),
            Duration:  50 * time.Millisecond,
        }
        if err := store.InsertEndpointResult(suiteEndpoint1, suiteResult1); err != nil {
            t.Fatalf("failed to insert suite endpoint1 result: %v", err)
        }

        suiteResult2 := &endpoint.Result{
            Success:   true,
            Timestamp: time.Now(),
            Duration:  75 * time.Millisecond,
        }
        if err := store.InsertEndpointResult(suiteEndpoint2, suiteResult2); err != nil {
            t.Fatalf("failed to insert suite endpoint2 result: %v", err)
        }
    })

    // Test 2: Insert suite result
    t.Run("InsertSuiteResult", func(t *testing.T) {
        store, _, _, _, _, testSuite := setupStore(t)
        timestamp := time.Now()
        suiteResult := &suite.Result{
            Name:      testSuite.Name,
            Group:     testSuite.Group,
            Success:   true,
            Timestamp: timestamp,
            Duration:  125 * time.Millisecond,
            EndpointResults: []*endpoint.Result{
                {Success: true, Duration: 50 * time.Millisecond},
                {Success: true, Duration: 75 * time.Millisecond},
            },
        }
        if err := store.InsertSuiteResult(testSuite, suiteResult); err != nil {
            t.Fatalf("failed to insert suite result: %v", err)
        }

        // Verify the suite result was stored correctly
        status, err := store.GetSuiteStatusByKey(testSuite.Key(), nil)
        if err != nil {
            t.Fatalf("failed to get suite status: %v", err)
        }
        if len(status.Results) != 1 {
            t.Errorf("expected 1 suite result, got %d", len(status.Results))
        }

        stored := status.Results[0]
        if stored.Name != testSuite.Name {
            t.Errorf("expected result name %s, got %s", testSuite.Name, stored.Name)
        }
        if stored.Group != testSuite.Group {
            t.Errorf("expected result group %s, got %s", testSuite.Group, stored.Group)
        }
        if !stored.Success {
            t.Error("expected result to be successful")
        }
        if stored.Duration != 125*time.Millisecond {
            t.Errorf("expected duration 125ms, got %v", stored.Duration)
        }
        if len(stored.EndpointResults) != 2 {
            t.Errorf("expected 2 endpoint results, got %d", len(stored.EndpointResults))
        }
    })

    // Test 3: GetAllEndpointStatuses should only return endpoints, not suites
    t.Run("GetAllEndpointStatuses", func(t *testing.T) {
        store, endpoint1, endpoint2, suiteEndpoint1, suiteEndpoint2, testSuite := setupStore(t)

        // Insert all test data
        store.InsertEndpointResult(endpoint1, &endpoint.Result{Success: true, Timestamp: time.Now(), Duration: 100 * time.Millisecond})
        store.InsertEndpointResult(endpoint2, &endpoint.Result{Success: false, Timestamp: time.Now(), Duration: 200 * time.Millisecond})
        store.InsertEndpointResult(suiteEndpoint1, &endpoint.Result{Success: true, Timestamp: time.Now(), Duration: 50 * time.Millisecond})
        store.InsertEndpointResult(suiteEndpoint2, &endpoint.Result{Success: true, Timestamp: time.Now(), Duration: 75 * time.Millisecond})
        store.InsertSuiteResult(testSuite, &suite.Result{
            Name: testSuite.Name, Group: testSuite.Group, Success: true,
            Timestamp: time.Now(), Duration: 125 * time.Millisecond,
        })
        statuses, err := store.GetAllEndpointStatuses(&paging.EndpointStatusParams{})
        if err != nil {
            t.Fatalf("failed to get all endpoint statuses: %v", err)
        }

        // Should have 4 endpoints (2 regular + 2 suite endpoints)
        if len(statuses) != 4 {
            t.Errorf("expected 4 endpoint statuses, got %d", len(statuses))
        }

        // Verify all are endpoint statuses with correct data, not suite statuses
        expectedEndpoints := map[string]struct {
            success  bool
            duration time.Duration
        }{
            "endpoint1":       {success: true, duration: 100 * time.Millisecond},
            "endpoint2":       {success: false, duration: 200 * time.Millisecond},
            "suite-endpoint1": {success: true, duration: 50 * time.Millisecond},
            "suite-endpoint2": {success: true, duration: 75 * time.Millisecond},
        }

        for _, status := range statuses {
            if status.Name == "" {
                t.Error("endpoint status should have a name")
            }
            // Make sure none of them are the suite itself
            if status.Name == "test-suite" {
                t.Error("suite should not appear in endpoint statuses")
            }

            // Verify detailed endpoint data
            expected, exists := expectedEndpoints[status.Name]
            if !exists {
                t.Errorf("unexpected endpoint name: %s", status.Name)
                continue
            }

            // Check that endpoint has results and verify the data
            if len(status.Results) != 1 {
                t.Errorf("endpoint %s should have 1 result, got %d", status.Name, len(status.Results))
                continue
            }

            result := status.Results[0]
            if result.Success != expected.success {
                t.Errorf("endpoint %s result success should be %v, got %v", status.Name, expected.success, result.Success)
            }
            if result.Duration != expected.duration {
                t.Errorf("endpoint %s result duration should be %v, got %v", status.Name, expected.duration, result.Duration)
            }

            delete(expectedEndpoints, status.Name)
        }
        if len(expectedEndpoints) > 0 {
            t.Errorf("missing expected endpoints: %v", expectedEndpoints)
        }
    })

    // Test 4: GetAllSuiteStatuses should only return suites, not endpoints
    t.Run("GetAllSuiteStatuses", func(t *testing.T) {
        store, endpoint1, _, _, _, testSuite := setupStore(t)

        // Insert test data
        store.InsertEndpointResult(endpoint1, &endpoint.Result{Success: true, Timestamp: time.Now(), Duration: 100 * time.Millisecond})
        timestamp := time.Now()
        store.InsertSuiteResult(testSuite, &suite.Result{
            Name: testSuite.Name, Group: testSuite.Group, Success: true,
            Timestamp: timestamp, Duration: 125 * time.Millisecond,
        })
        statuses, err := store.GetAllSuiteStatuses(&paging.SuiteStatusParams{})
        if err != nil {
            t.Fatalf("failed to get all suite statuses: %v", err)
        }

        // Should have 1 suite
        if len(statuses) != 1 {
            t.Errorf("expected 1 suite status, got %d", len(statuses))
        }

        if len(statuses) > 0 {
            suiteStatus := statuses[0]
            if suiteStatus.Name != "test-suite" {
                t.Errorf("expected suite name 'test-suite', got '%s'", suiteStatus.Name)
            }
            if suiteStatus.Group != "suite-group" {
                t.Errorf("expected suite group 'suite-group', got '%s'", suiteStatus.Group)
            }
            if len(suiteStatus.Results) != 1 {
                t.Errorf("expected 1 suite result, got %d", len(suiteStatus.Results))
            }
            if len(suiteStatus.Results) > 0 {
                result := suiteStatus.Results[0]
                if !result.Success {
                    t.Error("expected suite result to be successful")
                }
                if result.Duration != 125*time.Millisecond {
                    t.Errorf("expected suite result duration 125ms, got %v", result.Duration)
                }
            }
        }
    })

    // Test 5: GetEndpointStatusByKey should work for all endpoints
    t.Run("GetEndpointStatusByKey", func(t *testing.T) {
        store, endpoint1, _, suiteEndpoint1, _, _ := setupStore(t)

        // Insert test data with specific timestamps and durations
        timestamp1 := time.Now()
        timestamp2 := time.Now().Add(1 * time.Hour)
        store.InsertEndpointResult(endpoint1, &endpoint.Result{Success: true, Timestamp: timestamp1, Duration: 100 * time.Millisecond})
        store.InsertEndpointResult(suiteEndpoint1, &endpoint.Result{Success: false, Timestamp: timestamp2, Duration: 50 * time.Millisecond, Errors: []string{"suite error"}})

        // Test regular endpoints
        status1, err := store.GetEndpointStatusByKey(endpoint1.Key(), &paging.EndpointStatusParams{})
        if err != nil {
            t.Fatalf("failed to get endpoint1 status: %v", err)
        }
        if status1.Name != "endpoint1" {
            t.Errorf("expected endpoint1, got %s", status1.Name)
        }
        if status1.Group != "group1" {
            t.Errorf("expected group1, got %s", status1.Group)
        }
        if len(status1.Results) != 1 {
            t.Errorf("expected 1 result for endpoint1, got %d", len(status1.Results))
        }
        if len(status1.Results) > 0 {
            result := status1.Results[0]
            if !result.Success {
                t.Error("expected endpoint1 result to be successful")
            }
            if result.Duration != 100*time.Millisecond {
                t.Errorf("expected endpoint1 result duration 100ms, got %v", result.Duration)
            }
        }

        // Test suite endpoints
        suiteStatus1, err := store.GetEndpointStatusByKey(suiteEndpoint1.Key(), &paging.EndpointStatusParams{})
        if err != nil {
            t.Fatalf("failed to get suite endpoint1 status: %v", err)
        }
        if suiteStatus1.Name != "suite-endpoint1" {
            t.Errorf("expected suite-endpoint1, got %s", suiteStatus1.Name)
        }
        if suiteStatus1.Group != "suite-group" {
            t.Errorf("expected suite-group, got %s", suiteStatus1.Group)
        }
        if len(suiteStatus1.Results) != 1 {
            t.Errorf("expected 1 result for suite-endpoint1, got %d", len(suiteStatus1.Results))
        }
        if len(suiteStatus1.Results) > 0 {
            result := suiteStatus1.Results[0]
            if result.Success {
                t.Error("expected suite-endpoint1 result to be unsuccessful")
            }
            if result.Duration != 50*time.Millisecond {
                t.Errorf("expected suite-endpoint1 result duration 50ms, got %v", result.Duration)
            }
            if len(result.Errors) != 1 || result.Errors[0] != "suite error" {
                t.Errorf("expected suite-endpoint1 to have error 'suite error', got %v", result.Errors)
            }
        }
    })

    // Test 6: GetSuiteStatusByKey should work for suites
    t.Run("GetSuiteStatusByKey", func(t *testing.T) {
        store, _, _, _, _, testSuite := setupStore(t)

        // Insert suite result with endpoint results
        timestamp := time.Now()
        store.InsertSuiteResult(testSuite, &suite.Result{
            Name: testSuite.Name, Group: testSuite.Group, Success: false,
            Timestamp: timestamp, Duration: 125 * time.Millisecond,
            EndpointResults: []*endpoint.Result{
                {Success: true, Duration: 50 * time.Millisecond},
                {Success: false, Duration: 75 * time.Millisecond, Errors: []string{"endpoint failed"}},
            },
        })
        suiteStatus, err := store.GetSuiteStatusByKey(testSuite.Key(), &paging.SuiteStatusParams{})
        if err != nil {
            t.Fatalf("failed to get suite status: %v", err)
        }
        if suiteStatus.Name != "test-suite" {
            t.Errorf("expected test-suite, got %s", suiteStatus.Name)
        }
        if suiteStatus.Group != "suite-group" {
            t.Errorf("expected suite-group, got %s", suiteStatus.Group)
        }
        if len(suiteStatus.Results) != 1 {
            t.Errorf("expected 1 suite result, got %d", len(suiteStatus.Results))
        }

        if len(suiteStatus.Results) > 0 {
            result := suiteStatus.Results[0]
            if result.Success {
                t.Error("expected suite result to be unsuccessful")
            }
            if result.Duration != 125*time.Millisecond {
                t.Errorf("expected suite result duration 125ms, got %v", result.Duration)
            }
            if len(result.EndpointResults) != 2 {
                t.Errorf("expected 2 endpoint results, got %d", len(result.EndpointResults))
            }
            if len(result.EndpointResults) >= 2 {
                if !result.EndpointResults[0].Success {
                    t.Error("expected first endpoint result to be successful")
                }
                if result.EndpointResults[1].Success {
                    t.Error("expected second endpoint result to be unsuccessful")
                }
                if len(result.EndpointResults[1].Errors) != 1 || result.EndpointResults[1].Errors[0] != "endpoint failed" {
                    t.Errorf("expected second endpoint to have error 'endpoint failed', got %v", result.EndpointResults[1].Errors)
                }
            }
        }
    })

    // Test 7: DeleteAllEndpointStatusesNotInKeys should not affect suites
    t.Run("DeleteEndpointsNotInKeys", func(t *testing.T) {
        store, endpoint1, endpoint2, suiteEndpoint1, suiteEndpoint2, testSuite := setupStore(t)

        // Insert all test data
        store.InsertEndpointResult(endpoint1, &endpoint.Result{Success: true, Timestamp: time.Now(), Duration: 100 * time.Millisecond})
        store.InsertEndpointResult(endpoint2, &endpoint.Result{Success: false, Timestamp: time.Now(), Duration: 200 * time.Millisecond})
        store.InsertEndpointResult(suiteEndpoint1, &endpoint.Result{Success: true, Timestamp: time.Now(), Duration: 50 * time.Millisecond})
        store.InsertEndpointResult(suiteEndpoint2, &endpoint.Result{Success: true, Timestamp: time.Now(), Duration: 75 * time.Millisecond})
        store.InsertSuiteResult(testSuite, &suite.Result{
            Name: testSuite.Name, Group: testSuite.Group, Success: true,
            Timestamp: time.Now(), Duration: 125 * time.Millisecond,
        })
        // Keep only endpoint1 and suite-endpoint1
        keysToKeep := []string{endpoint1.Key(), suiteEndpoint1.Key()}
        deleted := store.DeleteAllEndpointStatusesNotInKeys(keysToKeep)

        // Should have deleted 2 endpoints (endpoint2 and suite-endpoint2)
        if deleted != 2 {
            t.Errorf("expected to delete 2 endpoints, deleted %d", deleted)
        }

        // Verify remaining endpoints
        statuses, _ := store.GetAllEndpointStatuses(&paging.EndpointStatusParams{})
        if len(statuses) != 2 {
            t.Errorf("expected 2 remaining endpoint statuses, got %d", len(statuses))
        }

        // Suite should still exist
        suiteStatuses, _ := store.GetAllSuiteStatuses(&paging.SuiteStatusParams{})
        if len(suiteStatuses) != 1 {
            t.Errorf("suite should not be affected by DeleteAllEndpointStatusesNotInKeys")
        }
    })

    // Test 8: DeleteAllSuiteStatusesNotInKeys should not affect endpoints
    t.Run("DeleteSuitesNotInKeys", func(t *testing.T) {
        store, endpoint1, _, _, _, testSuite := setupStore(t)

        // Insert test data
        store.InsertEndpointResult(endpoint1, &endpoint.Result{Success: true, Timestamp: time.Now(), Duration: 100 * time.Millisecond})
        store.InsertSuiteResult(testSuite, &suite.Result{
            Name: testSuite.Name, Group: testSuite.Group, Success: true,
            Timestamp: time.Now(), Duration: 125 * time.Millisecond,
        })
        // First, add another suite to test deletion
        anotherSuite := &suite.Suite{
            Name:  "another-suite",
            Group: "another-group",
        }
        anotherSuiteResult := &suite.Result{
            Name:      anotherSuite.Name,
            Group:     anotherSuite.Group,
            Success:   true,
            Timestamp: time.Now(),
            Duration:  100 * time.Millisecond,
        }
        store.InsertSuiteResult(anotherSuite, anotherSuiteResult)

        // Keep only the original test-suite
        deleted := store.DeleteAllSuiteStatusesNotInKeys([]string{testSuite.Key()})

        // Should have deleted 1 suite (another-suite)
        if deleted != 1 {
            t.Errorf("expected to delete 1 suite, deleted %d", deleted)
        }

        // Endpoints should still exist
        endpointStatuses, _ := store.GetAllEndpointStatuses(&paging.EndpointStatusParams{})
        if len(endpointStatuses) != 1 {
            t.Errorf("endpoints should not be affected by DeleteAllSuiteStatusesNotInKeys")
        }

        // Only one suite should remain
        suiteStatuses, _ := store.GetAllSuiteStatuses(&paging.SuiteStatusParams{})
        if len(suiteStatuses) != 1 {
            t.Errorf("expected 1 remaining suite, got %d", len(suiteStatuses))
        }
    })

    // Test 9: Clear should remove everything
    t.Run("Clear", func(t *testing.T) {
        store, endpoint1, _, _, _, testSuite := setupStore(t)

        // Insert test data
        store.InsertEndpointResult(endpoint1, &endpoint.Result{Success: true, Timestamp: time.Now(), Duration: 100 * time.Millisecond})
        store.InsertSuiteResult(testSuite, &suite.Result{
            Name: testSuite.Name, Group: testSuite.Group, Success: true,
            Timestamp: time.Now(), Duration: 125 * time.Millisecond,
        })
        store.Clear()

        // No endpoints should remain
        endpointStatuses, _ := store.GetAllEndpointStatuses(&paging.EndpointStatusParams{})
        if len(endpointStatuses) != 0 {
            t.Errorf("expected 0 endpoints after clear, got %d", len(endpointStatuses))
        }

        // No suites should remain
        suiteStatuses, _ := store.GetAllSuiteStatuses(&paging.SuiteStatusParams{})
        if len(suiteStatuses) != 0 {
            t.Errorf("expected 0 suites after clear, got %d", len(suiteStatuses))
        }
    })
}

// TestStore_EndpointStatusCastingSafety tests that type assertions are safe
func TestStore_EndpointStatusCastingSafety(t *testing.T) {
    store, err := NewStore(100, 50)
    if err != nil {
        t.Fatal("expected no error, got", err)
    }

    // Insert an endpoint
    ep := &endpoint.Endpoint{
        Name:  "test-endpoint",
        Group: "test",
        URL:   "https://example.com",
    }
    result := &endpoint.Result{
        Success:   true,
        Timestamp: time.Now(),
        Duration:  100 * time.Millisecond,
    }
    store.InsertEndpointResult(ep, result)

    // Insert a suite
    testSuite := &suite.Suite{
        Name:  "test-suite",
        Group: "test",
    }
    suiteResult := &suite.Result{
        Name:      testSuite.Name,
        Group:     testSuite.Group,
        Success:   true,
        Timestamp: time.Now(),
        Duration:  200 * time.Millisecond,
    }
    store.InsertSuiteResult(testSuite, suiteResult)

    // This should not panic even with mixed types in cache
    statuses, err := store.GetAllEndpointStatuses(&paging.EndpointStatusParams{})
    if err != nil {
        t.Fatalf("failed to get all endpoint statuses: %v", err)
    }

    // Should only have the endpoint, not the suite
    if len(statuses) != 1 {
        t.Errorf("expected 1 endpoint status, got %d", len(statuses))
    }
    if statuses[0].Name != "test-endpoint" {
        t.Errorf("expected test-endpoint, got %s", statuses[0].Name)
    }
}

func TestStore_MaximumLimits(t *testing.T) {
    // Use small limits to test trimming behavior
    maxResults := 5
    maxEvents := 3
    store, err := NewStore(maxResults, maxEvents)
    if err != nil {
        t.Fatal("expected no error, got", err)
    }
    defer store.Clear()

    t.Run("endpoint-result-limits", func(t *testing.T) {
        ep := &endpoint.Endpoint{Name: "test-endpoint", Group: "test", URL: "https://example.com"}

        // Insert more results than the maximum
        baseTime := time.Now().Add(-10 * time.Hour)
        for i := 0; i < maxResults*2; i++ {
            result := &endpoint.Result{
                Success:   i%2 == 0,
                Timestamp: baseTime.Add(time.Duration(i) * time.Hour),
                Duration:  time.Duration(i*10) * time.Millisecond,
            }
            err := store.InsertEndpointResult(ep, result)
            if err != nil {
                t.Fatalf("failed to insert result %d: %v", i, err)
            }
        }

        // Verify only maxResults are kept
        status, err := store.GetEndpointStatusByKey(ep.Key(), nil)
        if err != nil {
            t.Fatalf("failed to get endpoint status: %v", err)
        }
        if len(status.Results) != maxResults {
            t.Errorf("expected %d results after trimming, got %d", maxResults, len(status.Results))
        }

        // Verify the newest results are kept (should be results 5-9, not 0-4)
        if len(status.Results) > 0 {
            firstResult := status.Results[0]
            lastResult := status.Results[len(status.Results)-1]
            // First result should be older than last result due to append order
            if !lastResult.Timestamp.After(firstResult.Timestamp) {
                t.Error("expected results to be in chronological order")
            }
            // The last result should be the most recent one we inserted
            expectedLastDuration := time.Duration((maxResults*2-1)*10) * time.Millisecond
            if lastResult.Duration != expectedLastDuration {
                t.Errorf("expected last result duration %v, got %v", expectedLastDuration, lastResult.Duration)
            }
        }
    })

    t.Run("suite-result-limits", func(t *testing.T) {
        testSuite := &suite.Suite{Name: "test-suite", Group: "test"}

        // Insert more results than the maximum
        baseTime := time.Now().Add(-10 * time.Hour)
        for i := 0; i < maxResults*2; i++ {
            result := &suite.Result{
                Name:      testSuite.Name,
                Group:     testSuite.Group,
                Success:   i%2 == 0,
                Timestamp: baseTime.Add(time.Duration(i) * time.Hour),
                Duration:  time.Duration(i*10) * time.Millisecond,
            }
            err := store.InsertSuiteResult(testSuite, result)
            if err != nil {
                t.Fatalf("failed to insert suite result %d: %v", i, err)
            }
        }

        // Verify only maxResults are kept
        status, err := store.GetSuiteStatusByKey(testSuite.Key(), &paging.SuiteStatusParams{})
        if err != nil {
            t.Fatalf("failed to get suite status: %v", err)
        }
        if len(status.Results) != maxResults {
            t.Errorf("expected %d results after trimming, got %d", maxResults, len(status.Results))
        }

        // Verify the newest results are kept (should be results 5-9, not 0-4)
        if len(status.Results) > 0 {
            firstResult := status.Results[0]
            lastResult := status.Results[len(status.Results)-1]
            // First result should be older than last result due to append order
            if !lastResult.Timestamp.After(firstResult.Timestamp) {
                t.Error("expected results to be in chronological order")
            }
            // The last result should be the most recent one we inserted
            expectedLastDuration := time.Duration((maxResults*2-1)*10) * time.Millisecond
            if lastResult.Duration != expectedLastDuration {
                t.Errorf("expected last result duration %v, got %v", expectedLastDuration, lastResult.Duration)
            }
        }
    })
}

func TestSuiteResultOrdering(t *testing.T) {
	store, err := NewStore(10, 5)
	if err != nil {
		t.Fatal("expected no error, got", err)
	}
	defer store.Clear()

	testSuite := &suite.Suite{Name: "ordering-suite", Group: "test"}

	// Insert results with distinct timestamps
	baseTime := time.Now().Add(-5 * time.Hour)
	timestamps := make([]time.Time, 5)

	for i := 0; i < 5; i++ {
		timestamp := baseTime.Add(time.Duration(i) * time.Hour)
		timestamps[i] = timestamp
		result := &suite.Result{
			Name:      testSuite.Name,
			Group:     testSuite.Group,
			Success:   true,
			Timestamp: timestamp,
			Duration:  time.Duration(i*100) * time.Millisecond,
		}
		err := store.InsertSuiteResult(testSuite, result)
		if err != nil {
			t.Fatalf("failed to insert result %d: %v", i, err)
		}
	}

	t.Run("chronological-append-order", func(t *testing.T) {
		status, err := store.GetSuiteStatusByKey(testSuite.Key(), nil)
		if err != nil {
			t.Fatalf("failed to get suite status: %v", err)
		}

		// Verify results are in chronological order (oldest first due to append)
		for i := 0; i < len(status.Results)-1; i++ {
			current := status.Results[i]
			next := status.Results[i+1]
			if !next.Timestamp.After(current.Timestamp) {
				t.Errorf("result %d timestamp %v should be before result %d timestamp %v",
					i, current.Timestamp, i+1, next.Timestamp)
			}
		}

		// Verify specific timestamp order
		if !status.Results[0].Timestamp.Equal(timestamps[0]) {
			t.Errorf("first result timestamp should be %v, got %v", timestamps[0], status.Results[0].Timestamp)
		}
		if !status.Results[len(status.Results)-1].Timestamp.Equal(timestamps[len(timestamps)-1]) {
			t.Errorf("last result timestamp should be %v, got %v", timestamps[len(timestamps)-1], status.Results[len(status.Results)-1].Timestamp)
		}
	})

	t.Run("pagination-newest-first", func(t *testing.T) {
		// Test reverse pagination (newest first in paginated results)
		page1 := ShallowCopySuiteStatus(
			&suite.Status{
				Name: testSuite.Name, Group: testSuite.Group, Key: testSuite.Key(),
				Results: []*suite.Result{
					{Timestamp: timestamps[0], Duration: 0 * time.Millisecond},
					{Timestamp: timestamps[1], Duration: 100 * time.Millisecond},
					{Timestamp: timestamps[2], Duration: 200 * time.Millisecond},
					{Timestamp: timestamps[3], Duration: 300 * time.Millisecond},
					{Timestamp: timestamps[4], Duration: 400 * time.Millisecond},
				},
			},
			paging.NewSuiteStatusParams().WithPagination(1, 3),
		)

		if len(page1.Results) != 3 {
			t.Errorf("expected 3 results in page 1, got %d", len(page1.Results))
		}

		// With reverse pagination, page 1 should have the 3 newest results
		// That means results[2], results[3], results[4] from the original array
		if page1.Results[0].Duration != 200*time.Millisecond {
			t.Errorf("expected first result in page to have 200ms duration, got %v", page1.Results[0].Duration)
		}
		if page1.Results[2].Duration != 400*time.Millisecond {
			t.Errorf("expected last result in page to have 400ms duration, got %v", page1.Results[2].Duration)
		}
	})

	t.Run("trimming-preserves-newest", func(t *testing.T) {
		limitedStore, err := NewStore(3, 2) // Very small limits
		if err != nil {
			t.Fatal("expected no error, got", err)
		}
		defer limitedStore.Clear()

		smallSuite := &suite.Suite{Name: "small-suite", Group: "test"}

		// Insert 6 results; the store should keep only the newest 3
		for i := 0; i < 6; i++ {
			result := &suite.Result{
				Name:      smallSuite.Name,
				Group:     smallSuite.Group,
				Success:   true,
				Timestamp: baseTime.Add(time.Duration(i) * time.Hour),
				Duration:  time.Duration(i*50) * time.Millisecond,
			}
			err := limitedStore.InsertSuiteResult(smallSuite, result)
			if err != nil {
				t.Fatalf("failed to insert result %d: %v", i, err)
			}
		}

		status, err := limitedStore.GetSuiteStatusByKey(smallSuite.Key(), nil)
		if err != nil {
			t.Fatalf("failed to get suite status: %v", err)
		}

		if len(status.Results) != 3 {
			t.Errorf("expected 3 results after trimming, got %d", len(status.Results))
		}

		// Should have results 3, 4, 5 (the newest ones)
		expectedDurations := []time.Duration{150 * time.Millisecond, 200 * time.Millisecond, 250 * time.Millisecond}
		for i, expectedDuration := range expectedDurations {
			if status.Results[i].Duration != expectedDuration {
				t.Errorf("result %d should have duration %v, got %v", i, expectedDuration, status.Results[i].Duration)
			}
		}
	})
}

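The `trimming-preserves-newest` subtest above relies on the store dropping the oldest entries once the configured maximum is exceeded. A minimal append-and-slice sketch of that semantic (illustrative only, not Gatus's actual store code):

```go
package main

import "fmt"

// trimToNewest appends r to results and, if the cap is exceeded,
// drops the oldest entries so only the newest maxResults remain.
func trimToNewest(results []int, r int, maxResults int) []int {
	results = append(results, r)
	if len(results) > maxResults {
		results = results[len(results)-maxResults:]
	}
	return results
}

func main() {
	var results []int
	for i := 0; i < 6; i++ {
		results = trimToNewest(results, i, 3)
	}
	fmt.Println(results) // [3 4 5] — only the three newest survive
}
```

This matches the test's expectation that inserting 6 results into a store capped at 3 leaves exactly results 3, 4, and 5.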
func TestStore_ConcurrentAccess(t *testing.T) {
	store, err := NewStore(100, 50)
	if err != nil {
		t.Fatal("expected no error, got", err)
	}
	defer store.Clear()

	t.Run("concurrent-endpoint-insertions", func(t *testing.T) {
		var wg sync.WaitGroup
		numGoroutines := 10
		resultsPerGoroutine := 5

		// Create endpoints for concurrent testing
		endpoints := make([]*endpoint.Endpoint, numGoroutines)
		for i := 0; i < numGoroutines; i++ {
			endpoints[i] = &endpoint.Endpoint{
				Name:  "endpoint-" + string(rune('A'+i)),
				Group: "concurrent",
				URL:   "https://example.com/" + string(rune('A'+i)),
			}
		}

		// Concurrently insert results for different endpoints
		for i := 0; i < numGoroutines; i++ {
			wg.Add(1)
			go func(endpointIndex int) {
				defer wg.Done()
				ep := endpoints[endpointIndex]
				for j := 0; j < resultsPerGoroutine; j++ {
					result := &endpoint.Result{
						Success:   j%2 == 0,
						Timestamp: time.Now().Add(time.Duration(j) * time.Minute),
						Duration:  time.Duration(j*10) * time.Millisecond,
					}
					if err := store.InsertEndpointResult(ep, result); err != nil {
						t.Errorf("failed to insert result for endpoint %d: %v", endpointIndex, err)
					}
				}
			}(i)
		}

		wg.Wait()

		// Verify all endpoints were created and have correct result counts
		statuses, err := store.GetAllEndpointStatuses(&paging.EndpointStatusParams{})
		if err != nil {
			t.Fatalf("failed to get all endpoint statuses: %v", err)
		}
		if len(statuses) != numGoroutines {
			t.Errorf("expected %d endpoint statuses, got %d", numGoroutines, len(statuses))
		}

		// Verify each endpoint has the correct number of results
		for _, status := range statuses {
			if len(status.Results) != resultsPerGoroutine {
				t.Errorf("endpoint %s should have %d results, got %d", status.Name, resultsPerGoroutine, len(status.Results))
			}
		}
	})

	t.Run("concurrent-suite-insertions", func(t *testing.T) {
		var wg sync.WaitGroup
		numGoroutines := 5
		resultsPerGoroutine := 3

		// Create suites for concurrent testing
		suites := make([]*suite.Suite, numGoroutines)
		for i := 0; i < numGoroutines; i++ {
			suites[i] = &suite.Suite{
				Name:  "suite-" + string(rune('A'+i)),
				Group: "concurrent",
			}
		}

		// Concurrently insert results for different suites
		for i := 0; i < numGoroutines; i++ {
			wg.Add(1)
			go func(suiteIndex int) {
				defer wg.Done()
				su := suites[suiteIndex]
				for j := 0; j < resultsPerGoroutine; j++ {
					result := &suite.Result{
						Name:      su.Name,
						Group:     su.Group,
						Success:   j%2 == 0,
						Timestamp: time.Now().Add(time.Duration(j) * time.Minute),
						Duration:  time.Duration(j*50) * time.Millisecond,
					}
					if err := store.InsertSuiteResult(su, result); err != nil {
						t.Errorf("failed to insert result for suite %d: %v", suiteIndex, err)
					}
				}
			}(i)
		}

		wg.Wait()

		// Verify all suites were created and have correct result counts
		statuses, err := store.GetAllSuiteStatuses(&paging.SuiteStatusParams{})
		if err != nil {
			t.Fatalf("failed to get all suite statuses: %v", err)
		}
		if len(statuses) != numGoroutines {
			t.Errorf("expected %d suite statuses, got %d", numGoroutines, len(statuses))
		}

		// Verify each suite has the correct number of results
		for _, status := range statuses {
			if len(status.Results) != resultsPerGoroutine {
				t.Errorf("suite %s should have %d results, got %d", status.Name, resultsPerGoroutine, len(status.Results))
			}
		}
	})

	t.Run("concurrent-mixed-operations", func(t *testing.T) {
		var wg sync.WaitGroup

		// Setup test data
		ep := &endpoint.Endpoint{Name: "mixed-endpoint", Group: "test", URL: "https://example.com"}
		testSuite := &suite.Suite{Name: "mixed-suite", Group: "test"}

		// Concurrent endpoint insertions
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := 0; i < 5; i++ {
				result := &endpoint.Result{
					Success:   true,
					Timestamp: time.Now(),
					Duration:  time.Duration(i*10) * time.Millisecond,
				}
				store.InsertEndpointResult(ep, result)
			}
		}()

		// Concurrent suite insertions
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := 0; i < 5; i++ {
				result := &suite.Result{
					Name:      testSuite.Name,
					Group:     testSuite.Group,
					Success:   true,
					Timestamp: time.Now(),
					Duration:  time.Duration(i*20) * time.Millisecond,
				}
				store.InsertSuiteResult(testSuite, result)
			}
		}()

		// Concurrent reads
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := 0; i < 10; i++ {
				store.GetAllEndpointStatuses(&paging.EndpointStatusParams{})
				store.GetAllSuiteStatuses(&paging.SuiteStatusParams{})
				time.Sleep(1 * time.Millisecond)
			}
		}()

		wg.Wait()

		// Verify final state is consistent
		endpointStatuses, err := store.GetAllEndpointStatuses(&paging.EndpointStatusParams{})
		if err != nil {
			t.Fatalf("failed to get endpoint statuses after concurrent operations: %v", err)
		}
		if len(endpointStatuses) == 0 {
			t.Error("expected at least one endpoint status after concurrent operations")
		}

		suiteStatuses, err := store.GetAllSuiteStatuses(&paging.SuiteStatusParams{})
		if err != nil {
			t.Fatalf("failed to get suite statuses after concurrent operations: %v", err)
		}
		if len(suiteStatuses) == 0 {
			t.Error("expected at least one suite status after concurrent operations")
		}
	})
}

@@ -2,6 +2,7 @@ package memory

import (
	"github.com/TwiN/gatus/v5/config/endpoint"
	"github.com/TwiN/gatus/v5/config/suite"
	"github.com/TwiN/gatus/v5/storage/store/common/paging"
)

@@ -14,19 +15,46 @@ func ShallowCopyEndpointStatus(ss *endpoint.Status, params *paging.EndpointStatu
		Key:    ss.Key,
		Uptime: endpoint.NewUptime(),
	}
	numberOfResults := len(ss.Results)
	resultsStart, resultsEnd := getStartAndEndIndex(numberOfResults, params.ResultsPage, params.ResultsPageSize)
	if resultsStart < 0 || resultsEnd < 0 {
		shallowCopy.Results = []*endpoint.Result{}
	if params == nil || (params.ResultsPage == 0 && params.ResultsPageSize == 0 && params.EventsPage == 0 && params.EventsPageSize == 0) {
		shallowCopy.Results = ss.Results
		shallowCopy.Events = ss.Events
	} else {
		shallowCopy.Results = ss.Results[resultsStart:resultsEnd]
		numberOfResults := len(ss.Results)
		resultsStart, resultsEnd := getStartAndEndIndex(numberOfResults, params.ResultsPage, params.ResultsPageSize)
		if resultsStart < 0 || resultsEnd < 0 {
			shallowCopy.Results = []*endpoint.Result{}
		} else {
			shallowCopy.Results = ss.Results[resultsStart:resultsEnd]
		}
		numberOfEvents := len(ss.Events)
		eventsStart, eventsEnd := getStartAndEndIndex(numberOfEvents, params.EventsPage, params.EventsPageSize)
		if eventsStart < 0 || eventsEnd < 0 {
			shallowCopy.Events = []*endpoint.Event{}
		} else {
			shallowCopy.Events = ss.Events[eventsStart:eventsEnd]
		}
	}
	numberOfEvents := len(ss.Events)
	eventsStart, eventsEnd := getStartAndEndIndex(numberOfEvents, params.EventsPage, params.EventsPageSize)
	if eventsStart < 0 || eventsEnd < 0 {
		shallowCopy.Events = []*endpoint.Event{}
	return shallowCopy
}

// ShallowCopySuiteStatus returns a shallow copy of a suite Status with only the results
// within the range defined by the page and pageSize parameters
func ShallowCopySuiteStatus(ss *suite.Status, params *paging.SuiteStatusParams) *suite.Status {
	shallowCopy := &suite.Status{
		Name:  ss.Name,
		Group: ss.Group,
		Key:   ss.Key,
	}
	if params == nil || (params.Page == 0 && params.PageSize == 0) {
		shallowCopy.Results = ss.Results
	} else {
		shallowCopy.Events = ss.Events[eventsStart:eventsEnd]
		numberOfResults := len(ss.Results)
		resultsStart, resultsEnd := getStartAndEndIndex(numberOfResults, params.Page, params.PageSize)
		if resultsStart < 0 || resultsEnd < 0 {
			shallowCopy.Results = []*suite.Result{}
		} else {
			shallowCopy.Results = ss.Results[resultsStart:resultsEnd]
		}
	}
	return shallowCopy
}

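`ShallowCopySuiteStatus` above delegates the index math to `getStartAndEndIndex`, which is not shown in this diff. A plausible implementation consistent with the reverse-pagination behavior the tests expect (page 1 is the newest results; invalid or out-of-range pages yield `-1, -1`) — this body is an assumption, not the actual helper:

```go
package main

import "fmt"

// getStartAndEndIndex returns slice bounds for the requested page,
// where page 1 holds the newest results (the end of the slice).
// It returns -1, -1 for invalid pages or page sizes, and for pages
// that fall entirely outside the available results.
func getStartAndEndIndex(numberOfResults, page, pageSize int) (int, int) {
	if page < 1 || pageSize < 1 {
		return -1, -1
	}
	end := numberOfResults - (page-1)*pageSize
	if end <= 0 {
		return -1, -1
	}
	start := end - pageSize
	if start < 0 {
		start = 0 // partial last page
	}
	return start, end
}

func main() {
	// 25 results, page 1 of size 10 -> the 10 newest (indices 15..25)
	fmt.Println(getStartAndEndIndex(25, 1, 10)) // 15 25
	// page 3 is a partial page of 5 (indices 0..5)
	fmt.Println(getStartAndEndIndex(25, 3, 10)) // 0 5
	// page 4 is beyond the available data
	fmt.Println(getStartAndEndIndex(25, 4, 10)) // -1 -1
}
```

These cases mirror the `first-page`, `last-partial-page`, and `beyond-available-pages` subtests in the suite status tests.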
@@ -5,6 +5,7 @@ import (
	"time"

	"github.com/TwiN/gatus/v5/config/endpoint"
	"github.com/TwiN/gatus/v5/config/suite"
	"github.com/TwiN/gatus/v5/storage"
	"github.com/TwiN/gatus/v5/storage/store/common/paging"
)

@@ -64,3 +65,108 @@ func TestShallowCopyEndpointStatus(t *testing.T) {
		t.Error("expected to have 25 results, because there's only 25 results")
	}
}

func TestShallowCopySuiteStatus(t *testing.T) {
	testSuite := &suite.Suite{Name: "test-suite", Group: "test-group"}
	suiteStatus := &suite.Status{
		Name:    testSuite.Name,
		Group:   testSuite.Group,
		Key:     testSuite.Key(),
		Results: []*suite.Result{},
	}

	ts := time.Now().Add(-25 * time.Hour)
	for i := 0; i < 25; i++ {
		result := &suite.Result{
			Name:      testSuite.Name,
			Group:     testSuite.Group,
			Success:   i%2 == 0,
			Timestamp: ts,
			Duration:  time.Duration(i*10) * time.Millisecond,
		}
		suiteStatus.Results = append(suiteStatus.Results, result)
		ts = ts.Add(time.Hour)
	}

	t.Run("invalid-page-negative", func(t *testing.T) {
		result := ShallowCopySuiteStatus(suiteStatus, paging.NewSuiteStatusParams().WithPagination(-1, 10))
		if len(result.Results) != 0 {
			t.Errorf("expected 0 results for negative page, got %d", len(result.Results))
		}
	})

	t.Run("invalid-page-zero", func(t *testing.T) {
		result := ShallowCopySuiteStatus(suiteStatus, paging.NewSuiteStatusParams().WithPagination(0, 10))
		if len(result.Results) != 0 {
			t.Errorf("expected 0 results for zero page, got %d", len(result.Results))
		}
	})

	t.Run("invalid-pagesize-negative", func(t *testing.T) {
		result := ShallowCopySuiteStatus(suiteStatus, paging.NewSuiteStatusParams().WithPagination(1, -1))
		if len(result.Results) != 0 {
			t.Errorf("expected 0 results for negative page size, got %d", len(result.Results))
		}
	})

	t.Run("zero-pagesize", func(t *testing.T) {
		result := ShallowCopySuiteStatus(suiteStatus, paging.NewSuiteStatusParams().WithPagination(1, 0))
		if len(result.Results) != 0 {
			t.Errorf("expected 0 results for zero page size, got %d", len(result.Results))
		}
	})

	t.Run("nil-params", func(t *testing.T) {
		result := ShallowCopySuiteStatus(suiteStatus, nil)
		if len(result.Results) != 25 {
			t.Errorf("expected 25 results for nil params, got %d", len(result.Results))
		}
	})

	t.Run("zero-params", func(t *testing.T) {
		result := ShallowCopySuiteStatus(suiteStatus, &paging.SuiteStatusParams{Page: 0, PageSize: 0})
		if len(result.Results) != 25 {
			t.Errorf("expected 25 results for zero-value params, got %d", len(result.Results))
		}
	})

	t.Run("first-page", func(t *testing.T) {
		result := ShallowCopySuiteStatus(suiteStatus, paging.NewSuiteStatusParams().WithPagination(1, 10))
		if len(result.Results) != 10 {
			t.Errorf("expected 10 results for page 1, size 10, got %d", len(result.Results))
		}
		// Verify newest results are returned (reverse pagination)
		if len(result.Results) > 0 && !result.Results[len(result.Results)-1].Timestamp.After(result.Results[0].Timestamp) {
			t.Error("expected newest result to be at the end")
		}
	})

	t.Run("second-page", func(t *testing.T) {
		result := ShallowCopySuiteStatus(suiteStatus, paging.NewSuiteStatusParams().WithPagination(2, 10))
		if len(result.Results) != 10 {
			t.Errorf("expected 10 results for page 2, size 10, got %d", len(result.Results))
		}
	})

	t.Run("last-partial-page", func(t *testing.T) {
		result := ShallowCopySuiteStatus(suiteStatus, paging.NewSuiteStatusParams().WithPagination(3, 10))
		if len(result.Results) != 5 {
			t.Errorf("expected 5 results for page 3, size 10, got %d", len(result.Results))
		}
	})

	t.Run("beyond-available-pages", func(t *testing.T) {
		result := ShallowCopySuiteStatus(suiteStatus, paging.NewSuiteStatusParams().WithPagination(4, 10))
		if len(result.Results) != 0 {
			t.Errorf("expected 0 results for page beyond available data, got %d", len(result.Results))
		}
	})

	t.Run("large-page-size", func(t *testing.T) {
		result := ShallowCopySuiteStatus(suiteStatus, paging.NewSuiteStatusParams().WithPagination(1, 100))
		if len(result.Results) != 25 {
			t.Errorf("expected 25 results for large page size, got %d", len(result.Results))
		}
	})
}

@@ -38,7 +38,8 @@ func (s *Store) createPostgresSchema() error {
			hostname TEXT NOT NULL,
			ip TEXT NOT NULL,
			duration BIGINT NOT NULL,
			timestamp TIMESTAMP NOT NULL
			timestamp TIMESTAMP NOT NULL,
			suite_result_id BIGINT REFERENCES suite_results(suite_result_id) ON DELETE CASCADE
		)
	`)
	if err != nil {
@@ -79,7 +80,44 @@ func (s *Store) createPostgresSchema() error {
			UNIQUE(endpoint_id, configuration_checksum)
		)
	`)
	if err != nil {
		return err
	}
	// Create suite tables
	_, err = s.db.Exec(`
		CREATE TABLE IF NOT EXISTS suites (
			suite_id BIGSERIAL PRIMARY KEY,
			suite_key TEXT UNIQUE,
			suite_name TEXT NOT NULL,
			suite_group TEXT NOT NULL,
			UNIQUE(suite_name, suite_group)
		)
	`)
	if err != nil {
		return err
	}
	_, err = s.db.Exec(`
		CREATE TABLE IF NOT EXISTS suite_results (
			suite_result_id BIGSERIAL PRIMARY KEY,
			suite_id BIGINT NOT NULL REFERENCES suites(suite_id) ON DELETE CASCADE,
			success BOOLEAN NOT NULL,
			errors TEXT NOT NULL,
			duration BIGINT NOT NULL,
			timestamp TIMESTAMP NOT NULL
		)
	`)
	if err != nil {
		return err
	}
	// Create index for suite_results
	_, err = s.db.Exec(`
		CREATE INDEX IF NOT EXISTS suite_results_suite_id_idx ON suite_results (suite_id);
	`)
	// Silent table modifications TODO: Remove this in v6.0.0
	_, _ = s.db.Exec(`ALTER TABLE endpoint_results ADD IF NOT EXISTS domain_expiration BIGINT NOT NULL DEFAULT 0`)
	// Add suite_result_id to endpoint_results table for suite endpoint linkage
	_, _ = s.db.Exec(`ALTER TABLE endpoint_results ADD COLUMN IF NOT EXISTS suite_result_id BIGINT REFERENCES suite_results(suite_result_id) ON DELETE CASCADE`)
	// Create index for suite_result_id
	_, _ = s.db.Exec(`CREATE INDEX IF NOT EXISTS endpoint_results_suite_result_id_idx ON endpoint_results(suite_result_id)`)
	return err
}

@@ -38,7 +38,8 @@ func (s *Store) createSQLiteSchema() error {
			hostname TEXT NOT NULL,
			ip TEXT NOT NULL,
			duration INTEGER NOT NULL,
			timestamp TIMESTAMP NOT NULL
			timestamp TIMESTAMP NOT NULL,
			suite_result_id INTEGER REFERENCES suite_results(suite_result_id) ON DELETE CASCADE
		)
	`)
	if err != nil {
@@ -82,6 +83,32 @@ func (s *Store) createSQLiteSchema() error {
	if err != nil {
		return err
	}
	// Create suite tables
	_, err = s.db.Exec(`
		CREATE TABLE IF NOT EXISTS suites (
			suite_id INTEGER PRIMARY KEY,
			suite_key TEXT UNIQUE,
			suite_name TEXT NOT NULL,
			suite_group TEXT NOT NULL,
			UNIQUE(suite_name, suite_group)
		)
	`)
	if err != nil {
		return err
	}
	_, err = s.db.Exec(`
		CREATE TABLE IF NOT EXISTS suite_results (
			suite_result_id INTEGER PRIMARY KEY,
			suite_id INTEGER NOT NULL REFERENCES suites(suite_id) ON DELETE CASCADE,
			success INTEGER NOT NULL,
			errors TEXT NOT NULL,
			duration INTEGER NOT NULL,
			timestamp TIMESTAMP NOT NULL
		)
	`)
	if err != nil {
		return err
	}
	// Create indices for performance reasons
	_, err = s.db.Exec(`
		CREATE INDEX IF NOT EXISTS endpoint_results_endpoint_id_idx ON endpoint_results (endpoint_id);
@@ -98,7 +125,23 @@ func (s *Store) createSQLiteSchema() error {
	_, err = s.db.Exec(`
		CREATE INDEX IF NOT EXISTS endpoint_result_conditions_endpoint_result_id_idx ON endpoint_result_conditions (endpoint_result_id);
	`)
	if err != nil {
		return err
	}
	// Create index for suite_results
	_, err = s.db.Exec(`
		CREATE INDEX IF NOT EXISTS suite_results_suite_id_idx ON suite_results (suite_id);
	`)
	if err != nil {
		return err
	}
	// Silent table modifications TODO: Remove this in v6.0.0
	_, _ = s.db.Exec(`ALTER TABLE endpoint_results ADD domain_expiration INTEGER NOT NULL DEFAULT 0`)
	// Add suite_result_id to endpoint_results table for suite endpoint linkage
	_, _ = s.db.Exec(`ALTER TABLE endpoint_results ADD suite_result_id INTEGER REFERENCES suite_results(suite_result_id) ON DELETE CASCADE`)
	// Create index for suite_result_id
	_, _ = s.db.Exec(`CREATE INDEX IF NOT EXISTS endpoint_results_suite_result_id_idx ON endpoint_results(suite_result_id)`)
	// Note: SQLite doesn't support DROP COLUMN in older versions, so we skip this cleanup
	// The suite_id column in the endpoints table will remain but go unused
	return err
}

@@ -10,6 +10,8 @@ import (
|
||||
|
||||
"github.com/TwiN/gatus/v5/alerting/alert"
|
||||
"github.com/TwiN/gatus/v5/config/endpoint"
|
||||
"github.com/TwiN/gatus/v5/config/key"
|
||||
"github.com/TwiN/gatus/v5/config/suite"
|
||||
"github.com/TwiN/gatus/v5/storage/store/common"
|
||||
"github.com/TwiN/gatus/v5/storage/store/common/paging"
|
||||
"github.com/TwiN/gocache/v2"
|
||||
@@ -138,7 +140,7 @@ func (s *Store) GetAllEndpointStatuses(params *paging.EndpointStatusParams) ([]*
|
||||
|
||||
// GetEndpointStatus returns the endpoint status for a given endpoint name in the given group
|
||||
func (s *Store) GetEndpointStatus(groupName, endpointName string, params *paging.EndpointStatusParams) (*endpoint.Status, error) {
|
||||
return s.GetEndpointStatusByKey(endpoint.ConvertGroupAndEndpointNameToKey(groupName, endpointName), params)
|
||||
return s.GetEndpointStatusByKey(key.ConvertGroupAndNameToKey(groupName, endpointName), params)
|
||||
}
|
||||
|
||||
// GetEndpointStatusByKey returns the endpoint status for a given key
|
||||
@@ -233,8 +235,8 @@ func (s *Store) GetHourlyAverageResponseTimeByKey(key string, from, to time.Time
|
||||
return hourlyAverageResponseTimes, nil
|
||||
}
|
||||
|
||||
// Insert adds the observed result for the specified endpoint into the store
|
||||
func (s *Store) Insert(ep *endpoint.Endpoint, result *endpoint.Result) error {
|
||||
// InsertEndpointResult adds the observed result for the specified endpoint into the store
|
||||
func (s *Store) InsertEndpointResult(ep *endpoint.Endpoint, result *endpoint.Result) error {
|
||||
tx, err := s.db.Begin()
|
||||
if err != nil {
|
||||
return err
|
||||
@@ -245,12 +247,12 @@ func (s *Store) Insert(ep *endpoint.Endpoint, result *endpoint.Result) error {
|
||||
// Endpoint doesn't exist in the database, insert it
|
||||
if endpointID, err = s.insertEndpoint(tx, ep); err != nil {
|
||||
_ = tx.Rollback()
|
||||
logr.Errorf("[sql.Insert] Failed to create endpoint with key=%s: %s", ep.Key(), err.Error())
|
||||
logr.Errorf("[sql.InsertEndpointResult] Failed to create endpoint with key=%s: %s", ep.Key(), err.Error())
|
||||
return err
|
||||
}
|
||||
} else {
|
||||
_ = tx.Rollback()
|
||||
logr.Errorf("[sql.Insert] Failed to retrieve id of endpoint with key=%s: %s", ep.Key(), err.Error())
|
||||
logr.Errorf("[sql.InsertEndpointResult] Failed to retrieve id of endpoint with key=%s: %s", ep.Key(), err.Error())
|
||||
return err
|
||||
}
|
||||
}
|
||||
@@ -266,7 +268,7 @@ func (s *Store) Insert(ep *endpoint.Endpoint, result *endpoint.Result) error {
|
||||
numberOfEvents, err := s.getNumberOfEventsByEndpointID(tx, endpointID)
|
||||
if err != nil {
|
||||
// Silently fail
|
||||
logr.Errorf("[sql.Insert] Failed to retrieve total number of events for endpoint with key=%s: %s", ep.Key(), err.Error())
|
||||
logr.Errorf("[sql.InsertEndpointResult] Failed to retrieve total number of events for endpoint with key=%s: %s", ep.Key(), err.Error())
|
||||
}
|
||||
if numberOfEvents == 0 {
|
||||
// There's no events yet, which means we need to add the EventStart and the first healthy/unhealthy event
|
||||
@@ -276,18 +278,18 @@ func (s *Store) Insert(ep *endpoint.Endpoint, result *endpoint.Result) error {
|
||||
})
|
||||
if err != nil {
|
||||
// Silently fail
|
||||
logr.Errorf("[sql.Insert] Failed to insert event=%s for endpoint with key=%s: %s", endpoint.EventStart, ep.Key(), err.Error())
|
||||
logr.Errorf("[sql.InsertEndpointResult] Failed to insert event=%s for endpoint with key=%s: %s", endpoint.EventStart, ep.Key(), err.Error())
|
||||
}
|
||||
event := endpoint.NewEventFromResult(result)
|
||||
if err = s.insertEndpointEvent(tx, endpointID, event); err != nil {
|
||||
// Silently fail
|
||||
logr.Errorf("[sql.Insert] Failed to insert event=%s for endpoint with key=%s: %s", event.Type, ep.Key(), err.Error())
|
||||
logr.Errorf("[sql.InsertEndpointResult] Failed to insert event=%s for endpoint with key=%s: %s", event.Type, ep.Key(), err.Error())
|
||||
}
|
||||
} else {
|
||||
// Get the success value of the previous result
|
||||
var lastResultSuccess bool
|
||||
if lastResultSuccess, err = s.getLastEndpointResultSuccessValue(tx, endpointID); err != nil {
|
||||
logr.Errorf("[sql.Insert] Failed to retrieve outcome of previous result for endpoint with key=%s: %s", ep.Key(), err.Error())
|
||||
logr.Errorf("[sql.InsertEndpointResult] Failed to retrieve outcome of previous result for endpoint with key=%s: %s", ep.Key(), err.Error())
|
||||
} else {
|
||||
// If we managed to retrieve the outcome of the previous result, we'll compare it with the new result.
|
||||
// If the final outcome (success or failure) of the previous and the new result aren't the same, it means
|
||||
@@ -297,7 +299,7 @@ func (s *Store) Insert(ep *endpoint.Endpoint, result *endpoint.Result) error {
|
||||
event := endpoint.NewEventFromResult(result)
|
||||
if err = s.insertEndpointEvent(tx, endpointID, event); err != nil {
|
||||
// Silently fail
|
||||
logr.Errorf("[sql.Insert] Failed to insert event=%s for endpoint with key=%s: %s", event.Type, ep.Key(), err.Error())
|
||||
logr.Errorf("[sql.InsertEndpointResult] Failed to insert event=%s for endpoint with key=%s: %s", event.Type, ep.Key(), err.Error())
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -306,42 +308,42 @@ func (s *Store) Insert(ep *endpoint.Endpoint, result *endpoint.Result) error {
|
||||
// (since we're only deleting MaximumNumberOfEvents at a time instead of 1)
|
||||
if numberOfEvents > int64(s.maximumNumberOfEvents+eventsAboveMaximumCleanUpThreshold) {
|
||||
if err = s.deleteOldEndpointEvents(tx, endpointID); err != nil {
|
||||
logr.Errorf("[sql.Insert] Failed to delete old events for endpoint with key=%s: %s", ep.Key(), err.Error())
|
||||
logr.Errorf("[sql.InsertEndpointResult] Failed to delete old events for endpoint with key=%s: %s", ep.Key(), err.Error())
|
||||
}
|
||||
}
|
||||
}
|
||||
// Second, we need to insert the result.
|
||||
if err = s.insertEndpointResult(tx, endpointID, result); err != nil {
|
||||
logr.Errorf("[sql.Insert] Failed to insert result for endpoint with key=%s: %s", ep.Key(), err.Error())
|
||||
logr.Errorf("[sql.InsertEndpointResult] Failed to insert result for endpoint with key=%s: %s", ep.Key(), err.Error())
|
||||
_ = tx.Rollback() // If we can't insert the result, we'll rollback now since there's no point continuing
|
||||
return err
|
||||
}
|
||||
// Clean up old results
|
||||
numberOfResults, err := s.getNumberOfResultsByEndpointID(tx, endpointID)
|
||||
if err != nil {
|
||||
logr.Errorf("[sql.Insert] Failed to retrieve total number of results for endpoint with key=%s: %s", ep.Key(), err.Error())
|
||||
logr.Errorf("[sql.InsertEndpointResult] Failed to retrieve total number of results for endpoint with key=%s: %s", ep.Key(), err.Error())
|
||||
} else {
|
||||
if numberOfResults > int64(s.maximumNumberOfResults+resultsAboveMaximumCleanUpThreshold) {
|
||||
if err = s.deleteOldEndpointResults(tx, endpointID); err != nil {
|
||||
logr.Errorf("[sql.Insert] Failed to delete old results for endpoint with key=%s: %s", ep.Key(), err.Error())
|
||||
logr.Errorf("[sql.InsertEndpointResult] Failed to delete old results for endpoint with key=%s: %s", ep.Key(), err.Error())
|
||||
}
|
||||
}
|
||||
}
|
||||
// Finally, we need to insert the uptime data.
|
||||
// Because the uptime data significantly outlives the results, we can't rely on the results for determining the uptime
|
||||
if err = s.updateEndpointUptime(tx, endpointID, result); err != nil {
|
||||
logr.Errorf("[sql.Insert] Failed to update uptime for endpoint with key=%s: %s", ep.Key(), err.Error())
|
||||
logr.Errorf("[sql.InsertEndpointResult] Failed to update uptime for endpoint with key=%s: %s", ep.Key(), err.Error())
|
||||
}
|
||||
// Merge hourly uptime entries that can be merged into daily entries and clean up old uptime entries
|
||||
numberOfUptimeEntries, err := s.getNumberOfUptimeEntriesByEndpointID(tx, endpointID)
|
||||
if err != nil {
|
||||
logr.Errorf("[sql.Insert] Failed to retrieve total number of uptime entries for endpoint with key=%s: %s", ep.Key(), err.Error())
|
||||
logr.Errorf("[sql.InsertEndpointResult] Failed to retrieve total number of uptime entries for endpoint with key=%s: %s", ep.Key(), err.Error())
|
||||
} else {
|
||||
// Merge older hourly uptime entries into daily uptime entries if we have more than uptimeTotalEntriesMergeThreshold
|
||||
if numberOfUptimeEntries >= uptimeTotalEntriesMergeThreshold {
|
||||
logr.Infof("[sql.Insert] Merging hourly uptime entries for endpoint with key=%s; This is a lot of work, it shouldn't happen too often", ep.Key())
|
||||
logr.Infof("[sql.InsertEndpointResult] Merging hourly uptime entries for endpoint with key=%s; This is a lot of work, it shouldn't happen too often", ep.Key())
|
||||
if err = s.mergeHourlyUptimeEntriesOlderThanMergeThresholdIntoDailyUptimeEntries(tx, endpointID); err != nil {
|
||||
logr.Errorf("[sql.Insert] Failed to merge hourly uptime entries for endpoint with key=%s: %s", ep.Key(), err.Error())
|
||||
logr.Errorf("[sql.InsertEndpointResult] Failed to merge hourly uptime entries for endpoint with key=%s: %s", ep.Key(), err.Error())
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -350,11 +352,11 @@ func (s *Store) Insert(ep *endpoint.Endpoint, result *endpoint.Result) error {
    // but if Gatus was temporarily shut down, we might have some old entries that need to be cleaned up
    ageOfOldestUptimeEntry, err := s.getAgeOfOldestEndpointUptimeEntry(tx, endpointID)
    if err != nil {
        logr.Errorf("[sql.Insert] Failed to retrieve oldest endpoint uptime entry for endpoint with key=%s: %s", ep.Key(), err.Error())
        logr.Errorf("[sql.InsertEndpointResult] Failed to retrieve oldest endpoint uptime entry for endpoint with key=%s: %s", ep.Key(), err.Error())
    } else {
        if ageOfOldestUptimeEntry > uptimeAgeCleanUpThreshold {
            if err = s.deleteOldUptimeEntries(tx, endpointID, time.Now().Add(-(uptimeRetention + time.Hour))); err != nil {
                logr.Errorf("[sql.Insert] Failed to delete old uptime entries for endpoint with key=%s: %s", ep.Key(), err.Error())
                logr.Errorf("[sql.InsertEndpointResult] Failed to delete old uptime entries for endpoint with key=%s: %s", ep.Key(), err.Error())
            }
        }
    }
@@ -364,7 +366,7 @@ func (s *Store) Insert(ep *endpoint.Endpoint, result *endpoint.Result) error {
            s.writeThroughCache.Delete(cacheKey)
            endpointKey, params, err := extractKeyAndParamsFromCacheKey(cacheKey)
            if err != nil {
                logr.Errorf("[sql.Insert] Silently deleting cache key %s instead of refreshing due to error: %s", cacheKey, err.Error())
                logr.Errorf("[sql.InsertEndpointResult] Silently deleting cache key %s instead of refreshing due to error: %s", cacheKey, err.Error())
                continue
            }
            // Retrieve the endpoint status by key, which will in turn refresh the cache
@@ -379,17 +381,43 @@ func (s *Store) Insert(ep *endpoint.Endpoint, result *endpoint.Result) error {

// DeleteAllEndpointStatusesNotInKeys removes all rows owned by an endpoint whose key is not within the keys provided
func (s *Store) DeleteAllEndpointStatusesNotInKeys(keys []string) int {
    logr.Debugf("[sql.DeleteAllEndpointStatusesNotInKeys] Called with %d keys", len(keys))
    var err error
    var result sql.Result
    if len(keys) == 0 {
        // Delete everything
        logr.Debugf("[sql.DeleteAllEndpointStatusesNotInKeys] No keys provided, deleting all endpoints")
        result, err = s.db.Exec("DELETE FROM endpoints")
    } else {
        // First check what we're about to delete
        args := make([]interface{}, 0, len(keys))
        checkQuery := "SELECT endpoint_key FROM endpoints WHERE endpoint_key NOT IN ("
        for i := range keys {
            checkQuery += fmt.Sprintf("$%d,", i+1)
            args = append(args, keys[i])
        }
        checkQuery = checkQuery[:len(checkQuery)-1] + ")"

        rows, checkErr := s.db.Query(checkQuery, args...)
        if checkErr == nil {
            defer rows.Close()
            var deletedKeys []string
            for rows.Next() {
                var key string
                if err := rows.Scan(&key); err == nil {
                    deletedKeys = append(deletedKeys, key)
                }
            }
            if len(deletedKeys) > 0 {
                logr.Infof("[sql.DeleteAllEndpointStatusesNotInKeys] Deleting endpoints with keys: %v", deletedKeys)
            } else {
                logr.Debugf("[sql.DeleteAllEndpointStatusesNotInKeys] No endpoints to delete")
            }
        }

        query := "DELETE FROM endpoints WHERE endpoint_key NOT IN ("
        for i := range keys {
            query += fmt.Sprintf("$%d,", i+1)
            args = append(args, keys[i])
        }
        query = query[:len(query)-1] + ")" // Remove the last comma and add the closing parenthesis
        result, err = s.db.Exec(query, args...)
@@ -586,11 +614,16 @@ func (s *Store) insertEndpointEvent(tx *sql.Tx, endpointID int64, event *endpoin

// insertEndpointResult inserts a result in the store
func (s *Store) insertEndpointResult(tx *sql.Tx, endpointID int64, result *endpoint.Result) error {
    return s.insertEndpointResultWithSuiteID(tx, endpointID, result, nil)
}

// insertEndpointResultWithSuiteID inserts a result in the store with optional suite linkage
func (s *Store) insertEndpointResultWithSuiteID(tx *sql.Tx, endpointID int64, result *endpoint.Result, suiteResultID *int64) error {
    var endpointResultID int64
    err := tx.QueryRow(
        `
            INSERT INTO endpoint_results (endpoint_id, success, errors, connected, status, dns_rcode, certificate_expiration, domain_expiration, hostname, ip, duration, timestamp)
            VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12)
            INSERT INTO endpoint_results (endpoint_id, success, errors, connected, status, dns_rcode, certificate_expiration, domain_expiration, hostname, ip, duration, timestamp, suite_result_id)
            VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13)
            RETURNING endpoint_result_id
        `,
        endpointID,
@@ -605,6 +638,7 @@ func (s *Store) insertEndpointResult(tx *sql.Tx, endpointID int64, result *endpo
        result.IP,
        result.Duration,
        result.Timestamp.UTC(),
        suiteResultID,
    ).Scan(&endpointResultID)
    if err != nil {
        return err
@@ -652,7 +686,16 @@ func (s *Store) updateEndpointUptime(tx *sql.Tx, endpointID int64, result *endpo
}

func (s *Store) getAllEndpointKeys(tx *sql.Tx) (keys []string, err error) {
    rows, err := tx.Query("SELECT endpoint_key FROM endpoints ORDER BY endpoint_key")
    // Only get endpoints that have at least one result not linked to a suite
    // This excludes endpoints that only exist as part of suites
    // Using JOIN for better performance than EXISTS subquery
    rows, err := tx.Query(`
        SELECT DISTINCT e.endpoint_key
        FROM endpoints e
        INNER JOIN endpoint_results er ON e.endpoint_id = er.endpoint_id
        WHERE er.suite_result_id IS NULL
        ORDER BY e.endpoint_key
    `)
    if err != nil {
        return nil, err
    }
@@ -1108,3 +1151,428 @@ func extractKeyAndParamsFromCacheKey(cacheKey string) (string, *paging.EndpointS
    }
    return strings.Join(parts[:len(parts)-4], "-"), params, nil
}

// GetAllSuiteStatuses returns all monitored suite statuses
func (s *Store) GetAllSuiteStatuses(params *paging.SuiteStatusParams) ([]*suite.Status, error) {
    tx, err := s.db.Begin()
    if err != nil {
        return nil, err
    }
    defer tx.Rollback()

    // Get all suites
    rows, err := tx.Query(`
        SELECT suite_id, suite_key, suite_name, suite_group
        FROM suites
        ORDER BY suite_key
    `)
    if err != nil {
        return nil, err
    }
    defer rows.Close()

    var suiteStatuses []*suite.Status
    for rows.Next() {
        var suiteID int64
        var key, name, group string
        if err = rows.Scan(&suiteID, &key, &name, &group); err != nil {
            return nil, err
        }

        status := &suite.Status{
            Name:    name,
            Group:   group,
            Key:     key,
            Results: []*suite.Result{},
        }

        // Get suite results with pagination
        pageSize := 20
        page := 1
        if params != nil {
            if params.PageSize > 0 {
                pageSize = params.PageSize
            }
            if params.Page > 0 {
                page = params.Page
            }
        }

        status.Results, err = s.getSuiteResults(tx, suiteID, page, pageSize)
        if err != nil {
            logr.Errorf("[sql.GetAllSuiteStatuses] Failed to retrieve results for suite_id=%d: %s", suiteID, err.Error())
        }
        // Populate Name and Group fields on each result
        for _, result := range status.Results {
            result.Name = name
            result.Group = group
        }

        suiteStatuses = append(suiteStatuses, status)
    }

    if err = tx.Commit(); err != nil {
        return nil, err
    }
    return suiteStatuses, nil
}

// GetSuiteStatusByKey returns the suite status for a given key
func (s *Store) GetSuiteStatusByKey(key string, params *paging.SuiteStatusParams) (*suite.Status, error) {
    tx, err := s.db.Begin()
    if err != nil {
        return nil, err
    }
    defer tx.Rollback()

    var suiteID int64
    var name, group string
    err = tx.QueryRow(`
        SELECT suite_id, suite_name, suite_group
        FROM suites
        WHERE suite_key = $1
    `, key).Scan(&suiteID, &name, &group)
    if err != nil {
        if errors.Is(err, sql.ErrNoRows) {
            return nil, nil
        }
        return nil, err
    }

    status := &suite.Status{
        Name:    name,
        Group:   group,
        Key:     key,
        Results: []*suite.Result{},
    }

    // Get suite results with pagination
    pageSize := 20
    page := 1
    if params != nil {
        if params.PageSize > 0 {
            pageSize = params.PageSize
        }
        if params.Page > 0 {
            page = params.Page
        }
    }

    status.Results, err = s.getSuiteResults(tx, suiteID, page, pageSize)
    if err != nil {
        logr.Errorf("[sql.GetSuiteStatusByKey] Failed to retrieve results for suite_id=%d: %s", suiteID, err.Error())
    }
    // Populate Name and Group fields on each result
    for _, result := range status.Results {
        result.Name = name
        result.Group = group
    }

    if err = tx.Commit(); err != nil {
        return nil, err
    }
    return status, nil
}

// InsertSuiteResult adds the observed result for the specified suite into the store
func (s *Store) InsertSuiteResult(su *suite.Suite, result *suite.Result) error {
    tx, err := s.db.Begin()
    if err != nil {
        return err
    }
    defer tx.Rollback()

    // Get or create suite
    suiteID, err := s.getSuiteID(tx, su)
    if err != nil {
        if errors.Is(err, common.ErrSuiteNotFound) {
            // Suite doesn't exist in the database, insert it
            if suiteID, err = s.insertSuite(tx, su); err != nil {
                logr.Errorf("[sql.InsertSuiteResult] Failed to create suite with key=%s: %s", su.Key(), err.Error())
                return err
            }
        } else {
            logr.Errorf("[sql.InsertSuiteResult] Failed to retrieve id of suite with key=%s: %s", su.Key(), err.Error())
            return err
        }
    }
    // Insert suite result
    var suiteResultID int64
    err = tx.QueryRow(`
        INSERT INTO suite_results (suite_id, success, errors, duration, timestamp)
        VALUES ($1, $2, $3, $4, $5)
        RETURNING suite_result_id
    `,
        suiteID,
        result.Success,
        strings.Join(result.Errors, arraySeparator),
        result.Duration.Nanoseconds(),
        result.Timestamp.UTC(), // timestamp is the start time
    ).Scan(&suiteResultID)
    if err != nil {
        return err
    }
    // For each endpoint result in the suite, we need to store them
    for _, epResult := range result.EndpointResults {
        // Create a temporary endpoint object for storage
        ep := &endpoint.Endpoint{
            Name:  epResult.Name,
            Group: su.Group,
        }
        // Get or create the endpoint (without suite linkage in endpoints table)
        epID, err := s.getEndpointID(tx, ep)
        if err != nil {
            if errors.Is(err, common.ErrEndpointNotFound) {
                // Endpoint doesn't exist, create it
                if epID, err = s.insertEndpoint(tx, ep); err != nil {
                    logr.Errorf("[sql.InsertSuiteResult] Failed to create endpoint %s: %s", epResult.Name, err.Error())
                    continue
                }
            } else {
                logr.Errorf("[sql.InsertSuiteResult] Failed to get endpoint %s: %s", epResult.Name, err.Error())
                continue
            }
        }
        // Insert the endpoint result with suite linkage
        err = s.insertEndpointResultWithSuiteID(tx, epID, epResult, &suiteResultID)
        if err != nil {
            logr.Errorf("[sql.InsertSuiteResult] Failed to insert endpoint result for %s: %s", epResult.Name, err.Error())
        }
    }
    // Clean up old suite results
    numberOfResults, err := s.getNumberOfSuiteResultsByID(tx, suiteID)
    if err != nil {
        logr.Errorf("[sql.InsertSuiteResult] Failed to retrieve total number of results for suite with key=%s: %s", su.Key(), err.Error())
    } else {
        if numberOfResults > int64(s.maximumNumberOfResults+resultsAboveMaximumCleanUpThreshold) {
            if err = s.deleteOldSuiteResults(tx, suiteID); err != nil {
                logr.Errorf("[sql.InsertSuiteResult] Failed to delete old results for suite with key=%s: %s", su.Key(), err.Error())
            }
        }
    }
    if err = tx.Commit(); err != nil {
        return err
    }
    return nil
}

// DeleteAllSuiteStatusesNotInKeys removes all suite statuses that are not within the keys provided
func (s *Store) DeleteAllSuiteStatusesNotInKeys(keys []string) int {
    logr.Debugf("[sql.DeleteAllSuiteStatusesNotInKeys] Called with %d keys", len(keys))
    if len(keys) == 0 {
        // Delete all suites
        logr.Debugf("[sql.DeleteAllSuiteStatusesNotInKeys] No keys provided, deleting all suites")
        result, err := s.db.Exec("DELETE FROM suites")
        if err != nil {
            logr.Errorf("[sql.DeleteAllSuiteStatusesNotInKeys] Failed to delete all suites: %s", err.Error())
            return 0
        }
        rowsAffected, _ := result.RowsAffected()
        return int(rowsAffected)
    }
    args := make([]interface{}, 0, len(keys))
    query := "DELETE FROM suites WHERE suite_key NOT IN ("
    for i := range keys {
        if i > 0 {
            query += ","
        }
        query += fmt.Sprintf("$%d", i+1)
        args = append(args, keys[i])
    }
    query += ")"
    // First, let's see what we're about to delete
    checkQuery := "SELECT suite_key FROM suites WHERE suite_key NOT IN ("
    for i := range keys {
        if i > 0 {
            checkQuery += ","
        }
        checkQuery += fmt.Sprintf("$%d", i+1)
    }
    checkQuery += ")"
    rows, err := s.db.Query(checkQuery, args...)
    if err == nil {
        defer rows.Close()
        var deletedKeys []string
        for rows.Next() {
            var key string
            if err := rows.Scan(&key); err == nil {
                deletedKeys = append(deletedKeys, key)
            }
        }
        if len(deletedKeys) > 0 {
            logr.Infof("[sql.DeleteAllSuiteStatusesNotInKeys] Deleting suites with keys: %v", deletedKeys)
        }
    }
    result, err := s.db.Exec(query, args...)
    if err != nil {
        logr.Errorf("[sql.DeleteAllSuiteStatusesNotInKeys] Failed to delete suites: %s", err.Error())
        return 0
    }
    rowsAffected, _ := result.RowsAffected()
    return int(rowsAffected)
}

// Suite helper methods

// getSuiteID retrieves the suite ID from the database by its key
func (s *Store) getSuiteID(tx *sql.Tx, su *suite.Suite) (int64, error) {
    var id int64
    err := tx.QueryRow("SELECT suite_id FROM suites WHERE suite_key = $1", su.Key()).Scan(&id)
    if err != nil {
        if errors.Is(err, sql.ErrNoRows) {
            return 0, common.ErrSuiteNotFound
        }
        return 0, err
    }
    return id, nil
}

// insertSuite inserts a suite in the store and returns the generated id
func (s *Store) insertSuite(tx *sql.Tx, su *suite.Suite) (int64, error) {
    var id int64
    err := tx.QueryRow(
        "INSERT INTO suites (suite_key, suite_name, suite_group) VALUES ($1, $2, $3) RETURNING suite_id",
        su.Key(),
        su.Name,
        su.Group,
    ).Scan(&id)
    if err != nil {
        return 0, err
    }
    return id, nil
}

// getSuiteResults retrieves paginated suite results
func (s *Store) getSuiteResults(tx *sql.Tx, suiteID int64, page, pageSize int) ([]*suite.Result, error) {
    rows, err := tx.Query(`
        SELECT suite_result_id, success, errors, duration, timestamp
        FROM suite_results
        WHERE suite_id = $1
        ORDER BY suite_result_id DESC
        LIMIT $2 OFFSET $3
    `,
        suiteID,
        pageSize,
        (page-1)*pageSize,
    )
    if err != nil {
        logr.Errorf("[sql.getSuiteResults] Query failed: %v", err)
        return nil, err
    }
    defer rows.Close()
    type suiteResultData struct {
        result *suite.Result
        id     int64
    }
    var resultsData []suiteResultData
    for rows.Next() {
        result := &suite.Result{
            EndpointResults: []*endpoint.Result{},
        }
        var suiteResultID int64
        var joinedErrors string
        var nanoseconds int64
        err = rows.Scan(&suiteResultID, &result.Success, &joinedErrors, &nanoseconds, &result.Timestamp)
        if err != nil {
            logr.Errorf("[sql.getSuiteResults] Failed to scan suite result: %s", err.Error())
            continue
        }
        result.Duration = time.Duration(nanoseconds)
        if len(joinedErrors) > 0 {
            result.Errors = strings.Split(joinedErrors, arraySeparator)
        }
        // Store both result and ID together
        resultsData = append(resultsData, suiteResultData{
            result: result,
            id:     suiteResultID,
        })
    }

    // Reverse the results to get chronological order (oldest to newest)
    for i := len(resultsData)/2 - 1; i >= 0; i-- {
        opp := len(resultsData) - 1 - i
        resultsData[i], resultsData[opp] = resultsData[opp], resultsData[i]
    }
    // Fetch endpoint results for each suite result
    for _, data := range resultsData {
        result := data.result
        resultID := data.id
        // Query endpoint results for this suite result
        epRows, err := tx.Query(`
            SELECT
                e.endpoint_name,
                er.success,
                er.errors,
                er.duration,
                er.timestamp
            FROM endpoint_results er
            JOIN endpoints e ON er.endpoint_id = e.endpoint_id
            WHERE er.suite_result_id = $1
            ORDER BY er.endpoint_result_id
        `, resultID)
        if err != nil {
            logr.Errorf("[sql.getSuiteResults] Failed to get endpoint results for suite_result_id=%d: %s", resultID, err.Error())
            continue
        }
        epCount := 0
        for epRows.Next() {
            epCount++
            var name string
            var success bool
            var joinedErrors string
            var duration int64
            var timestamp time.Time
            err = epRows.Scan(&name, &success, &joinedErrors, &duration, &timestamp)
            if err != nil {
                logr.Errorf("[sql.getSuiteResults] Failed to scan endpoint result: %s", err.Error())
                continue
            }
            epResult := &endpoint.Result{
                Name:      name,
                Success:   success,
                Duration:  time.Duration(duration),
                Timestamp: timestamp,
            }
            if len(joinedErrors) > 0 {
                epResult.Errors = strings.Split(joinedErrors, arraySeparator)
            }
            result.EndpointResults = append(result.EndpointResults, epResult)
        }
        epRows.Close()
        if epCount > 0 {
            logr.Debugf("[sql.getSuiteResults] Found %d endpoint results for suite_result_id=%d", epCount, resultID)
        }
    }
    // Extract just the results for return
    var results []*suite.Result
    for _, data := range resultsData {
        results = append(results, data.result)
    }
    return results, nil
}

// getNumberOfSuiteResultsByID gets the count of results for a suite
func (s *Store) getNumberOfSuiteResultsByID(tx *sql.Tx, suiteID int64) (int64, error) {
    var count int64
    err := tx.QueryRow("SELECT COUNT(1) FROM suite_results WHERE suite_id = $1", suiteID).Scan(&count)
    return count, err
}

// deleteOldSuiteResults deletes old suite results beyond the maximum
func (s *Store) deleteOldSuiteResults(tx *sql.Tx, suiteID int64) error {
    _, err := tx.Exec(`
        DELETE FROM suite_results
        WHERE suite_id = $1
        AND suite_result_id NOT IN (
            SELECT suite_result_id
            FROM suite_results
            WHERE suite_id = $1
            ORDER BY suite_result_id DESC
            LIMIT $2
        )
    `,
        suiteID,
        s.maximumNumberOfResults,
    )
    return err
}

@@ -103,7 +103,7 @@ func TestStore_InsertCleansUpOldUptimeEntriesProperly(t *testing.T) {
    now := time.Now().Truncate(time.Hour)
    now = time.Date(now.Year(), now.Month(), now.Day(), now.Hour(), 0, 0, 0, now.Location())

    store.Insert(&testEndpoint, &endpoint.Result{Timestamp: now.Add(-5 * time.Hour), Success: true})
    store.InsertEndpointResult(&testEndpoint, &endpoint.Result{Timestamp: now.Add(-5 * time.Hour), Success: true})

    tx, _ := store.db.Begin()
    oldest, _ := store.getAgeOfOldestEndpointUptimeEntry(tx, 1)
@@ -113,7 +113,7 @@ func TestStore_InsertCleansUpOldUptimeEntriesProperly(t *testing.T) {
    }

    // The oldest cache entry should remain at ~5 hours old, because this entry is more recent
    store.Insert(&testEndpoint, &endpoint.Result{Timestamp: now.Add(-3 * time.Hour), Success: true})
    store.InsertEndpointResult(&testEndpoint, &endpoint.Result{Timestamp: now.Add(-3 * time.Hour), Success: true})

    tx, _ = store.db.Begin()
    oldest, _ = store.getAgeOfOldestEndpointUptimeEntry(tx, 1)
@@ -123,7 +123,7 @@ func TestStore_InsertCleansUpOldUptimeEntriesProperly(t *testing.T) {
    }

    // The oldest cache entry should now become at ~8 hours old, because this entry is older
    store.Insert(&testEndpoint, &endpoint.Result{Timestamp: now.Add(-8 * time.Hour), Success: true})
    store.InsertEndpointResult(&testEndpoint, &endpoint.Result{Timestamp: now.Add(-8 * time.Hour), Success: true})

    tx, _ = store.db.Begin()
    oldest, _ = store.getAgeOfOldestEndpointUptimeEntry(tx, 1)
@@ -133,7 +133,7 @@ func TestStore_InsertCleansUpOldUptimeEntriesProperly(t *testing.T) {
    }

    // Since this is one hour before reaching the clean up threshold, the oldest entry should now be this one
    store.Insert(&testEndpoint, &endpoint.Result{Timestamp: now.Add(-(uptimeAgeCleanUpThreshold - time.Hour)), Success: true})
    store.InsertEndpointResult(&testEndpoint, &endpoint.Result{Timestamp: now.Add(-(uptimeAgeCleanUpThreshold - time.Hour)), Success: true})

    tx, _ = store.db.Begin()
    oldest, _ = store.getAgeOfOldestEndpointUptimeEntry(tx, 1)
@@ -144,7 +144,7 @@ func TestStore_InsertCleansUpOldUptimeEntriesProperly(t *testing.T) {

    // Since this entry is after the uptimeAgeCleanUpThreshold, both this entry as well as the previous
    // one should be deleted since they both surpass uptimeRetention
    store.Insert(&testEndpoint, &endpoint.Result{Timestamp: now.Add(-(uptimeAgeCleanUpThreshold + time.Hour)), Success: true})
    store.InsertEndpointResult(&testEndpoint, &endpoint.Result{Timestamp: now.Add(-(uptimeAgeCleanUpThreshold + time.Hour)), Success: true})

    tx, _ = store.db.Begin()
    oldest, _ = store.getAgeOfOldestEndpointUptimeEntry(tx, 1)
@@ -182,7 +182,7 @@ func TestStore_HourlyUptimeEntriesAreMergedIntoDailyUptimeEntriesProperly(t *tes
        for i := scenario.numberOfHours; i > 0; i-- {
            //fmt.Printf("i: %d (%s)\n", i, now.Add(-time.Duration(i)*time.Hour))
            // Create an uptime entry
            err := store.Insert(&testEndpoint, &endpoint.Result{Timestamp: now.Add(-time.Duration(i) * time.Hour), Success: true})
            err := store.InsertEndpointResult(&testEndpoint, &endpoint.Result{Timestamp: now.Add(-time.Duration(i) * time.Hour), Success: true})
            if err != nil {
                t.Log(err)
            }
@@ -218,7 +218,7 @@ func TestStore_getEndpointUptime(t *testing.T) {
    // Add 768 hourly entries (32 days)
    // Daily entries should be merged from hourly entries automatically
    for i := 768; i > 0; i-- {
        err := store.Insert(&testEndpoint, &endpoint.Result{Timestamp: time.Now().Add(-time.Duration(i) * time.Hour), Duration: time.Second, Success: true})
        err := store.InsertEndpointResult(&testEndpoint, &endpoint.Result{Timestamp: time.Now().Add(-time.Duration(i) * time.Hour), Duration: time.Second, Success: true})
        if err != nil {
            t.Log(err)
        }
@@ -245,7 +245,7 @@ func TestStore_getEndpointUptime(t *testing.T) {
        t.Errorf("expected uptime to be 1, got %f", uptime)
    }
    // Add a new unsuccessful result, which should impact the uptime
    err = store.Insert(&testEndpoint, &endpoint.Result{Timestamp: time.Now(), Duration: time.Second, Success: false})
    err = store.InsertEndpointResult(&testEndpoint, &endpoint.Result{Timestamp: time.Now(), Duration: time.Second, Success: false})
    if err != nil {
        t.Log(err)
    }
@@ -280,8 +280,8 @@ func TestStore_InsertCleansUpEventsAndResultsProperly(t *testing.T) {
    resultsCleanUpThreshold := store.maximumNumberOfResults + resultsAboveMaximumCleanUpThreshold
    eventsCleanUpThreshold := store.maximumNumberOfEvents + eventsAboveMaximumCleanUpThreshold
    for i := 0; i < resultsCleanUpThreshold+eventsCleanUpThreshold; i++ {
        store.Insert(&testEndpoint, &testSuccessfulResult)
        store.Insert(&testEndpoint, &testUnsuccessfulResult)
        store.InsertEndpointResult(&testEndpoint, &testSuccessfulResult)
        store.InsertEndpointResult(&testEndpoint, &testUnsuccessfulResult)
        ss, _ := store.GetEndpointStatusByKey(testEndpoint.Key(), paging.NewEndpointStatusParams().WithResults(1, storage.DefaultMaximumNumberOfResults*5).WithEvents(1, storage.DefaultMaximumNumberOfEvents*5))
        if len(ss.Results) > resultsCleanUpThreshold+1 {
            t.Errorf("number of results shouldn't have exceeded %d, reached %d", resultsCleanUpThreshold, len(ss.Results))
@@ -296,8 +296,8 @@ func TestStore_InsertWithCaching(t *testing.T) {
    store, _ := NewStore("sqlite", t.TempDir()+"/TestStore_InsertWithCaching.db", true, storage.DefaultMaximumNumberOfResults, storage.DefaultMaximumNumberOfEvents)
    defer store.Close()
    // Add 2 results
    store.Insert(&testEndpoint, &testSuccessfulResult)
    store.Insert(&testEndpoint, &testSuccessfulResult)
    store.InsertEndpointResult(&testEndpoint, &testSuccessfulResult)
    store.InsertEndpointResult(&testEndpoint, &testSuccessfulResult)
    // Verify that they exist
    endpointStatuses, _ := store.GetAllEndpointStatuses(paging.NewEndpointStatusParams().WithResults(1, 20))
    if numberOfEndpointStatuses := len(endpointStatuses); numberOfEndpointStatuses != 1 {
@@ -307,8 +307,8 @@ func TestStore_InsertWithCaching(t *testing.T) {
        t.Fatalf("expected 2 results, got %d", len(endpointStatuses[0].Results))
    }
    // Add 2 more results
    store.Insert(&testEndpoint, &testUnsuccessfulResult)
    store.Insert(&testEndpoint, &testUnsuccessfulResult)
    store.InsertEndpointResult(&testEndpoint, &testUnsuccessfulResult)
    store.InsertEndpointResult(&testEndpoint, &testUnsuccessfulResult)
    // Verify that they exist
    endpointStatuses, _ = store.GetAllEndpointStatuses(paging.NewEndpointStatusParams().WithResults(1, 20))
    if numberOfEndpointStatuses := len(endpointStatuses); numberOfEndpointStatuses != 1 {
@@ -329,8 +329,8 @@ func TestStore_InsertWithCaching(t *testing.T) {
func TestStore_Persistence(t *testing.T) {
    path := t.TempDir() + "/TestStore_Persistence.db"
    store, _ := NewStore("sqlite", path, false, storage.DefaultMaximumNumberOfResults, storage.DefaultMaximumNumberOfEvents)
    store.Insert(&testEndpoint, &testSuccessfulResult)
    store.Insert(&testEndpoint, &testUnsuccessfulResult)
    store.InsertEndpointResult(&testEndpoint, &testSuccessfulResult)
    store.InsertEndpointResult(&testEndpoint, &testUnsuccessfulResult)
    if uptime, _ := store.GetUptimeByKey(testEndpoint.Key(), time.Now().Add(-time.Hour), time.Now()); uptime != 0.5 {
        t.Errorf("the uptime over the past 1h should've been 0.5, got %f", uptime)
    }
@@ -425,12 +425,12 @@ func TestStore_Save(t *testing.T) {
func TestStore_SanityCheck(t *testing.T) {
    store, _ := NewStore("sqlite", t.TempDir()+"/TestStore_SanityCheck.db", false, storage.DefaultMaximumNumberOfResults, storage.DefaultMaximumNumberOfEvents)
    defer store.Close()
    store.Insert(&testEndpoint, &testSuccessfulResult)
    store.InsertEndpointResult(&testEndpoint, &testSuccessfulResult)
    endpointStatuses, _ := store.GetAllEndpointStatuses(paging.NewEndpointStatusParams())
    if numberOfEndpointStatuses := len(endpointStatuses); numberOfEndpointStatuses != 1 {
        t.Fatalf("expected 1 EndpointStatus, got %d", numberOfEndpointStatuses)
    }
    store.Insert(&testEndpoint, &testUnsuccessfulResult)
    store.InsertEndpointResult(&testEndpoint, &testUnsuccessfulResult)
    // Both results inserted are for the same endpoint, therefore, the count shouldn't have increased
    endpointStatuses, _ = store.GetAllEndpointStatuses(paging.NewEndpointStatusParams())
    if numberOfEndpointStatuses := len(endpointStatuses); numberOfEndpointStatuses != 1 {
@@ -541,7 +541,7 @@ func TestStore_NoRows(t *testing.T) {
func TestStore_BrokenSchema(t *testing.T) {
    store, _ := NewStore("sqlite", t.TempDir()+"/TestStore_BrokenSchema.db", false, storage.DefaultMaximumNumberOfResults, storage.DefaultMaximumNumberOfEvents)
    defer store.Close()
    if err := store.Insert(&testEndpoint, &testSuccessfulResult); err != nil {
    if err := store.InsertEndpointResult(&testEndpoint, &testSuccessfulResult); err != nil {
        t.Fatal("expected no error, got", err.Error())
    }
    if _, err := store.GetAverageResponseTimeByKey(testEndpoint.Key(), time.Now().Add(-time.Hour), time.Now()); err != nil {
@@ -553,7 +553,7 @@ func TestStore_BrokenSchema(t *testing.T) {
    // Break
    _, _ = store.db.Exec("DROP TABLE endpoints")
    // And now we'll try to insert something in our broken schema
    if err := store.Insert(&testEndpoint, &testSuccessfulResult); err == nil {
    if err := store.InsertEndpointResult(&testEndpoint, &testSuccessfulResult); err == nil {
        t.Fatal("expected an error")
    }
    if _, err := store.GetAverageResponseTimeByKey(testEndpoint.Key(), time.Now().Add(-time.Hour), time.Now()); err == nil {
@@ -576,12 +576,12 @@ func TestStore_BrokenSchema(t *testing.T) {
        t.Fatal("schema should've been repaired")
    }
    store.Clear()
    if err := store.Insert(&testEndpoint, &testSuccessfulResult); err != nil {
    if err := store.InsertEndpointResult(&testEndpoint, &testSuccessfulResult); err != nil {
        t.Fatal("expected no error, got", err.Error())
    }
    // Break
    _, _ = store.db.Exec("DROP TABLE endpoint_events")
    if err := store.Insert(&testEndpoint, &testSuccessfulResult); err != nil {
    if err := store.InsertEndpointResult(&testEndpoint, &testSuccessfulResult); err != nil {
        t.Fatal("expected no error, because this should silently fails, got", err.Error())
    }
    if _, err := store.GetAllEndpointStatuses(paging.NewEndpointStatusParams().WithResults(1, 1).WithEvents(1, 1)); err != nil {
@@ -592,28 +592,28 @@ func TestStore_BrokenSchema(t *testing.T) {
        t.Fatal("schema should've been repaired")
    }
    store.Clear()
    if err := store.Insert(&testEndpoint, &testSuccessfulResult); err != nil {
    if err := store.InsertEndpointResult(&testEndpoint, &testSuccessfulResult); err != nil {
        t.Fatal("expected no error, got", err.Error())
    }
    // Break
    _, _ = store.db.Exec("DROP TABLE endpoint_results")
    if err := store.Insert(&testEndpoint, &testSuccessfulResult); err == nil {
    if err := store.InsertEndpointResult(&testEndpoint, &testSuccessfulResult); err == nil {
        t.Fatal("expected an error")
    }
    if _, err := store.GetAllEndpointStatuses(paging.NewEndpointStatusParams().WithResults(1, 1).WithEvents(1, 1)); err != nil {
        t.Fatal("expected no error, because this should silently fail, got", err.Error())
    if _, err := store.GetAllEndpointStatuses(paging.NewEndpointStatusParams().WithResults(1, 1).WithEvents(1, 1)); err == nil {
        t.Fatal("expected an error")
    }
    // Repair
    if err := store.createSchema(); err != nil {
        t.Fatal("schema should've been repaired")
    }
    store.Clear()
    if err := store.Insert(&testEndpoint, &testSuccessfulResult); err != nil {
    if err := store.InsertEndpointResult(&testEndpoint, &testSuccessfulResult); err != nil {
        t.Fatal("expected no error, got", err.Error())
    }
    // Break
    _, _ = store.db.Exec("DROP TABLE endpoint_result_conditions")
    if err := store.Insert(&testEndpoint, &testSuccessfulResult); err == nil {
    if err := store.InsertEndpointResult(&testEndpoint, &testSuccessfulResult); err == nil {
        t.Fatal("expected an error")
    }
    // Repair
@@ -621,12 +621,12 @@ func TestStore_BrokenSchema(t *testing.T) {
        t.Fatal("schema should've been repaired")
    }
    store.Clear()
    if err := store.Insert(&testEndpoint, &testSuccessfulResult); err != nil {
    if err := store.InsertEndpointResult(&testEndpoint, &testSuccessfulResult); err != nil {
        t.Fatal("expected no error, got", err.Error())
    }
    // Break
    _, _ = store.db.Exec("DROP TABLE endpoint_uptimes")
    if err := store.Insert(&testEndpoint, &testSuccessfulResult); err != nil {
    if err := store.InsertEndpointResult(&testEndpoint, &testSuccessfulResult); err != nil {
        t.Fatal("expected no error, because this should silently fails, got", err.Error())
    }
    if _, err := store.GetAverageResponseTimeByKey(testEndpoint.Key(), time.Now().Add(-time.Hour), time.Now()); err == nil {
@@ -857,8 +857,8 @@ func TestStore_DeleteAllTriggeredAlertsNotInChecksumsByEndpoint(t *testing.T) {
func TestStore_HasEndpointStatusNewerThan(t *testing.T) {
    store, _ := NewStore("sqlite", t.TempDir()+"/TestStore_HasEndpointStatusNewerThan.db", false, storage.DefaultMaximumNumberOfResults, storage.DefaultMaximumNumberOfEvents)
|
||||
defer store.Close()
|
||||
// Insert an endpoint status
|
||||
if err := store.Insert(&testEndpoint, &testSuccessfulResult); err != nil {
|
||||
// InsertEndpointResult an endpoint status
|
||||
if err := store.InsertEndpointResult(&testEndpoint, &testSuccessfulResult); err != nil {
|
||||
t.Fatal("expected no error, got", err.Error())
|
||||
}
|
||||
// Check if it has a status newer than 1 hour ago
|
||||
|
||||
@@ -6,6 +6,7 @@ import (

"github.com/TwiN/gatus/v5/alerting/alert"
"github.com/TwiN/gatus/v5/config/endpoint"
"github.com/TwiN/gatus/v5/config/suite"
"github.com/TwiN/gatus/v5/storage"
"github.com/TwiN/gatus/v5/storage/store/common/paging"
"github.com/TwiN/gatus/v5/storage/store/memory"
@@ -19,12 +20,18 @@ type Store interface {
// with a subset of endpoint.Result defined by the page and pageSize parameters
GetAllEndpointStatuses(params *paging.EndpointStatusParams) ([]*endpoint.Status, error)

// GetAllSuiteStatuses returns all monitored suite statuses
GetAllSuiteStatuses(params *paging.SuiteStatusParams) ([]*suite.Status, error)

// GetEndpointStatus returns the endpoint status for a given endpoint name in the given group
GetEndpointStatus(groupName, endpointName string, params *paging.EndpointStatusParams) (*endpoint.Status, error)

// GetEndpointStatusByKey returns the endpoint status for a given key
GetEndpointStatusByKey(key string, params *paging.EndpointStatusParams) (*endpoint.Status, error)

// GetSuiteStatusByKey returns the suite status for a given key
GetSuiteStatusByKey(key string, params *paging.SuiteStatusParams) (*suite.Status, error)

// GetUptimeByKey returns the uptime percentage during a time range
GetUptimeByKey(key string, from, to time.Time) (float64, error)

@@ -34,14 +41,20 @@ type Store interface {
// GetHourlyAverageResponseTimeByKey returns a map of hourly (key) average response time in milliseconds (value) during a time range
GetHourlyAverageResponseTimeByKey(key string, from, to time.Time) (map[int64]int, error)

// Insert adds the observed result for the specified endpoint into the store
Insert(ep *endpoint.Endpoint, result *endpoint.Result) error
// InsertEndpointResult adds the observed result for the specified endpoint into the store
InsertEndpointResult(ep *endpoint.Endpoint, result *endpoint.Result) error

// InsertSuiteResult adds the observed result for the specified suite into the store
InsertSuiteResult(s *suite.Suite, result *suite.Result) error

// DeleteAllEndpointStatusesNotInKeys removes all Status that are not within the keys provided
//
// Used to delete endpoints that have been persisted but are no longer part of the configured endpoints
DeleteAllEndpointStatusesNotInKeys(keys []string) int

// DeleteAllSuiteStatusesNotInKeys removes all suite statuses that are not within the keys provided
DeleteAllSuiteStatusesNotInKeys(keys []string) int

// GetTriggeredEndpointAlert returns whether the triggered alert for the specified endpoint as well as the necessary information to resolve it
GetTriggeredEndpointAlert(ep *endpoint.Endpoint, alert *alert.Alert) (exists bool, resolveKey string, numberOfSuccessesInARow int, err error)

@@ -56,9 +56,9 @@ func BenchmarkStore_GetAllEndpointStatuses(b *testing.B) {
for i := 0; i < numberOfEndpointsToCreate; i++ {
ep := testEndpoint
ep.Name = "endpoint" + strconv.Itoa(i)
// Insert 20 results for each endpoint
// InsertEndpointResult 20 results for each endpoint
for j := 0; j < 20; j++ {
scenario.Store.Insert(&ep, &testSuccessfulResult)
scenario.Store.InsertEndpointResult(&ep, &testSuccessfulResult)
}
}
// Run the scenarios
@@ -131,7 +131,7 @@ func BenchmarkStore_Insert(b *testing.B) {
result = testSuccessfulResult
}
result.Timestamp = time.Now()
scenario.Store.Insert(&testEndpoint, &result)
scenario.Store.InsertEndpointResult(&testEndpoint, &result)
n++
}
})
@@ -144,7 +144,7 @@ func BenchmarkStore_Insert(b *testing.B) {
result = testSuccessfulResult
}
result.Timestamp = time.Now()
scenario.Store.Insert(&testEndpoint, &result)
scenario.Store.InsertEndpointResult(&testEndpoint, &result)
}
}
b.ReportAllocs()
@@ -192,8 +192,8 @@ func BenchmarkStore_GetEndpointStatusByKey(b *testing.B) {
}
for _, scenario := range scenarios {
for i := 0; i < 50; i++ {
scenario.Store.Insert(&testEndpoint, &testSuccessfulResult)
scenario.Store.Insert(&testEndpoint, &testUnsuccessfulResult)
scenario.Store.InsertEndpointResult(&testEndpoint, &testSuccessfulResult)
scenario.Store.InsertEndpointResult(&testEndpoint, &testUnsuccessfulResult)
}
b.Run(scenario.Name, func(b *testing.B) {
if scenario.Parallel {

@@ -136,8 +136,8 @@ func TestStore_GetEndpointStatusByKey(t *testing.T) {
thirdResult.Timestamp = now
for _, scenario := range scenarios {
t.Run(scenario.Name, func(t *testing.T) {
scenario.Store.Insert(&testEndpoint, &firstResult)
scenario.Store.Insert(&testEndpoint, &secondResult)
scenario.Store.InsertEndpointResult(&testEndpoint, &firstResult)
scenario.Store.InsertEndpointResult(&testEndpoint, &secondResult)
endpointStatus, err := scenario.Store.GetEndpointStatusByKey(testEndpoint.Key(), paging.NewEndpointStatusParams().WithEvents(1, storage.DefaultMaximumNumberOfEvents).WithResults(1, storage.DefaultMaximumNumberOfResults))
if err != nil {
t.Fatal("shouldn't have returned an error, got", err.Error())
@@ -157,7 +157,7 @@ func TestStore_GetEndpointStatusByKey(t *testing.T) {
if endpointStatus.Results[0].Timestamp.After(endpointStatus.Results[1].Timestamp) {
t.Error("The result at index 0 should've been older than the result at index 1")
}
scenario.Store.Insert(&testEndpoint, &thirdResult)
scenario.Store.InsertEndpointResult(&testEndpoint, &thirdResult)
endpointStatus, err = scenario.Store.GetEndpointStatusByKey(testEndpoint.Key(), paging.NewEndpointStatusParams().WithEvents(1, storage.DefaultMaximumNumberOfEvents).WithResults(1, storage.DefaultMaximumNumberOfResults))
if err != nil {
t.Fatal("shouldn't have returned an error, got", err.Error())
@@ -175,7 +175,7 @@ func TestStore_GetEndpointStatusForMissingStatusReturnsNil(t *testing.T) {
defer cleanUp(scenarios)
for _, scenario := range scenarios {
t.Run(scenario.Name, func(t *testing.T) {
scenario.Store.Insert(&testEndpoint, &testSuccessfulResult)
scenario.Store.InsertEndpointResult(&testEndpoint, &testSuccessfulResult)
endpointStatus, err := scenario.Store.GetEndpointStatus("nonexistantgroup", "nonexistantname", paging.NewEndpointStatusParams().WithEvents(1, storage.DefaultMaximumNumberOfEvents).WithResults(1, storage.DefaultMaximumNumberOfResults))
if !errors.Is(err, common.ErrEndpointNotFound) {
t.Error("should've returned ErrEndpointNotFound, got", err)
@@ -206,8 +206,8 @@ func TestStore_GetAllEndpointStatuses(t *testing.T) {
defer cleanUp(scenarios)
for _, scenario := range scenarios {
t.Run(scenario.Name, func(t *testing.T) {
scenario.Store.Insert(&testEndpoint, &testSuccessfulResult)
scenario.Store.Insert(&testEndpoint, &testUnsuccessfulResult)
scenario.Store.InsertEndpointResult(&testEndpoint, &testSuccessfulResult)
scenario.Store.InsertEndpointResult(&testEndpoint, &testUnsuccessfulResult)
endpointStatuses, err := scenario.Store.GetAllEndpointStatuses(paging.NewEndpointStatusParams().WithResults(1, 20))
if err != nil {
t.Error("shouldn't have returned an error, got", err.Error())
@@ -230,10 +230,10 @@ func TestStore_GetAllEndpointStatuses(t *testing.T) {
t.Run(scenario.Name+"-page-2", func(t *testing.T) {
otherEndpoint := testEndpoint
otherEndpoint.Name = testEndpoint.Name + "-other"
scenario.Store.Insert(&testEndpoint, &testSuccessfulResult)
scenario.Store.Insert(&otherEndpoint, &testSuccessfulResult)
scenario.Store.Insert(&otherEndpoint, &testSuccessfulResult)
scenario.Store.Insert(&otherEndpoint, &testSuccessfulResult)
scenario.Store.InsertEndpointResult(&testEndpoint, &testSuccessfulResult)
scenario.Store.InsertEndpointResult(&otherEndpoint, &testSuccessfulResult)
scenario.Store.InsertEndpointResult(&otherEndpoint, &testSuccessfulResult)
scenario.Store.InsertEndpointResult(&otherEndpoint, &testSuccessfulResult)
endpointStatuses, err := scenario.Store.GetAllEndpointStatuses(paging.NewEndpointStatusParams().WithResults(2, 2))
if err != nil {
t.Error("shouldn't have returned an error, got", err.Error())
@@ -268,8 +268,8 @@ func TestStore_GetAllEndpointStatusesWithResultsAndEvents(t *testing.T) {
secondResult := testUnsuccessfulResult
for _, scenario := range scenarios {
t.Run(scenario.Name, func(t *testing.T) {
scenario.Store.Insert(&testEndpoint, &firstResult)
scenario.Store.Insert(&testEndpoint, &secondResult)
scenario.Store.InsertEndpointResult(&testEndpoint, &firstResult)
scenario.Store.InsertEndpointResult(&testEndpoint, &secondResult)
// Can't be bothered dealing with timezone issues on the worker that runs the automated tests
endpointStatuses, err := scenario.Store.GetAllEndpointStatuses(paging.NewEndpointStatusParams().WithResults(1, 20).WithEvents(1, 50))
if err != nil {
@@ -302,8 +302,8 @@ func TestStore_GetEndpointStatusPage1IsHasMoreRecentResultsThanPage2(t *testing.
secondResult.Timestamp = now
for _, scenario := range scenarios {
t.Run(scenario.Name, func(t *testing.T) {
scenario.Store.Insert(&testEndpoint, &firstResult)
scenario.Store.Insert(&testEndpoint, &secondResult)
scenario.Store.InsertEndpointResult(&testEndpoint, &firstResult)
scenario.Store.InsertEndpointResult(&testEndpoint, &secondResult)
endpointStatusPage1, err := scenario.Store.GetEndpointStatusByKey(testEndpoint.Key(), paging.NewEndpointStatusParams().WithResults(1, 1))
if err != nil {
t.Error("shouldn't have returned an error, got", err.Error())
@@ -345,8 +345,8 @@ func TestStore_GetUptimeByKey(t *testing.T) {
if _, err := scenario.Store.GetUptimeByKey(testEndpoint.Key(), time.Now().Add(-time.Hour), time.Now()); err != common.ErrEndpointNotFound {
t.Errorf("should've returned not found because there's nothing yet, got %v", err)
}
scenario.Store.Insert(&testEndpoint, &firstResult)
scenario.Store.Insert(&testEndpoint, &secondResult)
scenario.Store.InsertEndpointResult(&testEndpoint, &firstResult)
scenario.Store.InsertEndpointResult(&testEndpoint, &secondResult)
if uptime, _ := scenario.Store.GetUptimeByKey(testEndpoint.Key(), now.Add(-time.Hour), time.Now()); uptime != 0.5 {
t.Errorf("the uptime over the past 1h should've been 0.5, got %f", uptime)
}
@@ -380,10 +380,10 @@ func TestStore_GetAverageResponseTimeByKey(t *testing.T) {
fourthResult.Timestamp = now
for _, scenario := range scenarios {
t.Run(scenario.Name, func(t *testing.T) {
scenario.Store.Insert(&testEndpoint, &firstResult)
scenario.Store.Insert(&testEndpoint, &secondResult)
scenario.Store.Insert(&testEndpoint, &thirdResult)
scenario.Store.Insert(&testEndpoint, &fourthResult)
scenario.Store.InsertEndpointResult(&testEndpoint, &firstResult)
scenario.Store.InsertEndpointResult(&testEndpoint, &secondResult)
scenario.Store.InsertEndpointResult(&testEndpoint, &thirdResult)
scenario.Store.InsertEndpointResult(&testEndpoint, &fourthResult)
if averageResponseTime, err := scenario.Store.GetAverageResponseTimeByKey(testEndpoint.Key(), now.Add(-48*time.Hour), now.Add(-24*time.Hour)); err == nil {
if averageResponseTime != 0 {
t.Errorf("expected average response time to be 0ms, got %v", averageResponseTime)
@@ -437,10 +437,10 @@ func TestStore_GetHourlyAverageResponseTimeByKey(t *testing.T) {
fourthResult.Timestamp = now
for _, scenario := range scenarios {
t.Run(scenario.Name, func(t *testing.T) {
scenario.Store.Insert(&testEndpoint, &firstResult)
scenario.Store.Insert(&testEndpoint, &secondResult)
scenario.Store.Insert(&testEndpoint, &thirdResult)
scenario.Store.Insert(&testEndpoint, &fourthResult)
scenario.Store.InsertEndpointResult(&testEndpoint, &firstResult)
scenario.Store.InsertEndpointResult(&testEndpoint, &secondResult)
scenario.Store.InsertEndpointResult(&testEndpoint, &thirdResult)
scenario.Store.InsertEndpointResult(&testEndpoint, &fourthResult)
hourlyAverageResponseTime, err := scenario.Store.GetHourlyAverageResponseTimeByKey(testEndpoint.Key(), now.Add(-24*time.Hour), now)
if err != nil {
t.Error("shouldn't have returned an error, got", err)
@@ -468,8 +468,8 @@ func TestStore_Insert(t *testing.T) {
secondResult.Timestamp = now
for _, scenario := range scenarios {
t.Run(scenario.Name, func(t *testing.T) {
scenario.Store.Insert(&testEndpoint, &firstResult)
scenario.Store.Insert(&testEndpoint, &secondResult)
scenario.Store.InsertEndpointResult(&testEndpoint, &firstResult)
scenario.Store.InsertEndpointResult(&testEndpoint, &secondResult)
ss, err := scenario.Store.GetEndpointStatusByKey(testEndpoint.Key(), paging.NewEndpointStatusParams().WithEvents(1, storage.DefaultMaximumNumberOfEvents).WithResults(1, storage.DefaultMaximumNumberOfResults))
if err != nil {
t.Error("shouldn't have returned an error, got", err)
@@ -545,8 +545,8 @@ func TestStore_DeleteAllEndpointStatusesNotInKeys(t *testing.T) {
r := &testSuccessfulResult
for _, scenario := range scenarios {
t.Run(scenario.Name, func(t *testing.T) {
scenario.Store.Insert(&firstEndpoint, r)
scenario.Store.Insert(&secondEndpoint, r)
scenario.Store.InsertEndpointResult(&firstEndpoint, r)
scenario.Store.InsertEndpointResult(&secondEndpoint, r)
if ss, _ := scenario.Store.GetEndpointStatusByKey(firstEndpoint.Key(), paging.NewEndpointStatusParams()); ss == nil {
t.Fatal("firstEndpoint should exist, got", ss)
}

80
watchdog/endpoint.go
Normal file
@@ -0,0 +1,80 @@
package watchdog

import (
"context"
"time"

"github.com/TwiN/gatus/v5/config"
"github.com/TwiN/gatus/v5/config/endpoint"
"github.com/TwiN/gatus/v5/metrics"
"github.com/TwiN/gatus/v5/storage/store"
"github.com/TwiN/logr"
)

// monitorEndpoint a single endpoint in a loop
func monitorEndpoint(ep *endpoint.Endpoint, cfg *config.Config, extraLabels []string, ctx context.Context) {
// Run it immediately on start
executeEndpoint(ep, cfg, extraLabels)
// Loop for the next executions
ticker := time.NewTicker(ep.Interval)
defer ticker.Stop()
for {
select {
case <-ctx.Done():
logr.Warnf("[watchdog.monitorEndpoint] Canceling current execution of group=%s; endpoint=%s; key=%s", ep.Group, ep.Name, ep.Key())
return
case <-ticker.C:
executeEndpoint(ep, cfg, extraLabels)
}
}
// Just in case somebody wandered all the way to here and wonders, "what about ExternalEndpoints?"
// Alerting is checked every time an external endpoint is pushed to Gatus, so they're not monitored
// periodically like they are for normal endpoints.
}

func executeEndpoint(ep *endpoint.Endpoint, cfg *config.Config, extraLabels []string) {
// Acquire semaphore to limit concurrent endpoint monitoring
if err := monitoringSemaphore.Acquire(ctx, 1); err != nil {
// Only fails if context is cancelled (during shutdown)
logr.Debugf("[watchdog.executeEndpoint] Context cancelled, skipping execution: %s", err.Error())
return
}
defer monitoringSemaphore.Release(1)
// If there's a connectivity checker configured, check if Gatus has internet connectivity
if cfg.Connectivity != nil && cfg.Connectivity.Checker != nil && !cfg.Connectivity.Checker.IsConnected() {
logr.Infof("[watchdog.executeEndpoint] No connectivity; skipping execution")
return
}
logr.Debugf("[watchdog.executeEndpoint] Monitoring group=%s; endpoint=%s; key=%s", ep.Group, ep.Name, ep.Key())
result := ep.EvaluateHealth()
if cfg.Metrics {
metrics.PublishMetricsForEndpoint(ep, result, extraLabels)
}
UpdateEndpointStatus(ep, result)
if logr.GetThreshold() == logr.LevelDebug && !result.Success {
logr.Debugf("[watchdog.executeEndpoint] Monitored group=%s; endpoint=%s; key=%s; success=%v; errors=%d; duration=%s; body=%s", ep.Group, ep.Name, ep.Key(), result.Success, len(result.Errors), result.Duration.Round(time.Millisecond), result.Body)
} else {
logr.Infof("[watchdog.executeEndpoint] Monitored group=%s; endpoint=%s; key=%s; success=%v; errors=%d; duration=%s", ep.Group, ep.Name, ep.Key(), result.Success, len(result.Errors), result.Duration.Round(time.Millisecond))
}
inEndpointMaintenanceWindow := false
for _, maintenanceWindow := range ep.MaintenanceWindows {
if maintenanceWindow.IsUnderMaintenance() {
logr.Debug("[watchdog.executeEndpoint] Under endpoint maintenance window")
inEndpointMaintenanceWindow = true
}
}
if !cfg.Maintenance.IsUnderMaintenance() && !inEndpointMaintenanceWindow {
// TODO: Consider moving this after the monitoring lock is unlocked? I mean, how much noise can a single alerting provider cause...
HandleAlerting(ep, result, cfg.Alerting)
} else {
logr.Debug("[watchdog.executeEndpoint] Not handling alerting because currently in the maintenance window")
}
logr.Debugf("[watchdog.executeEndpoint] Waiting for interval=%s before monitoring group=%s endpoint=%s (key=%s) again", ep.Interval, ep.Group, ep.Name, ep.Key())
}

// UpdateEndpointStatus persists the endpoint result in the storage
func UpdateEndpointStatus(ep *endpoint.Endpoint, result *endpoint.Result) {
	if err := store.Get().InsertEndpointResult(ep, result); err != nil {
		logr.Errorf("[watchdog.UpdateEndpointStatus] Failed to insert result in storage: %s", err.Error())
	}
}

83
watchdog/external_endpoint.go
Normal file
@@ -0,0 +1,83 @@
package watchdog

import (
"context"
"time"

"github.com/TwiN/gatus/v5/config"
"github.com/TwiN/gatus/v5/config/endpoint"
"github.com/TwiN/gatus/v5/metrics"
"github.com/TwiN/gatus/v5/storage/store"
"github.com/TwiN/logr"
)

func monitorExternalEndpointHeartbeat(ee *endpoint.ExternalEndpoint, cfg *config.Config, extraLabels []string, ctx context.Context) {
ticker := time.NewTicker(ee.Heartbeat.Interval)
defer ticker.Stop()
for {
select {
case <-ctx.Done():
logr.Warnf("[watchdog.monitorExternalEndpointHeartbeat] Canceling current execution of group=%s; endpoint=%s; key=%s", ee.Group, ee.Name, ee.Key())
return
case <-ticker.C:
executeExternalEndpointHeartbeat(ee, cfg, extraLabels)
}
}
}

func executeExternalEndpointHeartbeat(ee *endpoint.ExternalEndpoint, cfg *config.Config, extraLabels []string) {
// Acquire semaphore to limit concurrent external endpoint monitoring
if err := monitoringSemaphore.Acquire(ctx, 1); err != nil {
// Only fails if context is cancelled (during shutdown)
logr.Debugf("[watchdog.executeExternalEndpointHeartbeat] Context cancelled, skipping execution: %s", err.Error())
return
}
defer monitoringSemaphore.Release(1)
// If there's a connectivity checker configured, check if Gatus has internet connectivity
if cfg.Connectivity != nil && cfg.Connectivity.Checker != nil && !cfg.Connectivity.Checker.IsConnected() {
logr.Infof("[watchdog.monitorExternalEndpointHeartbeat] No connectivity; skipping execution")
return
}
logr.Debugf("[watchdog.monitorExternalEndpointHeartbeat] Checking heartbeat for group=%s; endpoint=%s; key=%s", ee.Group, ee.Name, ee.Key())
convertedEndpoint := ee.ToEndpoint()
hasReceivedResultWithinHeartbeatInterval, err := store.Get().HasEndpointStatusNewerThan(ee.Key(), time.Now().Add(-ee.Heartbeat.Interval))
if err != nil {
logr.Errorf("[watchdog.monitorExternalEndpointHeartbeat] Failed to check if endpoint has received a result within the heartbeat interval: %s", err.Error())
return
}
if hasReceivedResultWithinHeartbeatInterval {
// If we received a result within the heartbeat interval, we don't want to create a successful result, so we
// skip the rest. We don't have to worry about alerting or metrics, because if the previous heartbeat failed
// while this one succeeds, it implies that there was a new result pushed, and that result being pushed
// should've resolved the alert.
logr.Infof("[watchdog.monitorExternalEndpointHeartbeat] Checked heartbeat for group=%s; endpoint=%s; key=%s; success=%v; errors=%d", ee.Group, ee.Name, ee.Key(), hasReceivedResultWithinHeartbeatInterval, 0)
return
}
// All code after this point assumes the heartbeat failed
result := &endpoint.Result{
Timestamp: time.Now(),
Success: false,
Errors: []string{"heartbeat: no update received within " + ee.Heartbeat.Interval.String()},
}
if cfg.Metrics {
metrics.PublishMetricsForEndpoint(convertedEndpoint, result, extraLabels)
}
UpdateEndpointStatus(convertedEndpoint, result)
logr.Infof("[watchdog.monitorExternalEndpointHeartbeat] Checked heartbeat for group=%s; endpoint=%s; key=%s; success=%v; errors=%d; duration=%s", ee.Group, ee.Name, ee.Key(), result.Success, len(result.Errors), result.Duration.Round(time.Millisecond))
inEndpointMaintenanceWindow := false
for _, maintenanceWindow := range ee.MaintenanceWindows {
if maintenanceWindow.IsUnderMaintenance() {
logr.Debug("[watchdog.monitorExternalEndpointHeartbeat] Under endpoint maintenance window")
inEndpointMaintenanceWindow = true
}
}
if !cfg.Maintenance.IsUnderMaintenance() && !inEndpointMaintenanceWindow {
HandleAlerting(convertedEndpoint, result, cfg.Alerting)
// Sync the failure/success counters back to the external endpoint
ee.NumberOfSuccessesInARow = convertedEndpoint.NumberOfSuccessesInARow
ee.NumberOfFailuresInARow = convertedEndpoint.NumberOfFailuresInARow
} else {
logr.Debug("[watchdog.monitorExternalEndpointHeartbeat] Not handling alerting because currently in the maintenance window")
}
logr.Debugf("[watchdog.monitorExternalEndpointHeartbeat] Waiting for interval=%s before checking heartbeat for group=%s endpoint=%s (key=%s) again", ee.Heartbeat.Interval, ee.Group, ee.Name, ee.Key())
}

86
watchdog/suite.go
Normal file
@@ -0,0 +1,86 @@
package watchdog

import (
"context"
"time"

"github.com/TwiN/gatus/v5/config"
"github.com/TwiN/gatus/v5/config/suite"
"github.com/TwiN/gatus/v5/metrics"
"github.com/TwiN/gatus/v5/storage/store"
"github.com/TwiN/logr"
)

// monitorSuite monitors a suite by executing it at regular intervals
func monitorSuite(s *suite.Suite, cfg *config.Config, extraLabels []string, ctx context.Context) {
// Execute immediately on start
executeSuite(s, cfg, extraLabels)
// Set up ticker for periodic execution
ticker := time.NewTicker(s.Interval)
defer ticker.Stop()
for {
select {
case <-ctx.Done():
logr.Warnf("[watchdog.monitorSuite] Canceling monitoring for suite=%s", s.Name)
return
case <-ticker.C:
executeSuite(s, cfg, extraLabels)
}
}
}

// executeSuite executes a suite with proper concurrency control
func executeSuite(s *suite.Suite, cfg *config.Config, extraLabels []string) {
// Acquire semaphore to limit concurrent suite monitoring
if err := monitoringSemaphore.Acquire(ctx, 1); err != nil {
// Only fails if context is cancelled (during shutdown)
logr.Debugf("[watchdog.executeSuite] Context cancelled, skipping execution: %s", err.Error())
return
}
defer monitoringSemaphore.Release(1)
// Check connectivity if configured
if cfg.Connectivity != nil && cfg.Connectivity.Checker != nil && !cfg.Connectivity.Checker.IsConnected() {
logr.Infof("[watchdog.executeSuite] No connectivity; skipping suite=%s", s.Name)
return
}
logr.Debugf("[watchdog.executeSuite] Monitoring group=%s; suite=%s; key=%s", s.Group, s.Name, s.Key())
// Execute the suite using its Execute method
result := s.Execute()
// Publish metrics for the suite execution
if cfg.Metrics {
metrics.PublishMetricsForSuite(s, result, extraLabels)
}
// Store individual endpoint results and handle alerting
for i, ep := range s.Endpoints {
if i < len(result.EndpointResults) {
epResult := result.EndpointResults[i]
// Store the endpoint result
UpdateEndpointStatus(ep, epResult)
// Handle alerting if configured and not under maintenance
if cfg.Alerting != nil && !cfg.Maintenance.IsUnderMaintenance() {
// Check if endpoint is under maintenance
inEndpointMaintenanceWindow := false
for _, maintenanceWindow := range ep.MaintenanceWindows {
if maintenanceWindow.IsUnderMaintenance() {
logr.Debug("[watchdog.executeSuite] Endpoint under maintenance window")
inEndpointMaintenanceWindow = true
break
}
}
if !inEndpointMaintenanceWindow {
HandleAlerting(ep, epResult, cfg.Alerting)
}
}
}
}
logr.Infof("[watchdog.executeSuite] Completed suite=%s; success=%v; errors=%d; duration=%v; endpoints_executed=%d/%d", s.Name, result.Success, len(result.Errors), result.Duration, len(result.EndpointResults), len(s.Endpoints))
// Store result in database
UpdateSuiteStatus(s, result)
}

// UpdateSuiteStatus persists the suite result in the database
func UpdateSuiteStatus(s *suite.Suite, result *suite.Result) {
	if err := store.Get().InsertSuiteResult(s, result); err != nil {
		logr.Errorf("[watchdog.executeSuite] Failed to insert suite result for suite=%s: %v", s.Name, err)
	}
}

@@ -2,23 +2,22 @@ package watchdog

import (
"context"
"sync"
"time"

"github.com/TwiN/gatus/v5/alerting"
"github.com/TwiN/gatus/v5/config"
"github.com/TwiN/gatus/v5/config/connectivity"
"github.com/TwiN/gatus/v5/config/endpoint"
"github.com/TwiN/gatus/v5/config/maintenance"
"github.com/TwiN/gatus/v5/metrics"
"github.com/TwiN/gatus/v5/storage/store"
"github.com/TwiN/logr"
"golang.org/x/sync/semaphore"
)

const (
// UnlimitedConcurrencyWeight is the semaphore weight used when concurrency is set to 0 (unlimited).
// This provides a practical upper limit while allowing very high concurrency for large deployments.
UnlimitedConcurrencyWeight = 10000
)

var (
// monitoringMutex is used to prevent multiple endpoint from being evaluated at the same time.
// monitoringSemaphore is used to limit the number of endpoints/suites that can be evaluated concurrently.
// Without this, conditions using response time may become inaccurate.
monitoringMutex sync.Mutex
monitoringSemaphore *semaphore.Weighted

ctx context.Context
cancelFunc context.CancelFunc
@@ -27,12 +26,20 @@ var (
// Monitor loops over each endpoint and starts a goroutine to monitor each endpoint separately
func Monitor(cfg *config.Config) {
ctx, cancelFunc = context.WithCancel(context.Background())
// Initialize semaphore based on concurrency configuration
if cfg.Concurrency == 0 {
// Unlimited concurrency - use a very high limit
monitoringSemaphore = semaphore.NewWeighted(UnlimitedConcurrencyWeight)
} else {
// Limited concurrency based on configuration
monitoringSemaphore = semaphore.NewWeighted(int64(cfg.Concurrency))
}
extraLabels := cfg.GetUniqueExtraMetricLabels()
for _, endpoint := range cfg.Endpoints {
if endpoint.IsEnabled() {
// To prevent multiple requests from running at the same time, we'll wait for a little before each iteration
time.Sleep(777 * time.Millisecond)
go monitor(endpoint, cfg.Alerting, cfg.Maintenance, cfg.Connectivity, cfg.DisableMonitoringLock, cfg.Metrics, extraLabels, ctx)
time.Sleep(222 * time.Millisecond)
go monitorEndpoint(endpoint, cfg, extraLabels, ctx)
}
}
for _, externalEndpoint := range cfg.ExternalEndpoints {
@@ -40,153 +47,27 @@ func Monitor(cfg *config.Config) {
// If the external endpoint does not use heartbeat, then it does not need to be monitored periodically, because
// alerting is checked every time an external endpoint is pushed to Gatus, unlike normal endpoints.
if externalEndpoint.IsEnabled() && externalEndpoint.Heartbeat.Interval > 0 {
go monitorExternalEndpointHeartbeat(externalEndpoint, cfg.Alerting, cfg.Maintenance, cfg.Connectivity, cfg.DisableMonitoringLock, cfg.Metrics, ctx, extraLabels)
go monitorExternalEndpointHeartbeat(externalEndpoint, cfg, extraLabels, ctx)
}
}
}

// monitor a single endpoint in a loop
func monitor(ep *endpoint.Endpoint, alertingConfig *alerting.Config, maintenanceConfig *maintenance.Config, connectivityConfig *connectivity.Config, disableMonitoringLock bool, enabledMetrics bool, extraLabels []string, ctx context.Context) {
// Run it immediately on start
execute(ep, alertingConfig, maintenanceConfig, connectivityConfig, disableMonitoringLock, enabledMetrics, extraLabels)
// Loop for the next executions
ticker := time.NewTicker(ep.Interval)
defer ticker.Stop()
for {
select {
case <-ctx.Done():
logr.Warnf("[watchdog.monitor] Canceling current execution of group=%s; endpoint=%s; key=%s", ep.Group, ep.Name, ep.Key())
return
case <-ticker.C:
execute(ep, alertingConfig, maintenanceConfig, connectivityConfig, disableMonitoringLock, enabledMetrics, extraLabels)
for _, suite := range cfg.Suites {
if suite.IsEnabled() {
time.Sleep(222 * time.Millisecond)
|
||||
go monitorSuite(suite, cfg, extraLabels, ctx)
|
||||
}
|
||||
}
|
||||
// Just in case somebody wandered all the way to here and wonders, "what about ExternalEndpoints?"
|
||||
// Alerting is checked every time an external endpoint is pushed to Gatus, so they're not monitored
|
||||
// periodically like they are for normal endpoints.
|
||||
}
|
||||
|
||||
func execute(ep *endpoint.Endpoint, alertingConfig *alerting.Config, maintenanceConfig *maintenance.Config, connectivityConfig *connectivity.Config, disableMonitoringLock bool, enabledMetrics bool, extraLabels []string) {
|
||||
if !disableMonitoringLock {
|
||||
// By placing the lock here, we prevent multiple endpoints from being monitored at the exact same time, which
|
||||
// could cause performance issues and return inaccurate results
|
||||
monitoringMutex.Lock()
|
||||
defer monitoringMutex.Unlock()
|
||||
}
|
||||
// If there's a connectivity checker configured, check if Gatus has internet connectivity
|
||||
if connectivityConfig != nil && connectivityConfig.Checker != nil && !connectivityConfig.Checker.IsConnected() {
|
||||
logr.Infof("[watchdog.execute] No connectivity; skipping execution")
|
||||
return
|
||||
}
|
||||
logr.Debugf("[watchdog.execute] Monitoring group=%s; endpoint=%s; key=%s", ep.Group, ep.Name, ep.Key())
|
||||
result := ep.EvaluateHealth()
|
||||
if enabledMetrics {
|
||||
metrics.PublishMetricsForEndpoint(ep, result, extraLabels)
|
||||
}
|
||||
UpdateEndpointStatuses(ep, result)
|
||||
if logr.GetThreshold() == logr.LevelDebug && !result.Success {
|
||||
logr.Debugf("[watchdog.execute] Monitored group=%s; endpoint=%s; key=%s; success=%v; errors=%d; duration=%s; body=%s", ep.Group, ep.Name, ep.Key(), result.Success, len(result.Errors), result.Duration.Round(time.Millisecond), result.Body)
|
||||
} else {
|
||||
logr.Infof("[watchdog.execute] Monitored group=%s; endpoint=%s; key=%s; success=%v; errors=%d; duration=%s", ep.Group, ep.Name, ep.Key(), result.Success, len(result.Errors), result.Duration.Round(time.Millisecond))
|
||||
}
|
||||
inEndpointMaintenanceWindow := false
|
||||
for _, maintenanceWindow := range ep.MaintenanceWindows {
|
||||
if maintenanceWindow.IsUnderMaintenance() {
|
||||
logr.Debug("[watchdog.execute] Under endpoint maintenance window")
|
||||
inEndpointMaintenanceWindow = true
|
||||
}
|
||||
}
|
||||
if !maintenanceConfig.IsUnderMaintenance() && !inEndpointMaintenanceWindow {
|
||||
// TODO: Consider moving this after the monitoring lock is unlocked? I mean, how much noise can a single alerting provider cause...
|
||||
HandleAlerting(ep, result, alertingConfig)
|
||||
} else {
|
||||
logr.Debug("[watchdog.execute] Not handling alerting because currently in the maintenance window")
|
||||
}
|
||||
logr.Debugf("[watchdog.execute] Waiting for interval=%s before monitoring group=%s endpoint=%s (key=%s) again", ep.Interval, ep.Group, ep.Name, ep.Key())
|
||||
}
|
||||
|
||||
func monitorExternalEndpointHeartbeat(ee *endpoint.ExternalEndpoint, alertingConfig *alerting.Config, maintenanceConfig *maintenance.Config, connectivityConfig *connectivity.Config, disableMonitoringLock bool, enabledMetrics bool, ctx context.Context, extraLabels []string) {
|
||||
ticker := time.NewTicker(ee.Heartbeat.Interval)
|
||||
defer ticker.Stop()
|
||||
for {
|
||||
select {
|
||||
case <-ctx.Done():
|
||||
logr.Warnf("[watchdog.monitorExternalEndpointHeartbeat] Canceling current execution of group=%s; endpoint=%s; key=%s", ee.Group, ee.Name, ee.Key())
|
||||
return
|
||||
case <-ticker.C:
|
||||
executeExternalEndpointHeartbeat(ee, alertingConfig, maintenanceConfig, connectivityConfig, disableMonitoringLock, enabledMetrics, extraLabels)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func executeExternalEndpointHeartbeat(ee *endpoint.ExternalEndpoint, alertingConfig *alerting.Config, maintenanceConfig *maintenance.Config, connectivityConfig *connectivity.Config, disableMonitoringLock bool, enabledMetrics bool, extraLabels []string) {
|
||||
if !disableMonitoringLock {
|
||||
// By placing the lock here, we prevent multiple endpoints from being monitored at the exact same time, which
|
||||
// could cause performance issues and return inaccurate results
|
||||
monitoringMutex.Lock()
|
||||
defer monitoringMutex.Unlock()
|
||||
}
|
||||
// If there's a connectivity checker configured, check if Gatus has internet connectivity
|
||||
if connectivityConfig != nil && connectivityConfig.Checker != nil && !connectivityConfig.Checker.IsConnected() {
|
||||
logr.Infof("[watchdog.monitorExternalEndpointHeartbeat] No connectivity; skipping execution")
|
||||
return
|
||||
}
|
||||
logr.Debugf("[watchdog.monitorExternalEndpointHeartbeat] Checking heartbeat for group=%s; endpoint=%s; key=%s", ee.Group, ee.Name, ee.Key())
|
||||
convertedEndpoint := ee.ToEndpoint()
|
||||
hasReceivedResultWithinHeartbeatInterval, err := store.Get().HasEndpointStatusNewerThan(ee.Key(), time.Now().Add(-ee.Heartbeat.Interval))
|
||||
if err != nil {
|
||||
logr.Errorf("[watchdog.monitorExternalEndpointHeartbeat] Failed to check if endpoint has received a result within the heartbeat interval: %s", err.Error())
|
||||
return
|
||||
}
|
||||
if hasReceivedResultWithinHeartbeatInterval {
|
||||
// If we received a result within the heartbeat interval, we don't want to create a successful result, so we
|
||||
// skip the rest. We don't have to worry about alerting or metrics, because if the previous heartbeat failed
|
||||
// while this one succeeds, it implies that there was a new result pushed, and that result being pushed
|
||||
// should've resolved the alert.
|
||||
logr.Infof("[watchdog.monitorExternalEndpointHeartbeat] Checked heartbeat for group=%s; endpoint=%s; key=%s; success=%v; errors=%d", ee.Group, ee.Name, ee.Key(), hasReceivedResultWithinHeartbeatInterval, 0)
|
||||
return
|
||||
}
|
||||
// All code after this point assumes the heartbeat failed
|
||||
result := &endpoint.Result{
|
||||
Timestamp: time.Now(),
|
||||
Success: false,
|
||||
Errors: []string{"heartbeat: no update received within " + ee.Heartbeat.Interval.String()},
|
||||
}
|
||||
if enabledMetrics {
|
||||
metrics.PublishMetricsForEndpoint(convertedEndpoint, result, extraLabels)
|
||||
}
|
||||
UpdateEndpointStatuses(convertedEndpoint, result)
|
||||
logr.Infof("[watchdog.monitorExternalEndpointHeartbeat] Checked heartbeat for group=%s; endpoint=%s; key=%s; success=%v; errors=%d; duration=%s", ee.Group, ee.Name, ee.Key(), result.Success, len(result.Errors), result.Duration.Round(time.Millisecond))
|
||||
inEndpointMaintenanceWindow := false
|
||||
for _, maintenanceWindow := range ee.MaintenanceWindows {
|
||||
if maintenanceWindow.IsUnderMaintenance() {
|
||||
logr.Debug("[watchdog.monitorExternalEndpointHeartbeat] Under endpoint maintenance window")
|
||||
inEndpointMaintenanceWindow = true
|
||||
}
|
||||
}
|
||||
if !maintenanceConfig.IsUnderMaintenance() && !inEndpointMaintenanceWindow {
|
||||
HandleAlerting(convertedEndpoint, result, alertingConfig)
|
||||
// Sync the failure/success counters back to the external endpoint
|
||||
ee.NumberOfSuccessesInARow = convertedEndpoint.NumberOfSuccessesInARow
|
||||
ee.NumberOfFailuresInARow = convertedEndpoint.NumberOfFailuresInARow
|
||||
} else {
|
||||
logr.Debug("[watchdog.monitorExternalEndpointHeartbeat] Not handling alerting because currently in the maintenance window")
|
||||
}
|
||||
logr.Debugf("[watchdog.monitorExternalEndpointHeartbeat] Waiting for interval=%s before checking heartbeat for group=%s endpoint=%s (key=%s) again", ee.Heartbeat.Interval, ee.Group, ee.Name, ee.Key())
|
||||
}
|
||||

// UpdateEndpointStatuses updates the slice of endpoint statuses
func UpdateEndpointStatuses(ep *endpoint.Endpoint, result *endpoint.Result) {
	if err := store.Get().Insert(ep, result); err != nil {
		logr.Errorf("[watchdog.UpdateEndpointStatuses] Failed to insert result in storage: %s", err.Error())
	}
}

// Shutdown stops monitoring all endpoints
func Shutdown(cfg *config.Config) {
	// Disable all the old HTTP connections
	// Stop in-flight HTTP connections
	for _, ep := range cfg.Endpoints {
		ep.Close()
	}
	for _, s := range cfg.Suites {
		for _, ep := range s.Endpoints {
			ep.Close()
		}
	}
	cancelFunc()
}

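This commit deprecates the single `disable-monitoring-lock` mutex in favor of a weighted semaphore sized by the new `concurrency` setting (default 3 per the commit message, with `0` mapped to the practical cap `UnlimitedConcurrencyWeight = 10000`). The real implementation uses `golang.org/x/sync/semaphore.Weighted`; the same bounded-concurrency pattern can be sketched with only the standard library using a buffered channel as the semaphore. All names below are illustrative:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// runBounded executes every task but allows at most `limit` of them in flight,
// which is what the watchdog's weighted semaphore achieves for endpoints/suites.
func runBounded(limit int, tasks []func()) int64 {
	sem := make(chan struct{}, limit) // each buffered slot is one unit of semaphore weight
	var inFlight, peak int64
	var wg sync.WaitGroup
	for _, task := range tasks {
		wg.Add(1)
		go func(t func()) {
			defer wg.Done()
			sem <- struct{}{}        // Acquire(1): blocks once `limit` tasks are running
			defer func() { <-sem }() // Release(1)
			if n := atomic.AddInt64(&inFlight, 1); n > atomic.LoadInt64(&peak) {
				atomic.StoreInt64(&peak, n) // track observed peak concurrency
			}
			t()
			atomic.AddInt64(&inFlight, -1)
		}(task)
	}
	wg.Wait()
	return peak
}

func main() {
	tasks := make([]func(), 20)
	for i := range tasks {
		tasks[i] = func() {} // stand-in for evaluating one endpoint or suite
	}
	// With a limit of 3 (the new default concurrency), the peak can never exceed 3.
	fmt.Println(runBounded(3, tasks) <= 3) // true
}
```

The `concurrency: 0` escape hatch simply makes `limit` so large (10000) that the semaphore effectively never blocks, rather than removing the semaphore entirely.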
@@ -61,7 +61,7 @@ import { computed } from 'vue'
import { useRouter } from 'vue-router'
import { Card, CardHeader, CardTitle, CardContent } from '@/components/ui/card'
import StatusBadge from '@/components/StatusBadge.vue'
import { helper } from '@/mixins/helper'
import { generatePrettyTimeAgo } from '@/utils/time'

const router = useRouter()

@@ -145,12 +145,12 @@ const formattedResponseTime = computed(() => {

const oldestResultTime = computed(() => {
  if (!props.endpoint.results || props.endpoint.results.length === 0) return ''
  return helper.methods.generatePrettyTimeAgo(props.endpoint.results[0].timestamp)
  return generatePrettyTimeAgo(props.endpoint.results[0].timestamp)
})

const newestResultTime = computed(() => {
  if (!props.endpoint.results || props.endpoint.results.length === 0) return ''
  return helper.methods.generatePrettyTimeAgo(props.endpoint.results[props.endpoint.results.length - 1].timestamp)
  return generatePrettyTimeAgo(props.endpoint.results[props.endpoint.results.length - 1].timestamp)
})

const navigateToDetails = () => {
133	web/app/src/components/FlowStep.vue	Normal file
@@ -0,0 +1,133 @@
<template>
  <div class="flex items-start gap-4 relative group hover:bg-accent/30 rounded-lg p-2 -m-2 transition-colors cursor-pointer"
       @click="$emit('step-click')">
    <!-- Step circle with status icon -->
    <div class="relative flex-shrink-0">
      <!-- Connection line from previous step -->
      <div v-if="index > 0" :class="incomingLineClasses" class="absolute left-1/2 bottom-8 w-0.5 h-4 -translate-x-px"></div>

      <div :class="circleClasses" class="w-8 h-8 rounded-full flex items-center justify-center">
        <component :is="statusIcon" class="w-4 h-4" />
      </div>

      <!-- Connection line to next step -->
      <div v-if="!isLast" :class="connectionLineClasses" class="absolute left-1/2 top-8 w-0.5 h-4 -translate-x-px"></div>
    </div>

    <!-- Step content -->
    <div class="flex-1 min-w-0 pt-1">
      <div class="flex items-center justify-between gap-2 mb-1">
        <h4 class="font-medium text-sm truncate">{{ step.name }}</h4>
        <span class="text-xs text-muted-foreground whitespace-nowrap">
          {{ formatDuration(step.duration) }}
        </span>
      </div>

      <!-- Step badges -->
      <div class="flex flex-wrap gap-1">
        <span v-if="step.isAlwaysRun" class="inline-flex items-center gap-1 px-2 py-1 text-xs font-medium bg-blue-100 text-blue-800 dark:bg-blue-900 dark:text-blue-200 rounded-md">
          <RotateCcw class="w-3 h-3" />
          Always Run
        </span>
        <span v-if="step.errors?.length" class="inline-flex items-center px-2 py-1 text-xs font-medium bg-red-100 text-red-800 dark:bg-red-900 dark:text-red-200 rounded-md">
          {{ step.errors.length }} error{{ step.errors.length !== 1 ? 's' : '' }}
        </span>
      </div>
    </div>
  </div>
</template>

<script setup>
import { computed } from 'vue'
import { CheckCircle, XCircle, SkipForward, RotateCcw, Pause } from 'lucide-vue-next'
import { formatDuration } from '@/utils/format'

const props = defineProps({
  step: { type: Object, required: true },
  index: { type: Number, required: true },
  isLast: { type: Boolean, default: false },
  previousStep: { type: Object, default: null }
})

defineEmits(['step-click'])

// Status icon mapping
const statusIcon = computed(() => {
  switch (props.step.status) {
    case 'success': return CheckCircle
    case 'failed': return XCircle
    case 'skipped': return SkipForward
    case 'not-started': return Pause
    default: return Pause
  }
})

// Circle styling classes
const circleClasses = computed(() => {
  const baseClasses = 'border-2'

  if (props.step.isAlwaysRun) {
    // Always-run endpoints get a special ring effect
    switch (props.step.status) {
      case 'success':
        return `${baseClasses} bg-green-500 text-white border-green-600 ring-2 ring-blue-200 dark:ring-blue-800`
      case 'failed':
        return `${baseClasses} bg-red-500 text-white border-red-600 ring-2 ring-blue-200 dark:ring-blue-800`
      default:
        return `${baseClasses} bg-blue-500 text-white border-blue-600 ring-2 ring-blue-200 dark:ring-blue-800`
    }
  }

  switch (props.step.status) {
    case 'success':
      return `${baseClasses} bg-green-500 text-white border-green-600`
    case 'failed':
      return `${baseClasses} bg-red-500 text-white border-red-600`
    case 'skipped':
      return `${baseClasses} bg-gray-400 text-white border-gray-500`
    case 'not-started':
      return `${baseClasses} bg-gray-200 text-gray-500 border-gray-300 dark:bg-gray-700 dark:text-gray-400 dark:border-gray-600`
    default:
      return `${baseClasses} bg-gray-200 text-gray-500 border-gray-300 dark:bg-gray-700 dark:text-gray-400 dark:border-gray-600`
  }
})

// Incoming connection line styling (from previous step to this step)
const incomingLineClasses = computed(() => {
  if (!props.previousStep) return 'bg-gray-300 dark:bg-gray-600'

  // If this step is skipped, the line should be dashed/gray
  if (props.step.status === 'skipped') {
    return 'border-l-2 border-dashed border-gray-400 bg-transparent'
  }

  // Otherwise, color based on previous step's status
  switch (props.previousStep.status) {
    case 'success':
      return 'bg-green-500'
    case 'failed':
      // If previous failed but this ran (always-run), show red line
      return 'bg-red-500'
    default:
      return 'bg-gray-300 dark:bg-gray-600'
  }
})

// Outgoing connection line styling (from this step to next)
const connectionLineClasses = computed(() => {
  const nextStep = props.step.nextStepStatus
  switch (props.step.status) {
    case 'success':
      return nextStep === 'skipped'
        ? 'bg-gray-300 dark:bg-gray-600'
        : 'bg-green-500'
    case 'failed':
      return nextStep === 'skipped'
        ? 'border-l-2 border-dashed border-gray-400 bg-transparent'
        : 'bg-red-500'
    default:
      return 'bg-gray-300 dark:bg-gray-600'
  }
})

</script>
124	web/app/src/components/SequentialFlowDiagram.vue	Normal file
@@ -0,0 +1,124 @@
<template>
  <div class="space-y-4">
    <!-- Timeline header -->
    <div class="flex items-center gap-4">
      <div class="text-sm font-medium text-muted-foreground">Start</div>
      <div class="flex-1 h-1 bg-gray-200 dark:bg-gray-700 rounded-full overflow-hidden">
        <div
          class="h-full bg-green-500 dark:bg-green-600 rounded-full transition-all duration-300 ease-out"
          :style="{ width: progressPercentage + '%' }"
        ></div>
      </div>
      <div class="text-sm font-medium text-muted-foreground">End</div>
    </div>

    <!-- Progress stats -->
    <div class="flex items-center justify-between text-xs text-muted-foreground">
      <span>{{ completedSteps }}/{{ totalSteps }} steps successful</span>
      <span v-if="totalDuration > 0">{{ formatDuration(totalDuration) }} total</span>
    </div>

    <!-- Flow steps -->
    <div class="space-y-2">
      <FlowStep
        v-for="(step, index) in flowSteps"
        :key="index"
        :step="step"
        :index="index"
        :is-last="index === flowSteps.length - 1"
        :previous-step="index > 0 ? flowSteps[index - 1] : null"
        @step-click="$emit('step-selected', step, index)"
      />
    </div>

    <!-- Legend -->
    <div class="mt-6 pt-4 border-t">
      <div class="text-sm font-medium text-muted-foreground mb-2">Status Legend</div>
      <div class="grid grid-cols-2 md:grid-cols-4 gap-3 text-xs">
        <div v-if="hasSuccessSteps" class="flex items-center gap-2">
          <div class="w-4 h-4 rounded-full bg-green-500 flex items-center justify-center">
            <CheckCircle class="w-3 h-3 text-white" />
          </div>
          <span class="text-muted-foreground">Success</span>
        </div>

        <div v-if="hasFailedSteps" class="flex items-center gap-2">
          <div class="w-4 h-4 rounded-full bg-red-500 flex items-center justify-center">
            <XCircle class="w-3 h-3 text-white" />
          </div>
          <span class="text-muted-foreground">Failed</span>
        </div>

        <div v-if="hasSkippedSteps" class="flex items-center gap-2">
          <div class="w-4 h-4 rounded-full bg-gray-400 flex items-center justify-center">
            <SkipForward class="w-3 h-3 text-white" />
          </div>
          <span class="text-muted-foreground">Skipped</span>
        </div>

        <div v-if="hasAlwaysRunSteps" class="flex items-center gap-2">
          <div class="w-4 h-4 rounded-full bg-blue-500 border-2 border-blue-200 dark:border-blue-800 flex items-center justify-center">
            <RotateCcw class="w-3 h-3 text-white" />
          </div>
          <span class="text-muted-foreground">Always Run</span>
        </div>
      </div>
    </div>
  </div>
</template>

<script setup>
import { computed } from 'vue'
import { CheckCircle, XCircle, SkipForward, RotateCcw } from 'lucide-vue-next'
import FlowStep from './FlowStep.vue'
import { formatDuration } from '@/utils/format'

const props = defineProps({
  flowSteps: {
    type: Array,
    default: () => []
  },
  progressPercentage: {
    type: Number,
    default: 0
  },
  completedSteps: {
    type: Number,
    default: 0
  },
  totalSteps: {
    type: Number,
    default: 0
  }
})

defineEmits(['step-selected'])

// Use props instead of computing locally for consistency
const completedSteps = computed(() => props.completedSteps)
const totalSteps = computed(() => props.totalSteps)

const totalDuration = computed(() => {
  return props.flowSteps.reduce((total, step) => {
    return total + (step.duration || 0)
  }, 0)
})

// Legend visibility computed properties
const hasSuccessSteps = computed(() => {
  return props.flowSteps.some(step => step.status === 'success')
})

const hasFailedSteps = computed(() => {
  return props.flowSteps.some(step => step.status === 'failed')
})

const hasSkippedSteps = computed(() => {
  return props.flowSteps.some(step => step.status === 'skipped')
})

const hasAlwaysRunSteps = computed(() => {
  return props.flowSteps.some(step => step.isAlwaysRun === true)
})

</script>
115	web/app/src/components/StepDetailsModal.vue	Normal file
@@ -0,0 +1,115 @@
<template>
  <!-- Modal backdrop -->
  <div class="fixed inset-0 bg-black/50 backdrop-blur-sm flex items-center justify-center p-4 z-50" @click="$emit('close')">
    <!-- Modal content -->
    <div class="bg-background border rounded-lg shadow-lg max-w-2xl w-full max-h-[80vh] overflow-hidden" @click.stop>
      <!-- Header -->
      <div class="flex items-center justify-between p-4 border-b">
        <div>
          <h2 class="text-lg font-semibold flex items-center gap-2">
            <component :is="statusIcon" :class="iconClasses" class="w-5 h-5" />
            {{ step.name }}
          </h2>
          <p class="text-sm text-muted-foreground mt-1">
            Step {{ index + 1 }} • {{ formatDuration(step.duration) }}
          </p>
        </div>
        <Button variant="ghost" size="icon" @click="$emit('close')">
          <X class="w-4 h-4" />
        </Button>
      </div>

      <!-- Content -->
      <div class="p-4 space-y-4 overflow-y-auto max-h-[60vh]">
        <!-- Special properties -->
        <div v-if="step.isAlwaysRun" class="flex flex-wrap gap-2">
          <div class="flex items-center gap-2 px-3 py-2 bg-blue-50 dark:bg-blue-900/30 rounded-lg border border-blue-200 dark:border-blue-700">
            <RotateCcw class="w-4 h-4 text-blue-600 dark:text-blue-400" />
            <div>
              <p class="text-sm font-medium text-blue-900 dark:text-blue-200">Always Run</p>
              <p class="text-xs text-blue-600 dark:text-blue-400">This endpoint is configured to execute even after failures</p>
            </div>
          </div>
        </div>

        <!-- Errors section -->
        <div v-if="step.errors?.length" class="space-y-2">
          <h3 class="text-sm font-medium flex items-center gap-2 text-red-600 dark:text-red-400">
            <AlertCircle class="w-4 h-4" />
            Errors ({{ step.errors.length }})
          </h3>
          <div class="space-y-2">
            <div v-for="(error, index) in step.errors" :key="index"
                 class="p-3 bg-red-50 dark:bg-red-900/50 border border-red-200 dark:border-red-700 rounded text-sm font-mono text-red-800 dark:text-red-300 break-all">
              {{ error }}
            </div>
          </div>
        </div>

        <!-- Timestamp -->
        <div v-if="step.result && step.result.timestamp" class="space-y-2">
          <h3 class="text-sm font-medium flex items-center gap-2">
            <Clock class="w-4 h-4" />
            Timestamp
          </h3>
          <p class="text-xs font-mono text-muted-foreground">{{ prettifyTimestamp(step.result.timestamp) }}</p>
        </div>

        <!-- Response details -->
        <div v-if="step.result" class="space-y-2">
          <h3 class="text-sm font-medium flex items-center gap-2">
            <Download class="w-4 h-4" />
            Response
          </h3>
          <div class="grid grid-cols-2 gap-4 text-xs">
            <div>
              <span class="text-muted-foreground">Duration:</span>
              <p class="font-mono mt-1">{{ formatDuration(step.result.duration) }}</p>
            </div>
            <div>
              <span class="text-muted-foreground">Success:</span>
              <p class="mt-1" :class="step.result.success ? 'text-green-600 dark:text-green-400' : 'text-red-600 dark:text-red-400'">
                {{ step.result.success ? 'Yes' : 'No' }}
              </p>
            </div>
          </div>
        </div>
      </div>
    </div>
  </div>
</template>

<script setup>
import { computed } from 'vue'
import { X, AlertCircle, RotateCcw, Download, CheckCircle, XCircle, SkipForward, Pause, Clock } from 'lucide-vue-next'
import { Button } from '@/components/ui/button'
import { formatDuration } from '@/utils/format'
import { prettifyTimestamp } from '@/utils/time'

const props = defineProps({
  step: { type: Object, required: true },
  index: { type: Number, required: true }
})

defineEmits(['close'])

const statusIcon = computed(() => {
  switch (props.step.status) {
    case 'success': return CheckCircle
    case 'failed': return XCircle
    case 'skipped': return SkipForward
    case 'not-started': return Pause
    default: return Pause
  }
})

const iconClasses = computed(() => {
  switch (props.step.status) {
    case 'success': return 'text-green-600 dark:text-green-400'
    case 'failed': return 'text-red-600 dark:text-red-400'
    case 'skipped': return 'text-gray-600 dark:text-gray-400'
    default: return 'text-blue-600 dark:text-blue-400'
  }
})

</script>
171	web/app/src/components/SuiteCard.vue	Normal file
@@ -0,0 +1,171 @@
<template>
  <Card class="suite h-full flex flex-col transition hover:shadow-lg hover:scale-[1.01] dark:hover:border-gray-700">
    <CardHeader class="suite-header px-3 sm:px-6 pt-3 sm:pt-6 pb-2 space-y-0">
      <div class="flex items-start justify-between gap-2 sm:gap-3">
        <div class="flex-1 min-w-0 overflow-hidden">
          <CardTitle class="text-base sm:text-lg truncate">
            <span
              class="hover:text-primary cursor-pointer hover:underline text-sm sm:text-base block truncate"
              @click="navigateToDetails"
              @keydown.enter="navigateToDetails"
              :title="suite.name"
              role="link"
              tabindex="0"
              :aria-label="`View details for suite ${suite.name}`">
              {{ suite.name }}
            </span>
          </CardTitle>
          <div class="flex items-center gap-2 text-xs sm:text-sm text-muted-foreground">
            <span v-if="suite.group" class="truncate" :title="suite.group">{{ suite.group }}</span>
            <span v-if="suite.group && endpointCount">•</span>
            <span v-if="endpointCount">{{ endpointCount }} endpoint{{ endpointCount !== 1 ? 's' : '' }}</span>
          </div>
        </div>
        <div class="flex-shrink-0 ml-2">
          <StatusBadge :status="currentStatus" />
        </div>
      </div>
    </CardHeader>
    <CardContent class="suite-content flex-1 pb-3 sm:pb-4 px-3 sm:px-6 pt-2">
      <div class="space-y-2">
        <div>
          <div class="flex items-center justify-between mb-1">
            <p class="text-xs text-muted-foreground">Success Rate: {{ successRate }}%</p>
            <p class="text-xs text-muted-foreground" v-if="averageDuration">{{ averageDuration }}ms avg</p>
          </div>
          <div class="flex gap-0.5">
            <div
              v-for="(result, index) in displayResults"
              :key="index"
              :class="[
                'flex-1 h-6 sm:h-8 rounded-sm transition-all',
                result ? (result.success ? 'bg-green-500 hover:bg-green-700' : 'bg-red-500 hover:bg-red-700') : 'bg-gray-200 dark:bg-gray-700'
              ]"
              @mouseenter="result && showTooltip(result, $event)"
              @mouseleave="hideTooltip($event)"
            />
          </div>
          <div class="flex items-center justify-between text-xs text-muted-foreground mt-1">
            <span>{{ newestResultTime }}</span>
            <span>{{ oldestResultTime }}</span>
          </div>
        </div>
      </div>
    </CardContent>
  </Card>
</template>

<script setup>
import { computed } from 'vue'
import { useRouter } from 'vue-router'
import { Card, CardHeader, CardTitle, CardContent } from '@/components/ui/card'
import StatusBadge from '@/components/StatusBadge.vue'
import { generatePrettyTimeAgo } from '@/utils/time'

const router = useRouter()

const props = defineProps({
  suite: {
    type: Object,
    required: true
  },
  maxResults: {
    type: Number,
    default: 50
  }
})

const emit = defineEmits(['showTooltip'])

// Computed properties
const displayResults = computed(() => {
  const results = [...(props.suite.results || [])]
  while (results.length < props.maxResults) {
    results.unshift(null)
  }
  return results.slice(-props.maxResults)
})

const currentStatus = computed(() => {
  if (!props.suite.results || props.suite.results.length === 0) {
    return 'unknown'
  }
  return props.suite.results[props.suite.results.length - 1].success ? 'healthy' : 'unhealthy'
})

const endpointCount = computed(() => {
  if (!props.suite.results || props.suite.results.length === 0) {
    return 0
  }
  const latestResult = props.suite.results[props.suite.results.length - 1]
  return latestResult.endpointResults ? latestResult.endpointResults.length : 0
})

const successRate = computed(() => {
  if (!props.suite.results || props.suite.results.length === 0) {
    return 0
  }

  const successful = props.suite.results.filter(r => r.success).length
  return Math.round((successful / props.suite.results.length) * 100)
})

const averageDuration = computed(() => {
  if (!props.suite.results || props.suite.results.length === 0) {
    return null
  }

  const total = props.suite.results.reduce((sum, r) => sum + (r.duration || 0), 0)
  // Convert nanoseconds to milliseconds
  return Math.round((total / props.suite.results.length) / 1000000)
})

const oldestResultTime = computed(() => {
  if (!props.suite.results || props.suite.results.length === 0) {
    return 'N/A'
  }

  const oldestResult = props.suite.results[0]
  return generatePrettyTimeAgo(oldestResult.timestamp)
})

const newestResultTime = computed(() => {
  if (!props.suite.results || props.suite.results.length === 0) {
    return 'Now'
  }

  const newestResult = props.suite.results[props.suite.results.length - 1]
  return generatePrettyTimeAgo(newestResult.timestamp)
})

// Methods
const navigateToDetails = () => {
  router.push(`/suites/${props.suite.key}`)
}

const showTooltip = (result, event) => {
  emit('showTooltip', result, event)
}

const hideTooltip = (event) => {
  emit('showTooltip', null, event)
}
</script>

<style scoped>
.suite {
  transition: all 0.2s ease;
}

.suite:hover {
  transform: translateY(-2px);
}

.suite-header {
  border-bottom: 1px solid rgba(0, 0, 0, 0.05);
}

.dark .suite-header {
  border-bottom: 1px solid rgba(255, 255, 255, 0.05);
}
</style>
@@ -10,20 +10,62 @@
    :style="`top: ${top}px; left: ${left}px;`"
  >
    <div v-if="result" class="space-y-2">
      <!-- Status (for suite results) -->
      <div v-if="isSuiteResult" class="flex items-center gap-2">
        <span :class="[
          'inline-block w-2 h-2 rounded-full',
          result.success ? 'bg-green-500' : 'bg-red-500'
        ]"></span>
        <span class="text-xs font-semibold">
          {{ result.success ? 'Suite Passed' : 'Suite Failed' }}
        </span>
      </div>

      <!-- Timestamp -->
      <div>
        <div class="text-xs font-semibold text-muted-foreground uppercase tracking-wider">Timestamp</div>
        <div class="font-mono text-xs">{{ prettifyTimestamp(result.timestamp) }}</div>
      </div>

      <!-- Suite Info (for suite results) -->
      <div v-if="isSuiteResult && result.endpointResults">
        <div class="text-xs font-semibold text-muted-foreground uppercase tracking-wider">Endpoints</div>
        <div class="font-mono text-xs">
          <span :class="successCount === endpointCount ? 'text-green-500' : 'text-yellow-500'">
            {{ successCount }}/{{ endpointCount }} passed
          </span>
        </div>
        <!-- Endpoint breakdown -->
        <div v-if="result.endpointResults.length > 0" class="mt-1 space-y-0.5">
          <div
            v-for="(endpoint, index) in result.endpointResults.slice(0, 5)"
            :key="index"
            class="flex items-center gap-1 text-xs"
          >
            <span :class="endpoint.success ? 'text-green-500' : 'text-red-500'">
              {{ endpoint.success ? '✓' : '✗' }}
            </span>
            <span class="truncate">{{ endpoint.name }}</span>
            <span class="text-muted-foreground">({{ (endpoint.duration / 1000000).toFixed(0) }}ms)</span>
          </div>
          <div v-if="result.endpointResults.length > 5" class="text-xs text-muted-foreground">
            ... and {{ result.endpointResults.length - 5 }} more
          </div>
        </div>
      </div>

      <!-- Response Time -->
      <div>
        <div class="text-xs font-semibold text-muted-foreground uppercase tracking-wider">Response Time</div>
        <div class="font-mono text-xs">{{ (result.duration / 1000000).toFixed(0) }}ms</div>
        <div class="text-xs font-semibold text-muted-foreground uppercase tracking-wider">
          {{ isSuiteResult ? 'Total Duration' : 'Response Time' }}
        </div>
        <div class="font-mono text-xs">
          {{ isSuiteResult ? (result.duration / 1000000).toFixed(0) : (result.duration / 1000000).toFixed(0) }}ms
        </div>
      </div>

      <!-- Conditions -->
      <div v-if="result.conditionResults && result.conditionResults.length">
      <!-- Conditions (for endpoint results) -->
      <div v-if="!isSuiteResult && result.conditionResults && result.conditionResults.length">
        <div class="text-xs font-semibold text-muted-foreground uppercase tracking-wider">Conditions</div>
        <div class="font-mono text-xs space-y-0.5">
          <div
@@ -54,8 +96,8 @@

<script setup>
/* eslint-disable no-undef */
import { ref, watch, nextTick } from 'vue'
import { helper } from '@/mixins/helper'
import { ref, watch, nextTick, computed } from 'vue'
import { prettifyTimestamp } from '@/utils/time'

const props = defineProps({
  event: {
@@ -74,8 +116,22 @@ const top = ref(0)
const left = ref(0)
const tooltip = ref(null)

// Methods from helper mixin
const { prettifyTimestamp } = helper.methods
// Computed properties
const isSuiteResult = computed(() => {
  return props.result && props.result.endpointResults !== undefined
})

const endpointCount = computed(() => {
  if (!isSuiteResult.value || !props.result.endpointResults) return 0
  return props.result.endpointResults.length
})

const successCount = computed(() => {
  if (!isSuiteResult.value || !props.result.endpointResults) return 0
  return props.result.endpointResults.filter(e => e.success).length
})

// Methods are imported from utils/time

const reposition = async () => {
  if (!props.event || !props.event.type) return

@@ -1,38 +0,0 @@
|
||||
export const helper = {
|
||||
methods: {
|
||||
generatePrettyTimeAgo(t) {
|
||||
let differenceInMs = new Date().getTime() - new Date(t).getTime();
|
||||
if (differenceInMs < 500) {
|
||||
return "now";
|
||||
}
|
||||
if (differenceInMs > 3 * 86400000) { // If it was more than 3 days ago, we'll display the number of days ago
|
||||
let days = (differenceInMs / 86400000).toFixed(0);
|
||||
return days + " day" + (days !== "1" ? "s" : "") + " ago";
|
||||
}
|
||||
if (differenceInMs > 3600000) { // If it was more than 1h ago, display the number of hours ago
|
||||
let hours = (differenceInMs / 3600000).toFixed(0);
|
||||
return hours + " hour" + (hours !== "1" ? "s" : "") + " ago";
|
||||
}
|
||||
if (differenceInMs > 60000) {
|
||||
let minutes = (differenceInMs / 60000).toFixed(0);
|
||||
return minutes + " minute" + (minutes !== "1" ? "s" : "") + " ago";
|
||||
}
|
||||
let seconds = (differenceInMs / 1000).toFixed(0);
|
||||
return seconds + " second" + (seconds !== "1" ? "s" : "") + " ago";
|
||||
},
|
||||
generatePrettyTimeDifference(start, end) {
|
||||
let minutes = Math.ceil((new Date(start) - new Date(end)) / 1000 / 60);
|
||||
return minutes + (minutes === 1 ? ' minute' : ' minutes');
|
||||
},
|
||||
prettifyTimestamp(timestamp) {
|
||||
let date = new Date(timestamp);
|
||||
let YYYY = date.getFullYear();
|
||||
let MM = ((date.getMonth() + 1) < 10 ? "0" : "") + "" + (date.getMonth() + 1);
|
||||
let DD = ((date.getDate()) < 10 ? "0" : "") + "" + (date.getDate());
|
||||
let hh = ((date.getHours()) < 10 ? "0" : "") + "" + (date.getHours());
|
||||
let mm = ((date.getMinutes()) < 10 ? "0" : "") + "" + (date.getMinutes());
|
||||
let ss = ((date.getSeconds()) < 10 ? "0" : "") + "" + (date.getSeconds());
|
||||
return YYYY + "-" + MM + "-" + DD + " " + hh + ":" + mm + ":" + ss;
|
||||
},
|
||||
}
|
||||
}
|
||||
@@ -1,6 +1,7 @@
import {createRouter, createWebHistory} from 'vue-router'
import Home from '@/views/Home'
import Details from "@/views/Details";
import EndpointDetails from "@/views/EndpointDetails";
import SuiteDetails from '@/views/SuiteDetails';

const routes = [
  {
@@ -10,9 +11,14 @@ const routes = [
  },
  {
    path: '/endpoints/:key',
    name: 'Details',
    component: Details,
    name: 'EndpointDetails',
    component: EndpointDetails,
  },
  {
    path: '/suites/:key',
    name: 'SuiteDetails',
    component: SuiteDetails
  }
];

const router = createRouter({

17
web/app/src/utils/format.js
Normal file
@@ -0,0 +1,17 @@
/**
 * Formats a duration from nanoseconds to a human-readable string
 * @param {number} duration - Duration in nanoseconds
 * @returns {string} Formatted duration string (e.g., "123ms", "1.23s")
 */
export const formatDuration = (duration) => {
  if (!duration && duration !== 0) return 'N/A'

  // Convert nanoseconds to milliseconds
  const durationMs = duration / 1000000

  if (durationMs < 1000) {
    return `${durationMs.toFixed(0)}ms`
  } else {
    return `${(durationMs / 1000).toFixed(2)}s`
  }
}
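The nanosecond-to-string conversion above can be sanity-checked outside the component. A standalone sketch that mirrors the new `formatDuration` (the re-declaration below is illustrative, not part of the diff):

```javascript
// Mirrors utils/format.js: durations arrive from the Gatus API in nanoseconds.
const formatDuration = (duration) => {
  if (!duration && duration !== 0) return 'N/A'
  const durationMs = duration / 1000000 // nanoseconds -> milliseconds
  return durationMs < 1000 ? `${durationMs.toFixed(0)}ms` : `${(durationMs / 1000).toFixed(2)}s`
}

console.log(formatDuration(123000000))  // "123ms"
console.log(formatDuration(1234000000)) // "1.23s"
console.log(formatDuration(undefined))  // "N/A"
```

Note that `0` is deliberately allowed through the guard, so a zero-length duration renders as `0ms` rather than `N/A`.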
52
web/app/src/utils/time.js
Normal file
@@ -0,0 +1,52 @@
/**
 * Generates a human-readable relative time string (e.g., "2 hours ago")
 * @param {string|Date} timestamp - The timestamp to convert
 * @returns {string} Relative time string
 */
export const generatePrettyTimeAgo = (timestamp) => {
  let differenceInMs = new Date().getTime() - new Date(timestamp).getTime();
  if (differenceInMs < 500) {
    return "now";
  }
  if (differenceInMs > 3 * 86400000) { // If it was more than 3 days ago, we'll display the number of days ago
    let days = (differenceInMs / 86400000).toFixed(0);
    return days + " day" + (days !== "1" ? "s" : "") + " ago";
  }
  if (differenceInMs > 3600000) { // If it was more than 1h ago, display the number of hours ago
    let hours = (differenceInMs / 3600000).toFixed(0);
    return hours + " hour" + (hours !== "1" ? "s" : "") + " ago";
  }
  if (differenceInMs > 60000) {
    let minutes = (differenceInMs / 60000).toFixed(0);
    return minutes + " minute" + (minutes !== "1" ? "s" : "") + " ago";
  }
  let seconds = (differenceInMs / 1000).toFixed(0);
  return seconds + " second" + (seconds !== "1" ? "s" : "") + " ago";
}

/**
 * Generates a pretty time difference string between two timestamps
 * @param {string|Date} start - Start timestamp
 * @param {string|Date} end - End timestamp
 * @returns {string} Time difference string
 */
export const generatePrettyTimeDifference = (start, end) => {
  let minutes = Math.ceil((new Date(start) - new Date(end)) / 1000 / 60);
  return minutes + (minutes === 1 ? ' minute' : ' minutes');
}

/**
 * Formats a timestamp into YYYY-MM-DD HH:mm:ss format
 * @param {string|Date} timestamp - The timestamp to format
 * @returns {string} Formatted timestamp
 */
export const prettifyTimestamp = (timestamp) => {
  let date = new Date(timestamp);
  let YYYY = date.getFullYear();
  let MM = ((date.getMonth() + 1) < 10 ? "0" : "") + "" + (date.getMonth() + 1);
  let DD = ((date.getDate()) < 10 ? "0" : "") + "" + (date.getDate());
  let hh = ((date.getHours()) < 10 ? "0" : "") + "" + (date.getHours());
  let mm = ((date.getMinutes()) < 10 ? "0" : "") + "" + (date.getMinutes());
  let ss = ((date.getSeconds()) < 10 ? "0" : "") + "" + (date.getSeconds());
  return YYYY + "-" + MM + "-" + DD + " " + hh + ":" + mm + ":" + ss;
}
@@ -207,7 +207,7 @@ import Settings from '@/components/Settings.vue'
import Pagination from '@/components/Pagination.vue'
import Loading from '@/components/Loading.vue'
import { SERVER_URL } from '@/main.js'
import { helper } from '@/mixins/helper'
import { generatePrettyTimeAgo, generatePrettyTimeDifference } from '@/utils/time'

const router = useRouter()
const route = useRoute()
@@ -290,7 +290,7 @@ const lastCheckTime = computed(() => {
  if (!currentStatus.value || !currentStatus.value.results || currentStatus.value.results.length === 0) {
    return 'Never'
  }
  return helper.methods.generatePrettyTimeAgo(currentStatus.value.results[currentStatus.value.results.length - 1].timestamp)
  return generatePrettyTimeAgo(currentStatus.value.results[currentStatus.value.results.length - 1].timestamp)
})


@@ -328,7 +328,7 @@ const fetchData = async () => {
        event.fancyText = 'Endpoint became healthy'
      } else if (event.type === 'UNHEALTHY') {
        if (nextEvent) {
          event.fancyText = 'Endpoint was unhealthy for ' + helper.methods.generatePrettyTimeDifference(nextEvent.timestamp, event.timestamp)
          event.fancyText = 'Endpoint was unhealthy for ' + generatePrettyTimeDifference(nextEvent.timestamp, event.timestamp)
        } else {
          event.fancyText = 'Endpoint became unhealthy'
        }
@@ -336,7 +336,7 @@ const fetchData = async () => {
        event.fancyText = 'Monitoring started'
      }
    }
    event.fancyTimeAgo = helper.methods.generatePrettyTimeAgo(event.timestamp)
    event.fancyTimeAgo = generatePrettyTimeAgo(event.timestamp)
    processedEvents.push(event)
  }
}
@@ -39,20 +39,20 @@
      <Loading size="lg" />
    </div>

    <div v-else-if="filteredEndpoints.length === 0" class="text-center py-20">
    <div v-else-if="filteredEndpoints.length === 0 && filteredSuites.length === 0" class="text-center py-20">
      <AlertCircle class="h-12 w-12 text-muted-foreground mx-auto mb-4" />
      <h3 class="text-lg font-semibold mb-2">No endpoints found</h3>
      <h3 class="text-lg font-semibold mb-2">No endpoints or suites found</h3>
      <p class="text-muted-foreground">
        {{ searchQuery || showOnlyFailing || showRecentFailures
          ? 'Try adjusting your filters'
          : 'No endpoints are configured' }}
          : 'No endpoints or suites are configured' }}
      </p>
    </div>

    <div v-else>
      <!-- Grouped view -->
      <div v-if="groupByGroup" class="space-y-6">
        <div v-for="(endpoints, group) in paginatedEndpoints" :key="group" class="endpoint-group border rounded-lg overflow-hidden">
        <div v-for="(items, group) in combinedGroups" :key="group" class="endpoint-group border rounded-lg overflow-hidden">
          <!-- Group Header -->
          <div
            @click="toggleGroupCollapse(group)"
@@ -64,9 +64,9 @@
              <h2 class="text-xl font-semibold text-foreground">{{ group }}</h2>
            </div>
            <div class="flex items-center gap-2">
              <span v-if="calculateUnhealthyCount(endpoints) > 0"
              <span v-if="calculateUnhealthyCount(items.endpoints) + calculateFailingSuitesCount(items.suites) > 0"
                class="bg-red-600 text-white px-2 py-1 rounded-full text-sm font-medium">
                {{ calculateUnhealthyCount(endpoints) }}
                {{ calculateUnhealthyCount(items.endpoints) + calculateFailingSuitesCount(items.suites) }}
              </span>
              <CheckCircle v-else class="h-6 w-6 text-green-600" />
            </div>
@@ -74,30 +74,68 @@

          <!-- Group Content -->
          <div v-if="uncollapsedGroups.has(group)" class="endpoint-group-content p-4">
            <div class="grid gap-3 grid-cols-1 sm:grid-cols-2 lg:grid-cols-3">
              <EndpointCard
                v-for="endpoint in endpoints"
                :key="endpoint.key"
                :endpoint="endpoint"
                :maxResults="50"
                :showAverageResponseTime="showAverageResponseTime"
                @showTooltip="showTooltip"
              />
            <!-- Suites Section -->
            <div v-if="items.suites.length > 0" class="mb-4">
              <h3 class="text-sm font-semibold text-muted-foreground uppercase tracking-wider mb-3">Suites</h3>
              <div class="grid gap-3 grid-cols-1 sm:grid-cols-2 lg:grid-cols-3">
                <SuiteCard
                  v-for="suite in items.suites"
                  :key="suite.key"
                  :suite="suite"
                  :maxResults="50"
                  @showTooltip="showTooltip"
                />
              </div>
            </div>

            <!-- Endpoints Section -->
            <div v-if="items.endpoints.length > 0">
              <h3 v-if="items.suites.length > 0" class="text-sm font-semibold text-muted-foreground uppercase tracking-wider mb-3">Endpoints</h3>
              <div class="grid gap-3 grid-cols-1 sm:grid-cols-2 lg:grid-cols-3">
                <EndpointCard
                  v-for="endpoint in items.endpoints"
                  :key="endpoint.key"
                  :endpoint="endpoint"
                  :maxResults="50"
                  :showAverageResponseTime="showAverageResponseTime"
                  @showTooltip="showTooltip"
                />
              </div>
            </div>
          </div>
        </div>
      </div>

      <!-- Regular view -->
      <div v-else class="grid gap-3 grid-cols-1 sm:grid-cols-2 lg:grid-cols-3">
        <EndpointCard
          v-for="endpoint in paginatedEndpoints"
          :key="endpoint.key"
          :endpoint="endpoint"
          :maxResults="50"
          :showAverageResponseTime="showAverageResponseTime"
          @showTooltip="showTooltip"
        />
      <div v-else>
        <!-- Suites Section -->
        <div v-if="filteredSuites.length > 0" class="mb-6">
          <h2 class="text-lg font-semibold text-foreground mb-3">Suites</h2>
          <div class="grid gap-3 grid-cols-1 sm:grid-cols-2 lg:grid-cols-3">
            <SuiteCard
              v-for="suite in paginatedSuites"
              :key="suite.key"
              :suite="suite"
              :maxResults="50"
              @showTooltip="showTooltip"
            />
          </div>
        </div>

        <!-- Endpoints Section -->
        <div v-if="filteredEndpoints.length > 0">
          <h2 v-if="filteredSuites.length > 0" class="text-lg font-semibold text-foreground mb-3">Endpoints</h2>
          <div class="grid gap-3 grid-cols-1 sm:grid-cols-2 lg:grid-cols-3">
            <EndpointCard
              v-for="endpoint in paginatedEndpoints"
              :key="endpoint.key"
              :endpoint="endpoint"
              :maxResults="50"
              :showAverageResponseTime="showAverageResponseTime"
              @showTooltip="showTooltip"
            />
          </div>
        </div>
      </div>

      <div v-if="!groupByGroup && totalPages > 1" class="mt-8 flex items-center justify-center gap-2">
@@ -144,6 +182,7 @@ import { ref, computed, onMounted } from 'vue'
import { Activity, Timer, RefreshCw, AlertCircle, ChevronLeft, ChevronRight, ChevronDown, ChevronUp, CheckCircle } from 'lucide-vue-next'
import { Button } from '@/components/ui/button'
import EndpointCard from '@/components/EndpointCard.vue'
import SuiteCard from '@/components/SuiteCard.vue'
import SearchBar from '@/components/SearchBar.vue'
import Settings from '@/components/Settings.vue'
import Loading from '@/components/Loading.vue'
@@ -160,6 +199,7 @@ const props = defineProps({
const emit = defineEmits(['showTooltip'])

const endpointStatuses = ref([])
const suiteStatuses = ref([])
const loading = ref(false)
const currentPage = ref(1)
const itemsPerPage = 96
@@ -215,8 +255,51 @@ const filteredEndpoints = computed(() => {
  return filtered
})

const filteredSuites = computed(() => {
  let filtered = [...suiteStatuses.value]

  if (searchQuery.value) {
    const query = searchQuery.value.toLowerCase()
    filtered = filtered.filter(suite =>
      suite.name.toLowerCase().includes(query) ||
      (suite.group && suite.group.toLowerCase().includes(query))
    )
  }

  if (showOnlyFailing.value) {
    filtered = filtered.filter(suite => {
      if (!suite.results || suite.results.length === 0) return false
      return !suite.results[suite.results.length - 1].success
    })
  }

  if (showRecentFailures.value) {
    filtered = filtered.filter(suite => {
      if (!suite.results || suite.results.length === 0) return false
      return suite.results.some(result => !result.success)
    })
  }

  // Sort by health if selected
  if (sortBy.value === 'health') {
    filtered.sort((a, b) => {
      const aHealthy = a.results && a.results.length > 0 && a.results[a.results.length - 1].success
      const bHealthy = b.results && b.results.length > 0 && b.results[b.results.length - 1].success

      // Unhealthy first
      if (!aHealthy && bHealthy) return -1
      if (aHealthy && !bHealthy) return 1

      // Then sort by name
      return a.name.localeCompare(b.name)
    })
  }

  return filtered
})

const totalPages = computed(() => {
  return Math.ceil(filteredEndpoints.value.length / itemsPerPage)
  return Math.ceil((filteredEndpoints.value.length + filteredSuites.value.length) / itemsPerPage)
})

const groupedEndpoints = computed(() => {
@@ -248,6 +331,46 @@ const groupedEndpoints = computed(() => {
  return result
})

const combinedGroups = computed(() => {
  if (!groupByGroup.value) {
    return null
  }

  const combined = {}

  // Add endpoints
  filteredEndpoints.value.forEach(endpoint => {
    const group = endpoint.group || 'No Group'
    if (!combined[group]) {
      combined[group] = { endpoints: [], suites: [] }
    }
    combined[group].endpoints.push(endpoint)
  })

  // Add suites
  filteredSuites.value.forEach(suite => {
    const group = suite.group || 'No Group'
    if (!combined[group]) {
      combined[group] = { endpoints: [], suites: [] }
    }
    combined[group].suites.push(suite)
  })

  // Sort groups alphabetically, with 'No Group' at the end
  const sortedGroups = Object.keys(combined).sort((a, b) => {
    if (a === 'No Group') return 1
    if (b === 'No Group') return -1
    return a.localeCompare(b)
  })

  const result = {}
  sortedGroups.forEach(group => {
    result[group] = combined[group]
  })

  return result
})

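The bucketing performed by `combinedGroups` can be sketched as plain JavaScript outside Vue's reactivity: items are keyed by their group name (falling back to `'No Group'`), and `'No Group'` always sorts last. The `groupItems` helper and sample data below are illustrative, not part of the diff:

```javascript
// Standalone sketch of the grouping logic in combinedGroups.
const groupItems = (endpoints, suites) => {
  const combined = {}
  const bucket = (name) => (combined[name] ??= { endpoints: [], suites: [] })
  endpoints.forEach(e => bucket(e.group || 'No Group').endpoints.push(e))
  suites.forEach(s => bucket(s.group || 'No Group').suites.push(s))
  // Alphabetical order, except 'No Group' is forced to the end
  const sorted = Object.keys(combined).sort((a, b) => {
    if (a === 'No Group') return 1
    if (b === 'No Group') return -1
    return a.localeCompare(b)
  })
  return sorted.map(name => [name, combined[name]])
}

const groups = groupItems(
  [{ name: 'api', group: 'core' }, { name: 'web' }], // 'web' has no group
  [{ name: 'smoke', group: 'core' }]
)
console.log(groups.map(([name]) => name)) // → ['core', 'No Group']
```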
const paginatedEndpoints = computed(() => {
  if (groupByGroup.value) {
    // When grouping, we don't paginate
@@ -259,6 +382,17 @@ const paginatedEndpoints = computed(() => {
  return filteredEndpoints.value.slice(start, end)
})

const paginatedSuites = computed(() => {
  if (groupByGroup.value) {
    // When grouping, we don't paginate
    return filteredSuites.value
  }

  const start = (currentPage.value - 1) * itemsPerPage
  const end = start + itemsPerPage
  return filteredSuites.value.slice(start, end)
})

const visiblePages = computed(() => {
  const pages = []
  const maxVisible = 5
@@ -278,42 +412,31 @@ const visiblePages = computed(() => {

const fetchData = async () => {
  // Don't show loading state on refresh to prevent UI flicker
  const isInitialLoad = endpointStatuses.value.length === 0
  const isInitialLoad = endpointStatuses.value.length === 0 && suiteStatuses.value.length === 0
  if (isInitialLoad) {
    loading.value = true
  }
  try {
    const response = await fetch(`${SERVER_URL}/api/v1/endpoints/statuses?page=1&pageSize=100`, {
    // Fetch endpoints
    const endpointResponse = await fetch(`${SERVER_URL}/api/v1/endpoints/statuses?page=1&pageSize=100`, {
      credentials: 'include'
    })
    if (response.status === 200) {
      const data = await response.json()
      // If this is the initial load, just set the data
      if (isInitialLoad) {
        endpointStatuses.value = data
      } else {
        // Check if endpoints have been added or removed
        const currentKeys = new Set(endpointStatuses.value.map(ep => ep.key))
        const newKeys = new Set(data.map(ep => ep.key))
        const hasAdditions = data.some(ep => !currentKeys.has(ep.key))
        const hasRemovals = endpointStatuses.value.some(ep => !newKeys.has(ep.key))
        if (hasAdditions || hasRemovals) {
          // Endpoints have changed, reset the array to maintain proper order
          endpointStatuses.value = data
        } else {
          // Only statuses/results have changed, update in place to preserve scroll
          const endpointMap = new Map(data.map(ep => [ep.key, ep]))
          endpointStatuses.value.forEach((endpoint, index) => {
            const updated = endpointMap.get(endpoint.key)
            if (updated) {
              // Update in place to preserve Vue's reactivity and scroll position
              Object.assign(endpointStatuses.value[index], updated)
            }
          })
        }
      }
    if (endpointResponse.status === 200) {
      const data = await endpointResponse.json()
      endpointStatuses.value = data
    } else {
      console.error('[Home][fetchData] Error:', await response.text())
      console.error('[Home][fetchData] Error fetching endpoints:', await endpointResponse.text())
    }

    // Fetch suites
    const suiteResponse = await fetch(`${SERVER_URL}/api/v1/suites/statuses?page=1&pageSize=100`, {
      credentials: 'include'
    })
    if (suiteResponse.status === 200) {
      const suiteData = await suiteResponse.json()
      suiteStatuses.value = suiteData
    } else {
      console.error('[Home][fetchData] Error fetching suites:', await suiteResponse.text())
    }
  } catch (error) {
    console.error('[Home][fetchData] Error:', error)
@@ -355,6 +478,13 @@ const calculateUnhealthyCount = (endpoints) => {
  }).length
}

const calculateFailingSuitesCount = (suites) => {
  return suites.filter(suite => {
    if (!suite.results || suite.results.length === 0) return false
    return !suite.results[suite.results.length - 1].success
  }).length
}

const toggleGroupCollapse = (groupName) => {
  if (uncollapsedGroups.value.has(groupName)) {
    uncollapsedGroups.value.delete(groupName)

334
web/app/src/views/SuiteDetails.vue
Normal file
@@ -0,0 +1,334 @@
<template>
  <div class="suite-details-container bg-background min-h-screen">
    <div class="container mx-auto px-4 py-8 max-w-7xl">
      <!-- Back button and header -->
      <div class="mb-6">
        <Button variant="ghost" size="sm" @click="goBack" class="mb-4">
          <ArrowLeft class="h-4 w-4 mr-2" />
          Back to Dashboard
        </Button>

        <div class="flex items-start justify-between">
          <div>
            <h1 class="text-3xl font-bold tracking-tight">{{ suite?.name || 'Loading...' }}</h1>
            <p class="text-muted-foreground mt-2">
              <span v-if="suite?.group">{{ suite.group }} • </span>
              <span v-if="latestResult">
                {{ selectedResult && selectedResult !== sortedResults[0] ? 'Ran' : 'Last run' }} {{ formatRelativeTime(latestResult.timestamp) }}
              </span>
            </p>
          </div>
          <div class="flex items-center gap-2">
            <StatusBadge v-if="latestResult" :status="latestResult.success ? 'healthy' : 'unhealthy'" />
            <Button variant="ghost" size="icon" @click="refreshData" title="Refresh">
              <RefreshCw class="h-5 w-5" />
            </Button>
          </div>
        </div>
      </div>

      <div v-if="loading" class="flex items-center justify-center py-20">
        <Loading size="lg" />
      </div>

      <div v-else-if="!suite" class="text-center py-20">
        <AlertCircle class="h-12 w-12 text-muted-foreground mx-auto mb-4" />
        <h3 class="text-lg font-semibold mb-2">Suite not found</h3>
        <p class="text-muted-foreground">The requested suite could not be found.</p>
      </div>

      <div v-else class="space-y-6">
        <!-- Latest Execution -->
        <Card v-if="latestResult">
          <CardHeader>
            <CardTitle>Latest Execution</CardTitle>
          </CardHeader>
          <CardContent>
            <div class="space-y-4">
              <!-- Execution stats -->
              <div class="grid grid-cols-2 md:grid-cols-4 gap-4">
                <div>
                  <p class="text-sm text-muted-foreground">Status</p>
                  <p class="text-lg font-medium">{{ latestResult.success ? 'Success' : 'Failed' }}</p>
                </div>
                <div>
                  <p class="text-sm text-muted-foreground">Duration</p>
                  <p class="text-lg font-medium">{{ formatDuration(latestResult.duration) }}</p>
                </div>
                <div>
                  <p class="text-sm text-muted-foreground">Endpoints</p>
                  <p class="text-lg font-medium">{{ latestResult.endpointResults?.length || 0 }}</p>
                </div>
                <div>
                  <p class="text-sm text-muted-foreground">Success Rate</p>
                  <p class="text-lg font-medium">{{ calculateSuccessRate(latestResult) }}%</p>
                </div>
              </div>

              <!-- Enhanced Execution Flow -->
              <div class="mt-6">
                <h3 class="text-lg font-semibold mb-4">Execution Flow</h3>
                <SequentialFlowDiagram
                  :flow-steps="flowSteps"
                  :progress-percentage="executionProgress"
                  :completed-steps="completedStepsCount"
                  :total-steps="flowSteps.length"
                  @step-selected="onStepSelected"
                />
              </div>


              <!-- Errors -->
              <div v-if="latestResult.errors && latestResult.errors.length > 0" class="mt-6">
                <h3 class="text-lg font-semibold mb-3 text-red-500">Suite Errors</h3>
                <div class="space-y-2">
                  <div
                    v-for="(error, index) in latestResult.errors"
                    :key="index"
                    class="bg-red-50 dark:bg-red-950 text-red-700 dark:text-red-300 p-3 rounded-md text-sm"
                  >
                    {{ error }}
                  </div>
                </div>
              </div>
            </div>
          </CardContent>
        </Card>

        <!-- Execution History -->
        <Card>
          <CardHeader>
            <CardTitle>Execution History</CardTitle>
          </CardHeader>
          <CardContent>
            <div v-if="sortedResults.length > 0" class="space-y-2">
              <div
                v-for="(result, index) in sortedResults"
                :key="index"
                class="flex items-center justify-between p-3 border rounded-lg hover:bg-accent/50 transition-colors cursor-pointer"
                @click="selectedResult = result"
                :class="{ 'bg-accent': selectedResult === result }"
              >
                <div class="flex items-center gap-3">
                  <StatusBadge :status="result.success ? 'healthy' : 'unhealthy'" size="sm" />
                  <div>
                    <p class="text-sm font-medium">{{ formatTimestamp(result.timestamp) }}</p>
                    <p class="text-xs text-muted-foreground">
                      {{ result.endpointResults?.length || 0 }} endpoints • {{ formatDuration(result.duration) }}
                    </p>
                  </div>
                </div>
                <ChevronRight class="h-4 w-4 text-muted-foreground" />
              </div>
            </div>
            <div v-else class="text-center py-8 text-muted-foreground">
              No execution history available
            </div>
          </CardContent>
        </Card>
      </div>
    </div>

    <Settings @refreshData="fetchData" />

    <!-- Step Details Modal -->
    <StepDetailsModal
      v-if="selectedStep"
      :step="selectedStep"
      :index="selectedStepIndex"
      @close="selectedStep = null"
    />
  </div>
</template>

<script setup>
/* eslint-disable no-undef */
import { ref, computed, onMounted } from 'vue'
import { useRouter, useRoute } from 'vue-router'
import { ArrowLeft, RefreshCw, AlertCircle, ChevronRight } from 'lucide-vue-next'
import { Button } from '@/components/ui/button'
import { Card, CardHeader, CardTitle, CardContent } from '@/components/ui/card'
import StatusBadge from '@/components/StatusBadge.vue'
import SequentialFlowDiagram from '@/components/SequentialFlowDiagram.vue'
import StepDetailsModal from '@/components/StepDetailsModal.vue'
import Settings from '@/components/Settings.vue'
import Loading from '@/components/Loading.vue'
import { generatePrettyTimeAgo } from '@/utils/time'
import { SERVER_URL } from '@/main'

const router = useRouter()
const route = useRoute()

// State
const loading = ref(false)
const suite = ref(null)
const selectedResult = ref(null)
const selectedStep = ref(null)
const selectedStepIndex = ref(0)

// Computed properties
const sortedResults = computed(() => {
  if (!suite.value || !suite.value.results || suite.value.results.length === 0) {
    return []
  }
  // Sort results by timestamp in descending order (most recent first)
  return [...suite.value.results].sort((a, b) => new Date(b.timestamp) - new Date(a.timestamp))
})

const latestResult = computed(() => {
  if (!suite.value || !suite.value.results || suite.value.results.length === 0) {
    return null
  }
  return selectedResult.value || sortedResults.value[0]
})

// Methods
const fetchData = async () => {
  loading.value = true

  try {
    const response = await fetch(`${SERVER_URL}/api/v1/suites/${route.params.key}/statuses`, {
      credentials: 'include'
    })

    if (response.status === 200) {
      const data = await response.json()
      suite.value = data
      if (data.results && data.results.length > 0 && !selectedResult.value) {
        // Sort results by timestamp to get the most recent one
        const sorted = [...data.results].sort((a, b) => new Date(b.timestamp) - new Date(a.timestamp))
        selectedResult.value = sorted[0]
      }
    } else if (response.status === 404) {
      suite.value = null
    } else {
      console.error('[SuiteDetails][fetchData] Error:', await response.text())
    }
  } catch (error) {
    console.error('[SuiteDetails][fetchData] Error:', error)
  } finally {
    loading.value = false
  }
}

const refreshData = () => {
  fetchData()
}

const goBack = () => {
  router.push('/')
}

const formatRelativeTime = (timestamp) => {
  return generatePrettyTimeAgo(timestamp)
}

const formatTimestamp = (timestamp) => {
  const date = new Date(timestamp)
  return date.toLocaleString()
}

const formatDuration = (duration) => {
  if (!duration && duration !== 0) return 'N/A'

  // Convert nanoseconds to milliseconds
  const durationMs = duration / 1000000

  if (durationMs < 1000) {
    return `${durationMs.toFixed(0)}ms`
  } else {
    return `${(durationMs / 1000).toFixed(2)}s`
  }
}

const calculateSuccessRate = (result) => {
  if (!result || !result.endpointResults || result.endpointResults.length === 0) {
    return 0
  }

  const successful = result.endpointResults.filter(e => e.success).length
|
||||
return Math.round((successful / result.endpointResults.length) * 100)
|
||||
}
|
||||
|
||||
// Flow diagram computed properties
|
||||
const flowSteps = computed(() => {
|
||||
if (!latestResult.value || !latestResult.value.endpointResults) {
|
||||
return []
|
||||
}
|
||||
|
||||
const results = latestResult.value.endpointResults
|
||||
|
||||
return results.map((result, index) => {
|
||||
const endpoint = suite.value?.endpoints?.[index]
|
||||
const nextResult = results[index + 1]
|
||||
|
||||
// Determine if this is an always-run endpoint by checking execution pattern
|
||||
// If a previous step failed but this one still executed, it must be always-run
|
||||
let isAlwaysRun = false
|
||||
for (let i = 0; i < index; i++) {
|
||||
if (!results[i].success) {
|
||||
// A previous step failed, but we're still executing, so this must be always-run
|
||||
isAlwaysRun = true
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
return {
|
||||
name: endpoint?.name || result.name || `Step ${index + 1}`,
|
||||
endpoint: endpoint,
|
||||
result: result,
|
||||
status: determineStepStatus(result, endpoint),
|
||||
duration: result.duration || 0,
|
||||
isAlwaysRun: isAlwaysRun,
|
||||
errors: result.errors || [],
|
||||
nextStepStatus: nextResult ? determineStepStatus(nextResult, suite.value?.endpoints?.[index + 1]) : null
|
||||
}
|
||||
})
|
||||
})
|
||||
|
||||
const completedStepsCount = computed(() => {
|
||||
return flowSteps.value.filter(step => step.status === 'success').length
|
||||
})
|
||||
|
||||
const executionProgress = computed(() => {
|
||||
if (!flowSteps.value.length) return 0
|
||||
return Math.round((completedStepsCount.value / flowSteps.value.length) * 100)
|
||||
})
|
||||
|
||||
|
||||
|
||||
// Helper functions
|
||||
const determineStepStatus = (result) => {
|
||||
if (!result) return 'not-started'
|
||||
|
||||
// Check if step was skipped
|
||||
if (result.conditionResults && result.conditionResults.some(c => c.condition.includes('SKIP'))) {
|
||||
return 'skipped'
|
||||
}
|
||||
|
||||
// Check if step failed but is always-run (still shows as failed but executed)
|
||||
if (!result.success) {
|
||||
return 'failed'
|
||||
}
|
||||
|
||||
return 'success'
|
||||
}
|
||||
|
||||
|
||||
// Event handlers
|
||||
const onStepSelected = (step, index) => {
|
||||
selectedStep.value = step
|
||||
selectedStepIndex.value = index
|
||||
}
|
||||
|
||||
// Lifecycle
|
||||
onMounted(() => {
|
||||
fetchData()
|
||||
})
|
||||
</script>
|
||||
|
||||
<style scoped>
|
||||
.suite-details-container {
|
||||
min-height: 100vh;
|
||||
}
|
||||
</style>
|
||||