Query API Reference
All queries follow the same pattern: POST /queries/{name}/execute with a JSON body containing your parameters. Try it immediately after installation:

curl -H "X-API-Key: $API_KEY" -H "Content-Type: application/json" \
  http://localhost:8000/queries/summary_stats/execute -d '{"parameters":{}}'

No parameters are needed; the call returns device/interface/endpoint counts in under 100 ms.
The Query API exposes all 109 named queries via a RESTful HTTP interface. Queries are defined in registry.yaml and have dual implementations for Neo4j (Cypher) and PostgreSQL (SQL). Every query is also available as an MCP tool for AI-assistant integration — meaning a Claude Desktop conversation and a programmatic curl call run the exact same underlying query.
8.1 Authentication
All query endpoints require the X-API-Key HTTP header. Set the API key with the API_KEY environment variable at startup.
curl -H "X-API-Key: your-api-key" http://localhost:8000/queries/
A missing or invalid key returns 401 Unauthorized. Alternatively, pass the key as a ?api_key= query parameter for EventSource / SSE clients that cannot set custom headers.
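As an illustrative sketch (the helper itself is not part of the product, only the header and query-parameter names above are), a client can attach the key either way depending on whether it is able to set custom headers:

```python
from urllib.parse import urlencode, urlsplit, urlunsplit

def with_api_key(url: str, api_key: str, use_header: bool = True):
    """Return (url, headers) with the API key attached.

    use_header=True  -> X-API-Key header (normal HTTP clients)
    use_header=False -> ?api_key= query parameter, for SSE/EventSource
                        clients that cannot set custom headers
    """
    if use_header:
        return url, {"X-API-Key": api_key}
    parts = urlsplit(url)
    extra = urlencode({"api_key": api_key})
    query = parts.query + ("&" if parts.query else "") + extra
    return urlunsplit(parts._replace(query=query)), {}
```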
8.2 API Endpoints
| Method | Path | Description |
|---|---|---|
GET | /queries/ | List all 109 available queries with metadata |
GET | /queries/{name} | Get details for a single query |
POST | /queries/{name}/execute | Execute a query with parameters |
POST | /queries/meshql | Execute a MeshQL structured query (SHOW NEIGHBORS OF DEVICE "sw-01") — compiles to a named query and executes it |
GET | /health | Shallow health check (process alive) |
GET | /health/ready | Deep readiness check (database connectivity + pool status) |
GET | /health/redis | Redis reachability and cluster_mode flag (no auth required) |
GET | /health/license | License plan, expiry, days remaining, demo mode (no auth required) |
GET | /history/diff | Compare two network snapshots Pro+ |
POST | /graph/whatif | Simulate a proposed topology change Pro+ |
GET | /inventory/ansible | Ansible dynamic inventory JSON or INI export Pro+ |
GET | /events | Server-Sent Events stream (30s interval; supports ?api_key=) |
GET | /admin/config | Runtime configuration dump (requires auth) |
GET | /ai/query/catalog | Full NL query catalog — all 109 queries with name, description, category, parameters. No auth required. Use for UI autocomplete. |
/history/diff
GET /history/diff?from_ts=2026-02-01T00:00:00Z&to_ts=2026-03-04T00:00:00Z
X-API-Key: your-api-key
# Response
{
"from_ts": "2026-02-01T00:00:00Z",
"to_ts": "2026-03-04T00:00:00Z",
"devices_added": ["fw-dmz-02"],
"devices_removed": [],
"rules_added": ["fw-dmz-02:permit-80"],
"rules_removed": ["fw-core-01:deny-8080"],
"delta": { "devices": 1, "firewall_rules": 0 }
}
/graph/whatif
POST /graph/whatif
Content-Type: application/json
X-API-Key: your-api-key
{
"devices": [...],
"interfaces": [...],
"firewall_rules": [...]
}
# Response
{
"proposed": {"devices": 21, "firewall_rules": 62},
"current": {"devices": 20, "firewall_rules": 58},
"delta": {"devices": 1, "firewall_rules": 4},
"new_devices": ["fw-dmz-new"],
"new_firewall_rules": ["fw-dmz-new:deny-any"]
}
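The delta in the response is simply the per-key difference between the proposed and current counts. A minimal sketch of that arithmetic, using the field names from the example above (the function itself is illustrative, not part of the API):

```python
def whatif_delta(proposed: dict, current: dict) -> dict:
    """Per-key difference between proposed and current object counts."""
    return {key: proposed[key] - current.get(key, 0) for key in proposed}
```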
Execute Request Body
POST /queries/{name}/execute
Content-Type: application/json
X-API-Key: your-api-key
{
"parameters": { "device": "sw-core-01" },
"limit": 1000,
"offset": 0,
"output_format": "json"
}
| Field | Type | Default | Description |
|---|---|---|---|
parameters | object | {} | Query parameters (names must match registry definition) |
limit | integer | 1000 | Maximum rows to return (1–10,000) |
offset | integer | 0 | Row offset for pagination |
output_format | string | json | json or csv |
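A small client-side helper can build and sanity-check an execute body against the field rules in the table above before sending anything (the helper is a sketch; the server presumably enforces the same bounds):

```python
def build_execute_body(parameters=None, limit=1000, offset=0,
                       output_format="json") -> dict:
    """Build a /queries/{name}/execute request body, applying the
    defaults and ranges from the field table."""
    if not 1 <= limit <= 10_000:
        raise ValueError("limit must be between 1 and 10,000")
    if offset < 0:
        raise ValueError("offset must be >= 0")
    if output_format not in ("json", "csv"):
        raise ValueError("output_format must be 'json' or 'csv'")
    return {
        "parameters": parameters or {},
        "limit": limit,
        "offset": offset,
        "output_format": output_format,
    }
```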
8.3 Query Categories
Topology Queries
| Query Name | Parameters | Description |
|---|---|---|
device_neighbors | device_name: string | All devices directly connected to the named device |
interface_neighbors | device_a: string, device_b: string | Interfaces connecting two specific devices |
topology_edges | (none) | All device-to-device edges for full graph visualisation Advanced |
topology_neighborhood | device: string, depth: integer (default 2) | N-hop BFS subgraph rooted at the given device Pro+ |
Endpoint Queries
| Query Name | Parameters | Description |
|---|---|---|
locate_endpoint_by_ip | ip: string, vrf: string (optional) | Find endpoint record by IPv4 or IPv6 address; results include vrf field |
locate_endpoint_by_mac | mac: string | Find endpoint by MAC address (any common format) |
endpoints_on_interface | device: string, interface: string | All endpoints on a specific port |
Blast Radius Queries Advanced
| Query Name | Parameters | Description |
|---|---|---|
blast_radius_interface | device: string, interface: string | Endpoints impacted if an interface goes down |
blast_radius_device | device: string | Endpoints impacted if an entire device goes down |
blast_radius_vlan | vlan: integer | All endpoints in a specific VLAN |
blast_radius_subnet | cidr: string | Endpoints dependent on a subnet (CIDR notation) |
Addressing Queries
| Query Name | Parameters | Description |
|---|---|---|
ips_in_subnet | cidr: string, vrf: string (optional) | IP addresses allocated within a subnet; results include vrf field |
subnets_on_device | device: string | All subnets present on a device |
orphaned_ips | vrf: string (optional) | IPs with no associated subnet record; results include vrf field |
Hygiene Queries
| Query Name | Parameters | Description |
|---|---|---|
devices_without_neighbors | (none) | Devices with no topology neighbours (isolated) |
interfaces_without_ips | (none) | Interfaces that have no IP address assigned |
endpoints_without_location | (none) | Endpoints that cannot be traced to a physical port |
devices_missing_os_version | (none) | Devices with no OS version recorded — may indicate incomplete collection |
devices_missing_hostname | (none) | Devices with no hostname set (only management IP known) |
interfaces_no_description | (none) | Interfaces with no description or label configured |
duplicate_ip_addresses | (none) | IP addresses configured on more than one interface simultaneously |
Inventory & Summary Queries
| Query Name | Parameters | Description |
|---|---|---|
summary_stats | (none) | Dashboard counts: devices, interfaces, endpoints, VLANs |
all_devices | (none) | All devices with vendor, model, OS version, serial |
update_device_metadata | device: string, metadata: object | Write NetBox site/tenant/rack to device node properties Pro+ |
Firewall Queries Pro+
| Query Name | Parameters | Gate |
|---|---|---|
all_firewall_devices | (none) | firewall_queries |
firewall_rules_by_device | device: string | firewall_queries |
deny_rules_summary | (none) | firewall_queries |
firewall_rules_by_zone_pair | source_zone: string, destination_zone: string | firewall_queries |
path_analysis | source_ip: string, destination_ip: string, protocol (optional), destination_port (optional) | firewall_queries |
8.4 Pagination & Export
Pagination
# Page 1 (rows 0–99)
{"parameters": {}, "limit": 100, "offset": 0}
# Page 2 (rows 100–199)
{"parameters": {}, "limit": 100, "offset": 100}
# Response includes total count
{
"total": 1843,
"offset": 100,
"limit": 100,
"rows": [...]
}
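The total, offset, and limit fields are enough to drive a paging loop. A sketch of a generator that walks every page, where `fetch` is a stand-in for your HTTP client and must return a response dict shaped like the example above:

```python
def iter_all_rows(fetch, page_size=100):
    """Yield every row of a query result by advancing offset until
    the reported total is exhausted."""
    offset = 0
    while True:
        page = fetch(limit=page_size, offset=offset)
        yield from page["rows"]
        offset += page_size
        if offset >= page["total"]:
            break
```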
CSV Export
curl -H "X-API-Key: $API_KEY" \
http://localhost:8000/queries/all_devices/execute \
-d '{"parameters": {}, "output_format": "csv"}' \
-o devices.csv
The response is a streaming CSV with a Content-Disposition: attachment; filename="all_devices.csv" header.
8.5 Kubernetes Observability Enterprise
MeshOptixIQ can collect live node and pod data from any Kubernetes cluster and expose it
through the same REST API used for network queries. Collection requires the
k8s_observability feature flag (Enterprise tier); the status endpoint is open
to all authenticated clients.
Installation
Install the optional Kubernetes SDK for full collection capability (recommended):
pip install kubernetes
If the SDK is not installed, MeshOptixIQ falls back to a built-in urllib-based collector that works inside a Kubernetes pod without any additional packages.
Configuration
| Environment Variable | Default | Description |
|---|---|---|
| K8S_KUBECONFIG | (unset) | Path to a kubeconfig file. Leave unset when running inside a cluster: the collector auto-detects in-cluster credentials via KUBERNETES_SERVICE_HOST and the mounted ServiceAccount token. Set to a file path (e.g. /etc/k8s/config) to monitor an external cluster. |
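The precedence described above (explicit K8S_KUBECONFIG first, then in-cluster detection) can be pictured with a small sketch; the real collector's internals may differ, and the function here is illustrative only:

```python
def k8s_config_source(env: dict) -> str:
    """Decide where cluster credentials come from: an explicitly set
    kubeconfig path wins, then in-cluster ServiceAccount detection
    via KUBERNETES_SERVICE_HOST, else no source is available."""
    if env.get("K8S_KUBECONFIG"):
        return "kubeconfig:" + env["K8S_KUBECONFIG"]
    if env.get("KUBERNETES_SERVICE_HOST"):
        return "in-cluster"
    return "none"
```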
Authentication modes
1. In-cluster (recommended): run MeshOptixIQ as a Pod in the target cluster. The collector reads the ServiceAccount token automatically from /var/run/secrets/kubernetes.io/serviceaccount/. No extra configuration is needed; do not set K8S_KUBECONFIG.
2. Mounted kubeconfig: mount a kubeconfig file into the container and point K8S_KUBECONFIG at it.
With Docker Compose:
volumes:
- ~/.kube/config:/etc/meshq/kubeconfig:ro
environment:
K8S_KUBECONFIG: /etc/meshq/kubeconfig
With a Kubernetes Secret:
kubectl create secret generic meshq-kubeconfig \
--from-file=config=$HOME/.kube/config
# In your Pod spec:
volumes:
- name: kubeconfig
secret:
secretName: meshq-kubeconfig
volumeMounts:
- name: kubeconfig
mountPath: /etc/meshq
readOnly: true
Required RBAC (ServiceAccount)
Create a ClusterRole that grants read access to nodes and pods, then bind it to the ServiceAccount MeshOptixIQ runs under:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: meshoptixiq-observer
rules:
- apiGroups: [""]
resources: ["nodes", "pods", "endpoints", "services"]
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: meshoptixiq-observer
subjects:
- kind: ServiceAccount
name: meshoptixiq # match your Pod's serviceAccountName
namespace: monitoring
roleRef:
kind: ClusterRole
name: meshoptixiq-observer
apiGroup: rbac.authorization.k8s.io
REST endpoints
| Method | Path | Auth | Description |
|---|---|---|---|
GET | /k8s/status | API key | Node / pod / GPU counts from the in-memory store (no cluster call) |
GET | /k8s/nodes | Enterprise | Full list of nodes with CPU, memory, GPU capacity and ready status |
GET | /k8s/pods | Enterprise | All pods across all namespaces; add ?namespace=kube-system to filter |
GET | /k8s/node/{name}/pods | Enterprise | Pods scheduled on a specific node |
POST | /k8s/collect | Enterprise | Trigger a fresh collection from the cluster and refresh the in-memory store |
Triggering collection
Call POST /k8s/collect to pull a fresh snapshot from Kubernetes into the store:
curl -s -X POST http://localhost:8000/k8s/collect \
-H "X-API-Key: $API_KEY" | jq .
# {
# "status": "ok",
# "node_count": 12,
# "pod_count": 148
# }
Wire POST /k8s/collect into MeshOptixIQ's built-in schedule executor
(see Ch. 5 — Collection) or trigger it from an
external cron / CI pipeline. There is no automatic background polling — collection
only runs when explicitly requested.
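For example, a cron entry can keep the store fresh. The path, user, and 15-minute interval below are illustrative, not prescribed:

```
# /etc/cron.d/meshq-k8s-collect: refresh the in-memory store every 15 minutes
API_KEY=your-api-key
*/15 * * * * root curl -s -X POST -H "X-API-Key: $API_KEY" http://localhost:8000/k8s/collect >/dev/null
```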
Querying the data
# All nodes
curl -s http://localhost:8000/k8s/nodes \
-H "X-API-Key: $API_KEY" | jq '.nodes[] | {node_name, cpu_capacity, memory_gb, gpu_count, ready}'
# Pods in a namespace
curl -s "http://localhost:8000/k8s/pods?namespace=gpu-jobs" \
-H "X-API-Key: $API_KEY" | jq '.pods[] | {pod_name, phase, gpu_request, restart_count}'
# Pods on a specific node
curl -s http://localhost:8000/k8s/node/gpu-node-01/pods \
-H "X-API-Key: $API_KEY" | jq .
# Quick status — no Enterprise gate
curl -s http://localhost:8000/k8s/status \
-H "X-API-Key: $API_KEY"
# {"node_count": 12, "pod_count": 148, "gpu_node_count": 4, "total_gpus": 32}
Response schemas
Node object
| Field | Type | Description |
|---|---|---|
node_name | string | Kubernetes node name |
node_ip | string | null | Internal IP address |
cpu_capacity | integer | Allocatable CPU cores |
memory_gb | float | Total memory in GiB (rounded to 1 decimal) |
gpu_count | integer | NVIDIA GPU capacity (nvidia.com/gpu resource) |
gpu_model | string | null | GPU model name reported by the node (SDK only) |
ready | boolean | True when the Ready condition is True |
labels | object | All node labels as a key/value map |
collected_at | ISO 8601 | Timestamp of this snapshot |
Pod object
| Field | Type | Description |
|---|---|---|
pod_name | string | Pod name |
namespace | string | Kubernetes namespace |
node_name | string | null | Node the pod is scheduled on |
phase | string | Pod phase: Running, Pending, Succeeded, Failed, Unknown |
gpu_request | integer | Total nvidia.com/gpu resource requests across all containers |
restart_count | integer | Sum of container restart counts |
age_hours | float | Pod age in hours since creation |
collected_at | ISO 8601 | Timestamp of this snapshot |
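The node and pod objects above can be aggregated client-side; as a sketch, the counts that /k8s/status reports can be reproduced from raw /k8s/nodes and /k8s/pods lists (the field names come from the schema tables; the aggregation function itself is illustrative):

```python
def k8s_summary(nodes: list, pods: list) -> dict:
    """Aggregate node and pod lists into /k8s/status-style counts,
    using the gpu_count field from the node schema."""
    gpu_nodes = [n for n in nodes if n["gpu_count"] > 0]
    return {
        "node_count": len(nodes),
        "pod_count": len(pods),
        "gpu_node_count": len(gpu_nodes),
        "total_gpus": sum(n["gpu_count"] for n in gpu_nodes),
    }
```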
Node and pod data is held in process memory; a container restart clears the store. Re-trigger POST /k8s/collect after startup, or schedule it via cron so the store stays populated.