Technical Overview: How SecureExec Works
SecureExec is a self-hosted endpoint detection and response (EDR) platform for Linux. This technical overview covers the full data path from kernel hook to alert: the agent architecture, the server-side detection engine, and the response mechanisms.
Architecture at a Glance
┌─────────────────────────────────────────┐
│ Linux Host │
│ │
│ eBPF programs (kernel) │
│ ├── tracepoints (syscall hooks) │
│ └── kprobes (kernel function hooks) │
│ │ │
│ Ring buffers (4 channels) │
│ │ │
│ Agent (userspace, single Rust binary) │
│ ├── eBPF consumer (parse + enrich) │
│ ├── Process table (pid→guid) │
│ ├── Filter chain (dedup + config) │
│ ├── SQLite spool (durability) │
│ └── gRPC transport │
└────────────┬────────────────────────────┘
│ TLS / gRPC stream
▼
┌─────────────────────────────────────────┐
│ SecureExec Server │
│ ├── Event ingestion (gRPC) │
│ ├── Alert engine (21 built-in rules) │
│ ├── Starlark custom rules │
│ ├── Process table (per-agent lineage) │
│ ├── Elasticsearch indexing │
│ └── Response dispatcher │
│ │ │
│ Web console (Next.js) │
│ ├── Alert dashboard + search │
│ ├── Process tree visualization │
│ ├── Incident timeline │
│ └── Response actions UI │
└─────────────────────────────────────────┘
eBPF Event Collection
The agent uses eBPF programs attached to kernel tracepoints and kprobes to capture security-relevant events with near-zero overhead. No kernel module is required — eBPF programs are loaded at agent startup and removed cleanly on shutdown.
Events flow through four dedicated ring buffers, each sized for its expected throughput:
| Ring Buffer | Size | Event Types |
|---|---|---|
| PROCESS_EVENTS | 512 KB | process_exec, process_fork, process_exit, exec_argv |
| FILE_EVENTS | 256 KB | file_create, file_modify, file_delete, file_rename, file_link, file_perm_change |
| NETWORK_EVENTS | 256 KB | network_connect, network_listen, dns_query |
| SECURITY_EVENTS | 1 MB | privilege_change, process_vm_access, memfd_create, kernel_module_load, namespace_change, capability_change, bpf_program, process_signal, keyctl |
Every ring buffer entry starts with an event_tag: u8 byte that identifies the event type. The userspace consumer reads the tag, casts the remaining bytes to the corresponding #[repr(C)] struct, and converts it to a typed BpfEvent enum variant.
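The dispatch step can be sketched as follows. The tag value (0x01) and the ProcessExecRaw layout are invented for illustration, and manual field decoding stands in for the unsafe pointer cast a real consumer would use:

```rust
use std::convert::TryInto;

// Hypothetical #[repr(C)] layout for a process_exec payload; the real
// structs carry many more fields (comm, argv offsets, timestamps, ...).
#[repr(C)]
#[derive(Clone, Copy)]
struct ProcessExecRaw {
    pid: u32,
    ppid: u32,
}

#[derive(Debug, PartialEq)]
enum BpfEvent {
    ProcessExec { pid: u32, ppid: u32 },
    Unknown(u8),
}

fn parse_event(buf: &[u8]) -> BpfEvent {
    let tag = buf[0];
    let body = &buf[1..];
    match tag {
        // 0x01 = process_exec (illustrative tag value)
        0x01 if body.len() >= std::mem::size_of::<ProcessExecRaw>() => {
            let pid = u32::from_ne_bytes(body[0..4].try_into().unwrap());
            let ppid = u32::from_ne_bytes(body[4..8].try_into().unwrap());
            BpfEvent::ProcessExec { pid, ppid }
        }
        other => BpfEvent::Unknown(other),
    }
}
```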
What We Hook
Some examples of the syscalls and kernel functions we instrument:
- sched_process_exec — captures every exec, including the full argv via a chained EXEC_ARGV event for long command lines
- sched_process_fork / sched_process_exit — tracks process creation and termination for the in-memory process table
- sys_enter_openat / security_file_open — file operations, filtered in-kernel to exclude high-frequency noise paths like /proc and /sys
- sys_enter_connect / sys_enter_bind — network connections and listening sockets, with full sockaddr parsing
- udp_sendmsg — DNS query extraction (port 53 UDP)
- sys_enter_setuid / sys_enter_setreuid — privilege changes
- sys_enter_process_vm_writev — cross-process memory writes (process injection)
- sys_enter_memfd_create — fileless execution via in-memory file descriptors
- sys_enter_init_module / sys_enter_finit_module — kernel module loads
- sys_enter_unshare / sys_enter_setns — namespace changes (container escape signals)
All eBPF programs are written in Rust using the Aya framework.
Agent Architecture
The agent is a single statically linked Rust binary with zero runtime dependencies. It runs as a systemd service (secureexec-agent.service) and typically uses less than 1% CPU and under 50 MB of RAM on a production host generating 50,000+ events per minute.
Event Pipeline
eBPF ring buffers
→ parse_*_event() (tag → BpfEvent variant)
→ convert_bpf_events() (BpfEvent → EventKind)
→ ProcessTable.resolve() (pid → process_guid, username, container_id)
→ FilterChain (dedup by content_hash, configurable per-type filters)
→ SQLite spool (durable on-disk queue, survives agent restart)
→ gRPC transport (streaming or batch mode)
→ Server
Process Table
The agent maintains an in-memory process table indexed by PID. On startup, it enumerates /proc to build a baseline snapshot. As process_exec, process_fork, and process_exit events arrive, the table is updated in real time.
Each process entry stores:
- PID and PPID (parent PID from procfs, not from tracepoint data, which is unreliable for PPID)
- start_time (from /proc/{pid}/stat)
- process_guid — a stable SHA-256 hash of (agent_id, pid, start_time) that uniquely identifies a process even after PID reuse
- username — resolved from a UID→name map parsed from /etc/passwd at startup
- container_id — extracted from /proc/{pid}/cgroup if the process runs inside a container
The process_guid is the key that connects events to the process tree visualization in the console. Because it includes start_time, it distinguishes between two different processes that reuse the same PID.
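The identity argument can be sketched with the guid's input tuple directly; distinct tuples hash to distinct SHA-256 digests, so comparing tuples is equivalent for illustration (the type and helper below are invented, since the stdlib has no SHA-256):

```rust
// Why process_guid includes start_time: two processes that reuse the same
// PID still get distinct identities. The real agent hashes this tuple with
// SHA-256; here the tuple itself stands in for the digest.
#[derive(Debug, PartialEq, Eq, Hash, Clone)]
struct ProcessGuid {
    agent_id: String,
    pid: u32,
    start_time: u64, // from /proc/{pid}/stat, clock ticks since boot
}

fn guid_for(agent_id: &str, pid: u32, start_time: u64) -> ProcessGuid {
    ProcessGuid {
        agent_id: agent_id.to_string(),
        pid,
        start_time,
    }
}
```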
Spool and Retry
If the gRPC connection to the server drops, events are persisted to a local SQLite database (the "spool"). When the connection is restored, spooled events are replayed in sequence-number order. The agent heartbeat (AgentHeartbeat) reports spool_pending so operators can monitor backlog from the web console.
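The replay semantics can be sketched with an in-memory map keyed by seqno. A BTreeMap stands in for the durable SQLite table, and the type and method names are illustrative:

```rust
use std::collections::BTreeMap;

// In-memory stand-in for the SQLite spool: events keyed by seqno so that
// replay after a reconnect happens in sequence-number order.
struct Spool {
    pending: BTreeMap<u64, String>, // seqno -> serialized event
}

impl Spool {
    fn new() -> Self {
        Spool { pending: BTreeMap::new() }
    }

    fn enqueue(&mut self, seqno: u64, event: String) {
        self.pending.insert(seqno, event);
    }

    // Drain in seqno order once the gRPC connection is restored.
    fn replay(&mut self) -> Vec<(u64, String)> {
        std::mem::take(&mut self.pending).into_iter().collect()
    }

    // Backlog size, as reported in the agent heartbeat.
    fn spool_pending(&self) -> usize {
        self.pending.len()
    }
}
```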
Content-Based Deduplication
Every event body is hashed with SHA-1 (ContentHash trait). The agent maintains a 65,536-entry LRU deduplication filter. If the same event content is seen twice within the filter window, the duplicate is dropped before it reaches the spool. This eliminates redundant noise from inotify storms and rapid process restarts.
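A minimal sketch of the fixed-capacity filter, with FIFO eviction standing in for the agent's LRU policy (capacity, type, and method names are illustrative):

```rust
use std::collections::{HashSet, VecDeque};

// Drop an event if its content hash was seen among the last `capacity`
// distinct hashes. The real agent uses SHA-1 content hashes and a
// 65,536-entry LRU; this sketch evicts in plain insertion (FIFO) order.
struct DedupFilter {
    capacity: usize,
    seen: HashSet<String>,
    order: VecDeque<String>, // insertion order, oldest at the front
}

impl DedupFilter {
    fn new(capacity: usize) -> Self {
        DedupFilter {
            capacity,
            seen: HashSet::new(),
            order: VecDeque::new(),
        }
    }

    // Returns true if the event should pass (first sighting), false if it
    // is a duplicate to be dropped before the spool.
    fn check(&mut self, content_hash: &str) -> bool {
        if self.seen.contains(content_hash) {
            return false;
        }
        if self.order.len() == self.capacity {
            if let Some(oldest) = self.order.pop_front() {
                self.seen.remove(&oldest);
            }
        }
        self.seen.insert(content_hash.to_string());
        self.order.push_back(content_hash.to_string());
        true
    }
}
```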
Event Schema (Protobuf)
All events use a common envelope:
message AgentEvent {
string id = 1; // UUIDv4
uint64 seqno = 2; // monotonic per-agent
string timestamp = 3; // ISO-8601
string agent_id = 4;
string hostname = 5;
string os = 6;
string content_hash = 7; // SHA-1 of event body
string process_guid = 8; // stable process ID
string process_name = 9;
uint32 process_pid = 38;
string username = 40;
string container_id = 41;
oneof kind {
ProcessEvent process_create = 10;
ProcessEvent process_fork = 25;
ProcessEvent process_exit = 11;
FileEvent file_create = 12;
FileEvent file_modify = 13;
FileEvent file_delete = 14;
FileRenameEvent file_rename = 15;
NetworkEvent network_connect = 16;
NetworkEvent network_listen = 17;
DnsEvent dns_query = 18;
UserLogonEvent user_logon = 20;
PrivilegeChangeEvent privilege_change = 26;
ProcessVmEvent process_vm_access = 31;
MemfdCreateEvent memfd_create = 32;
KernelModuleEvent kernel_module_load = 30;
NamespaceChangeEvent namespace_change = 36;
CapabilityChangeEvent capability_change = 34;
// ... and more
}
}
The seqno field is a monotonically increasing counter per agent. It provides total ordering for events within a host and serves as a secondary sort key (after @timestamp) in Elasticsearch queries, preserving correct ordering even when multiple events share the same millisecond timestamp.
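The composite sort key can be sketched as follows (types invented; ISO-8601 timestamps at a fixed precision compare correctly as strings):

```rust
// Sort by (timestamp, seqno): when two events on the same host share a
// timestamp, seqno breaks the tie and restores kernel-observed order.
#[derive(Debug, PartialEq, Clone)]
struct EventRef {
    timestamp: String, // ISO-8601, lexicographically sortable
    seqno: u64,
}

fn sort_events(events: &mut [EventRef]) {
    events.sort_by(|a, b| a.timestamp.cmp(&b.timestamp).then(a.seqno.cmp(&b.seqno)));
}
```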
Server-Side Detection Engine
The server evaluates every incoming event against a set of alert rules. Rules fall into two categories:
Built-in Rules (21 rules)
Rust-native rules compiled into the server binary. Each rule implements the AlertRule trait:
pub trait AlertRule: Send + Sync {
fn name(&self) -> &str;
fn description(&self) -> &str;
fn severity(&self) -> &str; // "low" | "medium" | "high" | "critical"
fn evaluate(&self, ctx: &AlertContext, events: &[AgentEvent]) -> Vec<AlertEvent>;
}
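The trait above can be exercised with a toy rule. The event, context, and alert types below are minimal stubs invented for the sketch; the real structs carry many more fields:

```rust
struct AlertContext; // stub: the real context includes the process table

#[derive(Clone)]
struct AgentEvent {
    process_name: String,
    dest_ip: Option<String>, // Some(..) for network_connect events
}

#[derive(Debug, PartialEq)]
struct AlertEvent {
    rule: String,
    severity: String,
}

trait AlertRule: Send + Sync {
    fn name(&self) -> &str;
    fn description(&self) -> &str;
    fn severity(&self) -> &str;
    fn evaluate(&self, ctx: &AlertContext, events: &[AgentEvent]) -> Vec<AlertEvent>;
}

// Simplified reverse-shell heuristic: a known shell with any outbound
// connection. (The real rule also checks that the destination IP is
// non-private, among other signals.)
struct ReverseShellRule;

impl AlertRule for ReverseShellRule {
    fn name(&self) -> &str { "reverse_shell" }
    fn description(&self) -> &str { "Shell process with outbound connection" }
    fn severity(&self) -> &str { "critical" }
    fn evaluate(&self, _ctx: &AlertContext, events: &[AgentEvent]) -> Vec<AlertEvent> {
        const SHELLS: [&str; 5] = ["bash", "sh", "zsh", "dash", "ksh"];
        events
            .iter()
            .filter(|e| SHELLS.contains(&e.process_name.as_str()) && e.dest_ip.is_some())
            .map(|_| AlertEvent {
                rule: self.name().to_string(),
                severity: self.severity().to_string(),
            })
            .collect()
    }
}
```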
Rules receive event batches and the AlertContext which includes the server-side process table for lineage resolution. Examples:
- Reverse Shell — matches shell processes (bash, sh, zsh, dash, ksh) making outbound connections to non-private IPs
- SSH Brute-Force — stateful rule with a sliding-window counter per (source_ip, agent_id) and a cooldown timer to prevent alert floods
- Ransomware — multi-signal rule: ransom note creation, encrypted file extension renames, mass-rename rate detection (20+ renames in 60 s), and backup wipe command patterns
- Crypto Miner — cross-event-type rule matching process names, stratum patterns in command lines, mining pool ports, and DNS queries to known pool domains
Each rule includes an allowlist for known-benign processes. For example, the Privilege Escalation rule excludes sshd (privilege separation), su, sudo, cron, and systemd-executor.
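The sliding-window-plus-cooldown pattern behind the SSH Brute-Force rule can be sketched as below. The type name, threshold, window, and cooldown values are illustrative; state is keyed by (source_ip, agent_id) as described above:

```rust
use std::collections::HashMap;

struct WindowCounter {
    window_secs: u64,
    threshold: usize,
    cooldown_secs: u64,
    attempts: HashMap<(String, String), Vec<u64>>, // key -> attempt times
    last_alert: HashMap<(String, String), u64>,
}

impl WindowCounter {
    fn new(window_secs: u64, threshold: usize, cooldown_secs: u64) -> Self {
        WindowCounter {
            window_secs,
            threshold,
            cooldown_secs,
            attempts: HashMap::new(),
            last_alert: HashMap::new(),
        }
    }

    // Record a failed login at `now` (unix seconds); returns true when an
    // alert should fire.
    fn record(&mut self, source_ip: &str, agent_id: &str, now: u64) -> bool {
        let key = (source_ip.to_string(), agent_id.to_string());
        let window = self.window_secs;
        let ts = self.attempts.entry(key.clone()).or_default();
        ts.push(now);
        // Slide the window: keep only attempts newer than window_secs.
        ts.retain(|&t| now.saturating_sub(t) < window);
        if ts.len() < self.threshold {
            return false;
        }
        // Cooldown: suppress repeat alerts for the same key.
        if let Some(&last) = self.last_alert.get(&key) {
            if now.saturating_sub(last) < self.cooldown_secs {
                return false;
            }
        }
        self.last_alert.insert(key, now);
        true
    }
}
```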
Custom Starlark Rules
Users can write detection rules in Starlark (a Python-like configuration language). Custom rules are stored in PostgreSQL and hot-reloaded without server restart. They have access to the same AlertContext and event data as built-in rules.
Suppression Rules
Starlark-based suppression rules can filter out known false positives. They are evaluated after alert rules and can suppress specific alerts by rule name, severity, process name, or any event field.
Elasticsearch Storage
Events and alerts are indexed into Elasticsearch with separate index templates:
- Events: secureexec-events-{org_id}-* — daily rollover, 7-day retention (configurable)
- Alerts: secureexec-alerts-{org_id}-* — daily rollover, 90-day retention
The server uses _bulk API for high-throughput indexing. Each event is indexed with all envelope fields plus the type-specific payload, enabling full-text search across command lines, file paths, IP addresses, and DNS queries.
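Assembling a _bulk request body can be sketched by hand: newline-delimited action/document pairs with a trailing newline, targeting the per-org index pattern above. The helper name and document shape are invented (a real server would typically go through an Elasticsearch client library):

```rust
// docs: (event_id, json_payload) pairs; payloads assumed pre-serialized.
fn bulk_body(org_id: &str, day: &str, docs: &[(&str, &str)]) -> String {
    let index = format!("secureexec-events-{}-{}", org_id, day);
    let mut body = String::new();
    for (id, doc) in docs {
        // Action line names the target index and document id...
        body.push_str(&format!(
            "{{\"index\":{{\"_index\":\"{}\",\"_id\":\"{}\"}}}}\n",
            index, id
        ));
        // ...followed by the document source on its own line.
        body.push_str(doc);
        body.push('\n');
    }
    body
}
```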
Response Capabilities
When a threat is detected, operators can act directly from the web console:
- Network Isolation — the agent configures iptables rules to block all traffic except the agent↔server gRPC channel. The host is effectively quarantined while remaining manageable.
- Kill Process Tree — sends SIGKILL to the entire process group rooted at the ancestor PID, terminating the full attack chain.
- Block by Hash or Path — uses fanotify to deny exec permissions for specific file hashes or paths. Rules are pushed to all agents in the organization instantly.
- Global Blocklist — centrally managed blocking rules that propagate to all endpoints.
Response commands are delivered to agents through a gRPC control channel (CommandStream). The agent polls for pending commands on each heartbeat interval.
Incident Timeline
The incident timeline reconstructs a chronological attack narrative from any starting point — an alert, a process, or a time window on a specific host.
The server queries Elasticsearch for events matching the scope (agent_id + process_guid + time range), optionally resolves lightweight process lineage (one-hop parent lookup from process_create/fork events), and merges events with any matching alerts. Each entry is labeled with an attack phase (Initial Access, Execution, Persistence, C2) based on heuristics — for example, outbound connections to external IPs are labeled as C2, and writes to cron directories are labeled as Persistence.
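The phase-labeling step can be sketched as a function from event shape to phase. The phase set matches the article; the matching rules themselves are simplified stand-ins (for instance, the private-IP check below covers only two RFC 1918 ranges):

```rust
#[derive(Debug, PartialEq)]
enum Phase {
    InitialAccess,
    Execution,
    Persistence,
    C2,
    Unknown,
}

// `detail` is the destination IP for network events, the file path for
// file events, and is unused otherwise.
fn label_phase(event_type: &str, detail: &str) -> Phase {
    match event_type {
        // Outbound connection to a (roughly) external IP -> C2.
        "network_connect"
            if !detail.starts_with("10.") && !detail.starts_with("192.168.") =>
        {
            Phase::C2
        }
        // Writes to cron directories -> Persistence.
        "file_create" | "file_modify"
            if detail.starts_with("/etc/cron") || detail.starts_with("/var/spool/cron") =>
        {
            Phase::Persistence
        }
        "user_logon" => Phase::InitialAccess,
        "process_exec" => Phase::Execution,
        _ => Phase::Unknown,
    }
}
```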
Deployment
The agent ships as a .deb or .rpm package and installs in under 60 seconds:
dpkg -i secureexec-agent.deb
systemctl start secureexec-agent
Supported distributions: Ubuntu 20.04+, Debian 11+, RHEL 8+, Amazon Linux 2. The only kernel requirement is eBPF support (kernel 5.4+). An optional kernel module fallback is available for older kernels.
The server components (API server, Elasticsearch, PostgreSQL) deploy via Docker Compose for single-node setups or Kubernetes for production clusters.
Open Questions and Roadmap
Active development areas include:
- File integrity monitoring with baseline comparison
- YARA scanning integration for on-host file analysis
- Starlark rule library with community contributions
- macOS and Windows agent support (Endpoint Security framework and ETW respectively)
For questions or a guided walkthrough on your infrastructure, request a demo. We run the demo on a real Linux host — no slides.