Detection Engineering Workbench
Translate noisy ideas into durable detection logic with peer review and test harnesses you can keep.
Duration: 36 hours over five weeks
Format: Remote cohort with live labs
Skill focus: Advanced
Listed fee: ₩1,350,000 (informational; no checkout on this site)
Outline
Engineers author rules against a shared test corpus, run negative tests, and defend their choices in review. Mentors focus on maintainability (naming, ownership fields, and rollback discipline) rather than on chasing novelty signatures.
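To make that concrete, here is a minimal Python sketch of a rule checked against positive and negative fixtures. The rule logic, event fields, and harness shape are illustrative assumptions (Python 3.9+), not the course's actual harness.

```python
# Minimal sketch of a positive/negative fixture harness.
# All names here are hypothetical, not course material.
from dataclasses import dataclass
from typing import Callable

Event = dict  # a single log event, as parsed key/value pairs

@dataclass
class Fixture:
    event: Event
    should_match: bool  # True for positive fixtures, False for negative

def office_spawns_shell(event: Event) -> bool:
    """Example rule: flag an office app spawning a shell."""
    parent = event.get("parent_image", "").lower()
    child = event.get("image", "").lower()
    return parent.endswith("winword.exe") and child.endswith(("cmd.exe", "powershell.exe"))

FIXTURES = [
    # Positive fixture: the behavior the rule exists to catch.
    Fixture({"parent_image": "WINWORD.EXE", "image": "cmd.exe"}, True),
    # Negative fixture: a benign lookalike the rule must NOT match.
    Fixture({"parent_image": "explorer.exe", "image": "cmd.exe"}, False),
]

def run(rule: Callable[[Event], bool], fixtures: list[Fixture]) -> None:
    for f in fixtures:
        assert rule(f.event) == f.should_match, f"rule mismatch on {f.event}"

if __name__ == "__main__":
    run(office_spawns_shell, FIXTURES)
    print("all fixtures pass")
```

Negative fixtures are what keep a rule honest: each documented false positive becomes a fixture that must stay quiet before a change can merge.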
Included practices
- Shared test harness with positive and negative fixtures
- Pair review sessions with annotated diffs
- Guidance on deprecation and sunset notices
- Office hours on noisy rule triage
- Export pack for documentation-minded teams
- Lightweight mapping to the quality-standards language your org already uses
- Capstone: ship one rule with tests and owner metadata (a metadata sketch follows this list)
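As a sketch of what the capstone's owner metadata and a deprecation notice might look like attached to a rule record. The field names are hypothetical, not a standard schema.

```python
# Hypothetical owner/sunset metadata attached to a rule record.
# Field names are illustrative, not a standard schema.
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class RuleMeta:
    rule_id: str
    owner: str                          # team or person accountable for the rule
    escalation: str                     # where tuning requests and questions go
    deprecated: bool = False
    sunset_date: Optional[date] = None  # when a deprecated rule is deleted
    references: list[str] = field(default_factory=list)

meta = RuleMeta(
    rule_id="proc-office-shell-001",
    owner="detection-team@example.com",
    escalation="#detections-oncall",
    deprecated=True,
    sunset_date=date(2025, 6, 30),
)
```

Keeping ownership and sunset dates next to the logic means a deprecation notice travels with the rule instead of living in a document nobody reads.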
Outcomes
- Author a detection with an explicit false-positive budget (see the sketch after this list)
- Write rollback steps a teammate can execute blind
- Explain trade-offs to a non-coder stakeholder in five sentences
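One illustrative way to hold the first two outcomes in code rather than in prose; the budget threshold, rule ID, and channel name below are assumptions for the example.

```python
# A false-positive budget made explicit and checkable; numbers and
# names are illustrative assumptions, not course material.
WEEKLY_FP_BUDGET = 5  # false-positive alerts per week the team will tolerate

def within_budget(false_positives_this_week: int) -> bool:
    """Review gate: a rule over budget gets tuned or rolled back."""
    return false_positives_this_week <= WEEKLY_FP_BUDGET

# Rollback steps written so a teammate can execute them blind.
ROLLBACK_STEPS = [
    "Set enabled: false on rule proc-office-shell-001 in the rules repo.",
    "Deploy through the standard pipeline; no manual console edits.",
    "Watch the alert stream for 15 minutes, then post status to #detections-oncall.",
]
```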
Lead mentor
Noah Kim
Simulation engineer maintaining lab infrastructure and rule sandboxes.
Participant notes
- "Workbench forced me to document a false-positive budget before merge. That single habit reduced pager noise for our on-call rotation."
- "Pair review felt slower at first, but comments were kind and specific. Still wish we had one more week on negative testing."