ExecutionConfig.json Input
{
  "target_system": "OAuth2 Auth Service",
  "attack_vectors": ["security", "logic"],
  "adversary_capability": "advanced",
  "generate_exploits": true,
  "input_files": ["src/auth/**/*.kt"],
  "max_vulnerabilities_per_vector": 3,
  "challenge_assumptions": ["Tokens are non-predictable"]
}
Executive Summary Output
📊 Risk Assessment
🔴 Critical (1): Token Replay Vulnerability
🟠 High (2): Insecure Redirect URI Validation
🟡 Medium (1): Verbose Error Messages
Critical Finding
Logic Flaw: A race condition in the authorization code exchange allows a single-use authorization code to be redeemed more than once under high concurrency.

Configuration Parameters

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| target_system * | String | - | The system, design, or argument to analyze for weaknesses. |
| attack_vectors | List&lt;String&gt; | ["security", "logic"] | Vectors to explore: security, performance, logic, business, privacy, compliance. |
| adversary_capability | String | "intermediate" | Capability level: basic, intermediate, advanced, nation-state. |
| generate_exploits | Boolean | false | Whether to generate detailed technical exploit scenarios. |
| suggest_mitigations | Boolean | true | Whether to provide defensive recommendations for found issues. |
| related_files | List&lt;String&gt; | - | Glob patterns for related code or documentation to provide context. |
| challenge_assumptions | List&lt;String&gt; | - | Specific architectural or logical assumptions to target. |
| input_files | List&lt;String&gt; | - | Glob patterns for source code or documentation to be analyzed. |
| max_vulnerabilities_per_vector | Int | 5 | Maximum findings per category (range 1-20). |

* Required field.
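The parameter table maps naturally onto a Kotlin data class with defaults. The sketch below is illustrative only: field names, defaults, and the range check mirror the table, but this is not the harness's actual `AdversarialReasoningTaskExecutionConfigData`.

```kotlin
// Hypothetical mirror of the configuration table above; a sketch, not the real class.
data class AdversarialConfig(
    val targetSystem: String,                                   // required
    val attackVectors: List<String> = listOf("security", "logic"),
    val adversaryCapability: String = "intermediate",
    val generateExploits: Boolean = false,
    val suggestMitigations: Boolean = true,
    val relatedFiles: List<String> = emptyList(),
    val challengeAssumptions: List<String> = emptyList(),
    val inputFiles: List<String> = emptyList(),
    val maxVulnerabilitiesPerVector: Int = 5                    // documented range: 1-20
) {
    init {
        require(maxVulnerabilitiesPerVector in 1..20) {
            "max_vulnerabilities_per_vector must be in 1..20"
        }
    }
}
```

Defaults match the table, so a caller only needs to supply the required `targetSystem` plus whatever they want to override.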

Task Execution Flow

  1. Context Gathering: Aggregates content from input_files, related_files, and prior task results.
  2. Adversarial Agent Initialization: Spawns specialized agents for each attack_vector with a persona matching the adversary_capability.
  3. Vector Analysis: Parallel analysis of the system to identify vulnerabilities, challenging any provided challenge_assumptions.
  4. Mitigation Synthesis: If enabled, a separate Security Architect agent reviews findings to propose immediate and long-term fixes.
  5. Reporting: Generates a structured Markdown transcript, PDF/HTML reports, and an Executive Summary with risk ratings.
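The five steps above can be sketched as a single pipeline. Everything in this sketch is hypothetical (the function and its string-based stand-ins are not the harness API), and it simplifies heavily: it assumes one agent per attack vector and one finding per challenged assumption, whereas the real flow runs agents in parallel and renders full reports.

```kotlin
// Illustrative sketch of the execution flow; all names are hypothetical.
fun runAdversarialAnalysis(
    vectors: List<String>,
    capability: String,
    assumptions: List<String>,
    suggestMitigations: Boolean
): List<String> {
    // 1. Context Gathering (stubbed; a real run aggregates input/related files)
    val context = "aggregated context"
    // 2. Adversarial Agent Initialization: one persona per attack vector
    val agents = vectors.map { v -> "$capability-$v-agent" }
    // 3. Vector Analysis: each agent challenges every stated assumption
    val findings = agents.flatMap { agent ->
        assumptions.map { a -> "$agent challenges '$a' ($context)" }
    }
    // 4. Mitigation Synthesis: optional defensive recommendations per finding
    val mitigations =
        if (suggestMitigations) findings.map { "mitigation for: $it" } else emptyList()
    // 5. Reporting: flattened to a list here; the harness emits Markdown/PDF/HTML
    return findings + mitigations
}
```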

Embedded Execution (Scenario B)

Use the UnifiedHarness to run this task programmatically in a headless environment (CI/CD, CLI tools).

// 1. Define the Execution Configuration
val executionConfig = AdversarialReasoningTask.AdversarialReasoningTaskExecutionConfigData(
    target_system = "Payment Gateway API",
    attack_vectors = listOf("security", "compliance"),
    adversary_capability = "advanced",
    input_files = listOf("src/main/kotlin/com/pay/**"),
    generate_exploits = true,
    suggest_mitigations = true
)

// 2. Run via Harness
harness.runTask(
    taskType = AdversarialReasoningTask.AdversarialReasoning,
    typeConfig = TaskTypeConfig(), // Use default static config
    executionConfig = executionConfig,
    workspace = File("./project-dir"),
    autoFix = true
)

Direct Instantiation

val task = AdversarialReasoningTask(
    orchestrationConfig = config,
    planTask = AdversarialReasoningTaskExecutionConfigData(
        target_system = "Payment Gateway API",
        attack_vectors = listOf("security", "compliance"),
        adversary_capability = "advanced"
    )
)

JSON Configuration

{
  "task_type": "AdversarialReasoning",
  "target_system": "Internal Auth Provider",
  "attack_vectors": ["logic", "privacy"],
  "adversary_capability": "nation-state",
  "input_files": ["docs/architecture/*.md"],
  "challenge_assumptions": [
    "Admin network is physically segmented",
    "Logs cannot be tampered with"
  ],
  "max_vulnerabilities_per_vector": 10
}
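A real loader for this JSON would use a proper JSON library (e.g. kotlinx.serialization). Purely to illustrate how top-level keys map onto task parameters, the sketch below pulls scalar string fields out of a flat config with a regex; the `scalarField` helper is hypothetical and is not a JSON parser.

```kotlin
// Illustrative only: extract a top-level string field from a flat JSON config.
// A production loader should deserialize with a real JSON library instead.
fun scalarField(json: String, key: String): String? =
    Regex("\"$key\"\\s*:\\s*\"([^\"]*)\"").find(json)?.groupValues?.get(1)

val configJson = """
    {
      "task_type": "AdversarialReasoning",
      "target_system": "Internal Auth Provider",
      "adversary_capability": "nation-state"
    }
""".trimIndent()
```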

LLM Prompt Segment

The following instructions are injected into the LLM context:

AdversarialReasoning - Red team analysis to identify vulnerabilities and weaknesses
  ** Specify target_system: the system, design, or argument to analyze
  ** Choose attack_vectors from: 'security', 'performance', 'logic', 'business', 'privacy', 'compliance'
  ** Set adversary_capability: 'basic', 'intermediate', 'advanced', 'nation-state'
  ** Enable generate_exploits for detailed attack scenarios (use with caution)
  ** Enable suggest_mitigations to get defensive recommendations
  ** Optionally specify related_files (glob patterns) to analyze code
  ** Optionally list challenge_assumptions to target specific beliefs
  ** Identifies vulnerabilities, edge cases, and failure modes
  ** Simulates adversarial thinking to stress test systems
  ** Produces structured vulnerability reports with severity ratings