DiscussionTask
Directly answer questions or provide insights using the LLM, optionally referencing project files, with support for interactive user feedback.
Side-Effect Safe · Interactive Support · Markdown Output
⚙️ DiscussionTaskExecutionConfig.json

```json
{
  "inquiry_questions": [
    "How is the authentication middleware structured?",
    "Are there any potential race conditions in the cache?"
  ],
  "inquiry_goal": "Security and Performance Audit",
  "input_files": ["src/auth/**/*.kt", "src/cache/*.kt"]
}
```
→
👁️ Session UI (Markdown Render)
Analysis: Security & Performance
Based on the provided files in src/auth and src/cache:
- Authentication: Uses a JWT-based filter chain. Structure is modular but lacks token revocation logic.
- Cache: `LocalCache.kt` uses `ConcurrentHashMap`, which prevents basic race conditions, but the `getOrPut` logic is not atomic.
User: "Can you suggest a fix for the cache atomicity?"
Execution Configuration
| Field | Type | Description |
|---|---|---|
| `inquiry_questions` | `List<String>` | The specific questions or topics to be addressed in the inquiry. |
| `inquiry_goal` | `String` | The high-level goal or purpose of the inquiry (e.g., "Code Review"). |
| `input_files` | `List<String>` | Glob patterns (e.g. `**/*.kt`) to be used as context for the LLM (see the matching example below). |
| `task_description` | `String` | A natural language description of the task's intent. |
Token Usage: Medium to High (depends on the number and size of input files).
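The `input_files` globs use the familiar `*`/`**` syntax. As a rough illustration of how such patterns match paths (shown here with `java.nio.file` glob semantics, which may differ in detail from the task's internal resolver; the file paths are hypothetical):

```kotlin
import java.nio.file.FileSystems
import java.nio.file.Paths

fun main() {
    // Illustrative only: glob matching via java.nio, not the task's own resolver.
    val matcher = FileSystems.getDefault().getPathMatcher("glob:src/auth/**/*.kt")

    println(matcher.matches(Paths.get("src/auth/jwt/TokenFilter.kt"))) // true (hypothetical path)
    println(matcher.matches(Paths.get("src/cache/LocalCache.kt")))     // false: outside src/auth
}
```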
Task Lifecycle
- Context Gathering: The task resolves `input_files` glob patterns against the workspace.
- File Ingestion: Files are read and formatted into a single context block. Non-text files (PDFs, etc.) are processed via specialized readers.
- LLM Interaction:
  - If `autoFix` is enabled: performs a one-shot analysis and returns the result.
  - If `autoFix` is disabled: initiates an interactive `Discussable` session allowing the user to ask follow-up questions.
- Output: Generates a comprehensive Markdown report rendered directly in the Session UI.
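A simplified sketch of the non-interactive (`autoFix = true`) path follows. All names here (`InquiryConfig`, `runDiscussionOnce`, the `complete` callback) are hypothetical, and the specialized non-text readers are omitted; this is not the actual implementation.

```kotlin
import java.io.File
import java.nio.file.FileSystems

// Hypothetical config type mirroring the execution configuration fields.
data class InquiryConfig(
    val inquiryGoal: String,
    val inquiryQuestions: List<String>,
    val inputFiles: List<String>
)

// Sketch of the one-shot (autoFix = true) path of the lifecycle above.
fun runDiscussionOnce(
    workspace: File,
    config: InquiryConfig,
    complete: (String) -> String // stands in for the LLM call
): String {
    // 1. Context Gathering: resolve glob patterns against the workspace.
    val matchers = config.inputFiles.map {
        FileSystems.getDefault().getPathMatcher("glob:$it")
    }
    val files = workspace.walkTopDown()
        .filter { it.isFile }
        .filter { file ->
            val relative = workspace.toPath().relativize(file.toPath())
            matchers.any { it.matches(relative) }
        }
        .toList()

    // 2. File Ingestion: read and format files into a single context block.
    val context = files.joinToString("\n\n") { file ->
        "### ${file.relativeTo(workspace)}\n${file.readText()}"
    }

    // 3. LLM Interaction (one-shot) + 4. Output: return the Markdown report.
    val prompt = buildString {
        appendLine("Goal: ${config.inquiryGoal}")
        config.inquiryQuestions.forEach { appendLine("- $it") }
        appendLine()
        append(context)
    }
    return complete(prompt)
}
```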
Type Configuration
| Field | Type | Description |
|---|---|---|
| `model` | `ApiChatModel` | Override the default model used for the "Insight" agent. |
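If an override is needed, it would be supplied alongside the name when constructing the type configuration. A sketch under that assumption (how an `ApiChatModel` instance is obtained is not covered in this section):

```kotlin
// Sketch only: where a model override would plug in.
val reviewerTypeConfig = DiscussionTask.DiscussionTaskTypeConfig(
    name = "InsightReviewer"
    // model = myApiChatModel  // hypothetical: supply an ApiChatModel to override the default
)
```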
Embedded Execution (UnifiedHarness)
To run this task programmatically in a headless environment (CI/CD, CLI), use the `UnifiedHarness`.
```kotlin
// 1. Define the Execution Configuration
val executionConfig = DiscussionTask.DiscussionTaskExecutionConfigData(
    inquiry_goal = "Architecture Review",
    inquiry_questions = listOf("Does the service layer follow the Repository pattern?"),
    input_files = listOf("src/main/kotlin/services/*.kt", "src/main/kotlin/repositories/*.kt"),
    task_description = "Reviewing architectural patterns"
)

// 2. Define the Type Configuration (Optional model override)
val typeConfig = DiscussionTask.DiscussionTaskTypeConfig(
    name = "ArchReviewer"
)

// 3. Run via Harness
harness.runTask(
    taskType = DiscussionTask.Discussion,
    typeConfig = typeConfig,
    executionConfig = executionConfig,
    workspace = File("./my-project"),
    autoFix = true // Set to true for non-interactive CI/CD usage
)
```
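The unit test below treats the return value as a `String`, so, assuming `runTask` returns the generated Markdown report, a CI job could capture and persist it as a build artifact:

```kotlin
// Assumes runTask returns the Markdown report as a String (see the test below).
val report = harness.runTask(
    taskType = DiscussionTask.Discussion,
    typeConfig = typeConfig,
    executionConfig = executionConfig,
    workspace = File("./my-project"),
    autoFix = true
)

// Persist the report, e.g. as a CI artifact.
File("build/reports/architecture-review.md").apply {
    parentFile?.mkdirs()
    writeText(report)
}
```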
Test Case Example
Example of a unit test verifying the discussion logic:
```kotlin
@Test
fun testDiscussionTask() {
    val harness = UnifiedHarness(serverless = true)
    harness.start()
    val result = harness.runTask(
        taskType = DiscussionTask.Discussion,
        executionConfig = DiscussionTask.DiscussionTaskExecutionConfigData(
            inquiry_goal = "Test Inquiry",
            inquiry_questions = listOf("What is the purpose of this test?"),
            input_files = listOf("src/test/kotlin/DiscussionTaskTest.kt")
        ),
        workspace = tempDir,
        autoFix = true
    )
    assert(result.contains("DiscussionTaskTest"))
}
```

Prompt Segment
The following logic is injected into the orchestrator's system prompt:
Discussion - Directly answer questions or provide insights using the LLM. Reading files is optional and can be included if relevant to the inquiry.
* Specify the questions and the goal of the inquiry.
* Optionally, list input files (supports glob patterns) to be examined.
* User response/feedback and iteration are supported.