EthicalReasoningTask
A structured reasoning engine that deconstructs complex dilemmas through multiple ethical frameworks (Utilitarianism, Deontology, Virtue Ethics, etc.) to identify critical trade-offs and synthesize balanced recommendations.
Category: Reasoning
Model: GPT-4 Preferred
Output: Multi-Format Report
⚙️ TaskConfig.json
{
  "task_type": "EthicalReasoning",
  "ethical_dilemma": "Should we deploy an unvetted AI safety patch to prevent a theoretical breach?",
  "input_files": ["docs/security_policy.md", "logs/threat_model.json"],
  "stakeholders": [
    "End Users",
    "Security Team",
    "Company Shareholders"
  ],
  "ethical_frameworks": [
    "utilitarianism",
    "deontology"
  ],
  "context": "The breach is estimated at 5% probability but catastrophic impact."
}
👁️ Session UI Output
Synthesis & Recommendation
Conflict: Utilitarianism favors deployment to minimize aggregate risk, while Deontology warns against violating testing protocols.
Final Recommendation: Perform a 'Canary' deployment to 1% of users. This balances the duty of care (Deontology) with risk mitigation (Utilitarianism).
📥 Download: Report.pdf | Report.md
Live Results Showcase
Explore actual artifacts generated by the EthicalReasoningTask, including Markdown reports and analysis transcripts.
Execution Configuration
| Field | Type | Description |
|---|---|---|
| `ethical_dilemma`* | `String` | A clear description of the ethical problem or decision to be made. |
| `input_files` | `List<String>` | Optional glob patterns for files providing context for the analysis. |
| `stakeholders`* | `List<String>` | Individuals, groups, or entities affected by the decision. |
| `ethical_frameworks` | `List<String>` | Frameworks to apply. Options: `utilitarianism`, `deontology`, `virtue_ethics`, `care_ethics`, `rights_based`. |
| `context` | `String` | Optional background information or constraints. |

\* Required fields
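
For reference, the sketch below populates every field in the schema. The constructor shape for `ethical_dilemma`, `stakeholders`, and `ethical_frameworks` matches the Kotlin boilerplate further down; treating `input_files` and `context` as constructor parameters of `EthicalReasoningTaskExecutionConfigData` is an assumption inferred from the table above.

```kotlin
// Sketch: all five schema fields populated. input_files and context as
// constructor parameters are assumptions inferred from the config table.
val fullConfig = EthicalReasoningTaskExecutionConfigData(
    ethical_dilemma = "Should we deploy an unvetted AI safety patch to prevent a theoretical breach?",
    input_files = listOf("docs/security_policy.md", "logs/threat_model.json"),
    stakeholders = listOf("End Users", "Security Team", "Company Shareholders"),
    ethical_frameworks = listOf("utilitarianism", "deontology", "virtue_ethics"),
    context = "The breach is estimated at 5% probability but catastrophic impact."
)
```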
Task Process Lifecycle
- Initialization: Validates configuration and ensures dilemma and stakeholders are defined.
- Context Loading: Aggregates context from previous tasks and specified input files.
- Dilemma Analysis: An expert agent deconstructs the core conflict and maps stakeholder interests.
- Framework Application: Sequential analysis of the dilemma through each selected ethical lens (Utilitarianism, Deontology, etc.).
- Synthesis: A "Master Ethicist" agent compares framework recommendations and identifies critical trade-offs.
- Reporting: Generates a comprehensive multi-format report (Markdown, HTML, PDF).
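
Conceptually the lifecycle is a small sequential pipeline. The self-contained sketch below mirrors those six stages; every name and data shape in it is a hypothetical stand-in, not the library's actual internals (file reading and multi-format rendering are elided).

```kotlin
// Illustrative sketch of the six lifecycle stages. All identifiers here are
// hypothetical stand-ins for the task's internal agents.
data class FrameworkView(val framework: String, val recommendation: String)

fun lifecycleSketch(
    dilemma: String,
    stakeholders: List<String>,
    frameworks: List<String>,
    context: String = ""
): String {
    // 1. Initialization: validate that dilemma and stakeholders are defined.
    require(dilemma.isNotBlank()) { "ethical_dilemma is required" }
    require(stakeholders.isNotEmpty()) { "stakeholders are required" }
    // 2. Context Loading: aggregate background (input-file reading elided).
    val background = context.ifBlank { "no additional context" }
    // 3. Dilemma Analysis: map the core conflict against stakeholder interests.
    val analysis = "Conflict in \"$dilemma\" affecting ${stakeholders.joinToString()}"
    // 4. Framework Application: one pass per selected ethical lens.
    val views = frameworks.map { FrameworkView(it, "[$it] verdict on: $analysis ($background)") }
    // 5. Synthesis: compare the per-framework recommendations.
    val synthesis = views.joinToString("\n") { it.recommendation }
    // 6. Reporting: render the result (multi-format generation elided).
    return "## Synthesis & Recommendation\n$synthesis"
}
```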
Kotlin Boilerplate
// Direct Task Execution
val task = EthicalReasoningTask(
    orchestrationConfig = config,
    planTask = EthicalReasoningTaskExecutionConfigData(
        ethical_dilemma = "Use of user data for training",
        stakeholders = listOf("Users", "Developers"),
        ethical_frameworks = listOf("utilitarianism", "rights_based")
    )
)
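
This form wires the task into an existing `orchestrationConfig` and is presumably driven by the surrounding session; for unattended runs, the headless harness below feeds the same execution config through `UnifiedHarness`.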
Embedded Execution (Headless)
Invoke via UnifiedHarness for CI/CD or automated pipelines:
import java.io.File  // workspace directory

val harness = UnifiedHarness(serverless = true, ...)
harness.start()
harness.runTask(
    taskType = EthicalReasoningTask.EthicalReasoning,
    typeConfig = TaskTypeConfig(),
    executionConfig = EthicalReasoningTaskExecutionConfigData(
        ethical_dilemma = "Automated deployment of breaking changes",
        stakeholders = listOf("Customers", "DevOps Team"),
        ethical_frameworks = listOf("utilitarianism", "deontology")
    ),
    workspace = File("./workspace"),
    autoFix = true
)
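
Here `serverless = true` presumably starts the harness without the interactive session UI, which is what makes this invocation suitable for the CI/CD pipelines mentioned above; reports should land under the supplied `workspace` directory.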
Prompt Segment
The following logic is injected into the LLM context:
EthicalReasoning - Analyze a dilemma through multiple ethical frameworks
** Optionally specify input files (supports glob patterns) to provide context
** Files will be read and included in the analysis
** Specify the ethical dilemma and stakeholders
** Provides analysis from each framework's perspective
** Synthesizes findings into a balanced recommendation
** Highlights ethical trade-offs and points of conflict
** Useful for:
- AI safety and alignment
- Product and policy ethics
- Corporate governance