Embedding Cognotik
Use the com.cognotik:webapp library as an embedded engine.
Run "Headless" AI agents for complex coding tasks, refactoring, or documentation generation without a UI.
Key Capabilities
Headless Execution
Run agents without a UI using serverless = true. Perfect for background jobs, scripts, and server-side processing.
CI/CD Integration
Embed agents in GitHub Actions or Jenkins to perform code reviews, auto-fixes, or documentation generation on every push.
Gradle Plugin
Wrap agent tasks in custom Gradle tasks to automate boilerplate generation as part of your build process.
UnifiedHarness API
A simple entry point to configure models, inject API keys, and execute plans or individual tasks programmatically.
1. Add Dependency
Add the dependency to your build.gradle.kts:
repositories {
mavenCentral()
}
dependencies {
// The core webapp library contains the Harness and Planning engines
implementation("com.cognotik:webapp:2.0.39")
// You may need SLF4J for logging
implementation("org.slf4j:slf4j-simple:2.0.9")
}
2. Initialize UnifiedHarness
The entry point for embedded execution is the UnifiedHarness class. Initialize it with serverless = true for CI/CD or script environments:
import com.simiacryptus.cognotik.util.UnifiedHarness
import com.simiacryptus.cognotik.chat.model.OpenAIModels
val harness = UnifiedHarness(
serverless = true,
openBrowser = false,
// Define the models you want to use
smartModel = OpenAIModels.GPT4o,
fastModel = OpenAIModels.GPT35Turbo,
// Inject API Keys from Environment Variables
modelInstanceFn = { apiChatModel ->
val provider = apiChatModel.provider
val model = apiChatModel.model
// Fetch key based on provider (OpenAI, Anthropic, etc.)
val apiKey = System.getenv("OPENAI_API_KEY")
?: throw RuntimeException("Missing OPENAI_API_KEY env var")
model.instance(key = apiKey)
}
)
// Initialize platform services (loads Task definitions, etc.)
harness.start()
Important: The modelInstanceFn parameter is required to inject API keys programmatically. The harness does not load from local .config files when this is used.
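The single-key lambda above can be extended to select a key per provider. Below is a minimal sketch of that idea; the exact shape of the provider property and the provider-name strings are assumptions to verify against your Cognotik version:

```kotlin
// Sketch: choose an API key by provider before instantiating the model.
// The provider names compared here are illustrative assumptions.
modelInstanceFn = { apiChatModel ->
    val apiKey = when (apiChatModel.provider.toString()) {
        "OpenAI" -> System.getenv("OPENAI_API_KEY")
        "Anthropic" -> System.getenv("ANTHROPIC_API_KEY")
        else -> System.getenv("LLM_API_KEY")
    } ?: throw RuntimeException("Missing API key for ${apiChatModel.provider}")
    apiChatModel.model.instance(key = apiKey)
}
```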
Scenario A: Full Agent Planning (The "Manager")
Use this approach when you have a high-level goal and want the AI to figure out the steps, break them down, and execute them.
import com.simiacryptus.cognotik.plan.cognitive.CognitiveModeConfig
import com.simiacryptus.cognotik.plan.cognitive.CognitiveModeType
import java.io.File
fun runAgenticRefactor(projectDir: File, instruction: String) {
// 1. Configure the Strategy
val strategy = CognitiveModeConfig(
type = CognitiveModeType.Waterfall, // or Auto_Plan, AdaptivePlanning
name = "RefactorAgent"
)
// 2. Execute the Plan
harness.runPlan(
prompt = instruction,
cognitiveSettings = strategy,
workspace = projectDir, // The agent will read/write files here
timeoutMinutes = 60,
autoFix = true // Allow agent to fix its own errors without prompting
)
println("Agent execution complete. Check results.md in the workspace.")
}
// Example Call
runAgenticRefactor(
File("./my-project"),
"Analyze the User class and convert all public fields to private with getters/setters."
)
Key Configuration Options
- workspace: If null, a temporary directory is created. For CI/CD, pass File(".") to modify the current repository.
- autoFix: Set to true for unattended execution. If false, the agent may hang waiting for user confirmation.
- cognitiveSettings: Waterfall plans everything first, then executes (good for predictable tasks). AdaptivePlanning loops through Think/Act cycles (good for research or debugging).
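Putting these options together, an unattended CI/CD invocation might look like the following sketch (it reuses the runPlan call from Scenario A; the prompt is illustrative):

```kotlin
// Unattended CI/CD run: operate on the checked-out repository, never wait for a human.
harness.runPlan(
    prompt = "Fix all compiler warnings in src/main.",
    cognitiveSettings = CognitiveModeConfig(type = CognitiveModeType.Waterfall),
    workspace = File("."),   // modify the repository the CI job checked out
    timeoutMinutes = 30,
    autoFix = true           // required for unattended runs; false may block on confirmation
)
```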
Scenario B: Single Task Execution (The "Tool")
Use this approach when you want to use a specific Cognotik tool (like the Crawler or File Modifier) as a function call within your own code, skipping the high-level planning.
import com.simiacryptus.cognotik.plan.tools.TaskType
import com.simiacryptus.cognotik.plan.tools.file.FileModificationTask.FileModificationTaskExecutionConfigData
// Assumed to be nested alongside the execution config; verify the package path
import com.simiacryptus.cognotik.plan.tools.file.FileModificationTask.FileModificationTaskTypeConfig
fun generateReadme(projectDir: File) {
// 1. Define Static Configuration (The "Tool" settings)
val typeConfig = FileModificationTaskTypeConfig(
name = "ReadmeGenerator"
)
// 2. Define Runtime Input (The "Job" settings)
val executionConfig = FileModificationTaskExecutionConfigData(
files = listOf("README.md"),
modifications = "Read the source code in src/main and generate a comprehensive README.md.",
extractContent = true,
task_description = "Generate Documentation"
)
// 3. Run
harness.runTask(
taskType = TaskType.FileModificationTask,
typeConfig = typeConfig,
executionConfig = executionConfig,
workspace = projectDir,
autoFix = true
)
}
Common Task Types
Every task requires two configuration components: a TaskTypeConfig (static settings) and a TaskExecutionConfig (runtime inputs). The execution configs for the most common task types are shown below.
FileModificationTask
Used for refactoring, bug fixing, or feature implementation.
FileModificationTaskExecutionConfigData(
files = listOf("src/Main.kt"),
related_files = listOf("src/Utils.kt"),
modifications = "Refactor the main loop.",
extractContent = true
)
FileAppendTask
Optimized for adding content to the end of a file.
FileAppendTaskExecutionConfigData(
file = "CHANGELOG.md",
append_content = "## [1.0.1] - Fixed login bug",
related_files = listOf("src/auth/Login.kt")
)
ReadDocumentsTask
Pure analysis task. Reads files and answers questions without modifying the filesystem.
ReadDocumentsTaskExecutionConfigData(
input_files = listOf("src/**/*.java"),
inquiry_questions = listOf(
"How is authentication handled?"
),
inquiry_goal = "Generate security docs"
)
FileSearchTask
Performs grep-like searches (literal or regex) and returns line numbers with context.
FileSearchTaskExecutionConfigData(
search_pattern = "TODO|FIXME",
is_regex = true,
input_files = listOf("**/*.kt"),
context_lines = 2
)
RunToolTask
Executes external CLI tools configured in the user settings.
RunToolTaskExecutionConfigData(
tool = "python",
args = listOf("scripts/verify.py", "--verbose"),
workingDir = "."
)
SubPlanTask
Spawns a nested agent to solve a complex goal with its own cognitive mode.
SubPlanTaskExecutionConfigData(
planning_goal = "Research JSON library and implement wrapper.",
context = listOf("Must be Java 11 compatible")
)
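Each of the execution configs above plugs into harness.runTask the same way as in Scenario B. For example, a TODO audit might look like this sketch; note that FileSearchTaskTypeConfig and the enum constant TaskType.FileSearchTask are inferred by analogy with the FileModificationTask example and should be verified against your version:

```kotlin
// Sketch: run a single grep-like search task.
// Type/enum names are inferred by analogy with FileModificationTask above.
harness.runTask(
    taskType = TaskType.FileSearchTask,
    typeConfig = FileSearchTaskTypeConfig(name = "TodoAudit"),
    executionConfig = FileSearchTaskExecutionConfigData(
        search_pattern = "TODO|FIXME",
        is_regex = true,
        input_files = listOf("**/*.kt"),
        context_lines = 2
    ),
    workspace = File("."),
    autoFix = true
)
```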
A. As a Gradle Plugin
Wrap the harness in a custom Gradle Task to add AI capabilities to your build.
// buildSrc/src/main/kotlin/AiRefactorTask.kt
import org.gradle.api.DefaultTask
import org.gradle.api.tasks.Input
import org.gradle.api.tasks.TaskAction
import com.simiacryptus.cognotik.util.UnifiedHarness
import com.simiacryptus.cognotik.plan.cognitive.CognitiveModeConfig
import com.simiacryptus.cognotik.plan.cognitive.CognitiveModeType
abstract class AiRefactorTask : DefaultTask() {
@get:Input
abstract var instruction: String
@TaskAction
fun run() {
val harness = UnifiedHarness(serverless = true)
harness.start()
harness.runPlan(
prompt = instruction,
cognitiveSettings = CognitiveModeConfig(type = CognitiveModeType.Waterfall),
workspace = project.projectDir
)
harness.stop()
}
}
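The task class can then be registered in your build script. tasks.register is the standard Gradle API; the task name and instruction below are illustrative:

```kotlin
// build.gradle.kts -- register the custom task and supply the instruction
tasks.register<AiRefactorTask>("aiRefactor") {
    group = "ai"
    description = "Runs a Cognotik agent against this project"
    instruction = "Add KDoc comments to all public classes in src/main."
}
// Invoke with: ./gradlew aiRefactor
```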
B. As a GitHub Action
Create a Kotlin CLI application, build a "Fat JAR" (Shadow JAR), and use it in a workflow:
name: AI Code Reviewer
on: [workflow_dispatch]
jobs:
ai-fix:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Setup Java
uses: actions/setup-java@v3
with:
distribution: 'temurin'
java-version: '17'
- name: Run Cognotik Agent
env:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
run: |
java -jar cognotik-cli.jar \
--instruction "Review src/main for potential NPEs and fix them." \
--workspace .
- name: Create Pull Request
uses: peter-evans/create-pull-request@v5
with:
title: "AI Automated Fixes"
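The cognotik-cli.jar invoked above is your own thin wrapper around the harness. A minimal entry point might look like this sketch; the --instruction/--workspace flag parsing is hand-rolled and the harness calls mirror Scenario A:

```kotlin
// Sketch of the CLI entry point packaged as the Fat JAR used in the workflow.
// Flag parsing is deliberately minimal: flags must come in "--name value" pairs.
import com.simiacryptus.cognotik.util.UnifiedHarness
import com.simiacryptus.cognotik.plan.cognitive.CognitiveModeConfig
import com.simiacryptus.cognotik.plan.cognitive.CognitiveModeType
import java.io.File

fun main(args: Array<String>) {
    val opts = args.toList().chunked(2).associate { (k, v) -> k to v }
    val instruction = opts["--instruction"] ?: error("--instruction is required")
    val workspace = File(opts["--workspace"] ?: ".")
    val harness = UnifiedHarness(serverless = true, openBrowser = false)
    harness.start()
    try {
        harness.runPlan(
            prompt = instruction,
            cognitiveSettings = CognitiveModeConfig(type = CognitiveModeType.Waterfall),
            workspace = workspace,
            autoFix = true
        )
    } finally {
        harness.stop()
    }
}
```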
Advanced Configuration
When calling runPlan, you can inject a custom configuration lambda to control budget and safety limits:
harness.runPlan(
prompt = "...",
cognitiveSettings = CognitiveModeConfig(type = CognitiveModeType.Waterfall),
config = { session, workspace ->
OrchestrationConfig(
sessionId = session.sessionId,
workingDir = workspace.absolutePath,
// Model Selection
defaultSmartModel = OpenAIModels.GPT4o.asApiChatModel(),
defaultFastModel = OpenAIModels.GPT35Turbo.asApiChatModel(),
// Safety Limits
budget = 2.00, // Max $2.00 USD spend
maxIterations = 15, // Max planning loops
maxTasksPerIteration = 3,
// Behavior
autoFix = true,
temperature = 0.1 // Low temperature for deterministic code
)
}
)
Operational Notes
1. Environment Variables: Ensure API keys are available in the environment where the JAR runs. The UnifiedHarness does not load from local .config files when a custom modelInstanceFn is used.
2. Context Window: When working on large codebases, select a model with a large context window (e.g., gpt-4-turbo or claude-3-opus).
3. Logging: Cognotik uses SLF4J. Configure a simple logger (such as slf4j-simple) to see the agent's "thought process" in your console logs.
4. Concurrency: In serverless mode, the harness runs synchronously, blocking the calling thread until completion. This is usually desired for CI/CD.
5. Artifacts: The agent writes a results.md and a usage.json to the workspace. Archive these in your CI pipeline to review what the agent did.