Library Stable

Embedding Cognotik

Use the com.cognotik:webapp library as an embedded engine to run "headless" AI agents for complex coding tasks, refactoring, or documentation generation, all without a UI.

Key Capabilities

Headless Execution

Run agents without a UI using serverless = true. Perfect for background jobs, scripts, and server-side processing.

CI/CD Integration

Embed agents in GitHub Actions or Jenkins to perform code reviews, auto-fixes, or documentation generation on every push.

Gradle Plugin

Wrap agent tasks in custom Gradle tasks to automate boilerplate generation as part of your build process.
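As a sketch of the Gradle approach, a custom task can run the harness during the build. The task name and structure here are illustrative assumptions, not a published plugin API; it presumes the Cognotik dependency is on the build classpath and configures the harness as shown in section 2 below.

```kotlin
// build.gradle.kts — hypothetical sketch of wrapping a headless agent run in a task
import com.simiacryptus.cognotik.util.UnifiedHarness

tasks.register("agentDocs") {
    group = "documentation"
    description = "Runs a headless Cognotik agent to generate documentation"
    doLast {
        // Configure models and modelInstanceFn as shown in section 2 below
        val harness = UnifiedHarness(serverless = true, openBrowser = false)
        harness.start()
        // ... execute a documentation plan or task via the harness API
    }
}
```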

UnifiedHarness API

A simple entry point to configure models, inject API keys, and execute plans or individual tasks programmatically.

1. Add Dependency

Add the dependency to your build.gradle.kts:

```kotlin
repositories {
    mavenCentral()
}

dependencies {
    // The core webapp library contains the Harness and Planning engines
    implementation("com.cognotik:webapp:2.0.39")
    // You may need an SLF4J binding for logging output
    implementation("org.slf4j:slf4j-simple:2.0.9")
}
```

2. Initialize UnifiedHarness

The entry point for embedded execution is the UnifiedHarness class. Initialize it with serverless = true for CI/CD or script environments:

```kotlin
import com.simiacryptus.cognotik.util.UnifiedHarness
import com.simiacryptus.cognotik.chat.model.OpenAIModels

val harness = UnifiedHarness(
    serverless = true,
    openBrowser = false,

    // Define the models you want to use
    smartModel = OpenAIModels.GPT4o,
    fastModel = OpenAIModels.GPT35Turbo,

    // Inject API keys from environment variables
    modelInstanceFn = { apiChatModel ->
        val provider = apiChatModel.provider
        val model = apiChatModel.model

        // This example assumes OpenAI; branch on `provider` to
        // support other providers (Anthropic, etc.)
        val apiKey = System.getenv("OPENAI_API_KEY")
            ?: throw RuntimeException("Missing OPENAI_API_KEY env var")

        model.instance(key = apiKey)
    }
)

// Initialize platform services (loads Task definitions, etc.)
harness.start()
```

Important: The modelInstanceFn parameter is required to inject API keys programmatically. The harness does not load from local .config files when this is used.
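To support multiple providers, the lookup inside modelInstanceFn can branch on the provider. A minimal standalone sketch; the helper name and the env-var naming convention are assumptions, not part of the Cognotik API:

```kotlin
// Hypothetical helper: map a provider name to an environment variable and fetch the key.
// Pass a custom map for testing; defaults to the real process environment.
fun apiKeyFor(
    providerName: String,
    env: Map<String, String> = System.getenv()
): String {
    val varName = when (providerName.lowercase()) {
        "openai" -> "OPENAI_API_KEY"
        "anthropic" -> "ANTHROPIC_API_KEY"
        else -> "${providerName.uppercase()}_API_KEY" // assumed convention
    }
    return env[varName]
        ?: throw IllegalStateException("Missing $varName env var")
}
```

Inside modelInstanceFn you would then derive the key from the provider value before calling model.instance(key = ...).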

🔧 Troubleshooting & Best Practices
1. Environment Variables: Ensure API keys are available in the environment where the JAR runs. The UnifiedHarness does not load from local .config files when a custom modelInstanceFn is used.
2. Context Window: When working on large codebases, select a model with a large context window (e.g., gpt-4-turbo or claude-3-opus).
3. Logging: Cognotik uses SLF4J. Configure a simple binding (like slf4j-simple) to see the agent's "thought process" in your console logs.
4. Concurrency: In serverless mode, the harness runs synchronously, blocking the thread until completion. This is usually what you want for CI/CD.
5. Artifacts: The agent writes a results.md and a usage.json in the workspace. Archive these in your CI pipeline to review what the agent did.
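The artifact check in the last tip can be automated so a CI step fails fast if a run produced nothing. A small sketch; the workspace path is an assumption to adapt to your setup:

```kotlin
import java.nio.file.Files
import java.nio.file.Path

// Returns the names of expected agent artifacts (per the tips above)
// that are missing from the given workspace directory.
fun missingArtifacts(workspace: Path): List<String> =
    listOf("results.md", "usage.json")
        .filter { !Files.exists(workspace.resolve(it)) }
```

Call this after harness.start() returns and fail the pipeline if the returned list is non-empty.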