Introduction

CLIAI is a completely free and open-source command-line AI assistant that helps you with terminal commands, system administration, and general questions.

Key Features

  • 🔒 Privacy-First: Local AI processing with Ollama - your data never leaves your machine
  • 🔑 Bring Your Own Key: Use your own OpenAI, Anthropic, or other LLM API keys
  • 🆓 Completely Free: No subscriptions, no hidden costs - 100% open source
  • 🛡️ Safety-Focused: Multi-level command validation and safety checks
  • ⚡ Fast & Reliable: Built-in performance monitoring and circuit breakers

Why CLIAI?

CLIAI bridges the gap between natural language and command-line operations. Instead of searching documentation or Stack Overflow, simply ask CLIAI what you want to do, and it will generate the appropriate command with explanations.

Supported Platforms

  • Linux (x86_64, ARM64)
  • macOS (Intel, Apple Silicon)
  • Windows (x86_64)

Installation

Quick Install

One-Line Install (Linux/macOS)

curl -fsSL https://raw.githubusercontent.com/bytestrix/cliai/main/install.sh | bash

Cargo (All Platforms)

cargo install cliai

Package Managers

Arch Linux (AUR)

yay -S cliai

Ubuntu/Debian

wget https://github.com/bytestrix/cliai/releases/latest/download/cliai.deb
sudo dpkg -i cliai.deb

macOS (Homebrew)

brew tap bytestrix/tap
brew install cliai

Windows (Chocolatey)

choco install cliai

From Source

git clone https://github.com/bytestrix/cliai.git
cd cliai
cargo build --release
sudo cp target/release/cliai /usr/local/bin/

Prerequisites

For local AI processing (recommended):

  • Ollama installed and running

For cloud AI:

  • OpenAI API key, or
  • Anthropic API key

Quick Start

Setup

Option 1: Local AI (Ollama)

# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Pull a model
ollama pull mistral

# Configure CLIAI
cliai select mistral

Option 2: Cloud AI

# Set your API key
cliai set-key openai sk-your-key-here

# Or use Anthropic
cliai set-key anthropic your-key-here

# Select model
cliai select gpt-4

First Commands

# Ask for help
cliai "how do I list all files including hidden ones?"

# System administration
cliai "check disk usage"

# File operations
cliai "find all Python files modified in the last week"

# Git operations
cliai "show me the last 5 commits"

Custom Prefix

Set a custom command prefix:

cliai set-prefix ai
# Now use: ai "your question"

Configuration

Configuration File

CLIAI stores configuration in ~/.config/cliai/config.toml:

model = "mistral"
provider = "ollama"
auto_execute = false
dry_run = false
safety_level = "Medium"
context_timeout = 5000
ollama_url = "http://localhost:11434"
prefix = "cliai"
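
These fields map onto a plain Rust struct. As a rough sketch (the actual `Config` type in src/config.rs may use different names, types, and derives, e.g. serde for TOML parsing):

```rust
// Sketch only: field names mirror the config file above; the real CLIAI
// struct may differ in types and derives.
#[derive(Debug, Clone, PartialEq)]
pub struct Config {
    pub model: String,
    pub provider: String,
    pub auto_execute: bool,
    pub dry_run: bool,
    pub safety_level: String,
    pub context_timeout: u64, // milliseconds
    pub ollama_url: String,
    pub prefix: String,
}

impl Default for Config {
    // Defaults taken from the sample file above.
    fn default() -> Self {
        Config {
            model: "mistral".to_string(),
            provider: "ollama".to_string(),
            auto_execute: false,
            dry_run: false,
            safety_level: "Medium".to_string(),
            context_timeout: 5000,
            ollama_url: "http://localhost:11434".to_string(),
            prefix: "cliai".to_string(),
        }
    }
}
```

Editing the file by hand and running `cliai config` to verify is equivalent to using the commands below.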

Commands

Model Management

# List available models
cliai list-models

# Select a model
cliai select mistral

# List providers
cliai list-providers

API Key Management

# Set API key
cliai set-key openai sk-...

# Test connection
cliai test-key openai

# Remove key
cliai remove-key openai

Safety Settings

# Set safety level
cliai safety-level high    # Maximum safety
cliai safety-level medium  # Balanced (default)
cliai safety-level low     # Minimal checks

# Enable auto-execution
cliai auto-execute --enable

# Enable dry-run mode
cliai dry-run --enable

Other Settings

# View current configuration
cliai config

# Clear chat history
cliai clear

# Check provider status
cliai provider-status

# View performance metrics
cliai performance-status

Basic Commands

Command Syntax

cliai "your question or request"

Examples

File Operations

cliai "list all files in current directory"
cliai "find files larger than 100MB"
cliai "compress this folder"
cliai "extract archive.tar.gz"

System Information

cliai "check disk usage"
cliai "show memory usage"
cliai "list running processes"
cliai "what's my IP address?"

Git Operations

cliai "show git status"
cliai "create a new branch"
cliai "undo last commit"
cliai "show commit history"

Network Operations

cliai "test internet connection"
cliai "download file from URL"
cliai "check if port 8080 is open"

Package Management

cliai "install package nginx"
cliai "update all packages"
cliai "search for package python"

AI Providers

CLIAI supports multiple AI providers for maximum flexibility.

Ollama (Local)

Recommended for privacy and offline use

# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Pull models
ollama pull mistral
ollama pull llama2
ollama pull codellama

# Configure CLIAI
cliai select mistral

Advantages:

  • Complete privacy
  • No API costs
  • Works offline
  • Fast responses

OpenAI

# Set API key
cliai set-key openai sk-your-key-here

# Select model
cliai select gpt-4
cliai select gpt-3.5-turbo

Available Models:

  • gpt-4
  • gpt-4-turbo
  • gpt-3.5-turbo

Anthropic

# Set API key
cliai set-key anthropic your-key-here

# Select model
cliai select claude-3-sonnet
cliai select claude-3-haiku
cliai select claude-3-opus

Available Models:

  • claude-3-opus
  • claude-3-sonnet
  • claude-3-haiku

Provider Fallback

CLIAI automatically falls back to alternative providers if the primary one fails:

  1. Try local Ollama first (if configured)
  2. Fall back to cloud provider (if API key set)
  3. Circuit breaker prevents repeated failures
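
The circuit-breaker step can be sketched in a few lines of Rust. This is illustrative only; the threshold and field names are assumptions, not CLIAI's actual implementation:

```rust
// After `threshold` consecutive failures the breaker opens and the
// orchestrator skips this provider in favour of the next one in the chain.
struct CircuitBreaker {
    consecutive_failures: u32,
    threshold: u32,
}

impl CircuitBreaker {
    fn new(threshold: u32) -> Self {
        CircuitBreaker { consecutive_failures: 0, threshold }
    }

    fn is_open(&self) -> bool {
        self.consecutive_failures >= self.threshold
    }

    fn record(&mut self, success: bool) {
        if success {
            self.consecutive_failures = 0; // any success closes the breaker
        } else {
            self.consecutive_failures += 1;
        }
    }
}
```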

Safety Features

CLIAI includes multiple layers of safety to protect your system.

Safety Levels

High (Maximum safety)

  • Blocks dangerous operations
  • Requires confirmation for system changes
  • Maximum validation

Medium (Default)

  • Balanced safety and convenience
  • Confirms risky operations
  • Standard validation

Low (Experienced users)

  • Minimal safety checks
  • Allows most operations
  • Basic validation only

Command Validation

Every command goes through multiple validation layers:

  1. Syntax Checking: Validates shell syntax
  2. Placeholder Detection: Catches AI hallucinations
  3. Risk Assessment: Categorizes command danger level
  4. Pattern Matching: Detects known dangerous patterns
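
Layer 2, placeholder detection, can be sketched as a scan for common template markers. The marker list below is an assumption for illustration, not CLIAI's actual rule set:

```rust
// Reject commands where the model left an unfilled template slot
// (e.g. "rm path/to/file" or "ssh <your-host>"). Marker list is illustrative.
fn has_placeholder(command: &str) -> bool {
    const MARKERS: &[&str] = &["<your", "path/to/", "your-file", "YOUR_", "{{"];
    let lower = command.to_lowercase();
    MARKERS.iter().any(|m| lower.contains(&m.to_lowercase()))
}
```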

Dangerous Patterns Detected

  • rm -rf / - System deletion
  • dd if=/dev/zero - Disk wiping
  • chmod 777 - Insecure permissions
  • :(){ :|:& };: - Fork bombs
  • curl | sh - Unverified script execution
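
The pattern-matching layer can be sketched as a substring scan over the list above (a deliberate simplification; the real validator likely uses richer rules than plain substrings):

```rust
// Flags commands matching the known-dangerous patterns listed above.
// Substring matching is a simplification for illustration only.
fn is_dangerous(command: &str) -> bool {
    const PATTERNS: &[&str] = &[
        "rm -rf /",
        "dd if=/dev/zero",
        "chmod 777",
        ":(){ :|:& };:", // fork bomb
    ];
    PATTERNS.iter().any(|p| command.contains(p))
        // `curl … | sh` is two fragments, so check them separately
        || (command.contains("curl") && command.contains("| sh"))
}
```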

Execution Modes

Manual (Default)

Commands are displayed for you to review and execute manually.

Auto-Execute

cliai auto-execute --enable

Safe commands execute automatically. Dangerous commands still require confirmation.

Dry-Run

cliai dry-run --enable

Shows what would be executed without actually running commands.
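
One plausible way the three modes combine, sketched under the assumption that dry-run takes precedence and auto-execute is gated on the safety verdict:

```rust
// Illustrative decision logic, not CLIAI's actual code.
#[derive(Debug, PartialEq)]
enum Action {
    ShowOnly,        // dry-run: print the command, never run it
    Execute,         // auto-execute enabled and command judged safe
    AskConfirmation, // manual mode, or a risky command
}

fn decide(dry_run: bool, auto_execute: bool, is_safe: bool) -> Action {
    if dry_run {
        Action::ShowOnly
    } else if auto_execute && is_safe {
        Action::Execute
    } else {
        Action::AskConfirmation
    }
}
```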

Architecture

CLIAI follows a modular architecture designed for reliability and extensibility.

Core Components

Orchestrator

Central coordinator managing AI providers and request routing.

Intent Classifier

Determines the type of request (command, question, explanation).

Context Gatherer

Collects system information for better responses.

Command Validator

Multi-layer validation with security checks.

Execution Engine

Safe command execution with multiple modes.

Performance Monitor

Tracks metrics and system health.

Circuit Breakers

Automatic failover between providers.
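
The components above suggest a simple routing shape: the orchestrator tries providers in priority order until one answers. A minimal sketch (the trait and type names are illustrative, not CLIAI's actual API):

```rust
// Hypothetical provider abstraction; the real types live in src/providers.rs
// and src/agents/.
trait Provider {
    fn name(&self) -> &str;
    fn complete(&self, prompt: &str) -> Result<String, String>;
}

struct Orchestrator {
    providers: Vec<Box<dyn Provider>>, // priority order: local first
}

impl Orchestrator {
    // Try each provider in turn; return the first successful answer.
    fn ask(&self, prompt: &str) -> Result<String, String> {
        for provider in &self.providers {
            if let Ok(answer) = provider.complete(prompt) {
                return Ok(answer);
            }
        }
        Err("all providers failed".to_string())
    }
}
```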

Request Flow

User Input
    ↓
Intent Classification
    ↓
Context Gathering
    ↓
AI Provider (Ollama/OpenAI/Anthropic)
    ↓
Command Validation
    ↓
Safety Checks
    ↓
Execution (Manual/Auto/Dry-run)

Provider Management

CLIAI uses a sophisticated provider management system:

  • Priority-based routing: Local first, cloud fallback
  • Circuit breakers: Prevent repeated failures
  • Retry logic: Automatic retries with backoff
  • Performance monitoring: Track response times
  • Health checks: Verify provider availability
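
The retry-with-backoff behavior can be sketched as follows (the delays and attempt counts are illustrative assumptions, not CLIAI's tuning):

```rust
use std::time::Duration;

// Exponential backoff: 100 ms, 200 ms, 400 ms, ... (illustrative values).
fn backoff_delay(attempt: u32) -> Duration {
    Duration::from_millis(100 * 2u64.pow(attempt))
}

// Retry a fallible provider call up to `max_attempts` times, sleeping
// between attempts; returns the last error if every attempt fails.
fn call_with_retry<F>(mut call: F, max_attempts: u32) -> Result<String, String>
where
    F: FnMut() -> Result<String, String>,
{
    let mut last_err = String::from("no attempts made");
    for attempt in 0..max_attempts {
        match call() {
            Ok(response) => return Ok(response),
            Err(e) => {
                last_err = e;
                std::thread::sleep(backoff_delay(attempt));
            }
        }
    }
    Err(last_err)
}
```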

API Documentation

For detailed Rust API documentation, see:

Rust API Docs

Key Modules

  • agents - AI orchestration and provider management
  • config - Configuration management
  • validation - Command validation and safety
  • execution - Command execution engine
  • providers - AI provider implementations
  • performance - Performance monitoring

Contributing to CLIAI

Thank you for your interest in contributing to CLIAI! We welcome contributions from the community and are excited to see what you'll bring to the project.

🚀 Getting Started

Prerequisites

  • Rust 1.70 or later
  • Git
  • Ollama (for testing AI functionality)

Development Setup

  1. Fork and Clone

    git clone https://github.com/yourusername/cliai.git
    cd cliai
    
  2. Install Dependencies

    cargo build
    
  3. Set up Ollama (for testing)

    # Install Ollama
    curl -fsSL https://ollama.ai/install.sh | sh
    
    # Pull a test model
    ollama pull mistral
    
  4. Run Tests

    # Unit tests
    cargo test
    
    # Integration tests with AI
    cargo run -- test --quick
    

🎯 How to Contribute

Reporting Issues

  • Use the GitHub Issues page
  • Search existing issues before creating a new one
  • Include detailed reproduction steps
  • Provide system information (OS, Rust version, etc.)

Suggesting Features

  • Open a GitHub Discussion first
  • Describe the use case and expected behavior
  • Consider implementation complexity and maintenance burden

Code Contributions

  1. Create a Feature Branch

    git checkout -b feature/your-feature-name
    
  2. Make Your Changes

    • Follow the existing code style
    • Add tests for new functionality
    • Update documentation as needed
  3. Test Your Changes

    # Run all tests
    cargo test
    
    # Test with real AI (requires Ollama)
    cargo run -- test --categories "your-test-category"
    
    # Check formatting
    cargo fmt --check
    
    # Run clippy
    cargo clippy -- -D warnings
    
  4. Commit Your Changes

    git add .
    git commit -m "feat: add amazing new feature"
    
  5. Push and Create PR

    git push origin feature/your-feature-name
    

๐Ÿ“ Code Style Guidelines

Rust Code Style

  • Use cargo fmt for consistent formatting
  • Follow Rust naming conventions
  • Add documentation comments for public APIs
  • Use clippy suggestions to improve code quality

Commit Messages

We follow the Conventional Commits specification:

  • feat: - New features
  • fix: - Bug fixes
  • docs: - Documentation changes
  • style: - Code style changes (formatting, etc.)
  • refactor: - Code refactoring
  • test: - Adding or updating tests
  • chore: - Maintenance tasks

Examples:

feat: add support for custom AI providers
fix: resolve command validation edge case
docs: update installation instructions
test: add integration tests for file operations

๐Ÿ—๏ธ Project Structure

Understanding the codebase:

cliai/
├── src/
│   ├── main.rs              # CLI interface and entry point
│   ├── lib.rs               # Library exports
│   ├── agents/              # AI orchestration
│   │   ├── mod.rs           # Main orchestrator
│   │   └── profiles.rs      # AI model profiles
│   ├── config.rs            # Configuration management
│   ├── context.rs           # System context gathering
│   ├── execution.rs         # Command execution
│   ├── validation.rs        # Command validation
│   ├── providers.rs         # AI provider implementations
│   ├── history.rs           # Chat history
│   ├── performance.rs       # Performance monitoring
│   ├── error_handling.rs    # Error handling
│   ├── logging.rs           # Privacy-preserving logging
│   └── test_suite.rs        # Testing framework
├── Cargo.toml               # Dependencies and metadata
├── README.md                # Project documentation
└── CONTRIBUTING.md          # This file

🧪 Testing Guidelines

Unit Tests

  • Write tests for all public functions
  • Use descriptive test names
  • Test both success and error cases
  • Mock external dependencies when possible

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_command_validation_success() {
        // Test implementation
    }

    #[test]
    fn test_command_validation_failure() {
        // Test implementation
    }
}

Integration Tests

  • Test real AI interactions when possible
  • Use the built-in test suite for comprehensive testing
  • Add new test categories for new features

Performance Tests

  • Monitor performance impact of changes
  • Use the built-in performance monitoring
  • Add benchmarks for critical paths

🔒 Security Considerations

Command Safety

  • All command generation must go through validation
  • New validation rules should be thoroughly tested
  • Consider security implications of new features

Privacy Protection

  • Never log user commands or prompts in production
  • Ensure debug mode requires explicit consent
  • Review data handling in new features

AI Provider Security

  • Validate all AI responses
  • Implement proper error handling for AI failures
  • Consider rate limiting and abuse prevention

📚 Documentation

Code Documentation

  • Document all public APIs with rustdoc comments
  • Include examples in documentation
  • Explain complex algorithms and design decisions

User Documentation

  • Update README.md for user-facing changes
  • Add examples for new features
  • Update configuration documentation

🎉 Recognition

Contributors will be recognized in:

  • GitHub contributors list
  • Release notes for significant contributions
  • README acknowledgments section

โ“ Questions?

  • Open a GitHub Discussion
  • Check existing issues and discussions
  • Reach out to maintainers

📋 Pull Request Checklist

Before submitting your PR, ensure:

  • Code follows project style guidelines
  • All tests pass (cargo test)
  • New functionality includes tests
  • Documentation is updated
  • Commit messages follow conventional format
  • No sensitive information is included
  • Performance impact is considered
  • Security implications are reviewed

Thank you for contributing to CLIAI! 🚀